Article

A Novel Monarch Butterfly Optimization with Global Position Updating Operator for Large-Scale 0-1 Knapsack Problems

1 School of Information Engineering, Hebei GEO University, Shijiazhuang 050031, China
2 School of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China
3 Department of Computer Science and Technology, Ocean University of China, Qingdao 266100, China
4 Institute of Algorithm and Big Data Analysis, Northeast Normal University, Changchun 130117, China
5 School of Computer Science and Information Technology, Northeast Normal University, Changchun 130117, China
6 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(11), 1056; https://doi.org/10.3390/math7111056
Submission received: 18 September 2019 / Revised: 24 October 2019 / Accepted: 25 October 2019 / Published: 4 November 2019
(This article belongs to the Special Issue Evolutionary Computation)

Abstract

As a significant subset of the family of discrete optimization problems, the 0-1 knapsack problem (0-1 KP) has received considerable attention from researchers. Monarch butterfly optimization (MBO) is a recent metaheuristic algorithm inspired by the migration behavior of monarch butterflies; the original MBO was proposed for continuous optimization problems. This paper presents a novel monarch butterfly optimization with a global position updating operator (GMBO), which can address the 0-1 KP, a well-known NP-complete problem. The global position updating operator is incorporated to help all the monarch butterflies move rapidly towards the global best position. Moreover, a dichotomy encoding scheme is adopted to represent monarch butterflies for solving the 0-1 KP. In addition, a specific two-stage repair operator is used to repair infeasible solutions and to further optimize feasible solutions. Finally, orthogonal design (OD) is employed to find the most suitable parameters. Two sets of low-dimensional 0-1 KP instances and three kinds of 15 high-dimensional 0-1 KP instances are used to verify the ability of the proposed GMBO. An extensive comparative study of GMBO with five classical and two state-of-the-art algorithms is carried out. The experimental results clearly indicate that GMBO achieves better solutions on almost all the 0-1 KP instances and significantly outperforms the other algorithms.

1. Introduction

The 0-1 knapsack problem (0-1 KP) is a classical combinatorial optimization task and a challenging NP-complete problem; that is, it can be solved in polynomial time by a nondeterministic algorithm, but no polynomial-time exact algorithm has been found for it thus far. Like other NP-complete problems, such as vertex cover (VC), Hamiltonian circuit (HC), and set cover (SC), the 0-1 KP is intractable. The problem originated from resource allocation under financial constraints and has since been studied extensively in an array of scientific fields, such as combinatorial theory, computational complexity theory, applied mathematics, and computer science [1]. Additionally, it has many practical applications, such as project selection [2], investment decision-making [3], and the network interdiction problem [4]. Mathematically, the 0-1 KP can be described as follows:
$$\text{Maximize } f(x)=\sum_{i=1}^{n} p_i x_i \quad \text{subject to} \quad \sum_{i=1}^{n} w_i x_i \le C, \quad x_i \in \{0,1\},\ i=1,\dots,n,$$
where n is the number of items, and p_i and w_i denote the profit and weight of item i, respectively. C represents the total capacity of the knapsack. The 0-1 variable x_i indicates whether item i is put into the knapsack: if item i is selected, x_i = 1; otherwise, x_i = 0. The objective of the 0-1 KP is to maximize the total profit of the items placed in the knapsack, subject to the condition that the sum of the weights of the selected items does not exceed the given capacity C.
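Throughout this paper, the optimal values used as references are computed with the dynamic programming (DP) method. Purely as an illustration of how such a reference optimum for Equation (1) can be obtained, the following minimal C++ sketch implements the classical O(nC) table-filling recurrence; the function and variable names are ours and are not taken from the paper's implementation.

#include <vector>
#include <algorithm>

// Classical O(n*C) dynamic programming for the 0-1 knapsack problem.
// p[i] and w[i] are the profit and weight of item i; C is the knapsack capacity.
long long knapsackDP(const std::vector<long long>& p,
                     const std::vector<long long>& w,
                     long long C) {
    std::vector<long long> best(C + 1, 0);      // best[c] = maximum profit achievable with capacity c
    for (std::size_t i = 0; i < p.size(); ++i)
        for (long long c = C; c >= w[i]; --c)   // reverse order so each item is used at most once
            best[c] = std::max(best[c], best[c - w[i]] + p[i]);
    return best[C];
}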
Since the 0-1 KP was reported by Dantzig [5] in 1957, a large number of researchers in diverse areas have focused on addressing it. The main early methods were exact methods, such as the branch and bound method (B&B) [6] and the dynamic programming (DP) method [7]. The introduction of the core concept by Martello et al. [8] was a breakthrough. In addition, effective algorithms have been proposed for the 0-1 KP [9] and the multidimensional knapsack problem (MKP) [10]. With the rapid development of computational intelligence, modern metaheuristic algorithms have also been proposed for addressing the 0-1 KP. Related algorithms include the genetic algorithm (GA) [11], differential evolution (DE) [12], the shuffled frog-leaping algorithm (SFLA) [13], cuckoo search (CS) [14,15], artificial bee colony (ABC) [16,17], harmony search (HS) [17,18,19,20,21], and the bat algorithm (BA) [22,23]. Many other approaches have also been applied to the 0-1 KP. Zhang et al. converted the 0-1 KP into a directed graph by a network converting algorithm [24]. Kong et al. proposed an ingenious binary operator to solve the 0-1 KP with a simplified binary harmony search [20]. Zhou et al. presented a complex-valued encoding scheme for the 0-1 KP [22].
In recent years, inspired by natural phenomena, a variety of novel metaheuristic algorithms have been reported, e.g., the bat algorithm (BA) [23], the amoeboid organism algorithm [24], animal migration optimization (AMO) [25], the artificial plant optimization algorithm (APOA) [26], biogeography-based optimization (BBO) [27,28], human learning optimization (HLO) [29], krill herd (KH) [30,31,32], monarch butterfly optimization (MBO) [33], elephant herding optimization (EHO) [34], the invasive weed optimization (IWO) algorithm [35], the earthworm optimization algorithm (EWA) [36], the squirrel search algorithm (SSA) [37], the butterfly optimization algorithm (BOA) [38], the salp swarm algorithm (SSA) [39], the whale optimization algorithm (WOA) [40], and others. A review of swarm intelligence algorithms can be found in [41].
As a novel biologically inspired computing approach, MBO mimics the migration behavior of monarch butterflies as the seasons change. Related investigations [42,43] have demonstrated that the advantages of MBO lie in its simplicity, ease of implementation, and efficiency. In order to address the 0-1 KP, which is a constrained discrete combinatorial optimization problem, this paper presents a specially designed monarch butterfly optimization with a global position updating operator (GMBO). GMBO supplements and refines previous related work, namely binary monarch butterfly optimization (BMBO) and a chaotic MBO with Gaussian mutation (CMBO) [42]. Compared with BMBO and CMBO, the main differences and contributions of this paper are as follows.
Firstly, the original MBO was proposed to address continuous optimization problems, i.e., it cannot be directly applied in a discrete space. For this reason, a dichotomy encoding strategy [44] is employed in this paper. More specifically, each monarch butterfly individual is represented as a two-tuple consisting of a real-valued vector and a binary vector. Secondly, although BMBO demonstrated good performance in solving the 0-1 KP, it did not show a prominent advantage [42]; in other words, additional techniques can be combined with BMBO to improve its global optimization ability. Based on this, an efficient global position updating operator [16] is introduced to enhance the optimization ability and ensure rapid convergence. Thirdly, a novel two-stage repair operator [45,46], consisting of a greedy modification operator (GMO) and a greedy optimization operator (GOO), is adopted; the former repairs infeasible solutions while the latter optimizes feasible solutions during the search process. Fourthly, empirical studies reveal that evolutionary algorithms depend to a certain degree on the selection of parameters, and couplings between parameters exist; however, a suitable parameter combination for a particular problem was not analyzed in BMBO or CMBO. In order to verify the influence of four important parameters on the performance of GMBO, an orthogonal design (OD) [47] is applied, and appropriate parameter settings are examined and recommended. Fifthly, evolutionary algorithms generally yield approximate solutions to an NP-hard problem, and what matters most is to obtain high-quality approximate solutions that are close to the optimal solutions. In BMBO, the optimal solutions of the 0-1 KP instances were not provided, so it is difficult to judge the quality of the approximate solutions obtained. In GMBO, the optimal solutions of the 0-1 KP instances are calculated by a dynamic programming algorithm, and approximation ratios based on the best and the worst values are provided, which clearly reflect how close the approximate solutions are to the optimal solutions. In addition, unlike BMBO and CMBO, GMBO is evaluated with statistical methods, including Wilcoxon's rank-sum tests [48] at the 5% significance level; moreover, boxplots are used to visualize the experimental results from a statistical perspective.
The rest of the paper is organized as follows. Section 2 presents a snapshot of the original MBO, while Section 3 introduces GMBO for large-scale 0-1 KP in detail. Section 4 reports the outcomes of a series of simulation experiments and the comparative results. Finally, Section 5 concludes the paper and outlines some directions for further work.

2. Monarch Butterfly Optimization

Animal migration mainly involves long-distance movement, usually in groups, on a regular seasonal basis. MBO [33,43] is a population-based stochastic optimization algorithm that mimics the seasonal migration behavior of monarch butterflies in nature. The entire population is divided into two subpopulations, named subpopulation_1 and subpopulation_2, and the optimization process consists of two operators that act on subpopulation_1 and subpopulation_2, respectively. Information is exchanged between the individuals of subpopulation_1 and subpopulation_2 by the migration operator, while the butterfly adjusting operator passes the information of the best individual to the next generation. Additionally, Lévy flights [49,50] are introduced into MBO. The main steps of MBO are outlined as follows:
Step 1.
Initialize the parameters of MBO. Five basic parameters need to be set: the population size (NP), the ratio of monarch butterflies in subpopulation_1 (p), the migration period (peri), the butterfly adjusting rate (BAR), and the maximum walk step of the Lévy flights (Smax).
Step 2.
Initialize the population with NP randomly generated individuals according to a uniform distribution in the search space.
Step 3.
Sort the individuals according to their fitness in descending order (a maximization problem is assumed here). The better NP1 (= p × NP) individuals constitute subpopulation_1, and the remaining NP2 (= NP − NP1) individuals make up subpopulation_2.
Step 4.
The position updating of individuals in subpopulation_1 is determined by the migration operator. The specific procedure is described in Algorithm 1.
Algorithm 1. Migration Operator
Begin
  for i = 1 to NP1 (for all monarch butterflies in subpopulation_1)
    for j = 1 to d (for all elements of the ith monarch butterfly)
      r = rand × peri, where rand ~ U(0,1)
      if r ≤ p then
        x_{i,j} = x_{r1,j}, where r1 is an integer chosen uniformly from {1, 2, …, NP1}
      else
        x_{i,j} = x_{r2,j}, where r2 is an integer chosen uniformly from {1, 2, …, NP2}
      end if
    end for j
  end for i
End.
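For concreteness, a minimal C++ sketch of this migration operator on real-valued position vectors is given below. The names (pop1, pop2, migrationOperator) are illustrative only and are not taken from the paper's code, and subpopulation_1 is updated in place for brevity.

#include <vector>
#include <random>

// Sketch of the MBO migration operator (Algorithm 1).
// pop1/pop2 are subpopulation_1 and subpopulation_2; each row is one butterfly.
// peri is the migration period and p the migration ratio.
void migrationOperator(std::vector<std::vector<double>>& pop1,
                       const std::vector<std::vector<double>>& pop2,
                       double peri, double p, std::mt19937& gen) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::uniform_int_distribution<std::size_t> pick1(0, pop1.size() - 1);
    std::uniform_int_distribution<std::size_t> pick2(0, pop2.size() - 1);
    for (auto& butterfly : pop1)
        for (std::size_t j = 0; j < butterfly.size(); ++j) {
            double r = uni(gen) * peri;
            if (r <= p)
                butterfly[j] = pop1[pick1(gen)][j];   // inherit the element from subpopulation_1
            else
                butterfly[j] = pop2[pick2(gen)][j];   // inherit the element from subpopulation_2
        }
}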
Step 5.
The moving direction of the individuals in subpopulation_2 depends on the butterfly adjusting operator. The detailed procedure is shown in Algorithm 2.
Step 6.
Recombine two subpopulations into one population
Step 7.
If the termination criterion is already satisfied, output the best solution found, otherwise, go to Step 3.
where dx is calculated by implementing the Lévy flights. The Lévy flights, which originate from the Lévy distribution, are an effective random-walk model, especially in unexplored, high-dimensional search spaces. The step size of the Lévy flights is given by Equation (2).
$$\mathrm{StepSize} = \mathrm{ceil}\big(\mathrm{exprnd}(2 \cdot \mathit{Maxgen})\big)$$
where exprnd(x) returns a random number drawn from an exponential distribution with mean x, and ceil(x) rounds x up to the nearest integer greater than or equal to x. Maxgen is the maximum number of iterations.
The parameter ω is a weighting factor that is inversely proportional to the current generation.
Algorithm 2. Butterfly Adjusting Operator
Begin
  for i = 1 to NP2 (for all monarch butterflies in subpopulation_2)
    for j = 1 to d (for all elements of the ith monarch butterfly)
      if rand ≤ p then, where rand ~ U(0,1)
        x_{i,j} = x_{best,j}
      else
        x_{i,j} = x_{r3,j}, where r3 is an integer chosen uniformly from {1, 2, …, NP2}
        if rand > BAR then
          x_{i,j} = x_{i,j} + ω × (dx_j − 0.5)
        end if
      end if
    end for j
  end for i
End.
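A minimal C++ sketch of this butterfly adjusting operator follows. It is a simplification made for illustration: the full Lévy-flight displacement dx is replaced by a single exponentially distributed step with the mean of Equation (2), and ω is assumed to decay as Smax/t² with the generation counter t, a common choice for MBO. All names are ours.

#include <vector>
#include <random>
#include <cmath>

// Sketch of the butterfly adjusting operator (Algorithm 2).
// pop2 is subpopulation_2, xbest the global best butterfly, t the current generation.
void butterflyAdjusting(std::vector<std::vector<double>>& pop2,
                        const std::vector<double>& xbest,
                        double p, double BAR, double Smax,
                        int t, int maxGen, std::mt19937& gen) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::uniform_int_distribution<std::size_t> pick(0, pop2.size() - 1);
    std::exponential_distribution<double> expo(1.0 / (2.0 * maxGen));   // mean 2*Maxgen, cf. Eq. (2)
    double omega = Smax / static_cast<double>(t) / static_cast<double>(t); // assumed weighting factor Smax/t^2
    for (auto& butterfly : pop2)
        for (std::size_t j = 0; j < butterfly.size(); ++j) {
            if (uni(gen) <= p)
                butterfly[j] = xbest[j];                 // copy the element from the best butterfly
            else {
                butterfly[j] = pop2[pick(gen)][j];       // copy from a random butterfly in subpopulation_2
                if (uni(gen) > BAR) {
                    double dx = std::ceil(expo(gen));    // simplified Lévy-style walk step
                    butterfly[j] += omega * (dx - 0.5);
                }
            }
        }
}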

3. A Novel Monarch Butterfly Optimization with Global Position Updating Operator for the 0-1 KP

In this section, we give the detailed design procedure of the GMBO for the 0-1 KP. Firstly, a dichotomy encoding scheme [46] is used to represent each individual. Secondly, a global position updating operator [16] is embedded in GMBO in order to increase the probability of finding the optimal solution. Thirdly, the two-stage individual optimization method is employed, which successively tackles the infeasible solutions and then further improves the existing feasible solutions. Finally, the basic framework of GMBO for 0-1 KP is formed.

3.1. Dichotomy Encoding Scheme

The KP belongs to the category of discrete optimization: the solution space is a collection of discrete points rather than a contiguous region. For this reason, we should either redefine the evolutionary operations of MBO or directly apply the continuous algorithm to the discrete problem. In this paper, we prefer the latter for its simplicity of operation, comprehensibility, and generality.
As previously mentioned, each monarch butterfly individual is expressed as a two-tuple <X, Y>. Here, the real-valued vector X still constitutes the search space as in the original MBO and can be regarded as the phenotype, similar to the genetic algorithm, while the binary vector Y forms the solution space and can be seen as the genotype common in evolutionary algorithms. It should be noted that Y may be an infeasible solution because the 0-1 KP is a constrained optimization problem. We abbreviate the monarch butterfly population to MBOP. The structure of MBOP is given as follows:
$$MBOP=\begin{bmatrix} (x_{1,1},y_{1,1}) & (x_{1,2},y_{1,2}) & \cdots & (x_{1,d},y_{1,d}) \\ (x_{2,1},y_{2,1}) & (x_{2,2},y_{2,2}) & \cdots & (x_{2,d},y_{2,d}) \\ \vdots & \vdots & \ddots & \vdots \\ (x_{NP,1},y_{NP,1}) & (x_{NP,2},y_{NP,2}) & \cdots & (x_{NP,d},y_{NP,d}) \end{bmatrix}$$
The first step in adopting the dichotomy encoding scheme is to map the phenotype to the genotype. A surjective function g realizes the mapping from each element of X to the corresponding element of Y:
$$g(x)=\begin{cases} 1, & \text{if } sig(x) \ge 0.5 \\ 0, & \text{otherwise} \end{cases}$$
where $sig(x) = 1/(1+e^{-x})$ is the sigmoid function. The sigmoid function is often used as the threshold function of neural networks; it was applied in binary particle swarm optimization (BPSO) [51] to convert the position of a particle from a real-valued vector into a 0-1 vector. It should be noted that other conversion functions [52] can also be used.
Now assume a 0-1 KP instance with 10 items. Figure 1 illustrates the above process, in which each x_i ∈ [−5.0, 5.0] (1 ≤ i ≤ 10) is chosen randomly from the uniform distribution.
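This mapping takes only a few lines of code. The following C++ sketch (with illustrative names of our own) converts a real-valued vector X into the binary vector Y; note that it is equivalent to testing whether each element is non-negative, since sig(x) ≥ 0.5 exactly when x ≥ 0.

#include <vector>
#include <cmath>

// Phenotype-to-genotype mapping of the dichotomy encoding scheme.
int g(double x) {
    double sig = 1.0 / (1.0 + std::exp(-x));   // sigmoid sig(x) = 1 / (1 + e^(-x))
    return sig >= 0.5 ? 1 : 0;                 // threshold at 0.5
}

std::vector<int> toBinary(const std::vector<double>& X) {
    std::vector<int> Y(X.size());
    for (std::size_t j = 0; j < X.size(); ++j)
        Y[j] = g(X[j]);                        // Y is the 0-1 genotype evaluated on the knapsack
    return Y;
}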

3.2. Global Position Updating Operator

The main feature of particle swarm optimization (PSO) is that each particle tends to move towards two extreme positions, namely the best position it has found so far and the global best position. Inspired by this swarm-intelligence behavior of PSO, a novel position updating operator was recently proposed and successfully embedded in HS for solving the 0-1 KP [16]. Subsequently, this position updating operator was combined with CS [14] to deal with the 0-1 KP.
It is well known that an evolutionary algorithm can yield strong optimization performance when it balances exploitation and exploration, or attraction and diffusion [53]. The original MBO has strong exploration ability but weak exploitation capability [33,43]. With the aim of enhancing the exploitation capability of MBO, we introduce the global position updating operator mentioned above. The procedure is shown in Algorithm 3, where "best" and "worst" represent the global best individual and the global worst individual, respectively; r4, r5, and rand are uniform random real numbers in [0, 1]; and pm is the mutation probability.
Algorithm 3. Global Position Updating Operator
Begin
  for i = 1 to NP (for all monarch butterflies in the whole population)
    for j = 1 to d (for all elements of the ith monarch butterfly)
      step_j = |x_{best,j} − x_{worst,j}|
      if (rand ≥ 0.5), where rand ~ U(0,1)
        x_j = x_{best,j} + r4 × step_j, where r4 ~ U(0,1)
      else
        x_j = x_{best,j} − r4 × step_j
      end if
      if (rand ≤ pm)
        x_j = L_j + r5 × (U_j − L_j), where r5 ~ U(0,1)
      end if
    end for j
  end for i
End.
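The following minimal C++ sketch illustrates this operator. For brevity it assumes a single lower and upper bound L and U shared by all dimensions (the algorithm above uses per-dimension bounds L_j, U_j), and the names are ours rather than the paper's.

#include <vector>
#include <random>
#include <cmath>

// Sketch of the global position updating operator (Algorithm 3).
// xbest/xworst are the global best and worst butterflies, pm the mutation probability.
void globalPositionUpdate(std::vector<std::vector<double>>& pop,
                          const std::vector<double>& xbest,
                          const std::vector<double>& xworst,
                          double L, double U, double pm, std::mt19937& gen) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    for (auto& x : pop)
        for (std::size_t j = 0; j < x.size(); ++j) {
            double step = std::fabs(xbest[j] - xworst[j]);     // adaptive step |xbest - xworst|
            double r4 = uni(gen);
            x[j] = (uni(gen) >= 0.5) ? xbest[j] + r4 * step    // search on either side of the best position
                                     : xbest[j] - r4 * step;
            if (uni(gen) <= pm)
                x[j] = L + uni(gen) * (U - L);                 // uniform mutation preserves diversity
        }
}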

3.3. Two-Stage Individual Optimization Method Based on Greedy Strategy

Since the KP is a constrained optimization problem, infeasible solutions may occur. There are two major ways to handle them: redefining the objective function with a penalty function method (PFM) [54,55], or using an individual optimization method based on a greedy strategy (IOM) [56,57]. Unfortunately, the former performs poorly on large-scale KP problems. In this paper, we adopt IOM to handle infeasible solutions.
A simple greedy strategy (GS) [58] chooses the item with the greatest density p_i/w_i first. Although GS guarantees the feasibility of all individuals, it has several imperfections. Firstly, for a feasible individual, the corresponding objective function value may become worse after applying GS. Secondly, the lack of further optimization of all individuals can lead to unsatisfactory solutions.
In order to overcome the shortcomings of GS, a two-stage individual optimization method was proposed by He et al. [45,46]. A greedy modification operator (GMO) repairs infeasible individuals in the first stage, followed by a greedy optimization operator (GOO), which further optimizes feasible individuals. The method proceeds as follows.
Step 1.
The quicksort algorithm is used to sort all items in non-ascending order of p_i/w_i, and the item indices are stored in an array H[1], H[2], …, H[n].
Step 2.
For an infeasible individual X = {x_1, x_2, …, x_n} ∈ {0,1}^n, GMO is applied.
Step 3.
For a feasible individual X = {x_1, x_2, …, x_n} ∈ {0,1}^n, GOO is performed.
After the above repair process, it is easy to verify that each optimized individual is feasible. GMO and GOO are particularly important when solving high-dimensional KP problems [45,46]. The pseudo-code of GMO and GOO is shown in Algorithms 4 and 5, respectively.
Algorithm 4. Greedy Modification Operator
Begin
  Input: X = {x_1, x_2, …, x_n} ∈ {0,1}^n, W = {w_1, w_2, …, w_n}, P = {p_1, p_2, …, p_n}, H[1…n], C.
    weight = 0
    for i = 1 to n
      weight = weight + x_{H[i]} × w_{H[i]}
      if weight > C
        weight = weight − x_{H[i]} × w_{H[i]}
        x_{H[i]} = 0
      end if
    end for i
  Output: X = {x_1, x_2, …, x_n} ∈ {0,1}^n
End.
Algorithm 5. Greedy Optimization Operator
Begin
  Input: X = {x_1, x_2, …, x_n} ∈ {0,1}^n, W = {w_1, w_2, …, w_n}, P = {p_1, p_2, …, p_n}, H[1…n], C.
    weight = 0
    for i = 1 to n
      weight = weight + x_i × w_i
    end for i
    for i = 1 to n
      if x_{H[i]} = 0 and weight + w_{H[i]} ≤ C
        x_{H[i]} = 1
        weight = weight + w_{H[i]}
      end if
    end for i
    Output: X = {x_1, x_2, …, x_n} ∈ {0,1}^n
End.
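The two operators translate directly into code. The following C++ sketch follows Algorithms 4 and 5 line by line; the function names are ours, and H is assumed to hold the item indices already sorted in non-ascending order of p[i]/w[i].

#include <vector>

// Greedy Modification Operator (Algorithm 4): scan items in density order and
// drop any selected item that makes the accumulated weight exceed C.
void greedyModification(std::vector<int>& x, const std::vector<long long>& w,
                        const std::vector<int>& H, long long C) {
    long long weight = 0;
    for (int idx : H) {
        weight += static_cast<long long>(x[idx]) * w[idx];
        if (weight > C) {                                   // infeasible so far: remove this item again
            weight -= static_cast<long long>(x[idx]) * w[idx];
            x[idx] = 0;
        }
    }
}

// Greedy Optimization Operator (Algorithm 5): add unselected items, best density
// first, as long as they still fit into the remaining capacity.
void greedyOptimization(std::vector<int>& x, const std::vector<long long>& w,
                        const std::vector<int>& H, long long C) {
    long long weight = 0;
    for (std::size_t i = 0; i < x.size(); ++i)
        weight += static_cast<long long>(x[i]) * w[i];      // current total weight
    for (int idx : H)
        if (x[idx] == 0 && weight + w[idx] <= C) {
            x[idx] = 1;
            weight += w[idx];
        }
}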
Algorithm 6. GMBO for 0-1 KP
Begin
Step 1:
Sorting. Quicksort is used to sort all items in non-ascending order of p_i/w_i, 1 ≤ i ≤ n, and the item indices are stored in the array H[1…n].
Step 2:
Initialization. Set the generation counter g = 1; set the migration period peri, the migration ratio p, the butterfly adjusting rate BAR, and the max walk step of the Lévy flights Smax; set the maximum generation MaxGen and the generation interval between recombinations RG. Generate NP monarch butterfly individuals randomly {<X_1, Y_1>, <X_2, Y_2>, …, <X_NP, Y_NP>}. Calculate the fitness of each individual, f(Y_i), 1 ≤ i ≤ NP.
Step 3:
While (the stopping criterion is not met)
Divide the whole population (NP individuals) into subpopulation_1 (NP1 individuals) and subpopulation_2 (NP2 individuals) according to their fitness;
Calculate and record the global optimal individual <Xgbest, Ygbest>.
Update subpopulation_1 with migration operator.
Update subpopulation_2 with butterfly adjusting operator.
Update the whole population with the global position updating operator.
Repair the infeasible solutions by performing GMO.
Improve the feasible solutions by performing GOO.
Keep best solutions.
Find the current best solution (Ygbest, f(Ygbest)).
g = g + 1.
if Mod(g, RG) = 0
   Reorganize the two subpopulations into one population.
        end if
Step 4:
end while
Step 5:
Output the best results
  End.

3.4. The Procedure of GMBO for the 0-1 KP

In this section, the procedure of GMBO for 0-1 KP is described in Algorithm 6, and the flowchart is illustrated in Figure 2. Apart from the initialization, it is divided into three main processes.
(1) In the migration process, the position of each monarch butterfly individual in subpopulation_1 is updated. This process can be viewed as exploitation, since it combines the properties of the currently known individuals in subpopulation_1 and subpopulation_2.
(2) In the butterfly adjusting process, part of the genes of the global best individual are passed on to the next generation. Moreover, Lévy flights come into play, as their occasionally long step lengths help explore the search space. This process can be considered exploration, which may find new solutions in unknown regions of the search space.
(3) In the global position updating process, the distance between the global best individual and the global worst individual defines an adaptive step. At the early stage of the optimization process, the two extreme individuals differ greatly; the adaptive step is therefore large and the search scope broad, which benefits the global search over a wide range. As the evolution progresses, the global worst individual becomes more similar to the global best individual, so at the late stage the difference is small, the adaptive step shrinks, and the search area narrows, which is useful for performing the local search. In addition, a genetic mutation is applied to preserve population diversity and avoid premature convergence. It should be noted that, unlike the original MBO, the two newly generated subpopulations in GMBO are recombined into one population only every RG generations rather than at every generation, which reduces time consumption.

3.5. The Time Complexity

In this subsection, the time complexity of GMBO (Algorithm 6) is estimated briefly. The time complexity of GMBO mainly hinges on Steps 1–3. In Step 1, the quicksort algorithm costs O(n log n) time. In Step 2, initializing NP individuals consumes O(NP · n) time, and the fitness calculation takes O(NP). In Step 3, the migration operator costs O(NP1 · n) time, the butterfly adjusting operator O(NP2 · n), and the global position updating operator O(NP · n). GMO and GOO have the same time complexity, O(NP · n). Thus, the time complexity of GMBO would be O(n log n) + O(NP · n) + O(NP) + O(NP1 · n) + O(NP2 · n) + O(NP · n) + O(NP · n) = O(n²).

4. Simulation Experiments

We chose three different sets of 0-1 KP test instances to verify the feasibility and effectiveness of the proposed GMBO. Test sets 1 and 2 are widely used low-dimensional benchmark instances with dimensions from 4 to 24. Test set 3 consists of 15 high-dimensional 0-1 KP instances generated randomly with dimensions from 800 to 2000.

4.1. Experimental Data Set

We first describe how test set 3 was generated. Since the difficulty of knapsack problems is greatly affected by the correlation between profits and weights [59], three typical classes of large-scale 0-1 KP instances were randomly generated to demonstrate the performance of the proposed algorithm. Here the function Rand(a, b) returns a random integer uniformly distributed in [a, b]. For each instance, the maximum capacity of the knapsack equals 0.75 times the total weight of all items. The procedure is as follows:
  • Uncorrelated instances:
    w_j = Rand(10, 100), p_j = Rand(10, 100)
  • Weakly correlated instances:
    w_j = Rand(10, 100), p_j = Rand(w_j − 10, w_j + 100)
  • Strongly correlated instances:
    w_j = Rand(10, 100), p_j = w_j + 10
In this section, three groups of large-scale 0-1 KP instances with dimensionality varying from 800 to 2000 were considered. These 15 instances include 5 uncorrelated, 5 weakly correlated, and 5 strongly correlated instances, with dimension sizes 800, 1000, 1200, 1500, and 2000, respectively. We denote these instances by KP1–KP15.
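As an illustration of the generation procedure above, the following C++ sketch produces the three instance classes; the struct and function names are ours, and the weakly correlated profit range follows the formula as printed above.

#include <vector>
#include <random>
#include <numeric>

// Sketch of the random generation of the three instance classes.
// type: 0 = uncorrelated, 1 = weakly correlated, 2 = strongly correlated.
struct KPInstance { std::vector<long long> w, p; long long C; };

KPInstance generateInstance(int n, int type, std::mt19937& gen) {
    auto randInt = [&](long long a, long long b) {
        return std::uniform_int_distribution<long long>(a, b)(gen);
    };
    KPInstance inst;
    inst.w.resize(n); inst.p.resize(n);
    for (int j = 0; j < n; ++j) {
        inst.w[j] = randInt(10, 100);
        if (type == 0)      inst.p[j] = randInt(10, 100);                          // uncorrelated
        else if (type == 1) inst.p[j] = randInt(inst.w[j] - 10, inst.w[j] + 100);  // weakly correlated
        else                inst.p[j] = inst.w[j] + 10;                            // strongly correlated
    }
    long long total = std::accumulate(inst.w.begin(), inst.w.end(), 0LL);
    inst.C = (3 * total) / 4;   // capacity set to 0.75 times the total weight
    return inst;
}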

4.2. Parameter Settings

As mentioned earlier, GMBO includes four important parameters: p, peri, BAR, and Smax. In order to examine the effect of these parameters on the performance of GMBO, orthogonal design (OD) [47] was applied to an uncorrelated 1000-dimensional 0-1 KP instance. Our experiment involved 4 factors, 4 levels per factor, and 16 combinations of levels. The combinations of different parameter values are given in Table 1.
For each experiment, the average value of the total profits was obtained with 50 independent runs. The results are listed in Table 2.
Using data from Table 2, we can carry out factor analysis, rank the 4 parameters according to the degree of influence on the performance of GMBO, and deduce the better level of each factor. The factor analysis results are recorded in Table 3, and the changing trends of all the factor levels are shown in Figure 3.
As can be seen from Table 3 and Figure 3, p is the most important parameter and needs to be selected carefully for 0-1 KP problems. A small p means more elements come from subpopulation_2; conversely, a large p means more elements are selected from subpopulation_1. For peri, the curve showed a slight upward trend over a small range, which implies that elements from subpopulation_2 had a greater chance of appearing in the newly generated monarch butterfly. For BAR and Smax, Figure 3 shows that their effect on the algorithm was not obvious.
According to the above analysis based on OD, the most suitable parameter combination is p2 = 3/12, peri4 = 1.4, BAR1 = 1/12, and Smax3 = 1.0, which will be adopted in the following experiments.

4.3. The Comparisons of the GMBO and the Classical Algorithms

In order to investigate the ability of GMBO to find the optimal solutions and to test its convergence speed, five representative optimization algorithms, BMBO [42], ABC [60], CS [61], DE [62], and GA [11], were selected for comparison. GA is an important branch of computational intelligence that has been intensively studied since it was developed by John Holland et al., and it is representative of population-based algorithms. DE is a vector-based evolutionary algorithm that has attracted increasing attention since it was first proposed and has been applied to many optimization problems. CS is one of the more recent swarm intelligence algorithms; its similarity to GMBO lies in the introduction of Lévy flights. ABC is a relatively novel bio-inspired computing method with the outstanding advantage of performing both global and local search in each iteration.
There are several points to explain. Firstly, all five comparative methods (excluding GA) used the previously mentioned dichotomy encoding mechanism. Secondly, all six methods used GMO and GOO to carry out the additional repair and optimization operations. Thirdly, ABC, CS, DE, GA, MBO, and GMBO are used below as shorthand for the binary versions of the six methods.
The parameters were set as follows. For ABC, the number of food sources was set to 25 and the maximum number of search attempts to 100. CS, DE, and GA used the same parameters as in [15]. For MBO, we took the parameters suggested in Section 4.2; in addition, the two subpopulations were recombined every 50 generations. GMBO and MBO have identical parameters, except that GMBO additionally uses the mutation probability pm = 0.25.
For the sake of fairness, the population sizes of the six methods were set to 50. The maximum run time was set to 8 s for the 800-, 1000-, and 1200-dimensional instances and 10 s for the 1500- and 2000-dimensional instances. Fifty independent runs were performed to obtain the experimental results.
We used C++ as the programming language and ran the code on a PC with an Intel(R) Core(TM) i5-2415M CPU at 2.30 GHz and 4 GB RAM.

4.3.1. The Experimental Results of GMBO on Solving Two Sets of Low-Dimensional 0-1 Knapsack Problems

In this section, two sets of 0-1 KP test instances were considered for testing the efficiency of GMBO. The maximum number of iterations was set to 50 and, as mentioned earlier, 50 independent runs were made. The first set, which contains 10 low-dimensional 0-1 knapsack problems [19,20], was adopted with the aim of investigating the basic performance of GMBO. These 10 standard 0-1 KP test instances have been studied by many researchers, and detailed information about them can be found in the literature [13,19,20]. Their basic parameters are recorded in Table 4. The experimental results obtained by GMBO are listed in Table 5.
Here, "Dim" is the dimension of the test problem; "Opt.value" is the optimal value obtained by the DP method [7]; "Opt.solution" is the optimal solution; "SR" is the success rate; "Time" is the average time to reach the optimal solution over 50 runs; "MinIter", "MaxIter", and "MeanIter" represent the minimum, maximum, and average number of iterations needed to reach the optimal solution over 50 runs, respectively; and "Best", "Worst", "Mean", and "Std" are the best value, worst value, mean value, and standard deviation, respectively.
As can be seen from Table 5, GMBO achieves the optimal solution for all 10 instances with a 100% success rate. Furthermore, the best value, the worst value, and the mean value are all equal to the optimal value for every test problem. Obviously, the efficiency of GMBO is very high on this set of instances, since GMBO obtains the optimal solution in a very short time: the minimum number of iterations is only 1, and the mean number of iterations is less than 6 for all test problems. In particular, for f6, HS [18], HIS [63], and NGHS [19] can only achieve a best value of 50, while GMBO obtains the optimal value 52.
The second set, which includes 25 0-1 KP instances, was taken from references [64,65]. To the best of our knowledge, the optimal value and the optimal solution of each of these instances are provided for the first time in this paper. Their primary parameters are recorded in Table 6, and the experimental results are summarized in Table 7. Compared to Table 5, three new evaluation criteria, "ARB", "ARW", and "ARM", are used to evaluate the proposed method; "Opt.value" again represents the optimal solution value obtained by the DP method. The following definitions are given:
$$ARB = \frac{\mathrm{Opt.value}}{\mathrm{Best}}, \qquad ARW = \frac{\mathrm{Opt.value}}{\mathrm{Worst}}, \qquad ARM = \frac{\mathrm{Opt.value}}{\mathrm{Mean}}$$
Here, "ARB" is the approximation ratio [66] of the optimal solution value (Opt.value) to the best approximate solution value (Best). Similarly, "ARW" and "ARM" are based on the worst approximate solution value (Worst) and the mean approximate solution value (Mean), respectively. ARB, ARW, and ARM indicate how close Best, Worst, and Mean are to Opt.value; plainly, all three are real numbers greater than or equal to 1.0.
From Table 7, it is clear that GMBO can obtain the optimal solution value for all 25 instances. Among them, GMBO found the optimal solution values of 13 instances with a 100% SR, and the success rate for nine instances was more than 80%. In addition, the standard deviation for 13 instances was 0. ARB reflects well the proximity between the best approximate solution value and the optimal solution value, and ARW and ARM behave similarly; for all three criteria, the values were equal or very close to 1.0 for all 25 instances.
Thus, the conclusion is that GMBO had superior performance in solving low-dimensional 0-1 KP instances.

4.3.2. Comparisons of Three Kinds of Large-Scale 0-1 KP Instances

In this section, in order to make a comprehensive investigation of the optimization ability of the proposed GMBO, test set 3, which includes 5 uncorrelated, 5 weakly correlated, and 5 strongly correlated large-scale 0-1 KP instances, was considered. The experimental results are listed in Table 8, Table 9 and Table 10 below. The best results on all the statistical criteria of each 0-1 KP instance, i.e., the best value, the mean value, the worst value, the standard deviation, and the approximation ratio, appear in bold. As noted earlier, Opt and Time represent the optimal value and the time taken by the DP method, respectively.
The performance comparisons of the six methods on the five large-scale uncorrelated 0-1 KP instances are listed in Table 8. It can be seen that GMBO outperformed the other five algorithms on six and five evaluation criteria for KP1 and KP4, respectively. In addition, GMBO obtained the best results for the best and mean values for KP3 and was superior to the other five algorithms in the worst value for KP2. GMBO failed, however, to show superior performance on the 2000-dimensional instance (KP5), where MBO beat the competitors. Moreover, ABC clearly showed better stability, and the best value of KP2 was achieved by CS. DE and GA showed the worst performance for KP1–KP5. Meanwhile, the approximation ratio of the best value of GMBO for KP1 was approximately 1.0, and there was little difference between the approximation ratio of the best value of GMBO (1.0242) and that of MBO (1.0237) for KP5.
Table 9 records the performance comparison of the six methods on the five large-scale weakly correlated 0-1 KP instances. The experimental results in Table 9 differ from those in Table 8. It is clear that GMBO had a striking advantage on almost all six statistical criteria for KP6–KP9. For KP10, similarly to KP5, GMBO was still not able to beat MBO. It is worth mentioning that the approximation ratio of the best value of GMBO for KP6, KP7, and KP9 equaled 1.0, and that the standard deviation obtained by GMBO for KP6, KP7, and KP9 was much smaller than the corresponding values of the other five algorithms.
A comparative study of the six methods on the five large-scale strongly correlated 0-1 KP instances is recorded in Table 10. GMBO outperformed the other five methods for KP11–KP14 on the five statistical criteria other than Std; ABC obtained the best Std values for KP11–KP15. For KP15, GMBO obtained better worst values. CS, DE, and GA failed to show outstanding performance in this case. Under these circumstances, the approximation ratio of the worst value of GMBO for KP11–KP15 was less than 1.0019.
For a clearer and more intuitive measure of how close the values obtained by each algorithm are to the theoretical optimal values, the ARB values on the three types of 0-1 KP instances are illustrated in Figure 4, Figure 5 and Figure 6. From Figure 4, the ARB of GMBO for KP1, KP3, and KP4 is extremely close or equal to 1. GMBO had the smallest ARB for KP1 and KP3–KP5, while for KP2 CS obtained the smallest ARB. Similarly, in Figure 5, GMBO still had the smallest ARB values, which were 1.0 (KP6, KP7, and KP9) or less than 1.015 (KP8, KP10). In terms of the strongly correlated 0-1 KP instances, GMBO consistently outperformed the other five methods (see Figure 6), with the smallest ARB values except for KP15; in particular, the ARB of GMBO was still less than 1.0015 for KP15.
Overall, Table 8, Table 9 and Table 10 and Figure 4, Figure 5 and Figure 6 indicate that GMBO was superior to the other five methods when addressing large-scale 0-1 KP problems. In addition, if we compare the worst values achieved by GMBO with the best values obtained by the other methods, we can observe that, for the majority of instances, the former were far better than the latter.
With regard to the best values, GMBO obtained better values than the others for almost all the instances, except KP2, KP5, KP10, and KP15, for which CS and MBO achieved the best values instead. More specifically, compared to the second-best values reached by the other methods, the improvements for KP1–KP15 brought by GMBO were 0.59%, −0.22%, 1.10%, 1.45%, −0.05%, 0.27%, 0.18%, 1.09%, 0.24%, −0.09%, 0.07%, 0.10%, 0.00%, and −0.02%, respectively.
With regard to the mean values, the results were very similar to the best values. The improvements for KP1–KP15 were 1.51%, −0.02%, 1.15%, 2.16%, −0.09%, 0.77%, 0.83%, 1.37%, 0.96%, −0.24%, 0.11%, 0.09%, 0.07%, 0.00%, and 0.00%, respectively.
With regard to the worst values, GMBO still reached better values for almost all 15 instances, except KP3, KP5, and KP10, for which MBO was slightly better than GMBO. The improvements for KP1–KP15 were 1.73%, 0.23%, −0.29%, 0.80%, −0.17%, 0.94%, 1.00%, 0.37%, 1.05%, −0.44%, 0.10%, 0.02%, 0.00%, 0.00%, and 0.01%, respectively.
In order to test the differences between the proposed GMBO and the other five methods, Wilcoxon's rank-sum tests at the 5% significance level were used. Table 11 records the results of the rank-sum tests for KP1–KP15. In Table 11, "1" indicates that GMBO outperforms the other method at 95% confidence, "−1" indicates the opposite, and "0" indicates that the two compared methods have similar performance. The last three rows summarize the number of times that GMBO performed better than, similarly to, and worse than the corresponding algorithm over the 50 runs.
From Table 11, GMBO outperformed ABC, CS, DE, and GA on all 15 0-1 KP instances. Compared with MBO, GMBO performed better than, similarly to, and worse than MBO on 9, 3, and 3 instances, respectively. Therefore, it is easy to conclude that GMBO was superior to, or at least comparable with, the other five methods, which is consistent with the foregoing analysis.
Table 12, Table 13 and Table 14 illustrate the ranks of six methods for 15 large-scale 0-1 KP instances on the best values, the mean values, and the worst values, respectively. These clearly show the performance of GMBO in comparison with the other five algorithms.
According to Table 12, the proposed GMBO obviously exhibited superior performance compared with the other five methods. In addition, CS and MBO can be regarded as the second-best methods, with comparable performance, while GA consistently showed the worst performance. Overall, the average ranks according to the best values were: GMBO (1.33), MBO (2.33), CS (2.53), ABC (3.80), DE (4.80), and GA (6).
From Table 13, we can see that the average rank of GMBO still occupied first place, and MBO consistently outperformed the other four methods. Note that the rank value of ABC was close to that of CS. The detailed ranking was as follows: GMBO (1.27), MBO (1.73), ABC (3.27), CS (3.87), DE (4.87), and GA (6).
Table 14 shows the statistical results of the six methods based on the worst values. The ranking order of the six methods was GMBO (1.20), MBO (2.07), ABC (2.60), CS (3.93), DE (5), and GA (6), which was the same ordering as in Table 13.
Next, a comparison on the six highest-dimensional 0-1 KP instances, i.e., KP4, KP5, KP9, KP10, KP14, and KP15, is illustrated in Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12, based on the best profits achieved over 50 runs.
Figure 7, Figure 9 and Figure 11 illustrate the best values achieved by the six methods on the 1500-dimensional uncorrelated, weakly correlated, and strongly correlated 0-1 KP instances over 50 runs, respectively. From Figure 7, it can easily be seen that the best values obtained by GMBO far exceed those of the other five methods, although the two best values of CS outstripped the two worst values of GMBO. From Figure 9, we can conclude that GMBO greatly outperformed the other five methods; the distribution of the best values of GMBO over the 50 runs was close to a horizontal line, which points to the excellent stability of GMBO in this case, whereas CS had the worst numerical stability. From Figure 11, the curve of GMBO still lay above those of ABC, CS, DE, and GA, as in Figure 7 and Figure 9; this advantage, however, was not obvious when compared with MBO.
Figure 8, Figure 10 and Figure 12 show the best values obtained by the six methods on the 2000-dimensional uncorrelated, weakly correlated, and strongly correlated 0-1 KP instances over 50 runs, respectively. As the dimension grows, the search space expands dramatically to 2^2000, which is a challenge for any method. It can be said with certainty that almost all the values of GMBO are larger than those of the other five methods except MBO. Similarly to Figure 11, the curves of MBO partially overlap those of GMBO in Figure 12, which may be interpreted as GMBO being able to compete with MBO.
For the purpose of visualizing the experimental results from a statistical perspective, the corresponding boxplots for the higher-dimensional instances KP4–KP5, KP9–KP10, and KP14–KP15 are shown in Figure 13, Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18. On the whole, the boxplot for GMBO has larger values and a smaller height than those of the other five methods, which indicates the stronger optimization ability and stability of GMBO even when encountering high-dimensional instances.
In order to examine the convergence rate of GMBO, the evolutionary processes and convergence trajectories of the six methods are illustrated in Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24. Six high-dimensional instances, viz. KP4, KP5, KP9, KP10, KP14, and KP15, were chosen. In addition, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24 show the best values averaged over 50 runs, not the result of a single independent experiment.
From Figure 19, the curves of GMBO and MBO almost coincide before 6 s, but afterwards GMBO converges rapidly to a better value than the others. From Figure 20, it is interesting to note that MBO has a slight advantage over GMBO in the average values. From Figure 21, MBO and GMBO have identical initial function values, and the average values obtained by MBO were better than those of GMBO before 3 s; however, similarly to the trend in Figure 19, after 3 s GMBO quickly converged to a higher value. As depicted in Figure 22, when addressing the 2000-dimensional weakly correlated 0-1 KP instance, GMBO was unexpectedly inferior to MBO. Figure 23 and Figure 24 illustrate the evolutionary process on the strongly correlated 0-1 KP instances; by observing these two convergence graphs, we can conclude that GMBO and MBO have similar performance. Throughout Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24, GMBO overall shows a stronger optimization ability and a faster convergence speed towards the optimal solutions than the other five methods.

4.4. The Comparisons of the GMBO and the Latest Algorithms

To further evaluate the performance of the proposed GMBO, two of the latest algorithms, namely moth search (MS) [67] and moth-flame optimization (MFO) [68], were selected for comparison with GMBO. The following factors were mainly considered: (1) no literature on the application of MS and MFO to the 0-1 KP was found; (2) GMBO, MS, and MFO are all novel nature-inspired swarm intelligence algorithms, which simulate the migration behavior of the monarch butterfly, the Lévy flight mode, or the navigation method of moths.
For MS, the max step Smax = 1.0, the acceleration factor φ = 0.618, and the index β = 1.5. For MFO, the maximum number of flames N = 30. In order to make a fair comparison, all experiments were conducted in the same experimental environment as described above. The detailed experimental results of GMBO and the other two algorithms on the three kinds of large-scale 0-1 KP instances are presented in Table 15; the best, mean, and standard deviation values in bold indicate superiority, and the number of times each of the three algorithms dominates on these three statistics is given in the last line of Table 15. As the results in Table 15 show, the numbers of times GMBO took priority in the best, mean, and standard deviation values were 8, 10, and 5, respectively. The simulation results indicate that GMBO generally provided excellent performance on most instances compared with MFO and MS; the mean and standard deviation metrics demonstrate again that GMBO was more stable, and the comprehensive performance of MFO was superior to that of MS.
To summarize, by analyzing Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15 and Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24, it can be inferred that GMBO has better optimization capability, better numerical stability, and a higher convergence speed. In other words, it can be claimed that GMBO is an excellent MBO variant that is capable of addressing large-scale 0-1 KP instances.

5. Conclusions

In order to tackle high-dimensional 0-1 KP problems more efficiently and effectively, and to overcome the shortcomings of the original MBO, a novel monarch butterfly optimization with a global position updating operator (GMBO) has been proposed in this paper. Firstly, a simple and effective dichotomy encoding scheme is used without changing the evolutionary formulas. Moreover, an ingenious global position updating operator is introduced with the intention of enhancing the optimization capacity and convergence speed; the inspiration behind the new operator lies in creating a balance between intensification and diversification, a very important feature in the field of metaheuristics. Furthermore, a two-stage individual optimization method based on the greedy strategy is employed, which, besides guaranteeing the feasibility of the solutions, is able to further improve their quality. In addition, orthogonal design (OD) is applied to find suitable parameters. Finally, GMBO was verified and compared with ABC, CS, DE, GA, and MBO on large-scale 0-1 KP instances. The experimental results demonstrate that GMBO outperforms the other five algorithms in terms of solution precision, convergence speed, and numerical stability.
The introduction of a global position updating operator coupled with an efficient two-stage repair operator is instrumental to the superior performance of GMBO. However, there is room for further enhancing its performance. Firstly, hybridization of two complementary methods is becoming more and more popular, such as the hybridization of HS with CS [69]; combining MBO with other methods could be very promising and is hence worth experimentation. Secondly, in the present work, three groups of high-dimensional 0-1 KP instances were selected. In the future, the multidimensional knapsack problem, the quadratic knapsack problem, the knapsack sharing problem, and the randomized time-varying knapsack problem can be considered to investigate the performance of MBO. Thirdly, some typical combinatorial optimization problems, such as job scheduling problems [70,71,72], feature selection [73,74,75], and classification [76], deserve serious investigation and discussion; for these challenging engineering problems, the key issue is how to encode solutions and handle constraints, and the application of MBO to them is another interesting research area. Finally, perturbation [77], ensemble [78], learning [79,80], or information feedback mechanisms [81] can be effectively combined with MBO to improve its performance.

Author Contributions

Investigation, Y.F. and X.Y.; Methodology, Y.F.; Resources, X.Y.; Supervision, G.-G.W.; Validation, G.-G.W.; Visualization, G.-G.W.; Writing—original draft, Y.F. and X.Y.; Writing—review & editing, G.-G.W.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 61503165 and 61806069.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Martello, S.; Toth, P. Knapsack Problems: Algorithms and Computer Implementations; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1990. [Google Scholar]
  2. Mavrotas, G.; Diakoulaki, D.; Kourentzis, A. Selection among ranked projects under segmentation, policy and logical constraints. Eur. J. Oper. Res. 2008, 187, 177–192. [Google Scholar] [CrossRef]
  3. Peeta, S.; Salman, F.S.; Gunnec, D.; Viswanath, K. Pre-disaster investment decisions for strengthening a highway network. Comput. Oper. Res. 2010, 37, 1708–1719. [Google Scholar] [CrossRef]
  4. Yates, J.; Lakshmanan, K. A constrained binary knapsack approximation for shortest path network interdiction. Comput. Ind. Eng. 2011, 61, 981–992. [Google Scholar] [CrossRef]
  5. Dantzig, G.B. Discrete-variable extremum problems. Oper. Res. 1957, 5, 266–288. [Google Scholar] [CrossRef]
  6. Shih, W. A branch and bound method for the multi-constraint zero-one knapsack problem. J. Oper. Res. Soc. 1979, 30, 369–378. [Google Scholar] [CrossRef]
  7. Toth, P. Dynamic programing algorithms for the zero-one knapsack problem. Computing 1980, 25, 29–45. [Google Scholar] [CrossRef]
  8. Martello, S.; Toth, P. A new algorithm for the 0-1 knapsack problem. Manag. Sci. 1988, 34, 633–644. [Google Scholar] [CrossRef]
  9. Pisinger, D. An expanding-core algorithm for the exact 0–1 knapsack problem. Eur. J. Oper. Res. 1995, 87, 175–187. [Google Scholar] [CrossRef]
  10. Puchinger, J.; Raidl, G.R.; Pferschy, U. The Core Concept for the Multidimensional Knapsack Problem. In Proceedings of the European Conference on Evolutionary Computation in Combinatorial Optimization, Budapest, Hungary, 10–12 April 2006; Gottlieb, J., Raidl, G.R., Eds.; Springer: Berlin, Germany, 2006; pp. 195–208. [Google Scholar] [Green Version]
  11. Thiel, J.; Voss, S. Some experiences on solving multi constraint zero-one knapsack problems with genetic algorithms. Inf. Syst. Oper. Res. 1994, 32, 226–242. [Google Scholar]
  12. Chen, P.; Li, J.; Liu, Z.M. Solving 0-1 knapsack problems by a discrete binary version of differential evolution. In Proceedings of the Second International Symposium on Intelligent Information Technology Application, Shanghai, China, 21–22 December 2008; Volume 2, pp. 513–516. [Google Scholar]
  13. Bhattacharjee, K.K.; Sarmah, S.P. Shuffled frog leaping algorithm and its application to 0/1 knapsack problem. Appl. Soft Comput. 2014, 19, 252–263. [Google Scholar] [CrossRef]
  14. Feng, Y.H.; Jia, K.; He, Y.C. An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems. Comput. Intell. Neurosci. 2014, 2014, 1. [Google Scholar] [CrossRef]
  15. Feng, Y.H.; Wang, G.G.; Feng, Q.J.; Zhao, X.J. An effective hybrid cuckoo search algorithm with improved shuffled frog leaping algorithm for 0-1 knapsack problems. Comput. Intell. Neurosci. 2014, 2014, 36. [Google Scholar] [CrossRef]
  16. Kashan, M.H.; Nahavandi, N.; Kashan, A.H. DisABC: A new artificial bee colony algorithm for binary optimization. Appl. Soft Comput. 2012, 12, 342–352. [Google Scholar] [CrossRef]
  17. Xue, Y.; Jiang, J.; Zhao, B.; Ma, T. A self-adaptive artificial bee colony algorithm based on global best for global optimization. Soft Comput. 2018, 22, 2935–2952. [Google Scholar] [CrossRef]
  18. Zong, W.G.; Kim, J.H.; Loganathan, G.V. A New Heuristic Optimization Algorithm: Harmony Search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  19. Zou, D.; Gao, L.; Li, S.; Wu, J. Solving 0-1 knapsack problem by a novel global harmony search algorithm. Appl. Soft Comput. 2011, 11, 1556–1564. [Google Scholar] [CrossRef]
  20. Kong, X.; Gao, L.; Ouyang, H.; Li, S. A simplified binary harmony search algorithm for large scale 0-1 knapsack problems. Expert Syst. Appl. 2015, 42, 5337–5355. [Google Scholar] [CrossRef]
  21. Rezoug, A.; Boughaci, D. A self-adaptive harmony search combined with a stochastic local search for the 0-1 multidimensional knapsack problem. Int. J. Biol. Inspir. Comput. 2016, 8, 234–239. [Google Scholar] [CrossRef]
  22. Zhou, Y.; Li, L.; Ma, M. A complex-valued encoding bat algorithm for solving 0-1 knapsack problem. Neural Process. Lett. 2016, 44, 407–430. [Google Scholar] [CrossRef]
  23. Cai, X.; Gao, X.-Z.; Xue, Y. Improved bat algorithm with optimal forage strategy and random disturbance strategy. Int. J. Biol. Inspir. Comput. 2016, 8, 205–214. [Google Scholar] [CrossRef]
  24. Zhang, X.; Huang, S.; Hu, Y.; Zhang, Y.; Mahadevan, S.; Deng, Y. Solving 0-1 knapsack problems based on amoeboid organism algorithm. Appl. Math. Comput. 2013, 219, 9959–9970. [Google Scholar] [CrossRef]
  25. Li, X.; Zhang, J.; Yin, M. Animal migration optimization: An optimization algorithm inspired by animal migration behavior. Neural Comput. Appl. 2014, 24, 1867–1877. [Google Scholar] [CrossRef]
  26. Cui, Z.; Fan, S.; Zeng, J.; Shi, Z. Artificial plant optimization algorithm with three-period photosynthesis. Int. J. Biol. Inspir. Comput. 2013, 5, 133–139. [Google Scholar] [CrossRef]
  27. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  28. Li, X.; Wang, J.; Zhou, J.; Yin, M. A perturb biogeography based optimization with mutation for global numerical optimization. Appl. Math. Comput. 2011, 218, 598–609. [Google Scholar] [CrossRef]
  29. Wang, L.; Yang, R.; Ni, H.; Ye, W.; Fei, M.; Pardalos, P.M. A human learning optimization algorithm and its application to multi-dimensional knapsack problems. Appl. Soft Comput. 2015, 34, 736–743. [Google Scholar] [CrossRef]
  30. Wang, G.-G.; Guo, L.H.; Wang, H.Q.; Duan, H.; Liu, L.; Li, J. Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput. Appl. 2014, 24, 853–871. [Google Scholar] [CrossRef]
  31. Wang, G.-G.; Gandomi, A.H.; Yang, X.-S.; Alavi, H.A. A new hybrid method based on krill herd and cuckoo search for global optimization tasks. Int. J. Biol. Inspir. Comput. 2016, 8, 286–299. [Google Scholar] [CrossRef]
  32. Wang, G.-G.; Gandomi, A.H.; Alavi, H.A. Stud krill herd algorithm. Neurocomputing 2014, 128, 363–370. [Google Scholar] [CrossRef]
  33. Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2015. [Google Scholar] [CrossRef]
  34. Wang, G.-G.; Deb, S.; Gao, X.-Z.; Coelho, L.D.S. A new metaheuristic optimization algorithm motivated by elephant herding behavior. Int. J. Biol. Inspir. Comput. 2016, 8, 394–409. [Google Scholar] [CrossRef]
  35. Sang, H.-Y.; Duan, Y.P.; Li, J.-Q. An effective invasive weed optimization algorithm for scheduling semiconductor final testing problem. Swarm Evol. Comput. 2018, 38, 42–53. [Google Scholar] [CrossRef]
  36. Wang, G.-G.; Deb, S.; Coelho, L.D.S. Earthworm optimisation algorithm: A bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Biol. Inspir. Comput. 2018, 12, 1–22. [Google Scholar] [CrossRef]
  37. Jain, M.; Singh, V.; Rani, A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 2019, 44, 148–175. [Google Scholar] [CrossRef]
  38. Singh, B.; Anand, P. A novel adaptive butterfly optimization algorithm. Int. J. Comput. Mater. Sci. Eng. 2018, 7, 69–72. [Google Scholar] [CrossRef]
  39. Sayed, G.I.; Khoriba, G.; Haggag, M.H. A novel chaotic salp swarm algorithm for global optimization and feature selection. Appl. Intell. 2018, 48, 3462–3481. [Google Scholar] [CrossRef]
  40. Simhadri, K.S.; Mohanty, B. Performance analysis of dual-mode PI controller using quasi-oppositional whale optimization algorithm for load frequency control. Int. Trans. Electr. Energy Syst. 2019. [Google Scholar] [CrossRef]
  41. Brezočnik, L.; Fister, I.; Podgorelec, V. Swarm intelligence algorithms for feature selection: A review. Appl. Sci. 2018, 8, 1521. [Google Scholar] [CrossRef]
  42. Feng, Y.H.; Wang, G.-G.; Deb, S.; Lu, M.; Zhao, X.-J. Solving 0-1 knapsack problem by a novel binary monarch butterfly optimization. Neural Comput. Appl. 2015. [Google Scholar] [CrossRef]
  43. Wang, G.-G.; Zhao, X.C.; Deb, S. A Novel Monarch Butterfly Optimization with Greedy Strategy and Self-adaptive Crossover Operator. In Proceedings of the 2nd International Conference on Soft Computing & Machine Intelligence (ISCMI 2015), Hong Kong, China, 23–24 November 2015. [Google Scholar]
  44. He, Y.C.; Wang, X.Z.; Kou, Y.Z. A binary differential evolution algorithm with hybrid encoding. J. Comput. Res. Dev. 2007, 44, 1476–1484. [Google Scholar] [CrossRef]
  45. He, Y.C.; Song, J.M.; Zhang, J.M.; Gou, H.Y. Research on genetic algorithms for solving static and dynamic knapsack problems. Appl. Res. Comput. 2015, 32, 1011–1015. [Google Scholar]
  46. He, Y.C.; Zhang, X.L.; Li, W.B.; Li, X.; Wu, W.L.; Gao, S.G. Algorithms for randomized time-varying knapsack problems. J. Comb. Optim. 2016, 31, 95–117. [Google Scholar] [CrossRef]
  47. Fang, K.-T.; Wang, Y. Number-Theoretic Methods in Statistics; Chapman & Hall: New York, NY, USA, 1994. [Google Scholar]
  48. Wilcoxon, F.; Katti, S.K.; Wilcox, R.A. Critical values and probability levels for the Wilcoxon rank sum test and the Wilcoxon signed rank test. Sel. Tables Math. Stat. 1970, 1, 171–259. [Google Scholar]
  49. Gutowski, M. Lévy flights as an underlying mechanism for global optimization algorithms. arXiv 2001, arXiv:math-ph/0106003. [Google Scholar]
  50. Pavlyukevich, I. Lévy flights, non-local search and simulated annealing. J. Comput. Phys. 2007, 226, 1830–1844. [Google Scholar] [CrossRef]
  51. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 Conference on Systems, Man, and Cybernetics, Orlando, FL, USA, 12–15 October 1997; pp. 4104–4108. [Google Scholar]
  52. Zhu, H.; He, Y.; Wang, X.; Tsang, E.C. Discrete differential evolutions for the discounted {0-1} knapsack problem. Int. J. Biol. Inspir. Comput. 2017, 10, 219–238. [Google Scholar] [CrossRef]
  53. Yang, X.S.; Deb, S.; Hanne, T.; He, X. Attraction and Diffusion in Nature-Inspired Optimization Algorithms. Neural Comput. Appl. 2019, 31, 1987–1994. [Google Scholar] [CrossRef]
  54. Joines, J.A.; Houck, C.R. On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GA’s. In Proceedings of the First IEEE Conference on Evolutionary Computation, Orlando, FL, USA, 27–29 June 1994; pp. 579–584. [Google Scholar]
  55. Olsen, A.L. Penalty functions and the knapsack problem. In Proceedings of the First IEEE Conference on Evolutionary Computation, Orlando, FL, USA, 27–29 June 1994; pp. 554–558. [Google Scholar]
  56. Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley: Boston, MA, USA, 1989. [Google Scholar]
  57. Simon, D. Evolutionary Optimization Algorithms; Wiley: New York, NY, USA, 2013. [Google Scholar]
  58. Du, D.Z.; Ko, K.I.; Hu, X. Design and Analysis of Approximation Algorithms; Springer Science & Business Media: Berlin, Germany, 2011. [Google Scholar]
  59. Pisinger, D. Where are the hard knapsack problems? Comput. Oper. Res. 2005, 32, 2271–2284. [Google Scholar] [CrossRef]
  60. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  61. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the World Congress on Nature and Biologically Inspired Computing (NaBIC 2009), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  62. Storn, R.; Price, K. Differential evolution–A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  63. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  64. Bansal, J.C.; Deep, K. A Modified Binary Particle Swarm Optimization for Knapsack Problems. Appl. Math. Comput. 2012, 218, 11042–11061. [Google Scholar] [CrossRef]
  65. Lee, C.Y.; Lee, Z.J.; Su, S.F. A New Approach for Solving 0/1 Knapsack Problem. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007; pp. 3138–3143. [Google Scholar]
  66. Cormen, T.H. Introduction to Algorithms; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  67. Wang, G.-G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memet. Comput. 2018, 10, 151–164. [Google Scholar] [CrossRef]
  68. Mehne, S.H.H.; Mirjalili, S. Moth-Flame Optimization Algorithm: Theory, Literature Review, and Application in Optimal Nonlinear. In Nature Inspired Optimizers: Theories, Literature Reviews and Applications; Mirjalili, S., Jin, S.D., Lewis, A., Eds.; Springer: Berlin, Germany, 2020; Volume 811, p. 143. [Google Scholar]
  69. Gandomi, A.H.; Zhao, X.J.; Chu, H.C.E. Hybridizing harmony search algorithm with cuckoo search for global numerical optimization. Soft Comput. 2016, 20, 273–285. [Google Scholar]
  70. Li, J.-Q.; Pan, Q.-K.; Liang, Y.-C. An effective hybrid tabu search algorithm for multi-objective flexible job-shop scheduling problems. Comput. Ind. Eng. 2010, 59, 647–662. [Google Scholar] [CrossRef]
  71. Han, Y.-Y.; Gong, D.; Sun, X. A discrete artificial bee colony algorithm incorporating differential evolution for the flow-shop scheduling problem with blocking. Eng. Optim. 2014, 47, 927–946. [Google Scholar] [CrossRef]
  72. Li, J.-Q.; Pan, Q.-K.; Tasgetiren, M.F. A discrete artificial bee colony algorithm for the multi-objective flexible job-shop scheduling problem with maintenance activities. Appl. Math. Model. 2014, 38, 1111–1132. [Google Scholar] [CrossRef]
  73. Zhang, W.-Q.; Zhang, Y.; Peng, C. Brain storm optimization for feature selection using new individual clustering and updating mechanism. Appl. Intell. 2019. [Google Scholar] [CrossRef]
  74. Zhang, Y.; Li, H.-G.; Wang, Q.; Peng, C. A filter-based bare-bone particle swarm optimization algorithm for unsupervised feature selection. Appl. Intell. 2019, 49, 2889–2898. [Google Scholar] [CrossRef]
  75. Zhang, Y.; Wang, Q.; Gong, D.-W.; Song, X.-F. Nonnegative laplacian embedding guided subspace learning for unsupervised feature selection. Pattern Recognit. 2019, 93, 337–352. [Google Scholar] [CrossRef]
  76. Zhang, Y.; Gong, D.W.; Cheng, J. Multi-objective particle swarm optimization approach for cost-based feature selection in classification. IEEE ACM Trans. Comput. Biol. Bioinform. 2017, 14, 64–75. [Google Scholar] [CrossRef]
  77. Zhao, X. A perturbed particle swarm algorithm for numerical optimization. Appl. Soft Comput. 2010, 10, 119–124. [Google Scholar]
  78. Wu, G.; Shen, X.; Li, H.; Chen, H.; Lin, A.; Suganthan, P.N. Ensemble of differential evolution variants. Inf. Sci. 2018, 423, 172–186. [Google Scholar] [CrossRef]
  79. Wang, G.G.; Deb, S.; Gandomi, A.H.; Alavi, A.H. Opposition-based krill herd algorithm with Cauchy mutation and position clamping. Neurocomputing 2016, 177, 147–157. [Google Scholar] [CrossRef]
  80. Zhang, Y.; Gong, D.-W.; Gao, X.-Z.; Tian, T.; Sun, X.-Y. Binary differential evolution with self-learning for multi-objective feature selection. Inf. Sci. 2020, 507, 67–85. [Google Scholar] [CrossRef]
  81. Wang, G.-G.; Tan, Y. Improving metaheuristic algorithms with information feedback models. IEEE Trans. Cybern. 2019, 49, 542–555. [Google Scholar] [CrossRef]
Figure 1. An example of the dichotomy encoding scheme.
Figure 2. Flowchart of the GMBO (monarch butterfly optimization with a global position updating operator) algorithm for the 0-1 knapsack problem (0-1 KP).
Figure 3. The changing trends of all the factor levels.
Figure 4. Comparison of ARB for KP1–KP5.
Figure 5. Comparison of ARB for KP6–KP10.
Figure 6. Comparison of ARB for KP11–KP15.
Figure 7. Comparison of the best values on KP4 in 50 runs.
Figure 8. Comparison of the best values on KP5 in 50 runs.
Figure 9. Comparison of the best values on KP9 in 50 runs.
Figure 10. Comparison of the best values on KP10 in 50 runs.
Figure 11. Comparison of the best values on KP14 in 50 runs.
Figure 12. Comparison of the best values on KP15 in 50 runs.
Figure 13. Boxplot of the best values on KP4 in 50 runs.
Figure 14. Boxplot of the best values on KP5 in 50 runs.
Figure 15. Boxplot of the best values on KP9 in 50 runs.
Figure 16. Boxplot of the best values on KP10 in 50 runs.
Figure 17. Boxplot of the best values on KP14 in 50 runs.
Figure 18. Boxplot of the best values on KP15 in 50 runs.
Figure 19. The convergence graph of six methods on KP4 in 10 s.
Figure 20. The convergence graph of six methods on KP5 in 10 s.
Figure 21. The convergence graph of six methods on KP9 in 10 s.
Figure 22. The convergence graph of six methods on KP10 in 10 s.
Figure 23. The convergence graph of six methods on KP14 in 10 s.
Figure 24. The convergence graph of six methods on KP15 in 10 s.
Table 1. Combinations of different parameter values.

Parameter   Level 1   Level 2   Level 3   Level 4
p           1/12      3/12      5/12      10/12
peri        0.8       1         1.2       1.4
BAR         1/12      3/12      5/12      10/12
Smax        0.6       0.8       1         1.2
Table 2. Orthogonal array and the experimental results.

Experiment No.   p   peri   BAR   Smax   Result
1                1   1      1     1      R1 = 49,542
2                1   2      2     2      R2 = 49,538
3                1   3      3     3      R3 = 49,503
4                1   4      4     4      R4 = 49,528
5                2   1      2     3      R5 = 49,745
6                2   2      1     4      R6 = 49,739
7                2   3      4     1      R7 = 49,763
8                2   4      3     2      R8 = 49,739
9                3   1      3     4      R9 = 49,704
10               3   2      4     3      R10 = 49,728
11               3   3      1     2      R11 = 49,730
12               3   4      2     1      R12 = 49,714
13               4   1      4     2      R13 = 49,310
14               4   2      3     1      R14 = 49,416
15               4   3      2     4      R15 = 49,460
16               4   4      1     3      R16 = 49,506
Table 3. Factor analysis with the orthogonal design (OD) method.

Level     p                                    peri                                 BAR                                  Smax
1         (R1 + R2 + R3 + R4)/4 = 49,528       (R1 + R5 + R9 + R13)/4 = 49,575      (R1 + R6 + R11 + R16)/4 = 49,629     (R1 + R7 + R12 + R14)/4 = 49,609
2         (R5 + R6 + R7 + R8)/4 = 49,746       (R2 + R6 + R10 + R14)/4 = 49,605     (R2 + R5 + R12 + R15)/4 = 49,603     (R2 + R8 + R11 + R13)/4 = 49,579
3         (R9 + R10 + R11 + R12)/4 = 49,719    (R3 + R7 + R11 + R15)/4 = 49,614     (R3 + R8 + R9 + R14)/4 = 49,590      (R3 + R5 + R10 + R16)/4 = 49,620
4         (R13 + R14 + R15 + R16)/4 = 49,423   (R4 + R8 + R12 + R16)/4 = 49,622     (R4 + R7 + R10 + R13)/4 = 49,582     (R4 + R6 + R9 + R15)/4 = 49,608
Std       134.06                               17.72                                17.78                                15.13
Rank      1                                    3                                    2                                    4
Results   p = level 2 (3/12)                   peri = level 4 (1.4)                 BAR = level 1 (1/12)                 Smax = level 3 (1)
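The factor analysis in Table 3 can be reproduced directly from the orthogonal array and the results R1–R16 in Table 2. The short Python sketch below illustrates the computation; it assumes that the Std row is the population standard deviation of the four level means of each factor, and that the recommended level is the one with the largest mean profit, which is consistent with the values printed above.

# A minimal sketch, assuming the interpretation of Table 3 described above.
from statistics import mean, pstdev

# Level indices (1-4) of (p, peri, BAR, Smax) for experiments 1-16, from Table 2.
levels = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3), (1, 4, 4, 4),
    (2, 1, 2, 3), (2, 2, 1, 4), (2, 3, 4, 1), (2, 4, 3, 2),
    (3, 1, 3, 4), (3, 2, 4, 3), (3, 3, 1, 2), (3, 4, 2, 1),
    (4, 1, 4, 2), (4, 2, 3, 1), (4, 3, 2, 4), (4, 4, 1, 3),
]
results = [49542, 49538, 49503, 49528, 49745, 49739, 49763, 49739,
           49704, 49728, 49730, 49714, 49310, 49416, 49460, 49506]
factors = ("p", "peri", "BAR", "Smax")

for j, name in enumerate(factors):
    # Mean result of the four experiments in which this factor sits at each level.
    level_means = [mean(r for lv, r in zip(levels, results) if lv[j] == k)
                   for k in range(1, 5)]
    best_level = level_means.index(max(level_means)) + 1
    print(f"{name}: level means {[f'{m:.0f}' for m in level_means]}, "
          f"std {pstdev(level_means):.2f}, best level {best_level}")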
Table 4. The basic information of 10 standard low-dimensional 0-1 KP instances.

f     Dim   Opt. value   Opt. solution
f1    10    295          (0,1,1,1,0,0,0,1,1,1)
f2    20    1024         (1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,0,1,1)
f3    4     35           (1,1,0,1)
f4    4     23           (0,1,0,1)
f5    15    481.0694     (0,0,1,0,1,0,1,1,0,1,1,1,0,1,1)
f6    10    52           (0,0,1,0,1,1,1,1,1,1)
f7    7     107          (1,0,0,1,0,0,0)
f8    23    9767         (1,1,1,1,1,1,1,1,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0)
f9    5     130          (1,1,1,1,0)
f10   20    1025         (1,1,1,1,1,1,1,1,1,0,1,1,1,1,0,1,0,1,1,1)
Table 5. The experimental results of 10 standard low-dimensional 0-1 KP instances obtained by GMBO.

f     SR     Time (s)   MinIter   MaxIter   MeanIter   Best     Worst    Mean     Std
f1    100%   0.0032     1         1         1          295      295      295      0
f2    100%   0.0092     1         52        6.10       1024     1024     1024     0
f3    100%   0.0003     1         1         1          35       35       35       0
f4    100%   0.0004     1         1         1          23       23       23       0
f5    100%   0.0072     1         4         1.30       481.07   481.07   481.07   0
f6    100%   0.0023     1         1         1          52       52       52       0
f7    100%   0.0000     1         1         1          107      107      107      0
f8    100%   0.0024     1         3         1.45       9767     9767     9767     0
f9    100%   0.0000     1         1         1          130      130      130      0
f10   100%   0.0000     1         1         1          1025     1025     1025     0
Table 6. The basic information of 25 low-dimensional 0-1 KP instances.

0-1 KP   Dim   Opt. value   Opt. solution
ks_8a    8     3,924,400    1 1 1 0 1 1 0 0
ks_8b    8     3,813,669    1 1 0 0 1 0 0 1
ks_8c    8     3,347,452    1 0 0 1 0 1 0 0
ks_8d    8     4,187,707    0 0 1 0 0 1 1 1
ks_8e    8     4,955,555    0 1 0 1 0 0 1 1
ks_12a   12    5,688,887    1 0 0 0 1 1 0 1 1 0 1 0
ks_12b   12    6,498,597    0 0 0 1 1 0 1 0 1 1 1 0
ks_12c   12    5,170,626    0 1 1 0 1 0 0 1 1 0 1 1
ks_12d   12    6,992,404    1 1 0 0 0 1 1 1 0 1 0 0
ks_12e   12    5,337,472    0 1 0 0 0 0 0 0 1 1 0 1
ks_16a   16    7,850,983    0 1 0 0 1 1 0 1 1 0 1 0 0 1 1 0
ks_16b   16    9,352,998    1 0 0 0 0 0 1 0 0 1 1 1 1 1 1 0
ks_16c   16    9,151,147    1 1 0 1 0 0 0 1 1 1 0 1 0 0 1 0
ks_16d   16    9,348,889    1 0 1 1 1 1 1 0 0 1 0 0 0 1 0 0
ks_16e   16    7,769,117    0 0 1 1 0 0 1 1 1 0 1 1 1 0 1 0
ks_20a   20    10,727,049   0 0 1 0 0 1 1 1 0 1 1 1 0 0 0 1 1 1 0 1
ks_20b   20    9,818,261    1 0 0 0 0 1 1 1 0 1 0 1 1 1 1 1 0 0 0 1
ks_20c   20    10,714,023   1 1 1 1 1 1 0 0 0 1 1 0 0 1 0 0 0 1 0 1
ks_20d   20    8,929,156    0 0 0 0 0 0 0 1 1 1 1 0 1 1 0 1 1 1 0 0
ks_20e   20    9,357,969    0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 1 1 1 0 1
ks_24a   24    13,549,094   1 1 0 1 1 1 0 0 0 1 1 0 1 0 0 1 0 0 0 0 0 1 1 1
ks_24b   24    12,233,713   1 0 0 0 1 0 1 0 1 1 1 1 1 1 0 0 1 1 1 0 1 0 1 1
ks_24c   24    12,448,780   1 0 0 0 1 1 0 1 0 1 0 0 1 1 0 0 0 1 0 1 1 0 1 0
ks_24d   24    11,815,315   1 0 1 0 1 1 0 0 0 0 1 0 1 1 1 1 0 0 1 1 0 1 1 0
ks_24e   24    13,940,099   0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 1 1 0 1 1 0 0 1 0
Table 7. The experimental results of 25 low-dimensional 0-1 KP instances obtained by GMBO.

0-1 KP   SR     Best         Worst        Mean         Std         ARB      ARW      ARM
ks_8a    100%   925,369      925,369      925,369      0           1.0000   1.0000   1.0000
ks_8b    100%   3,813,669    3,813,669    3,813,669    0           1.0000   1.0000   1.0000
ks_8c    100%   3,837,398    3,837,398    3,837,398    0           1.0000   1.0000   1.0000
ks_8d    100%   4,187,707    4,187,707    4,187,707    0           1.0000   1.0000   1.0000
ks_8e    100%   4,955,555    4,955,555    4,955,555    0           1.0000   1.0000   1.0000
ks_12a   88%    5,688,887    5,681,360    5,688,046    2283.52     1.0000   1.0013   1.0001
ks_12b   86%    6,498,597    6,473,019    6,495,016    8875.23     1.0000   1.0040   1.0006
ks_12c   100%   5,170,626    5,170,626    5,170,626    0           1.0000   1.0000   1.0000
ks_12d   100%   6,992,404    6,992,404    6,992,404    0           1.0000   1.0000   1.0000
ks_12e   88%    5,337,472    5,289,570    5,331,724    15,566.30   1.0000   1.0091   1.0011
ks_16a   100%   7,850,983    7,850,983    7,850,983    0           1.0000   1.0000   1.0000
ks_16b   100%   9,352,998    9,352,998    9,352,998    0           1.0000   1.0000   1.0000
ks_16c   100%   9,151,147    9,151,147    9,151,147    0           1.0000   1.0000   1.0000
ks_16d   56%    9,348,889    9,300,041    9,342,056    10,405.28   1.0000   1.0053   1.0007
ks_16e   82%    7,769,117    7,750,491    7,765,991    6713.97     1.0000   1.0024   1.0004
ks_20a   100%   10,727,049   10,727,049   10,727,049   0           1.0000   1.0000   1.0000
ks_20b   98%    9,818,261    9,797,399    9,817,844    2920.68     1.0000   1.0021   1.0000
ks_20c   96%    10,714,023   10,700,635   10,713,487   2623.50     1.0000   1.0013   1.0000
ks_20d   100%   8,929,156    8,929,156    8,929,156    0           1.0000   1.0000   1.0000
ks_20e   48%    9,357,969    9,357,192    9,357,565    388.18      1.0000   1.0001   1.0000
ks_24a   80%    13,549,094   13,504,878   13,543,476   11,554.74   1.0000   1.0033   1.0004
ks_24b   100%   12,233,713   12,233,713   12,233,713   0           1.0000   1.0000   1.0000
ks_24c   96%    12,448,780   12,445,379   12,448,644   666.45      1.0000   1.0003   1.0000
ks_24d   72%    11,815,315   11,810,051   11,813,841   2363.53     1.0000   1.0004   1.0001
ks_24e   98%    13,940,099   13,929,872   13,939,894   1431.78     1.0000   1.0007   1.0000
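For reference, the statistics reported in Tables 5 and 7 (and in the comparisons that follow) can be obtained from the profits of the independent runs as sketched below. The definitions used here are assumptions consistent with the printed values: SR is taken as the fraction of runs that reach the known optimum, and ARB, ARW, and ARM as the optimum divided by the best, worst, and mean profit, respectively (e.g., 40,686/40,445 ≈ 1.0060 for CS on KP1 in Table 8).

# A minimal sketch under the assumed definitions above; not the authors' exact code.
from statistics import mean, stdev

def summarize(profits, optimum):
    best, worst, avg = max(profits), min(profits), mean(profits)
    return {
        "SR":    sum(p == optimum for p in profits) / len(profits),  # success rate
        "Best":  best, "Worst": worst, "Mean": avg,
        "Std":   stdev(profits) if len(profits) > 1 else 0.0,
        "ARB":   optimum / best, "ARW": optimum / worst, "ARM": optimum / avg,
    }

# Hypothetical example: three runs on an instance whose optimum is 295 (cf. f1 in Table 4).
print(summarize([295, 295, 293], optimum=295))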
Table 8. Performance comparison on five large-scale uncorrelated 0-1 KP instances.

                   KP1       KP2       KP3       KP4       KP5
DP     Opt         40,686    50,592    61,846    77,033    102,316
       Time (s)    0.952     1.235     1.914     2.521     2.705
ABC    Best        39,816    49,374    60,222    74,959    99,353
       ARB         1.0219    1.0247    1.0270    1.0277    1.0298
       Worst       39,542    49,105    59,867    74,571    99,822
       ARW         1.0289    1.0303    1.0331    1.0330    1.0250
       Mean        39,639    49,256    60,059    74,742    99,035
       Std         55.5      58.56     82.28     90.07     130.80
CS     Best        40,445    50,104    60,490    75,828    99,248
       ARB         1.0060    1.0097    1.0224    1.0159    1.0309
       Worst       39,411    49,056    59,764    74,472    98,706
       ARW         1.0324    1.0313    1.0348    1.0344    1.0366
       Mean        39,602    49,211    59,938    74,666    98,926
       Std         218.11    205.08    120.76    245.28    124.58
DE     Best        39,486    49,303    59,921    74,671    98,943
       ARB         1.0304    1.0261    1.0321    1.0316    1.0341
       Worst       39,154    48,696    59,435    74,077    98,330
       ARW         1.0391    1.0389    1.0406    1.0399    1.0405
       Mean        39,323    48,945    59,645    74,319    98,645
       Std         80.60     111.08    114.17    113.92    154.40
GA     Best        39,190    48,955    59,578    74,372    98,828
       ARB         1.0382    1.0334    1.0381    1.0358    1.0353
       Worst       38,274    47,809    58,106    72,477    96,830
       ARW         1.0630    1.0582    1.0644    1.0629    1.0567
       Mean        38,838    48,384    58,996    73,584    97,765
       Std         196.70    256.69    362.53    414.02    480.15
MBO    Best        40,276    50,023    61,090    75,405    99,946
       ARB         1.0102    1.0114    1.0124    1.0216    1.0237
       Worst       39,839    49,411    60,401    74,815    99,017
       ARW         1.0213    1.0239    1.0239    1.0296    1.0333
       Mean        40,036    49,743    60,732    75,072    99,512
       Std         100.34    133.40    163.76    149.57    187.15
GMBO   Best        40,684    49,992    61,764    76,929    99,898
       ARB         1.0000    1.0120    1.0013    1.0014    1.0242
       Worst       40,527    49,524    60,225    75,410    98,848
       ARW         1.0039    1.0216    1.0269    1.0215    1.0351
       Mean        40,641    49,732    61,430    76,691    99,424
       Std         40.09     116.12    379.76    267.90    200.38
Table 9. Performance comparison on five large-scale weakly correlated 0-1 KP instances.

                   KP6       KP7       KP8       KP9       KP10
DP     Opt         35,069    43,786    53,553    65,710    118,200
       Time (s)    1.188     1.174     1.413     2.717     2.504
ABC    Best        34,706    43,321    52,061    64,864    115,305
       ARB         1.0105    1.0107    1.0287    1.0130    1.0251
       Worst       34,650    43,243    51,711    64,752    114,586
       ARW         1.0121    1.0126    1.0356    1.0148    1.0315
       Mean        34,675    43,275    51,876    64,806    114,922
       Std         16.00     18.74     79.72     25.45     123.59
CS     Best        34,975    43,708    52,848    65,549    116,597
       ARB         1.0027    1.0018    1.0133    1.0025    1.0137
       Worst       34,621    43,215    51,617    64,749    114,560
       ARW         1.0129    1.0132    1.0375    1.0148    1.0318
       Mean        34,676    43,326    51,838    64,932    114,879
       Std         65.25     143.50    260.46    245.06    428.93
DE     Best        34,629    43,251    51,900    64,770    114,929
       ARB         1.0127    1.0124    1.0318    1.0145    1.0285
       Worst       34,549    43,140    51,289    64,620    114,199
       ARW         1.0151    1.0150    1.0441    1.0169    1.0350
       Mean        34,588    43,187    51,547    64,692    114,462
       Std         20.93     23.94     123.67    35.66     160.77
GA     Best        34,585    43,172    51,460    64,769    114,539
       ARB         1.0140    1.0142    1.0407    1.0145    1.0320
       Worst       34,361    42,901    50,112    64,315    112,681
       ARW         1.0206    1.0206    1.0687    1.0217    1.0490
       Mean        34,476    43,049    50,945    64,535    113,674
       Std         60.91     74.36     281.41    85.75     405.23
MBO    Best        34,850    43,487    52,720    65,144    116,466
       ARB         1.0063    1.0069    1.0158    1.0087    1.0149
       Worst       34,724    43,349    52,185    64,941    115,273
       ARW         1.0099    1.0101    1.0262    1.0118    1.0254
       Mean        34,795    43,425    52,449    65,041    115,998
       Std         31.41     31.78     111.26    48.66     248.70
GMBO   Best        35,069    43,786    53,426    65,708    116,496
       ARB         1.0000    1.0000    1.0024    1.0000    1.0146
       Worst       35,052    43,781    52,376    65,625    114,761
       ARW         1.0005    1.0001    1.0225    1.0013    1.0300
       Mean        35,064    43,784    53,167    65,666    115,718
       Std         4.04      1.57      300.90    18.48     492.92
Table 10. Performance comparison on five large-scale strongly correlated 0-1 KP instances.

                   KP11      KP12      KP13      KP14      KP15
DP     Opt         40,167    49,443    60,640    74,932    99,683
       Time (s)    0.793     1.123     1.200     1.971     2.232
ABC    Best        40,127    49,390    60,567    74,822    99,523
       ARB         1.0010    1.0011    1.0012    1.0015    1.0016
       Worst       40,107    49,363    60,540    74,792    99,490
       ARW         1.0015    1.0016    1.0017    1.0019    1.0019
       Mean        40,116    49,376    60,554    74,805    99,506
       Std         4.52      5.61      5.54      6.85      7.29
CS     Best        40,127    49,393    60,559    74,837    99,517
       ARB         1.0010    1.0010    1.0013    1.0013    1.0017
       Worst       40,096    49,353    60,533    74,779    99,473
       ARW         1.0018    1.0018    1.0018    1.0020    1.0021
       Mean        40,108    49,364    60,543    74,794    99,489
       Std         6.59      6.80      5.39      9.19      8.19
DE     Best        40,137    49,363    60,545    74,778    99,501
       ARB         1.0007    1.0016    1.0016    1.0021    1.0018
       Worst       40,087    49,323    60,498    74,737    99,436
       ARW         1.0020    1.0024    1.0023    1.0026    1.0025
       Mean        40,119    49,340    60,518    74,759    99,459
       Std         10.19     8.31      10.16     10.23     14.03
GA     Best        40,069    49,333    60,520    74,766    99,461
       ARB         1.0024    1.0022    1.0020    1.0022    1.0022
       Worst       39,930    49,231    60,391    74,606    99,305
       ARW         1.0059    1.0043    1.0041    1.0044    1.0038
       Mean        40,023    49,287    60,451    74,689    99,382
       Std         31.12     29.76     29.87     37.20     38.42
MBO    Best        40,137    49,393    60,580    74,849    99,573
       ARB         1.0007    1.0010    1.0010    1.0011    1.0011
       Worst       40,102    49,363    60,539    74,778    99,496
       ARW         1.0016    1.0016    1.0017    1.0021    1.0019
       Mean        40,119    49,379    60,562    74,822    99,536
       Std         7.18      9.94      10.77     14.70     15.63
GMBO   Best        40,167    49,442    60,630    74,852    99,553
       ARB         1.0000    1.0000    1.0002    1.0011    1.0013
       Worst       40,147    49,371    60,540    74,792    99,503
       ARW         1.0005    1.0015    1.0017    1.0019    1.0018
       Mean        40,162    49,425    60,604    74,825    99,534
       Std         5.11      11.58     20.88     12.00     14.14
Table 11. Rank sum tests for GMBO with the other five methods on KP1–KP15.

GMBO vs.       ABC   CS    DE    GA    MBO
KP1            1     1     1     1     1
KP2            1     1     1     1     0
KP3            1     1     1     1     1
KP4            1     1     1     1     1
KP5            1     1     1     1     −1
KP6            1     1     1     1     1
KP7            1     1     1     1     1
KP8            1     1     1     1     1
KP9            1     1     1     1     1
KP10           1     1     1     1     −1
KP11           1     1     1     1     −1
KP12           1     1     1     1     1
KP13           1     1     1     1     1
KP14           1     1     1     1     0
KP15           1     1     1     1     0
Count of 1     15    15    15    15    9
Count of 0     0     0     0     0     3
Count of −1    0     0     0     0     3
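Each entry of Table 11 summarizes a pairwise Wilcoxon rank-sum test [48] between GMBO and one competitor on a single instance. The sketch below shows one way such entries can be generated; the 5% significance level and the use of the mean profit to decide the sign are assumptions of this illustration, with 1, −1, and 0 denoting significantly better, significantly worse, and no significant difference, respectively.

# A minimal sketch, assuming a two-sided rank-sum test at alpha = 0.05.
from statistics import mean
from scipy.stats import ranksums

def compare(gmbo_runs, other_runs, alpha=0.05):
    _, p_value = ranksums(gmbo_runs, other_runs)
    if p_value >= alpha:
        return 0                         # difference not statistically significant
    return 1 if mean(gmbo_runs) > mean(other_runs) else -1

# Hypothetical run data for illustration only.
gmbo = [40684, 40650, 40641, 40620, 40700]
abc  = [39816, 39700, 39650, 39600, 39542]
print(compare(gmbo, abc))   # -> 1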
Table 12. Rankings of six algorithms based on the best values.

            ABC    CS     DE     GA   MBO    GMBO
KP1         4      2      5      6    3      1
KP2         4      1      5      6    2      3
KP3         4      3      5      6    2      1
KP4         4      2      5      6    3      1
KP5         3      4      5      6    1      2
KP6         4      2      5      6    3      1
KP7         4      2      5      6    3      1
KP8         4      2      5      6    3      1
KP9         4      2      5      6    3      1
KP10        4      1      5      6    3      2
KP11        4      4      2      6    2      1
KP12        4      2      5      6    2      1
KP13        3      4      5      6    2      1
KP14        4      3      5      6    2      1
KP15        3      4      5      6    1      2
Mean rank   3.80   2.53   4.80   6    2.33   1.33
Table 13. Rankings of six algorithms based on the mean values.

            ABC    CS     DE     GA   MBO    GMBO
KP1         3      4      5      6    2      1
KP2         3      4      5      6    1      2
KP3         3      4      5      6    2      1
KP4         3      4      5      6    2      1
KP5         3      4      5      6    1      2
KP6         4      3      5      6    2      1
KP7         4      3      5      6    2      1
KP8         3      4      5      6    2      1
KP9         4      3      5      6    2      1
KP10        3      4      5      6    1      2
KP11        4      5      3      6    2      1
KP12        3      4      5      6    2      1
KP13        3      4      5      6    2      1
KP14        3      4      5      6    2      1
KP15        3      4      5      6    1      2
Mean rank   3.27   3.87   4.87   6    1.73   1.27
Table 14. Rankings of six algorithms based on the worst values.

            ABC    CS     DE   GA   MBO    GMBO
KP1         3      4      5    6    2      1
KP2         3      4      5    6    2      1
KP3         3      4      5    6    1      2
KP4         3      4      5    6    2      1
KP5         3      4      5    6    1      2
KP6         3      4      5    6    2      1
KP7         3      4      5    6    2      1
KP8         3      4      5    6    2      1
KP9         3      4      5    6    2      1
KP10        3      4      5    6    1      2
KP11        2      4      5    6    3      1
KP12        2      4      5    6    2      1
KP13        1      4      5    6    3      1
KP14        1      3      5    6    4      1
KP15        3      4      5    6    2      1
Mean rank   2.60   3.93   5    6    2.07   1.20
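The rankings in Tables 12–14 and their mean ranks can be recomputed from the corresponding Best, Mean, and Worst values in Tables 8–10. Tied values appear to share the smallest available rank (e.g., KP11 in Table 12), so the sketch below assumes "competition" ranking of the negated profits; the instance shown is only an illustration.

# A minimal sketch under the tie-handling assumption described above.
import numpy as np
from scipy.stats import rankdata

algorithms = ["ABC", "CS", "DE", "GA", "MBO", "GMBO"]
# Best profits on KP11 taken from Table 10 (in general, one row per instance).
best = np.array([[40127, 40127, 40137, 40069, 40137, 40167]])

ranks = np.vstack([rankdata(-row, method="min") for row in best])
print(dict(zip(algorithms, ranks[0])))            # {'ABC': 4.0, 'CS': 4.0, 'DE': 2.0, ...}
print(dict(zip(algorithms, ranks.mean(axis=0))))  # mean rank over all instances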
Table 15. Performance comparison of three algorithms on large-scale 0-1 KP instances.

No.     GMBO                              MFO                               MS
        Best      Mean      Std           Best      Mean      Std           Best      Mean      Std
KP1     40,684    40,641    40.09         40,538    39,976    309.00        40,242    40,101    56.37
KP2     49,992    49,732    116.12        50,590    50,200    384.72        50,056    49,790    79.57
KP3     61,764    61,430    379.76        61,836    61,238    608.78        61,059    60,721    101.80
KP4     76,929    76,691    267.90        77,007    76,353    656.36        75,716    75,505    95.94
KP5     99,898    99,424    200.38        102,276   101,475   781.66        100,348   100,036   120.08
KP6     35,069    35,064    4.04          35,069    34,952    116.84        34,850    34,799    20.64
KP7     43,786    43,784    1.57          43,784    43,630    132.21        43,474    43,424    20.34
KP8     53,426    53,167    300.90        53,552    53,048    556.60        52,637    52,489    73.73
KP9     65,708    65,666    18.48         65,692    65,421    253.29        65,093    65,025    27.59
KP10    116,496   115,718   492.92        118,183   117,381   838.07        116,283   115,937   117.92
KP11    40,167    40,162    5.11          40,157    40,142    15.48         40,137    40,127    5.58
KP12    49,442    49,425    11.58         49,433    49,411    15.64         49,403    49,390    7.10
KP13    60,630    60,604    20.88         60,581    60,557    68.28         60,581    60,571    9.24
KP14    74,852    74,852    12.00         74,910    74,874    32.49         74,852    74,833    7.71
KP15    99,553    99,534    14.14         99,643    99,602    37.10         99,572    99,546    9.28
Total   8         10        5             8         5         0             0         0         10
