Article

Multi-Objective Neighborhood Search Algorithm Based on Decomposition for Multi-Objective Minimum Weighted Vertex Cover Problem

Shuli Hu, Xiaoli Wu, Huan Liu, Yiyuan Wang, Ruizhi Li and Minghao Yin
1 School of Computer Science and Information Technology, Northeast Normal University, Changchun 130117, China
2 School of Management Science and Information Engineering, Jilin University of Finance and Economics, Changchun 130117, China
3 Key Laboratory of Applied Statistics of MOE, Northeast Normal University, Changchun 130024, China
* Authors to whom correspondence should be addressed.
Sustainability 2019, 11(13), 3634; https://doi.org/10.3390/su11133634
Submission received: 31 May 2019 / Revised: 21 June 2019 / Accepted: 24 June 2019 / Published: 2 July 2019

Abstract

The multi-objective minimum weighted vertex cover problem aims to minimize the sums of several different vertex weight types simultaneously. In this paper, we focus on the bi-objective minimum weighted vertex cover and propose a multi-objective algorithm that integrates iterated neighborhood search with a decomposition technique to solve this problem. First, we adopt the decomposition method to divide the multi-objective problem into several scalar optimization sub-problems. Meanwhile, to find more potentially optimal solutions, we design a mixed score function based on the problem structure, which is applied in the initialization procedure and the neighborhood search. During the neighborhood search, three operators (Add, Delete, Swap) explore the search space effectively. We performed numerical experiments on many instances, and the results show the effectiveness of our new algorithm (combining decomposition and neighborhood search with the mixed score) on several experimental metrics. We compared our experimental results with the classical multi-objective algorithm non-dominated sorting genetic algorithm II (NSGA-II); our algorithm clearly provides much better results than the comparison algorithm on the different metrics.

1. Introduction

The task of optimization is to find the best possible solutions, as measured by the fitness expressed by the problem's objective function, subject to some constraints. For single-objective problems, there is only one goal to be optimized and the aim is to find the best possible solution or a good approximation of it. However, many real-world problems involve multiple metrics of performance, i.e., objectives. In such cases, we must optimize multiple goals simultaneously. Such problems are called Multi-objective Optimization Problems (MOPs) [1,2]. For scientists and engineers, the MOP is an important research topic, not only because of the nature of such problems but also because many open problems remain in this area [3,4]. However, some ambiguity remains because there is no accepted definition of "optimum" for a MOP as there is for a single-objective problem. Consequently, the performance of different methods cannot be compared as directly as in the single-objective case, which is the main bottleneck restricting the development of MOP research [5].
In past years, many surveys of multi-objective optimization techniques have been carried out from the mathematical programming perspective. For example, in [6], Pareto optimality is reviewed in connection with production theory, programming and economics. Lieberman [7] analyzed 17 multi-objective mathematical programming methods, paying attention to their distinctive features and issues of computational feasibility. Fishburn [8] reviewed the area of multi-objective evaluation theories and provided a perspective on how it fits in with other areas. In [9], Evans discussed the advantages and disadvantages of general approaches to multi-objective mathematical programming and gave an overview of specific solution techniques. A popular method for solving Constraint Optimization Problems (COPs) is Branch and Bound (BB). Mini-Bucket Elimination (MBE) is a powerful mechanism for lower bound computation and has been extended to multi-objective optimization problems (MO-MBE) [10]. In [11], the authors embedded MO-MBE in a multi-objective branch and bound search to compute a lower bound on the frontier of the problem. In practice, because of the limitations of branch and bound algorithms, it is hard to apply these methods to real-world problems. With the development of multi-objective optimization, it was recognized that evolutionary algorithms may be appropriate for solving multi-objective optimization problems [12,13]. In a population, different individuals can explore solutions in different directions concurrently, finally merging the effective information into the solution set. Moreover, evolutionary algorithms can handle complex problem features that often arise in multi-objective problems, such as discontinuities and disjoint feasible regions. Therefore, most current algorithms for multi-objective problems are based on evolutionary methods; for example, in [14], an improved multi-objective evolutionary algorithm is proposed for the vehicle routing problem with time windows. Carvalho [15] presented a multi-objective evolutionary algorithm for the optimal design of Yagi-Uda antennas. Meanwhile, to solve multi-objective workflow scheduling in the cloud, Zhu [16] also used an evolutionary approach to design a multi-objective algorithm. More recently, Zhang [17] proposed a popular method for solving MOPs, the Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D), which decomposes the multi-objective optimization problem into a number of scalar optimization sub-problems. Because of the decomposition technique, MOEA/D is more efficient than general evolutionary algorithms. Many researchers have since paid attention to it, and many works based on it have been published [18,19].
Generally, the aim of a MOP is not to find a single solution but a solution set consisting of all non-dominated solutions. The solutions in this set are called Pareto optimal solutions, following the concept of Pareto optimality [1,2]. We now give the definition of "dominate". For two decision vectors x and x′ of a minimization multi-objective problem, x dominates x′ (denoted x ≺ x′) if f_i(x) ≤ f_i(x′) for every objective i and f_j(x) < f_j(x′) for at least one objective j. A decision vector x is said to be Pareto optimal only if there exists no decision vector x′ ∈ Ω such that x′ ≺ x. The Pareto front is the set of objective vectors of all Pareto optimal solutions. The shape of this curve may vary with the problem: generally, the curve of a maximization multi-objective problem is convex, while the curve of a minimization multi-objective problem is concave. However, the curve of a discrete multi-objective problem may not be as smooth as that of a continuous problem.
As is well known, minimum vertex cover is a core optimization problem that has been widely applied in areas such as industrial machine assignment, network security, and pattern recognition [20,21]. The Minimum Weighted Vertex Cover Problem (MWVCP) is a generalized version of the Minimum Vertex Cover Problem (MVCP) [22,23]. Many carefully crafted algorithms have been proposed for the MWVCP; for example, Li [24] proposed an efficient local search framework for it. However, real-world problems often involve more than one decision criterion, and a vertex may carry more than one associated weight. This gives rise to the Multi-objective Minimum Weighted Vertex Cover Problem (MMWVCP): each vertex is associated with several weights, and we need to find a vertex cover that minimizes the sum of each weight type simultaneously. Although many researchers have studied the MWVCP and many algorithms have been successfully applied to it, few algorithms have been proposed for the MMWVCP. To our knowledge, the work in [11] is the first to analyze a lower bound on the frontier of the MMWVCP, and we are the first to propose a method for solving the MMWVCP. Researchers have also considered different objectives for the vertex cover problem. In [25], the authors simultaneously minimized the total weight of the vertexes in the vertex cover and the LP-value, which is the value of the optimal LP solution for the graph obtained by removing the vertexes in the vertex cover and the corresponding covered edges. Because the objectives are different, the method in [25] cannot be applied to solve the multi-objective minimum weighted vertex cover problem.
Our main contribution is an effective algorithm for the multi-objective minimum weighted vertex cover. First, we adopt the decomposition technique to decompose the multi-objective optimization problem into several scalar optimization sub-problems. Thus, many of the strategies used in algorithms for the minimum weighted vertex cover problem can be introduced into our algorithm. We combine the degree-based score function Dscore with the weight-based score Wscore and design the score function WDscore, which considers the number of vertexes and the sum of vertex weights simultaneously. The new score function is applied in the initialization phase and the iterated neighborhood search. To balance the quality and diversity of the population, the initialization procedure constructs a restricted candidate list based on the WDscore of the candidate vertexes. Meanwhile, for each individual in the population, we perform an iterated neighborhood search to improve the current solution. In the neighborhood search, at each step we execute one of three operators, Delete, Swap, or Add, according to the WDscore values. Finally, a multi-objective algorithm based on iterated neighborhood search and decomposition is proposed to solve the MMWVCP. For simplicity, we focus on two objectives, i.e., the Bi-objective Minimum Weighted Vertex Cover Problem (BMWVCP).
In the next section, we introduce some background on multi-objective optimization problems and the decomposition technique, which clarifies how the MOEA/D algorithm works. In Section 3, our new algorithm and its main components are described in detail. We present the results of our experiments on benchmark instances in Section 4. Finally, we draw conclusions and outline some ideas for further research.

2. Preliminaries

2.1. Multi-Objective Minimum Weighted Vertex Cover

The definition of a multi-objective optimization problem (MOP) can be formulated as the following Equation (1):
minimize F(x) = (f_1(x), f_2(x), ..., f_m(x))^T
subject to x ∈ Ω        (1)
where Ω is the decision (variable) space and R^m is the objective space. The function F: Ω → R^m consists of m real-valued objective functions.
Before introducing the multi-objective minimum weighted vertex cover problem, we give some relevant background knowledge. Given an undirected graph G(V, E), where V = {1, 2, ..., n} is the vertex set and E = {1, 2, ..., z} is the edge set, the Vertex Cover Problem (VCP) consists of finding a subset of vertexes Sub ⊆ V such that, for every edge (u, v) ∈ E, either u ∈ Sub or v ∈ Sub. The Minimum Vertex Cover Problem (MVCP) is to find a vertex cover of minimum size. In the weighted version, each vertex v is associated with a weight w(v), and the Minimum Weighted Vertex Cover Problem (MWVCP) is to find a vertex cover Sub with minimum f(Sub) = Σ_{v ∈ Sub} w(v).
Based on the above notions, we give the definition of Multi-objective Minimum Weighted Vertex Cover Problem (MMWVCP) as follows:
Definition 1.
(Multi-objective Minimum Weighted Vertex Cover Problem, MMWVCP) Given an undirected weighted graph G(V, E, w), each vertex v has m associated weights w_1(v), w_2(v), ..., w_m(v), and the task is to minimize the m objective functions f_i(S) = Σ_{v: S[v]=1} w_i(v) (i = 1, 2, ..., m). The MMWVCP can be formally defined as follows:
minimize F(S) = (f_1(S), ..., f_m(S))^T, where f_i(S) = Σ_{v: S[v]=1} w_i(v)        (2)
where S is a binary vector; in this paper we call S a solution or individual. When S[v] equals 1, we say the vertex v is in the solution S; otherwise, the vertex v is not in the solution S. When a vertex is added to the solution, we set S[v] = 1; similarly, we set S[v] = 0 when deleting a vertex from the solution S. For the BMWVCP, each vertex is associated with two weights and there are two objectives to optimize. Therefore, the BMWVCP can be seen as the special case of the MMWVCP with m = 2.
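To make the encoding concrete, the following C++ sketch shows one possible representation of a bi-weighted graph together with the evaluation of the two objectives and the cover-feasibility test. The structure and function names are illustrative assumptions, not the authors' implementation.

```cpp
#include <utility>
#include <vector>

// Illustrative data structures for the bi-objective case (m = 2).
struct BiWeightedGraph {
    int n;                                   // number of vertexes
    std::vector<std::pair<int, int>> edges;  // undirected edges (u, v)
    std::vector<double> w1, w2;              // the two weights of each vertex
};

// Evaluates f_1(S) and f_2(S): each objective sums its weight over vertexes with S[v] = 1.
std::pair<double, double> objectives(const BiWeightedGraph& g, const std::vector<int>& S) {
    double f1 = 0.0, f2 = 0.0;
    for (int v = 0; v < g.n; ++v)
        if (S[v] == 1) { f1 += g.w1[v]; f2 += g.w2[v]; }
    return {f1, f2};
}

// A solution is feasible only if every edge has at least one endpoint in S.
bool isVertexCover(const BiWeightedGraph& g, const std::vector<int>& S) {
    for (const auto& e : g.edges)
        if (S[e.first] == 0 && S[e.second] == 0) return false;
    return true;
}
```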

2.2. Decomposition of Multi-Objective Optimization

At present, most algorithms for multi-objective problems treat the MOP as a whole and therefore must consider every objective while searching the solution space. However, if we associate each individual in the population with a particular scalar optimization problem, we can compare objective values directly and optimize a single objective. In addition, many conventional and useful strategies that were initially developed for scalar optimization problems cannot be embedded directly in non-decomposition algorithms. Hence, Zhang [17] proposed a decomposition technique for multi-objective problems that decomposes the original MOP into several scalar optimization sub-problems, combined it with an evolutionary algorithm, and named the resulting algorithm MOEA/D. At each generation, the individuals in the population are the best solutions found so far for their respective sub-problems. Inspired by MOEA/D, we use weight vectors to decompose the multi-objective minimum weighted vertex cover problem into a number of single-objective minimum weighted vertex cover sub-problems. In this way, many approaches that have been successfully used to solve the minimum weighted vertex cover problem can be applied in our algorithm. We now describe the decomposition method used in our algorithm.
As mentioned above, given an objective vector F(x) = (f_1(x), ..., f_m(x))^T, we introduce an associated weight vector λ = (λ_1, λ_2, ..., λ_m) with Σ_{i=1}^{m} λ_i = 1 and λ_i ≥ 0. We then consider the following scalar optimization problem:
g(x | λ) = Σ_{i=1}^{m} λ_i · f_i(x)        (3)
When the weight vector λ is non-negative, Equation (3) has an optimal solution that is Pareto optimal for Equation (1). According to Equation (3), the multi-objective optimization problem can therefore be decomposed into several sub-problems, one for each weight vector. Generally, the number of sub-problems is equal to the size of the population.
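As a minimal illustration of this decomposition, the sketch below generates bi-objective weight vectors and evaluates the weighted-sum sub-problem of Equation (3). The uniform spacing rule for the weight vectors is an assumption; the paper only states that the vectors are evenly distributed.

```cpp
#include <cstddef>
#include <vector>

// Evenly spaced weight vectors for the bi-objective case: the ith sub-problem
// uses lambda = (i/(Pop-1), 1 - i/(Pop-1)).
std::vector<std::vector<double>> makeWeightVectors(int pop) {
    std::vector<std::vector<double>> lambdas;
    for (int i = 0; i < pop; ++i) {
        double l1 = (pop > 1) ? static_cast<double>(i) / (pop - 1) : 1.0;
        lambdas.push_back({l1, 1.0 - l1});
    }
    return lambdas;
}

// Weighted-sum scalarization of Equation (3): g(x | lambda) = sum_i lambda_i * f_i(x).
double scalarized(const std::vector<double>& f, const std::vector<double>& lambda) {
    double g = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) g += lambda[i] * f[i];
    return g;
}
```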

3. Multi-Objective Iterated Neighborhood Search Based on Decomposition Algorithm (MONSD)

In this section, we first present the framework of our new algorithm, which is based on neighborhood search and decomposition. Subsequently, its three main components, namely the score function, the initialization procedure and the iterated neighborhood search, are described in detail.

3.1. Framework of MONSD

Our algorithm MONSD (Multi-objective Neighborhood Search based on Decomposition) is based on decomposition and integrates a neighborhood search tailored to the features of the multi-objective minimum weighted vertex cover. The framework of MONSD is outlined in Algorithm 1. First, we generate Pop (the population size) evenly distributed weight vectors λ = {λ^1, λ^2, ..., λ^Pop}; that is, the multi-objective minimum weighted vertex cover is decomposed into Pop sub-problems, where the ith sub-problem is defined by Equation (3) with λ = λ^i. Then, MONSD calls the Init_greedy procedure to greedily produce Pop individuals that compose the current population P_c. The population P_non stores the non-dominated individuals found so far. Subsequently, the algorithm executes a series of iterations until the stopping criterion is met. In each iteration, MONSD applies neighborhood search to improve the solutions in P_c. If a solution is improved, the boolean flag isImproved is set to true; isImproved marks whether the solution has been improved by the neighborhood search or by the perturbation process. Otherwise, we apply perturbation to the current solution: we randomly delete several vertexes and then randomly add vertexes to guarantee that the solution remains a vertex cover. The purpose of the perturbation is to increase diversity and explore more of the solution space. If the solution is improved after perturbing, we also set isImproved to true. At the end of the iteration, if isImproved is true, the individual S_i is replaced by the improved solution S_i′. However, we always update the population P_non regardless of whether S_i is improved: because the original multi-objective problem has been decomposed, a sub-problem's scalar value cannot exactly reflect the original objectives, and even when the sub-problem value is not better, the original multi-objective values may still be better than those of the solutions in the non-dominated population. In the next subsections, we describe the components of MONSD in detail.
Algorithm 1: The Multi-objective Neighborhood Search Algorithm based on Decomposition (MONSD).
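Since Algorithm 1 is only available as a figure here, the following C++ sketch reconstructs its control flow from the description above. The helper routines (initGreedy, neighborhoodSearch, perturb, updateNonDominated) and their signatures are assumptions made for illustration, not the authors' code.

```cpp
#include <functional>
#include <vector>

using Solution = std::vector<int>;  // binary incidence vector, as in Section 2.1

// Control flow of the MONSD framework reconstructed from the prose of Section 3.1.
void MONSD(int pop,
           const std::function<bool()>& stopCriterionMet,
           const std::function<std::vector<Solution>(int)>& initGreedy,
           const std::function<bool(Solution&, int)>& neighborhoodSearch,  // true if improved
           const std::function<bool(Solution&, int)>& perturb,             // true if improved
           const std::function<void(const Solution&)>& updateNonDominated) {
    std::vector<Solution> Pc = initGreedy(pop);       // one individual per sub-problem
    while (!stopCriterionMet()) {
        for (int i = 0; i < pop; ++i) {
            Solution candidate = Pc[i];
            bool isImproved = neighborhoodSearch(candidate, i);
            if (!isImproved)
                isImproved = perturb(candidate, i);   // random deletions plus repair to stay a cover
            if (isImproved)
                Pc[i] = candidate;                    // keep the improved individual S_i'
            updateNonDominated(candidate);            // P_non is always updated
        }
    }
}
```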

3.2. The Score Function

After decomposition, each sub-problem can be seen as a single-objective minimum weighted vertex cover problem, for which many strategies have been successfully used to explore the search space. Among these strategies, the score function plays a critically important role, since it guides the search towards promising solutions. For the minimum vertex cover problem and the minimum weighted vertex cover problem, many kinds of score function have been applied; there are two basic kinds, degree-based and weight-based. In our algorithm, we define a new score function adapted to the new problem, which is applied in the initialization procedure and the neighborhood search. Our new score function mixes Dscore, based on the degree, and Wscore, based on the weight. We give their definitions as follows.
Our Dscore function is similar in nature to the degree of a vertex, hence its name. It is defined in Equation (4), where e(v) is the set of edges incident to vertex v. If the vertex is not in the solution, the score value indicates how many edges would change from uncovered to covered by adding the vertex to the solution. Conversely, if the vertex is in the solution, the score value indicates how many edges would become uncovered by deleting the vertex from the solution; the value is always non-negative. Accordingly, in the initialization phase, the score value of a vertex is initialized to the number of edges incident to it.
Dscore[v] = |{e ∈ e(v) : e is covered only by v}|   if v is in the solution,
Dscore[v] = |{e ∈ e(v) : e is not covered}|   if v is not in the solution.        (4)
When a vertex is added to the solution, its neighbors' Dscore values decrease by one; conversely, when a vertex is deleted from the solution, its neighbors' Dscore values increase by one. Considering the Dscore values, when selecting vertexes to add we prefer those with bigger Dscore, which keeps the number of vertexes in the solution small. Conversely, when selecting vertexes to remove from the solution, we prefer those with smaller Dscore, so that fewer edges become uncovered.
Because we have decomposed the multi-objective minimum weighted vertex cover into several sub-problems, we define a weighted score function to evaluate the effect of one vertex on a single sub-problem. For one sub-problem, the weighted score is defined in Equation (5). Once the vector λ is fixed, the Wscore of each vertex is fixed and remains constant during the neighborhood search. Considering the Wscore values, when selecting vertexes to add to the solution we prefer those with smaller Wscore, which keeps the objective value small; conversely, when selecting vertexes to remove from the solution, we prefer those with bigger Wscore.
Wscore[v] = Σ_{i=1}^{m} λ_i · w_i(v)        (5)
From the above definitions of Dscore and Wscore, we can see that Dscore motivates the algorithm to use fewer vertexes to optimize the objective, while Wscore motivates the algorithm to select vertexes with smaller weight. Based on these definitions, we design a more effective score function combining Dscore with Wscore, defined in Equation (6).
WDscore[v] = Dscore[v] / Wscore[v]        (6)
According to the definition of WDscore, to optimize the objective effectively, we select vertexes with bigger WDscore to add to the solution and vertexes with smaller WDscore to remove from the solution. The experimental results show that WDscore is more effective than Dscore and Wscore.
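A minimal sketch of Equations (4)-(6) is given below; for clarity it recomputes Dscore from scratch, whereas the actual algorithm maintains the scores incrementally, and it assumes strictly positive weights (the benchmark weights lie in [20, 120]).

```cpp
#include <cstddef>
#include <vector>

// adj[v] lists the neighbors of v, w[v][i] is the ith weight of v,
// S is the binary solution vector and lambda the sub-problem weight vector.

// Equation (5): Wscore[v] = sum_i lambda_i * w_i(v).
double Wscore(const std::vector<std::vector<double>>& w,
              const std::vector<double>& lambda, int v) {
    double s = 0.0;
    for (std::size_t i = 0; i < lambda.size(); ++i) s += lambda[i] * w[v][i];
    return s;
}

// Equation (4): edges covered only by v (if v is in S) or uncovered edges at v (if not).
int Dscore(const std::vector<std::vector<int>>& adj, const std::vector<int>& S, int v) {
    int score = 0;
    for (int u : adj[v]) {
        if (S[v] == 1 && S[u] == 0) ++score;  // edge (v, u) is covered only by v
        if (S[v] == 0 && S[u] == 0) ++score;  // edge (v, u) is currently uncovered
    }
    return score;
}

// Equation (6): the mixed score.
double WDscore(const std::vector<std::vector<int>>& adj, const std::vector<int>& S,
               const std::vector<std::vector<double>>& w,
               const std::vector<double>& lambda, int v) {
    return Dscore(adj, S, v) / Wscore(w, lambda, v);
}
```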
We use the example in Figure 1 to illustrate the differences between the score functions. There are two small graphs with five vertexes and four edges; vertex v3 has a different weight in the two graphs. Such graphs may be components of a bigger graph. Each vertex is associated with a triple (Wscore, Dscore, WDscore). Because Wscore remains constant once λ is fixed, we assign each vertex an integer in [20, 120] as its Wscore. The triple of each vertex is initialized as shown in Figure 1. In the graph of Figure 1a, v3 has the bigger degree and the bigger weight. If we use Dscore, we will choose v3 to cover the four edges. However, the weight of v3 is greater than the total weight of the remaining four vertexes, so we should instead add v1, v2, v4 and v5 to the solution to cover the edges. Using Wscore or WDscore avoids this mistake. Initially, the WDscore of v5 is smaller than that of v3, but the WDscore of v5 decreases further after v1, v2 or v4 is inserted. Similarly, Wscore loses its effectiveness in the second graph. We can see that WDscore performs well in both situations, whereas Dscore and Wscore each lose their advantage in one of them.

3.3. The Procedure of Initializing

The initial population is essential, since it can lead the algorithm in different search directions. On the one hand, we should guarantee the population quality, which strongly influences the final quality of the Pareto front. On the other hand, to find more non-dominated solutions, we should guarantee population diversity. To balance quality and diversity, we construct an RCL (Restricted Candidate List), a mechanism that has been widely used in local search and approximation algorithms [26,27]. The initialization procedure, called Init_greedy, is outlined in Algorithm 2.
Algorithm 2: Init_greedy(Pop).
After executing Init_greedy, Pop individuals compose the population P_c. To generate one individual, Init_greedy first assigns the score value of every vertex according to Equations (4)-(6). Then, all edges are added to the uncovered-edge list. Subsequently, the procedure constructs the RCL. First, it finds the maximum score (maxscore) and minimum score (minscore) among the vertexes not in S_IndiNum. Second, Init_greedy sets the limit score to minscore + α × (maxscore − minscore). A vertex is added to the RCL only if its score is greater than the limit score. The parameter α controls the greediness of the procedure; a greater α indicates greater greediness. Then, Init_greedy randomly selects one vertex from the RCL and adds it to S_IndiNum. At the end of each iteration, we check whether S_IndiNum is a vertex cover and is different from the individuals already in P_c. The procedure does not stop until it has generated Pop distinct individuals. Algorithm 2 presents the initialization procedure for the version of MONSD with WDscore (MONSD_WDscore); for the versions with Dscore or Wscore, the score function is replaced accordingly.
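Because Algorithm 2 also appears only as a figure, the sketch below reconstructs the RCL-based construction of a single individual from the description above. The callable parameters and the fallback used when the RCL is empty are assumptions.

```cpp
#include <algorithm>
#include <functional>
#include <random>
#include <vector>

// Builds one individual greedily: at each step, vertexes whose WDscore exceeds
// minscore + alpha * (maxscore - minscore) form the RCL, and one of them is
// added at random, until the partial solution is a vertex cover.
std::vector<int> buildIndividual(
    int n, double alpha, std::mt19937& rng,
    const std::function<double(const std::vector<int>&, int)>& wdscore,
    const std::function<bool(const std::vector<int>&)>& isVertexCover) {
    std::vector<int> S(n, 0);
    while (!isVertexCover(S)) {
        double maxScore = -1e100, minScore = 1e100;
        for (int v = 0; v < n; ++v) {
            if (S[v] == 1) continue;
            double s = wdscore(S, v);
            maxScore = std::max(maxScore, s);
            minScore = std::min(minScore, s);
        }
        double limit = minScore + alpha * (maxScore - minScore);
        std::vector<int> rcl;                         // restricted candidate list
        for (int v = 0; v < n; ++v)
            if (S[v] == 0 && wdscore(S, v) > limit) rcl.push_back(v);
        if (rcl.empty())                              // boundary case: all scores equal
            for (int v = 0; v < n; ++v) if (S[v] == 0) rcl.push_back(v);
        std::uniform_int_distribution<std::size_t> pick(0, rcl.size() - 1);
        S[rcl[pick(rng)]] = 1;                        // add a random RCL vertex
    }
    return S;
}
```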

3.4. The Iterated Neighborhood Search

The iterated neighborhood search is an important component of our algorithm; it focuses on one solution and explores it deeply to find more promising solutions. A neighborhood is typically defined by a move operator (mv): applying mv to a given solution S transforms S into a neighbor of S, denoted by S′ = S ⊕ mv. The neighborhood N of S can then be defined as N(S) = {S′ : S′ = S ⊕ mv, mv is an operator}. In our neighborhood search, there are three complementary neighborhoods defined by three basic operators. These neighborhoods are explored in a combined manner using a rule that selects the most favorable neighbor, i.e., the one with the best score.
The score function WDscore is used to evaluate the benefit of deleting or adding a vertex, taking both Dscore and Wscore into account. Based on the WDscore mechanism, we design three move operators, denoted Delete, Add and Swap, defined as follows:
Delete(S): The Delete operator deletes one vertex from S while guaranteeing that S remains a vertex cover. If the operator finds an appropriate vertex, it returns the vertex number; otherwise, it returns −1, indicating that the operator is not feasible. From the nature of WDscore, such vertexes are exactly those with WDscore equal to 0. If more than one vertex has WDscore equal to 0, we always select the vertex with the biggest Wscore to ensure the quality of the neighbors of S. The neighborhood defined by the Delete operator is N_1(S) = {S′ : S′ = S ⊕ Delete(v)}. The Delete operator is described in Algorithm 3.
Algorithm 3: Delete(S).
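A minimal sketch of this selection rule follows, assuming the Dscore and Wscore arrays are maintained elsewhere; it is an illustration of the rule described above, not the authors' Algorithm 3.

```cpp
#include <vector>

// Among vertexes in S with Dscore equal to 0 (no edge is covered only by that
// vertex, so its removal keeps S a vertex cover), return the one with the
// biggest Wscore, or -1 if no such vertex exists.
int Delete(const std::vector<int>& S,
           const std::vector<int>& dscore,
           const std::vector<double>& wscore) {
    int best = -1;
    for (int v = 0; v < static_cast<int>(S.size()); ++v) {
        if (S[v] == 1 && dscore[v] == 0 &&
            (best == -1 || wscore[v] > wscore[best]))
            best = v;
    }
    return best;
}
```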
Swap(S): The Swap operator exchanges one vertex (DelVtx) in S with one vertex (AddVtx) not in S while guaranteeing that S remains a vertex cover; it is outlined in Algorithm 4. The vertex pair with the smaller value of Wscore[AddVtx] − Wscore[DelVtx] has the higher priority; if this value is less than 0, the objective value will improve when the swap is executed. Because the swap operator does not change the number of vertexes in the solution, we consider only the direct benefit to the objective of swapping the two vertexes, so we use Wscore to evaluate the vertex pair. As with the Delete operator, an appropriate vertex pair may not exist; in that case, the operator returns (−1, −1), indicating that it is not feasible; otherwise, it returns the pair of vertex numbers. The neighborhood defined by this move operator is N_2(S) = {S′ : S′ = S ⊕ Swap(v, u)}.
Algorithm 4: Swap(S).
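The sketch below illustrates one way to implement this rule on a simple graph: a vertex u in S can be swapped out only if it has exactly one neighbor outside S, which then becomes the vertex to add. This feasibility condition and the restriction to strictly improving swaps are our reading of the description above, not the authors' Algorithm 4.

```cpp
#include <utility>
#include <vector>

// Returns the best (DelVtx, AddVtx) pair, or (-1, -1) if no improving pair exists.
std::pair<int, int> Swap(const std::vector<std::vector<int>>& adj,
                         const std::vector<int>& S,
                         const std::vector<double>& wscore) {
    std::pair<int, int> best(-1, -1);
    double bestDelta = 0.0;                       // accept only strictly improving swaps
    for (int u = 0; u < static_cast<int>(S.size()); ++u) {
        if (S[u] != 1) continue;
        int outside = -1;                         // the single neighbor of u outside S, if any
        bool feasible = true;
        for (int nb : adj[u]) {
            if (S[nb] == 0) {
                if (outside == -1) outside = nb;
                else { feasible = false; break; } // two uncovered edges: no single vertex can replace u
            }
        }
        if (!feasible || outside == -1) continue; // outside == -1 means u is a Delete candidate instead
        double delta = wscore[outside] - wscore[u];
        if (delta < bestDelta) { bestDelta = delta; best = {u, outside}; }
    }
    return best;
}
```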
Add(S): The Add operator adds a vertex not in S. Because S is already a vertex cover before the addition, S is certainly still a vertex cover afterwards. Since Delete and Swap are greedy operators, we randomly select a vertex from the vertexes not in S to increase diversity. The operator returns the selected vertex number. The neighborhood defined by the Add operator is N_3(S) = {S′ : S′ = S ⊕ Add(v)}. The Add operator is described in Algorithm 5.
Algorithm 5: Add(S).
From the definitions of the three operators, we can see that the Delete and Swap moves always decrease the single (scalarized) objective value, while Add always increases it. Considering these features, we create a combined neighborhood. To optimize the objective effectively, when choosing between Delete and Swap we directly compare Wscore[DelVtx1] with Wscore[DelVtx2] − Wscore[AddVtx2] and choose the operator with the bigger value. The iterated neighborhood search is described in Algorithm 6.
Algorithm 6: NeighborhoodSearch(S).
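The operator-selection rule of the combined neighborhood can be sketched as follows; the tie-breaking in favor of Delete and the (DelVtx, AddVtx) ordering of the Swap result are assumptions.

```cpp
#include <utility>
#include <vector>

// One decision step of the combined neighborhood: a feasible Delete removes
// Wscore[delVtx] from the objective, a feasible Swap removes
// Wscore[delVtx] - Wscore[addVtx]; the larger gain wins, and Add is used only
// when neither greedy move is feasible.
enum class Move { DoDelete, DoSwap, DoAdd };

Move chooseMove(int delVtx,                   // result of Delete(S), -1 if infeasible
                std::pair<int, int> swapPair, // (delVtx, addVtx) from Swap(S), (-1, -1) if infeasible
                const std::vector<double>& wscore) {
    if (delVtx == -1 && swapPair.first == -1) return Move::DoAdd;  // add a random vertex
    if (delVtx == -1) return Move::DoSwap;
    if (swapPair.first == -1) return Move::DoDelete;
    double deleteGain = wscore[delVtx];
    double swapGain = wscore[swapPair.first] - wscore[swapPair.second];
    return (deleteGain >= swapGain) ? Move::DoDelete : Move::DoSwap;
}
```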
We use an example to explain how the iterated neighborhood search works. As shown in Figure 2, the graph has 12 vertexes and 19 edges. Each vertex is associated with a triple with the same meaning as in Figure 1. The black vertexes are in the solution, while the white ones are not. At each iteration, one operator is performed and the score values are then updated. The initialization procedure generates an initial solution with objective f(S) = 224 (the order of adding vertexes can be v11, v9, v8, v5, v1, v3, v10, v12). At the first iteration, we perform the Swap operator and the objective becomes 223. At the second iteration, there is no feasible Delete or Swap operator, so we randomly select a vertex to add, such as v4. After updating the score values, we find that two vertexes, v1 and v3, can be deleted; v3 is deleted because of its bigger weight. At the next iteration, there is again no feasible Delete or Swap operator, and v10 is selected and added to the solution. Then, v7 and v10 are deleted in the following two iterations; due to space limitations, we show them as one step in the figure. After deleting v7 and v10, the objective is 212, which is the minimum objective on this graph.
During the neighborhood search, if we only considered Dscore, v1 and v3 would have the same probability of being deleted in the third iteration, even though v3 has the bigger Wscore, which would make the search converge more slowly. In fact, the version MONSD_Wscore, which only considers Wscore, still needs Dscore to judge whether a feasible Delete or Swap operator exists. Therefore, MONSD_Wscore and MONSD_WDscore differ little during the neighborhood search if they start from the same initial solution. The important difference between MONSD_Wscore and MONSD_WDscore lies in the initialization procedure, which plays a critical role in the framework. For example, v2 has a higher priority to be added to the solution in MONSD_Wscore because of its smaller weight, and v6 has the same probability as v12 of being added in MONSD_Dscore even though v6 has the bigger weight. Therefore, we should take both Dscore and Wscore into account in the initialization phase and the neighborhood search if we want to optimize the objectives well. The experimental results also indicate that MONSD_WDscore is more effective than the other two versions.

4. Computational Results

4.1. Benchmark

To evaluate the performance of our algorithm MONSD on the multi-objective minimum weighted vertex cover problem, experiments were carried out on instances taken from [28], which have been widely used for the minimum weighted vertex cover. Given the multi-objective setting, we selected 48 instances, divided into small-scale, moderate-scale and large-scale groups for clarity of presentation. Each instance is an undirected graph with N vertexes and M edges, and each vertex is associated with a weight drawn uniformly at random from the interval [20, 120]. For simplicity, in this study we tested our algorithm on the bi-objective minimum weighted vertex cover problem, so we associated each vertex with a second weight generated in the same way. The instances are named VC-N-M, where N is the number of vertexes and M is the number of edges.

4.2. Experimental Protocol

Our proposed algorithm MONSD was coded in C++ and compiled with GNU G++. We conducted the experiments on a computer with an Intel(R) Xeon(R) CPU E7-4830 at 2.13 GHz. The stopping criterion of our algorithm was a time limit of 500 s on the small and moderate graphs and 1000 s on the large instances. In this study, the maximum number of neighborhood search steps was set to 3000, which is relatively small compared with most algorithms; because our algorithm maintains a population and applies neighborhood search to each individual, we keep the maximum step count small to avoid consuming too much time. We set the population size to 50 and the RCL parameter α to 0.8. The size of P_non was also set to 50. In the implementation, we recorded the number of non-dominated solutions in P_non and found that it did not exceed the population size on any instance; therefore, for the MMWVCP, we consider a P_non size of 50 reasonable.

4.3. Performance Metrics

In this study, we mainly used the following two metrics for performance evaluation:
  • Hyper-volume indicator (I_H): This is a comprehensive metric that evaluates both the convergence and the diversity of an algorithm and yields an intuitive value. For two objectives, I_H is the area covered by a set of non-dominated solutions. The value of I_H is calculated from the objective values, which we normalize into [0, 1]. Let p = (f_1, f_2, ..., f_m) be a point in objective space. We normalize it as f_i′ = (f_i − F_{i,min}) / (F_{i,max} − F_{i,min}), where F_{i,min} and F_{i,max} are the minimum and maximum values of the ith objective over all Pareto fronts obtained by all compared algorithms. Figure 3 shows an example in which (0.2, 0.8) and (0.8, 0.2) are two normalized points and (1.2, 1.2) is the reference point; in this case, I_H is the area enclosed by the dashed lines.
  • Set coverage (C-metric): Let A and B be two final solution sets obtained by different algorithms, each approximating the Pareto front of a MOP. C(A, B) is the percentage of solutions in B that are dominated by at least one solution in A, which can be formally defined as follows (a small implementation sketch is given after this list):
    C(A, B) = |{S′ ∈ B : ∃ S ∈ A, S dominates S′}| / |B|
    C(A, B) = 100% implies that every solution S′ in B is dominated by at least one solution in A. Likewise, C(A, B) = 0 indicates that no solution in B is dominated by any solution in A.
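A direct implementation of this formula for minimization objective vectors might look as follows.

```cpp
#include <vector>

// Pareto dominance test for minimization: a is no worse than b everywhere
// and strictly better in at least one objective.
static bool dominates(const std::vector<double>& a, const std::vector<double>& b) {
    bool strictly = false;
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] > b[i]) return false;
        if (a[i] < b[i]) strictly = true;
    }
    return strictly;
}

// Set coverage C(A, B): fraction of solutions in B dominated by at least one solution in A.
double setCoverage(const std::vector<std::vector<double>>& A,
                   const std::vector<std::vector<double>>& B) {
    int dominatedCount = 0;
    for (const auto& sb : B)
        for (const auto& sa : A)
            if (dominates(sa, sb)) { ++dominatedCount; break; }
    return B.empty() ? 0.0 : static_cast<double>(dominatedCount) / B.size();
}
```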

4.4. Comparison Results with NSGA-II

Non-Dominated Sorting Genetic Algorithm II (NSGA-II) is a popular multi-objective evolutionary algorithm based on an elitist strategy and fast non-dominated sorting [29], and it has been used in many studies [30,31,32]. NSGA-II adopts the fast non-dominated sorting approach, maintains a mating pool by combining parent and child populations, and selects the elitists from the offspring. It has been widely applied to many problems because of its effectiveness. To assess the performance of our new algorithm MONSD, we compared it with NSGA-II. For NSGA-II, we set the population size to 50, and the cut-off time was 500 s on the small and moderate instances and 1000 s on the large instances.
The experimental results are shown in Table 1 and Table 2. In terms of I_H, MONSD performed better than NSGA-II on all instances: NSGA-II obtained I_H greater than 1 on only two instances, whereas MONSD obtained I_H greater than 1 on 31 of the 33 small and moderate instances. Since I_H evaluates both convergence and diversity, the I_H results in Table 1 and Table 2 indicate that our algorithm outperformed NSGA-II on both counts. We then considered the C-metric, which focuses on the quality of the individual solutions in the final non-dominated set. On the small and moderate instances, MONSD obtained 100% on 30 of the 33 instances, and on the remaining three instances NSGA-II did not reach 100% either. Moreover, even on those three instances, MONSD obtained relatively diverse curves when observing the distribution of the final non-dominated sets, as discussed in Section 4.6. On the large instances, as shown in Table 2, MONSD obtained 100% on all 15 instances and thus performed better than NSGA-II in terms of the C-metric.
From the analysis of experimental results, we found that MONSD performed much better than NSGA-II when considering the quality of the final non-dominated sets.

4.5. The Effectiveness of the Score Function

In this section, we present the results of our algorithm with different score functions to demonstrate the effectiveness of the new score function WDscore. We therefore ran MONSD combined with WDscore (MONSD_WDscore), with Wscore (MONSD_Wscore) and with Dscore (MONSD_Dscore). In Table 3 and Table 4, the columns WDscore, Wscore and Dscore indicate the respective versions. There was little difference in the running time of the three versions, so we do not report the times due to space limitations.

4.5.1. Experiments on Small-Scale and Moderate-Scale Instances

In this section, we show the experimental results on the small-scale and moderate-scale instances. We mainly compare the three versions in terms of I_H and the C-metric; the results are summarized in Table 3. In Table 3, the I_H columns show the final results obtained by MONSD_WDscore, MONSD_Wscore and MONSD_Dscore. From the definition of the hyper-volume, a bigger value indicates a better result. MONSD_WDscore performed better than the other versions on all instances except VC-200-2000, where MONSD_WDscore obtained an I_H of 1.23 while MONSD_Wscore obtained 1.25; this gap is small compared with the gaps on the other instances. Overall, MONSD_Dscore performed relatively poorly because it only uses Dscore, which is based on vertex degrees and ignores the weights; it therefore tends to produce solutions with fewer vertexes but bigger weights, so the objectives are not optimized well. The last row lists the average I_H of the different versions, which also indicates that MONSD_WDscore performed much better.
Table 3 also lists, in the C-metric columns, the C-metric values of MONSD_WDscore compared with MONSD_Wscore and MONSD_Dscore; they indicate that MONSD_WDscore was considerably or slightly better than the other versions on most instances. On the instance VC-200-2000, MONSD_WDscore was still a little worse than MONSD_Wscore, as for I_H. In the mixed score function, Dscore is the numerator, which may weaken the effect of Wscore and lead to worse results in some cases. However, when we analyzed the distributions (shown in Section 4.6) of the final sets obtained by the different versions, we found that even on VC-200-2000 MONSD_WDscore remained competitive.

4.5.2. Experiments on Large-Scale Instances

The weighted vertex cover benchmark contains many large-scale instances. Therefore, to better show the performance of the proposed algorithm, we conducted experiments on large-scale instances; the results are listed in Table 4 in terms of I_H and the C-metric. From Table 4, MONSD_WDscore performed much better than MONSD_Wscore and MONSD_Dscore on all instances. The average I_H of MONSD_WDscore was 1.21, compared with 1.04 for MONSD_Wscore and 0.37 for MONSD_Dscore. Meanwhile, on some instances the C-metric of MONSD_WDscore reached 100% against MONSD_Wscore, and on all instances it reached 100% against MONSD_Dscore. Hence, MONSD_WDscore performed better than the other two versions in terms of both the quality of the solutions and the number of non-dominated solutions.

4.5.3. Discussion

  • Why is the new score function more effective than the other two functions?
    For the sub-problem objective, we aim to minimize the sum of the associated weights of the vertexes in the cover. In general, there are two ways to pursue this goal. The first is to minimize the number of vertexes in the solution, which corresponds to Dscore: Dscore evaluates how many edges would change state if a vertex were flipped, and its purpose is to keep fewer vertexes in the solution while covering more edges. The second is to minimize the total weight directly; in the implementation, we prefer vertexes with smaller Wscore, since Wscore measures how much a vertex directly contributes to the objective. However, using only Dscore or only Wscore has drawbacks: it may keep a vertex with a bigger (smaller) Dscore but also a bigger (smaller) Wscore in the solution, whereas we should actually prefer vertexes with a bigger Dscore but a smaller Wscore. Therefore, we combine Dscore and Wscore into WDscore, with Dscore as the numerator and Wscore as the denominator, so that the number of vertexes and the sum of weights are optimized simultaneously. Because WDscore takes both factors into account, it is more effective.
  • How does the neighborhood search converge?
    In Figure 4, we illustrate how the neighborhood search works. First, it explores the Delete(S) and Swap(S) neighborhoods until no neighbor is better than the current solution; if both operators are feasible, we choose the one that brings the greater benefit to the objective. The absence of a better neighbor in the Delete(S) and Swap(S) neighborhoods indicates that the search has converged to some extent. Then, the Add(S) neighborhood is explored. The search cannot improve the objective within Add(S); its purpose is to reach a larger part of the solution space. After the Add(S) phase, the search returns to the Delete(S) and Swap(S) moves. In this way, the search eventually converges.

4.6. The Analysis of Distribution

We analyzed the distributions of the final non-dominated sets obtained by MONSD_WDscore, MONSD_Wscore, MONSD_Dscore and NSGA-II. Due to space limitations, we selected representative instances ranging from small-scale to large-scale and including both sparse and dense graphs. Because the multi-objective minimum weighted vertex cover problem is discrete, the curves are not very smooth, but they still describe the tendency of the problem. Figure 5 shows that MONSD finds a better spread and more solutions across the entire Pareto optimal region than NSGA-II. On the VC-100-100 instance, NSGA-II obtained a better C-metric value than MONSD; however, its solutions did not have a good spread and were clustered in a small region. We also found that MONSD_WDscore performed better than MONSD_Wscore and MONSD_Dscore, and even on the instance VC-200-2000 mentioned above, MONSD_WDscore obtained a relatively diverse and smooth curve.

5. Conclusions and Future Work

In this paper, an effective multi-objective algorithm integrating iterated neighborhood search with a decomposition technique is proposed to solve the multi-objective minimum weighted vertex cover problem. First, a greedy initialization procedure combined with a restricted candidate list mechanism is used to generate the population. Then, based on the features of the problem, we design a score function that explores the search space effectively, and we apply a neighborhood search with three operators to improve the solutions in the population. Finally, we use perturbation to help the search escape local optima. Computational results on benchmark instances verify the performance of our algorithm.
To demonstrate the performance of the proposed algorithm, extensive experiments were carried out on small, moderate and large graphs. We compared our experimental results with the classical multi-objective algorithm non-dominated sorting genetic algorithm II, and our algorithm clearly provides much better results than the comparison algorithm on several metrics. In the future, we expect the proposed algorithm to be applied to other multi-objective problems, such as the multi-objective minimum weighted dominating set problem. We also plan to study the theoretical aspects of the method as well as its performance.

Author Contributions

Software, S.H.; Methodology, R.L. and X.W.; Writing—original draft preparation, S.H. and H.L.; and Writing—review and editing, M.Y. and Y.W.

Funding

This work was supported by Jilin education department 13th five-year science and technology project under Grant Nos. JJKH20190726KJ, JJKH20190756SK, and JJKH20180465KJ and the National Natural Science Foundation of China (NSFC) under Grant Nos. 61502464, 61503074 and 61806082.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MOP    Multi-objective Optimization Problem
COP    Constraint Optimization Problem
BB    Branch and Bound
MBE    Mini-Bucket Elimination
MO-MBE    Multi-objective Optimization Problem with Mini-Bucket Elimination
MOEA/D    Multi-objective Evolutionary Algorithm Based on Decomposition
MVCP    Minimum Vertex Cover Problem
MWVCP    Minimum Weighted Vertex Cover Problem
MMWVCP    Multi-objective Minimum Weighted Vertex Cover Problem
BMWVCP    Bi-objective Minimum Weighted Vertex Cover Problem
MONSD    Multi-objective Iterated Neighborhood Search based on Decomposition Algorithm
RCL    Restricted Candidate List
NSGA-II    Non-dominated Sorting Genetic Algorithm II

References

  1. Hillermeier, C. Nonlinear Multiobjective Optimization: A Generalized Homotopy Approach; Springer Science & Business Media: Berlin, Germany, 2001.
  2. Hwang, C.L.; Masud, A.S.M. Multiple Objective Decision Making—Methods and Applications: A State-of-the-Art Survey; Springer: Berlin, Germany, 1994.
  3. Marler, R.T.; Arora, J.S. Survey of multi-objective optimization methods for engineering. Struct. Multidiscip. Optim. 2004, 26, 369–395.
  4. Huang, H.Z.; Qu, J.; Zuo, M.J. A new method of system reliability multi-objective optimization using genetic algorithms. In Proceedings of the RAMS’06—Annual Reliability and Maintainability Symposium, Newport Beach, CA, USA, 23–26 January 2006; pp. 278–283.
  5. Coello, C.A. An updated survey of GA-based multiobjective optimization techniques. ACM Comput. Surv. 2000, 32, 109–143.
  6. Stadler, W. A survey of multicriteria optimization or the vector maximum problem, part I: 1776–1960. J. Optim. Theory Appl. 1979, 29, 1–52.
  7. Lieberman, E.R. Soviet multi-objective mathematical programming methods: An overview. Manag. Sci. 1991, 37, 1147–1165.
  8. Fishburn, P.C. A survey of multiattribute/multicriterion evaluation theories. In Multiple Criteria Problem Solving; Springer: Berlin/Heidelberg, Germany, 1978; pp. 181–224.
  9. Evans, G.W. An overview of techniques for solving multiobjective mathematical programs. Manag. Sci. 1984, 30, 1268–1282.
  10. Dechter, R. Mini-buckets: A general scheme for generating approximations in automated reasoning. In Proceedings of the International Joint Conferences on Artificial Intelligence, Nagoya, Japan, 23–29 August 1997; pp. 1297–1303.
  11. Rollon, E.; Larrosa, J. Constraint optimization techniques for exact multi-objective optimization. In Multiobjective Programming and Goal Programming; Springer: Berlin/Heidelberg, Germany, 2009; pp. 89–98.
  12. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms: An Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2011; pp. 75–96.
  13. Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining convergence and diversity in evolutionary multiobjective optimization. Evolut. Comput. 2002, 10, 263–282.
  14. Garcia-Najera, A.; Bullinaria, J.A. An improved multi-objective evolutionary algorithm for the vehicle routing problem with time windows. Comput. Oper. Res. 2011, 38, 287–300.
  15. Carvalho, R.D.; Saldanha, R.R.; Gomes, B.N.; Lisboa, A.C.; Martins, A.X. A multi-objective evolutionary algorithm based on decomposition for optimal design of Yagi-Uda antennas. IEEE Trans. Magn. 2012, 48, 803–806.
  16. Zhu, Z.; Zhang, G.; Li, M.; Liu, X. Evolutionary multi-objective workflow scheduling in cloud. IEEE Trans. Parallel Distrib. Syst. 2015, 27, 1344–1357.
  17. Zhang, Q.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evolut. Comput. 2007, 11, 712–731.
  18. Ke, L.; Zhang, Q.; Battiti, R. Hybridization of decomposition and local search for multiobjective optimization. IEEE Trans. Cybern. 2014, 44, 1808–1820.
  19. Li, X.; Li, M. Multiobjective local search algorithm-based decomposition for multiobjective permutation flow shop scheduling problem. IEEE Trans. Eng. Manag. 2015, 62, 544–557.
  20. Hu, Y.; Yang, B.; Wong, H.S. A weighted local view method based on observation over ground truth for community detection. Inf. Sci. 2016, 355, 37–57.
  21. Fernau, H.; Fomin, F.V.; Philip, G.; Saurabh, S. On the parameterized complexity of vertex cover and edge cover with connectivity constraints. Theor. Comput. Sci. 2015, 565, 1–15.
  22. Wang, L.; Du, W.; Zhang, Z.; Zhang, X. A PTAS for minimum weighted connected vertex cover P3 problem in 3-dimensional wireless sensor networks. J. Comb. Optim. 2017, 33, 106–122.
  23. Pullan, W. Optimisation of unweighted/weighted maximum independent sets and minimum vertex covers. Discrete Optim. 2009, 6, 214–219.
  24. Li, R.; Hu, S.; Zhang, H.; Yin, M. An efficient local search framework for the minimum weighted vertex cover problem. Inf. Sci. 2016, 372, 428–445.
  25. Pourhassan, M.; Shi, F.; Neumann, F. Parameterized analysis of multi-objective evolutionary algorithms and the weighted vertex cover problem. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Edinburgh, UK, 17–21 September 2016.
  26. Resende, M.G.C. Greedy Randomized Adaptive Search Procedures; Springer US: New York, NY, USA, 2001; pp. 219–249.
  27. Resende, M.G.C.; Ribeiro, C.C. Greedy Randomized Adaptive Search Procedures: Advances, Hybridizations, and Applications. J. Glob. Optim. 2010, 6, 109–133.
  28. Shyu, S.J.; Yin, P.Y.; Lin, B.M.T. An Ant Colony Optimization Algorithm for the Minimum Weight Vertex Cover Problem. Ann. Oper. Res. 2004, 131, 283–304.
  29. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T.A. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evolut. Comput. 2002, 6, 182–197.
  30. Bekele, E.G.; Nicklow, J.W. Multi-objective automatic calibration of SWAT using NSGA-II. J. Hydrol. 2007, 341, 165–176.
  31. Alikar, N.; Mousavi, S.M.; Ghazilla, R.A.; Tavana, M.; Olugu, E.U. Application of the NSGA-II Algorithm to A Multi-Period Inventory-Redundancy Allocation Problem in a Series-Parallel System. Reliab. Eng. Syst. Saf. 2017, 160, 1–10.
  32. Vo-Duy, T.; Duong-Gia, D.; Ho-Huu, V.; Vu-Do, H.C.; Nguyen-Thoi, T. Multi-objective optimization of laminated composite beam structures using NSGA-II algorithm. Compos. Struct. 2017, 168, 498–509.
Figure 1. An example illustrating the differences between the score functions: (a) Dscore loses its effectiveness; (b) Wscore loses its effectiveness.
Figure 2. Illustration of the iterated neighborhood search.
Figure 3. The hyper-volume indicator I_H.
Figure 4. Illustration of the neighborhood search.
Figure 5. Distribution of the final sets obtained by three versions on some instances.
Table 1. Experimental results of MONSD in comparison to NSGA-II on small and moderate instances.
Instance | I_H (MONSD) | I_H (NSGA-II) | C(MONSD, NSGA-II) % | C(NSGA-II, MONSD) %
VC-100-100 | 1.22 | 1.08 | 0.00 | 54.17
VC-100-250 | 1.21 | 0.94 | 0.00 | 28.57
VC-100-500 | 1.21 | 0.18 | 100.00 | 0.00
VC-100-750 | 1.08 | 0.27 | 100.00 | 0.00
VC-100-1000 | 1.16 | 0.20 | 100.00 | 0.00
VC-100-2000 | 0.96 | 0.06 | 100.00 | 0.00
VC-150-150 | 1.09 | 0.59 | 100.00 | 0.00
VC-150-250 | 1.21 | 0.70 | 100.00 | 0.00
VC-150-500 | 1.12 | 0.19 | 100.00 | 0.00
VC-150-750 | 1.22 | 0.33 | 100.00 | 0.00
VC-150-1000 | 1.18 | 0.22 | 100.00 | 0.00
VC-150-2000 | 1.08 | 0.25 | 100.00 | 0.00
VC-150-3000 | 1.17 | 0.07 | 100.00 | 0.00
VC-200-250 | 1.21 | 0.28 | 100.00 | 0.00
VC-200-500 | 1.22 | 0.19 | 0.00 | 46.15
VC-200-750 | 1.21 | 0.17 | 100.00 | 0.00
VC-200-1000 | 1.16 | 0.19 | 100.00 | 0.00
VC-200-2000 | 1.23 | 0.29 | 100.00 | 0.00
VC-200-3000 | 1.17 | 0.11 | 100.00 | 0.00
VC-250-250 | 1.20 | 0.34 | 100.00 | 0.00
VC-250-500 | 1.19 | 1.04 | 100.00 | 0.00
VC-250-750 | 1.22 | 0.22 | 100.00 | 0.00
VC-250-1000 | 1.16 | 0.26 | 100.00 | 0.00
VC-250-2000 | 1.30 | 0.11 | 100.00 | 0.00
VC-250-3000 | 1.23 | 0.16 | 100.00 | 0.00
VC-250-5000 | 1.26 | 0.10 | 100.00 | 0.00
VC-300-300 | 1.14 | 0.24 | 100.00 | 0.00
VC-300-500 | 1.20 | 0.22 | 100.00 | 0.00
VC-300-750 | 1.27 | 0.21 | 100.00 | 0.00
VC-300-1000 | 1.21 | 0.18 | 100.00 | 0.00
VC-300-2000 | 1.29 | 0.21 | 100.00 | 0.00
VC-300-3000 | 1.29 | 0.20 | 100.00 | 0.00
VC-300-5000 | 1.14 | 0.08 | 100.00 | 0.00
Average | 1.19 | 0.30 | 90.91 | 3.91
Table 2. Experimental results of MONSD in comparison to NSGA-II on large instances.
Instance | I_H (MONSD) | I_H (NSGA-II) | C(MONSD, NSGA-II) % | C(NSGA-II, MONSD) %
VC-500-500 | 1.07 | 0.68 | 100.00 | 0.00
VC-500-1000 | 1.26 | 0.44 | 100.00 | 0.00
VC-500-2000 | 1.02 | 0.27 | 100.00 | 0.00
VC-500-5000 | 1.29 | 0.39 | 100.00 | 0.00
VC-500-10000 | 1.36 | 0.24 | 100.00 | 0.00
VC-800-500 | 1.22 | 0.79 | 100.00 | 0.00
VC-800-1000 | 1.20 | 0.09 | 100.00 | 0.00
VC-800-2000 | 1.24 | 0.41 | 100.00 | 0.00
VC-800-5000 | 1.17 | 0.09 | 100.00 | 0.00
VC-800-10000 | 1.28 | 0.10 | 100.00 | 0.00
VC-1000-1000 | 1.22 | 0.72 | 100.00 | 0.00
VC-1000-5000 | 0.98 | 0.07 | 100.00 | 0.00
VC-1000-10000 | 1.27 | 0.09 | 100.00 | 0.00
VC-1000-15000 | 1.39 | 0.08 | 100.00 | 0.00
VC-1000-20000 | 1.23 | 0.08 | 100.00 | 0.00
Average | 1.21 | 0.30 | 100.00 | 0.00
Table 3. Experimental results on small-scale and moderate-scale instances.
Instance | I_H (WDscore) | I_H (Wscore) | I_H (Dscore) | C(WDscore, Wscore) % | C(Wscore, WDscore) % | C(WDscore, Dscore) % | C(Dscore, WDscore) %
VC-100-100 | 1.22 | 1.19 | 1.04 | 28.57 | 4.17 | 85.71 | 4.17
VC-100-250 | 1.21 | 1.21 | 0.71 | 11.11 | 9.52 | 100.00 | 0.00
VC-100-500 | 1.21 | 1.19 | 0.75 | 20.00 | 18.75 | 100.00 | 0.00
VC-100-750 | 1.08 | 1.03 | 0.79 | 45.45 | 28.57 | 100.00 | 0.00
VC-100-1000 | 1.16 | 1.09 | 0.83 | 55.56 | 0.00 | 100.00 | 0.00
VC-100-2000 | 0.96 | 0.72 | 0.68 | 0.60 | 0.00 | 60.00 | 0.00
VC-150-150 | 1.09 | 1.07 | 0.62 | 38.46 | 6.25 | 100.00 | 0.00
VC-150-250 | 1.21 | 1.20 | 0.90 | 50.00 | 36.36 | 100.00 | 0.00
VC-150-500 | 1.12 | 1.02 | 0.55 | 63.64 | 25.00 | 100.00 | 0.00
VC-150-750 | 1.22 | 1.18 | 0.62 | 80.00 | 11.76 | 100.00 | 0.00
VC-150-1000 | 1.18 | 1.17 | 0.59 | 61.11 | 40.00 | 100.00 | 0.00
VC-150-2000 | 1.08 | 1.06 | 0.69 | 50.00 | 25.00 | 100.00 | 0.00
VC-150-3000 | 1.17 | 1.10 | 0.96 | 50.00 | 9.09 | 71.00 | 0.00
VC-200-250 | 1.21 | 1.14 | 0.83 | 50.00 | 18.18 | 100.00 | 0.00
VC-200-500 | 1.22 | 1.04 | 0.39 | 78.57 | 14.29 | 100.00 | 0.00
VC-200-750 | 1.21 | 1.15 | 0.44 | 50.00 | 23.08 | 100.00 | 0.00
VC-200-1000 | 1.16 | 1.10 | 0.40 | 66.67 | 16.67 | 100.00 | 0.00
VC-200-2000 | 1.23 | 1.25 | 0.71 | 33.33 | 57.89 | 100.00 | 0.00
VC-200-3000 | 1.17 | 1.11 | 0.77 | 41.67 | 8.33 | 100.00 | 0.00
VC-250-250 | 1.20 | 1.12 | 0.83 | 81.25 | 12.50 | 100.00 | 0.00
VC-250-500 | 1.19 | 1.17 | 0.60 | 50.00 | 15.38 | 100.00 | 0.00
VC-250-750 | 1.22 | 1.18 | 0.59 | 60.87 | 35.00 | 100.00 | 0.00
VC-250-1000 | 1.16 | 1.14 | 0.69 | 45.45 | 33.33 | 100.00 | 0.00
VC-250-2000 | 1.30 | 1.12 | 0.47 | 88.89 | 0.00 | 100.00 | 0.00
VC-250-3000 | 1.23 | 1.19 | 0.47 | 61.11 | 57.89 | 100.00 | 0.00
VC-250-5000 | 1.26 | 1.17 | 0.74 | 83.33 | 0.00 | 100.00 | 0.00
VC-300-300 | 1.14 | 0.90 | 0.66 | 100.00 | 0.00 | 100.00 | 0.00
VC-300-500 | 1.20 | 1.13 | 0.66 | 85.00 | 11.11 | 100.00 | 0.00
VC-300-750 | 1.27 | 1.15 | 0.45 | 90.48 | 4.35 | 100.00 | 0.00
VC-300-1000 | 1.21 | 1.07 | 0.45 | 86.67 | 17.65 | 100.00 | 0.00
VC-300-2000 | 1.29 | 1.13 | 0.43 | 100.00 | 0.00 | 100.00 | 0.00
VC-300-3000 | 1.29 | 1.22 | 0.56 | 83.33 | 16.67 | 100.00 | 0.00
VC-300-5000 | 1.14 | 1.01 | 0.32 | 85.71 | 5.88 | 100.00 | 0.00
Average | 1.19 | 1.11 | 0.64 | 59.90 | 17.05 | 97.00 | 0.13
Table 4. Experimental results on large-scale instances.
Instance | I_H (WDscore) | I_H (Wscore) | I_H (Dscore) | C(WDscore, Wscore) % | C(Wscore, WDscore) % | C(WDscore, Dscore) % | C(Dscore, WDscore) %
VC-500-500 | 1.07 | 1.02 | 0.43 | 86.11 | 17.86 | 100.00 | 0.00
VC-500-1000 | 1.26 | 1.04 | 0.42 | 100.00 | 0.00 | 100.00 | 0.00
VC-500-2000 | 1.02 | 0.78 | 0.19 | 80.00 | 9.09 | 100.00 | 0.00
VC-500-5000 | 1.29 | 1.19 | 0.36 | 80.00 | 31.58 | 100.00 | 0.00
VC-500-10000 | 1.36 | 1.16 | 0.46 | 100.00 | 0.00 | 100.00 | 0.00
VC-800-500 | 1.22 | 0.98 | 0.69 | 100.00 | 0.00 | 100.00 | 0.00
VC-800-1000 | 1.20 | 1.11 | 0.56 | 71.43 | 5.26 | 100.00 | 0.00
VC-800-2000 | 1.24 | 0.94 | 0.53 | 100.00 | 0.00 | 100.00 | 0.00
VC-800-5000 | 1.17 | 1.10 | 0.32 | 82.35 | 10.71 | 100.00 | 0.00
VC-800-10000 | 1.28 | 1.07 | 0.24 | 81.25 | 0.00 | 100.00 | 0.00
VC-1000-1000 | 1.22 | 1.00 | 0.55 | 100.00 | 0.00 | 100.00 | 0.00
VC-1000-5000 | 0.98 | 0.75 | 0.25 | 100.00 | 0.00 | 100.00 | 0.00
VC-1000-10000 | 1.27 | 1.11 | 0.15 | 66.67 | 15.79 | 100.00 | 0.00
VC-1000-15000 | 1.39 | 1.26 | 0.28 | 100.00 | 0.00 | 100.00 | 0.00
VC-1000-20000 | 1.23 | 1.05 | 0.09 | 50.00 | 7.69 | 100.00 | 0.00
Average | 1.21 | 1.04 | 0.37 | 86.52 | 6.53 | 100.00 | 0.00
