Article

A Population-Based Iterated Greedy Algorithm for Maximizing Sensor Network Lifetime

by Salim Bouamama 1, Christian Blum 2,* and Pedro Pinacho-Davidson 3
1 Mechatronics Laboratory (LMETR)—E1764200, Department of Computer Science, Ferhat Abbas University Sétif 1, Sétif 19000, Algeria
2 Artificial Intelligence Research Institute (IIIA-CSIC), Campus of the UAB, 08193 Bellaterra, Spain
3 Department of Computer Science, Faculty of Engineering, Universidad de Concepción, Concepción 4070411, Chile
* Author to whom correspondence should be addressed.
Sensors 2022, 22(5), 1804; https://doi.org/10.3390/s22051804
Submission received: 17 January 2022 / Revised: 20 February 2022 / Accepted: 21 February 2022 / Published: 24 February 2022

Abstract: Finding dominating sets in graphs is very important in the context of numerous real-world applications, especially in the area of wireless sensor networks. This is because network lifetime in wireless sensor networks can be prolonged by assigning sensors to disjoint dominating node sets. The nodes of these sets are then used by a sleep–wake cycling mechanism in a sequential way; that is, at any moment in time, only the nodes from exactly one of these sets are switched on while the others are switched off. This paper presents a population-based iterated greedy algorithm for solving a weighted version of the maximum disjoint dominating sets problem for energy conservation purposes in wireless sensor networks. Our approach is compared to the ILP solver CPLEX, to an existing local search technique from the literature, and to our earlier greedy algorithm. This is performed through its application to 640 random graphs from the literature and to 300 newly generated random geometric graphs. The results show that our algorithm significantly outperforms the competitors.

1. Introduction

The field of wireless sensor networks (WSNs) has received a great deal of attention in the last 20 years, both in research and in industry. This is certainly due to a multitude of different applications, including environmental monitoring, medical and health applications, security surveillance, transportation applications, structural applications, and emergency operations [1,2], just to name a few. WSNs are generally composed of a number of small devices equipped with one or more sensors, limited storage capacity, a limited power supply, and a radio communication system. As the weight of sensor devices often plays an important role, power supply—for example, by means of a battery—is generally limited, and battery-saving techniques are often used. The lifetime of a sensor device (in hours) may be estimated by dividing the battery capacity (in Watt-hours) by the average power drain (in Watts). However, the estimation of the lifetime of a sensor node is not a trivial task (see [3]) because energy consumption is the result of various factors, including, for example, the environmental temperature.
For these reasons, one of the principal research topics concerning WSNs is about network lifetime extension, while at the same time, providing sufficient communication reliability and sensing coverage. Note that in this context, the term network lifetime refers to the time during which the network is fully operational with respect to its tasks. In other words, the network lifetime is the time duration in which the overall sensing coverage is maintained. The lifetime of a WSN, therefore, heavily depends on the energy consumption of the individual sensor devices. Real-world examples of mechanisms for maximizing the network lifetime are manifold. They include, but are not limited to, smart agriculture monitoring [4], structural health monitoring [5], human activity monitoring [6], and road traffic monitoring [7]. Power-saving strategies such as the ones found in these examples can—according to [8]—be classified as belonging to one of the following groups:
  • Sleep–wake cycling, also referred to as duty cycling. Here, sensor devices alternate between active and sleep mode;
  • Power control through the adjustment of the transmission range of the radio communication systems;
  • Routing and data gathering in an energy efficient way;
  • Reduction of the amount of data transmissions and avoidance of useless activity.
In this paper, we will provide a technique for WSN lifetime extension that falls into the first category. More precisely, our technique makes use of the so-called communication graph. The nodes of this graph are the sensor devices belonging to the network (together with their locations). Two such nodes are connected by an edge if the corresponding sensor devices can communicate with each other via their radio communication systems. Note that sensor nodes have at least two tasks, also known as functionalities: (1) sensing data and (2) data processing and forwarding data to a base station. Of these two tasks, the latter is by far the more energy-intensive. In order to keep energy spending to a minimum, the nodes in a sensor network may be organized in terms of dominating sets in which the dominators (that is, the sensor nodes that form part of the dominating set) assume the task of cluster heads that take care of data processing and forwarding. However, sensor nodes are never switched off in this model. Those sensor nodes that do not form part of the (current) dominating set save energy by not having to perform data processing and forwarding. In contrast, data sensing is performed by all sensor nodes at all times. Such a model (or similar models) has been used in a wide range of papers in the literature; see, for example, refs. [9,10,11]. For the above-mentioned reasons, our technique organizes the sensor nodes into a number of disjoint dominating sets, which are used—one after the other—for data processing and data forwarding.

1.1. Necessary Graph Theoretic Concepts

This paper makes use of some basic definitions and notations from graph theory. The most important ones are outlined in the following. (For a more profound introduction, the interested reader may refer to [12].) First, the communication graph is modeled by means of an undirected graph, G = (V, E), where V is the set of nodes and E is the set of edges connecting (some of) the nodes. Two nodes, v, u ∈ V, that are connected by an edge, (v, u) ∈ E, are called neighbors; they may also be called adjacent nodes. The open neighborhood of a node, v ∈ V, is defined as N(v) := {u ∈ V | (v, u) ∈ E}. Sometimes, however, it is necessary to refer to the closed neighborhood N[v] of a node, v ∈ V, which is defined by adding v to N(v), that is, N[v] := N(v) ∪ {v}. Next, the degree deg(v) of a node, v ∈ V, is defined as the cardinality of N(v), that is, deg(v) := |N(v)|. The concept of neighborhood can also be extended from nodes to sets of nodes in the following way. The open neighborhood, N(D), of a set, D ⊆ V, is defined as N(D) := ∪_{v ∈ D} N(v). Correspondingly, the closed neighborhood, N[D], of a set D ⊆ V is defined as N[D] := N(D) ∪ D.
In this context, we also formally introduce the terms dominating set, domatic partition, and domatic number of a graph. First, a subset, D ⊆ V, in which each node, v ∈ V \ D, has at least one neighbor that forms part of D is called a dominating set of G. A node, v ∈ D—where D is a dominating set—is said to cover all its neighbors, in addition to itself. A trivial example of a dominating set of an undirected graph, G = (V, E), is V itself. Next, a set D = {D_1, D_2, …, D_k} of subsets D_i ⊆ V is called a domatic partition of a given undirected graph, G = (V, E), if the following two conditions are fulfilled: (1) each D_i (i = 1, …, k) is a dominating set of G, and (2) all sets of D are pairwise disjoint, that is, D_i ∩ D_j = ∅ for all 1 ≤ i < j ≤ k. If D = {D_1, D_2, …, D_k} is a set of disjoint dominating sets of G with (1) ∪_{D_i ∈ D} D_i ⊆ V and (2) V \ ∪_{D_i ∈ D} D_i not being a dominating set, a domatic partition of G can easily be obtained by adding all vertices from V \ ∪_{D_i ∈ D} D_i, for example, to D_k. The domatic number of an undirected graph, G = (V, E), is defined as the size of the largest domatic partition of G; that is, the domatic number of G is |D*|, where D* := argmax{|D| : D is a domatic partition of G}. It was shown in the literature that the domatic number of a graph, G, can be at most δ + 1, where δ := min{deg(v) | v ∈ V}. The problem of identifying a domatic partition of maximum size of an undirected graph, G, is sometimes called the maximum disjoint dominating sets (MDDS) problem.
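To make these definitions concrete, the following C++ sketch checks whether a node subset is a dominating set and whether a family of subsets forms a set of pairwise disjoint dominating sets, which is the relaxed notion of a domatic partition used above. The adjacency-list representation and the function names are our own illustration, not part of the original implementation.

```cpp
#include <set>
#include <vector>

// Communication graph as adjacency lists: adj[v] holds the open neighborhood N(v).
using Graph = std::vector<std::vector<int>>;

// A set D is a dominating set iff every node outside D has at least one neighbor in D.
bool isDominatingSet(const Graph& adj, const std::set<int>& D) {
    for (int v = 0; v < static_cast<int>(adj.size()); ++v) {
        if (D.count(v)) continue;                    // v covers itself
        bool dominated = false;
        for (int u : adj[v])                         // look for a neighbor in D
            if (D.count(u)) { dominated = true; break; }
        if (!dominated) return false;
    }
    return true;
}

// Checks the two conditions used above: all sets are pairwise disjoint and
// each of them is a dominating set of the graph.
bool isSetOfDisjointDominatingSets(const Graph& adj,
                                   const std::vector<std::set<int>>& family) {
    std::set<int> seen;
    for (const auto& D : family) {
        if (!isDominatingSet(adj, D)) return false;
        for (int v : D)
            if (!seen.insert(v).second) return false; // node used twice -> not disjoint
    }
    return true;
}
```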

1.2. Graph Problems Used to Model WSN Lifetime Maximization

The related literature offers different approaches for the maximization of the sensor network lifetime. Most of these approaches have modeled this problem either in terms of the set K-cover problem (also known as the target coverage problem) or as the MDDS problem. Modeling the problem as a K-cover problem was performed for the first time in [13]. In the same work, the problem was shown to be NP-hard. In this context, note that the set K-cover problem is defined on the basis of a bipartite graph in which the first set of nodes are the sensor devices and the second set of nodes are the sensing targets. The aim of the problem is to partition the sensor devices into a maximum number of disjoint sets, with each one covering all targets. As mentioned before, these disjoint sets are then activated one after the other in order to keep the network alive for the maximum period of time. Due to the NP-hardness of the problem, a range of approximate algorithms have been proposed in the literature in order to solve it. Examples, which also include algorithms for closely related problem formulations, are a greedy heuristic [13], some memetic algorithms [14,15,16], a cuckoo search approach [17], and finally, a genetic algorithm [18].
As already indicated above, the problem of maximizing sensor network lifetime is also frequently modeled as an MDDS problem, the goal of which is to identify a partition of the sensor devices into a maximum number of disjoint dominating sets of the corresponding communication graph. The MDDS problem, which belongs to the important family of dominating set problems [19,20,21], was shown to be NP-hard for general graphs [22]. Cardei et al. [23] proved the NP-completeness of a special case of the MDDS problem known as the 3-disjoint dominating sets problem. This variant deals with the question of whether or not a given graph contains three disjoint dominating sets. Nguyen and Huynh [9] proved that the 3-disjoint dominating sets problem remains NP-complete even for the special case of planar unit disk graphs. Moreover, they introduced and evaluated the performance of four greedy heuristics for the general MDDS problem. In [23], it was also proved that unless P = NP, the MDDS problem has no polynomial-time approximation with a performance guarantee better than 1.5. Finally, the same authors introduced a graph coloring-based heuristic approach. Next, Feige et al. [24] showed that any graph with n nodes, a maximum degree of Δ, and a minimum degree of δ has a domatic partition with a size of (1 − o(1))(δ + 1)/ln Δ. Note that the term o(1) tends to zero with increasing n. Moreover, the same authors were able to show the non-existence of an approximation algorithm with an approximation ratio of (1 + o(1)) ln n unless NP ⊆ DTIME(n^O(log log n)). Finally, they also introduced a centralized algorithm generating a domatic partition with a size of Ω(δ/ln Δ). Moscibroda and Wattenhofer [25] regarded the MDDS problem as one of maximizing cluster lifetime. They introduced a randomized, distributed algorithm having—with high probability—a performance ratio of O(log n). Finally, a greedy heuristic for the MDDS problem, with a time complexity of O(n³), was described in [26].

1.3. Existing Work for the MWDDS Problem and Our Contribution

Recently, a weighted variant of the MDDS problem, in which the weights of the nodes of a given undirected graph, G = ( V , E ) , indicate the remaining lifetime of single sensor devices, was introduced in [27]. The authors labeled this problem as the maximum weighted disjoint dominating sets (MWDDS) problem. The lifetime of a dominating set in G is hereby defined as the minimum of the lifetimes of the nodes that form part of the set. The MWDDS problem asks to identify a domatic partition that maximizes the sum of the lifetimes of the corresponding dominating sets.
In addition, three algorithms based on a local search were provided in [27]. Each of these local search methods takes a solution generated by a greedy heuristic from [26] as the initial solution. The proposed local search methods make use of swap neighborhoods, trying, for example, to exchange nodes between different dominating sets and to incorporate nodes not belonging to any dominating set of the current solution. The three local search methods differ in the type of swaps that are considered. The current state-of-the-art approach for the MWDDS problem is, surprisingly, a greedy heuristic that was introduced in [28]. This algorithm generates one dominating set after the other by adding one node at a time to the partial dominating set under construction.
Given that the greedy heuristic from [28] seems to be very powerful, in this work, we propose a metaheuristic extension of this greedy heuristic. More specifically, we propose a population-based iterated greedy (PBIG) algorithm along the lines of [29,30]. Just as with any other iterated greedy (IG) approach [31], our algorithm iteratively generates new solutions to the problem by partially destroying incumbent solutions and re-constructing the obtained partial solutions by means of a randomized greedy technique. As we will show in the section on the experimental results, our proposed approach clearly outperforms all existing algorithms for the MWDDS problem. In addition to 640 problem instances from the related literature, we likewise evaluate our algorithm—in comparison to the competitors—on 300 random geometric graphs.

1.4. Paper Organization

The remainder of this paper is organized as follows. The MWDDS problem is formally introduced, together with notations and basic definitions, in Section 2. This also includes a graphical example and an ILP model for the MWDDS problem. In Section 3, we introduce our algorithmic proposal, a population-based iterated greedy algorithm for solving the MWDDS problem. Finally, Section 4 presents a comprehensive experimental evaluation and a comparison to the current state of the art, while Section 5 summarizes our paper and offers directions for future lines of work.

2. The MWDDS Problem

Let G = (V, E, lifetime) be an undirected, node-weighted graph. As already mentioned before, V refers to the set of nodes, while E ⊆ V × V is the set of edges. Moreover, lifetime: V → ℝ⁺ is a weight function defined over the set of nodes, assigning a positive weight, lifetime(v) > 0, to each node, v ∈ V. The maximum weighted disjoint dominating sets (MWDDS) problem is then defined such that any domatic partition D of G is a valid solution. The objective function value of a valid solution, D = {D_1, …, D_|D|}, is defined as follows:
f(D) := Σ_{i=1}^{|D|} min{ lifetime(v) | v ∈ D_i }
In other words, the quality of a subset, D_i, is determined as the minimum lifetime of all its nodes. The objective is to find a valid solution, D*, that maximizes the objective function f.
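As a small illustration, the objective function defined above can be evaluated as in the following sketch; the vector-based solution representation and the function name are assumptions made for this example.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Objective f(D): sum, over all dominating sets of the solution, of the
// minimum lifetime among the nodes of each set.
double objectiveValue(const std::vector<std::vector<int>>& solution,
                      const std::vector<double>& lifetime) {
    double total = 0.0;
    for (const auto& Di : solution) {
        if (Di.empty()) continue;                        // empty sets contribute nothing
        double minLife = std::numeric_limits<double>::infinity();
        for (int v : Di) minLife = std::min(minLife, lifetime[v]);
        total += minLife;
    }
    return total;
}
```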

2.1. Graphical Example

Figure 1 shows a graphical example of the MWDDS problem. In particular, Figure 1a shows an undirected graph on seven nodes. The labels printed within the nodes have the format x, y, where x is the node's ID and y is the node's lifetime. These lifetime values are normalized to the range [0, 1] for the sake of simplicity. Furthermore, the graphic in Figure 1b shows a feasible solution, D := {D_1 = {3, 4}, D_2 = {1, 5, 6}}, which consists of two dominating sets, D_1 = {3, 4} and D_2 = {1, 5, 6}. According to the node labels, the remaining lifetimes of nodes 3 and 4 are 0.7 and 0.4, respectively. Correspondingly, D_1 has a lifetime of 0.4. Next, it can also be easily calculated that the lifetime of D_2 is 0.1 because the individual lifetimes of nodes 1, 5, and 6 are 0.8, 0.8, and 0.1, respectively. The objective function value f(D) of D is calculated as the sum of the lifetimes of the dominating sets in D. Therefore, f(D) = 0.4 + 0.1 = 0.5. Finally, the graphic in Figure 1c shows the optimal solution, D* := {D_1 = {1, 3, 5}, D_2 = {2, 4, 6}}, to this problem instance. Here, the lifetime of D_1 is 0.7, while the lifetime of D_2 is 0.1. Therefore, the objective function value f(D*) of D* is 0.8. Since this small sample graph contains a node with a degree of 1, any valid solution can contain at most two disjoint dominating sets.

2.2. ILP Model for the MWDDS Problem

The following integer linear programming (ILP) model for the MWDDS problem was introduced in [28]. First, we describe the sets of variables and their domains utilized by this model:
1. A binary variable, x_ij, for each combination of a node, v_i (i = 1, …, n), and a possible disjoint set, D_j (j = 1, …, δ(G) + 1), indicates whether or not node v_i forms part of the dominating set D_j. That is, when x_ij = 1, node v_i is assigned to the dominating set D_j. In this context, remember that (1) δ(G) := min{deg(v) | v ∈ V}, and (2) the number of disjoint dominating sets in a graph is bounded from above by δ(G) + 1.
2. Second, a binary variable, y_j (j = 1, …, δ(G) + 1), indicates whether the j-th dominating set is utilized at all.
3. Finally, a real-valued variable, z_j ∈ [0, M], is used to store the weight of the j-th dominating set. In our implementation of the model, we used M := max{lifetime(v) | v ∈ V}.
The MWDDS problem can then be stated in terms of an ILP model in the following way:
max  Σ_{j=1}^{δ(G)+1} z_j                                                              (2)
s.t. Σ_{j=1}^{δ(G)+1} x_ij ≤ 1                          for i = 1, …, n                (3)
     Σ_{v_k ∈ N(v_i)} x_kj ≥ y_j − x_ij                 for i = 1, …, n and j = 1, …, δ(G) + 1   (4)
     y_j ≥ x_ij                                         for i = 1, …, n and j = 1, …, δ(G) + 1   (5)
     x_ij · lifetime(v_i) + (1 − x_ij) · M ≥ z_j        for i = 1, …, n and j = 1, …, δ(G) + 1   (6)
     y_j · M ≥ z_j                                      for j = 1, …, δ(G) + 1         (7)
     y_j ≥ y_{j+1}                                      for j = 1, …, δ(G)             (8)
     z_j ≥ z_{j+1}                                      for j = 1, …, δ(G)             (9)
The objective function is the sum of the weight values of all dominating sets. Constraints (3) ensure that each node is assigned to one dominating set at most. In this way, the chosen dominating sets are disjoint. Next, constraints (4) are the usual dominating set constraints, that is, they make sure that the set of nodes assigned to the j-th set (if utilized) form a dominating set of G. Furthermore, constraints (5) make sure that nodes can only be assigned to utilized dominating sets. Constraints (6) correctly determine the lifetimes of the utilized dominating sets. This is accomplished by setting the value of the variable z j of the j-th dominating set (if utilized) to the minimum of the lifetime values of all nodes assigned to it. Next, note that the objective function (Equation (2)) is only correct if the weight value of the dominating sets not utilized is set to zero. This is ensured by constraints (7). The remaining two sets of constraints, (8) and (9), are not required for the correctness of the ILP. They were introduced for tie-breaking purposes that have the following effect: (1) if k dominating sets are utilized, they are assigned to sets 1 , , k , and (2) the utilized dominating sets are ordered according to a non-increasing weight value.
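For illustration purposes, the following sketch shows one way such a model could be assembled with CPLEX's Concert Technology C++ interface, which is one possibility for feeding the model to the CPLEX solver used later in Section 4. The adjacency-list input format, the variable and function names, and the time-limit setting are our own assumptions and do not reproduce the authors' implementation.

```cpp
#include <ilcplex/ilocplex.h>
#include <algorithm>
#include <vector>
ILOSTLBEGIN

// Builds and solves the ILP (2)-(9) with CPLEX Concert Technology.
// adj[i] is the open neighborhood of node i, lifetime[i] its weight.
void solveMWDDSWithCplex(const std::vector<std::vector<int>>& adj,
                         const std::vector<double>& lifetime) {
    const int n = static_cast<int>(adj.size());
    int minDeg = n;
    double M = 0.0;
    for (int v = 0; v < n; ++v) {
        minDeg = std::min(minDeg, static_cast<int>(adj[v].size()));
        M = std::max(M, lifetime[v]);
    }
    const int k = minDeg + 1;                       // delta(G)+1: max. number of sets

    IloEnv env;
    try {
        IloModel model(env);
        IloArray<IloBoolVarArray> x(env, n);        // x[i][j]
        for (int i = 0; i < n; ++i) x[i] = IloBoolVarArray(env, k);
        IloBoolVarArray y(env, k);                  // y[j]
        IloNumVarArray  z(env, k, 0.0, M);          // z[j] in [0, M]

        model.add(IloMaximize(env, IloSum(z)));     // objective (2)

        for (int i = 0; i < n; ++i) {               // constraints (3): disjointness
            IloExpr assigned(env);
            for (int j = 0; j < k; ++j) assigned += x[i][j];
            model.add(assigned <= 1);
            assigned.end();
        }
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < k; ++j) {
                IloExpr cover(env);                 // constraints (4): domination
                for (int u : adj[i]) cover += x[u][j];
                model.add(cover >= y[j] - x[i][j]);
                cover.end();
                model.add(y[j] >= x[i][j]);         // constraints (5)
                model.add(x[i][j] * lifetime[i] + (1 - x[i][j]) * M >= z[j]); // (6)
            }
        for (int j = 0; j < k; ++j) model.add(y[j] * M >= z[j]);              // (7)
        for (int j = 0; j + 1 < k; ++j) {           // tie-breaking constraints (8), (9)
            model.add(y[j] >= y[j + 1]);
            model.add(z[j] >= z[j + 1]);
        }

        IloCplex cplex(model);
        cplex.setParam(IloCplex::Param::TimeLimit, 7200);   // 2 CPU hours, as in Section 4
        if (cplex.solve())
            env.out() << "best objective value: " << cplex.getObjValue() << endl;
    } catch (IloException& ex) {
        env.out() << "Concert exception: " << ex.getMessage() << endl;
    }
    env.end();
}
```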

3. Proposed Algorithm

Our population-based iterated greedy (PBIG) algorithm is a population-based extension of the well-known iterated greedy (IG) metaheuristic [31], that is, it produces a sequence of solutions by iterating over a constructive greedy heuristic in the following way. At each iteration, first, some part of the current/incumbent solution is removed, and second, the greedy heuristic is applied to the resulting partial solution in order to again obtain a complete solution. The first of these phases is called the destruction phase, while the second one is known as the reconstruction phase. A high-level description of our PBIG algorithm for solving the MWDDS problem is given in Algorithm 1. Apart from a problem instance, PBIG requires seven input parameters: (1) the population size ( p size ), (2) the lower bound of the degree of greediness during solution construction ( det min ), (3) the upper bound of the degree of greediness during solution construction ( det max ), (4) the lower bound of the degree of solution destruction ( destr min ), (5) the upper bound of the degree of solution destruction ( destr max ), (6) the maximum number of iterations without the improvement of the best-so-far solution D bsf before applying a restart ( max noimpr ), and (7) the degree of partial solution removal ( r del ). Moreover, note that in our algorithm, each solution D has two solution-specific parameters: the individual destruction rate, destr D , and the individual degree of greediness, det D . The use of all parameters will be carefully described below.
Algorithm 1 PBIG for the MWDDS problem.
Input: A problem instance G = (V, E, lifetime) and values for the parameters p_size, destr_min, destr_max, det_min, det_max, max_noimpr, and r_del.
Output: A family of disjoint dominating sets D = {D_1, D_2, …, D_k}
1:  P := GenerateInitialPopulation()
2:  D_bsf := argmax{f(D) | D ∈ P}
3:  cnt := 0
4:  while termination condition not satisfied do
5:      P_new := ∅
6:      for each candidate solution D ∈ P do
7:          D^p := DestroyPartially(D)
8:          D̂ := GreedyMWDDS(D^p)    // see Algorithm 2
9:          P_new := P_new ∪ {D̂}
10:         AdaptParameters(D, D̂)
11:     end for
12:     D_ib := argmax{f(D) | D ∈ P_new}
13:     if f(D_ib) > f(D_bsf) then D_bsf := D_ib, cnt := 0 else cnt := cnt + 1 end if
14:     P := SelectNextPopulation(P, P_new, cnt)
15: end while
16: return D_bsf
The algorithm works as follows. A set of p_size solutions for the initial population is generated in the function GenerateInitialPopulation() (see line 1 of Algorithm 1). Moreover, the best-so-far solution, D_bsf, and the counter cnt are initialized. Afterwards, the main loop of the algorithm starts. Each iteration consists of the following steps. Each solution, D ∈ P, is partially destroyed using procedure DestroyPartially(D) (see line 7 of Algorithm 1), resulting in a partial solution, D^p. On the basis of D^p, a complete solution, D̂, is then constructed using the procedure GreedyMWDDS(D^p) (see line 8 of Algorithm 1). Each newly obtained complete solution is stored in an initially empty, new population, P_new. Moreover, the individual destruction rates and the individual degrees of greediness of D and D̂ are adapted in the function AdaptParameters(D, D̂). Then, the iteration-best solution, D_ib, is determined, and in case this solution improves over D_bsf, the non-improvement counter cnt is set back to zero. Otherwise, this counter is incremented. As a last step of each iteration, procedure SelectNextPopulation(P, P_new, cnt) chooses the solutions for the population of the next iteration, maintaining the population size constant at p_size at all times. Finally, the algorithm terminates when a given CPU time limit has been reached, and the best found solution is returned. The five procedures mentioned above are described in more detail below.
GenerateInitialPopulation(p_size): Each of the p_size solutions of the initial population is constructed by applying the procedure GreedyMWDDS(·) with the empty solution D^0 = ∅ as input. Note that this procedure depends on the degree of greediness, which is the same for all solutions because each empty partial solution, D^0, is initialized with det_{D^0} = det_max, that is, the upper bound for the greediness of solution construction. Moreover, it is also initialized with destr_{D^0} = destr_min, that is, the lower bound of the destruction rate is set as the initial value.
GreedyMWDDS(D^p): The reconstruction procedure follows the general principle of a greedy algorithm, which builds a complete solution step-by-step, selecting one additional node at each construction step. In this work, we adopt the recent greedy heuristic presented in [28]. However, we extend this greedy heuristic (1) in order to allow for randomized steps and (2) to be able to take a partial (possibly non-empty) solution as input. In other words, our randomized greedy mechanism takes as input a partial solution, D^p, which might be empty. Note that such a partial solution is composed of independent, partially destroyed dominating sets. Now, the construction of a complete solution, D = {D_1, D_2, …, D_m}, on the basis of D^p = {D_1^p, D_2^p, …, D_k^p} (where k ≤ m) is performed by consecutively dealing with the generation of D_i starting from D_i^p for all i = 1, …, m. In this process, whenever i > |D^p| or D^p = ∅, D_i is initialized with the empty set. In the following, V_rem denotes the set that includes all nodes that are not yet added to a dominating set of a current (partial) solution D^p. That is, when receiving a partial solution D^p as input, V_rem := V \ ∪_{i=1}^{|D^p|} D_i^p. Thus, if D^p = ∅, then V_rem := V.
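A minimal sketch of the computation of V_rem from a (possibly empty) partial solution, assuming nodes are identified by the integers 0, …, n−1:

```cpp
#include <set>
#include <vector>

// V_rem: all nodes (0, ..., n-1) that do not yet belong to any of the
// partially destroyed dominating sets of the partial solution.
std::set<int> computeVrem(int n, const std::vector<std::vector<int>>& partialSolution) {
    std::set<int> vrem;
    for (int v = 0; v < n; ++v) vrem.insert(v);
    for (const auto& Dip : partialSolution)
        for (int v : Dip) vrem.erase(v);             // remove nodes that are already used
    return vrem;
}
```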
In the following, we describe the way to obtain a dominating set, D i , starting from a partial dominating set, D i p (possibly being empty). At the start, all nodes in V can be divided into three disjoint subsets with respect to D i :
  • Black nodes: nodes from D_i;
  • Gray nodes: nodes that are not in D_i but are dominated by black nodes, that is, all nodes in N(D_i) \ D_i, where N(D_i) := ∪_{v ∈ D_i} N(v) and N(v) is the neighborhood of v in G;
  • White nodes: all nodes from V that are neither black nor gray.
With this classification of the nodes, we can define the white degree of a node, v ∈ V_rem—with respect to D_i—as the number of white nodes from the closed neighborhood of v:

white_degree(v) := |{ u ∈ N[v] ∩ V_rem | u is a white node with respect to D_i }|

To be able to choose the next node to be added to set D_i at the current construction step, all nodes from V_rem are evaluated using a greedy function denoted by score(·), which is calculated as follows:

score(v) := lifetime(v) · white_degree(v)    for all v ∈ V_rem
Then, the randomization incorporated in our greedy heuristic is implemented using a quality-based restricted candidate list (RCL). The size of the RCL is controlled by the solution-specific parameter, det_D ∈ [0, 1], called the degree of greediness. Its value is adaptive and depends on the quality of the generated solution, as explained further below.
Let score_min := min{score(v) | v ∈ V_rem} and score_max := max{score(v) | v ∈ V_rem}. The RCL then contains all nodes, v ∈ V_rem, whose score is greater than or equal to score_min + det_D · (score_max − score_min). Note that when det_D = 1, our solution construction procedure behaves similarly to a deterministic greedy heuristic. On the other hand, setting det_D = 0 leads to a pure random construction. Finally, a node is selected at random from the RCL to be incorporated into the partial dominating set, D_i.
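The following sketch illustrates this construction step in C++: the white degree of a candidate node and the quality-based RCL selection. It assumes that score(v) = lifetime(v) · white_degree(v) has already been computed for all candidates and that data structures such as the color array are maintained by the caller; all names are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <random>
#include <vector>

// Node colors used while one dominating set D_i is under construction.
enum class Color { WHITE, GRAY, BLACK };

// white_degree(v): number of still-available white nodes in the closed
// neighborhood of v (following the definition given above).
int whiteDegree(int v, const std::vector<std::vector<int>>& adj,
                const std::vector<Color>& color, const std::vector<bool>& inVrem) {
    int deg = (color[v] == Color::WHITE && inVrem[v]) ? 1 : 0;  // v itself
    for (int u : adj[v])
        if (color[u] == Color::WHITE && inVrem[u]) ++deg;
    return deg;
}

// One randomized construction step: compute the RCL threshold from the minimum
// and maximum score and pick a member of the RCL uniformly at random.
// detD in [0,1] is the solution-specific degree of greediness.
int chooseFromRCL(const std::vector<int>& Vrem, const std::vector<double>& score,
                  double detD, std::mt19937& rng) {
    double smin = std::numeric_limits<double>::infinity();
    double smax = -std::numeric_limits<double>::infinity();
    for (int v : Vrem) {
        smin = std::min(smin, score[v]);
        smax = std::max(smax, score[v]);
    }
    const double threshold = smin + detD * (smax - smin);
    std::vector<int> rcl;
    for (int v : Vrem)
        if (score[v] >= threshold) rcl.push_back(v);   // quality-based RCL
    std::uniform_int_distribution<std::size_t> pick(0, rcl.size() - 1);
    return rcl[pick(rng)];
}
```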
Once D_i is a dominating set, it might contain redundant nodes, which—after identification—can be safely removed. In this context, note that a node is redundant if and only if every node from its closed neighborhood is dominated by at least two nodes from D_i. If, by removing redundant nodes, the node with the lowest lifetime in D_i can be removed, the overall lifetime of D_i is improved. After removing redundant nodes, D_i is placed in the solution, D, under construction. Afterwards, the set V_rem is updated accordingly before moving to the construction of the next dominating set, D_{i+1}. This solution construction process ends once no further dominating set can be generated from the nodes in V_rem. This occurs when it is impossible to complete the partial dominating set under construction because either (1) V_rem is empty or (2) no node from V_rem has a white closed neighbor. The pseudo-code of the complete procedure is shown in Algorithm 2.
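A possible implementation of the redundant-node removal (the Reduce step referenced in Algorithm 2 below) is sketched here. It counts, for every node, how many members of D_i cover it and then tries to drop nodes in order of increasing lifetime, which gives the lowest-lifetime node the first chance to be removed; the procedure name and data layout are assumptions of this sketch.

```cpp
#include <algorithm>
#include <set>
#include <vector>

// Removes redundant nodes from the dominating set Di. A node v is redundant
// iff every node of its closed neighborhood is covered by at least two nodes
// of Di, so that Di \ {v} still dominates the same nodes.
void reduceDominatingSet(std::set<int>& Di,
                         const std::vector<std::vector<int>>& adj,
                         const std::vector<double>& lifetime) {
    const int n = static_cast<int>(adj.size());
    std::vector<int> coverCount(n, 0);
    for (int d : Di) {                       // count how often each node is covered by Di
        ++coverCount[d];
        for (int u : adj[d]) ++coverCount[u];
    }
    // Candidates sorted by increasing lifetime: try to drop the weakest node first.
    std::vector<int> cand(Di.begin(), Di.end());
    std::sort(cand.begin(), cand.end(),
              [&](int a, int b) { return lifetime[a] < lifetime[b]; });
    for (int v : cand) {
        bool redundant = (coverCount[v] >= 2);
        for (int u : adj[v])
            if (coverCount[u] < 2) { redundant = false; break; }
        if (redundant) {                     // safe to remove v from Di
            Di.erase(v);
            --coverCount[v];
            for (int u : adj[v]) --coverCount[u];
        }
    }
}
```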
DestroyPartially(D): Let D = {D_1, D_2, …, D_m} be the valid solution given as input to the destruction procedure. In the following, we outline the three destruction strategies that are applied sequentially: partial solution removal, worst node removal, and random node removal. In this context, it is important to note that, on the one hand, the best solution does not necessarily correspond to the solution with the maximum number of disjoint dominating sets; on the other hand, the size of a re-constructed solution after performing the destruction and reconstruction phases may differ from the size of the solution that served as input to these two phases. With this in mind, some disjoint dominating sets should be completely removed as a first step of partial solution destruction. For this purpose, max{1, r_del · |D|} randomly chosen dominating sets are removed from D, resulting in a partial solution D^p = {D_1^p, D_2^p, …, D_r^p}, where r < m.
Then, since the quality of a subset, D_i ∈ D, is determined as the minimum lifetime of all its nodes, keeping the node with the smallest lifetime during the partial destruction makes its further improvement impossible. For this reason, the removal of the node, v_worst := argmin{lifetime(v) | v ∈ D_i^p}, from each subset, D_i^p ∈ D^p, i = 1, …, r, becomes necessary. Afterwards, a set of destr_D · |D_i^p| randomly chosen nodes is removed from each subset, D_i^p ∈ D^p, in an iterative way. Thus, at each step, exactly one randomly chosen vertex is removed.
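The three destruction steps described above could be implemented roughly as follows; removing a shuffled suffix is used here as a stand-in for the iterative removal of randomly chosen elements, and the solution representation is an assumption of this sketch.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Partial destruction of a solution D = {D1,...,Dm}:
// (1) remove max{1, r_del * |D|} randomly chosen dominating sets,
// (2) remove the lowest-lifetime node from every remaining set, and
// (3) remove destr_D * |Di| further nodes uniformly at random from each set.
void destroyPartially(std::vector<std::vector<int>>& sol,
                      const std::vector<double>& lifetime,
                      double rDel, double destrD, std::mt19937& rng) {
    // (1) remove whole dominating sets
    std::shuffle(sol.begin(), sol.end(), rng);
    const std::size_t nRemove =
        std::max<std::size_t>(1, static_cast<std::size_t>(rDel * sol.size()));
    sol.resize(sol.size() > nRemove ? sol.size() - nRemove : 0);

    for (auto& Di : sol) {
        // (2) drop the node with the smallest lifetime
        auto worst = std::min_element(Di.begin(), Di.end(),
            [&](int a, int b) { return lifetime[a] < lifetime[b]; });
        if (worst != Di.end()) Di.erase(worst);
        // (3) drop destr_D * |Di| additional randomly chosen nodes
        std::shuffle(Di.begin(), Di.end(), rng);
        const std::size_t k = static_cast<std::size_t>(destrD * Di.size());
        Di.resize(Di.size() - k);
    }
}
```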
Algorithm 2 Procedure GreedyMWDDS(D^p).
Input: A (possibly empty) partial solution D^p = {D_1^p, D_2^p, …, D_k^p}.
Output: A complete valid solution D = {D_1, D_2, …, D_m}
1:  D := ∅
2:  if D^p = ∅ then V_rem := V else V_rem := V \ ∪_{i=1}^{|D^p|} D_i^p
3:  stopping_condition := false
4:  i := 0
5:  while not stopping_condition do
6:      i := i + 1
7:      for each node v ∈ V do
8:          color(v) := WHITE
9:      end for
10:     if (D^p = ∅ or i > |D^p|) then D_i := ∅ else D_i := D_i^p
11:     while D_i is not a dominating set of G (that is, N[D_i] ≠ V) and not stopping_condition do
12:         if V_rem = ∅ then
13:             stopping_condition := true
14:         else
15:             score_max := max{score(v) | v ∈ V_rem}
16:             if score_max = 0 then
17:                 stopping_condition := true    {▸ No node from V_rem has a white closed neighbor}
18:             else
19:                 score_min := min{score(v) | v ∈ V_rem}
20:                 RCL := {v ∈ V_rem | score(v) ≥ score_min + det_D · (score_max − score_min)}
21:                 Choose v* uniformly at random from RCL
22:                 D_i := D_i ∪ {v*}
23:                 V_rem := V_rem \ {v*}
24:                 for each node u ∈ N(v*) do
25:                     if (color(u) = WHITE) then
26:                         color(u) := GRAY
27:                     end if
28:                 end for
29:                 color(v*) := BLACK
30:             end if
31:         end if
32:     end while
33:     if not stopping_condition then
34:         Reduce(D_i, V_rem)    {▸ Remove redundant nodes}
35:         D := D ∪ {D_i}
36:     end if
37: end while
38: return D
AdaptParameters(D, D̂): The solution-specific parameters—the degree of greediness (RCL parameter) and the destruction rate—are adapted depending on the outcome of the destruction and re-construction procedures. More specifically, while the newly generated solution, D̂, is initialized with the default values—that is, det_D̂ := det_max and destr_D̂ := destr_min—the adaptation of the parameter values of D depends on D̂, and vice versa. In case f(D̂) > f(D), D̂ adopts the values of D, that is, det_D̂ := det_D and destr_D̂ := destr_D. Otherwise, the values of D are adapted as follows:
det_D := det_D − 0.1
destr_D := destr_D + (destr_max − destr_min)/9
Once the value of det D falls below the lower bound, det min , it is set back to the upper bound, det max . In the same way, once the value of destr D exceeds the upper bound, destr max , it is set back to the lower bound, destr min .
Note that the constants 0.1 and 9 were fixed after preliminary experiments. In contrast, the values of the seven important parameters listed above are determined by scientific tuning experiments (see Section 4.2). The denominator in the adaptation of destr_D was set to 9 in order to obtain 10 different values between destr_min and destr_max. The motivation behind this adaptive scheme for the degree of greediness and the destruction rate is to use a high degree of greediness and a small degree of solution destruction as long as this leads to better solutions, and to move towards a lower degree of greediness and a higher degree of destruction once no more improving solutions are found.
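The adaptation rule can be summarized in a few lines of C++; the Solution struct and its members are illustrative, not the original data structures.

```cpp
// Adaptation of the solution-specific parameters det_D (degree of greediness)
// and destr_D (destruction rate), as described above.
struct Solution {
    double value;    // objective f(D)
    double det;      // degree of greediness det_D
    double destr;    // destruction rate destr_D
};

void adaptParameters(Solution& oldSol, Solution& newSol,
                     double detMin, double detMax,
                     double destrMin, double destrMax) {
    // The re-constructed solution always starts from the default values.
    newSol.det = detMax;
    newSol.destr = destrMin;
    if (newSol.value > oldSol.value) {
        // Improvement: the new solution inherits the parameters of the old one.
        newSol.det = oldSol.det;
        newSol.destr = oldSol.destr;
    } else {
        // No improvement: make the old solution less greedy and more destructive.
        oldSol.det -= 0.1;
        oldSol.destr += (destrMax - destrMin) / 9.0;
        if (oldSol.det < detMin) oldSol.det = detMax;          // wrap around
        if (oldSol.destr > destrMax) oldSol.destr = destrMin;  // wrap around
    }
}
```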
SelectNextPopulation(P, P_new, cnt): This last function concerns the selection/generation of the solutions for the population of the next iteration. If cnt < max_noimpr, the new population, P, is simply filled with the p_size best solutions from P ∪ P_new. In case cnt = max_noimpr, all solutions, apart from the best one, are deleted from P, and p_size − 1 new solutions are added via the use of the GreedyMWDDS(D^p) procedure (used with D^p = ∅ as input). Note that in this case, the RCL parameter, det_D, is each time randomly picked from {0.5, 0.6, 0.7, 0.8, 0.9, 1}. This set of values was chosen after preliminary experiments. The variation within this set introduces some diversification, with the hope of covering unexplored areas of the search space. Moreover, cnt is set back to zero. In summary, this function implements a restart mechanism that is triggered once max_noimpr iterations have been performed without an improvement of the best-so-far solution, D_bsf.
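A sketch of this selection/restart step is given below. The construction of fresh solutions is abstracted behind a callback that stands for a call to GreedyMWDDS with an empty partial solution; the struct layout and names are assumptions of this sketch.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Minimal solution record for this sketch (objective value only).
struct Solution { double value; /* ... dominating sets, det_D, destr_D ... */ };

// Keep the p_size best solutions from P and P_new; once max_noimpr iterations
// have passed without improvement, keep only the best one and rebuild the
// rest of the population from scratch (restart).
void selectNextPopulation(std::vector<Solution>& P,
                          const std::vector<Solution>& Pnew,
                          int& cnt, int maxNoImpr, std::size_t pSize,
                          const std::function<Solution()>& constructFromScratch) {
    P.insert(P.end(), Pnew.begin(), Pnew.end());
    std::sort(P.begin(), P.end(),
              [](const Solution& a, const Solution& b) { return a.value > b.value; });
    if (cnt < maxNoImpr) {
        if (P.size() > pSize) P.resize(pSize);   // keep the p_size best solutions
    } else {
        P.resize(1);                             // keep only the best solution
        while (P.size() < pSize) P.push_back(constructFromScratch());
        cnt = 0;                                 // restart completed
    }
}
```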

4. Experimental Evaluation

We implemented the proposed PBIG algorithm in ANSI C++ using GCC 10.2.0 for the compilation of the software. Moreover, we decided to compare PBIG with the following algorithmic approaches: (1) the best of three available local search approaches from the literature, called VD (Variable Depth) [27]; (2) our own greedy heuristic, labeled GH-MWDDS + [28]; (3) application of the ILP solver ILOG CPLEX 20.01 in a single-threaded mode to all problem instances. Note that, surprisingly, GH-MWDDS + is currently the state-of-the-art method used to solve the MWDDS problem, outperforming both the local search method (VD) and CPLEX. In order to conduct a fair comparison to the VD algorithm, we used the original source code provided by the authors of [27].
The time limit for each application of CPLEX was set to 2 CPU hours. Moreover, the experiments were performed on a cluster of machines with two Intel® Xeon® Silver 4210 CPUs, with 10 cores of 2.20 GHz and 92 Gbytes of RAM.

4.1. Problem Instances

All considered techniques were applied to two sets of benchmark instances. The first set, consisting of 640 random graph instances, was already used for the evaluation of GH-MWDDS + in [28]. In particular, this set—henceforth called Set1—contained graphs with n ∈ {50, 100, 150, 200, 250} nodes. For each value of n, there were graphs of different densities, expressed by the average degree, d. In the case of n = 50, for example, Set1 contained graphs with average degrees of d ∈ {15, 20, 25, 30, 35}. For each combination of n and d, Set1 contained 20 randomly generated graphs.
In order to test our algorithms on graphs that are also commonly used to model sensor networks, we generated an additional set of benchmark instances (Set2) consisting of random geometric graphs (RGGs). These graphs were generated by scattering n ∈ {100, 500, 1000} nodes randomly on the unit square, [0, 1]². This means that each node, i, had its location at (x_i, y_i) ∈ [0, 1]². Two nodes, i and j, were then connected by an edge if and only if the Euclidean distance between i and j was smaller than or equal to a predefined threshold value, r > 0. For each n ∈ {100, 500, 1000}, we considered five different threshold values. Moreover, for each combination of n and r, we randomly generated 20 graphs. Accordingly, Set2 consists of 300 problem instances.
Finally, note that—both in the case of Set1 and Set2—each node (sensor) of the network was given a random real value between 0 and 1 as node weight (lifetime). Both benchmark sets can be obtained at https://www.iiia.csic.es/~christian.blum/research.html#Instances (accessed on 16 January 2022).
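For reference, an RGG instance as described above can be generated along the following lines; the struct layout, the function name, and the use of a Mersenne Twister generator are our own choices for this sketch.

```cpp
#include <cmath>
#include <random>
#include <utility>
#include <vector>

// A random geometric graph instance: n nodes scattered uniformly at random on
// the unit square, an edge whenever the Euclidean distance is at most r, and a
// random lifetime (node weight) between 0 and 1 for each node.
struct RGGInstance {
    std::vector<std::pair<double, double>> pos;   // node coordinates
    std::vector<std::vector<int>> adj;            // adjacency lists
    std::vector<double> lifetime;                 // node weights
};

RGGInstance generateRGG(int n, double r, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    RGGInstance g;
    g.pos.resize(n);
    g.adj.resize(n);
    g.lifetime.resize(n);
    for (int i = 0; i < n; ++i) {
        g.pos[i] = {unit(rng), unit(rng)};
        g.lifetime[i] = unit(rng);                // random node weight
    }
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) {
            const double dx = g.pos[i].first - g.pos[j].first;
            const double dy = g.pos[i].second - g.pos[j].second;
            if (std::sqrt(dx * dx + dy * dy) <= r) {   // connect if within range
                g.adj[i].push_back(j);
                g.adj[j].push_back(i);
            }
        }
    return g;
}
```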

4.2. Algorithm Tuning

PBIG requires well-working parameter values for the following seven parameters:
1. Population size (p_size);
2. Lower bound of the determinism rate (det_min);
3. Upper bound of the determinism rate (det_max);
4. Lower bound of the destruction rate (destr_min);
5. Upper bound of the destruction rate (destr_max);
6. Number of iterations without improvement (max_noimpr);
7. Deletion rate (r_del).
For the purpose of parameter tuning, we used the scientific tuning software irace [32]. More specifically, we tuned PBIG separately for Set1 and Set2. For this purpose, we generated specific tuning instances as follows. For Set1, exactly one instance was randomly generated for the following combinations of n (number of nodes) and d (average degree): (n = 50, d = 15), (n = 50, d = 35), (n = 100, d = 20), (n = 100, d = 60), (n = 150, d = 30), (n = 150, d = 90), (n = 200, d = 40), (n = 200, d = 100), (n = 250, d = 50), and (n = 250, d = 140). In other words, 10 instances were specifically generated in order to tune PBIG for its application to instances from Set1. In this case, the irace software was run with a budget of 5000 algorithm applications. Regarding Set2, we randomly generated one tuning instance for each of the following combinations of n and r (threshold value for connecting two nodes): (n = 100, r = 0.2), (n = 100, r = 0.3), (n = 500, r = 0.1), (n = 500, r = 0.2), (n = 1000, r = 0.05), and (n = 1000, r = 0.15). In this case, as irace is applied with only six tuning instances, the budget was limited to 3000 algorithm applications. The obtained parameter value settings are shown in Table 1.

4.3. Results and Discussion

First of all, note that PBIG was applied exactly once to each problem instance, with a computation time limit of n / 2 CPU seconds. Table 2 presents the results of all competing methods—that is, CPLEX, VD, GH-MWDDS + , and PBIG—for the instances of Set1. The first two columns indicate the problem instance type in terms of: (1) the number of nodes (n) and (2) the average degree (d). Naturally, the density of the networks grows with an increasing average degree, d. Each table row provides the average result of each competing algorithm for the 20 generated problem instances concerning the corresponding combination of n and d. Table columns 3 and 4 show the results of CPLEX. The first of these columns (with the heading “Value”) provides the average quality of the best solutions generated by CPLEX for 20 instances, while the second column presents the average gap (in percent) between the objective function value of the solutions obtained by CPLEX and the best upper bounds identified by CPLEX. The results of the other three competitors are shown by the columns with the headings “Value” and “Time”. The first column provides—as in the case of CPLEX—the average quality of the generated solutions, while the second one provides the computation time. In the case of VD and GH-MWDDS + , the computation time corresponds to the time at which the algorithm terminated, while in the case of PBIG, the computation time is the time at which the best solution of a run was found on average. The corresponding standard deviation in the case of PBIG is provided in an additional column with the heading “ σ T i m e ”. Finally, note that the best result in each row is indicated in bold font.
The results displayed in Table 2 allow for the following observations:
  • As already mentioned in [28], solving the MWDDS problem by means of an ILP solver such as CPLEX is only useful in the context of the smallest of all problem instances. In fact, even though CPLEX obtains the best results in the case of ( n = 50 , d = 15 ) , the gap information indicates that—even in this case—CPLEX is far from being able to prove optimality.
  • For all instances, apart from ( n = 50 , d = 15 ) , PBIG outperforms the remaining approaches. In particular, the current state-of-the-art method, GH-MWDDS + , is consistently outperformed. This shows that our way of extending the solution construction mechanism of GH-MWDDS + into a PBIG algorithm was successful.
  • Both GH-MWDDS + and PBIG clearly outperform the best local search algorithm (VD) from the literature. In fact, while VD achieves an average solution quality of 1.757, GH-MWDDS + obtains an average solution quality of 9.515, and PBIG achieves one of 10.321. This does not only hold for solution quality but also for computation time. While VD requires a computation time of 984.143 seconds on average, GH-MWDDS + requires only 0.006 seconds. Even the average computation time of PBIG is, with 20.125 seconds, around 50 times lower than that of VD.
Next, we study the results obtained by CPLEX, GH-MWDDS + , and PBIG for the new RGG instances from Set2. These results are shown in Table 3, which has the same structure as Table 2. The only exception is the second table column, which provides the threshold value, r, used to generate the RGGs, instead of the average degree, d. The following conclusions can be drawn based on the obtained results:
  • First of all, CPLEX seems to have fewer problems in solving RGG instances than random graphs. In fact, CPLEX is able to solve all 60 instances with n = 100 and r ∈ {0.2, 0.225, 0.25} to proven optimality. Furthermore, 19 out of 20 instances with n = 100 and r = 0.275 are solved to optimality, as well as 18 out of 20 cases with n = 100 and r = 0.3. This is in contrast to the case of the random graphs, for which CPLEX was not even able to solve problem instances with 50 nodes to optimality. Nevertheless, for the larger instances (with n ∈ {500, 1000}), CPLEX was, with very few exceptions, only able to derive the trivial solutions that do not contain any dominating sets.
  • PBIG obtains the same results as CPLEX in those cases in which CPLEX is able to provide optimal solutions. Moreover, PBIG is able to do so in very short computation times of less than 10 s.
  • As in the case of the instances in Set1, PBIG also consistently outperforms the other competitors for the RGG instances in Set2. While GH-MWDDS + obtains an average solution quality of 2.911, PBIG achieves one of 3.134.
Finally, we decided to study the structure of the solutions provided by GH-MWDDS + and PBIG in more detail. In particular, this was performed for two cases: the first (out of 20) RGG graph with 100 nodes and a threshold value of r = 0.2, and the first (out of 20) RGG graph with 100 nodes and a threshold value of r = 0.3. In both cases, we graphically present the solutions of GH-MWDDS + and PBIG in Figure 2 (for the graph with r = 0.2) and Figure 3 (for the graph with r = 0.3). Note that the node color in all the graphics indicates the dominating set to which a node belongs; the purple color indicates that the corresponding node is not assigned to any of the disjoint dominating sets. The four solutions shown in these figures are additionally provided in textual form in Table 4. The four sub-tables provide all disjoint dominating sets, the color in which these sets are shown in the graphics of Figure 2 and Figure 3, the number of nodes contained in all dominating sets, their lifetime, and the lists of nodes belonging to the dominating sets.
The following interesting observations can be made. First, in both cases, the structure of the PBIG solution is quite different from the structure of the GH-MWDDS + solution. This means that PBIG does not just locally improve the GH-MWDDS + solutions; it often seems to perform a profound restructuring. Second, in the case of the graph with (n = 100, r = 0.2), PBIG comes up with a solution that contains one more dominating set than the GH-MWDDS + solution. As a consequence, the PBIG solution leaves fewer nodes unused in comparison to the GH-MWDDS + solution (44 unused nodes vs. 58 nodes). Third, the solutions of PBIG and GH-MWDDS + in the case of the graph with n = 100 and r = 0.3 (Figure 3) indicate that making use of more nodes does not always lead to better solutions. More specifically, both algorithms generate solutions with six disjoint dominating sets. While the GH-MWDDS + solution makes use of 47 nodes, the PBIG solution only makes use of 46 nodes. Nevertheless, the PBIG solution is better, with an objective function value of 3.106, in comparison to a quality of 2.908 for the solution generated by GH-MWDDS + . This is because the dominating sets in the PBIG solution have a longer lifetime on average than those in the GH-MWDDS + solution.
Finally, we decided to show the evolution of the quality of the solutions produced by PBIG over time. This was performed for two of the hardest cases. In particular, we chose the first random graph, with n = 250 nodes and an average node degree of d = 50, and the first random geometric graph, with n = 1000 nodes and a threshold of r = 0.15. Remember that the computation time limit is n/2 CPU seconds in both cases. The graphics in Figure 4 show PBIG's evolution for 10 runs per problem instance. The shaded area around the average behavior line indicates the variance of the algorithm. Moreover, the horizontal, dashed lines indicate the quality of the solutions produced by GH-MWDDS + . Note that in both cases, PBIG outperforms GH-MWDDS + very early in each run. In the case of Figure 4a, most of the runs even start off with a solution better than the one of GH-MWDDS + . Additionally, note that the given computation time is sufficient in both cases, as PBIG shows clear signs of convergence before reaching the computation time limit of 125 CPU seconds in the case of Figure 4a, and 500 CPU seconds in the case of Figure 4b.

5. Conclusions

This paper dealt with lifetime maximization in wireless sensor networks by means of solving an optimization problem known as the maximum weighted disjoint dominating sets problem. As shown by the weak results of the ILP solver, CPLEX, this problem is a challenging combinatorial optimization problem. In this work, we extended an existing high-quality greedy algorithm from the literature towards a population-based iterated greedy algorithm. This algorithm worked on a population of solutions. At each iteration, it applied partial destruction to each solution in the population. Subsequently, the obtained partial solutions were subjected to re-construction, often resulting in different and improved solutions. We compared our approach to three competitors: the application of CPLEX, the best greedy algorithm from the literature, and the best available local search method. This comparison was based on two benchmark sets. The first set, consisting of 640 random graphs, was taken from the related literature, while the second set, consisting of 300 random geometric graphs, was newly generated for this work. In summary, we can say that our population-based iterated greedy algorithm consistently outperformed all competitors. This algorithm can therefore be called the new state-of-the-art approach for the tackled problem.
Given the weak performance of CPLEX for this problem, one promising line of future work might be the development of specialized exact approaches. Another line of research might focus on hybrid techniques such as construct, merge, solve, and adapt (CMSA) [33]. In particular, CMSA allows one to profit from ILP solvers such as CPLEX, even in the context of large problem instances for which a direct application of CPLEX is currently not beneficial.

Author Contributions

Conceptualization, S.B.; investigation, S.B. and P.P.-D.; methodology, S.B. and C.B.; software, S.B.; supervision, C.B.; writing—original draft preparation, S.B. and C.B.; writing—review and editing, C.B. and P.P.-D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by grant PID2019-104156GB-I00 (project CI-SUSTAIN) funded by MCIN/AEI/10.13039/501100011033.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Both benchmark sets used in this work can be obtained at https://www.iiia.csic.es/~christian.blum/research.html#Instances (accessed on 16 January 2022).

Acknowledgments

The authors would like to thank Tayler Pino for providing the code of the VD local search algorithm; S. Bouamama is grateful to DGRSDT-MESRS (Algeria) for financial support.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PBIG	Population-Based Iterated Greedy
MDDS	Maximum Disjoint Dominating Sets
MWDDS	Maximum Weighted Disjoint Dominating Sets

References

  1. Yetgin, H.; Cheung, K.T.K.; El-Hajjar, M.; Hanzo, L.H. A survey of network lifetime maximization techniques in wireless sensor networks. IEEE Commun. Surv. Tutor. 2017, 19, 828–854. [Google Scholar] [CrossRef] [Green Version]
  2. Kandris, D.; Nakas, C.; Vomvas, D.; Koulouras, G. Applications of wireless sensor networks: An up-to-date survey. Appl. Syst. Innov. 2020, 3, 14. [Google Scholar] [CrossRef] [Green Version]
  3. Rodrigues, L.M.; Montez, C.; Budke, G.; Vasques, F.; Portugal, P. Estimating the lifetime of wireless sensor network nodes through the use of embedded analytical battery models. J. Sens. Actuator Netw. 2017, 6, 8. [Google Scholar] [CrossRef] [Green Version]
  4. Sharma, H.; Haque, A.; Jaffery, Z.A. Maximization of wireless sensor network lifetime using solar energy harvesting for smart agriculture monitoring. Ad Hoc Netw. 2019, 94, 101966. [Google Scholar] [CrossRef]
  5. Mansourkiaie, F.; Ismail, L.S.; Elfouly, T.M.; Ahmed, M.H. Maximizing lifetime in wireless sensor network for structural health monitoring with and without energy harvesting. IEEE Access 2017, 5, 2383–2395. [Google Scholar] [CrossRef]
  6. Lewandowski, M.; Płaczek, B.; Bernas, M. Classifier-Based Data Transmission Reduction in Wearable Sensor Network for Human Activity Monitoring. Sensors 2021, 21, 85. [Google Scholar] [CrossRef]
  7. Lewandowski, M.; Bernas, M.; Loska, P.; Szymała, P.; Płaczek, B. Extending Lifetime of Wireless Sensor Network in Application to Road Traffic Monitoring. In Computer Networks, Proceedings of the International Conference on Computer Networks, Kamień Śląski, Poland, 25–27 June 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 112–126. [Google Scholar]
  8. Cardei, M.; Thai, M.T.; Li, Y.; Wu, W. Energy-efficient target coverage in wireless sensor networks. In Proceedings of the IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies, Miami, FL, USA, 13–17 March 2005; Volume 3, pp. 1976–1984. [Google Scholar]
  9. Nguyen, T.N.; Huynh, D.T. Extending sensor networks lifetime through energy efficient organization. In Proceedings of the International Conference on Wireless Algorithms, Systems and Applications (WASA 2007), Chicago, IL, USA, 1–3 August 2007; pp. 205–212. [Google Scholar]
  10. Kui, X.; Wang, J.; Zhang, S.; Cao, J. Energy Balanced Clustering Data Collection Based on Dominating Set in Wireless Sensor Networks. Adhoc Sens. Wirel. Netw. 2015, 24, 199–217. [Google Scholar]
  11. Hedar, A.R.; Abdulaziz, S.N.; Mabrouk, E.; El-Sayed, G.A. Wireless sensor networks fault-tolerance based on graph domination with parallel scatter search. Sensors 2020, 20, 3509. [Google Scholar] [CrossRef]
12. Haynes, T.W.; Hedetniemi, S.T.; Henning, M.A. Topics in Domination in Graphs; Springer: Berlin/Heidelberg, Germany, 2020.
13. Slijepcevic, S.; Potkonjak, M. Power efficient organization of wireless sensor networks. In Proceedings of the IEEE International Conference on Communications, Conference Record (Cat. No. 01CH37240), Helsinki, Finland, 11–14 June 2001; Volume 2, pp. 472–476.
14. Wang, H.; Li, Y.; Chang, T.; Chang, S. An effective scheduling algorithm for coverage control in underwater acoustic sensor network. Sensors 2018, 18, 2512.
15. Liao, C.C.; Ting, C.K. A novel integer-coded memetic algorithm for the set k-cover problem in wireless sensor networks. IEEE Trans. Cybern. 2017, 48, 2245–2258.
16. Chen, Z.; Li, S.; Yue, W. Memetic algorithm-based multi-objective coverage optimization for wireless sensor networks. Sensors 2014, 14, 20500–20518.
17. Balaji, S.; Anitha, M.; Rekha, D.; Arivudainambi, D. Energy efficient target coverage for a wireless sensor network. Measurement 2020, 165, 108167.
18. D’Ambrosio, C.; Iossa, A.; Laureana, F.; Palmieri, F. A genetic approach for the maximum network lifetime problem with additional operating time slot constraints. Soft Comput. 2020, 24, 14735–14741.
19. Li, J.; Potru, R.; Shahrokhi, F. A performance study of some approximation algorithms for computing a small dominating set in a graph. Algorithms 2020, 13, 339.
20. Li, R.; Hu, S.; Liu, H.; Li, R.; Ouyang, D.; Yin, M. Multi-start local search algorithm for the minimum connected dominating set problems. Mathematics 2019, 7, 1173.
21. Bouamama, S.; Blum, C. An improved greedy heuristic for the minimum positive influence dominating set problem in social networks. Algorithms 2021, 14, 79.
22. Garey, M.; Johnson, D. Computers and Intractability: A Guide to the Theory of NP-Completeness; W. H. Freeman: New York, NY, USA, 1979.
23. Cardei, M.; MacCallum, D.; Cheng, M.X.; Min, M.; Jia, X.; Li, D.; Du, D.Z. Wireless sensor networks with energy efficient organization. J. Interconnect. Netw. 2002, 3, 213–229.
24. Feige, U.; Halldórsson, M.M.; Kortsarz, G.; Srinivasan, A. Approximating the domatic number. SIAM J. Comput. 2002, 32, 172–195.
25. Moscibroda, T.; Wattenhofer, R. Maximizing the lifetime of dominating sets. In Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium, Denver, CO, USA, 4–8 April 2005; p. 8.
26. Islam, K.; Akl, S.G.; Meijer, H. Maximizing the lifetime of wireless sensor networks through domatic partition. In Proceedings of the 2009 IEEE 34th Conference on Local Computer Networks, Zurich, Switzerland, 20–23 October 2009; pp. 436–442.
27. Pino, T.; Choudhury, S.; Al-Turjman, F. Dominating set algorithms for wireless sensor networks survivability. IEEE Access 2018, 6, 17527–17532.
28. Balbal, S.; Bouamama, S.; Blum, C. A greedy heuristic for maximizing the lifetime of wireless sensor networks based on disjoint weighted dominating sets. Algorithms 2021, 14, 170.
29. Bouamama, S.; Blum, C.; Boukerram, A. A population-based iterated greedy algorithm for the minimum weight vertex cover problem. Appl. Soft Comput. 2012, 12, 1632–1639.
30. Bouamama, S.; Blum, C. On solving large-scale instances of the knapsack problem with setup by means of an iterated greedy algorithm. In Proceedings of the 6th International Conference on Systems and Control (ICSC), Batna, Algeria, 7–9 May 2017; pp. 342–347.
31. Ruiz, R.; Stützle, T. A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. Eur. J. Oper. Res. 2007, 177, 2033–2049.
32. López-Ibáñez, M. The irace package: Iterated racing for automatic algorithm configuration. Oper. Res. Perspect. 2016, 3, 43–58.
33. Blum, C.; Davidson, P.P.; López-Ibáñez, M.; Lozano, J.A. Construct, Merge, Solve & Adapt: A new general algorithm for combinatorial optimization. Comput. Oper. Res. 2016, 68, 75–88.
Figure 1. An illustrative example of the MWDDS problem. (a) Problem instance; (b) feasible solution D = {{3, 4}, {1, 5, 6}}, with f(D) = 0.5; (c) optimal solution D* = {{1, 3, 5}, {2, 4, 6}}, with f(D*) = 0.8.
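For readers who want to recompute objective values such as those quoted in this caption and in Table 4, the following is a minimal Python sketch (not the authors' code) that evaluates a candidate solution. It assumes the MWDDS objective is the sum, over the chosen pairwise disjoint dominating sets, of the minimum node lifetime within each set, which is consistent with the per-set lifetimes reported in Table 4. The small graph and lifetimes are hypothetical placeholders, not the instance drawn in Figure 1.

```python
def is_dominating(adj, subset):
    """True if every node of the graph is in `subset` or adjacent to a node of `subset`."""
    covered = set(subset)
    for v in subset:
        covered.update(adj[v])
    return covered == set(adj)

def mwdds_objective(adj, lifetime, solution):
    """Sum of per-set minimum lifetimes; raises if the sets overlap or do not dominate."""
    used = set()
    total = 0.0
    for s in solution:
        s = set(s)
        if used & s:
            raise ValueError("dominating sets are not disjoint")
        if not is_dominating(adj, s):
            raise ValueError("a chosen set is not dominating")
        used |= s
        total += min(lifetime[v] for v in s)
    return total

# Hypothetical 6-node cycle and lifetimes (illustrative only, not the Figure 1 instance).
adj = {1: [2, 6], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 6], 6: [5, 1]}
life = {1: 0.4, 2: 0.5, 3: 0.6, 4: 0.3, 5: 0.7, 6: 0.5}
print(round(mwdds_objective(adj, life, [{1, 3, 5}, {2, 4, 6}]), 3))  # 0.4 + 0.3 = 0.7
```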
Figure 2. Solutions of GH-MWDDS+ (a) and PBIG (b) for the first RGG graph with 100 nodes and a threshold value of r = 0.2. The lifetime of each node is provided as the node label. Moreover, the node colors indicate to which dominating set a node belongs. In both cases, the color purple indicates that the respective node is not chosen for a dominating set.
Figure 3. Solutions of GH-MWDDS+ (a) and PBIG (b) for the first RGG graph with 100 nodes and a threshold value of r = 0.3. The lifetime of each node is provided as the node label. Moreover, the node colors indicate to which dominating set a node belongs. In both cases, the color purple indicates that the respective node is not chosen for a dominating set.
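The Set2 instances referenced in Figures 2 and 3 and in Table 3 are random geometric graphs parameterized by the number of nodes n and a distance threshold r. As an illustration of how such an instance can be generated (the paper's exact generator and lifetime distribution are not restated in this excerpt), the sketch below places n nodes uniformly at random in the unit square and connects two nodes whenever their Euclidean distance is at most r; the uniformly drawn lifetimes are purely illustrative.

```python
import math
import random

def random_geometric_graph(n, r, seed=0):
    """Generate an RGG on n nodes with distance threshold r; returns positions, adjacency, lifetimes."""
    rng = random.Random(seed)
    pos = {v: (rng.random(), rng.random()) for v in range(n)}
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if math.dist(pos[u], pos[v]) <= r:  # edge if nodes are within communication range
                adj[u].append(v)
                adj[v].append(u)
    # Illustrative node lifetimes; the paper's actual lifetime distribution is an assumption here.
    lifetime = {v: round(rng.uniform(0.1, 1.0), 3) for v in range(n)}
    return pos, adj, lifetime

pos, adj, lifetime = random_geometric_graph(n=100, r=0.2)
```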
Figure 4. Evolution of the quality of the solutions produced by PBIG over time. Both graphics show the results of 10 independent PBIG runs.
Table 1. Parameters, value domains, and values chosen for PBIG by irace.
Parameter | Domain | Value (Set1) | Value (Set2)
p_size | {1, ..., 100} | 62 | 42
det_min | [0, 1] | 0.91 | 0.56
det_max | [0, 1] | 0.96 | 0.99
destr_min | [0, 1] | 0.44 | 0.22
destr_max | [0, 1] | 0.61 | 0.44
max_noimpr | {1, ..., 500} | 417 | 244
r_del | [0.1, 0.5] | 0.11 | 0.17
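For scripting experiments, the two tuned configurations in Table 1 can be bundled as plain data. The sketch below is only illustrative: the parameter names and values are taken from Table 1, while the brief role annotations (population size, determinism-rate bounds, destruction-rate bounds, non-improvement limit) reflect a standard PBIG setup and are assumptions, not a specification taken from the authors' implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PBIGConfig:
    p_size: int        # population size (assumed meaning)
    det_min: float     # lower bound of the determinism rate (assumed meaning)
    det_max: float     # upper bound of the determinism rate (assumed meaning)
    destr_min: float   # lower bound of the destruction rate (assumed meaning)
    destr_max: float   # upper bound of the destruction rate (assumed meaning)
    max_noimpr: int    # limit on non-improving iterations (assumed meaning)
    r_del: float       # role not described in this excerpt

# Values as tuned by irace (Table 1).
CONFIG_SET1 = PBIGConfig(62, 0.91, 0.96, 0.44, 0.61, 417, 0.11)
CONFIG_SET2 = PBIGConfig(42, 0.56, 0.99, 0.22, 0.44, 244, 0.17)
```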
Table 2. Numerical results for the instances of Set1 (random graphs).
n | d | CPLEX Value | CPLEX Gap (%) | VD Value | VD Time (s) | GH-MWDDS+ Value | GH-MWDDS+ Time (s) | PBIG Value | PBIG Time (s) | PBIG σ Time
50 | 15 | 2.779 | 32.839 | 0.555 | 1.984 | 2.191 | 0.002 | 2.716 | 2.935 | 6.080
50 | 20 | 3.922 | 121.088 | 1.014 | 7.651 | 3.450 | 0.000 | 3.960 | 2.763 | 5.228
50 | 25 | 5.098 | 158.038 | 1.490 | 4.885 | 4.672 | 0.000 | 5.244 | 2.269 | 5.475
50 | 30 | 6.432 | 200.567 | 2.780 | 13.117 | 6.042 | 0.000 | 6.663 | 1.993 | 2.607
50 | 35 | 7.929 | 197.434 | 3.166 | 25.048 | 7.546 | 0.000 | 8.189 | 5.615 | 5.841
100 | 20 | 2.467 | 405.669 | 0.423 | 30.458 | 2.842 | 0.001 | 3.475 | 17.670 | 17.685
100 | 30 | 3.202 | >1000.0 | 0.576 | 31.796 | 4.580 | 0.001 | 5.402 | 19.162 | 16.345
100 | 40 | 3.338 | >1000.0 | 1.341 | 183.921 | 6.794 | 0.002 | 7.695 | 23.592 | 16.882
100 | 50 | 3.857 | >1000.0 | 2.407 | 225.704 | 8.525 | 0.002 | 9.687 | 24.898 | 17.007
100 | 60 | 5.804 | 823.022 | 3.226 | 362.775 | 11.174 | 0.002 | 12.123 | 36.212 | 11.447
150 | 30 | 0.055 | >1000.0 | 0.326 | 106.602 | 4.141 | 0.002 | 4.990 | 39.013 | 21.052
150 | 40 | 0.028 | >1000.0 | 0.745 | 156.994 | 5.872 | 0.002 | 6.830 | 49.541 | 18.100
150 | 50 | 0.011 | >1000.0 | 1.009 | 182.630 | 7.570 | 0.002 | 8.618 | 59.876 | 17.433
150 | 60 | 0 | >1000.0 | 1.521 | 646.268 | 9.371 | 0.005 | 10.445 | 58.715 | 15.019
150 | 70 | 0 | >1000.0 | 2.464 | 1028.650 | 11.446 | 0.005 | 12.573 | 60.901 | 10.934
150 | 80 | 0 | >1000.0 | 3.036 | 549.900 | 13.611 | 0.003 | 14.744 | 64.143 | 11.666
150 | 90 | 0 | >1000.0 | 4.193 | 906.030 | 15.589 | 0.005 | 16.600 | 59.969 | 13.109
200 | 40 | 0 | >1000.0 | 0.370 | 81.750 | 5.486 | 0.005 | 6.486 | 81.318 | 15.772
200 | 50 | 0 | >1000.0 | 0.483 | 186.900 | 6.848 | 0.006 | 7.760 | 91.498 | 7.205
200 | 60 | 0 | >1000.0 | 0.917 | 313.850 | 8.710 | 0.008 | 9.635 | 82.300 | 12.738
200 | 70 | 0 | >1000.0 | 1.680 | 3112.950 | 10.395 | 0.008 | 11.252 | 81.530 | 17.200
200 | 80 | 0 | >1000.0 | 1.795 | 1629.400 | 12.529 | 0.003 | 13.415 | 88.926 | 10.166
200 | 90 | 0 | >1000.0 | 2.045 | 1364.400 | 14.046 | 0.009 | 14.693 | 77.539 | 23.186
200 | 100 | 0 | >1000.0 | 3.098 | 3024.830 | 15.993 | 0.009 | 16.941 | 82.575 | 14.761
250 | 50 | 0 | >1000.0 | 0.329 | 557.676 | 6.783 | 0.008 | 7.536 | 103.797 | 23.062
250 | 60 | 0 | >1000.0 | 0.945 | 1400.788 | 8.285 | 0.010 | 9.021 | 99.990 | 30.057
250 | 70 | 0 | >1000.0 | 1.326 | 2380.366 | 9.699 | 0.013 | 10.499 | 97.342 | 29.580
250 | 80 | 0 | >1000.0 | 1.445 | 647.763 | 11.571 | 0.013 | 12.096 | 52.149 | 49.798
250 | 90 | 0 | >1000.0 | 1.591 | 1242.663 | 12.978 | 0.016 | 13.589 | 70.928 | 50.748
250 | 100 | 0 | >1000.0 | 2.443 | 2210.880 | 14.800 | 0.016 | 15.392 | 41.700 | 53.728
250 | 120 | 0 | >1000.0 | 2.781 | 2249.350 | 18.418 | 0.018 | 18.895 | 41.508 | 52.769
250 | 140 | 0 | >1000.0 | 4.713 | 6624.596 | 22.514 | 0.020 | 23.123 | 25.103 | 41.312
Avg | | | | 1.757 | 984.143 | 9.515 | 0.006 | 10.321 | 51.483 | 20.125
Table 3. Numerical results for the instances of Set2 (random geometric graphs).
n | r | CPLEX Value | CPLEX Gap (%) | GH-MWDDS+ Value | GH-MWDDS+ Time (s) | PBIG Value | PBIG Time (s) | PBIG σ Time
100 | 0.2 | 1.116 | 0.000 | 1.064 | 0.000 | 1.116 | 0.051 | 0.088
100 | 0.225 | 1.451 | 0.000 | 1.354 | 0.000 | 1.451 | 1.933 | 4.674
100 | 0.25 | 1.848 | 0.000 | 1.763 | 0.000 | 1.848 | 0.586 | 1.657
100 | 0.275 | 2.610 | 0.204 | 2.462 | 0.001 | 2.610 | 1.040 | 3.835
100 | 0.3 | 3.047 | 7.640 | 2.931 | 0.001 | 3.047 | 2.826 | 9.378
500 | 0.1 | 0.021 | >1000.0 | 1.004 | 0.006 | 1.037 | 5.428 | 17.623
500 | 0.125 | 0.000 | >1000.0 | 1.922 | 0.011 | 2.012 | 2.490 | 5.020
500 | 0.15 | 0.000 | >1000.0 | 3.318 | 0.016 | 3.606 | 28.489 | 41.489
500 | 0.175 | 0.000 | >1000.0 | 4.517 | 0.021 | 4.929 | 43.215 | 67.606
500 | 0.2 | 0.000 | >1000.0 | 6.634 | 0.029 | 7.141 | 82.951 | 81.343
1000 | 0.05 | 0.000 | >1000.0 | 0.262 | 0.014 | 0.266 | 0.373 | 1.093
1000 | 0.075 | 0.000 | >1000.0 | 1.243 | 0.030 | 1.369 | 12.425 | 20.908
1000 | 0.1 | 0.000 | >1000.0 | 2.721 | 0.056 | 3.083 | 83.973 | 106.192
1000 | 0.125 | 0.000 | >1000.0 | 4.791 | 0.092 | 5.199 | 196.269 | 187.676
1000 | 0.15 | 0.000 | >1000.0 | 7.680 | 0.141 | 8.295 | 363.599 | 136.273
Avg | | | | 2.911 | 0.028 | 3.134 | 55.043 | 45.657
Table 4. Detailed textual description of the solutions displayed in Figure 2 and Figure 3.
Solution of GH-MWDDS+ for the first RGG graph with 100 nodes and r = 0.2; see Figure 2a.
ID disjoint set | Color (Figure 2a) | #Nodes | Lifetime | Node IDs
1 | blue | 13 | 0.681 | 7, 27, 36, 43, 60, 62, 71, 72, 78, 82, 88, 89, 91
2 | red | 14 | 0.564 | 2, 5, 6, 8, 10, 12, 20, 37, 76, 79, 80, 87, 95, 97
3 | light green | 15 | 0.185 | 15, 28, 32, 40, 44, 46, 50, 54, 56, 67, 69, 75, 81, 86, 98
unused nodes | purple | 58 | - | 0, 1, 3, 4, 9, 11, 13, 14, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 29, 30, 31, 33, 34, 35, 38, 39, 41, 42, 45, 47, 48, 49, 51, 52, 53, 55, 57, 58, 59, 61, 63, 64, 65, 66, 68, 70, 73, 74, 77, 83, 84, 85, 90, 92, 93, 94, 96, 99
Best solution of PBIG for the first RGG graph with 100 nodes and r = 0.2; see Figure 2b.
ID disjoint set | Color (Figure 2b) | #Nodes | Lifetime | Node IDs
1 | blue | 14 | 0.754 | 12, 27, 36, 37, 43, 60, 62, 71, 78, 80, 82, 88, 89, 91
2 | dark green | 13 | 0.564 | 2, 5, 6, 7, 8, 10, 20, 72, 79, 87, 95, 97, 98
3 | light green | 15 | 0.185 | 15, 28, 32, 40, 44, 46, 50, 54, 56, 63, 67, 75, 81, 83, 86
4 | red | 14 | 0.160 | 16, 24, 31, 42, 51, 53, 57, 59, 68, 69, 76, 85, 90, 96
unused nodes | purple | 44 | - | 0, 1, 3, 4, 9, 11, 13, 14, 17, 18, 19, 21, 22, 23, 25, 26, 29, 30, 33, 34, 35, 38, 39, 41, 45, 47, 48, 49, 52, 55, 58, 61, 64, 65, 66, 70, 73, 74, 77, 84, 92, 93, 94, 99
Solution of GH-MWDDS+ for the first RGG graph with 100 nodes and r = 0.3; see Figure 3a.
ID disjoint set | Color (Figure 3a) | #Nodes | Lifetime | Node IDs
1 | dark green | 6 | 0.715 | 26, 27, 58, 64, 73, 95
2 | light green | 9 | 0.657 | 1, 6, 20, 24, 40, 51, 83, 88, 94
3 | red | 7 | 0.588 | 12, 25, 72, 76, 89, 91, 98
4 | blue | 9 | 0.532 | 2, 15, 38, 39, 61, 75, 79, 93, 99
5 | brown | 8 | 0.337 | 8, 18, 21, 36, 44, 46, 90, 92
6 | orange | 8 | 0.079 | 14, 17, 43, 54, 55, 81, 84, 85
unused nodes | purple | 53 | - | 0, 3, 4, 5, 7, 9, 10, 11, 13, 16, 19, 22, 23, 28, 29, 30, 31, 32, 33, 34, 35, 37, 41, 42, 45, 47, 48, 49, 50, 52, 53, 56, 57, 59, 60, 62, 63, 65, 66, 67, 68, 69, 70, 71, 74, 77, 78, 80, 82, 86, 87, 96, 97
Best solution of PBIG for the first RGG graph with 100 nodes and r = 0.3; see Figure 3b.
ID disjoint set | Color (Figure 3b) | #Nodes | Lifetime | Node IDs
1 | blue | 8 | 0.834 | 6, 20, 22, 26, 27, 51, 73, 95
2 | red | 7 | 0.736 | 12, 25, 58, 72, 76, 88, 94
3 | orange | 8 | 0.532 | 1, 2, 15, 24, 36, 38, 93, 98
4 | dark green | 6 | 0.588 | 44, 61, 64, 74, 83, 89
5 | brown | 8 | 0.337 | 8, 18, 21, 39, 46, 90, 91, 92
6 | light green | 9 | 0.079 | 14, 17, 43, 75, 79, 81, 84, 85, 99
unused nodes | purple | 54 | - | 0, 3, 4, 5, 7, 9, 10, 11, 13, 16, 19, 23, 28, 29, 30, 31, 32, 33, 34, 35, 37, 40, 41, 42, 45, 47, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 62, 63, 65, 66, 67, 68, 69, 70, 71, 77, 78, 80, 82, 86, 87, 96, 97
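The per-set data in Table 4 can be cross-checked directly. Assuming the network lifetime of a solution equals the sum of its per-set lifetimes, the following sketch (not the authors' code) verifies that the four dominating sets reported for the PBIG solution with r = 0.2 (Figure 2b) are pairwise disjoint and then sums their lifetimes.

```python
# (lifetime, node IDs) for the PBIG solution with r = 0.2, copied from Table 4.
sets_pbig_r02 = [
    (0.754, [12, 27, 36, 37, 43, 60, 62, 71, 78, 80, 82, 88, 89, 91]),
    (0.564, [2, 5, 6, 7, 8, 10, 20, 72, 79, 87, 95, 97, 98]),
    (0.185, [15, 28, 32, 40, 44, 46, 50, 54, 56, 63, 67, 75, 81, 83, 86]),
    (0.160, [16, 24, 31, 42, 51, 53, 57, 59, 68, 69, 76, 85, 90, 96]),
]

# Disjointness check: no node ID may appear in more than one set.
all_nodes = [v for _, nodes in sets_pbig_r02 for v in nodes]
assert len(all_nodes) == len(set(all_nodes)), "sets are not disjoint"

# Network lifetime under the stated assumption: sum of per-set lifetimes.
print(round(sum(lt for lt, _ in sets_pbig_r02), 3))  # prints 1.663
```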
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.