1. Introduction
The minimum dominating tree problem for weighted undirected graphs is to find a tree in a weighted undirected graph such that every vertex of the graph is either in the tree or adjacent to it, and the sum of the edge weights of the tree is minimized [1]. Here, adjacent means that there is an edge between the vertex and at least one vertex of the tree. The minimum dominating tree is a concept in graph theory and one of the important classes of tree structures.
A highly related problem, the minimum connected dominating set (MCDS) problem, has been extensively studied for building routing backbones in wireless sensor networks (WSNs) [2,3]. One of the goals of introducing the MCDS in WSNs is to minimize energy consumption; if two devices are too far away from each other, they may consume too much power to communicate [4,5]. Using a routing backbone to transmit messages greatly reduces energy consumption, which increases dramatically as the transmission distance grows [6]. However, some directly connected vertices in an MCDS may still be far away from each other because the MCDS does not account for distance [7]. Therefore, considering the weight of each edge in the routing backbone is more in line with the goal of reducing energy consumption [8]. The minimum dominating tree (MDT) problem was first proposed by Zhang et al. [9] for generating a routing backbone that is well adapted to broadcast protocols.
Shin et al. [1] proved that the MDT problem is NP-hard and introduced an approximation framework for solving it. They also provided heuristic algorithms and mixed-integer programming (MIP) formulations for the MDT problem. Adasme et al. [10] introduced two other MIP formulations, one based on a tree formulation in the bidirected counterpart of the input graph and the other obtained from a generalized spanning tree polyhedron. Adasme et al. [11] proposed a primal dyadic model for the minimum-cost dominating tree problem and an effective inequality to improve the linear relaxation. Álvarez-Miranda et al. [12] proposed an exact solution framework that combined a primal–dual heuristic algorithm with a branch-and-cut approach to transform the problem into a Steiner tree problem with additional constraints. Their framework solved most instances in the literature to proven optimality within three hours.
In recent years, efficient heuristic algorithms for the MDT problem have flourished. Sundar and Singh [13] proposed two metaheuristic algorithms for the MDT problem, an artificial bee colony (ABC-DT) algorithm and an ant colony optimization (ACO-DT) algorithm. These two algorithms were the first metaheuristics for the MDT problem and performed better than previous algorithms. They also provided 54 randomly generated instances in their work, which are considered challenging instances of the MDT problem and are widely used to evaluate the performance of MDT algorithms. Building on this work, Chaurasia and Singh [14] proposed an evolutionary algorithm with guided mutation (EA/G-MP) for the MDT problem. Dražić et al. [15] proposed a variable neighborhood search (VNS) algorithm for the MDT problem. Singh and Sundar [16] proposed another artificial bee colony (ABC-DTP) algorithm for the MDT problem. This new ABC-DTP method differed from ABC-DT in the way it generated initial solutions and in the strategy for determining neighboring solutions. Their experiments showed that, for the MDT problem, ABC-DTP outperformed all problem-specific heuristics and metaheuristics available in the literature. Hu et al. [17] proposed a hybrid algorithm combining a genetic algorithm and iterated local search (GAITLS) to solve the dominating tree problem; experimental results on classical instances showed that the method outperformed existing algorithms. Xiong et al. [18] presented a two-level metaheuristic (TLMH) algorithm for the MDT problem, with a solution sampling phase and two local search-based procedures nested in a hierarchical structure. The results demonstrated the efficiency of the proposed algorithm in terms of solution quality compared with existing metaheuristics.
Metaheuristics have been shown to be very effective in solving many challenging real-world problems [19]. However, for some problems, due to the complexity of the problem structure and the large search space, the classical metaheuristic framework fails to produce the desired results [20]. Many researchers have therefore relied on composite neighborhood structures, most of which, if properly designed, have proven successful [21]. These methods include variable depth search (VDS), which explores a large search space through a series of successive simple neighborhood search operations. Although the basic concepts of VDS algorithms date back to the 1970s [22], researchers have maintained a sustained enthusiasm for this technique [23,24]. For a more detailed survey of VDS, we refer to Ahuja et al. [25,26,27]. Another idea for dealing with problems with complex structures is to use a hierarchical metaheuristic approach, where several metaheuristics are combined in a nested structure. Wu et al. [28] successfully implemented a two-level iterated local search for a network design problem with traffic grooming. According to their analysis, hierarchical metaheuristics must be carefully designed to balance the complexity of the algorithm and its performance; in particular, keeping the outer framework as simple as possible makes the algorithm converge faster. Pop et al. [29] proposed a two-level solution approach for the generalized minimum spanning tree problem. Carrabs et al. [30] introduced a metaheuristic algorithm implementing a two-level structure to solve the all-colors shortest path problem. Contreras-Bolton and Parada [31] proposed an iterated local search method with a two-level solution representation for the generalized minimum spanning tree problem.
In recent years, improved tabu search algorithms and multineighborhood metaheuristic algorithms have been proposed and applied to NP-hard problems. Li et al. [32] proposed an improved tabu search algorithm with an adaptive tabu length and neighborhood structure to solve the vehicle routing problem. Tong et al. [33] established a mixed-integer nonlinear programming model and solved an unmanned aerial vehicle transportation route planning problem with a variable neighborhood tabu search algorithm. Seydanlou et al. [34] proposed a metaheuristic algorithm with a multineighborhood procedure, and experimental results proved the effectiveness of the method. Song et al. [35] proposed a competition-guided multineighborhood local search (CMLS) algorithm for the curriculum-based course timetabling problem, and computational results showed that the proposed algorithm is highly competitive.
In this paper, we design a dual-neighborhood search (DNS) metaheuristic algorithm for the MDT problem that uses two neighborhood moves to perform the search and incorporates a tabu strategy to escape local optima. The DNS algorithm is described in detail in Section 3, the experimental results of the DNS algorithm and comparisons with other algorithms are given in Section 4, and comparative experiments within the DNS algorithm are reported in Section 5.
3. Dual-Neighborhood Search
3.1. Main Framework
The basic idea of our proposed DNS algorithm is to tackle the MDT problem by optimizing the weight of a candidate dominating tree using a neighborhood search-based metaheuristic with two neighborhood move operators. The search space of the DNS consists of all the minimum spanning trees of all the possible dominating sets of the instance graph. The proposed DNS algorithm optimizes the following objective function:

f(T) = α · ND(T) + W(T),

where T stands for the current configuration, i.e., the candidate dominating tree. Notations X and E represent the vertex and edge sets of T, respectively. Function ND(T) calculates the number of vertices not dominated by T. Function W(T) calculates the weight of the minimum spanning tree of T. The constant parameter α balances the importance of ND(T) and W(T). T is a feasible solution to the minimum dominating tree problem if and only if ND(T) = 0.
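To make the objective concrete, the following is a minimal Python sketch under our own data layout; the names `objective` and `kruskal_weight`, and writing α as `alpha`, are our assumptions, not the paper's code. It counts the vertices not dominated by a candidate vertex set X and adds the weight of a minimum spanning tree of the subgraph induced by X, computed with Kruskal's algorithm and a union-find structure.

```python
def kruskal_weight(vertices, edges):
    """Weight of a minimum spanning forest; edges are (weight, a, b) tuples."""
    parent = {v: v for v in vertices}

    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    total = 0
    for w, a, b in sorted(edges):          # Kruskal: scan edges by weight
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            total += w
    return total

def objective(graph, X, alpha):
    """graph: dict vertex -> dict neighbor -> weight; X: candidate vertex set."""
    X = set(X)
    dominated = set(X)
    for v in X:
        dominated.update(graph[v])         # neighbors of X are dominated
    nd = len(graph) - len(dominated)       # ND(T): undominated vertices
    induced = [(w, a, b) for a in X for b, w in graph[a].items()
               if b in X and a < b]        # edges with both endpoints in X
    return alpha * nd + kruskal_weight(X, induced)   # f(T)
```

With a large α, any configuration leaving a vertex undominated is penalized more heavily than any spanning tree weight, which matches the feasibility condition ND(T) = 0.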
The algorithm primarily comprises several key steps. Firstly, an initial solution is generated, followed by a neighborhood evaluation. Subsequently, the best neighborhood move is selected and executed iteratively. During the iteration, the best overall configuration is recorded. The framework of the algorithm can be represented in pseudocode as Algorithm 1.
In Algorithm 1, T₀ represents the initial configuration, T* represents the recorded best overall solution, and T represents the current configuration. In each iteration, the subprocedure Do_NeighborEvaluate evaluates all the neighborhood moves of the current configuration, and the following two subprocedures select and execute the best move. The termination condition can be a time or iteration limit. The time complexity of the DNS algorithm is , and its space complexity is .
Algorithm 1 Algorithm for the MDT problem.
Require: The instance graph G
Ensure: A DTP configuration T*
1: procedure DNS(G)
2:     T ← Generate_InitialSolution(G)
3:     T* ← T
4:     repeat
5:         Do_NeighborEvaluate(G)
6:         b ← Select_BestMove(EvaluateMatrices)
7:         T ← Execute_BestMove(b)
8:         if f(T) < f(T*) then
9:             T* ← T
10:        end if
11:    until the termination condition is met
12:    return T*
13: end procedure
3.2. Initial Solution Generation
The proposed DNS algorithm uses a feasible dominating tree as the initial configuration. The subprocedure Generate_InitialSolution generates this initial dominating tree. It first finds the minimum spanning tree of the whole graph and then trims the tree by removing leaves iteratively until removing one more leaf would break the dominance of the tree. The pseudocode of this procedure is given in Algorithm 2.
Algorithm 2 Algorithm for generating the initial solution.
Require: The instance graph G
Ensure: A DTP configuration T
1: procedure Generate_InitialSolution(G)
2:     T ← Kruskal(G)
3:     repeat
4:         best ← null
5:         for each leaf n of T do
6:             if n can be removed and w(n) > w(best) then
7:                 best ← n
8:             end if
9:         end for
10:        if best ≠ null then
11:            remove best from T
12:        end if
13:    until best = null
14:    return T
15: end procedure
The procedure starts from the minimum spanning tree generated by Kruskal’s algorithm. Then, it tries to delete the leaf with the largest edge weight. The process terminates when no more leaves can be deleted. The algorithm returns a feasible dominating tree as the initial configuration. In the following sections, we focus on the metaheuristic part of the proposed DNS algorithm, i.e., the neighborhood structure as well as its evaluation.
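The procedure above can be sketched in Python; this is a hypothetical rendering under our own graph representation (a dict of weighted adjacency dicts), not the paper's implementation. It builds a minimum spanning tree with Kruskal's algorithm and then repeatedly deletes the leaf with the largest incident edge weight, as long as the remaining tree still dominates every vertex of the graph.

```python
def generate_initial_solution(graph):
    """graph: dict vertex -> dict neighbor -> weight; returns the tree vertex set."""
    # --- minimum spanning tree via Kruskal with union-find ---
    parent = {v: v for v in graph}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = {v: {} for v in graph}          # adjacency of the spanning tree
    edges = sorted((w, a, b) for a in graph for b, w in graph[a].items() if a < b)
    for w, a, b in edges:
        if find(a) != find(b):
            parent[find(a)] = find(b)
            tree[a][b] = tree[b][a] = w

    X = set(graph)

    def dominates(S):                      # does S dominate the whole graph?
        covered = set(S)
        for v in S:
            covered.update(graph[v])
        return covered == set(graph)

    # --- trim leaves, heaviest incident edge first ---
    while True:
        leaves = [v for v in X if len([u for u in tree[v] if u in X]) == 1]
        removable = [v for v in leaves if dominates(X - {v})]
        if not removable:
            return X
        worst = max(removable,
                    key=lambda v: max(w for u, w in tree[v].items() if u in X))
        X.remove(worst)
```

A removed leaf stays dominated by its former tree neighbor, so the domination check only has to cover the remaining graph.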
3.3. Definition
For a better description, we first define some important concepts and notations used in the proposed DNS algorithm.
X: the set of vertices in the current dominating tree.
X̄: the set of vertices dominated by X and not in X.
ND: an array of the numbers of undominated vertices; the length of the array is the number of graph vertices. ND[i] denotes the number of vertices not dominated by the new X when moving vertex i from X to X̄ (or from X̄ to X).
MW: an array of minimum spanning tree weights for X; the length of the array is the number of graph vertices. MW[i] denotes the weight of the new minimum spanning tree of X when moving vertex i from X to X̄ (or from X̄ to X).
The following example illustrates how ND and MW are calculated.
As shown in Figure 1, the current dominating tree contains two vertices, B and D; therefore, X = {B, D}. The vertices dominated by X are A, C, and E; thus, X̄ = {A, C, E}. The vertices A, B, C, D, and E correspond to the array subscripts 0, 1, 2, 3, and 4, respectively. To evaluate the neighborhood moves, the algorithm takes vertex A out and puts it into the set on the other side. The number of vertices that are not dominated by the new X after this move is 0, thus ND[0] is assigned 0. The weight of the new minimum spanning tree of X is 13, thus MW[0] is assigned 13. After all the neighborhood moves have been evaluated in this way, the resulting ND and MW arrays are used to select the best neighborhood move.
3.4. Neighborhood Move and Evaluation
There are two kinds of neighborhood moves in the DNS algorithm: one takes a vertex out of X and puts it into X̄, and the other takes a vertex out of X̄ and puts it into X. At each iteration, the best neighborhood move among both kinds is selected and performed. Two criteria evaluate the quality of a move: one is the dominance and the other is the weight of the dominating tree. The pseudocode for the neighborhood evaluation is described in Algorithm 3.
Algorithm 3 Algorithm for performing a neighborhood evaluation.
Require: X, X̄
Ensure: ND, MW
1: procedure Do_NeighborEvaluate(G)
2:     for each vertex v ∈ X ∪ X̄ do
3:         move v to the other set
4:         ND[v] ← the number of vertices not dominated by X
5:         MW[v] ← the weight of the minimum spanning tree of X
6:         move v back
7:     end for
8: end procedure
The evaluation is conducted by trying to move each vertex to the other set, after which the ND and MW values are calculated. The time complexity of this module is , and its space complexity is . Based on these two arrays, the best move is selected as described in Algorithm 4.
Algorithm 4 Algorithm for selecting the best move.
Require: ND, MW
Ensure: The best move b
1: procedure Select_BestMove(ND, MW)
2:     b ← 0
3:     for each vertex v do
4:         if ND[v] < ND[b] then
5:             b ← v
6:         end if
7:         if ND[v] = ND[b] and MW[v] < MW[b] then
8:             b ← v
9:         end if
10:    end for
11:    return b
12: end procedure
Procedure Select_BestMove picks the move with the smallest ND and MW values, with a higher priority given to ND. The time complexity of this module is , and its space complexity is . The selected best move is then performed by Algorithm 5.
Algorithm 5 Algorithm for executing the best neighborhood move.
Require: X, b
Ensure: The new configuration T
1: procedure Execute_BestMove(X, b)
2:     if b ∈ X then
3:         move b from X to X̄
4:     else
5:         move b from X̄ to X
6:     end if
7:     T ← Kruskal(X)
8:     return T
9: end procedure
Procedure Execute_BestMove moves the selected vertex to X̄ if it is in X, and vice versa. After the move, the minimum spanning tree of X is recalculated using Kruskal's algorithm. The time complexity of this module is , and its space complexity is . The following example illustrates how the best move is evaluated and performed.
As shown in Figure 2, the current dominating tree again gives X = {B, D} and X̄ = {A, C, E}. To evaluate vertex A, we first move it from X̄ to X. The number of vertices that are not dominated by the new X at this point is 0, thus ND[0] = 0. The minimum spanning tree weight of the new X is 13, thus MW[0] = 13. We then move A back to its original set, which concludes the evaluation of A. B, C, D, and E are evaluated sequentially by the same process, after which the ND and MW arrays are complete.
Then, we pick the best neighborhood move by finding the minimum value in ND, which takes priority over MW. There are three minimum values in ND, corresponding to A, C, and E. Comparing the MW values of these three vertices, the minimum value is 3, corresponding to vertex E. Therefore, the best vertex is E, and the best neighborhood move is to move E into X. After the move, the new X = {B, D, E}. We then calculate the minimum spanning tree of the new X, which has a weight of 3.
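The lexicographic selection rule in this example can be sketched in Python. The ND and MW values below are hypothetical stand-ins chosen to mirror the example (only MW[0] = 13 and MW[4] = 3 come from the text; the rest are our assumptions):

```python
def select_best_move(ND, MW):
    # smallest ND first; ties broken by the smallest MW
    return min(range(len(ND)), key=lambda i: (ND[i], MW[i]))

# Vertices A..E map to indices 0..4, as in the paper's examples.
ND = [0, 2, 0, 1, 0]    # hypothetical domination penalties
MW = [13, 9, 10, 7, 3]  # hypothetical spanning-tree weights
best = select_best_move(ND, MW)
# indices 0, 2, and 4 tie on ND = 0; MW breaks the tie in favor of index 4 (E)
```

The tuple key makes the priority of ND over MW explicit without a hand-written double loop.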
3.5. Fast Neighborhood Evaluation
In order to improve the efficiency of the algorithm, this paper proposes a method to dynamically update the neighborhood evaluation arrays ND and MW.
3.5.1. Fast Evaluation for ND
The number of undominated vertices may increase or remain unchanged when a vertex is moved from X to X̄. The newly undominated vertices must originally be in the X̄ set and connected to the moved vertex. Since the number of undominated vertices is kept at zero throughout the algorithm, we can count the newly introduced undominated vertices by counting the vertices in X̄ for which the moved vertex is their only connection to X.
When we move a vertex from X̄ to X, the number of undominated vertices may decrease or remain the same. Because every vertex is dominated by X throughout the algorithm, the number of undominated vertices after this kind of move is still 0. The above observations can be utilized to compute ND dynamically without having to traverse the entire graph. The formula is as follows:

ND[i] = |{ v ∈ X̄ : N(v) ∩ X = {i} }| if i ∈ X, and ND[i] = 0 if i ∈ X̄,

where N(v) denotes the set of neighbors of v.
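This update rule can be sketched in Python; the neighbor-set representation and the name `fast_nd` are our assumptions. Following the rule above, only vertices of X̄ hanging on a single dominator contribute, and moves into X always yield 0:

```python
def fast_nd(graph, X, X_bar):
    """graph: dict vertex -> set of neighbors; returns dict vertex -> ND value."""
    ND = {}
    # count, for each dominator, the X_bar vertices whose only link to X it is
    only_link = {}
    for v in X_bar:
        anchors = graph[v] & X
        if len(anchors) == 1:
            (a,) = anchors
            only_link[a] = only_link.get(a, 0) + 1
    for i in X:
        ND[i] = only_link.get(i, 0)   # newly undominated after moving i out
    for i in X_bar:
        ND[i] = 0                     # moving into X keeps everything dominated
    return ND
```

A single pass over X̄ replaces a full traversal of the graph for every candidate move.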
3.5.2. Fast Evaluation for MW
For MW, we use a dynamic Kruskal's algorithm. The algorithm dynamically maintains a set ES, which is the set of edges contained in the subgraph induced by X, i.e., the set of edges whose two endpoints are both in X. The ES set is sorted from smallest to largest by the weights of the edges. When an X-to-X̄ move is performed, the edges connecting the moved vertex to X are deleted from the ES set. Similarly, when an X̄-to-X move is performed, the edges connecting the moved vertex to X are inserted into the ES set. Note that edges must be inserted at the appropriate positions in ES to guarantee that it remains sorted. The dynamic Kruskal's algorithm then assumes that the edges before the first deletion or insertion position are in the new minimum spanning tree and starts the normal procedure from that position. The pseudocode for the dynamic Kruskal's algorithm is described in Algorithms 6 and 7.
Algorithm 6 Algorithm for calculating a new minimum spanning tree.
Require: X, G, v, ES
Ensure: The weight of the minimum spanning tree
1: procedure Calculate_NewMinSpanTree(X, v, ES)
2:     p ← |ES|
3:     for each edge e(v, u) of G do
4:         if u ∈ X then
5:             if v ∈ X then                ▷ an X̄-to-X move was made
6:                 insert e(v, u) into ES at its sorted position i
7:             end if
8:             if v ∉ X then                ▷ an X-to-X̄ move was made
9:                 delete e(v, u) from ES at position i
10:            end if
11:            if i < p then
12:                p ← i
13:            end if
14:        end if
15:    end for
16:    w ← DynamicKruskal(ES, p, G, T)
17:    return w
18: end procedure
In Algorithm 6, the notation e(a, b) represents the edge connecting vertices a and b, and T is the original minimum spanning tree, i.e., the minimum spanning tree of the current solution. The time complexity of this module is , and its space complexity is . The main job of this procedure is to update the ES set. Algorithm 7 then calculates the spanning tree dynamically starting from position p.
Algorithm 7 Algorithm for the dynamic Kruskal algorithm.
Require: ES, p, G, T
Ensure: A minimum spanning tree T′
1: procedure DynamicKruskal(ES, p, G, T)
2:     T′ ← ∅
3:     for i from 0 to p − 1 do
4:         if ES[i] ∈ T then
5:             T′ ← T′ ∪ {ES[i]}
6:         end if
7:     end for
8:     for i from p to |ES| − 1 do
9:         if ES[i] can be added to T′ without creating a cycle then
10:            T′ ← T′ ∪ {ES[i]}
11:        end if
12:    end for
13:    return T′
14: end procedure
The time complexity of this module is , and its space complexity is . The following example illustrates the above procedures:
As shown in Figure 3, the original tree T, the sets X and X̄, the sorted edge set ES, and the weights of the edges are given. Let us evaluate the move of vertex E from X̄ to X. Since E was originally in X̄, ND[4] = 0. The move adds two new edges connecting E to vertices of X. We insert these two edges into the appropriate positions in ES according to their weights, from smallest to largest, and record the corresponding position p. We only need to start from position p to determine the new minimum spanning tree; the edges before p must be in the new minimum spanning tree. The evaluated minimum spanning tree has a weight of 12, thus MW[4] is assigned 12.
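A simplified Python sketch of the restart-from-position idea follows; the data layout and names are our own assumptions, not the paper's code. Edges before position p that belonged to the old tree are accepted directly, and the union-find scan runs normally from p onward:

```python
def dynamic_kruskal(ES, p, old_tree, vertices):
    """ES: sorted list of (weight, a, b); old_tree: set of (a, b) pairs."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    new_tree, weight = set(), 0
    for i, (w, a, b) in enumerate(ES):
        if i < p and (a, b) not in old_tree:
            continue                      # before p: trust the old decision
        if find(a) != find(b):            # from p onward: normal Kruskal step
            parent[find(a)] = find(b)
            new_tree.add((a, b))
            weight += w
    return new_tree, weight
```

With p = 0 this degenerates to plain Kruskal; the savings come from large p values, i.e., insertions or deletions near the heavy end of ES.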
3.6. Tabu Strategy and Aspiration Mechanism
The proposed DNS algorithm implements a tabu strategy: once a vertex is moved, it is prohibited from being moved again within a tenure. The tabu strategy is applied to both kinds of moves in the algorithm. Since X and X̄ do not intersect, only one tabu table is needed. We denote the tabu tenure of a move from X̄ to X as t₁ and that of a move from X to X̄ as t₂. These two tabu tenures are set to the number of vertices in X and X̄, respectively, thus implementing dynamic tabu tenures. This tabu strategy improves accuracy and efficiency and makes it easier for the algorithm to escape local optima.
In order to avoid missing good solutions, an aspiration strategy is introduced. If a tabu move would improve the best overall solution, the search breaks its tabu status and considers it as a candidate best move.
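The dynamic tenure and the aspiration test can be sketched as follows; the helper names and the expiry-time bookkeeping are our assumptions (the paper gives no code for this part):

```python
def mark_tabu(v, it, tabu_until, X, X_bar, moved_out_of_X):
    """Record that vertex v was moved at iteration it."""
    # X-to-X_bar moves get tenure |X_bar|; X_bar-to-X moves get tenure |X|
    tenure = len(X_bar) if moved_out_of_X else len(X)
    tabu_until[v] = it + tenure

def is_move_allowed(v, it, tabu_until, would_improve_best):
    if would_improve_best:          # aspiration: override the tabu status
        return True
    return it >= tabu_until.get(v, 0)
```

Because the tenures track the current sizes of X and X̄, they adapt automatically as the configuration grows or shrinks.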
3.7. Perturbation Strategy
In order to further improve the quality of the solution, the proposed DNS algorithm implements a perturbation strategy. The perturbation moves some vertices from X̄ to X at random. The algorithm sets a parameter as the perturbation period: when the number of iterations reaches the perturbation period, a perturbation is triggered, and the iteration counter is cleared to zero if the best overall solution is updated within this period. There are two other parameters, the perturbation amplitude and the perturbation tabu tenure. The perturbation amplitude is the number of vertices taken out of X̄ in one perturbation, and the perturbation tabu tenure is the tabu tenure used during the perturbation period. In addition, after a certain number of small perturbations, a larger perturbation is triggered to give the search process a larger spatial span. The larger perturbation moves one-third of the vertices from X̄ to X at random.
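A possible Python rendering of the perturbation step follows; the parameter names and the uniform random choice are our assumptions:

```python
import random

def perturb(X, X_bar, amplitude, large=False):
    """Move vertices from X_bar into X in place; returns the moved vertices."""
    # the large perturbation moves a third of X_bar; the small one moves
    # `amplitude` vertices (capped by the size of X_bar)
    k = len(X_bar) // 3 if large else min(amplitude, len(X_bar))
    moved = random.sample(sorted(X_bar), k)   # sorted for reproducible sampling
    for v in moved:
        X_bar.remove(v)
        X.add(v)
    return moved
```

Only X̄-to-X moves are used, so the perturbed configuration stays dominating and the subsequent search can shrink it again.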
6. Conclusions
In this paper, a dual-neighborhood search algorithm was proposed to solve the minimum dominating tree problem. In order to improve the efficiency of the algorithm, a fast neighborhood evaluation method was proposed, in which the minimum spanning tree of the subgraph induced by the dominating set is computed dynamically. The tabu and perturbation mechanisms helped the algorithm escape local optima, thus obtaining better solutions. The DNS algorithm was demonstrated to be highly effective in tests on a collection of widely used benchmark instances, where it was compared with algorithms from the literature. Out of 72 public instances, the DNS improved the best-known result on four while remaining competitive on the rest with less computational time. Although the techniques proposed in this paper are specific to the minimum dominating tree problem, most of these ideas can be applied to other combinatorial optimization problems. For example, the dynamic spanning tree calculation used in the fast neighborhood evaluation can be used in problems with spanning tree structures. Moreover, the collaboration of two neighborhood structures can also be introduced to other relevant optimization problems. Finally, it would be interesting to test the proposed ideas in other metaheuristic frameworks and on other optimization problems.
Additionally, the DNS algorithm has some shortcomings. For instance, the solutions obtained by the DNS on some instances are still not optimal. There are also several areas for improvement. For example, a weighted approach could be used to guide the search after a perturbation: according to some rules or judgment functions, nodes that are likely to appear in the optimal solution are weighted, and the initial solution is generated probabilistically through these weights, so that the larger the weight, the greater the probability of selecting a node. Furthermore, some judgment conditions could be used to adapt the tabu search, reducing the tabu tenure in areas where globally optimal solutions may appear in order to refine the search.