Article

Dual-Neighborhood Search for Solving the Minimum Dominating Tree Problem

School of Computer Science, Hubei University of Technology, Wuhan 430068, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(19), 4214; https://doi.org/10.3390/math11194214
Submission received: 10 August 2023 / Revised: 26 September 2023 / Accepted: 8 October 2023 / Published: 9 October 2023
(This article belongs to the Special Issue Advanced Graph Theory and Combinatorics)

Abstract
The minimum dominating tree (MDT) problem consists of finding a minimum-weight subgraph of an undirected graph such that every vertex not in the subgraph is adjacent to at least one vertex in it, and the subgraph is connected and contains no cycles. This paper presents a dual-neighborhood search (DNS) algorithm for solving the MDT problem, which integrates several distinguishing features: two neighborhoods working collaboratively to optimize the objective function, a fast neighborhood evaluation method to boost search effectiveness, and several diversification techniques that help the search escape local optima and thus obtain better solutions. DNS improves the previous best-known results on four public benchmark instances while providing competitive results on the remaining ones. Several ingredients of DNS are investigated to demonstrate the importance of the proposed ideas and techniques.

1. Introduction

The minimum dominating tree (MDT) problem for weighted undirected graphs is to find a tree in the graph such that every vertex is either in the tree or adjacent to it, and the sum of the tree's edge weights is minimized [1]. Here, adjacent means that a vertex shares an edge with at least one vertex of the tree. The minimum dominating tree is a concept in graph theory and one of the important classes of tree structures studied there.
A highly related problem, the minimum connected dominating set (MCDS), has been extensively studied for building routing backbones in wireless sensor networks (WSNs) [2,3]. One goal of introducing the MCDS in WSNs is to minimize energy consumption: if two devices are too far away from each other, they may consume too much power to communicate [4,5]. Using a routing backbone to transmit messages greatly reduces energy consumption, which increases dramatically as the transmission distance grows [6]. However, some directly connected vertices in an MCDS may still be far away from each other because the MCDS does not account for distance [7]. Therefore, accounting for the weight of each edge in the routing backbone is more in line with the goal of reducing energy consumption [8]. The minimum dominating tree (MDT) problem was first proposed by Zhang et al. [9] for generating a routing backbone well adapted to broadcast protocols.
Shin et al. [1] proved that the MDT problem is NP-hard and introduced an approximation framework for solving it. They also provided heuristic algorithms and mixed-integer programming (MIP) formulations for the MDT problem. Adasme et al. [10] introduced two other MIP formulations, one based on a tree formulation in the bidirected counterpart of the input graph, and the other obtained from a generalized spanning tree polyhedron. Adasme et al. [11] proposed a primal dyadic model for the minimum-cost dominating tree problem and an effective inequality to improve the linear relaxation. Álvarez-Miranda et al. [12] proposed an exact solution framework that combined a primal–dual heuristic with a branch-and-cut approach, transforming the problem into a Steiner tree problem with additional constraints. Their framework solved most instances in the literature within three hours and proved their optimality.
In recent years, efficient heuristic algorithms for MDT problems have flourished. Sundar and Singh [13] proposed two metaheuristic algorithms, the artificial bee colony (ABC-DT) algorithm and the ant colony optimization (ACO-DT) algorithm, for the MDT problem. These two algorithms were the first metaheuristics for the MDT problem and provided better performance than previous algorithms. They also provided 54 randomly generated instances in their work, which are considered challenging instances of the MDT problem and are widely used to evaluate the performance of algorithms for the MDT problem. Based on the latter work, Chaurasia and Singh [14] proposed an evolutionary algorithm with guided mutation (EA/G-MP) for MDT problems. Dražic et al. [15] proposed a variable neighborhood search algorithm (VNS) for MDT problems. Singh and Sundar [16] proposed another artificial bee colony (ABC-DTP) algorithm for the MDT problem. This new ABC-DTP method differed from the ABC-DT in the way it generated initial solutions and in the strategy for determining neighboring solutions. Their experiments showed that for the MDT problem, ABC-DTP outperformed all existing problem-specific heuristics and metaheuristics available in the literature. Hu et al. [17] proposed a hybrid algorithm combining genetic algorithms (GAITLS) and an iterative local search to solve the dominating tree problem. Experimental results on classical instances showed that the method outperformed existing algorithms. Xiong et al. [18] presented a two-level metaheuristic (TLMH) algorithm for solving the MDT problem with a solution sampling phase and two local search-based procedures nested in a hierarchical structure. The results demonstrated the efficiency of the proposed algorithm in terms of solution quality compared with the existing metaheuristics.
Metaheuristics have been shown to be very effective in solving many challenging real-world problems [19]. However, for some problems, the complexity of the problem structure and the large search space mean that the classical metaheuristic framework fails to produce the desired results [20]. Many researchers have therefore relied on composite neighborhood structures, and most properly designed composite neighborhood structures have proven successful [21]. These methods include variable depth search (VDS), which explores a large search space through a series of successive simple neighborhood search operations. Although the basic concepts of VDS algorithms date back to the 1970s [22], researchers have maintained a sustained enthusiasm for this technique [23,24]. For a more detailed survey of VDS, we refer to Ahuja et al. [25,26,27]. Another idea for dealing with structurally complex problems is a hierarchical metaheuristic approach, where several heuristics are combined in a nested structure. Wu et al. [28] successfully implemented a two-level iterative local search for a network design problem with traffic grooming. According to their analysis, hierarchical metaheuristics must be carefully designed to balance the complexity of the algorithm and its performance; in particular, keeping the outer framework as simple as possible makes the algorithm converge faster. Pop et al. [29] proposed a two-level solution approach to the generalized minimum spanning tree problem. Carrabs et al. [30] introduced a metaheuristic algorithm implementing a two-level structure to solve the all-colors shortest path problem. Contreras Bolton and Parada [31] proposed an iterative local search method with a two-level solution representation for the generalized minimum spanning tree problem.
In recent years, improved tabu search algorithms and multineighborhood metaheuristics have been proposed and applied to NP-hard problems. Li et al. [32] proposed an improved tabu search algorithm with an adaptive tabu length and neighborhood structure to solve the vehicle routing problem. Tong et al. [33] established a mixed-integer nonlinear programming model and solved an unmanned aerial vehicle transportation route planning problem with a variable neighborhood tabu search algorithm. Seydanlou et al. [34] proposed a metaheuristic algorithm with a multineighborhood procedure, and experimental results proved the effectiveness of that method. Song et al. [35] proposed a new competition-guided multineighborhood local search (CMLS) algorithm for the curriculum-based course timetabling problem, and computational results showed that the proposed algorithm was highly competitive.
In this paper, we design a dual-neighborhood search metaheuristic for the MDT problem that uses two neighborhood moves to perform the search and incorporates a tabu strategy to escape local optima. Section 2 reviews the tabu search framework, Section 3 describes the DNS algorithm in detail, and Section 4 presents the experimental results, including comparisons with other algorithms and comparative experiments within the DNS algorithm itself.

2. Tabu Search Algorithm

The DNS algorithm in this paper is based on the tabu search algorithm, so we first introduce the tabu search algorithm.

2.1. Introduction

Tabu search (TS) is a metaheuristic search method used for mathematical optimization, introduced by Fred W. Glover [36]. It improves the performance of local search by relaxing its basic rules: starting from an initial feasible solution, it probes a series of candidate search directions (moves) and chooses the move that changes the objective function value the most. To avoid falling into local optima, TS uses a flexible "memory" structure that records the optimization steps already carried out and guides the direction of the next step. Built on neighborhood search, it maintains a tabu list to forbid operations that have recently been performed and uses aspiration criteria to pardon forbidden moves that lead to sufficiently good states.

2.2. Basic Elements

The basic components of tabu search include:
  • Initial solution: this is where the tabu search starts, usually a randomly generated solution.
  • Neighborhood function: this function defines the neighborhood of a given solution, i.e., a set of solutions that are similar to but subtly different from the current solution.
  • Objective function: this function is used to evaluate the quality or fitness of a solution.
  • Tabu list: this is a list of solutions that have been visited, used to prevent the algorithm from revisiting the same solution.
  • Aspiration criteria: rules based on evaluation values; if a tabu move yields a solution better than the best candidate found so far, its tabu status can be overridden (pardoned).
  • Stopping criteria: this is a condition that when met, the algorithm will stop running.
The main indicators of the tabu list include:
  • Tabu objects: the changing elements (moves or solution attributes) that are forbidden by the tabu list.
  • Tabu length: the number of iterations for which an object remains tabu.

2.3. Basic Steps

The basic steps of tabu search are as follows:
1. Start from an initial solution.
2. Generate the neighborhood of the current solution and find the best admissible neighboring solution.
3. Take this neighboring solution as the new current solution, even if it is worse than the current one; if it improves the best solution found so far, record it.
4. Add the new current solution (or the corresponding move) to the tabu list. If the tabu list exceeds its maximum length, delete the oldest entry.
5. Repeat steps 2–4 until the stopping criteria are met.
Perturbation is often used together with tabu search. During the optimization process, if the algorithm stagnates around some local optimum, the perturbation strategy is activated. A perturbation usually makes larger-scale changes to the current solution so that the search can leave the current local optimum and enter a new region of the search space.
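As an illustration, the steps above can be sketched as a generic loop. The following minimal Python sketch (all names and the toy objective are our illustrative choices, not from the paper) combines them with a fixed-length tabu list and the aspiration criterion from Section 2.2:

```python
from collections import deque

def tabu_search(f, x0, neighbors, tabu_len=5, max_iters=100):
    """Generic sketch of steps 1-5 above, with a fixed-length tabu list
    and an aspiration criterion."""
    current = best = x0
    tabu = deque(maxlen=tabu_len)      # oldest entry drops out automatically
    for _ in range(max_iters):
        candidates = neighbors(current)
        # keep non-tabu neighbors, but pardon a tabu one that beats the best
        allowed = [n for n in candidates if n not in tabu or f(n) < f(best)]
        if not allowed:
            break
        current = min(allowed, key=f)  # move to the best admissible neighbor
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best

# toy usage: minimize (x - 7)^2 over the integers, starting from 0
best = tabu_search(lambda x: (x - 7) ** 2, 0, lambda x: [x - 1, x + 1])  # returns 7
```

Note that the loop accepts the best admissible neighbor even when it worsens the current solution; only the separately recorded best solution is guaranteed not to degrade.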

2.4. Design Challenges

The design of the various components and the overall flow of a tabu search have a considerable impact on its efficiency. When designing a tabu search, the following challenges need to be considered:
  • How to define the neighborhood: the neighborhood function determines the space of solutions that the algorithm can explore. If the neighborhood is defined too small, the algorithm may fall into a local optimum; if it is defined too large, the computational cost may become too high.
  • How to select tabu objects: the number of tabu objects needs to be sufficient to prevent the algorithm from falling into a local optimum. However, too many tabu objects may take up too much memory and slow down lookup operations.
  • How to determine the tabu length: too large a tabu length slows down the search and makes it difficult to converge; too small a length makes the search fall into local optima easily.
  • How to set stopping criteria: stopping too early may mean an optimal solution is never found; running for too long leads to low efficiency.
  • How to set the perturbation intensity: although a perturbation strategy helps improve the global optimization ability of the search, excessive perturbation may make the search process chaotic and unable to converge effectively toward a global optimum.

3. Dual-Neighborhood Search

3.1. Main Framework

The basic idea of our proposed DNS algorithm is to tackle the MDT problem by optimizing the candidate dominating tree weight using a neighborhood search-based metaheuristic with two neighborhood move operators. The search space of the DNS consists of all the minimum spanning trees of all the possible dominating sets of the instance graph. The proposed DNS algorithm optimizes the following objective function:
f(T) = α·f1(X) + f2(E)
where T = (X, E) stands for the current configuration, i.e., the candidate dominating tree. The notations X and E represent the vertex and edge sets of T, respectively. Function f1(X) calculates the number of vertices not dominated by T, and function f2(E) calculates the total weight of the edges of T, i.e., of the minimum spanning tree over X. The constant parameter α balances the importance of f1 and f2. T is a feasible solution to the minimum dominating tree problem if and only if f1(X) = 0.
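Under a simple adjacency-map representation, the objective can be sketched as follows (the helper names, the graph encoding, and the penalty constant ALPHA are our illustrative choices; the paper only requires a constant α to weight f1 against f2):

```python
ALPHA = 1000  # illustrative penalty balancing f1 (dominance) against f2 (weight)

def f1(graph, X):
    """Number of vertices dominated neither by membership in X nor adjacency to X."""
    dominated = set(X)
    for v in X:
        dominated.update(graph[v])
    return len(set(graph) - dominated)

def f2(graph, X):
    """Weight of a minimum spanning tree of the induced subgraph G[X] (Kruskal)."""
    parent = {v: v for v in X}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    total = 0
    for w, u, v in sorted((w, u, v) for u in X for v, w in graph[u].items()
                          if v in X and u < v):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

def objective(graph, X):
    return ALPHA * f1(graph, X) + f2(graph, X)

# toy usage: path graph A-B-C-D; X = {B, C} dominates every vertex
g = {'A': {'B': 4}, 'B': {'A': 4, 'C': 2}, 'C': {'B': 2, 'D': 1}, 'D': {'C': 1}}
```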
The algorithm primarily comprises several key steps. Firstly, an initial solution is generated, followed by a neighborhood evaluation. Subsequently, the best neighborhood move is selected and executed iteratively. During the iteration, the best overall configuration is recorded. The framework of the algorithm can be represented in pseudocode as Algorithm 1.
In Algorithm 1, T_i represents the initial configuration, T_b the recorded best overall solution, and T_c the current configuration. In each iteration, the subprocedure Do_NeighborEvaluate evaluates all the neighborhood moves in the current configuration, and the following two subprocedures select and execute the best move. The termination condition can be a time or iteration limit. The time complexity of the DNS algorithm is O(V² + VE + E log E), and its space complexity is O(V²).
Algorithm 1 Algorithm for the MDT problem.
Require: The instance graph G(V, E)
Ensure: A DTP configuration T_b
 1: procedure DNS(G)
 2:     T_i ← Generate_InitialSolution(G)
 3:     T_c ← T_i; T_b ← T_i
 4:     repeat
 5:         EvaluateMatrices ← Do_NeighborEvaluate(G)
 6:         BestMove ← Select_BestMove(EvaluateMatrices)
 7:         T_c ← Execute_BestMove(T_c, BestMove)
 8:         if f(T_c) < f(T_b) then
 9:             T_b ← T_c
10:         end if
11:     until the termination condition is met
12:     return T_b
13: end procedure

3.2. Initial Solution Generation

The proposed DNS algorithm uses a feasible dominating tree as the initial configuration, generated by the subprocedure Generate_InitialSolution. It first finds the minimum spanning tree of the whole graph and then trims the tree by removing leaves iteratively until removing one more leaf would break the dominance of the tree. The pseudocode of this procedure is given in Algorithm 2.
Algorithm 2 Algorithm for generating the initial solution.
Require: The instance graph G(V, E)
Ensure: A DTP configuration T_i
 1: procedure Generate_InitialSolution(G)
 2:     T_i ← Kruskal(G)
 3:     repeat
 4:         v ← null
 5:         for n ∈ AllLeafVertices do
 6:             if n can be removed and w(n) > w(v) then
 7:                 v ← n
 8:             end if
 9:         end for
10:         if v ≠ null then
11:             T_i.remove(v)
12:         end if
13:     until v = null
14:     return T_i
15: end procedure
The procedure starts from the minimum spanning tree T_i generated by Kruskal's algorithm. Then, it repeatedly deletes the removable leaf whose incident tree edge has the largest weight. The process terminates when no more leaves can be deleted, and the algorithm returns a feasible dominating tree as the initial configuration. In the following sections, we focus on the metaheuristic part of the proposed DNS algorithm, i.e., the neighborhood structure and its evaluation.
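A minimal sketch of this trimming phase, assuming the graph is given as an adjacency map and the spanning tree as a list of weighted edges (the function and variable names are ours, not the paper's):

```python
def trim_initial_tree(graph, tree_edges):
    """Greedy leaf trimming (Algorithm 2 sketch): starting from a spanning
    tree, repeatedly drop the leaf with the heaviest incident tree edge as
    long as the remaining vertex set still dominates the whole graph."""
    adj = {v: {} for v in graph}               # tree adjacency with edge weights
    for u, v, w in tree_edges:
        adj[u][v] = w
        adj[v][u] = w
    X = set(graph)

    def dominates(S):
        covered = set(S)
        for v in S:
            covered.update(graph[v])
        return covered == set(graph)

    while True:
        # leaves whose removal keeps the vertex set dominating
        removable = [v for v in X if len(adj[v]) == 1 and dominates(X - {v})]
        if not removable:
            return X
        v = max(removable, key=lambda u: next(iter(adj[u].values())))
        (u,) = adj[v]                          # the single tree neighbor of leaf v
        del adj[u][v]
        del adj[v]
        X.remove(v)

# toy usage: path A-B-C-D with its own edges as the spanning tree
g = {'A': {'B': 1}, 'B': {'A': 1, 'C': 2}, 'C': {'B': 2, 'D': 3}, 'D': {'C': 3}}
X = trim_initial_tree(g, [('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 3)])
```

On this toy path, the heavier leaf D is trimmed first, then A, leaving the dominating set {B, C}.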

3.3. Definition

For a better description, we first define some important concepts and notations used in the proposed DNS algorithm.
  • X: the set of vertices in the current dominating tree.
  • X_plus: the set of vertices dominated by X but not in X.
  • A1: an array of numbers of undominated vertices; the length of the array is the number of graph vertices.
    A1[i] = |{ j ∈ V ∖ (X ∪ X_plus) : (i, j) ∈ E, ∄k ∈ (X ∪ X_plus), (k, j) ∈ E }|
    A1[i] denotes the number of vertices not dominated by the new X when moving i from X to X_plus (or from X_plus to X).
  • A2: an array of minimum spanning tree weights for X; the length of the array is the number of graph vertices.
    A2[i] = w(MST(G[X ∖ {i}])) if i ∈ X;  A2[i] = w(MST(G[X ∪ {i}])) if i ∈ X_plus
    A2[i] denotes the weight of the new minimum spanning tree of X when moving i from X to X_plus (or from X_plus to X).
The following example illustrates how A1 and A2 are calculated.
As shown in Figure 1, the current dominating tree is T<B,D>, containing two vertices, B and D; therefore, X = {B, D}. The vertices dominated by X are A, C, and E; thus, X_plus = {A, C, E}. The vertices A, B, C, D, and E correspond to the array subscripts 0, 1, 2, 3, and 4, respectively. To evaluate the neighborhood moves, the algorithm takes vertex A out and puts it into the set on the other side. The number of vertices not dominated by the new X after this move is 0, so A1[0] is assigned 0. The weight of the new minimum spanning tree of X is 13, so A2[0] is assigned 13. After evaluating all the neighborhood moves, the resulting arrays are A1 = [0, 1, 0, 1, 0] and A2 = [13, 0, 17, 0, 3]. A1 and A2 are then used to evaluate the neighborhood moves.

3.4. Neighborhood Move and Evaluation

There are two kinds of neighborhood moves in the DNS algorithm: one takes a vertex out of X and puts it into X_plus, and the other takes a vertex out of X_plus and puts it into X. At each iteration, the best neighborhood move among both kinds is selected and performed. Two criteria evaluate the quality of a move: the dominance and the weight of the dominating tree. The pseudocode for the neighborhood evaluation is described in Algorithm 3.
Algorithm 3 Algorithm for performing a neighborhood evaluation.
Require: EvaluateMatrices = (A1, A2), G(V, E)
Ensure: EvaluateMatrices
 1: procedure Do_NeighborEvaluate(G)
 2:     for v ∈ X ∪ X_plus do
 3:         move v to the other set
 4:         A1[v] ← Calculate_NoDomiNumber(X, X_plus, v)
 5:         A2[v] ← Calculate_NewMinSpanTree(X, X_plus, v)
 6:         move v back
 7:     end for
 8: end procedure
The evaluation is conducted by trying to move each vertex to the other set; the A1 and A2 values are then calculated. The time complexity of this module is O(V² + VE), and its space complexity is O(V²). Based on these two arrays, the best move is selected as described in Algorithm 4.
Algorithm 4 Algorithm for selecting the best move.
Require: EvaluateMatrices = (A1, A2)
Ensure: The best move
 1: procedure Select_BestMove(EvaluateMatrices)
 2:     M_best ← 0
 3:     for v ∈ V do
 4:         if A1[v] < A1[M_best] then
 5:             M_best ← v
 6:         end if
 7:         if A1[v] = A1[M_best] and A2[v] < A2[M_best] then
 8:             M_best ← v
 9:         end if
10:     end for
11:     return M_best
12: end procedure
Procedure Select_BestMove picks the move with the smallest A1 and A2, with a higher priority for A1. The time complexity of this module is O(V), and its space complexity is O(1). The selected best move is then performed by Algorithm 5.
Algorithm 5 Algorithm for executing the best neighborhood move.
Require: X, BestMove
Ensure: T_c
 1: procedure Execute_BestMove(X, BestMove)
 2:     if BestMove ∈ X then
 3:         move BestMove from X to X_plus
 4:     else
 5:         move BestMove from X_plus to X
 6:     end if
 7:     T_c ← Kruskal(G(X))
 8:     return T_c
 9: end procedure
Procedure Execute_BestMove moves the selected vertex to X_plus if it is in X, and vice versa. After the move, the minimum spanning tree of G(X) is calculated using Kruskal's algorithm and assigned to T_c. The time complexity of this module is O(E log E), and its space complexity is O(V). The following example illustrates how the best move is evaluated and performed.
As shown in Figure 2, the current dominating tree is T<B,D>, with X = {B, D} and X_plus = {A, C, E}. To evaluate vertex A, we first move it from X_plus to X; X then becomes {A, B, D}. The number of vertices not dominated by the new X at this point is 0, thus A1[A] = 0. The minimum spanning tree weight of X = {A, B, D} is 13, thus A2[A] = 13. We then move A back to its original set, which concludes the evaluation of A. B, C, D, and E are evaluated sequentially by the same process. After the evaluation of each vertex, A1 = [0, 1, 0, 1, 0] and A2 = [13, 0, 17, 0, 3].
Then, we pick the best neighborhood move by finding the minimum value in A1 and A2, with priority given to A1. There are three minimum values in A1, corresponding to A, C, and E. Comparing the values of these three vertices in A2, the minimum value is 3, corresponding to vertex E. Therefore, the best vertex is E, and the best neighborhood move is to move E. After the move, the new X = {B, D, E}, and we calculate its minimum spanning tree: the new minimum spanning tree is T<B,D,E> with a weight of 3.
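The lexicographic selection in this example can be sketched in a few lines of Python. The arrays below are the A1 and A2 values from the worked example, with vertices A–E mapped to indices 0–4 (the function name is ours):

```python
def select_best_move(A1, A2, vertices):
    """Algorithm 4 as a lexicographic minimum: fewest undominated
    vertices first (A1), smallest spanning-tree weight second (A2)."""
    return min(vertices, key=lambda v: (A1[v], A2[v]))

A1 = [0, 1, 0, 1, 0]            # values from the worked example
A2 = [13, 0, 17, 0, 3]
best = select_best_move(A1, A2, range(5))   # index 4, i.e., vertex E
```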

3.5. Fast Neighborhood Evaluation

In order to improve the efficiency of the algorithm, this paper proposes a method to dynamically update the neighborhood evaluation matrices A1 and A2.

3.5.1. Fast Evaluation for A1

The number of undominated vertices may increase or remain unchanged when a vertex is moved from X to X_plus. Any newly undominated vertex must originally lie in X_plus and be connected to the moved vertex. Since the number of undominated vertices is kept at zero throughout the search, we can count the newly undominated vertices by counting the vertices in X_plus for which the moved vertex is their only connection to X.
When we move a vertex from X_plus to X, the number of undominated vertices may decrease or remain the same; because X dominates every vertex throughout the search, the number of undominated vertices after this kind of move is still 0. These observations allow A1 to be computed dynamically without traversing the entire graph, using the following formula:
A1[i] = |{ j ∈ X_plus : (i, j) ∈ E, ∄k ∈ X ∖ {i}, (k, j) ∈ E }| if i ∈ X;  A1[i] = 0 if i ∈ X_plus
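A sketch of this fast evaluation, assuming the graph is stored as adjacency sets (the example graph below is hypothetical, but constructed to reproduce the pattern A1 = [0, 1, 0, 1, 0] from the Section 3.3 example):

```python
def fast_A1(graph, X, Xplus, i):
    """Fast re-evaluation of A1[i] per the formula above: moving i out of X
    un-dominates exactly those of its X_plus neighbors whose only neighbor
    in X is i; moving i from X_plus into X never breaks dominance."""
    if i not in X:
        return 0
    return sum(1 for j in Xplus
               if i in graph[j] and not any(k in graph[j] for k in X if k != i))

# hypothetical 5-vertex graph with X = {B, D} dominating A, C, E
g = {'A': {'B'}, 'B': {'A', 'C', 'D'}, 'C': {'B', 'D'},
     'D': {'B', 'C', 'E'}, 'E': {'D'}}
X, Xplus = {'B', 'D'}, {'A', 'C', 'E'}
A1 = [fast_A1(g, X, Xplus, v) for v in 'ABCDE']   # [0, 1, 0, 1, 0]
```

Here removing B would un-dominate A (whose only X-neighbor is B), and removing D would un-dominate E, while every X_plus-to-X move scores 0.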

3.5.2. Fast Evaluation for A2

For A2, we use a dynamic Kruskal's algorithm. The algorithm dynamically maintains a set Roads, the set of edges contained in the subgraph G(X), i.e., the edges whose two endpoints are both in X. The Roads set is sorted by edge weight in ascending order. When an X-to-X_plus move is performed, the edges connecting the moved vertex to X are deleted from Roads. Similarly, when an X_plus-to-X move is performed, the edges connecting the moved vertex to X are inserted into Roads; note that each edge must be inserted at the appropriate position to keep Roads sorted. The dynamic Kruskal's algorithm then assumes that the edges before the deletion or insertion position are in the new minimum spanning tree and starts the normal procedure from that position. The pseudocode for the dynamic Kruskal's algorithm is described in Algorithms 6 and 7.
Algorithm 6 Algorithm for calculating a new minimum spanning tree.
Require: MovedVertex, G, Roads, T_c
Ensure: The weight of the minimum spanning tree T_s
 1: procedure Calculate_NewMinSpanTree(X, X_plus, MovedVertex)
 2:     min ← MAX_VALUE
 3:     U ← CalculateLinkVertex(G, MovedVertex)
 4:     for v ∈ U do
 5:         if v ∈ X then
 6:             if w(E(v, MovedVertex)) < min then
 7:                 index ← RecordIndexInRoads(E(v, MovedVertex))
 8:                 min ← w(E(v, MovedVertex))
 9:             end if
10:             if MovedVertex ∈ X then
11:                 Roads.delete(E(v, MovedVertex))
12:             end if
13:             if MovedVertex ∈ X_plus then
14:                 Roads.insert(E(v, MovedVertex))
15:             end if
16:         end if
17:     end for
18:     T_s ← DynamicKruskal(Roads, index, G, T_c)
19:     return w(T_s)
20: end procedure
In Algorithm 6, the notation E(a, b) represents the edge connecting vertices a and b, and T_c is the original minimum spanning tree, i.e., the spanning tree of the current solution before the move. The time complexity of this module is O(E), and its space complexity is O(V). The main job of this procedure is to update the Roads set; Algorithm 7 then calculates the spanning tree dynamically according to Roads.
Algorithm 7 The dynamic Kruskal algorithm.
Require: Roads, index, G, T_c
Ensure: A minimum spanning tree T_s
 1: procedure DynamicKruskal(Roads, index, G, T_c)
 2:     T_s ← null
 3:     for i from 0 to index do
 4:         if Roads[i] ∈ T_c then
 5:             T_s.add(Roads[i])
 6:         end if
 7:     end for
 8:     for i from index to Roads.size do
 9:         if Roads[i] can be added to T_s then
10:             T_s.add(Roads[i])
11:         end if
12:     end for
13:     return T_s
14: end procedure
The time complexity of this module is O ( E ) , and its space complexity is O ( V ) . The following example illustrates the above procedures:
As shown in Figure 3, the original tree is T<A,B,D,F>; currently, X = {A, B, D, F}, X_plus = {C, E, G}, Roads = {<D,F>, <A,B>, <B,D>}, and the weights of the edges are w(Roads) = {1, 4, 8}. Let us evaluate the move of vertex E from X_plus to X. After the move, X = {A, B, D, E, F} and X_plus = {C, G}. Since E was originally in X_plus, A1[E] = 0. The new edges added after the move are <B,E> and <E,F>, with weights 2 and 5. We insert these two edges into the appropriate positions in Roads so that the weights stay sorted in ascending order: w(Roads) = {1, 2, 4, 5, 8} and Roads = {<D,F>, <B,E>, <A,B>, <E,F>, <B,D>}. We only need to start from the position of <B,E> to determine the new minimum spanning tree; the edges before <B,E> must be in the new minimum spanning tree. The evaluated minimum spanning tree is T<A,B,D,E,F> = {<D,F>, <B,E>, <A,B>, <E,F>} with weight 12, thus A2[E] = 12.
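The walkthrough above can be reproduced with a small sketch of the dynamic Kruskal idea, representing each edge as a (weight, u, v) tuple and using a union-find structure (all names are ours; this is an illustration, not the paper's implementation):

```python
import bisect

def dynamic_kruskal(roads, index, old_tree):
    """Restart Kruskal from `index` (Algorithm 7 sketch): edges before the
    first modified position are kept if they were in the old tree; the rest
    of the sorted Roads list is rescanned with union-find."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    def union(u, v):
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
        return True

    tree = []
    for w, u, v in roads[:index]:          # prefix: kept edges from the old tree
        if (w, u, v) in old_tree:
            union(u, v)
            tree.append((w, u, v))
    for w, u, v in roads[index:]:          # normal Kruskal scan from `index` on
        if union(u, v):
            tree.append((w, u, v))
    return tree

# Figure 3 walkthrough: Roads for X = {A, B, D, F}, then move E into X
roads = [(1, 'D', 'F'), (4, 'A', 'B'), (8, 'B', 'D')]
old_tree = set(roads)
for edge in [(2, 'B', 'E'), (5, 'E', 'F')]:
    bisect.insort(roads, edge)             # insert while keeping Roads sorted
index = roads.index((2, 'B', 'E'))         # first modified position
new_tree = dynamic_kruskal(roads, index, old_tree)
# total weight 1 + 2 + 4 + 5 = 12, i.e., A2[E] = 12
```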

3.6. Tabu Strategy and Aspiration Mechanism

The proposed DNS algorithm implements a tabu strategy: once a vertex is moved, it is prohibited from being moved again for a given tenure. The tabu strategy applies to both kinds of moves, and since X and X_plus do not intersect, only one tabu list is needed. We denote the tabu tenure of moves from X_plus to X as TabuLength1 and that of moves from X to X_plus as TabuLength2. These two tenures are set to the numbers of vertices in X and X_plus, respectively, thus implementing dynamic tabu tenures. This tabu strategy improves accuracy and efficiency and helps the algorithm escape local optima more easily.
To avoid missing good solutions, an aspiration strategy is introduced: if a tabu move would improve the best overall solution, the search overrides its tabu status and selects it as a candidate best move.

3.7. Perturbation Strategy

In order to further improve the quality of the solution, the proposed DNS algorithm implements a perturbation strategy. The perturbation moves some vertices from X_plus to X at random. The algorithm sets a parameter as the perturbation period: when the number of iterations reaches the perturbation period, a perturbation is triggered; the iteration counter is cleared to zero if the best overall solution is updated within this period. Two other parameters are the perturbation amplitude, i.e., the number of vertices taken out of X_plus during a perturbation, and the perturbation tabu tenure, i.e., the tabu tenure used during the perturbation period. In addition, after a certain number of small perturbations, a larger perturbation is triggered to give the search process a larger spatial span; it is implemented by randomly moving one-third of the vertices from X_plus to X.
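A minimal sketch of the small perturbation, assuming the configuration is kept as two vertex sets (the function name and the seeded generator are our illustrative choices):

```python
import random

def perturb(X, Xplus, amplitude, rng):
    """Small perturbation sketch: move `amplitude` randomly chosen vertices
    from X_plus into X; the large perturbation would use len(Xplus) // 3."""
    moved = set(rng.sample(sorted(Xplus), min(amplitude, len(Xplus))))
    return X | moved, Xplus - moved

rng = random.Random(0)                 # fixed seed for reproducibility
new_X, new_Xplus = perturb({'B', 'D'}, {'A', 'C', 'E'}, 2, rng)
```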

4. Algorithm Experimentation

4.1. Datasets and Experimental Protocols

The experiments were carried out on the following two datasets:
  • The DTP dataset is a dataset proposed by Dražic et al. [15], with the number of vertices ranging from 150 to 1000.
  • The Range dataset is a dataset proposed by Sundar and Singh [13], with the number of vertices ranging from 50 to 500 and a transmission range from 100 to 150 m.
Both datasets are randomly generated and can be downloaded online or obtained from the authors. The DNS algorithm was implemented in Java (JDK17) and tested on a desktop computer equipped with an Intel® Xeon® W-2235 CPU @3.80 GHz, with 16.0 GB of RAM.

4.2. Calibration

In this section, we present experiments to fix the values of the key parameters of the DNS algorithm:
  • Parameter DisturbPeriod: the first perturbation period. Values from 14 to 17 were tested.
  • Parameter DisturbLevel: the perturbation amplitude. Values from 7 to 12 were tested.
  • Parameter DisturbTL1: the tabu tenure of the neighborhood move that takes a vertex from X_plus and puts it into X during the perturbation. Values 1 and 2 were tested.
  • Parameter DisturbTL2: the tabu tenure of the neighborhood move that takes a vertex from X and puts it into X_plus during the perturbation. Values from 3 to 8 were tested.
We selected 13 representative instances for the calibration experiments: instances 200-400-1, 200-600-1, 300-600-1, and 300-1000-1 in DTP, and instances 300-1, 400-1, and 500-1 in each of Range100, Range125, and Range150. The experiment was conducted as follows. First, we performed a rough screening over parameter combinations to select the more promising ones. Then, for each parameter setting, we ran these 13 instances in sequence; each instance was run five times with different random seeds for 300 s per run. We compared the gap rates of the parameter settings, where the gap was calculated as:
gap = (n1 - n2) / n2
where n1 is the average result obtained and n2 is the best-known objective value. Table 1 shows the results of the calibration experiment.
According to the experimental data, the minimum total gap rate was 0.075, corresponding to a DisturbPeriod of 15, a DisturbLevel of 8, a DisturbTL1 of 1, and a DisturbTL2 of 4. In the following experiments, the algorithm parameters were fixed to this setting. Note that this experiment does not guarantee optimal parameter values, and the best scheme may vary from one benchmark to another. It can also be seen that the gap rate is small for all parameter combinations, indicating the robustness of the algorithm.
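The gap measure above is straightforward to compute; a small helper (the function name is ours) makes the definition concrete:

```python
# The calibration metric from the text: the relative gap between the average
# result n1 and the best-known objective n2. The helper name is ours.

def gap(n1, n2):
    """Relative gap of an average result n1 against the best-known value n2."""
    return (n1 - n2) / n2

# Example: an average of 205.8 against a best-known 197.9 gives a gap of ~0.04.
```

The total gap reported per parameter setting in Table 1 is then the sum of this quantity over the 13 calibration instances.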

4.3. Algorithm Comparison

In this section, the algorithms proposed for the minimum dominating tree problem are compared; the comparison is summarized in Table 2.

4.4. Comparison on DTP Dataset

In this section, we compare the proposed DNS algorithm with other methods from the literature on the DTP dataset. There are two DTP datasets, dtp_large and dtp_small. Since all algorithms obtain the best results on dtp_small with little difference in speed, only the results for dtp_large are shown here. The compared algorithms were TLMH, VNS, and GAITLS. For each instance, 10 runs with different random seeds were performed, each lasting 1000 s, and the best objective value, average objective value, and average time were recorded. The experimental results and comparisons are shown in Table 3. Bold numbers indicate that the best-known value was obtained and the result is not worse than that of any other algorithm; stars mark instances where DNS improved on the best objective value reported in the literature.
From Table 3, it can be seen that our algorithm obtained the best value on most instances, and on the remaining instances it came very close. Its overall best values were slightly worse than those of TLMH but better than those of VNS and GAITLS, while its overall averages outperformed all compared algorithms, demonstrating both stability and speed. It also improved on the best-known solution for two instances.

4.5. Range Dataset Experiments

In this section, the widely used Range dataset of 54 instances was tested, comparing DNS with the TLMH, ACO-DT, EA/G-MP, and ABC-DTP algorithms. For the compared algorithms, we report the best results obtained with the best parameters from the original literature. Our algorithm was run 10 times on each instance with the calibrated parameters and different random seeds; each run lasted 1000 s, and the best objective value, average objective value, and average time were recorded. The results and comparisons are shown in Table 4, Table 5 and Table 6. Bold numbers indicate that the best-known value was obtained and the result is not worse than that of any other algorithm; stars mark instances where DNS improved on the best objective value reported in the literature.
Our algorithm obtained the best solution for most instances of the Range dataset, and the remaining results were close to the best known. It improved on the best solution for two instances and was faster than the TLMH algorithm.

5. Analysis and Discussion

5.1. The Importance of the Initial Solution

A procedure for generating the initial solution was proposed in the previous section. To assess its effect, experiments were conducted on the 18 instances of Range150. The objective values produced by the procedure were compared with both the minimum spanning tree weight and the best-known objective value, showing how much the procedure improves the initial solution and how close that solution is to the minimum dominating tree. The experimental results are shown in Table 7.
In Table 7, T_m denotes the minimum spanning tree weight, T_i the objective value obtained by the initialization procedure, and T_b the best-known objective value. The results show that using the initialization procedure to obtain a dominating tree as the initial solution improves markedly on using the minimum spanning tree. The weight of this initial dominating tree is relatively close to that of the minimum dominating tree, allowing the algorithm to converge quickly to a near-optimal solution from the very beginning. To verify this improvement, we also used the minimum spanning tree of the graph G(V, E) as the initial solution and repeated the experiments on Range150; this variant is denoted DNS-MS. The experimental results are presented in Table 8.
The experimental results show little difference between the best and average values obtained by DNS and DNS-MS, demonstrating the robustness of the local search procedure of the DNS algorithm. However, with comparable solution quality, DNS requires less solution time than DNS-MS, indicating that the proposed initial solution improves the efficiency of the algorithm.

5.2. The Importance of the Fast Neighborhood Evaluation

The proposed DNS algorithm uses a fast neighborhood evaluation technique. To verify its effectiveness, we measured the time needed to reach the same result on the 18 instances of Range150 with and without it. In this experiment, the perturbation was disabled and only the tabu mechanism was enabled. The best objective value reachable at complete convergence was determined in advance for each instance and used as the target result. The random seed was fixed for each instance so that any difference in speed was due solely to the fast neighborhood evaluation. Each program ran until it reached the target result, and the time taken was recorded for both methods. The results are shown in Table 9, where Method 1 is the version without fast neighborhood evaluation and Method 2 the version with it.
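The time-to-target protocol just described can be sketched as a small harness. This is our own illustrative code under stated assumptions (a `search_step` callback that returns the current objective), not the authors' experimental harness:

```python
import time

# Sketch (ours, not the authors' harness) of the time-to-target protocol:
# with a fixed seed, run the search until it first reaches the pre-computed
# target objective and record the elapsed wall-clock time.

def time_to_target(search_step, state, target, limit=1000.0):
    """Call `search_step(state)` until its returned objective reaches
    `target` or the time limit expires; return the elapsed seconds."""
    start = time.perf_counter()
    obj = float("inf")
    while obj > target and time.perf_counter() - start < limit:
        obj = search_step(state)
    return time.perf_counter() - start
```

Running the same harness with the two evaluation methods and a shared seed isolates the evaluation cost as the only source of the timing difference.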
From Table 9, it can be seen that the version using the fast neighborhood evaluation is significantly faster than the version without it, verifying its effectiveness. To observe the convergence of the two methods, scatter plots were generated from the weights and corresponding times recorded after each update. The convergence curves of several instances are shown in Figure 4, where NDNU denotes the method without fast neighborhood evaluation and DNU the method with it.
From Figure 4, it can be seen that the version with the fast neighborhood evaluation converges to the target value more quickly, whereas the version without it takes noticeably longer to reach the same objective value.

5.3. Importance of the Perturbation

The proposed DNS algorithm also implements a perturbation strategy. In this section, its effectiveness is verified by testing the versions with and without it on the 18 instances of Range150. Each instance was run five times with different random seeds under a time limit of 1000 s; the results are shown in Table 10.
From Table 10, it can be seen that the version using the perturbation strategy obtains better solutions, especially on the larger instances, verifying the effectiveness of the strategy.

5.4. Statistical Significance Testing among Versions of DNS

We conducted t-tests between different versions of DNS on selected instances to check whether the observed differences in results were merely caused by randomness. The compared versions were:
  • DNS-MS, the version of the algorithm with the minimum spanning tree as the initial solution.
  • DNS-NF, the version of the algorithm without fast neighborhood evaluation.
  • DNS-NP, the version of the algorithm without perturbation.
Among them, DNS-MS and DNS-NF were tested for differences in running time, while DNS-NP was tested for differences in solution quality. The results and comparisons are as follows.
From Table 11, we can see that the results or running times of DNS differ significantly (p ≤ 0.05) from those of the other versions on many instances. In DNS vs. DNS-NP, the p-value is 1.00 on some small-scale instances because both DNS and DNS-NP always obtain the optimal solution there, so no difference can be observed between them. On large-scale instances, however, significant differences can be observed.
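The kind of significance check reported in Table 11 can be sketched with Welch's t statistic, computable from the standard library; the p-value would then be read from a t-distribution. The run data below are made up for illustration, not the paper's raw runs:

```python
import statistics as st

# Stdlib sketch of the significance check behind Table 11 (illustrative data,
# not the paper's raw runs): Welch's t statistic for two independent samples.

def welch_t(a, b):
    """Welch's t statistic for two independent samples of run results."""
    va, vb = st.variance(a), st.variance(b)
    return (st.mean(a) - st.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

dns_runs    = [796.15, 796.60, 796.15, 796.70, 796.15]   # hypothetical
dns_np_runs = [796.70, 799.93, 798.12, 799.80, 797.90]   # hypothetical
t = welch_t(dns_runs, dns_np_runs)   # large |t| suggests a real difference
```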

6. Conclusions

In this paper, a dual-neighborhood search algorithm was proposed to solve the minimum dominating tree problem. To improve efficiency, a fast neighborhood evaluation method was proposed that dynamically maintains the minimum spanning tree of the subgraph induced by the dominating set. The tabu and perturbation mechanisms help the algorithm escape local optima and thus obtain better solutions. The DNS algorithm proved highly effective on a collection of widely used benchmark instances when compared with algorithms from the literature: out of 72 public instances, it improved the best result on four while remaining competitive on the rest with less computational time. Although the techniques proposed in this paper are specific to the minimum dominating tree problem, most of the ideas carry over to other combinatorial optimization problems. For example, the dynamic spanning tree calculation used in the fast neighborhood evaluation applies to problems with spanning tree structures, and the collaboration of two neighborhood structures can be introduced to other relevant optimization problems. Finally, it would be interesting to test the proposed ideas in other metaheuristic frameworks and on other optimization problems.
The DNS algorithm also has shortcomings: the best solutions it obtains on some instances still fall short of the best-known values. Several improvements are possible. For example, a weighted approach could be used to guide the search after perturbation: according to some rule or judgment function, nodes likely to appear in an optimal solution are given weights, and the initial solution is generated probabilistically, with higher-weight nodes selected with higher probability. Furthermore, some judgment conditions could adaptively scale the tabu search, reducing the tabu tenure in regions where globally optimal solutions may appear so as to refine the search.
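The weighted selection idea floated above amounts to sampling nodes with probability proportional to their weights. A minimal sketch with entirely made-up nodes and weights (the judgment function producing the weights is hypothetical):

```python
import random

# Sketch of the weighted-selection idea for future work: nodes that some
# judgment function deems likely to be in an optimal solution get larger
# weights, and initial nodes are drawn with probability proportional to
# weight. The nodes and weights below are made up for illustration.

nodes   = ["a", "b", "c", "d"]
weights = [1.0, 4.0, 3.0, 2.0]          # hypothetical judgment scores
rng     = random.Random(0)
picked  = rng.choices(nodes, weights=weights, k=2)  # higher weight, higher chance
```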

Author Contributions

Conceptualization, X.W. and C.X.; investigation, X.W. and Z.P.; methodology, X.W. and Z.P.; software, Z.P. and X.W.; data collection, Z.P. and X.W.; writing, Z.P. and X.W.; equipment, C.X. and X.W.; funding acquisition, X.W. and C.X.; supervision, C.X. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the National Natural Science Foundation of China (Grant No. 62002105 and 62201203) and the Science and Technology Program of Hubei Province (2021BLB171).

Data Availability Statement

The source code for the experiment can be obtained through the following link: https://github.com/pan204981292/DNS.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shin, I.; Shen, Y.; Thai, M.T. On approximation of dominating tree in wireless sensor networks. Optim. Lett. 2010, 4, 393–403. [Google Scholar] [CrossRef]
  2. Wu, X.; Lü, Z.; Galinier, P. Restricted swap-based neighborhood search for the minimum connected dominating set problem. Networks 2017, 69, 222–236. [Google Scholar] [CrossRef]
  3. Li, R.; Hu, S.; Gao, J.; Zhou, Y.; Wang, Y.; Yin, M. GRASP for connected dominating set problems. Neural Comput. Appl. 2017, 28, 1059–1067. [Google Scholar] [CrossRef]
  4. Li, R.; Hu, S.; Liu, H.; Li, R.; Ouyang, D.; Yin, M. Multi-start local search algorithm for the minimum connected dominating set problems. Mathematics 2019, 7, 1173. [Google Scholar] [CrossRef]
  5. Bouamama, S.; Blum, C.; Fages, J.G. An algorithm based on ant colony optimization for the minimum connected dominating set problem. Appl. Soft Comput. 2019, 80, 672–686. [Google Scholar] [CrossRef]
  6. Chinnasamy, A.; Sivakumar, B.; Selvakumari, P.; Suresh, A. Minimum connected dominating set based RSU allocation for smartCloud vehicles in VANET. Clust. Comput. 2019, 22, 12795–12804. [Google Scholar] [CrossRef]
  7. Hedar, A.R.; Ismail, R.; El-Sayed, G.A.; Khayyat, K.M.J. Two meta-heuristics designed to solve the minimum connected dominating set problem for wireless networks design and management. J. Netw. Syst. Manag. 2019, 27, 647–687. [Google Scholar] [CrossRef]
  8. Li, B.; Zhang, X.; Cai, S.; Lin, J.; Wang, Y.; Blum, C. Nucds: An efficient local search algorithm for minimum connected dominating set. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence 2021, Yokohama, Japan, 7–15 January 2021; pp. 1503–1510. [Google Scholar]
  9. Zhang, N.; Shin, I.; Li, B.; Boyaci, C.; Tiwari, R.; Thai, M.T. New approximation for minimum-weight routing backbone in wireless sensor network. In Proceedings of the Wireless Algorithms, Systems, and Applications: Third International Conference, WASA 2008, Dallas, TX, USA, 26–28 October 2008; pp. 96–108. [Google Scholar]
  10. Adasme, P.; Andrade, R.; Leung, J.; Lisser, A. Models for minimum cost dominating trees. Electron. Notes Discret. Math. 2016, 52, 101–107. [Google Scholar] [CrossRef]
  11. Adasme, P.; Andrade, R.; Leung, J.; Lisser, A. Improved solution strategies for dominating trees. Expert Syst. Appl. 2018, 100, 30–40. [Google Scholar] [CrossRef]
  12. Álvarez-Miranda, E.; Luipersbeck, M.; Sinnl, M. An exact solution framework for the minimum cost dominating tree problem. Optim. Lett. 2018, 12, 1669–1681. [Google Scholar] [CrossRef]
  13. Sundar, S.; Singh, A. New heuristic approaches for the dominating tree problem. Appl. Soft Comput. 2013, 13, 4695–4703. [Google Scholar] [CrossRef]
  14. Chaurasia, S.N.; Singh, A. A hybrid heuristic for dominating tree problem. Soft Comput. 2016, 20, 377–397. [Google Scholar] [CrossRef]
  15. Dražić, Z.; Čangalović, M.; Kovačević-Vujčić, V. A metaheuristic approach to the dominating tree problem. Optim. Lett. 2017, 11, 1155–1167. [Google Scholar] [CrossRef]
  16. Singh, K.; Sundar, S. Two new heuristics for the dominating tree problem. Appl. Intell. 2018, 48, 2247–2267. [Google Scholar] [CrossRef]
  17. Hu, S.; Liu, H.; Wu, X.; Li, R.; Zhou, J.; Wang, J. A hybrid framework combining genetic algorithm with iterated local search for the dominating tree problem. Mathematics 2019, 7, 359. [Google Scholar] [CrossRef]
  18. Xiong, C.; Liu, H.; Wu, X.; Deng, N. A two-level meta-heuristic approach for the minimum dominating tree problem. Front. Comput. Sci. 2023, 17, 171406. [Google Scholar] [CrossRef]
  19. Yang, W.; Ke, L. An improved fireworks algorithm for the capacitated vehicle routing problem. Front. Comput. Sci. 2019, 13, 552–564. [Google Scholar] [CrossRef]
  20. Hou, N.; He, F.; Zhou, Y.; Chen, Y. An efficient GPU-based parallel tabu search algorithm for hardware/software co-design. Front. Comput. Sci. 2020, 14, 145316. [Google Scholar] [CrossRef]
  21. Hao, X.; Liu, J.; Zhang, Y.; Sanga, G. Mathematical model and simulated annealing algorithm for Chinese high school timetabling problems under the new curriculum innovation. Front. Comput. Sci. 2021, 15, 151309. [Google Scholar] [CrossRef]
  22. Lin, S.; Kernighan, B.W. An effective heuristic algorithm for the traveling-salesman problem. Oper. Res. 1973, 21, 498–516. [Google Scholar] [CrossRef]
  23. Glover, F. Ejection chains, reference structures and alternating path methods for traveling salesman problems. Discret. Appl. Math. 1996, 65, 223–253. [Google Scholar] [CrossRef]
  24. Yagiura, M.; Yamaguchi, T.; Ibaraki, T. A variable depth search algorithm with branching search for the generalized assignment problem. Optim. Methods Softw. 1998, 10, 419–441. [Google Scholar] [CrossRef]
  25. Ahuja, R.K.; Ergun, Ö.; Orlin, J.B.; Punnen, A.P. A survey of very large-scale neighborhood search techniques. Discret. Appl. Math. 2002, 123, 75–102. [Google Scholar] [CrossRef]
  26. Santos, L.F.M.; Iwayama, R.S.; Cavalcanti, L.B.; Turi, L.M.; de Souza Morais, F.E.; Mormilho, G.; Cunha, C.B. A variable neighborhood search algorithm for the bin packing problem with compatible categories. Expert Syst. Appl. 2019, 124, 209–225. [Google Scholar] [CrossRef]
  27. Wu, X.; Xiong, C.; Deng, N.; Xia, D. A variable depth neighborhood search algorithm for the Min–Max Arc Crossing Problem. Comput. Oper. Res. 2021, 134, 105403. [Google Scholar] [CrossRef]
  28. Wu, X.; Lü, Z.; Guo, Q.; Ye, T. Two-level iterated local search for WDM network design problem with traffic grooming. Appl. Soft Comput. 2015, 37, 715–724. [Google Scholar] [CrossRef]
  29. Pop, P.C.; Matei, O.; Sabo, C.; Petrovan, A. A two-level solution approach for solving the generalized minimum spanning tree problem. Eur. J. Oper. Res. 2018, 265, 478–487. [Google Scholar] [CrossRef]
  30. Carrabs, F.; Cerulli, R.; Pentangelo, R.; Raiconi, A. A two-level metaheuristic for the all colors shortest path problem. Comput. Optim. Appl. 2018, 71, 525–551. [Google Scholar] [CrossRef]
  31. Contreras-Bolton, C.; Parada, V. An effective two-level solution approach for the prize-collecting generalized minimum spanning tree problem by iterated local search. Int. Trans. Oper. Res. 2021, 28, 1190–1212. [Google Scholar] [CrossRef]
  32. Li, G.; Li, J. An improved tabu search algorithm for the stochastic vehicle routing problem with soft time windows. IEEE Access 2020, 8, 158115–158124. [Google Scholar] [CrossRef]
  33. Tong, B.; Wang, J.; Wang, X.; Zhou, F.; Mao, X.; Zheng, W. Optimal Route Planning for Truck–Drone Delivery Using Variable Neighborhood Tabu Search Algorithm. Appl. Sci. 2022, 12, 529. [Google Scholar] [CrossRef]
  34. Seydanlou, P.; Sheikhalishahi, M.; Tavakkoli-Moghaddam, R.; Fathollahi-Fard, A.M. A customized multi-neighborhood search algorithm using the tabu list for a sustainable closed-loop supply chain network under uncertainty. Appl. Soft Comput. 2023, 114, 110495. [Google Scholar] [CrossRef]
  35. Song, T.; Chen, M.; Xu, Y.; Wang, D.; Song, X.; Tang, X. Competition-guided multi-neighborhood local search algorithm for the university course timetabling problem. Appl. Soft Comput. 2021, 110, 107624. [Google Scholar] [CrossRef]
  36. Glover, F.; Laguna, M. Tabu Search; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
Figure 1. T<B, D>.
Figure 2. Move T<B, D> to T<B, D, E>.
Figure 3. T<A, B, D, F> to T<A, B, D, E, F>.
Figure 4. Fast neighborhood evaluation scatter plot.
Table 1. Experimental results of parameter testing for the DNS.

| DisturbPeriod | DisturbLevel | DisturbTL1 | DisturbTL2 | Total Gap | Average Time |
|---|---|---|---|---|---|
| 14 | 7 | 1 | 3 | 0.083 | 1959 |
| 14 | 7 | 1 | 4 | 0.082 | 1696 |
| 14 | 7 | 1 | 5 | 0.083 | 1806 |
| 14 | 8 | 1 | 3 | 0.093 | 1803 |
| 14 | 8 | 1 | 4 | 0.077 | 1776 |
| 14 | 8 | 1 | 5 | 0.080 | 1733 |
| 14 | 9 | 1 | 3 | 0.093 | 1910 |
| 14 | 9 | 1 | 4 | 0.090 | 1985 |
| 14 | 9 | 1 | 5 | 0.095 | 2084 |
| 15 | 8 | 1 | 4 | 0.075 | 1822 |
| 15 | 8 | 1 | 5 | 0.086 | 1768 |
| 15 | 8 | 1 | 6 | 0.083 | 1854 |
| 15 | 9 | 1 | 4 | 0.096 | 1956 |
| 15 | 9 | 1 | 5 | 0.095 | 1772 |
| 15 | 9 | 1 | 6 | 0.107 | 1835 |
| 15 | 10 | 1 | 4 | 0.077 | 1945 |
| 15 | 10 | 1 | 5 | 0.093 | 1955 |
| 15 | 10 | 1 | 6 | 0.096 | 1906 |
| 16 | 9 | 2 | 5 | 0.097 | 1930 |
| 16 | 9 | 2 | 6 | 0.110 | 1932 |
| 16 | 9 | 2 | 7 | 0.094 | 2168 |
| 16 | 10 | 2 | 5 | 0.087 | 1919 |
| 16 | 10 | 2 | 6 | 0.089 | 2025 |
| 16 | 10 | 2 | 7 | 0.101 | 2054 |
| 16 | 11 | 2 | 5 | 0.101 | 2013 |
| 16 | 11 | 2 | 6 | 0.093 | 1769 |
| 16 | 11 | 2 | 7 | 0.093 | 1895 |
| 17 | 10 | 2 | 6 | 0.106 | 1662 |
| 17 | 10 | 2 | 7 | 0.106 | 1781 |
| 17 | 10 | 2 | 8 | 0.111 | 1968 |
| 17 | 11 | 2 | 6 | 0.108 | 1835 |
| 17 | 11 | 2 | 7 | 0.117 | 1863 |
| 17 | 11 | 2 | 8 | 0.108 | 1896 |
| 17 | 12 | 2 | 6 | 0.113 | 1942 |
| 17 | 12 | 2 | 7 | 0.114 | 1892 |
| 17 | 12 | 2 | 8 | 0.110 | 1707 |
Table 2. Comparison of Algorithms for Minimum Dominating Tree.

| Algorithm | Author | Year | Advantage | Disadvantage |
|---|---|---|---|---|
| VNS | Zorica Dražić | 2016 | Can calculate the optimal solution for small-scale instances | The calculation result is poor for large-scale instances |
| GAITLS | Shuli Hu | 2019 | The average calculation result is good | The overall optimal solution level is slightly poor |
| ACO-DT | Shyam Sundar | 2013 | Can calculate the optimal solution for small-scale instances | The overall optimal solution level and average level are poor |
| ABC-DTP | Kavita Singh | 2018 | The overall optimal solution level is good | The average level is slightly poor |
| EA/G-MP | Chaurasia | 2016 | Can calculate the optimal solution for small-scale instances | The overall optimal solution level and average level are slightly poor |
| TLMH | Caiquan Xiong | 2023 | The overall optimal solution level and average level are good | Takes a long time |
Table 3. Computational results of the DNS and comparisons on dtp_large.

| Instance | DNS Best | DNS Avg | DNS Time | TLMH Best | TLMH Avg | TLMH Time | VNS Best | VNS Avg | GAITLS Best | GAITLS Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| 100-150-0 | 152.57 | 152.57 | 14 | 152.57 | 152.57 | 2 | 152.57 | 154.61 | 152.57 | 152.57 |
| 100-150-1 | 192.21 | 192.21 | 6 | 192.21 | 192.21 | 11 | 192.21 | 194.22 | 192.21 | 192.21 |
| 100-150-2 | 146.34 | 146.34 | <1 | 146.34 | 146.34 | 87 | 146.34 | 148.35 | 146.34 | 146.34 |
| 100-200-0 | 135.04 | 135.04 | <1 | 135.04 | 135.04 | 60 | 135.04 | 136.41 | 135.04 | 135.04 |
| 100-200-1 | 91.88 | 91.88 | <1 | 91.88 | 91.88 | 13 | 91.88 | 92.03 | 91.88 | 91.88 |
| 100-200-2 | 115.93 | 115.93 | 17 | 115.93 | 115.93 | 9 | 115.93 | 117.11 | 115.93 | 115.93 |
| 200-400-0 | 257.09 | 257.52 | 376 | 257.09 | 257.23 | 370 | 306.06 | 343.95 | 257.09 | 257.09 |
| 200-400-1 | 258.77 | 258.88 | 181 | 258.77 | 258.88 | 486 | 303.53 | 331.10 | 258.93 | 258.93 |
| 200-400-2 | 241.07 | 241.42 | 6 | 238.27 | 241.72 | 370 | 274.37 | 389.51 | 238.29 | 238.29 |
| 200-600-0 | 121.62 | 122.94 | 307 | 121.62 | 127.73 | 460 | 132.49 | 150.39 | 121.62 | 121.62 |
| 200-600-1 | 135.08 | 137.63 | 293 | 135.08 | 145.20 | 441 | 162.92 | 198.21 | 135.08 | 135.08 |
| 200-600-2 | 123.70 | 124.04 | 166 | 123.31 | 123.70 | 264 | 139.08 | 154.36 | 123.31 | 123.31 |
| 300-600-0 | 352.32 | 353.36 | 297 | 348.03 | 351.22 | 529 | 471.69 | 494.62 | 348.03 | 348.03 |
| 300-600-1 | 416.23 | 416.99 | 157 | 413.93 | 416.64 | 753 | 494.91 | 542.46 | 415.32 | 415.32 |
| 300-600-2 | 354.35 | 356.52 | 57 | 352.15 | 353.77 | 760 | 500.72 | 535.30 | 385.53 | 385.53 |
| 300-1000-0 | 148.86 | 151.05 | 331 | 148.63 | 150.10 | 629 | 257.72 | 264.33 | 149.57 | 149.57 |
| 300-1000-1 | * 164.65 | 165.77 | 404 | 165.21 | 165.91 | 477 | 242.79 | 325.16 | 165.19 | 165.19 |
| 300-1000-2 | * 154.59 | 158.90 | 434 | 154.64 | 169.39 | 595 | 233.18 | 251.41 | 154.61 | 154.61 |
| average | 197.91 | 198.83 | 169 | 197.26 | 199.75 | 351 | 241.86 | 267.97 | 199.25 | 199.25 |
Table 4. Computational results of the DNS and comparisons on Range100.

| Instance | DNS Best | DNS Avg | DNS Time | TLMH Best | TLMH Avg | TLMH Time | ACO-DT Best | ACO-DT Avg | EA/G-MP Best | EA/G-MP Avg | ABC-DTP Best | ABC-DTP Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50-1 | 1204.41 | 1204.41 | 29 | 1204.41 | 1204.41 | 1 | 1204.41 | 1204.41 | 1204.41 | 1204.41 | 1204.41 | 1204.41 |
| 50-2 | 1340.44 | 1340.44 | 25 | 1340.44 | 1340.44 | <1 | 1340.44 | 1340.44 | 1340.44 | 1340.44 | 1340.44 | 1340.69 |
| 50-3 | 1316.39 | 1316.39 | 6 | 1316.39 | 1316.39 | <1 | 1316.39 | 1316.39 | 1316.39 | 1316.39 | 1316.39 | 1316.39 |
| 100-1 | 1217.47 | 1217.47 | <1 | 1217.47 | 1217.47 | 17 | 1217.47 | 1217.47 | 1217.47 | 1217.61 | 1217.47 | 1218.59 |
| 100-2 | 1128.40 | 1128.40 | 7 | 1128.40 | 1128.40 | 44 | 1152.85 | 1152.85 | 1128.40 | 1128.54 | 1128.40 | 1136.50 |
| 100-3 | 1252.99 | 1252.99 | 20 | 1252.99 | 1253.41 | 202 | 1253.49 | 1253.49 | 1253.49 | 1257.37 | 1252.99 | 1253.30 |
| 200-1 | 1206.79 | 1206.79 | 23 | 1206.79 | 1206.80 | 515 | 1206.79 | 1207.61 | 1206.79 | 1208.26 | 1206.79 | 1210.25 |
| 200-2 | 1213.24 | 1213.24 | 170 | 1213.24 | 1213.27 | 395 | 1216.23 | 1217.73 | 1216.41 | 1222.23 | 1216.41 | 1219.38 |
| 200-3 | 1247.25 | 1247.25 | 114 | 1247.25 | 1247.41 | 313 | 1247.25 | 1248.94 | 1247.63 | 1250.78 | 1247.73 | 1252.15 |
| 300-1 | 1217.59 | 1224.32 | 587 | 1215.48 | 1217.40 | 564 | 1228.24 | 1243.70 | 1225.22 | 1230.48 | 1215.48 | 1220.39 |
| 300-2 | 1170.85 | 1171.53 | 441 | 1170.85 | 1171.08 | 341 | 1176.45 | 1193.95 | 1170.85 | 1171.30 | 1170.85 | 1171.15 |
| 300-3 | 1247.51 | 1254.16 | 453 | 1247.51 | 1249.51 | 348 | 1261.18 | 1276.75 | 1252.14 | 1260.83 | 1249.54 | 1254.67 |
| 400-1 | 1211.33 | 1216.45 | 426 | 1211.33 | 1213.51 | 502 | 1220.62 | 1237.45 | 1211.72 | 1220.79 | 1212.51 | 1214.36 |
| 400-2 | 1201.74 | 1205.34 | 425 | 1197.66 | 1198.99 | 432 | 1209.69 | 1246.14 | 1199.92 | 1202.82 | 1199.23 | 1202.90 |
| 400-3 | 1257.52 | 1262.98 | 487 | 1245.31 | 1248.47 | 633 | 1254.10 | 1270.34 | 1248.29 | 1268.38 | 1246.94 | 1258.76 |
| 500-1 | 1202.12 | 1209.06 | 482 | 1197.26 | 1202.81 | 678 | 1219.66 | 1240.05 | 1206.07 | 1222.12 | 1200.06 | 1208.73 |
| 500-2 | * 1220.47 | 1233.98 | 624 | 1221.76 | 1226.81 | 570 | 1273.86 | 1295.51 | 1226.78 | 1240.62 | 1220.68 | 1230.07 |
| 500-3 | * 1231.81 | 1244.93 | 381 | 1231.84 | 1236.64 | 583 | 1232.71 | 1259.08 | 1232.15 | 1250.48 | 1231.95 | 1236.33 |
| average | 1227.13 | 1230.56 | 261 | 1225.91 | 1227.32 | 348 | 1235.10 | 1245.68 | 1228.03 | 1234.10 | 1226.57 | 1230.57 |
Table 5. Computational results of the DNS and comparisons on Range125.

| Instance | DNS Best | DNS Avg | DNS Time | TLMH Best | TLMH Avg | TLMH Time | ACO-DT Best | ACO-DT Avg | EA/G-MP Best | EA/G-MP Avg | ABC-DTP Best | ABC-DTP Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50-1 | 802.95 | 802.95 | 10 | 802.95 | 802.95 | 1 | 802.95 | 803.26 | 802.95 | 802.95 | 802.95 | 802.95 |
| 50-2 | 1055.10 | 1055.10 | 19 | 1055.10 | 1055.10 | 2 | 1055.10 | 1055.10 | 1055.10 | 1055.10 | 1055.10 | 1055.10 |
| 50-3 | 877.77 | 877.77 | 3 | 877.77 | 877.77 | 4 | 877.77 | 877.77 | 877.77 | 877.77 | 877.77 | 877.83 |
| 100-1 | 943.01 | 943.01 | 3 | 943.01 | 943.01 | 102 | 943.01 | 946.37 | 943.01 | 943.01 | 943.01 | 943.01 |
| 100-2 | 917.00 | 917.00 | 126 | 917.00 | 917.23 | 281 | 935.71 | 938.71 | 917.95 | 917.95 | 917.00 | 917.38 |
| 100-3 | 998.18 | 998.18 | 5 | 998.18 | 998.18 | 44 | 998.18 | 1006.11 | 998.18 | 998.18 | 998.18 | 999.91 |
| 200-1 | 910.17 | 910.17 | 11 | 910.17 | 910.17 | 195 | 910.17 | 910.50 | 910.17 | 910.17 | 910.17 | 911.66 |
| 200-2 | 921.76 | 921.76 | 79 | 921.76 | 921.76 | 184 | 928.84 | 942.72 | 921.76 | 923.03 | 921.76 | 925.38 |
| 200-3 | 939.58 | 939.58 | 452 | 939.60 | 939.61 | 333 | 951.36 | 959.63 | 939.58 | 949.18 | 939.58 | 943.20 |
| 300-1 | 977.65 | 978.33 | 412 | 977.65 | 977.65 | 416 | 978.91 | 980.11 | 977.65 | 981.04 | 979.81 | 981.85 |
| 300-2 | 913.01 | 913.01 | 228 | 913.01 | 913.01 | 402 | 918.40 | 949.05 | 913.01 | 914.08 | 913.01 | 913.88 |
| 300-3 | 974.85 | 974.94 | 383 | 974.78 | 974.78 | 315 | 981.15 | 981.33 | 974.85 | 979.34 | 974.78 | 978.35 |
| 400-1 | 965.99 | 966.08 | 292 | 966.01 | 966.03 | 225 | 968.66 | 980.60 | 965.99 | 966.59 | 965.99 | 966.71 |
| 400-2 | 938.54 | 942.45 | 643 | 934.17 | 937.88 | 506 | 941.52 | 961.71 | 941.02 | 943.53 | 941.02 | 942.59 |
| 400-3 | 1002.61 | 1003.13 | 579 | 1002.61 | 1002.67 | 525 | 1002.61 | 1009.07 | 1002.97 | 1003.62 | 1002.61 | 1003.33 |
| 500-1 | 963.89 | 964.10 | 484 | 963.89 | 965.91 | 272 | 986.49 | 991.85 | 963.89 | 963.89 | 963.89 | 964.80 |
| 500-2 | 950.18 | 956.48 | 539 | 948.57 | 949.57 | 457 | 953.77 | 996.85 | 948.57 | 952.96 | 948.96 | 950.12 |
| 500-3 | 982.02 | 988.86 | 514 | 980.67 | 982.73 | 553 | 1006.23 | 1007.36 | 980.67 | 992.64 | 981.90 | 986.01 |
| average | 946.35 | 947.38 | 265 | 945.94 | 946.45 | 283 | 952.27 | 961.01 | 946.39 | 948.61 | 946.53 | 948.00 |
Table 6. Computational results of the DNS and comparisons on Range150.

| Instance | DNS Best | DNS Avg | DNS Time | TLMH Best | TLMH Avg | TLMH Time | ACO-DT Best | ACO-DT Avg | EA/G-MP Best | EA/G-MP Avg | ABC-DTP Best | ABC-DTP Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50-1 | 647.75 | 647.75 | <1 | 647.75 | 647.75 | 1 | 647.75 | 647.75 | 647.75 | 647.75 | 647.75 | 647.75 |
| 50-2 | 863.69 | 863.69 | <1 | 863.69 | 863.69 | 2 | 863.69 | 863.69 | 863.69 | 863.69 | 863.69 | 864.04 |
| 50-3 | 743.94 | 743.94 | <1 | 743.94 | 743.94 | 2 | 743.94 | 743.94 | 743.94 | 743.94 | 743.94 | 745.68 |
| 100-1 | 876.69 | 876.69 | 7 | 876.69 | 876.79 | 297 | 881.37 | 885.36 | 876.69 | 876.69 | 876.69 | 877.02 |
| 100-2 | 657.35 | 657.35 | <1 | 657.35 | 657.35 | 11 | 657.35 | 657.35 | 657.35 | 657.53 | 657.35 | 657.53 |
| 100-3 | 722.87 | 722.87 | <1 | 722.87 | 722.87 | 2 | 722.87 | 722.87 | 722.87 | 722.87 | 722.87 | 722.87 |
| 200-1 | 809.90 | 809.90 | 23 | 809.90 | 809.90 | 138 | 809.90 | 810.87 | 809.90 | 810.49 | 809.90 | 809.90 |
| 200-2 | 736.23 | 736.23 | 2 | 736.23 | 736.23 | 354 | 736.23 | 736.23 | 736.23 | 736.23 | 736.23 | 736.23 |
| 200-3 | 792.71 | 792.71 | 17 | 792.71 | 792.71 | 97 | 792.71 | 793.73 | 792.71 | 795.65 | 792.71 | 793.48 |
| 300-1 | 796.15 | 796.60 | 288 | 796.15 | 796.15 | 283 | 796.70 | 797.17 | 796.15 | 798.12 | 796.29 | 796.99 |
| 300-2 | 741.02 | 741.72 | 257 | 741.02 | 741.03 | 298 | 748.94 | 752.33 | 741.02 | 743.05 | 741.02 | 742.88 |
| 300-3 | 819.76 | 819.76 | 171 | 819.76 | 819.78 | 129 | 826.48 | 826.56 | 819.76 | 821.67 | 819.76 | 820.45 |
| 400-1 | 796.70 | 797.42 | 435 | 795.53 | 795.88 | 445 | 796.70 | 798.24 | 795.53 | 798.82 | 795.53 | 797.92 |
| 400-2 | 779.63 | 780.64 | 477 | 779.67 | 779.67 | 388 | 782.91 | 787.66 | 779.63 | 783.14 | 779.63 | 781.40 |
| 400-3 | 814.14 | 816.62 | 589 | 814.14 | 814.18 | 388 | 826.48 | 831.32 | 814.14 | 817.38 | 814.14 | 815.35 |
| 500-1 | 792.32 | 793.49 | 469 | 792.21 | 792.31 | 357 | 794.47 | 797.13 | 792.21 | 793.59 | 793.98 | 796.16 |
| 500-2 | 779.35 | 779.92 | 465 | 779.38 | 779.41 | 274 | 779.35 | 791.20 | 779.35 | 781.28 | 779.35 | 780.04 |
| 500-3 | 808.64 | 810.00 | 538 | 808.37 | 808.39 | 281 | 808.50 | 811.35 | 808.50 | 810.27 | 808.50 | 808.50 |
| average | 776.60 | 777.07 | 207 | 776.52 | 776.56 | 208 | 778.69 | 780.82 | 776.52 | 777.90 | 776.53 | 777.46 |
Table 7. Experimental results of the initial dominating tree algorithm.

| Instance | T_m | T_i | T_b |
|---|---|---|---|
| 50-1 | 2368.21 | 1145.40 | 647.75 |
| 50-2 | 2521.75 | 1222.31 | 863.69 |
| 50-3 | 2461.33 | 1103.13 | 743.94 |
| 100-1 | 3313.79 | 1324.04 | 876.69 |
| 100-2 | 3155.53 | 1212.55 | 657.35 |
| 100-3 | 3354.60 | 1043.87 | 722.87 |
| 200-1 | 4618.79 | 1327.49 | 809.90 |
| 200-2 | 4704.18 | 1322.38 | 736.23 |
| 200-3 | 4720.93 | 1353.97 | 792.71 |
| 300-1 | 5685.14 | 1559.49 | 796.15 |
| 300-2 | 5718.30 | 1269.55 | 741.02 |
| 300-3 | 5839.22 | 1516.01 | 819.76 |
| 400-1 | 6599.33 | 1730.84 | 795.53 |
| 400-2 | 6618.89 | 1833.34 | 779.63 |
| 400-3 | 6524.29 | 1638.08 | 814.14 |
| 500-1 | 7356.76 | 1375.03 | 792.21 |
| 500-2 | 7342.62 | 1334.51 | 779.35 |
| 500-3 | 7305.09 | 2111.56 | 808.37 |
Table 8. Computational results of the initial solution experiment on Range150.

| Instance | DNS Best | DNS Avg | DNS Time | DNS-MS Best | DNS-MS Avg | DNS-MS Time |
|---|---|---|---|---|---|---|
| 50-1 | 647.75 | 647.75 | <1 | 647.75 | 647.75 | <1 |
| 50-2 | 863.69 | 863.69 | <1 | 863.69 | 863.69 | <1 |
| 50-3 | 743.94 | 743.94 | <1 | 743.94 | 743.94 | <1 |
| 100-1 | 876.69 | 876.69 | 7 | 876.69 | 876.69 | 10 |
| 100-2 | 657.35 | 657.35 | <1 | 657.35 | 657.35 | <1 |
| 100-3 | 722.87 | 722.87 | <1 | 722.87 | 722.87 | <1 |
| 200-1 | 809.90 | 809.90 | 23 | 809.90 | 809.90 | 20 |
| 200-2 | 736.23 | 736.23 | 2 | 736.23 | 736.23 | 7 |
| 200-3 | 792.71 | 792.71 | 17 | 792.71 | 792.71 | 32 |
| 300-1 | 796.15 | 796.60 | 288 | 796.65 | 796.69 | 282 |
| 300-2 | 741.02 | 741.72 | 257 | 741.02 | 741.02 | 273 |
| 300-3 | 819.76 | 819.76 | 171 | 819.76 | 819.84 | 484 |
| 400-1 | 796.70 | 797.42 | 435 | 796.70 | 797.34 | 500 |
| 400-2 | 779.63 | 780.64 | 477 | 779.63 | 780.40 | 525 |
| 400-3 | 814.14 | 816.62 | 589 | 815.03 | 817.09 | 567 |
| 500-1 | 792.32 | 793.49 | 469 | 792.21 | 793.73 | 808 |
| 500-2 | 779.35 | 779.92 | 465 | 779.35 | 780.31 | 652 |
| 500-3 | 808.64 | 810.00 | 538 | 809.69 | 810.34 | 670 |
| average | 776.60 | 777.07 | 207 | 776.73 | 777.11 | 268 |
Table 9. Comparison of fast neighborhood evaluation experiments.

| Instance | Target Result | Method 1's Time | Method 2's Time |
|---|---|---|---|
| 50-1 | 647.75 | <1 | <1 |
| 50-2 | 903.37 | <1 | <1 |
| 50-3 | 751.24 | <1 | <1 |
| 100-1 | 876.69 | 13 | 3 |
| 100-2 | 657.35 | 1 | <1 |
| 100-3 | 722.87 | 1 | <1 |
| 200-1 | 809.90 | 10 | 1 |
| 200-2 | 736.23 | 10 | 1 |
| 200-3 | 797.11 | 85 | 15 |
| 300-1 | 798.18 | 136 | 22 |
| 300-2 | 745.29 | 28 | 3 |
| 300-3 | 827.56 | 447 | 77 |
| 400-1 | 803.07 | 288 | 36 |
| 400-2 | 785.63 | 411 | 61 |
| 400-3 | 825.07 | 448 | 72 |
| 500-1 | 801.36 | 200 | 23 |
| 500-2 | 780.03 | 372 | 56 |
| 500-3 | 818.49 | 315 | 20 |
| average | 782.62 | 153 | 21 |
Table 10. Comparison of perturbation strategy experiments.

| Instance | With Perturbation Best | With Perturbation Avg | Without Perturbation Best | Without Perturbation Avg |
|---|---|---|---|---|
| 50-1 | 647.75 | 647.75 | 647.75 | 647.75 |
| 50-2 | 863.69 | 863.69 | 903.37 | 903.37 |
| 50-3 | 743.94 | 743.94 | 751.24 | 751.24 |
| 100-1 | 876.69 | 876.69 | 876.69 | 876.69 |
| 100-2 | 657.35 | 657.35 | 657.35 | 657.35 |
| 100-3 | 722.87 | 722.87 | 722.87 | 722.87 |
| 200-1 | 809.90 | 809.90 | 809.90 | 809.90 |
| 200-2 | 736.23 | 736.23 | 736.23 | 736.23 |
| 200-3 | 792.71 | 792.71 | 792.71 | 794.27 |
| 300-1 | 796.29 | 796.62 | 796.70 | 799.93 |
| 300-2 | 741.02 | 741.02 | 743.99 | 744.87 |
| 300-3 | 819.76 | 819.76 | 822.70 | 828.09 |
| 400-1 | 796.70 | 797.84 | 802.80 | 805.29 |
| 400-2 | 779.63 | 781.21 | 782.98 | 786.56 |
| 400-3 | 814.14 | 816.47 | 824.31 | 825.71 |
| 500-1 | 792.65 | 793.55 | 799.82 | 806.55 |
| 500-2 | 779.35 | 779.65 | 780.03 | 784.35 |
| 500-3 | 808.64 | 809.92 | 811.63 | 817.03 |
| average | 776.63 | 777.07 | 781.28 | 783.23 |
Table 11. p-values of t-tests on each instance between versions of DNS.

| Instance | DNS vs. DNS-MS | DNS vs. DNS-NF | DNS vs. DNS-NP |
|---|---|---|---|
| 50-1 | 0.45 | 0.12 | 1.00 |
| 50-2 | 0.26 | 0.15 | 0.00 |
| 50-3 | 0.22 | 0.08 | 0.00 |
| 100-1 | 0.16 | 0.01 | 1.00 |
| 100-2 | 0.04 | 0.06 | 1.00 |
| 100-3 | 0.05 | 0.05 | 1.00 |
| 200-1 | 0.12 | 0.00 | 1.00 |
| 200-2 | 0.00 | 0.00 | 1.00 |
| 200-3 | 0.03 | 0.01 | 0.08 |
| 300-1 | 0.65 | 0.00 | 0.00 |
| 300-2 | 0.22 | 0.00 | 0.00 |
| 300-3 | 0.01 | 0.01 | 0.00 |
| 400-1 | 0.04 | 0.00 | 0.00 |
| 400-2 | 0.05 | 0.00 | 0.00 |
| 400-3 | 0.22 | 0.00 | 0.00 |
| 500-1 | 0.01 | 0.00 | 0.00 |
| 500-2 | 0.03 | 0.00 | 0.00 |
| 500-3 | 0.04 | 0.00 | 0.00 |

