Article

Dual-Neighborhood Tabu Search for Computing Stable Extensions in Abstract Argumentation Frameworks

School of Computer Science, Hubei University of Technology, Wuhan 430068, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(15), 6428; https://doi.org/10.3390/app14156428
Submission received: 19 June 2024 / Revised: 10 July 2024 / Accepted: 21 July 2024 / Published: 23 July 2024
(This article belongs to the Special Issue Heuristic and Evolutionary Algorithms for Engineering Optimization)

Abstract
Abstract argumentation has become an important field of artificial intelligence. This paper proposes a dual-neighborhood tabu search (DNTS) method specifically designed to find a single stable extension in abstract argumentation frameworks. The proposed algorithm implements an improved dual-neighborhood strategy incorporating a fast neighborhood evaluation method. In addition, by introducing tabu and perturbation techniques, the algorithm is able to escape local optima, which significantly improves its performance. To evaluate the effectiveness of the method, the algorithm's performance was studied on more than 300 randomly generated benchmark datasets and compared with algorithms from the literature. In the experiments, DNTS outperforms the other method in time consumption on more than 50 instances and surpasses the other meta-heuristic method in the number of solved cases. Further analysis shows that the initialization method, the tabu strategy, and the perturbation technique all contribute to the efficiency of the proposed DNTS.

1. Introduction

Formal argumentation has received widespread attention in research areas such as non-monotonic reasoning [1], communication in multi-agent systems [2], and semantic web reasoning [3]. Early research in these areas originated in the early 1990s. However, widespread academic attention was sparked by Dung’s theory of abstract argumentation frameworks [4], which marked an important developmental milestone in the field.
Dung’s theory of abstract argumentation frameworks provides a foundational model for the field of computational argumentation. This model has been widely applied in various domains such as multi-agent systems, decision support tools, and medical and legal reasoning. From this, computational argumentation has developed into an important branch of artificial intelligence research [5].
Dung [4] proposed a generic and highly abstract model of argumentation, which formalizes argumentation as an abstract argumentation framework (AF), represented as AF = (A, R), by defining arguments as vertices and attack relations between arguments as directed edges.
In this framework, a stable extension is defined as a set of arguments E ⊆ A that satisfies the following conditions: (1) there is no attack relation between any arguments within E; and (2) every argument in AF that does not belong to E is attacked by at least one argument in E. Figure 1 illustrates an example of a stable extension. In the illustration, the set E consisting of red vertices is a stable extension: E attacks all arguments in the framework not contained in it, while the arguments within E have no mutual attack relations.
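The two defining conditions can be checked directly. The following is a minimal sketch (the set-based representation and the function name are illustrative, not from the paper), assuming arguments as strings and attacks as (attacker, target) pairs:

```python
# Illustrative sketch: check the two conditions of a stable extension.
# A: set of arguments, R: set of (attacker, target) pairs, E: candidate set.
def is_stable(A, R, E):
    E = set(E)
    # Condition (1): E is conflict-free (no attack between arguments in E).
    if any((a, b) in R for a in E for b in E):
        return False
    # Condition (2): every argument outside E is attacked by some argument in E.
    return all(any((a, b) in R for a in E) for b in set(A) - E)
```

For instance, in the framework a → b → c, the set {a, c} is stable, while {a, b} (internal conflict) and {a} (c left unattacked) are not.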
The purpose of this paper is to study the problem of finding a single stable extension in abstract argumentation frameworks, also called the SE-ST problem.
In the field of research on formal argumentation modeling, the problem of semantic parsing in the theory of abstract argumentation frameworks [6] is particularly prominent. The International Competition on Computational Models of Argumentation (ICCMA) has been launched specifically for this purpose, in order to promote this field of research. The competition consists of several competition tracks aimed at solving a series of related problems. The research in this paper is on one of its main tracks and focuses on exploring solution strategies for computing a single stable extension of an abstract argumentation framework.
Currently, algorithms for solving the stable extensions problem are mainly divided into two categories [7]: reduction-based approaches and direct approaches. Reduction-based approaches typically employ efficient existing solvers, such as SAT solvers [8], to transform abstract argumentation frameworks and stable extensions into a format that the solver can handle. In the main track of the International Competition on Computational Models of Argumentation (ICCMA), the majority of algorithms that perform outstandingly adopt reduction-based approaches, including the u-toksia [9] algorithm and the pyglaf [10] algorithm.
The other category is direct approaches, which provide solutions directly for arguments on the abstract argumentation framework. These methods are typically based on labeling theory [11]. This approach was pioneered by Pollock [12] and was later expanded by Caminada [11], Vreeswijk [13], and Verheij [14]. The most prominent variant is the 3-valued labelings proposed by Caminada and Gabbay [11]. The first labeling-based algorithm framework was proposed by Doutre and Mengin [15]. The core idea of labeling-based algorithms is that, once the label of an argument is determined, it immediately affects the potential labels of its adjacent arguments.
In recent years, most of the algorithms for computing stable extensions have been based on invoking SAT solvers or other types of solvers, with relatively fewer studies employing direct approaches. In research on solving abstract argumentation problems using local search approaches, Niu [16] and M. Thimm [17] have explored the use of stochastic local search methods. However, the work of Niu et al. focuses on solving the preferred extensions, and their approach conceptually differs from the present study. They operate on the CNF representation of the problem and apply the Swcca algorithm [18]. Therefore, their approach is essentially reduction-based rather than direct. The HAYWOOD algorithm proposed by M. Thimm in 2018 is a direct approach based on heuristic information for local search [19], mainly expanding the search space through a stochastic selection mechanism [20]. Similarly, the approach proposed in this paper also adopts a direct approach.
Due to the NP-hardness of the problem, the current algorithms that accurately determine whether a stable extension exists are all exponential in time complexity. Meta-heuristic approaches, such as tabu search, attempt to find solutions to NP-hard problems in polynomial time. Although they cannot guarantee a solution or prove its absence, meta-heuristics are more efficient in solving large-scale NP-hard problems than exact algorithms.
Tabu search enhances the performance of local search by relaxing the basic rules of local search. It forbids certain previously experienced operations by establishing a tabu list based on neighborhood search. Battiti [21] and Kelly [22] proposed two early variants of tabu search, which employ strategies to drive the search away from the current solution without using objective function thresholds. Glover [23] controls the objective function by setting tabu thresholds and employing a more comprehensive diversification strategy.
In this paper, we propose a dual-neighborhood tabu search algorithm (DNTS) to compute a single stable extension and introduce a perturbation technique. The main focus of this paper is on the efficiency of the algorithms in terms of time performance, while also conducting experiments on the number of correctly solved instances, comparing the performance of different algorithms on random instances.
The paper is organized as follows. In Section 2, we introduce preliminary definitions. Section 3 introduces the main framework of the proposed DNTS algorithm. Section 4 describes details of the DNTS algorithm. Section 5 presents the experiments, including parametric and comparative experiments. Section 6 presents analyses and discussion. Section 7 concludes the paper.

2. Preliminary Definitions

2.1. Argumentation Frameworks

Definition 1.
An abstract argumentation framework is a pair AF = (A, R), where A is a finite set of arguments and R ⊆ A × A is an attack relation on A.
For two arguments a, b ∈ A, argument a is said to attack argument b if (a, b) ∈ R. A set E ⊆ A attacks b if some argument in E attacks b. {a}⁻ denotes the set of arguments attacking argument a.
In general, we depict A F as a directed graph, where vertices denote arguments and edges denote attack relationships between arguments. Figure 2 illustrates a set of abstract argumentation frameworks.
Example 1.
Argumentation frameworks: AF1 = ({a}, {}): a single argument with no attack relation; AF2 = ({a, b}, {(a, b)}): two arguments a and b, where a attacks b; AF3 = ({a, b}, {(a, b), (b, a)}): mutual attack, a attacks b and b attacks a; AF4 = ({a, b, c}, {(a, b), (b, c)}): three arguments a, b, and c, where a attacks b, b attacks c, and c is defended by a.

2.2. Stable Extension

Dung-style semantics define the sets of arguments that can jointly be accepted (extensions). A σ-extension refers to an extension under semantics σ [24].
Definition 2
(A conflict-free set). Let AF = (A, R) be an abstract argumentation framework. A set E ⊆ A is conflict-free if there do not exist x, y ∈ E such that x attacks y or y attacks x. The set of all conflict-free sets of AF is denoted by cf(AF).
Definition 3
(Acceptability of arguments). Let AF = (A, R) be an abstract argumentation framework. An argument a ∈ A is acceptable with respect to a set of arguments E ⊆ A if, for every b ∈ A, whenever b attacks a then E attacks b.
Definition 4
(An admissible set). Let AF = (A, R) be an abstract argumentation framework. A set E ⊆ A is admissible if E ∈ cf(AF) and every argument in E is acceptable with respect to E. The admissible sets are denoted by adm(AF).
Definition 5
(A complete extension). Let AF = (A, R) be an abstract argumentation framework. A set E ⊆ A is a complete extension if E ∈ adm(AF) and every argument acceptable with respect to E belongs to E. The complete extensions are denoted by com(AF).
Definition 6
(A stable extension). Let AF = (A, R) be an abstract argumentation framework. A set E ⊆ A is a stable extension if E ∈ cf(AF) and, for every a ∈ A, if a ∉ E then E attacks a. The stable extensions are denoted by stb(AF).
Definition 7
(A grounded extension). Let AF = (A, R) be an abstract argumentation framework. A set E ⊆ A is the grounded extension if E ∈ com(AF) and, for each T ∈ com(AF), E ⊆ T. The grounded extension is denoted by grd(AF).
Figure 3 shows several instances of grounded extensions. In Figure 3a, the set S = {a} is first obtained by applying the characteristic function to the empty set; it contains the initial argument a. Applying the characteristic function to S again yields the updated set S = {a, c}. A further application leaves the set unchanged at S = {a, c}, which indicates that the grounded extension {a, c} has been obtained.
Following the same procedure, the grounded extension { a } is obtained in Figure 3b. In Figure 3c, the grounded extension results in the empty set. For details of the theoretical basis and proof of this procedure, as well as details of the characteristic function, please refer to the literature [4].
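This fixpoint iteration can be sketched compactly. The code below is an illustrative implementation of iterating Dung's characteristic function F(S) = {a | every attacker of a is attacked by S} from the empty set until it stabilizes; the representation of attacks as pairs is an assumption:

```python
# Sketch: grounded extension as the least fixpoint of the characteristic
# function, computed by repeated application starting from the empty set.
def grounded(A, R):
    S = set()
    while True:
        # F(S): arguments all of whose attackers are attacked by S.
        nxt = {a for a in A
               if all(any((d, b) in R for d in S)
                      for (b, c) in R if c == a)}
        if nxt == S:
            return S
        S = nxt
```

On the chain a → b → c this yields {a}, then {a, c}, then stops, matching the Figure 3a walkthrough; on a mutual attack a ↔ b it returns the empty set, as in Figure 3c.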
There are various semantic extensions, such as the preferred extension and semi-stable extension. For further information, refer to the literature [4]. This paper specifically addresses the stable extension.

2.3. Stable Labelings

We incorporate a labeling-based algorithm in the proposed method. A labeling is a function L: A → {in, out, undec} that assigns a label to each argument to indicate its current state. In the classical 3-valued labelings approach, in signifies acceptance of the argument, out denotes rejection, and undec indicates that the argument's status is undecided.
Definition 8.
Given an AF = (A, R), a labeling L is a complete labeling if, for every a ∈ A:
1. L(a) = in iff ∀b ∈ {a}⁻: L(b) = out;
2. L(a) = out iff ∃b ∈ {a}⁻: L(b) = in;
3. L(a) = undec iff ∀b ∈ {a}⁻: L(b) ≠ in and ∃c ∈ {a}⁻: L(c) = undec.
By Definition 8, a is legally labeled in if and only if all its attackers are labeled out; a is legally labeled out if and only if at least one of its attackers is labeled in; and a is legally labeled undec if and only if it has at least one attacker labeled undec and no attacker labeled in.
It has been proved in the literature [11] that complete labeling and stable labeling are related as follows in Theorem 1.
Theorem 1.
Given an AF = (A, R) and a labeling L, the following statements are equivalent:
1. L is a complete labeling such that undec(L) = ∅;
2. L is a stable labeling.
Therefore, solving for a stable labeling relies only on the labels in and out. A stable labeling is a labeling in which every argument is legally labeled. The corresponding set in(L), i.e., the set of arguments labeled in, is then the corresponding stable extension.
Figure 4 illustrates an example of stable labelings. In the left abstract argumentation framework, the labels of arguments a, b, c, d are incorrect according to the first rule of Definition 8, and the labels of arguments f, g are incorrect according to the second rule; thus, only the label of argument e is correct. On the right-hand side, the labels of all arguments satisfy Definition 8, i.e., it is a stable labeling. The arguments b, d, f labeled in form a stable extension.
Based on Definition 8 and Theorem 1, we can summarize the overall idea of our algorithm. First, we use only the in and out labels to mark argument states. Then, we keep modifying mislabeled arguments according to the first and second rules of Definition 8 until no mislabeled arguments remain. At this point, all argument labels are legal and no argument is labeled undec, satisfying the criterion of Theorem 1, so we obtain a stable labeling and output the set of arguments labeled in.

3. Main Framework of DNTS

HAYWOOD [17] implements a local search algorithm with stochastic selection and restart operations to compute the stable extension. Inspired by HAYWOOD, this paper proposes the DNTS algorithm. The algorithm uses a local search-based meta-heuristic strategy and integrates two neighborhood move operators to extend the search space. Further, by adding a tabu strategy and a perturbation technique, the DNTS algorithm optimizes the search process by effectively escaping local optima.
We implement Definition 8 through the function definition, excluding the label undec. If an argument is labeled in, then all arguments attacking it must be labeled out. If an argument is labeled out, then at least one of the arguments attacking it must be labeled in.
The function δ determines whether argument v is incorrectly labeled under L; correctness is judged on the basis of definition. The δ function returns 0 if the label is correct and 1 if it is not.
δ(v, L) = 0, if definition(v, L) is true; 1, if definition(v, L) is false.
On this basis, the DNTS algorithm proposed in this paper minimizes the following objective function:
F(L) = Σ_{v ∈ A} δ(v, L)
The current labeling L represents a stable labeling if F ( L ) = 0 . The objective function computes the number of incorrectly labeled arguments given a known labeling L.
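The legality check, the indicator δ, and the objective F can be sketched as follows; the function and variable names are illustrative (the paper's definition function is pseudocode), with attacks again represented as (attacker, target) pairs:

```python
# Sketch of Definition 8 restricted to in/out, the error indicator δ,
# and the objective F(L) = number of incorrectly labeled arguments.
def attackers(v, R):
    return {a for (a, b) in R if b == v}

def legal(v, L, R):
    if L[v] == 'in':
        # in is legal iff every attacker is labeled out.
        return all(L[b] == 'out' for b in attackers(v, R))
    # out is legal iff at least one attacker is labeled in.
    return any(L[b] == 'in' for b in attackers(v, R))

def delta(v, L, R):
    return 0 if legal(v, L, R) else 1

def F(L, R):
    return sum(delta(v, L, R) for v in L)
```

For AF2 = ({a, b}, {(a, b)}), the labeling {a: in, b: out} gives F = 0 (a stable labeling), while {a: out, b: in} gives F = 1, since the unattacked argument a cannot legally be out.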
In this paper, we introduce a tabu search approach to search for a stable labeling. The search space of the proposed DNTS algorithm consists of all possible labelings including infeasible ones.
Algorithm 1 illustrates the main framework of our proposed DNTS algorithm, implementing a local search strategy for the SE-ST problem. It receives an input of abstract argumentation framework A F and outputs stable labeling L * if it finds one.
The algorithm begins with an initialization phase, followed by a tabu search phase.
Algorithm 1 Main framework of the DNTS algorithm
INPUT: AF(A, R)
OUTPUT: A stable labeling L*
 1: procedure LS_DNTS
 2:     perturbIter ← 0, iter ← 0
 3:     (L, mislabel) ← INITIALISATION(AF)
 4:     L* ← L
 5:     Fbest ← mislabel_number(mislabel)
 6:     repeat
 7:         if F(L*) = 0 then
 8:             return L*
 9:         end if
10:         for v ∈ mislabel do
11:             NEIGHBOREVALUATE(v, L*)
12:         end for
13:         repeat
14:             bestv ← SELECTMIN(EvaluateHeap)
15:         until tabutable(bestv) < iter
16:         L* ← EXECUTINGMOVE(bestv, iter, L*)
17:         (Fbest, mislabel) ← UPDATEMISLABEL(L*)
18:         (perturbIter, L*) ← DISTURB_RAND(perturbIter, Fbest, L*)
19:         iter ← iter + 1
20:     until termination condition
21: end procedure
In Algorithm 1, initialization takes place in lines 2 to 5. During the initialization phase, the variables iter, perturbIter, Fbest, the labelings L and L*, and the set mislabel are initialized. iter denotes the iteration count of the local search procedure, and perturbIter is the maximum iteration count before the perturbation technique is triggered. mislabel stores all mislabeled arguments, and Fbest denotes the best objective value found during the local search.
With the function INITIALISATION, we obtain the initial L and mislabel. We then assign L to L* and obtain the initial Fbest from the set mislabel.
Subsequently, the algorithm enters the tabu search phase, where the target set is sought through a series of search and flip moves. If the modified labeling L* meets the conditions for stable labeling, i.e., all labels comply with Definition 8, the arguments labeled in form a stable extension and the result is output; otherwise, the process repeats until the termination condition is met. If there are mislabeled arguments, the algorithm iterates through them, conducts neighborhood evaluations, and builds the evaluation heap EvaluateHeap (detailed in Section 4.3). The top non-tabu argument in the heap is chosen to be flipped; if the top argument is tabu, it is popped and the next one is considered. After the move, the current labeling becomes L*, and the set of mislabeled arguments mislabel and Fbest are updated accordingly. The DISTURB_RAND procedure determines whether a perturbation operation is needed and, if so, updates L* and adjusts the perturbIter value.
The termination condition can be a maximum number of iterations or a runtime limit.

4. Algorithm Details

This section presents the details of the algorithm design, i.e., the neighborhood structure, the evaluation of the neighborhood move, the tabu strategy, and the perturbation technique.

4.1. Initialisation

The design of the initialization phase is based on the relationship between the grounded extension and the stable extension: if the grounded extension is not empty, it is always included in the stable extension. Thus, identifying the grounded labeling before solving for a stable labeling is an efficient strategy. The derivation of the grounded extension starts from the initial arguments, which are not attacked by any other arguments; these are obtained by applying the characteristic function to the empty set. Dung [4] defines the grounded extension as the least fixpoint of the characteristic function. During the iterative flipping process, the labels of arguments within the grounded extension remain unchanged, and arguments attacked by the grounded extension must be labeled out. We store this information in the set Grounded.
Algorithm 2 details the initialization sub-procedure of Algorithm 1. The initial labeling L is formed by randomly assigning in or out labels to all arguments not belonging to the set Grounded. The set mislabel contains all arguments that are incorrectly labeled according to Definition 8. Grounded stores the labels of the grounded extension as well as the labels of arguments attacked by the grounded extension.
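The initialization step can be sketched as follows; the split of Grounded into grounded_in (the grounded extension) and grounded_out (arguments it attacks) and the fixed seed are illustrative assumptions:

```python
import random

# Sketch of initialization: grounded labels are fixed, every other
# argument receives a random in/out label.
def initialise(A, grounded_in, grounded_out, seed=0):
    rng = random.Random(seed)
    L = {}
    for a in A:
        if a in grounded_in:
            L[a] = 'in'       # argument in the grounded extension
        elif a in grounded_out:
            L[a] = 'out'      # argument attacked by the grounded extension
        else:
            L[a] = rng.choice(['in', 'out'])
    return L
```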
Algorithm 2 Initialisation
INPUT: AF(A, R)
OUTPUT: L, mislabel
 1: procedure INITIALISATION
 2:     Grounded ← calculateGrounded(AF)
 3:     for i = 1 ... N do
 4:         if i ∈ Grounded then
 5:             continue
 6:         end if
 7:         L(i) ← RandomLabel(in, out)
 8:     end for
 9:     mislabel ← GetMislabel(L)
10: end procedure

4.2. Definition of Neighborhood Moves

L can be represented as a set pair (IN, OUT), where IN = in(L) is the set of arguments labeled in and OUT = out(L) is the set of arguments labeled out. The neighborhood moves of the DNTS algorithm consist of two types: one moves an argument from IN to OUT, and the other moves an argument from OUT to IN.
According to Definition 8 for legal in and out labels, we only adjust mislabeled arguments. Figure 5 provides a specific example of this process. In this example, the function flip(v) flips the label of argument v: if L(v) = in, it flips to out; conversely, if L(v) = out, it flips to in.
In Figure 5, the initial state is IN = {a, b}, OUT = {c, d, e}. According to Definition 8, we identify and flip the labels of mislabeled arguments. First, neighborhood move (1) moves argument a from IN to OUT, flipping its label from in to out. Next, neighborhood move (2) moves argument e from OUT to IN, flipping its label from out to in. The following sub-section details how moves are evaluated.

4.3. Neighborhood Evaluation

To quantify the impact of different neighborhood moves on the objective function, we introduce the function P(v, L), measuring the correctness of argument v under the labeling L. This involves assessing the contribution of the move in optimizing the objective function. Based on this evaluation, the best neighborhood move is determined and selected for execution. The function P is defined as follows:
P(v, L) = CorrectSelf(v, L) + CorrectNeighbors(v, L)
CorrectSelf(v, L) = δ(v, L)
CorrectNeighbors(v, L) = Σ_{u ∈ Neighbors(v)} δ(u, L)
The evaluation function P consists of CorrectSelf and CorrectNeighbors. CorrectSelf calculates the correctness of labeling the selected argument v with L(v) under the labeling L. CorrectNeighbors calculates the correctness of the neighbors u of v when v is labeled L(v). The criterion for evaluating the quality of a neighborhood move is ΔP.
ΔP(v) = P(v, L*) − P(v, L)
The altered labeling L* arises from applying flip(v) to the current labeling L. Lower values of ΔP indicate higher-quality neighborhood moves: if ΔP is negative, the move improves the configuration; if positive, it deteriorates it. By calculating the evaluation function values for the current argument under different labels, the impact of executing the neighborhood move on the objective function can be assessed. The algorithm then selects the neighborhood move that minimizes ΔP.
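The ΔP evaluation can be sketched end to end; the helper names are illustrative, neighbors of v are taken as all arguments adjacent to v in the attack graph, and attacks are (attacker, target) pairs:

```python
# Sketch of the ΔP move evaluation: flip v's label and compare the local
# contribution P(v, ·) before and after the flip (lower ΔP = better move).
def attackers(v, R):
    return {a for (a, b) in R if b == v}

def delta(v, L, R):
    if L[v] == 'in':
        legal = all(L[b] == 'out' for b in attackers(v, R))
    else:
        legal = any(L[b] == 'in' for b in attackers(v, R))
    return 0 if legal else 1

def neighbors(v, R):
    return {a for (a, b) in R if b == v} | {b for (a, b) in R if a == v}

def P(v, L, R):
    return delta(v, L, R) + sum(delta(u, L, R) for u in neighbors(v, R))

def delta_P(v, L, R):
    L_star = dict(L)
    L_star[v] = 'out' if L[v] == 'in' else 'in'
    return P(v, L_star, R) - P(v, L, R)
```

For AF2 = ({a, b}, {(a, b)}) with both arguments labeled in, flipping b yields ΔP(b) = −1 (the move repairs b's illegal label), while flipping a yields ΔP(a) = 0.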
Taking Figure 6 as an example, this paper elucidates the key steps in problem solving. We assume the labeling L is as follows.
L(A) = in, L(B) = in, L(C) = in, L(D) = out, L(E) = in
During the analysis, based on Definition 8, we identify that the label assignments for arguments B and C are incorrect. According to the first clause of Definition 8, all arguments attacking an in-labeled argument should be labeled out; hence, the label assignments for B and C are erroneous. Next, we calculate ΔP for the incorrectly labeled arguments.
Calculate the ΔP of B:
ΔP(B) = P(B, L*) − P(B, L) = −2
Calculate the ΔP of C:
ΔP(C) = P(C, L*) − P(C, L) = 0
Argument B illustrates the calculation process. First, P(B, L) evaluates the correctness of argument B under its current label in according to Definition 8. Assessing the correctness of B itself and of its neighbors shows that both B and C are mislabeled, hence P(B, L) = 2. P(B, L*) evaluates the correctness of B under the flipped label out; at this point, the label of B is correct and the label of C is correct, resulting in P(B, L*) = 0 and thus ΔP(B) = −2.
Subsequently, based on the calculated results, a greedy selection strategy is employed to operate on the argument with the smallest Δ P value.

4.3.1. Pseudo-Code for Neighborhood Move Evaluation

Algorithm 3 illustrates the neighborhood evaluation process. This process receives an incorrectly labeled argument, v, and the current labeling L as inputs. By flipping the label of argument v, we obtain the updated labeling L * . Subsequently, an effectiveness evaluation is conducted by comparing the differences in the evaluation function between the original and the updated labeling.
The evaluation process has two steps: first, assessing the impact of the label flip on argument v itself, and then evaluating its effect on neighboring arguments. The variable ΔP records these impacts for determining the effectiveness of the flip. The key is to evaluate not only the quality of the flipped argument's own label but also its impact on neighboring arguments. The efficiency of the neighborhood evaluation directly affects the overall performance of the algorithm, while the complexity of the neighborhood structure determines the speed of the evaluation. One of the main advantages of the labeling approach is that it simplifies the interactions between neighbors, which improves execution efficiency. The time complexity of this module is O(|u| + log N), where |u| denotes the number of neighbors of v and N is the total number of arguments.
Algorithm 3 Neighborhood evaluation
INPUT: v, L
 1: procedure NEIGHBOREVALUATE
 2:     ΔP ← 0
 3:     L* ← flip(v)
 4:     ΔP ← ΔP − δ(v, L)
 5:     ΔP ← ΔP + δ(v, L*)
 6:     for u ∈ Neighbors(v) do
 7:         ΔP ← ΔP − δ(u, L)
 8:         ΔP ← ΔP + δ(u, L*)
 9:     end for
10:     HeapInsert(v, ΔP)
11: end procedure

4.3.2. Fast Neighborhood Evaluation

In the main framework of the local search described in Section 3, we begin each iteration by evaluating all incorrectly labeled arguments within the mislabeled set mislabel. However, after executing the chosen argument's label flip, only the correctness of the labels of that argument's neighbors is affected. This means that, in the next round of evaluation, we only need to assess the neighbors of the flipped argument and insert the results into the heap according to their evaluation values. In subsequent iterations, only this operation is required, without traversing all incorrectly labeled arguments, which significantly enhances search efficiency.
In the DNTS algorithm, we also use the fast neighborhood evaluation. Algorithm 4 replaces lines 10 to 12 of Algorithm 1.
Algorithm 4 Fast neighborhood evaluation
INPUT: v, L, mislabel
OUTPUT: EvaluateHeap
 1: procedure FASTNEIGHBOREVALUATE
 2:     if definition(v, L) is false and v ∉ Grounded then
 3:         AddToMislabel(v)
 4:         NEIGHBOREVALUATE(v, L)
 5:     else
 6:         MislabelRemove(v)
 7:     end if
 8:     for u ∈ Neighbors(v) do
 9:         if definition(u, L) is false and u ∉ Grounded then
10:             AddToMislabel(u)
11:             NEIGHBOREVALUATE(u, L)
12:         else
13:             MislabelRemove(u)
14:             HeapRemove(u)
15:         end if
16:     end for
17:     return EvaluateHeap
18: end procedure
Algorithm 4 illustrates the fast neighborhood move evaluation. First, we check the correctness of the label L(v) after performing the chosen neighborhood move, according to definition. If the label is incorrect and the argument is not part of the grounded extension, it is added to the set of incorrectly labeled arguments mislabel via AddToMislabel and evaluated; if the label is correct, it is removed from mislabel via MislabelRemove. The algorithm then checks the correctness of the label of each argument u neighboring v. If a label is incorrect, the argument is added to mislabel and evaluated; if correct, it is removed from mislabel and from the evaluation heap EvaluateHeap. Finally, the evaluation heap EvaluateHeap is returned. The time complexity of this module is O(|u|² + |u|·log N), and its space complexity is O(N).

4.4. Neighborhood Move Execution

During the evaluation process, the algorithm evaluates the arguments based on their labeled flips and inserts the evaluation results sequentially into a heap data structure. The top element of the heap thus represents the current optimal choice, i.e., the argument that needs to be flipped. After completing the neighborhood move selection, the argument is removed from the heap and the heap structure is readjusted to maintain the ordering by evaluation result. The neighborhood move is then executed to flip the label of that argument. Specifically, the selected argument label is flipped to out if it is in, or to in if it is out.
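Heap-based selection with tabu skipping can be sketched as follows; the expiry-iteration convention for the tabu table is an assumption inferred from the `tabutable(bestv) < iter` test in Algorithm 1, and the function name is illustrative:

```python
import heapq

# Sketch of move selection: pop candidates in ascending ΔP order and
# return the first non-tabu one. tabu_table maps an argument to the
# iteration at which its tabu status expires.
def select_min(evaluate_heap, tabu_table, iter_now):
    while evaluate_heap:
        dP, v = heapq.heappop(evaluate_heap)
        if tabu_table.get(v, -1) < iter_now:   # non-tabu: expiry already passed
            return v
    return None   # every candidate is tabu
```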
Algorithm 5 explains how a move is implemented. The top element of the heap is the argument with the smallest evaluation value ΔP, and the move is implemented by flipping that argument's label. After selecting the argument, its label is identified and the corresponding flip operation is executed.
After flipping the label of the selected argument, we make the move tabu through the function TabuAction. The tabu search algorithm uses the tabu strategy as a memory mechanism in the search process, marking explored local optima and avoiding these solutions for a subsequent period. This memory not only enhances the effect of local search but also helps prevent the search from becoming trapped in local optima, thereby facilitating the discovery of globally optimal solutions. Specifically, after each flip operation, the involved arguments are temporarily made tabu to prevent them from being flipped again in the short term. The details are given in Section 4.5.
Setting the tabu length too long may result in missing the opportunity to flip a key argument, which in turn wastes a lot of time, while setting it too short may cause the argument to be unblocked too quickly, leading the algorithm to fall into a local loop. Therefore, the tabu length has a significant impact on the performance of the algorithm, and the determination of the tabu length needs to rely on extensive experimental research.
Algorithm 5 Executing Move
INPUT: v, iter, L
OUTPUT: L
procedure ExecutingMove
    if L(v) is in then
        L(v) ← change_label(v, out)
        TabuAction(v, out, iter)
    else (L(v) is out)
        L(v) ← change_label(v, in)
        TabuAction(v, in, iter)
        for u ∈ Neighbors(v) do
            if L(u) is in and u ∉ Grounded then
                L(u) ← change_label(u, out)
            end if
        end for
    end if
    return L
end procedure
Any argument attacking an in-labeled argument should be labeled out, and in-labeled arguments cannot attack other in-labeled arguments. Therefore, after executing a neighborhood move, the algorithm further modifies the neighbors of the moved argument: when the label of an argument is flipped from out to in, all the neighboring argument labels of that argument are uniformly flipped to out. The time complexity of this module is O(|u|).
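The move execution and neighbor repair described above can be sketched as follows. The `AF` structure and function names are illustrative, the grounded-extension test is reduced to a per-argument flag, and neighbors are taken as attack-adjacent arguments in either direction; none of this is taken verbatim from the authors' code.

```c
#include <assert.h>

#define MAXV 64

enum Label { OUT = 0, IN = 1 };

/* Toy stand-in for the paper's graph structure. */
typedef struct {
    int n;
    int attacks[MAXV][MAXV]; /* attacks[u][v] = 1 iff u attacks v */
    int label[MAXV];
    int grounded[MAXV];      /* 1 iff the argument is fixed by the grounded extension */
} AF;

/* Flip v's label; when v goes from out to in, force every in-labeled
 * neighbour that is not protected by the grounded extension to out
 * (cf. Algorithm 5). Runs in O(|u|) over v's neighbourhood. */
static void executing_move(AF *af, int v) {
    if (af->label[v] == IN) {
        af->label[v] = OUT;
        return;
    }
    af->label[v] = IN;
    for (int u = 0; u < af->n; u++) {
        if (u == v) continue;
        if ((af->attacks[u][v] || af->attacks[v][u]) &&
            af->label[u] == IN && !af->grounded[u])
            af->label[u] = OUT;
    }
}
```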

4.5. Adaptive Tabu Strategy

In order to enlarge the search space and effectively avoid falling into local optima, this algorithm introduces a tabu search strategy.
We denote by tl1 the tabu length for an argument flipping from in to out and by tl2 the tabu length for an argument flipping from out to in. In the presence of a large number of mislabeled arguments, the algorithm may need more iterations to reach a local optimum; conversely, when there are fewer mislabeled arguments, fewer iterations may suffice. Therefore, a regulation mechanism that determines the tabu length from the number of mislabeled arguments is a more rational choice. Thus, the tabu length parameters tl1 and tl2 are adjusted based on the current number of mislabeled arguments, with specific details provided in the parameter introduction in Section 5.1.
Algorithm 6 illustrates the tabu action. First, we identify whether the executed neighborhood move is a flip from out to in or from in to out, and then we record in the tabu table tabutable how long the argument must remain tabu, i.e., the tabu length.
Algorithm 6 TabuAction
INPUT: v, move, iter
OUTPUT: tabutable
procedure TabuAction
    if move is out then
        tabutable(v) ← iter + tl1
    else (move is in)
        tabutable(v) ← iter + tl2
    end if
    return tabutable
end procedure
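A minimal sketch of this tabu bookkeeping is given below, assuming the adaptive length is the current mislabeled count divided by tl (consistent with the Section 5.1 example, where 20 mislabeled arguments and tl1 = 4 give a tabu length of 5); the exact scaling used by DNTS is a parameter choice, and the function names here are ours.

```c
#include <assert.h>

#define MAXV 1024

/* tabutable[v] holds the first iteration at which v may be flipped again. */
static int tabutable[MAXV];

/* Adaptive tabu length: one tl-th of the current number of mislabeled
 * arguments (assumed scaling; matches the example 20 / 4 = 5). */
static int tabu_length(int mislabeled, int tl) {
    return mislabeled / tl;
}

/* Record the tabu after flipping v at iteration `iter` (cf. Algorithm 6);
 * `to_out` selects tl1 (in -> out) or tl2 (out -> in). */
static void tabu_action(int v, int iter, int mislabeled,
                        int to_out, int tl1, int tl2) {
    tabutable[v] = iter + tabu_length(mislabeled, to_out ? tl1 : tl2);
}

/* A move on v is forbidden while the current iteration is below its entry. */
static int is_tabu(int v, int iter) {
    return iter < tabutable[v];
}
```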

4.6. Perturbation Technique

The DNTS algorithm proposed in this paper applies a perturbation technique when a prolonged search fails to improve the quality of the labeling significantly.
The technique periodically applies a random perturbation to the labeling; its triggering condition depends on the search history and the perturbation period (perturbPeriodt). Its purpose is to free the current solution from a local optimum and explore more of the solution space. The perturbation consists of a series of randomly chosen moves: specifically, it flips some of the arguments' out labels to in, with the parameter details introduced in Section 5.1.
Algorithm 7 gives the pseudo-code for DISTURB_RAND, where Fbest is the best objective value reached so far during the iteration process. After executing a label flip, the value of perturbIter is increased if the current objective value F(L) fails to fall below Fbest; otherwise, perturbIter is reset to 0 to reflect the discovery of a new best labeling.
Algorithm 7 Adaptive perturbation
INPUT: Fbest, L, perturbPeriodt
OUTPUT: L*
procedure DISTURB_RAND
    if F(L) ≥ Fbest then
        perturbIter ← perturbIter + 1
    else
        perturbIter ← 0
    end if
    if perturbIter > perturbPeriodt then
        L* ← disturb_rand(L)
    end if
    return L*
end procedure
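The trigger logic of Algorithm 7 and the DisturbLevel-scaled random flips can be sketched as follows; `should_perturb` is our illustrative name for the counter update, and the sampling scheme inside `disturb_rand` is an assumption (the paper only states that a fraction of out labels is flipped to in).

```c
#include <assert.h>
#include <stdlib.h>

enum Label { OUT = 0, IN = 1 };

static int perturbIter = 0;

/* Track stagnation: reset the counter on a new best objective value
 * (fewer mislabeled arguments), otherwise count another non-improving
 * step; trigger once the counter exceeds the perturbation period. */
static int should_perturb(int F, int *Fbest, int perturbPeriod) {
    if (F < *Fbest) { *Fbest = F; perturbIter = 0; }
    else perturbIter++;
    return perturbIter > perturbPeriod;
}

/* Randomly flip a DisturbLevel fraction of the arguments from out to in;
 * e.g. level 0.01 on 4000 arguments touches about 40 of them. */
static void disturb_rand(int *label, int n, double level) {
    int k = (int)(n * level);
    while (k-- > 0) {
        int v = rand() % n;
        if (label[v] == OUT) label[v] = IN;
    }
}
```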

5. Experiment

The experimental design of this paper is divided into two parts: parametric and comparative experiments. In the parametric experiment phase, the evaluation metric was solely the running time, whereas in the comparative experiment phase both the running time and the number of correctly solved instances were evaluated. Both phases used the AFGen benchmark generator to produce different test instances. The number of correctly solved instances is defined as the number of instances successfully solved within the time limit of 600 s. According to the ICCMA evaluation rules, returning an incorrect result or failing to solve an instance within 600 s is considered a failure.
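Whether a reported labeling is correct follows directly from the definition: the in-labeled set must be conflict-free and must attack every argument outside it. A sketch of such a verifier (our own helper for illustration, not part of any solver) is:

```c
#include <assert.h>

#define MAXV 64

/* Verify that the in-labeled set is a stable extension: no attack between
 * two in arguments (conflict-freeness), and every out argument is attacked
 * by some in argument. */
static int is_stable(int n, int attacks[][MAXV], const int in_set[]) {
    for (int u = 0; u < n; u++) {
        for (int v = 0; v < n; v++)
            if (in_set[u] && in_set[v] && attacks[u][v])
                return 0;             /* conflict inside the extension */
        if (!in_set[u]) {
            int attacked = 0;
            for (int v = 0; v < n; v++)
                if (in_set[v] && attacks[v][u]) { attacked = 1; break; }
            if (!attacked) return 0;  /* out argument left unattacked */
        }
    }
    return 1;
}
```

Such a check runs in O(N^2) on an attack matrix, so validating a candidate solution is cheap relative to the search itself.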
The DNTS algorithm is implemented in C and tested on a desktop computer equipped with an Intel(R) Xeon(R) W-2235 CPU at 3.80 GHz, with 16.0 GB of RAM. The operating system is Ubuntu 18.04.
In this paper, we use the AFGen benchmark generator provided by ICCMA to randomly generate experimental instances of different sizes; the largest instance contains up to 650,000 edges. To ensure the diversity of the instances, each directed graph (AF) is checked for isomorphism with Nauty to exclude repeated instances. The generated instances consist of vertex sets and edge sets.
AFGen is not the only available generator; ICCMA also provides other instance generators as well as the instances used in its competitions. However, the AFGen generator produces the most randomized instances. Since this paper mainly tests the performance of the algorithm on randomized instances, we chose AFGen to generate the test instances.

5.1. Calibration

In this section, preliminary experiments are conducted to determine the key parameter values of the DNTS algorithm.
For the parametric experiments, 30 instances were randomly generated for testing. The number of arguments ranged from 500 to 4000, and the number of edges ranged from 1000 to 250,000. By evaluating a large number of parameter combinations, 64 sets of parameters were finally selected for precise efficacy testing. To ensure reliable results, five tests were conducted for each parameter set. The best, worst, and average running times in seconds were then computed from these test results to evaluate the performance.
We tested the following parameters:
DisturbPeriod: perturbation periods are divided into two types, dynamic and static. The dynamic perturbation periods are |A| and 14|A|, where |A| denotes the total number of arguments in the instance; 14|A| is the restart parameter setting of the HAYWOOD algorithm. Although this parameter hardly ever triggers the mechanism it controls, it was shown to have the most significant effect in the HAYWOOD algorithm; hence, this experiment adopts the equivalent value as a perturbation period for testing. The static perturbation periods are 2000 and 8000.
DisturbLevel: the perturbation level is set to four values: 0.01, 0.0025, 0.00125, and 0.001. For example, when the perturbation level is 0.01 and the total number of arguments is 4000, 40 of the 4000 arguments are perturbed each time the perturbation condition is met.
tl: the tabu length is set to four combinations: tl1 = 4, tl2 = 2; tl1 = 4, tl2 = 4; tl1 = 6, tl2 = 4; and tl1 = 6, tl2 = 6. For example, if the number of mislabeled arguments is 20 and tl1 = 4, tl2 = 4, the tabu length is set to one-quarter of the number of mislabeled arguments, i.e., 5.
Note that this experiment does not guarantee the optimal values of the parameters, and the optimal scheme may vary from one benchmark to another.
The analysis of data from Table 1 reveals that the DNTS algorithm demonstrates relative stability in its best values. However, it exhibits significant fluctuations in its worst values. Notably, the average values tend to be closer to the best values in the majority of cases.
Among the four sets of perturbation period settings, the 14 | A | setting performs the worst. In the four sets of perturbation level settings, we find that long perturbation periods need to be matched with large perturbation levels, and short perturbation periods need to be matched with small perturbation levels. The tabu length effect is more randomized and can show good or bad results with different parameters. Based on these observations, we screened a set of more stable and efficient parameters for subsequent comparison experiments.
DNTS: DisturbPeriod = |A|, DisturbLevel = 0.00125, tl1 = 4, tl2 = 4

5.2. Algorithm Comparison

This section evaluates the performance of the DNTS algorithm against other algorithms in terms of running time and number of correctly solved instances through comparative experiments.
The compared algorithms include u-toksia [9] and HAYWOOD [17].
Other SAT-based algorithms, such as pyglaf [10] and dpdb [25], excel on structured instances but perform poorly on random instances. Consequently, u-toksia was selected for comparison. As a notable representative in ICCMA competitions, u-toksia has participated in several consecutive events and achieved overall first place in the main track of the 2019 and 2021 ICCMA competitions. Although u-toksia did not achieve the top rank in the 2021 SE-ST track, where dpdb took the lead, its performance on random instances significantly outperforms higher-ranked algorithms such as dpdb. Regarding HAYWOOD, multiple versions exist, and this experiment selects the best-performing HAYWOOD-4 version for comparison.

5.2.1. Comparison on Algorithm Efficiency

In this sub-section, five groups of instances are generated for the experiment, each containing 20 instances with the same number of vertices but different numbers of edges, totaling 100 instances. The numbers of arguments were set to 500, 1000, 2000, 3000, and 4000, and the edge ranges for the five groups were 4000 to 10,000, 8000 to 50,000, 20,000 to 160,000, 60,000 to 350,000, and 240,000 to 640,000, respectively. In solving random instances, the solving speed is not exactly proportional to the instance size, but the general trend is that larger instances require longer solving times.
The purpose of establishing these benchmarks is to evaluate the time efficiency of the algorithms. It is ensured that every reference algorithm can find a stable extension for each instance, allowing for a fair comparison. Each algorithm is run 10 times to record the best, worst, and average run times; additionally, the average total time (att) over all instances is computed for each of these three categories. All timings are reported in seconds.
If an algorithm fails to find a stable extension within 600 s, despite one existing, it reports that it cannot find a stable extension for this instance. This is in the nature of local-search-based meta-heuristics: failing to find a feasible solution does not guarantee that none exists.
Based on the results presented in Table 2, the u-toksia algorithm performs particularly well in the performance evaluation of the smallest-scale random instances generated for 500 arguments. The time to solve the instances is mostly measured in microseconds, showing its significant advantage in handling small-sized instances. Although the DNTS algorithm is not optimal in terms of speed, it still manages to solve small-scale instances in a very short time, proving its utility in dealing with such problems.
Based on the results presented in Table 3, in the performance evaluation of the random instances generated with 1000 arguments, the running times of the DNTS and HAYWOOD algorithms are close to each other in the majority of instances and are generally lower than those of the u-toksia algorithm. In particular, the DNTS algorithm uses less time on some of the more time-consuming instances (e.g., V1000_17, V1000_19). Overall, for this group of instances, the DNTS algorithm handles complex instances better and is more time efficient than the u-toksia and HAYWOOD algorithms.
Based on the results presented in Table 4, the performance evaluation for random instances generated with 2000 arguments shows that the DNTS algorithm has significantly lower best values than the HAYWOOD and u-toksia algorithms for most instances, indicating that DNTS is the fastest on these instances. However, the DNTS algorithm performs poorly in terms of worst values. DNTS is nevertheless better than HAYWOOD and u-toksia in terms of the average total time (att), indicating that the worst values do not occur frequently.
The overall performance of DNTS is similar to that of HAYWOOD in most cases, and significantly better than HAYWOOD on V2000_2, V2000_8, and V2000_20. Both outperform the u-toksia algorithm in most cases in terms of best and average values, but their worst values are not as stable as those of u-toksia.
Based on the results presented in Table 5 and Table 6, in the performance evaluation for randomly generated instances with 3000 and 4000 arguments, the DNTS algorithm shows a clear advantage in the best values over HAYWOOD and u-toksia in most instances. DNTS exhibits significant fluctuations in its worst values. In terms of average values, DNTS performs similarly to HAYWOOD in most instances but significantly outperforms HAYWOOD in some complex instances with longer solution times (such as V3000_10, V3000_17, V3000_18, V4000_3, V4000_6, V4000_7). The superior performance of DNTS in complex instances results in a better average total time (att), compared with HAYWOOD and u-toksia.
Based on the above experimental results, it can be observed that the DNTS algorithm significantly outperforms the HAYWOOD and u-toksia algorithms in terms of the best values of the solved instances. However, the large gap between its best and worst values reveals a lack of stability in the DNTS algorithm. The DNTS algorithm performs more strongly than HAYWOOD on some complex, time-consuming instances, which leads to a consistent advantage of DNTS over HAYWOOD and u-toksia in terms of the overall average time (att). Therefore, the DNTS algorithm can be considered an effective optimization of the HAYWOOD algorithm that raises the upper bound of performance on the solved instances.

5.2.2. Comparison on Number of Solved Instances

u-toksia, as an exact algorithm, does not participate in the experiments of this section. This section compares the performance differences between the DNTS algorithm and the HAYWOOD algorithm in processing random instances, both of which are heuristic algorithms. For this purpose, we generated test instances different from those used in the previous experiments and ensured that each instance had a stable extension. The instances were divided into four groups based on the number of arguments, with 50 instances per group, totaling 200 instances for the experiment, aiming to assess the solving efficiency of each algorithm for instances of various sizes. To ensure the reliability of the experimental results, each group of instances was tested three times. In Table 7, best represents the maximum number of instances solved, worst represents the minimum number of instances solved, and average represents the average number of instances solved in three runs.
Based on the experimental results in Table 7, it can be observed that the HAYWOOD and DNTS algorithms successfully solve essentially all the instances generated with 1000 and 2000 arguments. However, the number of instances solved by both algorithms decreases as the instance size increases. Nevertheless, the DNTS algorithm still outperforms HAYWOOD in terms of the number of solved instances, and this advantage becomes increasingly significant as the instance size grows.

6. Discussion and Analysis

To further explore the impact of different initialization schemes, tabu strategies, and perturbation techniques on algorithm performance, this section introduces three variants of the original DNTS.
For the time comparison experiments, we selected the same 50 instances as in Section 5.2.1, and each instance was executed five times to ensure the reliability of the results. For the experiment evaluating the number of correctly solved instances, the same 200 instances from Section 5.2.2 were used, with each instance run three times.

6.1. The Importance of Initializing Solutions

The DNTS algorithm generates initial solutions by randomly labeling every argument in or out. We name the algorithm variant that labels all arguments out in the initialization phase DNTS-OUT. The best, worst, and average running times in seconds are compared for the two initialization strategies. The goal of this experiment is to evaluate the effect of different initialization strategies on the running time.
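The two initialization strategies compared here amount to the following (an illustrative sketch; function names are ours):

```c
#include <stdlib.h>

enum Label { OUT = 0, IN = 1 };

/* DNTS initialization: each argument gets a uniformly random label. */
static void init_random(int *label, int n) {
    for (int i = 0; i < n; i++)
        label[i] = rand() % 2;
}

/* DNTS-OUT variant: every argument starts as out. */
static void init_all_out(int *label, int n) {
    for (int i = 0; i < n; i++)
        label[i] = OUT;
}
```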
We use the same 50 instances as in Section 5.2.1. From the experimental results in Table 8, we see that both the DNTS and DNTS-OUT algorithms are able to produce results quickly.
For the small instances of 500 and 1000 arguments, the difference in running time between the DNTS and DNTS-OUT algorithms is insignificant, and both of them are able to give quick solution results. However, as the instance size increases, i.e., for 2000, 3000, and 4000 arguments, the DNTS algorithm shows a more significant advantage over DNTS-OUT in terms of running time, and this advantage increases with the number of arguments.
We use the same 200 instances as in Section 5.2.2. The experimental results in Table 9 show that, for smaller instances, both algorithms successfully solve all 50 instances. However, when facing the larger instances with 3000 and 4000 arguments, both algorithms sometimes fail or time out, but the DNTS algorithm still solves more instances than DNTS-OUT.
Combining the above experimental results, the use of the random label initialization strategy shows significant advantages over the initialization approach of marking all labels as out. These advantages are evident both in terms of running time and the number of successfully solved instances.

6.2. The Importance of Tabu Strategy

In this section, we test the impact of the tabu strategy on the algorithm.
The algorithm version that does not execute tabu actions is named DNTS-NT.
We use the same 50 instances as in Section 5.2.1. The results in Table 10 show that both the DNTS and DNTS-NT algorithms produce fast results for small instances. For larger instances with 2000, 3000, and 4000 arguments, both algorithms show similar solution times in most cases. However, for complex instances that take longer to solve, particularly those with 4000 arguments, the DNTS algorithm significantly outperforms the DNTS-NT algorithm. This advantage is mainly attributed to its use of the tabu search mechanism.
We use the same 200 instances as in Section 5.2.2. The results in Table 11 show that, when processing instances with 3000 and 4000 arguments, both the DNTS and DNTS-NT algorithms encounter instances they cannot solve. However, the DNTS algorithm still successfully solves more instances than DNTS-NT.
In summary, while the DNTS and DNTS-NT algorithms perform similarly on small or simple instances, the DNTS algorithm exhibits significant advantages on more complex and time-consuming instances. This is due to its effective use of the tabu search mechanism, which notably reduces the time required to find a solution.

6.3. The Importance of Perturbation Techniques

This sub-section tests the effect of perturbation techniques on algorithm performance. The algorithm variant that does not implement perturbation techniques is defined as DNTS-NP.
We use the same 50 instances as in Section 5.2.1. The results in Table 12 show little difference between the performance of the DNTS and DNTS-NP algorithms in most instances. However, on some higher-complexity instances, the DNTS algorithm has a significantly shorter solution time than DNTS-NP. In particular, DNTS-NP is prone to extremely poor results on certain instances (e.g., V1000_3, V1000_7, V1000_10, V2000_1, V2000_2), where it can exhibit extremely long running times or fail to solve the instance.
We use the same 200 instances as in Section 5.2.2. The results in Table 13 indicate that both algorithms successfully produce solutions for small instances. For large instances, the DNTS algorithm solves more instances and shows better performance than DNTS-NP.
Comprehensive analyses show that DNTS and DNTS-NP perform close to each other in most cases. However, the DNTS-NP algorithm is prone to produce extremely poor results in some instances. This result emphasizes the importance of perturbation techniques in improving the stability of the DNTS algorithm and its ability to escape from locally optimal solutions.

6.4. Analysis of Algorithm Performance with Different Graph Type

We evaluated the DNTS algorithm's iterations per second under different graph types. We chose 32 instances from Section 5.2.1, excluding the 500-argument group. Based on the size of these instances, we categorized them into sparse and dense graphs.
In randomized instances, the main factor affecting the iterative performance of the algorithm is the instance size, since the graph has little structure. From Figure 7, we find that both plots have distinct peaks and troughs; the peaks and troughs in Figure 7a are more numerous and of greater magnitude than those in Figure 7b. The higher the density of the graph, the lower the iterative performance of the algorithm. In addition, we analyzed the instances at indexes 1 and 2 in Figure 7a and indexes 1 and 3 in Figure 7b to examine the number of stable extensions.
Analyzing these four instances, we find that the number of arguments in the stable extension of instance 1 is higher than in instance 2 of Figure 7a and instance 3 of Figure 7b, which potentially affects the algorithm's iteration speed. Additionally, the graph's ring structure may impact performance: the algorithm's search becomes extremely difficult when the graph contains too many rings or nested rings.
By analyzing the iterative performance on sparse and dense graphs, it can be seen that the DNTS algorithm has the most stable and superior performance on dense graphs, especially on medium and large instances. The DNTS-NT algorithm, on the other hand, performs outstandingly on some sparse-graph instances, but its overall stability is not as good as that of the DNTS algorithm.

7. Conclusions

This paper introduces a dual-neighborhood tabu search (DNTS) algorithm to solve the single stable extension (SE-ST) problem within abstract argumentation frameworks. It implements a dual-neighborhood strategy incorporating a fast neighborhood evaluation method. In addition, by introducing techniques such as tabu and perturbation, this algorithm is able to jump out of the local optimum, which significantly improves the performance of the algorithm. The DNTS algorithm was experimentally validated on a series of random instances generated by the AFGen generator provided by ICCMA, demonstrating significant effectiveness. Comparative results with the u-toksia and HAYWOOD algorithms reveal that DNTS achieves superior best run times in most of the randomly generated instances, though it slightly lags in worst-case scenarios. Nevertheless, the efficient performance of DNTS in processing complex instances enables it to surpass both u-toksia and HAYWOOD in overall performance, evidencing that DNTS has raised the upper limit of solving efficiency, yet there is still room for improvement in terms of stability.
As the size of instances increases, the number of instances correctly solved by the DNTS algorithm shows a declining trend, but it still outperforms the HAYWOOD algorithm, which is of the same class of local search algorithms. This experimental result not only confirms the feasibility and advantages of the DNTS algorithm in computing stable extensions within abstract argumentation frameworks but also highlights the need for further optimization in terms of the number of solutions the algorithm can find.
Future work will explore methods to enhance the stability of the DNTS algorithm and attempt to apply it to solving other extension problems such as complete extension, preferred extension, etc., to fully assess its application potential and scalability.

Author Contributions

X.W. conceived and developed the main ideas of the study; X.H. conducted all the experiments and wrote the initial draft of the manuscript under the supervision of Y.K. and X.W.; C.X. provided oversight and guidance to ensure the overall quality of the study; J.S. organized the statistics of the experiments. M.L. provided insightful ideas during the discussion. All authors contributed to the refinement of the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the National Natural Science Foundation of China (Grant No. 62201203) and the Key R&D Program of Hubei Provincial Department of Science and Technology (Grant No. ZZCXHZYF2022000815).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The instances and code used in this paper can be found at https://github.com/trues-d/dnts (accessed on 28 Jun 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Besnard, P.; Doutre, S. Characterization of Semantics for Argument Systems. 2004, pp. 183–193. Available online: https://cdn.aaai.org/KR/2004/KR04-021.pdf (accessed on 1 March 2024).
  2. Amgoud, L.; Maudet, N.; Parsons, S. Modelling dialogues using argumentation. In Proceedings of the Fourth International Conference on MultiAgent Systems, Boston, MA, USA, 10–12 July 2000; IEEE: Piscataway, NJ, USA, 2000; pp. 31–38.
  3. Rahwan, I.; Zablith, F.; Reed, C. Laying the foundations for a world wide argument web. Artif. Intell. 2007, 171, 897–921.
  4. Dung, P.M. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 1995, 77, 321–357.
  5. Atkinson, K.; Baroni, P.; Giacomin, M.; Hunter, A.; Prakken, H.; Reed, C.; Simari, G.; Thimm, M.; Villata, S. Towards artificial argumentation. AI Mag. 2017, 38, 25–36.
  6. Cerutti, F.; Gaggl, S.A.; Thimm, M.; Wallner, J. Foundations of implementations for formal argumentation. IfCoLog J. Logics Their Appl. 2017, 4, 2623–2705.
  7. Charwat, G.; Dvořák, W.; Gaggl, S.A.; Wallner, J.P.; Woltran, S. Methods for solving reasoning problems in abstract argumentation–a survey. Artif. Intell. 2015, 220, 28–63.
  8. Dvořák, W.; Järvisalo, M.; Wallner, J.P.; Woltran, S. Complexity-sensitive decision procedures for abstract argumentation. Artif. Intell. 2014, 206, 53–78.
  9. Niskanen, A.; Järvisalo, M. Algorithms for dynamic argumentation frameworks: An incremental SAT-based approach. In ECAI 2020; IOS Press: Amsterdam, The Netherlands, 2020; pp. 849–856.
  10. Alviano, M. Argumentation reasoning via circumscription with Pyglaf. Fundam. Inform. 2019, 167, 1–30.
  11. Caminada, M.W.; Gabbay, D.M. A logical account of formal argumentation. Stud. Log. 2009, 93, 109–145.
  12. Pollock, J.L. Cognitive Carpentry: A Blueprint for How to Build a Person; MIT Press: Cambridge, MA, USA, 1995.
  13. Vreeswijk, G. An algorithm to compute minimally grounded and admissible defence sets in argument systems. In Proceedings of the COMMA, Liverpool, UK, 11–12 September 2006; pp. 109–120.
  14. Verheij, B. A Labeling Approach to the Computation of Credulous Acceptance in Argumentation. In Proceedings of the IJCAI, Hyderabad, India, 6–12 January 2007; Volume 7, pp. 623–628.
  15. Doutre, S.; Mengin, J. Preferred extensions of argumentation frameworks: Query, answering, and computation. In Proceedings of the International Joint Conference on Automated Reasoning, Siena, Italy, 18–22 June 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 272–288.
  16. Niu, D.; Liu, L.; Lü, S. New stochastic local search approaches for computing preferred extensions of abstract argumentation. AI Commun. 2018, 31, 369–382.
  17. Thimm, M. Stochastic Local Search Algorithms for Abstract Argumentation Under Stable Semantics. In Proceedings of the COMMA, Warsaw, Poland, 12–14 September 2018; pp. 169–180.
  18. Min, W.; Liu, J.; Zhang, S. Sparse weighted canonical correlation analysis. Chin. J. Electron. 2018, 27, 459–466.
  19. Selman, B.; Mitchell, D.G.; Levesque, H.J. Generating hard satisfiability problems. Artif. Intell. 1996, 81, 17–29.
  20. Selman, B.; Kautz, H.A.; Cohen, B. Local search strategies for satisfiability testing. Cliques Color. Satisf. 1993, 26, 521–532.
  21. Battiti, R.; Tecchiolli, G. The reactive tabu search. ORSA J. Comput. 1994, 6, 126–140.
  22. Kelly, J.P.; Laguna, M.; Glover, F. A study of diversification strategies for the quadratic assignment problem. Comput. Oper. Res. 1994, 21, 885–893.
  23. Glover, F.; Laguna, M. Tabu Search; Springer: Berlin/Heidelberg, Germany, 1998.
  24. Baroni, P.; Caminada, M.; Giacomin, M. An introduction to argumentation semantics. Knowl. Eng. Rev. 2011, 26, 365–410.
  25. Fichte, J.K.; Hecher, M.; Gorczyca, P.; Dewoprabowo, R. A-Folio DPDB—System Description for ICCMA 2021. 2021. Available online: https://argumentationcompetition.org/2021/downloads/a-folio-dpdb.pdf (accessed on 1 April 2024).
Figure 1. Example of a stable extension, with the stable extension marked in red.
Figure 2. Example of abstract argumentation frameworks.
Figure 3. Examples of grounded extensions. Directed edges represent attack relations. (a) A defense relationship, in which a defends c by attacking c's attacker; (b) a self-attack; (c) a mutual attack.
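The grounded extension shown in these examples can be computed as the least fixed point of the characteristic function F(S), the set of arguments defended by S, iterated from the empty set. A minimal Python sketch follows; the function name and example frameworks are illustrative, not the paper's implementation.

```python
def grounded(arguments, attacks):
    """Least fixed point of F(S) = {a : every attacker of a is attacked by S}."""
    attackers = {a: set() for a in arguments}
    for (x, y) in attacks:
        attackers[y].add(x)
    s = set()
    while True:
        # a is defended by S if each of its attackers is counter-attacked by S
        defended = {a for a in arguments
                    if all(any((d, att) in attacks for d in s)
                           for att in attackers[a])}
        if defended == s:
            return s
        s = defended

# Case (a): b attacks c, a attacks b, so a defends c
print(sorted(grounded({"a", "b", "c"}, {("a", "b"), ("b", "c")})))  # ['a', 'c']
```

For case (b), a self-attacking argument is never defended, and for case (c), neither side of a mutual attack enters the grounded extension, so both yield the empty set.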
Figure 4. Stable labeling.
Figure 5. Example of a neighborhood move.
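The paper's dual-neighborhood moves, fast evaluation, and perturbation are not reproduced here; as a generic sketch of the underlying local-search idea, the following Python code performs single-argument flip moves under a tabu restriction, minimizing a violation count that reaches zero exactly on stable extensions. All names and parameter values (tenure, iteration budget, the violation scoring) are illustrative assumptions, not the DNTS algorithm itself.

```python
import random

def violations(arguments, attacks, s):
    # cost = internal conflicts + unattacked arguments outside S; 0 means stable
    conflicts = sum(1 for (a, b) in attacks if a in s and b in s)
    attacked = {b for (a, b) in attacks if a in s}
    uncovered = sum(1 for x in arguments if x not in s and x not in attacked)
    return conflicts + uncovered

def tabu_flip_search(arguments, attacks, iters=1000, tenure=5, seed=0):
    rng = random.Random(seed)
    s = set(a for a in arguments if rng.random() < 0.5)  # random initial subset
    tabu = {}  # argument -> last iteration at which flipping it is still forbidden
    for it in range(iters):
        if violations(arguments, attacks, s) == 0:
            return s
        best, best_cost = None, None
        for a in arguments:
            if tabu.get(a, -1) >= it:
                continue  # move is tabu this iteration
            s ^= {a}  # tentatively flip membership of a
            c = violations(arguments, attacks, s)
            s ^= {a}  # undo
            if best_cost is None or c < best_cost:
                best, best_cost = a, c
        if best is None:
            continue  # all moves tabu; wait for tenures to expire
        s ^= {best}
        tabu[best] = it + tenure  # forbid re-flipping for `tenure` iterations
    return None  # no stable extension found within the budget
```

On the small framework used in the earlier examples (a attacks b, b attacks c), this search quickly reaches the stable extension {a, c}; the tabu tenure prevents immediately undoing a flip, which is what lets the search escape shallow local optima.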
Figure 6. Example of abstract argumentation frameworks.
Figure 7. The evaluation of iterations per second with the growth of the problem size. The x-axis represents the increasing size of instances from left to right. (a) Experimental results for sparse graphs; (b) experimental results for dense graphs.
Table 1. Experimental results for the calibration.

DisturbPeriod  DisturbLevel  tl1  tl2  Best    Worst    Average
|A|            0.01          4    2    498.37  713.4    609.092
|A|            0.01          4    4    783.82  1118.67  884.91
|A|            0.01          6    4    526.53  907.49   727.28
|A|            0.01          6    6    572.59  1119.43  763.35
|A|            0.0025        4    2    748.61  869.94   821.38
|A|            0.0025        4    4    537.06  1034.21  759.91
|A|            0.0025        6    4    625.05  974.94   750.57
|A|            0.0025        6    6    598.04  782.81   655.49
|A|            0.00125       4    2    887.08  1007.93  929.67
|A|            0.00125       4    4    491.88  546.52   527.54
|A|            0.00125       6    4    530.28  1107.77  822.3
|A|            0.00125       6    6    560.85  691.76   619.64
|A|            0.001         4    2    961.31  1328.85  1049.92
|A|            0.001         4    4    578.56  942.9    842.42
|A|            0.001         6    4    867.07  901.54   887.32
|A|            0.001         6    6    885.43  1310.33  1064.99
14|A|          0.01          4    2    493.36  1098.75  686.89
14|A|          0.01          4    4    598     2138.62  1070.86
14|A|          0.01          6    4    563.31  931.99   699.97
14|A|          0.01          6    6    571.73  1378.23  857.24
14|A|          0.0025        4    2    753.85  1094.87  870.79
14|A|          0.0025        4    4    667.62  1402.25  976.12
14|A|          0.0025        6    4    559.07  970.37   659.72
14|A|          0.0025        6    6    517.71  1248.42  834.15
14|A|          0.00125       4    2    875.8   1182.45  982.33
14|A|          0.00125       4    4    613.45  2164.91  1158.9
14|A|          0.00125       6    4    575.3   638.43   606.43
14|A|          0.00125       6    6    539.9   1744.79  893.24
14|A|          0.001         4    2    839.75  1304.88  1015.3
14|A|          0.001         4    4    897.97  1269.52  1033.56
14|A|          0.001         6    4    892.8   1132.67  1002.72
14|A|          0.001         6    6    936.85  1564.24  1130.73
2000           0.01          4    2    520.68  726.94   622.13
2000           0.01          4    4    672.81  1289.86  902.94
2000           0.01          6    4    589.02  887.46   665.27
2000           0.01          6    6    629.85  1274.51  822.14
2000           0.0025        4    2    804.39  1074.58  919.29
2000           0.0025        4    4    679.78  1011.3   835.49
2000           0.0025        6    4    895.34  1076.88  945.9
2000           0.0025        6    6    585.87  1102.71  784.46
2000           0.00125       4    2    838.07  1152.28  1018.12
2000           0.00125       4    4    576.31  1133.41  792.79
2000           0.00125       6    4    515.6   824.22   625.21
2000           0.00125       6    6    549.93  866.76   759.66
2000           0.001         4    2    893.92  1051.53  977.36
2000           0.001         4    4    921.28  1115.36  1053.87
2000           0.001         6    4    901.14  1106.72  983.8
2000           0.001         6    6    850.88  1047.12  1033.94
8000           0.01          4    2    530.02  787.72   687.41
8000           0.01          4    4    603.07  1116.43  838.08
8000           0.01          6    4    522.51  606.95   543.14
8000           0.01          6    6    553.83  1057.55  786.53
8000           0.0025        4    2    883.28  943.8    916.21
8000           0.0025        4    4    525.61  808.93   628.24
8000           0.0025        6    4    557.59  678.45   610.19
8000           0.0025        6    6    593.92  683.44   649.73
8000           0.00125       4    2    855.89  1017.46  934.42
8000           0.00125       4    4    583.19  881.29   722.35
8000           0.00125       6    4    509.32  810.09   622.09
8000           0.00125       6    6    839.9   1222.42  945.88
8000           0.001         4    2    844.47  1061.15  953.9
8000           0.001         4    4    931.3   1162.98  989.34
8000           0.001         6    4    854.5   1029.2   962.8
8000           0.001         6    6    814.32  1347.12  1033.09
Table 2. Experimental results for instances of 500 vertices.

Instance  DNTS (Best/Worst/Avg)  HAYWOOD (Best/Worst/Avg)  u-toksia (Best/Worst/Avg)
V500_1    0.07/0.12/0.1          0.07/0.12/0.08            0.02/0.02/0.02
V500_2    0.12/0.12/0.12         0.12/0.12/0.12            0.02/0.02/0.02
V500_3    0.07/0.12/0.1          0.07/0.07/0.07            0.02/0.02/0.02
V500_4    0.07/0.07/0.07         0.07/0.07/0.07            0.02/0.02/0.02
V500_5    0.12/0.12/0.12         0.12/0.12/0.12            0.02/0.02/0.02
V500_6    0.07/0.12/0.11         0.12/0.12/0.12            0.02/0.02/0.02
V500_7    0.12/0.12/0.12         0.12/0.12/0.12            0.02/0.02/0.02
V500_8    0.07/0.07/0.07         0.07/0.07/0.07            0.02/0.02/0.02
V500_9    0.07/0.07/0.07         0.07/0.07/0.07            0.02/0.02/0.02
V500_10   0.07/0.12/0.1          0.07/0.07/0.07            0.02/0.02/0.02
V500_11   0.07/0.07/0.07         0.07/0.07/0.07            0.01/0.02/0.02
V500_12   0.07/0.07/0.07         0.07/0.07/0.07            0.02/0.02/0.02
V500_13   0.07/0.07/0.07         0.07/0.07/0.07            0.02/0.02/0.02
V500_14   0.12/0.12/0.12         0.12/0.12/0.12            0.02/0.03/0.02
V500_15   0.07/0.12/0.11         0.12/0.12/0.12            0.02/0.04/0.02
V500_16   0.07/0.12/0.11         0.12/0.12/0.12            0.02/0.04/0.02
V500_17   0.07/0.12/0.1          0.07/0.07/0.07            0.02/0.03/0.02
V500_18   0.12/0.12/0.12         0.12/0.12/0.12            0.02/0.02/0.02
V500_19   0.07/0.07/0.07         0.07/0.07/0.07            0.01/0.02/0.02
V500_20   0.07/0.07/0.07         0.07/0.07/0.07            0.01/0.02/0.01
Att       0.08/0.1/0.09          0.09/0.09/0.09            0.02/0.02/0.02
Table 3. Experimental results for instances of 1000 vertices.

Instance  DNTS (Best/Worst/Avg)  HAYWOOD (Best/Worst/Avg)  u-toksia (Best/Worst/Avg)
V1000_1   0.37/0.43/0.41         0.37/0.43/0.38            0.93/1.04/0.94
V1000_2   0.37/0.48/0.42         0.37/0.38/0.37            5.92/6.07/6.02
V1000_3   0.42/0.48/0.44         0.42/0.48/0.45            0.07/0.07/0.07
V1000_4   0.32/0.63/0.39         0.32/0.32/0.32            2.36/2.41/2.4
V1000_5   0.53/3.37/1.26         2.21/3.38/3.21            16.5/17.06/16.72
V1000_6   0.37/0.48/0.41         0.37/0.43/0.42            0.04/0.07/0.05
V1000_7   0.42/0.53/0.46         0.42/0.43/0.43            2.82/2.92/2.89
V1000_8   0.53/2.76/1.23         1.14/1.19/1.17            15.99/16.35/16.23
V1000_9   0.32/5.76/1.37         0.32/0.32/0.32            0.02/0.02/0.02
V1000_10  0.32/1.95/0.54         0.32/0.32/0.32            0.27/0.27/0.27
V1000_11  0.42/0.48/0.44         0.42/0.47/0.43            2.46/2.56/2.51
V1000_12  0.37/0.73/0.55         0.42/0.43/0.43            3.73/3.78/3.75
V1000_13  0.32/0.48/0.35         0.32/0.37/0.33            0.04/0.04/0.04
V1000_14  0.43/3.58/0.84         0.42/0.43/0.42            14.11/14.46/14.31
V1000_15  0.42/0.88/0.59         0.58/0.58/0.58            3.07/3.22/3.15
V1000_16  0.32/0.52/0.35         0.32/0.32/0.32            6.53/6.73/6.66
V1000_17  2.71/28.1/12.66        13.24/41.22/30.26         17.72/18.13/17.91
V1000_18  0.32/0.99/0.43         0.27/0.27/0.27            0.04/0.04/0.04
V1000_19  1.75/18.62/6.39        14.36/20.57/15.19         18.69/19.24/18.97
V1000_20  0.68/2.66/1.11         0.83/0.93/0.87            18.74/19.14/18.92
Att       0.59/3.7/1.53          1.87/3.66/2.82            6.5/6.68/6.59
Table 4. Experimental results for instances of 2000 vertices.

Instance  DNTS (Best/Worst/Avg)  HAYWOOD (Best/Worst/Avg)  u-toksia (Best/Worst/Avg)
V2000_1   5.46/142.6/44.52       14/14.41/14.21            26.01/27.13/26.62
V2000_2   20.62/251.33/76.89     60.09/216.14/120.46       20.57/21.18/20.83
V2000_3   2.87/26.31/5.52        3.27/3.98/3.37            26.31/26.98/26.66
V2000_4   2.36/20.52/6.67        8.3/9.48/8.55             20.16/20.77/20.52
V2000_5   2.92/25.86/5.79        2.87/3.38/2.98            25.75/26.47/26.07
V2000_6   0.07/2.1/1.86          2/2.1/2.05                0.07/0.07/0.07
V2000_7   0.12/2.51/2.06         2.15/2.21/2.18            0.12/0.12/0.12
V2000_8   8.71/36.18/21.97       57.5/270.97/167.28        26.57/27.38/26.94
V2000_9   2.71/9.93/3.66         3.17/3.27/3.22            9.68/9.94/9.81
V2000_10  2/15.78/4.63           2.05/2.1/2.07             7.85/8/7.9
V2000_11  0.07/1.95/1.64         1.75/1.8/1.8              0.07/0.07/0.07
V2000_12  1.75/17.26/3.61        1.65/1.7/1.66             17.16/17.51/17.34
V2000_13  2.36/20.87/4.98        2.21/2.26/2.22            20.87/21.33/21.09
V2000_14  3.07/10.65/4.35        3.78/3.99/3.85            10.65/10.85/10.74
V2000_15  3.07/10.39/4.07        4.24/4.45/4.33            10.39/10.6/10.46
V2000_16  1.9/6.73/2.44          1.9/1.95/1.91             6.63/6.78/6.7
V2000_17  3.22/10.9/4.61         3.48/3.63/3.57            10.75/11.01/10.86
V2000_18  1.85/17.16/3.68        2.21/2.26/2.25            16.9/17.26/17.11
V2000_19  2.97/33.96/15.67       7.44/7.75/7.57            20.26/20.72/20.49
V2000_20  5.56/28.76/12.26       21.07/27.89/22.11         28.75/29.17/28.9
Att       3.68/34.59/11.54       10.26/29.29/18.88         15.28/15.67/15.47
Table 5. Experimental results for instances of 3000 vertices.

Instance  DNTS (Best/Worst/Avg)  HAYWOOD (Best/Worst/Avg)  u-toksia (Best/Worst/Avg)
V3000_1   5/7.19/5.8             4.85/5.05/4.96            22.7/23.16/22.87
V3000_2   6.17/7.85/6.66         6.27/6.43/6.36            10.95/11.21/11.13
V3000_3   8.51/13.35/11.33       7.7/8.11/7.92             34.3/35.32/35.05
V3000_4   8.05/11.16/10.46       10.04/10.65/10.44         14.21/14.57/14.41
V3000_5   12.89/31.71/21.28      23.16/24.18/23.77         33.94/34.76/34.35
V3000_6   6.38/8.61/7.37         9.63/10.14/9.92           28.45/29.89/28.93
V3000_7   7.29/11.51/9.45        6.93/7.29/7.17            27.28/27.89/27.58
V3000_8   7.09/11.21/8.73        18.53/19.04/18.73         28.25/28.91/28.6
V3000_9   11.01/16.09/13.27      22.45/23.21/22.92         15.33/15.63/15.5
V3000_10  12.58/44.47/22.96      63.5/68.07/66.21          33.84/34.55/34.09
V3000_11  5.61/5.97/5.73         5.56/5.72/5.61            8.87/9.17/9.06
V3000_12  5.46/5.77/5.6          5.26/5.36/5.33            22.55/23.01/22.81
V3000_13  9.83/13.34/11.44       10.8/11.21/10.98          35.42/36.34/36.01
V3000_14  12.28/31.25/18.69      11.87/12.22/12.1          38.06/38.88/38.41
V3000_15  9.43/13.8/11.74        11.11/11.51/11.28         35.47/35.98/35.75
V3000_16  6.33/6.58/6.49         6.22/6.48/6.39            0.17/0.17/0.17
V3000_17  13.35/24.59/18.2       57.09/60.5/59.23          36.69/37.61/37.23
V3000_18  12.38/157.44/70.56     128.45/134.15/131.52      32.82/33.38/33.05
V3000_19  6.43/7.95/6.91         6.53/6.78/6.66            11.06/11.41/11.24
V3000_20  7.39/8.67/7.78         7.24/7.6/7.45             13.09/13.4/13.25
Att       8.67/21.93/14.02       21.16/22.19/21.75         24.17/24.76/24.47
Table 6. Experimental results for instances of 4000 vertices.

Instance  DNTS (Best/Worst/Avg)  HAYWOOD (Best/Worst/Avg)  u-toksia (Best/Worst/Avg)
V4000_1   20.92/25.49/22.13      19.65/20.52/20.1          23.67/48.39/44.93
V4000_2   23.01/93/47.8          20.16/21.28/20.59         41.63/57.65/43.79
V4000_3   32.57/138.98/61.95     245.5/261.06/251.06       47.47/121.48/55.29
V4000_4   17.16/24.08/20.5       17.01/17.87/17.3          18.69/38.22/36.07
V4000_5   19.75/24.89/22.63      19.04/19.96/19.51         22.14/42.39/40.01
V4000_6   34.15/120.62/57.42     200.78/208.1/203.81       48.09/55.3/49.26
V4000_7   28.2/83.08/46.17       76.01/80.28/77.45         30.38/48.89/46.55
V4000_8   24.79/31.56/28.24      27.33/31.1/28.29          24.02/54.65/50.96
V4000_9   23.57/33.23/28.45      23.87/25.2/24.45          25.96/48.64/45.85
V4000_10  30.99/65.49/37.87      33.08/34.55/33.71         46.91/52.36/50.73
V4000_11  20.77/23.67/21.65      20.06/20.87/20.47         17.72/21.33/18.21
V4000_12  19.04/25.04/22.14      21.69/22.7/22             23.36/44.98/42.38
V4000_13  24.13/31.25/27.19      51.95/55.05/52.85         28.15/50.43/47.57
V4000_14  19.8/28.44/22.75       20.57/21.48/20.96         16.8/26.87/17.94
V4000_15  19.25/25.28/22.08      22.75/23.72/23.15         17.41/21.18/17.88
V4000_16  28.65/79.62/51.66      155.61/162.32/158.73      50.27/82.52/53.94
V4000_17  21.99/26.92/24.41      26.92/28.35/27.42         25.5/47.27/44.49
V4000_18  36.59/86.28/52.3       71.53/75.25/73.03         48.54/82.01/52.37
V4000_19  28.55/46.51/33.91      34.61/36.33/35.14         32.06/55.41/52.46
V4000_20  36.84/67.62/49.89      287.14/299.6/291.57       48.49/68.33/50.89
Att       25.54/54.05/35.06      69.76/73.28/71.08         31.86/53.42/43.08
Table 7. Experimental results on the number of instances correctly solved.

Group  DNTS (Best/Worst/Avg)  HAYWOOD (Best/Worst/Avg)
1000   50/50/50               50/50/50
2000   50/50/50               50/50/50
3000   49/48/48.33            46/46/46
4000   45/45/45               36/35/35.33
Table 8. Comparison of different initialization solutions.

Group        Instance  DNTS (Best/Worst/Avg)  DNTS-OUT (Best/Worst/Avg)
vertex_500   v500_1    0.12/0.12/0.12         0.12/0.12/0.12
             v500_2    0.12/0.12/0.12         0.17/0.17/0.17
             v500_3    0.07/0.12/0.09         0.12/0.12/0.12
             v500_4    0.07/0.07/0.07         0.12/0.12/0.12
             v500_5    0.12/0.12/0.12         0.17/0.17/0.17
             v500_6    0.12/0.12/0.12         0.12/0.17/0.16
             v500_7    0.07/0.12/0.11         0.12/0.17/0.14
             v500_8    0.07/0.12/0.08         0.12/0.12/0.12
             v500_9    0.07/0.12/0.08         0.12/0.12/0.12
             v500_10   0.07/0.12/0.09         0.12/0.12/0.12
vertex_1000  v1000_1   0.42/1.04/0.55         0.68/0.73/0.7
             v1000_2   0.42/2.66/0.88         0.68/0.83/0.75
             v1000_3   0.42/0.53/0.46         0.98/1.19/1.04
             v1000_4   0.32/0.53/0.40         0.48/0.58/0.53
             v1000_5   0.53/6.12/1.98         0.98/2.1/1.47
             v1000_6   0.37/0.43/0.40         0.63/1.75/0.88
             v1000_7   0.42/0.58/0.49         0.68/0.73/0.71
             v1000_8   0.83/6.06/2.24         1.09/1.85/1.48
             v1000_9   0.32/1.14/0.53         0.37/3.88/2.46
             v1000_10  0.37/1.65/0.66         0.48/0.53/0.51
vertex_2000  v2000_1   9.17/37.30/19.15       55.00/151.85/103.30
             v2000_2   20.83/39.49/28.35      34.10/188.72/88.81
             v2000_3   3.83/5.61/5.04         17.26/22.55/19.65
             v2000_4   2.97/4.14/3.80         4.85/9.73/7.39
             v2000_5   3.12/5.16/3.91         19.55/23.82/20.76
             v2000_6   2.21/2.41/2.30         2.51/3.07/2.79
             v2000_7   2.51/2.71/2.61         5.21/7.65/6.23
             v2000_8   21.33/66.75/45.77      176.36/374.11/291.23
             v2000_9   4.90/6.53/5.82         17.72/24.18/20.88
             v2000_10  2.31/2.46/2.41         4.95/6.73/5.79
vertex_3000  v3000_1   4.95/6.78/5.98         7.49/9.02/8.56
             v3000_2   6.43/8.82/7.12         26.67/30.13/28.32
             v3000_3   8.21/24.28/13.56       54.70/69.09/59.55
             v3000_4   9.43/14.52/11.09       83.03/87.70/84.60
             v3000_5   13.75/78.64/42.33      139.24/150.02/143.93
             v3000_6   6.28/11.36/8.36        26.83/29.82/28.31
             v3000_7   8.36/9.84/8.88         24.23/31.05/28.40
             v3000_8   6.53/10.65/8.85        27.69/37.25/29.77
             v3000_9   12.12/16.55/14.16      108.18/120.71/117.22
             v3000_10  11.11/114.46/44.33     100.17/119.29/106.65
vertex_4000  v4000_1   19.70/27.79/23.40      155.34/173.01/162.97
             v4000_2   23.87/76.72/42.90      184.01/262.12/206.70
             v4000_3   35.22/169.90/77.79     309.48/600.00/405.81
             v4000_4   16.95/24.94/19.55      85.26/95.58/88.29
             v4000_5   19.96/30.18/24.49      150.98/182.36/162.84
             v4000_6   35.67/73.36/49.53      321.31/380.52/347.62
             v4000_7   27.84/58.36/39.37      246.75/269.32/257.73
             v4000_8   24.13/35.47/30.25      223.92/232.01/226.24
             v4000_9   25.40/35.93/29.72      237.76/246.97/241.41
             v4000_10  27.08/62.94/43.40      310.87/386.32/346.17
Table 9. Experimental results on the number of instances correctly solved.

Group  DNTS (Best/Worst/Avg)  DNTS-OUT (Best/Worst/Avg)
1000   50/50/50               50/50/50
2000   50/50/50               50/50/50
3000   49/49/49               49/48/48.33
4000   45/45/45               21/20/20.67
Table 10. Comparison of tabu strategy.

Group        Instance  DNTS (Best/Worst/Avg)  DNTS-NT (Best/Worst/Avg)
vertex_500   v500_1    0.12/0.12/0.12         0.07/0.07/0.07
             v500_2    0.12/0.12/0.12         0.12/0.12/0.12
             v500_3    0.07/0.12/0.09         0.07/0.12/0.08
             v500_4    0.07/0.07/0.07         0.07/0.07/0.07
             v500_5    0.12/0.12/0.12         0.07/0.12/0.11
             v500_6    0.12/0.12/0.12         0.07/0.07/0.07
             v500_7    0.07/0.12/0.11         0.07/0.12/0.09
             v500_8    0.07/0.07/0.08         0.07/0.07/0.07
             v500_9    0.07/0.12/0.08         0.07/0.07/0.07
             v500_10   0.07/0.12/0.09         0.07/0.12/0.08
vertex_1000  v1000_1   0.42/1.04/0.55         0.38/4.49/1.23
             v1000_2   0.42/2.66/0.88         0.42/0.48/0.46
             v1000_3   0.42/0.53/0.46         0.48/0.53/0.52
             v1000_4   0.32/0.53/0.4          0.32/1.19/0.52
             v1000_5   0.53/6.12/1.98         0.63/1.85/1.26
             v1000_6   0.37/0.43/0.40         0.43/0.53/0.47
             v1000_7   0.42/0.58/0.49         0.43/0.48/0.45
             v1000_8   0.83/6.06/2.24         0.83/1.44/1.06
             v1000_9   0.32/1.14/0.53         0.32/0.68/0.39
             v1000_10  0.37/1.65/0.66         0.37/0.43/0.39
vertex_2000  v2000_1   9.17/37.30/19.15       14.01/81.66/43.66
             v2000_2   20.83/39.49/28.35      5.46/141.26/43.09
             v2000_3   3.83/5.61/5.04         3.93/5.92/4.58
             v2000_4   2.97/4.14/3.80         3.53/9.02/6.32
             v2000_5   3.12/5.16/3.91         2.97/5.15/3.74
             v2000_6   2.21/2.41/2.30         2.16/2.26/2.21
             v2000_7   2.51/2.71/2.61         2.26/2.46/2.35
             v2000_8   21.33/66.75/45.77      23.77/62.89/49.68
             v2000_9   4.90/6.53/5.82         3.83/5.31/4.63
             v2000_10  2.31/2.46/2.41         2.21/3.83/2.58
vertex_3000  v3000_1   4.95/6.78/5.98         5.77/6.07/5.96
             v3000_2   6.43/8.82/7.12         11.67/12.02/11.85
             v3000_3   8.21/24.28/13.56       11.31/15.63/13.28
             v3000_4   9.43/14.52/11.09       17.52/31.96/21.69
             v3000_5   13.75/78.64/42.33      22.45/92.95/54.65
             v3000_6   6.28/11.36/8.36        12.28/18.79/15.49
             v3000_7   8.36/9.84/8.88         6.78/11.71/9.98
             v3000_8   6.53/10.65/8.85        7.50/8.77/8.03
             v3000_9   12.12/16.55/14.16      9.43/14.57/11.11
             v3000_10  11.11/114.46/44.33     36.44/50.42/43.44
vertex_4000  v4000_1   19.70/27.79/23.40      23.11/31.25/26.61
             v4000_2   23.87/76.72/42.90      59.28/130.74/104.79
             v4000_3   35.22/169.90/77.79     45.44/119.97/80.71
             v4000_4   16.95/24.94/19.55      39.03/57.65/45.96
             v4000_5   19.96/30.18/24.49      43.46/49.81/46.43
             v4000_6   35.67/73.36/49.53      73.97/122.80/90.77
             v4000_7   27.84/58.36/39.37      58.71/109.27/91.50
             v4000_8   24.13/34.50/30.25      45.08/55.97/50.62
             v4000_9   25.40/35.93/29.72      21.48/27.08/23.96
             v4000_10  27.08/62.94/43.40      65.53/80.03/72.62
Table 11. Experimental results on the number of instances correctly solved.

Group  DNTS (Best/Worst/Avg)  DNTS-NT (Best/Worst/Avg)
1000   50/50/50               50/50/50
2000   50/50/50               50/50/50
3000   49/49/49               49/48/48.67
4000   45/45/45               43/42/43.67
Table 12. Comparison of perturbation technique.

Group        Instance  DNTS (Best/Worst/Avg)  DNTS-NP (Best/Worst/Avg)
vertex_500   v500_1    0.12/0.12/0.12         0.12/0.12/0.12
             v500_2    0.12/0.12/0.12         0.12/0.12/0.12
             v500_3    0.07/0.12/0.09         0.07/0.12/0.1
             v500_4    0.07/0.07/0.07         0.07/0.07/0.07
             v500_5    0.12/0.12/0.12         0.12/0.12/0.12
             v500_6    0.12/0.12/0.12         0.12/0.12/0.12
             v500_7    0.07/0.12/0.11         0.12/0.12/0.12
             v500_8    0.07/0.07/0.08         0.07/0.07/0.07
             v500_9    0.07/0.12/0.08         0.07/0.12/0.08
             v500_10   0.07/0.12/0.09         0.07/0.12/0.09
vertex_1000  v1000_1   0.42/1.04/0.55         0.37/0.43/0.4
             v1000_2   0.42/2.66/0.88         0.37/0.53/0.45
             v1000_3   0.42/0.53/0.46         0.42/600/120.36
             v1000_4   0.32/0.53/0.4          0.37/0.38/0.37
             v1000_5   0.53/6.12/1.98         0.47/0.99/0.68
             v1000_6   0.37/0.43/0.4          0.37/0.43/0.41
             v1000_7   0.42/0.58/0.49         0.42/600/120.37
             v1000_8   0.83/6.06/2.24         0.58/2.26/1.22
             v1000_9   0.32/1.14/0.53         0.32/98.03/19.88
             v1000_10  0.37/1.65/0.66         0.32/527.63/105.81
vertex_2000  v2000_1   9.17/37.30/19.15       21.43/600.00/153.29
             v2000_2   20.83/39.49/28.35      36.29/600.00/366.56
             v2000_3   3.83/5.61/5.04         3.32/5.01/4.18
             v2000_4   2.97/4.14/3.80         3.43/6.38/5.10
             v2000_5   3.12/5.16/3.91         3.02/5.61/4.60
             v2000_6   2.21/2.41/2.30         2.10/2.61/2.39
             v2000_7   2.51/2.71/2.61         2.26/3.22/2.70
             v2000_8   21.33/66.75/45.77      20.47/600.00/152.72
             v2000_9   4.90/6.53/5.82         3.37/4.49/3.87
             v2000_10  2.31/2.46/2.41         2.10/2.82/2.46
vertex_3000  v3000_1   4.95/6.78/5.98         5.51/6.37/5.97
             v3000_2   6.43/8.82/7.12         6.53/9.22/7.97
             v3000_3   8.21/24.28/13.56       11.06/14.16/12.09
             v3000_4   9.43/14.52/11.09       9.63/16.75/13.03
             v3000_5   13.75/78.64/42.33      19.76/600.00/156.63
             v3000_6   6.28/11.36/8.36        7.49/12.83/10.34
             v3000_7   8.36/9.84/8.88         9.53/15.53/12.31
             v3000_8   6.53/10.65/8.85        7.34/11.51/9.78
             v3000_9   12.12/16.55/14.16      13.70/20.78/16.61
             v3000_10  11.11/114.46/44.33     25.19/42.69/33.60
vertex_4000  v4000_1   19.70/27.79/23.40      24.89/30.99/27.36
             v4000_2   23.87/76.72/42.90      35.78/153.08/65.36
             v4000_3   35.22/169.90/77.79     68.02/215.63/135.98
             v4000_4   16.95/24.94/19.55      20.11/32.12/24.14
             v4000_5   19.96/30.18/24.49      24.28/32.42/29.03
             v4000_6   35.67/73.36/49.53      42.18/600.00/277.31
             v4000_7   27.84/58.36/39.37      44.08/62.87/48.87
             v4000_8   24.13/34.50/30.25      31.80/38.52/34.20
             v4000_9   25.40/35.93/29.72      28.91/37.10/32.72
             v4000_10  27.08/62.94/43.40      34.86/52.15/42.51
Table 13. Experimental results on the number of instances correctly solved.

Group  DNTS (Best/Worst/Avg)  DNTS-NP (Best/Worst/Avg)
1000   50/50/50               50/50/50
2000   50/50/50               48/47/47.67
3000   49/49/49               49/49/49
4000   45/45/45               45/44/44.33
Share and Cite

Ke, Y.; Hu, X.; Sun, J.; Wu, X.; Xiong, C.; Luo, M. Dual-Neighborhood Tabu Search for Computing Stable Extensions in Abstract Argumentation Frameworks. Appl. Sci. 2024, 14, 6428. https://doi.org/10.3390/app14156428
