Article

Transparent Task Delegation in Multi-Agent Systems Using the QuAD-V Framework

by Jeferson José Baqueta 1,*, Mariela Morveli-Espinoza 2 and Cesar Augusto Tacla 1,*

1 Programa de Pós-Graduação em Engenharia Elétrica e Informática Industrial, Universidade Tecnológica Federal do Paraná (UTFPR), Curitiba 80230-901, Brazil
2 Universidad Nacional de San Agustín de Arequipa, Arequipa 04000, Peru
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4357; https://doi.org/10.3390/app15084357
Submission received: 12 March 2025 / Revised: 6 April 2025 / Accepted: 11 April 2025 / Published: 15 April 2025

Abstract

Task delegation in multi-agent systems (MASs) is crucial for ensuring efficient collaboration among agents with different capabilities and skills. Traditional delegation models rely on social mechanisms such as trust and reputation to evaluate potential partners. While these approaches are effective in selecting competent agents, they often lack transparency, making it difficult for users to understand and trust the decision-making process. To address this limitation, we propose a novel task delegation model that integrates explainability through argumentation-based reasoning. Our approach employs the quantitative argumentation with votes framework (QuAD-V), a voting-based argumentation system that enables agents to justify their partner selection. We evaluate our model in a scenario involving the distribution of petroleum products via pipelines, where agents represent bases capable of temporarily storing a quantity of product. The connections between agents represent transportation routes, allowing the product to be sent from an origin to a destination base. The results demonstrate the effectiveness of our model in optimizing delegation decisions while maintaining clear, understandable explanations for agents’ decisions.

1. Introduction

Multi-agent systems (MASs) play a fundamental role in various domains, enabling distributed problem-solving and autonomous collaboration among agents [1,2,3]. In many applications, agents must delegate tasks to others, taking advantage of different capabilities and expertise to achieve common goals [4,5,6]. Task delegation mechanisms typically rely on social evaluation metrics, such as trust and reputation, to assess potential partners [7,8,9,10,11,12]. In such cases, agents learn about their partners’ behaviors through direct and indirect experiences, as well as observations of environmental conditions [13,14]. For example, in [9], a Fuzzy Cognitive Map (FCM)-based structure [15] is used to integrate different trust components into a performance measure that agents use to infer the probability of delegation associated with a partner. Similarly, a reputation-based delegation model is proposed in [10], where the partners are selected based on a performance metric computed from direct social evaluations or third-party assessments. Finally, in [12], partners are chosen based on a performance measure represented as a reward factor that is updated as tasks are delegated among agents, considering direct experiences between delegators and their partners. However, in general, such models produce only a quantitative measure that influences partner selection, without providing an explicit explanation of why a particular partner is preferred. This lack of explainability can undermine trust in autonomous decision-making systems, particularly in critical domains where understanding the rationale behind decisions is essential, such as medicine or finance [16,17,18].
Explainability in computational systems has gained significant attention in recent years, as users increasingly demand transparency in automated processes [19,20]. The need for transparency arises from the growing integration of artificial intelligence (AI) and autonomous systems, particularly in domains where automated decision-making affects human lives [21,22]. As these systems become more complex, users not only expect effective solutions but also a clear understanding of the reasoning behind them. Explainability is particularly valuable in high-stakes domains, such as autonomous vehicles, military operations, and critical infrastructure management, where the consequences of poor decisions can be severe [23,24,25].
A solution that has been explored to generate explanations for computational systems is formal argumentation (FA). FA enables the generation of explanations by analyzing a set of arguments and the relationships established between them [26]. In the literature, different types of FA are presented, such as the abstract argumentation framework (AAF) introduced in [27], where arguments are treated as atomic entities without internal structure. This framework is formally defined as a set of arguments, and a conflict relation between them can be represented as a directed graph. An AAF does not require complex rules or advanced logical processing to resolve conflicts but has limited expressiveness, as it does not account for support relations between arguments. On the other hand, more sophisticated approaches, such as structured argumentation frameworks (i.e., ABA [28], ASPIC+ [29], and DeLP [30]), build argumentation based on logical rules and deductions, allowing for the formal modeling of arguments within a robust formalism. However, these approaches require specific grammars and generally present greater complexity for integration into applications without specialized libraries.
In this work, we propose a novel task delegation model that incorporates explainability through argumentation-based reasoning. Unlike traditional delegation models that rely solely on trust or reputation scores to select partners, our approach not only computes an evaluation metric but also justifies the decision by explaining the underlying reasons for a partner's acceptance or rejection. In particular, the explanations about the agents' decisions are generated based on the quantitative argumentation with votes framework (QuAD-V) [31]. Such a framework is based on bipolar argumentation frameworks (BAFs) [32], which are more expressive than AAFs as they better capture dialectical relationships by considering both attacks and supports between arguments. QuAD-V offers a good balance between simplicity and expressiveness, as it can be modeled as a directed graph with two types of relations, without requiring defeasible logic or complex grammars. Additionally, QuAD-V is particularly suitable for scenarios that require explainability and human participation in decision-making. In our case, the QuAD-V framework is used to address the problem of task delegation, where multiple delegators vote for the best partners based on their behavior. These features make our model transparent, as it provides clear justifications for agent decisions. Its straightforward design ensures high reproducibility and easy replication. Furthermore, the model is adaptable to different contexts by adjusting argument structures, maintaining its explanatory power across various delegation scenarios. These features (transparency, reproducibility, and adaptability) set our model apart from existing delegation approaches.
To validate the effectiveness of the model, we simulate a scenario involving the distribution of petroleum products via pipelines. In this scenario, the pipeline network is modeled as a series of delegation chains, where intermediate agents represent temporary storage bases that facilitate the transportation of products from one point to another within the network. In this context, the QuAD-V framework is adopted to generate explanations at two levels: locally, by explaining the reasoning behind an individual agent’s decision, and globally, by justifying why a particular delegation route is optimal given the selected partners. Our results demonstrate that the proposed model not only enhances partner selection by allowing delegators to identify partners with high success rates, low rejection rates, and the ability to make accurate performance estimations, but also provides comprehensible explanations for the agents’ decisions, making the decision-making process more interpretable and trustworthy.
The remainder of this paper is structured as follows: Section 2 presents the basic concepts adopted in this work. Section 3 presents our proposed model, detailing the integration of the QuAD-V framework. Section 4 describes the experimental setup. Section 5 analyzes the results, highlighting the advantages of our approach. Finally, Section 6 concludes the paper and outlines future research directions.

2. Background

This section discusses the basic concepts that underpin our delegation model, including a description of the task delegation process and its phases, the social evaluation mechanisms adopted by agents to share their impressions about a partner, and a brief formalization of QuAD-V, where we describe how the acceptance score (i.e., the measure used by the agents to select their partners) is computed based on a set of arguments.

2.1. Task Delegation

A task delegation scenario involves an agent (delegator) who aims to achieve a goal g but cannot perform a task τ necessary to accomplish g by itself. Thus, to achieve its goal, the delegator needs to delegate τ to a partner (delegatee) capable of completing τ from the delegator's perspective [2,9]. In this scenario, the delegator will achieve g only if the delegatee successfully completes τ [14]. In particular, we divide the task delegation process into three phases (i.e., offer request, partner selection, and task execution):
  • Offer request: A delegator notifies its partners of the intention to delegate a task τ (i.e., a task request). After receiving such a notification, partners send their offers to the delegator, providing their performance estimations for τ. Performance estimates are made based on the task criteria (e.g., the cost and time required to perform τ).
  • Partner selection: After receiving offers from its partners, the delegator selects its delegatees. This selection involves evaluating each partner, considering their availability, effectiveness in performing τ, and accuracy in meeting estimates (competence).
  • Task execution: Delegatees execute tasks, while delegators evaluate their performance. A delegator α assesses a delegatee β based on the outcome produced for task τ. This outcome, a tuple similar to an offer, reflects β's actual performance. If β successfully delivers an outcome, τ is complete or partially complete, depending on how accurately β met its performance estimates. Otherwise, β fails to perform τ.
Definition 1.
An offer is a four-tuple ⟨α, β, τ, V⟩, where
  • α is the agent who requests the offer (delegator);
  • β is the agent who produced the offer (partner);
  • τ is the task that α intends to delegate to β if β is selected as its delegatee;
  • V = [(c_1, v_1), ..., (c_k, v_k)] is a vector that represents the performance estimations made by β, where each value v_i ∈ [0, +∞) is β's estimation for criterion c_i of τ.

2.2. Impressions

Impressions are evaluative beliefs that a delegator α produces or receives from third parties about β's competencies as a delegatee for τ. These beliefs attest to β's capability to perform the task τ according to its performance estimates [33]. For example, an impression produced by α can include scores assigned to β for performing τ within the agreed budget and delivery time, based on criteria such as cost and time. These scores reflect how well β met the expectations of α for τ.
Definition 2.
An impression is a five-tuple ⟨α, β, τ, t, S⟩, where
  • α is the delegator who formed the impression;
  • β is the assessed delegatee;
  • τ is the task performed by β;
  • t is the time at which the impression was created by α;
  • S = [(c_1, s_1), ..., (c_k, s_k)] is a vector of ratings, where a pair (c_i, s_i) represents a score s_i ∈ [0, 1] assigned to β by α concerning its performance on criterion c_i of τ.
During task execution, a delegator α evaluates a delegatee β by comparing β's estimated performance for a task τ with the actual outcome, for each criterion associated with τ. The estimation error for a criterion c is the difference between the estimated and actual performance. If this error is zero, β accurately predicted its performance for c. Otherwise, in the impression produced by α, β is penalized based on its estimation error. Assuming an offer Off = ⟨α, β, τ, V⟩ and an outcome Out = ⟨α, β, τ, V⟩, the delegator α assigns a score to β based on the criterion c of task τ. This score is stored in an impression, represented as a five-tuple Imp = ⟨α, β, τ, t, S⟩, where S is a vector of ratings. The score assigned to c is denoted as S[c] ∈ [0, 1] and is formally defined as follows:

$$Imp(S[c]) = \begin{cases} \dfrac{Off(V[c])}{Out(V[c])} & \text{if } isMin(c) \wedge Out(V[c]) > Off(V[c]) \\[4pt] \dfrac{Out(V[c])}{Off(V[c])} & \text{if } isMax(c) \wedge Out(V[c]) < Off(V[c]) \\[4pt] 1 & \text{otherwise} \end{cases}$$

where isMin(c) and isMax(c) indicate whether c is a minimization (e.g., time, cost) or maximization criterion (e.g., quality). If β's estimation is accurate or exceeds α's expectations, the score for c is one [2,34].
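To make the scoring rule concrete, the following minimal Python sketch computes the score for a single criterion from an offer/outcome pair, following the definition above; the function name and flat argument list are our own illustrative choices, not part of the model's specification.

def impression_score(offered: float, actual: float, is_min: bool, is_max: bool) -> float:
    """Score in [0, 1] for one criterion, per Imp(S[c]) above.

    A minimization criterion (cost, time) is penalized when the actual
    value exceeds the estimate; a maximization criterion (quality) is
    penalized when the actual value falls short of the estimate.
    """
    if is_min and actual > offered:
        return offered / actual          # e.g., promised cost 10, actual 20 -> 0.5
    if is_max and actual < offered:
        return actual / offered          # e.g., promised quality 1.0, actual 0.8 -> 0.8
    return 1.0                           # estimate met or exceeded expectations

# Example: a partner estimated a cost of 10, but the task cost 12.5.
print(impression_score(offered=10.0, actual=12.5, is_min=True, is_max=False))  # 0.8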
Additionally, impressions can be grouped to build more comprehensive social measures of a partner's competencies, including its social image, reputation, and know-how [13,35]. Even though these are distinct concepts, they are complementary in agent interactions. Social image represents a delegator's personal assessment of a delegatee's competencies based on direct interactions. It captures the delegator's immediate impression of how well the delegatee performed in specific tasks. Reputation, on the other hand, is a collective measure formed by aggregating impressions from multiple delegators. It reflects a general consensus on a partner's competencies as a delegatee, even when individual delegators lack direct experience with that partner. Know-how is a specialized form of reputation where a delegatee accumulates impressions from delegators about tasks performed in the past, similar to job references. These impressions can be shared with potential new delegators, offering additional information about the partner's competencies concerning the execution of a given task.
To illustrate these differences, consider a scenario where an agent α (buyer) is evaluating an agent β (seller) for book and magazine purchases. If α has previously interacted with β, it can form distinct social images based on its direct experiences—for instance, perceiving β as a good bookseller due to high-quality books and fair prices while, at the same time, viewing β as a poor magazine seller due to its limited diversity of magazines for sale. If no prior interactions exist, α can rely on β's reputation, aggregating feedback from other buyers to assess β's reliability. Additionally, α may request references from β to infer its know-how, verifying its expertise as a book or magazine seller based on impressions provided by past clients who interacted directly with β but are not directly related to α.

2.3. QuAD-V Framework

The QuAD-V framework, proposed by [31], extends the quantitative argumentation debate framework (QuAD) [36]. Its main advantage is the ability to resolve debates using a voting system, where users vote for or against arguments. As defined in [31], QuAD-V is a six-tuple ⟨A, C, P, R, U, V⟩, where
  • A is a finite set of answer arguments (possible responses to a question);
  • C is a finite set of con arguments (arguments opposing another argument);
  • P is a finite set of pro arguments (arguments supporting another argument);
  • R is an acyclic binary relation over (C ∪ P) × (A ∪ C ∪ P);
  • U is a finite set of users;
  • V is a total function V : U × (A ∪ C ∪ P) → {−, ?, +}, where V(u, a) is the vote of user u ∈ U on argument a ∈ (A ∪ C ∪ P).
In the QuAD-V framework, arguments can attack or support each other. Attackers and supporters of an argument are defined based on con and pro arguments, respectively. For any argument a ∈ (A ∪ C ∪ P), the set of attackers is R⁻(a) = {b ∈ C | (b, a) ∈ R}, and the set of supporters is R⁺(a) = {b ∈ P | (b, a) ∈ R}. Moreover, each argument a has a vote-based score bS : (A ∪ C ∪ P) → I, with I = [0, 1], computed from user votes:

$$bS(a) = \begin{cases} 0.5 & \text{if } |U| = 0 \\[4pt] 0.5 + 0.5\,\dfrac{N^{+}(a) - N^{-}(a)}{|U|} & \text{if } |U| \neq 0 \end{cases}$$

where N⁺(a) = |V⁺(a)| is the count of positive votes, with V⁺(a) = {u ∈ U : V(u, a) = +}, and N⁻(a) = |V⁻(a)| is the count of negative votes, with V⁻(a) = {u ∈ U : V(u, a) = −}. Note that in this modeling, an argument with no votes (i.e., |U| = 0) is assigned a base score of 0.5, representing an initial state where it has not yet been evaluated positively or negatively by users [37]. Conversely, if all users vote in favor of an argument a (i.e., |V⁺(a)| = |U|), its base score is 1 (fully positive), whereas if all users vote against it (i.e., |V⁻(a)| = |U|), its base score is 0 (fully negative).
On the other hand, the acceptance score of an argument a, aS(a) ∈ [0, 1], determines its strength: for example, 1 for accepted, 0.5 for neutral, and 0 for rejected [37]. This measure is computed by considering the influence of the attackers and supporters of the argument a. In particular, for each argument (parent), the attackers and supporters are extracted, forming two sequences of child arguments. These sequences are normalized by the number of arguments, considering the strength of each argument within the sequence. The aggregated strength of a sequence of arguments is computed as follows:

$$\sigma(s) = \begin{cases} 0 & \text{if } s = () \\[4pt] \dfrac{1}{|s|} \displaystyle\sum_{i=1}^{|s|} s_i & \text{otherwise} \end{cases}$$

where s represents the sequence of strengths of either the attackers or the supporters. Taking into account the aggregated strength of the attackers (s_att) and supporters (s_supp), the acceptance score of the argument is computed as follows:

$$aS(a) = \begin{cases} bS(a) + (s_{supp} - s_{att}) \cdot (1 - bS(a)) & \text{if } s_{supp} \geq s_{att} \\[4pt] bS(a) + (s_{supp} - s_{att}) \cdot bS(a) & \text{otherwise} \end{cases}$$

Note that the process used to compute the acceptance score of an argument a ensures that stronger supporters amplify a's acceptability without exceeding 1, while stronger attackers reduce it but never below 0.
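The base score and the recursive acceptance score can be captured compactly in Python. The following is a minimal sketch under our own graph representation (a plain dataclass with supporter and attacker lists); the QuAD-V literature does not prescribe an implementation.

from dataclasses import dataclass, field

@dataclass
class Argument:
    content: str
    pos_votes: int = 0                                           # N+(a)
    neg_votes: int = 0                                           # N-(a)
    supporters: list["Argument"] = field(default_factory=list)   # R+(a)
    attackers: list["Argument"] = field(default_factory=list)    # R-(a)

def base_score(a: Argument, num_users: int) -> float:
    """bS(a): 0.5 when unvoted, shifted by the normalized vote balance."""
    if num_users == 0:
        return 0.5
    return 0.5 + 0.5 * (a.pos_votes - a.neg_votes) / num_users

def aggregate(scores: list[float]) -> float:
    """Mean strength of a sequence of child arguments; 0 if the sequence is empty."""
    return sum(scores) / len(scores) if scores else 0.0

def acceptance_score(a: Argument, num_users: int) -> float:
    """aS(a): base score moved toward 1 by supporters, toward 0 by attackers."""
    b = base_score(a, num_users)
    s_supp = aggregate([acceptance_score(s, num_users) for s in a.supporters])
    s_att = aggregate([acceptance_score(c, num_users) for c in a.attackers])
    if s_supp >= s_att:
        return b + (s_supp - s_att) * (1 - b)   # amplified, capped at 1
    return b + (s_supp - s_att) * b             # weakened, floored at 0

Because the relation R is acyclic, the recursion terminates, and a leaf argument (no children) simply keeps its base score.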

3. Delegation Model

The proposed delegation model is built on the QuAD-V framework (Figure 1). This model is employed by the delegators (users) during the partner selection phase of the task delegation process to assess their partners and select delegatees. Each partner maintains an instance of the framework, which accumulates the delegators' votes as they interact with that partner. This ensures that the decision-making process continuously adapts based on new interactions, improving the selection process over time.

3.1. Evaluation Metrics

As illustrated in Figure 1, the framework contains only one answer argument (A1). The acceptance score aS(A1) ∈ [0, 1] quantifies the partner's suitability based on past evaluations, reinforcing trust in consistently reliable partners while penalizing those with lower performance. If the answer argument is accepted (aS(A1) ≥ 0.5), the partner can be selected as a delegatee; otherwise, if rejected (aS(A1) < 0.5), the partner should not be accepted. As presented in Figure 1, a partner is evaluated based on three dimensions:
  • Availability (associated with arguments from class A2*) (The asterisk (*) indicates that all arguments in a class have identifiers starting with a specific number. For example, arguments 21 and 22 belong to the Availability class, while arguments 31, 32, 321, and 322 belong to the Effectiveness class.): This dimension reflects a partner's capability to accept task requests from delegators. As presented in Figure 1, a delegator votes for or against this class of arguments based on the partner's rejection rate, RR(β, τ) ∈ [0, 1], which represents the frequency with which a partner β has rejected a task τ requested by its delegators over time:

$$RR(\beta, \tau) = \frac{|(\beta, \tau)^{-}|}{|(\beta, \tau)|}$$

    where |(β, τ)⁻| is the number of times β rejected a task request for τ from a delegator due to being occupied with another task or being unable to perform τ, and |(β, τ)| is the number of times β received a task request for τ from its delegators.
  • Effectiveness (associated with arguments of class A3*): This dimension estimates how often a partner successfully completes its tasks. A delegator votes for or against the arguments of this class based on the following measures:
    • Success rate, SR(β, τ) ∈ [0, 1]: This measure indicates the frequency with which a partner successfully completes a task, allowing its delegator to achieve its goal. The success rate is calculated as follows:

$$SR(\beta, \tau) = \frac{|(\beta, \tau)^{+}|}{|(\beta, \tau)^{*}|}$$

      where |(β, τ)⁺| is the number of times that β successfully completed τ, and |(β, τ)*| is the number of times that β attempted to execute τ, regardless of the success or failure in performing τ.
    • Success confidence, SC(β, τ) ∈ [0, 1]: This measure indicates the reliability of a partner β's success rate from the perspective of a delegator α, taking into account the number of interactions between them. The success confidence tends to increase as the number of interactions between α and β grows [38]:

$$SC(\beta, \tau) = I_U + (I_L - I_U) \left( \frac{I_n - |(\beta, \tau)^{*}|}{I_n} \right)^{2}$$

      where I_L is the reliability lower bound, I_U is the reliability upper bound, and I_n is the number of interactions needed for α to accurately infer β's success rate. (The parameters I_U, I_L, and I_n are user-defined and configurable according to system needs. In our experiments, we assigned I_U = 1 and I_L = 0, meaning the confidence score starts at 0 and increases to 1 as interactions grow. We defined I_n as the number of task requests received by β from α. Thus, a partner β who frequently rejects task requests from a delegator α tends to have lower reliability scores from α's perspective, since the number of interactions between α and β remains low relative to the total task requests received by β.)
  • Competence (associated with arguments from class A4*): This dimension evaluates whether a partner is capable of meeting the delegator's expectations regarding task execution, considering the partner's performance estimations. A delegator votes for or against arguments related to competence based on the following measures (a code sketch of the availability, effectiveness, and competence measures follows this list):
    • Competence score, COMP(β, τ) ∈ [0, 1]: This measure provides an up-to-date value of the partner's ability to fulfill its performance estimations. The competence score is computed as the simple average of the scores associated with the most recent impression produced by a delegator α regarding a delegatee β during the task execution phase of the delegation process.
    • Social image, SI(α, β, τ) ∈ [0, 1]: A social measure calculated by the aggregation of the set of impressions produced by a delegator α regarding the competencies of a partner β as a delegatee of a task τ. (To aggregate a set of impressions, a delegator α first averages such impressions for each task criterion, forming a single aggregated impression. Then, the final value is obtained by averaging the scores across all criteria [13,35].) The social image can be seen as α's personal opinion about β's competencies, taking into account a task τ [10].
    • Reputation, REP(β, τ) ∈ [0, 1]: A delegator α calculates β's reputation regarding τ by aggregating impressions obtained from third parties (other delegators). Reputation, therefore, represents a collective evaluation shared within a group, where most members agree without necessarily verifying the truth or sources of the impressions [10,35]. In our case, the group consists of delegators who assess a common delegatee based on the execution of a specific task.
    • Know-how, KW(β, τ) ∈ [0, 1]: A specialized form of reputation where delegators share impressions with their delegatees [8]. As β interacts with delegators while executing τ, it accumulates impressions on its competencies. These impressions, similar to job references, can be sent to a delegator α during partner selection [39,40]. By aggregating them, α computes a social measure of β's know-how.
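As referenced above, the evaluation measures can be sketched in a few lines of Python. The counters, the impression representation (a dict of criterion scores), and the neutral prior for an empty impression set are illustrative assumptions; the I_U = 1 and I_L = 0 defaults follow our experimental setup.

def rejection_rate(n_rejected: int, n_requests: int) -> float:
    """RR: fraction of task requests for tau that beta turned down."""
    return n_rejected / n_requests if n_requests else 0.0

def success_rate(n_success: int, n_attempts: int) -> float:
    """SR: fraction of attempted executions of tau that succeeded."""
    return n_success / n_attempts if n_attempts else 0.0

def success_confidence(n_attempts: int, i_n: int, i_u: float = 1.0, i_l: float = 0.0) -> float:
    """SC: reliability of SR, rising from I_L toward I_U as interactions approach I_n."""
    if i_n <= 0:
        return i_l
    frac = max(i_n - n_attempts, 0) / i_n
    return i_u + (i_l - i_u) * frac ** 2

def social_image(impressions: list[dict[str, float]]) -> float:
    """SI: average each criterion across impressions, then average across criteria."""
    if not impressions:
        return 0.5  # assumption: neutral prior when no direct experience exists
    criteria = impressions[0].keys()
    per_criterion = [sum(imp[c] for imp in impressions) / len(impressions) for c in criteria]
    return sum(per_criterion) / len(per_criterion)

Reputation and know-how reuse the same aggregation as social_image, applied to third-party impressions and to the references carried by the delegatee, respectively.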
It is important to note that the QuAD-V structure adopted in our delegation is robust enough to handle inconsistent information during the processing of argument strengths. An inconsistency occurs when an argument is simultaneously supported and attacked by different sources. This issue is naturally resolved by the argumentation-based voting mechanism, which balances supporting and opposing arguments during the recursive computation of argument strength. By considering both supporting and opposing contributions, this process ensures that conflicting viewpoints influence the final acceptance score in a structured manner.

3.2. Explanation Generation Process

The explanations produced by our delegation model are built based on Algorithm 1. This algorithm takes as input an answer argument and its supporting QuAD-V framework. It recursively evaluates the strength of arguments related to the answer argument to construct an explanation. The generated explanation consists of a decision (acceptance or rejection) and a set of arguments that justify such a decision. If the acceptance score of the answer argument is greater than or equal to 0.5, the argument is accepted along with supporting arguments; otherwise, it is rejected and opposing arguments are provided as justification [37]. Additionally, Algorithm 1 follows four main steps to generate explanations:
  • Initialization (lines 1–4): The algorithm initializes the necessary variables, including the sets of supporting arguments (pros) and opposing arguments (cons), as well as setting the acceptance threshold value (TH). Then, the children of the answer argument (arg) are retrieved from the QuAD-V framework.
  • Processing supporting and opposing arguments (lines 5–7): For each child argument, the algorithm calls the GenerateExplanation function, which determines whether the child argument supports or opposes the answer argument. If the child is a supporting argument and has an acceptance score above the threshold, it is added to the set of supporting arguments (pros). If it is an opposing argument and has an acceptance score above the threshold, it is added to the set of opposing arguments (cons).
  • Constructing the explanation (lines 8–14): After evaluating all supporting and opposing arguments, the algorithm constructs the explanation. The explanation consists of the decision (accepted or rejected) followed by a structured justification. If the answer argument meets or exceeds the threshold (TH), the explanation includes its acceptance and the supporting arguments that reinforce this decision. Otherwise, the argument is rejected, and the opposing arguments are included to justify the rejection.
  • Recursive propagation (lines 17–25): The algorithm recursively processes the children of each argument, ensuring that all relevant arguments are considered when forming the explanation.
Algorithm 1 Generating Explanations based on QuAD-V Framework
1:  Input: Argument arg, QuAD-V framework fw
2:  Output: Explanation for acceptance or rejection
3:  Initialize pros ← ∅, cons ← ∅, TH ← 0.5
4:  children ← fw.childrenOf(arg)
5:  for each child ∈ children do
6:      Call GenerateExplanation(arg, child, TH, fw)
7:  end for
8:  if arg.baseScore ≥ TH then
9:      Append (arg.content, ACCEPTED) to explanation
10:     Append supporting arguments pros to explanation
11: else
12:     Append (arg.content, REJECTED) to explanation
13:     Append attacking arguments cons to explanation
14: end if
15: Return explanation
16: ────────────────────────────── ▹ Line separator
17: Function GenerateExplanation(father, child, TH, fw)
18: if fw.isProArgumentOf(father, child) then
19:     Append child to pros if child.baseScore ≥ TH
20: else if fw.isConArgumentOf(father, child) then
21:     Append child to cons if child.baseScore ≥ TH
22: end if
23: if fw.children(child) is not empty then
24:     for each arg ∈ fw.children(child) do
25:         Call GenerateExplanation(child, arg, TH, fw)
26:     end for
27: end if
In particular, the explanations follow a structured format. Each explanation contains the following: (i) the final decision (ACCEPTED or REJECTED), (ii) supporting arguments that justify acceptance, and (iii) opposing arguments that justify rejection. Regarding the completeness of the explanation generation process, the algorithm always produces an explanation, as the acceptance decision is determined based on the predefined threshold (TH). Even in cases where there are no supporting or opposing arguments, the explanation will still contain a decision based on the acceptance score of the answer argument.
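As a complement to the pseudocode, the following is a minimal Python transcription of Algorithm 1. The framework interface (children_of, is_pro_argument_of, is_con_argument_of) and the base_score attribute mirror the pseudocode's accessors and are assumed names, not an existing API.

TH = 0.5  # acceptance threshold

def generate_explanation_tree(arg, fw):
    """Return (decision, justifying argument contents) for an answer argument."""
    pros, cons = [], []

    def visit(father, child):
        # Classify the child with respect to its parent (lines 17-22).
        if fw.is_pro_argument_of(father, child) and child.base_score >= TH:
            pros.append(child)
        elif fw.is_con_argument_of(father, child) and child.base_score >= TH:
            cons.append(child)
        # Recursive propagation over the child's own children (lines 23-26).
        for grandchild in fw.children_of(child):
            visit(child, grandchild)

    for child in fw.children_of(arg):
        visit(arg, child)

    # Decision and structured justification (lines 8-14).
    if arg.base_score >= TH:
        return ("ACCEPTED", [p.content for p in pros])
    return ("REJECTED", [c.content for c in cons])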
Regarding the computational cost of Algorithm 1, it depends on the size of the argumentation graph processed within the QuAD-V framework. The algorithm follows a depth-first search (DFS) approach, recursively traversing the argumentation structure. Since a QuAD-V framework is represented by an acyclic graph, arguments relate to each other in a directed and non-cyclic manner, preventing infinite loops and redundant computations. In particular, during argument strength computation, each argument (vertex) and its corresponding relations (edges) are visited once, leading to a time complexity of O(n + m), where n is the number of arguments and m is the number of argument relations. This ensures that explanation generation remains efficient even for large argumentation graphs, as it scales linearly with the structure size. Unlike general argumentation frameworks that may require iterative convergence or cycle detection, QuAD-V maintains predictable performance due to its tree-like organization.
We highlight that in scenarios where multiple tasks are executed concurrently, performance degradation is mitigated by the independent processing of argumentation graphs per delegation instance. Consequently, different delegators can evaluate distinct delegatees in parallel, as the voting processes applied to QuAD-V framework instances for different partners are independent of each other.

4. Case Study

The issue addressed in this paper focuses on a multi-modal logistics network for petroleum-based products [41,42]. The scenario is represented as a graph G(B, E), where B denotes a finite set of bases b, and E represents a finite set of connections e. Each connection e ∈ E links a pair of bases (b_i, b_j), where b_i, b_j ∈ B correspond to the origin and destination bases, respectively. Moreover, a route is defined as a finite sequence of bases connected by edges, used to transfer a quantity of product from a source base b_i to a destination base b_j.

4.1. Network Structure

Our experiments were conducted using a graph structure consisting of 10 bases (agents). As presented in Figure 2, a base can assume the role of root, inner, or terminator. A root base can only send products to others, an inner base can receive and send products, and a terminator base can only receive products. In particular, the simulation involves transporting a quantity of product from a root agent (origin base) to a terminator agent (destination base). Additionally, each base stores a finite amount of product (current stock). Moving a quantity of product between bases incurs storage and transportation costs, as well as time consumption. The cost of transporting a product from a base b_i (delegator) to a base b_j (delegatee) depends on the storage capacity and the storage cost of b_j, which is added to the transport cost (throughput cost). Both storage and throughput costs are defined per unit of product. In contrast, the transport time depends on the throughput of the edge e_ij, considering the total quantity of product to be transported between the bases.
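Under the cost model just described, the delegation cost and transport time for one edge can be sketched as follows; the parameter names are ours, and the exact composition of costs in the simulator may differ from this reading.

def transfer_cost(quantity: float, storage_cost_j: float, throughput_cost_ij: float) -> float:
    """Per-unit storage cost at the destination plus per-unit throughput cost."""
    return quantity * (storage_cost_j + throughput_cost_ij)

def transfer_time(quantity: float, throughput_ij: float) -> float:
    """Time grows with the quantity moved and shrinks with edge throughput."""
    return quantity / throughput_ij

# Example: moving 100 units over an edge with throughput 25 units per step,
# storage cost 0.2 and throughput cost 0.5 per unit.
print(transfer_cost(100, 0.2, 0.5))  # 70.0
print(transfer_time(100, 25))        # 4.0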
The delegation process starts at the root base and spreads through the network until it reaches the terminator base, signaling the end of an iteration. An iteration thus comprises the series of delegations required to transport a quantity of product from the root base to the terminator base; during an iteration, a delegator can perform several delegation actions, as its goal is to reduce the quantity of product in its stock to zero.

4.2. Task Delegation Dynamics

The delegation process begins when a delegator notifies its partners (neighboring bases) of its intention to send a specific quantity of product (task request). Upon receiving this notification, each partner responds with an offer, including their performance estimations for the task (offer request phase). These estimations detail the expected storage capacity, transportation cost, and time required to execute the task. If a partner lacks available storage or is already engaged in another task, it rejects the request and does not send an offer.
After receiving the offers, the delegator selects its delegatees based on the acceptance score in each partner’s QuAD-V framework (partner selection phase). Once a delegatee completes the task, the delegator provides a vote and an impression based on the delegatee’s performance (task execution phase). This impression is shared with the delegatee (reference) and other delegators who may interact with the delegatee in the future, contributing to the delegatee’s reputation.

4.3. Failure Likelihood

We assume that a delegatee may fail at the beginning of the execution phase of the task delegation process. The likelihood of failure is determined by its failure probability (Figure 2). A higher failure probability increases the risk of failure. When a failure occurs, one of the following events may take place:
  • Complete failure (80% chance): The delegatee entirely fails to execute the task, preventing the delegator from sending the product and achieving its goal. Consequently, the delegator must initiate a new task request (offer request phase). Each complete failure decreases the delegatee’s success rate and results in an impression where all task criteria receive a score of zero.
  • Partial performance deviation (20% chance): The delegatee completes the task but underperforms. The actual stored quantity ranges from 50% to 100% of the expected amount, while the completion time may be up to four times longer. The cost is adjusted proportionally based on the actual time, using a uniform random variable in the range [0, 1].
Therefore, a delegatee may not fully meet the performance estimates associated with its offer. However, this is not classified as a complete failure, as the delegator can still achieve its goal, either fully or partially, even if a lower quantity of the product is sent, costs are higher, or delivery is delayed. In such cases, instead of a complete failure impression, the delegator generates a negative impression reflecting the delegatee’s competencies.
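These failure dynamics can be simulated with a few random draws, as in the sketch below. The outcome structure is illustrative, and the proportional cost adjustment is one reading of the description above; the branch probabilities and deviation ranges follow the text.

import random

def execute_task(fail_prob: float, quantity: float, time_est: float, cost_est: float):
    """Simulate one task execution; returns None on complete failure."""
    if random.random() < fail_prob:           # the delegatee fails this round
        if random.random() < 0.8:             # complete failure (80% of failures)
            return None
        # Partial deviation (20% of failures): store 50-100% of the quantity,
        # take up to 4x the estimated time, and adjust cost proportionally.
        stored = quantity * random.uniform(0.5, 1.0)
        time_actual = time_est * random.uniform(1.0, 4.0)
        cost_actual = cost_est * (time_actual / time_est)
        return {"quantity": stored, "time": time_actual, "cost": cost_actual}
    # Nominal execution: the performance estimates are met exactly.
    return {"quantity": quantity, "time": time_est, "cost": cost_est}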

5. Experimental Results

Our simulation consisted of 100 iterations. Each iteration represents a series of delegations required to transport a quantity of product from the root base to the terminator base. The network shown in Figure 2 is used as input for the simulation, configuring the initial states of each base at the beginning of each iteration. However, the votes cast by delegators, along with their impressions, are retained across iterations, serving as a memory of their partners’ behaviors.

5.1. Partner Selection Policy

In our evaluation, partner selection is performed using a variation of the ϵ-greedy algorithm [43], in which the parameter ϵ depends on the number of times a delegator α has delegated a task τ to another agent, denoted |(α, τ)|. This policy meets our needs for an exploitation/exploration mechanism without the additional complexity required by more sophisticated policies, such as UCB1 [44] and Thompson sampling [45]. The value assigned to ϵ is calculated as follows:

$$\epsilon = \frac{1}{|(\alpha, \tau)| + 1}$$

where the probability of exploration is higher as ϵ approaches 1; in this case, a delegator tends to select a partner randomly. Otherwise, α exploits its known options by selecting partners based on their acceptance scores: the higher a partner's acceptance score, the greater the likelihood of it being chosen as α's delegatee.
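The policy can be sketched as follows, assuming each candidate partner exposes its current acceptance score; weighting the exploitation draw by acceptance score is our reading of "the greater the likelihood of it being chosen", not a detail fixed by the text.

import random

def select_partner(partners: list, acceptance: dict, n_delegations: int):
    """Epsilon-greedy partner selection with epsilon = 1 / (n + 1)."""
    epsilon = 1.0 / (n_delegations + 1)   # decays as the delegator gains experience
    if random.random() < epsilon:
        return random.choice(partners)    # explore: try an arbitrary partner
    # Exploit: pick a partner with probability proportional to its acceptance score.
    weights = [acceptance[p] for p in partners]
    if sum(weights) == 0:
        return random.choice(partners)
    return random.choices(partners, weights=weights, k=1)[0]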

5.2. Results

In Figure 3, the results obtained from the simulation are presented. Note that the success rate, rejection rate, competence score, and the acceptance score over iterations are shown for each inner base (i.e., bases able to act as partners). Bases with a higher probability of failure are more likely to be unable to complete a task or fail to meet performance estimations, which negatively affects their success rate and competence score, making them less reliable as delegatees. Moreover, as can be seen in Figure 3, in the first iterations (generally up to around iteration 25), the delegators tend to explore their partner options in order to find the most suitable delegatee. These initial iterations represent a learning period during which delegators collect information about their partners as they interact with others (i.e., producing their own impressions through direct experiences or receiving opinions from third parties). After the learning phase, delegators tend to exploit the best partner choices, resulting in more stable partner behavior, as the best choices have already been discovered, and the probability of random selections decreases with each new iteration.
As expected, the best route identified in our simulation prioritized bases with lower failure likelihoods (Figure 2), minimizing the risks associated with failed delegations. In particular, a base b_i is part of a transportation route if, at some point during the iteration, it was used to temporarily store a quantity of the product. The results confirm this behavior through the analysis of the acceptance score curve (red line). The agents included in the best route demonstrate a balanced combination of success rate, competence score, and rejection rate, making them the most suitable choices for delegation. In particular, when considering the rejection rate to select a partner, a delegator considers not only the partner's ability to complete a task or meet its expected performance but also its availability, which reflects how often a base has refused a task due to being occupied or lacking the capacity to store the received product.
Another crucial aspect shown in Figure 3 is the threshold values (Th_A and Th_B), which determine how delegators vote on partners' arguments in the QuAD-V framework, affecting their acceptance scores. Th_A evaluates a partner's effectiveness and competence: to be positively rated, a partner must have values for success rate and competence score greater than or equal to 0.7. This threshold ensures that only partners demonstrating at least 70% of the maximum performance are positively rated, favoring those with a strong history of successful task execution and high competence. Th_B evaluates availability: a partner is considered available if its rejection rate is below 0.3. Since the rejection rate ranges from 0 to 1, this threshold allows for occasional rejections due to workload constraints while ensuring that partners remain sufficiently accessible to be considered reliable. These thresholds directly influence the acceptance score by setting criteria for positive evaluations. Bases that meet these conditions receive higher scores and are more likely to be selected as delegatees. The success rate, rejection rate, competence score, and acceptance score reflect a partner's overall performance at the end of each iteration. The acceptance score of an answer argument, however, is computed by the delegator during partner selection.
Finally, we present in Table 1 the average values of the performance metrics for each inner base, considering a window of iterations from 25 to 100. This window was selected because, around iteration 25, delegators tend to exploit known partners with acceptable and well-established behaviors rather than explore unknown or less familiar ones, leading to system stabilization. Note that Table 1 confirms the performance of the bases that form the best route (b2, b3, b4, b6, b8). These bases exhibit a good balance across evaluation metrics, demonstrating a favorable trade-off between success rate, competence score, rejection rate, and acceptance score. This suggests that such bases are consistently chosen as delegatees due to their reliability and overall effectiveness in task execution.

5.3. Explanations

Figure 4 and Figure 5 show the explanations produced to justify the behavior of one of the bases on the best route, base 3 (b3), during iteration 25 (i.e., a local explanation), and the choice of the best transportation route at the end of the simulation (i.e., a global explanation), respectively. We highlight that the process used to produce these explanations (described in Algorithm 1) can be adopted to justify the choice of a partner for any delegator. Moreover, note that by adapting the input arguments (answer argument), the same process can be applied to different decision-making contexts, such as selecting a delegatee or evaluating transportation routes.
For instance, a similar QuAD-V framework, as presented in Figure 1, can be used to evaluate transportation routes. By averaging the base scores of each argument across all frameworks of the bases involved in the route, it is possible to compute the mean base score for each argument. Using these computed scores as input, Algorithm 1 determines the acceptance score of the route. The flexibility of this approach allows for argument substitution while maintaining the same reasoning process. For example, the argument ‘The partner has a high success rate, meaning it successfully completes most of its assigned tasks’ can be reformulated as ‘On average, the bases in the route have a high success rate, meaning they successfully complete most of their assigned tasks’. Consequently, the same explanation process that justifies the selection of a delegatee can also be applied to explain why certain routes are preferable over others.
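The route-level reuse of the framework can be sketched by averaging base scores argument-by-argument across the bases on the route and then running the same acceptance computation on the result. The helper below assumes, for illustration, that each base exposes its framework as a mapping from argument identifiers to base scores.

def route_framework(base_frameworks: list[dict[str, float]]) -> dict[str, float]:
    """Mean base score per argument across all bases on the route."""
    arg_ids = base_frameworks[0].keys()
    return {
        arg_id: sum(fw[arg_id] for fw in base_frameworks) / len(base_frameworks)
        for arg_id in arg_ids
    }

# Example: three bases on a route, each with base scores for arguments A1 and A21.
frameworks = [
    {"A1": 0.9, "A21": 0.8},
    {"A1": 0.7, "A21": 0.6},
    {"A1": 0.8, "A21": 0.7},
]
print(route_framework(frameworks))  # approximately {'A1': 0.80, 'A21': 0.70}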

6. Conclusions

In this work, we presented a task delegation model built upon a voting-based argumentation framework (QuAD-V framework). Using this model, agents (delegators) can select their partners (delegatees) for a given task by considering three evaluation dimensions: effectiveness, availability, and competencies. These dimensions are translated into an acceptability measure (acceptance score), which, in turn, summarizes the partner’s behavior as a delegatee. However, it is important to note that our model is flexible and can accommodate additional characteristics by incorporating new argumentation criteria into its structure, allowing it to be adapted to different delegation scenarios.
The main contribution of the proposed delegation model is its capability to generate explanations for the delegators’ choices. Due to the adoption of the QuAD-V framework, our model can justify the delegators’ deliberative actions by providing the reasons behind the selection of a certain partner. Furthermore, the modeling approach introduced in this work can be easily adapted to different contexts. As discussed in our case study, the framework was also employed to generate explanations justifying why one transportation route is better than others.
As future work, we intend to apply the proposed model to explain partner selection in dynamic environments where agents can enter and leave the system, as well as exhibit behavioral changes over time. Thus, our model could be used not only to justify the choice of a potential partner but also to explain the reasons behind behavioral changes. For instance, it would be possible to identify the reasons that lead a partner who previously demonstrated good behavior—such as meeting performance estimates and completing assigned tasks—to start failing in its estimations and in the execution of delegated tasks.
Additionally, we aim to explore our model to generate counterfactual explanations. Counterfactuals are particularly useful for understanding how different decisions or conditions could have led to alternative outcomes. In the context of task delegation, counterfactual explanations can help answer questions such as: “What if a different partner had been chosen?” or “What if a delegatee had a higher acceptance score?” Such elements contribute to transparency and trust in decision-making processes and are particularly valuable in scenarios where human agents collaborate with artificial agents.
To conclude, in this work, we assume that agents vote based on threshold values, which ensures interpretability and structured evaluation. However, in practical applications, voting motivations can be complex and context-dependent. To better capture this complexity, we propose three possible extensions to the current threshold-based approach: thresholds could be dynamically adjusted over time using machine learning techniques based on delegators’ experiences in evaluating their partners; different delegators may have distinct evaluation criteria, so thresholds could be personalized based on user profiles; and replacing strict thresholds with an interval-valued membership function would allow for smoother transitions between positive and negative votes. These extensions would enable the model to better accommodate diverse voting behaviors while preserving interpretability and structured evaluation.

Author Contributions

Conceptualization, J.J.B., M.M.-E. and C.A.T.; methodology, J.J.B., M.M.-E. and C.A.T.; software, J.J.B.; validation, J.J.B.; formal analysis, J.J.B. and C.A.T.; investigation, J.J.B., M.M.-E. and C.A.T.; resources, J.J.B.; data curation, J.J.B.; writing—original draft preparation, J.J.B.; writing—review and editing, J.J.B., M.M.-E. and C.A.T.; visualization, J.J.B.; supervision, C.A.T.; project administration, C.A.T.; funding acquisition, C.A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CNPq grant number 409523/2021-6.

Data Availability Statement

This article is a revised and expanded version of a paper entitled “Explaining Task Delegation Through Argumentation Debates with Votes” [42], which was presented at the Ibero-American Conference on Artificial Intelligence, Montevideo—Uruguay, 13–15 November 2024. Further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors would like to thank the CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) (process 409523/2021-6), which fully funded this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Griffiths, N. Task delegation using experience-based multi-dimensional trust. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, Utrecht, The Netherlands, 25–29 July 2005; pp. 489–496. [Google Scholar]
  2. Cantucci, F.; Falcone, R.; Castelfranchi, C. A Computational Model for Cognitive Human-Robot Interaction: An Approach Based on Theory of Delegation. In Proceedings of the WOA, Parma, Italy, 26–28 June 2019; pp. 127–133. [Google Scholar]
  3. Xing, L. Reliability in Internet of Things: Current status and future perspectives. IEEE Internet Things J. 2020, 7, 6704–6721. [Google Scholar] [CrossRef]
  4. Manavalan, E.; Jayakrishna, K. A review of Internet of Things (IoT) embedded sustainable supply chain for industry 4.0 requirements. Comput. Ind. Eng. 2019, 127, 925–953. [Google Scholar] [CrossRef]
  5. Cui, Y.; Idota, H.; Ota, M. Improving supply chain resilience with implementation of new system architecture. In Proceedings of the 2019 IEEE Social Implications of Technology (SIT) and Information Management (SITIM), Matsuyama, Japan, 9–10 November 2019; pp. 1–6. [Google Scholar]
  6. Yliniemi, L.; Agogino, A.K.; Tumer, K. Multirobot coordination for space exploration. AI Mag. 2014, 35, 61–74. [Google Scholar] [CrossRef]
  7. Sabater, J.; Sierra, C. REGRET: Reputation in gregarious societies. In Proceedings of the Fifth International Conference on Autonomous Agents, Montreal, QC, Canada, 28 May–1 June 2001; pp. 194–195. [Google Scholar]
  8. Huynh, T.D.; Jennings, N.R.; Shadbolt, N. FIRE: An integrated trust and reputation model for open multi-agent systems. In Proceedings of the 16th European Conference on Artificial Intelligence (ECAI), Valencia, Spain, 22–27 August 2004; pp. 18–22. [Google Scholar]
  9. Castelfranchi, C.; Falcone, R. Trust Theory: A Socio-Cognitive and Computational Model; John Wiley & Sons: Hoboken, NJ, USA, 2010; Volume 18. [Google Scholar]
  10. Pinyol, I.; Sabater-Mir, J. Computational trust and reputation models for open multi-agent systems: A review. Artif. Intell. Rev. 2013, 40, 1–25. [Google Scholar] [CrossRef]
  11. Cho, J.H.; Chan, K.; Adali, S. A survey on trust modeling. ACM Comput. Surv. (CSUR) 2015, 48, 1–40. [Google Scholar] [CrossRef]
  12. Afanador, J.; Baptista, M.S.; Oren, N. Algorithms for recursive delegation. AI Commun. 2019, 32, 303–317. [Google Scholar] [CrossRef]
  13. Sabater, J.; Paolucci, M.; Conte, R. Repage: Reputation and image among limited autonomous partners. J. Artif. Soc. Soc. Simul. 2006, 9, 3. [Google Scholar]
  14. Castelfranchi, C.; Falcone, R. Trust: Perspectives in Cognitive Science. In The Routledge Handbook of Trust and Philosophy; Routledge: New York, NY, USA, 2020; pp. 214–228. [Google Scholar]
  15. Kosko, B. Fuzzy cognitive maps. Int. J. Man Mach. Stud. 1986, 24, 65–75. [Google Scholar] [CrossRef]
  16. Waltl, B.; Vogl, R. Increasing transparency in algorithmic-decision-making with explainable AI. Datenschutz Datensicherheit DuD 2018, 42, 613–617. [Google Scholar] [CrossRef]
  17. Gerdes, A. The role of explainability in AI-supported medical decision-making. Discov. Artif. Intell. 2024, 4, 29. [Google Scholar] [CrossRef]
  18. Atf, Z.; Lewis, P.R. Human centricity in the relationship between explainability and trust in AI. IEEE Technol. Soc. Mag. 2024, 42, 66–76. [Google Scholar] [CrossRef]
  19. Ribeiro, M.T.; Singh, S.; Guestrin, C. Why Should I Trust You? Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
  20. Miller, T. Explainable AI: A Review of the State of the Art. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Scotland, UK, 4–9 May 2019; pp. 1–11. [Google Scholar]
  21. Kirkpatrick, D. The Ethical and Legal Implications of Artificial Intelligence in Healthcare. J. Healthc. Ethics 2021, 5, 23–32. [Google Scholar]
  22. Liu, H.; Wang, Y.; Fan, W.; Liu, X.; Li, Y.; Jain, S.; Liu, Y.; Jain, A.; Tang, J. Trustworthy ai: A computational perspective. ACM Trans. Intell. Syst. Technol. 2022, 14, 1–59. [Google Scholar] [CrossRef]
  23. Binns, S. Artificial Intelligence in Critical Systems: Ethical Implications; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  24. Lin, P.; Calo, R. The Ethical Implications of Automated Decision-Making in Autonomous Vehicles. J. Auton. Transp. 2019, 12, 87–102. [Google Scholar]
  25. Evans, K.; de Moura, N.; Chauvier, S.; Chatila, R.; Dogan, E. Ethical decision making in autonomous vehicles: The AV ethics project. Sci. Eng. Ethics 2020, 26, 3285–3312. [Google Scholar] [CrossRef] [PubMed]
  26. Čyras, K.; Rago, A.; Albini, E.; Baroni, P.; Toni, F. Argumentative XAI: A survey. arXiv 2021, arXiv:2105.11266. [Google Scholar]
  27. Dung, P.M. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 1995, 77, 321–357. [Google Scholar] [CrossRef]
  28. Bondarenko, A.; Dung, P.M.; Kowalski, R.A.; Toni, F. An abstract, argumentation-theoretic approach to default reasoning. Artif. Intell. 1997, 93, 63–101. [Google Scholar] [CrossRef]
  29. Modgil, S.; Prakken, H. The ASPIC+ framework for structured argumentation: A tutorial. Argum. Comput. 2014, 5, 31–62. [Google Scholar] [CrossRef]
  30. García, A.J.; Chesñevar, C.I.; Rotstein, N.D.; Simari, G.R. Formalizing dialectical explanation support for argument-based reasoning in knowledge-based systems. Expert Syst. Appl. 2013, 40, 3233–3247. [Google Scholar] [CrossRef]
  31. Rago, A.; Toni, F. Quantitative argumentation debates with votes for opinion polling. In Proceedings of the International Conference on Principles and Practice of Multi-Agent Systems, Nice, France, 30 October–3 November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 369–385. [Google Scholar]
  32. Cayrol, C.; Lagasquie-Schiex, M.C. On the acceptability of arguments in bipolar argumentation frameworks. In Proceedings of the European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, Barcelona, Spain, 6–8 July 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 378–389. [Google Scholar]
  33. Conte, R.; Paolucci, M. Social cognitive factors of unfair ratings in reputation reporting systems. In Proceedings of the IEEE/WIC International Conference on Web Intelligence (WI 2003), Halifax, NS, Canada, 13–17 October 2003; pp. 316–322. [Google Scholar]
  34. Cantucci, F.; Falcone, R.; Castelfranchi, C. Robot’s self-trust as precondition for being a good collaborator. In Proceedings of the TRUST@ AAMAS, London, UK, 3–7 May 2021. [Google Scholar]
  35. Conte, R.; Paolucci, M. Reputation in Artificial Societies: Social Beliefs for Social Order; Kluwer Academic Publishers: Norwell, MA, USA, 2002; Volume 6. [Google Scholar]
  36. Baroni, P.; Romano, M.; Toni, F.; Aurisicchio, M.; Bertanza, G. Automatic evaluation of design alternatives with quantitative argumentation. Argum. Comput. 2015, 6, 24–49. [Google Scholar] [CrossRef]
  37. Rago, A.; Toni, F.; Aurisicchio, M.; Baroni, P. Discontinuity-free decision support with quantitative argumentation debates. In Proceedings of the Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning, Cape Town, South Africa, 25–29 April 2016. [Google Scholar]
  38. Ashtiani, M.; Azgomi, M.A. Contextuality, incompatibility and biased inference in a quantum-like formulation of computational trust. Adv. Complex Syst. 2014, 17, 1450020. [Google Scholar] [CrossRef]
  39. Botelho, V.; Kredens, K.V.; Martins, J.V.; Ávila, B.C.; Scalabrin, E.E. Dossier: Decentralized trust model towards a decentralized demand. In Proceedings of the 2018 IEEE 22nd International Conference on Computer Supported Cooperative Work in Design (CSCWD), Nanjing, China, 9–11 May 2018; pp. 371–376. [Google Scholar]
  40. Buccafurri, F.; Comi, A.; Lax, G.; Rosaci, D. Experimenting with certified reputation in a competitive multi-agent scenario. IEEE Intell. Syst. 2015, 31, 48–55. [Google Scholar] [CrossRef]
  41. Banaszewski, R.F.; Arruda, L.V.; Simão, J.M.; Tacla, C.A.; Barbosa-Póvoa, A.P.; Relvas, S. An application of a multi-agent auction-based protocol to the tactical planning of oil product transport in the Brazilian multimodal network. Comput. Chem. Eng. 2013, 59, 17–32. [Google Scholar] [CrossRef]
  42. Baqueta, J.J.; Tacla, C.A. Explaining Task Delegation Through Argumentation Debates with Votes. In Proceedings of the Ibero-American Conference on Artificial Intelligence, Montevideo, Uruguay, 13–15 November 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 372–383. [Google Scholar]
  43. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  44. Garivier, A.; Moulines, E. On upper-confidence bound policies for switching bandit problems. In Proceedings of the International Conference on Algorithmic Learning Theory, Espoo, Finland, 5–7 October 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 174–188. [Google Scholar]
  45. Chapelle, O.; Li, L. An empirical evaluation of thompson sampling. Adv. Neural Inf. Process. Syst. 2011, 24, 2249–2257. [Google Scholar]
Figure 1. The proposed delegation model and its components. A vote is cast in favor or against an argument depending on whether the associated evaluation measure meets or exceeds a threshold value (Th).
Figure 2. Transportation bases with their initial states, including maximum capacity, current stock, storage cost per unit, failure likelihood, throughput, and throughput cost for each connection between origin and destination bases.
Figure 3. Performance evaluation of inner bases acting over 100 iterations, considering success rate, rejection rate, competence score, and acceptance score. Th_A: threshold for voting on arguments related to the effectiveness and competence dimensions, including the partners' success rate and competence score. Th_B: threshold for voting on arguments related to availability, including the partners' rejection rate. (*) Indicates bases that are part of the best route, specifically, b0 (root), b2, b3, b4, b6, b8, and b9 (terminator).
Figure 4. Local explanation of base b3's behavior as a delegatee in iteration 25.
Figure 5. Global explanation of the choice of the best route (b0 (root), b2, b3, b4, b6, b8, and b9 (terminator)) produced during iteration 100.
Table 1. Average values of the performance metrics for each inner base from iterations 25 to 100.

Base | Success Rate | Rejection Rate | Competence Score | Acceptance Score
b1   | 0.32         | 0.03           | 0.69             | 0.49
b2   | 0.75         | 0.07           | 0.89             | 0.82
b3   | 1.00         | 0.29           | 0.99             | 0.82
b4   | 0.75         | 0.27           | 0.88             | 0.63
b5   | 0.27         | 0.20           | 0.71             | 0.61
b6   | 1.00         | 0.30           | 0.99             | 0.65
b7   | 0.43         | 0.08           | 0.74             | 0.65
b8   | 1.00         | 0.17           | 0.99             | 1.00