Transparent Task Delegation in Multi-Agent Systems Using the QuAD-V Framework
Abstract
1. Introduction
2. Background
2.1. Task Delegation
- Offer request: A delegator notifies its partners of its intention to delegate a task τ (i.e., a task request). After receiving such a notification, partners send their offers to the delegator, providing their performance estimations for τ. These estimates are made against the task criteria (e.g., the cost and time required to perform τ).
- Partner selection: After receiving offers from its partners, the delegator selects its delegatees. This selection involves evaluating each partner with respect to its availability, its effectiveness in performing τ, and its accuracy in meeting its own estimates (competence).
- Task execution: Delegatees execute tasks, while delegators evaluate their performance. A delegator assesses a delegatee β based on the outcome produced for task τ. This outcome, a tuple similar to an offer, reflects β’s actual performance. If β successfully delivers an outcome, τ is complete or partially complete, depending on how accurately β met its performance estimates. Otherwise, β fails to perform τ.
- α is the agent who requests the offer (delegator);
- β is the agent who produced the offer (partner);
- τ is the task that α intends to delegate to β if β is selected as its delegatee;
- the remaining element is a vector of performance estimations made by β, where each value is β’s estimate for one of τ’s criteria.
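Under the definitions above, an offer can be sketched as a small record; the field and criterion names below are illustrative, not the paper's notation.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class Offer:
    """An offer produced by a partner in response to a task request."""
    delegator: str               # alpha: the agent requesting the offer
    partner: str                 # beta: the agent producing the offer
    task: str                    # tau: the task alpha intends to delegate
    estimates: Dict[str, float]  # beta's estimate per task criterion

# Illustrative criteria: cost and time, as in the example above.
offer = Offer("alpha", "beta", "store_product", {"cost": 120.0, "time": 8.0})
```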
2.2. Impressions
- α is the delegator who formed the impression.
- β is the assessed delegatee.
- τ is the task performed by β.
- t is the time in which the impression was created by α.
- the final element is a vector of ratings, where each pair represents a score assigned to β by α concerning its performance on a criterion of τ.
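The impression tuple can likewise be sketched as a record (field names again illustrative):

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class Impression:
    """An impression formed by a delegator about a delegatee's performance."""
    delegator: str             # alpha: who formed the impression
    delegatee: str             # beta: the assessed agent
    task: str                  # tau: the task performed by beta
    time: int                  # t: when the impression was created
    ratings: Dict[str, float]  # score per task criterion, assumed in [0, 1]

imp = Impression("alpha", "beta", "store_product", time=3,
                 ratings={"cost": 0.9, "time": 0.6})
```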
2.3. QuAD-V Framework
- A is a finite set of answer arguments (possible responses to a question).
- C is a finite set of con arguments (arguments opposing another argument).
- P is a finite set of pro arguments (arguments supporting another argument).
- R is an acyclic binary relation over the arguments in A ∪ C ∪ P.
- U is a finite set of users.
- V is a total function assigning votes, where each user in U casts a vote on each argument.
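A minimal sketch of how user votes can induce argument strengths in such a framework. The base score and combination function below follow a DF-QuAD-style aggregation, used here as an illustrative choice rather than the paper's exact definitions.

```python
def base_score(votes):
    """Base score of an argument from user votes in {-1, 0, +1}:
    maps the average vote into [0, 1]; 0.5 when there are no votes."""
    if not votes:
        return 0.5
    return 0.5 + sum(votes) / (2 * len(votes))

def combined(strengths):
    """Aggregate child strengths with the probabilistic sum 1 - prod(1 - v)."""
    out = 1.0
    for v in strengths:
        out *= (1.0 - v)
    return 1.0 - out

def strength(bs, pro, con):
    """Combine a base score with aggregated pro/con child strengths:
    con arguments pull the score down, pro arguments push it up."""
    p, c = combined(pro), combined(con)
    if c >= p:
        return bs - bs * (c - p)
    return bs + (1.0 - bs) * (p - c)
```

For example, an argument with a neutral base score of 0.5 and a single accepted pro child of strength 0.8 ends up with strength 0.9.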
3. Delegation Model
3.1. Evaluation Metrics
- Availability (associated with arguments from class 2*): This dimension reflects a partner’s capability to accept task requests from delegators. (The asterisk (*) indicates that all arguments in a class have identifiers starting with a specific number. For example, arguments 21 and 22 belong to the Availability class, while arguments 31, 32, 321, and 322 belong to the Effectiveness class.) As presented in Figure 1, a delegator votes for or against this class of arguments based on the partner’s rejection rate, i.e., the frequency with which the partner has rejected tasks requested by its delegators over time.
- Effectiveness (associated with arguments of class 3*): This dimension estimates how often a partner successfully completes its tasks. A delegator votes for or against the arguments of this class based on the following measures:
- Success rate: This measure indicates the frequency with which a partner successfully completes a task, allowing its delegator to achieve its goal.
- Success confidence: This measure indicates the reliability of a partner’s success rate from the delegator’s perspective, taking into account the number of interactions between them. The success confidence tends to increase as the number of interactions between delegator and partner grows [38].
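The measures above can be computed from a delegator's interaction history. The saturating form used for the success confidence below (n / (n + k)) is an illustrative choice that grows toward 1 with the number of interactions, not necessarily the formula used in the paper.

```python
def rejection_rate(requests, rejections):
    """Fraction of the delegator's task requests the partner has rejected."""
    return rejections / requests if requests else 0.0

def success_rate(completed, attempted):
    """Fraction of delegated tasks the partner completed successfully."""
    return completed / attempted if attempted else 0.0

def success_confidence(n_interactions, k=10):
    """Reliability of the observed success rate: approaches 1 as the
    number of interactions grows. k controls how fast it saturates."""
    return n_interactions / (n_interactions + k)
```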
- Competence (associated with arguments from class 4*): This dimension evaluates whether a partner is capable of meeting the delegator’s expectations regarding task execution, considering the partner’s performance estimations. A delegator votes for or against arguments related to competence based on the following measures:
- Competence score: This measure provides an up-to-date value of the partner’s ability to fulfill its performance estimations. It is computed as the simple average of the scores associated with the most recent impression produced by the delegator regarding the delegatee during the task-execution phase of the delegation process.
- Social image: A social measure calculated by aggregating the set of impressions produced by a delegator regarding the competencies of a partner as the delegatee of a given task. (To aggregate a set of impressions, a delegator first averages the impressions for each task criterion, forming a single aggregated impression; the final value is then obtained by averaging the scores across all criteria [13,35].) The social image can be seen as the delegator’s personal opinion about the partner’s competencies with respect to that task [10].
- Reputation: A delegator calculates a partner’s reputation regarding a task by aggregating impressions obtained from third parties (other delegators). Reputation therefore represents a collective evaluation shared within a group, where most members agree without necessarily verifying the truth or sources of the impressions [10,35]. In our case, the group consists of delegators who assess a common delegatee based on the execution of a specific task.
- Know-how: A specialized form of reputation in which delegators share impressions with their delegatees [8]. As a partner interacts with delegators while executing a task, it accumulates impressions on its competencies. These impressions, similar to job references, can be sent to a delegator during partner selection [39,40]. By aggregating them, the delegator computes a social measure of the partner’s know-how.
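The aggregation scheme described above (average per criterion first, then across criteria) underlies the social image, reputation, and know-how measures; it can be sketched as:

```python
def aggregate_impressions(impressions):
    """Aggregate impressions (each a dict criterion -> score in [0, 1]):
    first average per criterion to form a single aggregated impression,
    then average the aggregated scores across all criteria."""
    if not impressions:
        return 0.0
    criteria = impressions[0].keys()
    per_criterion = {c: sum(imp[c] for imp in impressions) / len(impressions)
                     for c in criteria}
    return sum(per_criterion.values()) / len(per_criterion)
```

Applied to a delegator's own impressions this yields the social image; applied to third-party impressions, the reputation; applied to impressions carried by the delegatee itself, the know-how measure.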
3.2. Explanation Generation Process
- Initialization (lines 1–4): The algorithm initializes the necessary variables, including the sets of supporting and opposing arguments, and sets the acceptance-threshold value. Then, the children of the answer argument are retrieved from the QuAD-V framework.
- Processing supporting and opposing arguments (lines 5–7): For each child argument, the algorithm calls the GenerateExplanation function, which determines whether the child supports or opposes the answer argument. If the child is a supporting argument with an acceptance score above the threshold, it is added to the set of supporting arguments; if it is an opposing argument with an acceptance score above the threshold, it is added to the set of opposing arguments.
- Constructing the explanation (lines 8–14): After evaluating all supporting and opposing arguments, the algorithm constructs the explanation, which consists of the decision (accepted or rejected) followed by a structured justification. If the answer argument’s score meets or exceeds the threshold, the explanation states its acceptance and includes the supporting arguments that reinforce this decision. Otherwise, the argument is rejected, and the opposing arguments are included to justify the rejection.
- Recursive propagation (lines 17–25): The algorithm recursively processes the children of each argument, ensuring that all relevant arguments are considered when forming the explanation.
Algorithm 1: Generating Explanations Based on the QuAD-V Framework
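The steps above can be sketched as follows. The argument representation, field names, and the dictionary shape of the result are assumptions for illustration, not the paper's exact listing.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Arg:
    """An argument with an acceptance score and its pro/con children."""
    name: str
    score: float                 # acceptance score in [0, 1]
    pros: List["Arg"] = field(default_factory=list)
    cons: List["Arg"] = field(default_factory=list)

def generate_explanation(arg, threshold=0.5):
    """Build an explanation for `arg`: the decision (accepted/rejected)
    plus the accepted pro (or con) children that justify it, recursively."""
    pros = [c for c in arg.pros if c.score >= threshold]
    cons = [c for c in arg.cons if c.score >= threshold]
    if arg.score >= threshold:
        decision = f"{arg.name} is accepted"
        because = [generate_explanation(c, threshold) for c in pros]
    else:
        decision = f"{arg.name} is rejected"
        because = [generate_explanation(c, threshold) for c in cons]
    return {"decision": decision, "because": because}

answer = Arg("select beta", 0.8,
             pros=[Arg("high success rate", 0.9)],
             cons=[Arg("high rejection rate", 0.3)])
explanation = generate_explanation(answer)
```

Here the answer argument is accepted, so the explanation carries only the accepted supporting argument; the weak con argument (score 0.3) falls below the threshold and is omitted.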
4. Case Study
4.1. Network Structure
4.2. Task Delegation Dynamics
4.3. Failure Likelihood
- Complete failure (80% chance): The delegatee entirely fails to execute the task, preventing the delegator from sending the product and achieving its goal. Consequently, the delegator must initiate a new task request (offer request phase). Each complete failure decreases the delegatee’s success rate and results in an impression where all task criteria receive a score of zero.
- Partial performance deviation (20% chance): The delegatee completes the task but underperforms. The actual stored quantity ranges from 50% to 100% of the expected amount, while the completion time may be up to four times longer than estimated. The cost is adjusted proportionally to the actual time, using a uniform random variable.
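The two failure modes can be sampled as below. The uniform ranges for the time factor and the proportional cost scaling are illustrative assumptions (the exact range of the random variable is not specified above).

```python
import random

def execute_task(expected, rng=random):
    """Sample a task outcome given expected {quantity, time, cost}.

    80% of the time: complete failure, returned as None (the delegator
    must issue a new task request, and all criteria are scored zero).
    20% of the time: partial deviation, with the stored quantity in
    [50%, 100%] of the expected amount, the time up to 4x the estimate,
    and the cost scaled in proportion to the actual time (assumption).
    """
    if rng.random() < 0.8:
        return None  # complete failure
    time_factor = rng.uniform(1.0, 4.0)
    return {
        "quantity": expected["quantity"] * rng.uniform(0.5, 1.0),
        "time": expected["time"] * time_factor,
        "cost": expected["cost"] * time_factor,
    }
```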
5. Experimental Results
5.1. Partner Selection Policy
5.2. Results
5.3. Explanations
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Griffiths, N. Task delegation using experience-based multi-dimensional trust. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, Utrecht, The Netherlands, 25–29 July 2005; pp. 489–496. [Google Scholar]
- Cantucci, F.; Falcone, R.; Castelfranchi, C. A Computational Model for Cognitive Human-Robot Interaction: An Approach Based on Theory of Delegation. In Proceedings of the WOA, Parma, Italy, 26–28 June 2019; pp. 127–133. [Google Scholar]
- Xing, L. Reliability in Internet of Things: Current status and future perspectives. IEEE Internet Things J. 2020, 7, 6704–6721. [Google Scholar] [CrossRef]
- Manavalan, E.; Jayakrishna, K. A review of Internet of Things (IoT) embedded sustainable supply chain for industry 4.0 requirements. Comput. Ind. Eng. 2019, 127, 925–953. [Google Scholar] [CrossRef]
- Cui, Y.; Idota, H.; Ota, M. Improving supply chain resilience with implementation of new system architecture. In Proceedings of the 2019 IEEE Social Implications of Technology (SIT) and Information Management (SITIM), Matsuyama, Japan, 9–10 November 2019; pp. 1–6. [Google Scholar]
- Yliniemi, L.; Agogino, A.K.; Tumer, K. Multirobot coordination for space exploration. AI Mag. 2014, 35, 61–74. [Google Scholar] [CrossRef]
- Sabater, J.; Sierra, C. REGRET: Reputation in gregarious societies. In Proceedings of the Fifth International Conference on Autonomous Agents, Montreal, QC, Canada, 28 May–1 June 2001; pp. 194–195. [Google Scholar]
- Huynh, T.D.; Jennings, N.R.; Shadbolt, N. FIRE: An integrated trust and reputation model for open multi-agent systems. In Proceedings of the 16th European Conference on Artificial Intelligence (ECAI), Valencia, Spain, 22–27 August 2004; pp. 18–22. [Google Scholar]
- Castelfranchi, C.; Falcone, R. Trust Theory: A Socio-Cognitive and Computational Model; John Wiley & Sons: Hoboken, NJ, USA, 2010; Volume 18. [Google Scholar]
- Pinyol, I.; Sabater-Mir, J. Computational trust and reputation models for open multi-agent systems: A review. Artif. Intell. Rev. 2013, 40, 1–25. [Google Scholar] [CrossRef]
- Cho, J.H.; Chan, K.; Adali, S. A survey on trust modeling. ACM Comput. Surv. (CSUR) 2015, 48, 1–40. [Google Scholar] [CrossRef]
- Afanador, J.; Baptista, M.S.; Oren, N. Algorithms for recursive delegation. AI Commun. 2019, 32, 303–317. [Google Scholar] [CrossRef]
- Sabater, J.; Paolucci, M.; Conte, R. Repage: Reputation and image among limited autonomous partners. J. Artif. Soc. Soc. Simul. 2006, 9, 3. [Google Scholar]
- Castelfranchi, C.; Falcone, R. Trust: Perspectives in Cognitive Science. In The Routledge Handbook of Trust and Philosophy; Routledge: New York, NY, USA, 2020; pp. 214–228. [Google Scholar]
- Kosko, B. Fuzzy cognitive maps. Int. J. Man Mach. Stud. 1986, 24, 65–75. [Google Scholar] [CrossRef]
- Waltl, B.; Vogl, R. Increasing transparency in algorithmic-decision-making with explainable AI. Datenschutz Datensicherheit DuD 2018, 42, 613–617. [Google Scholar] [CrossRef]
- Gerdes, A. The role of explainability in AI-supported medical decision-making. Discov. Artif. Intell. 2024, 4, 29. [Google Scholar] [CrossRef]
- Atf, Z.; Lewis, P.R. Human centricity in the relationship between explainability and trust in AI. IEEE Technol. Soc. Mag. 2024, 42, 66–76. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Why Should I Trust You? Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
- Miller, T. Explainable AI: A Review of the State of the Art. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. 1–11. [Google Scholar]
- Kirkpatrick, D. The Ethical and Legal Implications of Artificial Intelligence in Healthcare. J. Healthc. Ethics 2021, 5, 23–32. [Google Scholar]
- Liu, H.; Wang, Y.; Fan, W.; Liu, X.; Li, Y.; Jain, S.; Liu, Y.; Jain, A.; Tang, J. Trustworthy AI: A computational perspective. ACM Trans. Intell. Syst. Technol. 2022, 14, 1–59. [Google Scholar] [CrossRef]
- Binns, S. Artificial Intelligence in Critical Systems: Ethical Implications; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
- Lin, P.; Calo, R. The Ethical Implications of Automated Decision-Making in Autonomous Vehicles. J. Auton. Transp. 2019, 12, 87–102. [Google Scholar]
- Evans, K.; de Moura, N.; Chauvier, S.; Chatila, R.; Dogan, E. Ethical decision making in autonomous vehicles: The AV ethics project. Sci. Eng. Ethics 2020, 26, 3285–3312. [Google Scholar] [CrossRef] [PubMed]
- Čyras, K.; Rago, A.; Albini, E.; Baroni, P.; Toni, F. Argumentative XAI: A survey. arXiv 2021, arXiv:2105.11266. [Google Scholar]
- Dung, P.M. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 1995, 77, 321–357. [Google Scholar] [CrossRef]
- Bondarenko, A.; Dung, P.M.; Kowalski, R.A.; Toni, F. An abstract, argumentation-theoretic approach to default reasoning. Artif. Intell. 1997, 93, 63–101. [Google Scholar] [CrossRef]
- Modgil, S.; Prakken, H. The ASPIC+ framework for structured argumentation: A tutorial. Argum. Comput. 2014, 5, 31–62. [Google Scholar] [CrossRef]
- García, A.J.; Chesñevar, C.I.; Rotstein, N.D.; Simari, G.R. Formalizing dialectical explanation support for argument-based reasoning in knowledge-based systems. Expert Syst. Appl. 2013, 40, 3233–3247. [Google Scholar] [CrossRef]
- Rago, A.; Toni, F. Quantitative argumentation debates with votes for opinion polling. In Proceedings of the International Conference on Principles and Practice of Multi-Agent Systems, Nice, France, 30 October–3 November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 369–385. [Google Scholar]
- Cayrol, C.; Lagasquie-Schiex, M.C. On the acceptability of arguments in bipolar argumentation frameworks. In Proceedings of the European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, Barcelona, Spain, 6–8 July 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 378–389. [Google Scholar]
- Conte, R.; Paolucci, M. Social cognitive factors of unfair ratings in reputation reporting systems. In Proceedings of the IEEE/WIC International Conference on Web Intelligence (WI 2003), Halifax, NS, Canada, 13–17 October 2003; pp. 316–322. [Google Scholar]
- Cantucci, F.; Falcone, R.; Castelfranchi, C. Robot’s self-trust as precondition for being a good collaborator. In Proceedings of the TRUST@ AAMAS, London, UK, 3–7 May 2021. [Google Scholar]
- Conte, R.; Paolucci, M. Reputation in Artificial Societies: Social Beliefs for Social Order; Kluwer Academic Publishers: Norwell, MA, USA, 2002; Volume 6. [Google Scholar]
- Baroni, P.; Romano, M.; Toni, F.; Aurisicchio, M.; Bertanza, G. Automatic evaluation of design alternatives with quantitative argumentation. Argum. Comput. 2015, 6, 24–49. [Google Scholar] [CrossRef]
- Rago, A.; Toni, F.; Aurisicchio, M.; Baroni, P. Discontinuity-free decision support with quantitative argumentation debates. In Proceedings of the Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning, Cape Town, South Africa, 25–29 April 2016. [Google Scholar]
- Ashtiani, M.; Azgomi, M.A. Contextuality, incompatibility and biased inference in a quantum-like formulation of computational trust. Adv. Complex Syst. 2014, 17, 1450020. [Google Scholar] [CrossRef]
- Botelho, V.; Kredens, K.V.; Martins, J.V.; Ávila, B.C.; Scalabrin, E.E. Dossier: Decentralized trust model towards a decentralized demand. In Proceedings of the 2018 IEEE 22nd International Conference on Computer Supported Cooperative Work in Design (CSCWD), Nanjing, China, 9–11 May 2018; pp. 371–376. [Google Scholar]
- Buccafurri, F.; Comi, A.; Lax, G.; Rosaci, D. Experimenting with certified reputation in a competitive multi-agent scenario. IEEE Intell. Syst. 2015, 31, 48–55. [Google Scholar] [CrossRef]
- Banaszewski, R.F.; Arruda, L.V.; Simão, J.M.; Tacla, C.A.; Barbosa-Póvoa, A.P.; Relvas, S. An application of a multi-agent auction-based protocol to the tactical planning of oil product transport in the Brazilian multimodal network. Comput. Chem. Eng. 2013, 59, 17–32. [Google Scholar] [CrossRef]
- Baqueta, J.J.; Tacla, C.A. Explaining Task Delegation Through Argumentation Debates with Votes. In Proceedings of the Ibero-American Conference on Artificial Intelligence, Montevideo, Uruguay, November 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 372–383. [Google Scholar]
- Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
- Garivier, A.; Moulines, E. On upper-confidence bound policies for switching bandit problems. In Proceedings of the International Conference on Algorithmic Learning Theory, Espoo, Finland, 5–7 October 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 174–188. [Google Scholar]
- Chapelle, O.; Li, L. An empirical evaluation of Thompson sampling. Adv. Neural Inf. Process. Syst. 2011, 24, 2249–2257. [Google Scholar]
| Base | Success Rate | Rejection Rate | Competence Score | Acceptance Score |
|---|---|---|---|---|
| | 0.32 | 0.03 | 0.69 | 0.49 |
| | 0.75 | 0.07 | 0.89 | 0.82 |
| | 1.00 | 0.29 | 0.99 | 0.82 |
| | 0.75 | 0.27 | 0.88 | 0.63 |
| | 0.27 | 0.20 | 0.71 | 0.61 |
| | 1.00 | 0.30 | 0.99 | 0.65 |
| | 0.43 | 0.08 | 0.74 | 0.65 |
| | 1.00 | 0.17 | 0.99 | 1.00 |
Share and Cite
Baqueta, J.J.; Morveli-Espinoza, M.; Tacla, C.A. Transparent Task Delegation in Multi-Agent Systems Using the QuAD-V Framework. Appl. Sci. 2025, 15, 4357. https://doi.org/10.3390/app15084357