Article

Collaborative Decision Making with Responsible AI: Establishing Trust and Load Models for Probabilistic Transparency

School of Mechanical Engineering, Southeast University, Nanjing 211189, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(15), 3004; https://doi.org/10.3390/electronics13153004
Submission received: 27 May 2024 / Revised: 18 July 2024 / Accepted: 29 July 2024 / Published: 30 July 2024
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)

Abstract

In responsible AI development, constructing AI systems with well-designed transparency and the capability for transparency-adaptive adjustment requires a clear, quantified understanding of user states during interaction. Among these, trust and load are two important internal psychological states of the user, yet they are often difficult to ascertain directly. This study therefore employs transparency experiments involving multiple probabilistic indicators to capture users’ compliance and reaction times during interactive collaboration with real-time feedback. Estimations of trust and load states are then established, from which a state transition matrix is further developed. Through the resulting trust–workload model, probabilistic estimates of user states under varying levels of transparency are obtained, quantitatively delineating how states and transparency evolve within interaction sequences. This research lays the groundwork for subsequent work on optimal strategy formulation and on dynamically adaptive transparency adjustment strategies constrained by the trust–workload state model.
Keywords: responsible AI; human–computer interaction; transparency design; collaborative decision making; human–computer trust; cognitive modeling
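
The abstract's mention of a state transition matrix and probabilistic estimation of user states can be illustrated with a minimal sketch. The code below is not the authors' model: the joint trust–load states, the transition probabilities, and the two transparency levels are hypothetical placeholders. It only shows, assuming a simple Markov-style formulation, how a probability distribution over user states might be propagated through an interaction sequence under varying transparency.

```python
# Minimal sketch of a Markov-style trust-workload state model.
# All states, matrices, and transparency levels are hypothetical
# illustrations, not values from the article.
import numpy as np

# Hypothetical joint user states: (trust, load) combinations.
STATES = ["low-trust/low-load", "low-trust/high-load",
          "high-trust/low-load", "high-trust/high-load"]

# One hypothetical transition matrix per transparency level.
# Entry (i, j) = P(next state j | current state i) at that level.
TRANSITIONS = {
    "low_transparency":  np.array([[0.6, 0.2, 0.1, 0.1],
                                   [0.3, 0.5, 0.1, 0.1],
                                   [0.2, 0.1, 0.5, 0.2],
                                   [0.1, 0.2, 0.2, 0.5]]),
    "high_transparency": np.array([[0.3, 0.1, 0.5, 0.1],
                                   [0.2, 0.3, 0.3, 0.2],
                                   [0.1, 0.0, 0.7, 0.2],
                                   [0.1, 0.1, 0.3, 0.5]]),
}

def propagate(belief, transparency_sequence):
    """Propagate a probability distribution over user states through an
    interaction sequence, one transparency level per step."""
    for level in transparency_sequence:
        belief = belief @ TRANSITIONS[level]
    return belief

# Start from a uniform belief over the four states and track how the
# distribution evolves across three interaction steps.
initial = np.full(len(STATES), 0.25)
final = propagate(initial, ["low_transparency",
                            "high_transparency",
                            "high_transparency"])
for state, p in zip(STATES, final):
    print(f"P({state}) = {p:.3f}")
```

In such a sketch, the estimated state distribution at each step could in turn drive a transparency-adjustment policy; the article's actual indicators (compliance, reaction time) would enter as observations used to estimate or update these probabilities.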

