Article

Complexity-Driven Trust Dynamics in Human–Robot Interactions: Insights from AI-Enhanced Collaborative Engagements

1 School of Computer Science, Nanjing Audit University, Nanjing 211800, China
2 College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
3 School of Statistics and Data Science, Nanjing Audit University, Nanjing 211800, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(24), 12989; https://doi.org/10.3390/app132412989
Submission received: 29 October 2023 / Revised: 29 November 2023 / Accepted: 3 December 2023 / Published: 5 December 2023
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

Abstract

This study explores the intricate dynamics of trust in human–robot interaction (HRI), particularly in the context of modern robotic systems enhanced by artificial intelligence (AI). By grounding our investigation in the principles of interpersonal trust, we identify and analyze both similarities and differences between trust in human–human interactions and human–robot scenarios. A key aspect of our research is the clear definition and characterization of trust in HRI, including the identification of factors influencing its development. Our empirical findings reveal that trust in HRI is not static but varies dynamically with the complexity of the tasks involved. Notably, we observe a stronger tendency to trust robots in tasks that are either very straightforward or highly complex. In contrast, for tasks of intermediate complexity, there is a noticeable decline in trust. This pattern of trust challenges conventional perceptions and emphasizes the need for nuanced understanding and design in HRI. Our study provides new insights into the nature of trust in HRI, highlighting its dynamic nature and the influence of task complexity, thereby offering a valuable reference for future research in the field.

1. Introduction

The integration of artificial intelligence (AI) into robotic systems has significantly advanced human–robot interaction (HRI), evolving from traditional automated systems to more intricate collaborations [1]. This evolution accentuates the importance of understanding the dynamics within HRI, where human intuition meets the precision of AI-driven robotics. Modern robotics, transcending their conventional roles, now serve as indispensable partners in diverse tasks, ranging from routine to high-risk operations [2]. Yet, despite these advancements, the quest for complete robotic autonomy faces inherent challenges, particularly in complex environments where human guidance remains crucial [3].
In HRI, the interaction between human subjective judgment and robotic execution is fundamental. The success of this collaboration hinges on effective communication and the degree of trust placed in AI-enabled robots [4,5]. Trust, a critical factor in HRI, influences decision-making and the efficiency of joint operations [6,7]. While existing research in HRI trust spans sociological, psychological, and computer science domains, it often overlooks specific aspects such as the nuanced nature of trust, its quantifiable measurement, and the impact of trust dynamics on HRI [8]. This gap highlights the need for empirical research focused on trust in HRI, particularly in the context of AI-enhanced systems.
Our study addresses this gap by investigating the dynamics of trust in HRI, with a specific focus on scenarios involving Unmanned Aerial Vehicles (UAVs) augmented with AI. The motivation for employing AI in UAV systems stems from its ability to enhance decision-making and operational efficiency in complex tasks [9]. The research objectives are twofold:
Investigating Trust Dynamics in HRI: We aim to explore the evolving nature of trust between humans and robots, focusing on AI-augmented UAVs. This includes examining how trust is established, maintained, and influenced in HRI, offering insights into the delicate balance of human–robot collaboration [10].
Analyzing the Impact of Task Complexity: The study also delves into how varying levels of task complexity in AI-enhanced UAV operations affect human trust. Understanding this relationship is pivotal for optimizing collaborative strategies and achieving successful outcomes in HRI [11].
Through this approach, our paper endeavors to provide a comprehensive understanding of trust dynamics in HRI, contributing to both the theoretical and practical advancement of AI-enhanced robotics.
The organization of this paper is as follows: Section 2 defines trust and its contributing factors and introduces the research hypotheses for this investigation. Section 3 establishes an experimental platform based on our research hypotheses and presents the HRI experiments conducted on it. Section 4 undertakes an analysis rooted in the results of these collaborative trials. Section 5 offers a comprehensive discussion and interpretation of the experimental outcomes, highlighting both current insights and potential research directions. Section 6 provides a summary of the research and its resulting experimental insights.

2. Related Work

2.1. Evolution and Importance of Trust in Human–Robot Interaction

Within the domain of artificial intelligence, the interaction between humans and robots has attracted significant attention. One of the core challenges is the establishment and sustainment of an effective trust relationship during HRI. Such a trust relationship can enhance the efficiency and efficacy of interaction, thereby optimizing the distribution of workload between humans and robots. In recent years, the “human-in-the-loop” concept has emerged as a prevailing approach in this field. Scholars have extensively explored trust issues in HRI, addressing topics such as the definition of trust, the determinants of trust, and its practical implications.
The concept of “Trust” originated from studies on interpersonal relationships, describing an emotional connection in human interactions. For instance, Rotter et al. found that the level of trust significantly influences factors such as emotions, perceived attractiveness, and morality among interacting individuals [12]. From a sociological perspective, variations in interpersonal trust profoundly influence individual well-being, social engagement, socio-economic advancement, and proactive behavior [13,14,15,16].
HRI, at its core, is an endeavor to simplify technical dialogues between human operators and intelligent robots, transcending traditional robot language barriers [17]. Such interaction harnesses the analytical prowess of robots with the intuitive capabilities of humans, creating synergies that have seen application in diverse contexts. The adoption of robots offers tangible benefits: marked reductions in economic overheads, minimized environmental impact, and a substantial reduction in human risk [10]. However, amidst these advancements, human trust in robots stands out as a cornerstone, especially when evaluating the interaction’s overall quality [18,19].
The degree of trust a human places in robots plays a significant and pivotal role across various domains. In healthcare, trust in robots affects the allocation of medical resources based on HRI and the effectiveness of contagious disease containment [20,21,22]. In military contexts, the extent of human trust in robots directly determines the operational efficiency of military surveillance sensor devices and the accuracy of military communications [23,24]. Further, trust in robots influences the application and execution of AI dynamic trust calibration mechanisms in high-risk military decision-making [25]. Even in ecological research, such as high-altitude animal migration studies, the validity of findings hinges on the level of trust in the HRI [26]. A noteworthy caveat, however, is that increasing decisional complexities can erode this trust, potentially undermining the interaction [27].
In summary, the essence of research in HRI centers on understanding and addressing trust issues during the interactive process. Given its significance, trust serves as the bedrock for the wide-ranging applications of HRI, spanning from medical to military domains and beyond, as the effectiveness of these applications hinges on users’ confidence in a robot’s capabilities and recommendations. This paper delves into the nature of trust within HRI, its parallels with interpersonal trust, and the specific dynamics influencing trust in robots. By shedding light on these areas, our goal is to bolster the quality of outcomes in HRI, thereby contributing to advancements in the broader realm of artificial intelligence.

2.2. The Definition of Trust in Human–Robot Interaction

The concept of trust varies in its definition and interpretation across diverse academic fields. In social sciences, trust is seen as essential for social interaction, incorporating both emotional and cognitive facets [28]. From a psychological perspective, trust is framed as a risk management tool, preparing for potential harm that might arise from others’ actions. In the realm of economics, trust is perceived as an outcome of a robust economic system, which in turn enhances the system’s efficiency [29]. Despite the varied definitions across disciplines, there’s a shared emphasis on the importance of trust in bilateral transactions or exchange relationships [30]. Interpersonal trust, as a key component in human relationships, establishes the foundation for these interactions. Yuan X et al. suggest that the evolution of interpersonal trust is a dynamic process, transitioning from trust based on rational choices and economic exchanges to trust rooted in interpersonal reliance and identification [31]. The existence of interpersonal trust assists individuals in making appropriate decisions and taking action when confronted with uncertainties.
In the context of HRI, trust in robots also implies an acknowledgment of potential unknown risks. Trust is thus defined as “one party’s willingness to accept risks or potential harms from another” [32]. For example, Akash et al. define human trust in AI as the degree to which humans rely on intelligent agents [33]. Simultaneously, other researchers have also emphasized the significance of the robot’s reliability, integrity, and capabilities in shaping trust [34]. Building upon previous studies, this paper offers the following definition of trust in HRI: as illustrated in Figure 1, in HRI, trust represents the human’s intention to rely on the robot. It measures how much the human (Trustor, the one placing trust) believes in the actions and reliability of the Robot (Trustee, the entity being trusted). The notion of “trust” should extend beyond mere task success rates or cognitive levels at specific points in time. Instead, it should delve into the nuances of the collaborative process between the involved parties [35]. This perspective is crucial because trust, whether in interpersonal dynamics or HRI, is continually formed and evolves throughout the interaction [36]. Research within sociology breaks down the evolution of trust into two distinct phases:
(1)
Static Trust: Often referred to as initial trust in many studies, static trust is established before any interactive relationship has begun. At this stage, due to a limited understanding and incomplete information about the Trustee, the Trustor faces potential risks. These risks can arise from the Trustee failing to meet expectations, either intentionally or unintentionally. Consequently, when establishing static trust, the Trustor typically relies on cognitive judgment to make decisions about the present situation [37].
(2)
Dynamic Trust: Dynamic trust emerges after the establishment of the interactive relationship and stems from the Trustor’s enhanced understanding of the Trustee. Such an understanding sometimes allows the Trustor to anticipate the Trustee’s actions. The core of dynamic trust involves factoring the results of prior interactive actions into future trust decisions. The outcomes, whether positive or negative, will manifest in the subsequent interaction between the Trustor and the Trustee, thereby further influencing trust [38].
Haesevoets suggests that trust in HRI is also an evolving process [39]. As individuals become more acquainted with artificial intelligence, their trust in robots tends to increase, fostering a more optimistic perspective. In the context of HRI, trust fluctuates based on the alignment of human expectations with the robot’s demonstrated capabilities. This progression is cultivated through continuous bidirectional interactions between humans and robots. Many studies have investigated this interaction process from various angles. For instance, some research has focused on users’ behaviors during unexpected events, delving deep into the dynamics of trust breakdown and restoration [40]. Others employ empirical methods to uncover varying patterns of dynamic trust as levels of robot autonomy change, and how these patterns shape human decision-making [41]. A different subset of research highlights the importance of nurturing and preserving dynamic trust when coordinating robots within the Internet of Things [42]. Together, these investigations underscore the intricate and multifaceted nature of dynamic trust in HRI.
In essence, while trust between humans and robots originates from interpersonal trust, they are not equivalent. Trust in robots emerges, in part, from foundational human cognition and general assessments, embodying the concept of static trust. Contrastingly, dynamic trust is a product of persistent HRI, manifesting in actions like command issuance and information feedback. While static trust provides a preliminary foundation for initial interactions with robots, dynamic trust becomes a crucial consideration for long-term interaction. This perspective aligns with Rhim’s stance, where he advocates for trust to be perceived as a dynamically changing parameter, continuously evolving with HRI [43]. Presently, the bulk of research gravitates towards static trust, leaving dynamic trust—especially its interactions with external factors—relatively underexplored.

2.3. Factors Influencing Trust in Human–Robot Interaction

In HRI, the factors shaping trust are both vast and complex. They span human emotions, personality, cognitive abilities, robot performance, algorithm quality, and precision, as well as external conditions like temperature, humidity, application scope, and task complexity. Essentially, trust levels in HRI are determined by a blend of human attributes (from the Trustor), robot characteristics (from the Trustee), as well as a myriad of external environmental factors.
A significant body of research is directed at bolstering robot reliability through technological enhancements, aiming to consequently increase human trust in robots. The capability of a robot greatly dictates the success of predefined tasks. Therefore, refining robot performance to align with human anticipations becomes a critical strategy for fortifying human trust in HRI [44]. Sheridan et al. emphasized that the cornerstones for establishing this trust encompass robot reliability, robustness, effectiveness, transparency, and clarity of intent [37]. Similarly, Gebru et al. evaluated shifts in trust centered on robots, viewing it through three lenses: the trustworthiness of AI, the inherent assurances offered by robots, and an ethical framework for trustworthy robots [45]. They assert that accentuating these three facets can elevate a machine’s credibility and societal endorsement, thereby enriching trust in HRI.
Trust in HRI is not solely determined by the robot’s qualities; human subjective factors also play a significant role. Personal preferences and mental resilience play a significant role in forming trust with robots; because people have distinct cognitive abilities, personalities, and emotional reactions, each person encounters specific challenges when trying to trust robots [46]. Lee et al. found that, regardless of the cognitive level, as individuals become more familiar with artificial intelligence, the degree of trust in robots correspondingly increases [47]. Furthermore, Zhou et al. explored how personality traits such as extroversion, introversion, and conscientiousness impact human trust. Their findings revealed that these traits have varied influences on trust, particularly under conditions of uncertainty and high cognitive demands [48].
In addition to human and robot factors, the environment significantly shapes trust in HRI. Factors such as geographical location, cultural background, task hazards, and task difficulty can all influence trust either positively or negatively. Notably, during HRI, the variance in task difficulty can cause shifts in the dependency on AI, thereby influencing the level of trust. Sociological findings have shown that task difficulty can sway individual self-perception [49], and psychological studies demonstrate its impact on emotional responses and heart rates [50].
Within HRI, Driggs et al. examined trust during visual search tasks of varying difficulty. Their results indicated that trust in robots increased as tasks grew more complex [11]. However, current studies addressing the influence of environmental factors on human trust in robots remain in their infancy. There is a gap in understanding whether task difficulty impacts trust in a static or dynamic manner. Moreover, few empirical investigations exist on the subject. Against this backdrop, our study focuses on understanding how task difficulty, considered as an environmental factor, influences trust in HRI.
Drawing from methodologies in psychological and sociological trust research, and building upon the latest insights in HRI, we approach this topic from an environmental perspective. Our objective is to explore the effects of varying task difficulties on dynamic trust during HRI. Based on this, we propose the following hypotheses:
H1. 
Different task difficulties during HRI significantly influence dynamic trust between humans and robots.
H2. 
Different task difficulties during HRI significantly influence static trust between humans and robots.
To validate our hypotheses, we conducted an interactive aerial operation experiment involving both humans and UAVs, with the latter serving as intelligent agents. Within this experiment, we categorized tasks based on their varying complexities to distinguish between static and dynamic trust. Additionally, we examined the HRI process under varying levels of task difficulty.

3. Experimental Methodology

3.1. Experimental Design

Recent advancements in HRI research demonstrate a variety of experimental approaches, each offering unique contributions to the field. One study utilizes the Gazebo simulation framework, renowned for its realistic physics simulations, to examine various UAV indoor navigation control methods [51]. This approach is particularly effective in environments with complex obstacles, providing critical insights into advanced control algorithms. However, a notable limitation of this approach is the reliance on simulated settings, which may not accurately reflect real-world scenarios. Another experimental approach involves integrating multi-modal interactions, such as speech, touch, and gesture, into UAV ground control stations [52]. This method, employing virtual simulation technology, is designed to enhance operational efficiency and user experience in HRI. Despite its potential, it confronts challenges like increased cognitive load and variable accuracy across different interaction modes. Additionally, studies have embraced the Unity platform for simulating complex HRI scenarios, especially those involving AI-autonomous drone systems [53]. This methodology is valuable for investigating human–robot trust dynamics in contexts that mimic real-world conditions. Nonetheless, Unity-based simulations share a common limitation with Gazebo: the difficulty in fully replicating the unpredictability of actual environments.
The selection of an appropriate simulation platform is a critical decision that significantly influences the study’s methodology and outcomes. Gazebo, with its seamless integration with the Robot Operating System (ROS), is suited for developing robotic systems that necessitate high-fidelity environmental interactions. However, for the objectives of this particular study, Unity was selected as the more fitting platform, given its specific strengths that align closely with the needs. Unity distinguishes itself with its user-friendly interface, which is instrumental in simplifying the development process. This accessibility is particularly beneficial in HRI research, where the focus often shifts between various aspects of interaction dynamics, requiring a platform that is adaptable and easy to manipulate without extensive programming expertise. Furthermore, Unity’s advanced graphical capabilities are essential for creating realistic and immersive simulations, pivotal for validating the experimental findings of this study. The platform’s ability to render detailed and visually engaging environments adds a layer of authenticity to the simulated scenarios, enhancing the overall quality and reliability of the research. Additionally, Unity’s versatility in accommodating a diverse range of interaction modalities perfectly aligns with the study’s aim to explore the complexities of HRI. This adaptability is vital for constructing intricate and comprehensive interaction scenarios, allowing for an in-depth examination of how humans interact with robotic systems under various conditions.
In this study, a Unity-based experimental platform was employed to simulate the surveillance task of unmanned aerial vehicles (UAVs), offering a rich canvas for insights. UAVs, as symbols of automation and intelligence, require nuanced interactions with human operators, especially in high-stakes conditions where the degree of trust becomes a key factor for collaboration success [54,55]. The experimental setup created in Unity simulates a surveillance task involving one human-operated manned aerial vehicle (MAV) and four autonomous UAVs controlled by deep reinforcement learning algorithms. This setup, shown in Figure 2, includes varying complexities, such as different numbers of detection points, danger zone ranges, and area densities to be monitored. For instance, a simple task might involve a single-point detection in a low-threat environment, while a complex task could require coordination between MAVs and UAVs to identify multiple points across expansive areas with intermittent danger zones.

3.2. Experimental Interface

The study utilizes a 3D experimental interface developed in Unity, which significantly enhances the sense of depth and realism when compared to traditional planar flight experiments. This advancement is crucial for simulating complex UAV surveillance tasks, central to examining the intricacies of human trust in UAV operations. A video demonstrating the simulations conducted in Unity is provided in the supplementary materials.
The core objective of the experiment is for UAVs to navigate through predefined detection points while avoiding hazardous areas, culminating in a safe destination arrival. In this setup, UAVs are exclusively equipped with reconnaissance capabilities to focus on the impact of task difficulty on human trust. Adjustments in the locations of hazardous zones and the distribution of detection points vary the task’s complexity, creating different levels of challenge for participants.
Participant interaction with the platform is facilitated through a click-to-run method. Before beginning the experiment, participants must enter their names, which aids in tracking individual performance. The initial interface, shown in Figure 3a, presents options to start the experiment, view historical results, or exit the program. Selecting ‘start’ leads to another interface, depicted in Figure 3b, where participants can choose to undergo training or engage in tasks with different difficulty levels.
Figure 4 provides a detailed layout of the experimental environment during the surveillance task. The main interface presents a bird’s-eye perspective from the human-controlled MAV, depicting multiple detection points and hazard zones. Participants initiate from the starting point (“S”) and navigate the MAV and UAVs to cover specified detection points, which are delineated by green rectangular frames. A shift in color from green to white confirms a successful detection. After all points have been covered, participants should direct the MAV and/or UAVs towards the destination, demarcated by a red rectangular frame labeled “D”. During the surveillance process, the MAV or UAVs could potentially encounter several hazard zones, with the threat level escalating from the outer boundary to the central core. Approaching the zone’s fringe activates a warning for participants. However, drawing closer to the core risks mission termination. Hence, astute navigation is imperative. Functional buttons like switch viewpoint, pause, and menu options are situated at the interface’s top left. The bottom left displays a panoramic view with “T” symbols indicating flight detection points. Essential real-time metrics such as flight altitude, time elapsed, and task progression populate the interface’s lower section.

3.3. Experimental Setup

The experiment is structured around two distinct operational modes. The first mode is known as the accompanying mode. In this mode, an MAV piloted by a human leads the navigation and the autonomous UAV follows its trajectory seamlessly. The second mode is designated as the automated mode. Here, the human pilot delegates a detection point to the autonomous UAVs, which then autonomously execute the detection. Given the hazardous zones in the experimental setting, a deep reinforcement learning algorithm, i.e., D3QN, is employed to train the autonomous UAVs. This approach aims to ensure the UAVs’ ability to perform surveillance tasks while simultaneously avoiding dangers.
During the experiment, the positions of the detection points and the hazardous zones are predetermined. Leveraging this predetermined environmental data, the UAVs use the acquired training to continuously optimize their routes to targeted detection points. Understanding that task nuances might vary across different settings, establishing a baseline for the environment becomes a priority. Depending on the scenario, the UAVs hold the responsibility of pinpointing and accomplishing distinct detections, all guided by the strategic directive set by the human operator.
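For reference, the following is a minimal sketch of the two ingredients that give D3QN (Dueling Double DQN) its name: a dueling Q-network and a Double-DQN target. It is written in PyTorch with illustrative dimensions, names, and hyperparameters; it is not the actual training code used for the UAVs in this study.

```python
# Minimal dueling Q-network and Double-DQN target, as used in D3QN-style training.
# All dimensions, names, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps the V and A streams identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

def double_dqn_target(online, target, reward, next_state, gamma=0.99):
    # Double DQN: the online network selects the next action,
    # while the target network evaluates it.
    with torch.no_grad():
        best_action = online(next_state).argmax(dim=-1, keepdim=True)
        next_q = target(next_state).gather(-1, best_action).squeeze(-1)
    return reward + gamma * next_q

net = DuelingQNetwork(state_dim=8, n_actions=5)
q_values = net(torch.randn(2, 8))  # a batch of 2 states -> a (2, 5) tensor of Q-values
```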
As shown in Figure 5, our study utilizes a 1 × 3 between-subject experimental design. The independent variable under consideration is the difficulty level of the surveillance task, which spans three categories: easy, normal, and hard. Specifically:
  • The easy level includes two detection points and danger zones.
  • The normal level includes four detection points and danger zones.
  • The hard level includes six detection points and danger zones.
The dependent variable of the experiment is the participants’ trust level in UAVs powered by AI algorithms. Due to the ongoing interaction between the human and the UAV, this trust level experiences dynamic shifts. Drawing from the definition of trust in HRI outlined in Section 2.2, we classify these trust variations into two phases: Static Trust and Dynamic Trust. Static Trust emerges before the collaboration is established, is formed by the Trustor based on cognitive judgment, and is denoted as $T_0^n$, where $n$ corresponds to the task difficulty. Dynamic Trust takes place after the collaboration is established, indicating the degree of trust the Trustor has towards the target trust entity, i.e., the Trustee, and is represented as $T_i^n$, where $i$ stands for the $i$-th round of collaboration between the Trustor and the Trustee. Consequently, we can calculate the change in trust before and after the $i$-th round of collaboration in the HRI process as:

$$\Delta T_i^n = T_i^n - T_{i-1}^n$$

where $i \in \{1, 2, \dots\}$. A positive $\Delta T_i^n$ suggests that the HRI boosted trust, whereas a negative value indicates a decline in trust due to the interaction. By evaluating the influence of varying task difficulties on trust shifts pre- and post-HRI, our objective is to elucidate the degree to which task difficulty affects trust levels. Further, we aim to enhance our comprehension of the interplay between task difficulty and trust dynamics.
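As a concrete illustration, the short sketch below shows how the per-round trust change defined above can be computed from a participant’s sequence of trust ratings; the ratings are hypothetical and not data from the experiment.

```python
# Hypothetical illustration of the trust-change definition above:
# Delta T_i = T_i - T_{i-1} for one participant under one difficulty level n.
def trust_deltas(trust_ratings):
    """Return the list of per-round trust changes [T_i - T_{i-1}, i >= 1]."""
    return [t_i - t_prev for t_prev, t_i in zip(trust_ratings, trust_ratings[1:])]

ratings = [5, 6, 4]           # e.g. T_0 (static), T_1, T_2 on the 7-point scale
print(trust_deltas(ratings))  # [1, -2]: trust rose after round 1, fell after round 2
```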

3.4. Experimental Procedure

This study involved participants who were novices in human–robot interaction (HRI) experiments. To ensure the randomness and fairness of the experimental procedure, participants were randomized into three groups, each assigned a specific task difficulty level with no overlap in tasks. At the beginning of each session, the layout of detection points and danger zones was shuffled.
The experimental setup consisted of a Windows 10 operating system, a 24-inch monitor with a 144 Hz refresh rate and a resolution of 2048 × 1080, and external wireless input devices including a mouse and a keyboard. Participants used the keyboard arrow keys to control the manned aerial vehicle (MAV). The experiment was conducted in a closed room, where a non-intrusive instructor was present to provide background information, ensure adherence to the experimental procedure, and assist without influencing the participants’ actions or decisions.
Initially, participants were welcomed and instructed to sit in front of the monitor. The instructor then provided them with essential background information about the experiment. As depicted in Figure 6, the experimental procedure was then divided into three stages:
Training (Step 1): Participants started by completing a demographic questionnaire, indicating their age, gender, education level, and familiarity with AI. They were then familiarized with the experimental environment and briefed on the artificial intelligence algorithms used in the UAVs. This stage also involved hands-on experience with the surveillance tasks, ensuring a comprehensive understanding of the experiment’s objectives, goals, procedures, and operational methods.
Accompanying (Step 2): In this phase, participants were tasked with completing the assignment independently, using the UAV in its accompanying mode. Post-completion, data on their trust in the UAV during this mode were collected through a questionnaire.
Automated (Step 3): This stage replicated the task from Step 2 but within an HRI context. Participants had the option to utilize automated UAVs to facilitate task completion. Following this, trust in the UAVs was again assessed through a questionnaire.
Data were primarily collected using structured questionnaires distributed at different stages of the experiment. The questionnaires were designed to examine participants’ fluctuations in dynamic trust across the Training, Accompanying, and Automated stages. Each questionnaire utilized a seven-point Likert scale for the item “Whether you feel trust in artificial intelligence”, ranging from “strongly disagree” to “fully agree”, enhancing the consistency and comparability of the data gathered.
In addition to the questionnaires, comprehensive records of participant performance during Steps 2 and 3 were maintained. These records encompassed various aspects, including the completion status of detection points, the time taken for task completion, and the specific strategies employed by participants in their interactions with the UAVs. The detailed analysis of these datasets is pivotal to the study as it seeks to elucidate the influence of task difficulty on dynamic trust within the HRI process. The insights gleaned from this analysis are anticipated to contribute significantly to both the theoretical understanding and practical applications in the domain of trust-building within HRI scenarios.

4. Results

In this study, to measure participants’ trust in AI, we utilized a questionnaire where respondents rated their trust level on a seven-point Likert scale, ranging from 1 (Strongly Disagree) to 7 (Strongly Agree). The responses were collected at three time points under each task difficulty condition, labeled $T_0$, $T_1$, and $T_2$. To analyze the differences in trust levels across these stages, we also calculated the differences between these values, such as $\Delta T_{01}$ ($T_1$ minus $T_0$) and $\Delta T_{12}$ ($T_2$ minus $T_1$).
For data analysis, we employed the Kruskal–Wallis test, a non-parametric method for comparing K independent samples. This test was chosen for its ability to handle data from different groups without assuming a normal distribution, making it suitable for our study, where trust ratings may not conform to normality given the varied nature of human responses. The Kruskal–Wallis test is particularly effective when comparing more than two groups, as in our case with three levels of task difficulty. The analysis yielded the chi-square statistic, which summarizes how strongly the group distributions differ, together with its degrees of freedom, providing insight into the variability within the data. Additionally, the asymptotic significance obtained from the test offered an intuitive understanding of the trust trends across different task complexities. These results helped in assessing the impact of task difficulty on participants’ trust in AI, forming a crucial part of our study’s findings.
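For readers reproducing this kind of analysis, the snippet below is a hedged sketch of a Kruskal–Wallis comparison using SciPy; the trust ratings shown are made up for illustration and are not the study’s data.

```python
# Sketch of a Kruskal-Wallis test across three difficulty groups (illustrative data).
from scipy import stats

easy   = [6, 5, 7, 6, 5, 6]   # hypothetical 7-point trust ratings per group
normal = [4, 3, 4, 5, 3, 4]
hard   = [6, 6, 5, 7, 6, 5]

h_stat, p_value = stats.kruskal(easy, normal, hard)
print(f"chi-square (H) = {h_stat:.3f}, df = 2, p = {p_value:.3f}")
# p < 0.05 would indicate that at least one difficulty group's trust
# distribution differs from the others.
```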

4.1. Descriptive Statistics

This study employed 60 university students from various academic disciplines as participants, with each difficulty level having 20 participants. To ensure a smooth data processing and analysis phase, we normalized the questionnaire results after collection, ensuring all data points were on a uniform scale.
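The normalization procedure is not specified in detail; the sketch below assumes simple min-max scaling of the 7-point Likert responses to the [0, 1] interval, which is one common way to place all data points on a uniform scale.

```python
# Assumed min-max normalization of 7-point Likert responses to [0, 1];
# the paper does not state which normalization it used.
def min_max_normalize(scores, lo=1, hi=7):
    return [(s - lo) / (hi - lo) for s in scores]

print(min_max_normalize([1, 4, 7]))  # [0.0, 0.5, 1.0]
```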
As shown in Table 1, in terms of the participants’ demographics, 27% were male, while females represented the majority at 73%. As for the educational level, 70% were undergraduate students and the remaining 30% were graduate students. Regarding familiarity with artificial intelligence, around 68% indicated having some understanding of AI, whereas the other 32% reported minimal or no exposure to the subject. The diversity of our sample enriches our research, enhancing its universality and relevance.
Every participant successfully finished every step of the experiment and chose the autonomous UAV surveillance option during Step 3. In Figure 7, we present a comparison of the average time participants spent completing tasks at various difficulty levels in both Step 2 and Step 3. In this figure, the vertical axis indicates the average time taken to finish the tasks, while the horizontal axis differentiates the task difficulty levels, i.e., Easy, Normal, and Hard. The red line corresponds to the average time for participants in Step 2, and the green line signifies the same for Step 3.
In the Easy mode, there was a minimal difference in completion times between Step 2 and Step 3, with recorded times of 199 s and 202 s, respectively. However, as the task difficulty escalated, there was a marked decrease in the time required for Step 3 compared to Step 2. This discrepancy was most pronounced in the group with the highest difficulty level. On average, participants took 382 s for Step 2 and only 337 s for Step 3, resulting in a time reduction of 45 s. Such observations hint that leveraging UAVs in collaboration becomes increasingly advantageous as task complexity rises. Overall, this evidence highlights the supportive impact of UAVs integrated with artificial intelligence algorithms in HRI, notably contributing to efficiency in task execution.

4.2. Experimental Results

This study employed non-parametric tests for independent sample analyses, with task difficulty as the independent variable and the degree of trust and its dynamic changes as the dependent variables. Table 2 provides a detailed listing of the participants’ trust levels towards UAVs at three core time points: $T_0$, $T_1$, and $T_2$. These represent the trust levels participants had towards the UAV during the Training, Accompanying, and Automated stages, respectively. $T_0$ denotes static trust, whereas $T_1$ and $T_2$ represent dynamic trust after Step 2 and Step 3. The term $\Delta T_i$ quantifies the shift in participants’ trust levels towards the UAV across these stages.
The experimental results indicate that during the training phase of the experiment, prior to any interaction with the UAV, participants from all groups manifested comparable trust levels, with no statistically significant differences observed (χ2 = 5.7, df = 2, p = 0.058 > 0.05). This implies that after participants fully grasped the experimental background and the relevant artificial intelligence algorithms, their static trust $T_0$ remained consistent, setting a stable baseline for the subsequent experimental stages. However, in Step 2, the varying task difficulty significantly influenced trust levels (χ2 = 16.448, df = 2, p < 0.01). This finding suggests that participants’ interaction experiences with AI when managing tasks of different difficulties significantly influenced their trust levels. Such variations in trust could be attributed to the distinct interaction experiences participants encountered while executing tasks of different complexities. Nevertheless, in Step 3, the trust discrepancies arising from task difficulty diminished and were no longer statistically significant (χ2 = 2.235, df = 2, p = 0.327 > 0.05). This could indicate that although task difficulty initially shaped trust, its influence seemed to neutralize as the experiment progressed.
It is noteworthy that between Step 2 and Step 3, i.e., $\Delta T_{12}$, a significant shift in trust levels was observed (χ2 = 17.914, df = 2, p < 0.01). This outcome supports our core hypothesis: task difficulty, along with the distinct interaction experiences it produces, plays a significant role in influencing the dynamic changes in trust levels. While an initial level of trust may be consistent, it tends to adapt based on the task’s complexity. However, this fluctuation in trust tends to stabilize as interactions become more frequent and profound.
Figure 8 further elucidates the impact of task difficulty on trust levels. Figure 8a–c illustrate shifts in the average trust values across stages and varying task difficulties. In Figure 8a, before the experiment, the trust participants placed in artificial intelligence was consistent across all difficulty groups. However, during Step 2, those engaged in normal difficulty tasks exhibited higher average trust values than their counterparts in the easy and hard task groups. By Step 3, after experiencing HRI, the trust levels for the normal difficulty group declined, settling below the easy and hard task groups.
Figure 8d outlines the differences in trust between Step 2 and Step 3. The trend mimics a “V” shape, suggesting that participants tackling either easy or hard tasks felt a stronger inclination to interact with AI. In contrast, those handling normal tasks seemed to favor completing them without AI assistance. Over the course of the experiment, participants demonstrated fluctuating trust levels in AI, predominantly influenced by the complexity of their assigned tasks.
Our findings align with hypothesis H1, suggesting that task difficulty exerts a pronounced nonlinear influence on the dynamic shifts in trust during HRI, as visualized in Figure 8d. However, it is pivotal to highlight that our data did not corroborate hypothesis H2. This might hint that in the realm of HRI, the direct impact of task difficulty on static trust might not be as distinct as anticipated.

5. Discussion

In today’s era of rapid technological progress, the interactions between humans and artificial intelligence (AI) are intensifying. At the heart of this interaction lies trust, a crucial determinant of the relationship’s success. Our research delves into the dynamic nature of trust in human–robot interaction (HRI), revealing its nonlinear relationship with task difficulty. This insight is crucial for understanding and optimizing the role of robots in practical scenarios.
Our findings indicate that trust in robots tends to increase during tasks that are either straightforward or highly complex. This pattern can be attributed to the robot’s predictable performance in these scenarios, allowing users to accurately assess its capabilities and, consequently, establish a more informed level of trust. However, an intriguing phenomenon emerges with tasks of intermediate complexity, where uncertainties about the robot’s performance become more pronounced. This can lead to a fluctuation in trust levels, reflecting a “V-shaped” pattern akin to the one observed in previous research [56,57].
Notably, this pattern resonates with the principles of flow theory in human factors, which posits that optimal challenges are key to maintaining engagement and satisfaction in activities [56]. According to this theory, tasks that align well with an individual’s skills foster a state of flow, leading to higher engagement and satisfaction. In contrast, tasks that are misaligned with one’s skills can result in anxiety and frustration.
Our study’s observations suggest that tasks of either low or high complexity tend to align more effectively with users’ expectations and abilities when interacting with robots, thereby fostering a higher level of trust. Low-complexity tasks align well, perhaps because they often fall within the participant’s comfort zone, requiring a level of skill that most users possess or can easily acquire. This familiarity and ease of operation lead to a sense of confidence and control, enhancing trust in the robotic system. On the other hand, high-complexity tasks, while challenging, are typically approached with a clear understanding of the need for advanced AI assistance. Participants engaging in these tasks are generally prepared for, and perhaps even expect, the sophisticated capabilities of robots, which aligns with the heightened skill levels or understanding such tasks require. This acknowledgment of the robot’s role in complex scenarios can lead to a more trusting relationship, as users rely on and value the advanced support the robot provides.
However, intermediate-complexity tasks pose unique challenges. They often fall outside the user’s comfort zone and do not necessitate advanced AI intervention, leading to ambiguity about the robot’s role and capabilities. This uncertainty makes it difficult for users to predict or comprehend robotic behavior, destabilizing the trust dynamic.
Our findings with regard to trust dynamics echo Johnson’s research [57] on robotic user preferences and autonomy, highlighting the impact of task difficulty on trust. Johnson’s OODA (Observe, Orient, Decide, Act) model sheds light on the different stages of trust development, correlating with a user’s cognitive, emotional, and behavioral responses. The initial phase, ‘observe’, gathers environmental and robotic data, followed by the ‘orient’ phase, which interprets this data. In the ‘decide’ phase, strategies are formed based on previous observations and interpretations, setting the stage for trust development. Finally, during the ‘act’ phase, users validate or adjust their trust based on observed outcomes. This understanding, grounded in the OODA model, deepens our grasp on trust dynamics in HRI and offers a theoretical base for crafting more effective interaction strategies. By adjusting feedback in each OODA stage, we can guide the formation of trust towards robots.
Reflecting upon our approach, we recognize the need to discuss the real-world application of measured parameters like accuracy, reliability, and human energy consumption. While these parameters were integral to our analysis, they were considered more in terms of participants’ perceptions rather than objective quantification, aligning with our focus on the subjective nature of trust in HRI. Furthermore, our study’s unique focus on the non-linear trust dynamics with task complexity presented a novel research angle that did not readily lend itself to direct comparisons with existing methods. However, future research could explore these comparisons, providing additional context and depth to the understanding of trust dynamics in HRI. These considerations highlight important directions for future research. Expanding upon the current study to include more explicit connections to real-world applications and drawing comparisons with other methodologies will be crucial steps in enhancing the practical relevance and richness of our findings in the evolving field of HRI.

6. Conclusions

As the era of technological advancement continues to evolve, our study has made significant strides in understanding the nuances of trust in human–robot interaction (HRI). Through a blend of insights from previous HRI research and methodologies grounded in social psychology, we have revealed that trust in HRI is not static but is dynamically influenced by task complexity.
Our key finding is that trust levels in robots increase for tasks that are either very straightforward or highly complex. This is due to the predictability and perceived competence of robots in handling such tasks, which enhances users’ confidence in them. In contrast, for tasks of intermediate complexity, where the robot’s capabilities might not be as apparent or convincing, we observed a tendency for individuals to rely more on their own judgment, leading to fluctuating trust levels. This “V-shaped” pattern of trust, consistent with previous research, highlights the critical role of task difficulty in shaping trust in HRI.
Looking forward, our research opens the door for several future research avenues. There is a clear need to further explore the influence of individual variances on trust dynamics. Moreover, the study of the interplay between trust and other elements, such as robotic transparency, explainability, and past behavior, is essential. This research could explore ways to enhance robot transparency and user understanding, which are particularly important for tasks of intermediate complexity where trust levels are most variable. Investigating the practical implications of these insights in robotic system design is another critical area, particularly as AI technology continues to advance and integrate into sectors like healthcare, legal systems, and decision-making.
The nuanced evolution of trust as robots assume roles in these crucial sectors will bring new challenges and dimensions to trust research. Future studies in this field will be indispensable for developing more user-centric robotic systems tailored to meet the evolving needs and expectations of users. Such endeavors would undoubtedly contribute to fostering even stronger trust bonds in the HRI research field.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app132412989/s1. Video S1: Dynamic Unity Simulation Demonstrations of Our Experimental Process.

Author Contributions

Conceptualization, Y.Z. and C.W.; Formal analysis, T.W. and W.Q.; Funding acquisition, M.T.; Methodology, T.W.; Project administration, Y.Z.; Software, T.W. and W.Q.; Supervision, Y.Z. and C.W.; Validation, Y.Z.; Writing—original draft, Y.Z. and T.W.; Writing—review and editing, C.W., W.Q. and M.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by the National Natural Science Foundation of China, Grant numbers 62006121 and 61906203, and the National Social Science Fund of China, Grant number 23BFX038. This research was also partly funded by the Planning Fund Project of Humanities and Social Sciences Research of the Ministry of Education of China, Grant number 23YJA870009. This research was also partly funded by the Significant Project of Jiangsu College Philosophy and Social Sciences Research, Grant number 2021SJZDA153. This research was also partly funded by Jiangsu College Philosophy and Social Sciences Research, Grant number 2020SJB0131.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Díaz, M.A.; Voss, M.; Dillen, A.; Tassignon, B.; Flynn, L.L.; Geeroms, J.; Meeusen, R.; Verstraten, T.; Babič, J.; Beckerle, P.; et al. Human-in-the-Loop Optimization of Wearable Robotic Devices to Improve Human-Robot Interaction: A Systematic Review. IEEE Trans. Cybern. 2022, 53, 7483–7496. [Google Scholar] [CrossRef] [PubMed]
  2. Ling, H.; Liu, G.; Zhu, L.; Huang, B.; Lu, F.; Wu, H.; Tian, G.; Ji, Z. Motion Planning Combines Human Motion Prediction for Human-Robot Cooperation. In Proceedings of the 2022 12th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Baishan, China, 27–31 July 2022; pp. 672–677. [Google Scholar]
  3. Wang, C.; Wen, X.; Niu, Y.; Wu, L.; Yin, D.; Li, J. Dynamic task allocation for heterogeneous manned-unmanned aerial vehicle teamwork. In Proceedings of the 2018 Chinese Automation Congress (CAC), Xi’an, China, 30 November–2 December 2018; pp. 3345–3349. [Google Scholar]
  4. Esterwood, C.; Robert, L.P. Three Strikes and you are out!: The impacts of multiple human-robot trust violations and repairs on robot trustworthiness. Comput. Hum. Behav. 2023, 142, 107658. [Google Scholar] [CrossRef]
  5. Kraus, J.M.; Babel, F.; Hock, P.; Hauber, K.; Baumann, M.R.K. The trustworthy and acceptable HRI checklist (TA-HRI): Questions and design recommendations to support a trust-worthy and acceptable design of human-robot interaction. Gruppe. Interaktion. Organisation. Z. Angew. Organ. (GIO) 2022, 53, 307–328. [Google Scholar] [CrossRef]
  6. Guo, T.; Obidat, O.; Rodriguez, L.; Parron, J.; Wang, W. Reasoning the Trust of Humans in Robots through Physiological Biometrics in Human-Robot Collaborative Contexts. In Proceedings of the 2022 IEEE MIT Undergraduate Research Technology Conference (URTC), Cambridge, MA, USA, 30 September–2 October 2022; pp. 1–6. [Google Scholar]
  7. Patel, S.M.; Rosero, A.; Lazzara, E.H.; Phillips, E.; Rogers, J.E.; Momen, A.; Kessler, T.T.; Krausman, A.S. Human-Robot Teams: A Discussion of the Emerging Trends. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2022, 66, 172–176. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Hopko, S.K.; Y, A.; Mehta, R.K. Capturing Dynamic Trust Metrics during Shared Space Human Robot Collaboration: An eye-tracking approach. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2022, 66, 536. [Google Scholar] [CrossRef]
  9. Chang, W.; Lizhen, W.; Chao, Y.; Zhichao, W.; Han, L.; Chao, Y. Coactive design of explainable agent-based task planning and deep reinforcement learning for human-UAVs teamwork. Chin. J. Aeronaut. 2020, 33, 2930–2945. [Google Scholar]
  10. Outay, F.; Mengash, H.A.; Adnan, M. Applications of unmanned aerial vehicle (UAV) in road safety, traffic and highway infrastructure management: Recent advances and challenges. Transp. Res. Part A Policy Pract. 2020, 141, 116–129. [Google Scholar] [CrossRef]
  11. Driggs, J.; Vangsness, L. Changes in Trust in Automation (TIA) After Performing a Visual Search Task with an Automated System. In Proceedings of the 2022 IEEE 3rd International Conference on Human-Machine Systems (ICHMS), Orlando, FL, USA, 17–19 November 2022; pp. 1–6. [Google Scholar]
  12. Rotter, J.B. Interpersonal trust, trustworthiness, and gullibility. Am. Psychol. 1980, 35, 1. [Google Scholar] [CrossRef]
  13. Afsar, B.; Al-Ghazali, B.M.; Cheema, S.; Javed, F. Cultural intelligence and innovative work behavior: The role of work engagement and interpersonal trust. Eur. J. Innov. Manag. 2020, 24, 1082–1109. [Google Scholar] [CrossRef]
  14. Spadaro, G.; Gangl, K.; Van Prooijen, J.-W.; Van Lange, P.A.; Mosso, C.O. Enhancing feelings of security: How institutional trust promotes interpersonal trust. PLoS ONE 2020, 15, e0237934. [Google Scholar] [CrossRef]
  15. Pavez, I.; Gómez, H.; Laulié, L.; González, V.A. Project team resilience: The effect of group potency and interpersonal trust. Int. J. Proj. Manag. 2021, 39, 697–708. [Google Scholar] [CrossRef]
  16. Yuan, H.; Long, Q.; Huang, G.; Huang, L.; Luo, S. Different roles of interpersonal trust and institutional trust in COVID-19 pandemic control. Soc. Sci. Med. 2022, 293, 114677. [Google Scholar] [CrossRef] [PubMed]
  17. Yun, Y.; Ma, D.; Yang, M. Human–computer interaction-based decision support system with applications in data mining. Future Gener. Comput. Syst. 2021, 114, 285–289. [Google Scholar] [CrossRef]
  18. Nazar, M.; Alam, M.M.; Yafi, E.; Su’ud, M.M. A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access 2021, 9, 153316–153348. [Google Scholar] [CrossRef]
  19. Ho, Y.-H.; Tsai, Y.-J. Open collaborative platform for multi-drones to support search and rescue operations. Drones 2022, 6, 132. [Google Scholar] [CrossRef]
  20. Gupta, R.; Shukla, A.; Mehta, P.; Bhattacharya, P.; Tanwar, S.; Tyagi, S.; Kumar, N. VAHAK: A blockchain-based outdoor delivery scheme using UAV for healthcare 4.0 services. In Proceedings of the IEEE INFOCOM 2020-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Toronto, ON, Canada, 6–9 July 2020; pp. 255–260. [Google Scholar]
  21. Zuhair, M.; Patel, F.; Navapara, D.; Bhattacharya, P.; Saraswat, D. BloCoV6: A blockchain-based 6G-assisted UAV contact tracing scheme for COVID-19 pandemic. In Proceedings of the 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM), London, UK, 28–30 April 2021; pp. 271–276. [Google Scholar]
  22. Sahoo, S.K.; Mudligiriyappa, N.; Algethami, A.A.; Manoharan, P.; Hamdi, M.; Raahemifar, K. Intelligent trust-based utility and reusability model: Enhanced security using unmanned aerial vehicles on sensor nodes. Appl. Sci. 2022, 12, 1317. [Google Scholar] [CrossRef]
  23. Ko, Y.; Kim, J.; Duguma, D.G.; Astillo, P.V.; You, I.; Pau, G. Drone secure communication protocol for future sensitive applications in military zone. Sensors 2021, 21, 2057. [Google Scholar] [CrossRef]
  24. Gupta, R.; Kumari, A.; Tanwar, S.; Kumar, N. Blockchain-envisioned softwarized multi-swarming UAVs to tackle COVID-19 situations. IEEE Netw. 2020, 35, 160–167. [Google Scholar] [CrossRef]
  25. Tomsett, R.; Preece, A.; Braines, D.; Cerutti, F.; Chakraborty, S.; Srivastava, M.; Pearson, G.; Kaplan, L. Rapid trust calibration through interpretable and uncertainty-aware AI. Patterns 2020, 1, 100049. [Google Scholar] [CrossRef]
  26. Huang, R.; Zhou, H.; Liu, T.; Sheng, H. Multi-UAV collaboration to survey Tibetan antelopes in Hoh Xil. Drones 2022, 6, 196. [Google Scholar] [CrossRef]
  27. Seeber, I.; Bittner, E.; Briggs, R.O.; de Vreede, T.; de Vreede, G.-J.; Elkins, A.; Maier, R.; Merz, A.B.; Oeste-Reiß, S.; Randrup, N.; et al. Machines as teammates: A research agenda on AI in team collaboration. Inf. Manag. 2020, 57, 103174. [Google Scholar] [CrossRef]
  28. Lewis, J.D.; Weigert, A. Trust as a social reality. Soc. Forces 1985, 63, 967–985. [Google Scholar] [CrossRef]
  29. Gambetta, D. Can we trust trust. Trust. Mak. Break. Coop. Relat. 2000, 13, 213–237. [Google Scholar]
  30. Ciocirlan, S.-D.; Agrigoroaie, R.; Tapus, A. Human-robot team: Effects of communication in analyzing trust. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–7. [Google Scholar]
  31. Yuan, X.; Olfman, L.; Yi, J. How do institution-based trust and interpersonal trust affect interdepartmental knowledge sharing? In Information Diffusion Management and Knowledge Sharing: Breakthroughs in Research and Practice; IGI Global: Hershey, PA, USA, 2020; pp. 424–451. [Google Scholar]
  32. Siegrist, M. Trust and risk perception: A critical review of the literature. Risk Anal. 2021, 41, 480–490. [Google Scholar] [CrossRef]
  33. Akash, K.; Polson, K.; Reid, T.; Jain, N. Improving human-machine collaboration through transparency-based feedback–part I: Human trust and workload model. IFAC-Pap. 2019, 51, 315–321. [Google Scholar] [CrossRef]
  34. Lu, Z.; Guo, J.; Zeng, S.; Mao, Q. Research on Human-machine Dynamic Trust Based on Alarm Sequence. In Proceedings of the 2nd World Symposium on Software Engineering, Chengdu, China, 25–27 September 2020; pp. 314–320. [Google Scholar]
  35. Rheu, M.; Shin, J.Y.; Peng, W.; Huh-Yoo, J. Systematic review: Trust-building factors and implications for conversational agent design. Int. J. Hum.–Comput. Interact. 2021, 37, 81–96. [Google Scholar] [CrossRef]
  36. Okamura, K.; Yamada, S. Adaptive trust calibration for human-AI collaboration. PLoS ONE 2020, 15, e0229132. [Google Scholar] [CrossRef]
  37. Sheridan, T.B. Individual differences in attributes of trust in automation: Measurement and application to system design. Front. Psychol. 2019, 10, 1117. [Google Scholar] [CrossRef]
  38. Bargain, O.; Aminjonov, U. Trust and compliance to public health policies in times of COVID-19. J. Public Econ. 2020, 192, 104316. [Google Scholar] [CrossRef]
  39. Haesevoets, T.; De Cremer, D.; Dierckx, K.; Van Hiel, A. Human-machine collaboration in managerial decision making. Comput. Hum. Behav. 2021, 119, 106730. [Google Scholar] [CrossRef]
  40. Arora, A.; Gosain, A. Dynamic trust emergency role-based access control (DTE–RBAC). Int. J. Comput. Appl. 2020, 175, 0975–8887. [Google Scholar] [CrossRef]
  41. Carmody, K.; Ficke, C.; Nguyen, D.; Addis, A.; Rebensky, S.; Carroll, M. A Qualitative Analysis of Trust Dynamics in Human-Agent Teams (HATs). Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2022, 66, 152–156. [Google Scholar] [CrossRef]
  42. Jghef, Y.S.; Jasim, M.J.M.; Ghanimi, H.M.; Algarni, A.D.; Soliman, N.F.; El-Shafai, W.; Zeebaree, S.R.; Alkhayyat, A.; Abosinnee, A.S.; Abdulsattar, N.F. Bio-Inspired Dynamic Trust and Congestion-Aware Zone-Based Secured Internet of Drone Things (SIoDT). Drones 2022, 6, 337. [Google Scholar] [CrossRef]
  43. Rhim, J.; Kwak, S.S.; Lim, A.; Millar, J. The dynamic nature of trust: Trust in Human-Robot Interaction revisited. arXiv 2023, arXiv:2303.04841. [Google Scholar]
  44. Toreini, E.; Aitken, M.; Coopamootoo, K.; Elliott, K.; Zelaya, C.G.; Van Moorsel, A. The relationship between trust in AI and trustworthy machine learning technologies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; pp. 272–283. [Google Scholar]
  45. Gebru, B.; Zeleke, L.; Blankson, D.; Nabil, M.; Nateghi, S.; Homaifar, A.; Tunstel, E. A review on human–machine trust evaluation: Human-centric and machine-centric perspectives. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 952–962. [Google Scholar] [CrossRef]
  46. Cai, H.; Wang, C.; Zhu, Y. The Influencing Factors of Human-Machine Trust: A Behavioral Science Perspective. In Proceedings of the International Conference on Autonomous Unmanned Systems, Beijing, China, 15–17 October 2021; pp. 2149–2156. [Google Scholar]
  47. Lee, J.; Abe, G.; Sato, K.; Itoh, M. Developing human-machine trust: Impacts of prior instruction and automation failure on driver trust in partially automated vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2021, 81, 384–395. [Google Scholar] [CrossRef]
  48. Zhou, J.; Luo, S.; Chen, F. Effects of personality traits on user trust in human–machine collaborations. J. Multimodal User Interfaces 2020, 14, 387–400. [Google Scholar] [CrossRef]
  49. Lee, H.Y.; List, A. Examining students’ self-efficacy and perceptions of task difficulty in learning from multiple texts. Learn. Individ. Differ. 2021, 90, 102052. [Google Scholar] [CrossRef]
  50. Bouzidi, Y.S.; Falk, J.R.; Chanal, J.; Gendolla, G.H. Choosing task characteristics oneself justifies effort: A study on cardiac response and the critical role of task difficulty. Motiv. Sci. 2022, 8, 230–238. [Google Scholar] [CrossRef]
  51. Varga, B.; Doer, C.; Trommer, G.F.; Hohmann, S. Validation of a Limit Ellipsis Controller for Rescue Drones. In Proceedings of the 2022 IEEE 16th International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 25–28 May 2022; pp. 000055–000060. [Google Scholar]
  52. Yao, C.; Xiaoling, L.; Zhiyuan, L.; Huifen, W.; Pengcheng, W. Research on the UAV multi-channel human-machine interaction system. In Proceedings of the 2017 2nd Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), Wuhan, China, 16–19 June 2017; pp. 190–195. [Google Scholar]
  53. Pham, D.; Menon, V.; Tenhundfeld, N.L.; Weger, K.; Mesmer, B.L.; Gholston, S.; Davis, T. A Case Study of Human-AI Interactions Using Transparent AI-Driven Autonomous Systems for Improved Human-AI Trust Factors. In Proceedings of the 2022 IEEE 3rd International Conference on Human-Machine Systems (ICHMS), Orlando, FL, USA, 17–19 November 2022; pp. 1–6. [Google Scholar]
  54. Xu, Y. Research of Flight Test Method for Manned/Unmanned Aerial Vehicle Cooperation Technology. In Proceedings of the 2022 the 5th International Conference on Robot Systems and Applications (ICRSA), Shenzhen, China, 23–25 April 2022. [Google Scholar]
  55. Noda, A.; Harazono, Y.; Ueda, K.; Ishii, H.; Shimoda, H. A Study on 3D Reconstruction Method in Cooperation with a Mirror-mounted Autonomous Drone. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; pp. 305–310. [Google Scholar]
  56. Baabdullah, A.M.; Alalwan, A.A.; Algharabat, R.S.; Metri, B.; Rana, N.P. Virtual agents and flow experience: An empirical examination of AI-powered chatbots. Technol. Forecast. Soc. Chang. 2022, 181, 121772. [Google Scholar] [CrossRef]
  57. Johnson, M. Coactive Design: Designing Support for Interdependence in Human-Robot Teamwork. Ph.D. Thesis, Delft University of Technology, Delft, The Netherlands, 2014. [Google Scholar]
Figure 1. Schematic diagram of the evolution of trust in HRI.
Figure 2. A typical surveillance scenario in HRI.
Figure 3. Initial interface of the UAV surveillance task simulation. (a) illustrates the initial interface for starting or quitting the experiment and viewing results; (b) shows the screen where participants can choose between training or tasks of various difficulties.
Figure 4. Layout of the experimental interface during the surveillance task. Here, green boxes indicate targets that require detection, while the red boxes denote the designated destination points.
Figure 5. Schematic of the experiment at different levels of difficulty.
Figure 6. Experiment workflow.
Figure 7. Average time consumption for different task difficulties.
Figure 8. Average trust and its changes across different task difficulties at various stages. In this figure, the x-axis denotes the levels of task difficulty, categorized as “Easy”, “Normal”, and “Hard”, which correspond to simple, moderate, and difficult tasks, respectively. The y-axes in subfigures (a–c) illustrate the average trust values at different stages. Meanwhile, the y-axis in subfigure (d) depicts the variations in trust throughout the different stages. The labels “Step1”, “Step2”, and “Step3” represent the Training, Accompanying, and Automated stages, in that order.
Table 1. Descriptive Statistics.

Demographics                                   Number    Percentage (%)
Gender                 Male                    16        27
                       Female                  44        73
Education Level        Undergraduate           42        70
                       Graduate                18        30
Familiarity with AI    Moderately Familiar     41        68
                       Unfamiliar              19        32
Table 2. Results of the Kruskal–Wallis test.

              T0        T1        T2        ΔT01      ΔT12
Chi-Square    5.700     16.448    2.235     4.910     17.914
df            2         2         2         2         2
Sig.          0.058     0.000     0.327     0.086     0.000
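For readers who want to see how results of the kind reported in Table 2 can be obtained, the sketch below runs a Kruskal–Wallis test over trust ratings grouped by task difficulty using scipy.stats.kruskal. It is only an illustration: the rating values are hypothetical placeholders, and only the grouping into Easy/Normal/Hard conditions mirrors the experimental design described above; it is not the authors' analysis code.

```python
# Illustrative sketch (not the authors' analysis pipeline): comparing trust ratings
# across the three task-difficulty groups with a Kruskal-Wallis test.
# The numbers below are hypothetical placeholder ratings, one per participant.
from scipy.stats import kruskal

easy   = [6, 7, 6, 5, 7, 6, 6, 7, 5, 6]   # trust ratings in the Easy condition
normal = [4, 5, 3, 4, 5, 4, 3, 4, 5, 4]   # trust ratings in the Normal condition
hard   = [6, 5, 6, 7, 6, 5, 6, 6, 7, 5]   # trust ratings in the Hard condition

# kruskal returns the H statistic (reported as "Chi-Square" in Table 2) and the p-value.
h_stat, p_value = kruskal(easy, normal, hard)
print(f"Chi-Square (H) = {h_stat:.3f}, df = 2, Sig. = {p_value:.3f}")
```

With three difficulty groups the test has df = 2, which matches the degrees of freedom shown in Table 2.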
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
