Review

The Sense of Agency in Human–Machine Interaction Systems

by Hui Yu 1, Shengzhi Du 1,*, Anish Kurien 1, Barend Jacobus van Wyk 1 and Qingxue Liu 2
1 Department of Electrical Engineering, Tshwane University of Technology, Pretoria 0001, South Africa
2 School of Mechanical and Electrical Engineering, Kunming University, Kunming 650214, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7327; https://doi.org/10.3390/app14167327
Submission received: 15 June 2024 / Revised: 11 August 2024 / Accepted: 18 August 2024 / Published: 20 August 2024

Abstract: Human–Machine Interaction (HMI) systems are integral to various domains and rely on human operators for effective performance. The sense of agency (SoA) is crucial in these systems, as it influences the operator’s concentration and overall efficiency. This review explores the SoA in HMI systems, analyzing its definition, key influencing factors, and methods for enhancement. We provide a comprehensive examination of SoA-related research and suggest strategies for measuring and improving the SoA. Two key research directions are highlighted: the impact of user experience on the SoA, and the role of the SoA in enabling unconscious communication between humans and machines. We propose a development route for HMI systems, outlining a progressive structure across three stages: machine-centric, human-centric, and human–machine integration. Finally, we discuss the potential of gaming platforms as tools for advancing SoA research in HMI systems. Our findings aim to enhance the design and functionality of HMI systems, ensuring improved operator engagement and system performance.

1. Introduction

From ancient times to the present, tools have been an indispensable part of human development. From using natural objects to making their own implements, humans have never designed tools in isolation: tool design has always been driven by user and task needs, which remain the most critical part of human–machine system design. Today, with advances in technology, tools are equipped with intelligence, and some devices even perform tasks independently without significant human interference. However, humans are never detached from the task; their role has merely shifted from operator to supervisor [1,2]. This change not only adds burden and complexity to HMI systems, especially at the design stage, but also leads to many out-of-the-loop problems [3,4]: that is, humans cannot efficiently take over control when automation fails. For instance, in assisted driving systems, when the system must switch from automatic to manual mode, a low SoA is detrimental to the completion of the main task. In March 2018, an Uber self-driving car killed a pedestrian who was crossing the road. At the time of the accident, the operator was focused on his smartphone and did not correct the car’s erroneous behavior in time. A 2019 investigation by the National Transportation Safety Board found that the safety drivers employed by Uber did not watch the road in more than a third of trips, and the board concluded that the accident would have been “avoidable” had the safety driver been alert [5,6]. The purpose of an HMI system is clearly to reduce human workload and increase efficiency and safety, but over-reliance on machines leads to reduced vigilance, diverted attention, and a loss of sensitivity to important cues about unexpected events.
As automation technology advances in HMI systems, humans are becoming less involved in regular/normal operations, leading to a loss of focus and an increased risk of human-out-of-the-loop problems [7]. The SoA can be considered a composite of many basic experiences, including the experience of intentional causation, the sense of initiation, and the sense of control [8]. The SoA plays a key role in guiding responsibility attribution [9], which is a key driver of human behavior [10]. In HMI systems, in addition to humans, there are quite a few robotic agents [11], exoskeletons [12,13,14], and avatars in virtual reality environments [15,16], which will cause the operator to be unable to attribute the outcomes to themselves, especially when the machine can work independently. A wrongly attributed SoA can be problematic in terms of ethics, as it can lead to accountability issues [17,18,19,20]. Therefore, determining how to make users attribute the output of HMI systems to their operations is the core of maintaining the SoA. The purpose of maintaining or even improving the SoA in HMI systems is not only to enhance the user’s experience of involvement but also to keep the user’s attention focused on their duty, even when assistance systems are acting. Therefore, maintaining the SoA of humans is a key part of the success of such systems. The mechanism and influence of the SoA have been the subject of in-depth studies in the fields of cognitive science and neuroscience [19,21,22,23,24]. However, due to the complexity, variability, and continuity of HMI systems, the analysis of the SoA is still a challenge.
The Web of Science (WoS) is an ideal data source for bibliometric investigations, so we chose WoS for this study. We specifically selected the WoS Core Collection to ensure the acquisition of high-quality articles and to exclude less relevant ones. To acquire high-quality data, this review used articles, proceedings papers, and review articles as analysis data. According to the information from WoS, the first article about SoA was published in 1992. The data collection process for this study started on 5 June 2024; thus, the scope of this research is from 1992 to 2024. TS = “sense of agency” was used as the search term in the process of data collection. The detailed search information is summarized in Table 1.
In recent years, more and more attention has been paid to the SoA [25]. Published research on the SoA has begun to speed up in the past decade [26,27,28,29]. The number of publications about the SoA from 1992 to 2024 is shown in Figure 1.
The number of research publications was relatively small between 1992 and 2005, with no more than 10 articles recorded per year. Since 2005, the number has gradually increased, with significant growth in 2019 followed by year-on-year increases up to a peak of 229 articles in 2023. The blue dotted line in Figure 1 is a fitted exponential trend line showing the growth trend of the number of records. Research on the SoA has shown a gradual upward trend over the past few decades; especially after 2010, the growth rate has accelerated significantly, consistent with the exponential trend line. In general, SoA research has grown rapidly in recent years, and this trend is likely to continue.
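The exponential trend line in Figure 1 can be reproduced, in principle, by least-squares fitting on log-counts. The sketch below uses invented yearly counts (only the 2023 peak of 229 matches the text) purely to illustrate the procedure:

```python
import math

# Fit an exponential trend y = a * exp(b * t) to yearly publication counts
# by ordinary least squares on log-counts. The counts below are synthetic
# placeholders, not the actual WoS record numbers.
years = [2005, 2008, 2011, 2014, 2017, 2020, 2023]
counts = [8, 15, 30, 55, 95, 160, 229]

t = [y - years[0] for y in years]
log_c = [math.log(c) for c in counts]

# OLS for log(y) = log(a) + b * t
n = len(t)
mean_t = sum(t) / n
mean_l = sum(log_c) / n
b = sum((ti - mean_t) * (li - mean_l) for ti, li in zip(t, log_c)) / \
    sum((ti - mean_t) ** 2 for ti in t)
a = math.exp(mean_l - b * mean_t)

# Annual growth rate implied by the fitted exponent
growth = math.exp(b) - 1
print(f"fit: y = {a:.1f} * exp({b:.3f} * t), ~{growth:.1%} growth per year")
```

A positive fitted exponent `b` corresponds to the accelerating growth visible in the figure.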
In addition, we used VOSviewer to construct a keyword co-occurrence network diagram: we extracted keywords from the “sense of agency”-related literature retrieved from the Web of Science database and retained keywords with more than 50 occurrences for analysis. The minimum link strength between terms in VOSviewer was set to two. The resulting network falls into three clusters: (i) Cluster 1 in red, (ii) Cluster 2 in green, and (iii) Cluster 3 in blue. Each cluster shows the relationships among its terms. Keywords are marked with colored circles whose size is positively correlated with the occurrence of the keyword in titles and abstracts; thus, the size of the label and circle is determined by frequency of occurrence [30,31]. The more frequently a keyword appears, the larger its label and circle, as shown in Figure 2.
The red cluster focuses on the experimental and psychological aspects of the sense of agency. Keywords such as “action”, “actor”, “effect”, “self”, and “soa” are central, indicating a strong emphasis on empirical research involving actors and their actions. Concepts such as intentional constraints and ownership are also important, indicating a need to investigate the cognitive and perceptual mechanisms underlying the sense of agency. The blue cluster appears to focus on the phenomenological and subjective aspects of the sense of agency, while the green cluster is centered on education, social identity, and community practices. The keywords “action”, “actor”, “effect”, and “self” in the red cluster are the most central and frequently occurring, highlighting the literature’s primary focus on these concepts. In addition, the map shows important interdisciplinary connections, especially between psychological experiments and phenomenological/contextual research.
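The co-occurrence network underlying Figure 2 is built by counting how often keyword pairs appear in the same record and discarding rare terms and weak links. Below is a minimal sketch of that counting step, with invented keyword lists and thresholds scaled down from the ones used in this review:

```python
from collections import Counter
from itertools import combinations

# Toy bibliographic records: each inner list holds the keywords of one
# article. These lists are invented for illustration only.
records = [
    ["sense of agency", "action", "self"],
    ["sense of agency", "action", "intentional binding"],
    ["sense of agency", "self", "ownership"],
    ["action", "self", "intentional binding"],
]

# Count keyword occurrences and pairwise co-occurrences (links).
occurrences = Counter(kw for rec in records for kw in rec)
links = Counter()
for rec in records:
    for a, b in combinations(sorted(set(rec)), 2):
        links[(a, b)] += 1

MIN_OCCURRENCES = 2    # analogous to the occurrence threshold (50 in the review)
MIN_LINK_STRENGTH = 2  # analogous to the minimum link strength of two

kept_terms = {kw for kw, n in occurrences.items() if n >= MIN_OCCURRENCES}
kept_links = {pair: n for pair, n in links.items()
              if n >= MIN_LINK_STRENGTH and set(pair) <= kept_terms}
print(kept_terms)
print(kept_links)
```

Rare terms such as `"ownership"` (one occurrence) drop out, exactly as VOSviewer's thresholding prunes infrequent keywords before clustering.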
This review examines the publication count from 2005 to 2024 across the five most significant fields: experimental psychology, neurosciences, education & educational research, multidisciplinary sciences, and computer science, as shown in Figure 3. The following key observations are made:
  • Experimental psychology and neurosciences: Both fields experienced declines after reaching their peaks in 2021 and 2020, respectively.
  • Education & educational research and multidisciplinary sciences: Although the number of SoA papers in these fields declined after reaching their peaks, a rebound occurred in 2021.
  • Computer science: Over the years, the number of published articles in this field has shown an overall upward trend. Since 2018, it has experienced rapid growth, culminating in a peak in 2023. We anticipate further increases in publications in the field.
In addition, interdisciplinary collaborations often yield fresh perspectives and encourage greater scholarly output. Notably, in 2016, such collaborations may have contributed to a rapid increase in the number of published articles. Subsequently, in 2018, the maturation and evolution of theoretical frameworks related to the SoA may have prompted researchers to delve deeper into this area, resulting in a surge in the number of papers published in 2019. Furthermore, technological advances—such as brain imaging, wearable sensors, and computational modeling—likely facilitated more nuanced research on the SoA, which may be the reason for the rapid growth in the number of publications in computer science after 2020.
This narrative review aims to provide a comprehensive understanding of the SoA in HMI systems. By systematically reviewing the existing literature, this study identifies the definition of the SoA, its influencing factors, measurement methods, and improvement strategies. This review focuses on HMI systems in which humans and machines collaborate through human–machine interfaces to accomplish tasks. The machines involved specifically refer to smart machine agents, i.e., physical machines with perceptual, computational, and action capabilities [32]. Data were extracted and a thematic analysis was performed to identify common themes, trends, and research gaps; to the best of our knowledge, there is little research in this area. Furthermore, this review differentiates itself from existing reviews on the SoA in HMI systems in the following aspects:
  • This review focuses on the characteristics of HMI systems (complexity, variability, and continuity) and summarizes and analyzes the existing research on methods for SoA measurement and improvement spanning the period from 2000 to 2024, with the following findings:
    • The overall SoA in HMI systems deserves attention, and explicit measurement still has great potential.
    • Improving transparency and employing appropriate automated assistance can effectively improve the SoA.
  • This review provides suggestions for SoA research. Routes to improve the SoA in HMI systems are first identified. Then, two potential directions for SoA research are suggested. Firstly, the impact of the experience obtained by using the identified HMI systems on the SoA deserves more attention. We believe that the SoA dynamically changes in an HMI system, because the operator’s feeling changes, their experience/knowledge increases, and their learning ability is improved with the progressive use of the system. HMI systems are designed as long-lifespan solutions, so the SoA in both the transition stage and stable stage should be considered when designing and using HMI systems. Secondly, we explored the potential of SoA as a bond to facilitate seamless communication between humans and machines. We emphasized that the accurate and real-time measurement of dynamic changes in human behavior is critical for achieving unconscious communication.
  • This review proposes a development route, which is divided into three stages: machine-centric, human-centric, and human–machine integration. The first stage, machine-centric, aims to increase the operator’s SoA by adjusting the machine. The second stage, human-centric, mainly develops human–machine communication (HMC) technology, allowing machines to communicate unconsciously with humans. In the third stage, human–machine integration, the machine begins to have “thoughts” and understand human emotions, intentions, and feelings. The machine adjusts itself to adapt to humans, moving toward a tacit agreement with humans.
  • In addition, the potential to apply existing gaming platforms as HMI systems in SoA research is analyzed.
The rest of this review is organized as follows. The definition and influencing factors of the SoA are introduced in Section 2. In Section 3, we review in detail how to measure the SoA in discrete tasks commonly considered in the literature and give suggestions on measuring the SoA in continuous tasks. In Section 4, we summarize and analyze research on how to improve the SoA in three typical HMI systems (teleoperation systems, driving assistance systems, and human–robot joint actions) and further discuss the SoA improvement in general HMI systems. Two potential directions for research on the SoA are discussed in Section 5, while Section 6 proposes a seven-level development route and discusses the potential of gaming platforms for analyzing the SoA in HMI systems. Finally, we conclude the review in Section 7.

2. The SoA: Definition and Influencing Factors

2.1. The Definition of the SoA

The SoA refers to the subjective experience of controlling one’s actions and, through them, external events [33,34]. Wen [35] considered the SoA to contain two layers. The first layer is body agency, which is the SoA that people have over controlling their bodies. The second layer is the feeling of controlling external events, i.e., external agency. For example, the light is turned on by pressing a switch. The feeling of controlling the body by pressing the switch is the body’s agency. And, the light is turned on as an outcome of the action of pressing the switch, which is the external agency. In HMI systems, only external agency is considered relevant [36].
There are two explanations for how one attributes agency to oneself [37]. The first one is the comparator model (CM) [20,38,39,40,41,42], which is also the most common explanation, as shown in Figure 4. An HMI system is usually intended to complete tasks or achieve goals. Before taking an action, the agent forms an intention and translates it into a motor plan. The motor plan predicts the outcome of the intended action. The agent then performs the action according to the motor plan. After taking the action, the agent compares the perceived outcome of the action with the predicted outcome. When the comparison is consistent, the agent obtains a strong SoA, while an inconsistency results in a weak SoA.
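The comparator logic can be summarized in a few lines. The sketch below is a deliberately simplified illustration of the CM rather than a model from the cited literature; the scalar outcomes and the tolerance value are assumptions:

```python
# Minimal sketch of the comparator model (CM): the SoA is strong when the
# outcome predicted by the motor plan matches the perceived outcome, and
# weak when they diverge. The tolerance is an illustrative assumption.

def comparator_soa(predicted_outcome: float,
                   perceived_outcome: float,
                   tolerance: float = 0.1) -> str:
    """Compare prediction with perception; congruence yields a strong SoA."""
    error = abs(predicted_outcome - perceived_outcome)
    return "strong SoA" if error <= tolerance else "weak SoA"

# A steering action predicted to turn the vehicle by 5 degrees:
print(comparator_soa(5.0, 5.02))  # congruent feedback
print(comparator_soa(5.0, 9.0))   # automation intervened; prediction violated
```

The second call mimics the assisted-driving case discussed above: when automation alters the outcome beyond what the operator predicted, the comparison fails and agency weakens.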
The second explanation is the multifactorial weighting model (MWM), which extends the CM [42,43,44,45,46]. The SoA can be separated into two related concepts: feelings of agency (FoAs) and judgments of agency (JoAs). The FoA refers to the pre-reflective, experiential sense of being an individual agent of action, involving implicit, low-level sensorimotor processes, similar to the components of the CM. The FoA is influenced by feed-forward cues, proprioception, and sensory feedback. The JoA refers to the cognitive assessment of an agent’s control over their actions and outcomes; higher-order factors, including thoughts, intentions, social cues, and situational cues, all contribute to the JoA, as shown in Figure 5 [42]. The SoA is a dynamic interplay between the JoA and FoA, where the FoA provides the experiential basis and the JoA offers the cognitive evaluation of agency.
Although many cues have been found to promote agency [41,43,47,48,49], previously studied systems mainly relied on a single cue. There are usually multiple cues available, so the SoA produced by a single cue is weaker than that elicited by multiple cues. According to the context, the reliability of these cues is weighted for agency attribution toward a stronger SoA [43], because multiple cues improve the consistency between outcomes and expectations. In short, the consistency between outcomes and expectations is the key to a high SoA. Both the CM and MWM produce a strong SoA when the outcomes and expectations are congruent. In addition, Wen [36] believes that in an autonomous driving system, whether the vehicle’s movements match the driver’s intentions is a higher-level comparison that is more conducive to maintaining the SoA. On the other hand, Howard et al. [37] proposed that the SoA also depends on the action–effect relationship after the operation, that is, the post hoc inference account. However, this review believes that the post hoc inference account can be included in the MWM.
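One simple way to picture the MWM's cue weighting is as a reliability-weighted average of agency evidence, so that several congruent cues yield a stronger SoA than any single cue. The cue names, evidence values, and reliabilities below are illustrative assumptions, not parameters from the cited studies:

```python
# Sketch of reliability-weighted cue integration in the spirit of the MWM:
# each cue reports agency evidence in [0, 1], and cues are combined in
# proportion to their assumed reliability.

def weighted_agency(cues: dict) -> float:
    """cues maps name -> (agency_evidence, reliability); returns SoA in [0, 1]."""
    total_rel = sum(rel for _, rel in cues.values())
    return sum(ev * rel for ev, rel in cues.values()) / total_rel

single_cue = weighted_agency({"sensory feedback": (0.7, 0.6)})
multi_cue = weighted_agency({
    "sensory feedback": (0.9, 0.6),   # low-level, FoA-type cue
    "prior intention": (0.8, 0.9),    # higher-order, JoA-type cue
    "situational cue": (0.85, 0.4),
})
print(f"single cue: {single_cue:.2f}, multiple congruent cues: {multi_cue:.2f}")
```

With several congruent cues the weighted estimate exceeds the single-cue one, matching the observation that multiple cues improve the consistency between outcomes and expectations.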

2.2. Influencing Factors of the SoA

The SoA is complex and influenced by many factors, such as intention [50,51], effort [37,52], social interaction [19,53], etc. Due to the complexity, variability, and continuity of HMI systems, in most cases, there are multiple factors influencing the SoA at the same time. We briefly introduce the main influencing factors below.

2.2.1. Attention and Cognitive Resources

Attention and cognitive resources are crucial, especially in dangerous environments. Reduced attention resources weaken explicit ratings of agency [54]. Conversely, a strong SoA while controlling objects increases task attention [55]. Experiments on cognitive load have shown that, in complex tasks, insufficient cognitive resources lead to reduced control performance and awareness efficiency [56,57,58,59], which makes it difficult for operators to attribute outcomes to their own actions. Studies in experimental psychology on physical and mental effort [37,60,61,62,63] have found that such effort consumes cognitive resources and causes fatigue, manifested as a reduced ability to inhibit attention shifts and exclude irrelevant information, prolonged subjective judgment time, and reduced sensory ability [64], resulting in a poor SoA. A recent review [25] emphasizes that cognitive load and attentional demands greatly impact the SoA: when individuals are distracted or under stress, their sense of control and autonomy diminishes. This underscores the importance of managing attention in environments requiring high cognitive engagement to maintain a strong sense of autonomy in HMI systems. Nataraj et al. [65] also suggested that a loss of attention can reduce the SoA in HMI systems.

2.2.2. Operating Performance

In most cases, the reason for using HMI systems is that the operating tasks are challenging, human operators cannot or are not willing to complete them alone, or their efficiency and accuracy are not acceptable. In these systems, the operating performance affects the SoA. It is interesting that in car manufacturing, aviation manufacturing, electronic product processing, etc., when the operating performance improves, even if the operator knows that the performance is improved largely due to the help of machines, the operator still gains a strong SoA [66,67,68]. In this situation, with the help of machines, operators complete tasks better than if they completed them alone, so the improved operating performance increases their confidence and self-affirmation; therefore, the expectation becomes more congruent with the actual performance, resulting in a greater SoA [69]. Even if the performance improved for reasons other than the operator’s actions, the improvement may be “wrongly” attributed to their control, which still enhances the operator’s SoA [68].

2.2.3. Automation Assistance

The level of automation has always been a key issue in HMI systems, and it is sometimes expressed in opposition to the degree of operator involvement: a higher degree of automation means a lower degree of operator involvement. The level of automation is one of the criteria for grading HMI systems; for example, SAE International defines six driving automation levels, from 0 to 5, based on how much automation is applied in a vehicle [36]. However, there is some controversy about the impact of the level of automation on the SoA. Some researchers believe that increasing the level of automation reduces the operator’s SoA [70,71,72,73,74,75]; Albutihe [76] suggested that the more automated the tool is, the less SoA participants experience. Others, however, find that the level of automation has a positive impact on the SoA [67,77,78]. It appears that, depending on the nature of the task, different levels of automation may affect the SoA differently. Since many factors affect the SoA, controversy is to be expected when only the level of automation is considered; in general, though, the level of automation can regulate the SoA in an HMI system.

2.2.4. System Transparency

Transparency is an important indicator for evaluating the performance of devices. In the field of HMI, transparency is defined as the degree of the user’s understanding of the machine in human–machine systems (HMSs) [79]. Transparency in HMSs mainly manifests in two aspects:
  • Machine: It reflects the information that the machine feeds back to the operator.
  • Operator: It is the degree of the operator’s understanding of the impact of their actions.
Transparency in the machine aspect refers to the information fed back by the machine, which does not require operators to learn and is not affected by operators. Transparency in the operator aspect refers to the operator’s understanding of the machine, which is the operator’s expectation of the machine’s response. Transparency in the operator aspect will be affected by the operator’s operating experience and knowledge. In high-transparency HMI systems, the information fed back by the machine enables the operator to perceive the current status of the machine and makes the operator feel like the outcomes were caused by their actions. This is conducive to the operator attributing the outcomes to themselves [80]. In the operator aspect, transparency enables the operator to understand the outcomes of their actions, which can improve the predictability and acceptability of the system and then improve the SoA [81].

3. Measurement of the SoA

Usually, the analysis of the SoA in an HMI system is complex because there are many factors influencing the SoA, and some of them are interrelated: for example, attention resources and cognitive resources affect concentration and performance, and, in turn, performance affects attention resources. Meanwhile, the SoA is also affected by human subjective perception; therefore, it is challenging to measure the SoA directly, effectively, and accurately. At present, there are mainly two methods for measuring the SoA: explicit measurement and implicit measurement.

3.1. Explicit Measurement

The explicit measurement of the SoA is mainly based on the operator’s subjective feeling when using an HMI system, and a questionnaire can be used to measure the SoA directly. At present, there are two main explicit measurement approaches: (1) the binary judgment method [82] and (2) the rating method [83,84]. The literature shows that spatial and temporal biases affect the explicit measurement of the SoA: the actual status of the controlled objects can be wrongly attributed to other operators due to spatial distortion and time delay [83,85]. However, if the operators know about the existence of spatial and temporal biases, the impact of these biases is reduced [35]. Explicit measurement is also affected by persistent, long-term individual differences related to cognitive ability or personality, such as the operator’s trust in and understanding of the machine. Although explicit measurement is susceptible to the operator’s subjective feelings and the experimental conditions, as long as the sample size is large enough, the operator’s subjective influence can be diluted by data averaging [36]. Explicit SoA measurement mainly uses the CM and MWM to attribute outcomes to the operator’s actions and then actively evaluates the SoA.

3.2. Implicit Measurement

Implicit SoA measurement can avoid the influence of the operator’s subjective feelings, and it is also conducive to revealing the cognitive mechanisms behind the SoA. Many studies have shown that the perceptual intensity of, and neural response to, self-initiated sensory events are reduced [86] and that the subjective interval between an action and the observed outcome in active movements is compressed [87]. Three methods are commonly used: sensory attenuation, intentional binding, and electroencephalography (EEG)-based measures.
Sensory attenuation refers to the phenomenon of the reduced subjective perception of self-initiated actions; for instance, self-generated sound stimuli are not as strong as external stimuli [88]. There have been many studies on the mechanism of sensory attenuation [38,89,90,91,92,93]. There are mainly two mechanisms to explain sensory attenuation [36]. The first is the central cancellation theory, which posits that the sensory prediction generated by the forward model in the motor system cancels out the actual sensory feedback, resulting in reduced actual sensory feedback [38]. The second mechanism is the pre-activation theory [92]. Using the sensory prediction generated by the forward model in the motor system as a baseline, the pre-activation of neurons enlarges the baseline and, therefore, weakens the actual sensory feedback. In both of these mechanisms, the physical sensory feedback does not change, but the brain’s signal processing (subjective perception) leads to sensory attenuation.
Intentional binding refers to the phenomenon in which the perceived time between two events is shortened in conscious awareness [87]. It is commonly measured with two paradigms. The first is the clock-reading paradigm [94], in which participants report the position of the clock hand at the moment they hear a tone after pressing a key [21,87]. The second is the interval estimation paradigm, in which participants estimate the time interval between an action and its outcome [37,95]. In high-SoA HMI systems, operators perceive a shorter interval between an action and the corresponding outcome than the actual one, and a stronger SoA results in a shorter perceived interval. Subjective time perception theory holds that operators have an “internal clock” whose “tick” determines how fast time seems to pass, and this clock’s rhythm changes with arousal and motor activity [96,97,98,99]. It is believed that, due to motor prediction, self-agency contexts slow the rhythm of the “internal clock”, which produces the shortening observed in intentional binding [100].
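In the interval estimation paradigm, the binding effect is simply the mean compression of the reported interval relative to the actual one. A minimal sketch with invented trial data:

```python
# Sketch of the interval estimation paradigm: intentional binding is the
# compression of the reported action-outcome interval relative to the real
# one. The trial values below are invented for illustration.

def binding_effect(actual_ms: list, reported_ms: list) -> float:
    """Mean compression in ms; positive values indicate intentional binding."""
    errors = [a - r for a, r in zip(actual_ms, reported_ms)]
    return sum(errors) / len(errors)

# Active (self-initiated) trials: reported intervals shrink.
active = binding_effect([250, 250, 250, 250], [180, 200, 190, 210])
# Passive (machine-triggered) trials: little or no compression.
passive = binding_effect([250, 250, 250, 250], [245, 255, 250, 240])
print(f"active binding: {active:.0f} ms, passive binding: {passive:.0f} ms")
```

A larger compression in active than in passive trials is the signature taken as implicit evidence of a stronger SoA.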
Recently, the EEG has become an important indicator of the SoA, and psychologists and neuroscientists have made many contributions to EEG-based methods for SoA measurement [94,101,102,103]. In contrast to the operator’s active expression in explicit measurement, EEG-based measurement is considered implicit, although it remains closely related to active consciousness. Measuring the SoA using EEG signals falls into two categories: time–frequency analysis and event-related potential (ERP) analysis. Time–frequency analysis characterizes the oscillations contained in the EEG data by separating the power and phase information at different frequencies; it is a direct way to analyze the SoA from the operator’s EEG data during HMI system use. Some researchers have found a negative correlation between frontal α power in EEG signals and SoA changes through time–frequency analysis [101,104,105,106]. The second method is ERP analysis. An ERP is the EEG potential change evoked by a specific stimulus applied to the sensory system or a part of the brain; it can be measured when the stimulus appears or disappears. The ERP analysis method is mainly based on the study of the sensory attenuation phenomenon [107]. Studies using the 10–20 EEG electrode layout have shown that if the N1 component is suppressed while the P3 component is enhanced, then the SoA is stronger [108,109,110,111].
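As a toy illustration of the time–frequency route, frontal α-band (8–12 Hz) power, which the studies above found to correlate negatively with the SoA, can be estimated from an EEG epoch with a naive DFT. Real pipelines would use Welch or wavelet estimates (e.g., via SciPy or MNE); the sampling rate and the synthetic signal below are assumptions:

```python
import math

# Estimate band power in an EEG epoch with a naive DFT. The epoch is a
# synthetic 10 Hz "alpha" rhythm plus a weaker 4 Hz "theta" component;
# no real EEG data are used.
FS = 250   # assumed sampling rate (Hz)
N = FS     # one-second epoch

def band_power(signal: list, fs: int, lo: float, hi: float) -> float:
    """Sum of DFT power over bins whose frequency lies in [lo, hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

epoch = [math.sin(2 * math.pi * 10 * i / FS)
         + 0.3 * math.sin(2 * math.pi * 4 * i / FS) for i in range(N)]

alpha = band_power(epoch, FS, 8.0, 12.0)
theta = band_power(epoch, FS, 4.0, 7.0)
print(f"alpha power {alpha:.1f}, theta power {theta:.1f}")
```

Tracking this α-power estimate across epochs is the kind of feature a time–frequency SoA analysis would correlate (negatively) with agency ratings.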

3.3. How to Measure the SoA in Continuous Tasks within HMI Systems

Measuring the SoA has always been a key issue, but evaluating the SoA in continuous tasks presents problems for both explicit and implicit measurement. Explicit measurement is easily affected by the operator’s subjective judgment and the experimental conditions; worse, it interrupts the operator, who must pause to report a subjective evaluation of the SoA. Sensory attenuation and intentional binding are designed for single discontinuous tasks and require the introduction of interference items, which is inconvenient for continuous tasks. EEG measurement can handle continuous and complex tasks, but it is easily disturbed by the environment and varies significantly among operators, so the collection of EEG signals places stringent requirements on the experimental environment, the EEG equipment, and the operator; it is therefore unrealistic for continuous tasks, especially when the operator’s movement must be restricted. Moreover, in most studies, physiological signals are used as dependent variables, and these features may also be affected by irrelevant variables; if external variables are not fully controlled, such signals may not provide high accuracy as proxy indicators [36].
So, how can the SoA in continuous tasks using HMI systems be measured? Wen et al. [36] made three suggestions: use physiological signals to decode the SoA, use attention as a proxy for the SoA, or use control-motivated action to indicate the SoA. These three methods hold significant value, and they can reveal the dynamic changes in the SoA over time: as operators gain experience with an HMI system, their familiarity directly affects their SoA. However, these methods may not yield a precise quantification of the SoA. Legaspi et al. [112,113] suggested a data-driven approach that infers SoA levels from sensor data, similar to how human affective states are determined. They propose that SoA changes can be modeled like human emotions in Affective Computing, using physiological, facial, vocal, postural, and gestural signals, among others, with subjective self-assessment as the ground truth. Tao and Tan [114] have shown that such bodily signals accurately reflect changes in affective states. Legaspi et al. [113] used a smartphone app to collect participants’ self-reported data, behavioral logs, contextual data, and goal-related metrics, enabling a comprehensive and multidimensional analysis of the SoA. This multi-faceted data collection captures both subjective experiences and objective behavioral patterns. While using multidimensional data to analyze the SoA is a valuable research direction, this method still requires active reporting by participants. In HMI systems, the key challenge lies in enabling machines to autonomously predict and analyze the operator’s SoA in real time using sensor-derived data, which is crucial for the comprehensive application of the SoA in HMI systems. Potential approaches include the following:
  • Using cameras to recognize the operator’s emotions.
  • Employing eye-tracking and an EEG to analyze changes in the operator’s attention.
  • Analyzing changes in the operator’s physical state through electromyography, blood pressure, and other physiological signals.
  • Utilizing multiple physiological signals to model and numerically analyze the SoA.
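As a purely illustrative sketch of the last item, normalized features from several channels could be fused into a single SoA proxy. The channel names, weights, and linear fusion rule below are our assumptions for illustration, not an established model:

```python
def estimate_soa(features, weights):
    """Weighted fusion of normalized physiological features
    (each in [0, 1]) into a single SoA proxy in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * features[k] for k in weights) / total

# Assumed channels: each value stands in for a normalized score
# produced by the corresponding sensing pipeline.
features = {
    "emotion_valence": 0.7,    # camera-based emotion recognition
    "attention": 0.8,          # eye-tracking / EEG attention index
    "arousal_stability": 0.6,  # EMG / blood-pressure derived
}
weights = {"emotion_valence": 1.0, "attention": 2.0, "arousal_stability": 1.0}
soa_proxy = estimate_soa(features, weights)
```

In practice, such weights would have to be learned against a ground truth (e.g., self-reports, as in the Affective Computing approach of Legaspi et al.), rather than set by hand.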
The integration of these technologies opens new horizons for SoA research in HMI systems. Merat et al. [115] concluded that measures of driver out-of-the-loop can be divided into two main categories: (1) vehicle-based sensors that can assess the degree of the driver's physical control of the vehicle during or immediately after automation and (2) driver-based sensors and measures. Although the study by Merat et al. is not directly related to the SoA, out-of-the-loop situations are associated with a low SoA: the SoA can be considered a composite of many basic experiences, including the experience of intentional causation, the sense of initiation, and the sense of control [8], and maintaining the SoA during tool use or object control can effectively avoid out-of-the-loop problems [36,71]. Therefore, measures of driver out-of-the-loop are valuable for assessing whether a driver is in a low-SoA state (out of the loop), and they also support the four potential methods proposed in this review. Furthermore, HMI systems are designed as long-lifespan solutions; therefore, we suggest that the SoA in HMI systems can be divided into two aspects:
  • Transient SoA: The transient SoA refers to the SoA of an action, and its impact on the environment can be explained using the CM.
  • Overall SoA: The overall SoA refers to the SoA upon the completion of a continuous task (a composite of a series of actions) and the impacts on the environment, which requires operators to compare and analyze the previous and current operation experiences, and finally, the overall SoA is explained using the MWM.
Therefore, not only does the overall SoA in HMI systems need analysis, but the transient SoA during the HMI also deserves attention. Explicit measurement not only quantifies the transient SoA but also measures the overall SoA of HMI systems. So, explicit measurement still has great potential.

4. Improving the SoA

In recent years, a growing number of scholars have made efforts to improve the SoA. For instance, Wen et al. [77] established a joint control model for a human and a driving assistance system and reported that drivers maintained their SoA and improved their driving performance via the driving assistance system. The effectiveness of shared control has also been validated by other studies [55,116,117], which improved the SoA by increasing transparency, showing that providing more detailed information about the machine can improve the acceptability of automated systems and the users' sense of control. Together, these studies demonstrated that by controlling the factors of the SoA, it is possible to maintain or even improve the SoA. This section uses three typical HMI systems as examples, i.e., teleoperation, driving assistance, and human–robot joint action, to analyze and discuss ways to improve the SoA.

4.1. Improving the SoA in a Teleoperation System

Teleoperation has always been an important branch of HMI systems [118] and has a wide range of applications in remote medical care, deep-sea exploration, anti-terrorism, riot control, etc. Teleoperation refers to the remote control of machines or systems by human operators through a user interface, allowing for the interaction with and manipulation of distant environments [119]. In teleoperation, maintaining the operator’s SoA is crucial for effective control and decision-making. Sensory feedback, including haptic feedback, plays a vital role in bridging the physical distance between the operator and the controlled system, thereby enhancing the operator’s sense of control and presence in the remote environment. Therefore, sensory feedback signals are critical for maintaining the SoA in such systems. Sensory feedback can be used to improve the transparency and acceptability of the teleoperation system.
Haptic feedback is one of the most commonly used sensory signals in teleoperation systems. Haptic feedback in teleoperation refers to the use of the sense of touch to convey information from the remote environment to the operator. A practical example is robotic surgery, where surgeons can feel the resistance of tissues through force feedback in the control interface, allowing them to manipulate surgical instruments more precisely and intuitively [120]. Many studies have shown that teleoperation systems with haptic feedback result in more efficient and stable human–machine collaborative operation [121,122,123]. Haptic feedback increases the operator's telepresence and the transparency of the system, as it helps operators not only feel the physical properties of the remote environment but also control the remote machine more accurately. In addition, haptic feedback provides additional perceptual information that enhances perceptual consistency, improves the realism of the operation, and makes it easier for users to immerse themselves in the task, thereby enhancing their SoA [124,125,126]. By adding a sensory channel (touch), haptic feedback integrates multiple forms of sensory information, making the user's operating experience richer and more immersive and helping to improve the sense of subjectivity. Especially in skin-based interactions, haptic feedback acts directly on the user's body, making it easier for the user to perceive the connection between their behavior and their body, thereby enhancing the sense of subjectivity [127]. Therefore, we can affirm that haptic feedback improves the SoA of teleoperation systems, even though the articles on haptic feedback did not discuss the SoA explicitly. Morita et al. [128] designed four different auditory feedback conditions for tracking tasks performed in a virtual environment under different degrees of automation, manipulating virtual robot fixtures. The experimental results showed that auditory feedback suppressed the decline in the SoA at the medium automation level.
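The contribution of haptic feedback to transparency can be illustrated with a highly simplified position-force bilateral teleoperation step, in which the remote contact force is reflected to the operator's device. All gains, the time step, and the one-dimensional environment model are illustrative assumptions, not the architecture of any cited system:

```python
def teleop_step(master_pos, slave_pos, env_stiffness, kp=5.0, kf=1.0):
    """One step of a simplified position-force bilateral loop:
    the slave tracks the master's commanded position, and the
    contact force sensed at the remote side is scaled by kf and
    reflected back to the operator's haptic device."""
    # Slave moves toward the master's commanded position (dt = 10 ms).
    slave_pos += kp * (master_pos - slave_pos) * 0.01
    # Remote contact force: penetration into a stiff wall at x = 0.5.
    penetration = max(0.0, slave_pos - 0.5)
    env_force = env_stiffness * penetration
    # Force reflected to the operator.
    feedback_force = kf * env_force
    return slave_pos, feedback_force
```

Iterating this step lets the operator literally feel the remote wall through `feedback_force`, which is the mechanism by which haptic feedback raises telepresence and transparency.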
Among various teleoperation systems, assistance systems are important for tasks that humans cannot complete independently. For instance, to solve the problem of low work efficiency in the remote operation of construction machinery due to the limited perspective, Tanimoto et al. [129] developed a semiautomatic system that obtained high work efficiency and realized a control feeling similar to manual control. Pacaux-Lemoine et al. [130] studied the application of an assistance system in remote train-driving training, which showed that to overcome the difference between local driving and remote driving, an assistance system must be integrated. Maekawa et al. [131] proposed a context-aware assistance method that improves user performance while retaining the SoA, using Air Hockey (a two-dimensional physical game) as a test platform. Maekawa et al. [131] predicted the trajectory of the puck from real-time-captured video and provided striker control assistance to players according to the context. The results showed that with the assistance system, remote players performed better without affecting their SoA, and players could still experience the excitement of the game.

4.2. Improving the SoA in Driving Assistance Systems

In the past few decades, the combination of automation technology and automobile driving has become increasingly close. The introduction of automation improves not only the overall driving experience and performance [132,133,134] but also driving safety [135,136,137]. However, it also reduces the driver's SoA. Merat et al. [115] proposed precise definitions of in-the-loop, out-of-the-loop, and on-the-loop in the driving environment and emphasized the importance of the human-in-the-loop for automated driving systems, while maintaining the operator's SoA has a positive effect on keeping the driver in the loop [36,71]. Kim [138] analyzed in detail how the driver's motion perception and sense of autonomy change across six levels of automation, providing qualitative insights into how different levels of automation affect the driver's perception of control and responsibility. Since the introduction of automation has a significant impact on the driver's SoA, determining how to maintain and enhance the SoA during driving is a key issue. Yun et al. [72] found that drivers' SoA was generally low when driving assistance or interference was active. Even when the system intervened in steering only when the vehicle left the lane, the SoA of experienced drivers was significantly lower than in manual driving conditions. However, for inexperienced drivers, driving assistance may enhance the SoA. Wen et al. [77] conducted further research, in which the experimental task was to decelerate in a driving simulator to avoid colliding with a vehicle that suddenly cut in front and immediately decelerated. Participants were asked to decelerate and follow the cut-in vehicle. In the assisted condition, deceleration assistance was activated 100 ms after the cut-in vehicle decelerated, resulting in rapid deceleration of the ego-vehicle to a matching speed.
The joint control model of shared human–machine intention maintains the driver’s SoA and improves future driving performance [77]. It was shown that identifying the human driver’s intention and providing less abrupt assistance are necessary steps for future driver-automation collaborative driving systems [36]. Nakashima and Kumada [84] also showed that when the assistance is congruent with the driver’s intention, it may lead to notable goal-sharing effects between the system and the driver, which is essential for maintaining the SoA.

4.3. Improving the SoA in Human–Robot Joint Actions

The robot is an agentive device in a broad sense, with the purpose of acting in the physical world to accomplish one or more tasks [139], with the following components:
  • Agent: Something or someone that can act on its own and produce changes in the world.
  • Device: A device is an artifact whose purpose is to serve as an instrument in a specific subclass of a process.
Robots can autonomously or semi-autonomously complete pre-set goals. Grynszpan et al. [73] compared human–human and human–robot joint actions, in which two participants jointly manipulated two interconnected haptic devices, allowing them to feel each other's force; the participants did not know that their partner was sometimes replaced by a robot. Intentional binding occurred only when working with a human partner. Obhi and Hall [140] and Sahaï et al. [74] showed that a human's SoA is affected by the presence of robots in human–robot joint action. Stenzel et al. [141] found that humanoid robots are conducive to improving the operator's acceptance of and trust in robots, thus improving the SoA. Frenzel and Halama [142] also found that systems with human-like characteristics can enhance the operator's sense of autonomy. Yu et al. [143] showed that the transparency of human–robot joint action and the participant's trust in the robot may be the key to maintaining the SoA in human–robot joint action. Yu et al. [143] designed an augmented reality concept that visualizes the robot's navigation intention and the pedestrian's predicted path. The case of visualizing both the robot's intention and the pedestrian path prediction was compared against the cases of visualizing only the robot's intention and a baseline without augmentation. When either the robot's intention alone or both the intention and the predicted path were visualized, the pedestrian's trust in the robot and the user experience were significantly improved. Although adding the pedestrian's predicted-path visualization hurt the SoA, a questionnaire survey of pedestrians showed that this was due to their lack of trust in robots. Pagliari et al. [144] analyzed human acceptance of AI and confidence in decisions made by machines, which further illustrates the importance of human trust in machines. Takechi et al. [117] and Fu et al. [145] used human experience to train robots, accelerating the robot-training process, improving performance, and making robots behave more and more like humans.

4.4. How to Improve the SoA in General HMI Systems

From the above analysis, one finds that the main methods to maintain or improve the SoA focus on (1) improving the transparency of HMI systems and (2) employing appropriate automation assistance. These are feasible for general HMI systems, too.

4.4.1. Improving the Transparency of HMI Systems

To improve system transparency, there are two common methods. The first is to use sensory feedback. The sensory feedback provided by the machine increases both the operator’s operation experience and the transparency of the HMI system. The second method is to share the intentions of humans and machines. Keeping the machine’s intention congruent with the human’s intention blurs the human’s agency judgment while making humans attribute the outcome to their actions. This also enables humans to understand the machine’s intention and predict the machine’s subsequent behavior, thereby increasing the transparency and predictability of HMI systems. In addition, intention sharing has two aspects:
  • The machine’s intention is shared with the human: the human operator obtains/predicts the robot’s intention in advance before the machine moves [81].
  • The human operator’s intention is shared with the machine: the human operator shares their intention with the machine when performing tasks, and the machine performs the same actions as the human based on the human’s intentions [77].
Improving the transparency of the HMI can enhance the performance of HMI systems and operators’ operating experience. In addition, the use of sensory feedback can help the operator perceive the current behavior and environment of the machine, while shared intentions can improve the predictability and acceptability of the HMI system.

4.4.2. Employing Appropriate Automation Assistance

As reported in the literature [130,131], when the human operator cannot complete a difficult task independently, automation assistance improves the SoA. To improve the SoA, the type of automation assistance first needs to be considered. Tanimoto et al. [129] developed a semiautomatic system for the teleoperation of a construction machine that achieved high work efficiency and a feeling of control comparable to manual control. The awareness of assistance was evaluated by the SoA across automatic, manual, trajectory, and goal assistance. Tanimoto et al. [129] reported that when the semiautomatic system performs goal-oriented assistance rather than trajectory assistance, the SoA is greatly weakened. Yu et al. [143] found that visualizing the pedestrian's predicted path alone hurts the SoA. Some studies also found that an appropriate level of automation assistance improves the SoA. The SoA and sense of body ownership usually increase as the user control weight increases; however, a user control weight of 75% yielded a higher SoA than one of 100% [117]. Appropriate normal haptic assistance achieved better driver-automation interaction than stronger haptic assistance [116]. Ueda et al. [146] reported that agency awareness increased with the automation level, but when the automation level exceeded 90%, agency awareness began to decline.
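The notion of an intermediate user control weight (e.g., the 75% condition reported in [117]) can be sketched as a simple linear blending of human and automation commands. The function below is an illustrative abstraction, not the scheme used in the cited studies:

```python
def blended_command(u_human, u_auto, human_weight):
    """Shared control: blend the human and automation commands.
    The studies cited above suggest an intermediate human weight
    (e.g., ~0.75) can yield a higher SoA than full manual control,
    while weights near 1.0 remove the benefit of assistance."""
    assert 0.0 <= human_weight <= 1.0
    return human_weight * u_human + (1.0 - human_weight) * u_auto

# Example: human steers mildly, automation suggests a stronger input.
u = blended_command(u_human=0.4, u_auto=0.8, human_weight=0.75)
```

A practical system would additionally adapt `human_weight` to task difficulty and operator skill, as discussed in Section 4.4.2.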
Although attention and cognitive resources have been studied in relation to the performance of HMI systems [116], the performance is usually used as the dependent variable to analyze the influence of some other control factors on the SoA.
In summary, improving system transparency and introducing appropriate automation assistance are the most common methods to improve the SoA. Among them, improving transparency is currently the simplest and most effective and applies to all HMI systems. Moreover, if increased experience with an HMI system reduces the limitations of this method, transparency will be the first option to consider for the SoA (the impact of user experience on the SoA is discussed in the next section). The use of automation assistance needs to take into account the nature of the task and the level of assistance. In tasks that the human operator cannot complete independently, the assistance system plays a positive role in the SoA. However, in tasks that the human operator can complete independently (for example, in the case of an experienced operator), the assistance system can hurt the SoA [67]. Therefore, an assistance system can significantly improve the SoA, but only when it is properly designed and employed according to the operator's knowledge and experience.

4.4.3. Hybrid Methods

Currently, combining shared intentions, sensory feedback, and assistance systems is developing as an emerging research direction; for example, shared haptic assistance systems perform well in teleoperation and driving assistance [147,148,149,150]. Among them, Luo et al. [147] proposed a hybrid shared control method for a mobile robot with omnidirectional wheels. A human partner uses a six-degree-of-freedom haptic device and an electromyography (EMG) signal sensor to control the mobile robot. A hybrid shared control approach based on EMG and an artificial potential field is exploited to avoid obstacles according to the repulsive and attractive forces, thereby enhancing the human perception of the remote environment when force feedback from the mobile platform is provided. Both system performance and transparency are improved, and the SoA is thus enhanced.
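The artificial-potential-field component of such a hybrid scheme can be sketched as follows: an attractive force pulls the robot toward its goal, while repulsive forces push it away from nearby obstacles, and the resultant can also be rendered to the operator as force feedback. The gains and influence radius below are illustrative assumptions, not values from the cited work:

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
    """2-D artificial potential field: attractive force toward the
    goal plus repulsive forces from obstacles within radius rho0."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < rho0:
            # Repulsive magnitude grows sharply near the obstacle
            # and vanishes at the influence radius rho0.
            mag = k_rep * (1.0 / d - 1.0 / rho0) / (d * d)
            fx += mag * dx / d  # direction away from the obstacle
            fy += mag * dy / d
    return fx, fy
```

In a shared-control setting, this resultant force would be combined with the EMG-derived human command and, when fed back through the haptic device, lets the operator feel obstacles in the remote environment.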
Wen et al. [36] also reported a similar view that shared control can weaken the SoA, but if co-actors share the agent’s intention and co-action accordingly, then good performance can be achieved while the SoA is maintained at a high level. Benloucif et al. [151] proposed a trajectory-planning algorithm for shared control, which automatically adapts and replans the vehicle trajectory according to the driver’s action. The algorithm can provide active driving assistance in the obstacle avoidance process, which enables drivers to follow their desired trajectory more easily, thus obtaining better vehicle control sense and driving comfort. Benloucif et al. [151] enabled machines to adapt to humans, which increases the consistency between machine intentions and human intentions; therefore, the SoA greatly increases while maintaining high performance.

5. Discussion

The existing research on the SoA is in-depth and has produced many valuable outcomes, but most studies have ignored the impact of experience (the period of time spent using HMI systems) on the SoA. We believe that the impact of experience is critical, because HMI systems are designed as long-lifespan systems. If human experience can be incorporated into the machine's "thoughts", then the machine can understand the operator. Therefore, determining how to make machines understand humans and adjust themselves according to human behavior is critical for high-SoA HMI systems. For this purpose, the SoA, as a bond for HMC, is an important research direction, whereby the machine can obtain the SoA as the subjective feeling of the operator.

5.1. The Impact of the Operator’s Experience on the Dynamically Changing SoA

HMI systems are designed as long-lifespan solutions that can be used frequently. As the time spent using an HMI system increases, the operator's understanding of, trust in, and proficiency with the machine improve, because human operators have strong learning abilities, whether through intentional learning or unconscious concomitant learning. Therefore, in the process of using HMI systems, the operator's SoA changes dynamically. Chatila et al. [32] defined agency as an autonomous organization that adaptively regulates its coupling with its environment and contributes to sustaining itself as a consequence; therefore, as agents, humans will adaptively regulate their coupling with HMI systems in the process of using them, seeking a higher SoA. A stable HMI state can be achieved after long-term use of the HMI system. This is critical because, in most cases, out-of-the-loop problems are caused by operators' over-reliance on machines. For the SoA, which changes dynamically with the operator's experiences and subjective feelings, the overall SoA over a period can be assessed. Based on the dynamic change in the operator's SoA, the use of an HMI system can be separated into three stages:
  • Initial stage: After the minimum learning time, the operator has just started to use the HMI system, and the SoA is greatly affected by the operator’s subjective feelings.
  • Transition stage: At this stage, the operator’s proficiency in using the HMI system gradually increases over time, so the transparency and performance gradually improve, which finally leads to an increase in the SoA.
  • Stable stage: The operator has mastered the HMI system proficiently and acquired plenty of experience and an unconscious CM. At this stage, the SoA reaches a stable high state.
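The three stages above can be caricatured as a saturating curve of the SoA over usage time, with a fast transition phase followed by a plateau. The functional form and parameter values below are our illustrative assumptions, not a fitted model:

```python
import math

def soa_over_time(t, s0=0.4, s_max=0.9, tau=10.0):
    """Illustrative saturating-exponential model of the SoA:
    starts near s0 (initial stage, dominated by subjective
    feelings), rises during the transition stage as proficiency
    and transparency grow, and plateaus near s_max (stable stage)."""
    return s_max - (s_max - s0) * math.exp(-t / tau)

initial = soa_over_time(0.0)    # near s0: operator has just started
stable = soa_over_time(100.0)   # approaches s_max: proficient use
```

A longitudinal study, as proposed below, could fit such a curve per operator and per system to locate the phase boundaries empirically.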
Most of the HMI systems in the literature are in the initial stage. Of course, there are a few experiments with a simple design, a short transition stage, and a quick entry into the stable stage. However, very few papers focus on the impact of the operator’s experience on the SoA. Even in studies that set up long-term experiments or simpler experiments including the above three stages, they only analyzed the overall SoA or the transient SoA and did not analyze the dynamic change process of SoA. Legaspi et al. [152] pointed out that only a few research works have observed the dynamic changes in the SoA, and none have attempted to automatically model the dynamic behavior of the human SoA in real time within natural or complex settings. However, the SoA measurement methods proposed by Wen et al. [36] and Legaspi et al. [112,113] for HMI systems show potential for measuring the dynamically changing SoA of humans. Investigating the impact of an operator’s experience on the dynamically changing SoA, therefore, presents a valuable area for future research. To guide future research in this area, we propose the following theoretical framework:
  • Phase Definition and Characterization: Clearly define the three phases of HMI use: initial, transitional, and stable. Identify key characteristics and measurable variables for each phase (e.g., subjective perception, proficiency, transparency, and performance).
  • Dynamic SoA Measurement: Leverage and improve existing SoA measurement methods [36,112,113] to capture real-time changes in the SoA. Develop or adopt tools that can automatically model dynamic human SoA behavior in natural and complex environments [113].
  • Experiential Factors: Investigate the specific factors that lead to SoA changes in each phase (e.g., learning mechanisms, trust development, and the operator's mental models). Distinguish between conscious learning and unconscious concomitant learning processes.
  • Longitudinal Studies: Design long-term studies that track operators’ SoA over a longer period of time to observe transitions between phases. Include frequent and detailed evaluations to capture subtle changes in the SoA. Apply the framework to different types of HMI systems (e.g., autonomous vehicles and remote control systems) and test and improve the framework in real-world environments.
By systematically studying the dynamic changes in the SoA as operators gain experience with HMI systems, we can better understand and enhance the interaction between humans and machines, ultimately improving system design and operator performance.

5.2. The SoA as a Bond of HMC

We posit that a potential solution for enhancing the SoA in HMI systems is enabling machines to obtain the operator's intentions and feelings without disrupting human activities. The machine can then dynamically adjust itself to align with the operator's needs and feelings. While explicit information about the operator can be obtained using sensors and data processing technology, capturing the full spectrum of the operator's intentions and emotions remains challenging. For instance, employing cameras and image processing techniques to recognize human expressions provides some explicit information, but it falls short of fully capturing the operator's nuanced feelings and intentions. The rapid advancements in HMC technology now enable machines to discern the operator's intentions and emotions directly through techniques such as computer vision and speech recognition, which can significantly enhance the SoA. However, during such interactions, the operator must actively express their intentions and emotions, which further amplifies the damage that the presence of machines causes to the human SoA [73,74,140].
Takechi et al. [117] and Fu et al. [145] used human experience to train robots so that machines could act like humans. The human experience is added to the robot's thoughts; therefore, the robot can think like a human. In addition, Benloucif et al. [151] proposed a trajectory-planning algorithm for shared control, which can be regarded as the thought of a machine. Chatila et al. [32] proposed that a machine's understanding of its environment (including the machine itself and its impacts on the environment) requires self-awareness. When the machine has self-awareness, it can feel its behavior and the impact of its behavior on the environment, i.e., synthetic agency [153]. Legaspi et al. [153] proposed a two-pronged perspective of the SoA in both humans and AI: first-person and second-person perspectives (Fp and Sp, respectively). In Sp, the AI (or human) can perceive the SoA of the other party (human or AI) through a model or mental representation of the other party's Fp. In this way, the AI can understand and adapt its Fp to enhance, rather than undermine, the human operator's Fp. Thus, the SoA serves as a bond for unconscious communication between humans and machines. By adjusting itself based on the operator's dynamic SoA, the machine can achieve unconscious HMI without disrupting the operator. To fully realize this vision, meticulous attention should be devoted to measuring the SoA in the HMI system and analyzing its influencing factors. The SoA can be integrated into HMI systems as a communication channel, enabling more intuitive and effective interactions between humans and machines. To guide future research in this area, we propose the following theoretical framework:
  • Measure and analyze the SoA: Use tools to accurately measure the operator’s SoA. This can include explicit measurement (e.g., questionnaires) [87] and implicit measurement [87] (e.g., intentional binding). Analyzing these data can help elucidate how different factors affect the SoA and improve the HMI system accordingly.
  • Implement advanced sensing technologies: Utilize sensors that can capture a variety of explicit information from operators, such as gestures, facial expressions, and voice intonation. These data can be processed using computer vision and speech recognition techniques to infer the operator’s intent and emotions [113].
  • Develop adaptive algorithms: Create algorithms that enable the machine to adjust its behavior in real time based on the operator’s inferred intent and emotions. This may involve machine learning models trained on human experience data to dynamically predict and respond to the operator’s needs [151].
  • Incorporate self-awareness into machines: Design systems with self-awareness so that machines understand their own behavior and its impact on the environment. This self-awareness can be modeled using a synthetic agent framework, where machines can evaluate and modify their behavior to align with the operator’s SoA [112].
  • Iterative testing and feedback: Regularly test HMI systems with real users to collect feedback and make iterative improvements. This process ensures that the system is aligned with user needs and continues to effectively enhance the SoA.
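The framework above implies a closed loop in which the machine repeatedly estimates the operator's SoA and adjusts its level of assistance accordingly. A minimal sketch of such an adaptation rule follows; the target value, gain, and proportional update rule are all illustrative assumptions:

```python
def adapt_assistance(assist_level, soa_estimate, soa_target=0.8, gain=0.5):
    """If the estimated SoA falls below the target, reduce the
    machine's share of control to return agency to the operator;
    if the SoA is comfortably high, assistance may be increased.
    The result is clamped to the valid range [0, 1]."""
    new_level = assist_level + gain * (soa_estimate - soa_target)
    return min(1.0, max(0.0, new_level))

# A low estimated SoA causes the assistance level to be scaled back,
# without any explicit reporting by the operator.
level = adapt_assistance(assist_level=0.6, soa_estimate=0.5)
```

In a full system, `soa_estimate` would come from the sensing and inference layers of the framework (steps 1–2), closing the loop between measurement and adaptation.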

6. The Proposed Route for the Future Development of HMI Systems

Currently, most research on the operator's SoA in HMI systems focuses on shared control and transparency, in which machines are adjusted by humans to enhance the human SoA. However, with the recent development of HMC technology, a key research question for HMI systems is, "how can machines be enabled to accurately perceive human intentions and emotions?" Machines achieve this by using sensors and data processing techniques to understand human intentions and emotions. Nevertheless, due to the machine's limited understanding capacity, operators can sense that they are communicating with a machine, which harms their SoA. Consequently, the key challenge in current HMC technology is achieving unconscious communication between operators and machines, minimizing negative impacts on the operator's SoA. Furthermore, if the SoA serves as a bond for HMC, it enables unconscious communication, allowing machines to adjust themselves according to the human SoA without causing any disruption to humans; therefore, a stronger SoA can be achieved. Inga et al. [154] proposed a symbiotic human–machine system and elaborated a multivariate perspective of symbiosis as the highest form of interaction in physically coupled HMSs, characterized by the oneness of the human and the machine. However, we believe that the ultimate goal of HMSs should be to achieve spiritual coupling between the human and the machine, where the machine becomes the spiritual companion of the human. Summarizing the above ideas, we suggest a seven-level development route, in which the maturity of the HMI system is divided into the following three stages, consisting of seven levels, as shown in Figure 6:
  • The first stage is machine-centric. This stage aims to increase the operator’s SoA by adjusting the machine. To maintain the balance between the SoA and operation performance, the machine’s weight in shared control is adaptively adjusted according to the difficulties of tasks (level 1). Furthermore, when the transparency of the HMI system is increased, the operator can understand the intention of the machine and, therefore, trust and accept the machine (level 2). Most of the existing HMI systems are in this stage.
  • The second stage is human-centric, which aims to enhance the SoA by HMC. In this stage, the machine begins to gradually act like a human. In the third level, “Exist” in Figure 6, the human operator gets used to and accepts the existence of the machine and, therefore, does not pay attention to the machine deliberately. Even if the human operator and machine communicate preliminarily through control instructions and feedback information in level 3, the existence of the machine does not affect the SoA. At the fourth level, “Communicate”, the machine begins to have the ability to understand human intention preliminarily via motions, body language, linguistic information, and so on, and then gives feedback accordingly. At this level, the human operator also begins to slowly regard the machine as their collaborator. This level marks a change in the way the human and the machine interact. When it develops to the fifth level, “Unconscious”, the machine fully understands human intention and emotions, with a certain logical judgment ability. So, it can communicate freely with the human operator like a person, and the human operator already regards the machine as their collaborator from the previous level. At level 5, when performing HMI tasks, the human operator is not aware that they are collaborating with a robot.
  • The third stage enhances the SoA by making machines “think”. At the sixth level, “Symbiosis”, human transparency begins to be considered. The machine begins to have “thoughts” and to understand human emotions, intentions, and feelings, like a person, and then adjusts itself to adapt to the human based on the operator’s perceived emotions, intentions, and feelings; mental human–machine integration is thereby achieved. At the highest level, “Accompany”, the machine also has emotions and feelings of its own, so it is no longer a collaborator but a partner, or even a confidant. In such HMI systems, the machine expresses its feelings through expressions, making humans feel that they are interacting with a person rather than a machine.
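The level-1 mechanism in the first stage, adaptively weighting the machine's contribution in shared control according to task difficulty, can be sketched as follows. The linear blending rule, the difficulty-to-weight mapping, and the bounds on machine authority are illustrative assumptions, not a method taken from the reviewed literature.

```python
def machine_weight(task_difficulty: float) -> float:
    """Map a task difficulty in [0, 1] to the machine's share of control.

    Illustrative assumption: harder tasks hand more authority to the
    machine, but the weight is capped so the operator always keeps
    enough control to preserve their sense of agency.
    """
    w_min, w_max = 0.1, 0.7  # hypothetical bounds on machine authority
    d = max(0.0, min(1.0, task_difficulty))  # clamp difficulty to [0, 1]
    return w_min + (w_max - w_min) * d


def blended_command(u_human: float, u_machine: float, task_difficulty: float) -> float:
    """Linear shared-control blend of the human and machine commands."""
    w = machine_weight(task_difficulty)
    return (1.0 - w) * u_human + w * u_machine
```

For an easy task (difficulty 0.0) the command is dominated by the operator; as difficulty approaches 1.0, the machine's share rises only up to the cap, reflecting the SoA–performance balance discussed above.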
From a global perspective, the seven-level development route proposed in this review is progressive. Many HMI systems have achieved the goal of the first stage, and with the rapid development of HMC technology, achieving the goal of the second stage is becoming possible. The third stage is still exploratory, but it is a promising research direction, especially in the field of elderly-care robots.
To achieve unconscious HMC, or even to integrate machines as human partners, it is crucial to consider the role of AI, especially in the third stage of HMI systems. In this context, the works of Legaspi et al. [152,153] provide valuable insights into human–AI interaction systems. Legaspi et al. [153] put forward a simple but coherent synthesis of key theoretical human-science treatments of the SoA, and Legaspi et al. [152] built on this synthesis to examine the human–AI interaction system in greater depth. We strongly concur with their finding that, in human–AI interaction systems, machines should dynamically adjust to align themselves with the human SoA, particularly to empower individuals who cannot independently perform the needed tasks. We firmly believe that enhancing the human SoA remains pivotal in shaping effective HMI systems, even as we progress into the third stage.
Moreover, Legaspi et al. [152] posited that when an AI demonstrates a strong SoA, the human SoA is weakened because the human judges the AI as having greater control over the interaction. However, our perspective diverges: in collaborative human–AI task completion (tasks that humans cannot accomplish alone), the task’s intrinsic nature warrants consideration. If the AI’s contribution improves task performance, the human operator will perceive heightened agency even if the AI also exhibits a strong SoA. We contend that as the AI’s participation, and with it its SoA, increases from 0%, the human SoA initially rises; once AI participation surpasses a threshold, however, the human SoA declines, as demonstrated by Ueda et al. [146].
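The inverted-U relationship we argue for above — human SoA first rising with AI participation and then declining past a threshold — can be made concrete with a toy model. The piecewise-linear functional form, the baseline SoA, and the threshold value are purely illustrative assumptions, not fitted to any experimental data.

```python
def human_soa(ai_participation: float, baseline: float = 0.6,
              peak: float = 1.0, threshold: float = 0.5) -> float:
    """Toy model of human SoA as a function of AI participation in [0, 1].

    Below the (hypothetical) threshold, performance gains from AI
    assistance raise the operator's SoA toward a peak; beyond it, the
    operator attributes control to the AI and SoA declines.
    """
    p = max(0.0, min(1.0, ai_participation))
    if p <= threshold:
        # assistance improves performance, raising SoA toward its peak
        return baseline + (peak - baseline) * p / threshold
    # beyond the threshold, the operator cedes control and SoA falls
    return peak - peak * (p - threshold) / (1.0 - threshold)
```

Sweeping `ai_participation` from 0 to 1 traces the rise-then-fall curve: SoA peaks at the threshold and decays as the AI takes over, consistent with the qualitative pattern reported by Ueda et al. [146].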
In addition, interest in experiments plays a crucial role in advancing research on the SoA in HMI systems. This interest sparks innovation and paves the way for exploring new paradigms. As the boundary between humans and technology becomes increasingly blurred, space is created for cooperative and mutual agency during task performance. Engaging experiments can catalyze more active participation, which, in turn, may reveal intricate insights into users’ perceptions and experiences of control when interfacing with machines. Furthermore, enthusiasm for experimentation can stimulate the creation of technologies with a tangible impact on human actions and outcomes, contributing to a deeper comprehension of how agency is allocated between humans and machines [26,155].
There are challenges in designing complex, realistic, and attractive experiments that can sustain subjects’ interest. Designing such experiments not only requires substantial manpower and material resources, but building realistic physical models is also infeasible for some experimental designs. These problems can be alleviated by using gaming technology [26,156,157]. Games can be used to create more engaging, intuitive, and user-empowering experiments that promote deeper connections between people and machines [155]. Highly realistic driving simulators can be used to analyze the SoA in assisted driving systems; multiplayer cooperation games can be applied in SoA research on joint action tasks; and games with player-assistance systems can be employed to analyze experimentally the impact of the automation level on the SoA.
Using mature gaming platforms to design experiments alleviates the problems of immersion and realism, but it also introduces potential interference into the experiment, such as music and graphics. We believe that such irrelevant variables can be controlled by choosing suitable games or by designing new ones (for example, engaging games that hold the player’s attention). With the help of AI, the use of gaming platforms is becoming increasingly attractive.
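A game-based SoA experiment of the kind described above — varying the automation level across trials and collecting a per-trial control rating — could be scheduled as follows. The specific automation levels, trial counts, and rating procedure are illustrative assumptions about a hypothetical study design.

```python
import random


def make_trial_schedule(automation_levels, trials_per_level, seed=0):
    """Build a shuffled trial list crossing the chosen automation levels.

    Illustrative sketch of a game-based SoA experiment: each trial fixes
    the assistance level, and after each trial the player would rate
    their felt control (e.g., on a 1-7 scale), yielding SoA as a
    function of automation. A fixed seed makes the order reproducible.
    """
    rng = random.Random(seed)
    trials = [lvl for lvl in automation_levels for _ in range(trials_per_level)]
    rng.shuffle(trials)  # randomize order to avoid sequence effects
    return trials
```

For instance, `make_trial_schedule([0.0, 0.25, 0.5, 0.75], 10)` yields forty randomized trials, ten per assistance level, which a game engine could consume directly while logging the player's ratings.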

7. Conclusions

In this review, we posit that the HMI system is intentionally designed as a long-lifespan solution, over the course of whose use the human SoA undergoes dynamic changes. First, we summarize the definition of the SoA and its influencing factors. Leveraging the distinctive features of HMI systems, we conduct an in-depth analysis and provide recommendations for measuring and improving the SoA within HMI systems. This review then highlights the importance of user experience and unconscious communication in shaping the SoA and suggests them as key areas for future research; the dynamic nature of the SoA requires continuous evaluation and adaptation, especially as users gain experience and proficiency with HMI systems. Finally, we propose a development route with a progressive seven-level structure for HMI system development, delineating the system’s evolution across three distinct stages. In addition, we analyze the potential application of mature gaming platforms as playgrounds for SoA research in HMI systems, which can help us gain a deeper understanding of the complex interactions between humans and machines. Our findings aim to advance the design and functionality of HMI systems, ensuring that they remain user-centric and responsive to the changing needs of operators. By addressing the challenges and opportunities associated with the SoA, more powerful and intuitive HMI systems can be built.

Author Contributions

Conceptualization, H.Y., S.D. and A.K.; writing—original draft preparation, H.Y.; writing—review and editing, S.D., A.K. and B.J.v.W.; project administration, S.D., B.J.v.W. and Q.L.; funding acquisition, S.D. and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Research Foundations of South Africa (Ref Numbers SRUG2203291049 and RA210210585474), the Foundation of Yunnan Province Science and Technology Department (Grant number: 202305AO350007), and Kunming University Foundation (No. YJL2205).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

No potential conflicts of interest were reported by the authors.

References

  1. Dekker, S.W.; Woods, D.D. MABA-MABA or abracadabra? Progress on human–automation co-ordination. Cogn. Technol. Work. 2002, 4, 240–244. [Google Scholar] [CrossRef]
  2. Bütepage, J.; Kragic, D. Human-robot collaboration: From psychology to social robotics. arXiv 2017, arXiv:1705.10146. [Google Scholar]
  3. Endsley, M.R.; Kiris, E.O. The out-of-the-loop performance problem and level of control in automation. Hum. Factors 1995, 37, 381–394. [Google Scholar] [CrossRef]
  4. Kaber, D.B.; Endsley, M.R. Out-of-the-loop performance problems and the use of intermediate levels of automation for improved control system functioning and safety. Process. Saf. Prog. 1997, 16, 126–131. [Google Scholar] [CrossRef]
  5. Ibrahim, M.A.; Assaad, Z.; Williams, E. Trust and communication in human-machine teaming. Front. Phys. 2022, 10, 942896. [Google Scholar] [CrossRef]
  6. CNN Business. Safety Driver in Fatal 2018 Uber Self-Driving Car Crash Found Guilty of Endangerment. Available online: https://edition.cnn.com/2023/07/29/business/uber-self-driving-car-death-guilty/index.html (accessed on 27 July 2024).
  7. Berberian, B.; Somon, B.; Sahaï, A.; Gouraud, J. The out-of-the-loop Brain: A neuroergonomic approach of the human automation interaction. Annu. Rev. Control. 2017, 44, 303–315. [Google Scholar] [CrossRef]
  8. Pacherie, E. The sense of control and the sense of agency. Psyche 2007, 13, 1–30. [Google Scholar]
  9. Bigenwald, A.; Chambon, V. Criminal responsibility and neuroscience: No revolution yet. Front. Psychol. 2019, 10, 1406. [Google Scholar] [CrossRef] [PubMed]
  10. Di Costa, S.; Théro, H.; Chambon, V.; Haggard, P. Try and try again: Post-error boost of an implicit measure of agency. Q. J. Exp. Psychol. 2018, 71, 1584–1595. [Google Scholar] [CrossRef] [PubMed]
  11. Park, K.H.; Lee, H.E.; Kim, Y.; Bien, Z.Z. A steward robot for human-friendly human-machine interaction in a smart house environment. IEEE Trans. Autom. Sci. Eng. 2008, 5, 21–25. [Google Scholar] [CrossRef]
  12. Strausser, K.A.; Kazerooni, H. The development and testing of a human machine interface for a mobile medical exoskeleton. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 4911–4916. [Google Scholar]
  13. Schiele, A.; Van Der Helm, F.C. Kinematic design to improve ergonomics in human machine interaction. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 456–469. [Google Scholar] [CrossRef] [PubMed]
  14. Schnieders, T.M.; Stone, R.T. Current work in the human-machine interface for ergonomic intervention with exoskeletons. Int. J. Robot. Appl. Technol. (IJRAT) 2017, 5, 1–19. [Google Scholar] [CrossRef]
  15. Kasahara, S.; Konno, K.; Owaki, R.; Nishi, T.; Takeshita, A.; Ito, T.; Kasuga, S.; Ushiba, J. Malleable embodiment: Changing sense of embodiment by spatial-temporal deformation of virtual human body. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 6438–6448. [Google Scholar]
  16. Gonzalez-Franco, M.; Cohn, B.; Ofek, E.; Burin, D.; Maselli, A. The self-avatar follower effect in virtual reality. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Atlanta, GA, USA, 22–26 March 2020; pp. 18–25. [Google Scholar]
  17. Barlas, Z.; Obhi, S.S. Freedom, choice, and the sense of agency. Front. Hum. Neurosci. 2013, 7, 514. [Google Scholar] [CrossRef] [PubMed]
  18. Battaglia, F. Agency, Responsibility, Selves, and the Mechanical Mind. Philosophies 2021, 6, 7. [Google Scholar] [CrossRef]
  19. Caspar, E.A.; Christensen, J.F.; Cleeremans, A.; Haggard, P. Coercion changes the sense of agency in the human brain. Curr. Biol. 2016, 26, 585–592. [Google Scholar] [CrossRef]
  20. Haggard, P. Sense of agency in the human brain. Nat. Rev. Neurosci. 2017, 18, 196–207. [Google Scholar] [CrossRef]
  21. Demanet, J.; Muhle-Karbe, P.S.; Lynn, M.T.; Blotenberg, I.; Brass, M. Power to the will: How exerting physical effort boosts the sense of agency. Cognition 2013, 129, 574–578. [Google Scholar] [CrossRef]
  22. Vinding, M.C.; Pedersen, M.N.; Overgaard, M. Unravelling intention: Distal intentions increase the subjective sense of agency. Conscious. Cogn. 2013, 22, 810–815. [Google Scholar] [CrossRef]
  23. Reddy, N.N. Non-motor cues do not generate the perception of self-agency: A critique of cue-integration. Conscious. Cogn. 2022, 103, 103359. [Google Scholar] [CrossRef]
  24. Haering, C.; Kiesel, A. Was it me when it happened too early? Experience of delayed effects shapes sense of agency. Cognition 2015, 136, 38–42. [Google Scholar] [CrossRef] [PubMed]
  25. Wen, W.; Imamizu, H. The sense of agency in perception, behaviour and human–machine interactions. Nat. Rev. Psychol. 2022, 1, 211–222. [Google Scholar] [CrossRef]
  26. Cornelio, P.; Haggard, P.; Hornbaek, K.; Georgiou, O.; Bergström, J.; Subramanian, S.; Obrist, M. The sense of agency in emerging technologies for human–computer integration: A review. Front. Neurosci. 2022, 16, 949138. [Google Scholar] [CrossRef] [PubMed]
  27. Limerick, H.; Coyle, D.; Moore, J.W. The experience of agency in human-computer interactions: A review. Front. Hum. Neurosci. 2014, 8, 643. [Google Scholar] [CrossRef]
  28. Sahaï, A.; Pacherie, E.; Grynszpan, O.; Berberian, B. Predictive mechanisms are not involved the same way during human-human vs. human-machine interactions: A review. Front. Neurorobotics 2017, 11, 52. [Google Scholar] [CrossRef] [PubMed]
  29. Hafner, V.V.; Loviken, P.; Pico Villalpando, A.; Schillaci, G. Prerequisites for an artificial self. Front. Neurorobotics 2020, 14, 5. [Google Scholar] [CrossRef]
  30. Nandiyanto, A.B.D.; Al Husaeni, D.F. A bibliometric analysis of materials research in Indonesian journal using VOSviewer. J. Eng. Res. 2021, 9, 1–16. [Google Scholar] [CrossRef]
  31. Yu, L.; Yu, Z. Qualitative and quantitative analyses of artificial intelligence ethics in education using VOSviewer and CitNetExplorer. Front. Psychol. 2023, 14, 1061778. [Google Scholar] [CrossRef] [PubMed]
  32. Chatila, R.; Renaudo, E.; Andries, M.; Chavez-Garcia, R.O.; Luce-Vayrac, P.; Gottstein, R.; Alami, R.; Clodic, A.; Devin, S.; Girard, B.; et al. Toward self-aware robots. Front. Robot. AI 2018, 5, 88. [Google Scholar] [CrossRef] [PubMed]
  33. Gallagher, S. Philosophical conceptions of the self: Implications for cognitive science. Trends Cogn. Sci. 2000, 4, 14–21. [Google Scholar] [CrossRef]
  34. Loehr, J.D. The sense of agency in joint action: An integrative review. Psychon. Bull. Rev. 2022, 29, 1089–1117. [Google Scholar] [CrossRef]
  35. Wen, W. Does delay in feedback diminish sense of agency? A review. Conscious. Cogn. 2019, 73, 102759. [Google Scholar] [CrossRef] [PubMed]
  36. Wen, W.; Kuroki, Y.; Asama, H. The sense of agency in driving automation. Front. Psychol. 2019, 10, 2691. [Google Scholar] [CrossRef] [PubMed]
  37. Howard, E.E.; Edwards, S.G.; Bayliss, A.P. Physical and mental effort disrupts the implicit sense of agency. Cognition 2016, 157, 114–125. [Google Scholar] [CrossRef] [PubMed]
  38. Blakemore, S.J.; Wolpert, D.M.; Frith, C.D. Central cancellation of self-produced tickle sensation. Nat. Neurosci. 1998, 1, 635–640. [Google Scholar] [CrossRef] [PubMed]
  39. Blakemore, S.J.; Frith, C.D.; Wolpert, D.M. The cerebellum is involved in predicting the sensory consequences of action. Neuroreport 2001, 12, 1879–1884. [Google Scholar] [CrossRef]
  40. Blakemore, S.J.; Wolpert, D.M.; Frith, C.D. Abnormalities in the awareness of action. Trends Cogn. Sci. 2002, 6, 237–242. [Google Scholar] [CrossRef] [PubMed]
  41. Wegner, D.M.; Sparrow, B.; Winerman, L. Vicarious agency: Experiencing control over the movements of others. J. Personal. Soc. Psychol. 2004, 86, 838. [Google Scholar] [CrossRef] [PubMed]
  42. Robinson, J.D.; Wagner, N.F.; Northoff, G. Is the sense of agency in schizophrenia influenced by resting-state variation in self-referential regions of the brain? Schizophr. Bull. 2016, 42, 270–276. [Google Scholar] [CrossRef] [PubMed]
  43. Moore, J.W.; Fletcher, P.C. Sense of agency in health and disease: A review of cue integration approaches. Conscious. Cogn. 2012, 21, 59–68. [Google Scholar] [CrossRef]
  44. Synofzik, M.; Thier, P.; Leube, D.T.; Schlotterbeck, P.; Lindner, A. Misattributions of agency in schizophrenia are based on imprecise predictions about the sensory consequences of one’s actions. Brain 2010, 133, 262–271. [Google Scholar] [CrossRef] [PubMed]
  45. Synofzik, M.; Vosgerau, G.; Lindner, A. Me or not me—An optimal integration of agency cues? Conscious. Cogn. 2009, 18, 1065–1068. [Google Scholar] [CrossRef]
  46. Synofzik, M.; Vosgerau, G.; Voss, M. The experience of agency: An interplay between prediction and postdiction. Front. Psychol. 2013, 4, 127. [Google Scholar] [CrossRef]
  47. Wegner, D.M.; Sparrow, B. Authorship Processing. In Cognitive Neurosciences III; Gazzaniga, M., Ed.; MIT Press: Cambridge, MA, USA, 2004; pp. 1201–1209. [Google Scholar]
  48. Moore, J.W.; Wegner, D.M.; Haggard, P. Modulating the sense of agency with external cues. Conscious. Cogn. 2009, 18, 1056–1064. [Google Scholar] [CrossRef]
  49. Kranick, S.M.; Hallett, M. Neurology of volition. Exp. Brain Res. 2013, 229, 313–327. [Google Scholar] [CrossRef]
  50. Haggard, P. Conscious intention and motor cognition. Trends Cogn. Sci. 2005, 9, 290–295. [Google Scholar] [CrossRef] [PubMed]
  51. Wen, W.; Muramatsu, K.; Hamasaki, S.; An, Q.; Yamakawa, H.; Tamura, Y.; Yamashita, A.; Asama, H. Goal-directed movement enhances body representation updating. Front. Hum. Neurosci. 2016, 10, 329. [Google Scholar] [CrossRef]
  52. Minohara, R.; Wen, W.; Hamasaki, S.; Maeda, T.; Kato, M.; Yamakawa, H.; Yamashita, A.; Asama, H. Strength of intentional effort enhances the sense of agency. Front. Psychol. 2016, 7, 1165. [Google Scholar] [CrossRef] [PubMed]
  53. Caspar, E.A.; Cleeremans, A.; Haggard, P. Only giving orders? An experimental study of the sense of agency when giving or receiving commands. PLoS ONE 2018, 13, e0204027. [Google Scholar] [CrossRef] [PubMed]
  54. Hon, N.; Poh, J.H.; Soon, C.S. Preoccupied minds feel less control: Sense of agency is modulated by cognitive load. Conscious. Cogn. 2013, 22, 556–561. [Google Scholar] [CrossRef] [PubMed]
  55. Nakashima, R. Beyond one’s body parts: Remote object movement with sense of agency involuntarily biases spatial attention. Psychon. Bull. Rev. 2019, 26, 576–582. [Google Scholar] [CrossRef] [PubMed]
  56. Woollacott, M.; Shumway-Cook, A. Attention and the control of posture and gait: A review of an emerging area of research. Gait Posture 2002, 16, 1–14. [Google Scholar] [CrossRef] [PubMed]
  57. Lacour, M.; Bernard-Demanze, L.; Dumitrescu, M. Posture control, aging, and attention resources: Models and posture-analysis methods. Neurophysiol. Clin. Neurophysiol. 2008, 38, 411–421. [Google Scholar] [CrossRef] [PubMed]
  58. Kannape, O.A.; Barré, A.; Aminian, K.; Blanke, O. Cognitive loading affects motor awareness and movement kinematics but not locomotor trajectories during goal-directed walking in a virtual reality environment. PLoS ONE 2014, 9, e85560. [Google Scholar] [CrossRef] [PubMed]
  59. Shepherd, J. Conscious cognitive effort in cognitive control. Wiley Interdiscip. Rev. Cogn. Sci. 2023, 14, e1629. [Google Scholar] [CrossRef] [PubMed]
  60. Dietrich, A. Functional neuroanatomy of altered states of consciousness: The transient hypofrontality hypothesis. Conscious. Cogn. 2003, 12, 231–256. [Google Scholar] [CrossRef]
  61. Dietrich, A.; Sparling, P.B. Endurance exercise selectively impairs prefrontal-dependent cognition. Brain Cogn. 2004, 55, 516–524. [Google Scholar] [CrossRef] [PubMed]
  62. Franconeri, S.L.; Alvarez, G.A.; Cavanagh, P. Flexible cognitive resources: Competitive content maps for attention and memory. Trends Cogn. Sci. 2013, 17, 134–141. [Google Scholar] [CrossRef] [PubMed]
  63. Howard, E. The Effect of Effort and Individual Differences on the Implicit Sense of Agency. Ph.D. Thesis, University of East Anglia, Norwich, UK, 2016. [Google Scholar]
  64. Block, R.A.; Hancock, P.A.; Zakay, D. How cognitive load affects duration judgments: A meta-analytic review. Acta Psychol. 2010, 134, 330–343. [Google Scholar] [CrossRef]
  65. Nataraj, R.; Hollinger, D.; Liu, M.; Shah, A. Disproportionate positive feedback facilitates sense of agency and performance for a reaching movement task with a virtual hand. PLoS ONE 2020, 15, e0233175. [Google Scholar] [CrossRef]
  66. Metcalfe, J.; Greene, M.J. Metacognition of agency. J. Exp. Psychol. Gen. 2007, 136, 184. [Google Scholar] [CrossRef]
  67. Wen, W.; Yamashita, A.; Asama, H. The sense of agency during continuous action: Performance is more important than action-feedback association. PLoS ONE 2015, 10, e0125226. [Google Scholar] [CrossRef] [PubMed]
  68. Inoue, K.; Takeda, Y.; Kimura, M. Sense of agency in continuous action: Assistance-induced performance improvement is self-attributed even with knowledge of assistance. Conscious. Cogn. 2017, 48, 246–252. [Google Scholar] [CrossRef] [PubMed]
  69. van der Wel, R.P.; Sebanz, N.; Knoblich, G. The sense of agency during skill learning in individuals and dyads. Conscious. Cogn. 2012, 21, 1267–1279. [Google Scholar] [CrossRef] [PubMed]
  70. Norman, D.A. The ‘problem’ with automation: Inappropriate feedback and interaction, not ‘over-automation’. Philos. Trans. R. Soc. Lond. B Biol. Sci. 1990, 327, 585–593. [Google Scholar] [PubMed]
  71. Berberian, B.; Sarrazin, J.C.; Le Blaye, P.; Haggard, P. Automation technology and sense of control: A window on human agency. PLoS ONE 2012, 7, e34075. [Google Scholar] [CrossRef] [PubMed]
  72. Yun, S.; Wen, W.; An, Q.; Hamasaki, S.; Yamakawa, H.; Tamura, Y.; Yamashita, A.; Asama, H. Investigating the relationship between assisted driver’s SoA and EEG. In Proceedings of the International Conference on NeuroRehabilitation, Pisa, Italy, 16–20 October 2018; pp. 1039–1043. [Google Scholar]
  73. Grynszpan, O.; Sahaï, A.; Hamidi, N.; Pacherie, E.; Berberian, B.; Roche, L.; Saint-Bauzel, L. The sense of agency in human-human vs human-robot joint action. Conscious. Cogn. 2019, 75, 102820. [Google Scholar] [CrossRef] [PubMed]
  74. Sahaï, A.; Desantis, A.; Grynszpan, O.; Pacherie, E.; Berberian, B. Action co-representation and the sense of agency during a joint Simon task: Comparing human and machine co-agents. Conscious. Cogn. 2019, 67, 44–55. [Google Scholar] [CrossRef] [PubMed]
  75. Zanatto, D.; Chattington, M.; Noyes, J. Sense of agency in human-machine interaction. In Advances in Neuroergonomics and Cognitive Engineering: Proceedings of the AHFE 2021 Virtual Conferences on Neuroergonomics and Cognitive Engineering, Industrial Cognitive Ergonomics and Engineering Psychology, and Cognitive Computing and Internet of Things, New York, NY, USA, 25–29 July 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 353–360. [Google Scholar]
  76. Albutihe, I. Sense of Agency and Automation: A Systematic Review. 2023. Available online: https://www.diva-portal.org/smash/record.jsf?pid=diva2:1791640 (accessed on 15 January 2024).
  77. Wen, W.; Yun, S.; Yamashita, A.; Northcutt, B.D.; Asama, H. Deceleration assistance mitigated the trade-off between sense of agency and driving performance. Front. Psychol. 2021, 12, 643516. [Google Scholar] [CrossRef] [PubMed]
  78. Vantrepotte, Q.; Berberian, B.; Pagliari, M.; Chambon, V. Leveraging human agency to improve confidence and acceptability in human-machine interactions. Cognition 2022, 222, 105020. [Google Scholar] [CrossRef] [PubMed]
  79. Saito, H.; Horie, A.; Maekawa, A.; Matsubara, S.; Wakisaka, S.; Kashino, Z.; Kasahara, S.; Inami, M. Transparency in human-machine mutual action. J. Robot. Mechatron. 2021, 33, 987–1003. [Google Scholar] [CrossRef]
  80. Lawrence, D.A. Stability and transparency in bilateral teleoperation. IEEE Trans. Robot. Autom. 1993, 9, 624–637. [Google Scholar] [CrossRef]
  81. Le Goff, K.; Rey, A.; Haggard, P.; Oullier, O.; Berberian, B. Agency modulates interactions with automation technologies. Ergonomics 2018, 61, 1282–1297. [Google Scholar] [CrossRef] [PubMed]
  82. Franck, N.; Farrer, C.; Georgieff, N.; Marie-Cardine, M.; Daléry, J.; d’Amato, T.; Jeannerod, M. Defective recognition of one’s own actions in patients with schizophrenia. Am. J. Psychiatry 2001, 158, 454–459. [Google Scholar] [CrossRef] [PubMed]
  83. Sato, A.; Yasuda, A. Illusion of sense of self-agency: Discrepancy between the predicted and actual sensory consequences of actions modulates the sense of self-agency, but not the sense of self-ownership. Cognition 2005, 94, 241–255. [Google Scholar] [CrossRef] [PubMed]
  84. Nakashima, R.; Kumada, T. Explicit sense of agency in an automatic control situation: Effects of goal-directed action and the gradual emergence of outcome. Front. Psychol. 2020, 11, 2062. [Google Scholar] [CrossRef] [PubMed]
  85. Dewey, J.A.; Carr, T.H. When dyads act in parallel, a sense of agency for the auditory consequences depends on the order of the actions. Conscious. Cogn. 2013, 22, 155–166. [Google Scholar] [CrossRef] [PubMed]
  86. Frith, C.D.; Blakemore, S.J.; Wolpert, D.M. Abnormalities in the awareness and control of action. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 2000, 355, 1771–1788. [Google Scholar] [CrossRef] [PubMed]
  87. Haggard, P.; Clark, S.; Kalogeras, J. Voluntary action and conscious awareness. Nat. Neurosci. 2002, 5, 382–385. [Google Scholar] [CrossRef] [PubMed]
  88. Weiss, C.; Herwig, A.; Schütz-Bosbach, S. The self in action effects: Selective attenuation of self-generated sounds. Cognition 2011, 121, 207–218. [Google Scholar] [CrossRef] [PubMed]
  89. Bays, P.M.; Flanagan, J.R.; Wolpert, D.M. Attenuation of self-generated tactile sensations is predictive, not postdictive. PLoS Biol. 2006, 4, e28. [Google Scholar] [CrossRef] [PubMed]
  90. Synofzik, M.; Thier, P.; Lindner, A. Internalizing agency of self-action: Perception of one’s own hand movements depends on an adaptable prediction about the sensory action outcome. J. Neurophysiol. 2006, 96, 1592–1601. [Google Scholar] [CrossRef]
  91. Waszak, F.; Cardoso-Leite, P.; Hughes, G. Action effect anticipation: Neurophysiological basis and functional consequences. Neurosci. Biobehav. Rev. 2012, 36, 943–959. [Google Scholar] [CrossRef]
  92. Roussel, C.; Hughes, G.; Waszak, F. A preactivation account of sensory attenuation. Neuropsychologia 2013, 51, 922–929. [Google Scholar] [CrossRef]
  93. Kilteni, K.; Ehrsson, H.H. Body ownership determines the attenuation of self-generated tactile sensations. Proc. Natl. Acad. Sci. USA 2017, 114, 8426–8431. [Google Scholar] [CrossRef] [PubMed]
  94. Libet, B.; Wright, E.; Gleason, C. Readiness-potentials preceding unrestricted “spontaneous” vs. pre-planned voluntary acts. Electroencephalogr. Clin. Neurophysiol. 1982, 54, 322–335. [Google Scholar] [CrossRef]
  95. Imaizumi, S.; Tanno, Y. Intentional binding coincides with explicit sense of agency. Conscious. Cogn. 2019, 67, 1–15. [Google Scholar] [CrossRef] [PubMed]
  96. Gibbon, J.; Church, R.M.; Meck, W.H. Scalar timing in memory. Ann. N. Y. Acad. Sci. 1984, 423, 52–77. [Google Scholar] [CrossRef] [PubMed]
  97. Treisman, M. Temporal discrimination and the indifference interval: Implications for a model of the “internal clock”. Psychol. Monogr. Gen. Appl. 1963, 77, 1. [Google Scholar] [CrossRef] [PubMed]
  98. Matell, M.S.; Meck, W.H. Cortico-striatal circuits and interval timing: Coincidence detection of oscillatory processes. Cogn. Brain Res. 2004, 21, 139–170. [Google Scholar] [CrossRef] [PubMed]
  99. Morrone, M.C.; Ross, J.; Burr, D. Saccadic eye movements cause compression of time as well as space. Nat. Neurosci. 2005, 8, 950–954. [Google Scholar] [CrossRef] [PubMed]
  100. Wenke, D.; Haggard, P. How voluntary actions modulate time perception. Exp. Brain Res. 2009, 196, 311–318. [Google Scholar] [CrossRef] [PubMed]
  101. Kang, S.Y.; Im, C.H.; Shim, M.; Nahab, F.B.; Park, J.; Kim, D.W.; Kakareka, J.; Miletta, N.; Hallett, M. Brain networks responsible for sense of agency: An EEG study. PLoS ONE 2015, 10, e0135261. [Google Scholar] [CrossRef] [PubMed]
  102. Jeunet, C.; Albert, L.; Argelaguet, F.; Lécuyer, A. “Do you feel in control?”: Towards novel approaches to characterise, manipulate and measure the sense of agency in virtual environments. IEEE Trans. Vis. Comput. Graph. 2018, 24, 1486–1495. [Google Scholar] [CrossRef]
  103. Zaadnoordijk, L.; Meyer, M.; Zaharieva, M.; Kemalasari, F.; van Pelt, S.; Hunnius, S. From movement to action: An EEG study into the emerging sense of agency in early infancy. Dev. Cogn. Neurosci. 2020, 42, 100760. [Google Scholar] [CrossRef]
  104. Wen, W.; Yamashita, A.; Asama, H. Measurement of the perception of control during continuous movement using electroencephalography. Front. Hum. Neurosci. 2017, 11, 392. [Google Scholar] [CrossRef] [PubMed]
  105. Sun, W.; Huang, M.; Wu, C.; Yang, R. Sense of agency on handheld AR for virtual object translation. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Christchurch, New Zealand, 12–16 March 2022; pp. 838–839. [Google Scholar]
  106. Wang, L.; Huang, M.; Qin, C.; Wang, Y.; Yang, R. Movement augmentation in virtual reality: Impact on sense of agency measured by subjective responses and electroencephalography. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Christchurch, New Zealand, 12–16 March 2022; pp. 832–833. [Google Scholar]
  107. Kühn, S.; Nenchev, I.; Haggard, P.; Brass, M.; Gallinat, J.; Voss, M. Whodunnit? Electrophysiological correlates of agency judgements. PLoS ONE 2011, 6, e28657. [Google Scholar] [CrossRef]
  108. Schafer, E.W.; Marcus, M.M. Self-stimulation alters human sensory brain responses. Science 1973, 181, 175–177. [Google Scholar] [CrossRef]
  109. Bäß, P.; Jacobsen, T.; Schröger, E. Suppression of the auditory N1 event-related potential component with unpredictable self-initiated tones: Evidence for internal forward models with dynamic stimulation. Int. J. Psychophysiol. 2008, 70, 137–143. [Google Scholar] [CrossRef] [PubMed]
  110. Bednark, J.G.; Poonian, S.; Palghat, K.; McFadyen, J.; Cunnington, R. Identity-specific predictions and implicit measures of agency. Psychol. Conscious. Theory Res. Pract. 2015, 2, 253. [Google Scholar] [CrossRef]
  111. Hughes, G.; Desantis, A.; Waszak, F. Attenuation of auditory N1 results from identity-specific action-effect prediction. Eur. J. Neurosci. 2013, 37, 1152–1158. [Google Scholar] [CrossRef] [PubMed]
  112. Legaspi, R.; Xu, W.; Konishi, T.; Wada, S. Positing a sense of agency-aware persuasive AI: Its theoretical and computational frameworks. In Proceedings of the International Conference on Persuasive Technology, Virtual Event, 12–14 April 2021; pp. 3–18. [Google Scholar]
  113. Legaspi, R.; Xu, W.; Konishi, T.; Wada, S.; Ishikawa, Y. Multidimensional analysis of sense of agency during goal pursuit. In Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, Barcelona, Spain, 4–7 July 2022; pp. 34–47. [Google Scholar]
  114. Tao, J.; Tan, T. Affective computing: A review. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction, Beijing, China, 22–24 October 2005; pp. 981–995. [Google Scholar]
  115. Merat, N.; Seppelt, B.; Louw, T.; Engström, J.; Lee, J.D.; Johansson, E.; Green, C.A.; Katazaki, S.; Monk, C.; Itoh, M.; et al. The “Out-of-the-Loop” concept in automated driving: Proposed definition, measures and implications. Cogn. Technol. Work. 2019, 21, 87–98. [Google Scholar] [CrossRef]
  116. Wang, Z.; Zheng, R.; Kaizuka, T.; Nakano, K. Relationship between gaze behavior and steering performance for driver–automation shared control: A driving simulator study. IEEE Trans. Intell. Veh. 2018, 4, 154–166. [Google Scholar] [CrossRef]
  117. Takechi, T.; Nakamura, F.; Fukuoka, M.; Ienaga, N.; Sugimoto, M. The Sense of Agency, Sense of Body Ownership with a Semi-autonomous Telexistence Robot under Shared/Unshared Intention Conditions. In Proceedings of the ICAT-EGVE 2022—International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments—Posters and Demos, Yokohama, Japan, 30 November–3 December 2022; Teo, T., Kondo, R., Eds.; The Eurographics Association: Eindhoven, The Netherlands, 2022. [Google Scholar]
  118. Fong, T.; Thorpe, C. Vehicle teleoperation interfaces. Auton. Robot. 2001, 11, 9–18. [Google Scholar] [CrossRef]
  119. Sheridan, T.B. Telerobotics, Automation, and Human Supervisory Control; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  120. Okamura, A.M. Haptic feedback in robot-assisted minimally invasive surgery. Curr. Opin. Urol. 2009, 19, 102–107. [Google Scholar] [CrossRef] [PubMed]
  121. Ryu, D.; Hwang, C.S.; Kang, S.; Kim, M.; Song, J.B. Wearable haptic-based multi-modal teleoperation of field mobile manipulator for explosive ordnance disposal. In Proceedings of the IEEE International Safety, Security and Rescue Rototics, Workshop, 2005, Kobe, Japan, 6–9 June 2005; pp. 75–80. [Google Scholar]
  122. Mo, Y.; Song, A.; Qin, H. A lightweight accessible wearable robotic interface for bimanual haptic manipulations. IEEE Trans. Haptics 2021, 15, 85–90. [Google Scholar] [CrossRef] [PubMed]
  123. Wang, F.; Qian, Z.; Lin, Y.; Zhang, W. Design and rapid construction of a cost-effective virtual haptic device. IEEE/ASME Trans. Mechatron. 2020, 26, 66–77. [Google Scholar] [CrossRef]
  124. Evangelou, G.; Georgiou, O.; Moore, J. Using virtual objects with hand-tracking: The effects of visual congruence and mid-air haptics on sense of agency. IEEE Trans. Haptics 2023, 16, 580–585. [Google Scholar] [CrossRef]
  125. Evangelou, G.; Limerick, H.; Moore, J. I feel it in my fingers! Sense of agency with mid-air haptics. In Proceedings of the 2021 IEEE World Haptics Conference (WHC), Montreal, QC, Canada, 6–9 July 2021; pp. 727–732. [Google Scholar]
  126. Bergström, J.; Knibbe, J.; Pohl, H.; Hornbæk, K. Sense of agency and user experience: Is there a link? ACM Trans. -Comput.-Hum. Interact. (TOCHI) 2022, 29, 1–22. [Google Scholar] [CrossRef]
  127. Bergstrom-Lehtovirta, J.; Coyle, D.; Knibbe, J.; Hornbæk, K. I really did that: Sense of agency with touchpad, keyboard, and on-skin interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–8. [Google Scholar]
  128. Morita, T.; Zhu, Y.; Aoyama, T.; Takeuchi, M.; Yamamoto, K.; Hasegawa, Y. Auditory Feedback for Enhanced Sense of Agency in Shared Control. Sensors 2022, 22, 9779. [Google Scholar] [CrossRef] [PubMed]
  129. Tanimoto, T.; Shinohara, K.; Yoshinada, H. Research on effective teleoperation of construction machinery fusing manual and automatic operation. Robomech J. 2017, 4, 14. [Google Scholar] [CrossRef]
  130. Pacaux-Lemoine, M.P.; Gadmer, Q.; Richard, P. Train remote driving: A Human-Machine Cooperation point of view. In Proceedings of the 2020 IEEE International Conference on Human-Machine Systems (ICHMS), Rome, Italy, 7–9 September 2020; pp. 1–4. [Google Scholar]
  131. Maekawa, A.; Saito, H.; Okazaki, N.; Kasahara, S.; Inami, M. Behind The Game: Implicit Spatio-Temporal Intervention in Inter-personal Remote Physical Interactions on Playing Air Hockey. In Proceedings of the ACM SIGGRAPH 2021 Emerging Technologies, Virtual, 9–13 August 2021; pp. 1–4. [Google Scholar]
  132. Mulder, M.; Abbink, D.A.; Boer, E.R. The effect of haptic guidance on curve negotiation behavior of young, experienced drivers. In Proceedings of the 2008 IEEE International Conference on Systems, Man and Cybernetics, Singapore, 12–15 October 2008; pp. 804–809. [Google Scholar]
  133. Moon, H.S.; Seo, J. Optimal action-based or user prediction-based haptic guidance: Can you do even better? In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Virtual, 8–13 May 2021; pp. 1–12. [Google Scholar]
  134. Noubissie Tientcheu, S.I.; Du, S.; Djouani, K. Review on Haptic Assistive Driving Systems Based on Drivers’ Steering-Wheel Operating Behaviour. Electronics 2022, 11, 2102. [Google Scholar] [CrossRef]
  135. Mohebbi, R.; Gray, R.; Tan, H.Z. Driver reaction time to tactile and auditory rear-end collision warnings while talking on a cell phone. Hum. Factors 2009, 51, 102–110. [Google Scholar] [CrossRef] [PubMed]
  136. Chun, J.; Han, S.H.; Park, G.; Seo, J.; Choi, S. Evaluation of vibrotactile feedback for forward collision warning on the steering wheel and seatbelt. Int. J. Ind. Ergon. 2012, 42, 443–448. [Google Scholar] [CrossRef]
  137. Zhao, Y.; Chevrel, P.; Claveau, F.; Mars, F. Driving with a Haptic Guidance System in Degraded Visibility Conditions: Behavioral Analysis and Identification of a Two-Point Steering Control Model. Vehicles 2022, 4, 1413–1429. [Google Scholar] [CrossRef]
  138. Kim, T. How Mobility Technologies Change Our Lived Experiences: A Phenomenological Approach to the Sense of Agency in the Autonomous Vehicle. Krit. Online J. Philos. 2021, 14, 23–47. [Google Scholar] [CrossRef]
  139. IEEE Std 1872-2015; IEEE Standard Ontologies for Robotics and Automation. IEEE: Piscataway, NJ, USA, 2015; pp. 1–60.
  140. Obhi, S.S.; Hall, P. Sense of agency in joint action: Influence of human and computer co-actors. Exp. Brain Res. 2011, 211, 663–670. [Google Scholar] [CrossRef]
  141. Stenzel, A.; Chinellato, E.; Bou, M.A.T.; Del Pobil, Á.P.; Lappe, M.; Liepelt, R. When humanoid robots become human-like interaction partners: Corepresentation of robotic actions. J. Exp. Psychol. Hum. Percept. Perform. 2012, 38, 1073. [Google Scholar] [CrossRef] [PubMed]
  142. Investigation of Anthropomorphic System Design Features for Sense of Agency in Automation Technologies. Available online: https://www.hfes-europe.org/wp-content/uploads/2023/05/Frenzel2023.pdf (accessed on 13 January 2024).
  143. Yu, X.; Hoggenmueller, M.; Tomitsch, M. Your Way or My Way: Improving Human-Robot Co-Navigation Through Robot Intent and Pedestrian Prediction Visualisations. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, 13–16 March 2023; pp. 211–221. [Google Scholar]
  144. Pagliari, M.; Chambon, V.; Berberian, B. What is new with Artificial Intelligence? Human–agent interactions through the lens of social agency. Front. Psychol. 2022, 13, 954444. [Google Scholar] [CrossRef] [PubMed]
  145. Fu, Z.; Zhao, T.Z.; Finn, C. Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation. arXiv 2024, arXiv:2401.02117. [Google Scholar]
  146. Ueda, S.; Nakashima, R.; Kumada, T. Influence of levels of automation on the sense of agency during continuous action. Sci. Rep. 2021, 11, 2436. [Google Scholar] [CrossRef] [PubMed]
  147. Luo, J.; Lin, Z.; Li, Y.; Yang, C. A teleoperation framework for mobile robots based on shared control. IEEE Robot. Autom. Lett. 2019, 5, 377–384. [Google Scholar] [CrossRef]
  148. Nguyen, V.T.; Sentouh, C.; Pudlo, P.; Popieul, J.C. Model-based shared control approach for a power wheelchair driving assistance system using a force feedback joystick. Front. Control. Eng. 2023, 4, 1058802. [Google Scholar] [CrossRef]
  149. Sakagami, N.; Suka, M.; Kimura, Y.; Sato, E.; Wada, T. Haptic shared control applied for ROV operation support in flowing water. Artif. Life Robot. 2022, 27, 867–875. [Google Scholar] [CrossRef]
  150. Boessenkool, H.; Abbink, D.A.; Heemskerk, C.J.; van der Helm, F.C.; Wildenbeest, J.G. A task-specific analysis of the benefit of haptic shared control during telemanipulation. IEEE Trans. Haptics 2012, 6, 2–12. [Google Scholar] [CrossRef] [PubMed]
  151. Benloucif, A.; Nguyen, A.T.; Sentouh, C.; Popieul, J.C. Cooperative trajectory planning for haptic shared control between driver and automation in highway driving. IEEE Trans. Ind. Electron. 2019, 66, 9846–9857. [Google Scholar] [CrossRef]
  152. Legaspi, R.; Xu, W.; Konishi, T.; Wada, S.; Kobayashi, N.; Naruse, Y.; Ishikawa, Y. The sense of agency in human–AI interactions. Knowl.-Based Syst. 2024, 286, 111298. [Google Scholar] [CrossRef]
  153. Legaspi, R.; He, Z.; Toyoizumi, T. Synthetic agency: Sense of agency in artificial intelligence. Curr. Opin. Behav. Sci. 2019, 29, 84–90. [Google Scholar] [CrossRef]
  154. Inga, J.; Ruess, M.; Robens, J.H.; Nelius, T.; Rothfuß, S.; Kille, S.; Dahlinger, P.; Lindenmann, A.; Thomaschke, R.; Neumann, G.; et al. Human-machine symbiosis: A multivariate perspective for physically coupled human-machine systems. Int. J. -Hum.-Comput. Stud. 2023, 170, 102926. [Google Scholar] [CrossRef]
  155. Gao, P. Key technologies of human–computer interaction for immersive somatosensory interactive games using VR technology. Soft Comput. 2022, 26, 10947–10956. [Google Scholar] [CrossRef]
  156. Mellecker, R.; Lyons, E.J.; Baranowski, T. Disentangling fun and enjoyment in exergames using an expanded design, play, experience framework: A narrative review. Games Health Res. Dev. Clin. Appl. 2013, 2, 142–149. [Google Scholar] [CrossRef]
  157. Desrochers, M.N.; Pusateri, M.J., Jr.; Fink, H.C. Game assessment: Fun as well as effective. Assess. Eval. High. Educ. 2007, 32, 527–539. [Google Scholar] [CrossRef]
Figure 1. Count of SoA papers per year since 1992 (data source: Web of Science, all document types).
Figure 2. Co-occurrence network diagram of SoA keywords (data source: Web of Science, all document types).
Figure 3. Count of SoA papers per year in selected fields since 2005 (data source: Web of Science, all document types).
Figure 4. The comparator model.
Figure 5. The multifactorial weighting model.
Figure 6. The seven-level development route of the HMI system.
Table 1. Criteria and their description.

Criteria | Description
Source website | Web of Science Core Collection
Years | January 1992–June 2024
Search term | TS = "sense of agency"
Inclusion criteria | Articles, proceedings papers, review articles
Sample size | 2096