Article

Design of an Immersive Virtual Reality Framework to Enhance the Sense of Agency Using Affective Computing Technologies

UpnaLab, Public University of Navarre, 31006 Pamplona, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(24), 13322; https://doi.org/10.3390/app132413322
Submission received: 31 October 2023 / Revised: 29 November 2023 / Accepted: 12 December 2023 / Published: 17 December 2023

Abstract

Virtual Reality is expanding its use to several fields of application, including health and education. The continuous growth of this technology comes with new challenges related to the ways in which users feel inside these virtual environments. There are various guidelines on ways to enhance users’ virtual experience in terms of immersion or presence. Nonetheless, there is no extensive research on enhancing the sense of agency (SoA), a phenomenon which refers to the self-awareness of initiating, executing, and controlling one’s actions in the world. After reviewing the state of the art of technologies developed in the field of Affective Computing (AC), we propose a framework for designing immersive virtual environments (IVE) to enhance the users’ SoA. The framework defines the flow of interaction between users and the virtual world, as well as the AC technologies required for each interactive component to recognise, interpret and respond coherently within the IVE in order to enhance the SoA.

1. Introduction

As Virtual Reality (VR) expands into multiple fields of application, it is becoming necessary to address its benefits and limitations. VR is usually defined in terms of the overall user experience and the sense of immersion or presence, but it is less frequently associated with the sense of agency (SoA), a central aspect of human self-consciousness that refers to the experience of being in control of one's own actions and their consequences.
In the field of Affective Computing (AC), several techniques and tools make it possible for computers to recognise, interpret and express emotions. In fact, our work is based on the assumption that these technologies are useful for further applications, not just emotional management.
This paper reviews the state of the art of the technologies developed in the field of Affective Computing (AC) with the aim of analysing how each of them can be used to improve the SoA of users in immersive virtual environments (IVEs). First, in Section 2, the importance of the sense of agency in our daily lives is explained. Then, in Section 3, we review the main techniques and tools from the field of AC that could enhance people's experience in IVEs. This can be achieved by using sensations and emotions as driving elements which influence the user's perception of their interaction with the IVE and, especially, their sense of agency in the virtual world.
Based on the state of the art, in Section 4, a framework is proposed as a design guide to properly manage the emotional interactive flow between the elements involved in the IVE. The framework shows, for each element of the interaction flow, the AC technologies that allow that element to recognise, interpret and respond coherently within the IVE in order to improve the SoA. This framework helps designers to select both the appropriate technologies to effectively convey emotions between the people involved in the virtual world and the techniques that allow the environment to respond emotionally in a coherent way, thus increasing the SoA.

2. The Relevance of the Sense of Agency in Virtual Reality

The term sense of agency (SoA) refers to the experience of controlling one's own actions and, through them, events in the outer world [1]. It is important to distinguish between agency (performing an action) and the sense of agency (feeling in control of that action). In [2], this phenomenon is explained in terms of intention. That is, to have a high sense of agency, one's prior intention or expected outcome of an action should match the actual result of performing that action. For example, when a person intends to burst a balloon that they are holding, they expect the action to unfold in this way (Figure 1): they hold a sharp object in their hand, they move that hand towards the balloon and, when the sharp object reaches the balloon, it bursts. The person knows the outcome because they watch the balloon burst (visual feedback), they hear it burst (audio feedback) and they feel the air rushing out onto their skin (touch feedback). Since the visual, auditory and tactile feedback match the expected outcome, the person has a high sense of agency, i.e., they feel in control of their action.
Research on the SoA is relevant because it influences several aspects of our daily life, such as our health, our learning process or simply our perception of the technology that we use.

2.1. Sense of Agency in Health

The sense of controlling the actions of our own body is fundamental to a healthy perception of the bodily self: when the SoA is intact, a person recognises themselves as the agent of their own actions. The sense of agency is altered in psychosis. For example, patients with schizophrenia may believe that their movements and thoughts are driven by external forces, because their brains cannot predict the consequences of the actions that they perform [3].
Although the SoA has an important impact on daily life, there are few studies on the relationship between the SoA and other health conditions, such as stress, depression or the loss of control in schizophrenia and agoraphobia [4]. In this sense, IVEs can be useful tools for researching the relationship between the SoA and mental health. This is the case of [5], where the authors used virtual hands to study how stress and the bodily self relate to the SoA. Also, in ref. [6], a virtual environment was used to analyse whether a deficit in agency, similar to the one produced in schizophrenia, influences the sense of presence or the performance in sensory–motor tests. In general, a reduction in the SoA is associated with poorer health and a decreased quality of life [7].

2.2. Sense of Agency in the Learning Process

Agency is inherent to students' capacity to regulate, control and supervise their own learning [8]. Students' ability to regulate their cognitive, affective and behavioural processes while interacting with their learning environment is key to their academic success.
Research such as [9] has used immersive Virtual Reality technologies to demonstrate that an increase in the perceived sense of presence and agency facilitates the learning process. In the field of immersive collaborative learning, there is also research that analyses the sense of agency when two users collaborate to complete a task while being represented by the same virtual avatar [10]. In this study, researchers observed that users tend to feel a certain degree of control over the avatar even when they have little or no real control. These results open up a new set of possible applications related to Virtual Reality and collaborative teleworking, where users could share virtual bodies.

2.3. Sense of Agency in Human–Computer Interaction

In Human–Computer Interaction (HCI), the sense of agency is a crucial feature for evaluating how users experience their interactions with any technology. For this reason, the SoA has become a central aspect of research in the HCI field [11]. Research on the sense of agency can benefit from new interaction techniques that are being rapidly developed in the field of HCI, such as gestural input, physiological or intelligent interfaces, and assistance methods [11]. Moreover, immersive Virtual Reality is helping researchers study the sense of agency related to the bodily self. For instance, they analyse whether, and to what extent, people can experience the same emotions towards a virtual body inside an IVE as towards their biological body [12].

2.4. Sense of Agency: Theoretical Models

Researching mechanisms to improve the sense of agency when interacting in an IVE implies knowing the psychological mechanisms that influence that perception. There are two main neurocognitive models describing the SoA [13]: the Comparator Model and the Theory of Apparent Mental Causation. The first suggests that the SoA primarily arises from the underlying processes of motor control: it emerges from the comparison between sensory predictions and the actual sensory feedback. The second focuses on situational signals and suggests that the SoA is inferred retrospectively, after the action is performed, from the correspondence between external events and our intentions.
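As an illustration, the core of the Comparator Model can be captured in a few lines of code. The following Python sketch is ours, not part of the cited models: it treats predicted and actual sensory feedback as position vectors and maps their mismatch onto an agency score; the exponential mapping and the tolerance parameter are illustrative choices, not part of the theory.

```python
import numpy as np

def comparator_soa(predicted_feedback: np.ndarray,
                   actual_feedback: np.ndarray,
                   tolerance: float = 0.15) -> float:
    """Return a sense-of-agency score in [0, 1].

    Following the Comparator Model, agency is high when the sensory
    prediction issued by the motor system matches the actual sensory
    feedback. `tolerance` (a hypothetical parameter) sets how much
    mismatch is forgiven before the score starts to drop.
    """
    # Normalised prediction error between expected and observed feedback.
    error = np.linalg.norm(predicted_feedback - actual_feedback)
    error /= max(np.linalg.norm(predicted_feedback), 1e-9)
    # Map the error onto a smooth agency score: 1 means a perfect match.
    return float(np.exp(-max(error - tolerance, 0.0) / tolerance))

# Example: a virtual hand that lags slightly behind the user's real hand.
predicted = np.array([0.00, 0.10, 0.20])   # expected hand positions
actual    = np.array([0.00, 0.09, 0.18])   # rendered (observed) positions
print(comparator_soa(predicted, actual))   # close to 1 -> strong SoA
```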
Over the last decades, research on the sense of agency has been dominated by the perspective based on the Comparator Model. However, according to [14], the complete system of the SoA may be more complex and involve multiple processes on different cognitive levels. In addition to the comparative mechanism, many other factors contribute to the SoA, such as action selection, intention, effort, emotion, retrospective inference, goal achievement and social interaction. There are also studies that identify factors that do not influence the sense of agency. For example, the results in [15] show that presence and agency are not influenced by the appearance or the type of visualisation of the avatar.
Research conducted in [14] shows a definition of SoA in terms of two layers: the sense of agency of one’s own actions (body agency) and the sense of agency about the external events (external agency) (see Figure 2).
The first layer involves self-consciousness: one feels that one's consciousness controls one's body. The second layer is relevant for the interaction with the environment: one feels in control of external events and perceives the environment reacting coherently to one's actions. This two-layer definition is appealing for application to IVEs because it links the SoA not only to the avatar's behaviour but also to the reactions of the environment. Thus, it involves multiple senses, such as sight, hearing and touch. This is relevant because research shows that the human sense of agency becomes stronger or weaker depending on the senses involved in the action. In the context of IVEs, the senses of sight and hearing have been extensively implemented. The sense of touch can greatly enhance the realism and immersion of the experience, yet few environments have been developed taking haptic feedback into account. Haptic feedback can simulate physical interactions within the virtual world, making virtual experiences more tangible and believable [16,17]. The influence of haptics on user experience was also studied in [18], where the authors suggested that haptic feedback alone generates a better sense of presence than visual feedback alone. Stimulating the sense of touch by interacting with other agents and with virtual content can profoundly impact the user's psychological and emotional state.

3. State-of-the-Art Technologies for Emotional Management

In this section, we analyse the technologies that allow management of the emotional interactive flow between users and the content of the virtual world. The aim of this analysis is to select the appropriate technologies to design a framework for the development of IVEs that could enhance the sense of agency.

3.1. Affective Computing

Traditionally, in the field of Human–Computer Interaction (HCI), users leave emotions behind to be able to interact with computers in an efficient and rational way [19]. However, recent research considers that emotions have significant relevance for human–computer interactions and also for virtual environments [20].
Affective Computing (AC) was defined in the 1990s and consolidated in 1997, when the first book on the subject, written by Professor Rosalind Picard (MIT), was published [21]. Initially, AC was defined as a computer's ability to recognise, understand and even have and express emotions. Later, Picard proposed another definition: computational approaches for the deliberate detection and induction of affect [22]. Nowadays, broader definitions can be found, such as the one in [23], where the authors propose that AC is computing that relates to emotions or other affective phenomena, that emerges from them or that deliberately influences them.
Taking these definitions into account, to design a user interface based on emotions, the frameworks and toolkits to explore should address the following characteristics:
  • Emotion classification: models that define the set of emotions that the system is able to recognise and express.
  • Emotion recognition: techniques for recognising the emotions felt by users during their interaction with the system.
  • Generation of emotional responses: emotional models and technologies that allow the system to decide which emotion to respond with and to choose the best way to show this emotional response.
  • Emotional expression: the audiovisual techniques used by the system to show emotional responses.

3.2. Classification of Emotions

The first step to design an emotional system is to define the set of emotions that it might be able to manage. There are different classifications of emotions, which can be divided mainly into two groups: discrete and continuous. This means that emotions can be described as a defined set of discrete entities or as elements placed along continuous dimensions [24].
Discrete models propose a specific set of emotions. For instance, Ekman [25] categorises six basic emotions: happiness, anger, fear, sadness, surprise and disgust (Figure 3). Other less restrictive models include sets of emotions that are not considered universal due to their social and cultural influence. This is the case of Plutchik's discrete model [26], which represents the following emotions on a wheel: trust, surprise, joy, fear, disgust, sadness, anticipation and anger. Other classifications define sets of 19 [27] or 22 [28] discrete emotions.
On the other hand, dimensional models represent emotions as the combination of several psychological dimensions. For instance, Russell's circumplex [30] considers each emotion as a linear combination of two affective dimensions: valence and arousal. The arousal dimension expresses the intensity of the emotion, while the valence dimension quantifies how positive or negative the emotion is on a continuous pleasure–displeasure scale.
Later on, Mehrabian and Russell [31] created a three-dimensional model by adding a dominance dimension to arousal and valence. Dominance values represent the user's feeling of being in control of the application, ranging from submissive/without control to dominant/in control and empowered.
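To make the two families of models concrete, the following Python sketch (ours, for illustration only) places a handful of Ekman-style discrete emotions at hypothetical coordinates in the valence–arousal–dominance (VAD) space and maps any point of the continuous space back to the nearest discrete label; the coordinates are illustrative, as their exact placement varies across studies.

```python
from math import dist

# Illustrative valence-arousal-dominance coordinates (each in [-1, 1]);
# the exact placement of discrete emotions varies across studies.
VAD = {
    "joy":      ( 0.8,  0.5,  0.4),
    "anger":    (-0.6,  0.7,  0.3),
    "fear":     (-0.7,  0.6, -0.6),
    "sadness":  (-0.7, -0.4, -0.4),
    "surprise": ( 0.3,  0.8, -0.1),
    "disgust":  (-0.6,  0.3,  0.1),
}

def nearest_emotion(valence: float, arousal: float, dominance: float) -> str:
    """Map a point in the continuous VAD space to the closest discrete label."""
    return min(VAD, key=lambda e: dist(VAD[e], (valence, arousal, dominance)))

print(nearest_emotion(0.7, 0.4, 0.5))  # -> "joy"
```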

3.3. Emotion Recognition

From the technological point of view, there are two main methods to identify users’ emotions: measuring physiological changes and analysing expressive behaviour.
Recognising emotions from users' expressive behaviour has the main advantage of not requiring contact with a measuring device or sensor. This type of recognition is usually achieved by analysing facial expressions [32] and body gestures [33] with computer vision and artificial intelligence techniques, or by analysing variations in the tone or speed of speech with audio-signal techniques [34].
Nevertheless, emotion recognition systems based on physiological signals have attracted increasing research interest [35,36]. Although plenty of devices are capable of measuring these physiological signals in the human body [37], the devices most frequently used in the field of Affective Computing are described below.
  • Analysis of electrodermal activity (EDA): EDA is related to the sympathetic nervous system. Humans react to behavioural, cognitive and affective phenomena by activating sweat glands to prepare the body for action, and EDA sensors measure this secretion of sweat [38].
  • Analysis of the electrocardiogram (ECG): the ECG is a reliable source of information with considerable potential for recognising and predicting human emotions such as anger, joy, trust, sadness, anticipation and surprise. More specifically, detecting these emotions requires extracting the Heart Rate Variability (HRV) from ECG measurements [39] (see the sketch after this list).
  • Analysis of the electroencephalogram (EEG): EEG is an electrophysiological monitoring method that registers the electrical activity of the brain through electrodes. EEG signals can reveal relevant characteristics of emotional states, and several BCI emotion recognition techniques based on EEG have recently been developed [40]. Given that ECG, EDA and EEG are the main methods for detecting emotions outside users' voluntary control, some authors prefer EEG-based systems for emotion recognition [41]: ECG measurements suffer a large delay between stimulus and emotional response, and EDA-based systems cannot report the valence dimension when used on their own.
  • Analysis of the respiratory rate (RR): emotional states can be identified by means of their respiratory pattern. For instance, happiness and related positive emotions produce significant respiratory variations, including an increase in pattern variability and a decrease in inspiration volume and breathing time. The effect of positive emotions on respiratory flow depends on how exciting they are; i.e., the more exciting ones increase the respiratory rate. On the other hand, disgust suppresses or interrupts respiration, probably as a natural reaction to avoid inhaling noxious elements [42].
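As a concrete example of the ECG bullet above, the sketch below (ours, under the assumption that R-peaks have already been detected in the ECG signal) computes two standard time-domain HRV features, SDNN and RMSSD, which are commonly fed to emotion classifiers.

```python
import numpy as np

def hrv_features(r_peak_times_s: np.ndarray) -> dict:
    """Compute standard HRV features from ECG R-peak timestamps (seconds).

    RR intervals are the gaps between consecutive heartbeats; SDNN and
    RMSSD are widely used time-domain HRV measures in emotion recognition.
    """
    rr = np.diff(r_peak_times_s) * 1000.0               # RR intervals in ms
    sdnn = float(np.std(rr, ddof=1))                    # overall variability
    rmssd = float(np.sqrt(np.mean(np.diff(rr) ** 2)))   # beat-to-beat variability
    return {"mean_rr_ms": float(np.mean(rr)), "sdnn_ms": sdnn, "rmssd_ms": rmssd}

# Example: simulated R-peaks of a heart beating at roughly 75 bpm with jitter.
peaks = np.cumsum(np.random.default_rng(0).normal(0.8, 0.03, 120))
print(hrv_features(peaks))
```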
As mentioned in Section 3.2, discrete and dimensional models can be used to classify a set of emotions. Discrete models are widely used when emotion recognition is performed by analysing expressive behaviour. In particular, it is even more common to recognise emotions by facial expressions due to Ekman’s research [43].
When recognising emotions by means of physiological signals, the preferred models are the dimensional ones, that is, models based on valence, arousal and dominance (Figure 4). Frequently, several types of physiological signals are combined. For instance, in the literature, heart rate has been used to measure different levels of valence, while electrodermal activity (EDA) and eye tracking have been used to measure arousal. Analysis of facial expressions and head poses has been used to predict both valence and arousal. An example of the combination of recognition techniques is [44], in which the dimensional model is used to describe how emotional states provoked by affective sounds can be recognised efficiently by estimating dynamics in the autonomic nervous system (ANS). This work also shows that emotional states are modelled as a combination of the arousal and valence dimensions, while the dynamics of the ANS are estimated by standard non-linear analysis of the heart rate variability (HRV) derived from the electrocardiogram (ECG).
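A minimal fusion step of this kind could look like the following sketch (ours; the baselines, signs and scalings are placeholders, since real systems calibrate per user and typically learn the mapping from labelled data): heart rate deviation serves as a weak valence proxy, while EDA deviation drives arousal.

```python
def estimate_va(heart_rate_bpm: float, eda_microsiemens: float,
                hr_baseline: float = 70.0, eda_baseline: float = 2.0) -> tuple:
    """Very rough valence/arousal estimate from two physiological channels.

    Baselines are hypothetical per-user calibration values. The linear
    mapping and its signs are illustrative only; production systems learn
    this relation from labelled training data.
    """
    arousal = max(-1.0, min(1.0, (eda_microsiemens - eda_baseline) / eda_baseline))
    valence = max(-1.0, min(1.0, -(heart_rate_bpm - hr_baseline) / hr_baseline))
    return valence, arousal

print(estimate_va(heart_rate_bpm=85.0, eda_microsiemens=3.2))
```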

3.4. Generating Emotional Responses

The following aspect of Affective Computing refers to the capacity of machines to respond appropriately to users' emotions. Once the emotions are recognised, the system needs to decide how to react to them. In the literature, two types of systems for generating emotional responses can be found: those that react in a predefined way to interactive events and those that modify the application's content and interactive flow to adapt it to users' emotions based on their context. In some specific cases, such as learning processes or body rehabilitation treatments, adapting to users' emotional states is especially relevant, because an excessive level of difficulty or an inadequate situation may harm the users' health or learning process [37].
To adapt the response in relation to the users’ emotional state, the system needs a reasoning mechanism to be able to select the emotion to express based on context. This means that the emotional interaction is generated dynamically, as a result of an internal evaluation process of the events produced by the interaction.
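The simplest form of such a reasoning mechanism is a set of context-dependent rules, as in the following sketch (ours; the thresholds and response labels are illustrative, and a production system would replace these hand-written rules with the cognitive models discussed below).

```python
def select_response(valence: float, arousal: float, task: str) -> str:
    """Rule-based sketch: pick an environment adaptation from the user's state.

    The thresholds and response labels are hypothetical; real systems would
    derive this decision from a cognitive emotional model and the context.
    """
    if task == "rehabilitation" and valence < -0.3 and arousal > 0.5:
        return "lower_difficulty"          # frustration: ease the exercise
    if arousal < -0.4:
        return "introduce_novel_stimulus"  # boredom: re-engage the user
    if valence > 0.3:
        return "keep_current_flow"         # user is comfortable
    return "offer_encouraging_feedback"

print(select_response(valence=-0.5, arousal=0.7, task="rehabilitation"))
```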
There are emotional theories that study how different emotions manifest in human beings. Some of these models can be implemented in computer systems to simulate human emotional mechanisms.
Four main theoretical perspectives in the study of emotion can be found: the evolutionary expressive perspective, the physiological perspective, the cognitive perspective and the social structures perspective.
Several authors have based the development of their emotional models on these perspectives [47].
The evolutionary expressive proposal was first developed by Darwin [48] and states that human emotional expressions are determined by evolution and are universal and innate.
The second proposal, the physiological one, suggests that bodily responses give rise to emotions. One of the most relevant theories of this perspective is the James–Lange theory [49], which states that emotions emerge from physiological arousal. For instance, one feels happier when one smiles, sadder when one cries, and fearful when one flees. Thus, this theory states that, in response to different experiences and stimuli, the autonomic nervous system creates physiological responses (muscular tension, tearing, cardio-respiratory acceleration) from which emotions are created. Within the physiological context, there are neurological theories. Research such as that by Damasio [50] indicates that people have neuronal patterns that represent changes in the body and in the brain, and that those changes build emotions [51]. Examining physiological theories, Cannon stated that physiological arousal and emotional experience are produced simultaneously, although independently [52]. In their experiments, Cannon and Bard determined that the perception of emotion-arousing stimuli gives rise to two phenomena: the conscious experience of emotion and general physiological changes, both generated because the thalamus sends its impulses to the cerebral cortex and the hypothalamus.
The third perspective, the cognitive theory, states that thoughts play an essential role in the creation of emotions. Supporters of this perspective suggest that knowledge is another relevant part of experiencing emotions. In this case, emotions are the result of a cognitive evaluation process of the events in the environment. Several emotional models follow this perspective; some of the most relevant in the field of Affective Computing have been developed by Roseman [53], Scherer [54] and Ortony, Clore and Collins (the OCC model) [28]. Specifically, the OCC appraisal theory (Figure 5) is one of the most used in the field of Affective Computing because its elements correspond to commonly used notions in agent models [55].
Some authors, such as Schachter and Singer [56], merge physiological and cognitive theories, stating that the origin of emotions lies, on the one hand, in our interpretation of the peripheral physiological responses of our organism and, on the other hand, in the cognitive evaluation of the situation that creates those physiological responses. Therefore, this theory suggests that emotions appear due to the cognitive evaluation of events and the body's response: the person perceives the physiological changes, acknowledges what is happening in their environment and produces their emotions according to both observations. The intensity of the emotion a person feels is mainly determined by the physiological changes.
Figure 5. Several studies employed the OCC model to generate emotions for embodied characters; image source [57].
Finally, the social constructive perspective proposes the idea that emotions cannot be strictly explained in terms of physiological or cognitive facts. Instead, it states that emotions are social. Therefore, a social level of analysis is necessary to truly understand the nature of emotion.
In the context of HCI, the two most commonly used perspectives for developing computational models of emotion are the cognitive and physiological ones. While physiological theories are mostly used to develop emotional enhancement [58] and emotion recognition systems (Section 3.3), cognitive theories are the most frequently used to develop systems that adapt their responses to human emotions. Intelligent systems developed on the basis of cognitive theories are known as cognitive agents. According to the research in [59], there are many different approaches to building such agents. The main mature frameworks for intelligent agents are based on cognitive architectures, such as Soar [60], ACT-R [61] and BDI (Beliefs, Desires and Intentions) [62]. Compared to Soar and other complex architectures, BDI stands out for its simplicity and versatility. BDI architectures that include affective aspects influencing the cognitive process are known as EBDI. EBDI agents can be classified into two main groups: (1) those that incorporate emotions and (2) those that also consider other aspects such as personality, emotional state, empathy, coping mechanisms and emotional regulation [63].
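To ground the EBDI idea, here is a deliberately minimal Python sketch (ours; an illustrative simplification, not a faithful implementation of Soar, ACT-R or any published EBDI architecture) in which the appraisal of perceived events shifts a valence–arousal emotional state that, in turn, biases which desires are promoted to intentions.

```python
from dataclasses import dataclass, field

@dataclass
class EBDIAgent:
    """Minimal EBDI sketch: a BDI loop whose deliberation is biased by emotion.

    All structures, weights and thresholds are illustrative assumptions.
    """
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=list)
    intentions: list = field(default_factory=list)
    emotion: tuple = (0.0, 0.0)          # (valence, arousal)

    def perceive(self, event: str, appraisal: tuple) -> None:
        self.beliefs[event] = True
        # The appraisal of the event shifts the agent's emotional state.
        self.emotion = tuple(0.4 * e + 0.6 * a
                             for e, a in zip(self.emotion, appraisal))

    def deliberate(self) -> None:
        valence, arousal = self.emotion
        # Emotion filters which desires become intentions: a distressed agent
        # (negative valence, high arousal) prioritises coping goals.
        if valence < 0 and arousal > 0.5:
            self.intentions = [d for d in self.desires if d.startswith("cope")]
        else:
            self.intentions = list(self.desires)

agent = EBDIAgent(desires=["cope:reassure_user", "advance_narrative"])
agent.perceive("user_frustrated", appraisal=(-0.8, 0.9))
agent.deliberate()
print(agent.intentions)   # -> ['cope:reassure_user']
```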
In this section, we reviewed technologies that allow dynamic adaptation of the IVE's response to the emotional events of the users. Dynamic adaptation is essential to achieve a natural and coherent interaction. This may also apply to the sense of agency, as recent studies recognise that the SoA goes beyond being a static phenomenon and may fluctuate as a function of various factors such as user interaction, environmental changes and interindividual variability [4,17].

3.5. Expressing Emotions

Once the system recognises a person's emotion, it should decide which emotion to reply with and through which sensory channel. To achieve this, the system should offer the necessary tools to express that emotion.
In the case of IVEs, the system tools to express emotions include avatars, with verbal and body language ability, as well as virtual content, such as 3D models and audio, as potential transmitters of emotions. Therefore, emotional responses could be expressed by one of the following elements:
  • Non-verbal language: Mehrabian [64] concluded that, in face-to-face communication, the three parts of a message (words, tone of voice and body language) should coherently support one another. In fact, 7% of our acceptance of and empathy for a message emitted face to face depends on our acceptance of the choice of words, 38% depends on the use of the voice (tone, volume) and 55% depends on gestures and facial expression. Because of this, non-verbal language has an important impact when expressing emotions. Virtual Reality environments allow both non-verbal communication, through the use of avatars with full-body movements, and verbal communication in real time, although they usually lack a complete system for non-verbal signals [65]. In [66], the authors highlight the lack of interaction paradigms with facial expression and the near absence of significant control over environmental aspects of non-verbal communication, such as posture, pose and social status.
    Several research works have focused on defining how human beings express emotions with their body and face. Although studies such as Giddens' [67] state that facial emotional expression is one of the main aspects of non-verbal communication when conveying emotions, body movements are also considered essential to accompany words [68]. To generate believable avatars, it is imperative to develop an animation engine based on these studies.
    On the one hand, in the area of facial animation, there is diverse research, much of it based on [43]: Ekman and Friesen developed the Facial Action Coding System (FACS), an exhaustive system based on human anatomy that captures all visually distinguishable facial movements. FACS describes this facial activity by means of 44 unique actions called Action Units (AUs), which also categorise head movements and eye positions (see the blendshape sketch after this list).
    On the other hand, body animation in the field of Affective Computing has been less researched than facial expression [69], although some studies focus on body positions and hand movements in non-verbal expressiveness [70,71]. Nowadays, most of the work related to body animation is based on data collected by motion capture (mocap) systems. Mocap is a popular technique for representing human motion based on tracking markers corresponding to different regions or joints of the human body [72].
    Nevertheless, these studies only apply when an avatar has an anthropomorphic appearance. For other types of emotional agents with no human shape, non-verbal communication can be achieved by modifying their shape [73] or colour [74], or by changing the way they move. Following this last approach, there is research that analyses the effect that the amplitude, acceleration and duration of movements have on different emotions [75]. Other studies analyse variations in heart rate [76] and respiratory rate [42] to link each rhythm variation to its corresponding emotion.
  • Verbal Language: Dialogue Systems and Digital Storytelling: Bickmore and Cassell [77] confirmed that non-verbal channels are important not only for transmitting more information and complementing the voice channel but also for regulating the conversational flow. Because of this, when the IVE expresses an emotional response, it is important to provide it with mechanisms that can adapt the conversational flow to the emotional states of the interlocutors. Several authors work with Digital Storytelling techniques to adapt the IVE's narrative flow to the emotions of participants and offer a more natural and adaptive verbal response. For instance, in [78], players' physiological signals were mapped into valence and arousal and used as interactive input to adapt a video game narrative.
  • Audiovisual Mechanisms: Beyond expressing an emotional response through an avatar’s verbal or non-verbal language, the way in which content is shown inside the IVE is also an option to emotionally respond to the user. For instance, augmenting verbal communication by means of adding sounds that are not related to talking, such as special effects and narrative music, can not only offer more information about the content, but also affect the atmosphere and the mood. This happens because sound is capable of achieving emotional engagement, of improving the learning process and of augmenting the overall immersion [79]. In studies such as [78], sounds such as music are not the only additions to the IVE. Instead, visual elements are also included to accompany the avatar, like a flying ball that changes its colour depending on the users’ emotions detected by physiological measures. Results show that participants are in favour of narrative, musical and visual adaptations of video games based on real-time analysis of their emotions.
    In the literature, several emotion-based multimedia databases can be found for eliciting emotions. The content to be shown can range from static images (the IAPS database [80]) to single sounds (the IADS database [81]), both labelled according to the affective dimensions of valence, arousal and dominance/control, or video sequences (the CAAV database [82], which provides a wide range of standardised stimuli based on two emotional dimensions: valence and arousal). Virtual Reality can be an interesting medium due to its complexity and flexibility, as it offers numerous possibilities to transmit and elicit human emotions. An example of this is the set of ten scenarios developed in [83], two for each of five basic emotions (anger, fear, disgust, sadness and joy), containing design elements mapped onto the valence and arousal dimensions.
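Returning to the FACS bullet above, avatar facial animation commonly works by translating AU intensities into the blendshape weights of a character rig. The following sketch is ours: the AU subset for a happiness expression follows Ekman and Friesen's coding (AU6 cheek raiser plus AU12 lip corner puller), while the blendshape names and the mapping are hypothetical, since every rig defines its own.

```python
# Illustrative subset of FACS Action Units for a "happiness" expression:
# AU6 (cheek raiser) and AU12 (lip corner puller), intensities in [0, 1].
HAPPINESS_AUS = {"AU6": 0.7, "AU12": 0.9}

# Hypothetical correspondence between AUs and a rig's blendshape names;
# any real avatar rig defines its own naming and coverage.
AU_TO_BLENDSHAPE = {
    "AU1": "browInnerUp", "AU4": "browDown",
    "AU6": "cheekSquint", "AU12": "mouthSmile",
    "AU15": "mouthFrown", "AU26": "jawOpen",
}

def expression_to_blendshapes(aus: dict) -> dict:
    """Translate AU intensities into blendshape weights for the avatar rig."""
    return {AU_TO_BLENDSHAPE[au]: w for au, w in aus.items()
            if au in AU_TO_BLENDSHAPE}

print(expression_to_blendshapes(HAPPINESS_AUS))
# -> {'cheekSquint': 0.7, 'mouthSmile': 0.9}
```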

3.6. Methods for User Evaluation

User experience (UX) in Virtual Reality environments can be measured by both subjective and objective methods. Subjective methods are a convenient option for evaluating usability, comfort and satisfaction; however, they are more prone to judgement errors [84]. Moreover, subjective methods hardly allow the evaluation of performance and system efficiency [85].
Several user evaluation methods related to emotional management in the context of IVEs and more specifically to the sense of agency are described below.

3.6.1. Subjective Measures

For evaluating the perceived emotional state of users with subjective measures, there are several questionnaires, such as PANAS [86] and SAM [87]. PANAS is a short international questionnaire composed of 10 items that evaluates positive and negative affect. Nevertheless, one of the most commonly used questionnaires is the Self-Assessment Manikin (SAM) (Figure 6), a non-verbal pictorial evaluation technique that assesses the valence, arousal and dominance associated with the reaction to a stimulus. SAM has been extensively used in VR applications [88,89,90,91,92].
Other widely used methods for evaluating the subjective experience in VR environments are subjective questionnaires. These are usually composed of several qualitative questions to be answered using Likert scales. These questionnaires evaluate characteristics such as presence, interactivity and, more commonly, immersion [94,95,96].
For the measurement of the sense of agency in IVEs, researchers have developed tools to measure its components experimentally. Explicit measures of the SoA are commonly based on verbal reports in which users are asked to evaluate their SoA during a task or simply to rate the degree to which they felt they were the agents of the actions [97]. Other researchers use Likert scales as subjective questionnaires, with questions commonly related to the psychophysical aspects of agency, such as confidence and feeling in control of a target [5]. There are predefined questionnaires, such as the Sense of Agency Scale [98], which consists of 13 items measuring the positive and negative sense of agency, and the Sense of Agency Rating Scale (SOARS) [99]. Specifically for IVEs, Longo and Haggard [100] developed a set of items to measure the sensation of ownership and agency over a virtual hand.
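Scoring such questionnaires is straightforward; the sketch below (ours) aggregates 13 Likert ratings into a single score after reverse-scoring negatively worded items. The item indices, scale range and keying are hypothetical and do not reproduce the published Sense of Agency Scale.

```python
def soa_score(ratings: list, negative_items: set, scale_max: int = 7) -> float:
    """Aggregate Likert ratings into a single SoA score.

    Negatively worded items are reverse-scored. Indices, scale and keying
    are illustrative, not the published Sense of Agency Scale layout.
    """
    adjusted = [(scale_max + 1 - r) if i in negative_items else r
                for i, r in enumerate(ratings)]
    return sum(adjusted) / len(adjusted)

# 13 hypothetical responses on a 1-7 scale, items 6-12 negatively worded.
print(soa_score([6, 7, 5, 6, 6, 7, 2, 1, 3, 2, 2, 1, 2],
                negative_items=set(range(6, 13))))   # -> ~6.15
```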

3.6.2. Objective Measures

Several studies highlight the necessity of correlating the results of subjective questionnaires, such as SAM, with physiological measurements [88,89,90,91,92]. In the context of immersive environments, some of these studies use emotion recognition systems [101], heart rate measurement devices [102] or analysis of EEG signals [103] in pursuit of objectivity. Multimodal measurements have been recognised as the foundation of promising research in the field of multimodal data and learning analytics. This is the case of [104], which follows an approach based on multimodal data analysis to achieve a more continuous and objective view of how engagement develops in users and how it influences their task execution. In that study, physiological data streams were captured through facial expression sensors, eye tracking and electrodermal activity (EDA), and the results were correlated with subjective self-report data.
Regarding objective measurements for evaluating the SoA, there are two widespread techniques to measure the implicit feeling of control: intentional binding and sensory attenuation. The latter aims to identify the decrease in the perceived intensity of the sensory effects of a self-initiated action, although it is not applied in the field of HCI. Intentional binding paradigms refer to the subjective temporal compression between actions and their outcomes. They are commonly used as an implicit measure of the SoA, based on the hypothesis that voluntary actions and their outcomes are perceived as closer in time when there is a high SoA [105]. Thus, intentional binding has been measured, for instance, by using a time-interval estimation task [106]. One of the most commonly used techniques, developed by Haggard, Clark and Kalogeras [107], is based on Libet's clock [108].
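In interval-estimation variants of the paradigm, the analysis reduces to comparing judged and actual action-outcome intervals across conditions. The sketch below (ours, with simulated data standing in for participant judgements) computes the mean judgement error per condition; a more negative error in the voluntary condition than in a passive baseline indicates temporal compression and, by hypothesis, a higher implicit SoA.

```python
import numpy as np

def judgement_error(judged_ms: np.ndarray, actual_ms: np.ndarray) -> float:
    """Mean error of judged action-outcome intervals (negative = compression)."""
    return float(np.mean(judged_ms - actual_ms))

# Hypothetical data: a 250 ms tone delay follows each button press.
actual    = np.full(20, 250.0)
voluntary = np.random.default_rng(1).normal(190, 25, 20)   # compressed judgements
passive   = np.random.default_rng(2).normal(245, 25, 20)   # near-veridical

binding_effect = judgement_error(voluntary, actual) - judgement_error(passive, actual)
print(binding_effect)   # clearly negative -> implicit SoA in the voluntary condition
```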
These measurement techniques pose an important challenge for IVEs because they require great visual attention to Libet's clock while users are exposed to constant visual information and need to perform more complex actions. Kong et al. [109] evaluated the implicit SoA by means of this paradigm, although they presented a static context in VR, allowing users to focus their attention on the clock. In [110], the authors took these objective measures by augmenting the clock stimuli with auditory and tactile signals as an approach for future testing in VR environments.

3.7. Frameworks Related to Immersion and Sense of Agency in Immersive Virtual Environments

Immersive virtual environments (IVEs) are computer-generated three-dimensional spaces that integrate the set of technologies necessary to simulate real or fictional scenes that can make people feel present in a virtual world.
In fact, feeling immersed in a virtual world is one of the most important characteristics of IVEs. This aspect depends on both the quality of the IVE and the users’ subjective experience. Regarding the quality of the IVE, several features can influence the feeling of immersion, including field of view (FOV), field of regard (FOR), display size and resolution, stereoscopic imaging quality, lighting realism, and frame and refresh rates [111]. However, the subjective experience is influenced by users’ own previous experiences and emotional states.
In this section, we analyse frameworks that have been proposed to manage the emotional interaction between the different IVE elements. Our main goal is to find proposals focused on developing IVEs that improve the SoA. In this review, we did not find any framework specifically designed to improve the SoA. Nonetheless, there are interesting frameworks that propose technologies to manage IVE features and other characteristics that do influence the SoA, such as emotions, immersion, the feeling of presence, physiological changes and content adaptation.
To begin with, there are proposals for closed-loop interactive systems in VR that address the necessity of not only recognising users' emotions but also adapting the environment to them. This is the case in [112,113], in which a framework composed of two modules was proposed: one to recognise emotions and another to elicit them. Thus, the authors implemented an emotional loop for IVEs, where the virtual world dynamically changes (expression of emotions) based on the users' perceived emotions (emotion recognition). In order to properly recognise emotions, the authors suggest capturing users' biofeedback as well as collecting contextual data on relevant features such as attention and engagement. Moreover, in ref. [114], the authors focused on users' feeling of presence; more specifically, they computed the error of presence. They measured and modelled users' state of presence in an IVE based on discrete emotion recognition, including a feature extraction component that uses physiological signal analysis to understand users' emotional state and relate it to their sense of presence. These signals, including heart rate (HR), skin conductance level (SCL) and respiratory rate (RR), were proposed to be extracted with ECG and EDA techniques.
In addition, there are other proposals for closed-loop frameworks related to the evaluation of immersive virtual environments. It is worth noting that this type of framework could also guide the implementation of IVEs. This is the case in [115], where an evaluation framework was proposed to define and assess the different components that may affect the immersive experience. The emotional components relate to a three-level response system: visual, auditory and haptic.
Related to the implementation of IVEs, there are also frameworks that include interaction modules to be used when developing VR applications. For instance, in [116], an application framework was proposed for developing VR experiences that require equipment such as sensors to measure physiological data. Although this framework only uses physiological data to display them onscreen so that users can acknowledge their physical state, it integrates modules to record these data in real time so that other functionalities can be developed to make use of them. Also, in [117], an Affective Computing framework was presented to develop serious games for virtual rehabilitation. It is composed of three modules related to emotion recognition, system decision making and environment adaptation. To recognise users' emotional states, their facial expressions are recorded in real time and analysed through face mapping based on FACS in terms of seven discrete emotions (anger, contempt, disgust, fear, joy, sadness and surprise). This recognition system is also adaptable to dimensional emotions (valence and arousal) captured, for instance, by EEG. Another example of this type of framework is [118], which focuses on the system's capability to capture and analyse physiological data. It supports diverse sensors, such as EDA, EEG and RR, as well as camera monitoring, to extract physical reactions.
Other works have focused on the necessity of developing IVEs that consider cognitive theories of emotion. For instance, ref. [119] addressed the lack of computational emotional models, specifically for the use of Virtual Humans. Thus, the authors proposed a framework based on appraisal-elicited emotions, grounded in cognitive theories of emotion such as Scherer's [54].
On the other hand, there are several studies related to enhancing the sense of agency by adapting the virtual environment to the users' body movements. Specifically, in ref. [97], the authors showed that externally reducing the error between one's motor intention and the visual feedback promotes the sense of agency when performing continuous body movements. Also, in refs. [120,121], the authors suggested that improving sensorimotor cues enhances the implicit SoA. Other studies relate to the widely explored Virtual Hand Illusion (VHI). In ref. [122], the authors showed that users perceive the virtual hand as part of their own body and that the sense of agency depends on the delay and movement variability between the real and the virtual hand. Also, in ref. [123], it was suggested that the sense of agency is influenced by the users' capability to efficiently control other virtual artefacts rather than by the virtual hand's size. In [124], improving body movements and the related agency was addressed from a rehabilitation point of view, suggesting the transfer of learning between virtual exercises and the real world.

4. Proposed Framework to Improve the Sense of Agency in Immersive Virtual Environments

Relevant frameworks were reviewed in Section 3.7, most of them focused on enhancing the sense of presence [114] and immersion [115] or on managing the emotional interactive flow [112]. In this work, however, we focus on improving and evaluating the sense of agency in virtual environments.
Some virtual environments employ techniques for emotion recognition: facial expressions [117] and modules to capture and analyse real-time physiological data during VR interactions [114,116]. Others, such as [119], highlight the importance of developing frameworks based on appraisal to manage cognitive models, which may be useful for managing intention when addressing the SoA. Also, other research, such as [97,120,122], suggests that the sense of agency can be improved by adapting the virtual environment to the users’ body movements.
Since the sense of agency has a significant impact on our daily lives, in this section we propose a framework that integrates the key elements of the interactive flow within an IVE. We also address the technologies needed to manage the emotional flow within it, focusing on enhancing the user's sense of agency.
The theoretical basis of this framework relies on the definition of agency described in [14], which states that the SoA is composed of two layers (Figure 2). As explained in Section 2.4, the first layer addresses the sense of agency over one’s own action (body agency), and the second layer presents it over external events (external agency).
Although the literature in the field of psychology does not always draw a clear line between these two layers, this model was selected because its clear structure is well suited to implementation. Moreover, its design is particularly applicable in the context of immersive virtual environments, because it goes beyond body agency to include external agency, which is essential for evaluating interaction in an IVE. The two-layer model links the SoA not only to a virtual avatar but also to the coherent response of the environment, involving different senses such as sight, hearing and touch (Figure 7).
This work is based on already existing Affective Computing technologies that can be used on both layers to implement a complete method that improves the SoA for body and external agencies. In fact, AC offers tools to develop closed-loop interactions to improve the SoA by managing emotions. Specifically, AC provides applicable models that could be used to predict intention, as well as systems to generate coherent emotional responses to that intention. In addition, AC technologies allow the evaluation of the SoA through metrics such as anxiety variations or detection of users’ perceived errors during interaction.
The proposed framework shows the main elements involved in closed-loop interaction in immersive virtual environments: the users and their virtual avatars, the environment itself, and the interaction that could happen between them. This interaction flow is shown in Figure 8. Dashed lines indicate the transmission of sensations or emotions and continuous lines indicate the coupling of two elements, implying direct interaction between them. Each element groups the technologies that we propose in order to manage the necessity of recognising, expressing and generating emotions depending on a given situation.
Each of these interactive elements generates events and responses that can influence the sense of agency of the users involved in the interaction. This framework aims to integrate the AC technologies that allow interpretation of the user's intention and emotional state in order to generate a response from the IVE that fulfils their expectations, following the two-layer model that integrates body and external agencies.
Since the framework shows the general AC technologies that allow each element involved in the interaction flow to recognise, interpret and respond coherently, the technical implementation of this framework depends on the needs of the IVE. For example, in the case of the avatar's body expressiveness, the framework proposes technologies such as real-time (RT) facial or body animation systems, which the IVE designer has to choose according to the technical and functional requirements of their virtual environment.
The closed-loop interactive flow works as follows: the user is immersed in an IVE and interacts with the virtual world through their virtual avatar (represented by a dashed orange line). In addition to VR controller events, the system also sends the user's emotions, feelings and physiological changes to their avatar. This ensures the real-time generation of appropriate avatar behaviour, enhancing body agency. In Figure 8, we propose the AC technologies discussed in Section 3 for emotion recognition within the orange circle labelled ER. Through the technologies of the blue circle labelled EE, the avatar is able to generate the corresponding animation and thus transmit the emotional state to the person with whom the user is interacting through the IVE (dashed blue line). When the user interacts directly with the 3D content (green line) or with non-player characters (NPCs), the user's emotional information is also sent to the IVE. This way, the response to the interactive event can be adapted to the user's emotional state, thereby improving their external agency. This adapted response is generated from the recognition of the user's emotion and intention, which is sent to the IVE. The IVE then employs the technologies described in the pink circle labelled EG, selects the most appropriate emotional response and expresses it using the technologies of the EE circle. Ideally, during the entire interactive flow, the SoA is also monitored to adapt the IVE's behaviour. As seen in the state of the art, evaluating the SoA in real time is a complex task; to achieve this goal, the framework proposes working with the technologies described in the yellow circle. Finally, the overall user experience and sense of agency are also evaluated, for which several subjective questionnaires can easily be applied.
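The skeleton of one iteration of this loop can be written down compactly. The Python sketch below is ours and purely illustrative: each method is a stub standing in for the AC technologies grouped in the corresponding circle of Figure 8 (ER, EG, EE), and the class name, thresholds and response labels are hypothetical.

```python
class IVELoop:
    """Skeleton of the closed loop in Figure 8; every method body is a
    placeholder for the AC technologies grouped in the framework's circles."""

    def recognise_emotion(self, signals: dict) -> tuple:   # ER (orange circle)
        # In a real system, HRV/EDA features would be fused here.
        return signals.get("valence", 0.0), signals.get("arousal", 0.0)

    def generate_response(self, valence: float, arousal: float) -> str:  # EG (pink)
        return "reassure" if valence < 0 and arousal > 0.5 else "continue"

    def express(self, response: str) -> None:              # EE (blue circle)
        # Stands in for avatar animation plus environment adaptation.
        print(f"avatar animation + environment adaptation: {response}")

    def step(self, signals: dict) -> None:
        """One iteration: recognise -> generate -> express, closing the loop."""
        valence, arousal = self.recognise_emotion(signals)
        self.express(self.generate_response(valence, arousal))

IVELoop().step({"valence": -0.6, "arousal": 0.8})   # -> reassure
```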

5. Discussion and Future Work

After reviewing the technologies involved in the area of Affective Computing in Section 3, we conclude that there are mature technologies that can help to improve the sense of agency in immersive virtual environments. However, we also detected several challenges that still need to be addressed:
  • Develop new techniques for fine-tuning emotion recognition. In the literature, recognition is achieved by analysing expressive behaviour and by measuring physiological changes. The former has improved significantly in recent years with the advancement of Artificial Intelligence; however, recognising emotions by analysing expressive behaviour from the data sources required by AI may involve gender and cultural biases. When recognising emotions by means of physiological signals, the preferred models are the dimensional ones. In the literature, we found that most researchers use the two-dimensional model, since there are sensors that can help to predict the level of valence and arousal reached by the user. However, in order to work with agency and the sense of agency, it is important to consider the third dimension, dominance, although its objective measurement has not yet been achieved. Schachter and Singer [56] established that the origin of emotions comes, on the one hand, from our interpretation of the peripheral physiological responses of the organism and, on the other hand, from the cognitive evaluation of the situation that originates those physiological responses. Following this theory, we propose as a future challenge the development of a system capable of recognising emotions in real time through the combination of physiological metrics and cognitive emotional models. These cognitive models allow modelling users' objectives, standards and attitudes, as well as triggering one emotion or another based on the agency of an event. The system could use physiological measurements to situate the user's emotional state in the global space of dimensional models and then refine the recognition with cognitive models such as Roseman's [53] or the OCC model [28], which allow prediction of the emotion in context (influenced by objectives, norms and attitudes), thus simulating the process described by Schachter and Singer [56].
  • Improve the external agency of the two-layer model [14] by adapting audio, visual and tactile content to users' emotions recognised through the combination of physiological metrics and cognitive theories. Adaptation of IVE content has been implemented at the visual level by means of interactive storytelling techniques that employ user events as inputs. However, such techniques rarely introduce physiological metrics as input to the system or use touch as part of the system's response, which is essential when working with the SoA.
  • Improve the body agency of the two-layer model [14] by integrating physiological metrics into the avatar's body animation. Techniques for adapting the avatar animation based on the user's physiological metrics are less widespread than facial and body animation techniques for expressing emotions. Since physiological signals involve externally visible body changes, such as chest movement (from the respiratory rate) or skin colour (from temperature), reflecting them on the avatar is considered essential in this work for achieving high body agency.
  • Establish objective metrics for the assessment of the SoA in IVEs. In the literature review, several subjective questionnaires were found for the assessment of the SoA. However, these questionnaires should be accompanied by the analysis of objective metrics for a more unbiased assessment. Techniques based on Libet's clock [108] measure intentional binding, but they are not applicable to IVEs, as they require high visual attention and do not allow natural interaction with the immersive environment. Therefore, it is necessary to investigate the creation of tools that allow the objective measurement of agency.

Author Contributions

Conceptualisation, A.O.; methodology, A.O. and S.E.; formal analysis, A.O. and S.E.; investigation, A.O. and S.E.; writing—original draft preparation, A.O. and S.E.; writing—review and editing, A.O. and S.E.; visualisation, A.O. and S.E.; supervision, A.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the EU Horizon 2020 research and innovation programme under grant agreement No. 101017746.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SoA: Sense of Agency
AC: Affective Computing
IVE: Immersive Virtual Environment
HCI: Human–Computer Interaction
VR: Virtual Reality

References

  1. Haggard, P.; Chambon, V. Sense of agency. Curr. Biol. 2012, 22, R390–R392. [Google Scholar] [CrossRef] [PubMed]
  2. Haggard, P.; Tsakiris, M. The experience of agency: Feelings, judgments, and responsibility. Curr. Dir. Psychol. Sci. 2009, 18, 242–246. [Google Scholar] [CrossRef]
  3. Patnaik, M.; Thirugnanasambandam, N. Neuroscience of Sense of Agency. Front. Young Minds 2022, 10, 683749. [Google Scholar] [CrossRef]
  4. Gallagher, S.; Trigg, D. Agency and Anxiety: Delusions of Control and Loss of Control in Schizophrenia and Agoraphobia. Front. Hum. Neurosci. 2016, 10, 459. [Google Scholar] [CrossRef] [PubMed]
  5. Stern, Y.; Koren, D.; Moebus, R.; Panishev, G.; Salomon, R. Assessing the Relationship between Sense of Agency, the Bodily-Self and Stress: Four Virtual-Reality Experiments in Healthy Individuals. J. Clin. Med. 2020, 9, 2931. [Google Scholar] [CrossRef] [PubMed]
  6. Lallart, E.; Lallart, X.; Jouvent, R. Agency, the Sense of Presence, and Schizophrenia. Cyberpsychol. Behav. 2009, 12, 139–145. [Google Scholar] [CrossRef] [PubMed]
  7. Moore, J.W. What Is the Sense of Agency and Why Does it Matter? Front. Psychol. 2016, 7, 1272. [Google Scholar] [CrossRef]
  8. Code, J. Agency for Learning: Intention, Motivation, Self-Efficacy and Self-Regulation. Front. Educ. 2020, 5, 19. [Google Scholar] [CrossRef]
  9. Petersen, G.B.; Petkakis, G.; Makransky, G. A study of how immersion and interactivity drive VR learning. Comput. Educ. 2022, 179, 104429. [Google Scholar] [CrossRef]
  10. Fribourg, R.; Ogawa, N.; Hoyet, L.; Argelaguet, F.; Narumi, T.; Hirose, M.; Lecuyer, A. Virtual Co-Embodiment: Evaluation of the Sense of Agency while Sharing the Control of a Virtual Body among Two Individuals. IEEE Trans. Vis. Comput. Graph. 2021, 27, 4023–4038. [Google Scholar] [CrossRef]
  11. Limerick, H.; Coyle, D.; Moore, J.W. The experience of agency in human-computer interactions: A review. Front. Hum. Neurosci. 2014, 8, 643. [Google Scholar] [CrossRef] [PubMed]
  12. Kilteni, K.; Groten, R.; Slater, M. The Sense of Embodiment in Virtual Reality. Presence Teleoperators Virtual Environ. 2012, 21, 373–387. [Google Scholar] [CrossRef]
  13. Cavazzana, A. Sense of Agency and Intentional Binding: How Does the Brain Link Voluntary Actions with Their Consequences? Università degli Studi di Padova: Padova, Italy, 2016. [Google Scholar]
  14. Wen, W. Does delay in feedback diminish sense of agency? A review. Conscious. Cogn. 2019, 73, 102759. [Google Scholar] [CrossRef] [PubMed]
  15. Butz, M.; Hepperle, D.; Wölfel, M. Influence of Visual Appearance of Agents on Presence, Attractiveness, and Agency in Virtual Reality. In ArtsIT, Interactivity and Game Creation, Proceedings of the Creative Heritage, New Perspectives from Media Arts and Artificial Intelligence, 10th EAI International Conference, ArtsIT 2021, Virtual, 2–3 December 2021; Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST; Springer: Cham, Switzerland, 2022; Volume 422, pp. 44–60. [Google Scholar] [CrossRef]
  16. Gallese, V.; Sinigaglia, C. The bodily self as power for action. Neuropsychologia 2010, 48, 746–755. [Google Scholar] [CrossRef] [PubMed]
  17. Di Plinio, S.; Scalabrini, A.; Ebisch, S.J. An integrative perspective on the role of touch in the development of intersubjectivity. Brain Cogn. 2022, 163, 105915. [Google Scholar] [CrossRef]
  18. Gibbs, J.K.; Gillies, M.; Pan, X. A comparison of the effects of haptic and visual feedback on presence in virtual reality. Int. J. Hum.-Comput. Stud. 2022, 157, 102717. [Google Scholar] [CrossRef]
  19. Brave, S.; Nass, C. Emotion in Human-Computer Interaction. In The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications; CRC Press: Boca Raton, FL, USA, 2002. [Google Scholar]
  20. Sekhavat, Y.A.; Sisi, M.J.; Roohi, S. Affective interaction: Using emotions as a user interface in games. Multimed. Tools Appl. 2021, 80, 5225–5253. [Google Scholar] [CrossRef]
  21. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar] [CrossRef]
  22. Picard, R.W. Affective computing: Challenges. Int. J. Hum.-Comput. Stud. 2003, 59, 55–64. [Google Scholar] [CrossRef]
  23. Schuller, B.W.; Picard, R.; Andre, E.; Gratch, J.; Tao, J. Intelligent Signal Processing for Affective Computing [From the Guest Editors]. IEEE Signal Process. Mag. 2021, 38, 9–11. [Google Scholar] [CrossRef]
  24. Harmon-Jones, E.; Harmon-Jones, C.; Summerell, E. On the Importance of Both Dimensional and Discrete Models of Emotion. Behav. Sci. 2017, 7, 66. [Google Scholar] [CrossRef]
  25. Ekman, P. Facial expression and emotion. Am. Psychol. 1993, 48, 384–392. [Google Scholar] [CrossRef]
  26. Plutchik, R. The measurement of emotions. Acta Neuropsychiatr. 1997, 9, 58–60. [Google Scholar] [CrossRef] [PubMed]
  27. Roseman, I.J. Appraisal determinants of emotions: Constructing a more accurate and comprehensive theory. Cogn. Emot. 1996, 10, 241–278. [Google Scholar] [CrossRef]
  28. Ortony, A.; Clore, G.L.; Collins, A. The Cognitive Structure of Emotions; Cambridge University Press: Cambridge, UK, 1988. [Google Scholar] [CrossRef]
  29. Paul Ekman Group. Universal Emotions: What Are Emotions? 2023. Available online: https://www.paulekman.com/ (accessed on 16 December 2023).
  30. Russell, J.A. A circumplex model of affect. J. Personal. Soc. Psychol. 1980, 39, 1161–1178. [Google Scholar] [CrossRef]
  31. Mehrabian, A.; Russell, J.A. An Approach to Environmental Psychology; MIT Press: Cambridge, MA, USA, 1974. [Google Scholar]
  32. Kshirsagar, A.; Gupta, N.; Verma, B. Real Time Facial Emotions Detection of Multiple Faces Using Deep Learning. In Pervasive Computing and Social Networking; Lecture Notes in Networks and Systems; Springer: Singapore, 2023; Volume 475, pp. 363–376. [Google Scholar] [CrossRef]
  33. Avola, D.; Cinque, L.; Fagioli, A.; Foresti, G.L.; Massaroni, C. Deep Temporal Analysis for Non-Acted Body Affect Recognition. IEEE Trans. Affect. Comput. 2022, 13, 1366–1377. [Google Scholar] [CrossRef]
  34. Van, L.T.; Le, T.D.T.; Xuan, T.L.; Castelli, E. Emotional Speech Recognition Using Deep Neural Networks. Sensors 2022, 22, 1414. [Google Scholar] [CrossRef]
  35. Li, W.; Zhang, Z.; Song, A. Physiological-signal-based emotion recognition: An odyssey from methodology to philosophy. Measurement 2021, 172, 108747. [Google Scholar] [CrossRef]
  36. Maria, E.; Matthias, L.; Sten, H. Emotion Recognition from Physiological Signal Analysis: A Review. Electron. Notes Theor. Comput. Sci. 2019, 343, 35–55. [Google Scholar] [CrossRef]
  37. Aranha, R.V.; Correa, C.G.; Nunes, F.L. Adapting Software with Affective Computing: A Systematic Review. IEEE Trans. Affect. Comput. 2021, 12, 883–899. [Google Scholar] [CrossRef]
  38. Matsumoto, R.R.; Walker, B.B.; Walker, J.M.; Hughes, H.C. Fundamentals of neuroscience. In Principles of Psychophysiology: Physical, Social, and Inferential Elements; Cambridge University Press: Cambridge, UK, 1990; pp. 58–113. [Google Scholar]
  39. Nita, S.; Bitam, S.; Heidet, M.; Mellouk, A. A new data augmentation convolutional neural network for human emotion recognition based on ECG signals. Biomed. Signal Process. Control 2022, 75, 103580. [Google Scholar] [CrossRef]
  40. Houssein, E.H.; Hammad, A.; Ali, A.A. Human emotion recognition from EEG-based brain–computer interface using machine learning: A comprehensive review. Neural Comput. Appl. 2022, 34, 12527–12557. [Google Scholar] [CrossRef]
  41. Garcia, O.D.R.; Fernandez, J.F.; Saldana, R.A.B.; Witkowski, O. Emotion-Driven Interactive Storytelling: Let Me Tell You How to Feel. In Artificial Intelligence in Music, Sound, Art and Design, Proceedings of the 11th International Conference, EvoMUSART 2022, Madrid, Spain, 20–22 April 2022; Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2022; Volume 13221, pp. 259–274. [Google Scholar] [CrossRef]
  42. Jerath, R.; Beveridge, C. Respiratory Rhythm, Autonomic Modulation, and the Spectrum of Emotions: The Future of Emotion Recognition and Modulation. Front. Psychol. 2020, 11, 1980. [Google Scholar] [CrossRef] [PubMed]
  43. Ekman, P.; Friesen, W.V. Facial Action Coding System: A Technique for the Measurement of Facial Movement; Consulting Psychologists Press: Washington, DC, USA, 1978. [Google Scholar]
  44. Nardelli, M.; Valenza, G.; Greco, A.; Lanata, A.; Scilingo, E.P. Recognizing emotions induced by affective sounds through heart rate variability. IEEE Trans. Affect. Comput. 2015, 6, 385–394. [Google Scholar] [CrossRef]
  45. Liu, Z.; Xu, A.; Guo, Y.; Mahmud, J.; Liu, H.; Akkiraju, R. Seemo: A Computational Approach to See Emotions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12. [Google Scholar] [CrossRef]
  46. Mitruț, O.; Moise, G.; Petrescu, L.; Moldoveanu, A.; Leordeanu, M.; Moldoveanu, F. Emotion Classification Based on Biophysical Signals and Machine Learning Techniques. Symmetry 2019, 12, 21. [Google Scholar] [CrossRef]
  47. Calvo, R.A.; D’Mello, S. Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Trans. Affect. Comput. 2010, 1, 18–37. [Google Scholar] [CrossRef]
  48. Darwin, C.R. The Expression of the Emotions in Man and Animals, 1st ed.; Penguin Books: London, UK, 1872. [Google Scholar]
  49. James, W. What is an emotion? Mind 1884, os-IX, 188–205. [Google Scholar] [CrossRef]
  50. Marg, E. Descartes’ error: Emotion, reason, and the human brain. Optom. Vis. Sci. 1995, 72, 847–848. [Google Scholar] [CrossRef]
  51. Brinkmann, S. Damasio on mind and emotions: A conceptual critique. Nord. Psychol. 2006, 58, 366–380. [Google Scholar] [CrossRef]
  52. Cannon, W.B. The James-Lange Theory of Emotions: A Critical Examination and an Alternative Theory. Am. J. Psychol. 1927, 39, 106. [Google Scholar] [CrossRef]
  53. Roseman, I.J.; Spindel, M.S.; Jose, P.E. Appraisals of emotion-eliciting events: Testing a theory of discrete emotions. J. Personal. Soc. Psychol. 1990, 59, 899. [Google Scholar] [CrossRef]
  54. Scherer, K.R. Emotion as a multicomponent process: A model and some cross-cultural data. Rev. Personal. Soc. Psychol. 1984, 5, 37–63. [Google Scholar]
  55. Argente, E.; Val, E.D.; Perez-Garcia, D.; Botti, V. Normative Emotional Agents: A Viewpoint Paper. IEEE Trans. Affect. Comput. 2022, 13, 1254–1273. [Google Scholar] [CrossRef]
  56. Schachter, S.; Singer, J. Cognitive, social, and physiological determinants of emotional state. Psychol. Rev. 1962, 69, 379. [Google Scholar] [CrossRef] [PubMed]
  57. Bartneck, C.; Lyons, M.J.; Saerbeck, M. The Relationship Between Emotion Models and Artificial Intelligence. arXiv 2017, arXiv:1706.09554. [Google Scholar]
  58. Ren, D.; Wang, P.; Qiao, H.; Zheng, S. A biologically inspired model of emotion eliciting from visual stimuli. Neurocomputing 2013, 121, 328–336. [Google Scholar] [CrossRef]
  59. López, Y.S. Abc-Ebdi: A Cognitive-Affective Framework to Support the Modeling of Believable Intelligent Agents. Ph.D. Thesis, Universidad de Zaragoza, Zaragoza, Spain, 2021; p. 1. [Google Scholar]
  60. Laird, J.E.; Newell, A.; Rosenbloom, P.S. SOAR: An architecture for general intelligence. Artif. Intell. 1987, 33, 1–64. [Google Scholar] [CrossRef]
  61. Anderson, J.R. Rules of the Mind; Erlbaum: Hillsdale, NJ, USA, 1993; p. 320. [Google Scholar]
  62. Rao, A.S.; Georgeff, M.P. BDI Agents: From Theory to Practice. In Proceedings of the First International Conference on Multiagent Systems, San Francisco, CA, USA, 12–14 June 1995. [Google Scholar]
  63. Sanchez, Y.; Coma, T.; Aguelo, A.; Cerezo, E. Applying a psychotherapeutic theory to the modeling of affective intelligent agents. IEEE Trans. Cogn. Dev. Syst. 2020, 12, 285–299. [Google Scholar] [CrossRef]
  64. Mehrabian, A. Communication without words. In Communication Theory, 2nd ed.; Routledge: London, UK, 1968; pp. 193–200. [Google Scholar] [CrossRef]
  65. Kasapakis, V.; Dzardanova, E.; Nikolakopoulou, V.; Vosinakis, S.; Xenakis, I.; Gavalas, D. Social Virtual Reality: Implementing Non-verbal Cues in Remote Synchronous Communication. In Virtual Reality and Mixed Reality, Proceedings of the 18th EuroXR International Conference, EuroXR 2021, Milan, Italy, 24–26 November 2021; Springer: Cham, Switzerland, 2021; pp. 152–157. [Google Scholar] [CrossRef]
  66. Tanenbaum, T.J.; Hartoonian, N.; Bryan, J. “How do I make this thing smile?”. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020; pp. 1–13. [Google Scholar] [CrossRef]
  67. Giddens, A.; Birdsall, K.; Menezo, J.C. Sociologia; Alianza Editorial: Madrid, Spain, 2002. [Google Scholar]
  68. Graf, H.P.; Cosatto, E.; Strom, V.; Huang, F.J. Visual prosody: Facial movements accompanying speech. In Proceedings of the 5th IEEE International Conference on Automatic Face Gesture Recognition, FGR 2002, Washington, DC, USA, 21 May 2002; pp. 396–401. [Google Scholar] [CrossRef]
  69. Kleinsmith, A.; Bianchi-Berthouze, N. Affective body expression perception and recognition: A survey. IEEE Trans. Affect. Comput. 2013, 4, 15–33. [Google Scholar] [CrossRef]
  70. Davis, F. La Comunicación no Verbal; 1998. [Google Scholar]
  71. Alberts, D. The Expressive Body: Physical Characterization for the Actor; Heinemann Educational Publishers: Portsmouth, NH, USA, 1997; 196p. [Google Scholar]
  72. Ribet, S.; Wannous, H.; Vandeborre, J.P. Survey on Style in 3D Human Body Motion: Taxonomy, Data, Recognition and Its Applications. IEEE Trans. Affect. Comput. 2021, 12, 928–948. [Google Scholar] [CrossRef]
  73. Aoki, T.; Chujo, R.; Matsui, K.; Choi, S.; Hautasaari, A. EmoBalloon—Conveying Emotional Arousal in Text Chats with Speech Balloons. In Proceedings of the Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; p. 16. [Google Scholar] [CrossRef]
  74. Lin, P.C.; Hung, P.C.; Jiang, Y.; Velasco, C.P.; Cano, M.A.M. An experimental design for facial and color emotion expression of a social robot. J. Supercomput. 2023, 79, 1980–2009. [Google Scholar] [CrossRef]
  75. Yalçın, Ö.N. Modeling Empathy in Embodied Conversational Agents. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16–20 October 2018. [Google Scholar]
  76. Shu, L.; Yu, Y.; Chen, W.; Hua, H.; Li, Q.; Jin, J.; Xu, X. Wearable Emotion Recognition Using Heart Rate Data from a Smart Bracelet. Sensors 2020, 20, 718. [Google Scholar] [CrossRef] [PubMed]
  77. Bickmore, T.; Cassell, J. Relational agents: A model and implementation of building user trust. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Seattle, WA, USA, 31 March–5 April 2001; pp. 396–403. [Google Scholar] [CrossRef]
  78. Frachi, Y.; Takahashi, T.; Wang, F.; Barthet, M. Design of Emotion-Driven Game Interaction Using Biosignals. In HCI in Games, Proceedings of the 4th International Conference, HCI-Games 2022, Virtual, 26 June–1 July 2022; Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2022; Volume 13334, pp. 160–179. [Google Scholar] [CrossRef]
  79. Steinhaeusser, S.C.; Schaper, P.; Lugrin, B. Comparing a Robotic Storyteller versus Audio Book with Integration of Sound Effects and Background Music. In Proceedings of the HRI ’21 Companion: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8–11 March 2021. [Google Scholar] [CrossRef]
  80. Bradley, M.M.; Lang, P.J. International Affective Picture System. In Encyclopedia of Personality and Individual Differences; Springer: Cham, Switzerland, 2017; pp. 1–4. [Google Scholar] [CrossRef]
  81. Yang, W.; Makita, K.; Nakao, T.; Kanayama, N.; Machizawa, M.G.; Sasaoka, T.; Sugata, A.; Kobayashi, R.; Hiramoto, R.; Yamawaki, S.; et al. Affective auditory stimulus database: An expanded version of the International Affective Digitized Sounds (IADS-E). Behav. Res. Methods 2018, 50, 1415–1429. [Google Scholar] [CrossRef] [PubMed]
  82. Crosta, A.D.; Malva, P.L.; Manna, C.; Marin, A.; Palumbo, R.; Verrocchio, M.C.; Cortini, M.; Mammarella, N.; Domenico, A.D. The Chieti Affective Action Videos database, a resource for the study of emotions in psychology. Sci. Data 2020, 7, 32. [Google Scholar] [CrossRef] [PubMed]
  83. Dozio, N.; Marcolin, F.; Scurati, G.W.; Ulrich, L.; Nonis, F.; Vezzetti, E.; Marsocci, G.; Rosa, A.L.; Ferrise, F. A design methodology for affective Virtual Reality. Int. J. Hum.-Comput. Stud. 2022, 162, 102791. [Google Scholar] [CrossRef]
  84. Hou, G.; Dong, H.; Yang, Y. Developing a Virtual Reality Game User Experience Test Method Based on EEG Signals. In Proceedings of the 2017 5th International Conference on Enterprise Systems: Industrial Digitalization by Enterprise Systems, ES 2017, Beijing, China, 22–24 September 2017; pp. 227–231. [Google Scholar] [CrossRef]
  85. Chandra, A.N.R.; Jamiy, F.E.; Reza, H. A review on usability and performance evaluation in virtual reality systems. In Proceedings of the 6th Annual Conference on Computational Science and Computational Intelligence, CSCI 2019, Las Vegas, NV, USA, 5–7 December 2019; pp. 1107–1114. [Google Scholar] [CrossRef]
  86. Thompson, E.R. Development and Validation of an Internationally Reliable Short-Form of the Positive and Negative Affect Schedule (PANAS). J. Cross-Cult. Psychol. 2016, 38, 227–242. [Google Scholar] [CrossRef]
  87. Bradley, M.M.; Lang, P.J. Measuring emotion: The self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59. [Google Scholar] [CrossRef]
  88. Liao, D.; Shu, L.; Liang, G.; Li, Y.; Zhang, Y.; Zhang, W.; Xu, X. Design and Evaluation of Affective Virtual Reality System Based on Multimodal Physiological Signals and Self-Assessment Manikin. IEEE J. Electromagn. Microw. Med. Biol. 2020, 4, 216–224. [Google Scholar] [CrossRef]
  89. Tian, F.; Hou, X.; Hua, M. Emotional Response Increments Induced by Equivalent Enhancement of Different Valence Films. In Proceedings of the 2020 5th International Conference on Electromechanical Control Technology and Transportation (ICECTT), Nanchang, China, 15–17 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 213–218. [Google Scholar] [CrossRef]
  90. Li, B.J.; Bailenson, J.N.; Pines, A.; Greenleaf, W.J.; Williams, L.M. A Public Database of Immersive VR Videos with Corresponding Ratings of Arousal, Valence, and Correlations between Head Movements and Self Report Measures. Front. Psychol. 2017, 8, 2116. [Google Scholar] [CrossRef]
  91. Rivu, R.; Jiang, R.; Mäkelä, V.; Hassib, M.; Alt, F. Emotion Elicitation Techniques in Virtual Reality. In Human-Computer Interaction—INTERACT 2021, Proceedings of the 18th IFIP TC 13 International Conference, Bari, Italy, 30 August–3 September 2021; Springer: Cham, Switzerland, 2021; pp. 93–114. [Google Scholar] [CrossRef]
  92. Xie, T.; Cao, M.; Pan, Z. Applying Self-Assessment Manikin (SAM) to Evaluate the Affective Arousal Effects of VR Games. In Proceedings of the 2020 3rd International Conference on Image and Graphics Processing, Singapore, 8–10 February 2020; ACM: New York, NY, USA, 2020; pp. 134–138. [Google Scholar] [CrossRef]
  93. Wang, J.; Wang, Y.; Liu, Y.; Yue, T.; Wang, C.; Yang, W.; Hansen, P.; You, F. Experimental study on abstract expression of human-robot emotional communication. Symmetry 2021, 13, 1693. [Google Scholar] [CrossRef]
  94. Mütterlein, J. The Three Pillars of Virtual Reality? Investigating the Roles of Immersion, Presence, and Interactivity. In Proceedings of the 51st Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, 3–6 January 2018. [Google Scholar]
  95. Hudson, S.; Matson-Barkat, S.; Pallamin, N.; Jegou, G. With or without you? Interaction and immersion in a virtual reality experience. J. Bus. Res. 2019, 100, 459–468. [Google Scholar] [CrossRef]
  96. Jennett, C.; Cox, A.L.; Cairns, P.; Dhoparee, S.; Epps, A.; Tijs, T.; Walton, A. Measuring and defining the experience of immersion in games. Int. J. Hum.-Comput. Stud. 2008, 66, 641–661. [Google Scholar] [CrossRef]
  97. Aoyagi, K.; Wen, W.; An, Q.; Hamasaki, S.; Yamakawa, H.; Tamura, Y.; Yamashita, A.; Asama, H. Modified sensory feedback enhances the sense of agency during continuous body movements in virtual reality. Sci. Rep. 2021, 11, 2553. [Google Scholar] [CrossRef] [PubMed]
  98. Tapal, A.; Oren, E.; Dar, R.; Eitam, B. The sense of agency scale: A measure of consciously perceived control over one’s mind, body, and the immediate environment. Front. Psychol. 2017, 8, 1552. [Google Scholar] [CrossRef] [PubMed]
  99. Polito, V.; Barnier, A.J.; Woody, E.Z. Developing the Sense of Agency Rating Scale (SOARS): An empirical measure of agency disruption in hypnosis. Conscious. Cogn. 2013, 22, 684–696. [Google Scholar] [CrossRef] [PubMed]
  100. Longo, M.R.; Haggard, P. Sense of agency primes manual motor responses. Perception 2009, 38, 69–78. [Google Scholar] [CrossRef]
  101. Magdin, M.; Balogh, Z.; Reichel, J.; Francisti, J.; Koprda, Š.; György, M. Automatic detection and classification of emotional states in virtual reality and standard environments (LCD): Comparing valence and arousal of induced emotions. Virtual Real. 2021, 25, 1029–1041. [Google Scholar] [CrossRef]
  102. Voigt-Antons, J.N.; Spang, R.; Kojic, T.; Meier, L.; Vergari, M.; Muller, S. Don’t Worry be Happy—Using virtual environments to induce emotional states measured by subjective scales and heart rate parameters. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 27 March–1 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 679–686. [Google Scholar] [CrossRef]
  103. Li, M.; Pan, J.; Gao, Y.; Shen, Y.; Luo, F.; Dai, J.; Hao, A.; Qin, H. Neurophysiological and Subjective Analysis of VR Emotion Induction Paradigm. IEEE Trans. Vis. Comput. Graph. 2022, 28, 3832–3842. [Google Scholar] [CrossRef]
  104. Dubovi, I. Cognitive and emotional engagement while learning with VR: The perspective of multimodal methodology. Comput. Educ. 2022, 183, 104495. [Google Scholar] [CrossRef]
  105. Bergström, J.; Knibbe, J.; Pohl, H.; Hornbæk, K. Sense of Agency and User Experience: Is There a Link? ACM Trans. Comput.-Hum. Interact. 2022, 29, 1–22. [Google Scholar] [CrossRef]
  106. Zanatto, D.; Chattington, M.; Noyes, J. Human-machine sense of agency. Int. J. Hum.-Comput. Stud. 2021, 156, 102716. [Google Scholar] [CrossRef]
  107. Haggard, P.; Clark, S.; Kalogeras, J. Voluntary action and conscious awareness. Nat. Neurosci. 2002, 5, 382–385. [Google Scholar] [CrossRef] [PubMed]
  108. Ivanof, B.E.; Terhune, D.B.; Coyle, D.; Gottero, M.; Moore, J.W. Examining the effect of Libet clock stimulus parameters on temporal binding. Psychol. Res. 2021, 86, 937–951. [Google Scholar] [CrossRef]
  109. Kong, G.; He, K.; Wei, K. Sensorimotor experience in virtual reality enhances sense of agency associated with an avatar. Conscious. Cogn. 2017, 52, 115–124. [Google Scholar] [CrossRef] [PubMed]
  110. Cornelio Martinez, P.I.; Maggioni, E.; Hornbæk, K.; Obrist, M.; Subramanian, S. Beyond the Libet clock: Modality variants for agency measurements. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–14. [Google Scholar]
  111. Bowman, D.A.; McMahan, R.P. Virtual reality: How much immersion is enough? Computer 2007, 40, 36–43. [Google Scholar] [CrossRef]
  112. Andreoletti, D.; Paoliello, M.; Luceri, L.; Leidi, T.; Peternier, A.; Giordano, S. A framework for emotion-driven product design through virtual reality. In Proceedings of the Special Sessions in the Advances in Information Systems and Technologies Track of the Conference on Computer Science and Intelligence Systems, Virtual, 2–5 September 2021; Springer: Cham, Switzerland, 2021; pp. 42–61. [Google Scholar]
  113. Andreoletti, D.; Luceri, L.; Peternier, A.; Leidi, T.; Giordano, S. The virtual emotion loop: Towards emotion-driven product design via virtual reality. In Proceedings of the 2021 16th Conference on Computer Science and Intelligence Systems (FedCSIS), Sofia, Bulgaria, 2–5 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 371–378. [Google Scholar]
  114. Hernandez-Melgarejo, G.; Luviano-Juarez, A.; Fuentes-Aguilar, R.Q. A framework to model and control the state of presence in virtual reality systems. IEEE Trans. Affect. Comput. 2022, 13, 1854–1867. [Google Scholar] [CrossRef]
  115. Al-Jundi, H.A.; Tanbour, E.Y. A framework for fidelity evaluation of immersive virtual reality systems. Virtual Real. 2022, 26, 1103–1122. [Google Scholar] [CrossRef]
  116. Wang, Y.; Ijaz, K.; Calvo, R.A. A software application framework for developing immersive virtual reality experiences in health domain. In Proceedings of the 2017 IEEE Life Sciences Conference (LSC), Sydney, Australia, 13–15 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 30–37. [Google Scholar]
  117. Aranha, R.V.; Chaim, M.L.; Monteiro, C.B.; Silva, T.D.; Guerreiro, F.A.; Silva, W.S.; Nunes, F.L. EasyAffecta: A framework to develop serious games for virtual rehabilitation with affective adaptation. Multimed. Tools Appl. 2023, 82, 2303–2328. [Google Scholar] [CrossRef]
  118. Angelini, L.; Mecella, M.; Liang, H.N.; Caon, M.; Mugellini, E.; Khaled, O.A.; Bernardini, D. Towards an Emotionally Augmented Metaverse: A Framework for Recording and Analysing Physiological Data and User Behaviour. In Proceedings of the 13th Augmented Human International Conference, Winnipeg, MB, Canada, 26–27 May 2022. ACM International Conference Proceeding Series. [Google Scholar] [CrossRef]
  119. Roohi, S.; Skarbez, R. The Design and Development of a Goal-Oriented Framework for Emotional Virtual Humans. In Proceedings of the 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Virtual, 12–14 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 135–139. [Google Scholar]
  120. Lafleur, A.; Soulières, I.; d’Arc, B.F. Sense of agency: Sensorimotor signals and social context are differentially weighed at implicit and explicit levels. Conscious. Cogn. 2020, 84, 103004. [Google Scholar] [CrossRef]
  121. Ohata, W.; Tani, J. Investigation of the sense of agency in social cognition, based on frameworks of predictive coding and active inference: A simulation study on multimodal imitative interaction. Front. Neurorobotics 2020, 14, 61. [Google Scholar] [CrossRef]
  122. Anders, D.; Berisha, A.; Selaskowski, B.; Asché, L.; Thorne, J.D.; Philipsen, A.; Braun, N. Experimental induction of micro-and macrosomatognosia: A virtual hand illusion study. Front. Virtual Real. 2021, 2, 656788. [Google Scholar] [CrossRef]
  123. Shibuya, S.; Unenaka, S.; Ohki, Y. The relationship between the virtual hand illusion and motor performance. Front. Psychol. 2018, 9, 2242. [Google Scholar] [CrossRef] [PubMed]
  124. Levac, D.E.; Huber, M.E.; Sternad, D. Learning and transfer of complex motor skills in virtual reality: A perspective review. J. Neuroeng. Rehabil. 2019, 16, 121. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Sense of agency conceptualisation: example showing the intention to burst a balloon.
Figure 2. General view of the sense of agency in terms of the two layers proposed by Wen [14].
Figure 3. Research on the facial movements employed to convey universal emotions. Image source: [29].
Figure 4. (a) A two-dimensional emotion model; image source: [45]. (b) A three-dimensional emotion model; image source: [46].
Figure 6. The SAM scale, which measures the dimensions of pleasure, arousal and dominance using a series of abstract graphic characters arranged horizontally on a 9-point scale. Image source: [93].
Figure 7. Adaptation of the two-layer model of the sense of agency proposed in [14] to immersive virtual environments.
Figure 8. Proposed framework for the implementation of immersive virtual environments that integrate the management of the emotion flow to improve the sense of agency.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
