Article

Behavioural Realism and Its Impact on Virtual Reality Social Interactions Involving Self-Disclosure

by Alan Fraser, Ross Hollett, Craig Speelman and Shane L. Rogers *
School of Arts and Humanities (Psychology), Edith Cowan University, Joondalup 6027, Australia
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(6), 2896; https://doi.org/10.3390/app15062896
Submission received: 9 January 2025 / Revised: 26 February 2025 / Accepted: 4 March 2025 / Published: 7 March 2025
(This article belongs to the Special Issue Virtual/Augmented Reality and Its Applications)

Abstract:
This study investigates how the behavioural realism of avatars can enhance virtual reality (VR) social interactions involving self-disclosure. First, we review how factors such as trust, enjoyment, and nonverbal communication could be influenced by the enhanced behavioural realism that motion capture technology provides. We also address a gap in the prior literature by comparing different motion capture systems and how these differences affect perceptions of realism, enjoyment, and eye contact. Specifically, this study compared two types of avatars in a self-disclosure task: an iClone UNREAL avatar with full-body and facial motion capture and a Vive Sync avatar with limited motion capture. Our participants rated the iClone UNREAL avatar higher for realism, enjoyment, and eye contact duration. However, as shown in our post-experiment survey, some participants reported that they preferred the avatar with less behavioural realism. We conclude that a higher level of behavioural realism achieved through more advanced motion capture can improve the experience of VR social interactions. We also conclude that despite the general advantages of more advanced motion capture, the simpler avatar was still acceptable and preferred by some participants. This has important implications for improving the accessibility of avatars for different contexts, such as therapy, where simpler avatars may be sufficient.

1. Introduction

The role that perceptions of behavioural realism of avatars play in user experience during VR-based social interactions has received considerable research interest in recent years [1,2,3,4,5,6,7,8,9,10]. The behavioural realism of an avatar can be defined by the quality of the avatar’s movements, including body gestures and facial expressions [11,12,13]. Some developers and users of metaverse social VR platforms may believe that avatars need to display lifelike nonverbal cues, thereby demanding a high level of behavioural realism [14,15,16]. However, whether high behavioural realism is needed to create enjoyable social interactions remains debatable [17,18,19,20,21].
This study investigated the impact of avatars with varying levels of realism on the quality of social interactions in VR involving disclosure. Specifically, we compared an avatar from iClone animated in the UNREAL Engine, which has a higher level of behavioural realism via advanced motion capture, with an avatar from Vive Sync, a commercial platform that employs avatars with a lower level of behavioural realism [22,23]. The iClone UNREAL avatar had its eye movements and gaze directions animated using a dedicated motion capture camera, whereas the Vive Sync avatar relied on onboard eye tracking within the VR headset. The primary aim of this study was to determine whether the level of behavioural realism available through commercial products like Vive Sync is adequate for creating engaging VR social interactions without the need for advanced motion capture. This research aims to guide future developers, researchers, and psychologists in implementing realistic avatars ideal for specific purposes, such as therapy conducted in VR, while also maintaining accessibility and affordability to a broader audience.

1.1. Motion-Captured Avatars and Their Impact on VR Social Interactions

Avatars with motion capture have face and body movements that may be perceived as highly lifelike [24,25,26,27,28]. Motion is captured via sensors attached to the body or via camera footage and can be mapped onto an avatar in real time [11,22,23,29,30]. This offers a responsiveness that is superior to computer-scripted or pre-recorded animations [31,32,33,34,35]. Several studies have revealed that realistic facial and body gestures for avatars may enhance user perceptions of trust, likability, and satisfaction during VR social interactions [27,30,36,37].
Many metaverse social VR platforms (e.g., Horizon Worlds, BeanVR, VRChat, Rec Room) use avatars with limited behavioural realism in terms of expressions, gestures, and body language. This could be a deliberate choice, as maintaining a high level of accessibility to their product is a core concern. Most social VR platforms only incorporate basic motion tracking features, such as head and hand tracking, with some including additional features, like eye tracking. Previous studies have revealed that many users prefer avatars with stronger behavioural realism [22,23,29,38]. However, it remains unclear whether current social VR platforms would benefit from increased behavioural realism in their avatars due to limited research in this area. Comparing avatars with full-body motion capture with the type of avatars used in many metaverse social VR platforms will help to clarify if introducing higher levels of behavioural realism may help to improve feelings of comfort and enjoyment [22,23,24,39].

1.2. Using Motion Capture for High Behavioural Realism

Achieving a high level of facial realism typically involves computer analysis of video footage that provides a depth map of the user’s face. This includes video processing and algorithms to capture subtle facial gestures with high accuracy [2,5,18,22,23,40]. Motion capture that uses video cameras can enable real-time responsiveness, much like sensor-based body motion tracking. This can involve more consistent eye gaze behaviour, mouth and brow movements, and accurate lip synchronisation [35,40,41,42]. Research demonstrates that when interacting with an avatar with a high level of facial realism, the experience can be more engaging, more enjoyable, and more believable when compared to avatars with more restricted facial movements [2,3,8,23,29,43].
Motion tracking of eye gaze can help to enhance the behavioural realism of avatars and potentially improve VR social interactions. Eye gaze is a form of nonverbal communication that indicates where attention is directed and may help to convey interest in the conversation [5,29,41,43,44,45]. Accurate capture and replication of eye movements in real time through motion capture technology enable more natural and consistent eye gaze behaviour in avatars [2,41,46]. This enhanced realism may contribute to a stronger sense of social presence and engagement during VR interactions. For instance, Hessels et al. [47] found that participants spent approximately 65% of their time looking at their partner’s eyes during face-to-face interactions, highlighting the potential benefits of accurate eye gaze behaviour during VR social interactions. Studies have also shown that mutual gaze behaviour can enhance trust, comfort, and closeness in social interactions [26,48,49,50,51,52,53]. Incorporating realistic eye gaze tracking into avatars could allow VR social interactions to more closely approximate face-to-face conversations, potentially facilitating more natural and engaging exchanges, especially in contexts involving self-disclosure.
Full-body motion capture can produce an avatar with a high level of behavioural realism. Motion capture sensors are placed on the body, and movement data are wirelessly transmitted to a receiver connected to a computer [12,19,27,32,34]. This can be used to animate the movements of an avatar in a more realistic fashion. An avatar with high behavioural realism can help to improve the enjoyment experienced during VR social interactions with avatars by facilitating the expression of nonverbal cues known to improve the quality of social interaction [8,13,31,33,39].
Studies that involve VR social interactions often lack full-body motion capture avatars, thus limiting their behavioural realism [6,8,35,39,54,55,56,57,58]. Furthermore, many of the studies that include user appraisal of full-body motion capture avatars are not conducted within a social interaction context that resembles a personable face-to-face conversation [2,24,28,30,37,53,57]. Instead, prior studies typically have people rate video playback of avatars rather than have the user interact with the avatar [3,6,8,11,17,21,51,59,60].
Furthermore, there is an absence of studies that directly compare different levels of motion capture within the same study. This makes it difficult to draw conclusions about how different levels of motion tracking through different motion capture systems can affect social interactions in VR. To address this gap, this study compares two distinct levels of motion capture in the same virtual context involving disclosure. More specifically, participant perceptions of a full-body motion-captured avatar are compared with perceptions of an avatar with limited motion tracking when sharing negative and positive disclosures. By comparing different avatars with different types of motion capture, this study aims to clarify if social VR platforms may benefit from stronger levels of behavioural realism.

1.3. Some Avatars May Appear Unnatural Due to Insufficient Behavioural Realism

Researchers have argued that as avatars appear more human, they may unintentionally provoke a sense of discomfort among viewers if their behaviour deviates from what is expected [30,61,62,63,64,65]. This discomfort is often attributed to a disconnect between the avatar’s visual appearance and behaviour. For instance, an avatar that visually resembles a real human but moves rigidly can seem off-putting. Advancements in motion capture technology can enhance the behavioural realism of an avatar, particularly for those who appear as real humans, which can reduce unsettling feelings [23,25,62,64]. Seymour et al. [30] suggest that including realistic motion capture can avoid a possible “uncanny valley” phenomenon. Supporting this, their study compared participants’ reactions to a cartoon avatar with a lower resolution and a detailed, human-like avatar with a higher resolution. Both avatars received positive reactions; however, the human-like avatar with high behavioural realism was rated higher in appeal, relatability, and trust compared to the cartoon avatar with less behavioural realism. Thus, the right combination of an avatar’s visual appearance and behavioural realism using motion capture technology may help to avoid the uncanny valley.
A discrepancy between an avatar’s appearance and behaviour can create unpleasant experiences for some users. As mentioned previously, avatars that resemble real humans but exhibit unnatural movements may reduce the quality of VR social interactions [30,61,62,63,64,65]. Similarly, avatars with less realistic appearances, such as cartoon avatars, may elicit discomfort if their movements are too realistic [18,46,59,66]. This may result in some people preferring less behavioural realism to match the avatar’s appearance. Although previous studies have primarily focused on the benefits of higher levels of behavioural realism for human-like avatars, a high level of motion capture may be detrimental for less realistic avatars by making them appear unnatural [1,46,49,66,67,68]. The current study focussed on an avatar’s behavioural realism, achieved via two different hardware/software platforms. Given that motion capture can sometimes appear unnatural, it is possible that some people may prefer avatars with less behavioural realism.

1.4. Behavioural Realism May Improve Outcomes for VR-Based Therapy

Conducting therapy in VR has the potential to enhance the effectiveness of therapy for some individuals. Interacting through avatars may help to provide a feeling of privacy and anonymity, which may help to reduce feelings of anxiety and improve comfort. As a result, therapeutic retention and outcomes might be improved [52,69,70,71,72,73,74]. However, this approach also has limitations, as therapists’ use of avatars that lack behavioural realism might make it harder to build trust and rapport [70,75].
The cost and technical expertise required to operate motion-captured avatars may deter therapists interested in using realistic avatars for psychological therapy. This could potentially restrict their options to VR headsets that have limited motion tracking and, consequently, only modest behavioural realism. However, more accessible VR avatars may still offer benefits for therapeutic purposes and provide a viable option for therapists where affordability is important. It is unclear how these more accessible headsets with limited motion tracking compare to more sophisticated motion capture. Therefore, investigating how different types of motion capture can affect an avatar’s behavioural realism during VR social interactions involving self-disclosure could help inform and guide future researchers and psychologists on how to use avatars for therapy conducted in VR.
Rogers et al. [23] investigated how VR compares to face-to-face interactions for disclosing positive and negative life experiences. In VR, participants engaged with a human-like avatar animated in real time with motion capture technology to ensure a high level of behavioural realism. The results showed that while face-to-face interactions were rated higher than the VR interaction in terms of closeness and comprehension, there were no differences in awkwardness, feeling understood, enjoyment, closeness, and comfort. Furthermore, when asked to indicate a preference, approximately 30% of the participants reported that they felt more at ease discussing negative experiences in VR. This research indicated that at least for some, discussing negative experiences with an avatar that is controlled by a real therapist might be more comforting than interacting with a therapist in real life.
Expanding on this research, Fraser et al. [22] explored how the different levels of behavioural realism of an avatar’s face and body could affect social interactions involving disclosure. Following a similar procedure to Rogers et al. [23], participants shared a positive and negative experience with an avatar under different behavioural realism conditions. In their post-experiment survey, 41% of their participants reported that an avatar driven by full-body motion capture with a high level of behavioural realism was the most enjoyable and comfortable to interact with. This was followed by 20% of participants reporting that they enjoyed and felt most comfortable with an avatar with moderate behavioural realism (i.e., either face or body motion capture but not both). Finally, 18% reported that they preferred interacting with an avatar with the lowest level of behavioural realism. Their results suggest that for a small minority, enhancing the behavioural realism of an avatar might not be essential to improving VR social interactions.
The studies by Rogers et al. [23] and Fraser et al. [22] contribute to our overall understanding of how avatar realism can affect VR social interactions involving self-disclosure. However, the participants in these studies were only required to recall their own experiences, and no disclosures were made by the person controlling the avatar, potentially reducing participants’ observation and engagement with the avatars. To address this, a reciprocal approach where the participant and researcher controlling the avatar disclose their personal experiences may help clarify if attention may affect ratings towards the avatars.

1.5. The Present Study

The reviewed literature demonstrates the utility of motion capture for enhancing the behavioural realism of avatars. Studies like Brown et al. [11] and Yoon et al. [37] emphasise the importance of body gestures for enhancing social interactions, while others, including Kokkinara and McDonnell [18] and Kruzic et al. [29], highlight the importance of facial gestures. Regarding the potential use of avatars for therapy conducted in VR, self-disclosure studies by Rogers et al. [23] and Fraser et al. [22] suggest that VR social interaction platforms may offer an alternative to face-to-face therapy for individuals who prefer to interact with an avatar that has a level of behavioural realism that is comfortable for them.
The current study investigates different levels of behavioural realism by comparing two types of avatars: the commercially available Vive Sync avatar, which has limited motion tracking of the face and body, and an iClone avatar animated in the UNREAL Engine with full motion tracking of both the face and body, which was used in prior research studies [22,23]. This comparison allowed us to investigate the effects of different motion-tracking methods on the avatar’s perceived realism and enjoyment of the interaction. It also extends previous research by incorporating both participant disclosure and interviewer disclosure, allowing us to investigate the role of reciprocal sharing during VR social interactions.
The primary aim of this research was to examine the ratings of perceived realism, enjoyment, and eye contact across the iClone UNREAL and Vive Sync avatars. Specifically, this study investigated whether high behavioural realism afforded through full-body motion capture is necessary for improving perceptions of realism and enjoyment in VR social interactions involving self-disclosure. Many studies have concluded that higher levels of behavioural realism can foster positive social interactions in VR by increasing likability, trust, enjoyability, and social presence [3,22,24,26,29,32,37,43].
For the first hypothesis, it was expected that an avatar with a higher level of behavioural realism provided with full-body motion tracking would be rated as more realistic than an avatar with limited motion tracking. For the second hypothesis, it was expected that an avatar with a higher level of behavioural realism would be rated as more enjoyable when compared to an avatar with limited behavioural realism. This is because a realistic avatar with precise motion tracking should avoid the uncanny valley and provide a lifelike experience that a less realistic avatar cannot [52,59,61,76]. For the third hypothesis, it was expected that the avatar with full facial tracking would result in higher user ratings of perceived eye gaze duration compared to the avatar using the Vive Sync headset with partial facial tracking. This is because the avatar with full facial tracking will have more nonverbal cues for the participant to engage with and focus on [29,35,40,41,42,43].
The secondary aim of this research addresses a limitation identified in both Rogers et al. [23] and Fraser et al. [22], where only the participants disclosed personal experiences. To address this limitation, a reciprocal social interaction where both the participant and avatar disclose negative and positive experiences was implemented to clarify if the source of disclosure could influence perceptions of realism. Thus, it was expected that disclosures from the avatar would result in higher realism ratings. This is because the participants’ attention will be directed towards the avatar rather than to themselves.

2. Materials and Methods

2.1. Research Design

This study used a within-subjects design involving three independent variables (IVs). The first IV investigated the influence of avatar type, comparing the effects of using an iClone UNREAL Engine avatar with full-body and face motion tracking to those of using a Vive Sync avatar with limited motion tracking. The second IV explored the impact of disclosure source, comparing the effects of participants’ self-disclosure to the effects of receiving disclosure from their avatar partner. The third IV involved the type of disclosure as either the disclosure of a negative or positive experience. This study also included three dependent variables: ratings of enjoyment, ratings of the perceived realism of the avatar, and perceived duration of eye contact. Finally, data on participants’ preferences between the two avatars were collected in an end-of-experiment survey.
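The factorial structure described above (2 avatar types × 2 disclosure sources × 2 disclosure valences, all within subjects) can be sketched as follows; the condition labels are illustrative and not taken verbatim from the study materials:

```python
from itertools import product

# The three within-subjects IVs (labels are illustrative)
avatar_type = ["iClone UNREAL", "Vive Sync"]
disclosure_source = ["participant", "avatar"]
disclosure_valence = ["positive", "negative"]

# Every participant experiences all 2 x 2 x 2 = 8 cells of the design
cells = list(product(avatar_type, disclosure_source, disclosure_valence))
print(len(cells))  # 8 conditions per participant
```

Because the design is fully within subjects, each of the eight cells contributes one observation per participant for each dependent variable.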

2.2. Participants

Participants were primarily recruited from an undergraduate psychology course at Edith Cowan University, with additional recruitment involving students from other disciplines who expressed interest in participating. In total, 57 participants were recruited, comprising 41 females (72%), 15 males (26%), and 1 non-binary (2%). Ages ranged from 18 to 64 years, with an average of 30.72 years and a standard deviation of 11.69 years. These participants also reported their familiarity with using technology in the past 12 months for communication purposes or gaming (see Table 1). Most participants in our study appeared to have had no or very little experience using VR for either conversations or playing games.

2.3. Materials

2.3.1. Hardware

This study used two computers to distribute the resource requirements needed for this study. The first computer was responsible for recording the interviewer’s face and body movements. Its specifications included an Intel i7-9700 CPU, an Nvidia GeForce RTX 2080 GPU, 32 GB DDR4 RAM, and 2 TB internal storage. The second computer that operated the VR simulation included an Intel i9-11900k CPU, Nvidia GeForce RTX 3080 Ti GPU, 32 GB DDR4 RAM, and 2 TB internal storage.
An iPhone 11 positioned on a camera tripod was connected to the first computer with a USB cable to capture the interviewer’s facial gestures, including eye movements and gaze direction. The iPhone 11 has face tracking capabilities from its TrueDepth camera that uses a dot projector to create a 3D map of the face that can be manipulated to animate the facial gestures of an avatar. Additionally, we used the Noitom Perception Neuron Studio Inertial System for body motion capturing. This system provides full-body motion capturing of the interviewer’s feet, lower and upper legs, waist, torso, lower and upper arms, shoulders, and head using 17 wireless inertial sensors. The interviewer’s finger movements were also tracked using 12 more sensors within the Perception Neuron Gloves. Both the motion data of the face and body were streamed to the second computer via a Local Area Network.
An HP Reverb headset was used to display the virtual environment used in the current study and prior studies [22,23] for the social interaction with the UNREAL avatar. This headset has a resolution of 2160 × 2160 pixels per 2.89″ lens and a refresh rate of 90 Hz.
A Vive Focus 3 headset with two controllers was used to display the Vive Sync avatar. This headset displayed a virtual environment hosted on the Vive Sync servers and appeared as a virtual office space. This device has a resolution of 2448 × 2448 pixels per 2.88″ lens and a refresh rate of 90 Hz. It also included an onboard eye-tracking system comprising dual-tracking cameras that can track the direction and origin of gaze at a 120 Hz frequency with an accuracy range of 0.5° to 1.1°. This eye-tracking system supports only basic eye and gaze tracking and was less sophisticated than the iPhone 11’s TrueDepth camera, which offers more precise gaze tracking. Additionally, the headset includes dual integrated microphones that capture clear audio, reduce background noise, and enhance the accuracy of voice detection. These voice data can then be used to generate the mouth movements of the Vive Sync avatar.

2.3.2. iClone UNREAL Avatar

The iClone UNREAL avatar was designed using the Reallusion Character Creator v3.88 software (including the Headshot plugin) and made to resemble the researcher conducting the interviews. It was designed to resemble the appearance of avatars commonly used in many metaverse social VR platforms, offering some but not high graphical detail. The avatar was transferred into the iClone v7.93 software, which enabled manipulation of its movements using face and body motion capture. The iClone UNREAL LiveLink plugin transferred the avatar and its movements into UNREAL Engine v5.0. Figure 1 offers a view of this avatar’s appearance.
The iPhone LIVE face application was used to track the facial movements of the interviewer, and Perception Neuron Axis Studio software was used to record their body movements. The motion-captured data were rendered onto the avatar within the iClone software using the iClone Motion Live plugin. Subsequently, these motion-captured data were transmitted into UNREAL Engine v5.0 using the UNREAL Live plugin. This setup enabled a real-time social interaction between the interviewer and participant via the avatar. During the social interaction with the iClone UNREAL avatar, the interviewer used their motion capture gloves to control their avatar’s hand movements, while the participant used a pair of VR controllers to move the hands of their avatar (see Figure 1).
UNREAL Engine version 5.0 was used to design the virtual setting for the social interaction involving the iClone UNREAL avatar. The UNREAL Engine environment included a virtual room that was used in a previous study (Fraser et al. [22]). This room was designed to appear as a compact office space with a bookshelf, window, and several furnishings (see Figure 2). A desk was placed in the centre of the room, simulating a two-metre distance between the participant and the interviewer and concealing the lower half of the avatar’s body. It was decided to occlude the lower half of the avatar because some clipping occurred between the waist and hands at irregular intervals, which could have led to a negative impression of the avatar.

2.4. Vive Sync Avatar

The Vive Sync avatar was designed using the VIVE avatar character creator and was made to resemble the researcher conducting the interviews. It had a similar level of detail to the iClone UNREAL avatar; both were male and dressed in simple black clothing. This avatar was then used as the interviewer’s avatar during the VR social interaction. The avatar used by the participant was a generic avatar of the same gender but was only seen by the interviewer.
The participant and interviewer communicated with each other using a microphone built into the Vive Focus 3 headsets. Both participant and interviewer were physically located in two different rooms but shared the same VR environment, and both spoke to each other using the microphone. The mouth movements of the Vive Sync avatar were generated automatically in response to the user’s speech when detected by the microphone. Its eye movements and blinking behaviour were also tracked using an onboard eye-tracking device in the headsets. Additionally, its head movements were controlled by the Vive Focus 3 headsets, while its arm and wrist movements were controlled using the VR controllers. When combined, the movements and behaviour of the Vive Sync avatar were not as realistic as the iClone UNREAL avatar that had more advanced motion capture.
The social interaction was conducted in the Vive Sync software installed on the Vive Focus 3 headsets. The interviewer and participant both appeared in a seated position (see Figure 3), where the top half of the body was animated using the methods outlined above. The choice of virtual room was limited by the Vive Sync software, so a virtual office environment (Figure 4) was used for the location of this interaction because it most closely resembled the room used for the iClone UNREAL avatar (Figure 2). To maintain focus on the interviewer rather than the room, the interviewer was positioned with their back against the wall (see Figure 3). This was intended to minimise environmental distractions and direct attention toward the interviewer’s avatar.
In summary, efforts were made to make the avatars and environments used in the two conditions as similar as possible. This was done so that the impact of behavioural realism on participant responses could be compared more effectively by limiting any possible influence from the environment. To reduce the perceived environmental differences, participants were situated in a back corner of the office space to approximate the smaller office space of the UNREAL condition and to minimise the visual differences in room size, furnishings, and lighting.

2.5. Measure of Perceived Realism

The scale for assessing perceived realism was adapted from the Networked Minds Social Presence Inventory [77] and the Temple Presence Inventory [78]. It contained the items (1) I felt like I was interacting with a real person, (2) their facial gestures looked natural/realistic, and (3) their body movements looked natural/realistic. The responses to these items ranged from (1) not at all, (2) a little bit, (3) quite a lot, (4) a lot, and (5) extremely. A composite score for perceived realism was calculated by averaging the three items. Cronbach’s alpha values for this measure across each of the conditions were above 0.80 (see Table 2).
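Both this composite score and the one for enjoyment follow the same recipe: average each participant's item ratings, and check internal consistency with Cronbach's alpha, α = (k/(k−1))(1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of each item, and σ²ₜ the variance of the summed totals. A minimal sketch (function names are illustrative, not from the authors' analysis code):

```python
from statistics import mean, pvariance

def composite_scores(items):
    """Per-participant composite: the mean of that participant's item ratings.
    items: one list per scale item, each holding one rating per participant."""
    return [mean(ratings) for ratings in zip(*items)]

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(ratings) for ratings in zip(*items)]
    sum_item_var = sum(pvariance(ratings) for ratings in items)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Illustrative 3-item scale rated by four participants on the 1-5 scale
realism_items = [
    [4, 3, 5, 2],  # item 1: "interacting with a real person"
    [4, 2, 5, 3],  # item 2: "facial gestures looked natural/realistic"
    [5, 3, 4, 2],  # item 3: "body movements looked natural/realistic"
]
print(composite_scores(realism_items))
print(cronbach_alpha(realism_items))
```

When all items move together perfectly, alpha reaches its maximum of 1.0; values above 0.80, as reported here, indicate good internal consistency for a short scale.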

2.6. Measure of Enjoyment

The scale for assessing enjoyment was used in previous studies [22,38]. In this scale, participants rated their feelings of enjoyment, comfort, and closeness. The ratings on this scale ranged from (1) not at all, (2) a little bit, (3) quite a lot, (4) a lot, and (5) extremely. A composite score of enjoyment for each participant was created by averaging the scores of the three items. Cronbach’s alpha values for this measure across each of the conditions were above 0.80 (see Table 2).

2.7. Measure of Percentage of Eye Contact Duration

The scale for assessing the percentage of eye contact maintained during the interaction was created using a percentage scale slider bar. It contained a question asking the participant to report on the proportion of the conversation where they felt there was eye contact between themselves and the avatar. Participants rated this proportion from 0% of the time to 100% of the time.

2.8. Post-Experiment Survey

Upon finishing the experiment, participants were asked to fill out a brief post-experiment survey to determine their preferred avatar to interact with. This survey asked participants which avatar (1) was most enjoyable to interact with, (2) was most comfortable to interact with, (3) was most pleasant to interact with, (4) they felt was most like a real person, (5) had facial gestures that looked the most natural/realistic, (6) had body movements that looked the most natural/realistic, and (7) had the most eye contact. Participants selected either the “iClone UNREAL avatar”, the “Vive Sync avatar”, or “Unsure” for each item.

2.9. Procedure

The procedures for this study were approved by the University’s human research ethics committee.
Participants were students who had shown interest in the experiment through the University’s Research Participation Sign-Up System or by responding to advertisements posted on campus bulletin boards. Participants who were recruited through the sign-up system were awarded course credit for an undergraduate psychology unit, while participants who responded to bulletin advertisements received an AUD 40 gift voucher. Before starting the experiment, participants were asked to complete a pre-experiment survey. The survey included questions about their age, gender, and how frequently they had used different technologies for communication purposes in the past 12 months (see Table 1). Following this, they were instructed to write down, using pen and paper, two positive and two negative experiences, one of each to be shared in each session. Participants answered specific questions about each experience, including what had happened, when it happened, and how it made them feel.
After completing the pre-experiment survey, participants first interacted with either the iClone or Vive Sync avatar, with the order of interaction counterbalanced across participants. For the iClone UNREAL avatar condition, participants were instructed to sit at a table while a PhD student researcher, who acted as the interviewer, helped them put on the HP Reverb headset comfortably. Once inside VR, participants initially saw only a loading screen. During this time, the interviewer spent two minutes calibrating their motion capture suit and camera. Participants could see the virtual environment and animated avatar only once the calibration was complete and the simulation had started. At this point, they could see the interviewer’s fully animated avatar on the other side of the virtual desk and could control their own virtual hands using the provided controllers. The interview began once the participant indicated they were ready to proceed.
For the Vive Sync avatar condition, the participant was instructed to follow the researcher into a separate room containing a Vive Focus 3 headset. After the researcher helped them put on the headset comfortably, the participant found themselves positioned in the Vive Sync office environment. The interviewer then left the room and used a second Vive Focus 3 to join the participant in the Vive Sync virtual office space. The interviewer greeted the participant using the onboard microphone and instructed them to face the interviewer’s avatar. The interview began once the participant indicated they were ready to proceed.
Participants engaged in two interactions with the interviewer in VR. Each interaction was conducted in a systematic way, with the participant and researcher both sharing one positive and one negative experience. For the first interaction, participants disclosed a positive and negative experience to the avatar and listened to the avatar disclose a positive and negative experience, lasting approximately two minutes for each disclosure. This was repeated for the second interaction. In total, there were four disclosures from the participants and four from the avatar.
Participants were allowed to share any experience they felt comfortable sharing, while the researcher kept their experiences the same for all participants. This study involved two separate interactions, and the order of presentation for either the UNREAL avatar or Vive Sync avatar was counterbalanced across participants. The order in which the researcher and participant shared their experiences was also counterbalanced to prevent order effects. This ensured that in half of the interactions, the researcher shared their experiences first, while in the other half, the participant shared first. However, in the second interaction, the order of the participants’ disclosures was not counterbalanced. Instead, the interaction was structured in the same way as Rogers et al. [23] and Fraser et al. [22] so that the participant always finished by sharing a positive experience. This decision was made for ethical reasons and was aimed at reducing any potential negative feelings the participant might have after leaving the experiment. After the first interaction, the interviewer helped the participant remove the headset and provided them with an iPad tablet to answer questions related to perceived realism, enjoyment, and eye contact.
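The counterbalancing scheme described above, crossing avatar order with sharing order, can be sketched as a simple assignment procedure. This is an illustrative sketch only, not the authors' actual allocation script; the condition labels and the eight-participant example are ours:

```python
from collections import Counter
from itertools import cycle, product

# Two counterbalanced factors: which avatar is experienced first,
# and who shares their experiences first within each interaction.
avatar_orders = [("iClone UNREAL", "Vive Sync"), ("Vive Sync", "iClone UNREAL")]
sharer_orders = ["interviewer first", "participant first"]

# Crossing the two factors yields four condition orders.
conditions = list(product(avatar_orders, sharer_orders))

def assign(participant_ids):
    """Cycle participants through the four orders so each order is used equally often."""
    return dict(zip(participant_ids, cycle(conditions)))

schedule = assign(range(1, 9))       # eight hypothetical participants
counts = Counter(schedule.values())  # each of the four orders assigned twice
```

With a sample size divisible by four, this cyclic assignment gives each avatar order and each sharing order equal representation, which is what the design requires to rule out order effects.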
After the interactions were completed, the participants were asked to complete a post-experiment survey, which took an average of 5 min to finish. The survey was conducted on the same tablet that was used for the pre-experiment survey. Overall, the experiment lasted an average of 40 min, including the time spent on interactions, completing the surveys, and following the procedure.

3. Results

Descriptive statistics for perceived enjoyment and realism, their respective internal consistency values, and the correlations between them are presented in Table 2. As can be seen in Table 2, moderate positive associations were found between enjoyment and realism ratings across all avatars and positive/negative disclosure types.
Three 2 × 2 × 2 repeated-measures ANOVAs were conducted to examine whether there were any differences in the mean perceived enjoyment, perceived realism ratings, and eye contact duration across the conditions. The factors included were avatar type (Vive Sync or UNREAL Avatar), disclosure source (participant or interviewer), and disclosure type (positive or negative).
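The effect sizes reported in the following sections follow standard formulas: partial eta squared can be recovered from an F statistic and its degrees of freedom, and Cohen's d from condition means and standard deviations. The sketch below is illustrative, not the authors' analysis script, and uses the pooled-SD form of d; values derived from a repeated-measures variant of d may differ slightly in the second decimal place:

```python
import math

def partial_eta_squared(f_stat: float, df_effect: int, df_error: int) -> float:
    """Partial eta squared recovered from an F statistic and its degrees of freedom."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

def cohens_d(m1: float, sd1: float, m2: float, sd2: float) -> float:
    """Cohen's d using the pooled standard deviation of two equal-sized conditions."""
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)
    return (m1 - m2) / pooled_sd

# Avatar-type effect on perceived realism: F(1,56) = 15.70
print(round(partial_eta_squared(15.70, 1, 56), 2))      # 0.22
# Avatar-type effect on perceived eye contact duration
print(round(cohens_d(66.90, 24.51, 52.64, 28.75), 2))   # 0.53
```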

3.1. Results for the Perceived Realism Measure

The ANOVA for the perceived realism ratings showed a significant effect of avatar type (F(1,56) = 15.70, p < 0.01, ηp2 = 0.22), where the iClone UNREAL avatar (M = 3.01, SD = 0.96) had higher ratings for realism compared to the Vive Sync avatar (M = 2.45, SD = 0.94). Cohen’s measure of effect size (d = 0.58) indicated a moderate to large effect (see Figure 5 and Figure 6). There was also a significant effect of disclosure source (F(1,56) = 19.73, p < 0.01, ηp2 = 0.26). Disclosures from the interviewer (M = 2.80, SD = 0.80) resulted in higher perceived realism ratings compared to disclosures from the participant (M = 2.66, SD = 0.80). Cohen’s measure of effect size (d = 0.17) indicated a small effect (see Figure 5). Further, there was a significant effect of disclosure type (F(1,56) = 11.99, p < 0.01, ηp2 = 0.17). Positive disclosures (M = 2.78, SD = 0.80) led to higher perceived realism ratings compared to negative disclosures (M = 2.69, SD = 0.79). Cohen’s measure of effect size (d = 0.11) indicated a small effect (see Figure 6). No significant interaction effect was observed between avatar type and disclosure source (F(1,56) = 1.25, p = 0.27, ηp2 = 0.02), avatar type and disclosure type (F(1,56) = 1.94, p = 0.17, ηp2 = 0.03), disclosure source and disclosure type (F(1,56) = 0.08, p = 0.78, ηp2 < 0.01), or between all three (F(1,56) = 0.02, p = 0.90, ηp2 < 0.01), suggesting that the effects of avatar type, disclosure source, and disclosure type on perceived realism ratings were independent of each other. These results demonstrate that the iClone UNREAL avatar, disclosures from the researcher, and positive disclosures led to higher realism ratings.

3.2. Results for the Enjoyment Measure

The ANOVA for the enjoyment ratings showed a significant effect of avatar type (F(1,56) = 4.57, p = 0.04, ηp2 = 0.08). The iClone UNREAL avatar (M = 3.64, SD = 0.80) had higher ratings for enjoyment than the Vive Sync avatar (M = 3.45, SD = 0.86). Cohen’s measure of effect size (d = 0.23) indicated a moderate effect (see Figure 7 and Figure 8). The mean ratings of “quite a bit” indicate that participants generally had a positive perception of both avatars. There was no significant effect of disclosure source (F(1,56) = 0.97, p = 0.33, ηp2 = 0.02). The source of disclosure, either from the participant (M = 3.52, SD = 0.79) or the interviewer (M = 3.56, SD = 0.75), did not significantly influence enjoyment ratings (Figure 7). There was a significant effect of disclosure type (F(1,56) = 35.55, p < 0.01, ηp2 = 0.39). Positive disclosures (M = 3.67, SD = 0.72) resulted in higher ratings of enjoyment when compared to negative disclosures (M = 3.41, SD = 0.82). Cohen’s measure of effect size (d = 0.32) indicated a moderate effect (Figure 8). No significant interaction effect was observed between avatar type and disclosure source (F(1,56) = 0.05, p = 0.83, ηp2 < 0.01), avatar type and disclosure type (F(1,56) = 0.80, p = 0.35, ηp2 = 0.02), disclosure source and disclosure type (F(1,56) = 2.44, p = 0.12, ηp2 = 0.04), or between all three (F(1,56) = 0.17, p = 0.69, ηp2 < 0.01), suggesting that the effects of avatar type, disclosure source, and disclosure type on perceived enjoyment ratings were independent of each other. These results demonstrate that the iClone UNREAL avatar and positive disclosures led to higher enjoyment ratings.

3.3. Results for the Perceived Eye Contact Measure

The ANOVA for the perceived eye contact duration measure showed a significant effect of avatar type (F(1,56) = 14.55, p < 0.01, ηp2 = 0.21). The iClone UNREAL avatar (M = 66.90, SD = 24.51) had a higher perceived eye contact duration when compared to the Vive Sync avatar (M = 52.64, SD = 28.75). Cohen’s measure of effect size (d = 0.53) indicated a moderate to large effect (see Figure 9 and Figure 10). There was no significant effect of disclosure source (F(1,56) = 0.31, p = 0.58, ηp2 = 0.01). Disclosures from either the participant (M = 59.60, SD = 23.16) or the interviewer (M = 59.94, SD = 22.43) did not influence perceived eye contact duration (Figure 9). There was a significant effect of disclosure type (F(1,56) = 9.33, p < 0.01, ηp2 = 0.14). Positive disclosures (M = 60.53, SD = 23.10) resulted in longer perceived eye contact duration when compared to negative disclosures (M = 59.01, SD = 22.40). Cohen’s measure of effect size (d = 0.07) indicated a small effect (Figure 10). Additionally, no significant interaction effect was observed between avatar type and disclosure source (F(1,56) = 0.02, p = 0.90, ηp2 < 0.01), avatar type and disclosure type (F(1,56) = 0.20, p = 0.89, ηp2 = 0.00), disclosure source and disclosure type (F(1,56) = 0.67, p = 0.42, ηp2 = 0.01), or between all three (F(1,56) = 3.17, p = 0.08, ηp2 = 0.05), suggesting that the effects of avatar type, disclosure source, and disclosure type on perceived eye contact duration were independent of each other. These results demonstrate that the iClone UNREAL avatar and positive disclosures led to higher perceived eye contact ratings.

3.4. End of Experiment Preference Ratings

This section reports on the ratings made at the end of the experiment after interacting with both avatars. The participant preferences for “avatar type” were tallied from the items in the post-experiment survey. The responses for the items “felt most enjoyable”, “felt most comfortable”, and “felt most pleasant” were summed to create a new total called “Most Enjoyable”. The three items related to realism, “most like a real person”, “most natural/realistic facial gestures”, and “most natural/realistic body movements”, were also summed to form a new total for “Most Realistic”. Participant responses to the most perceived eye contact question were used for “Most Eye Contact”. These results indicate a significant preference for the iClone UNREAL avatar in terms of most enjoyable, most realistic, and most eye contact, as shown in Figure 11.
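The summing of forced-choice items into "Most Enjoyable" and "Most Realistic" totals can be sketched as a simple tally. This is an illustrative sketch with hypothetical responses (the two participants below are invented, and the item keys are our shorthand for the seven survey items), not the study's data or analysis code:

```python
from collections import Counter

# Hypothetical per-participant survey responses (one dict per participant);
# keys are shorthand for the seven forced-choice items described above.
responses = [
    {"enjoyable": "iClone UNREAL", "comfortable": "iClone UNREAL",
     "pleasant": "Vive Sync", "real_person": "iClone UNREAL",
     "facial": "iClone UNREAL", "body": "Unsure", "eye_contact": "iClone UNREAL"},
    {"enjoyable": "Vive Sync", "comfortable": "Vive Sync",
     "pleasant": "Vive Sync", "real_person": "Vive Sync",
     "facial": "Unsure", "body": "Vive Sync", "eye_contact": "Vive Sync"},
]

# Three enjoyment-related items are pooled into "Most Enjoyable";
# three realism-related items into "Most Realistic".
ENJOY_ITEMS = ("enjoyable", "comfortable", "pleasant")
REALISM_ITEMS = ("real_person", "facial", "body")

def tally(responses, items):
    """Count how often each avatar (or 'Unsure') was chosen across the given items."""
    return Counter(r[item] for r in responses for item in items)

most_enjoyable = tally(responses, ENJOY_ITEMS)
most_realistic = tally(responses, REALISM_ITEMS)
most_eye_contact = Counter(r["eye_contact"] for r in responses)
```

Each pooled total therefore ranges from 0 to three times the sample size, with "Unsure" responses counted as their own category rather than discarded.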

4. Discussion

In this study, two avatars were compared: an iClone UNREAL Engine avatar with high behavioural realism and a Vive Sync avatar, which has a level of behavioural realism similar to avatars in many accessible social VR platforms. Participants were asked to rate their experiences in terms of enjoyment, realism, and duration of eye contact while interacting with these avatars. Overall, participants generally had a positive perception of both avatars. However, as expected, participants reported that their feelings of enjoyment, realism, and eye contact duration were highest when interacting with the iClone UNREAL avatar.
Based on the post-experiment survey, the results revealed that the iClone UNREAL avatar was preferred as the first choice for enjoyment, realism, and eye contact (60–70% of participants). However, some participants (25–35%) reported that they preferred the Vive Sync avatar as their first choice for enjoyment, realism, and holding the most eye contact. This indicates that both avatar types may have their own strengths that appeal to different users.

4.1. The Effect of Avatar Behavioural Realism on Enjoyment, Realism, and Perceived Eye Contact

The primary aim of this study was to investigate whether an avatar controlled by full-body motion capture technology, with a higher level of behavioural realism, would be perceived as more realistic than an avatar with limited motion tracking. The results confirmed the first hypothesis because the avatar with higher behavioural realism was rated as more realistic compared to the avatar with less behavioural realism. This study highlights the potential of motion capture technology for creating realistic avatars that can enhance VR social interactions by making the experience more lifelike.
Previous studies have suggested that including higher levels of behavioural realism for avatars during VR social interactions can improve enjoyment by making the interaction more lifelike [22,26,29,30,31,37,79]. These studies informed the second hypothesis of this study, where it was predicted that an avatar with higher levels of behavioural realism would be more enjoyable to interact with. The results show that participants rated the iClone UNREAL avatar with higher behavioural realism as more enjoyable to interact with than the Vive Sync avatar with lower behavioural realism, thus supporting the second hypothesis. A moderate correlation was found between the participants’ ratings of enjoyment and perceived realism, indicating that those who rated the avatar as more realistic also rated them as more enjoyable to interact with. These results suggest that better motion capture can help improve VR social interactions by providing avatars with lifelike behaviour that is more enjoyable for users.
This study also investigated whether the behavioural realism of an avatar’s facial gestures could affect perceived eye contact duration during a VR social interaction involving self-disclosure. The final hypothesis of this study proposed that an avatar with a higher level of behavioural realism, achieved through detailed facial gestures via motion capture, would be perceived as maintaining more eye contact than an avatar with lower behavioural realism [2,22,41,43,44,46]. The results show that the iClone UNREAL avatar, which had full facial movement tracking, was rated higher for perceived eye contact duration than the Vive Sync avatar. As described in the methods, the Vive Sync avatar had lower-fidelity eye tracking due to the limitations of the Vive Focus 3 onboard eye-tracking system. Specifically, although this system could track eye gaze and movement direction at a frequency of 120 Hz, it was less accurate than the full facial motion tracking of the iClone UNREAL avatar, which used a TrueDepth camera. This study demonstrated that full facial motion tracking of avatars is more effective in promoting perceived eye contact than the specific eye-tracking technology of some VR headsets. The differences in eye tracking may have affected the appearance of nonverbal communication through reduced eye contact [5,29,41,43,44,45]. However, the Vive Sync avatar was, on average, enjoyed “quite a bit”, thus supporting our conclusions on the potential benefits of both VR avatars for therapy.
The secondary aim of this study was to address limitations in both Rogers et al. [23] and Fraser et al. [22], where only the participants disclosed their experiences. Therefore, this study implemented a reciprocal social interaction between participant and interviewer. It was expected that disclosures from the avatar would result in higher realism ratings, as participants’ attention would be more focused on it. While the results showed that disclosures from the interviewer did lead to higher perceived realism ratings, the effect size was small. This suggests that although attention towards the avatar may help to improve how realistic it appears, the magnitude of this effect is unlikely to be as impactful as the avatar’s appearance or behaviour.
The end-of-experiment survey found that more participants preferred the iClone UNREAL avatar with full-body motion capture as the most enjoyable and realistic to interact with. This preference indicates that more advanced motion capture technology can enhance most people’s VR social interactions. However, some preferred the Vive Sync avatar with limited motion capture. This difference in participant preferences may reflect the influence of the uncanny valley [19,21,30,61,62]. As avatars become more lifelike, they may trigger feelings of unease in some individuals, affecting their perception of realism and enjoyment [3,22,28,61,76]. In this study, the iClone UNREAL avatar’s higher level of motion capture may have entered the uncanny valley for some participants, causing them to prefer the Vive Sync avatar. This is supported by some participants (24%) choosing the Vive Sync avatar as “Most Realistic”. Alternatively, lower realism might make interactions feel less intimate and allow some people to maintain emotional distance, especially when disclosing negative experiences [23]. This is similar to the preference some people have for online chat rooms over face-to-face communication [39,80,81,82,83].

4.2. Implications and Future Directions

The implications of this research apply to many metaverse social VR platforms that aim to improve immersion and engagement during VR social interactions. These platforms should offer avatars with high behavioural realism as an option, which can be achieved using motion capture technology, to provide users with more realistic and immersive experiences [8,11,19]. This study suggests that facial and body motion tracking can lead to more satisfying social interactions for most users. However, it also illustrates that people have varying preferences for avatar realism: some may prefer highly realistic avatars, while others prefer less realistic ones. Offering personalised options for avatar realism can cater to these preferences.
In this study, the iClone UNREAL avatar with a high level of behavioural realism was generally preferred by most participants. However, it is necessary to acknowledge that both avatars were rated as fairly enjoyable overall. This suggests that the Vive Sync avatar, with limited motion capture, might be suitable for the purpose of therapy conducted in VR. These findings highlight that the benefits of more realistic avatars need to be weighed against the additional costs and effort when implementing them [51,59,60,74,84,85,86]. For example, in therapeutic contexts, where the goal is to create a comfortable setting for self-disclosure, a simpler avatar, such as the Vive Sync avatar, might still be effective without the need to use sophisticated motion capture. Although a higher level of motion capture can enhance realism and enjoyment ratings, the decision to use realistic avatars for therapy should consider the costs related to more realistic avatars. This includes the availability of resources and the accessibility of VR technology. The decision will also depend on client expectations, as not all individuals benefit from a high level of avatar realism. More research is needed to investigate the potential utility of different types of avatars for actual counselling sessions.
Current motion capture capabilities are limited in many VR headsets. Full-body motion capture can improve the behavioural realism of avatars, but integrating this technology into accessible VR systems remains a challenge. Therefore, researchers should continue advancing both hardware and software capabilities for more realistic experiences that also accommodate user preferences. Future research should focus on other types of motion capture, such as machine learning solutions that require fewer computer resources to operate and can be integrated into VR headsets [87,88,89,90,91,92].
Future research may benefit from experimental designs with longer interaction times or multiple sessions. For example, investigating the comfort levels of individuals with longer sessions or after weekly sessions over several months may improve our understanding of the longer-term effects of behavioural realism on VR social interactions. A longitudinal study design is best suited for this type of investigation, as it would allow researchers to track long-term changes in comfort and perception for each individual.
Because the present study suggests that people have varying preferences for the level of behavioural realism in avatars, it also has implications for psychological therapy. Previous studies have suggested that avatars can offer a sense of privacy and anonymity that face-to-face social interactions do not [67,70,93]. This is likely attributed to the improvements in privacy offered by avatars that allow users to feel more at ease discussing sensitive topics. Continuing investigations into other possible benefits of using avatars for VR social interactions may help to improve its appeal and adoption for contexts such as therapy [70,71,73,74,93].

4.3. Limitations

A limitation of this study involves the physical locations of the participant and interviewer, which differed between conditions: the interviewer shared the same physical space as the participant for the iClone UNREAL avatar but not for the Vive Sync avatar. This setup was deliberate, because Vive Sync offers the advantage of being accessible from different locations, and we wanted to compare it with a more advanced avatar that is restricted to the same room as the participant. However, this difference could have impacted participant perceptions, as physical closeness may influence feelings of comfort and engagement [3,48,94,95,96]. When comparing the capabilities of avatars, future research should keep the physical location of interactions the same across VR social interactions.
Due to the limited customisation options of the Vive Sync platform, the two avatars could not be compared within the same virtual environment. Despite efforts to reduce the disparities between the environments, visual differences could have introduced confounds affecting participants’ perceptions of the avatars [37,53,56,84]. For example, if one environment was perceived as more visually appealing or immersive than the other, it could have positively or negatively influenced participants’ experiences, regardless of the avatar’s behavioural realism. The Vive Sync environment featured a modern office space with large windows and architectural elements, while the UNREAL environment was a more traditional, enclosed room with basic furnishings. Despite these differences, participants rated the iClone UNREAL avatar as more realistic, and most preferred to interact with it. These findings suggest that while environmental differences could potentially influence participant perceptions, the preference for the iClone UNREAL avatar highlights the importance of avatar behaviour in shaping VR social interactions.
The behavioural realism of avatars can influence the effectiveness of VR social interactions, especially in therapeutic contexts where self-disclosure is important. In such contexts, avatars with more motion capture, including the iClone UNREAL avatar, may enhance therapeutic outcomes by including nonverbal communication that helps facilitate trust and engagement [24,53,70,73,97,98,99]. However, for therapists who lack the resources or expertise to implement avatars with full-body motion capture, simpler avatars like Vive Sync may be more accessible and preferred [19,52,70,100,101,102]. As such, the optimal choice of avatar depends on weighing the benefits of more realistic avatars against their accessibility and ease of use.

Author Contributions

Conceptualization, A.F., R.H., C.S. and S.L.R.; Methodology, A.F. and S.L.R.; Investigation, A.F.; Data curation, A.F.; Writing—original draft, A.F.; Writing—review & editing, A.F., R.H., C.S. and S.L.R.; Visualization, A.F.; Supervision, R.H., C.S. and S.L.R.; Project administration, A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved and conducted in accordance with the National Statement on Ethical Conduct in Human Research, issued jointly by the National Health and Medical Research Council (NHMRC), the Australian Research Council, and Universities Australia.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data for this report can be accessed at https://doi.org/10.6084/m9.figshare.27897543.v1, accessed on 25 November 2024.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Girondini, M.; Stefanova, M.; Pillan, M.; Gallace, A. Speaking in Front of Cartoon Avatars: A Behavioral and Psychophysiological Study on How Audience Design Impacts on Public Speaking Anxiety in Virtual Environments. Int. J. Hum.-Comput. Stud. 2023, 179, 103106. [Google Scholar] [CrossRef]
  2. Hartbrich, J.; Weidner, F.; Kunert, C.; Raake, A.; Broll, W.; Arévalo Arboleda, S. Eye and Face Tracking in VR: Avatar Embodiment and Enfacement with Realistic and Cartoon Avatars. In Proceedings of the 22nd International Conference on Mobile and Ubiquitous Multimedia, MUM ’23, Vienna, Austria, 3–6 December 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 270–278. [Google Scholar] [CrossRef]
  3. Kim, D.; Lee, H.; Chung, K. Avatar-Mediated Experience in the Metaverse: The Impact of Avatar Realism on User-Avatar Relationship. J. Retail. Consum. Serv. 2023, 73, 103382. [Google Scholar] [CrossRef]
  4. Kim, I.; Ki, C.-W.; Lee, H.; Kim, Y.-K. Virtual Influencer Marketing: Evaluating the Influence of Virtual Influencers’ Form Realism and Behavioral Realism on Consumer Ambivalence and Marketing Performance. J. Bus. Res. 2024, 176, 114611. [Google Scholar] [CrossRef]
  5. Kimmel, S.; Jung, F.; Matviienko, A.; Heuten, W.; Boll, S. Let’s Face It: Influence of Facial Expressions on Social Presence in Collaborative Virtual Reality. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, Hamburg, Germany, 23–28 April 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–16. [Google Scholar] [CrossRef]
  6. Kullmann, P.; Menzel, T.; Botsch, M.; Latoschik, M.E. An Evaluation of Other-Avatar Facial Animation Methods for Social VR. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems; CHI EA ’23; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–7. [Google Scholar] [CrossRef]
  7. Oh, S.; Kim, W.B.; Choo, H.J. The Effect of Avatar Self-Integration on Consumers’ Behavioral Intention in the Metaverse. Int. J. Hum.–Comput. Interact. 2023, 40, 7840–7853. [Google Scholar] [CrossRef]
  8. Visconti, A.; Calandra, D.; Lamberti, F. Comparing Technologies for Conveying Emotions through Realistic Avatars in Virtual Reality-Based Metaverse Experiences. Comput. Animat. Virtual Worlds 2023, 34, e2188. [Google Scholar] [CrossRef]
  9. Wei, S.; Freeman, D.; Rovira, A. A Randomised Controlled Test of Emotional Attributes of a Virtual Coach within a Virtual Reality (VR) Mental Health Treatment. Sci. Rep. 2023, 13, 11517. [Google Scholar] [CrossRef]
  10. Yang, E.K.; Lee, J.H.; Lee, C.H. Virtual Reality Environment-Based Collaborative Exploration of Fashion Design. CoDesign 2023, 20, 332–350. [Google Scholar] [CrossRef]
  11. Brown, G.; Hust, J.; Büttner, S.; Prilla, M. The Impact of Varying Resolution and Motion Realism of Avatars in Augmented Reality-Supported, Virtually Co-Located Sales Consultations. In Proceedings of the Mensch und Computer 2022, MuC ’22, Darmstadt, Germany, 4–7 September 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 255–264. [Google Scholar] [CrossRef]
  12. Roth, D.; Lugrin, J.-L.; Galakhov, D.; Hofmann, A.; Bente, G.; Latoschik, M.E.; Fuhrmann, A. Avatar Realism and Social Interaction Quality in Virtual Reality. In Proceedings of the 2016 IEEE Virtual Reality (VR), Greenville, SC, USA, 19–23 March 2016; pp. 277–278. [Google Scholar] [CrossRef]
  13. van Brakel, V.; Barreda-Ángeles, M.; Hartmann, T. Feelings of Presence and Perceived Social Support in Social Virtual Reality Platforms. Comput. Hum. Behav. 2023, 139, 107523. [Google Scholar] [CrossRef]
  14. Fouts, M. Body Language in a Virtual World: How to Communicate Your Message Effectively. Forbes. Available online: https://www.forbes.com/sites/forbescoachescouncil/2021/09/30/body-language-in-a-virtual-world-how-to-communicate-your-message-effectively/ (accessed on 15 November 2023).
  15. Heath, A. Meta’s Social VR Platform Horizon Hits 300,000 Users. The Verge. Available online: https://www.theverge.com/2022/2/17/22939297/meta-social-vr-platform-horizon-300000-users (accessed on 15 November 2023).
  16. Knobeloch, J. Social Interaction in the Metaverse: The Role of Nonverbal Signals in VRChat Culture|LinkedIn. Available online: https://www.linkedin.com/pulse/social-interaction-metaverse-role-nonverbal-signals-vrchat-knobeloch/ (accessed on 15 November 2023).
  17. Fysh, M.C.; Trifonova, I.V.; Allen, J.; McCall, C.; Burton, A.M.; Bindemann, M. Avatars with Faces of Real People: A Construction Method for Scientific Experiments in Virtual Reality. Behav. Res. Methods 2022, 54, 1461–1475. [Google Scholar] [CrossRef]
  18. Kokkinara, E.; McDonnell, R. Animation Realism Affects Perceived Character Appeal of a Self-Virtual Face. In Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games, MIG ’15, Paris, France, 16–18 November 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 221–226. [Google Scholar] [CrossRef]
  19. Ma, F.; Pan, X. Visual Fidelity Effects on Expressive Self-Avatar in Virtual Reality: First Impressions Matter. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, 12–16 March 2022; pp. 57–65. [Google Scholar] [CrossRef]
  20. Persa, G.; Csapo, A.; Galambos, P. Avatars in 3D Virtual Reality: From Concrete to Abstract. In Proceedings of the 12th IEEE International Conference on Cognitive Infocommunications, Online, 23–25 September 2021; pp. 329–334. [Google Scholar]
  21. Segaran, K.; Mohamad Ali, A.Z.; Hoe, T.W. Does Avatar Design in Educational Games Promote a Positive Emotional Experience among Learners? E-Learn. Digit. Media 2021, 18, 422–440. [Google Scholar] [CrossRef]
  22. Fraser, A.D.; Branson, I.; Hollett, R.C.; Speelman, C.P.; Rogers, S.L. Expressiveness of Real-Time Motion Captured Avatars Influences Perceived Animation Realism and Perceived Quality of Social Interaction in Virtual Reality. Front. Virtual Real. 2022, 3, 981400. [Google Scholar] [CrossRef]
  23. Rogers, S.L.; Broadbent, R.; Brown, J.; Fraser, A.; Speelman, C.P. Realistic Motion Avatars Are the Future for Social Interaction in Virtual Reality. Front. Virtual Real. 2022, 2, 750729. [Google Scholar] [CrossRef]
  24. Aseeri, S.; Interrante, V. The Influence of Avatar Representation on Interpersonal Communication in Virtual Social Environments. IEEE Trans. Vis. Comput. Graph. 2021, 27, 2608–2617. [Google Scholar] [CrossRef]
  25. Freiwald, J.P.; Schenke, J.; Lehmann-Willenbrock, N.; Steinicke, F. Effects of Avatar Appearance and Locomotion on Co-Presence in Virtual Reality Collaborations. In Proceedings of the Mensch und Computer 2021, MuC ’21, New York, NY, USA, 5–8 September 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 393–401. [Google Scholar] [CrossRef]
  26. Herrera, F.; Oh, S.Y.; Bailenson, J.N. Effect of Behavioral Realism on Social Interactions inside Collaborative Virtual Environments. Presence Teleoperators Virtual Environ. 2020, 27, 163–182. [Google Scholar] [CrossRef]
  27. Kyrlitsias, C.; Michael-Grigoriou, D. Social Interaction with Agents and Avatars in Immersive Virtual Environments: A Survey. Front. Virtual Real. 2022, 2, 786665. [Google Scholar] [CrossRef]
  28. Narayanan, S.; Polys, N.; Bukvic, I.I. Cinemacraft: Exploring Fidelity Cues in Collaborative Virtual World Interactions. Virtual Real. 2020, 24, 53–73. [Google Scholar] [CrossRef]
  29. Kruzic, C.; Kruzic, D.; Herrera, F.; Bailenson, J. Facial Expressions Contribute More than Body Movements to Conversational Outcomes in Avatar-Mediated Virtual Environments. Sci. Rep. 2020, 10, 20626. [Google Scholar] [CrossRef]
  30. Seymour, M.; Yuan, L.; Dennis, A.; Riemer, K. Have We Crossed the Uncanny Valley? Understanding Affinity, Trustworthiness, and Preference for Realistic Digital Humans in Immersive Environments. J. Assoc. Inf. Syst. 2021, 22, 9. [Google Scholar] [CrossRef]
  31. Adkins, A.; Normoyle, A.; Lin, L.; Sun, Y.; Ye, Y.; Di Luca, M.; Jörg, S. How Important Are Detailed Hand Motions for Communication for a Virtual Character Through the Lens of Charades? ACM Trans. Graph. 2023, 42, 27. [Google Scholar] [CrossRef]
  32. Dodds, T.J.; Mohler, B.J.; Bülthoff, H.H. Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments. PLoS ONE 2011, 6, e25759. [Google Scholar] [CrossRef]
  33. Gillies, M. Creating Virtual Characters. In Proceedings of the 5th International Conference on Movement and Computing, Genoa, Italy, 28–30 June 2018; ACM International Conference Proceeding Series. Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  34. Pan, X.; Hamilton, A.F.d.C. Why and How to Use Virtual Reality to Study Human Social Interaction: The Challenges of Exploring a New Research Landscape. Br. J. Psychol. 2018, 109, 395–417. [Google Scholar] [CrossRef] [PubMed]
  35. Thomas, S.; Ferstl, Y.; McDonnell, R.; Ennis, C. Investigating How Speech and Animation Realism Influence the Perceived Personality of Virtual Characters and Agents. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, 12–16 March 2022; pp. 11–20. [Google Scholar] [CrossRef]
  36. Gamelin, G.; Chellali, A.; Cheikh, S.; Ricca, A.; Dumas, C.; Otmane, S. Point-Cloud Avatars to Improve Spatial Communication in Immersive Collaborative Virtual Environments. Pers. Ubiquitous Comput. 2021, 25, 467–484. [Google Scholar] [CrossRef]
  37. Yoon, B.; Kim, H.; Lee, G.A.; Billinghurst, M.; Woo, W. The Effect of Avatar Appearance on Social Presence in an Augmented Reality Remote Collaboration. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 547–556. [Google Scholar] [CrossRef]
  38. Fraser, A.D.; Branson, I.; Hollett, R.C.; Speelman, C.P.; Rogers, S.L. Do Realistic Avatars Make Virtual Reality Better? Examining Human-like Avatars for VR Social Interactions. Comput. Hum. Behav. Artif. Hum. 2024, 2, 100082. [Google Scholar] [CrossRef]
  39. Smith, H.J.; Neff, M. Communication Behavior in Embodied Virtual Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–12. [Google Scholar] [CrossRef]
  40. Seymour, M.; Riemer, K.; Kay, J. Actors, Avatars and Agents: Potentials and Implications of Natural Face Technology for the Creation of Realistic Visual Presence. J. Assoc. Inf. Syst. 2018, 19, 4. [Google Scholar] [CrossRef]
  41. Ruhland, K.; Peters, C.E.; Andrist, S.; Badler, J.B.; Badler, N.I.; Gleicher, M.; Mutlu, B.; McDonnell, R. A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception. Comput. Graph. Forum 2015, 34, 299–326. [Google Scholar] [CrossRef]
  42. Wu, Y.; Wang, Y.; Jung, S.; Hoermann, S.; Lindeman, R.W. Using a Fully Expressive Avatar to Collaborate in Virtual Reality: Evaluation of Task Performance, Presence, and Attraction. Front. Virtual Real. 2021, 2, 641296. [Google Scholar] [CrossRef]
  43. Luo, L.; Weng, D.; Ding, N.; Hao, J.; Tu, Z. The Effect of Avatar Facial Expressions on Trust Building in Social Virtual Reality. Vis. Comput. 2022, 39, 5869–5882. [Google Scholar] [CrossRef]
  44. Garau, M.; Slater, M.; Vinayagamoorthy, V.; Brogni, A.; Steed, A.; Sasse, M.A. The Impact of Avatar Realism and Eye Gaze Control on Perceived Quality of Communication in a Shared Immersive Virtual Environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’03, Ft. Lauderdale, FL, USA, 5–10 April 2003; Association for Computing Machinery: New York, NY, USA, 2003; pp. 529–536. [Google Scholar] [CrossRef]
  45. Rogers, S.L.; Speelman, C.P.; Guidetti, O.; Longmuir, M. Using Dual Eye Tracking to Uncover Personal Gaze Patterns during Social Interaction. Sci. Rep. 2018, 8, 4271. [Google Scholar] [CrossRef]
  46. McDonnell, R.; Breidt, M.; Bülthoff, H.H. Render Me Real? Investigating the Effect of Render Style on the Perception of Animated Virtual Humans. ACM Trans. Graph. 2012, 31, 91. [Google Scholar] [CrossRef]
  47. Hessels, R.S.; Cornelissen, T.H.W.; Hooge, I.T.C.; Kemner, C. Gaze Behavior to Faces during Dyadic Interaction. Can. J. Exp. Psychol./Rev. Can. Psychol. Expérimentale 2017, 71, 226–242. [Google Scholar] [CrossRef]
  48. Burgoon, J.K.; Bonito, J.A.; Ramirez, A., Jr.; Dunbar, N.E.; Kam, K.; Fischer, J. Testing the Interactivity Principle: Effects of Mediation, Propinquity, and Verbal and Nonverbal Modalities in Interpersonal Interaction. J. Commun. 2002, 52, 657–677. [Google Scholar] [CrossRef]
  49. Oh, S.Y.; Bailenson, J.; Krämer, N.; Li, B. Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments. PLoS ONE 2016, 11, e0161794. [Google Scholar] [CrossRef]
  50. Saberi, M.; DiPaola, S.; Bernardet, U. Expressing Personality Through Non-Verbal Behaviour in Real-Time Interaction. Front. Psychol. 2021, 12, 660895. [Google Scholar] [CrossRef] [PubMed]
  51. Mathis, F.; Vaniea, K.; Khamis, M. Observing Virtual Avatars: The Impact of Avatars’ Fidelity on Identifying Interactions. In Proceedings of the 24th International Academic Mindtrek Conference, Virtual, 1–3 June 2021; Academic Mindtrek ’21. Association for Computing Machinery: New York, NY, USA, 2021; pp. 154–164. [Google Scholar] [CrossRef]
  52. Bradbury, A.; Schertz, S.; Wiebe, E. How Does Virtual Reality Compare? The Effects of Avatar Appearance and Medium on Self-Disclosure. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2022, 66, 2036–2040. [Google Scholar] [CrossRef]
  53. Yu, K.; Gorbachev, G.; Eck, U.; Pankratz, F.; Navab, N.; Roth, D. Avatars for Teleconsultation: Effects of Avatar Embodiment Techniques on User Perception in 3D Asymmetric Telepresence. IEEE Trans. Vis. Comput. Graph. 2021, 27, 4129–4139. [Google Scholar] [CrossRef]
  54. Roth, D.; Kullmann, P.; Bente, G.; Gall, D.; Latoschik, M.E. Effects of Hybrid and Synthetic Social Gaze in Avatar-Mediated Interactions. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 16–20 October 2018; pp. 103–108. [Google Scholar] [CrossRef]
  55. Döllinger, N.; Wolf, E.; Mal, D.; Wenninger, S.; Botsch, M.; Latoschik, M.E.; Wienrich, C. Resize Me! Exploring the User Experience of Embodied Realistic Modulatable Avatars for Body Image Intervention in Virtual Reality. Front. Virtual Real. 2022, 3, 935449. [Google Scholar] [CrossRef]
  56. Dubosc, C.; Gorisse, G.; Christmann, O.; Fleury, S.; Poinsot, K.; Richir, S. Impact of Avatar Facial Anthropomorphism on Body Ownership, Attractiveness and Social Presence in Collaborative Tasks in Immersive Virtual Environments. Comput. Graph. 2021, 101, 82–92. [Google Scholar] [CrossRef]
  57. Latoschik, M.E.; Roth, D.; Gall, D.; Achenbach, J.; Waltemate, T.; Botsch, M. The Effect of Avatar Realism in Immersive Social Virtual Realities. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, VRST ’17, Gothenburg Sweden, 8–10 November 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1–10. [Google Scholar] [CrossRef]
  58. Zibrek, K.; McDonnell, R. Social Presence and Place Illusion Are Affected by Photorealism in Embodied VR. In Proceedings of the 12th ACM SIGGRAPH Conference on Motion, Interaction and Games, MIG ’19, Newcastle upon Tyne, UK, 28–30 October 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–7. [Google Scholar] [CrossRef]
  59. Bailey, J.; Blackmore, K.; Robinson, G. Exploring Avatar Facial Fidelity and Emotional Expressions on Observer Perception of the Uncanny Valley. In Intersections in Simulation and Gaming; Naweed, A., Wardaszko, M., Leigh, E., Meijer, S., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; pp. 201–221. [Google Scholar] [CrossRef]
  60. Higgins, D.; Zibrek, K.; Cabral, J.; Egan, D.; McDonnell, R. Sympathy for the Digital: Influence of Synthetic Voice on Affinity, Social Presence and Empathy for Photorealistic Virtual Humans. Comput. Graph. 2022, 104, 116–128. [Google Scholar] [CrossRef]
  61. Higgins, D.; Egan, D.; Fribourg, R.; Cowan, B.; McDonnell, R. Ascending from the Valley: Can State-of-the-Art Photorealism Avoid the Uncanny? In Proceedings of the ACM Symposium on Applied Perception 2021, SAP ’21, Online, 16–17 September 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1–5. [Google Scholar] [CrossRef]
  62. Kätsyri, J.; Förger, K.; Mäkäräinen, M.; Takala, T. A Review of Empirical Evidence on Different Uncanny Valley Hypotheses: Support for Perceptual Mismatch as One Road to the Valley of Eeriness. Front. Psychol. 2015, 6, 390. [Google Scholar] [CrossRef]
  63. MacDorman, K.F.; Green, R.D.; Ho, C.-C.; Koch, C.T. Too Real for Comfort? Uncanny Responses to Computer Generated Faces. Comput. Hum. Behav. 2009, 25, 695–710. [Google Scholar] [CrossRef]
  64. Piwek, L.; McKay, L.S.; Pollick, F.E. Empirical Evaluation of the Uncanny Valley Hypothesis Fails to Confirm the Predicted Effect of Motion. Cognition 2014, 130, 271–277. [Google Scholar] [CrossRef] [PubMed]
  65. Seyama, J.; Nagayama, R.S. The Uncanny Valley: Effect of Realism on the Impression of Artificial Human Faces. Presence 2007, 16, 337–351. [Google Scholar] [CrossRef]
  66. Hyde, J.; Carter, E.J.; Kiesler, S.; Hodgins, J.K. Using an Interactive Avatar’s Facial Expressiveness to Increase Persuasiveness and Socialness. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Republic of Korea, 18–23 April 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 1719–1728. [Google Scholar] [CrossRef]
  67. Kang, S.-H.; Watt, J.H. The Impact of Avatar Realism and Anonymity on Effective Communication via Mobile Devices. Comput. Hum. Behav. 2013, 29, 1169–1181. [Google Scholar] [CrossRef]
  68. Nowak, K.L.; Fox, J. Avatars and Computer-Mediated Communication: A Review of the Definitions, Uses, and Effects of Digital Representations. Rev. Commun. Res. 2018, 6, 30–53. [Google Scholar] [CrossRef]
  69. Adams, L.; Simonoff, E.; Tierney, K.; Hollocks, M.J.; Brewster, A.; Watson, J.; Valmaggia, L. Developing a User-Informed Intervention Study of a Virtual Reality Therapy for Social Anxiety in Autistic Adolescents. Des. Health 2022, 6, 114–133. [Google Scholar] [CrossRef]
  70. Baccon, L.A.; Chiarovano, E.; MacDougall, H.G. Virtual Reality for Teletherapy: Avatars May Combine the Benefits of Face-to-Face Communication with the Anonymity of Online Text-Based Communication. Cyberpsychol. Behav. Soc. Netw. 2019, 22, 158–165. [Google Scholar] [CrossRef]
  71. Beidel, D.C.; Frueh, B.C.; Neer, S.M.; Bowers, C.A.; Trachik, B.; Uhde, T.W.; Grubaugh, A. Trauma Management Therapy with Virtual-Reality Augmented Exposure Therapy for Combat-Related PTSD: A Randomized Controlled Trial. J. Anxiety Disord. 2019, 61, 64–74. [Google Scholar] [CrossRef]
  72. Boeldt, D.; McMahon, E.; McFaul, M.; Greenleaf, W. Using Virtual Reality Exposure Therapy to Enhance Treatment of Anxiety Disorders: Identifying Areas of Clinical Adoption and Potential Obstacles. Front. Psychiatry 2019, 10, 773. [Google Scholar] [CrossRef]
  73. Emmelkamp, P.M.G.; Meyerbröker, K. Virtual Reality Therapy in Mental Health. Annu. Rev. Clin. Psychol. 2021, 17, 495–519. [Google Scholar] [CrossRef]
  74. Pedram, S.; Palmisano, S.; Perez, P.; Mursic, R.; Farrelly, M. Examining the Potential of Virtual Reality to Deliver Remote Rehabilitation. Comput. Hum. Behav. 2020, 105, 106223. [Google Scholar] [CrossRef]
  75. Kang, S.-H.; Gratch, J. The Effect of Avatar Realism of Virtual Humans on Self-Disclosure in Anonymous Social Interactions. In Proceedings of the CHI ’10 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’10, Atlanta, GA, USA, 10–15 April 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 3781–3786. [Google Scholar] [CrossRef]
  76. Seymour, M.; Yuan, L.; Dennis, A.; Riemer, K. Crossing the Uncanny Valley? Understanding Affinity, Trustworthiness, and Preference for More Realistic Virtual Humans in Immersive Environments. In Proceedings of the Hawaii International Conference on System Sciences 2019, Maui, HI, USA, 8–11 January 2019. [Google Scholar]
  77. Biocca, F.; Harms, C.; Burgoon, J.K. Toward a More Robust Theory and Measure of Social Presence: Review and Suggested Criteria. Presence Teleoperators Virtual Environ. 2003, 12, 456–480. [Google Scholar] [CrossRef]
  78. Lombard, M.; Ditton, T.B.; Weinstein, L. Measuring Presence: The Temple Presence Inventory. In Proceedings of the 12th Annual International Workshop on Presence, Los Angeles, CA, USA, 11–13 November 2009. [Google Scholar]
  79. Rogers, S.L.; Hollett, R.; Li, Y.R.; Speelman, C.P. An Evaluation of Virtual Reality Role-Play Experiences for Helping-Profession Courses. Teach. Psychol. 2022, 49, 78–84. [Google Scholar] [CrossRef]
  80. Antheunis, M.L.; Schouten, A.P.; Walther, J.B. The Hyperpersonal Effect in Online Dating: Effects of Text-Based CMC vs. Videoconferencing before Meeting Face-to-Face. Media Psychol. 2020, 23, 820–839. [Google Scholar] [CrossRef]
  81. Barbier, L.; Fointiat, V. To Be or Not Be Human-Like in Virtual World. Front. Comput. Sci. 2020, 2, 15. [Google Scholar] [CrossRef]
  82. Hooi, R.; Cho, H. Avatar-Driven Self-Disclosure: The Virtual Me Is the Actual Me. Comput. Hum. Behav. 2014, 39, 20–28. [Google Scholar] [CrossRef]
  83. Oviedo, V.Y.; Fox, J.E. Meeting by Text or Video-Chat: Effects on Confidence and Performance. Comput. Hum. Behav. Rep. 2021, 3, 100054. [Google Scholar] [CrossRef]
  84. Aljaroodi, H.M.; Adam, M.T.P.; Chiong, R.; Teubner, T. Avatars and Embodied Agents in Experimental Information Systems Research: A Systematic Review and Conceptual Framework. Australas. J. Inf. Syst. 2019, 23. [Google Scholar] [CrossRef]
  85. Bailenson, J.N.; Yee, N.; Merget, D.; Schroeder, R. The Effect of Behavioral Realism and Form Realism of Real-Time Avatar Faces on Verbal Disclosure, Nonverbal Disclosure, Emotion Recognition, and Copresence in Dyadic Interaction. Presence Teleoperators Virtual Environ. 2006, 15, 359–372. [Google Scholar] [CrossRef]
  86. Weidner, F.; Boettcher, G.; Arboleda, S.A.; Diao, C.; Sinani, L.; Kunert, C.; Gerhardt, C.; Broll, W.; Raake, A. A Systematic Review on the Visualization of Avatars and Agents in AR & VR Displayed Using Head-Mounted Displays. IEEE Trans. Vis. Comput. Graph. 2023, 29, 2596–2606. [Google Scholar] [CrossRef]
  87. Anvari, T.; Park, K.; Kim, G. Upper Body Pose Estimation Using Deep Learning for a Virtual Reality Avatar. Appl. Sci. 2023, 13, 2460. [Google Scholar] [CrossRef]
  88. Dai, P.; Zhang, Y.; Liu, T.; Fan, Z.; Du, T.; Su, Z.; Zheng, X.; Li, Z. HMD-Poser: On-Device Real-Time Human Motion Tracking from Scalable Sparse Observations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 874–884. [Google Scholar]
  89. Dobre, G.C.; Gillies, M.; Pan, X. Immersive Machine Learning for Social Attitude Detection in Virtual Reality Narrative Games. Virtual Real. 2022, 26, 1519–1538. [Google Scholar] [CrossRef]
  90. Gamage, N.M.; Ishtaweera, D.; Weigel, M.; Withana, A. So Predictable! Continuous 3D Hand Trajectory Prediction in Virtual Reality. In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology, UIST ’21, Virtual Event, 10–14 October 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 332–343. [Google Scholar] [CrossRef]
  91. Pięta, P.; Jegierski, H.; Babiuch, P.; Jegierski, M.; Płaza, M.; Łukawski, G.; Deniziak, S.; Jasiński, A.; Opałka, J.; Węgrzyn, P.; et al. Automated Classification of Virtual Reality User Motions Using a Motion Atlas and Machine Learning Approach. IEEE Access 2024, 12, 94584–94609. [Google Scholar] [CrossRef]
  92. Schwind, V.; Halbhuber, D.; Fehle, J.; Sasse, J.; Pfaffelhuber, A.; Tögel, C.; Dietz, J.; Henze, N. The Effects of Full-Body Avatar Movement Predictions in Virtual Reality Using Neural Networks. In Proceedings of the 26th ACM Symposium on Virtual Reality Software and Technology, VRST ’20, Virtual Event, 1–4 November 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–11. [Google Scholar] [CrossRef]
  93. Maloney, D.; Zamanifard, S.; Freeman, G. Anonymity vs. Familiarity: Self-Disclosure and Privacy in Social Virtual Reality. In Proceedings of the 26th ACM Symposium on Virtual Reality Software and Technology, Virtual Event, 1–4 November 2020. [Google Scholar] [CrossRef]
  94. Lahnakoski, J.M.; Forbes, P.A.G.; McCall, C.; Schilbach, L. Unobtrusive Tracking of Interpersonal Orienting and Distance Predicts the Subjective Quality of Social Interactions. R. Soc. Open Sci. 2020, 7, 191815. [Google Scholar] [CrossRef]
  95. O’Leary, M.B.; Wilson, J.M.; Metiu, A. Beyond Being There: The Symbolic Role of Communication and Identification in Perceptions of Proximity to Geographically Dispersed Colleagues. MIS Q. 2014, 38, 1219–1244. [Google Scholar] [CrossRef]
  96. Won, A.S.; Shriram, K.; Tamir, D.I. Social Distance Increases Perceived Physical Distance. Soc. Psychol. Personal. Sci. 2018, 9, 372–380. [Google Scholar] [CrossRef]
  97. Aas, B.; Meyerbröker, K.; Emmelkamp, P. Who Am I—And If so, Where? A Study on Personality in Virtual Realities. J. Virtual Worlds Res. 2010, 2, 5. [Google Scholar] [CrossRef]
  98. Duan, S.; Valmaggia, L.; Lawrence, A.J.; Fennema, D.; Moll, J.; Zahn, R. Virtual Reality-Assessment of Social Interactions and Prognosis in Depression. J. Affect. Disord. 2024, 359, 234–240. [Google Scholar] [CrossRef]
  99. Miao, F.; Kozlenkova, I.V.; Wang, H.; Xie, T.; Palmatier, R.W. An Emerging Theory of Avatar Marketing. J. Mark. 2022, 86, 67–90. [Google Scholar] [CrossRef]
  100. Higgins, D.; Zhan, Y.; Cowan, B.R.; McDonnell, R. Investigating the Effect of Visual Realism on Empathic Responses to Emotionally Expressive Virtual Humans. In Proceedings of the ACM Symposium on Applied Perception 2023, SAP ’23, Los Angeles, CA, USA, 5–6 August 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–7. [Google Scholar] [CrossRef]
  101. Pfender, E.; Caplan, S. Nonverbal Immediacy Cues and Impression Formation in Video Therapy. Couns. Psychol. Q. 2023, 36, 395–407. [Google Scholar] [CrossRef]
  102. Sykownik, P.; Maloney, D.; Freeman, G.; Masuch, M. Something Personal from the Metaverse: Goals, Topics, and Contextual Factors of Self-Disclosure in Commercial Social VR. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI ’22, New Orleans, LA, USA, 29 April–5 May 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1–17. [Google Scholar] [CrossRef]
Figure 1. The iClone UNREAL avatar in animation as seen from the participant’s perspective.
Figure 2. The compact office space used for the UNREAL room.
Figure 3. The Vive Sync avatar in animation as seen from the participant’s perspective. The participant’s own hands are not shown in this image but were visible to the participant.
Figure 4. The Vive Sync virtual office. The participant’s view was restricted to the walls of the office building, limiting their view of the office centre and windows.
Figure 5. Realism ratings for avatar types according to disclosure source. Error bars represent 95% confidence intervals.
Figure 6. Realism ratings for avatar types according to disclosure type. Error bars represent 95% confidence intervals.
Figure 7. Enjoyment ratings for avatar types according to disclosure source. Error bars represent 95% confidence intervals.
Figure 8. Enjoyment ratings for avatar types according to disclosure type. Error bars represent 95% confidence intervals.
Figure 9. Perceived eye contact duration for avatar types according to disclosure source. Error bars represent 95% confidence intervals.
Figure 10. Perceived eye contact duration for avatar types according to disclosure type. Error bars represent 95% confidence intervals.
Figure 11. Aggregated preference totals for avatar types, reported as percentages, across three items: “most enjoyable to interact with”, “most realistic to interact with”, and “holding the most perceived eye contact”.
Table 1. Participant history of technology-mediated and face-to-face conversations within the past 12 months.

| Participant Ratings | None | A Little | Quite a Bit | A Lot | Extensive |
|---|---|---|---|---|---|
| Face-to-face conversations | 1 (1.8%) | 4 (7%) | 0 (0%) | 2 (3.5%) | 50 (87.7%) |
| Phone conversations | 3 (5.3%) | 3 (5.3%) | 0 (0%) | 12 (21.1%) | 39 (68.4%) |
| Screen-based conversations | 4 (7%) | 12 (21.1%) | 13 (22.8%) | 13 (22.8%) | 15 (26.3%) |
| Conversations in virtual reality | 39 (68.4%) | 14 (24.6%) | 0 (0%) | 2 (3.5%) | 2 (3.5%) |
| Screen-based computer games | 20 (35.1%) | 15 (26.3%) | 9 (15.8%) | 2 (3.5%) | 11 (19.3%) |
| Virtual reality computer games | 41 (71.9%) | 16 (28.1%) | 0 (0%) | 0 (0%) | 0 (0%) |
Table 2. Means, standard deviations (in parentheses), and Cronbach’s alpha values [in brackets], together with the correlations between the enjoyment and perceived realism scales. “Participant” and “Interviewer” labels indicate the source of the disclosure for each column.

| Condition | Enjoyment (Participant Disclosure) | Enjoyment (Interviewer Disclosure) | Realism (Participant Disclosure) | Realism (Interviewer Disclosure) | Enjoyment–Realism r (Participant) | Enjoyment–Realism r (Interviewer) |
|---|---|---|---|---|---|---|
| **Negative Disclosure** | | | | | | |
| Vive Sync Avatar | 3.34 (0.99) [0.91] | 3.33 (0.88) [0.88] | 2.31 (0.79) [0.87] | 2.48 (0.97) [0.84] | 0.56 * | 0.53 * |
| UNREAL Avatar | 3.50 (0.98) [0.90] | 3.49 (0.88) [0.88] | 2.93 (1.01) [0.89] | 3.03 (0.97) [0.86] | 0.77 * | 0.69 * |
| **Positive Disclosure** | | | | | | |
| Vive Sync Avatar | 3.53 (0.87) [0.92] | 3.59 (0.92) [0.93] | 2.43 (0.99) [0.86] | 2.60 (1.00) [0.87] | 0.57 * | 0.56 * |
| UNREAL Avatar | 3.73 (0.81) [0.88] | 3.83 (0.74) [0.91] | 2.98 (0.97) [0.89] | 3.09 (0.93) [0.84] | 0.71 * | 0.66 * |

Correlations marked with * are significant at p < 0.01.
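The scale statistics reported in Table 2 follow standard formulas: Cronbach’s alpha for internal consistency of each rating scale, and Pearson’s r for the enjoyment–realism association. As an illustrative sketch only (not the authors’ analysis code; function names are our own), these can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each respondent's summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def pearson_r(x, y) -> float:
    """Pearson correlation between two rating variables."""
    return float(np.corrcoef(x, y)[0, 1])
```

For example, a scale whose items all move together perfectly yields an alpha of 1, while unrelated items drive alpha toward 0; the bracketed values in Table 2 (all ≥ 0.84) indicate high internal consistency for both scales.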
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
