Article

Societal Perceptions and Acceptance of Virtual Humans: Trust and Ethics across Different Contexts

Center for Strategic Corporate Foresight and Sustainability, SBS Swiss Business School, 8302 Kloten-Zurich, Switzerland
Soc. Sci. 2024, 13(10), 516; https://doi.org/10.3390/socsci13100516
Submission received: 17 August 2024 / Revised: 14 September 2024 / Accepted: 26 September 2024 / Published: 29 September 2024

Abstract

This article examines public perceptions of virtual humans across various contexts, including social media, business environments, and personal interactions. Using an experimental approach with 371 participants in the United Kingdom, this research explores how the disclosure of virtual human technology influences trust, performance perception, usage likelihood, and overall acceptance. Participants interacted with virtual humans in simulations, initially unaware of their virtual nature, and then completed surveys to capture their perceptions before and after disclosure. The results indicate that trust and acceptance are higher in social media contexts, whereas business and general settings reveal significant negative shifts post-disclosure. Trust emerged as a critical factor influencing overall acceptance, with social media interactions maintaining higher levels of trust and performance perceptions than business environments and general interactions. A qualitative analysis of open-ended responses and follow-up interviews highlights concerns about transparency, security, and the lack of human touch. Participants expressed fears about data exploitation and the ethical implications of virtual human technology, particularly in business and personal settings. This study underscores the importance of ethical guidelines and transparent protocols to enhance the adoption of virtual humans in diverse sectors. These findings offer valuable insights for developers, marketers, and policymakers to optimise virtual human integration while addressing societal apprehensions, ultimately contributing to more effective and ethical deployment of virtual human technologies.

1. Introduction

Since Boeing began developing the first digital avatar with a human image in the early 1960s, this technology has moved from the laboratory into the commercial sector and is believed to have become increasingly important in the digital age (Cui and Liu 2023). From a technical point of view, virtual humans are digitally rendered individuals that have become a reality in recent years thanks to advances in information technology, specifically in areas such as motion capture, computer graphics, machine learning, and high-precision rendering. Early digital human image generation models were inefficient and expensive and were mostly created manually with modelling software; more accurate facial features and human likenesses have since been made possible by deep learning algorithms and data-driven 3D reconstruction based on 3D Morphable Models (Sela et al. 2017; Tewari et al. 2017; Tran and Liu 2018). Developments in deep learning have likewise led to the widespread adoption of end-to-end neural network approaches for speech synthesis, significantly improving speech quality. Earlier systems relied primarily on hand-crafted joint rules and language-specific knowledge; advances in speech synthesis have therefore also been critical to improving the fidelity of these representations (Ren et al. 2019; Zhao et al. 2019; Zhu et al. 2019; Yang et al. 2020; Wu et al. 2021). As the technology continues to evolve, the concept has expanded beyond a research tool and has seen increasing integration across disciplines and use cases.
The virtual human market was estimated at around USD 11.30 billion in 2013 and is projected to reach USD 440.3 billion, a compound annual growth rate of close to 44.7% (Gandham 2022; Dharmadhikari 2024). Part of this growth has been attributed to the pandemic, during which technologies such as the metaverse changed our outlook on the online world and made the use of virtual spaces acceptable, accelerating the shift towards digitalisation across all sectors. Market analyses divide the market into two categories: 2D virtual humans, which are flat representations used in 2D animations and games, and 3D virtual humans, which are more realistic and immersive and often used in virtual reality applications. Virtual humans have seen increased adoption in the entertainment industry for creating digital characters, in the service industry for customer service and virtual assistants, and in education for simulations and training, and they also have a prominent presence in sectors such as marketing, healthcare, and virtual events (Allied Market Research 2023).
There are growing concerns regarding their adoption, as technology is often considered a disruptor that can be used positively or negatively (ITU 2019). While modern technology has allowed for increased connectivity, it has also led to greater use of social media and mobile devices, which is associated with eye strain, short attention spans, and depression (Twenge and Campbell 2018). Virtual humans have recently been used as a way of connecting with dead loved ones, with people seeking help from AI-generated avatars to process their grief after the death of a family member. This is achieved using deepfake technology, a diffusion model that can develop a realistic avatar that moves and speaks. China is said to be a critical market for this technology, and its AI avatar sector has matured quickly and profited from the technology's development. Although it offers short-term solace to those who have lost loved ones, there are several ethical considerations associated with its development and application, including the practicality of the technology and the potential for data exploitation (Yang 2024). Significant technical challenges remain, such as replicating or cloning body movements and certain facial expressions, and most clients do not have enough high-quality data for the result to be satisfactory. Companies like SuperBrain rely on permission from family members; still, a company in Ningbo recently used AI tools to create videos of deceased celebrities from public data and without approval (Yang 2024). The technology has also enabled increased scamming, with fraudsters able to replicate a person's voice and deceive their close friends, breaking trust as victims are fooled by virtual humans (Visser 2024). The growing potential for the technology to be used to defraud people has likewise increased distrust of it.
Research on virtual humans and their perception has been a significant area of interest, with work exploring which voices inspire more confidence when people communicate with digital humans. One study reported that while voice quality did not affect learning, it did influence trust and learners' perception of virtual humans, a finding critical to understanding how trust is built and how strongly this factor shapes the interaction (Chiou et al. 2020). At the time of that study, there were no established tools for measuring trust in virtual humans, especially in virtual reality. A few studies have identified instruments, such as the socially conditioned place preference paradigm (SCPP), that could help with measurement, and one study has highlighted the important role trust plays in accepting and adopting new technology (Cronje 2023). Researchers found that participants judged virtual humans as more trustworthy in trustworthy conditions than in conditions designed to be untrustworthy. That study asked participants to follow or seek advice from virtual humans, which served as a behavioural measure of trust (Schroeder et al. 2021). Importantly, the experimental manipulation affected trust behaviour: those in trustworthy conditions were more likely to ask for and follow advice, indicating a paradigm sensitive enough to assess interpersonal trust towards virtual humans (Lin et al. 2023).
This research focuses on acceptance and trust but also explores an area not well covered in earlier research: the impact of distinct environments on acceptance, including social media, everyday-life interactions, and business settings. In fields such as entertainment and social media, digital humans are considered critical and are often supported because of the innovation and unique experiences they offer (Hwang and Hwang 2022; Deng and Jiang 2023; Ham et al. 2024). Transparency regarding their virtual nature tends to increase engagement and fosters a positive perception of them. Yet this acceptance differs from the apprehension and scepticism observed in business environments and in professional and personal interactions (Glass 2024; Moon et al. 2024). The potential issues raised by digital humans in these contexts pose significant ethical and security concerns that can overshadow their benefits and engender a degree of distrust. By examining people's initial positive responses and their subsequent changes in attitude upon the revelation of the agents' virtual nature, this study aims to understand the underlying factors influencing trust in and acceptance of virtual humans. This understanding can help developers and marketers optimise these agents' deployment and assist policymakers in regulating their use effectively.
This research is timely: with the growing prominence of digital humans across different sectors, understanding the dynamics of their interactions becomes essential. It also provides insight into societal readiness for widespread AI integration, identifies potential barriers to acceptance, and proposes strategies to increase engagement while addressing ethical concerns. The study's outcomes offer a guide for developers, regulators, and businesses to make informed decisions that align with people's expectations and ethical standards.

2. Literature Review

2.1. The Public Perception and Acceptance of Artificial Intelligence

There is a general view that artificial intelligence (AI) is a technology created to improve human existence and assist individuals in specific situations (Gansser and Reich 2021). AI is used in numerous beneficial applications, such as illness diagnosis and disaster avoidance, which is one of the primary goals of the fourth industrial revolution (Darko et al. 2020). Researchers have claimed that AI will improve productivity, open new options, reduce mistakes, take on the burden of the most complicated issues, and relieve people of monotonous chores (Calderon 2019; Yeasmin 2019; Gibbons 2021; Ransbotham et al. 2021). Despite these benefits, there are many concerns about the technology from ethical, social, and financial perspectives, and the economic impact is considered one of the biggest risks of AI (Neudert et al. 2020; Kaya et al. 2024). Studies mention that AI could threaten human employment, with reports claiming that the technology could put nearly 47% of employees at risk of losing their jobs; others argue that robots could lower costs in ways that lead to an annual loss of 360,000 to 670,000 jobs in the US alone (Frey and Osborne 2017; Restrepo and Acemoglu 2017). Research emphasises that overall losses would be far higher if developments proceed at their projected rates. Studies also show that the gains from these developments could accrue to a small group, increasing inequality (Ouchchy et al. 2020; Chu et al. 2022; Jeong and Jihwan 2022). In addition, AI poses many security challenges. High-profile AI systems have been considered unlawful, biased, and discriminatory where they infringe upon human rights, which can lead to ethical problems and increased social anxiety (OECD 2019; Gillespie et al. 2021).
These challenges require organisations to understand the impact that adopting AI applications could have on their clients' technology use and behaviour. This understanding should inform every stage of adoption, increasing user confidence and resolving most problems early.
AI has been employed in several forms, including AI-driven or virtual influencers. People's faith in technology is essential to its upkeep and acceptance; if systems are untrustworthy, it may be difficult for them to be widely adopted (Gillespie et al. 2021). In a study by Gerlich (2023b), 357 participants were surveyed on trust, credibility, expertise, and their comfort with and attraction to virtual influencers. Most participants felt that virtual influencers were more trustworthy and relevant to customer preferences, which increased purchase intention in the social media marketing context studied. The study found broadly positive attitudes towards AI-driven influencers, who were often considered devoid of self-interest. By contrast, a multi-country survey noted that 37% of people do not trust or rely on information provided by an AI system or share information with one (Gillespie et al. 2021).
In another study, respondents gave an average score of 3.98 on a six-point Likert scale when assessing AI as the future of mankind (Gerlich 2023a). This score indicates that many people have embraced AI, which is consistent with other studies showing organisational acceptance (Fountaine et al. 2021). The study also highlights the complex relationship between societal attitudes and technological advancement, and observes that cultural conditions shape attitudes towards technology adoption. The capability of existing models to explain societal attitudes towards AI has been questioned. Research suggests that societal trust in AI significantly shapes public opinion, with AI often seen as reliable when aiding decision-making, especially on global issues. This perception partly reflects a perceived lack of trust in governmental institutions, which are often considered ineffective or to have vested interests (Gerlich 2023a). Perceived usefulness, performance expectancy, trust, outcome expectations, and attitudes all had statistically significant positive influences on predicted behavioural intentions, willingness, and the use of AI across different industries, according to a study that examined the factors leading to AI acceptance using the Technology Acceptance Model (TAM) (Kelly et al. 2023). That said, in some cultural contexts there is a need for human contact that AI cannot replicate.
Another study analysed AI-related statements and mapped these evaluations within a criticality map, displaying the mean predicted likelihood of each statement's occurrence against the mean rating of 38 themes relevant to the topic (Brauner et al. 2023). The map has four regions: the top left quadrant contains statements rated positive but improbable; the top right, positive and probable; the bottom right, negative and probable; and the bottom left, negative and improbable. Statements on the diagonal represent correspondence between perceived probability and personal evaluation, while off-diagonal statements expose discrepancies between anticipated outcomes and assessments, such as the likelihood of AI being susceptible to hacking or the improbability of it generating cultural resources (Brauner et al. 2023). Inconsistencies were found in claims deemed likely but unfavourable, such as the possibility that a privileged few could influence AI, that AI could reduce interpersonal communication, that it could result in more job losses than job creation, and that it is vulnerable to hacking. Additionally, associations were found between the average predicted likelihood and distrust of AI, affinity for interactions with technology, trust in AI, and interpersonal trust. These results show that while the favourable but improbable components highlight gaps in AI research and application, the portions viewed as unfavourable are likely to spur efforts to address these issues (Brauner et al. 2023). This illustrates the variation in people's perceptions of AI and the specific factors that can influence them.

2.2. Virtual Influencers, Social Media Engagement, and Public Perception

Text-to-speech (TTS) voices are becoming ubiquitous, largely due to increasing digitalisation worldwide. The influence of TTS voices on learning has been examined over the years, with researchers claiming that TTS voices are less effective at facilitating learning than recorded human voices (Schroeder et al. 2021), a phenomenon known as the voice effect. With the recent adoption of virtual humans, however, TTS voices might be as effective as human voices (Craig and Schroeder 2019; Chiou et al. 2020). Given continuing improvements in the field and the development of virtual humans (VHs), it is critical to understand how these technologies affect different sectors and people's attitudes. The findings suggest that VHs with low-quality voices produced a significant difference in trust compared with those using high-quality voices. Trust was also influenced by the perception of the virtual human within the TTS condition, but there was little influence on learning outcomes under any circumstances. While different voices evidently influence trust, the situation or environment also plays a role in shaping it (Schroeder et al. 2021).
Another study employed communicative agency framing and anthropomorphic design cues to examine the perception of digital agents and how they affect organisational outcomes and decision-making (Araujo 2018). The results showed that adopting human-like words or names for chatbots is more influential and creates heightened perceptions of the agent, while having no impact on overall social presence. Anthropomorphism, which the study treats as both mindless and mindful, describes the attribution of human characteristics to non-human entities such as machines. In some cases, people prefer to see fewer human characteristics such as intelligence: in one study, use of a chatbot declined as soon as it was described as intelligent (Araujo 2018). Social presence is also found to be higher for agents with many human characteristics, such as a human manner of interacting, conversational style, and intelligence. Using these characteristics is critical, as they help strengthen a person's relationship or bond with the organisation, fostering long-term partnerships and creating trust in the brand (Araujo 2018).
Virtual influencers are AI-generated, code-based systems that replicate or imitate human influencers; they have been increasingly adopted in recent years as advances in technology let them look and act more like real people. Like human influencers, virtual influencers post content on their social media pages, which may centre on specific topics or brands, and engage with customers, allowing them to gain popularity and spread the information they have been programmed to share (Brown and Hayes 2008). Digital influencers have been used widely to promote brands and are considered a new trend due to their increasing appeal, perceived reliability, and predictability (Moustakas et al. 2020). Unlike human influencers, who can be irrational and biased towards certain products or audiences, virtual influencers are built on algorithms, reducing the chance of bias; their decision-making is data-driven, allowing them to be adjusted to the brand's requirements. This approach has been shown to give the brand greater control over content, offer better value for money, and be highly cost-effective (Thomas and Fowler 2021).
Another benefit of virtual influencers is their lack of physical limitations: they can work nonstop or be programmed to run at any time of day, unlike their human counterparts, who need rest. They also do not age or die, so they persist indefinitely in the digital space (Matthews 2021). This means they can pursue most of their interests without real-life expenses, making them more cost-effective in the long run and less susceptible to controversy (Gerlich 2023b). The literature shows that social media may influence consumers' purchase intentions, although a survey in Singapore found that 60% of respondents do not think virtual influencers influence them, while human influencers significantly impact their purchase decisions (Gerlich 2023b).
Gerlich (2023b) explored the impact of virtual influencers, surveying 357 participants on trust, expertise, credibility, and their contribution to purchase intentions. The findings differ from those observed in Singapore, where human influencers were preferred. Factors such as reliability, relevance, trust, and expertise are critical and contribute to increased acceptance, and they also influence consumers' opinions, considerations, and likelihood of purchasing, with trust and expertise the most critical (Gerlich 2023b). A similar study surveying 246 management students found that 90% of respondents used Facebook and other social media, with 60% spending at least 1 to 3 h on Facebook daily. According to Awdziej et al. (2020), whether individuals are aware that the persona they are interacting with is virtual or real makes no difference to its believability. What counts, the study found, is the photo's presentation, the model's overall attractiveness, and the capacity of the virtual effects to foster confidence and convey modelling knowledge. The use of virtual influencers in the fashion business has grown, and even when these influencers are not acknowledged or recognised as virtual characters, their credibility is frequently unaffected (Awdziej et al. 2020). Even though virtual influencers cannot sense things or experience emotional and sensory information, their use still enjoys considerable acceptance (De Cicco et al. 2024). Another study investigating the impact of virtual influencers on consumer behaviour found that exposure to them increases the likelihood of customers purchasing a product (Shi 2023). Studies also note that VIs displaying higher levels of emotional attachment and social presence, and addressing stronger benefit-seeking behaviour, are more successful in influencing behaviour (Yan et al. 2024).
Overall, the influence of virtual influencers in marketing and in shaping consumers' perception and acceptance of products cannot be ignored, and it is increasing over time.

2.3. Ethical Implications and the Misuse of Virtual Humans

The rise of virtual assistants has led digital humans to develop a sense of intimacy with people. Unlike their predecessors, these virtual humans are said to have social skills that increase connection and rapport during a conversation. It is claimed that virtual humans can provide a safe environment in which people are willing to share more honest disclosures, including important information (Lucas et al. 2014). This capability has been used in healthcare, where participants interacted with virtual human interviewers and some were informed that the virtual humans were controlled by human operators. Those who believed they were interacting with a computer reported lower fear of self-disclosure and lower impression management, and displayed their sadness more intensely (Lucas et al. 2014).
However, what could improve the quality of care has also been increasingly misused, with virtual humans often used to scam people out of money. Recently, AI and virtual human technology have increasingly been used to replicate the voice of a loved one, a practice known as voice cloning. There have been many such scams; some have succeeded and others have not (Bethea 2024).
Deepfakes are also proliferating across the internet, with even politicians and heads of government falling prey to the misinformation campaigns they generate. Three countries, namely Myanmar, Laos, and Cambodia, are reported to have become centres of online scams run by Chinese-originating criminal groups using forced labour, operations involving some 300,000 people (Walker 2024). The scammers are reported to have stolen around USD 63.9 billion worldwide, with half a million people working as scammers using advanced AI technology (Walker 2024). EU reports claim that AI-based dating and social media fraud is on the rise, aided by developments in AI and virtual humans, showcasing the growing misuse of this technology (O'Carroll 2024). Likewise, advances in virtual assistants and machine learning have given computer systems greater autonomy to mine large amounts of data, and their increased adoption is said to enable the creation of digital immortality: the growth of virtual humans and concepts like brain simulation and personality capture, aligned with computationally inspired life, which challenges existing social norms and raises significant ethical questions (Savin-Baden and Burden 2019).
The development of technologies such as Large Language Models (LLMs), which are widely used in VHs, has not only improved our understanding of natural language and experience but has also raised ethical concerns that need to be addressed (Piñeiro-Martín et al. 2023). The main concerns are privacy and data security: since VHs mostly rely on data to improve their effectiveness, the system could access sensitive information, such as facial expressions, that scammers could exploit, leading to privacy breaches. LLMs face another major hurdle in the bias and discrimination already inherent in the data used to train them, which the models can magnify, producing more discriminatory consequences that could further divide groups of people (Parsons 2021; Piñeiro-Martín et al. 2023; Ayinla et al. 2024). To ensure diversity and justice, comprehensive testing involving all stakeholders is needed to understand a system's limitations and build it with minimal biases or issues.
Another issue to be tackled is the lack of transparency in the code that generates answers and sustains interactions. Most of these systems are opaque, which raises ethical issues regarding the use of data and the generation of responses. Developing an intelligible account of the mechanisms underlying these judgements is challenging but, as highlighted above, critical to ensuring user confidence and ethical use (Parsons 2021; Ayinla et al. 2024). The current inclination to anthropomorphise virtual assistants, employing human-like voices and imitating human emotions, can blur the distinction between humans and machines; this is especially problematic for susceptible demographics, such as the elderly, who may have difficulty distinguishing virtual aides from real individuals (Parsons 2021; Piñeiro-Martín et al. 2023; Ayinla et al. 2024). These factors influence people's trust and could harm the widespread adoption of this technology.

2.4. Virtual Humans in Business and Work Settings

In the current business set-up, software algorithms allocate, optimise, and evaluate the work of different groups of workers across sectors, in effect taking on the role of supervisors. These algorithms adopt managerial functions, allowing companies to optimise operations and oversee many workers at scale; still, the impact on human workers and their work practices is often overlooked (Lee et al. 2015). Over the years, many studies of intelligent machines in the workplace have found that success depends on positive human interactions with those machines. Establishing trust and cooperation is therefore highly critical, in addition to developing accurate mental models and providing greater transparency and explanation (Parise et al. 1999; Lee et al. 2015). Research notes that intelligent machines such as algorithms must incorporate different social and organisational contexts, involving multiple stakeholders and new roles, into their workflow (Lee et al. 2015).
The peer effect has been considered across multiple domains, such as health choices and financial decision-making. Peers are known to affect the workplace, and a study tested the impact of VH peers on an organisation's overall performance (Gürerk et al. 2019). The study was undertaken in a virtual environment, and participants reported that the experience felt natural, similar to how they had always felt. They recognised the presence of virtual peers in two settings, and the findings showed a difference in productivity: when highly competitive peers were present, people's performance was higher than when low-producing peers were present. This shows that virtual humans can influence the workplace and people's performance (Gürerk et al. 2019). Virtual humans also help in areas like leadership training, where simulation allows trainees to navigate complex interpersonal dynamics and improve their negotiation and social skills while managing emotions and mistakes that could otherwise have significant consequences (Hill 2014). Traditional live role-playing is effective, but it is often costly, time-constrained, and inconsistent, whereas virtual human role-players can provide immersive, interactive experiences with constructive feedback that improves decision-making and communication skills. The US Army has used these to train personnel, and virtual humans can mimic real people and respond to non-verbal cues, making the training much more realistic (Hill 2014).

3. Materials and Methods

This study aimed to investigate how the perception of virtual humans has evolved and to examine people's behavioural changes before and after the disclosure of their virtual nature. The research employed a mixed-methods approach, combining quantitative and qualitative analyses. The quantitative component utilised standardised survey instruments to measure changes in perceptions and attitudes, allowing for objective, replicable data collection that could be generalised to a broader population. The impact of virtual human disclosures was examined across three contexts: social media, business/professional environments, and general public settings. Participants spanned three age groups (18–25 years, 26–40 years, and 40–60 years), with a distribution of 31%, 35%, and 24%, respectively. Participants younger than 18 and older than 60 were excluded to maintain consistency in the sample. Despite the inclusion of different age groups, the analysis did not reveal any significant differences in the perception and acceptance of virtual humans based on age. This suggests that age-related factors influencing AI acceptance may not be as pronounced within the studied age range, or that the differences were too subtle to detect with the given sample size. It is possible that the relatively tech-savvy nature of the younger participants, as well as the professional experience of the older group, contributed to a more uniform response across the board. However, the lack of significant age-related differences contrasts with some past research, which has suggested that younger individuals are generally more accepting of AI and digital innovations. This discrepancy may be due to the specific contexts in which the virtual humans were used in this study, which might have resonated similarly across age groups.

3.1. Experiment Design

The social media experiment focused on assessing the impact of disclosing the virtual nature of a communication counterpart on participants’ perceptions. Participants interacted with video content featuring a virtual human (VH) presented as a social media influencer. Engagement metrics, such as liking, sharing, and commenting, were recorded without the participants’ knowledge that the influencer was virtual. Following this interaction, participants completed a survey evaluating their initial impressions, including the VH’s overall appeal, trustworthiness, and engagement. In the disclosure phase, participants were informed that the influencer was a virtual human, and detailed information about the underlying technology and its typical uses in marketing was provided. Subsequently, a follow-up survey assessed changes in perceived authenticity, trust, and willingness to engage with the VH in the future. The study employed a within-subject before-and-after design, where participants’ perceptions were measured immediately before and after the disclosure of the virtual human’s nature. This design inherently controls for individual differences, as each participant serves as their own control. The lack of a separate control group is justified, given the study’s focus on immediate perceptual changes due to this disclosure, with no significant time gap between the two measurement points. This approach minimises the influence of external variables and provides a direct assessment of the impact of the disclosure.
The business/professional experiment evaluated how people interacted with virtual agents and how disclosing the use of virtual humans affected perceptions in a work context. Participants first simulated business interactions with an agent they believed to be human; these interactions resembled common business situations such as client meetings and customer support. A survey then assessed the agent's professionalism and efficiency and the respondent's satisfaction with the contact. As in the previous experiment, participants were subsequently told that the agent was a VH and given comprehensive details on the capabilities and limits of virtual agents in a work environment, after which the same survey was used to gauge how their views had changed. The third experiment addressed general public perception, focusing on societal trust and ethical concerns regarding the use of VHs in everyday scenarios. Participants encountered virtual humans in a simulated scenario from personal life, communicating with a bank's customer care service while unaware that they were interacting with a VH. A survey collected their perceptions under these conditions, focusing mostly on the performance and trustworthiness of virtual humans. Afterwards, participants were informed about the VHs they had engaged with and given additional information about the underlying technologies, including their challenges and benefits. This briefing covered topics such as the potential misuse of related technologies, for example deep-fake video creation and its use in scams, and touched on the ethical considerations and current difficulties in adoption while emphasising the technology's overall benefits.
Once this information had been provided and participants were ready, they were asked to complete another survey capturing how their views had changed.
To create the interactions, realistic virtual humans were needed. For this we used Microsoft's VASA-1 (Figure 1), which combines AI, machine learning, computer graphics, and natural language processing to generate a realistic human with behavioural adaptations and efficient text-to-speech synthesis (Xu et al. 2024). Multiple agents, or VHs, were developed and programmed to operate in the different environments required by the experimental scenarios. Microsoft provided the VHs according to our testing specifications.

3.2. Data Collection

This study employed a rigorous sampling strategy and a detailed data collection process to ensure the reliability and validity of its findings. Between December 2023 and February 2024, 371 participants from the United Kingdom were recruited across three distinct experiments, each conducted in a different context: social media (Experiment 1), a business/professional environment (Experiment 2), and a general public setting (Experiment 3). The number of participants varied slightly across these experiments due to the voluntary nature of participation and minor attrition: Experiment 1 included 130 participants, Experiment 2 included 125, and Experiment 3 included 118. These variations resulted in corresponding differences in the number of observations for each experiment, which the analysis accounts for by focusing on within-subject comparisons using a before-and-after design. In each experiment, 10 participants were randomly selected for follow-up interviews to gain deeper insights into the changes in their perceptions. The sample was gender-balanced, with 190 female and 181 male participants ranging in age from 18 to 65 years, and the analysis indicated no significant demographic effects on the outcomes. Participants were recruited using a snowball sampling method, beginning within the Cambridge college community.
Once the interactions were completed in each phase, a survey focused on pre-disclosure reactions was distributed. The survey was highly structured and used a six-point Likert scale to gauge participants' positive or negative perceptions of the technology specific to the component of the study they were in. It covered multiple areas of engagement, such as appeal, efficiency, professionalism, credibility, and satisfaction. Educational materials were then provided to help participants understand the technology and its potential benefits and risks, after which a second survey was distributed to the same group to collect responses in the post-disclosure phase. In each component, two open-ended questions captured the change in participants' opinions before and after the disclosure.

3.3. Data Analysis

The questions from the three experiments' surveys were analysed and grouped into four factors or themes: trust, performance, usage likelihood, and overall acceptance. The score for each factor was calculated as the average of the scores for the questions in its group. Table 1 lists the questions (see also Appendix B) grouped under each factor; these were further divided by experiment and by whether they were answered before or after the disclosure (TBEX1, TAE1 and so on), so that separate analyses could be carried out on the data from each experiment as well as on the overall scores.
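The factor scoring described above (averaging each group's Likert items) can be sketched in a few lines of Python. This is purely illustrative rather than the study's actual code: the item names below are hypothetical stand-ins, and the real question wording appears in Table 1 and Appendix B.

```python
from statistics import mean

# Hypothetical item-to-factor mapping; the real items are listed in Table 1 / Appendix B.
factor_items = {
    "trust": ["credibility", "honesty"],
    "performance": ["efficiency", "professionalism"],
    "usage_likelihood": ["future_use", "recommend"],
}

def factor_scores(answers):
    """Average the six-point Likert items belonging to each factor."""
    return {factor: mean(answers[item] for item in items)
            for factor, items in factor_items.items()}

one_participant = {"credibility": 2, "honesty": 3, "efficiency": 1,
                   "professionalism": 2, "future_use": 4, "recommend": 3}
print(factor_scores(one_participant))
```

Each participant then contributes one pre-disclosure and one post-disclosure score per factor, which is the pairing the subsequent statistical tests operate on.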
The collected data were subjected to paired t-tests to assess the differences between the pre- and post-disclosure scores for each survey question, enabling an analysis of shifts in perception. The t-test statistics determined whether the means of the two groups (pre- and post-disclosure) differed significantly. A p-value of less than 0.05 was considered statistically significant, indicating that the observed differences were unlikely to be due to random chance, while a large t-statistic indicates a stronger effect of the disclosure. This enabled comparisons both within each experiment and across the three experiments, showing in which context the pre- to post-disclosure shift was greater and in which there was less variation. In addition to the paired t-test values and p-values, Cohen's d was computed to determine the effect size of the difference between pre- and post-disclosure measures. This approach allows for a clearer interpretation of the magnitude of the change in participants' trust, performance, usage likelihood, and overall acceptance of VHs. This research also used a regression analysis, descriptive statistics, and Q-Q plots to better understand the differences in participants' perceptions before and after the disclosure.
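The paired t-statistic and the paired-samples Cohen's d can both be computed directly from the per-participant difference scores. The sketch below is a generic illustration with toy data, not the study's actual analysis code; it assumes the common definition of Cohen's d for paired data (mean difference divided by the standard deviation of the differences), and in practice the p-value would be read from the t distribution with n − 1 degrees of freedom (e.g. via scipy.stats.ttest_rel).

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_and_cohens_d(pre, post):
    """Paired t-statistic and Cohen's d for within-subject before/after scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    m, s = mean(diffs), stdev(diffs)
    t = m / (s / sqrt(n))  # compare against the t distribution with n - 1 df
    d = m / s              # effect size on the difference scores
    return t, d

# Toy scores on a 6-point scale (not data from the study)
pre  = [1.0, 1.5, 1.0, 2.0, 1.5]
post = [2.5, 3.0, 2.0, 3.5, 3.0]
t, d = paired_t_and_cohens_d(pre, post)
print(round(t, 2), round(d, 2))
```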
Responses to open-ended questions and data from the 30 interviews were analysed using a thematic analysis to explore participants’ opinions and attitudes toward virtual humans after their disclosure. This qualitative analysis aimed to identify and understand the underlying reasons for the shifts in perception, with recurring themes highlighted and presented in the results.

3.4. Ethical Consideration

All participants were informed that the study might involve an initial deception, and they were assured that they would be fully debriefed after the interaction. During the debriefing, the purpose of the deception was explained, and participants were given complete information about the study's objectives and procedures; their rights and any emotional or ethical concerns were also addressed. Participants were not coerced into taking part: they volunteered for the study, were informed about its requirements and potential benefits, and were given the option to withdraw if they felt uncomfortable. All participants provided written informed consent.

4. Results and Discussion

It is important to note that the interpretation of the trust scores varies depending on the experimental context. In Experiment 1 (social media), a higher score indicates a higher level of trust. However, in Experiment 3 (general public perception), a higher score reflects lower trust. This distinction arises due to the use of different scales for the trust measurement in each context. In the following sections, the direction of the trust scores will be clearly indicated to avoid confusion.
A comprehensive list of the abbreviations used in this chapter can be found in Appendix A.
The study involved 130 participants in Experiment 1, 125 in Experiment 2, and 118 in Experiment 3. The analysis began with an examination of descriptive statistics, providing a foundation for comparing the shifts in the key variables across the three experimental contexts. We then categorised the data and their corresponding answers based on four criteria. After that, we calculated the overall response shift across the studies based on three factors.
Table 2 presents descriptive statistics for trust, performance, usage likelihood, and overall acceptance across all three experiments. The number of observations (n) in each experiment is as follows: Experiment 1 (n = 130), Experiment 2 (n = 125), and Experiment 3 (n = 118). Table 2 highlights significant variations in trust before and after the disclosure of the nature of the virtual human (VH) across three experimental contexts. In Experiment 1 (social media context), the mean trust score slightly decreased from 2.1487 (positive) to 1.9795, indicating a mild reduction in trust. In this experiment, higher scores indicate more trust. This shift suggests that although the overall attitude remained within a positive range, the revelation of the VH’s nature led to a slight drop in confidence.
In Experiment 2 (business/professional environment), an interesting trend is observed, where trust increased post-disclosure, from 1.1290 to 1.3044. This change suggests that transparency in professional settings might enhance trust, as participants might value the honesty about the VH’s nature. Despite this increase, trust remains in the extremely positive range, indicating a generally favourable perception.
Conversely, Experiment 3 (general public perception) reveals a substantial decline in trust from 1.0684 to 2.7179 after the disclosure, showcasing a shift from an extremely positive to a moderately positive perception. In this context, a higher score corresponds to lower trust. Therefore, this increase reflects a significant decline in trust after the disclosure. This sharp decline indicates that the general public is more sceptical and less accepting of VHs once their non-human nature is disclosed, leading to a significant drop in trust.
Regarding performance, Experiment 1 shows a slight increase from 1.3538 to 1.4712 after the disclosure, remaining within the extremely positive range. This increase suggests that participants in the social media context may appreciate the technical capabilities of VHs, even after learning about their artificial nature. In Experiment 2, the performance perception increased marginally, from 1.8360 to 1.9839, indicating that participants still perceived the VH's performance positively, though less strongly than before. Experiment 3 exhibited a decline in performance perception, from 1.0983 to 1.6937, yet it remained within the extremely positive range, suggesting that, despite the drop, performance is still viewed favourably.
Usage likelihood presents a more pronounced shift. In Experiment 1, the likelihood decreased significantly, from 2.5846 to 1.9077, showing a move from a positive perception closer to a slightly positive outlook post-disclosure. Similarly, in Experiment 2, the usage likelihood shifted from 1.4194 (extremely positive) to 1.8871, indicating a reduction but still remaining within a generally positive range. However, in Experiment 3, there was a dramatic increase from 1.0812 to 4.6496, shifting from an extremely positive to a nearly negative perception. This indicates a significant drop in people’s willingness to engage with VHs after their nature is revealed, reflecting a substantial reduction in their acceptance in the general public context.
Finally, overall acceptance reflects these trends across all three experiments. In Experiment 1, VHs’ acceptance slightly decreased from 2.1526 to 1.9833, remaining within the positive range. Experiment 2 shows a minor decline from 1.7961 to 2.0737, indicating that their overall acceptance remains positive but is slightly less enthusiastic post-disclosure. Experiment 3 shows a more significant drop from 1.1154 to 2.3533, moving towards a more neutral outlook, suggesting a considerable reduction in the overall acceptance of VHs in the general public context.
These results indicate that while some participants maintain a positive perception of VHs after their disclosure, particularly in professional contexts, there is a notable trend towards reduced trust, performance perception, usage likelihood, and overall acceptance, especially within general public settings. The findings underscore the importance of context in shaping attitudes towards VHs, with transparency playing a critical role in influencing trust and acceptance.
The paired t-test of the Experiment 3 (Table 3) results indicates statistically significant shifts in all variables post-disclosure, with the largest mean difference observed in the usage likelihood (−1.10, p < 0.001), suggesting a substantial decrease in participants’ willingness to engage with virtual humans after learning of their artificial nature. While the study examined perceptions of virtual humans across three distinct contexts—social media, business environments, and general public interactions—the results from Experiment 3 (general public perception) are presented here as representative of broader societal trends. The general public context is particularly critical in understanding societal acceptance and the ethical concerns surrounding virtual humans, as it provides the most comprehensive and generalisable insights. Furthermore, the patterns of trust, performance, and usage likelihood observed in Experiment 3 are consistent with the trends seen in the other two experiments, allowing for a focused presentation of the most significant and impactful findings without redundancy.
The paired t-test for the trust results before and after disclosure shows a mean difference of −0.5978813, indicating that trust decreased on average by this amount after the disclosure. The t-statistic of −11.09816 reflects the size of this difference relative to the variability in the data. However, rather than relying solely on the t-statistic, Cohen’s d was calculated to provide a standardised measure of the effect size. For Experiment 3, Cohen’s d was found to be 1.65, indicating a large effect and a substantial decline in trust following the disclosure. This significant shift suggests that the disclosure of the virtual human’s nature had a strong impact on trust. The p-value of 0.00 is below the commonly accepted threshold of 0.05, indicating that the change in trust before and after the disclosure is statistically significant and unlikely to be due to chance. This statistical significance, combined with the large effect size (Cohen’s d = 1.65), further emphasises the substantial impact of the disclosure on public trust. Across all four factors, the pre-disclosure values are consistently lower than the post-disclosure values, reflecting a shift in attitudes after the virtual human’s nature was revealed. Performance before and after disclosure had a mean difference of −0.2931356, with a t-statistic of −13.8722. The corresponding Cohen’s d value is 0.59, indicating a medium effect size. Usage likelihood had the largest mean difference of −1.100932, with a t-statistic of −11.7349. Cohen’s d was calculated to be 3.57, indicating a very large effect size and a significant reduction in participants’ willingness to use virtual humans post-disclosure. The data reveal that all pre- and post-disclosure comparisons are statistically significant, with pre-disclosure values consistently lower across the board. This shift indicates a move towards more negative perceptions of virtual humans following their disclosure, especially in terms of trust and usage likelihood. 
The mean difference in overall acceptance was −0.5594, with a t-statistic of −10.1593. The corresponding Cohen’s d value is 1.23, indicating a large effect size and a notable reduction in overall acceptance following the disclosure. The Q-Q plots for trust and usage acceptance (Figure 2 and Figure 3) reveal deviations from normality, particularly in post-disclosure scores. However, the Cohen’s d values, which were found to be large in several instances (e.g., trust, d = 1.65; usage likelihood, d = 3.57), provide a robust indication of the effect sizes that are significant, even in the presence of non-normal distributions. This non-normal distribution may have influenced the paired t-test results, with more pronounced deviations observed in the post-disclosure data. The curve of the points suggests differing distributions between pre- and post-disclosure, with pre-disclosure values being less spread out and more tightly clustered.
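A Q-Q plot pairs each ordered observation with the corresponding quantile of a fitted normal distribution; points that bend away from the straight line, as in Figures 2 and 3, indicate non-normality. A minimal sketch with hypothetical data, not the study's plotting code:

```python
from statistics import NormalDist, mean, stdev

def qq_points(sample):
    """(theoretical normal quantile, observed value) pairs for a Q-Q plot.
    Points far from a straight line suggest departure from normality."""
    xs = sorted(sample)
    n = len(xs)
    nd = NormalDist(mean(xs), stdev(xs))
    # plotting positions (i + 0.5) / n, a common convention
    theo = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]
    return list(zip(theo, xs))

post_trust = [1.0, 1.5, 2.0, 2.5, 3.0, 5.5, 6.0]  # toy data with a heavy right tail
for t_q, obs in qq_points(post_trust):
    print(round(t_q, 2), obs)
```

In the heavy-tailed toy sample, the largest observations sit well above their theoretical quantiles, which is the same pattern of upper-tail deviation described for the post-disclosure scores.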
This study then examined each of the experiments or components individually to determine which had the greatest impact. We started by examining how trust changes with the experiments or contextual situations (Table 4).
Trust declined most in the general public experiment (mean trust before: 1.07, mean trust after: 2.71, p < 0.001), indicating that participants were most sceptical about virtual humans in everyday scenarios. The first experiment is a unique setting, as it is the only one in which trust shifted in the positive direction, which the t-statistic confirms. Prior to disclosure, the trust score was higher; after disclosure, it decreased, i.e., moved in the positive direction within the social media context. This suggests increased trust in virtual humans (VHs) on social media, consistent with findings in the existing literature. The t-statistics for the other experiments were negative: after disclosure, the perception scores increased, shifting towards the negative side. In Experiment 2, the t-statistic was −5.5016, while in Experiment 3 it was −17.8707, showing an increasingly negative perception of trust in these two scenarios. For the first experiment's trust results, we used a Q-Q plot (Figure 4) and observed that the lower values of the variables are quite similar, while there is a significant difference between the higher values.
The use of descriptive statistics allows for a better understanding of the variables, particularly their ranges. In Experiment 1, there was a high level of variation among the variables: the level of trust was low even before the nature of the virtual human was known, and following the revelation the trust values decreased further, with the highest value being 4.667 across all factors. In the second experiment (Table 5), the difference between the maximum values was only 0.5, with the maximum increasing from 1.75 to 2.25. In the third experiment, however, there was a significant drop in trust, as the maximum value changed from 2 to 4. The literature suggests that social media is an area in which change is anticipated and a platform that people trust more, whereas in areas such as business and everyday life, perceptions turn more strongly negative once the virtual nature is revealed. While the mean value alone does not give a complete picture, descriptive statistics can elucidate the range of this perception.
After analysing the performance of VHs in the three experiments, a decrease in performance perception was observed across the experiments after the reveal. Notably, the third experiment (Table 6) exhibited a large negative t-statistic (14.3827 in magnitude), indicating a significant difference in the means relative to the overall variability in the data. Experiment 2 also shows a significantly high difference. In Experiment 1, by contrast, the values are lower than in the other two, indicating a smaller negative shift among respondents after the VH was revealed. The largest shift was observed in the general public perception experiment, followed by the business environment experiment. For Experiment 1 (social media environment), the negative perception could reflect increased expectations of the quality or performance of VHs.
In Figure 5, the Q-Q plot indicates that the lower values are similar, but the higher post-revelation values deviate more than the pre-revelation values. This is shown by the points below the line, which diverge significantly as the value increases.
The descriptive statistics in Table 7 reveal interesting findings, particularly in relation to the range, minimum, and maximum values. In the first experiment, there was a shift in perception, with maximum values of 2.25 before and 3 after, indicating a shift of 0.75 towards the negative. On the other hand, in the second experiment, the shift was even smaller, with a change of just 0.33 between the minimum and maximum acceptance levels. The third experiment showed a significant perception shift, with values changing from 1.67 before to 3 after the revelations, indicating a reduced perception of performance in general public perception tests.
Focusing on usage likelihood, on social media the effect among participants became more positive, with the mean dropping from 2.5846 to 1.9076 and a t-statistic of 3.4775, indicating a shift towards a more positive perception. In the business and general public contexts, the situation was different. In the business context of Experiment 2, the shift was comparatively smaller but moved towards the negative. In the general public context of Experiment 3, the post-disclosure mean was 4.65, indicating a markedly negative perception and a lower chance of future usage, with a t-statistic of −19.2796 revealing a significant pre-post difference. This suggests an increase in the likelihood of VH usage on social media, but a decrease in the likelihood of usage in the other two scenarios.
The paired t-test results (Table 8) for usage likelihood reveal significant differences before and after the VH reveal across all experiments. For social media (EX1), the usage likelihood score decreased from a mean of 2.58 to 1.91, with a statistically significant t-statistic of 3.48 and a p-value of 0.0007, indicating an increase in usage intent on this scale. In business contexts (EX2), the score increased from 1.42 to 1.89, supported by a t-statistic of −6.11 and a highly significant p-value (<0.001), reflecting a reduction in usage intent. For general usage (EX3), a substantial increase in the score was observed, from 1.08 to 4.65, with a t-statistic of −19.28 and a p-value also below 0.001, indicating a sharp drop in willingness to use VHs.
The Q-Q plot for Experiment 3 (Figure 6) indicates that the higher values deviated significantly from the line; these points could be considered outliers or signs of significant change. This suggests that after the disclosure in Experiment 3, people's views altered substantially and their likelihood of using VHs is lower.
When we look at the descriptive statistics of usage likelihood (Table 9), we notice that the minimum and maximum values for social media have not changed much, but the average value has. This indicates that there are still many people who have a negative perception of the use of virtual humans in social media, but there is a significantly larger population that might prefer it. On the other hand, in the business and general perception experiments, we observed a significant shift in the minimum and maximum values. This suggests that the introduction of virtual humans has a negative impact and alters people’s perception of the likelihood of their use.
The final factor is overall acceptance (Table 10). As with the previous factors, overall acceptance in the social media context improves after the re-evaluation. This contrasts with the other two experiments, where there is a significant decrease in acceptance after the revelation, suggesting that people are more cautious about using the technology in those situations.
In previous tests, we have observed that the higher values for overall acceptance in the general perception experiment, or Experiment 3, vary significantly from the values found before the revelation. This shows a shift towards a more negative attitude or perception (Figure 7).
The descriptive statistics (Table 11) align with the observations made for the other factors. The business and general perception experiments show a significant increase in the maximum values from before to after the reveal. In the social media context, while the maximum value increased, the values in the middle either dropped or shifted in the positive direction, which is why the mean is lower than before.
Upon conducting various analyses, it is evident that when the use of virtual humans is disclosed, people’s perception tends to shift towards the negative. This shift is particularly noticeable in terms of the likelihood of their usage and overall acceptance. The correlation analysis in Table 12 shows whether the factors impact each other and whether they have any statistically significant relationship.
This analysis shows that trust had a strong positive correlation with usage likelihood before the reveal, at 0.851 and 0.767. After the revelation, trust still had a very strong positive correlation with usage likelihood, while performance was not observed to have as strong a relationship with the other variables. We conducted a regression analysis using pre-disclosure data (Table 13), with overall acceptance as the dependent variable and the other three factors—trust (trb), performance (pb), and usage likelihood (ub)—as independent variables.
Here, all three factors (trust, performance, and usage likelihood) have p-values of <0.001, indicating statistical significance. However, these are unstandardised coefficients, so their magnitudes cannot be directly compared, as they are expressed in different units. Thus, while all three factors significantly influence overall acceptance, we cannot infer that they do so equally from their coefficients alone. Notably, trust showed a significant relationship only with overall acceptance: a regression with trust as the dependent variable yielded p-values above 0.05 for performance and usage likelihood. When we repeated the regression using the post-revelation values, overall acceptance was influenced by trust and usage likelihood, while performance did not have a significant impact (p = 0.099).
To explore the factors that influence trust specifically after the VH disclosure, we conducted a second regression analysis (Table 14) with trust as the dependent variable. This analysis investigates whether performance, usage likelihood, and overall acceptance contribute to changes in trust following the disclosure of a virtual human's nature. After the revelation, trust has a statistically significant relationship with usage likelihood and overall acceptance, while performance has no impact on trust. The regression analysis indicates that pre-disclosure overall acceptance was significantly influenced by trust (β = 0.327, p < 0.001) and usage likelihood (β = 0.391, p < 0.001), suggesting these factors are critical determinants of acceptance. Post-disclosure, trust remains a significant predictor (β = 0.59, p < 0.001), though the influence of performance diminishes, reflecting the erosion of trust after the disclosure. Overall, the findings show that trust is a critical factor in the adoption of virtual humans, and that general perceptions of the technology are mostly negative outside the social media context.
The thematic analysis revealed a dichotomy in participants’ responses: some viewed the technology as innovative and exciting, particularly in social media contexts, while others expressed significant concerns about deception and its ethical implications, especially in business and personal settings. The qualitative analysis covered the two open-ended questions and 30 interviews. The first question asked what led to participants’ change in perception of virtual humans following the revelation. Across the experiments, the identified themes could be grouped into positive and negative reactions. In Experiment 1, within the social media context, participants described the technology as exciting; some explained that the interaction had initially felt boring, but after the revelation it felt more futuristic, which improved their attitude. Not all reactions were positive, however: many felt betrayed, concerned, or worried about security, so some participants shifted positively in perception while others shifted negatively. In Experiment 2, some participants responded positively, noting that the VH is an improvement over existing chatbots. At the same time, concerns about the technology’s potential applications grew, and unfavourable opinions were far more prevalent. These concerns centred on how AI could exploit participants’ data or interactions; respondents also pointed out that businesses could influence people’s opinions by appearing human rather than acting honestly. Although the open comments and interviews in Experiment 3 continued to emphasise the technology’s positive aspects and its superiority over the current system, the negative responses, such as security concerns, a lack of human touch, and a sense of betrayal, proved more salient.
In addition, several respondents stated that they would not use VHs even if they could. This demonstrates the unfavourable shift in perspective following the disclosures, which was more substantial in Experiments 2 and 3. The shift was driven by participants’ anxieties and feelings about the technology, which only some continued to trust or consider beneficial.
The second open-ended question asked for opinions on using virtual humans in the various experimental contexts. Most people responded positively to VHs in social media, finding them impressive and exciting, although some were sceptical about how they could affect people and aid in spreading false information. In the second experiment, there were no entirely positive reactions; most were negative or mixed. Respondents and interviewees highlighted fears about the lack of transparency and the loss of human touch that adoption would create. Participants also mentioned that while they are pleased that chatbots are improving, the technology leads to a more impersonal approach from companies and raises questions about the quality of the service provided. In Experiment 3, most comments focused on ethical concerns and a preference for human interaction over virtual humans that do not understand them; in contrast to the social media context, where people were generally willing to accept virtual humans, acceptance here was far more guarded. These qualitative insights help explain the quantitative findings, where trust and overall acceptance decreased most sharply in the business and general contexts, likely owing to concerns over transparency and data security.
This study’s findings are consistent with previous research highlighting the positive impact of virtual influencers on consumer purchase intentions, driven by trust and emotional attachment (Jiménez-Castillo and Sánchez-Fernández 2019; Awdziej et al. 2020; Gerlich 2023b; De Cicco et al. 2024). The results of the social media experiment revealed that even after participants were informed about the use of virtual humans, their trust in the scenario increased, and their overall acceptance of the technology was higher than before the disclosure. The regression analysis found that trust is one of the main factors influencing the overall acceptance of the technology, for better or worse. Studies of business and work settings show that VHs can carry out some work processes and may help or hinder peer productivity (Hill 2014; Gürerk et al. 2018). Lee et al. (2015) noted that integration is only possible after a careful evaluation of acceptance, and this is a major finding of the present study: in a business context, people do not trust or accept the technology as much as they do on social media. There is a growing reluctance driven by fear of security breaches and by a decrease in personal interaction, which makes the service feel less friendly and of lower quality, as people feel unimportant to the company. In the general context, people were even more opposed to the adoption of virtual humans. They emphasised the importance of transparency regarding the use of the technology, as concealing the use of artificial intelligence was often seen as a betrayal of those involved in the experiment. Participants were concerned that the technology could be misused and create further problems, making it an unsuitable replacement for humans; the absence of personal interaction further strains relationships.
The survey participants believe that although improvements in areas such as chatbots are possible, these should be limited to incremental changes, and companies should not attempt to replace the current system without implementing transparent protocols and ethical guidelines.
Overall, this study shows that in contexts like social media and digital marketing, companies are finding increasing uses for VHs, and people accept them to a certain extent, but adoption cannot extend beyond people’s existing comfort level. Adoption in business may prove challenging, as people do not trust the technology even in general settings. Creating guidelines and security protocols for the use of VHs could increase public confidence; however, other elements, such as attitudes and the feeling of being valued, are harder to address, and marketing strategies need to explore new ways in which VHs can foster that feeling. Many factors must be tackled in the long run before widespread adoption, and this study helps identify the current perception of the technology and where improvements can be made.

5. Conclusions

This study provides a comprehensive examination of how virtual humans (VHs) are perceived across different contexts—namely, social media, business environments, and general public interactions. The findings underscore the complexity and variability of these perceptions, revealing significant differences in trust, performance, usage likelihood, and overall acceptance before and after the disclosure of the VH’s true nature.

5.1. Summary of Key Findings

In the social media context (Experiment 1), trust ratings improved slightly after disclosure, moving from an average of 2.1487 to 1.9795 (on a scale where lower values are more favourable). This suggests that social media users generally view VHs favourably, although a degree of scepticism remains once their artificial nature is revealed. Performance perception worsened marginally, from 1.3538 to 1.4712, indicating that the technical execution of VHs is appreciated slightly less after their nature is disclosed. Usage likelihood improved from 2.5846 to 1.9077, suggesting that learning of the VHs’ artificiality did not deter users; if anything, the novelty increased their willingness to engage. Overall acceptance followed a similar trend, improving from 2.1526 to 1.9833, which reflects a positive but still cautious attitude.
In the business/professional environment (Experiment 2), all four metrics shifted modestly in the negative direction after disclosure. Trust moved from 1.1290 to 1.3044, and performance ratings from 1.8360 to 1.9839, though both remained within the positive range, suggesting that the perceived quality of the interaction itself is only mildly affected by the revelation of artificiality in professional contexts. Usage likelihood declined from 1.4194 to 1.8871, and overall acceptance from 1.7961 to 2.0737. These small but consistent negative shifts indicate that business users remain broadly open to the integration of VHs, provided that their capabilities are clear and their use is transparent.
Conversely, the general public perception (Experiment 3) demonstrated the most significant decline in all metrics following the VH disclosure. Trust fell sharply, from 1.0684 to 2.7179, a drop from extremely positive to a more neutral or even slightly negative perception. This suggests that the general public is particularly sensitive to the artificiality of VHs and may react strongly once their nature is revealed. Performance also declined, though less dramatically, from 1.0983 to 1.6937: the technical performance of VHs was still viewed positively, but it was appreciated less after the disclosure. Usage likelihood experienced the most drastic change, shifting from an extremely positive 1.0812 to a highly negative 4.6496; this sharp swing highlights a severe reluctance to engage with VHs post-disclosure, likely driven by discomfort or mistrust. Similarly, overall acceptance fell from 1.1154 to 2.3533, a clear move from a very positive to a more cautious and critical stance. While the study reveals statistically significant shifts in trust, performance, usage likelihood, and overall acceptance following the disclosure of virtual humans, the effect sizes (Cohen’s d) highlight the practical significance of these changes, particularly in the public setting, where trust in virtual humans was most affected.
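The paired-samples Cohen’s d mentioned above (mean of the pre/post differences divided by their standard deviation) can be computed as in the sketch below. The scores and group size are hypothetical, shaped only loosely like Experiment 3’s trust shift, and do not come from the study’s data:

```python
import numpy as np

def cohens_d_paired(pre, post):
    """Paired-samples Cohen's d: mean of the differences over their SD."""
    diff = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return float(diff.mean() / diff.std(ddof=1))

# Hypothetical pre/post trust scores echoing Experiment 3's reported means
# (1 = very positive, 5 = very negative); the group size of 120 is arbitrary.
rng = np.random.default_rng(1)
pre = np.clip(rng.normal(1.07, 0.3, 120), 1, 5)
post = np.clip(rng.normal(2.72, 0.8, 120), 1, 5)

print(round(cohens_d_paired(pre, post), 2))  # a large d indicates a big practical shift
```

By the usual conventions, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large, which is how a statistically significant shift is judged to be practically meaningful.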

5.2. Interpretation and Broader Implications

The results of this study have significant implications for the deployment and acceptance of virtual humans across different sectors. The comparatively favourable reception in the social media context suggests that users intrigued by VHs can remain engaged even after their artificial nature is disclosed, provided user expectations and transparency are handled carefully. In business environments, the modest but consistent post-disclosure decline implies that transparency about the use of VHs is necessary but not sufficient; corporate communication strategies should emphasise honesty and clarity regarding the use of digital agents to limit the erosion of trust. The sharp decline observed in the general public context, however, raises concerns about the broader societal acceptance of VHs, particularly in settings where personal and ethical considerations are paramount.

5.3. Challenges and Ethical Considerations

This study also sheds light on the ethical challenges associated with the use of VHs. Participants expressed concerns about data privacy, the potential misuse of information, and the erosion of human relationships, particularly in professional settings where the reliance on digital agents might weaken the perceived value of human interactions. These concerns are compounded by the lack of established ethical norms and regulations governing the use of VHs, particularly in contexts where their use is not immediately apparent to the user. The fear of transitioning to a machine-dominated system that could diminish the importance of human agency is a significant barrier to the widespread adoption of VHs. Therefore, addressing these ethical concerns is crucial for fostering trust and ensuring that VHs are integrated in a way that complements, rather than replaces, human interaction.

5.4. Study Limitations

While this study provides valuable insights into the perception of VHs across different contexts, several limitations must be acknowledged. First, the sample, while adequate for the study’s scope, may not fully capture the diversity of opinions and attitudes in the broader population. The relatively small sample size could limit the generalisability of these findings, particularly given the vast range of cultural, social, and technological contexts in which VHs may be deployed. Additionally, the study was geographically concentrated, primarily involving participants from a specific region; cultural and regional factors often play a significant role in shaping perceptions of technology and may have influenced the results.
Another limitation concerns the artificial nature of the experimental settings. While the simulations were designed to mimic real-world interactions, they cannot fully replicate the complexity and nuances of actual social, business, or public environments. Participants may have responded differently in more naturalistic settings, where the stakes and social dynamics are more pronounced. Furthermore, the study relied on self-reported data, which, while useful, can be subject to biases such as social desirability or recall bias. These limitations suggest that while these findings are indicative, they should be interpreted with caution and complemented by further research.
Although the study included participants from three distinct age groups, no significant differences were found in their responses. This suggests a relatively uniform perception of virtual humans across the age spectrum studied. However, future research could explore whether more nuanced age-related differences emerge in larger or more diverse samples, particularly when comparing younger versus older adults in different cultural or professional contexts.
This study did not specifically analyse gender or cultural influences, which are known to affect AI perceptions. Future studies should include a gender-based analysis and consider cultural differences, particularly as AI technologies become more prevalent in global markets. Understanding these variables could lead to more tailored and effective AI interactions.
Although this study did not collect data on education levels, this factor could influence the acceptance of AI. Participants with higher levels of education might have a better understanding of AI technologies, potentially leading to greater trust and acceptance. Future research should consider including education as a variable to explore its impact on the perception of virtual humans.

5.5. Future Research Directions

The findings of this study open up several avenues for future research. Expanding the sample size and exploring cross-cultural differences in the perception of VHs would provide valuable insights into how cultural and regional factors influence acceptance and trust. Additionally, investigating the long-term effects of repeated interactions with VHs could help determine whether initial scepticism can be mitigated over time through familiarity and improved technological integration. Further research should also explore the development of ethical guidelines and regulatory frameworks that address the unique challenges posed by VHs, ensuring that their use is both transparent and aligned with societal values.

5.6. Practical Implications

For businesses and developers, this study’s findings underscore the importance of maintaining a human touch in interactions involving VHs. Ensuring that users are fully informed about the nature of their interactions and that ethical standards are upheld will be key to fostering long-term relationships and trust. Policymakers should consider the implications of these findings for the development of regulations that govern the use of VHs, particularly in contexts where the technology has the potential to impact social and professional relationships significantly.
While virtual humans present exciting opportunities for enhancing digital interactions, their successful integration into various contexts will depend on carefully navigating the challenges of trust, ethical considerations, and user acceptance. This study highlights the importance of context in shaping perceptions of VHs and suggests that transparency and ethical practices will be essential in ensuring that this technology is embraced by the broader public. Future research and practical applications should focus on addressing the concerns identified in this study to promote a more positive and widespread acceptance of virtual humans in both professional and everyday settings.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of SBS Swiss Business School (protocol code EC23/FR16, 7 August 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Supporting data can be requested from the author.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Table of Abbreviations

Abbreviation: Explanation
TBEX1: Trust Before Experiment 1
TAEX1: Trust After Experiment 1
PBEX1: Performance Before Experiment 1
PAEX1: Performance After Experiment 1
ULBEX1: Usage Likelihood Before Experiment 1
ULAEX1: Usage Likelihood After Experiment 1
OABEX1: Overall Acceptance Before Experiment 1
OAAEX1: Overall Acceptance After Experiment 1
TBEX2: Trust Before Experiment 2
TAEX2: Trust After Experiment 2
PBEX2: Performance Before Experiment 2
PAEX2: Performance After Experiment 2
ULBEX2: Usage Likelihood Before Experiment 2
ULAEX2: Usage Likelihood After Experiment 2
OABEX2: Overall Acceptance Before Experiment 2
OAAEX2: Overall Acceptance After Experiment 2
TBEX3: Trust Before Experiment 3
TAEX3: Trust After Experiment 3
PBEX3: Performance Before Experiment 3
PAEX3: Performance After Experiment 3
ULBEX3: Usage Likelihood Before Experiment 3
ULAEX3: Usage Likelihood After Experiment 3
OABEX3: Overall Acceptance Before Experiment 3
OAAEX3: Overall Acceptance After Experiment 3
tbd: Trust Before Disclosure
pb: Performance Before Disclosure
ub: Usage Likelihood Before Disclosure

Appendix B. Questionnaires and Variables

Variable: question numbers in Experiment 1 / Experiment 2 / Experiment 3
Trust: 3, 4, 7 / 3, 4, 5, 6 / 5, 6
Performance: 1, 2, 8, 9 / 1, 3, 4, 5, 7, 8 / 1, 2, 3, 4, 7, 10
Usage likelihood: 5, 6 / 9, 10 / 8, 9
Overall Acceptance: 3, 4, 5, 6, 9, 10 / 2, 3, 4, 5, 6, 7, 10 / 1, 5, 6, 7, 9, 10
QUESTIONNAIRE
EXPERIMENT 1:
Demographics:
What is your age?
What is your sex?
Part 1: Initial Interaction (Before Disclosure)
1 How visually appealing did you find the influencer’s content?
2 How entertaining was the content?
3 How credible did you find the information presented by the influencer?
4 How trustworthy did the influencer seem to you?
5 How likely are you to follow this influencer for future posts?
6 How likely are you to like, comment on, or share the influencer’s posts?
7 How authentic did the influencer’s interactions appear to you?
8 How influential do you think this influencer is among their followers?
9 How professional does the influencer’s presentation appear?
10 How much do you feel the influencer aligns with your values?
Part 2: Post-Disclosure (After Information on Virtual Nature)
11 Now knowing the influencer is virtual, how does this affect your perception of their visual appeal?
12 Has your perception of the content’s entertainment changed?
13 How credible do you find the information now that you know the influencer is virtual?
14 How trustworthy does the influencer seem now?
15 Are you more or less likely to follow this influencer now?
16 Are you more or less likely to engage with their posts now?
17 How authentic do the influencer’s interactions appear now?
18 Has your perception of the influencer’s influence changed?
19 How professional does the influencer’s presentation appear now?
20 How has your perception of the influencer’s alignment with your values changed?
Additional Open-Ended Questions (Post-Disclosure)
21 If you changed your opinion about the influencer, can you explain why?
22 What are your thoughts about using virtual influencers in social media marketing?
EXPERIMENT 2:
Demographics:
What is your age?
What is your sex?
Part 1: Initial Interaction (Before Disclosure)
1 How efficiently do you feel your inquiry was handled?
2 How satisfied are you with the resolution provided by the agent?
3 How competent did the agent seem in handling your request?
4 How professional did the interaction feel to you?
5 How clear was the communication from the agent?
6 How polite was the agent during your interaction?
7 How personalized did the service feel?
8 How quickly did the agent respond to your inquiries?
9 How likely are you to seek assistance from this agent again?
10 How likely are you to recommend this service to others?
Part 2: Post-Disclosure (After Information on Virtual Nature)
11 Now knowing the agent is virtual, how does this affect your view of the interaction’s efficiency?
12 Has your satisfaction with the resolution changed?
13 How competent do you find the agent now?
14 How professional does the interaction seem now?
15 How clear does the communication seem now?
16 How polite does the agent seem now?
17 How personalized does the service feel now?
18 Has your perception of the agent’s response time changed?
19 Are you more or less likely to seek assistance from this agent again?
20 Are you more or less likely to recommend this service to others now?
Additional Open-Ended Questions (Post-Disclosure)
21 If your opinion of the agent has changed, can you explain what influenced this change?
22 What are your thoughts on the use of virtual agents in professional settings after learning about their virtual nature?
EXPERIMENT 3:
Demographics:
What is your age?
What is your sex?
Part 1: Initial Interaction (Before Disclosure)
1 How satisfactory was your experience during the interaction?
2 How natural did the communication with the agent feel?
3 How effectively did the agent understand and respond to your needs?
4 How comfortable did you feel during the interaction?
5 How trustworthy did the agent appear?
6 How knowledgeable did the agent seem regarding your inquiry?
7 How quickly did the agent respond to your questions or concerns?
8 How likely are you to use this service again for similar needs?
9 How likely are you to recommend this agent to friends or family?
10 How personalized did the service provided by the agent feel?
Part 2: Post-Disclosure (After Information on Virtual Nature)
11 Now knowing the agent is virtual, how does this affect your satisfaction with the interaction?
12 Has your perception of the naturalness of the communication changed?
13 How effective do you find the agent now?
14 How comfortable do you feel now knowing the agent is virtual?
15 How trustworthy does the agent appear now?
16 How knowledgeable does the agent seem now?
17 Has your perception of the agent’s response speed changed?
18 Are you more or less likely to use this service again?
19 Are you more or less likely to recommend this agent now?
20 How personalized does the service feel now?
Additional Open-Ended Questions (Post-Disclosure)
21 If your opinion of the agent has changed, can you explain what influenced this change?
22 What are your thoughts on using virtual agents for personal communications and services after learning about their virtual nature?

References

  1. Allied Market Research. 2023. Virtual Humans Market by Type, Industry Vertical: Global Opportunity Analysis and Industry Forecast, 2021–2031. Available online: https://www.alliedmarketresearch.com (accessed on 1 June 2024).
  2. Araujo, Theo. 2018. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior 85: 183–89. [Google Scholar] [CrossRef]
  3. Awdziej, Marcin, Dagmara Plata-Alf, and Jolanta Tkaczyk. 2020. Real or Not, Doesn’t Matter, As Long As You Are Hot: Exploring the Perceived Credibility of an Instagram Virtual Influencer. In Proceedings of the European Marketing Academy. Warsaw: Akademia Leona Koźmińskiego. [Google Scholar]
  4. Ayinla, Benjamin Samson, Olukunle Oladipupo Amoo, Akoh Atadoga, Temitayo Oluwaseun Abrahams, Femi Osasona, and Oluwatoyin Ajoke Farayola. 2024. Ethical AI in practice: Balancing technological advancements with human values. International Journal of Science and Research Archive 11: 1311–26. [Google Scholar] [CrossRef]
  5. Bethea, Charles. 2024. The Terrifying A.I. Scam That Uses Your Loved One’s Voice. The New Yorker. Available online: https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice (accessed on 15 June 2024).
  6. Brauner, Philipp, Alexander Hick, Ralf Philipsen, and Martina Ziefle. 2023. What does the public think about artificial intelligence?—A criticality map to understand bias in the public perception of AI. Frontiers in Computer Science 5: 1113903. [Google Scholar] [CrossRef]
  7. Brown, Duncan, and Nick Hayes. 2008. Influencer Marketing: Who Really Influences Your Customers? Oxford: Butterworth-Heinemann. [Google Scholar]
  8. Calderon, Ricardo. 2019. The benefits of Artificial Intelligence in Cybersecurity. Economic Crime Forensics Capstones 36. Available online: https://digitalcommons.lasalle.edu/ecf_capstones/36 (accessed on 23 May 2024).
  9. Chiou, Erin K., Noah L. Schroeder, and Scotty D. Craig. 2020. How we trust, perceive, and learn from virtual humans: The influence of voice quality. Computers and Education 146: 103756. [Google Scholar] [CrossRef]
  10. Chu, Juan, Linjin Xi, Qunlu Zhang, and Ruyi Lin. 2022. Research on Ethical Issues of Artificial Intelligence in Education. In Lecture Notes in Educational Technology. Singapore: Springer. [Google Scholar] [CrossRef]
  11. Craig, Scotty D., and Noah L. Schroeder. 2019. Text-to-Speech Software and Learning: Investigating the Relevancy of the Voice Effect. Journal of Educational Computing Research 57: 1534–48. [Google Scholar] [CrossRef]
  12. Cronje, Johrine. 2023. Trust towards Virtual Humans in Immersive Virtual Reality and Influencing Factors. Doctoral Thesis, Julius-Maximilians-Universität Würzburg, Bloemfontein, South Africa. [Google Scholar]
  13. Cui, Lipeng, and Jiarui Liu. 2023. Virtual Human: A Comprehensive Survey on Academic and Applications. IEEE Access 11: 123830–45. [Google Scholar] [CrossRef]
  14. Darko, Amos, Albert PC Chan, Michael A. Adabre, David J. Edwards, M. Reza Hosseini, and Ernest E. Ameyaw. 2020. Artificial intelligence in the AEC industry: Scientometric analysis and visualization of research activities. Automation in Construction 112: 103081. [Google Scholar] [CrossRef]
  15. De Cicco, Roberta, Serena Iacobucci, Loreta Cannito, Gianni Onesti, Irene Ceccato, and Riccardo Palumbo. 2024. Virtual vs. human influencer: Effects on users’ perceptions and brand outcomes. Technology in Society 77: 102488. [Google Scholar] [CrossRef]
  16. Deng, Fengyi, and Xia Jiang. 2023. Effects of human versus virtual human influencers on the appearance anxiety of social media users. Journal of Retailing and Consumer Services 71: 103233. [Google Scholar] [CrossRef]
  17. Dharmadhikari, Swasti. 2024. Virtual Human Market Report 2024, Global ed. Chicago: Cognitive Market Research. [Google Scholar]
  18. Fountaine, Tim, Brian McCarthy, and Tamim Saleh. 2021. Building the AI-Powered Organization. Harvard Business Review. [Preprint]. Available online: https://hbr.org/2019/07/building-the-ai-powered-organization (accessed on 27 May 2024).
  19. Frey, Carl Benedikt, and Michael A. Osborne. 2017. The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change 114: 254–80. [Google Scholar] [CrossRef]
  20. Gandham, Akansha Priya. 2022. Virtual Humans Market Expected to Reach $440.3 Billion by 2031—Allied Market Research. Portland: Allied Market Research. [Google Scholar]
  21. Gansser, Oliver Alexander, and Christina Stefanie Reich. 2021. A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society 65: 101535. [Google Scholar] [CrossRef]
  22. Gerlich, Michael. 2023a. Perceptions and Acceptance of Artificial Intelligence: A Multi-Dimensional Study. Social Sciences 12: 502. [Google Scholar] [CrossRef]
  23. Gerlich, Michael. 2023b. The Power of Virtual Influencers: Impact on Consumer Behaviour and Attitudes in the Age of AI. Administrative Sciences 13: 178. [Google Scholar] [CrossRef]
  24. Gibbons, Elizabeth D. 2021. Toward a More Equal World: The Human Rights Approach to Extending the Benefits of Artificial Intelligence. IEEE Technology and Society Magazine 40: 25–30. [Google Scholar] [CrossRef]
  25. Gillespie, Nicole, Steve Lockey, and Caitlin Curtis. 2021. Trust in Artificial Intelligence: A Five Country Study. Brisbane: The University of Queensland and KPMG Australia. [Google Scholar]
  26. Glass, Graham. 2024. The AI Skepticism Gap between Managers and Workers and How We Close It. Forbes. Available online: https://www.forbes.com/sites/forbeshumanresourcescouncil/2024/03/27/the-ai-skepticism-gap-between-managers-and-workers-and-how-we-close-it/ (accessed on 13 June 2024).
  27. Gürerk, Özgür, Andrea Bönsch, Thomas Kittsteiner, and Andreas Staffeldt. 2019. Virtual humans as co-workers: A novel methodology to study peer effects. Journal of Behavioral and Experimental Economics 78: 17–29. [Google Scholar] [CrossRef]
  28. Gürerk, Özgür, Thomas Lauer, and Martin Scheuermann. 2018. Leadership with individual rewards and punishments. Journal of Behavioral and Experimental Economics 74: 57–69. [Google Scholar] [CrossRef]
  29. Ham, Jeongmin, Sitan Li, Jiemin Looi, and Matthew S. Eastin. 2024. Virtual humans as social actors: Investigating user perceptions of virtual humans’ emotional expression on social media. Computers in Human Behavior 155: 108161. [Google Scholar] [CrossRef]
  30. Hill, Randall W., Jr. 2014. How Virtual Humans Can Build Better Leaders. Harvard Business Review, July 26. [Google Scholar]
  31. Hwang, Jae-Yoon, and Seo I. Hwang. 2022. A Study on the Changes in the Social Discourse Concerning the “Metaverse” and “Virtual Human” in the Entertainment Field. Journal of Digital Contents Society 23: 2435–44. [Google Scholar] [CrossRef]
  32. ITU. 2019. Disruptive Technologies and Their Use in Disaster Risk Reduction and Management. Geneva: International Telecommunication Union. [Google Scholar]
  33. Jeong, Cheonsu, and Jeong Jihwan. 2022. Ethical Issues with Artificial Intelligence (A Case Study on AI Chatbot & Self-Driving Car). International Journal of Scientific & Engineering Research 13: 468–71. [Google Scholar]
  34. Jiménez-Castillo, David, and Raquel Sánchez-Fernández. 2019. The role of digital influencers in brand recommendation: Examining their impact on engagement, expected value and purchase intention. International Journal of Information Management 49: 366–76. [Google Scholar] [CrossRef]
  35. Kaya, Feridun, Fatih Aydin, Astrid Schepman, Paul Rodway, Okan Yetişensoy, and Meva Demir Kaya. 2024. The Roles of Personality Traits, AI Anxiety, and Demographic Factors in Attitudes toward Artificial Intelligence. International Journal of Human-Computer Interaction 40: 497–514. [Google Scholar] [CrossRef]
  36. Kelly, Sage, Sherrie-Anne Kaye, and Oscar Oviedo-Trespalacios. 2023. What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics 77: 101925. [Google Scholar] [CrossRef]
  37. Lee, Min Kyung, Daniel Kusbit, Evan Metsky, and Laura Dabbish. 2015. Working with machines: The impact of algorithmic and data-driven management on human workers. Paper presented at CHI’15: CHI Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, April 18–23. [Google Scholar]
  38. Lin, Jinghuai, Johrine Cronjé, Ivo Käthner, Paul Pauli, and Marc Erich Latoschik. 2023. Measuring Interpersonal Trust towards Virtual Humans with a Virtual Maze Paradigm. IEEE Transactions on Visualization and Computer Graphics 29: 2401–11. [Google Scholar] [CrossRef]
  39. Lucas, Gale M., Jonathan Gratch, Aisha King, and Louis-Philippe Morency. 2014. It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior 37: 94–100. [Google Scholar] [CrossRef]
  40. Matthews, Anisa. 2021. Virtual(ly) Black Influencers Prove Racial Capital Is Virtual, Too. Medium. Available online: https://medium.com/swlh/virtual-ly-black-influencers-prove-racial-capital-is-virtual-too-7d94f484141a#:~:text=Meet%20Shudu%20Gram%20—%20the%20world%27s,skin%20glistening%20in%20the%20sun (accessed on 14 June 2024).
  41. Moon, Won-Ki, Y. Greg Song, and Lucy Atkinson. 2024. Virtual voices for real change: The efficacy of virtual humans in pro-environmental social marketing for mitigating misinformation about climate change. Computers in Human Behavior: Artificial Humans 2: 100047. [Google Scholar] [CrossRef]
  42. Moustakas, Evangelos, Nishtha Lamba, Dina Mahmoud, and Ranganathan Chandrasekaran. 2020. Blurring lines between fiction and reality: Perspectives of experts on marketing effectiveness of virtual influencers. Paper presented at International Conference on Cyber Security and Protection of Digital Services, Cyber Security 2020, Dublin, Ireland, June 15–19. [Google Scholar]
  43. Neudert, Lisa-Maria, Aleksi Knuutila, and Philip N. Howard. 2020. Global Attitudes Towards AI, Machine Learning & Automated Decision Making. Oxford: Oxford Internet Institute. [Google Scholar]
  44. O’Carroll, Lisa. 2024. AI Fuelling Dating and Social Media Fraud, EU Police Agency Says. The Guardian. Available online: https://www.theguardian.com/technology/2024/jan/09/ai-wars-dating-social-media-fraud-eu-crime-artificial-intelligence-europol (accessed on 15 June 2024).
  45. OECD. 2019. Artificial Intelligence in Society. Paris: OECD. [Google Scholar]
  46. Ouchchy, Leila, Allen Coin, and Veljko Dubljević. 2020. AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI and Society 35: 927–36. [Google Scholar] [CrossRef]
  47. Parise, Salvatore, Sara Kiesler, Lee Sproull, and Keith Waters. 1999. Cooperating with life-like interface agents. Computers in Human Behavior 15: 123–42. [Google Scholar] [CrossRef]
  48. Parsons, Thomas D. 2021. Ethical challenges of using virtual environments in the assessment and treatment of psychopathological disorders. Journal of Clinical Medicine 10: 378. [Google Scholar] [CrossRef]
  49. Piñeiro-Martín, Andrés, Carmen García-Mateo, Laura Docío-Fernández, and Maria Del Carmen Lopez-Perez. 2023. Ethical Challenges in the Development of Virtual Assistants Powered by Large Language Models. Electronics 12: 3170. [Google Scholar] [CrossRef]
  50. Ransbotham, Sam, François Candelon, David Kiron, Burt LaFountain, and Shervin Khodabandeh. 2021. The Cultural Benefits of Artificial Intelligence in the Enterprise. MIT Sloan Management Review. [Preprint]. November. Available online: https://boardmember.com/wp-content/uploads/2022/06/BCG-The-Hidden-Cultural-Benefits-of-AI.pdf (accessed on 24 May 2024).
  51. Ren, Yi, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. FastSpeech: Fast, robust and controllable text to speech. Paper presented at Advances in Neural Information Processing Systems 32 (NeurIPS 2019), Vancouver, BC, Canada, December 8–14. [Google Scholar]
  52. Restrepo, Pascual, and Daron Acemoglu. 2017. Robots and Jobs: Evidence from the US. CEPR. Available online: https://cepr.org/voxeu/columns/robots-and-jobs-evidence-us (accessed on 14 June 2024).
  53. Savin-Baden, Maggi, and David Burden. 2019. Digital Immortality and Virtual Humans. Postdigital Science and Education 1: 87–103. [Google Scholar] [CrossRef]
  54. Schroeder, Noah L., Erin K. Chiou, and Scotty D. Craig. 2021. Trust influences perceptions of virtual humans, but not necessarily learning. Computers and Education 160: 104039. [Google Scholar] [CrossRef]
  55. Sela, Matan, Elad Richardson, and Ron Kimmel. 2017. Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation. Paper presented at the IEEE International Conference on Computer Vision, Venice, Italy, October 22–29. [Google Scholar]
  56. Shi, Weilan. 2023. Virtual Human Influencer and Its Impact on Consumer Purchase Intention. Advances in Economics, Management and Political Sciences 47: 80–87. [Google Scholar] [CrossRef]
  57. Tewari, Ayush, Michael Zollhofer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Perez, and Christian Theobalt. 2017. MoFA: Model-Based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. Paper presented at the IEEE International Conference on Computer Vision, Venice, Italy, October 22–29. [Google Scholar]
  58. Thomas, Veronica L., and Kendra Fowler. 2021. Close Encounters of the AI Kind: Use of AI Influencers As Brand Endorsers. Journal of Advertising 50: 11–25. [Google Scholar] [CrossRef]
  59. Tran, Luan, and Xiaoming Liu. 2018. Nonlinear 3D Face Morphable Model. Paper presented at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, June 18–23. [Google Scholar]
  60. Twenge, Jean. M., and W. Keith Campbell. 2018. Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study. Preventive Medicine Reports 12: 271–83. [Google Scholar] [CrossRef] [PubMed]
  61. Visser, Francesca. 2024. What Is a Deepfake—And How Are They Being Used by Scammers? The Bureau of Investigative Journalism. Available online: https://www.thebureauinvestigates.com/stories/2024-03-07/what-is-a-deepfake-and-what-are-the-different-types/ (accessed on 13 June 2024).
  62. Walker, Tommy. 2024. Report: Southeast Asia Scam Centers Swindle Billions. VOA News. Available online: https://www.voanews.com/a/report-southeast-asia-scam-centers-swindle-billions/7655765.html (accessed on 15 June 2024).
  63. Wu, Xixin, Yuewen Cao, Hui Lu, Songxiang Liu, Shiyin Kang, Zhiyong Wu, Xunying Liu, and Helen Meng. 2021. Exemplar-Based Emotive Speech Synthesis. IEEE/ACM Transactions on Audio Speech and Language Processing 29: 874–86. [Google Scholar] [CrossRef]
  64. Xu, Sicheng, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, and Baining Guo. 2024. VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time. Available online: https://www.microsoft.com/en-us/research/project/vasa-1/ (accessed on 16 June 2024).
  65. Yan, Ji, Senmao Xia, Amanda Jiang, and Zhibin Lin. 2024. The effect of different types of virtual influencers on consumers’ emotional attachment. Journal of Business Research 177: 114646. [Google Scholar] [CrossRef]
  66. Yang, Shan, Heng Lu, Shiyin Kang, Liumeng Xue, Jinba Xiao, Dan Su, Lei Xie, and Dong Yu. 2020. On the localness modeling for the self-attention based end-to-end speech synthesis. Neural Networks 125: 121–30. [Google Scholar] [CrossRef] [PubMed]
  67. Yang, Zeyi. 2024. Deepfakes of Your Dead Loved Ones Are a Booming Chinese Business. MIT Technology Review. Available online: https://www.technologyreview.com/2024/05/07/1092116/deepfakes-dead-chinese-business-grief/?truid=&utm_source=the_download&utm_medium=email&utm_campaign=the_download.unpaid.engagement&utm_term=&utm_content=06-13-2024&mc_cid=85388db1c6&mc_eid=581b627fd2 (accessed on 13 June 2024).
  68. Yeasmin, Samira. 2019. Benefits of Artificial Intelligence in Medicine. Paper presented at 2nd International Conference on Computer Applications and Information Security, ICCAIS 2019, Riyadh, Saudi Arabia, May 1–3. [Google Scholar]
  69. Zhao, Ziyue, Huijun Liu, and Tim Fingscheidt. 2019. Convolutional neural networks to enhance coded speech. IEEE/ACM Transactions on Audio Speech and Language Processing 27: 663–78. [Google Scholar] [CrossRef]
  70. Zhu, Xiaolian, Yuchao Zhang, Shan Yang, Liumeng Xue, and Lei Xie. 2019. Pre-Alignment guided attention for improving training efficiency and model stability in end-To-end speech synthesis. IEEE Access 7: 65955–64. [Google Scholar] [CrossRef]
Figure 1. VH using VASA-1 (Xu et al. 2024).
Figure 2. Q-Q Plot of trust before and after disclosure.
Figure 3. Q-Q Plot of usage acceptance before and after disclosure.
Figure 4. Q-Q plot of trust.
Figure 5. Q-Q plot of performance perception (EX3).
Figure 6. Usage likelihood.
Figure 7. Overall acceptance Q-Q plot.
Table 1. Survey questions relating to different variables.

| Variable | EXP1 Questions | EXP2 Questions | EXP3 Questions |
|---|---|---|---|
| Trust | 3, 4, 7 | 3, 4, 5, 6 | 5, 6 |
| Performance | 1, 2, 8, 9 | 1, 3, 4, 5, 7, 8 | 1, 2, 3, 4, 7, 10 |
| Usage likelihood | 5, 6 | 9, 10 | 8, 9 |
| Overall acceptance | 3, 4, 5, 6, 9, 10 | 2, 3, 4, 5, 6, 7, 10 | 1, 5, 6, 7, 9, 10 |
Table 2. Cumulative variables across three experiments.

| Variable | Experiment 1 (n = 130) | Experiment 2 (n = 125) | Experiment 3 (n = 118) |
|---|---|---|---|
| Trust | Before: 2.1487, After: 1.9795 | Before: 1.1290, After: 1.3044 | Before: 1.0684, After: 2.7179 |
| Performance | Before: 1.3538, After: 1.4712 | Before: 1.8360, After: 1.9839 | Before: 1.0983, After: 1.6937 |
| Usage likelihood | Before: 2.5846, After: 1.9077 | Before: 1.4194, After: 1.8871 | Before: 1.0812, After: 4.6496 |
| Overall acceptance | Before: 2.1526, After: 1.9833 | Before: 1.7961, After: 2.0737 | Before: 1.1154, After: 2.3533 |

Note: ‘Before’ indicates measurements taken before the disclosure of the virtual human’s nature, and ‘After’ indicates measurements taken immediately afterwards.
Table 3. Paired t-test results comparing pre- and post-disclosure perceptions from Experiment 3.

| Test | Mean Difference | t-Statistic | Degrees of Freedom | p-Value |
|---|---|---|---|---|
| Trust before vs. after | −0.5978813 | −11.0816 | 117 | <0.001 |
| Performance before vs. after | −0.2931356 | −13.8722 | 117 | <0.001 |
| Usage likelihood before vs. after | −1.100932 | −11.7349 | 117 | <0.001 |
| Overall acceptance before vs. after | −0.5594 | −10.1593 | 117 | <0.001 |
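The paired t-statistics in Table 3 follow the standard formula t = d̄ / (s_d/√n) with df = n − 1, where d are the per-participant before/after differences. A minimal sketch of that computation, using made-up ratings rather than the study's data (the function name `paired_t` is illustrative):

```python
import math

def paired_t(before, after):
    """Paired t-test: t = mean(d) / (sd(d) / sqrt(n)), df = n - 1."""
    assert len(before) == len(after)
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (Bessel's correction, n - 1).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return mean_d, t, n - 1

# Hypothetical 6-point ratings for five participants, before and after disclosure:
before = [1.0, 1.2, 1.1, 1.0, 1.3]
after = [2.5, 2.9, 2.6, 2.8, 2.7]
mean_diff, t_stat, df = paired_t(before, after)  # large negative t: ratings rose
```

A large negative t with a small p-value, as in Table 3, indicates post-disclosure scores were systematically higher than pre-disclosure scores.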
Table 4. Comparative analysis of trust levels across social media, business, and general public experiments before and after disclosure.

| Measure | EX1—Social Media | EX2—Business | EX3—General |
|---|---|---|---|
| Trust before: mean (SD) | 2.148718 (0.8627) | 1.12904 (0.1664) | 1.06839 (0.2057) |
| Trust after: mean (SD) | 1.979487 (1.0620) | 1.3044 (0.3427) | 2.717966 (1.013551) |
| Paired t-statistic | 1.2833 | −5.5016 | −17.8707 |
| p-value | 0.2017 | <0.001 | <0.001 |
Table 5. Trust across experiments—descriptive statistics.

| Variable | Obs | Mean | Std. Dev. | Min | Max |
|---|---|---|---|---|---|
| tbex1 | 130 | 2.149 | 0.863 | 1 | 4.333 |
| taex1 | 130 | 1.979 | 1.062 | 1 | 4.667 |
| tbex2 | 125 | 1.129 | 0.166 | 1 | 1.75 |
| taex2 | 125 | 1.304 | 0.343 | 1 | 2.25 |
| tbex3 | 118 | 1.068 | 0.206 | 1 | 2 |
| taex3 | 118 | 2.718 | 1.014 | 1 | 4 |

Note: tb = trust before disclosure, ta = trust after disclosure; ex1–ex3 = Experiments 1–3.
Table 6. Paired t-test of performance perception before and after VH reveal.

| Measure | EX1—Social Media | EX2—Business | EX3—General |
|---|---|---|---|
| Performance before: mean (SD) | 1.354 (0.287) | 1.835 (0.2288) | 1.099 (0.145) |
| Performance after: mean (SD) | 1.4712 (0.4967) | 1.98352 (0.2845) | 1.6938 (0.4585) |
| Paired t-statistic | −2.7440 | −6.5792 | −14.3827 |
| p-value | 0.0069 | <0.001 | <0.001 |
Table 7. Descriptive statistics of perceived performance of VHs across experiments.

| Variable | Obs | Mean | Std. Dev. | Min | Max |
|---|---|---|---|---|---|
| pbex1 | 130 | 1.354 | 0.287 | 1 | 2.25 |
| paex1 | 130 | 1.471 | 0.497 | 1 | 3 |
| pbex2 | 125 | 1.836 | 0.229 | 1.17 | 2.5 |
| paex2 | 125 | 1.984 | 0.285 | 1.17 | 2.83 |
| pbex3 | 118 | 1.099 | 0.145 | 1 | 1.67 |
| paex3 | 118 | 1.694 | 0.459 | 1 | 3 |

Note: pb = performance before disclosure, pa = performance after disclosure; ex1–ex3 = Experiments 1–3.
Table 8. Paired t-test of usage likelihood before and after VH reveal.

| Measure | EX1—Social Media | EX2—Business | EX3—General |
|---|---|---|---|
| Usage likelihood before: mean (SD) | 2.5846 (1.463) | 1.41936 (0.50155) | 1.08118 (0.31322) |
| Usage likelihood after: mean (SD) | 1.9076 (1.4782) | 1.88712 (0.9854562) | 4.6495 (2.01942) |
| Paired t-statistic | 3.4775 | −6.1091 | −19.2796 |
| p-value | 0.0007 | <0.001 | <0.001 |
Table 9. Descriptive statistics of usage likelihood.

| Variable | Obs | Mean | Std. Dev. | Min | Max |
|---|---|---|---|---|---|
| ubex1 | 130 | 2.585 | 1.463 | 1 | 6 |
| uaex1 | 130 | 1.908 | 1.478 | 1 | 6 |
| ubex2 | 125 | 1.419 | 0.502 | 1 | 2.5 |
| uaex2 | 125 | 1.887 | 0.985 | 1 | 4.5 |
| ubex3 | 118 | 1.081 | 0.313 | 1 | 3 |
| uaex3 | 118 | 4.65 | 2.019 | 1 | 6 |

Note: ub = usage likelihood before disclosure, ua = usage likelihood after disclosure; ex1–ex3 = Experiments 1–3.
Table 10. Overall acceptance paired t-test.

| Measure | EX1—Social Media | EX2—Business | EX3—General |
|---|---|---|---|
| Overall acceptance before: mean (SD) | 2.1525 (0.8508) | 1.79528 (0.2354) | 1.116017 (0.2047) |
| Overall acceptance after: mean (SD) | 1.984 (1.0494) | 2.07369 (0.4108754) | 2.6942 (0.9233583) |
| Paired t-statistic | 1.2900 | −8.2438 | −18.7938 |
| p-value | 0.1994 | <0.001 | <0.001 |
Table 11. Descriptive statistics of overall acceptance.

| Variable | Obs | Mean | Std. Dev. | Min | Max |
|---|---|---|---|---|---|
| obex1 | 130 | 2.153 | 0.851 | 1 | 4.333 |
| oaex1 | 130 | 1.984 | 1.049 | 1 | 5.17 |
| obex2 | 125 | 1.795 | 0.235 | 1.29 | 2.57 |
| oaex2 | 125 | 2.074 | 0.411 | 1.29 | 3.43 |
| obex3 | 118 | 1.116 | 0.205 | 1 | 2.33 |
| oaex3 | 118 | 2.694 | 0.923 | 1 | 4.14 |

Note: ob = overall acceptance before disclosure, oa = overall acceptance after disclosure; ex1–ex3 = Experiments 1–3.
Table 12. Correlation analysis.

| Variables | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) |
|---|---|---|---|---|---|---|---|---|
| (1) trb | 1.000 | | | | | | | |
| (2) pb | 0.366 | 1.000 | | | | | | |
| (3) ub | 0.767 | 0.185 | 1.000 | | | | | |
| (4) ob | 0.851 | 0.347 | 0.923 | 1.000 | | | | |
| (5) tra | 0.005 | 0.055 | −0.137 | −0.134 | 1.000 | | | |
| (6) pa | 0.502 | 0.421 | 0.396 | 0.459 | 0.387 | 1.000 | | |
| (7) ua | 0.003 | −0.023 | −0.015 | −0.058 | 0.643 | 0.544 | 1.000 | |
| (8) oa | −0.071 | 0.098 | −0.129 | −0.115 | 0.896 | 0.503 | 0.802 | 1.000 |
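Each entry in Table 12 is a pairwise Pearson correlation between the pre-disclosure ("b") and post-disclosure ("a") composite scores. A self-contained sketch of the underlying computation, with illustrative vectors rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation: cov(x, y) / (sd(x) * sd(y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear data gives r ≈ 1.0; a reversed ordering gives r ≈ -1.0.
r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
r_neg = pearson_r([1, 2, 3], [3, 2, 1])
```

Values near zero, such as the 0.005 between trb and tra, mean pre- and post-disclosure scores move almost independently of one another.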
Table 13. Regression describing the before statistics.

| ob | Coef. | St. Err. | t-Value | p-Value | [95% Conf. | Interval] | Sig. |
|---|---|---|---|---|---|---|---|
| trb | 0.327 | 0.053 | 6.18 | <0.001 | 0.222 | 0.431 | *** |
| pb | 0.263 | 0.069 | 3.83 | <0.001 | 0.127 | 0.400 | *** |
| ub | 0.391 | 0.025 | 15.59 | <0.001 | 0.341 | 0.441 | *** |
| Constant | 0.194 | 0.092 | 2.11 | 0.037 | 0.012 | 0.375 | ** |

Mean dependent var: 1.704; SD dependent var: 0.310. R-squared: 0.913; Number of obs: 118. F-test: 396.697; Prob > F: <0.001. Akaike crit. (AIC): −222.179; Bayesian crit. (BIC): −211.096.
*** p < 0.01, ** p < 0.05.
Table 14. Regression analysis of after-disclosure data, with overall acceptance as the dependent variable, and trust, performance, and usage likelihood as the independent variables.

| oa | Coef. | St. Err. | t-Value | p-Value | [95% Conf. | Interval] | Sig. |
|---|---|---|---|---|---|---|---|
| tra | 0.590 | 0.037 | 16.04 | <0.001 | 0.517 | 0.663 | *** |
| pa | 0.116 | 0.070 | 1.66 | 0.099 | −0.022 | 0.254 | * |
| ua | 0.198 | 0.025 | 8.05 | <0.001 | 0.149 | 0.247 | *** |
| Constant | 0.318 | 0.106 | 3.00 | 0.003 | 0.108 | 0.528 | *** |

Mean dependent var: 2.264; SD dependent var: 0.477. R-squared: 0.893; Number of obs: 118. F-test: 315.641; Prob > F: <0.001. Akaike crit. (AIC): −95.885; Bayesian crit. (BIC): −84.802.
*** p < 0.01, * p < 0.1.
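The coefficients in Tables 13 and 14 come from ordinary least squares. For a single predictor the closed form is b = cov(x, y)/var(x) and a = ȳ − b·x̄; the sketch below (hypothetical data, one predictor rather than the three used in the tables) illustrates the idea:

```python
def ols_simple(x, y):
    """Least-squares fit of y = a + b*x.

    b = sum((x - mean_x) * (y - mean_y)) / sum((x - mean_x)^2)
    a = mean_y - b * mean_x
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Exactly linear data recovers the generating line y = 1 + 2x:
a, b = ols_simple([1, 2, 3, 4], [3, 5, 7, 9])
```

With several predictors, as above, statistical packages solve the analogous matrix normal equations, but the interpretation of each coefficient (expected change in the dependent variable per unit change in the predictor, holding the others fixed) is the same.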

Share and Cite

MDPI and ACS Style

Gerlich, M. Societal Perceptions and Acceptance of Virtual Humans: Trust and Ethics across Different Contexts. Soc. Sci. 2024, 13, 516. https://doi.org/10.3390/socsci13100516
