Article

The Voice Makes the Car: Enhancing Autonomous Vehicle Perceptions and Adoption Intention through Voice Agent Gender and Style

Sanguk Lee, Rabindra Ratan and Taiwoo Park

1. Department of Communication, Michigan State University, East Lansing, MI 48824, USA
2. Department of Media and Information, Michigan State University, East Lansing, MI 48824, USA
3. Engineering and Computer Science, Seattle Pacific University, Seattle, WA 98119, USA
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2019, 3(1), 20; https://doi.org/10.3390/mti3010020
Submission received: 5 February 2019 / Revised: 16 March 2019 / Accepted: 16 March 2019 / Published: 21 March 2019

Abstract

The present research explores how autonomous vehicle voice agent (AVVA) design influences autonomous vehicle passenger (AVP) intentions to adopt autonomous vehicles. An online experiment (N = 158) examined the role of gender stereotypes in response to an AVVA with respect to the technology acceptance model. The findings indicate that characteristics of the AVVA that are more consistent with the stereotypical expectation of the social role (informative male AVVA and social female AVVA) foster greater perceived ease of use (PEU) and perceived usefulness (PU) than inconsistent conditions (social male AVVA and informative female AVVA). The study offers theoretical implications regarding the technology acceptance model in the context of autonomous technologies as well as practical implications for the design of autonomous vehicle voice agents.

1. Introduction

Since 2016, nineteen companies across multiple industries, including Google, Uber, and Tesla, have been developing self-driving cars, aiming to commercialize them for the road by 2021 [1]. The U.S. government has supported the introduction of self-driving cars by easing related regulations [2]. In Europe, the U.K. government has also announced strong support for self-driving cars [3]. In Asia, the South Korean government has allowed companies such as Samsung and Hyundai to test self-driving cars on public roads [4]. Altogether, clear evidence suggests that self-driving cars are becoming a worldwide trend due to their immense potential benefits.
Although commercial autonomous cars are likely to be safe and reliable, potential adopters will still experience a high level of uncertainty about the safety, reliability, and control of these vehicles [5,6]. Uncertainty is a major hindrance to technological adoption [7] because if people are uncertain about how the vehicle will behave, they will be reluctant to relinquish control to the autonomous system. Such uncertainty can be reduced through vehicle design approaches that help users trust and thus adopt the technology [8,9].
The present paper builds on the notion that people respond to computational technologies following the social rules that govern normal human interaction [10]. Specifically, we focus on the potential for an autonomous vehicle voice agent (AVVA) to display social characteristics that affect the experience of the autonomous vehicle passenger (AVP) and thus willingness to adopt autonomous vehicles. This paper utilizes the technology acceptance model (TAM) as a theoretical framework to examine how an AVVA’s style (informative vs. sociable) and gender influence the perceived ease of use (PEU) and usefulness (PU) of the autonomous vehicle itself, thereby influencing intention of adoption.

2. Literature Review

2.1. The Technology Acceptance Model and the Adoption of Intelligent Technology

Davis [11] adopted the Theory of Reasoned Action (TRA) to explain how people accept technologies (Figure 1). TRA suggests causal relationships between beliefs, attitudes, and intentions. Based on this notion, Davis [11] reasoned that perceived beliefs toward technologies influence intentions to adopt technologies. Since Davis [11] proposed the technology acceptance model (TAM), the model has been tested and supported by a large number of empirical studies. The core concepts of the model include PEU and PU. TAM predicts that people are more likely to adopt a technology when it is perceived as easy to use and useful. Also, PEU has been found to influence PU [11]. Scholars have extended the model by adding other contextual variables, including social influence and environmental influence [12,13].
The TAM is a useful theoretical framework for investigating the determinants of internal beliefs about use. Although a substantial body of academic work has focused on the adoption of Computer-Mediated Communication (CMC) technologies such as email, telecom, the internet, and e-commerce [14], scholars have expanded the TAM to the context of human–computer interaction. For instance, Heerink et al. used the TAM to explain the adoption of a healthcare robot for the elderly [15].
The TAM framework has thus been used to investigate the mechanisms of technology acceptance across contexts. As we enter the fourth industrial revolution, researchers are striving to better understand the psychological processes that influence adoption of intelligent technologies that interact autonomously with humans, such as autonomous vehicles. Choi and Ji [16] investigated adoption of autonomous vehicles and, as the TAM predicts, found that PEU and PU lead to adoption. In particular, the results showed that PU was a stronger predictor than other variables. The current study replicates these previous findings as a foundational step. Building on a conventional understanding of TAM, we hypothesize the following:
Hypothesis 1 (H1).
The PEU of an autonomous car will positively influence the PU of the autonomous car.
Hypothesis 2 (H2a,b).
Intention of adopting an autonomous car will be influenced by (a) the PEU and (b) the PU of an autonomous car.

2.2. Intelligent Technology as Social Actors

Mobile technologies are becoming increasingly intelligent. Voice assistant systems such as Siri, Alexa, and Google Home can set schedules, read articles, and entertain people by telling jokes. Although these intelligent technologies can only mimic human communication based on pre-existing algorithms, the mimicked behaviors are enough to elicit social presence, defined as the degree to which an object is perceived as a social other [17].
The question of how people interact with technologies is becoming increasingly significant as people have more chances to encounter intelligent entities that are not human. Early research on this topic concluded that humans interact with technologies in the same ways they interact with other humans [18]. In other words, people treat computers as social actors (CASA). Research examining human interaction with technologies has supported the idea of CASA and found that people perceive human features such as gender and personality in technologies [19,20,21]. People also naturally apply various social rules, such as social identification [20], similarity-attraction [9,19], and gender stereotypes [20,22], when interacting with technologies.
According to the CASA paradigm, social science theories can be extended into the context of human–technology interaction in order to better design assistant technologies. In the current study, we consider a dual-process model to frame the types of information that an AVVA can communicate in order to influence drivers’ perceptions. Dual-process models generally describe differences between intuitive and reasoning-based cognitive processes [23]. That is, an AVVA can be designed to appeal to intuitive cognitive processes by being sociable or to reasoning-based cognitive processes by providing task-related information.
Returning to autonomous vehicles and the TAM, we expect that an AVVA would lead to different perceptions depending on whether it is informative or social. These attributes of an AVVA should influence perceptions of the autonomous vehicle. Although the AVVA is only one piece of the larger assemblage of the autonomous vehicle, it potentially serves as the primary information interface between the user and the technology. Thus, we expect that TAM factors for an AVVA are applicable to the entire technology. More specifically, we expect that the extent to which an AVVA is informative and/or sociable will influence TAM factors (PEU and PU) as related to the autonomous vehicle and thus willingness to adopt autonomous vehicles in general.
People tend to heuristically ascribe social identities to technologies when the technology displays social cues such as sociability and friendliness [10]. Further, such sociability cues can contribute to PEU. Anxiety toward robots has been found to relate negatively to ease of use [24]. Just as people can reduce others’ anxiety by acting friendly and sociable toward them, the anthropomorphic cues of sociability and friendliness may reduce anxiety toward the technology and subsequently increase PEU. Thus, we predict the following with respect to an AVVA’s sociability characteristics and PEU.
Hypothesis 3 (H3).
A sociable AVVA will induce more autonomous vehicle PEU than an informative AVVA.
On the other hand, an AVVA can appear more intelligent by providing dynamic updates about the driving environment, which should contribute to the AVP’s situational awareness and thus sense of control [25]. Belief in system transparency, the degree to which the technology’s operation can be predicted, is associated with PU and adoption intention [16]. If a technology provides information that people can use to predict its surroundings or behavior, they perceive it as more useful. Hence, we expect that an AVVA that informs the AVP with dynamic situational updates will be perceived as more useful.
Hypothesis 4 (H4).
An informative AVVA will induce more autonomous vehicle PU than a sociable AVVA.
The effect of AVVA gender is also a topic of interest. Agent gender plays an important role in people’s responses to such technologies [26,27]. Reactions to an agent’s communication style are influenced by gender stereotypes developed through interactions with humans. Multiple studies have identified gender differences in communication style (e.g., [28,29]). For instance, scholars have found that communication from men tends to be more task-oriented (i.e., informative), while women tend to adopt more socially-oriented (i.e., sociable) communication styles [30,31,32].
Given these patterns, gender stereotyping also occurs in interactions with technologies. Nass and colleagues found that both male and female participants perceived evaluations from a male-voiced computer as more valid than evaluations from a female-voiced computer, consistent with the gender stereotype that men are more dominant and influential [22]. Studies have found that people tend to apply the same gender stereotypes to synthetic, computer-generated voices as they do to human voices, and that this affects how people make decisions while interacting with technology [20,33,34]. Notably, studies have found a stereotype-driven matching effect: male voices are perceived as more authoritative in general, but female voices are trusted and preferred more in contexts that are stereotypically feminine, such as love and relationships [22,35].
Returning to the context of AVVAs and autonomous vehicles and building on this previous literature, we expect that the match between an AVVA’s communication style and gender will influence perceptions of the autonomous vehicle. Connecting this to TAM, we predict that AVVA gender moderates the effect of communication style on PEU and PU such that a social female AVVA and an informative male AVVA will be preferred over AVVAs that reflect a gender/style mismatch.
Hypothesis 5 (H5).
An informative male AVVA and a social female AVVA will be perceived as easier to use than an informative female AVVA and a social male AVVA.
Hypothesis 6 (H6).
An informative male AVVA and a social female AVVA will be perceived as more useful than an informative female AVVA and a social male AVVA.
Further, given that TAM scholars have found that PEU and PU serve as mediators [36], we test for the same relationships in the current context. We hypothesize that both variables will mediate the influence of AVVA style and gender on adoption intention.
Hypothesis 7 (H7).
PEU and PU will mediate the influence of AVVA style and AVVA gender on intention of adopting an autonomous car.
In addition, given that PEU and PU generally have a strong association with each other [36,37], features of a technology that increase PEU may also increase PU indirectly. Thus, we hypothesize that PEU will mediate the influence of AVVA style and AVVA gender (as a moderator) on PU.
Hypothesis 8 (H8a,b).
PEU will mediate (a) the influence of AVVA style and (b) the influence of AVVA gender (as a moderator) on PU.

3. Methods

3.1. Experiment Design and Procedure

An online experiment was distributed as a survey to undergraduate students at an American university (N = 158; 43 men, 114 women, 1 unreported; mean age = 21.51, SD = 6.86). Participants were randomly assigned to one of four conditions in a 2 (AVVA style: informative vs. social) by 2 (AVVA gender: male vs. female) design. Two participants each skipped one item from the PU and PEU measures. For these two participants, the missing response was replaced with the mean of their other responses on the same scale so they would not be excluded from the analysis. This approach did not change each participant’s mean on the respective measure.
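To illustrate this person-mean imputation approach, the following sketch shows how a missing item response could be filled with the mean of a participant’s other items on the same scale; the DataFrame and column names (pu_1 to pu_4) are hypothetical stand-ins for the actual item-level data, not the study dataset.

```python
import pandas as pd

# Hypothetical item-level data: four PU items per participant; the second
# participant skipped pu_2 (None).
df = pd.DataFrame({
    "pu_1": [4, 5], "pu_2": [4, None], "pu_3": [5, 4], "pu_4": [4, 4],
})

# Person-mean imputation: replace a participant's missing item with the mean
# of that participant's other items on the same scale, so the participant's
# scale mean is unchanged by the imputation.
pu_cols = ["pu_1", "pu_2", "pu_3", "pu_4"]
row_means = df[pu_cols].mean(axis=1, skipna=True)
df[pu_cols] = df[pu_cols].apply(lambda col: col.fillna(row_means))

print(df)
```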
Participants were given a small amount of course extra credit for their participation. They watched a driving simulation and listened to one of the assigned AVVA voices. Participants were given the following instruction before watching the video of the autonomous car simulation: “Please view the following video in FULL SCREEN MODE with your volume ON. Imagine that you are sitting in the vehicle itself during this experience. Please watch the full video clip (~5 min), then exit full screen mode and complete the questions about the experience.”
The simulation lasted 5 min and 20 s. After the simulation, participants were asked to complete a set of survey questions. Note that item order within the questionnaire was randomized within blocks of questions to reduce potential ordering effects.

3.2. Experiment Treatments

The virtual agent’s voice was generated with Amazon Polly, a web-based service that turns text into speech [38]. A female voice named Joanna and a male voice named Matthew were used. The generated voice files were edited in Audacity to synchronize the voice with the recorded scene [39]. For the social AVVA, messages were constructed to focus on relational aspects of communication by disclosing personal information, making jokes, and referring to users’ potential concerns [40]. For the informative AVVA, the script was designed to focus on providing information about the autonomous car’s actions as well as the surrounding environment, such as the weather, speed limit, and traffic signals [40]. The scripts for both agents are available in Appendix A.
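For readers who wish to reproduce the voice-generation step, a minimal sketch using the Amazon Polly API via boto3 is shown below; the sample utterance and output file name are illustrative only (the full scripts appear in Appendix A), and valid AWS credentials are assumed.

```python
import boto3

# Sketch of synthesizing one AVVA utterance with Amazon Polly.
polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Hello, welcome! My name is iVerse.",  # illustrative line from Appendix A
    VoiceId="Joanna",                           # "Matthew" for the male-voice conditions
    OutputFormat="mp3",
)

# Save the synthesized speech for later synchronization with the driving video.
with open("avva_utterance.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```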
City Car Driving software was used to generate the driving simulation (see Figure 2). This software is commercially available and allows users to practice basic driving skills in a realistic city environment [41]. To generate an autonomous car experience, a driving simulation scene was recorded while a researcher drove the car. This recording, along with verbal prompts provided by an AVVA, was played back during the study.

3.3. Measurements

3.3.1. Manipulation Check

Perceptions of the AVVA as informative (“The virtual agent focused on providing me with driving-related information” and “The virtual agent primarily talked to me about driving-related information”) and as sociable (“The virtual agent was interested in socializing with me” and “The virtual agent was sociable”) were used to check the manipulations as well as in the analyses. Composites were created from the means of the informative items (Cronbach’s alpha = 0.87) and the social items (Cronbach’s alpha = 0.86).
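A minimal sketch of how such composites and reliabilities can be computed is shown below; the item-level values and column names are hypothetical, so this only illustrates the composite and Cronbach’s alpha calculations rather than reproducing the authors’ analysis.

```python
import pandas as pd

# Hypothetical responses to the two informativeness items (5-point scale).
df = pd.DataFrame({
    "informative_1": [5, 4, 2, 3, 4],
    "informative_2": [4, 4, 3, 2, 5],
})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

alpha = cronbach_alpha(df)  # the paper reports alpha = 0.87 for this measure with the real data
df["informative_composite"] = df[["informative_1", "informative_2"]].mean(axis=1)
print(round(alpha, 2), df["informative_composite"].tolist())
```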

3.3.2. Perceived Ease of Use

PEU was measured with four items adapted from a previous study [11]: “It would be easy to learn how to operate an autonomous car,” “I would find it easy to get an autonomous car to do what I want it to do,” “Interacting with an autonomous car would not require a lot of my mental effort,” and “I would find it easy to use an autonomous car” (Cronbach’s alpha = 0.84).

3.3.3. Perceived Usefulness

PU was measured with four items adapted from a previous study [11]: “Using an autonomous car would increase my productivity,” “Using an autonomous car would increase my driving performance,” “Using an autonomous car would enhance my effectiveness on the driving task,” and “Using an autonomous car would be useful for driving” (Cronbach’s alpha = 0.87).

3.3.4. Intention of Adoption

Intention for future use was measured with a composite measure derived from a previous study [16]: “I intend to use an autonomous car in the future,” “I expect that I would use an autonomous car in the future,” and “I plan to use an autonomous car in the future” (Cronbach’s alpha = 0.96). A 5-point Likert scale was used for all measurements.

4. Results

Two manipulation checks were conducted. AVVA style significantly influenced the perception of the AVVA as informative, F(1, 156) = 55.98, p < 0.001, partial eta-squared = 0.33, with informative perception being higher in the informative AVVA condition (M = 4.02, SD = 0.86) than in the sociable AVVA condition (M = 2.83, SD = 0.84). Perception of the AVVA as social also differed by style, F(1, 156) = 61.65, p < 0.001, partial eta-squared = 0.28, with social perception being higher in the sociable AVVA condition (M = 3.17, SD = 0.83) than in the informative AVVA condition (M = 1.99, SD = 1.06). Neither AVVA gender nor the interaction between AVVA style and gender significantly influenced these manipulation check measures. These results suggest the manipulations were successful.
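To illustrate how such a manipulation check could be run, the sketch below fits a 2 × 2 factorial ANOVA with statsmodels on simulated stand-in data; the column names and simulated values are hypothetical and the exact model specification used by the authors may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one row per participant, with condition labels and the
# informativeness manipulation-check composite (5-point scale).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "style":  rng.choice(["informative", "social"], size=158),
    "gender": rng.choice(["male", "female"], size=158),
})
df["informative_composite"] = (
    np.where(df["style"] == "informative", 4.0, 2.8) + rng.normal(0, 0.85, 158)
)

# Factorial ANOVA: style, gender, and their interaction predicting the composite.
model = smf.ols("informative_composite ~ C(style) * C(gender)", data=df).fit()
print(anova_lm(model, typ=2))
```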
The hypotheses were tested through structural equation modeling using AMOS (v. 20). To test the interaction effect of AVVA style and gender, we used a contrast coefficient approach. We coded matched conditions (i.e., informative = 1 × male = 1 and social = −1 × female = −1) as 1 and mismatched conditions (i.e., informative = 1 × female = −1 and social = −1 × male = 1) as −1.
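The contrast coding can be illustrated as follows; the condition labels and column names are hypothetical, but the coding mirrors the description above (stereotype-matched conditions receive +1, mismatched conditions receive −1).

```python
import pandas as pd

# Hypothetical condition labels per participant.
df = pd.DataFrame({
    "style":  ["informative", "social", "informative", "social"],
    "gender": ["male", "female", "female", "male"],
})

# Effect-code each factor (+1 / -1), then form the interaction contrast:
# matched (informative-male, social-female) = +1; mismatched = -1.
df["style_code"] = df["style"].map({"informative": 1, "social": -1})
df["gender_code"] = df["gender"].map({"male": 1, "female": -1})
df["match_contrast"] = df["style_code"] * df["gender_code"]

print(df)
```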
We checked the fit of the model. For cross-sectional research, it is suggested to report the Root Mean Square Error of Approximation (RMSEA), Tucker–Lewis Index (TLI), and Comparative Fit Index (CFI) [42]. Regarding model fit criteria, RMSEA values of 0.01, 0.05, and 0.08 indicate excellent, good, and mediocre fit, respectively [43]. For TLI and CFI, values above 0.95 indicate an excellent model fit [44]. The results showed that the proposed model fit the data well, χ2 = 107.21, df = 71, p = 0.004, RMSEA = 0.057, TLI = 0.967, CFI = 0.975.
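As a rough sanity check, the reported RMSEA can be recomputed from the reported chi-square, degrees of freedom, and sample size using a common formula for the sample RMSEA estimate, RMSEA = sqrt(max(χ2 − df, 0) / (df × (N − 1))); this is an illustration of the index, not the software’s exact computation.

```python
from math import sqrt

# Reported values: chi-square = 107.21, df = 71, N = 158.
chi2, dof, n = 107.21, 71, 158
rmsea = sqrt(max(chi2 - dof, 0) / (dof * (n - 1)))
print(round(rmsea, 3))  # ~0.057, matching the value reported in the text
```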
Supporting H1—the PEU of an autonomous car will positively influence the PU of the autonomous car—PEU significantly influenced PU, β = 0.88, SE = 0.08, p < 0.001. Participants who perceived autonomous cars as easy to use were more likely to perceive autonomous cars as useful.
Supporting H2a and H2b—intention of adopting an autonomous car will be influenced by (a) the PEU and (b) the PU of the autonomous car—PEU significantly influenced intention of adoption, β = 0.37, SE = 0.17, p = 0.014, as did PU, β = 0.51, SE = 0.18, p < 0.001. Participants expressed higher autonomous car adoption intent when they perceived greater autonomous car ease of use and usefulness.
Providing no support for H3 (a sociable AVVA will induce more PEU than an informative AVVA), AVVA style did not influence perceived autonomous car ease of use, β < 0.003, SE = 0.09, p = 0.97.
There was also no evidence supporting H4 (an informative AVVA will induce more PU than a sociable AVVA). AVVA style did not influence perceived autonomous car usefulness, β = −0.016, SE = 0.05, p = 0.78.
Supporting H5—AVVA gender will moderate the influence of AVVA style on PEU—AVVA gender moderated the influence of AVVA style on perceived autonomous car ease of use such that an informative male AVVA and a sociable female AVVA were perceived as easier to use than an informative female AVVA and a sociable male AVVA, β = 0.17, SE = 0.09, p < 0.05.
Providing no support for H6—AVVA gender will moderate the influence of AVVA style on PU—no moderation effect was found, β = −0.03, SE = 0.06, p = 0.61 (See Figure 3 for the graphical representation of the results).
Providing no support for H7—PEU and PU will mediate the influence of AVVA style and the AVVA gender moderating effect on the intention of adopting an autonomous car—PEU and PU mediated neither the influence of AVVA style, β = −0.006, CI = [−0.15, 0.13], nor the AVVA gender moderating effect, β = 0.13, CI = [−0.02, 0.27], on autonomous car adoption intention.
Regarding H8a and H8b—PEU will mediate (a) the influence of AVVA style and (b) AVVA gender (moderating effect) on PU—PEU was not found to mediate the influence of AVVA style on PU, β = 0.003, CI = [−0.14, 0.16]. However, PEU mediated the influence of the AVVA gender moderating effect on PU, β = 0.15, CI = [0.003, 0.30]. The moderating effect between AVVA’s style and gender indirectly influenced PU through PEU. In other words, the finding that stereotypically matched conditions (informative male or sociable female AVVA) led to greater PU than mismatched conditions (sociable male or informative female AVVA) was mediated by PEU.
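These mediation tests rest on confidence intervals for indirect effects. As a generic illustration of how a percentile-bootstrap interval for an indirect effect (match contrast → PEU → PU) can be obtained, the sketch below uses simulated stand-in data and simple OLS paths; it does not reproduce the AMOS estimation used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_indirect(x, m, y, n_boot=2000):
    """Percentile-bootstrap CI for the indirect effect x -> m -> y (a*b),
    estimated with OLS paths on each resample."""
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]  # path a: x -> m
        # path b: m -> y controlling for x
        b = np.linalg.lstsq(np.column_stack([xb, mb, np.ones(n)]), yb, rcond=None)[0][1]
        estimates.append(a * b)
    return np.percentile(estimates, [2.5, 97.5])

# Hypothetical simulated data standing in for the match contrast (x), PEU (m), and PU (y).
x = rng.choice([-1, 1], size=158).astype(float)
m = 0.2 * x + rng.normal(size=158)
y = 0.8 * m + rng.normal(size=158)
print(bootstrap_indirect(x, m, y))
```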

5. Discussion

This research explored how autonomous vehicle voice agent (AVVA) design influences autonomous vehicle passenger (AVP) intention to adopt autonomous vehicles. Results suggest that AVVA design influences perceptions of the autonomous vehicle, as reflected by the core factors of the technology acceptance model, perceived ease of use (PEU) and perceived usefulness (PU), both of which strongly predicted autonomous vehicle adoption intention. No evidence was found for the predicted main effects of AVVA style (informative versus sociable) on PEU or PU. However, results indicated that AVVA gender moderated the relationship between AVVA informativeness and sociability on PEU (directly) and PU (indirectly, through PEU) in ways that were consistent with gender stereotypes. These results offer new insights into the role of stereotype consistency in the technology acceptance model as well as the importance of considering agent style and gender in the design of voice agents.
Participants perceived an autonomous vehicle as easier to use and more useful when there was stereotypical consistency between the AVVA style and gender. Namely, consistent conditions (informative male AVVA and social female AVVA) fostered greater PEU and PU than inconsistent conditions (social male AVVA and informative female AVVA). This is consistent with previous studies which have found that gender stereotypes guide the ways that people respond to virtual agents [20,33,34], such as the perception that male-voiced computers are generally more dominant and influential, but female-voiced computers are trusted and preferred more when discussing stereotypically feminine topics, such as love and relationships [22,35].
The present research makes a contribution beyond these previous studies by illustrating that stereotypical consistency in a voice agent influences not only the perception of the voice itself, but also the PEU and PU of the technology that the voice agent represents. Given the strong influence of PEU and PU on adoption intention, this research suggests that stereotypical consistency is an important consideration when examining technology adoption, especially in the context of autonomous technologies represented by voice agents.
This finding is consistent with the notion that more intuitive interfaces increase PEU [45]. The CASA (computer as social actor) paradigm suggests that people mindlessly apply various social interaction rules, such as gender stereotypes, to human–computer interaction [19,20,21]. In other words, interfaces that utilize stereotypes facilitate mindless responses that foster more heuristic-based interactions which ease cognitive efforts that individuals would otherwise spend to understand the interface. Increasing PEU helps people identify the usefulness of the technology and ultimately increase the intention to adopt, particularly in the early stages of the adoption process [46]. Through social interaction, people develop schemas that can help them more easily interact with and understand their surroundings. The study results imply that designing a voice agent to be more congruent with social role expectations may help people use the technology more easily, which leads to a greater perception that the technology is useful, ultimately leading to greater adoption intention.
However, we do not mean to suggest that designers should replicate and thus reinforce gender or any other social-role stereotypes. In fact, designers have the power to shape the social norms that guide expectations regarding social roles. Just as perceptions of social norms are influenced by depictions of archetypal individuals and groups in popular media, such as television (e.g., [47]) or video games (e.g., [48]), interactions with voice agents have the potential to influence status quo perceptions outside of media use. In other words, complementing the idea that our understandings of human–human interaction guide our interactions with technology [18], our interactions with social technologies also affect our understandings of human–human interactions. Thus, designers’ choices of whether to rely on or move beyond stereotypes in their autonomous agent interfaces have real potential outcomes for social interaction in our society. Stereotyping, or the reliance on limited information to make broad generalizations about individuals and groups, is harmful to groups and individuals. Although humans are cognitive misers who prefer to use heuristics to minimize effort during decision making, people are also averse to biased thinking and would prefer to act in ways that reflect cognitive complexity [49]. Thus, designers have an incentive to counteract or disconfirm stereotypes in their designs, at least to some extent. In the present context, this could mean offering autonomous agents who are equally informative and sociable, regardless of gender. Furthermore, the present research did not compare degrees of informativeness or sociability. Future research should attempt to identify the extent to which a sociable female voice agent can reflect informative functionality before suffering reductions in PEU and PU.
Limitations of this research include the sample, the fidelity of the simulation technology utilized, and the flexibility of the AVVA technology utilized. First, this study was conducted with a college student sample. This population is potentially not representative of the autonomous vehicle adopter target market (e.g., because students tend to have lower incomes). Future research should use older samples who have a higher likelihood of using such vehicles. Second, this study was conducted as an online study on the participants’ own devices. Because of this constraint, the modality may not have felt realistic enough for participants to respond in ways that were externally valid. Thus, future research on this topic should be conducted in more immersive simulators. Finally, the method of providing the AVVA—a pre-recorded driving scenario and set of verbal instructions—only offered a single driving route and scenario. While this scenario was designed to represent a typical driving experience in a low-traffic city, a chance exists that this specific scenario influenced participants in ways that would not generalize to other scenarios. Thus, future research should be conducted in other driving contexts.
These limitations notwithstanding, the present research provides an exploratory examination that yields unique insights about the aspects of AVVA design that influence autonomous vehicle perceptions and adoption. Future research can build on these findings to develop more targeted, externally valid examinations of the relationships explored here.

Author Contributions

Conceptualization, S.L. and R.R.; methodology, S.L., T.P. and R.R.; software, S.L. and T.P.; formal analysis, S.L. and R.R.; resources, T.P. and R.R.; writing—original draft preparation, S.L.; writing—review and editing, S.L., T.P., and R.R.; supervision, S.L.

Acknowledgments

The AT&T endowment to the Media & Information Department at Michigan State University partially supported this project through Ratan’s AT&T Scholar position. Special thanks to our undergraduate researchers, Ian Crist and Daniel Anderson, for their support in the research process.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Virtual Agent Voice Script.

#1 Before Starting
Task-Oriented VA: Hello, welcome! My name is iVerse. I am a virtual agent that will drive this autonomous car. My primary goal is to take you to the designated destination with safety. It seems you are ready. I will start the car.
Sociable VA: Hello, welcome! My name is iVerse. I am a virtual agent that will drive this autonomous car. Thank you for riding along with me. It seems you are ready. I will start the car.

#2 Starting to Drive the Car
Task-Oriented VA: The destination has been set to City Mall in downtown. The mall is 3 miles away from here. It is estimated to take 5 minutes to get there. Currently, the weather is 65 degrees Fahrenheit and sunny.
Sociable VA: I hope you will enjoy this autonomous driving experience. Currently, the weather is 65 degrees Fahrenheit and sunny. I am excited to drive with you in this perfect weather.

#3 Going Straight (1)
Task-Oriented VA: The speed limit on the road is 35 miles per hour. I am currently driving at 33 miles per hour speed.
Sociable VA: Let me tell you more about myself. I was invented by a research team at [Anonymized] about a month ago. So, I do not have many friends, but I think I just made one!

#4 Changing Lanes
Task-Oriented VA: I will change the line to the left and then I will turn left in 500 feet.
Sociable VA: Driving is a demanding task. I am happy to help relieve your stress from driving.

#5 Turning (1)
Task-Oriented VA: I will turn to the left.
Sociable VA: Isn’t it funny how red, white, and blue represent freedom… until they’re flashing behind us. I am kidding.

#6 Turning (2)
Task-Oriented VA: I will turn to the left.
Sociable VA: I am going to turn left here. I always like making turns when I’m driving.

#7 Traffic Signal (1)
Task-Oriented VA: The red traffic signal is ahead, I will slow down the speed to stop.
Sociable VA: For some reasons, the red light makes me hungry. I hope you have a nice meal today.

#8 Traffic Signal (2)
Task-Oriented VA: The red traffic signal is ahead, I will slow down the speed to stop.
Sociable VA: Another red light. I hope you’re feeling comfortable with this drive.

#9 Going Straight (2)
Task-Oriented VA: We will arrive at the destination in 1 min.
Sociable VA: I like this city. People are nice to me, like you.

#10 Turning (3)
Task-Oriented VA: I will turn to the left.
Sociable VA: We’re getting close to the destination. I’ll be sad to see you go.

#11 Pedestrian
Task-Oriented VA: A pedestrian is ahead, I will slow down the speed.
Sociable VA: It’s been fun. I hope you also enjoyed the autonomous driving experience with me.

#12 Before Arrival
Task-Oriented VA: The destination is right in front of us. Please keep your seat belt fastened until we stop completely.
Sociable VA: The destination is right in front of us. Please keep your seat belt fastened until we stop completely.

#13 Arrival
Task-Oriented VA: We have arrived at our destination. We have traveled 3 miles with 35 miles per gallon fuel efficiency. Thank you.
Sociable VA: We have arrived at our destination. Thank you for using this autonomous vehicle today. I hope you enjoyed the ride and that I will see you again soon.

References

  1. Muoio, D. 19 Companies Racing to Put Self-Driving Cars on the Road by 2021. Available online: https://www.businessinsider.com/companies-making-driverless-cars-by-2020-2016-10 (accessed on 5 February 2019).
  2. Kang, C. Self-Driving Cars Gain Powerful Ally: The Government. Available online: https://www.nytimes.com/2016/09/20/technology/self-driving-cars-guidelines.html (accessed on 5 February 2019).
  3. U.K. Department for Transport. The Pathway to Driverless Cars: Summary Report and Action Plan; U.K. Department for Transport: London, UK, 2015.
  4. Govt. Approves Pilot Run of Samsung’s Self-Driving Car. Available online: https://en.yna.co.kr/view/AEN20170501002000320 (accessed on 5 February 2019).
  5. Schoettle, B.; Sivak, M. A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the US, the UK, and Australia; Transportation Research Institute: Ann Arbor, MI, USA, 2014. [Google Scholar]
  6. Howard, D.; Dai, D. Public Perceptions of Self-Driving Cars: The Case of Berkeley, California. In Proceedings of the Transportation Research Board 93rd Annual Meeting, Washington, DC, USA, 12–16 January 2014; University of California, Berkeley: Berkeley, CA, USA, 2014; Volume 14. [Google Scholar]
  7. Rogers, E.M. Diffusion of Innovations, 4th ed.; Simon and Schuster: New York, NY, USA, 2010. [Google Scholar]
  8. Carter, L.; Bélanger, F. The Utilization of E-Government Services: Citizen Trust, Innovation and Acceptance Factors. Inf. Syst. J. 2005, 15, 5–25. [Google Scholar] [CrossRef]
  9. Verberne, F.M.F.; Ham, J.; Midden, C.J.H. Trust in Smart Systems: Sharing Driving Goals and Giving Information to Increase Trustworthiness and Acceptability of Smart Systems in Cars. Hum. Factors 2012, 54, 799–810. [Google Scholar] [CrossRef] [PubMed]
  10. Nass, C.; Moon, Y. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  11. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  12. Venkatesh, V.; Davis, F.D. A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Manag. Sci. 2000, 46, 186–204. [Google Scholar] [CrossRef]
  13. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  14. King, W.R.; He, J. A Meta-Analysis of the Technology Acceptance Model. Inf. Manag. 2006, 43, 740–755. [Google Scholar] [CrossRef]
  15. Heerink, M.; Kröse, B.; Evers, V.; Wielinga, B. Assessing Acceptance of Assistive Social Agent Technology by Older Adults: The Almere Model. Int. J. Soc. Robot. 2010, 2, 361–375. [Google Scholar] [CrossRef]
  16. Choi, J.K.; Ji, Y.G. Investigating the Importance of Trust on Adopting an Autonomous Vehicle. Int. J. Hum. Comput. Interact. 2015, 31, 692–702. [Google Scholar] [CrossRef]
  17. Lee, K.M. Presence, Explicated. Commun. Theory 2004, 14, 27–50. [Google Scholar] [CrossRef]
  18. Reeves, B.; Nass, C. The Media Equation: How People Treat Computers, Television, and New Media like Real People and Places; Center for the Study of Language and Information Publications: Stanford, CA, USA, 1996. [Google Scholar]
  19. Nass, C.; Lee, K.M. Does Computer-Generated Speech Manifest Personality? An Experimental Test of Similarity-Attraction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’00, The Hague, The Netherlands, 1–6 April 2000; ACM: New York, NY, USA, 2000; pp. 329–336. [Google Scholar] [CrossRef]
  20. Lee, E.J.; Nass, C.; Brave, S. Can Computer-Generated Speech Have Gender? In Proceedings of the CHI ’00 Extended Abstracts on Human Factors in Computing Systems, CHI ’00, The Hague, The Netherlands, 1–6 April 2000. [Google Scholar] [CrossRef]
  21. Lee, K.M.; Peng, W.; Jin, S.-A.; Yan, C. Can Robots Manifest Personality?: An Empirical Test of Personality Recognition, Social Responses, and Social Presence in Human–Robot Interaction. J. Commun. 2006, 56, 754–772. [Google Scholar] [CrossRef]
  22. Nass, C.; Moon, Y.; Green, N. Are Machines Gender Neutral? Gender-Stereotypic Responses to Computers with Voices. J. Appl. Soc. Psychol. 1997, 27, 864–876. [Google Scholar] [CrossRef]
  23. Kahneman, D.; Frederick, S. Representativeness Revisited: Attribute Substitution in Intuitive Judgment. In Heuristics and Biases: The Psychology of Intuitive Judgment; Cambridge University Press: Cambridge, UK, 2002; pp. 49–81. [Google Scholar]
  24. De Graaf, M.M.A.; Allouch, S.B. The Relation between People’s Attitude and Anxiety towards Robots in Human-Robot Interaction. In Proceedings of the 2013 IEEE RO-MAN, Gyeongju, Korea, 26–29 August 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 632–637. [Google Scholar] [CrossRef]
  25. Endsley, M.R. Toward a Theory of Situation Awareness in Dynamic Systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  26. Crowell, C.R.; Villano, M.; Scheutz, M.; Schermerhorn, P. Gendered Voice and Robot Entities: Perceptions and Reactions of Male and Female Subjects. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 3735–3741. [Google Scholar]
  27. Nass, C.; Steuer, J.; Tauber, E.R. Computers Are Social Actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’94, Boston, MA, USA, 24–28 April 1994; ACM: New York, NY, USA, 1994; pp. 72–78. [Google Scholar] [CrossRef]
  28. Aries, E.J.; Johnson, F.L. Close Friendship in Adulthood: Conversational Content between Same-Sex Friends. Sex Roles 1983, 9, 1183–1196. [Google Scholar] [CrossRef]
  29. Briton, N.J.; Hall, J.A. Beliefs about Female and Male Nonverbal Communication. Sex Roles 1995, 32, 79–90. [Google Scholar] [CrossRef]
  30. Furumo, K.; Pearson, J.M. Gender-Based Communication Styles, Trust, and Satisfaction in Virtual Teams. J. Inf. Inf. Technol. Organ. 2007, 2, 47–61. [Google Scholar] [CrossRef]
  31. Kramer, C. Perceptions of Female and Male Speech. Lang. Speech 1977, 20, 151–161. [Google Scholar] [CrossRef] [PubMed]
  32. Eagly, A.H.; Karau, S.J. Role Congruity Theory of Prejudice toward Female Leaders. Psychol. Rev. 2002, 109, 573–598. [Google Scholar] [CrossRef] [PubMed]
  33. Park, E.K.; Lee, K.M.; Shin, D.H. Social Responses to Conversational TV VUI. Int. J. Technol. Hum. Interact. 2015, 11, 17–32. [Google Scholar] [CrossRef]
  34. Mullennix, J.W.; Stern, S.E.; Wilson, S.J.; Dyson, C.-L. Social Perception of Male and Female Computer Synthesized Speech. Comput. Hum. Behav. 2003, 19, 407–424. [Google Scholar] [CrossRef]
  35. Lee, E.-J. Effects of “gender” of the Computer on Informational Social Influence: The Moderating Role of Task Type. Int. J. Hum. Comput. Stud. 2003, 58, 347–362. [Google Scholar] [CrossRef]
  36. Venkatesh, V. Determinants of Perceived Ease of Use: Integrating Control, Intrinsic Motivation, and Emotion into the Technology Acceptance Model. Inf. Syst. Res. 2000, 11, 342–365. [Google Scholar] [CrossRef]
  37. Davis, F.D. User Acceptance of Information Technology: System Characteristics, User Perceptions and Behavioral Impacts. Int. J. Man. Mach. Stud. 1993, 38, 475–487. [Google Scholar] [CrossRef]
  38. Amazon Polly. Available online: https://aws.amazon.com/polly/ (accessed on 5 February 2019).
  39. Audacity. Available online: https://www.audacityteam.org/about/ (accessed on 5 February 2019).
  40. Van Dolen, W.M.; Dabholkar, P.A.; de Ruyter, K. Satisfaction with Online Commercial Group Chat: The Influence of Perceived Technology Attributes, Chat Group Characteristics, and Advisor Communication Style. J. Retail. 2007, 83, 339–358. [Google Scholar] [CrossRef]
  41. Forward Development. City Car Driving. Available online: https://store.steampowered.com/app/493490/City_Car_Driving/ (accessed on 4 May 2018).
  42. Schreiber, J.B.; Nora, A.; Stage, F.K.; Barlow, E.A.; King, J. Reporting Structural Equation Modeling and Confirmatory Factor Analysis Results: A Review. J. Educ. Res. 2006, 99, 323–338. [Google Scholar] [CrossRef]
  43. MacCallum, R.C.; Browne, M.W.; Sugawara, H.M. Power Analysis and Determination of Sample Size for Covariance Structure Modeling. Psychol. Methods 1996, 1, 130. [Google Scholar] [CrossRef]
  44. Marsh, H.W.; Wen, Z.; Hau, K.-T.; Nagengast, B. Structural Equation Models of Latent Interaction and Quadratic Effects. In Structural Equation Modeling: A Second Course; IAP Information Age Publishing: Charlotte, NC, USA, 2006; pp. 225–265. [Google Scholar]
  45. Cho, V.; Cheng, T.E.; Lai, W.J. The Role of Perceived User-interface Design in Continued Usage Intention of Self-paced E-learning Tools. Comput. Educ. 2009, 53, 216–227. [Google Scholar] [CrossRef]
  46. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
  47. Scharrer, E.; Blackburn, G. Cultivating Conceptions of Masculinity: Television and Perceptions of Masculine Gender Role Norms. Mass Commun. Soc. 2018, 21, 149–177. [Google Scholar] [CrossRef]
  48. Fox, J.; Potocki, B. Lifetime Video Game Consumption, Interpersonal Aggression, Hostile Sexism, and Rape Myth Acceptance: A Cultivation Perspective. J. Interpers. Violence 2016, 31, 1912–1931. [Google Scholar] [CrossRef] [PubMed]
  49. De Neys, W.; Rossi, S.; Houdé, O. Bats, Balls, and Substitution Sensitivity: Cognitive Misers Are No Happy Fools. Psychon. Bull. Rev. 2013, 20, 269–273. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Model of hypothesized relationships between manipulated and measured factors of interest.
Figure 2. A capture from the simulation.
Figure 3. Results of the structural equation model.
