Review

Understanding Users’ Acceptance of Artificial Intelligence Applications: A Literature Review

1 School of Information Science and Engineering, NingboTech University, Ningbo 315100, China
2 Nottingham University Business School China, University of Nottingham Ningbo China, Ningbo 315100, China
3 Business School, Ningbo University, Ningbo 315211, China
4 School of Management, Zhejiang University, Hangzhou 310058, China
* Author to whom correspondence should be addressed.
Behav. Sci. 2024, 14(8), 671; https://doi.org/10.3390/bs14080671
Submission received: 24 June 2024 / Revised: 30 July 2024 / Accepted: 1 August 2024 / Published: 2 August 2024
(This article belongs to the Topic Online User Behavior in the Context of Big Data)

Abstract

In recent years, with the continuous expansion of artificial intelligence (AI) application forms and fields, users’ acceptance of AI applications has attracted increasing attention from scholars and business practitioners. Although extant studies have extensively explored user acceptance of different AI applications, there is still a lack of understanding of the roles played by different AI applications in human–AI interaction, which may limit the understanding of inconsistent findings about user acceptance of AI. This study addresses this issue by conducting a systematic literature review of AI acceptance research in leading journals of the Information Systems and Marketing disciplines from 2020 to 2023. Based on a review of 80 papers, this study contributes by (i) providing an overview of the methodologies and theoretical frameworks utilized in AI acceptance research; (ii) summarizing the key factors, potential mechanisms, and theorization of users’ acceptance responses to AI service providers and AI task substitutes, respectively; and (iii) discussing the limitations of extant research and providing guidance for future research.

1. Introduction

The rapid development of artificial intelligence (AI) has provided rich opportunities for industrial development and social progress. With the expectation of shaping commercial value, improving versatility, and promoting efficiency, AI technology is now increasingly applied in online retailing [1], customer service [2,3], digital innovation [4,5], and management support [6,7,8]. Industry reports show that the scale of China’s artificial intelligence industry reached CNY 195.8 billion in 2022, and it is expected to reach CNY 612.2 billion by 2027 [9]. It is worth noting that the success of AI implementation relies not only on technological progress but also on users’ acceptance [10,11]. Although technology implementers place high expectations on AI to improve user experience and performance, AI applications in multiple fields have reported low actual usage rates [12,13]. Thus, it is vital to understand users’ reactions to AI applications and analyze the factors relevant to users’ acceptance behavior [2,14].
Artificial intelligence (AI) is “the frontier of computational advancements that references human intelligence in addressing ever more complex decision-making problems” [15]. AI applications are thus “able to perform tasks that require cognition and were formerly typically associated with humans” [16]. Numerous studies have examined user acceptance of AI applications but revealed mixed results. For example, You, et al. [17] found that users appreciated algorithmic advice more than human advice. In contrast, Longoni, et al. [18] indicated that users were reluctant to use medical AI in both hypothetical and real choices. Furthermore, Garvey, et al. [19] showed that users accepted an AI agent more when receiving bad news, but responded more positively to a human agent when receiving good news. Meanwhile, although a large body of research on AI acceptance has focused on various AI application forms (e.g., chatbots, AI-based decision-making systems, AI-embedded smart home devices, and autonomous vehicles), there is no consensus on what distinguishes these forms from one another and what roles these AI applications play in human–AI interactions. However, differences in users’ attributions, perceptions, and acceptance criteria exist among these different AI application forms. For instance, research has indicated that user acceptance of an AI system designed to deliver improved services is associated with perceptions of algorithmic credibility, usefulness, convenience, and trust [20], while users’ attitudes toward AI systems that replace users in completing tasks may result from perceived threat, performance risk, and inertia [21]. This indicates that differences related to user acceptance exist between forms of AI applications, in particular, in whether users treat AI as a service provider or a task substitute. Accordingly, we categorize AI applications into AI service providers and AI task substitutes. Specifically, an AI service provider is an AI application that provides services to users in place of a human agent or ordinary product [17,18,19], such as AI providing shopping recommendations, customer service, and advising services. AI task substitutes are AI applications that replace users in completing certain tasks [18,22,23], such as AI-generated diagnostic systems, AI-based substitutive decision-making systems, and AI teammates. Decades of experience with new technology implementation suggest that the role of the technology is an important determinant of user acceptance [24,25]. Despite this, very few attempts have been made to synthesize the extant research from this perspective.
To fill this gap, this study aims to analyze the literature regarding users’ acceptance attitudes toward AI service providers and AI task substitutes. A comprehensive review of users’ acceptance of AI applications can help identify the collective knowledge of the extant literature, improve understanding of the mixed findings, and provide guidance for future investigations of this important and relevant issue of AI technology implementation. Several literature reviews have been published on AI in relation to organizational strategy [16], business value [26], and the future of work [27]. To the best of our knowledge, this study differs from prior review research on AI implementation by providing a systematic literature review of AI acceptance from the end user’s perspective rather than focusing on the objectives of technology implementers. Additionally, prior work did not explicitly discern the roles of AI (e.g., AI service providers and task substitutes). In this article, we (i) provide an overview of the existing methodologies and theoretical frameworks used to investigate AI acceptance; (ii) synthesize the key factors, potential mechanisms, and theorizing logics underlying users’ acceptance responses to AI service providers and task substitutes, respectively; and (iii) propose opportunities for future research.
The paper is organized as follows. The process of literature identification and selection is first explained in Section 2, along with the journal distribution, methodology overview, and outcome variables of the reviewed studies. Section 3 analyzes users’ different attitudes toward AI service providers and AI task substitutes, respectively. Section 4 summarizes the theoretical frameworks used in the reviewed papers. Finally, Section 5 discusses gaps and limitations of the extant literature, future research directions, and limitations of the present paper, and ends with a conclusion.

2. Methods

The flowchart based on the PRISMA guidelines (Figure 1) illustrates the process of searching, screening, and ultimately selecting articles for this study. The final selection includes articles from the leading journals listed below, classified by nine major research methods. Furthermore, this section lists the types of outcome variables concerning users’ acceptance of AI.

2.1. Literature Identification and Selection

Given the huge volume and variety of AI research, our search was restricted to papers published between 2020 and 2023 and was conducted in leading journals in the marketing, information systems, and behavioral science domains, including Management Science, Marketing Science, MIS Quarterly, Information Systems Research, Journal of Marketing, Journal of Marketing Research, Journal of Consumer Research, Journal of the Association for Information Systems, Journal of Management Information Systems, International Journal of Information Management, Information & Management, Computers in Human Behavior, and Decision Support Systems. These journals were selected for their outstanding contributions to technology-acceptance-related knowledge [28]. In particular, Computers in Human Behavior, a leading journal in the behavioral sciences, is well regarded for its authoritative contributions, high impact, and relevance to our study’s focus on technology acceptance. Thus, our selection adequately covers significant contributions from both the behavioral sciences and information systems domains.
To ensure that no relevant paper was missed, the process began with an automated search for the key words “AI” and “artificial intelligence” in the journals mentioned above. After excluding duplicates, this process identified 515 articles. We then restricted the search to papers using “AI” or “artificial intelligence” as key words and excluded studies that merely mentioned AI, which left 249 papers. A manual screening was then conducted to ensure that only papers related to user acceptance were included. Because user acceptance is conceptualized in diverse ways across these papers, we read the abstracts and other relevant content of all papers to identify studies that either explicitly or implicitly focused on user acceptance of AI. Finally, 80 papers were included in our analysis. The PRISMA flow diagram summarizes the literature selection process (Figure 1).
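To make the funnel above concrete, the following minimal sketch (in Python) shows how the keyword-based stages of such a screening could be scripted. The record structure, field names, and the relevance heuristic are illustrative assumptions only; in this review, the final relevance judgement was made by manually reading abstracts rather than by any script.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    # Hypothetical bibliographic record exported from a journal database.
    title: str
    abstract: str
    keywords: tuple
    doi: str

AI_TERMS = {"ai", "artificial intelligence"}

def looks_like_user_acceptance(record: Record) -> bool:
    # Crude stand-in for the manual abstract reading described in Section 2.1.
    cues = ("acceptance", "adoption", "intention to use", "resistance")
    return any(cue in record.abstract.lower() for cue in cues)

def screen(records: list) -> list:
    # Stage 1: remove duplicates returned by the automated search (515 unique articles).
    unique = list({r.doi: r for r in records}.values())
    # Stage 2: keep papers listing "AI" or "artificial intelligence" as a key word,
    # excluding papers that merely mention AI in passing (249 papers).
    keyword_hits = [r for r in unique
                    if AI_TERMS & {k.lower() for k in r.keywords}]
    # Stage 3: relevance screening on user acceptance (80 papers in this review).
    return [r for r in keyword_hits if looks_like_user_acceptance(r)]
```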

2.2. Overview of Reviewed Studies

Table 1 summarizes the journal distribution and methodology overview of the reviewed studies. Sixteen publications appeared in UTD24 journals. Most studies were published in Computers in Human Behavior and the International Journal of Information Management. Most of the studies used quantitative methodologies (63 papers), 6 papers adopted qualitative approaches to explore user acceptance of AI, and 11 papers combined multiple methods (e.g., empirical estimation and controlled behavioral experiments, or surveys and lab experiments). Specifically, controlled behavioral experiments (29 papers) and surveys (26 papers) were the two main approaches in user acceptance research. Only 5 papers conducted field experiments. Among the qualitative studies, most employed case studies (3 papers), 2 papers used interviews, and 1 paper conducted a two-year longitudinal study. Among the mixed-method studies, 6 papers combined qualitative and quantitative methods, 4 papers conducted a series of experiments and one survey, and 1 paper tested the proposed model through empirical estimation on real-world data and 4 controlled experiments (see Table 2). Moreover, 1 paper used a game model to reveal how different experts accept AI tools.

2.3. Overview of Conceptualization

Regarding users’ acceptance of AI, the reviewed studies focused on a vast pool of outcome variables (see Table 3). We categorized the outcome variables as behaviors, behavioral intentions, and perceptions. A total of 14 publications used users’ real behaviors to investigate how users accept an AI service provider or task substitute, including AI acceptance behavior (5 papers), AI usage behavior (6 papers), purchase behavior (2 papers), and user performance after AI acceptance (1 paper). To analyze users’ real behaviors, these studies mainly relied on empirical estimation and field experiments. Most studies reported users’ behavioral intentions by means of surveys and controlled behavioral experiments. For example, research has examined users’ intention to accept AI (18 papers), intention to use AI (23 papers), intention to purchase after AI acceptance (3 papers), and intention to self-disclose in order to obtain better AI service (1 paper), as well as users’ tendency to perform better (4 papers) or to resist AI (3 papers). Additionally, several studies observed AI acceptance through the lens of user perception, such as attitude toward AI (6 papers), trust in AI (14 papers), and satisfaction with AI (6 papers).

3. Results of Literature Review on User Acceptance

In our review, we categorized the papers by the role of AI, that is, AI service provider or AI task substitute. Two research assistants categorized and coded the 80 papers according to AI’s role in human–AI interaction. When the two coders’ categorizations were consistent, the categorization was adopted; when they were inconsistent, the final categorization was determined after discussion. Finally, of the 80 papers, 61 were classified as studies on user acceptance of AI service providers (see Table 4), while 19 were categorized as research on user acceptance of AI task substitutes (see Table 5).
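The review reports agreement by discussion rather than a formal reliability statistic. Purely as an illustration of how such double coding could be checked quantitatively, the sketch below computes percent agreement and Cohen’s kappa for two coders assigning each paper to one of the two roles; the coder labels and counts are hypothetical and are not taken from the paper.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    # Percent agreement corrected for the agreement expected by chance,
    # given each coder's marginal label frequencies.
    assert len(coder_a) == len(coder_b) and len(coder_a) > 0
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of the 80 papers; the two lists disagree on three papers.
coder_1 = ["service_provider"] * 58 + ["task_substitute"] * 22
coder_2 = ["service_provider"] * 55 + ["task_substitute"] * 25

print(round(cohens_kappa(coder_1, coder_2), 2))  # prints 0.91 for this toy data
```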

3.1. Results of Literature Review on User Acceptance of AI Service Providers

Based on the 61 papers classified as studies on AI service providers, we summarize users’ acceptance responses to AI service providers and the key findings of this research. Based on our definition, AI advisors, AI-based service agents, and other AI-based applications that benefit people in various areas were identified as AI service providers in our analysis. For example, AI advisors include judge–advisor systems, AI-based recommenders, medical AI applications, etc. Examples of AI-based service agents include AI marketing agents, customer service chatbots, and AI-based smart healthcare services. Moreover, other AI service providers include AI instructors for online learning, AI coaches for sales training, AI-embedded mixed reality for retail shopping, etc.
Despite the calls for AI implementation for the future of society [100,101,102], our reviewed studies provide mixed evidence: the link between AI service providers and a high level of user acceptance was not always supported and was sometimes even reversed. Only 3 of the 61 papers reported a positive relationship between AI service providers and user acceptance. A total of 9 of the 61 papers provided evidence for users’ AI aversion responses. The majority of the reviewed studies (49 papers) reported conditional results.
Firstly, three studies showed experimental evidence for AI service provider appreciation. You, Yang and Li [17] found that users exhibit a strong algorithm appreciation; that is, people accept AI advice more than advice generated by humans even when the prediction errors of AI algorithms have been acknowledged. This is because people believe that an AI algorithm is able to give more accurate and reliable advice than humans, and they therefore exhibit higher trust in AI-provided advice. From the perspective of responsibility attribution, Gill [29] revealed that harm to a pedestrian caused by an autonomous vehicle is more acceptable to users, because the goal of self-protection allows them to remove themselves from moral responsibility. Schanke, Burtch and Ray [2] observed that consumers are more willing to accept and self-disclose to a chatbot with anthropomorphic features (i.e., humor, communication delays, and social presence). Taken together, users tend to accept AI service providers because of the expectation that AI is more accurate, reliable, and able to take responsibility for the harm it causes. Consumers may even increase their sensitivity to offers provided by AI service providers due to a fairness evaluation or negotiating mindset. In our reviewed papers, advantages in accuracy, reliability, and responsibility are the key factors that determine users’ appreciation for AI service providers; trust and satisfaction are the main mechanisms for forming positive user acceptance attitudes.
Secondly, nine papers observed a response of AI service provider aversion. A possible explanation is that people have doubts about the ability of artificial intelligence to understand human decision-making processes [36]. For example, Peng, van Doorn, Eggers, and Wieringa [30] found that consumers believed that AI is not competent in emotional support and, thus, were reluctant to accept AI services for warmth-requiring tasks. Similarly, Luo, Tong, Fang, and Qu [32] observed that although chatbots perform as effectively as proficient workers, the disclosure of chatbot identity reduces customer purchase rates. Mechanism exploration showed that consumers believe an AI-based service agent lacks knowledge and empathy. In the context of peer-to-peer lending, Ge, Zheng, Tian, and Liao [33] found that investors who need more help are less willing to accept AI-advising services. The authors speculate that the low transparency of AI-advising services may be the reason for this effect. Concerns about personal data security [35] and anxiety about healthcare [34] were also found to induce users’ rejection of AI. Another possible explanation is the concern about uniqueness neglect. For instance, Yalcin, Lim, Puntoni, and van Osselaer [31] showed that consumers respond less positively to an algorithmic decision maker, especially when the decision made by AI is favorable. An attribution process was proposed to explain this effect: consumers tend to deem a favorable decision made by a human as more reflective of their unique merits and, thus, feel more deserving of the favorable decision. However, algorithmic decision makers usually rely on preset criteria, so it is difficult to attribute a decision made by AI to one’s unique characteristics. Millet, Buehler, Du, and Kokkoris [37] identified that perceived threats to uniquely human characteristics (i.e., artistic creativity) lead to responses against AI art generators. In a more direct investigation of the effects of uniqueness neglect, nine studies in Longoni, Bonezzi, and Morewedge [18] revealed consistent results showing that consumers tend to refuse AI medical applications for healthcare due to uniqueness neglect. Specifically, the authors provided evidence that people believe that AI is less able to identify their unique characteristics and circumstances in medical contexts, which results in consumers’ reluctance to use AI medical services. Taken together, aversion to AI service providers may result from concerns about uniqueness neglect, the low perceived fit between AI and certain tasks, and the perceived incompetence of AI service providers. Uniqueness neglect, task fit, and algorithm performance are potential mechanisms for aversion to AI service providers.
Thirdly, most of the reviewed studies (49 papers) showed conditional results on users’ acceptance of AI service providers. This research further diverges into two streams. On the one hand, some studies focused on exploring the factors influencing AI service provider acceptance and mainly employed the survey method. By far, the most attention was paid to the perceived anthropomorphism of AI service providers, and related papers have consistently found a positive impact of anthropomorphism on users’ acceptance [48,50,51,96]. For example, Mishra, Shukla, and Sharma [48] showed that anthropomorphism has a positive impact on utilitarian attitude, which in turn increases acceptance of smart voice assistants. Pelau, Dabija, and Ene [51] revealed an indirect effect of anthropomorphism on the acceptance of AI devices, which is fully mediated by perceived empathy and interaction quality. Additionally, some studies focused on the role of perceived transparency, accountability, and fairness. Shin, Kee, and Shin [41] conceptualized fairness, explainability, accountability, and transparency as key components of algorithm awareness, and found that higher levels of algorithm awareness increased users’ trust in, and self-disclosure to, algorithmic platforms. Shin, Zhong, and Biocca [20] showed a positive relationship between AI service provider acceptance and users’ algorithmic experience, which was conceptualized as inherently related to fairness, transparency, and other components. Furthermore, various other factors were investigated in the reviewed studies, such as artificial autonomy [42], external locus of control [43], personalization [49], and user personality traits [50]. On the other hand, some studies focused on identifying the boundary conditions for AI service provider appreciation or aversion and mainly adopted experimental methodologies. Related studies demonstrated that design characteristics [1,56,75,78,103], goal orientations [57], types of service agents’ responses [54,55], and the assemblage of AI and humans [54,55,103] may significantly change users’ acceptance attitudes. For instance, Longoni and Cian [54] and Luo, Qin, Fang, and Qu [58] showed that users tend to be more accepting when AI is combined with humans. Tojib, Ho, Tsarenko, and Pentina [57] found that consumers with a higher desire for achievement tend to accept service robots more. Taken together, users’ acceptance choices can be changed or even switched by technology-related characteristics, contextual factors, user personality traits, design features of AI applications, and many other factors in AI service provider usage. Thus, although many efforts have been made to explore user acceptance of AI service providers and the underlying mechanisms, more research is needed to identify key factors, clarify mixed findings, and conceptualize new constructs that provide unique understandings of AI service provider acceptance.

3.2. Results of Literature Review on User Acceptance of AI Task Substitutes

AI task substitutes are widely applied in the digital innovation of organizations. How physicians take advantage of AI diagnostic testing systems and how employees respond to substitutive decision-making AI systems have aroused researchers’ interest. In our analysis, 19 of the 80 reviewed papers focused on users’ acceptance of AI task substitutes (see Table 5). Most of these articles focused on the contexts of healthcare and the future of work. Examples of AI task substitutes include AI-based autonomous tools, clinical decision support systems, AI-based hiring systems, etc. Regarding users’ acceptance responses to AI task substitutes, the reviewed studies also failed to reveal a consistent result. Of the 19 papers, 4 reported aversion responses to AI task substitutes, and 14 identified boundary conditions and antecedent factors for users’ acceptance. Surprisingly, only one article reported a completely positive attitude toward AI task substitutes.
Firstly, four studies found that users tend to resist AI task substitutes in many contexts. In clinical diagnostic decision making, Liang and Xue [84] provided evidence from a longitudinal field survey that physicians expressed AI resistance due to concern about face loss; beliefs about professional autonomy and time pressure can even strengthen these resistance intentions. Strich, Mayer, and Fiedler [83] revealed that the feeling of professional identity threat may result in employees’ reluctance to accept AI-based assistant systems. Apart from concerns about professional identity, factors related to AI usage barriers can also lead to negative responses to AI task substitutes. For example, Kim, Kim, Kwak, and Lee [22] found that employees may decline to use AI task substitutes because of the perception of technology overload (e.g., information overload, communication overload, and system feature overload). Taken together, users may not accept AI task substitutes because of concerns that AI task substitutes may be difficult to use, reduce interactions with colleagues, produce unexplainable results, make employers or customers doubt their occupational competence, and even replace them in the workplace. The difficulty of AI usage, concerns about face loss, and feelings of threat to professional identity may drive the negative effect (i.e., users’ aversion to AI task substitutes). Furthermore, although these studies revealed an AI-aversion attitude, factors that may eliminate the negative effects are still worth exploring.
Secondly, 14 articles explored the boundaries and factors of AI task substitute acceptance, mainly in three contexts: medical AI, the future of work, and human–robot interaction. In the context of medical AI, researchers mainly focused on why users (i.e., physicians) resist AI diagnostic systems and whether there are factors that can eliminate AI aversion. Results showed that users (i.e., physicians) tend to rely on their own judgements and resist AI task substitutes due to their self-esteem [84], self-expression of reputation and skill level [92], self-monitoring processes [87], and resistance to change and trust in AI [21]. However, monetary incentives and altruistic beliefs can eliminate the resistance to AI task substitutes [92], while beliefs about professional autonomy and time pressure strengthen the AI-aversion response [84]. For example, Dai and Singh [92] distinguished between high-type and low-type experts. Based on a game model, the authors found that low-type experts rely on AI advice more, while high-type experts tend to use their own diagnostic decisions in order to distinguish themselves from low-type ones. In the context of the future of work, various factors that determine users’ (i.e., employees’ and organizations’) acceptance attitudes toward AI in the workplace and teamwork have been investigated, including user perceptions of AI use (e.g., perceived threat, perceived restrictiveness, perceived autonomy) [21,84,91,94], user perceptions of AI (e.g., perceived capabilities of AI, performance expectancy) [21,90], and AI-enabled task/knowledge characteristics (e.g., skill variety, job complexity) [95]. For example, by interviewing senior managers, Hradecky, Kennell, Cai, and Davidson [88] revealed the key factors influencing AI adoption in the event industry, including organizational technological practices, financial resources, the size of the organization, issues of data management and protection, and the risk of the COVID-19 pandemic. In the context of human–robot interaction, scholars focused on the extent to which users are willing to accept AI as competitors or collaborators. For example, Harris-Watson, Larson, Lauharatanahirun, DeChurch, and Contractor [98] suggested that perceived competence, compared with perceived warmth, was a more decisive factor in users’ psychological acceptance of AI teammates. Dang and Liu [96] found that a malleable theory of the human mind increased users’ competitive responses to AI robots by reducing performance-avoidance goals, whereas it increased users’ cooperative responses to robots by inducing mastery goals. Hence, it can be seen that in different contexts of AI application, users’ acceptance of AI task substitutes is influenced by different factors. Future research should identify the specificity of the studied context and the characteristics of human–AI interaction in order to explore the decisive factors of users’ AI acceptance behavior in specific contexts.
Finally, only one paper reported uniformly positive attitudes toward AI task substitute acceptance. Specifically, Zhang, Chong, Kotovsky, and Cagan [85] found that users tend to trust AI teammates more than human teammates. Furthermore, it is worth noting that one paper explored whether customer acceptance or employee acceptance is more important for tourism practitioners in AI-related strategy development. Based on a field experiment, Fan, Gao, and Han [99] revealed the superiority of an imbalanced robotic strategy (i.e., focusing more on customer acceptance than on employee acceptance) over a balanced one in improving service quality, especially when customer demandingness is high. As prior research focused on either users’ acceptance of AI service providers or users’ acceptance of AI task substitutes, this research integrated both perspectives and answered the question of how to balance the perceptions of the two types of AI users, providing a new research perspective on the acceptance of different AI roles.

4. Theoretical Perspectives Applied to User Acceptance of AI

Based on our observation, the theoretical frameworks most commonly used in the reviewed articles are the technology acceptance model (TAM) and the extended technology acceptance theories (e.g., the decomposed theory of planned behavior and the unified theory of acceptance and use of technology). The TAM was proposed by Fred D. Davis in 1989 to explain user acceptance of computer technology [104] and is one of the most influential and robust theoretical models in the field of information technology acceptance research. In the TAM, perceived usefulness and perceived ease of use are two key factors, which both directly affect use attitude and indirectly affect use intention through use attitude. Moreover, perceived ease of use indirectly affects use attitude through perceived usefulness. A large number of empirical studies have confirmed the TAM [105,106] and investigated external variables that affect perceived usefulness and perceived ease of use [107,108]. In our analysis, five articles employed the TAM as a theoretical framework [20,43,49,71,79] and explored antecedents of perceived usefulness and ease of use. Applying the TAM to the AI service provider context, these papers supported the decisive role of perceived usefulness in promoting trust and behavioral intention to accept AI service providers, but found inconsistent results for the relationship between ease of use and acceptance attitude [20,49]. Further investigation is needed into how the TAM could be applied to the AI task substitute context and how contextual factors influence the established relationships in the TAM.
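For readers unfamiliar with the model, the relationships described above can be restated compactly as a set of linear structural equations (our notation, not taken from any reviewed paper):

```latex
\begin{align}
\mathit{PU}  &= \gamma_{1}\,\mathit{PEOU} + \varepsilon_{1}, \\
\mathit{ATT} &= \beta_{1}\,\mathit{PU} + \beta_{2}\,\mathit{PEOU} + \varepsilon_{2}, \\
\mathit{BI}  &= \beta_{3}\,\mathit{ATT} + \varepsilon_{3},
\end{align}
```

where PU is perceived usefulness, PEOU is perceived ease of use, ATT is attitude toward use, BI is behavioral intention to use, and the ε terms are disturbances. In this formulation, ease of use affects intention only indirectly, through usefulness and attitude, mirroring the paths described in the paragraph above.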
With the progress of technology and the deepening of research, the technology acceptance model has been continually refined, and its explanatory power has been steadily enhanced. For instance, the theory of planned behavior (TPB) extended the TAM by separating usage attitude into three components (i.e., subjective norm, perceived behavioral control, and attitude toward the behavior) and specifying the antecedents (i.e., normative, control, and behavioral beliefs) of each component, respectively [109]. Furthermore, Taylor and Todd [110] proposed the decomposed theory of planned behavior (DTPB), which decomposes the normative, control, and behavioral beliefs in the TPB into components. Specifically, normative beliefs are decomposed into peers’ influence and superiors’ influence; control beliefs are decomposed into self-efficacy, technology facilitating conditions, and resource facilitating conditions; and behavioral beliefs are decomposed into perceived usefulness, perceived ease of use, and compatibility. One of our reviewed articles adopted the DTPB to examine how employees accept chatbots in an enterprise context. The results showed a strong influence of the self-determined factor (attitude toward acceptance) but a weak impact of the external factors (i.e., subjective norm and perceived behavioral control).
Additionally, the unified theory of acceptance and use of technology (UTAUT) is an integrated theoretical framework built on prior technology acceptance research [111]. In UTAUT, three decisive constructs (i.e., performance expectancy, effort expectancy, and social influence) explain behavioral intention to use a technology, while behavioral intention and facilitating conditions further affect technology use behavior. Furthermore, four moderators were identified (i.e., age, gender, experience, and voluntariness of use). A variety of studies have empirically examined UTAUT and extended it in various contexts [112,113]. In our reviewed articles, Mamonov and Koufaris [53] applied UTAUT to explore users’ acceptance of AI service providers (i.e., smart thermostats) in a smart home context. The results revealed a weak effect of performance expectancy and an insignificant effect of effort expectancy on the intention to adopt a smart thermostat. Meanwhile, techno-coolness, a novel factor proposed by the authors, had a stronger effect on users’ adoption intention. Similarly, Prakash and Das [21] tested UTAUT in a clinical diagnostic context. Their study showed results consistent with the original UTAUT, except for an insignificant relationship between effort expectancy and users’ intention to accept AI task substitutes (i.e., intelligent clinical diagnostic decision support systems). The authors explained that ease of use may not be a decisive factor in the special context of clinical practice.
Taken together, the TAM and its extended theories offer a comprehensive framework for investigating the decisive factors of AI acceptance in specific contexts. However, there are also limitations. First, by adopting such models, most studies were restricted to survey methodologies. To deepen the understanding of users’ decision-making processes, more diverse methods should be integrated. Second, with the widespread application of information technology in various fields of society, the factors influencing users’ intentions to accept new technologies may change. As the TAM and its extended models were first proposed decades ago, these theories are worth extending in the era of artificial intelligence. Third, in our analysis, the established relationships between constructs in these models were not always supported across contexts. Further research may consider specific contextual factors influencing these relationships, conceptualize constructs for particular contexts, and develop more generalized theorizations.

4.1. Theoretical Perspectives Applied to User Acceptance of AI Service Providers

In terms of user acceptance of AI service providers, various theoretical perspectives have been identified. A total of 7 of the 61 articles adopted TAM and the extended theories, and 19 did not employ a specific theoretical framework. The remaining 36 articles identified 28 theories (see Table 4). The three most commonly used overarching frameworks are computers as social actors (CASA) theory (three papers), task–technology fit theory (three papers), and stimulus (S)–organism (O)–response (R) framework (three papers). Two theories were applied more than once, namely, social presence theory and attribution theory.
Social presence indicates a fairly generic sense of “being with others” during a social interaction process [114]. When users experience AI service providers as actual social actors, they interact with AI service providers socially and, thus, develop psychological and/or behavioral responses (e.g., perceiving AI service providers as more credible, shifting to a fairness evaluation, being more likely to self-disclose, and further increasing the intention to accept AI) [2,56]. In our analysis, social presence may serve as an antecedent of usage attitude and/or intentions [2,48], but may also be theorized as a mediator that explains how human-like AI service providers are accepted [56]. This theory thereby serves as a theoretical foundation to explain how anthropomorphic features of AI service providers influence users’ acceptance intentions. However, as this theory offers only the single construct of social presence, it is difficult to explain why the influences of different anthropomorphic features vary.
Attribution theory provides a theoretical foundation for understanding users’ reactions to favorable and/or unfavorable outcomes produced by AI. According to attribution theory, people tend to infer the cause of events and may attribute the causes to factors internal or external to the event. For example, researchers have found that people are inclined to attribute favorable events to themselves (e.g., the success was due to my hard work), while making external attributions for unfavorable events (e.g., the failure of an exam was due to noise interference). In our review, researchers showed different mechanisms underlying users’ attributions regarding AI service providers. On the one hand, users may attribute unfavorable events to contextual factors instead of AI service providers due to the belief that AI is stable and trackable [47]. On the other hand, users may attribute unfavorable events to AI service providers because they believe AI is given such high autonomy that it holds responsibility for negative outcomes. Moreover, studies also found no difference in users’ attributions regarding AI and humans; one proposed explanation is that although AI may ignore uniqueness, humans may also not be objective [31]. Overall, this overarching theory provides a theoretical explanation for users’ behavioral responses to AI service providers, focusing on revealing the psychological process and influencing factors. Considering that people’s understanding of AI is complicated, the specificity of the context and the type of AI service provider should be fully considered when applying this theory in research.

4.2. Theoretical Perspectives Applied to User Acceptance of AI Task Substitutes

Among the 19 papers focusing on users’ acceptance of AI task substitutes, 2 used the TAM and its extended theories as the theoretical framework, and 9 did not explicitly adopt a theoretical framework. The remaining eight articles adopted eight theories as theoretical frameworks specifically to explain users’ acceptance of AI task substitutes. Most of these theories were applied only once in our reviewed papers, including overarching frameworks (e.g., cognitive appraisal theory and the technology–organization–environment framework) and specific theories (e.g., intergroup threat theory).
For example, cognitive appraisal theory offers an explanation of users’ coping mechanisms underlying their reactions to novel situations. According to this theory, people form an initial appraisal of a new situation based on their perception of the situation and their own knowledge. A coping mechanism then results from the initial appraisal and further leads to different attitudes and behavioral intentions [115]. This theory, thus, provides an overarching framework for investigating how users react to a novel AI task substitute and/or a new environment containing an AI task substitute [90]. The technology–organization–environment framework is a fairly generic framework for understanding the influence of technological, organizational, and environmental factors on an organization’s acceptance decision making. However, it does not explicitly identify the constructs that comprise the framework, so other theoretical models should be integrated to examine organizational acceptance of AI task substitutes in specific contexts. Intergroup threat theory is widely used to explain intergroup relations. Based on this theory, employees may experience threats from outgroup objects, termed realistic threats and symbolic threats. Realistic threats refer to the risk of value loss, such as economic loss and threats to personal security, while symbolic threats are more concerned with the risk of identity loss, such as “uniqueness, self-identity, and self-esteem” [116]. This theory provides a narrow explanation for why users resist AI task substitutes from the perspective of relational threat, which may help investigate how to alleviate users’ AI resistance.
Furthermore, dual process theory was utilized more than once in the reviewed studies. This theory identifies two modes of information processing, namely, the heuristic process and the systematic process. People’s attitudes, intentions, and behaviors rely on how they process information through the two processes [117,118,119]. Through heuristic processes, users tend to make evaluations unconsciously, instinctively, and intuitively, while through systematic processes, users rely more on cognitive, analytic, and rational thinking to make decisions. This theory is useful for investigating users’ different reactions to AI task substitutes depending on which process operates. For instance, Liang and Xue [84] suggested that physicians’ resistance to AI task substitutes decreases when their systematic process (i.e., perceived usefulness of the AI system) is emphasized. Additionally, the research of Jussupow, Spohrer, Heinzl, and Gawlitza [87] provided evidence for how the two systems shift from one to the other dynamically by identifying the metacognition process.

5. Discussion

Overall, our review reveals inconsistent research findings on user attitudes and perceptions towards AI acceptance, as well as different factors and underlying mechanisms for AI service providers and AI task substitutes (see Table 4 and Table 5). For example, findings on the superiority of AI over humans vary across studies [30,34,85], users’ attributions of negative events are inconsistent [29,31,47], and the source of concern about AI seems to be influenced by the role of AI [36,37,83]. For AI service providers, users may appreciate the higher level of accuracy and reliability of AI applications, whereas they are concerned that AI cannot fit certain tasks due to uniqueness neglect and a lack of affection. Trust and satisfaction with usage are the main mechanisms for acceptance of AI service providers. For AI task substitutes, users’ main concerns come from professional identity threats and work performance after adopting AI. Nevertheless, factors that may eliminate these negative attitudes have been explored. For instance, when users are incentivized by money [92], rationally evaluate the benefits of AI usage [91], or complete the identity adjustment in response to AI systems [83], resistance towards AI task substitutes can be alleviated.
Extant research typically focuses on user perception, attitude, and acceptance behavior for specific AI applications, but few researchers have clarified the relationships between different AI roles and user perceptions, attitudes, and acceptance behaviors. Furthermore, although research has explored many factors that can change users’ acceptance attitudes and behaviors towards AI applications, the underlying psychological processes are still worth investigating. Therefore, future research may further explore how the roles of AI help explain the inconsistencies in the reviewed studies. In addition, the following sections provide three broad opinions on the limitations of the reviewed research, as well as guidelines for promoting future research on AI acceptance and users’ decision-making mechanisms.

5.1. Key Findings and Future Research Directions

5.1.1. Lack of Clarification of the Differences between Various AI Applications

Our analysis shows a lack of consistent definition and terminology of specific forms of AI application. As summarized in Table 4 and Table 5, the 80 reviewed studies identified 55 types of AI service providers and 19 types of AI task substitutes in about 25 kinds of contexts.
Although many researchers have treated the terminology of the AI applications being studied as interchangeable with other terms, a considerable number of studies do not specifically explain the definition, characteristics, and human–machine interaction patterns of the AI applications being studied in their specific contexts, nor do they conceptualize and theorize user acceptance based on these specific features. One consequence of this fragmentation is the poor generalizability of research conclusions and the inability to explain inconsistent results between studies. For example, although research suggests that the definitions of chatbot and conversational AI are similar [120], previous studies have found mixed attitudes toward chatbot acceptance [2,32,61]. Without distinguishing the characteristics of the VA being studied from other VAs or pointing out the specificity of VA applications in the current context, it is difficult to fully account for the inconsistent results mentioned above. This is also consistent with the “fragmented adhocracy” problem pointed out in previous review studies [26].
Furthermore, this fragmentation may also leave the role of AI in usage vague. Due to the lack of definitions of AI applications grounded in specific contexts and application types, most studies have failed to clarify the role played by the AI application in use, such as a service provider or a substitute for employees. In fact, users interact with artificial intelligence in different modes depending on the role of the application. For example, when AI serves as an adviser, users actively request AI to provide information for their decision-making; when AI serves as a service provider, users passively enjoy the services automatically provided by AI; and when AI acts as a collaborator, users work together with AI to complete tasks. Therefore, without a clear understanding of the role of AI in human–computer interaction, it is difficult to conduct in-depth research on users’ perceptions during the interaction process and their willingness to accept AI.
Our review categorizes AI roles into two categories, namely AI service providers and AI task substitutes, and finds that users exhibit different acceptance attitudes and decision-making mechanisms when interacting with AI applications in different roles. For example, when AI serves as a service provider, users’ AI resistance may stem from concerns about uniqueness neglect [18], task fit [39,77], and algorithm performance, whereas when AI serves as a task substitute, users tend to resist AI due to the difficulty of AI usage [22], concerns about face loss [84], and feelings of threat to professional identity [83]. Further research is recommended to provide empirical evidence for the differences in users’ acceptance of different AI roles, and to contribute unique knowledge on clear and accurate definitions, characteristics, and user interaction processes for the different roles of AI applications.

5.1.2. Limited Generalizability of Research Design

In the sample of articles reviewed, the real behavioral data of organizations and individuals using AI applications have not yet been well exploited. The majority of our reviewed studies employed either surveys (32.10%) or behavioral experiments (35.80%) in designed study settings. In contrast, only a few studies utilized field data (8.64%) or adopted multimethod research designs (14.81%). Small sample sizes, controlled study settings in experimental research, imagined or recalled decision-making processes, and the limited perspective of individual users may restrict the generalizability of research conclusions to real-world settings. Moreover, as presented in the previous sections, the studies investigating AI acceptance have identified a pool of outcome variables, such as acceptance behavior [33,88,99], intention to use AI [42,82,91], and trust [77,85,97]. Yet, most studies in our sample measured user perceptions and behavioral intentions as outcome variables (83.95%); only a few utilized users’ actual behavior in practice (16.05%). Although the extant research provides rich evidence on AI acceptance intentions, it remains unclear whether these results hold true for actual acceptance behaviors.
Thus, future research should seek opportunities to utilize real behavioral data for causal estimation, conduct field studies, combine multimethod research designs, and consider the impact of individual and organizational characteristics on the acceptance of AI applications, in order to broaden the generalizability of research findings. In addition, considering that user attitudes toward new technologies may evolve during usage, we recommend longitudinal research in field settings for future studies to provide insights into the dynamic interactive process between users and AI applications and to explore the underlying mechanisms of AI acceptance.

5.1.3. Conceptualization and Theorization in the Context of AI Acceptance

Many of the reviewed studies rely on general technology acceptance models (e.g., the technology acceptance model, the unified theory of acceptance and use of technology, and the health information technology acceptance model) as theoretical frameworks to explain users’ perceptions, attitudes, and acceptance behaviors towards AI applications [20,71,86]. This may ignore possible changes in technology use behavior brought about by the massive application of information technology, which may alter the impact of key factors in the traditional technology acceptance models. For instance, Shin, Zhong, and Biocca [20] demonstrated a significant impact of both perceived usefulness and ease of use on attitude toward algorithmic services. Liu and Tao [49] found that perceived usefulness significantly affected trust and intention to use smart healthcare services, whereas perceived ease of use only predicted trust and did not influence usage intention directly. Lu, et al. [121] showed that perceived usefulness and ease of use were not associated with users’ acceptance of service robots. Additionally, the use of a new generation of AI applications can create new cognitive and affective experiences in human–computer interaction [20,53,64]. In the new era of AI, further research should rethink the boundaries of applying a single disciplinary theory to explain AI usage, extend the traditional models, and develop new conceptualizations of constructs in the specific context of AI acceptance.
Furthermore, in our analysis, quite a few studies did not identify their theoretical foundations (28 out of 80) or only used a generic overarching framework (e.g., the stimulus (S)–organism (O)–response (R) framework or the technology–organization–environment framework). Among those with specific theoretical frameworks, the reviewed studies on AI service providers mainly focused on particular responses (such as attribution and anthropomorphic perception), while among the few studies with explicit theoretical frameworks for AI task substitutes, the majority utilized psychological theories to explore the underlying mechanisms of AI aversion. Overall, the reviewed research provides a theoretically fragmented picture of AI acceptance and stops short of creating integrated theoretical frameworks in the specific context of AI acceptance. Future research should integrate theoretical insights from computer science, design science, psychology, and social science to enable more generalizable theorization for understanding AI acceptance.

5.2. Limitations

Due to the surge of AI-related research in recent years, this study only reviewed relevant research in leading journals from 2020 to 2023. Although we made a deliberate effort to select leading journals with outstanding contributions to technology-acceptance-related knowledge, such as Computers in Human Behavior, which is highly regarded in the behavioral sciences, relevant articles from other important journals may have been overlooked. To some extent, this review only provides a descriptive review and statistical analysis of current research in leading journals. We strongly recommend conducting meta-analyses on a wider range of publications in the future to enhance our understanding of user acceptance of different roles of AI and of the impact of different factors on AI acceptance, and to analyze the research design and structure of related studies in specific contexts. Future research could also benefit from including a broader range of journals to ensure more comprehensive coverage of relevant studies in the field.

Author Contributions

Conceptualization: P.J. and K.C. P.J., W.N. and Q.W. conducted the literature review, database searches, data extraction, quality assessment, synthesis of results, and writing of the original manuscript. R.Y. and K.C. assisted with quality assessment and reviewing and editing the manuscript drafts. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Al-Natour, S.; Benbasat, I.; Cenfetelli, R. Designing online virtual advisors to encourage customer self-disclosure: A theoretical model and an empirical test. J. Manag. Inf. Syst. 2021, 38, 798–827.
  2. Schanke, S.; Burtch, G.; Ray, G. Estimating the impact of “humanizing” customer service chatbots. Inf. Syst. Res. 2021, 32, 736–751.
  3. Tofangchi, S.; Hanelt, A.; Marz, D.; Kolbe, L.M. Handling the efficiency–personalization trade-off in service robotics: A machine-learning approach. J. Manag. Inf. Syst. 2021, 38, 246–276.
  4. Lee, J.H.; Hsu, C.; Silva, L. What lies beneath: Unraveling the generative mechanisms of smart technology and service design. J. Assoc. Inf. Syst. 2020, 21, 3.
  5. Faulkner, P.; Runde, J. Theorizing the digital object. MIS Q. 2019, 43, 1279.
  6. Wesche, J.S.; Sonderegger, A. Repelled at first sight? Expectations and intentions of job-seekers reading about AI selection in job advertisements. Comput. Hum. Behav. 2021, 125, 106931.
  7. Dixon, J.; Hong, B.; Wu, L. The robot revolution: Managerial and employment consequences for firms. Manag. Sci. 2021, 67, 5586–5605.
  8. Van den Broek, E.; Sergeeva, A.; Huysman, M. When the machine meets the expert: An ethnography of developing AI for hiring. MIS Q. 2021, 45, 1557.
  9. iResearch. 2022 Research Report on China’s Artificial Intelligence Industry (V). Available online: https://www.iresearch.com.cn/Detail/report?id=4147&isfree=0 (accessed on 5 May 2023).
  10. Kawamoto, K.; Houlihan, C.A.; Balas, E.A.; Lobach, D.F. Improving clinical practice using clinical decision support systems: A systematic review of trials to identify features critical to success. BMJ 2005, 330, 765.
  11. Coiera, E. Guide to Health Informatics; CRC Press: Boca Raton, FL, USA, 2015.
  12. Kellogg, K.C.; Sendak, M.; Balu, S. AI on the Front Lines. Available online: https://sloanreview.mit.edu/article/ai-on-the-front-lines/ (accessed on 5 May 2023).
  13. Shixiang. Cresta: Real Time AI Mentor for Sales and Customer Service. Available online: https://36kr.com/p/2141615670591233 (accessed on 5 May 2023).
  14. Schuetzler, R.M.; Grimes, G.M.; Scott Giboney, J. The impact of chatbot conversational skill on engagement and perceived humanness. J. Manag. Inf. Syst. 2020, 37, 875–900.
  15. Berente, N.; Gu, B.; Recker, J.; Santhanam, R. Managing artificial intelligence. MIS Q. 2021, 45, 1433–1450.
  16. Borges, A.F.; Laurindo, F.J.; Spínola, M.M.; Gonçalves, R.F.; Mattos, C.A. The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions. Int. J. Inf. Manag. 2021, 57, 102225.
  17. You, S.; Yang, C.L.; Li, X. Algorithmic versus human advice: Does presenting prediction performance matter for algorithm appreciation? J. Manag. Inf. Syst. 2022, 39, 336–365.
  18. Longoni, C.; Bonezzi, A.; Morewedge, C.K. Resistance to medical artificial intelligence. J. Consum. Res. 2019, 46, 629–650.
  19. Garvey, A.M.; Kim, T.; Duhachek, A. Bad news? Send an AI. Good news? Send a human. J. Mark. 2022, 87, 10–25.
  20. Shin, D.; Zhong, B.; Biocca, F.A. Beyond user experience: What constitutes algorithmic experiences? Int. J. Inf. Manag. 2020, 52, 102061.
  21. Prakash, A.V.; Das, S. Medical practitioner’s adoption of intelligent clinical diagnostic decision support systems: A mixed-methods study. Inf. Manag. 2021, 58, 103524.
  22. Kim, J.H.; Kim, M.; Kwak, D.W.; Lee, S. Home-tutoring services assisted with technology: Investigating the role of artificial intelligence using a randomized field experiment. J. Mark. Res. 2022, 59, 79–96.
  23. Tan, T.F.; Netessine, S. At your service on the table: Impact of tabletop technology on restaurant performance. Manag. Sci. 2020, 66, 4496–4515.
  24. Du, H.S.; Wagner, C. Weblog success: Exploring the role of technology. Int. J. Hum.-Comput. Stud. 2006, 64, 789–798.
  25. Larivière, B.; Bowen, D.; Andreassen, T.W.; Kunz, W.; Sirianni, N.J.; Voss, C.; Wünderlich, N.V.; De Keyser, A. “Service Encounter 2.0”: An investigation into the roles of technology, employees and customers. J. Bus. Res. 2017, 79, 238–246.
  26. Collins, C.; Dennehy, D.; Conboy, K.; Mikalef, P. Artificial intelligence in information systems research: A systematic literature review and research agenda. Int. J. Inf. Manag. 2021, 60, 102383.
  27. Langer, M.; Landers, R.N. The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Comput. Hum. Behav. 2021, 123, 106878.
  28. Webster, J.; Watson, R.T. Analyzing the past to prepare for the future: Writing a literature review. MIS Q. 2002, 26, xiii–xxiii.
  29. Gill, T. Blame it on the self-driving car: How autonomous vehicles can alter consumer morality. J. Consum. Res. 2020, 47, 272–291.
  30. Peng, C.; van Doorn, J.; Eggers, F.; Wieringa, J.E. The effect of required warmth on consumer acceptance of artificial intelligence in service: The moderating role of AI-human collaboration. Int. J. Inf. Manag. 2022, 66, 102533.
  31. Yalcin, G.; Lim, S.; Puntoni, S.; van Osselaer, S.M. Thumbs up or down: Consumer reactions to decisions by algorithms versus humans. J. Mark. Res. 2022, 59, 696–717.
  32. Luo, X.; Tong, S.; Fang, Z.; Qu, Z. Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Mark. Sci. 2019, 38, 937–947.
  33. Ge, R.; Zheng, Z.; Tian, X.; Liao, L. Human–robot interaction: When investors adjust the usage of robo-advisors in peer-to-peer lending. Inf. Syst. Res. 2021, 32, 774–785.
  34. Park, E.H.; Werder, K.; Cao, L.; Ramesh, B. Why do family members reject AI in health care? Competing effects of emotions. J. Manag. Inf. Syst. 2022, 39, 765–792.
  35. Aktan, M.E.; Turhan, Z.; Dolu, İ. Attitudes and perspectives towards the preferences for artificial intelligence in psychotherapy. Comput. Hum. Behav. 2022, 133, 107273.
  36. Formosa, P.; Rogers, W.; Griep, Y.; Bankins, S.; Richards, D. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Comput. Hum. Behav. 2022, 133, 107296.
  37. Millet, K.; Buehler, F.; Du, G.; Kokkoris, M.D. Defending humankind: Anthropocentric bias in the appreciation of AI art. Comput. Hum. Behav. 2023, 143, 107707.
  38. Drouin, M.; Sprecher, S.; Nicola, R.; Perkins, T. Is chatting with a sophisticated chatbot as good as chatting online or FTF with a stranger? Comput. Hum. Behav. 2022, 128, 107100.
  39. Wang, X.; Wong, Y.D.; Chen, T.; Yuen, K.F. Adoption of shopper-facing technologies under social distancing: A conceptualisation and an interplay between task-technology fit and technology trust. Comput. Hum. Behav. 2021, 124, 106900. [Google Scholar] [CrossRef]
  40. Zhang, F.; Pan, Z.; Lu, Y. AIoT-enabled smart surveillance for personal data digitalization: Contextual personalization-privacy paradox in smart home. Inf. Manag. 2023, 60, 103736. [Google Scholar] [CrossRef]
  41. Shin, D.; Kee, K.F.; Shin, E.Y. Algorithm awareness: Why user awareness is critical for personal privacy in the adoption of algorithmic platforms? Int. J. Inf. Manag. 2022, 65, 102494. [Google Scholar] [CrossRef]
  42. Hu, Q.; Lu, Y.; Pan, Z.; Gong, Y.; Yang, Z. Can AI artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants. Int. J. Inf. Manag. 2021, 56, 102250. [Google Scholar] [CrossRef]
  43. Canziani, B.; MacSween, S. Consumer acceptance of voice-activated smart home devices for product information seeking and online ordering. Comput. Hum. Behav. 2021, 119, 106714. [Google Scholar] [CrossRef]
  44. Sung, E.C.; Bae, S.; Han, D.-I.D.; Kwon, O. Consumer engagement via interactive artificial intelligence and mixed reality. Int. J. Inf. Manag. 2021, 60, 102382. [Google Scholar] [CrossRef]
  45. Wiesenberg, M.; Tench, R. Deep strategic mediatization: Organizational leaders’ knowledge and usage of social bots in an era of disinformation. Int. J. Inf. Manag. 2020, 51, 102042. [Google Scholar] [CrossRef]
  46. Song, X.; Xu, B.; Zhao, Z. Can people experience romantic love for artificial intelligence? An empirical study of intelligent assistants. Inf. Manag. 2022, 59, 103595. [Google Scholar] [CrossRef]
  47. Huo, W.; Zheng, G.; Yan, J.; Sun, L.; Han, L. Interacting with medical artificial intelligence: Integrating self-responsibility attribution, human–computer trust, and personality. Comput. Hum. Behav. 2022, 132, 107253. [Google Scholar] [CrossRef]
  48. Mishra, A.; Shukla, A.; Sharma, S.K. Psychological determinants of users’ adoption and word-of-mouth recommendations of smart voice assistants. Int. J. Inf. Manag. 2021, 67, 102413. [Google Scholar] [CrossRef]
  49. Liu, K.; Tao, D. The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services. Comput. Hum. Behav. 2022, 127, 107026. [Google Scholar] [CrossRef]
  50. Chuah, S.H.-W.; Aw, E.C.-X.; Yee, D. Unveiling the complexity of consumers’ intention to use service robots: An fsQCA approach. Comput. Hum. Behav. 2021, 123, 106870. [Google Scholar] [CrossRef]
  51. Pelau, C.; Dabija, D.-C.; Ene, I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput. Hum. Behav. 2021, 122, 106855. [Google Scholar] [CrossRef]
  52. Crolic, C.; Thomaz, F.; Hadi, R.; Stephen, A.T. Blame the bot: Anthropomorphism and anger in customer–chatbot interactions. J. Mark. 2022, 86, 132–148. [Google Scholar] [CrossRef]
  53. Mamonov, S.; Koufaris, M. Fulfillment of higher-order psychological needs through technology: The case of smart thermostats. Int. J. Inf. Manag. 2020, 52, 102091. [Google Scholar] [CrossRef]
  54. Longoni, C.; Cian, L. Artificial intelligence in utilitarian vs. hedonic contexts: The “word-of-machine” effect. J. Mark. 2022, 86, 91–108. [Google Scholar] [CrossRef]
  55. Lv, X.; Yang, Y.; Qin, D.; Cao, X.; Xu, H. Artificial intelligence service recovery: The role of empathic response in hospitality customers’ continuous usage intention. Comput. Hum. Behav. 2022, 126, 106993. [Google Scholar] [CrossRef]
  56. Kim, J.; Merrill Jr, K.; Xu, K.; Kelly, S. Perceived credibility of an AI instructor in online education: The role of social presence and voice features. Comput. Hum. Behav. 2022, 136, 107383. [Google Scholar] [CrossRef]
  57. Tojib, D.; Ho, T.H.; Tsarenko, Y.; Pentina, I. Service robots or human staff? The role of performance goal orientation in service robot adoption. Comput. Hum. Behav. 2022, 134, 107339. [Google Scholar] [CrossRef]
  58. Luo, X.; Qin, M.S.; Fang, Z.; Qu, Z. Artificial intelligence coaches for sales agents: Caveats and solutions. J. Mark. 2021, 85, 14–32. [Google Scholar] [CrossRef]
  59. Ko, G.Y.; Shin, D.; Auh, S.; Lee, Y.; Han, S.P. Learning outside the classroom during a pandemic: Evidence from an artificial intelligence-based education app. Manag. Sci. 2022, 69, 3616–3649. [Google Scholar] [CrossRef]
  60. Luo, B.; Lau, R.Y.K.; Li, C. Emotion-regulatory chatbots for enhancing consumer servicing: An interpersonal emotion management approach. Inf. Manag. 2023, 60, 103794. [Google Scholar] [CrossRef]
  61. Chandra, S.; Shirish, A.; Srivastava, S.C. To be or not to be… human? Theorizing the role of human-like competencies in conversational artificial intelligence agents. J. Manag. Inf. Syst. 2022, 39, 969–1005. [Google Scholar] [CrossRef]
  62. Chi, O.H.; Chi, C.G.; Gursoy, D.; Nunkoo, R. Customers’ acceptance of artificially intelligent service robots: The influence of trust and culture. Int. J. Inf. Manag. 2023, 70, 102623. [Google Scholar] [CrossRef]
  63. Chong, L.; Zhang, G.; Goucher-Lambert, K.; Kotovsky, K.; Cagan, J. Human confidence in artificial intelligence and in themselves: The evolution and impact of confidence on adoption of AI advice. Comput. Hum. Behav. 2022, 127, 107018. [Google Scholar] [CrossRef]
  64. Rhim, J.; Kwak, M.; Gong, Y.; Gweon, G. Application of humanization to survey chatbots: Change in chatbot perception, interaction experience, and survey data quality. Comput. Hum. Behav. 2022, 126, 107034. [Google Scholar] [CrossRef]
  65. Hu, P.; Lu, Y.; Wang, B. Experiencing power over AI: The fit effect of perceived power and desire for power on consumers’ choice for voice shopping. Comput. Hum. Behav. 2022, 128, 107091. [Google Scholar] [CrossRef]
  66. Benke, I.; Gnewuch, U.; Maedche, A. Understanding the impact of control levels over emotion-aware chatbots. Comput. Hum. Behav. 2022, 129, 107122. [Google Scholar] [CrossRef]
  67. Plaks, J.E.; Bustos Rodriguez, L.; Ayad, R. Identifying psychological features of robots that encourage and discourage trust. Comput. Hum. Behav. 2022, 134, 107301. [Google Scholar] [CrossRef]
  68. Jiang, H.; Cheng, Y.; Yang, J.; Gao, S. AI-powered chatbot communication with customers: Dialogic interactions, satisfaction, engagement, and customer behavior. Comput. Hum. Behav. 2022, 134, 107329. [Google Scholar] [CrossRef]
  69. Munnukka, J.; Talvitie-Lamberg, K.; Maity, D. Anthropomorphism and social presence in Human–Virtual service assistant interactions: The role of dialog length and attitudes. Comput. Hum. Behav. 2022, 135, 107343. [Google Scholar] [CrossRef]
  70. Chua, A.Y.K.; Pal, A.; Banerjee, S. AI-enabled investment advice: Will users buy it? Comput. Hum. Behav. 2023, 138, 107481. [Google Scholar] [CrossRef]
  71. Yi-No Kang, E.; Chen, D.-R.; Chen, Y.-Y. Associations between literacy and attitudes toward artificial intelligence–assisted medical consultations: The mediating role of perceived distrust and efficiency of artificial intelligence. Comput. Hum. Behav. 2023, 139, 107529. [Google Scholar] [CrossRef]
  72. Liu, Y.-l.; Hu, B.; Yan, W.; Lin, Z. Can chatbots satisfy me? A mixed-method comparative study of satisfaction with task-oriented chatbots in mainland China and Hong Kong. Comput. Hum. Behav. 2023, 143, 107716. [Google Scholar] [CrossRef]
  73. Wu, M.; Wang, N.; Yuen, K.F. Deep versus superficial anthropomorphism: Exploring their effects on human trust in shared autonomous vehicles. Comput. Hum. Behav. 2023, 141, 107614. [Google Scholar] [CrossRef]
  74. Hu, B.; Mao, Y.; Kim, K.J. How social anxiety leads to problematic use of conversational AI: The roles of loneliness, rumination, and mind perception. Comput. Hum. Behav. 2023, 145, 107760. [Google Scholar] [CrossRef]
  75. Alimamy, S.; Kuhail, M.A. I will be with you Alexa! The impact of intelligent virtual assistant’s authenticity and personalization on user reusage intentions. Comput. Hum. Behav. 2023, 143, 107711. [Google Scholar] [CrossRef]
  76. Im, H.; Sung, B.; Lee, G.; Xian Kok, K.Q. Let voice assistants sound like a machine: Voice and task type effects on perceived fluency, competence, and consumer attitude. Comput. Hum. Behav. 2023, 145, 107791. [Google Scholar] [CrossRef]
  77. Jiang, Y.; Yang, X.; Zheng, T. Make chatbots more adaptive: Dual pathways linking human-like cues and tailored response to trust in interactions with chatbots. Comput. Hum. Behav. 2023, 138, 107485. [Google Scholar] [CrossRef]
  78. Dubé, S.; Santaguida, M.; Zhu, C.Y.; Di Tomasso, S.; Hu, R.; Cormier, G.; Johnson, A.P.; Vachon, D. Sex robots and personality: It is more about sex than robots. Comput. Hum. Behav. 2022, 136, 107403. [Google Scholar] [CrossRef]
  79. Wald, R.; Piotrowski, J.T.; Araujo, T.; van Oosten, J.M.F. Virtual assistants in the family home. Understanding parents’ motivations to use virtual assistants with their Child(dren). Comput. Hum. Behav. 2023, 139, 107526. [Google Scholar] [CrossRef]
  80. Pal, D.; Vanijja, V.; Thapliyal, H.; Zhang, X. What affects the usage of artificial conversational agents? An agent personality and love theory perspective. Comput. Hum. Behav. 2023, 145, 107788. [Google Scholar] [CrossRef]
  81. Oleksy, T.; Wnuk, A.; Domaradzka, A.; Maison, D. What shapes our attitudes towards algorithms in urban governance? The role of perceived friendliness and controllability of the city, and human-algorithm cooperation. Comput. Hum. Behav. 2023, 142, 107653. [Google Scholar] [CrossRef]
  82. Lee, S.; Moon, W.-K.; Lee, J.-G.; Sundar, S.S. When the machine learns from users, is it helping or snooping? Comput. Hum. Behav. 2023, 138, 107427. [Google Scholar] [CrossRef]
  83. Strich, F.; Mayer, A.-S.; Fiedler, M. What do I do in a world of Artificial Intelligence? Investigating the impact of substitutive decision-making AI systems on employees’ professional role identity. J. Assoc. Inf. Syst. 2021, 22, 9. [Google Scholar] [CrossRef]
  84. Liang, H.; Xue, Y. Save face or save life: Physicians’ dilemma in using clinical decision support systems. Inf. Syst. Res. 2022, 33, 737–758. [Google Scholar] [CrossRef]
  85. Zhang, G.; Chong, L.; Kotovsky, K.; Cagan, J. Trust in an AI versus a Human teammate: The effects of teammate identity and performance on Human-AI cooperation. Comput. Hum. Behav. 2023, 139, 107536. [Google Scholar] [CrossRef]
  86. Brachten, F.; Kissmer, T.; Stieglitz, S. The acceptance of chatbots in an enterprise context–A survey study. Int. J. Inf. Manag. 2021, 60, 102375. [Google Scholar] [CrossRef]
  87. Jussupow, E.; Spohrer, K.; Heinzl, A.; Gawlitza, J. Augmenting medical diagnosis decisions? An investigation into physicians’ decision-making process with artificial intelligence. Inf. Syst. Res. 2021, 32, 713–735. [Google Scholar] [CrossRef]
  88. Hradecky, D.; Kennell, J.; Cai, W.; Davidson, R. Organizational readiness to adopt artificial intelligence in the exhibition sector in Western Europe. Int. J. Inf. Manag. 2022, 65, 102497. [Google Scholar] [CrossRef]
  89. Vaast, E.; Pinsonneault, A. When digital technologies enable and threaten occupational identity: The delicate balancing act of data scientists. MIS Q. 2021, 45, 1087–1112. [Google Scholar] [CrossRef]
  90. Chiu, Y.-T.; Zhu, Y.-Q.; Corbett, J. In the hearts and minds of employees: A model of pre-adoptive appraisal toward artificial intelligence in organizations. Int. J. Inf. Manag. 2021, 60, 102379. [Google Scholar] [CrossRef]
  91. Yu, B.; Vahidov, R.; Kersten, G.E. Acceptance of technological agency: Beyond the perception of utilitarian value. Inf. Manag. 2021, 58, 103503. [Google Scholar] [CrossRef]
  92. Dai, T.; Singh, S. Conspicuous by its absence: Diagnostic expert testing under uncertainty. Mark. Sci. 2020, 39, 540–563. [Google Scholar] [CrossRef]
  93. Gkinko, L.; Elbanna, A. The appropriation of conversational AI in the workplace: A taxonomy of AI chatbot users. Int. J. Inf. Manag. 2023, 69, 102568. [Google Scholar] [CrossRef]
  94. Ulfert, A.-S.; Antoni, C.H.; Ellwart, T. The role of agent autonomy in using decision support systems at work. Comput. Hum. Behav. 2022, 126, 106987. [Google Scholar] [CrossRef]
  95. Verma, S.; Singh, V. Impact of artificial intelligence-enabled job characteristics and perceived substitution crisis on innovative work behavior of employees from high-tech firms. Comput. Hum. Behav. 2022, 131, 107215. [Google Scholar] [CrossRef]
  96. Dang, J.; Liu, L. Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Comput. Hum. Behav. 2022, 134, 107300. [Google Scholar] [CrossRef]
  97. Westphal, M.; Vössing, M.; Satzger, G.; Yom-Tov, G.B.; Rafaeli, A. Decision control and explanations in human-AI collaboration: Improving user perceptions and compliance. Comput. Hum. Behav. 2023, 144, 107714. [Google Scholar] [CrossRef]
  98. Harris-Watson, A.M.; Larson, L.E.; Lauharatanahirun, N.; DeChurch, L.A.; Contractor, N.S. Social perception in Human-AI teams: Warmth and competence predict receptivity to AI teammates. Comput. Hum. Behav. 2023, 145, 107765. [Google Scholar] [CrossRef]
  99. Fan, H.; Gao, W.; Han, B. How does (im)balanced acceptance of robots between customers and frontline employees affect hotels’ service quality? Comput. Hum. Behav. 2022, 133, 107287. [Google Scholar] [CrossRef]
  100. Jain, H.; Padmanabhan, B.; Pavlou, P.A.; Santanam, R.T. Call for papers—Special issue of information systems research—Humans, algorithms, and augmented intelligence: The future of work, organizations, and society. Inf. Syst. Res. 2018, 29, 250–251. [Google Scholar] [CrossRef]
  101. Jain, H.; Padmanabhan, B.; Pavlou, P.A.; Raghu, T. Editorial for the special section on humans, algorithms, and augmented intelligence: The future of work, organizations, and society. Inf. Syst. Res. 2021, 32, 675–687. [Google Scholar] [CrossRef]
  102. Rai, A.; Constantinides, P.; Sarker, S. Next generation digital platforms: Toward human-AI hybrids. MIS Q. 2019, 43, iii–ix. [Google Scholar]
  103. Hong, J.-W.; Fischer, K.; Ha, Y.; Zeng, Y. Human, I wrote a song for you: An experiment testing the influence of machines’ attributes on the AI-composed music evaluation. Comput. Hum. Behav. 2022, 131, 107239. [Google Scholar] [CrossRef]
  104. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
  105. McCloskey, D. Evaluating electronic commerce acceptance with the technology acceptance model. J. Comput. Inf. Syst. 2004, 44, 49–57. [Google Scholar]
  106. Szajna, B. Empirical evaluation of the revised technology acceptance model. Manag. Sci. 1996, 42, 85–92. [Google Scholar] [CrossRef]
  107. Ha, S.; Stoel, L. Consumer e-shopping acceptance: Antecedents in a technology acceptance model. J. Bus. Res. 2009, 62, 565–571. [Google Scholar] [CrossRef]
  108. Burton-Jones, A.; Hubona, G.S. The mediation of external variables in the technology acceptance model. Inf. Manag. 2006, 43, 706–717. [Google Scholar] [CrossRef]
  109. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  110. Taylor, S.; Todd, P.A. Understanding information technology usage: A test of competing models. Inf. Syst. Res. 1995, 6, 144–176. [Google Scholar] [CrossRef]
  111. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  112. Baishya, K.; Samalia, H.V. Extending unified theory of acceptance and use of technology with perceived monetary value for smartphone adoption at the bottom of the pyramid. Int. J. Inf. Manag. 2020, 51, 102036. [Google Scholar] [CrossRef]
  113. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  114. Biocca, F.; Harms, C.; Burgoon, J.K. Toward a more robust theory and measure of social presence: Review and suggested criteria. Presence Teleoper. Virtual Environ. 2003, 12, 456–480. [Google Scholar] [CrossRef]
  115. Lazarus, R.S.; Folkman, S. Stress, Appraisal, and Coping; Springer Publishing Company: Berlin/Heidelberg, Germany, 1984. [Google Scholar]
  116. Riek, B.M.; Mania, E.W.; Gaertner, S.L. Intergroup threat and outgroup attitudes: A meta-analytic review. Personal. Soc. Psychol. Rev. 2006, 10, 336–353. [Google Scholar] [CrossRef]
  117. Evans, J.S.B.; Stanovich, K.E. Dual-process theories of higher cognition: Advancing the debate. Perspect. Psychol. Sci. 2013, 8, 223–241. [Google Scholar] [CrossRef]
  118. Ferratt, T.W.; Prasad, J.; Dunne, E.J. Fast and slow processes underlying theories of information technology use. J. Assoc. Inf. Syst. 2018, 19, 3. [Google Scholar] [CrossRef]
  119. Evans, J.S.B. Dual-processing accounts of reasoning, judgment, and social cognition. Annu. Rev. Psychol. 2008, 59, 255–278. [Google Scholar] [CrossRef]
  120. Seeger, A.-M.; Pfeiffer, J.; Heinzl, A. Texting with humanlike conversational agents: Designing for anthropomorphism. J. Assoc. Inf. Syst. 2021, 22, 8. [Google Scholar] [CrossRef]
  121. Lu, L.; Cai, R.; Gursoy, D. Developing and validating a service robot integration willingness scale. Int. J. Hosp. Manag. 2019, 80, 36–51. [Google Scholar] [CrossRef]
Figure 1. PRISMA flowchart.
Table 1. Overview of reviewed studies.

| Journal | Method | Number of Articles |
| Management Science | Empirical estimation | 1 |
| Marketing Science | Field experiment | 1 |
| Marketing Science | Game model | 1 |
| MIS Quarterly | Case study | 1 |
| Information Systems Research | Empirical estimation | 1 |
| Information Systems Research | Field experiment | 1 |
| Information Systems Research | Interview | 1 |
| Information Systems Research | Survey | 1 |
| Journal of Marketing | Experiment | 2 |
| Journal of Marketing | Field experiment | 1 |
| Journal of Marketing | Mixed methods | 1 |
| Journal of Marketing Research | Experiment | 1 |
| Journal of Marketing Research | Field experiment | 1 |
| Journal of Consumer Research | Experiment | 2 |
| Journal of the Association for Information Systems | Case study | 1 |
| Journal of Management Information Systems | Experiment | 3 |
| Journal of Management Information Systems | Mixed methods | 1 |
| International Journal of Information Management | Case study | 1 |
| International Journal of Information Management | Interview | 1 |
| International Journal of Information Management | Survey | 8 |
| International Journal of Information Management | Mixed methods | 3 |
| Information & Management | Experiment | 2 |
| Information & Management | Survey | 2 |
| Information & Management | Mixed methods | 1 |
| Computers in Human Behavior | Experiment | 19 |
| Computers in Human Behavior | Field experiment | 1 |
| Computers in Human Behavior | Longitudinal study | 1 |
| Computers in Human Behavior | Survey | 15 |
| Computers in Human Behavior | Mixed methods | 5 |
Table 2. Specifics of mixed methods.

| Mixed Methods | Number of Articles |
| Qualitative methods and quantitative studies | 6 |
| Experiments and one survey | 4 |
| Empirical estimation on real-world data and 4 controlled experiments | 1 |
Table 3. Overview of conceptualization of user acceptance of AI service providers.

| Type | Outcome Variable | Number of Articles |
| Behavior | Acceptance behavior | 5 |
| Behavior | Usage behavior | 6 |
| Behavior | Purchase behavior | 2 |
| Behavior | User performance | 1 |
| Behavioral intention | AI resistance | 3 |
| Behavioral intention | Intention to accept AI | 18 |
| Behavioral intention | Intention to use AI | 23 |
| Behavioral intention | Purchase intention | 3 |
| Behavioral intention | Intention to self-disclose | 1 |
| Behavioral intention | User performance | 4 |
| Perception | Attitude | 6 |
| Perception | Trust | 14 |
| Perception | Satisfaction | 6 |
Table 4. Overview of reviewed studies on user acceptance of AI service providers.

| Source | Types of AI Service Provider | User Acceptance (AI Appreciation, AI Aversion, or Conditional) | Theoretical Perspectives | Methods | Key Findings |
| You, Yang and Li [17] | Judge–advisor system | AI appreciation | Cognitive load theory | Experiment | Individuals largely exhibit algorithm appreciation and tend to adopt algorithmic advice to a greater extent. Related factors are also explored. |
| Gill [29] | Autonomous vehicle | AI appreciation | Attribution theory | Experiment | For negative events affecting pedestrians, people tend to consider autonomous vehicles more acceptable. |
| Schanke, Burtch and Ray [2] | Customer service chatbot | AI appreciation | Social presence theory | Field experiment | Consumers tend to be willing to self-disclose, shift to a fairness evaluation, and accept the offer provided by a human-like chatbot. |
| Peng, et al. [30] | AI service | AI aversion | Social cognition theory and task–technology fit theory | Mixed methods | Consumers tend to refuse AI for warmth-requiring tasks due to the low perceived fit between AI and the task. |
| Longoni, Bonezzi and Morewedge [18] | AI medical application | AI aversion | Uniqueness neglect | Experiment | With AI medical applications, consumers are less likely to utilize healthcare, are less sensitive to differences in provider performance, exhibit lower reservation prices for healthcare, and derive negative utility. |
| Yalcin, et al. [31] | Algorithmic decision maker | AI aversion | Attribution theory | Experiment | Consumers tend to respond less positively to an algorithmic decision maker. Related factors are also explored. |
| Luo, et al. [32] | Chatbot | AI aversion | | Field experiment | Although chatbots perform as effectively as proficient workers, disclosure of the chatbot's identity reduces customer purchase rates. |
| Ge, et al. [33] | AI financial-advising service | AI aversion | | Empirical estimation | Investors who need more help are less likely to accept robo-advising services. Furthermore, adjusting adoption behavior based on recent robo-advisor performance may result in inferior investor performance. |
| Park, et al. [34] | AI monitoring for healthcare | AI aversion | | Experiment | Anxiety about healthcare monitoring and anxiety about health outcomes decreased rejection of AI monitoring, whereas surveillance anxiety and delegation anxiety increased rejection. Meanwhile, individual-level risks and perceived controllability are significant moderators. |
| Aktan, et al. [35] | AI-based psychotherapy | AI aversion | | Survey | Most participants reported more trust in human psychotherapists than in AI-based psychotherapists. However, AI-based psychotherapy may be beneficial because users can comfortably talk about embarrassing experiences, access it at any time, and communicate remotely. Furthermore, gender and profession type may also affect the choice of AI-based psychotherapists. |
| Formosa, et al. [36] | AI decision maker | AI aversion | | Experiment | Users consistently view humans (vs. AI) as appropriate decision makers. |
| Millet, et al. [37] | AI art generator | AI aversion | | Experiment | Users, especially those with stronger anthropocentric creativity beliefs, perceived AI-made (vs. human-made) artwork as less creative and less awe-inspiring, which led to lower preference. |
| Drouin, et al. [38] | Emotionally responsive chatbot | Conditional | | Experiment | In terms of negative emotions and conversational concerns, participants reported better responses to the chatbot than to human partners, whereas in terms of homophily, responsive chat, and liking of the chat partner, participants responded better to humans than to the chatbot. |
| Wang, et al. [39] | Shopper-facing technology | Conditional | Task–technology fit theory | Survey | The authors identify three dimensions of shopper-facing technologies: shopper-dominant (pre-)shopping technologies, shopper-dominant post-shopping technologies, and technology-dominant automations. Shoppers' adoption intentions are determined by their evaluations of technology–task fit. |
| Zhang, et al. [40] | Smart home service | Conditional | Surveillance theory | Survey | The intention to use AI in a smart home context depends on the trade-off between contextual personalization and privacy concerns. |
| Shin, et al. [41] | Algorithmic platform | Conditional | Privacy calculus theory | Survey | Trust in and self-disclosure to algorithms depend on users' algorithm awareness, which in turn depends on users' perceived control of information flow. |
| Hu, et al. [42] | Intelligent assistant | Conditional | Mind perception theory | Survey | The artificial autonomy of intelligent personal assistants is significantly related to users' continuance usage intention, mediated by competence and warmth perceptions. |
| Canziani and MacSween [43] | Voice-activated smart home device | Conditional | Technology acceptance model | Survey | Propensity for seeking referent persons' opinions increases perceived device utility. Perceived device utility and hedonic enjoyment of voice ordering are both positively related to consumers' intentions to use the device for online ordering. |
| Sung, et al. [44] | AI-embedded mixed reality (MR) | Conditional | Stimulus (S)–organism (O)–response (R) framework | Survey | The quality of AI, including speech recognition and synthesis via machine learning, significantly influences MR immersion, MR enjoyment, and perceptions of novel experiences, which collectively increase consumer engagement and behavioral responses (i.e., purchase intentions and intentions to share). |
| Wiesenberg and Tench [45] | Social bot | Conditional | Mediatization theory | Survey | Leading communication professionals in Central and Western Europe as well as Scandinavia report greater concern about the ethical challenges of social bot usage, while professionals in Southern and Eastern Europe are less skeptical. In general, only a small minority of the sample reports readiness to use social bots for organizational strategic communication. |
| Song, et al. [46] | Intelligent assistant | Conditional | Theory of love | Survey | AI applications are able to promote users' feelings of intimacy and passion. These feelings positively impact users' commitment, which further increases the intention to use intelligent assistants. |
| Huo, et al. [47] | Medical AI | Conditional | Attribution theory | Survey | Patients' acceptance of medical AI for independent diagnosis and treatment is significantly related to their self-responsibility attribution, mediated by human–computer trust (HCT) and moderated by personality traits. |
| Mishra, et al. [48] | Smart voice assistant | Conditional | Flow theory and the theory of anthropomorphism | Survey | Playfulness and escapism are significantly related to hedonic attitude, while anthropomorphism, visual appeal, and social presence are significantly related to utilitarian attitude. Smart voice assistant (SVA) usage is influenced more by utilitarian attitude than by hedonic attitude. |
| Liu and Tao [49] | AI-based smart healthcare service | Conditional | Technology acceptance model | Survey | Public acceptance of smart healthcare services is directly or indirectly determined by perceived usefulness, perceived ease of use, trust, and AI-specific characteristics. |
| Chuah, et al. [50] | Service robot | Conditional | Complexity theory | Survey | Specific combinations of human-like, technology-like, and consumer features are able to increase the intention to use service robots. |
| Pelau, et al. [51] | AI device | Conditional | Computers as social actors (CASA) theory | Survey | Anthropomorphic characteristics of an AI device indirectly influence acceptance of and trust towards the AI device through the mediating routes of perceived empathy and interaction quality. |
| Shin, Zhong and Biocca [20] | Algorithm system | Conditional | Technology acceptance model | Mixed methods | Users' actual use of algorithm systems is significantly related to their algorithmic experience (AX). |
| Crolic, et al. [52] | Customer service chatbot | Conditional | Expectancy violation theory | Mixed methods | The effect of chatbot anthropomorphism on customer satisfaction, overall firm evaluation, and subsequent purchase intentions depends on customers' emotional state; an angry emotional state leads to a negative effect. |
| Mamonov and Koufaris [53] | Smart thermostat | Conditional | Unified theory of acceptance and use of technology | Mixed methods | Smart thermostat adoption intention is mainly determined by techno-coolness, less by performance expectancy, and not by effort expectancy. |
| Longoni and Cian [54] | AI-based recommender | Conditional | | Experiment | Consumers are more likely to adopt AI recommendations in the utilitarian realm and less likely to adopt them in the hedonic realm. Related factors are also explored. |
| Lv, et al. [55] | AI service | Conditional | Social response theory | Experiment | In service recovery, a high-empathy AI response can significantly increase customers' continuous usage intention. |
| Garvey, Kim and Duhachek [19] | AI marketing agent | Conditional | Expectations discrepancy theory | Experiment | Consumers tend to react positively (i.e., increased purchase likelihood and satisfaction) to an AI agent delivering bad news, while they react negatively to good news offered by an AI agent. |
| Al-Natour, Benbasat and Cenfetelli [1] | Virtual advisor | Conditional | Social exchange theory | Experiment | Perceptions of a virtual advisor and the relationship with the virtual advisor are both determinants of self-disclosure intention. |
| Hong, et al. [103] | AI music generator | Conditional | Role theory | Experiment | The acceptance of an AI music generator as a musician is significantly related to its humanlike traits, but is not influenced by its autonomy to create songs. |
| Kim, et al. [56] | AI instructor | Conditional | Social presence theory | Experiment | An AI instructor with a humanlike voice (vs. a machinelike voice) improves students' perceived social presence and credibility, which further increases the intention to enroll in AI-instructor-based online courses. |
| Tojib, et al. [57] | Service robot | Conditional | | Experiment | Service robot adoption is directly or indirectly determined by the desire for achievement (PAP), the desire to avoid failure (PAV), spontaneous social influence, and challenge appraisal. |
| Luo, et al. [58] | AI coach | Conditional | Information processing theory | Field experiment | Middle-ranked human agents benefit most from the help of an AI coach, while both bottom- and top-ranked agents show limited incremental gains, because bottom-ranked agents exhibit an information overload problem and top-ranked agents hold the strongest aversion to an AI coach. |
| Ko, et al. [59] | AI-powered learning app | Conditional | Temporal construal theory | Empirical estimation | Students living in the epicenter of the COVID-19 outbreak (vs. those who did not) tended to use the AI-powered learning app less at first, but over time they increased and regularized their usage and rebounded to a curriculum path. |
| Luo, et al. [60] | Emotion-regulatory chatbot | Conditional | Interpersonal emotion management (IEM) theory | Experiment | Perceived interpersonal emotion management strategies significantly affected positive word-of-mouth, sequentially mediated by appraisals and post-recovery emotions. |
| Chandra, et al. [61] | Conversational AI agent | Conditional | Media naturalness theory | Mixed methods | Human-like interactional (i.e., cognitive, relational, and emotional) competencies in conversational AI agents increased user trust and further improved user engagement with the agents. |
| Chi, et al. [62] | AI service robot | Conditional | Artificially Intelligent Device Use Acceptance (AIDUA) framework | Survey | Trust in AI robot interaction affected use intention. Uncertainty avoidance, long-term orientation, and power distance were significant moderators. |
| Chong, et al. [63] | AI advisor | Conditional | | Mixed methods | The choice to accept or reject AI suggestions was determined by humans' self-confidence rather than their confidence in the AI. |
| Rhim, et al. [64] | Survey chatbot | Conditional | | Experiment | A humanization-applied survey chatbot (HASbot) (vs. a baseline bot) is perceived more positively, with higher anthropomorphism and social presence. Participants spent more time interacting with the HASbot and indicated higher levels of self-disclosure, satisfaction, and social desirability bias with the HASbot than with the baseline bot. |
| Hu, et al. [65] | AI assistant | Conditional | | Longitudinal study | Users perceived less risk and were more willing to use AI assistants in shopping when perceived power fit their desire for power. |
| Benke, et al. [66] | Emotion-aware chatbot | Conditional | | Experiment | Control levels shaped users' perceptions of autonomy and trust in emotion-aware chatbots, but did not increase cognitive effort. |
| Plaks, et al. [67] | Robot | Conditional | | Experiment | The authors varied the robotic counterpart's humanness by displaying values and self-aware emotions at low to high levels. As values varied from low to high, participants tended to choose the cooperative option, whereas as levels of self-aware emotions increased, participants were more likely to choose the competitive option. Trust was identified as a key mechanism. |
| Jiang, et al. [68] | AI-powered chatbot | Conditional | Social exchange theory and resource exchange theory | Survey | Responsiveness and a conversational tone sequentially increased customers' satisfaction with chatbot services, social media engagement, purchase intention, and price premium. |
| Munnukka, et al. [69] | Virtual service assistant | Conditional | Computers as social actors (CASA) theory | Experiment | Interaction with a virtual service assistant (i.e., perceived anthropomorphism, social presence, dialog length, and attitudes) increased recommendation quality perceptions and further improved trust in VSA-based recommendations. |
| Chua, et al. [70] | AI-based recommendation | Conditional | | Experiment | Attitude toward AI was positively related to the behavioral intention to accept AI-based recommendations, trust in AI, and perceived accuracy of AI. Uncertainty level was a significant moderator. |
| Yi-No Kang, et al. [71] | AI-assisted medical consultation | Conditional | Health information technology acceptance model | Survey | Three dimensions of health literacy were identified: healthcare, disease prevention, and health promotion. Disease prevention was significantly associated with attitudes toward AI-assisted medical consultations, mediated by distrust of AI, whereas health promotion was positively related to these attitudes, mediated by the efficiency of AI. Furthermore, digital literacy was associated with attitudes toward AI-assisted medical consultations, mediated by both distrust and efficiency of AI. |
| Liu, et al. [72] | Task-oriented chatbot | Conditional | D&M information system success model | Mixed methods | Relevance, completeness, pleasure, and assurance in both mainland China and Hong Kong sequentially increased satisfaction and usage intention. Privacy concerns in both regions did not significantly affect satisfaction. Response time and empathy were significantly associated with satisfaction only in mainland China. |
| Wu, et al. [73] | Shared autonomous vehicle | Conditional | Trust-in-automation three-factor model | Survey | Anthropomorphism negatively influenced human–SAV (shared autonomous vehicle) interaction quality when participants were male, with low income, low education, or no vehicle ownership. |
| Hu, et al. [74] | Conversational AI | Conditional | Interaction of person-affect-cognition-execution (I-PACE) model | Survey | Social anxiety increased problematic use of conversational AI, mediated by loneliness and rumination. Mind perception was a significant moderator. |
| Alimamy and Kuhail [75] | Intelligent virtual assistant | Conditional | Human–computer interaction theory and stimulus–organism–response theory | Survey | Perceived authenticity and personalization increased commitment, trust, and reusage intentions, mediated by user involvement and connection. |
| Im, et al. [76] | Voice assistant | Conditional | Computers as social actors (CASA) theory | Experiment | When users engaged in functional tasks, voice assistants with a synthetic voice increased perceived fluency, competence perceptions, and attitudes. |
| Jiang, et al. [77] | Chatbot | Conditional | Task–technology fit theory | Mixed methods | Conversational cues were associated with human trust, mediated by perceived task-solving competence and social presence. Users' ambiguity tolerance and task creation were significant moderators. |
| Dubé, et al. [78] | Sex robot | Conditional | | Survey | Correlational analyses showed that willingness to engage with sex robots and the perceived appropriateness of their use were more closely related to erotophilia and sexual sensation seeking than to any other traits. Mixed repeated-measures ANOVAs and independent-sample t-tests with Bonferroni corrections also showed that cismen and nonbinary/gender-nonconforming individuals were more willing to engage with sex robots and perceived their use as more appropriate than ciswomen did. |
| Wald, et al. [79] | Virtual assistant | Conditional | Technology acceptance model, uses and gratifications theory, and the first proposition of the differential susceptibility to media effects model | Survey | Hedonic motivation was the key factor influencing parents' willingness to co-use the virtual assistant with their child(ren). |
| Pal, et al. [80] | Conversational AI | Conditional | Stimulus–organism–response framework and theory of love | Mixed methods | Love (i.e., passion, intimacy, and commitment) significantly influenced the usage scenario. Agent personality was a significant moderator. |
| Oleksy, et al. [81] | Algorithms in urban governance | Conditional | | Mixed methods | A lower level of perceived friendliness of the city increased users' reluctance to accept algorithmic governance. Cooperation between algorithms and humans increased acceptance of algorithms, perceived friendliness, and controllability of the city. |
| Lee, et al. [82] | AI-embedded system | Conditional | HAII-TIME (Human–AI Interaction from the perspective of the Theory of Interactive Media Effects) | Experiment | Users tended to view a system with explicit or implicit machine-learning cues as a helper and trusted it more. |
Table 5. Overview of reviewed studies on user acceptance of AI task substitutes.

| Source | Types of AI Task Substitute | User Acceptance (AI Appreciation, AI Aversion, or Conditional) | Theoretical Perspectives | Method | Key Findings |
| Strich, et al. [83] | Substitutive decision-making AI system | AI aversion | Professional role identity | Case study | The introduction of a substitutive decision-making AI system makes employees feel that their professional identities are threatened; thus, they strengthen and protect their professional role identities. |
| Liang and Xue [84] | Clinical decision support system | AI aversion | Dual process theory | Survey | Physicians may resist clinical decision support system (CDSS) recommendations. Related factors are also explored. |
| Kim, Kim, Kwak and Lee [22] | AI assistant | AI aversion | | Field experiment | Although AI-generated advice leads to better service performance, some employees may not utilize AI assistance (i.e., AI aversion) due to unforeseen barriers to usage (i.e., technology overload). |
| Zhang, et al. [85] | AI teammate | AI appreciation | | Experiment | Compared with human teammates, users trust AI teammates more and accept the AI's decisions. |
| Brachten, et al. [86] | Enterprise bot | Conditional | Decomposed theory of planned behavior | Survey | Both intrinsic and external motivations of employees positively influence the intention to use enterprise bots, and the influence of intrinsic motivation is stronger. |
| Jussupow, et al. [87] | AI-based system | Conditional | Dual process theory | Interview | Physicians tend to use metacognitions to assess AI advice, and these metacognitions determine whether physicians base their decisions on the AI or not. |
| Hradecky, et al. [88] | AI application | Conditional | Technology-organization-environment framework | Interview | The degree of confidence in organizational technological practices, financial resources, the size of the organization, issues of data management and protection, and the COVID-19 pandemic determine the adoption of AI in the event industry. |
| Vaast and Pinsonneault [89] | Digital technology | Conditional | | Case study | AI adoption relies on the constant adjustment and redefinition of people's occupational identity. |
| Chiu, et al. [90] | Enterprise AI | Conditional | Cognitive appraisal theory | Survey | Perceptions of AI's operational and cognitive capabilities significantly increase affective and cognitive attitudes toward AI, while concerns regarding AI significantly decrease affective attitude toward AI. |
| Prakash and Das [21] | Intelligent clinical diagnostic decision support system | Conditional | Unified theory of acceptance and use of technology | Mixed methods | Performance expectancy, effort expectancy, social influence, initial trust, and resistance to change are significantly related to the intention to use. |
| Yu, et al. [91] | Technological agency | Conditional | | Experiment | Control and restrictiveness significantly affect users' perceived relation with technological agents and their acceptance. |
| Dai and Singh [92] | AI diagnostic testing | Conditional | Game theory | Game model | High-type experts tend to rely on their own diagnostic decisions, while low-type experts rely more on AI advice. Related factors have also been explored. |
| Gkinko and Elbanna [93] | AI chatbot | Conditional | | Case study | The dominant mode of interaction and the understanding of the AI chatbot technology significantly contribute to users' appropriation of AI chatbots. |
| Ulfert, et al. [94] | Agent-based decision support system (DSS) | Conditional | | Experiment | High DSS autonomy increased users' information load reduction and technostress, but decreased use intention. Job experience strengthened the impact on information load reduction, but weakened the negative effect on use intention. |
| Verma and Singh [95] | AI-enabled system | Conditional | Prospect theory and job design theory | Survey | AI-enabled task characteristics (job autonomy and skill variety) and knowledge characteristics (job complexity, specialization, and information processing) are significantly related to innovative work behavior. Meanwhile, the perceived substitution crisis is a significant moderator. |
| Dang and Liu [96] | AI robot | Conditional | | Experiment | A malleable theory of the human mind negatively affected performance-avoidance goals and, in turn, positively affected competitive responses to robots. Meanwhile, a malleable theory of the human mind positively affected mastery goals and, in turn, positively affected cooperative responses to robots. Further, Chinese participants were less competitive and showed more cooperative responses to AI robots than British participants. |
| Westphal, et al. [97] | Human–AI collaboration system | Conditional | Cognitive load theory | Experiment | Decision control was positively associated with user trust, understanding, and compliance with system recommendations. Providing explanations may not only reenact the system's reasoning but also increase task complexity; their effectiveness relies on the user's cognitive ability in complex tasks. |
| Harris-Watson, et al. [98] | AI teammate | Conditional | Tripartite model of human newcomer receptivity | Experiment | Perceived warmth and competence affect psychological acceptance and further positively impact perceived human–AI team (HAT) viability. |
| Fan, et al. [99] | AI robot | Conditional | Information processing theory | Field experiment | An imbalanced robotic strategy is superior to a balanced one for service quality. In addition, when customer demands are high, a customer-focused robotic strategy (i.e., higher customer acceptance of robots than employee acceptance) is the optimal choice to improve service quality. However, when frontline task ambidexterity is high, the positive effects of an imbalanced robotic strategy on service quality diminish. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
