Understanding Users’ Acceptance of Artificial Intelligence Applications: A Literature Review
Abstract
1. Introduction
2. Methods
2.1. Literature Identification and Selection
2.2. Overview of Reviewed Studies
2.3. Overview of Conceptualization
3. Results of Literature Review on User Acceptance
3.1. Results of Literature Review on User Acceptance of AI Service Providers
3.2. Results of Literature Review on User Acceptance of AI Task Substitutes
4. Theoretical Perspectives Applied to User Acceptance of AI
4.1. Theoretical Perspectives Applied to User Acceptance of AI Service Providers
4.2. Theoretical Perspectives Applied to User Acceptance of AI Task Substitutes
5. Discussion
5.1. Key Findings and Future Research Directions
5.1.1. Lack of Clarification of the Differences between Various AI Applications
5.1.2. Limited Generalizability of Research Design
5.1.3. Conceptualization and Theorization in the Context of AI Acceptance
5.2. Limitations
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Al-Natour, S.; Benbasat, I.; Cenfetelli, R. Designing online virtual advisors to encourage customer self-disclosure: A theoretical model and an empirical test. J. Manag. Inf. Syst. 2021, 38, 798–827. [Google Scholar] [CrossRef]
- Schanke, S.; Burtch, G.; Ray, G. Estimating the impact of “humanizing” customer service chatbots. Inf. Syst. Res. 2021, 32, 736–751. [Google Scholar] [CrossRef]
- Tofangchi, S.; Hanelt, A.; Marz, D.; Kolbe, L.M. Handling the efficiency–personalization trade-off in service robotics: A machine-learning approach. J. Manag. Inf. Syst. 2021, 38, 246–276. [Google Scholar] [CrossRef]
- Lee, J.H.; Hsu, C.; Silva, L. What lies beneath: Unraveling the generative mechanisms of smart technology and service design. J. Assoc. Inf. Syst. 2020, 21, 3. [Google Scholar]
- Faulkner, P.; Runde, J. Theorizing the digital object. MIS Q. 2019, 43, 1279. [Google Scholar]
- Wesche, J.S.; Sonderegger, A. Repelled at first sight? Expectations and intentions of job-seekers reading about AI selection in job advertisements. Comput. Hum. Behav. 2021, 125, 106931. [Google Scholar] [CrossRef]
- Dixon, J.; Hong, B.; Wu, L. The robot revolution: Managerial and employment consequences for firms. Manag. Sci. 2021, 67, 5586–5605. [Google Scholar] [CrossRef]
- Van den Broek, E.; Sergeeva, A.; Huysman, M. When the machine meets the expert: An ethnography of developing AI for hiring. MIS Q. 2021, 45, 1557. [Google Scholar] [CrossRef]
- iResearch. 2022 Research Report on China’s Artificial Intelligence Industry (V). Available online: https://www.iresearch.com.cn/Detail/report?id=4147&isfree=0 (accessed on 5 May 2023).
- Kawamoto, K.; Houlihan, C.A.; Balas, E.A.; Lobach, D.F. Improving clinical practice using clinical decision support systems: A systematic review of trials to identify features critical to success. BMJ 2005, 330, 765. [Google Scholar] [CrossRef] [PubMed]
- Coiera, E. Guide to Health Informatics; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
- Kellogg, K.C.; Sendak, M.; Balu, S. AI on the Front Lines. Available online: https://sloanreview.mit.edu/article/ai-on-the-front-lines/ (accessed on 5 May 2023).
- Shixiang. Cresta: Real Time AI Mentor for Sales and Customer Service. Available online: https://36kr.com/p/2141615670591233 (accessed on 5 May 2023).
- Schuetzler, R.M.; Grimes, G.M.; Scott Giboney, J. The impact of chatbot conversational skill on engagement and perceived humanness. J. Manag. Inf. Syst. 2020, 37, 875–900. [Google Scholar] [CrossRef]
- Berente, N.; Gu, B.; Recker, J.; Santhanam, R. Managing artificial intelligence. MIS Q. 2021, 45, 1433–1450. [Google Scholar]
- Borges, A.F.; Laurindo, F.J.; Spínola, M.M.; Gonçalves, R.F.; Mattos, C.A. The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions. Int. J. Inf. Manag. 2021, 57, 102225. [Google Scholar] [CrossRef]
- You, S.; Yang, C.L.; Li, X. Algorithmic versus human advice: Does presenting prediction performance matter for algorithm appreciation? J. Manag. Inf. Syst. 2022, 39, 336–365. [Google Scholar] [CrossRef]
- Longoni, C.; Bonezzi, A.; Morewedge, C.K. Resistance to medical artificial intelligence. J. Consum. Res. 2019, 46, 629–650. [Google Scholar] [CrossRef]
- Garvey, A.M.; Kim, T.; Duhachek, A. Bad news? Send an AI. Good news? Send a human. J. Mark. 2022, 87, 10–25. [Google Scholar] [CrossRef]
- Shin, D.; Zhong, B.; Biocca, F.A. Beyond user experience: What constitutes algorithmic experiences? Int. J. Inf. Manag. 2020, 52, 102061. [Google Scholar] [CrossRef]
- Prakash, A.V.; Das, S. Medical practitioner’s adoption of intelligent clinical diagnostic decision support systems: A mixed-methods study. Inf. Manag. 2021, 58, 103524. [Google Scholar] [CrossRef]
- Kim, J.H.; Kim, M.; Kwak, D.W.; Lee, S. Home-tutoring services assisted with technology: Investigating the role of artificial intelligence using a randomized field experiment. J. Mark. Res. 2022, 59, 79–96. [Google Scholar] [CrossRef]
- Tan, T.F.; Netessine, S. At your service on the table: Impact of tabletop technology on restaurant performance. Manag. Sci. 2020, 66, 4496–4515. [Google Scholar] [CrossRef]
- Du, H.S.; Wagner, C. Weblog success: Exploring the role of technology. Int. J. Hum.-Comput. Stud. 2006, 64, 789–798. [Google Scholar] [CrossRef]
- Larivière, B.; Bowen, D.; Andreassen, T.W.; Kunz, W.; Sirianni, N.J.; Voss, C.; Wünderlich, N.V.; De Keyser, A. “Service Encounter 2.0”: An investigation into the roles of technology, employees and customers. J. Bus. Res. 2017, 79, 238–246. [Google Scholar] [CrossRef]
- Collins, C.; Dennehy, D.; Conboy, K.; Mikalef, P. Artificial intelligence in information systems research: A systematic literature review and research agenda. Int. J. Inf. Manag. 2021, 60, 102383. [Google Scholar] [CrossRef]
- Langer, M.; Landers, R.N. The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Comput. Hum. Behav. 2021, 123, 106878. [Google Scholar] [CrossRef]
- Webster, J.; Watson, R.T. Analyzing the past to prepare for the future: Writing a literature review. MIS Q. 2002, 26, xiii–xxiii. [Google Scholar]
- Gill, T. Blame it on the self-driving car: How autonomous vehicles can alter consumer morality. J. Consum. Res. 2020, 47, 272–291. [Google Scholar] [CrossRef]
- Peng, C.; van Doorn, J.; Eggers, F.; Wieringa, J.E. The effect of required warmth on consumer acceptance of artificial intelligence in service: The moderating role of AI-human collaboration. Int. J. Inf. Manag. 2022, 66, 102533. [Google Scholar] [CrossRef]
- Yalcin, G.; Lim, S.; Puntoni, S.; van Osselaer, S.M. Thumbs up or down: Consumer reactions to decisions by algorithms versus humans. J. Mark. Res. 2022, 59, 696–717. [Google Scholar] [CrossRef]
- Luo, X.; Tong, S.; Fang, Z.; Qu, Z. Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Mark. Sci. 2019, 38, 937–947. [Google Scholar] [CrossRef]
- Ge, R.; Zheng, Z.; Tian, X.; Liao, L. Human–robot interaction: When investors adjust the usage of robo-advisors in peer-to-peer lending. Inf. Syst. Res. 2021, 32, 774–785. [Google Scholar]
- Park, E.H.; Werder, K.; Cao, L.; Ramesh, B. Why do family members reject AI in health care? Competing effects of emotions. J. Manag. Inf. Syst. 2022, 39, 765–792. [Google Scholar] [CrossRef]
- Aktan, M.E.; Turhan, Z.; Dolu, İ. Attitudes and perspectives towards the preferences for artificial intelligence in psychotherapy. Comput. Hum. Behav. 2022, 133, 107273. [Google Scholar] [CrossRef]
- Formosa, P.; Rogers, W.; Griep, Y.; Bankins, S.; Richards, D. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Comput. Hum. Behav. 2022, 133, 107296. [Google Scholar] [CrossRef]
- Millet, K.; Buehler, F.; Du, G.; Kokkoris, M.D. Defending humankind: Anthropocentric bias in the appreciation of AI art. Comput. Hum. Behav. 2023, 143, 107707. [Google Scholar] [CrossRef]
- Drouin, M.; Sprecher, S.; Nicola, R.; Perkins, T. Is chatting with a sophisticated chatbot as good as chatting online or FTF with a stranger? Comput. Hum. Behav. 2022, 128, 107100. [Google Scholar] [CrossRef]
- Wang, X.; Wong, Y.D.; Chen, T.; Yuen, K.F. Adoption of shopper-facing technologies under social distancing: A conceptualisation and an interplay between task-technology fit and technology trust. Comput. Hum. Behav. 2021, 124, 106900. [Google Scholar] [CrossRef]
- Zhang, F.; Pan, Z.; Lu, Y. AIoT-enabled smart surveillance for personal data digitalization: Contextual personalization-privacy paradox in smart home. Inf. Manag. 2023, 60, 103736. [Google Scholar] [CrossRef]
- Shin, D.; Kee, K.F.; Shin, E.Y. Algorithm awareness: Why user awareness is critical for personal privacy in the adoption of algorithmic platforms? Int. J. Inf. Manag. 2022, 65, 102494. [Google Scholar] [CrossRef]
- Hu, Q.; Lu, Y.; Pan, Z.; Gong, Y.; Yang, Z. Can AI artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants. Int. J. Inf. Manag. 2021, 56, 102250. [Google Scholar] [CrossRef]
- Canziani, B.; MacSween, S. Consumer acceptance of voice-activated smart home devices for product information seeking and online ordering. Comput. Hum. Behav. 2021, 119, 106714. [Google Scholar] [CrossRef]
- Sung, E.C.; Bae, S.; Han, D.-I.D.; Kwon, O. Consumer engagement via interactive artificial intelligence and mixed reality. Int. J. Inf. Manag. 2021, 60, 102382. [Google Scholar] [CrossRef]
- Wiesenberg, M.; Tench, R. Deep strategic mediatization: Organizational leaders’ knowledge and usage of social bots in an era of disinformation. Int. J. Inf. Manag. 2020, 51, 102042. [Google Scholar] [CrossRef]
- Song, X.; Xu, B.; Zhao, Z. Can people experience romantic love for artificial intelligence? An empirical study of intelligent assistants. Inf. Manag. 2022, 59, 103595. [Google Scholar] [CrossRef]
- Huo, W.; Zheng, G.; Yan, J.; Sun, L.; Han, L. Interacting with medical artificial intelligence: Integrating self-responsibility attribution, human–computer trust, and personality. Comput. Hum. Behav. 2022, 132, 107253. [Google Scholar] [CrossRef]
- Mishra, A.; Shukla, A.; Sharma, S.K. Psychological determinants of users’ adoption and word-of-mouth recommendations of smart voice assistants. Int. J. Inf. Manag. 2021, 67, 102413. [Google Scholar] [CrossRef]
- Liu, K.; Tao, D. The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services. Comput. Hum. Behav. 2022, 127, 107026. [Google Scholar] [CrossRef]
- Chuah, S.H.-W.; Aw, E.C.-X.; Yee, D. Unveiling the complexity of consumers’ intention to use service robots: An fsQCA approach. Comput. Hum. Behav. 2021, 123, 106870. [Google Scholar] [CrossRef]
- Pelau, C.; Dabija, D.-C.; Ene, I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput. Hum. Behav. 2021, 122, 106855. [Google Scholar] [CrossRef]
- Crolic, C.; Thomaz, F.; Hadi, R.; Stephen, A.T. Blame the bot: Anthropomorphism and anger in customer–chatbot interactions. J. Mark. 2022, 86, 132–148. [Google Scholar] [CrossRef]
- Mamonov, S.; Koufaris, M. Fulfillment of higher-order psychological needs through technology: The case of smart thermostats. Int. J. Inf. Manag. 2020, 52, 102091. [Google Scholar] [CrossRef]
- Longoni, C.; Cian, L. Artificial intelligence in utilitarian vs. hedonic contexts: The “word-of-machine” effect. J. Mark. 2022, 86, 91–108. [Google Scholar] [CrossRef]
- Lv, X.; Yang, Y.; Qin, D.; Cao, X.; Xu, H. Artificial intelligence service recovery: The role of empathic response in hospitality customers’ continuous usage intention. Comput. Hum. Behav. 2022, 126, 106993. [Google Scholar] [CrossRef]
- Kim, J.; Merrill Jr, K.; Xu, K.; Kelly, S. Perceived credibility of an AI instructor in online education: The role of social presence and voice features. Comput. Hum. Behav. 2022, 136, 107383. [Google Scholar] [CrossRef]
- Tojib, D.; Ho, T.H.; Tsarenko, Y.; Pentina, I. Service robots or human staff? The role of performance goal orientation in service robot adoption. Comput. Hum. Behav. 2022, 134, 107339. [Google Scholar] [CrossRef]
- Luo, X.; Qin, M.S.; Fang, Z.; Qu, Z. Artificial intelligence coaches for sales agents: Caveats and solutions. J. Mark. 2021, 85, 14–32. [Google Scholar] [CrossRef]
- Ko, G.Y.; Shin, D.; Auh, S.; Lee, Y.; Han, S.P. Learning outside the classroom during a pandemic: Evidence from an artificial intelligence-based education app. Manag. Sci. 2022, 69, 3616–3649. [Google Scholar] [CrossRef]
- Luo, B.; Lau, R.Y.K.; Li, C. Emotion-regulatory chatbots for enhancing consumer servicing: An interpersonal emotion management approach. Inf. Manag. 2023, 60, 103794. [Google Scholar] [CrossRef]
- Chandra, S.; Shirish, A.; Srivastava, S.C. To be or not to be… human? Theorizing the role of human-like competencies in conversational artificial intelligence agents. J. Manag. Inf. Syst. 2022, 39, 969–1005. [Google Scholar] [CrossRef]
- Chi, O.H.; Chi, C.G.; Gursoy, D.; Nunkoo, R. Customers’ acceptance of artificially intelligent service robots: The influence of trust and culture. Int. J. Inf. Manag. 2023, 70, 102623. [Google Scholar] [CrossRef]
- Chong, L.; Zhang, G.; Goucher-Lambert, K.; Kotovsky, K.; Cagan, J. Human confidence in artificial intelligence and in themselves: The evolution and impact of confidence on adoption of AI advice. Comput. Hum. Behav. 2022, 127, 107018. [Google Scholar] [CrossRef]
- Rhim, J.; Kwak, M.; Gong, Y.; Gweon, G. Application of humanization to survey chatbots: Change in chatbot perception, interaction experience, and survey data quality. Comput. Hum. Behav. 2022, 126, 107034. [Google Scholar] [CrossRef]
- Hu, P.; Lu, Y.; Wang, B. Experiencing power over AI: The fit effect of perceived power and desire for power on consumers’ choice for voice shopping. Comput. Hum. Behav. 2022, 128, 107091. [Google Scholar] [CrossRef]
- Benke, I.; Gnewuch, U.; Maedche, A. Understanding the impact of control levels over emotion-aware chatbots. Comput. Hum. Behav. 2022, 129, 107122. [Google Scholar] [CrossRef]
- Plaks, J.E.; Bustos Rodriguez, L.; Ayad, R. Identifying psychological features of robots that encourage and discourage trust. Comput. Hum. Behav. 2022, 134, 107301. [Google Scholar] [CrossRef]
- Jiang, H.; Cheng, Y.; Yang, J.; Gao, S. AI-powered chatbot communication with customers: Dialogic interactions, satisfaction, engagement, and customer behavior. Comput. Hum. Behav. 2022, 134, 107329. [Google Scholar] [CrossRef]
- Munnukka, J.; Talvitie-Lamberg, K.; Maity, D. Anthropomorphism and social presence in Human–Virtual service assistant interactions: The role of dialog length and attitudes. Comput. Hum. Behav. 2022, 135, 107343. [Google Scholar] [CrossRef]
- Chua, A.Y.K.; Pal, A.; Banerjee, S. AI-enabled investment advice: Will users buy it? Comput. Hum. Behav. 2023, 138, 107481. [Google Scholar] [CrossRef]
- Yi-No Kang, E.; Chen, D.-R.; Chen, Y.-Y. Associations between literacy and attitudes toward artificial intelligence–assisted medical consultations: The mediating role of perceived distrust and efficiency of artificial intelligence. Comput. Hum. Behav. 2023, 139, 107529. [Google Scholar] [CrossRef]
- Liu, Y.-l.; Hu, B.; Yan, W.; Lin, Z. Can chatbots satisfy me? A mixed-method comparative study of satisfaction with task-oriented chatbots in mainland China and Hong Kong. Comput. Hum. Behav. 2023, 143, 107716. [Google Scholar] [CrossRef]
- Wu, M.; Wang, N.; Yuen, K.F. Deep versus superficial anthropomorphism: Exploring their effects on human trust in shared autonomous vehicles. Comput. Hum. Behav. 2023, 141, 107614. [Google Scholar] [CrossRef]
- Hu, B.; Mao, Y.; Kim, K.J. How social anxiety leads to problematic use of conversational AI: The roles of loneliness, rumination, and mind perception. Comput. Hum. Behav. 2023, 145, 107760. [Google Scholar] [CrossRef]
- Alimamy, S.; Kuhail, M.A. I will be with you Alexa! The impact of intelligent virtual assistant’s authenticity and personalization on user reusage intentions. Comput. Hum. Behav. 2023, 143, 107711. [Google Scholar] [CrossRef]
- Im, H.; Sung, B.; Lee, G.; Xian Kok, K.Q. Let voice assistants sound like a machine: Voice and task type effects on perceived fluency, competence, and consumer attitude. Comput. Hum. Behav. 2023, 145, 107791. [Google Scholar] [CrossRef]
- Jiang, Y.; Yang, X.; Zheng, T. Make chatbots more adaptive: Dual pathways linking human-like cues and tailored response to trust in interactions with chatbots. Comput. Hum. Behav. 2023, 138, 107485. [Google Scholar] [CrossRef]
- Dubé, S.; Santaguida, M.; Zhu, C.Y.; Di Tomasso, S.; Hu, R.; Cormier, G.; Johnson, A.P.; Vachon, D. Sex robots and personality: It is more about sex than robots. Comput. Hum. Behav. 2022, 136, 107403. [Google Scholar] [CrossRef]
- Wald, R.; Piotrowski, J.T.; Araujo, T.; van Oosten, J.M.F. Virtual assistants in the family home. Understanding parents’ motivations to use virtual assistants with their Child(dren). Comput. Hum. Behav. 2023, 139, 107526. [Google Scholar] [CrossRef]
- Pal, D.; Vanijja, V.; Thapliyal, H.; Zhang, X. What affects the usage of artificial conversational agents? An agent personality and love theory perspective. Comput. Hum. Behav. 2023, 145, 107788. [Google Scholar] [CrossRef]
- Oleksy, T.; Wnuk, A.; Domaradzka, A.; Maison, D. What shapes our attitudes towards algorithms in urban governance? The role of perceived friendliness and controllability of the city, and human-algorithm cooperation. Comput. Hum. Behav. 2023, 142, 107653. [Google Scholar] [CrossRef]
- Lee, S.; Moon, W.-K.; Lee, J.-G.; Sundar, S.S. When the machine learns from users, is it helping or snooping? Comput. Hum. Behav. 2023, 138, 107427. [Google Scholar] [CrossRef]
- Strich, F.; Mayer, A.-S.; Fiedler, M. What do I do in a world of Artificial Intelligence? Investigating the impact of substitutive decision-making AI systems on employees’ professional role identity. J. Assoc. Inf. Syst. 2021, 22, 9. [Google Scholar] [CrossRef]
- Liang, H.; Xue, Y. Save face or save life: Physicians’ dilemma in using clinical decision support systems. Inf. Syst. Res. 2022, 33, 737–758. [Google Scholar] [CrossRef]
- Zhang, G.; Chong, L.; Kotovsky, K.; Cagan, J. Trust in an AI versus a Human teammate: The effects of teammate identity and performance on Human-AI cooperation. Comput. Hum. Behav. 2023, 139, 107536. [Google Scholar] [CrossRef]
- Brachten, F.; Kissmer, T.; Stieglitz, S. The acceptance of chatbots in an enterprise context–A survey study. Int. J. Inf. Manag. 2021, 60, 102375. [Google Scholar] [CrossRef]
- Jussupow, E.; Spohrer, K.; Heinzl, A.; Gawlitza, J. Augmenting medical diagnosis decisions? An investigation into physicians’ decision-making process with artificial intelligence. Inf. Syst. Res. 2021, 32, 713–735. [Google Scholar] [CrossRef]
- Hradecky, D.; Kennell, J.; Cai, W.; Davidson, R. Organizational readiness to adopt artificial intelligence in the exhibition sector in Western Europe. Int. J. Inf. Manag. 2022, 65, 102497. [Google Scholar] [CrossRef]
- Vaast, E.; Pinsonneault, A. When digital technologies enable and threaten occupational identity: The delicate balancing act of data scientists. MIS Q. 2021, 45, 1087–1112. [Google Scholar] [CrossRef]
- Chiu, Y.-T.; Zhu, Y.-Q.; Corbett, J. In the hearts and minds of employees: A model of pre-adoptive appraisal toward artificial intelligence in organizations. Int. J. Inf. Manag. 2021, 60, 102379. [Google Scholar] [CrossRef]
- Yu, B.; Vahidov, R.; Kersten, G.E. Acceptance of technological agency: Beyond the perception of utilitarian value. Inf. Manag. 2021, 58, 103503. [Google Scholar] [CrossRef]
- Dai, T.; Singh, S. Conspicuous by its absence: Diagnostic expert testing under uncertainty. Mark. Sci. 2020, 39, 540–563. [Google Scholar] [CrossRef]
- Gkinko, L.; Elbanna, A. The appropriation of conversational AI in the workplace: A taxonomy of AI chatbot users. Int. J. Inf. Manag. 2023, 69, 102568. [Google Scholar] [CrossRef]
- Ulfert, A.-S.; Antoni, C.H.; Ellwart, T. The role of agent autonomy in using decision support systems at work. Comput. Hum. Behav. 2022, 126, 106987. [Google Scholar] [CrossRef]
- Verma, S.; Singh, V. Impact of artificial intelligence-enabled job characteristics and perceived substitution crisis on innovative work behavior of employees from high-tech firms. Comput. Hum. Behav. 2022, 131, 107215. [Google Scholar] [CrossRef]
- Dang, J.; Liu, L. Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Comput. Hum. Behav. 2022, 134, 107300. [Google Scholar] [CrossRef]
- Westphal, M.; Vössing, M.; Satzger, G.; Yom-Tov, G.B.; Rafaeli, A. Decision control and explanations in human-AI collaboration: Improving user perceptions and compliance. Comput. Hum. Behav. 2023, 144, 107714. [Google Scholar] [CrossRef]
- Harris-Watson, A.M.; Larson, L.E.; Lauharatanahirun, N.; DeChurch, L.A.; Contractor, N.S. Social perception in Human-AI teams: Warmth and competence predict receptivity to AI teammates. Comput. Hum. Behav. 2023, 145, 107765. [Google Scholar] [CrossRef]
- Fan, H.; Gao, W.; Han, B. How does (im)balanced acceptance of robots between customers and frontline employees affect hotels’ service quality? Comput. Hum. Behav. 2022, 133, 107287. [Google Scholar] [CrossRef]
- Jain, H.; Padmanabhan, B.; Pavlou, P.A.; Santanam, R.T. Call for papers—Special issue of information systems research—Humans, algorithms, and augmented intelligence: The future of work, organizations, and society. Inf. Syst. Res. 2018, 29, 250–251. [Google Scholar] [CrossRef]
- Jain, H.; Padmanabhan, B.; Pavlou, P.A.; Raghu, T. Editorial for the special section on humans, algorithms, and augmented intelligence: The future of work, organizations, and society. Inf. Syst. Res. 2021, 32, 675–687. [Google Scholar] [CrossRef]
- Rai, A.; Constantinides, P.; Sarker, S. Next generation digital platforms: Toward human-AI hybrids. MIS Q. 2019, 43, iii–ix. [Google Scholar]
- Hong, J.-W.; Fischer, K.; Ha, Y.; Zeng, Y. Human, I wrote a song for you: An experiment testing the influence of machines’ attributes on the AI-composed music evaluation. Comput. Hum. Behav. 2022, 131, 107239. [Google Scholar] [CrossRef]
- Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
- McCloskey, D. Evaluating electronic commerce acceptance with the technology acceptance model. J. Comput. Inf. Syst. 2004, 44, 49–57. [Google Scholar]
- Szajna, B. Empirical evaluation of the revised technology acceptance model. Manag. Sci. 1996, 42, 85–92. [Google Scholar] [CrossRef]
- Ha, S.; Stoel, L. Consumer e-shopping acceptance: Antecedents in a technology acceptance model. J. Bus. Res. 2009, 62, 565–571. [Google Scholar] [CrossRef]
- Burton-Jones, A.; Hubona, G.S. The mediation of external variables in the technology acceptance model. Inf. Manag. 2006, 43, 706–717. [Google Scholar] [CrossRef]
- Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
- Taylor, S.; Todd, P.A. Understanding information technology usage: A test of competing models. Inf. Syst. Res. 1995, 6, 144–176. [Google Scholar] [CrossRef]
- Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
- Baishya, K.; Samalia, H.V. Extending unified theory of acceptance and use of technology with perceived monetary value for smartphone adoption at the bottom of the pyramid. Int. J. Inf. Manag. 2020, 51, 102036. [Google Scholar] [CrossRef]
- Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
- Biocca, F.; Harms, C.; Burgoon, J.K. Toward a more robust theory and measure of social presence: Review and suggested criteria. Presence Teleoper. Virtual Environ. 2003, 12, 456–480. [Google Scholar] [CrossRef]
- Lazarus, R.S.; Folkman, S. Stress, Appraisal, and Coping; Springer Publishing Company: Berlin/Heidelberg, Germany, 1984. [Google Scholar]
- Riek, B.M.; Mania, E.W.; Gaertner, S.L. Intergroup threat and outgroup attitudes: A meta-analytic review. Personal. Soc. Psychol. Rev. 2006, 10, 336–353. [Google Scholar] [CrossRef]
- Evans, J.S.B.; Stanovich, K.E. Dual-process theories of higher cognition: Advancing the debate. Perspect. Psychol. Sci. 2013, 8, 223–241. [Google Scholar] [CrossRef]
- Ferratt, T.W.; Prasad, J.; Dunne, E.J. Fast and slow processes underlying theories of information technology use. J. Assoc. Inf. Syst. 2018, 19, 3. [Google Scholar] [CrossRef]
- Evans, J.S.B. Dual-processing accounts of reasoning, judgment, and social cognition. Annu. Rev. Psychol. 2008, 59, 255–278. [Google Scholar] [CrossRef]
- Seeger, A.-M.; Pfeiffer, J.; Heinzl, A. Texting with humanlike conversational agents: Designing for anthropomorphism. J. Assoc. Inf. Syst. 2021, 22, 8. [Google Scholar] [CrossRef]
- Lu, L.; Cai, R.; Gursoy, D. Developing and validating a service robot integration willingness scale. Int. J. Hosp. Manag. 2019, 80, 36–51. [Google Scholar] [CrossRef]
| Journal | Method | Number of Articles |
|---|---|---|
| Management Science | Empirical estimation | 1 |
| Marketing Science | Field experiment | 1 |
| | Game model | 1 |
| MIS Quarterly | Case study | 1 |
| Information Systems Research | Empirical estimation | 1 |
| | Field experiment | 1 |
| | Interview | 1 |
| | Survey | 1 |
| Journal of Marketing | Experiment | 2 |
| | Field experiment | 1 |
| | Mixed methods | 1 |
| Journal of Marketing Research | Experiment | 1 |
| | Field experiment | 1 |
| Journal of Consumer Research | Experiment | 2 |
| Journal of the Association for Information Systems | Case study | 1 |
| Journal of Management Information Systems | Experiment | 3 |
| | Mixed methods | 1 |
| International Journal of Information Management | Case study | 1 |
| | Interview | 1 |
| | Survey | 8 |
| | Mixed methods | 3 |
| Information & Management | Experiment | 2 |
| | Survey | 2 |
| | Mixed methods | 1 |
| Computers in Human Behavior | Experiment | 19 |
| | Field experiment | 1 |
| | Longitudinal study | 1 |
| | Survey | 15 |
| | Mixed methods | 5 |
| Mixed Methods | Number of Articles |
|---|---|
| Qualitative methods and quantitative studies | 6 |
| Experiments and one survey | 4 |
| Empirical estimation on real-world data and 4 controlled experiments | 1 |
| Category | Types of Outcome Variables | Number of Articles |
|---|---|---|
| Behavior | Acceptance behavior | 5 |
| | Usage behavior | 6 |
| | Purchase behavior | 2 |
| | User performance | 1 |
| Behavioral intention | AI resistance | 3 |
| | Intention to accept AI | 18 |
| | Intention to use AI | 23 |
| | Purchase intention | 3 |
| | Intention to self-disclose | 1 |
| | User performance | 4 |
| Perception | Attitude | 6 |
| | Trust | 14 |
| | Satisfaction | 6 |
Source | Types of AI Service Provider | User Acceptance (or Not) | Theoretical Perspectives | Methods | Key Findings
---|---|---|---|---|---
You, Yang and Li [17] | Judge–advisor system | AI appreciation | Cognitive load theory. | Experiment | Individuals largely exhibit algorithm appreciation, tending to adopt algorithmic advice to a greater extent. Related factors are also explored.
Gill [29] | Autonomous vehicle | AI appreciation | Attribution theory. | Experiment | For negative events affecting pedestrians, people tend to consider autonomous vehicles more acceptable.
Schanke, Burtch and Ray [2] | Customer service chatbot | AI appreciation | Social presence theory. | Field experiment | Consumers tend to be willing to self-disclose, shift to a fairness evaluation, and accept the offer provided by a human-like chatbot.
Peng, et al. [30] | AI service | AI aversion | Social cognition theory, task–technology fit theory. | Mixed method | Consumers tend to refuse AI for warmth-requiring tasks due to the low perceived fit between AI and the task.
Longoni, Bonezzi and Morewedge [18] | AI medical application | AI aversion | Uniqueness neglect. | Experiment | With AI medical applications, consumers are less likely to utilize healthcare, are less sensitive to differences in provider performance, exhibit lower reservation prices for healthcare, and derive negative utility.
Yalcin, et al. [31] | Algorithmic decision maker | AI aversion | Attribution theory. | Experiment | Consumers tend to respond less positively to an algorithmic decision maker. Related factors are also explored.
Luo, et al. [32] | Chatbot | AI aversion | | Field experiment | Although chatbots perform as effectively as proficient workers, the disclosure of chatbot identity reduces customer purchase rates.
Ge, et al. [33] | AI financial-advising service | AI aversion | | Empirical estimation | Investors who need more help are less likely to accept robo-advising services. Furthermore, adjusting adoption behavior based on recent robo-advisor performance may result in inferior investor performance.
Park, et al. [34] | AI monitoring for healthcare | AI aversion | | Experiment | Anxiety about healthcare monitoring and anxiety about health outcomes decreased the rejection of AI monitoring, whereas surveillance anxiety and delegation anxiety increased rejection. Meanwhile, individual-level risks and perceived controllability are significant moderators.
Aktan, et al. [35] | AI-based psychotherapy | AI aversion | | Survey | Most participants reported more trust in human psychotherapists than in AI-based psychotherapists. However, AI-based psychotherapy may be beneficial because it allows people to comfortably talk about embarrassing experiences, is accessible at any time, and enables remote communication. Furthermore, gender and profession type may also affect the choice of AI-based psychotherapy.
Formosa, et al. [36] | AI decision maker | AI aversion | | Experiment | Users consistently view humans (vs. AI) as more appropriate decision makers.
Millet, et al. [37] | AI art generator | AI aversion | | Experiment | Users, especially those with stronger anthropocentric creativity beliefs, perceived AI-made (vs. human-made) artwork as less creative and experienced less awe, which led to lower preference.
Drouin, et al. [38] | Emotionally responsive chatbot | Conditional | | Experiment | In terms of negative emotions and conversational concerns, participants reported better responses to chatbot partners than to human partners, whereas in terms of homophily, responsive chat, and liking of the chat partner, participants responded better to human partners than to the chatbot.
Wang, et al. [39] | Shopper-facing technology | Conditional | Task–technology fit theory. | Survey | The authors identify three dimensions of shopper-facing technologies, named shopper-dominant (pre-) shopping technologies, shopper-dominant post-shopping technologies, and technology-dominant automations. Shoppers’ adoption intentions are determined by their evaluations on technology–task fitness. | |
Zhang, et al. [40] | Smart home service | Conditional | Surveillance theory. | Survey | The intention to use AI in a smart home context depends on the trade-offs of contextual personalization and privacy concerns. | |
Shin, et al. [41] | Algorithmic platform | Conditional | Privacy calculus theory. | Survey | The trust and self-disclosure to algorithms depend on users’ algorithm awareness, which depends on users’ perceived control of information flow. | |
Hu, et al. [42] | Intelligent assistant | Conditional | Mind perception theory. | Survey | Artificial autonomy of intelligent personal assistants is significantly related to users’ continuance usage intention, which is mediated by competence and warmth perception. | |
Canziani and MacSween [43] | Voice-activated smart home device | Conditional | Technology acceptance model. | Survey | Propensity for seeking referent persons’ opinions will increase perceived device utility. Perceived device utility and hedonic enjoyment of voice ordering are both positively related to consumers’ intentions to use the device for online ordering. | |
Sung, et al. [44] | AI-embedded mixed reality (MR) | Conditional | Stimulus (S)–organism (O)–response (R) framework. | Survey | The quality of AI, including speech recognition and synthesis via machine learning, significantly influences MR immersion, MR enjoyment, and perceptions of novel experiences, which collectively increase consumer engagement and behavioral responses (i.e., purchase intentions and intentions to share). | |
Wiesenberg and Tench [45] | Social robot | Conditional | Mediatization theory. | Survey | Leading communication professionals in Central and Western Europe as well as Scandinavia report higher concerns with ethical challenges of social bot usage, while professionals in Southern and Eastern Europe are less skeptical. In general, only a small minority of the sample reports readiness to use social bots for organizational strategic communication. | |
Song, et al. [46] | Intelligent assistant | Conditional | Theory of love. | Survey | AI applications are able to promote users’ feelings of intimacy and passion. These feelings positively impact users’ commitment, which further increases the intention to use intelligent assistants.
Huo, et al. [47] | Medical AI | Conditional | Attribution theory. | Survey | Patients’ acceptance of medical AI for independent diagnosis and treatment is significantly related to their self-responsibility attribution, which is mediated by human–computer trust (HCT) and moderated by personality traits. | |
Mishra, et al. [48] | Smart voice assistant | Conditional | Flow theory and the theory of anthropomorphism. | Survey | Playfulness and escapism are significantly related to hedonic attitude, while anthropomorphism, visual appeal, and social presence are significantly related to utilitarian attitude. Smart voice assistant (SVA) usage is influenced more by utilitarian attitude than hedonic attitude. | |
Liu and Tao [49] | AI-based smart healthcare service | Conditional | Technology acceptance model. | Survey | Public acceptance of smart healthcare services is directly or indirectly determined by perceived usefulness, perceived ease of use, trust, and AI-specific characteristics. | |
Chuah, et al. [50] | Service robot | Conditional | Complexity theory. | Survey | Specific combinations of human-like, technology-like, and consumer features are able to increase intention to use service robots. | |
Pelau, et al. [51] | AI device | Conditional | The computers as social actors (CASA) theory. | Survey | Anthropomorphic characteristics of AI device indirectly influence acceptance and trust towards AI device through the mediation route of both perceived empathy and interaction quality. | |
Shin, Zhong and Biocca [20] | Algorithm system | Conditional | Technology acceptance model. | Mixed method | Users’ actual use of algorithm systems is significantly related to their algorithmic experience (AX). | |
Crolic, et al. [52] | Customer service chatbot | Conditional | Expectancy violation theory. | Mixed method | The effect of chatbot anthropomorphism on customer satisfaction, overall firm evaluation, and subsequent purchase intentions depends on customers’ emotional state. An angry emotional state leads to a negative effect. | |
Mamonov and Koufaris [53] | Smart thermostat | Conditional | Unified theory of acceptance and use of technology. | Mixed method | The smart thermostat adoption intention is mainly determined by techno-coolness, less by performance expectancy, and not by effort expectancy. | |
Longoni and Cian [54] | AI-based recommender | Conditional | | Experiment | Consumers are more likely to adopt AI recommendations in the utilitarian realm, while they tend to adopt them less in the hedonic realm. Related factors are also explored.
Lv, et al. [55] | AI service | Conditional | Social response theory. | Experiment | In service recovery, a high-empathy AI response can significantly increase customers’ continuous usage intention. | |
Garvey, Kim and Duhachek [19] | AI marketing agent | Conditional | Expectations discrepancy theory. | Experiment | Consumers tend to react positively (i.e., increased purchase likelihood and satisfaction) to an AI agent delivering bad news, while they react negatively to good news offered by an AI agent.
Al-Natour, Benbasat and Cenfetelli [1] | Virtual advisor | Conditional | Social exchange theory. | Experiment | The perceptions of a virtual advisor and the relationship with a virtual advisor are both determinants in self-disclosure intention. | |
Lv, Yang, Qin, Cao and Xu [55] | AI music generator | Conditional | Role theory. | Experiment | The acceptance of an AI music generator as a musician is significantly related to its humanlike traits, but not influenced by its autonomy to create songs. | |
Kim, et al. [56] | AI instructor | Conditional | Social presence theory. | Experiment | An AI instructor with a humanlike voice (vs. with a machinelike voice) improves students’ perceived social presence and credibility, which further increases intention to enroll in AI-instructor-based online courses. | |
Tojib, et al. [57] | Service robot | Conditional | | Experiment | Service robot adoption is directly or indirectly determined by desire for achievement (PAP), desire to avoid failure (PAV), spontaneous social influence, and challenge appraisal.
Luo, et al. [58] | AI coach | Conditional | Information processing theory. | Field experiment | Middle-ranked human agents benefit most from the help of an AI coach, while both bottom- and top-ranked agents show limited incremental gains, because bottom-ranked agents face an information overload problem and top-ranked agents hold the strongest aversion to an AI coach.
Ko, et al. [59] | AI-powered learning app | Conditional | Temporal construal theory. | Empirical estimation | Students living in the epicenter of the COVID-19 outbreak (vs. those who did not) tended to use the AI-powered learning app less at first, but over time they increased and regularized their usage and rebounded to a curriculum path.
Luo, et al. [60] | Emotion-regulatory chatbot | Conditional | Interpersonal emotion management (IEM) theory. | Experiment | Perceived interpersonal emotion management strategies significantly affected positive word-of-mouth, which was sequentially mediated by appraisals and post-recovery emotions. | |
Chandra, et al. [61] | Conversational AI agent | Conditional | Media naturalness theory. | Mixed method | Human-like interactional (i.e., cognitive, relational, and emotional) competencies in conversational AI agents increased user trust and further improved user engagement with the agents. | |
Chi, et al. [62] | AI service robot | Conditional | Artificially Intelligent Device Use Acceptance (AIDUA) framework. | Survey | Trust in AI robot interaction affected use intention. Uncertainty avoidance, long-term orientation, and power distance were significant moderators. | |
Chong, et al. [63] | AI advisor | Conditional | | Mixed method | The choice to accept or reject AI suggestions was determined by the human’s self-confidence rather than confidence in the AI.
Rhim, et al. [64] | Survey chatbot | Conditional | | Experiment | A humanization-applied survey chatbot (HASbot) (vs. a baseline bot) is perceived more positively, with higher anthropomorphism and social presence. Participants spent more time interacting with the HASbot and indicated higher levels of self-disclosure, satisfaction, and social desirability bias than with the baseline bot.
Hu, et al. [65] | AI assistant | Conditional | | Longitudinal study | Users perceived less risk and were more willing to use AI assistants in shopping when their perceived power fit their desire for power.
Benke, et al. [66] | Emotion-aware chatbot | Conditional | | Experiment | Control levels influenced users’ perceptions of autonomy and trust in emotion-aware chatbots, but did not increase cognitive effort.
Plaks, et al. [67] | Robot | Conditional | | Experiment | The authors varied the robotic counterpart’s humanness by displaying values and self-aware emotions at low to high levels. As values increased from low to high levels, participants tended to choose the cooperative option, whereas as levels of self-aware emotions increased, participants were more likely to choose the competitive option. Trust was identified as a key mechanism.
Jiang, et al. [68] | AI-powered chatbot | Conditional | Social exchange theory and resource exchange theory. | Survey | Responsiveness and a conversational tone sequentially increased customers’ satisfaction with chatbot services, social media engagement, purchase intention, and price premium. | |
Munnukka, et al. [69] | Virtual service assistant | Conditional | Computers as social actors (CASA) theory. | Experiment | The interaction with a virtual service assistant (i.e., perceived anthropomorphism, social presence, dialog length, and attitudes) increased recommendation quality perceptions and further improved trust in VSA-based recommendations. | |
Chua, et al. [70] | AI-based recommendation | Conditional | | Experiment | Attitude toward AI was positively related to behavioral intention to accept AI-based recommendations, trust in AI, and perceived accuracy of AI. Uncertainty level was a significant moderator.
Yi-No Kang, et al. [71] | AI-assisted medical consultation | Conditional | Health information technology acceptance model. | Survey | Three dimensions of health literacy were identified as healthcare, disease prevention, and health promotion. Disease prevention was significantly associated with attitudes toward AI-assisted medical consultations through mediation of distrust of AI, whereas health promotion was also positively related to attitudes toward AI-assisted medical consultations through mediation of efficiency of AI. Furthermore, digital literacy was associated with attitudes toward AI-assisted medical consultations and mediated by both distrust and efficiency of AI. | |
Liu, et al. [72] | Task-oriented chatbot | Conditional | D&M information system success model. | Mixed method | Relevance, completeness, pleasure, and assurance in both mainland China and Hong Kong sequentially increased satisfaction and usage intention. Privacy concerns in both regions did not significantly affect satisfaction. Response time and empathy were significantly associated with satisfaction only in mainland China. | |
Wu, et al. [73] | Shared autonomous vehicle | Conditional | Trust-in-automation three-factor model. | Survey | Anthropomorphism negatively influenced human–SAV (i.e., shared autonomous vehicle) interaction quality when participants were male, with low income, low education, or no vehicle ownership. | |
Hu, et al. [74] | Conversational AI | Conditional | Interaction of person-affect-cognition-execution (I-PACE) model. | Survey | Social anxiety increased problematic use of conversational AI, which was mediated by loneliness and rumination. Mind perception was a significant moderator. | |
Alimamy and Kuhail [75] | Intelligent virtual assistant | Conditional | Human–computer interaction theory and stimulus–organism–response theory. | Survey | Perceived authenticity and personalization increased commitment, trust, and reusage intentions, which were mediated by user involvement and connection. | |
Im, et al. [76] | Voice assistant | Conditional | Computers as social actors (CASA) theory. | Experiment | When users engaged in functional tasks, voice assistants with a synthetic voice increased perceived fluency, competence perception, and attitudes. | |
Jiang, et al. [77] | Chatbot | Conditional | Task–technology fit theory. | Mixed method | Conversational cues were associated with human trust, which was mediated by perceived task-solving competence and social presence. The extent of users’ ambiguity tolerance and task creation were significant moderators. | |
Dubé, et al. [78] | Sex robots | Conditional | | Survey | Correlational analyses showed that willingness to engage with sex robots and the perceived appropriateness of their use were more closely related to erotophilia and sexual sensation seeking than to any other traits. Mixed repeated-measures ANOVAs and independent-sample t-tests with Bonferroni corrections also showed that cismen and nonbinary/gender-nonconforming individuals were more willing to engage with sex robots and perceived their use as more appropriate than ciswomen.
Wald, et al. [79] | Virtual assistant | Conditional | Technology acceptance model, uses and gratifications theory, and the first proposition of the differential susceptibility to media effects model. | Survey | Hedonic motivation was the key factor influencing parents’ willingness to co-use the virtual assistant with their child(ren). | |
Pal, et al. [80] | Conversational AI | Conditional | Stimulus organism response framework and theory of love. | Mixed method | Love (i.e., passion, intimacy, and commitment) significantly influenced the usage scenario. The agent personality was a significant moderator. | |
Oleksy, et al. [81] | Algorithms in urban governance | Conditional | | Mixed method | A lower level of perceived friendliness of the city increased users’ reluctance to accept algorithmic governance. Cooperation between algorithms and humans increased acceptance of algorithms, as well as perceived friendliness and controllability of the city.
Lee, et al. [82] | AI-embedded system | Conditional | HAII-TIME (Human–AI Interaction from the perspective of the Theory of Interactive Media). | Experiment | Users tended to view the system with explicit or implicit machine-learning cues as a helper and trusted it more. |
Source | Types of AI Task Substitute | User Acceptance (or Not) | Theoretical Perspectives | Method | Key Findings
---|---|---|---|---|---
Strich, et al. [83] | Substitutive decision-making AI system | AI aversion | Professional role identity | Case study | The introduction of a substitutive decision-making AI system makes employees feel that their professional identities are threatened; thus, they strengthen and protect their professional role identities.
Liang and Xue [84] | Clinical decision support system | AI aversion | Dual process theory | Survey | Physicians may resist clinical decision support system (CDSS) recommendations. Related factors are also explored.
Kim, Kim, Kwak and Lee [22] | AI assistant | AI aversion | | Field experiment | Although AI-generated advice leads to better service performance, some employees may not utilize AI assistance (i.e., AI aversion) due to unforeseen barriers to usage (i.e., technology overload).
Zhang, et al. [85] | AI teammate | AI appreciation | | Experiment | Compared with human teammates, users trust AI teammates more and are more willing to accept the AI’s decisions.
Brachten, et al. [86] | Enterprise bot | Conditional | The decomposed theory of planned behavior | Survey | Both intrinsic and extrinsic motivations of employees positively influence the intention to use enterprise bots, and the influence of intrinsic motivation is stronger.
Jussupow, et al. [87] | AI-based system | Conditional | Dual process theory | Interview | Physicians tend to use metacognitions to assess AI advice, and the metacognitions determine whether physicians make decisions based on AI or not. | |
Hradecky, et al. [88] | AI application | Conditional | Technology-organization-environment (TOE) framework | Interview | The degree of confidence in organizational technological practices, financial resources, the size of the organization, issues of data management and protection, and the COVID-19 pandemic determine the adoption of AI in the event industry.
Vaast and Pinsonneault [89] | Digital technology | Conditional | | Case study | AI adoption relies on the constant adjustment and redefinition of people’s occupational identity.
Chiu, et al. [90] | Enterprise AI | Conditional | Cognitive appraisal theory | Survey | Perceptions of AI’s operational and cognitive capabilities significantly increase affective and cognitive attitudes toward AI, while concerns regarding AI significantly decrease affective attitude toward AI. | |
Prakash and Das [21] | Intelligent clinical diagnostic decision support system | Conditional | Unified theory of acceptance and use of technology | Mixed method | Performance expectancy, effort expectancy, social influence, initial trust, and resistance to change are significantly related to intention to use. | |
Yu, et al. [91] | Technological agency | Conditional | | Experiment | Control and restrictiveness significantly affect users’ perceived relationship with technological agents and their acceptance.
Dai and Singh [92] | AI diagnostic testing | Conditional | Game theory | Game model | High-type experts tend to use their own diagnostic decision, while low-type experts rely on AI advice more. Related factors have also been explored. | |
Gkinko and Elbanna [93] | AI chatbot | Conditional | | Case study | The dominant mode of interaction and the understanding of the AI chatbot technology significantly contribute to users’ appropriation of AI chatbots.
Ulfert, et al. [94] | Agent-based decision support system (DSS) | Conditional | | Experiment | High DSS autonomy increased users’ information load reduction and technostress, but decreased use intention. Job experience strengthened the impact on information load reduction, but weakened the negative effect on use intention.
Verma and Singh [95] | AI-enabled system | Conditional | Prospect theory, job design theory | Survey | AI-enabled task characteristics (job autonomy and skill variety) and knowledge characteristics (job complexity, specialization, and information processing) are significantly related to innovative work behavior. Meanwhile, perceived substitution crisis is a significant moderator.
Dang and Liu [96] | AI robot | Conditional | | Experiment | A malleable theory of the human mind negatively affected performance-avoidance goals, and further positively affected competitive responses to robots. Meanwhile, a malleable theory of the human mind positively affected mastery goals, and further positively affected cooperative responses to robots. Furthermore, Chinese participants were less competitive and showed more cooperative responses to AI robots than British participants.
Westphal, et al. [97] | Human–AI collaboration system | Conditional | Cognitive load theory | Experiment | Decision control was positively associated with user trust, understanding, and user compliance with system recommendations. Providing explanations may not only reenact the system’s reasoning, but also increase task complexity; the effectiveness relies on the user’s cognitive ability in complex tasks. | |
Harris-Watson, et al. [98] | AI teammate | Conditional | Tripartite model of human newcomer receptivity | Experiment | Perceived warmth and competence affect psychological acceptance, and further positively impact perceived HAT viability. | |
Fan, et al. [99] | AI robot | Conditional | Information processing theory | Field experiment | An imbalanced robotic strategy is superior to a balanced one for service quality. In addition, when customer demand is high, a customer-focused robotic strategy (i.e., higher customer acceptance of robots than employee acceptance) is the optimal choice to improve service quality. However, when frontline task ambidexterity is high, the positive effects of an imbalanced robotic strategy on service quality diminish.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).