Article

Is Smarter Better? A Moral Judgment Perspective on Consumer Attitudes about Different Types of AI Services

Business College, Yangzhou University, 88 South University Ave., Yangzhou 225009, China
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2024, 19(3), 1637-1659; https://doi.org/10.3390/jtaer19030080
Submission received: 27 April 2024 / Accepted: 14 June 2024 / Published: 27 June 2024
(This article belongs to the Topic Consumer Psychology and Business Applications)

Abstract:
AI is considered a key driver of industrial transformation and a strategic technology that will shape future development. As AI services continue to permeate various sectors, concerns have emerged about the ethics of AI. This study investigates the effects of different types of AI services (mechanical, thinking, and affective AI services) on consumers’ attitudes through offline and online AI service experiments. We also construct a model to explore the mediating roles of identity threat and perceived control. The findings reveal that mechanical AI services negatively affect consumers’ attitudes, whereas thinking and affective AI services have a positive effect. Additionally, we explore how consumers’ attitudes vary across different service scenarios and ethical judgments (utilitarianism and deontology). Our findings could offer practical guidance for enterprises providing AI services.

1. Introduction

While artificial intelligence (AI), as a major source of innovation, is increasingly reshaping services through performing various tasks, it also threatens human jobs [1]. AI largely focuses on simulating and augmenting human capabilities, encompassing a wide range of theories, approaches, technologies, and application systems [2]. The service industry and researchers alike have recognized the considerable effects of AI robots, AI virtual assistants, and generative AI on consumers’ perceptions and attitudes [3,4,5]. However, in contrast to AI applications focused on products and services, research attention has shifted toward the cognitive and affective attitudes that AI services evoke in consumers. AI services are also gaining increasing attention from the service industry, as, for example, AI robots are being used for hotel delivery and AI virtual assistants are being used to make personalized travel recommendations [6,7]. Online retailers on Taobao have added AI technology to their live chat services to provide online shopping advice. Meanwhile, some consumers might hope that retailers can provide more direct and accurate human online shopping consultation services (also known as “human services”) to solve their shopping problems [8]. Such applications not only provide convenience in consumers’ lives but also enable them to engage in service environments characterized by human–machine collaboration and interaction [9], thus warranting a detailed investigation of AI services.
Research on AI services has focused on various aspects, such as defining the concept of AI [10], identifying dimensions of AI services and categorizing these [11,12], and exploring consumers’ attitudes about AI services. However, research findings regarding consumers’ attitudes about AI services remain inconsistent. Specifically, consumers tend to hold negative attitudes about AI services compared with manual services [3,13,14]. For instance, when using AI services, users might believe their uniqueness is disregarded [15], their decision-making freedom and autonomy are violated, and their identity is threatened [16]. However, these negative emotions can be alleviated through the collaborative involvement of human workers [17,18,19]. Luo et al. [3], for example, offered causal evidence for the positive effects of AI in coaching salespeople to serve customers better. The negative consequences of job loss and workers’ resistance to AI can be mitigated when AI is designed to complement workers’ job tasks. Additionally, there is evidence that automated service could become optimal as customers become more sensitive to service quality, but only if the quality of the automation technology is sufficiently high [20,21]. As robots increasingly join the service workforce, how to optimally integrate human and robotic labor has become an important research topic [1,22]. The growth of AI application in the service industry calls for an investigation of the specific changes in consumers’ attitudes toward different types and contexts of AI services. This study, therefore, analyzes consumers’ affective and cognitive attitudes about AI services in terms of AI categorization [23,24].
The complexity of AI technology has sparked both corporate and academic discussions about AI adoption and the determination of its type [25]. Recently, researchers have started categorizing AI in terms of strong AI and weak AI based on its function or in terms of mechanical, thinking, and affective AI [11,12]. In the marketing context, Schepers et al. [26] found that customers have different emotional responses to different types of AI (i.e., mechanical, thinking, and affective AI). However, a review of the literature reveals that research on consumers’ attitudes tends to focus on AI services with lower levels of intelligence. There is a lack of empirical research on AI services with higher levels of intelligence, resulting in a limited perspective in the current research. Additionally, the rapid development of AI has raised concerns about ethics and social responsibility [27]. However, the existing research mainly consists of qualitative studies that analyze various ethical conflicts in AI but largely overlook the ethical implications of AI itself, failing to distinguish them from evaluations of the consequences of AI research, development, and application. To fill these gaps, we investigated the effects of different types of AI services on consumers’ attitudes using both offline and online experiments. We also introduced identity threat and perceived control as mediating variables and moral judgments as moderating variables [28,29]. We contribute to the literature on consumers’ attitudes and responses toward different types of AI services through investigating the following research questions: (1) How do different types of AI services influence consumers’ attitudes? (2) Can identity threat and perceived control mediate the effects of different types of AI services on consumers’ attitudes? (3) Can moral judgments moderate the effects of different types of AI services on consumers’ attitudes? (4) Can service scenarios (restaurants vs. hotels) moderate those effects?

2. Theoretical Framework

2.1. Artificial Intelligence and AI Services

First proposed by McCarthy in 1956, AI is commonly defined as the automation of intelligent behavior [10]. AI is a novel technological science primarily used to simulate and extend human capabilities, encompassing various theories, approaches, technologies, and application systems. Prior research has recognized the benefits of AI technologies across various fields. For instance, trading bots and robot advisors can help investors with stock analytics [30]. AI applications can improve operational efficiency, fraud detection, and asset management for banks [31,32]. Researchers have examined the effects of AI applications in healthcare [33,34]. For instance, AI-powered algorithms can help doctors diagnose cancers [35,36], reduce medical errors, and improve hospital efficiency [37,38]. Therefore, a future trend of AI application will be to focus on empathetic tasks that require computers to understand people’s emotional status and respond appropriately with care and feeling [1].
Regarding categorization, researchers generally classify AI along three lines: function, form, and role. Function-based AI has been further analyzed in terms of strong AI and weak AI, or in terms of weak AI, general AI, and super AI [11,39]. Schepers et al. [26] and Huang et al. [12], meanwhile, categorized AI into mechanical, thinking, and affective AI. Role-based AI in service contexts has been explored in terms of three roles: supporters, enhancers, and performers [40]. We adopt the categorization used by Huang et al. [12]—namely, mechanical AI services, thinking AI services, and affective AI services—and use the scale developed by Schepers et al. [26] to measure consumers’ perceptions of AI service types.
Online shopping has become a major part of consumers’ daily lives, and technologies such as AI are increasingly being applied to consumer services. As living standards improve, consumers pay more attention to e-services, among which AI services are among the most important elements. Though the terms are sometimes used interchangeably, robotics and AI are related but distinct fields. Robotics involves creating robots to perform tasks without human intervention, while AI concerns the way systems emulate the human mind to “learn”, make decisions, and solve problems without specifically programmed instructions. Nevertheless, robotics and AI can coexist. Robotics usually does not require AI, as the performed tasks are predictable and repetitive and do not require additional “thought”. Most robotic systems have been designed without AI; in other words, most robots perform simple, programmable tasks, because there has not been much scope for them to do anything more complex. Yet, with ongoing advances in AI, the line between robotics and AI could become increasingly blurred in the coming decades.

2.2. Consumers’ Attitudes about Using AI

Attitudes about AI use relate to users’ subjective evaluations of the latest AI technologies or devices. Based on attitude models and decision theory, studies have investigated the effects of the application of the latest AI technologies on user attitudes and decision making [41]. When people perceive the advantages of using innovative technology, it positively influences their attitudes toward its use [42]. In this sense, it is important to identify people’s attitudes when new technologies are introduced. Studies have highlighted the role of attitudes in predicting learners’ intentions to use technology [42,43,44]. Studies have found significant relationships between attitudes and the intention to use wearable devices such as smartwatches [45]. Consumer attitude refers to the psychological inclinations of individuals before they engage in consumption activities, which can shape the direction of consumers’ decision-making behavior [46]. Since consumers have begun to interact with smart objects (robots) actively and intensively, the traditional human-centric conceptualization of consumer experience has evolved into a consumer–robot assemblage [47].
Research on consumers’ attitudes has provided a robust understanding of the topic. It is evident in the literature that consumers generally have negative attitudes about current AI services. However, when AI is further segmented, it becomes unclear whether consumers’ responses to different types of AI vary and whether initial negative perceptions of AI technology change with increasing levels of AI intelligence. These issues require further exploration. This study, then, aims to refine AI research through classifying AI into three types: mechanical, thinking, and affective AI. We investigate the effects of these different types of AI services on consumers’ attitudes, expanding the scope of research on consumers’ attitudes and enriching the theoretical content.

2.3. Mechanisms of the Effect of Different Types of AI Services on Consumers’ Attitudes

2.3.1. Different Types of AI Services and Consumers’ Attitudes

Current marketing research mainly focuses on consumers’ willingness to accept less intelligent AI services and the mechanisms that influence such acceptance. There is limited research, however, comparing and examining the roles of different types of AI services in shaping consumers’ attitudes. Here, we explore the effect of AI role positioning and the degree of application on marketing activities. AI services encompass different levels of intelligence, including mechanical, thinking, and affective intelligence [1,26]. Mechanical AI can automate programmed repetitive tasks, providing consumers with standardized, efficient services, which can evoke positive emotions [48,49]. By contrast, services that lack consistency can lead to negative consumer emotions [48]. Thinking AI learns from previous experiences, giving consumers a sense of similarity to human workers, which can trigger positive emotions [50]. For example, consumers experience more emotion and perceive better companionship from restaurant recommendation agents that exhibit diverse vocabularies, grammatical complexity, and language fluency [51]. Affective AIs can use relational skills such as empathy and supportive behaviors to evoke positive affective states in users [52,53], including liking, trust, respect, and satisfaction. Different types of AI can leverage their respective advantages in specific service situations. Our study focuses on mechanical, thinking, and affective AI to investigate the effects of different types of AI services on consumers’ attitudes within the same service scenario.
Studies suggest that customers’ attitudes are closely associated with their experience of using AI services [54,55,56]. For instance, consumers tend to experience more positive emotions and fewer negative emotions when interacting with consistent, efficient, and competent mechanical AI services [48,49]. The presence of consistent service—a key characteristic of mechanical AI—can contribute to a pleasurable service encounter via facilitating a state of flow for customers [57]. This, in turn, enhances consumers’ cognitive and affective attitudes. Conversely, the absence of service consistency leads to negative feelings owing to increased uncertainty about outcomes [48]. In summary, customers are likely to perceive the structure and consistency provided by enhanced mechanical AI as desirable, leading to an increase in their cognitive and affective attitudes. Therefore, we hypothesize the following:
Hypothesis 1 (H1). 
A higher level of mechanical AI service significantly affects customers’ cognitive attitudes (H1a) and affective attitudes (H1b).
Research suggests that customers perceive AI services with thinking intelligence as capable of learning from previous experiences, similar to human employees [50,58]. The ability of AI to improve and adapt its skills is a desirable service outcome that triggers positive emotions in humans [26,59,60]. For instance, customers perceive restaurant recommendation agents in smartphone apps that demonstrate diverse vocabularies, grammatical complexity, and linguistic fluency as more affective and better companions [51]. Moreover, evidence indicates that thinking AIs’ advanced capabilities in terms of innovation, understanding humans, and problem-solving enhance customer trust [61]. For example, personal assistants integrated with thinking AI, such as smart wearables, assist users with activities such as exercise, monitoring daily food intake, or caring for relatives with impairments. The learning capabilities of these assistants contribute to customers’ sense of protection and general well-being, reducing feelings of loneliness and a lack of self-control [62]. Thus, combining these insights with the fundamentals of cognitive assessment theory, we propose the following:
Hypothesis 2 (H2). 
A higher level of thinking AI service significantly affects customers’ cognitive attitudes (H2a) and affective attitudes (H2b).
Feeling AI, characterized by relational skills such as empathizing with users and exhibiting supportive behaviors, can enhance positive affective states in customers, including liking, trust, respect, happiness, and satisfaction [52,53]. The provision of a human online shopping consultation service can attract more consumers and increase market share [8]. When an AI robot demonstrates empathy through recognizing, experiencing, and appropriately responding to the emotions of others, it improves customer–brand relationships [63]. Empathy is a universal value in human communication that fosters stronger affiliations and capitalizes on positive effects. Conversely, a lack of empathy can lead to undesired outcomes such as misunderstandings, hostility, and frustration [64]. Additionally, genuine robotic laughter can block negative affective responses in users, and the affective capabilities of AI can help customers cope with negative events during service, such as delays or encounters with rude customers [65]. In summary, customers tend to view stronger AI emotion as a desirable outcome, leading to more emphasis on cognitive attitudes than affective attitudes [66]. Thus, we propose the following:
Hypothesis 3 (H3). 
A higher level of affective AI service significantly affects customers’ cognitive attitudes (H3a) and affective attitudes (H3b).

2.3.2. Identity Threat, Perceived Control, and Consumers’ Attitudes

When an experience contradicts identity, individuals experience a loss of self-esteem and take action to preserve the self-esteem associated with identity [28]. This is the nature of identity threat, defined as “experiences appraised as indicating potential harm to the value, meanings, or enactment of an identity” [67,68,69,70]. AI behavioral characteristics that reduce human autonomy, control, and self-worth can trigger identity threats. When humans perceive the potential to be replaced with machines, concerns about human identity and uniqueness arise [71]. As AI becomes more autonomous through iterative development, there is a general perception of increased identity-related threats compared with nonautonomous AI [71,72,73]. AI with higher autonomy can blur the distinctions between humans and machines, posing threats to human values and individual uniqueness [74]. Leung et al. [16] found that AI automation might be undesirable to consumers when identity motives are important drivers of consumption. However, Logg et al. [75] found that nonexperts appreciated algorithmic advice, based on laboratory experiments. As AI develops and becomes more complex, it activates identity threats for users, which in turn can influence consumers’ attitudes about AI. Studies have explored the effects of AIs with different roles (“facilitator” and “substitute”) on consumer evaluation in terms of identity threat. Therefore, we selected identity threat as a mediating variable to investigate the mechanism of the effect of different types of AI services on consumers’ attitudes. We hypothesize the following:
Hypothesis 4 (H4). 
Identity threat mediates the effect of different types of AI services (mechanical, thinking, and affective AI services) on customers’ cognitive attitudes (H4a) and affective attitudes (H4b).
Control refers to the need to demonstrate competence and the ability to master the environment in a given situation [76]. People have an innate need to control their environment [77]. Perceived control over a technology is an important element in the evaluation of outcomes obtained with the product, as noted by Jörling et al. [78]. This could be a positive finding for companies that provide service robots, since service customers are more likely to attribute responsibility for a negative outcome to themselves than to the service robot or the company. New technologies often bring uncertainty, which can be perceived as a threat that weakens an individual’s sense of control over the environment [79]. When interacting with AI, individuals might feel concerned about the ownership of their personal data, which challenges their perceived control [71]. Such concerns are likely to result in negative perceptions and emotions on the part of service customers, which might cloud the service experience. As AI possesses a certain degree of autonomous decision-making ability and can perform tasks automatically, an increased level of AI intelligence strengthens its autonomy, which in turn weakens individuals’ control over it [80].
Schweitzer and Van Den Hende [80] found that an intervention design—that is, a product design that allows consumers to intervene in the actions of an autonomous smart product—can reduce consumers’ perceived disempowerment with regard to such products. Pavlou et al. [79] found that feeling in control, with the freedom to act, is a prerequisite for trust in technology and can increase product satisfaction. Consumers who perceive autonomous smart products as disempowering fear that such products might function more in the interest of a corporation than in their own interest, make decisions that users would rather make themselves, or take unintended actions [80]. As AI becomes more complex, ranging from assisting human workers with mechanical and repetitive tasks to providing personalized and even affective services, and as AI’s autonomy, data collection, and analytical capabilities increase, consumers’ perceived control over AI diminishes, which can trigger consumer rejection of and resistance to AI services. We considered perceived control as a mediating variable to explore the influence of different types of AI services on consumers’ attitudes, and we propose the following:
Hypothesis 5 (H5). 
Perceived control mediates the effect of different types of AI services (mechanical, thinking, and affective) on customers’ cognitive attitudes (H5a) and affective attitudes (H5b).

2.3.3. Service Scenarios, AI Services, and Consumers’ Attitudes

Service scenarios driven by big data encompass a wide range of technological, social, and societal needs. These scenarios are well-designed service spaces that aim to evoke particular affective responses and enhance consumers’ intention to consume. Service is one of the main application domains of AI technology, and an increasing number of enterprises are adopting AI-driven virtual customer service to replace manual service. Wirtz et al. [81] categorized service scenarios into two types: cognitive–analytical and emotional–social. This categorization has been widely used in research on AI in service domains. According to the theory of mindfulness, people tend to perceive AI as rational rather than affective. Consequently, consumers click on ads for algorithm-based advice less often than on ads for human-based advice when the task is subjective (dating advice) but not when the task is objective (financial advice) [82]. Thus, consumers’ cognition and perception of AI services might differ across service scenarios, in turn influencing their attitudes. To explore this further, we focus on the service industry and selected two common service scenarios (restaurants and hotels) as the experimental background, with the aim of investigating whether service scenarios moderate the influence of different types of AI services on consumers’ attitudes. We therefore propose the following:
Hypothesis 6 (H6). 
Service scenarios (restaurants vs. hotels) moderate the effect of different types of AI services (mechanical, thinking, and affective) on customers’ cognitive attitudes (H6a) and affective attitudes (H6b).

2.4. Effect of Moral Judgment

As AI technology advances, ethical considerations surrounding its application are becoming increasingly salient [83]. The diverse range of AI applications in various fields has led to complex roles in society, transitioning AI from a passive tool to a human agent, which has sparked concerns about the ethics of AI across all sectors. AI can exhibit discriminatory behavior in various ways, such as prioritizing customers based on demographic and economic factors [84], discriminating against or alienating disadvantaged consumer groups through targeting [71,85], and breaching users’ privacy when they access AI-customized services. Additionally, the increasing intelligence of AI and its ability to drive sales and consumption have also raised ethical controversies [86].
Moral cognitive theory suggests that individuals are more likely to positively react to motives behind moral actions that align with their own moral values [87,88,89]. Individuals’ ethical evaluations of AI can directly influence their interactions with it. Hermann [90] suggested that ethical challenges and interdependencies are likely to intensify with increasing levels of AI intelligence and humanization. To seize the opportunities AI offers for marketing, the prevailing principled, deontological approach to AI ethics should be supplemented with a utilitarian perspective, weighing benefits and costs across stakeholders. Consequently, individuals who make deontological moral judgments might view malicious behaviors triggered by AI technology as immoral, leading them to question AI applications. Meanwhile, those who make utilitarian moral judgments might show relative optimism toward AI after weighing the overall utility of AI for all stakeholders, even when ethical issues arise. This argument aligns with social cognitive theory, which suggests that people’s responses to stimuli, such as motive attributions, are moderated by individual differences [91,92]. Understanding the effect of individual differences in moral judgments is crucial, as it implies that different moral judgments made by consumers regarding AI services can be important moderators of cognitive and affective attitudes. Therefore, from an ethical perspective, we selected the variable of moral judgment to investigate whether it plays a moderating role in the effect of different types of AI services on consumers’ attitudes. Thus, we propose the following:
Hypothesis 7 (H7). 
Moral judgments moderate the effects of different types of AI services (mechanical, thinking, and affective) on customers’ cognitive attitudes (H7a) and affective attitudes (H7b).
Figure 1 depicts our model of the effects of different types of AI services on consumer attitude.

3. Method

Prior to the formal experiment, we conducted a pre-experiment with 67 subjects (yielding 46 valid samples), using the hotel scenario as the experimental background. We manipulated the AI type of the service robot through textual scenario descriptions and picture stimuli. The one-way ANOVA results showed that only the manipulation of the mechanical AI service scenario was less than ideal. There could be two reasons for the nonsignificance of the mechanical AI scenes: (1) the sample size was unbalanced (mechanical AI scenes accounted for 47.8% of the sample, close to half of all scenes); and (2) the textual descriptions of the experimental scenes were too long and too specialized in their representation of AI types (based on feedback from subjects), so subjects could not accurately comprehend the distinctions between the AI types. We combined the opinions of experts and subjects to revise the scene and picture descriptions, following the experimental design of Schepers et al. [26], for use in the formal experiments.
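For reference, the one-way ANOVA used in this manipulation check can be sketched in pure Python. This is an illustrative implementation only; the rating lists below are hypothetical stand-ins for the 7-point scale responses, not the pre-experiment data.

```python
from statistics import mean


def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of sample lists."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total sample size
    grand = mean(x for g in groups for x in g)    # grand mean across all groups
    # Between-group sum of squares: weighted squared deviations of group means
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: deviations of observations from group means
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))


# Hypothetical 7-point ratings of perceived "mechanical AI" under the three
# scenario texts (made-up numbers for illustration)
mechanical = [6, 6, 5, 6, 7, 5]
thinking = [4, 5, 5, 4, 6, 4]
affective = [5, 6, 5, 4, 6, 5]
print(f"F = {one_way_anova_f([mechanical, thinking, affective]):.3f}")
```

The F statistic would then be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p values reported below.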

3.1. Study 1

The goal of Experiment 1 was to test the effect of different types of AI services on consumers’ attitudes and to explore whether the effects of AI services on consumers’ attitudes were affected by the complexity of the AI, thus testing H1–H3.

3.1.1. Design and Participants

For the selection of experimental materials, Experiment 1 referred to Schepers et al. [26] in relation to the manipulation of AI categories in hotel scenarios. We also used pictures of intelligent robots (consistent with those used in the pre-experiment) and textual descriptions of AI service scenarios as experimental materials. Experiment 1 was based on the results of the pre-experiment combined with experts’ suggestions to streamline and optimize the descriptions of each type of AI service scenario and further label the AI pictures.
We used a one-way (mechanical AI service vs. thinking AI service vs. affective AI service) between-group design; 110 subjects were recruited through a research platform to participate in the experiment. Twenty samples that did not pass validation (mismatches in the selection of AI scenario type and wrong answers to reverse-scored questions) were excluded, leaving 90 valid samples. Among the valid samples, 26.7% were male, 97.8% were aged 18–40 years, and 96.6% had a bachelor’s degree or above. There were 33 mechanical AI service scenarios, 28 thinking AI service scenarios, and 29 affective AI service scenarios.

3.1.2. Task and Procedures

To help subjects quickly enter the experimental situation, Experiment 1 began by asking them to imagine staying at a hotel in another city and showing them a picture of an AI robot in charge of check-in service at the front desk. A paragraph describing the hotel AI service scene was then presented. After reading the experimental materials, subjects were asked to identify the type of AI in the experimental scene and to rate the scene’s authenticity, their perception of the AI service type, and their consumer attitudes on a seven-point Likert scale (1 = strongly disagree, 7 = strongly agree). Finally, subjects’ personal information was collected.

3.1.3. Results and Discussion

The reliability analysis showed that the Cronbach’s alpha coefficients of all question items were greater than 0.8, indicating good scale reliability. The exploratory factor analysis showed that the KMO value was greater than 0.7 and that Bartlett’s test of sphericity was significant (p < 0.001), indicating that the validity of the scale items was good and that the data were suitable for factor analysis. Six common factors were then extracted using varimax rotation: authenticity, mechanical AI services, thinking AI services, affective AI services, consumers’ cognitive attitudes, and consumers’ affective attitudes. The total variance explained was 83.080%. The following list presents the conclusions of the manipulation test for AI types and the main effect test, along with relevant discussion:
  • Manipulation test for AI type
The one-way ANOVA results showed that subjects’ perceptions of the mechanical AI service (M = 5.778; SD = 0.705) were significantly higher than those for the thinking AI service (M = 5.035; SD = 1.375) and affective AI service (M = 5.441; SD = 1.217) under the scenario description of the mechanical AI service (F = 3.407; p = 0.038 < 0.05). In the scenario description of the thinking AI service (F = 5.528; p = 0.008 < 0.05), perceptions of the thinking AI service (M = 5.598; SD = 1.258) were significantly higher than perceptions of the mechanical AI service (M = 4.606; SD = 1.547) and affective AI service (M = 5.476; SD = 1.116). In the scenario description of the affective AI service (F = 16.484; p < 0.001), perceptions of the affective AI service (M = 5.750; SD = 0.768) were also significantly higher than perceptions of the mechanical (M = 3.657; SD = 1.771) and thinking AI services (M = 3.978; SD = 1.692). Thus, our manipulation of AI types was successful.
  • Main effect test
The linear regression analysis results showed that mechanical AI services did not have significant effects on consumers’ cognitive attitudes (β = −0.013, p = 0.902 > 0.05) or affective attitudes (β = 0.025, p = 0.818 > 0.05); thus, H1 is not supported. Thinking AI services had significant effects on consumers’ cognitive attitudes (β = 0.276, p = 0.008 < 0.05) and affective attitudes (β = 0.383, p < 0.001). Affective AI services had significant effects on consumers’ cognitive attitudes (β = 0.287, p = 0.006 < 0.05) and affective attitudes (β = 0.296, p = 0.005 < 0.05); thus, H2 and H3 are supported. The β values also indicate that the main effect did not strengthen linearly with increasing AI complexity (i.e., from mechanical AI to thinking AI to affective AI).
  • Discussion
Experiment 1 used a hotel as the experimental setting and manipulated AI types in the form of text and pictures. Regression analysis showed that both thinking and affective AI services significantly and positively affected consumers’ attitudes (cognitive and affective attitudes). This coincides with Schepers et al. [26], who investigated the effect of three different types of AI services on consumer emotions in a pair of hotel scenarios; unlike mechanical AI services, which did not have a significant effect on positive consumer emotions, thinking and affective AI services both significantly and positively influenced positive consumer emotions. To extend the application scenario, Study 2 used restaurants as the experimental background and introduced two mediating variables—identity threat and perceived control—to investigate whether they mediated the influence of different types of AI services on consumers’ attitudes.

3.2. Study 2

Experiment 2 was conducted to test whether its main effect results were consistent with those of Study 1 and to test the mediating roles of identity threat and perceived control in the effects of different types of AI services on consumers’ attitudes, thus testing hypotheses H4 and H5.

3.2.1. Design and Participants

Except for the textual descriptions of the restaurant service scenario, the AI robot pictures, and the nomenclature, the experimental materials were the same as in Experiment 1. The textual experimental materials in Experiment 2 referred to the descriptions of different types of AI service scenarios in the restaurant context. Based on the Chinese context and expert guidance, three sets of descriptions of AI behaviors in the restaurant scenario were adapted, corresponding to the three AI types (mechanical, thinking, and affective).
A total of 229 responses were collected in this study, of which 218 were valid, for a valid recovery rate of 95%. Eleven samples did not pass validation (the AI service scenarios and AI types were not paired). Among the valid samples, 39.4% were male, ages ranged from 26 to 40, and 93.1% had a bachelor’s degree or above. There were 76 responses for the mechanical AI service scenario, 71 for the thinking AI service scenario, and 71 for the affective AI service scenario.

3.2.2. Task and Procedures

Experiment 2 began by asking subjects to imagine that they had decided to go to a restaurant in their city with two friends. On arriving at the restaurant, they noticed that they would be served by an AI robot (a picture of the robot was shown). The service scenario of the restaurant’s AI robot operation was then described. After the subjects read the experimental materials, they completed measurement items about the choice of AI service type, scenario authenticity, perception of AI service type, identity threat, perceived control, and consumer attitudes (cognitive and affective). Finally, the subjects’ personal information was collected. The measurement items for scenario authenticity, AI service type perception, and consumer attitude were consistent with those used in the previous experiments. For the identity threat measurement items, we mainly referred to Yogeeswaran et al. [74] and Yogeeswaran and Dasgupta [93].

3.2.3. Results and Discussion

The following list presents the conclusions of the main effect test and the mediating effect tests for identity threat and perceived control, along with the relevant discussion.
  • Main effect test
The one-way ANOVA results showed that the manipulation of AI types in the experimental scenario passed the test. We then used linear regression to investigate the effects of different types of AI services on consumers’ cognitive and affective attitudes. The results showed that mechanical AI services had a significant negative effect on consumers’ cognitive attitudes (β = −0.083, p = 0.044 < 0.05) and a nonsignificant effect on consumers’ affective attitudes (β = −0.063, p = 0.200 > 0.05). Thinking AI services had nonsignificant effects on both consumers’ cognitive attitudes (β = 0.034, p = 0.357 > 0.05) and affective attitudes (β = 0.038, p = 0.384 > 0.05). Affective AI services had significant positive effects on consumers’ cognitive attitudes (β = 0.083, p = 0.008 < 0.05) and affective attitudes (β = 0.108, p = 0.004 < 0.05). Thus, H1–H3 are partially supported.
  • Mediating effect of identity threat
We used the PROCESS macro (v3.5; Hayes) in SPSS to test the mediating effect. In PROCESS, we set mechanical AI service as the independent variable, identity threat as the mediator, and consumers’ cognitive attitudes as the dependent variable, selecting Model 4, 5000 bootstrap samples, and a 95% confidence interval. The results showed that the bootstrap confidence interval for the indirect effect of identity threat did not include 0 (LLCI = −0.067, ULCI = −0.002), indicating a significant mediation effect. After controlling for the mediating variable of identity threat, the effect of mechanical AI services on consumers’ cognitive attitudes was not significant (β = −0.049, p = 0.205 > 0.05). Thus, identity threat plays a mediating role in the effect of mechanical AI services on consumers’ cognitive attitudes (Figure 2); H4 is supported.
In the main effect test, affective AI services had a significant effect on consumers’ cognitive and affective attitudes. Thus, we set affective AI service as the independent variable, identity threat as the mediator, and consumers’ cognitive attitudes as the dependent variable in PROCESS to explore the mechanism of influence, again using Model 4, 5000 bootstrap samples, and a 95% confidence interval. The results showed that the confidence interval for the indirect effect of identity threat included 0 (LLCI = −0.045, ULCI = 0.001), indicating that the mediation effect was not significant and that identity threat did not mediate the influence of affective AI services on consumers’ cognitive attitudes. Using consumers’ affective attitudes as the dependent variable, with the model, sample size, and confidence interval unchanged, the confidence interval for the indirect effect still included 0 (LLCI = −0.053, ULCI = 0.000). This indicates that the mediation effect was not significant, meaning that identity threat was not found to mediate the effect of affective AI services on consumers’ affective attitudes.
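At its core, the PROCESS Model 4 procedure used above is a percentile bootstrap of the indirect effect a × b. A minimal sketch in plain NumPy (simulated data in which mechanical AI raises identity threat, which in turn lowers cognitive attitude; the coefficients and sample size are illustrative assumptions, not the study’s data):

```python
import numpy as np

def ols_slopes(X, y):
    """Slope coefficients (excluding the intercept) from an OLS fit."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect a*b (Model 4 style)."""
    rng = np.random.default_rng(seed)
    n, ab = len(x), np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                                    # resample cases
        a = ols_slopes(x[idx][:, None], m[idx])[0]                     # path X -> M
        b = ols_slopes(np.column_stack([m[idx], x[idx]]), y[idx])[0]   # path M -> Y | X
        ab[i] = a * b
    return np.percentile(ab, [2.5, 97.5])

rng = np.random.default_rng(4)
mech = rng.normal(4.0, 1.0, 300)                      # mechanical-AI perception
threat = 0.5 * mech + rng.normal(0.0, 1.0, 300)       # identity threat
attitude = -0.4 * threat + rng.normal(0.0, 1.0, 300)  # cognitive attitude
llci, ulci = bootstrap_indirect(mech, threat, attitude)
# A CI excluding 0 (here entirely negative) signals a significant indirect effect.
```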
  • Mediating effect of perceived control
Similarly, we tested for the mediating effect of perceived control on the significant main effect path.
First, in PROCESS, we selected mechanical AI services as the independent variable, consumers’ cognitive attitudes as the dependent variable, and perceived control as the mediating variable, again using Model 4, 5000 bootstrap samples, and a 95% confidence interval. The results showed that the confidence interval for the indirect effect of perceived control included 0 (LLCI = −0.073, ULCI = 0.017), indicating that the mediation effect was not significant. This suggests that perceived control does not mediate the effect of mechanical AI services on consumers’ cognitive attitudes.
Next, in PROCESS, we replaced the independent variable with affective AI services and kept the other options unchanged (i.e., perceived control as the mediator, consumers’ cognitive attitudes as the dependent variable, Model 4, 5000 bootstrap samples, and a 95% confidence interval). The results showed that the confidence interval for the indirect effect of perceived control included 0 (LLCI = −0.020, ULCI = 0.050), indicating that the mediation effect was not significant. This means that perceived control did not mediate the influence of affective AI services on consumers’ cognitive attitudes.
Similarly, keeping the same options in PROCESS, we replaced the dependent variable with consumers’ affective attitudes. The new results showed that the confidence interval for the indirect effect of perceived control included 0 (LLCI = −0.026, ULCI = 0.059), indicating that the mediation effect was not significant. This suggests that perceived control did not mediate the influence of affective AI services on consumers’ affective attitudes. Therefore, H5 is not supported.
  • Discussion
Experiment 2 manipulated the AI types in a restaurant context and verified the findings of Study 1. H1a, H3, and H4a were supported, and H5 was not supported. Study 2 demonstrated that mechanical AI services significantly negatively affected consumers’ cognitive attitudes, while affective AI services significantly positively affected consumers’ cognitive and affective attitudes. This is consistent with the research of Longoni et al. [15] in the field of healthcare services: they found that patients showed resistance to AI services (categorized as mechanical AI under this study’s definition of AI types) and preferred services with emotion and empathy, suggesting that patients might hold more positive attitudes about affective AI services. Meanwhile, identity threat played a negative mediating role in the effect of mechanical AI services on consumers’ cognitive attitudes. This could be because mechanical AI is more rigidly programmed: in mechanical AI service environments, consumers rely on the AI’s accompaniment and instructions throughout the entire process, with less individual autonomy, which triggers identity threat and more negative responses. To enhance external validity, Experiment 3 validated the results of Experiments 1 and 2 in a real-world context and further tested the moderating effect of moral judgment.

3.3. Study 3

Experiment 3 aimed to test the moderating role of moral judgment in the main effect and the direction of its role, and to further explore the influence of service scenario type (restaurant vs. hotel) on consumers’ attitudes in conjunction with the data from Experiment 2, testing hypotheses H6 and H7.

3.3.1. Design and Participants

In Experiment 3, for the field survey, we selected a hotel with general popularity, moderate decoration, and AI services in use. The experiment excluded holidays to avoid experimental errors arising from the peak travel season and other factors. Customers in the hotel lobby were randomly selected as subjects. A total of 245 subjects participated in the experiment, excluding nine who did not pass the test (the AI scenarios and AI types were not paired), thus obtaining 234 valid samples for an effective recovery rate of 95.5%. Among the valid samples, 38.9% were male, ages ranged from 18 to 40 (29.1% aged 18–25, 31.6% aged 26–30, 32.1% aged 31–40), and 93.2% had a bachelor’s degree or above. There were 80 responses for the mechanical AI service scenario, 77 for the thinking AI service scenario, and 77 for the affective AI service scenario.

3.3.2. Task and Procedures

Experiment 3 collected data via an online questionnaire, asking the subjects to scan a QR code on site. To enhance the questionnaire’s credibility, we used two questions to screen for customers who had direct experience with hotel AI: “Have you ever directly experienced hotel AI services before?” and “Please recall how long ago you experienced hotel AI services”. Before subjects filled out the questionnaire, we briefly explained the characteristics of each type of AI service. A brief description of each type of AI service was also included in the corresponding options of the question, “What do you think is the type of AI you are exposed to in hotels?” This was done to prevent subjects from being unable to understand and select the AI type owing to the overly technical AI type names. We then measured the subjects’ perceptions of AI service types as well as perceived control, identity threat, moral judgment, and attitude. Finally, the subjects’ basic information was collected. The measurement items for perception of AI service type, identity threat, perceived control, and consumer attitude were consistent with the scales used in the previous experiments.
After testing the moderating effect of moral judgment, we integrated the sample data collected in Experiment 2 (restaurant) and Experiment 3 (hotel) and assigned values to the types of service scenarios in SPSS (0 for the restaurant service scenario, 1 for the hotel service scenario) to conduct structural equation modeling and investigate the moderating role of service scenarios.

3.3.3. Results and Discussion

  • Main and mediating effect tests
We conducted a manipulation test on the type of AI service. The results of the one-way ANOVA indicated that the manipulation was successful. Next, we used linear regression to investigate the effects of different types of AI services on consumers’ attitudes. The results showed that mechanical AI services had a significant negative effect on consumers’ cognitive attitudes (β = −0.124, p = 0.008 < 0.05) and affective attitudes (β = −0.134, p = 0.009 < 0.05). Affective AI services had a significant positive effect on consumers’ cognitive attitudes (β = 0.088, p = 0.017 < 0.05) and affective attitudes (β = 0.092, p = 0.024 < 0.05); thus, H1 and H3 are supported. Identity threat fully mediated the effect of mechanical AI services on consumers’ cognitive and affective attitudes; H4 is partially supported. Perceived control did not mediate the effects of different types of AI services on consumers’ attitudes; H5 is not supported.
  • Moderating effect of moral judgment
This section examines the moderating effect of utilitarian judgments. We first used consumers’ cognitive attitudes as the dependent variable, mechanical AI services as the independent variable, and utilitarian judgment as the moderating variable. Together with the interaction term (the product of the mean-centered independent and moderating variables), we entered them into SPSS for hierarchical regression. The standardized coefficient of the interaction was −0.140 (p = 0.021 < 0.05). Thus, utilitarian judgment negatively moderated the effect of mechanical AI services on consumers’ cognitive attitudes. Repeating the same procedure, we found that utilitarian judgment also negatively moderated the effect of mechanical AI services on consumers’ affective attitudes. There was no moderating effect between affective AI services and consumers’ cognitive attitudes, but there was a negative moderating effect between affective AI services and consumers’ affective attitudes. Figure 3 shows the moderating paths. Therefore, H7 is supported.
This section reports the results of tests on the moderating role of deontological judgments. We used consumers’ cognitive attitudes as the dependent variable, mechanical AI services as the independent variable, and deontological judgment as the moderating variable. Together with the interaction term (the product of the mean-centered independent and moderating variables), we entered the data into SPSS for hierarchical regression. The standardized coefficient of the interaction term was 0.137 (p = 0.051 > 0.05). Thus, deontological judgment did not moderate the relationship between mechanical AI services and consumers’ cognitive attitudes. Using the same procedure, we found that deontological judgment did not moderate the effects of mechanical AI services on consumers’ affective attitudes, but it positively moderated the effects of affective AI services on consumers’ cognitive and affective attitudes. Figure 3 shows the specific moderating paths. Thus, H7 is supported.
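Moderation tests of this kind regress attitude on the predictor, the moderator, and their mean-centered product; a nonzero product coefficient signals moderation. A minimal sketch (simulated data with a built-in negative interaction, echoing the direction of the reported −0.140 coefficient; the sample size and effect sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
mech = rng.normal(4.0, 1.3, n)       # perceived mechanical-AI level
util = rng.normal(4.5, 1.1, n)       # utilitarian-judgment score
mech_c = mech - mech.mean()          # mean-center before forming the product
util_c = util - util.mean()
interaction = mech_c * util_c
# Simulated attitude with a negative moderation effect
attitude = 5.0 - 0.1 * mech_c - 0.2 * interaction + rng.normal(0.0, 1.0, n)

# Step 2 of the hierarchical regression: main effects plus the interaction term
X = np.column_stack([np.ones(n), mech_c, util_c, interaction])
beta, *_ = np.linalg.lstsq(X, attitude, rcond=None)
interaction_coef = beta[3]   # negative -> utilitarian judgment dampens the effect
```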
  • Structural equation modeling
After integrating the sample data from Experiments 2 and 3, we constructed a structural equation model in AMOS with mechanical, thinking, and affective AI services as independent variables, identity threat and perceived control as mediating variables, and consumers’ cognitive and affective attitudes as dependent variables (Figure 4).
The structural equation model yielded χ²/df = 2.211 (between 1 and 3), GFI = 0.922 (above 0.90), AGFI = 0.896 (close to 0.90), and RMSEA = 0.052 (below 0.08). Thus, the model’s fit to the data was acceptable.
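These fit indices follow standard formulas; RMSEA, for instance, is computed from the model chi-square, its degrees of freedom, and the sample size. A sketch with hypothetical inputs chosen to reproduce the reported values (the paper does not report the exact χ² or df, so df = 300 is an assumption; N = 452 is the pooled 218 + 234 valid responses):

```python
import math

def fit_indices(chi2: float, df: int, n: int):
    """Chi-square/df ratio and RMSEA for a fitted structural equation model."""
    chi2_df = chi2 / df
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return chi2_df, rmsea

# Hypothetical chi-square and df; N = 452 pooled responses
chi2_df, rmsea = fit_indices(chi2=663.3, df=300, n=452)
# chi2_df is about 2.211 and rmsea about 0.052, matching the reported fit.
```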
  • Moderating effects of service scenario types
To test the moderating effects of service scenarios, we set the cluster variables in AMOS (restaurant service scenario coded 0; hotel service scenario coded 1) and established a multigroup analysis model. Table 1 shows the paths with significant moderating effects.
The restaurant service scenario positively moderated the effect of identity threat on consumers’ affective attitudes (β = 0.890, p < 0.001); the hotel service scenario did not (β = −0.058, p > 0.05). Thus, service scenarios play a moderating role in this path, supporting H6a.
  • Discussion
We conducted a field experiment (Study 3) to verify the results of Studies 1 and 2 and enhance external validity (i.e., to again verify that H1 and H3 were supported and H5 was not supported). We also tested the moderating role of moral judgment in the influence of different types of AI services on consumers’ attitudes (i.e., H7). Regarding the moderating role of moral judgment, we found that utilitarian judgments negatively moderated the effect of mechanical AI services on consumers’ cognitive and affective attitudes and the effect of affective AI services on consumers’ affective attitudes. Deontological judgments positively moderated affective AI services’ effects on consumers’ cognitive and affective attitudes.
Meanwhile, we used service scenarios as moderating variables to investigate whether different service scenarios moderated the influence of different types of AI services on consumers’ attitudes. The results indicated that service scenarios moderated the effect of identity threat on consumers’ affective attitudes. Restaurant service scenarios significantly positively moderated the effect of identity threat on consumers’ attitudes, and hotel service scenarios did not. This could be related to the service atmospheres and socialization factors of different service scenarios. In the AI-served hotel scenario, consumers have less contact with the AI service and can avoid it if they feel uncomfortable with it, giving them more space and freedom than in the restaurant service scenario. There, they rely on AI to accompany and instruct them throughout the whole process, lowering their autonomy, which can trigger negative attitudes.

4. General Discussion

Classifying AI services is becoming an important way for companies to improve consumers’ acceptance of AI. The degree of AI sophistication is often used as an indicator for AI classification. This study contributes to the literature through examining how different types of AI services influence consumers’ cognitive and affective attitudes. Across three studies (Table 2 shows an overview of the hypotheses), we have demonstrated that AI type shapes consumers’ cognitive and affective attitudes, with the most pronounced positive effects coming from AI services with higher levels of affective capability. We also show that the level of AI intelligence can trigger identity threat and negatively affect consumers’ attitudes. Service scenarios and consumers’ moral judgments moderated these results. Our conclusions can offer theoretical support and practical guidance for both researchers and marketers. Figure 5 illustrates the findings.

4.1. Theoretical Contribution

We used an innovative perspective to examine the relationship between consumers’ attitudes and AI services and investigate the effects of different types of AI services on consumers’ attitudes. We adopted the research perspective of AI service classification and investigated the mechanism through which consumers’ attitudes are influenced by the complexity level of AI services. The findings contribute to the literature through improving our understanding of these relationships [94].
First, this study enriches theories about the marketing of AI services. Previous marketing research has largely focused on consumers’ willingness to accept less intelligent AI services, with a limited exploration of other dimensions. We synthesize the mainstream categorization of AI services and investigate the mechanisms underlying the effects of different types of AI services on consumers’ attitudes. Additionally, while previous studies of AI services mainly focused on specific service scenarios, this study addresses this limitation via examining the matching of complexity levels and service scenarios in AI services, providing a new perspective for AI service research in the marketing field [15]. Furthermore, we elaborate and distinguish the classification dimensions of AI services in marketing, clarifying the connotation and research significance of AI services. Another innovation is the introduction of moral judgments into the context of AI services. Schepers et al. [26] found that higher levels of “thinking” and “perception” in AI service robots triggered positive emotions in customers, leading to increased consumption and loyalty, but the effect of mechanical AI on positive emotions was less clear. Our study, meanwhile, offers alternative explanations in terms of consumers’ cognitive and affective attitudes. We found that mechanical AI services have a significant negative effect on consumers’ attitudes. Another contribution is the finding that identity threat plays a fully negative mediating role between mechanical AI and consumers’ affective attitudes, revealing the mechanism through which AI service intelligence is associated with customers’ emotions.
Second, this study enhances the generalizability of service scenario design. In the experience economy, enterprises often design service scenarios to convey their corporate image and attract customer traffic. We simulated sensory marketing environments in different service scenarios, both offline and online, to more realistically reflect the dynamic process of consumers’ affective and cognitive changes. Whereas social identity theory suggests that algorithms challenging the distinctiveness of humans from machines generate negative evaluations, a more cognitive perspective based on consumers’ beliefs about algorithms’ effectiveness suggests that decreasing human–machine distinctiveness can increase consumers’ use of algorithms by making them seem more useful. Through validating our influence mechanism model across different experiments, we confirmed that the restaurant service scenario positively moderated the influence of identity threat on consumers’ affective attitudes, while the hotel service scenario, which was less affectively supportive, did not show a significant moderating effect. This finding aligns with the theory of mindfulness and validates social cognitive theory [12]. We also suggest that in “cognitive–analytical” service scenarios (hotels), where functional value is sought, consumers might have more positive attitudes about AI owing to its precision, consistency, and efficiency in helping them. The moderating effect of hotel service scenarios will be further explored in future research.
Third, this study broadens the literature examining the differences in consumers’ attitudes under different moral judgments. Generally, AI-related ethical issues pertain to particular features of the technology or the consequences of its use, falling within the tradition of computer and (information) technology ethics (e.g., Moor, 2005; Wright 2011) [95,96,97]. In the context of technology ethics, Moor (2005) proposed a tripartite model for understanding technological revolutions, ranging from the introduction and permeation stages to the power stage [96]. With increasing use intensity, numbers of users, understanding, and integration into and effects on society, ethical challenges increase as well. Our study expands research on the ethical aspects of AI through examining differences in consumers’ attitudes about AI services under different moral judgments—specifically, utilitarian and deontological ethics. We found that utilitarian judgment negatively moderated the relationship between mechanical AI and consumers’ cognitive attitudes; deontological judgment positively moderated the relationship between affective AI services and consumers’ cognitive attitudes. These differences might arise from the application claims of utilitarianism and deontology in machine ethics. Moreover, AI systems that are humanized or emotionally intelligent do not come without ethical controversies [98,99] (e.g., Belk, 2020; De Bruyn et al., 2020). The introduction of moral judgment as an important variable for explaining the formation mechanism of ethical willingness has implications for the further exploration of consumers’ behavioral willingness. Additionally, we provide new insights for future quantitative research on AI ethics, such as examining how consumers’ attitudes and behavioral tendencies change when they respond to AI service agents in ethical situations.

4.2. Managerial Implications

The use of AI in big data fosters the advancement of online marketing, thereby stimulating economic growth. AI plays a pivotal role in driving the progress of marketing technology and bolstering economic development within the marketing system. In the realm of online marketing, AI enhances people’s lives and facilitates industry growth. AI is reshaping retailing in multiple ways: helping retailers better understand and anticipate customer needs to optimize decision-making, improving the efficiency of supply chains, optimizing inventory management and logistics, and enhancing consumers’ shopping experiences. Online retailers can thus improve the affective level of their AI services, for example, by allowing consumers to choose products or services independently to enhance their autonomy. Our findings have practical implications for the service industry, enterprises, and consumers. For the service industry, the increasing application of AI in the service field presents an opportunity to introduce affective AI to enhance the consumer experience. Consumers showed robustly positive attitudes about affective AI services in both hotel and restaurant service scenarios. Affective AI services, with their strong empathic and emotional capabilities, can create positive affective experiences for users, leading to increased liking, trust, respect, and overall well-being. AI is not only a technical issue but also involves humanistic, social, and cultural dimensions. As AI technology proliferates, it raises new problems related to psychology, sociology, and ethics. The truly effective use of AI requires a synthesis of technical and humanistic factors, which is key to meeting the challenges of AI. Therefore, the service industry is advised to introduce more intelligent and affective AIs to provide enhanced affective experiences for consumers.
For enterprises, the integration of AI with big data and cloud computing empowers them with substantial computational resources, fostering their growth and development. In the digital era, data have emerged as a valuable asset, akin to “oil”. Through the collection, organization, and analysis of extensive market and user data, enterprises can gain a deeper understanding of market dynamics and evolving needs. AI technologies, especially machine learning, can autonomously identify patterns and trends within data, offering businesses insights into future market dynamics [100]. Through leveraging this technology, companies can deliver more intelligent and tailored customer service. For instance, our research demonstrates that chatbots can incorporate humanized and emotionally responsive interactions while offering online consultation to customers. As technology advances and user acceptance of AI increases, there is an expectation that greater adoption of AI technology will take place. Catering enterprises, in particular, should focus on affective care for consumers and aim to avoid negative emotions, such as identity threat and loss of control during the acceptance of AI services. Furthermore, to increase their competitiveness, online retailers should take account of the situation and adjust their service strategies to increase their investment in affective AI design when consumers are more focused on affective care. To mitigate consumer rejection and resistance, enterprises can use a combination of human workers and AI to observe consumer needs and provide positive responses. Enhancing consumer autonomy, such as allowing them to choose products or services independently, can also improve consumers’ attitudes about AI services. Upgrading AI programs and enhancing the affective attributes of AI can facilitate better affective communication between AI and consumers, transforming negative attitudes about AI. 
Additionally, with the emergence of ethical issues in AI, enterprises should consider the principles underlying consumer behavior and attitudes. Enterprises need to understand that innovation is a long-term process and should not expect to see significant results in just a few months, or even a year or two. In that process, enterprises need to focus on the construction of innovation mechanisms in light of their actual business needs and create a foundation conducive to long-term development. Companies can foster goodwill through aligning their values and vision with consumers who prioritize deontological principles. Similarly, companies can highlight the value and benefits of AI technology to engage consumers in thoughtful consideration, mitigating negative evaluations and improving attitudes about services.
Regarding consumers, the application of AI services in daily life has enhanced their quality of life and living standards. However, consumers’ understanding of these services remains limited. AI technology is designed to assist rather than replace humans, freeing them from repetitive tasks and enabling more time for creative activities. For example, AI-powered wearable devices can provide personalized fitness programs based on users’ historical data, help control daily food intake, and even assist with the care of debilitated loved ones. Through embracing AI technology and avoiding unnecessary anxiety and negative emotions, consumers can make the most of its benefits without compromising their work and personal lives. In this context, there is a need to establish a balance between individual selves, AI technology, and interpersonal relationships. Consumption and lifestyle changes should be more focused on inner needs and mental health. Enterprises should enhance communication with consumers and streamline the deployment of AI services in online marketing. Consequently, through leveraging user personalization, enterprises can offer tailored purchase recommendations and exclusive deals, thereby augmenting customer satisfaction and fostering loyalty. Furthermore, as consumer awareness of the technology grows, research should focus on leveraging AI for enhanced marketing efficacy while upholding consumer privacy, a concern encompassing both technical and ethical dimensions. The use of AI technology is an inevitable trend in the development of computer technology that will promote the online economy and the development of online marketing.

4.3. Limitations and Directions for Future Research

We acknowledge several limitations that can be addressed in future research. These limitations include the need to explore consumers’ final behaviors, the potential interference of social factors, and the consideration of specific social contexts.
First, this study considered consumers’ attitudes as the dependent variable and did not delve into consumers’ final behaviors. It is important to note that there might not always be consistency between consumers’ attitudes and behaviors. Future research can extend the investigation to explore the effects of different types of AI services on consumers’ actual behaviors, such as their willingness to purchase, engagement levels, and loyalty toward AI services. This will provide a more comprehensive understanding of the relationship between AI services and consumer responses.
Second, this study opens a new avenue for examining consumer behavior under social influence. For example, COVID-19 altered the consumer journey [100,101,102] and thus created both opportunities and challenges for marketing practices [103,104]. In particular, the pandemic limited consumers’ ability to visit restaurants and hotels, making it challenging to conduct field experiments and collect sample data; it might also have heightened the need for affective care, potentially leading to more positive attitudes about affective AI services. Future research can incorporate specific social factors, such as consumers’ direct experiences with consumption ethics, to contextualize the present results and deepen understanding of how social factors shape consumers’ attitudes about AI services.

5. Conclusions

In conclusion, this study enriches AI service marketing theories, revealing the impact of AI complexity on consumer attitudes and the mediating role of identity threat. It suggests that affective AI services positively influence consumer experiences, highlighting the importance of ethical considerations in AI design. Future research should explore consumers’ actual behaviors and the influence of social factors on AI service acceptance.

Author Contributions

Conceptualization, Q.F. and X.W.; methodology, Y.D.; software, Y.D.; validation, Q.F., Y.D. and X.W.; formal analysis, X.W.; investigation, Q.F.; resources, Y.D.; data curation, X.W.; writing—original draft preparation, Y.D.; writing—review and editing, Q.F.; visualization, Y.D.; supervision, Q.F.; project administration, Q.F.; funding acquisition, Q.F. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the Key Program of Philosophy and Social Science Foundation in Colleges and Universities in Jiangsu Province of China (Grant No. 2021SJZDA035) and the Yangzhou University Business School Graduate Innovation Project (SXYYJSKC202329).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Yangzhou University (protocol code 20220357, approved on 8 May 2022).

Informed Consent Statement

Informed consent was obtained from all respondents for the processing of their responses.

Data Availability Statement

Data are available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Huang, M.; Rust, R.T. Artificial Intelligence in Service. J. Serv. Res. 2018, 21, 155–172. [Google Scholar] [CrossRef]
  2. Peltier, J.W.; Dahl, A.A.; Schibrowsky, J.A. Artificial Intelligence in Interactive Marketing: Conceptual Framework and Research Agenda. J. Res. Interact. Mark. 2024, 18, 54–90. [Google Scholar] [CrossRef]
  3. Luo, X.; Qin, M.S.; Fang, Z.; Qu, Z. Artificial Intelligence Coaches for Sales Agents: Caveats and Solutions. J. Mark. 2021, 85, 14–32. [Google Scholar] [CrossRef]
  4. Uysal, E.; Alavi, S.; Bezencon, V. Trojan horse or useful helper? A relationship perspective on artificial intelligence assistants with humanlike features. J. Acad. Mark. Sci. 2022, 50, 1153–1175. [Google Scholar] [CrossRef]
  5. Chandra, S.; Shirish, A.; Srivastava, S.C. To Be or Not to Be … Human? Theorizing the Role of Human-Like Competencies in Conversational Artificial Intelligence Agents. J. Manag. Inform. Syst. 2022, 39, 969–1005. [Google Scholar] [CrossRef]
  6. Chandra, S.; Verma, S. Personalized Recommendation During Customer Shopping Journey. In The Palgrave Handbook of Interactive Marketing; Wang, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2023; pp. 729–752. [Google Scholar]
  7. Habil, S.; El-Deeb, S.; El-Bassiouny, N. AI-Based Recommendation Systems: The Ultimate Solution for Market Prediction and Targeting. In The Palgrave Handbook of Interactive Marketing; Wang, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2023; pp. 683–704. [Google Scholar]
  8. Zhao, L.; Wu, W.; Jiang, M. Human Services or Non-Human Services? How Online Retailers Make Service Decisions. J. Theor. Appl. Electron. Commer. Res. 2022, 17, 1791–1811. [Google Scholar] [CrossRef]
  9. Zeng, N.; Jiang, L.; Vignali, G.; Ryding, D. Customer Interactive Experience in Luxury Retailing: The Application of AI-Enabled Chatbots in the Interactive Marketing. In The Palgrave Handbook of Interactive Marketing; Wang, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2023; pp. 785–805. [Google Scholar]
  10. McCarthy, J. From here to human-level AI. Artif. Intell. 2007, 171, 1174–1182. [Google Scholar] [CrossRef]
  11. Flowers, J.C. Strong and Weak AI: Deweyan Considerations. In Proceedings of the AAAI Spring Symposium: Towards Conscious AI Systems, Palo Alto, CA, USA, 25–27 March 2019; Volume 2287, p. 7. [Google Scholar]
  12. Huang, M.H.; Rust, R.T. Engaged to a Robot? The Role of AI in Service. J. Serv. Res. 2021, 24, 30–41. [Google Scholar] [CrossRef]
  13. Lopez, A.; Garza, R. Consumer bias against evaluations received by artificial intelligence: The mediation effect of lack of transparency anxiety. J. Res. Interact. Mark. 2023, 17, 831–847. [Google Scholar] [CrossRef]
  14. Longoni, C.; Bonezzi, A.; Morewedge, C.K. Resistance to medical artificial intelligence is an attribute in a compensatory decision process: Response to Pezzo and Beckstead. Judgm. Decis. Mak. 2020, 15, 446–448. [Google Scholar]
  15. Longoni, C.; Bonezzi, A.; Morewedge, C.K. Resistance to medical artificial intelligence. J. Consum. Res. 2019, 46, 629–650. [Google Scholar] [CrossRef]
  16. Leung, E.; Paolacci, G.; Puntoni, S. Man Versus Machine: Resisting Automation in Identity-Based Consumer Behavior. J. Mark. Res. 2018, 55, 818–831. [Google Scholar] [CrossRef]
  17. He, A.Z.; Zhang, Y. AI-powered touch points in the customer journey: A systematic literature review and research agenda. J. Res. Interact. Mark. 2023, 17, 620–639. [Google Scholar] [CrossRef]
  18. Huh, J.; Kim, H.Y.; Lee, G. “Oh, happy day!” Examining the role of AI-powered voice assistants as a positive technology in the formation of brand loyalty. J. Res. Interact. Mark. 2023, 17, 794–812. [Google Scholar] [CrossRef]
  19. Hsieh, S.H.; Lee, C.T. Hey Alexa: Examining the effect of perceived socialness in usage intentions of AI assistant-enabled smart speaker. J. Res. Interact. Mark. 2021, 15, 267–294. [Google Scholar] [CrossRef]
  20. Andreassen, T.W.; Rutger, D.O.; Line, L.O. Customer Inconvenience and Price Compensation: A Multiperiod Approach to Labor-Automation Trade-Offs in Services. J. Serv. Res. 2018, 21, 173–183. [Google Scholar] [CrossRef]
  21. Lucia-Palacios, L.; Pérez-López, R. How can autonomy improve consumer experience when interacting with smart products? J. Res. Interact. Mark. 2023, 17, 19–37. [Google Scholar] [CrossRef]
  22. Xiao, L.; Kumar, V. Robotics for Customer Service: A Useful Complement or an Ultimate Substitute? J. Serv. Res. 2019, 24, 9–29. [Google Scholar] [CrossRef]
  23. Brinson, N.H.; Britt, B.C. Reactance and turbulence: Examining the cognitive and affective antecedents of ad blocking. J. Res. Interact. Mark. 2021, 15, 549–570. [Google Scholar] [CrossRef]
  24. Wang, C.L. New frontiers and future directions in interactive marketing: Inaugural Editorial. J. Res. Interact. Mark. 2021, 15, 1–9. [Google Scholar] [CrossRef]
  25. Gao, L.; Li, G.; Tsai, F.; Gao, C.; Zhu, M.; Qu, X. The impact of artificial intelligence stimuli on customer engagement and value co-creation: The moderating role of customer ability readiness. J. Res. Interact. Mark. 2023, 17, 317–333. [Google Scholar] [CrossRef]
  26. Schepers, J.; Belanche, D.; Casaló, L.V.; Flavián, C. How Smart Should a Service Robot Be? J. Serv. Res. 2022, 25, 565–582. [Google Scholar] [CrossRef]
  27. Kunz, W.; Wirtz, J. Corporate Digital Responsibility (CDR) in the Age of AI–Implications for Interactive Marketing. J. Res. Interact. Mark. 2024, 18, 31–37. [Google Scholar] [CrossRef]
  28. Hu, X.; Wise, K. How playable ads influence consumer attitude: Exploring the mediation effects of perceived control and freedom threat. J. Res. Interact. Mark. 2021, 15, 295–315. [Google Scholar] [CrossRef]
  29. Adjei, M.T.; Zhang, N.; Bagherzadeh, R.; Farhang, M.; Bhattarai, A. Enhancing consumer online reviews: The role of moral identity. J. Res. Interact. Mark. 2023, 17, 110–125. [Google Scholar] [CrossRef]
  30. Trippi, R.R.; Turban, E. Neural Networks in Finance and Investing: Using Artificial Intelligence to Improve Real-World Performance; McGraw-Hill Inc.: New York, NY, USA, 1993. [Google Scholar]
  31. Fethi, M.D.; Pasiouras, F. Assessing bank efficiency and performance with operational research and artificial intelligence techniques: A survey. Eur. J. Oper. Res. 2010, 204, 189–198. [Google Scholar] [CrossRef]
  32. Payne, E.M.; Peltier, J.; Barger, V.A. Enhancing the value co-creation process: Artificial intelligence and mobile banking service platforms. J. Res. Interact. Mark. 2021, 15, 68–85. [Google Scholar]
  33. Swan, E.L.; Peltier, J.W.; Dahl, A.J. Artificial Intelligence in Healthcare: The Value Co-Creation Process and Influence of Other Digital Health Transformations. J. Res. Interact. Mark. 2024, 18, 109–126. [Google Scholar] [CrossRef]
  34. Zhu, Y.; Lu, Y.; Gupta, S.; Wang, J.; Hu, P. Promoting smart wearable devices in the health-AI market: The role of health consciousness and privacy protection. J. Res. Interact. Mark. 2023, 17, 257–272. [Google Scholar] [CrossRef]
  35. Vibert, J.; Dupin, N. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 19, 407–408. [Google Scholar]
  36. Leachman, S.A.; Merlino, G. The final frontier in cancer diagnosis. Nature 2017, 542, 36–38. [Google Scholar] [CrossRef] [PubMed]
  37. Patel, V.L.; Shortliffe, E.H.; Stefanelli, M.; Szolovits, P.; Berthold, M.R.; Bellazzi, R.; Abu-Hanna, A. The coming of age of artificial intelligence in medicine. Artif. Intell. Med. 2009, 46, 5–17. [Google Scholar] [CrossRef] [PubMed]
  38. Bennett, C.C.; Hauser, K. Artificial intelligence framework for simulating clinical decision-making: A Markov decision process approach. Artif. Intell. Med. 2013, 57, 9–19. [Google Scholar] [CrossRef] [PubMed]
  39. Neuhofer, B.; Magnus, B.; Celuch, K. The impact of artificial intelligence on event experiences: A scenario technique approach. Electron. Mark. 2020, 31, 601–617. [Google Scholar] [CrossRef] [PubMed]
  40. Ostrom, A.L.; Fotheringham, D.; Bitner, M.J. Customer Acceptance of AI in Service Encounters: Understanding Antecedents and Consequences. In Handbook of Service Science, Volume II. Service Science: Research and Innovations in the Service Economy; Maglio, P.P., Kieliszewski, C.A., Spohrer, J.C., Lyons, K., Patrício, L., Sawatani, Y., Eds.; Springer: Cham, Switzerland, 2019; pp. 77–103. [Google Scholar]
  41. Miville, N.D. Factors Influencing the Diffusion of Innovation and Managerial Adoption of New Technology. Ph.D. Thesis, Nova Southeastern University, Fort Lauderdale, FL, USA, 2005. [Google Scholar]
  42. Al-Rahmi, W.M.; Yahaya, N.; Alamri, M.M.; Alyoussef, I.Y.; Al-Rahmi, A.M.; Kamin, Y.B. Integrating innovation diffusion theory with technology acceptance model: Supporting students’ attitude towards using a massive open online courses (MOOCs) systems. Interact. Learn. Environ. 2021, 29, 1380–1392. [Google Scholar] [CrossRef]
  43. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. J. Manag. Inf. Syst. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  44. Hoi, V.N. Understanding higher education learners’ acceptance and use of mobile devices for language learning: A Rasch-based path modeling approach. Comput. Educ. 2020, 146, 103761. [Google Scholar] [CrossRef]
  45. Choi, J.; Kim, S. Is the smartwatch an IT product or a fashion product? A study on factors affecting the intention to use smartwatches. Comput. Hum. Behav. 2016, 63, 777–786. [Google Scholar] [CrossRef]
  46. Bagozzi, R.P. The Construct Validity of The Affective, Behavioral, And Cognitive Components of Attitude by Analysis of Covariance Structures. Multivar. Behav. Res. 1978, 13, 9–31. [Google Scholar] [CrossRef]
  47. Hoffman, D.L.; Thomas, P.N. Consumer and Object Experience in the Internet of Things: An Assemblage Theory Approach. J. Consum. Res. 2018, 44, 1178–1204. [Google Scholar] [CrossRef]
  48. Price, L.L.; Arnould, E.J.; Deibler, S.L. Consumers’ emotional responses to service encounters: The influence of the service provider. Int. J. Serv. Ind. Manag. 1995, 6, 34–63. [Google Scholar] [CrossRef]
  49. Delcourt, C.; Gremler, D.D.; De Zanet, F.; van Riel, A.C.R. An Analysis of the Interaction Effect Between Employee Technical and Emotional Competencies in Emotionally Charged Service Encounters. J. Serv. Manag. 2017, 28, 85–106. [Google Scholar] [CrossRef]
  50. Belanche, D.; Casaló, L.V.; Flavián, C.; Schepers, J. Robots or Frontline Employees? Exploring Customers’ Attributions of Responsibility and Stability After Service Failure or Success. J. Serv. Manag. 2020, 31, 267–289. [Google Scholar] [CrossRef]
  51. Rapp, A.; Boldi, A.; Curti, L.; Perrucci, A.; Simeoni, R. How Do People Ascribe Humanness to Chatbots? An Analysis of Real-World Human-Agent Interactions and a Theoretical Model of Humanness. Int. J. Hum.–Comput. Interact. 2023, 1–24. [Google Scholar] [CrossRef]
  52. Bickmore, T.W.; Picard, R.W. Establishing and Maintaining Long-Term Human-Computer Relationships. ACM Trans. Comput. Interact. 2005, 12, 293–327. [Google Scholar] [CrossRef]
  53. Gelbrich, K.; Hagel, J.; Orsingher, C. Emotional Support from a Digital Assistant in Technology-Mediated Services: Effects on Customer Satisfaction and Behavioral Persistence. Int. J. Res. Mark. 2021, 38, 176–193. [Google Scholar] [CrossRef]
  54. Chen, Y.H.; Keng, C.J.; Chen, Y.L. How interaction experience enhances customer engagement in smart speaker devices? The moderation of gendered voice and product smartness. J. Res. Interact. Mark. 2022, 16, 403–419. [Google Scholar] [CrossRef]
  55. Li, G.; Zhao, Z.; Li, L.; Li, Y.; Zhu, M.; Jiao, Y. The relationship between AI stimuli and customer stickiness, and the roles of social presence and customer traits. J. Res. Interact. Mark. 2024, 18, 38–53. [Google Scholar] [CrossRef]
  56. Yim, A.; Cui, A.P.; Walsh, M. The Role of Cuteness on Consumer Attachment to Artificial Intelligence Agents. J. Res. Interact. Mark. 2023, 18, 127–141. [Google Scholar] [CrossRef]
  57. Quach, S.; Barari, M.; Moudrý, D.V.; Quach, K. Service integration in omnichannel retailing and its impact on customer experience. J. Retail. Consum. Serv. 2022, 65, 102267. [Google Scholar] [CrossRef]
  58. Yao, Q.; Kuai, L.; Jiang, L. Effects of the anthropomorphic image of intelligent customer service avatars on consumers’ willingness to interact after service failures. J. Res. Interact. Mark. 2023, 17, 734–753. [Google Scholar] [CrossRef]
  59. Gao, Y.; Liu, H. Artificial intelligence-enabled personalization in interactive marketing: A customer journey perspective. J. Res. Interact. Mark. 2023, 17, 663–680. [Google Scholar] [CrossRef]
  60. Zimmermann, R.; Mora, D.; Cirqueira, D.; Helfert, M.; Bezbradica, M.; Werth, D.; Weitzl, W.J.; Riedl, R.; Auinger, A. Enhancing brick-and-mortar store shopping experience with an augmented reality shopping assistant application using personalized recommendations and explainable artificial intelligence. J. Res. Interact. Mark. 2023, 17, 273–298. [Google Scholar] [CrossRef]
  61. Troshani, I.; Hill, S.R.; Sherman, C.; Arthur, D. Do we trust in AI? Role of anthropomorphism and intelligence. J. Comput. Inform. Syst. 2021, 61, 481–491. [Google Scholar] [CrossRef]
  62. Mele, C.; Spena, T.R.; Kaartemo, V.; Marzullo, M.L. Smart nudging: How cognitive technologies enable choice architectures for value co-creation. J. Bus. Res. 2021, 129, 949–960. [Google Scholar] [CrossRef]
  63. Aslam, W.; Farhat, K. The Role of Artificial Intelligence in Interactive Marketing: Improving Customer-Brand Relationship. In The Palgrave Handbook of Interactive Marketing; Wang, C., Ed.; Springer International Publishing: Berlin/Heidelberg, Germany, 2023; pp. 199–217. [Google Scholar]
  64. Preece, J. Empathy online. Virtual Real. 1999, 4, 74–84. [Google Scholar] [CrossRef]
  65. Jo, D.J.; Han, J.K.; Chung, K.; Lee, K. Empathy Between Human and Robot? In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 151–152. Available online: https://ieeexplore.ieee.org/abstract/document/6483546 (accessed on 22 April 2024).
  66. Candao, G.C.; Herrando, C.; Martín-De Hoyos, M.J. Affective Interaction with Technology: The Role of Virtual Assistants in Interactive Marketing. In The Palgrave Handbook of Interactive Marketing; Wang, C., Ed.; Springer International Publishing: Berlin/Heidelberg, Germany, 2023; pp. 275–298. [Google Scholar]
  67. Petriglieri, J.L. Under threat: Responses to and the consequences of threats to individuals’ identities. Acad. Manag. Rev. 2011, 36, 641–662. [Google Scholar]
  68. Craig, K.; Thatcher, J.B.; Grover, V. The IT Identity Threat: A Conceptual Definition and Operational Measure. J. Manag. Inform. Syst. 2019, 36, 259–288. [Google Scholar] [CrossRef]
  69. Park, E.; Del Pobil, A.P. Users’ attitudes toward service robots in South Korea. Ind. Robot. 2013, 40, 77–87. [Google Scholar] [CrossRef]
  70. Sarstedt, M.; Henseler, J.; Ringle, C.M. Multigroup analysis in partial least squares (PLS) path modeling: Alternative methods and empirical results. Adv. Int. Mark. 2011, 22, 195–218. [Google Scholar]
  71. Puntoni, S.; Reczek, R.W.; Giesler, M.; Botti, S. Consumers and artificial intelligence: An experiential perspective. J. Mark. 2021, 85, 131–151. [Google Scholar] [CrossRef]
  72. Gerlich, L.; Parsons, B.N.; White, A.S.; Prior, S.; Warner, P. Gesture recognition for control of rehabilitation robots. Cogn. Technol. Work. 2007, 9, 189–207. [Google Scholar] [CrossRef]
  73. Hogg, M.A.; Abrams, D.; Brewer, M.B. Social identity: The role of self in group processes and intergroup relations. Group Process. Intergroup Relat. 2017, 20, 570–581. [Google Scholar] [CrossRef]
  74. Yogeeswaran, K.; Złotowski, J.; Livingstone, M.; Bartneck, C.; Sumioka, H.; Ishiguro, H. The interactive effects of robot anthropomorphism and robot ability on perceived threat and support for robotics research. J. Hum.-Robot Interact. 2016, 5, 29–47. [Google Scholar] [CrossRef]
  75. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm appreciation: People prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103. [Google Scholar] [CrossRef]
  76. Faranda, W.T. A scale to measure the cognitive control form of perceived control: Construction and preliminary assessment. Psychol. Mark. 2001, 18, 1259–1281. [Google Scholar] [CrossRef]
  77. White, R.W. Motivation reconsidered: The concept of competence. Psychol. Rev. 1959, 66, 297–333. [Google Scholar] [CrossRef] [PubMed]
  78. Jörling, M.; Böhm, R.; Paluch, S. Service robots: Drivers of perceived responsibility for service outcomes. J. Serv. Res. 2019, 22, 404–420. [Google Scholar] [CrossRef]
  79. Pavlou, P.A. Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model. Int. J. Electron. Commer. 2003, 7, 101–134. [Google Scholar]
  80. Schweitzer, F.; Van Den Hende, E.A. To be or not to be in thrall to the march of smart products. Psychol. Mark. 2016, 33, 830–842. [Google Scholar] [CrossRef]
  81. Wirtz, J.; Patterson, P.G.; Kunz, W.H.; Gruber, T.; Lu, V.N.; Paluch, S.; Martins, A. Brave new world: Service robots in the frontline. J. Serv. Manag. 2018, 29, 907–931. [Google Scholar] [CrossRef]
  82. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-Dependent algorithm aversion. J. Mark. Res. 2019, 56, 809–825. [Google Scholar] [CrossRef]
  83. Illia, L.; Colleoni, E.; Zyglidopoulos, S. Ethical implications of text generation in the age of artificial intelligence. Bus. Ethics Environ. Responsib. 2023, 32, 201–210. [Google Scholar] [CrossRef]
  84. Libai, B.; Bart, Y.; Gensler, S.; Hofacker, C.F.; Kaplan, A.; Kötterheinrich, K.; Kroll, E.B. A brave new world? On AI and the management of customer relationships. J. Interact. Mark. 2020, 51, 44–56. [Google Scholar] [CrossRef]
  85. Matz, S.C.; Netzer, O. Using big data as a window into consumers’ psychology. Curr. Opin. Behav. Sci. 2017, 18, 7–12. [Google Scholar] [CrossRef]
  86. Davenport, T.; Guha, A.; Grewal, D.; Bressgott, T. How artificial intelligence will change the future of marketing. J. Acad. Mark. Sci. 2020, 48, 24–42. [Google Scholar] [CrossRef]
  87. Aquino, K.; Reed, A. The self- importance of moral identity. J. Pers. Soc. Psychol. 2002, 83, 1423–1440. [Google Scholar] [CrossRef]
  88. Reed, A.; Aquino, K.; Levy, E. Moral identity and judgments of charitable behaviors. J. Mark. 2007, 71, 178–192. [Google Scholar] [CrossRef]
  89. Thornton, M.A.; Rupp, D.E. The joint effects of justice climate, group moral identity, and corporate social responsibility on the prosocial and deviant behaviors of groups. J. Bus. Ethics. 2016, 137, 677–697. [Google Scholar] [CrossRef]
  90. Hermann, E. Leveraging Artificial Intelligence in Marketing for Social Good-An Ethical Perspective. J. Bus. Ethics 2021, 179, 43–61. [Google Scholar] [CrossRef] [PubMed]
  91. Bandura, A. Self-efficacy mechanism in human agency. Am. Psychol. 1982, 37, 122–147. [Google Scholar] [CrossRef]
  92. Fiske, S.T.; Taylor, S.E. Social Cognition, 2nd ed.; McGraw-Hill: New York, NY, USA, 1991. [Google Scholar]
  93. Yogeeswaran, K.; Dasgupta, N. The Devil is in the Details: Abstract versus Concrete Construals of Multiculturalism Differentially Impact Intergroup Relations. J. Pers. Soc. Psychol. 2014, 106, 772–789. [Google Scholar] [CrossRef] [PubMed]
  94. Wang, C.L. Editorial-The misassumptions about contributions. J. Res. Interact. Mark. 2022, 16, 1–2. [Google Scholar] [CrossRef]
  95. Yu, B. Deep Learning Applications for Interactive Marketing in the Contemporary Digital Age. In The Palgrave Handbook of Interactive Marketing; Wang, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2023; pp. 705–728. [Google Scholar]
  96. Moor, J.H. Why we need better ethics for emerging technologies. Ethics Inf. Technol. 2005, 7, 111–119. [Google Scholar] [CrossRef]
  97. Wright, D. A framework for the ethical impact assessment of information technology. Ethics Inf. Technol. 2011, 13, 199–226. [Google Scholar] [CrossRef]
  98. Belk, R. Ethical issues in service robotics and artificial intelligence. Serv. Ind. J. 2021, 41, 860–876. [Google Scholar] [CrossRef]
  99. De Bruyn, A.; Viswanathan, V.; Beh, Y.S.; Brock, J.K.-U.; Von Wangenheim, F. Artificial intelligence and marketing: Pitfalls and opportunities. J. Interact. Mark. 2020, 51, 91–105. [Google Scholar] [CrossRef]
  100. Lee, I.; Shin, Y.J. Machine learning for enterprises: Applications, algorithm selection, and challenges. Bus. Horiz. 2020, 63, 157–170. [Google Scholar] [CrossRef]
  101. Kannan, P.K.; Kulkarni, G. The impact of COVID-19 on customer journeys: Implications for interactive marketing. J. Res. Interact. Mark. 2022, 16, 22–36. [Google Scholar] [CrossRef]
  102. McDonald, M. Viewpoint-a big opportunity for interactive marketing post-COVID-19. J. Res. Interact. Mark. 2022, 16, 15–21. [Google Scholar] [CrossRef]
  103. Cho, Y.N.; Kim, H.E.; Youn, N. Together or alone on the prosocial path amidst the COVID-19 pandemic: The partitioning effect in experiential consumption. J. Res. Interact. Mark. 2022, 16, 64–81. [Google Scholar] [CrossRef]
  104. Sheth, J.N. Post-pandemic marketing: When the peripheral becomes the core. J. Res. Interact. Mark. 2022, 16, 37–44. [Google Scholar] [CrossRef]
Figure 1. Research Model.
Figure 2. Identity threat mediates the effect of mechanical AI services on consumers’ cognitive attitudes. Note: NS indicates not significant (p > 0.05); * and *** indicate p < 0.05 and p < 0.001.
Figure 3. Moderating effect paths of utilitarian judgment (a) and deontological judgment (b). Note: NS indicates not significant (p > 0.05); *, **, and *** indicate p < 0.05, p < 0.01, and p < 0.001.
Figure 4. Structural equation model.
Figure 5. Summary of findings.
Table 1. Moderating role of service scenarios between identity threat and consumers’ affective attitudes.
Model Path | Restaurant Scenario (Std. Coef., T-Value) | Hotel Scenario (Std. Coef., T-Value) | Δχ²
Identity threat → consumers’ affective attitudes | 0.890 ***, 3.604 | −0.058, −0.368 | 5.0361 *
Note: * and *** denote p < 0.05 and p < 0.001.
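The Δχ² column reflects a chi-square difference test: in multigroup SEM, constraining the path to be equal across the two scenarios raises the model chi-square, and if the increase exceeds the critical value for the added degree of freedom, the path differs between groups. As a quick illustrative check (not the authors' code), the reported Δχ² = 5.0361 with one added degree of freedom can be converted to a p-value using the closed form P(χ² > x) = erfc(√(x/2)) for df = 1:

```python
import math

def chi2_diff_p(delta_chi2: float) -> float:
    """p-value of a chi-square difference test with 1 degree of freedom."""
    # For df = 1, the chi-square survival function is erfc(sqrt(x / 2))
    return math.erfc(math.sqrt(delta_chi2 / 2.0))

p = chi2_diff_p(5.0361)  # the delta chi-square reported in Table 1
# p is about 0.025, below 0.05, consistent with the * marker on 5.0361
```

The critical value 3.841 recovers p = 0.05, which is why any Δχ² above it with one degree of freedom is flagged as a significant group difference.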
Table 2. Summary of results.
Hypothesis (Effect) | Relationship | Study 1 | Study 2 | Study 3
H1a | Mechanical AI → cognitive attitudes | - | + 1 | + 3
H1b | Mechanical AI → affective attitudes | - | - | + 4
H2a | Thinking AI → cognitive attitudes | + | - | -
H2b | Thinking AI → affective attitudes | + | - | -
H3a | Affective AI → cognitive attitudes | + | + | + 5
H3b | Affective AI → affective attitudes | + | + | + 6
H4a (mediating effect) | AI level → identity threat → cognitive attitudes | —— | P 2 | P 7
H4b (mediating effect) | AI level → identity threat → affective attitudes | —— | - | P 8
H5a (mediating effect) | AI level → perceived control → cognitive attitudes | —— | - | -
H5b (mediating effect) | AI level → perceived control → affective attitudes | —— | - | -
H6a (moderating effect) | AI level × type of service scenarios → cognitive attitudes | —— | —— | P 11
H6b (moderating effect) | AI level × type of service scenarios → affective attitudes | —— | —— | -
H7a (moderating effect) | AI level × type of moral judgments → cognitive attitudes | —— | —— | P 9
H7b (moderating effect) | AI level × type of moral judgments → affective attitudes | —— | —— | P 10
Note: - indicates supported; + indicates not supported; P indicates partially supported. 1 Mechanical AI services have a significant negative effect on consumers’ cognitive attitudes. 2 Identity threat fully negatively mediates the relationship between mechanical AI and consumers’ cognitive attitudes. 3 Mechanical AI has a significant negative effect on consumers’ cognitive attitudes. 4 Mechanical AI has a significant negative effect on consumers’ affective attitudes. 5 Affective AI services have a significant effect on consumers’ cognitive attitudes but not a significant positive effect. 6 Affective AI services have a significant effect on consumers’ affective attitudes, but the positive effect is not significant. 7 Identity threat fully negatively mediates the relationship between mechanical AI and consumers’ cognitive attitudes. 8 Identity threat plays a fully negative mediating role between mechanical AI and consumers’ affective attitudes. 9 Utilitarian judgment negatively moderates the relationship between mechanical AI and consumers’ cognitive attitudes; deontological judgment positively moderates the relationship between affective AI services and consumers’ cognitive attitudes. 10 Utilitarian judgments play a negative moderating role between mechanical AI and consumers’ affective attitudes and a negative moderating role between affective AI and consumers’ affective attitudes; deontological judgments positively moderate affective AI’s effect on consumers’ affective attitudes. 11 Restaurant service scenarios positively moderate identity threat’s effect on consumers’ affective attitudes.
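For readers who want to probe mediation claims like those in notes 2, 7, and 8, a percentile-bootstrap test of the indirect effect a × b is the standard tool: resample cases with replacement, re-estimate the a-path (mediator on treatment) and b-path (outcome on mediator, controlling for treatment) in each resample, and check whether the 95% confidence interval of their product excludes zero. The sketch below is illustrative only; the data are synthetic and the variable names merely echo the constructs above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Synthetic data with a true negative indirect effect of 0.5 * (-0.4) = -0.2
mechanical_ai = rng.integers(0, 2, n).astype(float)          # 0/1 service condition
identity_threat = 0.5 * mechanical_ai + rng.normal(0, 1, n)  # a-path
attitude = -0.4 * identity_threat + rng.normal(0, 1, n)      # b-path

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # slope of mediator on treatment
    X = np.column_stack([np.ones_like(x), x, m])  # regress y on intercept, x, m
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]   # slope of outcome on mediator
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(indirect_effect(mechanical_ai[idx], identity_threat[idx], attitude[idx]))
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
# Mediation is indicated when the 95% CI excludes zero (here it should be negative)
```

Since the upper bound of the interval falls below zero, the bootstrap supports a negative indirect effect, mirroring the logic behind the "fully negatively mediates" conclusions in the table notes.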

Share and Cite

MDPI and ACS Style

Fan, Q.; Dai, Y.; Wen, X. Is Smarter Better? A Moral Judgment Perspective on Consumer Attitudes about Different Types of AI Services. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 1637-1659. https://doi.org/10.3390/jtaer19030080
