Article

Costly “Greetings” from AI: Effects of Product Recommenders and Self-Disclosure Levels on Transaction Costs

1 School of Management, Xiamen University, Xiamen 361005, China
2 School of International Law, East China University of Political Science and Law, Shanghai 200050, China
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(18), 8236; https://doi.org/10.3390/su16188236
Submission received: 31 July 2024 / Revised: 9 September 2024 / Accepted: 19 September 2024 / Published: 22 September 2024

Abstract

Companies are increasingly using artificial intelligence (AI) to provide users with product recommendations, but its efficacy is inconsistent. Drawing upon social exchange theory, we examine the effects of product recommenders and their levels of self-disclosure on transaction costs. Specifically, we recruited 78 participants and conducted a 2 × 2 online experiment in which we manipulated the product recommender (human versus AI) and the level of self-disclosure (high versus low) and examined their effects on consumers’ return intentions. We predicted and found that a low level of self-disclosure from human recommenders, rather than from AI counterparts, results in higher emotional support, which leads to lower transaction costs. However, under high levels of self-disclosure, consumers’ emotional support and subsequent transaction costs do not differ between human and AI recommenders. Accordingly, we provide theoretical insights into the roles of self-disclosure and emotional support in human–machine interactions, and we contribute to sustainable AI practices by enhancing the efficiency of business operations and advancing broader sustainability objectives.

1. Introduction

Artificial intelligence (AI) agents are increasingly prevalent in consumer interactions across a range of services, including digital assistants, sales robots, and financial advisors [1]. In the realm of e-commerce, AI-driven chatbots are engineered to replicate human communication by discerning and responding to the underlying intent and sentiment in user interactions. This emulation, facilitated by advances in natural language processing (NLP) and large language models (LLMs), substantially boosts operational efficiency and lowers transaction costs [2]. For instance, Amazon utilizes user-based AI technologies to match users with similar tastes and provide tailored product recommendations based on their historical behavior [3]. Similarly, cosmetics companies such as Sephora [4] and Kiehl’s [5] employ AI chatbots to recommend products tailored to customers’ preferences and skin types. However, several AI robotics firms, including SoftBank’s Pepper unit [6] and Meta’s M assistant, have recently scaled back or discontinued their AI service operations [7], underscoring the potential limitations and risks associated with overreliance on AI technologies. The constraints inherent in AI training data may lead to the oversight of the needs of diverse groups and raise significant concerns regarding privacy breaches associated with user data [8], cautioning against sole reliance on AI recommendations. In addition, a possible reason for skepticism towards AI is the common prejudice that it lacks human feelings and emotional intelligence [9], which may lead to negative attitudes and the perception that AIs are less trustworthy (e.g., uncanny valley feelings and algorithm aversion [10]). Employees are more willing to accept information provided by humans than by AI managers, even when the information is identical [11]. However, when AI demonstrates an ability to understand user emotions and needs, users are more likely to trust its recommendations [12]. These findings indicate that the adoption of AI agents is influenced not only by their tangible benefits but also by subjective perceptions, particularly the emotional and affective aspects of consumer interactions with AI-based service technologies. Existing research primarily focuses on individuals’ perceptions of AI’s agency and experiential capabilities and has not yet explored individual behavior through the lens of emotional perception. Therefore, it is imperative to investigate the underlying mechanisms, particularly emotional support, that shape consumer behavior when interacting with AI, to compare different product recommenders (AI versus human), and to examine their potential economic outcomes (i.e., transaction costs).
Companies often disclose an AI system’s capabilities and past performance (e.g., Google’s BERT and IBM’s Watson), such as its textual analysis abilities and processing accuracy, to enhance trust among users and improve their overall experience. Similar to how humans foster relationships through self-disclosure, prior research indicates that AI self-disclosure is crucial for deepening these interactions. For instance, Lee et al. [13] found that AI self-disclosure had a reciprocal effect on promoting deeper participant self-disclosure, and a positive effect on improving participants’ perceived intimacy and enjoyment. Tsumura and Yamada [14] noted that the absence of self-disclosure diminishes empathy between humans and AI, whereas high levels of self-disclosure enhance human empathy. However, other studies have found that AI self-disclosure may backfire because users could react negatively if they feel that they were manipulated into disclosing information [15]. Moreover, research on expectancy theory suggests that when someone demonstrates high capability, people expect future performance at a similar level and may even anticipate better behavior [16]. That is, high self-disclosure may inflate customer expectations, potentially raising transaction costs when these expectations are not met. Given this mixed evidence, it is important to examine the effects of varying levels of self-disclosure by AI (versus human) recommenders on individual perceptions and transaction costs in interactive settings.
To address this research question, social exchange theory provides a robust framework for identifying consumers’ perceived feelings and subsequent behaviors. The theory suggests that relationships are established through a sequence of exchanges characterized by self-interest and interdependence [17]. It thus offers a lens for analyzing the distinct effects of AI and human interactions on transaction costs by examining how consumers evaluate the costs and benefits associated with emotional engagement in each context. Compared with AI, humans are capable of offering profound emotional support, such as empathy and compassion, which is perceived as a high-value benefit and consequently reduces transaction costs. Additionally, the theory posits self-disclosure as a cognitive process that involves assessing rewards and costs, where the perceived value of disclosure is balanced against its potential risks [17]. In essence, akin to interpersonal dynamics, we conceptualize consumers’ emotional support towards AIs as a dual-faceted process. On one hand, it is a rational endeavor in which the benefits and costs of self-disclosure are assessed and weighed [18]. On the other hand, it is a social interactional process shaped by contextual and relational factors, as well as by the unfolding events within the interaction itself [19]. From this understanding, we propose that product recommendations generated by humans typically result in lower transaction costs than those generated by AI. This is attributed to the perception of greater emotional support in human interactions, which fosters trust and diminishes perceived risks. Furthermore, we hypothesize that under conditions of low self-disclosure, transaction costs will be lower when interacting with human recommenders than with AI recommenders, whereas under conditions of high self-disclosure, the transaction costs associated with AI and human recommenders will not significantly differ.
To test these hypotheses, we conducted a 2 × 2 between-subjects experiment to investigate the effects of product recommender subjects (AI and human) and self-disclosure levels (high and low) on emotional support and transaction costs. Participants were randomly assigned to one of four groups where they read a sales manager’s description, indicated product preferences, received recommendations, and then rated their satisfaction and intentions to return. Consistent with our theoretical expectations, the results demonstrated that AI recommenders incurred higher transaction costs than their human counterparts. We observed that under conditions of low self-disclosure, transaction costs were lower with human recommenders compared to AI; however, with high self-disclosure, the difference in transaction costs between AI and human recommenders was not statistically significant. Our findings suggested that human recommenders provided higher emotional support than AI, which in turn led to reduced transaction costs. Moreover, in line with social exchange theory, we found that low self-disclosure from human subjects elicited greater emotional support compared to that from AI subjects, reducing individuals’ propensity to return products and thus lowering transaction costs.
This study makes several theoretical contributions to the extant research. First, we introduce a theoretical framework and research model grounded in social exchange theory to elucidate how the source of product recommendations and the level of self-disclosure jointly influence emotional support and transaction costs (i.e., consumers’ likelihood of returning products). We show that social exchange theory, which is often applied in interpersonal contexts, extends to contexts in which humans interact with AI product recommenders. Our findings indicate that interactions with AI recommenders incur higher transaction costs through the mediating effect of reduced emotional support, which deepens our understanding of the central role of emotional support as an underlying mechanism of human–AI interactions. These findings support the view of AI agents as social actors to whom users attribute human-like characteristics and apply the same social norms and expectations as they would in human interactions [20,21]. As a result, consumer decisions, such as whether to return products, are influenced by the source of the product recommendations, mirroring the social dynamics common in interpersonal interactions. This observation aligns with the interaction-centric theory proposed by Al-Natour and Benbasat [22] and Al-Natour et al. [17], which suggests that users gather cues during their interactions to form their perceptions while engaging with AI entities.
Second, we contribute to the management and psychology literature on self-disclosure [23,24]. While prior studies primarily examined how the use of machines facilitates human self-disclosure [13,25], our research explores how self-disclosure by machines influences individual behaviors. This approach extends the nascent body of literature on the psychological cognition and behavioral responses of customers interacting with various human–robot configurations [2,26]. Drawing upon social exchange theory, our findings corroborate previous research on the mediating effect of emotional support in the adoption of product recommendation agents [13,14], while showing that this effect is not uniform; rather, it depends on how much information the AI or human recommender discloses. Specifically, we find that under low levels of self-disclosure, human recommenders elicit greater emotional support and lower transaction costs than AI recommenders, a difference that disappears under high levels of self-disclosure. This insight underscores the pivotal role of self-disclosure in shaping consumer experiences with AI interactions, particularly its impact on emotional support and economic outcomes.
The remaining sections of this paper are structured as follows. Section 2 provides the research background and develops our hypotheses. Section 3 and Section 4 describe the experimental design and results. This paper concludes with Section 5.

2. Background and Hypothesis Development

2.1. Background

There has been a growing trend toward the adoption of AI-powered service robots to give recommendations, which are gradually replacing human salespeople in the service sector [27]. Advances in AI technologies, such as natural language processing and large language models, have intensified the focus on integrating personalization techniques [28], including sentiment analysis and conversational style adaptation, to enhance customer service quality and optimize human resource allocation [29]. However, the efficacy of these AI systems remains inconsistent. This has prompted researchers to explore the differences between humans and AI in the provision of services (see Table 1). For example, studies have examined customers’ attribution of responsibility for AI recommender service failures, finding that customers tend to attribute more responsibility to the firm and less to the AI robot in comparison with humans [30]. Other studies have focused on customer perceptions of service stability, revealing that customers perceive AI robots to be more reliable than humans [27]. You et al. [31] noted that individuals largely exhibit algorithm appreciation and follow algorithmic advice to a greater extent than identical human advice because of higher trust in an algorithmic than a human advisor. These studies, focusing on the attribution of responsibility, perceptions of stability, and differences in service delivery, provide valuable insights into the differences in customer perceptions of human and AI sales agents. Notably, although firms assume AI will enhance operational efficiency and decrease transaction costs, defined as the expenses involved when buyers (customers) and sellers (retailers) engage in trade [32], a growing body of research documents that individuals often exhibit “algorithm aversion”, the tendency to discount computer-based advice more heavily than human advice, even when the advice is otherwise identical [11,33]. Given these mixed results, we conducted an experiment to examine how humans respond to the implementation of AI in a product recommendation context. Specifically, we examined and compared the impact of AI and human agents on transaction costs.
Self-disclosure refers to any personal information one shares with others; it may include “any information exchange that refers to the self, including personal states, dispositions, events in the past, and plans for the future” [38], and it is key to the formation of interpersonal relationships and the achievement of intimacy. Recent research has mainly focused on how an AI agent influences human self-disclosure and has produced mixed findings. For example, Kim et al. [25] found that consumers’ lay beliefs about AI (i.e., a perceived lack of social judgment capability) lead to enhanced disclosure of sensitive personal information to AI (vs. humans). Lee et al. [23] noted that an AI chatbot lowers humans’ psychological barriers to being judged and increases their willingness to engage in self-disclosure. On the contrary, when users perceived an AI chatbot to have thoughts and emotions, the degree of anthropomorphism was higher and the degree of user self-disclosure was lower [39]. Moreover, Ho et al. [40] found no difference between AI chatbots and humans in creating emotional, relational, and psychological benefits. Prior studies have primarily examined how the use of machines facilitates human self-disclosure, but limited research explores how self-disclosure by machines influences individual perceptions and behaviors (e.g., Lee et al. [13]; Tsumura and Yamada [14]). Our approach extends the emerging discourse on psychological cognition and behavioral responses in human–robot interactions, placing an emphasis on the dynamics of machine self-disclosure. Accordingly, this paper explores how the product recommender (i.e., AI versus human) influences consumers’ perceived emotional support and return behaviors (i.e., transaction costs), and examines the moderating effect of self-disclosure levels on these relationships.

2.2. Hypothesis Development

2.2.1. The Effect of Product Recommender Subject on Transaction Cost

Social exchange theory, which has been pivotal in understanding interpersonal interactions, posits that relationships are created through a process of cost-benefit analysis [17]. Anchored in this theory, each interaction is a transaction in which individuals strive to maximize their benefits and minimize their costs.
When examining the differential impacts between AI and human recommenders on transaction costs, we can consider how the dynamics of cost-benefit analysis influence user behavior. Interactions with humans typically involve emotional communication, such as empathy, trust, and intuition [12]. These interactions are enriched by a depth of social cues and adaptive responses that AI systems have yet to fully replicate [1,12]. When consumers interact with human salespeople or recommenders, they weigh the benefits of receiving personalized advice—seemingly attuned to their individual needs and preferences—against the potential costs associated with communication and the emotional investment such interactions require. This emotional connection and trust act as significant benefits, effectively reducing customers’ skepticism about the transaction outcome [41], thereby decreasing the likelihood of returns and associated transaction costs.
Conversely, interactions with AI systems often lack the intuitive and emotional responsiveness characteristic of human interactions [1]. The absence of emotional support heightens perceived uncertainties, such as potential misunderstandings and dissatisfaction [42,43], because these systems frequently encounter challenges in effectively interpreting and responding to nuanced human communications. Although AI can efficiently process large volumes of data and deliver algorithm-based recommendations, this advantage cannot compensate for the high cognitive costs due to the lack of understanding and profound emotional interaction [44,45]. This deficit in emotional responsiveness complicates customers’ ability to trust AI recommendations, amplifying the perceived risk and potential costs of the transaction. Compared to human product recommendations, AI product recommendations are likely to result in a higher inclination to return products due to insufficient emotional support and personalized interaction. Consequently, we hypothesize the following.
Hypothesis 1.
Compared to human recommenders, AI recommenders will lead to higher transaction costs.

2.2.2. The Mediating Effect of Emotional Support

Emotional support is a type of social support that communicates empathy, emotional validation, and encouragement to people who are experiencing stressful life events [46]. It addresses basic human needs of being cared for and supported by someone else. According to social exchange theory, this provision of emotional support can be seen as a crucial factor in interpersonal communications [47]. According to the CASA framework [48], people perceive and respond to computers as they naturally do with humans, applying social scripts derived from human experiences to their interactions with computers. This suggests that even when provided by AI, emotional support can still fulfill a similar function in social exchanges, potentially reducing transaction costs associated with AI interactions.
However, prior research indicates that consumers may recognize AIs’ perceived lack of emotional intelligence and moderate agency, which can lead to negative attitudes [9,10,49]. For example, Tong et al. [49] demonstrated that disclosing the use of AI in providing feedback to employees can lead to negative perceptions among the employees, thereby offsetting the value created by the AI deployment. Similarly, when predicting student performance, individuals prefer to rely on their own predictions (or predictions from another person) rather than predictions produced by an algorithm, even after receiving information that demonstrates that the algorithm’s predictions are consistently more accurate than their own [10]. This suggests that the adoption of AI systems is influenced not only by their tangible benefits but also by subjective perceptions, particularly the emotional and affective aspects of consumer interactions with AI-based self-service technologies [50]. Overall, while AI has the potential to provide emotional support, it significantly lags behind humans in fostering emotional connections and generating positive emotional responses. So, we hypothesize the following:
Hypothesis 2.
Emotional support mediates the relationship between product recommenders and transaction cost. Compared to human recommenders, AI recommenders will result in higher transaction costs through the mediating effect of reduced emotional support.

2.2.3. The Moderating Effect of Self-Disclosure Level

Anchored in social exchange theory, research has treated self-disclosure as a cognitive process that involves rewards and costs [17]. The theory posits that relationships are formed through a series of exchanges characterized by self-interest and interdependence, and views self-disclosure as a product of a cost-benefit analysis [20].
Under conditions of low self-disclosure, product recommenders do not disclose substantial personal information, which profoundly affects their interactions with consumers and subsequent outcomes. Disclosing insufficient information substantially constrains the capacity to build trust and relationships with customers, which is pivotal in enhancing customer satisfaction and loyalty [51]. That is, the scarcity of personal information may lead customers to perceive the recommendations as lacking an interactive, context-specific, and personalized experience [52,53], increasing the likelihood of dissatisfaction with the recommended products. Human recommenders, endowed with the innate capability to establish personal connections and intuitively build trust [54], can effectively counteract the negative effects of limited information disclosure through their inherent interpersonal skills. This capacity frequently results in reduced transaction costs, as customers experience a sense of personal engagement and trust notwithstanding the limited background information on the recommender. In contrast, AI recommenders, which cannot provide additional contextual or nuanced information, may appear impersonal and less trustworthy [55], potentially elevating transaction costs due to heightened user skepticism and diminished satisfaction.
In contrast, under conditions of high self-disclosure, both human and AI recommenders are capable of providing a form of trust that contributes to the formation of high expectations. When AI and humans demonstrate high levels of self-disclosure during communication, consumers may experience enhanced trust and understanding. This can establish deeper emotional connections with consumers, which reduces uncertainty and increases the trust and expectations in future performance [56]. Owing to this, consumers are more likely to invest significant cognitive resources during the interaction and expect to receive personalized recommendations. However, when product recommendations do not meet these expectations, the perceived benefits do not offset the initial cognitive efforts (costs). The discrepancies between high expectations and actual recommendations lead to significant dissatisfaction, engendering a sense of negative emotions such as disappointment [57]. This suggests that in interactions characterized by high self-disclosure, whether the recommender is human or artificial intelligence, consumer return intentions are primarily driven by the negative emotions caused by the discrepancies between expectations and delivery. Thus, we hypothesize the following:
Hypothesis 3.
Under conditions of low self-disclosure, transaction costs will be lower in response to human recommenders compared to AI counterparts, but under conditions of high self-disclosure, the transaction costs will not differ between AI and humans.

3. Method

3.1. Participant

The sample for the online experiment consists of 78 participants [58] recruited from a prestigious university through the Credamo platform (https://www.credamo.com). Like Qualtrics and Amazon Mechanical Turk (MTurk), Credamo is a specialized data collection platform based in China that provides data services to researchers from over 3000 universities globally [59,60]. Within the participant demographic, 34.62% were male and 65.38% were female. The majority of participants (87.18%) fell within the age range of 21–30 years, and 98.72% held a Bachelor’s degree, ensuring their capability to fully understand and engage with the experimental procedures and tasks. Participants were randomly assigned to one of four experimental conditions as delineated in Table 2, and they were compensated CNY 1 upon completion of the experiment.
To determine the required sample size and robust statistical power for this study, a power analysis was conducted (Cohen 1992) using SPSS 29 [58]. We aimed for a power value of 0.8 and an effect size (Cohen’s f) of 0.4, considering equal group weights of 1 across four groups, with a significance level set at 0.05. The calculated sample size was 76, with 19 participants per group. Our actual sample size is 78, which exceeds the calculated requirement of 76. This indicates that the study design has adequate statistical power, thereby enhancing the reliability and validity of the results. The increased sample size further ensures more precise detection of the anticipated effects and minimizes the risk of Type II errors.
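For illustration, the following snippet reproduces this a priori calculation. It is a minimal sketch using Python’s statsmodels rather than SPSS 29, which the study actually used; the inputs are the values reported above.

```python
# A priori power analysis for a four-group (2 x 2) design, solving for the
# total sample size; a sketch using statsmodels instead of SPSS.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.4,  # Cohen's f, as specified above
    k_groups=4,       # four equally weighted cells
    alpha=0.05,
    power=0.80,
)
print(round(n_total))  # approximately 76 participants in total, i.e., 19 per cell
```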

3.2. Research Design

To test our hypotheses, we conducted a 2 × 2 online experiment, manipulating the product recommender subjects (AI versus human) and self-disclosure levels (low versus high) to empirically test their influence on emotional support and transaction cost (i.e., the willingness to return). In all conditions, participants were required to complete a product recommendation task where they selected their preferences and received recommendations from assigned sales managers. They then decided whether to proceed with a return based on the provided recommendations. All experimental materials were automatically administered via Credamo, and the results for each participant were collected in the same manner. Notably, all experiments reported in this paper were reviewed and approved by the relevant IRBs, and we adhered to the principles outlined by Jongepier and Klenk [61] and Coons and Weber [62] in our online experiment.

3.3. Procedure

We recruited participants from business school alumni through an online questionnaire using Credamo. After clicking the link of this online questionnaire, participants were randomly assigned by Credamo to one of four experimental groups. They were instructed to follow a sequence that included five primary phases of this online experiment (see Figure 1): (1) reading a consent form and a detailed description of the sales manager, which provided context about the manager’s personal information, (2) indicating personal product preferences focused on features, preferred materials, type of lid, and price range for selecting a water bottle, (3) receiving product recommendations, (4) evaluating their satisfaction with the recommendations and their intention to return the products, and (5) completing a post-experiment questionnaire.
In the third phase of the experiment, all participants received the same product recommendation (a temperature-measuring bottle made of aluminum and silicone, without a lid) regardless of the preferences they had indicated; that is, the recommended product was not selected based on individual preferences or the choices provided in the selection range. This approach allowed us to test the impact of unexpected or unsolicited recommendations on participants’ reactions and acceptance, independent of their prior stated preferences. At the same time, recommending the same product to everyone was designed to enhance both the internal and external validity of the experiment. By controlling the product type and ensuring all participants received identical product information, we eliminated confounding effects caused by variations in product features. This setup also reflects typical industry practice, where companies often offer standardized products rather than fully customized recommendations, thereby allowing us to more accurately gauge the impact of standardized product recommendations on consumer behavior.
After reviewing the product recommendations, participants assessed their satisfaction and their intentions to return the products. Finally, upon completing the evaluation, participants filled out a post-experiment questionnaire that gathered additional data on their overall experience, including demographic information and perceived emotional support from the sales manager. Each participant was compensated CNY 1 upon completion; the online experiment and post-experiment questionnaire together took approximately five minutes.

3.4. Data

Prior to data collection, participants were furnished with a comprehensive experimental statement and an informed consent document, which clearly outlined the potential risks and benefits associated with their participation in this study. Data such as emotional support, intentions to return, and demographic information were gathered through an online questionnaire. These data were collected anonymously and encrypted by the researchers to ensure the security and confidentiality of the information. Additionally, our manipulation of independent variables online was guided by the ethical and moral principles outlined by Jongepier and Klenk [61] and Coons and Weber [62]. Adherence to these guidelines was imperative to ensure ethical compliance, fairness, and transparency throughout the execution of this online experiment.

3.4.1. Dependent Variable

Transaction costs. Drawing from transaction cost theory, transaction cost refers to the expenses involved when buyers (customers) and sellers (retailers) engage in trade [32]. These costs encompass a variety of activities including the search for suitable trading partners, the acquisition of product information and pricing, the drafting of contracts, the actual purchasing process, and the enforcement of these contracts. In our setting, we concentrate on the supplier-side transaction costs incurred during interactions with different product recommenders.
We employed a 7-point Likert scale to assess transaction costs, specifically by asking participants to rate their intentions to return the recommended product. This approach effectively captures the actual economic costs associated with processing returns, as a lower willingness to return indicates enhanced performance in product recommendations and customer service, thereby reducing transaction costs. This method aligns closely with real-world online shopping scenarios, where customer satisfaction and intentions to return directly influence the transaction costs between consumers and retailers.

3.4.2. Independent Variable

Product recommenders. We manipulated the product recommender subjects at two levels: AI and human. In the AI condition, participants were informed that a sales manager named “AI Sales” would be assigned as their dedicated sales manager, providing product recommendations throughout the entire process. In the human condition, the sales manager was referred to as “Jonny Zhang”. This manipulation aimed to isolate the effects of the recommender’s type on participants’ decision-making and satisfaction, allowing for a direct comparison between AI-driven and human-driven product recommendations in a controlled setting.
Self-disclosure levels. Self-disclosure refers to the communication of private information to another [19]. This includes any information that refers to the self, such as personal dispositions, events in the past, or current or future plans of action [17,19].
Following the methodologies of Koohikamali et al. [63] and Li-Barber [64], the self-disclosure level (high versus low) was operationalized by manipulating the personal and professional information shared by the sales managers. In the high self-disclosure condition, the sales manager provided detailed descriptions of their professional achievements, personal skills, and successful client interactions, including statistics such as the number of clients successfully assisted and high customer satisfaction rates. In contrast, the low self-disclosure condition involved the sales manager sharing only basic information about their professional background, such as their job title and place of employment, without divulging any detailed professional achievements or personal skills. By comparing these two conditions, we aimed to determine the moderating effect of self-disclosure level (i.e., whether detailed personal and professional information is disclosed) on the relationship between product recommendation subjects and transaction cost.

3.4.3. Mediators

Emotional Support: Emotional support refers to the provision of care, concern, empathy, love, and trust [65]. In line with the methodologies of Kessler et al. [66] and Lakey and Cassady [67], we utilized a 7-point scale to assess the extent to which participants believed their sales manager provided emotional support. In the post-experiment questionnaire, participants responded to the question, “To what extent do you believe your sales manager can provide you with emotional support?” The scale ranged from 1 (extremely unlikely) to 7 (extremely likely), enabling us to gauge the perceived emotional support under different experimental conditions.

3.5. Mathematical Model

Transaction cost = β0 + β1 Product recommenders + β2 Self-disclosure Level + β3 (Product recommenders × Self-disclosure Level) + ϵ.
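As an illustration, the model above can be estimated as an ordinary least squares regression with dummy coding of the two manipulated factors. The sketch below uses Python’s statsmodels; the data file and column names (return_intention, recommender, disclosure) are hypothetical placeholders, since the study itself was analyzed in SPSS.

```python
# Sketch of the 2 x 2 model: Transaction cost = b0 + b1*Recommender
# + b2*Disclosure + b3*(Recommender x Disclosure) + error.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one row per participant.
df = pd.read_csv("experiment_data.csv")  # columns: return_intention,
                                         # recommender ("AI"/"Human"),
                                         # disclosure ("High"/"Low")

model = smf.ols(
    "return_intention ~ C(recommender) * C(disclosure)", data=df
).fit()
print(model.summary())         # b0 ... b3 of the equation above
print(anova_lm(model, typ=2))  # the corresponding two-way ANOVA table
```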

3.6. Ethical Consideration

All experiments reported in this paper underwent review and approval by the relevant Institutional Review Boards (IRBs). The data collection process adhered to established legal standards and ethical guidelines, including the Declaration of Helsinki, the International Ethical Guidelines for Health-Related Research Involving Humans, and the General Data Protection Regulation (GDPR). The experiments posed no foreseeable risks or hazards to participants.

4. Results

4.1. Manipulation Check

The first manipulation check question asked participants to identify the name of the product recommender. Of the 78 participants, 72 (92.31 percent) correctly responded when asked whether the product recommender’s name was Jonny Zhang or AI SALES. The second manipulation check question assessed whether the product recommender proactively demonstrated their past outstanding performance. Out of 78 participants, 11 (14.10 percent) failed to correctly identify the levels of self-disclosure of their sales managers. These results indicate that participants generally comprehended the manipulations. After eliminating 14 participants who failed any manipulation checks, 64 usable responses remained. Additionally, we verified the validity of the random assignment using demographic data (gender, age, educational background) collected during the post-experiment questionnaire. There were no significant differences between the four treatment groups in terms of participants’ demographic data. Consequently, we could confirm that the random assignment was valid, allowing for further data analysis.

4.2. Hypothesis Tests

4.2.1. Test of H1

We tested H1 using a one-way ANOVA model with transaction cost as the dependent variable and product recommender subjects as the independent variable. Table 3, Panel A presents the descriptive statistics for transaction costs, including sample sizes, means, standard deviations, standard errors, and the minimum and maximum values for each treatment group. Table 3, Panel B reports the results of the ANOVA, which tested whether there are significant differences in transaction costs between the AI and human recommender conditions. H1 posits that the transaction cost is higher when a product recommendation comes from an AI sales manager than when it comes from a human counterpart. Consistent with H1, we found that, on average, the transaction cost in response to human sales managers was 3.410, lower than that in response to AI sales managers (4.090), yielding a 0.68-point difference across conditions. This difference is marginally statistically significant and consistent with the directional prediction in Hypothesis 1 (F = 2.035, one-tailed p = 0.080). That is, on average, the transaction cost in the human condition was 16.63% lower than in the AI condition and 9.07% lower than the overall sample mean of 3.750. In summary, H1 is supported.
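For illustration, a directional two-group comparison of this kind can be run as follows; this sketch reuses the hypothetical df from the Section 3.5 sketch and is not the SPSS procedure used in the study.

```python
# One-way ANOVA on transaction cost (return intention) by recommender type.
from scipy import stats

human = df.loc[df["recommender"] == "Human", "return_intention"]
ai = df.loc[df["recommender"] == "AI", "return_intention"]

f_stat, p_two_tailed = stats.f_oneway(human, ai)
# With two groups, the one-way ANOVA is equivalent to a t-test, so a
# directional (one-tailed) p-value is half the two-tailed value when the
# means fall in the predicted direction.
print(f_stat, p_two_tailed / 2)
```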

4.2.2. Test of H2

We used Hayes’s [68] Process Model 4 with 5000 bootstrap samples to test the mediation models. The Process model is a path analysis-based computation tool that can test combinations of direct, indirect, and moderating effects [68] and has been used in several accounting studies [69,70]. Figure 2 shows the proposed conceptual models, and Table 4 presents the coefficients and significance of the direct and indirect paths, as well as the total effect.
To test the mediating effect of emotional support predicted in H2, we performed a path analysis. We coded the AI agent as 0 and the human agent as 1. Figure 2 represents the path model with path coefficients and their significance levels. The results presented in Table 4 show that human managers elicited higher emotional support than AI managers (0.750, p = 0.093), and the indirect effect through emotional support was −0.353 (95% CI from −0.7857 to −0.0151), indicating that emotional support mediates the relationship between product recommender subjects and transaction cost. Overall, the indirect path through emotional support in the Process model is consistent with our hypothesis, and thus, H2 is supported.
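To make the mediation logic concrete, the sketch below implements a percentile-bootstrap estimate of the indirect effect, the computation that PROCESS Model 4 performs, again using the hypothetical df introduced in Section 3.5 plus an assumed emotional_support column; it is an illustration, not the authors’ SPSS macro.

```python
# Percentile-bootstrap test of the indirect effect of recommender type on
# transaction cost via emotional support (PROCESS Model 4 logic).
import numpy as np
import statsmodels.formula.api as smf

df["human"] = (df["recommender"] == "Human").astype(int)  # AI = 0, Human = 1

def indirect_effect(data):
    # a-path: recommender type -> emotional support
    a = smf.ols("emotional_support ~ human", data=data).fit().params["human"]
    # b-path: emotional support -> return intention, controlling for recommender
    b = smf.ols("return_intention ~ emotional_support + human",
                data=data).fit().params["emotional_support"]
    return a * b

rng = np.random.default_rng(0)
boot = np.array([
    indirect_effect(df.sample(frac=1, replace=True, random_state=rng))
    for _ in range(5000)
])
print(indirect_effect(df), np.percentile(boot, [2.5, 97.5]))
# A 95% CI that excludes zero indicates a significant indirect effect.
```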

4.2.3. Test of H3

Panel B of Table 5 shows a significant interaction between product recommender subjects and self-disclosure level (F(1, 60) = 4.108, one-tailed p = 0.024). To further evaluate this interaction, we also examined the simple effects of product recommender subjects. As reported in Table 5, Panel C, the simple effect of product recommender subjects was significant in the low self-disclosure condition (F(1, 60) = 6.323, one-tailed p = 0.008). Although the simple effect of product recommender subjects in the high self-disclosure condition was not statistically significant (F(1, 60) = 0.080, one-tailed p = 0.389), the overall pattern of means depicted in Figure 3 is consistent with our expectations. In contexts of high self-disclosure, both humans and AI demonstrate substantial understanding and analytical capabilities, prompting users to form high expectations regardless of whether they are interacting with a person or an AI system. However, when product recommendations fail to meet these expectations, the resulting disappointment is substantial because of the significant cognitive resources invested and the high expectations formed. That is, whether the recommender is human or AI, consumer return intentions are primarily driven by the negative emotions caused by the discrepancies between expectations and actual delivery. Thus, the transaction costs do not differ between AI and human recommenders.
These results suggest that the difference in transaction cost between the AI and human sales managers depended on the level of self-disclosure. As noted above, the mean transaction cost was 0.68 higher in the AI sales manager condition than in the human sales manager condition. This mean difference increased to 1.67 when the self-disclosure level was low (4.00 vs. 2.33) and decreased to 0.17 when the self-disclosure level was high (4.35 vs. 4.18), consistent with the predicted interaction effect. Given that the product recommended by the human and AI sales managers was identical, these differences are substantial and suggest that AI labels can meaningfully influence transaction costs within the product recommendation task. Overall, these results indicate that low self-disclosure amplifies the cost-reducing effect of a human product recommender on transaction costs.
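The cell means and simple effects reported above can be illustrated as follows; this sketch again uses the hypothetical df and approximates the simple-effect tests with separate one-way ANOVAs per disclosure level, whereas the paper’s tests use the pooled error term from the full two-way ANOVA.

```python
# Cell means and approximate simple effects of recommender type within each
# self-disclosure level.
from scipy import stats

print(df.groupby(["disclosure", "recommender"])["return_intention"].mean())

for level in ["Low", "High"]:
    sub = df[df["disclosure"] == level]
    f, p = stats.f_oneway(
        sub.loc[sub["recommender"] == "Human", "return_intention"],
        sub.loc[sub["recommender"] == "AI", "return_intention"],
    )
    print(level, f, p / 2)  # directional test, analogous to Table 5, Panel C
```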

4.2.4. The Moderated Mediation Analysis

To shed light on the theoretical arguments underlying our hypotheses, we drew on social exchange theory, which suggests that the willingness to return (i.e., transaction cost) arising from different product recommender subjects might be attributable to emotional support. We examined the mediating mechanism for the joint effect of product recommender subjects and self-disclosure levels on transaction costs. Specifically, we examined whether one theorized component of the product recommender effect, emotional support, is the mechanism through which transaction cost is lower given a low level of self-disclosure. In the product recommendation setting, when consumers believe that the sales manager can provide emotional support, they feel more satisfied and valued. This emotional support fosters trust and reliance in the purchase, which diminishes customer dissatisfaction and consequently reduces the likelihood of product returns [71]. Accordingly, we expected the effect of product recommender subjects on transaction cost to operate through consumers’ feelings of emotional support. Additionally, we theorized that the transaction cost is lower when the product is recommended by human sales managers (versus AI sales managers) in the low self-disclosure condition, and we further examined whether and how this indirect effect of emotional support differs across levels of self-disclosure (i.e., moderated mediation).
Figure 4 presents our proposed conceptual model, while Table 6 reports the coefficients and significance of both the direct and indirect paths of the model in more detail. First, we examine the direct paths that correspond to the proposed hypotheses and our earlier ANOVA results. Table 6, Panel B shows that there is a significant positive path from product recommender subjects to emotional support (Coef = 2.0000, p < 0.01), indicating that the human (compared with AI) sales managers were more likely to provide emotional support to customers. Consistent with the interaction effects predicted by H3, Table 6, Panel B also shows a significant direct moderation effect on emotional support (Coef = −2.3529, p < 0.01) and a negative relationship between emotional support and transaction cost (Coef = −0.4702, p < 0.01). Overall, the direct paths in the Process model are consistent with our earlier analysis.
Next, we considered the indirect paths. Table 6, Panel C shows a conditional indirect effect; that is, the mediation path from product recommender subject via emotional support to transaction cost is conditional on the self-disclosure level. Specifically, the mediation effect was significant only when the self-disclosure level was low (Coef = −0.9404, confidence interval: −1.7858 to −0.3401) and not when it was high (Coef = 0.1660, confidence interval: −0.4675 to 0.7584, which includes zero, indicating a nonsignificant effect). Collectively, the links in this model indicate that low self-disclosure from a human sales manager, rather than from an AI manager, results in higher emotional support, which in turn leads to a lower willingness to return. Overall, the pattern of these conditional indirect effects suggests that the interaction effect predicted in H3 between product recommender subjects and self-disclosure levels on transaction cost in the product recommendation task operates both directly and indirectly through consumers’ emotional support.
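The conditional indirect effects can be illustrated by running the same bootstrap within each self-disclosure condition, mirroring the PROCESS-style moderated-mediation analysis described above; this sketch reuses the hypothetical df, the indirect_effect helper, and the rng from the Section 4.2.2 sketch.

```python
# Conditional indirect effects: bootstrap the indirect effect separately in
# the low- and high-disclosure cells.
import numpy as np

for level in ["Low", "High"]:
    sub = df[df["disclosure"] == level]
    boot = np.array([
        indirect_effect(sub.sample(frac=1, replace=True, random_state=rng))
        for _ in range(5000)
    ])
    print(level, indirect_effect(sub), np.percentile(boot, [2.5, 97.5]))
# Mediation is supported only where the bootstrap CI excludes zero
# (expected here in the low-disclosure condition).
```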

5. Discussion

Firms are increasingly implementing AI systems for product recommendations, favoring them over human sales specialists due to their lower cost and higher availability [72,73]. However, AI systems are perceived to possess moderate levels of agency and low experience, which may contribute to negative attitudes among users [9,74,75]. To mitigate these negative perceptions, companies often disclose their AI’s data analysis capabilities and past performance to build trust and further enhance customer experiences. Despite the advantages of using AI to provide product recommendations, such as enhanced efficiency and scalability [1], whether and how to best design and deploy an AI product recommendation system remains an open question. Prior studies have primarily examined how the use of machines facilitates human self-disclosure [13,23,24,25], and little research explores how self-disclosure by machines affects customers’ psychological and behavioral responses. Accordingly, this paper explores how the product recommender (i.e., AI versus human) influences consumers’ perceived emotional support and return behaviors (i.e., transaction costs), and examines the moderating effect of self-disclosure levels on these relationships.
In this study, we presented theory and experimental evidence consistent with the idea that the product recommender and the level of self-disclosure influence both customers’ psychological cognition and their behavioral responses. Specifically, we conducted a 2 × 2 between-subjects experiment, manipulating the product recommender (AI versus human) and self-disclosure level (high versus low) to empirically test their influence on emotional support and transaction cost. We predicted and found that when a product is recommended by an AI sales manager, consumers’ willingness to return is higher than when the product is recommended by a human sales manager, through decreased emotional support. Furthermore, under a low self-disclosure level, the presence of an AI sales manager (relative to a human) results in lower emotional support and higher transaction cost, but there is no significant difference between AI and human recommenders when the product recommendation is provided under high self-disclosure.

6. Conclusions

The research findings from this study offer significant practical implications for the deployment of artificial intelligence (AI) in product recommendation scenarios. Our results show that even when the recommender system is AI-based, affixing a human label (e.g., naming it “Jonny Zhang”) can substantially reduce transaction costs. This strategy of humanizing AI not only enhances the emotional support perceived by users but also contributes to more sustainable business practices by improving efficiency and reducing the resource drain typically associated with higher transaction costs. Moreover, our findings indicate that low self-disclosure—focusing solely on the functionality of the product recommendations rather than the recommender’s personal accolades—further amplifies the positive impact of perceived human attributes on reducing transaction costs. This highlights the importance of how information is presented; emphasizing functional aspects over personal achievements or histories can lead to more effective, cost-efficient outcomes. This approach also leverages AI’s ability to provide concise, functional information without overloading the user with unnecessary details, which can lead to more streamlined and efficient consumer decisions. Such efficiency is crucial for sustainability as it reduces the waste of resources, both in terms of the cognitive load on consumers and the operational overhead for businesses. In essence, by focusing on the functional benefits of products rather than the intricacies of the AI itself, companies can foster a more sustainable interaction model that conserves resources while still meeting consumer needs effectively. That is, utilizing AI in emotionally driven interactions effectively manages user expectations and minimizes resource waste associated with transaction costs, thereby supporting sustainable practices that positively impact environmental and social outcomes.
We conclude with the following three caveats. First, given that product recommendation AI systems are already widely implemented across various sectors, a field experiment would provide more authentic insights. By integrating real consumer interactions, a field study could capture genuine behavioral responses and decision-making processes, which are often influenced by dynamic and complex real-world factors. Second, we ran this online experiment on single-period performance; the results may vary in a multi-period context. Future research should consider longitudinal studies to observe changes in consumer perceptions (e.g., emotional support) and behaviors (e.g., return behavior) as consumers become more accustomed to AI and human interactions, since we did not account for the impacts of learning and experience [37]. Third, future research should consider the heterogeneity (e.g., cultural differences) of consumers. The algorithm aversion effect lessens as human acquaintance with algorithms increases [76], resulting in a reduction in prejudices toward AI. We encourage future research to explore these issues.

Author Contributions

Conceptualization, Y.C.; methodology, Y.T. and S.Z.; formal analysis, Y.T.; writing—original draft preparation, Y.T.; writing—review and editing, Y.T. and S.Z.; supervision, Y.C.; funding acquisition, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 72172132).

Institutional Review Board Statement

Our study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Xiamen University (Project identification code/Ethical Approval Number: XDYX202409K49).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

Due to personal privacy issues, research data are available on request from the authors. The data that support the findings of this study are available from the corresponding author, Yuhong Tu, upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Huang, M.-H.; Rust, R.T. Artificial intelligence in service. J. Serv. Res. 2018, 21, 155–172. [Google Scholar] [CrossRef]
  2. Schuetzler, R.M.; Grimes, G.M.; Scott Giboney, J. The impact of chatbot conversational skill on engagement and perceived humanness. J. Manag. Inf. Syst. 2020, 37, 875–900. [Google Scholar] [CrossRef]
  3. Bouguezzi, S.S. Milos How Does the Amazon Recommendation System Work? Available online: https://www.baeldung.com/cs/amazon-recommendation-system (accessed on 10 July 2024).
  4. Langfelder, N. Generative AI: Revolutionizing Retail through Hyper-Personalization. Available online: https://www.data-axle.com/resources/blog/generative-ai-revolutionizing-retail-through-hyper-personalization/ (accessed on 20 July 2024).
  5. Khattar, V. Famous Beauty Brands Using Chatbot Technology. Available online: https://www.skin-match.com/beauty-technology/famous-beauty-brands-using-chatbot-technology (accessed on 10 July 2024).
  6. Nussey, S. EXCLUSIVE SoftBank Shrinks Robotics Business, Stops Pepper Production-Sources. Available online: https://www.reuters.com/technology/exclusive-softbank-shrinks-robotics-business-stops-pepper-production-sources-2021-06-28/ (accessed on 10 July 2024).
  7. Hoffman, G. Anki, Jibo, and Kuri: What We Can Learn from Social Robots that Didn’t Make It. Available online: https://spectrum.ieee.org/anki-jibo-and-kuri-what-we-can-learn-from-social-robotics-failures (accessed on 10 July 2024).
  8. Kaur, D.; Uslu, S.; Rittichier, K.J.; Durresi, A. Trustworthy artificial intelligence: A review. ACM Comput. Surv. CSUR 2022, 55, 1–38. [Google Scholar] [CrossRef]
  9. Gray, H.M.; Gray, K.; Wegner, D.M. Dimensions of mind perception. Science 2007, 315, 619. [Google Scholar] [CrossRef]
  10. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Manag. Sci. 2018, 64, 1155–1170. [Google Scholar] [CrossRef]
  11. Commerford, B.P.; Dennis, S.A.; Joe, J.R.; Ulla, J.W. Man versus machine: Complex estimates and auditor reliance on artificial intelligence. J. Account. Res. 2022, 60, 171–201. [Google Scholar] [CrossRef]
  12. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  13. Lee, Y.-C.; Yamashita, N.; Huang, Y. Designing a chatbot as a mediator for promoting deep self-disclosure to a real mental health professional. Proc. ACM Hum. Comput. Interact. 2020, 4, 1–27. [Google Scholar] [CrossRef]
  14. Tsumura, T.; Yamada, S. Influence of agent’s self-disclosure on human empathy. PLoS ONE 2023, 18, e0283955. [Google Scholar] [CrossRef]
  15. Saffarizadeh, K.; Keil, M.; Boodraj, M.; Alashoor, T. “My Name is Alexa. What’s Your Name?” The Impact of Reciprocal Self-Disclosure on Post-Interaction Trust in Conversational Agents. J. Assoc. Inf. Syst. 2024, 25, 528–568. [Google Scholar] [CrossRef]
  16. Correll, S.J.; Ridgeway, C.L. Expectation states theory. In Handbook of Social Psychology; Springer: Berlin/Heidelberg, Germany, 2003; pp. 29–51. [Google Scholar]
  17. Al-Natour, S.; Benbasat, I.; Cenfetelli, R. Designing online virtual advisors to encourage customer self-disclosure: A theoretical model and an empirical test. J. Manag. Inf. Syst. 2021, 38, 798–827. [Google Scholar] [CrossRef]
  18. Antaki, C.; Barnes, R.; Leudar, I. Self-disclosure as a situated interactional practice. Br. J. Soc. Psychol. 2005, 44, 181–199. [Google Scholar] [CrossRef] [PubMed]
  19. Greene, K.; Derlega, V.J.; Mathews, A. Chapter 22: Self-disclosure in personal relationships. In The Cambridge Handbook of Personal Relationships; Cambridge University Press: Cambridge, UK, 2006; pp. 409–427. [Google Scholar]
  20. Bigras, É.; Léger, P.-M.; Sénécal, S. Recommendation agent adoption: How recommendation presentation influences employees’ perceptions, behaviors, and decision quality. Appl. Sci. 2019, 9, 4244. [Google Scholar] [CrossRef]
  21. Bouayad, L.; Padmanabhan, B.; Chari, K. Can recommender systems reduce healthcare costs? The role of time pressure and cost transparency in prescription choice. MIS Q. 2020, 44, 1859–1903. [Google Scholar] [CrossRef]
  22. Al-Natour, S.; Benbasat, I. The adoption and use of IT artifacts: A new interaction-centric model for the study of user-artifact relationships. J. Assoc. Inf. Syst. 2009, 10, 2. [Google Scholar] [CrossRef]
  23. Lee, J.; Lee, D.; Lee, J.-G. Influence of rapport and social presence with an AI psychotherapy chatbot on users’ self-disclosure. Int. J. Hum. Comput. Interact. 2024, 40, 1620–1631. [Google Scholar] [CrossRef]
  24. Meng, J.; Dai, Y. Emotional support from AI chatbots: Should a supportive partner self-disclose or not? J. Comput.-Mediat. Commun. 2021, 26, 207–222. [Google Scholar] [CrossRef]
  25. Kim, T.W.; Jiang, L.; Duhachek, A.; Lee, H.; Garvey, A. Do you mind if I ask you a personal question? How AI service agents alter consumer self-disclosure. J. Serv. Res. 2022, 25, 649–666. [Google Scholar] [CrossRef]
  26. Longoni, C.; Cian, L. When do we trust AI’s recommendations more than people’s. Harv. Bus. Rev. 2020, 23. [Google Scholar]
  27. Belanche, D.; Casaló, L.V.; Flavián, C.; Schepers, J. Service robot implementation: A theoretical framework and research agenda. Serv. Ind. J. 2020, 40, 203–225. [Google Scholar] [CrossRef]
  28. Kotler, P.; Kartajaya, H.; Setiawan, I. Marketing 6.0: The Future Is Immersive; John Wiley & Sons: Hoboken, NJ, USA, 2023. [Google Scholar]
  29. Dongbo, M.; Miniaoui, S.; Fen, L.; Althubiti, S.A.; Alsenani, T.R. Intelligent chatbot interaction system capable for sentimental analysis using hybrid machine learning algorithms. Inf. Process. Manag. 2023, 60, 103440. [Google Scholar] [CrossRef]
  30. Leo, X.; Huh, Y.E. Who gets the blame for service failures? Attribution of responsibility toward robot versus human service providers and service firms. Comput. Hum. Behav. 2020, 113, 106520. [Google Scholar] [CrossRef]
  31. You, S.; Yang, C.L.; Li, X. Algorithmic versus human advice: Does presenting prediction performance matter for algorithm appreciation? J. Manag. Inf. Syst. 2022, 39, 336–365. [Google Scholar] [CrossRef]
  32. Devaraj, S.; Fan, M.; Kohli, R. Antecedents of B2C channel satisfaction and preference: Validating e-commerce metrics. Inf. Syst. Res. 2002, 13, 316–333. [Google Scholar] [CrossRef]
  33. Filiz, I.; Judek, J.R.; Lorenz, M.; Spiwoks, M. The extent of algorithm aversion in decision-making situations with varying gravity. PLoS ONE 2023, 18, e0278751. [Google Scholar] [CrossRef] [PubMed]
  34. Kim, T.; Lee, H.; Kim, M.Y.; Kim, S.; Duhachek, A. AI increases unethical consumer behavior due to reduced anticipatory guilt. J. Acad. Mark. Sci. 2023, 51, 785–801. [Google Scholar] [CrossRef]
  35. Huo, W.; Zheng, G.; Yan, J.; Sun, L.; Han, L. Interacting with medical artificial intelligence: Integrating self-responsibility attribution, human–computer trust, and personality. Comput. Hum. Behav. 2022, 132, 107253. [Google Scholar] [CrossRef]
  36. Filieri, R.; Lin, Z.; Li, Y.; Lu, X.; Yang, X. Customer emotions in service robot encounters: A hybrid machine-human intelligence approach. J. Serv. Res. 2022, 25, 614–629. [Google Scholar] [CrossRef]
  37. Berger, B.; Adam, M.; Rühr, A.; Benlian, A. Watch me improve—Algorithm aversion and demonstrating the ability to learn. Bus. Inf. Syst. Eng. 2021, 63, 55–68. [Google Scholar] [CrossRef]
  38. Chen, R.; Sharma, S.K. Self-disclosure at social networking sites: An exploration through relational capitals. Inf. Syst. Front. 2013, 15, 269–278. [Google Scholar] [CrossRef]
  39. Lee, J.; Lee, D. User perception and self-disclosure towards an AI psychotherapy chatbot according to the anthropomorphism of its profile picture. Telemat. Inform. 2023, 85, 102052. [Google Scholar] [CrossRef]
  40. Ho, A.; Hancock, J.; Miner, A.S. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. J. Commun. 2018, 68, 712–733. [Google Scholar] [CrossRef]
  41. Schmalz, S.; Orth, U.R. Brand attachment and consumer emotional response to unethical firm behavior. Psychol. Mark. 2012, 29, 869–884. [Google Scholar] [CrossRef]
  42. Feeney, B.C.; Collins, N.L. A new look at social support: A theoretical perspective on thriving through relationships. Personal. Soc. Psychol. Rev. 2015, 19, 113–147. [Google Scholar] [CrossRef]
  43. Collins, N.L.; Feeney, B.C. Working models of attachment shape perceptions of social support: Evidence from experimental and observational studies. J. Personal. Soc. Psychol. 2004, 87, 363. [Google Scholar] [CrossRef] [PubMed]
  44. Pessoa, L. On the relationship between emotion and cognition. Nat. Rev. Neurosci. 2008, 9, 148–158. [Google Scholar] [CrossRef]
  45. Rafaeli, A.; Erez, A.; Ravid, S.; Derfler-Rozin, R.; Treister, D.E.; Scheyer, R. When customers exhibit verbal aggression, employees pay cognitive costs. J. Appl. Psychol. 2012, 97, 931. [Google Scholar] [CrossRef]
  46. Burleson, B.R. The experience and effects of emotional support: What the study of cultural and gender differences can tell us about close relationships, emotion, and interpersonal communication. Pers. Relatsh. 2003, 10, 1–23. [Google Scholar] [CrossRef]
  47. Lawler, E.J.; Thye, S.R. Social exchange theory of emotions. In Handbooks of Sociology and Social Research; Springer: Boston, MA, USA, 2006; pp. 295–320. [Google Scholar]
  48. Reeves, B.; Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People; Cambridge University Press: Cambridge, UK, 1996; Volume 10, pp. 19–36. [Google Scholar]
  49. Tong, S.; Jia, N.; Luo, X.; Fang, Z. The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strateg. Manag. J. 2021, 42, 1600–1631. [Google Scholar] [CrossRef]
  50. Ruan, Y.; Mezei, J. When do AI chatbots lead to higher customer satisfaction than human frontline employees in online shopping assistance? Considering product attribute type. J. Retail. Consum. Serv. 2022, 68, 103059. [Google Scholar] [CrossRef]
  51. Zhai, M.; Chen, Y. How do relational bonds affect user engagement in e-commerce livestreaming? The mediating role of trust. J. Retail. Consum. Serv. 2023, 71, 103239. [Google Scholar] [CrossRef]
  52. Bues, M.; Steiner, M.; Stafflage, M.; Krafft, M. How mobile in-store advertising influences purchase intention: Value drivers and mediating effects from a consumer perspective. Psychol. Mark. 2017, 34, 157–174. [Google Scholar] [CrossRef]
  53. Gao, T.T.; Rohm, A.J.; Sultan, F.; Pagani, M. Consumers un-tethered: A three-market empirical study of consumers’ mobile marketing acceptance. J. Bus. Res. 2013, 66, 2536–2544. [Google Scholar] [CrossRef]
  54. Troshani, I.; Rao Hill, S.; Sherman, C.; Arthur, D. Do we trust in AI? Role of anthropomorphism and intelligence. J. Comput. Inf. Syst. 2021, 61, 481–491. [Google Scholar] [CrossRef]
  55. Grewal, D.; Guha, A.; Satornino, C.B.; Schweiger, E.B. Artificial intelligence: The light and the darkness. J. Bus. Res. 2021, 136, 229–236. [Google Scholar] [CrossRef]
  56. Agarwal, U.A.; Narayana, S.A. Impact of relational communication on buyer–supplier relationship satisfaction: Role of trust and commitment. Benchmarking Int. J. 2020, 27, 2459–2496. [Google Scholar] [CrossRef]
  57. Nam, K.; Baker, J.; Ahmad, N.; Goo, J. Dissatisfaction, disconfirmation, and distrust: An empirical examination of value co-destruction through negative electronic word-of-mouth (eWOM). Inf. Syst. Front. 2020, 22, 113–130. [Google Scholar]
  58. Cohen, J. A power primer. Psychol. Bull. 1992, 112, 155–159. [Google Scholar] [CrossRef]
  59. Wang, C.; Chen, J.; Xie, P. Observation or interaction? Impact mechanisms of gig platform monitoring on gig workers’ cognitive work engagement. Int. J. Inf. Manag. 2022, 67, 102548. [Google Scholar] [CrossRef]
  60. Li, H.; Xie, X.; Zou, Y.; Wang, T. “Take action, buddy!”: Self–other differences in passive risk-taking for health and safety. J. Exp. Soc. Psychol. 2024, 110, 104542. [Google Scholar] [CrossRef]
  61. Jongepier, F.; Klenk, M. The Philosophy of Online Manipulation; Taylor & Francis: Abingdon, UK, 2022. [Google Scholar]
  62. Coons, C.; Weber, M. Manipulation: Theory and Practice; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  63. Koohikamali, M.; Peak, D.A.; Prybutok, V.R. Beyond self-disclosure: Disclosure of information about others in social network sites. Comput. Hum. Behav. 2017, 69, 29–42. [Google Scholar] [CrossRef]
  64. Li-Barber, K.T. Self-disclosure and student satisfaction with Facebook. Comput. Hum. Behav. 2012, 28, 624–630. [Google Scholar]
  65. Kort-Butler, L. The Encyclopedia of Juvenile Delinquency and Justice; Wiley-Blackwell: Oxford, UK, 2017; pp. 1–4. [Google Scholar]
  66. Kessler, R.C.; Kendler, K.S.; Heath, A.; Neale, M.C.; Eaves, L.J. Kessler Perceived Social Support Scale. J. Personal. Soc. Psychol. 1992. [Google Scholar] [CrossRef]
  67. Lakey, B.; Cassady, P.B. Cognitive processes in perceived social support. J. Personal. Soc. Psychol. 1990, 59, 337. [Google Scholar] [CrossRef]
  68. Hayes, A.F. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach; The Guilford Press: London, UK; New York, NY, USA, 2013. [Google Scholar]
  69. Bobek, D.D.; Hageman, A.M.; Radtke, R.R. The effects of professional role, decision context, and gender on the ethical decision making of public accounting professionals. Behav. Res. Account. 2015, 27, 55–78. [Google Scholar] [CrossRef]
  70. Commerford, B.P.; Hatfield, R.C.; Houston, R.W. The effect of real earnings management on auditor scrutiny of management’s other financial reporting decisions. Account. Rev. 2018, 93, 145–163. [Google Scholar] [CrossRef]
  71. Kim, M.; Sudhir, K.; Uetake, K.; Canales, R. When salespeople manage customer relationships: Multidimensional incentives and private information. J. Mark. Res. 2019, 56, 749–766. [Google Scholar] [CrossRef]
  72. Daugherty, P.R.; Wilson, H.J. Human+ Machine: Reimagining Work in the Age of AI; Harvard Business Press: Boston, MA, USA, 2018. [Google Scholar]
  73. Guha, A.; Grewal, D.; Kopalle, P.K.; Haenlein, M.; Schneider, M.J.; Jung, H.; Moustafa, R.; Hegde, D.R.; Hawkins, G. How artificial intelligence will affect the future of retailing. J. Retail. 2021, 97, 28–41. [Google Scholar] [CrossRef]
  74. Bigman, Y.E.; Gray, K. People are averse to machines making moral decisions. Cognition 2018, 181, 21–34. [Google Scholar] [CrossRef]
  75. Gray, K.; Wegner, D.M. Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition 2012, 125, 125–130. [Google Scholar] [CrossRef]
  76. Burton, J.W.; Stein, M.K.; Jensen, T.B. A systematic review of algorithm aversion in augmented decision making. J. Behav. Decis. Mak. 2020, 33, 220–239. [Google Scholar] [CrossRef]
Figure 1. Experimental procedure.
Figure 2. Conceptual model of the mediation analysis. Subjects = AI [code = 0] vs. Human [code = 1] (manipulated variable); Emotional Support = measured on a 7-point Likert scale with endpoints of 1 (Very unsupportive) and 7 (Very supportive); Transaction Cost = measured on a 7-point Likert scale with endpoints of 1 (Very unlikely to return) and 7 (Very likely to return).
Figure 3. Observed effects of product recommender subjects and self-disclosure levels on transaction cost. This figure plots transaction cost after receiving product recommendations from AI and human sales managers. Subjects = AI [code = 0] vs. Human [code = 1] (manipulated variable); Self-Disclosure level = Low [code = 0] vs. High [code = 1] (manipulated variable).
Figure 4. Conceptual model of the moderated mediation analysis. Subjects = AI [code = 0] vs. Human [code = 1] (manipulated variable); Self-Disclosure level = Low [code = 0] vs. High [code = 1] (manipulated variable); Emotional Support = measured on a 7-point Likert scale with endpoints of 1 (Very unsupportive) and 7 (Very supportive); Transaction Cost = measured on a 7-point Likert scale with endpoints of 1 (Very unlikely to return) and 7 (Very likely to return).
Table 1. The effect of AI on human behavior.
Author | Method | Main Findings
Boudorf et al. [3] | Randomized controlled trial | When using digital advice, consumers were willing to pay 14% less for popular brand plans and 37% less for plans with higher star ratings, compared to having only basic product information.
Kim et al. [34] | Experiment | Consumers are more likely to engage in unethical behaviors when interacting with AI agents, due to reduced anticipatory feelings of guilt.
Leo and Huh [30] | Experiment | When service fails, people attribute less responsibility toward a service provider if it is a robot rather than a human. People attribute more blame toward a service firm when a robot delivers a failed service than when a human does.
Huo et al. [35] | Survey | Patients’ self-responsibility attribution is positively related to human–computer trust (HCT) and sequentially enhances the acceptance of medical AI for independent diagnosis and treatment.
You et al. [31] | Experiment | Individuals follow algorithmic advice more than identical human advice due to higher trust in algorithms, and this trust remains unchanged even when they are informed of the algorithm’s prediction errors.
Filieri et al. [36] | Machine learning | The majority of customer interactions with service robots were positive, and robots that moved triggered more emotional responses than stationary ones.
Berger et al. [37] | Experiment | For an objective and non-personal decision task, human decision makers exhibit algorithm aversion if they are familiar with the advisor’s performance and the advisor errs.
Commerford et al. [11] | Experiment | Auditors proposed smaller adjustments to management’s complex estimates when receiving contradictory evidence from an AI system rather than a human specialist. This effect was particularly pronounced when the estimates were based on relatively objective inputs.
Filiz et al. [33] | Experiment | Algorithm aversion occurs more frequently as the seriousness of the decision’s consequences increases.
Table 2. Numbers of participants in each treatment group.
Self-Disclosure Level | Subjects: AI | Subjects: Human
Low | Group A (N = 18) | Group C (N = 19)
High | Group B (N = 22) | Group D (N = 19)
Table 3. The effect of product recommender subjects on transaction costs.
Dependent Variable: Transaction Cost
Panel A: Descriptives
Subject | N | Mean | Std. Dev. | Std. Error | Min | Max
AI | 32 | 4.090 | 1.940 | 0.343 | 1 | 7
Human | 32 | 3.410 | 1.915 | 0.339 | 1 | 7
Total | 64 | 3.750 | 1.944 | 0.243 | 1 | 7
Panel B: One-Way ANOVA (one-tailed)
Source | SS | df | MS | F | Sig.
Between Groups | 7.563 | 1 | 7.563 | 2.035 | 0.080
Within Groups | 230.438 | 62 | 3.717 | |
Total | 238 | 63 | | |
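For readers who wish to reproduce this kind of between-subjects comparison, the following is a minimal sketch (not the authors’ analysis code) of a one-way ANOVA on transaction-cost ratings. The arrays ai_ratings and human_ratings are hypothetical placeholders for the per-participant 7-point ratings.

```python
# Illustrative sketch only: one-way ANOVA comparing transaction-cost ratings
# between recommender conditions, with a directional (one-tailed) p-value.
import numpy as np
from scipy import stats

# Hypothetical per-participant ratings (1-7 Likert); replace with the real data.
rng = np.random.default_rng(0)
ai_ratings = rng.integers(1, 8, size=32)      # AI recommender condition
human_ratings = rng.integers(1, 8, size=32)   # Human recommender condition

f_stat, p_two_sided = stats.f_oneway(ai_ratings, human_ratings)

# With two groups, F = t^2; a directional hypothesis (Human < AI) is evaluated
# by halving the two-sided p when the difference falls in the predicted direction.
p_one_tailed = p_two_sided / 2 if human_ratings.mean() < ai_ratings.mean() else 1 - p_two_sided / 2
print(f"F(1, 62) = {f_stat:.3f}, one-tailed p = {p_one_tailed:.3f}")
```

Halving the two-sided p-value for the predicted direction appears to be the convention behind the one-tailed significance value reported in Panel B.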
Table 4. Mediating role of emotional support.
 | Effect | t | p | LLCI | ULCI
Total effect | −0.6875 | −1.4264 | 0.1588 | −1.4923 | 0.1173
Direct effect | −0.3348 | −0.7457 | 0.4587 | −1.0849 | 0.4152
Indirect effect | −0.3527 | – | – | −0.7857 | −0.0151
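The total, direct, and indirect effects above follow the usual simple-mediation decomposition (see Hayes [68]). As an illustration only, the sketch below bootstraps the indirect effect of recommender subject on transaction cost through emotional support; the DataFrame df and its column names (subject, emotional_support, transaction_cost) are hypothetical placeholders rather than the authors’ actual data or code.

```python
# Minimal sketch of a simple-mediation percentile bootstrap (cf. Hayes [68]).
# Assumes a hypothetical DataFrame with columns: subject (0 = AI, 1 = Human),
# emotional_support, transaction_cost.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(data: pd.DataFrame) -> float:
    # Path a: recommender subject -> emotional support
    a = smf.ols("emotional_support ~ subject", data).fit().params["subject"]
    # Path b: emotional support -> transaction cost, controlling for subject
    b = smf.ols("transaction_cost ~ subject + emotional_support",
                data).fit().params["emotional_support"]
    return a * b

def bootstrap_ci(data: pd.DataFrame, n_boot: int = 5000, seed: int = 0) -> np.ndarray:
    # Resample participants with replacement and recompute the indirect effect.
    rng = np.random.default_rng(seed)
    draws = [
        indirect_effect(data.sample(frac=1.0, replace=True,
                                    random_state=int(rng.integers(2**31 - 1))))
        for _ in range(n_boot)
    ]
    return np.percentile(draws, [2.5, 97.5])  # percentile confidence interval
```

A call such as bootstrap_ci(df) would yield a percentile interval analogous to the bootstrap interval of [−0.7857, −0.0151] reported for the indirect effect.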
Table 5. The effect of product recommender subjects and self-disclosure levels on transaction costs.
Panel A: Descriptive statistics (mean, standard deviation, cell n)
Subjects | Self-Disclosure: Low | Self-Disclosure: High | Overall
AI | 4.00 (SD = 1.604, n = 15) | 4.18 (SD = 2.243, n = 17) | 4.09 (SD = 1.94, n = 32) (cells A, C)
Human | 2.33 (SD = 0.976, n = 15) | 4.35 (SD = 2.06, n = 17) | 3.41 (SD = 1.915, n = 32) (cells B, D)
Overall | 3.17 (SD = 1.555, n = 30) | 4.26 (SD = 2.122, n = 34) | 3.75 (SD = 1.944, n = 64)
Panel B: Conventional ANOVA
Source | Sum of Squares | df | F | p
Subjects | 8.848 | 1 | 2.685 | 0.053
Self-Disclosure | 19.216 | 1 | 5.832 | 0.010
Subjects × Self-Disclosure | 13.536 | 1 | 4.108 | 0.024
Error | 197.686 | 60 | |
Panel C: Simple effects
Contrast | df | MS | F | p
Low: AI versus Human | 1, 60 | 20.833 | 6.323 | 0.008
High: AI versus Human | 1, 60 | 0.265 | 0.080 | 0.389
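Panels B and C correspond to a standard 2 × 2 between-subjects ANOVA followed by simple-effects tests against the pooled error term. The sketch below shows one way to compute these in Python; it is illustrative only, and the DataFrame df with columns subject, disclosure, and transaction_cost is a hypothetical stand-in for the experimental data.

```python
# Illustrative sketch: 2 x 2 between-subjects ANOVA with simple-effects follow-up.
import pandas as pd
import scipy.stats as st
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def factorial_analysis(df: pd.DataFrame) -> None:
    # Full factorial model: recommender subject (0 = AI, 1 = Human)
    # x self-disclosure level (0 = Low, 1 = High).
    model = smf.ols("transaction_cost ~ C(subject) * C(disclosure)", df).fit()
    print(anova_lm(model, typ=2))  # main effects and interaction, as in Panel B

    # Simple effect of recommender subject within each disclosure level,
    # tested against the pooled error term of the full model, as in Panel C.
    mse, df_error = model.mse_resid, model.df_resid
    for level, label in [(0, "Low"), (1, "High")]:
        cells = df[df["disclosure"] == level].groupby("subject")["transaction_cost"]
        means, counts = cells.mean(), cells.size()
        diff = means[1] - means[0]                        # Human minus AI cell mean
        ss = diff ** 2 / (1 / counts[0] + 1 / counts[1])  # contrast sum of squares
        f_val = ss / mse
        p_val = st.f.sf(f_val, 1, df_error)               # non-directional p-value
        print(f"{label}: F(1, {int(df_error)}) = {f_val:.3f}, p = {p_val:.3f}")
```

Note that the p-values produced by this sketch are non-directional, whereas Panel C appears to report directional (one-tailed) values for the predicted contrast at the low self-disclosure level.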
Table 6. The effect of product recommender subjects and self-disclosure levels on emotional support.
Panel A: Descriptive statistics (mean, standard deviation, cell n)
Subjects | Self-Disclosure: Low | Self-Disclosure: High | Overall
AI | 2.33 (SD = 1.291, n = 15) | 3.29 (SD = 1.896, n = 17) | 2.84 (SD = 1.687, n = 32) (cells A, C)
Human | 4.33 (SD = 1.447, n = 15) | 2.94 (SD = 1.919, n = 17) | 3.59 (SD = 1.829, n = 32) (cells B, D)
Overall | 3.33 (SD = 1.688, n = 30) | 3.12 (SD = 1.887, n = 34) | 3.22 (SD = 1.786, n = 64)
Panel B: Direct paths
Path | Coef | SE | t | p
Subjects → Emotional Support | 2.0000 | 0.6131 | 3.2622 | 0.0018
Subjects × Self-Disclosure → Emotional Support | −2.3529 | 0.8411 | −2.7974 | 0.0069
Emotional Support → Transaction Cost | −0.4702 | 0.1267 | −3.7107 | 0.0004
Panel C: Conditional indirect path by self-disclosure level (Subjects → Emotional Support → Transaction Cost)
Assigned Self-Disclosure Level | Effect | BootSE | BootLLCI | BootULCI
Low | −0.9404 | 0.3702 | −1.7858 | −0.3401
High | 0.1660 | 0.3081 | −0.4675 | 0.7584
Pairwise Contrast | 1.1064 | 0.4881 | 0.2662 | 2.1809
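Panels B and C together describe a first-stage moderated mediation: the recommender subject affects emotional support, this effect is moderated by self-disclosure level, and emotional support in turn predicts transaction cost, which yields the conditional indirect effects and the pairwise contrast reported above. The sketch below illustrates this logic in the spirit of Hayes’s conditional process framework [68]; the DataFrame df and its column names are hypothetical, and the code is not the authors’ implementation.

```python
# Illustrative sketch of first-stage moderated mediation (cf. Hayes [68]).
# Assumes a hypothetical DataFrame with columns: subject (0 = AI, 1 = Human),
# disclosure (0 = Low, 1 = High), emotional_support, transaction_cost.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def conditional_indirect_effects(data: pd.DataFrame) -> dict:
    # Mediator model: emotional support on subject, disclosure, and their interaction
    m = smf.ols("emotional_support ~ subject * disclosure", data).fit()
    # Outcome model: transaction cost on subject and the mediator
    y = smf.ols("transaction_cost ~ subject + emotional_support", data).fit()
    a1, a3 = m.params["subject"], m.params["subject:disclosure"]
    b = y.params["emotional_support"]
    return {
        "low": a1 * b,            # indirect effect when disclosure = 0
        "high": (a1 + a3) * b,    # indirect effect when disclosure = 1
        "contrast": a3 * b,       # difference between the two conditional effects
    }

def bootstrap(data: pd.DataFrame, n_boot: int = 5000, seed: int = 0) -> pd.DataFrame:
    # Percentile bootstrap for each conditional indirect effect and the contrast.
    rng = np.random.default_rng(seed)
    draws = [
        conditional_indirect_effects(
            data.sample(frac=1.0, replace=True,
                        random_state=int(rng.integers(2**31 - 1)))
        )
        for _ in range(n_boot)
    ]
    return pd.DataFrame(draws).quantile([0.025, 0.975])
```

Multiplying the Panel B coefficients reproduces the point estimates in Panel C: 2.0000 × (−0.4702) ≈ −0.9404 for low disclosure and (2.0000 − 2.3529) × (−0.4702) ≈ 0.1660 for high disclosure.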
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
