Article

Artificial Intelligence in Tourism Through Chatbot Support in the Booking Process—An Experimental Investigation

1 Department of Quantitative Methods, Faculty of Economics and Law, Hochschule Pforzheim, 75175 Pforzheim, Germany
2 Department of International Business, Faculty of Economics and Law, Hochschule Pforzheim, 75175 Pforzheim, Germany
* Author to whom correspondence should be addressed.
Tour. Hosp. 2025, 6(1), 36; https://doi.org/10.3390/tourhosp6010036
Submission received: 24 January 2025 / Revised: 12 February 2025 / Accepted: 19 February 2025 / Published: 21 February 2025

Abstract

AI-controlled chatbots have been used in travel services for some time, ranging from simple hotel reservations to personalized travel recommendations. However, the acceptance of chatbots compared to human interlocutors has not yet been extensively studied experimentally in the tourism context. In this experimental, randomized, vignette-based, preregistered 2 (agent: AI chatbot/human counterpart) × 3 (situation: positive/neutral/negative) between-subjects design, we hypothesized that booking intention is reduced for chatbots compared to human agents and in situations where the booking can only be made under more negative than the original conditions. Additionally, we hypothesized an interaction effect between agent and situation, presuming that the decrease in booking intention in negative situations would be less strong for human agents than for chatbots. Structural equation modelling of the data indicates strong support for the Technology Acceptance Model in the booking context. As presumed, the booking intention was lower in the negative situation and borderline lower for the chatbot. The interaction effect was shown descriptively in the data. Chatbots are recognized as such during the booking process and are less accepted as booking support than their human counterparts. Managers should therefore design chatbots to be as human-like as possible to avoid losing sales when outsourcing customer contact activities to AI technologies.

1. Introduction

Artificial intelligence (AI) is on the rise in all aspects of life. Whereas chatbots were initially programmed to answer domain-specific questions using word- and pattern-matching techniques, modern AI-controlled chatbots can actually keep a conversation going on a vast range of topics (Iancu & Iancu, 2023). The development of AI-controlled systems permits the outsourcing of activities formerly performed by humans to computers. Chatbots are a specific type of AI system, known for their ability to “speak” with their human counterpart. They are employed in different industries to handle inquiries and complaints, give advice, or entertain. According to Shawar and Atwell (2005), “a chatbot is a machine conversation system which interacts with human users via natural conversational language” (p. 489). Brennan (2006) defined a chatbot as “an artificial construct that is designed to converse with human beings using natural language as input and output” (p. 61). The first chatbot was ELIZA, programmed by Weizenbaum (1966) to mimic human–computer conversations. It relied on written linguistic cues and used pattern-matching techniques. Chatbots have evolved ever since and range from purely rule-based text systems to those relying on deep learning and natural language processing (e.g., ChatGPT or DeepSeek). Today’s chatbots also understand speech and give spoken answers (e.g., Siri from Apple or Alexa by Amazon) (Caldarini et al., 2022).
Given the scarcity of experienced personnel in some mature markets (e.g., Germany), cost pressure, and the 24/7 availability of chatbots (Kim & Baek, 2024), they are increasingly used to interact with customers and perform a variety of tasks. In the tourism industry, chatbots are used widely in customer support, e.g., to give recommendations on existing bookings (e.g., booking.com and Airbnb), update on flights and rebooking (e.g., Lufthansa), and inform on luggage requirements or flight details (e.g., KLM). Companies like chatbots because they respond quickly and in real time to customer messages and are cost-effective (M. Li et al., 2021). Customers easily adjust to chatbots and use them without major problems (Melián-González et al., 2021).
The possible impact of AI on industry revenues is estimated to be huge. In a report published by Statista in 2024 and based on data from IHR, McKinsey, Oxford Economics, and S&P, travel, transport, and logistics ranks fourth among the industry sectors expected to benefit from additional revenues through the use of AI (Statista, 2024c). Industry issues like the shortage of qualified labor and cost pressure, together with the increased playfulness of consumers, convince more and more companies to rely on AI to support managing their business (Kim & Baek, 2024).
Research on chatbots in the tourism industry covers a wide range of topics, from the intention to use chatbots (e.g., Melián-González et al., 2021) and acceptance of the technology (e.g., Jimenez-Barreto et al., 2021; H. Xu et al., 2024) to service, including service failure (e.g., Majeed et al., 2024), trust (e.g., Pillai & Sivathanu, 2020), and anthropomorphic chatbots that possess human-like expressions or emotions (Zhang et al., 2024; Park et al., 2024). Research also covers a wide range of applications, e.g., air travel (Jimenez-Barreto et al., 2021), travel planning (Kim et al., 2024; H. Xu et al., 2024), sustainable tourism (Majid et al., 2024), the use of chatbot applications in museums (Noh & Hong, 2021), or different booking contexts (Zhu et al., 2023a). Many authors have researched chatbots’ characteristics (e.g., Zhang et al., 2024) without comparing them to human counterparts. Studies comparing human travel agents to chatbots are less frequent (cf. Table 1). What is missing in the literature is an evaluation of how AI chatbots compare to human agents in more than two outcome situations. Existing studies center on positive vs. negative reactions without considering a neutral situation. Our research fills this gap by conducting a 2 (chatbot/human) × 3 (positive/neutral/negative outcome) experiment measuring booking intention. We use structural equation modelling to test a slightly modified version of the Technology Acceptance Model in the booking context. There is strong support for the model in the data. Implications of a continued preference for human agents over chatbots are discussed.

2. Literature Review

2.1. Chatbots in Tourism

The literature on chatbot applications in tourism has increased significantly since 2023. For our research, we centered on studies covering hotel bookings as well as interactions with online travel agencies. Most studies in this field are 2 × 2 experiments (cf. Table 1) and research chatbot characteristics or service failure/recovery situations. Experiments covering more than two outcome situations are absent from the literature, as are studies relying on the Technology Acceptance Model (cf. Table 1). For practitioners, the sales process is particularly important. They therefore need to know how chatbots compare to humans in this situation and how likely customers are to book an accommodation if they are served by a chatbot or a person. To our knowledge, this has not been researched so far (cf. Table 1).

2.2. Technology Acceptance Model (TAM)

The Technology Acceptance Model developed by Davis (1985) explains the adoption intention of new technology and has been tested in different contexts, but not for human–chatbot interaction during hotel bookings. Davis rooted his model in the Theory of Reasoned Action (Fishbein & Ajzen, 1975). This theory posits that behavior results from intention. Therefore, most studies—and we follow this approach—do not research actual behavior but only intention, which is formed through several variables: attitude toward the behavior and perceived norm. In its original form, the TAM comprises the constructs Perceived Usefulness, Perceived Ease of Use, and attitude toward use to explain the actual use of a new technology (Davis, 1985, p. 24). Attitude toward a system has been shown to predict technology adoption. External variables can be added to the model. Subsequent alterations of the model have relied on Ajzen’s Theory of Planned Behavior (Ajzen, 1991) and have included Perceived Behavioral Control in the model. In the context of chatbot adoption in tourism, to our knowledge, there are only a few articles based on the TAM (e.g., Islam et al., 2024; Pillai & Sivathanu, 2020; Zhu et al., 2023a).
Davis (1989) defined Perceived Usefulness as “the degree to which a person believes that using a particular system would enhance his or her job performance” (p. 320). Research centering on chatbot usage in tourism is just developing. According to Pillai and Sivathanu (2020), both customers and managers of travel agencies perceive chatbots to be useful. For managers, they save manpower and therefore costs; for customers, they make it easier to plan a travel itinerary or make a booking. Zhu et al. (2023a) found that, in order for chatbots to be considered useful, their control, responsiveness, and personalization are important. Consumers want to have control over their booking processes, receive prompt answers, and be treated in a personalized way. In general, numerous studies have shown that Perceived Usefulness is a good predictor of actual chatbot use, not only in the tourism sector (Atwal & Bryson, 2021; Esiyok et al., 2024; Liu et al., 2024).
Davis (1989) secondly used the variable Perceived Ease of Use to explain usage intention. Already in that paper, he stated that Perceived Ease of Use has less power to explain usage intention than Perceived Usefulness. Whereas Perceived Ease of Use has been included in the TAM and tested by several authors, also in the tourism context (e.g., Islam et al., 2024; Liu et al., 2024; Pillai & Sivathanu, 2020), its inclusion has also been criticized. Venkatesh et al. (2003) pointed out that the Perceived Ease of Use of a technology loses importance with increased experience. It is therefore only a concern for first-time or inexperienced users. Furthermore, if humans and chatbots are compared, Perceived Ease of Use is difficult to measure. In order to make chatbots appear human (and thereby comparable to human agents), visual, conversational, and identity cues are more important (Go & Sundar, 2019; Liu & Sundar, 2018). Since the current study compares people to chatbots, Perceived Ease of Use was excluded from the model.
In a later version of the TAM, Venkatesh and Davis (2000) extended the model to also include Subjective Norms. People are influenced by their surroundings and will therefore consider the opinion of others when they decide on usefulness or usage of a system. In a study by Islam et al. (2024) on chatbot usage intention for hotel concierge services, Subjective Norms were especially important to explain the relation between Perceived Usefulness and usage intention. The approval of a customer’s social environment is crucial for the usage of chatbots in the tourism context. This applies to all age groups, as Iancu and Iancu (2023) explained in a generalized setting. Elderly people consider chatbots to be useful tools if their social environment approves of them.
Perceived Usefulness influences intention either directly via attitude or indirectly via attitude and Perceived Behavioral Control. Attitude “reflects feelings of favorableness or unfavorableness toward using the technology” (Taylor & Todd, 1995, p. 561). According to Davis (1989) and others (e.g., Taylor & Todd, 1995), Attitude primarily decides whether a user will adopt or reject a chatbot. A positive attitude toward the use of chatbots has significantly influenced their use in various situations (e.g., Kwangsawad & Jattamart, 2022; Liu et al., 2024) as well as the intention to reuse them (Silva et al., 2023). However, in some situations, people might find it difficult to control their behavior because negative past experiences or additional available information impact their actions (Taylor & Todd, 1995). Within the context of chatbot usage in tourism, to our knowledge, there are no studies employing Perceived Behavioral Control as a variable. However, with regard to artificial intelligence, e.g., the acceptance of immersive technologies like virtual or augmented reality, it has been shown that Perceived Behavioral Control influences intention (Sujood & Pancy, 2024). The same holds true for tourists using ChatGPT to research travel information (Shi et al., 2024).
Uses and gratifications theory was developed to explain the consumption of specific media suited to certain individuals. Newer research also applies it to digital media such as the internet (Ruggiero, 2000). In its original argument, it states that people consume media whose content—among other things—suits their needs, entertains, and enhances social interaction (Katz et al., 1973). Niu and Mvondo (2024) applied uses and gratifications theory to define the concept of Technology Affinity as “users’ perceived importance of AI chatbots in their life” (p. 3). They found Technology Affinity to positively influence satisfaction with ChatGPT. In a different setting (vertical farming), Jürkenbeck et al. (2019) also worked with an adapted TAM and found that Technology Affinity was important to explain the Perceived Usefulness of certain types of vertical farms.
As a consequence, we propose the research model as depicted in Figure 1.
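Expressed as a system of structural equations (our own shorthand, reconstructed from the paths estimated in Section 5.3.2, with PU = Perceived Usefulness, ATT = Attitude, PBC = Perceived Behavioral Control, SN = Subjective Norm, TA = Technological Affinity, and INT = Behavioral Intention), the proposed model reads roughly as

\begin{aligned}
\mathrm{PU}  &= \gamma_{1}\,\mathrm{Agent} + \gamma_{2}\,\mathrm{Situation} + \gamma_{3}\,\mathrm{SN} + \gamma_{4}\,\mathrm{TA} + \zeta_{1},\\
\mathrm{ATT} &= \beta_{1}\,\mathrm{PU} + \zeta_{2},\\
\mathrm{PBC} &= \beta_{2}\,\mathrm{ATT} + \zeta_{3},\\
\mathrm{INT} &= \beta_{3}\,\mathrm{ATT} + \beta_{4}\,\mathrm{PBC} + \beta_{5}\,\mathrm{Agent} + \beta_{6}\,\mathrm{Situation} + \zeta_{4},
\end{aligned}

where the \zeta terms are structural residuals and Agent and Situation enter as coded experimental factors.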

3. Hypothesis Development

Consumers’ satisfaction is strongly influenced by expectations and their fulfillment. The discrepancy between expected and actual results plays a decisive role in the assessment of subjective well-being. The theory of expectation disconfirmation, which was originally developed by Oliver (1980), states that consumer satisfaction depends largely on whether the service received meets, exceeds, or disappoints their expectations. If people receive less than they originally expected, this often leads to dissatisfaction and disappointment. This theory that falling short of expectations leads to a significant reduction in customer satisfaction is supported by numerous empirical studies (Boulding et al., 1993; X. Wang et al., 2020). Customer expectations and their fulfillment are also a decisive factor for customer satisfaction in the tourism sector (Khajeh Nobar & Rostamzadeh, 2018; Shyja et al., 2023) and for the use of chatbots to make recommendations (Zhang et al., 2024).
We therefore expected:
H1: 
In situations in which the booking can only be made at more negative than the original conditions, the behavioral intention to book is reduced.
Gonçalves et al. (2024) conducted three experimental studies in the luxury tourism sector, finding that the potential use of AI reduced tourists’ usage intentions as well as their perception of luxury value. Behavioral intentions were mediated by customers’ need for differentiation. Van Esch et al. (2022) found that customers’ subjective happiness was higher in interactions with humans than with chatbots, resulting in a preference for human interaction over AI interaction. The discrimination between the service types was higher for politically conservative tourists. Also, in settings where humans teamed up with chatbots, satisfaction was higher for the mixed teams than for pure chatbot interaction (Y. Li et al., 2024). In general, different studies have shown a preference for human agents over service robots/chatbots or related technologies (e.g., Choi et al., 2021; Mende et al., 2019; Qiu et al., 2020; Y. Xu et al., 2020).
We therefore expected:
H2: 
The behavioral intention to book is reduced for chatbots vs. human agents.
Artificial intelligence supports users in a variety of ways and is becoming increasingly reliable. Many researchers have identified the reliability of a service as the most important indicator of its quality (Dhingra et al., 2020). Yun and Park (2022) showed that reliability and assurance positively impacted customer satisfaction.
The interaction between consumers and service agents—whether human agents or AI—significantly influences booking intentions, especially when the valence of the situation varies. In negative scenarios, such as receiving a less favorable room rate than anticipated, consumers may perceive AI agents as less empathetic, leading to a sharper decline in booking intentions for the AI chatbot than for human agents (Zhu et al., 2023b). This might be due to the “algorithm aversion” phenomenon, i.e., individuals preferring human judgment over algorithmic decisions, particularly when negative outcomes are involved (Jussupow et al., 2020). For complex tasks, humans are preferred over chatbots (Y. Xu et al., 2020), as customers expect humans to have greater problem-solving abilities. The same applies to different emotional states. Angry customers entering a conversation with anthropomorphic chatbots have higher service and empathy expectations (Crolic et al., 2022). Thus, if they dislike the chatbot’s proposed solution, their purchasing intention is reduced. Also, in situations of service failure, customers attribute more blame to the company and the brand if they interacted with chatbots instead of humans (Pavone et al., 2023).
We therefore hypothesized that there is an interaction effect between the situation and the agent in a way that the differences in the intention to book between the three situations are smaller for humans than for chatbots.
H3: 
The decrease in the behavioral intention to book for negative compared to neutral and positive situations is less strong for human agents than for chatbots.
In order to make our work transparent, we preregistered our design, hypotheses, main analysis, planned secondary analyses, planned sample size for a power of 80%, and exclusion criteria using the platform AsPredicted (https://aspredicted.org/, accessed on 28 October 2024). Our preregistration was submitted prior to data collection.

4. Materials and Methods

4.1. Participants

Four hundred ninety-six German participants were recruited by Resolution Research (an international market research company) and participated in the experiment. The original questionnaire was in German; items appearing in this text were translated into English. We had incorporated two control questions (“Because I read carefully, I tick 5 (2) here”) into the questionnaire; these were, however, already screened out in the recruiting process, so that all 496 participants had answered the control questions correctly. Four participants who had closed their browser while answering the questionnaire, as well as four persons who admitted that they had not participated seriously in the study, were excluded. Eighteen participants were excluded because they had already participated in the study before, as well as three people who indicated that they had been disturbed for more than three minutes while answering the questionnaire. As a result, 467 subjects remained.
Overall, 240 participants (51.4%) were female, and 227 (48.6%) were male; nobody indicated another sex. Figure 2 shows the age distribution in the sample. The largest age group was that aged 25–29, with 16.5% (n = 77).

4.2. Design

A 2 (agent: AI vs. human) × 3 (situation: positive vs. neutral vs. negative) between-subjects design was chosen. This resulted in six conditions: an AI in a positive, neutral, or negative situation, and a human agent in a positive, neutral, or negative situation. Participants were randomly assigned to one of the six conditions. In the online questionnaire software Unipark, the option to attempt a uniform distribution across groups was chosen in order to reach a balanced design.

4.3. Measures

All scales consisted of multiple items; with the exception of Attitude, they were measured on seven-point Likert-type scales, with answers ranging from 1 (I strongly disagree) to 7 (I strongly agree). For all scales, the order of the items was randomized. Attitude was measured with a semantic differential.

4.3.1. Dependent Variable–Behavioral Intention

Behavioral Intention (to make a booking) was chosen as the dependent variable. It was measured with three items (“I intend to ensure that I make the booking”, “I expect that I make the booking”, “I will ensure that I make the booking”) adapted from Hamilton et al. (2016). Behavioral Intention was calculated as the mean over the three answers. High values indicated a high intention to make the booking. Cronbach’s Alpha was 0.882.

4.3.2. Mediator and Control Variables

Perceived Behavioral Control. The four-item scale (“It is mostly up to me whether I do the booking”, “I have complete control over whether I do the booking”, “It would be easy for me to ensure that I do the booking”, “I am confident that I could ensure that I do the booking”) was adapted from Hamilton et al. (2016) and had a Cronbach’s Alpha of 0.870.
Technological Affinity. The two-item scale (“I am skeptical about new digital technology”, “The age of my mobile devices such as smartphones and notebooks is irrelevant to me”), taken from Jürkenbeck et al. (2019), had a Cronbach’s Alpha of 0.879.
Subjective Norm. The three-item scale (“My friends will approve of my booking”, “My family will approve of my booking”, “My colleagues will approve of my booking”) was adopted from Islam et al. (2024) and had a Cronbach’s Alpha of 0.875.
Perceived Usefulness. A four-item scale (“Using the booking assistant makes it easier for me to complete a booking successfully”, “The booking assistant helps me to complete the booking more quickly”, “The booking assistant makes it easier for me to make the booking exactly as I want it”, “I find the booking assistant helpful when booking”) was adapted from Venkatesh and Davis (2000). It had a Cronbach’s Alpha of 0.939.
Attitude. Three items (“For me, booking with the help of the assistant would be unfavorable/favorable; bad/good; valuable/worthless”) taken from Hamilton et al. (2016) measured the attitude. The scale had a Cronbach’s Alpha of 0.938.
The reliability coefficients (Cronbach’s α) for all scales ranged between 0.870 and 0.939, indicating good to very good internal consistency (Bühner, 2021).
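As an illustration, Cronbach’s Alpha for multi-item scales of this kind can be computed in R with the psych package. The following minimal sketch assumes a data frame d with one column per item; the item names (bi_1 to bi_3) are hypothetical placeholders, not the authors’ actual variable names.

library(psych)

# Hypothetical item columns for the three Behavioral Intention items.
bi_items <- d[, c("bi_1", "bi_2", "bi_3")]

# psych::alpha() returns raw and standardized alpha plus item-level diagnostics.
# Reverse-keyed items (e.g., in the Technological Affinity scale) can be
# handled via the 'keys' argument.
alpha_bi <- alpha(bi_items)
alpha_bi$total$raw_alpha  # on the original data, this should be close to 0.882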
Manipulation Check. To check whether the participants understood that they talked to an artificial intelligence vs. a human being, we asked them to rate their interaction partner on a battery of questions in the form of a semantic differential, i.e., emotionless–emotional, artificial–natural, not sensitive–sensitive, inhuman–human, mechanical–empathetic, monotonous–multifaceted and cold–warm.

4.4. Procedure

Participants completed the experiment in November 2024 using the software Unipark Questback. In a short introduction, the participants were told that this was a survey on the topic of vacation travel. Neither artificial intelligence nor the intention to book was mentioned. Participants gave their informed consent to participate after being informed about anonymity, voluntariness, and the possibility to stop answering at any time. We asked for part of the demographic data, i.e., age and gender, at the beginning of the study. Persons under 18 were screened out. Subsequently, participants read a vignette about themselves in a booking situation. The subjects of the artificial intelligence/positive situation group (for the human agent and/or the neutral/negative situation, see the alternative sentences in square brackets) read the following vignette:
You have decided that you want to go to the sea this vacation. You’ve already done a bit of research and picked out a room with a sea view in a hotel right on the beach. However, you still have a few questions because you are not yet very familiar with your chosen vacation destination.
You decide to enquire on the website of an online travel agency. A friendly chatbot [travel agency employee] answers. “Hello, my name is Sunny [Marie Sommer]. How can I help you?” You clarify your questions about the vacation destination and then want to book the room you have selected. The chatbot Sunny [travel agency employee Marie Sommer] replies: “I’ve just seen that I can give you a 10% discount on the room you’ve chosen. May I book it for you?” [“The room is available at the usual conditions. May I book it for you?”/“I’ve just seen that the conditions in the system have changed compared to the conditions you saw. The room rate has unfortunately increased by 10%. This is now the general price that you will also find with other providers. May I still book the room for you?”]
After a manipulation check, participants were first asked for their Intention to book and then had to answer the items for the mediator and control variables (Perceived Ease of Use, Perceived Usefulness, Attitude, Perceived Control, Subjective Norm, Technological Affinity). A few further sociodemographic variables (Nationality, Country of Residence, Marital Status, Number of Children Living in the Household, Occupation, Monthly Household Income) followed. At the end of the questionnaire, participants were asked whether they had answered the questions seriously, whether they had participated before, and whether they had been disturbed. We also asked them to formulate the perceived aim of the study in their own words. To conclude, subjects were debriefed and informed that the study was concerned with possibly different levels of acceptance of artificial intelligence vs. human employees in the tourism booking process.

5. Results

5.1. Preliminary Analyses

Manipulation Check

Table 2 shows the results of the manipulation check. In the human condition, the travel agency employee Marie Sommer of the vignette was perceived as significantly more human on all dimensions.
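The per-dimension comparisons behind Table 2 amount to simple two-sample comparisons between the agent conditions; since the exact test is not shown in this excerpt, the following is only one plausible R sketch, with human_rating as an illustrative column name for one semantic-differential dimension.

# Welch two-sample t-test comparing one rating (e.g., inhuman-human) between agents.
t.test(human_rating ~ agent, data = d)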

5.2. Descriptive Group Analysis

Table 3 shows the distribution of participants across the groups formed by the Agent and the Situation, as well as the means and standard deviations of the dependent variable Intention, in total and for the subgroup of male participants. Participants were almost equally distributed across the six resulting groups, with a slight imbalance in the proportion of male respondents between the groups.

5.3. Main Analysis

5.3.1. ANOVA

As specified in our pre-registered analysis plan, we first tested the effect of the Agent and the Situation on Intention (to book) as well as their possible interaction. The two-factor ANOVA resulted in a significant effect of the Agent, F(1,461) = 4.08, p = 0.044, η2 = 0.01, and the Situation, F(2,461) = 13.58, p < 0.001, η2 = 0.06. As expected, the Intention was lower in the negative situation, so that H1 could be confirmed. For a human counterpart, the Intention to book was higher in all situations; thus, H2 could also be confirmed. The interaction effect, F(2,461) = 0.65, p = 0.524, η2 = 0.003, was not significant.
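For reference, the pre-registered two-factor ANOVA corresponds to the following minimal R sketch; the data frame d and the variables intention, agent, and situation are illustrative placeholders rather than the authors’ actual code.

# Two-factor between-subjects ANOVA: Agent x Situation on the Intention to book.
d$agent     <- factor(d$agent, levels = c("ai", "human"))
d$situation <- factor(d$situation, levels = c("positive", "neutral", "negative"))

fit_aov <- aov(intention ~ agent * situation, data = d)
summary(fit_aov)  # F tests for both main effects and the interaction

# Eta squared is SS_effect / SS_total from the ANOVA table
# (or, e.g., effectsize::eta_squared(fit_aov)).

# Cell means underlying the descriptive interaction pattern (cf. Figure 3):
with(d, interaction.plot(situation, agent, intention,
                         xlab = "Situation", ylab = "Intention to book",
                         trace.label = "Agent"))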
Figure 3, however, shows that the descriptive reduction in Intention (to book) from the neutral to the negative situation was clearly larger for the artificial intelligence (from 4.48 to 3.65, a difference of 0.83) than for a human counterpart (from 4.70 to 4.16, a difference of 0.54). Thus, H3 was descriptively supported.

5.3.2. SEM

Additionally, we tested the total model as shown in Figure 1 with a structural equation model. SEM models were calculated using the lavaan package in R. Where indirect effects are reported, we give 95% confidence intervals calculated with the bootstrap option using 5000 resamples.
We recoded the independent variable Situation as dummy variables, defining a variable neutral situation and a variable negative situation that were one in the neutral/negative situation and zero otherwise, thus making the positive situation the reference category. The variable Agent was already dichotomous, with ‘artificial intelligence’ as the reference category. To take the interactions into account, two product variables between the Agent and the neutral and negative Situation, Inter Agent Neutral and Inter Agent Negative, were defined that were one only when the Agent was human and the situation was neutral or negative, respectively.
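In R, this coding can be written in a few lines; the variable names are again illustrative.

# Situation dummies with 'positive' as the reference category.
d$neutral  <- as.integer(d$situation == "neutral")
d$negative <- as.integer(d$situation == "negative")

# Agent: 0 = artificial intelligence (reference), 1 = human.
d$human <- as.integer(d$agent == "human")

# Product terms for the interaction effects (used in model 1 only).
d$inter_agent_neutral  <- d$human * d$neutral
d$inter_agent_negative <- d$human * d$negative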
When fitting the model in Figure 1 with a first model including all interactions (model 1), the effects of the interaction variables on Perceived Usefulness (Inter Agent Neutral: β = −0.160, p = 0.570; Inter Agent Negative: β = 0.292, p = 0.308) and on Intention (Inter Agent Neutral: β = 0.240, p = 0.327; Inter Agent Negative: β = 0.347, p = 0.164) were all insignificant. The indirect effects of Inter Agent Neutral (β = 0.127, p = 0.329) and Inter Agent Negative (β = 0.183, p = 0.167) remained insignificant as well. As for the main effects, only the negative situation had a significant direct effect on Intention (β = −0.793, p < 0.001), as well as a significant indirect effect (β = −0.243, p = 0.009). The effects of the neutral situation (β = −0.088, p = 0.644) and the Agent (β = −0.036, p = 0.859) were insignificant. The other effects were comparable to the ones for the model without interaction effects and are therefore reported in detail for model 2.
The fit of structural equation model 1 was evaluated using multiple fit indices. Chi-square was significant (χ2(277) = 674.75, p < 0.001). The Comparative Fit Index (CFI) was 0.949; the TLI was 0.942. The Root Mean Square Error of Approximation (RMSEA) was 0.055 (90% CI: [0.050, 0.061]), and the SRMR was 0.076. The fit indices (except for Chi-square) all showed a good fit, so that model 1 already fitted the data well.
Given the insignificance of the interaction terms, we additionally calculated a model without interaction terms, called model 2.
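A sketch of how model 2 could be specified in lavaan is shown below. The item names (pu1 to pu4, att1 to att3, and so on) are placeholders mirroring the scales in Section 4.3, not the authors’ actual variable names; the labeled structural paths prepare the indirect-effect definitions discussed further below.

library(lavaan)

model2 <- '
  # Measurement part (placeholder item names, cf. Section 4.3)
  PU  =~ pu1 + pu2 + pu3 + pu4
  ATT =~ att1 + att2 + att3
  PBC =~ pbc1 + pbc2 + pbc3 + pbc4
  SN  =~ sn1 + sn2 + sn3
  TA  =~ ta1 + ta2
  INT =~ int1 + int2 + int3

  # Structural part (model 2: no interaction terms)
  PU  ~ a1*human + a2*neutral + a3*negative + a4*SN + a5*TA
  ATT ~ b1*PU
  PBC ~ c1*ATT
  INT ~ d1*ATT + d2*PBC + d3*human + d4*neutral + d5*negative

  # Indirect effects through PU and ATT (directly and via PBC)
  ind_agent    := a1 * b1 * (d1 + c1 * d2)
  ind_negative := a3 * b1 * (d1 + c1 * d2)
'

fit2 <- sem(model2, data = d, se = "bootstrap", bootstrap = 5000)
fitMeasures(fit2, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))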
For model 2, the Chi-square test was significant (χ2(239) = 626.08, p < 0.001), which was expected given the large sample size (N = 467). However, alternative fit indices indicated that the model fit the data adequately well. The Comparative Fit Index (CFI) was 0.951, exceeding the threshold for an excellent fit (>0.95). Also, the TLI of 0.944 indicated a strong fit and aligned with the CFI, further supporting the model’s validity. The Root Mean Square Error of Approximation (RMSEA) was 0.059 (90% CI: [0.053, 0.065]), indicating a good model fit. The lower confidence bound was close to the threshold for an excellent fit (<0.05), suggesting that the model provides a strong approximation of the observed data. The Standardized Root Mean Square Residual (SRMR) was 0.080, exactly hitting the conventional cutoff of 0.080. Thus, the fit indices suggest the model is a good representation of the data.
Figure 4 shows the effects of the structural equation model. The analysis revealed that Attitude (toward booking) had a significant direct effect on Intention (β = 0.608, p < 0.001). So did Perceived Control (β = 0.374, p = 0.002). The negative situation had a significant direct effect on Intention (β = −0.654, p < 0.001). On the other hand, neither the Agent (β = 0.004, p = 0.975) nor the neutral situation (β = −0.159, p = 0.251) affected Intention directly.
Attitude had a significant effect on Perceived Behavioral Control (β = 0.292, p < 0.001). The indirect effect of Attitude via Perceived Behavioral Control on Intention was βind = 0.109, 95%-CI = [0.044, 0.186], p = 0.003. This indicates that Attitude affected Intention both directly and indirectly through Perceived Control, suggesting a partial mediation. The effect of Perceived Usefulness on Attitude (β = 0.860, p < 0.001) was highly significant. Perceived Usefulness had an indirect effect of βind = 0.523, 95%-CI = [0.392, 0.648], p < 0.001 on Intention. Perceived Usefulness was predicted by Agent (β = 0.206, p = 0.043), negative situation (β = −0.297, p = 0.018), Subjective Norm (β = 0.761, p < 0.001), and Technological Affinity (β = 0.133, p = 0.024). The neutral situation (β = 0.013, p = 0.923) did not significantly affect the Perceived Usefulness.
While the indirect effect of the Agent on Intention would have been significant via the delta method, with βind = 0.108, p = 0.044, it became marginally insignificant with the more robust bootstrapping method, with βind = 0.108, 95%-CI = [0.002, 0.225], p = 0.058. This can still be interpreted as a borderline (non-)significance. We can thus say that there is also some evidence for H2 when using SEM. As the direct effect was nearly zero, the effect of the Agent was, however, almost totally mediated by Perceived Usefulness and Attitude.
The indirect effect of the negative situation on Intention was βind = −0.155, 95%-CI = [−0.297, −0.029], p = 0.022. Thus, both the direct and the indirect effect of the negative situation were significant, so that H1 can also be confirmed by the SEM. The effect of the negative situation on Intention was partially mediated via Perceived Usefulness and Attitude. Subjective Norm had an indirect effect on Intention of βind = 0.398, 95%-CI = [0.279, 0.519], p < 0.001, and Technological Affinity of βind = 0.070, 95%-CI = [0.011, 0.140], p = 0.035. The indirect effect of the neutral situation was insignificant (βind = 0.007, 95%-CI = [−0.120, 0.135], p = 0.914).
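In lavaan terms, the delta-method versus bootstrap comparison mentioned above corresponds to fitting the same model with different standard-error options; a hedged sketch building on model2 from above:

# Delta-method (normal-theory) standard errors -- lavaan's default:
fit2_delta <- sem(model2, data = d)
parameterEstimates(fit2_delta)  # p-values for the ':=' effects via the delta method

# Bootstrap standard errors with percentile confidence intervals:
fit2_boot <- sem(model2, data = d, se = "bootstrap", bootstrap = 5000)
parameterEstimates(fit2_boot, boot.ci.type = "perc", level = 0.95)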

6. Discussion

Our preregistered hypotheses could be partially confirmed by our analyses. The ANOVA revealed significant main effects for the variables Agent and Situation, while the interaction between the two factors was not significant. Agent and Situation thus independently contributed to the booking Intention. In the negative situation, Intention was (significantly) lower than in the neutral and positive situation, thus confirming H1. For the human agent, Intention was significantly higher than for the chatbot, thus confirming H2. Due to the insignificant interaction effect, H3 could not be confirmed inferentially. From the descriptive analysis, one could, however, see that Intention decreased more in the negative compared to the neutral and positive situations for chatbots than for human agents, thus confirming H3 descriptively.
When these relationships were examined using a structural equation model (SEM) with the consideration of interaction effects between Agent and Situation, the interaction effects were insignificant. Also, the effect of Agent disappeared entirely. This result can be attributed to several factors. First, the inclusion of interaction terms produces a risk of multicollinearity, with the product terms sharing considerable variance with the main effects. Thus, the unique contributions of the main effects can become difficult to detect, and effects that appear in simpler models can be obscured. Second, the statistical power required to detect effects in SEM is higher than in ANOVA, particularly in the presence of complex models and smaller sample sizes. This reduced power may result in non-significant effects that would otherwise be detectable in less complicated models.
We thus additionally modelled the SEM without interaction effects between Agent and Situation. The Agent then became significant with the delta method and remained borderline significant with the more robust bootstrapping method, so that H2 could also be supported by the SEM. So could H1, as the negative situation remained clearly significant. Since the interaction terms were not included in this model, it could not speak to H3; the descriptive evidence for H3, of course, remained.
According to expectancy disconfirmation theory (Oliver, 1980), customer satisfaction depends upon performance expectations and their fulfillment. If a product does not fulfill expectations, satisfaction falls, and repurchase intention is reduced. Our setting concerned an accommodation that could not be booked as planned. Thus, customers perceived the performance of the booking process negatively and were disappointed in their expectations. As stated in similar contexts (e.g., ordered food that looked different from promotional pictures (Cai & Chi, 2021)), expectancy disconfirmation theory also holds in this setting. In a real-life setting, managers need to avoid proposing worse conditions than those originally chosen. Otherwise, they risk negative feelings, resulting in a reduced booking intention.
Our second hypothesis (human counterparts are preferred to chatbots) confirms the existing literature in tourism (e.g., Van Esch et al., 2022 or Zhu et al., 2023a), as well as in other settings (e.g., Sheng et al., 2024). For Germany, a study by the Bavarian Center for Tourism found that nearly three-quarters of respondents favored human contact in tourism services over AI interactions (Bayerisches Zentrum für Tourismus, 2024). This is not specific to Germany. A study by Zhu et al. (2023b) tested service failure in a booking context in China. Here, customers also preferred humans for complaint handling, confirming our results. Only in very specific situations are technologies preferred over humans. During the COVID-19 pandemic, people preferred clean environments, avoiding crowds and contagion (Kim et al., 2021). However, studies undertaken before the pandemic showed a preference for human counterparts (e.g., Choi et al., 2021; Van Esch et al., 2022; Y. Xu et al., 2020), supporting our results. In addition, after the pandemic, tourists not only returned to old travelling habits, but people also preferred human experiences since they had missed them during the pandemic (Ujitoko et al., 2022). The reason for the preference for humans in the booking process might lie in reduced psychological ownership in the case of chatbots (Scarpi, 2024). Scarpi (2024) postulated that chatbots depicting emotions might achieve similar outcomes as humans. Our study confirms that chatbots are accepted to take over the booking process, but a human-to-human conversation is preferred. To our knowledge, only the study by Song et al. (2022) contradicts this. They found that chatbot self-recovery was preferred over human recovery in service failure situations because humans are more inclined to misuse private data. Here, our finding points to a specific German situation: Germans would generally suspect computers, not humans, of mishandling private data.
To our knowledge, our third hypothesis (interaction effect Situation/Agent) is unique and has so far not been researched. Even though it could not be confirmed inferentially, descriptive evidence points in its direction and supports our hypothesis. Further research is needed. Crolic et al. (2022) found that angry customers reacted negatively to chatbots. Scarpi (2024) pointed to missing psychological ownership to explain lower rebooking intention in the case of chatbots, also supporting our descriptive finding. Choi et al. (2021) showed that robot acceptance is strongly culture-bound: Japanese tourists reacted more positively to robots than non-Japanese tourists. Our sample was German, and Germans are—with regard to digital technologies—rather skeptical because they fear losing control (Ortiz, 2023). This is visible in several aspects, e.g., the comparably low acceptance of mobile payment (Statista, 2024b), the high concerns about data privacy (Statista, 2024a), as well as strict laws for data security (Ortiz, 2023). They might therefore reject the chatbot in a negative, i.e., emotionally challenging, situation and react more positively to a human interaction partner.
Apart from the main hypotheses regarding the independent variables Agent and Situation, the analysis also confirms the relationships postulated by the TAM and extends its applicability to the specific context of human vs. chatbot booking agents. Attitude had a significant effect on Perceived Behavioral Control and a significant indirect effect on Intention via Perceived Behavioral Control, suggesting a partial mediation. The effect of Perceived Usefulness on Attitude was highly significant. Perceived Usefulness had an indirect effect on Intention. Other researchers were also able to detect additional direct effects here, mainly because of the information quality of chatbots (Zhu et al., 2023a). In our adaptation of the TAM, Technology Affinity had a direct effect on Perceived Usefulness and an indirect effect on Intention, showing that chatbot acceptance depends on Technology Affinity. The results confirm that the TAM is a well-suited model to explain the acceptance of chatbots, even when including Perceived Behavioral Control and Technological Affinity. To our knowledge, neither variable has been researched in the chatbot/tourism context before. Especially in the case of chatbots making negative suggestions, Technological Affinity helps customers to accept the new offers. The same applies to Perceived Behavioral Control: the idea of being able to steer the chatbot might help to form a positive booking intention. According to our results, respondents were very well able to distinguish between the chatbot and the human counterpart even though we gave only a few semantic cues (the name of the booking support: either the chatbot Sunny or the travel agency employee Marie Sommer). Chatbots were perceived as a useful technology that supports the booking process. Our research thereby extends the existing literature by confirming the TAM.

7. Conclusions

Especially in Germany, qualified personnel are hard to find. Chatbots or robots are a welcome support for handling routine enquiries and saving costs. According to our study, managers of tourism companies or booking platforms should consider that booking intention is reduced when using chatbots. Our study did not manipulate anthropomorphism; other research indicates that anthropomorphic chatbots evoke less resistance from customers (Konya-Baumbach et al., 2023; Klein & Martinez, 2022; Sheehan et al., 2020). Thus, chatbot design is important to achieve the intended cost savings while preserving sales. In addition, for negative scenarios, the option to be connected with a human counterpart should be offered in order to avoid frustration.
Our study extends the theory of chatbot acceptance to include different scenarios (service failure, neutral scenario, positive outcome). Here, our research indicates that chatbot acceptance depends not only on satisfaction with the conversation but also on the outcome. The combination of expectancy disconfirmation theory with the TAM is well suited to explain this.

8. Implications, Limitations and Further Research

This study presents some important information for managers of tourism companies. Businesses should be selective about how they employ chatbots. Whereas a standard booking request without any modifications of the situation can be handled very well by a chatbot, complaint handling should offer the option to be connected to a person, especially in a country like Germany where people are highly concerned about data privacy. Conversations with chatbots should include gateways where a connection to a person can be selected. Future research might be able to determine more exactly the threshold at which a switch to a human is needed. Keeping this in mind, more businesses will be able to switch to, e.g., chatbot-driven call centers that handle standard requests. From a practical point of view, humans can then center on situations that need more creative or non-standardized solutions.
From a research perspective, our study offers support for the use of the TAM in the chatbot context. At the moment, many studies rely on the CASA paradigm, social presence, or social response theory. Research applying the TAM in the chatbot context is scarce despite its proven effectiveness to explain intention.
Furthermore, our study has a geographical limitation. Compared to other countries, technology acceptance in Germany is only moderate. Therefore, some of our results may be biased because people there prefer human counterparts in general. A comparison with countries with higher technology acceptance (e.g., Singapore, or Denmark to remain in the European cultural context) would be useful here.
Another limitation concerns the type of chatbot investigated. We did not differentiate between simple rule-based chatbots and those powered by generative AI. Future research is needed here.
Further research could also investigate other variables that might influence the acceptance of chatbots, such as the personalization of the chatbot or the use of voice vs. the use of text. Also, other variables that could affect chatbot acceptance, such as trust in artificial intelligence or user familiarity with chatbots, could be incorporated into the model.

Author Contributions

Conceptualization, K.W. and K.B.; methodology, K.W. and K.B.; software, K.W.; validation, K.W. and K.B.; formal analysis, K.W.; data curation, K.W.; writing—original draft preparation, K.W. and K.B.; writing—review and editing, K.W. and K.B.; visualization, K.W. and K.B.; supervision, K.W. and K.B.; project administration, K.W. and K.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to conformity to European General Data Protection Regulation (https://www.dfg.de/de/foerderung/antrag-foerderprozess/faq/geistes-sozialwissenschaften, accessed on 28 October 2024).

Informed Consent Statement

Informed consent was obtained from all individual participants included in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
ANOVA: Analysis of variance
CFI: Comparative Fit Index
CI: Confidence interval
DV: Dependent variable
RMSEA: Root Mean Square Error of Approximation
SPSS: Statistical Package for the Social Sciences
SRMR: Standardized Root Mean Square Residual
SD: Standard deviation
SEM: Structural Equation Model
TAM: Technology Acceptance Model

References

  1. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. [Google Scholar] [CrossRef]
  2. Atwal, G., & Bryson, D. (2021). Antecedents of intention to adopt artificial intelligence services by consumers in personal financial investing. Strategic Change, 30(3), 293–298. [Google Scholar] [CrossRef]
  3. Bayerisches Zentrum für Tourismus. (2024). Studie zur Akzeptanz von KI im Tourismus [Study on the acceptance of AI in tourism]. Available online: https://trvlcounter.de/top-news/15339-studie-zur-akzeptanz-von-ki-im-tourismus/?utm_source=chatgpt.com (accessed on 4 November 2024).
  4. Boulding, W., Kalra, A., Staelin, R., & Zeithaml, V. A. (1993). A dynamic process model of service quality: From expectations to behavioral intentions. Journal of Marketing Research, 30(1), 7. [Google Scholar] [CrossRef]
  5. Brennan, K. (2006). The managed teacher: Emotional labour, education, and technology. Educational Insights, 10, 55–65. [Google Scholar]
  6. Bühner, M. (2021). Einführung in die Test- und Fragebogenkonstruktion [Introduction to test and questionnaire construction] (4th, corrected and expanded ed.). Pearson. [Google Scholar]
  7. Cai, R., & Chi, C. G.-Q. (2021). Pictures vs. reality: Roles of disconfirmation magnitude, disconfirmation sensitivity, and branding. International Journal of Hospitality Management, 98, 103040. [Google Scholar] [CrossRef]
  8. Caldarini, G., Jaf, S., & McGarry, K. (2022). A literature survey of recent advances in chatbots. Information, 13(1), 41. [Google Scholar] [CrossRef]
  9. Chauhan, R., & Mehra, P. (2024). Abstract or concrete language style? How chatbots of online travel agencies should apologise to customers. Asia Pacific Journal of Tourism Research, 1–15. [Google Scholar] [CrossRef]
  10. Choi, Y., Oh, M., Choi, M., & Kim, S. (2021). Exploring the influence of culture on tourist experiences with robots in service delivery environment. Current Issues in Tourism, 24(5), 717–733. [Google Scholar] [CrossRef]
  11. Crolic, C., Thomaz, F., Hadi, R., & Stephen, A. T. (2022). Blame the bot: Anthropomorphism and anger in customer–chatbot interactions. Journal of Marketing, 86(1), 132–148. [Google Scholar] [CrossRef]
  12. Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user information systems: Theory and results [Ph.D. thesis, Massachusetts Institute of Technology]. Available online: https://dspace.mit.edu/handle/1721.1/15192 (accessed on 4 January 2025).
  13. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319. [Google Scholar] [CrossRef]
  14. Dhingra, S., Gupta, S., & Bhatt, R. (2020). A study of relationship among service quality of e-commerce websites, customer satisfaction, and purchase intention. International Journal of E-Business Research, 16(3), 42–59. [Google Scholar] [CrossRef]
  15. Esiyok, E., Gokcearslan, S., & Kucukergin, K. G. (2024). Acceptance of educational use of AI chatbots in the context of self-directed learning with technology and ICT self-efficacy of undergraduate students. International Journal of Human–Computer Interaction, 41(1), 1–10. [Google Scholar] [CrossRef]
  16. Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behaviour: An introduction to theory and research. Addison-Wesley. [Google Scholar]
  17. Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304–316. [Google Scholar] [CrossRef]
  18. Gonçalves, A. R., Costa Pinto, D., Shuqair, S., Mattila, A., & Imanbay, A. (2024). The paradox of immersive artificial intelligence (AI) in luxury hospitality: How immersive AI shapes consumer differentiation and luxury value. International Journal of Contemporary Hospitality Management, 36, 3865–3888. [Google Scholar] [CrossRef]
  19. Hamilton, K., Spinks, T., White, K. M., Kavanagh, D. J., & Walsh, A. M. (2016). Perceived behavioural control scale [Database record]. APA PsycTests. [Google Scholar] [CrossRef]
  20. Iancu, I., & Iancu, B. (2023). Interacting with chatbots later in life: A technology acceptance perspective in COVID-19 pandemic situation. Frontiers in Psychology, 13, 1111003. [Google Scholar] [CrossRef]
  21. Islam, M. S., Tan, C. C., Sinha, R., & Selem, K. M. (2024). Gaps between customer compatibility and usage intentions: The moderation function of subjective norms towards chatbot-powered hotel apps. International Journal of Hospitality Management, 123, 103910. [Google Scholar] [CrossRef]
  22. Jimenez-Barreto, J., Rubio, N., & Molinillo, S. (2021). “Find a flight for me, Oscar!” Motivational customer experiences with chatbots. International Journal of Contemporary Hospitality Management, 33(11), 3860–3882. [Google Scholar] [CrossRef]
  23. Jussupow, E., Benbasat, I., & Heinzl, A. (2020, June 15–17). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. 28th European Conference on Information Systems (ECIS), online. Available online: https://aisel.aisnet.org/ecis2020_rp/168 (accessed on 5 January 2025).
  24. Jürkenbeck, K., Heumann, A., & Spiller, A. (2019). Sustainability matters: Consumer acceptance of different vertical farming systems. Sustainability, 11(15), 4052. [Google Scholar] [CrossRef]
  25. Katz, E., Blumler, J. G., & Gurevitch, M. (1973). Uses and gratifications research. Public Opinion Quarterly, 37(4), 509. [Google Scholar] [CrossRef]
  26. Khajeh Nobar, H. B., & Rostamzadeh, R. (2018). The impact of customer satisfaction, customer experience and customer loyalty on brand power: Empirical evidence from hotel industry. Journal of Business Economics and Management, 19(2), 417–430. [Google Scholar] [CrossRef]
  27. Kim, J., Shin, S., Kim, J. Y., & Koo, C. (2024). Effect of ChatGPT’s answering style on users’ acceptance in a trip planning context. International Journal of Tourism Research, 26(5), e2746. [Google Scholar] [CrossRef]
  28. Kim, J. S., & Baek, T. H. (2024). Motivational determinants of continuance usage intention for generative AI: An investment model approach for ChatGPT users in the United States. Behaviour & Information Technology, 1–17. [Google Scholar] [CrossRef]
  29. Kim, S., Kim, J., Badu-Baiden, F., Giroux, M., & Choi, Y. (2021). Preference for robot service or human service in hotels? Impacts of the COVID-19 pandemic. International Journal of Hospitality Management, 93, 102795. [Google Scholar] [CrossRef] [PubMed]
  30. Klein, K., & Martinez, L. F. (2022). The impact of anthropomorphism on customer satisfaction in chatbot commerce: An experimental study in the food sector. Electronic Commerce Research, 23(4), 2789–2825. [Google Scholar] [CrossRef]
  31. Konya-Baumbach, E., Biller, M., & Von Janda, S. (2023). Someone out there? A study on the social presence of anthropomorphized chatbots. Computers in Human Behavior, 139, 107513. [Google Scholar] [CrossRef]
  32. Kwangsawad, A., & Jattamart, A. (2022). Overcoming customer innovation resistance to the sustainable adoption of chatbot services: A community-enterprise perspective in Thailand. Journal of Innovation & Knowledge, 7(3), 100211. [Google Scholar] [CrossRef]
  33. Li, M., Yin, D., Qiu, H., & Bai, B. (2021). A systematic review of AI technology-based service encounters: Implications for hospitality and tourism operations. International Journal of Hospitality Management, 95, 102930. [Google Scholar] [CrossRef]
  34. Li, Y., Li, Y., Chen, Q., & Chang, Y. (2024). Humans as teammates: The signal of human–AI teaming enhances consumer acceptance of chatbots. International Journal of Information Management, 76, 102771. [Google Scholar] [CrossRef]
  35. Liu, B., & Sundar, S. S. (2018). Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychology, Behavior, and Social Networking, 21(10), 625–636. [Google Scholar] [CrossRef] [PubMed]
  36. Liu, M., Yang, Y., Ren, Y., Jia, Y., Ma, H., Luo, J., Fang, S., Qi, M., & Zhang, L. (2024). What influences consumer AI chatbot use intention? An application of the extended technology acceptance model. Journal of Hospitality and Tourism Technology, 15(4), 667–689. [Google Scholar] [CrossRef]
  37. Majeed, S., Kim, W. G., & Nimri, R. (2024). Conceptualizing the role of virtual service agents in service failure recovery: Guiding insights. International Journal of Hospitality Management, 123, 103889. [Google Scholar] [CrossRef]
  38. Majid, G. M., Tussyadiah, I., & Kim, Y. R. (2024). Exploring the potential of chatbots in extending tourists’ sustainable travel practices. Journal of Travel Research. ahead of print. [Google Scholar] [CrossRef]
  39. Melián-González, S., Gutiérrez-Taño, D., & Bulchand-Gidumal, J. (2021). Predicting the intentions to use chatbots for travel and tourism. Current Issues in Tourism, 24(2), 192–210. [Google Scholar] [CrossRef]
  40. Mende, M., Scott, M. L., van Doorn, J., Grewal, D., & Shanks, I. (2019). Service robots rising: How humanoid robots influence service experiences and elicit compensatory consumer responses. Journal of Marketing Research, 56(4), 535–556. [Google Scholar] [CrossRef]
  41. Meng, L., Li, T., Shi, X., & Huang, X. (2023). Double-sided messages improve the acceptance of chatbots. Annals of Tourism Research, 102, 103644. [Google Scholar] [CrossRef]
  42. Niu, B., & Mvondo, G. F. N. (2024). I am ChatGPT, the ultimate AI chatbot! Investigating the determinants of users’ loyalty and ethical usage concerns of ChatGPT. Journal of Retailing and Consumer Services, 76, 103562. [Google Scholar] [CrossRef]
  43. Noh, Y.-G., & Hong, J.-H. (2021). Designing reenacted chatbots to enhance museum experience. Applied Sciences, 11(16), 7420. [Google Scholar] [CrossRef]
  44. Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing Research, 17(4), 460. [Google Scholar] [CrossRef]
  45. Ortiz, M. (2023). Loss of control and technology acceptance. In M. Ortiz (Ed.), Loss of control and technology acceptance in (digital) transformation: Acceptance and design factors of a heuristic model (pp. 21–29). Springer Fachmedien. [Google Scholar] [CrossRef]
  46. Park, J. E., Fan, A., & Wu, L. (2024). Chatbots in complaint handling: The moderating role of humor. International Journal of Contemporary Hospitality Management, 37(3), 805–824. [Google Scholar] [CrossRef]
  47. Pavone, G., Meyer-Waarden, L., & Munzel, A. (2023). Rage against the machine: Experimental insights into customers’ negative emotional responses, attributions of responsibility, and coping strategies in artificial intelligence–based service failures. Journal of Interactive Marketing, 58(1), 52–71. [Google Scholar] [CrossRef]
  48. Pillai, R., & Sivathanu, B. (2020). Adoption of AI-based chatbots for hospitality and tourism. International Journal of Contemporary Hospitality Management, 32(10), 3199–3226. [Google Scholar] [CrossRef]
  49. Qiu, H., Li, M., Shu, B., & Bai, B. (2020). Enhancing hospitality experience with service robots: The mediating role of rapport building. Journal of Hospitality Marketing & Management, 29(3), 247–268. [Google Scholar] [CrossRef]
  50. Ruggiero, T. E. (2000). Uses and gratifications theory in the 21st century. Mass Communication and Society, 3(1), 3–37. [Google Scholar] [CrossRef]
  51. Scarpi, D. (2024). Strangers or friends? Examining chatbot adoption in tourism through psychological ownership. Tourism Management, 102, 104873. [Google Scholar] [CrossRef]
  52. Shams, G., & Kim, K. (2024). Chatbots on the frontline: The imperative shift from a “one-size-fits-all” strategy through conversational cues and dialogue designs. Journal of Hospitality & Tourism Research. Ahead of print. [Google Scholar] [CrossRef]
  53. Shawar, B. A., & Atwell, E. S. (2005). Using corpora in machine-learning chatbot systems. International Journal of Corpus Linguistics, 10(4), 489–516. [Google Scholar] [CrossRef]
  54. Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research, 115, 14–24. [Google Scholar] [CrossRef]
  55. Sheng, M. L., Natalia, N., & Rusfian, E. Z. (2024). AI chatbot, human, and in-between: Examining the broader spectrum of technology-human interactions in driving customer-brand relationships across experience and credence services. Psychology & Marketing. Ahead of print. [Google Scholar] [CrossRef]
  56. Shi, J., Lee, M., Girish, V. G., Xiao, G., & Lee, C.-K. (2024). Embracing the ChatGPT revolution: Unlocking new horizons for tourism. Journal of Hospitality and Tourism Technology, 15(3), 433–448. [Google Scholar] [CrossRef]
  57. Shyja, P. J., Singh, K., Kokkranikal, J., Bharadwaj, R., Rai, S., & Antony, J. (2023). Service quality and customer satisfaction in hospitality, leisure, sport and tourism: An assessment of research in web of science. Journal of Quality Assurance in Hospitality & Tourism, 24(1), 24–50. [Google Scholar] [CrossRef]
  58. Silva, F. A., Shojaei, A. S., & Barbosa, B. (2023). Chatbot-based services: A study on customers’ reuse intention. Journal of Theoretical and Applied Electronic Commerce Research, 18(1), 457–474. [Google Scholar] [CrossRef]
  59. Song, M., Du, J., Xing, X., & Mou, J. (2022). Should the chatbot “save itself” or “be helped by others”? The influence of service recovery types on consumer perceptions of recovery satisfaction. Electronic Commerce Research and Applications, 55, 101199. [Google Scholar] [CrossRef]
  60. Statista. (2024a). Anteil der Menschen, die sich Sorgen über Datenmissbrauch im Internet machen in 56 Ländern/Territorien weltweit 2024 [Share of people worried about data misuse on the internet in 56 countries/territories worldwide, 2024]. Statista. Available online: https://de.statista.com/statistik/studie/id/56641/dokument/datenschutz-im-internet/ (accessed on 22 December 2024).
  61. Statista. (2024b). Anteil der Nutzer von mobilen Zahlungsmitteln in 56 Ländern/Territorien weltweit 2024 [Share of users of mobile payment methods in 56 countries/territories worldwide, 2024]. Statista. Available online: https://de.statista.com/prognosen/1456280/mobile-zahlungsmittel-nutzer-in-ausgewaehlten-laendern-weltweit (accessed on 21 December 2024).
  62. Statista. (2024c). Digital & trends: Artificial intelligence (AI) use in travel and tourism 2024. Statista. Available online: https://www.statista.com/study/165390/artificial-intelligence-ai-use-in-travel-and-tourism/ (accessed on 13 December 2024).
  63. Sujood, & Pancy. (2024). Travelling with open eyes! A study to measure consumers’ intention towards experiencing immersive technologies at tourism destinations by using an integrated model of TPB, TAM captured through the lens of S-O-R. International Journal of Contemporary Hospitality Management, 36(11), 3906–3929. [Google Scholar] [CrossRef]
  64. Taylor, S., & Todd, P. (1995). Assessing IT usage: The role of prior experience. MIS Quarterly, 19(4), 561–570. [Google Scholar] [CrossRef]
  65. Ujitoko, Y., Yokosaka, T., Ban, Y., & Ho, H.-N. (2022). Tracking changes in touch desire and touch avoidance before and after the COVID-19 outbreak. Frontiers in Psychology, 13, 1016909. [Google Scholar] [CrossRef] [PubMed]
  66. Van Esch, P., Cui, Y., Das, G., Jain, S. P., & Wirtz, J. (2022). Tourists and AI: A political ideology perspective. Annals of Tourism Research, 97, 103471. [Google Scholar] [CrossRef]
  67. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. [Google Scholar] [CrossRef]
  68. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. [Google Scholar] [CrossRef]
  69. Wang, L., Xiao, J., Luo, Z., Guo, Y., & Xu, X. (2024). The impact of default options on tourist intention post tourism chatbot failure: The role of service recovery and emoticon. Tourism Management Perspectives, 53, 101299. [Google Scholar] [CrossRef]
  70. Wang, X., Zhou, R., & Zhang, R. (2020). The impact of expectation and disconfirmation on user experience and behavior intention. In A. Marcus, & E. Rosenzweig (Eds.), Design, user experience, and usability: Interaction design (Vol. 12200, pp. 464–475). Springer International Publishing. [Google Scholar] [CrossRef]
  71. Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. [Google Scholar] [CrossRef]
  72. Xu, H., Law, R., Lovett, J., Luo, J. M., & Liu, L. (2024). Tourist acceptance of ChatGPT in travel services: The mediating role of parasocial interaction. Journal of Travel & Tourism Marketing, 41(7), 955–972. [Google Scholar] [CrossRef]
  73. Xu, Y., Shieh, C.-H., van Esch, P., & Ling, I.-L. (2020). AI customer service: Task complexity, problem-solving ability, and usage intention. Australasian Marketing Journal, 28(4), 189–199. [Google Scholar] [CrossRef]
  74. Yun, J., & Park, J. (2022). The effects of chatbot service recovery with emotion words on customer satisfaction, repurchase intention, and positive word-of-mouth. Frontiers in Psychology, 13, 922503. [Google Scholar] [CrossRef] [PubMed]
  75. Zhang, J., Chen, Q., Lu, J., Wang, X., Liu, L., & Feng, Y. (2024). Emotional expression by artificial intelligence chatbots to improve customer satisfaction: Underlying mechanism and boundary conditions. Tourism Management, 100, 104835. [Google Scholar] [CrossRef]
  76. Zhu, Y., Zhang, R., Zou, Y., & Jin, D. (2023a). Investigating customers’ responses to artificial intelligence chatbots in online travel agencies: The moderating role of product familiarity. Journal of Hospitality and Tourism Technology, 14(2), 208–224. [Google Scholar] [CrossRef]
  77. Zhu, Y., Zhang, J., & Wu, J. (2023b). Who did what and when? The effect of chatbots’ service recovery on customer satisfaction and revisit intention. Journal of Hospitality and Tourism Technology, 14(3), 416–429. [Google Scholar] [CrossRef]
Figure 1. Model of the intention to book, derived from the Technology Acceptance Model (TAM); own elaboration based on Davis (1989).
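For readers who want to reproduce a model of this form, the sketch below shows how the textbook TAM paths (perceived ease of use predicting perceived usefulness, and both predicting booking intention) could be specified in Python with the semopy package. The construct names (PEOU, PU, INT) and the data file are illustrative assumptions, not the authors' actual specification, which may contain additional constructs and measurement indicators.

```python
# Minimal sketch of a TAM-style structural model, assuming a data frame
# with composite scores PEOU, PU, and INT; all names are hypothetical.
import pandas as pd
from semopy import Model

spec = """
PU ~ PEOU
INT ~ PU + PEOU
"""

df = pd.read_csv("booking_survey.csv")  # hypothetical data file
model = Model(spec)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, and p-values
```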
Figure 2. Age distribution of the sample; own source.
Figure 3. Intention depending on Agent and Situation; own source.
Figure 4. Structural model of the SEM for Intention (to book); *** p < 0.001, ** p < 0.01, * p < 0.05; own source.
Table 1. Classification of previous studies; own source.

| Author | Method | Context | Key topic | Key finding/abstract |
| Chauhan and Mehra (2024) | Experiment, vignette-based, 2 × 2 design | Online travel agency | Service failure | Investigates which language style (abstract or concrete) an OTA chatbot should use to apologize to customers after a service failure; a concrete apology style was more effective in achieving customer forgiveness. |
| Meng et al. (2023) | One field study, three scenario-based experiments | Hotel booking | Chatbot characteristics | A double-sided message strategy enhanced customers’ willingness to interact with AI chatbots via the mediating role of perceived authenticity. |
| Park et al. (2024) | Experiment, vignette-based, 2 × 2 design | Hotel booking | Chatbot characteristics/service recovery | Perceived control and social presence can improve chatbots’ effectiveness in handling service failures, restoring customer satisfaction and revisit intention. However, humor showed opposite effects across the two studies: humorous language in complaint handling attenuated the positive effect of perceived control but enhanced the positive effect of social presence. |
| Scarpi (2024) | Survey | Hotel booking | Chatbot vs. human | Chatbots (vs. humans) decreased feelings of psychological ownership, which lowered relationship commitment and rebooking intention. |
| Shams and Kim (2024) | Experiment, vignette-based, 2 × 2 design | Hotel booking/attraction visit | Chatbot characteristics | A match between a chatbot’s humanoid and dialogue characteristics can increase fluency in comprehending the message, enhancing customer satisfaction and usage intention. |
| Song et al. (2022) | Experiment, vignette-based, single factor | Hotel booking | Chatbot vs. human, privacy concerns, service recovery | Chatbot self-recovery led to higher satisfaction, higher perceived value, and lower privacy risk, moderated by perceived intelligence; high perceived chatbot intelligence led to higher privacy risks. |
| L. Wang et al. (2024) | Experiment, vignette-based, 2 × 2 design | Hotel booking | Service failure/nudging | An opt-out default option increased tourists’ continuous use intention by decreasing their affective effort. The effect was moderated by service recovery type and emoticon: the opt-out default was most effective when combined with informational help and a pleading emoticon. |
| Zhu et al. (2023a) | Survey | Online travel agency | Trust/perceived usefulness/perceived ease of use | Interaction and information quality, as AI chatbot stimuli, significantly increased potential tourists’ trust and purchase intention. Perceived usefulness mediated the relationship among interactivity, information quality, customer trust, and purchase intention. Customers with high product familiarity exhibited greater trust in products with a high level of perceived usefulness. |
| Zhu et al. (2023b) | Experiment, vignette-based, 2 × 2 design | Hotel booking | Service recovery | Compared with recovery by human employees, chatbot recovery led to lower customer satisfaction and revisit intention; the effect was stronger for symbolic than for economic recovery. |
Table 2. Means, standard deviations (SD), and p-values for the manipulation check variables; own source.

| Scale item | AI, mean (SD) | Human, mean (SD) | p-value |
| emotionless–emotional | 3.86 (1.61) | 4.41 (1.55) | <0.001 |
| artificial–natural | 3.81 (1.68) | 4.64 (1.68) | <0.001 |
| not sensitive–sensitive | 4.08 (1.56) | 4.74 (1.56) | <0.001 |
| inhuman–human | 4.23 (1.62) | 5.17 (1.59) | <0.001 |
| mechanical–empathetic | 3.90 (1.64) | 4.57 (1.66) | <0.001 |
| monotonous–multifaceted | 3.89 (1.56) | 4.26 (1.52) | 0.010 |
| cold–warm | 4.21 (1.61) | 4.76 (1.56) | <0.001 |
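As a rough plausibility check, p-values of the kind reported in Table 2 can be approximated from the printed means and SDs alone, given the group sizes. A minimal sketch using SciPy’s Welch t-test from summary statistics follows; the group sizes n_ai and n_human are invented placeholders, since the exact cell counts are not reported in this table.

```python
# Approximate Table 2 p-values from summary statistics (Welch's t-test).
# Group sizes are hypothetical placeholders, not the study's actual counts.
from scipy.stats import ttest_ind_from_stats

items = {
    "emotionless-emotional":   (3.86, 1.61, 4.41, 1.55),
    "artificial-natural":      (3.81, 1.68, 4.64, 1.68),
    "monotonous-multifaceted": (3.89, 1.56, 4.26, 1.52),
}
n_ai, n_human = 300, 260  # assumed group sizes

for item, (m_ai, sd_ai, m_h, sd_h) in items.items():
    t, p = ttest_ind_from_stats(m_ai, sd_ai, n_ai,
                                m_h, sd_h, n_human,
                                equal_var=False)  # Welch correction
    print(f"{item}: t = {t:.2f}, p = {p:.4f}")
```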
Table 3. Percentage of participants and of male participants per group, mean, and standard deviation (SD) of the DV Intention (to book); own source.

| | AI, pos | AI, neutral | AI, neg | Human, pos | Human, neutral | Human, neg | Total |
| % of participants | 17.6 | 19.1 | 16.7 | 15.2 | 15.8 | 15.6 | 100 |
| % male | 46.3 | 50.6 | 47.4 | 53.5 | 44.6 | 49.3 | 48.6 |
| Mean | 4.71 | 4.48 | 3.65 | 4.84 | 4.70 | 4.16 | 4.42 |
| SD | 1.58 | 1.61 | 1.58 | 1.31 | 1.29 | 1.73 | 1.58 |
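The descriptive interaction pattern can be read directly off Table 3: the drop in mean intention from the neutral to the negative situation is larger for the chatbot (4.48 − 3.65 = 0.83) than for the human agent (4.70 − 4.16 = 0.54). A minimal sketch of that arithmetic, using only the cell means printed above:

```python
# Cell means of Intention (to book) taken from Table 3.
means = {
    ("AI", "pos"): 4.71, ("AI", "neutral"): 4.48, ("AI", "neg"): 3.65,
    ("Human", "pos"): 4.84, ("Human", "neutral"): 4.70, ("Human", "neg"): 4.16,
}

for agent in ("AI", "Human"):
    drop = means[(agent, "neutral")] - means[(agent, "neg")]
    print(f"{agent}: neutral-to-negative drop = {drop:.2f}")
# AI: 0.83 vs. Human: 0.54; the larger drop for the chatbot is the
# descriptive interaction pattern reported in the article.
```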
