Article

Impact of Motivation Factors for Using Generative AI Services on Continuous Use Intention: Mediating Trust and Acceptance Attitude

Seoul Business School, aSSIST University, Seoul 03767, Republic of Korea
* Author to whom correspondence should be addressed.
Soc. Sci. 2024, 13(9), 475; https://doi.org/10.3390/socsci13090475
Submission received: 13 July 2024 / Revised: 25 August 2024 / Accepted: 5 September 2024 / Published: 9 September 2024
(This article belongs to the Special Issue Technology, Digital Transformation and Society)

Abstract

This study aims to empirically analyze the relationship between the motivation factors of generative AI users and their intention to continue using the service. The motives of generative AI service users are defined as individual, social, and technical motivation factors, and this research verified the effect of these factors on continuous use intention and tested the mediating effects of trust and acceptance attitude. An online survey was conducted among users of language-based generative AI services such as OpenAI’s ChatGPT, Google Bard, Microsoft Bing, and Meta LLaMA, and a structural equation analysis was conducted on a total of 356 responses. The analysis showed that individual, social, and technical motivation factors all had a positive (+) effect on trust and on the attitude toward accepting generative AI services. Among them, individual motivations such as self-efficacy, innovation orientation, and playful desire were found to have the greatest influence on the formation of the acceptance attitude. In addition, social factors were identified as having the greatest influence on trust in the use of generative AI services, confirming that social reputation or awareness directly affects trust in the usability of generative AI.

1. Introduction

The Fourth Industrial Revolution has led to rapid technological advances in artificial intelligence (AI), big data, the Internet of Things (IoT), and robotics. In the midst of these changes, OpenAI’s ChatGPT (Generative Pre-trained Transformer), launched in 2022 and based on a large language model (LLM), spread rapidly, surpassing 1 million users within five days of launch and reaching 100 million monthly active users within just two months. As a result, a number of commercial and educational uses for ChatGPT have been proposed. Accordingly, Google unveiled its chatbot service, “Bard”, built on Google’s LLM PaLM (Pathways Language Model), and Meta released its LLM, LLaMA (Large Language Model Meta AI). As global big tech companies actively develop generative AI services, generative AI technology is evolving ever more rapidly and leading the transformation of various industries (McKinsey 2024).
It is currently predicted that generative AI services will be a game-changer in every field (Ooi et al. 2023; Orchard and Tasiemski 2023). Goldman Sachs (2023) reported that generative AI could increase global gross domestic product (GDP) by 7% (about USD 7 trillion), which could lead to a productivity boost of 1.5% for 10 years. Stanford University’s Human-Centered Artificial Intelligence (2023) explained that generative AI is an important inflection point in the evolution of technology, where machines generate language, images, and speech and will have a huge impact on individual lives and society as a whole, sophisticatedly complementing human labor in more productive and creative ways.
As the generative AI industry grows, so does academic research on it. In particular, research exploring the applicability and future value of generative AI is gaining traction in various fields, including healthcare (Zhang and Boulos 2023; Varghese and Chapiro 2024); finance (Lee and Chen 2022; Mogaji and Nguyen 2022); education (Lim et al. 2023; Tlili et al. 2023); and news production (Korzynski et al. 2023; Naeem et al. 2024). Recently, there has also been research on managing the misuse and abuse of generative AI technologies such as ChatGPT (Fiona et al. 2023). However, there is relatively little research analyzing the behavior of generative AI users from a service marketing perspective. In particular, user-centered research is needed to understand how users perceive generative AI, what motivates them to adopt it, and what factors influence their adoption.
To date, a review of research related to generative AI technologies and services shows that the discussion centers on new media acceptance and communication forms based on the technology acceptance model (TAM) (Gupta et al. 2022). As the use of ChatGPT has spread rapidly, studies have examined the variables that affect ChatGPT use, satisfaction with use, and continuous use intention. In particular, TAM-based studies such as Shaengchart (2023), Zou and Huang (2023), and Solomovich and Abraham (2024) analyzed perceived usefulness and perceived ease of use as key variables that influence the intention to continue using ChatGPT. Pandey and Sharma (2023) and Feuerriegel et al. (2024) presented results on the acceptance of generative AI-based chatbot services with respect to information quality, system quality, and service quality. In addition, based on the concept of affordance, which denotes behavioral inducement, Camilleri (2024) explained how the three affordances that make up ChatGPT (providing personalized help, talking like a human, and contextual cognition) affect continuous use through the mediation of perceived usefulness and ease of use, which are technology acceptance factors. However, these related studies are not yet sufficient to predict user interactions because they originate from the context of TAM or organizational acceptance (Ali et al. 2023; Saif et al. 2024; Salloum et al. 2024).
In particular, with the recent launch of generative AI services spanning text, image, and video, generative AI is moving beyond search and information provision toward learning, problem solving, and life support services. It is therefore necessary to go beyond discussions of the acceptance of generative AI technology and to study more specifically the motivations, attitudes, and intentions of users who use generative AI as a service. There is also a need to discuss whether generative AI accurately reflects an individual’s communication goals and how users react to and accept generative AI services based on perceived value, trust, and satisfaction. In this context, this study attempts to empirically analyze the motivation factors of generative AI users and their relationship with the intention to continue using the service. Specifically, this study defines the individual, social, and technical motivation factors of generative AI service users and investigates how these factors affect continuous use intention, with trust and acceptance attitude as mediating variables.

2. Literature Review and Hypothesis Development

2.1. Generative AI Services and Use Motivation

Generative AI refers to artificial intelligence technology that generates new content, such as text, images, and video, in response to user prompts (Bandi et al. 2023; Feuerriegel et al. 2024). The core of a generative AI service is to understand a user’s question, analyze it, select on its own the information worth including in an answer from a myriad of available information, and provide it in an appropriately summarized and organized form (Euchner 2023; Kenthapadi et al. 2023; Sætra 2023). In particular, continuing the context of a previous question or posing questions with concrete cases can yield a more accurate answer. Just as in a conversation with a person, the AI service answers by understanding the context, allowing users to experience an information search service on another level from previous services.
Existing intelligent AI search engines and chatbots have evolved to provide sophisticated personalized services (Berthelot et al. 2024; Orchard and Tasiemski 2023). Despite their widespread distribution, however, these AI-based information search and communication services have shown limitations in meeting user expectations because of their inability to grasp context or respond accurately to complex content outside the scope of their settings (Gupta et al. 2022; Gupta et al. 2024). Generative AI, by contrast, actively generates results in response to users’ specific needs. While traditional deep learning-based AI technologies simply predict or classify based on existing data, generative AI goes one step further: it learns from data on its own to answer the questions or challenges users pose and actively presents data or content. Thus, generative AI services are characterized by their ability to generate answers customized to the context of the user’s request, provide creative outputs, and solve complex problems (Ferraro et al. 2024).
When considering users’ choice of and satisfaction with generative AI services, these services can be understood in terms of motivation factors based on the uses and gratifications theory (Liu 2015; Ruggiero 2000). “Motivation” can be defined as an internal state that activates an individual’s internal driving force or physical energy, inspiring behavior and directing it toward goals that exist in the external environment (Pedrotti and Nistor 2016; Camilleri and Falzon 2021). Use motivation, in particular, is an important factor influencing consumer decision-making; it refers to the personal desires and urges that arise when consumers seek to obtain or enjoy satisfaction from consumption activities and that are manifested to fulfill the desire to consume (Larsen et al. 2009; Aysu 2020; Raman et al. 2022). Likewise, service use motivation is the drive to purchase a service to satisfy a need and refers to an internal state that activates internal energy and directs it selectively toward a goal in the external environment (Kim et al. 2011; Jacobsen et al. 2014).
In consumer behavior, motivation can be defined as perceived information input through stimuli that causes a change in behavior: an active and proactive drive to reduce psychological tension, and a state that is activated to produce goal-directed behavior (Bayton 1958; Oyserman 2009; Fullerton 2013). Because the motivations behind consumers’ outward behaviors lie within the consumers themselves, it is necessary to first identify those motivations in order to understand the behaviors (Oyserman 2009; Durmaz and Diyarbakırlıoğlu 2011).
Users choose services based on their needs and desires and feel satisfied when those needs are fulfilled. These needs and wants can vary depending on the individual’s disposition and the social and technical influences they have experienced, which shows that individual, social, and technical characteristics are related to the active use of a service (Corre et al. 2017; Pleger et al. 2020). From this perspective, generative AI services also require a conceptualization of what motivates consumers to use them and what characteristics drive that motivation. On this basis, this study categorizes the motivations for using generative AI services into individual, social, and technical motivation factors (Lu et al. 2005; Walker and Johnson 2006).
First, the examination of individual motivation factors for generative AI services shows that the personal characteristics of the end users of information technology play a crucial role in the performance of information technology implementation (Lee et al. 2019; Brewer et al. 2000; Yilmaz and Yilmaz 2023; Elmashhara et al. 2024; Ma et al. 2024). Self-efficacy is defined as the degree to which a user believes that they can use a system to accomplish a specific task (Schunk 1995; Schunk and DiBenedetto 2021). Users who are sufficiently knowledgeable and believe in their own abilities have an enhanced perception of the ease of use of the technology, which can change their behavioral intention (Pan 2020). Further, users with high innovation orientation actively seek information about new technologies or services and tolerate uncertainty (Siddiqui et al. 2020). Along with this, users’ playful desire to be entertained by new services accelerates the acceptance of and adaptation to technology (Ståhlbröst and Bergvall-Kåreborn 2011; Stock et al. 2015). Generally, if users experience personal joy and pleasure from using a particular system, they are intrinsically motivated to use it (Waterman et al. 2008; Kim et al. 2011; Posada et al. 2014).
Second, social motivation factors refer to social conformity and social image. In choosing a new service, social conformity is the alteration of an individual’s behavior to match that of others, defined as acting under group pressure (Bae and Park 2015; Osatuyi and Turel 2019). The acceptance of new technologies and services is heavily influenced by social influence from relevant groups of users. Once an atmosphere is created in which everyone around them is using a service, individuals will follow suit and use the new service (Kim et al. 2010). In addition, an individual’s choice of new technology-based services operates as a factor that enhances their social image (Hsu and Lin 2008; Preece and Shneiderman 2009; Hernandez et al. 2011; Sun et al. 2017). Vanduhe et al. (2020) explained that users believe that access to new technology-based services improves their status within the social system to which they belong. In this respect, the improvement of the individual’s image within the in-group is a motivation factor for the use of new services (Hassouneh and Brengman 2014; Hamari et al. 2018).
The third set of technical motivation factors includes technical convenience, perceived ease and usefulness, and personalization. When users find it easy to use a technology, a positive perception of the technology is formed (Price and Kadi-Hanifi 2011). Additionally, positive attitudes of users are formed when the technology is perceived as actually useful (Wang and Hsieh 2015). Furthermore, personalized services significantly improve user experience and satisfaction because they are tailored to users’ individual needs and preferences (Pan 2020; Schmid and Dowling 2020; Kabalisa and Altmann 2021). In the end, as Saif et al. (2024) argued, for new AI-based services such as ChatGPT, factors such as the convenience and usefulness of the technology to users are motivation factors for users to use the service.

2.2. Service Use Motivation: Trust and Acceptance Attitude

Motivation factors for using a service influence an individual user’s trust in and acceptance attitude toward the service. First, consider the relationship between service use motivation and trust. Trust can be divided into a subjective internal state and behavior. The internal state refers to the belief that the object one relies on will not act in a way that has negative consequences for oneself, and behavior refers to the expectation that others’ intentions are good, manifested in observable actions (Kanfer 1990; Steers et al. 2004; Lai 2011). In marketing, however, trust is defined in a consumer context as the degree to which a consumer believes in the veracity of the source and content of information conveyed by a product or service. If consumers do not trust the information about a service, they will react negatively and are more likely to reject both the information and the service (Durmaz and Diyarbakırlıoğlu 2011; Levin et al. 2012; White 2015; Dong 2019).
In the context of an information system or technology service, trust is the degree to which consumers perceive it to be worthy of belief and trust. Trust in technology services is related to transparency, privacy, security, and so on. When technology services operate reliably without errors or interruptions and provide accurate services without harming or deceiving consumers, users have full trust in the technology service, and this influences positive emotions and behaviors (Gefen and Straub 2003; Van der Heijden et al. 2003; Kim et al. 2012).
Accordingly, Chen et al. (2021) found that in AI-based information services, the more diverse, accurate, and systematic the information, the stronger the effect of informativeness and quality on trust, and that trust in turn has a significant effect on performance. Li and Lu (2021) explained that the more experience a user has with a chatbot, the higher its perceived reliability. Hong (2022) argued that users with high self-efficacy and innovation orientation exhibit stronger willingness to use and trust in new technology. Kim and Kim (2020) explained that when the use of a new digital platform becomes popular, social conformity leads to lower risk perception and higher perceived reliability in using the new service. In addition, a number of studies on the technology acceptance of new services (Norzelan et al. 2024; Ismatullaev and Kim 2024; Ozili 2024; Mariani and Dwivedi 2024) show that factors that influence technology acceptance, such as perceived ease of use and usefulness, enhance the perceived reliability of new technology services, leading to their selection. Based on these prior studies, this study designed the following hypotheses:
H1. 
Individual motivation factors for using generative AI services will have a positive (+) impact on trust.
H2. 
Social motivation factors for using generative AI services will have a positive (+) impact on trust.
H3. 
Technical motivation factors for using generative AI services will have a positive (+) impact on trust.
Service acceptance attitude refers to a service user’s convergence with a target, such as a generative AI service, and their intention to continue using it (Amos and Zhang 2024). Moreover, service acceptance attitude is interpreted as the positive or negative perceptions and reactions that users have toward a service, which are influenced by the purpose of or motivation for using a generative AI service (Feuerriegel et al. 2024; Camilleri 2024; Gupta 2024). Because service acceptance attitude is determined by the user’s choices, it can be influenced by their individual, social, and technical motivations for using the service.
Most prior studies on service acceptance attitudes evaluate consumers’ intention to use generative AI services based on the TAM and the diffusion of innovations theory. Shaengchart (2023) explained that because generative AI services are an emerging technology increasingly delivered through mobile devices and applications, acceptance attitude toward them is related to users’ use motivation, perceived value, and similar factors. Acceptance attitude shapes users’ receptive feelings toward the service and their expectations of its necessity, which can lead to active utilization, encourage reuse, create conditions for preferential choice, and enable users to recommend the service to others and continue to use it.
As Hsu and Lin (2008) and Raman et al. (2022) have argued, individual motivation factors can positively influence the acceptance attitude toward generative AI services: when individuals have high self-efficacy, innovation orientation, and enjoyment of a new technology-based service, they form a positive acceptance attitude toward it. In addition, social influences shape positive attitudes toward technology through the experiences and opinions of those around the user (Hassouneh and Brengman 2014; Hamari et al. 2018; Vanduhe et al. 2020). When users perceive a technology as useful, positive attitudes toward it are formed (Price and Kadi-Hanifi 2011; Wang and Hsieh 2015; Bandi et al. 2023).
In this context, Baek and Kim (2023) explained that users’ motivation factors for choosing a service influence their attitude. Bhattacharyya et al. (2022) claimed that the use motivation factors of users of OTT services affect their intention to use the service, and Brandtzaeg and Følstad (2018) claimed that the use motivation of users of chatbot services affects their service selection and use behavior. Based on these previous studies, we hypothesize that the individual, social, and technical motivation factors that influence the use of generative AI services will also influence the acceptance attitude toward their services.
H4. 
Individual motivation factors for using generative AI services will have a positive (+) effect on acceptance attitude.
H5. 
Social motivation factors for using generative AI services will have a positive (+) effect on acceptance attitude.
H6. 
Technical motivation factors for using generative AI services will have a positive (+) effect on acceptance attitude.

2.3. Trust, Acceptance Attitude, and Continuous Use Intention

Continuous use intention refers to the degree to which a user plans to continue using a product or service (Park et al. 2010; Dehghani 2018). Adoption or acceptance, the act of starting to use a new information system or service, is an essential step for its success, but initial use must be followed by continuous use for the system or service to ultimately succeed (Cheng et al. 2020). The long-term survival and real success of an information system depend more on continuous use than on initial use. Along with satisfaction, continuous use intention is therefore recognized as an important concept in the information systems field. Most studies confirm that user satisfaction has a direct positive impact on future behavioral intentions such as continuous use intention (Wang et al. 2016; Lee 2020; Liu et al. 2022; Lv et al. 2022).
Trust is an important factor influencing the acceptance attitude toward, and continuous use intention for, information services and new technology systems, and it plays an important role in users’ acceptance of new technologies, leading to the formation of a positive attitude toward them (Gefen and Straub 2003). Specifically, trust in information refers to the degree of trust perceived by the recipient of information based on its expertise, objectivity, consistency, and so on (Van der Heijden et al. 2003; Hoehle et al. 2012; Habbal et al. 2024). Liao et al. (2011) argued that trust reduces the perceived risk of the information recipient, which in turn influences the formation of positive attitudes. Aghdaie et al. (2011) also explained that attitude change is more strongly influenced when trust in the source is higher. Ultimately, as Choung et al. (2023) argued, trust is an important factor that increases users’ use intention and their attitude toward accepting AI-based services.
Trust is also a factor that makes users continuously use new technologies. Trust motivates users to use technology more often (Ismatullaev and Kim 2024) and plays a critical role in strengthening the intention to continue using AI-based services. When users trust a technology, their intention to use it continuously increases (Shin 2021). In particular, trust has a positive impact on users’ perceived usefulness and perceived ease of use, which ultimately increases their continuous use intention (Vorobeva et al. 2024). Mogaji and Nguyen (2022) and Amos and Zhang (2024) reported that in automated systems such as generative AI, appropriate trust is critical, especially in complex and unpredictable situations, and it affects dependence on and effectiveness of the automated system. Based on these prior studies, this study designed the following hypotheses for the present study:
H7. 
Trust in using generative AI services will have a positive (+) effect on service acceptance attitude.
H8. 
Trust factor in using generative AI services will have a positive (+) effect on continuous use intention.
Acceptance attitude is also an important factor in keeping users engaged with new technologies: when users have a positive attitude toward a technology, their intention to continue using it increases (Yang and Yoo 2004). The satisfaction and positive experiences users have while using a technology reinforce acceptance, which further motivates continued use (Marangunić and Granić 2015). Notably, Ng (2024), in a study on the impact of the acceptance of AI-based services on continuous use intention, found that a positive acceptance attitude had a positive impact on continuous use intention. In this regard, Jang et al. (2023) also emphasized that trust, a positive acceptance attitude, and personalized experiences have a significant impact on continuous use intention. Wulandari et al. (2024) stated that the experience of using ChatGPT directly affects continuous use intention, and Ma and Huo (2023) explained that users’ reactions, acceptance intention, and attitude toward a chatbot affect continuous use intention. Based on these prior studies, the present study states the following hypothesis:
H9. 
Acceptance attitude of using generative AI services will have a positive (+) impact on continuous use intention.

3. Research Method

3.1. Research Model

Through an exploration of the literature, this study attempts to provide a structured examination of how motivation factors for using generative AI services relate to continuous use intention. We therefore designed a research model based on structural equation modeling, as shown in Figure 1, to determine whether the usage motivation factors, consisting of individual, social, and technical factors, affect continuous use intention as mediated by trust and acceptance attitude.

3.2. Measurement Variable and Data Collection

A survey was conducted to collect data for the analysis of the designed research model. We organized the survey questions based on previous studies as shown in Table 1. The operational variables for each of the survey components were defined as follows: First, the independent variable is the motivation factor for using generative AI services, which consists of individual, social, and technical factors. Accordingly, “individual factors” are defined as variables consisting of cognitive factors such as self-efficacy, innovation orientation, and playfulness that are influenced by individual aspects that determine the use of generative AI services. “Social factor” refers to a social environmental factor that affects service acceptance. In this study, it refers to the social awareness or image of the use of AI services that affect the use of generative AI services, and the recommendations or influences of nearby acquaintances on the use of AI services. The “technical factor” is defined as a variable that influences the motivation to use AI services based on the technical convenience or accessibility of AI services, such as the perceived ease and perceived usefulness of using generative AI services and personalization.
The mediating variables used are the variables of trust and acceptance attitude. “Trust” is defined as the user’s confidence in the usability and service quality of the generative AI service, which in turn influences continuous use intention. “Acceptance attitude” is defined as a user’s positive emotions and active attitude toward using generative AI services and is, therefore, a variable that affects continuous use. The dependent variable is “continuous use intention”, which refers to users’ willingness to continue using the generative AI service.
These defined variables were used as questions in the questionnaire, which consisted of 29 questions in total. SPSS 27.0 was used for demographic characteristics, descriptive statistics, and exploratory factor analysis. AMOS 27.0 was used to test the hypotheses by conducting confirmatory factor analysis and path analysis of the structural equation model.

3.3. Demographic Information

This study conducted an online survey of users of language-based generative AI services, such as OpenAI’s ChatGPT, Google Bard, Microsoft Bing, and Meta LLaMA, in Korea. The survey used a self-report questionnaire and was open for three weeks beginning 10 April 2023; 425 responses were collected, of which 356 were used for analysis after excluding 69 with insincere responses. Insincere responses were detected ex post with a non-intrusive method based on straight-line responding and psychometric consistency. A straight-line (long-string) response was defined as giving the same answer to 6 consecutive questions out of the 29 questions. For lack of agreement/disagreement consistency, a response was categorized as insincere when the correlation of the individual’s responses between pairs of items was under 0.3, as per the criteria proposed.
As shown in Table 2, the gender ratio of the survey population was evenly split at 51.4% male and 48.6% female. Age showed a similar use distribution with 21.3% in their 20s, 38.2% in their 30s, 25.8% in their 40s, and 14.6% in their 50s. Regarding education level, 9.3% were high school graduates; 73.8%, college graduates; and 16.9%, master’s and doctorate degree holders. In terms of occupation, 64% were office workers; 7.3%, students; 14.4%, professional jobholders; 5.3%, self-employed; and 9%, others including homemakers and unemployed people. Regarding the frequency of using generative AI, the percentage of individuals using it daily was 7.6%; 3 or more times per week, 15.4%; once or more a week, 34%; once or more a month, 21.9%; once or more every 2–3 months, 6.2%; and once to date, 14.9%. For the main type of service used, ChatGPT accounted for 41.8%; followed by Google Bard and MS Bing, 10.4%; NAVER HyperCLOVA X, 9.8%; Kakao KoGPT, 7.2%; and Meta LLaMA, 3.3%. Others, comprising 17.1%, were found to be using a variety of generative AI services.

4. Results

4.1. Analysis Results of Reliability and Validity

As shown in Table 3, the results of the reliability and convergent validity analysis of the measurement model are favorable. The motivation factors in this study have a large number of measurement items (19 items), which increases the number of parameters to estimate and can lead to problems with model fit and increased estimation error. Therefore, the partial disaggregation approach was applied. Accordingly, the domain representative method was applied to analyze three individual factors, two social factors, and three technical factors.
The standardized factor loadings ranged from 0.725 to 0.912, which is favorable, and composite reliability ranged from 0.788 to 0.901, ensuring internal reliability. The t-values were 6.5 or greater, ensuring statistical significance. The average variance extracted (AVE) ranged from 0.555 to 0.727, and Cronbach’s α from 0.748 to 0.888, ensuring convergent validity. The goodness-of-fit analysis of the measurement model revealed a χ² (df) of 200.478 and a χ²/degrees of freedom of 2.979. The goodness-of-fit index (GFI) was 0.902; the adjusted goodness-of-fit index (AGFI), 0.895; the normed fit index (NFI), 0.908; and the root mean square error of approximation (RMSEA), 0.065, indicating that the measurement model’s fit was acceptable.
As shown in Table 4, the AVE values and correlation coefficients between latent variables were analyzed, and the value of the square root of the AVE of each latent variable was greater than the correlation coefficients between latent variables, confirming discriminant validity.
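The discriminant validity criterion applied here (square root of each construct’s AVE exceeding its correlations with other constructs) is the Fornell–Larcker test, which can be sketched as follows. The AVE values and the correlation used are hypothetical, not the study’s actual estimates.

```python
import math

# Sketch: Fornell-Larcker discriminant validity check.
# AVE values and the inter-construct correlation are hypothetical.
ave = {"trust": 0.64, "acceptance": 0.70}
corr = {("trust", "acceptance"): 0.55}

def fornell_larcker_ok(ave, corr):
    # sqrt(AVE) of each construct must exceed its correlations with the others
    return all(math.sqrt(ave[a]) > r and math.sqrt(ave[b]) > r
               for (a, b), r in corr.items())

ok = fornell_larcker_ok(ave, corr)
print(ok)  # sqrt(0.64) = 0.80 and sqrt(0.70) ≈ 0.84 both exceed 0.55
```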

4.2. Analysis Results of Structural Model

As presented in Table 5, items with standardized regression weights below 0.5 were removed to increase the explanatory power of the model; in this study, all factors were confirmed to be significant, with values above 0.5. The standard error, which indicates the degree of uncertainty in a parameter estimate, ranged from 0.050 to 0.137. The goodness-of-fit of the structural model was then analyzed: χ² was 237.544, and the χ²/degrees-of-freedom ratio was 3.054. The GFI was 0.892 and the normed fit index (NFI) 0.897, both judged acceptable. The root mean square residual (RMR) was 0.033; the AGFI, 0.903; and the RMSEA, 0.076, indicating that the fit indices were acceptable. The Tucker–Lewis index (TLI), which gauges the descriptive power of a structural model, was 0.905, and the comparative fit index (CFI), which indicates the explanatory power of the model and is relatively insensitive to sample size, was 0.923, confirming that the model was appropriate.
Path analysis was performed to test the hypotheses, as shown in Table 5, and eight of the nine hypotheses were supported. Individual factors (t = 2.215, p < 0.05), social factors (t = 5.123, p < 0.001), and technical factors (t = 4.699, p < 0.001) all had positive (+) effects on trust. Likewise, individual factors (t = 5.412, p < 0.001), social factors (t = 2.560, p < 0.05), and technical factors (t = 2.454, p < 0.01) all had positive (+) effects on acceptance attitude, so Hypotheses 4–6 were supported. For Hypothesis 7, trust was found to have a positive (+) effect on acceptance attitude (t = 5.621, p < 0.001), so the hypothesis was supported. However, trust had no significant effect on continuous use intention, so that hypothesis was rejected. Finally, acceptance attitude had a positive (+) effect on continuous use intention (t = 8.323, p < 0.001), so the hypothesis was supported. Thus, acceptance attitude has a greater impact on the intention to continue using generative AI services than trust.
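The significance judgments above follow from comparing each path’s critical ratio (t-value) against the two-tailed normal cutoff of 1.96 for p < 0.05 — a convention, not a value stated in the paper. The t-values below are those reported in the text.

```python
# Sketch: checking the reported path t-values against the conventional
# two-tailed 5% cutoff (|t| >= 1.96). t-values are from the text above.
paths = {
    "individual -> trust": 2.215,
    "social -> trust": 5.123,
    "technical -> trust": 4.699,
    "individual -> acceptance": 5.412,
    "social -> acceptance": 2.560,
    "technical -> acceptance": 2.454,
    "trust -> acceptance": 5.621,
    "acceptance -> continuous use": 8.323,
}

def significant(t, cutoff=1.96):
    """True if the critical ratio exceeds the two-tailed 5% threshold."""
    return abs(t) >= cutoff

for path, t in paths.items():
    print(f"{path}: t = {t}, significant = {significant(t)}")
```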

4.3. Analysis Results of Direct and Indirect Effects

The direct, indirect, and total effects were analyzed using the Sobel test to verify the mediating effects of trust and acceptance attitude (Table 6). For the effects of the individual, social, and technical motivation factors on continuous use intention, both the direct and indirect effects mediated by trust were significant, as were the total effects; acceptance attitude likewise showed a mediating effect. In the case of trust, its effect on continuous use intention was transmitted through acceptance attitude. Thus, it can be concluded that trust does not directly affect the intention to continue using generative AI services but may affect it through the mediation of acceptance attitude.
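The Sobel test statistic used here divides the indirect effect (the product of the two path coefficients) by its approximate standard error. A minimal sketch, with hypothetical coefficients and standard errors rather than the study’s actual estimates:

```python
import math

# Sketch of the Sobel test for a mediation effect.
# a: path from predictor to mediator, b: path from mediator to outcome,
# sa, sb: their standard errors. All values below are hypothetical.
def sobel_z(a, sa, b, sb):
    """z = (a*b) / sqrt(b^2 * sa^2 + a^2 * sb^2)."""
    return (a * b) / math.sqrt(b ** 2 * sa ** 2 + a ** 2 * sb ** 2)

z = sobel_z(a=0.42, sa=0.08, b=0.51, sb=0.09)
print(round(z, 3))  # |z| > 1.96 indicates a significant indirect effect
```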

5. Discussion

This study classified the motivation for using generative AI services into individual, social, and technical factors and analyzed the effect of each factor on continuous use intention, with trust and acceptance attitude as mediating variables. The results are in line with prior research on technology acceptance and continuous use of new digital services. Gupta (2024) confirmed that the personal and technical factors of AI-based services have a positive effect on both acceptance attitude and trust in generative AI services, and that the acceptance attitude formed in this way in turn affects continuous use intention. In particular, Orchard and Tasiemski (2023) found that social influence and adaptability play a decisive role in accepting new technologies and that social factors directly affect the acceptance of generative AI services. Likewise, this study confirmed that individual users’ motivations to use generative AI services affect their acceptance and continuous use of these services. Accordingly, this study presents three detailed findings.
First, individual factors were found to have the greatest impact on acceptance attitude toward generative AI services. This means that individual motivations such as self-efficacy, innovation orientation, and playful desire play a decisive role in the formation of acceptance attitude. Individual factors reflect the direct experiences users have when interacting with a generative AI service. Self-efficacy gives users confidence in using the service, which leads to a positive acceptance attitude; innovation orientation plays a crucial role in exploring and embracing new technologies; and playful desire increases the enjoyment of using the service, which encourages continued acceptance. This shows that when users learn new features and gain tangible benefits from them, they develop positive emotions. Therefore, in the early stages of service use, how actively users will accept and utilize the service matters more than technical and social factors. Ultimately, for generative AI service users to maintain their curiosity and continue to use the service, the continuous provision of new features and content will be of paramount importance.
Second, social factors were found to be the most significant influence on trust in using generative AI services. Users are sensitive to the opinions of those around them and tend to adjust their behavior based on the evaluations and reactions of others, meaning that social conformity and image enhancement play an important role in building user trust. Social factors shape how users interact with their social environment. Social conformity describes the phenomenon whereby people who see others around them positively evaluating and using a new technology, such as a generative AI service, become more confident in it themselves. In addition, for users conscious of their social image, the prospect of enhancing that image by using the service was confirmed to have a very important influence on the formation of trust in the service and the intention to continue using it. Because social influence has a stronger impact on the acceptance, use intention, and continuous use of generative AI services than personal and technical experiences, a more in-depth study of social influence factors for generative AI services should be considered.
Third, it was found that trust affects continuous use intention not directly but through the mediation of acceptance attitude. This is consistent with previous findings that once trust is established (Wan et al. 2019), it leads to a positive acceptance attitude toward the service, which in turn leads to continuous use. Users’ trust in the technology alone does not sustain use; they must also develop a positive acceptance attitude that allows them to actually accept and actively utilize it. Therefore, trust in the service alone cannot guarantee continuous use of generative AI services; for users to continue using the service, they must form both trust in the service and an attitude of actively accepting it. Accordingly, future discussion of trust attributes that can strengthen users’ acceptance attitudes and satisfaction with generative AI services may be necessary.

6. Conclusions

6.1. Research Implications

This study’s results confirm that individual, social, and technical motivation factors all have a positive impact on trust and acceptance attitude, and that these motivators significantly affect the intention to continue using generative AI services through the mediation of trust and acceptance attitude. The results show that users are more likely to trust and accept generative AI services when they are personally motivated by enjoyment and utility, influenced by social conformity and image enhancement, and perceive the services as technologically convenient and useful. Trust and acceptance directly and indirectly improve continuous use intention. Based on these findings, there are three main implications, as follows:
First, the research shows that individual motivation has the greatest impact on the formation of acceptance attitude. This means that the enjoyment, self-efficacy, and innovation orientation that users experience in generative AI services are important. Companies need to continuously improve their services to increase users’ positive experiences, focusing on ease of use, usability, and personalization. By providing intuitive user interfaces, valuable content, and personalized services, companies can help users easily navigate the service and obtain useful information, thereby improving their satisfaction and acceptance attitude. To fulfill users’ individual motivations and create a positive acceptance attitude, it is essential to improve the usefulness and ease of use of the service so that users enjoy it more and are more likely to continue using it.
Second, the finding that social factors play a critical role in trust formation suggests that the opinions and social perceptions of those around a user strongly shape whether that user trusts a new technology or service. Therefore, organizations should make it easy for users to share their experiences and access positive feedback from other users. Social evaluations can be shared through endorsements from influential users, and social proof can be leveraged through expert reviews to build trust by driving social conformity and image enhancement. A strategy that facilitates interaction between users, creating an environment in which trust builds naturally, can also be effective.
Third, it was shown that trust does not directly affect continuous use intention but does so through the mediation of acceptance attitude. This suggests that simply trusting the technology does not lead to continuous use; users need to develop a positive acceptance attitude to keep using it. Therefore, companies should provide a trust-based user experience that encourages users to form a positive service acceptance attitude leading to continuous use. Trust can be bolstered by the safety, security, and transparency of the service, which in turn fosters the user’s acceptance attitude and positive experience.

6.2. Research Limitations and Future Plans

Generative AI services are becoming more common around the world, and based on this, various services are expanding with growing impact. In this regard, this study’s significance is that it empirically examines users’ acceptance and use intention of generative AI services. However, despite the significance of these findings, this study has the following limitations:
First, the sample is limited to users in South Korea, which limits the generalizability of the results. Samples from diverse cultural and geographic backgrounds are needed to validate the findings and provide a more comprehensive understanding. It would be useful to study the motivation and continuous use intention of generative AI service users in other countries or regions to draw global implications.
Second, this is a cross-sectional study that examined user behavior at a single point in time; it did not track changes in user motivation, trust, acceptance attitude, and continuous use intention over time. Future longitudinal studies will provide deeper insights into how user behavior and attitudes change, allowing a better understanding of how users’ perceptions of generative AI services, and their behavior and acceptance, evolve over time.
Third, while this study focused on individual, social, and technical motivations and on trust, attitude, and continuous use intention, additional variables and external factors such as perceived risk, user satisfaction, regulatory changes, and market competition need to be explored further. For example, the relationship between user satisfaction and perceived risk can be analyzed to guide service improvements, or the impact of market competition on user behavior can be explored to formulate strategic responses. This will help capture the broader context of generative AI service use and support the formulation of new strategies.

Author Contributions

Conceptualization, S.K. and B.K.; methodology, S.K.; software, B.K.; validation, S.K. and B.K.; formal analysis, B.K.; investigation, S.K. and Y.C.; resources, S.K. and Y.C.; data curation, B.K.; writing—original draft preparation, S.K. and B.K.; writing—review and editing, Y.C. and B.K.; visualization, B.K.; supervision, Y.C.; project administration, Y.C. and B.K.; funding acquisition, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Ethics Committee of aSSIST University (approval Code: The Statistics Act No. 33, on 12 July 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy reasons.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aghdaie, Seyed Fathollah Amiri, Amir Piraman, and Saeed Fathi. 2011. An analysis of factors affecting the consumer’s attitude of trust and their impact on internet purchasing behavior. International Journal of Business and Social Science 2: 147–58. [Google Scholar]
  2. Ali, Omar, Peter A. Murray, Mujtaba Momin, and Fawaz S. Al-Anzi. 2023. The knowledge and innovation challenges of ChatGPT: A scoping review. Technology in Society 75: 102402. [Google Scholar] [CrossRef]
  3. Amos, Clinton, and Lixuan Zhang. 2024. Consumer Reactions to Perceived Undisclosed Generative AI Usage in an Online Review Context. Available online: https://ssrn.com/abstract=4778082 (accessed on 3 August 2024). [CrossRef]
  4. Aysu, Semahat. 2020. The use of technology and its effects on language learning motivation. Journal of Language Research 4: 86–100. [Google Scholar]
  5. Bae, Jee-Woo, and Cheong-Yeul Park. 2015. Influence of user-motivation on user-commitment in social media: Moderating effects of social pressure. The Journal of the Korea Contents Association 15: 462–74. [Google Scholar] [CrossRef]
  6. Baek, Tae Hyun, and Minseong Kim. 2023. Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telematics and Informatics 83: 102030. [Google Scholar] [CrossRef]
  7. Bandi, Ajay, Pydi Venkata Satya Ramesh Adapa, and Yudu Eswar Vinay Pratap Kumar Kuchi. 2023. The power of generative ai: A review of requirements, models, input–output formats, evaluation metrics, and challenges. Future Internet 15: 260. [Google Scholar] [CrossRef]
  8. Bayton, James A. 1958. Motivation, cognition, learning—Basic factors in consumer behavior. Journal of Marketing 22: 282–89. [Google Scholar]
  9. Berthelot, Adrien, Eddy Caron, Mathilde Jay, and Laurent Lefèvre. 2024. Estimating the environmental impact of Generative-AI services using an LCA-based methodology. Procedia CIRP 122: 707–12. [Google Scholar] [CrossRef]
  10. Bhattacharyya, Som Sekhar, Shaileshwar Goswami, Raunak Mehta, and Bishwajit Nayak. 2022. Examining the factors influencing adoption of over the top (OTT) services among Indian consumers. Journal of Science and Technology Policy Management 13: 652–82. [Google Scholar] [CrossRef]
  11. Brandtzaeg, Petter Bae, and Asbjørn Følstad. 2018. Chatbots: Changing user needs and motivations. Interactions 25: 38–43. [Google Scholar] [CrossRef]
  12. Brewer, Gene A., Sally Coleman Selden, and Rex L. Facer Ii. 2000. Individual conceptions of public service motivation. Public Administration Review 60: 254–64. [Google Scholar] [CrossRef]
  13. Camilleri, Mark Anthony. 2024. Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework. Technological Forecasting and Social Change 201: 123247. [Google Scholar] [CrossRef]
  14. Camilleri, Mark Anthony, and Loredana Falzon. 2021. Understanding motivations to use online streaming services: Integrating the technology acceptance model (TAM) and the uses and gratifications theory (UGT). Spanish Journal of Marketing-ESIC 25: 217–38. [Google Scholar] [CrossRef]
  15. Chen, Tao, Wenshan Guo, Xian Gao, and Zhehao Liang. 2021. AI-based self-service technology in public service delivery: User experience and influencing factors. Government Information Quarterly 38: 101520. [Google Scholar] [CrossRef]
  16. Cheng, Yanxia, Saurabh Sharma, Prashant Sharma, and KMMCB Kulathunga. 2020. Role of personalization in continuous use intention of Mobile news apps in India: Extending the UTAUT2 model. Information 11: 33. [Google Scholar] [CrossRef]
  17. Choung, Hyesun, Prabu David, and Arun Ross. 2023. Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction 39: 1727–39. [Google Scholar] [CrossRef]
  18. Corre, Kevin, Olivier Barais, Gerson Sunyé, Vincent Frey, and Jean-Michel Crom. 2017. Why can’t users choose their identity providers on the web? Proceedings on Privacy Enhancing Technologies 2017: 72–86. [Google Scholar] [CrossRef]
  19. Dehghani, Milad. 2018. Exploring the motivational factors on continuous usage intention of smartwatches among actual users. Behaviour & Information Technology 37: 145–58. [Google Scholar]
  20. Dong, Xiaozhou. 2019. A study on the relationship among customer behavior stickiness, motivation of shopping and customer value in the online shopping. Journal of Contemporary Marketing Science 2: 196–216. [Google Scholar]
  21. Durmaz, Yakup, and Ibrahim Diyarbakırlıoğlu. 2011. A Theoritical Approach to the Strength of Motivation in Customer Behavior. Global Journal of Human Social Science 11: 36–42. [Google Scholar]
  22. Elmashhara, Maher Georges, Roberta De Cicco, Susana C. Silva, Maik Hammerschmidt, and Maria Levi Silva. 2024. How gamifying AI shapes customer motivation, engagement, and purchase behavior. Psychology & Marketing 41: 134–50. [Google Scholar]
  23. Euchner, Jim. 2023. Generative AI. Research-Technology Management 66: 71–74. [Google Scholar] [CrossRef]
  24. Ferraro, Carla, Vlad Demsar, Sean Sands, Mariluz Restrepo, and Colin Campbell. 2024. The paradoxes of generative AI-enabled customer service: A guide for managers. Business Horizons 67: 549–59. [Google Scholar] [CrossRef]
  25. Feuerriegel, Stefan, Jochen Hartmann, Christian Janiesch, and Patrick Zschech. 2024. Generative AI. Business & Information Systems Engineering 66: 111–26. [Google Scholar]
  26. Fiona, Fui-Hoon Nah, Ruilin Zheng, Jingyuan Cai, Keng Siau, and Langtao Chen. 2023. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research 25: 277–304. [Google Scholar]
  27. Fullerton, Ronald A. 2013. The birth of consumer behavior: Motivation research in the 1940s and 1950s. Journal of Historical Research in Marketing 5: 212–22. [Google Scholar] [CrossRef]
  28. Gefen, David, and Detmar Straub. 2003. Managing user trust in B2C e-services. e-Service 2: 7–24. [Google Scholar] [CrossRef]
  29. Goldman Sachs. 2023. Generative AI Could Raise Global GDP by 7%. Available online: https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html (accessed on 24 April 2024).
  30. Gupta, Ruchi, Kiran Nair, Mahima Mishra, Blend Ibrahim, and Seema Bhardwaj. 2024. Adoption and impacts of generative artificial intelligence: Theoretical underpinnings and research agenda. International Journal of Information Management Data Insights 4: 100232. [Google Scholar] [CrossRef]
  31. Gupta, Swati, Alhamzah F. Abbas, and Rajeev Srivastava. 2022. Technology Acceptance Model (TAM): A bibliometric analysis from inception. Journal of Telecommunications and the Digital Economy 10: 77–106. [Google Scholar] [CrossRef]
  32. Gupta, Varun. 2024. An empirical evaluation of a generative artificial intelligence technology adoption model from entrepreneurs’ perspectives. Systems 12: 103. [Google Scholar] [CrossRef]
  33. Habbal, Adib, Mohamed Khalif Ali, and Mustafa Ali Abuzaraida. 2024. Artificial intelligence trust, risk and security management (AI trism): Frameworks, applications, challenges and future research directions. Expert Systems with Applications 240: 122442. [Google Scholar] [CrossRef]
  34. Hamari, Juho, Lobna Hassan, and Antonio Dias. 2018. Gamification, quantified-self or social networking? Matching users’ goals with motivational technology. User Modeling and User-Adapted Interaction 28: 35–74. [Google Scholar] [CrossRef]
  35. Hassouneh, Diana, and Malaika Brengman. 2014. A motivation-based typology of social virtual world users. Computers in Human Behavior 33: 330–38. [Google Scholar] [CrossRef]
  36. Hernandez, Blanca, Teresa Montaner, F. Javier Sese, and Pilar Urquizu. 2011. The role of social motivations in e-learning: How do they affect usage and success of ICT interactive tools? Computers in Human Behavior 27: 2224–32. [Google Scholar] [CrossRef]
  37. Hoehle, Hartmut, Sid Huff, and Sigi Goode. 2012. The role of continuous trust in information systems continuance. Journal of Computer Information Systems 52: 1–9. [Google Scholar]
  38. Hong, Joo-Wha. 2022. I was born to love AI: The influence of social status on AI self-efficacy and intentions to use AI. International Journal of Communication 16: 172–91. [Google Scholar]
  39. Hsu, Chin-Lung, and Judy Chuan-Chuan Lin. 2008. Acceptance of blog usage: The roles of technology acceptance, social influence and knowledge sharing motivation. Information & management 45: 65–74. [Google Scholar]
  40. Ismatullaev, Ulugbek Vahobjon Ugli, and Sang-Ho Kim. 2024. Review of the factors affecting acceptance of AI-infused systems. Human Factors 66: 126–44. [Google Scholar] [CrossRef]
  41. Jacobsen, Christian Bøtcher, Johan Hvitved, and Lotte Bøgh Andersen. 2014. Command and motivation: How the perception of external interventions relates to intrinsic motivation and public service motivation. Public Administration 92: 790–806. [Google Scholar] [CrossRef]
  42. Jang, Changki, Deokwon Heo, and WookJoon Sung. 2023. Effects on the continuous use intention of AI-based voice assistant services: Focusing on the interaction between trust in AI and privacy concerns. Informatization Policy 30: 22–45. [Google Scholar]
  43. Kabalisa, Rene, and Jörn Altmann. 2021. AI technologies and motives for AI adoption by countries and firms: A systematic literature review. In Economics of Grids, Clouds, Systems, and Services: 18th International Conference, GECON 2021, Virtual Event, September 21–23, 2021, Proceedings 18. Cham: Springer International Publishing. [Google Scholar]
  44. Kanfer, Ruth. 1990. Motivation theory and industrial and organizational psychology. Handbook of Industrial and Organizational Psychology 1: 75–130. [Google Scholar]
  45. Kenthapadi, Krishnaram, Himabindu Lakkaraju, and Nazneen Rajani. 2023. Generative ai meets responsible ai: Practical challenges and opportunities. Paper presented at 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, August 6–10. [Google Scholar]
  46. Kim, Jang Hyun, Min-Sun Kim, and Yoonjae Nam. 2010. An analysis of self-construals, motivations, Facebook use, and user satisfaction. Intl. Journal of Human–Computer Interaction 26: 1077–99. [Google Scholar] [CrossRef]
  47. Kim, Ju Yeon, Jung P. Shim, and Ahn Kyung Mo. 2011. Social networking service: Motivation, pleasure, and behavioral intention to use. Journal of Computer Information Systems 51: 92–101. [Google Scholar]
  48. Kim, Jungsun, Natasa Christodoulidou, and Pearl Brewer. 2012. Impact of individual differences and consumers’ readiness on likelihood of using self-service technologies at hospitality settings. Journal of Hospitality & Tourism Research 36: 85–114. [Google Scholar]
  49. Kim, Yoojin, and Boyoung Kim. 2020. Selection attributes of innovative digital platform-based subscription services: A case of South Korea. Journal of Open Innovation: Technology, Market, and Complexity 6: 70. [Google Scholar] [CrossRef]
  50. Korzynski, Pawel, Grzegorz Mazurek, Andreas Altmann, Joanna Ejdys, Ruta Kazlauskaite, Joanna Paliszkiewicz, Krzysztof Wach, and Ewa Ziemba. 2023. Generative artificial intelligence as a new context for management theories: Analysis of ChatGPT. Central European Management Journal 31: 3–13. [Google Scholar] [CrossRef]
  51. Lai, Emily R. 2011. Motivation: A literature review. Person Research’s Report 6: 40–41. [Google Scholar]
  52. Larsen, Tor J., Anne M. Sørebø, and Øystein Sørebø. 2009. The role of task-technology fit as users’ motivation to continue information system use. Computers in Human Behavior 25: 778–84. [Google Scholar] [CrossRef]
  53. Lee, Heejun, Miyeon Ha, Sujeong Kwon, Yealin Shim, and Jinwoo Kim. 2019. A study on consumers’ perception of and use motivation of artificial intelligence (AI) speaker. The Journal of the Korea Contents Association 19: 138–54. [Google Scholar]
  54. Lee, Jung-Chieh, and Xueqing Chen. 2022. Exploring users’ adoption intentions in the evolution of artificial intelligence mobile banking applications: The intelligent and anthropomorphic perspectives. International Journal of Bank Marketing 40: 631–58. [Google Scholar] [CrossRef]
  55. Lee, Young-Chan. 2020. Artificial intelligence and continuous usage intention: Evidence from a Korean online job information platform. Business Communication Research and Practice 3: 86–95. [Google Scholar] [CrossRef]
  56. Levin, Michael A., Jared M. Hansen, and Debra A. Laverie. 2012. Toward understanding new sales employees’ participation in marketing-related technology: Motivation, voluntariness, and past performance. Journal of Personal Selling & Sales Management 32: 379–93. [Google Scholar]
  57. Li, Fan, and Yuan Lu. 2021. Engaging end users in an ai-enabled smart service design-the application of the smart service blueprint scape (SSBS) framework. Proceedings of the Design Society 1: 1363–72. [Google Scholar] [CrossRef]
  58. Liao, Chechen, Chuang-Chun Liu, and Kuanchin Chen. 2011. Examining the impact of privacy, trust and risk perceptions beyond monetary transactions: An integrated model. Electronic Commerce Research and Applications 10: 702–15. [Google Scholar] [CrossRef]
  59. Lim, Weng Marc, Asanka Gunasekara, Jessica Leigh Pallant, Jason Ian Pallant, and Ekaterina Pechenkina. 2023. Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education 21: 100790. [Google Scholar]
  60. Liu, Weiyan. 2015. A historical overview of uses and gratifications theory. Cross-Cultural Communication 11: 71–78. [Google Scholar]
  61. Liu, Xiaohui, Xiaoyu He, Mengmeng Wang, and Huizhang Shen. 2022. What influences patients’ continuance intention to use AI-powered service robots at hospitals? The role of individual characteristics. Technology in Society 70: 101996. [Google Scholar] [CrossRef]
  62. Lu, June, James E. Yao, and Chun-Sheng Yu. 2005. Personal innovativeness, social influences and adoption of wireless Internet services via mobile technology. The Journal of Strategic Information Systems 14: 245–68. [Google Scholar] [CrossRef]
  63. Lv, Xingyang, Yufan Yang, Dazhi Qin, Xingping Cao, and Hong Xu. 2022. Artificial intelligence service recovery: The role of empathic response in hospitality customers’ continuous usage intention. Computers in Human Behavior 126: 106993. [Google Scholar] [CrossRef]
  64. Ma, Jiaojiao, Pengcheng Wang, Benqian Li, Tian Wang, Xiang Shan Pang, and Dake Wang. 2024. Exploring user adoption of ChatGPT: A technology acceptance model perspective. International Journal of Human–Computer Interaction 40: 1–15. [Google Scholar] [CrossRef]
  65. Ma, Xiaoyue, and Yudi Huo. 2023. Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework. Technology in Society 75: 102362. [Google Scholar] [CrossRef]
  66. Marangunić, Nikola, and Andrina Granić. 2015. Technology acceptance model: A literature review from 1986 to 2013. Universal Access in the Information Society 14: 81–95. [Google Scholar] [CrossRef]
  67. Mariani, Marcello, and Yogesh K. Dwivedi. 2024. Generative artificial intelligence in innovation management: A preview of future research developments. Journal of Business Research 175: 114542. [Google Scholar] [CrossRef]
  68. McKinsey. 2024. The Economic Potential of Generative AI: The Next Productivity Frontier. Available online: https://www.mckinsey.com/featured-insights/mckinsey-live/webinars/the-economic-potential-of-generative-ai-the-next-productivity-frontier (accessed on 23 April 2024).
  69. Mogaji, Emmanuel, and Nguyen Phong Nguyen. 2022. Managers’ understanding of artificial intelligence in relation to marketing financial services: Insights from a cross-country study. International Journal of Bank Marketing 40: 1272–98. [Google Scholar] [CrossRef]
  70. Naeem, Rimsha, Marko Kohtamäki, and Vinit Parida. 2024. Artificial intelligence enabled product–service innovation: Past achievements and future directions. Review of Managerial Science 18: 1–44. [Google Scholar] [CrossRef]
  71. Ng, Yu-Leung. 2024. A longitudinal model of continued acceptance of conversational artificial intelligence. Information Technology & People, ahead-of-print. [Google Scholar] [CrossRef]
  72. Norzelan, Nur Azira, Intan Salwani Mohamed, and Maslinawati Mohamad. 2024. Technology acceptance of artificial intelligence (AI) among heads of finance and accounting units in the shared service industry. Technological Forecasting and Social Change 198: 123022. [Google Scholar] [CrossRef]
  73. Ooi, Keng-Boon, Garry Wei-Han, Tan Mostafa Al-Emran, and Mohammed Al-Sharafi. 2023. The potential of generative artificial intelligence across disciplines: Perspectives and future directions. Journal of Computer Information Systems 64: 1–32. [Google Scholar] [CrossRef]
  74. Orchard, Tim, and Leszek Tasiemski. 2023. The rise of generative AI and possible effects on the economy. Economics and Business Review 9: 9–26. [Google Scholar] [CrossRef]
  75. Osatuyi, Babajide, and Ofir Turel. 2019. Social motivation for the use of social technologies: An empirical examination of social commerce site users. Internet Research 29: 24–45. [Google Scholar] [CrossRef]
  76. Oyserman, Daphna. 2009. Identity-based motivation and consumer behavior. Journal of Consumer Psychology 19: 276–79. [Google Scholar] [CrossRef]
  77. Ozili, Peterson K. 2024. Technology impact model: A transition from the technology acceptance model. AI & SOCIETY 39: 1–3. [Google Scholar] [CrossRef]
  78. Pan, Xiaoquan. 2020. Technology acceptance, technological self-efficacy, and attitude toward technology-based self-directed learning: Learning motivation as a mediator. Frontiers in Psychology 11: 564294. [Google Scholar] [CrossRef] [PubMed]
  79. Pandey, Sumit, and Srishti Sharma. 2023. A comparative study of retrieval-based and generative-based chatbots using deep learning and machine learning. Healthcare Analytics 3: 100198. [Google Scholar] [CrossRef]
  80. Park, JaeSung, JaeJon Kim, and Joon Koh. 2010. Determinants of continuous usage intention in web analytics services. Electronic Commerce Research and Applications 9: 61–72. [Google Scholar] [CrossRef]
  81. Pedrotti, Maxime, and Nicolae Nistor. 2016. User motivation and technology acceptance in online learning environments. In Adaptive and Adaptable Learning: 11th European Conference on Technology Enhanced Learning, EC-TEL 2016, Lyon, France, September 13–16, 2016, Proceedings 11. Cham: Springer International Publishing. [Google Scholar]
  82. Pleger, Lyn E., Alexander Mertes, Andrea Rey, and Caroline Brüesch. 2020. Allowing users to pick and choose: A conjoint analysis of end-user preferences of public e-services. Government Information Quarterly 37: 101473. [Google Scholar] [CrossRef]
  83. Posada, Julián Esteban Gutiérrez, Elaine CS Hayashi, and M. Cecília C. Baranauskas. 2014. On feelings of comfort, motivation and joy that GUI and TUI evoke. In Design, User Experience, and Usability. User Experience Design Practice: Third International Conference, DUXU 2014, Held as Part of HCI International 2014, Heraklion, Crete, Greece, June 22–27, 2014, Proceedings, Part IV 3. Cham: Springer International Publishing. [Google Scholar]
  84. Preece, Jennifer, and Ben Shneiderman. 2009. The reader-to-leader framework: Motivating technology-mediated social participation. AIS Transactions on Human-Computer Interaction 1: 13–32. [Google Scholar] [CrossRef]
  85. Price, Fiona, and Karima Kadi-Hanifi. 2011. E-motivation! The role of popular technology in student motivation and retention. Research in Post-Compulsory Education 16: 173–87. [Google Scholar] [CrossRef]
  86. Raman, Arumugam, Raamani Thannimalai, Mohan Rathakrishnan, and Siti Noor Ismail. 2022. Investigating the Influence of Intrinsic Motivation on Behavioral Intention and Actual Use of Technology in Moodle Platforms. International Journal of Instruction 15: 1003–24. [Google Scholar] [CrossRef]
  87. Ruggiero, Thomas E. 2000. Uses and gratifications theory in the 21st century. Mass Communication & Society 3: 3–37. [Google Scholar]
  88. Sætra, Henrik Skaug. 2023. Generative AI: Here to stay, but for good? Technology in Society 75: 102372. [Google Scholar] [CrossRef]
  89. Saif, Naveed, Sajid Ullah Khan, Imrab Shaheen, Faiz Abdullah ALotaibi, Mrim M. Alnfiai, and Mohammad Arif. 2024. Chat-GPT; validating Technology Acceptance Model (TAM) in education sector via ubiquitous learning mechanism. Computers in Human Behavior 154: 108097. [Google Scholar] [CrossRef]
  90. Salloum, Said A., Rose A. Aljanada, Aseel M. Alfaisal, Mohammed Rasol Al Saidat, and Raghad Alfaisal. 2024. Exploring the Acceptance of ChatGPT for Translation: An Extended TAM Model Approach. Artificial Intelligence in Education: The Power and Dangers of ChatGPT in the Classroom 144: 527–42. [Google Scholar]
  91. Schmid, Yvonne, and Michael Dowling. 2020. New work: New motivation? A comprehensive literature review on the impact of workplace technologies. Management Review Quarterly 72: 1–28. [Google Scholar] [CrossRef]
  92. Schunk, Dale H. 1995. Self-efficacy, motivation, and performance. Journal of Applied Sport Psychology 7: 112–37. [Google Scholar] [CrossRef]
  93. Schunk, Dale H., and Maria K. DiBenedetto. 2021. Self-efficacy and human motivation. Advances in Motivation Science 8: 153–79. [Google Scholar]
  94. Shaengchart, Yarnaphat. 2023. A conceptual review of TAM and ChatGPT usage intentions among higher education students. Advance Knowledge for Executives 2: 1–7. [Google Scholar]
  95. Shin, Donghee. 2021. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies 146: 102551. [Google Scholar] [CrossRef]
  96. Siddiqui, Sohni, Martin Thomas, and Naureen Nazar Soomro. 2020. Technology integration in education: Source of intrinsic motivation, self-efficacy and performance. Journal of E-learning and Knowledge Society 16: 11–22. [Google Scholar]
  97. Solomovich, Lior, and Villy Abraham. 2024. Exploring the influence of ChatGPT on tourism behavior using the technology acceptance model. Tourism Review, ahead-of-print. [Google Scholar] [CrossRef]
  98. Ståhlbröst, Anna, and Birgitta Bergvall-Kåreborn. 2011. Exploring users motivation in innovation communities. International Journal of Entrepreneurship and Innovation Management 14: 298–314. [Google Scholar] [CrossRef]
  99. Stanford University’s Human-Centered Artificial Intelligence. 2023. Generative AI: Perspectives from Stanford HAI (Human-Centered Artificial Intelligence). Available online: https://hai.stanford.edu/generative-ai-perspectives-stanford-hai (accessed on 3 June 2024).
  100. Steers, Richard M., Richard T. Mowday, and Debra L. Shapiro. 2004. The future of work motivation theory. Academy of Management Review 29: 379–87. [Google Scholar] [CrossRef]
  101. Stock, Ruth Maria, Pedro Oliveira, and Eric Von Hippel. 2015. Impacts of hedonic and utilitarian user motives on the innovativeness of user-developed solutions. Journal of Product Innovation Management 32: 389–403. [Google Scholar] [CrossRef]
  102. Sun, Yacheng, Xiaojing Dong, and Shelby McIntyre. 2017. Motivation of user-generated content: Social connectedness moderates the effects of monetary rewards. Marketing Science 36: 329–37. [Google Scholar] [CrossRef]
  103. Tlili, Ahmed, Boulus Shehata, Michael Agyemang Adarkwah, Aras Bozkurt, Daniel T. Hickey, Ronghuai Huang, and Brighter Agyemang. 2023. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments 10: 15–39. [Google Scholar] [CrossRef]
  104. Van der Heijden, Hans, Tibert Verhagen, and Marcel Creemers. 2003. Understanding online purchase intentions: Contributions from technology and trust perspectives. European Journal of Information Systems 12: 41–48. [Google Scholar] [CrossRef]
  105. Vanduhe, Vanye Zira, Muesser Nat, and Hasan Fahmi Hasan. 2020. Continuance intentions to use gamification for training in higher education: Integrating the technology acceptance model (TAM), social motivation, and task technology fit (TTF). IEEE Access 8: 21473–84. [Google Scholar] [CrossRef]
  106. Varghese, Julian, and Julius Chapiro. 2024. ChatGPT: The transformative influence of generative AI on science and healthcare. Journal of Hepatology 80: 977–80. [Google Scholar] [CrossRef]
  107. Vorobeva, Darina, Diego Costa Pinto, Nuno António, and Anna S. Mattila. 2024. The augmentation effect of artificial intelligence: Can AI framing shape customer acceptance of AI-based services? Current Issues in Tourism 27: 1551–71. [Google Scholar] [CrossRef]
  108. Walker, Rhett H., and Lester W. Johnson. 2006. Why consumers use and do not use technology-enabled services. Journal of Services Marketing 20: 125–35. [Google Scholar] [CrossRef]
  109. Wan, Jihong, Xiaoliang Chen, Yajun Du, and Mengmeng Jia. 2019. Information propagation model based on hybrid social factors of opportunity, trust and motivation. Neurocomputing 333: 169–84. [Google Scholar] [CrossRef]
  110. Wang, Edward Shih-Tse, and Nicole Pei-Yu Chou. 2016. Examining social influence factors affecting consumer continuous usage intention for mobile social networking applications. International Journal of Mobile Communications 14: 43–55. [Google Scholar] [CrossRef]
  111. Wang, Tzong-Song, and Sheng-Wen Hsieh. 2015. An assessment of individual and technological factors for computing validation: Motivation and social processes. Revista de Cercetare si Interventie Sociala 50: 156–71. [Google Scholar]
  112. Waterman, Alan S., Seth J. Schwartz, and Regina Conti. 2008. The implications of two conceptions of happiness (hedonic enjoyment and eudaimonia) for the understanding of intrinsic motivation. Journal of Happiness Studies 9: 41–79. [Google Scholar] [CrossRef]
  113. White, Christopher. 2015. The impact of motivation on customer satisfaction formation: A self-determination perspective. European Journal of Marketing 49: 1923–40. [Google Scholar] [CrossRef]
  114. Wulandari, Ajeng Ayu, Noviawan Rasyid Ohorella, and Titih Nurhaipah. 2024. Perceived Ease of Use and User Experience Using Chat GPT. JIKA (Jurnal Ilmu Komunikasi Andalan) 7: 52–75. [Google Scholar] [CrossRef]
  115. Yang, Hee-dong, and Youngjin Yoo. 2004. It’s all about attitude: Revisiting the technology acceptance model. Decision Support Systems 38: 19–31. [Google Scholar] [CrossRef]
  116. Yilmaz, Ramazan, and Fatma Gizem Karaoglan Yilmaz. 2023. The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy and motivation. Computers and Education: Artificial Intelligence 4: 100147. [Google Scholar] [CrossRef]
  117. Zhang, Peng, and Maged N. Kamel Boulos. 2023. Generative AI in medicine and healthcare: Promises, opportunities and challenges. Future Internet 15: 286. [Google Scholar] [CrossRef]
  118. Zou, Min, and Liang Huang. 2023. To use or not to use? Understanding doctoral students’ acceptance of ChatGPT in writing through technology acceptance model. Frontiers in Psychology 14: 1259531. [Google Scholar] [CrossRef]
Figure 1. Research model.
Table 1. Variable definitions and measurement items.

| Factors | Measurement Items | References |
|---|---|---|
| Individual factors | I do not have much trouble using generative AI services. | Kim et al. (2012), Wang and Hsieh (2015), Brewer et al. (2000), Liu et al. (2022) |
| | I tend to use generative AI services efficiently. | |
| | I have been a quick starter with generative AI services compared to others. | |
| | I am interested in locating the latest information through generative AI services. | |
| | I tend to try to learn new features of generative AI services. | |
| | Generative AI services are fun to use. | |
| | Using generative AI services satisfies my curiosity. | |
| Social factors | I think most people of my generation use generative AI services. | Kim et al. (2011), Stock et al. (2015), Sun et al. (2017), Osatuyi and Turel (2019) |
| | Most people around me use generative AI services. | |
| | I think society as a whole uses generative AI services. | |
| | I think people who use generative AI services are more knowledgeable. | |
| | I think people who use generative AI services get more attention. | |
| | I think people who use generative AI services will be economically wealthy. | |
| Technical factors | Generative AI services are easy to use. | Hsu and Lin (2008), Larsen et al. (2009), Pan (2020), Camilleri (2024) |
| | Generative AI service features are easy to control. | |
| | Generative AI services can be used flexibly in a variety of ways. | |
| | Generative AI services can help with processing things faster. | |
| | Generative AI services can help with increasing productivity at work. | |
| | The information or service provided by a generative AI service is useful to me. | |
| | Generative AI services are convenient because they are personalized. | |
| Trust | Generative AI services are generally trustworthy. | Van der Heijden et al. (2003), Hoehle et al. (2012), Baek and Kim (2023) |
| | The information provided by generative AI services is trustworthy. | |
| | I trust generative AI services to provide me with the information I want. | |
| Acceptance attitude | I am in favor of using generative AI services. | Hsu and Lin (2008), Vanduhe et al. (2020), Ma et al. (2024) |
| | I utilize generative AI services actively. | |
| | I use generative AI services for a variety of purposes. | |
| Continuous use intention | I will continue to use generative AI services in the future. | Van der Heijden et al. (2003), Wang et al. (2016), Lee (2020) |
| | I would prioritize generative AI services over other services. | |
| | I would highly recommend the generative AI services I currently use to others. | |
Table 2. Demographic information of survey participants.

| Category | | Number of Responses | Percentage (%) |
|---|---|---|---|
| Gender | Male | 183 | 51.4 |
| | Female | 173 | 48.6 |
| | Total | 356 | 100 |
| Age (in years) | 20s | 76 | 21.3 |
| | 30s | 136 | 38.3 |
| | 40s | 92 | 25.8 |
| | 50s | 52 | 14.6 |
| | Total | 356 | 100 |
| Education level | High school graduates | 32 | 9.3 |
| | College graduates | 263 | 73.8 |
| | Master's and doctorate graduates | 61 | 16.9 |
| | Total | 356 | 100 |
| Occupation | Office workers | 228 | 64.0 |
| | Students | 26 | 7.3 |
| | Professionals * | 51 | 14.4 |
| | Self-employed | 19 | 5.3 |
| | Other | 32 | 9.0 |
| | Total | 356 | 100 |
| Frequency of generative AI service use | Daily | 27 | 7.6 |
| | Three or more times per week | 55 | 15.4 |
| | Once or more a week | 121 | 34.0 |
| | Once or more a month | 78 | 21.9 |
| | Once or more every 2–3 months | 22 | 6.2 |
| | Once to date | 53 | 14.9 |
| | Total | 356 | 100 |
| Generative AI services used | ChatGPT | 148 | 41.8 |
| | Google Bard | 37 | 10.4 |
| | Meta LLaMA | 12 | 3.3 |
| | MS Bing | 37 | 10.4 |
| | NAVER HyperCLOVA | 35 | 9.8 |
| | Kakao KoGPT | 26 | 7.2 |
| | Other | 61 | 17.1 |
| | Total | 356 | 100 |

* Professional job groups: Government/Medical/Finance/Consulting/Education/Arts/Telecommunications, etc.
Table 3. Results of reliability and convergent validity test.

| Variables | Measurement Items | Standard Loading Value | Standard Error | t-Value (p) | CR | AVE | Cronbach α |
|---|---|---|---|---|---|---|---|
| Individual factors | IM1 | 0.725 | | | 0.788 | 0.555 | 0.798 |
| | IM2 | 0.845 | 0.121 | 11.932 *** | | | |
| | IM3 | 0.954 | 0.109 | 12.157 *** | | | |
| Social factors | SM1 | 0.915 | | | 0.797 | 0.634 | 0.886 |
| | SM2 | 0.896 | 0.016 | 13.133 *** | | | |
| Technical factors | TM1 | 0.898 | | | 0.856 | 0.585 | 0.845 |
| | TM2 | 0.796 | 0.052 | 15.668 *** | | | |
| | TM3 | 0.790 | 0.041 | 11.452 *** | | | |
| Trust | TU1 | 0.756 | | | 0.834 | 0.580 | 0.891 |
| | TU2 | 0.845 | 0.091 | 12.213 *** | | | |
| | TU3 | 0.877 | 0.087 | 12.440 *** | | | |
| Acceptance attitude | AA1 | 0.884 | | | 0.901 | 0.727 | 0.888 |
| | AA2 | 0.897 | 0.045 | 18.396 *** | | | |
| | AA3 | 0.848 | 0.055 | 17.758 *** | | | |
| Continuous use intention | SU1 | 0.912 | | | 0.898 | 0.583 | 0.748 |
| | SU2 | 0.802 | 0.046 | 12.048 *** | | | |
| | SU3 | 0.874 | 0.054 | 12.430 *** | | | |

Measurement model fit: χ²(df) = 200.478, χ²/degree of freedom = 2.979, RMS = 0.030, GFI = 0.902, AGFI = 0.895, NFI = 0.908, TLI = 0.917, CFI = 0.905, RMSEA = 0.065. *** p < 0.001.
Table 4. Correlation matrix and AVE.

| Variables | IF | SF | TF | Tru | AA | CUI |
|---|---|---|---|---|---|---|
| Individual factors (IF) | **0.745** | | | | | |
| Social factors (SF) | 0.392 | **0.796** | | | | |
| Technical factors (TF) | 0.655 ** | 0.408 | **0.765** | | | |
| Trust (Tru) | 0.662 | 0.545 | 0.722 | **0.771** | | |
| Acceptance attitude (AA) | 0.422 | 0.435 ** | 0.588 ** | 0.607 ** | **0.762** | |
| Continuous use intention (CUI) | 0.573 ** | 0.357 ** | 0.676 ** | 0.666 ** | 0.558 | **0.853** |

Note: ** p < 0.01. The square root of the AVE is shown in bold on the diagonal.
Table 5. Results of hypothesis test.

| Hypothesis | Path | SRW * | Standard Error | t-Value (p) | Support |
|---|---|---|---|---|---|
| H1 | Individual factors → Trust | 0.772 | 0.113 | 2.215 * | Adopted |
| H2 | Social factors → Trust | 0.678 | 0.050 | 5.123 *** | Adopted |
| H3 | Technical factors → Trust | 0.609 | 0.102 | 4.699 | Adopted |
| H4 | Individual factors → Acceptance attitude | 0.693 | 0.123 | 5.412 *** | Adopted |
| H5 | Social factors → Acceptance attitude | 0.509 | 0.137 | 2.560 * | Adopted |
| H6 | Technical factors → Acceptance attitude | 0.724 | 0.108 | 2.454 | Adopted |
| H7 | Trust → Acceptance attitude | 0.693 | 0.123 | 5.621 *** | Adopted |
| H8 | Trust → Continuous use intention | 1.065 | 0.051 | 1.129 | Rejected |
| H9 | Acceptance attitude → Continuous use intention | 1.166 | 0.110 | 8.323 *** | Adopted |

Structural model fit: χ²(df) = 237.544, χ²/degree of freedom = 3.054, RMS = 0.033, GFI = 0.892, AGFI = 0.903, NFI = 0.897, TLI = 0.905, CFI = 0.923, RMSEA = 0.076. Note: * p < 0.05, *** p < 0.001. * SRW: standardized regression weights.
Table 6. Results table of direct and indirect effects.

| Hypothesis (Path) | Direct Effects | Indirect Effects | Total Effect |
|---|---|---|---|
| Individual factors → Trust | 2.215 * | - | 2.215 * |
| Social factors → Trust | 5.123 *** | - | 5.123 *** |
| Technical factors → Trust | 4.699 | - | 4.699 |
| Individual factors → Trust → Continuous use intention | 0.118 *** | 0.179 ** | 0.297 ** |
| Social factors → Trust → Continuous use intention | 0.211 ** | 0.157 * | 0.368 ** |
| Technical factors → Trust → Continuous use intention | 0.172 * | 0.131 * | 0.303 * |
| Individual factors → Acceptance attitude | 5.412 *** | - | 5.412 *** |
| Social factors → Acceptance attitude | 2.560 * | - | 2.560 * |
| Technical factors → Acceptance attitude | 2.454 | - | 2.454 |
| Individual factors → Acceptance attitude → Continuous use intention | 0.211 ** | | 0.211 ** |
| Social factors → Acceptance attitude → Continuous use intention | 0.328 *** | 0.107 * | 0.435 *** |
| Technical factors → Acceptance attitude → Continuous use intention | 0.207 * | 0.112 * | 0.319 * |
| Trust → Continuous use intention | 1.129 | - | 1.129 |
| Acceptance attitude → Continuous use intention | 8.323 *** | - | 8.323 *** |
| Trust → Acceptance attitude → Continuous use intention | 0.145 * | 0.122 * | 0.267 * |

Note: * p < 0.05, ** p < 0.01, *** p < 0.001.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Kang, S.; Choi, Y.; Kim, B. Impact of Motivation Factors for Using Generative AI Services on Continuous Use Intention: Mediating Trust and Acceptance Attitude. Soc. Sci. 2024, 13, 475. https://doi.org/10.3390/socsci13090475
