Article

Exploring Factors Influencing ChatGPT-Assisted Learning Satisfaction from an Information Systems Success Model Perspective: The Case of Art and Design Students

1 School of Design, Jiangnan University, Wuxi 214122, China
2 School of Digital Technology & Innovation Design, Jiangnan University, Wuxi 214122, China
3 College of Fine Arts, Huaqiao University, Quanzhou 362021, China
4 School of Economics and Management, Xiamen University of Technology, Xiamen 361024, China
5 Faculty of Innovation and Design, City University of Macau, Macau SAR 999078, China
* Author to whom correspondence should be addressed.
Systems 2026, 14(1), 7; https://doi.org/10.3390/systems14010007
Submission received: 16 October 2025 / Revised: 25 November 2025 / Accepted: 15 December 2025 / Published: 20 December 2025

Abstract

As education undergoes digital transformation, ChatGPT-4 has emerged as one of the most visible tools of generative artificial intelligence. While widely discussed, its impact on student satisfaction and learning outcomes in higher education remains underexplored. This study investigates the factors that shape art and design students’ satisfaction when using ChatGPT to support coursework. Unlike previous research focusing on ChatGPT adoption behavior, this study extends the Information Systems Success Model (ISSM) to the context of art and design education. Drawing on 435 valid survey responses, we employed a mixed-methods approach. Partial Least Squares Structural Equation Modeling (PLS-SEM) was first applied to examine how system quality, compatibility, personal innovativeness, and perceived usefulness influence satisfaction directly and through mediating mechanisms. To complement this, fuzzy-set Qualitative Comparative Analysis (fsQCA) was used to identify multiple combinations of conditions that lead to high satisfaction. The findings show that compatibility, perceived usefulness, and personal innovativeness significantly enhance satisfaction, with path coefficients of 0.378, 0.342, and 0.155, respectively. Importance–Performance Map Analysis (IPMA) further highlights personal innovativeness and system quality as critical drivers. By providing both theoretical and practical insights, this study contributes to the growing body of research on generative AI in art and design education and informs the design of courses and digital learning tools.

1. Introduction

In recent years, generative Artificial Intelligence (AI) technology, represented by ChatGPT, has developed rapidly. It can instantly respond to personalized needs and generate diverse informational content such as text, images, and audio, bringing a revolutionary impact to the global education system [1]. Particularly in higher education, AI tools have been widely applied in various stages, including curriculum design [2], knowledge generation [3], creative ideation [4], and assignment evaluation [5], profoundly changing the teaching models of educators and the learning methods of students [6,7,8]. It is thus evident that generative AI tools like ChatGPT have gradually become one of the core information systems supporting education [9].
Among various disciplines, art and design education has been especially strongly influenced by Artificial Intelligence-Generated Content (AIGC) [8,10,11]. Art and design education has always emphasized creative thinking [12], interdisciplinary practice [13], and autonomous learning abilities [14]. The intervention of generative AI tools can effectively merge human creativity with the information processing capabilities of machines, providing students in art and design majors with entirely new avenues for creative inspiration and learning support [15,16]. Chandrasekera et al. [17] noted that through collaborative creation with generative AI tools, the creative thinking abilities and the quality of practical outcomes of design students can be significantly enhanced. Concurrently, AI-driven creation models disrupt the cumbersome and time-consuming processes of traditional design, effectively improving learning and creative efficiency [18]. Lu et al. [19] further discovered that combining large language models (LLMs) like ChatGPT with text-to-image generation systems allows for precise control over image content, thereby optimizing the overall quality of the generated images. The aforementioned studies all indicate that generative AI tools, represented by ChatGPT, have demonstrated immense application potential in art and design education [20,21,22]. However, existing research has primarily focused on students’ motivations for using ChatGPT and their intention to adopt the technology. For instance, Alfaisal et al. [23] found that university students’ usage intention is mainly influenced by factors such as system quality and personal innovativeness; Raman, Mandal, Das, Kaur, JP and Nedungadi [22], drawing on Rogers’ theory of perceived attributes, pointed out that compatibility is one of the key factors affecting students’ adoption intention; meanwhile, Zhao et al. [24], using a hybrid method of Partial Least Squares Structural Equation Modeling (PLS-SEM) and fuzzy-set Qualitative Comparative Analysis (fsQCA), showed that perceived usefulness positively influences Chinese university students’ attitudes toward using ChatGPT. While these studies provide an important reference for understanding students’ behavioral intentions, there remains a lack of discussion on student satisfaction and its influencing mechanisms during the process of using ChatGPT for assisted learning. The satisfaction level of students in human–computer collaborative learning with AI tools like ChatGPT not only effectively reflects their individual learning experiences and performance but also genuinely mirrors the operational effectiveness of AI tools as educational information systems. This is of great significance for exploring the integration pathways of generative AI tools in the field of art and design education.
To analyze the satisfaction of art and design students using ChatGPT for course-assisted learning and its influencing mechanisms more systematically and comprehensively, this study introduces the Information Systems Success Model (ISSM) proposed by DeLone and McLean [25] as its theoretical foundation. This model explains the operational effectiveness of information systems through six dimensions—system quality, information quality, use, user satisfaction, individual impact, and organizational impact—and it continues to be widely adopted today [26,27,28,29,30]. It is noteworthy that the ISSM is also frequently used in research related to educational information systems: Efiloğlu Kurt [31] analyzed the usage and learning satisfaction of an e-learning system from the ISSM perspective; Çelik and Ayaz [32] evaluated the application effectiveness of a student information system based on an updated ISSM; Duong et al. [33] integrated the ISSM with the stimulus–organism–response (SOR) paradigm in their study to explore the trust, satisfaction, and continuance intention of higher education students towards ChatGPT; Tan et al. [34] explored the key factors influencing university students’ continuance intention to use ChatGPT by integrating the ISSM and the Unified Theory of Acceptance and Use of Technology (UTAUT2). These studies collectively validate the applicability and explanatory power of the ISSM in the field of educational information systems. As ChatGPT is one of the most representative information systems in higher education today, research analyzing it through the ISSM is not uncommon, but existing work has mostly focused on system effectiveness evaluation [35], continuance intention [36], and the influencing factors of usage intention [37]. Although some studies have included user satisfaction in their analytical frameworks [38,39], these studies predominantly rely on samples drawn from the general university student population, theoretically assuming that their conclusions are equally applicable across different disciplinary contexts. However, the learning context for art and design students differs significantly from that of other disciplines. For this group, the use of generative AI tools like ChatGPT is not merely a matter of information retrieval or task execution; rather, it involves creative behaviors such as idea generation, scheme iteration, and material evaluation. Consequently, directly applying the aforementioned general ISSM-based models to this group may fail to fully reveal the mechanisms underlying art and design students’ dependence on and satisfaction with AI information systems. Furthermore, the intervention of generative AI may exert a systematic impact on the curriculum structure, pedagogical paradigms, and educational policies within art education. However, empirical research targeting this specific group of students is relatively limited. Therefore, to fill this research gap and to help art and design education better respond to the trend of intellectualization, this study aims to answer the following questions:
RQ1: 
What are the factors that influence the satisfaction of art and design students when using ChatGPT for course-assisted learning?
To systematically reveal the influencing mechanism of student satisfaction with ChatGPT, this study further explores:
RQ2: 
How do these factors interact and form conditional configurations to jointly drive changes in student learning satisfaction?
In summary, generative AI tools, represented by ChatGPT, are reshaping the learning methods and creative pathways in art and design education. In response to this educational landscape, this study focuses on art and design students, centering on the issue of human–computer collaborative learning satisfaction with ChatGPT as an educational information system. It constructs a research model based on the ISSM with key variables including system quality, compatibility, personal innovativeness, and perceived usefulness. To achieve a more systematic and comprehensive analysis, this study employs a combination of PLS-SEM and fsQCA. It aims to uncover the key driving paths and conditional configurations that affect the satisfaction of art and design students using ChatGPT for assisted learning, thereby providing a systematic theoretical basis and practical implications for the effective integration of generative AI into the art and design education system. The specific research process is as follows: Part 2 reviews and analyzes the current application status of ChatGPT in art and design education and related theoretical models; Part 3 clarifies the research methodology, completes the questionnaire design, and explains the data collection process; Parts 4 and 5 analyze the data results, test the hypotheses, and discuss the main findings; Parts 6 and 7 summarize the research, point out its limitations, and propose suggestions for future research.

2. Theoretical Framework and Research Hypotheses

2.1. Applications of ChatGPT in Art and Design Education

Recently, ChatGPT has already become one of the most widely used intelligent tools in the field of education by virtue of its text generation and conversational abilities that approximate human-like thinking. It can not only provide personalized learning support to students through real-time dialogue but also assist them in completing complex tasks of information integration and knowledge construction, thereby effectively promoting the learning process and enhancing learning performance [40,41,42,43]. Relying on these advantages, ChatGPT is widely regarded as an educational information system with pedagogical support functions, and its application value has been fully verified in professional fields such as language education [44,45], medical education [46,47], programming education [48,49], and art and design education [50,51]. Compared to other disciplines, personalized thinking and creative expression are particularly crucial in art and design education, and ChatGPT is aptly suited to provide support for these core needs. Lazkani [10] pointed out that a significant advantage of ChatGPT lies in its ability to stimulate students’ creativity by offering intelligent suggestions. Li et al. [52] also emphasized that AI tools can generate diverse creative proposals, uncover potential associations, and provide real-time, iterative, personalized feedback, thus systematically assisting students in their design and creative activities. Overall, as an intelligent pedagogical support system, ChatGPT can provide systematic guidance and inspiration to art and design students in key stages such as creative exploration, concept divergence, and design expression, demonstrating unique application value in art and design education.
Furthermore, several empirical studies have already validated the usability and effectiveness of ChatGPT in assisting with coursework in art and design programs. Chang and Tung [53] indicate that ChatGPT can assist students in the art and design process by completing tasks such as data processing, design thinking, design development, and design decision-making, thereby opening up more possibilities for artistic creation. By comparing student groups who used ChatGPT for design ideation with those who did not, Papachristos et al. [54] found that ChatGPT could significantly enhance the overall quality of creative projects, broadening students’ creative perspectives. A study focusing on visual communication design students similarly showed that using ChatGPT to assist in course learning under appropriate guidance helps enhance students’ autonomous learning abilities and creative thinking, and does not negatively impact their creativity [55]. Additionally, Filippi [56], using quantity, usefulness, novelty, and variety as metrics for product design innovation, compared the differences between ChatGPT and traditional creativity generation methods (such as brainstorming, mind mapping, and sketching). The results showed that ChatGPT has a significant advantage in the quantity and novelty of idea generation. The aforementioned studies fully demonstrate that ChatGPT has become an important educational information system for promoting artistic creation and design innovation, capable of effectively assisting art and design students in their coursework. Concurrently, the adoption of ChatGPT by students in this major is also showing a continuous upward trend [57], which provides a realistic basis for further in-depth exploration of their satisfaction with AI-assisted learning and its influencing factors from an information systems perspective.
However, although existing research has addressed student satisfaction with ChatGPT-assisted learning, these studies typically treat students from different disciplinary backgrounds as a homogeneous group for analysis, thereby ignoring the inherent differences among various majors in terms of curriculum structure, learning methods, and practical processes. For example, Almulla [58], combining the Technology Acceptance Model and the PLS-SEM method, analyzed the key factors for university students’ adoption of ChatGPT and its mechanism of action on learning satisfaction; Tsai et al. [59], based on Self-Determination Theory, used PLS-SEM to explore university students’ satisfaction with using ChatGPT; Ng et al. [60] employed a mixed-methods approach to investigate university students’ attitudes and satisfaction with using generative AI tools like ChatGPT for assisted learning. Although these studies provide an important reference for understanding the application of ChatGPT in higher education, their conclusions are not entirely applicable to the art and design education context, which emphasizes creative thinking and personalized expression. Therefore, there is an urgent need for systematic research specifically on the satisfaction of art and design students using ChatGPT for course-assisted learning. Such research would provide an evidence-based reference for educators, curriculum designers, and AI tool developers in this field, and in turn, optimize the functional positioning and application models of ChatGPT in art and design education.

2.2. Information System Success Model (ISSM)

This study adopts the ISSM proposed by DeLone and McLean [25] as its core theoretical framework. As one of the most influential cornerstone theories in the field of information systems research, this model provides a systematic analytical framework for understanding user behavior within information systems and the formation mechanism of their satisfaction. The ISSM was initially composed of six core constructs, aiming to explain the overall effectiveness of an information system. Subsequently, based on relevant empirical research findings and the emerging context of internet applications, DeLone and McLean [26] revised and updated the original model. They added the “Service Quality” construct to more comprehensively reflect the importance of information systems in interactive support and service experience. They also further clarified the core influential role of “System Quality,” “Information Quality,” and “Service Quality” on “System Use” and “User Satisfaction.” In recent years, with the widespread penetration of generative AI tools into the global education system, the applied value of the ISSM has become increasingly evident, providing a solid theoretical perspective for analyzing AI-driven educational support systems [33,37,39,61]. Therefore, re-examining the satisfaction of art and design students using ChatGPT for course-assisted learning from the perspective of the ISSM not only helps to reveal its systemic influencing mechanisms within this specific educational context but can also further expand the theoretical applicability of the ISSM in this area.

2.2.1. ChatGPT System Quality

System quality is a core construct of the ISSM, used to measure the technical performance of an information system in terms of its stability, ease of use, and response efficiency. An improvement in system quality typically contributes to higher information system usage and user satisfaction [26]. A large body of research has shown that system quality not only significantly affects users’ trust and continuance intention [62], but is also a key indicator for measuring the overall effectiveness of an information system [63]. In their empirical study on information-exchange virtual communities, Zheng et al. [64] also confirmed that system quality has a significant positive impact on user satisfaction. With the rapid development of artificial intelligence, the system attributes of ChatGPT—as a typical AI-driven information system—such as its operational stability, response speed, and interactional flexibility, have become important factors affecting user experience and satisfaction [65]. Prior research suggests that, in educational settings, system quality effectively supports human–computer collaboration between students and ChatGPT [65,66,67]. Therefore, this study adopts system quality as the primary construct for measuring the satisfaction of art and design majors using ChatGPT to assist in their course learning.
In the application context of ChatGPT, system quality is primarily manifested in its performance in aspects such as reliability, timeliness, and flexibility [25,65,68,69,70]. Among these, reliability is one of the core indicators for measuring system quality, often represented by the symbol R [71], and it reflects the user’s degree of trust in and satisfaction with the information system [69]. In an educational context, when students perceive ChatGPT to have high reliability, their perceived usefulness also increases correspondingly, making them more inclined to use ChatGPT as an auxiliary tool for completing critical tasks or important decisions [72]. Meanwhile, system timeliness is also a crucial metric for evaluating ChatGPT’s system quality. Its instant feedback mechanism can rapidly respond to students’ learning needs, significantly enhancing the user experience and thereby effectively increasing satisfaction [73]. Furthermore, system flexibility is equally important. Chu [65] points out that when an information system exhibits high flexibility, it can offer users diverse operations and customized options, stimulating individual creative thinking and further enhancing user satisfaction. In other words, ChatGPT’s system flexibility helps to foster students’ personal innovativeness. In summary, ensuring that ChatGPT maintains a high level of system quality in terms of reliability, timeliness, and flexibility is crucial for improving students’ satisfaction when using it for course-assisted learning and positively influences their perceived usefulness and personal innovativeness during the process [66,74].
Additionally, existing research has shown that when students perceive a high degree of compatibility in their overall experience of using an information system, their continuance intention is also higher [75,76,77]. In their research, Isaac et al. [78] treated system quality, service quality, and information quality as second-order constructs of overall quality, and empirically verified that the overall quality of an educational information system can significantly affect its compatibility. From this, it can be seen that system quality, as an important sub-construct of overall quality, also has a positive impact on the compatibility of educational information systems. Research on ChatGPT similarly indicates that its overall quality metrics, including system, service, and information quality, can positively influence its system compatibility [42]. The aforementioned research suggests that a high overall quality of an information system often implies that it also possesses high compatibility. Based on this, it can be inferred that system quality, as a sub-construct of overall quality, is also applicable to explaining the compatibility of information systems. Accordingly, this study proposes the following hypotheses:
H1a: 
ChatGPT system quality has a significant positive influence on personal innovativeness.
H1b: 
ChatGPT system quality has a significant positive influence on perceived usefulness.
H1c: 
ChatGPT system quality has a significant positive influence on compatibility.

2.2.2. Compatibility

Compatibility is regarded as a basic condition for adopting information systems or innovative technologies, referring to the degree to which a system or technology aligns with users’ cognitive styles, behavioral patterns, and lifestyles [78,79]. Compatibility is thus a central dimension of innovation, directly affecting whether user needs can be met [80]. Other studies argue that compatibility describes how well a new system or technology integrates into users’ existing behaviors and prior experiences, with higher compatibility typically predicting stronger behavioral intentions [81]. In the context of art and design education, when the functions of ChatGPT align closely with students’ learning needs and expectations, adoption frequency increases [74]. Compatibility also motivates students to explore and experiment, thereby enhancing personal innovativeness. Moreover, when ChatGPT’s functional design, interaction patterns, and task support match students’ curricular requirements, students are more likely to perceive ChatGPT as useful—that is, compatibility positively affects perceived usefulness [76,82,83]. Raman, Mandal, Das, Kaur, JP and Nedungadi [22] argued that students’ proficiency in e-learning environments influences adoption intentions and empirically verified the positive effect of ChatGPT compatibility on adoption. Beyond adoption, higher compatibility enhances learning experiences and thereby satisfaction [84]. Accordingly, this study proposes:
H2a: 
Compatibility has a significant positive influence on personal innovativeness.
H2b: 
Compatibility has a significant positive influence on perceived usefulness.
H2c: 
Compatibility has a significant positive influence on satisfaction.

2.2.3. Personal Innovativeness

Personal innovativeness is typically defined as an individual’s willingness to experiment with new information technologies [85]. Highly innovative individuals are more likely to actively explore and adopt emerging technologies [86]. In the context of ChatGPT and other AI information systems, personal innovativeness becomes especially relevant. Individuals with higher levels of innovativeness tend to report stronger satisfaction with new technologies and display greater motivation to explore and apply them [87]. Kumar et al. [88] found that personal innovativeness influences whether users perceive ChatGPT positively or negatively, as well as whether they judge it to be useful or not. Mathur, Anand, Sharma and Vishnoi [37] investigated perceived innovativeness of ChatGPT as a construct and empirically demonstrated that it moderates user satisfaction. Taken together, these findings suggest that personal innovativeness is a critical determinant of both effective use and satisfaction with ChatGPT. Therefore, this study proposes:
H3a: 
Personal innovativeness has a significant positive influence on perceived usefulness.
H3b: 
Personal innovativeness has a significant positive influence on satisfaction.

2.2.4. Perceived Usefulness

Perceived usefulness is one of the most influential determinants of users’ attitudes toward technology and plays a central role in technology adoption across contexts [89]. In educational settings, students are more inclined to use ChatGPT as a learning assistant only when they clearly perceive it as helping them accomplish tasks and enhancing their learning experience [74,90]. Almulla [58] defined perceived usefulness as the extent to which students believe ChatGPT improves learning efficiency and empirically showed that students who perceive ChatGPT as useful demonstrate stronger engagement and higher satisfaction with learning outcomes. Other studies [66,80,86] have also confirmed the significant positive relationship between perceived usefulness and satisfaction. Accordingly, this study proposes:
H4: 
Perceived usefulness has a significant positive influence on satisfaction.
Drawing on the above hypotheses, this study develops a research model consisting of five constructs—system quality, compatibility, personal innovativeness, perceived usefulness, and satisfaction—and nine hypotheses, as illustrated in Figure 1.

3. Methodology

3.1. Research Design and Questionnaire Development

To comprehensively examine art and design students’ satisfaction with ChatGPT, this study employed a mixed-methods design combining PLS-SEM, Importance–Performance Map Analysis (IPMA), and fsQCA. As a flexible multivariate analysis method, PLS-SEM is capable of effectively handling complex model structures and diverse types of measurement variables. It employs bootstrapping to estimate path coefficients and their corresponding confidence intervals. Furthermore, the method’s ability to test interaction effects has led to its widespread application across fields such as management, information systems, and marketing.
Hypotheses were tested through both symmetric and asymmetric analyses: SmartPLS 4 (version 4.0.9.2) was used for PLS-SEM and IPMA, while fsQCA 3.0 was used for configurational analysis. By integrating linear and configurational perspectives, the study provides a more nuanced understanding of how art and design students evaluate ChatGPT. This multi-method approach not only enriches the analysis but also generates actionable recommendations for educators and system designers, thereby offering scientific evidence for integrating AI information systems into art and design education.
Measurement items were adapted from established, validated scales with strong reliability, drawn from prior research. With input from subject-matter experts, several items were slightly modified to enhance contextual relevance and clarity (see Appendix A). Before formal data collection, a pilot test was conducted with 10 participants who met the study criteria. They were asked to assess whether the items were clear and fully understandable. Based on their feedback, several items were revised for clarity. The final survey employed a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree). The choice of a 7-point scale was based on prior research showing that it offers higher reliability and discriminant validity compared with a 5-point scale [91].

3.2. Data Collection

The study targeted art and design students from universities in Hong Kong and Macao. Data were collected through an online questionnaire, yielding 435 valid responses. Prior to participation, students were fully informed of the study’s purpose, scope, and significance. It was emphasized that participation was voluntary, with no coercion involved. Respondents were given sufficient time to complete the questionnaire carefully to ensure accuracy and completeness. To guarantee validity, only students who confirmed that they were familiar with ChatGPT and had used it in their studies were included.
To enhance representativeness, the sample included diversity in gender, age, level of study, and frequency of ChatGPT use.
As shown in Table 1, the gender distribution was balanced: 47.4% male (n = 206) and 52.6% female (n = 229). In terms of age, the majority were 18–21 years old (86.7%, n = 377), while 13.3% (n = 58) were between 22 and 26 years old. Regarding academic level, 89.4% were undergraduates (n = 389) and 10.6% were graduate students (n = 46, including both master’s and doctoral). With respect to usage frequency, 13.3% (n = 58) reported using ChatGPT less than once a week, 29.0% (n = 126) about once a week, 9.9% (n = 43) several times a week, 29.9% (n = 130) about once a day, and 17.9% (n = 78) several times a day. Overall, more than half of the students reported using ChatGPT frequently, suggesting both high adoption and active engagement within this student population.

4. Results

In the PLS-SEM analysis, this study adopted the path weighting scheme with up to 3000 iterations and default initial weights. A non-parametric procedure—bootstrapping with 5000 resamples—was employed to determine the statistical significance of the PLS-SEM results. In addition, IPMA was conducted on top of the structural equation model to further evaluate the importance and performance of the key constructs in explaining the outcome variable, thereby providing targeted recommendations for practice [92]. To complement the linear analysis of PLS-SEM, fsQCA was also introduced to examine the complex, multiple causal relationships among antecedent variables [93]. Compared to traditional linear approaches, fsQCA allows the identification of multiple configurational pathways and the detection of condition combinations most closely associated with satisfaction with ChatGPT [94]. This makes fsQCA particularly suitable for exploring configurational effects of antecedents in the context of digital educational transformation.
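To make the resampling logic transparent, the sketch below illustrates bootstrapping a single path coefficient in Python. A standardized regression slope stands in for the PLS path estimator, and the construct scores (labelled com and sa) are simulated placeholders, not the survey data; only the sample size and the number of resamples echo the study.

```python
# Bootstrapping a single path coefficient with 5000 resamples, as in the study.
# A standardized OLS slope stands in for the PLS path estimator; the construct
# scores below (com, sa) are simulated placeholders, not survey data.
import numpy as np

rng = np.random.default_rng(42)
n = 435                                         # matches the study's sample size
com = rng.normal(size=n)                        # toy compatibility scores
sa = 0.4 * com + rng.normal(scale=0.8, size=n)  # toy satisfaction scores

def path_coefficient(x, y):
    """Standardized slope of y on x (stand-in for a PLS path estimate)."""
    x_std = (x - x.mean()) / x.std()
    y_std = (y - y.mean()) / y.std()
    return np.polyfit(x_std, y_std, 1)[0]

original = path_coefficient(com, sa)
boot = np.empty(5000)
for b in range(5000):
    idx = rng.integers(0, n, size=n)            # resample cases with replacement
    boot[b] = path_coefficient(com[idx], sa[idx])

se = boot.std(ddof=1)
t_value = original / se
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"beta={original:.3f}, t={t_value:.2f}, 95% CI=[{ci_low:.3f}, {ci_high:.3f}]")
```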

4.1. Assessment of Measurement Model

To assess the quality of the measurement model, several common indicators were used, including factor loadings, Variance Inflation Factor (VIF), Cronbach’s α, Composite Reliability (CR), and Average Variance Extracted (AVE).
As shown in Table 2, all factor loadings exceeded 0.772, above the conventional threshold of 0.70, indicating good construct validity. All VIF values were below 3, suggesting no serious multicollinearity. Cronbach’s α ranged from 0.750 to 0.868, exceeding the 0.70 benchmark, thereby confirming good internal consistency [95]. CR values ranged between 0.871 and 0.910, surpassing the recommended 0.70 threshold [96] and further supporting the model’s reliability. Average Variance Extracted (AVE) values ranged from 0.680 to 0.800, all well above the 0.50 cutoff [95]. These results collectively confirm the convergent validity of the constructs.
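As a point of reference, the snippet below shows how these convergent-validity indices are conventionally computed from standardized loadings and item responses. The loading values and item data are illustrative placeholders, not the figures reported in Table 2; SmartPLS performs these calculations internally.

```python
# Conventional formulas behind Cronbach's alpha, composite reliability (CR),
# and average variance extracted (AVE). All inputs below are illustrative.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings):
    """CR = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

rng = np.random.default_rng(0)
common = rng.normal(size=(435, 1))                       # shared construct factor
items = 0.8 * common + 0.6 * rng.normal(size=(435, 3))   # three toy indicators
loadings = [0.81, 0.84, 0.86]                            # hypothetical loadings

print(round(cronbach_alpha(items), 3))                   # internal consistency > 0.70
print(round(composite_reliability(loadings), 3))         # CR > 0.70
print(round(average_variance_extracted(loadings), 3))    # AVE > 0.50
```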
Next, discriminant validity was tested using cross-loadings. As shown in Table 3, each item’s outer loading on its intended construct was higher than its cross-loadings on other constructs, meeting the criterion [97]. This indicates that all items were more strongly associated with their own construct than with others, providing evidence of discriminant validity.
Furthermore, the Fornell–Larcker criterion was applied by comparing the square root of AVE values with inter-construct correlations [98]. As shown in Table 4, all diagonal values (square roots of AVE) exceeded the corresponding correlations, confirming discriminant validity [80]. To strengthen robustness, the HTMT ratio (heterotrait–monotrait) was also calculated. All HTMT values were below the 0.85 threshold [99], consistent with prior recommendations [100,101]. This further supports discriminant validity.
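The sketch below shows, for two simulated item blocks, how the Fornell–Larcker comparison and the HTMT ratio are conventionally computed. It is not the SmartPLS routine, and all AVE values, correlations, and item data are hypothetical.

```python
# Conventional computations behind the two discriminant-validity checks:
# the Fornell-Larcker comparison and the HTMT ratio. All inputs are simulated.
import numpy as np

def fornell_larcker_ok(ave_i, ave_j, corr_ij):
    """Both square roots of AVE must exceed the inter-construct correlation."""
    return np.sqrt(ave_i) > abs(corr_ij) and np.sqrt(ave_j) > abs(corr_ij)

def htmt(items_i, items_j):
    """Heterotrait-monotrait ratio for two item blocks of shape (n, k)."""
    k_i = items_i.shape[1]
    hetero = np.abs(np.corrcoef(items_i.T, items_j.T))[:k_i, k_i:]
    mono_i = np.abs(np.corrcoef(items_i.T))
    mono_j = np.abs(np.corrcoef(items_j.T))
    off_i = mono_i[np.triu_indices_from(mono_i, k=1)]
    off_j = mono_j[np.triu_indices_from(mono_j, k=1)]
    return hetero.mean() / np.sqrt(off_i.mean() * off_j.mean())

rng = np.random.default_rng(0)
fac_a = rng.normal(size=(435, 1))                        # latent factor, construct A
fac_b = 0.5 * fac_a + rng.normal(size=(435, 1))          # correlated factor, construct B
block_a = 0.8 * fac_a + 0.5 * rng.normal(size=(435, 3))  # three items of A
block_b = 0.8 * fac_b + 0.5 * rng.normal(size=(435, 3))  # three items of B
print(fornell_larcker_ok(0.72, 0.80, 0.55))              # True: criterion satisfied
print(round(htmt(block_a, block_b), 3))                  # well below the 0.85 cutoff
```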

4.2. Assessment of Structural Model

Bootstrapping was applied to test the hypotheses. Figure 2 presents the overall path model results, while Table 5 details the regression coefficients, significance levels, and related statistics. All hypotheses were supported.
The structural model was further assessed for explanatory power and predictive relevance. Following Hair et al. [102], R2, f2, and Q2 were calculated. R2 values ranged from 36.8% to 57.1% (Table 6), exceeding the 25% threshold recommended for substantial explanatory power [103]. All Q2 values were positive, indicating predictive relevance [102]. Regarding effect size, f2 values of 0.35, 0.15, and 0.02 are interpreted as large, medium, and small effects, respectively, according to Cohen [104]. The model fit assessment shows that the SRMR value was 0.068, which is below the recommended threshold of 0.08, indicating an acceptable model fit [102]. In this study, H1c showed a large effect (f2 = 0.846), and H1b, H2c, and H4 demonstrated medium effects (f2 = 0.259, 0.188, 0.146), while all remaining paths exhibited small effects.
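For clarity, the reported effect sizes follow Cohen's f-squared formula, computed from the R² of an endogenous construct with and without the focal predictor. The sketch below restates that formula; the input values are illustrative, not taken from Table 6.

```python
# Cohen's f-squared for a structural path; thresholds of 0.02, 0.15 and 0.35
# correspond to small, medium and large effects. Example values are illustrative.
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """f2 = (R2_included - R2_excluded) / (1 - R2_included)."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# e.g., if dropping a predictor lowers the R2 of satisfaction from 0.571 to 0.500
print(round(f_squared(0.571, 0.500), 3))  # 0.166 -> a medium effect
```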
Multi-collinearity was also assessed. High correlations among independent variables can distort regression results [105]. To address this, two tests of common method bias (CMB) were applied: the Unmeasured Latent Method Construct (ULMC) test [106] and the full collinearity test [107]. As shown in Table 6, the ratio of average substantive variance (0.712) to average method variance (0.005) was 142:1, indicating no serious CMB [108]. Moreover, the full collinearity test showed all VIF values between 1.000 and 2.046, below the threshold of 5 [102]. Hence, CMB was not a concern in this study.
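The full collinearity test can be reproduced conceptually as follows: each construct score is regressed on all of the others, and the resulting R² is converted into a VIF. The construct scores in this sketch are randomly generated placeholders rather than the study's latent variable scores.

```python
# Sketch of the full collinearity VIF test; values below 5 indicate no serious
# collinearity, matching the threshold applied in this study.
import numpy as np

def full_collinearity_vifs(scores):
    """scores: (n, p) matrix of construct scores; returns one VIF per construct."""
    n, p = scores.shape
    vifs = []
    for j in range(p):
        y = scores[:, j]
        X = np.column_stack([np.ones(n), np.delete(scores, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1 - resid.var() / y.var()
        vifs.append(1 / (1 - r2))
    return vifs

rng = np.random.default_rng(1)
construct_scores = rng.normal(size=(435, 5))   # SQ, COM, PI, PU, SA placeholders
print([round(v, 2) for v in full_collinearity_vifs(construct_scores)])
```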

4.3. Importance–Performance Map Analysis (IPMA)

IPMA, conducted via SmartPLS, compares the total effects (importance) and average latent variable scores (performance) of exogenous constructs on the target endogenous construct. After confirming scale directionality and positive indicator weights, satisfaction was set as the outcome variable, predicted by system quality, compatibility, personal innovativeness, and perceived usefulness. Additionally, based on the IPMA findings, we plotted Figure 3, the Performance–Impact Construct Priority Map. In this map, the horizontal axis represents the average importance score (on a scale of 0 to 1), while the vertical axis represents the average performance score (on a scale of 0 to 100).
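In concrete terms, the two IPMA axes can be reproduced as sketched below: importance is the total effect of a predictor on satisfaction (the direct path plus the products of coefficients along indirect paths), and performance is the construct score rescaled from the 7-point response scale to 0–100. The path values in the example are only partly drawn from the reported coefficients and should be read as illustrative.

```python
# Minimal sketch of the two IPMA quantities. The coefficient 0.30 below is a
# hypothetical placeholder; 0.677 and 0.378 echo coefficients reported in the text.
import math

def performance_score(mean_latent_score, scale_min=1, scale_max=7):
    """Rescale a 1-7 construct mean onto the 0-100 performance axis."""
    return (mean_latent_score - scale_min) / (scale_max - scale_min) * 100

def total_effect(direct, indirect_paths):
    """Total effect = direct path + sum of the coefficient products along each
    indirect path (e.g., SQ -> COM -> SA)."""
    return direct + sum(math.prod(path) for path in indirect_paths)

# hypothetical importance of system quality via two mediated routes
print(round(total_effect(0.0, [(0.677, 0.378), (0.30, 0.342)]), 3))
print(round(performance_score(5.2), 1))   # a construct mean of 5.2 maps to 70.0
```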
As shown in Table 7 and Figure 3, the performance scores were: SQ (60.821), COM (61.562), PI (71.099), and PU (70.172). Personal innovativeness and perceived usefulness showed the highest performance scores. In terms of importance, SQ (0.582) had the highest value, followed by COM (0.513), PU (0.342), and PI (0.242). Interestingly, while system quality was most important, its performance score was relatively weaker. Conversely, personal innovativeness had the lowest importance but the highest performance. This suggests that both system quality and personal innovativeness are critical drivers of satisfaction. Therefore, improvement efforts should focus on enhancing ChatGPT’s system quality and leveraging personal innovativeness to optimize student satisfaction.

4.4. Fuzzy-Set Qualitative Comparative Analysis

Building on the PLS-SEM analysis, which quantified the path coefficients among latent variables, this study further applied fsQCA to capture the configurational complexity underlying student satisfaction. While PLS-SEM offers precise estimates of linear causal relationships, in the context of digital educational transformation, student satisfaction with ChatGPT is often shaped by the interplay of multiple factors. Traditional SEM is limited in assessing how different combinations of conditions jointly affect outcomes. To address this limitation, fsQCA was introduced as a complementary method. Grounded in configurational logic, fsQCA systematically reveals how multiple condition sets jointly drive satisfaction, thus providing a more comprehensive explanation of its complex formation mechanisms.
Calibration of variables is a key step in fsQCA, transforming raw data into fuzzy-set membership scores to better capture the nuances of social phenomena. Specifically, this study calculated the mean scores of each latent variable and adopted the direct calibration method proposed by Rihoux and Ragin [109]. The 5th, 50th, and 95th percentiles of the score distribution were selected as anchors, corresponding to the thresholds of “full non-membership”, “crossover point” and “full membership.” This ensured that all final membership values fell within the 0–1 range.
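A minimal implementation of this direct calibration step is sketched below, assuming the standard logistic transformation used in the fsQCA literature, with the three percentile anchors mapped to log-odds of −3, 0, and +3. The construct means are simulated for illustration.

```python
# Direct calibration: raw construct scores are mapped to fuzzy membership
# via three anchors (the 5th, 50th and 95th percentiles in this study).
import numpy as np

def direct_calibrate(x, full_non, crossover, full_mem):
    """Logistic direct calibration with anchors at log-odds -3, 0 and +3."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3 * (x - crossover) / (full_mem - crossover),
        3 * (x - crossover) / (crossover - full_non),
    )
    return 1 / (1 + np.exp(-log_odds))

rng = np.random.default_rng(7)
scores = rng.uniform(1, 7, size=435)            # simulated construct means
anchors = np.percentile(scores, [5, 50, 95])    # non-membership, crossover, membership
membership = direct_calibrate(scores, *anchors)
print(round(membership.min(), 3), round(membership.max(), 3))  # strictly between 0 and 1
```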
As a preliminary step, necessity analysis was performed to test whether any single condition served as a necessary prerequisite for the outcome [110]. Both antecedent variables and their negations were tested, with a consistency threshold of 0.90 [111,112]. As shown in Table 8, no condition or its negation met the 0.90 threshold, suggesting that no single factor constitutes a necessary condition for satisfaction. Therefore, the analysis proceeded to configurational solutions, exploring how combinations of antecedents are sufficient to explain satisfaction outcomes.
In the configurational analysis, the task was to identify sufficient combinations of conditions that explain ‘satisfaction’ or ‘~satisfaction’ outcomes. Drawing on fuzzy-set theory and Boolean algebra [112], the analysis evaluated both consistency (the extent to which a configuration reliably leads to the outcome) and coverage (the proportion of cases explained by that configuration). A raw consistency threshold of 0.80 was applied [113]. To avoid ambiguous solutions, a Proportional Reduction in Inconsistency (PRI) threshold of 0.70 was adopted [114]. The minimum case frequency was set at 1 to ensure that only empirically observed configurations were included [113].
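For reference, the set-theoretic quantities applied at this stage can be written out directly from their standard fuzzy-set definitions, as sketched below. The membership scores are simulated, and the functions restate the textbook formulas rather than the fsQCA 3.0 implementation; necessity consistency, used in the preceding step, is included for completeness.

```python
# Standard fuzzy-set measures: sufficiency consistency, coverage, PRI, and
# necessity consistency. Membership scores x (configuration/condition) and
# y (outcome) are simulated for illustration.
import numpy as np

def sufficiency_consistency(x, y):
    """Degree to which configuration membership x is a subset of outcome y."""
    return np.minimum(x, y).sum() / x.sum()

def coverage(x, y):
    """Share of outcome membership y accounted for by configuration x."""
    return np.minimum(x, y).sum() / y.sum()

def pri(x, y):
    """Proportional Reduction in Inconsistency: penalizes configurations that
    also explain the negated outcome (1 - y)."""
    overlap = np.minimum.reduce([x, y, 1 - y]).sum()
    return (np.minimum(x, y).sum() - overlap) / (x.sum() - overlap)

def necessity_consistency(x, y):
    """Degree to which outcome y is a subset of condition x (necessity test)."""
    return np.minimum(x, y).sum() / y.sum()

rng = np.random.default_rng(3)
x = rng.uniform(size=435)                               # configuration membership
y = np.clip(x + rng.normal(scale=0.1, size=435), 0, 1)  # outcome membership
print(round(sufficiency_consistency(x, y), 3), round(coverage(x, y), 3),
      round(pri(x, y), 3), round(necessity_consistency(x, y), 3))
```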
To distinguish core from peripheral conditions, both the parsimonious and intermediate solutions were examined. Core conditions are those consistently present across both solutions, while peripheral conditions appear only in intermediate solutions. This study reports the intermediate solutions, supplemented with insights from parsimonious solutions for classification.
As shown in Table 9, four configurations were identified as sufficient for high satisfaction, with an overall solution consistency of 0.923 and coverage of 0.655. This indicates strong explanatory power and that these solutions collectively cover more than half of the cases, aligning with acceptable standards [115]. Core conditions driving high satisfaction included system reliability, compatibility, personal innovativeness, and perceived usefulness. Conversely, three configurations were associated with low satisfaction, with overall consistency and coverage of 0.932 and 0.695, respectively. Core negated conditions included system timeliness, system flexibility, compatibility, personal innovativeness, and perceived usefulness.

5. Discussion

5.1. Discussion of PLS-SEM and IPMA Results

The results not only corroborate the fundamental pathway of System Quality impacting usage outcomes within the ISSM framework (i.e., SQ → PU → SA) but also reveal theoretical nuances specific to the context of art and design education. Specifically, ChatGPT system quality exerts significant positive effects on personal innovativeness, perceived usefulness, and compatibility (H1a, H1b, H1c supported). Interestingly, the strongest effect was found in the rarely tested relationship between system quality and compatibility (β = 0.677). This finding suggests that in the creativity-intensive environment of art and design education, ChatGPT’s performance regarding reliability, timeliness, and flexibility is interpreted by students as a signal determining whether the tool can be adopted and embedded into their existing creative workflows. This result is consistent with prior work suggesting that system quality, as a dimension of overall quality, facilitates compatibility in information systems [78], and directly confirms the linkage between these two constructs. By contrast, the effect of system quality on personal innovativeness was the weakest among the supported paths. This may be because personal innovativeness is a relatively stable individual trait, shaped primarily by students’ subjective willingness to try new technologies [116]. In other words, higher system quality strengthens innovative tendencies only among students who already possess such inclinations, but it does not activate innovative potential among students with low inherent openness to novelty. As Sadewo et al. [117] argued, personal innovativeness reflects motivational differences at the individual level: ChatGPT provides an innovative experience, but it is students with naturally higher innovativeness who are more inclined to adopt it in learning. Furthermore, the positive impact of system quality on perceived usefulness aligns with existing research [66,118,119]. Through reliable knowledge provision, timely feedback, and adaptive functionality, ChatGPT significantly enhances students’ perceptions of its usefulness. To conclude, these findings imply that higher levels of ChatGPT’s reliability, timeliness, and flexibility not only ensure compatibility with students’ learning needs but also facilitate task completion and, to some extent, foster innovative attempts.
Second, compatibility significantly influences personal innovativeness, perceived usefulness, and satisfaction (H2a, H2b, H2c supported). Notably, among all predictors of satisfaction, compatibility exhibited the strongest path coefficient (β = 0.378), outperforming both personal innovativeness (β = 0.155) and perceived usefulness (β = 0.342). This indicates that the extent to which the tool aligns with creative needs is a more critical determinant than its perceived utility or novelty. This finding resonates with prior studies demonstrating that when ChatGPT is closely aligned with students’ learning needs, not only is its integration into learning processes more likely, but satisfaction is also significantly enhanced [120,121]. Crucially, this finding shifts the explanatory focus of the ISSM within art and design education from a “usefulness-driven” paradigm to a “compatibility-determined” one. When compatibility is high, system functions are more readily accepted and applied in creative practice, thereby actualizing the effect of perceived usefulness on satisfaction. Conversely, if compatibility is insufficient, even high levels of perceived usefulness may fail to translate into satisfaction, as the system’s output remains difficult to integrate into the creative workflow. Moreover, this study found that high ChatGPT compatibility fosters greater personal innovativeness. While some scholars have suggested that the use of generative AI tools can stimulate students’ exploratory and creative abilities [122], few empirical studies have confirmed this. Our study provides direct evidence. Thus, enhancing ChatGPT’s compatibility—or adopting other AI tools better tailored to design education—can strengthen students’ innovativeness and drive innovation in the discipline. However, the effect of compatibility on perceived usefulness was relatively weaker, consistent with prior findings [77,123], which showed that while compatibility positively shapes perceived usefulness, it is not its primary driver.
Third, personal innovativeness significantly affects both perceived usefulness and satisfaction (H3a, H3b supported), thereby providing a theoretical extension to the ISSM framework. The relationship between personal innovativeness and perceived usefulness (H3a) has been widely supported in previous research [120,124,125]. Students with higher levels of innovativeness are more willing to explore ChatGPT’s diverse applications, which enhances their perceptions of usefulness and satisfaction [126,127]. Mathur, Anand, Sharma and Vishnoi [37], and Nan et al. [128] similarly observed that when students believe ChatGPT can effectively meet innovative learning needs, their satisfaction increases accordingly. However, the effect of personal innovativeness on satisfaction (β = 0.155) was the weakest path coefficient among all hypotheses. This result echoes the earlier finding on the weak link from system quality to personal innovativeness (H1a). Together, these results highlight that personal innovativeness, as a subjective individual trait, is less influenced by external system quality and more by personal motivation. Consequently, personal innovativeness often plays a moderating rather than a determinative role in shaping satisfaction.
Additionally, this study validates the hypothesis that perceived usefulness positively influences satisfaction (supporting H4), aligning with the fundamental pathways of the ISSM. According to the ISSM framework, the quality dimensions of a system act as prerequisites for success, shaping users’ perceptions of usefulness and ultimately determining their satisfaction and continuance intention [25,26]. In the context of this study, factors such as system quality and compatibility were found to significantly enhance students’ perceived usefulness, which in turn drives learning satisfaction. This underscores the core role of perceived usefulness as a mediating bridge within the model. Specifically, students interpret improvements in ChatGPT’s reliability, timeliness, and flexibility as signals of the tool’s capacity to optimize design tasks; this perception subsequently translates into higher satisfaction and a stronger intention for continued adoption. Numerous studies similarly confirm that when students find ChatGPT useful for their coursework, they are more satisfied and more willing to continue using it [58,80,89,90,129]. Perceived usefulness not only boosts learning motivation by signaling that ChatGPT helps achieve knowledge and performance goals but also strengthens the belief in continued learning, thereby increasing satisfaction [130]. Thus, perceived usefulness emerges as a primary determinant of satisfaction, reinforcing the need for ChatGPT to continuously improve its functionalities and productivity—consistent with its iterative updates since 2022.
Finally, the IPMA results show that system quality, compatibility, personal innovativeness, and perceived usefulness all contribute to satisfaction, though in different ways. System quality emerged as the most critical factor on the importance dimension, while personal innovativeness dominated on the performance dimension. Overall, optimizing ChatGPT-assisted learning satisfaction for art and design students requires targeted improvements in both system quality and personal innovativeness.

5.2. Discussion of Configurational Results

The fsQCA results identified four configurations leading to high satisfaction and three configurations leading to low satisfaction in ChatGPT-assisted learning among art and design students. Importantly, the absence of compatibility emerged as the central condition driving dissatisfaction across all negative configurations. Below, we discuss the configurations separately.

5.2.1. High-Satisfaction Configurations

The first pathway emphasized the central role of compatibility and perceived usefulness. In configuration M1 (~SR * SF * COM * PI * PU), even when system reliability was absent, students still reported high satisfaction as long as ChatGPT aligned well with their learning needs, supported by system flexibility and their own innovativeness. This finding indicates that strong perceptions of usefulness and contextual fit can compensate for deficiencies in reliability, allowing students to maintain a positive learning experience. This offers significant implications for the theoretical interpretation of the ISSM, suggesting that compatibility is not merely a subordinate outcome of system quality but a critical dimension that—alongside perceived usefulness—co-determines the realization of value.
Another pathway demonstrated a more balanced profile, in which most positive factors were simultaneously present. In Configurations M2 (SR * ST * SF * PI * PU) and M4 (SR * ST * SF * COM * PI), system reliability and personal innovativeness consistently appeared as core conditions, reinforced by system timeliness and flexibility. Depending on the configuration, either perceived usefulness or compatibility served as an additional core factor. Such a balanced set of conditions provided students with a stable, responsive, and adaptable learning tool, complemented by positive personal attitudes and value alignment, which together fostered high levels of satisfaction.
A further pathway combined system performance with perceived value. In Configuration M3 (SR * ST * SF * COM * PU), reliability, timeliness, and flexibility, along with compatibility, formed the foundation for satisfaction, while perceived usefulness reinforced the overall experience. This synergy highlights that technical performance, when aligned with students’ course needs, acts as a critical driver of positive outcomes.

5.2.2. Low-Satisfaction Configurations

Configuration M1 (~SR * ~SF * ~COM * ~PI * ~PU) involved the simultaneous absence of compatibility, innovativeness, flexibility, and perceived usefulness, with system reliability absent peripherally and timeliness irrelevant. The accumulation of these negative conditions created extremely poor user experiences and substantially reduced satisfaction.
Configuration M2 (~SR * ~ST * ~COM * ~PI * ~PU) underscored the combined absence of timeliness and compatibility, both of which acted as core deficiencies. When system responses were delayed and ChatGPT failed to fit the learning context, and these weaknesses were compounded by low innovativeness and low perceived usefulness, students were particularly prone to dissatisfaction.
Finally, Configuration M3 (~SR * ~ST * ~SF * ~COM * ~PU) showed that when system flexibility, compatibility, and perceived usefulness were all missing, dissatisfaction was almost inevitable, even if personal traits were not considered. In this case, insufficient adaptability and usability directly undermined the learning experience, preventing students from gaining value from ChatGPT.
In conclusion, these results reinforce the idea that high satisfaction depends not on any single factor but on combinations of conditions that include compatibility and perceived usefulness, often in conjunction with system performance or innovativeness. Conversely, dissatisfaction arises primarily when compatibility is absent, especially when paired with missing perceptions of usefulness or system adaptability.

6. Conclusions

With the rapid advancement of generative AI tools represented by ChatGPT, education is accelerating its transition toward digitalization and intelligence. These tools are not only reshaping students’ learning patterns and cognitive paradigms but also demonstrating potential for human–computer collaboration within the art and design field that transcends traditional paradigms [131]. Consequently, deeply investigating the key factors influencing students’ satisfaction with ChatGPT-assisted learning is essential for clarifying the educational utility and efficacy of these tools, as well as for supporting their rational application in educational contexts. Focusing on art and design students, this study grounded its analysis in the ISSM, employing a mixed-methods approach—combining PLS-SEM, IPMA, and fsQCA—to analyze five constructs: system quality, compatibility, personal innovativeness, perceived usefulness, and satisfaction. The PLS-SEM results demonstrated that ChatGPT’s system quality—measured through reliability, timeliness, and flexibility—positively influences personal innovativeness, compatibility, and perceived usefulness, which in turn foster higher satisfaction. The IPMA results further revealed that while system quality and personal innovativeness differed in their relative importance and performance, both constitute priority factors for optimizing satisfaction. Beyond linear pathways, fsQCA identified multiple alternative condition combinations leading to either high or low satisfaction. High-satisfaction configurations consistently featured compatibility and perceived usefulness, often in conjunction with personal innovativeness or system performance. By contrast, in low-satisfaction configurations, the absence of compatibility was the only core condition common to all, often amplified by the absence of perceived usefulness, which significantly heightened negative experiences. These findings highlight that ensuring system alignment with students’ learning needs and maintaining strong perceptions of value are critical both for fostering satisfaction and for preventing dissatisfaction.
Overall, against the backdrop of the educational system’s intelligent transformation, this study systematically elucidates the key factors and mechanisms influencing art and design students’ satisfaction with ChatGPT. In doing so, it provides significant theoretical insights for integrating intelligent information systems into art and design education. By introducing the ISSM into this specific context and focusing on generative AI tools like ChatGPT, this study reveals the underlying satisfaction mechanisms in AI-assisted learning. This not only validates the applicability of the ISSM within the field of art and design education but also enriches the model’s theoretical scope. Furthermore, this study highlights the critical role of multi-condition combinations in explaining satisfaction, demonstrating that single factors are insufficient to account for complex user experiences. This configurational perspective lays a robust theoretical foundation for future research on student satisfaction.
Simultaneously, this study offers practical implications for educational systems, educators, and policymakers, facilitating the effective advancement of intelligent educational reform. At the educational system level, institutions should proactively integrate AI tools like ChatGPT into curriculum design and pedagogical objectives [132], thereby fostering human–computer collaboration and driving the digital transformation of education. For educators—the core force of educational reform—it is essential to enhance AI literacy through systematic training and technical support. Educators should effectively integrate AI tools into teaching practices while fully considering specific disciplinary characteristics [133]. Furthermore, educators should leverage the personalized teaching advantages of AI to meet students’ diverse learning needs, stimulate their innovation and spirit of exploration [134], and promote educational equity. At the policy level, government, academia, and industry should work together to establish and refine explicit guidelines for the use of generative AI tools in relation to academic integrity, copyright, data security, and privacy protection [1]. Moreover, it is crucial to establish dynamic monitoring mechanisms to prevent students’ over-reliance on AI from weakening their independent thinking abilities, thereby ensuring the safe, reliable, and sustainable development of the educational system. In summary, this study not only provides practical guidance for the convergence of intelligent education and art design but also supports the educational system in achieving high-quality, sustainable development in the era of artificial intelligence.

7. Research Limitations and Future Studies

Although this study systematically examined ChatGPT-assisted learning among art and design students, several limitations remain, which open avenues for future research. First, the study focused on ChatGPT’s text generation capabilities, whereas coursework in art and design often emphasizes visual and audiovisual content. Image-generation tools such as MidJourney or Stable Diffusion may better align with the learning needs and contexts of design students. Future research should therefore expand beyond ChatGPT to include image-based generative AI tools and compare their impact on learning outcomes and satisfaction. Second, the sample was drawn primarily from a single institution, with respondents concentrated in the Hong Kong and Macau regions. Institutional characteristics such as teaching methods and assignment requirements may have shaped students’ usage patterns, potentially limiting the diversity and generalizability of the findings. Future research could therefore expand the sample to participants from more diverse regions, and comparative analyses could be conducted across regionally preferred AI systems and the usage habits of different user groups to further validate and refine the conclusions. Future studies could also extend the research perspective to the functional positioning of AI information systems within the design process: systematically examining the distinct roles, interaction modes, and tangible benefits of AI across different creative stages would help reveal how the allocation of roles to AI influences design creation. Furthermore, future research could conduct Multi-Group Analysis (MGA) in PLS-SEM based on demographic profiles such as gender, age, and professional background, allowing a deeper investigation into whether the mechanisms of generative AI-assisted learning differ across learner characteristics.
Incorporating newer versions of ChatGPT or other emerging AI platforms into future research will be essential for capturing the dynamic nature of students’ behaviors and satisfaction mechanisms. By addressing these limitations, future research can more comprehensively illuminate how art and design students use AI information systems to support learning, thereby advancing the transformation and innovation of design education in the age of artificial intelligence.

Author Contributions

Conceptualization, Z.Z. and J.C.; methodology, J.C.; software, X.C.; validation, Z.Z. and D.L.; formal analysis, Z.Z. and J.C.; investigation, D.L.; resources, S.W.; data curation, J.C. and X.C.; writing—original draft preparation, Z.Z.; writing—review and editing, J.C.; visualization, J.C. and S.W.; supervision, D.L.; project administration, J.C.; funding acquisition, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Research Start-up Fund for High-level Talents of Huaqiao University under the project “Research on the Construction of Visual Assets in Digital Media Art” (Project No. 24SKBS011).

Institutional Review Board Statement

This study does not involve disease treatment or patients, nor does it involve subjects who can be identified. Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

We thank the anonymous reviewers for their valuable comments on the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Questionnaire Items

| Construct | Items | References |
| --- | --- | --- |
| System reliability | SR1 Using ChatGPT for Q&A is feasible. | [135] |
| | SR2 The content generated by ChatGPT is reliable. | |
| | SR3 The operating system of ChatGPT is trustworthy. | |
| System timeliness | ST1 ChatGPT responds to my needs quickly. | |
| | ST2 ChatGPT provides me with knowledge content in a timely manner. | |
| System flexibility | SF1 ChatGPT can adapt to my various needs. | |
| | SF2 ChatGPT can flexibly respond to my anticipated needs. | |
| | SF3 ChatGPT can meet my diverse requirements. | |
| Compatibility | COM1 ChatGPT aligns with my learning values. | [42] |
| | COM2 ChatGPT fits my learning style. | |
| | COM3 ChatGPT meets my learning needs. | |
| Personal innovativeness | PI1 I am willing to try and learn about ChatGPT. | [127] |
| | PI2 I am open to new methods related to ChatGPT. | |
| | PI3 I believe I have the ability to master new functions of ChatGPT. | |
| Perceived usefulness | PU1 ChatGPT enables me to complete tasks more quickly. | [136] |
| | PU2 ChatGPT is helpful for my learning. | |
| | PU3 ChatGPT improves my work efficiency. | |
| | PU4 ChatGPT makes it easier to generate a variety of learning materials. | |
| Satisfaction | SA1 I am satisfied with my experience of using ChatGPT. | |
| | SA2 I am satisfied with the content generated by ChatGPT. | |
| | SA3 Using ChatGPT has met my expectations. | |
| | SA4 I am satisfied with the overall effectiveness of ChatGPT. | |

References

  1. Chen, X.; Hu, Z.; Wang, C. Empowering education development through AIGC: A systematic literature review. Educ. Inf. Technol. 2024, 29, 17485–17537. [Google Scholar] [CrossRef]
  2. Abbasi, B.N.; Wu, Y.; Luo, Z. Exploring the impact of artificial intelligence on curriculum development in global higher education institutions. Educ. Inf. Technol. 2025, 30, 547–581. [Google Scholar] [CrossRef]
  3. Tong, R.; Yu, W.; Zhang, J.; Li, L.; Zhou, J.; Chang, X. Research on the Application Path of AIGC in Higher Education. In Proceedings of the Wuhan International Conference on E-business, Wuhan, China, 23–25 May 2025; pp. 259–269. [Google Scholar]
  4. Huang, K.-L.; Liu, Y.-C.; Dong, M.-Q.; Lu, C.-C. Integrating AIGC into product design ideation teaching: An empirical study on self-efficacy and learning outcomes. Learn. Instr. 2024, 92, 101929. [Google Scholar] [CrossRef]
  5. Shi, L. Applications and Challenges of AI in English Teaching: Comparative Analysis and Quantitative Assessment Methods for AIGC-Assisted Writing Evaluation. In Proceedings of the International Conference on Digital Classroom & Smart Learning, Shanghai, China, 27–29 September 2024; pp. 165–180. [Google Scholar]
  6. Sui, X.; Lin, Q.; Wang, Q.; Wan, H. Who will benefit from AIGC: An empirical study on the intentions to use artificial intelligence generated content in higher education. Educ. Inf. Technol. 2025, 30, 20627–20651. [Google Scholar] [CrossRef]
  7. Guo, J.; Ma, Y.; Li, T.; Noetel, M.; Liao, K.; Greiff, S. Harnessing Artificial Intelligence in Generative Content for enhancing motivation in learning. Learn. Individ. Differ. 2024, 116, 102547. [Google Scholar] [CrossRef]
  8. Wang, K.; Yang, Z.; Jaehong, K. AIGC changes in teaching practice in higher education visual design courses: Curriculum and teaching methods. Innov. Educ. Teach. Int. 2025, 1–17. [Google Scholar] [CrossRef]
  9. Gill, S.S.; Xu, M.; Patros, P.; Wu, H.; Kaur, R.; Kaur, K.; Fuller, S.; Singh, M.; Arora, P.; Parlikad, A.K. Transformative effects of ChatGPT on modern education: Emerging Era of AI Chatbots. Internet Things Cyber-Phys. Syst. 2024, 4, 19–23. [Google Scholar] [CrossRef]
  10. Lazkani, O. Revolutionizing education of art and design through ChatGPT. In Artificial Intelligence in Education: The Power and Dangers of ChatGPT in the Classroom; Springer: Berlin/Heidelberg, Germany, 2024; pp. 49–60. [Google Scholar]
  11. Fan, X.; Zhong, X. Artificial intelligence-based creative thinking skill analysis model using human–computer interaction in art design teaching. Comput. Electr. Eng. 2022, 100, 107957. [Google Scholar] [CrossRef]
  12. Samaniego, M.; Usca, N.; Salguero, J.; Quevedo, W. Creative thinking in art and design education: A systematic review. Educ. Sci. 2024, 14, 192. [Google Scholar] [CrossRef]
  13. Costantino, T. STEAM by another name: Transdisciplinary practice in art and design education. Arts Educ. Policy Rev. 2018, 119, 100–106. [Google Scholar] [CrossRef]
  14. Winters, T. Facilitating meta-learning in art and design education. Int. J. Art Des. Educ. 2011, 30, 90–101. [Google Scholar] [CrossRef]
  15. Omran Zailuddin, M.F.N.; Nik Harun, N.A.; Abdul Rahim, H.A.; Kamaruzaman, A.F.; Berahim, M.H.; Harun, M.H.; Ibrahim, Y. Redefining creative education: A case study analysis of AI in design courses. J. Res. Innov. Teach. Learn. 2024, 17, 282–296. [Google Scholar] [CrossRef]
  16. Fathoni, A.F.C.A. Leveraging generative AI solutions in art and design education: Bridging sustainable creativity and fostering academic integrity for innovative society. In Proceedings of the E3S Web of Conferences; EDP Sciences: Paris, France, 2023; p. 01102. [Google Scholar]
  17. Chandrasekera, T.; Hosseini, Z.; Perera, U. Can artificial intelligence support creativity in early design processes? Int. J. Archit. Comput. 2025, 23, 122–136. [Google Scholar] [CrossRef]
  18. Rao, J.; Xiong, M. A new art design method based on AIGC: Analysis from the perspective of creation efficiency. In Proceedings of the 2023 4th International Conference on Intelligent Design (ICID), Xi’an, China, 20–22 October 2023; pp. 129–134. [Google Scholar]
  19. Lu, Y.; Guo, C.; Dou, Y.; Dai, X.; Wang, F.-Y. Could ChatGPT Imagine: Content control for artistic painting generation via large language models. J. Intell. Robot. Syst. 2023, 109, 39. [Google Scholar] [CrossRef]
  20. Guo, C.; Lu, Y.; Dou, Y.; Wang, F.-Y. Can ChatGPT boost artistic creation: The need of imaginative intelligence for parallel art. IEEE/CAA J. Autom. Sin. 2023, 10, 835–838. [Google Scholar] [CrossRef]
  21. Zhu, S.; Wang, Z.; Zhuang, Y.; Jiang, Y.; Guo, M.; Zhang, X.; Gao, Z. Exploring the impact of ChatGPT on art creation and collaboration: Benefits, challenges and ethical implications. Telemat. Inform. Rep. 2024, 14, 100138. [Google Scholar] [CrossRef]
  22. Raman, R.; Mandal, S.; Das, P.; Kaur, T.; JP, S.; Nedungadi, P. University Students as Early Adopters of ChatGPT: Innovation Diffusion Study; Springer Science: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
  23. Alfaisal, R.; Hatem, M.; Salloum, A.; Al Saidat, M.R.; Salloum, S.A. Forecasting the acceptance of ChatGPT as educational platforms: An integrated SEM-ANN methodology. In Artificial Intelligence in Education: The Power and Dangers of ChatGPT in the Classroom; Springer: Berlin/Heidelberg, Germany, 2024; pp. 331–348. [Google Scholar]
  24. Zhao, Y.; Li, Y.; Xiao, Y.; Chang, H.; Liu, B. Factors influencing the acceptance of ChatGPT in high education: An integrated model with PLS-SEM and fsQCA approach. Sage Open 2024, 14, 21582440241289835. [Google Scholar] [CrossRef]
  25. DeLone, W.H.; McLean, E.R. Information systems success: The quest for the dependent variable. Inf. Syst. Res. 1992, 3, 60–95. [Google Scholar] [CrossRef]
  26. DeLone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar]
  27. Petter, S.; DeLone, W.; McLean, E. Measuring information systems success: Models, dimensions, measures, and interrelationships. Eur. J. Inf. Syst. 2008, 17, 236–263. [Google Scholar] [CrossRef]
  28. Wang, X.-W.; Guo, Y. Beyond the screen: Understanding consumer engagement with live-stream shopping from the perspective of the information system success model. J. Retail. Consum. Serv. 2026, 88, 104515. [Google Scholar] [CrossRef]
  29. Cho, K.W.; Bae, S.-K.; Ryu, J.-H.; Kim, K.N.; An, C.-H.; Chae, Y.M. Performance evaluation of public hospital information systems by the information system success model. Healthc. Inform. Res. 2015, 21, 43–48. [Google Scholar] [CrossRef]
  30. Alshammari, S.H.; Alshammari, R.A. An integration of expectation confirmation model and information systems success model to explore the factors affecting the continuous intention to utilise virtual classrooms. Sci. Rep. 2024, 14, 18491. [Google Scholar] [CrossRef]
  31. Efiloğlu Kurt, Ö. Examining an e-learning system through the lens of the information systems success model: Empirical evidence from Italy. Educ. Inf. Technol. 2019, 24, 1173–1184. [Google Scholar] [CrossRef]
  32. Çelik, K.; Ayaz, A. Validation of the Delone and McLean information systems success model: A study on student information system. Educ. Inf. Technol. 2022, 27, 4709–4727. [Google Scholar] [CrossRef]
  33. Duong, C.D.; Nguyen, T.H.; Ngo, T.V.N.; Dao, V.T.; Do, N.D.; Pham, T.V. Exploring higher education students’ continuance usage intention of ChatGPT: Amalgamation of the information system success model and the stimulus-organism-response paradigm. Int. J. Inf. Learn. Technol. 2024, 41, 556–584. [Google Scholar] [CrossRef]
  34. Tan, C.N.-L.; Tee, M.; Koay, K.Y. Discovering students’ continuous intentions to use ChatGPT in higher education: A tale of two theories. Asian Educ. Dev. Stud. 2024, 13, 356–372. [Google Scholar] [CrossRef]
  35. Marjanovic, U.; Mester, G.; Milic Marjanovic, B. Assessing the success of artificial intelligence tools: An evaluation of chatgpt using the information system success model. Interdiscip. Descr. Complex Syst. INDECS 2024, 22, 266–275. [Google Scholar] [CrossRef]
  36. Chen, H.-J.; Chang, S.-T.; Chou, P.-Y.; Tsai, Y.-S.; Chu, C.; Hsieh, F.; Tseng, G. Exploring users’ continuous use intention of ChatGPT based on the is success model and technology readiness. Int. J. Manag. Stud. Soc. Sci. Res. 2024, 6, 366–377. [Google Scholar] [CrossRef]
  37. Mathur, S.; Anand, V.; Sharma, D.; Vishnoi, S.K. Influence of ChatGPT in professional communication–moderating role of perceived innovativeness. Int. J. Inf. Learn. Technol. 2025, 42, 107–126. [Google Scholar] [CrossRef]
  38. Al-Emran, M.; Abu-Hijleh, B.; Alsewari, A.A. Examining the impact of Generative AI on social sustainability by integrating the information system success model and technology-environmental, economic, and social sustainability theory. Educ. Inf. Technol. 2025, 30, 9405–9426. [Google Scholar] [CrossRef]
  39. Sabeh, H.N.; Kee, D.M.H.; Mohammed, R.B.; Albarwary, S.A.; Khalilov, S. ChatGPT in the Malaysian Classroom: Assessing Student Acceptance and Effectiveness Through UTAUT2 and IS Success Models. In Proceedings of the 2025 IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), Istanbul, Turkey, 29 March–1 April 2025; pp. 583–592. [Google Scholar]
  40. Shuhaiber, A.; Kuhail, M.A.; Salman, S. ChatGPT in higher education-A Student’s perspective. Comput. Hum. Behav. Rep. 2025, 17, 100565. [Google Scholar] [CrossRef]
  41. Ngo, T.T.A. The perception by university students of the use of ChatGPT in education. Int. J. Emerg. Technol. Learn. (Online) 2023, 18, 4. [Google Scholar] [CrossRef]
  42. Chen, J.; Zhuo, Z.; Lin, J. Does ChatGPT play a double-edged sword role in the field of higher education? An in-depth exploration of the factors affecting student performance. Sustainability 2023, 15, 16928. [Google Scholar] [CrossRef]
  43. Qadir, J. Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. In Proceedings of the 2023 IEEE Global Engineering Education Conference (EDUCON), Kuwait, Kuwait, 1–4 May 2023; pp. 1–9. [Google Scholar]
  44. Kohnke, L.; Moorhouse, B.L.; Zou, D. ChatGPT for language teaching and learning. Relc J. 2023, 54, 537–550. [Google Scholar] [CrossRef]
  45. Athanassopoulos, S.; Manoli, P.; Gouvi, M.; Lavidas, K.; Komis, V. The use of ChatGPT as a learning tool to improve foreign language writing in a multilingual and multicultural classroom. Adv. Mob. Learn. Educ. Res. 2023, 3, 818–824. [Google Scholar] [CrossRef]
  46. Lee, H. The rise of ChatGPT: Exploring its potential in medical education. Anat. Sci. Educ. 2024, 17, 926–931. [Google Scholar] [CrossRef]
  47. Thomae, A.V.; Witt, C.M.; Barth, J. Integration of ChatGPT into a course for medical students: Explorative study on teaching scenarios, students’ perception, and applications. JMIR Med. Educ. 2024, 10, e50545. [Google Scholar] [CrossRef]
  48. Yilmaz, R.; Yilmaz, F.G.K. Augmented intelligence in programming learning: Examining student views on the use of ChatGPT for programming learning. Comput. Hum. Behav. Artif. Hum. 2023, 1, 100005. [Google Scholar] [CrossRef]
  49. Sun, D.; Boudouaia, A.; Zhu, C.; Li, Y. Would ChatGPT-facilitated programming mode impact college students’ programming behaviors, performances, and perceptions? An empirical study. Int. J. Educ. Technol. High. Educ. 2024, 21, 14. [Google Scholar] [CrossRef]
  50. Chellappa, V.; Luximon, Y. Understanding the perception of design students towards ChatGPT. Comput. Educ. Artif. Intell. 2024, 7, 100281. [Google Scholar] [CrossRef]
  51. Meron, Y.; Araci, Y.T. Artificial intelligence in design education: Evaluating ChatGPT as a virtual colleague for post-graduate course development. Des. Sci. 2023, 9, e30. [Google Scholar] [CrossRef]
  52. Li, X.; Tang, X.; Zheng, X.; Huang, Y.; Tu, Y. Exploring the AIGC-driven co-creation model in art and design education: Insights from a student workshop and exhibition. Int. J. Technol. Des. Educ. 2025, 1–31. [Google Scholar] [CrossRef]
  53. Chang, Y.-C.; Tung, F.-W. ChatGPT in Design Practice: Redefining Collaborative Design Process with Future Designers. In Proceedings of the International Conference on Human-Computer Interaction, Gothenburg, Sweden, 22–27 June 2025; pp. 31–43. [Google Scholar]
  54. Papachristos, E.; Inal, Y.; Monllaó, C.V.; Johansen, E.A.; Hermansen, M. Integrating AI into Design Ideation: Assessing ChatGPT’s Role in Human-Centered Design Education. Authorea Prepr. 2024. [Google Scholar] [CrossRef]
  55. Ouma, B.O.; Mwangi, E.K.; Okoth, A.A.; Njeri, A.W. Integrating Generative AI and ChatGPT in Design Education: Impacts on Critical Thinking Development. Int. J. Graph. Des. 2025, 3, 1–18. [Google Scholar] [CrossRef]
  56. Filippi, S. Measuring the impact of ChatGPT on fostering concept generation in innovative product design. Electronics 2023, 12, 3535. [Google Scholar] [CrossRef]
  57. Maclachlan, R.; Adams, R.; Lauro, V.; Murray, M.; Magueijo, V.; Flockhart, G.; Hasty, W. Chat-GPT: A clever search engine or a creative design assistant for students and industry? In Proceedings of the 26th International Conference on Engineering and Product Design Education: Rise of the Machines: Design Education in the Generative AI Era, Brighton, UK, 5–6 September 2024. [Google Scholar]
  58. Almulla, M.A. Investigating influencing factors of learning satisfaction in AI ChatGPT for research: University students perspective. Heliyon 2024, 10, e32220. [Google Scholar] [CrossRef] [PubMed]
  59. Tsai, C.-Y.; Huang, T.-C.; Shu, Y.; Chiang, Y.-H. Investigating the Influence of Autonomy, Competence, Relatedness, and Excitement on User Satisfaction with ChatGPT: A Self-Determination Theory Perspective. In Proceedings of the International Conference on Innovative Technologies and Learning, Oslo, Norway, 5–7 August 2025; pp. 383–392. [Google Scholar]
  60. Ng, J.; Tong, M.; Tsang, E.Y.; Chu, K.; Tsang, W. Exploring Students’ Perceptions and Satisfaction of Using GenAI-ChatGPT Tools for Learning in Higher Education: A Mixed Methods Study. SN Comput. Sci. 2025, 6, 476. [Google Scholar] [CrossRef]
  61. Thongsri, N.; Tripak, O.; Bao, Y. Do learners exhibit a willingness to use ChatGPT? An advanced two-stage SEM-neural network approach for forecasting factors influencing ChatGPT adoption. Interact. Technol. Smart Educ. 2025, 22, 217–234. [Google Scholar] [CrossRef]
  62. Vance, A.; Elie-Dit-Cosaque, C.; Straub, D.W. Examining trust in information technology artifacts: The effects of system quality and culture. J. Manag. Inf. Syst. 2008, 24, 73–100. [Google Scholar] [CrossRef]
  63. Gorla, N.; Somers, T.M.; Wong, B. Organizational impact of system quality, information quality, and service quality. J. Strateg. Inf. Syst. 2010, 19, 207–228. [Google Scholar] [CrossRef]
  64. Zheng, Y.; Zhao, K.; Stylianou, A. The impacts of information quality and system quality on users’ continuance intention in information-exchange virtual communities: An empirical investigation. Decis. Support Syst. 2013, 56, 513–524. [Google Scholar] [CrossRef]
  65. Chu, M.-N. Assessing the benefits of ChatGPT for business: An empirical study on organizational performance. IEEE Access 2023, 11, 76427–76436. [Google Scholar] [CrossRef]
  66. Ma, W. An empirical study on the educational application of ChatGPT. J. Electr. Syst. 2024, 20, 829–841. [Google Scholar] [CrossRef]
  67. Sibanda, A. Customer adoption of Chat GPT for web development and programming assistance in the Zimbabwe tech industry. In Proceedings of the International Student Conference on Business, Education, Economics, Accounting, and Management (ISC-BEAM), Yogyakarta, Indonesia, 3–4 September 2024; pp. 1931–1944. [Google Scholar]
  68. Petter, S.; McLean, E.R. A meta-analytic assessment of the DeLone and McLean IS success model: An examination of IS success at the individual level. Inf. Manag. 2009, 46, 159–166. [Google Scholar] [CrossRef]
  69. Al-Mamary, Y.H.; Shamsuddin, A.; Aziati, N. The relationship between system quality, information quality, and organizational performance. Int. J. Knowl. Res. Manag. E-Commer. 2014, 4, 7–10. [Google Scholar]
  70. Jo, H. Understanding AI tool engagement: A study of ChatGPT usage and word-of-mouth among university students and office workers. Telemat. Inform. 2023, 85, 102067. [Google Scholar] [CrossRef]
  71. Birolini, A. Quality and Reliability of Technical Systems: Theory, Practice, Management; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  72. Al-Maroof, R.S.; Alhumaid, K.; Alshaafi, A.; Akour, I.; Bettayeb, A.; Alfaisal, R.; Salloum, S.A. A comparative analysis of chatgpt and google in educational settings: Understanding the influence of mediators on learning platform adoption. In Artificial Intelligence in Education: The Power and Dangers of ChatGPT in the Classroom; Springer: Berlin/Heidelberg, Germany, 2024; pp. 365–386. [Google Scholar]
  73. Atalan, A. The ChatGPT application on quality management: A comprehensive review. J. Manag. Anal. 2025, 12, 229–259. [Google Scholar] [CrossRef]
  74. Al-kfairy, M. Factors impacting the adoption and acceptance of ChatGPT in educational settings: A narrative review of empirical studies. Appl. Syst. Innov. 2024, 7, 110. [Google Scholar] [CrossRef]
  75. Wu, J.-H.; Wang, S.-C. What drives mobile commerce?: An empirical evaluation of the revised technology acceptance model. Inf. Manag. 2005, 42, 719–729. [Google Scholar] [CrossRef]
  76. Almogren, A.S.; Al-Rahmi, W.M.; Dahri, N.A. Exploring factors influencing the acceptance of ChatGPT in higher education: A smart education perspective. Heliyon 2024, 10, e31887. [Google Scholar] [CrossRef]
  77. Li, C.; Yang, J.; Zhang, H.; Tian, L.; Guo, J.; Yu, G. Assessment of University Students’ behavioral Intentions to Use ChatGPT: A Comprehensive Application Based on the Innovation Diffusion Theory and the Technology Acceptance Model. Preprint 2024, 2024061835. [Google Scholar] [CrossRef]
  78. Isaac, O.; Aldholay, A.; Abdullah, Z.; Ramayah, T. Online learning usage within Yemeni higher education: The role of compatibility and task-technology fit as mediating variables in the IS success model. Comput. Educ. 2019, 136, 113–129. [Google Scholar] [CrossRef]
  79. Rogers, E.M.; Singhal, A.; Quinlan, M.M. Diffusion of innovations. In An integrated Approach to Communication Theory and Research; Routledge: Madison Ave, NY, USA, 2014; pp. 432–448. [Google Scholar]
  80. Yu, C.; Yan, J.; Cai, N. ChatGPT in higher education: Factors influencing ChatGPT user satisfaction and continued use intention. Front. Educ. 2024, 9, 1354929. [Google Scholar] [CrossRef]
  81. Acikgoz, F.; Elwalda, A.; De Oliveira, M.J. Curiosity on cutting-edge technology via theory of planned behavior and diffusion of innovation theory. Int. J. Inf. Manag. Data Insights 2023, 3, 100152. [Google Scholar] [CrossRef]
  82. Al-Rahmi, W.M.; Yahaya, N.; Alamri, M.M.; Alyoussef, I.Y.; Al-Rahmi, A.M.; Kamin, Y.B. Integrating innovation diffusion theory with technology acceptance model: Supporting students’ attitude towards using a massive open online courses (MOOCs) systems. Interact. Learn. Environ. 2021, 29, 1380–1392. [Google Scholar] [CrossRef]
  83. Jafari, H.; Naghshineh, N.; Rodríguez, O.A.; Keshavarz, H.; Lund, B. In ChatGPT We Trust? Unveiling the Dynamics of Reuse Intention and Trust Towards Generative AI Chatbots among Iranians. Infosci. Trends 2024, 1, 56–72. [Google Scholar] [CrossRef]
  84. Akour, I.A.; Al-Maroof, R.S.; Alfaisal, R.; Salloum, S.A. A conceptual framework for determining metaverse adoption in higher institutions of gulf area: An empirical study using hybrid SEM-ANN approach. Comput. Educ. Artif. Intell. 2022, 3, 100052. [Google Scholar] [CrossRef]
  85. Agarwal, R.; Prasad, J. A conceptual and operational definition of personal innovativeness in the domain of information technology. Inf. Syst. Res. 1998, 9, 204–215. [Google Scholar] [CrossRef]
  86. Chen, H.-J. Verifying the link of innovativeness to the confirmation-expectation model of ChatGPT of students in learning. J. Inf. Commun. Ethics Soc. 2025, 23, 433–447. [Google Scholar] [CrossRef]
  87. Salloum, S.A.; Hatem, M.; Salloum, A.; Alfaisal, R. Envisioning ChatGPT’s Integration as Educational Platforms: A Hybrid SEM-ML Method for Adoption Prediction. In Artificial Intelligence in Education: The Power and Dangers of ChatGPT in the Classroom; Springer: Berlin/Heidelberg, Germany, 2024; pp. 315–330. [Google Scholar]
  88. Kumar, J.; Rani, M.; Rani, G.; Rani, V. Human-machine dialogues unveiled: An in-depth exploration of individual attitudes and adoption patterns toward AI-powered ChatGPT systems. Digit. Policy Regul. Gov. 2024, 26, 435–449. [Google Scholar] [CrossRef]
  89. Liu, Y.; Park, Y.; Wang, H. The mediating effect of user satisfaction and the moderated mediating effect of AI anxiety on the relationship between perceived usefulness and subscription payment intention. J. Retail. Consum. Serv. 2025, 84, 104176. [Google Scholar] [CrossRef]
  90. Alshammari, S.H.; Babu, E. The mediating role of satisfaction in the relationship between perceived usefulness, perceived ease of use and students’ behavioural intention to use ChatGPT. Sci. Rep. 2025, 15, 7169. [Google Scholar] [CrossRef]
  91. Preston, C.C.; Colman, A.M. Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychol. 2000, 104, 1–15. [Google Scholar] [CrossRef]
  92. Dash, G.; Paul, J. CB-SEM vs. PLS-SEM methods for research in social sciences and technology forecasting. Technol. Forecast. Soc. Change 2021, 173, 121092. [Google Scholar] [CrossRef]
  93. Fiss, P.C. Building better causal theories: A fuzzy set approach to typologies in organization research. Acad. Manag. J. 2011, 54, 393–420. [Google Scholar] [CrossRef]
  94. Woodside, A.G. Moving beyond multiple regression analysis to algorithms: Calling for adoption of a paradigm shift from symmetric to asymmetric thinking in data analysis and crafting theory. J. Bus. Res. 2013, 66, 463–472. [Google Scholar] [CrossRef]
  95. Hair, J.F., Jr.; Sarstedt, M.; Hopkins, L.; Kuppelwieser, V.G. Partial least squares structural equation modeling (PLS-SEM) An emerging tool in business research. Eur. Bus. Rev. 2014, 26, 106–121. [Google Scholar] [CrossRef]
  96. Gefen, D.; Straub, D.; Boudreau, M.-C. Structural equation modeling and regression: Guidelines for research practice. Commun. Assoc. Inf. Syst. 2000, 4, 7. [Google Scholar] [CrossRef]
  97. Hair, J.F. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage: Thousand Oaks, CA, USA, 2014. [Google Scholar]
  98. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  99. Kline, R.B. Principles and Practice of Structural Equation Modeling; Guilford publications: New York, NY, USA, 2023. [Google Scholar]
  100. Carrión, G.C.; Henseler, J.; Ringle, C.M.; Roldán, J.L. Prediction-oriented modeling in business research by means of PLS path modeling: Introduction to a JBR special section. J. Bus. Res. 2016, 69, 4545–4551. [Google Scholar] [CrossRef]
  101. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  102. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  103. Henseler, J.; Ringle, C.M.; Sinkovics, R.R. The use of partial least squares path modeling in international marketing. In New Challenges to International Marketing; Emerald Group Publishing Limited: Leeds, UK, 2009; Volume 20, pp. 277–319. [Google Scholar]
  104. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Academic Press: Cambridge, MA, USA, 2013. [Google Scholar]
  105. Alin, A. Multicollinearity. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 370–374. [Google Scholar] [CrossRef]
  106. Liang, H.; Saraf, N.; Hu, Q.; Xue, Y. Assimilation of enterprise systems: The effect of institutional pressures and the mediating role of top management. MIS Q. 2007, 31, 59–87. [Google Scholar] [CrossRef]
  107. Kock, N.; Lynn, G. Lateral collinearity and misleading results in variance-based SEM: An illustration and recommendations. J. Assoc. Inf. Syst. 2012, 13, 2. [Google Scholar] [CrossRef]
  108. Hew, J.-J.; Leong, L.-Y.; Tan, G.W.-H.; Lee, V.-H.; Ooi, K.-B. Mobile social tourism shopping: A dual-stage analysis of a multi-mediation model. Tour. Manag. 2018, 66, 121–139. [Google Scholar] [CrossRef]
  109. Rihoux, B.; Ragin, C.C. Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques; Sage: Thousand Oaks, CA, USA, 2009; Volume 51. [Google Scholar]
  110. Dul, J. Identifying single necessary conditions with NCA and fsQCA. J. Bus. Res. 2016, 69, 1516–1523. [Google Scholar] [CrossRef]
  111. Greckhamer, T.; Furnari, S.; Fiss, P.C.; Aguilera, R.V. Studying configurations with qualitative comparative analysis: Best practices in strategy and organization research. Strateg. Organ. 2018, 16, 482–495. [Google Scholar] [CrossRef]
  112. Ragin, C.C. Redesigning Social Inquiry: Fuzzy Sets and Beyond; University of Chicago Press: Chicago, IL, USA, 2009. [Google Scholar]
  113. Pappas, I.O.; Woodside, A.G. Fuzzy-set Qualitative Comparative Analysis (fsQCA): Guidelines for research practice in Information Systems and marketing. Int. J. Inf. Manag. 2021, 58, 102310. [Google Scholar] [CrossRef]
  114. Fainshmidt, S.; Witt, M.A.; Aguilera, R.V.; Verbeke, A. The contributions of qualitative comparative analysis (QCA) to international business research. J. Int. Bus. Stud. 2020, 51, 455–466. [Google Scholar] [CrossRef]
  115. Beynon, M.J.; Jones, P.; Pickernell, D. Country-based comparison analysis using fsQCA investigating entrepreneurial attitudes and activity. J. Bus. Res. 2016, 69, 1271–1276. [Google Scholar] [CrossRef]
  116. Wu, Q.; Tian, J.; Liu, Z. Exploring the usage behavior of generative artificial intelligence: A case study of ChatGPT with insights into the moderating effects of habit and personal innovativeness. Curr. Psychol. 2025, 44, 8190–8203. [Google Scholar] [CrossRef]
  117. Sadewo, S.T.; Ratnawati, S.; Giovanni, A.; Widayanti, I. The Influence of Personal Innovativeness on ChatGPT Continuance Usage Intention among Students. SATESI J. Sains Teknol. Dan Sist. Inf. 2025, 5, 88–98. [Google Scholar]
  118. Foroughi, B.; Iranmanesh, M.; Ghobakhloo, M.; Senali, M.G.; Annamalai, N.; Naghmeh-Abbaspour, B.; Rejeb, A. Determinants of ChatGPT adoption among students in higher education: The moderating effect of trust. Electron. Libr. 2025, 43, 1–21. [Google Scholar] [CrossRef]
  119. Kim, M.K.; Jhee, S.Y.; Han, S.-L. The Impact of Chat GPT’s Quality Factors on Perceived Usefulness, Perceived Enjoyment, and Continuous Usage Intention Using the IS Success Model. Asia Mark. J. 2025, 26, 243–254. [Google Scholar] [CrossRef]
  120. Almarzouqi, A.; Aburayya, A.; Salloum, S.A. Prediction of user’s intention to use metaverse system in medical education: A hybrid SEM-ML learning approach. IEEE Access 2022, 10, 43421–43434. [Google Scholar] [CrossRef]
  121. Salloum, S., Sr.; Almarzouqi, A., Sr.; Salloum, A., Jr.; Alfaisal, R., Sr. Unlocking the Potential of ChatGPT in Medical Education and Practice. JMIR Prepr. 2024, 1–35. [Google Scholar] [CrossRef]
  122. Khan, S.; Mehmood, S.; Khan, S.U. Navigating innovation in the age of AI: How generative AI and innovation influence organizational performance in the manufacturing sector. J. Manuf. Technol. Manag. 2025, 36, 597–620. [Google Scholar] [CrossRef]
  123. Kim, Y.W.; Cha, M.C.; Yoon, S.H.; Lee, S.C. Not merely useful but also amusing: Impact of perceived usefulness and perceived enjoyment on the adoption of AI-powered coding assistant. Int. J. Hum. Comput. Interact. 2025, 41, 6210–6222. [Google Scholar] [CrossRef]
  124. Al-Adwan, A.S.; Li, N.; Al-Adwan, A.; Abbasi, G.A.; Albelbisi, N.A.; Habibi, A. Extending the technology acceptance model (TAM) to Predict University Students’ intentions to use metaverse-based learning platforms. Educ. Inf. Technol. 2023, 28, 15381–15413. [Google Scholar] [CrossRef] [PubMed]
  125. Batouei, A.; Nikbin, D.; Foroughi, B. Acceptance of ChatGPT as an auxiliary tool enhancing travel experience. J. Hosp. Tour. Insights 2025, 8, 2744–2763. [Google Scholar] [CrossRef]
  126. Sabeh, H.N. What drives IT students toward ChatGPT? Analyzing the factors influencing students’ intention to use ChatGPT for educational purposes. In Proceedings of the 2024 21st International Multi-Conference on Systems, Signals & Devices (SSD), Erbil, Iraq, 22–25 April 2024; pp. 533–539. [Google Scholar]
  127. Sabraz Nawaz, S.; Fathima Sanjeetha, M.B.; Al Murshidi, G.; Mohamed Riyath, M.I.; Mat Yamin, F.B.; Mohamed, R. Acceptance of ChatGPT by undergraduates in Sri Lanka: A hybrid approach of SEM-ANN. Interact. Technol. Smart Educ. 2024, 21, 546–570. [Google Scholar] [CrossRef]
  128. Nan, D.; Sun, S.; Zhang, S.; Zhao, X.; Kim, J.H. Analyzing behavioral intentions toward Generative Artificial Intelligence: The case of ChatGPT. Univers. Access Inf. Soc. 2025, 24, 885–895. [Google Scholar] [CrossRef]
  129. Sun, P.; Li, L.; Hossain, M.S.; Zabin, S. Investigating students’ behavioral intention to use ChatGPT for educational purposes. Sustain. Futures 2025, 9, 100531. [Google Scholar] [CrossRef]
  130. Yousaf, A.; Mishra, A.; Gupta, A. ‘From technology adoption to consumption’: Effect of pre-adoption expectations from fitness applications on usage satisfaction, continual usage, and health satisfaction. J. Retail. Consum. Serv. 2021, 62, 102655. [Google Scholar] [CrossRef]
  131. Wu, J.; Cai, Y.; Sun, T.; Ma, K.; Lu, C. Integrating AIGC with design: Dependence, application, and evolution-a systematic literature review. J. Eng. Des. 2025, 36, 758–796. [Google Scholar] [CrossRef]
  132. Zhang, Y.; Dong, C. Exploring the digital transformation of generative ai-assisted foreign language education: A socio-technical systems perspective based on mixed-methods. Systems 2024, 12, 462. [Google Scholar] [CrossRef]
  133. Cui, Y.; Meng, Y.; Tang, L. Reconsidering teacher assessment literacy in GenAI-enhanced environments: A scoping review. Teach. Teach. Educ. 2025, 165, 105163. [Google Scholar] [CrossRef]
  134. Wang, P.; Jing, Y.; Shen, S. A systematic literature review on the application of generative artificial intelligence (GAI) in teaching within higher education: Instructional contexts, process, and strategies. Internet High. Educ. 2025, 65, 100996. [Google Scholar] [CrossRef]
  135. Nguyen, T.M.; Quach, S.; Thaichon, P. The effect of AI quality on customer experience and brand relationship. J. Consum. Behav. 2022, 21, 481–493. [Google Scholar] [CrossRef]
  136. Wang, H.; Li, D.; Gu, C.; Wei, W.; Chen, J. Research on high school students’ behavior in art course within a virtual learning environment based on SVVR. Front. Psychol. 2023, 14, 1218959. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Research model.
Figure 2. Analysis results of hypothesized model.
Figure 3. Performance impact construct priority map.
Table 1. Demographic information of respondents.

| Sample | Category | Number (n = 435) | Proportion (%) |
| --- | --- | --- | --- |
| Gender | Male | 206 | 47.4 |
| | Female | 229 | 52.6 |
| Age | 18–21 | 377 | 86.7 |
| | 22–26 | 58 | 13.3 |
| Level of study | Undergraduate | 389 | 89.4 |
| | Graduate | 46 | 10.6 |
| Frequency | Less than once a week | 58 | 13.3 |
| | About once a week | 126 | 29.0 |
| | Several times a week | 43 | 9.9 |
| | About once a day | 130 | 29.9 |
| | Several times a day | 78 | 17.9 |
Table 2. Measurement model analysis results.

| Constructs | Items | Loadings (>0.7) | VIF (<5) | α (>0.7) | CR (>0.7) | AVE (>0.5) |
| --- | --- | --- | --- | --- | --- | --- |
| System reliability | SR1 | 0.803 | 1.503 | 0.802 | 0.884 | 0.717 |
| | SR2 | 0.862 | 1.927 | | | |
| | SR3 | 0.874 | 1.990 | | | |
| System timeliness | ST1 | 0.899 | 1.564 | 0.750 | 0.889 | 0.800 |
| | ST2 | 0.890 | 1.564 | | | |
| System flexibility | SF1 | 0.814 | 1.551 | 0.777 | 0.871 | 0.693 |
| | SF2 | 0.871 | 1.852 | | | |
| | SF3 | 0.811 | 1.560 | | | |
| Compatibility | COM1 | 0.850 | 1.703 | 0.792 | 0.878 | 0.706 |
| | COM2 | 0.844 | 1.784 | | | |
| | COM3 | 0.827 | 1.571 | | | |
| Personal innovativeness | PI1 | 0.875 | 2.034 | 0.795 | 0.880 | 0.711 |
| | PI2 | 0.877 | 2.047 | | | |
| | PI3 | 0.772 | 1.419 | | | |
| Perceived usefulness | PU1 | 0.840 | 2.006 | 0.843 | 0.895 | 0.680 |
| | PU2 | 0.823 | 1.825 | | | |
| | PU3 | 0.825 | 1.886 | | | |
| | PU4 | 0.812 | 1.780 | | | |
| Satisfaction | SA1 | 0.858 | 2.071 | 0.868 | 0.910 | 0.716 |
| | SA2 | 0.835 | 2.040 | | | |
| | SA3 | 0.849 | 2.150 | | | |
| | SA4 | 0.841 | 2.037 | | | |
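For readers who wish to verify the reliability and convergent-validity figures above, the sketch below (illustrative only, not the study's analysis script) reproduces the CR and AVE columns of Table 2 directly from the standardized outer loadings; Cronbach's α additionally requires the item-level covariances and is therefore not recomputed here. Values may differ in the last digit because the reported loadings are rounded.

```python
# A minimal sketch reproducing the CR and AVE columns of Table 2 from outer loadings.
# AVE = mean(loading^2); CR = (sum loadings)^2 / ((sum loadings)^2 + sum(1 - loading^2)).

loadings = {  # values copied from Table 2
    "System reliability": [0.803, 0.862, 0.874],
    "System timeliness": [0.899, 0.890],
    "Satisfaction": [0.858, 0.835, 0.849, 0.841],
}

def ave(lams):
    return sum(l * l for l in lams) / len(lams)

def composite_reliability(lams):
    squared_sum = sum(lams) ** 2
    return squared_sum / (squared_sum + sum(1 - l * l for l in lams))

for construct, lams in loadings.items():
    print(f"{construct}: AVE = {ave(lams):.3f}, CR = {composite_reliability(lams):.3f}")
# e.g., System reliability -> AVE = 0.717, CR = 0.884, matching the table.
```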
Table 3. Discriminant validity: cross-loadings.

| | SR | ST | SF | COM | PI | PU | SA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SR1 | **0.803** | 0.488 | 0.475 | 0.515 | 0.453 | 0.563 | 0.519 |
| SR2 | **0.862** | 0.464 | 0.507 | 0.507 | 0.385 | 0.467 | 0.493 |
| SR3 | **0.874** | 0.457 | 0.545 | 0.531 | 0.388 | 0.493 | 0.495 |
| ST1 | 0.527 | **0.899** | 0.551 | 0.483 | 0.402 | 0.545 | 0.511 |
| ST2 | 0.463 | **0.890** | 0.532 | 0.400 | 0.427 | 0.569 | 0.504 |
| SF1 | 0.490 | 0.517 | **0.814** | 0.496 | 0.356 | 0.458 | 0.519 |
| SF2 | 0.518 | 0.525 | **0.871** | 0.518 | 0.390 | 0.500 | 0.570 |
| SF3 | 0.494 | 0.468 | **0.811** | 0.534 | 0.381 | 0.502 | 0.547 |
| COM1 | 0.522 | 0.441 | 0.526 | **0.850** | 0.540 | 0.527 | 0.572 |
| COM2 | 0.481 | 0.380 | 0.535 | **0.844** | 0.416 | 0.452 | 0.546 |
| COM3 | 0.534 | 0.422 | 0.502 | **0.827** | 0.458 | 0.543 | 0.574 |
| PI1 | 0.443 | 0.413 | 0.397 | 0.494 | **0.875** | 0.519 | 0.494 |
| PI2 | 0.386 | 0.409 | 0.378 | 0.456 | **0.877** | 0.567 | 0.488 |
| PI3 | 0.390 | 0.347 | 0.368 | 0.478 | **0.772** | 0.405 | 0.460 |
| PU1 | 0.462 | 0.545 | 0.482 | 0.499 | 0.498 | **0.840** | 0.538 |
| PU2 | 0.496 | 0.504 | 0.483 | 0.483 | 0.508 | **0.823** | 0.564 |
| PU3 | 0.504 | 0.516 | 0.457 | 0.517 | 0.466 | **0.825** | 0.556 |
| PU4 | 0.513 | 0.488 | 0.506 | 0.501 | 0.485 | **0.812** | 0.527 |
| SA1 | 0.524 | 0.540 | 0.567 | 0.639 | 0.527 | 0.612 | **0.858** |
| SA2 | 0.490 | 0.438 | 0.533 | 0.535 | 0.451 | 0.527 | **0.835** |
| SA3 | 0.493 | 0.466 | 0.557 | 0.528 | 0.478 | 0.551 | **0.849** |
| SA4 | 0.497 | 0.467 | 0.561 | 0.562 | 0.468 | 0.545 | **0.841** |

Note: The bold values represent the outer loadings of each measurement item on its corresponding construct.
Table 4. Discriminant validity: Fornell–Larcker and HTMT results.

| | SR | ST | SF | COM | PI | PU | SA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SR | **0.847** | 0.714 | 0.762 | 0.766 | 0.605 | 0.729 | 0.711 |
| ST | 0.554 | **0.895** | 0.792 | 0.639 | 0.599 | 0.782 | 0.700 |
| SF | 0.602 | 0.605 | **0.832** | 0.791 | 0.576 | 0.722 | 0.798 |
| COM | 0.611 | 0.495 | 0.620 | **0.840** | 0.709 | 0.739 | 0.806 |
| PI | 0.482 | 0.463 | 0.452 | 0.564 | **0.843** | 0.721 | 0.685 |
| PU | 0.599 | 0.622 | 0.585 | 0.606 | 0.593 | **0.825** | 0.771 |
| SA | 0.593 | 0.567 | 0.656 | 0.672 | 0.570 | 0.662 | **0.846** |

Note: Bold diagonal values are the square roots of the AVE; values below the diagonal are construct correlations (Fornell–Larcker criterion); values above the diagonal are HTMT ratios.
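The following minimal sketch illustrates how Table 4 is read: under the Fornell–Larcker criterion, the square root of each construct's AVE (the bold diagonal) must exceed that construct's correlations with every other construct, while HTMT ratios above the diagonal should remain below the conservative 0.85 cut-off. The matrix values are copied from the table; the checking logic is an illustration, not the authors' code.

```python
import numpy as np

# Matrix values copied from Table 4; diagonal = sqrt(AVE), below = correlations, above = HTMT.
labels = ["SR", "ST", "SF", "COM", "PI", "PU", "SA"]
m = np.array([
    [0.847, 0.714, 0.762, 0.766, 0.605, 0.729, 0.711],
    [0.554, 0.895, 0.792, 0.639, 0.599, 0.782, 0.700],
    [0.602, 0.605, 0.832, 0.791, 0.576, 0.722, 0.798],
    [0.611, 0.495, 0.620, 0.840, 0.709, 0.739, 0.806],
    [0.482, 0.463, 0.452, 0.564, 0.843, 0.721, 0.685],
    [0.599, 0.622, 0.585, 0.606, 0.593, 0.825, 0.771],
    [0.593, 0.567, 0.656, 0.672, 0.570, 0.662, 0.846],
])

corr = np.tril(m, k=-1)   # construct correlations (below the diagonal)
htmt = np.triu(m, k=1)    # HTMT ratios (above the diagonal)

fornell_larcker_ok = True
for i in range(len(labels)):
    involving_i = np.concatenate([corr[i, :i], corr[i + 1:, i]])  # all correlations of construct i
    if involving_i.size and involving_i.max() >= m[i, i]:         # compare with sqrt(AVE) on the diagonal
        fornell_larcker_ok = False

print("Fornell-Larcker satisfied:", fornell_larcker_ok)
print("max HTMT:", htmt[htmt > 0].max(), "(values below 0.85 indicate discriminant validity)")
```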
Table 5. Structural assessment results.

| Hypothesis | Path | Std Beta | p-Value | Results | R² | Q² | f² | VIF |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| H1a | SQ→PI | 0.306 | 0.000 | Support | 0.368 | 0.254 | 0.080 | 1.846 |
| H1b | SQ→PU | 0.470 | 0.000 | Support | 0.571 | 0.382 | 0.259 | 1.994 |
| H1c | SQ→COM | 0.677 | 0.000 | Support | 0.458 | 0.319 | 0.846 | 1.000 |
| H2a | COM→PI | 0.356 | 0.000 | Support | | | 0.109 | 1.846 |
| H2b | COM→PU | 0.144 | 0.003 | Support | | | 0.024 | 2.046 |
| H2c | COM→SA | 0.378 | 0.000 | Support | 0.569 | 0.399 | 0.188 | 1.759 |
| H3a | PI→PU | 0.254 | 0.000 | Support | | | 0.095 | 1.583 |
| H3b | PI→SA | 0.155 | 0.000 | Support | | | 0.032 | 1.717 |
| H4 | PU→SA | 0.342 | 0.000 | Support | | | 0.146 | 1.852 |
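The f² column in Table 5 expresses how much the explained variance of an endogenous construct would shrink if a single predictor were removed, with 0.02, 0.15, and 0.35 conventionally read as small, medium, and large effects [104]. The sketch below is purely illustrative: the "excluded" R² value is back-calculated from the reported f² for H2c rather than taken from the paper.

```python
# A minimal sketch of the effect-size formula behind the f² column of Table 5.

def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f²: relative loss in explained variance when one predictor is omitted."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# Hypothetical example: if dropping COM lowered the R² of satisfaction from 0.569 to 0.488
# (a back-calculated value, not one reported in the paper), the implied effect size is:
print(round(f_squared(0.569, 0.488), 3))  # ≈ 0.188, the f² reported for H2c (COM -> SA)
```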
Table 6. An alternative test for common method bias.

| Constructs | Items | Substantive Factor Loading (R1) | Substantive Variance (R1²) | Method Factor Loading (R2) | Method Variance (R2²) |
| --- | --- | --- | --- | --- | --- |
| System reliability | SR1 | 0.799 | 0.638 | 0.136 | 0.018 |
| | SR2 | 0.865 | 0.748 | −0.077 | 0.006 |
| | SR3 | 0.875 | 0.766 | −0.050 | 0.003 |
| System timeliness | ST1 | 0.896 | 0.803 | 0.020 | 0.000 |
| | ST2 | 0.893 | 0.797 | −0.020 | 0.000 |
| System flexibility | SF1 | 0.811 | 0.658 | −0.020 | 0.000 |
| | SF2 | 0.872 | 0.760 | −0.023 | 0.001 |
| | SF3 | 0.812 | 0.659 | 0.044 | 0.002 |
| Compatibility | COM1 | 0.846 | 0.716 | 0.054 | 0.003 |
| | COM2 | 0.852 | 0.726 | −0.123 | 0.015 |
| | COM3 | 0.822 | 0.676 | 0.065 | 0.004 |
| Personal innovativeness | PI1 | 0.875 | 0.766 | 0.008 | 0.000 |
| | PI2 | 0.877 | 0.769 | −0.016 | 0.000 |
| | PI3 | 0.773 | 0.598 | 0.010 | 0.000 |
| Perceived usefulness | PU1 | 0.842 | 0.709 | −0.049 | 0.002 |
| | PU2 | 0.821 | 0.674 | 0.025 | 0.001 |
| | PU3 | 0.825 | 0.681 | −0.001 | 0.000 |
| | PU4 | 0.811 | 0.658 | 0.026 | 0.001 |
| Satisfaction | SA1 | 0.848 | 0.719 | 0.176 | 0.031 |
| | SA2 | 0.841 | 0.707 | −0.098 | 0.010 |
| | SA3 | 0.854 | 0.729 | −0.064 | 0.004 |
| | SA4 | 0.842 | 0.709 | −0.018 | 0.000 |
| Average | | 0.843 | 0.712 | 0.000 | 0.005 |
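Table 6 follows the unmeasured-method-factor approach [106]: if the average variance explained by the substantive constructs dwarfs the average variance attributable to the method factor, common method bias is unlikely to distort the results. The snippet below simply restates that comparison using the table's "Average" row; it does not re-estimate the method-factor model.

```python
# A minimal sketch restating the summary comparison behind Table 6 ("Average" row values).
avg_substantive_variance = 0.712   # average R1²
avg_method_variance = 0.005        # average R2²
ratio = avg_substantive_variance / avg_method_variance
print(f"substantive-to-method variance ratio ≈ {ratio:.0f} : 1")
# ≈ 142 : 1, indicating that common method bias is unlikely to be a serious concern.
```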
Table 7. Results of IPMA.

| Latent Constructs | Total Effect (Importance) | Index Values (Performance) |
| --- | --- | --- |
| System quality | 0.582 | 60.821 |
| Compatibility | 0.513 | 61.562 |
| Personal innovativeness | 0.242 | 71.099 |
| Perceived usefulness | 0.342 | 70.172 |
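In IPMA, the importance of a construct is its total effect on the target construct (here, satisfaction), i.e., its direct path plus all indirect paths through the mediators, while performance is the construct score rescaled to a 0–100 index. The sketch below (an illustration, not the authors' code) recovers the importance column of Table 7 from the path coefficients reported in Table 5; the recursive sum reproduces the reported values of 0.582, 0.513, 0.242, and 0.342.

```python
# A minimal sketch reproducing the "importance" column of Table 7 as total effects on SA.

paths = {  # path coefficients from Table 5
    ("SQ", "PI"): 0.306, ("SQ", "PU"): 0.470, ("SQ", "COM"): 0.677,
    ("COM", "PI"): 0.356, ("COM", "PU"): 0.144, ("COM", "SA"): 0.378,
    ("PI", "PU"): 0.254, ("PI", "SA"): 0.155,
    ("PU", "SA"): 0.342,
}

def total_effect(source: str, target: str) -> float:
    """Sum of all directed paths from source to target in the structural model."""
    if source == target:
        return 1.0
    return sum(coef * total_effect(mid, target)
               for (src, mid), coef in paths.items() if src == source)

for construct in ["SQ", "COM", "PI", "PU"]:
    print(construct, "-> SA total effect:", round(total_effect(construct, "SA"), 3))
# SQ: 0.582, COM: 0.513, PI: 0.242, PU: 0.342 (cf. the importance values in Table 7).
```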
Table 8. Results of necessity analysis.

| Variable | Satisfaction: Consistency | Satisfaction: Coverage | ~Satisfaction: Consistency | ~Satisfaction: Coverage |
| --- | --- | --- | --- | --- |
| SR | 0.753 | 0.818 | 0.502 | 0.527 |
| ~SR | 0.564 | 0.540 | 0.826 | 0.764 |
| ST | 0.818 | 0.764 | 0.618 | 0.558 |
| ~ST | 0.527 | 0.588 | 0.738 | 0.797 |
| SF | 0.819 | 0.796 | 0.552 | 0.518 |
| ~SF | 0.503 | 0.538 | 0.782 | 0.807 |
| COM | 0.794 | 0.830 | 0.518 | 0.523 |
| ~COM | 0.543 | 0.538 | 0.832 | 0.796 |
| PI | 0.793 | 0.778 | 0.577 | 0.546 |
| ~PI | 0.537 | 0.568 | 0.766 | 0.782 |
| PU | 0.810 | 0.795 | 0.553 | 0.524 |
| ~PU | 0.515 | 0.544 | 0.784 | 0.800 |
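Table 8 reports, for each condition and its negation (~), how consistently the outcome is contained within the condition (necessity consistency) and how much of the condition is taken up by the outcome (necessity coverage); a condition is conventionally treated as necessary only when consistency exceeds 0.9 [112], a threshold none of the values above reaches. The sketch below illustrates the two underlying computations on synthetic data; the calibration anchors (4/3/2 on a five-point scale) are an assumption for illustration, not the anchors used in the study.

```python
import math

# A minimal fsQCA sketch: direct calibration plus necessity consistency and coverage.

def calibrate(value, full_non, crossover, full_member):
    """Ragin's direct calibration: map a raw score to a fuzzy membership in [0, 1]."""
    if value >= crossover:
        log_odds = 3.0 * (value - crossover) / (full_member - crossover)
    else:
        log_odds = 3.0 * (value - crossover) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

def necessity(x, y):
    """Necessity consistency and coverage of condition x for outcome y."""
    overlap = sum(min(a, b) for a, b in zip(x, y))
    return overlap / sum(y), overlap / sum(x)   # (consistency, coverage)

# Synthetic raw scores for six hypothetical respondents (not the study's data).
raw_com = [4.3, 3.5, 2.1, 4.8, 3.0, 1.8]
raw_sa = [4.5, 3.2, 2.4, 4.6, 2.9, 2.0]
com = [calibrate(v, 2, 3, 4) for v in raw_com]   # assumed anchors: 2 / 3 / 4
sa = [calibrate(v, 2, 3, 4) for v in raw_sa]

cons, cov = necessity(com, sa)
print(f"necessity consistency = {cons:.3f}, coverage = {cov:.3f}")
```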
Table 9. Results of configuration analysis.
Configuration | Satisfaction: M1, M2, M3, M4 | Dissatisfaction: M1, M2, M3
SRSystems 14 00007 i001Systems 14 00007 i001
ST
SF
COMSystems 14 00007 i001 Systems 14 00007 i001
PISystems 14 00007 i001 Systems 14 00007 i001
PUSystems 14 00007 i001Systems 14 00007 i001
Consistency | 0.943 | 0.944 | 0.948 | 0.953 | 0.947 | 0.946 | 0.947
Raw coverage | 0.332 | 0.549 | 0.555 | 0.530 | 0.531 | 0.516 | 0.526
Unique coverage | 0.050 | 0.035 | 0.040 | 0.016 | 0.047 | 0.032 | 0.042
Overall solution coverage | 0.655 (satisfaction) | 0.605 (dissatisfaction)
Overall solution consistency | 0.923 (satisfaction) | 0.932 (dissatisfaction)
Note: Large circles indicate core conditions and small circles peripheral conditions. Black circles (“●”) indicate the “presence” of a condition, crossed-out circles (“⊗”) indicate its “negation”, and blank spaces in the solutions indicate “don’t care”.
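For completeness, the sketch below (synthetic memberships and a hypothetical two-configuration solution, not the study's data or configuration set) shows how the coverage rows of Table 9 are defined: a configuration's membership is the minimum across its conditions, raw coverage is the share of the outcome it covers, unique coverage is the part of that share not overlapped by the other configurations in the solution, and consistency is the sufficiency analogue of the measure used in Table 8.

```python
# A minimal sketch of configuration membership, consistency, raw/unique/solution coverage.

def config_membership(case, conditions):
    """Membership in a configuration = minimum across conditions; (name, True) means negated."""
    return min(1 - case[name] if negated else case[name] for name, negated in conditions)

def consistency(config, outcome):
    """Sufficiency consistency: how far the configuration is a subset of the outcome."""
    return sum(min(c, y) for c, y in zip(config, outcome)) / sum(config)

def coverage(config, outcome):
    """Raw coverage: share of the outcome accounted for by the configuration."""
    return sum(min(c, y) for c, y in zip(config, outcome)) / sum(outcome)

cases = [  # calibrated memberships for four hypothetical respondents
    {"COM": 0.9, "PU": 0.8, "PI": 0.7, "SA": 0.9},
    {"COM": 0.8, "PU": 0.9, "PI": 0.2, "SA": 0.8},
    {"COM": 0.3, "PU": 0.4, "PI": 0.9, "SA": 0.3},
    {"COM": 0.7, "PU": 0.2, "PI": 0.8, "SA": 0.6},
]
outcome = [c["SA"] for c in cases]
m1 = [config_membership(c, [("COM", False), ("PU", False)]) for c in cases]  # COM AND PU
m2 = [config_membership(c, [("COM", False), ("PI", False)]) for c in cases]  # COM AND PI
solution = [max(a, b) for a, b in zip(m1, m2)]                               # fuzzy union of M1 and M2

solution_coverage = coverage(solution, outcome)
print("consistency:", round(consistency(m1, outcome), 3), round(consistency(m2, outcome), 3))
print("raw coverage:", round(coverage(m1, outcome), 3), round(coverage(m2, outcome), 3))
print("unique coverage:", round(solution_coverage - coverage(m2, outcome), 3),
      round(solution_coverage - coverage(m1, outcome), 3))
print("solution coverage:", round(solution_coverage, 3))
```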