Proceeding Paper

To Satisfy or Clarify: Enhancing User Information Satisfaction with AI-Powered ChatGPT †

by Chung Jen Fu 1, Andri Dayarana K. Silalahi 2,*, I-Tung Shih 1, Do Thi Thanh Phuong 2,3, Ixora Javanisa Eunike 4 and Shinetseteg Jargalsaikhan 1

1 Department of Business Administration, College of Management, Chaoyang University of Technology, Taichung 411310, Taiwan
2 Department of Marketing and Logistics Management, Chaoyang University of Technology, Taichung 411310, Taiwan
3 Department of Distribution Management, National Chin-Yi University of Technology, Taiping 411030, Taiwan
4 Master of Management Program, Graduate School, University of Technology, Medan 20235, Indonesia
* Author to whom correspondence should be addressed.
† Presented at the 2024 IEEE 4th International Conference on Electronic Communications, Internet of Things and Big Data, Taipei, Taiwan, 19–21 April 2024.
Eng. Proc. 2024, 74(1), 3; https://doi.org/10.3390/engproc2024074003
Published: 26 August 2024

Abstract: The effective use of AI-powered ChatGPT in higher education highlights its potential as a knowledge acquisition tool, yet it brings to the fore the less-explored area of user information satisfaction (UIS), a gap addressed here through the development of a UIS model tailored to ChatGPT. In this foundational research grounded in UIS theory, a model was proposed based on critical factors to guide educational stakeholders in investigating issues such as plagiarism and ethical use. By analyzing responses from a diverse group of Indonesian users in higher education using Structural Equation Modeling with Smart-PLS 4.0, we identified five key determinants of satisfaction: completeness, precision, timeliness, convenience, and format. These determinants affect user satisfaction, while traditionally emphasized factors such as accuracy and reliability do not significantly impact the academic utilization of ChatGPT. The results advocate a measured reliance on ChatGPT that recognizes its limitations in accuracy and suggest a change in the academic use of the technology, offering a perspective that blends theoretical innovation with practical application in educational settings.

1. Introduction

Since its emergence in the field of education in late 2022, AI-powered ChatGPT has drawn attention to its benefits and challenges. Studies have underscored risks, including plagiarism, misinformation [1], and ethical dilemmas [2], leading to hesitancy in its adoption in higher education. However, contrasting with these negative implications, there is evidence of ChatGPT’s academic utility. As Roose [3] suggested, ChatGPT could be ‘the teacher’s best friend,’ aiding in tasks such as creating outlines, sparking discussion ideas, and saving time on material preparation. This juxtaposition leads to the provisional view that ChatGPT, while potentially controversial, could be an indispensable, transformative AI tool in academia, bringing about what can be termed a ‘Groundbreaking Performance Transformation’ in future educational environments.
ChatGPT generates information in the form of text and images and is often considered a groundbreaking AI-powered tool for user productivity, but this premise warrants further examination. This urgency arises because previous studies on ChatGPT have primarily focused on its benefits [4], risks [5], and academic integrity implications for users [6], among other aspects [7,8,9]. However, satisfaction with the information generated by ChatGPT, from a user's perspective, has not yet been thoroughly evaluated. Therefore, to address this neglected gap, the model introduced by Ives et al. [10] was used to measure User Information Satisfaction (UIS) with this information system, and we investigated UIS with respect to ChatGPT's generative output for academic purposes.
A UIS model was constructed for the information and systems of ChatGPT around seven measures from the UIS framework [10], namely, completeness, precision, timeliness, convenience, format, accuracy, and reliability. This study is the first to examine the UIS of ChatGPT since its introduction to the market. The urgency of this research is twofold: users need to attain satisfaction with the ChatGPT system, as measured by these factors, and a pragmatic evaluation of the current system reflects its sustainability and continued use from the perspectives of educational organization policies, the scholarly publishing environment, and other academic activities. The results of this study contribute to the development of a User Information Satisfaction (UIS) model for ChatGPT systems based on the seven analyzed factors and their impact on user satisfaction, on which individual ChatGPT users, organizations, and governments can base their policies.

2. Theoretical Background

2.1. UIS

UIS was introduced by Ives et al. [10] to evaluate information systems. Since its inception, numerous studies have investigated users' levels of satisfaction with information systems across various sectors [11,12,13]. Notably, Ang and Soh [14] explored how UIS was measured in organizations, focusing on the satisfaction of employees without a computer studies background. This underscores the importance of UIS for information system users. Consequently, the UIS concept can be extended to measure user satisfaction with ChatGPT systems. Ives et al. [10] defined UIS as the extent to which users believe the information system available to them meets their information requirements, making UIS a critical tool for evaluating the effectiveness of information systems used in organizations. Ives et al. [10] also emphasized the necessity of standardizing UIS measurements to enable comparative studies across different systems and contexts. Therefore, applying UIS to the context of ChatGPT is crucial, given the lack of investigation of UIS from the perspective of ChatGPT users. It is thus necessary to explore how ChatGPT users perceive this system's ability to meet their information needs, thereby contributing valuable insights into the effectiveness of ChatGPT as an information tool.

2.2. Theoretical Framework

Previous studies have identified different UIS measures depending on the context and information systems (IS) under investigation. For example, Galletta and Lederer [15] identified 23 measures for assessing user satisfaction regarding the implementation and effectiveness of IS, and Laumer et al. [11] identified four factors for measuring users' satisfaction with enterprise content management systems; other diverse studies have followed [10,11,12]. Owing to this body of research, the theoretical framework for measuring UIS is well established. However, there has been little measurement of UIS for AI-powered systems, including ChatGPT. Thus, we determined UIS measures for ChatGPT, providing information for users, developers such as OpenAI, and companies whose operations involve the use of chatbots. This attempt broadens the scope of UIS applications and contributes to the optimization of AI-powered systems in terms of meeting user needs. The developed measures can be used to understand how these technologies are tailored and improved, enhancing their utility and user satisfaction in various contexts.
Based on the concept of UIS defined in previous studies [4,10,11], we determined and tested seven measures for assessing ChatGPT. These seven UIS measures were combined into a user satisfaction model. The UIS measures for ChatGPT include completeness, precision, timeliness, convenience, format, accuracy, and reliability. These measures were evaluated in terms of user satisfaction and organized within a conceptual research framework, as depicted in Figure 1. The result provides a holistic understanding of how users interact with and perceive the ChatGPT system. By analyzing these diverse measures, key areas of strength and potential improvement for ChatGPT were identified to enhance its effectiveness as a user-oriented tool. The results are expected to offer valuable insights into the optimization of ChatGPT and similar AI-powered systems.

3. Hypotheses

ChatGPT UIS Measures and Satisfaction

We employed seven User Information Satisfaction (UIS) measures—completeness, accuracy, precision, reliability, timeliness, convenience, and format—to test user satisfaction. By doing so, we examined how each of these dimensions contributes to the overall satisfaction of users interacting with ChatGPT. This approach enables a detailed exploration of this system’s efficacy and user experience, offering insights that can guide future enhancements and modifications to optimize ChatGPT for its users.
The completeness of ChatGPT’s responses refers to its thoroughness and comprehensiveness in addressing user queries. Users value detailed and comprehensive answers, as they foster an understanding of a given subject [16]. The more complete a response provided by ChatGPT to its users, the higher the likelihood of user satisfaction, emphasizing the importance of depth and breadth in information delivery. Therefore, the following hypothesis was proposed.
H1. 
The more complete the responses provided by ChatGPT, the higher the user satisfaction.
Accuracy was conceptualized as the correctness and truthfulness of ChatGPT’s responses. Accurate information is essential for users [4], particularly in decision-making. Thus, we posited a direct positive relationship between the accuracy of information provided and the level of user satisfaction, underlining the significance of providing correct and reliable answers to enhance user experience. Therefore, the following hypothesis was proposed.
H2. 
The higher the accuracy of ChatGPT’s responses, the greater the user satisfaction.
Precision is related to the relevance and specificity with which ChatGPT responds to a user’s inquiries. Users prefer targeted and relevant answers that address their specific concerns [17]. The more precise the information provided, the greater the user satisfaction, highlighting the value of tailor-made and context-specific responses. Therefore, the following hypothesis was proposed.
H3. 
The precision of ChatGPT’s responses in directly addressing the user’s specific queries positively influences user satisfaction.
Reliability refers to the consistency and dependability of ChatGPT’s responses over time. Research on user interaction indicates that consistent performance builds user trust and satisfaction [18]. This underscores the importance of ChatGPT being able to maintain a stable and dependable output to keep users satisfied. Therefore, the following hypothesis was proposed.
H4. 
The more reliable the information provided by ChatGPT, the higher the level of user satisfaction.
Timeliness focuses on the speed of ChatGPT’s responses. Quick and prompt replies have been posited to increase user satisfaction [19]. Timeliness is highly valued in the context of digital communication. Users often equate rapid responses with efficiency and effective service, thus enhancing their overall satisfaction with a tool. Therefore, the following hypothesis was proposed.
H5. 
The greater the timeliness of ChatGPT, the greater the level of user satisfaction.
Convenience in relation to using ChatGPT includes aspects such as ease of use and accessibility. We posited that convenience plays a significant role in user satisfaction. Studies on human–computer interaction have suggested that user-friendly and accessible tools significantly enhance user experience and satisfaction. This implies that the more convenient it is to interact with ChatGPT, the higher the level of user satisfaction. Therefore, the following hypothesis was proposed.
H6. 
Greater convenience in interacting with ChatGPT will significantly boost user satisfaction.
The format of ChatGPT’s responses, including clarity, organization, and presentation, impacts user satisfaction. Well-structured and presented information aids comprehension and user satisfaction [20]. A user-friendly and well-organized response format positively influences the level of user satisfaction, as it facilitates easier understanding and interaction. Therefore, we proposed the following hypothesis.
H7. 
The format in which ChatGPT presents its responses, if clear and well organized, positively impacts user satisfaction.

4. Methodology

4.1. Measures

We adopted the UIS concept [10] as the foundational framework for developing this conceptual research structure. We used the seven UIS measures determined in previous studies, making modifications and adjustments specific to this study's purposes. This approach was necessary due to the absence of established measures for evaluating the UIS of ChatGPT. In total, 21 items were formulated: four satisfaction items were modified from the work by Bhattacherjee [21]; six items for completeness, timeliness, and format were adapted and refined from the work by Laumer et al. [11]; three accuracy items were derived from Foroughi et al.'s work [4]; and eight items covering precision, reliability, and convenience were modified from the work by Ives et al. [10]. This adaptation ensured the measures were aligned with the unique characteristics and user interactions of ChatGPT, providing a robust and relevant evaluation of user satisfaction. Using this approach, a detailed and nuanced understanding of user satisfaction with respect to ChatGPT was acquired, contributing to user experience research into AI-powered systems.
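For concreteness, the allocation of the 21 items can be written down as a simple mapping, reconstructed from the item codes in Table 3. This Python sketch is illustrative only; the dictionary layout is ours, not an official instrument file.

```python
# Illustrative layout of the 21-item instrument, reconstructed from the
# item codes in Table 3; the labels are the paper's, the dict is ours.
measurement_model = {
    "Accuracy":     ["ACR.1", "ACR.2", "ACR.3"],            # from Foroughi et al. [4]
    "Completeness": ["CMP.1", "CMP.2"],                     # from Laumer et al. [11]
    "Convenience":  ["CNV.1", "CNV.2", "CNV.3"],            # from Ives et al. [10]
    "Format":       ["FMR.1", "FMR.2"],                     # from Laumer et al. [11]
    "Precision":    ["PRC.1", "PRC.2", "PRC.3"],            # from Ives et al. [10]
    "Reliability":  ["RLB.1", "RLB.2"],                     # from Ives et al. [10]
    "Satisfaction": ["STS.1", "STS.2", "STS.3", "STS.4"],   # from Bhattacherjee [21]
    "Timeliness":   ["TML.1", "TML.2"],                     # from Laumer et al. [11]
}

# Sanity check: the instrument should total 21 items.
assert sum(len(items) for items in measurement_model.values()) == 21
```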

4.2. Data Collection

We adopted a quantitative methodology, designing a survey for specific research participants. The data were gathered through an online survey based on a meticulously crafted questionnaire comprising three sections. The first section asked for the participants' consent and screened for eligibility, in line with the study's use of purposive sampling. To be eligible, respondents had to meet two key criteria: (a) they had to have used ChatGPT for over six months for academic purposes, and (b) they had to have a background in higher education, serving as faculty members, students (graduate or undergraduate), or postdoctoral researchers. The second section collected sociodemographic data such as gender, age, type of university, and occupation. The third section contained the questions measuring the study's constructs. Using this approach, data were collected to accurately represent the experiences of ChatGPT users in academic environments and to understand how different demographic groups use and perceive ChatGPT.
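As an illustrative sketch of how the two screening criteria operate, the eligibility check can be expressed as a simple filter. The field and role names below are hypothetical, since the paper does not describe the survey platform's export format.

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    months_using_chatgpt: int   # self-reported academic ChatGPT experience
    role: str                   # occupation from the sociodemographic section

# Roles satisfying criterion (b): a higher-education background.
ELIGIBLE_ROLES = {"faculty", "undergraduate", "graduate", "postdoc"}

def is_eligible(r: Respondent) -> bool:
    """Criterion (a): more than six months of academic ChatGPT use;
    criterion (b): a role within higher education."""
    return r.months_using_chatgpt > 6 and r.role in ELIGIBLE_ROLES

# Example: a graduate student with a year of use passes the screen.
assert is_eligible(Respondent(months_using_chatgpt=12, role="graduate"))
```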
The data were collected from August to November 2023, yielding a pool of 508 responses. In total, 73% of the respondents were male. Respondents aged 21 to 35 accounted for the highest share of ChatGPT usage (75%). Altogether, 57% graduated from public universities and 43% from private institutions. Undergraduate students constituted 39% of the participants, and 32% were graduate students (master's and doctoral candidates). Faculty members accounted for 15%, and 14% were postdoctoral researchers.

4.3. Data Analysis

We set up several stages for data analysis. To ensure validity and reliability, we conducted a Common Method Variance (CMV) test, employing Harman's Single-Factor method [22] in SPSS version 26. We then conducted validity, reliability, and hypothesis analyses using Structural Equation Modeling in Smart-PLS 4.0. Outer Loadings (OLs), Composite Reliability (CR), Average Variance Extracted (AVE), and the Variance Inflation Factor (VIF) were analyzed [23]. Discriminant validity was tested using the Fornell–Larcker Criterion [24], the Heterotrait–Monotrait Ratio (HTMT) [25], and a Cross-Loadings Matrix [23], followed by an assessment of R-squared values [26] and model fit [23].
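As a hedged illustration of the CMV check (the authors used SPSS 26; this NumPy sketch approximates Harman's Single-Factor test with the first unrotated principal component of the item correlation matrix):

```python
import numpy as np

def harman_single_factor_variance(item_scores: np.ndarray) -> float:
    """Share of total variance captured by the first unrotated component
    of the item correlation matrix (a common way to run Harman's test).
    item_scores: (n_respondents, n_items) array of Likert responses."""
    corr = np.corrcoef(item_scores, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)      # sorted ascending
    return eigenvalues[-1] / eigenvalues.sum()  # largest eigenvalue / trace

# Simulated data with the study's dimensions (508 respondents, 21 items);
# the paper reports 13.6%, safely below the 50% cutoff.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(508, 21)).astype(float)
print(f"First-factor variance: {harman_single_factor_variance(scores):.1%}")
```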

5. Results

5.1. CMV

Before the main analysis, CMV was tested using Harman's Single-Factor test, in which all measurement items are loaded onto a single factor. If the total variance explained by that factor is below 50%, CMV is not considered a concern [22]. In this study, the single factor explained only 13.6% of the variance, so CMV was not a concern. Thus, the credibility of the subsequent analysis was secured, ensuring the robustness and reliability of our data interpretation and results. This low CMV enhances the validity of this study, reinforcing the integrity of the research findings and their implications.

5.2. Validity and Reliability

Table 1 displays the results of the convergent validity and reliability tests conducted in this study.
Discriminant validity was then tested to evaluate the efficacy of the model developed in this study. Three methods were employed, as illustrated in Table 2 and Table 3. All diagonal values (the square roots of the AVEs) were greater than the corresponding intervariable correlation values, suggesting discriminant validity. Additionally, the HTMT values were below 0.90. These results confirm the distinctiveness of the factors of the model, ensuring that each measure captured a unique aspect of the phenomenon, which is crucial for the overall validity of the research findings.
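Both criteria can be checked mechanically. The following sketch assumes construct AVEs and item-response arrays are already available, and it simplifies the HTMT calculation relative to the full SmartPLS routine (for instance, it does not take absolute values of the correlations):

```python
import numpy as np

def fornell_larcker_ok(ave_i: float, ave_j: float, corr_ij: float) -> bool:
    """Each construct's sqrt(AVE) must exceed its correlation with the other."""
    return np.sqrt(ave_i) > abs(corr_ij) and np.sqrt(ave_j) > abs(corr_ij)

def htmt(items_i: np.ndarray, items_j: np.ndarray) -> float:
    """Heterotrait-monotrait ratio: mean between-construct item correlation
    over the geometric mean of the within-construct item correlations.
    items_i: (n, p) item responses; items_j: (n, q) item responses."""
    p, q = items_i.shape[1], items_j.shape[1]
    full = np.corrcoef(items_i.T, items_j.T)   # (p+q, p+q) correlation matrix
    hetero = full[:p, p:]                      # cross-construct block
    mono_i = full[:p, :p][np.triu_indices(p, k=1)]
    mono_j = full[p:, p:][np.triu_indices(q, k=1)]
    return hetero.mean() / np.sqrt(mono_i.mean() * mono_j.mean())

# Worked check against Table 2: accuracy (AVE = 0.636) vs. completeness
# (AVE = 0.734) with r = 0.251 passes, since sqrt(0.636) = 0.798 and
# sqrt(0.734) = 0.857 both exceed 0.251.
print(fornell_larcker_ok(0.636, 0.734, 0.251))  # True
```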
Discriminant validity was further tested using the Cross-Loadings Matrix method (Table 3). Each measurement item correlated more strongly with its own construct than with any other construct, confirming discriminant validity. This differentiation of the constructs ensured the reliability and accuracy of the measurement model, with each construct distinctively and accurately captured in the research framework.
The model was tested using an evaluation of model fit and R-squared values. The SRMR value was 0.070, the Chi-Square value was 1631.471, and the NFI was 0.783, all of which fell within the threshold limits [23]. Subsequently, the R-squared value was assessed to determine the power of independent constructs for predicting the dependent constructs. An R-squared value of 0.581 was obtained, indicating that the seven ChatGPT UIS measures accounted for 58.1% of the variance in satisfaction. This meets the criteria set by Falk and Miller [26], as the R-squared value was above the 0.10 benchmark. These results demonstrate the model’s good fit and underline the considerable explanatory power of the identified factors in determining user satisfaction with ChatGPT.
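For reference, the reported explanatory power is the standard coefficient of determination for the endogenous satisfaction construct. This is the generic definition, not SmartPLS-specific output, with y_i the observed satisfaction scores and ŷ_i the model's predictions:

```latex
% Coefficient of determination for the endogenous satisfaction construct:
% y_i = observed satisfaction, \hat{y}_i = model prediction, \bar{y} = sample mean.
R^{2} = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}
               {\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^{2}} = 0.581
```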

5.3. Hypothesis Testing

Table 4 and Figure 2 present a summary of the hypothesis testing. Accuracy (β = 0.070; T = 1.247) and reliability (β = 0.016; T = 0.400) were not significant predictors of satisfaction, leading to the rejection of hypotheses H2 and H4. Completeness (β = 0.096; T = 2.863), convenience (β = 0.126; T = 2.192), format (β = 0.323; T = 5.115), precision (β = 0.245; T = 3.681), and timeliness (β = 0.138; T = 2.396) significantly impacted satisfaction, supporting hypotheses H1, H3, H5, H6, and H7. Format and precision impacted satisfaction more than the other factors. These results highlight the varying degrees of influence of different UIS measures on user satisfaction, with several factors playing a more critical role than others in shaping user experience with respect to ChatGPT.
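The T-values and percentile confidence intervals in Table 4 come from SmartPLS's bootstrapping routine. The sketch below illustrates the idea on a single-predictor standardized slope, a deliberate simplification of the full PLS path model:

```python
import numpy as np

def bootstrap_path(x: np.ndarray, y: np.ndarray,
                   n_boot: int = 5000, seed: int = 0):
    """Bootstrap a standardized slope (here simply the Pearson correlation,
    since there is one predictor): returns (beta, T, CI lower, CI upper)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    beta = np.corrcoef(x, y)[0, 1]
    boots = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample respondents with replacement
        boots[k] = np.corrcoef(x[idx], y[idx])[0, 1]
    t_value = beta / boots.std(ddof=1)     # estimate over bootstrap standard error
    lower, upper = np.percentile(boots, [2.5, 97.5])
    return beta, t_value, lower, upper

# Toy example: a genuine effect yields a CI excluding zero (an "Accept"
# in Table 4's terms); a null effect would not.
rng = np.random.default_rng(1)
x = rng.normal(size=508)
y = 0.3 * x + rng.normal(size=508)
print(bootstrap_path(x, y))
```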

6. Discussion

We examined the UIS framework proposed by Ives et al. [10], which holds that accuracy and reliability are essential to user satisfaction with information systems. Contradicting this long-standing belief and the findings of previous studies [10], our results revealed that these aspects did not significantly shape user satisfaction in the utilization of ChatGPT. This indicates that the parameters determining satisfaction are changing with advancements in generative AI technologies. The other dimensions, including completeness, convenience, format, precision, and timeliness, did influence users' satisfaction with ChatGPT: users valued the presentation and immediacy of information, not just its correctness or dependability. This underscores the need for UIS theory to adapt to a changing landscape of increasingly sophisticated user expectations [10,11].

7. Conclusions

Implications

The UIS framework was applied theoretically by examining the roles of accuracy and reliability in user satisfaction, as discussed by Ives et al. [10]. These factors did not significantly influence user satisfaction in the context of ChatGPT, pointing to a potential shift in the UIS paradigm as generative AI technology progresses. Such a result challenges and potentially redefines the established parameters of user satisfaction. The results of this study contribute to UIS theory by identifying the importance of completeness, convenience, format, precision, and timeliness, reflecting how advanced technology has reshaped users' priorities. The UIS framework thereby gains attributes suited to the current digital interaction milieu, enhancing our understanding of user satisfaction in regard to information systems.
The results of this study can be used to understand the application of ChatGPT in organizational and individual contexts. At an organizational level, the traditional emphasis on accuracy and reliability is changing, encouraging entities to recalibrate their focus toward completeness, convenience, format, precision, and timeliness in their information systems. Such a realignment leads to the creation of interfaces that are more attuned to user needs, potentially improving users’ utilization of and satisfaction with AI-powered tools. This will result in the development of more intuitive and efficient systems, thereby enhancing the return on investment for AI technologies.
For those who regularly engage with AI programs such as ChatGPT, the results reported herein underscore the importance of mastering the features of the system that enhance its immediate utility. Understanding how to effectively navigate and extract value from AI interactions is crucial, suggesting that user education in these aspects is essential. As end-users become more adept at utilizing the streamlined and user-friendly features of such technologies, they can optimize efficiency and productivity, reaping the full benefits of the evolving digital tools at their disposal. This knowledge enables users to hold realistic expectations and maximize the utility of AI in their work and learning environments, solidifying their role as competent navigators in the advancing technological landscape.
Despite its contributions, this study had limitations, such as a confined focus on ChatGPT while disregarding various other generative AI platforms. Furthermore, this study’s reliance on self-reported measures of satisfaction introduced biases that hindered its ability to capture the nuanced reactions of users to AI interfaces. Further studies are necessary to include a broader range of AI tools and technologies in this field of research, thereby enriching the generalizability of the findings. Additionally, it is necessary to incorporate objective usage data to provide an understanding of user satisfaction and behavior and offer a holistic view of how individuals and organizations interact with AI systems. A longitudinal study must be conducted to assess users’ satisfaction with AI over time as they become more accustomed to these technologies.

Author Contributions

Conceptualization, C.J.F. and A.D.K.S.; methodology, A.D.K.S. and I.-T.S.; software, A.D.K.S. and I.J.E.; validation, D.T.T.P., A.D.K.S., S.J. and I.J.E.; formal analysis, A.D.K.S.; investigation, I.J.E. and A.D.K.S.; resources, C.J.F. and A.D.K.S.; data curation, I.J.E., D.T.T.P. and S.J.; writing—original draft preparation, C.J.F., A.D.K.S. and I.-T.S.; writing—review and editing, A.D.K.S., I.-T.S., I.J.E., D.T.T.P. and S.J.; visualization, I.J.E. and S.J.; supervision, C.J.F., A.D.K.S. and I.-T.S.; project administration, A.D.K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lund, B.D.; Wang, T.; Mannuru, N.R.; Nie, B.; Shimray, S.; Wang, Z. ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. J. Assoc. Inf. Sci. Technol. 2023, 74, 570–581. [Google Scholar] [CrossRef]
  2. Bringula, R. What do academics have to say about ChatGPT? A text mining analytics on the discussions regarding ChatGPT on research writing. AI Ethics 2023, 1–13. [Google Scholar] [CrossRef]
  3. Roose, K. Don’t Ban ChatGPT in Schools. Teach with It. 2023. Available online: https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html (accessed on 13 February 2024).
  4. Foroughi, B.; Senali, M.G.; Iranmanesh, M.; Khanfar, A.; Ghobakhloo, M.; Annamalai, N.; Naghmeh-Abbaspour, B. Determinants of intention to use ChatGPT for educational purposes: Findings from PLS-SEM and fsQCA. Int. J. Hum. Comput. Int. 2023, 1–20. [Google Scholar] [CrossRef]
  5. Rivas, P.; Zhao, L. Marketing with ChatGPT: Navigating the ethical terrain of GPT-based chatbot technology. AI 2023, 4, 375–384. [Google Scholar] [CrossRef]
  6. Bin-Nashwan, S.A.; Sadallah, M.; Bouteraa, M. Use of ChatGPT in academia: Academic integrity hangs in the balance. Technol. Soc. 2023, 75, 102370. [Google Scholar] [CrossRef]
  7. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  8. Baek, T.H.; Kim, M. Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telemat. Inform. 2023, 83, 102030. [Google Scholar] [CrossRef]
  9. Kocoń, J.; Cichecki, I.; Kaszyca, O.; Kochanek, M.; Szydło, D.; Baran, J.; Bielaniewicz, J.; Gruza, M.; Janz, A.; Kanclerz, K.; et al. ChatGPT: Jack of all trades, master of none. Inf. Fusion. 2023, 99, 101861. [Google Scholar] [CrossRef]
  10. Ives, B.; Olson, M.H.; Baroudi, J.J. The measurement of user information satisfaction. Commun. ACM 1983, 26, 785–793. [Google Scholar] [CrossRef]
  11. Laumer, S.; Maier, C.; Weitzel, T. Information quality, user satisfaction, and the manifestation of workarounds: A qualitative and quantitative study of enterprise content management system users. Eur. J. Inf. Syst. 2017, 26, 333–360. [Google Scholar] [CrossRef]
  12. Bai, B.; Law, R.; Wen, I. The impact of website quality on customer satisfaction and purchase intentions: Evidence from Chinese online visitors. Int. J. Hosp. Manag. 2008, 27, 391–402. [Google Scholar] [CrossRef]
  13. Iivari, J.; Ervasti, I. User information satisfaction: IS implementability and effectiveness. J. Inf. Manag. 1994, 27, 205–220. [Google Scholar] [CrossRef]
  14. Ang, J.; Soh, P.H. User information satisfaction, job satisfaction and computer background: An exploratory study. J. Inf. Manag. 1997, 32, 255–266. [Google Scholar] [CrossRef]
  15. Galletta, D.F.; Lederer, A.L. Some cautions on the measurement of user information satisfaction. Decis. Sci. 1989, 20, 419–443. [Google Scholar] [CrossRef]
  16. Gupta, S.; Motlagh, M.; Rhyner, J. The digitalization sustainability matrix: A participatory research tool for investigating digitainability. Sustainability 2020, 12, 9283. [Google Scholar] [CrossRef]
  17. Reinecke, K.; Bernstein, A. Knowing what a user likes: A design science approach to interfaces that automatically adapt to culture. MIS Q. 2013, 37, 427–453. [Google Scholar] [CrossRef]
  18. Chen, Y.; Zahedi, F.M.; Abbasi, A.; Dobolyi, D. Trust calibration of automated security IT artifacts: A multi-domain study of phishing-website detection tools. Inf. Manag. 2021, 58, 103394. [Google Scholar] [CrossRef]
  19. Petter, S.; Fruhling, A. Evaluating the success of an emergency response medical information system. Int. J. Med. Inform. 2011, 80, 480–489. [Google Scholar] [CrossRef] [PubMed]
  20. Park, S.; Zo, H.; Ciganek, A.P.; Lim, G.G. Examining success factors in the adoption of digital object identifier systems. Electron. Commer. Res. Appl. 2011, 10, 626–636. [Google Scholar] [CrossRef]
  21. Bhattacherjee, A. Understanding information systems continuance: An expectation-confirmation model. MIS Q. 2001, 25, 351–370. [Google Scholar] [CrossRef]
  22. Baumgartner, H.; Weijters, B.; Pieters, R. The biasing effect of common method variance: Some clarifications. J. Acad. Mark. Sci. 2021, 49, 221–235. [Google Scholar] [CrossRef]
  23. Hair, J.; Hollingsworth, C.L.; Randolph, A.B.; Chong, A.Y.L. An updated and expanded assessment of PLS-SEM in information systems research. Ind. Manag. Data Syst. 2017, 117, 442–458. [Google Scholar] [CrossRef]
  24. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  25. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  26. Falk, R.F.; Miller, N.B. A Primer for Soft Modeling; The University of Akron Press: Akron, OH, USA, 1992. [Google Scholar]
Figure 1. ChatGPT’s UIS framework.
Figure 2. Hypothesis testing.
Table 1. Convergent validity and reliability.

Constructs      OL             CA      CR      VIF            AVE
Accuracy        0.736–0.858    0.713   0.724   1.288–1.597    0.636
Completeness    0.735–0.964    0.790   0.936   1.385–1.385    0.734
Convenience     0.820–0.841    0.771   0.781   1.444–1.770    0.684
Format          0.870–0.871    0.781   0.681   1.314–1.363    0.758
Precision       0.792–0.850    0.736   0.655   1.044–1.317    0.526
Reliability     0.869–0.908    0.735   0.749   1.510–1.510    0.790
Satisfaction    0.797–0.832    0.822   0.822   1.678–1.913    0.652
Timeliness      0.860–0.889    0.709   0.699   1.392–1.392    0.765

Notes: The threshold for Outer Loadings (OLs) was >0.70; that for Cronbach’s Alpha (CA) was >0.70; that for Composite Reliability (CR) was >0.70; that for the Variance Inflation Factor (VIF) was <3.0; and that for Average Variance Extracted (AVE) was >0.50.
Table 2. Fornell–Larcker Criterion and HTMT.

Constructs   (1)     (2)     (3)     (4)     (5)     (6)     (7)     (8)
ACC (1)      0.798   0.352   0.876   0.808   0.873   0.776   0.718   0.858
CMP (2)      0.251   0.857   0.277   0.449   0.655   0.374   0.176   0.282
CVC (3)      0.652   0.210   0.827   0.714   0.644   0.623   0.683   0.788
FMT (4)      0.564   0.299   0.521   0.871   0.875   0.598   0.784   0.691
PRR (5)      0.685   0.315   0.664   0.525   0.725   0.858   0.827   0.895
RLB (6)      0.557   0.284   0.476   0.423   0.558   0.889   0.537   0.249
STS (7)      0.551   0.146   0.555   0.586   0.586   0.420   0.808   0.682
TML (8)      0.601   0.215   0.584   0.475   0.577   0.661   0.517   0.875

Notes: The diagonal values are the square roots of the AVEs, used for the Fornell–Larcker Criterion; values below the diagonal are construct correlations, and values above the diagonal are the HTMT ratios, with a threshold of <0.90. ACC, accuracy; CMP, completeness; CVC, convenience; FMT, format; PRR, precision; RLB, reliability; STS, satisfaction; TML, timeliness.
Table 3. Cross-loadings matrix.

Items/Constructs   ACC     CMP     CVC     FMT     PRR     RLB     STS     TML
ACR.1              0.795   0.253   0.519   0.468   0.511   0.499   0.427   0.514
ACR.2              0.858   0.179   0.523   0.482   0.559   0.411   0.488   0.479
ACR.3              0.736   0.171   0.524   0.396   0.574   0.431   0.400   0.450
CMP.1              0.239   0.964   0.207   0.268   0.313   0.288   0.159   0.224
CMP.2              0.191   0.735   0.141   0.270   0.210   0.171   0.063   0.114
CNV.1              0.496   0.201   0.820   0.407   0.480   0.351   0.364   0.412
CNV.2              0.533   0.173   0.841   0.421   0.600   0.419   0.488   0.480
CNV.3              0.578   0.154   0.820   0.458   0.551   0.402   0.500   0.537
FMR.1              0.489   0.307   0.421   0.870   0.419   0.371   0.510   0.387
FMR.2              0.492   0.213   0.485   0.871   0.494   0.365   0.511   0.440
PRC.1              0.576   0.203   0.568   0.418   0.850   0.457   0.515   0.486
PRC.2              0.577   0.203   0.569   0.439   0.838   0.480   0.491   0.503
PRC.3              0.277   0.473   0.218   0.285   0.792   0.235   0.182   0.191
RLB.1              0.498   0.287   0.398   0.371   0.445   0.869   0.340   0.557
RLB.2              0.494   0.224   0.446   0.381   0.540   0.908   0.402   0.615
STS.1              0.438   0.093   0.441   0.489   0.456   0.295   0.798   0.364
STS.2              0.444   0.082   0.481   0.445   0.489   0.354   0.832   0.444
STS.3              0.459   0.151   0.421   0.493   0.446   0.361   0.797   0.410
STS.4              0.439   0.147   0.448   0.467   0.500   0.345   0.802   0.450
TML.1              0.525   0.202   0.523   0.413   0.502   0.591   0.427   0.860
TML.2              0.528   0.176   0.499   0.418   0.509   0.568   0.475   0.889
Table 4. Summary of hypothesis testing.

Hypothesis    β           T       Bootstrapping CI 97.5%    Decision
                                  Lower        Upper
CMP → STS     0.096 **    2.863   0.168        0.040        Accept
AC → STS      0.070       1.247   0.037        0.182        Reject
PRC → STS     0.245 ***   3.681   0.117        0.378        Accept
RLB → STS     0.016       0.400   0.118        0.086        Reject
TML → STS     0.138 **    2.396   0.025        0.249        Accept
CVN → STS     0.126 **    2.192   0.014        0.239        Accept
FMT → STS     0.323 ***   5.115   0.202        0.444        Accept

Notes: AC, accuracy; CMP, completeness; CVN, convenience; FMT, format; PRC, precision; RLB, reliability; TML, timeliness; STS, satisfaction. Significance levels: *** p < 0.001; ** p < 0.010.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
