Article

Tool, Threat, Tutor, Talk, and Trend: College Students’ Attitudes toward ChatGPT

by Sen-Chi Yu, Yueh-Min Huang and Ting-Ting Wu
1 Department of Counseling and Applied Psychology, National Taichung University of Education, Taichung 403, Taiwan
2 Department of Engineering Science, National Cheng Kung University, Tainan 701, Taiwan
3 Graduate School of Technological and Vocational Education, National Yunlin University of Science and Technology, Yunlin 640, Taiwan
* Authors to whom correspondence should be addressed.
Behav. Sci. 2024, 14(9), 755; https://doi.org/10.3390/bs14090755
Submission received: 14 July 2024 / Revised: 21 August 2024 / Accepted: 23 August 2024 / Published: 27 August 2024

Abstract

The purposes of this study are to investigate college students’ attitudes toward ChatGPT and to understand whether gender makes any difference in these attitudes. We developed the ChatGPT attitude scale (CAS) and administered it to a sample of 516 Taiwanese college students. Through an exploratory factor analysis, the 5-T (Tool, Tutor, Talk, Trend, and Threat) model of the CAS was extracted and then validated via confirmatory factor analysis. The CAS exhibited good reliability and validity and can be used to explain ChatGPT attitudes. According to our findings, university students consider ChatGPT an important “Tool” in their daily lives. Additionally, ChatGPT plays a significant “Tutor” role, assisting with language translation and knowledge learning. Beyond these utilitarian functions, ChatGPT also serves a “Talk” function, offering interactive chat and emotional support. Students also acknowledge ChatGPT as an important “Trend” of the times, but they are deeply concerned about the potential “Threat” of content falsification and job displacement brought on by ChatGPT. In terms of gender differences, males scored higher than females on the total scale and on the Tool, Tutor, and Trend subscales, whereas there was no significant difference between males and females on the Talk and Threat subscales. This pattern of gender differences differs from previous research on robots and social media.

1. Introduction

In recent years, the use of chatbots or generative artificial intelligence (AI) applications, such as ChatGPT (Chat Generative Pre-Trained Transformer), has become increasingly popular [1]. ChatGPT, developed by OpenAI, is an AI chatbot program designed to simulate human conversation. It can generate human-like text and is capable of handling complex language tasks, including automatic text generation, question-answering, automatic summarization, and image generation. Since its launch in November 2022, ChatGPT has gained rapid popularity among users. According to a survey conducted in July 2023, ChatGPT surpassed one million users in its first week and accumulated over 100 million users within two months, making it the fastest-growing consumer application in history. The ChatGPT website has accumulated over 9 billion visits. Users from the United States account for 12.31% of ChatGPT users, and 59.67% of ChatGPT users are male while 40.33% are female. Additionally, 53% of readers are unable to distinguish whether articles were written by ChatGPT or by a human [2].
The use of ChatGPT has had a significant impact on the academic and daily lives of university students [1,3]. ChatGPT can be utilized to create personalized educational materials and customized learning plans tailored to the unique needs and interests of each student [4]. Moreover, ChatGPT is an important assistant for university students in organizing information, learning foreign languages, and translation [3,5]. In addition to its academic applications, ChatGPT also influences students’ daily lives. Many people use ChatGPT to plan their travel itineraries, search for food, or obtain related lifestyle information. Although the use of ChatGPT is becoming increasingly popular among university students, there is still limited research on the psychological aspects and attitudes towards the use of ChatGPT.
Attitude is defined as a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor [6]. People’s overall attitudes towards ChatGPT are likely to play a significant role in their acceptance of this AI technology. Previous research has found that students with more positive attitudes toward technology (e.g., computers and the Internet) are more likely to perceive technology-based learning tasks as interesting and important [7]. Therefore, it can be inferred that attitudes towards ChatGPT may have an impact on learning, although there is still limited research on this topic [5]. In the past, many studies have investigated “Internet attitudes” (e.g., [7,8]), “robot attitudes” (e.g., [9]), or “AI attitudes” (e.g., [10]). However, research on attitudes toward chatbots/ChatGPT is still limited, and most of the relevant studies are based on the technology acceptance model (TAM) [11,12,13,14]. College students are an important user group of ChatGPT. Understanding their attitudes toward ChatGPT is crucial for understanding the acceptance, usage behavior, and potential psychological consequences of this technology. Additionally, previous research has found gender differences in attitudes towards technology (e.g., [14,15]). Whether the same applies to ChatGPT attitudes remains to be studied. To summarize, this study has three purposes. The first is to investigate the usage behavior of university students and their satisfaction with ChatGPT. The second is to develop and validate the ChatGPT attitude scale. The third is to examine whether gender makes any difference in ChatGPT attitudes. This study’s specific research questions are as follows:
RQ1.
What are the usage time/frequency and satisfaction of university students with ChatGPT?
RQ2.
Does the ChatGPT attitude scale developed in this study exhibit validity and reliability?
RQ3.
Does the gender of college students make any difference in their attitudes toward ChatGPT?

2. Literature Review

Research on ChatGPT attitudes is still rare. Previous studies have shown that attitudes towards different information technologies (e.g., computers and the Internet) are related to each other [7,8,16]. Therefore, this study starts with research on computer/Internet-related attitudes, integrates research on chatbot-, robot-, and AI-related attitudes, and finally summarizes the possible factors behind ChatGPT attitudes.

2.1. Studies on Computer/Internet-Related Attitudes: 4T, 5T, and 6T Models

Taylor proposed a 3T model of the roles a computer can play: Tutor, Tool, and Tutee. “Tutor” refers to the computer acting as an expert in various fields that teaches users. “Tool” refers to the computer serving as a tool that can greatly enhance efficiency. “Tutee” refers to the computer playing the role of a student that users teach [17].
To measure adolescents’ attitudes towards the Internet, Tsai modified Taylor’s 3T model of computer attitudes and proposed a 4T framework: Technology, Tool, Toy, and Travel [8]. “Technology” represents how technology strongly changes people’s lifestyles. “Tool” represents how the medium enables humans to accomplish various tasks to satisfy their needs. “Toy” represents joy, entertainment, and the continuously evolving and appealing nature of activities such as online gaming. “Travel” represents the Internet as a navigation tool that takes us to different places and websites, similar to traveling to different destinations.
Chou and his colleagues conducted a series of studies on students’ attitudes toward the Internet [16,18,19]. Chou et al. proposed a 4T framework, named “Tool, Toy, Telephone, and Treasure of Information”, to categorize people’s attitudes towards the Internet [18]. Subsequently, due to the rapid development of blogs and fan pages, Chou et al. added the “Territory” factor, creating a 5T framework [7]. Later, Chou and his colleagues pointed out that a substantial proportion of people engage in commercial activities and online shopping through the Internet [19]. Therefore, they added the “Trade” factor to the original 5T framework, resulting in a 6T model.

2.2. Studies on Chatbot/AI-Related Attitudes

Schepman and Rodway developed the General Attitudes towards Artificial Intelligence Scale (GAAIS) to measure AI attitudes [10]. Their factor analysis results revealed that GAAIS consists of two factors: positive and negative. The positive factor includes beliefs that AI is a beneficial tool, represents future trends, and provides new economic opportunities, among others. On the other hand, the negative factor reflects attitudes of worry, lack of control, perceived evil, and mistrust towards AI. Participants showed positive attitudes towards AI applications involving big data, such as astronomy, law, and pharmacology. However, they held negative attitudes towards applications involving human judgement tasks, such as healthcare and psychological counseling.
Koverola et al. developed the General Attitudes Towards Robots Scale (GAToRS), which consists of four dimensions based on individual/social levels and positive/negative influences [9]. The positive dimension includes feeling comfortable and pleasant around robots; having reasonable hopes for the overall development of robots; perceiving robots as important tools that assist humans; and the belief that if robots have emotions, one can be friends with them. The negative influence dimension includes feeling uneasy and anxious around robots and concerns about the overall development of robots.
Sindermann et al. developed the Attitude Towards Artificial Intelligence (ATAI) scale, which consists of five items. The participants were from Germany (N = 461; 345 female), China (N = 413; 145 female), and the UK (N = 84; 65 female) [20]. A factor analysis revealed that the ATAI scale includes two factors: acceptance and fear. The acceptance factor comprises items related to trusting artificial intelligence and perceiving it as a beneficial tool for humans. The fear factor includes items related to feeling afraid of AI and having concerns about AI potentially destroying humanity and causing mass unemployment. Regarding gender differences, males showed greater interest in technology and held more positive attitudes towards technology.
Li’s General Attitudes toward ChatGPT scale [21], a five-item measure adapted from the Attitude Towards Artificial Intelligence (ATAI) scale [20], includes two subscales: acceptance of ChatGPT and fear of ChatGPT. Sample items include “I trust ChatGPT” (acceptance) and “ChatGPT will cause many job losses” (fear).
Iqbal et al. conducted a qualitative research study to understand teachers’ perspectives on ChatGPT [22]. Semi-structured interviews were conducted with 20 university professors from a university in Pakistan. That study found that some professors had negative views and attitudes towards the use of ChatGPT. They expressed concerns such as the potential for students to use ChatGPT for cheating or plagiarism, as well as the possibility of it causing disruptions in the classroom. However, there were also those who held positive attitudes towards ChatGPT. They believed that ChatGPT could be a useful tool in certain situations, such as providing automatic replies to students and allowing teachers to focus more on other aspects of teaching. Additionally, ChatGPT was seen as having the potential to improve student engagement and motivation.

2.3. Frameworks for Studying ChatGPT Attitudes

Measurement tools for ChatGPT attitudes are still under development. Therefore, building on the 5T/6T models of Internet attitudes and integrating research on chatbot and AI attitudes, this study proposes the following hypothetical 5T framework for studying ChatGPT attitudes:
(1)
Tool
The dimension referred to as “Tool” in this study focuses on the functionality provided by ChatGPT, such as language translation, data collection, and organization. This dimension can be traced back to computer/Internet-related attitudes [8,17,18]. Studies on Internet attitudes also recognize the role of the Internet in information search and other functionalities, which are also present in AI/ChatGPT today, including personal assistants, travel planning, language translation, etc. [23]. Brandtzaeg and Følstad also mentioned that the purpose of using chatbots is to obtain necessary information and assistance for efficiently completing tasks [24]. The dimension of “Tool” is also found in other scales such as Koverola et al.’s “General Attitudes Towards Robots Scale” [9], Sindermann et al.’s “Attitude Towards Artificial Intelligence scale” [20], Schepman and Rodway’s “General Attitudes towards Artificial Intelligence Scale” [10], and Li’s “General attitudes toward ChatGPT scale” [21].
(2)
Tutor
The dimension referred to as “Tutor” in this study highlights the learning guidance and feedback provided by AI. AI is widely used in tasks such as homework assistance, language translation, language learning, and presentation creation, acting as a digital teacher [1,3,25]. Iqbal et al. also identified the potential benefits of using ChatGPT to assist teaching, such as enhancing student engagement and motivation [22]. AI has numerous applications and explorations in the field of education [26]. ChatGPT can provide personalized learning materials and resource recommendations, and learners can utilize ChatGPT to reflect on their progress and learning, supporting autonomous learning [4]. Deveci Topal et al. applied chatbots in a science curriculum and found that students perceived them as useful, interesting, and providing helpful assistance for learning outside the classroom [27].
(3)
Threat
In this study, the “Threat” dimension refers to the cognitive assessment of and emotional response to the potential threat of ChatGPT. The cognitive aspect involves potential risks associated with ChatGPT, including concerns about job displacement, data privacy breaches, deception, and changes in interpersonal interactions. People worry that ChatGPT might be used as a weapon of mass deception, which could aid in deception-related crimes [28]. As AI technologies, including ChatGPT, continue to advance in their understanding, learning, and problem-solving capabilities, humans perceive potential threats. People also worry about job opportunities being replaced by AI [29,30]. Additionally, concerns exist regarding the impact of AI on our lifestyle and societal norms [31]. Research by Fast and Horvitz found that worries about the potential uncontrollability and ethical aspects of AI development are increasing [32]. Furthermore, AI raises concerns about personal privacy: AI technologies require the collection and storage of data from the environment for learning and updating, potentially posing risks of privacy breaches [33]. The “Threat” factor also appears in other AI attitude-related measurement tools, such as those of Koverola et al., Sindermann et al., Schepman and Rodway, and Li [9,10,20,21].
The emotional response to the potential threat of ChatGPT involves negative emotions such as fear, anxiety, and distrust towards ChatGPT. People fear the rapid and uncontrollable development of AI technology. Anxiety about AI-related technologies is common in studies and scales related to AI and robots. For example, the GAAIS by Schepman and Rodway includes attitudes towards AI as worrisome, out of control, evil, and untrustworthy [10]. The ATAI scale by Sindermann et al. includes items such as finding AI terrifying and believing that AI will destroy humanity [20]. Koverola et al. included feeling uneasy and anxious around robots and overall worries about the development of robots [9]. In the educational context, Iqbal et al. found that university teachers expressed distrust in using ChatGPT, fearing that students might use it for cheating [22]. These studies reflect people’s fears and concerns about AI-related technologies.
(4)
Talk
The dimension referred to as “Talk” in this study focuses on the ability to have conversations with chatbots and share psychological distress. Recent developments in GPT have made chatbots nearly indistinguishable from humans [34]. This dimension is an important feature of chatbots. Some people find chatbots interesting and enjoyable, using them to pass the time [24]. Some social chatbot applications, such as Replika, have gained popularity for providing emotional and social support and offering companionship [35]. Deveci Topal et al. applied chatbots in science courses and found that students found chatting with chatbots to be fun [27]. For individuals who struggle with social interactions, chatbots can help reduce feelings of loneliness or fulfill their desired social patterns [24]. In the study by Brandtzæg et al., nearly half of the participants reported receiving emotional support from the chatbot Woebot, highlighting the importance of the “Talk” function in ChatGPT [36].
(5)
Trend
The dimension referred to as “Trend” in this study focuses on the trend and new era brought about by the development of ChatGPT. Despite the various challenges associated with ChatGPT, there is no doubt that artificial intelligence has become a driving force for innovation and revolution in various fields [37,38]. Artificial intelligence represents a trend of the new generation and the development achievements of emerging technologies. This factor also appears in the research by Schepman and Rodway and by Iqbal et al. [10,22].
Based on the theoretical frameworks reviewed above, this study developed a preliminary version of the CAS consisting of 21 items, which was then tested and evaluated for its validity and reliability.

2.4. Gender Differences in Chatbot/AI-Related Attitudes

Research on AI attitudes has shown that, in terms of the overall score on the AI attitude scale, men tend to have more positive attitudes towards AI compared to women. However, the gender differences in specific dimensions of AI attitudes may not always be significant.
Grassini developed the AI Attitude Scale (AIAS) and found that female participants scored lower than male participants, indicating that men have more positive attitudes towards AI technology [15]. The study by Pinto dos Santos et al. found similar results [38]: investigating the attitudes of medical students towards AI in the medical field, they found that men had more positive attitudes and lower levels of fear towards AI. Another study, by Lee and Yen, explored gender differences in attitudes towards robots and found that men had a higher acceptance of robot services than women [39]. Men also preferred robot services over human services.
When it comes to gender differences in AI attitude dimensions, the results are not always significant. For example, Sindermann et al. developed the Attitude Towards Artificial Intelligence (ATAI) scale and found that males scored higher than females in the ATAI Acceptance scale [20]. However, no significant gender differences were found in the ATAI Fear scale. Additionally, the samples from different countries (Germany, the UK, and China) showed significant differences in both ATAI scales, with the Chinese sample having the highest scores in the Acceptance scale and the lowest scores in the Fear scale compared to the samples from Germany and the UK.
Regarding the reasons for gender differences in AI attitudes, Gibert and Valls pointed out that previous research has generally found that women have more negative views on AI compared to men [40]. They suggest that this may be due to a higher representation of men majoring in information-related fields, which reflects different levels of involvement or interest in AI among different genders. Another reason could be that men are generally more optimistic. It was also mentioned that women have more concerns about the application of AI and tend to focus more on social implementation issues related to AI [41].
Whether there are gender differences in the overall ChatGPT attitude scale and individual dimensions is still to be explored.

3. Material and Methods

3.1. Participants and Procedure

The participants were 516 college students in Taiwan recruited by convenience sampling (350 women and 166 men; mean age = 20.61, SD = 1.84). A Google Forms link to our questionnaire was posted on social media (i.e., Facebook and Instagram) and online forums. The present study was ethically approved by the Institutional Review Board (IRB) of National Chung Cheng University (Ref: CCUREC-112063001).

3.2. Measures

3.2.1. Demographic Information and ChatGPT Usage Survey

The questionnaire surveyed the participants’ demographic information, such as gender, age, and major. The ChatGPT (version 3.5) usage survey included the following items:
(1)
Weekly usage time and frequency of ChatGPT.
(2)
How much are you willing to pay for the advanced version of ChatGPT?
(3)
Rate the accuracy of ChatGPT in answering questions on a scale of 1 to 10.
(4)
Rate the depth and breadth of ChatGPT’s responses on a scale of 1 to 10.
(5)
Overall satisfaction with the interaction with ChatGPT.

3.2.2. The ChatGPT Attitude Scale (CAS)

We developed the CAS based on the structure described in Section 2.3. The CAS is a 21-item scale with a 4-point response format. The item descriptions and psychometric properties are presented in Section 4.1.
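To make the scoring concrete, the following is a minimal sketch (in Python) of one way to compute the CAS total and subscale scores. It assumes, as the subscale means in Table 2 suggest, that scores are simple sums of the 4-point item ratings; the file name and item column names are hypothetical placeholders, and the item-to-subscale assignment follows Table 1.

# Illustrative sketch only: sums 4-point item ratings into CAS subscale and
# total scores. Column names are hypothetical placeholders.
import pandas as pd

subscales = {
    "tool":   ["tool1", "tool2", "tool3", "tool4"],
    "tutor":  ["tutor1", "tutor2", "tutor3", "tutor4", "tutor5"],
    "threat": ["threat1", "threat2", "threat3", "threat4", "threat5", "threat6"],
    "talk":   ["talk1", "talk2", "talk3"],
    "trend":  ["trend1", "trend2", "trend3"],
}

responses = pd.read_csv("cas_responses.csv")           # one row per participant
for name, items in subscales.items():
    responses[name] = responses[items].sum(axis=1)     # subscale sum score
all_items = [col for cols in subscales.values() for col in cols]
responses["total"] = responses[all_items].sum(axis=1)  # 21-item total score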

4. Results

4.1. ChatGPT Usage Survey

The average weekly usage of ChatGPT by participants was 2.16 h (SD = 4.45), and participants were willing to pay TWD 516 (SD = 181.49) for an upgrade, which is approximately USD 16.
In terms of usage frequency, 5.6% used ChatGPT daily, 19.4% used it every two or three days, approximately 24.2% used it once a week, 42.2% used it once a month, and 5.6% never used it. It is clear that about half of the participants use ChatGPT at least once a week.
On a scale of 1 to 10, the average accuracy rating for ChatGPT was 6.61 (SD = 1.94), while the depth and breadth ratings were 6.53 (SD = 1.9). The overall satisfaction rating was 7.08 (SD = 2.00).

4.2. Validity and Reliability of the CAS

Validity of the CAS

Data analyses were conducted using SPSS 27.0 and LISREL 8.80. Factor analysis can broadly be divided into exploratory factor analysis (EFA) and confirmatory factor analysis (CFA); both are common techniques in scale development, but each serves a slightly different purpose [42]. EFA can be used to explore the underlying structure of a set of items and is suitable during the early stages of scale development, while CFA is used to confirm a previously stated theoretical model or factor structure. Since EFA can contribute to model specification prior to cross-validation with CFA [43], some scholars (e.g., Gerbing and Hamilton, Knekta et al., and Yu et al.) [43,44,45] recommend a two-phase factor analysis in which the sample is divided into two halves, with EFA used to explore the factor structure of the items in one half and CFA then used to cross-validate that structure in the other half.
Therefore, we first randomly split the data into two groups: Group 1 (n = 256) and Group 2 (n = 256). In phase 1, we applied EFA and item analysis to Group 1 to explore the factor structure. In phase 2, we applied CFA to Group 2 to validate the structure obtained in phase 1.
We applied EFA using a varimax rotation with principal axis extraction. A value of 0.30 was set as the cut-off point for judging factor loadings. The CAS exhibited a five-factor structure, named “Tool”, “Threat”, “Tutor”, “Talk”, and “Trend”. The factor loadings of the items are shown in Table 1. This study proposed five factors based on the literature review, and the proposed factor structure is supported, as the items and their assignment to factors in the EFA’s five-factor solution are reasonable.
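For readers without access to SPSS, the following is a minimal sketch of this EFA step using the open-source Python package factor_analyzer. It is an illustration under the stated settings (principal-axis extraction, varimax rotation, 0.30 cut-off), not the authors’ SPSS procedure, and the file and column names are hypothetical.

# Illustrative sketch only: the EFA reported here was run in SPSS 27.0. This
# reproduces the same steps with factor_analyzer; names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

group1 = pd.read_csv("cas_group1.csv")        # the 21 CAS items, Group 1 only

efa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
efa.fit(group1)

loadings = pd.DataFrame(
    efa.loadings_,
    index=group1.columns,
    columns=["Factor 1", "Factor 2", "Factor 3", "Factor 4", "Factor 5"],
)
# Show only loadings at or above the 0.30 cut-off used for interpretation
print(loadings.where(loadings.abs() >= 0.30).round(3))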
A CFA was then run to verify the structure of the CAS obtained in phase 1. Concerning the fit indices, the CFA model showed a good fit (chi-square = 357.40, df = 184, p < 0.001; root mean square error of approximation (RMSEA) = 0.061 < 0.08, 90% CI for RMSEA = (0.051, 0.070); comparative fit index (CFI) = 0.97 > 0.90; standardized root mean square residual (SRMR) = 0.081 < 0.10) [43]. Moreover, all parameter estimates were significant and consistent with the underlying theory. To sum up, the CFA results cross-validated the CAS and showed that it exhibits good factor/construct validity; the five-T (Tool, Threat, Tutor, Talk, and Trend) model of ChatGPT attitudes was thus supported. The standardized CFA solution for the CAS is shown in Figure 1. Regarding reliability, the Cronbach’s alpha of the CAS is 0.870, indicating good reliability.
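As a companion illustration of the confirmatory step and the reliability estimate, the sketch below uses the open-source semopy package instead of LISREL; the five-factor measurement model mirrors Table 1, and the item names are hypothetical placeholders.

# Illustrative sketch only: the reported CFA was estimated in LISREL 8.80.
# semopy specifies an equivalent five-factor measurement model here;
# item names are hypothetical placeholders for the 21 CAS items.
import pandas as pd
import semopy

model_desc = """
Tool   =~ tool1 + tool2 + tool3 + tool4
Tutor  =~ tutor1 + tutor2 + tutor3 + tutor4 + tutor5
Threat =~ threat1 + threat2 + threat3 + threat4 + threat5 + threat6
Talk   =~ talk1 + talk2 + talk3
Trend  =~ trend1 + trend2 + trend3
"""

group2 = pd.read_csv("cas_group2.csv")        # hold-out half, item columns only
cfa = semopy.Model(model_desc)
cfa.fit(group2)
print(semopy.calc_stats(cfa)[["DoF", "chi2", "RMSEA", "CFI"]])

# Cronbach's alpha for the full 21-item scale (reported above as 0.870)
def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print(round(cronbach_alpha(group2), 3))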

4.3. Gender Differences in ChatGPT Attitudes

The results of this study showed that males scored significantly higher than females (p < 0.05) on the ChatGPT attitude total scale and the Tool, Tutor, and Trend subscales, as shown in Table 2.
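The comparisons summarized in Table 2 correspond to independent-samples t-tests on the total and subscale scores. Below is a minimal sketch of such tests, offered as an illustration rather than the authors’ SPSS output; the data file and column names are hypothetical.

# Illustrative sketch only: independent-samples t-tests comparing men and women
# on the CAS total and subscale scores, as in Table 2. Names are hypothetical.
import pandas as pd
from scipy import stats

scores = pd.read_csv("cas_scored.csv")
men = scores[scores["gender"] == "male"]
women = scores[scores["gender"] == "female"]

for scale in ["total", "tool", "threat", "tutor", "talk", "trend"]:
    t, p = stats.ttest_ind(men[scale], women[scale])
    print(f"{scale}: t = {t:.2f}, p = {p:.3f}")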

5. Discussion

This study developed the CAS based on the 5T (Tool, Tutor, Talk, Trend, and Threat) model. Via an exploratory factor analysis, the 5-T model of CAS was extracted. Moreover, the model was validated via a confirmatory factor analysis. The CAS exhibited good reliability and validity and can be used to explain ChatGPT attitudes.
According to our research findings, university students consider ChatGPT an important “Tool” in their daily lives and a “Tutor” in academics; ChatGPT also serves as a “Talk” feature, offering emotional support. Students recognize ChatGPT as a significant “Trend” of the present era, but they also express considerable concern regarding the possible “Threat” of content falsification and job displacement that it may cause.
Regarding gender differences, males scored significantly higher than females on the overall scale as well as on the Tool, Tutor, and Trend subscales. These findings are consistent with previous research on AI attitudes, such as Jang et al., Grassini, and Sindermann et al. [14,15,20], which generally found that males hold more positive attitudes towards AI. Sindermann et al. measured AI attitudes in samples from Germany, the UK, and China and found that males had more positive attitudes towards AI than females in all three countries [20]. Grassini also found that males hold more positive attitudes towards AI than females [15]. Additionally, Pinto dos Santos et al. investigated the attitudes of medical students towards AI in the medical field and found that males had more positive perceptions of the benefits of AI and lower levels of fear towards AI [38].
Regarding the Tool and Tutor dimensions, this study found that males perceive ChatGPT more strongly as a practical tool and as a useful aid for learning new knowledge. In terms of the Trend dimension, males scored higher than females, indicating that males are more attentive to the technical aspects of ChatGPT and are more likely to view the technology as a future trend. These findings are similar to the results of Horowitz and Kahn [41], who found that males have more positive attitudes towards the utilitarian use of AI, higher acceptance, and more positive attitudes towards its future development.
There were no significant gender differences in the Talk and Threat dimensions. In the Talk dimension, the absence of a difference indicates that university students of both genders perceive chatbots as partners with which they can chat and express emotions. This finding differs from previous research on social media (e.g., Yu [42]), which generally found that females use social media for chatting more than males do. This may be because social media involves social interaction and chatting with real people, which draws on social skills and can provoke interaction anxiety, leading to less use by males; chatting with a chatbot does not involve these demands, and no gender difference in attitudes emerged.
In the Threat dimension, this study found no gender differences in the fear of ChatGPT. This differs from previous research on AI attitudes, where some studies found that males have lower fear towards AI [38,41]. The above studies suggest that males are more optimistic about the development of AI, while females have higher concerns. Males are more focused on the technological aspect of AI development, while females are more inclined to focus on social issues related to AI, leading to anxiety and perceived threats. However, this study found no gender differences in the Threat factor, indicating that both genders perceive potential threats to society from ChatGPT and related AI applications. The powerful and rapid development of AI and the associated threats and social issues have attracted widespread attention regardless of gender. Thus, gender differences in concerns about AI-related threats appear to be less significant among university students compared to previous studies.

6. Conclusions

To understand attitudes toward ChatGPT, this study developed the ChatGPT attitude scale (CAS) and recruited a sample of 516 university students from Taiwan for the research. The results of the exploratory factor analysis revealed that the CAS exhibited a five-factor structure, named “Tool”, “Threat”, “Tutor”, “Talk”, and “Trend”. The confirmatory factor analysis validated the 5-T (Tool, Tutor, Talk, Trend, and Threat) model, which can be used to explain the corresponding five factors that constitute ChatGPT attitudes.
Regarding gender differences, compared to females, males scored higher on the total scale and the Tool, Tutor, and Trend subscales, while there were no significant differences in the Talk and Threat subscales.
This study has several limitations. Firstly, in terms of sampling, this study surveyed university students from Taiwan. Although the scale demonstrated good reliability and validity, further cross-cultural validation is needed. Additionally, in terms of gender composition, the sample included a higher proportion of females than males. This phenomenon is common in survey studies (e.g., Yu and Sindermann et al. [20,42]) and may reflect females’ greater willingness to respond to questionnaires [42].
Furthermore, the development of chatbots is rapid, and this study was conducted only in the second quarter of 2023. Considering the ongoing changes in technological development and usage behavior, future research should conduct longitudinal studies to understand the changes in attitudes and usage behavior towards chatbots.
Additionally, in terms of software categories, the application of chatbots may extend beyond the ChatGPT 3.5 software. Other AI tools (e.g., Midjourney, ChatSonic, and HuggingFace) are also worth researching as their user base grows.

Author Contributions

Conceptualization, S.-C.Y.; methodology, S.-C.Y.; software, S.-C.Y., Y.-M.H. and T.-T.W.; formal analysis, S.-C.Y.; writing—original draft preparation, S.-C.Y.; writing—review and editing, S.-C.Y., Y.-M.H. and T.-T.W.; funding acquisition, S.-C.Y. and Y.-M.H.; data collection: S.-C.Y., Y.-M.H. and T.-T.W. All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank the National Science and Technology Council (NSTC) of Taiwan, R.O.C., for financially supporting this research under contract Nos. MOST-110-2511-H-142-010-MY3 and MOST-110-2511-H-006-012-MY3.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of National Chung Cheng University (protocol code CCUREC-112063001; date of approval 22 August 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The datasets generated and/or analyzed during the current study are available online at https://www.dropbox.com/scl/fi/sbipeuptzk0g810fwfx0b/attitude-opendata-ascii.txt.prn?rlkey=qzbn2h9em4klpk8auikvfmx6v&dl=0 (accessed on 12 July 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jo, H. Understanding AI tool engagement: A study of ChatGPT usage and word-of-mouth among university students and office workers. Telemat. Inform. 2023, 85, 102067. [Google Scholar] [CrossRef]
  2. Shewale, R. 32 Detailed ChatGPT Statistics—Users, Revenue and Trends; DemandSage: Boston, MA, USA, 2023. [Google Scholar]
  3. Cai, J.; Sun, Y.; Niu, C.; Qi, W.; Fu, X. Validity and Reliability of the Chinese Version of Robot Anxiety Scale in Chinese Adults. Int. J. Hum. Comput. Interact. 2023, 40, 1–10. [Google Scholar] [CrossRef]
  4. Gill, S.S.; Kaur, R. ChatGPT: Vision and challenges. Internet Things Cyber-Phys. Syst. 2023, 3, 262–271. [Google Scholar] [CrossRef]
  5. Wu, T.T.; Lee, H.Y.; Li, P.H.; Huang, C.N.; Huang, Y.M. Promoting Self-Regulation Progress and Knowledge Construction in Blended Learning via ChatGPT-Based Learning Aid. J. Educ. Comput. Res. 2023, 61, 3–31. [Google Scholar] [CrossRef]
  6. Eagly, A.H.; Chaiken, S. The Psychology of Attitudes; Harcourt Brace Jovanovich College Publishers: Orlando, FL, USA, 1993; Volume 22, 794p. [Google Scholar]
  7. Chou, C.; Yu, S.-C.; Chen, C.-H.; Wu, H.-C. Tool, Toy, Telephone, Territory, or Treasure of Information: Elementary school students’ attitudes toward the Internet. Comput. Educ. 2009, 53, 308–316. [Google Scholar] [CrossRef]
  8. Tsai, C.C. Adolescents’ Perceptions Toward the Internet: A 4-T Framework. CyberPsychol. Behav. 2004, 7, 458–463. [Google Scholar] [CrossRef]
  9. Koverola, M.; Kunnari, A.; Sundvall, J.; Laakasuo, M. General Attitudes Towards Robots Scale (GAToRS): A New Instrument for Social Surveys. Int. J. Soc. Robot. 2022, 14, 1559–1581. [Google Scholar] [CrossRef]
  10. Schepman, A.; Rodway, P. Initial validation of the general attitudes towards Artificial Intelligence Scale. Comput. Hum. Behav. Rep. 2020, 1, 100014. [Google Scholar] [CrossRef]
  11. Yilmaz, H.; Maxutov, S.; Baitekov, A.; Balta, N. Student Attitudes towards Chat GPT: A Technology Acceptance Model Survey. Int. Educ. Rev. 2023, 1, 57–83. [Google Scholar] [CrossRef]
  12. Saif, N.; Khan, S.U.; Shaheen, I.; Alotaibi, F.A.; Alnfiai, M.M.; Arif, M. Chat-GPT; validating Technology Acceptance Model (TAM) in education sector via ubiquitous learning mechanism. Comput. Hum. Behav. 2024, 154, 108097. [Google Scholar] [CrossRef]
  13. Tan, P.; Juinn, B. A Study on the Use of Chatgpt in English Writing Classes for Taiwanese College Students Using Tam Technology Acceptance Model; SSRN: Amsterdam, The Netherlands, 2024; p. 4811515. [Google Scholar]
  14. Jang, Y.; Choi, S.; Kim, H. Development and validation of an instrument to measure undergraduate students’ attitudes toward the ethics of artificial intelligence (AT-EAI) and analysis of its difference by gender and experience of AI education. Educ. Inf. Technol. 2022, 27, 11635–11667. [Google Scholar] [CrossRef]
  15. Grassini, S. Development and validation of the AI attitude scale (AIAS-4): A brief measure of general attitude toward artificial intelligence. Front. Psychol. 2023, 14. [Google Scholar] [CrossRef] [PubMed]
  16. Chou, H.L.; Chou, C.; Chen, C.H. The moderating effects of parenting styles on the relation between the internet attitudes and internet behaviors of high-school students in Taiwan. Comput. Educ. 2016, 94, 204–214. [Google Scholar] [CrossRef]
  17. Taylor, R. The Computer in the School: Tutor, Tool, Tutee; Teachers College Press: New York, NY, USA, 1980. [Google Scholar]
  18. Chou, C.; Chen, C.H.; Wu, H.C. Tool, toy, telephone, or information: Children’ perceptions of the Internet. In Proceedings of the Conference Presentation, 114th American Psychology Association (APA) Annual Convention, San Francisco, CA, USA, 17–20 August 2007. [Google Scholar]
  19. Chou, C.; Wu, H.C.; Chen, C.H. Re-visiting college students’ attitudes toward the Internet-based on a 6-T model: Gender and grade level difference. Comput. Educ. 2011, 56, 939–947. [Google Scholar] [CrossRef]
  20. Sindermann, C.; Sha, P.; Zhou, M.; Wernicke, J.; Schmitt, H.S.; Li, M.; Sariyska, R.; Stavrou, M.; Becker, B.; Montag, C. Assessing the Attitude Towards Artificial Intelligence: Introduction of a Short Measure in German, Chinese, and English Language. Künstl Intell. 2021, 35, 109–118. [Google Scholar] [CrossRef]
  21. Li, H. Rethinking human excellence in the AI age: The relationship between intellectual humility and attitudes toward ChatGPT. Personal. Individ. Differ. 2023, 215, 112401. [Google Scholar] [CrossRef]
  22. Iqbal, N.; Ahmed, H.; Azhar, K. Exploring Teachers’ Attitudes towards Using Chat GPT. Glob. J. Manag. Admin. Sci. 2023, 3, 97–111. [Google Scholar]
  23. Kumar, V.; Rajan, B.; Venkatesan, R.; Lecinski, J. Understanding the Role of Artificial Intelligence in Personalized Engagement Marketing. Calif. Manag. Rev. 2019, 61, 135–155. [Google Scholar] [CrossRef]
  24. Brandtzaeg, P.B.; Følstad, A. Chatbots: Changing user needs and motivations. Interactions 2018, 25, 38–43. [Google Scholar] [CrossRef]
  25. Abramson, A. How to Use ChatGPT as a Learning Tool. 2023. Available online: https://www.apa.org/monitor/2023/06/chatgpt-learning-tool (accessed on 16 May 2024).
  26. Pappas, M.; Drigas, A. Incorporation of Artificial Intelligence Tutoring Techniques in Mathematics. Int. J. Eng. Pedagog. 2016, 6, 12–16. [Google Scholar] [CrossRef]
  27. Deveci Topal, A.; Dilek Eren, C.; Kolburan Geçer, A. Chatbot application in a 5th grade science course. Educ. Inf. Technol. 2021, 26, 6241–6265. [Google Scholar] [CrossRef] [PubMed]
  28. Sison, A.J.G.; Daza, M.T.; Gozalo-Brizuela, R.; Garrido-Merchán, E.C. ChatGPT: More Than a “Weapon of Mass Deception” Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective. Int. J. Hum. Comput. Interact. 2023, 1–20. [Google Scholar] [CrossRef]
  29. Rotman, D. MIT Technology Review. How Technology Is Destroying Jobs. 2013. Available online: https://www.technologyreview.com/2013/06/12/178008/how-technology-is-destroying-jobs/ (accessed on 19 December 2023).
  30. Huang, M.H.; Rust, R.T. Artificial Intelligence in Service. J. Serv. Res. 2018, 21, 155–172. [Google Scholar] [CrossRef]
  31. Wang, W.; Siau, K. Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda. J. Database Manag. 2019, 30, 61–79. [Google Scholar] [CrossRef]
  32. Fast, E.; Horvitz, E. Long-term trends in the public perception of artificial intelligence. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. Available online: https://ojs.aaai.org/index.php/AAAI/article/view/10635 (accessed on 25 June 2024).
  33. Keskinbora, K.H. Medical ethics considerations on artificial intelligence. J. Clin. Neurosci. 2019, 64, 277–282. [Google Scholar] [CrossRef]
  34. Yu, S.; Zhao, L. Emojifying chatbot interactions: An exploration of emoji utilization in human-chatbot communications. Telemat. Inf. 2024, 86, 102071. [Google Scholar] [CrossRef]
  35. Laestadius, L.; Bishop, A.; Gonzalez, M.; Illenčík, D.; Campos-Castillo, C. Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media Soc. 2022, 22, 14614448221142007. [Google Scholar] [CrossRef]
  36. Brandtzaeg, P.; Skjuve, M.; Dysthe, K.; Følstad, A. When the social becomes non-human: Young people’s perception of social support in chatbots. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021. [Google Scholar]
  37. Himeur, Y.; Ghanem, K.; Alsalemi, A.; Bensaali, F.; Amira, A. Artificial intelligence based anomaly detection of energy consumption in buildings: A review, current trends and new perspectives. Appl. Energy 2021, 287, 116601. [Google Scholar] [CrossRef]
  38. dos Santos, D.P.; Giese, D.; Brodehl, S.; Chon, S.H.; Staab, W.; Kleinert, R.; Maintz, D.; Baeßler, B. Medical students’ attitude towards artificial intelligence: A multicentre survey. Eur. Radiol. 2019, 29, 1640–1646. [Google Scholar] [CrossRef]
  39. Lee, K.H.; Yen, C.L.A. Implicit and Explicit Attitudes Toward Service Robots in the Hospitality Industry: Gender Differences. Cornell Hosp. Q. 2023, 64, 212–225. [Google Scholar] [CrossRef]
  40. Gibert, K.; Valls, A. Building a Territorial Working Group to Reduce Gender Gap in the Field of Artificial Intelligence. Appl. Sci. 2022, 12, 3129. [Google Scholar] [CrossRef]
  41. Horowitz, M.C.; Kahn, L. What influences attitudes about artificial intelligence adoption: Evidence from U.S. local officials. PLoS ONE 2021, 16, e0257732. [Google Scholar]
  42. Yu, S.C. The More COVID-19 Information We Shared, the More Anxious We Got? The Associations among Social Media Use, Anxiety, and Coping Strategies. Cyberpsychol. Behav. Soc. Netw. 2022. Available online: https://www.liebertpub.com/doi/10.1089/cyber.2022.0010 (accessed on 25 June 2024).
  43. Gerbing, D.W.; Hamilton, J.G. Viability of exploratory factor analysis as a precursor to confirmatory factor analysis. Struct. Equ. Model. Multidiscip. J. 1996, 3, 62–72. [Google Scholar] [CrossRef]
  44. Knekta, E.; Runyon, C.; Eddy, S. One Size Doesn’t Fit All: Using Factor Analysis to Gather Validity Evidence When Using Surveys in Your Research. Life Sci. Educ. 2019, 18, rm1. [Google Scholar] [CrossRef] [PubMed]
  45. Yu, S.C.; Chen, H.R.; Liu, A.C.; Lee, H.Y. Toward COVID-19 Information: Infodemic or Fear of Missing Out? Healthcare 2020, 8, 550. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
Figure 1. CFA on the CAS with a standardized solution.
Table 1. Factor loadings for EFA of the CAS.
Item    Factor 1    Factor 2    Factor 3    Factor 4    Factor 5
Factor 1: Tool
I use ChatGPT to search for current news or the latest trends.    0.742    0.133    0.187    0.186    0.147
I use ChatGPT to plan travel itineraries or recommend food.    0.734    0.177    0.159    0.188    0.072
I use ChatGPT to understand current trends.    0.729    0.241    0.154    0.218    0.104
I use ChatGPT to provide information on various topics, such as the news and weather.    0.705    0.176    0.111    0.202    0.075
Factor 2: Tutor
I use ChatGPT to learn skills like writing, programming, sales, etc.    0.069    0.781    0.032    0.160    0.177
I use ChatGPT to learn subject knowledge, such as history, mathematics, computer science, etc.    0.146    0.730    0.086    0.170    0.173
I use ChatGPT to learn foreign languages.    0.367    0.675    0.104    0.129    −0.028
I use ChatGPT to handle work or academic problems.    0.006    0.636    −0.009    −0.030    0.460
I use ChatGPT for language translation.    0.290    0.620    0.067    −0.060    0.145
Factor 3: Threat
With the rapid advancement of AI technologies like ChatGPT, everyone should exercise caution.    0.046    −0.010    0.735    0.029    0.227
I’m concerned that ChatGPT might leak my conversation records.    0.248    0.031    0.673    0.129    −0.035
I’m afraid that ChatGPT will replace my job.    0.039    0.189    0.656    0.104    0.039
Due to ChatGPT, the problem of plagiarism in student assignments or papers may become more serious.    0.153    0.021    0.597    −0.119    0.252
ChatGPT is unreliable.    −0.001    0.011    0.596    0.088    −0.187
I worry that ChatGPT will decrease my interpersonal interactions.    0.374    0.038    0.596    0.255    −0.116
Factor 4: Talk
I ask ChatGPT about emotional or interpersonal issues.    0.199    0.114    0.142    0.866    0.050
I discuss personal growth-related questions with ChatGPT.    0.209    0.133    0.116    0.838    0.115
I chat with ChatGPT to pass the time or alleviate loneliness.    0.343    0.061    0.085    0.749    0.022
Factor 5: Trend
I believe using ChatGPT is the future.    0.030    0.170    0.087    0.015    0.838
I think ChatGPT is a powerful force driving change in various industries.    0.113    0.242    0.117    0.066    0.807
I find ChatGPT to be interesting and enjoyable.    0.323    0.224    −0.150    0.209    0.565
Table 2. Summary of gender differences in the CAS scores.
Measure    Men M (SD)    Women M (SD)    t-Value
Attitude    50.78 (10.16)    48.04 (11.14)    2.69 **
  Tool    8.59 (3.16)    7.85 (3.32)    2.42 *
  Threat    13.19 (3.86)    13.19 (3.77)    0.00
  Tutor    13.60 (3.58)    12.26 (4.01)    3.66 ***
  Talk    5.49 (2.49)    5.41 (2.75)    0.33
  Trend    9.92 (1.64)    9.34 (2.07)    3.44 **
Note. * p < 0.05. ** p < 0.01. *** p < 0.001.

