Article

It Helps with Crap Lecturers and Their Low Effort: Investigating Computer Science Students’ Perceptions of Using ChatGPT for Learning

by
Mireilla Bikanga Ada
School of Computing Science, University of Glasgow, Glasgow G12 8QQ, UK
Educ. Sci. 2024, 14(10), 1106; https://doi.org/10.3390/educsci14101106
Submission received: 19 July 2024 / Revised: 27 September 2024 / Accepted: 8 October 2024 / Published: 11 October 2024

Abstract

This study explores how computing science students (n = 335) use ChatGPT, their trust in its information, their navigation of plagiarism issues, and their confidence in addressing plagiarism and academic integrity. A mixed-methods approach was used, combining quantitative survey data with a qualitative thematic analysis of student comments to provide a comprehensive understanding of these issues. The findings reveal that ChatGPT has become integral to students’ academic routines, with 87.8% using it weekly with variable frequency. Most students (70.3%) believe the university should allow ChatGPT use, and 66.6% think it is fair to use it for academic purposes despite 57.4% distrusting its information. Additionally, 53.8% worry about accidentally plagiarising when using ChatGPT. Overall, students have moderate confidence in addressing these academic integrity issues, with no differences between undergraduate and postgraduate students. Male students reported higher confidence in handling plagiarism and academic integrity issues than female students, suggesting underlying differences in how students perceive and interact with generative AI technologies. A thematic analysis of 74 student comments on their ChatGPT experience revealed four themes: (a) Usage and Role of ChatGPT, (b) Ethical and Responsible Use, (c) Limitations and Accuracy, and (d) Impact on Education and Need for Clear Guidelines. This study contributes to the ongoing debate on accepting and using ChatGPT, highlighting the need for institutions to provide clear guidelines and ethical considerations to ensure responsible use within educational contexts.

1. Introduction

Artificial Intelligence (AI) has been a focal point of increased interest and research in recent years, leading to its adoption across various institutions for diverse purposes, including administrative tasks, instruction, and learning [1]. However, ChatGPT, an advanced large language model (LLM) AI, is rapidly transforming educational practices at an unprecedented rate. While ChatGPT offers substantial opportunities for students and academic institutions, it has also raised significant concerns regarding academic integrity, particularly plagiarism in higher education [2,3,4]—a pre-existing pervasive issue [5]. Students often struggle to distinguish between plagiarism and acceptable academic practice [6]. This confusion is also shared by markers and educators, who frequently face uncertainty about assignment guidelines and find it challenging to assess student work accurately when tools like ChatGPT are involved. This leads to heightened suspicion and diminished trust in student submissions [7,8]. However, academic responses to plagiarism were already varied, inconsistent, and often misaligned with institutional policies [9] even before ChatGPT emerged. This inconsistency contributes to further student confusion. Researchers have raised concerns about the risks of AI-assisted cheating and call for rigorous strategies to counteract these challenges [10].
Furthermore, ethical considerations are informed by theories of academic integrity and educational ethics, which stress the importance of maintaining academic standards in the face of technological advancement [11], as the information revolution has significantly impacted academic integrity, necessitating the re-evaluation of existing practices [12]. Despite the growing adoption of ChatGPT in educational settings, there remains a significant gap in understanding how students perceive its role in academic integrity, how much they trust genAI-generated content, and how they engage in ethical usage.
This study aims to fill this gap by exploring these perceptions among computer science students, thereby providing insights that are crucial for developing effective educational strategies and policies. Unlike previous studies that have generalised findings across multiple disciplines, this research highlights the specific challenges and opportunities within computer science education.
The discipline-specific impact of ChatGPT is notably evident in computer science (CS), where its integration necessitates a clear understanding of how these technologies influence student perceptions and interactions from the outset. Studies such as those by [13,14] highlight the various ways students use ChatGPT for research and data comparison tasks, acknowledging its effectiveness as a time-saving tool while also emphasising the need for students to be aware of its limitations and to avoid over-relying on it. Despite the potential benefits, the moderate usage of generative AI in education and the preference for changing assessment methods among educators point to the complex integration of these tools in academic settings [15,16].
Trust in AI-generated content is another critical aspect, with trust in AI being influenced by how others interact with the AI and the level of control or ownership users feel over AI decisions [17,18]. This complexity is accentuated by findings that trust in AI is not static and can change over time based on the AI’s performance, user experiences, and changes in AI algorithms or data [19]. Research by [20,21] explores these dynamics, revealing varying levels of trust among students and diverse patterns of interaction with genAI tools, such as ChatGPT.
This paper presents a study that explores computer science students’ perceptions of ChatGPT, focusing on its use and effectiveness and the ethical considerations it raises within an academic setting. The research primarily investigates several key areas: the frequency and nature of use of ChatGPT among students; their confidence in addressing issues of plagiarism and academic integrity when using ChatGPT; the perceived effectiveness of ChatGPT in academic settings; and the ethical implications of its use. Furthermore, the study examines the level of students’ trust in the information provided by ChatGPT and their views on the fairness of using it for learning purposes. Through a mixed-methods approach, combining quantitative data analysis and thematic qualitative analysis of student comments from an open-ended question in the questionnaire, this paper aims to contribute to the ongoing discourse on the role of genAI in education.
This paper is structured as follows: Section 2 presents the related work, Section 3 concerns materials and methods, Section 4 presents the results, Section 5 provides the discussion, and the final section concludes the paper.

2. Related Work

2.1. Tool Adoption and Student Attitudes towards Plagiarism

Plagiarism, which lacks a universally accepted theoretical or philosophical definition [22], is generally understood as the act of using someone else’s work, ideas, or expressions without proper acknowledgement [23]. It is one of the most significant challenges facing universities today, primarily due to advances in information and communication technology that make it easier to copy and share content [22,24,25]. This act violates ethical standards in both academic and professional contexts, undermining the integrity of scholarly work and distorting academic progress [22,25]. Academic integrity, while often used interchangeably with concepts like plagiarism and cheating [26], refers more broadly to the ethical code and moral principles that guide academic practices. It emphasises honesty, fairness, trust, and responsibility in scholarly activities [27].
In the context of evolving digital tools like ChatGPT, these issues have gained increased prominence. The advent of ChatGPT has sparked significant debate regarding academic integrity. The use of ChatGPT in academic writing presents both opportunities and challenges. Yu et al. [28] surveyed 328 students who used ChatGPT to explore the technology acceptance model in higher education. They found that its compatibility and efficiency positively influenced perceived ease of use and usefulness, significantly boosting user satisfaction and the intention to continue using the technology. This aligns with the Technology Acceptance Model (TAM), an effective framework and a credible model for assessing the acceptance of diverse learning technologies in educational contexts [29,30], including ChatGPT [31]. The TAM posits that perceived ease of use and perceived usefulness are critical factors in the adoption of new technologies in educational settings [32]. This model is particularly relevant when assessing tools like ChatGPT, which have the potential to reshape learning practices. Jarrah et al.’s review of the literature on ChatGPT in academic writing highlights that while ChatGPT can serve as a valuable tool, following responsible practices, including proper citation and attribution, is crucial to prevent plagiarism and maintain academic integrity [2]. Qureshi [33] highlights the potential benefits of ChatGPT, such as increased student involvement and improved performance, but also notes its limitations, particularly in examinations and term papers. Malinka et al. [34] and research by [35] discuss ethical concerns, with [34] explicitly addressing the risk of cheating and plagiarism in computer security education. Rajala et al. [35] found that students used ChatGPT for debugging, tutoring, and enhancing comprehension, arguing that the positive implications outweigh the negative ones. 
Dawson and Overfield [6] and research by [5] highlighted students’ uncertainty about plagiarism boundaries, complicating the use of ChatGPT.

2.2. Ethical and Responsible Use

Hassani and Silva [36] investigated how ChatGPT could be used in data science and found that it can significantly enhance productivity and accuracy in data science workflows, although its output can be challenging to interpret. Budhiraja et al. [37] investigated ChatGPT usage among undergraduate students in computer science and found that students use it for code generation, error correction, brainstorming, concept tutoring, feedback, and content creation. Their study also highlighted concerns about plagiarism and the need for training to use ChatGPT effectively. Kim et al. [38] noted dissatisfaction among users when ChatGPT failed to grasp their intentions, emphasising the need for improved interaction tactics; users with less knowledge were more dissatisfied and put less effort into addressing the issue. This is consistent with ethical theories that emphasise the importance of clear guidelines and ethical training to prevent the misuse of technology in educational settings [2,39]. Joshi et al. [40] investigated the strengths and weaknesses of undergraduate students’ utilisation of ChatGPT in computer science. They found that students are over-reliant on ChatGPT, leading to issues such as self-sabotage and decreased accuracy in assignments and exams. They also identified an underlying bias in ChatGPT’s dataset. Yilmaz and Yilmaz [41] concluded that genAI tools like ChatGPT can benefit programming education by improving computational thinking and motivation. Ray’s [42] literature review highlighted ethical concerns, data biases, and safety issues despite the widespread acceptance of ChatGPT. Gill et al. [43] found that ChatGPT helps educators create content and act as an online tutor but has drawbacks such as data inaccuracy and potential plagiarism. Lo [7] reviewed the literature showing that ChatGPT can assist instructors and students but also enables plagiarism and generates incorrect information.
Hariri [44] noted ChatGPT’s use in various fields, including chatbots, content generation, language translation, personalised recommendations, and even medical diagnosis and treatment, but warned of harmful language patterns and biased responses. Acosta-Enriquez et al. [45] found that responsible use, frequent intention to use, and acceptance are predictors of a positive attitude towards ChatGPT. Singh et al. [39] surveyed 430 master’s students in computing science, finding that while many are familiar with ChatGPT, clear guidelines are needed to prevent misuse: ChatGPT can be helpful for code generation and debugging but can also lead to over-reliance. Fenu [13] and research by [14] found that students use ChatGPT for research and data comparison tasks, appreciating its effectiveness as a time-saving tool. Despite its numerous benefits, many studies—for example, refs. [10,46]—argue that students should be aware of ChatGPT’s limitations and avoid over-relying on it.

2.3. Trust in AI-Generated Content

Trust in AI-generated content is another critical aspect. Dorton [17] suggests that trust in AI is complex, involving “supradyadic” trust interactions, in which a user’s trust in the AI is influenced by how other people interact with the agent, beyond endorsements or reputation. It is not just a dyadic construct (between user and agent) but can be influenced by third-party interactions [17]. Meanwhile, ref. [18] found that the level of control or ownership a user has over the AI’s decisions may also influence trust in AI, and that too much information about the AI can decrease trust. This is supported by the literature on trust in technology, which emphasises the role of perceived control and transparency in establishing and maintaining trust [47,48]. Trust in AI can change over time based on various factors, such as the AI’s performance, the user’s experiences, and changes in the AI’s algorithms or data [19]. Fenu [13] identified various student interaction patterns with ChatGPT, ranging from passive acceptance to active problem-solving. Lim and Thing [14] discussed the “age of eroding trust” generated by ChatGPT, highlighting potential reductions in confidence and trust in systems and processes, which may create apprehension about technological advancements. According to [49], despite the importance of trust in the adoption and responsible usage of generative AI and its role in digital transformation, there is limited understanding of how trustworthiness in AI tools is evaluated in real-world settings. Amoozadeh et al. [20] found that trust in AI positively correlates with improved motivation, confidence, and knowledge, regardless of programming confidence. Their study, which compared students from an Indian and an American university, found that female students from the American university had more trust in genAI than male students.

2.4. Implications for Teaching and Learning

Scholl et al. [21] analysed students’ chat protocols from solving programming exercises with ChatGPT by examining prompts, frequencies, chat progress, contents, and other usage patterns. They found that students used ChatGPT for immediate solutions and understanding and as a study buddy or virtual tutor, indicating significant engagement with the tool. Griesbeck et al. [50] surveyed students’ perceptions of AI in education, finding widespread acceptance but significant concerns about ethical issues and cheating. Bakas et al. [51] argued that AI could be a cognitive partner in critical thinking. Kosar et al. [52] found that students’ overall performance, the grading results of practical assignments, and midterm exams are not influenced by ChatGPT. They concluded that it does not significantly influence overall performance in programming courses but can be beneficial with proper measures. Smolansky [15] noted moderate usage of generative AI in education, with a need to change assessment methods to foster critical thinking. Gorichanaz [3] examined the accounts of students accused of using ChatGPT in assessments, many of whom maintained that the accusations were false, and called for rethinking assessment practices. Sullivan et al. [4] argue that generative AI tools can enhance student learning, prompting a need for adaptation in teaching and assessment practices.
As outlined in the introduction, this study is framed by the Technology Acceptance Model (TAM), whose constructs of perceived ease of use and perceived usefulness are critical to the adoption of new technologies in educational settings, and by theories of academic integrity and educational ethics, which stress maintaining academic standards in the face of technological advancement.
In summary, the literature review stresses ChatGPT’s dual nature as both a beneficial educational tool and a potential source of ethical challenges. These findings highlight the need for further investigation into students’ perceptions, trust, and ethical navigation when using ChatGPT, providing a strong foundation for the research questions posed in this study.

2.5. Research Questions

This study aims to investigate the perceptions of computing science students about using ChatGPT for learning at a university. The primary objectives of this research are as follows:
  • Understand how students are utilising ChatGPT in their academic work, specifically identifying the tasks for which they find it most useful.
  • Assess the level of trust students place in the information provided by ChatGPT.
  • Explore how students navigate the ethical issues related to plagiarism when using ChatGPT.
  • Evaluate students’ confidence in addressing plagiarism and academic integrity issues when utilising ChatGPT.
These objectives are directly aligned with the research questions, which frame the study as follows:
  • How do students use ChatGPT in their academic pursuits, and what specific tasks do they use it for?
  • How much trust do students have in the information provided by ChatGPT?
  • How do students navigate the issues with plagiarism when using ChatGPT?
  • How confident are students in their ability to address plagiarism and academic integrity issues when using ChatGPT?

3. Materials and Methods

The research design of this study is exploratory in nature, aiming to provide initial insights into the perceptions and experiences of computing science students regarding ChatGPT’s use in academic settings. The study uses a mixed-methods approach [53], combining quantitative data from a self-reported survey with qualitative data from open-ended responses. This approach is particularly suited to the study’s objectives, as it allows for both the measurement of specific variables (e.g., trust, usage, confidence) and the exploration of deeper insights into student experiences through thematic analysis.

3.1. Context and Participants

This study was conducted at a university in the United Kingdom, where the official guidelines related to generative AI remain broad. Staff are advised to emphasise to students the importance of avoiding the misuse of generative AI, as this could lead to an “unfair advantage” and result in a conduct case. The university is actively working toward providing digital development training to support staff in effectively incorporating AI into teaching and guiding students in its appropriate use. At the end of Semester 1 (December 2023) of the 2023–2024 academic year, when ChatGPT was barely a year old, CS students were asked to rate various statements related to their experience of using it for learning. All participants were fully informed of why the research was being conducted, how their data would be used, that anonymity was assured, and that participation was voluntary; deciding not to participate would not affect them in any way. The participants were enrolled in the School of Computing Science and encompassed undergraduate and postgraduate master’s students. Among the master’s students, there are two distinct groups: MSc CS+, which includes MSc Computer Science and MSc Data Science students, and MSc IT, comprising conversion students who do not possess a prior undergraduate degree in computer science.

3.2. Quantitative Data Collection and Analysis

A self-reported survey was sent to students, asking them to share their experience of ChatGPT. The survey used Likert-scale questions rated from 1 to 7 (1 = strongly disagree, 7 = strongly agree). The instrument included two subscales (see Table 1) with high internal consistency: Uses of ChatGPT (Cronbach’s alpha = 0.87) and Effectiveness of ChatGPT (Cronbach’s alpha = 0.84). Students were also asked to indicate their confidence in their ability to find solutions to the issues of plagiarism and academic integrity when using ChatGPT, on a scale from 1 (not confident at all) to 100 (very confident). Table 1 presents the descriptive analysis of the scales, and Figure 1 presents the corresponding histograms. SPSS version 29 was used to analyse the closed questions using descriptive and inferential statistics, including Pearson correlations, Chi-Square, ANOVA, and independent samples t-tests. The exploratory nature of this research justified the use of an unvalidated survey instrument, as the primary objective was to gather preliminary insights into students’ perceptions of ChatGPT. Given that this area of study is still emerging, the development and validation of a new scale were not feasible within the scope of this study. Future research should aim to validate and refine these measurement tools to ensure more robust and generalisable findings. While Cronbach’s alpha demonstrates the internal consistency of the subscales, it does not confirm the factor structure of the survey. Therefore, the author recommends that future research employ confirmatory factor analysis (CFA) to validate the survey instrument and verify the relationships between the observed variables and their underlying constructs.
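Cronbach’s alpha, reported above for the two subscales, summarises how consistently a set of Likert items measures the same underlying construct: it compares the sum of the individual item variances to the variance of respondents’ total scores. The following minimal sketch illustrates the computation; the responses are invented for illustration and are not the study’s data.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(items)
    item_var = sum(pvariance(col) for col in items)
    # Each respondent's total score across the k items
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 1-7 Likert responses: 4 items answered by 6 respondents
items = [
    [5, 6, 4, 7, 5, 6],
    [5, 7, 4, 6, 5, 6],
    [4, 6, 3, 7, 5, 5],
    [5, 6, 4, 6, 4, 6],
]
alpha = cronbach_alpha(items)  # high, since the items move together
```

Values above roughly 0.8, such as the 0.87 and 0.84 reported for the two subscales, are conventionally read as high internal consistency.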

3.3. Qualitative Data Collection and Analysis

Qualitative data were collected through an open-ended question included in the survey. The open-ended question asked participants to provide any comments or experiences related to their use of ChatGPT in academic settings. A total of 102 students responded to this question. During the data cleaning process, responses that contained irrelevant or non-informative answers, such as “no” or “none”, were removed. After this cleanup, 74 comments were retained for analysis. The qualitative data were analysed using inductive thematic analysis (TA), following the six-step process of familiarisation, coding, generating themes, reviewing themes, renaming, and writing up [54]. TA was chosen because of its flexibility and suitability for exploring students’ nuanced experiences and perceptions in an exploratory study. TA provided valuable insights into the broader context of ChatGPT use, including ethical considerations and trust issues.

4. Results

4.1. Quantitative Results

The survey was sent to 889 undergraduate and master’s students, yielding a response rate of 37.68%, with 335 students completing the self-reported questionnaire. Of those who answered the gender question, 219 (66.6%) were male, 83 (25.2%) were female, 18 (5.5%) preferred not to say, and 9 (2.7%) were non-binary/third-gender students. Of those who reported their academic status, there were 200 undergraduates (61%) and 128 postgraduate students (39%). The postgraduate group comprised 88 (26.8%) MSc CS+ students (MSc Computer Science and MSc Data Science) and 40 (12.2%) MSc IT (conversion) students. Cohen’s d [55] was used to measure effect size, indicating the magnitude of the difference between groups, with values of 0.2 considered small, 0.5 medium, and 0.8 large. The following section presents the descriptive analysis of participants’ responses regarding the use and effectiveness of ChatGPT in their academic work.
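Cohen’s d, used for the group comparisons throughout the results, is the difference between two group means divided by their pooled standard deviation, which is what makes the 0.2/0.5/0.8 thresholds comparable across scales. A minimal sketch follows; the ratings are hypothetical and are not the study’s data.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled sample
    standard deviation (0.2 ~ small, 0.5 ~ medium, 0.8 ~ large)."""
    n_a, n_b = len(group_a), len(group_b)
    s_a, s_b = stdev(group_a), stdev(group_b)
    # Pooled standard deviation across both groups
    pooled = (((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2)
              / (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical 1-7 Likert ratings for two groups of students
undergrads = [5, 4, 6, 5, 5, 4, 6, 5]
postgrads  = [6, 5, 6, 6, 5, 7, 6, 6]
d = cohens_d(postgrads, undergrads)  # positive: postgrads rate higher
```

A positive d here simply means the first group’s mean is higher; the magnitude, not the sign, is what the small/medium/large labels describe.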

4.1.1. Descriptive Analysis of Individual Scale Items

Students’ uses of ChatGPT and tasks: As seen in Figure 2, most students (87.8%) already use ChatGPT at least once every week, with the majority using it two to three times a week. Only 12.2% never use it. To further understand how students perceive the effectiveness of ChatGPT in their academic tasks, responses to specific survey items related to confidence in using the tool, trust in the information it provides, and its role in improving academic productivity were analysed.
As seen in Table 2, a significant portion of students (41.4%) agree that ChatGPT makes them more confident working on complex assignments, and 36.2% disagree. Most students (64%) believe that ChatGPT is effective in helping them understand complex topics. However, 41.8% also do not believe that ChatGPT understands the context of their questions well, compared to 34.6% who agree. While only 22.7% of students believe that the assignment/assessment marking criteria document is very useful for obtaining accurate, specific answers for an assignment from ChatGPT, 34.6% are undecided, and 42.7% disagree. Meanwhile, 36.2% agreed that using the assignment/assessment marking criteria document with ChatGPT helps them understand what is required in their assignment, while 34.5% disagree and 29% are undecided. Additionally, 29.1% of students also feel that despite using ChatGPT to assist them, the assignment/assessment marking criteria document is still vague, while 33.4% neither agreed nor disagreed.
Students were also asked to provide a specific example of a topic or concept that ChatGPT helped them understand better. Analysis of their comments (N = 200 comments) showed that ChatGPT has helped them understand a wide range of topics and concepts, which have been categorised as follows:
  • Algorithms and Machine Learning (30%): Understanding algorithms in Automata Theory, different machine learning algorithms, optimisation techniques, Principal Component Analysis (PCA), Gradient Descent, and Convolutional Neural Networks (CNNs).
  • Programming and Coding (35%): Debugging code, understanding specific programming concepts like recursion, syntax in various languages (Java version 17 and above, Python version 3.9 and above), and data structures like HashMap.
  • Mathematics and Data Science (20%): Concepts like Gaussian mixture models, distributions in statistics, eigenvectors and eigenvalues, and the mathematics behind data science.
  • Research and Summarisation (10%): Summarising research papers, translating concepts, and understanding definitions of complex topics in data fundamentals.
  • Miscellaneous Topics (5%): Understanding GDPR, job analysis, intellectual property rights, human–computer interaction, and specific commands in coding.
The analysis also explored students’ attitudes towards the use of ChatGPT in academic settings, including their views on whether the university should officially allow its use.

4.1.2. The University Should Allow ChatGPT

As presented in Table 2, most students (70.3%) think that the university should allow ChatGPT because it can help improve study efficiency; few disagree (14.7%) or are undecided (14.8%). While there is no significant gender difference (p = 0.16, equal variance), independent t-test results show a statistically significant difference (p = 0.015, unequal variance) in opinions between undergraduates (N = 199, M = 5.14) and postgraduates (N = 127, M = 5.6), with a small effect size (Cohen’s d = 0.27) [55]. Furthermore, the results show a significant difference (p = 0.001, equal variance) between students who are concerned about accidentally plagiarising or copying content from ChatGPT for academic work (N = 177, M = 5.7) and those who do not share the same concerns (N = 152, M = 4.9), with a medium effect size (Cohen’s d = 0.47).
The following subsection explores the critical issue of trust in AI-generated content—an area where students’ opinions were notably divided.

4.1.3. Trust in the Information Provided by ChatGPT

A little more than half of the respondents, 190 (57.4%), do not trust the information provided by ChatGPT, while 92 (27.8%) are undecided. Only 49 (14%) trust ChatGPT information. While there is no gender difference, postgraduates (N = 126, M = 3.5) have statistically significantly (p = 0.001) more trust than undergraduates (N = 200, M = 2.97), with a small effect size (d = 0.37). Furthermore, the results show a significant difference in trust (p = 0.001, equal variance) between students who are concerned about accidentally plagiarising or copying content from ChatGPT for academic work (N = 177, M = 3.42) and those who are not concerned (N = 152, M = 2.9), with a medium effect size (Cohen’s d = 0.47).
In addition to trust, the fairness of using ChatGPT for learning purposes is another critical factor in understanding students’ perceptions of this AI tool.

4.1.4. Fairness in Using ChatGPT for Learning

Most students, 215 (66.6%), think it is fair for them to use ChatGPT for learning. Only 16.8% disagree, and 16.7% are undecided. While there is no gender difference (p = 0.17), independent samples t-test results indicate a statistically significant difference (p = 0.017, unequal variance) between postgraduates (N = 125, M = 5.48) and undergraduates (N = 193, M = 5.03), with a small effect size (Cohen’s d = 0.27). Furthermore, the results show a significant difference in perceived fairness of using ChatGPT (p = 0.001, unequal variance) between students who are concerned about accidentally plagiarising or copying content from ChatGPT for academic work (N = 174, M = 5.57) and those who are not concerned (N = 147, M = 4.76), with a medium effect size (Cohen’s d = 0.49).
Given the importance of academic integrity, the next section examines students’ concerns about accidentally plagiarising when using ChatGPT and their confidence levels in addressing these issues.

4.1.5. Concerns about Accidentally Plagiarising or Copying Content from ChatGPT

More than half of the respondents are concerned about accidentally plagiarising or copying content when using ChatGPT (N = 178, 53.8%), while 153 (46.2%) students do not share these concerns. While there is no gender difference, Chi-Square results indicate a statistically significant association between academic status (undergraduate and postgraduate) and concern about accidentally plagiarising or copying content from ChatGPT in academic work (with both Fisher’s Exact Test and Pearson Chi-Square, p = 0.003), with postgraduates (N = 127, Yes = 81, or 63.8% of postgraduates) showing a higher level of concern than undergraduates (N = 200, Yes = 94, or 47% of undergraduates). The results also show that MSc CS+ students (CS, Data Science) (N = 87, Yes = 55, or 63.2%) show slightly less concern than MSc IT (conversion) students (N = 40, Yes = 26, or 65%). When asked in a multiple-choice question how they address these concerns, students mainly said they rewrite the content (56%), as explained by one student: “It wasn’t me that used ChatGPT but a teammate in a group project. We (mostly I) rewrote the content in my own words”. Many students (52.9%) verify information with other sources, 39.8% avoid using the information, and 9.9% cite ChatGPT as a source. A small number (4.7%) use other approaches, as one student explained: “Asked general questions rather than questions specific to a specific question (e.g., asked how it would recommend I approach answering a general kind of question rather than asking for detailed instructions or an answer to a specific question). Did this to avoid relying too heavily on ChatGPT’s wording”.
Building on these findings, we explored how students’ confidence in addressing plagiarism and academic integrity issues might influence their overall use and perception of ChatGPT.

4.1.6. Confidence in Finding Solutions to Plagiarism and Academic Integrity Issues When Using ChatGPT

Confidence was measured on a scale from 1 (not confident at all) to 100 (very confident). Most students have moderate confidence in finding solutions to plagiarism and academic integrity issues when using ChatGPT (M = 56.76, SD = 30.09, N = 284). Independent t-test results indicate a statistically significant gender difference (p = 0.048, equal variances assumed): male students (N = 192, M = 60.26) are more confident than female students (N = 69, M = 51.97), with a small effect size (Cohen’s d = 0.28). However, there are no statistically significant gender differences in the perception of the effectiveness of ChatGPT (p = 0.17) or in the use of ChatGPT (p = 0.57). While there is no significant difference between undergraduate and postgraduate students in their confidence in finding solutions to plagiarism and academic integrity issues (p > 0.05) or in their perceptions of the effectiveness of ChatGPT (p > 0.05), there is a statistically significant difference in their use of ChatGPT (p < 0.001, equal variances assumed): the mean for postgraduate students (N = 128, M = 47.25) is higher than that for undergraduate students (N = 200, M = 40.87), with a small effect size (Cohen’s d = 0.4).
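The effect size reported above follows the standard pooled-SD formulation of Cohen’s d. The group standard deviations are not reported, so the pooled SD of 29.6 used below is an illustrative assumption, chosen to be consistent with the overall SD of 30.09 and the reported d ≈ 0.28; the group means and sizes are those from the survey.

```python
def cohens_d(mean1: float, mean2: float, pooled_sd: float) -> float:
    """Cohen's d from group summary statistics (pooled-SD formulation)."""
    return (mean1 - mean2) / pooled_sd

male_mean, female_mean = 60.26, 51.97   # reported group means
assumed_pooled_sd = 29.6                # assumption: group SDs not reported

d = cohens_d(male_mean, female_mean, assumed_pooled_sd)
print(f"Cohen's d = {d:.2f}")  # around 0.2 is conventionally a small effect
```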
The final quantitative analysis focuses on the correlations between students’ confidence in handling plagiarism, their perceptions of ChatGPT’s effectiveness, and their overall usage of the tool.

4.1.7. Other Results

Pearson correlation results show a positive and significant correlation between students’ confidence in solving plagiarism and academic integrity issues and both their perception of the effectiveness of ChatGPT (r = 0.313, N = 284, p < 0.01) and their use of ChatGPT (r = 0.188, N = 284, p < 0.01). For example, students who are more confident in their ability to handle plagiarism issues tend to perceive ChatGPT as more effective in assisting them, especially in helping them complete their assignments (N = 331, M = 4.5, Md = 5) and understand complex topics (N = 331, M = 4.98, Md = 5). On the other hand, there is a significant negative correlation (r = −0.220, N = 331, p < 0.01) between concerns about plagiarism and academic integrity and the perceived effectiveness of ChatGPT, implying that students with such concerns tend to perceive ChatGPT as less effective in assisting them. There is also a significant negative correlation (r = −0.329, N = 331, p < 0.01) between concerns about plagiarising or copying content from ChatGPT and the use of ChatGPT, indicating that students who have these concerns are less likely to use ChatGPT frequently. Finally, there is a strong, significant positive correlation between the perceived effectiveness of ChatGPT and the use of ChatGPT (r = 0.703, N = 333, p < 0.001): students who find ChatGPT more effective are likely to use it more often for different tasks.
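For readers unfamiliar with how such coefficients are obtained, the sketch below computes a Pearson r with `scipy.stats.pearsonr` on made-up paired scores (confidence vs. perceived effectiveness, on 0–100 scales). The real survey data are not available; these values merely demonstrate the computation behind figures such as the reported r = 0.313.

```python
from scipy.stats import pearsonr

# Illustrative (fabricated) paired scores for ten hypothetical respondents:
# confidence in handling plagiarism issues vs. perceived effectiveness of ChatGPT
confidence    = [20, 35, 40, 55, 60, 65, 70, 80, 85, 90]
effectiveness = [30, 25, 50, 45, 70, 55, 75, 65, 90, 80]

# Pearson correlation coefficient and its two-tailed p-value
r, p_value = pearsonr(confidence, effectiveness)
print(f"r = {r:.3f}, p = {p_value:.3f}")
```

A positive r indicates that higher values on one measure tend to accompany higher values on the other, which is the pattern reported for confidence and perceived effectiveness above.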
While the quantitative results provide valuable insights, they are complemented by qualitative data, which offer a deeper understanding of students’ experiences and perceptions.

4.2. Qualitative Results

Given the exploratory nature of this study, an inductive thematic analysis (TA) [54] was used to analyse 74 student comments on their ChatGPT experience, following the six steps of familiarisation, coding, generating themes, reviewing themes, defining and naming themes, and writing up. This approach was selected for its theoretical flexibility [56], including flexibility regarding the type of data (from an open-ended question), and for its suitability for identifying patterns within qualitative data, which makes it particularly appropriate for an exploratory study like this one. The decision to categorise the comments into four specific themes was guided by the patterns that emerged during the coding process. Initially, open coding was conducted to identify key phrases and ideas expressed by the students. As these codes were clustered based on similarity and relevance, the following four themes were identified:
  • Usage and Role of ChatGPT: This theme captures how students utilise ChatGPT to support their learning, including finding information, understanding complex concepts, and improving productivity.
  • Ethical and Responsible Use: This theme explores students’ concerns and strategies related to maintaining academic integrity while using ChatGPT.
  • Limitations and Accuracy: This theme addresses the perceived shortcomings of ChatGPT, such as the vagueness of its responses and the potential for generating incorrect information.
  • Impact on Education and Need for Clear Guidelines: This theme reflects students’ views on how ChatGPT influences educational practices and the need for clearer institutional guidelines on its use.
The decision to focus on these four themes was not arbitrary but rather a result of iterative analysis and the clustering of related codes, which helped to capture the most significant aspects of the students’ experiences. The thematic analysis was guided by the principle of saturation, where no new themes emerged as the data were revisited multiple times.
Conducting TA as a sole researcher has both advantages and challenges. While consistency in coding and theme development can be ensured, there is also a risk of personal biases influencing the analysis. Two strategies were employed to mitigate these biases and enhance the rigour of the analysis. First, a genAI tool was used to cross-validate the qualitative results. By providing the genAI tool with appropriate prompts, it was possible to add a layer of objectivity to the analysis, helping to ensure that the identified themes were not solely a result of personal interpretation. This approach helped to verify the robustness of the initially identified themes and provided an external check on the findings. Second, input was sought from a colleague with expertise in qualitative research, who reviewed the coding and thematic development. This peer review process added another layer of validation to the analysis.
Despite these efforts, it is important to note that this study’s limitations, including the reliance on a single researcher for primary data interpretation, could be addressed in future studies by involving multiple researchers to enhance the depth and breadth of the analysis. Each theme is discussed in detail below, offering a nuanced view of how students perceive and interact with ChatGPT in their academic pursuits.

4.2.1. Usage and Role of ChatGPT

ChatGPT is primarily used as a tool to aid in learning and productivity. Most students see it as a helpful resource for finding information, understanding complex concepts, and improving their efficiency. For example, one student mentioned using it to find code examples and explanations. Another student noted that it is useful for summarising and explaining topics while studying. Some see it as a tool that can enhance their learning experiences, as captured in this comment: “ChatGPT is best used to explain topics and give us an understanding of how to approach and solve questions”. However, one student expressed a more critical perspective, “It helps with crap lecturers and their low effort, syllabus and course content. Like honestly could easily replace maybe 20% of lecturers”. This comment reflects dissatisfaction with some lecturers, suggesting that ChatGPT could potentially fill the gaps left by perceived low-quality teaching. While this view is specific to one student, 6 (20%) out of 30 comments related to the use of ChatGPT indicated that students use ChatGPT to supplement their learning. This suggests that a minority of students view ChatGPT as an effective supplement to, or even a partial replacement for, human instruction in specific educational contexts. However, it is important to note that dissatisfaction with lecturers is often part of a broader narrative in education studies that places disproportionate blame on educators. The literature emphasises that educational challenges should be understood within a broader systemic context rather than solely attributed to individual educators. This trend in educational discourse tends to overlook institutional and structural factors, unfairly positioning teachers as the primary cause of educational shortcomings [57].

4.2.2. Ethical and Responsible Use

Most students are concerned about academic integrity and honesty, expressing apprehension about the impact of ChatGPT on traditional academic values. One student summarises this sentiment: “It shouldn’t be allowed for essays, etcetera, as it is plagiarism (scrapes content off Google) and also will lack an actual in-depth understanding of the topic presented to it”. Another student added, “ChatGPT causes issues with copyright, let alone academic integrity”. These concerns highlight the broader challenge of defining academic honesty in an era where generative AI tools are becoming more prevalent. As one student noted, “I think what I am beginning to unfortunately understand is that academic honesty is a concept that is very difficult to define”. This uncertainty reflects the students’ concerns about the legitimacy of AI-generated content in relation to academic integrity. Consequently, students hesitate to cite or acknowledge ChatGPT in their academic work, as demonstrated by one student who remarked, “Actually, I want to cite GPT but am afraid to do so because I am not sure how much percent I can cite”. This highlights the dilemma students face in navigating the new academic environment dominated by genAI technologies, with no clear guidelines or policies on their appropriate use.
Opinions on whether ChatGPT should be used in academic contexts vary. For instance, one student believes, “It should be discouraged by the university for the most part. It will hinder everyone in their future career and progression”, suggesting a need for more stringent guidelines for ChatGPT usage in education. The concern is that reliance on it could negatively impact students’ future careers and professional development by potentially hindering the development of necessary skills and knowledge.
In contrast, others see the value in embracing new technology and using it to adapt educational practices. For example, one undergraduate female student stated, “ChatGPT is a real tool available to all professionals, so instead of worrying about its use in assessments, maybe we should change how and what we assess, so that it is useful and meaningful in a world where ChatGPT does exist”, indicating a more accepting stance towards integrating genAI in education, including rethinking assessment methods to reflect the evolving role of technology. However, this shift also raises concerns about fairness in academic assessments. A student observed, “I feel ChatGPT makes [it] easier to get the solutions for a given task. ... It is like a lecturer gives you [an] assignment with [the] solution”. This sentiment suggests that ChatGPT may provide substantial assistance for assignments requiring clear-cut answers, such as problem-solving in technical fields, including computing science. However, for tasks that require interpretative reasoning, critical thinking, or the integration of human experience, such as in essay-based or creative subjects, ChatGPT may offer less value. This distinction highlights that the tool’s impact depends heavily on the nature of the assignment, potentially making it more powerful for science students than arts students.
Nonetheless, this raises concerns about ChatGPT undermining the intended learning objectives of assignments, creating an imbalance between effort and outcomes in some educational contexts. Similarly, one student noted, “It allows students who don’t understand the topic well, but who know how to use ChatGPT well, to nearly pass every course”. This concern was echoed by a student who shared their experience using ChatGPT for quizzes: “The only time I used it over the 2022–2023 year was for a couple of timed quizzes. The quizzes were open-book, so ChatGPT was fair game. It’s able to produce faster and clearer answers to multiple-choice questions than Google or the lecture slides can”. These comments reinforce the need for careful consideration of the ethical implications of genAI use, particularly in ensuring fairness and maintaining academic integrity. Students who genuinely strive to understand and learn the material rather than depending on a tool for answers are the most affected. The potential unfair advantage and lack of academic integrity may create disparities in academic performance.

4.2.3. Limitations and Accuracy

Many students have experienced limitations in ChatGPT’s accuracy, particularly in providing vague or incorrect answers to academic questions. Some doubt its reliability in producing accurate code and emphasise the need for additional validation and understanding beyond ChatGPT’s responses. Students also noted that it may not excel in complex subjects and is best used for simple tasks like summarisation and paraphrasing. These limitations and inaccuracies can lead to unfair evaluation if ChatGPT is used as the primary source of information or answers. One student shared, “In my experience answers provided by ChatGPT are very vague for academic questions (much too vague to put it into an essay for example)”. A female undergraduate echoed this sentiment, stating that “ChatGPT sometimes makes up fake literature”, while another student observed, “Often time ChatGPT gives you answers that are wrong (in maths etc.)”. A male MSc CS student further explained, “ChatGPT can be exceptionally useful when studying, but it needs to be used in tandem with other sources, as information from ChatGPT is not always accurate”.
These limitations highlight the importance of critical thinking and subject knowledge when using genAI tools like ChatGPT. Users need to discern between accurate information and unreliable outputs. For instance, one student acknowledged, “ChatGPT spits out a lot of garbage, and you need to have at least some level of understanding around what you’re actually doing for it to be a useful tool”. This suggests that ChatGPT is most effective when users already have some prior knowledge of the topic, enabling them to identify errors and use the tool as a supplement rather than a primary source of information.

4.2.4. Impact on Education and Need for Clear Guidelines

This theme explores how ChatGPT impacts students’ learning and the education system. Students recognise the potential impact of ChatGPT on their learning, where they use it as a “tool to aid learning” to help them understand the topics. For example, one male MSc IT student suggested considering ChatGPT as a teacher: “Consider it as a teacher rather than a machine which only replies to the question”. A male MSc CS student commented that they “provide context or explanations from lecture material and ask it to explain that to me in different levels of complexity until I understand it”. An undergraduate male remarked that ChatGPT “definitely improves my ability to ‘start’ an assignment …”. A few comments highlight the necessity to rethink traditional assessment methods in light of generative AI advancements. One student suggested, “Integrating ChatGPT into education requires a rethinking of assessment …”. This reflects an emerging awareness that educational frameworks may need to evolve to stay relevant in the age of genAI. However, other students think ChatGPT affects genuine learning, which is unfair to students who prioritise developing critical thinking skills. For example, one student commented, “I will say I am concerned for my usage of ChatGPT because I worry that my over-reliance on [it] will be devastating once I’m in an exam room”. The discussion of balancing ChatGPT’s use as a tutor or guidance tool versus a complete solution reflects concerns about fairness in educational outcomes. However, there is a need for clarity and guidance as “Communication from the university has been unclear on when use of chatGPT is permitted and when not. It feels like this is because the uni is not sure of this yet itself”, another student commented.

5. Discussion

This study investigated computer science students’ perceptions regarding using ChatGPT for learning at a university. The primary research questions focused on how students use ChatGPT, their trust in its information, their navigation of plagiarism issues, and their confidence in addressing academic integrity when using ChatGPT. The results provide a comprehensive understanding of these aspects and offer insights into the broader implications for educational practices and policies.

5.1. How Do Students Use ChatGPT in Their Academic Pursuits?

The findings reveal that ChatGPT has become an integral part of students’ academic routines, with 87.8% of participants using the tool weekly. This frequent usage aligns with the growing adoption of genAI tools in education, as noted in previous studies [13,14]. It is not, therefore, surprising that 70.3% of the students think that the university should allow the use of ChatGPT, as it can enhance their study efficiency. Students primarily use ChatGPT to aid in understanding complex topics, completing assignments, and improving study efficiency. For example, many students reported using ChatGPT to find code examples, explanations, and summaries of academic material. This suggests that ChatGPT serves as a valuable supplementary resource and can significantly aid in information gathering, understanding complex topics, and improving overall academic productivity, thus enhancing traditional learning methods by providing immediate, accessible assistance [15,16].
However, the qualitative data also highlight the limitations of ChatGPT, with students expressing concerns about its accuracy, especially for more complex tasks. This suggests that while ChatGPT is a helpful aid for routine tasks and enhancing efficiency, it may not always be reliable as a primary source for more challenging academic work. As such, students must use it in tandem with other resources and exercise critical thinking to ensure accuracy. Interestingly, some students mentioned turning to ChatGPT to compensate for perceived inadequacies in traditional teaching. Comments like “helps with crap lecturers and their low effort” reflect dissatisfaction with certain lecturers, prompting students to use ChatGPT as a substitute for gaps in teaching. However, this raises a more critical question: Is it appropriate to compare the pedagogy delivered by human lecturers with AI-generated responses? Pedagogical methods involve interaction, adaptability, and real-time engagement, which inherently differ from AI’s static, corpus-based outputs. While ChatGPT can quickly provide definitions, explanations, and examples and can serve as viable supplementary material in specific contexts, its strict adherence to provided prompts brings into question its ability to adapt to a wide range of learning needs and preferences [58]—it lacks the deeper contextualisation, critical engagement, and human nuance that lecturers bring to the classroom [59]. For instance, it may not be reasonable to equate a spontaneous oral definition given by a lecturer during class with the more mechanical response of an AI based on pre-existing data. Human instructors can adjust their explanations based on student feedback, provide real-time clarification, and engage in dialogue, which AI tools cannot replicate effectively [60,61,62,63], at least for now.
This dependence on genAI tools like ChatGPT might reflect gaps in educational practices and highlight the limitations of using it as a remedy for dissatisfaction with human instruction. Therefore, while ChatGPT’s frequent use reflects its usefulness and limitations, it is important to recognise that it cannot fully substitute for the dynamic and interactive aspects of pedagogy. Students benefit from ChatGPT’s accessibility and ability to fill specific gaps in learning. However, they must also be cautious of over-reliance, particularly for complex academic tasks that require critical engagement and deeper understanding. This dual perspective emphasises the need for a balanced approach to integrating genAI tools into educational practices—recognising the strengths of both genAI tools and traditional human-led pedagogy while addressing the limitations of each.

5.2. How Much Trust Do Students Have in the Information Provided by ChatGPT?

Trust in AI-generated content is a critical issue, with the study revealing that 57.4% of students do not trust the information provided by ChatGPT, as measured by Item 3 of the survey (see Table 2), which specifically asked about students’ trust in AI-generated content. This uncertainty is reflected in the qualitative data, with several students commenting on the vagueness and inaccuracy of ChatGPT in answering complex academic questions. For instance, one student remarked, “… answers provided by ChatGPT are very vague for academic questions”, which echoes the broader sentiment captured by the quantitative findings. Considered together, the qualitative and quantitative findings stress the importance of addressing AI-generated content’s perceived accuracy and reliability to build user trust. Only 14% of respondents expressed trust in the tool, which could stem from the tool’s limitations and inaccuracies, as highlighted by qualitative comments such as “ChatGPT sometimes makes up fake literature”. These trust issues are not unique to this study, as similar concerns have been identified in the existing literature [17,18,19]. The study also found higher trust levels among postgraduate students than undergraduates, suggesting that experience and familiarity play significant roles in shaping trust. This finding aligns with previous research [20,21], where trust in genAI is influenced by a user’s familiarity and experience with the technology. For example, one MSc CS student commented, “I use ChatGPT to summarize and explain topics while studying. I provide context or explanations from lecture material and ask it to explain that to me in different levels of complexity until I understand it”. This indicates a degree of trust in ChatGPT’s ability to provide helpful and reliable information during their study process.
Alternatively, this could imply that postgraduates, with their greater experience, may be better equipped to discern when and how to use ChatGPT effectively, trusting it in specific contexts rather than broadly. For example, one MSc CS student stated, “ChatGPT spits out a lot of garbage, and you need to have at least some level of understanding around what you’re actually doing for it to be a useful tool”. Another MSc IT student noted, “It is a very useful free learning tool. I have dyslexia and dyspraxia, and it’s very helpful with checking over my work for sequencing issues, which normal tools like gramma and spell checkers do not”. This comment suggests that the student trusts ChatGPT as a valuable learning tool, especially for addressing challenges related to dyslexia and dyspraxia. The student finds it particularly helpful for checking sequencing issues in their work, which traditional grammar and spell checkers do not adequately address, further indicating their trust in ChatGPT’s capabilities. Indeed, ChatGPT-4 could be a transformative tool for individuals with dyslexia [64].
Interestingly, despite the overall lack of trust identified in the quantitative findings, the qualitative results showed that students frequently rely on ChatGPT for routine tasks, likely due to its convenience and ability to provide quick assistance for simple tasks like summarisation or brainstorming ideas. While they may not fully trust it for complex academic work, its accessibility and time-saving features make it a useful tool in certain contexts. For example, one undergraduate student’s comment summarises this feeling, expressing scepticism about ChatGPT’s ability to handle more complex academic tasks, indicating a lack of trust for advanced coursework. However, the comment also highlights its effectiveness for simple tasks like summarisation and note-taking: “I feel that ChatGPT generally sucks at more complicated concepts, so directly using it for level 3–4 courses assignment usually give very bad results. What ChatGPT excel at, however, is summarization and paraphrasing. For example, I can transcribe lecture videos and feed it into ChatGPT to ask specific questions, and generally, it would give decent results in very digestible format like point forms. This really simplifies the process of making notes and also it lets me absorb lecture material faster”.
It is also important to note that some versions of ChatGPT allow users to adjust settings, such as the “temperature”, which controls the creativity or precision of the responses [65]. For instance, high temperatures (e.g., 0.7) generate more diverse and creative outputs, while low temperatures (e.g., 0.2) produce more precise and focused responses. Adjusting this setting could potentially address some concerns around accuracy, particularly in contexts where more exact answers are required. However, this technical feature is not widely understood or utilised by most users, as many rely on the default settings without realising the potential for customisation.
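The effect of the temperature setting described above can be illustrated with the standard temperature-scaled softmax used in language-model sampling. This is a generic sketch of the underlying mechanism, not the internals of any particular ChatGPT version, and the logit values are hypothetical.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: lower temperatures sharpen the
    distribution (more precise, near-deterministic output), while higher
    temperatures flatten it (more diverse, 'creative' sampling)."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores

focused = softmax_with_temperature(logits, 0.2)  # "precise" setting
diverse = softmax_with_temperature(logits, 0.7)  # "creative" setting

print([round(p, 3) for p in focused])  # probability mass piles onto the top token
print([round(p, 3) for p in diverse])  # probability spreads across alternatives
```

At a temperature of 0.2, almost all probability mass falls on the highest-scoring token; at 0.7, lower-scoring tokens retain a meaningful chance of being sampled, which is why lower temperatures tend to yield more exact, repeatable answers.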
These findings highlight the need for targeted training, including training on how to optimise genAI tools, particularly for less experienced students. Such training would help students develop critical thinking skills necessary to evaluate AI-generated content and foster a more discerning and informed approach to using these tools in academic settings. Promoting digital literacy and critical thinking across all academic levels could help bridge the trust gap and ensure students are better prepared to responsibly and effectively engage with genAI technologies. Although this study was exploratory and did not explicitly apply the Technology Acceptance Model (TAM), the findings regarding students’ lack of trust in AI-generated content align with the TAM’s focus on perceived usefulness and reliability—an important consideration in the TAM’s framework.

5.3. How Do Students Navigate the Issues with Plagiarism When Using ChatGPT?

Academic integrity and plagiarism are significant concerns associated with using AI tools like ChatGPT [63]. Over half of the respondents (53.8%) expressed worries about accidentally plagiarising when using ChatGPT. This aligns with previous studies [14,50], highlighting the ethical dilemmas AI tools present in educational contexts. One of the key issues with ChatGPT, unlike other AI tools such as Copilot, is that it does not provide information about the original sources from which it retrieves data. As a result, users cannot tell which parts of the output derive from which online sources, leading to an increased risk of accidental plagiarism. This lack of transparency makes it challenging for students to ensure the accuracy and originality of the content they use in their academic work. Furthermore, the moderate confidence level (M = 56.76, SD = 30.09) in addressing plagiarism and academic integrity issues reflects the complexity and ambiguity that students face in discerning the appropriate use of AI-generated content in their academic work [46]. This suggests a need for further clarity and support from institutions to help students navigate these challenges responsibly. The absence of clear source attribution from ChatGPT exacerbates these issues, as it increases the difficulty of verifying information and ensuring proper citation, making it crucial for educational institutions to provide guidance on how to use AI tools in a way that upholds academic integrity.
The qualitative data revealed that students are aware of the ethical dilemmas posed by genAI tools. They conveyed a desire for more comprehensive policies and educational programs to help navigate these challenges. Comments such as “Actually, I want to cite GPT but afraid to do so …” illustrate students’ uncertainty, which stems from a lack of clear guidelines and institutional support. However, it is important to note that guidelines for citing generative AI, such as those published by the APA, are available [66]. These guidelines provide clear instructions on properly citing information retrieved from tools like ChatGPT, helping to reduce uncertainty. Despite this, many students remain unaware of such resources. Institutions must therefore develop and communicate comprehensive policies that guide the ethical and responsible use of genAI tools like ChatGPT, helping students understand how to integrate these technologies into their academic work without compromising integrity. Such measures are essential given the growing concerns regarding academic integrity, including plagiarism, in higher education [2,3,4], and will only become more pressing as AI technology continues to advance at an unprecedented pace [63].

5.4. How Confident Are Students in Their Ability to Address Plagiarism and Academic Integrity Issues When Using ChatGPT?

This study found gender differences in confidence levels regarding handling plagiarism and academic integrity issues, with male students reporting higher confidence than female students. This disparity suggests underlying differences in how male and female students perceive and interact with genAI technologies, potentially reflecting broader gender dynamics in technology use and digital literacy, as has been suggested in previous research [20,50]. Alternatively, this difference might indicate differing levels of exposure or encouragement to engage with technology-based tools across genders. The qualitative data reinforce these quantitative findings, with students expressing concerns about the vagueness of ChatGPT’s answers and the potential for misunderstanding key academic concepts. One female MSc CS student noted, “In my experience answers provided by ChatGPT are very vague for academic questions (much too vague to put it into an essay for example)”. Meanwhile, a female undergraduate commented, “ChatGPT sometimes makes up fake literature”. Such experiences likely contribute to the lower confidence levels observed among female students, suggesting that further guidance and support are necessary to help all students, regardless of gender, use genAI tools like ChatGPT more effectively and responsibly.
On the other hand, male students’ comments such as “ChatGPT can be exceptionally useful when studying, but it needs to be used in tandem with other sources, as information from ChatGPT is not always accurate” highlights a practical approach to using ChatGPT and could indicate awareness of ChatGPT’s limitations and the need for additional verification. Other comments, such as “Consider it as a teacher rather than a machine which only replies to the question”, could suggest a more interactive and responsible use of ChatGPT, viewing it as a learning tool rather than a source of direct answers. This approach may be tied to greater confidence in using the tool effectively and ethically.
Despite these differences, the moderate overall confidence level indicates that all students, regardless of gender, require more robust support systems to address these issues effectively. These findings highlight the need for targeted educational interventions to bridge the confidence gap, ensuring that all students, regardless of gender, feel equally equipped to address the challenges associated with genAI tools like ChatGPT. This aligns with the necessity of clear guidelines and training, as highlighted in previous studies, to help students navigate academic integrity challenges when using AI technologies [37,46]. The significant correlation between students’ confidence in solving plagiarism and academic integrity issues and their perception of ChatGPT’s effectiveness (r = 0.313, p < 0.01) further suggests that confidence in managing academic integrity is linked to how useful students find ChatGPT. For example, one student stated, “I think, what I am beginning to unfortunately understand, is that academic honesty is a concept that is very difficult to define. If I hire a tutor, and they teach me the material I need to understand for my coursework—is that dishonest? Is that fair to those who cannot afford a tutor?” The student reveals deeper ethical concerns about fairness and academic honesty, comparing the use of ChatGPT to hiring a tutor. This demonstrates a critical reflection on the ethical implications of using AI tools and how this relates to broader issues of fairness and trust in the academic process. Moreover, although students may feel confident in using ChatGPT for specific tasks (like summarising or explaining complex topics), this confidence does not necessarily translate into trust for high-stakes academic work.
One student’s comment illustrates this well, highlighting the inaccuracies that lead to selective use of ChatGPT: “ChatGPT gives inaccurate information quite often so I don’t use it for my assignments, only to understand complex topics and debug code”. This lack of trust in AI tools for complex tasks suggests that students are aware of the limitations of AI and that this awareness influences their ethical decision-making.
The findings highlight the importance of fostering both digital literacy and ethical awareness to maximise the benefits of genAI tools while minimising their risks. If students are more confident in managing the ethical use of genAI, they are likely to perceive these tools as more effective in supporting their academic work. This finding aligns with previous studies that have emphasised the need for clear guidelines and training on how to use AI tools responsibly [2,39,63].
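As a purely illustrative aside, a correlation of the kind reported above can be computed with SciPy. The sketch below uses synthetic 1–5 Likert-style scores standing in for the two survey subscales; none of these numbers come from the study’s data, and the variable names are invented for the example:

```python
# Illustrative sketch only: synthetic Likert-style scores, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=42)
n = 335  # matches the study's sample size for scale only

# Confidence in handling plagiarism/integrity issues (1-5 ratings)
confidence = rng.integers(1, 6, size=n)
# Perceived effectiveness loosely tracks confidence, plus noise, clipped to 1-5
effectiveness = np.clip(confidence + rng.integers(-2, 3, size=n), 1, 5)

# pearsonr returns both the coefficient and its two-sided p-value,
# which is how an r value can be reported alongside a significance level
r, p = pearsonr(confidence, effectiveness)
print(f"r = {r:.3f}, p = {p:.4g}")
```

Because `pearsonr` reports the p-value directly, a coefficient such as r = 0.313 can be stated together with p < 0.01 without a separate significance test.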

5.5. Ethics, Accuracy, and Limitations of AI in Educational Contexts

The qualitative data revealed diverse opinions on the ethical and responsible use of ChatGPT. While some students appreciate the tool’s value, others express significant concerns about its impact on academic integrity. Comments such as “ChatGPT causes issues with copyright, let alone academic integrity” and “It should be discouraged by the university for the most part” reflect the tension between the perceived benefits and ethical dilemmas of using AI tools [2,3,4]. These concerns are reiterated in the literature, where the ethical implications of genAI in education are increasingly scrutinised [50]. The need for educational institutions to develop and enforce ethical frameworks and policies governing genAI tool usage is evident. Many students reported limitations in ChatGPT’s accuracy, particularly in providing vague or incorrect answers to academic questions. This finding is consistent with existing research that questions the reliability of AI-generated content. Students emphasised the need for additional validation and understanding beyond ChatGPT’s responses, particularly for complex subjects. These limitations stress the importance of developing critical thinking skills and not over-relying on genAI tools for academic success [46,63].
Promoting awareness about the capabilities and limitations of genAI tools, alongside fostering critical thinking skills, can help students use these tools as aids rather than substitutes for their intellectual efforts.

5.6. Insights on the Impact on Education and Clarifying Guidelines for Ethical AI Use

This study reveals that ChatGPT has a profound impact on students’ learning experiences, with many students using it to supplement their understanding of lecture material, generate ideas, and start assignments. However, as the qualitative data indicate, students’ reliance on generative AI (genAI) tools can introduce new challenges to traditional educational practices. These challenges require thoughtful consideration, particularly in computer science education. The integration of genAI tools can enhance learning while also raising ethical and practical concerns, such as those related to usability, trust, and plagiarism.

5.6.1. Rethinking Assessment Methods

The qualitative data suggest that the traditional approaches to assessment may not be fully equipped to address the challenges posed by genAI tools. Several students expressed concerns that current assessment methods do not adequately account for the capabilities of tools like ChatGPT, potentially leading to unfair advantages or undermining the development of critical thinking skills. For instance, one student noted that while ChatGPT is useful for understanding concepts, it could potentially enable students to bypass deeper engagement with the material if not used appropriately. This concern highlights the necessity of re-evaluating how assessments are designed to ensure they test students’ true understanding and ability to apply knowledge rather than their ability to utilise genAI tools effectively. Potential solutions include incorporating more formative assessments focusing on the learning process rather than just the final product. Oral examinations, practical projects, and collaborative tasks may also offer more reliable measures of student competence in an era where genAI can assist in producing polished written assignments or code. Furthermore, assessments could be designed to require higher-order thinking skills, such as analysis, synthesis, and evaluation, which are less likely to be fully addressed by genAI tools.
This aligns with research emphasising the importance of designing assessments that test cognitive processes beyond the capabilities of genAI tools [15,67,68,69,70]. For example, students could be specifically asked to use AI-powered coding assistants for initial code generation but also tasked with debugging and optimising the code manually, ensuring deeper engagement with problem-solving processes. These assignments would not only test technical proficiency but would also encourage students to balance AI assistance with independent learning.
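One hypothetical shape for the kind of assignment described above (the function names, the task, and the deliberate bug are all invented for illustration) pairs an AI-drafted helper containing an edge-case flaw with instructor-provided checks that the student’s debugged version must pass:

```python
# Hypothetical assignment sketch: the "AI draft" contains a deliberate
# edge-case bug; the student must submit a revised version that passes
# the instructor's checks.

def ai_draft_average(xs):
    """AI-drafted helper: raises ZeroDivisionError on an empty list."""
    return sum(xs) / len(xs)

def student_fixed_average(xs):
    """Student-revised version: the empty-list case is handled explicitly."""
    if not xs:
        raise ValueError("average of an empty sequence is undefined")
    return sum(xs) / len(xs)

# Instructor-provided checks the revised function must satisfy
assert student_fixed_average([2, 4, 6]) == 4.0
try:
    student_fixed_average([])
except ValueError:
    print("empty-list edge case handled")
```

The point of such a design is that the AI-generated draft is only a starting point: credit attaches to identifying the flaw, justifying the fix, and demonstrating it against the test suite.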

5.6.2. Addressing Usability, Trust, and Ethical Dilemmas

The growing integration of genAI tools into education has also introduced concerns about usability and trust. As found in this study, some students rely on these tools for summarising or simplifying information but often doubt their reliability for more complex or nuanced academic tasks. As highlighted in previous studies, this uncertainty may stem from genAI tools occasionally providing inaccurate or incomplete information [7,71,72]. Ensuring trust in these tools requires educators to provide clear guidance on when and how to use them responsibly. One practical approach is to incorporate AI literacy workshops into the curriculum, where students critically evaluate outputs from tools like ChatGPT, assess the accuracy and bias of the generated information, and discuss ethical usage in academic contexts.

5.6.3. The Need for Clear Guidelines and Ethical Frameworks

The data also highlight a critical need for clear communication and guidelines from universities regarding the appropriate use of genAI tools like ChatGPT [63]. Many students reported uncertainty about when and how they could use these tools without violating academic integrity policies. This lack of clarity puts students at risk of unintentional misconduct and diminishes the educational value of genAI tools by leaving them underutilised or misapplied. To address these challenges, universities should develop and enforce comprehensive ethical frameworks that clearly define the acceptable uses of genAI tools in academic work. These guidelines should cover aspects such as the distinction between acceptable assistance and plagiarism, the proper citation of AI-generated content, and the ethical implications of relying on genAI for learning and assessment. They should also clarify the boundaries of proper usage, ensuring that students understand when genAI can enhance their learning versus when it may detract from developing independent problem-solving skills [73,74]. For example, to effectively guide students, educators could incorporate AI literacy workshops that focus on assessing AI-generated content for accuracy, bias, and ethical use. These workshops would provide hands-on activities where students evaluate outputs from tools like ChatGPT, identifying both the benefits and the limitations in academic contexts.
Moreover, these guidelines should be regularly updated to keep pace with the rapid advancements in generative AI technology and its growing integration into educational practices. For computer science educators, effectively integrating genAI tools requires a balanced approach. Beyond merely recognising the potential of AI, educators should actively develop strategies for incorporating these tools into their teaching. For example, the key considerations for educator-focused guidance could include the following:
  • Establish clear expectations for how students use AI tools in their coursework, helping them differentiate between legitimate use and over-reliance. For example, setting specific tasks where AI use is encouraged, followed by manual refinement, can help students balance AI assistance with independent learning.
  • Rethink assessment structures to ensure they measure deep learning. Incorporating oral exams, live coding sessions, and collaborative projects can mitigate the risk of students relying too heavily on AI for polished outputs. Focus on tasks requiring human judgment and creativity, where AI currently underperforms.
  • Include training programs for students to develop critical skills in evaluating the output of AI tools. Workshops can provide hands-on activities for assessing bias, accuracy, and ethical considerations. Educators should also train students to understand the limitations and biases of AI-generated content, equipping them with the skills to responsibly navigate the evolving digital landscape.
  • Ensure that students understand the ethical dimensions of AI use in academia. By integrating discussions on the ethical implications of using AI into the curriculum, educators can help students navigate the fine line between assistance and academic dishonesty.

5.6.4. Supporting Digital Literacy and Critical Thinking

In addition to establishing guidelines and preparing students well to navigate the evolving landscape of genAI-enhanced education, universities should prioritise promoting digital literacy and critical thinking skills. As genAI tools become more prevalent in education, students must be equipped with the technical know-how to use these tools and the critical abilities to evaluate AI-generated content’s accuracy, relevance, and ethical implications. Educational programs should include training on using genAI tools responsibly, critically assessing the generated information, and balancing genAI assistance with independent learning and original thought. Furthermore, educators must be adequately trained and supported to integrate genAI tools into their teaching practices effectively. This includes understanding the limitations and potential biases of ChatGPT or other genAI tools, designing assessments that account for genAI capabilities, and guiding students in ethical genAI use.
In summary, while ChatGPT offers significant opportunities to enhance learning, it also poses challenges that require thoughtful consideration and proactive management. Rethinking assessment methods, establishing clear guidelines, and fostering digital literacy can help universities ensure that generative AI tools are used to complement and enrich education rather than detract from it. This aligns with the findings of [50], which emphasised the necessity of developing and enforcing ethical frameworks and policies governing AI tool usage in academia [15,16].

6. Conclusions

The findings of this study contribute to the ongoing discourse on the role of AI in education, particularly in the context of computer science. They highlight the dual nature of ChatGPT as both a beneficial educational tool and a potential challenge to academic integrity. While ChatGPT can enhance learning by providing immediate assistance and improving study efficiency, it also raises concerns regarding plagiarism, trust in AI-generated content, and varying levels of student confidence in using these tools responsibly.
However, several limitations of this study should be acknowledged. First, the research relied on self-reported data, which may be subject to biases such as social desirability or inaccurate recall. Additionally, the study focused on a specific demographic—computer science students—which may limit the generalisability of the findings to other disciplines. Finally, the exploratory nature of this research means that the survey instrument was not validated, and further studies are needed to refine the measurement tools and replicate the findings in different contexts.
Given these limitations, the conclusions drawn from this study should be interpreted with caution, yet they provide valuable insights for educators. To better integrate genAI tools like ChatGPT into the educational landscape, computer science educators should focus on developing clear guidelines and ethical frameworks that define the acceptable use of these technologies in academic work. This includes providing explicit instructions on how to avoid plagiarism and encouraging the responsible use of genAI tools as supplementary aids rather than primary sources of information. Furthermore, to effectively harness the potential of genAI tools, educators should prioritise promoting digital literacy and critical thinking skills among students. This involves training students in the technical aspects of using AI tools and evaluating the credibility and reliability of AI-generated content. Educators could then empower students to use these tools effectively while maintaining academic standards and integrity.
Finally, addressing the gender differences observed in the study could lead to more inclusive and effective genAI tool adoption strategies. Tailored educational interventions considering these differences can help bridge the confidence gap and ensure that all students, regardless of gender, are equally equipped to engage with genAI technologies in their learning. While ChatGPT offers significant opportunities to enhance the teaching and learning process, its integration into educational practices must be carefully managed.

Unique Contribution and Future Directions

This study provides a unique perspective by focusing specifically on computer science students, a group that is both familiar with technology and likely to be early adopters of genAI tools like ChatGPT. Unlike previous studies, which generalised findings across multiple disciplines, this research highlights the specific challenges and opportunities within computer science education. Future research could build on these findings by exploring how genAI tools are used in other STEM fields or by developing cross-disciplinary approaches that integrate insights from cognitive science and ethics. Furthermore, a comparative analysis of ChatGPT against other genAI tools in educational settings could offer a more nuanced understanding of its relative advantages and limitations, further guiding educators in selecting the most effective tools for their students. Additionally, while this study reports strong internal consistency for the survey subscales using Cronbach’s alpha, it did not undertake confirmatory factor analysis (CFA) to validate the factor structure of the survey instrument. Future research should conduct CFA to ensure that the subscales accurately represent distinct and meaningful constructs, thus providing more robust evidence for the scale’s validity.
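As a brief methodological aside, the internal-consistency statistic mentioned above is straightforward to compute from an item-response matrix. The sketch below implements the standard Cronbach’s alpha formula on an invented score matrix, not the study’s survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented example: three 5-point Likert items answered by six respondents
scores = np.array([[4, 5, 4],
                   [3, 3, 4],
                   [5, 5, 5],
                   [2, 2, 3],
                   [4, 4, 4],
                   [3, 2, 3]])
print(round(cronbach_alpha(scores), 3))  # ≈ 0.929 for this synthetic matrix
```

Alpha captures only internal consistency; as noted above, it does not validate the factor structure, which is why CFA remains a separate step for future work.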

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the College of Science and Engineering Ethics Committee of the University of Glasgow (protocol code 300230028, approved on 18 October 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Chen, L.; Chen, P.; Lin, Z. Artificial Intelligence in Education: A Review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
  2. Jarrah, A.M.; Wardat, Y.; Fidalgo, P. Using ChatGPT in academic writing is (not) a form of plagiarism: What does the literature say? Online J. Commun. Media Technol. 2023, 13, e202346. [Google Scholar] [CrossRef]
  3. Gorichanaz, T. Accused: How students respond to allegations of using ChatGPT on assessments. Learn. Res. Pract. 2023, 9, 183–196. [Google Scholar] [CrossRef]
  4. Sullivan, M.; Kelly, A.; McLaughlan, P. ChatGPT in higher education: Considerations for academic integrity and student learning. J. Appl. Learn. Teach. 2023, 6, 31–40. [Google Scholar] [CrossRef]
  5. Roberts, T.S. (Ed.) Student Plagiarism in an Online World: Problems and Solutions; IGI Global: Hershey, PA, USA, 2008. [Google Scholar] [CrossRef]
  6. Dawson, M.M.; Overfield, J.A. Plagiarism: Do Students Know What It Is? Biosci. Educ. 2006, 8, 1–15. [Google Scholar] [CrossRef]
  7. Lo, C.K. What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature. Educ. Sci. 2023, 13, 410. [Google Scholar] [CrossRef]
  8. Farazouli, A.; Cerratto-Pargman, T.; Bolander-Laksov, K.; McGrath, C. Hello GPT! Goodbye Home Examination? An Exploratory Study of AI Chatbots’ Impact on University Teachers’ Assessment Practices. Assess. Eval. High. Educ. 2023, 49, 363–375. [Google Scholar] [CrossRef]
  9. De Maio, C.; Dixon, K.; Yeo, S. Academic Staff Responses to Student Plagiarism in Universities: A Literature Review from 1990 to 2019. Issues Educ. Res. 2019, 29, 1131–1142. Available online: http://www.iier.org.au/iier29/demaio.pdf (accessed on 26 September 2024).
  10. Xie, Y.; Wu, S.; Chakravarty, S. AI meets AI: Artificial Intelligence and Academic Integrity—A Survey on Mitigating AI-Assisted Cheating in Computing Education. In Proceedings of the 24th Annual Conference on Information Technology Education (SIGITE ’23), Marietta, GA, USA, 11–14 October 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 79–83. [Google Scholar] [CrossRef]
  11. Cojocariu, V.M.; Mareş, G. Academic Integrity in the Technology-Driven Education Era. In Ethical Use of Information Technology in Higher Education; Mâță, L., Ed.; EAI/Springer Innovations in Communication and Computing; Springer: Singapore, 2022. [Google Scholar] [CrossRef]
  12. McHaney, R.; Cronan, T.P.; Douglas, D.E. Academic Integrity: Information Systems Education Perspective. J. Inf. Syst. Educ. 2016, 27, 153–158. Available online: https://aisel.aisnet.org/jise/vol27/iss3/1/ (accessed on 26 September 2024).
  13. Fenu, G.; Galici, R.; Marras, M.; Reforgiato, D. Exploring Student Interactions with AI in Programming Training. In Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization (UMAP Adjunct ’24), Cagliari, Italy, 1–4 July 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 555–560. [Google Scholar] [CrossRef]
  14. Lim, J.W.Z.; Thing, V.L.L. Building confidence in a world of eroding trust. Digit. Gov. Res. Pract. 2024, 5, 1–17. [Google Scholar] [CrossRef]
  15. Smolansky, A.; Cram, A.; Raduescu, C.; Zeivots, S.; Huber, E.; Kizilcec, R.F. Educator and Student Perspectives on the Impact of Generative AI on Assessments in Higher Education. In Proceedings of the Tenth ACM Conference on Learning @ Scale (L@S ’23), Copenhagen, Denmark, 20–22 July 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 378–382. [Google Scholar] [CrossRef]
  16. Xu, L. The Dilemma and Countermeasures of AI in Educational Application. In Proceedings of the 2020 4th International Conference on Computer Science and Artificial Intelligence (CSAI ’20), Zhuhai, China, 11–13 December 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 289–294. [Google Scholar] [CrossRef]
  17. Dorton, S. Supradyadic Trust in Artificial Intelligence. Artif. Intell. Soc. Comput. 2022, 28, 92–100. [Google Scholar] [CrossRef]
  18. Kim, T.; Song, H. Communicating the Limitations of AI: The Effect of Message Framing and Ownership on Trust in Artificial Intelligence. Int. J. Hum. Comput. Interact. 2022, 39, 790–800. [Google Scholar] [CrossRef]
  19. Beauxis-Aussalet, E.; Behrish, M.; Borgo, R.; Chau, D.H.; Collins, C.; Ebert, D.; El-Assady, M.; Endert, A.; Keim, D.A.; Kohlhammer, J.; et al. The Role of Interactive Visualization in Fostering Trust in AI. IEEE Comput. Graph. Appl. 2021, 41, 7–12. [Google Scholar] [CrossRef]
  20. Amoozadeh, M.; Daniels, D.; Nam, D.; Kumar, A.; Chen, S.; Hilton, M.; Ragavan, S.S.; Alipour, M.A. Trust in Generative AI among Students: An exploratory study. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 24), Portland, OR, USA, 20–23 March 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 67–73. [Google Scholar] [CrossRef]
  21. Scholl, A.; Schiffner, D.; Kiesler, N. Analysing Chat Protocols of Novice Programmers Solving Introductory Programming Tasks with ChatGPT. arXiv 2024, arXiv:2405.19132. [Google Scholar] [CrossRef]
  22. McIntire, A.; Calvert, I.; Ashcraft, J. Pressure to Plagiarize and the Choice to Cheat: Toward a Pragmatic Reframing of the Ethics of Academic Integrity. Educ. Sci. 2024, 14, 244. [Google Scholar] [CrossRef]
  23. Maryon, T.; Dubre, V.; Elliott, K.; Escareno, J.; Fagan, M.H.; Standridge, E.; Lieneck, C. COVID-19 Academic Integrity Violations and Trends: A Rapid Review. Educ. Sci. 2022, 12, 901. [Google Scholar] [CrossRef]
  24. Šprajc, P.; Urh, M.; Jerebic, J.; Trivan, D.; Jereb, E. Reasons for Plagiarism in Higher Education. Organizacija 2017, 50, 33–45. [Google Scholar] [CrossRef]
  25. Kampa, R.K.; Padhan, D.K.; Karna, N.; Gouda, J. Identifying the Factors Influencing Plagiarism in Higher Education: An Evidence-Based Review of the Literature. Account. Res. 2024, 1–16. [Google Scholar] [CrossRef]
  26. Macfarlane, B.; Zhang, J.; Pun, A. Academic Integrity: A Review of the Literature. Stud. High. Educ. 2012, 39, 339–358. [Google Scholar] [CrossRef]
  27. Bretag, T. Chapter 1: Introduction to A Research Agenda for Academic Integrity: Emerging Issues in Academic Integrity Research. In A Research Agenda for Academic Integrity; Edward Elgar Publishing: Cheltenham, UK, 2020. [Google Scholar] [CrossRef]
  28. Yu, C.; Yan, J.; Cai, N. ChatGPT in higher education: Factors influencing ChatGPT user satisfaction and continued use intention. Front. Educ. 2024, 9, 1354929. [Google Scholar] [CrossRef]
  29. Zaineldeen, S.; Hongbo, L.; Koffi, A.L.; Hassan, B.M.A. Technology Acceptance Model Concepts, Contribution, Limitation, and Adoption in Education. Univ. J. Educ. Res. 2020, 8, 5061–5071. [Google Scholar] [CrossRef]
  30. Granić, A.; Marangunić, N. Technology Acceptance Model in Educational Context: A Systematic Literature Review. Br. J. Educ. Technol. 2019, 50, 2572–2593. [Google Scholar] [CrossRef]
  31. Saif, N.; Khan, S.U.; Shaheen, I.; ALotaibi, F.A.; Alnfiai, M.M.; Arif, M. Chat-GPT: Validating Technology Acceptance Model (TAM) in Education Sector via Ubiquitous Learning Mechanism. Comput. Hum. Behav. 2024, 154, 108097. [Google Scholar] [CrossRef]
  32. Park, N.; Lee, K.M.; Cheong, P.H. University Instructors’ Acceptance of Electronic Courseware: An Application of the Technology Acceptance Model. J. Comput.-Mediat. Commun. 2007, 13, 163–186. [Google Scholar] [CrossRef]
  33. Qureshi, B. ChatGPT in computer science curriculum assessment: An analysis of its successes and shortcomings. In Proceedings of the 2023 9th International Conference on e-Society, e-Learning and e-Technologies (ICSLT ’23), Portsmouth, UK, 9–11 June 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 7–13. [Google Scholar] [CrossRef]
  34. Malinka, K.; Peresíni, M.; Firc, A.; Hujnák, O.; Janus, F. On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree? In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (ITiCSE 2023), Turku, Finland, 7–12 July 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 47–53. [Google Scholar] [CrossRef]
  35. Rajala, J.; Hukkanen, J.; Hartikainen, M.; Niemelä, P. “Call me Kiran”—ChatGPT as a Tutoring Chatbot in a Computer Science Course. In Proceedings of the 26th International Academic Mindtrek Conference (Mindtrek ’23), Tampere, Finland, 3–6 October 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 83–94. [Google Scholar] [CrossRef]
  36. Hassani, H.; Silva, E.S. The Role of ChatGPT in Data Science: How AI-Assisted Conversational Interfaces Are Revolutionising the Field. Big Data Cogn. Comput. 2023, 7, 62. [Google Scholar] [CrossRef]
  37. Budhiraja, R.; Joshi, I.; Challa, J.S.; Akolekar, H.D.; Kumar, D. “It’s not like Jarvis, but it’s pretty close!”—Examining ChatGPT’s Usage among Undergraduate Students in Computer Science. In Proceedings of the 26th Australasian Computing Education Conference (ACE ’24), Sydney, Australia, 29 January–2 February 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 124–133. [Google Scholar] [CrossRef]
  38. Kim, Y.; Lee, J.; Kim, S.; Park, J.; Kim, J. Understanding Users’ Dissatisfaction with ChatGPT Responses: Types, Resolving Tactics, and the Effect of Knowledge Level. In Proceedings of the 29th International Conference on Intelligent User Interfaces (IUI ’24), Greenville, SC, USA, 18–21 March 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 385–404. [Google Scholar] [CrossRef]
  39. Singh, H.; Tayarani-Najaran, M.-H.; Yaqoob, M. Exploring Computer Science Students’ Perception of ChatGPT in Higher Education: A Descriptive and Correlation Study. Educ. Sci. 2023, 13, 924. [Google Scholar] [CrossRef]
  40. Joshi, I.; Budhiraja, R.; Dev, H.; Kadia, J.; Ataullah, M.O.; Mitra, S.; Akolekar, H.D.; Kumar, D. ChatGPT in the classroom: An analysis of its strengths and weaknesses for solving undergraduate computer science questions. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2024), Portland, OR, USA, 20–23 March 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 625–631. [Google Scholar] [CrossRef]
  41. Yılmaz, R.; Yilmaz, F.G.K. The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy and motivation. Comput. Educ. Artif. Intell. 2023, 4, 100147. [Google Scholar] [CrossRef]
  42. Ray, P.P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber-Phys. Syst. 2023, 3, 121–154. [Google Scholar] [CrossRef]
  43. Gill, S.S.; Xu, M.; Patros, P.; Wu, H.; Kaur, R.; Kaur, K.; Fuller, S.; Singh, M.; Arora, P.; Parlikad, A.K.; et al. Transformative effects of ChatGPT on modern education: Emerging Era of AI Chatbots. Internet Things Cyber-Phys. Syst. 2024, 4, 19–23. [Google Scholar] [CrossRef]
  44. Hariri, W. Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing. arXiv 2023, arXiv:2304.02017. [Google Scholar] [CrossRef]
  45. Acosta-Enriquez, B.G.; Arbulú Ballesteros, M.A.; Huamaní Jordan, O.; Roca, C.L.; Tirado, K.S. Analysis of college students’ attitudes toward the use of ChatGPT in their academic activities: Effect of intent to use, verification of information and responsible use. BMC Psych. 2024, 12, 255. [Google Scholar] [CrossRef] [PubMed]
  46. Vargas-Murillo, A.R.; Pari-Bedoya, I.N.M.A.; Guevara-Soto, F.J. The Ethics of AI Assisted Learning: A Systematic Literature Review on the Impacts of ChatGPT Usage in Education. In Proceedings of the 2023 8th International Conference on Distance Education and Learning (ICDEL ’23), Beijing, China, 9–12 June 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 8–13. [Google Scholar] [CrossRef]
  47. Long, C.; Niedbala, E.; Kelley, A. Developing Human Trust in Automated Systems: The Central Role of Personal Control. In Academy of Management Proceedings; Academy of Management: Briarcliff Manor, NY, USA, 2021. [Google Scholar] [CrossRef]
  48. Lacity, M.C.; Schuetz, S.W.; Kuai, L.; Steelman, Z.R. IT’s a Matter of Trust: Literature Reviews and Analyses of Human Trust in Information Technology. J. Inf. Technol. 2024, 2683962231226397. [Google Scholar] [CrossRef]
  49. Wang, R.; Cheng, R.; Ford, D.; Zimmermann, T. Investigating and Designing for Trust in AI-powered Code Generation Tools. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24), Rio de Janeiro, Brazil, 3–6 June 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1475–1493. [Google Scholar] [CrossRef]
  50. Griesbeck, A.; Zrenner, J.; Moreira, A.; Au-Yong-Oliveira, M. AI in Higher Education: Assessing Acceptance, Learning Enhancement, and Ethical Considerations Among University Students. In Good Practices and New Perspectives in Information Systems and Technologies; Rocha, Á., Adeli, H., Dzemyda, G., Moreira, F., Poniszewska-Marańda, A., Eds.; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2024; Volume 987, pp. 214–227. [Google Scholar] [CrossRef]
  51. Bakas, N.P.; Papadaki, M.; Vagianou, E.; Christou, I.; Chatzichristofis, S.A. Integrating LLMs in Higher Education, Through Interactive Problem Solving and Tutoring: Algorithmic Approach and Use Cases. In Information Systems. EMCIS 2023; Papadaki, M., Themistocleous, M., Al Marri, K., Al Zarouni, M., Eds.; Lecture Notes in Business Information Processing; Springer: Cham, Switzerland, 2024; Volume 501, pp. 291–307. [Google Scholar] [CrossRef]
  52. Kosar, T.; Ostojić, D.; Liu, Y.D.; Mernik, M. Computer Science Education in ChatGPT Era: Experiences from an Experiment in a Programming Course for Novice Programmers. Mathematics 2024, 12, 629. [Google Scholar] [CrossRef]
  53. Schoonenboom, J.; Johnson, R.B. How to Construct a Mixed Methods Research Design. Kölner Z. Soziol. Sozialpsychol. 2017, 69 (Suppl. 2), 107–131. [Google Scholar] [CrossRef] [PubMed]
  54. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
Figure 1. Histograms (confidence in finding solutions to plagiarism issues caused by ChatGPT, perceived effectiveness of ChatGPT, and uses of ChatGPT).
Figure 2. Frequency of ChatGPT use.
Table 1. Descriptive analysis of scales (mean, standard deviation and number).

Scale | M | SD | N
Confidence in the ability to find solutions to the issues of plagiarism and academic integrity when using ChatGPT | 56.76 | 30.09 | 284
Effectiveness of ChatGPT | 4.11 | 1.31 | 333
Uses of ChatGPT | 3.62 | 1.29 | 333
Table 2. Descriptive analysis of survey items, including level of agreement equal to or above five.

About ChatGPT | N | M | Md | % equal or above 5
1. The university should allow the use of ChatGPT because it can help improve my study efficiency by assisting with research, helping to draft and edit written work, and providing explanations for complex topics. | 331 | 5.31 | 6 | 70.3%
2. I think it is FAIR to use ChatGPT for my studies. | 323 | 5.2 | 5 | 66.6%
3. I trust the information provided by ChatGPT. | 331 | 3.2 | 3 | 14%
4. I have paid a subscription to ChatGPT. | 330 | 2.3 | 1 | 19%

Uses of ChatGPT scale items | N | M | Md | % equal or above 5
U1. I know how to use ChatGPT. | 331 | 5.37 | 6 | 73.3%
U2. I have used ChatGPT to improve the quality of my essays/reports or dissertation writing. | 331 | 3.48 | 3 | 34.5%
U3. I have used ChatGPT to improve or debug my code. | 332 | 4.22 | 4 | 49.4%
U4. I have used ChatGPT to help me brainstorm assignment ideas. | 332 | 3.94 | 4 | 46.4%
U5. I believe ChatGPT understands the context of my questions well. | 330 | 3.9 | 4 | 34.6%
U6. I have used ChatGPT for language translation. | 328 | 3.02 | 2 | 30.2%
U7. I have used ChatGPT to create content for social media. | 328 | 2.2 | 1 | 14.7%
U8. Before ChatGPT, the quality of my writing was poor. | 328 | 2.52 | 2 | 12.2%
U9. I often check my answers with ChatGPT before submission. | 331 | 3.33 | 3 | 28.2%
U10. With ChatGPT, I am more confident in working on complex assignments. | 329 | 3.92 | 4 | 41.4%
U11. With ChatGPT, I always complete my assignments before the deadline. | 330 | 3.35 | 3.5 | 26.7%
U12. I have used ChatGPT for other purposes. | 330 | 4.6 | 5 | 57.5%

Effectiveness of ChatGPT subscale items | N | M | Md | % equal or above 5
E1. ChatGPT is effective in helping the student complete their dissertation project, essays or reports. | 330 | 4.4 | 4 | 48.4%
E2. ChatGPT is effective in helping the student complete their other assignments. | 331 | 4.5 | 5 | 52.6%
E3. ChatGPT is effective in helping the student understand complex topics. | 331 | 4.98 | 5 | 64%
E4. The assignment/assessment marking criteria document is very useful for getting accurate, specific answers for an assignment from ChatGPT. | 321 | 3.56 | 4 | 22.7%
E5. Using the assignment/assessment marking criteria document with ChatGPT helps me understand what is required in my assignment. | 321 | 3.92 | 4 | 36.2%
E6. Despite using ChatGPT to assist me, I feel the assignment/assessment marking criteria document is still vague. | 320 | 3.81 | 4 | 29.1%
