Article
Peer-Review Record

Exploring the Ethical Implications of Using Generative AI Tools in Higher Education

Informatics 2025, 12(2), 36; https://doi.org/10.3390/informatics12020036
by Elena Đerić, Domagoj Frank and Dijana Vuković
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 2 February 2025 / Revised: 18 March 2025 / Accepted: 2 April 2025 / Published: 7 April 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The study analyzes how different segments of the academic community (undergraduate and graduate students and teaching and research staff) perceive the ethical implications of using generative AI (GenAI) in higher education. In particular, it investigates the relationship between academic role, gender and duration of use of AI-based tools (ChatGPT, Microsoft Copilot, etc.) with respect to awareness of copyright, transparency, responsibility and academic integrity.

2. Originality and relevance

The topic is highly pertinent and novel, given the exponential growth of generative AI tools in the university environment. The work fills a gap in the literature: while there are studies on the adoption of AI in education, few focus on empirically comparing ethical perceptions among different academic subgroups. The investigation offers a useful practical perspective for the design of institutional policies because it focuses on factors, such as experience of use or gender, that can influence opinions on responsibility, transparency and authorship.

3. Contribution to the field

This study adds value by delving into the inter- and intra-group differences in the university ecosystem with regard to the ethical use of generative AI. Its quantitative approach, with a large sample size, allows for the extraction of solid and action-oriented conclusions, for example, for the development of training programs and guides for the responsible use of AI. Likewise, the allusion to "publish or perish" pressure as a potential factor influencing ethical decision-making enriches the academic debate.

4. Methodological rigor

  • Study design and sample: A questionnaire was administered to students (undergraduate, postgraduate, doctorate) and to teaching and research staff from a single institution. The total sample (more than 800 people) is substantial, although the group of doctoral students is small, which makes it difficult to generalize conclusions for that sub-segment. It would be valuable to include some additional comment that qualifies how this limitation affects the representativeness of the study.
  • Data collection and analysis: The article describes in sufficient detail the questionnaire items that measure ethical awareness, personal responsibility and the potentially negative consequences of using AI. The application of statistical tests (chi-square, correlations, reliability analysis) is appropriate. However, to reinforce confidence in the validity of the sample it might be useful to mention response rates or any exclusion criteria.
  • Ethical considerations: Given that the study examines perceptions of ethics, it would be interesting to know if the questionnaire was piloted or previously validated. This would serve to reinforce the scale's robustness for measuring concepts such as authorship, transparency or academic integrity.
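The chi-square test of independence mentioned above checks whether response distributions differ across academic subgroups. As a minimal illustration of the mechanics (the counts below are hypothetical and are not the study's data), the statistic can be computed directly from a role-by-response contingency table:

```python
# Hypothetical contingency table: rows are academic roles, columns are
# agreement levels on an ethics item (illustrative counts only).
observed = [
    [120, 60, 20],  # undergraduate
    [80,  50, 10],  # graduate
    [90,  30,  5],  # teaching/research staff
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected[i][j] = row_totals[i] * col_totals[j] / grand_total.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand_total) ** 2
    / (row_totals[i] * col_totals[j] / grand_total)
    for i in range(len(observed))
    for j in range(len(observed[0]))
)

# Degrees of freedom: (rows - 1) * (columns - 1).
dof = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi2 = {chi2:.2f}, dof = {dof}")
```

Comparing the statistic against the critical value for `dof` degrees of freedom (or its p-value) indicates whether role and response are plausibly independent; in practice a library routine such as `scipy.stats.chi2_contingency` performs this same computation.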

5. Consistency of the conclusions with the results

The conclusions are coherent. The finding that teaching and research staff have a greater ethical sensitivity, compared to undergraduate or postgraduate students, is supported by the statistical results. Acknowledging limitations (single institution, differences in sample size between subgroups) is also adequate. Furthermore, the suggestions regarding the need to replicate the study in other contexts and to strengthen training in academic integrity are in line with the evidence obtained.

6. Relevance of references

The bibliographic citations used appear current and adequate, particularly those that address the adoption of AI and discussions on ethics in educational settings. The inclusion of additional references related to the “publish or perish” culture and its possible impact on ethical behavior could be considered, to support the idea that the pressure to publish influences the use of generative AI.

7. Additional comments on tables and figures

  • The tables showing the results of the Chi-square tests, correlations and reliability analysis are clear and useful.
  • If there were additional figures showing, for example, the distribution of perceptions by subgroups, it might be worth considering adding more detailed figure footnotes describing the main variable and the range of values.

Additional observations

  • It would be enriching to expand, at least with a brief paragraph, the discussion on how the pressure to “publish or perish” can lead to greater use of AI to write drafts or relax standards of review and citation, as mentioned in passing in the text.
  • Specifically mentioning the possibility of replicating the study in universities with different profiles (e.g., greater emphasis on research vs. greater emphasis on teaching) or in different cultural contexts would strengthen the justification for extending the comparison.

Author Response

Comment 1: I also found the introduction and review of research on ethical dimensions of AI use in higher education to be lacking in terms of scope / comprehensiveness. Please develop by fleshing out and adding review of additional relevant sources. I have provided some additional citations - which should be added - but do not feel limited to these. 

Response 1: Dear Reviewer, thank you for taking the time to review our manuscript, suggest additional sources and for your constructive feedback. We appreciate your insightful comments and suggestions for improving the clarity and comprehensiveness of our study. We agree with your comments; therefore, we have incorporated additional relevant sources, including those you kindly provided, to strengthen the theoretical foundation of our manuscript. Please see the modified sections listed below:  

AlAfnan, M. A., Samira Dishari, Marina Jovic, & Koba Lomidze. (2023). ChatGPT as an Educational Tool: Opportunities, Challenges, and Recommendations for Communication, Business Writing, and Composition Courses. Journal of Artificial Intelligence and Technology. https://doi.org/10.37965/jait.2023.0184 

  • Integrated into: Overview of Generative AI Tools Use in Higher Education and Introduction

Hosseini, M., Resnik, D. B., & Holmes, K. (2023). The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Research Ethics, 19(4), 449–465. https://doi.org/10.1177/17470161231180449

  • Integrated into:  Shifting the Focus to Ethical Implications – Copyrights and Authorship 

Guilherme, A. (2019). AI and education: the importance of teacher and student relations. AI & SOCIETY, 34(1), 47–54. https://doi.org/10.1007/s00146-017-0693-8 

  • Integrated into Overview of Generative AI Tools Use in Higher Education 

Jiang, J., Vetter, M. A., & Lucia, B. (2024). Toward a ‘More-Than-Digital’ AI Literacy: Reimagining Agency and Authorship in the Postdigital Era with ChatGPT. Postdigital Science and Education, 6(3), 922–939. https://doi.org/10.1007/s42438-024-00477-1 

  • Integrated into:  Shifting the Focus to Ethical Implications - Copyrights and Authorship 

Roe, J., Renandya, W. A., & Jacobs, G. M. (2023). A Review of AI-Powered Writing Tools and Their Implications for Academic Integrity in the Language Classroom. Journal of English and Applied Linguistics, 2(1). https://doi.org/10.59588/2961-3094.1035 

  • Integrated into: Shifting the Focus to Ethical Implications – Academic integrity 

Sharples, M. (2023). Towards social generative AI for education: theory, practices and ethics. Learning: Research and Practice, 9(2), 159–167. https://doi.org/10.1080/23735082.2023.2261131 

  • Integrated into: Identifying Challenges and Concerns in AIED 

Vetter, M. A., Lucia, B., Jiang, J., & Othman, M. (2024). Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT. Computers and Composition, 71, 102831. https://doi.org/10.1016/j.compcom.2024.102831 

  • Integrated into: Shifting the Focus to Ethical Implications – Academic integrity 

 

Comment 2: I do think there could be some improvements in terms of thoroughly detailing limitations of the study. For example, by including more information about the sample size as well as the timing of the study - the percentage of teachers/researchers in comparison to the overall sample should be noted. 

Response 2: In section three, “Materials and Methods”, we integrated details about the period when the survey was conducted. We also included, in the fifth section, “Limitations”, information on the overall number of active teachers and students at University North compared to the percentage that participated in our survey.

Reviewer 2 Report

Comments and Suggestions for Authors

Thank you for the opportunity to review this article manuscript, which provides good insights into the ethical implications of GenAI tool usage in higher education. In terms of scientific rigor, the design/methods are well-described, and the results clearly presented. I do think there could be some improvements in terms of thoroughly detailing limitations of the study. For example, by including more information about the sample size as well as the timing of the study - the percentage of teachers/researchers in comparison to the overall sample should be noted. 

I also found the introduction and review of research on ethical dimensions of AI use in higher education to be lacking in terms of scope / comprehensiveness. Please develop by fleshing out and adding review of additional relevant sources. I have provided some additional citations - which should be added - but do not feel limited to these. 

AlAfnan, M. A., Samira Dishari, Marina Jovic, & Koba Lomidze. (2023). ChatGPT as an Educational Tool: Opportunities, Challenges, and Recommendations for Communication, Business Writing, and Composition Courses. Journal of Artificial Intelligence and Technology. https://doi.org/10.37965/jait.2023.0184

Hosseini, M., Resnik, D. B., & Holmes, K. (2023). The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Research Ethics, 19(4), 449–465. https://doi.org/10.1177/17470161231180449

Guilherme, A. (2019). AI and education: the importance of teacher and student relations. AI & SOCIETY, 34(1), 47–54. https://doi.org/10.1007/s00146-017-0693-8

Jiang, J., Vetter, M. A., & Lucia, B. (2024). Toward a ‘More-Than-Digital’ AI Literacy: Reimagining Agency and Authorship in the Postdigital Era with ChatGPT. Postdigital Science and Education, 6(3), 922–939. https://doi.org/10.1007/s42438-024-00477-1

Roe, J., Renandya, W. A., & Jacobs, G. M. (2023). A Review of AI-Powered Writing Tools and Their Implications for Academic Integrity in the Language Classroom. Journal of English and Applied Linguistics, 2(1). https://doi.org/10.59588/2961-3094.1035

Sharples, M. (2023). Towards social generative AI for education: theory, practices and ethics. Learning: Research and Practice, 9(2), 159–167. https://doi.org/10.1080/23735082.2023.2261131

Vetter, M. A., Lucia, B., Jiang, J., & Othman, M. (2024). Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT. Computers and Composition, 71, 102831. https://doi.org/10.1016/j.compcom.2024.102831

Author Response

Comment 1: 4. Methodological rigor: Study design and sample: A questionnaire was used aimed at students (undergraduate, postgraduate, doctorate) and teachers and researchers from a single institution. The total sample (more than 800 people) is substantial, although the group of doctoral students is small, which makes it difficult to generalize conclusions for that sub-segment. It would be valuable to include some additional comment that qualifies how this limitation affects the representativeness of the study.

Response 1: Dear Reviewer, thank you for taking the time to review our manuscript and for your constructive feedback. We agree with your feedback - this is a known limitation that we acknowledged within the fifth section “Limitations” which has been thoroughly rewritten. 

 

Comment 2: Data collection and analysis: The article describes in sufficient detail the questionnaire items that measure ethical awareness, personal responsibility and the potentially negative consequences of using AI. The application of statistical tests (chi-square, correlations, reliability analysis) is appropriate. However, to reinforce confidence in the validity of the sample it might be useful to mention response rates  or any exclusion criteria. 

Response 2: We appreciate your insightful comments and suggestions for improving the clarity and comprehensiveness of our study. We agree with your comments; therefore, we have included this information in the third section, “Materials and Methods”, as follows: Considering the overall population of students at University North, nearly 20% of active students and around 30% of active teachers participated in this survey. The Single Sign-On (SSO) feature in Google Forms was utilized to ensure respondent authenticity and eligibility, requiring credentials specific to the higher education institution (HEI). There was no need for further exclusions since participants joined with their official email addresses, and there were no responses from non-university members.

 

Comment 3: Ethical considerations: Given that the study examines perceptions of ethics, it would be interesting to know if the questionnaire was piloted or previously validated. This would serve to reinforce the scale's robustness for measuring concepts such as authorship, transparency or academic integrity.

Response 3: The questionnaire was piloted, as highlighted in the third section, "Materials and Methods": To ensure the reliability and validity of the instrument, a pilot study was conducted with a small subset of respondents, and questions were refined according to their feedback.

  

Comment 4: The inclusion of additional references related to the “publish or perish” culture and its possible impact on ethical behavior could be considered, to support the idea that the pressure to publish influences the use of generative AI. 

Response 4: The whole paragraph on the AI-enhanced "publish or perish" culture has been included in the section "Shifting the Focus to Ethical Implications – dimension: Responsibility": It is important to stress that ethical challenges and responsibility in GenAI tools use are not solely limited to students. The growing pressure on scientists to produce more publications to sustain and progress in their academic careers, a phenomenon widely known as "publish or perish" [55], has raised concerns over "AI-generated mass output rather than work that is innovative or informed by social values and priorities" [56]. Studies have found that researchers in highly competitive environments, including China [57],[58], have turned to AI to meet publication quotas, raising questions about the authenticity and depth of AI-assisted work. This practice undermines academic integrity and contributes to the proliferation of low-quality or duplicate research, making it difficult to distinguish quality work from AI-generated work. Without regulation, the overuse of AI in mass-producing papers can potentially generate multiple ethical issues, including unattributed AI authorship, plagiarism, and the dissemination of misinformation [56],[59]. Although AI can help researchers write and brainstorm, its misuse in academic environments results in a surge of poor studies, which makes it harder to distinguish between proper research and AI-generated content. Academic institutions must adopt stricter policies about using AI in research, emphasizing ethical use and maintaining scholarly standards to counteract this trend. If left unchecked, the AI-driven "publish or perish" culture could destabilize the credibility of academic discourse and erode public trust in scientific research.

 

Comment 5: Additional comments on tables and figures: If there were additional figures showing, for example, the distribution of perceptions by subgroups, it might be worth considering adding more detailed  figure footnotes describing the main variable and the range of values. 

Response 5: At this moment, our co-author, who was responsible for statistics, was unable to generate and describe additional figures. Since this item was not critical, we have focused on correcting other parts of the paper according to your suggestions. 

 

Comment 6: Additional observations: It would be enriching to expand, at least with a brief paragraph, the discussion on how the pressure to “publish or perish” can lead to greater use of AI to write drafts or relax standards of review and citation, as mentioned in passing in the text. 

Response 6: The whole paragraph on AI-enhanced “publish or perish” culture has been included in the section "Shifting the Focus to Ethical Implications – dimension: Responsibility". 

 

Comment 7: Specifically mentioning the possibility of replicating the study in universities with different profiles (e.g., greater emphasis on investigation vs. greater emphasis on teaching) or in different cultural contexts would strengthen the justification for extending the comparison." 

Response 7: This has been added to the section “Conclusion”, within the fourth paragraph focusing on the future research directions. 

Reviewer 3 Report

Comments and Suggestions for Authors

This manuscript addresses a critical and timely topic. As AI tools become more common in academic settings, understanding their ethical implications is essential. The authors make a valuable contribution by empirically investigating ethical concerns including copyright, authorship, transparency, and responsibility. Their framework provides a structured lens to examine these challenges.

However, there are several areas that need improvement. One major issue is the lack of a strong theoretical foundation for the study. While the paper identifies key ethical issues, these ideas are not well connected to existing research or theories. A related issue is the lack of clear definitions for key terms. Providing clear definitions would not only strengthen the manuscript’s conceptual rigor but also make the findings easier to interpret.

When it comes to the survey, the authors have made a good effort to cover multiple ethical dimensions, but the questions are too broad and lack context. Ethical issues related to GenAI are often highly situational. By providing more detailed scenarios or examples, the survey could capture a deeper and more nuanced understanding of participants’ ethical perceptions. This level of detail is crucial for translating the findings into actionable insights for educators and policymakers.

Another limitation of the study is the use of a single university as the source of all survey data. While the sample size is sufficient, the fact that all participants come from the same institution limits the generalizability of the findings. Institutional policies, academic culture, and even geographic context may significantly influence participants’ perceptions of GenAI and related ethical issues. 

The discussion of results also requires further development. The manuscript mentions a weak correlation between ethical awareness and the adoption of GenAI tools but does not sufficiently explore why this might be the case. This is a particularly meaningful finding that deserves more attention. Are there external factors, such as institutional policies or peer influence, that play a stronger role in adoption decisions? Or does the weak correlation suggest that ethical awareness alone is insufficient to drive behavior change? Addressing these questions would add depth to the analysis and enhance the practical relevance of the study.

Lastly, the paper could be improved by considering differences between academic disciplines. Ethical concerns with AI tools likely vary across fields. For example, STEM researchers might focus more on data accuracy and reproducibility, while those in the humanities may care more about originality and authorship. Adding this perspective would make the findings more relevant to a wider range of academic areas.

Author Response

Comment 1: This manuscript addresses a critical and timely topic. As AI tools become more common in academic settings, understanding their ethical implications is essential. The authors make a valuable contribution by empirically investigating ethical concerns including copyright, authorship, transparency, and responsibility. Their framework provides a structured lens to examine these challenges. However, there are several areas that need improvement. One major issue is the lack of a strong theoretical foundation for the study. While the paper identifies key ethical issues, these ideas are not well connected to existing research or theories. A related issue is the lack of clear definitions for key terms. Providing clear definitions would not only strengthen the manuscript’s conceptual rigor but also make the findings easier to interpret. 

Response 1: Dear Reviewer, thank you for taking the time to review our manuscript and for your constructive feedback. We appreciate your insightful comments and suggestions for improving the clarity and comprehensiveness of our study. We agree with your comments; therefore, we have enriched the theoretical foundation for the study with additional references and research – please see below: 

AlAfnan, M. A., Samira Dishari, Marina Jovic, & Koba Lomidze. (2023). ChatGPT as an Educational Tool: Opportunities, Challenges, and Recommendations for Communication, Business Writing, and Composition Courses. Journal of Artificial Intelligence and Technology. https://doi.org/10.37965/jait.2023.0184 

  • Integrated into: Overview of Generative AI Tools Use in Higher Education and Introduction

Hosseini, M., Resnik, D. B., & Holmes, K. (2023). The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Research Ethics, 19(4), 449–465. https://doi.org/10.1177/17470161231180449

  • Integrated into:  Shifting the Focus to Ethical Implications – Copyrights and Authorship 

Guilherme, A. (2019). AI and education: the importance of teacher and student relations. AI & SOCIETY, 34(1), 47–54. https://doi.org/10.1007/s00146-017-0693-8 

  • Integrated into Overview of Generative AI Tools Use in Higher Education 

Jiang, J., Vetter, M. A., & Lucia, B. (2024). Toward a ‘More-Than-Digital’ AI Literacy: Reimagining Agency and Authorship in the Postdigital Era with ChatGPT. Postdigital Science and Education, 6(3), 922–939. https://doi.org/10.1007/s42438-024-00477-1 

  • Integrated into:  Shifting the Focus to Ethical Implications - Copyrights and Authorship 

Roe, J., Renandya, W. A., & Jacobs, G. M. (2023). A Review of AI-Powered Writing Tools and Their Implications for Academic Integrity in the Language Classroom. Journal of English and Applied Linguistics, 2(1). https://doi.org/10.59588/2961-3094.1035 

  • Integrated into: Shifting the Focus to Ethical Implications – Academic integrity 

Sharples, M. (2023). Towards social generative AI for education: theory, practices and ethics. Learning: Research and Practice, 9(2), 159–167. https://doi.org/10.1080/23735082.2023.2261131 

  • Integrated into: Identifying Challenges and Concerns in AIED 

Vetter, M. A., Lucia, B., Jiang, J., & Othman, M. (2024). Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT. Computers and Composition, 71, 102831. https://doi.org/10.1016/j.compcom.2024.102831 

  • Integrated into: Shifting the Focus to Ethical Implications – Academic integrity  

Likewise, the “clear definitions of key terms”, such as dimensions of the ethical implications (copyrights and authorship, transparency, responsibility, and academic integrity), have been further elaborated and enriched with additional references and explanations. 

 

Comment 2: Another limitation of the study is the use of a single university as the source of all survey data. While the sample size is sufficient, the fact that all participants come from the same institution limits the generalizability of the findings. Institutional policies, academic culture, and even geographic context may significantly influence participants’ perceptions of GenAI and related ethical issues. 

Response 2: Added to the section “Limitations”: While the sample size is sufficient, all participants, as a source of survey data, come from a single institution, consequently limiting the generalizability of the findings. Institutional policies, academic culture, and even geographic context may significantly influence participants' perceptions of GenAI and related ethical issues. Expanding future research to include more universities with different profiles or diverse cultural contexts would enhance the applicability of the findings. 

Additionally, we enhanced the “Recommendations for future research” (within the “Conclusion”) with this finding: Considering the limitations of this study and data collection within a single institution, future studies should include universities with different institutional profiles and cultural contexts. This would allow researchers to assess whether AI-related ethical concerns and adoption trends vary or remain consistent across educational settings. 

 

Comment 3: When it comes to the survey, the authors have made a good effort to cover multiple ethical dimensions, but the questions are too broad and lack context. Ethical issues related to GenAI are often highly situational. By providing more detailed scenarios or examples, the survey could capture a deeper and more nuanced understanding of participants’ ethical perceptions. This level of detail is crucial for translating the findings into actionable insights for educators and policymakers. Are there external factors, such as institutional policies or peer influence, that play a stronger role in adoption decisions? Or does the weak correlation suggest that ethical awareness alone is insufficient to drive behavior change? Addressing these questions would add depth to the analysis and enhance the practical relevance of the study. 

Response 3: Added to the section “Limitations”: Additionally, the survey could capture a deeper and more nuanced understanding of participants' ethical perceptions (within different dimensions), allowing for the translation of findings into actionable insights for educators and policymakers. Based on the previous point, addressing factors such as institutional policies or peer influence would provide greater clarity on whether external pressures shape adoption decisions more than ethical considerations. The current findings suggest that peer influence plays a dominant role in the absence of clear policies, but further research is needed to explore the extent of this effect. Considering this study's results and findings, future works are recommended to investigate whether the weak correlation between ethical awareness and AI adoption suggests that ethical awareness alone is insufficient to drive behavior change. A mix of more factors may drive such decisions in an educational environment. 

 

Comment 4: Lastly, the paper could be improved by considering differences between academic disciplines. Ethical concerns with AI tools likely vary across fields. For example, STEM researchers might focus more on data accuracy and reproducibility, while those in the humanities may care more about originality and authorship. Adding this perspective would make the findings more relevant to a wider range of academic areas. 

Response 4: This has been added to the “Conclusion”, fourth paragraph focusing on the future research directions: Future research should also consider differences between academic disciplines, as ethical concerns with AI tools likely vary across fields. For instance, STEM researchers may be more concerned with data accuracy and reproducibility, while those in the humanities may prioritize originality, authorship, and the implications of AI-generated content on creative work. Addressing these discipline-specific perspectives could offer a holistic overview and help shape more targeted guidelines and insights for various academic communities. Considering the limitations of this study and data collection within a single institution, future studies should include universities with different institutional profiles and cultural contexts. This would allow researchers to assess whether AI-related ethical concerns and adoption trends vary or remain consistent across educational settings. 
