Article
Peer-Review Record

University Teachers’ Views on the Adoption and Integration of Generative AI Tools for Student Assessment in Higher Education

Educ. Sci. 2024, 14(10), 1090; https://doi.org/10.3390/educsci14101090
by Zuheir N. Khlaif 1,*, Abedalkarim Ayyoub 2, Bilal Hamamra 3, Elias Bensalem 3,4, Mohamed A. A. Mitwally 5, Ahmad Ayyoub 6, Muayad K. Hattab 7 and Fadi Shadid 5,7
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 25 July 2024 / Revised: 30 September 2024 / Accepted: 4 October 2024 / Published: 6 October 2024
(This article belongs to the Special Issue Application of New Technologies for Assessment in Higher Education)

Round 1

Reviewer 1 Report (New Reviewer)

Comments and Suggestions for Authors

Your research study shows promise. Here are some suggestions to further improve its quality.

1. Authors should follow the "Instructions for Authors": in the text, reference numbers should be placed in square brackets [ ] before the punctuation; for example [1], [1–3] or [1,3]. For embedded citations in the text with pagination, use both parentheses and brackets to indicate the reference number and page numbers; for example [5] (p. 10) or [6] (pp. 101–105).

2. Further editing is necessary. Specifically, consider removing unnecessary parentheses after references (e.g., line 39).

3. The authors should provide explanations for the abbreviations used in Figure 1. Ensure the figure caption clearly states the abbreviations and their meanings.

4. Is Figure 2 necessary to describe the process of data analysis?

5. There seems to be a discrepancy. You stated that an online survey was used, but line 323 mentions 10 hours of interviews. Please clarify whether your study employed (1) only an online survey or (2) a combination of online survey and interviews (mixed methods).

6. Please use a more appropriate title for Table 1.

7. I noticed an inconsistency in your methodology description. Line 358 mentions a quantitative approach, but your use of both open-ended and closed-ended surveys and the corresponding data analysis suggests a mixed-methods design. Please clarify or adjust your methodology statement to reflect the actual research design.

Comments on the Quality of English Language

Minor editing required. 

Author Response

 

  1. Authors should follow the "Instructions for Authors": In the text, reference numbers should be placed in square brackets [ ], and placed before the punctuation; for example [1], [1–3] or [1,3]. For embedded citations in the text with pagination, use both parentheses and brackets to indicate the reference number and page numbers; for example [5] (p. 10). or [6] (pp. 101–105).

 

Action

We changed the in-text citations and reorganized the reference list accordingly.

 

2. Further editing is necessary. Specifically, consider removing unnecessary parentheses after references (e.g., line 39).

 

Action

We edited the manuscript accordingly.

 

3. The authors should provide explanations for the abbreviations used in Figure 1. Ensure the figure caption clearly states the abbreviations and their meanings.

 

Action

We added a description of the model and the abbreviations of the constructs.

 

4. Is Figure 2 necessary to describe the process of data analysis?

Response

Yes, in the previous review round one of the reviewers asked us to add the process of qualitative data analysis, so we added this figure.

5. There seems to be a discrepancy. You stated that an online survey was used, but line 323 mentions 10 hours of interviews. Please clarify whether your study employed (1) only an online survey or (2) a combination of online survey and interviews (mixed methods).

Action

We used a mixed-methods approach (a survey composed of open-ended and closed-ended questions).

6. Please use a more appropriate title for Table 1.

Action

The title of Table 1 was changed.

 

7. I noticed an inconsistency in your methodology description. Line 358 mentions a quantitative approach, but your use of both open-ended and closed-ended surveys and the corresponding data analysis suggests a mixed-methods design. Please clarify or adjust your methodology statement to reflect the actual research design.

Action

We rephrased the methodology part to address your comments.

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors

The theme is very interesting and very well written. The methodology is perfectly adequate and the results are well described. Congrats!

 

However, the manuscript is still a working document: it contains some errors, many annotations, and deleted text.

It needs a lot of revision by the authors for a final version.

I have attached the document with notes and suggestions.

Comments for author File: Comments.pdf

Author Response

Dear reviewer 2

You provided us with your comments and feedback in a PDF file. We revised our manuscript and carefully addressed all of your comments and suggestions.

All changes are marked in red font in the revised manuscript.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Dear authors, 

I read the research paper with interest and was  excited about the evaluation and the results.

In times of AI, assessments are an important point in the overall view of the learning process. The article deals with this sufficiently in the beginning and also points out current literature. Three research questions are also defined.

Nevertheless, I have to reject the article for the following reasons:

 - The defined research questions are not answered. They therefore remain unanswered.

 - The article introduces the topic of student examinations at great length. However, the survey asks teachers about the general use of generative AI and how open they are to using it. There are no questions about assessment or how it should be designed.

Since neither the research questions are sufficiently answered and the long introduction suggests something completely different from the final discussion and the results, I must reject the article.

Author Response

I read the research paper with interest and was  excited about the evaluation and the results.

In times of AI, assessments are an important point in the overall view of the learning process. The article deals with this sufficiently in the beginning and also points out current literature. Three research questions are also defined.

Nevertheless, I have to reject the article for the following reasons:

 - The defined research questions are not answered. They therefore remain unanswered.

Response

We added the answers to the research questions in the results section.

 

 - The article introduces the topic of student examinations at great length. However, the survey asks teachers about the general use of generative AI and how open they are to using it. There are no questions about assessment or how it should be designed.

 

Response

First of all, thank you so much for your hard work and for providing us with invaluable feedback and comments to improve our work. Unfortunately, we realized that we did not include Appendix A in the manuscript as stated in our uploaded manuscript. On line 317, we mentioned Appendix A, which should have been uploaded. Instead, we mistakenly included the wrong survey. The incorrect survey was listed as Table 1 on page 7, lines 322-323, which is not related to our data.

We acknowledge this mistake. However, if you review the discussion section, you will see that we frequently mention the use of Gen AI in assessment, focusing on its application for student assessment. I have attached the correct survey which we used to collect our data for this study (Appendix A.). In addition, we will upload the raw data for the correct survey for both open-ended and closed questions. 

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The structure of the paper is somewhat confusing. The literature review should appear before the section devoted to research questions. Besides, that section is somewhat confusing as, from my point of view, it should not be divided into two different sections. One section is enough to explain the general purpose of the paper, together with the specific research questions.

The author(s) should justify the motivation behind choosing instructors from different fields of knowledge when it comes to the sample.

Table 1 (Measurement scale) is really confusing, as there seem to be items without a source. Besides, in spite of the fact that the author(s) state(s) that literature on GenAI is quite recent, the sources used to develop the UTAUT were published before 2012. This requires justification, because recent literature reviews could perhaps have been a better option for designing the questionnaire.

The model shows room for improvement and I do encourage the author(s) to try to improve the goodness-of-fit indices as it may result in a paper with a higher impact.

The discussion is solid and rigorous and they are really valuable for the community.

Nonetheless, the section devoted to conclusions is too short, and it requires improvement in this regard.

Comments on the Quality of English Language

The quality of English is good, but requires minor revisions.

Author Response

The structure of the paper is somewhat confusing. The literature review should appear before the section devoted to research questions. Besides, that section is somewhat confusing as, from my point of view, it should not be divided into two different sections. One section is enough to explain the general purpose of the paper, together with the specific research questions.

Action

We reorganized it, added subsections, and placed the research questions at the end of the literature review.

Table 1 (Measurement scale) is really confusing, as there seem to be items without a source. Besides, in spite of the fact that the author(s) state(s) that literature on GenAI is quite recent, the sources used to develop the UTAUT were published before 2012. This requires justification, because recent literature reviews could perhaps have been a better option for designing the questionnaire.

The model shows room for improvement and I do encourage the author(s) to try to improve the goodness-of-fit indices as it may result in a paper with a higher impact.

Response

We addressed your comments.

Table 5.5 compares the initial value of the bootstrap-based test for exact overall model fit (i.e., d_ULS and d_G) with the confidence interval derived from the sampling distribution. The initial value ought to fall within the confidence interval; therefore, to show that the model has a "good fit," the upper bound of the confidence interval needs to be greater than the initial value of the d_ULS and d_G fit criteria. The confidence interval is chosen so that the 95% or 99% point represents the upper bound (Schuberth et al., 2022). Moreover, satisfactory fit is indicated by an SRMR value of 0.05, which is below the 0.08 threshold (Hu and Bentler, 1999).
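As a small illustration of the SRMR criterion mentioned above (a sketch only, using hypothetical toy matrices rather than the study's data): SRMR is the root mean square of the residuals between the observed correlation matrix and the model-implied one, taken over the lower triangle.

```python
import numpy as np

def srmr(S, Sigma):
    """Standardized root mean square residual between the observed
    correlation matrix S and the model-implied matrix Sigma."""
    p = S.shape[0]
    # Residuals over the lower triangle (including the diagonal)
    idx = np.tril_indices(p)
    resid = S[idx] - Sigma[idx]
    return np.sqrt(np.mean(resid ** 2))

# Hypothetical 3x3 correlation matrices (not from the manuscript)
S = np.array([[1.0, 0.52, 0.41],
              [0.52, 1.0, 0.38],
              [0.41, 0.38, 1.0]])
Sigma = np.array([[1.0, 0.50, 0.44],
                  [0.50, 1.0, 0.35],
                  [0.44, 0.35, 1.0]])

fit = srmr(S, Sigma)
print(round(fit, 3))  # prints 0.019
# A value below the conventional 0.08 cutoff suggests acceptable fit.
```

The same logic applies to the d_ULS and d_G criteria in the table: the reported value is compared against the upper bound of its bootstrap confidence interval.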

 

Table 5.5. Goodness of fit.

                               Original sample (O)   Sample mean (M)   95%    99%
SRMR    Saturated model        0.05
d_ULS   Saturated model        0.3                   0.15              0.21   0.44
        Estimated model        0.71                  0.43              0.69   0.87
d_G     Saturated model        0.6                   0.45              0.57   0.64
        Estimated model        0.72                  0.46              0.59   0.76


The discussion is solid and rigorous and they are really valuable for the community.

Nonetheless, the section devoted to conclusions is too short, and it requires improvement in this regard.

Response

The paper is too long, so we tried to shorten it.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The article addresses a study that investigated the factors that may impact the adoption of Generative Artificial Intelligence (Gen AI) tools for students’ assessment in tertiary education from the perspectives of early-adopter instructors in the Middle East. The authors utilize a self-administered online survey and the Unified Theory of Acceptance and Use of Technology (UTAUT) model to collect data. The content is succinctly described and contextualized with respect to previous and present theoretical background and empirical research on the topic. The research design, questions, hypotheses and methods are clearly stated. The arguments and discussion of findings are coherent, balanced and compelling. For empirical research, the results are clearly presented. The article is adequately referenced. The conclusions are thoroughly supported by the results presented in the article or referenced in secondary literature.

 

Author Response

Reviewer #3

The article addresses a study that investigated the factors that may impact the adoption of Generative Artificial Intelligence (Gen AI) tools for students’ assessment in tertiary education from the perspectives of early-adopter instructors in the Middle East. The authors utilize a self-administered online survey and the Unified Theory of Acceptance and Use of Technology (UTAUT) model to collect data. The content is succinctly described and contextualized with respect to previous and present theoretical background and empirical research on the topic. The research design, questions, hypotheses and methods are clearly stated. The arguments and discussion of findings are coherent, balanced and compelling. For empirical research, the results are clearly presented. The article is adequately referenced. The conclusions are thoroughly supported by the results presented in the article or referenced in secondary literature.

Response

Thank you
