Article
Peer-Review Record

SMART: Selection Model for Assessment Resources and Techniques

Educ. Sci. 2024, 14(1), 23; https://doi.org/10.3390/educsci14010023
by Isabel C. Gil-García 1,† and Ana Fernández-Guillamón 2,*,†
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 22 November 2023 / Revised: 19 December 2023 / Accepted: 20 December 2023 / Published: 25 December 2023
(This article belongs to the Section Higher Education)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Thank you for the opportunity to review the manuscript, and please allow me to make a few recommendations for improving it.

I suggest the authors explain in more detail the reasons behind their choice of the TOPSIS and AHP methods over others. For example, although the now-classic AHP can still fulfill its role, there are newer methods that could surpass some of its limitations (such as Fuzzy AHP, PROMETHEE, ANP, VIKOR, BWM, etc.). A brief discussion comparing the chosen methods with other existing multi-criteria decision-making methods would strengthen the paper.
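For context on this comparison, it may help to recall what classic AHP actually computes: criteria weights derived from the principal eigenvector of a pairwise comparison matrix, with a consistency ratio guarding against incoherent judgements. The sketch below is purely illustrative; the comparison values are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix on the Saaty 1-9 scale:
# entry (i, j) says how much more important criterion i is than criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Criteria weights = normalised principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency ratio: checks whether the pairwise judgements are coherent
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)   # consistency index
ri = 0.58                              # Saaty's random index for n = 3
cr = ci / ri                           # CR < 0.1 is conventionally acceptable
```

This is the step where the newer variants mentioned above differ: Fuzzy AHP replaces the crisp comparison values with fuzzy numbers, while BWM reduces the number of pairwise comparisons required.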

While the study seems like an interesting experiment and quite possibly useful for educators and institutions, I suggest the authors include a discussion subsection (or at least expand the discussion) on the implications of the findings for educators and institutions.

Furthermore, I was unable to find a discussion of the limitations of the proposed SMART methodology. I suggest the authors address the potential limitations and constraints (if any) of the study so as to strengthen its overall reliability. For example, the authors could briefly discuss potential ethical considerations in the decision-making process, especially when multiple experts are involved.

I would also like to read in this manuscript about the authors' plans for continuing this research, perhaps with further exploration of the areas where this study has limitations.

The authors mention in the conclusion section the applicability of the method to other modalities or educational levels, but I recommend a more nuanced discussion of the generalization of the findings to diverse contexts to strengthen their conclusion.

Author Response

Dear reviewer, we attach the file, thank you for your appreciated review.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors
1. The text is very interesting, especially for presenting an innovative approach to the selection of learning assessment activities. It is well-structured and well-written.

2. The proposed model (SMART model) appears to be interesting but complex in its application, particularly involving the identification and formalization of criteria and sub-criteria to consider in the selection of assessment activities, as well as the identification and characterization of these activities based on a set of elements.

3. On the other hand, it is a model that does not take into account the perspectives and preferences of students regarding assessment activities, seeming to view assessment as a process decided exclusively by teachers, which does not align with a more student-centered perspective on higher education, including in terms of assessment practices.

4. The aspects mentioned above raise the question of the necessity and relevance of using the proposed model.

5. Therefore, it is considered that the text should include a final section that discusses the limitations of the proposed model and potential difficulties in its implementation.

6. The introduction should also discuss the need for new assessment practices in light of the paradigm shift from teacher-centered to student-centered teaching. The introduction argues for the need for more interactive and student-centered activities but without focusing the discussion on specific assessment issues.

7. "Distance Education" does not appear to be a relevant keyword in relation to the content of the analyzed text.

Author Response

The authors express their sincere appreciation to Reviewer 2 for the comments and useful suggestions, which have been incorporated into the revised version of the paper. All the comments received have been discussed in detail, and our responses are presented in blue, both in the notes below and in the modifications in our revised paper.
Comments to the Author
1. The text is very interesting, especially for presenting an innovative approach to the selection of learning assessment activities. It is well-structured and well-written.
The authors appreciate that Reviewer 2 considers the present study as interesting and that it is well-structured and written.
2. The proposed model (SMART model) appears to be interesting but complex in its application, particularly involving the identification and formalization of criteria and sub-criteria to consider in the selection of assessment activities, as well as the identification and characterization of these activities based on a set of elements.
We thank Reviewer 2 for highlighting the complexity of criteria identification and activity characterization in the SMART model. We agree that it is important to ensure this complexity does not hinder the practical application of the model.
To determine and define the criteria and alternatives, we recommend asking different people related to the educational context (e.g., teachers, counselors, etc.). In this way, it is possible to consider a more comprehensive set of factors.
Even though it is a complex task, we believe that the rigor of the SMART methodology results in greater confidence in the final rankings. The model makes subjective decision-making more structured and data-driven.
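The ranking step the response refers to can be illustrated with a minimal TOPSIS sketch. The activities, criteria scores, and weights below are hypothetical, chosen only to show the mechanics; they are not the values used in the paper.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.

    matrix  : (alternatives x criteria) raw scores
    weights : criteria weights summing to 1
    benefit : True where higher is better, False for cost-like criteria
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalise each criterion column
    norm = m / np.sqrt((m ** 2).sum(axis=0))
    v = norm * weights
    # Ideal and anti-ideal points per criterion
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)  # closeness coefficient: higher = better

# Hypothetical example: 3 assessment activities scored on 3 criteria
scores = [[7, 2, 8],   # project
          [5, 1, 6],   # quiz
          [8, 3, 5]]   # presentation
weights = np.array([0.5, 0.2, 0.3])
benefit = np.array([True, False, True])  # 2nd criterion is cost-like (e.g. workload)
closeness = topsis(scores, weights, benefit)
ranking = np.argsort(-closeness)  # best activity first
```

Whatever the inputs, the closeness coefficients are bounded in [0, 1], which is what makes the final ranking easy to interpret and compare across criteria sets.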
3. On the other hand, it is a model that does not take into account the perspectives and preferences of students regarding assessment activities, seeming to have a view of assessment as a process exclusively decided by teachers, which does not align with a perspective of higher education more centred on the student, including in terms of assessment practices.
The authors thank Reviewer 2 for this comment. In the paper, our focus was to support teachers/lecturers in choosing what assessment activity to prepare, but integrating student preferences is also important. There are a couple of ways in which student perspectives could be incorporated into the model:
1. Include student-related criteria in the criteria set used for evaluation. These criteria could assess the perceived usefulness, enjoyment, or difficulty of activities from the student standpoint. This would require gathering input directly from students.
2. Conduct a student survey to identify highly rated or preferred activities and use these as the activities (alternatives) for the SMART method. From this activity pool, the lecturer's considerations can be applied through the SMART model, filtering and prioritizing those activities.
We will explore ways to address this gap by inviting student perspectives, either through the evaluation criteria themselves or by using student data to validate activity selections.
4. The aspects mentioned above raise the question of the necessity and relevance of using the proposed model.
We are confident that with the explanations given in this letter and in the new version of the manuscript, the necessity and relevance of using the SMART model are clearer.
5. Therefore, it is considered that the text should include a final section that discusses the limitations of the proposed model and potential difficulties in its implementation.
We have included two new Sections (5.1 and 5.2), in which we discuss the key implications for educators and institutions (5.1) and the limitations and difficulties (5.2). Please refer to pages 13 and 14. Moreover, this was also suggested by Reviewer 1.

6. The introduction should also discuss the need for new assessment practices in light of the paradigm shift from teacher-centred to student-centred teaching. The introduction argues for the need for more interactive and student-centred activities but without focusing the discussion on specific assessment issues.
The authors agree with Reviewer 2. In the new version of the manuscript, we have also included the assessment issues related to student-centred teaching. Please, refer to the blue color text.
7. "Distance Education" does not appear to be a relevant keyword in relation to the content of the analyzed text.
The authors thank Reviewer 2 for this comment. In the new version of the manuscript, we have updated the keywords. We are confident that they are relevant to the paper.

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

I appreciate the authors for taking into account the aspects mentioned in the first round of evaluation. I believe they have significantly improved the text by addressing important aspects of it. However, I have a few additional considerations and suggestions that I would like to bring to their attention.

Although I find the relevant text to be significant and a partial response to the comments from the first round, I emphasize that the central issue here is the assessment of learning. Therefore, the argumentation should not only consider the reference to students as "active agents in their own learning process" but should also explicitly address their involvement in the assessment of their own learning and even the learning of their peers.

 

I appreciate the introduction of the "5.1. Discussion" and "5.2. Limitations of the study and future works" sections, which, in my assessment, significantly improved the text, addressing the issues I had highlighted in point 5 of the first round of review. However, I would like to bring the following points to the authors' consideration:

In the "5.1. Discussion" section (lines 283-285), the authors wrote: "The key implications of this methodology for educators are: • Prioritizing assignments and activities over traditional quizzes and exams based on the model ranking's results." Traditional quizzes and exams are also valid forms of assessment and should be considered within the SMART methodology. Therefore, theoretically, the model can, in certain situations and depending on the criteria and sub-criteria considered, indicate these two assessment strategies or instruments as appropriate. I suggest removing this statement or making it more understandable.

In lines 310-311-312, the authors introduce new abbreviations and acronyms that do not appear in the "Abbreviations" section. Please update the list of abbreviations accordingly.

I think it would be more appropriate to use "assessment methodologies" rather than "evaluation methodologies" in the keywords, though, not being a native English speaker, I am not entirely sure about this.

Author Response

I attach a response letter to the reviewer.

Author Response File: Author Response.pdf
