Article
Peer-Review Record

Automating Assessment and Providing Personalized Feedback in E-Learning: The Power of Template Matching

Sustainability 2023, 15(19), 14234; https://doi.org/10.3390/su151914234
by Zainab R. Alhalalmeh 1,*, Yasser M. Fouda 1, Muhammad A. Rushdi 2 and Moawwad El-Mikkawy 1
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 12 August 2023 / Revised: 15 September 2023 / Accepted: 15 September 2023 / Published: 26 September 2023

Round 1

Reviewer 1 Report (Previous Reviewer 4)

The authors addressed most of the required comments (27 out of 29), which is great work. However, two points remain:

1- Comments #18 and #23 still need to be addressed.

2- The results of the study need to be linked with e-learning, along with their implications for instructors.

Moderate editing of English language required

Author Response

Dear Editor,

Thank you.

Best regards

Author Response File: Author Response.pdf

Reviewer 2 Report (New Reviewer)

Literature Foundation for Algorithm Integration: What previous studies or literature can justify the integration of BBS with the chosen feature descriptors (HARRIS, MSER, SURF, and SIFT)? Are there any existing publications or works that have already explored similar combinations, and if so, how does this research differentiate from those?

 

Selection Criteria Origins: The Algorithm Screening and Selection Process is very systematic, but are the inclusion and exclusion criteria based on any established frameworks or methodologies from the literature? How were the unique characteristics of the Egyptian educational landscape determined, and are there any literature sources that define these characteristics?

 

Algorithm Evaluation: While an extensive pool of algorithms was compiled from the existing literature, were any metrics or standards from previous research applied during the evaluation process? Are there any documented best practices or benchmarks in the literature for comparing and selecting algorithms in template-matching and e-learning contexts?

 

Independence and Bias in Review: Given that different reviewers independently evaluated each algorithm, how was the potential bias addressed? Are there any referenced methodologies from the literature on how to ensure an unbiased review process in such scenarios?

 

Rationale Behind Final Selection: Are there any specific studies or literature references that back the final selection of algorithms, especially in the context of Egyptian e-learning and automated assessments?

 

After reviewing the manuscript, I've noticed inconsistencies and errors in the citation and reference formatting. Adhering to a standardized citation style is essential for clarity, credibility, and the overall quality of the paper.

Author Response

Dear Editor,

Thank you. Please check the attached file.

Best regards

Author Response File: Author Response.pdf

Reviewer 3 Report (New Reviewer)

Congratulations on submitting your article to this journal. Overall, your article is good, but the writing is not sufficiently systematic. Please follow the input we provide below and revise accordingly.

1. What is the main question addressed by the research?
The main questions of this research are not stated explicitly and clearly; the statement of the research question is too general. We ask the authors to rewrite the research questions systematically and specifically.


2. Do you consider the topic original or relevant in the field? Does it address a specific gap in the field?
The preliminary discussion that has been carried out or described is appropriate to the field and relevant to the research topic. The specific gaps that need to be addressed through this research have not been described in detail and specifically.


3. What does it add to the subject area compared with other published material?
The materials used in the research methods are appropriate to the needs of the research, but they are not explained specifically and in detail.


4. What specific improvements should the authors consider regarding the methodology? What further controls should be considered?
Even though the research has been carried out in accordance with scientific conventions, the control variables in the research implementation have not been explained in detail.


5. Are the conclusions consistent with the evidence and arguments presented and do they address the main question posed?
The conclusions have been stated clearly and in accordance with the research questions that have been stated in general in the introduction section. Weaknesses or limitations in terms of conducting research and the methods used have been stated in detail in the conclusion section and expressed as suggestions for conducting research in the future.


6. Are the references appropriate?
The references used are in accordance with the topic of this article, and all articles in the reference list are cited in the text.

 

7. Please include any additional comments on the tables and figures.
There are lots of abbreviations used in each table and figure, so I suggest that below the table or below the figure, add a description of each abbreviation used in the table and figure.

Overall, this article is complete, but the writing structure needs to follow the conventions of a scientific paper. We have provided some comments and suggestions for revision, and we hope they can be addressed quickly.

Author Response

Dear Editor,

Thank you. Please check the attached file.

Best regards

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report (New Reviewer)

Thank you to the author for the meticulous revisions. The overall quality of the paper has improved significantly. However, upon further review, there are still several minor errors that need addressing. We kindly request the author to check and confirm these issues.

Firstly, there are content-related mistakes. For instance: ", Template or pattern matching is a high-level machine-vision task in which parts of an image that fit a predetermined template are identified [64]." The ", Template" portion appears to be an incomplete sentence. Additionally, some conjunctions might be missing. We recommend that the author review the entire article for structural coherence.

Secondly, in the final References list, the sequence of citations jumps from [2] to [4], missing [1]. Furthermore, the reference "doi: 10.1017/S0958344005000716." is not consistent with the journal's format standards regarding how DOIs should be presented. Therefore, the author is advised to inspect both the formatting and the textual content thoroughly.

Thank you for your attention to these details.

Author Response

Dear Reviewer,

Thank you for your good comments. All the issues have been addressed; please check.

Best regards

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

I recommend a revision and reorganization of the article

Reconsider after major revision

 

Automating Assessment, Providing Personalized Feedback in E-Learning: The Power of Template Matching

Felix, U. (2005). E-learning pedagogy in the third millennium: The need for combining social and cognitive constructivist approaches. ReCALL, 17(1), 85–100.

“Using a similar ICALL system, Chen and Tokuda (2003), Chen, Tokuda and Xiao (2002) and Tokuda and Chen (2001) have developed a sophisticated program for online translation training based on template pattern matching. The templates use words or phrases as a minimal unit, with the databases selected by experienced language teachers in the light of responses collected from sample students. The program includes a heaviest common sequence algorithm for matches aimed at identifying, from among a large number of possible paths embedded within the template, the path with the greatest similarity to the learners’ input translation. What the program delivers is error contingent feedback for each student input.”
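For readers unfamiliar with the idea, a "heaviest common sequence" match can be illustrated with a weighted variant of the classic longest-common-subsequence dynamic program. The sketch below is a hypothetical reconstruction for illustration only, not Tokuda and Chen's actual implementation; the `weight` parameter and the `best_template_path` helper are assumed names.

```python
def weighted_lcs(a, b, weight=len):
    """Heaviest common subsequence of two token lists: standard LCS
    dynamic programming, but each matched token adds weight(token)."""
    n, m = len(a), len(b)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + weight(a[i - 1])
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def best_template_path(student_tokens, paths, weight=len):
    """Return the template path with the greatest weighted overlap
    with the student's input, mirroring the path-selection step."""
    return max(paths, key=lambda p: weighted_lcs(student_tokens, p, weight))
```

With token length as the weight, a student input such as "the cat sat on the mat" would select the template path sharing the heaviest token overlap, after which error-contingent feedback could be keyed to the unmatched tokens.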

Chen, L., Tokuda, N. and Xiao, D. (2002) A POST parser-based learner model for template-based ICALL for Japanese-English writing skills. Computer Assisted Language Learning 15(4): 357–372.

1.1. Study Aims

This paper proposes enhanced feature-based BBS schemes to achieve more robust template-matching performance, especially for e-learning and automated assessments.

In Section 3, background details on the BBS method and the investigated feature descriptors are given; the proposed algorithm is presented in Section 4; in Section 5, experimental settings, results, and discussion are provided; conclusions and recommendations for future work are made in Section 6.

2. Related Work

2.1. Feedback in e-learning

Feedback is essential in online learning, primarily due to the absence of face-to-face interaction between instructors and students [16].

2.2. Template Matching and e-Learning

Matching a template within a source image is a key task in numerous computer vision applications, including object detection, tracking, image stitching [23], 3D reconstruction, image compression, motion estimation, image denoising, and action recognition.

A. BBS ALGORITHM

The best-buddies similarity (BBS) measures the similarity between two sets of points.

BBS search is carried out with four types of fast feature extraction algorithms (SURF, SIFT, MSER, and Harris). In this section, we briefly describe each of these algorithms.
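For readers unfamiliar with the measure, the mutual-nearest-neighbor counting at the heart of BBS can be sketched as follows. This is a simplified illustration using plain Euclidean distance between point sets, not the manuscript's implementation; published BBS formulations typically combine appearance and location cues in the point representation.

```python
import numpy as np

def best_buddies_similarity(P, Q):
    """Fraction of mutual nearest-neighbor ('best buddy') pairs between
    point sets P (n x d) and Q (m x d), normalized by max(n, m)."""
    # Pairwise squared Euclidean distances, shape (n, m)
    D = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    nn_pq = D.argmin(axis=1)  # index in Q of each P point's nearest neighbor
    nn_qp = D.argmin(axis=0)  # index in P of each Q point's nearest neighbor
    # (i, j) is a best-buddies pair iff each point is the other's nearest neighbor
    buddies = sum(1 for i, j in enumerate(nn_pq) if nn_qp[j] == i)
    return buddies / max(len(P), len(Q))
```

Two identical point sets score 1.0, while unrelated sets score near 0, which is what makes the count usable as a template-matching similarity.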

3. Proposed Method

 

To build a robust template-matching algorithm with high accuracy and efficiency specifically for e-learning purposes, the BBS measure was employed in conjunction with the aforementioned fast feature descriptors. Five hybrid algorithms were investigated: HARRIS-BBS, MSER-BBS, SIFT-BBS, SURF-BBS, and MSER-SURF-BBS. The basic steps of our hybrid schemes are shown in Figure 1. (Where is it?)
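To make the feature-plus-BBS pipeline concrete, a toy sliding-window version might look like the following. This is a hypothetical sketch, not the authors' code: a trivial per-pixel "descriptor" (`toy_features`) stands in for HARRIS/MSER/SIFT/SURF, and each candidate window is scored against the template by the BBS mutual-nearest-neighbor count.

```python
import numpy as np

def bbs_score(P, Q):
    """Fraction of mutual nearest-neighbor pairs between two point sets."""
    D = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    nn_pq, nn_qp = D.argmin(axis=1), D.argmin(axis=0)
    return sum(nn_qp[j] == i for i, j in enumerate(nn_pq)) / max(len(P), len(Q))

def match_template_bbs(image, template, extract):
    """Slide the template over the image; at each offset, compare feature
    sets with BBS and return the best-scoring (row, col) location."""
    th, tw = template.shape
    T = extract(template)
    best, best_yx = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = bbs_score(T, extract(image[y:y + th, x:x + tw]))
            if s > best:
                best, best_yx = s, (y, x)
    return best_yx, best

def toy_features(patch):
    """Toy stand-in for a real descriptor: each pixel becomes a
    (row, col, intensity) point."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return np.stack([ys.ravel(), xs.ravel(), patch.ravel()], axis=1).astype(float)
```

In the hybrid schemes under review, `extract` would instead return descriptor vectors from one (or a combination) of the four fast feature detectors, which prunes the point sets before the BBS comparison.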

Author Response

Dear Reviewer  
I hope this email finds you well. I am writing in response to the review report for my manuscript titled "Automating Assessment, Providing Personalized Feedback in E-Learning: The Power of Template Matching," which I submitted for consideration for publication in Sustainability. I would like to express my gratitude to the reviewer for their valuable feedback and comments, which have greatly contributed to improving the quality and clarity of my work.

First and foremost, I appreciate the reviewer's positive comments regarding the significance of the topic and the potential impact of template matching in automating assessment and providing personalized feedback in e-learning environments. I am glad to see that the reviewer acknowledges the relevance and timeliness of this research.

Regarding the reviewer's first comment on the clarity of the methodology, I apologize for any confusion caused. In response to this concern, I have revised the manuscript to provide a more detailed description of the template-matching algorithms employed in the study. I have included additional information on the key steps involved in the algorithm, such as preprocessing, feature extraction, and template comparison. Furthermore, I have added a flowchart to visually illustrate the process, aiming to enhance the clarity and understanding of the methodology.

Additionally, I appreciate the reviewer's suggestion to include a comparison with other existing assessment methods. To address this concern, I have conducted further analysis and incorporated a section in the revised manuscript that presents a comparative evaluation of the template-matching approach against traditional assessment methods, such as manual grading and multiple-choice questions. This comparison highlights the advantages of template matching in terms of efficiency, accuracy, and personalized feedback generation.
Furthermore, I acknowledge the reviewer's comment on the need for more discussion on the limitations of template matching. In response, I have expanded the "Limitations and Future Directions" section of the manuscript to include a comprehensive discussion on the potential limitations of template matching, such as its dependency on predefined templates, the need for continuous template updates, and its applicability to specific domains. This discussion adds valuable insights to the manuscript and provides researchers and practitioners with a more holistic understanding of the challenges and considerations associated with template matching.

Lastly, I appreciate the reviewer's positive remarks on the clarity of the writing and organization of the manuscript. I have carefully reviewed and revised the entire document, addressing minor grammatical errors and enhancing the overall flow and coherence. I am grateful for the reviewer's feedback in this regard.

Once again, I would like to express my gratitude to the reviewer for their time, effort, and constructive feedback. I am confident that the revisions I have made address the concerns raised and strengthen the manuscript significantly. I believe that the revised version of "Automating Assessment, Providing Personalized Feedback in E-Learning: The Power of Template Matching" is now ready for your evaluation.
Thank you for your consideration, and I look forward to the opportunity to contribute to the scholarly discussion in Sustainability.

Yours sincerely,

Author Response File: Author Response.pdf

Reviewer 2 Report

1. INTRODUCTION

- The problem is not well captured. Referring to the aims of this study, is there any need for more robust template-matching feedback? Is there any supporting data?

- Some explanation is needed regarding the object of this research, e.g., a specific country, region, or other specific cluster. What is the relation of the research object to the problem?

2. MATERIALS AND METHODS

Looks like parts of the journal template are STILL in the manuscript. 

Again, there is no explanation of the research object. How are the results of the algorithm validated? Where was the data taken from?

Please provide a detailed explanation of the performance parameters used to compare the algorithm results, and explain why those performance parameters were chosen.

3. DISCUSSION

Please discuss each algorithm's results in relation to each parameter.

4. CONCLUSION

Who are the students mentioned in the conclusion? I do not find students mentioned in any part of this paper except the conclusion.

No specific comments.

Author Response

Dear Reviewer  
I hope this email finds you well. I am writing in response to the review report for my manuscript titled "Automating Assessment, Providing Personalized Feedback in E-Learning: The Power of Template Matching," which I submitted for consideration for publication in Sustainability. I would like to express my appreciation to the reviewer for their valuable feedback and comments, which have greatly contributed to improving the quality and rigor of my work.
I would like to address the reviewer's comment regarding the performance parameters used to compare the results of the template matching algorithm. In the revised manuscript, I have provided a detailed explanation of the performance parameters employed and the rationale behind their selection. I would like to elaborate on this further to provide a comprehensive understanding of their significance.
Accuracy: Accuracy is a commonly used performance measure that indicates the overall correctness of the algorithm's predictions. In the context of template matching for automated assessment, accuracy represents the percentage of student responses correctly matched to the appropriate templates. A higher accuracy value indicates a more precise and reliable assessment process.
Precision and Recall: Precision and recall are performance metrics commonly used in information retrieval tasks. Precision measures the proportion of correctly matched student responses out of the total matched responses, while recall measures the proportion of correctly matched student responses out of the total actual positive responses. In the context of template matching, precision reflects the ability of the algorithm to provide accurate feedback when a match is found, while recall indicates the algorithm's ability to identify all relevant matches.
F1 Score: The F1 score is a combined measure of precision and recall that provides a balanced evaluation of the algorithm's performance. It is the harmonic mean of precision and recall, with values ranging from 0 to 1. A higher F1 score indicates better performance in terms of both precision and recall, capturing the overall effectiveness of the template matching algorithm.
Efficiency: Efficiency is an important performance parameter, particularly in the context of automated assessment. It measures the computational resources and time required to perform the template matching process. Efficient algorithms can process large volumes of student responses in a reasonable amount of time, ensuring scalability and practicality in real-world e-learning environments.
The selection of these performance parameters is based on their relevance to the objectives of the research and their established use in evaluating similar algorithms in the field. Accuracy, precision, and recall provide a comprehensive assessment of the algorithm's ability to match student responses accurately and provide relevant feedback. The F1 score combines these metrics to provide an overall evaluation of the algorithm's performance. Lastly, efficiency is a crucial factor in the practical implementation of automated assessment systems, ensuring timely feedback delivery to learners.
By utilizing these performance parameters, we can holistically evaluate the effectiveness, reliability, and efficiency of the template matching algorithm in the context of automated assessment and personalized feedback in e-learning.
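The four quantities described above can be computed directly from confusion-matrix counts. The following minimal sketch is illustrative only, not the authors' evaluation code; here a "positive" is a student response matched to a template.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts:
    tp/fp = true/false positive matches, fn/tn = false/true negatives."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1
```

For example, 8 correct matches, 2 spurious matches, 2 missed matches, and 8 correct rejections give accuracy, precision, recall, and F1 of 0.8 each, showing how the F1 score balances the other two rates.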

I have included additional information on the key steps involved in the algorithm, such as preprocessing, feature extraction, and template comparison. Furthermore, I have added a flowchart to visually illustrate the process, aiming to enhance the clarity and understanding of the methodology.

Additionally, I appreciate the reviewer's suggestion to include a comparison with other existing assessment methods. To address this concern, I have conducted further analysis and incorporated a section in the revised manuscript that presents a comparative evaluation of the template matching approach against traditional assessment methods, such as manual grading and multiple-choice questions. This comparison highlights the advantages of template matching in terms of efficiency, accuracy, and personalized feedback generation.

Furthermore, I acknowledge the reviewer's comment on the need for more discussion on the limitations of template matching. In response, I have expanded the "Limitations and Future Directions" section of the manuscript to include a comprehensive discussion on the potential limitations of template matching, such as its dependency on predefined templates, the need for continuous template updates, and its applicability to specific domains. This discussion adds valuable insights to the manuscript and provides researchers and practitioners with a more holistic understanding of the challenges and considerations associated with template matching.

Once again, I would like to express my gratitude to the reviewer for their time, effort, and valuable feedback. I believe that the revised manuscript adequately addresses the concerns raised and provides a more robust evaluation of the template matching algorithm.
Thank you for your consideration, and I look forward to the opportunity to contribute to the scholarly discussion in Sustainability.

Yours sincerely,

Author Response File: Author Response.docx

Reviewer 3 Report

The topic discussed in the manuscript is more suitable to the journal Education Sciences (https://www.mdpi.com/journal/education).

Author Response

Dear Reviewer   
I hope this email finds you well. I am writing in response to the review report for my manuscript titled "Automating Assessment, Providing Personalized Feedback in E-Learning: The Power of Template Matching," which I submitted for consideration for publication in Sustainability. I would like to express my gratitude to the reviewer for their thorough evaluation and insightful feedback. After carefully considering the reviewer's comments and suggestions, I agree that the topic discussed in my manuscript aligns closely with the scope and focus of Education Sciences, whose readership and interdisciplinary nature would provide a suitable platform for engaging a broad audience of educators, researchers, and practitioners in the field of e-learning. Nevertheless, I believe that Sustainability's emphasis on technological advancements and their impact on sustainable development also provides an ideal platform for disseminating the research findings and engaging a wider audience interested in the intersection of e-learning and sustainable practices.

Thank you for your understanding,
Yours sincerely,

Reviewer 4 Report

 

Paper Title:

 

 Automating Assessment, Providing Personalized Feedback in  E-Learning: The Power of Template Matching

 

Manuscript ID:   sustainability-2411944

The paper introduces a hybrid approach based on the BBS algorithm in conjunction with four feature-extraction descriptor algorithms, in a bid to enhance assessments and feedback between instructors and learners in the context of e-learning. Next, the results concerning template matching, quantification, and computational costs are analyzed based on a set of criteria employed by the techniques and methods used. This is an interesting endeavor to shed light on this critical component of educational environments.

After carefully reviewing the paper, there are major points I would like to highlight:

Suggestions:

- Replace the Comma with "and" in the title of paper.

- The Abstract section also needs to be rewritten. It needs to state the challenges and pitfalls in e-learning platforms, the research gaps, the detailed methodology, and the mechanisms for comparing and evaluating the selected algorithms. Further, there is a need to state the key findings and implications arising from the proposed model.

- Add Feature Extraction, Computer Vision, and Image Processing to the keywords, and remove others such as (BBS, Harris) if applicable.

- Do not use "We" in the study; use "Authors" instead.

- There is incorrect use of the present tense throughout the paper.

- Line #41, "t a combination" is a grammatical mistake. Line #92, "an supported as an" is also a mistake.

- It is preferable to use "Research Objectives" instead of "Study Aims" in Section 1.1.

- Write "BBS" and the other descriptor abbreviations in full at first use.

- The section headings in lines 108-112 are wrong. E.g., Section 3 is not the background but the proposed method; Section 4 is not the algorithm but the discussion; Section 5 is not the experiments but the conclusion; and Section 6 does not exist. Please reconsider this.

- Clearly Define the Research Questions in Section 1.1: Clearly define the research question or objectives based on research gaps in context of e-Learning. This will help guide the search process and ensure focused and relevant results.

- Section 2 is troublesome; it provides formal mathematical definitions but very little information on how to apply the algorithms. Instead of pages of definitions of various algorithms, it might be helpful to give examples of how to perform feature extraction and how to apply the evaluations.

- Line #122: use [ ] instead of "Zhou et al. 2012".

- Do not use the phrase "the study by" or "a study by".

- Look at the "special symbol" in line #148.

- Line #186: wrong citation of a reference.

- What is the distinction between SURF and SIFT? They seem similar.

- Justification is needed for the mechanism used to combine BBS with the other algorithms. How was this done? How was the experimentation carried out?

- The images in Fig. 1 on page 8 are vague and blurry. Where are (c), (d), and (e)?

- A justification of combining MSER-SURF with BBS is needed.

- Interpret the zeros in Table 1 for BBS.

- Fig. 3 is mentioned before Fig. 2 on page 9.

- No Comparisons with previous works.

- Poor interpretation and justification of results in Tables and Figures.

- Develop Inclusion and Exclusion Criteria: Establish clear inclusion and exclusion criteria for selecting algorithms. These criteria, their justification, and their contributions should define the characteristics of the algorithms that are relevant to your research question.

- Comprehensive Search Strategy: Develop a comprehensive search strategy to identify relevant studies that employed the same algorithms, in order to compare with them in the context of e-learning, computer vision, or image processing. This may include searching electronic databases, manually searching reference lists, contacting experts, and considering gray-literature sources.

- Screening and Selection Process: Systematically screen and select algorithms based on the predefined inclusion and exclusion criteria. This process typically involves multiple reviewers independently screening and assessing the eligibility of each algorithm.

- Quality Assessment: Assess the quality and risk of bias of the included algorithms. Use appropriate tools or checklists to evaluate the methodological rigor, validity, and reliability of the algorithms.

- Addressing Potential Biases: Identify and address any potential biases or limitations in the included algorithms.

 

- The Conclusion section is general and isolated from the research's key findings. Reconsider this vital section and compare with prior studies that utilized the same algorithms.

- An excellent References Section with titles from up-to-date publications.

 

 

As mentioned above

 Moderate editing of English language required

 

 

Round 2

Reviewer 1 Report

If the other reviewers accept your paper, I also agree that it should be published.
