Article
Peer-Review Record

Generic Tasks for Algorithms

Future Internet 2020, 12(9), 152; https://doi.org/10.3390/fi12090152
by Gregor Milicic *, Sina Wetzel and Matthias Ludwig
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 31 July 2020 / Revised: 1 September 2020 / Accepted: 1 September 2020 / Published: 3 September 2020
(This article belongs to the Special Issue Computational Thinking)

Round 1

Reviewer 1 Report

The manuscript presents an interesting approach to fostering the teaching and learning of computational thinking. Generic Tasks are indeed a promising approach for this, and with carefully implemented tools for teachers and students, they can bring added value to this challenging area.

However, there are a few things I would like to point out. First, the number of participants in the survey is rather small. With n = 14 it is not possible to draw far-reaching conclusions (which the authors also acknowledge, at least partially). It would have been better to write a separate "Limitations of the study" chapter and elaborate on these issues there, together with possible strategies for overcoming them in future research.

Second, the results from the survey are presented in a somewhat confusing way. Figure 3 in particular is quite difficult to interpret; perhaps the authors could consider another representation of the results (I would personally prefer a bar graph over a line graph in this kind of case).

Third, a few issues concerning the methodology could be explained better. For example, what is the rationale for choosing a 5-point Likert scale, and why does another part of the survey use a six-point scale? Could this confuse the survey participants? Measuring experts' estimations of how students would learn is also somewhat concerning to me. Although the participants are experts in their field, they are answering according to what they think the students would do. The reality, especially when no tools are yet available for using the GTs in a real-life teaching setting, might be something completely different, and this considerably reduces the significance of the results. I nevertheless like the idea very much and am in favour of publishing this work provided the minor additions and clarifications are included.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

futureinternet-902788

Generic Tasks for Algorithms

The authors address an interesting research topic for the journal. It is a rigorous and well-organized paper. Nevertheless, some recommendations should be considered:

* Please revise the order of the references in the main text. All references must be renumbered to follow numerical order.

* In my opinion, the Conclusion section should be more concise, emphasizing the most important contribution of this study.

* Please revise the format of the references according to the MDPI guidelines (e.g., references 1, 20, 28, 34).

* Keywords: please correct the word “generic” => “generic”

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
