Article
Peer-Review Record

Assessing Student and Coach Learning Experiences with Virtual Collegiate Soil Judging Contest during COVID-19 Pandemic

Educ. Sci. 2023, 13(7), 717; https://doi.org/10.3390/educsci13070717
by Ammar B. Bhandari 1,*, Steven Chumbley 2 and Benjamin Turner 2
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 1 June 2023 / Revised: 2 July 2023 / Accepted: 12 July 2023 / Published: 14 July 2023
(This article belongs to the Topic Advances in Online and Distance Learning)

Round 1

Reviewer 1 Report

This article explores the advantages of organizing online events for soil judging.

It is clear and well written, but there is a lack of recent references about COVID and post-COVID online events. I would also like more information and references about the advantages and disadvantages of learning in these events.

 

Author Response

Please see the attachment 

Author Response File: Author Response.pdf

Reviewer 2 Report

Simple paper, with clear and simple aim and frankly expected results.

I miss a paragraph or section discussing the quality of the results achieved by the students compared to the previous face-to-face competitions. This is because the competition was held under great constraints.

This could give an additional view to the online version of the judging contest.

Otherwise just a few minor remarks.

Comments for author File: Comments.pdf

Author Response

Response attached 

Author Response File: Author Response.pdf

Reviewer 3 Report

Thank you for doing the work to collect this data and share it with the broader community. Overall, I think there is worthwhile content here that merits publication, as long as there was in fact IRB approval for the research. Some expanded analysis and careful revisions can resolve the remaining issues.

Overall key items to address:

IRB approval – This human-subjects research requires IRB approval. Please include details of the IRB approval that was given prior to survey distribution.

Relationship to Owen et al. (2021) – This research was done in the context of the same region contest as the Owen et al. paper, with the timing of the surveys overlapping. It does appear to be focused on distinct research questions and distinct survey questions, but it is important to explain the relationship, what overlap exists (e.g., were these separate questions within the same survey?), and how this work is unique and distinct.

Sample size – n = 31 and n = 6 are pretty small numbers for a survey. Given the scope of the work and that these represent high response rates (86% and 100%), I don’t have a problem with the numbers per se, but it is important to acknowledge how these sample sizes might limit the interpretation of your results.

Methods - No statistical analysis was performed. Is your data not suitable for statistical analysis? If so, why not?

Results – The results section in general needs a lot of work. It currently reads mostly as a list repeating what’s in the tables (e.g., “The lowest mean scores were found within the individual areas related to: The accuracy in which the event measured student skills (M = 3.33, SD = 0.82)”), repeated over and over for various questions, which makes it hard to read. The approach to summarizing/grouping results (which is helpful in making it more than just writing out everything in the tables in long form) is to pick out the highest and lowest scores, but the way this is currently done strikes me as over-interpretation given that many values are similar to each other. If you can perform some type of statistical analysis, this would allow you to make some comparisons. Some specific examples are identified below. Also, in some cases there seems to be an attempt at discussion of results that just repeats them. Look for ways to group/summarize results so they are accessible here, but save discussion for the discussion section.

Discussion – The overall analysis is on the simplistic side, and it seems like you may be overinterpreting your results to try to reach some conclusions. Presenting items that came in at around 4 as strengths and items that came in around 3.5 as weaknesses, without being very clear about the degree of difference, is misleading. It is also important in this section to be transparent about the limitations of this work given the number of responses (even if you run some statistics and have some clear differences you can point to).

 

Specific items to address:

Line 67-76: In this section (and elsewhere in the manuscript) at times the authors are listed prior to the citation number and at times they are not. Please be consistent in following the convention for the journal.

Lines 85-88: This sentence is a bit convoluted. Please clarify.

Lines 89-90: NCSC – abbreviation not defined. Also, exact dates are unnecessary and distracting. Also, unclear why the cancellation of the national contest in spring forced a cancellation/modification of region contests later that year. Maybe clarify that these were related events rather than one causing the other?

Line 94: massive is a strong word given that this statement isn’t tied to a specific reference. This language could be modified to properly express uncertainty around the expected scale of the setback.

Line 97: Clarify that these are the students and coaches that participated in this virtual region contest

Line 117: Link doesn’t work – please fix reference

Line 129: “We spent several hours…” This sentence feels unnecessary, especially since ‘several’ is such a generic term. Perhaps the key point is that there was collaboration across multiple organizers that was done both face-to-face and virtually?

Figures 2 and 3: Perhaps connect these two as Fig 2a and 2b, since they are two parts of the same score card?

Lines 173-180: How was this specific set of questions chosen? Were they based on some existing body of work? Were they created by the authors to address key areas of interest?

Tables 1 and 3: Needs a little more context/clarification – were these word for word the statements the students were given on the survey? If so, what were the Likert options? (agree/disagree wouldn’t make sense for these). Is Overall satisfaction an average of the above items? If so, does this make sense to just average these different topics?

Tables 2 and 4: Make sure if Tables 1/3 are modified to clarify what the scale was (as opposed to it being stated in the body) that the same is applied to tables 2 and 4. Again, is the overall an average? Why is an average of these a useful metric?

Line 186: Number of responses for students (and coaches when you get to section 3.2) and response rates should be reported up front before discussing the results.

Lines 187, 207, 222: Repeating what I mentioned with the framing of the tables, I’m not sure a mean of these diverse topics is particularly useful. A range of scores within the category would be more relevant.

Lines 188-196: Given that these are descriptive data with 31 total responses and without any statistical analysis, how confident are you that a 3.94 is really any different from a 4.17? Splitting the highest and lowest scores in this way seems like a stretch. As there do seem to be topical trends distinguishing the higher and lower scores, I’m still fine with you pointing out these themes, but the inherent uncertainty and the relatively close mean scores should be clearly acknowledged.

Lines 197-199: Delete

Lines 199-204: Repeat

Line 208: This wasn’t the highest mean score in table 2 – are you considering some subset of those questions here?

Lines 207-210: This feels redundant/simplistic – they said they enjoyed the flexibility so this suggests they enjoyed the flexibility?

Line 211: This wasn’t the lowest score

Lines 224-226: What are your cutoffs for satisfied vs neutral feelings? Again, with an n of 6, is 4.00 any different from 3.67? Or even 3.17?

Lines 229-231, 241-243: Again redundant/simplistic.

Lines 252-253: Region IV students… Rephrase – currently it sounds like the survey responses gave them the experience.

Lines 285-293: This section is not discussion of the results.

Lines 296-297: Here you state focus wasn’t impacted, in line 271 you state that staying focused was a challenge. Maybe this is a typo, or was supposed to refer to the coaches’ perceptions? Or maybe this is a great example of how a mean score of 2.35 may or may not mean much without some more clear structure to the analysis.

Line 319-322: Needs rewording, too much restatement of results for this section

Line 322-324: This statement belongs in the methods when describing the surveys and how questions were selected

Line 349: “in the future during the pandemic” - ?

Lines 344-359: Much of your conclusions paragraph are hypotheticals which go well beyond what your research addressed. A brief discussion of future implications is appropriate, but the focus should be on the specific findings of your research.

Errors are not extreme, but common enough to be distracting. Please do some careful editing for grammar and readability.

Author Response

Please see attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

I think the document has been improved with the corrections made.
