Review
Peer-Review Record

A Narrative Review of Immersive Technology Enhanced Learning in Healthcare Education

Int. Med. Educ. 2022, 1(2), 43-72; https://doi.org/10.3390/ime1020008
by Chris Jacobs 1,*, Georgia Foote 1, Richard Joiner 1 and Michael Williams 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 19 August 2022 / Revised: 4 October 2022 / Accepted: 7 October 2022 / Published: 15 October 2022

Round 1

Reviewer 1 Report

 

Thank you for the opportunity to review this article. It is an interesting field, and research into technology-based education is always welcome. However, I have some suggestions for the authors' consideration.

 

The protocol is not sufficient. The text refers to Ref 48 for the protocol, but no actual information is provided there. I tried to track it down via OSF and found https://osf.io/tpjyw/wiki/home/, but this is not a protocol. How can we tell whether there have been any deviations from the a priori planning, hypothesising, etc.? The PRISMA checklist provided in the supplementary material states (in item 24b) that the protocol can be found on page 6, but it was not there either. The point of a systematic review is that the process is, well, systematic: the steps taken are clear and methodical. Registering a title and an incredibly broad statement of the research questions as the protocol on OSF is not a genuine step.

 

 

In its current form, too many important pieces of information are missing. A systematic review needs to be reproducible. On what date was the search actually run? The text says "papers including 2022", but many of these will not have been published yet, so the search certainly does not include all papers from 2022. This will be quite misleading to readers looking at this in the future. The best date I can go on is 7 January, when the OSF file was created, which means the search would have missed nearly all papers from 2022.

 

The fact that there was no a priori protocol, that the approach was not particularly systematic, that only one database was used, and that information (dates, etc.) is missing means that this is not a formal systematic review in the modern sense. The title may need to be amended.

 

What is the authors' definition of "medical education"? Some of the key articles on virtual and augmented reality are in pre-medicine, allied health, science degrees or other similar disciplines. Does pharmacology qualify, a field which also has a wealth of immersive-reality research? It is not clear what constitutes medical education.

 

What is the rationale for using the Web of Science database for a review on an educational topic? The search should have been performed in one of the educational databases, such as ERIC. Many education articles, even in the med-ed field, are not indexed in the science or medical databases, which means that many relevant papers that would otherwise meet the criteria have been omitted. For example, https://doi.org/10.14742/ajet.3840 (in fact, looking now, quite a few articles from journals such as AJET have been omitted). This would be acceptable if the review were based solely on WoS, but the mention that Google Scholar was also used means these other articles should have been picked up.

 

The title may have to read as "a meta-analysis of papers from a scientific database", or similar, to account for the use of WoS as the sole database. This would at least highlight to the reader that educational databases were not used, even though this is an educational topic.

 

360 video is not commonly considered an immersive technology. It is unclear how this ended up grouped within this definition.

 

How were the title and abstract screening done, and then the full-text screening? This is usually done in duplicate with a second author, with disputes referred to a third author; PRISMA checklist items 8-9 require this information. Having someone score 20% of the papers is usually not enough. I would insist on duplicated screening at each step, as this is the current expected standard.

 

Reporting of risk of bias, study bias and certainty are all components of the PRISMA checklist, but none of these was performed in this study. There have been too many omissions (stated above) to genuinely say that this article follows the PRISMA guidelines, and this statement should be removed from the abstract. Perhaps the authors could state that this was a "modified PRISMA" approach, or something along those lines.

 

There is obviously a lot of work in this article. It is on a good topic, and I am pleased to see more research in this area. However, it simply does not meet the modern standard of a systematic review. The registered statement on OSF is disappointing, steps of PRISMA have been missed, there is no duplication of screening, there is not even a date on which the search was conducted, and a single, relatively inappropriate database was used. The review is in no way reproducible with its stated methods, and the potential bias is confounding. I am just not sure what value this submission brings to the academic community in its current form.

 

Perhaps this can be fixed in revision, but I am not sure what to recommend. The authors cannot go back and prepare a protocol, but they should at least state that an a priori protocol was not prepared (and mention the limitations of this, along with the associated risk of bias). The potential bias of a single screener should also be mentioned. The omission of articles due to the use of a single database is a worry. The lack of a definition of medical education also needs to be addressed.

 

Author Response

See attached.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper presents a literature review on immersive technology (VR/AR/360-degree video) in the education of healthcare professionals. The novelty of this review is the use of the Medical Education Research Study Quality Instrument (MERSQI) to assess the learning outcomes in the studies surveyed. In that respect, it appears to bring a noteworthy contribution to the field.

Some issues I detected with the paper:

1) It is unclear to what extent VR/AR were considered, given how much the medium evolved, especially in the late 2010s. For instance, the authors state that they took into account studies from as early as 2002, when headsets like the Oculus were not in use. Indeed, studies included in the paper's references (e.g., [64]), as well as studies in the supplementary data extraction table (e.g., Brunner et al., 2004), appear to work with the MIST VR system, which can be classified as non-immersive VR (as the authors state, "For the purpose of the review non-immersive VR will not be considered, for example computer screens displaying an interactive program such as a virtual patient. Immersive VR allows users to interact via head-mounted displays"). One would therefore expect these studies not to be included in the review. Other studies use entirely different hardware systems (e.g., the EYESi Ophthalmic Surgical Simulator), which again differ from studies using commodity VR hardware such as the Oculus. The same can be said for AR: are the authors considering mobile/smartphone AR or only head-mounted AR in their analysis, and why? These points are not clear, and one would expect the authors to argue more on the hardware used and the inclusion/exclusion criteria in relation to it.

2) The authors state four research questions on page 2, which are not clearly addressed in the Discussion section. Consider (Tang et al., 2021), cited by the authors as reference [13]. Admittedly, the authors appear to have modelled their paper quite closely on that survey, as evidenced by some similar research questions (e.g., RQ1), the definition of immersive technology (including the Milgram reality-virtuality continuum), the parallel structure of each paper's Section 3 (right down to Figure 3 in the paper under review and Figure 2 in Tang et al.), etc. In the latter paper, the results and discussion are quite obviously linked to the research questions. I would expect the authors to similarly revisit their research questions (deriving key takeaways and insights), then devote the discussion to further highlighting their contribution, that is, being "the first review to categorise subgroups in MERSQI and to critically look at instrument methods to assess learning in detail". A clearer discussion of the use of the instrument, and why it is important to consider it in this and future work, would be helpful for the reader. Also, given the similarity of the two studies mentioned above, the authors could provide arguments to better illustrate the added value their study brings in relation to (Tang et al., 2021).

Author Response

See attached.

Author Response File: Author Response.pdf

Reviewer 3 Report

This is an interesting and well-written paper. However, I suggest highlighting in the review the presence or absence of studies on inter- and intra-observer agreement, and the related results for the evaluated technology.

The Discussion could be shortened.

Author Response

See attached.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Thank you for making some corrections in response to the review comments. There is not much left to change, but I have some major remaining issues with the manuscript in its current form. That being said, rather than recommend rejection, I would like to offer the authors some suggestions below to make this suitable for publication.

 

1. I still have reservations about classing 360 video as immersive reality. However, I see from the Blair et al. article on "learning from 360-degree film…" that the authors do have evidence supporting this, so I am content with this consideration and have no further objections regarding its inclusion.

2. I have the same concerns as in my previous review regarding the use of PRISMA. Re-classifying this as using the PRISMA extension for scoping reviews (PRISMA-ScR) is also not common practice. I have already mentioned the protocol limitations in the previous review, but too many of the other steps have been omitted. Moreover, these guidelines are best used to guide the study from the start; retrospectively filling the criteria is not as strong. Frameworks such as PRISMA-ScR are best performed systematically, in discrete steps, and reported as such. I would recommend that the term "scoping review" be removed from the title, and that references to PRISMA-ScR be removed. Nonetheless, this article would fit well as a narrative review. As such, could the title be changed to "A narrative review of immersive technology-enhanced learning in healthcare education"? This is similar to the initial title submitted prior to review, just with some extra clarity.

3. The lack of an educational database in an educational review paper represents an odd research design, not the simple limitation presented in the document. Given the lack of a proper database scan, it is not clear what this article contributes to the literature. Consider the reverse: if one conducted a medical intervention review using an educational database, it would simply miss too many medical journals and papers to be valid. A narrative review of MEDLINE/PubMed can feasibly be completed; however, this just needs to be clearly stated as the process that took place. Add the database information to the abstract, and ensure this is made very clear early in the paper. It was obvious on an initial scan that many of the foundational educational studies in the healthcare education field were omitted, and experienced readers will pick up on this. However, with a bit of clarity, this concern can be answered simply by the fact that most educational journals are not indexed in MEDLINE. Stating this from the abstract onwards will assist.

Author Response

Thank you for your comments. Both have been accepted and appreciated. The title has been reworded as suggested, and PRISMA-ScR has been removed from the abstract and methods.

We have also made the WoS search clear in the abstract and left the description of the inclusion of MEDLINE in the methods, so that, as recommended, the reader can see the limited databases from the outset. These points remain in the limitations section of the discussion.

Yours sincerely,

Authors

Reviewer 2 Report

I would like to thank the authors for taking the time to address my comments and concerns with the paper. As presently constructed, I feel confident in recommending the article for publication.

One minor suggestion would be for the authors to incorporate the final sentence of their response letter to me ("It remains the only review that uses a validated measure for evaluating paper quality in immersive technology for clinical based education and a novel approach to present a synthesis of findings to a reader that incorporates paper quality scores.") into the article's conclusion; the authors do a good job of highlighting their contribution in this sentence, and it is missing from the text.

Author Response

Thank you for noting this; we have added this statement to the conclusion.

Yours sincerely,

Authors
