Communication

Mixed Reality for Pediatric Brain Tumors: A Pilot Study from a Singapore Children’s Hospital

1 Neurosurgical Service, KK Women’s and Children’s Hospital, 100 Bukit Timah Road, Singapore 229899, Singapore
2 Department of Digital Integration, Medical Innovation and Care Transformation, KK Women’s and Children’s Hospital, 100 Bukit Timah Road, Singapore 229899, Singapore
3 Department of Chemistry, National University of Singapore, 21 Lower Kent Ridge Rd, Singapore 119077, Singapore
4 Division of Surgery, KK Women’s and Children’s Hospital, 100 Bukit Timah Road, Singapore 229899, Singapore
5 Department of Neurosurgery, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433, Singapore
6 SingHealth Duke-NUS Neuroscience Academic Clinical Program, 11 Jalan Tan Tock Seng, Singapore 308433, Singapore
7 SingHealth Duke-NUS Paediatrics Academic Clinical Program, 100 Bukit Timah Road, Singapore 229899, Singapore
* Author to whom correspondence should be addressed.
Surgeries 2023, 4(3), 354-366; https://doi.org/10.3390/surgeries4030036
Submission received: 5 June 2023 / Revised: 27 June 2023 / Accepted: 10 July 2023 / Published: 12 July 2023

Abstract
Mixed reality (MR) platforms for neurosurgical education, training, and clinical use have gained popularity in recent years. However, their use in pediatric neurosurgery is comparatively unexplored. We designed a study to explore the use of an MR-based application for pediatric brain tumors. The primary aim is to determine if the use of MR provides the neurosurgical team with a better understanding of the visuospatial anatomy of neoplasms in pediatric craniums and to guide operative planning. Secondary aims include exploring its use as an educational tool for junior doctors and medical students. Methods: Three-dimensional anatomical models of selected pediatric brain tumors are created and uploaded to an MR application. The processed data is transferred into designated MR head-mounted devices. At the end of the trial, users are required to fill in an evaluation form. Results: A total of 30 participants took part in this study. Based on the collated feedback data, all of them agreed that the MR platform was useful as a tool in different aspects of understanding the selected pediatric brain tumors. Conclusions: This study demonstrates a proof of concept of the feasibility of MR platforms for a better understanding of pediatric brain tumors. Further development is needed to refine the current setup to be more versatile.

1. Introduction

Brain tumors are the most common solid tumors in children, comprising up to 20% of all childhood cancers [1,2]. To date, pediatric brain tumors remain the leading cause of cancer-related deaths in children globally [2]. Anatomically, many of these tumors are infiltrative and/or deep-seated; hence, they carry a constant risk of causing significant physical and cognitive disability [3]. Novel insights from molecular studies have given clinicians a deeper understanding of these challenging neoplasms and, in selected cases, now guide therapy [4]. Under such circumstances, the role of the pediatric neurosurgeon is paramount: to alleviate raised intracranial pressure via maximal safe resection and/or to provide representative tissue from biopsy for diagnostic investigations. Modern technological adjuncts aim to help the operating surgeon improve the safety and efficacy of brain tumor resection.
In recent years, there has been an increasing body of literature on the application of mixed reality (MR) systems for pre-operative planning and intraoperative neuro-navigation to spatially visualize brain tumors with precision [5,6,7,8]. An MR system encompasses two similar yet distinct technologies: virtual reality (VR) and augmented reality (AR) [5]. The former (VR) immerses the user in a fully artificial digital environment, while the latter (AR) overlays virtual objects onto the real-world environment. In the AR approach, virtual information is “anchored” to the real object, and this combination is visualized via a head-mounted display (HMD) [9]. The HoloLens (Microsoft Corporation, Redmond, WA, USA) is one such example that has been shown to be useful in surgical innovation [7,10]. In addition, user tracking and real-time feedback to the system allow a seamless user–environment interface [11]. Specifically in neurosurgery, AR prototypes have been developed to overcome the issue of limited views of the surgical field during the course of an operation [8,12,13]. Studies have demonstrated that MR technology is useful as a training tool in the education of neurosurgery residents [6,14]. Although MR systems have been used as a teaching tool for simulation in adult neurosurgery, their use in children is comparatively unexplored [15]. Furthermore, our literature search reveals that most publications on neurosurgical MR applications stem from Europe and North America [9,16,17]. Overall, similar systems for pediatric neurosurgery from our part of the world in Southeast Asia remain underrepresented at this point in time.
Separately, one of the challenges in pediatric neurosurgery is the developing cranium in this unique population. Owing to their growing head sizes and developing craniofacial features, traditional surgical landmarks used in adults may not be fully applicable when planning brain tumor surgery in young children. Taken together, spatial visualization of intra-axial lesions in this age group is less straightforward. This is especially so for neurosurgical residents in training who are unfamiliar with smaller craniums and their variable anatomy. Standard neuronavigational devices rely on a two-dimensional (2D) virtual environment displayed on a workstation [9]. The operating neurosurgeon is required to self-visualize the tumor’s three-dimensional (3D) spatial location in the patient’s unopened cranium. Here, the main concern is iatrogenic injury to the surrounding eloquent parenchyma and critical neurovascular structures. Under such circumstances, we hypothesize that the use of MR as a teaching tool and operative adjunct will be useful. As a step toward evaluating the feasibility of MR systems at our institution, we designed a pilot study to explore the use of an MR-based application for pediatric brain tumors. The primary aim is to determine if the use of MR provides the neurosurgical team with a better understanding of the spatial anatomy of brain tumors in pediatric craniums in order to better guide perioperative planning. Secondary aims include exploring its use as an educational tool for junior neurosurgical trainees and medical students.

2. Materials and Methods

2.1. Overview of Study Design

This is a single-institution study conducted for the purposes of neuroanatomy education and perioperative guidance for pediatric brain tumors. An ethics waiver was provided by the hospital ethics board as there is no patient contact involved (SingHealth CIRB Reference Number: 2022/2466). Archival radiological images of brain tumors more common in the pediatric population are selected. In particular, cases are chosen to emphasize the spatial proximity of the tumors in relation to their surrounding critical neurovascular structures. Typical examples include suprasellar, pineal region, and posterior fossa tumors. Next, a 3D anatomical model is created based on multimodal data from fine-cut 1 mm magnetic resonance imaging (MRI) scans. This is performed manually via an open-source platform (3D Slicer, https://www.slicer.org/, accessed during the period of 1 July 2022 to 31 July 2022) for the analysis and display of information derived from medical imaging [18]. The data, output as stereolithographic (STL) files, are subsequently processed by uploading them to an MR application (Holoeyes XR; Holoeyes Corporation, Minato-ku, Tokyo, Japan). Briefly, this cloud-based platform has an in-built processor that utilizes artificial intelligence and machine learning algorithms for refining complex data. Subsequently, the processed data is downloaded into designated head-mounted display (HMD) devices. The application of this specific MR software in neurosurgery has been previously described in a clinical vignette [7].
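To illustrate the STL hand-off between the segmentation step and the MR application, the sketch below writes a minimal binary STL file in plain Python. This is an illustration only, not part of the study workflow: in practice the mesh is exported directly from 3D Slicer, and the function name, file name, and triangle data here are hypothetical.

```python
import struct

def write_binary_stl(path, triangles):
    """Write a binary STL file.

    Each triangle is ((nx, ny, nz), (v1, v2, v3)), where the normal and
    the three vertices are (x, y, z) tuples (here taken as millimetres).
    """
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                         # 80-byte header (unused)
        f.write(struct.pack("<I", len(triangles)))  # 4-byte triangle count
        for normal, verts in triangles:
            f.write(struct.pack("<3f", *normal))    # facet normal
            for v in verts:
                f.write(struct.pack("<3f", *v))     # three vertices
            f.write(struct.pack("<H", 0))           # attribute byte count

# One hypothetical triangle from a segmented tumour surface.
tri = ((0.0, 0.0, 1.0),
       ((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)))
write_binary_stl("tumour_model.stl", [tri])
```

Each triangle record occupies 50 bytes (twelve 4-byte floats plus a 2-byte attribute count), so the file size is 84 bytes of header plus 50 bytes per facet.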
For the purposes of this study, we employed the following HMDs: the Oculus Quest 2 (Reality Labs, Meta Platforms, Menlo Park, CA, USA) for the VR experience and the HoloLens (Microsoft Corporation, Redmond, WA, USA) for the AR experience. For the AR aspect of this study, a hologram of the intracranial structures is created and superimposed onto the real world for observation, while the VR system immerses the user inside a virtual environment [19]. Target users include neurosurgeons, neurosurgical trainees, junior doctors, and medical students. An evaluation survey based on the Likert scale is conducted for the users. Briefly, a score of 1 corresponds to the most negative opinion and a score of 5 to the most positive opinion, and the scale is used to measure various aspects of their experience with the MR system. The breakdown of the scoring is as follows: 1 = Disappointing; 2 = Insufficient details, but still usable; 3 = Neutral; 4 = Good details, but can be improved; 5 = Exceptional. A free-text box is provided for the participant to give additional comments at the end of the survey. Of note, the rationale for using this method of evaluation is based on contemporary studies on MR use in the clinical and/or medical education setting for neurosurgery [16]. A diagram of this study’s outline is illustrated in Figure 1.

2.2. Outline of Trial with Mixed Reality Models

Briefly, the Oculus Quest 2 is a VR system that consists of an HMD and two handheld touch controllers equipped with sensors. Here, the user is fully immersed in a virtual space and can interact with virtual objects in simulated environments using the controllers [20,21]. This standalone system relies on internet connectivity to access the cloud-based Holoeyes XR platform. Once worn, the HMD offers each participant good stereoscopic visualization and depth perception through binocular lenses inside the headset environment—in this case, a virtual operating theatre. This VR system includes two wireless controllers that are trackable by a camera located at the front of the headset. These controllers allow user interaction via action buttons, thumbsticks, and analog triggers. We conducted this exercise in a closed conference room in which the participants were seated (Figure 2). For the AR-related experience, the Microsoft HoloLens was worn by the primary user in the actual operating theatre. Details of its use in the adult neurosurgical operating theatre have been previously described in the literature [8,22]. Similar to the VR setup, the HoloLens is also a standalone system that is wirelessly connected to the cloud-based Holoeyes XR platform. A 3D-rendered pediatric brain tumor image is downloaded into the HMD and moved manually to overlay the mannequin’s head. Surface landmarks, such as the pinna, nasion, and tip of the nose, are verified to ensure that the registration is acceptable, allowing the hologram to be “anchored” so that users eventually see the 3D hologram merged with the physical head of the mannequin (Figure 3 and Figure 4). A virtual external ventricular drain (EVD) catheter is integrated into both the VR and AR platforms. The users are tasked to attempt to insert the catheter into designated regions within each model.
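The landmark verification step described above can be thought of as a residual-error check. The following sketch is a simplified, hypothetical illustration: it aligns model landmarks to measured mannequin landmarks by centroid translation only (a full rigid registration would also solve for rotation) and reports the root-mean-square residual, which an operator could compare against a tolerance. All coordinates and names are invented for illustration.

```python
import math

def centroid(points):
    """Mean position of a list of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def registration_rms(model_pts, world_pts):
    """Translate the model so its centroid matches the measured
    landmarks' centroid, then return the residual RMS distance (mm)."""
    cm, cw = centroid(model_pts), centroid(world_pts)
    shift = tuple(cw[i] - cm[i] for i in range(3))
    moved = [tuple(p[i] + shift[i] for i in range(3)) for p in model_pts]
    sq = [sum((m[i] - w[i]) ** 2 for i in range(3))
          for m, w in zip(moved, world_pts)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical landmark coordinates (mm): nasion, nose tip, left pinna.
model = [(0.0, 80.0, 0.0), (0.0, 95.0, -30.0), (-70.0, 0.0, -20.0)]
world = [(5.2, 81.1, 2.0), (4.9, 96.0, -28.2), (-65.1, 0.9, -17.8)]

err = registration_rms(model, world)
print(f"residual RMS error: {err:.2f} mm")  # accept if below a set tolerance
```

A small residual suggests the hologram is anchored acceptably; a large one would prompt re-registration before the catheter insertion exercise.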
Although this particular exercise is more relevant to the neurosurgical trainees and consultants, the rest of the participants are also invited to try it.

2.3. Data Analysis

Statistical analyses are generated using GraphPad Prism version 9.5.1 for Windows (GraphPad Software, La Jolla, CA, USA). As this study has a limited population, descriptive statistics are reported. These include the mean with standard deviation for continuous data, and frequency and percentage for categorical data.
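The descriptive summaries above (mean with standard deviation; frequency and percentage) can be reproduced with the Python standard library, as a rough cross-check of the Prism output. The response values below are hypothetical, not the study data.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical Likert responses (1-5) to one survey question.
scores = [4, 5, 3, 4, 5, 5, 4, 3, 2, 4]

# Continuous summary: mean with sample standard deviation.
print(f"mean = {mean(scores):.2f}, SD = {stdev(scores):.2f}")

# Categorical summary: frequency and percentage for each score.
counts = Counter(scores)
for score in sorted(counts):
    n = counts[score]
    print(f"score {score}: n = {n} ({100 * n / len(scores):.1f}%)")
```

Note that `stdev` computes the sample (n − 1) standard deviation, which matches the default in most statistics packages.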

3. Results

3.1. Participant Demographics

A total of 30 participants took part in this study. These consisted of 5 (16.7%) specialist-accredited neurosurgeons, 8 (26.7%) neurosurgical trainees, 6 (20%) junior doctors (non-trainees rotating through neurosurgery), and 11 (36.7%) medical students. In the study cohort, 17 (56.7%) reported familiarity with the use of the MR equipment. Here, this referred to participants with substantial MR experience from regular video gaming and/or other unrelated research projects involving similar MR platforms. This subgroup was made up of six neurosurgical trainees, four junior doctors, and seven medical students (Supplementary Figure S1a,b).

3.2. Evaluation of Visual Quality of Project Models

Overall, the participants gave favorable scores (i.e., ≥3 out of 5) for the visual quality of the MR models. Of interest, 12.5% (n = 4) of the participants scored 2 out of 5 for Questions 1 and 2 (Figure 5). For Question 1, they consisted of two consultants and two final-year neurosurgical residents, while for Question 2, there were three consultants and one final-year neurosurgical resident. All of them gave similar comments in the free-text section of the evaluation form, which are highlighted in Section 3.3. Questions related to this aspect of the evaluation are shown in Table 1.

3.3. Other Relevant Feedback from Study Participants

As part of the feedback exercise, 97% (n = 29) of participants filled in the free-text option at the end of their evaluation forms. Overall, the free-text option offered useful insights from different perspectives. Common examples were the inclusion of “normal” brain models and word labeling of intracranial structures for medical students and junior doctors. For this sub-cohort, the consensus was that their knowledge of neuroanatomy was not as comprehensive as that of the neurosurgical trainees and consultants. On the other hand, the neurosurgical trainees and consultants were keen to explore a more “hands-on” approach with the models. This included, firstly, haptic feedback for the catheter insertion exercise (especially with the AR aspect of the trial) and, next, the adjoining cervical spine being added onto the existing cranial models so that they could practice positioning the head for the intended surgery. Here, they felt that knowledge of where the tumor was spatially located would help determine the patient’s final position prior to pinning the head (e.g., supine, pterional, lateral, prone, and so forth). Additional suggestions for improvement were to delineate the finer intracranial structures around the tumor. For instance, it would be relevant to be able to see the individual structures of the anterior optic pathway and the vessels of the ophthalmic and clinoidal parts of the internal carotid artery during surgery for suprasellar tumors, and where the internal cerebral veins would lie in relation to pineal tumors (Table 2).

4. Discussion

For neurosurgeons, brain tumor surgery presents complex challenges due to a delicate tradeoff between removing as much neoplastic tissue as possible and minimizing the loss of brain function [4]. To achieve this successfully, there has to be a very low threshold for errors [21]. Accordingly, the armamentarium of operative tools for neurosurgeons has evolved tremendously to address such safety concerns. Currently, MR platforms are utilized in multiple industries, such as recreational gaming, film, and the military [7]. Their main advantage is the technological ability to generate detailed, 3D interactive images that can be used to provide quantitative data via a simulator device [23]. In recent years, the use of MR in neurosurgery and the neurosciences has been steadily growing [23]. Established applications include training in dexterity and technical skills, teaching neuroanatomy, and planning surgical procedures [19].
At the same time, we are cognizant that one of the difficulties relevant to the use of new surgical innovations is the existence of a “learning curve” [24]. Here, this refers to the time taken for an average practitioner to be able to use the new adjunct independently with an acceptable outcome [25]. Similar to most devices, the observed clinical outcome is a function of the effectiveness of the device paired with the skill of the surgeon [26]. For our study, almost half of the participants (43.3%) had no prior experience with MR systems. Thus, the initial expectation was that this subgroup of users might face difficulties using the devices. Nonetheless, they found the devices easy to use and the virtual models easy to maneuver. On the whole, we did not observe any significantly prolonged learning time when the devices were provided to them for the first time.

4.1. Current Visual–Spatial Limitations in Pediatric Brain Tumor Surgery

The present practice in the operating theatre involves the neurosurgeon relying heavily on image-guided neurosurgery to achieve maximal safe resection of brain tumors [6]. As previously mentioned, this setup requires the preoperative images to be displayed on a computer screen in the operating theatre. The screen is divided into three separate images (sagittal, coronal, and axial), and the neurosurgeon mentally combines them to create a single 3D composite image [6]. The critical part of this exercise requires one to integrate the 2D information into a 3D spatial relationship between the tumor and the surrounding critical structures in order to choose the best surgical strategy [9]. Hence, this workflow involves frequent switching of views during the surgery and may be disruptive intraoperatively [6,27]. MR devices in the form of 3D goggles, in conjunction with their software applications, have been described as a useful intraoperative adjunct that allows the neurosurgeon to seamlessly visualize the tumor spatially within the brain [8]. Nonetheless, these cases have mostly been reported in adult neurosurgery, and overall, the incorporation of MR systems into routine neurosurgical use is still in its infancy [5].
Virtual reality-driven 3D reconstruction navigation has also shown improvements over traditional image modalities in craniofacial, sellar, and infratentorial tumor resection [28,29]. Viewing the anatomical objects via such a platform allows the neurosurgical approach to be discussed in depth with the rest of the multidisciplinary team [30]. Studies have demonstrated that the MR approach has the largest potential in terms of increased ergonomics, since it mixes real and virtual objects, producing a visualization environment where physical and digital objects co-exist and interact in real time [9]. Compared to the traditional monitor-based visualization of standard navigators, AR HMDs preserve the user’s egocentric view of the surgical field. For this reason, they are deemed to be the most ergonomic and effective output medium to guide procedures that are performed manually under the surgeon’s direct vision [31].

4.2. The Reality of Learning Neuroanatomy and Neurosurgery in Present Day

Through the years, cadaveric dissection has been the mainstay of teaching neuroanatomy. However, it has several limitations, including the availability of specimens, costs, and a substantial time commitment [32,33]. To address these drawbacks, computer-based VR methods have been trialed as practical alternatives for medical training [14,34]. To date, studies have shown that VR models are comparable to cadaver specimens in teaching skull base anatomy [35]. Furthermore, we are aware that a detailed roadmap of neuroanatomy confers a higher degree of confidence and success in neurosurgical procedures [36]. Mixed reality platforms can provide another avenue where trainees can repeatedly practice realistic simulations of various neurosurgical procedures and track their training progress in a controlled environment [6,34]. From a patient safety perspective, AR offers a protected training environment for neurosurgical residents [12]. The immersivity of these devices allows for the experience of different virtual contexts, fostering an interactive experience [37]. This is important as it integrates training and further development of the surgical curriculum, which will ultimately lead to a significant reduction in the cost of training [38]. For example, ventriculostomy is one of the first procedures that neurosurgical trainees learn [39]. Although it is considered a simple task, complications associated with poor insertion technique can be detrimental to patient outcomes [40,41]. Accordingly, we attempted to extrapolate such task-based exercises for our study’s brain models based on previous publications [39,42].

4.3. Study Reflections and Practical Limitations Encountered

For this study, we acknowledge that there are noteworthy limitations that need to be addressed. First and foremost, current workflows for visualizing 3D models with AR HMDs require multiple manual steps, making the process laborious and therefore unrealistic for daily clinical practice [43]. This is also reflected in our study, whereby images have to be manually delineated in each axial slice for the segmentation process, which is extremely time-consuming. To overcome this, some authors have described the use of a fully automated segmentation program—a plausible solution for similar projects in the future [43]. Next, we agree with the users’ comments that haptic feedback would be a good add-on to the existing platform to make the learning experience more realistic. Haptic feedback is broadly defined as the combination of tactile feedback through sensory skin receptors and kinaesthetic feedback through muscle, tendon, and joint sensory receptors [44,45]. Several studies have already shown that the application of haptic feedback systems to virtual reality scenarios for surgical training increases the realism of virtual tissue manipulation by adding sensory cues [33,45,46]. Thus, this endeavor is certainly a consideration as part of the improvements to our existing MR platform.
Following that, we are cognizant that the implementation of medical technology into existing healthcare systems is a considerable undertaking. High expenses incurred from the purchase and maintenance of the technology in question are likely to contribute to continually rising healthcare costs [47]. To date, the cost-effectiveness and benefits of MR systems in the clinical setting are still unknown [6,48]. Furthermore, MR tools for medical education are expensive due to the technology necessary for creating highly detailed 3D image environments with real-time user interactivity [14]. Despite these reputed advantages, we are cognizant that the literature so far has not shown the effectiveness of such platforms to be superior to that of conventional neuroanatomy education methods [49]. To offset the costs, the ideal scenario is that the device is versatile and can be readily used for multiple applications by different groups. For the purpose of our study, we attempted to diversify the use of our MR platform across medical trainees and professionals of varying expertise to address this issue.
On a separate but practical note, the phenomenon of brain shift continues to be a frustrating pitfall in maintaining accuracy during brain tumor resection [50]. This on-table experience invalidates the traditional neuronavigation patient-to-image mapping during surgery [51]. Current MR technologies anchor preoperative radiological images to the patient’s craniometric and/or facial landmarks to create a 3D, virtual view of the brain tumor inside the cranium. Therefore, from the operating neurosurgeon’s perspective, the MR system is unable to provide real-time mitigation of intraoperative brain shift. Under such circumstances, other established neurosurgical adjuncts such as intraoperative MRI (iMRI) or ultrasound (iUS) are better suited to overcome this issue [50,52].

4.4. Future Work and Directions

Based on the results of our pilot study, efforts are currently underway to implement some of the useful suggestions into the existing platform. These include, firstly, the inclusion of “normal” 3D brain models and other types of intracranial pathology from our pediatric population for medical students and junior doctors to understand visual–spatial anatomy in non-tumor patients. Next, additional details on the anatomy of the pediatric brain tumor models are required for neurosurgical trainees to better appreciate the various surgical approaches. To address this, we are looking into refining the segmentation process from the initial MRI images, especially with regard to the individual intracranial structures in each brain model. Finally, there is ongoing work to integrate this technology into the hospital’s current neuronavigation device for intraoperative use as the next step of this pilot project. This would be in line with the published literature, whereby the use of AR in adult neurosurgical procedures has been shown to be safe and useful [7,8,17,22]. However, challenges include the considerable costs involved in such an endeavor (as previously mentioned) and the long-term feasibility of such a setup in the face of rapid technological advancements.
In the present day, knowledge of the white matter tracts is crucial for the surgical treatment of tumors in eloquent regions of the brain [53,54]. Thus, preserving these structures is paramount during resection [30]. Recent studies have reported the feasibility of combining MRI diffusion tensor imaging (DTI) tractography and functional MRI sequences into MR platforms [30,55]. Another example is Quicktome (Omniscient Neurotechnology; Sydney, Australia), a software platform developed based on the Human Connectome Project (HCP). Using data from the HCP, it creates a patient-specific parcellation map of the cortex and its subcortical connectivity tracts to model multiple white matter tracts crossing the same region. These parcellations and tractograms are then superimposed onto the patient’s MRI scan to help visualize the anatomical relationship of the brain networks with the intracranial lesion of interest [56]. Taken together, one of our considerations is to explore the integration of functional networks from preoperative brain tumor imaging into the MR platform. Such endeavors aim to add another layer of functional preservation that is central to modern neuro-oncological surgery.
Beyond the confines of the operating theatre, the use of the MR platforms has been reported to be useful as part of the counseling process for patients undergoing elective neurosurgical procedures and postoperative neurorehabilitation [19,57]. At the time of this writing, studies demonstrate that VR applications help to improve communication with patients and their understanding in the perioperative setting, ensuring patient satisfaction, reducing litigation, and improving patient compliance [19,58].

5. Conclusions

In summary, there are certainly advantages for MR platforms to be integrated into the hospital’s setup. Overall, there is a wide range of long-term benefits in clinical, educational, and patient-centered care—in the present and future. However, our pilot study demonstrates pertinent limitations that warrant further development. Current pitfalls include costs, the versatility of the technology for medical professionals of varying expertise, and the long-term resilience of such a platform in real-life surgeries. Nonetheless, from the broader perspective of pediatric neurosurgery, the implementation of MR technology has the potential to fulfill the ethos of an academic institution dedicated to the care of children with brain tumors and beyond.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/surgeries4030036/s1, Figure S1: (a) Summary of study participants.; (b) Proportion of participants who either have previous experience or are familiar with MR platforms.

Author Contributions

Conceptualization, S.Y.Y.L. and S.L.; methodology, J.C.T., N.K.L. and S.Y.Y.L.; software, J.C.T. and B.C.C.; validation, T.M.C. and J.C.T.; formal analysis, S.L., T.M.C. and S.Y.Y.L.; investigation, T.M.C. and J.C.T.; resources, J.C.T., N.K.L. and S.Y.Y.L.; data curation, B.C.C., J.C.T. and T.M.C.; writing—original draft preparation, S.L., B.C.C. and S.Y.Y.L.; writing—review and editing, S.Y.Y.L.; visualization, J.C.T., T.M.C., S.L. and S.Y.Y.L.; supervision, S.Y.Y.L.; project administration, S.Y.Y.L.; funding acquisition, N.K.L. and J.C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Internal Affairs and Communications of Japan.

Institutional Review Board Statement

Ethics waiver is provided by the hospital ethics board, as there has been no patient contact involved (SingHealth CIRB Reference Number: 2022/2466) in this study. A copy of the waiver is available upon request.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors thank the Ministry of Internal Affairs and Communications of Japan, Holoeyes Incorporated (Japan), and E1 Concepts Pte Ltd. (Singapore) for their administrative and technical support for this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Karajannis, M.; Allen, J.C.; Newcomb, E.W. Treatment of pediatric brain tumors. J. Cell. Physiol. 2008, 217, 584–589.
  2. Cohen, A.R. Brain Tumors in Children. N. Engl. J. Med. 2022, 386, 1922–1931.
  3. Rees, J.H. Diagnosis and treatment in neuro-oncology: An oncological perspective. Br. J. Radiol. 2011, 84, S82–S89.
  4. WHO Classification of Tumours Editorial Board. Central Nervous System Tumours, 5th ed.; International Agency for Research on Cancer: Lyon, France, 2021; p. 568.
  5. Durrani, S.; Onyedimma, C.; Jarrah, R.; Bhatti, A.; Nathani, K.R.; Bhandarkar, A.R.; Mualem, W.; Ghaith, A.K.; Zamanian, C.; Michalopoulos, G.D.; et al. The Virtual Vision of Neurosurgery: How Augmented Reality and Virtual Reality are Transforming the Neurosurgical Operating Room. World Neurosurg. 2022, 168, 190–201.
  6. Tagaytayan, R.; Kelemen, A.; Sik-Lanyi, C. Augmented reality in neurosurgery. Arch. Med. Sci. 2018, 14, 572–578.
  7. Iizuka, K.; Sato, Y.; Imaizumi, Y.; Mizutani, T. Potential Efficacy of Multimodal Mixed Reality in Epilepsy Surgery. Oper. Neurosurg. 2021, 20, 276–281.
  8. Jain, S.; Gao, Y.; Yeo, T.T.; Ngiam, K.Y. Use of Mixed Reality in Neuro-Oncology: A Single Centre Experience. Life 2023, 13, 398.
  9. Chiacchiaretta, P.; Perrucci, M.G.; Caulo, M.; Navarra, R.; Baldiraghi, G.; Rolandi, D.; Luzzi, S.; Del Maestro, M.; Galzio, R.; Ferretti, A. A Dedicated Tool for Presurgical Mapping of Brain Tumors and Mixed-Reality Navigation During Neurosurgery. J. Digit. Imaging 2022, 35, 704–713.
  10. Al Janabi, H.F.; Aydin, A.; Palaneer, S.; Macchione, N.; Al-Jabir, A.; Khan, M.S.; Dasgupta, P.; Ahmed, K. Effectiveness of the HoloLens mixed-reality headset in minimally invasive surgery: A simulation-based feasibility study. Surg. Endosc. 2020, 34, 1143–1149.
  11. Tomlinson, S.B.; Hendricks, B.K.; Cohen-Gadol, A. Immersive Three-Dimensional Modeling and Virtual Reality for Enhanced Visualization of Operative Neurosurgical Anatomy. World Neurosurg. 2019, 131, 313–320.
  12. Meola, A.; Cutolo, F.; Carbone, M.; Cagnazzo, F.; Ferrari, M.; Ferrari, V. Augmented reality in neurosurgery: A systematic review. Neurosurg. Rev. 2017, 40, 537–548.
  13. Luzzi, S.; Giotta Lucifero, A.; Martinelli, A.; Maestro, M.D.; Savioli, G.; Simoncelli, A.; Lafe, E.; Preda, L.; Galzio, R. Supratentorial high-grade gliomas: Maximal safe anatomical resection guided by augmented reality high-definition fiber tractography and fluorescein. Neurosurg. Focus 2021, 51, E5.
  14. Coelho, G.; Figueiredo, E.G.; Rabelo, N.N.; Rodrigues de Souza, M.; Fagundes, C.F.; Teixeira, M.J.; Zanon, N. Development and Evaluation of Pediatric Mixed-Reality Model for Neuroendoscopic Surgical Training. World Neurosurg. 2020, 139, e189–e202.
  15. Mishra, R.; Narayanan, M.D.K.; Umana, G.E.; Montemurro, N.; Chaurasia, B.; Deora, H. Virtual Reality in Neurosurgery: Beyond Neurosurgical Planning. Int. J. Environ. Res. Public Health 2022, 19, 1719.
  16. Iop, A.; El-Hajj, V.G.; Gharios, M.; de Giorgio, A.; Monetti, F.M.; Edstrom, E.; Elmi-Terander, A.; Romero, M. Extended Reality in Neurosurgical Education: A Systematic Review. Sensors 2022, 22, 6067.
  17. Condino, S.; Montemurro, N.; Cattari, N.; D’Amato, R.; Thomale, U.; Ferrari, V.; Cutolo, F. Evaluation of a Wearable AR Platform for Guiding Complex Craniotomies in Neurosurgery. Ann. Biomed. Eng. 2021, 49, 2590–2605.
  18. Kikinis, R.; Pieper, S.D.; Vosburgh, K.G. 3D Slicer: A Platform for Subject-Specific Image Analysis, Visualization, and Clinical Support. In Intraoperative Imaging and Image-Guided Therapy; Jolesz, F.A., Ed.; Springer: New York, NY, USA, 2014; pp. 277–289.
  19. Vayssiere, P.; Constanthin, P.E.; Herbelin, B.; Blanke, O.; Schaller, K.; Bijlenga, P. Application of virtual reality in neurosurgery: Patient missing. A systematic review. J. Clin. Neurosci. 2022, 95, 55–62.
  20. Tao, G.; Garrett, B.; Taverner, T.; Cordingley, E.; Sun, C. Immersive virtual reality health games: A narrative review of game design. J. NeuroEng. Rehabil. 2021, 18, 31.
  21. Gonzalez-Romo, N.I.; Mignucci-Jiménez, G.; Hanalioglu, S.; Gurses, M.E.; Bahadir, S.; Xu, Y.; Koskay, G.; Lawton, M.T.; Preul, M.C. Virtual neurosurgery anatomy laboratory: A collaborative and remote education experience in the metaverse. Surg. Neurol. Int. 2023, 14, 90.
  22. Incekara, F.; Smits, M.; Dirven, C.; Vincent, A. Clinical Feasibility of a Wearable Mixed-Reality Device in Neurosurgery. World Neurosurg. 2018, 118, e422–e427.
  23. Scott, H.; Griffin, C.; Coggins, W.; Elberson, B.; Abdeldayem, M.; Virmani, T.; Larson-Prior, L.J.; Petersen, E. Virtual Reality in the Neurosciences: Current Practice and Future Directions. Front. Surg. 2021, 8, 807195.
  24. Kuznietsova, V.; Woodward, R.S. Estimating the Learning Curve of a Novel Medical Device: Bipolar Sealer Use in Unilateral Total Knee Arthroplasties. Value Health 2018, 21, 283–294.
  25. Subramonian, K.; Muir, G. The ‘learning curve’ in surgery: What is it, how do we measure it and can we influence it? BJU Int. 2004, 93, 1173–1174.
  26. Kirisits, A.; Redekop, W.K. The economic evaluation of medical devices: Challenges ahead. Appl. Health Econ. Health Policy 2013, 11, 15–26.
  27. Léger, É.; Drouin, S.; Collins, D.L.; Popa, T.; Kersten-Oertel, M. Quantifying attention shifts in augmented reality image-guided neurosurgery. Healthc. Technol. Lett. 2017, 4, 188–192.
  28. Wang, S.S.; Zhang, S.M.; Jing, J.J. Stereoscopic virtual reality models for planning tumor resection in the sellar region. BMC Neurol. 2012, 12, 146.
  29. Zawy Alsofy, S.; Sakellaropoulou, I.; Stroop, R. Evaluation of Surgical Approaches for Tumor Resection in the Deep Infratentorial Region and Impact of Virtual Reality Technique for the Surgical Planning and Strategy. J. Craniofacial Surg. 2020, 31, 1865–1869.
  30. Ille, S.; Ohlerth, A.K.; Colle, D.; Colle, H.; Dragoy, O.; Goodden, J.; Robe, P.; Rofes, A.; Mandonnet, E.; Robert, E.; et al. Augmented reality for the virtual dissection of white matter pathways. Acta Neurochir. 2021, 163, 895–903.
  31. Vávra, P.; Roman, J.; Zonča, P.; Ihnát, P.; Němec, M.; Kumar, J.; Habib, N.; El-Gendi, A. Recent Development of Augmented Reality in Surgery: A Review. J. Healthc. Eng. 2017, 2017, 4574172.
  32. Filho, F.V.; Coelho, G.; Cavalheiro, S.; Lyra, M.; Zymberg, S.T. Quality assessment of a new surgical simulator for neuroendoscopic training. Neurosurg. Focus 2011, 30, E17.
  33. Lemole, G.M., Jr.; Banerjee, P.P.; Luciano, C.; Neckrysh, S.; Charbel, F.T. Virtual reality in neurosurgical education: Part-task ventriculostomy simulation with dynamic visual and haptic feedback. Neurosurgery 2007, 61, 142–148, discussion 148–149.
  34. Chan, J.; Pangal, D.J.; Cardinal, T.; Kugener, G.; Zhu, Y.-C.; Roshannai, A.; Markarian, N.; Sinha, A.; Anandkumar, A.; Hung, A.J.; et al. A systematic review of virtual reality for the assessment of technical skills in neurosurgery. Neurosurg. Focus 2021, 51, E15.
  35. Chen, S.; Zhu, J.; Cheng, C.; Pan, Z.; Liu, L.; Du, J.; Shen, X.; Shen, Z.; Zhu, H.; Liu, J.; et al. Can virtual reality improve traditional anatomy education programmes? A mixed-methods study on the use of a 3D skull model. BMC Med. Educ. 2020, 20, 395.
  36. Kockro, R.A.; Stadie, A.; Schwandt, E.; Reisch, R.; Charalampaki, C.; Ng, I.; Yeo, T.T.; Hwang, P.; Serra, L.; Perneczky, A. A collaborative virtual reality environment for neurosurgical planning and training. Neurosurgery 2007, 61, 379–391, discussion 391.
  37. Carnevale, A.; Mannocchi, I.; Sassi, M.S.H.; Carli, M.; De Luca, G.; Longo, U.G.; Denaro, V.; Schena, E. Virtual Reality for Shoulder Rehabilitation: Accuracy Evaluation of Oculus Quest 2. Sensors 2022, 22, 5511.
  38. Cannizzaro, D.; Zaed, I.; Safa, A.; Jelmoni, A.J.M.; Composto, A.; Bisoglio, A.; Schmeizer, K.; Becker, A.C.; Pizzi, A.; Cardia, A.; et al. Augmented Reality in Neurosurgery, State of Art and Future Projections. A Systematic Review. Front. Surg. 2022, 9, 227.
  39. Alaraj, A.; Charbel, F.T.; Birk, D.; Tobin, M.; Luciano, C.; Banerjee, P.P.; Rizzi, S.; Sorenson, J.; Foley, K.; Slavin, K.; et al. Role of cranial and spinal virtual and augmented reality simulation using immersive touch modules in neurosurgical training. Neurosurgery 2013, 72, 115–123.
  40. Hultegård, L.; Michaëlsson, I.; Jakola, A.; Farahmand, D. The risk of ventricular catheter misplacement and intracerebral hemorrhage in shunt surgery for hydrocephalus. Interdiscip. Neurosurg. 2019, 17, 23–27.
  41. Ofoma, H.; Cheaney, B., 2nd; Brown, N.J.; Lien, B.V.; Himstead, A.S.; Choi, E.H.; Cohn, S.; Campos, J.K.; Oh, M.Y. Updates on techniques and technology to optimize external ventricular drain placement: A review of the literature. Clin. Neurol. Neurosurg. 2022, 213, 107126.
  42. Li, Y.; Chen, X.; Wang, N.; Zhang, W.; Li, D.; Zhang, L.; Qu, X.; Cheng, W.; Xu, Y.; Chen, W.; et al. A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside. J. Neurosurg. 2018, 131, 1599–1606.
  43. Fick, T.; van Doormaal, J.A.M.; Tosic, L.; van Zoest, R.J.; Meulstee, J.W.; Hoving, E.W.; van Doormaal, T.P.C. Fully automatic brain tumor segmentation for 3D evaluation in augmented reality. Neurosurg. Focus 2021, 51, E14.
  44. Panait, L.; Akkary, E.; Bell, R.L.; Roberts, K.E.; Dudrick, S.J.; Duffy, A.J. The Role of Haptic Feedback in Laparoscopic Simulation Training. J. Surg. Res. 2009, 156, 312–316.
  45. Bugdadi, A.; Sawaya, R.; Bajunaid, K.; Olwi, D.; Winkler-Schwartz, A.; Ledwos, N.; Marwa, I.; Alsideiri, G.; Sabbagh, A.J.; Alotaibi, F.E.; et al. Is Virtual Reality Surgical Performance Influenced by Force Feedback Device Utilized? J. Surg. Educ. 2019, 76, 262–273.
  46. Moody, L.; Baber, C.; Arvanitis, T.N. Objective surgical performance evaluation based on haptic feedback. Stud. Health Technol. Inform. 2002, 85, 304–310.
  47. Norton, S.P.; Dickerson, E.M.; Kulwin, C.G.; Shah, M.V. Technology that achieves the Triple Aim: An economic analysis of the BrainPath approach in neurosurgery. Clin. Outcomes Res. 2017, 9, 519–523.
  48. Nguyen, N.Q.; Cardinell, J.; Ramjist, J.M.; Lai, P.; Dobashi, Y.; Guha, D.; Androutsos, D.; Yang, V.X.D. An augmented reality system characterization of placement accuracy in neurosurgery. J. Clin. Neurosci. 2020, 72, 392–396.
  49. Chytas, D.; Paraskevas, G.; Noussios, G.; Demesticha, T.; Asouhidou, I.; Salmas, M. Considerations for the value of immersive virtual reality platforms for neurosurgery trainees’ anatomy understanding. Surg. Neurol. Int. 2023, 14, 173.
  50. Gerard, I.J.; Kersten-Oertel, M.; Hall, J.A.; Sirhan, D.; Collins, D.L. Brain Shift in Neuronavigation of Brain Tumors: An Updated Review of Intra-Operative Ultrasound Applications. Front. Oncol. 2020, 10, 618837.
  51. Gerard, I.J.; Kersten-Oertel, M.; Petrecca, K.; Sirhan, D.; Hall, J.A.; Collins, D.L. Brain shift in neuronavigation of brain tumors: A review. Med. Image Anal. 2017, 35, 403–420.
  52. Giussani, C.; Trezza, A.; Ricciuti, V.; Di Cristofori, A.; Held, A.; Isella, V.; Massimino, M. Intraoperative MRI versus intraoperative ultrasound in pediatric brain tumor surgery: Is expensive better than cheap? A review of the literature. Child’s Nerv. Syst. 2022, 38, 1445–1454.
  53. De Benedictis, A.; Sarubbo, S.; Duffau, H. Subcortical surgical anatomy of the lateral frontal region: Human white matter dissection and correlations with functional insights provided by intraoperative direct brain stimulation: Laboratory investigation. J. Neurosurg. 2012, 117, 1053–1069.
  54. Duffau, H.; Thiebaut de Schotten, M.; Mandonnet, E. White matter functional connectivity as an additional landmark for dominant temporal lobectomy. J. Neurol. Neurosurg. Psychiatry 2008, 79, 492–495.
  55. Chen, B.; Moreland, J.; Zhang, J. Human brain functional MRI and DTI visualization with virtual reality. Quant. Imaging Med. Surg. 2011, 1, 11–16.
  56. Yeung, J.T.; Taylor, H.M.; Nicholas, P.J.; Young, I.M.; Jiang, I.; Doyen, S.; Sughrue, M.E.; Teo, C. Using Quicktome for Intracerebral Surgery: Early Retrospective Study and Proof of Concept. World Neurosurg. 2021, 154, e734–e742.
  57. Voinescu, A.; Sui, J.; Stanton Fraser, D. Virtual Reality in Neurorehabilitation: An Umbrella Review of Meta-Analyses. J. Clin. Med. 2021, 10, 1478.
  58. Shepherd, T.; Trinder, M.; Theophilus, M. Does virtual reality in the perioperative setting for patient education improve understanding? A scoping review. Surg. Pract. Sci. 2022, 10, 100101.
Figure 1. Flowchart summarizing this study’s work processes, from creation of the brain models, to evaluation of the MR platform, to the final user evaluation. [Note: some aspects of this figure were created with the help of stock images from Microsoft® Powerpoint® for Microsoft 365 (Microsoft Corporation, Redmond, WA, USA)].
Figure 2. Photo illustration of the VR setup for neurosurgical training. (A) Using the Oculus Quest 2, the user is immersed in a virtual operating theatre environment; the handheld touch controllers enable the user to maneuver the objects of interest. (B) Here, the user attempts to place an external ventricular drain (EVD, white) where the ventricles are perceived to be, based on the surface landmarks of a virtual toddler’s head. (C) Screenshot of a menu that allows the user to remove various layers of the virtual model; in this image, the skin and bony surfaces are removed, showing where the EVD (white) is inserted into the brain parenchyma (pink). (D) Screenshot of the EVD (white) placed by the user in relation to the ventricles (purple), tumor (yellow–gold), and blood vessels (red and blue).
Figure 3. Photo illustration of the AR setup in the operating theatre for neurosurgical training. (A) Using the Microsoft HoloLens, a 3D-rendered image is manually moved to overlay a toddler mannequin’s head. Surface landmarks, such as the pinna, nasion, and tip of the nose, are verified to ensure the registration is acceptable for the hologram to be “anchored” onto; the user then sees the 3D hologram merged with the mannequin’s head. (B) The surface layers are removed for the primary user to visualize the intracranial structures and assess their spatial locations. (C) Here, the pediatric brain tumor of interest is a suprasellar cystic craniopharyngioma (yellow–gold) in close proximity to the ventricles (purple), posterior fossa (light pink), and blood vessels (red and blue). (D) The exercise is to insert a virtual EVD (white) into the cystic tumor (yellow–gold) while bypassing the nearby ventricles (purple).
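The landmark-based anchoring described in Figure 3 (aligning the hologram to the mannequin via the pinna, nasion, and nose tip) is, mathematically, a rigid point-set registration. The sketch below is illustrative only and is not part of the study's HoloLens workflow; the landmark coordinates, the simulated pose, and the function name are all hypothetical. It uses the Kabsch algorithm to recover the rotation and translation that map model-space landmarks onto mannequin-space landmarks.

```python
import numpy as np

def register_landmarks(src, dst):
    """Rigid (rotation + translation) registration of two 3xN landmark sets
    via the Kabsch algorithm. Returns R (3x3) and t (3,) such that
    R @ src + t[:, None] best fits dst in the least-squares sense."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src.mean(axis=1, keepdims=True)          # centroids
    dst_c = dst.mean(axis=1, keepdims=True)
    H = (src - src_c) @ (dst - dst_c).T              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = (dst_c - R @ src_c).ravel()
    return R, t

# Hypothetical landmark coordinates (mm): pinna, nasion, nose tip, vertex
model_pts = np.array([[0., 60., 75., 30.],
                      [80., 0., -10., 40.],
                      [0., 0., 0., 90.]])

# Simulate the mannequin's pose: rotate 10 degrees about z, then translate
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
t_true = np.array([5., -3., 12.])
mannequin_pts = R_true @ model_pts + t_true[:, None]

R, t = register_landmarks(model_pts, mannequin_pts)
residual = np.linalg.norm(R @ model_pts + t[:, None] - mannequin_pts)
print(residual)  # ~0 for noise-free landmarks
```

In practice, registration quality would be judged by the residual distance at the landmarks: a large residual signals mislabeled or poorly localized landmarks, mirroring the manual verification step in the caption above.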
Figure 4. Photo illustration of the AR setup in the operating theatre for neurosurgical training using an older child mannequin. (A) The Microsoft HoloLens is worn by the primary user, with the surface landmarks overlaid and registered on the mannequin’s face. (B) Similar to the previous setup (Figure 3), the surface layers are removed for the primary user to visualize the overall intracranial structures and assess their spatial locations. (C) For this exercise, the pediatric brain tumor of interest is a pineal tumor (yellow–gold) associated with mildly dilated ventricles (purple). (D) The neurosurgical trainee is tasked with inserting a virtual EVD (white) into the ventricular system (purple) via a frontal entry point while remaining spatially aware of the tumor (yellow–gold).
Figure 5. Overview of the participants’ evaluations of the visual quality of the MR pediatric brain tumor models on the MR platform.
Table 1. Evaluation of visual quality of intracranial structures.
Question | Quantitative Score (5-point Likert scale)
Q1: Visual quality of brain tumor examples | 1–5
Q2: Visual quality of normal brain structures in relation to brain tumors | 1–5
Q3: Visual quality of intracranial blood vessels in relation to brain tumors | 1–5
Q4: Visual quality of ventricular system in relation to brain tumors | 1–5
Q5: Overall usefulness in understanding brain tumor spatial anatomy using the MR platform | 1–5
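Likert responses such as those collected for Table 1 are ordinal, so medians and agreement proportions are more defensible summaries than means. A minimal sketch of that computation, using hypothetical response data (the study reports only that all 30 participants agreed the platform was useful, not the individual scores):

```python
import statistics

# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree)
# for one question from 30 participants; NOT the study's actual data.
responses = [4] * 12 + [5] * 18

median_score = statistics.median(responses)
# Proportion rating "agree" (4) or "strongly agree" (5)
agreement = sum(1 for r in responses if r >= 4) / len(responses)

print(median_score, agreement)  # 5.0 1.0
```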
Table 2. Summary of the collated common feedback comments.
Participant Group * | Representative Free-Text Comments
Medical students/junior doctors | “Useful to see normal brain models to compare with brain tumor models”; “Able to have name of anatomical structure when tapped on”
Neurosurgical residents/consultants | “Haptic feedback for catheter placement will make simulation more realistic”; “Addition of rest of the spine structures can help with visualizing where the brain tumor is in relation to the patient’s body during head positioning for surgery”; “Finer details of individual anatomical structures around the tumor will be useful for preoperative planning”
* Some comments overlapped between participant groups.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Liang, S.; Teo, J.C.; Coyuco, B.C.; Cheong, T.M.; Lee, N.K.; Low, S.Y.Y. Mixed Reality for Pediatric Brain Tumors: A Pilot Study from a Singapore Children’s Hospital. Surgeries 2023, 4, 354–366. https://doi.org/10.3390/surgeries4030036
