Augmented Reality Surgical Navigation System Integrated with Deep Learning
Abstract
1. Introduction
2. Materials and Methods
2.1. System Overview
2.2. Virtual Object Automatic Scanning
2.3. Using Machine Learning to Predict and Recommend the Surgical Target
2.4. Manual Operation Target Point Positioning
2.5. DICOM Image Real-Time Display and Selection of Target Point
2.6. Generation of Surgical Trajectory and Calibration of Entry Point Angle
3. Experimental Studies and Tests
3.1. Experiment on Virtual Object Automatic Scanning
3.2. Experiment on Machine Learning to Predict and Recommend the Surgical Target
3.3. Test of Clinical Feasibility
3.4. Test of HoloLens 2 Feasibility
4. Results
4.1. Results of the Virtual Object Automatic Scanning
4.2. Results of Machine Learning to Predict and Recommend the Surgical Target
4.3. Results of the Proposed System
4.4. Results of the HoloLens 2 Feasibility Test
5. Discussion
5.1. Comparing Augmented Reality Medical Guidance Techniques
5.1.1. Image Positioning vs. Optical Positioning
5.1.2. Our Method vs. Other Methods
5.1.3. Tablets vs. Smart Glasses
5.2. The Impact of Viewing Angle Changes on the Coordinates
5.3. Sterile Environment for Surgery
5.4. Clinical Significance and Limitations
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Joda, T.; Gallucci, G.; Wismeijer, D.; Zitzmann, N. Augmented and virtual reality in dental medicine: A systematic review. Comput. Biol. Med. 2019, 108, 93–100.
- Tabrizi, L.; Mahvash, M. Augmented reality–guided neurosurgery: Accuracy and intraoperative application of an image projection technique. J. Neurosurg. 2015, 123, 206–211.
- Chen, X.; Xu, L.; Wang, Y.; Wang, H.; Wang, F.; Zeng, X.; Wang, Q.; Egger, J. Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display. J. Biomed. Inform. 2015, 55, 124–131.
- Lai, M.; Skyrman, S.; Shan, C.; Babic, D.; Homan, R.; Edström, E.; Persson, O.; Burström, G.; Elmi-Terander, A.; Hendriks, B.; et al. Fusion of augmented reality imaging with the endoscopic view for endonasal skull base surgery; a novel application for surgical navigation based on intraoperative cone beam computed tomography and optical tracking. PLoS ONE 2020, 15, e0227312.
- Müller, M.; Rassweiler, M.; Klein, J.; Seitel, A.; Gondan, M.; Baumhauer, M.; Teber, D.; Rassweiler, J.; Meinzer, H.; Maier-Hein, L. Mobile augmented reality for computer-assisted percutaneous nephrolithotomy. Int. J. Comput. Assist. Radiol. Surg. 2013, 8, 663–675.
- Wang, J.; Suenaga, H.; Hoshi, K.; Yang, L.; Kobayashi, E.; Sakuma, I.; Liao, H. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery. IEEE Trans. Biomed. Eng. 2014, 61, 1295–1304.
- Conrad, C.; Fusaglia, M.; Peterhans, M.; Lu, H.; Weber, S.; Gayet, B. Augmented reality navigation surgery facilitates laparoscopic rescue of failed portal vein embolization. J. Am. Coll. Surg. 2016, 223, 31–34.
- Deng, W.; Deng, F.; Wang, M.; Song, Z. Easy-to-use augmented reality neuronavigation using a wireless tablet PC. Stereotact. Funct. Neurosurg. 2014, 92, 17–24.
- Frantz, T.; Jansen, B.; Duerinck, J.; Vandemeulebroucke, J. Augmenting Microsoft’s HoloLens with Vuforia tracking for neuronavigation. Healthc. Technol. Lett. 2018, 5, 221–225.
- van Doormaal, T.; van Doormaal, J.; Mensink, T. Clinical accuracy of holographic navigation using point-based registration on augmented-reality glasses. Oper. Neurosurg. 2019, 17, 588–593.
- Montani, S.; Striani, M. Artificial intelligence in clinical decision support: A focused literature survey. Yearb. Med. Inform. 2019, 28, 120.
- Fida, B.; Cutolo, F.; Franco, G.; Ferrari, M.; Ferrari, V. Augmented reality in open surgery. Updat. Surg. 2018, 70, 389–400.
- Kalal, Z.; Mikolajczyk, K.; Matas, J. Tracking-learning-detection. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1409–1422.
- Prakosa, A.; Southworth, M.K.; Silva, J.A.; Trayanova, N.A. Impact of augmented-reality improvement in ablation catheter navigation as assessed by virtual-heart simulations of ventricular tachycardia ablation. Comput. Biol. Med. 2021, 133, 104366.
- Tu, P.; Gao, Y.; Lungu, A.; Li, D.; Wang, H.; Chen, X. Augmented reality based navigation for distal interlocking of intramedullary nails utilizing Microsoft HoloLens 2. Comput. Biol. Med. 2021, 133, 104402.
- Shoeibi, A.; Khodatars, M.; Jafari, M.; Ghassemi, N.; Moridian, P.; Alizadehsani, R.; Ling, S.H.; Khosravi, A.; Alinejad-Rokny, H.; Lam, H.K.; et al. Diagnosis of brain diseases in fusion of neuroimaging modalities using deep learning: A review. Inf. Fusion 2023, 93, 85–117.
- Shoeibi, A.; Ghassemi, N.; Khodatars, M.; Moridian, P.; Khosravi, A.; Zare, A.; Gorriz, J.M.; Chale-Chale, A.H.; Khadem, A.; Acharya, U.R. Automatic diagnosis of schizophrenia and attention deficit hyperactivity disorder in rs-fMRI modality using convolutional autoencoder model and interval type-2 fuzzy regression. Cogn. Neurodyn. 2022, 1–23.
- Al-Masni, M.A.; Kim, W.-R.; Kim, E.Y.; Noh, Y.; Kim, D.-H. Automated detection of cerebral microbleeds in MR images: A two-stage deep learning approach. NeuroImage Clin. 2020, 28, 102464.
- Al-Masni, M.A.; Kim, D.-H. CMM-Net: Contextual multi-scale multi-level network for efficient biomedical image segmentation. Sci. Rep. 2021, 11, 10191.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1–9.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; Part III; pp. 234–241.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Donahue, J.; Jia, Y.; Vinyals, O.; Hoffman, J.; Zhang, N.; Tzeng, E.; Darrell, T. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 647–655.
- Yigin, B.O.; Algin, O.; Saygili, G. Comparison of morphometric parameters in prediction of hydrocephalus using random forests. Comput. Biol. Med. 2020, 116, 103547.
- Martin, M.; Sciolla, B.; Sdika, M.; Quetin, P.; Delachartre, P. Automatic segmentation and location learning of neonatal cerebral ventricles in 3D ultrasound data combining CNN and CPPN. Comput. Biol. Med. 2021, 131, 104268.
- Li, H.; Li, A.; Wang, M. A novel end-to-end brain tumor segmentation method using improved fully convolutional networks. Comput. Biol. Med. 2019, 108, 150–160.
- Westenberger, P. Avizo—Three-Dimensional Visualization Framework. In Geoinformatics 2008—Data to Knowledge; USGS: Reston, VA, USA, 2008; pp. 13–14.
- Prevedello, L.; Erdal, B.; Ryu, J.; Little, K.; Demirer, M.; Qian, S.; White, R. Automated critical test findings identification and online notification system using artificial intelligence in imaging. Radiology 2017, 285, 923–931.
- Rau, A.; Kim, S.; Yang, S.; Reisert, M.; Kellner, E.; Duman, I.; Stieltjes, B.; Hohenhaus, M.; Beck, J.; Urbach, H.; et al. SVM-based normal pressure hydrocephalus detection. Clin. Neuroradiol. 2021, 31, 1029–1035.
- Sahli, H.; Mouelhi, A.; Sayadi, M.; Rachdi, R. Discriminant textural feature selection and classification for a computerized fetal hydrocephalus detection. In Proceedings of the 2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS), Sophia Antipolis, France, 12–14 December 2018; pp. 232–237.
- Konishi, K.; Nakamoto, M.; Kakeji, Y.; Tanoue, K.; Kawanaka, H.; Yamaguchi, S.; Ieiri, S.; Sato, Y.; Maehara, Y.; Tamura, S.; et al. A real-time navigation system for laparoscopic surgery based on three-dimensional ultrasound using magneto-optic hybrid tracking configuration. Int. J. Comput. Assist. Radiol. Surg. 2007, 2, 1–10.
- Ieiri, S.; Uemura, M.; Konishi, K.; Souzaki, R.; Nagao, Y.; Tsutsumi, N.; Akahoshi, T.; Ohuchida, K.; Ohdaira, T.; Tomikawa, M.; et al. Augmented reality navigation system for laparoscopic splenectomy in children based on preoperative CT image using optical tracking device. Pediatr. Surg. Int. 2012, 28, 341–346.
- Duan, W.; Zhang, J.; Zhang, L.; Lin, Z.; Chen, Y.; Hao, X.; Wang, Y.; Zhang, H. Evaluation of an artificial intelligent hydrocephalus diagnosis model based on transfer learning. Medicine 2020, 99, e21229.
- Gavaghan, K.A.; Peterhans, M.; Oliveira-Santos, T.; Weber, S. A portable image overlay projection device for computer-aided open liver surgery. IEEE Trans. Biomed. Eng. 2011, 58, 1855–1864.
- Kenngott, H.; Preukschas, A.; Wagner, M.; Nickel, F.; Müller, M.; Bellemann, N.; Stock, C.; Fangerau, M.; Radeleff, B.; Kauczor, H.; et al. Mobile, real-time, and point-of-care augmented reality is robust, accurate, and feasible: A prospective pilot study. Surg. Endosc. 2018, 32, 2958–2967.
- Heinrich, F.; Schwenderling, L.; Becker, M.; Skalej, M.; Hansen, C. HoloInjection: Augmented reality support for CT-guided spinal needle injections. Healthc. Technol. Lett. 2019, 6, 165–171.
- Hecht, R.; Li, M.; de Ruiter, Q.M.; Pritchard, W.; Li, X.; Krishnasamy, V.; Saad, W.; Karanian, J.W.; Wood, B. Smartphone augmented reality CT-based platform for needle insertion guidance: A phantom study. Cardiovasc. Interv. Radiol. 2020, 43, 756–764.
Notation | Definition
---|---
// | Distance along the x/y/z axis of the DICOM object
– | The edge point of the scalpel
/ | The length/width of the DICOM image
// | The length of the head along the x/y/z axes
– | The number of the specific DICOM slice containing the ideal target
– | The total number of DICOM slices
// | The number of DICOM slices along the x/y/z axis
– | The ideal 2D target position
– | The ideal 3D target position
– | The origin point
/ | The reference point on the left/right of the x-axis
// | The slice thickness along the x/y/z axis of the DICOM slices
– | The axis (x, y, or z) on which the DICOM slice is displayed
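The notation above maps a target picked on a single 2D DICOM slice to a 3D position in the scanned volume using the slice number, the in-plane pixel spacing, and the slice thickness. A minimal sketch of that mapping (the variable names are ours, not the paper's, and it assumes uniform spacing within and between slices):

```python
# Minimal sketch: convert a 2D target picked on one DICOM slice into a
# 3D position in the volume. Names are illustrative, not the paper's.

def target_3d(col, row, slice_index,
              pixel_spacing_x, pixel_spacing_y, slice_thickness):
    """Map (col, row) pixels on slice `slice_index` to millimetres.

    pixel_spacing_x/y : in-plane spacing of a DICOM image (mm/pixel)
    slice_thickness   : spacing between consecutive slices (mm)
    """
    x = col * pixel_spacing_x          # distance along the x axis
    y = row * pixel_spacing_y          # distance along the y axis
    z = slice_index * slice_thickness  # depth of the chosen slice
    return (x, y, z)

# Example: pixel (256, 300) on slice 42, with 0.5 mm in-plane spacing
# and 1.0 mm slice thickness.
print(target_3d(256, 300, 42, 0.5, 0.5, 1.0))  # -> (128.0, 150.0, 42.0)
```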
Abbreviation | Meaning
---|---
AR/VR | Augmented reality/virtual reality
AR devices | Augmented reality devices, including mobile devices and head-mounted displays, that are capable of running augmented reality software
CNN | Convolutional neural network, a type of artificial neural network belonging to the family of deep learning algorithms
EVD | Extra-ventricular drainage
DICOM | Digital Imaging and Communications in Medicine
DICOM files | Files saved in the Digital Imaging and Communications in Medicine (DICOM) format, typically obtained from computed tomography (CT) or magnetic resonance imaging (MRI) procedures
DICOM data | Data formatted in accordance with the Digital Imaging and Communications in Medicine (DICOM) standard
DICOM image | A 2D image saved in the DICOM format that can be displayed on a screen or monitor
DICOM slice | A single two-dimensional image of a patient’s anatomy acquired with an imaging modality such as X-ray, CT, MRI, or ultrasound
HMD | Head-mounted display
Virtual image | A computer-generated image that does not exist in the physical world and can only be viewed on a display
Virtual object | A digital object that lacks a physical presence and can only be created and manipulated within software
U-Net | A convolutional neural network originally developed at the Computer Science Department of the University of Freiburg for biomedical image segmentation. The original U-Net, which consists of an encoder and a decoder, is widely used for tasks such as the medical image segmentation performed in this paper
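Several of the DICOM entries above presuppose that a CT or MRI series has been read from disk and stacked into a 3D volume before any target can be selected. A hedged sketch of that step using the pydicom library (the folder path is hypothetical, and a production pipeline would need more robust spacing and orientation handling):

```python
# Sketch: read a folder of DICOM files into a 3D volume ordered by
# slice position. Folder path and error handling are illustrative.
from pathlib import Path

import numpy as np
import pydicom

def load_dicom_volume(folder):
    slices = [pydicom.dcmread(p) for p in sorted(Path(folder).glob("*.dcm"))]
    # Order by position along the scan axis so slice index maps to depth.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices])  # (slices, rows, cols)
    spacing = (float(slices[0].SliceThickness),   # between slices (mm)
               float(slices[0].PixelSpacing[0]),  # row spacing (mm)
               float(slices[0].PixelSpacing[1]))  # column spacing (mm)
    return volume, spacing

volume, spacing = load_dicom_volume("ct_series/")  # hypothetical path
print(volume.shape, spacing)
```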
Method | Accuracy (%) | Sensitivity (%) | Specificity (%)
---|---|---|---
Proposed | 99.93 | 93.85 | 95.73
Surgeon | 85–97 | N/A | N/A
[30] | N/A | 90.0 | 85.0
[31] | 91.80 | 91.84 | 94.77
[32] | N/A | 80.0 | 90.0
[2] | 94.0 | 93.6 | 94.4
[10] | 93.0 | N/A | N/A
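The three columns follow the standard confusion-matrix definitions; for reference, a small helper (ours, not the authors' evaluation code) that computes them:

```python
# Standard definitions behind the table's three columns.
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction correct overall
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts, for illustration only:
print(classification_metrics(tp=61, tn=1340, fp=4, fn=4))
```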
Method | Registration Time (s) | Average Error (mm) | Standard Deviation (mm)
---|---|---|---
Proposed | 4 | 1.0 | 0.1
[7] | 30 | 1.96 | 0.87
[8] | 99 | 2.5 | N/A
[35] | N/A | 2.1 | N/A
[9] | 228 | 1.2 | 0.54
[34] | N/A | 3 | N/A
[13] | N/A | 2 | N/A
[36] | N/A | 2 | N/A
[33] | N/A | 1.9 | 0.45
[4] | N/A | 2.8 | 2.7
[5] | 300 | 7.9 | N/A
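The error columns are point-localization errors in millimetres. A sketch of how such an average error and standard deviation might be computed from repeated registrations (the planned and measured coordinates below are invented for illustration):

```python
import numpy as np

def registration_error(planned, measured):
    """Euclidean distance (mm) between planned and measured 3D targets."""
    return float(np.linalg.norm(np.asarray(planned) - np.asarray(measured)))

# Invented repeated measurements around one planned target (mm):
planned = (128.0, 150.0, 42.0)
measured = [(128.9, 150.1, 42.3), (127.2, 149.8, 41.9), (128.4, 150.9, 42.1)]
errors = [registration_error(planned, m) for m in measured]
print(f"mean = {np.mean(errors):.2f} mm, sd = {np.std(errors):.2f} mm")
```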
Experimenter | A | B | C | D | E | Average
---|---|---|---|---|---|---
Error (mm) | 5 | 3 | 7 | 2 | 13 | 6
Properties | [36] | [37] | [38] | [39] | Proposed
---|---|---|---|---|---
Device | Handheld projector | iPad | HoloLens 1 | Mobile phone | Surface Pro 7
Accuracy | Projection error: 1.3 ± 0.9 mm | Prosthesis experiment: 2.8 ± 2.7 mm | Average insertion angle difference: 6.35 degrees | Prosthesis experiment: 2.7 ± 2.6 mm | Prosthesis experiment: 1.0 ± 0.1 mm
Anatomy information providing | V | V | V | |
3D display providing | V | V | V | V |
Path navigation providing | V | V | V | |
Visual feedback providing | V | | | |
Clinical feasibility | V | V | V | V |
Features | Surface Pro 7 | HoloLens 2
---|---|---
Stability | Better |
Flexibility | Better |
Comfort | Better |
Information richness | Better |