Review

Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization

1 School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
2 ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(24), 9872; https://doi.org/10.3390/s23249872
Submission received: 1 October 2023 / Revised: 15 November 2023 / Accepted: 13 December 2023 / Published: 16 December 2023

Abstract

Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed a broad range of methods and achieved functional breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts can substantially enhance their performance in IGS. The goal of this narrative review is to organize the key components of IGS, in the aspects of medical image processing and visualization, with a new perspective and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. Finally, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.

1. Introduction

Image-guided surgery (IGS) is a form of computer-assisted navigation surgery that focuses on processing image data and converting it into information for surgical systems or for visual display in the surgeon’s view. This process involves the timely tracking of targeted sites on the patient and the visualization of surgical tool motion, usually guided by a combination of a preoperative surgical plan and model with intraoperative imaging and sensing [1]. This technology has a huge market potential: it is expected to reach a value of USD 5.5 billion by 2028, growing at a compound annual growth rate of 5.4% during the forecast period [2]. IGS technology has substantial clinical relevance and is especially paramount to the development of minimally invasive procedures. While the continuous quest to improve surgical outcomes has led to rapid progress in sophisticated surgical techniques, characterized by fewer and smaller incisions to minimize the invasiveness of the procedures, the limited and indirect visual access inevitably makes these procedures harder to perform. IGS technology addresses these practical challenges by providing the surgeon with visual information augmentation from pre- and intraoperative imaging. For example, by using fluorescence imaging during surgery, surgeons can see tumors and the surrounding healthy tissues in real time, which helps them to remove tumors more accurately and avoid mistakes detrimental to the patient [3].
Medical image processing is an essential step that obtains and manipulates digital images of the patient’s body for diagnostic and treatment purposes. It includes enhancing, segmenting, and registering the medical images using various algorithms and techniques. Image enhancement improves the quality and contrast of the images by removing noise, artifacts, or distortions. Image segmentation partitions the images into regions of interest, such as organs, tissues, or lesions. Image registration aligns and fuses multiple images from different modalities or time points. This image processing pipeline is crucial for IGS, which uses preoperative and intraoperative images to guide surgical instruments and improve surgical outcomes. However, for the processed medical image data to be useful in IGS and translated into treatment benefits, the data need to be well visualized by surgeons. Image visualization presents intuitive images in two-dimensional (2D) or three-dimensional (3D) formats that the user can interact with.
Medical image visualization is the process of presenting complex and multidimensional data in a clear and understandable way that can support clinical decision making and research. One of the challenges of medical image visualization is creating realistic and interactive representations of the data that can enhance the user’s experience and understanding. Immersive technology, such as augmented/mixed/virtual reality (AR/MR/VR), is a promising solution that can provide a sense of presence and immersion in a virtual or augmented environment. These technologies differ in the view presented to the user: AR overlays digital elements on a view of the real world, whereas VR presents a fully immersive digital environment with which the user can interact [4]. MR combines AR and VR such that the user, the real world, and virtual content can interact with each other in real time [5]. Immersive technology has been integrated with surgical workflows for various purposes, such as preoperative planning, intraoperative navigation, and surgical training. However, there are also limitations and challenges that need to be addressed, such as user interaction, data quality, ethical issues, and technical feasibility.
Although reviews and surveys of IGS are widely available, their focus is typically specific to a surgical procedure or the organ being operated on, whereas our review analyzes navigation systems in general across a representative range of applications. For example, Kochanski et al. evaluated the value of image-guided navigation in improving surgical accuracy and clinical outcomes specifically in spinal surgery [6]. Eu et al. summarized current and developing techniques in surgical navigation for head and neck surgery [7]. DeLong et al. assessed the status of navigation in craniofacial surgery [8]. Du et al. performed a meta-analysis on the variation in pedicle screw insertion among 3D FluoroNav, 2D FluoroNav, and computed tomography-based navigation systems [9]. Although a few reviews have discussed navigation systems as a whole, few address and discuss the underlying processing methodology. Mezger et al. reviewed a short history and the evolution of surgical navigation, as well as technical aspects and clinical benefits [10]. Wijsmuller et al. reviewed equipment and operative setups [1]. Unlike these reviews, ours focuses on analyzing the methods used in image processing. It is important to note that the effectiveness and applicability of these methods may vary depending on the specific use case. For this reason, comparing the performance of existing methods is not within the scope of this narrative review.
Visualization techniques are drawing much attention for enabling intuitive interpretation of, and interaction with, visual information. The focus of our review is not to summarize the technology or the details of specific techniques; rather, we focus on the benefits of intuitive 3D information and the new visualization interfaces. Interested readers can refer to Preim’s review [11] for perceptually motivated 3D medical image data visualization and to Zhou’s review [12] for different types of visualization techniques organized by data types, modalities, and applications.
Clinicians are typically interested in the equipment and methods that contribute to setting up and realizing an effective navigation system. Available resources in the literature typically focus on the technical details of imaging mechanisms and principles, which are often not well streamlined for information retrieval by readers in the medical domain. In addition, there are promising technologies, such as AR/MR/VR, that enrich existing image modalities and the ways of viewing them in IGS but have not yet been well introduced to the medical community. This narrative review aims to fill the gap between current systems and medical image analysis (MIA) methods, including a timely discussion of frontier technologies like the AR/MR/VR interface in the application of IGS.
The whole pipeline of a navigation system follows the processing of the medical image data stream. Figure 1 shows the workflows that the image data stream goes through and how these workflows relate to each other in a navigation system. We also divide the surgical navigation system into five parts: tracking, visualization, intervention (subjects and the environment), operation (medical team and robotic systems), and imaging (medical image modalities), as shown in Figure 2. Figure 2 provides an abstract view of a surgical navigation system; in the following sections, we use cases to show which methods were used and how surgical navigation systems were set up. Methods for processing the image data stream in navigation systems are introduced in Section 3.1, and methods and interfaces for visualization are introduced in Section 3.2.

2. Materials and Methods

This narrative review covers publications between 2013 and 2023 indexed in the Web of Science™, Scopus, and IEEE Xplore® Digital Library databases. The search string and the number of papers are listed in Table 1. For IEEE Xplore®, the filters “2013–2023”, “Conferences”, and “Journals” were used. These papers were further filtered by the following exclusion criteria: (1) no English version; (2) duplicate; (3) irrelevant; and (4) unavailable. We also applied a snowballing search methodology using the references cited in the articles identified in the literature search. There were 39 representative papers describing a complete navigation system between 2019 and 2022, as summarized in Section 3.2. Additional papers on classical image processing techniques, such as segmentation and 3D reconstruction, published before 2013 were also included.

3. Results

3.1. Medical Image Processing

Table 2 summarizes the methods used in the segmentation, tracking, and registration parts of surgical navigation systems. For segmentation, both traditional and learning-based methods are widely used, while some auto-segmentation frameworks are becoming popular. For tracking, electromagnetic trackers (EMTs) and optical tracking systems (OTSs) are the most common, while some researchers have tried learning-based or SLAM-based methods. For registration, extrinsic methods such as fiducial markers or landmarks and intrinsic methods such as iterative closest point (ICP) or coherent point drift (CPD) are the main approaches.

3.1.1. Segmentation

Medical image segmentation is the process of dividing medical images into regions or objects of interest, such as organs, bones, and tumors. It has many applications in clinical quantification, therapy, and surgical planning. Various methods have been proposed for medical image segmentation, including traditional methods based on boundary extraction, thresholding, and region growing [44,45,46], which remain popular among researchers (a minimal sketch of these classical operations is given after the list below). However, medical image segmentation faces some unique challenges:
  • A lack of annotated data: Medical images are often scarce and expensive to label by experts, which limits the availability of training data for supervised learning methods.
  • Inhomogeneous intensity: Medical images may have different contrast, brightness, noise, and artifacts depending on the imaging modality, device, and settings, making it hard to apply a single threshold or feature extraction method across different images.
  • Vast memory usage: Medical images are often high-resolution and 3D, which requires a large amount of memory and computational resources to process and store.
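As a concrete (and deliberately simplified) illustration of the traditional operations mentioned above, the following sketch applies global thresholding and seeded region growing to a CT volume using SimpleITK. The file name, seed location, and intensity bounds are hypothetical placeholders rather than values drawn from the cited works.

```python
# Minimal sketch of classical segmentation: thresholding and region growing
# with SimpleITK. File names and parameter values are illustrative only.
import SimpleITK as sitk

# Load a CT volume (hypothetical path).
ct = sitk.ReadImage("patient_ct.nii.gz")

# Global thresholding: keep voxels in a rough bone-like Hounsfield range.
bone_mask = sitk.BinaryThreshold(ct, lowerThreshold=300, upperThreshold=3000,
                                 insideValue=1, outsideValue=0)

# Region growing from a user-supplied seed inside the structure of interest;
# neighbouring voxels are added while their intensity stays within the band.
seed = (256, 256, 60)  # (x, y, z) voxel index, chosen interactively in practice
region = sitk.ConnectedThreshold(ct, seedList=[seed], lower=300, upper=3000)

# Fill internal holes in the grown region before exporting the mask.
region = sitk.BinaryFillhole(region)
sitk.WriteImage(region, "segmentation_mask.nii.gz")
```

Even this toy example exposes the challenges listed above: the thresholds and the seed must be re-tuned per modality and per scanner setting, which is precisely what multi-agent and learning-based approaches try to automate.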
To address the above listed challenges, some researchers have proposed multi-agent systems (MASs). By forming a collection of individual agents that apply appropriate methods to different targets, a MAS can handle complex segmentation problems. For example, Chitsaz et al. proposed a MAS composed of a moderator agent and several local agents that apply thresholding methods to segment CT images [47]. Bennai et al. proposed two organizations of agents to carry out region growing and refinement to segment brain-tumor MR images [48]. Moreover, due to the locality and stochasticity of local agents and the cooperation among them, the MAS approach is generally more robust than single-method approaches, and its ability to handle a large number of images allows for fast segmentation. However, a MAS usually requires prior knowledge and parameter estimation to initialize the agents. Some improved approaches can avoid prior knowledge [49,50,51,52] or parameter estimation [53,54]. Some research groups have also combined the MAS idea with reinforcement learning (RL). Liao et al. modeled the dynamic process of iterative interactive image segmentation as a Markov decision process and solved it with multi-agent RL, achieving state-of-the-art results with fewer interactions and faster convergence [55]. Allinoui et al. proposed a mask extraction method based on multi-agent deep reinforcement learning and showed convincing results on CT images [56].
In recent years, thanks to the fast development of machine learning, most researchers have focused on learning-based methods that use deep neural networks to automatically learn features and segmentations from data [57,58,59,60,61,62]. Among these methods, U-Net is one of the most popular and widely used architectures for medical image segmentation due to its flexibility, optimized modular design, and success across medical imaging modalities [57]. Several extensions and variants of U-Net have been developed to improve its performance and adaptability for different tasks and modalities [63,64]. Other networks, such as graph convolutional networks (GCNs) [58], variational autoencoders (VAEs) [59], recurrent neural networks (RNNs) [60,61], and class activation maps (CAMs) [62], are also used for their respective advantages and applicability. In general, learning-based methods have advantages in segmentation accuracy and speed but also face limitations due to the scarcity of existing medical image datasets [65].
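To make the U-Net family of architectures more concrete, the following is a minimal 2D sketch in PyTorch of the encoder-decoder-with-skip-connection pattern that U-Net popularized. It is intentionally tiny and is not any specific published variant; the channel counts, depth, and dummy input are illustrative only.

```python
# Tiny 2D U-Net-style sketch: one encoder level, one decoder level, one skip
# connection. Real U-Net variants are deeper and wider; this only shows the
# architectural pattern discussed in the text.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)            # 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, n_classes, 1)   # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                          # full-resolution features
        e2 = self.enc2(self.pool(e1))              # half-resolution features
        d1 = self.up(e2)                           # upsample back to full size
        d1 = self.dec1(torch.cat([d1, e1], dim=1)) # skip connection
        return self.head(d1)

# Example: segment a batch of 1-channel 128x128 slices into 2 classes.
logits = TinyUNet()(torch.randn(4, 1, 128, 128))   # -> shape (4, 2, 128, 128)
```

The skip connection is the design choice that matters for medical images: it lets the decoder recover sharp organ boundaries that would otherwise be lost in downsampling.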
Moreover, some open-source frameworks have been proposed to facilitate the implementation and application of medical image segmentation methods for researchers and clinicians who lack experience in this field. For example, NiftyNet [66] is a TensorFlow-based framework that can perform segmentation on CT images; MIScnn [67] is a Python-based framework that supports state-of-the-art deep learning models for medical image segmentation; and 3D Slicer [34,35,68] is a software platform that can deal with 3D data or render 2D data into 3D.

3.1.2. Object Tracking

Object tracking is essential for IGS, which requires the spatial localization of preoperative and intraoperative image data over time. It helps locate the relative positions of surgeons, surgical tools, patients, and objects of interest, such as the diseased area and the surgical instruments, during IGS. External tracking tools, such as an OTS for ex vivo tracking and electromagnetic tracking systems for in vivo tracking, are commonly used for this purpose. Optical tracking tools use an illuminator and passive marker spheres with retro-reflective surfaces that can be attached to any target and detected by the illuminator. Typically, the OTS assigns a frame of reference to facilitate calibration between device coordinates and to perform image-to-target registration [27,29,31,69]. Electromagnetic tracking generates a defined EM field in which EM micro-sensors are localized, so rigid and flexible medical instruments embedded with these sensors can be tracked without line-of-sight obstruction. Researchers have typically used EMT for tracking organs deep in vivo and for tracking transducer movement [40,70,71,72,73]. With external tracking tools, object tracking can achieve millimeter or sub-millimeter accuracy, but the additional hardware may impose limitations in clinical use or incur extra expense.
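The sketch below illustrates, with hypothetical values, how such tracker measurements are typically chained in a navigation system: each pose is a 4x4 homogeneous transform, and the tool tip is mapped into image space by composing the marker pose, the patient reference pose, and the image-to-patient registration. The matrices and offsets are placeholders, not values from any cited system.

```python
# Chaining tracked poses in a navigation system with homogeneous transforms.
# All numeric values below are hypothetical placeholders.
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Pose of the tool marker in tracker coordinates (as reported by the OTS/EMT).
T_tracker_tool = make_transform(np.eye(3), [120.0, -35.0, 900.0])
# Pose of the patient reference frame in tracker coordinates.
T_tracker_ref = make_transform(np.eye(3), [80.0, 10.0, 950.0])
# Image-to-reference registration (e.g., from point-based registration).
T_ref_image = make_transform(np.eye(3), [-15.0, 22.0, 5.0])
# Tool-tip offset in marker coordinates (from a pivot calibration), in mm.
tip_in_tool = np.array([0.0, 0.0, 150.0, 1.0])

# Map the tip into image coordinates: image <- ref <- tracker <- tool.
T_image_tracker = np.linalg.inv(T_ref_image) @ np.linalg.inv(T_tracker_ref)
tip_in_image = T_image_tracker @ T_tracker_tool @ tip_in_tool
print("Tool tip in image space (mm):", tip_in_image[:3])
```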
Image-based object tracking is another widely used approach. It involves detecting and tracking objects in a sequence of images over time. Many features, strategies, and state-of-the-art camera-based visual tracking methods have been reviewed and surveyed in [74,75,76,77,78,79]. In the medical domain, vision-based and marker-less surgical tool detection and tracking methods were reviewed in [80,81,82]. Other object tracking methods based on intraoperative imaging modalities include fluoroscopy-based [83,84], ultrasonography-based [85,86], and hybrid multimodal methods [87,88], which combine ultrasound- and endoscopic vision-based tracking as shown in Figure 3. In [88], the more accurate ultrasound-based localization is used for less frequent initialization and reinitialization, while endoscopic camera-based tracking is used for more timely motion tracking. This hybrid form of motion tracking overcomes the inevitable cumulative error associated with vision-based pose estimation of a moving endoscope camera by triggering 3D ultrasound reinitialization, which can be conducted at less frequent intervals because it is slower but free of cumulative error.
However, image-based object tracking still faces challenges such as low image quality, object motion, and occlusion. High distortion or artifacts in medical images pose greater challenges to object recognition, especially for medical purposes where the requirements for accuracy, reliability, and effectiveness are highly demanding. Examples of these methods have used deep learning to detect and segment surgical tools in endoscopic images [89], convolutional neural networks (ConvNets) to track surgical tools in laparoscopic videos [90], or a combination of a particle filter and embedded deformation to track surgical tools in stereo endoscopic images [91]. Although there is room for improvement in terms of accuracy and real-time performance, the application of state-of-the-art image-based object tracking methods in surgical navigation remains an open research problem.
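As a point of reference for what image-based tracking means at its simplest, the sketch below localizes a tool tip frame by frame with normalized cross-correlation template matching in OpenCV. This is only a baseline illustration, not the deep learning or particle-filter trackers cited above; the file names and confidence threshold are hypothetical.

```python
# Baseline image-based tool tracking: per-frame template matching with OpenCV.
# File names are placeholders; real systems use learned detectors instead.
import cv2

cap = cv2.VideoCapture("endoscopic_clip.mp4")       # hypothetical video
template = cv2.imread("tool_tip_template.png", 0)    # small grayscale patch of the tool tip
th, tw = template.shape

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Slide the template over the frame and take the best-scoring location.
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    if max_val > 0.6:  # crude confidence gate against occlusion and blur
        x, y = max_loc
        cv2.rectangle(frame, (x, y), (x + tw, y + th), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The weaknesses of such a baseline, sensitivity to occlusion, specular highlights, and appearance change, are exactly what the cited learning-based trackers are designed to overcome.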

3.1.3. Registration and Fusion

Registration is a key process in medical image analysis that involves aligning different coordinate systems that may arise from different perspectives, modalities, or techniques of data acquisition. Depending on whether the alignment can be achieved by a single transformation matrix, registration can be classified as rigid or non-rigid. Whereas rigid (and affine) transformations can only handle global rotation, scaling, and translation, non-rigid transformations allow local warping and deformation of the images to achieve alignment. The image registration procedure involves finding relevant features in both volumes, measuring their alignment with a similarity metric, and searching for the optimal transformation that brings them into spatial alignment. This is where deep learning comes in: Refs. [92,93] surveyed the recent advances and challenges of deep learning methods for medical image registration. Despite the lack of large datasets and of a robust similarity metric for multimodal applications, researchers have recently used deep learning as a powerful and convenient tool for fast and automatic registration. For example, Balakrishnan et al. proposed VoxelMorph, a fast and accurate deep learning method for deformable image registration that learns a function mapping image pairs to deformation fields and can be trained in an unsupervised or semi-supervised way on large datasets and rich deformation models [94]. Vos et al. introduced the Deep Learning Image Registration (DLIR) framework, a method for unsupervised training of ConvNets for affine and deformable image registration using image similarity as the objective function. The DLIR framework can perform coarse-to-fine registration of unseen image pairs with high speed and accuracy, as demonstrated on cardiac cine MRI and chest CT data [95].
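For readers unfamiliar with the conventional pipeline that these learning-based methods aim to replace or accelerate, the following sketch sets up an intensity-based registration in SimpleITK with the three ingredients named above: a similarity metric, an optimizer, and a transform model. The file names are hypothetical, a rigid transform is assumed for simplicity, and a B-spline or learned deformation model would be substituted for non-rigid cases.

```python
# Conventional intensity-based registration sketch in SimpleITK:
# metric (Mattes mutual information) + optimizer (gradient descent) + transform (rigid).
import SimpleITK as sitk

fixed = sitk.ReadImage("preop_ct.nii.gz", sitk.sitkFloat32)      # hypothetical
moving = sitk.ReadImage("intraop_cbct.nii.gz", sitk.sitkFloat32)  # hypothetical

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.01)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()

# Initialize with a rigid transform aligning the image centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

final_transform = reg.Execute(fixed, moving)

# Resample the moving image into the fixed image space for fusion or display.
aligned = sitk.Resample(moving, fixed, final_transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "intraop_aligned.nii.gz")
```

Mutual information is chosen here because it tolerates the intensity differences between modalities; for monomodal data a simpler metric such as mean squares would also work.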
The main goal of registration is to find correspondences between features that represent the same anatomical or functional structures in different coordinate systems. Fusion is a related process that involves displaying data from different coordinate systems in a common one for visualization or analysis purposes. Often, registration and fusion are performed simultaneously to facilitate the integration of multimodal data. For example, Chen et al. used a unified variational model to fuse a high-resolution panchromatic image and a low-resolution multispectral image into the same geographical location [96].
In a surgical navigation system, surgeons require registration methods that are highly accurate, trustworthy, fast, and robust. One of the challenges of registration in surgery is dealing with non-rigid deformations that may occur during surgery or due to patient movement. To overcome this difficulty, researchers often use landmarks as salient features that can be easily identified and matched in different images. For instance, Jasper et al. proposed a navigation system that used landmark registration between a preoperative 3D model and an intraoperative ultrasound image to achieve active compensation of liver motion with an accuracy below 10 mm [70]. Another example of using landmarks is the OTS, described in Section 3.1.2, which can also provide the landmark function. As shown in Figure 4, rigid point-based registration is performed between the physical space and the image space by using the OTS to track the surgical tool and measure the points in the physical space, and by using software to segment the corresponding points in the image space. For example, Sugino et al. used an NDI tracking system and 3D Slicer to set up a surgical navigation system and collect data [18]. However, landmarks are hard to place in some settings, such as the brain or the lung. Non-rigid registration between intraoperative images is then required and remains a challenging problem in medical image analysis, especially for organs that undergo large deformations during surgery.
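The point-based registration illustrated in Figure 4 can be sketched as follows: given paired fiducial positions measured in physical space with a tracked pointer and the corresponding points picked in the image volume, the rigid transform is obtained with the classical SVD-based least-squares solution. The landmark coordinates below are hypothetical.

```python
# Paired-point rigid registration between physical space (tracked pointer
# measurements) and image space (the same landmarks picked in the volume),
# using the classical SVD-based least-squares solution. Coordinates are hypothetical.
import numpy as np

def rigid_register(P_phys, P_img):
    """Return R, t such that R @ p_phys + t approximates p_img (least squares)."""
    c_phys = P_phys.mean(axis=0)
    c_img = P_img.mean(axis=0)
    H = (P_phys - c_phys).T @ (P_img - c_img)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = c_img - R @ c_phys
    return R, t

# Four fiducials measured with the tracked pointer (mm, physical space) ...
P_phys = np.array([[10.0, 0.0, 5.0], [60.0, 5.0, 0.0],
                   [55.0, 70.0, 10.0], [5.0, 65.0, 15.0]])
# ... and the corresponding points segmented in the image volume (mm).
P_img = np.array([[112.0, 40.0, 33.0], [161.5, 46.0, 29.0],
                  [155.0, 111.0, 40.5], [106.0, 105.0, 44.0]])

R, t = rigid_register(P_phys, P_img)

# Map a newly measured pointer-tip position into image space for display.
tip_phys = np.array([30.0, 35.0, 8.0])
tip_img = R @ tip_phys + t
print("Tip in image coordinates (mm):", np.round(tip_img, 1))
```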

3.1.4. Planning

One of the essential tasks for surgeons, especially in craniofacial procedures, is to obtain high-quality information on the patient’s preoperative anatomy that can help them strategize and plan the surgical procedure accurately [97]. For instance, in tumor surgery, it is crucial to view the structure and morphology of the hepatic vessels and their relation to the tumors [98]. This 3D information can be derived from direct clinical measurements taken from physical models [99] or from digital models reconstructed from volumetric images such as CT or MR. In order to achieve this, segmentation, tracking, and registration techniques are employed to enable surgeons to see the surgical tools overlaid on the patient’s anatomy and even to see through obstructions and locate the targets. Based on this information, surgeons can plan the optimal route for surgery preoperatively and guide the surgical tool intraoperatively with the assistance of computer software. For example, Han et al. presented a method to automatically plan and guide screw placement in pelvic surgery using shape models and augmented fluoroscopy [100]. Li et al. presented a method to automatically plan screw placement in shoulder joint replacement using cone space and bone density criteria [101]. Surgical planning can improve the accuracy, safety, efficiency, and quality of surgery, especially for complex or minimally invasive cases. However, it can be time-consuming, costly, and technically challenging to produce accurate and reliable surgical plans because it usually requires a physical model to simulate the surgery. For example, Sternheim et al. used a Sawbones tumor model to simulate the resection of a primary bone sarcoma and reduced the risk of a positive margin resection [102].
In addition to the physical models traditionally used in surgical planning, 3D imaging and virtual surgical planning (VSP) have become increasingly popular in orthognathic surgery in many regions of the world [103]. VSP requires 3D models that are usually reconstructed from volumetric images and need rendering procedures to be displayed on a screen. Advances in these computer-aided technologies have opened up new possibilities for VSP in craniofacial surgery [104]. VSP can provide a more anatomically based and surgically accurate simulation of the procedure, enable a more interactive and collaborative planning process, and improve the predictability, precision, and outcomes of surgery. A usability study of a visual display by Regodić et al. reported that clinically experienced users reached targets with shorter trajectories using VSP [105], and Mazzola et al. showed that VSP reduced the time and maintained the cost and quality of facial skeleton reconstruction with microvascular free flaps [106].

3.2. Visualization

3.2.1. 3D Reconstruction and Rendering

In this paper, 3D reconstruction refers to the process of generating 3D models from image slices or sequences. Rendering is an interactive process that allows the observer to adjust the display parameters so as to depict the region of interest most intuitively.
Accurate and clear 3D models for visualizing the anatomical structures of their patients are important for radiologists and surgeons. Supported by computer vision, tomographic reconstruction techniques for CT and MRI have been well developed over the past few decades and can provide high-quality visualization of human anatomy to aid medical diagnostics. Nowadays, many platforms and software packages provide automatic reconstruction and rendering procedures, for example, ParaView, Seg3D, SynGO, Mimics, 3D Slicer, and so on. Usually, functionalities like auto-segmentation are also provided. Radiological imaging like CT and cone beam computed tomography can provide high-resolution, high-contrast images; however, due to its ionizing nature, it also poses a risk of radiation exposure to patients, while low-dose protocols degrade image quality. Magnetic resonance imaging provides non-invasive, radiation-free images, but it is affected by factors such as metal objects, gas, bone, tissue depth, and background noise.
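The core step that such platforms automate, extracting a renderable surface from a volumetric scan, can be sketched with scikit-image's marching cubes as follows. nibabel is assumed for file loading, and the file name, voxel spacing, and iso-value are illustrative.

```python
# Surface reconstruction sketch: threshold a CT volume at a bone-like iso-value
# and extract a triangle mesh with marching cubes. Values are illustrative.
import numpy as np
import nibabel as nib
from skimage import measure

volume = nib.load("patient_ct.nii.gz").get_fdata()   # hypothetical CT volume
spacing = (1.0, 1.0, 1.5)                             # assumed voxel size in mm

# Extract the iso-surface; verts are in mm thanks to the spacing argument.
verts, faces, normals, _ = measure.marching_cubes(volume, level=300,
                                                  spacing=spacing)
print(f"mesh with {len(verts)} vertices and {len(faces)} triangles")

# The mesh can then be handed to a renderer or saved, e.g., as an OBJ file.
with open("bone_surface.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]:.2f} {v[1]:.2f} {v[2]:.2f}\n")
    for tri in faces + 1:   # OBJ face indices are 1-based
        f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")
```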
To address these issues, many researchers use deep neural networks to generate high-quality or complementary data [107,108,109]. The impressive performance of CNN-based low-dose CT restoration [110,111] has stimulated more research on deep learning methods for image reconstruction. Ref. [112] proposed an algorithm that uses discriminative sparse transform constraints to reconstruct low-dose CT images with better quality and less noise; it combines the advantages of compressed-sensing image reconstruction and a differential feature representation model while avoiding the drawbacks of classical methods that depend on a prior image and cause registration and matching problems. Ref. [113] proposed a deep learning network that uses a noise estimation network and a transfer learning scheme to adapt to different imaging scenarios and denoise low-dose CT images with better quality. Deep learning-based MR image reconstruction methods are also plentiful, such as FA-GAN [114], FL-MRCM [115], U-Net [116], and so on. In the meantime, 2D X-rays, which are cost-effective, widely available, and expose patients to less radiation, can also be used to reconstruct 3D images with methods proposed by researchers [117]. In short, generating and viewing 3D models for diagnostic purposes is now common practice.
Apart from 3D model reconstruction from diagnostic imaging, there are also dynamic 3D image reconstruction applications based on intraoperative imaging modalities. Ultrasound scanning, a common intraoperative imaging modality, can be used to carry out 3D reconstruction [118,119]. Other than using a 3D ultrasound transducer that can acquire a 3D surface directly, intraoperative 2D ultrasound imaging can also reconstruct 3D models given known spatial information for each scan slice, as illustrated in Figure 5. Reconstruction can subsequently be conducted after segmentation of the 3D surface based on the intensity of the ultrasonography, as illustrated in the same figure showcasing the 3D reconstruction of a placenta in a fluid medium [120]. Recent work on 3D ultrasound reconstruction has also explored promising machine learning-based approaches [121,122]. Endoscopic camera-based image reconstruction is another commonly used approach for intraoperative reconstruction of 3D structures in the scene [123,124]. Most of the papers found applied photogrammetry techniques for intraoperative mapping and surface reconstruction [125,126,127,128]. While these methods are mainly passive, i.e., relying purely on visual landmarks or interest points in the scene, there are also active camera-based approaches that cast structured lighting onto the scene for surface reconstruction [129,130]. Camera-based approaches are promising for several clinical applications, including 3D reconstruction of the lumen in capsule endoscopy [131,132]. Other methods involve a hybrid combination of ultrasonography and endoscopy for 3D reconstruction [87,88,133].
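The idea of reconstructing a volume from tracked 2D ultrasound slices can be sketched as follows: each B-mode frame is scattered into a common 3D grid using its pose from the tracker, and overlapping contributions are averaged. The frames, poses, pixel spacing, and grid size below are hypothetical placeholders, not the reconstruction pipeline of the cited works.

```python
# Freehand 3D ultrasound compounding sketch: place tracked 2D frames into a
# 3D voxel grid and average overlapping samples. All values are hypothetical.
import numpy as np

VOX = 0.5                                      # output voxel size (mm)
vol = np.zeros((160, 160, 160), np.float32)    # accumulated intensities
cnt = np.zeros_like(vol)                       # contributions per voxel

def insert_frame(frame, T_vol_img, px_mm=0.3):
    """Scatter one 2D frame (H x W) into the volume. T_vol_img maps points on
    the image plane (in mm, z = 0) to volume coordinates (in mm)."""
    h, w = frame.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([u.ravel() * px_mm, v.ravel() * px_mm,
                    np.zeros(u.size), np.ones(u.size)])
    xyz = (T_vol_img @ pts)[:3] / VOX            # to voxel indices
    idx = np.round(xyz).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(vol.shape)[:, None]), axis=0)
    i, j, k = idx[:, ok]
    np.add.at(vol, (i, j, k), frame.ravel()[ok])
    np.add.at(cnt, (i, j, k), 1.0)

# Example: two synthetic frames at slightly shifted probe poses.
for step in range(2):
    T = np.eye(4)
    T[2, 3] = 10.0 + 2.0 * step                  # probe advanced 2 mm per frame
    insert_frame(np.random.rand(200, 150).astype(np.float32), T)

reconstruction = np.divide(vol, cnt, out=np.zeros_like(vol), where=cnt > 0)
```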
Usually, surgeons and researchers visualize images or volumes on a screen and operate them using keyboards. However, with the help of AR/MR/VR technology, researchers can also augment real surgical scenes with renderings of 3D models. Reconstruction and rendering are thus integral to IGS, underpinning its sophisticated modern user interfaces and visual media.

3.2.2. User Interface and Medium of Visualization

While reconstruction and rendering provide plentiful 3D spatial image data, visualizing image-based navigational information through a 2D screen imposes problems in hand–eye coordination and depth perception [135]. Merging real scenes and virtual images is one of the solutions. Several display technologies, including half-mirror, projection-based image overlay, and integral videography (IV), can show fused real and virtual data in real time, as shown in Figure 6. Half-mirror is a technique that uses a half-silvered mirror or a transparent monitor to reflect a virtual image into the viewer’s eyes while allowing them to see through the mirror or monitor to observe the real environment. With the advantages of being realistic, immersive, versatile, and energy-efficient, half-mirror displays are widely used in surgical navigation [136,137,138]. IV is a technique that captures and reproduces a light field using a 2D array of micro-lenses to create autostereoscopic images. With the advantages of being autostereoscopic, natural, and parallax-rich, IV is widely used to display 3D anatomical structures or surgical plans inside the patient [139,140,141]. Projection-based image overlay is a technique that uses a projector to display a virtual image onto a screen or a real object, such as a wall or a table. With the advantages of being simple, scalable, and adaptable, projection-based image overlay is widely used to project guidance information or registration markers onto the patient [142,143].
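The geometric core of projection-based overlay, warping a guidance image onto a (near-planar) target region, can be sketched with a homography as follows. The corner correspondences here are hypothetical; a real system would obtain them from projector-camera calibration and registration markers.

```python
# Homography-based overlay sketch: warp a guidance image onto a planar target
# in a camera view and blend for preview. Correspondences and files are hypothetical.
import cv2
import numpy as np

guidance = cv2.imread("planned_trajectory.png")       # hypothetical overlay image
scene = cv2.imread("patient_view.png")                # camera view of the field
h, w = guidance.shape[:2]

# Corners of the guidance image ...
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
# ... and where those corners should land in the camera view of the patient.
dst = np.float32([[210, 140], [520, 160], [500, 430], [190, 410]])

H, _ = cv2.findHomography(src, dst)
warped = cv2.warpPerspective(guidance, H, (scene.shape[1], scene.shape[0]))

# Simple alpha blend to preview the fused real and virtual data.
overlay = cv2.addWeighted(scene, 0.7, warped, 0.3, 0)
cv2.imwrite("overlay_preview.png", overlay)
```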
In contrast to these techniques, which rely on external devices to create the AR effect, another approach is to use wearable devices that directly display virtual images in the user’s view. Head-mounted displays (HMDs) like HoloLens are innovative devices that can augment many kinds of surgery. They can not only display images and 3D models in the user’s view, as shown in Figure 7 [144], but also use the HMD’s camera imagery as an image source for various purposes. All tasks of surgical navigation, such as segmentation, object tracking, registration, fusion, and planning, can be performed on HMDs. For example, Teatini et al. used HoloLens to provide surgeons with the illusion of possessing “X-ray” vision to visualize bones in orthopedics [27], as shown in Figure 8. They used an NDI Polaris Spectra optical tracking system and optical markers to conduct rigid registration, and an evaluation study on two phantoms demonstrated that the MR navigation tool has the potential to improve diagnostic accuracy and provide better training conditions. Furthermore, HMDs like HoloLens have other functions that can be utilized in the surgical setting, including gesture recognition and audio control, which can enhance the surgeon’s convenience and efficiency. Nishihori et al. assessed a contactless operating interface for 3D image-guided navigation and showed some benefits [145]; however, their system needed additional devices, such as a Kinect for gesture recognition, and separate voice recognition software. HMDs like HoloLens offer an integrated interface that incorporates these functions, and similar systems can be developed based on them. Therefore, HMDs are a promising technology that can revolutionize surgical practice and outcomes.

3.2.3. Media of Visualization: VR/AR/MR

VR/AR/MR technologies are emerging fields that have attracted a lot of interest and attention in modern medicine. Many research groups have applied such technologies in various domains, such as treatment, education, rehabilitation, surgery, training, and so on [146]. VR/AR/MR technologies differ in their degree of immersion and interaction, but they all aim to enhance the user’s experience by creating realistic and engaging environments [147]. One of the earliest applications of AR was to solve a simple problem: how to see the surgical monitor while instruments are inside the patient. Yamaguchi et al. built a retinal projection HMD in 2009 to overlay the image and verify its accuracy [148]. Since then, AR technology has advanced significantly and has been used for more complex and challenging tasks. For example, Burström et al. demonstrated the feasibility, accuracy, and radiation-free navigation of AR surgical navigation with instrument tracking in minimally invasive spinal surgery (MISS) [149]. Sun et al. proposed a fast online calibration procedure for an optical see-through head-mounted display (OST-HMD) with the aid of an OTS [29], as shown in Figure 9. In this system, the whole procedure consists of three steps: (1) image segmentation and reconstruction, as shown in Figure 9A; (2) point-based registration or an ICP-based surface matching algorithm, as shown in Figure 9B; and (3) calibration of the OST-HMD, as shown in Figure 9C. These examples show how AR technology can improve the accuracy and efficiency of surgical procedures.
A VR simulator is a powerful tool that can be used for teaching or training purposes in medicine. Researchers have explored the use of VR simulators since the early 2000s, when they developed and evaluated various laparoscopic VR systems [150]. Khalifa et al. foresaw that VR has the ability to streamline and enhance the learning experience of residents by working synergistically with curriculum modalities [151]. Jaramaz and Eckman presented an example of a VR system using fluoroscopic navigation [152]. Nowadays, VR has been widely used in the education, training, and planning areas for different surgical specialties. For instance, in oral and maxillofacial surgery, VR has been utilized to improve the delivery of education and the quality of training by creating a virtual environment of the surgical procedure [153], as shown in Figure 10. Haluck et al. built a VR surgical trainer for navigation in laparoscopic surgery [154]. Barber et al. simulated sinus endoscopy using a VR simulator that combines 3D-printed models. They provided evidence that such a VR simulator is feasible and may prove useful as a low-cost and customizable educational tool [155]. These examples show how VR simulators can offer realistic and interactive scenarios for surgical education and training.
MR technology has several features that make it ideal for image-guided navigation, such as a see-through view, spatial mapping, and an interactive interface. Several studies have evaluated and validated MR navigation systems for different surgical procedures and have shown positive results. For example, Martel et al. evaluated an MR navigation system in ventriculostomy and showed a 35% accuracy improvement [156]. Zhou et al. validated an MR navigation system for seed brachytherapy and showed clinically acceptable accuracy [157], as shown in Figure 11. Mehralivand et al. tested the feasibility of a VR-assisted surgical navigation system for radical prostatectomy and showed great usability [158]. Frangi et al. validated an MR navigation system for laparoscopic surgery and showed significant time saving [159]. Incekara et al. provided proof of concept for the clinical feasibility of the HoloLens for the surgical planning of brain tumors and provided quantitative outcome measures [160]. McJunkin et al. showed the significant promise of MR in improving surgical navigation by helping surgical trainees to develop mental 3D anatomical maps more effectively [161]. MR surgical navigation (MRSN) can aid doctors in performing surgery based on a visualized plan and achieving clinically acceptable accuracy [162]. MRSN is feasible, safe, and accurate for lumbar fracture surgery, providing satisfactory assistance for spine surgeons [163]. Therefore, MR technology is a promising tool that can enhance surgical performance and outcomes.

4. Discussion

In this narrative review, we have outlined the tasks of IGS and how they relate to the image data stream. We have followed the image data stream to illustrate how image data are used in each workflow: segmentation, tracking, registration, and planning. Image processing across these workflows occurs in series, yet the research groups working on each workflow are largely independent of one another. Moreover, many research groups focus on setting up IGS systems using various methods and technologies. Advanced methods are still being proposed in these areas, and methods based on AI in particular are taking a prominent place. However, these methods are typically confined to their own area and are hard to integrate into an IGS system. Finding out how to fill the gap between these methods and the system raises a number of interesting research challenges related to satisfying the high accuracy, reliability, and effectiveness requirements of surgery. VR/AR/MR, as new visualization media, show benefits and extend IGS: they can provide realistic and interactive scenarios for surgical education, training, and planning. Some applications have been proposed, but they are still at an early stage. As a partial solution, some software provides integrated functionality; for example, 3D Slicer can automatically perform segmentation and reconstruction. However, state-of-the-art machine learning-based methods are often not included because they require customized models and databases. Therefore, more research is needed on how to integrate these methods into such software and systems.
Medical images have always been accompanied by security concerns because they carry private patient information. Various image security techniques exist, such as encryption, watermarking, and steganography [164,165]. Since the image streams in an IGS system are typically transmitted only internally, security issues are less often involved.
Evaluating surgical navigation systems is an important and challenging task that requires appropriate performance metrics. Current performance metrics can be divided into three categories: outcome, efficiency, and errors [166]. Outcome metrics measure the quality and effectiveness of the surgical procedure, such as the accuracy of tumor removal or the preservation of healthy tissue; these usually require medical experts to define measures that mimic how expertise is assessed. Efficiency metrics measure the speed and ease of the surgical procedure, such as the time consumed and the path length, and can reflect the usability and convenience of the surgical navigation system. Error metrics measure the deviation and uncertainty of the surgical procedure, such as the percentage of error, deviation from target, or accuracy, and can indicate the reliability and robustness of the surgical navigation system. For an IGS system, it is essential to evaluate the registration and tracking components, which align and update the image data with the surgical scene. According to [167], factors that contribute to the final evaluation of the whole system include the fiducial registration error (FRE), fiducial localization error (FLE), target registration error (TRE), overlay error (OR), and tool error (TE). These factors quantify the accuracy and precision of the IGS system. Moreover, some qualitative evaluations have been proposed, usually given by surgeons to describe their subjective opinion on aspects such as comfort, confidence, satisfaction, and preference [168]. These evaluations capture the user experience and feedback on the IGS system.
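The two registration-related error metrics most often reported, FRE and TRE, can be computed as sketched below for a rigid registration. The point sets and transform are hypothetical; the key distinction is that TRE is measured at a target that was not used to compute the registration, which is why a small FRE alone does not guarantee clinical accuracy.

```python
# Sketch of FRE and TRE for a rigid registration. All values are hypothetical;
# rigid_register refers to the SVD-based routine sketched in Section 3.1.3.
import numpy as np

def fre(R, t, fid_phys, fid_img):
    """RMS distance between mapped fiducials and their image-space counterparts."""
    d = fid_phys @ R.T + t - fid_img
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

def tre(R, t, target_phys, target_img):
    """Distance between the mapped target and its true image-space position."""
    return float(np.linalg.norm(R @ target_phys + t - target_img))

# Hypothetical registration result and point sets (mm).
R, t = np.eye(3), np.array([100.0, 40.0, 30.0])
fid_phys = np.array([[10.0, 0.0, 5.0], [60.0, 5.0, 0.0], [55.0, 70.0, 10.0]])
fid_img = fid_phys + t + np.random.normal(0, 0.3, fid_phys.shape)  # ~0.3 mm noise
target_phys = np.array([35.0, 30.0, 6.0])            # e.g., a lesion centroid
target_img = target_phys + t + np.array([0.4, -0.2, 0.5])

print("FRE (mm):", round(fre(R, t, fid_phys, fid_img), 2))
print("TRE (mm):", round(tre(R, t, target_phys, target_img), 2))
```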
Choosing appropriate evaluation metrics is crucial for assessing the performance and validity of surgical navigation systems, and the metrics should be aligned with the purpose and objectives of the research. For example, in [158], Mehralivand et al. evaluated the feasibility of interactive 3D visualization of prostate MRI data during in vivo robot-assisted radical prostatectomy (RARP); they chose outcome metrics such as blood loss, Gleason score, postoperative prostate-specific antigen (PSA), Sexual Health Inventory for Men (SHIM) score, and International Prostate Symptom Score (IPSS) to show the oncological and functional results of their system. In [159], Frangi et al. aimed to show the improvement of their MRSN system over LN-CT for laparoscopic surgery; they chose time consumption as an efficiency metric to show the speed and ease of their system. Outcome and efficiency metrics usually have specific standards or need to be compared against other current approaches. In [156], Martel et al. showed a 35% improvement in accuracy for tip and line distances (from 13.3 mm and 10.4 mm to 9.3 mm and 7.7 mm) compared with conventional methods. In [148], Yamaguchi et al. reported maximum projection errors of 3.25% and 2.83% when overlaying virtual images onto real surgical stents using their retinal projection HMD system. Error metrics are widely used and usually required for tracking the device tip and for registration between preoperative and intraoperative images. Using multiple evaluation metrics is also common and can provide a more comprehensive assessment of the system; for example, Burström et al. used the accuracy of the device tip, angular deviation, time consumption, and user feedback to show the feasibility of an ARSN system in MISS [149]. Researchers generally regard maximum errors of 1.5–2 mm as acceptable for a surgical navigation system. However, shifting from phantom or cadaver experiments to animal or human experiments can lead to an accuracy drop due to factors such as tissue deformation, organ motion, and blood flow. For example, Zhou et al. reached a 0.664 mm average needle location error and a 4.74% angle error in a phantom experiment but 1.617 mm and 5.574%, respectively, in an animal experiment [157]. Some researchers have also designed evaluation models [169] for a HoloLens-based MRSN system based on analytic hierarchy process theory and ergonomics evaluation methods.
Despite not yet being widely used, HMD devices are attractive platforms for deploying AR/VR/MR surgical navigation systems, as they can provide low-cost, integrated, and immersive solutions. Moreover, HMD devices can also enable contactless interaction, such as gesture control and voice control, which can enhance surgeons’ convenience and efficiency. However, transferring existing methods from conventional monitors to HMD devices is not a trivial task, as it requires adapting to different hardware specifications, user interfaces, and user experiences. Therefore, research and development on how existing methods can be better transferred to HMD devices is an important direction. It is envisioned that immersive technology will transform the way surgeons visualize patient information and interact with future technologies in the operating room during IGS.

5. Conclusions

IGS is an evolving discipline that aims to provide surgeons with accurate and comprehensive information during surgical procedures. Traditional intervention requires surgeons to collect various types of image information from different modalities, such as CT, MRI, US, and so on. However, these modalities have limitations in terms of resolution, contrast, invasiveness, and cost, and integrating and visualizing these image data in a meaningful and intuitive way is a challenging task. IGS systems can overcome these limitations by enriching the information presented to the surgeon using advanced image processing and visualization techniques. However, the adoption of these systems has been limited by issues such as accuracy and non-intuitive user interfaces. Integrated systems in the form of HMDs have the potential to bridge these technology gaps, providing the surgeon with intuitive visualization and control. It is clear from this review that IGS systems have not yet reached maturity, and the underlying technology and engineering continue to develop. While progress has been made in segmentation, object tracking, registration, fusion, planning, and reconstruction, combining these independent advances into one system still needs to be addressed. Visualization technologies like VR/MR/AR interfaces offer the possibility of an integrated system that addresses the concerns of cost and complexity. HMD devices can also enable contactless interaction, such as gesture control and voice control, which can enhance surgeons’ convenience and efficiency. However, it is not easy to apply existing methods designed for conventional monitors to HMD devices, because they must be adapted to different hardware specifications, user interfaces, and user experiences. It is anticipated that achieving these research directions will lead to IGS systems that better support more clinical applications.

Author Contributions

Conceptualization, Z.L. and L.Y.; investigation, Z.L. and C.L.; supervision, L.Y.; original draft preparation, Z.L. and C.L.; writing—review and editing, Z.L., C.L. and L.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the “Human Space X” Initiative Phase I: Tiantong Multidisciplinary Seed Grant from the International Campus of Zhejiang University, and by the Industrial Technology Development Project from Yanjia Technology Ltd., Shanghai, China (grant number K20230399).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wijsmuller, A.R.; Romagnolo, L.G.C.; Consten, E.; Melani, A.E.F.; Marescaux, J. Navigation and Image-Guided Surgery. In Digital Surgery; Atallah, S., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 137–144. ISBN 978-3-030-49100-0.
  2. ReportLinker. Global Image-Guided Therapy Systems Market Size, Share & Industry Trends Analysis Report by Application, by End User, by Product, by Regional Outlook and Forecast, 2022–2028. Available online: https://www.reportlinker.com/p06315020/?utm_source=GNW (accessed on 24 May 2023).
  3. Wang, K.; Du, Y.; Zhang, Z.; He, K.; Cheng, Z.; Yin, L.; Dong, D.; Li, C.; Li, W.; Hu, Z.; et al. Fluorescence Image-Guided Tumour Surgery. Nat. Rev. Bioeng. 2023, 1, 161–179.
  4. Monterubbianesi, R.; Tosco, V.; Vitiello, F.; Orilisi, G.; Fraccastoro, F.; Putignano, A.; Orsini, G. Augmented, Virtual and Mixed Reality in Dentistry: A Narrative Review on the Existing Platforms and Future Challenges. Appl. Sci. 2022, 12, 877.
  5. Flavián, C.; Ibáñez-Sánchez, S.; Orús, C. The Impact of Virtual, Augmented and Mixed Reality Technologies on the Customer Experience. J. Bus. Res. 2019, 100, 547–560.
  6. Kochanski, R.B.; Lombardi, J.M.; Laratta, J.L.; Lehman, R.A.; O’Toole, J.E. Image-Guided Navigation and Robotics in Spine Surgery. Neurosurgery 2019, 84, 1179–1189.
  7. Eu, D.; Daly, M.J.; Irish, J.C. Imaging-Based Navigation Technologies in Head and Neck Surgery. Curr. Opin. Otolaryngol. Head Neck Surg. 2021, 29, 149–155.
  8. DeLong, M.R.; Gandolfi, B.M.; Barr, M.L.; Datta, N.; Willson, T.D.; Jarrahy, R. Intraoperative Image-Guided Navigation in Craniofacial Surgery: Review and Grading of the Current Literature. J. Craniofac Surg. 2019, 30, 465–472.
  9. Du, J.P.; Fan, Y.; Wu, Q.N.; Zhang, J.; Hao, D.J. Accuracy of Pedicle Screw Insertion among 3 Image-Guided Navigation Systems: Systematic Review and Meta-Analysis. World Neurosurg. 2018, 109, 24–30.
  10. Mezger, U.; Jendrewski, C.; Bartels, M. Navigation in Surgery. Langenbeck’s Arch. Surg. 2013, 398, 501–514.
  11. Preim, B.; Baer, A.; Cunningham, D.; Isenberg, T.; Ropinski, T. A Survey of Perceptually Motivated 3D Visualization of Medical Image Data. Comput. Graph. Forum 2016, 35, 501–525.
  12. Zhou, L.; Fan, M.; Hansen, C.; Johnson, C.R.; Weiskopf, D. A Review of Three-Dimensional Medical Image Visualization. Health Data Sci. 2022, 2022, 9840519.
  13. Srivastava, A.K.; Singhvi, S.; Qiu, L.; King, N.K.K.; Ren, H. Image Guided Navigation Utilizing Intra-Operative 3D Surface Scanning to Mitigate Morphological Deformation of Surface Anatomy. J. Med. Biol. Eng. 2019, 39, 932–943.
  14. Shams, R.; Picot, F.; Grajales, D.; Sheehy, G.; Dallaire, F.; Birlea, M.; Saad, F.; Trudel, D.; Menard, C.; Leblond, F. Pre-Clinical Evaluation of an Image-Guided in-Situ Raman Spectroscopy Navigation System for Targeted Prostate Cancer Interventions. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 867–876.
  15. Wang, T.; He, T.; Zhang, Z.; Chen, Q.; Zhang, L.; Xia, G.; Yang, L.; Wang, H.; Wong, S.T.C.; Li, H. A Personalized Image-Guided Intervention System for Peripheral Lung Cancer on Patient-Specific Respiratory Motion Model. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1751–1764.
  16. Feng, Y.; Fan, J.C.; Tao, B.X.; Wang, S.G.; Mo, J.Q.; Wu, Y.Q.; Liang, Q.H.; Chen, X.J. An Image-Guided Hybrid Robot System for Dental Implant Surgery. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 15–26.
  17. Rüger, C.; Feufel, M.A.; Moosburner, S.; Özbek, C.; Pratschke, J.; Sauer, I.M. Ultrasound in Augmented Reality: A Mixed-Methods Evaluation of Head-Mounted Displays in Image-Guided Interventions. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1895–1905.
  18. Sugino, T.; Nakamura, R.; Kuboki, A.; Honda, O.; Yamamoto, M.; Ohtori, N. Comparative Analysis of Surgical Processes for Image-Guided Endoscopic Sinus Surgery. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 93–104.
  19. Chaplin, V.; Phipps, M.A.; Jonathan, S.V.; Grissom, W.A.; Yang, P.F.; Chen, L.M.; Caskey, C.F. On the Accuracy of Optically Tracked Transducers for Image-Guided Transcranial Ultrasound. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1317–1327.
  20. Richey, W.L.; Heiselman, J.S.; Luo, M.; Meszoely, I.M.; Miga, M.I. Impact of Deformation on a Supine-Positioned Image-Guided Breast Surgery Approach. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 2055–2066.
  21. Glossop, N.; Bale, R.; Xu, S.; Pritchard, W.F.; Karanian, J.W.; Wood, B.J. Patient-Specific Needle Guidance Templates Drilled Intraprocedurally for Image Guided Intervention: Feasibility Study in Swine. Int. J. Comput. Assist. Radiol. Surg. 2023, 18, 537–544.
  22. Dong, Y.; Zhang, C.; Ji, D.; Wang, M.; Song, Z. Regional-Surface-Based Registration for Image-Guided Neurosurgery: Effects of Scan Modes on Registration Accuracy. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1303–1315.
  23. Shapey, J.; Dowrick, T.; Delaunay, R.; Mackle, E.C.; Thompson, S.; Janatka, M.; Guichard, R.; Georgoulas, A.; Pérez-Suárez, D.; Bradford, R.; et al. Integrated Multi-Modality Image-Guided Navigation for Neurosurgery: Open-Source Software Platform Using State-of-the-Art Clinical Hardware. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1347–1356.
  24. Fauser, J.; Stenin, I.; Bauer, M.; Hsu, W.H.; Kristin, J.; Klenzner, T.; Schipper, J.; Mukhopadhyay, A. Toward an Automatic Preoperative Pipeline for Image-Guided Temporal Bone Surgery. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 967–976.
  25. Romaguera, L.V.; Mezheritsky, T.; Mansour, R.; Tanguay, W.; Kadoury, S. Predictive Online 3D Target Tracking with Population-Based Generative Networks for Image-Guided Radiotherapy. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1213–1225.
  26. Ruckli, A.C.; Schmaranzer, F.; Meier, M.K.; Lerch, T.D.; Steppacher, S.D.; Tannast, M.; Zeng, G.; Burger, J.; Siebenrock, K.A.; Gerber, N.; et al. Automated Quantification of Cartilage Quality for Hip Treatment Decision Support. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 2011–2021.
  27. Teatini, A.; Kumar, R.P.; Elle, O.J.; Wiig, O. Mixed Reality as a Novel Tool for Diagnostic and Surgical Navigation in Orthopaedics. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 407–414.
  28. Léger, É.; Reyes, J.; Drouin, S.; Popa, T.; Hall, J.A.; Collins, D.L.; Kersten-Oertel, M. MARIN: An Open-Source Mobile Augmented Reality Interactive Neuronavigation System. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1013–1021.
  29. Sun, Q.; Mai, Y.; Yang, R.; Ji, T.; Jiang, X.; Chen, X. Fast and Accurate Online Calibration of Optical See-through Head-Mounted Display for AR-Based Surgical Navigation Using Microsoft HoloLens. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1907–1919.
  30. Ma, C.; Cui, X.; Chen, F.; Ma, L.; Xin, S.; Liao, H. Knee Arthroscopic Navigation Using Virtual-Vision Rendering and Self-Positioning Technology. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 467–477.
  31. Shao, L.; Fu, T.; Zheng, Z.; Zhao, Z.; Ding, L.; Fan, J.; Song, H.; Zhang, T.; Yang, J. Augmented Reality Navigation with Real-Time Tracking for Facial Repair Surgery. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 981–991.
  32. Mellado, N.; Aiger, D.; Mitra, N.J. Super 4PCS Fast Global Pointcloud Registration via Smart Indexing. Comput. Graph. Forum 2014, 33, 205–215.
  33. Ma, L.; Liang, H.; Han, B.; Yang, S.; Zhang, X.; Liao, H. Augmented Reality Navigation with Ultrasound-Assisted Point Cloud Registration for Percutaneous Ablation of Liver Tumors. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1543–1552.
  34. Ter Braak, T.P.; Brouwer de Koning, S.G.; van Alphen, M.J.A.; van der Heijden, F.; Schreuder, W.H.; van Veen, R.L.P.; Karakullukcu, M.B. A Surgical Navigated Cutting Guide for Mandibular Osteotomies: Accuracy and Reproducibility of an Image-Guided Mandibular Osteotomy. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1719–1725.
  35. Kokko, M.A.; Van Citters, D.W.; Seigne, J.D.; Halter, R.J. A Particle Filter Approach to Dynamic Kidney Pose Estimation in Robotic Surgical Exposure. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1079–1089.
  36. Peoples, J.J.; Bisleri, G.; Ellis, R.E. Deformable Multimodal Registration for Navigation in Beating-Heart Cardiac Surgery. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 955–966.
  37. Wang, C.; Hayashi, Y.; Oda, M.; Kitasaka, T.; Takabatake, H.; Mori, M.; Honma, H.; Natori, H.; Mori, K. Depth-Based Branching Level Estimation for Bronchoscopic Navigation. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1795–1804.
  38. Oda, M.; Tanaka, K.; Takabatake, H.; Mori, M.; Natori, H.; Mori, K. Realistic Endoscopic Image Generation Method Using Virtual-to-Real Image-Domain Translation. Healthc. Technol. Lett. 2019, 6, 214–219.
  39. Hammami, H.; Lalys, F.; Rolland, Y.; Petit, A.; Haigron, P. Catheter Navigation Support for Liver Radioembolization Guidance: Feasibility of Structure-Driven Intensity-Based Registration. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1881–1894.
  40. Tibamoso-Pedraza, G.; Amouri, S.; Molina, V.; Navarro, I.; Raboisson, M.J.; Miró, J.; Lapierre, C.; Ratté, S.; Duong, L. Navigation Guidance for Ventricular Septal Defect Closure in Heart Phantoms. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1947–1956.
  41. Chan, A.; Parent, E.; Mahood, J.; Lou, E. 3D Ultrasound Navigation System for Screw Insertion in Posterior Spine Surgery: A Phantom Study. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 271–281.
  42. Zhang, X.; Wang, J.; Wang, T.; Ji, X.; Shen, Y.; Sun, Z.; Zhang, X. A Markerless Automatic Deformable Registration Framework for Augmented Reality Navigation of Laparoscopy Partial Nephrectomy. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1285–1294.
  43. Wang, C.; Oda, M.; Hayashi, Y.; Villard, B.; Kitasaka, T.; Takabatake, H.; Mori, M.; Honma, H.; Natori, H.; Mori, K. A Visual SLAM-Based Bronchoscope Tracking Scheme for Bronchoscopic Navigation. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1619–1630.
  44. Lee, L.K.; Liew, S.C.; Thong, W.J. A Review of Image Segmentation Methodologies in Medical Image. In Advanced Computer and Communication Engineering Technology: Proceedings of the 1st International Conference on Communication and Computer Engineering; Springer: Cham, Switzerland, 2015; pp. 1069–1080.
  45. Sharma, N.; Aggarwal, L. Automated Medical Image Segmentation Techniques. J. Med. Phys. 2010, 35, 3–14.
  46. Dobbe, J.G.G.; Peymani, A.; Roos, H.A.L.; Beerens, M.; Streekstra, G.J.; Strackee, S.D. Patient-Specific Plate for Navigation and Fixation of the Distal Radius: A Case Series. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 515–524.
  47. Chitsaz, M.; Seng, W.C. A Multi-Agent System Approach for Medical Image Segmentation. In Proceedings of the 2009 International Conference on Future Computer and Communication, Kuala Lumpur, Malaysia, 3–5 April 2009; pp. 408–411.
  48. Bennai, M.T.; Guessoum, Z.; Mazouzi, S.; Cormier, S.; Mezghiche, M. A Stochastic Multi-Agent Approach for Medical-Image Segmentation: Application to Tumor Segmentation in Brain MR Images. Artif. Intell. Med. 2020, 110, 101980. [Google Scholar] [CrossRef] [PubMed]
  49. Moussa, R.; Beurton-Aimar, M.; Desbarats, P. Multi-Agent Segmentation for 3D Medical Images. In Proceedings of the 2009 9th International Conference on Information Technology and Applications in Biomedicine, Larnaka, Cyprus, 4–7 November 2009; pp. 1–5. [Google Scholar]
  50. Nachour, A.; Ouzizi, L.; Aoura, Y. Multi-Agent Segmentation Using Region Growing and Contour Detection: Syntetic Evaluation in MR Images with 3D CAD Reconstruction. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 2016, 8, 115–124. [Google Scholar]
  51. Bennai, M.T.; Guessoum, Z.; Mazouzi, S.; Cormier, S.; Mezghiche, M. Towards a Generic Multi-Agent Approach for Medical Image Segmentation. In Proceedings of the PRIMA 2017: Principles and Practice of Multi-Agent Systems, Nice, France, 30 October–3 November 2017; An, B., Bazzan, A., Leite, J., Villata, S., van der Torre, L., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 198–211. [Google Scholar]
  52. Nachour, A.; Ouzizi, L.; Aoura, Y. Fuzzy Logic and Multi-Agent for Active Contour Models. In Proceedings of the Third International Afro-European Conference for Industrial Advancement—AECIA 2016, Marrakech, Morocco, 21–23 November 2016; Abraham, A., Haqiq, A., Ella Hassanien, A., Snasel, V., Alimi, A.M., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 229–237. [Google Scholar]
  53. Benchara, F.Z.; Youssfi, M.; Bouattane, O.; Ouajji, H.; Bensalah, M.O. A New Distributed Computing Environment Based on Mobile Agents for SPMD Applications. In Proceedings of the Mediterranean Conference on Information & Communication Technologies 2015, Saidia, Morocco, 7–9 May 2015; El Oualkadi, A., Choubani, F., El Moussati, A., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 353–362. [Google Scholar]
  54. Allioui, H.; Sadgal, M.; Elfazziki, A. Intelligent Environment for Advanced Brain Imaging: Multi-Agent System for an Automated Alzheimer Diagnosis. Evol. Intell. 2021, 14, 1523–1538. [Google Scholar] [CrossRef]
  55. Liao, X.; Li, W.; Xu, Q.; Wang, X.; Jin, B.; Zhang, X.; Wang, Y.; Zhang, Y. Iteratively-Refined Interactive 3D Medical Image Segmentation With Multi-Agent Reinforcement Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9394–9402. [Google Scholar]
  56. Allioui, H.; Mohammed, M.A.; Benameur, N.; Al-Khateeb, B.; Abdulkareem, K.H.; Garcia-Zapirain, B.; Damaševičius, R.; Maskeliūnas, R. A Multi-Agent Deep Reinforcement Learning Approach for Enhancement of COVID-19 CT Image Segmentation. J. Pers. Med. 2022, 12, 309. [Google Scholar] [CrossRef]
  57. Du, G.; Cao, X.; Liang, J.; Chen, X.; Zhan, Y. Medical Image Segmentation Based on U-Net: A Review. J. Imaging Sci. Technol. 2020, 64, 020508-1. [Google Scholar] [CrossRef]
  58. Huang, R.; Lin, M.; Dou, H.; Lin, Z.; Ying, Q.; Jia, X.; Xu, W.; Mei, Z.; Yang, X.; Dong, Y. Boundary-Rendering Network for Breast Lesion Segmentation in Ultrasound Images. Med. Image Anal. 2022, 80, 102478. [Google Scholar] [CrossRef]
  59. Silva-Rodríguez, J.; Naranjo, V.; Dolz, J. Constrained Unsupervised Anomaly Segmentation. Med. Image Anal. 2022, 80, 102526. [Google Scholar] [CrossRef]
  60. Pace, D.F.; Dalca, A.V.; Brosch, T.; Geva, T.; Powell, A.J.; Weese, J.; Moghari, M.H.; Golland, P. Learned Iterative Segmentation of Highly Variable Anatomy from Limited Data: Applications to Whole Heart Segmentation for Congenital Heart Disease. Med. Image Anal. 2022, 80, 102469. [Google Scholar] [CrossRef] [PubMed]
  61. Ding, Y.; Yang, Q.; Wang, Y.; Chen, D.; Qin, Z.; Zhang, J. MallesNet: A Multi-Object Assistance Based Network for Brachial Plexus Segmentation in Ultrasound Images. Med. Image Anal. 2022, 80, 102511. [Google Scholar] [CrossRef] [PubMed]
  62. Han, C.; Lin, J.; Mai, J.; Wang, Y.; Zhang, Q.; Zhao, B.; Chen, X.; Pan, X.; Shi, Z.; Xu, Z. Multi-Layer Pseudo-Supervision for Histopathology Tissue Semantic Segmentation Using Patch-Level Classification Labels. Med. Image Anal. 2022, 80, 102487. [Google Scholar] [CrossRef] [PubMed]
  63. Ibtehaz, N.; Rahman, M.S. MultiResUNet: Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation. Neural Netw. 2020, 121, 74–87. [Google Scholar] [CrossRef] [PubMed]
  64. Punn, N.S.; Agarwal, S. Modality Specific U-Net Variants for Biomedical Image Segmentation: A Survey. Artif. Intell. Rev. 2022, 55, 5845–5889. [Google Scholar] [CrossRef] [PubMed]
  65. Liu, X.; Song, L.; Liu, S.; Zhang, Y. A Review of Deep-Learning-Based Medical Image Segmentation Methods. Sustainability 2021, 13, 1224. [Google Scholar] [CrossRef]
  66. Gibson, E.; Li, W.; Sudre, C.; Fidon, L.; Shakir, D.I.; Wang, G.; Eaton-Rosen, Z.; Gray, R.; Doel, T.; Hu, Y. NiftyNet: A Deep-Learning Platform for Medical Imaging. Comput. Methods Programs Biomed. 2018, 158, 113–122. [Google Scholar] [CrossRef]
  67. Müller, D.; Kramer, F. MIScnn: A Framework for Medical Image Segmentation with Convolutional Neural Networks and Deep Learning. BMC Med. Imaging 2021, 21, 1–11. [Google Scholar] [CrossRef]
  68. de Geer, A.F.; van Alphen, M.J.A.; Zuur, C.L.; Loeve, A.J.; van Veen, R.L.P.; Karakullukcu, M.B. A Hybrid Registration Method Using the Mandibular Bone Surface for Electromagnetic Navigation in Mandibular Surgery. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1343–1353. [Google Scholar] [CrossRef]
  69. Strzeletz, S.; Hazubski, S.; Moctezuma, J.L.; Hoppe, H. Fast, Robust, and Accurate Monocular Peer-to-Peer Tracking for Surgical Navigation. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 479–489. [Google Scholar] [CrossRef]
  70. Smit, J.N.; Kuhlmann, K.F.D.; Ivashchenko, O.V.; Thomson, B.R.; Langø, T.; Kok, N.F.M.; Fusaglia, M.; Ruers, T.J.M. Ultrasound-Based Navigation for Open Liver Surgery Using Active Liver Tracking. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1765–1773. [Google Scholar] [CrossRef] [PubMed]
  71. Ivashchenko, O.V.; Kuhlmann, K.F.D.; van Veen, R.; Pouw, B.; Kok, N.F.M.; Hoetjes, N.J.; Smit, J.N.; Klompenhouwer, E.G.; Nijkamp, J.; Ruers, T.J.M. CBCT-Based Navigation System for Open Liver Surgery: Accurate Guidance toward Mobile and Deformable Targets with a Semi-Rigid Organ Approximation and Electromagnetic Tracking of the Liver. Med. Phys. 2021, 48, 2145–2159. [Google Scholar] [CrossRef] [PubMed]
  72. Zhang, C.; Hu, C.; He, Z.; Fu, Z.; Xu, L.; Ding, G.; Wang, P.; Zhang, H.; Ye, X. Shape Estimation of the Anterior Part of a Flexible Ureteroscope for Intraoperative Navigation. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1787–1799. [Google Scholar] [CrossRef] [PubMed]
  73. Attivissimo, F.; Lanzolla, A.M.L.; Carlone, S.; Larizza, P.; Brunetti, G. A Novel Electromagnetic Tracking System for Surgery Navigation. Comput. Assist. Surg. 2018, 23, 42–52. [Google Scholar] [CrossRef] [PubMed]
  74. Yilmaz, A.; Javed, O.; Shah, M. Object Tracking: A Survey. ACM Comput. Surv. 2006, 38, 13-es. [Google Scholar] [CrossRef]
  75. Luo, W.; Xing, J.; Milan, A.; Zhang, X.; Liu, W.; Kim, T.-K. Multiple Object Tracking: A Literature Review. Artif. Intell. 2021, 293, 103448. [Google Scholar] [CrossRef]
  76. Ciaparrone, G.; Sánchez, F.L.; Tabik, S.; Troiano, L.; Tagliaferri, R.; Herrera, F. Deep learning in video multi-object tracking: A survey. Neurocomputing 2020, 381, 61–88. [Google Scholar] [CrossRef]
  77. Li, X.; Hu, W.; Shen, C.; Zhang, Z.; Dick, A.; Hengel, A.V.D. A Survey of Appearance Models in Visual Object Tracking. ACM Trans. Intell. Syst. Technol. 2013, 4, 1–48. [Google Scholar] [CrossRef]
  78. Soleimanitaleb, Z.; Keyvanrad, M.A.; Jafari, A. Object Tracking Methods: A Review. In Proceedings of the 2019 9th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 24–25 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 282–288. [Google Scholar]
  79. Zhang, Y.; Wang, T.; Liu, K.; Zhang, B.; Chen, L. Recent Advances of Single-Object Tracking Methods: A Brief Survey. Neurocomputing 2021, 455, 1–11. [Google Scholar] [CrossRef]
  80. Wang, Y.; Sun, Q.; Liu, Z.; Gu, L. Visual Detection and Tracking Algorithms for Minimally Invasive Surgical Instruments: A Comprehensive Review of the State-of-the-Art. Rob. Auton. Syst. 2022, 149, 103945. [Google Scholar] [CrossRef]
  81. Bouget, D.; Allan, M.; Stoyanov, D.; Jannin, P. Vision-Based and Marker-Less Surgical Tool Detection and Tracking: A Review of the Literature. Med. Image Anal. 2017, 35, 633–654. [Google Scholar] [CrossRef] [PubMed]
  82. Yang, L.; Etsuko, K. Review on vision-based tracking in surgical navigation. IET Cyber-Syst. Robot. 2020, 2, 107–121. [Google Scholar] [CrossRef]
  83. Teske, H.; Mercea, P.; Schwarz, M.; Nicolay, N.H.; Sterzing, F.; Bendl, R. Real-time markerless lung tumor tracking in fluoroscopic video: Handling overlapping of projected structures. Med Phys. 2015, 42, 2540–2549. [Google Scholar] [CrossRef] [PubMed]
  84. Hirai, R.; Sakata, Y.; Tanizawa, A.; Mori, S. Real-time tumor tracking using fluoroscopic imaging with deep neural network analysis. Phys. Medica 2019, 59, 22–29. [Google Scholar] [CrossRef] [PubMed]
  85. De Luca, V.; Banerjee, J.; Hallack, A.; Kondo, S.; Makhinya, M.; Nouri, D.; Royer, L.; Cifor, A.; Dardenne, G.; Goksel, O.; et al. Evaluation of 2D and 3D ultrasound tracking algorithms and impact on ultrasound-guided liver radiotherapy margins. Med Phys. 2018, 45, 4986–5003. [Google Scholar] [CrossRef] [PubMed]
  86. Konh, B.; Padasdao, B.; Batsaikhan, Z.; Ko, S.Y. Integrating robot-assisted ultrasound tracking and 3D needle shape prediction for real-time tracking of the needle tip in needle steering procedures. Int. J. Med Robot. Comput. Assist. Surg. 2021, 17, e2272. [Google Scholar] [CrossRef] [PubMed]
  87. Yang, L.; Wang, J.; Ando, T.; Kubota, A.; Yamashita, H.; Sakuma, I.; Chiba, T.; Kobayashi, E. Vision-based endoscope tracking for 3D ultrasound image-guided surgical navigation. Comput. Med. Imaging Graph. 2015, 40, 205–216. [Google Scholar] [CrossRef]
  88. Yang, L.; Wang, J.; Ando, T.; Kubota, A.; Yamashita, H.; Sakuma, I.; Chiba, T.; Kobayashi, E. Self-contained image mapping of placental vasculature in 3D ultrasound-guided fetoscopy. Surg. Endosc. 2016, 30, 4136–4149. [Google Scholar] [CrossRef]
  89. Chen, Z.; Zhao, Z.; Cheng, X. Surgical Instruments Tracking Based on Deep Learning with Lines Detection and Spatio-Temporal Context. In Proceedings of the 2017 Chinese Automation Congress, CAC 2017, Jinan, China, 20–22 October 2017; pp. 2711–2714. [Google Scholar] [CrossRef]
  90. Choi, B.; Jo, K.; Choi, S.; Choi, J. Surgical-Tools Detection Based on Convolutional Neural Network in Laparoscopic Robot-Assisted Surgery. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Jeju Island, Republic of Korea, 11–15 July 2017; pp. 1756–1759. [Google Scholar] [CrossRef]
  91. Li, Y.; Richter, F.; Lu, J.; Funk, E.K.; Orosco, R.K.; Zhu, J.; Yip, M.C. Super: A Surgical Perception Framework for Endoscopic Tissue Manipulation with Surgical Robotics. IEEE Robot. Autom. Lett. 2020, 5, 2294–2301. [Google Scholar] [CrossRef]
  92. Haskins, G.; Kruger, U.; Yan, P. Deep Learning in Medical Image Registration: A Survey. Mach. Vis. Appl. 2020, 31, 8. [Google Scholar] [CrossRef]
  93. Fu, Y.; Lei, Y.; Wang, T.; Curran, W.J.; Liu, T.; Yang, X. Deep Learning in Medical Image Registration: A Review. Phys. Med. Biol. 2020, 65, 20TR01. [Google Scholar] [CrossRef] [PubMed]
  94. Balakrishnan, G.; Zhao, A.; Sabuncu, M.R.; Guttag, J.; Dalca, A.V. VoxelMorph: A Learning Framework for Deformable Medical Image Registration. IEEE Trans. Med. Imaging 2019, 38, 1788–1800. [Google Scholar] [CrossRef] [PubMed]
  95. de Vos, B.D.; Berendsen, F.F.; Viergever, M.A.; Sokooti, H.; Staring, M.; Išgum, I. A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration. Med. Image Anal. 2019, 52, 128–143. [Google Scholar] [CrossRef] [PubMed]
  96. Chen, C.; Li, Y.; Liu, W.; Huang, J. SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework. IEEE Trans. Image Process. 2015, 24, 4213–4224. [Google Scholar] [CrossRef] [PubMed]
  97. Mankovich, N.J.; Samson, D.; Pratt, W.; Lew, D.; Beumer, J., III. Surgical Planning Using Three-Dimensional Imaging and Computer Modeling. Otolaryngol. Clin. N. Am. 1994, 27, 875. [Google Scholar] [CrossRef]
  98. Selle, D.; Preim, B.; Schenk, A.; Peitgen, H.-O. Analysis of Vasculature for Liver Surgical Planning. IEEE Trans. Med. Imaging 2002, 21, 1344–1357. [Google Scholar] [CrossRef] [PubMed]
  99. Byrd, H.S.; Hobar, P.C. Rhinoplasty: A Practical Guide for Surgical Planning. Plast. Reconstr. Surg. 1993, 91, 642–654. [Google Scholar] [CrossRef]
  100. Han, R.; Uneri, A.; De Silva, T.; Ketcha, M.; Goerres, J.; Vogt, S.; Kleinszig, G.; Osgood, G.; Siewerdsen, J.H. Atlas-Based Automatic Planning and 3D–2D Fluoroscopic Guidance in Pelvic Trauma Surgery. Phys. Med. Biol. 2019, 64, 095022. [Google Scholar] [CrossRef]
  101. Li, H.; Xu, J.; Zhang, D.; He, Y.; Chen, X. Automatic Surgical Planning Based on Bone Density Assessment and Path Integral in Cone Space for Reverse Shoulder Arthroplasty. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1017–1027. [Google Scholar] [CrossRef]
  102. Sternheim, A.; Rotman, D.; Nayak, P.; Arkhangorodsky, M.; Daly, M.J.; Irish, J.C.; Ferguson, P.C.; Wunder, J.S. Computer-Assisted Surgical Planning of Complex Bone Tumor Resections Improves Negative Margin Outcomes in a Sawbones Model. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 695–701. [Google Scholar] [CrossRef]
  103. Hammoudeh, J.A.; Howell, L.K.; Boutros, S.; Scott, M.A.; Urata, M.M. Current Status of Surgical Planning for Orthognathic Surgery: Traditional Methods versus 3D Surgical Planning. Plast. Reconstr. Surg. Glob. Open 2015, 3, e307. [Google Scholar] [CrossRef] [PubMed]
  104. Chim, H.; Wetjen, N.; Mardini, S. Virtual Surgical Planning in Craniofacial Surgery. Semin. Plast. Surg. 2014, 28, 150–158. [Google Scholar] [CrossRef] [PubMed]
  105. Regodić, M.; Bárdosi, Z.; Diakov, G.; Galijašević, M.; Freyschlag, C.F.; Freysinger, W. Visual Display for Surgical Targeting: Concepts and Usability Study. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1565–1576. [Google Scholar] [CrossRef] [PubMed]
  106. Mazzola, F.; Smithers, F.; Cheng, K.; Mukherjee, P.; Low, T.-H.H.; Ch’ng, S.; Palme, C.E.; Clark, J.R. Time and Cost-Analysis of Virtual Surgical Planning for Head and Neck Reconstruction: A Matched Pair Analysis. Oral. Oncol. 2020, 100, 104491. [Google Scholar] [CrossRef] [PubMed]
  107. Tang, X. The role of artificial intelligence in medical imaging research. BJR|Open 2020, 2, 20190031. [Google Scholar] [CrossRef] [PubMed]
  108. Wagner, J.B. Artificial Intelligence in Medical Imaging. Radiol. Technol. 2019, 90, 489–501. [Google Scholar] [CrossRef] [PubMed]
  109. Wang, S.; Cao, G.; Wang, Y.; Liao, S.; Wang, Q.; Shi, J.; Li, C.; Shen, D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Front. Radiol. 2021, 1, 781868. [Google Scholar] [CrossRef]
  110. Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Išgum, I. Generative Adversarial Networks for Noise Reduction in Low-Dose CT. IEEE Trans. Med. Imaging 2017, 36, 2536–2545. [Google Scholar] [CrossRef]
  111. Chen, H.; Zhang, Y.; Kalra, M.K.; Lin, F.; Chen, Y.; Liao, P.; Zhou, J.; Wang, G. Low-Dose CT with a Residual Encoder-Decoder Convolutional Neural Network. IEEE Trans. Med. Imaging 2017, 36, 2524–2535. [Google Scholar] [CrossRef]
  112. Lu, S.; Yang, B.; Xiao, Y.; Liu, S.; Liu, M.; Yin, L.; Zheng, W. Iterative Reconstruction of Low-Dose CT Based on Differential Sparse. Biomed. Signal Process. Control. 2023, 79, 104204. [Google Scholar] [CrossRef]
  113. Wang, J.; Tang, Y.; Wu, Z.; Tsui, B.M.W.; Chen, W.; Yang, X.; Zheng, J.; Li, M. Domain-Adaptive Denoising Network for Low-Dose CT via Noise Estimation and Transfer Learning. Med Phys. 2023, 50, 74–88. [Google Scholar] [CrossRef] [PubMed]
  114. Jiang, M.; Zhi, M.; Wei, L.; Yang, X.; Zhang, J.; Li, Y.; Wang, P.; Huang, J.; Yang, G. FA-GAN: Fused Attentive Generative Adversarial Networks for MRI Image Super-Resolution. Comput. Med Imaging Graph. 2021, 92, 101969. [Google Scholar] [CrossRef] [PubMed]
  115. Guo, P.; Wang, P.; Zhou, J.; Jiang, S.; Patel, V.M. Multi-Institutional Collaborations for Improving Deep Learning-Based Magnetic Resonance Image Reconstruction Using Federated Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 2423–2432. [Google Scholar]
  116. Dhengre, N.; Sinha, S. Multiscale U-Net-Based Accelerated Magnetic Resonance Imaging Reconstruction. Signal Image Video Process. 2022, 16, 881–888. [Google Scholar] [CrossRef]
  117. Maken, P.; Gupta, A. 2D-to-3D: A Review for Computational 3D Image Reconstruction from X-Ray Images. Arch. Comput. Methods Eng. 2023, 30, 85–114. [Google Scholar] [CrossRef]
  118. Gobbi, D.G.; Peters, T.M. Interactive intra-operative 3D ultrasound reconstruction and visualization. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Tokyo, Japan, 25–28 September 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 156–163. [Google Scholar]
  119. Solberg, O.V.; Lindseth, F.; Torp, H.; Blake, R.E.; Hernes, T.A.N. Freehand 3D Ultrasound Reconstruction Algorithms—A Review. Ultrasound Med. Biol. 2007, 33, 991–1009. [Google Scholar] [CrossRef]
  120. Yang, L.; Wang, J.; Kobayashi, E.; Ando, T.; Yamashita, H.; Sakuma, I.; Chiba, T. Image mapping of untracked free-hand endoscopic views to an ultrasound image-constructed 3D placenta model. Int. J. Med Robot. Comput. Assist. Surg. 2015, 11, 223–234. [Google Scholar] [CrossRef]
  121. Chen, X.; Chen, H.; Peng, Y.; Liu, L.; Huang, C. A Freehand 3D Ultrasound Reconstruction Method Based on Deep Learning. Electronics 2023, 12, 1527. [Google Scholar] [CrossRef]
  122. Luo, M.; Yang, X.; Wang, H.; Dou, H.; Hu, X.; Huang, Y.; Ravikumar, N.; Xu, S.; Zhang, Y.; Xiong, Y.; et al. RecON: Online learning for sensorless freehand 3D ultrasound reconstruction. Med Image Anal. 2023, 87, 102810. [Google Scholar] [CrossRef]
  123. Lin, B.; Sun, Y.; Qian, X.; Goldgof, D.; Gitlin, R.; You, Y. Video-based 3D reconstruction, laparoscope localization and deformation recovery for abdominal minimally invasive surgery: A survey. Int. J. Med Robot. Comput. Assist. Surg. 2016, 12, 158–178. [Google Scholar] [CrossRef]
  124. Maier-Hein, L.; Mountney, P.; Bartoli, A.; Elhawary, H.; Elson, D.; Groch, A.; Kolb, A.; Rodrigues, M.; Sorger, J.; Speidel, S.; et al. Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery. Med Image Anal. 2013, 17, 974–996. [Google Scholar] [CrossRef]
  125. Mahmoud, N.; Cirauqui, I.; Hostettler, A.; Doignon, C.; Soler, L.; Marescaux, J.; Montiel, J.M.M. ORBSLAM-based endoscope tracking and 3D reconstruction. In Proceedings of the Computer-Assisted and Robotic Endoscopy: Third International Workshop, CARE 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, 17 October 2016; Revised Selected Papers 3. Springer International Publishing: Cham, Switzerland, 2017; pp. 72–83. [Google Scholar]
  126. Grasa, O.G.; Civera, J.; Guemes, A.; Munoz, V.; Montiel, J.M.M. EKF monocular SLAM 3D modeling, measuring and augmented reality from endoscope image sequences. In Proceedings of the 5th Workshop on Augmented Environments for Medical Imaging including Augmented Reality in Computer-Aided Surgery (AMI-ARCS), London, UK, 24 September 2009; Volume 2, pp. 102–109. [Google Scholar]
  127. Widya, A.R.; Monno, Y.; Okutomi, M.; Suzuki, S.; Gotoda, T.; Miki, K. Whole Stomach 3D Reconstruction and Frame Localization from Monocular Endoscope Video. IEEE J. Transl. Eng. Health Med. 2019, 7, 1–10. [Google Scholar] [CrossRef] [PubMed]
  128. Chen, L.; Tang, W.; John, N.W.; Wan, T.R.; Zhang, J.J. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality. Comput. Methods Programs Biomed. 2018, 158, 135–146. [Google Scholar] [CrossRef] [PubMed]
  129. Hayashibe, M.; Suzuki, N.; Nakamura, Y. Laser-scan endoscope system for intraoperative geometry acquisition and surgical robot safety management. Med Image Anal. 2006, 10, 509–519. [Google Scholar] [CrossRef] [PubMed]
  130. Sui, C.; Wu, J.; Wang, Z.; Ma, G.; Liu, Y.-H. A Real-Time 3D Laparoscopic Imaging System: Design, Method, and Validation. IEEE Trans. Biomed. Eng. 2020, 67, 2683–2695. [Google Scholar] [CrossRef] [PubMed]
  131. Ciuti, G.; Visentini-Scarzanella, M.; Dore, A.; Menciassi, A.; Dario, P.; Yang, G.-Z. Intra-operative monocular 3D reconstruction for image-guided navigation in active locomotion capsule endoscopy. In Proceedings of the 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Rome, Italy, 24–27 June 2012; pp. 768–774. [Google Scholar] [CrossRef]
  132. Fan, Y.; Meng MQ, H.; Li, B. 3D reconstruction of wireless capsule endoscopy images. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 5149–5152. [Google Scholar]
  133. Yang, L.; Wang, J.; Kobayashi, E.; Liao, H.; Sakuma, I.; Yamashita, H.; Chiba, T. Ultrasound image-guided mapping of endoscopic views on a 3D placenta model: A tracker-less approach. In Proceedings of the Augmented Reality Environments for Medical Imaging and Computer-Assisted Interventions: 6th International Workshop, MIAR 2013 and 8th International Workshop, AE-CAI 2013, Nagoya, Japan, 22 September 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 107–116. [Google Scholar]
  134. Liangjing, Y. Development of a Self-Contained Image Mapping Framework for Ultrasound-Guided Fetoscopic Procedures via Three-Dimensional Dynamic View Expansion. Ph.D. Thesis, The University of Tokyo, Tokyo, Japan, 2014. [Google Scholar]
  135. Fan, Z.; Ma, L.; Liao, Z.; Zhang, X.; Liao, H. Three-Dimensional Image-Guided Techniques for Minimally Invasive Surgery. In Handbook of Robotic and Image-Guided Surgery; Elsevier: Amsterdam, The Netherlands, 2020; pp. 575–584. [Google Scholar] [CrossRef]
  136. Nishino, H.; Hatano, E.; Seo, S.; Nitta, T.; Saito, T.; Nakamura, M.; Hattori, K.; Takatani, M.; Fuji, H.; Taura, K.; et al. Real-Time Navigation for Liver Surgery Using Projection Mapping with Indocyanine Green Fluorescence: Development of the Novel Medical Imaging Projection System. Ann. Surg. 2018, 267, 1134–1140. [Google Scholar] [CrossRef] [PubMed]
  137. Deng, H.; Wang, Q.H.; Xiong, Z.L.; Zhang, H.L.; Xing, Y. Magnified Augmented Reality 3D Display Based on Integral Imaging. Optik 2016, 127, 4250–4253. [Google Scholar] [CrossRef]
  138. He, C.; Liu, Y.; Wang, Y. Sensor-Fusion Based Augmented-Reality Surgical Navigation System. In Proceedings of the IEEE Instrumentation and Measurement Technology Conference, Taipei, Taiwan, 23–26 May 2016. [Google Scholar] [CrossRef]
  139. Suenaga, H.; Tran, H.H.; Liao, H.; Masamune, K.; Dohi, T.; Hoshi, K.; Takato, T. Vision-Based Markerless Registration Using Stereo Vision and an Augmented Reality Surgical Navigation System: A Pilot Study. BMC Med. Imaging 2015, 15, 1–11. [Google Scholar] [CrossRef]
  140. Zhang, X.; Chen, G.; Liao, H. High-Quality See-through Surgical Guidance System Using Enhanced 3-D Autostereoscopic Augmented Reality. IEEE Trans. Biomed. Eng. 2017, 64, 1815–1825. [Google Scholar] [CrossRef]
  141. Zhang, X.; Chen, G.; Liao, H. A High-Accuracy Surgical Augmented Reality System Using Enhanced Integral Videography Image Overlay. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS 2015, Milan, Italy, 25–29 August 2015; pp. 4210–4213. [Google Scholar] [CrossRef]
  142. Gavaghan, K.A.; Peterhans, M.; Oliveira-Santos, T.; Weber, S. A Portable Image Overlay Projection Device for Computer-Aided Open Liver Surgery. IEEE Trans. Biomed. Eng. 2011, 58, 1855–1864. [Google Scholar] [CrossRef]
  143. Wen, R.; Chui, C.K.; Ong, S.H.; Lim, K.B.; Chang, S.K.Y. Projection-Based Visual Guidance for Robot-Aided RF Needle Insertion. Int. J. Comput. Assist. Radiol. Surg. 2013, 8, 1015–1025. [Google Scholar] [CrossRef]
  144. Yu, J.; Wang, T.; Zong, Z.; Yang, L. Immersive Human-Robot Interaction for Dexterous Manipulation in Minimally Invasive Procedures. In Proceedings of the 4th WRC Symposium on Advanced Robotics and Automation 2022, WRC SARA 2022, Beijing, China, 20 August 2022. [Google Scholar]
  145. Nishihori, M.; Izumi, T.; Nagano, Y.; Sato, M.; Tsukada, T.; Kropp, A.E.; Wakabayashi, T. Development and Clinical Evaluation of a Contactless Operating Interface for Three-Dimensional Image-Guided Navigation for Endovascular Neurosurgery. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 663–671. [Google Scholar] [CrossRef] [PubMed]
  146. Chen, L.; Day, T.W.; Tang, W.; John, N.W. Recent Developments and Future Challenges in Medical Mixed Reality. In Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2017, Nantes, France, 9–13 October 2017. [Google Scholar]
  147. Hu, H.; Feng, X.; Shao, Z.; Xie, M.; Xu, S.; Wu, X.; Ye, Z. Application and Prospect of Mixed Reality Technology in Medical Field. Curr. Med. Sci. 2019, 39, 1–6. [Google Scholar] [CrossRef] [PubMed]
  148. Yamaguchi, S.; Ohtani, T.; Yatani, H.; Sohmura, T. Augmented Reality System for Dental Implant Surgery. In Virtual and Mixed Reality; Springer: Berlin/Heidelberg, Germany, 2009; pp. 633–638. ISBN 0302-9743. [Google Scholar]
  149. Burström, G.; Nachabe, R.; Persson, O.; Edström, E.; Elmi Terander, A. Augmented and Virtual Reality Instrument Tracking for Minimally Invasive Spine Surgery: A Feasibility and Accuracy Study. Spine 2019, 44, 1097–1104. [Google Scholar] [CrossRef] [PubMed]
  150. Schijven, M.; Jakimowicz, J. Virtual Reality Surgical Laparoscopic Simulators: How to Choose. Surg. Endosc. 2003, 17, 1943–1950. [Google Scholar] [CrossRef] [PubMed]
  151. Khalifa, Y.M.; Bogorad, D.; Gibson, V.; Peifer, J.; Nussbaum, J. Virtual Reality in Ophthalmology Training. Surv. Ophthalmol. 2006, 51, 259. [Google Scholar] [CrossRef] [PubMed]
  152. Jaramaz, B.; Eckman, K. Virtual Reality Simulation of Fluoroscopic Navigation. Clin. Orthop. Relat. Res. 2006, 442, 30–34. [Google Scholar] [CrossRef]
  153. Ayoub, A.; Pulijala, Y. The Application of Virtual Reality and Augmented Reality in Oral & Maxillofacial Surgery. BMC Oral. Health 2019, 19, 238. [Google Scholar] [CrossRef]
  154. Haluck, R.S.; Webster, R.W.; Snyder, A.J.; Melkonian, M.G.; Mohler, B.J.; Dise, M.L.; Lefever, A. A Virtual Reality Surgical Trainer for Navigation in Laparoscopic Surgery. Stud. Health Technol. Inform. 2001, 81, 171. [Google Scholar]
  155. Barber, S.R.; Jain, S.; Son, Y.-J.; Chang, E.H. Virtual Functional Endoscopic Sinus Surgery Simulation with 3D-Printed Models for Mixed-Reality Nasal Endoscopy. Otolaryngol.–Head Neck Surg. 2018, 159, 933–937. [Google Scholar] [CrossRef]
  156. Martel, A.L.; Abolmaesumi, P.; Stoyanov, D.; Mateus, D.; Zuluaga, M.A.; Zhou, S.K.; Racoceanu, D.; Joskowicz, L. An Interactive Mixed Reality Platform for Bedside Surgical Procedures. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2020; Springer International Publishing AG: Cham, Switzerland, 2020; Volume 12263, pp. 65–75. ISBN 0302-9743. [Google Scholar]
  157. Zhou, Z.; Yang, Z.; Jiang, S.; Zhang, F.; Yan, H. Design and Validation of a Surgical Navigation System for Brachytherapy Based on Mixed Reality. Med. Phys. 2019, 46, 3709–3718. [Google Scholar] [CrossRef]
  158. Mehralivand, S.; Kolagunda, A.; Hammerich, K.; Sabarwal, V.; Harmon, S.; Sanford, T.; Gold, S.; Hale, G.; Romero, V.V.; Bloom, J.; et al. A Multiparametric Magnetic Resonance Imaging-Based Virtual Reality Surgical Navigation Tool for Robotic-Assisted Radical Prostatectomy. Turk. J. Urol. 2019, 45, 357–365. [Google Scholar] [CrossRef] [PubMed]
  159. Frangi, A.F.; Schnabel, J.A.; Davatzikos, C.; Alberola-López, C.; Fichtinger, G. A Novel Mixed Reality Navigation System for Laparoscopy Surgery. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Springer International Publishing AG: Cham, Switzerland, 2018; Volume 11073, pp. 72–80. ISBN 0302-9743. [Google Scholar]
  160. Incekara, F.; Smits, M.; Dirven, C.; Vincent, A. Clinical Feasibility of a Wearable Mixed-Reality Device in Neurosurgery. World Neurosurg. 2018, 118, e422. [Google Scholar] [CrossRef] [PubMed]
  161. McJunkin, J.L.; Jiramongkolchai, P.; Chung, W.; Southworth, M.; Durakovic, N.; Buchman, C.A.; Silva, J.R. Development of a Mixed Reality Platform for Lateral Skull Base Anatomy. Otol. Neurotol. 2018, 39, e1137–e1142. [Google Scholar] [CrossRef] [PubMed]
  162. Zhou, Z.; Jiang, S.; Yang, Z.; Xu, B.; Jiang, B. Surgical Navigation System for Brachytherapy Based on Mixed Reality Using a Novel Stereo Registration Method. Virtual Real. 2021, 25, 975–984. [Google Scholar] [CrossRef]
  163. Li, J.; Zhang, H.; Li, Q.; Yu, S.; Chen, W.; Wan, S.; Chen, D.; Liu, R.; Ding, F. Treating Lumbar Fracture Using the Mixed Reality Technique. Biomed. Res. Int. 2021, 2021, 6620746. [Google Scholar] [CrossRef] [PubMed]
  164. Holi, G.; Murthy, S.K. An Overview of Image Security Techniques. Int. J. Comput. Appl. 2016, 154, 975–8887. [Google Scholar]
  165. Magdy, M.; Hosny, K.M.; Ghali, N.I.; Ghoniemy, S. Security of Medical Images for Telemedicine: A Systematic Review. Multimed. Tools Appl. 2022, 81, 25101–25145. [Google Scholar] [CrossRef] [PubMed]
  166. Lungu, A.J.; Swinkels, W.; Claesen, L.; Tu, P.; Egger, J.; Chen, X. A Review on the Applications of Virtual Reality, Augmented Reality and Mixed Reality in Surgical Simulation: An Extension to Different Kinds of Surgery. Expert. Rev. Med. Devices 2021, 18, 47–62. [Google Scholar] [CrossRef]
  167. Hussain, R.; Lalande, A.; Guigou, C.; Bozorg-Grayeli, A. Contribution of Augmented Reality to Minimally Invasive Computer-Assisted Cranial Base Surgery. IEEE J. Biomed. Health Inform. 2020, 24, 2093–2106. [Google Scholar] [CrossRef]
  168. Moody, L.; Waterworth, A.; McCarthy, A.D.; Harley, P.J.; Smallwood, R.H. The Feasibility of a Mixed Reality Surgical Training Environment. Virtual Real. 2008, 12, 77–86. [Google Scholar] [CrossRef]
  169. Zuo, Y.; Jiang, T.; Dou, J.; Yu, D.; Ndaro, Z.N.; Du, Y.; Li, Q.; Wang, S.; Huang, G. A Novel Evaluation Model for a Mixed-Reality Surgical Navigation System: Where Microsoft HoloLens Meets the Operating Room. Surg. Innov. 2020, 27, 193–202. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Image Data Stream in Surgical Navigation Workflows.
Figure 2. Diagram of the Surgical Navigation System.
Figure 3. Combination of ultrasound localization and endoscopic vision-based pose estimation, yielding timely tracking that is free of cumulative error.
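To make the idea in Figure 3 concrete, the sketch below shows one way such a combination can be organized, assuming 4 × 4 homogeneous poses: high-rate relative motion from endoscopic visual odometry is accumulated between updates, and each lower-rate absolute pose from ultrasound localization resets the accumulated drift. The helper se3, the event stream, and all numerical values are purely illustrative assumptions, not data or code from the cited systems.

```python
import numpy as np

def se3(rz=0.0, t=(0.0, 0.0, 0.0)):
    """Homogeneous transform from a rotation about z (rad) and a translation (illustrative helper)."""
    c, s = np.cos(rz), np.sin(rz)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

# Hypothetical event stream: five high-rate visual-odometry increments ('vo'),
# one absolute ultrasound localization fix ('us'), then five more increments.
stream = ([("vo", se3(0.01, (1.0, 0.0, 0.0)))] * 5
          + [("us", se3(0.05, (5.2, 0.1, 0.0)))]
          + [("vo", se3(0.01, (1.0, 0.0, 0.0)))] * 5)

T_abs = np.eye(4)   # latest drift-free pose from ultrasound localization
T_rel = np.eye(4)   # visual-odometry motion accumulated since that pose (this part drifts)
for kind, T in stream:
    if kind == "vo":
        T_rel = T_rel @ T             # integrate incremental endoscope motion
    else:
        T_abs, T_rel = T, np.eye(4)   # absolute fix cancels the accumulated drift
    T_current = T_abs @ T_rel         # pose available to the navigation system at every step

print(np.round(T_current, 3))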
Figure 4. Overview of the calibration and registration using OTS.
Figure 5. Using intraoperative two-dimensional (2D) ultrasound imaging with known spatial information to reconstruct a three-dimensional (3D) model [134].
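One classic way to carry out the reconstruction depicted in Figure 5 is pixel-based bin filling, in which each tracked 2D frame is scattered into a voxel grid and overlapping contributions are averaged; it is one of the freehand reconstruction families reviewed in [119]. The sketch below, with hypothetical names such as insert_frame and T_img_to_vol, is a minimal illustration of that idea under those assumptions, not the specific method of [134].

```python
import numpy as np

def insert_frame(volume, counts, frame, T_img_to_vol, pixel_spacing, voxel_size):
    """Scatter one tracked 2D ultrasound frame into a 3D voxel grid (pixel-nearest-neighbour bin filling)."""
    h, w = frame.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))          # pixel grid, shape (h, w)
    # pixel centres in image coordinates (mm); the image plane sits at z = 0
    pts = np.stack([u * pixel_spacing[0], v * pixel_spacing[1],
                    np.zeros_like(u, dtype=float), np.ones_like(u, dtype=float)],
                   axis=-1).reshape(-1, 4)
    # map pixels into the volume frame using the tracked probe pose (4 x 4 homogeneous matrix)
    pts_vol = (pts @ T_img_to_vol.T)[:, :3]
    idx = np.round(pts_vol / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    idx, vals = idx[inside], frame.reshape(-1)[inside]
    # accumulate intensities and hit counts; average later where several pixels land in one voxel
    np.add.at(volume, tuple(idx.T), vals)
    np.add.at(counts, tuple(idx.T), 1)

# usage (illustrative):
# vol = np.zeros((200, 200, 200)); cnt = np.zeros_like(vol)
# for frame, pose in tracked_frames:                      # pose: image-to-volume transform
#     insert_frame(vol, cnt, frame, pose, pixel_spacing=(0.2, 0.2), voxel_size=0.5)
# recon = vol / np.maximum(cnt, 1)                        # averaged volume; holes need separate filling
```

After all frames are inserted, dividing the accumulated volume by the hit counts yields the averaged intensity volume; interpolating holes between sparsely swept frames is handled as a separate step in practical systems.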
Figure 6. Diagram of (a) half-mirror, (b) integral videography, (c) image overlay, and (d) head-mounted display.
Figure 7. Example of a 3D model viewed through HoloLens [144].
Figure 8. Patient phantom seen in HoloLens while the surgeon manipulates the limb [27].
Figure 9. Example of the augmented reality (AR)-based surgical navigation system used in [29].
Figure 10. Example of a virtual reality (VR) simulator used in [153].
Figure 11. Example of a mixed reality (MR) navigation system used in [157].
Table 1. List of Search Strings and Databases.

Database | Search String | Number of Results
Web of Science™ | (TS = (surgical navigation)) AND ((KP = (surgical navigation system)) OR (KP = (image guided surgery)) OR (KP = (computer assisted surgery)) OR (KP = (virtual reality)) OR (KP = (augmented reality)) OR (KP = (mixed reality)) OR (KP = (3D))), with the Preprint Citation Index excluded | 597
Scopus | ((TITLE-ABS-KEY("surgical navigation")) AND ((KEY("surgical navigation system")) OR (KEY("image guided surgery")) OR (KEY("computer assisted surgery")) OR (KEY("virtual reality")) OR (KEY("augmented reality")) OR (KEY("mixed reality")) OR (KEY("3D"))) AND PUBYEAR > 2012 AND PUBYEAR < 2024 AND (LIMIT-TO (SUBJAREA,"MEDI") OR LIMIT-TO (SUBJAREA,"COMP") OR LIMIT-TO (SUBJAREA,"ENGI")) AND (LIMIT-TO (DOCTYPE,"ar") OR LIMIT-TO (DOCTYPE,"cp"))) | 1594
IEEE Xplore® | (("surgical navigation") AND (("surgical navigation system") OR ("image guided surgery") OR ("computer assisted surgery") OR ("virtual reality") OR ("augmented reality") OR ("mixed reality") OR ("3D"))) | 146
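Because the three queries in Table 1 overlap, records retrieved from more than one database have to be merged before screening. A minimal deduplication sketch is shown below, assuming each database export has already been parsed into dictionaries with doi and title fields; these field names and helper functions are hypothetical, not part of any database API.

```python
import re

def record_key(record):
    """Key a record by DOI when present, otherwise by a lower-cased, punctuation-free title."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = re.sub(r"[^a-z0-9]+", " ", (record.get("title") or "").lower()).strip()
    return ("title", title)

def merge_exports(*exports):
    """Merge database exports, keeping the first occurrence of each unique record."""
    seen, merged = set(), []
    for export in exports:
        for record in export:
            key = record_key(record)
            if key not in seen:
                seen.add(key)
                merged.append(record)
    return merged

# e.g. merged = merge_exports(wos_records, scopus_records, ieee_records)
```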
Table 2. Summary of Categorized Methods in Surgical Navigation Systems.

Paper | Segmentation | Tracking | Registration
[13] | No | EMT 1 | Rigid landmark
[14] | 3D Slicer 6 | EMT | PDM 2
[15] | Threshold | EMT | ICP 3/B-Spline
[16,17,18,19,20] | No | OTS 4 | Fiducial markers
[21] | 3D Slicer | No | Fiducial markers
[22] | Yes | No | ICP
[23] | 3D Slicer | OTS | Surface-matching
[24] | Learning-based | No | ICP
[25] | No | Learning-based | Learning-based
[26] | Learning-based | No | No
[27,28] | Yes | OTS | Fiducial markers
[29] | Threshold/Region growing | OTS | ICP
[30] | Region growing | Visual–inertial stereo SLAM | ICP
[31] | Threshold | Learning-based | Super4PCS [32]
[33] | Yes | OTS | ICP
[34] | 3D Slicer | EMT | Fiducial markers
[35] | Mimics 6 | OTS | Anatomical landmark
[36] | Manual | No | ICP
[37] | No | Depth estimation | Learning-based [38]
[39] | EndoSize 6 | No | Rigid intensity-based
[40] | Yes | EMT | ICP
[41] | Threshold | OTS | ICP
[42] | 3D Slicer/Learning-based | No | ICP and CPD 5
[43] | Yes | Visual SLAM | Visual SLAM
1 EMT: electromagnetic tracker. 2 PDM: Philips Disease Management. 3 ICP: iterative closest point. 4 OTS: optical tracking system. 5 CPD: coherent point drift. 6 Public software: 3D Slicer, Mimics, EndoSize.
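As Table 2 shows, ICP is the most frequently used registration approach among the surveyed systems. For reference, a minimal point-to-point ICP sketch in Python/NumPy follows; it uses brute-force nearest-neighbour search and an SVD-based rigid update for clarity. This is an illustrative sketch, not any of the cited implementations, which typically add k-d tree search, outlier rejection, and a coarse initial alignment.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping paired points src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iters=50, tol=1e-6):
    """Align `source` (N x 3) to `target` (M x 3) with point-to-point ICP; returns (R, t, rmse)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences (for clarity only)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
        nn = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, nn)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err

# usage (illustrative): R, t, rmse = icp(preop_model_points, intraop_surface_points)
```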
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

