Article

Ultrasound Navigation for Transcatheter Aortic Stent Deployment Using Global and Local Information

Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2016, 6(12), 391; https://doi.org/10.3390/app6120391
Submission received: 30 September 2016 / Revised: 17 November 2016 / Accepted: 22 November 2016 / Published: 30 November 2016
(This article belongs to the Special Issue Biomedical Ultrasound)

Abstract

An ultrasound (US) navigation system using global and local information is presented for transcatheter aortic stent deployment. The system avoids the contrast agents and radiation required in traditional fluoroscopically guided procedures and helps surgeons precisely visualize the surgical site. To obtain a global 3D (three-dimensional) navigation map, we use magnetic resonance (MR) imaging to provide a 3D context that enhances 2D (two-dimensional) US images through image registration. The US images are further processed to obtain the trajectory of the interventional catheter. A high-resolution aortic model is constructed from this trajectory and segmented intravascular ultrasound (IVUS) images. The constructed model reflects the morphological characteristics of the aorta and provides local navigation information. Our navigation system was validated using an in vitro phantom of the heart and aorta. The mean target registration error is 2.70 mm and the average tracking error of the multi-feature particle filter is 0.87 mm. These results confirm that the key parts of our navigation system are effective. In the catheter intervention experiment, the vessel reconstruction error of local navigation is reduced by 80% compared to global navigation. Moreover, the targeting error of navigation combining global and local information is reduced compared to global navigation alone (1.72 mm versus 2.87 mm). Thus, the US navigation system, which integrates the large view of global navigation and the high accuracy of local navigation, can facilitate transcatheter stent deployment.

1. Introduction

Endovascular stent placement has become a preferred option for the treatment of acute aortic dissection and aortosclerosis, because it reduces the risk of infection and patient trauma [1]. Stent placement can replace conventional medical treatment for the majority of patients. During the procedure, surgeons insert a catheter into the aortic lesion to complete the placement of the stent. However, incorrect placement of the stent can cause delayed complications. Two critical factors directly influence the accuracy of stent deployment: (1) information on the morphological characteristics of the aorta, especially calcified or lesion areas, which greatly assists surgeons in determining the size of the stent [2]; (2) intuitive three-dimensional (3D) navigation images which reflect intraoperative information about the cardiac structure [3].
Conventional stent placement, which is typically performed under X-ray, remains challenging due to several shortcomings of fluoroscopic guidance. Firstly, repeated radiation exposure affects the health of patients and clinical staff, and contrast agents may increase the risk of iatrogenic renal injury. Secondly, although fluoroscopy visualizes the catheter well, it lacks depth perception and cannot directly visualize anatomic structures [4]. Thirdly, 2D (two-dimensional) fluoroscopic images cannot provide a quantitative analysis of the vessel’s morphological characteristics [5]. To address these problems, 3D intuitive image navigation is useful for catheter intervention and stent placement.
Several solutions have been proposed for 3D navigation imaging. Some authors have suggested using other 3D imaging modalities as alternatives to fluoroscopy. For example, intraoperative magnetic resonance (MR) and computed tomography (CT) imaging [6,7] have been applied, but their usage is very limited because of relatively low temporal resolution and inflexibility in the workflow. Another approach is to transfer a preoperative CT volume into the intraoperative environment by fusing CT with fluoroscopy. Göksu et al. used feature-based rigid registration to combine 3D preoperative CT with intraoperative 2D fluoroscopy for endovascular navigation [8]. Although this approach can potentially provide 3D information, it still exposes the interventionist and patient to harmful radiation.
To solve the above problem, some researchers have proposed using electromagnetic (EM) tracking rather than relying on X-ray for navigation, because EM tracking can collect 3D position information from an EM sensor mounted on the catheter. Manstad-Hulaas et al. [9] use EM approaches to track the catheter position and overlay it onto previously acquired CT or MR data during the procedure. Wang et al. propose combining EM technologies with virtual visual feedback for surgical navigation [10]. Ramcharitar et al. also present a navigation method combining multi-slice CT with EM navigation for endovascular intervention [11]. These studies achieve the overlay between the catheter tip and preoperative images, but cannot detect movements of soft tissue during the surgery.
In addition, some non-ionizing imaging methods have been used for radiation-free navigation. Real-time ultrasound (US) images may assist endovascular intervention and navigation. For example, a US-based catheter localization method [12] and a guidance technology based on US image registration [13] have been proposed for minimally invasive endovascular surgical navigation. Furthermore, Luan et al. propose a visualization navigation system that integrates US imaging with preoperative anatomical models for catheter intervention in oral cancers [14]. McLeod et al. make use of biplane US to guide transcatheter aortic valve implantation in a phantom study [15]. However, a single cardiac US image can hardly provide sufficient information for doctors, because in endovascular navigation single US images provide neither a 3D context of the surgical site nor high-resolution morphological information of the aorta. Intravascular ultrasound (IVUS) is an attractive complement to common US or preoperative CT images, because IVUS obtains 2D images of the cross-sections of blood vessels, providing information on vessel morphology for diagnosis and surgery [16]. IVUS imaging is more accurate than conventional angiography because of its relatively higher resolution [17]. Therefore, the approach proposed in this paper is to combine IVUS images with common US images in order to implement a radiation-free 3D guidance system for transcatheter aortic stent deployment. The work described here expands on our previous study [18]. The major limitations of that study were that: (1) the accuracy of catheter tracking and IVUS segmentation needed to be improved; (2) more verification experiments of the US navigation method with global and local information were needed. To address these limitations, the new system not only integrates more accurate tracking and segmentation methods into the previous system but also presents more evaluation experiments and better results.

2. Materials and Methods

The configuration of our US navigation system is shown in Figure 1. During the surgical intervention, a US device, an optical tracking system and a catheter are used. Firstly, the tracking system collects the pose of the US probes (S5-1, Philips, Amsterdam, The Netherlands). An IVUS scanning probe (Atlantis SR, Boston Scientific, Marlborough, MA, USA) and the stent are inserted into the aortic lesion through a catheter. Next, the collected intraoperative US images, IVUS images and preoperative 3D MR images are combined during the image processing procedure. This procedure mainly includes two parts: (1) 2D US-3D MR registration to build a 3D intuitive global navigation map; (2) combining segmented IVUS images with the catheter trajectory from US images to produce local high-resolution navigation information. Finally, the global and local navigation images help surgeons to accomplish accurate transcatheter aortic stent deployment.

2.1. A Global 3D Navigation Map Based on 2D US-3D MR Image Registration

A global 3D navigation map is presented which integrates 2D US images with a high-quality 3D context from MR images through 2D US-3D MR image registration. We use a preoperative 3D US image to split the 2D US-3D MR registration into two more easily achieved steps: 2D US-3D US intra-modal registration and 3D US-3D MR intra-dimensional registration. The rigid registration transformation T_{2DUS→3DMR} between 2D US and 3D MR images is calculated as T_{2DUS→3DMR} = T_{2DUS→3DUS} · T_{3DUS→3DMR}, where T_{2DUS→3DUS} is the transformation between the intra-modal 2D and 3D US images, and T_{3DUS→3DMR} is the transformation between the intra-dimensional 3D US and MR images. Figure 2 shows the workflow of 2D US-3D MR image registration. In the intraoperative preparatory stage, we employ a calibrated 3D US probe to collect a 3D US image of the heart. The 3D US-3D MR intra-dimensional registration transformation T_{3DUS→3DMR} is acquired manually by using an open source platform (3D Slicer, http://www.slicer.org). Next, the crucial 2D US-3D US intra-modal registration T_{2DUS→3DUS} is achieved automatically by the following steps.
Firstly, an N-wire calibration phantom is used to calibrate the 2D US probe and to acquire the transformation T_{2DUS→TS} between the 2D US image coordinate system and the tracking system (TS) coordinate system. Similarly, an IXI-wire calibration phantom is utilized to calibrate the 3D US probe and to solve the transformation T_{3DUS→TS} between the 3D US image coordinate system and the TS coordinate system. The near-optimal initial transformation T̃_{2DUS→3DUS} can be calculated by
T̃_{2DUS→3DUS} = T_{2DUS→TS} · (T_{3DUS→TS})^{-1}
Secondly, starting from the near-optimal initial transformation T̃_{2DUS→3DUS}, fast automatic intensity-based local adjustment is employed for accurate registration. In the local adjustment, mutual information (MI) is utilized as the similarity metric, and a gradient ascent optimizer is used to find the optimum of the MI metric and the final rigid transformation T_{2DUS→3DUS}. More detailed descriptions of the probe calibrations and image registration can be found in our previous work [19].
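The registration chain above reduces to composing homogeneous transforms and then refining with an MI metric. The following is a minimal sketch of those two building blocks, assuming 4 × 4 homogeneous matrices and a simple joint-histogram MI estimate; it is not the authors' implementation, and the product order uses the column-vector convention (p' = T·p), so it is written in the reverse order of the equation above.

```python
import numpy as np

def initial_2dus_to_3dus(T_2dus_to_ts, T_3dus_to_ts):
    """Near-optimal initial 2D US -> 3D US transform from the two probe
    calibrations. Both inputs are 4x4 homogeneous matrices mapping image
    coordinates into the tracking-system (TS) frame (column-vector
    convention: p_ts = T @ p_img)."""
    return np.linalg.inv(T_3dus_to_ts) @ T_2dus_to_ts

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information between two overlapping
    intensity images, usable as the similarity metric in the local
    intensity-based adjustment."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A gradient ascent loop over the six rigid parameters would then maximize this metric around the initial transform.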

2.2. Local High-Resolution Navigation Information Based on US Image Tracking and IVUS Image Segmentation

The calibrated 2D US probe collects a series of 2D US images that contain not only the intraoperative structure of the heart but also information about the catheter. On the one hand, these 2D US images are registered with the 3D MR volume to build the global 3D navigation map. On the other hand, we locate the tip of the catheter in these 2D US images and acquire the catheter trajectory (for details, see Section 2.2.1). The IVUS probe is embedded in the interventional catheter and collects IVUS images at an almost constant pullback velocity (0.5 mm/s). The IVUS images contain information about complex lesions, such as plaque, aneurysms, etc. By using IVUS image segmentation (for details, see Section 2.2.2), the vascular borders (media-adventitia (MA) and lumen borders) and the location of the lesion are extracted. Finally, the segmented IVUS images are re-aligned to acquire a high-resolution aortic model for endovascular intervention guidance. To re-align the segmented IVUS images, we use the pose determination method for IVUS images proposed by Ma et al. [20]. Regarding the locations of the IVUS images, because the images are collected at constant velocity, the centers of the segmented IVUS images are distributed at equidistant intervals along the acquired 3D catheter trajectory. Regarding the orientations of the IVUS images, the image planes are positioned perpendicular to the catheter trajectory. In this way, the spatial pose of each segmented IVUS image along the catheter trajectory is determined, and a 3D aortic model is reconstructed and rendered. The workflow of local navigation based on US image tracking and IVUS image segmentation is shown in Figure 3.
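The pose assignment just described (equidistant centers along the trajectory, image planes perpendicular to it) can be sketched as follows. This is a minimal illustration under the constant-pullback assumption, not the pose determination method of Ma et al. [20]; the helper name and the tangent-based orientation are ours.

```python
import numpy as np

def ivus_frame_poses(trajectory, n_frames):
    """Distribute IVUS frame poses along a 3D catheter trajectory.

    trajectory: (N, 3) array of tracked catheter-tip points (in the
    tracking-system frame). Frame centers are spaced at equal arc-length
    intervals (constant pullback speed); each image plane is oriented
    perpendicular to the local trajectory tangent. Returns a list of
    (center, tangent) pairs."""
    seg = np.diff(trajectory, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])
    targets = np.linspace(0.0, arc[-1], n_frames)
    poses = []
    for s in targets:
        i = min(np.searchsorted(arc, s, side="right") - 1, len(seg) - 1)
        t = (s - arc[i]) / seg_len[i] if seg_len[i] > 0 else 0.0
        center = trajectory[i] + t * seg[i]
        tangent = seg[i] / seg_len[i] if seg_len[i] > 0 else np.array([0.0, 0.0, 1.0])
        poses.append((center, tangent))
    return poses
```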

2.2.1. Multi-Feature Particle Filter Tracking Method for the 3D Trajectory of the Catheter

The calibrated 2D US probe scans the aortic phantom in consecutive cross-sections, producing a series of axial cross-sections of the aorta. At the beginning of tracking, an initial tracking region in the first US image is manually delineated. The delineated region is described as a circle because the outline of the aortic cross-section is circular. Since the fixed outer radius of the aortic phantom is 4.5000 mm and the resolution of the 2D US images is 0.2382 mm/pixel, the radius of the initial circular tracking region is set to 19 pixels. The cross-section of the catheter lies inside the aortic cross-section, and its intensity is much higher than that of the rest of the aortic cross-section. Therefore, once the aortic cross-section is tracked in the US images, the position of the catheter's cross-section can be acquired by thresholding (in our application, the threshold is set to 220 on a 256-step scale). The tip of the interventional catheter is located in the last US image frame in which the catheter's cross-section can be detected. The positions of the catheter's tip in the 2D US images are then transformed into the optical tracking system using the transformation T_{2DUS→TS}. The 3D locations of the catheter tip in the optical tracking system are connected into a curve, which is the 3D trajectory.
As a key step, tracking aortic cross-sections in 2D US images is difficult because US images contain various types of noise. The particle filter provides a robust prediction and tracking framework, because it approximates the object position with a finite set of weighted samples (particles). The conventional particle filter [21] uses a single intensity feature to calculate the particle weights. To achieve more stable tracking of the aortic cross-section in US images, we propose the following multi-feature particle filter tracking algorithm:
Algorithm 1: Multi-feature particle filter for tracking the aortic cross-section in real-time US (ultrasound) images
Input: US scanning images {I_t}, t = 1, 2, 3, ...
Output: Tracked aortic cross-section centers {p_t}, t = 1, 2, 3, ...
(a) Initialization (t = 0):
  • 1: Generate the initial target position p_0 (p_0 denotes the center of the aortic cross-section in US image I_0).
  • 2: Scatter the initial particles {p^s_{0,m}}, m = 1, ..., M (M is the number of particles), where p^s_{0,m} = p_0 + g_m and g_m is a zero-mean Gaussian random variable.
(b) Particle state propagation:
  • 1: Calculate the new particle set {p^s_{t,m}}, m = 1, ..., M from the resampled set {p̃^s_{t−1,m}} according to the state propagation model, a random drift model p^s_{t,m} = p̃^s_{t−1,m} + U_t, where U_t is Gaussian white noise.
  • 2: Acquire the particle regions {R^s_{t,m}}, m = 1, ..., M (circular regions centered on the particle positions {p^s_{t,m}}) and the template region R_t (a circular region centered on the previous target position p_{t−1}).
(c) Particle weight decision:
  • 1: Calculate each particle weight ω_{t,m} by comparing the features of the template region R_t and the particle region R^s_{t,m}: ω_{t,m} = h{R_t, R^s_{t,m}}, where h{·,·} denotes the multi-feature fusion process for the particle weight decision, described in detail in the Appendix.
(d) State estimation output:
  • Calculate the tracked aortic cross-section center p_t = Σ_{m=1}^{M} ω_{t,m} · p^s_{t,m}.
(e) Resampling step:
  • Resample M particles from {p^s_{t,m}} according to their weights {ω_{t,m}} to obtain a new particle set {p̃^s_{t,m}}, each with weight 1/M.
Let t = t + 1 and iterate from step (b).
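The per-frame loop of Algorithm 1 can be condensed into a few lines. The sketch below is a simplified illustration of steps (b)-(e) under stated assumptions: the state is the 2D center of the tracking circle, the feature comparison is abstracted into a user-supplied weight_fn standing in for h{·,·} from the Appendix, and multinomial resampling is used; it is not the authors' code.

```python
import numpy as np

def particle_filter_step(particles, weight_fn, drift_std=2.0, rng=None):
    """One iteration of the multi-feature particle filter (steps b-e).

    particles: (M, 2) pixel positions of the current (resampled) particle set.
    weight_fn: callable mapping a candidate center to a non-negative weight,
        playing the role of the multi-feature comparison h{R_t, R^s_{t,m}}.
    Returns the estimated center p_t and the resampled particle set."""
    rng = np.random.default_rng() if rng is None else rng
    # (b) random-drift propagation with Gaussian white noise U_t
    particles = particles + rng.normal(0.0, drift_std, particles.shape)
    # (c) weights from the multi-feature comparison, normalized to sum to 1
    w = np.array([weight_fn(p) for p in particles], dtype=float)
    w = w / w.sum() if w.sum() > 0 else np.full(len(particles), 1.0 / len(particles))
    # (d) state estimate: weighted mean of the particle positions
    p_t = (w[:, None] * particles).sum(axis=0)
    # (e) multinomial resampling; resampled particles carry weight 1/M
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return p_t, particles[idx]
```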

2.2.2. An Improved Level Set Method for IVUS Images Segmentation

IVUS segmentation and border detection can provide accurate vascular structure and lesion location for endovascular local navigation [22]. Thus, vascular modeling based on segmented IVUS images is useful in guiding stent deployment and assessing the efficacy of catheter interventions [23]. We develop an improved level set method to detect the MA and lumen borders in IVUS images [24]. The level set function evolves from an initialization φ_0 to the final borders under the influence of curve forces and image forces [25]. An appropriate initialization yields accurate segmentation. Therefore, we propose a method for determining an appropriate initialization φ_0 through feature classification.
Firstly, Laws' texture energy measures [26] are applied to represent the features of the pixels in IVUS images. Secondly, support vector machine (SVM)-based feature classification [27] is applied to regionalize the IVUS images and provides a rough target area M for the subsequent initialization φ_0. For MA border detection, the pixels of the first IVUS image of the sequence are the training data. For lumen border detection, the pixels inside the vessel region of the first image are the training data. SVM classification models are acquired from the training data. By testing the other IVUS images, image regionalization results are calculated during MA border detection (Figure 4b). Similarly, regionalization results are also acquired during lumen border detection (Figure 4c).
Thirdly, a searching-eliminating-interpolating procedure is applied to process the rough target area M and acquire the appropriate initialization φ_0. The regionalized adventitia region M_ADR and the union of the lumen and artifact regions M_LR ∪ M_AR are the rough target areas for the MA and lumen border initializations, respectively. The searching step searches for the contour points of interest P_1, the eliminating step removes improper points from P_1, and the final interpolating step produces a smooth initialization φ_0. More detailed descriptions of the IVUS segmentation can be found in [24].
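To make the regionalization step concrete, the following is a minimal sketch of pixel-wise texture classification: a small subset of Laws' 5 × 5 texture-energy filters feeds an SVM trained on a labeled first frame, and the predicted label map plays the role of the rough target area M. It assumes NumPy, SciPy and scikit-learn and illustrates the idea only; the exact filter bank, features and SVM settings used in the paper are those of [24,26,27].

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter
from sklearn.svm import SVC

# Laws' 1D kernels; outer products give 2D texture filters
L5 = np.array([1, 4, 6, 4, 1], float)    # level
E5 = np.array([-1, -2, 0, 2, 1], float)  # edge
S5 = np.array([-1, 0, 2, 0, -1], float)  # spot

def laws_features(img, window=15):
    """Per-pixel Laws' texture-energy features (9 filters built from the
    L5/E5/S5 kernels) plus the raw intensity itself."""
    kernels = [np.outer(a, b) for a in (L5, E5, S5) for b in (L5, E5, S5)]
    energy = [uniform_filter(np.abs(convolve(img, k)), window) for k in kernels]
    return np.stack(energy + [img], axis=-1)

def regionalize(train_img, train_labels, test_img):
    """Train a pixel-wise SVM on a labeled first frame and regionalize a
    new IVUS frame; the predicted label map gives the rough target area M
    used to build the level-set initialization."""
    n_feat = laws_features(train_img).shape[-1]
    clf = SVC(kernel="rbf").fit(
        laws_features(train_img).reshape(-1, n_feat), train_labels.ravel())
    return clf.predict(
        laws_features(test_img).reshape(-1, n_feat)).reshape(test_img.shape)
```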

2.3. US Navigation System for Transcatheter Aortic Stent Deployment Using Global and Local Information

The surgical workflow of the proposed US navigation system for transcatheter aortic stent deployment contains three stages: preoperative preparation, intraoperative preparation and intraoperative navigation.
● Preoperative preparation
(1) Collect preoperative MR images of the patient's heart.
(2) Segment the heart and aorta from the preoperative MR images using an open source platform.
(3) Finish path planning on the segmented models of the heart and aorta.
● Intraoperative preparation
(1) Use a calibrated 3D US probe to collect a 3D US image (only once).
(2) Prepare the catheter and stent for aortic stent deployment.
● Intraoperative navigation
(1) During insertion of the stent and IVUS probe into the aorta through a catheter, a calibrated 2D US probe collects real-time 2D intraoperative US images, and the IVUS probe collects IVUS images.
(2) The US navigation system displays a 3D global navigation map to the user. This map contains the patient's 3D cardiac MR images, along with a US image plane providing an updated interior view of the aorta. Position information of the inserted catheter is overlaid on this 3D image.
(3) When the inserted catheter approaches the aortic lesion, the user focuses on the local navigation information, which provides more accurate distance information between the lesion and the inserted catheter. When the distance is close to zero, the catheter is opened to release the stent.
(4) During this process, local information including a virtual endovascular view and the collected IVUS image is also displayed. This navigation information also helps the surgeon judge the severity of the lesion.

3. Experiments and Results

Our experiment platform (shown in Figure 5) includes the following components: (1) an optical tracking system (Polaris, Northern Digital Inc., Waterloo, ON, Canada) to collect the pose information of the US probes; (2) an MR-compatible multimodality heart phantom (SHELLEY Medical, London, ON, Canada), which contains a left ventricle (LV) and a right ventricle (RV); an aortic phantom with fixed radius (outer radius: 4.5000 mm; inner radius: 3.5000 mm) is positioned right above the heart phantom; (3) a US system (iU22 xMATRIX, Philips, Amsterdam, The Netherlands) with a 2D linear array probe (S5-1, Philips, Amsterdam, The Netherlands); the 2D US images are 600 × 800 pixels with a resolution of 0.2382 mm × 0.2382 mm; (4) an IVUS system (Boston Scientific Galaxy 2 system, Marlborough, MA, USA) with a 40 MHz Atlantis SR IVUS probe; the collected IVUS images are 8-bit, 512 × 512 pixels with an in-frame resolution of 0.0175 mm × 0.0175 mm; (5) a 7.5-F diameter catheter (Blazer, Boston Scientific, Marlborough, MA, USA); the IVUS probe and stent are inserted into the aortic phantom along with the catheter; (6) preoperative 3D MR images of the phantom collected with an MR scanner (Philips Achieva 3.0T TX, Amsterdam, The Netherlands); the MR images are 480 × 480 × 300 voxels with an in-plane resolution of 0.4871 mm × 0.4871 mm and a slice thickness of 1.6000 mm; (7) the image processing methods of our navigation system, developed as a mixed C++ and Matlab program (Matlab 2014a, MathWorks, Natick, MA, USA). The navigation system runs under Windows 7 on an Intel Core i7 computer with 16 GB RAM (Random Access Memory).
The navigation system contains two key parts: the acquisition of the global navigation map based on 2D US-3D MR image registration, and the local high-resolution navigation information based on US image tracking and IVUS image segmentation. A set of experiments was conducted to evaluate these two parts (Section 3.1 and Section 3.2); in particular, many of the image-processing methods used in these parts were evaluated individually. After the analyses of these two key parts, an in vitro catheter intervention experiment (Section 3.3) was conducted to confirm the benefit of integrating global and local information.

3.1. Evaluation of the Global 3D Navigation Map

The global 3D navigation map is based on 2D US-3D MR image registration, and the proposed registration method relies on accurate calibration of the 2D and 3D US probes. Therefore, the evaluation of the global navigation map includes the verification of the probe calibrations and the assessment of the 2D US-3D MR image registration.

3.1.1. Calibrations of 2D and 3D US Probes

Calibration reproducibility (CR) error measures the repeatability of a probe calibration method when it is performed on a new set of images [28]. The CR error is the Euclidean distance between the mappings of the same US image point P_US under two calibration transformations (T_{US→PR}^(i) and T_{US→PR}^(j)). The CR error is calculated by
E_CR = mean_{i,j} { || T_{US→PR}^(i) · P_US − T_{US→PR}^(j) · P_US || }
During the evaluations of the 2D and 3D US probe calibrations, we performed 8 calibration trials and used 10 images per trial, 80 datasets in total. The acquired CR error of the 2D US probe calibration is 0.61 mm and that of the 3D US probe calibration is 1.42 mm.
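A small sketch of the CR error computation follows, assuming 4 × 4 homogeneous calibration matrices and the image origin as the test point P_US; the function name and defaults are illustrative only.

```python
import numpy as np

def calibration_reproducibility_error(calibrations, p_us=(0.0, 0.0, 0.0)):
    """Mean pairwise Euclidean distance between the mappings of the same
    US image point under every pair of calibration transformations
    (each a 4x4 homogeneous matrix from US image to probe coordinates)."""
    p = np.append(np.asarray(p_us, dtype=float), 1.0)
    mapped = [T @ p for T in calibrations]
    dists = [np.linalg.norm(a[:3] - b[:3])
             for i, a in enumerate(mapped)
             for j, b in enumerate(mapped) if i < j]
    return float(np.mean(dists))
```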

3.1.2. 2D US-3D MR Image Registration

The registration results of the 2D US-3D US, 3D US-3D MR and final 2D US-3D MR images are shown in Figure 6. The registration accuracy of the 2D US-3D MR images was quantitatively evaluated by calculating the target registration error (TRE). Ten contour points of the ventricle in the 2D US image and the corresponding contour points of the ventricle in the 3D MR model were manually delineated by a surgeon, and the TRE was computed as the average Euclidean distance between these corresponding contour points. A mean TRE of 2.70 mm (range 1.05-3.67 mm) was obtained for the 2D US-3D MR image registration.
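For reference, the TRE computation reduces to transforming the delineated US contour points into MR space with the estimated registration and averaging the point-to-point distances; a minimal sketch (assuming (N, 3) point arrays and a 4 × 4 registration matrix) is given below.

```python
import numpy as np

def target_registration_error(points_us, points_mr, T_us_to_mr):
    """Mean TRE between corresponding contour points: transform the US
    points with the estimated registration (4x4 homogeneous matrix) and
    average the Euclidean distances to the matching MR points."""
    homog = np.hstack([points_us, np.ones((len(points_us), 1))])
    mapped = (T_us_to_mr @ homog.T).T[:, :3]
    return float(np.mean(np.linalg.norm(mapped - points_mr, axis=1)))
```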

3.2. Evaluation of Local High-Resolution Navigation Information

Local high-resolution navigation information is achieved by combining the 3D trajectory of the catheter from US images tracking with segmented IVUS images. Therefore, the effectiveness of local navigation can be validated by verifying these two critical factors.

3.2.1. Evaluation of Multi-Feature Particle Filter Method for Catheter’s 3D Trajectory

During the experiment, we inserted the catheter into the aortic phantom and scanned large areas with the calibrated 2D US probe. The multi-feature particle filter was used to track the aortic cross-sections in the collected 2D US images. The corresponding parameters were set to a = 0.5 and δ = 0.7, and the number of particles was 40. Aortic cross-sections were tracked in 200 US images with the multi-feature particle filter. The tracking error is defined as the Euclidean distance between the aortic cross-section center from automatic tracking and the manually delineated center. The average tracking error of the multi-feature particle filter is 0.87 mm (the lateral error is 0.43 mm and the longitudinal error is 0.76 mm).
After tracking the aortic cross-sections in the US images, thresholding is applied to obtain the pixels whose intensities are higher than 220 inside the aortic cross-section. The centroid of these pixels is the detected position of the catheter's cross-section. In Figure 7, the red circle represents the tracked aortic cross-section and the green point is the located catheter tip. Based on the positions of the catheter's tip, the 3D trajectory of the catheter is acquired. Figure 8 shows two trajectories of the catheter: an automatic trajectory from our multi-feature particle filter tracking method and a 3D trajectory from the manually delineated catheter tips. The distance between the manual trajectory and the automatic trajectory is 1.48 mm.
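The threshold-and-centroid step can be written compactly; the sketch below assumes an 8-bit US image, the tracked circle center in pixel coordinates, and the radius (19 pixels) and threshold (220) stated above.

```python
import numpy as np

def catheter_cross_section_center(us_image, circle_center, radius=19, threshold=220):
    """Locate the catheter cross-section inside a tracked aortic
    cross-section: keep pixels above the intensity threshold within the
    circular tracking region and return their centroid (row, col), or
    None if no pixel exceeds the threshold."""
    rows, cols = np.indices(us_image.shape)
    inside = (rows - circle_center[0]) ** 2 + (cols - circle_center[1]) ** 2 <= radius ** 2
    bright = inside & (us_image >= threshold)
    if not bright.any():
        return None
    return float(rows[bright].mean()), float(cols[bright].mean())
```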

3.2.2. Evaluation of the Improved Level Set Method for IVUS Image Segmentation

Accurate IVUS segmentation and border detection can provide the lesion location inside the vessel for local navigation. For clinical patients who need stent placement, the collected IVUS images contain information on the lesions but also various artifacts, so a segmentation method applicable to real patients is critical. Therefore, the performance of the IVUS segmentation method was evaluated using 500 IVUS images from the sequences of ten patients. These IVUS sequences were collected at the Navy PLA General Hospital, China (detailed patient information is listed in supplementary material Table S1). The MA and lumen borders of the 500 images were manually delineated by an expert in IVUS image interpretation from the Navy PLA General Hospital. Examples of lumen and MA borders segmented by our method are illustrated in Figure 9. Segmentation accuracy was quantified using standard measurements, namely the Dice index and the Hausdorff distance. The comparison between the proposed segmentation method and manual delineation is listed in Table 1.
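The two accuracy measures are standard; a minimal sketch of both, assuming binary masks for the Dice index and (N, 2) contour point arrays in millimetres for the Hausdorff distance, is shown below.

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def hausdorff_distance(contour_a, contour_b):
    """Symmetric Hausdorff distance between two contours given as
    (N, 2) point arrays (e.g., in millimetres)."""
    d = np.linalg.norm(contour_a[:, None, :] - contour_b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```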

3.3. The US Navigation In Vitro Experiment

We conducted an in vitro catheter intervention experiment for stent deployment on a cardiac and aortic phantom. A plaque was placed inside the aortic phantom, and the center of the plaque was identified as the surgical target. The user was asked to guide a catheter to the target inside the vessel to simulate transcatheter stent deployment. During the intervention, two types of navigation information were available to the user: (1) a global navigation map alone; and (2) the global map plus local navigation information.
The image guidance interface and information during the intervention are illustrated in Figure 10. Under the guidance of the global navigation map alone (Figure 10a-c), the relative position between the interventional catheter and the 3D cardiac structure is displayed intuitively to navigate the catheter to the target. However, the morphological information of the aortic model, especially the lesion's position, is blurred (Figure 10c). On the one hand, to evaluate the vessel reconstruction error of the global information, we measured the average outer radius of the aortic model manually reconstructed from the global MR image, and compared it with the actual outer radius of the aortic phantom (4.5000 mm). We selected 40 transverse sections of the reconstructed aortic model to acquire the average outer radius. The resolution of the MR images is 0.4871 mm and the acquired reconstruction error of the global navigation is 1.1241 mm. On the other hand, the catheter was navigated to the target using the global information, and the distance between the catheter arrival position and the target was measured using coronal and sagittal X-ray projection images from a C-arm (Fluoroscan Insight, HOLOGIC, Boston, MA, USA). This distance is treated as the targeting error of the global navigation (see Figure 11a); the acquired targeting error is 2.8701 mm.
In contrast, with the global map plus local navigation information, the user can be guided to insert the catheter along the preoperatively planned path. Meanwhile, the local navigation information (Figure 10d-f) further provides the user with critical details of the target point. Firstly, to evaluate the vessel reconstruction error of the local navigation information, we calculated the average outer radius of the aortic model automatically reconstructed from the segmented IVUS images, and compared this average outer radius with the actual radius of the aortic phantom. The resolution of the IVUS images is 0.0175 mm and the acquired reconstruction error of the local information is 0.2217 mm. Secondly, to evaluate the targeting error of the catheter intervention, we captured coronal and sagittal X-ray projections when the catheter was navigated to the target using the global plus local information. These projection images were measured (see Figure 11b) and the acquired targeting error is 1.7214 mm. This experiment shows that the new US navigation system, which integrates global and local information, improves upon a navigation system that relies on a global navigation map alone.

4. Discussion

The aim of this paper is to present a radiation-free US navigation system for transcatheter aortic stent deployment by combining MR images with two types of US images, namely IVUS images and common US images. The proposed US navigation system provides surgeons not only with a global navigation map of the heart and surrounding tissue, but also with the morphological characteristics of the aorta to assist stent deployment. There are two key parts in our system: (1) a global navigation map from 2D US-3D MR image registration; (2) local morphological navigation information based on US image tracking and IVUS image segmentation.
Recently, 2D US images have been widely used in minimally invasive cardiac procedures due to their real-time imaging capability. However, it is difficult to relate 2D US images to their anatomical context because of limited image quality. 3D US imaging may overcome this problem to some degree, but it performs 3D imaging at the cost of decreased temporal resolution and provides only a narrow field of view. Therefore, we propose to use the high-quality 3D context from MR images to enhance 2D US images through image registration, providing a global navigation map for surgeons. To overcome the difficulty of 2D US-3D MR image registration, we developed a novel registration method based on calibrations of the 2D and 3D US probes. On the one hand, we obtained small calibration reproducibility errors for the probes (calibration error of the 2D US probe: 0.61 mm; calibration error of the 3D US probe: 1.42 mm). On the other hand, in Figure 6b, the contours of the ventricles in the 2D US and 3D MR images agree well after registration, which qualitatively demonstrates that the proposed 2D US-3D MR image registration method is effective. A TRE of 2.70 mm is obtained for the US and MR image registration. Through registration, the interpretability of the 2D US images is improved within the 3D anatomical context provided by the MR images. Therefore, a global navigation map integrating 2D US and 3D MR images is achieved (Figure 10a).
Local high-resolution navigation can reveal morphological characteristics of the aorta, especially the information on the target lesion. The vessel reconstruction error of local information is reduced by 80% compared to global information (error: 0.2217 mm versus 1.1241 mm). In addition, in the catheter intervention experiment, the targeting error of global navigation is 2.8701 mm. By adding the local high-resolution navigation information to global navigation, the targeting error reduces to 1.7214 mm. Thus, combining the large view of global navigation with the high accuracy of local navigation can provide surgeons with an intuitive 3D map and adequate lesion location for precise catheter intervention in stent deployment.
We have validated the applicability of the proposed system in a laboratory setting, but several problems need to be solved before delivering the system to clinical in vivo interventions. Firstly, compared with the static phantom experiment in a water tank, the beating of the heart needs to be taken into account in an in vivo experiment. We will add phase synchronization of the US, MR and IVUS images through electrocardiograph (ECG) signals to solve this problem. Secondly, in an in vivo experiment, the collected IVUS images are more complex than the IVUS images of the aortic phantom; we are conducting further evaluations of the image segmentation for more patients with different degrees of severity. We also acknowledge that 2D US imaging of the human body is more challenging than imaging a phantom in a water bath, because propagation in inhomogeneous tissue causes acoustic noise and the lungs and ribs reduce the acoustic window. Our future work will focus on human body experiments and applications. Furthermore, besides phantom experiments in water tanks, we are developing an evaluation experiment with a realistic beating heart phantom, covering different intervention routes and operators. We believe that the proposed US navigation system has great potential to provide surgeons with richer information for precise transcatheter aortic stent deployment.

5. Conclusions

The proposed US navigation system combines the global information of intuitive 3D images with the local information of aortic morphological characteristics to provide surgeons with abundant guidance information. This system facilitates transcatheter aortic stent placement and reduces X-ray radiation and contrast agent doses.

Supplementary Materials

The following are available online at www.mdpi.com/2076-3417/6/12/391/s1, Table S1: detailed information and lesion characteristics of the patients.

Acknowledgments

This study was supported in part by the National Natural Science Foundation of China (Grant Nos. 81427803, 61361160417, 81271735), a Grant-in-Aid of Project 985, and the Beijing Municipal Science and Technology Commission (Z151100003915079). The authors would like to thank Yigang Qiu and Tianchang Li from the Department of Cardiology, Navy PLA General Hospital for assistance in acquiring and analyzing the IVUS data for this study.

Author Contributions

F.C. and H.L. proposed the method, and designed the experiment. F.C. performed the experiment, and wrote the manuscript; J.L. conceived experiments and analyzed the data; H.L. initiated the study and supervised the project overall. All authors discussed the results.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

Multi-feature fusion for the particle weight decision h{·,·}.
To compare the differences between the template region R_t and a particle region R^s_{t,m}, multiple features of the template region and the particle region are calculated. The first feature is the normalized histogram of intensity (HOI), which is defined as
ḡ = g / ||g|| = [ g_1 / √(Σ_{k=1}^{K_1} g_k²), ..., g_{K_1} / √(Σ_{k=1}^{K_1} g_k²) ]^T
where g_k is the gray-level histogram count of the k-th bin and K_1 is the number of bins used in the histogram. The second feature is the histogram of oriented gradients (HOG), which represents contour information. The oriented gradient OG combines the gradient magnitude and orientation,
OG(i, j) = [ √(G_1(i, j)² + G_2(i, j)²),  tan⁻¹(G_1(i, j) / G_2(i, j)) ]
where G_1 and G_2 are the gradients along the horizontal and vertical directions. The oriented gradient OG is counted into K_2 bins and the normalized HOG is
h̄ = h / ||h|| = [ h_1 / √(Σ_{k=1}^{K_2} h_k²), ..., h_{K_2} / √(Σ_{k=1}^{K_2} h_k²) ]
where h_k is the histogram count of the k-th bin.
The fusion feature f, which combines the normalized HOI ḡ with the normalized HOG h̄, can be represented as
f = [ a·ḡ ; (1 − a)·h̄ ],   a ∈ [0, 1]
Using the above method, the fusion feature f_t of the template region R_t and the fusion feature f^s_{t,m} of the particle region R^s_{t,m} are both calculated. The particle weight derived from these features is calculated by
ω_{t,m} = (1 / √(2π)) exp{ −(1 − ρ(f^s_{t,m}, f_t)) / (2δ²) },   ρ(f^s_{t,m}, f_t) = Σ_{i=1}^{K_1+K_2} f^s_{t,m}(i) · f_t(i)
where δ is a constant parameter.
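A compact sketch of this fusion feature and weight is given below, assuming 8-bit image regions and illustrative bin counts (K_1 = 32, K_2 = 9); the gradient-magnitude-weighted orientation histogram is one plausible reading of the OG definition above, and this function could serve as the weight_fn in the particle filter sketch of Section 2.2.1.

```python
import numpy as np

def fusion_feature(region, a=0.5, k1=32, k2=9):
    """Fused HOI + HOG feature of an image region: an intensity histogram
    and a gradient-orientation histogram (weighted by gradient magnitude),
    each L2-normalized and concatenated with weights a and 1 - a."""
    g, _ = np.histogram(region, bins=k1, range=(0, 256))
    g = g / (np.linalg.norm(g) + 1e-12)
    gy, gx = np.gradient(region.astype(float))
    h, _ = np.histogram(np.arctan2(gy, gx), bins=k2,
                        range=(-np.pi, np.pi), weights=np.hypot(gx, gy))
    h = h / (np.linalg.norm(h) + 1e-12)
    return np.concatenate([a * g, (1.0 - a) * h])

def particle_weight(f_particle, f_template, delta=0.7):
    """Particle weight from the similarity rho between fused features."""
    rho = float(np.dot(f_particle, f_template))
    return np.exp(-(1.0 - rho) / (2.0 * delta ** 2)) / np.sqrt(2.0 * np.pi)
```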

References

  1. Bavaria, J.E.; Appoo, J.J.; Makaroun, M.S.; Verter, J.; Yu, Z.-F.; Mitchell, R.S.; Investigators, G.T. Endovascular stent grafting versus open surgical repair of descending thoracic aortic aneurysms in low-risk patients: A multicenter comparative trial. J. Thorac. Cardiovasc. Surg. 2007, 133, 369–377.
  2. Pericevic, I.; Lally, C.; Toner, D.; Kelly, D.J. The influence of plaque composition on underlying arterial wall stress during stent expansion: The case for plaque-specific stents. Med. Eng. Phys. 2009, 31, 428–433.
  3. Cleary, K.; Peters, T.M. Image-guided interventions: Technology review and clinical applications. Annu. Rev. Biomed. Eng. 2010, 12, 119–142.
  4. Tacher, V.; Lin, M.; Desgranges, P.; Deux, J.-F.; Grünhagen, T.; Becquemin, J.-P.; Luciani, A.; Rahmouni, A.; Kobeiter, H. Image guidance for endovascular repair of complex aortic aneurysms: Comparison of two-dimensional and three-dimensional angiography and image fusion. J. Vasc. Interv. Radiol. 2013, 24, 1698–1706.
  5. Vykoukal, D.; Chinnadurai, P.; Davies, M.G. Cardiovascular imaging, navigation and intervention: Hybrid imaging and therapeutics. In Computational Surgery and Dual Training; Springer: Berlin/Heidelberg, Germany, 2014; pp. 125–148.
  6. Krishnaswamy, A.; Tuzcu, E.M.; Kapadia, S.R. Three-dimensional computed tomography in the cardiac catheterization laboratory. Catheter. Cardiovasc. Interv. 2011, 77, 860–865.
  7. Raval, A.N.; Telep, J.D.; Guttman, M.A.; Ozturk, C.; Jones, M.; Thompson, R.B.; Wright, V.J.; Schenke, W.H.; DeSilva, R.; Aviles, R.J. Real-time magnetic resonance imaging-guided stenting of aortic coarctation with commercially available catheter devices in swine. Circulation 2005, 112, 699–706.
  8. Göksu, C.; Haigron, P.; Acosta, O.; Lucas, A. Endovascular navigation based on real/virtual environments cooperation for computer-assisted team procedures. In Proceedings of Medical Imaging 2004: Visualization, Image-Guided Procedures, and Display, San Diego, CA, USA, 14 February 2004; pp. 257–266.
  9. Manstad-Hulaas, F.; Tangen, G.A.; Gruionu, L.G.; Aadahl, P.; Hernes, T.A. Three-dimensional endovascular navigation with electromagnetic tracking: Ex vivo and in vivo accuracy. J. Endovasc. Ther. 2011, 18, 230–240.
  10. Wang, J.; Ohya, T.; Liao, H.; Sakuma, I.; Wang, T.; Tohnai, I.; Iwai, T. Intravascular catheter navigation using path planning and virtual visual feedback for oral cancer treatment. Int. J. Med. Robot. Comput. Assist. Surg. 2011, 7, 214–224.
  11. Ramcharitar, S.; Pugliese, F.; Schultz, C.; Ligthart, J.; de Feyter, P.; Li, H.; Mollet, N.; van de Ent, M.; Serruys, P.W.; van Geuns, R.J. Integration of multislice computed tomography with magnetic navigation facilitates percutaneous coronary interventions without additional contrast agents. J. Am. Coll. Cardiol. 2009, 53, 741–746.
  12. Koolwal, A.B.; Barbagli, F.; Carlson, C.; Liang, D. An ultrasound-based localization algorithm for catheter ablation guidance in the left atrium. Int. J. Robot. Res. 2010, 29, 643–665.
  13. Li, F.P.; Rajchl, M.; White, J.A.; Goela, A.; Peters, T.M. Ultrasound guidance for beating heart mitral valve repair augmented by synthetic dynamic CT. IEEE Trans. Med. Imaging 2015, 34, 2025–2035.
  14. Luan, K.; Ohya, T.; Liao, H.; Kobayashi, E.; Sakuma, I. Vessel bifurcation localization based on intraoperative three-dimensional ultrasound and catheter path for image-guided catheter intervention of oral cancers. Comput. Med. Imaging Graph. 2013, 37, 113–122.
  15. McLeod, A.J.; Currie, M.E.; Moore, J.T.; Bainbridge, D.; Kiaii, B.B.; Chu, M.W.; Peters, T.M. Phantom study of an ultrasound guidance system for transcatheter aortic valve implantation. Comput. Med. Imaging Graph. 2014, 50, 24–30.
  16. Koschyk, D.H.; Nienaber, C.A.; Knap, M.; Hofmann, T.; Kodolitsch, Y.V.; Skriabina, V.; Ismail, M.; Franzen, O.; Rehders, T.C.; Dieckmann, C. How to guide stent-graft implantation in type B aortic dissection? Comparison of angiography, transesophageal echocardiography, and intravascular ultrasound. Circulation 2005, 112, I260–I264.
  17. Nissen, S.E.; Yock, P. Intravascular ultrasound: Novel pathophysiological insights and current clinical applications. Circulation 2001, 103, 604–616.
  18. Chen, F.; Liu, J.; Liao, H. Radiation-free 3D navigation and vascular reconstruction for aortic stent graft deployment. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2016; pp. 59–71.
  19. Chen, F.; Liao, R.; Liao, H. Fast registration of intraoperative ultrasound and preoperative MR images based on calibrations of 2D and 3D ultrasound probes. In Proceedings of the World Congress on Medical Physics and Biomedical Engineering, Toronto, ON, Canada, 7–12 June 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 220–223.
  20. Ma, H.T.; Wang, H.; Wang, C.; Hau, W.K. 3D reconstruction of coronary arteries using intravascular ultrasound (IVUS) and angiography. In Proceedings of the TENCON 2013–2013 IEEE Region 10 Conference (31194), Xi'an, China, 22–25 October 2013; pp. 1–4.
  21. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
  22. Shi, C.; Tercero, C.; Ikeda, S.; Ooe, K.; Fukuda, T.; Komori, K.; Yamamoto, K. In vitro three-dimensional aortic vasculature modeling based on sensor fusion between intravascular ultrasound and magnetic tracker. Int. J. Med. Robot. Comput. Assist. Surg. 2012, 8, 291–299.
  23. Lee, J.T.; White, R.A. Basics of intravascular ultrasound: An essential tool for the endovascular surgeon. Semin. Vasc. Surg. 2004, 17, 110–118.
  24. Chen, F.; Liu, J.; Ma, R.B.; Liao, H. Texture enhanced deformable model for detection of media-adventitia and lumen borders on IVUS images. Comput. Med. Imaging Graph. 2016, submitted for publication.
  25. Osher, S.; Paragios, N. Geometric Level Set Methods in Imaging, Vision, and Graphics; Springer: Berlin/Heidelberg, Germany, 2003; pp. 3–20.
  26. Laws, K.I. Rapid texture identification. In Proceedings of SPIE 0238, Image Processing for Missile Guidance, San Diego, CA, USA, 29 July 1980; pp. 376–381.
  27. Kim, K.I.; Jung, K.; Park, S.H.; Kim, H.J. Support vector machines for texture classification. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1542–1550.
  28. Blackall, J.M.; Rueckert, D.; Maurer, C.R., Jr.; Penney, G.P.; Hill, D.L.; Hawkes, D.J. An image registration approach to automated calibration for freehand 3D ultrasound. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Pittsburgh, PA, USA, 11–14 October 2000; Springer: Berlin/Heidelberg, Germany, 2000; pp. 462–471.
Figure 1. Configuration of an ultrasound (US) navigation system using global and local information. MR: magnetic resonance; IVUS: intravascular ultrasound.
Figure 2. 2D (two-dimensional) US-3D (three-dimensional) MR image registration based on calibration information of the 2D and 3D US probes.
Figure 3. The workflow of local navigation information (3D aortic model including morphological information) based on US images tracking and IVUS image segmentation. MA: media-adventitia.
Figure 4. Support vector machine (SVM) classification of image. (a) an IVUS image; (b) image regionalization results during MA border detection; (c) image regionalization results during lumen border detection.
Figure 5. The in vitro experiment platform of aortic and cardiac phantom.
Figure 6. Registration results of 2D US-3D US, 3D US-3D MR and final 2D US-3D MR images (red volume is the left ventricle model; blue volume is the right ventricle model). (a) Registration results of 3D US-3D MR images; (b) Registration results of 2D US-3D MR images; (c) Registration results of 2D US-3D US images.
Figure 7. Tracking results of the aortic cross-section and catheter tip in US images. (a) A US image which contains the catheter's tip; (b) a US image which does not contain the catheter's tip.
Figure 8. 3D trajectories of the catheter.
Figure 9. Segmentation examples of IVUS images; yellow and green contours denote the MA and lumen borders, respectively (dotted lines denote borders from manual segmentation and solid lines denote contours from the proposed segmentation method).
Figure 10. Image guidance interface and information. (a) a global navigation map from 2D US-3D MR registration; (b) enlarged view of a global map at the region of aorta, green curve is the preoperative planning path, asterisk denotes the position of interventional catheter; (c) collected MR image; (d) local navigation information with morphological characteristics of aorta. Dark yellow volume is lesion; (e) virtual visual image from endovascular view; (f) collected IVUS image.
Figure 11. (a) Two perpendicular X-ray projections were taken to verify the targeting distance between the catheter arrival position and target point during catheter intervention under the global guidance alone; (b) Two perpendicular X-ray projections under the global map plus local navigation.
Table 1. Mean and standard deviations (Std) of the Dice similarity and Hausdorff distance between the lumen/MA (media-adventitia) border obtained by our method and manual delineation.
Index | MA Border (Mean) | MA Border (Std) | Lumen Border (Mean) | Lumen Border (Std)
Hausdorff distance | 0.65 mm | 0.20 mm | 0.41 mm | 0.27 mm
Dice index | 92.30% | 8.21% | 89.87% | 7.12%
