Article

Multimodal Registration for Image-Guided EBUS Bronchoscopy

1 School of Electrical Engineering and Computer Science, Penn State University, State College, PA 16802, USA
2 Penn State Milton S. Hershey Medical Center, Hershey, PA 17033, USA
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(7), 189; https://doi.org/10.3390/jimaging8070189
Submission received: 18 May 2022 / Revised: 27 June 2022 / Accepted: 29 June 2022 / Published: 8 July 2022

Abstract
The state-of-the-art procedure for examining the lymph nodes in a lung cancer patient involves using an endobronchial ultrasound (EBUS) bronchoscope. The EBUS bronchoscope integrates two modalities into one device: (1) videobronchoscopy, which gives video images of the airway walls; and (2) convex-probe EBUS, which gives 2D fan-shaped views of extraluminal structures situated outside the airways. During the procedure, the physician first employs videobronchoscopy to navigate the device through the airways. Next, upon reaching a given node’s approximate vicinity, the physician probes the airway walls using EBUS to localize the node. Because lymph nodes lie beyond the airways, EBUS is essential for confirming a node’s location. Unfortunately, it is well documented that EBUS is difficult to use. In addition, while new image-guided bronchoscopy systems provide effective guidance for videobronchoscopic navigation, they offer no assistance for guiding EBUS localization. We propose a method for registering a patient’s chest CT scan to live surgical EBUS views, thereby facilitating accurate image-guided EBUS bronchoscopy. The method entails an optimization process that registers CT-based virtual EBUS views to live EBUS probe views. Results using lung cancer patient data show that the method correctly registered 28/28 (100%) lymph nodes scanned by EBUS, with a mean registration time of 3.4 s. The mean position and direction errors of registered sites were 2.2 mm and 11.8°, respectively. Sensitivity studies also show the method’s robustness to parameter variations. Lastly, we demonstrate the method’s use in an image-guided system designed for guiding both phases of EBUS bronchoscopy.

1. Introduction

The state-of-the-art procedure for examining the lymph nodes in lung-cancer patients draws on an endobronchial ultrasound (EBUS) bronchoscope [1,2,3,4]. The EBUS bronchoscope, also referred to as a convex-probe EBUS or linear EBUS bronchoscope, integrates two modalities into one device (Figure 1): (1) videobronchoscopy, which gives video images of the airway walls (endoluminal surfaces); and (2) convex-probe EBUS, which gives 2D fan-shaped views of extraluminal structures situated outside the airways. Because lymph nodes lie outside the airways, and hence are unobservable by videobronchoscopy, EBUS is essential for confirming a node’s location.
Before the procedure, the physician first examines a patient’s 3D chest computed tomography (CT) scan to select suspicious lymph nodes (Figure 2a). Later, during the live surgical procedure, the physician uses the EBUS bronchoscope to localize and biopsy the lymph nodes (Figure 2b). In particular, for each node, the physician first employs videobronchoscopy to navigate the device through the airways. Next, upon reaching the node’s approximate expected vicinity, the physician probes the nearby airway walls using EBUS to localize the node.
Unfortunately, EBUS can be difficult to use effectively [5]. This situation arises for three reasons [6,7,8]. First, the physician must mentally translate their anatomical knowledge to CT scan observations and live EBUS views. Second, EBUS views generally do not lie on the orthogonal 2D axial, coronal, or sagittal planes readily observed in CT. This makes it hard to correlate CT observations to live EBUS views. Finally, because of EBUS’s limited field of view (FOV), the physician performs an essentially blind trial-and-error EBUS sweep to localize a node. This can be challenging given the typical small size of lymph nodes (≈10 mm long-axis length) and the complex 360° cylindrical span of the airway walls.
Recently proposed image-guided bronchoscopy systems have proven to make videobronchoscopic navigation through the airways a high-success, skill-independent operation [9,10]. Using the concept of virtual bronchoscopy (VB), the live bronchoscopic video can be readily correlated and registered to precomputed CT-based VB views that mimic the video views of the real bronchoscope’s video camera [11,12,13,14]. Figure 2c illustrates this operation during a live procedure. The physician follows the guidance system’s presented sequence of VB views along a preplanned guidance route, which, in turn, leads the physician to each diagnostic site. Along the way, image registration synchronizes the position of the real device and virtual camera.
These systems, however, do not offer any means to guide the placement of the EBUS probe for localizing extraluminal sites, as suggested by Figure 2d. Thus, accurate localization, crucial for effective biopsy, continues to be problematic [8,15]. To emphasize this point, a multi-physician study showed that a physician’s biopsy success rate was only 43% despite successfully navigating to the correct airway 78% of the time [16]. Other studies have shown that, even when image-guided navigation and EBUS are used together, biopsy success rates barely exceeding 50% still occur [17,18]. We propose a method for registering the pre-operative chest CT scan to the live EBUS views, thereby facilitating accurate image-guided EBUS bronchoscopy.
Several recent works have attempted to extend image guidance to EBUS localization [19,20,21,22,23]. For EBUS bronchoscopy as applied to lymph node examination, the approach of Sato et al. entails considerable manual interaction to preplan sites for invoking EBUS, and offers no virtual-to-real EBUS registration to help with EBUS localization [19]. Sorger et al. proposed a commendable electromagnetic navigation system that draws upon an extra sensor attached to the EBUS bronchoscope [20,21]. However, the system does not register the live EBUS probe view to the pre-operative CT, thereby limiting optimal EBUS placement. It also adds the well-known issues arising from using electromagnetic technology, namely the extra device cost and susceptibility to patient motion [24].
The other works applied virtual bronchoscopy to the allied problem of peripheral nodule diagnosis [22,23]. The physician uses VB to navigate a standard videobronchoscope close to the nodule and then inserts a radial-probe EBUS, which gives 360° views of the extraluminal airway structure, into the videobronchoscope’s working channel. Since radial-probe EBUS is not designed for examining central-chest lymph nodes, it presents a different guidance scenario (separate devices, requires probe removal before biopsy, dissimilar imaging scenario) [1]. This research, however, does give impetus for work with convex-probe EBUS. The approach of Tamiya et al. does not give direct guidance for EBUS placement [22]. The method of Luo and Mori does accomplish EBUS localization. However, it needs extra sensors to track the device and was only tested in a controlled phantom environment [23].
Our proposed registration method draws upon the concept of a CT-based virtual EBUS, as introduced by Zang et al. [25,26]. The virtual EBUS, which mimics the FOV of the convex-probe EBUS, acts as a complementary analog to the virtual bronchoscope used for image-guided navigation. With the virtual EBUS and associated registration method, live image-guided EBUS probe localization can now be performed.
The method, partially motivated by previous research in ultrasound registration, draws upon the idea of mutual information (MI). MI techniques have been commonly used to register ultrasound and CT images, since the MI metric does not require the two sources to have similar image appearances [27,28]. Figure 3, discussed in Section 2, clearly illustrates how the real and virtual EBUS views differ in appearance. Unfortunately, because the MI metric relies on a joint intensity probability model while completely ignoring spatial information, MI-based techniques can fail when shading artifacts exist [29,30]. To improve MI-based methods, Studholme et al. proposed a normalized MI (NMI) metric, which is more robust to changes in image overlap [31]. They also integrated spatial information by calculating mutual information among local regions [32]. As a further enhancement, researchers have included prior knowledge, such as region shape, to achieve more robust, accurate registration. For example, Knops et al. performed k-means clustering to distinguish image pixels having similar intensities but belonging to different regions, and then used these modified intensity bins to calculate NMI during registration [33].
During live EBUS localization, our proposed method uses an optimization process, combining both NMI and region-shape information, to register CT-based virtual EBUS views to the live views generated by the real EBUS probe. We also integrated the method into an image-guided EBUS bronchoscopy system that features a multimodal virtual EBUS bronchoscope, which combines the standard virtual videobronchoscope and the virtual EBUS [26]. Thus, intuitive image guidance is now attainable for both videobronchoscopic navigation and EBUS localization. Section 2 details the method. Section 3 validates the method’s performance using lung cancer patient data and illustrates the method’s utility in our image-guided EBUS bronchoscopy system. Finally, Section 4 offers concluding comments.

2. Methods

2.1. Overview

We first overview the protocol employed by our image-guided EBUS bronchoscopy system [26]. Section 2.2 then discusses the method for registering the virtual EBUS to live EBUS probe views.
To begin, the system complied with the standard protocol of nodal-staging bronchoscopy. First, prior to the procedure, a guidance plan was derived from the patient’s high-resolution 3D chest CT scan (voxel dimensions Δx, Δy, Δz < 1 mm) using methods similar to those employed by existing image-guided bronchoscopy systems [9,24,34]. This gives the airway tree’s endoluminal surfaces, a set of target lymph nodes, and an airway guidance route for each node. We used previously validated methods to generate the guidance plan [25,34,35,36,37].
Next, in the operating room, the EBUS bronchoscope was interfaced to the guidance computer. It was assumed that the physician uses the Olympus BF-UC180F EBUS bronchoscope, the de facto standard device for EBUS nodal staging [1,3]. This device produces data streams for both the videobronchoscope and EBUS probe (Figure 1), which serve as system inputs. Section 2.3 gives technical detail for the guidance computer and interfacing to the bronchoscopy suite.
Image-guided EBUS bronchoscopy fundamentally involves aligning planning data derived from the patient’s chest CT scan to live views provided by the EBUS bronchoscope. This requires the definition of virtual and real spaces representing the chest (Figure 2). For our scenario, the 3D chest CT scan defines a virtual chest space, whereby a virtual EBUS bronchoscope moves through the CT-based virtual airways. Similarly, as the real device navigates through the physical airways, the EBUS bronchoscope’s data streams give views inside the corresponding real 3D chest space. This is similar to what current image-guided bronchoscopy systems perform, where virtual and real videobronchoscopes simultaneously navigate through distinct manifestations of the same physical chest space [14,38]. For our problem, image guidance involves synchronizing the two spaces during both videobronchoscopic navigation and EBUS localization. In this way, the location of the real device can be confirmed at all times.
During the live procedure, the system used a multimodal virtual EBUS bronchoscope, which extends the concept of the CT-based virtual bronchoscope. This tool’s multimodal display mimics both components of the real EBUS bronchoscope, as shown in Figure 3. The CT-based virtual videobronchoscope simulated the views supplied by the real device’s videobronchoscope camera, as in existing image-guided bronchoscopy systems. In addition, a CT-based virtual EBUS probe simulated 2D fan-shaped EBUS views mimicking the scan plane of the real EBUS probe.
We designed the virtual device’s videobronchoscope camera and EBUS probe to comply with the known geometry and specifications of the Olympus BF-UC180F. Both components lie at 3D orientations identical to those of the real EBUS bronchoscope, and both have the same FOVs as the real device. We give the specific design parameters below, per Figure 4:
  • Bronchoscope tip starts at point s_B with axis n_B.
  • Videobronchoscope camera axis n_C is offset by an angle Δ = 20° from n_B.
  • EBUS probe axis n_US ∥ n_B at a distance of 6 mm from the tip start s_B; i.e., ‖s_B − s_US‖ = 6 mm, where s_US is the origin of n_US.
  • 2D EBUS fan-shaped scan plane sweep = 60°, with range = 4 cm.
With reference to Figure 1, the magenta arrow represents the virtual bronchoscope’s camera axis n_C, and the blue arrow represents the virtual EBUS probe axis n_US. In addition, per Figure 3, in virtual EBUS view I_CT, the blue lines delineate the virtual EBUS’s fan-shaped FOV, whereas the magenta line denotes the camera axis n_C. In this way, the virtual EBUS bronchoscope gives a mechanism for relating the two live data streams to the CT-based virtual space. In particular, it provides the necessary linkage to enable image-guided EBUS bronchoscopy.
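To make the device-tip geometry concrete, the following minimal sketch (Python/NumPy; all names are illustrative and not taken from the authors’ C++ implementation) encodes the BF-UC180F parameters listed above:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VirtualEbusTip:
    """Virtual BF-UC180F tip model, with the parameters of Figure 4."""
    s_b: np.ndarray               # tip start point s_B (mm)
    n_b: np.ndarray               # unit bronchoscope axis n_B
    cam_offset_deg: float = 20.0  # camera axis n_C offset from n_B
    probe_offset_mm: float = 6.0  # ||s_B - s_US||
    fan_sweep_deg: float = 60.0   # EBUS fan-shaped scan plane sweep
    fan_range_mm: float = 40.0    # EBUS scan depth (4 cm)

    def probe_origin(self) -> np.ndarray:
        # Assumption: s_US lies 6 mm ahead of s_B along n_B; the text
        # fixes only the distance ||s_B - s_US||, not its direction.
        return self.s_b + self.probe_offset_mm * self.n_b
```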
For a given lymph node, the physician follows the system display as follows. First, the physician navigates the “real” EBUS bronchoscope close to the node along the preplanned airway guidance route by following the standard virtual bronchoscope. This step draws on an established approach for registering live bronchoscopic video to CT-based VB views (e.g., Merritt et al. [14]); this synchronizes the positions of both the virtual and real devices. Next, the physician follows the virtual EBUS display to localize the lymph node via EBUS. As discussed in Section 2.2, this second step critically hinges on our proposed CT-EBUS registration method, which synchronizes the positions of the virtual and real EBUS probes. Section 3 later highlights the complete system during image-guided bronchoscopy, while Zang et al. give more system detail [26].

2.2. Virtual-to-Real EBUS Registration

Proper localization of a lymph node depends on our proposed method for registering the virtual and real EBUS probes. To this point, registration involves aligning the virtual EBUS view derived from the preoperative CT scan and the live “real” intraoperative EBUS probe view.
For our EBUS bronchoscopy problem, we performed virtual-to-real EBUS registration by solving an optimization problem. Let I_US denote the target EBUS view in 3D real space and I_CT^p denote a CT-based virtual EBUS view at pose p within the 3D virtual space, where

$p = [a, b, c, t_x, t_y, t_z]$,  (1)

(a, b, c) denote the Euler angles, and (t_x, t_y, t_z) denote the 3D position. Per Figure 4a, p specifies the location s_US and probe-axis direction n_US of the virtual EBUS probe. Upon reaching a lymph node’s general vicinity via standard videobronchoscopy guidance, as specified by known pose p_i along the preplanned airway guidance route (e.g., using [14]), the physician next pushes the EBUS probe against the airway wall to give view I_US. The pose of I_US is technically unknown, since the physician performs this scan manually at their discretion. However, assuming that the physician follows the cues supplied by the guidance system during the guided procedure, we can surmise that the pose of I_US is close to the known pose p_i of view I_CT^{p_i}. Hence, pose p in (1) is initialized to the known pose p_i in virtual space. This is the starting point of our optimal registration problem.
To formulate the optimization, we employed a cost function C that combines raw image intensity information and known ROI segmentation knowledge. In particular,
$C(I_{CT}^{p}, I_{US}) = C_N(I_{CT}^{p}, I_{US}) - C_D(I_{CT}^{p}, I_{US})$,  (2)
where C_N is the NMI metric, focusing on intensity information, and C_D is the Dice index, focusing on ROI shape knowledge. The NMI metric, adapted from Helferty et al. [39], is given by
$C_N(I_{CT}^{p}, I_{US}) = 1 - \frac{h(I_{CT}^{p}) + h(I_{US})}{h(I_{CT}^{p}, I_{US})}$,  (3)
whereas the Dice index, a commonly used measure of ROI overlap between two images [40,41], is given by
$C_D(I_{CT}^{p}, I_{US}) = \frac{2\,|R_{CT}^{p} \cap R_{US}|}{|R_{CT}^{p}| + |R_{US}|}$.  (4)
In (3), h(I_CT^p) and h(I_US) are the marginal entropies of images I_CT^p and I_US, whereas h(I_CT^p, I_US) is their joint entropy:
$h(I_{CT}^{p}) = -\sum_{k=0}^{M-1} \sum_{l=0}^{M-1} P_{CT,US}(k,l)\,\log P_{CT}(k)$
$h(I_{US}) = -\sum_{k=0}^{M-1} \sum_{l=0}^{M-1} P_{CT,US}(k,l)\,\log P_{US}(l)$
$h(I_{CT}^{p}, I_{US}) = -\sum_{k=0}^{M-1} \sum_{l=0}^{M-1} P_{CT,US}(k,l)\,\log P_{CT,US}(k,l)$,  (5)
where P_CT(·) and P_US(·) are the respective marginal probability density functions of the pixel-intensity values in images I_CT^p and I_US, P_CT,US(·,·) is the joint density function between corresponding pixels in the two images, and M = 256 is the number of gray levels that an image pixel can assume. In (4), R_CT^p represents the target lymph node’s region of interest (ROI) as it appears in virtual EBUS view I_CT^p; i.e.,
$R_{CT}^{p}(x,y) = \begin{cases} 1, & \text{if } I_{CT}^{p}(x,y) \in \text{target node's ROI in CT} \\ 0, & \text{otherwise,} \end{cases}$  (6)
where the ROI was defined during planning. Similarly, R_US equals the segmented ROI appearing in EBUS frame I_US, where we used the previously validated automatic method of Zang et al. for this operation [42]. Thus, the collection of CT scan voxels constituting R_CT^p, as given by (6), corresponds to the known ROI knowledge that will be correlated with R_US during optimization. Cost C of (2) lies in the range −2 ≤ C(I_CT^p, I_US) ≤ 1, since C_N(I_CT^p, I_US) ranges from −1 (strongest NMI correlation) to 1 (no correlation) and C_D(I_CT^p, I_US) ranges from 0 (no ROI overlap) to 1 (total ROI overlap).
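To fix ideas, the following minimal sketch (Python/NumPy) evaluates cost (2) for one pose, assuming the virtual and real EBUS views have been resampled to a common 2D grid with M = 256 gray levels; the function and argument names are ours, not the authors’. The valid mask anticipates the bounding-trapezoid restriction described later in this section.

```python
import numpy as np

M = 256  # gray levels per pixel

def registration_cost(i_ct, i_us, r_ct, r_us, valid):
    """Cost (2): C = C_N - C_D.

    i_ct, i_us : virtual/real EBUS images (uint8, common grid)
    r_ct, r_us : boolean ROI masks R_CT^p and R_US
    valid      : boolean mask of pixels used for the histograms
    """
    a = i_ct[valid].ravel()
    b = i_us[valid].ravel()
    # Joint density (5) from the normalized joint histogram; marginals follow.
    p_joint, _, _ = np.histogram2d(a, b, bins=M, range=[[0, M], [0, M]])
    p_joint /= p_joint.sum()
    p_ct = p_joint.sum(axis=1)
    p_us = p_joint.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    c_n = 1.0 - (entropy(p_ct) + entropy(p_us)) / entropy(p_joint)  # NMI (3)
    c_d = 2.0 * (r_ct & r_us).sum() / (r_ct.sum() + r_us.sum())     # Dice (4)
    return c_n - c_d
```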
Given (2)–(6), our optimization problem was formulated as

$p_o = \arg\min_{p \in N_{p_i}} C(I_{CT}^{p}, I_{US})$,  (7)
where p_i is the initial pose, N_{p_i} is a search neighborhood around p_i, and p_o is the optimal pose minimizing cost C(·,·). To solve (7), we adapted the simplex optimization algorithm to our EBUS registration problem to iteratively search for the pose p_o giving minimum cost C [43]. To this search, we added a constraint reflecting the fact that the physician must push the EBUS probe against the airway wall to acquire an EBUS view. This implies that p_i and all poses p ∈ N_{p_i} must be situated on the airway wall surface. Therefore, we interleaved a surface-voxel search with the simplex algorithm. We now summarize the complete method as Algorithm 1.
Algorithm 1: Multimodal CT-EBUS Registration Algorithm.
1. Initialize optimization (7) with captured EBUS frame I_US and VB view I_CT^{p_i} at pose p_i.
2. Segment ROI R_US in I_US using the method of Zang et al. [42].
3. Derive a modified pose p_i′ and a simplex N_{p_i′} that constrains the candidate neighboring poses to be situated on the airway wall, as depicted in the chest CT scan.
4. Using cost function C(·,·) defined by (2)–(6) and simplex N_{p_i′}, run the simplex algorithm on (7) to update pose p_o until it does not change for T iterations.
5. Repeat steps 3–4 using p_i = p_o until p_o no longer changes.
6. Output p_o and virtual EBUS view I_CT^{p_o}.
At the conclusion of the registration algorithm, the precise location in 3D chest space of the lymph node depicted in the live intraoperative EBUS probe view I_US is known, as specified by p_o. We elaborate on the algorithm’s steps below.
Regarding steps 1–2, Figure 5a,b depicts an example of the initial real and virtual EBUS views I_US and I_CT^{p_i} for a station-10 lymph node. For the real space, I_US is initialized as an EBUS frame captured after completing bronchoscopic navigation. For the virtual space, p_i is the known final pose reached during navigation, whereas R_CT^{p_i} equals the predefined nodal ROI as it appears in I_CT^{p_i} (green regions in right-side views of Figure 5a–c).
Step 3 next generates a simplex N_{p_i′} of suitable candidate search poses, where N_{p_i′} is defined by seven vertices situated on the airway wall within CT-based virtual space. Each simplex vertex represents a pose on the airway wall neighboring p_i. To begin, we first derived a modified pose p_i′ as the first voxel along the virtual EBUS device’s viewing axis n_US whose value was > −600 HU (HU = Hounsfield units). In chest CT images, it is well known that air appears as dark voxels with HU value ≈ −1000, while the brighter surrounding airway walls have HU values in the range [50, 200]. Hence, we picked the conservative threshold of −600 HU to identify potential airway-wall voxels [35]. Next, to derive simplex N_{p_i′}, we first computed six candidate vertices p_v defining the simplex by changing the six parameters of p_i′ separately with increments Δa, Δb, Δc, Δt_x, Δt_y, and Δt_z, per (1) [44]. We then adjusted the vertices p_v to corresponding surface points on the airway wall.
To find the surface points defining the simplex, we used the algorithm of Gibbs et al. [35]. The search drew upon the surface-voxel structure constituting the previously derived airway-tree endoluminal surfaces. To begin, we constructed a k-d tree for these voxels to facilitate a quick search. Subsequently, for each vertex p_v of the initial simplex, we searched the k-d tree for the point p_s closest to p_v. Because the marching cubes algorithm was used to compute the airway-tree endoluminal surfaces, the resulting surface is only one voxel layer thick [45]. However, larger airways have walls of thickness >1 voxel; this introduces uncertainty into the surface-voxel locations [35]. Thus, to account for thicker airway walls, we also considered neighboring voxels surrounding surface point p_s as valid candidate vertices. In particular, if p_v has an HU value above the surface threshold and ‖p_v − p_s‖ < 1 mm, then we accepted p_v as an adjusted vertex. Note, however, that the k-d tree was based only on surface-voxel coordinates, without regard for airway location. Hence, the closest p_s to a given p_v could be on the wrong airway-tree branch, as Gibbs et al. noted [35]. To ensure that p_s is on the same airway branch as p_v, we performed the following, per Figure 6:
1. Compute the dot product v_1 · v_2, where v_1 = p_v − p_i′ and v_2 = p_v − p_s.
2. If v_1 · v_2 > 0, p_s is part of the correct airway and is kept as the updated vertex.
3. Otherwise, p_s is from the wrong branch. Find the closest airway centerline point p_l to p_i′ and search for a new surface-voxel candidate p_s′ along the direction from p_l to p_v, based on the HU threshold.
The six adjusted vertices p_v, along with p_i′, delineate the initial simplex N_{p_i′} for optimization step 4; a code sketch of the branch test follows.
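A minimal sketch of the branch test and the threshold-based fallback search, assuming p_v, the wall-adjusted start pose p_i′, and the k-d tree hit p_s are NumPy 3-vectors; hu_at is a hypothetical caller-supplied HU interpolator for the CT volume:

```python
import numpy as np

def same_branch(p_v, p_i_prime, p_s):
    """Steps 1-2: keep k-d tree hit p_s only if it lies on the same
    airway branch as candidate vertex p_v (sign test on a dot product)."""
    v1 = p_v - p_i_prime
    v2 = p_v - p_s
    return float(np.dot(v1, v2)) > 0.0

def march_to_wall(hu_at, p_from, p_to, step_mm=0.5, thresh_hu=-600.0):
    """Step 3 fallback: march from p_from toward p_to, returning the first
    sample whose HU value exceeds the airway-wall threshold."""
    d = p_to - p_from
    dist = float(np.linalg.norm(d))
    if dist == 0.0:
        return None
    u = d / dist
    for k in range(1, int(dist / step_mm) + 1):
        q = p_from + k * step_mm * u
        if hu_at(q) > thresh_hu:
            return q
    return None  # no wall crossing found along the segment
```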
Step 4 now applies the iterative simplex algorithm to collapse the simplex around an optimal pose p_o [43,44]. During each iteration, cost C was evaluated at all vertices. Depending on these results, various geometric operations were performed that expand, contract, or reflect the simplex vertices toward a minimum-cost voxel solution p_o. The algorithm continued until p_o stayed unchanged for T iterations. Steps 3–4 of the top-level multimodal CT-EBUS registration algorithm were then repeated until the voxel corresponding to p_o no longer changed.
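For orientation, steps 3–5 could be organized around an off-the-shelf Nelder–Mead solver, as in the sketch below. This simplification re-projects each round’s result onto the airway wall rather than reproducing the authors’ per-iteration interleaved surface-voxel search and T-iteration stopping rule; the helpers cost_at_pose, project_to_wall, and wall_simplex are hypothetical stand-ins for cost (2), the surface search, and the simplex construction of step 3.

```python
import numpy as np
from scipy.optimize import minimize

def register_pose(p_init, cost_at_pose, project_to_wall, wall_simplex,
                  max_rounds=50, tol=0.5):
    """Alternate wall-constrained simplex construction with Nelder-Mead
    minimization of cost (7); poses are 6-vectors [a, b, c, tx, ty, tz]."""
    p = project_to_wall(p_init)                    # modified pose p_i'
    for _ in range(max_rounds):
        res = minimize(cost_at_pose, p, method='Nelder-Mead',
                       options={'initial_simplex': wall_simplex(p),  # 7 x 6
                                'xatol': tol, 'fatol': 1e-4})
        p_new = project_to_wall(res.x)             # keep pose on the wall
        if np.allclose(p_new, p, atol=tol):        # step 5 stopping rule
            return p_new
        p = p_new
    return p
```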
The marginal densities (5) used by cost function C_N in (2)–(3) were estimated by normalized image histograms of I_CT^p and I_US, while the joint density was given by the normalized joint histogram between the two images. These calculations, however, did not use the entire image. Since EBUS images are noisy and filled with bland low-information regions, major portions of I_CT^p and I_US provide misleading or useless information. Instead, we first defined the smallest trapezoid that bounds the segmented ROI in the given image I_US and stays within the EBUS’s fan-shaped scan region. This region contains the most useful findings. Next, for both I_CT^p and I_US, we only used pixels within the bounding trapezoid to calculate the required histograms; see Figure 7.
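A sketch of one way to form this mask, under the simplifying assumption that the trapezoid is the ROI’s row band clipped to the fan (the fan’s lateral edges are straight lines, so the clipped band is trapezoidal); names are illustrative:

```python
import numpy as np

def trapezoid_mask(r_us, fan):
    """Smallest row band of the fan containing ROI r_us.

    r_us : boolean ROI mask R_US (H x W)
    fan  : boolean mask of the fan-shaped scan region (H x W)
    """
    rows = np.nonzero(r_us.any(axis=1))[0]
    if rows.size == 0:
        return np.zeros_like(fan, dtype=bool)  # no ROI: empty mask
    band = np.zeros_like(fan, dtype=bool)
    band[rows.min():rows.max() + 1, :] = True
    return band & fan
```

The resulting mask plays the role of the valid argument in the earlier cost sketch: only pixels inside it contribute to the histograms of (5), for both I_CT^p and I_US.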
For the station 10 node example of Figure 5, the final registration (Figure 5c) required 57 iterations of steps 3–5. The result clearly confirms that the physician has settled at an effective site. Figure 3 gives another registration example for a station 4R node (case 21405-116).

2.3. Implementation

We ran all experiments on the guidance computer. For our tests, this computer was a Dell Precision T5500 64-bit Windows-based PC (dual 2.8 GHz 6-core CPUs, 24 GB RAM) powered by an Nvidia Quadro 4000 2GB PCIe graphics engine and a Matrox Vio IA/OA frame grabber. All software was implemented in C++. Many data-intensive operations were parallelized using Nvidia’s compute-unified device-architecture (CUDA) tools and OpenMP. A 24-inch Dell monitor served as the guidance system’s display. A standard Olympus Evis Exera III bronchoscopy suite was used for all tests (CV-190 video system, CLV-190 light source, EU-ME1 ultrasound unit, and display monitor). To interface the EBUS bronchoscope to the guidance computer, we connected a BNC-to-BNC cable from the bronchoscope display monitor’s PnP (picture-in-picture) video output to the guidance computer’s Matrox video input. This gave access to the live video streams from the EBUS bronchoscope (both the bronchoscopic video and EBUS).

3. Results

Section 3.1 tests the efficacy of our virtual-to-real EBUS registration method, whereas Section 3.2 presents example results using our method within a complete image-guided EBUS bronchoscopy system.

3.1. CT-EBUS Registration Study

We developed and tested our method using data collected retrospectively from 10 lung cancer patients. These data were collected at our University’s lung cancer management clinic through two IRB-approved study protocols under informed consent. Chest CT scans were generated by Siemens CT scanners. Each scan consisted of 512 × 512 axial-plane sections (570 to 720 sections per scan), with section thickness = 0.75 mm, section spacing Δz = 0.5 mm, and axial-plane resolution Δx = Δy ranging from 0.60 mm to 0.81 mm. Seven studies drew upon a standard-definition EBUS bronchoscope video feed giving 300 × 300 EBUS views, and three studies drew upon a high-definition video feed giving 816 × 848 EBUS views. Over the 10 patients, 28 ROIs were predefined, with a typical long-axis length >15 mm: 27 lymph nodes and 1 azygos vein. The lymph nodes were distributed over the nodal stations as follows: station 4, 10 nodes; station 7, 9 nodes; station 10, 5 nodes; and station 11, 3 nodes.
To define the ground truth for each ROI, we first picked a 2D EBUS frame I_US that depicts the ROI. Next, we performed 2D EBUS segmentation using Zang’s method to extract the ROI in I_US [42]. Finally, we established the ground-truth virtual EBUS view I_CT^{p_G} best matching I_US. To achieve this, we started at the final pose p_i of the ROI’s precomputed optimal airway route in CT-based virtual space [25]. We then manually moved the virtual EBUS bronchoscope around this pose to locate a ground-truth pose p_G such that view I_CT^{p_G} most closely mimics the anatomical appearance shown in I_US.
To perform the test for each ROI, we began by running the registration method at initial pose p_i in virtual space. Upon convergence, the method returned a virtual EBUS view I_CT^p at an optimal pose p = p_o. Let

$p_G = [t_x^G, t_y^G, t_z^G], \qquad p_o = [t_x^o, t_y^o, t_z^o]$

represent the 3D positions and

$d_G(a_G, b_G, c_G), \qquad d_o(a_o, b_o, c_o)$

denote the direction vectors for p_G and p_o, respectively. To quantify the registration performance, we compared p_o and p_G using three metrics, as suggested in [14]:
1. Position difference e_p, which measures the Euclidean distance between p_o and p_G:
$e_p = \|p_o - p_G\|$;
2. Direction error e_d, which gives the angle between d_o and d_G:
$e_d = \cos^{-1}(d_o \cdot d_G)$;
3. Needle difference e_N, which indicates the distance between two extended needle tips at p_o and p_G:
$e_N = \|(p_o + l_N\,d_o) - (p_G + l_N\,d_G)\|$.
Regarding these metrics, e_p indicates the positional error of the bronchoscope on the airway surface, e_d quantifies the orientation error, and e_N measures the potential biopsy error.
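For concreteness, the three metrics reduce to a few lines of NumPy; the sketch below assumes unit-length direction vectors and leaves the needle extension l_N as a free parameter, since its value is not restated here:

```python
import numpy as np

def registration_errors(p_o, d_o, p_g, d_g, l_n):
    """Position, direction, and needle errors between the optimal pose
    (p_o, d_o) and ground-truth pose (p_g, d_g); positions in mm,
    directions unit vectors, l_n the assumed needle extension in mm."""
    e_p = np.linalg.norm(p_o - p_g)
    e_d = np.degrees(np.arccos(np.clip(np.dot(d_o, d_g), -1.0, 1.0)))
    e_n = np.linalg.norm((p_o + l_n * d_o) - (p_g + l_n * d_g))
    return e_p, e_d, e_n
```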
For the 28-ROI test set, the registration method correctly localized 28/28 ROIs (100%), with an average registration time of 3.4 s; this represents the time to complete the iterative simplex optimization for (7) after segmenting ROI R_US in the captured EBUS frame. (We chose T = 15 for this test.) Table 1 gives the aggregate performance for the error metrics. Lastly, Figure 8 illustrates two registration examples. The results firmly assert the effectiveness of the method.
We next performed a parameter sensitivity test for the registration method. For the test, we first randomly picked one ROI from each human case used in the 28-ROI study. We then performed tests on these 10 ROIs, whereby one search parameter was varied over a given range:
1. Positional parameters Δt_x, Δt_y, and Δt_z: range [−10 mm, 10 mm], step size = 2.5 mm.
2. Angle parameters Δa, Δb, and Δc: range [−100°, 100°], step size = 25°.
3. Iteration parameter T: from 5 to 25, step size = 5.
(The angle range accounts for the physician’s limitations in twisting the bronchoscope [46].) For each test, one parameter was changed, whereas all others were held at their default values. Each run always started at the ground-truth pose p_i that terminates an ROI’s derived optimal path.
Table 2, Table 3 and Table 4 summarize the results for three parameters; results for the other parameters are similar. All results are given over the 10-ROI test set. As a disclaimer, we point out that technician judgment was required in interactively deriving the “best matching” virtual EBUS bronchoscope views serving as the ground truth. As a result of the inherent degradations in EBUS views (broken edges, speckle, low detail content), it was difficult to locate the single pose for which the virtual EBUS view “best” matches the real EBUS frame. This adds bias to our ground truth. In addition, ambiguity can exist when registering a 2D EBUS image to a 3D CT subvolume because the lymph nodes often have approximately spherical/elliptical 3D shapes. Thus, results can vary over different initial conditions.
Overall, the initial-condition step sizes, which affect the size of the initial simplex, had a minimal impact on performance. The bronchoscope position and needle errors, e_p and e_N, were almost always under 5 mm and 9 mm, respectively, both smaller than the typical clinically considered lymph node (long axis > 10 mm). The angle error e_d also fluctuated under 25°. Larger errors are attributable to the aforementioned ambiguity. Increasing the iteration number T beyond five iterations does improve performance. This gain was less significant, however, from 15 to 25 iterations, especially considering that the mean execution time increased from 3.4 to 8.0 s. Hence, for the previous 28-ROI study and all later human studies, we chose T = 15 as the default iteration number.
Regarding the 28-ROI test, the precise mean registration time was 3.4 s ± 1.6 s, with a range of [1.9 s, 10.0 s]. Notably, the registration time does depend somewhat on node size. The two largest nodes considered required the longest registration times: (1) 10.0 s for a station 7 node with all axes ≥ 2.4 cm (long axis = 3.1 cm); and (2) 7.9 s for a station 4R node with all axes ≥ 1.6 cm (long axis = 2.7 cm). Excluding these two outliers, the mean registration time for the other 26 test nodes was 3.0 s ± 0.5 s (range, [1.9 s, 4.1 s]). As the discussion later highlights, these times are acceptable for real usage. Overall, these results support the method’s robustness.

3.2. Image-Guided EBUS Bronchoscopy System

The registration method has been integrated into a complete image-guided EBUS bronchoscopy system, as described more fully in [26]. Figure 9 illustrates a retrospective example of system guidance for a 68-year-old female lung cancer patient presenting a station 4R lymph node. Both a high-resolution chest CT and a whole-body PET/CT study were available. The chest CT, produced by a Siemens SOMATOM Definition Flash, has specifications Δx = Δy = 0.77 mm, Δz = 0.5 mm, and volume dimensions = 512 × 512 × 570, while the PET/CT study, generated by a Philips TrueFlight integrated scanner, provided a PET scan with Δx = Δy = 4 mm, Δz = 3 mm, and volume dimensions = 144 × 144 × 284. The example depicts both the videobronchoscopic navigation and EBUS localization guidance phases. Figure 9c clearly shows successful registration during the EBUS localization of the node. Notably, Figure 9c also depicts a CT-based simulated EBUS view, generated by the method in [47]. While CT-EBUS registration is performed using the CT-based EBUS view, the supplemental EBUS simulation, which strongly resembles the real EBUS view, adds confidence in the attained position of the real EBUS probe.
Figure 10 gives a second example, from a live prospective guided procedure for a 77-year-old female lung cancer patient presenting a station 4R node. As with the previous example, both a high-resolution chest CT and a whole-body PET/CT study were available. The chest CT, produced by a Siemens SOMATOM Definition Flash, has specifications Δx = Δy = 0.57 mm, Δz = 0.5 mm, and volume dimensions = 512 × 512 × 663, while the PET/CT study, again generated by a Philips TrueFlight integrated scanner, provided a PET scan with Δx = Δy = 4 mm, Δz = 3 mm, and volume dimensions = 144 × 144 × 284. The composite display view during EBUS localization indicates that the EBUS bronchoscope has reached the target node. In addition, the virtual and real EBUS views align well, with the simulated view corroborating the expected appearance of the real view.

4. Discussion

Lung cancer persists as the most common cause of cancer death, with a mortality rate >85% [48]. Staging of the chest lymph nodes via EBUS bronchoscopy is one of the major steps in managing a lung cancer patient. While physicians can readily navigate the device close to the correct vicinity of a lymph node (i.e., get into the right “room,” the airway), their subsequent attempt to correctly localize the node with EBUS and then perform an adequate biopsy (i.e., hit the right wall in the room, the airway wall) is well known to be problematic.
We proposed a near real-time CT-EBUS registration method that facilitates an accurate EBUS-based examination of a lymph node during an EBUS bronchoscopy. In particular, after the physician navigates the bronchoscope near the lymph node’s vicinity, the method enables accurate image-guided EBUS probe placement; i.e., the physician can immediately localize the lymph node without the uncertainty encountered in standard EBUS usage. Laboratory results using data from lung cancer patients show the registration method’s robust performance.
The method has also been integrated into a complete image-guided EBUS bronchoscopy system designed for chest nodal staging, as demonstrated with the system examples. A companion paper by Zang et al. gives more detail on the system’s guidance and display capabilities [26]. As discussed fully in the companion paper, the system’s functionality and feasibility have been validated in both retrospective and prospective human studies at our University Hospital. For the 13-patient prospective study run within our hospital’s standard clinical workflow, 100% of preselected target lymph nodes were successfully localized using the system, a strong indicator of our registration method’s efficacy. The mean time for performing image-guided EBUS for a particular node was 87.4 s, with a total mean procedure time of 6 min 43 s (4.61 nodes per patient). This time includes all time for device navigation, EBUS localization, EBUS segmentation, and final registration, and it excludes biopsy time. In addition, the system appeared to be safe and feasible in the live clinical setting, with no adverse events reported. Complete detail of this study appears in [26].
While these results point to the method’s potential practicality for live clinical practice, a larger multi-center study is needed to more fully ascertain our methodology’s utility for enabling more efficacious EBUS-based nodal staging. One notable limitation is the need to segment ROIs live in the captured real EBUS frame; some ROIs required multiple segmentation attempts. Nevertheless, the prospective study gave an acceptable 18.1 s mean time to segment an ROI in a real EBUS frame (part of the 87.4 s mean procedure time per node). Finally, the system must be integrated into an approved platform meeting federal FDA quality standards before clinical deployment.
As an additional point, for all lymph nodes considered in our studies, the physician specified the nodal station labels while selecting target lymph nodes on CT during procedure planning. Hence, during the later image-guided EBUS bronchoscopy procedure, when a node scanned by the “real” EBUS is registered to the target CT-based node, this not only confirms the node’s 3D physical location, but it also implicitly confirms the real node’s station label. While not a part of this paper, we previously devised a methodology for automatically defining the nodal stations and assigning station labels to lymph nodes identified in CT; e.g., [49,50].
On another note, we also easily adapted the registration method to the newer Olympus BF-UC190F EBUS bronchoscope by a simple adjustment of scope tip specifications [51]. By making a similar adjustment, other related devices, such as a recently proposed thinner EBUS bronchoscope capable of going deeper into the airway tree [52], could also be guided using our methodology. Finally, our methodology could help to drive robotics-based bronchoscopy systems, which currently offer assistance for the bronchoscope only [10].

Author Contributions

Conceptualization, X.Z., W.Z. and W.H.; methodology, X.Z., W.Z. and W.H.; software, X.Z. and W.Z.; validation, X.Z., W.Z., J.T., R.B. and W.H.; formal analysis, X.Z. and W.Z.; investigation, X.Z., W.Z., J.T., R.B. and W.H.; resources, J.T., R.B. and W.H.; data curation, X.Z., W.Z., R.B. and W.H.; writing—original draft preparation, X.Z., W.Z., J.T., R.B. and W.H.; writing—review and editing, X.Z., W.Z. and W.H.; visualization, X.Z., W.Z. and W.H.; supervision, R.B. and W.H.; project administration, R.B. and W.H.; funding acquisition, W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NIH National Cancer Institute grant R01-CA151433.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Penn State University, Hershey, PA (protocol 21405, approval date July 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

William E. Higgins and Penn State have an identified conflict of interest and financial interest related to this research. These interests have been reviewed by the University’s Institutional and Individual Conflict of Interest Committees and are currently being managed by the University and reported to the NIH.

References

1. Wahidi, M.; Herth, F.; Chen, A.; Cheng, G.; Yarmus, L. State of the Art: Interventional Pulmonology. Chest 2020, 157, 724–736.
2. Avasarala, S.K.; Aravena, C.; Almeida, F.A. Convex probe endobronchial ultrasound: Historical, contemporary, and cutting-edge applications. J. Thorac. Dis. 2020, 12, 1085–1099.
3. Sheski, F.; Mathur, P. Endobronchial Ultrasound. Chest 2008, 133, 264–270.
4. Kinsey, C.M.; Arenberg, D.A. Endobronchial Ultrasound-guided Transbronchial Needle Aspiration for Non-Small Cell Lung Cancer Staging. Am. J. Respir. Crit. Care Med. 2014, 189, 640–649.
5. Fernández-Villar, A.; Leiro-Fernández, V.; Botana-Rial, M.; Represas-Represas, C.; Núñez-Delgado, M. The endobronchial ultrasound-guided transbronchial needle biopsy learning curve for mediastinal and hilar lymph node diagnosis. Chest 2012, 141, 278–279.
6. Ernst, A.; Herth, F.J. Endobronchial Ultrasound: An Atlas and Practical Guide; Springer Science & Business Media: New York, NY, USA, 2009.
7. Davoudi, M.; Colt, H.; Osann, K.; Lamb, C.; Mullon, J. Endobronchial ultrasound skills and tasks assessment tool. Am. J. Respir. Crit. Care Med. 2012, 186, 773–779.
8. Folch, E.; Majid, A. Point: Are >50 supervised procedures required to develop competency in performing endobronchial ultrasound-guided transbronchial needle aspiration for mediastinal staging? Yes. Chest 2013, 143, 888–891.
9. Reynisson, P.J.; Leira, H.O.; Hernes, T.N.; Hofstad, E.F.; Scali, M.; Sorger, H.; Amundsen, T.; Lindseth, F.; Langø, T. Navigated bronchoscopy: A technical review. J. Bronchol. Interv. Pulmonol. 2014, 21, 242–264.
10. Criner, G.J.; Eberhardt, R.; Fernandez-Bussy, S.; Gompelmann, D.; Maldonado, F.; Patel, N.; Shah, P.L.; Slebos, D.J.; Valipour, A.; Wahidi, M.M.; et al. Interventional Bronchoscopy: State-of-the-Art Review. Am. J. Respir. Crit. Care Med. 2020, 202, 29–50.
11. Vining, D.J.; Liu, K.; Choplin, R.H.; Haponik, E.F. Virtual bronchoscopy: Relationships of virtual reality endobronchial simulations to actual bronchoscopic findings. Chest 1996, 109, 549–553.
12. Mori, K.; Hasegawa, J.; Toriwaki, J.; Anno, H.; Katada, K. Recognition of bronchus in three dimensional X-ray CT images with application to virtualized bronchoscopy system. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; Volume 3, pp. 528–532.
13. Higgins, W.E.; Ramaswamy, K.; Swift, R.; McLennan, G.; Hoffman, E.A. Virtual bronchoscopy for 3D pulmonary image assessment: State of the art and future needs. Radiographics 1998, 18, 761–778.
14. Merritt, S.; Khare, R.; Bascom, R.; Higgins, W. Interactive CT-Video Registration for Image-Guided Bronchoscopy. IEEE Trans. Med. Imaging 2013, 32, 1376–1396.
15. Wahidi, M.M.; Hulett, C.; Pastis, N.; Shepherd, R.W.; Shofer, S.L.; Mahmood, K.; Lee, H.; Malhotra, R.; Moser, B.; Silvestri, G.A. Learning experience of linear endobronchial ultrasound among pulmonary trainees. Chest 2014, 145, 574–578.
16. Merritt, S.A.; Gibbs, J.D.; Yu, K.C.; Patel, V.; Rai, L.; Cornish, D.C.; Bascom, R.; Higgins, W.E. Image-Guided Bronchoscopy for Peripheral Lung Lesions: A Phantom Study. Chest 2008, 134, 1017–1026.
17. Ost, D.E.; Ernst, A.; Lei, X.; Feller-Kopman, D.; Eapen, G.A.; Kovitz, K.L.; Herth, F.J.; Simoff, M. Diagnostic yield of endobronchial ultrasound-guided transbronchial needle aspiration: Results of the AQuIRE Bronchoscopy Registry. Chest 2011, 140, 1557–1566.
18. Ost, D.E.; Ernst, A.; Lei, X.; Kovitz, K.L.; Benzaquen, S.; Diaz-Mendoza, J.; Greenhill, S.; Toth, J.; Feller-Kopman, D.; Puchalski, J.; et al. Diagnostic Yield and Complications of Bronchoscopy for Peripheral Lung Lesions. Results of the AQuIRE Registry. Am. J. Respir. Crit. Care Med. 2016, 193, 68–77.
19. Sato, M.; Chen, F.; Aoyama, A.; Yamada, T.; Ikeda, M.; Bando, T.; Date, H. Virtual endobronchial ultrasound for transbronchial needle aspiration. J. Thorac. Cardiovasc. Surg. 2013, 146, 1204–1212.
20. Sorger, H.; Hofstad, E.F.; Amundsen, T.; Langø, T.; Leira, H.O. A novel platform for electromagnetic navigated ultrasound bronchoscopy (EBUS). Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1431–1443.
21. Sorger, H.; Hofstad, E.; Amundsen, T.; Langø, T.; Bakeng, J.; Leira, H. A multimodal image guiding system for Navigated Ultrasound Bronchoscopy (EBUS): A human feasibility study. PLoS ONE 2017, 12, e0171841.
22. Tamiya, M.; Okamoto, N.; Sasada, S.; Shiroyama, T.; Morishita, N.; Suzuki, H.; Yoshida, E.; Hirashima, T.; Kawahara, K.; Kawase, I. Diagnostic yield of combined bronchoscopy and endobronchial ultrasonography, under LungPoint guidance for small peripheral pulmonary lesions. Respirology 2013, 18, 834–839.
23. Luo, X.; Mori, K. Beyond Current Guided Bronchoscopy: A Robust and Real-Time Bronchoscopic Ultrasound Navigation System. In MICCAI 2013 Lecture Notes in Computer Science; Mori, K., Sakuma, I., Sato, Y., Barillot, C., Navab, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8149, pp. 388–395.
24. Asano, F. Practical Application of Virtual Bronchoscopic Navigation. In Interventional Bronchoscopy; Mehta, A., Jain, P., Eds.; Humana: Totowa, NJ, USA, 2013; pp. 121–140.
25. Zang, X.; Gibbs, J.; Cheirsilp, R.; Byrnes, P.; Toth, J.; Bascom, R.; Higgins, W. Optimal Route Planning for Image-Guided EBUS Bronchoscopy. Comput. Biol. Med. 2019, 112, 103361.
26. Zang, X.; Cheirsilp, R.; Byrnes, P.D.; Kuhlengel, T.K.; Abendroth, C.; Allen, T.; Mahraj, R.; Toth, J.; Bascom, R.; Higgins, W.E. Image-guided EBUS bronchoscopy system for lung-cancer staging. Inform. Med. Unlocked 2021, 25, 1–13.
27. Huang, X.; Moore, J.; Guiraudon, G.; Jones, D.L.; Bainbridge, D.; Ren, J.; Peters, T.M. Dynamic 2D Ultrasound and 3D CT Image Registration of the Beating Heart. IEEE Trans. Med. Imaging 2009, 28, 1179–1189.
28. Kaar, M.; Hoffmann, R.; Bergmann, H.; Figl, M.; Bloch, C.; Kratochwil, A.; Birkfellner, W.; Hummel, J. Comparison of two navigation system designs for flexible endoscopes using abdominal 3D ultrasound. In Proceedings of the SPIE Medical Imaging 2011, Lake Buena Vista, FL, USA, 13–17 February 2011; Volume 7964, pp. 18–25.
29. Rueckert, D.; Clarkson, M.J.; Hill, D.L.J.; Hawkes, D.J. Non-rigid registration using higher-order mutual information. In Proceedings of the SPIE Medical Imaging 2000, San Diego, CA, USA, 18–29 March 2000; pp. 438–447.
30. Sotiras, A.; Davatzikos, C.; Paragios, N. Deformable Medical Image Registration: A Survey. IEEE Trans. Med. Imaging 2013, 32, 1153–1190.
31. Studholme, C.; Hill, D.L.G.; Hawkes, D.J. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognit. 1999, 32, 71–86.
32. Studholme, C.; Drapaca, C.; Iordanova, B.; Cardenas, V. Deformation-based mapping of volume change from serial brain MRI in the presence of local tissue contrast change. IEEE Trans. Med. Imaging 2006, 25, 626–639.
33. Knops, Z.F.; Maintz, J.B.A.; Viergever, M.A.; Pluim, J.P.W. Registration using segment intensity remapping and mutual information. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Saint-Malo, France, 26–29 September 2004; pp. 805–812.
34. Gibbs, J.; Graham, M.W.; Bascom, R.; Cornish, D.; Khare, R.; Higgins, W. Optimal procedure planning and guidance system for peripheral bronchoscopy. IEEE Trans. Biomed. Eng. 2014, 61, 638–657.
35. Gibbs, J.D.; Graham, M.W.; Higgins, W.E. 3D MDCT-based system for planning peripheral bronchoscopic procedures. Comput. Biol. Med. 2009, 39, 266–279.
36. Graham, M.W.; Gibbs, J.D.; Cornish, D.C.; Higgins, W.E. Robust 3D Airway-Tree Segmentation for Image-Guided Peripheral Bronchoscopy. IEEE Trans. Med. Imaging 2010, 29, 982–997.
37. Lu, K.; Higgins, W.E. Segmentation of the central-chest lymph nodes in 3D MDCT images. Comput. Biol. Med. 2011, 41, 780–789.
38. Bricault, I.; Ferretti, G.; Cinquin, P. Registration of Real and CT-Derived Virtual Bronchoscopic Images to Assist Transbronchial Biopsy. IEEE Trans. Med. Imaging 1998, 17, 703–714.
39. Helferty, J.P.; Sherbondy, A.J.; Kiraly, A.P.; Higgins, W.E. Computer-based system for the virtual-endoscopic guidance of bronchoscopy. Comput. Vis. Image Underst. 2007, 108, 171–187.
40. Crum, W.; Camara, O.; Hill, D. Generalized Overlap Measures for Evaluation and Validation in Medical Image Analysis. IEEE Trans. Med. Imaging 2006, 25, 1451–1461.
41. Rueda, S.; Fathima, S.; Knight, C.L.; Yaqub, M.; Papageorghiou, A.T.; Rahmatullah, B.; Foi, A.; Maggioni, M.; Pepe, A.; Tohka, J.; et al. Evaluation and comparison of current fetal ultrasound image segmentation methods for biometric measurements: A grand challenge. IEEE Trans. Med. Imaging 2014, 33, 797–813.
42. Zang, X.; Bascom, R.; Gilbert, C.; Toth, J.; Higgins, W. Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation. IEEE Trans. Biomed. Eng. 2016, 63, 1426–1439.
43. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313.
44. Higgins, W.E.; Helferty, J.P.; Lu, K.; Merritt, S.A.; Rai, L.; Yu, K.C. 3D CT-video fusion for image-guided bronchoscopy. Comput. Med. Imaging Graph. 2008, 32, 159–173.
45. Schroeder, W.; Martin, K.; Lorensen, B. The Visualization Toolkit, 4th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008.
46. Khare, R.; Bascom, R.; Higgins, W. Hands-Free System for Bronchoscopy Planning and Guidance. IEEE Trans. Biomed. Eng. 2015, 62, 2794–2811.
47. Zhao, W.; Ahmad, D.; Toth, J.; Bascom, R.; Higgins, W.E. Endobronchial Ultrasound Image Simulation for Image-Guided Bronchoscopy. IEEE Trans. Biomed. Eng. 2022.
48. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.; Torre, L.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424.
49. Lu, K.; Taeprasartsit, P.; Bascom, R.; Mahraj, R.; Higgins, W. Automatic definition of the central-chest lymph-node stations. Int. J. Comput. Assist. Radiol. Surg. 2011, 6, 539–555.
50. Kuhlengel, T.K.; Higgins, W.E. Multi-Destination Planning for Comprehensive Lymph Node Staging Bronchoscopy. In SPIE Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling; Fei, B., Linte, C., Eds.; SPIE: Bellingham, WA, USA, 2020; Volume 11315, pp. 113151T-1–113151T-7.
51. Zhao, W. Planning and Guidance Methods for Peripheral Bronchoscopy. Ph.D. Thesis, The Pennsylvania State University, Department of Electrical Engineering, State College, PA, USA, 2022.
52. Fujino, K.; Ujiie, H.; Kinoshita, T.; Lee, C.Y.; Igai, H.; Inage, T.; Motooka, Y.; Gregor, A.; Suzuki, M.; Yasufuku, K. First Evaluation of the Next-Generation Endobronchial Ultrasound System in Preclinical Models. Ann. Thorac. Surg. 2019, 107, 1464–1471.
Figure 1. Device tip and example views for an integrated EBUS bronchoscope. The video camera provides bronchoscopic video, while the EBUS transducer gives 2D EBUS views. The magenta and blue arrows correspond to videobronchoscope camera axis n_C and EBUS probe axis n_US, respectively.
Figure 2. Overview of an image-guided EBUS bronchoscopy procedure. Top part of the figure focuses on the 3D CT-based virtual chest space, whereas the bottom part features the analogous real 3D chest space. (a) Before the live procedure, the physician selects lymph nodes of interest from a patient’s chest CT scan. (b) During the live surgical procedure, the physician then uses this plan to navigate the bronchoscope toward the lymph node. (c) When using an image-guided bronchoscopy system, the physician receives additional graphical feedback on how to navigate the device toward the lymph node. VB views situated along a precomputed guidance path (blue line) lead the physician toward the lymph node. Image registration between the CT-based VB views and live bronchoscopic video (red dotted-line box) facilitates device synchronization during navigation and leads the physician to the proper airway closest to the node (green region). (d) Existing image-guided bronchoscopy systems offer no means for helping to place the EBUS probe for live localization of the extraluminal lymph node. (Bronchoscopy drawing by Terese Winslow, “Bronchoscopy,” NCI Visuals Online, National Cancer Institute.)
Figure 3. Example videobronchoscope and EBUS views constituting the multimodal EBUS bronchoscope. (Top) bronchoscopic video sources for the real EBUS bronchoscope and virtual device. (Bottom) corresponding fan-shaped EBUS views, I_US and I_CT, for the real and virtual devices, respectively. For virtual EBUS view I_CT, the blue lines demarcate the EBUS FOV, the magenta line indicates the video camera’s viewing direction n_C, and the green region denotes a lymph node predefined in the chest CT scan. In this figure, the videobronchoscope and EBUS view pairs are not from the same site; both view pairs, however, are registered.
Figure 4. EBUS probe model: (a) device tip model; (b) 2D EBUS probe view; red lines denote the 60° fan-shaped view. Standard EBUS display settings were used throughout (gain = 19; contrast = 6).
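The 60° fan geometry in Figure 4b is straightforward to reproduce as a binary image mask, which is also the kind of region later used to limit cost computations. Below is a minimal sketch; the image size, apex placement, and maximum depth r_max are illustrative assumptions, not the system's actual display parameters.

```python
# Hedged sketch: boolean mask for a 60-degree fan-shaped EBUS view with the
# transducer apex at the top center of the image. All dimensions are assumed.
import numpy as np

def fan_mask(h=400, w=400, fov_deg=60.0, apex=(0, 200), r_max=380.0):
    ys, xs = np.mgrid[0:h, 0:w]           # pixel coordinates (row, col)
    dy, dx = ys - apex[0], xs - apex[1]
    r = np.hypot(dx, dy)                  # radial distance from the apex
    theta = np.degrees(np.arctan2(dx, dy))  # 0 deg points straight down
    return (r <= r_max) & (np.abs(theta) <= fov_deg / 2)
```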
Figure 5. CT-EBUS registration for a station-10 lymph node (case 21405-139). (a) Initial raw EBUS view I_US (left) and corresponding CT-based virtual EBUS view I_CT^{p_i} (right). (b) Result after segmenting the nodal ROI R_US in EBUS view I_US; the green outline signifies the ROI contour [42]. (c) Registered pair (I_US, I_CT^{p_o}) after final registration; I_US depicts the fused registered CT-based ROI. In all virtual EBUS views, green regions denote nodal ROIs R_CT predefined during planning, whereas red regions represent blood vessels.
Figure 6. Locating airway wall point p_s. The green dot denotes the current surface point p_i, the yellow dot denotes the candidate position p_v, the red dot is the closest k-d tree point to p_v, the cyan dot is the closest airway centerline point p_l to p_v, and the orange hollow dot is the correct surface voxel p_s.
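The construction in Figure 6 rests on fast nearest-neighbor queries against the airway-wall surface and the airway centerline. The sketch below is a plausible reading rather than the authors' implementation: it uses SciPy k-d trees, and the snapping heuristic (preferring wall candidates aligned with the ray from p_l through p_v) as well as all names are our assumptions.

```python
# Illustrative sketch of snapping a candidate position p_v onto the airway
# wall via k-d tree lookups; point sets here are random placeholders.
import numpy as np
from scipy.spatial import cKDTree

surface_pts = np.random.rand(5000, 3) * 100     # airway-wall voxels (mm)
centerline_pts = np.random.rand(400, 3) * 100   # airway centerline samples

surface_tree = cKDTree(surface_pts)
centerline_tree = cKDTree(centerline_pts)

def snap_to_wall(p_v):
    """Return a surface point near candidate p_v.

    The raw closest k-d tree hit can land on the wrong wall, so we also find
    the closest centerline point p_l and prefer surface candidates most
    aligned with the ray from p_l toward p_v (a plausible heuristic).
    """
    _, i_s = surface_tree.query(p_v)        # closest wall voxel to p_v
    _, i_l = centerline_tree.query(p_v)     # closest centerline point
    p_l = centerline_pts[i_l]
    idx = surface_tree.query_ball_point(p_v, r=5.0)  # nearby wall candidates
    if not idx:
        return surface_pts[i_s]
    d = p_v - p_l
    d /= np.linalg.norm(d) + 1e-9
    cand = surface_pts[idx]
    scores = (cand - p_l) @ d               # projection onto the p_l->p_v ray
    return cand[np.argmax(scores)]

print(snap_to_wall(np.array([50.0, 50.0, 50.0])))
```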
Figure 7. Limiting region of calculations for (5) in cost C_N. (a) Trapezoidal EBUS region delineated by the lines encompassing the EBUS ROI in I_US, with (b) showing the segmented ROI R_US. (c,d) Corresponding virtual EBUS view I_CT^{p} and predefined ROI R_CT.
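In our reading of Figure 7, restricting the computation to the trapezoidal band keeps pixels far from the EBUS ROI from dominating the comparison between R_US and R_CT. A minimal sketch of such a region-limited overlap cost follows; the Dice-style form is an assumption for illustration, and the paper's actual term (5) in C_N may differ.

```python
# Hedged sketch of a region-limited overlap cost between the segmented EBUS
# ROI and the CT-predefined ROI. All inputs are boolean HxW masks.
import numpy as np

def limited_overlap_cost(R_US, R_CT, band_mask):
    """Dice-style mismatch restricted to the trapezoidal band; 0 = perfect."""
    a = R_US & band_mask
    b = R_CT & band_mask
    inter = np.count_nonzero(a & b)
    denom = np.count_nonzero(a) + np.count_nonzero(b)
    if denom == 0:
        return 1.0                       # worst cost when nothing to compare
    return 1.0 - 2.0 * inter / denom
```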
Figure 8. CT-EBUS registration examples. (a–c) Station 4L node for case 20349-3-84. (d–f) Station 4L node for case 21405-108. Parts (a,d) show the automatically segmented ROI in EBUS frame I_US using [42]. Parts (b,e) show the registered CT-based predefined ROI superimposed on I_US. Parts (c,f) depict the CT-based virtual EBUS view I_CT^{p_o} after registration. In all views, the green region corresponds to the lymph node, whereas the red regions represent major vessels (PA = pulmonary artery).
Figure 9. Image-guided EBUS bronchoscopy for a station 4R lymph node for patient 21405-116. (a) 3D airway tree rendering and whole-body PET projection image indicating the target lymph node (red). (b) Registered real video and VB view after navigation. (c) Registered real EBUS view I_US, virtual EBUS view I_CT (green region = node), and a CT-based simulated EBUS view, respectively, at the final site.
Figure 10. Image-guided EBUS bronchoscopy for a station 4R lymph node for patient 20349-3-87. (a) 3D airway tree rendering and coronal fused CT/PET section indicating the target lymph node (the color scale bar indicates PET SUV values). (b) Registered real EBUS view I_US, virtual EBUS view I_CT (green region = node), and a CT-based simulated EBUS view, respectively, at the final site.
Table 1. Registration performance over the 28-ROI test set.
| Metric | Mean ± Std. Dev. | [Low, High] |
|---|---|---|
| e_p (mm) | 2.2 ± 2.3 | [0.2, 11.8] |
| e_N (mm) | 4.3 ± 3.0 | [1.1, 11.7] |
| e_d (°) | 11.8 ± 8.8 | [0.4, 41.3] |
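For reference, e_p and e_d admit a direct computation under the natural reading: e_p is the Euclidean distance between the registered and ground-truth probe positions, and e_d is the angle between their viewing directions. The sketch below reflects our interpretation of these metrics; e_N additionally involves the nodal ROI geometry and is omitted here.

```python
# Hedged sketch of the Table 1 pose-error metrics under our interpretation.
import numpy as np

def position_error_mm(p_reg, p_gt):
    """Euclidean distance (mm) between registered and ground-truth positions."""
    return float(np.linalg.norm(np.asarray(p_reg) - np.asarray(p_gt)))

def direction_error_deg(n_reg, n_gt):
    """Angle (degrees) between the two viewing directions."""
    n1 = np.asarray(n_reg) / np.linalg.norm(n_reg)
    n2 = np.asarray(n_gt) / np.linalg.norm(n_gt)
    c = np.clip(np.dot(n1, n2), -1.0, 1.0)  # guard against rounding
    return float(np.degrees(np.arccos(c)))
```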
Table 2. Registration sensitivity to variation in Δt_x.
| Δt_x (mm) | e_p (mm) | e_N (mm) | e_d (°) |
|---|---|---|---|
| −10.0 | 3.7 ± 3.4 [1.1, 11.7] | 7.3 ± 5.1 [2.1, 14.6] | 18.1 ± 11.1 [4.6, 38.3] |
| −7.5 | 4.3 ± 3.3 [1.8, 12.2] | 8.4 ± 5.3 [2.5, 18.7] | 21.2 ± 15.2 [6.0, 55.9] |
| −5.0 | 4.5 ± 3.5 [1.3, 12.2] | 6.4 ± 4.0 [1.5, 11.4] | 18.9 ± 12.7 [5.9, 45.8] |
| −2.5 | 4.6 ± 3.1 [1.0, 10.2] | 7.8 ± 4.0 [2.1, 14.4] | 22.5 ± 14.5 [6.2, 46.5] |
| 0.0 | 3.7 ± 3.2 [1.4, 10.5] | 7.8 ± 3.9 [2.7, 14.4] | 22.8 ± 12.9 [8.8, 46.2] |
| 2.5 | 4.0 ± 2.8 [0.9, 9.4] | 8.5 ± 6.7 [1.4, 20.9] | 25.9 ± 19.4 [5.7, 61.5] |
| 5.0 | 3.8 ± 4.5 [0.6, 14.5] | 6.4 ± 4.8 [2.0, 14.9] | 19.3 ± 12.1 [5.9, 41.8] |
| 7.5 | 2.8 ± 3.8 [0.3, 11.8] | 4.8 ± 4.5 [1.1, 11.7] | 10.8 ± 8.8 [0.4, 22.9] |
| 10.0 | 3.8 ± 3.9 [1.1, 12.0] | 5.7 ± 4.1 [1.6, 11.7] | 16.0 ± 9.8 [5.5, 30.3] |
Table 3. Registration sensitivity to variation in Δa.
| Δa (°) | e_p (mm) | e_N (mm) | e_d (°) |
|---|---|---|---|
| −100.0 | 6.6 ± 4.2 [2.2, 13.4] | 9.0 ± 4.1 [5.4, 14.5] | 25.0 ± 12.6 [7.1, 41.2] |
| −75.0 | 4.8 ± 4.0 [1.2, 11.5] | 8.6 ± 4.8 [3.3, 16.2] | 24.2 ± 11.5 [8.0, 40.4] |
| −50.0 | 5.0 ± 4.6 [1.3, 14.9] | 6.7 ± 3.2 [2.8, 12.3] | 17.3 ± 8.8 [5.3, 28.2] |
| −25.0 | 3.6 ± 3.6 [0.6, 11.2] | 8.2 ± 6.2 [1.3, 20.1] | 23.3 ± 13.6 [5.2, 50.8] |
| 0.0 | 4.4 ± 3.6 [1.0, 11.8] | 8.0 ± 2.4 [5.5, 12.0] | 26.3 ± 10.3 [15.9, 47.9] |
| 25.0 | 3.5 ± 2.9 [1.0, 9.9] | 7.0 ± 3.4 [2.1, 13.4] | 20.1 ± 11.0 [7.8, 36.0] |
| 50.0 | 2.8 ± 3.8 [0.3, 11.8] | 4.8 ± 4.5 [1.1, 11.7] | 10.8 ± 8.8 [0.4, 22.9] |
| 75.0 | 3.1 ± 3.2 [0.9, 10.4] | 6.1 ± 2.8 [3.0, 11.5] | 16.5 ± 9.5 [5.6, 29.4] |
| 100.0 | 3.6 ± 3.2 [1.0, 11.0] | 5.0 ± 4.1 [1.4, 10.5] | 19.0 ± 11.5 [8.1, 43.7] |
Table 4. Registration sensitivity to variation in iteration number T. Time denotes computation time to complete the optimization.
| T | e_p (mm) | e_N (mm) | e_d (°) | Time (s) |
|---|---|---|---|---|
| 5 | 3.0 ± 4.2 [0.9, 13.2] | 5.9 ± 5.2 [2.0, 17.7] | 17.3 ± 9.3 [6.4, 36.0] | 1.3 |
| 10 | 2.8 ± 3.8 [0.9, 11.8] | 5.3 ± 4.2 [1.3, 11.7] | 13.6 ± 7.3 [1.3, 21.9] | 2.6 |
| 15 | 2.8 ± 3.8 [0.3, 11.8] | 4.8 ± 4.5 [1.1, 11.7] | 10.8 ± 8.8 [0.4, 22.9] | 3.4 |
| 20 | 3.2 ± 3.9 [0.3, 11.8] | 4.4 ± 3.9 [1.1, 11.7] | 9.6 ± 7.5 [0.4, 20.5] | 5.0 |
| 25 | 2.7 ± 3.8 [0.3, 11.8] | 4.2 ± 3.7 [1.1, 11.7] | 9.8 ± 7.6 [0.4, 20.5] | 8.0 |
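Tables 2–4 follow a common pattern: perturb one parameter, re-run registration over the 28-ROI test set, and report the mean ± standard deviation and range of each error metric. A sketch of such a harness appears below; register_case() is a hypothetical stand-in for the actual CT-EBUS optimizer, stubbed with random values so the example runs end to end.

```python
# Hedged sketch of a sensitivity-sweep harness in the spirit of Tables 2-4.
# register_case() is a hypothetical stub, NOT the paper's registration code.
import time
import numpy as np

rng = np.random.default_rng(0)

def register_case(case_id, T=15):
    """Stub returning (e_p, e_N, e_d); a real harness would run T iterations."""
    return rng.uniform(0.3, 12.0), rng.uniform(1.0, 12.0), rng.uniform(0.4, 42.0)

def sweep_iterations(case_ids, T_values=(5, 10, 15, 20, 25)):
    for T in T_values:
        start = time.perf_counter()
        e = np.array([register_case(c, T=T) for c in case_ids])  # 28 x 3
        per_case = (time.perf_counter() - start) / max(len(case_ids), 1)
        print(f"T={T:2d}  e_p={e[:, 0].mean():.1f}±{e[:, 0].std():.1f} mm  "
              f"e_N={e[:, 1].mean():.1f}±{e[:, 1].std():.1f} mm  "
              f"e_d={e[:, 2].mean():.1f}±{e[:, 2].std():.1f} deg  "
              f"time={per_case:.1f} s")

sweep_iterations(range(28))
```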