Article

Fluoroscopic 3D Image Generation from Patient-Specific PCA Motion Models Derived from 4D-CBCT Patient Datasets: A Feasibility Study

Salam Dhou 1,*, Mohanad Alkhodari 2, Dan Ionascu 3, Christopher Williams 4 and John H. Lewis 5

1 Department of Computer Science and Engineering, College of Engineering, American University of Sharjah, Sharjah 26666, United Arab Emirates
2 Healthcare Engineering Innovation Center (HEIC), Department of Biomedical Engineering, Khalifa University, Abu Dhabi 127788, United Arab Emirates
3 Department of Radiation Oncology, College of Medicine, University of Cincinnati, Cincinnati, OH 45267, USA
4 Department of Radiation Oncology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA 02115, USA
5 Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(2), 17; https://doi.org/10.3390/jimaging8020017
Submission received: 22 November 2021 / Revised: 6 January 2022 / Accepted: 12 January 2022 / Published: 18 January 2022
(This article belongs to the Section Medical Imaging)

Abstract: A method for generating fluoroscopic (time-varying) volumetric images using patient-specific motion models derived from four-dimensional cone-beam CT (4D-CBCT) images was developed. 4D-CBCT images acquired immediately prior to treatment have the potential to accurately represent patient anatomy and respiration during treatment. Fluoroscopic 3D image estimation is performed in two steps: (1) deriving motion models and (2) optimization. To derive motion models, every phase in a 4D-CBCT set is registered to a reference phase chosen from the same set using deformable image registration (DIR). Principal component analysis (PCA) is used to reduce the dimensionality of the displacement vector fields (DVFs) resulting from DIR into a few vectors representing the organ motion found in the DVFs. The PCA motion models are optimized iteratively by comparing a cone-beam CT (CBCT) projection to a simulated projection computed from both the motion model and a reference 4D-CBCT phase, resulting in a sequence of fluoroscopic 3D images. The method was evaluated on patient datasets by comparing the estimated tumor locations in the generated images to manually defined ground truth positions. Experimental results showed that the average tumor mean absolute error (MAE) along the superior–inferior (SI) direction and the 95th percentile were 2.29 and 5.79 mm for patient 1, and 1.89 and 4.82 mm for patient 2. This study demonstrated the feasibility of deriving 4D-CBCT-based PCA motion models that have the potential to account for 3D non-rigid patient motion and to localize tumors and other anatomical structures on the day of treatment.

1. Introduction

Respiratory-induced organ motion is a major source of uncertainty in stereotactic body radiotherapy (SBRT) of thoracic and upper abdominal cancers [1]. Respiratory motion can cause motion artifacts during image acquisition and limits both radiotherapy planning and delivery. Respiratory-correlated, or four-dimensional, computed tomography (4DCT), as an image-guided radiation therapy (IGRT) tool, provides a way to obtain high-quality CT images in the presence of respiratory motion [2]. Thus, 4DCT has become a standard method in radiotherapy treatment planning to account for organ motion, reduce motion artifacts, and reduce the associated uncertainties.
Image-based motion modeling of patient anatomy during radiotherapy can be useful for accurately localizing tumors and other anatomical structures in the body [3,4,5,6,7]. Many approaches have been proposed for image-based motion modeling. Principal component analysis (PCA)-based motion modeling has proven effective at representing the spatio-temporal relationship of the entire lung motion [8]. Because of their compactness and performance, PCA motion models are used along with projection images captured on the day of treatment to generate time-varying volumetric images, often called fluoroscopic because they are produced in a continuous fashion similar to the images produced by the well-known fluoroscopy procedure [9,10,11,12,13,14,15,16]. PCA motion models are derived by applying PCA to the displacement vector fields (DVFs) that result from applying deformable image registration (DIR) between the 4DCT phases and a reference phase chosen from the same set. PCA distills the large dataset of DVFs into a few eigenvectors and coefficients representing lung motion [8,12,14,16,17]. Because 4DCT images are acquired at the time of treatment planning, days or weeks before the treatment delivery day, PCA motion models derived from them may not accurately represent patient anatomy or motion patterns on the day of treatment delivery [14]. Consequently, they may not account for tumor baseline shifts that are observed frequently in the clinic [18].
Respiratory-correlated, or four-dimensional, cone-beam CT (4D-CBCT) has been introduced and used in radiotherapy for many clinical tasks, such as image guidance and target verification just prior to treatment delivery [19]. 4D-CBCT images are reconstructed by first sorting the raw CBCT projections into several bins according to the respiratory phases they exhibit; 3D images are then reconstructed from each bin. Several methods have been used to estimate the respiratory motion corresponding to the raw CBCT projections, including external equipment such as external markers or abdominal belts, internally implanted radiopaque fiducial markers, and marker-free, purely image-based approaches [20,21,22,23,24,25,26,27]. On-board 4D-CBCT images are produced on the day of treatment delivery while the patient is in the treatment position. Thus, motion models derived from 4D-CBCT images have the potential to account for the inter-fraction anatomical motion variations that can occur between planning and treatment delivery, which may not be handled by 4DCT-based motion models.
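To make the phase-binning step concrete, the sketch below assigns each raw projection to a respiratory phase bin given a 1D breathing signal extracted from the projections; the signal itself, the six-bin choice, and all names are illustrative assumptions rather than the exact clinical implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def phase_bin_projections(breathing_signal, n_bins=6):
    """Assign each projection a respiratory phase bin in 0..n_bins-1.

    Phase is defined as the fractional position of each sample between
    consecutive peak-inhale points of the breathing signal.
    """
    peaks, _ = find_peaks(breathing_signal)  # peak-inhale sample indices
    phases = np.zeros(len(breathing_signal))
    for start, end in zip(peaks[:-1], peaks[1:]):
        # Linear phase ramp from 0 to 1 within each breathing cycle
        phases[start:end] = np.linspace(0.0, 1.0, end - start, endpoint=False)
    # Samples before the first / after the last detected peak fall in bin 0 here
    return np.minimum((phases * n_bins).astype(int), n_bins - 1)

# Projections whose bin equals b are then fed to the reconstruction of phase b.
```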
Previous research has derived PCA motion models from 4D-CBCT images [9,10,28,29]. In [9,10], PCA motion models were derived from 4D-CBCT datasets of simulated patients generated using the digital 4D Extended Cardiac-Torso (XCAT) phantom and from an anthropomorphic physical phantom. In these studies, a total of eight 4D-CBCT datasets were simulated using the XCAT software [30,31,32]. These datasets featured different tumor locations and breathing signals measured from lung cancer patients. In addition, two 4D-CBCT datasets were obtained by imaging an anthropomorphic physical phantom, a modified version of the Alderson Lung/Chest Phantom (Radiology Support Devices, Inc., Long Beach, CA, USA). Foam slices were inserted into the phantom’s rib cage to simulate lung tissue and to hold a tumor model. During image acquisition, the foam slices were pushed in and out along the superior–inferior (SI) direction by a programmable translation stage to simulate diaphragm motion and breathing. PCA motion models were derived from all these phantom datasets and used to generate fluoroscopic 3D images. These studies showed the feasibility and reliability of estimating anatomical motion using 4D-CBCT-based motion models compared to 4DCT-based motion models. However, the experiments were applied only to phantom datasets, and hence the efficacy of this approach on clinical patient datasets had not been verified. In other studies [28,29], PCA motion models were derived from patients’ 4D-CBCT images acquired on different treatment days to quantify the inter-fraction variations of these motion models. However, these 4D-CBCT-based PCA motion models were not used in further clinical tasks such as generating fluoroscopic 3D images or localizing tumors and other anatomical structures at the time of treatment delivery.
In this study, we proposed to: (1) derive PCA motion models from patient 4D-CBCT images captured immediately before treatment delivery; and (2) use these 4D-CBCT-based motion models to estimate fluoroscopic 3D images from CBCT projections captured immediately before treatment delivery. The proposed work is an extension of previous work [9,10] in which the methods were tested on digital phantom datasets and anthropomorphic physical phantom datasets. Here, the methods were applied to patient datasets to demonstrate the feasibility of this approach in clinical settings. The rest of this paper is organized as follows. Section 2 describes the materials and methods used in this work. Section 3 presents the experimental results, which are discussed in Section 4. Section 5 concludes the paper.

2. Materials and Methods

2.1. Datasets

CBCT projections for two patients were acquired using the Elekta Synergy system (Elekta Oncology Systems Ltd., Crawley, UK) and used retrospectively in this study. This retrospective research protocol qualified for exempt approval from the Institutional Review Board (IRB) of the American University of Sharjah, United Arab Emirates, on 23 August 2021 (IRB 18-425). The projections were acquired over a 200° gantry rotation at 5.5 frames per second over approximately 4 min, yielding 1320 projections in the first patient dataset and 1356 in the second. The projections measure 512 × 512 pixels in both datasets. 4D-CBCT images were reconstructed from each projection dataset: the projections were sorted into six phase bins according to their respiratory status estimated using the “Amsterdam Shroud” method [33,34], and the Feldkamp, Davis, and Kress (FDK) reconstruction algorithm [35] implemented in the Reconstruction ToolKit (RTK) [36] was used to reconstruct a 3D image from each projection bin, resulting in 4D-CBCT images of six phases. Each reconstructed image measures 176 × 228 × 256 voxels, with a voxel size of 1.1 mm.
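For readers unfamiliar with the “Amsterdam Shroud” method [33,34], the following minimal numpy sketch shows the core idea: each projection is edge-enhanced along the SI axis and collapsed laterally into a single column, the columns are stacked over time, and the diaphragm's cranio-caudal motion appears as a breathing trace. The array shapes and the crude centroid-based trace extraction are illustrative assumptions, not the published implementation.

```python
import numpy as np

def amsterdam_shroud(projections):
    """Build a shroud image from a stack of projections.

    projections: array of shape (n_proj, rows, cols); rows = SI direction.
    Returns a (rows-1, n_proj) image whose structure follows breathing.
    """
    logp = np.log(np.maximum(projections, 1e-6))  # log-convert, guard log(0)
    edges = np.diff(logp, axis=1)                 # enhance horizontal edges (SI derivative)
    return edges.sum(axis=2).T                    # collapse laterally, stack columns

def breathing_trace(shroud):
    """Crude trace: SI centroid of the shroud magnitude per projection."""
    w = np.abs(shroud)
    rows = np.arange(w.shape[0])[:, None]
    return (w * rows).sum(axis=0) / w.sum(axis=0)
```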
The ground truth tumor location for the patient datasets was found by manually identifying the diaphragm location in each projection using a simple graphical user interface programmed in MATLAB (The MathWorks, Inc., Natick, MA, USA). The diaphragm apex coordinates identified in each projection were used in a linear regression model to estimate the tumor coordinates in that projection. To compare the 2D ground truth coordinates with the 3D tumor coordinates in the estimated fluoroscopic 3D images, the latter were projected onto the 2D flat-panel detector. The distance between the projected and ground truth coordinates was calculated in the detector plane and then scaled down to an approximate error inside the patient (at isocenter). A similar procedure was followed in previous publications [10,16,37].
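This comparison relies on standard cone-beam geometry: a 3D point is projected through the source onto the flat panel, and panel-plane distances are scaled by the source-to-axis over source-to-detector ratio to approximate errors at isocenter. A minimal sketch follows; the geometry values (SAD = 1000 mm, SDD = 1536 mm) are assumptions typical of Elekta systems, not stated in the text.

```python
import numpy as np

SAD, SDD = 1000.0, 1536.0  # source-axis / source-detector distances (mm), assumed

def project_to_panel(p, gantry_deg):
    """Project a 3D point p (mm, isocenter origin) onto the flat panel (u, v)."""
    theta = np.deg2rad(gantry_deg)
    # Rotate the point into a frame where the source sits on the -y axis
    x = p[0] * np.cos(theta) + p[1] * np.sin(theta)
    y = -p[0] * np.sin(theta) + p[1] * np.cos(theta)
    mag = SDD / (SAD + y)                   # perspective magnification
    return np.array([x * mag, p[2] * mag])  # (lateral, SI) panel coordinates

def error_at_isocenter(panel_dist_mm):
    """Scale a panel-plane distance down to an approximate in-patient error."""
    return panel_dist_mm * SAD / SDD
```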
Figure 1 shows axial, coronal, and sagittal slices of peak-exhale 4D-CBCT from each patient. The peak-exhale phase was selected as the reference phase to which all other 4D-CBCT phases are deformed in the DIR module.

2.2. Fluoroscopic 3D Image Estimation

Fluoroscopic 3D image estimation using PCA motion models is a well-established approach used in several previous studies [9,10,11,12,13,14,15,16]. In this work, the same approach is used, but the input to the algorithm is 4D-CBCT images of real patients. The algorithm proceeds in two steps:

2.2.1. 4D-CBCT-Based Motion Model Estimation

To estimate the motion models, DIR is applied between each 4D-CBCT phase and a reference phase chosen from the same set; in this work, the peak-exhale phase was chosen as the reference. The Demons DIR algorithm implemented on a graphics processing unit (GPU) was used in this study [38]. This algorithm was chosen because it is a non-rigid registration algorithm that has been used extensively in the literature to register 3D medical images, and it has been used in previous fluoroscopic 3D image generation studies similar to this work, which showed that the error it introduces is negligible [16].
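The study used a GPU implementation of Demons registration [38]; as a hedged illustration of this step, a CPU sketch with SimpleITK's demons filter is shown below, mapping one phase to the reference and returning the DVF. The parameter values are illustrative only.

```python
import SimpleITK as sitk

def demons_dvf(reference, moving, iterations=100, sigma=1.5):
    """Register `moving` to `reference`; return the displacement field (DVF)."""
    demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(sigma)  # Gaussian smoothing of the field
    field = demons.Execute(sitk.Cast(reference, sitk.sitkFloat32),
                           sitk.Cast(moving, sitk.sitkFloat32))
    return sitk.GetArrayFromImage(field)  # (z, y, x, 3) voxel displacements

# One DVF per non-reference phase; the stacked DVFs feed the PCA step below.
```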
Applying DIR to these pairs of phases results in a set of DVFs describing the voxel-wise displacements between each phase and the reference. As the resulting DVFs constitute a very large dataset, a dimensionality reduction approach is used to transform the data from the original high-dimensional space into a low-dimensional one while retaining the properties of the original data. PCA was employed as a linear dimensionality reduction method: applying PCA to the DVFs yields a set of eigenvectors and eigenvalues representing the motion of the patient [8]. The set of DVFs can be represented as a weighted sum of these eigenvectors as follows:
$D = \bar{D} + \sum_{i=1}^{N} v_i \, u_i(t)$,    (1)

where $D$ is the DVF dataset, $\bar{D}$ is the mean DVF, $u_i(t)$ represents the PCA eigenvalues defined in time, $v_i$ represents the eigenvectors defined in space, and $N$ is the number of eigenmodes considered. The eigenvectors can be sorted according to their corresponding eigenvalues such that the eigenvectors corresponding to the largest eigenvalues represent a large fraction of the variance of the original data. Previous studies have shown that the first few (2–3) eigenvectors, corresponding to the largest eigenvalues, are sufficient to represent the motion patterns existing in the original dataset [10,12,29]. In this work, the first three eigenvectors were considered as the motion model.
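A minimal numpy sketch of Equation (1), assuming the non-reference DVFs have been flattened into row vectors (all names are illustrative):

```python
import numpy as np

def build_pca_motion_model(dvfs, n_modes=3):
    """dvfs: (n_phases, 3*n_voxels) flattened DVFs. Returns the model parts."""
    mean_dvf = dvfs.mean(axis=0)
    centered = dvfs - mean_dvf
    # SVD of the centered data; rows of vt are the spatial eigenvectors v_i
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigenvalues = s**2 / (dvfs.shape[0] - 1)  # variance per eigenmode
    return mean_dvf, vt[:n_modes], eigenvalues

def reconstruct_dvf(mean_dvf, eigenvectors, coeffs):
    """Equation (1): D = mean + sum_i u_i * v_i for given coefficients u_i."""
    return mean_dvf + coeffs @ eigenvectors
```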

2.2.2. Optimization

An optimization approach is used to estimate the fluoroscopic 3D images. It takes three inputs: (1) the motion model (a set of three eigenvectors with corresponding eigenvalues); (2) the 4D-CBCT reference phase; and (3) the CBCT projections captured immediately before treatment, while the patient is in the treatment position. The approach iteratively updates the motion model coefficients by minimizing a cost function: the squared L2-norm of the difference between a CBCT projection captured at treatment time and a 2D projection computed using both the motion model and the 4D-CBCT reference phase. The cost function is:
$\min_{u} J(u) = \left\| P \cdot f\left( D(u), f_0 \right) - \lambda \cdot x \right\|_2^2$,    (2)

where $f_0$ is the 4D-CBCT reference phase, $D(u)$ represents the parameterized DVFs, $f$ is the estimated fluoroscopic 3D image, $P$ is the projection matrix used to compute the projection from the fluoroscopic 3D image $f$, $x$ is the CBCT projection captured on the treatment delivery day, and $\lambda$ is the relative pixel intensity scaling between the computed 2D projection and the CBCT projection $x$. The cost function is minimized using a version of gradient descent, as explained in the appendix of [16]. Figure 2 presents the flowchart of the fluoroscopic 3D image estimation algorithm. As the figure shows, the difference between this study and previous fluoroscopic 3D image estimation studies is that the input here is 4D-CBCT images acquired at treatment delivery time, yielding 4D-CBCT-based PCA motion models that can be used to estimate fluoroscopic images for patients on the treatment delivery day.
This approach assumes a linear relationship between the intensities of the CBCT projections and the projections computed using the motion model. However, factors such as noise and the poor quality of the 4D-CBCT images used to compute the 2D projections may disturb this assumption. The limited number of projections available for reconstructing each 4D-CBCT bin can cause artifacts that appear as streaks in the resulting 4D-CBCT images. These artifacts may cause non-anatomical differences between the real CBCT projections and the corresponding 2D projections computed using the 4D-CBCT-based motion model. Restricting the comparison to a region of interest (ROI) surrounding the tumor and other moving anatomical structures, such as the diaphragm apex, has the potential to reduce the effect of these differences on the optimization. In this work, an ROI was chosen in both the CBCT projection and the corresponding computed 2D projection to reduce the effect of the noise and artifacts present in the full images and to improve the accuracy of the optimization. The ROI was chosen to surround the tumor and the diaphragm apex, which are the most visible structures in the image exhibiting breathing motion.
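A schematic sketch of the optimization in Equation (2), including the ROI restriction: the `warp()` and `project()` helpers (volume warping by a DVF, and forward projection through the cone-beam geometry) are hypothetical placeholders, and scipy's quasi-Newton minimizer stands in for the gradient descent of [16].

```python
import numpy as np
from scipy.optimize import minimize

def make_cost(x_proj, f0, mean_dvf, eigvecs, roi, warp, project):
    """Cost of Equation (2), restricted to an ROI around tumor and diaphragm.

    x_proj : measured CBCT projection; f0 : reference 4D-CBCT phase;
    roi    : boolean mask on the detector; warp/project : assumed helpers.
    """
    def cost(params):
        u, lam = params[:-1], params[-1]   # PCA coefficients + intensity scale
        dvf = mean_dvf + u @ eigvecs       # Equation (1)
        f = warp(f0, dvf)                  # estimated fluoroscopic 3D image
        computed = project(f)              # P . f
        residual = (computed - lam * x_proj)[roi]
        return np.sum(residual**2)         # squared L2-norm over the ROI
    return cost

# Illustrative usage: 3 PCA coefficients starting at 0, intensity scale at 1.
# res = minimize(make_cost(...), x0=np.r_[np.zeros(3), 1.0], method="L-BFGS-B")
# The optimized coefficients give the DVF, hence the fluoroscopic 3D image.
```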

2.3. Evaluation

The method was evaluated using the tumor localization error, calculated as the mean absolute error (MAE) of the tumor centroid location in the estimated fluoroscopic 3D images: the mean absolute difference between the tumor centroid location in the estimated images and the ground truth locations. The process of estimating the ground truth tumor coordinates is described in Section 2.1. The 3D tumor coordinates in the estimated fluoroscopic 3D images are projected onto the 2D flat-panel detector so that they can be compared with the 2D ground truth tumor coordinates. The error is measured along the superior–inferior (SI) direction in patient coordinates.
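The error metric then reduces to simple statistics over the per-projection SI differences (a sketch; array names are assumed):

```python
import numpy as np

def localization_error(estimated_si, ground_truth_si):
    """MAE and 95th percentile of the absolute SI error, in mm at isocenter."""
    abs_err = np.abs(np.asarray(estimated_si) - np.asarray(ground_truth_si))
    return abs_err.mean(), np.percentile(abs_err, 95)
```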

3. Results

In this section, the estimated PCA motion models and fluoroscopic 3D images are evaluated. First, to evaluate the PCA motion models, an explained variance analysis was carried out. Explained variance measures the proportion of the variation in a dataset that is accounted for by a mathematical model. Here, the analysis was used to explore the variance explained by each PCA eigenvector and to determine the number of PCA eigenvectors that can be included in the motion model without losing important information. Figure 3a shows the eigenvalue spectrum of the PCA motion models for patient #1 and patient #2. The eigenvalues decrease with higher eigenmodes and drop drastically after the third eigenmode in both patients. Figure 3b shows the percentage of the variance explained by each eigenmode in each patient, both individually and cumulatively. The first three eigenvectors explain most of the variance: 97.1% in patient #1 and 97.4% in patient #2.
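The explained variance ratios in Figure 3b follow directly from the PCA eigenvalues (a sketch continuing the earlier PCA example):

```python
import numpy as np

def explained_variance(eigenvalues):
    """Individual and cumulative fraction of variance per eigenmode."""
    ratios = eigenvalues / eigenvalues.sum()
    return ratios, np.cumsum(ratios)

# e.g., cumulative[2] is about 0.97 for both patients, supporting a 3-mode model.
```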
The PCA motion models derived from the 4D-CBCT images were used to estimate the fluoroscopic 3D images. First, a correlation analysis was conducted between the intensities of a sample CBCT projection and the corresponding 2D projection computed using the estimated motion model and the 4D-CBCT reference phase. This analysis is important to verify the linear relationship between the intensities of the two projections assumed in the cost function described by Equation (2). Figure 4 shows a CBCT projection, the corresponding computed 2D projection, and a scatter plot of the correlation between the intensities of the two images. A linear correlation was found, with a correlation coefficient of 96%.
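This linearity check amounts to a Pearson correlation between the flattened pixel intensities (a sketch; any ROI selection or masking is omitted):

```python
import numpy as np

def intensity_correlation(cbct_proj, computed_proj):
    """Pearson correlation coefficient between two projections' intensities."""
    return np.corrcoef(cbct_proj.ravel(), computed_proj.ravel())[0, 1]
```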
The estimated fluoroscopic 3D images for each of the datasets were evaluated. Figure 5 shows axial, coronal, and sagittal slices of a sample estimated fluoroscopic 3D image from patient #2. Figure 6 shows coronal slices of two estimated fluoroscopic 3D images from patient #2 at different breathing phases. As Figure 6 shows, the estimated fluoroscopic images captured the anatomical motion represented in the CBCT projections used in the optimization module.
To evaluate the accuracy of the estimated fluoroscopic images, the SI tumor position in the estimated images was measured and compared to its ground truth location. Figure 7 shows the SI tumor position in all the estimated fluoroscopic 3D images compared to the ground truth tumor positions in millimeters for patient #1 (a) and patient #2 (b). The tumor MAE along the SI direction was 2.29 mm with a 95th percentile of 5.79 mm for patient #1, and 1.89 mm with a 95th percentile of 4.82 mm for patient #2.

4. Discussion

In this work, the feasibility of building patient-specific motion models from 4D-CBCT images and using them, along with a set of CBCT projections captured at the time of treatment delivery, to generate fluoroscopic 3D images was studied. The 4D-CBCT-based motion models have the potential to overcome an important shortcoming of the 4DCT-based motion models in that they can reflect the patient anatomy and motion at the time of treatment delivery. These fluoroscopic 3D images can be used in several clinical applications such as delivered dose verification [39,40].
The methodology used in this work involved two main steps: deriving the PCA motion model and applying the optimization approach to estimate the fluoroscopic 3D images. The method was evaluated on two patient datasets. The 4D-CBCT-based PCA motion models used in this work were analyzed in Section 3. As shown in Figure 3, the first few eigenmodes of these PCA motion models explain most of the variance in the DVF dataset; the remaining eigenmodes were therefore safely dropped, as they carry little additional information. These results support the findings of other studies showing that a small number of eigenmodes (2–3) is sufficient to represent the organ motion captured by the DVFs [10,12,16,29]. The iterative optimization approach converged after several iterations, producing optimized fluoroscopic 3D images representing the anatomical motion of the patient. The algorithm was implemented to run efficiently on a GPU (NVIDIA GeForce GTX 1070, 8 GB VRAM). The DIR algorithm takes an average of 17.25 s to register a 4D-CBCT phase to the reference phase. The optimization step takes an average of 1.25 s to estimate a fluoroscopic 3D image, including the time required to estimate the tumor location.
Compared to other studies using 4D-CBCT-based motion models derived from phantom datasets [10], the error in this study is slightly higher. Given the complexity of the breathing patterns of real patients and the poor quality of the 4D-CBCT images, a higher tumor error is expected. DIR accuracy is a key determinant of motion model accuracy, since DIR yields the DVFs upon which the motion model is based. The DIR algorithm used in this study is the Demons algorithm [38]. In a previous fluoroscopic 3D image generation study using 4DCT images, the authors investigated the effect of DIR performance on the overall accuracy of the method [16] and observed that the error caused by DIR is negligible. In that study, the error was mainly attributed to the optimization step of the method, specifically the mapping between the CBCT projection and the computed one.
One of the major challenges in constructing motion models from 4D-CBCT images is the poor quality of the input 4D-CBCT images. The limited number of projections available for reconstructing each 4D-CBCT bin is a key reason for this relatively poor quality. The effect of 4D-CBCT image quality on fluoroscopic 3D image estimation was investigated in [10]. The authors conducted an experiment using two sets of 4D-CBCT images simulated using a digital XCAT phantom: the first set was reconstructed from a well-sampled set of projections, whereas the second was reconstructed from a severely under-sampled set. The study showed that the tumor MAE along the SI direction increased by 214% (from 1.28 to 4.02 mm), with the 95th percentile increasing by 250% (from 2.00 to 7.00 mm), when the under-sampled 4D-CBCT images were used as the input to the method. The normalized root mean square error (NRMSE), calculated from the voxel-wise intensity difference between the estimated images and the ground truth images, also increased by 150% (from 0.10 to 0.25) with the under-sampled input. The under-sampling issue in 4D-CBCT has been studied extensively in the literature, and several solutions for improving 4D-CBCT image quality have been suggested, such as compressed sensing [41,42,43,44,45], motion-compensated reconstruction [46,47,48,49,50,51,52,53,54], and interpolation of “in-between” projections to increase the number of projections in each respiratory phase bin [55,56,57,58]. Recently, deep learning approaches have also been proposed [59,60,61]. Motion modeling and fluoroscopic image estimation from enhanced 4D-CBCT images is worth investigating in future research.

5. Conclusions

This study investigated the feasibility of deriving motion models from patient 4D-CBCT images and using them to generate fluoroscopic 3D images of the patient on the treatment delivery day while the patient is in the treatment position. The algorithm consists of two steps. In the first step, PCA motion models are derived by performing PCA on the DVFs resulting from applying DIR to the input 4D-CBCT images. In the second step, an iterative optimization approach is applied to the motion model to generate a sequence of 3D images using CBCT projections. The estimated fluoroscopic 3D images were assessed by localizing the tumor in the generated images and comparing these locations to the ground truth tumor locations in the CBCT projections. The tumor MAE along the SI direction was 2.29 mm with a 95th percentile of 5.79 mm for patient #1, and 1.89 mm with a 95th percentile of 4.82 mm for patient #2. Clinical applications of this work include image guidance, patient positioning, and delivered dose estimation and/or verification.

Author Contributions

Conceptualization, C.W. and J.H.L.; Data curation, S.D., M.A. and D.I.; Formal analysis, S.D., D.I., C.W. and J.H.L.; Funding acquisition, S.D.; Investigation, S.D.; Methodology, S.D., C.W. and J.H.L.; Project administration, S.D.; Resources, S.D. and D.I.; Software, M.A.; Supervision, J.H.L.; Validation, S.D., M.A., D.I., C.W. and J.H.L.; Visualization, M.A.; Writing—original draft, S.D.; Writing—review & editing, M.A., D.I., C.W. and J.H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the American University of Sharjah, United Arab Emirates, grant number EFRG18-BBR-CEN-04. The content is solely the responsibility of the authors and does not necessarily represent the official views of the American University of Sharjah.

Institutional Review Board Statement

This retrospective research protocol qualified for exempt approval from the Institutional Review Board (IRB) of the American University of Sharjah, United Arab Emirates, on 23 August 2021 (IRB 18-425).

Informed Consent Statement

Acquisition of informed consent was waived by the IRB of the institute because the patient data were retrospectively collected, anonymized and de-identified prior to use in this study.

Data Availability Statement

The data presented in this study are not publicly available.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Keall, P.J.; Mageras, G.S.; Balter, J.M.; Emery, R.S.; Forster, K.M.; Jiang, S.B.; Kapatoes, J.M.; Low, D.A.; Murphy, M.J.; Murray, B.R.; et al. The management of respiratory motion in radiation oncology report of AAPM Task Group 76. Med. Phys. 2006, 33, 3874–3900.
  2. Vedam, S.S.; Keall, P.J.; Kini, V.R.; Mostafavi, H.; Shukla, H.P.; Mohan, R. Acquiring a four-dimensional computed tomography dataset using an external respiratory signal. Phys. Med. Biol. 2003, 48, 45–62.
  3. Guo, M.; Chee, G.; O’Connell, D.; Dhou, S.; Fu, J.; Singhrao, K.; Ionascu, D.; Ruan, D.; Lee, P.; Low, D.A.; et al. Reconstruction of a high-quality volumetric image and a respiratory motion model from patient CBCT projections. Med. Phys. 2019, 46, 3627–3639.
  4. Fassi, A.; Tagliabue, E.; Tirindelli, M.; Sarrut, D.; Riboldi, M.; Baroni, G. PO-0884: Respiratory motion models from Cone-Beam CT for lung tumour tracking. Radiother. Oncol. 2016, S424.
  5. Fassi, A.; Bombardieri, A.; Ivaldi, G.B.; Liotta, M.; Tabarelli de Fatis, P.; Meaglia, I.; Porcu, P.; Riboldi, M.; Baroni, G. EP-1629: Lung tumor tracking using CBCT-based respiratory motion models driven by external surrogates. Radiother. Oncol. 2017.
  6. Zhang, Q.; Hu, Y.C.; Liu, F.; Goodman, K.; Rosenzweig, K.E.; Mageras, G.S. Correction of motion artifacts in cone-beam CT using a patient-specific respiratory motion model. Med. Phys. 2010, 37, 2901–2909.
  7. Fassi, A.; Schaerer, J.; Fernandes, M.; Riboldi, M.; Sarrut, D.; Baroni, G. Tumor tracking method based on a deformable 4D CT breathing motion model driven by an external surface surrogate. Int. J. Radiat. Oncol. Biol. Phys. 2014, 88, 182–188.
  8. Li, R.; Lewis, J.H.; Jia, X.; Zhao, T.; Liu, W.; Wuenschel, S.; Lamb, J.; Yang, D.; Low, D.A.; Jiang, S.B. On a PCA-based lung motion model. Phys. Med. Biol. 2011, 56, 6009–6030.
  9. Dhou, S.; Hurwitz, M.; Mishra, P.; Berbeco, R.; Lewis, J. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy. In Proceedings of Medical Imaging 2015: Image-Guided Procedures, Robotic Interventions, and Modeling, Orlando, FL, USA, 21–26 February 2015.
  10. Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; et al. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models. Phys. Med. Biol. 2015, 60, 3807–3824.
  11. Hurwitz, M.; Williams, C.L.; Mishra, P.; Rottmann, J.; Dhou, S.; Wagar, M.; Mannarino, E.G.; Mak, R.H.; Lewis, J.H. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal. Phys. Med. Biol. 2015, 60, 521–535.
  12. Li, R.; Jia, X.; Lewis, J.H.; Gu, X.; Folkerts, M.; Men, C.; Jiang, S.B. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Med. Phys. 2010, 37, 2822–2826.
  13. Lewis, J.; Li, R.; St. James, S.; Yue, Y.; Berbeco, R.; Mishra, P. Fluoroscopic 3D Images Based on 2D Treatment Images Using a Realistic Modified XCAT Phantom. Int. J. Radiat. Oncol. 2012, 84, S737.
  14. Mishra, P.; Li, R.; James, S.S.; Mak, R.H.; Williams, C.L.; Yue, Y.; Berbeco, R.I.; Lewis, J.H. Evaluation of 3D fluoroscopic image generation from a single planar treatment image on patient data with a modified XCAT phantom. Phys. Med. Biol. 2013, 58, 841–858.
  15. Zhang, Y.; Yin, F.F.; Segars, W.P.; Ren, L. A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections. Med. Phys. 2013, 40, 121701.
  16. Li, R.; Lewis, J.H.; Jia, X.; Gu, X.; Folkerts, M.; Men, C.; Song, W.Y.; Jiang, S.B. 3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy. Med. Phys. 2011, 38, 2783–2794.
  17. Zhang, Q.; Pevsner, A.; Hertanto, A.; Hu, Y.-C.; Rosenzweig, K.E.; Ling, C.C.; Mageras, G.S. A patient-specific respiratory model of anatomical motion for radiation treatment planning. Med. Phys. 2007, 34, 4772–4781.
  18. Berbeco, R.I.; Mostafavi, H.; Sharp, G.C.; Jiang, S.B. Towards fluoroscopic respiratory gating for lung tumours without radiopaque markers. Phys. Med. Biol. 2005, 50, 4481–4490.
  19. Sonke, J.J.; Zijp, L.; Remeijer, P.; van Herk, M. Respiratory correlated cone beam CT. Med. Phys. 2005, 32, 1176–1186.
  20. Dhou, S.; Motai, Y.; Hugo, G.D. Local intensity feature tracking and motion modeling for respiratory signal extraction in cone beam CT projections. IEEE Trans. Biomed. Eng. 2013, 60, 332–342.
  21. Vergalasova, I.; Cai, J.; Yin, F.F. A novel technique for markerless, self-sorted 4D-CBCT: Feasibility study. Med. Phys. 2012, 39, 1442–1451.
  22. Park, S.; Kim, S.; Yi, B.; Hugo, G.; Gach, H.M.; Motai, Y. A Novel Method of Cone Beam CT Projection Binning Based on Image Registration. IEEE Trans. Med. Imaging 2017, 36, 1733–1745.
  23. Chao, M.; Wei, J.; Li, T.; Yuan, Y.; Rosenzweig, K.E.; Lo, Y.C. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques. Phys. Med. Biol. 2016, 61, 3109.
  24. Yan, H.; Wang, X.; Yin, W.; Pan, T.; Ahmad, M.; Mou, X.; Cervino, L.; Jia, X.; Jiang, S.B. Extracting respiratory signals from thoracic cone beam CT projections. Phys. Med. Biol. 2013, 58, 1447–1464.
  25. Kavanagh, A.; Evans, P.M.; Hansen, V.N.; Webb, S. Obtaining breathing patterns from any sequential thoracic x-ray image set. Phys. Med. Biol. 2009, 54, 4879–4888.
  26. Sabah, S.; Dhou, S. Image-based extraction of breathing signal from cone-beam CT projections. In Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling; International Society for Optics and Photonics: Houston, TX, USA, 2020; Volume 11315.
  27. Dhou, S.; Docef, A.; Hugo, G. Image-based respiratory signal extraction using dimensionality reduction for phase sorting in cone-beam CT projections. In Proceedings of the 2017 International Conference on Computational Biology and Bioinformatics, Newark, NJ, USA, 18–20 October 2017; pp. 79–84.
  28. Dhou, S.; Ionascu, D.; Williams, C.; Lewis, J. Inter-fraction variations in motion modeling using patient 4D-cone beam CT images. In Proceedings of the 2018 Advances in Science and Engineering Technology International Conferences, Sharjah, United Arab Emirates, 6 February–5 April 2018; pp. 1–4.
  29. Dhou, S.; Lewis, J.; Cai, W.; Ionascu, D.; Williams, C. Quantifying day-to-day variations in 4DCBCT-based PCA motion models. Biomed. Phys. Eng. Express 2020, 6, 035020.
  30. Segars, W.P.; Mahesh, M.; Beck, T.J.; Frey, E.C.; Tsui, B.M.W. Realistic CT simulation using the 4D XCAT phantom. Med. Phys. 2008, 35, 3800–3808.
  31. Segars, W.P.; Sturgeon, G.; Mendonca, S.; Grimes, J.; Tsui, B.M. 4D XCAT phantom for multimodality imaging research. Med. Phys. 2010, 37, 4902–4915.
  32. Myronakis, M.E.; Cai, W.; Dhou, S.; Cifter, F.; Hurwitz, M.; Segars, P.W.; Berbeco, R.I.; Lewis, J.H. A graphical user interface for XCAT phantom configuration, generation and processing. Biomed. Phys. Eng. Express 2017, 3, 017003.
  33. Zijp, L.; Sonke, J.-J.; van Herk, M. Extraction of the respiratory signal from sequential thorax cone-beam X-ray images. In Proceedings of the International Conference on the Use of Computers in Radiation Therapy, Seoul, Korea, 10–13 May 2004; pp. 507–509.
  34. Van Herk, M.; Zijp, L.; Remeijer, P.; Wolthaus, J.; Sonke, J.J. On-Line 4D Cone Beam CT for Daily Correction of Lung Tumour Position during Hypofractionated Radiotherapy; ICCR: Toronto, ON, Canada, 2007.
  35. Feldkamp, L.A.; Davis, L.C.; Kress, J.W. Practical cone-beam algorithm. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 1984, 1, 612–619.
  36. Rit, S.; Vila Oliva, M.; Brousmiche, S.; Labarbe, R.; Sarrut, D.; Sharp, G.C. The Reconstruction Toolkit (RTK), an open-source cone-beam CT reconstruction toolkit based on the Insight Toolkit (ITK). In Proceedings of the International Conference on the Use of Computers in Radiation Therapy (ICCR’13), Melbourne, Australia, 6–9 May 2013.
  37. Lewis, J.H.; Li, R.; Watkins, W.T.; Lawson, J.D.; Segars, W.P.; Cervino, L.I.; Song, W.Y.; Jiang, S.B. Markerless lung tumor tracking and trajectory reconstruction using rotational cone-beam projections: A feasibility study. Phys. Med. Biol. 2010, 55, 2505–2522.
  38. Gu, X.; Pan, H.; Liang, Y.; Castillo, R.; Yang, D.; Choi, D.; Castillo, E.; Majumdar, A.; Guerrero, T.; Jiang, S.B. Implementation and evaluation of various demons deformable image registration algorithms on a GPU. Phys. Med. Biol. 2010, 55, 207–219.
  39. Cai, W.; Hurwitz, M.H.; Williams, C.L.; Dhou, S.; Berbeco, R.I.; Seco, J.; Mishra, P.; Lewis, J.H. 3D delivered dose assessment using a 4DCT-based motion model. Med. Phys. 2015, 42, 2897–2907.
  40. Cai, W.; Dhou, S.; Cifter, F.; Myronakis, M.; Hurwitz, M.H.; Williams, C.L.; Berbeco, R.I.; Seco, J.; Lewis, J.H. 4D cone beam CT-based dose assessment for SBRT lung cancer treatment. Phys. Med. Biol. 2015, 61, 554.
  41. Chen, G.H.; Tang, J.; Leng, S. Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. Med. Phys. 2008, 35, 660–663.
  42. Sidky, E.Y.; Pan, X. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys. Med. Biol. 2008, 53, 4777–4807.
  43. Sidky, E.Y.; Duchin, Y.; Pan, X.; Ullberg, C. A constrained, total-variation minimization algorithm for low-intensity x-ray CT. Med. Phys. 2011, 38 (Suppl. 1), S117.
  44. Choi, K.; Xing, L.; Koong, A.; Li, R. First study of on-treatment volumetric imaging during respiratory gated VMAT. Med. Phys. 2013, 40, 40701.
  45. Choi, K.; Wang, J.; Zhu, L.; Suh, T.S.; Boyd, S.; Xing, L. Compressed sensing based cone-beam computed tomography reconstruction with a first-order method. Med. Phys. 2010, 37, 5113–5125.
  46. Li, T.; Koong, A.; Xing, L. Enhanced 4D cone-beam CT with inter-phase motion model. Med. Phys. 2007, 34, 3688–3695.
  47. Rit, S.; Wolthaus, J.; van Herk, M.; Sonke, J.J. On-the-fly motion-compensated cone-beam CT using an a priori motion model. Med. Image Comput. Comput. Assist. Interv. 2008, 11, 729–736.
  48. Rit, S.; Sarrut, D.; Desbat, L. Comparison of analytic and algebraic methods for motion-compensated cone-beam CT reconstruction of the thorax. IEEE Trans. Med. Imaging 2009, 28, 1513–1525.
  49. Zhang, H.; Ma, J.; Bian, Z.; Zeng, D.; Feng, Q.; Chen, W. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization. Phys. Med. Biol. 2017, 62, 3313.
  50. Li, T.; Schreibmann, E.; Yang, Y.; Xing, L. Motion correction for improved target localization with on-board cone-beam computed tomography. Phys. Med. Biol. 2006, 51, 253–267.
  51. Biguri, A.; Dosanjh, M.; Hancock, S.; Soleimani, M. A general method for motion compensation in x-ray computed tomography. Phys. Med. Biol. 2017, 62, 6532.
  52. Huang, X.; Zhang, Y.; Chen, L.; Wang, J. U-net-based deformation vector field estimation for motion-compensated 4D-CBCT reconstruction. Med. Phys. 2020, 47, 3000–3012.
  53. Riblett, M.J.; Christensen, G.E.; Weiss, E.; Hugo, G.D. Data-driven respiratory motion compensation for four-dimensional cone-beam computed tomography (4D-CBCT) using groupwise deformable registration. Med. Phys. 2018, 45, 4471–4482.
  54. Sauppe, S.; Kuhm, J.; Brehm, M.; Paysan, P.; Seghers, D.; Kachelrieß, M. Motion vector field phase-to-amplitude resampling for 4D motion-compensated cone-beam CT. Phys. Med. Biol. 2018, 63, 035032.
  55. Weiss, G.H.; Talbert, A.J.; Brooks, R.A. The use of phantom views to reduce CT streaks due to insufficient angular sampling. Phys. Med. Biol. 1982, 27, 1151–1162.
  56. Lehmann, T.M.; Gonner, C.; Spitzer, K. Addendum: B-spline interpolation in medical image processing. IEEE Trans. Med. Imaging 2001, 20, 660–665.
  57. Bertram, M.; Wiegert, J.; Schafer, D.; Aach, T.; Rose, G. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT. IEEE Trans. Med. Imaging 2009, 28, 1011–1022.
  58. Dhou, S.; Hugo, G.D.; Docef, A. Motion-based projection generation for 4D-CT reconstruction. In Proceedings of the 2014 IEEE International Conference on Image Processing, Paris, France, 27–30 October 2014.
  59. Han, Y.; Ye, J.C. Framing U-Net via Deep Convolutional Framelets: Application to Sparse-View CT. IEEE Trans. Med. Imaging 2018.
  60. Kelly, B.; Matthews, T.P.; Anastasio, M.A. Deep Learning-Guided Image Reconstruction from Incomplete Data. arXiv 2017, arXiv:1709.00584.
  61. Madesta, F.; Sentker, T.; Gauer, T.; Werner, R. Self-contained deep learning-based boosting of 4D cone-beam CT reconstruction. Med. Phys. 2020, 47, 5619–5631.
Figure 1. Sample phase (peak-exhale) 4D-CBCT from patient #1 (top) and patient #2 (bottom): (a) axial, (b) coronal, and (c) sagittal slices.
Figure 2. Flowchart of the fluoroscopic 3D image estimation algorithm.
Figure 3. Variance explained by eigenvectors: (a) eigenvalue spectrum of the motion models of patient #1 and patient #2; (b) explained variance ratio of the motion models of patient #1 and patient #2.
Figure 4. (a) Sample CBCT projection from patient #2, (b) the corresponding computed projection using the motion model and the 4D-CBCT reference phase, and (c) a scatter plot showing the correlation between the intensities of the two images in (a,b). A linear correlation was found between the two image intensities with a correlation coefficient of 96%.
Figure 5. Sample estimated fluoroscopic 3D image from the patient #2 dataset: (a) axial, (b) coronal, and (c) sagittal slices.
Figure 6. Coronal slices of two estimated fluoroscopic 3D images from the patient #2 dataset at different breathing phases: (a) exhale phase and (b) inhale phase.
Figure 7. SI tumor position in the estimated fluoroscopic 3D images using motion models derived from 4D-CBCT images of patient #1 (a) and patient #2 (b).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
