Article

CycleGAN-Driven MR-Based Pseudo-CT Synthesis for Knee Imaging Studies

by Daniel Vallejo-Cendrero 1, Juan Manuel Molina-Maza 1, Blanca Rodriguez-Gonzalez 1, David Viar-Hernandez 1, Borja Rodriguez-Vila 1, Javier Soto-Pérez-Olivares 2, Jaime Moujir-López 2, Carlos Suevos-Ballesteros 2, Javier Blázquez-Sánchez 2, José Acosta-Batlle 2 and Angel Torrado-Carvajal 1,*

1 Medical Image Analysis and Biometry Laboratory, Universidad Rey Juan Carlos, 28933 Móstoles, Spain
2 Department of Radiology, Hospital Universitario Ramón y Cajal, IRYCIS, 28034 Madrid, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(11), 4655; https://doi.org/10.3390/app14114655
Submission received: 8 February 2024 / Revised: 28 April 2024 / Accepted: 13 May 2024 / Published: 28 May 2024
(This article belongs to the Special Issue Biomedical Imaging: From Methods to Applications)

Abstract:

In the field of knee imaging, the incorporation of MR-based pseudo-CT synthesis holds the potential to mitigate the need for separate CT scans, simplifying workflows, enhancing patient comfort, and reducing radiation exposure. In this work, we present a novel DL framework, grounded in the development of the Cycle-Consistent Generative Adversarial Network (CycleGAN) method, tailored specifically for the synthesis of pseudo-CT images in knee imaging to surmount the limitations of current methods. Upon visually examining the outcomes, it is evident that the synthesized pseudo-CTs show excellent quality and high robustness. Despite the limited dataset employed, the method is able to capture the particularities of the bone contours in the resulting image. The experimental Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Zero-Normalized Cross Correlation (ZNCC), Mutual Information (MI), Relative Change (RC), and absolute Relative Change (|RC|) report values of 30.4638 ± 7.4770, 28.1168 ± 1.5245, 0.9230 ± 0.0217, 0.9807 ± 0.0071, 0.8548 ± 0.1019, 0.0055 ± 0.0265, and 0.0302 ± 0.0218 (median ± median absolute deviation), respectively. The voxel-by-voxel correlation plot shows an excellent correlation between pseudo-CT and ground-truth CT Hounsfield units (m = 0.9785; adjusted R² = 0.9988; ρ = 0.9849; p < 0.001). The Bland–Altman plot shows that the average of the differences is low (HU_CT − HU_pseudo-CT = 0.7199 ± 35.2490; 95% confidence interval [−68.3681, 69.8079]). This study represents the first reported effort in the field of MR-based knee pseudo-CT synthesis, paving the way to significantly advance the field of knee imaging.

1. Introduction

In the field of musculoskeletal imaging, particularly in knee imaging, magnetic resonance (MR) imaging and computed tomography (CT) serve as two indispensable modalities, each offering unique insights [1,2,3,4,5,6]. While MR stands out for its unrivaled soft tissue contrast, CT excels at providing precise density-based information through Hounsfield units (HU). MR’s unique ability to non-invasively capture high-resolution, multi-planar images with exceptional soft tissue contrast provides a detailed representation of knee joint structures, making it indispensable for diagnosing a wide array of pathologies and conditions and providing crucial insights into articular cartilage, ligaments, and menisci. Yet, the absence of density data in MR poses a considerable limitation for clinicians in the field of knee imaging: it limits their ability to accurately assess cortical bone integrity and osseous mineralization when searching for fractures and osteopenia, and it hampers the detection of calcified soft tissue deposits with the same precision as CT.
Additionally, the realm of knee imaging goes beyond anatomical diagnosis and extends to metabolic assessment and therapeutic applications. Positron emission tomography (PET) plays a crucial role in assessing metabolic activity in tissues, where accurate attenuation correction is imperative to derive precise and quantifiable metabolic information [7,8,9]. Furthermore, when it comes to therapy, radiation therapy planning requires the precise delivery of radiation to target tissue while minimizing exposure to surrounding healthy structures, where the level of accuracy in targeting is critical to the effectiveness and safety of the treatment [10,11]. Conventionally, CT scans are fundamental for both attenuation correction and radiation therapy planning, serving as the primary modality for generating patient-specific attenuation maps.
In this context, recent advances in MR image acquisition and post-processing, including ultrashort echo time, zero echo time, T1-weighted gradient recalled echo, and susceptibility-weighted MR sequences, are showing promising results in MR-based bone visualization [12,13,14,15,16]. Additionally, the incorporation of MR-based pseudo-CT synthesis in knee imaging holds the potential to mitigate the need for separate CT scans, simplifying the workflow, enhancing patient comfort, and reducing radiation exposure, all while maintaining or even enhancing diagnostic and/or treatment accuracy. The existing landscape of MR-based pseudo-CT synthesis includes various methods and has evolved from traditional atlas-based techniques, which leverage preregistered CT–MR image pairs from a database to map MR images to their corresponding CT analogs [17,18,19]. These early methods demonstrated the potential of MR-based pseudo-CT synthesis but were limited by their reliance on anatomical priors, which could lead to inaccuracies in complex or atypical cases.
In recent years, the field has witnessed a profound transformation, driven by the emergence of machine learning and, more specifically, deep learning (DL) techniques [20,21,22]. These advanced approaches have introduced a new era in MR-based pseudo-CT synthesis, characterized by their ability to autonomously learn and map the intricate relationships between MR and CT images. DL-based methods, often rooted in convolutional neural networks (CNNs) [23,24,25] and, more recently, generative adversarial networks (GANs) [26,27,28], have shown remarkable promise in generating high-quality pseudo-CT images from MR data. They offer a more accurate and adaptable solution compared to traditional atlas-based methods [29,30], especially in cases with significant anatomical variations or pathologies.
The integration of DL techniques, in particular, has opened new horizons, enabling more precise, patient-specific, and versatile applications of pseudo-CT synthesis in both diagnostic and therapeutic realms. However, while these methods have shown progress, significant room for improvement remains, especially when moving beyond head, neck, and pelvic imaging. Specifically, pseudo-CT synthesis in knee imaging presents a critical challenge, as images usually present a different field of view (FOV) and/or positioning of the knee between the MR and the CT scans due to the high degrees of freedom of the knee, hindering matching between MR and CT image structures.
In this work, we present a novel DL framework, grounded in the development of the Cycle-Consistent Generative Adversarial Network (CycleGAN) method [31,32], tailored specifically for the synthesis of pseudo-CT images in knee imaging to surmount the limitations of current methods. In this context, the CycleGAN presents an innovative solution to the problem of image synthesis between different modalities by translating images from one domain (MR) to another (CT) while coping with potential misregistrations between corresponding data pairs. The distinctive feature of the CycleGAN lies in its combination of adversarial training and the principle of cycle consistency. This integration enables training on unpaired data, ideally suited to tackle the intricate challenge of possible misalignments between CT and MR, as we will further see in Section 2. Beyond the technical intricacies, we also discuss the profound clinical implications of this advancement. To the best of our knowledge, this study represents the first reported effort in the field of MR-based knee pseudo-CT synthesis, paving the way to significantly advance the field of knee imaging.

2. Materials and Methods

2.1. Dataset

This study was conducted at Hospital Universitario Ramón y Cajal. The institutional review board approved this study (protocol 297/23); the need for written informed consent was waived due to the retrospective nature of the study.
A total of 44 patients were retrospectively retrieved from the PACS of the hospital. For inclusion in this study, subjects were required to have both an MR scan and a CT scan of the same knee within a reasonable time frame. Exclusion criteria included the following: (1) images (either MR or CT) with poor image resolution, preventing the identification of the anatomical structures of the knee; (2) severe knee disease, such as soft tissue or bone tumors; (3) surgical procedures in between the MR and CT acquisitions, changing the anatomy distribution between images; and/or (4) extremely different field of view and positioning of the knee between the MR and the CT scans due to the high degrees of freedom of the knee, hindering matching between MR and CT image structures.
Following eligibility, data quality check, and availability, 16 patients were finally included in this study (45.9 ± 13.2 y/o, range 20–70; 8 females), providing us with 16 MR–CT image pairs. MR T1-weighted images and CT scans differed depending on the scanners (images were acquired on different scanner models). Different protocols were used during image acquisition, leading to different image resolutions as well as diverse fields of view. Moreover, a range of coil configurations with different numbers of channels were used for MR image acquisition, resulting in a highly heterogeneous database. Figure 1 shows an example of the MR and CT images of an eligible patient. Table 1 summarizes the demographic details for all subjects included in our dataset, as well as their corresponding MR and CT scanner vendors, models, and technical details.

2.2. Cycle-Consistent Generative Adversarial Network

As previously mentioned, in this work we leverage a sophisticated and innovative deep learning framework known as a CycleGAN. The CycleGAN is a state-of-the-art deep neural network architecture that has exhibited remarkable capabilities in the domain of medical image synthesis. It is designed to tackle the complex challenge of converting MR images into pseudo-CT images given difficult, unpaired, or unregistered MR–CT datasets. This approach is unique in that it harnesses the power of adversarial networks and cyclic consistency to ensure that the final synthesized pseudo-CT images not only look authentic but also possess structural accuracy.

2.2.1. Overall Architecture

Figure 2 shows the overall architecture of our CycleGAN implementation. Our approach is based on the implementation described by Zhu et al. [31], but we use 128 initial filters instead of 64, and each residual block has twice the number of convolutions, in order to achieve convergence and generalizability of the model. We kept the remaining hyperparameters at the defaults optimized by Zhu et al. [31]. The architecture relies on a dual generative network structure comprising two essential components: G and F. Network G is primarily responsible for synthesizing images from the MR domain into the corresponding CT domain, while its counterpart, network F, specializes in the reverse synthesis process. The generative networks, G and F, are meticulously designed, consisting of multiple convolutional layers that enable them to effectively learn the intricate mapping of features from one domain to the other. In conjunction with these generative networks, we introduce a pair of discriminative networks, denoted as D and E, strategically positioned to act as judges, evaluating the authenticity and quality of images within the MR and CT domains, respectively. This duality ensures a comprehensive assessment of the synthesized images in relation to their intended domains.

2.2.2. Cycle Consistency and Identity Loss

To endow the framework with a heightened level of precision and authenticity compared to images in the target domains, we combined an adversarial loss function with a cycle consistency loss. The adversarial term plays a crucial role in the training process, driving the networks to produce images that are not only realistic but also indistinguishable from those naturally occurring within the target domain, while the cycle consistency term ensures coherence between synthesis in both directions (from the MR to the CT domain and vice versa). The latter guarantees that, if we take an image from the MR domain, translate it to the CT domain, and then translate the resulting image back to the MR domain, we should obtain an image similar to the original. This property is crucial for preventing the model from generating incoherent translations.
The cycle consistency loss (L_cycle) is mathematically defined as the difference between the original image and the image that is synthesized into the other domain and then synthesized back. This is denoted as follows:

L_cycle(G, F) = ||F(G(I_MR)) − I_MR||_1 + ||G(F(I_CT)) − I_CT||_1,   (1)

where G and F are the functions implemented by the G and F generative components of the CycleGAN, respectively, I_MR is an image from the MR domain, I_CT is an image from the CT domain, and ||·||_1 is the L1 norm. This loss ensures that synthesized images are consistent with their original counterparts.
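To make the two directions of Equation (1) concrete, the following minimal numpy sketch evaluates the cycle consistency term for a pair of toy generators; the `G` and `F` lambdas are purely illustrative stand-ins for the trained networks, not the paper's actual models.

```python
import numpy as np

def l1_norm(a, b):
    """L1 norm of the difference between two images."""
    return np.abs(a - b).sum()

def cycle_consistency_loss(G, F, I_MR, I_CT):
    """Equation (1): translating MR -> CT -> MR (and CT -> MR -> CT)
    should recover the original images."""
    return l1_norm(F(G(I_MR)), I_MR) + l1_norm(G(F(I_CT)), I_CT)

# Toy stand-ins for the trained generators; exact inverses give zero loss.
G = lambda x: 2.0 * x   # placeholder MR -> CT mapping
F = lambda x: x / 2.0   # placeholder CT -> MR mapping
I_MR = np.random.rand(8, 8)
I_CT = np.random.rand(8, 8)
print(cycle_consistency_loss(G, F, I_MR, I_CT))  # 0.0 for exact inverses
```

If F fails to invert G, the loss becomes strictly positive, which is the signal that penalizes incoherent translations during training.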

2.2.3. Data Preprocessing and CycleGAN Training

A first step dedicated to image preprocessing is essential to ensure the quality of the expected outcome. In this sense, intensity normalization and spatial standardization processes were implemented, including the following:
  • The N4ITK MRI Bias Correction algorithm was applied to reduce field inhomogeneities associated with interactions and imperfections of the employed radio frequency coils. For this purpose, the implemented 3D Slicer module was used over T1-weighted MR images [33].
  • MR and CT images were resampled to an isotropic 1 mm spacing using the Resample Scalar Volume module in 3D Slicer, establishing a consistent resolution across all images.
  • Intra-patient registration aimed at aligning each MR–CT pair. The process included several steps: initially, manual registration was performed to ensure the same spatial localization of the images, using the Transforms module in 3D Slicer. This was followed by an automatic elastic registration step using the Elastix software (5.1.0) [34]. This critical process ensured precise correspondence between each anatomical point in both imaging modalities.
  • MR and CT images were padded (zero-padding for MR images and −1024-padding for CT images) to achieve the same matrix size for all volumes, ensuring compatibility with the CycleGAN input size. This step was crucial in establishing a consistent resolution (256 × 256 × 256 voxels) across all images, effectively preventing information loss during subsequent procedures.
  • CT intensity normalization was conducted to adjust the HU range from −1024 to 3071, using MATLAB (The MathWorks Inc., Natick, MA, USA). This normalization ensured a fixed representation of 4096 gray levels covering the range for tissues.
  • MR and CT images were used to create a joint mask using MATLAB. This aimed to guarantee coherence in areas where one modality lacked data compared to the other.
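The intensity-level steps above (padding, HU clipping, and joint masking) can be sketched roughly as follows; the registration and bias-field steps are omitted, and the function names are our own, not the paper's code.

```python
import numpy as np

def pad_to_shape(vol, shape=(256, 256, 256), fill=0.0):
    """Symmetrically pad a volume with a constant value up to `shape`."""
    pads = [((s - v) // 2, (s - v) - (s - v) // 2)
            for v, s in zip(vol.shape, shape)]
    return np.pad(vol, pads, constant_values=fill)

def preprocess_pair(mr, ct):
    """Zero-pad the MR, air-pad the CT (-1024 HU), clip the CT to the
    4096-level HU range [-1024, 3071], and build a joint foreground mask."""
    mr_p = pad_to_shape(mr, fill=0.0)
    ct_p = pad_to_shape(ct, fill=-1024.0)
    ct_p = np.clip(ct_p, -1024, 3071)
    mask = (mr_p > 0) & (ct_p > -1024)  # voxels where both modalities have data
    return mr_p, ct_p, mask
```

The joint mask confines later metric computations to voxels where both modalities actually contain data, which matches the masking described in Section 2.3.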
Preprocessed 2D slices were then fed to the CycleGAN, using the Mean Absolute Error (MAE) as loss function, together with the Adam optimizer with a learning rate of 10^−4 and a batch size of 6. The model was trained for 300 epochs using a cross-validation scheme (k = 4), with 12 patients for training and 4 patients for testing in each iteration. Although the visible improvement in image differences diminished after epoch 50, the loss function (MAE) continued to converge gradually. The model was trained on a server equipped with an NVIDIA A100 80 GB PCIe Tensor Core GPU. While training took about one day, the final inference time was around 30 s per patient; this short inference time makes the trained model acceptable for future implementation and deployment in clinical settings.

2.3. Evaluation Metrics

Differences between the synthesized and ground-truth CT were assessed voxel-wise. Only voxels included in the knee were used for comparison, avoiding the inclusion of the background in the computation of the metrics.
Relative Change (RC) maps were calculated in a voxel-wise manner for qualitative assessment following Equation (2):
RC(x, y, z) = (pCT(x, y, z) − CT(x, y, z)) / (|CT(x, y, z)| + ε),   (2)
where ε = 0.001 and pCT(x,y,z) and CT(x,y,z) denote the voxel values at a given (x,y,z) image position of the pseudo-CT and ground-truth CT, respectively.
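Equation (2) translates directly into a one-line numpy sketch (the function name is ours):

```python
import numpy as np

def relative_change_map(pct, ct, eps=1e-3):
    """Voxel-wise Relative Change map of Equation (2):
    RC = (pCT - CT) / (|CT| + eps)."""
    return (pct - ct) / (np.abs(ct) + eps)
```

Positive values mark voxels where the pseudo-CT overestimates the HU value, negative values mark underestimation.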
Voxel-by-voxel correlation plots, Bland–Altman plots, bias, and variability Pearson correlation coefficients were calculated for quantitative comparisons. Statistical significance was considered when the p value was lower than 0.01.
The Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Zero-Normalized Cross Correlation (ZNCC), Mutual Information (MI), Relative Change (RC), and absolute Relative Change (|RC|) metrics were computed to quantitatively measure the quality of the synthesized pseudo-CT volumes compared with the ground-truth CT volumes, following Equations (3)–(9):
MAE = (1/N) Σ_{x,y,z} |pCT(x, y, z) − CT(x, y, z)|,   (3)

PSNR = 10 log10( MAX² / ((1/N) Σ_{x,y,z} (pCT(x, y, z) − CT(x, y, z))²) ),   (4)

SSIM = ((2 μ_pCT μ_CT + C1)(2 σ_pCT,CT + C2)) / ((μ_pCT² + μ_CT² + C1)(σ_pCT² + σ_CT² + C2)),   (5)

ZNCC = (1/N) Σ_{x,y,z} ((pCT(x, y, z) − μ_pCT)(CT(x, y, z) − μ_CT)) / (σ_pCT σ_CT),   (6)

MI = H(CT) + H(pCT) − H(CT, pCT),   (7)

RC = (1/N) Σ_{x,y,z} (pCT(x, y, z) − CT(x, y, z)) / (|CT(x, y, z)| + ε),   (8)

|RC| = (1/N) Σ_{x,y,z} |pCT(x, y, z) − CT(x, y, z)| / (|CT(x, y, z)| + ε),   (9)
where, again, x, y, and z index the three image dimensions, and pCT(x, y, z) and CT(x, y, z) are the pseudo-CT and ground-truth CT voxel values at a given (x, y, z) position, respectively; N is the total number of voxels; μ_pCT and μ_CT are the mean HU values of the pseudo-CT and ground-truth CT images, respectively; σ_pCT and σ_CT are the standard deviations of the pseudo-CT and ground-truth CT images, respectively; σ_pCT,CT is the covariance between the two images; C1 = (k1 L)² and C2 = (k2 L)² are two variables that stabilize the division in the case of a weak denominator, where L is the dynamic range of the pixel values (typically 2^(#bits per pixel) − 1) and k1 = 0.01 and k2 = 0.03 by default; H(pCT) and H(CT) are the entropies of the pseudo-CT and ground-truth CT images, respectively; and H(pCT, CT) is the joint entropy, defined by Equations (10)–(12):
H(pCT) = − Σ_x p(pCT(x)) log2 p(pCT(x)),   (10)

H(CT) = − Σ_y p(CT(y)) log2 p(CT(y)),   (11)

H(pCT, CT) = − Σ_{x,y} p(pCT(x), CT(y)) log2 p(pCT(x), CT(y)),   (12)
where p ( p C T ( x ) ) and p ( C T ( y ) ) represent the individual probability of observing a particular intensity value at a specific voxel in the pseudo-CT and ground-truth CT images, respectively. Similarly, p ( p C T ( x ) , C T ( y ) ) represents the joint probability of visualizing determined intensity values in both the pseudo-CT and ground-truth CT images.
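The metrics in Equations (3)–(12) can be sketched in numpy as follows. These are whole-volume versions under our own naming: the SSIM here uses global image statistics, matching the form of Equation (5) rather than the usual local-window variant, and the MI estimate bins intensities into a joint histogram.

```python
import numpy as np

def mae(pct, ct):
    """Equation (3)."""
    return np.mean(np.abs(pct - ct))

def psnr(pct, ct, max_val=3071.0):
    """Equation (4); MAX is taken as the upper HU bound."""
    mse = np.mean((pct - ct) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(pct, ct, L=4095.0, k1=0.01, k2=0.03):
    """Equation (5) evaluated with global image statistics."""
    C1, C2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_p, mu_c = pct.mean(), ct.mean()
    cov = np.mean((pct - mu_p) * (ct - mu_c))
    return ((2 * mu_p * mu_c + C1) * (2 * cov + C2)) / (
        (mu_p ** 2 + mu_c ** 2 + C1) * (pct.var() + ct.var() + C2))

def zncc(pct, ct):
    """Equation (6)."""
    return np.mean((pct - pct.mean()) * (ct - ct.mean())) / (pct.std() * ct.std())

def mutual_information(pct, ct, bins=64):
    """Equations (7) and (10)-(12), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(pct.ravel(), ct.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return H(p_xy.sum(axis=1)) + H(p_xy.sum(axis=0)) - H(p_xy.ravel())
```

In practice these would be evaluated only over the masked knee voxels, as described at the start of Section 2.3.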

3. Experimental Results

3.1. Qualitative Results

In this section, we present a qualitative evaluation of the synthesized pseudo-CT images of the knee compared to their corresponding ground-truth CT images. Visual inspection is crucial to evaluate the realism, anatomical accuracy, and overall quality of the synthesized pseudo-CTs. Figure 3 shows side-by-side comparisons for four representative subjects in the dataset to illustrate the outcomes of our knee pseudo-CT synthesis method.
Anatomical inspection of the results shows the high quality of the synthesized pseudo-CTs and the robustness of the method when delineating the bone structures despite different anatomical distributions and body mass indexes, which is of utmost importance in our assessment. We inspected the representations of critical bone structures within the knee joint, encompassing the femur, tibia, fibula, and patella, to gauge the accuracy of the pseudo-CT images. Despite minor deviations in the HUs in the boundaries of the bone structures, as demonstrated in the error maps, the synthesized pseudo-CTs are able to capture the shape and details of the bone contours. As part of this evaluation, we can also see how the pseudo-CT method is able to capture and include the presence of implants with good accuracy.
Realism and texture analysis are essential to this qualitative assessment. Visual comparisons between the pseudo-CT and CT images from these representative cases provide initial insights into the realism of the synthesized images. Particular attention is directed towards the preservation of tissue structures, such as bone surfaces, ligaments, and surrounding soft tissues, as well as the maintenance of textural details. Overall, the pseudo-CT method is able to capture and maintain the same distribution of the different tissues, despite presenting less detail for muscle and soft tissues, where there is a lack of detail in their shared boundaries and their intrinsic textures, compared to the ground-truth CT. These synthesis artifacts may be due to the limited number of MR–CT pairs in the dataset used for training our method.
Additionally, we could also see how the CycleGAN is able to handle the generation of densities associated with implants, as can be seen in the synthesis of the screws in the second patient in Figure 3.

3.2. Quantitative Results

In this section, we conduct a comprehensive quantitative assessment of our knee pseudo-CT synthesis approach, employing the suite of objective metrics described in Section 2.3 to evaluate the synthesized pseudo-CT images in comparison to the ground-truth CT scans.
A fundamental component of our quantitative analysis is the assessment of HUs, as they are integral for characterizing tissue properties in CT imaging. A thorough comparison of HU values for the analogous voxels in pseudo-CT and ground-truth CT images is presented in Figure 4, including a voxel-by-voxel correlation plot together with a Bland–Altman plot across all participants. The correlation plot showed an excellent correlation between the pseudo-CT HUs and the ground-truth CT HUs (m = 0.9785; adjusted R² = 0.9988; ρ = 0.9849; p < 0.001). The Bland–Altman plot showed that the average of differences was low (HU_CT − HU_pseudo-CT = 0.7199 ± 35.2490; 95% confidence interval [−68.3681, 69.8079]); the difference between methods was negligible below 200 HU (soft tissues) and tended to decrease as the average increased, especially above 900 HU (metallic implants). The main dispersion is located within the HU values associated with the cancellous and cortical bones.
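The Bland–Altman bias and 95% limits of agreement reported above can be computed as in this small numpy sketch, assuming paired HU samples drawn from the masked knee voxels (the function name is ours):

```python
import numpy as np

def bland_altman(ct_vals, pct_vals):
    """Bias (mean of the differences CT - pseudo-CT) and 95% limits of
    agreement (bias +/- 1.96 * SD of the differences)."""
    diff = ct_vals - pct_vals
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A bias near zero with narrow limits of agreement indicates that the pseudo-CT neither systematically over- nor underestimates HU values.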
The remaining quantitative assessments are summarized in Table 2 and serve as a robust and rigorous analysis of the accuracy and efficacy of our method. All metrics demonstrate that our method performs well when synthesizing the pseudo-CT images. The MAE, PSNR, SSIM, ZNCC, and MI metrics demonstrate the excellent agreement between synthesized pseudo-CT images and the ground-truth CT. The RC metric reveals that errors are not systematically biased towards positive or negative values, while the absolute RC shows that errors in synthesized pseudo-CTs are below 5% when compared to their corresponding ground-truth CTs.

4. Discussion

We developed and implemented a novel pseudo-CT synthesis framework, based on the CycleGAN method, specifically focused on synthesizing pseudo-CT images in knee imaging, aiming to address the limitations observed in existing methods. Thus, we demonstrated how the key feature of the CycleGAN, its use of adversarial training and the principle of cycle consistency, led to the successful generation of continuous knee pseudo-CTs.
In recent years, several other DL-based pseudo-CT synthesis methods have been described for the head, neck, and pelvis; however, to the best of our knowledge, this study represents the first reported effort in the field of MR-based knee pseudo-CT synthesis, paving the way to significantly advance the field of knee imaging. Despite the evident differences between these anatomical areas, our results are in line with those recently described by other authors in the literature. The qualitative (Figure 3) and quantitative (Figure 4 and Table 2) analyses indicated the promising potential of the method.
As qualitatively depicted in Figure 3, the discrepancies observed at the bone boundaries may largely derive from defects in the MR–CT intra-subject registration, a task which is particularly complex due to multiple factors: differences in patient positioning between the CT and MR scans, the complex anatomy of the knee, and the high freedom of movement of the bone structures. As part of this evaluation, the management of artifacts in the pseudo-CT synthesis process is crucial; in this sense, we also demonstrated how the CycleGAN is able to handle the generation of densities associated with implants, as can be seen in the synthesis of the screws in the second patient in Figure 3. This fact may have implications for the clinical usability of the synthesized images.
As seen in the quantitative results shown in Figure 4, pseudo-CT images are very similar to their corresponding ground-truth CT images. The correlation plot showed an excellent correlation between the pseudo-CT HUs and the ground-truth CT HUs, and the Bland–Altman plot showed that the average of differences was low, demonstrating that the difference between methods was negligible below 200 HU and tended to decrease as the average increased, especially above 900 HU. As described by the numerical results reported in Table 2, all metrics tell the same story, illustrating the strong agreement between the synthesized pseudo-CT images and their corresponding ground-truth CT images.
Our artificially synthesized pseudo-CTs hold crucial importance within the clinical context, especially for applications that require the precise delineation of knee bones and/or the computation of a pseudo-CT, such as knee segmentation tasks, PET/MR attenuation correction, and/or radiotherapy planning, where paired MR–CT datasets are usually scarce.
Our study had several limitations. First, our dataset was very small, especially given the anatomical complexity of the knee. Further improvement of the current model could be achieved by increasing the number of patients included in the dataset and/or training the CycleGAN with better, higher-resolution MR images. This is supported by the fact that, although the visible improvement in image differences diminished after epoch 50, the loss function (MAE) continued to converge gradually; we believe that this convergence and, thus, the pseudo-CT results might be enhanced with a larger patient dataset. Collaborating with several medical institutions to create a multi-center database would be a promising strategy to increase the diversity and number of patients and therefore enhance the robustness and reliability of our DL model. However, despite the small number of patients in the dataset, the comprehensive evaluation presented here encompasses a variety of cases, reflecting diverse patient anatomies and knee conditions. This diversity is essential to ensure that our knee pseudo-CT synthesis approach is consistent and reliable across a spectrum of clinical scenarios. Second, although patient ages ranged from 20 to 70 years, their mean age was 45.9 ± 13.2, which does not ensure an even distribution across age groups. Moreover, since all patients were aged over 20 years, applying the proposed method directly to pediatric patients is challenging due to the intrinsic variations in anatomy and bone densities. Third, we assessed the impact of our method on only a limited number of patients, covering the main anatomical structures and small implants (screws); a further assessment including patients with different pathologies would be desirable. Finally, although we obtained encouraging results, the reproducibility of our method needs to be properly assessed in subjects who underwent repeated examinations.

5. Conclusions

In this work, we described the development and initial validation of a CycleGAN approach to synthesize knee pseudo-CT images from standard T1-weighted MR images. Our proposed method yields minimal error when contrasted with the corresponding CT images. Additionally, our results are in line with those previously described for other anatomical areas in the literature. This work serves as a proof of concept to demonstrate the great potential of CycleGANs for pseudo-CT synthesis as well as the feasibility of these methods using real clinical datasets. To the best of our knowledge, this study represents the first reported effort in the field of MR-based knee pseudo-CT synthesis, paving the way to significantly advance the field of knee imaging.
However, further developments are needed before integrating the proposed method into real-world clinical settings. On the one hand, efforts must be made to implement an external validation in a multi-center clinical trial that significantly increases the dataset size. On the other hand, this new dataset must include pathologies commonly seen in knee imaging (e.g., fractures, torn meniscus or ligament, osteoarthritis, arthrofibrosis, tendinopathies, osteochondroma, bursitis, and synovial or popliteal cysts) so that the performance of the method can be evaluated in these conditions.

Author Contributions

Conceptualization, J.B.-S., J.A.-B. and A.T.-C.; methodology, D.V.-C., J.M.M.-M., D.V.-H., J.A.-B. and A.T.-C.; software, D.V.-C., J.M.M.-M., D.V.-H. and A.T.-C.; validation, J.M.M.-M., B.R.-G., D.V.-H., B.R.-V., J.A.-B. and A.T.-C.; formal analysis, B.R.-G., B.R.-V. and A.T.-C.; investigation, D.V.-C. and A.T.-C.; resources, C.S.-B., J.A.-B. and A.T.-C.; data curation, D.V.-C., J.M.M.-M., J.S.-P.-O., J.M.-L. and A.T.-C.; writing—original draft preparation, D.V.-C., B.R.-G. and A.T.-C.; writing—review and editing, D.V.-C., J.M.M.-M., B.R.-G., D.V.-H., B.R.-V., J.S.-P.-O., J.M.-L., J.B.-S., J.A.-B. and A.T.-C.; visualization, C.S.-B., J.A.-B. and A.T.-C.; supervision, J.S.-P.-O., C.S.-B., J.B.-S., J.A.-B. and A.T.-C.; project administration, J.B.-S. and A.T.-C.; funding acquisition, A.T.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research is part of projects M3433 and M3438 for the establishment of a medical image analysis lab in radiomics and AI in the Dept. of Radiology (PI: Angel Torrado-Carvajal), funded by Fundación para la Investigación Biomédica del Hospital Universitario Ramón y Cajal, (FIBioHRC). This research is also part of project PID2020-116769RB-I00, ADAPT-AI, (PI: Angel Torrado-Carvajal) funded by Ministerio de Ciencia e Investigación (MCIN)/Agencia Estatal de Investigación (AEI).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Hospital Universitario Ramón y Cajal (protocol 297/23).

Informed Consent Statement

Patient consent was waived due to the retrospective nature of this study.

Data Availability Statement

The data presented in this study are not available due to their intrinsic medical nature.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MR: Magnetic Resonance
CT: Computed Tomography
HU: Hounsfield Units
PET: Positron Emission Tomography
MAE: Mean Absolute Error
PSNR: Peak Signal-to-Noise Ratio
SSIM: Structural Similarity Index Measure
ZNCC: Zero-Normalized Cross Correlation
MI: Mutual Information
RC: Relative Change
DL: Deep Learning
CNNs: Convolutional Neural Networks
GANs: Generative Adversarial Networks
CycleGANs: Cycle-Consistent Generative Adversarial Networks

References

  1. Davies, M.; James, S.; Botchu, R. (Eds.) Imaging of the Knee: Techniques and Applications; Springer: Cham, Switzerland, 2023. [Google Scholar]
  2. Altahawi, F.; Subhas, N. 3D MRI in musculoskeletal imaging: Current and future applications. Curr. Radiol. Rep. 2018, 6, 27. [Google Scholar] [CrossRef]
  3. Sneag, D.B.; Abel, F.; Potter, H.G.; Fritz, J.; Koff, M.F.; Chung, C.B.; Pedoia, V.; Tan, E.T. MRI Advancements in Musculoskeletal Clinical and Research Practice. Radiology 2023, 308, e230531. [Google Scholar] [CrossRef]
  4. Ibad, H.A.; de Cesar Netto, C.; Shakoor, D.; Sisniega, A.; Liu, S.Z.; Siewerdsen, J.H.; Carrino, J.A.; Zbijewski, W.; Demehri, S. Computed tomography: State-of-the-art advancements in musculoskeletal imaging. Investig. Radiol. 2023, 58, 99–110. [Google Scholar] [CrossRef] [PubMed]
  5. Demehri, S.; Baffour, F.I.; Klein, J.G.; Ghotbi, E.; Ibad, H.A.; Moradi, K.; Taguchi, K.; Fritz, J.; Carrino, J.A.; Guermazi, A.; et al. Musculoskeletal CT Imaging: State-of-the-Art Advancements and Future Directions. Radiology 2023, 308, e230344. [Google Scholar] [CrossRef] [PubMed]
  6. Kijowski, R.; Fritz, J. Emerging technology in musculoskeletal MRI and CT. Radiology 2023, 306, 6–19. [Google Scholar] [CrossRef] [PubMed]
  7. Nakamura, H.; Masuko, K.; Yudoh, K.; Kato, T.; Nishioka, K.; Sugihara, T.; Beppu, M. Positron emission tomography with 18F-FDG in osteoarthritic knee. Osteoarthr. Cartil. 2007, 15, 673–681. [Google Scholar] [CrossRef] [PubMed]
  8. Kamasaki, T.; Mori, K.; Taira, Y.; Orita, M.; Miyamoto, I.; Usui, T.; Chiba, K.; Kudo, T.; Takamura, N. PET/computed tomography shows association between subjective pain in knee joints and fluorine-18-fluorodeoxyglucose uptake. Nucl. Med. Commun. 2020, 41, 241–245. [Google Scholar] [CrossRef] [PubMed]
  9. Sandström, A.S.; Torrado-Carvajal, A.; Morrissey, E.J.; Kim, M.; Alshelh, Z.; Zhu, Y.; Li, M.D.; Chang, C.Y.; Jarraya, M.; Akeju, O.; et al. [11C]-PBR28 Positron Emission Tomography Signal as an Imaging Marker of Joint Inflammation in Knee Osteoarthritis. PAIN 2024, 165, 1121–1130. [Google Scholar] [CrossRef] [PubMed]
  10. Van den Ende, C.H.; Minten, M.J.; Leseman-Hoogenboom, M.M.; van den Hoogen, F.H.; Den Broeder, A.A.; Mahler, E.A.; Poortmans, P.M. Long-term efficacy of low-dose radiation therapy on symptoms in patients with knee and hand osteoarthritis: Follow-up results of two parallel randomised, sham-controlled trials. Lancet Rheumatol. 2020, 2, e42–e49. [Google Scholar] [CrossRef]
  11. Kim, B.H.; Shin, K.; Kim, M.J.; Kim, H.J.; Wang, J.H.; Lee, D.H.; Kim, D.H.; Sun, J.; Lee, J.H.; Kim, J.Y.; et al. Low-dose RaDiation therapy for patients with KNee osteoArthritis (LoRD-KNeA): A protocol for a sham-controlled randomised trial. BMJ Open 2023, 13, e069691. [Google Scholar] [CrossRef]
  12. Chong, L.; Lee, K.; Sim, F. 3D MRI with CT-like bone contrast—An overview of current approaches and practical clinical implementation. Eur. J. Radiol. 2021, 143. [Google Scholar] [CrossRef] [PubMed]
  13. Florkow, M.; Willemsen, K.; Mascarenhas, V.; Oei, E.; van Stralen, M.; Seevinck, P. Magnetic Resonance Imaging Versus Computed Tomography for Three-Dimensional Bone Imaging of Musculoskeletal Pathologies: A Review. J. Magn. Reson. Imaging 2022, 56, 11–34. [Google Scholar] [CrossRef] [PubMed]
  14. Koh, E.; Walton, E.; Watson, P. VIBE MRI: An alternative to CT in the imaging of sports-related osseous pathology? Br. J. Radiol. 2018, 91, 20170815. [Google Scholar] [CrossRef] [PubMed]
  15. Aydıngöz, U.; Yıldız, A.; Ergen, F. Zero Echo Time Musculoskeletal MRI: Technique, Optimization, Applications, and Pitfalls. Radiographics 2022, 42, 1398–1414. [Google Scholar] [CrossRef]
  16. Lombardi, A.F.; Ma, Y.J.; Jang, H.; Jerban, S.; Du, J.; Chang, E.Y.; Chung, C.B. Synthetic CT in Musculoskeletal Disorders: A Systematic Review. Investig. Radiol. 2023, 58, 43–59. [Google Scholar] [CrossRef] [PubMed]
  17. Merida, I.; Costes, N.; Heckemann, R.; Hammers, A. Pseudo-CT generation in brain MR-PET attenuation correction: Comparison of several multi-atlas methods. EJNMMI Phys. 2015, 2, A29. [Google Scholar] [CrossRef]
  18. Ladefoged, C.N.; Law, I.; Anazodo, U.; Lawrence, K.S.; Izquierdo-Garcia, D.; Catana, C.; Burgos, N.; Cardoso, M.J.; Ourselin, S.; Hutton, B.; et al. A multi-centre evaluation of eleven clinically feasible brain PET/MRI attenuation correction techniques using a large cohort of patients. Neuroimage 2017, 147, 346–359. [Google Scholar] [CrossRef]
  19. Teuho, J.; Torrado-Carvajal, A.; Herzog, H.; Anazodo, U.; Klén, R.; Iida, H.; Teräs, M. Magnetic resonance-based attenuation correction and scatter correction in neurological positron emission tomography/magnetic resonance imaging—Current status with emerging applications. Front. Phys. 2020, 7, 243. [Google Scholar] [CrossRef]
  20. Spadea, M.F.; Maspero, M.; Zaffino, P.; Seco, J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med. Phys. 2021, 48, 6537–6566. [Google Scholar] [CrossRef]
  21. Boulanger, M.; Nunes, J.C.; Chourak, H.; Largent, A.; Tahri, S.; Acosta, O.; De Crevoisier, R.; Lafond, C.; Barateau, A. Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review. Phys. Medica 2021, 89, 265–281. [Google Scholar] [CrossRef]
  22. Vera-Olmos, J.; Torrado-Carvajal, A.; Prieto-de-la Lastra, C.; Catalano, O.A.; Rozenholc, Y.; Mazzeo, F.; Soricelli, A.; Salvatore, M.; Izquierdo-Garcia, D.; Malpica, N. How to Pseudo-CT: A Comparative Review of Deep Convolutional Neural Network Architectures for CT Synthesis. Appl. Sci. 2022, 12, 11600. [Google Scholar] [CrossRef]
  23. Martinez-Girones, P.M.; Vera-Olmos, J.; Gil-Correa, M.; Ramos, A.; Garcia-Cañamaque, L.; Izquierdo-Garcia, D.; Malpica, N.; Torrado-Carvajal, A. Franken-CT: Head and neck MR-based pseudo-CT synthesis using diverse anatomical overlapping MR–CT scans. Appl. Sci. 2021, 11, 3508. [Google Scholar] [CrossRef]
  24. Torrado-Carvajal, A.; Vera-Olmos, J.; Izquierdo-Garcia, D.; Catalano, O.A.; Morales, M.A.; Margolin, J.; Soricelli, A.; Salvatore, M.; Malpica, N.; Catana, C. Dixon-VIBE deep learning (DIVIDE) pseudo-CT synthesis for pelvis PET/MR attenuation correction. J. Nucl. Med. 2019, 60, 429–435. [Google Scholar] [CrossRef] [PubMed]
  25. Sari, H.; Reaungamornrat, J.; Catalano, O.A.; Vera-Olmos, J.; Izquierdo-Garcia, D.; Morales, M.A.; Torrado-Carvajal, A.; Ng, T.S.; Malpica, N.; Kamen, A.; et al. Evaluation of Deep Learning–Based Approaches to Segment Bowel Air Pockets and Generate Pelvic Attenuation Maps from CAIPIRINHA-Accelerated Dixon MR Images. J. Nucl. Med. 2022, 63, 468–475. [Google Scholar] [CrossRef] [PubMed]
  26. Pozaruk, A.; Pawar, K.; Li, S.; Carey, A.; Cheng, J.; Sudarshan, V.P.; Cholewa, M.; Grummet, J.; Chen, Z.; Egan, G. Augmented deep learning model for improved quantitative accuracy of MR-based PET attenuation correction in PSMA PET-MRI prostate imaging. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 9–20. [Google Scholar] [CrossRef] [PubMed]
  27. Ma, X.; Chen, X.; Li, J.; Wang, Y.; Men, K.; Dai, J. MRI-only radiotherapy planning for nasopharyngeal carcinoma using deep learning. Front. Oncol. 2021, 11, 713617. [Google Scholar] [CrossRef]
  28. Jabbarpour, A.; Mahdavi, S.R.; Sadr, A.V.; Esmaili, G.; Shiri, I.; Zaidi, H. Unsupervised pseudo CT generation using heterogenous multicentric CT/MR images and CycleGAN: Dosimetric assessment for 3D conformal radiotherapy. Comput. Biol. Med. 2022, 143, 105277. [Google Scholar] [CrossRef] [PubMed]
  29. Largent, A.; Barateau, A.; Nunes, J.C.; Lafond, C.; Greer, P.B.; Dowling, J.A.; Saint-Jalmes, H.; Acosta, O.; de Crevoisier, R. Pseudo-CT generation for MRI-only radiation therapy treatment planning: Comparison among patch-based, atlas-based, and bulk density methods. Int. J. Radiat. Oncol. Biol. Phys. 2019, 103, 479–490. [Google Scholar] [CrossRef] [PubMed]
  30. Largent, A.; Barateau, A.; Nunes, J.C.; Mylona, E.; Castelli, J.; Lafond, C.; Greer, P.B.; Dowling, J.A.; Baxter, J.; Saint-Jalmes, H.; et al. Comparison of deep learning-based and patch-based methods for pseudo-CT generation in MRI-based prostate dose planning. Int. J. Radiat. Oncol. Biol. Phys. 2019, 105, 1137–1150. [Google Scholar] [CrossRef]
  31. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  32. Wang, J.; Wu, Q.J.; Pourpanah, F. Dc-cyclegan: Bidirectional ct-to-mr synthesis from unpaired data. Comput. Med. Imaging Graph. 2023, 108, 102249. [Google Scholar] [CrossRef]
  33. Kikinis, R.; Pieper, S.D.; Vosburgh, K.G. 3D Slicer: A platform for subject-specific image analysis, visualization, and clinical support. In Intraoperative Imaging and Image-Guided Therapy; Springer: Berlin/Heidelberg, Germany, 2013; pp. 277–289. [Google Scholar]
  34. Klein, S.; Staring, M.; Murphy, K.; Viergever, M.A.; Pluim, J.P. Elastix: A toolbox for intensity-based medical image registration. IEEE Trans. Med. Imaging 2010, 29, 196–205. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Example axial (left), coronal (center), and sagittal (right) slices of the MR (top row) and CT (bottom row) images of an eligible patient in our dataset. Notice the different field of view and the different positioning of the knee between the MR and CT images.
Figure 2. CycleGAN overall architecture, showing the dual generative network structure components G and F synthesizing images from the MR to the CT domain, and vice versa, and the dual discriminative network structure components D and E, assessing the images synthesized by G and F.
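The cycle-consistency constraint that couples the two generators in Figure 2 can be sketched as follows. This is a minimal illustration, not the paper's implementation: G and F are identity placeholders standing in for the convolutional generators, and the adversarial terms supplied by the discriminators D and E are omitted.

```python
import numpy as np

# G maps the MR domain to the CT domain; F maps CT back to MR.
# Identity placeholders here -- in the real model both are CNN generators.
def G(mr):
    return mr

def F(ct):
    return ct

def cycle_consistency_loss(mr_batch, ct_batch):
    """L1 cycle loss: F(G(mr)) should recover mr, and G(F(ct)) should recover ct."""
    loss_mr = np.mean(np.abs(F(G(mr_batch)) - mr_batch))
    loss_ct = np.mean(np.abs(G(F(ct_batch)) - ct_batch))
    return loss_mr + loss_ct

mr_batch = np.random.rand(4, 64, 64)  # toy MR slices
ct_batch = np.random.rand(4, 64, 64)  # toy CT slices
print(cycle_consistency_loss(mr_batch, ct_batch))  # 0.0 with identity placeholders
```

During training, this cycle term is added to the adversarial losses from D and E, which is what allows the mapping to be learned from unpaired MR and CT volumes.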
Figure 3. Sagittal slices of the MR images (first column), pseudo-CT images (second column), CT images (third column), and Relative Change between the pseudo-CT and the CT (fourth column) for 4 representative subjects. Green arrows highlight the presence of knee implants. Blue values in error maps denote decreased HUs in the pseudo-CT images, whereas red values denote increased HUs.
Figure 4. Voxel-by-voxel correlation plot between CT and pseudo-CT of all images in the dataset, expressed in normalized Hounsfield units (HU) (left) and Bland–Altman plot between CT and pseudo-CT for all images in the dataset (right). Gray scale bars show density of voxels in histogram grid.
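The Bland–Altman statistics summarized in Figure 4 (mean HU difference and the ±1.96 SD limits of agreement) follow from a few lines of NumPy. The arrays below are illustrative toy values, not the study data.

```python
import numpy as np

def bland_altman_limits(ct, pseudo_ct):
    """Mean difference (bias) and 95% limits of agreement (bias +/- 1.96*SD)."""
    diff = ct - pseudo_ct
    bias = diff.mean()
    sd = diff.std()
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy HU values for illustration only (not the study data)
ct_hu = np.array([100.0, 250.0, 400.0, 1200.0])
pseudo_hu = np.array([98.0, 255.0, 395.0, 1210.0])
bias, (lower, upper) = bland_altman_limits(ct_hu, pseudo_hu)
print(bias, lower, upper)
```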
Table 1. Demographic and technical details from patients included in the dataset.
Age (Sex) | MR Scan | Voxel Size (mm) | Matrix Size | CT Scan | Voxel Size (mm) | Matrix Size
48 (F) | Philips Achieva dStream | 0.36 × 0.36 × 3.3 | 480 × 480 × 29 | Siemens Sensation 64 | 0.74 × 0.74 × 1 | 512 × 512 × 547
46 (F) | Philips Intera | 0.35 × 0.35 × 4 | 512 × 512 × 20 | Philips Brilliance 64 | 0.27 × 0.27 × 1 | 768 × 768 × 468
53 (F) | Philips Ingenia | 0.36 × 0.36 × 3 | 448 × 448 × 34 | Philips Brilliance 64 | 0.23 × 0.23 × 1.4 | 768 × 768 × 314
37 (F) | Philips Achieva | 0.31 × 0.31 × 3 | 512 × 512 × 32 | Philips Brilliance 64 | 0.26 × 0.26 × 1.4 | 768 × 768 × 228
40 (M) | Philips Intera | 0.39 × 0.39 × 4 | 512 × 512 × 30 | Toshiba Aquilion ONE | 0.36 × 0.36 × 0.5 | 512 × 512 × 621
57 (F) | Philips Ingenia | 0.36 × 0.36 × 3 | 448 × 448 × 30 | Philips Brilliance 64 | 0.27 × 0.27 × 1.4 | 768 × 768 × 357
20 (M) | Philips Achieva dStream | 0.35 × 0.35 × 3 | 512 × 512 × 32 | Toshiba Aquilion ONE | 0.38 × 0.38 × 0.5 | 512 × 512 × 511
57 (M) | Philips Ingenia | 0.34 × 0.34 × 3 | 560 × 560 × 32 | Philips Brilliance 64 | 0.28 × 0.28 × 1.4 | 768 × 768 × 392
42 (F) | Philips Achieva dStream | 0.26 × 0.26 × 4 | 640 × 640 × 20 | Philips Brilliance 64 | 0.26 × 0.26 × 1.4 | 768 × 768 × 249
38 (M) | Philips Ingenia | 0.26 × 0.26 × 3 | 1008 × 1008 × 38 | Toshiba Aquilion ONE | 0.49 × 0.49 × 0.5 | 512 × 512 × 496
59 (M) | Philips Achieva dStream | 0.35 × 0.35 × 3 | 512 × 512 × 32 | Siemens Sensation 64 | 0.35 × 0.35 × 1 | 512 × 512 × 372
26 (M) | Siemens Avanto | 0.63 × 0.63 × 3.5 | 320 × 320 × 28 | Philips Incisive CT | 0.46 × 0.46 × 0.8 | 451 × 451 × 485
33 (M) | Philips Achieva dStream | 0.36 × 0.36 × 3 | 448 × 448 × 30 | Philips Brilliance 64 | 0.39 × 0.39 × 1.4 | 768 × 768 × 349
57 (F) | Philips Ingenia | 0.36 × 0.36 × 3 | 448 × 448 × 30 | Philips Incisive CT | 0.38 × 0.38 × 1 | 512 × 512 × 200
51 (M) | Philips Ingenia | 0.36 × 0.36 × 3 | 448 × 448 × 36 | Philips Brilliance 64 | 0.27 × 0.27 × 1.4 | 768 × 768 × 283
70 (F) | Philips Ingenia | 0.34 × 0.34 × 3 | 528 × 528 × 30 | Philips Spectral CT | 0.50 × 0.50 × 2 | 512 × 512 × 126
Siemens Healthineers, Erlangen, Germany; Philips Healthcare, Best, The Netherlands; Toshiba Healthcare, Tokyo, Japan.
Table 2. Summary of the median (and median absolute deviation) Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Zero-Normalized Cross Correlation (ZNCC), Mutual Information (MI), Relative Change (RC), and absolute Relative Change (|RC|) metrics in the dataset.
MAE = 30.4638 (7.4770); PSNR = 28.1168 (1.5245); SSIM = 0.9230 (0.0217); ZNCC = 0.9807 (0.0071); MI = 0.8548 (0.1019); RC = 0.0126 (0.0344); |RC| = 0.0440 (0.0246)
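Several of the metrics in Table 2 have direct NumPy implementations, sketched below on toy arrays. This is an illustrative sketch only: SSIM and MI are usually computed with a library such as scikit-image, and the exact normalization and data range used in the paper may differ.

```python
import numpy as np

def mae(ct, p):
    """Mean Absolute Error."""
    return np.mean(np.abs(ct - p))

def psnr(ct, p, data_range=None):
    """Peak Signal-to-Noise Ratio in dB."""
    if data_range is None:
        data_range = ct.max() - ct.min()
    mse = np.mean((ct - p) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def zncc(ct, p):
    """Zero-Normalized Cross Correlation."""
    a = (ct - ct.mean()) / ct.std()
    b = (p - p.mean()) / p.std()
    return np.mean(a * b)

ct = np.array([200.0, 400.0, 800.0, 1600.0])  # toy CT values
p = np.array([210.0, 390.0, 820.0, 1580.0])   # toy pseudo-CT values
print(mae(ct, p), psnr(ct, p), zncc(ct, p))
```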
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

