Review

Synthetic Post-Contrast Imaging through Artificial Intelligence: Clinical Applications of Virtual and Augmented Contrast Media

1 Neuroradiology Unit, Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
2 Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
3 Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio 4, 00165 Rome, Italy
4 Radiology Department, Castelli Hospital, Via Nettunense Km 11.5, 00040 Ariccia, Italy
5 Neuroimaging Lab, IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Pharmaceutics 2022, 14(11), 2378; https://doi.org/10.3390/pharmaceutics14112378
Submission received: 13 September 2022 / Revised: 25 October 2022 / Accepted: 26 October 2022 / Published: 4 November 2022
(This article belongs to the Section Physical Pharmacy and Formulation)

Abstract

Contrast media are widely used in biomedical imaging, due to their relevance in the diagnosis of numerous disorders. However, the risk of adverse reactions, the concern of potential damage to sensitive organs, and the recently described brain deposition of gadolinium salts limit the use of contrast media in clinical practice. In recent years, the application of artificial intelligence (AI) techniques to biomedical imaging has led to the development of ‘virtual’ and ‘augmented’ contrasts. The idea behind these applications is to generate synthetic post-contrast images through AI computational modeling, starting from the information available in other images acquired during the same scan. In these AI models, non-contrast images (virtual contrast) or low-dose post-contrast images (augmented contrast) are used as input data to generate synthetic post-contrast images, which are often indistinguishable from the native ones. In this review, we discuss the most recent advances of AI applications to biomedical imaging relative to synthetic contrast media.

1. Introduction

Contrast media are essential tools in biomedical imaging, allowing for a more precise diagnosis of many conditions. The main contrast agents employed in radiology are iodinated contrasts for CT imaging and gadolinium-based contrast agents (GBCAs) for MRI. As with any molecule used for medical purposes, contrast media are not exempt from contraindications and side-effects, which need to be weighed against the known diagnostic benefits when scans are ordered in clinical practice.
One of the main side-effects of iodinated contrasts, besides allergic reactions, is nephrotoxicity [1]. Iodinated contrast media may lead to acute kidney injury (AKI) in certain patients and represent a leading cause of hospital-acquired AKI [1]. Although the exact mechanism of renal damage is still debated, there is evidence of direct cytotoxicity of iodinated contrasts on the tubular epithelial and endothelial linings of the kidney. Additionally, these contrasts seem to affect renal hemodynamics, due to increased oxygen radical synthesis and a hyperviscosity-related reduction in blood flow in both glomerular and tubular capillaries [1]. GBCA toxicity was first reported as a multi-systemic condition known as nephrogenic systemic fibrosis (NSF), described in a few cases and mainly related to renal failure [2]. More recently, brain GBCA deposition was discovered through imaging [3,4,5], animal models [6], and autopsy studies [7,8]. Brain accumulation of GBCAs depends on their chemical structure, with higher de-chelation susceptibility for linear compounds. Such evidence led to the withdrawal of linear GBCAs from the market [9] and to changes in clinical guidelines for contrast administration [10]. The existence of a definite clinical correlate for GBCA brain deposition is still debated; however, different types of toxicity may occur in the body according to the patient’s age and clinical state [3]. Oxidative stress may play a role in gadolinium ion toxicity, as reflected by changes in intracellular glutathione levels [11,12]. In this context, finding new means to boost the diagnostic power of biomedical imaging with low-dose contrast administration, and finding possible contrast-free diagnostic alternatives for common disorders [13], are extremely relevant goals in current clinical practice.
In the last few years, artificial intelligence (AI) has revolutionized the field of medicine, with remarkable applications in biomedical imaging [14]. Due to the high volume of imaging data stored in PACS archives, AI applications have proven capable of fueling predictive models for differential diagnosis and patient outcome in many different conditions [14,15,16,17,18,19]. Most recently, the field of ‘virtual’ and ‘augmented’ contrast has emerged from the intersection of AI and biomedical imaging. The idea behind these applications is to create virtual enhancement starting from the information available in non-contrast images acquired during the same scan (virtual contrast), or to augment the enhancement obtained from a low-dose administration (augmented contrast), through AI computational modeling.
In this review, we present the most recent advances in AI applications to biomedical imaging relative to contrast media. This paper is divided into three sections: (1) we discuss the main AI methods currently available to generate virtual and augmented contrast; (2) we review the main applications of AI-powered contrast media to neuroradiology; and (3) we review the main applications of AI-powered contrast media to body imaging. Finally, we present our reflections on possible future developments.

2. AI Architectures in Synthetic Reconstruction

The term AI was formally proposed for the first time in 1956, at a conference held at Dartmouth College, to describe the science of simulating intelligent human behaviors, including learning, judgment, and decision making, through computers [20]. The application of AI in medicine relies on machine learning and deep learning algorithms. These powerful mathematical algorithms can discover complex structure in high-dimensional data by improving their learning through experience. There are three main types of algorithm learning: (i) unsupervised learning, the ability to find patterns in data without a priori information; (ii) supervised learning, used for data classification and prediction based on a ground truth; and (iii) reinforcement learning, a technique that enables learning through a trial-and-error approach [21]. Among the many applications of AI in medicine, recent years have witnessed a rising interest in medical image analysis (MIA). In this context, deep learning networks can be applied to solve classification problems, and for segmentation, object detection, and synthetic reconstruction of medical images [22].
Virtual and augmented contrasts can be considered an application of AI in the field of synthetic imaging. Convolutional neural networks (CNNs) and generative adversarial networks (GANs) are the deep learning tools most commonly used for image reconstruction [23,24], due to their ability to capture image features that describe a high level of semantic information. These two groups of machine learning architectures have achieved considerable success in the MIA field, and they are further explored in the following sections, with particular attention to deep learning architectures that synthesize new images (synthetic post-contrast images) from pre-existing ones (either non-contrast images or low-dose post-contrast images) [25]. Previous studies relevant to this topic are summarized in Table 1.

2.1. Convolutional Neural Networks in Synthetic Reconstruction

A CNN can be considered a simplified version of the neocognitron model introduced by Fukushima [26] in 1980 to simulate the human visual system. CNNs are characterized by an input layer, an output layer, and multiple hidden layers (i.e., convolutional layers, pooling layers, fully connected layers, and various normalization layers). Due to their architecture, the typical use of CNNs is for image classification tasks, where the network’s output is a single class label related to an input image [27]. In medical research, the main applications of CNNs are likewise targeted toward classification problems [28,29]. However, the current literature includes several examples of successful implementation of CNN architectures to address other machine learning and computer vision challenges, such as image reconstruction and generation.
Gong et al. implemented a workflow with zero-dose pre-contrast MRIs and 10% low-dose post-contrast MRIs as inputs to generate full-dose MR images through AI (augmented contrast) [30]. In this study, the model consisted of an encoder-decoder CNN with three encoder steps and three decoder steps based on the U-Net architecture. The basic architecture of the U-Net consists of a contracting part to capture features and a symmetric expanding part to enable precise localization and reconstruction. A slightly modified version of the same model was tested by Pasumarthi et al., with similar results [31]. Xie et al. also investigated a U-Net-based approach to obtain contrast-enhanced MR images from non-contrast MRI [32]. This type of network architecture has demonstrated superior performance in medical imaging tasks such as segmentation and MRI reconstruction [33,34,35].
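To make this design concrete, the following is a minimal sketch, in PyTorch, of a U-Net-style encoder-decoder with three encoder and three decoder steps, taking a two-channel input (pre-contrast plus low-dose image) and producing a synthetic full-dose image. The channel widths, the input pairing, and the layer choices are illustrative assumptions of ours, not the exact configuration of the cited models.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Encoder-decoder with three down/up steps and skip connections."""
    def __init__(self, in_ch=2, out_ch=1):  # assumed inputs: pre-contrast + low-dose
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        self.up3, self.dec3 = nn.ConvTranspose2d(256, 128, 2, stride=2), conv_block(256, 128)
        self.up2, self.dec2 = nn.ConvTranspose2d(128, 64, 2, stride=2), conv_block(128, 64)
        self.up1, self.dec1 = nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)  # 1x1 conv maps features to the output image

    def forward(self, x):
        e1 = self.enc1(x)                                     # contracting path captures features
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))   # expanding path restores resolution,
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))  # with skip connections preserving
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # fine localization
        return self.head(d1)

# Toy forward pass on a random 2-channel slice (batch, channels, height, width)
synthetic_full_dose = SmallUNet()(torch.randn(1, 2, 128, 128))
```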

2.2. Generative Adversarial Networks in Synthetic Reconstruction

Another approach to producing synthetic post-contrast images relies on generative adversarial networks (GANs), which can be used to generate images from scratch and show increasing applications in the current literature. Starting from a set of paired data, these networks are able to generate new data that are indistinguishable from the ground truth. Current state-of-the-art applications of GANs include image denoising and quality improvement algorithms, such as the Wasserstein GAN (WGAN) [36] and the deep convolutional GAN (DCGAN) [37,38,39,40,41,42,43,44]. Regarding image segmentation and classification, another extension of the GAN architecture, the conditional GAN (c-GAN), has demonstrated promising results [45]. Additionally, Pix2Pix GANs [46] are a general GAN approach for image-to-image translation, which is very useful for generating new synthetic MR images starting from images of a different domain.
The most widely used GAN architectures to obtain synthetic diagnostic images ex novo from MRI and CT scans are CycleGAN networks [37,43,47]. These are a class of deep learning algorithms that rely on the simultaneous training of two types of network: one focused on data generation (the generator) and one focused on data discrimination (the discriminator). The neural networks compete against each other, learning the statistical distribution of the training data, which allows the generation of new examples from the same distribution [48]. Specifically, in a CycleGAN, two generator models and two discriminator models are trained simultaneously. One generator takes images from the first domain and generates images for the second domain; conversely, the other generator takes images from the second domain and outputs images for the first domain. The discriminator models determine the plausibility of the generated images. The CycleGAN additionally enforces cycle consistency: if the image output by the first generator is used as input to the second generator, the output of the second generator should match the original image. A schematic picture of the described architecture can be seen in Figure 1. In all the reported studies, these networks were capable of generating images of genuine diagnostic quality and information content, demonstrating the enormous advantages of this type of application.
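A minimal sketch of the generator objective in this scheme is shown below (PyTorch). The tiny stand-in networks, the random tensors, and the cycle-consistency weight of 10 are placeholders we introduce for illustration, not the configurations used in the cited studies.

```python
import torch
import torch.nn as nn

def tiny_net(in_ch=1, out_ch=1):
    # Stand-in for a full generator or discriminator backbone
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 3, padding=1),
    )

G_AB, G_BA = tiny_net(), tiny_net()   # generators: domain A -> B and B -> A
D_A, D_B = tiny_net(), tiny_net()     # discriminators judging plausibility per domain
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()

real_A = torch.randn(1, 1, 64, 64)    # e.g., a non-contrast slice (domain A)
real_B = torch.randn(1, 1, 64, 64)    # e.g., a contrast-enhanced slice (domain B)

# Adversarial term: each generator tries to make its output look real to the
# discriminator of the target domain (least-squares GAN formulation)
fake_B, fake_A = G_AB(real_A), G_BA(real_B)
loss_adv = adv_loss(D_B(fake_B), torch.ones_like(D_B(fake_B))) + \
           adv_loss(D_A(fake_A), torch.ones_like(D_A(fake_A)))

# Cycle-consistency term: translating A -> B -> A (and B -> A -> B) must
# reconstruct the original input
loss_cycle = cyc_loss(G_BA(fake_B), real_A) + cyc_loss(G_AB(fake_A), real_B)

loss_G = loss_adv + 10.0 * loss_cycle  # loss_G.backward() would update both generators
```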

2.3. Model Implementation

Model implementation typically includes three steps: (i) training, in which the model is trained on an appropriate dataset for the given task; (ii) internal validation, in which, during training, the model is continuously validated on data that are not part of the training set to evaluate its performance; and (iii) external validation, in which, after training, the model is tested on a separate dataset, from which the final metrics are extrapolated [49]. Internal and external validation are extremely important steps to understand a model’s robustness and reliability. Internal validation involves collecting and analyzing information about the model’s characteristics and outcomes, with the main purpose of ensuring that the model is operating correctly. External validation proves the generalizability of the model, which is a prerequisite for clinical use [50]. Steps (ii) and (iii) are key operations to assess the efficacy of a model during both the initial research and development phases.
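As a schematic example of this three-step workflow, the sketch below (Python, with scikit-learn for the split) uses randomly generated paired images as placeholders for a real cohort; the cohort sizes and the split ratio are arbitrary assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder paired data: model inputs X (e.g., non-contrast or low-dose images)
# and targets Y (acquired full-dose images) from a single institution
X, Y = rng.random((100, 64, 64)), rng.random((100, 64, 64))

# Steps (i) and (ii): hold out an internal validation set from the same cohort,
# used to monitor performance while the model is being trained
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.2, random_state=0)

# ... train the model on (X_train, Y_train), checking error on (X_val, Y_val) ...

# Step (iii): external validation on a fully separate cohort, ideally from a
# different site or scanner; the final reported metrics come from this set only
X_ext, Y_ext = rng.random((30, 64, 64)), rng.random((30, 64, 64))
```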
The choice of which metrics to use in steps (ii) and (iii) is strictly related to the type of model input and output to be compared [51]. For the purpose of generating synthetic contrast, the reconstructed images are expected to maintain the same information content and quality as the original images, to guarantee a correct diagnosis by physicians. Bustamante et al. carried out a visual inspection of the synthetic images obtained with GANs, finding no major differences compared to real images [37]. In experimental settings, different scores can be used to test the equivalence of generated vs. native images. An example of a rating score is the mean opinion score (MOS), an index of quality ranging between 1 and 5 (with 1 as the worst value) [52]:
$$\mathrm{MOS} = \frac{\sum_{n=1}^{N} R_n}{N}$$
where $R_n$ is the individual rating for the object given by each of the $N$ subjects.
As a quantitative evaluation, the current literature proposes the structural similarity index (SSIM), which accounts for luminance, contrast, and degradation of structural information between two images [33]. SSIM principally reflects structural information variation, providing a good approximation of the distortion between compared images. With x and y denoting the two nonnegative image signals, the measure is expressed as follows:
$$\mathrm{SSIM}(x,y) = \frac{\left(2\mu_x\mu_y + c_1\right)\left(2\sigma_{xy} + c_2\right)}{\left(\mu_x^2 + \mu_y^2 + c_1\right)\left(\sigma_x^2 + \sigma_y^2 + c_2\right)}$$
  • $\mu_x$, $\sigma_x^2$: average and variance of x
  • $\mu_y$, $\sigma_y^2$: average and variance of y
  • $\sigma_{xy}$: covariance of x and y
  • $c_1$, $c_2$: two variables that stabilize the division when the denominator is weak
Further information about this metric can be found in the study by Wang et al. [53].
Another commonly used measure to quantify image reconstruction quality is the peak signal-to-noise ratio (PSNR). This is evaluated as the ratio between the maximum possible power of an image and the power of the corrupting noise that affects its quality [34]. In other words, estimating the PSNR of an image requires comparing the reconstructed image against a reference, relative to the maximum possible signal power.
$$\mathrm{PSNR} = 10 \log_{10}\left(\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right)$$
  • $\mathrm{MAX}_I$: maximum possible pixel value of the image
  • MSE: mean squared error between the compared images
By definition, higher PSNR values indicate better quality of the reconstructed output images [54].
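In practice, these three scores are straightforward to compute. The sketch below uses scikit-image for SSIM and PSNR on a placeholder image pair and plain averaging for the MOS; all data and ratings are invented for illustration.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
reference = rng.random((256, 256))                              # stand-in for the acquired image
synthetic = reference + 0.05 * rng.standard_normal((256, 256))  # stand-in for the AI output

# MOS: the mean of N observer ratings on the 1-5 scale
ratings = np.array([4, 5, 4, 3, 5])                             # hypothetical reader scores
mos = ratings.mean()

# SSIM and PSNR of the synthetic image against the acquired reference;
# data_range is the dynamic range of the pixel values being compared
data_range = synthetic.max() - synthetic.min()
ssim = structural_similarity(reference, synthetic, data_range=data_range)
psnr = peak_signal_noise_ratio(reference, synthetic, data_range=data_range)
print(f"MOS = {mos:.2f}, SSIM = {ssim:.3f}, PSNR = {psnr:.1f} dB")
```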
Table 1. Synthetic reconstruction studies, including AI architectures, database used, specific algorithms, and task characteristics. In particular, the database used is expressed according to not-specified, private, or public availability, with the total number of patients included in brackets (N); when the database is public, the name is reported. CNN = convolutional neural network; GAN = generative adversarial network; BraTS = Brain Tumor Segmentation database.
| AI Architecture | Reference | Database (N) | Specific Algorithm | Task |
| --- | --- | --- | --- | --- |
| CNN | [30] | Private (60) | U-Net based | To obtain 100% full-dose 3D T1-weighted images from 10% low-dose 3D T1-weighted images |
| CNN | [31] | Private (640) | U-Net based | To obtain 100% full-dose 3D T1-weighted images from 10% low-dose 3D T1-weighted images and pre-contrast 3D T1-weighted images |
| CNN | [32] | BraTS2020 (369) | U-Net based | To obtain contrast-enhanced T1-weighted images from non-contrast-enhanced T1-weighted images |
| GAN | [37] | Not specified (222) | CycleGAN | To obtain high blood-tissue contrast from non-contrast 4D flow MRI |
| GAN | [43] | Private (26) | 2D-CycleGAN | To obtain contrast-enhanced CT from non-contrast-enhanced CT |
| GAN | [46] | Private (48) | Conditional GAN | To obtain fat-saturated T1-weighted images from non-contrast-enhanced T1-weighted images |
| GAN | [47] | BraTS2015 (50) | CycleGAN with integrated attention algorithm | To obtain contrast-enhanced T1-weighted images from non-contrast-enhanced T1-weighted images |

3. Clinical Applications in Neuroradiology

In neuroradiology, contrast enhancement of a lesion within the brain tissue reflects a rupture of the blood–brain barrier. This event occurs in many different brain diseases, such as some primary tumors and metastases, neuroinflammatory diseases, infections, and subacute ischemia, for which contrast injection during the MRI examination is considered mandatory for accurate assessment, differential diagnosis, and monitoring [55,56,57,58,59,60]. Other uses of contrast include the evaluation of vascular malformations and aneurysms. In this scenario, the implementation of AI algorithms to reduce contrast usage could result in significant benefits for patients, as well as reduced scan times and costs [61]. However, the main drawback of AI analysis, especially through deep learning (DL) methods, resides in the need for a large quantity of data. For this reason, the literature on AI virtual contrast in neuroradiology focuses on MRI examinations of relatively common diseases, such as tumors and multiple sclerosis (MS) (Table 2).

3.1. AI in Neuro-Oncology Imaging

In neuro-oncology, contrast enhancement is particularly useful not only as a marker for differential diagnosis and progression, but also as the target for neurosurgical removal of a lesion and an indicator of possible recurrence. Although in recent years some authors have suggested expanding the surgical resection of brain tumors beyond the contrast enhancement [62,63], gadolinium injection remains the standard for both first and follow-up MR scans. To avoid the use of gadolinium, Kleesiek et al. applied a Bayesian DL architecture to predict contrast enhancement from non-contrast MR images in patients with gliomas of different grades and in healthy controls [64]. The authors obtained good results in terms of qualitative and quantitative assessment (approximate sensitivity and specificity of 91%). Similarly, other studies applied DL methods to pre-contrast MR images of a large group of glioblastomas and low-grade gliomas, reporting a good structural similarity index between simulated and real post-contrast imaging and preserving the neuroradiologist’s ability to determine the tumor grade [32,65]. These methods can also be applied to sequences other than T1. Recently, Wang et al. developed a GAN to synthesize 3D isotropic contrast-enhanced FLAIR images from a 2D non-contrast FLAIR image stack in 185 patients with different brain tumors [66]. Interestingly, the authors went beyond simple contrast synthesis and added super-resolution and anti-aliasing tasks to correct MR artifacts and create isotropic 3D images, which allow a better visualization of the tumor, with a good structural similarity index to the source images [66]. Calabrese et al. obtained good results in synthesizing contrast-enhanced T1 images from non-contrast ones in 400 patients with glioblastoma and lower-grade glioma [65]. In addition, the authors included an external validation analysis, which is always recommended in DL-based studies [65]. However, the simulated images appeared blurrier than the real ones, a problem that could especially affect the discrimination of progression in follow-up exams [65]. This shortcoming appears to be a common issue of all ‘simulated imaging’ studies. As stated above, contrast enhancement reflects disruption of the blood–brain barrier, information that is usually inferred from the pharmacokinetics of gadolinium-based contrasts within the brain vasculature; this explains the difficulty of generating such information from sequences that may not contain it. Moreover, virtual contrast may hinder the interpretation of derived measures, such as perfusion-weighted imaging, which have been proven crucial for differential diagnosis and prognosis prediction in neuro-oncology [15,16,67,68]. Future directions could make use of ultrahigh-field scanners, which may have enough resolution to come closer to molecular imaging. In the meantime, another approach has been explored to address the issue. Rather than eliminating contrast injection, different studies used AI algorithms to enhance a reduced dose of gadolinium (10% or 25% of the standard dose), a method defined as ‘augmented contrast’ [30,31,69,70]. This method, applied to images of different brain lesions, including meningiomas and arteriovenous malformations, allows the detection of a blood–brain barrier rupture with a significantly lower contrast dose. Another advantage is the better quality of the synthesized images, as perceived by the evaluating neuroradiologists [30,31].
Such benefits persist with data obtained across different scanners, including both 1.5T and 3T field strengths, a fundamental step for the generalizability of results [30,69,70]. Nevertheless, augmented contrast techniques are not exempt from limitations. Frequently encountered issues are the difficulty of detecting small lesions, the presence of false positive enhancement, probably due to flow or motion artifacts, and coregistration mismatch [31,70]. Another concern is the lack of time control between the two contrast injections. Most studies perform the MRI in a single session by acquiring the low-dose sequence first, followed by full-dose imaging after injecting the remaining dose of gadolinium [30,31,69,70]. The resulting full-dose images are, thus, a combination of the late-delayed pre-dose (10% or 25%) and the standard-delayed contrast, which can produce a slightly different enhancement pattern from a standard full-contrast injection. Future studies could acquire low- and standard-dose exams on separate days, with controlled post-contrast timing. Lastly, future directions could include the prediction of different types of contrast imaging; in fact, other types of contrast are being developed to give additional information on pathologic processes, such as neuroinflammation [71]. Another interesting application of AI in neuro-oncology imaging consists of augmenting the contrast signal ratio in standard-dose T1 images, in order to better delineate tumors and detect smaller lesions. Increasing the contrast signal for better detection of tumors has, in fact, always been a goal in the development of new MR sequences, recently leading to a consensus for brain tumor imaging, especially for metastases [72]. A recent study by Ye et al. used a multidimensional integration method to increase the signal-to-noise ratio in T1 gradient echo images [73], which also resulted in contrast signal enhancement. By comparison, Bône et al. implemented an AI-based method to increase the contrast signal similarly to the ‘augmented contrast’ studies [74]. The authors trained an artificial neural network with pre-contrast T1, FLAIR, ADC, and low-dose T1 (0.025 mmol/kg) images. Once trained, the model was leveraged to amplify the contrast of routine full-dose T1 images by processing them into high-contrast T1 images. The hypothesis behind this process was that the neural network had learned to amplify the difference in contrast between pre-contrast and low-dose images; hence, by replacing the low-dose sequence with a full-dose one, the synthesis effectively yielded quadruple-dose contrast [74]. The results showed a significant improvement in image quality, contrast level, and lesion detection performance, with a 16% increase in sensitivity and a similar false detection rate with respect to routine acquisition.

3.2. AI in Multiple Sclerosis Imaging

MS is the most common chronic demyelinating disorder, and it mostly affects young adults [75]. MRI is a fundamental tool in both MS diagnosis and follow-up, and gadolinium injection is usually mandatory. In fact, contrast enhancement is essential to establish dissemination in time according to the McDonald criteria, and to evaluate ‘active’ lesions in follow-up scans [76]. Due to the relatively young age at diagnosis, MS patients undergo numerous MRIs throughout the years. For this reason, different research groups are trying to limit contrast administration in selected cases to minimize exposure and costs [40,53]. One study developed a DL algorithm to predict enhancing MS lesions from non-enhanced MR sequences in nearly 2000 scans, reporting an accuracy between 70% and 75% and paving the way for further research [77]. The authors used only conventional imaging, such as T1, T2, and FLAIR. Future studies could add DWI to the analysis, as this sequence has been proven useful in identifying active lesions [78]. The small size of MS plaques, as already noted above for brain tumor studies, could be a reason for the low accuracy of synthetic contrast imaging; future work could try enhancing low-dose contrast injections by means of AI algorithms.
In conclusion, virtual and augmented contrast imaging may play a valuable role in neuroradiology for assessing the many diseases for which gadolinium injection is, for now, mandatory.
Table 2. Studies in neuroradiology with field of application, database used, tasks, and results characteristics. In particular, the database used is expressed according to not-specified, private, or public availability, with the total number of patients included in brackets (N); when the database is public, the name is reported. BraTS = Brain Tumor Segmentation database; SSIM = structural similarity index measure; PSNR = peak signal-to-noise ratio; FLAIR = fluid attenuated inversion recovery; PCC = Pearson correlation coefficient; SNR = signal-to-noise ratio; MS = multiple sclerosis; MR = magnetic resonance.
| Field of Application | Reference | Database (N) | Task | Results |
| --- | --- | --- | --- | --- |
| Neuro-oncology | [30] | Private (60) | To obtain 100% full-dose 3D T1-weighted images from 10% low-dose 3D T1-weighted images | SSIM and PSNR increase by 11% and 5 dB |
| Neuro-oncology | [31] | Private (640) | To obtain 100% full-dose 3D T1-weighted images from 10% low-dose 3D T1-weighted images and pre-contrast 3D T1-weighted images | SSIM 0.92 ± 0.02; PSNR 35.07 ± 3.84 dB |
| Neuro-oncology | [32] | BraTS2020 (369) | To obtain contrast-enhanced T1-weighted images from non-contrast-enhanced MR images | SSIM and PCC 0.991 ± 0.007 and 0.995 ± 0.006 (whole brain); 0.993 ± 0.008 and 0.999 ± 0.003 |
| Neuro-oncology | [66] | Private 1 (185); Private 2 (36); BraTS2018 (73, for super resolution) | To obtain 3D isotropic contrast-enhanced T2 FLAIR images from non-contrast-enhanced 2D FLAIR images, with image super resolution | SSIM 0.932 (whole brain), 0.851 (tumor region); PSNR 31.25 dB (whole brain), 24.93 dB (tumor region) |
| Neuro-oncology | [64] | Private (116) | To obtain contrast-enhanced T1-weighted images from non-contrast-enhanced MR images | Sensitivity 91.8%; specificity 91.2% |
| Neuro-oncology | [65] | Private (400); BraTS2020 (286, external validation) | To obtain contrast-enhanced T1-weighted images from non-contrast-enhanced MR images | SSIM 0.84 ± 0.05 |
| Neuro-oncology | [69] | Private (83) | To obtain 100% full-dose 3D T1-weighted images from 10% low-dose 3D T1-weighted images | Image quality 0.73; image SNR 0.63; lesion conspicuity 0.89; lesion enhancement 0.87 |
| Neuro-oncology | [70] | Private (145) | To obtain 100% full-dose 3D T1-weighted images from 25% low-dose 3D T1-weighted images | SSIM 0.871 ± 0.04; PSNR 31.6 ± 2 dB |
| Neuro-oncology | [74] | Private (250) | To maximize contrast in full-dose 3D T1-weighted images | Sensitivity in lesion detection increased by 16% |
| Multiple sclerosis | [77] | Private (1970) | To identify enhancing MS lesions from non-enhanced MR images | Sensitivity and specificity 78% ± 4.3 and 73% ± 2.7 (slice-wise); 72% ± 9 and 70% ± 6.3 |

4. Clinical Applications in Body Imaging

4.1. Abdominal and Thoracic Radiology

Imaging is a fundamental tool in both the abdomen and the thorax, ranging from diagnosis in the emergency setting to more complex tasks such as tumor differential diagnosis, therapeutic planning, and follow-up. Unlike in neuroradiology, the implementation of synthetic contrast algorithms in these regions is hindered by two main issues: (1) patient respiration, which may lead to misalignment between different acquisitions, one of the main obstacles to the development of synthetic imaging [30]; and (2) the presence of multiple different organs in these compartments, leading to possible anatomy misclassification. The latter is mainly a problem of abdominal imaging. Nevertheless, some studies have attempted to generate synthetic contrast imaging in these two contexts (Table 3).
MRI of liver neoplasms heavily relies on GBCA injection, both for discerning liver cancer from other entities (e.g., hemangiomas) and because some lesions are invisible before GBCA injection [79]. A study by Zhao et al. used a tripartite-GAN model to generate contrast-enhanced T1 images from non-contrast ones in 265 patients with either hepatocellular carcinoma or hemangioma, plus a few subjects without lesions, achieving a good signal-to-noise ratio and high tumor detection accuracy [80]. However, the resulting enhanced images are coarse and rely mainly on tumor structural information, leaving out other important features such as the presence of a capsule or infiltrative growth. To address these shortcomings, Xu et al. developed a reinforcement learning model that relies on a pixel-level graph to evaluate the images and synthesize contrast-enhanced sequences [81], trained on 325 subjects with both benign and malignant tumors, as well as healthy controls. The authors achieved a structural similarity index of 0.85 between synthesized and acquired images, prompting more feasible clinical use. However, both studies used only non-contrast T1 images for AI training. Future research should investigate the possibility of adding other sequences (T2, DWI) to boost model performance.
CT scans for abdominal pain are very common in the emergency room setting and very useful for diagnosis and decision making [82,83]. Since contrast injection is not required for all patients with abdominal pain, Kim et al. sought to increase the diagnostic performance of non-contrast CT (NCCT) by generating contrast-enhanced CT (CECT) through a DL algorithm in more than 500 patients (divided into training, test, and external validation sets) [84]. The consultant and in-training radiologists involved in the research reported increased diagnostic confidence, especially for oncologic conditions, biliary disease, and inflammatory conditions (appendicitis, pancreatitis, diverticulitis) [84]. However, the main drawback of the study was the increased confidence in both correct and incorrect diagnoses, raising some concerns about the utility of this approach. Future directions should focus more on diagnostic accuracy.
Contrast agents are not usually required for the detection of lung parenchymal lesions on CT. Nonetheless, CECT is useful for the evaluation of other thoracic structures, such as the mediastinum, pleura, and vessels. In this regard, Choi et al. generated CECT from NCCT in a small group of 63 patients (with external validation) acquired on scanners from different vendors and with various scanning parameters [85]. The authors evaluated the conspicuity of mediastinal lymph nodes, which was found to be higher on the synthesized images [85]. However, the AI was fed with ‘virtual non-contrast’ images obtained on dual-energy CT, which enabled perfect spatial registration between the input and the ground truth; nonetheless, this also raises concerns about real ‘contrast synthesis’, since obtaining virtual non-contrast imaging still requires contrast injection. Other interesting thoracic CT applications include contrast synthesis for the evaluation of the left cardiac chambers [86] and for better delineation of the heart during radiotherapy for breast cancer [87], resulting in a lower risk of radiation-induced cardiac toxicity.

4.2. Cardiovascular Radiology

Computed tomography angiography (CTA) is a CT technique that relies on iodine-based contrast administration to visualize the arteries, and to diagnose and evaluate related diseases, such as aneurysms, dissection, or stenosis. With the widespread availability of state-of-the-art multidetector technology, CTA has become the imaging test of choice for various aortic conditions because of its excellent spatial resolution and rapid image acquisition. CTA provides a robust tool for planning aortic interventions and diagnosing acute and chronic vascular diseases. It is the standard for imaging aneurysms before intervention and for evaluating the aorta in the acute setting to assess traumatic injury, dissection, and aneurysm rupture [88,89]. Furthermore, the recently published results of the DISCHARGE trial [90] support the use of CTA instead of invasive angiography for the assessment of coronary artery disease in patients with stable chest pain. For all these reasons, the diffusion of CTA will likely increase considerably in the near future.
To avoid the use of contrast, Chandrashekar et al. hypothesized that raw data acquired from an NCCT could be used to differentiate blood from other soft tissues [91]. Blood is predominantly fluid, containing red and white blood cells, whereas a possible adjacent intraluminal thrombus (ILT) is predominantly fibrinous and collagenous, containing red cells and platelets. These individual components vary in ultrastructure and physical density, which should be reflected in different (albeit subtle) Hounsfield units (HUs) on a CT scan, either in individual HU values or in their spatial distribution (‘texture’). Using deep learning (DL) generative methods, these subtle differences can be amplified to enable the simulation of contrast-enhanced images without the use of contrast agents.
In this study, the authors investigated the ability of GANs to perform this non-contrast to contrast image transformation task [92,93]. They first investigated differences between visually indistinguishable regions (lumen, interface, thrombus) within the NCCT, comparing the HU intensity distributions and radiomic features; the latter have been used to find disease features that cannot be appreciated visually. The authors then showed that generative models enable the visualization of aortic aneurysm morphology in CT scans obtained without intravenous (IV) contrast administration, and that transformation accuracy was independent of abdominal aortic aneurysm (AAA) size and shape.
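As a toy illustration of this premise, region-wise HU statistics can be compared directly on a non-contrast slice. In the sketch below, the slice, the mask positions, and the HU distributions are synthetic placeholders, not data from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in NCCT slice: soft-tissue HU values with mild noise
ct_slice = rng.normal(40.0, 15.0, size=(512, 512))

# Hypothetical hand-drawn masks for two visually indistinguishable regions
lumen_mask = np.zeros((512, 512), dtype=bool)
lumen_mask[200:260, 200:260] = True
thrombus_mask = np.zeros((512, 512), dtype=bool)
thrombus_mask[300:360, 200:260] = True

# Compare first-order HU statistics inside each region; generative models aim
# to amplify exactly these subtle, sub-visual differences
for name, mask in [("lumen", lumen_mask), ("thrombus", thrombus_mask)]:
    hu = ct_slice[mask]
    print(f"{name}: mean {hu.mean():.1f} HU, sd {hu.std():.1f} HU")
```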
Some authors have shown that the evolution of 3D indices, especially thrombus volume, is linked to AAA progression, rupture risk, and even the incidence of adverse cardiovascular events [94,95,96]. Furthermore, assessing ILT spatial morphology is important for surgical planning and has been shown to influence postoperative outcomes, as in the case of type 2 endoleak onset [97]. Hence, this evidence reinforces the clinical impact of using DL generative networks for image transformation. Using DL for pseudo-contrast CT visualization of the AAA, the ILT, and the aortic side branches from an NCCT is a promising technique and a safer alternative to the routinely obtained contrast-enhanced CTA. Future studies are needed to validate the clinical utility of such techniques, especially in pre-operative endovascular graft planning.
In cardiac magnetic resonance (CMR), the use of GBCAs is essential for detecting focal myocardial lesions and fibrosis in a variety of cardiovascular diseases using late gadolinium enhancement (LGE) sequences [98,99]. The presence and extent of LGE are independent risk factors for adverse outcomes [100,101,102,103,104,105]. Conventional LGE depends on the intravenous administration of a GBCA and requires at least 10 min after injection to visualize contrast redistribution [106]. LGE image quality also depends on the appropriate adjustment of the inversion time (TI), although the phase-sensitive inversion recovery technique is less sensitive to the TI setting.
Zhang et al. hypothesized that native T1 maps may be transformed into images similar to LGE images [107]. Using novel AI approaches, a virtual native enhancement (VNE) imaging technology was developed [43,47], which exploits and enhances existing contrast and signals within the native T1 maps and cine frames, and displays them in a standardized presentation. VNE imaging was then validated against matching LGE for image quality, visuospatial agreement, and myocardial lesion quantification. This approach was first developed in hypertrophic cardiomyopathy (HCM), because its regional heterogeneity and diverse tissue remodeling processes make it an ideal test case for a wide range of cardiac diseases.
The proposed VNE technology uses two components: native T1 mapping images, which provide image contrast and signal changes in myocardial tissue [108,109,110], and pre-contrast cine frames of a cardiac cycle, which provide additional wall motion information and more defined myocardial borders. These images were fed to a DL generator to derive a VNE image: an AI technology transforming the native T1 map (together with cines) into a more readable presentation of LGE, ready for standard clinical interpretation [93,111,112,113]. The DL model effectively acts as a ‘virtual contrast agent’, creating a ‘virtual LGE’ image from native CMR sequences. Zhang et al. showed that VNE images had significantly better quality than LGE images, demonstrating high agreement with LGE in myocardial lesion visuospatial distribution and quantification [107]. The clinical utility of detecting subtle lesions (often also seen on LGE) remains unclear, in line with the literature reporting the sensitivity of T1 mapping to early myocardial changes in HCM patients.
Finally, VNE technology has the potential to significantly improve clinical practice in CMR imaging, as it may allow faster, lower-cost, and contrast-free CMR scans, enabling frequent monitoring of myocardial tissue changes.

4.3. Head and Neck Radiology

In head and neck radiology, MRI is a very useful diagnostic tool for the identification, staging, radiotherapy and surgery planning, and treatment evaluation of certain malignancies, such as nasopharyngeal carcinoma (NPC) [114], especially at a stage when the disease is not visible on endoscopy [115]. GBCAs are used because of their ability to enhance the detectability and boundaries of these tumors [116].
Very few applications of AI-powered contrast media have been proposed in head and neck imaging to date. An exploratory study fed a DL model with non-enhanced conventional MR sequences (T1- and T2-weighted), with the aim of distinguishing between NPC and benign hyperplasia, as well as assessing the T stage, in more than 4000 subjects. The model succeeded in discriminating NPC from benign hyperplasia with an accuracy comparable to that of a model including post-contrast T1 sequences (99%) [117]. In addition, when both T1 and T2 non-contrast sequences were evaluated together, the model could predict the T stage comparably to enhanced sequences [117].
Although more research, including external validation, is needed to confirm the generalizability of these results, this study represents a good example of contrast substitution in oncologic radiology and suggests testing the approach in other malignancies.
Table 3. Studies in body imaging with field of application, database used, tasks, and results characteristics. In particular, the database used is expressed according to not-specified, private, or public availability, with the total number of patients included in brackets (N); when the database is public, the name is reported. SSIM = structural similarity index measure; PSNR = peak signal-to-noise ratio; PCC = Pearson correlation coefficient; MR = magnetic resonance; CT = computed tomography; ASD = average surface distance; DSC = dice similarity coefficient; NPC = nasopharyngeal carcinoma; FS = fat-suppressed.
| Field of Application | Reference | Database (N) | Task | Results |
| --- | --- | --- | --- | --- |
| Abdominal imaging | [80] | Private (265) | To obtain contrast-enhanced T1-weighted FS images from non-contrast-enhanced T1-weighted FS images | PSNR 28.8; accuracy 89.4% |
| Abdominal imaging | [81] | Private (325) | To obtain contrast-enhanced T1-weighted images from non-contrast-enhanced T1-weighted images | SSIM 0.85 ± 0.06; PCC 0.92 |
| Abdominal imaging | [84] | Private (500) | To obtain contrast-enhanced CT from non-contrast-enhanced CT | Increased diagnostic confidence and accuracy of radiologists |
| Thoracic imaging | [85] | Private (63) | To obtain contrast-enhanced CT from non-contrast-enhanced CT | SSIM 0.84; PSNR 17.44; contrast-to-noise ratio (lymph nodes) 6.15 ± 5.18 |
| Head and neck imaging | [117] | Private (4478) | To distinguish NPC from hyperplasia in non-contrast-enhanced MR images only, and including contrast-enhanced MR images in the model | Similar ASD and DSC between the two models |

5. Conclusions and Future Directions

AI-powered contrast media have proven capable of delivering impressive results in several conditions. While current applications to the new diagnosis of enhancing lesions, especially in neuro-oncology [70], are still limited and need to be confirmed in larger studies, virtual and augmented contrasts have demonstrated promising results in disease follow-up over time. This is especially true for multiple sclerosis, where contrast administration may be repeated for years, leading to brain deposition. Many applications of AI-powered contrast media are likely to develop in the next few years in the field of body imaging, as demonstrated by the growing evidence of successful malignancy characterization on non-contrast MR with the use of AI [118,119]. Specifically, the characterization of liver cancer (for which there are already several studies) and prostate cancer may significantly benefit from these techniques, decreasing unessential exposure to contrast media. Another promising field of application is pediatric imaging. Contrast media toxicity and deposition are even more concerning in the pediatric population, due to long life expectancy. AI-powered contrast media may find fertile ground in pediatric imaging, such as in the follow-up of demyelinating disorders [120], metabolic diseases [121,122], phacomatoses [123], and post-treatment changes [124]. Furthermore, AI architectures are subject to constant growth and improvement. The implementation of graphical user interfaces (GUIs), through which it is possible to fully exploit the generative potential of GAN architectures [125], is currently under development. These advancements will lead to broader clinical applications of virtual and augmented contrasts in the near future.

Author Contributions

Conceptualization, L.P., A.N., A.D.N., M.P., E.T. and C.P.; methodology, L.P., A.N. and A.D.N.; validation, L.P., A.D.N. and A.N.; investigation L.P., A.N., A.D.N., M.P., E.T., C.P. and F.N.; resources, A.R., A.B. and A.N.; data curation, L.P., A.N., A.D.N., M.P. and E.T.; writing—original draft preparation, L.P., A.N., A.D.N., M.P., E.T. and C.P.; writing—review and editing, all authors; visualization L.P., A.N., A.D.N., M.P., E.T. and C.P.; supervision, A.B. and A.R.; project administration, L.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Faucon, A.L.; Bobrie, G.; Clément, O. Nephrotoxicity of Iodinated Contrast Media: From Pathophysiology to Prevention Strategies. Eur. J. Radiol. 2019, 116, 231–241. [Google Scholar] [CrossRef] [PubMed]
  2. Cowper, S.E.; Robin, H.S.; Steinberg, S.M. Scleromyxoedema-like cutaneous diseases in renal-dialysis patients. Lancet 2000, 356, 1000–1001. [Google Scholar] [CrossRef]
  3. Pasquini, L.; Napolitano, A.; Visconti, E.; Longo, D.; Romano, A.; Tomà, P.; Espagnet, M.C.R. Gadolinium-Based Contrast Agent-Related Toxicities. CNS Drugs 2018, 32, 229–240. [Google Scholar] [CrossRef] [PubMed]
  4. Rossi Espagnet, M.C.; Bernardi, B.; Pasquini, L.; Figà-Talamanca, L.; Tomà, P.; Napolitano, A. Signal Intensity at Unenhanced T1-Weighted Magnetic Resonance in the Globus Pallidus and Dentate Nucleus after Serial Administrations of a Macrocyclic Gadolinium-Based Contrast Agent in Children. Pediatr. Radiol. 2017, 47, 1345–1352. [Google Scholar] [CrossRef]
  5. Pasquini, L.; Rossi Espagnet, M.C.; Napolitano, A.; Longo, D.; Bertaina, A.; Visconti, E.; Tomà, P. Dentate Nucleus T1 Hyperintensity: Is It Always Gadolinium All That Glitters? Radiol. Med. 2018, 123, 469–473. [Google Scholar] [CrossRef]
  6. Lohrke, J.; Frisk, A.L.; Frenzel, T.; Schöckel, L.; Rosenbruch, M.; Jost, G.; Lenhard, D.C.; Sieber, M.A.; Nischwitz, V.; Küppers, A.; et al. Histology and Gadolinium Distribution in the Rodent Brain after the Administration of Cumulative High Doses of Linear and Macrocyclic Gadolinium-Based Contrast Agents. Invest Radiol. 2017, 52, 324–333. [Google Scholar] [CrossRef] [Green Version]
  7. Kanda, T.; Fukusato, T.; Matsuda, M.; Toyoda, K.; Oba, H.; Kotoku, J.; Haruyama, T.; Kitajima, K.; Furui, S. Gadolinium-Based Contrast Agent Accumulates in the Brain Even in Subjects without Severe Renal Dysfunction: Evaluation of Autopsy Brain Specimens with Inductively Coupled Plasma Mass Spectroscopy. Radiology 2015, 276, 228–232. [Google Scholar] [CrossRef]
  8. McDonald, R.J.; McDonald, J.S.; Kallmes, D.F.; Jentoft, M.E.; Murray, D.L.; Thielen, K.R.; Williamson, E.E.; Eckel, L.J. Intracranial Gadolinium Deposition after Contrast-Enhanced MR Imaging. Radiology 2015, 275, 772–782. [Google Scholar] [CrossRef] [Green Version]
  9. European Medicines Agency. EMA’s Final Opinion Confirms Restrictions on Use of Linear Gadolinium Agents in Body Scans. Available online: http://www.ema.europa.eu/docs/en_%0DGB/document_library/Referrals_document/gadolinium_contrast_%0Dagents_31/European_Commission_final_decision/WC500240575.%0Dpdf (accessed on 1 June 2022).
  10. ESUR Guidelines on Contrast Agents v.10.0. Available online: https://www.esur.org/esur-guidelines-on-contrast-agents/ (accessed on 1 June 2022).
  11. Bottino, F.; Lucignani, M.; Napolitano, A.; Dellepiane, F.; Visconti, E.; Espagnet, M.C.R.; Pasquini, L. In Vivo Brain Gsh: Mrs Methods and Clinical Applications. Antioxidants 2021, 10, 1407. [Google Scholar] [CrossRef]
  12. Akhtar, M.J.; Ahamed, M.; Alhadlaq, H.; Alrokayan, S. Toxicity Mechanism of Gadolinium Oxide Nanoparticles and Gadolinium Ions in Human Breast Cancer Cells. Curr. Drug Metab. 2019, 20, 907–917. [Google Scholar] [CrossRef]
  13. Romano, A.; Rossi-Espagnet, M.C.; Pasquini, L.; Di Napoli, A.; Dellepiane, F.; Butera, G.; Moltoni, G.; Gagliardo, O.; Bozzao, A. Cerebral Venous Thrombosis: A Challenging Diagnosis; A New Nonenhanced Computed Tomography Standardized Semi-Quantitative Method. Tomography 2022, 8, 1–9. [Google Scholar] [CrossRef] [PubMed]
  14. Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images Are More than Pictures, They Are Data. Radiology 2016, 278, 563–577. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Pasquini, L.; Napolitano, A.; Tagliente, E.; Dellepiane, F.; Lucignani, M.; Vidiri, A.; Ranazzi, G.; Stoppacciaro, A.; Moltoni, G.; Nicolai, M.; et al. Deep Learning Can Differentiate IDH-Mutant from IDH-Wild Type GBM. J. Pers. Med. 2021, 11, 290. [Google Scholar] [CrossRef]
  16. Pasquini, L.; Napolitano, A.; Lucignani, M.; Tagliente, E.; Dellepiane, F.; Rossi-Espagnet, M.C.; Ritrovato, M.; Vidiri, A.; Villani, V.; Ranazzi, G.; et al. AI and High-Grade Glioma for Diagnosis and Outcome Prediction: Do All Machine Learning Models Perform Equally Well? Front. Oncol. 2021, 11, 601425. [Google Scholar] [CrossRef]
  17. Bottino, F.; Tagliente, E.; Pasquini, L.; Di Napoli, A.; Lucignani, M.; Talamanca, L.F.; Napolitano, A. COVID Mortality Prediction with Machine Learning Methods: A Systematic Review and Critical Appraisal. J. Pers. Med. 2021, 11, 893. [Google Scholar] [CrossRef] [PubMed]
  18. Verma, V.; Simone, C.B.; Krishnan, S.; Lin, S.H.; Yang, J.; Hahn, S.M. The Rise of Radiomics and Implications for Oncologic Management. J. Natl. Cancer Inst. 2017, 109, 2016–2018. [Google Scholar] [CrossRef] [PubMed]
  19. Larue, R.T.H.M.; Defraene, G.; de Ruysscher, D.; Lambin, P.; van Elmpt, W. Quantitative Radiomics Studies for Tissue Characterization: A Review of Technology and Methodological Procedures. Br. J. Radiol. 2017, 90, 20160665. [Google Scholar] [CrossRef]
  20. McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence—august 31, 1955. AI Mag. 2006, 27, 12. [Google Scholar]
  21. Hamet, P.; Tremblay, J. Artificial Intelligence in Medicine. Metabolism 2017, 69, S36–S40. [Google Scholar] [CrossRef]
  22. Yang, R.; Yu, Y. Artificial Convolutional Neural Network in Object Detection and Semantic Segmentation for Medical Imaging Analysis. Front. Oncol. 2021, 11, 638182. [Google Scholar] [CrossRef]
  23. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. Adv. Neural. Inf. Process Syst. 2014, 3, 2672–2680. [Google Scholar] [CrossRef]
  24. Lundervold, A.S.; Lundervold, A. An Overview of Deep Learning in Medical Imaging Focusing on MRI. Z Med. Phys. 2019, 29, 102–127. [Google Scholar] [CrossRef] [PubMed]
  25. Akay, A.; Hess, H. Deep Learning: Current and Emerging Applications in Medicine and Technology. In Proceedings of the IEEE Journal of Biomedical and Health Informatics; IEEE: Piscataway, NJ, USA, 2019; Volume 23, pp. 906–920. [Google Scholar] [CrossRef]
  26. Fukushima, K.; Miyake, S. Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Visual Pattern Recognition. In Competition and Cooperation in Neural Nets; Lecture Notes in Biomathematics; Springer: Berlin/Heidelberg, Germany, 1982. [Google Scholar]
  27. Kumar, A.; Kim, J.; Lyndon, D.; Fulham, M.; Feng, D. An Ensemble of Fine-Tuned Convolutional Neural Networks for Medical Image Classification. In Proceedings of the IEEE Journal of Biomedical and Health Informatics; IEEE: Piscataway, NJ, USA, 2017; Volume 21, pp. 31–40. [Google Scholar] [CrossRef] [Green Version]
  28. Cai, L.; Gao, J.; Zhao, D. A Review of the Application of Deep Learning in Medical Image Classification and Segmentation. Ann. Transl. Med. 2020, 8, 713. [Google Scholar] [CrossRef]
  29. Yadav, S.S.; Jadhav, S.M. Deep Convolutional Neural Network Based Medical Image Classification for Disease Diagnosis. J. Big Data 2019, 6, 113. [Google Scholar] [CrossRef] [Green Version]
  30. Gong, E.; Pauly, J.M.; Wintermark, M.; Zaharchuk, G. Deep Learning Enables Reduced Gadolinium Dose for Contrast-Enhanced Brain MRI. J. Magn. Reson. Imaging 2018, 48, 330–340. [Google Scholar] [CrossRef] [PubMed]
  31. Pasumarthi, S.; Tamir, J.I.; Christensen, S.; Zaharchuk, G.; Zhang, T.; Gong, E. A Generic Deep Learning Model for Reduced Gadolinium Dose in Contrast-Enhanced Brain MRI. Magn. Reason. Med. 2021, 86, 1687–1700. [Google Scholar] [CrossRef] [PubMed]
  32. Xie, H.; Lei, Y.; Wang, T.; Roper, J.; Axente, M.; Bradley, J.D.; Liu, T.; Yang, X. Magnetic Resonance Imaging Contrast Enhancement Synthesis Using Cascade Networks with Local Supervision. Med. Phys. 2022, 49, 3278–3287. [Google Scholar] [CrossRef]
  33. Lee, D.; Yoo, J.; Ye, J. Deep Artifact Learning for Compressed Sensingand Parallel MRI. arXiv Prepr. 2017, arXiv:1703.01120. [Google Scholar]
  34. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  35. Gong, E.; Pauly, J.; Zaharchuk, G. Improving the PI1CS Reconstructionfor Highly Undersampled Multi-Contrast MRI Using Local Deep Network. In Proceedings of the 25th Scientific Meeting ISMRM, Honolulu, HI, USA, 22–27 April 2017; p. 5663. [Google Scholar]
  36. Ran, M.; Hu, J.; Chen, Y.; Chen, H.; Sun, H.; Zhou, J.; Zhang, Y. Denoising of 3D Magnetic Resonance Images Using a Residual Encoder–Decoder Wasserstein Generative Adversarial Network. Med. Image Anal. 2019, 55, 165–180. [Google Scholar] [CrossRef] [Green Version]
  37. Bustamante, M.; Viola, F.; Carlhäll, C.J.; Ebbers, T. Using Deep Learning to Emulate the Use of an External Contrast Agent in Cardiovascular 4D Flow MRI. J. Magn. Reson. Imaging 2021, 54, 777–786. [Google Scholar] [CrossRef]
  38. Zhang, H.; Li, H.; Dillman, J.R.; Parikh, N.A.; He, L. Multi-Contrast MRI Image Synthesis Using Switchable Cycle-Consistent Generative Adversarial Networks. Diagnostics 2022, 12, 816. [Google Scholar] [CrossRef] [PubMed]
  39. Zhao, K.; Zhou, L.; Gao, S.; Wang, X.; Wang, Y.; Zhao, X.; Wang, H.; Liu, K.; Zhu, Y.; Ye, H. Study of Low-Dose PET Image Recovery Using Supervised Learning with CycleGAN. PLoS ONE 2020, 15, e0238455. [Google Scholar] [CrossRef] [PubMed]
  40. Lyu, Q.; You, C.; Shan, H.; Zhang, Y.; Wang, G. Super-Resolution MRI and CT through GAN-CIRCLE. In Developments in X-ray Tomography XII; SPIE: Bellingham, WA, USA, 2019; Volume 11113, pp. 202–208. [Google Scholar] [CrossRef]
  41. Gregory, S.; Cheng, H.; Newman, S.; Gan, Y. HydraNet: A Multi-Branch Convolutional Neural Network Architecture for MRI Denoising. In Medical Imaging 2021: Image Processing; SPIE: Bellingham, WA, USA, 2021; Volume 11596, pp. 881–889. [Google Scholar] [CrossRef]
  42. Hiasa, Y.; Otake, Y.; Takao, M.; Matsuoka, T.; Takashima, K.; Carass, A.; Prince, J.L.; Sugano, N.; Sato, Y. Cross-Modality Image Synthesis from Unpaired Data Using CycleGAN. In International Workshop on Simulation and Synthesis in Medical Imaging; Springer: Cham, Switzerland, 2018; Volume 11037, pp. 31–41. [Google Scholar] [CrossRef]
  43. Chandrashekar, A.; Shivakumar, N.; Lapolla, P.; Handa, A.; Grau, V.; Lee, R. A Deep Learning Approach to Generate Contrast-Enhanced Computerised Tomography Angiograms without the Use of Intravenous Contrast Agents. Eur. Heart J. 2020, 41, ehaa946.0156. [Google Scholar] [CrossRef]
  44. Sandhiya, B.; Priyatharshini, R.; Ramya, B.; Monish, S.; Sai Raja, G.R. Reconstruction, Identification and Classification of Brain Tumor Using Gan and Faster Regional-CNN. In Proceedings of the 2021 3rd International Conference on Signal Processing and Communication (ICPSC), Coimbatore, India, 13–14 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 238–242. [Google Scholar] [CrossRef]
  45. Yu, B.; Zhou, L.; Wang, L.; Fripp, J.; Bourgeat, P. 3D CGAN Based Cross-Modality MR Image Synthesis for Brain Tumor Segmentation. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 626–630. [Google Scholar] [CrossRef]
46. Mori, M.; Fujioka, T.; Katsuta, L.; Kikuchi, Y.; Oda, G.; Nakagawa, T.; Kitazume, Y.; Kubota, K.; Tateishi, U. Feasibility of New Fat Suppression for Breast MRI Using Pix2pix. Jpn. J. Radiol. 2020, 38, 1075–1081. [Google Scholar] [CrossRef] [PubMed]
  47. Wang, T.; Lei, Y.; Curran, W.J.; Liu, T.; Yang, X. Contrast-Enhanced MRI Synthesis from Non-Contrast MRI Using Attention CycleGAN. In Medical Imaging 2021: Biomedical Applications in Molecular, Structural, and Functional Imaging; SPIE: Bellingham, WA, USA, 2021; Volume 11600, pp. 388–393. [Google Scholar] [CrossRef]
48. Zhou, L.; Schaefferkoetter, J.D.; Tham, I.W.K.; Huang, G.; Yan, J. Supervised Learning with CycleGAN for Low-Dose FDG PET Image Denoising. Med. Image Anal. 2020, 65, 101770. [Google Scholar] [CrossRef] [PubMed]
  49. Hicks, S.A.; Strümke, I.; Thambawita, V.; Hammou, M.; Riegler, M.A.; Halvorsen, P.; Parasa, S. On Evaluation Metrics for Medical Applications of Artificial Intelligence. Sci. Rep. 2022, 12, 5979. [Google Scholar] [CrossRef] [PubMed]
  50. Cabitza, F.; Campagner, A.; Soares, F.; García de Guadiana-Romualdo, L.; Challa, F.; Sulejmani, A.; Seghezzi, M.; Carobene, A. The Importance of Being External. Methodological Insights for the External Validation of Machine Learning Models in Medicine. Comput. Methods Programs Biomed. 2021, 208, 106288. [Google Scholar] [CrossRef]
  51. Costa, E.P.; Carvalho, A.C.P.L.F.; Lorena, A.C.; Freitas, A.A. A Review of Performance Evaluation Measures for Hierarchical Classifiers. AAAI Workshop-Tech. Rep. 2007, WS-07-05, 1–6. [Google Scholar]
  52. Streijl, R.C.; Winkler, S.; Hands, D.S. Mean Opinion Score (MOS) Revisited: Methods and Applications, Limitations and Alternatives. Multimed. Syst. 2016, 22, 213–227. [Google Scholar] [CrossRef]
53. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  54. Nadipally, M. Optimization of Methods for Image-Texture Segmentation Using Ant Colony Optimization; Elsevier Inc.: Amsterdam, The Netherlands, 2019; ISBN 9780128155530. [Google Scholar]
55. Jiao, J.; Namburete, A.I.L.; Papageorghiou, A.T.; Noble, J.A. Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis. IEEE Trans. Med. Imaging 2020, 39, 4413–4424. [Google Scholar] [CrossRef]
  56. Zahra, M.A.; Hollingsworth, K.G.; Sala, E.; Lomas, D.J.; Tan, L.T. Dynamic Contrast-Enhanced MRI as a Predictor of Tumour Response to Radiotherapy. Lancet Oncol. 2007, 8, 63–74. [Google Scholar] [CrossRef]
  57. Kappos, L.; Moeri, D.; Radue, E.W.; Schoetzau, A.; Schweikert, K.; Barkhof, F.; Miller, D.; Guttmann, C.R.G.; Weiner, H.L.; Gasperini, C.; et al. Predictive Value of Gadolinium-Enhanced Magnetic Resonance Imaging for Relapse Rate and Changes in Disability or Impairment in Multiple Sclerosis: A Meta-Analysis. Gadolinium MRI Meta-Analysis Group. Lancet 1999, 353, 964–969. [Google Scholar] [CrossRef]
  58. Miller, D.H.; Barkhof, F.; Nauta, J.J.P. Gadolinium Enhancement Increases the Sensitivity of MRI in Detecting Disease Activity in Multiple Sclerosis. Brain 1993, 116 Pt 5, 1077–1094. [Google Scholar] [CrossRef] [PubMed]
  59. Di Napoli, A.; Cristofaro, M.; Romano, A.; Pianura, E.; Papale, G.; di Stefano, F.; Ronconi, E.; Petrone, A.; Rossi Espagnet, M.C.; Schininà, V.; et al. Central Nervous System Involvement in Tuberculosis: An MRI Study Considering Differences between Patients with and without Human Immunodeficiency Virus 1 Infection. J. Neuroradiol. 2019, 47, 334–338. [Google Scholar] [CrossRef] [PubMed]
60. Di Napoli, A.; Spina, P.; Cianfoni, A.; Mazzucchelli, L.; Pravatà, E. Magnetic Resonance Imaging of Pilocytic Astrocytomas in Adults with Histopathologic Correlation: A Report of Six Consecutive Cases. J. Integr. Neurosci. 2021, 20, 1039–1046. [Google Scholar] [CrossRef]
  61. Mattay, R.R.; Davtyan, K.; Rudie, J.D.; Mattay, G.S.; Jacobs, D.A.; Schindler, M.; Loevner, L.A.; Schnall, M.D.; Bilello, M.; Mamourian, A.C.; et al. Economic Impact of Selective Use of Contrast for Routine Follow-up MRI of Patients with Multiple Sclerosis. J. Neuroimaging 2022, 32, 656–666. [Google Scholar] [CrossRef]
  62. Molinaro, A.M.; Hervey-Jumper, S.; Morshed, R.A.; Young, J.; Han, S.J.; Chunduru, P.; Zhang, Y.; Phillips, J.J.; Shai, A.; Lafontaine, M.; et al. Association of Maximal Extent of Resection of Contrast-Enhanced and Non-Contrast-Enhanced Tumor with Survival Within Molecular Subgroups of Patients with Newly Diagnosed Glioblastoma. JAMA Oncol. 2020, 6, 495–503. [Google Scholar] [CrossRef]
  63. Pasquini, L.; Di Napoli, A.; Napolitano, A.; Lucignani, M.; Dellepiane, F.; Vidiri, A.; Villani, V.; Romano, A.; Bozzao, A. Glioblastoma Radiomics to Predict Survival: Diffusion Characteristics of Surrounding Nonenhancing Tissue to Select Patients for Extensive Resection. J. Neuroimaging 2021, 31, 1192–1200. [Google Scholar] [CrossRef]
64. Kleesiek, J.; Morshuis, J.N.; Isensee, F.; Deike-Hofmann, K.; Paech, D.; Kickingereder, P.; Köthe, U.; Rother, C.; Forsting, M.; Wick, W.; et al. Can Virtual Contrast Enhancement in Brain MRI Replace Gadolinium?: A Feasibility Study. Invest. Radiol. 2019, 54, 653–660. [Google Scholar] [CrossRef]
65. Calabrese, E.; Rudie, J.D.; Rauschecker, A.M.; Villanueva-Meyer, J.E.; Cha, S. Feasibility of Simulated Postcontrast MRI of Glioblastomas and Lower-Grade Gliomas by Using Three-Dimensional Fully Convolutional Neural Networks. Radiol. Artif. Intell. 2021, 3, e200276. [Google Scholar] [CrossRef]
  66. Wang, Y.; Wu, W.; Yang, Y.; Hu, H.; Yu, S.; Dong, X.; Chen, F.; Liu, Q. Deep Learning-Based 3D MRI Contrast-Enhanced Synthesis from a 2D Noncontrast T2Flair Sequence. Med. Phys. 2022, 49, 4478–4493. [Google Scholar] [CrossRef]
  67. Romano, A.; Moltoni, G.; Guarnera, A.; Pasquini, L.; Di Napoli, A.; Napolitano, A.; Espagnet, M.C.R.; Bozzao, A. Single Brain Metastasis versus Glioblastoma Multiforme: A VOI-Based Multiparametric Analysis for Differential Diagnosis. Radiol. Med. 2022, 127, 490–497. [Google Scholar] [CrossRef] [PubMed]
  68. Romano, A.; Pasquini, L.; Di Napoli, A.; Tavanti, F.; Boellis, A.; Rossi Espagnet, M.C.; Minniti, G.; Bozzao, A. Prediction of Survival in Patients Affected by Glioblastoma: Histogram Analysis of Perfusion MRI. J. Neurooncol. 2018, 139, 455–460. [Google Scholar] [CrossRef] [PubMed]
69. Luo, H.; Zhang, T.; Gong, N.-J.; Tamir, J.; Pasumarthi Venkata, S.; Xu, C.; Duan, Y.; Zhou, T.; Zhou, F.; Zaharchuk, G.; et al. Deep Learning-Based Methods May Minimize GBCA Dosage in Brain MRI. Eur. Radiol. 2021, 31, 6419–6428. [Google Scholar] [CrossRef] [PubMed]
  70. Ammari, S.; Bône, A.; Balleyguier, C.; Moulton, E.; Chouzenoux, É.; Volk, A.; Menu, Y.; Bidault, F.; Nicolas, F.; Robert, P.; et al. Can Deep Learning Replace Gadolinium in Neuro-Oncology? Invest. Radiol. 2022, 57, 99–107. [Google Scholar] [CrossRef]
  71. Kersch, C.N.; Ambady, P.; Hamilton, B.E.; Barajas, R.F. MRI and PET of Brain Tumor Neuroinflammation in the Era of Immunotherapy, From the AJR Special Series on Inflammation. AJR Am. J. Roentgenol. 2022, 218, 582–596. [Google Scholar] [CrossRef]
  72. Kaufmann, T.J.; Smits, M.; Boxerman, J.; Huang, R.; Barboriak, D.P.; Weller, M.; Chung, C.; Tsien, C.; Brown, P.D.; Shankar, L.; et al. Consensus Recommendations for a Standardized Brain Tumor Imaging Protocol for Clinical Trials in Brain Metastases. Neuro. Oncol. 2020, 22, 757–772. [Google Scholar] [CrossRef] [Green Version]
  73. Ye, Y.; Lyu, J.; Hu, Y.; Zhang, Z.; Xu, J.; Zhang, W.; Yuan, J.; Zhou, C.; Fan, W.; Zhang, X. Augmented T1-Weighted Steady State Magnetic Resonance Imaging. NMR Biomed. 2022, 35, e4729. [Google Scholar] [CrossRef]
74. Bône, A.; Ammari, S.; Menu, Y.; Balleyguier, C.; Moulton, E.; Chouzenoux, É.; Volk, A.; Garcia, G.C.T.E.; Nicolas, F.; Robert, P.; et al. From Dose Reduction to Contrast Maximization. Invest. Radiol. 2022, 57, 527–535. [Google Scholar] [CrossRef]
  75. Confavreux, C.; Vukusic, S. Natural History of Multiple Sclerosis: A Unifying Concept. Brain 2006, 129, 606–616. [Google Scholar] [CrossRef] [Green Version]
  76. Thompson, A.J.; Banwell, B.L.; Barkhof, F.; Carroll, W.M.; Coetzee, T.; Comi, G.; Correale, J.; Fazekas, F.; Filippi, M.; Freedman, M.S.; et al. Diagnosis of Multiple Sclerosis: 2017 Revisions of the McDonald Criteria. Lancet Neurol. 2018, 17, 162–173. [Google Scholar] [CrossRef]
  77. Narayana, P.A.; Coronado, I.; Sujit, S.J.; Wolinsky, J.S.; Lublin, F.D.; Gabr, R.E. Deep Learning for Predicting Enhancing Lesions in Multiple Sclerosis from Noncontrast MRI. Radiology 2020, 294, 398–404. [Google Scholar] [CrossRef] [PubMed]
  78. Foroughi, A.A.; Zare, N.; Saeedi-Moghadam, M.; Zeinali-Rafsanjani, B.; Nazeri, M. Correlation between Contrast Enhanced Plaques and Plaque Diffusion Restriction and Their Signal Intensities in FLAIR Images in Patients Who Admitted with Acute Symptoms of Multiple Sclerosis. J. Med. Imaging Radiat. Sci. 2021, 52, 121–126. [Google Scholar] [CrossRef]
79. Xiao, X.; Zhao, J.; Qiang, Y.; Chong, J.; Yang, X.; Kazihise, N.G.-F.; Chen, B.; Li, S. Radiomics-Guided GAN for Segmentation of Liver Tumor Without Contrast Agents. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 237–245. [Google Scholar]
  80. Zhao, J.; Li, D.; Kassam, Z.; Howey, J.; Chong, J.; Chen, B.; Li, S. Tripartite-GAN: Synthesizing Liver Contrast-Enhanced MRI to Improve Tumor Detection. Med. Image Anal. 2020, 63, 101667. [Google Scholar] [CrossRef] [PubMed]
  81. Xu, C.; Zhang, D.; Chong, J.; Chen, B.; Li, S. Synthesis of Gadolinium-Enhanced Liver Tumors on Nonenhanced Liver MR Images Using Pixel-Level Graph Reinforcement Learning. Med. Image Anal. 2021, 69, 101976. [Google Scholar] [CrossRef] [PubMed]
  82. Larson, D.B.; Johnson, L.W.; Schnell, B.M.; Salisbury, S.R.; Forman, H.P. National Trends in CT Use in the Emergency Department: 1995–2007. Radiology 2011, 258, 164–173. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  83. Pandharipande, P.V.; Reisner, A.T.; Binder, W.D.; Zaheer, A.; Gunn, M.L.; Linnau, K.F.; Miller, C.M.; Avery, L.L.; Herring, M.S.; Tramontano, A.C.; et al. CT in the Emergency Department: A Real-Time Study of Changes in Physician Decision Making. Radiology 2016, 278, 812–821. [Google Scholar] [CrossRef] [Green Version]
  84. Kim, S.W.; Kim, J.H.; Kwak, S.; Seo, M.; Ryoo, C.; Shin, C.; Jang, S.; Cho, J.; Kim, Y.H.; Jeon, K. The Feasibility of Deep Learning-Based Synthetic Contrast-Enhanced CT from Nonenhanced CT in Emergency Department Patients with Acute Abdominal Pain. Sci. Rep. 2021, 11, 20390. [Google Scholar] [CrossRef]
  85. Choi, J.W.; Cho, Y.J.; Ha, J.Y.; Lee, S.B.; Lee, S.; Choi, Y.H.; Cheon, J.E.; Kim, W.S. Generating Synthetic Contrast Enhancement from Non-Contrast Chest Computed Tomography Using a Generative Adversarial Network. Sci. Rep. 2021, 11, 20403. [Google Scholar] [CrossRef]
86. Santini, G.; Zumbo, L.M.; Martini, N.; Valvano, G.; Leo, A.; Ripoli, A.; Avogliero, F.; Chiappino, D.; della Latta, D. Synthetic Contrast Enhancement in Cardiac CT with Deep Learning. arXiv 2018, arXiv:1807.01779. [Google Scholar]
  87. Chun, J.; Chang, J.S.; Oh, C.; Park, I.K.; Choi, M.S.; Hong, C.S.; Kim, H.; Yang, G.; Moon, J.Y.; Chung, S.Y.; et al. Synthetic Contrast-Enhanced Computed Tomography Generation Using a Deep Convolutional Neural Network for Cardiac Substructure Delineation in Breast Cancer Radiation Therapy: A Feasibility Study. Radiat. Oncol. 2022, 17, 83. [Google Scholar] [CrossRef]
  88. Foley, W.D.; Karcaaltincaba, M. Computed Tomography Angiography: Principles and Clinical Applications. J. Comput. Assist. Tomogr. 2003, 27 (Suppl. S1), S23–S30. [Google Scholar] [CrossRef] [PubMed]
89. Aggarwal, S.; Qamar, A.; Sharma, V.; Sharma, A. Abdominal Aortic Aneurysm: A Comprehensive Review. Exp. Clin. Cardiol. 2011, 16, 11. [Google Scholar] [CrossRef] [PubMed]
  90. Maurovich-Horvat, P.; Bosserdt, M.; Kofoed, K.F.; Rieckmann, N.; Benedek, T.; Donnelly, P.; Rodriguez-Palomares, J.; Erglis, A.; Štěchovský, C.; Šakalyte, G.; et al. CT or Invasive Coronary Angiography in Stable Chest Pain. N. Engl. J. Med. 2022, 386, 1591–1602. [Google Scholar] [CrossRef] [PubMed]
91. Chandrashekar, A.; Handa, A.; Lapolla, P.; Shivakumar, N.; Uberoi, R.; Grau, V.; Lee, R. A Deep Learning Approach to Visualize Aortic Aneurysm Morphology Without the Use of Intravenous Contrast Agents. Ann. Surg. 2021, online ahead of print. [Google Scholar] [CrossRef] [PubMed]
  92. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2242–2251. [Google Scholar] [CrossRef] [Green Version]
  93. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 5967–5976. [Google Scholar] [CrossRef] [Green Version]
  94. Parr, A.; McCann, M.; Bradshaw, B.; Shahzad, A.; Buttner, P.; Golledge, J. Thrombus Volume Is Associated with Cardiovascular Events and Aneurysm Growth in Patients Who Have Abdominal Aortic Aneurysms. J. Vasc. Surg. 2011, 53, 28–35. [Google Scholar] [CrossRef] [Green Version]
  95. Haller, S.J.; Crawford, J.D.; Courchaine, K.M.; Bohannan, C.J.; Landry, G.J.; Moneta, G.L.; Azarbal, A.F.; Rugonyi, S. Intraluminal Thrombus Is Associated with Early Rupture of Abdominal Aortic Aneurysm. J. Vasc. Surg. 2018, 67, 1051–1058.e1. [Google Scholar] [CrossRef] [Green Version]
96. Metaxa, E.; Kontopodis, N.; Tzirakis, K.; Ioannou, C.V.; Papaharilaou, Y. Effect of Intraluminal Thrombus Asymmetrical Deposition on Abdominal Aortic Aneurysm Growth Rate. J. Endovasc. Ther. 2015, 22, 406–412. [Google Scholar] [CrossRef]
  97. Whaley, Z.L.; Cassimjee, I.; Novak, Z.; Rowland, D.; Lapolla, P.; Chandrashekar, A.; Pearce, B.J.; Beck, A.W.; Handa, A.; Lee, R.; et al. The Spatial Morphology of Intraluminal Thrombus Influences Type II Endoleak after Endovascular Repair of Abdominal Aortic Aneurysms. Ann. Vasc. Surg. 2020, 66, 77–84. [Google Scholar] [CrossRef] [Green Version]
  98. Kim, R.J.; Wu, E.; Rafael, A.; Chen, E.-L.; Parker, M.A.; Simonetti, O.; Klocke, F.J.; Bonow, R.O.; Judd, R.M. The Use of Contrast-Enhanced Magnetic Resonance Imaging to Identify Reversible Myocardial Dysfunction. N. Engl. J. Med. 2000, 343, 1445–1453. [Google Scholar] [CrossRef]
  99. Mahrholdt, H.; Wagner, A.; Judd, R.M.; Sechtem, U.; Kim, R.J. Delayed Enhancement Cardiovascular Magnetic Resonance Assessment of Non-Ischaemic Cardiomyopathies. Eur. Heart J. 2005, 26, 1461–1474. [Google Scholar] [CrossRef]
  100. Becker, M.A.J.; Cornel, J.H.; van de Ven, P.M.; van Rossum, A.C.; Allaart, C.P.; Germans, T. The Prognostic Value of Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging in Nonischemic Dilated Cardiomyopathy: A Review and Meta-Analysis. JACC Cardiovasc. Imaging 2018, 11, 1274–1284. [Google Scholar] [CrossRef] [PubMed]
  101. Burrage, M.K.; Ferreira, V.M. Cardiovascular Magnetic Resonance for the Differentiation of Left Ventricular Hypertrophy. Curr. Heart Fail. Rep. 2020, 17, 192–204. [Google Scholar] [CrossRef] [PubMed]
  102. Weng, Z.; Yao, J.; Chan, R.H.; He, J.; Yang, X.; Zhou, Y.; He, Y. Prognostic Value of LGE-CMR in HCM: A Meta-Analysis. JACC Cardiovasc. Imaging 2016, 9, 1392–1402. [Google Scholar] [CrossRef] [PubMed]
  103. Gersh, B.J.; Maron, B.J.; Bonow, R.O.; Dearani, J.A.; Fifer, M.A.; Link, M.S.; Naidu, S.S.; Nishimura, R.A.; Ommen, S.R.; Rakowski, H.; et al. 2011 ACCF/AHA Guideline for the Diagnosis and Treatment of Hypertrophic Cardiomyopathy: Executive Summary: A Report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. Circulation 2011, 124, 2761–2796. [Google Scholar] [CrossRef] [PubMed]
  104. Ommen, S.R.; Mital, S.; Burke, M.A.; Day, S.M.; Deswal, A.; Elliott, P.; Evanovich, L.L.; Hung, J.; Joglar, J.A.; Kantor, P.; et al. 2020 AHA/ACC Guideline for the Diagnosis and Treatment of Patients with Hypertrophic Cardiomyopathy: A Report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J. Am. Coll. Cardiol. 2020, 76, e159–e240. [Google Scholar] [CrossRef]
  105. Elliott, P.M.; Anastasakis, A.; Borger, M.A.; Borggrefe, M.; Cecchi, F.; Charron, P.; Hagege, A.A.; Lafont, A.; Limongelli, G.; Mahrholdt, H.; et al. 2014 ESC Guidelines on Diagnosis and Management of Hypertrophic Cardiomyopathy: The Task Force for the Diagnosis and Management of Hypertrophic Cardiomyopathy of the European Society of Cardiology (ESC). Eur. Heart J. 2014, 35, 2733–2779. [Google Scholar] [CrossRef]
106. Kramer, C.M.; Barkhausen, J.; Bucciarelli-Ducci, C.; Flamm, S.D.; Kim, R.J.; Nagel, E. Standardized Cardiovascular Magnetic Resonance Imaging (CMR) Protocols: 2020 Update. J. Cardiovasc. Magn. Reson. 2020, 22, 17. [Google Scholar] [CrossRef]
  107. Zhang, Q.; Burrage, M.K.; Lukaschuk, E.; Shanmuganathan, M.; Popescu, I.A.; Nikolaidou, C.; Mills, R.; Werys, K.; Hann, E.; Barutcu, A.; et al. Toward Replacing Late Gadolinium Enhancement with Artificial Intelligence Virtual Native Enhancement for Gadolinium-Free Cardiovascular Magnetic Resonance Tissue Characterization in Hypertrophic Cardiomyopathy. Circulation 2021, 144, 589–599. [Google Scholar] [CrossRef]
108. Messroghli, D.R.; Moon, J.C.; Ferreira, V.M.; Grosse-Wortmann, L.; He, T.; Kellman, P.; Mascherbauer, J.; Nezafat, R.; Salerno, M.; Schelbert, E.B.; et al. Clinical Recommendations for Cardiovascular Magnetic Resonance Mapping of T1, T2, T2* and Extracellular Volume: A Consensus Statement by the Society for Cardiovascular Magnetic Resonance (SCMR) Endorsed by the European Association for Cardiovascular Imaging (EACVI). J. Cardiovasc. Magn. Reson. 2017, 19, 75. [Google Scholar] [CrossRef] [Green Version]
  109. Xu, J.; Zhuang, B.; Sirajuddin, A.; Li, S.; Huang, J.; Yin, G.; Song, L.; Jiang, Y.; Zhao, S.; Lu, M. MRI T1 Mapping in Hypertrophic Cardiomyopathy: Evaluation in Patients Without Late Gadolinium Enhancement and Hemodynamic Obstruction. Radiology 2020, 294, 275–286. [Google Scholar] [CrossRef]
  110. Dass, S.; Suttie, J.J.; Piechnik, S.K.; Ferreira, V.M.; Holloway, C.J.; Banerjee, R.; Mahmod, M.; Cochlin, L.; Karamitsos, T.D.; Robson, M.D.; et al. Myocardial Tissue Characterization Using Magnetic Resonance Noncontrast T1 Mapping in Hypertrophic and Dilated Cardiomyopathy. Circ. Cardiovasc. Imaging 2012, 5, 726–733. [Google Scholar] [CrossRef] [PubMed] [Green Version]
111. Weng, W.; Zhu, X. INet: Convolutional Networks for Biomedical Image Segmentation. IEEE Access 2021, 9, 16591–16603. [Google Scholar] [CrossRef]
  112. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
  113. Hann, E.; Popescu, I.A.; Zhang, Q.; Gonzales, R.A.; Barutçu, A.; Neubauer, S.; Ferreira, V.M.; Piechnik, S.K. Deep Neural Network Ensemble for On-the-Fly Quality Control-Driven Segmentation of Cardiac MRI T1 Mapping. Med. Image Anal. 2021, 71, 102029. [Google Scholar] [CrossRef]
  114. Chong, V.F.H.; Fan, Y.F.; Khoo, J.B.K. Nasopharyngeal Carcinoma with Intracranial Spread: CT and MR Characteristics. J. Comput. Assist. Tomogr. 1996, 20, 563–569. [Google Scholar] [CrossRef]
  115. King, A.D.; Vlantis, A.C.; Yuen, T.W.C.; Law, B.K.H.; Bhatia, K.S.; Zee, B.C.Y.; Woo, J.K.S.; Chan, A.T.C.; Chan, K.C.A.; Ahuja, A.T. Detection of Nasopharyngeal Carcinoma by MR Imaging: Diagnostic Accuracy of MRI Compared with Endoscopy and Endoscopic Biopsy Based on Long-Term Follow-Up. Am. J. Neuroradiol. 2015, 36, 2380–2385. [Google Scholar] [CrossRef] [Green Version]
  116. Andreisek, G.; Duc, S.R.; Froehlich, J.M.; Hodler, J.; Weishaupt, D. MR Arthrography of the Shoulder, Hip, and Wrist: Evaluation of Contrast Dynamics and Image Quality with Increasing Injection-to-Imaging Time. Am. J. Roentgenol. 2007, 188, 1081–1088. [Google Scholar] [CrossRef]
117. Deng, Y.; Li, C.; Lv, X.; Xia, W.; Shen, L.; Jing, B.; Li, B.; Guo, X.; Sun, Y.; Xie, C.; et al. The Contrast-Enhanced MRI Can Be Substituted by Unenhanced MRI in Identifying and Automatically Segmenting Primary Nasopharyngeal Carcinoma with the Aid of Deep Learning Models: An Exploratory Study in Large-Scale Population of Endemic Area. Comput. Methods Programs Biomed. 2022, 217, 106702. [Google Scholar] [CrossRef]
  118. Jian, W.; Ju, H.; Cen, X.; Cui, M.; Zhang, H.; Zhang, L.; Wang, G.; Gu, L.; Zhou, W. Improving the Malignancy Characterization of Hepatocellular Carcinoma Using Deeply Supervised Cross Modal Transfer Learning for Non-Enhanced MR. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 853–856. [Google Scholar] [CrossRef]
  119. Pecoraro, M.; Messina, E.; Bicchetti, M.; Carnicelli, G.; del Monte, M.; Iorio, B.; la Torre, G.; Catalano, C.; Panebianco, V. The Future Direction of Imaging in Prostate Cancer: MRI with or without Contrast Injection. Andrology 2021, 9, 1429–1443. [Google Scholar] [CrossRef]
  120. Wattjes, M.P.; Ciccarelli, O.; Reich, D.S.; Banwell, B.; de Stefano, N.; Enzinger, C.; Fazekas, F.; Filippi, M.; Frederiksen, J.; Gasperini, C.; et al. 2021 MAGNIMS–CMSC–NAIMS Consensus Recommendations on the Use of MRI in Patients with Multiple Sclerosis. Lancet Neurol. 2021, 20, 653–670. [Google Scholar] [CrossRef]
  121. Morana, G.; Bagnasco, F.; Leoni, M.; Pasquini, L.; Gueli, I.; Tortora, D.; Severino, M.; Giardino, S.; Pierri, F.; Micalizzi, C.; et al. Multifactorial Posterior Reversible Encephalopathy Syndrome in Children: Clinical, Laboratory and Neuroimaging Findings. J. Pediatr. Neurol. 2021, 19, 83–91. [Google Scholar] [CrossRef]
  122. Luca, P.; Alessia, G.; Camilla, R.-E.M.; Antonio, N.; Diego, M.; Federica, D.; Rosalba, D.D.C.; Carlo, D.-V.; Daniela, L. Spinal Cord Involvement in Kearns-Sayre Syndrome: A Neuroimaging Study. Neuroradiology 2020, 62, 1725. [Google Scholar] [CrossRef] [PubMed]
  123. Pasquini, L.; Tortora, D.; Manunza, F.; Rossi Espagnet, M.C.; Figà-Talamanca, L.; Morana, G.; Occella, C.; Rossi, A.; Severino, M. Asymmetric Cavernous Sinus Enlargement: A Novel Finding in Sturge–Weber Syndrome. Neuroradiology 2019, 61, 595–602. [Google Scholar] [CrossRef] [PubMed]
  124. Rossi Espagnet, M.C.; Pasquini, L.; Napolitano, A.; Cacchione, A.; Mastronuzzi, A.; Caruso, R.; Tomà, P.; Longo, D. Magnetic Resonance Imaging Patterns of Treatment-Related Toxicity in the Pediatric Brain: An Update and Review of the Literature. Pediatr. Radiol. 2017, 47, 633–648. [Google Scholar] [CrossRef]
  125. Zhang, W. Sketch-To-Color Image with GANs. In Proceedings of the 2020 2nd International Conference on Information Technology and Computer Application, ITCA, Guangzhou, China, 18–20 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 322–325. [Google Scholar] [CrossRef]
Figure 1. The CycleGAN model consists of a forward cycle and a backward cycle. (a) In the forward cycle, a synthesis network Synth_c is trained to translate an input non-contrast image into a contrast-enhanced one; a second network, Synth_nc, is trained to translate the resulting contrast image back into a non-contrast image that approximates the original, while a discriminator Disc_c distinguishes real from synthesized contrast images. (b) In the backward cycle, Synth_nc synthesizes non-contrast images from input contrast images, Synth_c reconstructs the input contrast image from the synthesized non-contrast one, and Disc_nc distinguishes real from synthesized non-contrast images. I_nc = original non-contrast image; I_c = original contrast image.
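To make the scheme in Figure 1 concrete, the following is a minimal PyTorch sketch of the generator-side CycleGAN objective. The toy two-layer networks, image sizes, and variable names (synth_c, synth_nc, disc_c, disc_nc) are illustrative assumptions, not the architecture of any study cited above; only the loss structure, adversarial terms plus cycle-consistency terms with the weighting of 10 used by Zhu et al. [92], follows the figure.

import torch
import torch.nn as nn

def toy_net():
    # Stand-in for the U-Net- or ResNet-style generators used in practice.
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

synth_c, synth_nc = toy_net(), toy_net()  # non-contrast -> contrast, and back
disc_c = nn.Conv2d(1, 1, 4, stride=2, padding=1)   # PatchGAN-like critic (toy)
disc_nc = nn.Conv2d(1, 1, 4, stride=2, padding=1)

l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()
I_nc = torch.randn(4, 1, 64, 64)  # batch of non-contrast images (random placeholders)
I_c = torch.randn(4, 1, 64, 64)   # batch of contrast images (random placeholders)

# Forward cycle: I_nc -> synthetic contrast -> reconstructed non-contrast
fake_c = synth_c(I_nc)
rec_nc = synth_nc(fake_c)
# Backward cycle: I_c -> synthetic non-contrast -> reconstructed contrast
fake_nc = synth_nc(I_c)
rec_c = synth_c(fake_nc)

# Adversarial terms: generators try to make the critics label synthetic images as real
pred_c, pred_nc = disc_c(fake_c), disc_nc(fake_nc)
adv = bce(pred_c, torch.ones_like(pred_c)) + bce(pred_nc, torch.ones_like(pred_nc))
# Cycle-consistency terms: each reconstruction should match its original input
cycle = l1(rec_nc, I_nc) + l1(rec_c, I_c)

generator_loss = adv + 10.0 * cycle
generator_loss.backward()

In a full training loop, the discriminators would be optimized in a separate, alternating step on real versus synthesized images, and the generators would be substantially deeper networks than the stand-ins shown here.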
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
