Review

Identification of Tumor-Specific MRI Biomarkers Using Machine Learning (ML)

1
Department of Pharmacy, Faculty of Pharmacy, Al-Zaytoonah University of Jordan, P.O. Box 130, Amman 11733, Jordan
2
Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
3
National Center for Epidemics and Communicable Disease Control, Amman 11118, Jordan
4
Department of Pharmaceutical Sciences, School of Pharmacy, University of Jordan, Amman 11942, Jordan
*
Author to whom correspondence should be addressed.
Diagnostics 2021, 11(5), 742; https://doi.org/10.3390/diagnostics11050742
Submission received: 4 March 2021 / Revised: 9 April 2021 / Accepted: 12 April 2021 / Published: 21 April 2021
(This article belongs to the Special Issue Machine Learning Advances in MRI of Cancer)

Abstract:
The identification of reliable and non-invasive oncology biomarkers remains a major priority in healthcare. Only a few biomarkers have been approved as diagnostic tools for cancer. The most frequently used cancer biomarkers are derived from either biological materials or imaging data. Most cancer biomarkers suffer from a lack of high specificity. However, the latest advancements in machine learning (ML) and artificial intelligence (AI) have enabled the identification of highly predictive, disease-specific biomarkers. Such biomarkers can be used to diagnose cancer patients, to predict cancer prognosis, or even to predict treatment efficacy. Herein, we provide a summary of the current status of developing and applying magnetic resonance imaging (MRI) biomarkers in cancer care. We cover all aspects of MRI biomarkers, from MRI data collection, preprocessing, and machine learning methods, to the types of existing biomarkers and their clinical applications in different cancer types.

1. Introduction

Imaging is routinely used for cancer diagnosis and staging, for monitoring treatment efficacy, for detecting disease recurrence, or generally for cancer surveillance [1,2,3,4]. Understanding the anatomical and physiological aspects of medical images allows experts to distinguish aberrant from normal appearance [5]. Advances in analytical methods and the application of machine learning have enabled the use of medical images as biomarkers that can potentially optimize cancer care and improve clinical outcomes [5]. The imaging biomarkers that are currently, and successfully, used for clinical diagnosis have attracted many researchers' attention, as described in multiple publications [1,5,6,7,8,9,10,11,12,13,14,15,16,17,18].
Magnetic resonance imaging (MRI) is a diagnostic imaging technique that applies strong magnetic fields and radio waves to generate high-quality scans of body organs, facilitating the diagnosis of tumors and other conditions such as brain and spinal cord diseases. Currently, MRI is one of the major big data producers in biomedicine and is being exploited as an important generator of cancer biomarkers. In essence, a biomarker is a characteristic that is measured as an indicator of a biological condition of interest (i.e., normal biological processes, pathogenic processes, or responses to a therapeutic intervention) [19,20]. The process of biomarker prioritization starts with a theory and ends with biomarker validation in an experimental setting. However, the current dogmas in biomedicine may hinder the process of unbiased hypothesis generation due to the complexity of cancer phenotypes and patient attributes, which makes it harder for human experts and physicians to comprehend all the details in MRI scans [21]. This has led to the rise of MRI biomarkers, identified by ML, that could capture disease characteristics with high accuracy, efficiency, reproducibility, and interpretability [5,22].

2. Imaging Biomarkers

Biomarker stands for biological marker, and it is defined by the U.S. Food and Drug Administration (FDA) as “a defined characteristic that is measured as an indicator of normal biological processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions” [23]. Biomarkers can measure anatomical, histological, physiological, molecular, and radiographic characteristics. Imaging biomarkers are convenient and reliable [5]. In oncology, they represent comprehensive cancer features such as apoptosis, angiogenesis, growth, metabolism, invasion, metastasis, and selective target interaction [24]. Cancer imaging biomarkers are widely used for cancer identification, for the prediction of disease outcome, and for monitoring treatment responses [5]. Examples of imaging biomarkers include the Tumor, Node, Metastasis (TNM) staging system (i.e., a prognostic biomarker) and the Response Evaluation Criteria in Solid Tumors (RECIST), which can be applied as a response biomarker [1]. Confirmed imaging biomarkers are used to support decision-making in clinical practice; however, quantitative evaluation in diagnosis must first be validated [5]. Quantitative assessment must be thorough and exhaustive because differences in technology and apparatus, as well as developments in quantification methods, influence the extracted data [5]. Well-established quality assurance (QA) and quality control (QC) protocols are prerequisites for validating and approving the reliability of medical assessments, alongside the efforts made by research, radiological, and medical institutions [5]. In addition, significant factors should be considered, such as distinguishing normal healthy tissues from diseased ones to achieve a better diagnosis [5]. Table 1 provides a summary of the various types of imaging biomarkers used in cancer besides MRI.

3. MRI Biomarkers

MRI can be exploited to extract numerous variables according to diverse inherent tissue properties such as proton density, diffusion, and T1 and T2 relaxation times [1]. In addition, MRI can probe the alterations in these parameters caused by the association of macromolecules and contrast agents [5]. For example, the apparent diffusion coefficient (ADC) is an extensively used criterion in cancer identification [16,62], diagnosis, and treatment assessment [63,64]. However, the post-processing tools used to derive absolute quantitation are widely disputed [65,66,67], although the protocol itself is versatile and reliable for cancer detection [68]. Quantification of T1 relaxation benefits cardiovascular MRI by providing absolute values rather than relying on image contrast [69]. T1 values are significant in differentiating cardiac inflammation [70], multiple sclerosis [71,72], liver fat and iron concentration [73,74], and endocrine glands [75].
Quantitative chemical exchange saturation transfer (CEST) imaging is promising for evaluating brain ischemic disease [76], osteoarthritis [77], lymphedema [78], and cancer pH and metabolomics [79]. Furthermore, MRI offers benefits such as excellent image contrast, superior resolution, and multiple contrasts per examination, as well as the ability to probe histological features (oxygenation, perfusion, and angiogenesis) [1].
Distinctive MRI biomarkers have been assigned in cancer diagnosis [1] including Breast Imaging Reporting and Data System (BI-RADS) [2], Liver Imaging Reporting and Data System (LI-RADS) [80,81], Prostate Imaging Reporting and Data System (PI-RADS) [4], TNM, and RECIST [1]. Quantitative biomarkers have been employed in clinical research studies such as initial area under the gadolinium curve (iAUGC) or transfer constant (Ktrans) from dynamic gadolinium enhanced (DGE) imaging and apparent diffusion coefficient (ADC) [1]. Morphological-based cancer biomarkers use many contrasts and moderate to high spatial resolution of MRI [1,82,83,84]. T1-weighted and T2-weighted imaging are utilized in cancer profiling [1].

4. MRI Data Preprocessing

Applying machine learning directly on raw MRI scans often yields poor results due to noise and information redundancy. Furthermore, machines read and store images in the form of number matrices. Raw MRI data are transformed into numerical features that can be processed by machines while preserving the information in the original data set.
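As a minimal sketch of this transformation (the array shapes, block size, and normalization scheme are illustrative assumptions, not a protocol from the reviewed studies), a raw 2-D slice stored as a numpy array can be intensity-normalized and downsampled into a numeric feature vector:

```python
import numpy as np

def preprocess_slice(slice_2d, out_shape=(8, 8)):
    """Normalize intensities to [0, 1] and downsample by block averaging.

    `slice_2d` is assumed to be a 2-D numpy array of raw scanner intensities
    whose shape is divisible by `out_shape`. Block averaging reduces noise
    and redundancy before feature extraction.
    """
    s = slice_2d.astype(float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # intensity normalization
    h, w = s.shape
    bh, bw = h // out_shape[0], w // out_shape[1]
    s = s[:bh * out_shape[0], :bw * out_shape[1]]
    s = s.reshape(out_shape[0], bh, out_shape[1], bw).mean(axis=(1, 3))
    return s.ravel()  # flatten the matrix into a numeric feature vector

# Random integers stand in for raw scanner intensities
slice_2d = np.random.default_rng(0).integers(0, 4096, size=(64, 64))
features = preprocess_slice(slice_2d)
print(features.shape)  # (64,)
```

The key point is that the machine never sees the image itself, only the resulting number matrix.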

5. Machine Learning for MRI Data

Machine learning (ML) algorithms are becoming useful components of computer-aided disease diagnosis and decision support systems. Computers seem to be able to recognize patterns that humans cannot perceive. Hence, ML provides a tool to analyze and utilize a massive amount of data more efficiently than conventional analysis carried out by humans. This realization has led to heightened interest in ML and AI applications to medical images. Recently, employing ML to analyze the big data resulting from medical images, including MRI data, has been useful in obtaining significant clinical information that can aid physicians in making important decisions regarding clinical diagnosis, clinical prognosis, or treatment outcome [55,85,86]. ML can also be used to prioritize MRI biomarkers. The workflow for prioritizing MRI biomarkers using ML is summarized in Figure 1.

5.1. Image Representation by Numeric Features

The success of machine learning relies on data representation [87]. MRI images are represented in terms of features, which are numeric values that can be processed by machines. These numeric values could be actual pixel values, edge strengths, the variation in pixel values within a specific region of the MRI image, or any other value [88]. Non-image features can also be used in the machine learning process and may include patient age, laboratory test results, sex, and other available patient or laboratory attributes. Features can be combined to form a feature vector, which is also called the input vector [88].

5.2. Feature Extraction

Feature extraction, also known as feature engineering, is the process of identifying the most distinguishing characteristics in imaging signals that characterize MRI images and describe their behavior, allowing machine learning methods to process imaging data and learn from these data. Features can be referred to as descriptors. Feature extraction can be accomplished either manually or automatically.
Image features are usually classified into two main groups: global and local. Global features are generated as a d-dimensional feature vector which represents a specific pattern [89]. Global features usually describe the color, shape, and texture, and are commonly applied in content-based image retrieval (CBIR) systems [90,91,92,93,94,95,96]. Local features refer to certain patterns or specific structures on images that distinguish them from their surroundings. Examples of local features include blobs, corners, and edge pixels [97].
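The global/local distinction can be illustrated with a toy example (the random image, histogram size, and edge threshold are assumptions for demonstration only): a d-dimensional intensity histogram serves as a global descriptor, while gradient magnitude picks out local edge pixels that stand out from their surroundings.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))   # stand-in for a normalized MRI slice

# Global feature: a d-dimensional intensity histogram (d = 16 here)
# summarizing the whole image, as used in CBIR-style retrieval
global_feat, _ = np.histogram(img, bins=16, range=(0.0, 1.0))
global_feat = global_feat / global_feat.sum()

# Local features: gradient magnitude highlights edge pixels that
# distinguish certain structures from their surroundings
gy, gx = np.gradient(img)
edge_map = np.hypot(gx, gy)
edge_pixels = np.argwhere(edge_map > edge_map.mean() + 2 * edge_map.std())

print(global_feat.shape, edge_map.shape)
```

A real pipeline would compute such descriptors per lesion or region of interest rather than over the whole slice.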

5.3. Data Set Division for Model Building, Model Tuning and External Validation

Many machine learning methods require model training with previously labeled MRI data. For generating these models, the data is divided into three sets: training set, test set and an external validation set that is not used in any way for model building. The modeling set (that remains after splitting out the validation set) is split additionally into training and testing (or tuning) sets. If models fail to predict the external validation set, such models are discarded and not used to make predictions. Additionally, other independent validation sets may become available after the completion of the modeling studies, and then can be used as additional validation sets. We have shown earlier that training-set-only modeling is not sufficient to obtain reliable models that are externally predictive [98,99]. Models that are highly predictive on training and testing data should be retained for the majority voting on external validation sets. Finally, only those models shown to be highly predictive on both testing and external validation sets are used as robust classifiers for MRI imaging data.
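The three-way division described above can be sketched as follows (the fractions and sample count are illustrative assumptions; any stratification by class label is omitted for brevity):

```python
import numpy as np

def three_way_split(n_samples, frac_valid=0.2, frac_test=0.2, seed=42):
    """Shuffle sample indices and split them into training, test (tuning),
    and external validation index sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_valid = int(n_samples * frac_valid)
    n_test = int(n_samples * frac_test)
    valid_idx = idx[:n_valid]              # held out; never used for modeling
    test_idx = idx[n_valid:n_valid + n_test]
    train_idx = idx[n_valid + n_test:]
    return train_idx, test_idx, valid_idx

train_idx, test_idx, valid_idx = three_way_split(100)
print(len(train_idx), len(test_idx), len(valid_idx))  # 60 20 20
```

Only models that predict well on both the test and the held-out validation indices would be retained as classifiers.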

5.4. Machine Learning Algorithms

Machine learning algorithms generate models that can classify MRI images into malignant and benign based on extracted local and global image features. The generated ML model is a mathematical model that can predict outcomes by generalizing its learned experience on training set data to deliver correct predictions for new MRI images unseen by the developed models. The learning exercise can be supervised, semi-supervised, or unsupervised. However, for imaging data we rely heavily on supervised methods that can be applied to class-labeled data.
There are three main challenges to applying machine learning in medical imaging for cancer diagnosis: classification, localization, and segmentation. We need ML methods that can address all three. Herein, we review the most popular ML algorithms applied for MRI biomarkers, with results summarized in Figure 2. We also discuss the advantages and disadvantages of each method (Table 2).

5.4.1. Artificial Neural Networks

Learning with artificial neural networks (ANNs) is one of the most well-known machine learning methods; it was introduced in the 1950s and is being employed for classifying MRI data [103]. The generated neural network consists of a number of connected computational units, called neurons, which are arranged in layers. There is an input layer that allows input data to enter the network, followed by one or more hidden layers transforming the data as it flows through, before ending at an output layer that produces the neural network’s predictions. The network is trained to generate correct predictions by identifying predictive features in a set of labeled training data fed through the network, while the outputs are compared with the actual labels by an objective function [103]. Furthermore, the message passing neural network (MPNN) has distinguished morphological aspects of benign and malignant cancers [104]. Diverse morphological features have been recognized, including elliptic-normalized circumference (ENC), long axis to short axis ratio (L:S), lesion size, and lobulation index (LI) [67]. Further features have been distinguished, such as branch form, nodule brightness, number of lobulations, and ellipsoid features [105].
The ANN method is composed of three learning schemas: (1) the error function which measures how good or bad an output is for a given input, (2) the search function which defines the direction and magnitude of the change required to reduce the error function, and (3) the update function which defines how the weights of the network are updated on the basis of the search function values [88]. This is an iterative process which keeps adjusting the weights until there is no additional improvement. ANN models are very flexible, capable of solving complex problems, but they are difficult to understand and very computationally expensive to train [103].
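The three learning schemas can be sketched with a toy one-hidden-layer network trained on synthetic data (the data, layer sizes, iteration count, and learning rate are illustrative assumptions, not values from the reviewed studies):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                     # stand-in feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer between the input layer and a single output neuron
W1, b1 = rng.normal(size=(4, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(2000):
    # forward pass: input layer -> hidden layer -> output layer
    h = sigmoid(X @ W1 + b1)
    p = np.clip(sigmoid(h @ W2 + b2), 1e-9, 1 - 1e-9)
    # (1) error function: cross-entropy between outputs and labels
    losses.append(float(np.mean(-y * np.log(p) - (1 - y) * np.log(1 - p))))
    # (2) search function: backpropagated gradients give the direction
    #     and magnitude of the required weight change
    d_out = (p - y) / len(y)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ d_hid, d_hid.sum(axis=0)
    # (3) update function: adjust the weights based on the search values
    for W, dW in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        W -= 1.0 * dW

acc = float(np.mean((p > 0.5) == (y > 0.5)))
print(round(losses[0], 3), round(losses[-1], 3), round(acc, 2))
```

The loop keeps adjusting the weights until the error no longer improves, which is exactly the iterative process described above.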

5.4.2. Logistic Regression (LR)

Logistic regression is a statistical model that uses a logistic function to model a binary dependent variable (y) in MRI classification data. It models the probability that an MRI shows tumor versus normal tissue by using a linear model to predict the log-odds that y = 1, and then uses the logistic (inverse logit) function to convert the log-odds values into probabilities [106]. However, LR models tend to overfit high-dimensional data. Therefore, regularization methods are often used to prevent overfitting to training set data. Regularization is achieved by using a model that tries to fit the training data well while at the same time avoiding regression weights that are too large [107]. The most common approaches are L1 regularization, which tries to keep the total absolute value of the regression weights low, and L2 or ridge regularization, which tries to keep the total squared value of the regression weights low.
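A minimal sketch of L2-regularized logistic regression fit by gradient descent (the synthetic labels, penalty strength, and learning rate are assumptions chosen for illustration):

```python
import numpy as np

def fit_logreg_l2(X, y, lam=0.1, lr=0.5, n_iter=3000):
    """Logistic regression fit by gradient descent with L2 (ridge)
    regularization; `lam` penalizes large regression weights."""
    w, b, n = np.zeros(X.shape[1]), 0.0, len(y)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # log-odds -> probability
        grad_w = X.T @ (p - y) / n + lam * w     # L2 term shrinks the weights
        grad_b = float(np.mean(p - y))           # intercept left unpenalized
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(float)        # synthetic tumor/normal labels
w, b = fit_logreg_l2(X, y)
acc = float(np.mean(((X @ w + b) > 0) == (y > 0.5)))
print(round(acc, 2))
```

For L1 regularization, the `lam * w` term would be replaced by `lam * np.sign(w)` (a subgradient), which drives some weights exactly to zero.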

5.4.3. Contrastive Learning

Contrastive learning is a ML technique that can learn the general features of a dataset (i.e., the MRI dataset) without labels, by teaching the model which data points are similar or different. This can be formulated as a dictionary look-up problem. This algorithm is considered a particular variant of self-supervised learning (SSL) that is particularly useful for learning image-level representations [108]. One of the advantages of this method is that it can be applied for semi-supervised learning problems when clinical annotations are missing from MRI data. This method permits the use of both labeled and unlabeled data to optimize the performance and learning capacity of the classification model. A method that has gained popularity in the literature recently is the unsupervised pre-train, supervised fine-tune, knowledge distillation paradigm [109].

5.4.4. Deep Learning

Deep learning, also known as deep neural networks (DNNs) or deep structured learning, is a machine learning method based on artificial neural networks that allows computational models composed of multiple processing layers (typically more than 20 layers) to learn representations of data with multiple levels of abstraction [110]. In deep learning, the algorithm learns useful representations and features automatically, directly from the raw imaging data. By far the most common models in deep learning are various variants of ANNs, but there are others as well [103]. Deep learning methods primarily differ from “classical” machine learning approaches by focusing on feature learning, i.e., automatically learning representations of data [103]. In medical imaging, the interest in deep learning is mostly triggered by convolutional neural networks (CNNs) [111]. Features are automatically deduced and optimally tuned for the desired outcome. Deep learning protocols have been applied in cancer prognosis, such as for melanoma, breast cancer, brain tumor, and nasopharyngeal carcinoma [112,113,114,115].
However, models based on deep learning are often vulnerable to the domain shift problem, which may occur when image acquisition settings or imaging modalities are varied [108]. Further, uncertainty quantification and interpretability may additionally be required in such systems before they can be used in practice. Many strategies have been used to improve the performance of DNNs including contrastive learning, self-organized learning, and others. Recently, FocalNet has become one of the preferred iterative information extraction algorithms to be used with DNNs. This algorithm uses the concept of foveal attention to post-process the outputs of deep learning by performing variable sampling of the input/feature space [116]. FocalNet is integrated into an existing task-driven deep learning model without modifying the weights of the network, and layers for performing foveation are automatically selected using a data-driven approach [116].

5.4.5. k-Nearest Neighbors (kNN)

The kNN method is based on the k nearest neighbors’ principle and the variable selection procedure for feature selection reviewed elsewhere [98,117]. The procedure starts with the random selection of a predefined number of features from all selected features. The generated model can then classify an input vector of a new MRI image (i.e., a collection of MRI image features) by assigning it to the most similar class based on the number of neighbors (i.e., k) with known class labels, that vote on which class the input object belongs to. The predicted class will be the result of majority voting of all k nearest neighbors.
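A minimal numpy sketch of the majority-voting step (the toy feature vectors, Euclidean distance metric, and k = 3 are assumptions for illustration; the feature selection procedure cited above is omitted):

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify a query feature vector by majority vote of its k nearest
    training neighbors under Euclidean distance."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]            # indices of the k closest images
    votes = y_train[nearest]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]       # majority vote decides the class

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8], [1.1, 1.1]])
y_train = np.array([0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.95, 0.9])))  # → 1
```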

5.4.6. Support Vector Machines (SVM)

Support vector machines (SVM) are supervised learning models with associated learning algorithms for data analysis; they can be used for classification and regression tasks [118,119]. They are named after the support vectors, the training points closest to the decision boundary, which define the widest margin of separation between the two classes. SVMs gained popularity because, through kernel functions, they can classify data that are not linearly separable.
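A minimal sketch of a linear SVM trained by subgradient descent on the hinge loss (the synthetic data, regularization strength, and learning rate are assumptions; kernels for non-linearly separable data are omitted for brevity):

```python
import numpy as np

def fit_linear_svm(X, y, lam=0.01, lr=0.1, n_iter=2000):
    """Linear SVM via subgradient descent on the hinge loss.

    Labels must be in {-1, +1}; `lam` trades margin width against
    training error.
    """
    w, b, n = np.zeros(X.shape[1]), 0.0, len(y)
    for _ in range(n_iter):
        margins = y * (X @ w + b)
        active = margins < 1                      # points violating the margin
        grad_w = lam * w - (X[active].T @ y[active]) / n
        grad_b = -float(np.sum(y[active])) / n
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)    # separable toy labels
w, b = fit_linear_svm(X, y)
acc = float(np.mean(np.sign(X @ w + b) == y))
print(round(acc, 2))
```

Only the margin-violating points (the support vectors and misclassified samples) contribute to the gradient, which is what makes SVM solutions sparse in the training data.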

5.4.7. Random Forests

The random forests algorithm is an ML technique that uses an ensemble model to make predictions [120]. It essentially uses a bundle of decision trees to make a classification decision. Since ensemble models combine the results from many different models to calculate a response or to assign a class, they often perform better than individual models and are increasingly being used for image classification [98,121]. The random forests algorithm can handle big data, can estimate missing data without compromising accuracy, is less prone to overfitting than individual decision trees, and works well for unbalanced datasets and classification problems. However, it works like a black box with minimal control over what the model does, and its models are difficult to interpret.
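The bagging-and-voting idea can be sketched with a toy ensemble (each "tree" is reduced to a one-split stump, and the data are synthetic; real random forests grow deep trees and also subsample features at every split):

```python
import numpy as np

def fit_stump(X, y):
    """Best single-feature threshold rule: 'x[j] <= t -> left label'."""
    best, best_acc = (0, 0.0, 0, 1), -1.0
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for left, right in ((0, 1), (1, 0)):
                acc = np.mean(np.where(X[:, j] <= t, left, right) == y)
                if acc > best_acc:
                    best, best_acc = (j, t, left, right), acc
    return best

def fit_forest(X, y, n_trees=15, seed=0):
    """Fit each stump on a bootstrap sample of the training data (bagging)."""
    rng = np.random.default_rng(seed)
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))   # bootstrap with replacement
        forest.append(fit_stump(X[idx], y[idx]))
    return forest

def forest_predict(forest, X):
    votes = np.array([np.where(X[:, j] <= t, l, r) for j, t, l, r in forest])
    return (votes.mean(axis=0) > 0.5).astype(int)   # ensemble majority vote

rng = np.random.default_rng(1)
X = rng.random((150, 2))
y = (X[:, 0] > 0.5).astype(int)
forest = fit_forest(X, y)
acc = float(np.mean(forest_predict(forest, X) == y))
print(round(acc, 2))
```

Averaging over many bootstrap-trained learners is what reduces the variance of the individual trees.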

5.4.8. Self-Supervised Learning

Self-supervised learning (SSL) provides a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations, e.g., such as in clinical data, to yield high predictive performance [109,122]. However, extensive validation of the automated algorithms is essential before they can be used in critical decision making in healthcare. One of the self-supervised learning methods that showed improved performance on deep learning models applied a strategy based on ‘context restoration’ to handle unlabeled imaging data [122]. The context restoration strategy is characterized by: (1) its ability to learn semantic image features; (2) it uses the learned image features for subsequent image analysis tasks; and (3) it is simple to implement [122].

5.4.9. Naïve Bayes

The Naïve Bayes classifier is a probabilistic classifier based on applying the Bayes theorem under strong independence assumptions between features [123]. It is considered a supervised learner. A query image is represented by a set of features which are assumed to be independently sampled from a class-specific feature space. Then a kernel density estimation allows the Bayesian network models to achieve higher accuracy levels [123,124]. The Naïve Bayes Classifier can produce very accurate classification results with a minimum training time in comparison with conventional supervised or unsupervised methods.
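A minimal Gaussian Naive Bayes sketch in numpy (the Gaussian per-feature likelihood and the synthetic two-class data are illustrative assumptions; the kernel density variant mentioned above is omitted):

```python
import numpy as np

def fit_gnb(X, y):
    """Per-class feature means, variances, and priors, under the naive
    assumption that features are independent given the class."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(y))
    return params

def gnb_predict(params, x):
    best_c, best_score = None, -np.inf
    for c, (mu, var, prior) in params.items():
        # log P(c) + sum of per-feature Gaussian log-likelihoods
        score = np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                             + (x - mu) ** 2 / var)
        if score > best_score:
            best_c, best_score = c, score
    return best_c

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
params = fit_gnb(X, y)
print(gnb_predict(params, np.array([3.0, 3.0])))  # → 1
```

Training reduces to computing per-class summary statistics, which is why Naive Bayes is so fast to fit.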

5.4.10. Decision Trees

Decision trees use tree-like models of decisions and their possible consequences, producing human-readable rules for the classification task [125]. Decision trees take the form of yes/no questions and are therefore easily interpreted by people. The learning algorithm rapidly searches the many possible combinations of decision points to find those that give the simplest tree with the most accurate results. When the algorithm is run, one sets the maximal number of decision points, i.e., the depth, and the maximal breadth to be searched. At the end, the algorithm determines how many decision points are required to achieve the best accuracy. A decision tree model has high variance and low bias, which leads to unstable output and makes it very sensitive to noise.
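A depth-limited tree of yes/no threshold questions can be grown greedily as follows (the toy data, the misclassification-count split criterion, and the depth limit are assumptions for illustration; production trees use criteria such as Gini impurity):

```python
import numpy as np

def build_tree(X, y, depth=0, max_depth=2):
    """Greedily grow a yes/no threshold tree to a fixed maximal depth; each
    internal node is a human-readable rule 'is feature j <= t?'."""
    majority = int(np.mean(y) > 0.5)
    if depth == max_depth or len(np.unique(y)) == 1:
        return majority                                # leaf node
    best, best_err = None, len(y) + 1
    for j in range(X.shape[1]):                        # search decision points
        for t in np.unique(X[:, j]):
            mask = X[:, j] <= t
            if mask.all() or not mask.any():
                continue
            err = min((y[mask] != 0).sum(), (y[mask] != 1).sum()) + \
                  min((y[~mask] != 0).sum(), (y[~mask] != 1).sum())
            if err < best_err:
                best, best_err = (j, t, mask), err
    if best is None:
        return majority
    j, t, mask = best
    return (j, t, build_tree(X[mask], y[mask], depth + 1, max_depth),
            build_tree(X[~mask], y[~mask], depth + 1, max_depth))

def tree_predict(node, x):
    while isinstance(node, tuple):
        j, t, left, right = node
        node = left if x[j] <= t else right   # answer each yes/no question
    return node

X = np.array([[0.2, 0.1], [0.3, 0.9], [0.8, 0.2], [0.9, 0.9]])
y = np.array([0, 0, 1, 1])
tree = build_tree(X, y)
print([tree_predict(tree, x) for x in X])  # → [0, 0, 1, 1]
```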

5.4.11. Other Machine Learning Methods

New approaches such as federated learning, interactive reporting, and synoptic reporting may help to address the data availability problem in the future; however, curating and annotating data, as well as computational requirements, remain substantial barriers to machine learning applications for MRI data [126].

5.5. Which ML Method Is Best for Identifying Diagnostic MRI Biomarkers?

The best ML methods applied to MRI data analysis should be able to learn useful semantic features from MRI imaging data and lead to improved models for performing medical diagnosis tasks efficiently [122]. However, training good ML models requires a large amount of labelled data that may not be available; it is often difficult to obtain a sufficient number of labelled images for training models. In many scenarios, the dataset in question consists of more unlabeled images than labelled ones. Therefore, boosting the performance of ML models by using unlabeled as well as labelled data is an important but challenging problem [122].
Many ML methods, particularly deep learning, have boosted medical image analysis for disease diagnosis over the past years. Around 2009, it was realized that deep artificial neural networks (DNNs) were outperforming other established modeling methods on a number of important benchmarks [65]. Currently, deep neural networks are considered the state-of-the-art machine learning models across a variety of areas, from MRI image analysis to natural language processing, and are widely deployed in academia and industry [103]. However, there are many challenges to the introduction of deep learning in clinical settings, related to data privacy, difficulties in model interpretability, and workflow integration.
Despite the large number of retrospective studies (Figure 2), there are fewer applications of deep learning in the clinic on a routine basis [127]. The three major use cases for deep learning in MRI diagnostics are: (1) model-free image synthesis, (2) model-based image reconstruction, and (3) image- or pixel-level classification [127]. Hence, deep learning has the potential to improve every step of the MRI diagnostic workflow and to provide value for every user, from the technologists performing the scan and the physicians ordering the imaging to the radiologists providing the interpretation and, most importantly, the patients receiving health care.

5.6. Assessment of Model Performance

For classification models, model performance is usually assessed by generating a confusion matrix and calculating several statistics indicative of model accuracy. In the case when MRI images belong to two classes (e.g., cancer and non-cancer), a 2 × 2 confusion matrix can be defined, where N(1) and N(0) are the numbers of MRI images in the data set that belong to classes (1) and (0), respectively. TP, TN, FP, and FN are the numbers of true positives (malignant MRI predicted as malignant), true negatives (benign MRI predicted as benign), false positives (benign MRI predicted as malignant), and false negatives (malignant MRI predicted as benign), respectively. The following classification accuracy characteristics associated with confusion matrices are widely used in classification machine learning studies: the true positive rate (TPR), also known as recall (R) or sensitivity (SE = TP/N(1)); specificity (SP = TN/N(0)); the false positive rate (FPR), which is 1 − specificity; precision (P = TP/(TP + FP)); and enrichment E = (TP)N/[(TP + FP)N(1)], where N = N(1) + N(0) is the total number of images. Normalized confusion matrices can also be obtained from the non-normalized confusion matrices by dividing the first column by N(1) and the second column by N(0). Normalized enrichment can be defined in the same way as E but is calculated using a normalized confusion matrix: En = (2TP)N(0)/[(TP)N(0) + (FP)N(1)]. En takes values within the interval [0, 2] [98,128].
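These quantities follow directly from the confusion matrix counts; a small worked example (the toy label vectors are assumptions for illustration):

```python
import numpy as np

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # 1 = malignant, 0 = benign
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])

TP = int(np.sum((y_true == 1) & (y_pred == 1)))  # malignant predicted malignant
TN = int(np.sum((y_true == 0) & (y_pred == 0)))  # benign predicted benign
FP = int(np.sum((y_true == 0) & (y_pred == 1)))  # benign predicted malignant
FN = int(np.sum((y_true == 1) & (y_pred == 0)))  # malignant predicted benign

N1, N0 = TP + FN, TN + FP                        # class sizes N(1) and N(0)
sensitivity = TP / N1                            # TPR / recall: TP / N(1)
specificity = TN / N0                            # SP: TN / N(0)
fpr = 1 - specificity                            # false positive rate
precision = TP / (TP + FP)                       # P = TP / (TP + FP)

print(TP, TN, FP, FN)                            # 3 5 1 1
print(sensitivity, round(specificity, 3))        # 0.75 0.833
```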
The receiver operating characteristic (ROC) curve is then created by plotting the TPR against the FPR at various thresholds. ROC and precision-recall (PR) analyses are usually performed side by side, and the area under the curve (AUC) is calculated to assess model performance in each case [129]. Both ROC-AUC area under the curve of receiver operating characteristic curves and PR-AUC area under the curve of precision-recall curves are widely used to assess the performance of ML methods for MRI biomarkers [100,129,130].
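ROC-AUC can also be computed without sweeping thresholds, via the rank-sum (Mann-Whitney) identity: it equals the probability that a randomly chosen positive is scored above a randomly chosen negative. A minimal sketch (tie handling is omitted; the toy scores are assumptions):

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC-AUC via the rank-sum identity; equivalent to integrating TPR
    over FPR across all thresholds (no tie handling)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based score ranks
    n_pos = int(np.sum(y_true == 1))
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
print(roc_auc(y_true, scores))  # → 0.75
```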
However, other model performance metrics have been developed for the imbalanced datasets that are frequently encountered in classification studies. One of these metrics is the correct classification rate (CCR), which has been suggested as a better measure of model accuracy [98,99], using the equation below:
CCR = 0.5 × [C(1)/N(1) + C(2)/N(2)]
where C(j) and N(j) are the number of correctly classified samples and the total number of samples of class j (j = 1, 2), respectively.
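A small worked example shows why the CCR (the mean of per-class accuracies, i.e., balanced accuracy) is preferable to plain accuracy on imbalanced data (the toy label vectors are assumptions for illustration):

```python
import numpy as np

def ccr(y_true, y_pred):
    """Correct classification rate for two classes: 0.5 * (C1/N1 + C2/N2),
    i.e., the mean of the per-class accuracies."""
    rates = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return 0.5 * sum(rates)

# Imbalanced data: plain accuracy looks high, CCR exposes the weak class
y_true = np.array([1] * 2 + [0] * 8)
y_pred = np.array([1, 0] + [0] * 8)
print(np.mean(y_pred == y_true))  # 0.9 plain accuracy
print(ccr(y_true, y_pred))        # 0.75
```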
The accuracy of MRI biomarkers for benign/malignant discrimination has improved dramatically, approaching values higher than 90%, with performance exceeding 80% classification sensitivity and specificity [19,37,131,132,133].

6. Types of MRI Biomarkers According to Clinical Use

6.1. Diagnostic Biomarkers

The Prostate Imaging Reporting and Data System (PI-RADS) has been approved as a diagnostic biomarker in prostate cancer employing multiparametric MRI [134]. Additionally, the PROMIS study [135,136] has emphasized the contribution of multiparametric MRI to the examination of prostate cancer patients. In this study, 740 male patients were enrolled; 576 men underwent multiparametric MRI followed by template prostate mapping and transrectal ultrasound (TRUS) biopsy [135,136]. Results showed that multiparametric MRI is more sensitive (93%, 95% confidence interval (CI) 88–96%) than TRUS biopsy (48%, 42–55%; p < 0.0001) [135,136]. Risk grades evaluate the probability of clinically significant cancer: PI-RADS 5 very high, PI-RADS 4 high, PI-RADS 3 intermediate, PI-RADS 2 low, and PI-RADS 1 very low [1]. A meta-analysis has identified a sensitivity of 0.74 and a specificity of 0.88 for prostate cancer with PI-RADS [137,138].

6.2. Prognostic Biomarkers

Prognostic imaging biomarkers are used for cancer staging in order to divide patients into different risk groups [1]. MRI is considered the basic staging probe for diverse cancers such as rectal cancer [1]. The TNM stage indicates overall 5-year survival: stage I (localized, T1/2, node negative), 95%, compared to stage IV (metastatic, any T or N), 11%. MRI also plays a predictive role, including for progression-free survival (PFS) and resection margin [139,140,141].

6.3. Response Biomarkers

Response biomarkers evaluate the tumor’s response to treatment, which is classified into four categories: progressive disease, stable disease, partial response, and complete response. This classification depends on the change in size of target lesions (>1 cm) or nodes (>1.5 cm in short axis) (Table 3) [1]. The RECIST protocol offers a structured and comprehensive measurement of response to treatment in clinical studies [32]. RECIST is a significant response biomarker in clinical studies and is employed as a surrogate marker [1].

7. Types of MRI Biomarkers Based on Quantitative Ability

7.1. Semi-Quantitative Recording Systems

Semi-quantitative scores are extensively used because visual diagnosis is convenient and correlates with the scoring output [5]. MRI scoring systems for hypoxic-ischemic encephalopathy (HIE) in neonates using T1-weighted (W), T2-W, and diffusion-W images demonstrated that higher post-natal scores were accompanied by inadequate brain function [142]. Similarly, high T2-W scores in cervical spondylosis were linked to illness status and its implications [143,144]. Imaging of osteoarthritis is significant for the diagnostic process [145]. Internet-based knowledge transfer methods employing well-established recording protocols showed agreement between imaging and medical specialties in interpreting T2-W outcomes [146]. Identical recording has been used in multiple sclerosis [147] and rectal wall diagnosis [148]. 18Fluoro-2-deoxy-D-glucose (18FDG) positron emission tomography–computed tomography (PET-CT) imaging has been applied in lymphoma evaluation [149]. Similar scoring has been used in breast, prostate, liver, thyroid, and bladder cancer imaging [150,151,152,153]. MRI scoring has been applied to identify gynecological malignancies [154] and to score renal cancer [155]. Physical evaluation of lung nodule diameter and volume doubling time (VDT) has been widely used in diagnosis, screening, and response anticipation [156,157].

7.2. Quantitative Recording Systems

Quantitative assessment has frequently been used for size and/or volume measurement. Size contributes to the evaluation of benign and malignant diseases [158]. Measurement of ventricular size by echocardiography is versatile and linked to medical protocols [158,159], and left ventricular ejection fraction has been assessed by both ultrasound and MRI. Rheumatoid arthritis with aberrant bone features has been documented with CT as an indicator of disease progress [160]. RECIST (1.0 and 1.1) [158] assesses cancer prognosis; RECIST measurements are simple, but ambiguous and not entirely reliable [161,162]. Although diverse studies have related volume to disease diagnosis [163,164,165,166], volume has not been validated in clinical records because of the need to segment irregularly shaped cancers. Volume is a surrogate for disease progress and response [167]. The metabolic tumor volume (MTV) measured by PET has been related to survival [168,169]. Furthermore, MTV is an indicator of lymphoma and is considered a biomarker for treatment response [170,171,172]. Ultimately, automated volume segmentation is crucial for treatment approval [5].

7.3. Quantitative Imaging Biomarkers

Quantitative imaging biomarkers that delineate tissue hallmarks such as hypoxia, fibrosis, necrosis, perfusion, and diffusion characterize the disease state and reflect histopathology [5]. Numerous quantitative hallmarks can be integrated into mathematical models to evaluate disease progress and changes over time [5]. Physiological databases are organized by disease presence and type and scored against clinical data to derive predictive models that serve as diagnosis-support tools. Such a model has been provided for brain data by querying approved, well-curated databases [173]. Quantitative data embedded in images, together with rigorous acquisition and scoring protocols and machine learning algorithms, have been applied to neurodegenerative disease diagnosis and treatment protocols [174,175].
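As a minimal sketch of how several quantitative hallmarks can be combined into a single predictive model, the logistic scoring below uses entirely hypothetical feature names and weights (they are not taken from any study cited here):

```python
import math

def diagnostic_score(features: dict, weights: dict, bias: float) -> float:
    """Logistic combination of quantitative imaging features into a
    probability-like diagnostic score in (0, 1)."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical normalized hallmark values and model weights.
features = {"perfusion": 0.8, "diffusion": -1.2, "hypoxia": 0.5}
weights = {"perfusion": 1.1, "diffusion": -0.9, "hypoxia": 0.7}
score = diagnostic_score(features, weights, bias=-0.5)
print(round(score, 3))
```

In practice, such weights would be fitted to a scored clinical database rather than hand-set.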

8. Radiomic Signature Biomarkers

Radiomics involves the extraction and measurement of quantitative features from radiographic images [24,176]. Radiomics captures abnormal physiology and can be related to other “omics” disciplines such as proteomics, metabolomics, and genomics [177]. Numerous radiomic features can be derived from a region or volume of interest (ROI/VOI), calculated manually, semi-automatically, or automatically by computational mathematical algorithms [5]. The aggregate of all these features forms the radiomic signature, which is distinct for a tissue, patient, patient group, or disease [85,178]. The radiomic signature depends on the imaging modality (PET, MRI, CT), image parameters and implementation, machine learning method, and VOI/ROI segmentation [179].
Though the radiomic signature is diverse and not tissue selective, it can identify treatment prognosis, resistance, and survival [180]. Radiomics assists in decision making for treatment protocols and risk stratification [5]. Interestingly, X-ray mammography, CT, MRI, PET, and single-photon emission computed tomography (SPECT) have demonstrated promising results in characterizing benign disease [181]. Improvements in image quality and data regulation are obligatory for wider usage. Radiomic fingerprints are multi-component data and call for computational strategies such as neural networks. Furthermore, the reliability of signatures derived from CT and MRI data is adequate [182,183].
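First-order radiomic features of the kind described above can be computed directly from ROI voxel intensities. The sketch below (pure Python, on a synthetic ROI) computes mean, standard deviation, and intensity-histogram entropy, three features commonly included in radiomic signatures; the function and ROI values are illustrative only.

```python
import math
from collections import Counter

def first_order_features(roi):
    """Simple first-order radiomic features from a flat list of
    ROI voxel intensities."""
    n = len(roi)
    mean = sum(roi) / n
    var = sum((v - mean) ** 2 for v in roi) / n
    counts = Counter(roi)
    # Shannon entropy of the intensity histogram (bits).
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "std": math.sqrt(var), "entropy": entropy}

roi = [3, 3, 3, 5, 5, 8, 8, 8]  # synthetic voxel intensities
print(first_order_features(roi))
```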

9. MRI Biomarker Standardization

The reproducibility of radiomic studies remains a non-trivial challenge for prioritizing MRI biomarkers. The lack of standardized definitions of radiomics features has resulted in studies that are difficult to reproduce and validate [184]. Additionally, inadequate reporting by these studies has impeded reproducibility further. As a result, the Image Biomarker Standardization Initiative (IBSI) was established to address these challenges by fulfilling the following objectives: “(a) establish nomenclature and definitions for commonly used radiomics features; (b) establish a general radiomics image processing scheme for calculation of features from imaging; (c) provide data sets and associated reference values for verification and calibration of software implementations for image processing and feature computation; and (d) provide a set of reporting guidelines for studies involving radiomic analyses” [184]. Additionally, the methodologic quality of radiomic studies to produce stable features that can be linked to cancer biology can be evaluated using the radiomics quality score (RQS) [185].
In order to address the problem of inadequate reporting, the American College of Radiology (ACR) endorsed a Reporting and Data Systems (RADS) framework, which provides standardized imaging terminology and report organization to document the findings of imaging procedures [2,4]. Additionally, modern picture archiving and communication systems (PACS) [186] comprise digital modalities connected via the digital imaging and communications in medicine (DICOM) protocol [187]. The DICOM header usually provides the information needed to interpret the body part examined and patient attributes such as position. The type of reported information can be adjusted in the machine settings before performing the imaging procedure.
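To illustrate the kind of information carried in a DICOM header, the sketch below looks up two standard attributes, Body Part Examined (0018,0015) and Patient Position (0018,5100), in a hypothetical parsed header represented as a plain tag-to-value dictionary (a real DICOM reader would supply this mapping; the header values shown are invented):

```python
# Standard DICOM tags (group, element) for the attributes mentioned above.
BODY_PART_EXAMINED = (0x0018, 0x0015)
PATIENT_POSITION = (0x0018, 0x5100)

def describe_study(header):
    """Summarize a parsed DICOM header (tag -> value mapping)."""
    body_part = header.get(BODY_PART_EXAMINED, "unknown")
    position = header.get(PATIENT_POSITION, "unknown")
    return f"body part: {body_part}, patient position: {position}"

# Hypothetical header values; HFS = head first, supine.
header = {BODY_PART_EXAMINED: "PROSTATE", PATIENT_POSITION: "HFS"}
print(describe_study(header))  # body part: PROSTATE, patient position: HFS
```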

10. Selected Examples on MRI Biomarkers in Solid Tumors

10.1. MRI Biomarkers for Prostate Cancer

Prostate cancer (PCa) is one of the most prevalent cancers occurring in men. The early detection of PCa is essential for successful treatment and increased survival [188]. Lately, magnetic resonance imaging (MRI) has gained a progressively significant role in the diagnosis and early detection of PCa [189]. Multiparametric MRI (mpMRI) has proven to be a valuable procedure for the detection, localization, risk stratification, and staging of clinically significant prostate cancer (csPCa). Multiparametric MRI combines the morphological evaluation of T2-weighted imaging (T2WI) with diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE) perfusion imaging, and spectroscopic imaging (MRSI) to better assess prostate morphology and identify tumor growth [190,191,192,193,194,195].
In addition, mpMRI-targeted biopsies have been shown to provide more accurate diagnosis of csPCa and to reduce the number of repeated biopsies needed for correct diagnosis relative to transrectal ultrasound-guided biopsies [196]. However, mpMRI still suffers from limited inter-reader agreement and from variability of diagnostic accuracy depending on the specialist’s experience [29,190,197,198,199].
Numerous studies in the literature have described the potential role of employing MRI and ML for the analysis of prostate gland tissues and cellular densities to detect PCa. For example, McGarry et al. [200] established a model achieving a stable fit for ML-based MRI detection of regions of increased epithelium and decreased lumen density indicative of high-grade PCa.
In addition, volumetric region-of-interest (ROI) analysis of index lesions on mpMRI [201], based on data from T2-weighted, DWI, and DCE images combined with a support vector machine (SVM), has been shown to significantly increase the diagnostic performance of PI-RADS v2 in clinically significant prostate cancer.
Another useful application of ML with MRI has been reported for the accurate distinction of stromal benign prostatic hyperplasia from PCa in the transition zone, a challenging diagnosis, particularly in the presence of small lesions. Using ML-based statistical analysis of quantitative features such as ADC maps, shape, and image texture, high diagnostic accuracy in differentiating small neoplastic lesions from benign ones was demonstrated [202].
The implications and feasibility of multiparametric machine learning and radiomics have been frequently discussed in the literature for the identification and segmentation of clinically significant prostate cancer [203]. A deep learning-based computer-aided diagnostic approach for the identification and segmentation of clinically significant prostate cancer in low-risk patients was recently reported by Arif et al. [204]. The average sensitivity was 82–92% at an average specificity of 43–76%, with an area under the curve (AUC) of 0.65 to 0.89 for lesion volumes ranging from >0.03 to >0.5 cc. In addition, supervised ML classifiers have been used to successfully predict clinically significant prostate cancer from a group of quantitative image features, compared against conventional PI-RADS v2 assessment scores [205].
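The performance figures quoted above (sensitivity, specificity, AUC) all follow from a classifier's scores on labeled cases. The sketch below shows the standard definitions on toy data; the labels and scores are invented for illustration, and AUC is computed as the probability that a random positive case outscores a random negative one.

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly
    (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(auc(labels, scores))  # 1 of 9 positive/negative pairs is misordered -> 8/9
```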

10.2. MRI Biomarkers for Brain Tumors

Brain tumors are graded as benign (grades I and II) or malignant (grades III and IV). Benign (non-progressive) tumors originate in the brain, grow slowly, and tend not to metastasize to other parts of the body, while malignant tumors grow rapidly and are poorly differentiated. Malignant tumors may originate in the brain and metastasize to other organs (primary tumors) or start elsewhere in the body and migrate to the brain (secondary tumors) [206,207].
Magnetic resonance imaging (MRI) is a universal method for differential diagnosis of brain tumors. However, imaging with MRI is always susceptible to human subjectivity, and early brain-tumor detection usually depends on the expertise of the radiologist [208]; thus, accurate diagnosis requires additional medical procedures such as brain biopsy. Unfortunately, biopsy of a brain tumor requires major brain surgery that puts patients at risk. The advancement of new technologies such as machine learning has had a substantial impact on the use of MRI as a diagnostic tool for brain tumors. In addition, imaging biomarkers are routinely used for prognosis and for following up on treatment approaches for brain tumors.
Cheng et al. developed databases to classify tumor types using an augmented tumor region of interest, image dilatation, and ring-form partition. Intensity histograms and the gray-level co-occurrence matrix were used to extract features, achieving an accuracy of 91.28% [209]. Additionally, the convolutional neural network (CNN) has brought enormous improvement to the field of image processing, with particular impact on the segmentation and classification of brain tumors. Brain tumor segmentation methods can be generally classified into three groups: those based on traditional image algorithms, those based on machine learning, and those based on deep learning. Accordingly, CNN-based segmentation is widely used for lung nodules, retinal structures, liver cancer, and gliomas [210]. Milica et al. [211] recently reported a new CNN architecture for brain tumor identification, with good generalization capability and good execution speed, that was tested on T1-weighted contrast-enhanced magnetic resonance images.
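The gray-level co-occurrence matrix used for feature extraction in studies like Cheng et al.'s can be sketched in a few lines. The version below is a simplification (only the horizontal neighbor offset, and just two classic Haralick-style features, contrast and energy) applied to a tiny synthetic two-level image:

```python
def glcm_features(image):
    """Build a normalized gray-level co-occurrence matrix for the
    horizontal (0, 1) offset and return (contrast, energy)."""
    counts = {}
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):  # horizontally adjacent pixel pairs
            counts[(a, b)] = counts.get((a, b), 0) + 1
            total += 1
    contrast = sum(c / total * (i - j) ** 2 for (i, j), c in counts.items())
    energy = sum((c / total) ** 2 for c in counts.values())
    return contrast, energy

image = [[0, 0, 1],
         [0, 1, 1],
         [1, 1, 0]]  # synthetic image quantized to 2 gray levels
contrast, energy = glcm_features(image)
print(contrast, energy)
```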
The use of machine learning and radiomics has been suggested for various applications in the imaging and diagnosis of meningiomas, with promising outcomes [212]. Differentiating between meningeal-based and intra-axial lesions using MRI can be challenging in some cases. Banzato et al. [213] reported the use of a CNN to extract and analyze complex sets of data to discriminate between meningiomas and gliomas in pre- and post-contrast T1 images and T2 images. In their study, an image classifier combining CNN and MRI was developed to distinguish between meningioma and glioma lesions with an accuracy of 94% (MCC = 0.88) on post-contrast T1 images, 91% (MCC = 0.81) on pre-contrast T1 images, and 90% (MCC = 0.8) on T2 images.
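The Matthews correlation coefficient (MCC) reported alongside accuracy above balances all four confusion-matrix cells, making it more informative than accuracy alone on imbalanced data. A minimal implementation of the standard formula (the example counts are invented):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from confusion-matrix counts;
    ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical balanced example: 90% accuracy corresponds to MCC = 0.8,
# the order of magnitude reported for the T2-image classifier above.
print(mcc(tp=45, tn=45, fp=5, fn=5))  # 0.8
```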

11. Assigning and Interpreting of Proper Imaging Biomarkers to Confirm Decision-Making

Computerized quantitative evaluations are convenient to implement in machine learning systems. Therefore, the threshold values that determine the likelihood of disease occurrence versus no disease should be established [214]. Such established values potentiate the use of imaging as a computational biopsy. Biomarker selection depends on the treatment protocol and disease response. For non-selective treatments, tissue necrosis is considered; therefore, biomarkers that evaluate increased free water (CT Hounsfield units) or decreased cell density (ADC) are beneficial. For selective treatments such as anti-angiogenesis therapy, perfusion measurements (CT, MRI, and US) are considered as selective biomarkers [215]. Both non-selective and selective agents disrupt cancer metabolism; therefore, in glycolytic cancers, fluorodeoxyglucose (FDG) assessments are reliable [216]. The deformation of tissues after surgery, changes in normal tissues after radiotherapy [217], and decreases in quantitative differences between metastatic and non-metastatic tissue [218] should also be considered.

12. Progress in Quantitative Imaging Biomarkers as Decision-Making Tools in Clinical Practice

Biomarkers should be reliable and reproducible, in addition to being biologically and clinically meaningful and cost effective [18]. While reproducibility is a necessity, it is not frequently observed in practice [219], because incorporating fundamental research into clinical studies is an arduous task for both patients and investigators. Technical validation determines whether a biomarker can be reproduced at different sites on diverse platforms. Technical validation may take place after biological validation, especially for biological changes that modify imaging biomarker readouts and thereby endorse the values assigned to biomarkers. Correlation between clinical and technical validation precedes the assignment of a biomarker for a specific use. The implementation of imaging biomarkers in clinical diagnosis is assessed as a parameter in medical management, much as circulating tumor DNA is used specifically for cancer identification. The incorporation of imaging biomarkers, like that of tissue and liquid biomarkers, replaces older, simpler protocols. The robustness of a biomarker’s cost is significant in economically limited medical systems [220]; imaging protocols are expensive in contrast to liquid- and tissue-derived biomarkers. Health economic assessment is therefore beneficial when incorporating a new biomarker into clinical diagnosis. The use of imaging biomarkers is a key tool in supporting medical diagnosis protocols.

13. The Challenges for Prioritizing MRI Biomarkers

Despite major advancements in big data analysis and machine learning methods, the development of quantitative imaging biomarkers that can be exploited effectively in medical decisions is hampered by major challenges related to data availability, variability, and lack of reliability [3]. Data availability is impacted by limitations related to data sharing, data ownership, and patient privacy [221]. Furthermore, the absence of international standard protocols, along with quality assurance (QA) and quality control (QC) procedures, contributes to inadequate quantification and interpretation of MRI biomarkers [4,18,222]. This prevents physicians from extracting the clues required to interpret disease status [223] or to assess the efficacy of treatment protocols [22]. Additionally, it decreases our capability of merging MRI biomarkers that have been extracted from different imaging methods [1].

14. Conclusions

In this article, we have provided an overview of ML and MRI data. We discussed the nature of MRI data, local and global features, and most frequently used ML methods for model building to prioritize MRI biomarkers. These biomarkers have the potential to revolutionize cancer care, providing a platform for personalized, high-quality, and cost-effective health care for oncology patients. The application of ML methods for the analysis of MRI data has led to the development of disease-specific biomarkers for many cancers including hematological, lymphatic and solid tumors. Neural networks, contrastive learning and deep learning are becoming the leading methods for prioritizing MRI biomarkers. The performance of MRI biomarkers is now exceeding 80% for most methods and cancer types. MRI biomarker performance for disease classification (i.e., malignancy vs. benign) is exceeding 90% for deep learning, neural networks and SVM. Advances in deep learning and AI are expected to revolutionize MRI biomarkers and increase their utility for preclinical and clinical applications in oncology.

Author Contributions

R.H. generated the idea and manuscript outline. R.H., D.A.S. and S.K.B. participated in reviewing the biomedical literatures, data collection and manuscript writing and editing. A.T. provided scientific critique, suggested important modifications and helped in editing and revising the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

R.H. and D.A.S. acknowledge support from the Deanship of Scientific Research at Al-Zaytoonah University of Jordan (Grant number 2020-2019/17/03).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dregely, I.; Prezzi, D.; Kelly-Morland, C.; Roccia, E.; Neji, R.; Goh, V. Imaging Biomarkers in Oncology: Basics and Application to MRI. J. Magn. Reson. Imaging 2018, 48, 13–26. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Mercado, C.L. BI-RADS Update. Radiol. Clin. N. Am. 2014, 52, 481–487. [Google Scholar] [CrossRef] [PubMed]
  3. Boellaard, R.; Delgado-Bolton, R.; Oyen, W.J.G.; Giammarile, F.; Tatsch, K.; Eschner, W.; Verzijlbergen, F.J.; Barrington, S.F.; Pike, L.C.; Weber, W.A.; et al. FDG PET/CT: EANM Procedure Guidelines for Tumour Imaging: Version 2.0. Eur. J. Nucl. Med. Mol. Imaging 2015, 42, 328–354. [Google Scholar] [CrossRef] [PubMed]
  4. Barentsz, J.O.; Weinreb, J.C.; Verma, S.; Thoeny, H.C.; Tempany, C.M.; Shtern, F.; Padhani, A.R.; Margolis, D.; Macura, K.J.; Haider, M.A.; et al. Synopsis of the PI-RADS v2 Guidelines for Multiparametric Prostate Magnetic Resonance Imaging and Recommendations for Use. Eur. Urol. 2016, 69, 41–49. [Google Scholar] [CrossRef]
  5. DeSouza, N.M.; Achten, E.; Alberich-Bayarri, A.; Bamberg, F.; Boellaard, R.; Clément, O.; Fournier, L.; Gallagher, F.; Golay, X.; Heussel, C.P.; et al. Validated Imaging Biomarkers as Decision-Making Tools in Clinical Trials and Routine Practice: Current Status and Recommendations from the EIBALL* Subcommittee of the European Society of Radiology (ESR). Insights Imaging 2019, 10, 1–6. [Google Scholar] [CrossRef] [Green Version]
  6. Leithner, D.; Helbich, T.H.; Bernard-Davila, B.; Marino, M.A.; Avendano, D.; Martinez, D.F.; Jochelson, M.S.; Kapetas, P.; Baltzer, P.A.T.; Haug, A.; et al. Multiparametric 18F-FDG PET/MRI of the Breast: Are There Differences in Imaging Biomarkers of Contralateral Healthy Tissue between Patients with and without Breast Cancer? J. Nucl. Med. 2020, 61, 20–25. [Google Scholar] [CrossRef]
  7. Jalali, S.; Chung, C.; Foltz, W.; Burrell, K.; Singh, S.; Hill, R.; Zadeh, G. MRI Biomarkers Identify the Differential Response of Glioblastoma Multiforme to Anti-Angiogenic Therapy. Neuro-Oncol. 2014, 16, 868–879. [Google Scholar] [CrossRef] [Green Version]
  8. Moffa, G.; Galati, F.; Collalunga, E.; Rizzo, V.; Kripa, E.; D’Amati, G.; Pediconi, F. Can MRI Biomarkers Predict Triple-Negative Breast Cancer? Diagnostics 2020, 10, 1090. [Google Scholar] [CrossRef]
  9. Grand, D.; Navrazhina, K.; Frew, J.W. A Scoping Review of Non-Invasive Imaging Modalities in Dermatological Disease: Potential Novel Biomarkers in Hidradenitis Suppurativa. Front. Med. 2019, 6. [Google Scholar] [CrossRef] [Green Version]
  10. Just, N. Improving Tumour Heterogeneity MRI Assessment with Histograms. Br. J. Cancer 2014, 111, 2205–2213. [Google Scholar] [CrossRef] [Green Version]
  11. Padhani, A.R.; Liu, G.; Mu-Koh, D.; Chenevert, T.L.; Thoeny, H.C.; Takahara, T.; Dzik-Jurasz, A.; Ross, B.D.; van Cauteren, M.; Collins, D.; et al. Diffusion-Weighted Magnetic Resonance Imaging as a Cancer Biomarker: Consensus and Recommendations. In Neoplasia; Elsevier B.V.: Amsterdam, The Netherlands, 2009; Volume 11, pp. 102–125. [Google Scholar]
  12. Qiao, J.; Xue, S.; Pu, F.; White, N.; Jiang, J.; Liu, Z.R.; Yang, J.J. Molecular Imaging of EGFR/HER2 Cancer Biomarkers by Protein MRI Contrast Agents Topical Issue on Metal-Based MRI Contrast Agents. J. Biol. Inorg. Chem. 2014, 19, 259–270. [Google Scholar] [CrossRef] [Green Version]
  13. Watson, M.J.; George, A.K.; Maruf, M.; Frye, T.P.; Muthigi, A.; Kongnyuy, M.; Valayil, S.G.; Pinto, P.A. Risk Stratification of Prostate Cancer: Integrating Multiparametric MRI, Nomograms and Biomarkers. Future Oncol. 2016, 12, 2417–2430. [Google Scholar] [CrossRef] [Green Version]
  14. Kurhanewicz, J.; Vigneron, D.B.; Ardenkjaer-Larsen, J.H.; Bankson, J.A.; Brindle, K.; Cunningham, C.H.; Gallagher, F.A.; Keshari, K.R.; Kjaer, A.; Laustsen, C.; et al. Hyperpolarized 13C MRI: Path to Clinical Translation in Oncology. Neoplasia 2019, 21, 1–16. [Google Scholar] [CrossRef]
  15. O’Flynn, E.A.M.; Nandita, M.D. Functional Magnetic Resonance: Biomarkers of Response in Breast Cancer. Breast Cancer Res. 2011, 13, 204. [Google Scholar] [CrossRef] [Green Version]
  16. Lopci, E.; Franzese, C.; Grimaldi, M.; Zucali, P.A.; Navarria, P.; Simonelli, M.; Bello, L.; Scorsetti, M.; Chiti, A. Imaging Biomarkers in Primary Brain Tumours. Eur. J. Nucl. Med. Mol. Imaging 2015, 42, 597–612. [Google Scholar] [CrossRef]
  17. Weaver, O.; Leung, J.W.T. Biomarkers and Imaging of Breast Cancer. Am. J. Roentgenol. 2018, 210, 271–278. [Google Scholar] [CrossRef]
  18. O’Connor, J.P.B.; Aboagye, E.O.; Adams, J.E.; Aerts, H.J.W.L.; Barrington, S.F.; Beer, A.J.; Boellaard, R.; Bohndiek, S.E.; Brady, M.; Brown, G.; et al. Imaging Biomarker Roadmap for Cancer Studies. Nat. Rev. Clin. Oncol. 2017, 14, 169–186. [Google Scholar] [CrossRef]
  19. Booth, T.C.; Williams, M.; Luis, A.; Cardoso, J.; Ashkan, K.; Shuaib, H. Machine Learning and Glioma Imaging Biomarkers. Clin. Radiol. 2020, 75, 20–32. [Google Scholar] [CrossRef] [Green Version]
  20. FDA-NIH Biomarker Working Group. BEST (Biomarkers, EndpointS, and Other Tools). Updated Sept. 25 2017, 55. Available online: https://www.ncbi.nlm.nih.gov/books/NBK326791/ (accessed on 1 March 2021).
  21. Waldstein, S.M.; Seeböck, P.; Donner, R.; Sadeghipour, A.; Bogunović, H.; Osborne, A.; Schmidt-Erfurth, U. Unbiased Identification of Novel Subclinical Imaging Biomarkers Using Unsupervised Deep Learning. Sci. Rep. 2020, 10, 12954. [Google Scholar] [CrossRef]
  22. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial Intelligence in Radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef]
  23. About Biomarkers and Qualification|FDA. Available online: https://www.fda.gov/drugs/biomarker-qualification-program/about-biomarkers-and-qualification (accessed on 16 February 2021).
  24. European Society of Radiology. White Paper on Imaging Biomarkers. Insights Imaging 2010, 1, 42–45. [Google Scholar] [CrossRef] [Green Version]
  25. Zhang, D.; Xu, A. Application of Dual-Source CT Perfusion Imaging and MRI for the Diagnosis of Primary Liver Cancer. Oncol. Lett. 2017, 14, 5753–5758. [Google Scholar] [CrossRef] [Green Version]
  26. Heuvelmans, M.A.; Walter, J.E.; Vliegenthart, R.; van Ooijen, P.M.A.; de Bock, G.H.; de Koning, H.J.; Oudkerk, M. Disagreement of Diameter and Volume Measurements for Pulmonary Nodule Size Estimation in CT Lung Cancer Screening. Thorax 2018, 73, 779–781. [Google Scholar] [CrossRef]
  27. Van Riel, S.J.; Ciompi, F.; Jacobs, C.; Winkler Wille, M.M.; Scholten, E.T.; Naqibullah, M.; Lam, S.; Prokop, M.; Schaefer-Prokop, C.; van Ginneken, B. Malignancy Risk Estimation of Screen-Detected Nodules at Baseline CT: Comparison of the PanCan Model, Lung-RADS and NCCN Guidelines. Eur. Radiol. 2017, 27, 4019–4029. [Google Scholar] [CrossRef] [Green Version]
  28. Matoba, M.; Tsuji, H.; Shimode, Y.; Nagata, H.; Tonami, H. Diagnostic Performance of Adaptive 4D Volume Perfusion CT for Detecting Metastatic Cervical Lymph Nodes in Head and Neck Squamous Cell Carcinoma. Am. J. Roentgenol. 2018, 211, 1106–1111. [Google Scholar] [CrossRef]
  29. Zhang, L.; Tang, M.; Chen, S.; Lei, X.; Zhang, X.; Huan, Y. A Meta-Analysis of Use of Prostate Imaging Reporting and Data System Version 2 (PI-RADS V2) with Multiparametric MR Imaging for the Detection of Prostate Cancer. Eur. Radiol. 2017, 27, 5204–5214. [Google Scholar] [CrossRef]
  30. Timmers, J.M.H.; van Doorne-Nagtegaal, H.J.; Zonderland, H.M.; van Tinteren, H.; Visser, O.; Verbeek, A.L.M.; den Heeten, G.J.; Broeders, M.J.M. The Breast Imaging Reporting and Data System (Bi-Rads) in the Dutch Breast Cancer Screening Programme: Its Role as an Assessment and Stratification Tool. Eur. Radiol. 2012, 22, 1717–1723. [Google Scholar] [CrossRef] [Green Version]
  31. Van der Pol, C.B.; Lim, C.S.; Sirlin, C.B.; McGrath, T.A.; Salameh, J.P.; Bashir, M.R.; Tang, A.; Singal, A.G.; Costa, A.F.; Fowler, K.; et al. Accuracy of the Liver Imaging Reporting and Data System in Computed Tomography and Magnetic Resonance Image Analysis of Hepatocellular Carcinoma or Overall Malignancy—A Systematic Review. Gastroenterology 2019, 156, 976–986. [Google Scholar] [CrossRef] [Green Version]
  32. Schwartz, L.H.; Seymour, L.; Litière, S.; Ford, R.; Gwyther, S.; Mandrekar, S.; Shankar, L.; Bogaerts, J.; Chen, A.; Dancey, J.; et al. RECIST 1.1—Standardisation and Disease-Specific Adaptations: Perspectives from the RECIST Working Group. Eur. J. Cancer 2016, 62, 138–145. [Google Scholar] [CrossRef] [Green Version]
  33. Wahl, R.L.; Jacene, H.; Kasamon, Y.; Lodge, M.A. From RECIST to PERCIST: Evolving Considerations for PET Response Criteria in Solid Tumors. J. Nuc. Med. 2009, 50, 122S–150S. [Google Scholar] [CrossRef] [Green Version]
  34. Nael, K.; Bauer, A.H.; Hormigo, A.; Lemole, M.; Germano, I.M.; Puig, J.; Stea, B. Multiparametric MRI for Differentiation of Radiation Necrosis from Recurrent Tumor in Patients with Treated Glioblastoma. Am. J. Roentgenol. 2018, 210, 18–23. [Google Scholar] [CrossRef] [PubMed]
  35. Bastiaannet, E.; Groen, B.; Jager, P.L.; Cobben, D.C.P.; van der Graaf, W.T.A.; Vaalburg, W.; Hoekstra, H.J. The Value of FDG-PET in the Detection, Grading and Response to Therapy of Soft Tissue and Bone Sarcomas; a Systematic Review and Meta-Analysis. Cancer Treat. Rev. 2004, 30, 83–101. [Google Scholar] [CrossRef] [PubMed]
  36. Chang, C.Y.; Chang, S.J.; Chang, S.C.; Yuan, M.K. The Value of Positron Emission Tomography in Early Detection of Lung Cancer in High-Risk Population: A Systematic Review. Clin. Respir. J. 2013, 7, 1–6. [Google Scholar] [CrossRef]
  37. Parekh, V.S.; Macura, K.J.; Harvey, S.; Kamel, I.; EI-Khouli, R.; Bluemke, D.A.; Jacobs, M.A. Multiparametric Deep Learning Tissue Signatures for a Radiological Biomarker of Breast Cancer: Preliminary Results. Med. Phys. 2018, 47, 75–88. [Google Scholar] [CrossRef] [PubMed]
  38. Lu, S.J.; Gnanasegaran, G.; Buscombe, J.; Navalkissoor, S. Single Photon Emission Computed Tomography/Computed Tomography in the Evaluation of Neuroendocrine Tumours: A Review of the Literature. Nucl. Med. Commun. 2013, 34, 98–107. [Google Scholar] [CrossRef] [PubMed]
  39. Hoffmann, U.; Ferencik, M.; Udelson, J.E.; Picard, M.H.; Truong, Q.A.; Patel, M.R.; Huang, M.; Pencina, M.; Mark, D.B.; Heitner, J.F.; et al. Prognostic Value of Noninvasive Cardiovascular Testing in Patients with Stable Chest Pain: Insights from the PROMISE Trial (Prospective Multicenter Imaging Study for Evaluation of Chest Pain). Circulation 2017, 135, 2320–2332. [Google Scholar] [CrossRef] [PubMed]
  40. Ambrosini, V.; Campana, D.; Tomassetti, P.; Fanti, S. 68Ga-Labelled Peptides for Diagnosis of Gastroenteropancreatic NET. Eur. J. Nuc. Med. Mol. Imaging 2012, 39, 52–60. [Google Scholar] [CrossRef] [PubMed]
  41. Maxwell, J.E.; Howe, J.R. Imaging in Neuroendocrine Tumors: An Update for the Clinician. Int. J. Endocr. Oncol. 2015, 2, 159–168. [Google Scholar] [CrossRef] [Green Version]
  42. Zacho, H.D.; Nielsen, J.B.; Afshar-Oromieh, A.; Haberkorn, U.; deSouza, N.; de Paepe, K.; Dettmann, K.; Langkilde, N.C.; Haarmark, C.; Fisker, R.V.; et al. Prospective Comparison of 68Ga-PSMA PET/CT, 18F-Sodium Fluoride PET/CT and Diffusion Weighted-MRI at for the Detection of Bone Metastases in Biochemically Recurrent Prostate Cancer. Eur. J. Nucl. Med. Mol. Imaging 2018, 45, 1884–1897. [Google Scholar] [CrossRef]
  43. Gabriel, M.; Decristoforo, C.; Kendler, D.; Dobrozemsky, G.; Heute, D.; Uprimny, C.; Kovacs, P.; von Guggenberg, E.; Bale, R.; Virgolini, I.J. 68Ga-DOTA-Tyr3-Octreotide PET in Neuroendocrine Tumors: Comparison with Somatostatin Receptor Scintigraphy and CT. J. Nucl. Med. 2007, 48, 508–518. [Google Scholar] [CrossRef]
  44. Park, S.Y.; Zacharias, C.; Harrison, C.; Fan, R.E.; Kunder, C.; Hatami, N.; Giesel, F.; Ghanouni, P.; Daniel, B.; Loening, A.M.; et al. Gallium 68 PSMA-11 PET/MR Imaging in Patients with Intermediate- or High-Risk Prostate Cancer. Radiology 2018, 288, 495–505. [Google Scholar] [CrossRef] [Green Version]
  45. Delgado, A.F.; Delgado, A.F. Discrimination between Glioma Grades II and III Using Dynamic Susceptibility Perfusion MRI: A Meta-Analysis. Am. J. Neuroradiol. 2017, 38, 1348–1355. [Google Scholar] [CrossRef] [Green Version]
  46. Su, C.; Liu, C.; Zhao, L.; Jiang, J.; Zhang, J.; Li, S.; Zhu, W.; Wang, J. Amide Proton Transfer Imaging Allows Detection of Glioma Grades and Tumor Proliferation: Comparison with Ki-67 Expression and Proton MR Spectroscopy Imaging. Am. J. Neuroradiol. 2017, 38, 1702–1709. [Google Scholar] [CrossRef] [Green Version]
  185. Sanduleanu, S.; Woodruff, H.C.; de Jong, E.E.C.; van Timmeren, J.E.; Jochems, A.; Dubois, L.; Lambin, P. Tracking Tumor Biology with Radiomics: A Systematic Review Utilizing a Radiomics Quality Score. Radiother. Oncol. 2018, 127, 349–360. [Google Scholar] [CrossRef]
  186. Strickland, N.H. PACS (Picture Archiving and Communication Systems): Filmless Radiology. Arch. Dis. Child. 2000, 83, 82–86. [Google Scholar] [CrossRef] [PubMed]
  187. Bidgood, W.D.; Horii, S.C.; Prior, F.W.; van Syckle, D.E. Understanding and Using DICOM, the Data Interchange Standard for Biomedical Imaging. J. Am. Med. Inform. Assoc. 1997, 4, 199–212. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  188. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer Statistics. CA Cancer J. Clin. 2018, 68, 7–30. [Google Scholar] [CrossRef] [PubMed]
  189. Mottet, N.; Bellmunt, J.; Bolla, M.; Briers, E.; Cumberbatch, M.G.; de Santis, M.; Fossati, N.; Gross, T.; Henry, A.M.; Joniau, S.; et al. EAU-ESTRO-SIOG Guidelines on Prostate Cancer. Part 1: Screening, Diagnosis, and Local Treatment with Curative Intent. Eur. Urol. 2017, 71, 618–629. [Google Scholar] [CrossRef]
  190. Manenti, G.; Nezzo, M.; Chegai, F.; Vasili, E.; Bonanno, E.; Simonetti, G. DWI of Prostate Cancer: Optimal b -Value in Clinical Practice. Prostate Cancer 2014, 2014, 1–9. [Google Scholar] [CrossRef] [Green Version]
  191. Park, S.Y.; Kim, C.K.; Park, B.K.; Kwon, G.Y. Comparison of Apparent Diffusion Coefficient Calculation between Two-Point and Multipoint b Value Analyses in Prostate Cancer and Benign Prostate Tissue at 3 T: Preliminary Experience. Am. J. Roentgenol. 2014, 202, W287–W294. [Google Scholar] [CrossRef]
  192. Penzkofer, T.; Tempany-Afdhal, C.M. Prostate Cancer Detection and Diagnosis: The Role of MR and Its Comparison with Other Diagnostic Modalities—a Radiologist’s Perspective. NMR Biomed. 2014, 27, 3–15. [Google Scholar] [CrossRef]
  193. Schimmöller, L.; Quentin, M.; Arsov, C.; Hiester, A.; Buchbender, C.; Rabenalt, R.; Albers, P.; Antoch, G.; Blondin, D. MR-Sequences for Prostate Cancer Diagnostics: Validation Based on the PI-RADS Scoring System and Targeted MR-Guided in-Bore Biopsy. Eur. Radiol. 2014, 24, 2582–2589. [Google Scholar] [CrossRef]
  194. Panebianco, V.; Barchetti, F.; Sciarra, A.; Ciardi, A.; Indino, E.L.; Papalia, R.; Gallucci, M.; Tombolini, V.; Gentile, V.; Catalano, C. Multiparametric Magnetic Resonance Imaging vs. Standard Care in Men Being Evaluated for Prostate Cancer: A Randomized Study. Urol. Oncol. Semin. Orig. Investig. 2015, 33, 17.e1–17.e7. [Google Scholar] [CrossRef]
  195. Petrillo, A.; Fusco, R.; Setola, S.V.; Ronza, F.M.; Granata, V.; Petrillo, M.; Carone, G.; Sansone, M.; Franco, R.; Fulciniti, F.; et al. Multiparametric MRI for Prostate Cancer Detection: Performance in Patients with Prostate-Specific Antigen Values between 2.5 and 10 Ng/ML. J. Magn. Reson. Imaging 2014, 39, 1206–1212. [Google Scholar] [CrossRef]
  196. Van der Leest, M.; Cornel, E.; Israël, B.; Hendriks, R.; Padhani, A.R.; Hoogenboom, M.; Zamecnik, P.; Bakker, D.; Setiasti, A.Y.; Veltman, J.; et al. Head-to-Head Comparison of Transrectal Ultrasound-Guided Prostate Biopsy Versus Multiparametric Prostate Resonance Imaging with Subsequent Magnetic Resonance-Guided Biopsy in Biopsy-Naïve Men with Elevated Prostate-Specific Antigen: A Large Prospective Multicenter Clinical Study (Figure Presented.). Eur. Urol. 2019, 75, 570–578. [Google Scholar] [CrossRef] [Green Version]
  197. Glazer, D.I.; Mayo-Smith, W.W.; Sainani, N.I.; Sadow, C.A.; Vangel, M.G.; Tempany, C.M.; Dunne, R.M. Interreader Agreement of Prostate Imaging Reporting and Data System Version 2 Using an In-Bore Mri-Guided Prostate Biopsy Cohort: A Single Institution’s Initial Experience. Am. J. Roentgenol. 2017, 209, W145–W151. [Google Scholar] [CrossRef]
  198. Rosenkrantz, A.B.; Ayoola, A.; Hoffman, D.; Khasgiwala, A.; Prabhu, V.; Smereka, P.; Somberg, M.; Taneja, S.S. The Learning Curve in Prostate MRI Interpretation: Self-Directed Learning versus Continual Reader Feedback. Am. J. Roentgenol. 2017, 208, W92–W100. [Google Scholar] [CrossRef]
  199. Gatti, M.; Faletti, R.; Calleris, G.; Giglio, J.; Berzovini, C.; Gentile, F.; Marra, G.; Misischi, F.; Molinaro, L.; Bergamasco, L.; et al. Prostate Cancer Detection with Biparametric Magnetic Resonance Imaging (BpMRI) by Readers with Different Experience: Performance and Comparison with Multiparametric (MpMRI). Abdom. Radiol. 2019, 44, 1883–1893. [Google Scholar] [CrossRef]
  200. McGarry, S.D.; Hurrell, S.L.; Iczkowski, K.A.; Hall, W.; Kaczmarowski, A.L.; Banerjee, A.; Keuter, T.; Jacobsohn, K.; Bukowy, J.D.; Nevalainen, M.T.; et al. Radio-Pathomic Maps of Epithelium and Lumen Density Predict the Location of High-Grade Prostate Cancer. Int. J. Radiat. Oncol. Biol. Phys. 2018, 101, 1179–1187. [Google Scholar] [CrossRef] [Green Version]
  201. Wang, J.; Wu, C.J.; Bao, M.L.; Zhang, J.; Wang, X.N.; Zhang, Y.D. Machine Learning-Based Analysis of MR Radiomics Can Help to Improve the Diagnostic Performance of PI-RADS v2 in Clinically Relevant Prostate Cancer. Eur. Radiol. 2017, 27, 4082–4090. [Google Scholar] [CrossRef]
  202. Wu, M.; Krishna, S.; Thornhill, R.E.; Flood, T.A.; McInnes, M.D.F.; Schieda, N. Transition Zone Prostate Cancer: Logistic Regression and Machine-Learning Models of Quantitative ADC, Shape and Texture Features Are Highly Accurate for Diagnosis. J. Magn. Reson. Imaging 2019, 50, 940–950. [Google Scholar] [CrossRef] [PubMed]
  203. Wildeboer, R.R.; Mannaerts, C.K.; van Sloun, R.J.G.; Budäus, L.; Tilki, D.; Wijkstra, H.; Salomon, G.; Mischi, M. Automated Multiparametric Localization of Prostate Cancer Based on B-Mode, Shear-Wave Elastography, and Contrast-Enhanced Ultrasound Radiomics. Eur. Radiol. 2020, 30, 806–815. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  204. Arif, M.; Schoots, I.G.; Castillo Tovar, J.; Bangma, C.H.; Krestin, G.P.; Roobol, M.J.; Niessen, W.; Veenland, J.F. Clinically Significant Prostate Cancer Detection and Segmentation in Low-Risk Patients Using a Convolutional Neural Network on Multi-Parametric MRI. Eur. Radiol. 2020, 30, 6582–6592. [Google Scholar] [CrossRef]
  205. Winkel, D.J.; Breit, H.-C.; Shi, B.; Boll, D.T.; Seifert, H.-H.; Wetterauer, C. Predicting Clinically Significant Prostate Cancer from Quantitative Image Features Including Compressed Sensing Radial MRI of Prostate Perfusion Using Machine Learning: Comparison with PI-RADS v2 Assessment Scores. Quant. Imaging Med. Surg. 2020, 10, 808–823. [Google Scholar] [CrossRef] [PubMed]
  206. Classification of Brain Tumors. Available online: https://www.aans.org/en/Media/Classifications-of-Brain-Tumors (accessed on 20 February 2021).
  207. Brain Cancer: Causes, Symptoms & Treatments|CTCA. Available online: https://www.cancercenter.com/cancer-types/brain-cancer (accessed on 20 February 2021).
  208. Afshar, P.; Plataniotis, K.N.; Mohammadi, A. Capsule Networks for Brain Tumor Classification Based on MRI Images and Course Tumor Boundaries. In Proceedings of the ICASSP 2019—IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1368–1372. [Google Scholar]
  209. Cheng, J.; Huang, W.; Cao, S.; Yang, R.; Yang, W.; Yun, Z.; Wang, Z.; Feng, Q. Enhanced Performance of Brain Tumor Classification via Tumor Region Augmentation and Partition. PLoS ONE 2015, 10, e0140381. [Google Scholar] [CrossRef] [PubMed]
  210. Zhang, Z.; Sejdić, E. Radiological Images and Machine Learning: Trends, Perspectives, and Prospects. Comput. Biol. Med. 2019, 108, 354–370. [Google Scholar] [CrossRef] [Green Version]
  211. Badža, M.M.; Barjaktarović, M.C. Classification of Brain Tumors from Mri Images Using a Convolutional Neural Network. Appl. Sci. 2020, 10, 1999. [Google Scholar] [CrossRef] [Green Version]
  212. Ugga, L.; Perillo, T.; Cuocolo, R.; Stanzione, A.; Romeo, V.; Green, R.; Cantoni, V.; Brunetti, A. Meningioma MRI Radiomics and Machine Learning: Systematic Review, Quality Score Assessment, and Meta-Analysis. Neuroradiology 2021, 1–12. [Google Scholar] [CrossRef]
  213. Banzato, T.; Bernardini, M.; Cherubini, G.B.; Zotti, A. A Methodological Approach for Deep Learning to Distinguish between Meningiomas and Gliomas on Canine MR-Images. BMC Vet. Res. 2018, 14, 317. [Google Scholar] [CrossRef]
  214. Kanis, J.A.; Harvey, N.; Cooper, C.; Johansson, H.; Odén, A.; McCloskey, E.; The Advisory Board of the National Osteoporosis Guideline Group; Poole, K.E.; Gittoes, N.; Hope, S. A Systematic Review of Intervention Thresholds Based on FRAX: A Report Prepared for the National Osteoporosis Guideline Group and the International Osteoporosis Foundation. Arch. Osteoporos. 2016, 11, 1–48. [Google Scholar] [CrossRef] [Green Version]
  215. El Alaoui-Lasmaili, K.; Faivre, B. Antiangiogenic Therapy: Markers of Response, “Normalization” and Resistance. Crit. Rev. Oncol. Hematol. 2018, 128, 118–129. [Google Scholar] [CrossRef]
  216. Sheikhbahaei, S.; Mena, E.; Yanamadala, A.; Reddy, S.; Solnes, L.B.; Wachsmann, J.; Subramaniam, R.M. The Value of FDG PET/CT in Treatment Response Assessment, Follow-up, and Surveillance of Lung Cancer. Am. J. Roentgenol. 2017, 208, 420–433. [Google Scholar] [CrossRef]
  217. van Dijk, L.V.; Brouwer, C.L.; van der Laan, H.P.; Burgerhof, J.G.M.; Langendijk, J.A.; Steenbakkers, R.J.H.M.; Sijtsema, N.M. Geometric Image Biomarker Changes of the Parotid Gland Are Associated with Late Xerostomia. Int. J. Radiat. Oncol. Biol. Phys. 2017, 99, 1101–1110. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  218. Goense, L.; van Rossum, P.S.N.; Reitsma, J.B.; Lam, M.G.E.H.; Meijer, G.J.; van Vulpen, M.; Ruurda, J.P.; van Hillegersberg, R. Diagnostic Performance of 18F-FDG PET and PET/CT for the Detection of Recurrent Esophageal Cancer after Treatment with Curative Intent: A Systematic Review and Meta-Analysis. J. Nucl. Med. 2015, 56, 995–1002. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  219. Sullivan, D.C.; Obuchowski, N.A.; Kessler, L.G.; Raunig, D.L.; Gatsonis, C.; Huang, E.P.; Kondratovich, M.; McShane, L.M.; Reeves, A.P.; Barboriak, D.P.; et al. Metrology Standards for Quantitative Imaging Biomarkers1. Radiology 2015, 277, 813–825. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  220. Waterton, J.C.; Pylkkanen, L. Qualification of Imaging Biomarkers for Oncology Drug Development. Eur. J. Cancer 2012, 48, 409–415. [Google Scholar] [CrossRef]
  221. White, T.; Blok, E.; Calhoun, V.D. Data Sharing and Privacy Issues in Neuroimaging Research: Opportunities, Obstacles, Challenges, and Monsters under the Bed. Hum. Brain Mapp. 2020. [Google Scholar] [CrossRef]
  222. Zhuang, M.; García, D.V.; Kramer, G.M.; Frings, V.; Smit, E.F.; Dierckx, R.; Hoekstra, O.S.; Boellaard, R. Variability and Repeatability of Quantitative Uptake Metrics in 18F-FDG PET/CT of Non–Small Cell Lung Cancer: Impact of Segmentation Method, Uptake Interval, and Reconstruction Protocol. J. Nucl. Med. 2019, 60, 600–607. [Google Scholar] [CrossRef] [Green Version]
  223. Barrington, S.F.; Kirkwood, A.A.; Franceschetto, A.; Fulham, M.J.; Roberts, T.H.; Almquist, H.; Brun, E.; Hjorthaug, K.; Viney, Z.N.; Pike, L.C.; et al. PET-CT for Staging and Early Response: Results from the Response-Adapted Therapy in Advanced Hodgkin Lymphoma Study. Blood 2016, 127, 1531–1538. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Workflow for prioritizing ML MRI biomarkers.
Figure 2. Column chart showing the number of MRI articles based on the ML method used. (A) The total number of PubMed MRI articles based on the applied ML method. (B) The total number of PubMed Oncology MRI articles based on the applied ML method.
Table 1. Imaging biomarkers for disease detection with examples.
Disease | Biomarker | Quantitative (Q)/Semi-Quantitative (SQ)/Non-Quantitative (NQ) | Biomarker Uses
Malignant disease | Lung-RADS; Pancreatic Cancer Action Network (PanCan) and National Comprehensive Cancer Network (NCCN) criteria [25,26] | SQ | AUC for malignancy 0.81–0.87 [27]
Malignant disease | CT blood flow, perfusion, and permeability measurements | Q | Sensitivity 0.73, specificity 0.70 [28]; AUC 0.75, sensitivity 0.79, specificity 0.75 [29]
Malignant disease | Breast Imaging (BI)-RADS [30]; Prostate Imaging (PI)-RADS [29]; Liver Imaging (LI)-RADS [31] | SQ | Positive predictive value (PPV): BI-RADS 0, 14.1%; BI-RADS 4, 39.1%; BI-RADS 5, 92.9%. PI-RADS 2: pooled sensitivity 0.85, pooled specificity 0.71. Pooled sensitivity for malignancy 0.93
Malignant disease | Apparent diffusion coefficient (ADC) | Q | Liver AUC 0.82–0.95; prostate AUC 0.84
Malignant disease | RECIST/morphological volume | Q | Ongoing guidelines for treatment evaluation [32]
Malignant disease | Positron Emission Response Criteria in Solid Tumors (PERCIST)/metabolic volume [33] | Q | Ongoing guidelines for treatment evaluation [32]
Liver cancer; recurrent glioblastoma | Dynamic contrast-enhanced (DCE) metrics (perfusion parameters Ktrans, Kep, blood flow, Ve) | Q | Hepatocellular cancer: AUC 0.85, sensitivity 0.85, specificity 0.81 [29]; brain: Ktrans accuracy 86% [34]
Cancer: sarcoma [35]; lung cancer [36] | 18FDG standardized uptake value (SUV) | Q | Sarcoma: sensitivity 0.91, specificity 0.85, accuracy 0.88; lung: sensitivity 0.68–0.95
Cancer | Targeted radionuclides [37]: In-octreotide [38,39]; 68Gallium (Ga)-DOTA-TOC [39] and 68Ga-DOTA-TATE [39,40,41]; 68Ga prostate-specific membrane antigen (PSMA) [42] | NQ | Sensitivity 97%, specificity 92% for octreotide [43]; sensitivity 100%, specificity 100% for PSMA [44]
Brain cancer | Dynamic susceptibility contrast (DSC)-MRI | SQ | AUC = 0.77 for classifying glioma grades II and III [45]
Glioma | Amide proton transfer (APT) imaging | Q | APT accords with cancer grade and Ki-67 index [46]
Rectal cancer; lung cancer | DCE-CT parameters: blood flow, permeability | Q | Blood flow: 75% accuracy for detecting rectal cancers with lymph node metastases [47]; CT permeability anticipated survival regardless of treatment in lung cancer [48]
Cervix cancer; endometrial cancer; rectal cancer; breast cancer | DCE-MRI parameters | Q | Cancer volume with increasing metrics is a significant independent factor for disease-free survival (DFS) and overall survival (OS) in cervical cancer [49]. Low cancer blood flow and a low rate constant for contrast-agent intravasation (kep) correlate with high-risk endometrial cancer [50]. Ktrans, Kep, and Ve are higher in rectal cancers accompanied by metastasis [51]. Ktrans, qualitative iAUC, and ADC predict low-risk breast cancers (AUC of combined parameters 0.78)
Diverse cancer types [52,53] | Radiomic signature [54]; DCE-MR parameters | Q | Data endpoints, feature-detection protocols, and classifiers are important factors in lung cancer prediction [55]. A radiomic signature is significantly associated with lymph node (LN) status in colorectal cancer [56]. Evaluation of therapeutic effect following antiangiogenic agents [57]
Lymphoma | Deauville or Response Evaluation Criteria in Lymphoma (RECIL) score on 18F-FDG-PET | SQ | Assessment of lymphoma treatment in clinical trials employs the sum of the longest diameters of three target lesions [58]
Breast cancer [59]; prostate cancer [60] | Receptor tyrosine-protein kinase erbB-2 (CD340, HER2); prostate-specific membrane antigen (PSMA) | SQ | Selective cancer receptors; investigation of the effect of cancer treatment on receptor expression; assessing therapy response to antiangiogenic agents [57]
Oesophageal cancer | CT perfusion/blood flow | Q | Multivariate analysis identifies blood flow as a predictor of response [61]
Gastrointestinal stromal cancers | CT density (HU) | Q | A decrease in cancer density of >15% on CT identified PET responders with sensitivity 97% and specificity 100%, compared with 52% and 100% by RECIST [61]
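Several of the performance figures reported in Table 1 (sensitivity, specificity, AUC) can be computed directly from patient labels and a continuous biomarker readout. The following is a minimal, pure-Python sketch; the data are hypothetical toy values for illustration, not taken from any cited study:

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive
    case scores higher than a randomly chosen negative case (ties 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical ADC-like scores for 4 malignant (1) and 4 benign (0) lesions.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]  # dichotomize at 0.5

sens, spec = sensitivity_specificity(labels, preds)
```

Note that sensitivity and specificity depend on the chosen threshold, whereas AUC summarizes the biomarker's discrimination across all thresholds, which is why Table 1 reports both kinds of figures.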
Table 2. A comparison between popular machine learning algorithms used for the prioritization of diagnostic MRI biomarkers [88,100,101,102].
Artificial Neural Network (ANN)
- The mathematics behind the classification algorithm is simple.
- Non-linearities and weights allow the neural network (NN) to solve complex problems.
- Long training times are required for the numerous iterations over the training data.
- Tendency to overfit.
- Numerous additional hyperparameters, including the number of hidden layers and hidden nodes, must be tuned for optimal performance.
Contrastive Learning
- Self-supervised, task-independent deep learning technique that allows a model to learn about data even without labels.
- Learns the general features of a dataset by teaching the model which data points are similar or different.
- Can potentially surpass supervised methods.
- May yield suboptimal performance on downstream tasks if the wrong transformation invariances are presumed.
Decision Trees (DTs)
- Easy to visualize and understand.
- Feature selection plays a dominant role in the accuracy of the algorithm: one set of features can provide drastically different performance than another, so large Random Forests can be used to alleviate this problem.
- Prone to overfitting.
Deep Learning (DL)
- Can perform both image analysis (deep feature extraction) and construction of a prediction algorithm, eliminating the separate steps of extracting radiomic features and using them to train a prediction model.
- Can learn from complex datasets and achieve high performance without requiring prior feature extraction.
- Permits massively parallel computation on GPUs.
- Requires tuning additional hyperparameters for better performance, including the number of convolution filters, the filter sizes, and the pooling parameters.
- Requires large training sets; not optimal for pilot studies or small internal datasets.
- Computationally expensive.
k-Nearest Neighbors (kNN)
- Easy to implement, as it only requires calculating distances between points on the basis of their features.
- Computationally expensive for large datasets.
- Does not work well in high dimensions, since a distance term must be computed for every dimension.
- Sensitive to noisy and missing data.
- Requires feature scaling.
- Prone to overfitting.
Logistic Regression
- Constructs linear boundaries, i.e., it assumes linearity between the dependent and independent variables; however, linearly separable data are rarely found in real-world scenarios.
Naïve Bayes
- Models are simple and fast to train, performing well on smaller datasets but showing inferior performance on larger ones.
- Has generally shown superior performance to the Logistic Regression classifier on smaller datasets.
- Less potential for overfitting.
- Shows difficulties with complex datasets because it is a linear classifier.
Random Forests (RFs)
- Less prone to overfitting: reduces the overfitting of individual decision trees and helps improve accuracy.
- Outputs feature importances, which is very useful for model interpretation.
- Works well with both categorical and continuous values, for both classification and regression problems.
- Tolerates missing values in the data by automating missing-value imputation.
- Output can change significantly with small changes in the data.
Self-supervised Learning (SSL)
- Suitable for large unlabeled datasets, but its utility on small datasets is unknown.
- Reduces the relative error rate of few-shot meta-learners, even when datasets are small and only images within the datasets are utilized.
Support Vector Machines (SVM)
- Simple mathematics behind the decision boundary.
- Can be applied in higher dimensions.
- Time-consuming for large datasets, especially when fitting a large-margin decision boundary.
- Prone to overfitting.
- Sensitive to noisy and large datasets.
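To make the kNN entry in Table 2 concrete: classification reduces to computing feature-space distances and taking a majority vote among the k nearest training examples. A minimal pure-Python sketch with hypothetical two-feature "radiomic" vectors (the feature values and class labels below are illustrative only):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    # Euclidean distance from x to every training point.
    dists = [(math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)]
    dists.sort(key=lambda d: d[0])
    # Majority vote among the k closest labels.
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-feature vectors (e.g., normalized texture and shape features).
train_X = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.15),
           (0.90, 0.80), (0.80, 0.90), (0.85, 0.85)]
train_y = ["benign", "benign", "benign",
           "malignant", "malignant", "malignant"]
```

This also illustrates two drawbacks listed in the table: every prediction scans the full training set (expensive for large datasets), and the distance is computed over every dimension, so unscaled or high-dimensional features distort the vote.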
Table 3. Response categories according to changes in tumor lesions.
Category | Target lesions (RECIST) | Nontarget lesions
Progressive disease (PD) | >20% increase in the sum of target-lesion (TL) diameters, with an absolute increase of ≥5 mm; appearance of new lesions | Clear progression of existing nontarget lesions; appearance of new lesions
Stable disease (SD) | Neither PD nor PR | Persistence of ≥1 nontarget lesion
Partial response (PR) | >30% decrease in the sum of TL diameters | Non-PD/CR
Complete response (CR) | Disappearance of all TL; all nodes <10 mm (non-pathological) | Disappearance of nontarget lesions; all nodes <10 mm (non-pathological)
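The target-lesion rules in Table 3 can be expressed as a simple decision procedure. The sketch below is a deliberate simplification, assuming only the thresholds shown in the table: it ignores nontarget lesions, the nodal-size rules, and response-confirmation requirements:

```python
def recist_category(reference_sum_mm, current_sum_mm, new_lesions=False):
    """Classify a RECIST target-lesion response from the reference (baseline
    or nadir) and current sums of target-lesion diameters, in mm.
    Simplified sketch: nontarget-lesion and nodal rules are omitted."""
    if new_lesions:
        return "PD"  # appearance of new lesions is always progression
    change = (current_sum_mm - reference_sum_mm) / reference_sum_mm
    # PD: >20% relative increase AND >=5 mm absolute increase in the TL sum.
    if change > 0.20 and (current_sum_mm - reference_sum_mm) >= 5:
        return "PD"
    # CR: disappearance of all target lesions.
    if current_sum_mm == 0:
        return "CR"
    # PR: >30% decrease in the TL sum.
    if change < -0.30:
        return "PR"
    # SD: neither PD nor PR.
    return "SD"
```

For example, a TL sum going from 100 mm to 60 mm (a 40% decrease) would be classified PR, while 100 mm to 110 mm (a 10% increase) remains SD because the PD threshold is not reached.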
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Hajjo, R.; Sabbah, D.A.; Bardaweel, S.K.; Tropsha, A. Identification of Tumor-Specific MRI Biomarkers Using Machine Learning (ML). Diagnostics 2021, 11, 742. https://doi.org/10.3390/diagnostics11050742

