Review

A Comprehensive Review on Radiomics and Deep Learning for Nasopharyngeal Carcinoma Imaging

1 Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China
2 Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital Affiliated to Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
* Author to whom correspondence should be addressed.
Diagnostics 2021, 11(9), 1523; https://doi.org/10.3390/diagnostics11091523
Submission received: 13 May 2021 / Revised: 10 August 2021 / Accepted: 19 August 2021 / Published: 24 August 2021
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Nasopharyngeal carcinoma (NPC) is one of the most common malignant tumours of the head and neck, and improving the efficiency of its diagnosis and treatment strategies is an important goal. With the development of the combination of artificial intelligence (AI) technology and medical imaging in recent years, an increasing number of studies have been conducted on image analysis of NPC using AI tools, especially radiomics and artificial neural network methods. In this review, we present a comprehensive overview of NPC imaging research based on radiomics and deep learning. These studies depict a promising prospect for the diagnosis and treatment of NPC. The deficiencies of the current studies and the potential of radiomics and deep learning for NPC imaging are discussed. We conclude that future research should establish a large-scale labelled dataset of NPC images and that studies focused on screening for NPC using AI are necessary.

1. Introduction

Nasopharyngeal carcinoma (NPC) is an epithelial carcinoma arising from the nasopharyngeal mucosal lining [1]. According to data from the International Agency for Research on Cancer, there were 133,354 new cases of NPC in 2020, of which 46.9% were diagnosed in China, reflecting an extremely uneven geographical distribution [2] (Figure 1). Although NPC accounts for only 0.7% of all malignant tumours and is relatively rare compared with other cancers, it is among the most common malignant tumours of the head and neck [2,3]. NPC occurs more often in males, with an incidence about 2.5 times that in females [4]. Heredity and genetics play important roles in the development of NPC [5,6,7], and Epstein–Barr virus infection is perhaps the most common causal agent [1,8]. Based on the degree of tumour cell differentiation, the WHO classified NPC into three types in 2003: keratinizing squamous cell carcinoma, non-keratinizing carcinoma, and basaloid squamous cell carcinoma [9]. The prognosis of NPC is generally better than that of most other cancers, with reported overall 5-year survival rates reaching 80% [10]. Radiotherapy for early NPC and concurrent chemoradiotherapy for advanced NPC are recommended by the National Comprehensive Cancer Network [11]. Optimal imaging is crucial for staging and radiotherapy planning in NPC [12]. Several imaging examinations are routinely used for NPC, including computed tomography (CT), magnetic resonance imaging (MRI), and electronic endoscopy. Compared with CT, MRI is the preferred method for primary tumour delineation because of its superior soft-tissue resolution [13,14].
In recent years, artificial intelligence (AI) has been rapidly integrated into the field of medicine, especially medical imaging, and research on the application of AI to NPC has gradually become a hot topic. The Lancet has published a series of reviews titled ‘Nasopharyngeal carcinoma’ every few years [1,12,15,16]. The most recent review, published in 2019, proposed 18 research questions on NPC that remain to be answered, two of which concerned AI: ‘How can reliable radiomics models for improving decision support in NPC be developed?’ and ‘How can artificial intelligence automation for NPC treatment decisions (radiotherapy planning, chemotherapy timing, and regimens) be applied?’. Since then, many articles in this area have emerged, reporting on tumour detection, image segmentation, prognosis prediction, and chemotherapy efficacy prediction in NPC. In these studies, radiomics and deep learning (DL) have gradually become the most important AI tools.
In this work, we focus on studies applying radiomics and DL to image analysis of NPC, aiming to describe their implementation pipelines and to explore their future potential in this field. Because each model is trained on a different dataset, which strongly influences its behaviour, and because there are no consensus metrics or validation protocols for evaluating model performance, we present a comprehensive overview that provides a holistic profile rather than a meta-analysis.
This paper is organized as follows:
  • The pipeline of radiomics and the principle of DL are briefly described;
  • The studies of radiomics and DL for NPC imaging in recent years are summarized;
  • The deficiencies of current studies and the potential of radiomics and DL for NPC imaging are discussed.

2. Pipeline of Radiomics

The suffix ‘-omics’ originally arose from ‘genomics’, which is generally defined as the study of whole genomes [17]. Radiomics, first proposed by Lambin in 2012 [18], is a relatively ‘young’ concept and is considered a natural extension of computer-aided diagnosis and detection systems [19]. It converts imaging data into a high-dimensional mineable feature space using a large number of automatically extracted data-characterization algorithms, revealing tumour features that may not be recognizable by the naked eye and quantitatively describing the tumour phenotype [20,21,22,23]. These extracted features, called radiomic features, include first-order statistics, intensity histograms, shape- and size-based features, texture-based features, and wavelet features [24]. Conceptually, radiomics belongs to the field of machine learning, although human participation is needed. Its basic hypothesis is that a descriptive model constructed from medical imaging data, sometimes supplemented by biological and/or clinical data, can provide predictions of prognosis or diagnosis [25]. A radiomics study can be structured in five steps: data acquisition and pre-processing, tumour segmentation, feature extraction, feature selection, and modelling [26,27] (Figure 2). Some other reviews describe the radiomics pipeline in four steps [25,28], grouping feature selection and model building into one step, considering that the two steps are completed in sequence in the program.
The collection and pre-processing of medical images is the first step in the implementation of radiomics, which relies on well-annotated medical images and clinical data to build target models. CT was the first modality used when radiomics was proposed [18]. Subsequently, MRI [29], positron emission tomography (PET)/CT [30], and ultrasound [31,32] have been widely used for radiomic analysis of different tumours. Image pre-processing (filtering and intensity discretization) is essential because images often come from different hospitals or medical centres, so acquisition parameters differ, and such differences may have unexpected effects on the model.
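As a concrete illustration of this step, the sketch below resamples a volume to isotropic voxels, applies z-score intensity normalization, and discretizes intensities with a fixed bin width. It is a minimal example using SimpleITK and NumPy on a synthetic volume; the spacing, bin width, and stand-in data are illustrative assumptions, not values prescribed by any of the cited studies.
```python
import numpy as np
import SimpleITK as sitk

# stand-in volume: in practice this would be sitk.ReadImage("patient.nii.gz")
img = sitk.GetImageFromArray(np.random.rand(40, 256, 256).astype(np.float32))
img.SetSpacing((0.5, 0.5, 3.0))  # anisotropic voxels, as is typical for MRI

# resample to isotropic 1 mm voxels so texture features are comparable across scanners
new_spacing = (1.0, 1.0, 1.0)
new_size = [int(round(sz * sp / ns))
            for sz, sp, ns in zip(img.GetSize(), img.GetSpacing(), new_spacing)]
img = sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                    img.GetOrigin(), new_spacing, img.GetDirection(),
                    0.0, img.GetPixelID())

# z-score intensity normalization, then fixed-bin-width discretization
arr = sitk.GetArrayFromImage(img)
arr = (arr - arr.mean()) / (arr.std() + 1e-8)
discretized = np.floor(arr / 0.25).astype(np.int32)  # bin width chosen arbitrarily here
```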
Image segmentation is a distinctive feature of radiomics. Segmentation methods generally include manual and semi-automatic segmentation [33]. The segmented region determines which voxels are included in the analysis, so segmentation is a fundamental step of radiomics. However, there is no gold standard for it: segmentation performed by different operators inevitably differs, which introduces heterogeneity into the extracted image features and affects model performance. Segmentation is also a cumbersome step that depends on trained professionals, which makes high-quality data difficult to obtain and increases the difficulty of clinical translation [34,35].
Feature extraction is a technical step in the radiomics pipeline, implemented in software such as MATLAB. The essence of radiomics is to extract, from images, high-throughput features that connect medical images with clinical endpoints. Because the extraction process is affected by the algorithm, methodology, and software parameter settings, these details must be reported in the article [36,37]. The current radiomics pipeline typically incorporates approximately 50–5000 quantitative features, and this number is expected to increase [28].
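For readers who want to see what this step looks like in practice, here is a minimal sketch using PyRadiomics, one common open-source extraction package (the cited studies do not all use it). The toy image, box-shaped ROI mask, and bin width are placeholders; in practice the image and mask would be loaded from the scanner output and the segmentation step.
```python
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor  # pip install pyradiomics

# toy image and box-shaped ROI; in practice: sitk.ReadImage("t1.nii.gz"), etc.
arr = (np.random.rand(40, 64, 64) * 100).astype(np.float32)
mask = np.zeros_like(arr, dtype=np.uint8)
mask[15:25, 20:40, 20:40] = 1                       # label 1 marks the tumour ROI
image, roi = sitk.GetImageFromArray(arr), sitk.GetImageFromArray(mask)

extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)
extractor.enableAllFeatures()                        # first-order, shape, texture classes
features = extractor.execute(image, roi)
radiomic = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(radiomic), "radiomic features extracted")
```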
The purpose of feature selection is to reduce the dimensionality of the feature space and to screen out the features most relevant to the clinical endpoint, thereby avoiding overfitting of the model. Beyond a certain dimensionality, classifier performance decreases as the feature dimension grows, and the time cost of training the model increases. Selecting a more effective feature subset through feature selection algorithms is therefore very important for establishing the model. According to their form, current feature selection methods fall into three categories: filter, wrapper, and embedded methods [38]. Among them, the least absolute shrinkage and selection operator (LASSO), an embedded method, is the most commonly used [39,40].
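A minimal LASSO selection sketch with scikit-learn is shown below, run on a synthetic feature matrix standing in for extracted radiomic features; the cohort size, feature count, and noise level are arbitrary illustrations.
```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))          # stand-in: 500 radiomic features, 120 patients
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=120)  # endpoint driven by 2 features

X = StandardScaler().fit_transform(X)    # LASSO is sensitive to feature scale
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)   # features with non-zero coefficients survive
print(f"{selected.size} of {X.shape[1]} features retained")
```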
Modelling entails finding the best algorithm to link the selected image features with the clinical endpoints. Supervised and unsupervised learning are common strategies [25,41], and the choice of modelling strategy has been shown to affect model performance [42]. It is therefore necessary to select the most appropriate algorithm for the type of data and target; building and comparing multiple models is worthwhile, though not strictly necessary. Model validation is an indispensable part of model building. The best strategy is to validate performance on independent, external data, yet many studies have not implemented this.
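The sketch below illustrates the modelling and validation step: a logistic regression classifier fitted on a training cohort and scored, by AUC, on a held-out external cohort. Both cohorts are synthetic stand-ins; no real patient data or specific study protocol is implied.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# stand-ins for a training cohort and an independent external cohort
X_train, X_ext = rng.normal(size=(150, 20)), rng.normal(size=(60, 20))
y_train = (X_train[:, 0] + rng.normal(scale=0.5, size=150) > 0).astype(int)
y_ext = (X_ext[:, 0] + rng.normal(scale=0.5, size=60) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])  # external validation
print(f"external validation AUC = {auc:.3f}")
```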
To date, radiomics has shown impressive performance in tumour differentiation [43,44,45], prognosis evaluation [46,47,48], therapeutic effect evaluation [49,50,51,52], and tumour metastasis evaluation [53,54,55]. Radiomics has also been widely reported to outperform traditional predictive models based on clinical data and anatomical imaging [24,56,57].

3. The Principle of DL

For a better understanding of DL, it is necessary to clarify the terms AI and machine learning, which often accompany and are confused with DL [58] (Figure 3). The concept of AI was first proposed by John McCarthy, who defined it as the science and engineering of making intelligent machines [59], and the field of AI was founded at a Dartmouth College seminar in 1956 [60]. The scope of AI has since grown much richer, encompassing knowledge representation, natural language processing, visual perception, automatic reasoning, machine learning, intelligent robots, automatic programming, and more; the term has become an umbrella term [61]. Machine learning is a technology for realizing AI. Its core idea is to use algorithms to parse and learn from data and then to make decisions and predictions about events in the real world [62], in contrast to traditional software programs that are hard-coded to solve specific tasks [63]. Its algorithms include supervised learning, such as classification and regression methods [64], unsupervised learning, such as cluster analysis [65], and semi-supervised learning [66]. DL is an algorithmic tool for machine learning [67]. It derives from artificial neural networks (ANNs), which simulate the way the human brain processes information [68], and uses gradient descent and the back-propagation algorithm to automatically adjust its own parameters so that the network fits the data better [69,70]. Compared with traditional ANNs, DL has more powerful fitting capabilities owing to its many more layers of neurons [71]. Depending on the scenario, DL includes a variety of neural network models, such as convolutional neural networks (CNNs), with powerful image-processing capabilities [72], recurrent neural networks (RNNs), which primarily process time-series samples [73,74], and deep belief networks (DBNs), which can deeply represent the training data [75]. In recent years, CNN-based methods have gained popularity in the medical image analysis domain [67,76,77], and almost all studies of NPC imaging using DL adopted CNNs.
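To make the gradient-descent and back-propagation cycle concrete, here is a minimal PyTorch sketch of a small ANN trained on random toy data; the layer sizes, learning rate, and loss function are arbitrary choices for illustration, not a model from any cited study.
```python
import torch
import torch.nn as nn

# a two-layer ANN; the loop below is the gradient-descent + back-propagation
# cycle described in the text
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

x = torch.randn(64, 10)                  # toy batch: 64 samples, 10 inputs
y = torch.randint(0, 2, (64, 1)).float() # toy binary labels

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(net(x), y)            # forward pass
    loss.backward()                      # back-propagation: gradients of loss w.r.t. weights
    optimizer.step()                     # gradient descent: adjust parameters downhill
```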
Four key ideas behind CNNs take advantage of the properties of natural signals: local connections, shared weights, pooling, and the use of many layers [68]. The structure of a CNN, composed mainly of an input layer, hidden layers, and an output layer, is shown in Figure 4. The hidden layers consist of convolutional layers, pooling layers, and fully connected layers. After an image is input, a greyscale image is converted into a single-channel matrix according to the grey value of each pixel, whereas a colour image is converted into a three-channel matrix. Fixed-size convolution kernels (usually 3 × 3) sequentially scan same-sized regions of the image matrix; the values of the kernel are multiplied by the values at the corresponding positions of the image matrix and summed. The kernel then moves right by a fixed stride (after reaching the right edge of the image, it moves down one step and returns to the left edge) to produce the next summed value. When the kernel has scanned the entire image, a new matrix, called a feature map (convolved feature), is produced. The pooling layer downsamples the spatial dimensions, reducing the feature size and compressing the number of parameters to limit overfitting [78]. There are generally three pooling methods: stochastic pooling, average pooling, and, most commonly, max pooling; the most common pooling filter size is 2 × 2 with a stride of 2. Generally, the convolution and pooling processes of a CNN are repeated many times, and the ‘depth’ of a DL model is embodied in the number of convolutions. Common CNN models have considerable depth and large numbers of parameters; for example, the VGG-16 model, runner-up in the 2014 ILSVRC competition, has 16 layers with 138 million parameters. Usually, several fully connected layers sit at the end of a CNN. The fully connected layer maps the learned distributed feature representation to the sample label space and transforms the quantitative values nonlinearly through an activation function, playing the role of a classifier in the model [72]. Owing to this design, CNNs have an advantage in image-related tasks and are widely used in medical imaging research [79].
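The sketch below assembles these pieces into a toy PyTorch CNN: 3 × 3 convolutions, 2 × 2 max pooling with stride 2, and a fully connected classifier head. The channel counts, input size, and two-class output are illustrative assumptions only.
```python
import torch
import torch.nn as nn

# minimal CNN mirroring the structure described in the text
class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # greyscale: 1 input channel
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),       # halves spatial dimensions
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, n_classes),          # maps features to label space
        )

    def forward(self, x):
        return self.classifier(self.features(x))

out = TinyCNN()(torch.randn(1, 1, 64, 64))  # one 64x64 slice -> logits of shape (1, 2)
```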
Because the principles behind deep learning and radiomics differ, the two approaches differ in their typical tasks and in the advantages of their implementation pipelines. Radiomics requires manual segmentation of the lesion to capture radiomic features, so it is more often used for diagnosis prediction, assessment of tumour metastasis, and prediction of therapeutic effect. Deep learning models are often based on the whole image, which contains information about the relationship between the tumour and the surrounding tissues; image synthesis, lesion detection, prognosis prediction, and image segmentation are therefore regarded as tasks better suited to deep learning. Because the input to most deep learning tasks is the full image, which embeds noise from around the lesion, the performance of deep learning models has thus far not matched that of radiomics on the same dataset. However, radiomics retains the fundamental disadvantage that the area of interest must be defined manually, which requires considerable human labour that deep learning does not, so the datasets available for deep learning tasks can be much larger than those for radiomics. In addition, with the rapid development of deep neural network algorithms, the performance of deep learning is steadily improving and now exceeds that of radiomics on many tasks. For example, Google’s EfficientNet series of networks, published at ICML 2019 [80], demonstrated extremely impressive efficiency and accuracy on ImageNet tasks.
Although radiomics arose in 2012 and DL has captured researchers’ attention since 2015, there were few studies on NPC imaging before 2017. Therefore, most of the studies included in this paper are from January 2017 to March 2021. From a technical perspective, radiomics and DL tasks in medical imaging include detection, segmentation, and, most commonly, classification. However, these concepts are abstract for most clinicians who are unfamiliar with AI. Therefore, we summarize the published studies in this area from the perspective of the specific clinical issues to be solved.
After screening and systematically reviewing the retrieved literature, we identified the specific issues that have been considered and divided the studies into three sections according to whether they adopted radiomics, DL, or both. Specific tasks using radiomics fall into the following categories: prognosis prediction, assessment of tumour metastasis, tumour diagnosis, prediction of therapeutic effect, and prediction of complications. Specific tasks using DL fall into prognosis prediction, image synthesis, detection and/or diagnosis, and image segmentation. We summarize the contributions, methods, and results of each paper in chronological order of publication. To evaluate each model, we selected representative outcome indicators for concise presentation (external validation, the best model, and the evaluation indicators area under the curve (AUC), C-index, or Dice similarity coefficient (DSC) were preferred). Similar studies are described together.
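For reference, the snippet below computes the three indicators named above on toy values: AUC via scikit-learn, C-index via lifelines, and DSC with a few lines of NumPy. All numbers are made up for illustration.
```python
import numpy as np
from sklearn.metrics import roc_auc_score
from lifelines.utils import concordance_index  # pip install lifelines

# AUC: discrimination of a binary classifier
auc = roc_auc_score([0, 1, 1, 0, 1], [0.2, 0.9, 0.6, 0.4, 0.7])

# C-index: agreement between predicted scores and observed survival times
# (lifelines expects higher scores to mean longer survival; 0.5 = random)
cindex = concordance_index([5, 10, 30, 42, 60],
                           [0.1, 0.3, 0.5, 0.6, 0.9],
                           [1, 1, 0, 1, 0])

# DSC: overlap between a predicted and a reference segmentation mask
def dice(pred, ref):
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())

print(auc, cindex, dice([1, 1, 0, 0], [1, 0, 0, 0]))
```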

4. Screening of Studies

Because there are no consensus metrics or validation protocols for evaluating model performance, and because we believe that direct comparison of models is tenuous given the heterogeneity of the underlying datasets, we provide a holistic profile of this field instead of a meta-analysis. From this perspective, loose inclusion and exclusion criteria were set (Table 1). In total, 80 studies were included after applying these criteria (Figure 5).

5. Studies Based on Radiomics

5.1. Prognosis Prediction

Prognosis prediction includes tumour risk stratification and recurrence/progression prediction. Among the 31 radiomics-based studies retrieved, 17 were on this topic (Table 2).

5.1.1. 2017

Zhang was one of the first researchers to apply radiomics to NPC imaging, publishing three MRI-based studies in this area in 2017. In [24], a risk stratification model with a C-index of 0.776 was established using a nomogram. In [81], four risk stratification models were established based on random forest, random forest with adaptive boosting, sure independence screening, and a linear support vector machine; performance was evaluated by AUC, and the best AUC in the validation cohort was 0.846. In [82], a model for predicting the progression of advanced NPC was established on a dataset of 113 patients; tumour progression and patient outcome were predicted from the model’s radiomics score, with an AUC of 0.886.
In the study by Ouyang [83], 100 patients with advanced NPC were included. Radiomic features were extracted, and a Cox proportional hazards regression model was established based on MRI images. The model successfully stratified patients into low- or high-risk groups in the validation set (hazard ratio [HR] = 7.28, p = 0.015).

5.1.2. 2018

No studies were retrieved.

5.1.3. 2019

The study by Lv [84] was the only positron emission tomography (PET)/CT-based study in 2019, unlike the others that year. A total of 128 patients with NPC were included and 3276 radiomic features were extracted; 13 clinical characteristics were then selected to establish seven predictive models using Cox proportional hazards regression. The C-index was used to evaluate the models, and the best value in the validation cohort was 0.77.
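Since Cox proportional hazards regression recurs throughout these prognosis studies, here is a minimal sketch with the lifelines package on an invented eight-patient toy cohort; the feature names, follow-up times, and events are placeholders, not data from [84] or any other cited study.
```python
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

# toy cohort: two selected radiomic features plus follow-up time (months)
# and event indicator (1 = progression observed, 0 = censored)
df = pd.DataFrame({
    "feat_glcm":  [0.2, 1.5, 0.7, 2.1, 1.3, 0.6, 0.9, 1.1],
    "feat_shape": [1.0, 0.4, 0.8, 0.2, 1.2, 0.5, 0.9, 0.6],
    "time":       [60, 12, 48, 8, 66, 15, 40, 30],
    "event":      [0, 1, 0, 1, 0, 1, 1, 0],
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.concordance_index_)                 # C-index on the fitted cohort
risk_score = cph.predict_partial_hazard(df)   # radiomics score for risk stratification
```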
Several other MRI-based studies were conducted in 2019 [85,86,87,88,89,90,91], among which [85,91] used a support vector machine (SVM) to establish a prediction model after feature extraction and selection; the best C-index in the validation cohort of [85] was 0.814, while the AUC in [91] was 0.80. The Cox proportional hazard regression and nomogram were used to build predictive models in [86,87,89]; 737 patients were included in [86], and the C-index of the external validation cohort was 0.73, which was better than that of clinical prognostic variables (0.577, 0.605). In [87], 260 radiomic features were extracted from the primary tumour and lymph nodes on axial MRI, and LASSO was applied for feature selection and data dimension reduction. Finally, a C-index of 0.811 was obtained. In [89], univariate and multivariate analyses were used for feature selection from the 970 features that were extracted from 140 patients with NPC, and a radiomic nomogram was built by multivariate analyses, which finally reached a C-index of 0.74. In [90], clinical features of tumour volume, T stage, N stage, overall stage, age, and gender were added after extracting the imaging features. Then, the Cox proportional hazard regression analysis was used to determine the independent predictors of progression-free survival and establish a prediction model. The optimal AUC for the model was 0.825. In the study of [88], conventional imaging methods were used, and disease-free survival and overall survival were used as clinical endpoints. Finally, C-indices of 0.751 and 0.845, respectively, were obtained.

5.1.4. 2020

In [92], a total of 128 patients were included, and PET/CT was used to build the dataset. Each patient’s tumour was partitioned into several phenotypically consistent sub-regions based on individual- and population-level clustering. Subsequently, 202 features were extracted from each sub-region, and the imaging biomarkers and clinicopathological factors were evaluated using multiple Cox regression analyses and Spearman correlation analysis. The authors concluded that the predictive performance of biomarkers from the sub-regions with three PET/CT imaging characteristics was better than that of the entire tumour (C-index, 0.69 vs. 0.58).
In [93,94], MRI images were used to build datasets that included 327 and 136 patients with NPC, respectively. LASSO and recursive feature elimination were used to select features in [93]. The author constructed five models to predict progression-free survival using the univariate Cox proportional hazard model, and the best C-index in the validation set was 0.874. A total of 530 stable features were extracted, and 67 non-redundant features were selected in [94]. Four predictive models were constructed based on the Cox proportional hazard model, and the C-index of the best model was 0.72.
In [95], 100 consecutive cases of NPC were recruited, and the nine most relevant radiomic features were selected from PET- and MRI-derived features using LASSO. A predictive model of NPC staging was established based on logistic regression, and the overall AUCs for PET and MRI were 0.84 and 0.85, respectively.

5.1.5. 2021

In [96], a multi-model approach combined with an SVM was established based on the PET/CT images of 85 patients with stage III–IVB NPC. Using sequential floating forward selection, the model predicted local recurrence and distant metastasis of tumours and achieved an AUC of 0.829.

5.2. Assessment of Tumour Metastasis

5.2.1. 2017–2018

No studies were retrieved.

5.2.2. 2019

A classic radiomics approach was implemented in [97]. After extracting 2780 features from the MRI of 176 patients with NPC, LASSO was used for feature screening, and a radiomics model for predicting the distant metastasis of tumours based on logistic regression was established. The AUC for the validation set was 0.792.

5.2.3. 2020

The authors in [98] developed an MRI-based radiomics nomogram for the differential diagnosis of cervical spine lesions and metastasis after radiotherapy. A total of 279 radiomic features were extracted from the enhanced T1-weighted MRI, and eight radiomic features were selected using LASSO to establish a classifier model that obtained an AUC of 0.72 with the validation set.
In [99], the authors explored whether radiomic features derived from recurrent and non-recurrent regions within the tumour differ. Seven histogram features and 40 texture features were extracted from the MRI images of 14 patients with T4NxM0 NPC. The authors reported seven features that differed significantly between the recurrent and non-recurrent regions.
Manual delineation of the region of interest (ROI), which is widely used in current radiomics studies, has a high degree of variability, yet the tolerance for delineation differences and the possible influence of each step of radiomic analysis are unclear. In [100], based on images of 238 cases of NPC and 146 cases of breast cancer, the authors established a model for assessing sentinel lymph node metastasis using a random forest algorithm and applied erosion, smoothing, and dilation to the ROI. They proposed that differences arising from smoothed delineation, or from dilation of up to 3 pixels around the tumours or lesions, were acceptable.

5.2.4. 2021

In 2021, the study of [96], which was introduced in the section on prognosis prediction, established a model for the assessment of tumour metastasis simultaneously. The best AUC for predicting tumour metastasis was 0.829 (Table 3).

5.3. Tumour Diagnosis

5.3.1. 2017

No studies were retrieved.

5.3.2. 2018

Lv [101] established a diagnostic model to distinguish NPC from chronic nasopharyngitis using logistic regression with leave-one-out cross-validation. A total of 57 radiomic features were extracted from the PET/CT images of 106 patients, and AUCs between 0.81 and 0.89 were reported.

5.3.3. 2019

No studies were retrieved.

5.3.4. 2020

In [102], 76 patients were enrolled, including 41 with local recurrence and 35 with inflammation, as confirmed by pathology. A total of 487 radiomic features were extracted from the PET images. The performance was investigated for 42 cross-combinations derived from six feature selection methods and seven classifiers. The authors concluded that diagnostic models based on radiomic features showed higher AUCs (0.867–0.892) than traditional clinical indicators (AUC = 0.817) (Table 4).

5.4. Prediction of Therapeutic Effect

5.4.1. 2017

No studies were retrieved.

5.4.2. 2018

Wang [103] established an MRI-based radiomics model for the pre-treatment prediction of early response to induction chemotherapy. A total of 120 patients with stage II–IV NPC were enrolled, and the best AUC of the model was 0.822.

5.4.3. 2019

Yu [104] established a radiomics model based on MR images to predict, before the start of treatment, the adaptive radiotherapy eligibility of patients with NPC. After feature extraction, a double cross-validation approach of 100 resampled iterations with 3-fold nested cross-validation was employed within a LASSO logistic regression for feature selection. A prediction model was then established, although the modelling method was not stated. An average AUC of 0.852 was reached on the testing set.

5.4.4. 2020

In [105], 108 patients with advanced NPC were included to establish the dataset. The ANOVA/Mann–Whitney U test, correlation analysis, and LASSO were used to select texture features, and multivariate logistic regression was used to establish a predictive model for the early response to neoadjuvant chemotherapy. Finally, an AUC of 0.905 was obtained for the validation cohort.
In [106], logistic regression was used to establish a prediction model based on MRI images to predict the response of advanced NPC to the induction chemotherapy of a gemcitabine plus cisplatin (GP) regimen and docetaxel plus cisplatin (TP) regimen. In the validation cohort, the predictive ability of the established model for the GP regimen reached an AUC of 0.886, while the AUC in the TP regimen was 0.863.
In [107], 19 radiomic features were selected using a t-test, LASSO regression, and leave-one-out cross-validation after feature extraction from 123 patients with NPC. These 19 radiomic features were combined with clinical features to establish an SVM-based model for predicting response to induction chemotherapy, which reached an AUC of 0.863 (Table 5).

5.5. Predicting Complications

5.5.1. 2017–2018

No studies were retrieved.

5.5.2. 2019

In [108], a radiomics model for predicting early acute xerostomia during radiation therapy was established based on CT images. Ridge regression with cross-validation and recursive feature elimination were used for feature selection, and linear regression was used for modelling. Although the study reported a precision of 0.922, its test cohort included only four patients with NPC, which limits the strength of the evidence.

5.5.3. 2020

The authors in [109] established three radiomics models for the early diagnosis of radiation-induced temporal lobe injury based on the MRIs of 242 patients with NPC. Unlike most other studies, feature selection was performed with the Relief algorithm. The random forest algorithm was used to establish the three early diagnosis models, whose AUCs in the test cohort were 0.830, 0.773, and 0.716, respectively (Table 6).

6. Studies Based on DL

6.1. Prognosis Prediction

6.1.1. 2017–2018

No studies were retrieved.

6.1.2. 2019

In [110], a prognostic model based on 3D DenseNet, a convolutional neural network, was established to predict disease-free survival in 1636 patients with non-metastatic NPC. The model classified patients into low- and high-risk groups based on a cut-off value of the risk score, and the authors reported that it distinguished the two groups correctly (HR = 0.62, p < 0.0001). Similarly, Du [111] established a deep convolutional neural network model for the risk assessment of patients with non-metastatic NPC. This study included 596 patients with NPC, and the model achieved an AUC of 0.828 in the validation set for 3-year disease progression. However, it did not generalize well to the test set (AUC = 0.69), which consisted of 146 patients from a different centre.

6.1.3. 2020

Yang [112] established a weakly supervised deep-learning network using an improved residual network (ResNet) with three input channels to achieve automated T staging of NPC. The images of multiple tumour layers of each patient were labelled uniformly. The model output a predicted T score for each slice, and the slice with the highest T score for each patient was then used to retrain the model and update the network weights. The accuracy of the model in the validation set was 75.59%, and the AUC was 0.943.
In [113], an end-to-end, multi-modality deep survival network was proposed to predict the risk of tumour progression and was compared with four popular state-of-the-art survival methods. The multi-modality deep survival network showed the best performance, with a C-index of 0.651. Similarly, Cui [114] established several prognostic models of NPC based on DL and several conventional algorithms, such as the generalized linear model, extremely randomized trees, gradient boosting machine, and random forest. The average AUCs for overall survival, distant metastasis-free survival, and local-region relapse-free survival obtained from the image-based model were 0.796, 0.752, and 0.721, respectively.
Qiang [34] established a 3D convolutional neural network-based prognosis prediction system for locally advanced NPC using MR images and clinical data. The study included 3444 cases, making it one of only two studies with a sample size of more than 2000. The C-index of the network in the internal validation cohort and the three external validation cohorts reached 0.776, 0.757, 0.719, and 0.746, respectively.
In contrast to the previous studies, Liu’s [115] model for predicting the prognostic value of individual induction chemotherapy, based on the DeepSurv neural network, used pathological images from 1055 patients. The established DL model (C-index: 0.723) performed better than EBV DNA copy number (C-index: 0.612) and N stage (C-index: 0.593).

6.1.4. 2021

In [116], a DL model based on ResNet was established to predict the distant metastasis-free survival of locally advanced NPC. In contrast to the studies published in 2020, the authors of this study removed the background noise and segmented the tumour region as the input image of the DL network. Finally, the optimal AUC of the multiple models combined with the clinical features was 0.808 (Table 7).

6.2. Image Synthesis

6.2.1. 2017–2018

No studies were retrieved.

6.2.2. 2019

In [117], Li used a deep convolutional neural network (DCNN) to generate synthetic CT images from cone-beam CT. The 1%/1 mm gamma pass rate of the synthetic CT was 95.5% ± 1.6%, and the authors proposed that the DCNN model can generate high-quality synthetic CT images from cone-beam CT and be used to calculate radiotherapy doses for patients with NPC. Similarly, Wang [118] used a DCNN to generate CT images from T2-weighted MRI. Compared with real CT, the synthetic CT accurately reconstructed most soft tissue and bone areas, except for the interface between soft tissue and bone and the interfaces between fragile structures in the nasal cavity.

6.2.3. 2020

Tie [119] used a multichannel multipath conditional generative adversarial network to generate CT images from an MRI. The network was developed based on a 5-level residual U-Net with an independent feature extraction network. The highest structural similarity index of the network was 0.92.
In [120], a generative adversarial network was used to generate CT images based on MRIs to guide the planning of radiotherapy for NPC. The 2%/2 mm gamma passing rates of the generated CT images reached 98.68% (Table 8).

6.3. Detection and/or Diagnosis

6.3.1. 2017

No studies were retrieved.

6.3.2. 2018

Three studies in 2018 focused on using neural networks based on nasal endoscopic images to detect and/or diagnose NPC, two of them by Mohammed [121,122]. In [121], an artificial neural network based on Haar features and a genetic algorithm was used to establish an endoscopic diagnosis model for NPC. The authors included a total of 381 NPC endoscopic images, comprising 159 tumours (abnormal cases) and 222 normal tissues; the established network had a high precision of 96.22%, sensitivity of 95.35%, and specificity of 94.55%. Mohammed established three further neural network models in another article [122], with accuracies of 95.66%, 93.87%, and 94.82%. A third study used an SVM, the k-nearest neighbour algorithm, and an ANN to identify NPC and appears to be based on the same dataset as the other two articles [123]. Li’s study included 28,966 eligible images covering NPC and other pathologically confirmed non-nasopharyngeal tumours, such as lymphoma, rhabdomyosarcoma, olfactory neuroblastoma, malignant melanoma, and plasmacytoma. A fully convolutional network based on the Inception architecture was established to detect nasopharyngeal malignancies, and its overall accuracy in the test set reached 88.7%, better than that of the experts [124].

6.3.3. 2019

No studies were retrieved.

6.3.4. 2020

Two similar studies, [125,126], based on pathological images were conducted. The authors in [125] used 1970 whole slide pathological images of 731 cases: 316 cases of inflammation, 138 cases of lymphoid hyperplasia, and 277 cases of NPC. The second study used 726 nasopharyngeal biopsies consisting of 363 images of NPC and 363 of benign nasopharyngeal tissue [126]. In [125], Inception-v3 was used to build the classifier, while ResNeXt, a deep neural network with a residual and inception architecture, was used to build the classifier in [126]. The AUCs obtained in [125,126] were 0.936 and 0.985, respectively.
A study to distinguish early NPC from benign nasopharyngeal hyperplasia was conducted based on MRI images of 203 NPC and 209 benign hyperplasia cases [127]. A CNN-based classifier was established, reaching an AUC of 0.96 and an accuracy of 91.5%, which showed no significant difference in NPC detection compared with the radiologists (accuracy = 87%).
In [128], 3142 NPC and 958 benign hyperplasia images were used; of the studies applying AI tools to NPC imaging, this one had the largest sample size. A self-constrained 3D DenseNet architecture was developed for tumour detection and segmentation. In differentiating NPC from benign hyperplasia, the model showed encouraging performance and obtained a higher overall accuracy than experienced radiologists (97.77% vs. 95.87%) (Table 9).

6.4. Segmentation

Radiotherapy is the most important treatment for NPC. However, it is necessary to accurately delineate the nasopharyngeal tumour volume and the organs at risk on images to limit the collateral damage caused by the radiotherapy itself. Segmentation is therefore particularly relevant to DL in NPC imaging.
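Because most of the segmentation studies below build on U-Net or U-Net-like encoder-decoders, a toy two-level U-Net in PyTorch is sketched here to make the architecture concrete; the channel counts and input size are illustrative assumptions, and this is not the network of any cited study.
```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

# a two-level U-Net: encoder, bottleneck, decoder with one skip connection
class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)           # 32 = 16 upsampled + 16 skipped channels
        self.head = nn.Conv2d(16, 1, 1)    # per-pixel tumour logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([u, e], dim=1)))  # skip connection

mask_logits = MiniUNet()(torch.randn(1, 1, 128, 128))  # output: (1, 1, 128, 128)
```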

6.4.1. 2017

Men [129] developed an end-to-end deep deconvolution neural network (DDNN) to segment tumours, lymph nodes, and the organs at risk around the tumours. A total of 230 patients diagnosed with NPC stages I and II were included. The performance of the DDNN was compared with that of the VGG-16 model: the average DSC of the DDNN was 80.9% for the total nasopharyngeal tumour volume and 62.3% for the total tumour volume of metastatic lymph nodes, while the corresponding DSCs of VGG-16 were 72.3% and 33.7%.

6.4.2. 2018

A CNN was used to build an automatic segmentation model for NPC based on enhanced MRIs in Li’s study [130], and case-by-case leave-one-out cross-validation was used to train the network. Their research obtained a DSC value of 0.89. Wang [131] applied a similar method, but only 15 MRI images of patients with NPC were included, and the DSC obtained was 0.79.
Ma’s [132] paper proposed a discriminative learning-based approach for automated NPC segmentation using CT and MRI. The CNN integrated two standard classification sub-networks with a Siamese-like sub-network so that the two modalities could exploit each other’s information. The authors concluded that the multimodal method achieved higher segmentation performance (DSC = 0.712) than the segmentation method without multimodal similarity metric learning and the method using CT only (DSC = 0.636).

6.4.3. 2019

Daoud [133] proposed a two-stage NPC segmentation method based on CNN using CT images of axial, coronal, and sagittal sections. In the first stage, areas of non-target organs were identified and eliminated from the CT images. The task of the second stage was to identify the NPC from the remaining area of the CT image. The authors concluded that their proposed two-stage segmentation of NPC by integrating three-phase CT images has a satisfactory performance with DSCs of 0.87, 0.85, and 0.91 in axial, coronal, and sagittal sections, respectively.
In [134], a 3D CNN architecture based on VoxResNet was established to automate primary gross tumour volume contouring. It is worth mentioning that this study included a larger sample size (1021 NPCs) than most other studies on this issue. VoxResNet and eight radiation oncologists from multiple centres were evaluated. The DSC of VoxResNet was 0.79, and the accuracies of the oncologists significantly improved with the assistance of VoxResNet (p < 0.001).
In the study by Liang [135], a fully automated deep learning method was developed for organs-at-risk detection and segmentation of CT images, and the DSCs for the segmentation of the brain stem, eye lens, larynx, mandible, oral cavity, mastoid, spinal cord, parotid gland, temporomandibular joint, and optic nerve were between 0.689 and 0.934. Zhong [136] conducted a similar study that combined the DL and boosting algorithm to segment the organs at risk, including the parotid gland, thyroid, and optic nerve, and the corresponding DSCs were 0.92, 0.92, and 0.89, respectively.
Ma published another article in 2019 [137] on NPC image segmentation, similar to the study in 2018 [132]. Based on the developed multimodal convolutional neural network (M-CNN), the authors combined the high-level features extracted by the single-mode CNN and M-CNN to form a combined CNN. The study concluded that the model with multi-mode information fusion performs better than the model without the multi-mode information fusion.
Li [138] proposed and trained a U-Net to automatically segment and delineate tumour targets in patients with NPC. A total of 502 patients from a single medical centre were included, and CT images were collected and pre-processed as a dataset. The trained U-Net finally obtained DSCs of 0.659 for lymph nodes and 0.74 for primary tumours in the testing set.

6.4.4. 2020

Xue [139] proposed a sequential and iterative U-Net (SI-Net) to automatically segment the target volume of the primary tumour and compared it with a conventional U-Net. It was considered that the performance of the SI-Net was better than that of the U-Net (DSCs were 0.84 and 0.80, respectively).
Chen [140] proposed a novel multimodal MRI fusion network to segment NPCs accurately using T1, T2, and contrast-enhanced T1 MRI. The network model was composed of a 3D convolutional block attention module and a residual fusion block and adopted a self-transfer training strategy. A total of 149 patients with NPC were included. The network model obtained a DSC value of 0.724.
In [141], a 3D CNN with a long-range skip connection and multi-scale feature pyramid was developed for NPC segmentation. The network was trained and tested on the 3D MRI images of 120 patients with NPC using five-fold cross-validation, and the 3D CNN achieved a DSC of 0.737.
Ye [142] developed a CNN model based on dense connectivity embedding U-Net to automatically segment primary tumours of NPC on a dual-sequence MRI. A total of 44 MRI images of patients with NPC were included in this study. The average DSC of the external subjects, which consisted of seven patients with NPC, was 0.87.
Considering that NPC is a malignant tumour with a tendency to invade surrounding tissues, it is difficult, against a complex MRI background, to distinguish signs of invasion at the tumour edge from the closely adjacent normal tissues. To address this background-dominance problem and improve the segmentation accuracy of NPC, Li [143] proposed a coarse-to-fine deep neural network, which first predicts a coarse mask using a well-designed segmentation module and then applies a boundary rendering module that exploits semantic information from different layers of feature maps to refine the boundary of the coarse mask. The dataset contained 2000 MRI slices from 596 patients, and the DSC of the model was 0.703.
Jin [144] proposed a ResSE-UNet network with a ternary cross-entropy loss function to segment the total volume of the primary tumour and compared it with a tumour segmentation model based on the original U-Net (U-Net-NN). The dataset consisted of 1757 CT slices from 90 patients with NPC, and ResSE-UNet obtained the best DSC (0.84).
In [145], Wang proposed a modified version of the 3D U-Net model with Res-blocks and SE-blocks to segment the gross tumour volume of the nasopharynx. The novelty of the research is an automatic pre-processing method for cropping the 3D region of interest around the gross tumour volume, which improved the efficiency of image pre-processing. Automatic delineation models based on 3D U-Net, 3D CNN, and 2D DDNN were developed, with DSCs of 82.70%, 80.54%, and 77.97%, respectively.
Ke [128] developed a self-constrained 3D DenseNet architecture for tumour detection and segmentation, which was described in the NPC detection and/or diagnosis section. In terms of automatic segmentation of the tumour area, the architecture obtained a good performance with a DSC of 0.77 in the test cohort.

6.4.5. 2021

CNNs show promise for segmenting malignant tumours on contrast-enhanced MRI, but contrast agents are not suitable for some patients. Can a CNN accurately segment tumours on MRI images without enhancement? To clarify this issue, Wong [146] developed a U-Net to segment primary NPC on non-contrast-enhanced MRI and compared it with contrast-enhanced MRI. The U-Net showed similar performance (DSC = 0.71) on fat-suppressed (fs)-T2W and contrast-enhanced T1W images, while contrast-enhanced fs-T1W images yielded the highest DSC (0.73).
Bai [147] fine-tuned a pre-trained ResNeXt-50 U-Net, which uses a recall-preserving loss, to produce a rough segmentation of the gross tumour volume of NPC. The well-trained ResNeXt-50 U-Net was then applied to refine the fine-grained boundary of the gross tumour volume. The study obtained a DSC of 0.618 in online testing (Table 10).

7. Deep Learning-Based Radiomics

DL has shown great potential to dominate the field of image analysis. It has achieved good results in ROI segmentation [148] and feature extraction [149,150], both of which lie within the implementation pipeline of radiomics. Once trained, a DL model can analyse images automatically, which is one of its greatest strengths compared with radiomics. Many researchers have therefore introduced DL into radiomics (termed deep learning-based radiomics, DLR) and achieved encouraging results [151]. This may be the future trend for AI tools in medical imaging, so we list these studies on NPC imaging separately.
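As a sketch of the core DLR idea of letting a network produce the features that enter the radiomics pipeline, the example below uses a pretrained torchvision ResNet-18 with its classification head removed as a generic feature extractor. The backbone choice, input size, and random ROI patch are assumptions for illustration; the cited studies used their own architectures (e.g., SE-ResNeXt in [153]).
```python
import torch
import torchvision.models as models

# a pretrained ResNet with its classification head removed serves as the
# feature extractor, replacing the handcrafted-feature step of radiomics
# (ResNet18_Weights requires torchvision >= 0.13 and downloads weights once)
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-dim penultimate representation
backbone.eval()

with torch.no_grad():
    roi = torch.randn(1, 3, 224, 224)   # stand-in ROI patch, replicated to 3 channels
    deep_features = backbone(roi)       # shape (1, 512)

# deep_features can then enter the usual radiomics pipeline:
# feature selection (e.g., LASSO) followed by modelling
```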

7.1. Studies Based on Deep Learning-Based Radiomics (DLR)

7.1.1. 2017

No studies were retrieved.

7.1.2. 2018

To investigate the feasibility of radiomics for the analysis of radioresistance, Li [152] trained and validated an artificial neural network, a k-nearest neighbour model, and an SVM model using stratified ten-fold cross-validation. Pre-processed MRI images were used for feature extraction, and principal component analysis was performed for feature reduction. The authors concluded that radiomic analysis can serve as an imaging biomarker to facilitate early salvage treatment for patients with NPC at risk of in-field recurrence. However, only 20 patients with recurrent NPC were recruited for this study.

7.1.3. 2019

Peng [30] developed a DLR-based model to predict the effect of induction chemotherapy on patients with advanced NPC. The study constructed and trained four deep CNN models to extract features within the ROI, replacing the handcrafted feature extraction step of radiomics. LASSO was used to screen the features, and a nomogram integrating clinical indicators was developed. The study reported a C-index of 0.722 on the test set.

7.1.4. 2020

Similar to the study of Peng [30], Zhong [153] established a radiomic nomogram based on MRI images and clinical features and adopted a DCNN (SE-ResNeXt) to quantify the tumour phenotype end-to-end in the process of image feature extraction. The established radiomic nomogram obtained a C-index of 0.788 in the test cohort.
In [154], Zhang innovatively combined the clinical features of patients with nasopharyngeal cancer, the radiomic features based on MRIs, and the DCNN model based on pathological images to construct a multi-scale nomogram to predict the failure-free survival of patients with NPC. The nomogram showed a consistent significant improvement for predicting treatment failure compared with the clinical model in the internal test (C-index: 0.828 vs. 0.602, p < 0.050) and external test (C-index: 0.834 vs. 0.679, p < 0.050) cohorts. (Table 11)

8. Discussion

The widespread application of AI tools in the medical field is a promising trend in the future of medicine. Radiomics and artificial neural networks could be the main approaches to achieving this and are also valuable tools that could completely change the strategy of clinical diagnosis and treatment of tumours [28,155]. Particularly in brain [156,157], breast [158,159], lung [160,161], and prostate tumours [162,163], the application of radiomics and DL is at the forefront and shows great potential [164]. Although radiomics has been applied to medical imaging since 2012 and DL started to be applied around 2015, their application to NPC only began gradually in 2017. Many attempts have been made to apply AI tools for NPC imaging in clinical settings, but there are still some limitations in this field.
First, high-quality research in these areas remains insufficient. Current research is based on images from a single time point, most commonly pre-treatment images used for prediction; research based on time-varying images has not yet been conducted. The treatment of most tumours, especially NPC, is a long, multi-cycle combined process, and it is unrealistic to guide the entire treatment course based only on pre-treatment tumour images. It would therefore be of great value to evaluate the real-time response of tumours to drugs, and the radiation injury risk of important organs around the tumour, based on the dynamic changes of image features during treatment; this could provide clinicians with key information for optimizing treatment strategies. Furthermore, the quality and size of the dataset used for model training are extremely critical and set the upper limit of the accuracy and generalization ability of the established model. However, the number of cases included in most studies is limited, and many studies have not performed external testing of the model. In a recent systematic review [165], studies on MRI-based radiomics in NPC published in recent years were evaluated using the radiomics quality score. Only 8% of the included studies performed external validation, and the authors concluded that radiomics articles on NPC were mostly of low methodological quality, which reflects the current, frustrating situation in this field. Moreover, owing to the lack of massive amounts of structured data, it is difficult for most algorithms to be implemented in clinical practice [166], which is one of the most urgent problems to be solved in the future development of AI tools for NPC.
Second, there are still some aspects that have not been covered in NPC imaging using radiomics and DL. For example, the newly proposed radiogenomic approach combines radiology and genomics [167,168]. Radiogenomics hypothesizes that the texture heterogeneity of tumours could reflect the difference in quantitative imaging features between genome and molecular phenotype, which could indicate the subtype, prognosis, drug response, and other information of the tumour [169]. It has experienced tremendous growth in studies of gliomas [170], breast cancer [171,172], colorectal cancer [173], lung cancer [174], and ovarian cancer [175] over the past years. Radiogenomics has demonstrated significant potential for developing non-invasive prognostic and diagnostic methods, identifying biomarkers for treatment, tumour phenotyping, and genomic signatures [176]. Precision medicine is a disease treatment and prevention approach that considers individual differences in genes, environment, and lifestyle and integrates multiple sources of information to achieve the ultimate goal of personalized management [177]. Radiogenomics, which bridges imaging and genomics [178], could provide a new, non-invasive, fast, and low-cost approach for the implementation of precision medicine [179]. However, to the best of our knowledge, studies on radiogenomics for NPC are currently lacking.
Radiomics is based on feature engineering and retains an inherent defect that manually delineating the area of interest is strictly required, which is labour-consuming. DL could provide a solution for this problem [25]; in addition, it can provide an efficient method for feature extraction in the radiomics pipeline [180]. Therefore, DLR, which combines the advantages of DL and radiomics, has been proposed and widely researched [25,181,182]. Although there are four studies that use DLR in NPC [30,152,153,154], the method is far from being fully developed.
The purpose of AI tools in NPC imaging is to be used in clinical practice, yet there are still many limitations and gaps between research and clinical application. The lack of massive structured data is the most urgent problem to be solved. Considering that one of the biggest challenges in oncology is to develop accurate and cost-effective screening procedures [28], speed and minimal manual work will be a common clinical need in the future, so excessive human intervention in the use of developed AI tools is an obstacle that must be addressed. Examples include the time-consuming segmentation step and the manual, experience-based selection of image slices used to construct datasets containing the most conspicuous tumour images, which are expected to perform better. From this standpoint, one can perceive signs that deep learning may replace traditional radiomics. However, although DL is capable of fully automated analysis, it requires huge amounts of labelled data, and it will be difficult to obtain enough to develop state-of-the-art models and eliminate radiomics in the near future, considering that radiomics research mostly addresses topics such as tumour prognosis and treatment efficacy, for which data collection is costly and time-consuming. Introducing DL into the radiomics pipeline to improve the accuracy and stability of the established models may be a promising approach in the short term.
In summarizing the relevant studies on deep learning and radiomics for NPC imaging, we found that most studies adopted MRI to build their datasets for tasks such as segmentation of lesions and tissues at risk, generation of synthetic high-quality CT images, and radiotherapy planning. This may be due to the better soft-tissue resolution of MRI compared with CT. However, different types of medical images have different advantages in machine learning, so the images used to establish a dataset should be determined by the specific task at hand. For example, CT shows the damage caused by NPC to the skull base bone more clearly than MRI, so CT-based models may perform better for skull base bone-related research. Endoscopic imaging is a distinctive feature of nasopharyngeal carcinoma compared with most tumours: endoscopic images of the nasopharynx are of great significance for the early screening of NPC, which is difficult to achieve with MRI and CT, yet only a few studies in this area are based on endoscopic images. Pathological slices are used in many studies of deep learning and tumours, but only a few relate to NPC. In fact, considerable work remains to be done in this area, such as automatic reporting of specific pathological types of NPC and immunohistochemical results, automated predictions, and slice-based prognostic analysis of patients. It is therefore necessary to adopt datasets containing the medical images appropriate to the specific task.
It is worth mentioning that, from the clinician's perspective, we hope all relevant studies have the potential to promote the application of AI tools in clinical practice; however, the clinical significance of studies that use AI tools to predict the T stage of tumours is limited. The TNM staging system, which is essentially a prediction system for tumour prognosis, represents a body of knowledge combining evidence-based findings from clinical studies with empirical knowledge from site-specific experts [183,184]. T staging is defined by the relationship between the tumour and the surrounding normal anatomical structures, which represents only a small fraction of the image information and can be recognized by the naked eye. More importantly, using AI tools to predict tumour prognosis will challenge the core of the TNM staging system in the future, because the two stand in a parallel relationship (Figure 6) and most prognostic prediction models based on radiomics or DL already outperform the TNM staging system [24,34,56,57]. Therefore, using AI to predict T staging has limited value for solving clinical problems.

9. Conclusions and Future Work

In this review, studies of NPC image analysis based on radiomics and DL published after 2017 are comprehensively summarized (Figure 7). Before summarizing these studies, we provided a brief description of radiomics and DL. We then divided the studies into three groups according to the methods adopted: radiomics-, DL-, and DLR-based. The radiomics-based studies were further divided into five categories (prognosis prediction, assessment of tumour metastasis, diagnosis, prediction of therapeutic effect, and prediction of complications) and summarized in chronological order. The DL-based studies were divided into four categories (prognosis prediction, image synthesis, detection and/or diagnosis, and segmentation) and likewise summarized chronologically. Because of their limited number, the DLR-based studies were summarized in chronological order only. In this way, we present a full picture of the application of radiomics and DL in NPC imaging.
Research on radiomics and DL in NPC imaging has only started in recent years, so many issues need further study: linking NPC imaging features with tumour genes and molecules to promote non-invasive, rapid, and low-cost precision medicine; using multi-stage dynamic imaging to assess tumour response to drugs or radiotherapy and to predict the radiation risk to surrounding vital organs, thereby guiding treatment decisions; and bridging the gap between the AI tools established in studies and clinical applications. In addition, studies based on nasal endoscopic images and pathological images are currently lacking. Accurate and rapid screening for NPC is of particular significance, considering that endoscopic images are usually the primary screening images for most patients; further high-quality research in this regard is needed. Finally, there is still no large-scale, comprehensive, fully labelled NPC dataset comparable to those available for lung and brain tumours. The establishment of large-scale public datasets is an important future task.

Author Contributions

Conceptualization, Z.-Z.T. and Y.-Q.D.; methodology, S.L., Z.-Z.T. and Y.-Q.D.; studies retrieval, H.-L.H. and Z.-L.Z.; data curation, S.L., H.-L.H. and Z.-L.Z.; writing—original draft preparation, S.L.; figures, S.L.; writing—review and editing, Z.-Z.T. and Y.-Q.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank all colleagues for their help with this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, Y.; Chan, A.T.C.; Le, Q.-T.; Blanchard, P.; Sun, Y.; Ma, J. Nasopharyngeal carcinoma. Lancet 2019, 394, 64–80. [Google Scholar] [CrossRef]
  2. Ferlay, J.; Ervik, M.; Lam, F.; Colombet, M.; Mery, L.; Piñeros, M.; Znaor, A.; Soerjomataram, I.; Bray, F. Global Cancer Observatory: Cancer Today; International Agency for Research on Cancer: Lyon, France, 2020; Available online: https://gco.iarc.fr/today (accessed on 31 March 2021).
  3. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Chen, W.; Zheng, R.; Baade, P.D.; Zhang, S.; Zeng, H.; Bray, F.; Jemal, A.; Yu, X.Q.; He, J. Cancer statistics in China, 2015. CA Cancer J. Clin. 2016, 66, 115–132. [Google Scholar] [CrossRef] [Green Version]
  5. Yu, W.M.; Hussain, S.S.M. Incidence of nasopharyngeal carcinoma in Chinese immigrants, compared with Chinese in China and South East Asia: Review. J. Laryngol. Otol. 2009, 123, 1067–1074. [Google Scholar] [CrossRef] [PubMed]
  6. Bei, J.X.; Su, W.H.; Ng, C.C.; Yu, K.; Chin, Y.M.; Lou, P.J.; Hsu, W.L.; McKay, J.D.; Chen, C.J.; Chang, Y.S.; et al. A GWAS meta-analysis and replication study identifies a novel locus within CLPTM1L/TERT associated with Nasopharyngeal carcinoma in individuals of Chinese ancestry. Cancer Epidemiol. Prev. Biomark. 2016, 25, 188–192. [Google Scholar] [CrossRef] [Green Version]
  7. Cui, Q.; Feng, Q.-S.; Mo, H.-Y.; Sun, J.; Xia, Y.-F.; Zhang, H.; Foo, J.N.; Guo, Y.-M.; Chen, L.-Z.; Li, M.; et al. An extended genome-wide association study identifies novel susceptibility loci for nasopharyngeal carcinoma. Hum. Mol. Genet. 2016, 25, 3626–3634. [Google Scholar] [CrossRef] [Green Version]
  8. Chan, K.A.; Woo, J.K.; King, A.; Zee, B.C.-Y.; Lam, W.K.J.; Chan, S.; Chu, S.W.; Mak, C.; Tse, I.O.; Leung, S.Y.; et al. Analysis of Plasma Epstein–Barr Virus DNA to Screen for Nasopharyngeal Cancer. N. Engl. J. Med. 2017, 377, 513–522. [Google Scholar] [CrossRef]
  9. Eveson, J.W.; Auclair, P.; Gnepp, D.R.; El-Naggar, A.K.; Barnes, L. World Health Organization Classification of Tumours: Pathology and Genetics of Head and Neck Tumours; Barnes, L., Eveson, J.W., Reichart, P., Sidransky, D., Eds.; IARC Press: Lyon, France, 2005; pp. 88–111. [Google Scholar]
  10. Lee, A.W.; Ng, W.T.; Chan, L.L.; Hung, W.M.; Chan, C.C.; Sze, H.C.; Chan, O.S.H.; Chang, A.T.; Yeung, R.M. Evolution of treatment for nasopharyngeal cancer—Success and setback in the intensity-modulated radiotherapy era. Radiother. Oncol. 2014, 110, 377–384. [Google Scholar] [CrossRef]
  11. Butterfield, D. Impacts of Water and Export Market Restrictions on Palestinian Agriculture. Toronto: McMaster University and Econometric Research Limited, Applied Research Institute of Jerusalem (ARIJ). Available online: http://www.socserv.mcmaster.ca/kubursi/ebooks/water.htm (accessed on 31 March 2021).
  12. Chua, M.L.K.; Wee, J.T.S.; Hui, E.P.; Chan, A.T.C. Nasopharyngeal carcinoma. Lancet 2016, 387, 1012–1024. [Google Scholar] [CrossRef]
  13. Yuan, H.; Kwong, D.L.-W.; Fong, D.; King, A.; Vardhanabhuti, V.; Lee, V.; Khong, P.-L. Cervical nodal volume for prognostication and risk stratification of patients with nasopharyngeal carcinoma, and implications on the TNM-staging system. Sci. Rep. 2017, 7, 10387. [Google Scholar] [CrossRef] [Green Version]
  14. Chan, A.T.C.; Grégoire, V.; Lefebvre, J.L.; Licitra, L.; Hui, E.P.; Leung, S.F.; Felip, E. Nasopharyngeal cancer: EHNS-ESMO-ESTRO clinical practice guidelines for diagnosis, treatment and follow-up. Ann. Oncol. 2012, 23, vii83–vii85. [Google Scholar] [CrossRef] [PubMed]
  15. Wei, W.; Sham, J.S. Nasopharyngeal carcinoma. Lancet 2005, 365, 2041–2054. [Google Scholar] [CrossRef]
  16. Vokes, E.E.; Liebowitz, D.N.; Weichselbaum, R.R. Nasopharyngeal carcinoma. Lancet 1997, 350, 1087–1091. [Google Scholar] [CrossRef]
  17. Pohlhaus, J.R.; Cook-Deegan, R.M. Genomics Research: World Survey of Public Funding. BMC Genom. 2008, 9, 472. [Google Scholar] [CrossRef] [Green Version]
  18. Lambin, P.; Rios-Velazquez, E.; Leijenaar, R.; Carvalho, S.; van Stiphout, R.G.P.M.; Granton, P.; Zegers, C.M.L.; Gillies, R.; Boellard, R.; Dekker, A.; et al. Radiomics: Extracting more information from medical images using advanced feature analysis. Eur. J. Cancer 2012, 48, 441–446. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images are more than pictures, they are data. Radiology 2016, 278, 563–577. [Google Scholar] [CrossRef] [Green Version]
  20. Aerts, H.; Velazquez, E.R.; Leijenaar, R.T.H.; Parmar, C.; Grossmann, P.; Carvalho, S.; Bussink, J.; Monshouwer, R.; Haibe-Kains, B.; Rietveld, D.; et al. Data from: Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 2014, 5, 4006. [Google Scholar] [CrossRef]
  21. Yip, S.S.F.; Aerts, H.J.W.L. Applications and limitations of radiomics. Phys. Med. Biol. 2016, 61, R150–R166. [Google Scholar] [CrossRef] [Green Version]
  22. Rahmim, A.; Schmidtlein, C.R.; Jackson, A.; Sheikhbahaei, S.; Marcus, C.; Ashrafinia, S.; Soltani, M.; Subramaniam, R.M. A novel metric for quantification of homogeneous and heterogeneous tumors in PET for enhanced clinical outcome prediction. Phys. Med. Biol. 2015, 61, 227–242. [Google Scholar] [CrossRef] [Green Version]
  23. Buvat, I.; Orlhac, F.; Soussan, M.; Orhlac, F. Tumor Texture Analysis in PET: Where Do We Stand? J. Nucl. Med. 2015, 56, 1642–1644. [Google Scholar] [CrossRef] [Green Version]
  24. Zhang, B.; Tian, J.; Dong, D.; Gu, D.; Dong, Y.; Zhang, L.; Lian, Z.; Liu, J.; Luo, X.; Pei, S.; et al. Radiomics Features of Multiparametric MRI as Novel Prognostic Factors in Advanced Nasopharyngeal Carcinoma. Clin. Cancer Res. 2017, 23, 4259–4269. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Afshar, P.; Mohammadi, A.; Plataniotis, K.; Oikonomou, A.; Benali, H. From Handcrafted to Deep-Learning-Based Cancer Radiomics: Challenges and Opportunities. IEEE Signal Process. Mag. 2019, 36, 132–160. [Google Scholar] [CrossRef] [Green Version]
  26. Lambin, P.; Leijenaar, R.T.H.; Deist, T.M.; Peerlings, J.; de Jong, E.E.C.; van Timmeren, J.; Sanduleanu, S.; Larue, R.T.H.M.; Even, A.J.G.; Jochems, A.; et al. Radiomics: The bridge between medical imaging and personalized medicine. Nat. Rev. Clin. Oncol. 2017, 14, 749–762. [Google Scholar] [CrossRef] [PubMed]
  27. Liu, Z.; Wang, S.; Dong, D.; Wei, J.; Fang, C.; Zhou, X.; Sun, K.; Li, L.; Li, B.; Wang, M.; et al. The Applications of Radiomics in Precision Diagnosis and Treatment of Oncology: Opportunities and Challenges. Theranostics 2019, 9, 1303–1322. [Google Scholar] [CrossRef]
  28. Limkin, E.J.; Sun, R.; Dercle, L.; Zacharaki, E.I.; Robert, C.; Reuzé, S.; Schernberg, A.; Paragios, N.; Deutsch, E.; Ferté, C. Promises and challenges for the implementation of computational medical imaging (radiomics) in oncology. Ann. Oncol. 2017, 28, 1191–1206. [Google Scholar] [CrossRef]
  29. Ford, J.; Dogan, N.; Young, L.; Yang, F. Quantitative Radiomics: Impact of Pulse Sequence Parameter Selection on MRI-Based Textural Features of the Brain. Contrast Media Mol. Imaging 2018, 2018. [Google Scholar] [CrossRef] [Green Version]
  30. Peng, H.; Dong, D.; Fang, M.-J.; Li, L.; Tang, L.-L.; Chen, L.; Li, W.-F.; Mao, Y.-P.; Fan, W.; Liu, L.-Z.; et al. Prognostic Value of Deep Learning PET/CT-Based Radiomics: Potential Role for Future Individual Induction Chemotherapy in Advanced Nasopharyngeal Carcinoma. Clin. Cancer Res. 2019, 25, 4271–4279. [Google Scholar] [CrossRef] [Green Version]
  31. Guo, Y.; Hu, Y.; Qiao, M.; Wang, Y.; Yu, J.; Li, J.; Chang, C. Radiomics Analysis on Ultrasound for Prediction of Biologic Behavior in Breast Invasive Ductal Carcinoma. Clin. Breast Cancer 2018, 18, e335–e344. [Google Scholar] [CrossRef]
  32. Song, G.; Xue, F.; Zhang, C. A Model Using Texture Features to Differentiate the Nature of Thyroid Nodules on Sonography. J. Ultrasound Med. 2015, 34, 1753–1760. [Google Scholar] [CrossRef]
  33. Polan, D.F.; Brady, S.L.; Kaufman, R.A. Tissue segmentation of computed tomography images using a Random Forest algorithm: A feasibility study. Phys. Med. Biol. 2016, 61, 6553. [Google Scholar] [CrossRef] [PubMed]
  34. Qiang, M.; Li, C.; Sun, Y.; Sun, Y.; Ke, L.; Xie, C.; Zhang, T.; Zou, Y.; Qiu, W.; Gao, M.; et al. A Prognostic Predictive System Based on Deep Learning for Locoregionally Advanced Nasopharyngeal Carcinoma. J. Natl. Cancer Inst. 2021, 113, 606–615. [Google Scholar] [CrossRef]
  35. Aerts, H.J. Data Science in Radiology: A Path Forward. Clin. Cancer Res. 2017, 24, 532–534. [Google Scholar] [CrossRef] [Green Version]
  36. Hatt, M.; Tixier, F.; Pierce, L.; Kinahan, P.; Le Rest, C.C.; Visvikis, D. Characterization of PET/CT images using texture analysis: The past, the present… any future? Eur. J. Nucl. Med. Mol. Imaging 2017, 44, 151–165. [Google Scholar] [CrossRef]
  37. Zhang, L.; Fried, D.V.; Fave, X.J.; Hunter, L.A.; Yang, J.; Court, L.E. IBEX: An open infrastructure software platform to facilitate collaborative work in radiomics. Med. Phys. 2015, 42, 1341–1353. [Google Scholar] [CrossRef] [PubMed]
  38. Bagherzadeh-Khiabani, F.; Ramezankhani, A.; Azizi, F.; Hadaegh, F.; Steyerberg, E.W.; Khalili, D. A tutorial on variable selection for clinical prediction models: Feature selection methods in data mining could improve the results. J. Clin. Epidemiol. 2016, 71, 76–85. [Google Scholar] [CrossRef] [PubMed]
  39. Guo, J.; Liu, Z.; Shen, C.; Li, Z.; Yan, F.; Tian, J.; Xian, J. MR-based radiomics signature in differentiating ocular adnexal lymphoma from idiopathic orbital inflammation. Eur. Radiol. 2018, 28, 3872–3881. [Google Scholar] [CrossRef]
  40. Kim, J.Y.; Park, J.E.; Jo, Y.; Shim, W.H.; Nam, S.J.; Kim, J.H.; Yoo, R.E.; Choi, S.H.; Kim, H.S. Incorporating diffusion-and perfusion-weighted MRI into a radiomics model improves diagnostic performance for pseudoprogression in glioblastoma patients. Neuro-Oncology 2019, 21, 404–414. [Google Scholar] [CrossRef]
  41. Zhang, Y.; Oikonomou, A.; Wong, A.; Haider, M.A.; Khalvati, F. Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer. Sci. Rep. 2017, 7, 46349. [Google Scholar] [CrossRef] [PubMed]
  42. Grootjans, W.; Tixier, F.; van der Vos, C.S.; Vriens, D.; Le Rest, C.C.; Bussink, J.; Oyen, W.J.; de Geus-Oei, L.-F.; Visvikis, D.; Visser, E.P. The Impact of Optimal Respiratory Gating and Image Noise on Evaluation of Intratumor Heterogeneity on 18F-FDG PET Imaging of Lung Cancer. J. Nucl. Med. 2016, 57, 1692–1698. [Google Scholar] [CrossRef] [Green Version]
  43. Mahapatra, D.; Poellinger, A.; Shao, L.; Reyes, M. Interpretability-Driven Sample Selection Using Self Supervised Learning for Disease Classification and Segmentation. IEEE Trans. Med. Imaging 2021. [Google Scholar] [CrossRef]
  44. Conti, A.; Duggento, A.; Indovina, I.; Guerrisi, M.; Toschi, N. Radiomics in breast cancer classification and prediction. Semin. Cancer Biol. 2021, 72, 238–250. [Google Scholar] [CrossRef]
  45. Hawkins, S.; Wang, H.; Liu, Y.; Garcia, A.; Stringfield, O.; Krewer, H.; Li, Q.; Cherezov, D.; Gatenby, R.A.; Balagurunathan, Y.; et al. Predicting Malignant Nodules from Screening CT Scans. J. Thorac. Oncol. 2016, 11, 2120–2128. [Google Scholar] [CrossRef] [Green Version]
  46. Verduin, M.; Primakov, S.; Compter, I.; Woodruff, H.; van Kuijk, S.; Ramaekers, B.; Dorsthorst, M.T.; Revenich, E.; ter Laan, M.; Pegge, S.; et al. Prognostic and Predictive Value of Integrated Qualitative and Quantitative Magnetic Resonance Imaging Analysis in Glioblastoma. Cancers 2021, 13, 722. [Google Scholar] [CrossRef]
  47. Jiang, Y.; Wang, H.; Wu, J.; Chen, C.; Yuan, Q.; Huang, W.; Li, T.; Xi, S.; Hu, Y.; Zhou, Z.; et al. Noninvasive imaging evaluation of tumor immune microenvironment to predict outcomes in gastric cancer. Ann. Oncol. 2020, 31, 760–768. [Google Scholar] [CrossRef]
  48. Lian, C.; Ruan, S.; Denœux, T.; Jardin, F.; Vera, P. Selecting radiomic features from FDG-PET images for cancer treatment outcome prediction. Med. Image Anal. 2016, 32, 257–268. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Chen, M.; Cao, J.; Hu, J.; Topatana, W.; Li, S.; Juengpanich, S.; Lin, J.; Tong, C.; Shen, J.; Zhang, B.; et al. Clinical-Radiomic Analysis for Pretreatment Prediction of Objective Response to First Transarterial Chemoembolization in Hepatocellular Carcinoma. Liver Cancer 2021, 10, 38–51. [Google Scholar] [CrossRef]
  50. Carles, M.; Fechter, T.; Radicioni, G.; Schimek-Jasch, T.; Adebahr, S.; Zamboglou, C.; Nicolay, N.; Martí-Bonmatí, L.; Nestle, U.; Grosu, A.; et al. FDG-PET Radiomics for Response Monitoring in Non-Small-Cell Lung Cancer Treated with Radiation Therapy. Cancers 2021, 13, 814. [Google Scholar] [CrossRef] [PubMed]
  51. Cai, J.; Zheng, J.; Shen, J.; Yuan, Z.; Xie, M.; Gao, M.; Tan, H.; Liang, Z.-G.; Rong, X.; Li, Y.; et al. A Radiomics Model for Predicting the Response to Bevacizumab in Brain Necrosis after Radiotherapy. Clin. Cancer Res. 2020, 26, 5438–5447. [Google Scholar] [CrossRef] [PubMed]
  52. Nie, K.; Shi, L.; Chen, Q.; Hu, X.; Jabbour, S.K.; Yue, N.; Niu, T.; Sun, X. Rectal Cancer: Assessment of Neoadjuvant Chemoradiation Outcome based on Radiomics of Multiparametric MRI. Clin. Cancer Res. 2016, 22, 5256–5264. [Google Scholar] [CrossRef] [Green Version]
  53. Samiei, S.; Granzier, R.; Ibrahim, A.; Primakov, S.; Lobbes, M.; Beets-Tan, R.; van Nijnatten, T.; Engelen, S.; Woodruff, H.; Smidt, M. Dedicated Axillary MRI-Based Radiomics Analysis for the Prediction of Axillary Lymph Node Metastasis in Breast Cancer. Cancers 2021, 13, 757. [Google Scholar] [CrossRef] [PubMed]
  54. Yu, J.; Deng, Y.; Liu, T.; Zhou, J.; Jia, X.; Xiao, T.; Zhou, S.; Li, J.; Guo, Y.; Wang, Y.; et al. Lymph node metastasis prediction of papillary thyroid carcinoma based on transfer learning radiomics. Nat. Commun. 2020, 11, 4807. [Google Scholar] [CrossRef]
  55. Liu, X.; Yang, Q.; Zhang, C.; Sun, J.; He, K.; Xie, Y.; Zhang, Y.; Fu, Y.; Zhang, H. Multiregional-Based Magnetic Resonance Imaging Radiomics Combined with Clinical Data Improves Efficacy in Predicting Lymph Node Metastasis of Rectal Cancer. Front. Oncol. 2021, 10, 10. [Google Scholar] [CrossRef]
  56. Mouraviev, A.; Detsky, J.; Sahgal, A.; Ruschin, M.; Lee, Y.K.; Karam, I.; Heyn, C.; Stanisz, G.J.; Martel, A.L. Use of radiomics for the prediction of local control of brain metastases after stereotactic radiosurgery. Neuro-Oncology 2020, 22, 797–805. [Google Scholar] [CrossRef] [PubMed]
  57. Jiang, Y.; Yuan, Q.; Lv, W.; Xi, S.; Huang, W.; Sun, Z.; Chen, H.; Zhao, L.; Liu, W.; Hu, Y.; et al. Radiomic signature of 18F fluorodeoxyglucose PET/CT for prediction of gastric cancer survival and chemotherapeutic benefits. Theranostics 2018, 8, 5915–5928. [Google Scholar] [CrossRef] [PubMed]
  58. Santos, M.K.; Júnior, J.R.F.; Wada, D.T.; Tenório, A.P.M.; Barbosa, M.H.N.; Marques, P.M.D.A. Artificial intelligence, machine learning, computer-aided diagnosis, and radiomics: Advances in imaging towards to precision medicine. Radiol. Bras. 2019, 52, 387–396. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69, S36–S40. [Google Scholar] [CrossRef]
  60. Moor, J. The Dartmouth College artificial intelligence conference: The next fifty years. AI Mag. 2006, 27, 87. [Google Scholar]
  61. Zhou, X.; Li, C.; Rahaman, M.; Yao, Y.; Ai, S.; Sun, C.; Wang, Q.; Zhang, Y.; Li, M.; Li, X.; et al. A Comprehensive Review for Breast Histopathology Image Analysis Using Classical and Deep Neural Networks. IEEE Access 2020, 8, 90931–90956. [Google Scholar] [CrossRef]
  62. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine Learning for Medical Imaging. Radiographics 2017, 37, 505–515. [Google Scholar] [CrossRef]
  63. Foster, K.R.; Koprowski, R.; Skufca, J.D. Machine learning, medical diagnosis, and biomedical engineering research—Commentary. Biomed. Eng. Online 2014, 13, 94. [Google Scholar] [CrossRef] [Green Version]
  64. Rashidi, H.H.; Tran, N.K.; Betts, E.V.; Howell, L.P.; Green, R. Artificial intelligence and machine learning in pathology: The present landscape of supervised methods. Acad. Pathol. 2019, 6. [Google Scholar] [CrossRef] [PubMed]
  65. Jafari, M.; Wang, Y.; Amiryousefi, A.; Tang, J. Unsupervised Learning and Multipartite Network Models: A Promising Approach for Understanding Traditional Medicine. Front. Pharmacol. 2020, 11, 1319. [Google Scholar] [CrossRef]
  66. Bzdok, D.; Krzywinski, M.; Altman, N. Machine learning: Supervised methods. Nat. Methods 2018, 15, 5–6. [Google Scholar] [CrossRef] [PubMed]
  67. Anwar, S.M.; Majid, M.; Qayyum, A.; Awais, M.; Alnowami, M.; Khan, M.K. Medical Image Analysis using Convolutional Neural Networks: A Review. J. Med. Syst. 2018, 42, 226. [Google Scholar] [CrossRef] [Green Version]
  68. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  69. Zipser, D.; Andersen, R.A. A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nat. Cell Biol. 1988, 331, 679–684. [Google Scholar] [CrossRef]
  70. Kriegeskorte, N.; Golan, T. Neural network models and deep learning. Curr. Biol. 2019, 29, R231–R236. [Google Scholar] [CrossRef] [PubMed]
  71. Manisha; Dhull, S.K.; Singh, K.K. ECG Beat Classifiers: A Journey from ANN To DNN. Procedia Comput. Sci. 2020, 167, 747–759. [Google Scholar] [CrossRef]
  72. Soffer, S.; Ben-Cohen, A.; Shimon, O.; Amitai, M.M.; Greenspan, H.; Klang, E. Convolutional Neural Networks for Radiologic Images: A Radiologist’s Guide. Radiology 2019, 290, 590–606. [Google Scholar] [CrossRef] [PubMed]
  73. Deng, Y.; Bao, F.; Dai, Q.; Wu, L.F.; Altschuler, S.J. Scalable analysis of cell-type composition from single-cell transcriptomics using deep recurrent learning. Nat. Methods 2019, 16, 311–314. [Google Scholar] [CrossRef] [PubMed]
  74. Li, Y.; Ma, X.; Zhou, X.; Cheng, P.; He, K.; Li, C. Knowledge Enhanced LSTM for Coreference Resolution on Biomedical Texts. Bioinformatics 2021. [Google Scholar] [CrossRef]
  75. Hua, Y.; Guo, J.; Zhao, H. Deep belief networks and deep learning. In Proceedings of the 2015 International Conference on Intelligent Computing and Internet of Things, Harbin, China, 17–18 January 2015; pp. 1–4. [Google Scholar]
  76. Ko, W.Y.; Siontis, K.C.; Attia, Z.I.; Carter, R.E.; Kapa, S.; Ommen, S.R.; Demuth, S.J.; Ackerman, M.J.; Gersh, B.J.; Arruda-Olson, A.M.; et al. Detection of hypertrophic cardiomyopathy using a convolutional neural network-enabled electrocardiogram. J. Am. Coll. Cardiol. 2020, 75, 722–733. [Google Scholar] [CrossRef]
  77. Skrede, O.-J.; De Raedt, S.; Kleppe, A.; Hveem, T.S.; Liestøl, K.; Maddison, J.; Askautrud, H.; Pradhan, M.; Nesheim, J.A.; Albregtsen, F.; et al. Deep learning for prediction of colorectal cancer outcome: A discovery and validation study. Lancet 2020, 395, 350–360. [Google Scholar] [CrossRef]
  78. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; 773p. [Google Scholar]
  79. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef] [PubMed]
  80. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  81. Zhang, B.; He, X.; Ouyang, F.; Gu, D.; Dong, Y.; Zhang, L.; Mo, X.; Huang, W.; Tian, J.; Zhang, S. Radiomic machine-learning classifiers for prognostic biomarkers of advanced nasopharyngeal carcinoma. Cancer Lett. 2017, 403, 21–27. [Google Scholar] [CrossRef] [PubMed]
  82. Zhang, B.; Ouyang, F.; Gu, D.; Dongsheng, G.; Zhang, L.; Mo, X.; Huang, W.; Zhang, S. Advanced nasopharyngeal carcinoma: Pre-treatment prediction of progression based on multi-parametric MRI radiomics. Oncotarget 2017, 8, 72457–72465. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  83. Ouyang, F.S.; Guo, B.L.; Zhang, B.; Dong, Y.H.; Zhang, L.; Mo, X.K.; Huang, W.H.; Zhang, S.X.; Hu, Q.G. Exploration and validation of radiomics signature as an independent prognostic biomarker in stage III-IVb Nasopharyngeal carcinoma. Oncotarget 2017, 8, 74869–74879. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Lv, W.; Yuan, Q.; Wang, Q.; Ma, J.; Feng, Q.; Chen, W.; Rahmim, A.; Lu, L. Radiomics Analysis of PET and CT Components of PET/CT Imaging Integrated with Clinical Parameters: Application to Prognosis for Nasopharyngeal Carcinoma. Mol. Imaging Biol. 2019, 21, 954–964. [Google Scholar] [CrossRef] [PubMed]
  85. Zhuo, E.-H.; Zhang, W.-J.; Li, H.-J.; Zhang, G.-Y.; Jing, B.-Z.; Zhou, J.; Cui, C.-Y.; Chen, M.-Y.; Sun, Y.; Liu, L.-Z.; et al. Radiomics on multi-modalities MR sequences can subtype patients with non-metastatic nasopharyngeal carcinoma (NPC) into distinct survival subgroups. Eur. Radiol. 2019, 29, 5590–5599. [Google Scholar] [CrossRef] [PubMed]
  86. Zhang, L.L.; Huang, M.Y.; Li, Y.; Liang, J.H.; Gao, T.S.; Deng, B.; Yao, J.J.; Lin, L.; Chen, F.P.; Huang, X.D.; et al. Pretreatment MRI radiomics analysis allows for reliable prediction of local recurrence in non-metastatic T4 Nasopharyngeal carcinoma. EBioMedicine 2019, 42, 270–280. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Yang, K.; Tian, J.; Zhang, B.; Li, M.; Xie, W.; Zou, Y.; Tan, Q.; Liu, L.; Zhu, J.; Shou, A.; et al. A multidimensional nomogram combining overall stage, dose volume histogram parameters and radiomics to predict progression-free survival in patients with locoregionally advanced nasopharyngeal carcinoma. Oral Oncol. 2019, 98, 85–91. [Google Scholar] [CrossRef] [PubMed]
  88. Ming, X.; Oei, R.W.; Zhai, R.; Kong, F.; Du, C.; Hu, C.; Hu, W.; Zhang, Z.; Ying, H.; Wang, J. MRI-based radiomics signature is a quantitative prognostic biomarker for nasopharyngeal carcinoma. Sci. Rep. 2019, 9, 10412. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  89. Zhang, L.; Zhou, H.; Gu, D.; Tian, J.; Zhang, B.; Dong, D.; Mo, X.; Liu, J.; Luo, X.; Pei, S.; et al. Radiomic Nomogram: Pretreatment Evaluation of Local Recurrence in Nasopharyngeal Carcinoma based on MR Imaging. J. Cancer 2019, 10, 4217–4225. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  90. Mao, J.; Fang, J.; Duan, X.; Yang, Z.; Cao, M.; Zhang, F.; Lu, L.; Zhang, X.; Wu, X.; Ding, Y.; et al. Predictive value of pretreatment MRI texture analysis in patients with primary nasopharyngeal carcinoma. Eur. Radiol. 2019, 29, 4105–4113. [Google Scholar] [CrossRef] [Green Version]
  91. Du, R.; Lee, V.H.; Yuan, H.; Lam, K.-O.; Pang, H.H.; Chen, Y.; Lam, E.Y.; Khong, P.-L.; Lee, A.W.; Kwong, D.L.; et al. Radiomics Model to Predict Early Progression of Nonmetastatic Nasopharyngeal Carcinoma after Intensity Modulation Radiation Therapy: A Multicenter Study. Radiol. Artif. Intell. 2019, 1, e180075. [Google Scholar] [CrossRef]
  92. Xu, H.; Lv, W.; Feng, H.; Du, D.; Yuan, Q.; Wang, Q.; Dai, Z.; Yang, W.; Feng, Q.; Ma, J.; et al. Subregional Radiomics Analysis of PET/CT Imaging with Intratumor Partitioning: Application to Prognosis for Nasopharyngeal Carcinoma. Mol. Imaging Biol. 2020, 22, 1414–1426. [Google Scholar] [CrossRef]
  93. Shen, H.; Wang, Y.; Liu, D.; Lv, R.; Huang, Y.; Peng, C.; Jiang, S.; Wang, Y.; He, Y.; Lan, X.; et al. Predicting Progression-Free Survival Using MRI-Based Radiomics for Patients with Nonmetastatic Nasopharyngeal Carcinoma. Front. Oncol. 2020, 10, 618. [Google Scholar] [CrossRef]
  94. Bologna, M.; Corino, V.; Calareso, G.; Tenconi, C.; Alfieri, S.; Iacovelli, N.A.; Cavallo, A.; Cavalieri, S.; Locati, L.; Bossi, P.; et al. Baseline MRI-Radiomics Can Predict Overall Survival in Non-Endemic EBV-Related Nasopharyngeal Carcinoma Patients. Cancers 2020, 12, 2958. [Google Scholar] [CrossRef] [PubMed]
  95. Feng, Q.; Liang, J.; Wang, L.; Niu, J.; Ge, X.; Pang, P.; Ding, Z. Radiomics Analysis and Correlation with Metabolic Parameters in Nasopharyngeal Carcinoma Based on PET/MR Imaging. Front. Oncol. 2020, 10, 1619. [Google Scholar] [CrossRef] [PubMed]
  96. Peng, L.; Hong, X.; Yuan, Q.; Lu, L.; Wang, Q.; Chen, W. Prediction of local recurrence and distant metastasis using radiomics analysis of pretreatment nasopharyngeal [18F]FDG PET/CT images. Ann. Nucl. Med. 2021, 35, 458–468. [Google Scholar] [CrossRef]
  97. Zhang, L.; Dong, D.; Li, H.; Tian, J.; Ouyang, F.; Mo, X.; Zhang, B.; Luo, X.; Lian, Z.; Pei, S.; et al. Development and validation of a magnetic resonance imaging-based model for the prediction of distant metastasis before initial treatment of nasopharyngeal carcinoma: A retrospective cohort study. EBioMedicine 2019, 40, 327–335. [Google Scholar] [CrossRef]
  98. Zhong, X.; Li, L.; Jiang, H.; Yin, J.; Lu, B.; Han, W.; Li, J.; Zhang, J. Cervical spine osteoradionecrosis or bone metastasis after radiotherapy for nasopharyngeal carcinoma? The MRI-based radiomics for characterization. BMC Med. Imaging 2020, 20, 104. [Google Scholar] [CrossRef]
  99. Akram, F.; Koh, P.E.; Wang, F.; Zhou, S.; Tan, S.H.; Paknezhad, M.; Park, S.; Hennedige, T.; Thng, C.H.; Lee, H.K.; et al. Exploring MRI based radiomics analysis of intratumoral spatial heterogeneity in locally advanced nasopharyngeal carcinoma treated with intensity modulated radiotherapy. PLoS ONE 2020, 15, e0240043. [Google Scholar] [CrossRef]
  100. Zhang, X.; Zhong, L.; Zhang, B.; Zhang, L.; Du, H.; Lu, L.; Zhang, S.; Yang, W.; Feng, Q. The effects of volume of interest delineation on MRI-based radiomics analysis: Evaluation with two disease groups. Cancer Imaging 2019, 19, 89–112. [Google Scholar] [CrossRef] [PubMed]
  101. Lv, W.; Yuan, Q.; Wang, Q.; Ma, J.; Jiang, J.; Yang, W.; Feng, Q.; Chen, W.; Rahmim, A.; Lu, L. Robustness versus disease differentiation when varying parameter settings in radiomics features: Application to nasopharyngeal PET/CT. Eur. Radiol. 2018, 28, 3245–3254. [Google Scholar] [CrossRef] [PubMed]
  102. Du, D.; Feng, H.; Lv, W.; Ashrafinia, S.; Yuan, Q.; Wang, Q.; Yang, W.; Feng, Q.; Chen, W.; Rahmim, A.; et al. Machine Learning Methods for Optimal Radiomics-Based Differentiation Between Recurrence and Inflammation: Application to Nasopharyngeal Carcinoma Post-therapy PET/CT Images. Mol. Imaging Biol. 2020, 22, 730–738. [Google Scholar] [CrossRef]
  103. Wang, G.; He, L.; Yuan, C.; Huang, Y.; Liu, Z.; Liang, C. Pretreatment MR imaging radiomics signatures for response prediction to induction chemotherapy in patients with nasopharyngeal carcinoma. Eur. J. Radiol. 2018, 98, 100–106. [Google Scholar] [CrossRef] [PubMed]
  104. Yu, T.-T.; Lam, S.-K.; To, L.-H.; Tse, K.-Y.; Cheng, N.-Y.; Fan, Y.-N.; Lo, C.-L.; Or, K.-W.; Chan, M.-L.; Hui, K.-C.; et al. Pretreatment Prediction of Adaptive Radiation Therapy Eligibility Using MRI-Based Radiomics for Advanced Nasopharyngeal Carcinoma Patients. Front. Oncol. 2019, 9, 1050. [Google Scholar] [CrossRef]
  105. Yongfeng, P.; Chuner, J.; Lei, W.; Fengqin, Y.; Zhimin, Y.; Zhenfu, F.; Haitao, J.; Yangming, J.; Fangzheng, W. The usefulness of pre-treatment MR-based radiomics on early response of neoadjuvant chemotherapy in patients with locally advanced Nasopharyngeal carcinoma. Oncol. Res. Featur. Preclin. Clin. Cancer Ther. 2021, 28, 605–613. [Google Scholar]
  106. Zhang, L.; Ye, Z.; Ruan, L.; Jiang, M. Pretreatment MRI-Derived Radiomics May Evaluate the Response of Different Induction Chemotherapy Regimens in Locally advanced Nasopharyngeal Carcinoma. Acad. Radiol. 2020, 27, 1655–1664. [Google Scholar] [CrossRef]
  107. Zhao, L.; Gong, J.; Xi, Y.; Xu, M.; Li, C.; Kang, X.; Yin, Y.; Qin, W.; Yin, H.; Shi, M. MRI-based radiomics nomogram may predict the response to induction chemotherapy and survival in locally advanced nasopharyngeal carcinoma. Eur. Radiol. 2020, 30, 537–546. [Google Scholar] [CrossRef] [PubMed]
  108. Liu, Y.; Shi, H.; Huang, S.; Chen, X.; Zhou, H.; Chang, H.; Xia, Y.; Wang, G.; Yang, X. Early prediction of acute xerostomia during radiation therapy for nasopharyngeal cancer based on delta radiomics from CT images. Quant. Imaging Med. Surg. 2019, 9, 1288–1302. [Google Scholar] [CrossRef]
  109. Zhang, B.; Lian, Z.; Zhong, L.; Zhang, X.; Dong, Y.; Chen, Q.; Zhang, L.; Mo, X.; Huang, W.; Yang, W.; et al. Machine-learning based MRI radiomics models for early detection of radiation-induced brain injury in nasopharyngeal carcinoma. BMC Cancer 2020, 20, 502. [Google Scholar] [CrossRef] [PubMed]
  110. Qiang, M.; Lv, X.; Li, C.; Liu, K.; Chen, X.; Guo, X. Deep learning in nasopharyngeal carcinoma: A retrospective cohort study of 3D convolutional neural networks on magnetic resonance imaging. Ann. Oncol. 2019, 30, v471. [Google Scholar] [CrossRef]
  111. Du, R.; Cao, P.; Han, L.; Ai, Q.; King, A.D.; Vardhanabhuti, V. Deep convolution neural network model for automatic risk assessment of patients with non-metastatic Nasopharyngeal carcinoma. arXiv 2019, arXiv:1907.11861. [Google Scholar]
  112. Yang, Q.; Guo, Y.; Ou, X.; Wang, J.; Hu, C. Automatic T Staging Using Weakly Supervised Deep Learning for Nasopharyngeal Carcinoma on MR Images. J. Magn. Reson. Imaging 2020, 52, 1074–1082. [Google Scholar] [CrossRef]
  113. Jing, B.; Deng, Y.; Zhang, T.; Hou, D.; Li, B.; Qiang, M.; Liu, K.; Ke, L.; Li, T.; Sun, Y.; et al. Deep learning for risk prediction in patients with nasopharyngeal carcinoma using multi-parametric MRIs. Comput. Methods Programs Biomed. 2020, 197, 105684. [Google Scholar] [CrossRef]
  114. Cui, C.; Wang, S.; Zhou, J.; Dong, A.; Xie, F.; Li, H.; Liu, L. Machine Learning Analysis of Image Data Based on Detailed MR Image Reports for Nasopharyngeal Carcinoma Prognosis. BioMed Res. Int. 2020, 2020. [Google Scholar] [CrossRef] [Green Version]
  115. Liu, K.; Xia, W.; Qiang, M.; Chen, X.; Liu, J.; Guo, X.; Lv, X. Deep learning pathological microscopic features in endemic nasopharyngeal cancer: Prognostic value and protentional role for individual induction chemotherapy. Cancer Med. 2019, 9, 1298–1306. [Google Scholar] [CrossRef] [PubMed]
  116. Zhang, L.; Wu, X.; Liu, J.; Zhang, B.; Mo, X.; Chen, Q.; Fang, J.; Wang, F.; Li, M.; Chen, Z.; et al. MRI-based deep-learning model for distant metastasis-free survival in locoregionally advanced Nasopharyngeal carcinoma. J. Magn. Reson. Imaging 2021, 53, 167–178. [Google Scholar] [CrossRef] [PubMed]
  117. Li, Y.; Zhu, J.; Liu, Z.; Teng, J.; Xie, Q.; Zhang, L.; Liu, X.; Shi, J.; Chen, L. A preliminary study of using a deep convolution neural network to generate synthesized CT images based on CBCT for adaptive radiotherapy of nasopharyngeal carcinoma. Phys. Med. Biol. 2019, 64, 145010. [Google Scholar] [CrossRef]
  118. Wang, Y.; Liu, C.; Zhang, X.; Deng, W. Synthetic CT Generation Based on T2 Weighted MRI of Nasopharyngeal Carcinoma (NPC) Using a Deep Convolutional Neural Network (DCNN). Front. Oncol. 2019, 9, 1333. [Google Scholar] [CrossRef]
  119. Tie, X.; Lam, S.K.; Zhang, Y.; Lee, K.H.; Au, K.H.; Cai, J. Pseudo-CT generation from multi-parametric MRI using a novel multi-channel multi-path conditional generative adversarial network for Nasopharyngeal carcinoma patients. Med. Phys. 2020, 47, 1750–1762. [Google Scholar] [CrossRef]
  120. Peng, Y.; Chen, S.; Qin, A.; Chen, M.; Gao, X.; Liu, Y.; Miao, J.; Gu, H.; Zhao, C.; Deng, X.; et al. Magnetic resonance-based synthetic computed tomography images generated using generative adversarial networks for nasopharyngeal carcinoma radiotherapy treatment planning. Radiother. Oncol. 2020, 150, 217–224. [Google Scholar] [CrossRef]
  121. Mohammed, M.A.; Abd Ghani, M.K.; Arunkumar, N.; Raed, H.; Mohamad, A.; Mohd, B. A real time computer aided object detection of Nasopharyngeal carcinoma using genetic algorithm and artificial neural network based on Haar feature fear. Future Gener. Comput Syst. 2018, 89, 539–547. [Google Scholar] [CrossRef]
  122. Mohammed, M.A.; Abd Ghani, M.K.; Arunkumar, N.; Hamed, R.I.; Mostafa, S.A.; Abdullah, M.K.; Burhanuddin, M.A. Decision support system for Nasopharyngeal carcinoma discrimination from endoscopic images using artificial neural network. J. Supercomput. 2020, 76, 1086–1104. [Google Scholar] [CrossRef]
  123. Abd Ghani, M.K.; Mohammed, M.A.; Arunkumar, N.; Mostafa, S.; Ibrahim, D.A.; Abdullah, M.K.; Jaber, M.M.; Abdulhay, E.; Ramirez-Gonzalez, G.; Burhanuddin, M.A. Decision-level fusion scheme for Nasopharyngeal carcinoma identification using machine learning techniques. Neural Comput. Appl. 2020, 32, 625–638. [Google Scholar] [CrossRef]
  124. Li, C.; Jing, B.; Ke, L.; Li, B.; Xia, W.; He, C.; Qian, C.; Zhao, C.; Mai, H.; Chen, M.; et al. Development and validation of an endoscopic images-based deep learning model for detection with nasopharyngeal malignancies. Cancer Commun. 2018, 38, 1–11. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  125. Diao, S.; Hou, J.; Yu, H.; Zhao, X.; Sun, Y.; Lambo, R.L.; Xie, Y.; Liu, L.; Qin, W.; Luo, W. Computer-Aided Pathologic Diagnosis of Nasopharyngeal Carcinoma Based on Deep Learning. Am. J. Pathol. 2020, 190, 1691–1700. [Google Scholar] [CrossRef] [PubMed]
  126. Chuang, W.-Y.; Chang, S.-H.; Yu, W.-H.; Yang, C.-K.; Yeh, C.-J.; Ueng, S.-H.; Liu, Y.-J.; Chen, T.-D.; Chen, K.-H.; Hsieh, Y.-Y.; et al. Successful Identification of Nasopharyngeal Carcinoma in Nasopharyngeal Biopsies Using Deep Learning. Cancers 2020, 12, 507. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  127. Wong, L.M.; King, A.D.; Ai, Q.Y.H.; Lam, W.K.J.; Poon, D.M.C.; Ma, B.B.Y.; Chan, K.C.A.; Mo, F.K.F. Convolutional neural network for discriminating Nasopharyngeal carcinoma and benign hyperplasia on MRI. Eur. Radiol. 2021, 31, 3856–3863. [Google Scholar] [CrossRef]
  128. Ke, L.; Deng, Y.; Xia, W.; Qiang, M.; Chen, X.; Liu, K.; Jing, B.; He, C.; Xie, C.; Guo, X.; et al. Development of a self-constrained 3D DenseNet model in automatic detection and segmentation of nasopharyngeal carcinoma using magnetic resonance images. Oral Oncol. 2020, 110, 104862. [Google Scholar] [CrossRef] [PubMed]
  129. Men, K.; Chen, X.; Zhang, Y.; Zhang, T.; Dai, J.; Yi, J.; Li, Y. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images. Front. Oncol. 2017, 7, 315. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  130. Li, Q.; Xu, Y.; Chen, Z.; Liu, D.; Feng, S.-T.; Law, M.; Ye, Y.; Huang, B. Tumor Segmentation in Contrast-Enhanced Magnetic Resonance Imaging for Nasopharyngeal Carcinoma: Deep Learning with Convolutional Neural Network. BioMed Res. Int. 2018, 2018. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  131. Wang, Y.; Zu, C.; Hu, G.; Luo, Y.; Ma, Z.; He, K.; Wu, X.; Zhou, J. Automatic Tumor Segmentation with Deep Convolutional Neural Networks for Radiotherapy Applications. Neural Process. Lett. 2018, 48, 1323–1334. [Google Scholar] [CrossRef]
  132. Ma, Z.; Wu, X.; Sun, S.; Xia, C.; Yang, Z.; Li, S.; Zhou, J. A discriminative learning based approach for automated Nasopharyngeal carcinoma segmentation leveraging multi-modality similarity metric learning. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 813–816. [Google Scholar]
  133. Daoud, B.; Morooka, K.; Kurazume, R.; Leila, F.; Mnejja, W.; Daoud, J. 3D segmentation of nasopharyngeal carcinoma from CT images using cascade deep learning. Comput. Med. Imaging Graph. 2019, 77, 101644. [Google Scholar] [CrossRef] [PubMed]
  134. Lin, L.; Dou, Q.; Jin, Y.-M.; Zhou, G.-Q.; Tang, Y.-Q.; Chen, W.-L.; Su, B.-A.; Liu, F.; Tao, C.-J.; Jiang, N.; et al. Deep Learning for Automated Contouring of Primary Tumor Volumes by MRI for Nasopharyngeal Carcinoma. Radiology 2019, 291, 677–686. [Google Scholar] [CrossRef] [PubMed]
  135. Liang, S.; Tang, F.; Huang, X.; Yang, K.; Zhong, T.; Hu, R.; Liu, S.; Yuan, X.; Zhang, Y. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur. Radiol. 2018, 29, 1961–1967. [Google Scholar] [CrossRef] [PubMed]
  136. Zhong, T.; Huang, X.; Tang, F.; Liang, S.; Deng, X.; Zhang, Y. Boosting-based cascaded convolutional neural networks for the segmentation of CT organs-at-risk in Nasopharyngeal carcinoma. Med. Phys. 2019, 46, 5602–5611. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  137. Ma, Z.; Zhou, S.; Wu, X.; Zhang, H.; Yan, W.; Sun, S.; Zhou, J. Nasopharyngeal carcinoma segmentation based on enhanced convolutional neural networks using multi-modal metric learning. Phys. Med. Biol. 2018, 64, 025005. [Google Scholar] [CrossRef]
  138. Li, S.; Xiao, J.; He, L.; Peng, X.; Yuan, X. The Tumor Target Segmentation of Nasopharyngeal Cancer in CT Images Based on Deep Learning Methods. Technol. Cancer Res. Treat. 2019, 18. [Google Scholar] [CrossRef] [Green Version]
  139. Xue, X.; Qin, N.; Hao, X.; Shi, J.; Wu, A.; An, H.; Zhang, H.; Wu, A.; Yang, Y. Sequential and Iterative Auto-Segmentation of High-Risk Clinical Target Volume for Radiotherapy of Nasopharyngeal Carcinoma in Planning CT Images. Front. Oncol. 2020, 10, 1134. [Google Scholar] [CrossRef] [PubMed]
  140. Chen, H.; Qi, Y.; Yin, Y.; Li, T.; Liu, X.; Li, X.; Gong, G.; Wang, L. MMFNet: A multi-modality MRI fusion network for segmentation of nasopharyngeal carcinoma. Neurocomputing 2020, 394, 27–40. [Google Scholar] [CrossRef] [Green Version]
  141. Guo, F.; Shi, C.; Li, X.; Wu, X.; Zhou, J.; Lv, J. Image segmentation of nasopharyngeal carcinoma using 3D CNN with long-range skip connection and multi-scale feature pyramid. Soft Comput. 2020, 24, 12671–12680. [Google Scholar] [CrossRef]
  142. Ye, Y.; Cai, Z.; Huang, B.; He, Y.; Zeng, P.; Zou, G.; Deng, W.; Chen, H.; Huang, B. Fully-Automated Segmentation of Nasopharyngeal Carcinoma on Dual-Sequence MRI Using Convolutional Neural Networks. Front. Oncol. 2020, 10, 166. [Google Scholar] [CrossRef] [Green Version]
  143. Li, Y.; Peng, H.; Dan, T.; Hu, Y.; Tao, G.; Cai, H. Coarse-to-fine Nasopharyngeal carcinoma Segmentation in MRI via Multi-stage Rendering. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Korea, 16–19 December 2020; pp. 623–628. [Google Scholar]
  144. Jin, Z.; Li, X.C.; Shen, L.; Lang, J.; Li, J.; Wu, J.; Xu, P.; Duan, J. Automatic Primary Gross Tumor Volume Segmentation for Nasopharyngeal carcinoma using ResSE-UNet. In Proceedings of the 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), Mayo Clinic, Rochester, MN, USA, 28–30 July 2020; pp. 585–590. [Google Scholar]
  145. Wang, X.; Yang, G.; Zhang, Y.; Zhu, L.; Xue, X.; Zhang, B.; Cai, C.; Jin, H.; Zheng, J.; Wu, J.; et al. Automated delineation of nasopharynx gross tumor volume for nasopharyngeal carcinoma by plain CT combining contrast-enhanced CT using deep learning. J. Radiat. Res. Appl. Sci. 2020, 13, 568–577. [Google Scholar] [CrossRef]
  146. Wong, L.M.; Ai, Q.Y.H.; Mo, F.K.F.; Poon, D.M.C.; King, A.D. Convolutional neural network in nasopharyngeal carcinoma: How good is automatic delineation for primary tumor on a non-contrast-enhanced fat-suppressed T2-weighted MRI? Jpn. J. Radiol. 2021, 39, 571–579. [Google Scholar] [CrossRef]
  147. Bai, X.; Hu, Y.; Gong, G.; Yin, Y.; Xia, Y. A deep learning approach to segmentation of nasopharyngeal carcinoma using computed tomography. Biomed. Signal. Process. Control. 2021, 64, 102246. [Google Scholar] [CrossRef]
  148. Shboul, Z.; Alam, M.; Vidyaratne, L.; Pei, L.; Elbakary, M.I.; Iftekharuddin, K.M. Feature-Guided Deep Radiomics for Glioblastoma Patient Survival Prediction. Front. Neurosci. 2019, 13, 966. [Google Scholar] [CrossRef]
  149. Paul, R.; Hawkins, S.H.; Schabath, M.B.; Gillies, R.J.; Hall, L.O.; Goldgof, D.B. Predicting malignant nodules by fusing deep features with classical radiomics features. J. Med. Imaging 2018, 5, 011021. [Google Scholar] [CrossRef]
  150. Bizzego, A.; Bussola, N.; Salvalai, D.; Chierici, M.; Maggio, V.; Jurman, G.; Furlanello, C. Integrating deep and radiomics features in cancer bioimaging. In Proceedings of the 2019 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Certosa di Pontignano, Siena–Tuscany, Italy, 9–11 July 2019; pp. 1–8. [Google Scholar]
  151. Hatt, M.; Parmar, C.; Qi, J.; El Naqa, I. Machine (Deep) Learning Methods for Image Processing and Radiomics. IEEE Trans. Radiat. Plasma Med. Sci. 2019, 3, 104–108. [Google Scholar] [CrossRef]
  152. Li, S.; Wang, K.; Hou, Z.; Yang, J.; Ren, W.; Gao, S.; Meng, F.; Wu, P.; Liu, B.; Liu, J.; et al. Use of Radiomics Combined with Machine Learning Method in the Recurrence Patterns After Intensity-Modulated Radiotherapy for Nasopharyngeal Carcinoma: A Preliminary Study. Front. Oncol. 2018, 8, 648. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  153. Zhong, L.-Z.; Fang, X.-L.; Dong, D.; Peng, H.; Fang, M.-J.; Huang, C.-L.; He, B.-X.; Lin, L.; Ma, J.; Tang, L.-L.; et al. A deep learning MR-based radiomic nomogram may predict survival for Nasopharyngeal carcinoma patients with stage T3N1M0. Radiother. Oncol. 2020, 151, 1–9. [Google Scholar] [CrossRef] [PubMed]
  154. Zhang, F.; Zhong, L.-Z.; Zhao, X.; Dong, D.; Yao, J.-J.; Wang, S.-Y.; Liu, Y.; Zhu, D.; Wang, Y.; Wang, G.-J.; et al. A deep-learning-based prognostic nomogram integrating microscopic digital pathology and macroscopic magnetic resonance images in nasopharyngeal carcinoma: A multi-cohort study. Ther. Adv. Med. Oncol. 2020, 12. [Google Scholar] [CrossRef] [PubMed]
  155. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  156. Coroller, T.P.; Bi, W.L.; Huynh, E.; Abedalthagafi, M.; Aizer, A.A.; Greenwald, N.F.; Parmar, C.; Narayan, V.; Wu, W.W.; Miranda de Moura, S.; et al. Radiographic prediction of meningioma grade by semantic and radiomic features. PLoS ONE 2017, 12, e0187908. [Google Scholar] [CrossRef] [Green Version]
  157. Kebir, S.; Khurshid, Z.; Gaertner, F.C.; Essler, M.; Hattingen, E.; Fimmers, R.; Scheffler, B.; Herrlinger, U.; Bundschuh, R.A.; Glas, M. Unsupervised consensus cluster analysis of [18F]-fluoroethyl-L-tyrosine positron emission tomography identified textural features for the diagnosis of pseudoprogression in high-grade glioma. Oncotarget 2016, 8, 8294–8304. [Google Scholar] [CrossRef]
  158. Antropova, N.; Huynh, B.Q.; Giger, M.L. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med. Phys. 2017, 44, 5162–5171. [Google Scholar] [CrossRef]
  159. Gierach, G.L.; Li, H.; Loud, J.T.; Greene, M.H.; Chow, C.K.; Lan, L.; Prindiville, S.A.; Eng-Wong, J.; Soballe, P.W.; Giambartolomei, C.; et al. Relationships between computer-extracted mammographic texture pattern features and BRCA1/2 mutation status: A cross-sectional study. Breast Cancer Res. 2014, 16, 424. [Google Scholar]
  160. Ciompi, F.; Chung, K.; Van Riel, S.J.; Setio, A.A.A.; Gerke, P.K.; Jacobs, C.; Scholten, E.T.; Schaefer-Prokop, C.; Wille, M.M.W.; Marchianò, A.; et al. Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Sci. Rep. 2017, 7, 46479. [Google Scholar] [CrossRef]
  161. Coroller, T.P.; Grossmann, P.; Hou, Y.; Velazquez, E.R.; Leijenaar, R.T.; Hermann, G.; Lambin, P.; Haibe-Kains, B.; Mak, R.H.; Aerts, H.J. CT-based radiomic signature predicts distant metastasis in lung adenocarcinoma. Radiother. Oncol. 2015, 114, 345–350. [Google Scholar] [CrossRef]
  162. Zeng, Y.; Xu, S.; Chapman, W.C.; Li, S.; Alipour, Z.; Abdelal, H.; Chatterjee, D.; Mutch, M.; Zhu, Q. Real-time colorectal cancer diagnosis using PR-OCT with deep learning. Theranostics 2020, 10, 2587–2596. [Google Scholar] [CrossRef]
  163. Kather, J.N.; Krisam, J.; Charoentong, P.; Luedde, T.; Herpel, E.; Weis, C.-A.; Gaiser, T.; Marx, A.; Valous, N.A.; Ferber, D.; et al. Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLoS Med. 2019, 16, e1002730. [Google Scholar] [CrossRef]
  164. Bi, W.L.; Hosny, A.; Schabath, M.B.; Giger, M.L.; Birkbak, N.; Mehrtash, A.; Allison, T.; Arnaout, O.; Abbosh, C.; Dunn, I.F.; et al. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J. Clin. 2019, 69, 127–157. [Google Scholar] [CrossRef] [Green Version]
  165. Spadarella, G.; Calareso, G.; Garanzini, E.; Ugga, L.; Cuocolo, A.; Cuocolo, R. MRI based radiomics in nasopharyngeal cancer: Systematic review and perspectives using radiomic quality score (RQS) assessment. Eur. J. Radiol. 2021, 140, 109744. [Google Scholar] [CrossRef] [PubMed]
  166. Denny, J.C.; Collins, F.S. Precision medicine in 2030—seven ways to transform healthcare. Cell 2021, 184, 1415–1419. [Google Scholar] [CrossRef]
  167. Fan, M.; Xia, P.; Clarke, R.; Wang, Y.; Li, L. Radiogenomic signatures reveal multiscale intratumour heterogeneity associated with biological functions and survival in breast cancer. Nat. Commun. 2020, 11, 4861. [Google Scholar] [CrossRef]
  168. Iwatate, Y.; Hoshino, I.; Yokota, H.; Ishige, F.; Itami, M.; Mori, Y.; Chiba, S.; Arimitsu, H.; Yanagibashi, H.; Nagase, H.; et al. Radiogenomics for predicting p53 status, PD-L1 expression, and prognosis with machine learning in pancreatic cancer. Br. J. Cancer 2020, 123, 1253–1261. [Google Scholar] [CrossRef]
  169. Shui, L.; Ren, H.; Yang, X.; Li, J.; Chen, Z.; Yi, C.; Zhu, H.; Shui, P. Era of radiogenomics in precision medicine: An emerging approach for prediction of the diagnosis, treatment and prognosis of tumors. Front. Oncol. 2020, 10, 3195. [Google Scholar]
  170. Jain, R.; Chi, A.S. Radiogenomics identifying important biological pathways in gliomas. Neuro-Oncology 2021, 23, 177–178. [Google Scholar] [CrossRef] [PubMed]
  171. Cho, N. Breast Cancer Radiogenomics: Association of Enhancement Pattern at DCE MRI with Deregulation of mTOR Pathway. Radiology 2020, 296, 288–289. [Google Scholar] [CrossRef]
  172. Pinker-Domenig, K.; Chin, J.; Melsaether, A.N.; Morris, E.A.; Moy, L. Precision Medicine and Radiogenomics in Breast Cancer: New Approaches toward Diagnosis and Treatment. Radiology 2018, 287, 732–747. [Google Scholar] [CrossRef]
  173. Badic, B.; Tixier, F.; Le Rest, C.C.; Hatt, M.; Visvikis, D. Radiogenomics in Colorectal Cancer. Cancers 2021, 13, 973. [Google Scholar] [CrossRef]
  174. Zhou, M.; Leung, A.; Echegaray, S.; Gentles, A.; Shrager, J.B.; Jensen, K.C.; Berry, G.J.; Plevritis, S.K.; Rubin, D.L.; Napel, S.; et al. Non–Small Cell Lung Cancer Radiogenomics Map Identifies Relationships between Molecular and Imaging Phenotypes with Prognostic Implications. Radiology 2018, 286, 307–315. [Google Scholar] [CrossRef]
  175. Vargas, H.A.; Huang, E.P.; Lakhman, Y.; Ippolito, J.E.; Bhosale, P.; Mellnick, V.; Shinagare, A.B.; Anello, M.; Kirby, J.; Fevrier-Sullivan, B.; et al. Radiogenomics of high-grade serous ovarian cancer: Multireader multi-institutional study from the Cancer Genome Atlas Ovarian Cancer Imaging Research Group. Radiology 2017, 285, 482–492. [Google Scholar] [CrossRef] [Green Version]
  176. Panayides, A.S.; Pattichis, M.S.; Leandrou, S.; Pitris, C.; Constantinidou, A.; Pattichis, C.S. Radiogenomics for precision medicine with a big data analytics perspective. IEEE J. Biomed. Health Inform. 2018, 23, 2063–2079. [Google Scholar] [CrossRef] [PubMed]
  177. Toward Precision Medicine: Building a Knowledge Network for Biomedical Research and a New Taxonomy of Disease; National Academies Press: Washington, DC, USA, 2011.
  178. Bodalal, Z.; Trebeschi, S.; Nguyen-Kim, T.D.L.; Schats, W.; Beets-Tan, R. Radiogenomics: Bridging imaging and genomics. Abdom. Radiol. 2019, 44, 1960–1984. [Google Scholar] [CrossRef] [Green Version]
  179. Arimura, H.; Soufi, M.; Kamezawa, H.; Ninomiya, K.; Yamada, M. Radiomics with artificial intelligence for precision medicine in radiation therapy. J. Radiat. Res. 2019, 60, 150–157. [Google Scholar] [CrossRef] [PubMed]
  180. Lao, J.; Chen, Y.; Li, Z.-C.; Li, Q.; Zhang, J.; Liu, J.; Zhai, G. A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme. Sci. Rep. 2017, 7, 10353. [Google Scholar] [CrossRef] [PubMed]
  181. Avanzo, M.; Wei, L.; Stancanello, J.; Vallières, M.; Rao, A.; Morin, O.; Mattonen, S.A.; El Naqa, I. Machine and deep learning methods for radiomics. Med. Phys. 2020, 47, e185–e202. [Google Scholar] [CrossRef] [PubMed]
  182. Jiang, M.; Li, C.-L.; Luo, X.-M.; Chuan, Z.-R.; Lv, W.-Z.; Li, X.; Cui, X.-W.; Dietrich, C.F. Ultrasound-based deep learning radiomics in the assessment of pathological complete response to neoadjuvant chemotherapy in locally advanced breast cancer. Eur. J. Cancer 2021, 147, 95–105. [Google Scholar] [CrossRef] [PubMed]
  183. Gospodarowicz, M.K.; Miller, D.; Groome, P.A.; Greene, F.L.; Logan, P.A.; Sobin, L.H.; Project, F.T.U.T. The process for continuous improvement of the TNM classification. Cancer 2003, 100. [Google Scholar] [CrossRef] [PubMed]
  184. Hosny, A.; Parmar, C.; Coroller, T.P.; Grossmann, P.; Zeleznik, R.; Kumar, A.; Bussink, J.; Gillies, R.J.; Mak, R.H.; Aerts, H.J.W.L. Deep learning for lung cancer prognostication: A retrospective multi-cohort radiomics study. PLoS Med. 2018, 15, e1002711. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Estimated age-standardised incidence rates (World) in 2020, nasopharynx, both sexes, all ages. ASR: age-standardised rate. Data source: GLOBOCAN 2020. Graph production: IARC (http://gco.iarc.fr/today, accessed on 31 March 2021), World Health Organization [2].
Figure 2. Five steps of the pipeline of radiomics.
Figure 3. Relationship between artificial intelligence, machine learning, neural network, and deep learning. MLP: multilayer perceptron; CNN: convolutional neural network; RNN: recurrent neural network; DBN: deep belief network; GAN: generative adversarial network.
Figure 4. Image processing principle of a convolutional neural network.
Figure 5. PRISMA flow diagram of the search results.
Figure 6. Parallel relationship between TNM staging system and artificial intelligence (AI)-based prediction model.
Figure 7. The distribution of studies based on radiomics, deep learning, and deep learning-based radiomics from 2017 to early 2021.
Table 1. Inclusion and exclusion criteria of the study.
Criteria | Detailed Rules and Regulations
Inclusion Criteria
  • Journal articles published in the English language;
  • Studies published from 2012 to date;
  • Original research articles;
  • Full-text papers that are accessible;
  • Radiomics was used to analyze the images of NPC;
  • Deep learning was used to analyze the images of NPC;
  • The samples, the image data used, the modeling method, and the evaluation method are described in detail.
Exclusion Criteria
  • Papers that are written in other languages;
  • The full text of the document is not accessible on the internet;
  • Relevant studies in which modeling used other machine learning algorithms not based on deep learning or radiomics;
  • Relevant studies, but not based on NPC images;
  • The sample information, image data, modeling method, or evaluation method is not described;
  • Conference papers, literature reviews, and editorial materials that are not original research.
Table 2. Studies predicting the prognosis of nasopharyngeal carcinoma (NPC) using radiomics.

| Author, Year, Reference | Image | Sample Size (Patients) | Feature Selection | Modeling | Model Evaluation |
|---|---|---|---|---|---|
| Zhang, B. (2017) [24] | MRI | 108 | LASSO | CR, nomograms, calibration curves | C-index 0.776 |
| Zhang, B. (2017) [81] | MRI | 110 | L1-LOG, L1-SVM, RF, DC, EN-LOG, SIS | L2-LOG, KSVM, AdaBoost, LSVM, RF, Nnet, KNN, LDA, NB | AUC 0.846 |
| Zhang, B. (2017) [82] | MRI | 113 | LASSO | RS | AUC 0.886 |
| Ouyang, F.S. (2017) [83] | MRI | 100 | LASSO | RS | HR 7.28 |
| Lv, W. (2019) [84] | PET/CT | 128 | Univariate analysis with FDR, SC > 0.8 | CR | C-index 0.77 |
| Zhuo, E.H. (2019) [85] | MRI | 658 | Entropy-based consensus clustering | SVM | C-index 0.814 |
| Zhang, L.L. (2019) [86] | MRI | 737 | RFE | CR and nomogram | C-index 0.73 |
| Yang, K. (2019) [87] | MRI | 224 | LASSO | CR and nomogram | C-index 0.811 |
| Ming, X. (2019) [88] | MRI | 303 | Non-negative matrix factorization | Chi-squared test, nomogram | C-index 0.845 |
| Zhang, L. (2019) [89] | MRI | 140 | LR-RFE | CR and nomogram | C-index 0.74 |
| Mao, J. (2019) [90] | MRI | 79 | Univariate analyses | CR | AUC 0.825 |
| Du, R. (2019) [91] | MRI | 277 | Hierarchical clustering analysis, PR | SVM | AUC 0.8 |
| Xu, H. (2020) [92] | PET/CT | 128 | Univariate CR, PR > 0.8 | CR | C-index 0.69 |
| Shen, H. (2020) [93] | MRI | 327 | LASSO, RFE | CR, RS | C-index 0.874 |
| Bologna, M. (2020) [94] | MRI | 136 | Intra-class correlation coefficient, SCC > 0.85 | CR | C-index 0.72 |
| Feng, Q. (2020) [95] | PET/MR | 100 | LASSO | CR | AUC 0.85 |
| Peng, L. (2021) [96] | PET/CT | 85 | W-test, Chi-square test, PR, RA | SFFS coupled with SVM | AUC 0.829 |
Least absolute shrinkage and selection operator (LASSO), L1-logistic regression (L1-LOG), L1-support vector machine (L1-SVM), random forest (RF), distance correlation (DC), elastic net logistic regression (EN-LOG), sure independence screening (SIS), L2-logistic regression (L2-LOG), kernel support vector machine (KSVM), linear-SVM (LSVM), adaptive boosting (AdaBoost), neural network (Nnet), K-nearest neighbour (KNN), linear discriminant analysis (LDA), and naive Bayes (NB).
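Many of the studies in Table 2 share the same skeleton: LASSO selects a sparse feature subset, a Cox regression (CR) model is fitted, and the C-index measures discrimination. The sketch below illustrates that workflow on synthetic data, assuming the scikit-learn and lifelines libraries; note that it uses log survival time as a crude LASSO target for simplicity, whereas the reviewed studies typically use a Cox-penalised LASSO.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))                   # 120 patients x 50 radiomic features
risk = X[:, 0] - 0.8 * X[:, 1]                   # only two features carry real signal
time = np.exp(-0.5 * risk + rng.normal(scale=0.5, size=120))   # synthetic survival times
event = rng.integers(0, 2, size=120)             # 1 = event observed, 0 = censored

# Step 1: LASSO shrinks most coefficients to exactly zero, leaving a sparse signature.
lasso = LassoCV(cv=5).fit(X, np.log(time))
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} of 50 features selected")

# Step 2: fit a Cox proportional hazards model on the selected features.
df = pd.DataFrame(X[:, selected], columns=[f"f{i}" for i in selected])
df["time"], df["event"] = time, event
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# Step 3: the concordance index (C-index) quantifies discrimination; 0.5 is chance.
print(f"C-index: {cph.concordance_index_:.3f}")
```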
Table 3. Studies assessing tumour metastasis using radiomics.

| Author, Year, Reference | Image | Sample Size | Feature Selection | Modeling | Model Evaluation |
|---|---|---|---|---|---|
| Zhang, L. (2019) [97] | MRI | 176 | LASSO | LR | AUC 0.792 |
| Zhong, X. (2020) [98] | MRI | 46 | LASSO | Nomogram | AUC 0.72 |
| Akram, F. (2020) [99] | MRI | 14 | Paired t-test and W-test | Shapiro–Wilk normality tests | p < 0.001 |
| Zhang, X. (2020) [100] | MRI | 238 | MRMR combined with 0.632+ bootstrap algorithms | RF | AUC 0.845 |
| Peng, L. (2021) [96] | PET/CT | 85 | W-test, PR, RA, Chi-square test | SFFS coupled with SVM | AUC 0.829 |
Maximum relevance minimum redundancy (MRMR).
Table 4. Studies of nasopharyngeal carcinoma (NPC) diagnosis using radiomics.

| Author, Year, Reference | Image | Sample Size | Feature Selection | Modeling | Model Evaluation |
|---|---|---|---|---|---|
| Lv, W. (2018) [101] | PET/CT | 106 | Intra-class coefficient | LR with LOOCV | AUC 0.89 |
| Du, D. (2020) [102] | PET/CT | 76 | MIM, FSCR, RELF-F, MRMR, CMIM, JMI, SC > 0.7 | DT, KNN, LDA, LR, NB, RF, and SVM with radial basis function kernel | AUC 0.892 |
Logistic regression with leave-one-out cross-validation (LOOCV), mutual information maximization (MIM), Relief-F (RELF-F), conditional mutual information maximization (CMIM), Fisher score (FSCR), joint mutual information (JMI), Spearman’s correlation (SC).
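As an example of the "LR with LOOCV" design used by Lv, W. [101], the sketch below cross-validates a logistic regression with leave-one-out splitting and scores it by AUC; the data are synthetic stand-ins for a set of selected radiomic features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(106, 10))                 # 106 patients x 10 selected features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=106) > 0).astype(int)

# Every patient is left out once; the model never scores a case it was fitted on.
proba = cross_val_predict(
    LogisticRegression(max_iter=1000), X, y,
    cv=LeaveOneOut(), method="predict_proba",
)[:, 1]
print(f"LOOCV AUC: {roc_auc_score(y, proba):.3f}")
```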
Table 5. Studies predicting the therapeutic effect in nasopharyngeal carcinoma (NPC) using radiomics.

| Author, Year, Reference | Image | Sample Size | Feature Selection | Modeling | Model Evaluation |
|---|---|---|---|---|---|
| Wang, G. (2018) [103] | MRI | 120 | LASSO | LR | AUC 0.822 |
| Yu, T.T. (2019) [104] | MRI | 70 | LASSO | Univariate LR | AUC 0.852 |
| Yongfeng, P. (2020) [105] | MRI | 108 | ANOVA/MW test, correlation analysis, and LASSO | Multivariate LR | AUC 0.905 |
| Zhang, L. (2020) [106] | MRI | 265 | LASSO | LR | AUC 0.886, 0.863 |
| Zhao, L. (2020) [107] | MRI | 123 | t-test and LASSO based on LOOCV | SVM, nomogram, backward stepwise LR | C-index 0.863 |
Table 6. Studies predicting the complications of radiation therapy using radiomics.

| Author, Year, Reference | Image | Sample Size | Feature Selection | Modeling | Model Evaluation |
|---|---|---|---|---|---|
| Liu, Y. (2019) [108] | CT | 35 | RFE | LR | Precision 0.922 |
| Zhang, B. (2020) [109] | MRI | 242 | Relief algorithm | RF | AUC 0.830 |
Recursive feature elimination (RFE).
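Recursive feature elimination, used by Liu, Y. [108] (and as RFE/LR-RFE in Table 2), repeatedly fits a model and discards the weakest features until a target count remains. A minimal scikit-learn sketch on synthetic data; the estimator and feature counts are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 40))                                  # 35 patients x 40 features
y = (X[:, 0] - X[:, 3] + rng.normal(size=35) > 0).astype(int)

# RFE fits the estimator, ranks features by |coefficient|, drops the worst,
# and repeats until only n_features_to_select remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)
X_sel = X[:, selector.support_]

model = LogisticRegression(max_iter=1000).fit(X_sel, y)
print(f"Precision (on training data): {precision_score(y, model.predict(X_sel)):.3f}")
```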
Table 7. Studies of prognosis prediction for nasopharyngeal carcinoma (NPC) based on deep learning (DL).

| Author, Year, Reference | Image | Sample Size | Modeling | Model Evaluation |
|---|---|---|---|---|
| Qiang, M.Y. (2019) [110] | MRI | 1636 | 3D DenseNet | HR 0.62 |
| Du, R. (2019) [111] | MRI | 596 | DCNN | AUC 0.69 |
| Yang, Q. (2020) [112] | MRI | 1138 | ResNet | AUC 0.943 |
| Jing, B. (2020) [113] | MRI | 1417 | Multi-modality deep survival network | C-index 0.651 |
| Qiang, M. (2020) [34] | MRI | 3444 | 3D-CNN | C-index 0.776 |
| Cui, C. (2020) [114] | MRI | 792 | Automatic machine learning (AutoML) including DL | AUC 0.796 |
| Liu, K. (2020) [115] | Pathology | 1055 | DeepSurv neural network | C-index 0.723 |
| Zhang, L. (2021) [116] | MRI | 233 | ResNet | AUC 0.808 |
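The deep survival networks in Table 7 (e.g., DeepSurv [115]) are typically trained by minimising the negative Cox partial log-likelihood rather than a classification loss. A minimal PyTorch sketch of that loss, with synthetic risk scores standing in for a network's outputs; ties between event times are handled only crudely here.

```python
import torch

def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor, event: torch.Tensor) -> torch.Tensor:
    """Negative Cox partial log-likelihood for DeepSurv-style training.
    `risk` is the network's scalar output per patient."""
    order = torch.argsort(time, descending=True)     # earlier positions = longer survival
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)   # log-sum-exp over each risk set
    return -((risk - log_risk_set) * event).sum() / event.sum()

# Synthetic batch: risk scores standing in for a CNN's outputs.
risk = torch.randn(32, requires_grad=True)
time = torch.rand(32) * 60                           # follow-up in months
event = (torch.rand(32) > 0.4).float()               # ~60% observed events
loss = cox_ph_loss(risk, time, event)
loss.backward()                                      # gradients flow back to the network
print(float(loss))
```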
Table 8. Studies of image synthesis for nasopharyngeal carcinoma (NPC) based on deep learning (DL).

| Author, Year, Reference | Image | Sample Size | Modeling | Model Evaluation |
|---|---|---|---|---|
| Li, Y. (2019) [117] | CBCT | 70 | U-Net (DCNN) | 1%/1 mm GPR 95.5% |
| Wang, Y. (2019) [118] | MRI | 33 | U-Net (DCNN) | MAE: 97 ± 13 HU in soft tissue, 131 ± 24 HU in all regions, 357 ± 44 HU in bone |
| Tie, X. (2020) [119] | MRI | 32 | ResU-Net | Structural similarity index 0.92 |
| Peng, Y. (2020) [120] | MRI | 173 | GANs | 2%/2 mm GPR 98.52–98.68% |
Generative adversarial networks (GANs).
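The MAE figures in Table 8 are mean absolute errors in Hounsfield units between the synthetic and the reference CT, usually reported per tissue region. A NumPy sketch with synthetic volumes and a crude threshold-based bone mask, both of which are assumptions for illustration only:

```python
import numpy as np

def mae_hu(synthetic_ct: np.ndarray, reference_ct: np.ndarray, mask: np.ndarray) -> float:
    """Mean absolute error in Hounsfield units, restricted to a region mask."""
    return float(np.abs(synthetic_ct[mask] - reference_ct[mask]).mean())

# Synthetic stand-ins for a reference CT and an MRI-derived synthetic CT.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 300.0, size=(64, 64, 64))
synthetic = reference + rng.normal(0.0, 100.0, size=reference.shape)

whole = np.ones(reference.shape, dtype=bool)     # "all regions" mask
bone = reference > 200                           # crude HU-threshold bone mask
print(f"MAE, all regions: {mae_hu(synthetic, reference, whole):.1f} HU")
print(f"MAE, bone:        {mae_hu(synthetic, reference, bone):.1f} HU")
```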
Table 9. Studies of nasopharyngeal carcinoma (NPC) detection and/or diagnosis based on deep learning (DL).

| Author, Year, Reference | Image | Sample Size | Modeling | Model Evaluation |
|---|---|---|---|---|
| Mohammed, M.A. (2018) [121,122,123] | Endoscopic images | 381 images | ANN | Accuracy 96.22% |
| Li, C. (2018) [124] | Endoscopic images | 28,966 images | Fully convolutional network (FCN) | Accuracy 88.7% |
| Diao, S. (2020) [125] | Pathology | 731 patients | Inception-v3 | AUC 0.936 |
| Chuang, W.Y. (2020) [126] | Pathology | 726 patients | ResNeXt | AUC 0.985 |
| Wong, L.M. (2020) [127] | MRI | 412 patients | Residual Attention Network (RAN) | AUC 0.96 |
| Ke, L. (2020) [128] | MRI | 4100 patients | 3D DenseNet | Accuracy 97.77% |
Table 10. Studies of nasopharyngeal carcinoma (NPC) segmentation based on deep learning (DL).

| Author, Year, Reference | Image | Sample Size | Modeling | Model Evaluation |
|---|---|---|---|---|
| Men, K. (2017) [129] | CT | 230 | DDNN and VGG-16 | DSC: GTVnx 80.9%, GTVnd 62.3%, CTV 82.6% |
| Li, Q. (2018) [130] | MRI | 29 | CNN | DSC 0.89 |
| Wang, Y. (2018) [131] | MRI | 15 | DCNN | DSC 0.79 |
| Ma, Z. (2018) [132] | CT, MRI | 50 | Multi-modality CNN | DSC 0.636 (CT), 0.712 (MRI) |
| Daoud, B. (2019) [133] | CT | 70 | CNN | DSC 0.91 |
| Lin, L. (2019) [134] | MRI | 1021 | 3D-CNN | DSC 0.79 |
| Liang, S. (2019) [135] | CT | 185 | CNN | DSC 0.689–0.937 |
| Zhong, T. (2019) [136] | CT | 140 | CNN | DSC: parotids 0.92, thyroids 0.92, optic nerves 0.89 |
| Ma, Z. (2019) [137] | CT, MRI | 90 | Single-modality CNN, multi-modality CNN | DSC 0.746 (CT), 0.752 (MRI) |
| Li, S. (2019) [138] | CT | 502 | U-Net (CNN) | DSC: lymph nodes 0.659, tumour 0.74 |
| Xue, X. (2020) [139] | CT | 150 | SI-Net and U-Net | DSC 0.84 |
| Chen, H. (2020) [140] | MRI | 149 | 3D-CNN | DSC 0.724 |
| Guo, F. (2020) [141] | MRI | 120 | 3D-CNN | DSC 0.737 |
| Ye, Y. (2020) [142] | MRI | 44 | Dense connectivity embedding U-Net | DSC 0.87 |
| Li, Y. (2020) [143] | MRI | 596 | ResNet-101 | DSC 0.703 |
| Jin, Z. (2020) [144] | CT | 90 | ResSE-UNet | DSC 0.84 |
| Wang, X. (2020) [145] | CT | 205 | 3D U-Net | DSC 0.827 |
| Ke, L. (2020) [128] | MRI | 4100 | 3D DenseNet | DSC 0.77 |
| Wong, L.M. (2021) [146] | MRI | 195 | CNN | DSC 0.73 |
| Bai, X. (2021) [147] | MRI | 60 | ResNeXt-50 and U-Net | DSC 0.618 |
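All of the studies in Table 10 report the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), between the predicted and ground-truth masks. A short NumPy sketch with hypothetical masks:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2 * intersection / (pred.sum() + truth.sum() + eps))

# Hypothetical example: a ground-truth GTV mask vs. a slightly shifted prediction.
truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros_like(truth);            pred[22:42, 22:42] = True
print(f"DSC: {dice(pred, truth):.3f}")   # 0.810 for this degree of overlap
```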
Table 11. Studies of nasopharyngeal carcinoma (NPC) imaging based on deep learning-based radiomics (DLR).

| Author, Year, Reference | Image | Sample Size | Feature Selection | Modeling | Model Evaluation |
|---|---|---|---|---|---|
| Li, S. (2018) [152] | CT | 306 | ICC, PCC, and PCA | ANN, KNN, SVM | AUC: ANN 0.812, KNN 0.775, SVM 0.732 |
| Peng, H. (2019) [30] | PET/CT | 707 | LASSO | DCNNs and nomogram | C-index 0.722 |
| Zhong, L.Z. (2020) [153] | MRI | 638 | Not described | SE-ResNeXt, CR, and nomogram | C-index 0.788 |
| Zhang, F. (2020) [154] | MRI, Pathology | 220 | ICC > 0.75, univariate analysis, MRMR, RF | ResNet-18, nomogram | C-index 0.834 |
Intraclass correlation coefficients (ICC), Pearson correlation coefficient (PCC), and principal component analysis (PCA).
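The DLR studies in Table 11 replace (or supplement) hand-crafted features with activations from a CNN, e.g., the ResNet-18 features used by Zhang, F. [154], which a conventional classifier then consumes. A sketch of this pattern using torchvision and scikit-learn; the untrained weights, random patches, and synthetic labels are illustrative assumptions only.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.svm import SVC

# Turn ResNet-18 into a fixed feature extractor by dropping its classification head.
backbone = resnet18(weights=None)       # untrained weights: illustration only
backbone.fc = nn.Identity()             # forward now returns the 512-d penultimate features
backbone.eval()

with torch.no_grad():
    patches = torch.randn(20, 3, 224, 224)        # 20 hypothetical tumour-ROI patches
    deep_features = backbone(patches).numpy()     # shape (20, 512)

# The deep features then stand in for hand-crafted radiomic features.
labels = (deep_features[:, 0] > np.median(deep_features[:, 0])).astype(int)  # synthetic
clf = SVC(probability=True).fit(deep_features, labels)
print(clf.predict_proba(deep_features[:2]))
```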