Review

Application of Artificial Intelligence to the Diagnosis and Therapy of Nasopharyngeal Carcinoma

1 Division of Biotherapy, Cancer Center, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
2 Out-Patient Department, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
3 Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
J. Clin. Med. 2023, 12(9), 3077; https://doi.org/10.3390/jcm12093077
Submission received: 19 February 2023 / Revised: 12 April 2023 / Accepted: 18 April 2023 / Published: 24 April 2023

Abstract

Artificial intelligence (AI) is an interdisciplinary field that encompasses a wide range of computer science disciplines, including image recognition, machine learning, human–computer interaction, robotics and so on. Recently, AI, especially deep learning algorithms, has shown excellent performance in image recognition, automatically performing quantitative evaluation of complex medical image features to improve diagnostic accuracy and efficiency. AI is finding ever wider and deeper application in medical diagnosis, treatment and prognosis. Nasopharyngeal carcinoma (NPC) occurs frequently in southern China and Southeast Asian countries and is the most common head and neck cancer in the region. Detecting and treating NPC early is crucial for a good prognosis. This paper describes the basic concepts of AI, including traditional machine learning and deep learning algorithms, and their clinical applications in detecting and assessing NPC lesions, facilitating treatment and predicting prognosis. The main limitations of current AI technologies are briefly described, including interpretability issues, privacy and security concerns and the need for large amounts of annotated data. Finally, we discuss the remaining challenges and the promising future of using AI to diagnose and treat NPC.

1. Introduction

Nasopharyngeal carcinoma (NPC), an epithelial carcinoma arising from the nasopharyngeal mucosa, is most often observed at the pharyngeal recess [1]. Diagnosing NPC involves an endoscopy followed by an endoscopic biopsy of the suspected site [2,3]. Endoscopic biopsy may miss small cancers located submucosally or laterally to the pharyngeal crypt, which presents significant diagnostic challenges. Early diagnosis of NPC is difficult because symptoms appear late and the anatomy of the region is complex. In most cases, NPC patients are diagnosed late, resulting in poor prognoses [4]. Owing to rapid advances in imaging techniques and radiotherapy, local control rates in early NPC now reach 95% [5]. Although modern radiotherapy techniques and chemotherapy strategies have improved NPC prognosis, outcomes for advanced-stage patients remain dismal [6,7]. Thus, it is worth asking whether artificial intelligence (AI) can improve the diagnosis, therapy and prognosis prediction of NPC.
AI is a subdiscipline of computer science that seeks to understand the nature of intelligence and to create intelligent machines that can exhibit human-like behaviors [8]. AI is utilized in many areas, including medicine, communication, transportation and finance, among others [9]. In medicine, AI is mainly used for disease diagnosis, treatment and prognosis prediction. Medical AI has two major branches: virtual and physical [10]. The virtual branch comprises deep learning (DL) and machine learning (ML), which offer a potential way to construct robust computer-assisted approaches. The physical branch encompasses robots and medical devices [10]. Several recent studies have shown that AI, applied to diagnosis and treatment, can improve early diagnostic efficiency as well as the prognosis of NPC patients [11,12,13].
There are some reviews on the application of AI in NPC [13,14]. However, AI techniques are advancing so fast that it is necessary to update these reviews frequently. In this review, we analyze and summarize the research progress and clinical application of AI technologies in the diagnosis, treatment and prognosis prediction of NPC. We provide a complete picture of the current status of AI in the main clinical areas. We also study the state of the clinical implementation of AI and the effort needed to make progress in this area. We hope that this information will be helpful to both clinicians and researchers interested in the utilization of AI in the clinical care of NPC.

2. AI and Its Technologies

In recent decades, medical imaging techniques such as ultrasound, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography–computed tomography (PET-CT) have played a key role in the early detection, diagnosis and treatment of disease [15]. Recently, significant advances have been made in AI, which allows machines to automatically analyze and interpret complex data [16]. AI is frequently used in medical fields such as oncology, radiology and pathology, which require accurate analysis of plentiful image data. Physicians usually detect, describe and monitor head and neck diseases by visually assessing head and neck medical images. This assessment is often based on experience and can be subjective. In contrast to such qualitative reasoning, AI can make quantitative assessments by automatically recognizing imaging information [17]. AI, including traditional ML and DL, enables physicians to make more accurate and faster imaging diagnoses and greatly reduces their workload.
Traditional ML algorithms are one family of AI approaches in medical imaging and rely heavily on pre-defined engineered features. These features are defined by mathematical equations (e.g., tumor texture) and can therefore be quantified by computer programs. Features are entered into ML models to help physicians classify patients and make clinical decisions. Traditional ML includes a large number of established methods, such as k-nearest neighbors (KNN), support vector machines (SVM) and random forests (RF). These methods are widely used in radiology: image-processing methods convert image data into feature vectors, from which predictive models are built to derive clinically relevant information. Such radiomics approaches have been evaluated in small retrospective studies that attempt to predict tissue subtype, treatment response, prognosis and other information from medical images of tumors.
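To make this workflow concrete, the minimal sketch below (not drawn from any of the reviewed studies; the data and feature count are invented) feeds hand-crafted radiomic-style features into an SVM classifier and evaluates it with cross-validated AUC:

```python
# Illustrative traditional ML (radiomics-style) pipeline on hypothetical data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))      # 120 patients x 8 pre-defined features (e.g., texture)
y = rng.integers(0, 2, size=120)   # binary label, e.g., tumor vs. benign hyperplasia

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold cross-validated AUC: {scores.mean():.3f}")  # ~0.5 on random data
```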
DL, a subset of ML, is based on neural network structures inspired by the human brain. Traditional ML models must have features defined and extracted from images beforehand, and their performance depends on the quality of those features. In contrast, DL algorithms do not have to define features in advance [18]; they automatically learn features and perform image classification and other tasks. This data-driven approach is more informative and practical. DL algorithms commonly used in medical image analysis and processing include the artificial neural network (ANN), deep neural network (DNN), convolutional neural network (CNN) and recurrent neural network (RNN). Currently, the CNN is the most popular type of DL architecture in medical image analysis [19]. A CNN consists of multiple layers, usually including convolutional, pooling and fully connected layers. Convolutional layers aggregate and transform local pixel neighborhoods to automatically extract high-level features. The deep convolutional neural network (DCNN) uses more convolutional layers and a larger parameter space to fit large-scale datasets. U-net uses fully convolutional layers and image augmentation to obtain good accuracy on limited datasets. RNNs are particularly suited to processing time-series data. Different DL algorithms thus have different characteristics and application scenarios.
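As a minimal sketch of the convolutional → pooling → fully connected pattern just described (the layer sizes are arbitrary and not taken from any cited model):

```python
# Minimal CNN (PyTorch): convolutional, pooling and fully connected layers.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = TinyCNN()(torch.randn(4, 1, 64, 64))  # batch of 4 single-channel 64x64 images
print(logits.shape)                            # torch.Size([4, 2])
```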

3. Screening of Studies

We performed a search using the following query: (“artificial intelligence” OR “machine learning” OR “deep learning”) AND (“nasopharyngeal carcinoma” OR “nasopharyngeal cancer”). Using this query, we searched Springer, Google Scholar, PubMed and Embase for research articles published in the 15 years up to March 2023. Because there are no consensus metrics or validation protocols for evaluating each model’s performance, we provide a holistic profile of the field rather than a meta-analysis. Accordingly, loose inclusion and exclusion criteria were set (Table 1), and a total of 76 studies were ultimately included.
Only studies using AI techniques in NPC were selected. Table 1 shows the inclusion and exclusion criteria, which were applied to papers according to the purpose of our review.

4. Applications of AI to NPC

In the Lancet, a series of reviews entitled “Nasopharyngeal carcinoma” is published every few years [1,20,21,22]. In recent years, medical AI has been gaining popularity in NPC research. Many researchers have applied AI to tumor detection and to the prediction of prognosis and of radiotherapy and chemotherapy efficacy in NPC (Figure 1).

4.1. AI and NPC Diagnosis

The diagnosis of NPC is a prerequisite for appropriate treatment, which can be divided into qualitative and staging diagnoses. Currently, qualitative diagnosis of NPC is dominated by the collection of biopsy tissue during endoscopy for pathological examination. Staging diagnosis mainly depends on imaging examinations, such as CT, MRI and PET-CT.
The fiberoptic nasopharyngoscope can magnify suspicious lesions up to thousands of times through its visualization optics, and the surgeon can use biopsy forceps to sample the suspicious tissue. The biopsy tissue is then made into paraffin sections for histological examination under the microscope, with the help of electron microscopy or immunohistochemistry if necessary. CT scans a slice of the body of a certain thickness with an X-ray beam; detectors receive the X-rays passing through that layer, a converter transforms them into digital signals and a computer reconstructs images from those signals. MRI exploits nuclear magnetic resonance: gradient magnetic fields are applied and the emitted electromagnetic signals are detected; because the energy released attenuates differently in different structural environments within a substance, the signals can be used to map the internal structure of the body. PET-CT uses tracers to selectively reflect the metabolism of tissues and organs and the physiological, pathological, biochemical and metabolic changes of human tissues at the molecular level; at the same time, the CT images provide full attenuation correction for the nuclear medicine images. The corrected nuclear medicine images are therefore fully quantitative and substantially more accurate diagnostically, combining the complementary information of functional and anatomical images.
Accurate tumor diagnosis is difficult owing to the complexity of tumor symptoms and individual differences. AI technologies can help clinicians reduce their workload and improve the interpretation of medical images, improving both the accuracy and efficiency of diagnosis.

4.1.1. AI Application in Nasopharyngoscopy

Nasopharyngoscopy allows direct observation of lesions on the nasopharyngeal wall, and physicians can analyze and screen lesion images to determine whether the lesions are associated with NPC. NPC diagnosis is currently performed by visualizing suspicious tissue sites with white-light reflectance endoscopy and taking biopsies. In previous studies, researchers developed different AI models using nasopharyngeal endoscopic images to distinguish NPC from benign nasopharyngeal hyperplasia; these models detected NPC with performance not significantly different from [23], or even better than [24], that of radiologists. Mohammed et al. published three studies focusing on the detection of NPC using neural networks based on nasopharyngeal endoscopic images [25,26,27]. The three studies used different neural network models, and all achieved very good accuracy, sensitivity and specificity. Using 27,536 white-light nasopharyngoscopy images, Li et al. developed a DL model for detecting NPC, reporting accuracies of 88.7% and 88.0% on retrospective and prospective test sets, respectively [28].
However, conventional white-light endoscopy tends to miss superficial mucosal lesions. To address this, Xu et al. designed and trained a Siamese DCNN that uses both white-light and narrow-band imaging to enhance classification performance in distinguishing NPC from non-carcinoma. They collected 4783 nasopharyngoscopy images for DL and validated the model’s predictive power on nasopharyngoscopy results; at the patient level, the model achieved an overall accuracy of 95.7% and a sensitivity of 97.0% [29].
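The study's exact architecture is not reproduced here, but the sketch below illustrates the general Siamese pattern it relies on: two inputs pass through the same weight-sharing encoder, and their embeddings are fused for a joint prediction (all layer sizes are invented):

```python
# Siamese-style classifier (PyTorch): white-light and narrow-band images share ONE
# encoder; the two embeddings are concatenated for classification. Sizes hypothetical.
import torch
import torch.nn as nn

class SiameseClassifier(nn.Module):
    def __init__(self, embed_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.head = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, wl: torch.Tensor, nbi: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.encoder(wl), self.encoder(nbi)], dim=1)  # same weights twice
        return self.head(z)

model = SiameseClassifier()
logits = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 2])
```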
Furthermore, distinguishing normal tissue from treated NPC is a clinical challenge. For this reason, researchers developed a DL-based platform for fiber-optic Raman diagnostics that utilizes a multi-layer Raman-specific CNN. The optimized model distinguished NPC from control and post-treatment patients with 82.09% diagnostic accuracy. Inspecting the saliency map of the best model revealed specific Raman signatures associated with cancer-related biomolecular variations [30].

4.1.2. AI Application in Pathological Biopsy

Pathological biopsy is required to diagnose NPC but remains challenging because most samples are poorly differentiated non-keratinizing carcinomas with many admixed lymphocytes. Moreover, biopsy samples are assessed subjectively by pathologists, which is inefficient and can lead to inter-observer inconsistency. AI techniques can classify and diagnose biopsy samples automatically, improving diagnostic accuracy and efficiency and reducing costs. One group trained and validated a DL model on 726 NPC biopsy specimens, reporting areas under the receiver operating characteristic curve (AUCs) of 0.9900 at the patch level and 0.9848 at the slide level [31]. Other researchers have developed similar DL-based automated pathology diagnosis models; one such model achieved an AUC of 0.869 for NPC diagnosis on its validation dataset [32]. These outcomes indicate that DL algorithms can recognize NPC and help pathologists improve their efficiency and accuracy.
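The patch-level versus slide-level distinction reported above can be illustrated with a small, hypothetical evaluation in which per-patch probabilities are averaged into one score per slide before computing the AUC (the mean is one simple aggregation choice, not necessarily the cited studies' method):

```python
# Patch-level vs. slide-level AUC on synthetic data: patch probabilities are
# aggregated (here by their mean) into a single score per slide.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_slides, patches_per_slide = 20, 50
slide_labels = rng.integers(0, 2, n_slides)
patch_labels = np.repeat(slide_labels, patches_per_slide)
patch_probs = np.clip(0.3 * patch_labels + rng.uniform(0, 0.7, patch_labels.size), 0, 1)

print("patch-level AUC:", round(roc_auc_score(patch_labels, patch_probs), 3))
slide_probs = patch_probs.reshape(n_slides, patches_per_slide).mean(axis=1)
print("slide-level AUC:", round(roc_auc_score(slide_labels, slide_probs), 3))
```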
In conclusion, AI plays an important role in recognizing and processing images, and in tissue segmentation in NPC (Table 2). While some applications of AI have yet to be fully realized, its potential in assisting NPC diagnosis is unquestionable.

4.2. AI and NPC Therapy

Major treatments for NPC include radiotherapy, chemotherapy and other integrated approaches. AI techniques applied to NPC treatment can help clinicians design more personalized and accurate treatment plans. In NPC therapy, AI is most often applied to predicting chemotherapy response and improving the precision of the radiotherapy workflow.

4.2.1. AI Application in NPC Chemotherapy

Combining chemotherapy with radiotherapy has been a great improvement in treating advanced NPC, and accurate pre-chemotherapy assessment can help NPC patients choose personalized treatment and improve their prognosis. In 2020, a research group developed a radiomics nomogram integrating clinical data with radiomic features to predict the response and survival of NPC patients receiving induction chemotherapy (IC). Survival analysis showed that IC responders had a significant progression-free survival advantage over non-responders [33]. In a study by Yang et al., CT texture analysis was used to develop a DL model that identifies responders and non-responders to IC for NPC. They extracted DL features from a pre-trained CNN via transfer learning, and the best-performing model, ResNet50 with an SVM classifier, achieved an AUC of 0.811 [34]. These models could be used to predict the response to IC in locally advanced NPC and might be a practical tool for deciding treatment strategies.
A pre-trained network is a saved CNN that has previously been trained on a large dataset. If the original dataset is large and general enough, the spatial feature hierarchy learned by the pre-trained network can serve as an effective generic model for extracting features from the visual world. Even when the new problem and task differ from the original task, the learned features remain portable between problems, an important advantage of DL that makes it very effective on small datasets.
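A hedged sketch of this idea using torchvision (version 0.13 or later is assumed for the weights API): the ImageNet-trained backbone is frozen and only a new two-class head is trained, e.g., responder versus non-responder:

```python
# Transfer learning sketch: reuse a pre-trained ResNet50, retrain only a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")   # load ImageNet pre-trained weights
for p in model.parameters():
    p.requires_grad = False                        # freeze the learned feature hierarchy
model.fc = nn.Linear(model.fc.in_features, 2)      # new 2-class head (trainable)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train the head only
logits = model(torch.randn(4, 3, 224, 224))        # dummy batch of 4 RGB images
print(logits.shape)                                # torch.Size([4, 2])
```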
To assess the effectiveness of DL-based PET-CT radiomics for individualizing IC in advanced NPC, Peng et al. created radiomic signatures and nomograms. Based on the nomogram analysis, patients were stratified into high-risk and low-risk groups, with high-risk patients benefiting from IC and low-risk patients not. Using such a tool for the management of advanced NPC in the future would be a novel and helpful innovation [35].

4.2.2. AI Application in NPC Radiotherapy

Radiotherapy is an indispensable treatment for NPC, in which tumor target segmentation and dose calculation are particularly critical. However, the overall radiotherapy planning process is always affected by image quality and the heavy workload of contouring tumor targets. Researchers have applied AI to radiotherapy planning to address these issues.
Image quality is fundamental to the whole radiotherapy planning process. However, high-quality CT images are usually not available owing to machine limitations and the need to limit patient radiation exposure during radiotherapy. AI can be used to enhance image quality. Tomotherapy uses megavoltage CT to verify set-up and adapt radiotherapy, but its high noise and low contrast make the images inferior. In a study by Chen et al., synthetic kilovoltage CT was generated using a DL approach; in a phantom study, the synthetic kilovoltage CT showed significantly higher signal-to-noise ratio, image homogeneity and contrast than megavoltage CT [36]. Li et al. used a DCNN to generate synthetic CT images from cone-beam CT and applied them to dose calculation for NPC [37]. Similarly, Wang et al. applied a DCNN to produce CT images from T2-weighted MRI; compared with real CT, most soft tissue and bone areas were accurately reconstructed in the synthetic CT [38]. Researchers have also developed an advanced DCNN architecture to generate synthetic CT images from MRI for intensity-modulated proton therapy treatment planning in NPC patients, with (3 mm/3%) gamma passing rates above 97.32% for all synthetic CT images [39]. Through these methods, image quality can be enhanced, which benefits tumor segmentation and dose calculation.
In addition, unimodal images usually cannot provide enough information to accurately depict the tumor target region; because multi-modal images provide complementary information, better radiotherapy treatment plans can be developed from them. In 2011, one study constructed a method utilizing weighted CT-MRI registered images for NPC delineation, called “SNAKE” [40]. Ma et al. developed a multi-modal segmentation architecture composed of a multi-modal CNN and a combined CNN for automatic NPC segmentation on CT and MR images [41]. Chen et al. developed a novel multi-modality MRI fusion network to accurately segment NPC [42]. Zhao et al. presented a method for automatically segmenting NPC on dual-modality PET-CT images based on fully convolutional networks with auxiliary paths [43].
In current clinical practice, targets and organs-at-risk (OARs) are normally delineated manually by clinicians on CT images, which is tedious and time consuming. To address this, many automatic segmentation methods have been proposed. In one study, researchers proposed an adaptive thresholding technique based on self-organizing maps for semi-automated segmentation of NPC [44]. The same team also developed region-growing techniques for segmenting CT images to identify NPC regions [45,46]. Bai et al. proposed the NPC-Seg DL algorithm for NPC segmentation using a location-segmentation framework; evaluated online on the StructSeg-NPC dataset, it obtained an average Dice similarity coefficient (DSC) of 61.81% on the test dataset [47]. Daoud et al. proposed a DL-based CNN model with a two-stage segmentation strategy that determines the final NPC segmentation by integrating three results obtained from coronal, axial and sagittal images; the DSCs of their system were 0.87, 0.85 and 0.91 in the axial, coronal and sagittal planes, respectively [48]. Li et al. applied the U-net DL model to NPC segmentation; after training, the overall DSC for the primary tumor was 74.00% [49]. In addition, many researchers have developed improved models based on U-net to delineate the NPC target volume, obtaining good DSCs after training (0.827–0.84) [50,51,52]. Men et al. constructed an end-to-end deep deconvolutional neural network (DDNN) to segment the nasopharyngeal gross tumor volume and clinical target volume and compared its performance with VGG-16: the DSC values of the DDNN were 80.9% for gross tumor volume and 82.6% for clinical target volume, versus 72.3% and 73.7% for VGG-16, respectively [53].
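The DSC reported throughout these studies measures the overlap between a predicted mask and the reference delineation; a minimal NumPy implementation is sketched below:

```python
# Dice similarity coefficient (DSC) between binary masks: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

ref = np.zeros((64, 64)); ref[16:48, 16:48] = 1    # reference delineation
pred = np.zeros((64, 64)); pred[20:52, 16:48] = 1  # slightly shifted prediction
print(f"DSC = {dice(pred, ref):.3f}")              # 0.875 for this overlap
```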
MRI provides better soft tissue contrast than CT, which facilitates accurate segmentation of the tumor target, and many studies have built algorithms for NPC segmentation on MRI. NPC contours have been determined from MRI using a nearest-neighbor graph model and distance-regularized level set evolution [54,55]. Li et al. used a CNN to create an automatic NPC segmentation model based on contrast-enhanced MRI; the trained model obtained a DSC of 0.89 [56]. Lin et al. built a 3D CNN architecture based on VoxResNet to automatically contour the primary gross tumor volume; 1021 NPC patients were included, and the trained model achieved a DSC of 0.79 [57]. Researchers have also developed a 3D CNN with long-range skip connections and multi-scale feature pyramids for NPC segmentation, which achieved a DSC of 0.737 in testing [58]. Ye et al. developed a fully automatic NPC segmentation method using a densely connected U-net and dual-sequence MRI, with an average DSC of 0.87 in seven external subjects with NPC [59]. Luo et al. proposed an augmentation-invariant strategy combined with a DL model; the final experiments showed that it outperforms the widely used nnU-Net, achieving highly accurate gross tumor volume segmentation on MRI for NPC [60].
NPC is highly malignant and invasive, so it is difficult to distinguish the boundary between tumor tissue and normal tissue against a complex MRI background. To address this, researchers developed a coarse-to-fine deep neural network: the model first predicts a coarse mask with a carefully designed segmentation module, and a boundary-rendering module then uses semantic information from different feature-mapping layers to refine the boundary of the coarse mask. The dataset encompassed 2000 MRI sections from 596 patients, and the model achieved a DSC of 0.703 [61].
CNNs show promise for cancer segmentation on contrast-enhanced MRI, but some patients cannot receive contrast media. To address this, Wong et al. used U-net to delineate primary NPC on non-contrast-enhanced MRI and compared it with contrast-enhanced MRI. U-net performed similarly on fat-suppressed (FS)-T2W images (DSC = 0.71) as on contrast-enhanced T1W images, suggesting that CNNs can depict NPC on FS-T2W images when contrast injection is undesirable [62].
Automated and precise segmentation of OARs can enable more precise radiotherapy planning and reduce the risk of radiation side effects. Researchers created a DL-based OAR detection and segmentation network whose DSCs for high-risk organ segmentation on CT images ranged from 0.689 to 0.934 [63]. Zhong et al. proposed a cascade network combining DL and the boosting algorithm for segmenting OARs including the parotid gland, thyroid gland and optic nerve, with corresponding DSCs of 0.92, 0.92 and 0.89 [64]. Peng et al. designed OrganNet, an improved fully convolutional neural network for automatic OAR segmentation, with an average DSC of 83.75% [65]. Zhao et al. designed an AU-net model based on 3D U-net to automatically segment the OARs of NPC and obtained a mean DSC of 0.86 ± 0.02 [66].
The determination of radiotherapy dose also plays an important role in radiotherapy planning. Researchers developed a gated recurrent unit (GRU)-based RNN model that uses dosimetric information to predict treatment plans for NPC, and proposed an improved method to further increase dose-volume histogram (DVH) prediction precision and demonstrate feasibility on small patient samples [67]. Experimental plans (EPs) regenerated under the guidance of the GRU-based RNN prediction model agreed well with clinical plans (CPs), sparing many OARs better while still meeting acceptable planning target volume (PTV) criteria [68,69]. Yue et al. developed a DL method for radiotherapy dose prediction in NPC based on distance and mask information; its predicted dose error and DVH error were 7.51% and 11.6% lower, respectively, than those of the mask-based method [70]. Sun et al. developed a U-net-based DL network to predict dose distributions from patients’ anatomical information; across the 117 NPC cases included, the voxel strategy gave better organ sparing with slightly suboptimal planning target volume coverage [71]. Jiao et al. developed a generalized regression neural network using geometric and dosimetric information to predict OAR DVHs; adding the dosimetric information increased the R2 value by ~6.7% and decreased the mean absolute error by ~46.7% [72]. Similarly, Chen et al. designed a CNN-based DL network to directly predict the DVHs of OARs, with prediction errors for D2% and D50 kept within 2.32 and 0.69 Gy, respectively [73].
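A DVH summarizes, for each dose level, the fraction of a structure's volume receiving at least that dose. The sketch below computes a cumulative DVH and the D2%/D50 metrics mentioned above from a synthetic per-voxel dose array:

```python
# Cumulative dose-volume histogram (DVH) and dose metrics from synthetic voxel doses.
import numpy as np

rng = np.random.default_rng(2)
dose = rng.normal(50, 8, size=5000).clip(min=0)       # Gy per voxel inside one structure

levels = np.linspace(0, 80, 161)                      # dose axis in 0.5 Gy steps
dvh = np.array([(dose >= d).mean() for d in levels])  # volume fraction >= each dose

d2 = np.percentile(dose, 98)    # D2%: minimum dose to the hottest 2% of the volume
d50 = np.percentile(dose, 50)   # D50: median dose
print(f"D2% = {d2:.1f} Gy, D50 = {d50:.1f} Gy, V50Gy = {(dose >= 50).mean():.1%}")
```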
Some NPC patients develop complications after radiotherapy that can affect quality of life and lifespan, yet early diagnosis of these complications is challenging. AI can be applied to the early prediction of possible complications after NPC radiotherapy. Previous research used a random forest model to construct a radiomics model for the early detection of radiation-induced temporal lobe injury (RTLI); with this model, RTLI can be predicted dynamically in advance, allowing early detection and preventive measures to limit its progression [74]. Similarly, Bin et al. extracted radiomic features from MRI and built an ML model; a nomogram integrating clinical factors was used to predict RTLI within 5 years after radiotherapy in patients with T4/N0-3/M0 NPC, with a C-index of 0.82 in the validation cohort [75]. Ren et al. developed a prediction model based on an ML algorithm with dosiomic features, which outperformed conventional dose-volume factors in predicting radiation-induced hypothyroidism early in NPC patients receiving radiotherapy, enabling preventive measures: the dosiomics-based model achieved an optimal AUC of 0.70, versus 0.61 for the dose-volume-factor-based model [76]. To predict radiation-induced xerostomia, Chao et al. developed a clustering model that accounts for inhomogeneous dose distributions within the parotid gland; combined with ML techniques, it provides a promising tool for predicting xerostomia in head and neck cancer patients [77].

4.2.3. AI Application in the Personalized and Precise Treatment of NPC

Personalized and precise cancer treatment has become a major topic in NPC. Patients with locally advanced NPC can receive concurrent chemoradiotherapy (CCRT) or IC plus CCRT, but the choice between them remains ambiguous. A DL-based NPC treatment decision model developed by researchers can predict the prognosis of patients with T3N1M0 NPC under different therapy regimens and recommend the optimal therapy accordingly; it is expected to be a potential tool for promoting individualized NPC treatment [78]. The ability to discriminate between the different relapse risks of NPC patients and to tailor individual treatment has become increasingly important; an AI model designed by researchers can divide relapse patients into different risk groups, which has great potential for guiding personalized treatment [79]. Targeted therapy is also important in treating NPC. Researchers developed an SVM-based algorithm to predict the prognosis of locally advanced NPC, integrating the expression levels of multiple tissue molecular biomarkers representing tumorigenic signaling pathways with EBV-associated serological biomarkers; it may guide future targeted therapies against the related signaling pathways [80]. Moreover, the application of AI to clinical management should not be overlooked. Previous research developed an automatic ML scoring system based on MRI data that surpassed the American Joint Committee on Cancer (AJCC) [81] TNM system in predicting NPC prognosis; the new scoring system can help improve counseling and personalized management of NPC patients and help them achieve better outcomes [82].
With the arrival of the big data era, NPC therapy will become more personalized and precise (Table 3). The development of AI can not only effectively relieve clinicians' workload, but also provide more accurate and humane medical services to patients.

4.3. AI and NPC Prognosis Prediction

Although great progress has been made in NPC treatment, the long-term prognosis of NPC patients is still unsatisfactory. The traditional TNM/AJCC staging system fails to provide the expected prognostic effect and to predict patient progression. In contrast, AI can accurately predict cancer survival time and progression through processing data and analyzing important features.
MRI images and clinical data are frequently used to build predictive models for NPC prognosis. Zhong et al. established a radiomic nomogram to predict disease-free survival; in the test cohort, its C-index was 0.788 [83]. Researchers have used SVM to construct radiomic ML models to predict disease progression, and the models performed well [84,85]. Li et al. combined radiomics and ML to predict NPC recurrence after radiotherapy; comparing several typical algorithms, the ANN achieved the best prediction accuracy of 0.812 [86]. Qiang et al. developed a 3D DenseNet-based prognosis model to predict disease-free survival in non-metastatic NPC. A total of 1636 NPC patients were enrolled, and the model divided patients into low- and high-risk groups according to a risk-score cut-off; the results showed that it correctly differentiated the two groups (hazard ratio = 0.62) [87]. Similarly, Du et al. developed a DCNN model to assess the risk of non-metastatic NPC patients; in the validation set for 3-year disease progression, the model's AUC was 0.828 [88]. Several other researchers have constructed similar DL models for prognostic prediction and risk stratification of NPC, all with good performance [78,89,90]. For NPC patients, survival prediction is of utmost importance. Jing et al. developed an end-to-end multi-modality deep survival network (MDSN) to precisely predict the risk of tumor progression in NPC patients; compared with four popular traditional survival methods, the MDSN performed best, with a C-index of 0.651 [91]. Chen et al. used ML to develop a survival model based on tumor burden characteristics and all clinical factors; across 1643 enrolled patients, the C-indexes were 0.766 and 0.760 in the internal and external validation sets [92].
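The C-index cited in these studies is the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed survival; a minimal implementation of Harrell's C-index for right-censored data (with toy inputs) is sketched below:

```python
# Harrell's concordance index (C-index) for right-censored survival data.
import numpy as np

def c_index(time, event, risk):
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        for j in range(len(time)):
            if event[i] == 1 and time[i] < time[j]:   # pair is comparable
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0                 # earlier event, higher risk
                elif risk[i] == risk[j]:
                    concordant += 0.5                 # tied risks count half
    return concordant / comparable

t = [5, 8, 12, 20, 25]          # survival times
e = [1, 1, 0, 1, 0]             # 1 = event observed, 0 = censored
r = [0.9, 0.7, 0.5, 0.6, 0.1]   # predicted risk scores
print(f"C-index = {c_index(t, e, r):.3f}")  # 1.000: perfectly ordered toy example
```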
PET-CT has particular advantages in sensitivity, specificity and accuracy for detecting NPC recurrence and distant metastases. Meng et al. proposed a model based on pretreatment PET-CT images that can both predict survival and segment advanced NPC. They adopted a hard-parameter-sharing segmentation backbone to extract regional attributes of the primary tumor and lessen the influence of irrelevant background, together with a cascaded survival network that takes prognostic information from the primary tumor and further exploits the tumor features acquired by the segmentation backbone [93]. Gu et al. developed an end-to-end multi-modal DL-based radiomics model that extracts deep features from pre-processed PET-CT images and predicts 5-year progression-free survival, incorporating TNM staging to further improve prognostic power. A total of 257 patients with advanced NPC were enrolled and divided into internal and external cohorts, with AUCs of 0.842 and 0.823, respectively [94].
AI prognostic models can also be constructed from pathological images. Researchers integrated MRI-based radiomic features with DCNN models based on pathology images and clinical features to construct a multi-scale nomogram predicting failure-free survival of NPC patients; the C-indexes of the internal and external test cohorts were 0.828 and 0.834, respectively [95]. In a previous study, the software QuPath (version 0.1.3, Queen’s University Belfast) was used to extract pathological microscopic features of NPC patients, and the neural network DeepSurv was used to analyze them (DSPMF); DSPMF proved to be a reliable prognostic tool and may guide treatment decisions for NPC patients [96].
Other researchers have used RNA data to build AI prediction models, as some miRNAs have prognostic power in NPC. Chen et al. combined miRNA expression data from various profiling platforms and constructed a predictive model using a 6-miRNA signature; functional analysis showed that the six miRNAs are principally involved in oncogenic signaling pathways, virus infection pathways and B-cell expression [97]. NPC is a metastatic and highly invasive cancer whose molecular profiles and clinical outcomes vary with clinical characteristics. Zhao et al. applied ML techniques to RNA-Seq data from NPC tumor biopsies to identify 13 genes that differ significantly between the recurrence/metastasis and non-recurrence/metastasis groups; a 4-mRNA signature identified from these genes showed good prognostic value for NPC and was related to the immune response as well as cell proliferation [98]. Zhang et al. used a deep network to predict NPC prognosis from MRI and gene expression, achieving an AUC of 0.88 [99].
AI makes it possible to predict outcomes based on diverse factors prior to treatment, which is beneficial for the whole diagnosis and treatment process (Table 4). In the near future, AI techniques will help doctors make rational and personalized medical decisions, including accurate diagnoses, personalized treatment and prognosis assessment for NPC patients.

4.4. Current State-of-the-Art AI Algorithms for NPC Diagnosis and Treatment

AI models require a large number of datasets for training and validation, and we have listed some sample images from various datasets in Figure 2.
AI can help doctors compile statistics from pathology reports, physical examination reports and other records; through technologies such as big data and data mining, it can analyze patients’ medical data to automatically identify clinical variables and indicators. A large part of medical data comes from medical images, such as CT, MRI and PET-CT images, and AI can help diagnose and treat diseases by learning from large numbers of such images. CNNs have excellent performance in image recognition and image segmentation. In studies on NPC diagnosis [28], treatment response prediction [33] and prognosis prediction [93] based on various images, researchers have obtained the best performance with improved models based on classical CNNs, usually using AUC and DSC as performance metrics. The FCN-based U-net model also performs very well for image segmentation, showing excellent performance in target segmentation [59] and dose prediction [69].
The distribution of studies based on the best performing algorithms is shown in Figure 3. Many studies have improved on the classical model to create new algorithmic models. Among the AI algorithms, DCNN and CNN perform very well. However, the research results are based on each study independently and are not directly comparable due to the use of different datasets and/or evaluation metrics.

4.5. Common Training and Testing Methodologies

The performance of AI algorithms is influenced by many factors. We evaluated dataset size, class balance, validation strategy and data processing strategy, all of which have a direct impact on training and testing performance. A summary is given in Table 5.
Most of the cited research papers used datasets with fewer than 1000 cases. In addition, only one study addressed and discussed class balance. AI requires special strategies to manage limited and unbalanced data so as to reduce the impact on training and testing procedures (e.g., data augmentation techniques). Most studies used the validation set method or cross-validation for model validation. The validation set method is the simplest: it divides the entire dataset into a training set and a test set, uses only a portion of the data for model training and is suited to cases where the amount of data is relatively large. Cross-validation reuses the data by repeatedly splitting the sample into multiple different training and testing sets; splitting is repeated until each part has been used as test data at least once. This strategy is common for small datasets. However, cross-validation does not by itself ensure the quality of ML models, since potentially biased or unbalanced data leads to biased evaluations. Some papers failed to describe any validation strategy.
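The two validation strategies can be contrasted in a few lines; the sketch below uses invented data and a simple classifier purely for illustration:

```python
# Hold-out (validation set) vs. k-fold cross-validation on hypothetical data.
import numpy as np
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)

# Validation set method: one fixed train/test split (suited to larger datasets).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print("hold-out accuracy:", LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te))

# 5-fold cross-validation: every sample is used as test data exactly once.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
print("CV accuracy:", cross_val_score(LogisticRegression(), X, y, cv=cv).mean())
```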
Health data contain many missing values, and many AI algorithms cannot handle missing values during data pre-processing, which degrades their performance. According to Table 5, excluding cases with incomplete data is the most common strategy. However, this strategy suffers from significant information loss and performs poorly when missing values make up a large share of the dataset. Some studies lack a data processing strategy and a detailed description of how missing values were managed. AI solutions are trained and tested on private/restricted datasets: these datasets either hold sensitive patient information or belong to medical institutions that cannot or do not wish to make their data publicly available. Dataset availability improves the reproducibility and transparency of research [100,101]. However, as all the reviewed papers used private data, the availability of datasets for AI applications in NPC remains a concern.
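The two missing-value strategies discussed above, dropping incomplete cases versus imputation, can be sketched on a toy table (hypothetical values):

```python
# Missing-value handling: listwise deletion vs. median imputation (toy data).
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [4.0, np.nan],
              [5.0, 6.0]])

complete = X[~np.isnan(X).any(axis=1)]       # strategy 1: drop incomplete cases
print("cases kept after deletion:", len(complete), "of", len(X))  # information loss

imputed = SimpleImputer(strategy="median").fit_transform(X)  # strategy 2: impute
print(imputed)
```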

5. Current Challenges

Although there is rapid development of AI techniques in the clinical research of NPC, the application of AI remains immature [102]. Some challenges need to be addressed in order to translate these studies into clinically valuable applications.
As the survival of NPC patients lengthens, more and more patients suffer from post-radiotherapy radiation brain injury, treatment failure and post-treatment recurrence and metastasis. These patients have complicated conditions and a poor prognosis, which makes treatment difficult. To tackle these problems, we need to find economical, efficient and clinically optimal treatment plans for NPC. Because AI has the advantage of objectively analyzing and processing large amounts of data, it is well placed to contribute to precise treatment strategies, including early screening, precise staging, precise target imaging, optimal treatment of recurrent metastatic NPC and the selection of combination treatment modalities. Prediction models constructed by AI algorithms require large amounts of high-quality clinical data to improve their accuracy, sensitivity and specificity, so standardized data annotation and multicenter data sources are needed. Researchers have developed improved algorithms to handle small samples, albeit with reduced accuracy [103]. At present, AI algorithms for NPC are mostly limited to data from a single medical institution [13], which can lead to overfitting, so the models may not generalize to a wider range of scenarios. Therefore, external validation is necessary before the widespread clinical adoption of AI applications.
In addition, AI predictions are called a “black box” because the selection and weighting processes of AI algorithms are opaque. In other words, interpretability is an important consideration when applying AI to NPC. At present, there are two main solutions to this problem: interpretable models and model-agnostic interpretation methods [104]. Both approaches increase computational complexity, so much work remains to be done to improve model interpretability.
Moreover, much of the research on the utilization of AI in NPC has been designed retrospectively. However, the encouraging results obtained in these studies need to be confirmed by further prospective and multicenter studies owing to possible selection bias in the retrospective study design.
Furthermore, privacy protection and data security are major challenges for AI. Building AI applications for NPC requires large amounts of clinical data from patients, which in turn requires privacy protection and data security. Currently, there are no suitable technical solutions that alleviate this problem while meeting the growing demands of data-driven science [105]. Establishing a secure and reliable multicenter data-sharing platform for NPC is one possible way forward.
A common defect of current AI tools is their inability to handle multiple tasks: no integrated AI system has been developed to detect multiple abnormalities across the human body. Diagnosis and treatment require multiple tools, and orchestrating them synergistically is complicated. Leveraging AI solutions brings many benefits, but their deployment is difficult. Healthcare organizations need to bridge the skills gap by educating staff about AI systems and professional capabilities and by building patient trust in AI.

6. Conclusions and Prospect

Literature reviews are broadly categorized as systematic or narrative. Systematic reviews are more rigorous in their methodology and less subject to bias than narrative reviews. However, the aim of this paper is to outline the dynamics of research advances in AI for the diagnosis and treatment of NPC and to present the challenges and future of the field, so we have chosen a narrative review. To ensure quality, we clarified the inclusion and exclusion criteria, integrated and analyzed the studies, noted the shortcomings of the reviewed literature and maintained an objective stance, giving the reader a quick yet comprehensive overview of the state of research in this field.
With the explosive growth of clinical data and research progress in ML and DL, AI has shown great potential for application in various clinical aspects of NPC: (1) understanding cancer at the molecular level through DL; (2) supporting the diagnosis and prognosis of NPC based on images and pathological specimens; and (3) promoting personalized, accurate diagnosis and treatment of NPC. As AI techniques continue to advance, AI will have a great impact on the NPC clinical area, and we believe it will be combined ever more closely with all aspects of medicine in the near future. We can rely on AI techniques to develop techniques less invasive than nasopharyngoscopy, with diagnostic accuracy close to that of pathological biopsy. We can build AI models based on clinical data to provide healthy people with early warnings of NPC. AI will be closely integrated with radiotherapy to develop more personalized radiotherapy plans and conduct more effective whole-process efficacy evaluations. In the future, we can establish large, cross-population databases to support AI-based prognosis prediction [106], helping researchers find the most important prognostic factors and establish future prospective prognostic intervention studies.

Funding

This research received no external funding.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, Y.P.; Chan, A.T.C.; Le, Q.T.; Blanchard, P.; Sun, Y.; Ma, J. Nasopharyngeal carcinoma. Lancet 2019, 394, 64–80. [Google Scholar] [CrossRef] [PubMed]
  2. Bossi, P.; Chan, A.T.; Licitra, L.; Trama, A.; Orlandi, E.; Hui, E.P.; Halámková, J.; Mattheis, S.; Baujat, B.; Hardillo, J.; et al. Nasopharyngeal carcinoma: ESMO-EURACAN Clinical Practice Guidelines for diagnosis, treatment and follow-up. Ann. Oncol. 2021, 32, 452–465. [Google Scholar] [CrossRef] [PubMed]
  3. Tang, L.L.; Chen, Y.P.; Chen, C.B.; Chen, M.Y.; Chen, N.Y.; Chen, X.Z.; Du, X.J.; Fang, W.F.; Feng, M.; Gao, J.; et al. The Chinese Society of Clinical Oncology (CSCO) clinical guidelines for the diagnosis and treatment of nasopharyngeal carcinoma. Cancer Commun. 2021, 41, 1195–1227. [Google Scholar] [CrossRef]
  4. Liang, H.; Xiang, Y.Q.; Lv, X.; Xie, C.Q.; Cao, S.M.; Wang, L.; Qian, C.N.; Yang, J.; Ye, Y.F.; Gan, F.; et al. Survival impact of waiting time for radical radiotherapy in nasopharyngeal carcinoma: A large institution-based cohort study from an endemic area. Eur. J. Cancer 2017, 73, 48–60. [Google Scholar] [CrossRef] [PubMed]
  5. Lee, N.; Harris, J.; Garden, A.S.; Straube, W.; Glisson, B.; Xia, P.; Bosch, W.; Morrison, W.H.; Quivey, J.; Thorstad, W.; et al. Intensity-modulated radiation therapy with or without chemotherapy for nasopharyngeal carcinoma: Radiation therapy oncology group phase II trial 0225. J. Clin. Oncol. 2009, 27, 3684–3690. [Google Scholar] [CrossRef]
  6. Sun, X.; Su, S.; Chen, C.; Han, F.; Zhao, C.; Xiao, W.; Deng, X.; Huang, S.; Lin, C.; Lu, T. Long-term outcomes of intensity-modulated radiotherapy for 868 patients with nasopharyngeal carcinoma: An analysis of survival and treatment toxicities. Radiother. Oncol. 2014, 110, 398–403. [Google Scholar] [CrossRef]
  7. Yi, J.L.; Gao, L.; Huang, X.D.; Li, S.Y.; Luo, J.W.; Cai, W.M.; Xiao, J.P.; Xu, G.Z. Nasopharyngeal carcinoma treated by radical radiotherapy alone: Ten-year experience of a single institution. Int. J. Radiat. Oncol. Biol. Phys. 2006, 65, 161–168. [Google Scholar] [CrossRef]
  8. Shalev-Shwartz, S.; Ben-David, S. Understanding Machine Learning: From Theory to Algorithms; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  9. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
  10. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69s, S36–S40. [Google Scholar] [CrossRef]
  11. Chen, Z.H.; Lin, L.; Wu, C.F.; Li, C.F.; Xu, R.H.; Sun, Y. Artificial intelligence for assisting cancer diagnosis and treatment in the era of precision medicine. Cancer Commun. 2021, 41, 1100–1115. [Google Scholar] [CrossRef]
  12. Yang, C.; Jiang, Z.; Cheng, T.; Zhou, R.; Wang, G.; Jing, D.; Bo, L.; Huang, P.; Wang, J.; Zhang, D.; et al. Radiomics for Predicting Response of Neoadjuvant Chemotherapy in Nasopharyngeal Carcinoma: A Systematic Review and Meta-Analysis. Front. Oncol. 2022, 12, 893103. [Google Scholar] [CrossRef]
  13. Li, S.; Deng, Y.Q.; Zhu, Z.L.; Hua, H.L.; Tao, Z.Z. A Comprehensive Review on Radiomics and Deep Learning for Nasopharyngeal Carcinoma Imaging. Diagnostics 2021, 11, 1523. [Google Scholar] [CrossRef] [PubMed]
  14. Ng, W.T.; But, B.; Choi, H.C.W.; de Bree, R.; Lee, A.W.M.; Lee, V.H.F.; López, F.; Mäkitie, A.A.; Rodrigo, J.P.; Saba, N.F.; et al. Application of Artificial Intelligence for Nasopharyngeal Carcinoma Management—A Systematic Review. Cancer Manag. Res. 2022, 14, 339–366. [Google Scholar] [CrossRef] [PubMed]
  15. Brody, H. Medical imaging. Nature 2013, 502, S81. [Google Scholar] [CrossRef] [PubMed]
  16. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  17. Ambinder, E.P. A history of the shift toward full computerization of medicine. J. Oncol. Pract. 2005, 1, 54–56. [Google Scholar] [CrossRef]
  18. Shen, D.; Wu, G.; Suk, H.I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef]
  19. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
  20. Chua, M.L.K.; Wee, J.T.S.; Hui, E.P.; Chan, A.T.C. Nasopharyngeal carcinoma. Lancet 2016, 387, 1012–1024. [Google Scholar] [CrossRef]
  21. Vokes, E.E.; Liebowitz, D.N.; Weichselbaum, R.R. Nasopharyngeal carcinoma. Lancet 1997, 350, 1087–1091. [Google Scholar] [CrossRef]
  22. Wei, W.I.; Sham, J.S. Nasopharyngeal carcinoma. Lancet 2005, 365, 2041–2054. [Google Scholar] [CrossRef] [PubMed]
  23. Wong, L.M.; King, A.D.; Ai, Q.Y.H.; Lam, W.K.J.; Poon, D.M.C.; Ma, B.B.Y.; Chan, K.C.A.; Mo, F.K.F. Convolutional neural network for discriminating nasopharyngeal carcinoma and benign hyperplasia on MRI. Eur. Radiol. 2021, 31, 3856–3863. [Google Scholar] [CrossRef]
  24. Ke, L.; Deng, Y.; Xia, W.; Qiang, M.; Chen, X.; Liu, K.; Jing, B.; He, C.; Xie, C.; Guo, X.; et al. Development of a self-constrained 3D DenseNet model in automatic detection and segmentation of nasopharyngeal carcinoma using magnetic resonance images. Oral Oncol. 2020, 110, 104862. [Google Scholar] [CrossRef] [PubMed]
  25. Mohammed, M.A.; Abd Ghani, M.K.; Arunkumar, N.; Raed, H.; Mohamad, A.; Mohd, B. A real time computer aided object detection of Nasopharyngeal carcinoma using genetic algorithm and artificial neural network based on Haar feature fear. Future Gener. Comput. Syst. 2018, 89, 539–547. [Google Scholar] [CrossRef]
  26. Mohammed, M.A.; Abd Ghani, M.K.; Arunkumar, N.; Hamed, R.I.; Mostafa, S.A.; Abdullah, M.K.; Burhanuddin, M.A. Decision support system for Nasopharyngeal carcinoma discrimination from endoscopic images using artificial neural network. J. Supercomput. 2020, 76, 1086–1104. [Google Scholar] [CrossRef]
  27. Abd Ghani, M.K.; Mohammed, M.A.; Arunkumar, N.; Mostafa, S.; Ibrahim, D.A.; Abdullah, M.K.; Jaber, M.M.; Abdulhay, E.; Ramirez-Gonzalez, G.; Burhanuddin, M.A. Decision-level fusion scheme for Nasopharyngeal carcinoma identification using machine learning techniques. Neural Comput. Appl. 2020, 32, 625–638. [Google Scholar] [CrossRef]
  28. Li, C.; Jing, B.; Ke, L.; Li, B.; Xia, W.; He, C.; Qian, C.; Zhao, C.; Mai, H.; Chen, M.; et al. Development and validation of an endoscopic images-based deep learning model for detection with nasopharyngeal malignancies. Cancer Commun. 2018, 38, 59. [Google Scholar] [CrossRef]
  29. Xu, J.; Wang, J.; Bian, X.; Zhu, J.Q.; Tie, C.W.; Liu, X.; Zhou, Z.; Ni, X.G.; Qian, D. Deep Learning for nasopharyngeal Carcinoma Identification Using Both White Light and Narrow-Band Imaging Endoscopy. Laryngoscope 2022, 132, 999–1007. [Google Scholar] [CrossRef]
  30. Shu, C.; Yan, H.; Zheng, W.; Lin, K.; James, A.; Selvarajan, S.; Lim, C.M.; Huang, Z. Deep Learning-Guided Fiberoptic Raman Spectroscopy Enables Real-Time In Vivo Diagnosis and Assessment of Nasopharyngeal Carcinoma and Post-treatment Efficacy during Endoscopy. Anal. Chem. 2021, 93, 10898–10906. [Google Scholar] [CrossRef]
  31. Chuang, W.Y.; Chang, S.H.; Yu, W.H.; Yang, C.K.; Yeh, C.J.; Ueng, S.H.; Liu, Y.J.; Chen, T.D.; Chen, K.H.; Hsieh, Y.Y.; et al. Successful Identification of Nasopharyngeal Carcinoma in Nasopharyngeal Biopsies Using Deep Learning. Cancers 2020, 12, 507. [Google Scholar] [CrossRef]
  32. Diao, S.; Hou, J.; Yu, H.; Zhao, X.; Sun, Y.; Lambo, R.L.; Xie, Y.; Liu, L.; Qin, W.; Luo, W. Computer-Aided Pathologic Diagnosis of Nasopharyngeal Carcinoma Based on Deep Learning. Am. J. Pathol. 2020, 190, 1691–1700. [Google Scholar] [CrossRef] [PubMed]
  33. Zhao, L.; Gong, J.; Xi, Y.; Xu, M.; Li, C.; Kang, X.; Yin, Y.; Qin, W.; Yin, H.; Shi, M. MRI-based radiomics nomogram may predict the response to induction chemotherapy and survival in locally advanced nasopharyngeal carcinoma. Eur. Radiol. 2020, 30, 537–546. [Google Scholar] [CrossRef] [PubMed]
  34. Yang, Y.; Wang, M.; Qiu, K.; Wang, Y.; Ma, X. Computed tomography-based deep-learning prediction of induction chemotherapy treatment response in locally advanced nasopharyngeal carcinoma. Strahlenther. Onkol. 2022, 198, 183–193. [Google Scholar] [CrossRef] [PubMed]
  35. Peng, H.; Dong, D.; Fang, M.J.; Li, L.; Tang, L.L.; Chen, L.; Li, W.F.; Mao, Y.P.; Fan, W.; Liu, L.Z.; et al. Prognostic Value of Deep Learning PET/CT-Based Radiomics: Potential Role for Future Individual Induction Chemotherapy in Advanced Nasopharyngeal Carcinoma. Clin. Cancer Res. 2019, 25, 4271–4279. [Google Scholar] [CrossRef]
  36. Chen, X.; Yang, B.; Li, J.; Zhu, J.; Ma, X.; Chen, D.; Hu, Z.; Men, K.; Dai, J. A deep-learning method for generating synthetic kV-CT and improving tumor segmentation for helical tomotherapy of nasopharyngeal carcinoma. Phys. Med. Biol. 2021, 66, 224001. [Google Scholar] [CrossRef]
  37. Li, Y.; Zhu, J.; Liu, Z.; Teng, J.; Xie, Q.; Zhang, L.; Liu, X.; Shi, J.; Chen, L. A preliminary study of using a deep convolution neural network to generate synthesized CT images based on CBCT for adaptive radiotherapy of nasopharyngeal carcinoma. Phys. Med. Biol. 2019, 64, 145010. [Google Scholar] [CrossRef]
  38. Wang, Y.; Liu, C.; Zhang, X.; Deng, W. Synthetic CT Generation Based on T2 Weighted MRI of Nasopharyngeal Carcinoma (NPC) Using a Deep Convolutional Neural Network (DCNN). Front. Oncol. 2019, 9, 1333. [Google Scholar] [CrossRef]
  39. Chen, S.; Peng, Y.; Qin, A.; Liu, Y.; Zhao, C.; Deng, X.; Deraniyagala, R.; Stevens, C.; Ding, X. MR-based synthetic CT image for intensity-modulated proton treatment planning of nasopharyngeal carcinoma patients. Acta Oncol. 2022, 61, 1417–1424. [Google Scholar] [CrossRef]
  40. Fitton, I.; Cornelissen, S.A.; Duppen, J.C.; Steenbakkers, R.J.; Peeters, S.T.; Hoebers, F.J.; Kaanders, J.H.; Nowak, P.J.; Rasch, C.R.; van Herk, M. Semi-automatic delineation using weighted CT-MRI registered images for radiotherapy of nasopharyngeal cancer. Med. Phys. 2011, 38, 4662–4666. [Google Scholar] [CrossRef]
  41. Ma, Z.; Zhou, S.; Wu, X.; Zhang, H.; Yan, W.; Sun, S.; Zhou, J. Nasopharyngeal carcinoma segmentation based on enhanced convolutional neural networks using multi-modal metric learning. Phys. Med. Biol. 2019, 64, 025005. [Google Scholar] [CrossRef]
  42. Chen, H.; Qi, Y.; Yin, Y.; Li, T.; Liu, X.; Li, X.; Gong, G.; Wang, L. MMFNet: A multi-modality MRI fusion network for segmentation of nasopharyngeal carcinoma. Neurocomputing 2020, 394, 27–40. [Google Scholar] [CrossRef]
43. Zhao, L.; Lu, Z.; Jiang, J.; Zhou, Y.; Wu, Y.; Feng, Q. Automatic Nasopharyngeal Carcinoma Segmentation Using Fully Convolutional Networks with Auxiliary Paths on Dual-Modality PET-CT Images. J. Digit. Imaging 2019, 32, 462–470. [Google Scholar] [CrossRef] [PubMed]
  44. Chanapai, W.; Ritthipravat, P. Adaptive thresholding based on SOM technique for semi-automatic NPC image segmentation. In Proceedings of the 2009 International Conference on Machine Learning and Applications, IEEE, Miami, FL, USA, 13–15 December 2009; pp. 504–508. [Google Scholar]
  45. Tatanun, C.; Ritthipravat, P.; Bhongmakapat, T.; Tuntiyatorn, L. Automatic segmentation of nasopharyngeal carcinoma from CT images: Region growing based technique. In Proceedings of the 2010 2nd International Conference on Signal Processing Systems, IEEE, Dalian, China, 5–7 July 2010; Volume 2, pp. 537–541. [Google Scholar]
  46. Chanapai, W.; Bhongmakapat, T.; Tuntiyatorn, L.; Ritthipravat, P. Nasopharyngeal carcinoma segmentation using a region growing technique. Int. J. Comput. Assist. Radiol. Surg. 2012, 7, 413–422. [Google Scholar] [CrossRef] [PubMed]
  47. Bai, X.; Hu, Y.; Gong, G.; Yin, Y.; Xia, Y. A deep learning approach to segmentation of nasopharyngeal carcinoma using computed tomography. Biomed. Signal Process. Control 2021, 64, 102246. [Google Scholar] [CrossRef]
  48. Daoud, B.; Morooka, K.; Kurazume, R.; Leila, F.; Mnejja, W.; Daoud, J. 3D segmentation of nasopharyngeal carcinoma from CT images using cascade deep learning. Comput. Med. Imaging Graph. 2019, 77, 101644. [Google Scholar] [CrossRef] [PubMed]
  49. Li, S.; Xiao, J.; He, L.; Peng, X.; Yuan, X. The Tumor Target Segmentation of Nasopharyngeal Cancer in CT Images Based on Deep Learning Methods. Technol. Cancer Res. Treat. 2019, 18, 1533033819884561. [Google Scholar] [CrossRef]
  50. Xue, X.; Qin, N.; Hao, X.; Shi, J.; Wu, A.; An, H.; Zhang, H.; Wu, A.; Yang, Y. Sequential and Iterative Auto-Segmentation of High-Risk Clinical Target Volume for Radiotherapy of Nasopharyngeal Carcinoma in Planning CT Images. Front. Oncol. 2020, 10, 1134. [Google Scholar] [CrossRef]
  51. Jin, Z.; Li, X.; Shen, L.; Lang, J.; Li, J.; Wu, J.; Xu, P.; Duan, J. Automatic Primary Gross Tumor Volume Segmentation for Nasopharyngeal Carcinoma using ResSE-UNet. In Proceedings of the 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), Rochester, MN, USA, 28–30 July 2020; pp. 585–590. [Google Scholar]
  52. Wang, X.; Yang, G.; Zhang, Y.; Zhu, L.; Xue, X.; Zhang, B.; Cai, C.; Jin, H.; Zheng, J.; Wu, J.; et al. Automated delineation of nasopharynx gross tumor volume for nasopharyngeal carcinoma by plain CT combining contrast-enhanced CT using deep learning. J. Radiat. Res. Appl. Sci. 2020, 13, 568–577. [Google Scholar] [CrossRef]
  53. Men, K.; Chen, X.; Zhang, Y.; Zhang, T.; Dai, J.; Yi, J.; Li, Y. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images. Front. Oncol. 2017, 7, 315. [Google Scholar] [CrossRef]
  54. Huang, W.; Chan, K.L.; Zhou, J. Region-based nasopharyngeal carcinoma lesion segmentation from MRI using clustering- and classification-based methods with learning. J. Digit. Imaging 2013, 26, 472–482. [Google Scholar] [CrossRef]
  55. Kai-Wei, H.; Zhe-Yi, Z.; Qian, G.; Juan, Z.; Liu, C.; Ran, Y. Nasopharyngeal carcinoma segmentation via HMRF-EM with maximum entropy. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2968–2972. [Google Scholar] [CrossRef]
  56. Li, Q.; Xu, Y.; Chen, Z.; Liu, D.; Feng, S.T.; Law, M.; Ye, Y.; Huang, B. Tumor Segmentation in Contrast-Enhanced Magnetic Resonance Imaging for Nasopharyngeal Carcinoma: Deep Learning with Convolutional Neural Network. Biomed. Res. Int. 2018, 2018, 9128527. [Google Scholar] [CrossRef] [PubMed]
  57. Lin, L.; Dou, Q.; Jin, Y.M.; Zhou, G.Q.; Tang, Y.Q.; Chen, W.L.; Su, B.A.; Liu, F.; Tao, C.J.; Jiang, N.; et al. Deep Learning for Automated Contouring of Primary Tumor Volumes by MRI for Nasopharyngeal Carcinoma. Radiology 2019, 291, 677–686. [Google Scholar] [CrossRef]
  58. Guo, F.; Shi, C.; Li, X.; Wu, X.; Zhou, J.; Lv, J. Image segmentation of nasopharyngeal carcinoma using 3D CNN with long-range skip connection and multi-scale feature pyramid. Soft Comput. 2020, 24, 12671–12680. [Google Scholar] [CrossRef]
  59. Ye, Y.; Cai, Z.; Huang, B.; He, Y.; Zeng, P.; Zou, G.; Deng, W.; Chen, H.; Huang, B. Fully-Automated Segmentation of Nasopharyngeal Carcinoma on Dual-Sequence MRI Using Convolutional Neural Networks. Front. Oncol. 2020, 10, 166. [Google Scholar] [CrossRef] [PubMed]
  60. Luo, X.; Liao, W.; He, Y.; Tang, F.; Wu, M.; Shen, Y.; Huang, H.; Song, T.; Li, K.; Zhang, S.; et al. Deep learning-based accurate delineation of primary gross tumor volume of nasopharyngeal carcinoma on heterogeneous magnetic resonance imaging: A large-scale and multi-center study. Radiother. Oncol. 2023, 180, 109480. [Google Scholar] [CrossRef] [PubMed]
  61. Li, Y.; Peng, H.; Dan, T.; Hu, Y.; Tao, G.; Cai, H. Coarse-to-fine Nasopharyngeal Carcinoma Segmentation in MRI via Multi-stage Rendering. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Republic of Korea, 16–19 December 2020; pp. 623–628. [Google Scholar]
  62. Wong, L.M.; Ai, Q.Y.H.; Mo, F.K.F.; Poon, D.M.C.; King, A.D. Convolutional neural network in nasopharyngeal carcinoma: How good is automatic delineation for primary tumor on a non-contrast-enhanced fat-suppressed T2-weighted MRI? Jpn J. Radiol. 2021, 39, 571–579. [Google Scholar] [CrossRef]
  63. Liang, S.; Tang, F.; Huang, X.; Yang, K.; Zhong, T.; Hu, R.; Liu, S.; Yuan, X.; Zhang, Y. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur. Radiol. 2019, 29, 1961–1967. [Google Scholar] [CrossRef]
  64. Zhong, T.; Huang, X.; Tang, F.; Liang, S.; Deng, X.; Zhang, Y. Boosting-based Cascaded Convolutional Neural Networks for the Segmentation of CT Organs-at-risk in Nasopharyngeal Carcinoma. Med. Phys. 2019, 46, 5602–5611. [Google Scholar] [CrossRef]
  65. Peng, Y.; Liu, Y.; Shen, G.; Chen, Z.; Chen, M.; Miao, J.; Zhao, C.; Deng, J.; Qi, Z.; Deng, X. Improved accuracy of auto-segmentation of organs at risk in radiotherapy planning for nasopharyngeal carcinoma based on fully convolutional neural network deep learning. Oral. Oncol. 2023, 136, 106261. [Google Scholar] [CrossRef]
66. Zhao, W.; Zhang, D.; Mao, X. Application of Artificial Intelligence in Radiotherapy of Nasopharyngeal Carcinoma with Magnetic Resonance Imaging. J. Healthc. Eng. 2022, 2022, 4132989. [Google Scholar] [CrossRef]
  67. Zhuang, Y.; Xie, Y.; Wang, L.; Huang, S.; Chen, L.X.; Wang, Y. DVH Prediction for VMAT in NPC with GRU-RNN: An Improved Method by Considering Biological Effects. Biomed. Res. Int. 2021, 2021, 2043830. [Google Scholar] [CrossRef] [PubMed]
  68. Cao, W.; Zhuang, Y.; Chen, L.; Liu, X. Application of dose-volume histogram prediction in biologically related models for nasopharyngeal carcinomas treatment planning. Radiat. Oncol. 2020, 15, 216. [Google Scholar] [CrossRef] [PubMed]
  69. Zhuang, Y.; Han, J.; Chen, L.; Liu, X. Dose-volume histogram prediction in volumetric modulated arc therapy for nasopharyngeal carcinomas based on uniform-intensity radiation with equal angle intervals. Phys. Med. Biol. 2019, 64, 23NT03. [Google Scholar] [CrossRef] [PubMed]
  70. Yue, M.; Xue, X.; Wang, Z.; Lambo, R.L.; Zhao, W.; Xie, Y.; Cai, J.; Qin, W. Dose prediction via distance-guided deep learning: Initial development for nasopharyngeal carcinoma radiotherapy. Radiother. Oncol. 2022, 170, 198–204. [Google Scholar] [CrossRef] [PubMed]
  71. Sun, Z.; Xia, X.; Fan, J.; Zhao, J.; Zhang, K.; Wang, J.; Hu, W. A hybrid optimization strategy for deliverable intensity-modulated radiotherapy plan generation using deep learning-based dose prediction. Med. Phys. 2022, 49, 1344–1356. [Google Scholar] [CrossRef]
  72. Jiao, S.X.; Chen, L.X.; Zhu, J.H.; Wang, M.L.; Liu, X.W. Prediction of dose-volume histograms in nasopharyngeal cancer IMRT using geometric and dosimetric information. Phys. Med. Biol. 2019, 64, 23NT04. [Google Scholar] [CrossRef]
  73. Chen, X.; Men, K.; Zhu, J.; Yang, B.; Li, M.; Liu, Z.; Yan, X.; Yi, J.; Dai, J. DVHnet: A deep learning-based prediction of patient-specific dose volume histograms for radiotherapy planning. Med. Phys. 2021, 48, 2705–2713. [Google Scholar] [CrossRef]
  74. Zhang, B.; Lian, Z.; Zhong, L.; Zhang, X.; Dong, Y.; Chen, Q.; Zhang, L.; Mo, X.; Huang, W.; Yang, W.; et al. Machine-learning based MRI radiomics models for early detection of radiation-induced brain injury in nasopharyngeal carcinoma. BMC Cancer 2020, 20, 502. [Google Scholar] [CrossRef]
  75. Bin, X.; Zhu, C.; Tang, Y.; Li, R.; Ding, Q.; Xia, W.; Tang, Y.; Tang, X.; Yao, D.; Tang, A. Nomogram Based on Clinical and Radiomics Data for Predicting Radiation-induced Temporal Lobe Injury in Patients with Non-metastatic Stage T4 Nasopharyngeal Carcinoma. Clin. Oncol. (R Coll. Radiol.) 2022, 34, e482–e492. [Google Scholar] [CrossRef]
  76. Ren, W.; Liang, B.; Sun, C.; Wu, R.; Men, K.; Xu, Y.; Han, F.; Yi, J.; Qu, Y.; Dai, J. Dosiomics-based prediction of radiation-induced hypothyroidism in nasopharyngeal carcinoma patients. Phys. Med. 2021, 89, 219–225. [Google Scholar] [CrossRef]
  77. Chao, M.; El Naqa, I.; Bakst, R.L.; Lo, Y.C.; Penagaricano, J.A. Cluster model incorporating heterogeneous dose distribution of partial parotid irradiation for radiotherapy induced xerostomia prediction with machine learning methods. Acta Oncol. 2022, 61, 842–848. [Google Scholar] [CrossRef] [PubMed]
  78. Zhong, L.; Dong, D.; Fang, X.; Zhang, F.; Zhang, N.; Zhang, L.; Fang, M.; Jiang, W.; Liang, S.; Li, C.; et al. A deep learning-based radiomic nomogram for prognosis and treatment decision in advanced nasopharyngeal carcinoma: A multicentre study. EBioMedicine 2021, 70, 103522. [Google Scholar] [CrossRef] [PubMed]
  79. Zhao, X.; Liang, Y.J.; Zhang, X.; Wen, D.X.; Fan, W.; Tang, L.Q.; Dong, D.; Tian, J.; Mai, H.Q. Deep learning signatures reveal multiscale intratumor heterogeneity associated with biological functions and survival in recurrent nasopharyngeal carcinoma. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 2972–2982. [Google Scholar] [CrossRef]
  80. Jiang, R.; You, R.; Pei, X.Q.; Zou, X.; Zhang, M.X.; Wang, T.M.; Sun, R.; Luo, D.H.; Huang, P.Y.; Chen, Q.Y.; et al. Development of a ten-signature classifier using a support vector machine integrated approach to subdivide the M1 stage into M1a and M1b stages of nasopharyngeal carcinoma with synchronous metastases to better predict patients’ survival. Oncotarget 2016, 7, 3645–3657. [Google Scholar] [CrossRef]
  81. Amin, M.B.; Greene, F.L.; Edge, S.B.; Compton, C.C.; Gershenwald, J.E.; Brookland, R.K.; Meyer, L.; Gress, D.M.; Byrd, D.R.; Winchester, D.P. The Eighth Edition AJCC Cancer Staging Manual: Continuing to build a bridge from a population-based to a more “personalized” approach to cancer staging. CA Cancer J. Clin. 2017, 67, 93–99. [Google Scholar] [CrossRef]
  82. Cui, C.; Wang, S.; Zhou, J.; Dong, A.; Xie, F.; Li, H.; Liu, L. Machine Learning Analysis of Image Data Based on Detailed MR Image Reports for Nasopharyngeal Carcinoma Prognosis. Biomed. Res. Int. 2020, 2020, 8068913. [Google Scholar] [CrossRef] [PubMed]
  83. Zhong, L.Z.; Fang, X.L.; Dong, D.; Peng, H.; Fang, M.J.; Huang, C.L.; He, B.X.; Lin, L.; Ma, J.; Tang, L.L.; et al. A deep learning MR-based radiomic nomogram may predict survival for nasopharyngeal carcinoma patients with stage T3N1M0. Radiother. Oncol. 2020, 151, 1–9. [Google Scholar] [CrossRef]
  84. Zhuo, E.H.; Zhang, W.J.; Li, H.J.; Zhang, G.Y.; Jing, B.Z.; Zhou, J.; Cui, C.Y.; Chen, M.Y.; Sun, Y.; Liu, L.Z.; et al. Radiomics on multi-modalities MR sequences can subtype patients with non-metastatic nasopharyngeal carcinoma (NPC) into distinct survival subgroups. Eur. Radiol. 2019, 29, 5590–5599. [Google Scholar] [CrossRef]
  85. Du, R.; Lee, V.H.; Yuan, H.; Lam, K.O.; Pang, H.H.; Chen, Y.; Lam, E.Y.; Khong, P.L.; Lee, A.W.; Kwong, D.L.; et al. Radiomics Model to Predict Early Progression of Nonmetastatic Nasopharyngeal Carcinoma after Intensity Modulation Radiation Therapy: A Multicenter Study. Radiol. Artif. Intell. 2019, 1, e180075. [Google Scholar] [CrossRef]
  86. Li, S.; Wang, K.; Hou, Z.; Yang, J.; Ren, W.; Gao, S.; Meng, F.; Wu, P.; Liu, B.; Liu, J.; et al. Use of Radiomics Combined with Machine Learning Method in the Recurrence Patterns After Intensity-Modulated Radiotherapy for Nasopharyngeal Carcinoma: A Preliminary Study. Front. Oncol. 2018, 8, 648. [Google Scholar] [CrossRef]
87. Gonzalez, G.; Ash, S.Y.; Vegas-Sanchez-Ferrero, G.; Onieva Onieva, J.; Rahaghi, F.N.; Ross, J.C.; Diaz, A.; San Jose Estepar, R.; Washko, G.R.; for the COPDGene and ECLIPSE Investigators. Disease Staging and Prognosis in Smokers Using Deep Learning in Chest Computed Tomography. Am. J. Respir. Crit. Care Med. 2018, 197, 193–203. [Google Scholar] [CrossRef] [PubMed]
  88. Du, R.; Cao, P.; Han, L.; Ai, Q.; King, A.D.; Vardhanabhuti, V. Deep convolution neural network model for automatic risk assessment of patients with non-metastatic Nasopharyngeal carcinoma. arXiv 2019, arXiv:1907.11861. [Google Scholar]
  89. Qiang, M.; Li, C.; Sun, Y.; Sun, Y.; Ke, L.; Xie, C.; Zhang, T.; Zou, Y.; Qiu, W.; Gao, M.; et al. A Prognostic Predictive System Based on Deep Learning for Locoregionally Advanced Nasopharyngeal Carcinoma. J. Natl. Cancer Inst. 2021, 113, 606–615. [Google Scholar] [CrossRef] [PubMed]
  90. Zhang, L.; Wu, X.; Liu, J.; Zhang, B.; Mo, X.; Chen, Q.; Fang, J.; Wang, F.; Li, M.; Chen, Z.; et al. MRI-Based Deep-Learning Model for Distant Metastasis-Free Survival in Locoregionally Advanced Nasopharyngeal Carcinoma. J. Magn. Reson. Imaging 2021, 53, 167–178. [Google Scholar] [CrossRef] [PubMed]
  91. Jing, B.; Deng, Y.; Zhang, T.; Hou, D.; Li, B.; Qiang, M.; Liu, K.; Ke, L.; Li, T.; Sun, Y.; et al. Deep learning for risk prediction in patients with nasopharyngeal carcinoma using multi-parametric MRIs. Comput. Methods Programs Biomed. 2020, 197, 105684. [Google Scholar] [CrossRef] [PubMed]
  92. Chen, X.; Li, Y.; Li, X.; Cao, X.; Xiang, Y.; Xia, W.; Li, J.; Gao, M.; Sun, Y.; Liu, K.; et al. An interpretable machine learning prognostic system for locoregionally advanced nasopharyngeal carcinoma based on tumor burden features. Oral. Oncol. 2021, 118, 105335. [Google Scholar] [CrossRef]
93. Meng, M.; Gu, B.; Bi, L.; Song, S.; Feng, D.D.; Kim, J. DeepMTS: Deep Multi-task Learning for Survival Prediction in Patients with Advanced Nasopharyngeal Carcinoma Using Pretreatment PET/CT. IEEE J. Biomed. Health Inform. 2022, 26, 4497–4507. [Google Scholar] [CrossRef]
  94. Gu, B.; Meng, M.; Bi, L.; Kim, J.; Feng, D.D.; Song, S. Prediction of 5-year progression-free survival in advanced nasopharyngeal carcinoma with pretreatment PET/CT using multi-modality deep learning-based radiomics. Front. Oncol. 2022, 12, 899351. [Google Scholar] [CrossRef]
95. Zhang, F.; Zhong, L.Z.; Zhao, X.; Dong, D.; Yao, J.J.; Wang, S.Y.; Liu, Y.; Zhu, D.; Wang, Y.; Wang, G.J.; et al. A deep-learning-based prognostic nomogram integrating microscopic digital pathology and macroscopic magnetic resonance images in nasopharyngeal carcinoma: A multi-cohort study. Ther. Adv. Med. Oncol. 2020, 12, 1758835920971416. [Google Scholar] [CrossRef]
  96. Liu, K.; Xia, W.; Qiang, M.; Chen, X.; Liu, J.; Guo, X.; Lv, X. Deep learning pathological microscopic features in endemic nasopharyngeal cancer: Prognostic value and protentional role for individual induction chemotherapy. Cancer Med. 2020, 9, 1298–1306. [Google Scholar] [CrossRef]
  97. Chen, Y.; Wang, Z.; Li, H.; Li, Y. Integrative Analysis Identified a 6-miRNA Prognostic Signature in Nasopharyngeal Carcinoma. Front. Cell Dev. Biol. 2021, 9, 661105. [Google Scholar] [CrossRef] [PubMed]
  98. Zhao, S.; Dong, X.; Ni, X.; Li, L.; Lu, X.; Zhang, K.; Gao, Y. Exploration of a Novel Prognostic Risk Signature and Its Effect on the Immune Response in Nasopharyngeal Carcinoma. Front. Oncol. 2021, 11, 709931. [Google Scholar] [CrossRef] [PubMed]
  99. Zhang, Q.; Wu, G.; Yang, Q.; Dai, G.; Li, T.; Chen, P.; Li, J.; Huang, W. Survival rate prediction of nasopharyngeal carcinoma patients based on MRI and gene expression using a deep neural network. Cancer Sci. 2022, 144, 1596–1605. [Google Scholar] [CrossRef]
  100. Antoniadi, A.M.; Du, Y.; Guendouz, Y.; Wei, L.; Mazo, C.; Becker, B.A.; Mooney, C. Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review. Appl. Sci. 2021, 11, 5088. [Google Scholar] [CrossRef]
101. Pawlik, M.; Hutter, T.; Kocher, D.; Mann, W.; Augsten, N. A Link is not Enough—Reproducibility of Data. Datenbank-Spektrum 2019, 19, 107–115. [Google Scholar] [CrossRef] [PubMed]
  102. Hamamoto, R.; Suvarna, K.; Yamada, M.; Kobayashi, K.; Shinkai, N.; Miyake, M.; Takahashi, M.; Jinnai, S.; Shimoyama, R.; Sakai, A.; et al. Application of Artificial Intelligence Technology in Oncology: Towards the Establishment of Precision Medicine. Cancers 2020, 12, 3532. [Google Scholar] [CrossRef] [PubMed]
  103. van de Wiel, M.A.; Neerincx, M.; Buffart, T.E.; Sie, D.; Verheul, H.M. ShrinkBayes: A versatile R-package for analysis of count-based sequencing data in complex study designs. BMC Bioinform. 2014, 15, 116. [Google Scholar] [CrossRef]
  104. Keyang, C.; Ning, W.; Wenxi, S.; Yongzhao, Z. Research Advances in the Interpretability of Deep Learning. J. Comput. Res. Dev. 2020, 57, 1208–1217. [Google Scholar]
105. Lovis, C. Unlocking the Power of Artificial Intelligence and Big Data in Medicine. J. Med. Internet Res. 2019, 21, e16607. [Google Scholar] [CrossRef]
  106. Li, J.; Tian, Y.; Zhu, Y.; Zhou, T.; Li, J.; Ding, K.; Li, J. A multicenter random forest model for effective prognosis prediction in collaborative clinical research network. Artif. Intell. Med. 2020, 103, 101814. [Google Scholar] [CrossRef]
Figure 1. The application of AI in NPC diagnosis and treatment.
Figure 2. Sample images from various datasets: (a) endoscopic image (Mohammed et al., 2020 [26]); (b) whole slide image (Chuang et al., 2020 [31]); (c) CT image (Daoud et al., 2019 [48]); (d) MRI image (Guo et al., 2020 [58]); (e) PET image (Zhao et al., 2019 [43]); (f) CT-MR image (Wang et al., 2019 [38]); (g) CBCT-CT image (Li et al., 2019 [49]); (h) DVH image (Zhuang et al., 2021 [67]).
Figure 3. Artificial intelligence algorithms with the best performance in the papers included in our review.
Table 1. Inclusion and exclusion criteria of the study.

| Exclusion | Inclusion |
| --- | --- |
| Papers that were not written in English. | Journal articles published in the English language. |
| Full text of the document is not accessible on the internet. | Full-text papers that are accessible. |
| Relevant studies that did not use deep learning or machine learning for modeling. | Machine learning algorithms were used for modeling. |
| The information on the samples, the image data used, the modeling method or the evaluation method is not described. | Deep learning algorithms were used for modeling. |
| Conference papers, literature reviews and editorial materials that are not original research. | The samples, the image data used, the modeling method and the evaluation method are described in detail. |
Table 2. Summary of AI models for NPC diagnosis.

| Author, Year | Purpose | Algorithms | Dataset | Best Algorithm | Best Algorithm Performance |
| --- | --- | --- | --- | --- | --- |
| Wong et al., 2021 [23] | NPC early detection | CNN | 412 individuals | CNN | AUC: 0.960 |
| Ke et al., 2020 [24] | NPC detection and segmentation | SC-DenseNet | 4100 individuals | SC-DenseNet | Accuracy: 0.978 |
| Mohammed et al., 2018 [25] | NPC detection | ANN | 381 endoscopic images | ANN | Accuracy: 0.962 |
| Mohammed et al., 2020 [26] | NPC detection | ANN, region growing method | 249 endoscopic images | ANN | Precision: 0.957 |
| Abd Ghani et al., 2020 [27] | NPC detection | ANN, SVM, KNN | 381 endoscopic images | ANN | Accuracy: 0.941 |
| Li et al., 2018 [28] | NPC detection | Fully convolutional network | 27,536 biopsy-proven images of 7951 patients | Fully convolutional network | Overall accuracy: 0.887 |
| Xu et al., 2022 [29] | NPC diagnosis | DCNN | 4783 nasopharyngoscopy images of 671 patients | DCNN | AUC: 0.986 |
| Shu et al., 2021 [30] | NPC diagnosis and post-treatment follow-up | RS-CNN | 15,354 FP/HW in vivo Raman spectra of 418 subjects | RS-CNN | Overall accuracy: 0.821 |
| Chuang et al., 2020 [31] | NPC identification | CNN | 726 nasopharyngeal biopsies | CNN | AUC: 0.985 |
| Diao et al., 2020 [32] | NPC identification | Inception-v3 | 1970 whole slide images of 731 cases | Inception-v3 | AUC: 0.930 |

AI, artificial intelligence; NPC, nasopharyngeal carcinoma; CNN, convolutional neural network; AUC, area under the receiver operating characteristic curve; ANN, artificial neural network; SVM, support vector machine; KNN, k-nearest neighbors; DCNN, deep convolutional neural network; FP, fingerprint; HW, high-wavenumber; RS-CNN, Raman-specified convolutional neural network.
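Most studies in Table 2 report discrimination as AUC or accuracy. As a point of reference for these metrics, the short Python sketch below (illustrative only; the labels and scores are placeholders, not data from any reviewed study) shows how they are typically computed for a binary NPC-versus-benign classifier with scikit-learn:

```python
# Illustrative sketch: computing the AUC and accuracy values reported in
# Table 2 for a binary classifier. All labels/scores are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # 1 = NPC, 0 = benign
y_score = np.array([0.91, 0.20, 0.65, 0.83, 0.45, 0.12, 0.77, 0.55])

auc = roc_auc_score(y_true, y_score)                  # area under the ROC curve
acc = accuracy_score(y_true, (y_score >= 0.5).astype(int))  # accuracy at a 0.5 cutoff
print(f"AUC = {auc:.3f}, accuracy = {acc:.3f}")
```

An AUC of 0.5 corresponds to chance-level discrimination, so the values of 0.93–0.99 in Table 2 indicate near-perfect separation of malignant and benign cases on the respective test sets.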
Table 3. Summary of AI applications for NPC treatment.

| Author, Year | Purpose | Algorithms | Dataset | Best Algorithm | Best Algorithm Performance |
| --- | --- | --- | --- | --- | --- |
| Zhao et al., 2020 [33] | IC treatment response and survival prediction | SVM | Multi-MR images of 123 patients | SVM | C-index: 0.863 |
| Yang et al., 2022 [34] | IC treatment response prediction | CNN, Xception, VGG16, VGG19, InceptionV3, InceptionResNetV2 | Medical records of 297 patients | CNN | AUC: 0.811 |
| Peng et al., 2019 [35] | IC treatment response prediction | DCNN | PET-CT images of 707 patients | DCNN | C-index: 0.722 |
| Chen et al., 2021 [36] | Synthetic CT generation and tumor segmentation | CycleGAN-Resnet, CycleGAN-Unet | Planning kV-CT and MV-CT images of 270 patients | CycleGAN-Resnet | Improvement: CNR 184.0%, image uniformity 34.7%, SNR 199.0%; DSC: 0.790 |
| Li et al., 2019 [37] | Synthesized CT generation and dose calculation | DCNN | 70 CBCT/CT paired images | DCNN | MAE improved from (60, 120) to (6, 27) HU; PTVnx70 1%/1 mm gamma pass rate: 98.6% ± 2.9% |
| Wang et al., 2019 [38] | Synthetic CT generation | DCNN | CT/MRI images of 33 patients | DCNN | MAE: soft tissue 97 ± 13 HU; bone 357 ± 44 HU |
| Chen et al., 2022 [39] | Synthetic CT generation | DCNN | CT/MRI images of 206 patients | DCNN | 3 mm/3% gamma passing rates above 97.32% |
| Fitton et al., 2011 [40] | NPC delineation | “Snake” algorithm | CT-MR images of 5 patients | “Snake” algorithm | Reduced the average delineation time by 6 min per case |
| Ma et al., 2019 [41] | NPC segmentation | C-CNN, M-CNN, S-CNN | CT-MR images of 90 patients | C-CNN | PPV: CT image 0.714 ± 0.089; MR image 0.797 ± 0.109 |
| Chen et al., 2020 [42] | NPC segmentation | 3D-CNN, U-net, 3D U-net | MRI images of 149 patients | 3D-CNN | DSC: 0.724 |
| Zhao et al., 2019 [43] | NPC segmentation | Fully convolutional neural networks | PET-CT images of 30 patients | Fully convolutional neural networks | Mean Dice score: 0.875 |
| Chanapai et al., 2009 [44] | NPC segmentation | SOM technique | CT images of 131 patients | SOM technique | CR: 0.620; PM: 0.730 |
| Tatanun et al., 2010 [45] | NPC segmentation | Region growing technique | 97 CT images of 12 cases | Region growing technique | Accuracy: 0.951 |
| Chanapai et al., 2012 [46] | NPC region segmentation | Seeded region growing technique | 578 CT images of 31 patients | Seeded region growing technique | CR: 0.690; PM: 0.825 |
| Bai et al., 2021 [47] | GTV segmentation | ResNeXt-50 U-net | CT images of 60 patients | ResNeXt-50 U-net | DSC: 0.618 |
| Daoud et al., 2019 [48] | NPC segmentation | CNN, U-net | CT images of 70 patients | CNN | DSC: 0.910 |
| Li et al., 2019 [49] | NPC segmentation | U-net | CT images of 502 patients | U-net | DSC: 0.740 |
| Xue et al., 2020 [50] | CTVp1 segmentation | SI-net | 150 NPC patients | SI-net | DSC: 0.840 |
| Jin et al., 2020 [51] | GTV segmentation | ResSE-UNet | 1757 annotated CT slices of 90 patients | ResSE-UNet | DSC: 0.840 |
| Wang et al., 2020 [52] | GTV delineation | 3D U-net, 3D CNN, 2D DDNN | CT images and corresponding manually delineated targets of 205 patients | 3D U-net | DSC: 0.827 |
| Men et al., 2017 [53] | Target segmentation | DDNN, VGG-16 | 230 patients | DDNN | DSC: GTVnx 0.809; CTV 0.826 |
| Huang et al., 2013 [54] | NPC segmentation | Clustering- and classification-based methods with learning, SVM | 253 MRI slices | Clustering- and classification-based methods with learning | PPV: 0.9345; sensitivity: 0.9776 |
| Huang et al., 2015 [55] | NPC segmentation | Distance regularized level set evolution | MR images of 26 patients | Distance regularized level set evolution | CR: 0.913; PM: 91.840 |
| Li et al., 2018 [56] | NPC segmentation | CNN | MR images of 29 patients | CNN | DSC: 0.890 |
| Lin et al., 2019 [57] | NPC segmentation | 3D CNN | MR images of 1021 patients | 3D CNN | DSC: 0.790 |
| Guo et al., 2020 [58] | NPC segmentation | 3D-CNN, 3D U-net, V-net, DDnet, DeepLab-like, CNN-based | MRI images of 120 patients | 3D-CNN | DSC: 0.737 |
| Ye et al., 2020 [59] | NPC segmentation | Dense connectivity embedding U-net | MRI images of 44 patients | Dense connectivity embedding U-net | DSC: 0.870 |
| Luo et al., 2023 [60] | GTV delineation | Augmentation-invariant strategy, nnU-net | MRI images of 1057 patients | Augmentation-invariant strategy | DSC: 0.88 |
| Li et al., 2020 [61] | NPC segmentation | ResNet-101, U-net, Attention U-net, BASNet, DANet, Unet++, RefineNet | 2000 MRI slices of 596 patients | ResNet-101 | DSC: 0.703 |
| Wong et al., 2021 [62] | NPC delineation | U-net, CNN | Non-contrast-enhanced MRI of 195 patients | U-net | DSC: 0.710 |
| Liang et al., 2019 [63] | OARs detection and segmentation | CNN | CT images of 180 patients | CNN | DSC: 0.689–0.934 |
| Zhong et al., 2019 [64] | OARs segmentation | Boosting-based cascaded CNN, FCN, U-net | CT images of 140 patients | Boosting-based cascaded CNN | DSC: parotids 0.923; thyroids 0.923; optic nerves 0.893 |
| Peng et al., 2023 [65] | OARs segmentation | Fully convolutional neural network, U-net | CT images of 310 patients | Fully convolutional neural network | DSC: 0.8375 |
| Zhao et al., 2022 [66] | OARs segmentation | U-net | CT images of 147 patients | U-net | DSC: 0.86 |
| Zhuang et al., 2021 [67] | DVH prediction | GRU-RNN | 80 VMAT plans | GRU-RNN | Coefficient r: EUD 0.976; maximum dose 0.968 |
| Cao et al., 2020 [68] | DVH prediction | GRU-RNN | 100 VMAT plans | GRU-RNN | PTV70: CPs 70.71 ± 0.83; EPs 70.77 ± 0.28 |
| Zhuang et al., 2019 [69] | DVH prediction | GRU-RNN | 124 VMAT plans | GRU-RNN | PTV70: CPs 70.90 ± 0.54; EPs 71.40 ± 0.51 |
| Yue et al., 2022 [70] | Dose prediction | 3D U-net | Radiotherapy datasets of 161 patients | 3D U-net | GTVnx 3 mm/3% gamma pass rate: 95.445% |
| Sun et al., 2022 [71] | Dose prediction | U-net | 117 NPC patients | U-net | PTV70.4 D95 (Gy): 70.4 ± 0.0 |
| Jiao et al., 2019 [72] | DVH prediction | GRNN | 106 nine-field IMRT plans | GRNN | R²: brainstem 0.98 ± 0.02; spinal cord 0.98 ± 0.02 |
| Chen et al., 2021 [73] | OARs DVH prediction | CNN | 180 cases | CNN | D2% (Gy): brainstem PRV 0.06 ± 4.31; spinal cord PRV −0.69 ± 1.77 |
| Zhang et al., 2020 [74] | RTLI early detection | RF | MR images of 242 patients | RF | AUC: 0.83 |
| Bin et al., 2022 [75] | RTLI prediction | SVM, RF | 98 stage T4/N0–3/M0 patients | SVM | C-index: 0.82 |
| Ren et al., 2021 [76] | Radiation-induced hypothyroidism prediction | LR, SVM, RF, KNN | 145 patients | LR | AUC: 0.70 |
| Chao et al., 2022 [77] | Radiotherapy-induced xerostomia prediction | SVM, KNN, RF | 155 HNC patients | KNN | Mean accuracy: 0.68–0.70 |
| Zhong et al., 2021 [78] | Treatment decision | SE-ResNet | MRI images of 638 stage T3N1M0 patients | SE-ResNet | HR: 0.17 and 6.24 |
| Zhao et al., 2022 [79] | Risk stratification and survival prediction | Light-weighted DCNN | PET-CT images and OS of 420 patients | Light-weighted DCNN | C-index: 0.732 |
| Jiang et al., 2016 [80] | Prognostic classifier for NPC patients with synchronous metastases | SVM | Hematological markers and clinical characteristics of 347 patients | SVM | HR: 3.45 |
| Cui et al., 2020 [82] | NPC classification system | Ridge, Lasso | MR images of 792 patients | Ridge | AUC: OS 0.796; LRFS 0.721 |

AI, artificial intelligence; NPC, nasopharyngeal carcinoma; IC, induction chemotherapy; MR, magnetic resonance; SVM, support vector machine; CNN, convolutional neural network; AUC, area under the receiver operating characteristic curve; PET-CT, positron emission tomography with computed tomography; DCNN, deep convolutional neural network; CT, computed tomography; kV-CT, kilovoltage computed tomography; MV-CT, megavoltage computed tomography; CNR, contrast-to-noise ratio; SNR, signal-to-noise ratio; DSC, Dice similarity coefficient; CBCT, cone-beam computed tomography; MAE, mean absolute error; HU, Hounsfield units; PTVnx70, planning target volume of the nasopharynx prescribed 70 Gy; MRI, magnetic resonance imaging; C-CNN, combined convolutional neural network; M-CNN, multi-modality convolutional neural network; S-CNN, single-modality convolutional neural network; PPV, positive predictive value; SOM, self-organizing map; CR, corresponding ratio; PM, perfect match; GTV, gross tumor volume; LR, logistic regression; KNN, k-nearest neighbor; CTVp1, primary tumor clinical target volume; SI-net, sequential and iterative U-net; ResSE-UNet, residual squeeze-and-excitation U-net; DDNN, deep deconvolutional neural network; CTV, clinical target volume; OARs, organs at risk; DVH, dose-volume histogram; VMAT, volumetric modulated arc therapy; GRU-RNN, gated recurrent unit-based recurrent neural network; EUD, equivalent uniform dose; CPs, clinical plans; EPs, experimental plans; IMRT, intensity-modulated radiation therapy; GRNN, generalized regression neural network; PRV, planning organ at risk volume; RTLI, radiation-induced temporal lobe injury; RF, random forest; HNC, head and neck cancer; HR, hazard ratio; OS, overall survival; LRFS, local-regional relapse-free survival.
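The dominant metric for the segmentation and delineation studies in Table 3 is the Dice similarity coefficient (DSC), which scores the overlap between a predicted mask and the manually delineated ground truth. A minimal NumPy sketch, assuming binary masks (the example masks are synthetic placeholders, not contours from any reviewed study):

```python
# Illustrative sketch of the Dice similarity coefficient (DSC) used to
# evaluate the segmentation models in Table 3. Masks are placeholders.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|P ∩ T| / (|P| + |T|) for binary masks of any shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True    # predicted GTV
truth = np.zeros((64, 64), dtype=bool); truth[22:42, 22:42] = True  # manual contour
print(f"DSC = {dice_coefficient(pred, truth):.3f}")
```

A DSC of 1.0 means perfect overlap and 0 means none, which puts the reported values of roughly 0.6–0.9 in context.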
Table 4. Summary of AI models for NPC prognosis.

| Author, Year | Purpose | Algorithms | Dataset | Best Algorithm | Best Algorithm Performance |
| --- | --- | --- | --- | --- | --- |
| Zhong et al., 2020 [83] | Survival prediction | DCNN | MRI images of 638 stage T3N1M0 patients | DCNN | C-index: 0.788 |
| Zhuo et al., 2019 [84] | Survival stratification | SVM | 658 non-metastatic patients | SVM | C-index: 0.814 |
| Du et al., 2019 [85] | Early progression prediction | SVM | MRI images of 277 nonmetastatic patients | SVM | AUC: 0.80 |
| Li et al., 2018 [86] | Recurrence prediction | ANN, KNN, SVM | 306 patients | ANN | Accuracy: 0.812 |
| Qiang et al., 2019 [87] | Disease-free survival prediction | 3D DenseNet | MRI images of 1636 nonmetastatic patients | 3D DenseNet | HR: 0.62 |
| Du et al., 2019 [88] | Risk assessment | DCNN | MRI images of 596 nonmetastatic patients | DCNN | AUC: 0.828 |
| Qiang et al., 2021 [89] | Risk stratification | 3D CNN | MR images and clinical data of 3444 patients | 3D CNN | C-index: 0.776 |
| Zhang et al., 2021 [90] | DMFS prediction and treatment decision | DCNN | 233 patients | DCNN | AUC: 0.796 |
| Jing et al., 2020 [91] | Disease progression prediction | MDSN, BoostCI, LASSO-COX | Multi-parametric MRI images of 1417 patients | MDSN | C-index: 0.651 |
| Chen et al., 2021 [92] | Distant metastasis prediction | XGBoost | MRI images of 1643 patients | XGBoost | C-index: 0.760 |
| Meng et al., 2022 [93] | Joint survival prediction and tumor segmentation | 3D end-to-end DeepMTS, LASSO-COX, DeepSurv, 2D CNN-based survival, 3D deep survival network | PET-CT images and clinical data of 193 patients | 3D end-to-end DeepMTS | C-index: 0.722; DSC: 0.760 |
| Gu et al., 2022 [94] | 5-year PFS prediction | 3D CNN | PET/CT images of 257 patients | 3D CNN | AUC: 0.823 |
| Zhang et al., 2020 [95] | Prognosis prediction | DCNN | MRI images and biopsy specimen whole-slide images of 220 patients | DCNN | C-index: 0.834 |
| Liu et al., 2020 [96] | Prognosis prediction | DeepSurv neural network | H&E-stained slides of 1229 patients | DeepSurv neural network | C-index: 0.723 |
| Chen et al., 2021 [97] | Prognosis prediction | Ridge regression, elastic net | miRNA expression profiles and clinical data of 612 patients | Ridge regression, elastic net | 5-year OS AUC: 0.70 |
| Zhao et al., 2021 [98] | Prognosis prediction | RF | RNA-Seq data of 60 tumor biopsies | RF | AUC: OS 0.893; PFS 0.86 |
| Zhang et al., 2022 [99] | Prognosis prediction | DNN | MRI images and gene expression profiles of 151 patients | DNN | AUC: 0.88 |

AI, artificial intelligence; NPC, nasopharyngeal carcinoma; MRI, magnetic resonance imaging; DCNN, deep convolutional neural network; SVM, support vector machine; KNN, k-nearest neighbor; AUC, area under the receiver operating characteristic curve; HR, hazard ratio; CNN, convolutional neural network; DMFS, distant metastasis-free survival; MDSN, multi-modality deep survival network; PET-CT, positron emission tomography with computed tomography; DeepMTS, deep multi-task survival network; DSC, Dice similarity coefficient; PFS, progression-free survival; H&E, hematoxylin-eosin; OS, overall survival; RF, random forest; DNN, deep neural network.
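Most prognostic models in Table 4 are evaluated with Harrell's concordance index (C-index): the proportion of usable patient pairs in which the model assigns the higher risk score to the patient who experiences the event earlier, with censored pairs excluded. A minimal from-scratch sketch (the follow-up times, event indicators and risk scores are toy values, not data from the reviewed cohorts):

```python
# Illustrative sketch of Harrell's C-index, the survival metric in Table 4.
# All input values are toy placeholders.
import numpy as np

def concordance_index(time: np.ndarray, event: np.ndarray, risk: np.ndarray) -> float:
    concordant, ties, usable = 0.0, 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # A pair is usable only if patient i has an observed event
            # strictly before patient j's follow-up time.
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1          # higher risk fails earlier: concordant
                elif risk[i] == risk[j]:
                    ties += 0.5              # tied risks count half
    return (concordant + ties) / usable

time = np.array([5.0, 12.0, 7.0, 20.0])   # follow-up in months
event = np.array([1, 0, 1, 1])            # 1 = event observed, 0 = censored
risk = np.array([0.9, 0.2, 0.7, 0.1])     # model risk scores
print(f"C-index = {concordance_index(time, event, risk):.3f}")
```

A C-index of 0.5 is chance-level ranking and 1.0 is perfect, which frames the reported values of roughly 0.65–0.83.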
Table 5. Summary of training and testing methodologies.

| Author, Year | Publicly Available | Balanced Classes | Validation Strategy | Data Handling Strategy |
| --- | --- | --- | --- | --- |
| Wong et al., 2021 [23] | No | No | Cross validation | - |
| Ke et al., 2020 [24] | No | No | Validation set | Excluding |
| Mohammed et al., 2018 [25] | No | No | - | - |
| Mohammed et al., 2020 [26] | No | No | Cross validation | - |
| Abd Ghani et al., 2020 [27] | No | No | - | - |
| Li et al., 2018 [28] | No | No | Validation set | Excluding |
| Xu et al., 2022 [29] | No | No | Cross validation | - |
| Shu et al., 2021 [30] | No | No | Validation set | - |
| Chuang et al., 2020 [31] | No | No | Validation set | - |
| Diao et al., 2020 [32] | No | No | Validation set | - |
| Zhao et al., 2020 [33] | No | No | Validation set | Excluding |
| Yang et al., 2022 [34] | No | No | Validation set | Excluding |
| Peng et al., 2019 [35] | No | No | Validation set | Excluding |
| Chen et al., 2021 [36] | No | No | Validation set | Excluding |
| Li et al., 2019 [37] | No | No | Validation set | Excluding |
| Wang et al., 2019 [38] | No | No | Validation set | Excluding |
| Chen et al., 2022 [39] | No | No | - | Excluding |
| Fitton et al., 2011 [40] | No | No | - | - |
| Ma et al., 2019 [41] | No | No | - | Excluding |
| Chen et al., 2020 [42] | No | No | Cross validation | Excluding |
| Zhao et al., 2019 [43] | No | Yes | Cross validation | Excluding |
| Chanapai et al., 2009 [44] | No | No | Validation set | Excluding |
| Tatanun et al., 2010 [45] | No | No | - | Excluding |
| Chanapai et al., 2012 [46] | No | No | Validation set | Excluding |
| Bai et al., 2021 [47] | No | No | Cross validation | Excluding |
| Daoud et al., 2019 [48] | No | No | Cross validation | Excluding |
| Li et al., 2019 [49] | No | No | Validation set | Excluding |
| Xue et al., 2020 [50] | No | No | Validation set | Excluding |
| Jin et al., 2020 [51] | No | No | Validation set | Excluding |
| Wang et al., 2020 [52] | No | No | Cross validation | Excluding |
| Men et al., 2017 [53] | No | No | Validation set | Excluding |
| Huang et al., 2013 [54] | No | No | Validation set | Excluding |
| Huang et al., 2015 [55] | No | No | - | Excluding |
| Li et al., 2018 [56] | No | No | Cross validation | Excluding |
| Lin et al., 2019 [57] | No | No | Validation set | Excluding |
| Guo et al., 2020 [58] | No | No | Cross validation | Excluding |
| Ye et al., 2020 [59] | No | No | Cross validation | Excluding |
| Luo et al., 2023 [60] | No | No | Validation set | Excluding |
| Li et al., 2020 [61] | No | No | Validation set | Excluding |
| Wong et al., 2021 [62] | No | No | Cross validation | Excluding |
| Liang et al., 2019 [63] | No | No | - | Excluding |
| Zhong et al., 2019 [64] | No | No | Validation set | Excluding |
| Peng et al., 2023 [65] | No | No | Cross validation | Excluding |
| Zhao et al., 2022 [66] | No | No | Cross validation | Excluding |
| Zhuang et al., 2021 [67] | No | No | Validation set | Excluding |
| Cao et al., 2020 [68] | No | No | Validation set | Excluding |
| Zhuang et al., 2019 [69] | No | No | Validation set | Excluding |
| Yue et al., 2022 [70] | No | No | Validation set | Excluding |
| Sun et al., 2022 [71] | No | No | Validation set | Excluding |
| Jiao et al., 2019 [72] | No | No | Validation set | Excluding |
| Chen et al., 2021 [73] | No | No | Validation set | Excluding |
| Zhang et al., 2020 [74] | No | No | Validation set | Excluding |
| Bin et al., 2022 [75] | No | No | Cross validation | Excluding |
| Ren et al., 2021 [76] | No | No | Cross validation | Excluding |
| Chao et al., 2022 [77] | No | No | Cross validation | Excluding |
| Zhong et al., 2021 [78] | No | No | Validation set | Excluding |
| Zhao et al., 2022 [79] | No | No | Validation set | Excluding |
| Jiang et al., 2016 [80] | No | No | Validation set | Excluding |
| Cui et al., 2020 [82] | No | No | Cross validation | Excluding |
| Zhong et al., 2020 [83] | No | No | Cross validation | Excluding |
| Zhuo et al., 2019 [84] | No | No | Validation set | Excluding |
| Du et al., 2019 [85] | No | No | Validation set | Excluding |
| Li et al., 2018 [86] | No | No | Cross validation | Excluding |
| Qiang et al., 2019 [87] | No | No | Validation set | Excluding |
| Du et al., 2019 [88] | No | No | Validation set | Excluding |
| Qiang et al., 2021 [89] | No | No | Validation set | Excluding |
| Zhang et al., 2021 [90] | No | No | Validation set | Excluding |
| Jing et al., 2020 [91] | No | No | Validation set | Excluding |
| Chen et al., 2021 [92] | No | No | Validation set | Excluding |
| Meng et al., 2022 [93] | No | No | Cross validation | Excluding |
| Gu et al., 2022 [94] | No | No | Validation set | Excluding |
| Zhang et al., 2020 [95] | No | No | Validation set | Excluding |
| Liu et al., 2020 [96] | No | No | Validation set | Excluding |
| Chen et al., 2021 [97] | No | No | Validation set | Excluding |
| Zhao et al., 2021 [98] | No | No | Validation set | Excluding |
| Zhang et al., 2022 [99] | No | No | - | Excluding |
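Table 5 distinguishes two validation strategies: a single held-out validation set and cross-validation. The minimal scikit-learn sketch below contrasts the two on synthetic data standing in for a radiomic feature matrix (all variables and values are placeholders, not data from the reviewed studies):

```python
# Illustrative sketch contrasting the two validation strategies in Table 5:
# a single held-out validation set versus k-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                        # 200 "patients", 10 features
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)  # synthetic binary outcome

# Strategy 1: hold-out validation set (e.g., an 80/20 split).
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
holdout_acc = LogisticRegression().fit(X_tr, y_tr).score(X_va, y_va)

# Strategy 2: 5-fold cross-validation over the full cohort.
cv_acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()

print(f"hold-out accuracy = {holdout_acc:.3f}, 5-fold CV accuracy = {cv_acc:.3f}")
```

Cross-validation lets every patient serve in both training and testing across folds, which is one reason it is often preferred for the small cohorts summarized above.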
Back to TopTop