Article

Intelligent Glioma Grading Based on Deep Transfer Learning of MRI Radiomic Features

Chung-Ming Lo, Yu-Chih Chen, Rui-Cian Weng and Kevin Li-Chun Hsieh *

1. Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 10675, Taiwan
2. Graduate Institute of Library, Information and Archival Studies, National Chengchi University, Taipei 11605, Taiwan
3. Department of Engineering Science and Ocean Engineering, National Taiwan University, Taipei 10617, Taiwan
4. Taiwan Instrument Research Institute, National Applied Research Laboratories, Taipei 10622, Taiwan
5. Department of Medical Imaging, Taipei Medical University Hospital, Taipei 10675, Taiwan
6. Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei 10675, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(22), 4926; https://doi.org/10.3390/app9224926
Submission received: 7 September 2019 / Revised: 13 November 2019 / Accepted: 14 November 2019 / Published: 16 November 2019
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

Abstract

According to the World Health Organization (WHO) classification of central nervous system tumors, diffuse gliomas are classified into grades 2, 3, and 4 in accordance with their aggressiveness. To quantitatively evaluate tumor malignancy from brain magnetic resonance imaging (MRI), this study proposed a computer-aided diagnosis (CAD) system based on a deep convolutional neural network (DCNN). Gliomas from a multi-center database (The Cancer Imaging Archive), comprising 30 grade 2, 43 grade 3, and 57 grade 4 gliomas, were used to train and evaluate the proposed CAD system. Using transfer learning to fine-tune AlexNet, a DCNN, its internal layers and parameters trained on a million natural images were transferred to learn how to differentiate the acquired gliomas. Data augmentation was also implemented to increase possible spatial and geometric variations for a better training model. The transferred DCNN achieved a mean accuracy of 97.9% with a standard deviation of ±1% and a mean area under the receiver operating characteristic curve (Az) of 0.9991 ± 0. This performance was superior to that of handcrafted image features, of the DCNN without pretrained features (mean accuracy 61.42% ± 7%; mean Az 0.8222 ± 0.07), and of the DCNN without data augmentation, which performed worst (mean accuracy 59.85% ± 16%; mean Az 0.7896 ± 0.18). The DCNN with pretrained features and data augmentation can accurately and efficiently classify grade 2, 3, and 4 gliomas. The high accuracy is promising for providing diagnostic suggestions to radiologists in the clinic.

1. Introduction

Diffuse gliomas, the most common primary central nervous system (CNS) neoplasms, are composed of tumor cells that display glial differentiation. In the World Health Organization (WHO) classification of tumors of the CNS [1,2], diffuse gliomas are graded according to their degree of malignancy into WHO grades 2 to 4. Patients with diffuse gliomas of lower grades (grades 2 and 3) have more favorable prognoses [2,3]. In contrast, glioblastoma multiforme (GBM) is the most aggressive tumor type (WHO grade 4), with a dismal prognosis despite advances in various aspects of its clinical management [4]. Since therapeutic strategies for the various grades are not identical [5], distinguishing the different grades of diffuse gliomas is a critical issue in clinical settings. Determining the tumor grade relies on different pathological features, including mitotic activity, cytological atypia, neoangiogenesis, and tumor necrosis. However, since the definitions are semiquantitative and subjective [6,7], histopathological analyses sometimes result in ambiguity in glioma grading. Moreover, previous reports revealed that heterogeneous expression of cellular features may result in misgrading in up to one-third of cases with unguided surgical tissue sampling [7,8,9,10,11].
With the development of medical imaging technologies, magnetic resonance (MR) imaging (MRI) is now the most commonly used modality to evaluate the malignancy of brain tumors [12,13]. Growing evidence has revealed the feasibility of using MRI features to probe underlying pathological subtypes, suggesting their potential application in differentiating tumor molecular profiles based on imaging traits [14]. In addition to conventional sequences, several physiological MR sequences, including diffusion-weighted imaging (DWI), perfusion-weighted imaging (PWI), and MR spectroscopy (MRS), have also been applied to find physiologically meaningful signals within the tumor, thus helping to evaluate heterogeneous patterns of cell compositions within tumor tissues and noninvasively differentiate gliomas of different degrees of malignancy [15,16,17,18,19]. A previous study proposed that MRI scans are highly specific for diagnosing brain stem gliomas and can even replace risky biopsies before radiotherapy in most cases [20]. To improve the clinical care of CNS gliomas, getting the most out of MRI information is very important.
Various computer-aided diagnosis (CAD) systems have been used to extract relevant diagnostic image features from X-ray radiography, ultrasonography, and MRI to evaluate tumor types, grades, and subsequent treatments [21,22,23,24]. Certain image features, including intensity, morphology, and textural features, are handcrafted according to clinical experience. For example, shape features encode the observation, described by experienced radiologists, that malignant tumors tend to be aggressive and have irregular shapes. Designing such handcrafted features not only depends on the interpretive skills of physicians and computer scientists, but is also limited by the experts' available knowledge.
Regarding image interpretation, deep learning, a recent artificial intelligence technique, uses automatic convolutions to extract enormous numbers of edge and object features to recognize the underlying characteristic patterns in images [25,26,27]. Human intervention with domain-specific knowledge is thus minimized by convolution layers using hierarchical feature representations. Deep convolutional neural networks (DCNNs) have accordingly been successfully applied to object recognition in natural images after being trained on large amounts of training data [28]. However, the use of DCNNs in clinical decision making may be restricted by limited private medical data. Nevertheless, recent studies have used DCNNs or machine learning techniques to classify gliomas. Yang et al. used a DCNN to classify 113 gliomas and achieved good accuracy [29]. Lotan et al. described machine learning techniques for classifying tumors based on features extracted from image segmentation [30].
This study addressed the issue of limited data by using transfer learning, which transfers pretrained weights obtained from millions of natural images (i.e., ImageNet) to acquire substantial image features [25,27] and accelerate the training process [31]. In addition, data augmentation was applied to increase the quantity and diversity of the training data [27]. Conventional augmentations, including translation, flipping, cropping, scaling, and rotation, are common sampling methods in general use. For the glioma MRI images considered here, AutoAugment, which automatically searches for an optimal augmentation policy on one dataset that can then be transferred to other datasets [32], was also implemented to generate a customized augmentation dataset. Given the proposed transfer learning and data augmentation, the developed CAD system can be applied to various medical image diagnostic problems.

2. Materials and Methods

2.1. MRI Database

The National Cancer Institute funded The Cancer Imaging Archive (TCIA), an open-access database containing brain MR images that complies with all applicable laws, regulations, and policies to protect human subjects, including all necessary approvals, human subject assurances, informed consent documents, and institutional review board approvals [33]. The MRI database acquired from TCIA was generated at five institutes before any operative procedure: Henry Ford Hospital, Thomas Jefferson University Hospital, Case Western Hospital, Emory University, and Fondazione IRCCS Istituto Neurologico C. Besta. The database comprised 30 grade 2, 43 grade 3, and 57 grade 4 gliomas; representative examples are shown in Figure 1 to illustrate the tumor appearances on MR images.

2.2. Image Analysis

A board-certified neuroradiologist (K.H., with 13 years of experience), blinded to the grading information, selected the most representative 2D image from the contrast-enhanced axial T1-weighted MRI (T1WI). The intensity distributions among images were normalized to the gray-level pixel depth, i.e., 8 bits (0–255). After normalization, contrast-enhanced tumor areas were delineated using OsiriX software (Pixmeo, Geneva, Switzerland). Pixels enclosed in the delineated tumor contour formed the input region of interest for the subsequent tissue characterization. Figure 2 shows the tumor areas of the examples in Figure 1. As a standard preprocessing step, image resolutions were normalized to 227 × 227 pixels before the dataset was fed into the DCNN.
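As a concrete illustration, the following sketch performs the two normalization steps just described: scaling of the ROI intensities to an 8-bit range and resizing to 227 × 227 pixels. The paper does not name its preprocessing tools (only the OsiriX delineation is specified), so NumPy and Pillow, as well as the per-image min-max scaling rule, are assumptions here.

```python
# Hypothetical preprocessing sketch: per-image min-max scaling to 8-bit
# gray levels and resizing to AlexNet's 227 x 227 input size. Library
# choice and the exact normalization rule are assumptions.
import numpy as np
from PIL import Image

def preprocess_roi(roi: np.ndarray, size: int = 227) -> np.ndarray:
    """Normalize a tumor ROI to 0-255 gray levels and resize it."""
    roi = roi.astype(np.float64)
    lo, hi = roi.min(), roi.max()
    # Map the intensity range onto 0-255 (constant ROIs map to zero).
    scaled = np.zeros_like(roi) if hi == lo else (roi - lo) / (hi - lo) * 255.0
    img = Image.fromarray(scaled.astype(np.uint8))
    return np.asarray(img.resize((size, size), Image.BILINEAR))
```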

2.3. Transfer Learning

DCNNs based on hierarchical convolution layers have overcome various classic computer vision challenges with substantial improvements in areas such as image classification [28], image segmentation [34], and object recognition [35]. AlexNet was the first deep neural network to achieve dramatically improved accuracy compared to previous methodologies in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [28]. Conventional image classification methods use handcrafted image features to quantify intuitive and easily observed characteristic patterns for differentiation. As the amount of data increases, however, its diversity can exceed what humans can anticipate when manually designing image features. By contrast, AlexNet, as a DCNN, exploits data diversity and extracts arbitrary image features, from edges to objects, to architecturally establish a model for a specified classification task. Training such a network from scratch requires an enormous dataset; collecting specific image data of sufficient quantity and quality is challenging, and training on millions of images is also time-consuming. Alternatively, high performance is retained by transfer learning, which transfers knowledge about object compositions learned from an enormous dataset such as ImageNet to a specific task with a smaller amount of data [25,27,31].
In transfer learning [36], the internal layers of the original network are regarded as feature extractors, while the final layers used to learn specific features of the source task are replaced by adaptation layers trained on the target task (Figure 3). The target task in this study was grading three levels of gliomas on brain MR images, so the final fully connected layer of the pretrained AlexNet model, originally sized for 1000 object classes, was replaced by a three-output layer for grades 2, 3, and 4, followed by a new classification layer.
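The layer replacement can be made concrete with a short sketch. The paper does not state its deep learning framework, so torchvision's AlexNet is an assumption; the three-class output (grades 2, 3, and 4) follows the text.

```python
# Sketch of the transfer-learning setup described above, assuming PyTorch/
# torchvision (the paper's actual framework is not stated).
import torch.nn as nn
from torchvision import models

model = models.alexnet(pretrained=True)  # convolutional weights from ImageNet
# Replace the final 1000-way fully connected layer with a 3-way output for
# grade 2, 3, and 4 gliomas; the softmax step is applied inside the loss
# (nn.CrossEntropyLoss) during fine-tuning.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)
# Fine-tuning then proceeds on the glioma images; whether the transferred
# convolutional layers are frozen or merely trained at a small learning
# rate is a design choice the paper leaves open.
```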

2.4. Data Augmentation

Transfer learning uses features extracted from big data for pretraining. These features are oriented toward the source task, such as object recognition in ImageNet, and might not exhaustively describe the target task. To extract further diversity and characteristics from the target images, AutoAugment [32] was also applied to enhance the generalization ability and reduce overfitting. AutoAugment has already been used on the CIFAR-10, SVHN, and ImageNet datasets to automatically determine the best augmentation policies for these imaging data, that is, the most appropriate combinations of image operations for generating more data. In this experiment, the SVHN policy was transferred to the target task of examining brain MRI images, because the ImageNet policy focuses on color transformations that do not suit the collected grayscale images, and the CIFAR-10 policy focuses on translation, which can push the tumor beyond the image boundary and leave only half of the target. The SVHN policy consists of 25 subpolicies, each comprising two operations applied to an image; each operation has two hyperparameters: the probability of applying the operation and its magnitude. For example, an operation such as ShearX has a magnitude range of [−0.3, 0.3] discretized into 10 values, so (ShearX, 0.9, 7) is applied with a probability of 0.9 and, when applied, has a magnitude of 7 out of 10. Unlike the original policy, the probabilities of all subpolicies were ignored in this experiment to generate a fixed set of images; the two operations in each subpolicy were instead expanded into three variants: operation 1 alone, operation 2 alone, and operations 1 and 2 in sequence. Duplicated single operations were then removed, leaving 56 subpolicies that, together with the original image, expanded the dataset 57-fold. Figure 4 presents augmented images for an example grade 4 glioma.
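The fixed-policy expansion can be sketched as follows. The two operations shown are a hypothetical excerpt, not the actual 25-subpolicy SVHN policy from [32], and probabilities are ignored so every listed operation is always applied, as described above.

```python
# Hypothetical sketch of the deterministic subpolicy expansion: for each
# two-operation subpolicy, emit op1 alone, op2 alone, and op1 followed by
# op2. The operations below are illustrative stand-ins for the SVHN policy.
from PIL import Image, ImageOps

def shear_x(img, magnitude):
    # Map a discrete magnitude (0-9) onto a shear level up to 0.3
    # (the [-0.3, 0.3] range from the text; sign handling omitted).
    level = magnitude / 9.0 * 0.3
    return img.transform(img.size, Image.AFFINE, (1, level, 0, 0, 1, 0))

def invert(img, _magnitude):
    return ImageOps.invert(img)

SUBPOLICIES = [((shear_x, 7), (invert, 0))]  # hypothetical excerpt

def expand(img):
    out = [img]  # keep the original, hence the 57-fold dataset overall
    for (op1, m1), (op2, m2) in SUBPOLICIES:
        out += [op1(img, m1), op2(img, m2), op2(op1(img, m1), m2)]
    return out
```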

2.5. Ten-Fold Cross-Validation

Ten-fold cross-validation was used for model validation and assessment. The image dataset was randomly partitioned into 10 equal subsamples. In each cross-validation round, one subsample was held out as the test set and the remaining nine were used as the training set. The process was repeated 10 times so that every subsample was used exactly once as a test set. The mean and standard deviation (SD) of the 10 test results were calculated as an estimate of the model accuracy. Ten-fold cross-validation is widely used to evaluate the generalization ability of models trained on limited datasets [37].
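A minimal sketch of this loop, assuming scikit-learn for the partitioning (the paper does not name its tooling), is shown below.

```python
# Sketch of 10-fold cross-validation; KFold reproduces the random equal
# partitioning described in the text (stratified splitting would be a
# further assumption). train_and_score is any user-supplied callable
# that fits a model and returns test accuracy.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(images, grades, train_and_score, n_splits=10):
    """Return the mean and SD of test accuracy over the folds."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = [train_and_score(images[tr], grades[tr], images[te], grades[te])
              for tr, te in kf.split(images)]
    return float(np.mean(scores)), float(np.std(scores))
```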

3. Results

In the training process, a learning rate that is too low leads to time-consuming training and slow convergence, while an excessively high learning rate might cause a suboptimal result or divergence. An initial learning rate of 0.001 and a maximum of 20 epochs were adopted to achieve nearly 100% training accuracy, as shown in Figure 5, which plots the accuracy of the training set (blue), the accuracy of the validation set (black), and the loss (orange). As the figure shows, the network converged at the fifth epoch and was stopped at the eighth epoch by the criterion of no further decrease in loss. Against the biopsy-proven results, the performance of the prediction model was presented as the accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (Az), which illustrates the tradeoff between sensitivity and specificity; these were computed using SPSS software (version 16 for Windows; SPSS, Chicago, IL, USA). Over the 10-fold cross-validation, the transferred DCNN achieved a mean accuracy of 97.9% with an SD of ±1% and a mean Az of 0.9991 ± 0, as illustrated in Figure 6. In detail, the classifier differentiating grade 2 gliomas from the others achieved an accuracy of 98.7%, a sensitivity of 96.9%, and a specificity of 99.2%; the classifier differentiating grade 3 gliomas from the others achieved an accuracy of 98.3%, a sensitivity of 96.8%, and a specificity of 99.0%; and the classifier differentiating grade 4 gliomas from the others achieved an accuracy of 98.7%, a sensitivity of 99.1%, and a specificity of 98.3%. In comparison, the DCNN without pretrained features, retrained on the gliomas alone, only achieved a mean accuracy of 61.42% with an SD of ±7% and a mean Az of 0.8222 ± 0.07, and the DCNN without augmentation performed worst, with a mean accuracy of 59.85% with an SD of ±16% and a mean Az of 0.7896 ± 0.18.
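For reference, the per-grade, one-vs-rest metrics reported above can be computed as in the following sketch. The paper performed these calculations in SPSS, so scikit-learn and the 0.5 decision threshold here are assumptions.

```python
# Hypothetical one-vs-rest evaluation for a single grade: accuracy,
# sensitivity, specificity, and the area under the ROC curve (Az).
import numpy as np
from sklearn.metrics import roc_auc_score

def one_vs_rest_metrics(y_true, y_score, grade, threshold=0.5):
    """y_true holds grades (2, 3, or 4); y_score holds the predicted
    probability that each case belongs to `grade`."""
    pos = np.asarray(y_true) == grade
    pred = np.asarray(y_score) >= threshold
    tp, tn = np.sum(pred & pos), np.sum(~pred & ~pos)
    fp, fn = np.sum(pred & ~pos), np.sum(~pred & pos)
    return {"accuracy": (tp + tn) / pos.size,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "Az": roc_auc_score(pos, y_score)}
```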

4. Discussion

Machine learning uses statistical analyses to combine various features for automatic classification. As a state-of-the-art technique, deep learning inherits this methodology and leverages computational power to automatically extract any possible features from a large dataset. Deep learning architectures such as the DCNN have been applied to a variety of classification tasks, including object detection and classification in natural and medical images. DCNN architectures use multiple nonlinear transformations to form advanced visual abstractions of image data. Inspired by the mechanism of biological nervous systems, multiple convolutional layers build up from pixels through regions to objects to thoroughly analyze an image's composition. Based on the tissue appearance on brain MR images, the proposed CAD system uses a transferred DCNN to establish a malignancy evaluation model that provides more objective and accurate diagnostic suggestions for grading gliomas. In this study, using the DCNN to classify grade 2, 3, and 4 gliomas achieved an almost perfect performance, with a mean accuracy of 97.9% (SD ±1%) and a mean Az of 0.9991 ± 0. A previous study [38] used well-known handcrafted MRI features, including intensity and textural features, and achieved an accuracy of only 88%, a sensitivity of 82%, a specificity of 90%, and an Az of 0.89. Regarding the classifier type, conventional artificial neural networks in that study [38] had an even lower diagnostic performance: an accuracy of 84%, a sensitivity of 79%, and a specificity of 86%. Yang et al. [29] used 113 gliomas and convolutional neural networks with and without transfer learning; their results demonstrated that transfer learning improves performance, which is consistent with our findings. Lotan et al. [30] described automatic segmentation methods for obtaining tumor contours and areas, analogous to our delineation step, with the subsequent classification likewise depending on machine learning techniques. Another study used diagnostic information from multiple modalities to achieve better performance [39].
Kermany et al. [31] applied a transfer learning model to classify optical coherence tomography images and achieved accuracy, sensitivity, and specificity all exceeding 93%, with an Az of 0.988. Their results suggest that training on medical images with pretrained models can produce more accurate models in a very short time compared to training a model from scratch. Even with a limited dataset, a transferred DCNN can achieve performance comparable to, or even better than, that of human experts. Similar results were shown in this study: the transferred DCNN performed much better than the DCNN without pretrained features and the DCNN without augmentation. The difference was likely caused by the number of training samples. The original dataset without augmentation contained only 30, 43, and 57 images in the three classes, respectively, which is very low even for transfer learning; by comparison, Kermany et al. used 1000 images in each class to train a limited model. Through the AutoAugment procedure described in this study, the dataset was expanded 57-fold and therefore contained over 1000 images in each class. The total number of augmented images was sufficient for transfer learning, but still not for training a deep network from scratch, so the result of the DCNN without augmentation was worse than that of the transferred DCNN.
Given the success of transfer learning, collecting qualified image data, with precise labels and descriptions of the tumor location, boundaries, and features, may become more important than devising sophisticated image features. However, collecting substantial amounts of qualified medical data is very challenging, and the limited data used in this experiment may not sufficiently represent the overall diversity of tumor appearances. In Taiwan, only about 600 patients are diagnosed with diffuse malignant gliomas each year, of whom only about 240 have GBM. Collecting large amounts of data with patient informed consent is therefore difficult, and data augmentation is bound to become a necessary tool for applying deep learning to uncommon diseases. To date, the experimental results have demonstrated that transfer learning and data augmentation are suitable for scarce medical images. When the case number is limited, generalization is open to question; in this situation, data augmentation combined with transfer learning can fit a small dataset to a challenging classification task, provided the dataset has sufficient diversity. In the future, after more glioma data are collected, this system is likely to be practical for clinical use.
Another limitation is that, although contrast-enhanced T1WIs provided critical information for differentiating different grades of gliomas in the DCNN, the correlation between the prediction model and actual biological tissues was underexplored. Key clinical determinants in grading gliomas are necrosis and angiogenesis. Whether they are related to the established DCNN model is the next topic to be explored. Meanwhile, further investigation of other sequences including fluid-attenuated inversion recovery, T2-weighted images, DWI, and MRS is necessary.

5. Conclusions

Using hierarchical feature representations learned by a transferred convolutional neural network, the proposed CAD system with data augmentation achieved a mean accuracy of 97.9% (SD ±1%) and a mean Az of 0.9991 ± 0. This accuracy in distinguishing different grades of gliomas is promising for supporting radiologists in the clinic.

Author Contributions

Conceptualization, C.-M.L.; Data curation, K.L.-C.H.; Methodology, C.-M.L., Y.-C.C.; Validation, R.-C.W.; Writing—original draft, C.-M.L.; Writing—review and editing, K.L.-C.H.

Funding

This research was funded by the Ministry of Science and Technology in Taiwan, grant number MOST 108-2221-E-004-010-MY3. The APC was funded by the Ministry of Science and Technology in Taiwan.

Acknowledgments

The authors would like to thank the Ministry of Science and Technology in Taiwan (MOST 108-2221-E-004-010-MY3 and 106-2314-B-038-033) and Taipei Medical University (TMU 104-AE1-B04, 106-AE1-B46 and 106TMU-TMUH-20) for financially supporting this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Louis, D.N.; Ohgaki, H.; Wiestler, O.D.; Cavenee, W.K.; Burger, P.C.; Jouvet, A.; Scheithauer, B.W.; Kleihues, P. The 2007 WHO classification of tumours of the central nervous system. Acta Neuropathol. 2007, 114, 97–109. [Google Scholar] [CrossRef]
  2. Louis, D.N.; Perry, A.; Reifenberger, G.; von Deimling, A.; Figarella-Branger, D.; Cavenee, W.K.; Ohgaki, H.; Wiestler, O.D.; Kleihues, P.; Ellison, D.W. The 2016 World Health Organization Classification of Tumors of the Central Nervous System: A summary. Acta Neuropathol. 2016, 131, 803–820. [Google Scholar] [CrossRef]
  3. Brat, D.J.; Verhaak, R.; Aldape, K.D.; Yung, W.; Salama, S.R.; Cooper, L.; Rheinbay, E.; Miller, C.R.; Vitucci, M.; Morozova, O. Comprehensive, integrative genomic analysis of diffuse lower-grade gliomas. N. Engl. J. Med. 2015, 372, 2481–2498. [Google Scholar]
  4. Desjardins, A.; Gromeier, M.; Herndon, J.E., 2nd; Beaubier, N.; Bolognesi, D.P.; Friedman, A.H.; Friedman, H.S.; McSherry, F.; Muscat, A.M.; Nair, S.; et al. Recurrent Glioblastoma Treated with Recombinant Poliovirus. N. Engl. J. Med. 2018, 379, 150–161. [Google Scholar] [CrossRef]
  5. Gallego Perez-Larraya, J.; Delattre, J.Y. Management of elderly patients with gliomas. Oncologist 2014, 19, 1258–1267. [Google Scholar] [CrossRef] [PubMed]
  6. Burger, P.C.; Vogel, F.S.; Green, S.B.; Strike, T.A. Glioblastoma multiforme and anaplastic astrocytoma pathologic criteria and prognostic implications. Cancer 1985, 56, 1106–1111. [Google Scholar] [CrossRef]
  7. Coons, S.W.; Johnson, P.C.; Scheithauer, B.W.; Yates, A.J.; Pearl, D.K. Improving diagnostic accuracy and interobserver concordance in the classification and grading of primary gliomas. Cancer 1997, 79, 1381–1393. [Google Scholar] [CrossRef]
  8. Kleihues, P.; Soylemezoglu, F.; Schäuble, B.; Scheithauer, B.W.; Burger, P.C. Histopathology, classification, and grading of gliomas. Glia 1995, 15, 211–221. [Google Scholar] [CrossRef] [PubMed]
  9. Gilles, F.H.; Brown, W.D.; Leviton, A.; Tavaré, C.J.; Adelman, L.; Rorke, L.B.; Davis, R.L.; Hedley-Whyte, T.E. Limitations of the World Health Organization classification of childhood supratentorial astrocytic tumors. Cancer 2000, 88, 1477–1483. [Google Scholar] [CrossRef]
  10. Prayson, R.A.; Agamanolis, D.P.; Cohen, M.L.; Estes, M.L.; Kleinschmidt-DeMasters, B.; Abdul-Karim, F.; McClure, S.P.; Sebek, B.A.; Vinay, R. Interobserver reproducibility among neuropathologists and surgical pathologists in fibrillary astrocytoma grading. J. Neurol. Sci. 2000, 175, 33–39. [Google Scholar] [CrossRef]
  11. Kim, S.H.; Chang, W.; Kim, J.P.; Minn, Y.; Choi, J.; Chang, J.; Kim, T.; Park, Y.; Chang, J. Peripheral compressing artifacts in brain tissue from stereotactic biopsy with sidecutting biopsy needle: A pitfall for adequate glioma grading. Clin. Neuropathol. 2010, 30, 328–332. [Google Scholar] [CrossRef] [PubMed]
  12. Mahaley, M.S., Jr.; Mettlin, C.; Natarajan, N.; Laws, E.R., Jr.; Peace, B.B. National survey of patterns of care for brain-tumor patients. J. Neurosurg. 1989, 71, 826–836. [Google Scholar] [CrossRef] [PubMed]
  13. Guzman-De-Villoria, J.A.; Mateos-Perez, J.M.; Fernandez-Garcia, P.; Castro, E.; Desco, M. Added value of advanced over conventional magnetic resonance imaging in grading gliomas and other primary brain tumors. Cancer Imaging 2014, 14, 35. [Google Scholar] [CrossRef] [PubMed]
  14. Li-Chun Hsieh, K.; Chen, C.Y.; Lo, C.M. Quantitative glioma grading using transformed gray-scale invariant textures of MRI. Comput. Biol. Med. 2017, 83, 102–108. [Google Scholar] [CrossRef]
  15. Leach, M.O.; Brindle, K.; Evelhoch, J.; Griffiths, J.R.; Horsman, M.R.; Jackson, A.; Jayson, G.C.; Judson, I.R.; Knopp, M.; Maxwell, R.J. The assessment of antiangiogenic and antivascular therapies in early-stage clinical trials using magnetic resonance imaging: Issues and recommendations. Br. J. Cancer 2005, 92, 1599–1610. [Google Scholar] [CrossRef]
  16. Bai, X.; Zhang, Y.; Liu, Y.; Han, T.; Liu, L. Grading of supratentorial astrocytic tumors by using the difference of ADC value. Neuroradiology 2011, 53, 533–539. [Google Scholar] [CrossRef]
  17. Jackson, A.; O’Connor, J.P.; Parker, G.J.; Jayson, G.C. Imaging tumor vascular heterogeneity and angiogenesis using dynamic contrast-enhanced magnetic resonance imaging. Clin. Cancer Res. 2007, 13, 3449–3459. [Google Scholar] [CrossRef]
  18. Blasberg, R.G. Imaging update: New windows, new views. Clin. Cancer Res. 2007, 13, 3444–3448. [Google Scholar] [CrossRef]
  19. Arvinda, H.; Kesavadas, C.; Sarma, P.; Thomas, B.; Radhakrishnan, V.; Gupta, A.; Kapilamoorthy, T.; Nair, S. RETRACTED ARTICLE: Glioma grading: Sensitivity, specificity, positive and negative predictive values of diffusion and perfusion imaging. J. Neuro-Oncol. 2009, 94, 87–96. [Google Scholar] [CrossRef]
  20. Albright, A.L.; Packer, R.J.; Zimmerman, R.; Rorke, L.B.; Boyett, J.; Hammond, G.D. Magnetic resonance scans should replace biopsies for the diagnosis of diffuse brain stem gliomas: A report from the Children’s Cancer Group. Neurosurgery 1993, 33, 1026–1030. [Google Scholar] [CrossRef]
  21. Lo, C.-M.; Lai, Y.-C.; Chou, Y.-H.; Chang, R.-F. Quantitative breast lesion classification based on multichannel distributions in shear-wave imaging. Comput. Methods Programs Biomed. 2015, 122, 354–361. [Google Scholar] [CrossRef] [PubMed]
  22. Lo, C.-M.; Moon, W.K.; Huang, C.-S.; Chen, J.-H.; Yang, M.-C.; Chang, R.-F. Intensity-invariant texture analysis for classification of bi-rads category 3 breast masses. Ultrasound Med. Biol. 2015, 41, 2039–2048. [Google Scholar] [CrossRef] [PubMed]
  23. Moon, W.K.; Lo, C.-M.; Cho, N.; Chang, J.M.; Huang, C.-S.; Chen, J.-H.; Chang, R.-F. Computer-aided diagnosis of breast masses using quantified BI-RADS findings. Comput. Methods Programs Biomed. 2013, 111, 84–92. [Google Scholar] [CrossRef] [PubMed]
  24. Zacharaki, E.I.; Wang, S.; Chawla, S.; Soo Yoo, D.; Wolf, R.; Melhem, E.R.; Davatzikos, C. Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme. Magn. Reson. Med. 2009, 62, 1609–1618. [Google Scholar] [CrossRef]
  25. Ravi, K.S.; Heang-Ping, C.; Lubomir, M.H.; Mark, A.H.; Kenny, H.C.; Caleb, D.R. Multi-task transfer learning deep convolutional neural network: Application to computer-aided diagnosis of breast cancer on mammograms. Phys. Med. Biol. 2017, 62, 8894. [Google Scholar]
  26. Antropova, N.; Huynh, B.Q.; Giger, M.L. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med Phys. 2017, 44, 5162–5171. [Google Scholar] [CrossRef]
  27. Banerjee, I.; Crawley, A.; Bhethanabotla, M.; Daldrup-Link, H.E.; Rubin, D.L. Transfer learning on fused multiparametric MR images for classifying histopathological subtypes of rhabdomyosarcoma. Comput. Med. Imaging Graph. 2018, 65, 167–175. [Google Scholar] [CrossRef]
  28. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems—Volume 1, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  29. Yang, Y.; Yan, L.-F.; Zhang, X.; Han, Y.; Nan, H.-Y.; Hu, Y.-C.; Hu, B.; Yan, S.-L.; Zhang, J.; Cheng, D.-L. Glioma grading on conventional mr images: A deep learning study with transfer learning. Front. Neurosci. 2018, 12, 804. [Google Scholar] [CrossRef]
  30. Lotan, E.; Jain, R.; Razavian, N.; Fatterpekar, G.M.; Lui, Y.W. State of the art: Machine learning applications in glioma imaging. Am. J. Roentgenol. 2019, 212, 26–37. [Google Scholar] [CrossRef]
  31. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.S.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell 2018, 172, 1122–1131. [Google Scholar] [CrossRef]
  32. Cubuk, E.D.; Zoph, B.; Mane, D.; Vasudevan, V.; Le, Q.V. AutoAugment: Learning Augmentation Policies from Data. arXiv 2018, arXiv:1805.09501. [Google Scholar]
  33. McLendon, R.; Friedman, A.; Bigner, D.; Van Meir, E.G.; Brat, D.J.; Mastrogianakis, G.M.; Olson, J.J.; Mikkelsen, T.; Lehman, N.; Aldape, K. Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature 2008, 455, 1061–1068. [Google Scholar]
  34. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  35. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  36. Oquab, M.; Bottou, L.; Laptev, I.; Sivic, J. Learning and Transferring Mid-level Image Representations Using Convolutional Neural Networks. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1717–1724. [Google Scholar]
  37. Kohavi, R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the 14th International Joint Conference on Artificial Intelligence—Volume 2, Montreal, QC, Canada, 20–25 August 1995; pp. 1137–1143. [Google Scholar]
  38. Hsieh, K.L.-C.; Lo, C.-M.; Hsiao, C.-J. Computer-aided grading of gliomas based on local and global MRI features. Comput. Methods Programs Biomed. 2017, 139, 31–38. [Google Scholar] [CrossRef] [PubMed]
  39. Ye, F.; Pu, J.; Wang, J.; Li, Y.; Zha, H. Glioma grading based on 3D multimodal convolutional neural network and privileged learning. In Proceedings of the 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Kansas City, MO, USA, 13–16 November 2017; pp. 759–763. [Google Scholar]
Figure 1. Gliomas of grades 2, 3, and 4 as they appeared on brain magnetic resonance images (http://cancerimagingarchive.net/, "License", and the CC BY license, https://creativecommons.org/licenses/by/3.0/).
Figure 2. Extracted tumor areas as input images to the deep convolutional neural network (http://cancerimagingarchive.net/, "License", and the CC BY license, https://creativecommons.org/licenses/by/3.0/).
Figure 3. Parameter transfer in the proposed deep convolutional neural network.
Figure 4. Augmented data with 56 subpolicies (taking a grade 4 glioma as an example).
Figure 5. Training and validation learning processes.
Figure 6. Test accuracy illustrated by tradeoffs between the sensitivity and specificity.
