Article

Diagnosis of Intracranial Tumors via the Selective CNN Data Modeling Technique

by Vinayak Singh 1, Mahendra Kumar Gourisaria 1,*, Harshvardhan GM 1, Siddharth Swarup Rautaray 1, Manjusha Pandey 1, Manoj Sahni 2, Ernesto Leon-Castro 3 and Luis F. Espinoza-Audelo 4

1 School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
2 Department of Mathematics, Pandit Deendayal Energy University, Gandhinagar 382426, Gujarat, India
3 Faculty of Economics and Administrative Sciences, Universidad Católica de la Santísima Concepción, Concepción 4030000, Chile
4 Instituto Tecnológico de Culiacán, Tecnológico Nacional de México, Sinaloa 80220, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(6), 2900; https://doi.org/10.3390/app12062900
Submission received: 31 December 2021 / Revised: 21 February 2022 / Accepted: 8 March 2022 / Published: 11 March 2022

Abstract

A brain tumor occurs in humans when a normal cell turns into an aberrant cell inside the brain. Primarily, there are two types of brain tumors in Homo sapiens: benign tumors and malignant tumors. In brain tumor diagnosis, magnetic resonance imaging (MRI) plays a vital role that demands high precision and accuracy; otherwise, a minor error can result in severe consequences. In this study, we implemented variously configured convolutional neural network (CNN) paradigms on brain tumor MRI scans to determine whether or not a person has a brain tumor. This paper emphasizes the objective function values (OFV) achieved by various CNN paradigms, together with the least validation cross-entropy loss (LVCEL), maximum validation accuracy (MVA), and training time (TT) in seconds, which can serve as a feasible tool for clinicians and the medical community to recognize tumor patients precisely. Experimentation and evaluation were based on a total of 2189 brain MRI scans; the best architecture achieved the highest accuracy of 0.8275, a maximum objective function value of 1.84, and an area under the ROC curve (AUC-ROC) of 0.737 for accurately recognizing and classifying whether or not a person has a brain tumor.

1. Introduction

A brain tumor is a condition in which abnormal cells develop in the brain. As a brain tumor grows, it increases intracranial pressure, which leads to brain damage and can even be life-threatening. Brain tumors can be classified into two categories: benign and malignant. Malignancies consist of cancer cells, whereas benign tumors consist of noncancerous cells. Malignant tumors grow very rapidly, whereas benign tumors grow very slowly and present fewer symptoms than malignant tumors [1]. Brain tumors can also be categorized by origin: primary brain tumors are cancer lesions that arise within the brain, such as gliomas, oligodendrogliomas, pituitary adenomas, schwannomas, and astrocytomas, whereas metastatic (secondary) brain tumors develop at other locations in the body and migrate to the brain through arterial circulation [2]. The frontal lobe is the most common site for metastatic tumors.
According to a report by the National Brain Tumor Foundation (NBTF), a total of 29,000 people in the USA were diagnosed with primary intracranial tumors, of whom 13,000 died from this disastrous disease. Additionally, one-quarter of all cancer deaths among children are due to brain tumors. The total yearly incidence of primary intracranial tumors in the USA is 11–12 per 100,000 people, and that of primary malignant intracranial tumors is 6–7 per 100,000. Over 4200 people are diagnosed with brain tumors every year in the UK (2007 estimate), and 16 out of every 1000 diagnosed cancer patients had a brain tumor. Approximately 200 other types of tumors are diagnosed in the UK every year. According to a 2007 report, a total of 80,271 cases with various types of tumors were diagnosed in India [3]. The majority of childhood tumors (17%) are located in the parietal, frontal, and occipital lobes, followed by the cerebellum and brain stem. Clinically, tumors are graded into four categories: Grade 1, where the tissue is benign; Grade 2, where the tissue is malignant and the cells look less like normal cells; Grade 3, where the malignant tissue has cells that are very different from normal cells (anaplastic); and Grade 4, where the malignant tissue has cells that look most abnormal and grow rapidly.
Malignancies exhibit structural heterogeneity and contain active cancer cells, whereas benign tumors are uniform and do not contain active cancer cells. Benign tumors grow slowly, but over time they may convert into deadly malignant tumors. Benign tumors are also known as low-grade tumors and can be subclassified into two categories, glioma and meningioma; similarly, malignant tumors are considered high-grade brain tumors and can be classified into two types, glioblastoma and astrocytoma [4]. Benign tumors are less harmful, as they do not spread across the brain; they are removable and rarely grow back. Malignant tumors, in contrast, are very harmful because of their rapid growth, and they can be classified into distinct categories based on factors such as location, severity level, and type of originating tissue.
Head injuries, hereditary syndromes, immune suppression, and prolonged exposure to electromagnetic fields, ionizing radiation, or chemicals such as vinyl chloride and formaldehyde are the major causes of brain tumors. The symptoms of a brain tumor include persistent headache, eyesight issues, hearing and speech problems, walking and balancing problems, memory lapses, nausea and vomiting, concentration problems, and seizures. Early detection of tumors can lead to successful treatment, but treatment becomes relatively complex with time. Various therapies exist for brain tumors: radiation therapy is an effective treatment for malignant gliomas, and chemotherapy also plays an important role in the treatment of high-grade gliomas by killing cancerous cells. Factors such as optimum dosing, resection radius, and the sequencing of radiotherapy and chemotherapy (concurrent and adjuvant) affect treatment effectiveness. To monitor the anatomy, vascular supply, and cellular structure of intracranial tumors, magnetic resonance imaging (MRI) is used, which helps in diagnosis and in providing valuable treatment [5,6].
Magnetic resonance imaging (MRI) is a medical imaging technique that assists medical experts in diagnosis and treatment according to the condition of the patient. It allows us to identify disorders present inside the brain, making efficient use of magnetic fields, radio-frequency pulses, and computer visualization to analyze bones, organs, and other structures of the patient's body. Diagnosis from a computed tomography (CT) scan is much more complicated, whereas MRI offers the ability to visualize the posterior brainstem efficiently. Segmentation always plays a major role in finding malignant regions in complicated medical images [7]. MRI has many applications in imaging intracranial tumors; as a noninvasive modality, it can be used alongside other imaging modalities, such as magnetic resonance spectroscopy (MRS), computed tomography (CT), and positron emission tomography (PET), to obtain a more accurate tumor structure [8,9]. For biopsy sampling, MRI is used by pathologists for tumor grading by providing the tumor location, and full details of the shape and size of the brain tumor are obtained from MRI images [10].
The medical field is greatly impacted by machine learning, computer vision, and deep learning, which solve a large number of problems efficiently, such as ECG classification and brain tumor and heart disease detection [11,12,13,14]. In addition, deep learning techniques such as CNNs and ANNs have improved object classification and image detection [15,16]. As a deep learning model, a CNN [17] can be used to extract precise characteristics from raw input images of various configurations [18]. Similarly, generative modeling also plays a vital role in solving a large number of problems [19].
In this study, we explore the performance metrics attained by variously configured CNN architectures. To find the optimal architecture, maximum validation accuracy (MVA), least validation cross-entropy loss (LVCEL), and training time (TT) were considered, and an objective function value was computed for each CNN. The objective function value led us to the best CNN for classifying tumor patients. This automatic brain tumor diagnosis, deployable in real time, enables us to achieve precise results at a low computational cost. The rest of the paper is organized as follows: Section 2 discusses related work on brain tumors, Section 3 covers the materials and methods used for solving this binary classification problem, Section 4 presents the experimentation and results, and Section 5 concludes with a discussion of future work.

2. Related Work

Over the last decade, the medical field has received substantial contributions from machine learning and deep learning techniques, making work easier and more convenient for medical workers. Most research papers are oriented toward deep learning techniques, such as CNNs, transfer learning, and classical neural networks, for solving problems regarding brain tumors [20,21,22]. Thillaikkarasi et al. (2019) performed brain tumor segmentation using a deep learning algorithm that combined a CNN with an M-SVM to fragment the tumor automatically. The MRI images were preprocessed using Laplacian of Gaussian filtering (LoG) and contrast-limited adaptive histogram equalization (CLAHE), and the method achieved an accuracy of 84% [23].
Ghassemi et al. (2020) used a newer deep learning technique, a generative adversarial network (GAN), to extract robust, structured features from MRI images and fed them into a convolutional architecture to classify three types of tumors: meningioma, glioma, and pituitary tumors. The dataset contained 3064 images in total, and they achieved accuracies of 93.01% and 95.6% for the introduced split and random split, respectively [24]. The simple CNN technique used by Seetha et al. (2018) achieved an accuracy of 97.5% in brain tumor detection using CNN architectures whose neuron weights were much smaller than those of typical deep learning architectures. They used a gradient descent algorithm for the loss function, and the dataset used was from the Brain Tumor Image Segmentation benchmark (BraTS) 2015 [25].
A deep neural network was used by Mohsen et al. (2017) for the classification of brain tumors on a dataset containing four classes: sarcoma, glioblastoma, metastatic bronchogenic carcinoma, and normal. The classifiers were used in conjunction with principal component analysis (PCA) and the discrete wavelet transform (DWT), which yielded fairly good results across all performance metrics, but the dataset was very small [26]. Similarly, an accuracy of 81% was achieved by Pashaei et al. (2018), who used 3 × 3 filters and a five-layer model [27]. Their good results came from an ensemble of a CNN and a kernel extreme learning machine (KELM), a combination they named KE-CNN, which they compared with other classifiers such as an SVM and a radial basis function network.
A capsule network (CapsNet), a modified form of the CNN architecture introduced by Afshar et al. (2019), was used for tumor detection by recognizing the relationship between the tumor and its nearby tissues [28]. Transfer learning also plays an important role in tumor classification and was applied to content-based image retrieval (CBIR) by Swati et al. (2019), who obtained good results on a publicly available dataset [29]. Jain et al. (2019) used the VGG-16 model, trained to detect Alzheimer's disease from MRI images [30].
Convolutional neural networks contribute greatly to the medical field due to their suitability for images. A large amount of unstructured image data is generated nowadays, so CNN architectures are an optimal solution for these real-world problems. Since our work uses brain MRI images, the CNN architecture plays a crucial role, as justified in Section 3.4. Many techniques exist for detecting brain tumors in patients, but they have several limitations, and many radiologists struggle with this binary classification due to a lack of information and available data. The main motivation behind this research was to reduce the computational cost and provide an efficient CNN-based diagnosis system for brain tumors.

3. Materials and Methods

In this section, we discuss the materials and methodology used for classifying brain tumors from MRI images, and we specify the dataset and the technology used for experimentation and computation. The section is organized as follows: Section 3.1 discusses the dataset used in this study, Section 3.2 explains image augmentation, Sections 3.3 and 3.4 explain the artificial neural network and the convolutional neural network, respectively, and Section 3.5 describes the software and hardware used for experimentation.

3.1. Dataset Used

The MRI data used for this brain tumor detection study were collected from Kaggle, where the dataset was deployed by Navoneel Chakrabarty. The author gathered all 253 images from Google Images [31]. The images were divided into two categories, YES and NO: the YES folder contained 155 MRI scans of people with brain tumors, and the NO folder contained 98 MRI scans of people without brain tumors. Figure 1 depicts MRI scans of a positive and a negative patient. Table 1 shows the augmented dataset distribution of MRI scans for training and testing the CNN models. Because the dataset was very small, image augmentation played an important role in enlarging it.

3.2. Image Augmentation

Image augmentation plays a vital role in training a CNN architecture by increasing the number of sample images and thereby providing various views of each image for training and testing [32,33,34,35,36]. Image augmentation uses different transformations, such as shearing, zooming, rescaling, flipping, whitening, random rotations, and shifts. In our approach, we used rotation, shearing, brightness increments, horizontal and vertical flipping, and width and height shifting to widen the range of our training data.
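The transformations listed above can be sketched directly in numpy. The paper's actual pipeline uses Keras (Section 3.5); this is a minimal, library-free illustration of the same transforms, and the shift ranges and brightness factors are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def augment(img, rng):
    """Apply a random combination of the augmentations described above
    to one grayscale MRI slice (a 2-D numpy array in [0, 255]).
    Parameter ranges here are illustrative assumptions."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                                # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]                                # vertical flip
    img = np.roll(img, rng.integers(-5, 6), axis=1)       # width shift
    img = np.roll(img, rng.integers(-5, 6), axis=0)       # height shift
    img = np.clip(img * rng.uniform(0.9, 1.1), 0, 255)    # brightness change
    return img

rng = np.random.default_rng(0)
# Generate 8 augmented variants of one (synthetic) 64 x 64 scan
batch = [augment(np.full((64, 64), 100.0), rng) for _ in range(8)]
```

Each call produces a differently transformed copy of the same scan, which is how a 253-image dataset is expanded into the training set of Table 1.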

3.3. Artificial Neural Network (ANN)

ANNs are simple yet powerful neural networks whose nodes are highly interconnected. Artificial neural network (ANN) architectures have a wide range of applications in deep learning, such as speech recognition [37,38], face recognition [39], and signature verification [40]. They are made up of multiple nodes analogous to the biological neurons of the human brain. The initial nodes take up the input data and perform various operations; the result from each neuron is passed on to other neurons, and the output of every node is known as its activation value.
Mathematically, a neuron on $S$ can be represented as follows. Given a set $S \subseteq AE^{x}$ of input signals, an $x$-neuron on $S$ is the function

$$U : AE^{x} \times S \to AE, \qquad (b, c) \mapsto U(b, c) = f(\langle b, c \rangle)$$

where $b$ is a weight vector, $\langle \cdot , \cdot \rangle$ is the real scalar product, and

$$f : AE \to AE$$

is called the activation function of the neuron in the ANN layers. If $f$ is a linear operator, the neuron is linear. Fixing a learned weight vector $b$ yields the function

$$U^{*} := U(b, \cdot) : S \to AE, \qquad c \mapsto U^{*}(c)$$

which is called a trained $x$-neuron on $S$.
Bounded functions are mostly used to represent the activation functions of output layers. The equations above show that a neuron is a function of two vector variables (the weight vector and the input), whereas a trained neuron has a fixed weight vector and is therefore a mapping of only one vector variable. Activation functions are the most essential part of ANNs; common types include Sigmoid, ReLU, Tanh, Softmax, and Swish. In an ANN, the output of each neuron serves as input for other neurons, which leads to better results.
Figure 2 shows the layer-wise connections of the ANN, ordered from left to right, with widths {16, 12, 8, 2}. The input layer consists of 16 nodes, which are attached to two hidden layers of 12 and 8 nodes, and the output layer consists of two nodes, since a person either suffers from a tumor or does not. This is therefore a binary classification, and the Softmax function is used in the output layer, which makes the output values sum to 1.
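The forward pass of the {16, 12, 8, 2} network in Figure 2 can be sketched in a few lines of numpy. This is an untrained network with random weights, shown only to make the layer widths and the Softmax normalization concrete; the choice of ReLU for the hidden layers is an assumption.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

# Layer widths from Figure 2: 16 inputs, hidden layers of 12 and 8, 2 outputs.
sizes = [16, 12, 8, 2]
rng = np.random.default_rng(0)
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Propagate one 16-feature input through the {16, 12, 8, 2} network."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)                            # hidden activations
    return softmax(x @ weights[-1] + biases[-1])       # 2-class probabilities

probs = forward(rng.normal(size=16))
# probs has one entry per class (tumor / no tumor) and sums to 1
```

The Softmax output gives the two class probabilities mentioned above, which always sum to 1 regardless of the weights.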

3.4. Convolutional Neural Network (CNN)

Convolutional neural network architectures act as the backbone of deep learning, with a wide range of applications in the modern world, such as facial recognition [41], ECG classification [42], document analysis [43], climate understanding [44], and earthquake prediction [45] [46,47,48]. These CNN architectures provide better and more efficient solutions to complex real-world problems. It has been observed that heavy architectures show outstanding performance and handle numerous critical use cases precisely.
A convolutional neural network (CNN) has two kinds of parameters: weights and biases. Automatic diagnosis systems and various recognition systems are based on CNN architectures, and the medical field has benefited greatly, including in disease detection tasks such as malaria detection [49], heart disease detection [50], and arrhythmia detection [51]. CNN architectures are also used in microbial detection, with nuclei segmentation as a major example [52,53], and they can be widely applied to chronic diseases and the health care sector [54,55]. These architectures are easy to handle, and optimizing them via hyperparameter tuning is much easier than for other neural networks. Figure 3 shows a basic block diagram of the CNN architecture. Convolutional networks are particularly suited to image patterns: they are feed-forward neural networks that exploit the spatial correlations in input image data. These deep learning architectures were developed by LeCun et al. in 1998 [56].
CNN architectures are composed of four layers, namely, the convolution layer, the pooling layer, and the flattening and fully connected layers, which are discussed below.

3.4.1. Convolutional Layer

Convolutional layers are the initial layers of a CNN architecture and work according to the convolution theorem. Their major working principle is that the output of each layer acts as the input for the successive layer, and the output of these layers is a set of vectors known as a feature map. Equation (3) states the commutativity of convolution as follows:
$$(a * b)(x) = (b * a)(x)$$

3.4.2. Pooling Layer

Pooling layers follow the convolutional layers and primarily perform downsampling while preserving spatial invariance. Pooling can be divided into two categories: max-pooling and average-pooling. Equation (4) gives the formula for calculating the output dimensions as follows:
$$\text{Output dimension} = \frac{(h - f + 1)}{s} \times \frac{(w - f + 1)}{s} \times c$$
where h represents the height of the feature map, w represents the width of the feature map, c represents channels in the feature map, f is the filter size, and s denotes the length of the stride.
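The output-dimension formula can be checked numerically with a small helper. The sketch below uses the standard floor-division form, which agrees with the $(h - f + 1) \times (w - f + 1)$ factors of Equation (4) when the stride $s$ is 1; the example sizes are illustrative, not taken from the paper's architectures.

```python
def pool_output_dims(h, w, c, f, s):
    """Output shape of a pooling layer: an (h, w, c) feature map pooled
    with an f x f window and stride s. Channels c are unchanged.
    For s = 1 this reduces to (h - f + 1, w - f + 1, c)."""
    return ((h - f) // s + 1, (w - f) // s + 1, c)

# e.g. a 64 x 64 x 32 feature map through 2 x 2 max-pooling with stride 2:
print(pool_output_dims(64, 64, 32, f=2, s=2))  # -> (32, 32, 32)
```

Halving each spatial dimension this way is what makes pooling an effective downsampling step between convolutional layers.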

3.4.3. Flattening and Fully Connected Layers

These are the last layers of the CNN architecture and play an essential role in converting the CNN's output. In the flattening layer, the output of the final max-pooling layer is converted into a one-dimensional array, which is used as the input to the last layer, the fully connected layer. This layer connects all the 1-D neurons and performs the operations that generate the final output. Figure 4 shows all the connections and layers in a two-layer fully connected neural network.
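The flattening and fully connected stage can be sketched in numpy: the pooled feature map is unrolled into a 1-D vector and passed through one dense layer producing the two class scores. The feature-map size and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
feature_map = rng.random((8, 8, 16))        # output of the last pooling layer

# Flattening layer: unroll the 8 x 8 x 16 map into 1024 values
flat = feature_map.reshape(-1)

# Fully connected layer: one weight per (input value, output class) pair
W = rng.normal(0, 0.05, (flat.size, 2))     # illustrative, untrained weights
b = np.zeros(2)

logits = flat @ W + b                            # dense-layer output
probs = np.exp(logits) / np.exp(logits).sum()    # softmax: tumor / no tumor
```

The flatten step carries no parameters of its own; all learning in this stage happens in the dense weight matrix `W`.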
Convolutional neural networks (CNNs) outperform other neural networks on image datasets due to their excellent functionality and tunable hyperparameters. They can be easily applied to an image dataset, and the accuracies and results they achieve are excellent. In our approach, CNN architectures played a crucial role in the classification of brain tumors.

3.5. Software and Hardware

All the ConvNet architectures trained in this experiment were implemented in Python using the Keras and TensorFlow libraries in Jupyter Notebook. The hardware used was configured with 16 GB of RAM and an Intel Core i5 8th-generation processor.

4. Experimentation and Results

This section presents the experimentation and results carried out on the dataset using architectures with different parameters, from which we chose the optimal architecture for classification at a low computational cost. The section is divided into two parts: Section 4.1 discusses the experimentation and analysis, and Section 4.2 covers the evaluation of the selected architecture.

4.1. Experimentation and Analysis

We implemented variously configured CNNs to find the optimal model. Across implementations, we varied hyperparameters such as the number of dense and convolutional layers, as well as regularization settings, including L1 regularization, L2 regularization, image input sizes (IS), batch normalization (BN), dropout (DO), kernel sizes (KS), and pooling matrix sizes (PS). We analyzed all the CNN architectures based on maximum validation accuracy (MVA), least validation cross-entropy loss (LVCEL), and training time (TT). The optimal architecture was selected on the basis of the objective function value (OFV), which is expressed in Equation (5) as follows:
$$\text{Objective Function Value (OFV)} = \frac{\mathrm{MVA}}{\mathrm{TT} + \mathrm{LVCEL}}$$
Table 2 lists all 15 CNN configurations with their MVA, LVCEL, and TT in seconds. Figure 5 shows the MVA, LVCEL, and TT for each architecture; Figure 6 shows the validation accuracy and Figure 7 the validation loss of each CNN model per epoch. In Table 2, we notice that architecture 4 had the lowest OFV of 0.6073, which might be related to batch normalization and its effect on training time. We also observed that training time depended strongly on the input image size: in architectures 5, 6, and 13, where we fed image sizes of (64, 64), training time dropped drastically, so we can claim that TT is highly dependent on IS. As mentioned earlier, TT (Figure 5c) also changed drastically when the architecture size was expanded or reduced, i.e., when we added or removed CNN and ANN layers or applied different parameters. Table 2 further shows no drastic variation in MVA and LVCEL across models. The maximum OFV of 1.84 was achieved by the 13th CNN architecture, and the minimum of 0.6073 by the 4th. We can therefore conclude that the simpler architecture performed better and was more stable and accurate, and architecture 13 was selected as the optimal CNN model to diagnose and classify brain tumors.
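The model-selection rule can be made concrete with a small helper. Reading Equation (5) as MVA divided by the sum of TT and LVCEL is an assumption here (including how TT is normalized), and the candidate values below are illustrative, not taken from Table 2.

```python
def objective_function_value(mva, lvcel, tt):
    """OFV as read from Equation (5): maximum validation accuracy divided
    by the sum of (normalized) training time and least validation
    cross-entropy loss. Higher is better."""
    return mva / (tt + lvcel)

# Hypothetical architectures with illustrative (MVA, LVCEL, TT) values:
candidates = {
    "arch_a": objective_function_value(mva=0.82, lvcel=0.45, tt=0.40),
    "arch_b": objective_function_value(mva=0.80, lvcel=0.50, tt=0.90),
}
best = max(candidates, key=candidates.get)  # architecture with highest OFV
```

Because TT and LVCEL sit in the denominator, a model that trains faster and loses less can win the selection even without the single highest accuracy, which matches the paper's preference for simpler architectures.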

4.2. Evaluation of Selected Architecture

Architecture 13 was chosen as the optimal architecture, as the performance was good in terms of MVA, LVCEL, TT, and OFV. A confusion matrix was used to calculate the recall, precision, and F1 score for better evaluation and assessment as follows:
$$\mathrm{Precision} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Positive}}$$
$$\mathrm{Recall} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Negative}}$$
$$F1\ \mathrm{Score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
Figure 8 shows the confusion matrix of the optimal convolutional neural network architecture, for which 51 non-augmented images were used in the metric evaluation. Because data augmentation was used only to enlarge the original dataset, augmented images should not be used to evaluate the architectures; therefore, we considered only the 51 original images for testing the optimal architecture. We also evaluated the architectures on 371 augmented images and obtained quite similar results, but for fairness, the non-augmented images were used.
From Equations (6)–(8), we calculated Precision = 0.90322, Recall = 0.9655, and F1 Score = 0.93331. Our model performed outstandingly, as all of these metrics were above 90%; therefore, we can claim that our architecture performed excellently in diagnosing brain tumors. Figure 9 shows the ROC curve, where the area under the curve (AUC) was found to be 0.737, which was good.
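Equations (6)–(8) can be sketched as a small function over confusion-matrix counts. The counts below are chosen for illustration: they are consistent with the reported metrics on the 51 non-augmented test images, but they are not read from Figure 8 itself.

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts
    (Equations (6)-(8))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts consistent with the reported metrics
# (28 + 3 + 1 + 19 true negatives = 51 images):
p, r, f1 = prf1(tp=28, fp=3, fn=1)
print(round(p, 4), round(r, 4), round(f1, 4))  # -> 0.9032 0.9655 0.9333
```

Working from raw counts rather than percentages also makes it clear why evaluation must use the non-augmented images: augmented copies of one scan would inflate every cell of the matrix.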

5. Conclusions and Future Work

In this paper, we compared the performance of 15 configured CNN architectures in an effort to provide the best CNN architecture for brain tumor classification from MRI images. Based on the objective function value, we selected the ideal model, which was easily trainable and detected tumors faster than the other architectures at a low computational cost. The parameters and configuration of the selected CNN model were simpler than those of the other 14 architectures. Based on the outcomes of this study, we suggest that simpler architectures perform best and are more stable and reliable than complex ones. We acknowledge that architecture 13 could perform even better, gaining higher accuracy and stability, with small adjustments and further experimentation. We also acknowledge that the dataset was very small for testing; a real-world implementation would require many more MRI scans for training and testing various state-of-the-art deep learning models.
In terms of future work on brain tumor detection, various transfer learning architectures, such as ResNet, VGG, and other ImageNet models, can be trained for classification. Brain tumor detection, being a sensitive use case, may obtain its best outcomes and better diagnoses via transfer learning. A larger dataset could also be used instead of image augmentation, which could lead to better stability of the architectures for classifying MRI scans.

Author Contributions

Workflow pipeline creation, M.S.; dataset gathering, H.G.; data preprocessing, M.P., L.F.E.-A.; results and analysis, V.S., E.L.-C.; writing—review and editing, S.S.R.; implementation and supervision, M.K.G. All authors have read and agreed to the published version of the manuscript.

Funding

Author Leon-Castro acknowledges support from the Chilean Government through FONDECYT initiation grant No. 11190056. Research supported by Red Sistemas Inteligentes y Expertos Modelos Computacionales Iberoamericanos (SIEMCI), project number 522RT0130 in Programa Iberoamericano de Ciencia y Tecnologia para el Desarrollo (CYTED).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset was collected from Kaggle, and dataset preparation is explained in Section 3.1 (Dataset Used). Dataset link: https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection (accessed on 19 September 2020).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bhattacharyya, D.; Kim, T.-H. Brain tumor detection using MRI image analysis. In Proceedings of the International Conference on Ubiquitous Computing and Multimedia Applications, Daejeon, Korea, 13–15 April 2011; pp. 307–314. [Google Scholar]
  2. Iqbal, S.; Ghani Khan, M.U.; Saba, T.; Mehmood, Z.; Javaid, N.; Rehman, A.; Abbasi, R. Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation. Microsc. Res. Tech. 2019, 82, 1302–1315. [Google Scholar] [CrossRef] [PubMed]
  3. Logeswari, T.; Karnan, M. Improved implementation of brain tumor detection using segmentation based on soft computing. J. Cancer Res. Exp. Oncol. 2009, 2, 6–14. [Google Scholar]
  4. Bahadure, N.B.; Ray, A.K.; Thethi, H.P. Image Analysis for MRI Based Brain Tumor Detection and Feature Extraction Using Biologically Inspired BWT and SVM. Int. J. Biomed. Imaging 2017, 2017, 1–12. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. (L) MRI of a person without a brain tumor and (R) MRI of a person with a brain tumor.
Figure 2. Pictorial depiction of an artificial neural network.
Figure 3. A block diagram of convolutional neural network architecture.
Figure 4. The convolutional architecture of the two-layered CNN model.
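As an illustration of the two-layered configuration in Figure 4 (architecture 1 in Table 2: 128 × 128 input, kernel sizes {9, 3}, pooling sizes {4, 2}), the sketch below traces how the spatial feature-map size shrinks through each convolution–pooling stage. It assumes 'valid' convolutions with stride 1 and non-overlapping pooling; the paper does not state the padding or stride settings, so these are assumptions, and this is not the authors' code.

```python
# Trace feature-map sizes through a two-layer CNN (Table 2, row 1).

def conv_out(size, kernel, stride=1):
    """Spatial size after a 'valid' (no-padding) convolution."""
    return (size - kernel) // stride + 1

def pool_out(size, pool):
    """Spatial size after non-overlapping pooling with window `pool`."""
    return size // pool

def trace(input_size, kernels, pools):
    """Return the spatial size after each conv+pool stage."""
    sizes = [input_size]
    s = input_size
    for k, p in zip(kernels, pools):
        s = pool_out(conv_out(s, k), p)
        sizes.append(s)
    return sizes

# 128 -> conv 9x9 -> 120 -> pool 4x4 -> 30 -> conv 3x3 -> 28 -> pool 2x2 -> 14
print(trace(128, [9, 3], [4, 2]))  # [128, 30, 14]
```

The same helper reproduces the smaller 64 × 64 variants in Table 2 by changing `input_size`.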
Figure 5. (a) Maximum validation accuracy of each CNN model, (b) least validation cross-entropy loss of each CNN model, and (c) training time (in seconds) of each CNN model.
Figure 6. Training curve of validation accuracy for each CNN architecture per epoch.
Figure 7. Training curve of validation loss for each CNN architecture per epoch.
Figure 8. The confusion matrix for architecture 13. We can observe that true positive = 28, true negative = 19, false positive = 3, and false negative = 1.
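The four counts in the Figure 8 confusion matrix (TP = 28, TN = 19, FP = 3, FN = 1) can be turned into the usual diagnostic metrics. This is a minimal sketch using only the counts stated in the caption; it is not the authors' evaluation code.

```python
# Derive standard classification metrics from a 2x2 confusion matrix.

def confusion_metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),         # sensitivity
        "specificity": tn / (tn + fp),
    }

# Counts from Figure 8 (architecture 13 on the held-out test batch shown).
m = confusion_metrics(tp=28, tn=19, fp=3, fn=1)
for name, value in m.items():
    print(f"{name}: {value:.4f}")
```

Note that the high recall (only one false negative) matters most in this screening setting, where missing a tumor patient is the costliest error.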
Figure 9. The receiver operating characteristic (ROC) curve for the 13th CNN architecture. The area under the curve (AUC-ROC) is calculated to be 0.737.
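The AUC-ROC of 0.737 reported in Figure 9 summarizes the whole curve in a single number. The sketch below shows one way such a value can be computed from model scores, via the rank (Mann–Whitney) formulation of AUC; the labels and scores here are hypothetical and are not the study's data.

```python
# Rank-based AUC-ROC: the probability that a randomly chosen positive
# receives a higher score than a randomly chosen negative (ties count 0.5).

def auc_roc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical test labels (1 = tumor) and model scores.
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.6, 0.3, 0.7, 0.2, 0.5]
print(round(auc_roc(y_true, y_score), 3))  # 0.875
```

This pairwise formulation is mathematically equivalent to the area under the ROC curve obtained by sweeping the decision threshold, as in Figure 9.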
Table 1. Dataset distribution of training and testing after augmentation.

            Brain Tumor Patient (YES)   Normal (NO)
Training    1037                        781
Testing     234                         137
Total       1271                        918
Table 2. Performance of all 15 convolutional neural network (CNN) models.

S. no.  CL  AL  IS         FD                  KS           PS           LVCEL   MVA     TT
1       2   3   (128,128)  {64,32}             {9,3}        {4,2}        0.3339  0.8518  195
2       2   4   (128,128)  {64,32}             {9,3}        {4,2}        0.3710  0.8491  209
3       2   4   (128,128)  {64,32}             {9,3}        {4,2}        0.3800  0.8491  216
4       2   4   (128,128)  {64,32}             {9,3}        {4,2}        0.3801  0.8329  219
5       2   4   (64,64)    {64,32}             {9,3}        {4,2}        0.5461  0.7358  131
6       2   2   (64,64)    {64,32}             {9,3}        {4,2}        0.5651  0.7224  134
7       2   3   (128,128)  {64,32}             {9,3}        {4,2}        0.3281  0.8464  220
8       3   4   (128,128)  {128,64,32}         {9,6,3}      {4,2,2}      0.3900  0.8518  211
9       3   5   (128,128)  {128,64,32}         {9,6,3}      {4,2,2}      0.3811  0.8383  196
10      4   4   (128,128)  {128,64,32,16}      {9,6,3,3}    {4,2,2,2}    0.4100  0.8167  192
11      4   5   (128,128)  {128,64,32,16}      {9,6,3,3}    {4,2,2,2}    0.3486  0.8652  181
12      4   5   (128,128)  {128,64,32,16}      {9,6,3,3}    {4,2,2,2}    0.4146  0.8571  180
13      4   5   (64,64)    {64,32,32,16}       {9,6,3,3}    {2,2,2,2}    0.4480  0.8275  106
14      5   5   (128,128)  {128,64,64,32,16}   {9,6,6,3,3}  {2,2,2,2,2}  0.4012  0.8464  184
15      5   6   (128,128)  {128,64,64,32,16}   {9,6,6,3,3}  {2,2,2,2,2}  0.4040  0.8356  182
Where CL stands for convolutional layers, AL for artificial neural network (dense) layers, L1 for L1 (lasso) regularization, L2 for L2 (ridge) regularization, BN for batch normalization, DO for dropout, IS for the input image size, FD for the feature detectors (filters) per layer, KS for kernel sizes, PS for pooling sizes, LVCEL for least validation cross-entropy loss, MVA for maximum validation accuracy, and TT for training time (in seconds).
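The abstract's objective function value (OFV) combines LVCEL, MVA, and TT into one score; its exact formula is defined in the main text and is not reproduced here. As a sketch of how the Table 2 results can be compared programmatically, the snippet below re-ranks the architectures on each single criterion, with the tuples transcribed directly from the table.

```python
# Table 2 results as (architecture number, LVCEL, MVA, TT in seconds).
results = [
    (1, 0.3339, 0.8518, 195), (2, 0.3710, 0.8491, 209),
    (3, 0.3800, 0.8491, 216), (4, 0.3801, 0.8329, 219),
    (5, 0.5461, 0.7358, 131), (6, 0.5651, 0.7224, 134),
    (7, 0.3281, 0.8464, 220), (8, 0.3900, 0.8518, 211),
    (9, 0.3811, 0.8383, 196), (10, 0.4100, 0.8167, 192),
    (11, 0.3486, 0.8652, 181), (12, 0.4146, 0.8571, 180),
    (13, 0.4480, 0.8275, 106), (14, 0.4012, 0.8464, 184),
    (15, 0.4040, 0.8356, 182),
]

best_mva = max(results, key=lambda r: r[2])    # highest validation accuracy
best_loss = min(results, key=lambda r: r[1])   # lowest cross-entropy loss
fastest = min(results, key=lambda r: r[3])     # shortest training time

print(best_mva[0], best_loss[0], fastest[0])   # 11 7 13
```

No single architecture wins on all three criteria (11 on MVA, 7 on LVCEL, 13 on TT), which is exactly why the paper aggregates them into an OFV before selecting architecture 13.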
