Article

Accurate MRI-Based Brain Tumor Diagnosis: Integrating Segmentation and Deep Learning Approaches

by Medet Ashimgaliyev 1, Bakhyt Matkarimov 1,*, Alibek Barlybayev 1,2, Rita Yi Man Li 3 and Ainur Zhumadillayeva 1,4,*
1 Faculty of Information Technologies, L.N. Gumilyov Eurasian National University, Astana 010008, Kazakhstan
2 Higher School of Information Technology and Engineering, Astana International University, Astana 010008, Kazakhstan
3 Department of Economics and Finance, Hong Kong Shue Yan University, Hong Kong, China
4 Department of Computer Engineering, Astana IT University, Astana 010000, Kazakhstan
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7281; https://doi.org/10.3390/app14167281
Submission received: 12 June 2024 / Revised: 29 July 2024 / Accepted: 13 August 2024 / Published: 19 August 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Magnetic Resonance Imaging (MRI) is vital in diagnosing brain tumours, offering crucial insights into tumour morphology and precise localisation. Despite its pivotal role, accurately classifying brain tumours from MRI scans is inherently complex due to their heterogeneous characteristics. This study presents a novel integration of advanced segmentation methods with deep learning ensemble algorithms to enhance the classification accuracy of MRI-based brain tumour diagnosis. We conduct a thorough review of both traditional segmentation approaches and contemporary advancements in region-based and machine learning-driven segmentation techniques. This paper explores the utility of deep learning ensemble algorithms, capitalising on the diversity of model architectures to augment tumour classification accuracy and robustness. Through the synergistic amalgamation of sophisticated segmentation techniques and ensemble learning strategies, this research addresses the shortcomings of traditional methodologies, thereby facilitating more precise and efficient brain tumour classification.

1. Introduction

The human brain, the body’s most complex organ, regulates various physiological functions, including sensory integration. Brain tumours (BTs), among the most common global malignancies, disrupt these functions, leading to severe consequences, including death [1,2]. Normal cellular turnover involves programmed cell death and regeneration, but BTs cause uncontrolled cell proliferation, impairing brain functions. BTs can be malignant or benign, with symptoms such as fever, headaches, and cognitive decline, often leading to fatality [3,4].
The early and accurate detection of BTs is crucial for improved patient outcomes. Medical imaging modalities such as MRI, CT, and PET are used for this purpose. MRI, favoured for its high-resolution images, uses contrast agents such as gadolinium to differentiate pathological tissues. Computer-aided diagnosis (CAD) systems analyse these tissues for precise BT detection [5].
Brain tumours are categorised according to their origin, type, and malignancy, ranging from benign Grade I to aggressive Grade IV, influencing treatment approaches [6,7,8,9,10,11,12]. Both Machine Learning (ML) and Deep Learning (DL) improve the accuracy of brain tumour classification [1]. While ML methods such as SVM and KNN are effective, they necessitate manual feature extraction. In contrast, DL, mainly through Convolutional Neural Networks (CNNs), automates feature learning, enhancing diagnostic precision [13,14].
Recent advancements in AI, especially DL, have revolutionised BT detection. CNNs and ensemble learning methods combine multiple models to enhance classification accuracy and robustness [15]. This paper explores integrating segmentation and ensemble learning techniques for MRI-based BT classification, discussing advancements, clinical implications, and future research directions to improve patient treatment and outcomes [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43].
An overview of the primary deep learning-based brain tumour segmentation techniques illustrates the effectiveness of these methods (Figure 1).
These techniques highlight the advancements in segmentation methods, contributing to more accurate and efficient brain tumour classification. This paper explores integrating these sophisticated segmentation techniques with ensemble learning strategies, addressing traditional methodologies’ shortcomings, thereby facilitating more precise and efficient brain tumour classification [44].

2. Materials and Methods

Integrating segmentation methods with deep learning ensemble algorithms encompasses diverse techniques and architectures, each contributing to enhanced MRI-based brain tumour classification. At the core of this integration are deep learning models, which leverage neural networks’ hierarchical representation learning capabilities to extract discriminative features from MRI data. These building blocks include Deep Convolutional Neural Networks (DCNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), Deep Neural Networks (DNNs), Deep Autoencoders (AEs), and Generative Adversarial Networks (GANs) [45,46,47].
DCNNs have emerged as the cornerstone of medical image analysis, offering superior performance in image segmentation and classification tasks. By employing multiple layers of convolutional filters, DCNNs can effectively capture spatial hierarchies and learn complex patterns from MRI data, enabling accurate tumour segmentation and feature extraction. CNNs, a subset of DCNNs, have demonstrated remarkable success in various image analysis tasks, including brain tumour classification. By leveraging convolutional layers and pooling operations, CNNs can extract local features from MRI images, providing valuable information for tumour characterisation and classification. RNNs and LSTM networks offer unique advantages in processing sequential data, making them well suited to tasks involving temporal dynamics and spatial relationships within MRI sequences. By modelling sequential dependencies, RNNs and LSTMs can capture temporal patterns in tumour progression and enhance classification accuracy. DNNs encompass a broad class of neural network architectures with multiple layers of interconnected nodes. These networks can learn intricate representations from high-dimensional data, making them suitable for complex tasks such as brain tumour classification.
AEs leverage unsupervised learning principles to extract latent representations of input data. By learning to reconstruct input images from compressed representations, deep autoencoders can capture meaningful features and reduce data dimensionality, facilitating more efficient classification [48]. GANs offer a novel approach to data generation and representation learning by training two neural networks simultaneously: a generator and a discriminator. GANs can generate realistic synthetic images that resemble real MRI data, providing valuable augmentation for training deep learning models and enhancing their generalisation capabilities [49,50].
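As one concrete example of these building blocks, the following is a minimal convolutional autoencoder sketch in PyTorch, illustrating how an AE compresses an image into a latent code and reconstructs it; the single-channel 128 × 128 input and the layer sizes are illustrative assumptions, not the configuration used in this study.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: progressively downsample a 1-channel 128x128 slice.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # -> 64x64
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # -> 32x32
            nn.ReLU(),
        )
        # Decoder: mirror the encoder with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),  # -> 64x64
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),   # -> 128x128
            nn.Sigmoid(),  # intensities assumed normalised to [0, 1]
        )

    def forward(self, x):
        latent = self.encoder(x)   # compressed representation
        return self.decoder(latent)

model = ConvAutoencoder()
slice_batch = torch.rand(4, 1, 128, 128)  # placeholder MRI slices
reconstruction = model(slice_batch)
loss = nn.functional.mse_loss(reconstruction, slice_batch)  # reconstruction objective
```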
Integrating suitable segmentation methods with deep learning-based ensemble algorithms for MRI-based brain tumour classification aims to enhance the accuracy and robustness of tumour analysis. Here is a brief overview of each component:
  • Segmentation: Suitable segmentation methods aim to accurately identify tumour regions in MRI images, separating them from healthy tissues. Common approaches include thresholding, region growing, active contours, and machine learning-based methods [51,52]; a minimal thresholding sketch is given after this list. The choice of method depends on the tumour’s complexity and the data quality [53].
  • Ensemble algorithms: Deep learning-based ensemble algorithms combine multiple models to create a more accurate and robust classification system. The constituent models can be trained on different types of data, and their predictions are integrated to produce a final result. This approach helps reduce errors and improves the overall accuracy of the classification task [54].
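As a minimal illustration of the thresholding family mentioned above, the following sketch (assuming NumPy, SciPy, and scikit-image) applies Otsu’s threshold to a 2D slice and keeps the largest bright connected component as a candidate tumour mask; a real pipeline would add bias-field correction and morphological clean-up.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def threshold_segment(slice_2d: np.ndarray) -> np.ndarray:
    """Return a boolean mask of the largest bright connected region."""
    t = threshold_otsu(slice_2d)         # global intensity threshold
    binary = slice_2d > t
    labels, n = ndimage.label(binary)    # connected-component labelling
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)  # keep the largest component

# Usage on a synthetic slice containing one bright blob:
img = np.zeros((64, 64))
img[20:30, 20:30] = 1.0
mask = threshold_segment(img + 0.05 * np.random.rand(64, 64))
```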
By combining these two techniques, researchers can create a powerful tool for analysing MRI images and detecting tumours with greater accuracy and reliability. Deep learning-based ensemble algorithms utilise the diversity of multiple models to enhance classification accuracy and robustness by combining their predictions. Techniques such as bagging, boosting, and stacking combine the output of various models into a collective decision. In the case of brain tumour classification, convolutional neural networks (CNNs) trained on segmented regions of the tumour are utilised to extract discriminatory features that are then fed into ensemble algorithms for final classification [52].
The process of combining segmentation methods with deep learning ensemble algorithms includes the following steps:
Segmentation of tumour areas: Utilize suitable segmentation techniques to identify tumour regions within MRI images. This involves preprocessing the data, applying appropriate segmentation algorithms, and creating segmented masks for the tumour areas [51].
Feature extraction: Extract features from segmented tumour regions using deep learning models such as CNNs. Train these models on the segmented regions to learn features that can capture patterns and characteristics specific to brain tumours [52].
Ensemble learning: Use ensemble learning techniques to combine the predictions of multiple deep learning models. This could involve training different CNN architectures with different initialization parameters, and then combining their predictions by averaging, voting, or other ensemble methods [55].
Classification: Use the combined predictions from the ensemble to classify tumours into different categories. The final classification can be based on the consensus of the individual models’ predictions, with each model’s contribution weighted according to its performance or confidence level [53].
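The ensemble and classification steps above can be made concrete with a minimal sketch, assuming PyTorch: softmax outputs of several fine-tuned CNNs are averaged with per-model weights (e.g., validation accuracies), and the weighted consensus gives the final class. The weights below are placeholders, not values from this study.

```python
import torch

def weighted_ensemble(probabilities: list, weights: list) -> torch.Tensor:
    """probabilities: one (batch, n_classes) softmax tensor per model."""
    w = torch.tensor(weights)
    w = w / w.sum()                                   # normalise the weights
    stacked = torch.stack(probabilities)              # (n_models, batch, classes)
    consensus = (w.view(-1, 1, 1) * stacked).sum(0)   # weighted average
    return consensus.argmax(dim=1)                    # final class per sample

# Three hypothetical models voting on a batch of 2 scans, 4 tumour classes:
preds = [torch.softmax(torch.randn(2, 4), dim=1) for _ in range(3)]
labels = weighted_ensemble(preds, weights=[0.95, 0.91, 0.88])
```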
By integrating appropriate segmentation techniques with deep learning ensemble algorithms, researchers seek to enhance the accuracy, dependability, and generalizability of MRI-driven brain tumour classification models. This integrated strategy enables more accurate tumour analysis, facilitating improved treatment planning and patient care in neuro-oncology [51,56].
The study employed transfer learning with five pre-trained CNN models: AlexNet, VGG16, GoogleNet, ResNet18, and ResNet50, all originally trained on the ImageNet dataset. It also provides a detailed discussion of each model’s architecture:
1. AlexNet:
Alex Krizhevsky introduced AlexNet in the 2012 ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). It won the competition with a top-five error rate of 15.3%, more than 10.8 percentage points lower than that of the runner-up [51]. Initially trained on two GPUs, AlexNet now requires only one GPU. It is a relatively shallow network with eight layers, comprising five convolutional layers (Conv) followed by three fully connected (FC) layers [52].
The architecture includes layers with three different filter sizes (11 × 11, 5 × 5, and 3 × 3), data augmentation, dropout, and max pooling operations [53]. Notably, the traditional sigmoid activation function SF(x) was replaced with the rectified linear unit (ReLU) activation function to address the vanishing gradient problem associated with sigmoid functions [53]. This change enhanced the training stability of the model by preventing the learning process from halting when the gradient approaches zero, as captured by the following equations [52]:
1. Sigmoid function (Equation (1)):
$SF(x) = \frac{1}{1 + \exp(-x)}$ (1)
2. Rectified linear unit (ReLU) function (Equation (2)):
$ReLU(x) = \max(0, x)$ (2)
The transition from the sigmoid to the ReLU activation function was due to the vanishing gradient problem associated with the sigmoid function. The gradients become very small as inputs move away from zero, causing the learning process to stall. In contrast, ReLU produces non-zero gradients for positive inputs, which prevents gradient saturation and speeds up learning [55].
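This contrast can be shown numerically with a small sketch in plain Python and NumPy (the sample inputs are arbitrary): the sigmoid derivative SF(x)(1 − SF(x)) collapses for large |x|, whereas the ReLU gradient stays at 1 for all positive inputs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
sig_grad = sigmoid(x) * (1.0 - sigmoid(x))  # derivative of Equation (1)
relu_grad = (x > 0).astype(float)           # derivative of Equation (2)

print(sig_grad)   # [~4.5e-05, 0.105, 0.25, 0.105, ~4.5e-05] -> saturates
print(relu_grad)  # [0., 0., 0., 1., 1.] -> constant for positive inputs
```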
2. VGG16:
VGGNet, developed by the Visual Geometry Group (VGG) Lab at Oxford University in 2014, secured top positions in the ILSVRC 2014 challenge [53]. Designed by Karen Simonyan and Andrew Zisserman, VGGNet achieved a top-five test accuracy of 92.7%. This study uses the VGG16 variant, which includes 13 convolutional layers and 3 fully connected layers [52]. It utilises small (3 × 3) convolution filters with a stride of 1 and the same padding throughout all convolutional layers [51]. Additionally, it employs (2 × 2) filters for pooling with a stride of 2. The default input size for images in VGG16 is 224 × 224 [52].
The VGG16 model is a convolutional neural network (CNN) architecture proposed by the Visual Geometry Group at the University of Oxford. Its name derives from the fact that it is composed of 16 convolutional and fully connected layers, resulting in a deep network architecture. Despite this considerable depth, VGG16 follows a simple and uniform design, utilising small 3 × 3 convolutional filters with max pooling layers interspersed throughout the network. While this approach may seem relatively straightforward, VGG16 had a competitive performance on various image classification benchmarks. Consequently, the VGG16 architecture has been widely adopted as a baseline model in numerous studies and applications across the field of computer vision [12].
3. GoogleNet (Inception v1):
GoogleNet, introduced by Google’s research group in 2014 in the paper “Going Deeper with Convolutions”, secured first place in the ILSVRC 2014 competition with a remarkable top-five error rate of 6.67%. GoogleNet has 22 layers, and its design was inspired by LeNet. Its architecture uses tiny (1 × 1) convolution filters to reduce the number of intermediate parameters, lowering the parameter count from 60 million in AlexNet to 4 million [53].
In 2014, researchers at Google developed GoogleNet, also known as Inception v1, a convolutional neural network (CNN) architecture. The critical characteristic of GoogleNet is its deep and wide architecture, which features multiple parallel convolutional pathways. The core innovation of this model was the introduction of inception modules, which perform convolutions with multiple filter sizes and concatenate the output feature maps. By leveraging diverse receptive fields at different scales, GoogleNet achieved competitive performance using fewer parameters than traditional CNN architectures. The inception module concept has been further refined in subsequent versions of the model, such as Inception v2 and Inception v3, to improve efficiency and performance.
Other notable architectural choices in GoogleNet include the use of 1 × 1 filters to limit the number of parameters, the incorporation of global average pooling at the end of the network to reduce feature map size and increase accuracy while decreasing trainable parameters, and inception modules that employ fixed convolutions of different sizes (1 × 1, 3 × 3, 5 × 5) along with 3 × 3 max pooling. This multi-scale approach helps the model effectively manage objects at various scales. Additionally, GoogleNet introduced intermediate classifier branches, such as the Auxiliary Classifier, to provide regularisation and address vanishing gradient issues during training. These innovations have contributed to the strong performance and efficiency of the GoogleNet family of models across various computer vision tasks [12].
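A simplified inception module, following the description above, can be sketched in PyTorch: parallel 1 × 1, 3 × 3, and 5 × 5 convolutions plus a 3 × 3 max-pool branch, with 1 × 1 bottleneck convolutions to limit parameters, concatenated along the channel axis. The channel counts here are illustrative, not GoogleNet’s exact configuration.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, 8, kernel_size=1),            # 1x1 bottleneck
            nn.Conv2d(8, 16, kernel_size=3, padding=1))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, 8, kernel_size=1),            # 1x1 bottleneck
            nn.Conv2d(8, 16, kernel_size=5, padding=2))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 16, kernel_size=1))

    def forward(self, x):
        # Concatenate the four parallel branches channel-wise (multi-scale).
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

out = InceptionModule(32)(torch.randn(1, 32, 28, 28))  # -> (1, 64, 28, 28)
```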
4. ResNet18 and ResNet50:
The Residual Network (ResNet), created by Kaiming He and his team at Microsoft Research, was introduced at ILSVRC 2015. It won the classification task with an impressive 3.57% error rate on the ImageNet test set [52]. ResNet is distinguished by its ability to address the vanishing gradient problem, allowing for the effective training of very deep neural networks.
One of ResNet’s key advantages is its ability to manage deep architectures while minimising computational complexity and training time. ResNet is available in various versions, such as ResNet18, ResNet50, and ResNet101 [53].
In this study, ResNet18 and ResNet50 were employed. ResNet18 consists of 18 layers, while ResNet50 comprises 50 layers. These variations allow flexibility in balancing model complexity and computational resources, catering to different application needs [52].
ResNet, short for Residual Network, is a family of convolutional neural network (CNN) architectures introduced by Microsoft Research in 2015. The critical innovation of ResNet is its ability to address the problem of vanishing gradients in deep networks by incorporating skip or residual connections. Two prominent variants of the ResNet architecture are ResNet18 and ResNet50, which have 18 and 50 layers, respectively. These models utilise residual blocks, where the input is added to the output of the block, allowing for easier optimisation of very deep networks. This residual connection helps the model learn the difference or “residual” between the input and the desired output rather than learning the complete transformation from scratch. The ResNet architectures have demonstrated superior performance in image classification tasks, particularly on datasets with many classes or challenging visual patterns. By employing these residual connections, ResNet models can overcome the vanishing gradient issue when training very deep neural networks. This leads to improved performance and the ability to leverage the increased depth of the model effectively [55].
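A minimal ResNet-style basic block (the building block of ResNet18) can be sketched in PyTorch to show the skip connection described above: the input x is added to the block’s output, so the layers learn the residual F(x) rather than the full mapping. Matching input and output channel counts are assumed for simplicity.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        residual = x                           # identity skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + residual                   # y = F(x) + x
        return self.relu(out)

y = BasicBlock(64)(torch.randn(1, 64, 56, 56))  # spatial shape preserved
```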
Figure 2 shows a schematic diagram of an artificial intelligence-based system designed for tumour detection and segmentation in medical images.
This diagrammatic representation systematically encapsulates the fusion of convolutional neural networks with sophisticated classification algorithms, crafting a comprehensive medical imaging solution. This integration aims to elevate diagnostic precision and contribute positively to enhanced healthcare outcomes. The pipeline initiates with the acquisition of source medical images, which are subjected to a series of pre-processing techniques. These typically include normalisation, which adjusts pixel values for uniformity across different images, as well as other image-enhancement methods aimed at improving the visual clarity and feature consistency essential for subsequent analysis.
The conditioned images are then input into a cascade of convolutional layers. Within each layer, a set of trainable filters performs spatial feature extraction by convolving with the image matrix, thereby capturing the hierarchical patterns essential for complex pattern recognition. After each convolution, a rectified linear unit (ReLU) activation function is applied to each feature map to introduce non-linear transformations, which are crucial for learning non-linear complexities in image data. After selected convolutional layers, max pooling operations downsample the spatial dimensions of the feature maps. This step is pivotal in reducing the computational demand and mitigating the risk of overfitting by emphasising the most salient features and suppressing the less relevant ones. The architecture further includes additional convolutional layers that refine the depth and granularity of feature extraction. Transposed convolutional layers, often referred to as deconvolutional layers, are employed to progressively rescale the feature maps back to higher resolutions, a process essential for maintaining spatial integrity in tasks such as image segmentation.
The culmination of the neural network’s processing is the generation of a segmented image. This output distinctly isolates and delineates the regions of interest, such as tumours, from the non-relevant background, enabling precise medical evaluations. Concurrently, the feature maps are exploited in a classification framework, where an SVM classifier operationalises an entropy-based feature selection methodology. This approach rigorously identifies and utilises the most discriminative features for robust classification of tumour presence. To augment the model’s efficacy, fine-tuning is conducted using MobileNetV2 (Figure 3), a streamlined deep neural network architected for computational efficiency on mobile and edge devices. This step aims to optimise the trade-off between computational speed and model accuracy. The integrated system outputs both the classification results, indicating the presence or absence of tumours, and the segmented images visually highlighting the tumour regions.
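The classification branch of this pipeline can be sketched as follows, assuming torchvision (≥0.13) and scikit-learn: a pre-trained MobileNetV2 backbone supplies deep features, which an SVM classifies. The study’s exact entropy-based feature selection is not reproduced here; scikit-learn’s mutual-information ranking stands in for it, and all data are placeholders.

```python
import torch
from torchvision import models
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

backbone = models.mobilenet_v2(weights="IMAGENET1K_V1").features.eval()

def deep_features(batch: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        fmap = backbone(batch)        # (N, 1280, h, w) feature maps
        return fmap.mean(dim=(2, 3))  # global average pooling -> (N, 1280)

# Placeholder data standing in for pre-processed MRI crops and labels.
X = deep_features(torch.randn(20, 3, 224, 224)).numpy()
y = [0, 1] * 10  # tumour / no tumour

# Rank features by mutual information, keep the top 256, then fit an SVM.
selector = SelectKBest(mutual_info_classif, k=256).fit(X, y)
clf = SVC(kernel="rbf").fit(selector.transform(X), y)
```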
Several key performance metrics are commonly used in the context of a binary classification problem such as tumour detection. True Positive (TP) represents the number of correctly identified positive samples, meaning those with tumours. False Positive (FP) refers to the number of samples incorrectly identified as positive, that is, those without tumours. True Negative (TN) denotes the number of correctly identified negative samples, or those without tumours. Conversely, False Negative (FN) indicates the number of samples incorrectly identified as negative, that is, those with tumours. Accuracy represents the proportion of correctly identified samples in the data. These metrics provide a comprehensive assessment of the model’s performance in distinguishing between the presence and absence of tumours in the given dataset [1]. The following equations describe the evaluation criteria used in this study:
$Accuracy = \frac{T_p + T_n}{T_p + F_p + T_n + F_n}$
Precision is a performance metric representing the proportion of correctly identified positive samples among all those identified as positive by the model. Recall is the ratio of correctly identified positive samples to the total number of positive samples. The F1 score is a composite metric combining precision and recall into a single value.
$Precision = \frac{T_p}{T_p + F_p}$
$Recall = \frac{T_p}{T_p + F_n}$
$F1\ score = \frac{2 \times Precision \times Recall}{Precision + Recall}$
The Dice coefficient (DSC) is similar to the Intersection over Union (IoU) and ranges from 0 to 1, with 1 indicating the highest similarity between the predicted segmentation and the ground truth. This coefficient evaluates the spatial overlap between automated probabilistic (fractional) segmentations of MR images and manual segmentations.
$DSC = \frac{2T_p}{2T_p + F_p + F_n}$
Intersection over Union (IoU) is a performance metric used to evaluate the accuracy of object detection models. It compares the overlap between the predicted positive regions (including both true positives and false positives) and the actual positive regions (including both true positives and false negatives) in the dataset. The intersection of these sets represents the true positive (Tp) detections, while the union encompasses the true positives, false positives (Fp), and false negatives (Fn).
$IoU = \frac{T_p}{T_p + F_p + F_n}$
The Similarity Index (SI) is a performance metric used to assess the accuracy of tumour detection models. It measures the similarity between the ground truth annotations representing the actual tumour regions and the model’s segmentation output, which identifies the predicted tumour regions. A higher SI value indicates a closer match between the predicted segmentation and the ground truth, signifying a more accurate detection of the tumour areas.
$SI = \frac{2T_p}{2T_p + F_p + F_n}$
These evaluation metrics are essential for assessing the performance of brain tumour detection and classification models. They measure the models’ ability to correctly identify positive (tumour-containing) and negative (tumour-free) samples and differentiate between different types of tumours. Metrics such as True Positive, False Positive, True Negative, False Negative, Accuracy, Precision, Recall, F1 score, Intersection over Union, and Similarity Index comprehensively evaluate the model’s performance in various tumour detection and classification aspects.
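For reference, the metrics above can be computed directly from the four confusion-matrix counts; the sketch below is plain Python. Note that the Dice coefficient and the Similarity Index share the same formula, and the F1 score coincides with Dice when computed from the same counts.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "dice": 2 * tp / (2 * tp + fp + fn),  # same formula as SI
        "iou": tp / (tp + fp + fn),
    }

# Example: 90 true positives, 5 false positives, 92 true negatives,
# 8 false negatives.
print(classification_metrics(tp=90, fp=5, tn=92, fn=8))
# accuracy ~0.933, precision ~0.947, recall ~0.918,
# f1 = dice ~0.933, iou ~0.874
```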

3. Dataset

To evaluate the performance of our methods, we used the BRATS 2018 dataset (Menze et al., 2015; Bakas et al., 2017a,b,c, 2018). The training set comprised images from 285 patients, including 210 with high-grade gliomas (HGG) and 75 with low-grade gliomas (LGG). The validation set included MRI scans from 66 patients with brain tumours of an unknown grade, as predefined by the BRATS challenge organisers. The test set consisted of images from 191 brain tumour patients. Of these, 77 had undergone Gross Total Resection (GTR), and their data were used to evaluate the model’s ability to predict survival. Each patient was scanned using T1, T1Gd, T2, and FLAIR sequences. Each image was skull-stripped and resampled to a uniform isotropic resolution of 1 mm³. The sequences for each patient were co-registered to ensure consistency. Experts manually created the ground truth segmentation masks.
The model’s performance was assessed on the validation and test sets using the CBICA Image Processing Portal, available at ipp.cbica.upenn.edu. Segmentation annotations included tumour subtypes such as necrotic/non-enhancing tumour (NCR), peritumoral oedema (ED), and gadolinium-enhancing tumour (ET).
The step-by-step process of training the deep learning models for brain tumour classification was as follows:
Data Preparation:
1. Dataset collection: We started by collecting a comprehensive dataset of MRI scans, such as the BRATS (Brain Tumor Segmentation Challenge) series from 2013 to 2019. This dataset included thousands of MRI images with corresponding annotations indicating tumour regions.
2. Image preprocessing: The collected images underwent several preprocessing steps, including bias field correction to address intensity inhomogeneities, intensity normalisation to standardise pixel values, and data augmentation techniques such as rotations, flips, and zooms to increase the dataset’s variability and improve model robustness.
Segmentation:
3. Tumour segmentation: Suitable segmentation methods were applied to accurately identify tumour regions within the MRI images. Techniques such as thresholding, region growing, and active contours created segmented masks that separated tumours from healthy tissues.
Feature Extraction:
4. Feature extraction: Deep learning models, particularly convolutional neural networks (CNNs), extracted features from the segmented tumour regions. The CNNs were trained on these regions to learn distinctive features and patterns specific to brain tumours.
Model Selection and Initialization:
5. Model selection: We chose five pre-trained CNN models (AlexNet, VGG16, GoogleNet, ResNet18, and ResNet50), each renowned for its effectiveness in image analysis tasks. These models were initially trained on the ImageNet dataset.
6. Transfer learning: We applied transfer learning to tailor these models specifically for brain tumour classification. This process involved fine-tuning the pre-trained models on our MRI dataset, enabling them to utilise pre-learned features while adapting to the nuances of medical images.
Training Process:
7. Training image pairs: The training involved pairs of MRI images and their annotated labels; approximately 2000 MRI images with corresponding annotations were used.
8. Training parameters: We experimented with various training parameters, such as learning rate and batch size, to find the optimal settings for each model.
9. Regularisation techniques: Regularisation methods, including dropout and early stopping, were employed to prevent overfitting and ensure the models generalise well to new data.
10. Training duration: The training process was conducted over 72 h on an NVIDIA Tesla V100 GPU, utilising its computational power to handle extensive data and complex computations.
Ensemble Learning:
11. Ensemble learning: To enhance classification accuracy and robustness, ensemble learning techniques such as bagging, boosting, and stacking were used. These methods combined the predictions of multiple models, producing a final, more accurate classification result.
Evaluation and Fine-Tuning:
12. Model evaluation: The trained models were evaluated using various metrics, including accuracy, sensitivity, specificity, and the Dice coefficient index. Based on the evaluation results, fine-tuning was performed to further improve performance.
We meticulously followed these steps and ensured our models were well-prepared and optimised for accurate and robust brain tumour classification [3,5,9].
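Steps 5 to 10 can be condensed into a hedged PyTorch sketch: a pre-trained ResNet18 is fine-tuned with augmentation and early stopping. The directory layout, class count, and hyperparameters below are placeholders, not the study’s exact settings.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Step 2: normalisation plus rotation/flip augmentation.
augment = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.RandomHorizontalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical directory layout: one sub-folder per tumour class.
train_set = datasets.ImageFolder("mri_slices/train", transform=augment)
val_set = datasets.ImageFolder("mri_slices/val", transform=augment)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=32)

# Steps 5-6: pre-trained backbone with a new classification head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 4)  # e.g., 4 tumour classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

best_val, stale = float("inf"), 0
for epoch in range(50):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():  # validation pass on the held-out split
        val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)
    # Step 9: early stopping after 5 epochs without improvement.
    if val_loss < best_val:
        best_val, stale = val_loss, 0
    else:
        stale += 1
        if stale >= 5:
            break
```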

4. Results

Figure 4 presents a sequence of MRI scans illustrating the detection and segmentation process of a meningioma, a type of brain tumour.
The first image on the left displays a transverse section of a brain MRI scan. It prominently shows a large, hyperintense mass in the cerebral tissue, characteristic of a meningioma. The mass appears white on the T1-weighted MRI image due to its contrast enhancement, indicating the presence of a dense, extra-axial tumour.
The middle image is a binary mask highlighting the segmented tumour. This mask simplifies the image to show only the tumour region, isolated from the surrounding brain structures. Here, the tumour is depicted as a pure white area on a black background, emphasising the exact shape and boundaries of the meningioma as identified by the image processing algorithms.
The final image on the right reintroduces the context of the original MRI scan, overlaying a yellow outline that demarcates the perimeter of the tumour. This outline visualises the tumour’s extent and spatial relationship to nearby brain structures, which is crucial for surgical planning and targeted therapies. The series effectively demonstrates the application of image processing techniques in medical imaging, specifically for tumour identification, isolation, and data preparation for further clinical analysis or intervention.
Integrating suitable segmentation methods with a deep learning-based ensemble algorithm offered several advantages, including improved accuracy, robustness, and interpretability. However, challenges such as dataset bias, class imbalance, and computational resource demands needed to be addressed.
Overall, the results demonstrated that the proposed approach could enhance the performance of MRI-based brain tumour classification, contributing to improved patient outcomes and personalised treatment strategies.
Table 1 compares the specific attributes and characteristics of commonly used pre-trained CNN architectures, including AlexNet, VGG16, GoogleNet, ResNet18, and ResNet50. Each architecture is evaluated based on layer count, input size, model description, unique features, top-five error rates, and the total number of parameters in millions.
The data in Table 1 underscore the varied design principles and efficiencies inherent in each CNN architecture, offering insights into their suitability for specific image processing and pattern recognition tasks. The Layers column indicates the total number of layers in each model, reflecting their depth: AlexNet has 8 layers, VGG16 has 16, GoogleNet has 22, ResNet18 has 18, and ResNet50 has 50. Except for AlexNet, which processes images of 227 × 227 pixels, all models handle images of 224 × 224 pixels with three colour channels (RGB). The Model Description column outlines the composition of each architecture in terms of convolutional (Conv) and fully connected (FC) layers. For instance, AlexNet includes five convolutional layers and three fully connected layers, while ResNet50 features 49 convolutional layers and 1 fully connected layer.
Each model has unique features that enhance its functionality. AlexNet incorporates local response normalisation and overlapping max pooling. VGG16 is noted for its deeper layer structure. GoogleNet integrates object localisation and image classification, employs 1 × 1 convolutions, utilises global average pooling, and includes an inception module. The ResNet architectures (ResNet18 and ResNet50) are characterised by their use of skip connections, which help mitigate the vanishing gradient problem during training. The Top-Five Error Rate column indicates the models’ accuracy over their top five predictions, with lower percentages signifying higher accuracy: ResNet18 and ResNet50 have the lowest error rate at 3.57%, followed by GoogleNet at 6.67%, VGG16 at 7.3%, and AlexNet at 15.3%. The number of trainable parameters varies significantly among the models, reflecting their complexity and computational demands. VGG16 has the highest number of parameters at 138 million, while GoogleNet is the most parameter-efficient with only 4 million.
These CNN architectures are extensively used in various computer vision tasks, such as image classification, object detection, and semantic segmentation. They are potent tools for feature extraction and representation learning, facilitating the development of advanced deep-learning models for various applications [57].
By incorporating these diverse deep learning architectures into ensemble frameworks, researchers can harness the complementary strengths of individual models and improve classification performance. Ensemble learning techniques such as bagging, boosting, and stacking further enhance the robustness and reliability of classification systems by aggregating predictions from multiple models. Through the synergistic integration of segmentation methods and ensemble learning, researchers aim to unlock the full potential of MRI-based brain tumour classification, paving the way for more accurate diagnosis and personalised treatment strategies in neuro-oncology [44].
We fine-tuned the model parameters in our machine learning process to optimise performance and enhance results. For our study, we trained various deep learning models (AlexNet, VGG16, GoogleNet, ResNet18, and ResNet50), initially on the ImageNet dataset. Subsequently, we carefully conducted additional training on specialised datasets to adapt these models for the specific tasks of medical image analysis, particularly the segmentation and classification of brain tumours.
Figure 5 shows examples of segmentation outcomes contrasted with the ground truth.
The provided image compares the segmentation results and the ground truth for a medical image identified as Brats18_TCIA04_343_1_flair. The comparison is organised in three rows and two columns.
Left column (ground truth): These images represent the manually labelled ground truth, indicating the regions of different tissue types within the brain scan. The colours used are as follows:
Green: oedema;
Yellow: non-enhancing solid core;
Red: enhancing core.
Right column (prediction result): These images represent the automated segmentation results produced by a model. The same colour coding is used as in the ground truth images:
Green: oedema;
Yellow: non-enhancing solid core;
Red: enhancing core.
Table 2 presents a comprehensive evaluation of the proposed method using various datasets, including the Brain Tumor Segmentation Challenge (BRATS) series from 2013 to 2019, the Alzheimer’s Disease Neuroimaging Initiative (ADNI), The Cancer Imaging Archive (TCIA), and BrainWeb. These datasets were employed to train models on a broad spectrum of realistic neuroimaging data. As detailed in Table 2, the proposed method exhibits exemplary performance in tumour detection using the Multi-Support Vector Machine (M-SVM) classifier, particularly with the BRATS 2018 dataset. Table 3 demonstrates that the proposed brain tumour detection method performs exceptionally well with M-SVM classification.
The values reported indicate the high effectiveness of the method, with an accuracy of 97.47%, demonstrating the model’s ability to identify both positive and negative cases correctly. The sensitivity is recorded as 97.22%, indicating the model’s proficiency in correctly identifying positive cases. The specificity is 97.94%, reflecting the model’s accuracy in identifying negative cases. Finally, the Dice coefficient index, which measures the similarity between the predicted and actual labels, is 96.71%. These metrics collectively suggest that the proposed method exhibits robust performance in detecting brain tumours.
Table 4 provides a comparative analysis of various methodologies for brain tumour classification using the BRATS 2018 dataset. The table outlines the classification accuracies of different techniques, facilitating an evaluation of their efficacy in tumour detection. The methods include a combination of a CNN and Local Binary Patterns (LBP) with Particle Swarm Optimization (PSO) reported by Irfan et al. [44], achieving 92.5% accuracy. The LSTM of Amin et al. [51] shows slightly higher accuracy at 93.8%. The brain-storm optimisation technique of Narmatha et al. [52] matches the first at 92.5%, while a method integrating the Discrete Cosine Transform (DCT), a CNN, and the Extreme Learning Machine (ELM) by Khan et al. [53] achieves 93.4%. Notably, the proposed model, which utilises a CNN, MobileNetV2, and a Multi-Support Vector Machine (M-SVM), significantly surpasses these, recording a classification accuracy of 97.5%. This indicates the superior performance of the proposed method in accurately classifying brain tumours, highlighting its potential utility in clinical application.
The researchers used a multifaceted approach to improve the results of their medical image analysis models. First, they fine-tuned the model architectures to match the characteristics of medical image data, for example by adding or changing particular layers. This architectural adaptation enhanced the models’ ability to capture the unique features and patterns of medical imagery. Next, the researchers experimented with different training parameters, such as learning rate and batch size, to find the optimal settings for each model and dataset. By systematically exploring the parameter space, they sought to unlock the full potential of their models and ensure optimal performance on specific medical image analysis tasks. Additionally, the researchers applied regularisation techniques, such as adding regularisation layers and using early stopping, to prevent over-training of the models. These strategies helped to improve the models’ generalisation capabilities, ensuring that they could reliably perform well on new, unseen medical images. Finally, the researchers tested various loss functions to identify the one best suited to the task. By carefully selecting the appropriate loss function, they could further refine the models’ performance and enhance the accuracy and reliability of the medical image analysis outcomes. Through this multifaceted approach, encompassing architectural adaptation, training parameter optimisation, regularisation strategies, and loss function selection, the researchers worked to obtain the most accurate and reliable outcomes from their medical image analysis models.
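The parameter search described above can be sketched, in a hedged form, as a small grid over learning rate and batch size scored on validation accuracy, with the best setting kept per model. Here `train_and_validate` is a hypothetical stand-in for one fine-tuning run (such as the loop shown in Section 3); it returns a mock score so the sketch runs end to end.

```python
import random
from itertools import product

def train_and_validate(model_name: str, lr: float, batch_size: int) -> float:
    # Placeholder for a real fine-tuning run; returns a mock validation
    # accuracy derived deterministically from the configuration.
    rng = random.Random(hash((model_name, lr, batch_size)))
    return rng.uniform(0.90, 0.98)

best_settings = {}
for model_name in ["alexnet", "vgg16", "googlenet", "resnet18", "resnet50"]:
    scores = {
        (lr, bs): train_and_validate(model_name, lr, bs)
        for lr, bs in product([1e-3, 1e-4, 1e-5], [16, 32, 64])
    }
    best_settings[model_name] = max(scores, key=scores.get)

print(best_settings)  # best (learning rate, batch size) per architecture
```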
After preliminary training on a general dataset, each model was further trained on a specific dataset to improve results in a particular area, in this case, medical image processing. This additional training allowed the models to adapt better to the specifics of classifying and segmenting medical images, including brain scans. Hyperparameter fine-tuning methods were also applied, and the learning algorithms were optimised to achieve the best results on the specific task. This approach significantly improved the accuracy of image classification and segmentation, which is essential for diagnosing and treating patients with brain tumours.
As a result of these efforts, we have obtained significant improvements in the accuracy of segmentation and the classification of brain tumours, which makes our model more accurate and useful for medical applications.

5. Discussion

This research incorporates sophisticated deep-learning frameworks to enhance the precision of brain tumour identification and segmentation. Our study focuses on meningiomas, utilising a series of MRI images to showcase our model’s capability in detecting and outlining tumour boundaries accurately.
The visual progression presented in Figure 4, from the detection of a hyperintense meningioma mass to its precise segmentation and contextual visualisation, highlights the effectiveness of combining CNNs with advanced classification algorithms. This methodological fusion is intended to refine diagnostic accuracy and facilitate clinical interventions by providing precise, actionable imaging data. The binary mask and the subsequent overlay on the MRI scan are pivotal in clarifying the tumour’s spatial relationship with adjacent cerebral structures, a critical factor in planning surgical or therapeutic procedures.
The selection of CNN architectures, as detailed in Table 1, underpins our strategy to optimise image analysis. Each architecture, from AlexNet to ResNet50, is selected based on inherent features that support deep learning tasks specific to medical imaging, such as object localisation and reducing vanishing gradients via skip connections. The diversity in layer depth, error rates, and computational efficiency across these models facilitates a tailored approach to feature extraction and image classification, ensuring robustness in tumour segmentation outcomes.
As summarised in Table 3, our empirical evaluations demonstrate the proposed model’s superior performance in tumour detection metrics like accuracy, sensitivity, specificity, and the Dice coefficient index. These metrics not only corroborate the high efficacy of the Multi-Support Vector Machine classifier when applied to the BRATS 2018 dataset, but also reflect the model’s generalizability across diverse neuroimaging datasets, including those from ADNI, TCIA, and BrainWeb.
Despite the promising outcomes, our research recognises inherent challenges such as dataset bias and class imbalance, which could skew model training and affect generalizability. Addressing these challenges involves refining data preprocessing and augmentation techniques to ensure a balanced representation of tumour types and stages across training sets.
Moreover, the comparative performance metrics in Table 4 emphasise the advancement of our proposed method over existing techniques. By achieving a classification accuracy of 97.5%, our model sets a new benchmark in the field, underscoring its potential for clinical application. It surpasses traditional methods and recent innovations, which have shown lower accuracies in similar settings.
The integration of advanced CNN architectures and ensemble learning strategies has proven highly effective in enhancing the accuracy and reliability of brain tumour classification systems. The continuous improvement of these systems through architecture adaptation, the optimisation of training parameters, and advanced regularisation techniques points towards an exciting future where deep learning can significantly contribute to personalised medicine and improved patient outcomes in neuro-oncology. Our future work will further enhance these models, address existing challenges, and expand their applicability to other medical imaging tasks.

Limitations

While the proposed method for brain tumour classification using deep learning and ensemble algorithms shows promising results, several limitations must be acknowledged. The model’s performance relies heavily on the training datasets used, which may not fully represent the diversity of real-world clinical data, affecting generalisation to unseen data. Brain tumour datasets often exhibit class imbalances, leading to biased model training and suboptimal performance for less common tumour types. Additionally, the training and deployment of deep learning models require significant computational resources, which may not be accessible in all clinical settings. Furthermore, the interpretability of these models remains a challenge, as clinicians require clear and understandable explanations for their predictions to aid in decision-making, which the current models may lack. These limitations highlight the need for continued research and development to address dataset bias, class imbalance, computational constraints, and model interpretability, ensuring the deployment of robust and clinically relevant brain tumour classification models.
Robustness and generalisation are critical factors for successfully translating advanced deep-learning models for brain tumour classification from research to clinical practice. Ensuring robustness across different imaging conditions and scanners is crucial, as variations in MRI protocols, scanner types, and patient populations can impact the model’s performance. To ensure generalisation, the models must be validated extensively across diverse clinical settings. Additionally, despite applying regularisation techniques and early stopping, the risk of overfitting persists, especially when models are trained on limited datasets, which can lead to poor performance on new, unseen data. Furthermore, integrating these complex models into clinical workflows poses practical challenges, requiring the development of user-friendly interfaces and seamless integration with hospital information systems to facilitate widespread adoption. Addressing these limitations, including those relating to robustness, generalisation, overfitting, and clinical integration, is crucial for the reliable and effective deployment of advanced deep learning models in real-world medical applications.

6. Conclusions

This study successfully demonstrates the capability of an integrated deep-learning approach to enhance the accuracy and efficiency of meningioma detection and segmentation using MRI scans. The research findings underscore the effectiveness of employing convolutional neural networks combined with advanced classification algorithms, collectively contributing to more precise medical diagnoses and potentially better healthcare outcomes. The utilisation of diverse CNN architectures, as elaborated in Table 1, such as AlexNet, VGG16, GoogleNet, ResNet18, and ResNet50, highlights their distinct advantages in handling complex image processing tasks. Each architecture’s specific features, from local response normalisation in AlexNet to the innovative skip connections in the ResNet series, play a crucial role in improving the model’s ability to accurately classify and segment brain tumours. These architectures are carefully chosen based on their performance metrics and suitability for the intricate requirements of medical image analysis, as reflected in their error rates and computational efficiency.
The outcomes of this study, detailed in Table 3, reveal high accuracy, sensitivity, specificity, and Dice coefficient indices for the proposed method, particularly notable when applied to the BRATS 2018 dataset. These results validate the model’s robustness and potential for clinical applications, providing a solid foundation for surgical planning and targeted therapies. The comparative performance analysis in Table 4 emphasises the superiority of our proposed model over existing methods. This superiority is a testament to the careful integration of CNNs with MobileNetV2 and Multi-Support Vector Machine classifiers, significantly enhancing tumour classification accuracy.
However, the study also acknowledges challenges such as dataset bias, class imbalance, and the required high computational resources. These challenges necessitate refining segmentation and training techniques to ensure the models remain effective across diverse and realistic clinical scenarios. The research confirms that the strategic application of machine learning techniques, specifically tailored CNN architectures and ensemble learning strategies, can significantly improve the performance of MRI-based brain tumour classification systems. This advancement paves the way for more accurate and personalised diagnostic tools and promises to enhance therapeutic strategies for patients with brain tumours. Future work will focus on expanding these techniques to other forms of medical imaging and continuing to refine the models to handle the variability and complexity of real-world clinical data.

Author Contributions

Conceptualization, M.A., A.Z. and A.B.; methodology, A.B., A.Z. and B.M.; software, M.A.; validation, M.A., A.B. and A.Z.; formal analysis, R.Y.M.L.; investigation, M.A., A.B., B.M., R.Y.M.L. and A.Z.; resources, M.A. and A.B.; data curation, M.A. and R.Y.M.L.; writing—original draft preparation, M.A., A.B. and A.Z.; writing—review and editing, M.A., A.Z. and R.Y.M.L.; visualisation, M.A., A.B. and R.Y.M.L.; supervision, B.M. and A.Z.; project administration, B.M.; funding acquisition, B.M. and A.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant no. of the research fund: AP14869848).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the study’s design; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Vidyarthi, A.; Agarwal, R.; Gupta, D.; Sharma, R.; Draheim, D.; Tiwari, P. Machine learning assisted methodology for multiclass classification of malignant brain tumors. IEEE Access 2022, 10, 50624–50640.
2. Steinmetz, J.D.; Seeher, K.M.; Schiess, N.; Nichols, E.; Cao, B.; Servili, C.; Cavallera, V.; Cousin, E.; Hagins, H.; Moberg, M.E.; et al. Global, regional, and national burden of disorders affecting the nervous system, 1990–2021: A systematic analysis for the Global Burden of Disease Study 2021. Lancet Neurol. 2024, 23, 344–381.
3. Shah, H.A.; Saeed, F.; Yun, S.; Park, J.H.; Paul, A.; Kang, J.M. A robust approach for brain tumor detection in magnetic resonance images using finetuned EfficientNet. IEEE Access 2022, 10, 65426–65438.
4. Musa, U.I.; Mensah, Q.E.; Falowo, R. Intracranial-tumor detection and classification system using ConvNet and transfer learning. Int. Res. J. Eng. Technol. 2022, 10, 120–127.
5. Ahmad, S.; Choudhury, P.K. On the performance of deep transfer learning networks for brain tumor detection using MR images. IEEE Access 2022, 10, 59099–59114.
6. Islami, F.; Ward, E.M.; Sung, H.; Cronin, K.A.; Tangka, F.K.; Sherman, R.L.; Zhao, J.; Anderson, R.N.; Henley, S.J.; Yabroff, K.R.; et al. Annual report to the nation on the status of cancer, part 1: National cancer statistics. JNCI J. Natl. Cancer Inst. 2021, 113, 1648–1669.
7. Rizwan, M.; Shabbir, A.; Javed, A.R.; Shabbir, M.; Baker, T.; Obe, D.A.J. Brain tumor and glioma grade classification using Gaussian convolutional neural network. IEEE Access 2022, 10, 29731–29740.
8. Johnson, D.R.; O’Neill, B.P. Glioblastoma survival in the United States before and during the temozolomide era. J. Neuro-Oncol. 2012, 107, 359–364.
9. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits Syst. Signal Process. 2020, 39, 757–775.
10. Fernando, T.; Gammulle, H.; Denman, S.; Sridharan, S.; Fookes, C. Deep learning for medical anomaly detection—A survey. ACM Comput. Surv. (CSUR) 2021, 54, 1–37.
11. Somasundaram, K.; Genish, T.; Kalaiselvi, T. An atlas based approach to segment the hippocampus from MRI of human head scans for the diagnosis of Alzheimer’s disease. Int. J. Comput. Intell. Inform. 2015, 5.
12. Xie, Y.; Zaccagna, F.; Rundo, L.; Testa, C.; Agati, R.; Lodi, R.; Tonon, C. Convolutional neural network techniques for brain tumor classification (from 2015 to 2022): Review, challenges, and future perspectives. Diagnostics 2022, 12, 1850.
13. Louis, D.N.; Perry, A.; Reifenberger, G.; Von Deimling, A.; Figarella-Branger, D.; Cavenee, W.K.; Ohgaki, H.; Wiestler, O.D.; Kleihues, P.; Ellison, D.W. The 2016 World Health Organization classification of tumors of the central nervous system: A summary. Acta Neuropathol. 2016, 131, 803–820.
14. Montaha, S.; Azam, S.; Rafid, A.R.H.; Hasan, M.Z.; Karim, A.; Islam, A. TimeDistributed-CNN-LSTM: A hybrid approach combining CNN and LSTM to classify brain tumor on 3D MRI scans performing ablation study. IEEE Access 2022, 10, 60039–60059.
15. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Für Med. Phys. 2019, 29, 102–127.
16. Rundo, L.; Militello, C.; Vitabile, S.; Russo, G.; Pisciotta, P.; Marletta, F.; Ippolito, M.; D’Arrigo, C.; Midiri, M.; Gilardi, M.C. Semi-automatic brain lesion segmentation in Gamma Knife treatments using an unsupervised fuzzy c-means clustering technique. In Proceedings of the Advances in Neural Networks: Computational Intelligence for ICT, International Workshop on Neural Networks, WIRN 2015, Vietri sul Mare, Italy, 20–22 May 2015; pp. 15–26.
17. Liang, N.Y.; Huang, G.B.; Saratchandran, P.; Sundararajan, N. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans. Neural Netw. 2006, 17, 1411–1423.
18. Nie, D.; Wang, L.; Gao, Y.; Lian, J.; Shen, D. STRAINet: Spatially varying sTochastic residual AdversarIal networks for MRI pelvic organ segmentation. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 1552–1564.
19. Hollon, T.C.; Pandian, B.; Adapa, A.R.; Urias, E.; Save, A.V.; Khalsa, S.S.S.; Eichberg, D.G.; D’Amico, R.S.; Farooq, Z.U.; Lewis, S. Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nat. Med. 2020, 26, 52–58.
20. Natarajan, A.; Kumarasamy, S. Efficient segmentation of brain tumor using FL-SNM with a metaheuristic approach to optimization. J. Med. Syst. 2019, 43, 25.
21. Muhammad, K.; Khan, S.; Del Ser, J.; De Albuquerque, V.H.C. Deep learning for multigrade brain tumor classification in smart healthcare systems: A prospective survey. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 507–522.
22. Mohan, G.; Subashini, M.M. MRI based medical image analysis: Survey on brain tumor grade classification. Biomed. Signal Process. Control 2018, 39, 139–161.
23. Hussain, L.; Saeed, S.; Awan, I.A.; Idris, A.; Nadeem, M.S.A.; Chaudhry, Q.U.A. Detecting brain tumor using machine learning techniques based on different features extracting strategies. Curr. Med. Imaging 2019, 15, 595–606.
24. Gurbină, M.; Lascu, M.; Lascu, D. Tumor detection and classification of MRI brain image using different wavelet transforms and support vector machines. In Proceedings of the 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 1–3 July 2019; pp. 505–508.
25. Sarkar, A.; Maniruzzaman, M.; Ahsan, M.S.; Ahmad, M.; Kadir, M.I.; Islam, S.T. Identification and classification of brain tumor from MRI with feature extraction by support vector machine. In Proceedings of the 2020 International Conference for Emerging Technology (INCET), Belgaum, India, 5–7 June 2020; pp. 1–4.
26. Sekhar, A.; Biswas, S.; Hazra, R.; Sunaniya, A.K.; Mukherjee, A.; Yang, L. Brain tumor classification using fine-tuned GoogLeNet features and machine learning algorithms: IoMT enabled CAD system. IEEE J. Biomed. Health Inform. 2021, 26, 983–991.
27. Ramdlon, R.H.; Kusumaningtyas, E.M.; Karlita, T. Brain tumor classification using MRI images with K-nearest neighbor method. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019; pp. 660–667.
28. Sathi, K.A.; Islam, M.S. Hybrid feature extraction based brain tumor classification using an artificial neural network. In Proceedings of the 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 30–31 October 2020; pp. 155–160.
29. Charan, K.S.; Chokkalingam, S.P.; Sundari, K.S. Efficiency of decision tree algorithm for brain tumor MRI images compared with SVM algorithm. In Proceedings of the 2022 14th International Conference on Mathematics, Actuarial Science, Computer Science and Statistics (MACS), Karachi, Pakistan, 12–13 November 2022; pp. 1–4.
30. Amran, G.A.; Alsharam, M.S.; Blajam, A.O.A.; Hasan, A.A.; Alfaifi, M.Y.; Amran, M.H.; Gumaei, A.; Eldin, S.M. Brain tumor classification and detection using hybrid deep tumor network. Electronics 2022, 11, 3457.
31. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182.
32. Ottom, M.A.; Rahman, H.A.; Dinov, I.D. Znet: Deep learning approach for 2D MRI brain tumor segmentation. IEEE J. Transl. Eng. Health Med. 2022, 10, 1–8.
33. Asif, S.; Yi, W.; Ain, Q.U.; Hou, J.; Yi, T.; Si, J. Improving effectiveness of different deep transfer learning-based models for detecting brain tumors from MR images. IEEE Access 2022, 10, 34716–34730.
34. Park, H.; Sjösund, L.L.; Yoo, Y.; Bang, J.; Kwak, N. ExtremeC3Net: Extreme lightweight portrait segmentation networks using advanced C3-modules. arXiv 2019, arXiv:1908.03093.
35. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587.
36. Chen, X.; Qi, D.; Shen, J. Boundary-aware network for fast and high-accuracy portrait segmentation. arXiv 2019, arXiv:1901.03814.
37. Toufiq, D.M.; Sagheer, A.M.; Veisi, H. A review on brain tumor classification in MRI images. Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 1958–1969.
38. Hossain, M.F.; Islam, M.A.; Hussain, S.N.; Das, D.; Amin, R.; Alam, M.S. Brain tumor classification from MRI images using convolutional neural network. In Proceedings of the 2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET), Kota Kinabalu, Malaysia, 13–15 September 2021; pp. 1–6.
39. Saranya, N.; Renuka, D.K. Brain tumor classification using convolution neural network. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; Volume 1916, p. 012206.
40. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; Larochelle, H. Brain tumor segmentation with deep neural networks. Med. Image Anal. 2017, 35, 18–31.
  21. Muhammad, K.; Khan, S.; Del Ser, J.; De Albuquerque, V.H.C. Deep learning for multigrade brain tumor classification in smart healthcare systems: A prospective survey. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 507–522. [Google Scholar] [CrossRef] [PubMed]
  22. Mohan, G.; Subashini, M.M. MRI based medical image analysis: Survey on brain tumor grade classification. Biomed. Signal Process. Control 2018, 39, 139–161. [Google Scholar] [CrossRef]
  23. Hussain, L.; Saeed, S.; Awan, I.A.; Idris, A.; Nadeem, M.S.A.; Chaudhry, Q.U.A. Detecting brain tumor using machines learning techniques based on different features extracting strategies. Curr. Med. Imaging 2019, 15, 595–606. [Google Scholar] [CrossRef] [PubMed]
  24. Gurbină, M.; Lascu, M.; Lascu, D. Tumor detection and classification of MRI brain image using different wavelet transforms and support vector machines. In Proceedings of the 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 1–3 July 2019; pp. 505–508. [Google Scholar]
  25. Sarkar, A.; Maniruzzaman, M.; Ahsan, M.S.; Ahmad, M.; Kadir, M.I.; Islam, S.T. Identification and classification of brain tumor from MRI with feature extraction by support vector machine. In Proceedings of the 2020 International Conference for Emerging Technology (INCET), Belgaum, India, 5–7 June 2020; pp. 1–4. [Google Scholar]
  26. Sekhar, A.; Biswas, S.; Hazra, R.; Sunaniya, A.K.; Mukherjee, A.; Yang, L. Brain tumor classification using fine-tuned GoogLeNet features and machine learning algorithms: IoMT enabled CAD system. IEEE J. Biomed. Health Inform. 2021, 26, 983–991. [Google Scholar] [CrossRef] [PubMed]
  27. Ramdlon, R.H.; Kusumaningtyas, E.M.; Karlita, T. Brain tumor classification using MRI images with K-nearest neighbor method. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019; pp. 660–667. [Google Scholar]
  28. Sathi, K.A.; Islam, M.S. Hybrid feature extraction based brain tumor classification using an artificial neural network. In Proceedings of the 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 30–31 October 2020; pp. 155–160. [Google Scholar]
  29. Charan, K.S.; Chokkalingam, S.P.; Sundari, K.S. Efficiency of Decision tree algorithm for Brain Tumor MRI Images comparing with SVM Algorithm. In Proceedings of the 2022 14th International Conference on Mathematics, Actuarial Science, Computer Science and Statistics (MACS), Karachi, Pakistan, 12–13 November 2022; pp. 1–4. [Google Scholar]
  30. Amran, G.A.; Alsharam, M.S.; Blajam, A.O.A.; Hasan, A.A.; Alfaifi, M.Y.; Amran, M.H.; Gumaei, A.; Eldin, S.M. Brain tumor classification and detection using hybrid deep tumor network. Electronics 2022, 11, 3457. [Google Scholar] [CrossRef]
  31. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182. [Google Scholar] [CrossRef]
  32. Ottom, M.A.; Rahman, H.A.; Dinov, I.D. Znet: Deep learning approach for 2D MRI brain tumor segmentation. IEEE J. Transl. Eng. Health Med. 2022, 10, 1–8. [Google Scholar] [CrossRef]
  33. Asif, S.; Yi, W.; Ain, Q.U.; Hou, J.; Yi, T.; Si, J. Improving effectiveness of different deep transfer learning-based models for detecting brain tumors from MR images. IEEE Access 2022, 10, 34716–34730. [Google Scholar] [CrossRef]
  34. Park, H.; Sjösund, L.L.; Yoo, Y.; Bang, J.; Kwak, N. Extremec3net: Extreme lightweight portrait segmentation networks using advanced c3-modules. arXiv 2019, arXiv:1908.03093. [Google Scholar]
  35. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
  36. Chen, X.; Qi, D.; Shen, J. Boundary-aware network for fast and high-accuracy portrait segmentation. arXiv 2019, arXiv:1901.03814. [Google Scholar]
  37. Toufiq, D.M.; Sagheer, A.M.; Veisi, H. A review on brain tumor classification in MRI images. Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 1958–1969. [Google Scholar]
  38. Hossain, M.F.; Islam, M.A.; Hussain, S.N.; Das, D.; Amin, R.; Alam, M.S. Brain Tumor Classification from MRI Images Using Convolutional Neural Network. In Proceedings of the 2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET), Kota Kinabalu, Malaysia, 13–15 September 2021; pp. 1–6. [Google Scholar]
  39. Saranya, N.; Renuka, D.K. Brain tumor classification using convolution neural network. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; Volume 1916, p. 012206. [Google Scholar]
  40. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; Larochelle, H. Brain tumor segmentation with deep neural networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef]
  41. Bayoumi, E.; Khalaf, A.A.; Gharieb, R.R. Brain tumor automatic detection from MRI images using transfer learning model with deep convolutional neural network. J. Adv. Eng. Trends 2021, 41, 19–30. [Google Scholar] [CrossRef]
  42. Minarno, A.E.; Mandiri, M.H.C.; Munarko, Y.; Hariyady, H. Convolutional neural network with hyperparameter tuning for brain tumor classification. Kinet. Game Technol. Inf. Syst. Comput. Netw. Comput. Electron. Control 2021, 4, 127–132. [Google Scholar] [CrossRef]
  43. Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
  44. Nigam, I.; Huang, C.; Ramanan, D. Ensemble knowledge transfer for semantic segmentation. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 1499–1508. [Google Scholar]
  45. Zhang, S.H.; Dong, X.; Li, H.; Li, R.; Yang, Y.L. PortraitNet: Real-time portrait segmentation network for mobile device. Comput. Graph. 2019, 80, 104–113. [Google Scholar] [CrossRef]
  46. Kim, Y.W.; Rose, J.I.; Krishna, A.V. Accuracy enhancement of portrait segmentation by ensembling deep learning models. In Proceedings of the 2020 Fifth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), Bangalore, India, 26–27 November 2020; pp. 59–64. [Google Scholar]
  47. Rohlfing, T.; Maurer, C.R., Jr. Shape-based averaging for combination of multiple segmentations. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Palm Springs, CA, USA, 26–29 October 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 838–845. [Google Scholar]
  48. Hansen, L.K.; Salamon, P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 993–1001. [Google Scholar] [CrossRef]
  49. Singh, V.; Mukherjee, L.; Peng, J.; Xu, J. Ensemble clustering using semidefinite programming with applications. Mach. Learn. 2010, 79, 177–200. [Google Scholar] [CrossRef]
  50. Zhao, B.; Feng, J.; Wu, X.; Yan, S. A survey on deep learning-based fine-grained object classification and semantic segmentation. Int. J. Autom. Comput. 2017, 14, 119–135. [Google Scholar] [CrossRef]
  51. Narmatha, C.; Eljack, S.M.; Tuka, A.A.R.M.; Manimurugan, S.; Mustafa, M. A hybrid fuzzy brain-storm optimization algorithm for the classification of brain tumor MRI images. J. Ambient. Intell. Humaniz. Comput. 2020, 1–9. [Google Scholar] [CrossRef]
  52. Khan, M.A.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S.A.C. Multimodal brain tumor classification using deep learning and robust feature selection: A machine learning application for radiologists. Diagnostics 2020, 10, 565. [Google Scholar] [CrossRef] [PubMed]
  53. Cheng, J.; Huang, W.; Cao, S.; Yang, R.; Yang, W.; Yun, Z.; Wang, Z.; Feng, Q. Enhanced performance of brain tumor classification via tumor region augmentation and partition. PLoS ONE 2015, 10, e0140381. [Google Scholar] [CrossRef] [PubMed]
  54. Rokach, L. Ensemble-based classifiers. Artif. Intell. Rev. 2010, 33, 1–39. [Google Scholar] [CrossRef]
  55. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  56. Amin, J.; Sharif, M.; Raza, M.; Saba, T.; Sial, R.; Shad, S.A. Brain tumor detection: A long short-term memory (LSTM)-based learning model. Neural Comput. Appl. 2020, 32, 15965–15973. [Google Scholar] [CrossRef]
  57. Muthukrishnan, R.; Radha, M. Edge detection techniques for image segmentation. Int. J. Comput. Sci. Inf. Technol. 2011, 3, 259. [Google Scholar] [CrossRef]
Figure 1. Overview of key deep learning-based brain tumour segmentation techniques [10,12].
Figure 2. Proposed framework of tumour detection and segmentation.
Figure 3. Fine-tuned MobileNetV2.
Figure 4. Meningioma identification in brain MRI.
Figure 5. Segmentation results visualised and compared with the ground truth.
Table 1. Key attributes and characteristics of the pre-trained CNN architectures used.

| Attributes | AlexNet | VGG16 | GoogLeNet | ResNet18 | ResNet50 |
|---|---|---|---|---|---|
| Number of layers | 8 | 16 | 22 | 18 | 50 |
| Input size | 227 × 227 × 3 | 224 × 224 × 3 | 224 × 224 × 3 | 224 × 224 × 3 | 224 × 224 × 3 |
| Model description | Five convolutional layers, 3 FC layers | 13 convolutional layers, 3 FC layers | 21 convolutional layers, 1 FC layer | 17 convolutional layers, 1 FC layer | 49 convolutional layers, 1 FC layer |
| Unique features | Local response normalisation, overlapping max pooling | Object localisation and image classification | 1 × 1 convolution, global average pooling, Inception module | Skip connections | Skip connections |
| Top-five error rate | 15.3% | 7.3% | 6.67% | 3.57% | 3.57% |
| Number of parameters (millions) | 60 | 138 | 4 | 11.4 | 23.9 |
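As a quick illustration of how these backbones can be inspected in practice, the sketch below instantiates each architecture from Table 1 with torchvision and counts its trainable parameters. This is our illustration rather than code from this study, and the counts may differ slightly from the table depending on implementation details (e.g., GoogLeNet's auxiliary classifiers).

```python
# Minimal sketch: instantiate the Table 1 backbones and count parameters.
import torchvision.models as models

architectures = {
    "AlexNet": models.alexnet,
    "VGG16": models.vgg16,
    "GoogLeNet": models.googlenet,
    "ResNet18": models.resnet18,
    "ResNet50": models.resnet50,
}

for name, ctor in architectures.items():
    # weights=None builds the architecture only; pass weights="IMAGENET1K_V1"
    # to download the pre-trained ImageNet weights instead.
    model = ctor(weights=None)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f} M parameters")
```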
Table 2. Summary of deep learning-based brain tumour segmentation techniques.

| Dataset | Preprocessing | Model Architecture | Performance |
|---|---|---|---|
| BRATS 2013 and 2015 | Bias field correction, intensity and patch normalisation, augmentation | Custom CNN | DSC 88%, SEN 89%, PR 87% |
| BRATS 2013 | Intensity normalisation, augmentation | HCNN + CRF-RRNN 1 | SEN 95%, SPE 95.5%, PR 96.5%, RE 97.8%, ACC 98.6% |
| BRATS 2015 | Z-score normalisation | Residual network + dilated convolution (RDM-Net 2) | DSC 86% |
| BRATS 2015 | Z-score normalisation | Stack Multi-connection Simple Reducing_Net (SMCSRNet) | DSC 83.42%, PR 78.96%, SEN 90.24% |
| BRATS 2019 | - | Ensemble of a 3D-CNN and U-Net | DSC 90.6% |
| BRATS 2015 | Bias correction, intensity normalisation | Two-PathGroup-CNN (2PG-CNN) | DSC 89.2%, PR 88.22%, SEN 88.32% |
| BRATS 2018 | - | Hybrid two-track U-Net (HTTU-Net) | DSC 86.5%, SEN 88.3%, SPE 99.9% |
| BRATS 2015 | - | P-Net with bounding box and image-specific fine-tuning (BIFSeg) | DSC 86.29% |
| ADNI | Denoising, skull stripping, sub-sampling | Multi-scale CNN (MSCNN) | ACC 90.1% |
| BRATS 2017 | Intensity normalisation, resizing, bias field correction | Cascaded 3D U-Nets | DSC 89.4% |
| BRATS 2015 and 2017 | Downsampling | 3D centre-crop dense block | BRATS 2015: DSC 88.4%, SEN 83.8%; BRATS 2017: DSC 88.7%, SEN 84.3% |
| BRATS 2018 and 2019 | Z-score normalisation, cropping | 3D FCN 3 | BRATS 2018: DSC 90%, SEN 90.3%, SPE 99.48%; BRATS 2019: DSC 89%, SEN 88.3%, SPE 99.51% |
| BRATS 2018 | Intensity normalisation, removing 1% of highest and lowest intensities | DCNN (Dense-MultiOCM 4) | DSC 86.2%, SEN 84.8%, SPE 99.5% |
| TCIA | Image cropping, padding, resizing, intensity normalisation | U-Net | DSC 84%, SEN 92%, SPE 92%, ACC 92% |
| BRATS 2013, 2015, and 2018 | - | AFPNet 5 + 3D CRF | BRATS 2013: DSC 86%; BRATS 2015: DSC 82%; BRATS 2018: DSC 86.58% |
| BRATS 2015 and 2017 | Z-score normalisation | Inception-based U-Net + up skip connection + cascaded training strategy | DSC 89%, PR 78.5%, SEN 89.5% |
| BRATS 2015, BrainWeb | Cropping, z-score normalisation, min–max normalisation (BrainWeb) | Triple-intersecting U-Nets (TIU-Net) | BRATS 2015: DSC 85%; BrainWeb: DSC 99.5% |
| BRATS 2015 | - | LSTM multi-modal U-Net | DSC 73.09%, SEN 63.76%, PR 89.79% |

1 Heterogeneous CNN combined with conditional random fields and a recurrent regression-based neural algorithm.
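Z-score normalisation is the most frequent preprocessing step in the table above. The sketch below is a minimal illustration of that step, assuming background voxels are zero (as is typical for skull-stripped BraTS volumes); it is not code from any of the surveyed papers.

```python
# Minimal sketch of z-score intensity normalisation for a 3D MRI volume.
import numpy as np

def zscore_normalise(volume: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Rescale a volume to zero mean and unit variance over brain voxels."""
    brain = volume[volume > 0]              # background voxels are assumed zero
    mean, std = brain.mean(), brain.std()
    normalised = (volume - mean) / (std + eps)
    normalised[volume == 0] = 0.0           # keep the background at zero
    return normalised

# Example on a synthetic BraTS-sized volume (155 slices of 240 x 240 voxels):
vol = np.random.rand(155, 240, 240) * 100
print(zscore_normalise(vol).std())          # close to 1.0
```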
Table 3. Evaluation of the proposed method.

| Evaluation Metrics | Performance |
|---|---|
| Accuracy | 97.47% |
| Sensitivity | 97.22% |
| Specificity | 97.94% |
| Dice coefficient index | 96.71% |
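For reference, the sketch below shows the standard definitions of the four metrics in Table 3, computed from binary predicted and ground-truth masks. These are the conventional formulations from the segmentation literature, assumed here rather than taken from the paper's implementation.

```python
# Standard segmentation metrics from confusion-matrix counts over voxel masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()      # tumour voxels found
    tn = np.logical_and(~pred, ~truth).sum()    # background correctly rejected
    fp = np.logical_and(pred, ~truth).sum()     # false alarms
    fn = np.logical_and(~pred, truth).sum()     # missed tumour voxels
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),           # true-positive rate
        "specificity": tn / (tn + fp),           # true-negative rate
        "dice": 2 * tp / (2 * tp + fp + fn),     # Dice coefficient index
    }
```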
Table 4. Comparative performance analysis of existing methods.

| Methods | Classification Accuracy |
|---|---|
| CNN, LBP, and PSO [44] | 92.5% |
| LSTM [56] | 93.8% |
| Brain-storm optimisation [51] | 92.5% |
| DCT, CNN, and ELM [52] | 93.4% |
| Proposed model (CNN, MobileNetV2, M-SVM) | 97.5% |
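The proposed model pairs a fine-tuned MobileNetV2 feature extractor with an SVM classifier. The sketch below illustrates only that general pattern with torchvision and scikit-learn; the hyperparameters, the exact M-SVM formulation, and the placeholders `X_images` and `y_labels` are our assumptions, not the authors' implementation.

```python
# Hedged sketch of the CNN-feature + SVM pattern from the last row of Table 4.
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Pre-trained MobileNetV2 as a feature extractor (downloads weights on first use).
backbone = models.mobilenet_v2(weights="IMAGENET1K_V1")
backbone.classifier = torch.nn.Identity()   # expose the 1280-d pooled feature vector
backbone.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> torch.Tensor:
    """batch: (N, 3, 224, 224) tensor of preprocessed MRI slices."""
    return backbone(batch)

demo = extract_features(torch.randn(4, 3, 224, 224))
print(demo.shape)  # torch.Size([4, 1280])

# Hypothetical training data: X_images (N, 3, 224, 224), y_labels (N,) tumour classes.
# features = extract_features(X_images).numpy()
# clf = SVC(kernel="rbf").fit(features, y_labels)
```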