Article

A Robust Brain Tumor Detector Using BiLSTM and Mayfly Optimization and Multi-Level Thresholding

Rabbia Mahum, Mohamed Sharaf, Haseeb Hassan, Lixin Liang and Bingding Huang

1 Department of Computer Science, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
2 Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia
3 College of Big Data and Internet, Shenzhen Technology University (SZTU), Shenzhen 518118, China
* Authors to whom correspondence should be addressed.
Biomedicines 2023, 11(6), 1715; https://doi.org/10.3390/biomedicines11061715
Submission received: 26 April 2023 / Revised: 9 June 2023 / Accepted: 12 June 2023 / Published: 15 June 2023
(This article belongs to the Special Issue Photodynamic Therapy in Cancer)

Abstract: A brain tumor refers to an abnormal growth of cells in the brain that can be either benign or malignant. Oncologists typically rely on methods such as blood tests or visual examinations to detect brain tumors, but these approaches can be time-consuming, require additional human effort, and may miss small tumors. This work proposes an effective approach to brain tumor detection that combines segmentation and feature fusion. Segmentation is performed using the mayfly optimization algorithm with multilevel Kapur's thresholding to locate brain tumors in MRI scans. Key features are extracted from the tumors using the Histogram of Oriented Gradients (HOG) and ResNet-V2, and a bidirectional long short-term memory (BiLSTM) network classifies tumors into three categories: pituitary, glioma, and meningioma. The suggested methodology is trained and tested on two datasets, Figshare and Harvard, achieving high accuracy, precision, recall, F1 score, and area under the curve (AUC). A comparative analysis with existing DL and ML methods demonstrates that the proposed approach offers superior outcomes. This approach has the potential to improve brain tumor detection, particularly for small tumors, but further validation and testing are needed before clinical use.

1. Introduction

A brain tumor is an abnormal growth of cells in the brain, which can be either malignant or benign [1]. As the brain is a vital organ responsible for cognitive function, the presence of tumors can have life-threatening consequences. Brain tumors account for 85–90% of all central nervous system tumors, according to a report [2]. Radiologists use imaging techniques such as CT and MRI to locate cancer in the brain, with MRI providing higher-resolution imaging than CT. However, manually analyzing the images and grading tumors is time-consuming, requires specialized expertise, and may still result in imprecise diagnoses and high costs [3]. These challenges arise from the asymmetrical shapes of tumors and the difficulty of distinguishing between tumor types that may look similar. Consequently, there is growing interest in developing computerized systems for recognizing brain tumors to improve diagnosis and treatment outcomes [4].
Researchers have built computerized systems using traditional machine learning techniques. Such systems involve a series of steps, including preprocessing, feature extraction, dimensionality reduction, and classification [5,6,7]. Feature extraction is the most important step, as it is what makes automated brain tumor detection possible. However, the efficacy of this approach depends heavily on the type and nature of the features used, and traditional methods may fail to identify small tumors in unseen samples or may be slow when processing large numbers of brain MRIs. K-nearest neighbors (KNN), support vector machines (SVM), decision trees, and segmentation-based methods are some of the ML-based approaches used to address this problem. In the segmentation-based approach, algorithms segment the tumor as a region of interest (ROI), and features are then extracted from the ROI. This approach involves several steps: preprocessing, locating the region of interest, feature extraction, training, and finally classification. However, due to the complex structure of the brain, current techniques lack accuracy. It is therefore vital to develop an efficient and precise model for the timely detection of brain tumors with minimal human intervention [8].
Various popular deep learning models have been developed, including GoogleNet, InceptionNet, ResNet, VGGNet, DenseNet, and AlexNet. However, basic classification techniques for detecting brain tumors only indicate the presence of a tumor and provide no information about its location, leading to a high rate of false positives. Researchers have used different object detection methods for brain tumor identification to overcome this limitation. One study [9] utilized the deep learning-based CenterNet to localize brain tumors, using ResNet34 with an attention module as the backbone. However, most object detection-based techniques for brain tumor identification require extensive hyperparameter tuning and entail high computational costs. To address these challenges, this study proposes a new approach for early brain tumor recognition. In the first phase, the tumor is located using segmentation. In the second phase, ResNet-V2, a deep neural network, and the Histogram of Oriented Gradients (HOG) are used to extract features from the segmented image. Finally, the features are fused, and a BiLSTM is trained on three classes: glioma, pituitary, and meningioma. The suggested model accurately identifies tumor locations and enhances detection accuracy by employing essential features extracted from the segmented area.
The aim of this study is to create a robust framework utilizing MFO with Kapur's thresholding-based segmentation, along with feature fusion, for identifying and categorizing brain tumors in MRI images. A further goal is to propose an efficient system that can accurately detect and locate tumors in previously unseen MRI scans, including small tumors, for better early detection. The proposed system was extensively tested, and the results show that it performs well in terms of accuracy and robustness for early recognition and categorization of brain tumors. The remainder of the study discusses existing methods in Section 2, presents the proposed technique in Section 3, evaluates the experiments in Section 4, and concludes in Section 5.

2. Related Work

Several researchers have explored machine learning and deep learning-based methods for medical diagnosis [10,11]. Some of these approaches utilize segmentation techniques, such as ensemble deep networks, which require training from scratch. To address this issue, some researchers have introduced a drop-out layer during the testing phase to estimate uncertainty in lesion identification [12]. In reference [13], a CNN-based approach was proposed, with data augmentation used to improve classification accuracy; the study used three datasets and achieved an accuracy of 98.43%. DL-based methods are increasingly essential in many image-processing applications, including medical diagnosis [14]. In another study [15], data augmentation was performed using patch rotation and extraction techniques on 3064 images, and CapsuleNet was utilized for recognizing and categorizing brain cancer into three classes. In reference [16], VGG16 and AlexNet were utilized to obtain features from brain scans, and a feature fusion method was employed for binary classification; an SVM then categorized the images, achieving an accuracy of up to 96%. In reference [17], an encoder-based technique was used for categorization, with an accuracy of 98.5%.
Reference [18] used ResNet50 with additional layers for binary categorization of brain tumor images and attained an accuracy of up to 97%. In reference [19], the BrainMRNet model was proposed, consisting of attention layers, residual stages, and a hyper-column technique, achieving an accuracy of 96.05%. Reference [20] utilized a transfer learning-based CNN and a multiple logistic regression method, outperforming existing techniques on three benchmarks. Sachdeva et al. [21] proposed a brain tumor detection method using SVM and artificial neural networks combined with a genetic algorithm, achieving accuracies of 91% and 94.9%, respectively. Tahir et al. [22] explored different approaches to increasing classification accuracy by employing edge detection, noise removal, and contrast improvement, achieving an accuracy of 86%.
In recent studies, different techniques have been suggested for brain tumor detection. Sarah et al. [23] utilized Harris Hawks-optimized neural networks, modifying the architecture with various types of layers. They preprocessed the images for noise removal and candidate region recognition to identify tumor regions. The proposed method achieved 98% accuracy on the Kaggle dataset. Aruna et al. [24] developed an approach using pretrained CNNs such as InceptionV3, ResNet50, and VGG19. They concatenated the deep features extracted by the CNNs using a two-stage strategy and reduced the dimensionality with PCA before categorization. The results showed improved classification accuracy, but the approach increased computational complexity. In another study, Bakary et al. [25] employed transfer learning to develop an automatic brain tumor classification technique using brain MR images. They used the AlexNet model for feature extraction and binary classification, achieving an overall accuracy of 99.62%; however, they did not classify the images into specific tumor types.
The study by Sarmad et al. [26] proposes an automated system for brain tumor detection that employs several steps to achieve high accuracy in classifying different types of brain tumors. The first phase uses linear contrast stretching to identify edges in the sample. In the second phase, a 17-layer DNN is designed to segment the tumor, accurately identifying its location and boundaries within the brain image. In the third step, a modified MobileNetV2 architecture extracts features, with transfer learning used to adapt the pretrained model's parameters to brain tumor detection. An entropy-based controlled mechanism is then combined with multiclass support vector machines (M-SVM) for feature selection, identifying the most relevant features for tumor classification. Finally, the M-SVM categorizes the images into glioma, meningioma, and pituitary classes, achieving accuracies of 97.47% and 98.92% for meningioma and pituitary images, respectively.
While several methods have been developed for brain tumor detection, early detection remains a significant challenge; it is critical for effective treatment and improved patient outcomes. Details of existing models are summarized in Table 1.

3. Methodology

In this section, we introduce the working principles of the proposed model. The proposed system is a three-stage model, as shown in Figure 1. Because the brain images are already in grayscale, the preprocessing phase is skipped. First, segmentation is performed using mayfly optimization with a multilevel thresholding approach. Second, features are extracted from the segmented tumors. Third, the brain samples are classified by the proposed BiLSTM network.

3.1. MFO with Multi-Level Thresholding

MFO is a population-based method developed in 2020 [28]. It consists of the following steps: (1) initialization of an equal number of male and female agents; (2) allowing the male mayflies to identify the best position loc for the given task; (3) allowing the female mayflies to find and merge with the male mayflies located at loc; (4) offspring generation; and (5) termination of the search and output of the final result.
We employed a multilevel thresholding approach with the MFO technique. Kapur et al. [29] proposed a threshold-based approach to compute the optimal thresholds for segmentation. The computation depends on the probability distribution and entropy of the image histogram, and the approach selects the thresholds that maximize the entropy. For bilevel thresholding, the objective function is given in Equation (1).
$FUN_{kap}(t) = k_1 + k_2$ (1)
Here, $k_1$ and $k_2$ are computed as follows:
$k_1 = -\sum_{s=1}^{t} \frac{p_s}{\omega_0} \ln\left(\frac{p_s}{\omega_0}\right)$ (2)

$k_2 = -\sum_{s=t+1}^{L} \frac{p_s}{\omega_1} \ln\left(\frac{p_s}{\omega_1}\right)$ (3)
Here, $p_s$ denotes the probability distribution (DP) of the grayscale intensity levels, and $\omega_0$ and $\omega_1$ denote the class probabilities for $k_1$ and $k_2$, as described in Equations (2) and (3). This entropy-based approach extends naturally to multilevel thresholding, in which the image is split into n classes using n − 1 thresholds; the objective function generalizes as shown in Equation (4).
$FUN_{kap}(T) = \sum_{s=1}^{n} k_s$ (4)
Here, $T = [t_1, t_2, \ldots, t_{n-1}]$ is a vector of threshold values. The entropy of each class is computed separately with its respective threshold, so the class entropy takes the general form of Equation (5).
$k_n = -\sum_{i=t_n+1}^{L} \frac{p_i}{\omega_{n-1}} \ln\left(\frac{p_i}{\omega_{n-1}}\right)$ (5)
where $(\omega_0, \omega_1, \ldots, \omega_{n-1})$ are the occurrence probabilities of the n classes, and the MFO approach is utilized to find the optimal threshold values. MFO models the mating behavior and flight characteristics of mayflies [28]. The mayflies in a swarm are divided into female and male individuals; the male mayflies search more robustly, thereby driving the optimization. MFO updates each agent's position based on its current location $loc_i(time)$ and velocity $velocity_i(time)$:
$loc_i(time + 1) = loc_i(time) + velocity_i(time + 1)$ (6)
All male and female mayflies update their locations over time using Equation (6); however, each sex has its own velocity-update rule.
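To make this concrete, the following NumPy sketch implements Kapur's multilevel objective (Equations (1)–(5)) as the fitness function that the MFO agents would maximize. It is an illustrative reconstruction under the definitions above, not the authors' released code; the histogram and threshold values are placeholders.

```python
import numpy as np

def kapur_objective(hist, thresholds, eps=1e-12):
    """Kapur's multilevel objective (Equation (4)): the sum of the
    entropies of the classes induced by the threshold vector T."""
    bounds = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        p = hist[lo:hi]
        omega = p.sum()                       # class probability (omega_k)
        if omega > eps:
            q = p[p > eps] / omega
            total -= (q * np.log(q)).sum()    # class entropy, Equations (2)-(5)
    return total                              # to be maximized by MFO

# Example: score a two-threshold split of a synthetic 256-bin histogram
rng = np.random.default_rng(0)
hist = rng.random(256)
hist /= hist.sum()                            # p_s: normalized histogram
print(kapur_objective(hist, [85, 170]))
```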

Mating

The top half of the male and female mayflies mate and generate offspring. The offspring are produced from the parents as follows:
$offspr_1 = P \times Male + (1 - P) \times Female$ (7)

$offspr_2 = P \times Female + (1 - P) \times Male$ (8)
Here, P is a random number drawn from a Gaussian distribution. Some segmented images are shown in Figure 2.
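A minimal sketch of the position update (Equation (6)) and the crossover of Equations (7) and (8) is given below. The Gaussian mean and spread used for P are assumptions, and the attraction and nuptial-dance terms of the full MFO velocity update [28] are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def update_position(loc, velocity):
    """Equation (6): a mayfly moves by its updated velocity."""
    return loc + velocity

def mate(male, female):
    """Equations (7)-(8): crossover of a male/female pair. P is Gaussian;
    the mean and spread chosen here are illustrative assumptions."""
    p = rng.normal(loc=0.5, scale=0.1, size=male.shape)
    offspring1 = p * male + (1.0 - p) * female
    offspring2 = p * female + (1.0 - p) * male
    return offspring1, offspring2

# Example: mate two agents, each encoding three threshold values
male = rng.uniform(0, 255, size=3)
female = rng.uniform(0, 255, size=3)
print(mate(male, female))
```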

3.2. Features Extraction (FE)

The proposed approach utilizes two algorithms, ResNet-V2 and the Histogram of Oriented Gradients (HOG), for feature extraction from brain images. ResNet-V2 is a deep neural network architecture that has proven effective in image classification, while HOG is a popular feature extraction algorithm in computer vision. After features are extracted from the images, a classifier is trained using a bidirectional long short-term memory (BiLSTM) network. BiLSTM is a form of recurrent neural network that can capture long-range dependencies in sequential data; here, it classifies brain images into non-tumorous and tumorous classes based on the features extracted by ResNet-V2 and HOG. Deep learning techniques such as ResNet-V2 and BiLSTM have shown promising results in medical image analysis, including brain tumor detection, and combining different feature extraction algorithms can improve classification accuracy, since different algorithms capture different aspects of the image information.

3.2.1. Histogram of Oriented Gradients (HOG)

This step extracts low-level features from the tumor region using the Histogram of Oriented Gradients (HOG) algorithm. HOG is a popular feature extraction algorithm in computer vision that captures the local gradient statistics of an image. In this approach, the segmented images are provided to a feature extractor block consisting of HOG and ResNet-V2, a deep neural network architecture. The HOG algorithm extracts a total of 1236 low-level features from the segmented images, using 9 bins to capture the gradient orientations. To improve the results, the image intensities can be normalized, which is most valuable when the images are large.
To compute the features, the images are first divided into compatible blocks of size 6 × 6 or smaller, and a stride of 4 is used for each 2 × 2 block. The HOG algorithm then computes the gradient magnitude and direction for each pixel in the image, with the direction ranging from 0 to 180 degrees. Pixels with similar orientations are grouped into the same bin. The magnitude m and direction ϑ of the gradient at pixel (i, j) are obtained as shown in Equations (9) and (10).
$m(i,j) = \sqrt{i_x^2 + i_y^2}$ (9)
$\vartheta = \tan^{-1}\left(\frac{i_y}{i_x}\right)$ (10)
where $i_x$ and $i_y$ are the gradients in the x and y directions, and $\vartheta$ is the angle, ranging from 0 to 180 degrees.
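As an illustration, a HOG descriptor with 9 orientation bins can be computed with scikit-image as sketched below; the cell and block geometry shown is a common library setting and only approximates the 6 × 6 block and stride-4 scheme described above.

```python
import numpy as np
from skimage.feature import hog

# Placeholder segmented grayscale MRI slice
image = np.random.rand(128, 128)

# 9 orientation bins over 0-180 degrees, as described above; the cell and
# block sizes below are assumptions standing in for the paper's 6x6 blocks.
features = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",   # per-block normalization of the gradient histograms
)
print(features.shape)      # flattened HOG descriptor
```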

3.2.2. ResNet-V2

He et al. [30] proposed ResNet and its residual block, comprising two convolutional layers and a parameter-free shortcut connection that passes the output of the current block to the next block. This modification outperformed comparable unmodified models on the ILSVRC-2012 dataset using a 152-layer network, and it was concluded that increasing network depth improves classification accuracy. After ResNet-V1, the authors redesigned the residual block so that no ReLU is applied on the shortcut connection, which further increases detection accuracy. The main change in version 2 is the pre-activation ordering, in which batch normalization and ReLU precede each 2D convolution in the 1 × 1, 3 × 3, 1 × 1 stack. The architectures of ResNet-V1 and ResNet-V2 are shown in Figure 3.
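The sketch below shows how a pre-activation ResNet could serve as the deep-feature extractor. It uses Keras' ResNet50V2 with ImageNet weights as a stand-in, since the paper does not state the exact ResNet-V2 depth or weights used.

```python
import numpy as np
import tensorflow as tf

# ResNet50V2 with ImageNet weights as a fixed deep-feature extractor.
# The specific depth (50) and weights are assumptions; the paper only
# states that a ResNet-V2 backbone is used.
backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet",
    pooling="avg", input_shape=(224, 224, 3),
)

def resnet_features(gray_batch):
    """gray_batch: (N, 224, 224) grayscale MRIs with values in [0, 255]."""
    x = np.repeat(gray_batch[..., None], 3, axis=-1)         # grayscale -> 3 channels
    x = tf.keras.applications.resnet_v2.preprocess_input(x)  # scale to [-1, 1]
    return backbone.predict(x, verbose=0)                    # (N, 2048) vectors

print(resnet_features(np.random.rand(2, 224, 224) * 255).shape)
```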

3.3. Fusion Process

Feature fusion has been widely applied in machine learning, including medical imaging [8]. It offers a flexible way to combine multiple feature maps and maximize their integration, and an entropy-based criterion is used to reduce false positives. After the features are obtained, they are merged into a single vector. The feature vectors are computed as shown below:
$f_{Res\,(1 \times m)} = \left\{ ResV2_{1\times 1}, ResV2_{1\times 2}, ResV2_{1\times 3}, \ldots, ResV2_{1\times n} \right\}$ (11)

$f_{HoG\,(1 \times D)} = \left\{ HoG_{1\times 1}, HoG_{1\times 2}, HoG_{1\times 3}, \ldots, HoG_{1\times n} \right\}$ (12)
The features are fused as follows:
$Fusion(Feat_{vector})_{1 \times P} = \sum_{i=1}^{2} \left\{ f_{ResV2\,(1 \times m)},\; f_{HOG\,(1 \times D)} \right\}$ (13)
Here, $ResV2_{1\times 1}, ResV2_{1\times 2}, ResV2_{1\times 3}, \ldots, ResV2_{1\times n}$ are the feature vectors produced by ResNet-V2, and $HoG_{1\times 1}, HoG_{1\times 2}, HoG_{1\times 3}, \ldots, HoG_{1\times n}$ are the feature vectors produced by HOG; f denotes the fused feature vector. An entropy value is then calculated for the selected features as defined below.
$L_{he} = N_{he_b} \sum_{i=1}^{n} p(f_i)$ (14)
$F_{sel} = L_{he}\left(\max(f_i),\, 1126\right)$ (15)
Here, p denotes the probability of the features and $L_{he}$ their entropy. The merged features are ultimately fed to the classifier to distinguish the tumor samples.
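The sketch below gives one plausible reading of this fusion and selection stage: serial concatenation (Equation (13)) followed by keeping the 1126 highest-entropy features (Equations (14) and (15)). The histogram-based entropy estimate is an assumption, as the paper does not fully specify the criterion.

```python
import numpy as np

def fuse_and_select(f_res, f_hog, keep=1126, bins=32, eps=1e-12):
    """Serially fuse ResNet-V2 and HOG features (Equation (13)), then keep
    the `keep` features with the highest entropy across the training set
    (one reading of Equations (14)-(15))."""
    fused = np.concatenate([f_res, f_hog], axis=1)
    scores = np.empty(fused.shape[1])
    for j in range(fused.shape[1]):
        counts, _ = np.histogram(fused[:, j], bins=bins)
        p = counts / max(counts.sum(), 1)
        p = p[p > eps]
        scores[j] = -(p * np.log(p)).sum()   # per-feature entropy
    top = np.argsort(scores)[::-1][:keep]    # highest-entropy features
    return fused[:, top], top

# Example with placeholder features: 2048 from ResNet-V2, 1236 from HOG
X, idx = fuse_and_select(np.random.rand(32, 2048), np.random.rand(32, 1236))
print(X.shape)   # (32, 1126)
```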

3.4. Classification

At this stage, we present the classification model trained on the fused features to obtain optimal brain tumor detection performance. We evaluated support vector machines (SVM), a decision tree (DT), and a bidirectional long short-term memory (BiLSTM) network to categorize the three types of brain tumors. BiLSTM networks have been shown to give better predictions than traditional LSTMs because they process the sequence in both the forward and backward directions during training. The input features and weights are passed through multiple layers to generate the output, which is then used to compute the error; the parameters are adjusted during backpropagation (BP) to minimize estimation errors. The BiLSTM layers operate sequentially on the input features. We set the hyperparameters to the Adam optimizer, a learning rate of 0.001, and a batch size of 32. Our proposed network achieved the best detection results, followed by the SVM, while the lowest detection accuracy was obtained with the DT. The layer details are presented in Table 2.
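A hedged Keras sketch of the Table 2 classifier is shown below. The reshaping of the fused vector into a (timesteps × features) sequence and the dropout rate are assumptions, since the paper does not state them; the optimizer, learning rate, and batch size follow the stated hyperparameters.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_bilstm(timesteps=100, feat_dim=50, num_classes=3):
    """Sketch of the Table 2 classifier; arranging the fused 1-D feature
    vector as a (timesteps, feat_dim) sequence is an assumption."""
    model = tf.keras.Sequential([
        layers.Input(shape=(timesteps, feat_dim)),
        layers.Bidirectional(layers.LSTM(250, return_sequences=True)),  # 500 outputs
        layers.GlobalMaxPooling1D(),               # max pooling layer
        layers.Dense(50, activation="relu"),       # FC + ReLU
        layers.Dropout(0.5),                       # dropout rate is an assumption
        layers.Dense(num_classes, activation="sigmoid"),  # FC (Sigmoid), per Table 2
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # as stated
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_bilstm()
model.summary()
# model.fit(X_train, y_train, batch_size=32, epochs=...)  # batch size 32 as stated
```

Note that a softmax output would be the conventional choice for three mutually exclusive classes; sigmoid is used here only to mirror Table 2.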

4. Experimental Evaluation

Here, we describe the performance assessment of the proposed method, including the implementation details, the training and testing protocols, and several experiments.

4.1. Implementation Details

We ran the experiments on a system equipped with a Graphical Processing Unit (GPU), namely an NVIDIA GeForce GTX card with 4 GB of memory. The details of the environment are reported in Table 3.

4.2. Dataset

The proposed system was trained and evaluated on two distinct datasets: the Figshare dataset and the Harvard medical images [31]. The Figshare dataset comprises T1-weighted contrast-enhanced MRI images from 233 individuals, yielding a total of 3064 brain images across three tumor categories, namely pituitary (930), glioma (1426), and meningioma (708), all obtained from Nanfang Hospital in China; the samples measure 512 × 512 pixels. The Harvard medical dataset comprises ten tumors diagnosed by various experts; its brain MRIs were captured in the axial plane, are T2-weighted, and measure 256 × 256 pixels. We assessed the efficacy of our proposed detector on both datasets, but only the Figshare dataset was used for training. Some sample images are depicted in Figure 4.

4.3. Metrics

The metrics used to evaluate the proposed model's performance are precision, accuracy, recall, and F1 score. Their mathematical expressions are given below.
$Precision = \frac{TP}{TP + FP}$ (16)
Accuracy is the proportion of samples correctly classified by the proposed model, as represented by Equation (17).
$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$ (17)
Recall represents the proportion of diseased samples correctly identified by the proposed model out of all diseased samples, including those misclassified as noncancerous. The equations for recall and F1 score are presented below.
$Recall = \frac{TP}{TP + FN}$ (18)

$F1\;score = \frac{2 \times Precision \times Recall}{Precision + Recall}$ (19)
The area under the curve (AUC) was computed as follows:
$AUC = \int_{i}^{j} f(x)\,dx$ (20)
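For reference, these metrics can be computed with scikit-learn as sketched below on placeholder predictions; the macro averaging is an assumption, and the one-vs-rest AUC mirrors the class-wise averaging described in Section 4.5.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Placeholder predictions for the three classes
# (0 = pituitary, 1 = glioma, 2 = meningioma)
y_true = np.array([0, 1, 2, 1, 0, 2, 1, 0])
y_pred = np.array([0, 1, 2, 1, 0, 1, 1, 0])
proba = np.random.default_rng(0).dirichlet(np.ones(3), size=len(y_true))

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("f1 score :", f1_score(y_true, y_pred, average="macro"))
# One-vs-rest AUC per class, then averaged (as done in Section 4.5)
print("auc (ovr):", roc_auc_score(y_true, proba, multi_class="ovr"))
```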

4.4. Localization Results

The evaluation of our proposed segmentation method is based on four parameters: the Dice overlap index (DOI), the Tanimoto coefficient (TC), the tumor area, and the number of tumor pixels, which are expressed mathematically as follows:
$DOI = \omega_1 + \omega_2$ (21)
$TC = \frac{\sum_{x=1}^{m}\sum_{y=1}^{n} (I \cap \hat{I})}{\sum_{x=1}^{m}\sum_{y=1}^{n} (I \cup \hat{I})}$ (22)
$area = \sum_{i=1}^{m}\sum_{j=1}^{n} I(i,j)$ (23)
The TC value ranges from 0 to 1. We evaluated the segmentation method on 80 images from each dataset, computing the metrics against the corresponding ground-truth images; the ground-truth images for the Harvard dataset were verified by an expert radiologist. Results for 12 images from the Harvard dataset are presented in Table 4, and results for the Figshare dataset are shown in Table 5. Table 6 compares TC and DOI with other approaches on the Harvard dataset.
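A minimal sketch of these segmentation metrics on binary masks is given below; reading DOI as the Dice overlap is an assumption, since Equation (21) states it only as a sum of two weights.

```python
import numpy as np

def tanimoto(seg, gt):
    """TC (Equation (22)): intersection over union of binary masks, in [0, 1]."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return np.logical_and(seg, gt).sum() / np.logical_or(seg, gt).sum()

def dice(seg, gt):
    """Dice overlap, offered as one common reading of the DOI metric
    (an assumption; see Equation (21))."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def tumor_area(seg):
    """Equation (23): count of foreground pixels in the segmented mask."""
    return int(seg.astype(bool).sum())

seg = np.random.rand(256, 256) > 0.5   # placeholder segmentation mask
gt = np.random.rand(256, 256) > 0.5    # placeholder ground truth
print(tanimoto(seg, gt), dice(seg, gt), tumor_area(seg))
```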

4.5. Classification Results

This experiment demonstrates the classification performance of our proposed approach on the Figshare and Harvard datasets. We used 500 images from each class of the Figshare dataset to train the classifier and 300 images from the same dataset for testing, i.e., 100 images per class, achieving the classification results presented in Table 7. We trained three classifiers, namely a decision tree (DT), an SVM, and a BiLSTM, and found that the BiLSTM network performed best, with an accuracy of 99.3%, a recall of 99.1%, a precision of 98.3%, an F1 score of 99.1%, and an AUC of 0.989. For the AUC computation, we considered the binary one-vs-rest problems pituitary vs. all, glioma vs. all, and meningioma vs. all, and averaged the resulting AUCs to obtain the performance of each algorithm.
After the BiLSTM, the best classification results were obtained by the SVM (98.3% accuracy), followed by the DT (97.3% accuracy). For the second trial, the Harvard dataset was used, consisting of 100 images from each of the three classes: pituitary, glioma, and meningioma. Our suggested BiLSTM again achieved the best results, with 99.1% accuracy, 98.1% recall, 98.2% precision, a 98.3% F1 score, and a 0.974 AUC. After our proposed classifier, the next-best results were achieved by the DT, with an accuracy of 99.0%; the SVM gave the lowest accuracy of 98.1% during cross-validation, as shown in Table 8.

4.6. Comparison with Existing Segmentation-Based Techniques

The objective of this experiment was to evaluate the effectiveness of our proposed MFO with multilevel thresholding as a segmentation method. We compared its performance against several established segmentation techniques, including edge-based, region-based, multi-threshold, watershed, and Otsu methods. We assessed the accuracy of the segmentation methods on the Harvard dataset; the results are presented in Table 9. Our experimental findings clearly demonstrate that our proposed segmentation-based method outperforms the traditional methods. A comparative plot illustrating this is presented in Figure 5.

4.7. Comparison with Existing DL-Based Techniques

In this section, we compare against various deep learning-based approaches for detecting and classifying brain tumors. We evaluated the accuracy of our proposed feature fusion-based method against existing algorithms and report the findings in Table 10. Our segmentation and feature fusion-based approach performed exceptionally well, achieving an accuracy of 99.3%, which outperforms all other existing methods, including [18], which achieved the second-highest accuracy of 97.01%. In contrast, the lowest accuracy of 84% was obtained by [42], which utilized a VggNet-LSTM classifier. This experiment demonstrates that our proposed technique excels in segmentation, feature extraction, and brain tumor classification. Figure 6 displays the comparative plot of the results.

5. Conclusions

This study proposes a robust brain tumor detection method based on feature fusion that performs efficiently without requiring preprocessing of the brain samples. The proposed method segments tumors in brain MRI images using the mayfly optimization technique with multilevel thresholding. Features are extracted using HOG for local features and ResNet-V2 for deep features. The fused features are then classified into three categories (pituitary, glioma, and meningioma) using a BiLSTM classifier. We trained and tested the model on the Figshare dataset and evaluated its robustness on the Harvard dataset. The proposed method achieved an accuracy of 99.3%, a recall of 99.1%, a precision of 98.3%, an F1 score of 99.1%, and an AUC of 0.989, outperforming state-of-the-art segmentation and DL-based brain tumor detectors.
This automated method can be used directly by radiologists and oncologists to assist in the early detection of brain tumors. Additionally, the system provides precise tumor locations that can aid physicians in making surgical decisions.
Our approach is limited by its training time, which could be addressed with high-performance computing systems. We also found that the system may have difficulty predicting the tumor type when the MR images are blurry. In the future, we aim to reduce the training time while maintaining the same level of performance, and to add a preprocessing step to enhance images in situations where high-resolution imaging tools are unavailable. Furthermore, we intend to apply the proposed method to detecting other cancers, such as lung, skin, and bone cancers.

Author Contributions

Conceptualization, R.M. and M.S.; methodology, R.M.; software, M.S.; validation, R.M.; formal analysis, B.H.; investigation, R.M.; resources, M.S. and B.H.; data curation, H.H.; writing—original draft preparation, H.H. and B.H.; writing—review and editing, L.L.; visualization, L.L.; supervision, L.L.; project administration, M.S.; funding acquisition, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by King Saud University through Researchers Supporting Program number (RSPD2023R704), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors present their appreciation to King Saud University for funding this research through Researchers Supporting Program number (RSPD2023R704), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Díaz-Pernas, F.J.; Martínez-Zarzuela, M.; Antón-Rodríguez, M.; González-Ortega, D. A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network. Healthcare 2021, 9, 153. [Google Scholar] [CrossRef]
  2. Amanullah, M.; Visumathi, J.; Sammeta, N.; Ashok, M. Convolutional neural network-based MRI brain tumor classification system. In AIP Conference Proceedings; AIP Publishing LLC: Melville, NY, USA, 2022. [Google Scholar]
  3. Chatterjee, I. Artificial intelligence and patentability: Review and discussions. Int. J. Mod. Res. 2021, 1, 15–21. [Google Scholar]
  4. Anitha, V.; Murugavalli, S. Brain tumour classification using two-tier classifier with adaptive segmentation technique. IET Comput. Vis. 2016, 10, 9–17. [Google Scholar] [CrossRef]
  5. El-Dahshan, E.S.A.; Mohsen, H.M.; Revett, K.; Salem, A.B.M. Computer-aided diagnosis of human brain tumor through MRI: A survey and a new algorithm. Expert Syst. Appl. 2014, 41, 5526–5545. [Google Scholar] [CrossRef]
  6. Mahum, R.; Rehman, S.U.; Okon, O.D.; Alabrah, A.; Meraj, T.; Rauf, H.T. A novel hybrid approach based on deep CNN to detect glaucoma using fundus imaging. Electronics 2021, 11, 26. [Google Scholar] [CrossRef]
  7. Mahum, R.; Munir, H.; Mughal, Z.-U.; Awais, M.; Khan, F.S.; Saqlain, M.; Mahamad, S.; Tlili, I. A novel framework for potato leaf disease detection using an efficient deep learning model. Hum. Ecol. Risk Assess. Int. J. 2022, 29, 303–326. [Google Scholar] [CrossRef]
  8. Munir, M.H.; Mahum, R.; Nafees, M.; Aitazaz, M.; Irtaza, A. An Automated Framework for Corona Virus Severity Detection Using Combination of AlexNet and Faster RCNN. Int. J. Innov. Sci. Technol. 2022, 3, 197–209. [Google Scholar]
  9. Masood, M.; Maham, R.; Javed, A.; Tariq, U.; Khan, M.A.; Kadry, S. Brain MRI analysis using deep neural network for medical of internet things applications. Comput. Electr. Eng. 2022, 103, 108386. [Google Scholar] [CrossRef]
  10. Mahum, R.; Aladhadh, S. Skin Lesion Detection Using Hand-Crafted and DL-Based Features Fusion and LSTM. Diagnostics 2022, 12, 2974. [Google Scholar] [CrossRef]
  11. Mahum, R.; Rehman, S.U.; Meraj, T.; Rauf, H.T.; Irtaza, A.; El-Sherbeeny, A.M.; El-Meligy, M.A. A novel hybrid approach based on deep cnn features to detect knee osteoarthritis. Sensors 2021, 21, 6189. [Google Scholar] [CrossRef]
  12. Jungo, A.; Meier, R.; Ermis, E.; Blatti-Moreno, M.; Herrmann, E.; Wiest, R.; Reyes, M. On the effect of inter-observer variability for a reliable estimation of uncertainty of medical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Cham, Switzerland, 2018. [Google Scholar]
  13. Tiwari, P.; Pant, B.; Elarabawy, M.M.; Abd-Elnaby, M.; Mohd, N.; Dhiman, G.; Sharma, S. Cnn based multiclass brain tumor detection using medical imaging. Comput. Intell. Neurosci. 2022, 2022, 1830010. [Google Scholar] [CrossRef] [PubMed]
  14. Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379. [Google Scholar] [CrossRef]
  15. Vimal Kurup, R.; Sowmya, V.; Soman, K. Effect of data pre-processing on brain tumor classification using capsulenet. In Proceedings of the International Conference on Intelligent Computing and Communication Technologies, Tehri, India, 20–21 April 2019; Springer: Singapore, 2019. [Google Scholar]
  16. Toğaçar, M.; Cömert, Z.; Ergen, B. Classification of brain MRI using hyper column technique with convolutional neural network and feature selection method. Expert Syst. Appl. 2020, 149, 113274. [Google Scholar] [CrossRef]
  17. Raja, P.S. Brain tumor classification using a hybrid deep autoencoder with Bayesian fuzzy clustering-based segmentation approach. Biocybern. Biomed. Eng. 2020, 40, 440–453. [Google Scholar] [CrossRef]
  18. Çinar, A.; Yildirim, M. Detection of tumors on brain MRI images using the hybrid convolutional neural network architecture. Med. Hypotheses 2020, 139, 109684. [Google Scholar] [CrossRef] [PubMed]
  19. Toğaçar, M.; Ergen, B.; Cömert, Z. BrainMRNet: Brain tumor detection using magnetic resonance images with a novel convolutional neural network model. Med. Hypotheses 2020, 134, 109531. [Google Scholar] [CrossRef]
  20. Nayak, D.R.; Dash, R.; Majhi, B. Automated diagnosis of multi-class brain abnormalities using MRI images: A deep convolutional neural network based method. Pattern Recognit. Lett. 2020, 138, 385–391. [Google Scholar] [CrossRef]
  21. Sachdeva, J.; Kumar, V.; Gupta, I.; Khandelwal, N.; Ahuja, C.K. A package-SFERCB-“Segmentation, feature extraction, reduction and classification analysis by both SVM and ANN for brain tumors”. Appl. Soft Comput. 2016, 47, 151–167. [Google Scholar] [CrossRef]
  22. Tahir, B.; Iqbal, S.; Khan, M.U.G.; Saba, T.; Mehmood, Z.; Anjum, A.; Mahmood, T. Feature enhancement framework for brain tumor segmentation and classification. Microsc. Res. Tech. 2019, 82, 803–811. [Google Scholar] [CrossRef]
  23. Kurdi, S.Z.; Ali, M.H.; Jaber, M.M.; Saba, T.; Rehman, A.; Damaševičius, R. Brain Tumor Classification Using Meta-Heuristic Optimized Convolutional Neural Networks. J. Pers. Med. 2023, 13, 181. [Google Scholar] [CrossRef]
  24. Aurna, N.F.; Abu Yousuf, M.; Abu Taher, K.; Azad, A.; Moni, M.A. A classification of MRI brain tumor based on two stage feature level ensemble of deep CNN models. Comput. Biol. Med. 2022, 146, 105539. [Google Scholar] [CrossRef] [PubMed]
  25. Badjie, B.; Ülker, E.D. A Deep Transfer Learning Based Architecture for Brain Tumor Classification Using MR Images. Inf. Technol. Control 2022, 51, 332–344. [Google Scholar] [CrossRef]
  26. Maqsood, S.; Damaševičius, R.; Maskeliūnas, R. Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass SVM. Medicina 2022, 58, 1090. [Google Scholar] [CrossRef] [PubMed]
  27. Rajinikanth, V.; Kadry, S.; Nam, Y. Convolutional-neural-network assisted segmentation and SVM classification of brain tumor in clinical MRI slices. Inf. Technol. Control 2021, 50, 342–356. [Google Scholar] [CrossRef]
  28. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559. [Google Scholar] [CrossRef]
  29. Kapur, J.N.; Sahoo, P.K.; Wong, A.K.C. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285. [Google Scholar] [CrossRef]
  30. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  31. Hamamci, A.; Kucuk, N.; Karaman, K.; Engin, K.; Unal, G. Tumor-cut: Segmentation of brain tumors on contrast enhanced MR images for radiosurgery applications. IEEE Trans. Med. Imaging 2011, 31, 790–804. [Google Scholar] [CrossRef]
  32. Vishnuvarthanan, G.; Rajasekaran, M.P.; Subbaraj, P.; Vishnuvarthanan, A. An unsupervised learning method with a clustering approach for tumor identification and tissue segmentation in magnetic resonance brain images. Appl. Soft Comput. 2016, 38, 190–212. [Google Scholar] [CrossRef]
  33. Sharif, M.; Tanvir, U.; Munir, E.U.; Khan, M.A.; Yasmin, M. Brain tumor segmentation and classification by improved binomial thresholding and multi-features selection. J. Ambient. Intell. Humaniz. Comput. 2018, 1–20. [Google Scholar] [CrossRef]
  34. Zhang, Y.-D.; Chen, S.; Wang, S.-H.; Yang, J.-F.; Phillips, P. Magnetic resonance brain image classification based on weighted-type fractional Fourier transform and nonparallel support vector machine. Int. J. Imaging Syst. Technol. 2015, 25, 317–327. [Google Scholar] [CrossRef]
  35. Wang, S.; Du, S.; Atangana, A.; Liu, A.; Lu, Z. Application of stationary wavelet entropy in pathological brain detection. Multimed. Tools Appl. 2018, 77, 3701–3714. [Google Scholar] [CrossRef]
  36. Nazir, M.; Wahid, F.; Khan, S.A. A simple and intelligent approach for brain MRI classification. J. Intell. Fuzzy Syst. 2015, 28, 1127–1135. [Google Scholar] [CrossRef]
  37. Yang, G.; Zhang, Y.; Yang, J.; Ji, G.; Dong, Z.; Wang, S.; Feng, C.; Wang, Q. Automated classification of brain images using wavelet-energy and biogeography-based optimization. Multimed. Tools Appl. 2016, 75, 15601–15617. [Google Scholar] [CrossRef]
  38. Vani, N.; Sowmya, A.; Jayamma, N. Brain tumor classification using support vector machine. Int. Res. J. Eng. Technol. 2017, 4, 792–796. [Google Scholar]
  39. Pashaei, A.; Sajedi, H.; Jazayeri, N. Brain tumor classification via convolutional neural network and extreme learning machines. In Proceedings of the 2018 8th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 25–26 October 2018. [Google Scholar]
  40. Mohsen, H.; El-Dahshan, E.-S.A.; El-Horbaty, E.-S.M.; Salem, A.-B.M. Classification using deep learning neural networks for brain tumors. Future Comput. Inform. J. 2018, 3, 68–71. [Google Scholar] [CrossRef]
  41. Citak-Er, F.; Firat, Z.; Kovanlikaya, I.; Ture, U.; Ozturk-Isik, E. Machine-learning in grading of gliomas based on multi-parametric magnetic resonance imaging at 3T. Comput. Biol. Med. 2018, 99, 154–160. [Google Scholar] [CrossRef]
  42. Shahzadi, I.; Tang, T.B.; Meriadeau, F.; Quyyum, A. CNN-LSTM: Cascaded framework for brain Tumour classification. In Proceedings of the 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), Sarawak, Malaysia, 3–6 December 2018. [Google Scholar]
  43. Saxena, P.; Maheshwari, A.; Maheshwari, S. Predictive modeling of brain tumor: A Deep learning approach. In Innovations in Computational Intelligence and Computer Vision; Springer: Singapore, 2021; pp. 275–285. [Google Scholar]
  44. Sultan, H.H.; Salem, N.M.; Al-Atabany, W. Multi-classification of brain tumor images using deep neural network. IEEE Access 2019, 7, 69215–69225. [Google Scholar] [CrossRef]
  45. Kaplan, K.; Kaya, Y.; Kuncan, M.; Ertunç, H. Brain tumor classification using modified local binary patterns (LBP) feature extraction methods. Med. Hypotheses 2020, 139, 109696. [Google Scholar] [CrossRef]
Figure 1. The flow diagram for the proposed model.
Figure 2. Some segmented samples from the dataset. (top) Original images; (bottom) segmented images.
Figure 3. ResNet’s blocks.
Figure 4. Some samples of brain MRI.
Figure 5. Comparison with existing methods on the Harvard dataset [32,33,34,35,36,37].
Figure 6. Comparison plot with existing techniques [18,39,40,41,42,43,44].
Table 1. Details of some existing methods.

| Sr. No. | Ref. | Type | Dataset | Issues | Advantages | Algorithm | Features | Performance |
|---|---|---|---|---|---|---|---|---|
| 1 | [12] | Segmentation | BraTs2021 | High computational complexity is required. | Better segmentation results | CNN-Transformer | CNN | 93.50% Dice score |
| 2 | [26] | Segmentation | Figshare and BRATS | Extra computational means are required. | Computationally efficient. | M-SVM | MobileNetV2 | Accuracy: 98.92% |
| 3 | [23] | Classification | Kaggle | Minimum optimized feature selection. | Locates the tumor accurately. | CNN | CNN | Accuracy: 98% |
| 4 | [24] | Classification | 3 benchmarks | High computational resources used. | Significant generalization. | Ensemble method | Xception, VGG19, EfficientNet, ResNet-50, and Inception-V3 | Accuracy: 98.96% |
| 5 | [27] | Segmentation and classification | TCIA | Additional computational resources | Identifies tumor locations accurately. | Hybrid approach | Hand-crafted + CNN | Accuracy: 98.89% |
Table 2. Layer information of the Bi-LSTM.

| Type | Output Shape | Number of Parameters |
|---|---|---|
| Feature input | - | 0 |
| LSTM-1 (forward pass) | 100, 500 | 161,700 |
| LSTM-2 (backward pass) | 100, 500 | 161,700 |
| Max pooling layer | 500 | 0 |
| FC + ReLU | 50 | 20,070 |
| Dropout | 50 | 0 |
| FC (Sigmoid) | 3 | - |
Table 3. Experimental environment details for the proposed model.

| Hardware | Conditions |
|---|---|
| RAM | 16 GB |
| Graphical Processing Unit | NVIDIA GeForce GTX |
| Central Processing Unit | Intel Core i5 |
| GPU Memory | 4 GB |
Table 4. Segmentation results on the Harvard dataset.

| Image | TC (%) | DOI (%) | Pixels | Area (nm²) |
|---|---|---|---|---|
| Image 1 | 90 | 96 | 3244 | 7.1 × 10^11 |
| Image 2 | 92 | 97 | 2234 | 8.2 × 10^12 |
| Image 3 | 92 | 97 | 2321 | 8.3 × 10^11 |
| Image 4 | 96 | 98 | 3243 | 6.2 × 10^14 |
| Image 5 | 93 | 97 | 3421 | 6.3 × 10^11 |
| Image 6 | 90 | 94 | 2933 | 5.6 × 10^13 |
| Image 7 | 91 | 95 | 3284 | 7.2 × 10^12 |
| Image 8 | 97 | 99 | 4122 | 5.1 × 10^10 |
| Image 9 | 96 | 98 | 2847 | 3.5 × 10^12 |
| Image 10 | 98 | 98 | 4354 | 8.6 × 10^10 |
| Image 11 | 91 | 97 | 5121 | 7.0 × 10^12 |
| Image 12 | 98 | 99 | 2038 | 4.4 × 10^13 |
Table 5. Segmentation results on the Figshare dataset.

| Image | TC (%) | DOI (%) | Pixels | Area (nm²) |
|---|---|---|---|---|
| Image 1 | 91 | 93 | 2215 | 6.1 × 10^13 |
| Image 2 | 92 | 96 | 2225 | 5.1 × 10^12 |
| Image 3 | 93 | 96 | 3235 | 7.4 × 10^14 |
| Image 4 | 99 | 98 | 3357 | 5.4 × 10^13 |
| Image 5 | 93 | 97 | 3452 | 6.1 × 10^14 |
| Image 6 | 91 | 95 | 5312 | 6.8 × 10^13 |
| Image 7 | 92 | 95 | 4463 | 7.3 × 10^13 |
| Image 8 | 98 | 97 | 4471 | 6.1 × 10^14 |
| Image 9 | 99 | 99 | 3386 | 6.8 × 10^12 |
| Image 10 | 98 | 98 | 3496 | 6.8 × 10^13 |
| Image 11 | 0.99 | 98 | 2323 | 6.3 × 10^14 |
| Image 12 | 0.99 | 91 | 4344 | 7.3 × 10^13 |
Table 6. Comparison of results with Vishnuvarthanan et al. [32] on the Harvard dataset.

| Technique | TC (%) | DOI (%) |
|---|---|---|
| Graph Cut | 27 | 43 |
| SOM | 23 | 37 |
| SOM-FKM | 31 | 47 |
| FKM | 22 | 36 |
| Kernel | 22 | 36 |
| Our approach | 99 | 97 |
Table 7. Classification results on the Figshare dataset.

| Algorithm | Accuracy (%) | Recall (%) | Precision (%) | AUC | F1 Score (%) |
|---|---|---|---|---|---|
| DT | 97.3 | 96.8 | 97.2 | 0.900 | 97.2 |
| SVM | 98.3 | 98.1 | 98.9 | 0.912 | 98.5 |
| BiLSTM | 99.3 | 99.1 | 98.3 | 0.989 | 99.1 |
Table 8. Classification results on the Harvard dataset.

| Algorithm | Accuracy (%) | Recall (%) | Precision (%) | AUC | F1 Score (%) |
|---|---|---|---|---|---|
| DT | 99.0 | 98.2 | 98.9 | 0.921 | 98.0 |
| SVM | 98.1 | 97.4 | 97.9 | 0.901 | 97.2 |
| BiLSTM | 99.1 | 98.1 | 98.2 | 0.974 | 98.3 |
Table 9. Comparison with state-of-the-art methods on the Harvard dataset.

| Algorithm | Year | Accuracy (%) |
|---|---|---|
| [32] | 2016 | 96.18 |
| [33] | 2018 | 99.69 |
| [34] | 2015 | 98.89 |
| [35] | 2018 | 98.67 |
| [36] | 2015 | 91.8 |
| [37] | 2015 | 97.78 |
| Proposed | 2023 | 99.1 |
Table 10. Comparison with existing models based on deep learning.

| Reference | Year | Method | Accuracy (%) |
|---|---|---|---|
| [39] | 2018 | CNN + KELM | 93.68 |
| [40] | 2018 | DWT | 93.94 |
| [41] | 2018 | SVM, Perceptron, and Logistic Regression | 93 |
| [42] | 2018 | VggNet-LSTM | 84 |
| [43] | 2019 | ResNet-50 | 95 |
| [44] | 2019 | Deep NN | 96.13 |
| [18] | 2020 | Improved Model | 97.01 |
| [45] | 2020 | nLBP + KNN | 95.56 |
| Proposed | 2023 | ResNet-V2 + HoG + BiLSTM | 99.3 |