Article

Automated Intracranial Hematoma Classification in Traumatic Brain Injury (TBI) Patients Using Meta-Heuristic Optimization Techniques

Vidhya V, U. Raghavendra, Anjan Gudigar, Praneet Kasula, Yashas Chakole, Ajay Hegde, Girish Menon R, Chui Ping Ooi, Edward J. Ciaccio and U. Rajendra Acharya
1 Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
2 Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
3 Institute of Neurological Sciences, Glasgow G51 4LB, UK
4 Department of Neurosurgery, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
5 School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
6 Department of Medicine, Columbia University, New York, NY 10027, USA
7 School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
8 Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore 599491, Singapore
9 Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 413305, Taiwan
* Author to whom correspondence should be addressed.
Informatics 2022, 9(1), 4; https://doi.org/10.3390/informatics9010004
Submission received: 2 December 2021 / Revised: 31 December 2021 / Accepted: 4 January 2022 / Published: 10 January 2022
(This article belongs to the Special Issue Feature Papers in Medical and Clinical Informatics)

Abstract: Traumatic Brain Injury (TBI) is a devastating and life-threatening medical condition that can result in long-term physical and mental disabilities and even death. Early and accurate detection of Intracranial Hemorrhage (ICH) in TBI is crucial for analysis and treatment, as the condition can deteriorate significantly with time. Hence, a rapid, reliable, and cost-effective computer-aided approach that can capture the hematoma features at an early stage is highly relevant for real-time clinical diagnostics. In this study, the Gray Level Co-occurrence Matrix (GLCM), the Gray Level Run Length Matrix (GLRLM), and Hu's moments are used to generate texture features. The best set of discriminating features is obtained using various meta-heuristic algorithms, and these optimal features are presented to different classifiers. Synthetic samples are generated using ADASYN to compensate for the class imbalance in the data. The proposed CAD system attained 95.74% accuracy, 96.93% sensitivity, and 94.67% specificity using statistical and GLRLM features along with a KNN classifier. Thus, the developed automated system can enhance the accuracy of hematoma detection, aid clinicians in the fast interpretation of CT images, and streamline triage workflow.

1. Introduction

Traumatic Brain Injury (TBI) is a neurological disorder with high rates of disability and mortality worldwide. TBI includes both primary and secondary injuries, which can progressively deteriorate brain function. Hence, most TBI survivors suffer from physical and mental disabilities that require long-term support and medical attention [1,2,3]. TBI can cause accumulation of blood (hemorrhage) inside the cranium, leading to increased intracranial pressure. The global annual incidence and mortality of TBI are estimated to be 369 and 20 per 100,000 subjects, respectively. Approximately 5–10% of mortality is due to injuries, and 40% of this mortality can be attributed to TBI. There was an 8.4% increase in the age-standardized prevalence of TBI between 1990 and 2016 [4]. Therefore, early identification and diagnosis of hemorrhage are crucial for TBI severity assessment and patient management.
Computed Tomography (CT) is the gold standard for hematoma detection due to its high speed, wide availability, low cost, and high sensitivity [5,6]. However, rapid and accurate manual diagnosis of Intracranial Hemorrhage (ICH) is a tedious and laborious task due to the inherent limitations of CT grayscale images, including noise, artefacts, uneven boundaries, variations in pixel-wise intensities, and poor tissue contrast. Existing research studies show that significant misinterpretations and discrepancies are a major problem in the detection of hematoma, especially by resident doctors working without input from expert radiologists [7,8]. Manual quantification of hematoma is also subject to observer variability and may introduce estimation errors, particularly for large, irregular, and acute hematoma cases [9,10]. Manual inspection and estimation is a demanding process that may generate inadvertent delays and errors, particularly in a large clinical set-up [11,12]. Therefore, automated techniques for fast, reliable, and accurate detection of ICH using Computer Aided Diagnosis (CAD) systems can facilitate better clinical care and patient outcomes.

2. Related Work

CAD systems offer more reliable, reproducible, and accurate clinical features, which can aid clinicians in appropriate treatment, planning, and strategic decision-making [13,14]. Computer-assisted techniques can significantly reduce human error and enable quick and cost-effective detection and evaluation of hematoma.
Several CAD systems proposed for brain-related classification are summarized below. A Probabilistic Neural Network (PNN) classifier combined with entropy features obtained a classification accuracy of 97.37% [15]. An SVM with wavelet, GLCM, and statistical features yielded an accuracy of 80% [16]. A hierarchical classification approach with handcrafted features has been used for multiclass labelling [17]. An automated model using shape-based features and a logistic classifier yielded an accuracy of 92% [18]. A hematoma classification technique applied the C4.5 algorithm to a set of features extracted from the axes of the major hyperdense areas in CT slices [19]. An SVM-based pathological slice detection algorithm compared the texture and histogram features extracted from the two brain hemispheres and yielded an accuracy of 90% [20]. A subarachnoid hemorrhage (SAH) detection model applied a Bayesian classifier with distance features obtained from different anatomical landmarks and yielded a sensitivity of 100% [21]. An automated model extracted features pertaining to position, shape, and size for the classification of segmented blood clusters; the authors reported a sensitivity of 98% [22]. A bleed area detection approach that used location and intensity features in CT slices reported a sensitivity of 82.5% [23]. A random forest classifier with handcrafted intensity features was able to predict the voxel-level ICH probability with a DSI of 0.899 [24]. A symmetry-based detection approach was able to diagnose acute ICH in three-dimensional CT images with an accuracy of 80.6% [25]. A hematoma detection technique involving adaptive thresholding, case-based reasoning, and a genetic algorithm was proposed in [26].
From the above literature, it is evident that the existing CAD systems have utilized various features and machine learning algorithms for hematoma classification. However, only a handful of studies include the removal of noisy and redundant features, which is an important step for significantly improving classification performance, especially for large and challenging heterogeneous datasets. The majority of the reported systems involve complex engineering techniques such as image registration and skull stripping in the initial phases of automation. These methods require specific rules and selection or adjustment of control parameters to obtain maximum performance. Some of these methods are time consuming, expensive, and require manual involvement at various levels. Hence, there is a need for a fast, accurate, efficient, and fully automated CAD system for hematoma detection that will lead to improved patient outcome and quality of care. The main objective of this research study is to develop a simple, rapid, and efficient CAD system for identification of hematoma with significant and discerning sets of textural features.

3. Materials

We used the publicly available CQ500 database to conduct this study. The CQ500 dataset [27] comprises CT scans of 491 subjects and was originally employed to construct a Convolutional Neural Network (CNN)-based model for the fully automated classification of hematoma subtypes, calvarial fractures, and midline shift. We used a total of 1831 CT images, of which 1000 are healthy and 831 contain hematoma. The CT images were initially converted to JPG format with dimensions of 512 × 512 pixels. A sample set of normal and abnormal axial CT images is shown in Figure 1.

4. Proposed Research Framework

Our novel approach can automatically classify normal and ICH images with a minimal set of powerful features and supervised classifiers. The proposed automated classification technique consists of four major steps. Initially, pre-processing is carried out to remove noise and artefacts present in the CT images and to extract the brain region for further processing. Secondly, various textural features are extracted from the pre-processed images using several methods, which are detailed in Section 4.2. Thereafter, the essential and powerful discriminating features are selected using different meta-heuristic algorithms. Finally, the set of refined features is presented to various classifiers to predict normal versus hematoma imagery. The outline of the proposed technique is illustrated in Figure 2.

4.1. Preprocessing

Pre-processing is performed as a fundamental step to enhance the quality of the input images, which aids the subsequent stages of image analysis. Pre-processing facilitates the removal of noise and of unwanted areas in the input images (such as the skull and scalp), as well as the extraction of the intracranial region of the brain. Contrast Limited Adaptive Histogram Equalization (CLAHE) [28] is used, followed by Otsu's thresholding [29], in order to obtain binarized image data. The largest connected component is selected as the skull mask, and the region of interest (ROI) is extracted by masking the enhanced images. Sample images after pre-processing are shown in Figure 3.
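As an illustration of this pipeline, the following is a minimal sketch using OpenCV; the CLAHE clip limit, tile size, and connectivity are assumed values, not the exact settings used in this study.

```python
import cv2
import numpy as np

def preprocess_ct_slice(img_gray: np.ndarray) -> np.ndarray:
    """Enhance a grayscale CT slice, binarize it, and mask the largest connected component."""
    # CLAHE contrast enhancement (clip limit and tile size are assumptions)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img_gray)

    # Otsu's thresholding on the enhanced slice
    _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Keep the largest connected component as the head/skull mask
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if n_labels <= 1:
        return enhanced  # nothing segmented; return the enhanced slice unchanged
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    mask = np.uint8(labels == largest)

    # Mask the enhanced image so only the region of interest remains
    return cv2.bitwise_and(enhanced, enhanced, mask=mask)

# Example: roi = preprocess_ct_slice(cv2.imread("slice.jpg", cv2.IMREAD_GRAYSCALE))
```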

4.2. Feature Extraction

Features are unique characteristics that help differentiate various input patterns. Hence, the identification of discriminant features is very important for distinguishing normal from abnormal CT imagery. In the proposed technique, various features that describe the texture and intensity variations are used. The texture features in each image are extracted using several techniques, namely, the Gray Level Co-occurrence Matrix (GLCM) and the Gray Level Run Length Matrix (GLRLM). Features related to first-order statistics, such as kurtosis, skewness, and variance, are also extracted to capture the pixel-level details present in the input imagery. Hu's seven invariant moments are also utilized for feature extraction.

4.2.1. Gray Level Co-Occurrence Matrix (GLCM)

The Gray Level Co-occurrence Matrix considers the spatial relationships among pairs of pixels using co-occurrence matrices. An entry P(i, j | d, θ) in the GLCM indicates the number of occurrences of the pixel intensity pair [i, j] in the image at a distance d in the direction θ [30,31]. Haralick et al. [30] proposed various second-order statistical features based on the GLCM to characterize the texture present in an image. The computed set of GLCM features includes homogeneity, entropy, contrast, correlation, energy, angular second-order moments, inverse difference moments, and their variants [30,31,32].
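For reference, a short sketch of GLCM feature computation with scikit-image is given below; the distance, angles, and the subset of Haralick properties shown are illustrative and do not reproduce the full feature set used in this work.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(block: np.ndarray) -> np.ndarray:
    """Compute a small vector of Haralick-style GLCM features for a uint8 image block."""
    glcm = graycomatrix(block,
                        distances=[1],                                    # pixel-pair distance d
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],  # directions theta
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation", "ASM"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```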

4.2.2. Gray Level Run Length Matrix (GLRLM)

The Gray Level Run Length Matrix characterizes texture using the run lengths of the intensity values present in the image. A run length is the number of successive pixels with the identical intensity value in a given direction. Hence, each entry P(i, j | θ) in the GLRLM denotes the frequency with which the intensity value i appears in the image with run length j [33]. The various texture features computed from the GLRLM include run percentage, gray-level non-uniformity, short-run emphasis, run length non-uniformity, long-run emphasis, and their variants [33,34].
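The sketch below illustrates the idea for horizontal runs only, with two classic descriptors (short- and long-run emphasis); a full GLRLM implementation covers several directions and many more features than shown here.

```python
import numpy as np
from itertools import groupby

def glrlm_horizontal(img: np.ndarray, levels: int = 8) -> np.ndarray:
    """Quantize a uint8 image to `levels` gray levels and count horizontal runs.
    Returns P where P[i, j - 1] is the number of runs of level i with length j."""
    q = (img.astype(np.float64) / 256 * levels).astype(int).clip(0, levels - 1)
    P = np.zeros((levels, img.shape[1]), dtype=np.int64)
    for row in q:
        for level, run in groupby(row):               # consecutive pixels with equal level
            P[level, len(list(run)) - 1] += 1
    return P

def run_length_features(P: np.ndarray) -> dict:
    """Two classic GLRLM descriptors derived from the run-length matrix P."""
    j = np.arange(1, P.shape[1] + 1)                  # possible run lengths
    n_runs = P.sum()
    sre = (P / j**2).sum() / n_runs                   # short-run emphasis
    lre = (P * j**2).sum() / n_runs                   # long-run emphasis
    return {"SRE": float(sre), "LRE": float(lre)}
```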

4.2.3. Hu’s Invariant Moments

Hu proposed seven invariant moments that remain insensitive to parallel projection and to the geometrical transformations of an image, namely, translation, scaling, and rotation [35,36]. The moment of order (p + q) for a two-dimensional function f(x, y) is given as
$$ m_{pq} = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} x^{p}\, y^{q}\, f(x,y)\, dx\, dy $$
where p = 0, 1, 2, … and q = 0, 1, 2, …
The central moments can be defined as [35,36]
$$ \mu_{pq} = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} (x - \bar{x})^{p} (y - \bar{y})^{q} f(x,y)\, dx\, dy $$
where $\bar{x}$ and $\bar{y}$ are the centroids of the image, computed as
$$ \bar{x} = \frac{m_{10}}{m_{00}}, \qquad \bar{y} = \frac{m_{01}}{m_{00}} $$
The central moments are further normalized to make them insensitive to scale, which can be defined as
$$ \eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\left(1 + \frac{p+q}{2}\right)}} $$
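In practice these invariants are rarely computed by hand; a short sketch with OpenCV is shown below, where the log transform is a common (assumed) way to compress their dynamic range, not a step taken from the paper.

```python
import cv2
import numpy as np

def hu_moments(block: np.ndarray) -> np.ndarray:
    """Return Hu's seven invariant moments of a grayscale image block."""
    m = cv2.moments(block)               # raw, central, and normalized central moments
    hu = cv2.HuMoments(m).flatten()      # the seven invariants built from the eta moments
    # Signed log scaling (an assumption, not from the paper) to compress magnitude
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```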

4.3. Synthetic Sample Generation

Learning from imbalanced data adversely affects the performance of a classification model. Synthetic data generation techniques can be used to balance the normal and abnormal classes in the dataset. In this work, we used Adaptive Synthetic Sampling (ADASYN) to generate artificial samples of the minority hematoma class. ADASYN utilizes a weighted distribution over the minority samples, according to how difficult they are to learn, to decide how many artificial samples should be generated for each of them [37]. The numbers of majority and minority class samples are used to estimate the total number of synthetic samples to be generated [37]. The current study included 831 hematoma images and 1000 normal images. ADASYN was applied to generate synthetic samples for different subsets of extracted features, as shown in Table 1.
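A hedged sketch of this oversampling step with the imbalanced-learn library is shown below; the neighborhood size and random seed are illustrative choices rather than the settings used in the study.

```python
import numpy as np
from imblearn.over_sampling import ADASYN

def balance_with_adasyn(X: np.ndarray, y: np.ndarray, seed: int = 0):
    """X: (n_samples, n_features) feature matrix; y: labels (0 = normal, 1 = hematoma)."""
    ada = ADASYN(n_neighbors=5, random_state=seed)  # neighborhood size is an assumption
    X_res, y_res = ada.fit_resample(X, y)           # synthesizes minority-class samples
    return X_res, y_res
```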

4.4. Feature Optimization

Feature optimization is the process of generating feature subsets from high-dimensional datasets that are less redundant and possess greater discriminative power, thereby leading to high classification accuracy. Traditional optimization techniques are less efficient, particularly in the case of high-dimensional datasets, as they generate a single locally optimal solution as the final subset [38,39,40]. Meta-heuristic algorithms are applied to obtain efficient and effective solutions while preserving the accuracy of classification [38,39]. Numerous nature-inspired meta-heuristic algorithms are popular as they use the knowledge of previous iterations of the population to deliver near-optimal solutions [39,40]. In this paper, the Bat Algorithm (BA), Grey Wolf Optimization (GWO), and the Whale Optimization Algorithm (WOA) were selected to generate the best set of features.
The Bat Algorithm uses the echolocation behavior of microbats, which allows them to find obstacles, identify the type of prey and its distance, and hunt in complete darkness [41]. The algorithm is simple to implement and rapidly generates near-optimal solutions. The combination of swarm intelligence with echolocation makes it more powerful and effective than many other optimization algorithms [42]. The Grey Wolf Optimization (GWO) algorithm imitates the four-level hierarchy and hunting behavior of grey wolves [43,44]. The optimal solution is based on the solutions offered by the alpha, beta, and delta wolves instead of a single solution. Hence, GWO significantly reduces the chance of generating sub-optimal solutions and offers superior precision and speed [44]. The Whale Optimization Algorithm (WOA) mimics the bubble-net hunting behavior of humpback whales [45,46]. The most powerful characteristic of WOA is its ability to balance the exploration and exploitation phases of the search process. The algorithm requires fewer operators for its implementation and provides high flexibility, simplicity, and convergence speed [45,46]. A brief explanation of the three meta-heuristic algorithms is included in the subsections below.
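A common wrapper-style objective for such feature selection, shown below as an assumption rather than the authors' exact formulation, scores a binary feature mask by the cross-validated error of a KNN classifier plus a small penalty on the number of retained features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def feature_subset_fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray,
                           alpha: float = 0.99) -> float:
    """Lower is better. mask is a boolean vector over the feature columns of X."""
    if not mask.any():
        return 1.0                                   # penalize the empty subset
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=5).mean()
    # Trade classification error against the fraction of selected features
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.mean()
```

Each meta-heuristic described below can then search over candidate masks (for example, by thresholding continuous positions) to minimize such a fitness.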

4.4.1. Bat Algorithm

The Bat Algorithm uses the echolocation behavior of microbats to detect prey [41]. The entire group of bats is assigned a constant frequency fmin, loudness A0, and wavelength λ. Each bat in the population is initialized with a position xi and velocity vi, and the bats can modify their pulse emission rate in the range of [0, 1] based on their proximity to the target. The frequency is adjusted to generate a new solution, and the position and velocity of each bat are updated in each iteration. The solutions proposed by all the bats in each iteration are compared to obtain the global solution [41,47]. Each bat also generates a local solution around the current global solution by random flying. The pulse emission rate and the loudness are updated in every iteration; as each bat nears its target, it reduces its loudness and increases the rate of emitting pulses. Once the bat has detected its prey, the loudness is reduced to zero [41,47].
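A compact sketch of these update rules on a toy continuous objective is given below; the population size, frequency range, and the alpha/gamma constants are standard illustrative defaults, not values reported in the paper.

```python
import numpy as np

def bat_algorithm(obj, dim=10, n_bats=20, iters=100,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=0):
    """Minimize obj(x) with the standard Bat Algorithm updates."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_bats, dim))            # positions
    v = np.zeros((n_bats, dim))                          # velocities
    A = np.ones(n_bats)                                  # loudness A0
    r = np.zeros(n_bats)                                 # pulse emission rates
    fit = np.apply_along_axis(obj, 1, x)
    best = x[fit.argmin()].copy()
    for t in range(1, iters + 1):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()    # frequency tuning
            v[i] += (x[i] - best) * freq
            cand = x[i] + v[i]
            if rng.random() > r[i]:                          # local walk around the best bat
                cand = best + 0.01 * rng.standard_normal(dim) * A.mean()
            f_cand = obj(cand)
            if f_cand <= fit[i] and rng.random() < A[i]:     # accept, then get quieter
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha                                # loudness decreases near the prey
                r[i] = 1.0 - np.exp(-gamma * t)              # pulse rate increases
            if f_cand < obj(best):
                best = cand.copy()
    return best

# Example: bat_algorithm(lambda z: float(np.sum(z * z)))
```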

4.4.2. Grey Wolf Optimization

The Grey Wolf Optimization algorithm models the leadership and hunting characteristics of grey wolves [43]. A four-level hierarchy is followed, which includes alpha, beta, delta, and omega wolves. The alpha wolf may be male or female and plays the leading role in the pack, i.e., it is the decision maker for hunting, discipline, sleep, and wake-up time. The beta wolf is the best candidate to succeed the alpha wolf and supports the alpha in decision-making and various other activities [37,40]. The delta wolves, which include the elderly, caretakers, sentinels, and scouts, are superior to the omega wolves. The omega wolves are responsible for maintaining the hierarchical structure. The algorithm begins with a random population of wolves in the search space, and the position of each wolf is updated with respect to the prey by adjusting the parameters a and C [37,40]. The parameter a, which is initialized to 2, is reduced to 0 over the course of the iterations so that the wolves close in on the prey. It is assumed that the alpha, beta, and delta wolves have a superior assessment of the location of the prey, and the rest of the wolves update their positions accordingly. The location of the alpha wolf is taken as the best solution at the end of all iterations [40,43].
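The core update can be condensed as follows; this is a minimal continuous-space sketch (for feature selection, positions are typically binarized with a transfer function, which is omitted here), with illustrative population and iteration counts.

```python
import numpy as np

def grey_wolf_optimizer(obj, dim=10, n_wolves=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimize obj(x) with the standard Grey Wolf Optimizer."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    fit = np.apply_along_axis(obj, 1, X)
    order = fit.argsort()
    alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                     # a decreases from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):       # guidance from the three leading wolves
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])
                new += leader - A * D
            X[i] = np.clip(new / 3.0, lb, ub)         # average of the three guided moves
        fit = np.apply_along_axis(obj, 1, X)
        order = fit.argsort()
        alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()
    return alpha                                      # the alpha wolf is the best solution

# Example: grey_wolf_optimizer(lambda z: float(np.sum(z * z)))
```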

4.4.3. Whale Optimization

The Whale Optimization Algorithm models the unique hunting strategy of humpback whales, called the bubble-net feeding technique [45]. The algorithm begins with a set of n whales distributed randomly in the d-dimensional search space, and the best solution is identified. The remaining whales then update their positions based on the current best solution. Further, each whale encircles the prey with a spiral-shaped net of bubbles and moves along a spiral trajectory to attack the prey [45,48]. There is a 50% probability of choosing the encircling mechanism or following the spiral path in each iteration [45]. Moreover, each whale can update its position with respect to a randomly chosen whale, thus facilitating a global search. The best solution is returned once all iterations are completed and the termination criteria are satisfied.
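A condensed sketch of these mechanics is given below; constants such as the spiral shape and the 50/50 branching follow the usual WOA description and are not specific to this study.

```python
import numpy as np

def whale_optimization(obj, dim=10, n_whales=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimize obj(x) with the standard Whale Optimization Algorithm."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    fit = np.apply_along_axis(obj, 1, X)
    best = X[fit.argmin()].copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                       # decreases from 2 to 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            if rng.random() < 0.5:                      # encircling / random search branch
                if abs(A) < 1.0:                        # exploit: shrink towards the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                   # explore: move towards a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                       # spiral (bubble-net) update
                l = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < obj(best):
            best = X[fit.argmin()].copy()
    return best

# Example: whale_optimization(lambda z: float(np.sum(z * z)))
```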

4.5. Classification

Classification is the final task of categorizing input images based on extracted features and assigning class labels to them. Supervised classifiers are initially trained on a labelled set of data and then used to predict the class of the unknown data. In this work, we used fine, weighted, and ensemble subspace k-Nearest Neighbor (KNN) [49]; wide and medium Neural Network (NN) [50]; and Cubic Support Vector Machine (SVM) [50]. Various performance metrics such as accuracy, sensitivity, and specificity were calculated based on a 5-, 7-, and 10-fold cross-validation framework to evaluate the proposed research work.
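A hedged sketch of this evaluation protocol with scikit-learn is shown below; the neighborhood size is an assumed hyperparameter, and sensitivity and specificity are derived from the pooled cross-validated confusion matrix.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

def evaluate_knn(X: np.ndarray, y: np.ndarray, k_folds: int = 7, n_neighbors: int = 5) -> dict:
    """k-fold cross-validation of a KNN classifier; y uses 0 = normal, 1 = hematoma."""
    clf = KNeighborsClassifier(n_neighbors=n_neighbors)
    y_pred = cross_val_predict(clf, X, y, cv=k_folds)
    tn, fp, fn, tp = confusion_matrix(y, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate on hematoma cases
        "specificity": tn / (tn + fp),   # true negative rate on normal cases
    }
```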

5. Results

The non-contrast axial CT images are initially pre-processed to remove the unwanted regions and to extract the brain tissue for further analysis. Each image is subdivided into blocks of 128 × 128 pixels, and features are obtained from each sub-block using the various feature extraction schemes described earlier. The different feature extraction schemes and the total number of extracted features are given in Table 2.
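Given 512 × 512 slices, 128 × 128 non-overlapping blocks give sixteen blocks per image, which is consistent with the per-scheme feature counts in Table 2 (e.g. 112 = 16 × 7 Hu's moments). A minimal sketch of this subdivision, assuming non-overlapping blocks, is shown below.

```python
import numpy as np

def split_into_blocks(img: np.ndarray, block: int = 128) -> np.ndarray:
    """Split a (512, 512) slice into a (16, 128, 128) stack of non-overlapping blocks."""
    h, w = img.shape
    return (img.reshape(h // block, block, w // block, block)
               .swapaxes(1, 2)
               .reshape(-1, block, block))
```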
The synthetic samples for the features obtained using each of the feature extraction schemes are generated using ADASYN, as shown in Table 1. The subsets of features are then subjected to different meta-heuristic algorithms for the selection of optimal feature sets. Finally, different classifiers are used to test the efficiency of the optimal subsets of features. Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 present the best performances obtained using 5-, 7-, and 10-fold cross-validation schemes. The proposed technique achieved an optimum performance of 95.74% accuracy, a sensitivity of 96.93%, and a specificity of 94.67% using a combination of GLRLM and statistical features along with the Grey Wolf Optimization technique. Table 10 shows the best performance of each classifier in the proposed model. Figure 4 shows the boxplots obtained before and after applying Grey Wolf Optimization for GLCM and Hu's moments, as well as for GLRLM and statistical features, respectively. The ROC curves for the various classifiers used in the approach are shown in Figure 5. The entire proposed technique was implemented and tested in the MATLAB environment on a system with an Intel Core i5-7200U (2.50 GHz) processor and 4 GB of RAM.

6. Discussion

This paper presents a fully automated technique for the diagnosis of hematoma in non-contrast CT images. The proposed technique can clearly categorize normal and hematoma classes with an accuracy of 95.74% using an optimizable KNN classifier. It is observed that the combination of GLRLM and statistical features is powerful in capturing the structural variations in the CT imagery. The proposed model is an initial attempt at hematoma classification using various meta-heuristic algorithms for optimal feature selection. It is observed from Table 10 that the optimal classification performance is achieved using the Grey Wolf Optimization technique. Table 11 shows a quantitative comparison of various CAD schemes for the classification of normal and hematoma subjects using CT images. It is noted that the proposed approach handled a larger number of images effectively using the optimized set of GLRLM and statistical features. It is observed from Figure 4 that both feature extraction schemes clearly distinguish normal versus hematoma images using the first ranked feature. From Figure 5, the area under the ROC curve ranges from 0.94 to 0.97, which shows that the classifiers are highly adept at distinguishing healthy versus hematoma subjects. Another significant characteristic of the proposed model is the use of multiple cross-validation schemes; it is evident that the seven-fold cross-validation scheme achieved the optimal performance. The optimizable KNN is able to distinguish normal versus pathological images with a specificity of 94.67%. The proposed model also achieved a sensitivity of 96.93% in discerning the hematoma subjects. Hence, the developed fully automated model can assist attending physicians in interpreting CT scans swiftly and accurately for effective decision-making and treatment. This, in turn, can help improve patient outcomes. Figure 6 presents the architecture of a futuristic ICH diagnosis model based on the Internet of Things (IoT) cloud platform, wherein remote diagnostic feedback and advice reach the patient swiftly through the doctor, facilitating quality patient care.
The prominent characteristics of the proposed CAD model are as follows:
  • Achieved a classification accuracy of 95.74% in categorizing normal versus hematoma patients.
  • The features are selected using meta-heuristic algorithms, which generate near-optimal feature subsets that improve overall performance.
  • The system is highly robust, as the method is evaluated using 5-, 7-, and 10-fold cross-validation schemes.
  • A relatively large dataset is used, which consists of 1831 non-contrast axial CT images.

7. Conclusions

In this research study, a fully automated CAD system for discerning normal and hematoma images is developed. With the aid of an optimizable KNN classifier, the proposed method achieved a maximum accuracy of 95.74%, a sensitivity of 96.93%, and a specificity of 94.67% using a combination of GLRLM and statistical features. The obtained results show that the proposed technique is accurate and robust and may assist doctors in strategic decision-making and treatment planning, particularly during critical and emergency scenarios. The performance of the proposed technique should be validated on larger and more diverse datasets for real-time applicability. Hence, our future work aims to include more subjects and to perform classification of the various types of hematoma. Additionally, we would like to incorporate more features and deep CNN architectures to design fast and powerful CAD schemes for hematoma diagnosis.

Author Contributions

Conceptualization, V.V., U.R. and A.G.; methodology, V.V., U.R. and A.G.; software, V.V., U.R., P.K. and Y.C.; validation, G.M.R., A.H., C.P.O., E.J.C. and U.R.A.; writing—review and editing, C.P.O., E.J.C. and U.R.A.; visualization, V.V., U.R. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Manipal Academy of Higher Education (MAHE) for providing the required facility to carry out this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kim, J.J.; Gean, A.D. Imaging for the diagnosis and management of traumatic brain injury. Neurotherapeutics 2011, 8, 39–53.
2. Bramlett, H.M.; Dietrich, W.D. Long-Term Consequences of Traumatic Brain Injury: Current Status of Potential Mechanisms of Injury and Neurological Outcomes. J. Neurotrauma 2015, 32, 1834–1848.
3. McKee, C.; Daneshvar, D.H. The neuropathology of traumatic brain injury. Handb. Clin. Neurol. 2015, 127, 45–66.
4. James, S.L.; Theadom, A.; Ellenbogen, R.G.; Bannick, M.S.; Montjoy-Venning, W.; Lucchesi, L.R.; Abbasi, N.; Abdulkader, R.; Abraha, H.N.; Adsuar, J.C.; et al. Global, regional, and national burden of traumatic brain injury and spinal cord injury, 1990–2016: A systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol. 2019, 18, 56–87.
5. Badenes, R.; Bilotta, F. Neurocritical care for intracranial haemorrhage: A systematic review of recent studies. Br. J. Anaesth. 2015, 115, 68–74.
6. Lee, B.; Newberg, A. Neuroimaging in traumatic brain imaging. NeuroRx 2005, 2, 372–383.
7. Strub, W.M.; Leach, J.L.; Tomsick, T.; Vagal, A. Overnight preliminary head CT interpretations provided by residents: Locations of misidentified intracranial hemorrhage. Am. J. Neuroradiol. 2007, 28, 1679–1682.
8. Lal, N.R.; Murray, U.M.; Eldevik, O.P.; Desmond, J.S. Clinical consequences of misinterpretations of neuroradiologic CT scans by on-call radiology residents. Am. J. Neuroradiol. 2000, 21, 124–129.
9. Daunis-I-Estadella, J.; Boada, I.; Bardera, A.; Castellanos, M.; Serena, J.; Castellanos, M.D.M. Reliability of the ABC/2 method in determining acute infarct volume. J. Neuroimaging 2011, 22, 155–159.
10. Webb, J.S.; Ullman, N.L.; Morgan, T.C.; Muschelli, J.; Kornbluth, J.; Awad, I.A.; Mayo, S.; Rosenblum, M.; Ziai, W.; Zuccarrello, M.; et al. Accuracy of the ABC/2 score for intracerebral hemorrhage: Systematic review and analysis of MISTIE, CLEAR-IVH, and CLEAR III. Stroke 2015, 46, 2470–2476.
11. Chan, K.T.; Carroll, T.; Linnau, K.F.; Lehnert, B. Expectations among academic clinicians of inpatient imaging turnaround time: Does it correlate with satisfaction? Acad. Radiol. 2015, 22, 1449–1456.
12. Ayaz, H.; Izzetoglu, M.; Izzetoglu, K.; Onaral, B.; Ben, B. Early diagnosis of traumatic intracranial hematomas. J. Biomed. Opt. 2021, 24, 051411.
13. Kakhandaki, N.; Kulkarni, S.B. Identification of normal and abnormal brain hemorrhage on magnetic resonance images. Cogn. Inform. Comput. Model. Cogn. Sci. 2020, 1, 71–91.
14. Khan, M.A.; Sarfraz, M.S.; Alhaisoni, M.; Albesher, A.A.; Wang, S.; Ashraf, I. StomachNet: Optimal deep learning features fusion for stomach abnormalities classification. IEEE Access 2020, 8, 197969–197981.
15. Raghavendra, U.; Gudigar, A.; Vidhya, V.; Rao, B.N.; Sabut, S.; Wei, J.K.; Ciaccio, E.J.; Acharya, U.R. Novel and accurate non-linear index for the automated detection of haemorrhagic brain stroke using CT images. Complex Intell. Syst. 2021, 7, 929–940.
16. Liu, R.; Tan, C.L.; Leong, T.Y. Hemorrhage Slices Detection in Brain CT Images. In Proceedings of the 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8 December 2008.
17. Shahangian, B.; Pourghassem, H. Automatic brain hemorrhage segmentation and classification algorithm based on weighted grayscale histogram feature in a hierarchical classification structure. Biocybern. Biomed. Eng. 2015, 36, 217–232.
18. Al-Ayyoub, M.; Alawad, D.; Al-Darabsah, K.; Aljarrah, I. Automatic detection and classification of brain hemorrhages. Lect. Notes Comput. Sci. 2018, 10752, 417–427.
19. Xiao, C.-C.; Liao, J.; Wong, M.; Chiang, I.J. Automatic diagnosis of intracranial hematoma on brain CT using knowledge discovery techniques: Is finer resolution better? Biomed. Eng. Appl. Basis Commun. 2008, 20, 401–408.
20. Tong, H.; Faizal, M.; Fauzi, A.; Haw, S. Automated Hemorrhage Slices Detection for CT Brain Images. In Proceedings of the International Visual Informatics Conference, Selangor, Malaysia, 9–11 November 2011.
21. Li, Y.H.; Zhang, L.; Hu, Q.M.; Li, H.W.; Jia, F.C.; Wu, J.H. Automatic subarachnoid space segmentation and hemorrhage detection in clinical head CT scans. Int. J. Comput. Assist. Radiol. Surg. 2012, 7, 507–516.
22. Yuh, E.L.; Gean, A.D.; Manley, G.T.; Callen, A.L.; Wintermark, M. Computer-aided assessment of head computed tomography (CT) studies in patients with suspected traumatic brain injury. J. Neurotrauma 2008, 1172, 1163–1172.
23. Diyana, W.M.; Zaki, M.F.; Fauzi, A.; Besar, R.; Ahmad, W.S.H.M.W. Abnormalities detection in serial computed tomography brain images using multi-level segmentation approach. Multimed. Tools Appl. 2011, 54, 321–340.
24. Muschelli, J.; Sweeney, E.M.; Ullman, N.L.; Vespa, P.; Hanley, D.F.; Crainiceanu, C.M. PItcHPERFeCT: Primary Intracranial Hemorrhage Probability Estimation using Random Forests on CT. NeuroImage Clin. 2017, 14, 379–390.
25. Foo, Y.H.; Wong, J.H.D.; Azman, R.R.; Leong, Y.L.; Tan, L.K. Identification of acute intracranial bleed on computed tomography using computer aided detection. J. Phys. Conf. 2020, 1497, 012019.
26. Zhang, Y.; Chen, M.; Hu, Q.; Huang, W. Detection and quantification of intracerebral and intraventricular haemorrhage from computed tomography images with adaptive thresholding and case-based reasoning. Int. J. Comput. Assist. Radiol. Surg. 2013, 8, 917–927.
27. Chilamkurthy, S.; Ghosh, R.; Tanamala, S.; Biviji, M.; Campeau, N.G.; Venugopal, V.K.; Mahajan, V.; Rao, P.; Warier, P. Deep learning algorithms for detection of critical findings in head CT scans: A retrospective study. Lancet 2018, 392, 2388–2396.
28. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromarrtie, R.; Geselowitz, A.; Greer, T.; Romeny, H.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368.
29. Otsu, N. A threshold selection method from gray-level histogram. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
30. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 6, 610–621.
31. Humeau-Heurtier, A. Texture feature extraction methods: A survey. IEEE Access 2019, 7, 8975–9000.
32. Weszka, J.S.; Rosenfield, A. An application of texture analysis to material inspection. Pattern Recognit. 1976, 8, 195–200.
33. Tang, X. Texture information in run-length matrices. IEEE Trans. Image Process. 1998, 7, 1602–1609.
34. Galloway, M.M. Texture classification using gray level run length. Comput. Graph Image Proc. 1975, 4, 172–179.
35. Hu, M.K. Visual pattern recognition by moment invariants. IEEE Trans. Inf. Theory 1962, 8, 179–187.
36. Gornale, S.S.; Patravali, P.U.; Hiremath, P.S. Automatic Detection and Classification of Knee Osteoarthritis Using Hu's Invariant Moments. Front. Robot. AI 2020, 7, 591827.
37. He, H.; Yang, B.; Garcia, E.A.; Li, S.T. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the IEEE International Joint Conference on Neural Networks, Hong Kong, China, 1–6 June 2008.
38. Tamimi, E.; Ebadi, H.; Kiani, A. Evaluation of different metaheuristic optimization algorithms in feature selection and parameter determination in SVM classification. Arab. J. Geosci. 2017, 10, 478.
39. Agrawal, P.; Abutarboush, H.F.; Ganesh, T.; Mohamed, A.W. Metaheuristic Algorithms on Feature Selection: A Survey of One Decade of Research (2009–2019). IEEE Access 2021, 9, 26766–26791.
40. Arora, S.; Singh, H.; Sharma, M.; Sharma, S.; Anand, P. A new hybrid algorithm based on grey wolf optimization and crow search algorithm for unconstrained function optimization and feature selection. IEEE Access 2019, 7, 26343–26361.
41. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Proceedings of the 2010 International Workshop on Nature Inspired Cooperative Strategies for Optimization, Granada, Spain, 12–14 May 2010.
42. Perwaiz, U.; Younas, I.; Anwar, A.A. Many-objective BAT algorithm. PLoS ONE 2020, 15, e0234625.
43. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
44. Wang, J.S.; Li, S.X. An improved grey wolf optimizer based on differential evolution and elimination mechanism. Sci. Rep. 2019, 9, 7181.
45. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
46. Rana, N.; Abd Latiff, M.S.; Chiroma, H. Whale optimization algorithm: A systematic review of contemporary applications, modifications and developments. Neural Comput. Appl. 2020, 32, 1–33.
47. Paul, K.; Kumar, N.; Dalapati, P. Bat Algorithm for Congestion Alleviation in Power System Network. Technol. Econ. Smart Grids Sustain. 2021, 6, 1–18.
48. Koryshev, N.; Hodashinsky, I.; Shelupanov, A. Building a Fuzzy Classifier Based on Whale Optimization Algorithm to Detect Network Intrusions. Symmetry 2021, 13, 1211.
49. Larose, D.T. Discovering Knowledge in Data: An Introduction to Data Mining; Wiley-Interscience: Hoboken, NJ, USA, 2004.
50. Kecman, D.V. Learning and Soft Computing: Support Vector Machines, Neural Networks, and Fuzzy Logic Models; MIT Press: Cambridge, MA, USA, 2001.
Figure 1. Sample CT images used to conduct the study.
Figure 2. Outline of the proposed technique.
Figure 3. Sample CT images after pre-processing.
Figure 4. Boxplots obtained using the first ranked feature before and after applying Grey Wolf Optimization: (a,b) GLCM and Hu's invariant moments; (c,d) GLRLM and statistical features.
Figure 5. ROC curves (AUC) for the various classifiers used in the proposed approach: (a) weighted KNN, (b) fine KNN, (c) optimizable KNN, (d) wide NN, (e) cubic SVM.
Figure 6. Proposed IoT-based architecture for ICH classification.
Table 1. Number of samples before and after applying ADASYN.
Feature Extraction Scheme | No. of Samples before ADASYN | No. of Samples after ADASYN
GLRLM + statistical features | 831 | 946
GLCM | 831 | 831
Hu's invariant moments | 831 | 831
GLRLM + statistical features + GLCM | 831 | 831
GLRLM + statistical features + Hu's invariant moments | 831 | 831
GLCM + Hu's invariant moments | 831 | 831
GLRLM + statistical features + GLCM + Hu's invariant moments | 831 | 831
Table 2. Number of features extracted using various feature extraction schemes.
Feature Extraction Scheme | No. of Extracted Features
GLRLM + statistical features | 224
GLCM | 368
Hu's invariant moments | 112
GLRLM + statistical features + GLCM | 592
GLRLM + statistical features + Hu's invariant moments | 336
GLCM + Hu's invariant moments | 480
GLRLM + statistical features + GLCM + Hu's invariant moments | 704
Table 3. Performance of various classifiers using GLRLM and statistical features.
Classifier | Optimization Technique | Fold | Accuracy | Error Rate | Sensitivity | Specificity | TN | FP | FN | TP
Wide NN | Grey Wolf Version 1 | 10 | 90.60% | 9.40% | 90.80% | 90.40% | 904 | 96 | 87 | 859
Fine KNN | Grey Wolf Version 1 | 10 | 95.07% | 4.93% | 97.57% | 92.70% | 927 | 73 | 23 | 923
Weighted KNN | Grey Wolf Version 1 | 10 | 92.14% | 7.86% | 95.77% | 88.70% | 887 | 113 | 40 | 906
Optimizable KNN | Grey Wolf Version 1 | 7 | 95.74% | 4.26% | 96.93% | 94.67% | 994 | 56 | 29 | 917
Cubic SVM | Grey Wolf Version 1 | 10 | 92.29% | 7.71% | 92.49% | 92.10% | 921 | 79 | 71 | 875
Table 4. Performance of various classifiers using GLCM features.
Classifier | Optimization Technique | Fold | Accuracy | Error Rate | Sensitivity | Specificity | TN | FP | FN | TP
Wide NN | Bat | 10 | 90.11% | 9.89% | 89.17% | 90.90% | 909 | 91 | 90 | 741
Fine KNN | Whale | 10 | 92.30% | 7.70% | 91.34% | 93.10% | 931 | 69 | 72 | 759
Weighted KNN | Grey Wolf Version 2 | 10 | 88.97% | 11.03% | 84.96% | 92.30% | 923 | 77 | 125 | 706
Optimizable KNN | Whale | 10 | 92.57% | 7.43% | 91.34% | 93.60% | 936 | 64 | 72 | 759
Cubic SVM | Bat | 10 | 90.88% | 9.12% | 88.57% | 92.80% | 928 | 72 | 95 | 736
Table 5. Performance of various classifiers using Hu's invariant moments.
Classifier | Optimization Technique | Fold | Accuracy | Error Rate | Sensitivity | Specificity | TN | FP | FN | TP
Wide NN | Grey Wolf Version 1 | 10 | 83.23% | 16.77% | 80.99% | 85.10% | 851 | 149 | 158 | 673
Fine KNN | Whale | 10 | 85.69% | 14.31% | 83.15% | 87.80% | 878 | 122 | 140 | 691
Weighted KNN | Whale | 5 | 80.23% | 19.77% | 71.00% | 87.90% | 879 | 121 | 241 | 590
Optimizable KNN | Whale | 10 | 89.13% | 10.87% | 87.36% | 90.60% | 906 | 94 | 105 | 726
Cubic SVM | Whale | 10 | 76.84% | 23.16% | 59.69% | 91.10% | 911 | 89 | 335 | 496
Table 6. Performance of various classifiers using GLRLM, statistical features, and Hu's invariant moments.
Classifier | Optimization Technique | Fold | Accuracy | Error Rate | Sensitivity | Specificity | TN | FP | FN | TP
Wide NN | Bat | 10 | 89.46% | 10.54% | 87.97% | 90.70% | 907 | 93 | 100 | 731
Fine KNN | Grey Wolf Version 2 | 10 | 93.06% | 6.94% | 91.22% | 94.60% | 946 | 54 | 73 | 758
Weighted KNN | Whale | 10 | 87.55% | 12.45% | 84.24% | 91.00% | 728 | 72 | 131 | 700
Optimizable KNN | Grey Wolf Version 1 | 7 | 93.77% | 6.23% | 92.18% | 95.10% | 951 | 49 | 65 | 766
Cubic SVM | Grey Wolf Version 1 | 7 | 91.26% | 8.74% | 88.93% | 93.20% | 932 | 68 | 92 | 739
Table 7. Performance of various classifiers using GLRLM, statistical features, and GLCM features.
Classifier | Optimization Technique | Fold | Accuracy | Error Rate | Sensitivity | Specificity | TN | FP | FN | TP
Wide NN | Grey Wolf Version 1 | 10 | 90.61% | 9.39% | 89.65% | 91.40% | 914 | 86 | 86 | 745
Fine KNN | Whale | 10 | 93.23% | 6.77% | 91.94% | 94.30% | 943 | 57 | 67 | 764
Weighted KNN | Grey Wolf Version 1 | 10 | 90.39% | 9.61% | 85.68% | 94.30% | 943 | 57 | 119 | 712
Optimizable KNN | Whale | 10 | 93.66% | 6.34% | 91.46% | 95.50% | 955 | 45 | 71 | 760
Cubic SVM | Grey Wolf Version 1 | 10 | 91.75% | 8.25% | 89.41% | 93.70% | 937 | 63 | 88 | 743
Table 8. Performance of various classifiers using GLCM and Hu's invariant moments.
Classifier | Optimization Technique | Fold | Accuracy | Error Rate | Sensitivity | Specificity | TN | FP | FN | TP
Wide NN | Grey Wolf Version 1 | 10 | 90.82% | 9.18% | 90.01% | 91.50% | 915 | 85 | 83 | 748
Fine KNN | Bat | 7 | 91.26% | 8.74% | 89.17% | 93.00% | 930 | 70 | 90 | 741
Weighted KNN | Bat | 10 | 89.68% | 10.32% | 84.48% | 94.00% | 940 | 60 | 129 | 702
Optimizable KNN | Grey Wolf Version 1 | 7 | 92.63% | 7.37% | 91.10% | 93.90% | 939 | 61 | 74 | 757
Cubic SVM | Bat | 10 | 90.72% | 9.28% | 88.09% | 92.90% | 929 | 71 | 99 | 732
Table 9. Performance of various classifiers using GLRLM, statistical features, GLCM, and Hu's invariant moments.
Classifier | Optimization Technique | Fold | Accuracy | Error Rate | Sensitivity | Specificity | TN | FP | FN | TP
Wide NN | Grey Wolf Version 2 | 10 | 90.55% | 9.45% | 88.81% | 92.00% | 920 | 80 | 93 | 738
Fine KNN | Bat | 10 | 92.95% | 7.05% | 90.97% | 94.60% | 946 | 54 | 75 | 756
Weighted KNN | Whale | 10 | 90.01% | 9.99% | 85.56% | 93.70% | 937 | 63 | 120 | 711
Optimizable KNN | Bat | 10 | 93.56% | 6.44% | 91.82% | 95.00% | 950 | 50 | 68 | 763
Cubic SVM | Whale | 10 | 91.48% | 8.52% | 89.53% | 93.10% | 931 | 69 | 87 | 744
Table 10. The maximum performance of each classifier in our approach.
Classifier | Feature Extraction Scheme | Optimization Technique | Fold | Accuracy | Error Rate | Sensitivity | Specificity | TN | FP | FN | TP
Wide NN | GLCM + Hu's invariant moments | Grey Wolf Version 1 | 10 | 90.82% | 9.18% | 90.01% | 91.50% | 915 | 85 | 83 | 748
Fine KNN | GLRLM + statistical features | Grey Wolf Version 1 | 10 | 95.07% | 4.93% | 97.57% | 92.70% | 927 | 73 | 23 | 923
Weighted KNN | GLRLM + statistical features | Grey Wolf Version 1 | 10 | 92.14% | 7.86% | 95.77% | 88.70% | 887 | 113 | 40 | 906
Optimizable KNN | GLRLM + statistical features | Grey Wolf Version 1 | 7 | 95.74% | 4.26% | 96.93% | 94.67% | 994 | 56 | 29 | 917
Cubic SVM | GLRLM + statistical features | Grey Wolf Version 1 | 10 | 92.29% | 7.71% | 92.49% | 92.10% | 921 | 79 | 71 | 875
Table 11. Performance comparison of different techniques.
Approach | CT Dataset | Method | Classifier | Performance
Raghavendra et al. [15] | 1603 | Entropy-based non-linear features | PNN | Acc: 97.37%
Shahangian and Pourghassem [17] | 627 | Modified Distance Regularized Level Set Evolution (MDRLSE), texture and shape features | Hierarchical structure | Acc: 94.13%
Al-Ayyoub et al. [18] | 76 | Region growing | Logistic | Acc: 92%
Xiao et al. [19] | 48 | Multi-resolution thresholding + region growing + primary and derived features based on long and short axes | C4.5 | Acc: 0.975
Tong et al. [20] | 450 | LBP texture features and histogram features | SVM | Acc: 90%
Li et al. [21] | 129 | Distance features based on landmarks | Bayesian | Sen: 100%
Yuh et al. [22] | 273 | Thresholding, spatial filtering, cluster analysis, and classification based on location, size, and shape of clusters | - | Sen: 98%
Zaki et al. [23] | 720 | FCM + multi-level thresholding + location and intensity features | - | Sen: 82.5%
Muschelli et al. [24] | 112 | Intensity-based predictors | Random forest | DSI: 0.899
Foo et al. [25] | 108 | Multiple thresholding and symmetry detection | - | Acc: 80.6%
Zhang et al. [26] | 426 | Adaptive thresholding and case-based reasoning | Genetic algorithm | Detection rate: 94.9%
Our approach | 1831 | GLRLM and statistical features | Optimizable KNN | Acc: 95.74%; Sen: 96.93%; Spec: 94.67%