Article

Fractured Elbow Classification Using Hand-Crafted and Deep Feature Fusion and Selection Based on Whale Optimization Approach

1 Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
2 Department of Computer Science, University of Wah, Wah Cantt 47040, Pakistan
3 Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
4 Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(18), 3291; https://doi.org/10.3390/math10183291
Submission received: 27 July 2022 / Revised: 18 August 2022 / Accepted: 1 September 2022 / Published: 10 September 2022
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)

Abstract

The fracture of the elbow is common in human beings. The complex structure of the elbow, including its irregular shape and border, makes elbow fractures difficult to recognize correctly. To address these challenges, a two-phase method is proposed in this work. In Phase I, pre-processing is performed, in which images are converted into RGB. In Phase II, the pre-trained convolutional models DarkNet-53 and Xception are used for deep feature extraction. Handcrafted features, namely the histogram of oriented gradients (HOG) and local binary pattern (LBP), are also extracted from the input images. Principal component analysis (PCA) is used to select the best features, which are serially merged into a single feature vector of length N×2125. Furthermore, N×1049 informative features are selected out of the N×2125 using the whale optimization approach (WOA) and supplied to SVM, KNN, and wide neural network (WNN) classifiers. The proposed method's performance is evaluated on 16,984 elbow X-ray radiographs taken from the publicly available musculoskeletal radiographs (MURA) dataset. The proposed technique provides 97.1% accuracy and a kappa score of 0.943 for the classification of elbow fractures. The obtained results are compared to the most recently published approaches on the same benchmark dataset.

1. Introduction

Bone fractures in children are common, accounting for 10–25 percent of all injuries in children under the age of 18 [1]. Growing bones are more sensitive to ionizing radiation, to which both CT scans and X-rays expose children [2]. As a result, alternate imaging modalities, such as ultrasound, are used to limit radiation exposure [3]. Recent research has revealed the wide and effective applicability of ultrasound for identifying fractures in children [4]. Although ultrasonography is operator-dependent, it is more sensitive than other modalities such as X-ray and CT for diagnosing limb fractures in pediatric patients [5]. Ultrasonography can be performed accurately and reliably by medical doctors [6]. It can identify cortical rupture or irregularity, both of which are signs of fractures. An elevated posterior fat pad, on the other hand, is easily identified by ultrasonography and may reveal intra-articular injuries [7]. Ultrasonography outperforms radiography for diagnosing posterior fat pad elevation [8]. Image processing has an important and growing role in the healthcare industry because of technical innovation and software advancements. It plays a significant part in diagnosing disease and helping doctors determine the course of treatment. Among the many issues related to human health, fractures in different body parts have their own importance [9]. The main challenge is to localize the fracture exactly in X-ray images, and it is difficult for radiologists to detect fractures manually and accurately [10]. Various edge detection techniques have been suggested for the segmentation of X-ray images, with better results obtained using the Canny edge detector [11,12]. Billions of people are affected by musculoskeletal fractures [13,14,15,16]. The stacked random forests feature fusion method was applied to generalized fractured bones in 2015 and proved more successful than other fracture detection systems. Fractures in children's elbows are increasingly frequent but are not always demonstrated accurately by X-rays; this inadequate accuracy makes it hard for radiologists to identify such fractures [17].
A dual-mode CNN model has been used in which features are extracted from transfer learning models [18], and the trained weights are concatenated into a single layer and passed to the classifier to discriminate between fractured/un-fractured elbow X-ray images with 0.889 accuracy; its classification accuracy still needs improvement through feature optimization [19]. In another study, elbow X-rays were pre-processed using histogram equalization, and YOLOv4 was applied for the detection of the fractured elbow region, achieving a detection score of 0.81 [20]. A pre-trained VGG-16 was used for the classification of fractured elbow X-ray images; its performance might be improved by applying generative adversarial networks and vision transformers [21]. Augmentation and histogram equalization have been applied to enhance image contrast, with ResNet used as the backbone of AC-BiFPN for detecting the fractured elbow region; this method gave a detection score of 0.68 [17]. The Mask R-CNN model has also been applied, in which FPN and ResNet were used for the detection of the fractured elbow region, providing a precision rate of 0.85 [22]. The existing methods need improvement for accurate classification using optimized feature selection techniques.
In the existing literature, most of the work has been carried out for the detection of elbow fractures; however, a gap remains because of the complex elbow structure, with its irregular shape and borders. Prominent feature selection is another key problem for accurate fracture classification, and the weak similarity among fracture types makes the classification process more complex and challenging [23]. To address these limitations, a method is proposed in this research based on the extraction and fusion of hand-crafted and deep features with two optimum feature selection stages, which provides significant improvement. The main contribution is as follows:
The X-ray images are converted into RGB color space. A data augmentation method is applied in which the X-ray images are flipped vertically and horizontally to increase the number of images. Important features are then selected from the extracted descriptors using PCA and fused serially. The fused feature vector is further refined using the improved WOA and passed as input to classifiers such as SVM, WNN, and KNN to discriminate between healthy and fractured elbow X-ray images.
This paper contains five sections. Section 2 reviews recent existing work on elbow fractures. Section 3 elaborates on the steps of the proposed method, whereas Section 4 presents the results and discussion. Finally, the conclusion is drawn in Section 5.

2. Related Work

The accurate detection of fractures in bones depends on image quality [24]. Informative feature extraction is a great challenge for the accurate classification of fracture images. In the literature, an extensive amount of work has been conducted for the detection/classification of elbow fractures, among which recent work is discussed in this section. In one three-phase approach, an anisotropic filter is applied for noise reduction and the Hough transform is applied for the detection of bone edges; a marker-controlled watershed method is then applied for segmentation; and finally, the angle is measured between the fracture line and the center of the perpendicular line. Conventional feature extraction methods might be useful for diagnosing features at a specific site, but deep features consider the whole fractured part of the image instead of only the specific affected part. A deep CNN was used to detect traumatic pediatric elbow joint effusion in [25]. The Inception-v3 network was re-trained by [26] using side wrist radiographs to develop a model for determining fractures, and it has also been employed for hip fracture detection [27]. A basic binary classification model using DenseNet with 169 layers was trained on the MURA dataset of musculoskeletal X-rays, comprising 40,895 radiographs [28]. A new CNN model [29,30,31,32,33,34,35,36,37,38,39,40,41] was established for fracture identification, achieving 82.1% accuracy. A broader U-Net architecture has been utilized for the detection of elbow bone fractures [42]. Transfer learning models [33,43,44,45,46,47,48,49,50,51,52,53], i.e., AlexNet and GoogleNet, were applied for the detection of femoral neck fractures in X-rays and attained 94.4% accuracy [54]. The SURF system was employed for the classification of calcaneus fracture position in CT images [55]. Pre-trained R-CNN and Inception-v4 models were utilized for the identification of distal radial fractures [56]. A CNN model has been trained for the detection of wrist fractures in posteroanterior and lateral radiographs [57]. Automatic classification of osteoporotic vertebral fractures was investigated by [58]. A novel CNN model has been developed for the automatic classification of thoracolumbar fractures [59,60]. A CNN model has been utilized with RF classifiers for the analysis of fractured elbow bones, achieving 95% prediction scores [61]. The authors of another study measured the diagnostic performance of ultrasound in the identification of fractured elbow bones, with an AUC of 96% [62]. An elbow fracture detection model was employed on the challenging MURA dataset; such neural networks learn directly through hierarchical feature extraction from images [63]. The transfer learning MobileNet model was used for fractured elbow detection in X-ray images, obtaining 0.84 accuracy on the MURA dataset; the visualization results of the fractured elbow were observed using Grad-CAM [64]. Data augmentation was applied to increase the number of X-ray input images, and deep features were then derived using ResNet and AC-BiFPN models; the results were computed on the MURA dataset, obtaining 68.4% AP [17]. An attention method, an R-CNN module, and YOLOv5 were used for the detection of a fractured elbow region, obtaining 0.71 mAP with the attention module, 0.55 mAP with R-CNN, and 0.44 mAP with YOLOv5 [65].
In another study, a transfer learning model and a deep learning model trained from scratch were both used for the classification of positive and negative elbow X-ray images; it was observed that the transfer learning model performed better in overcoming the over-fitting problem [66]. Elsewhere, a U-network was used for the segmentation of the MURA dataset, providing dice scores of 95.95% during training and 90.29% during testing [67]. CNN models such as Inception-ResNet-v2, VGG-16, ResNet-50/101/18, Inception-v3, AlexNet, SqueezeNet, GoogleNet, and DenseNet-201 have been used for the classification of normal/abnormal elbow images using the MURA dataset; the Inception-ResNet-v2 architecture provided a mean accuracy of 0.723 and a mean kappa of 0.506 [68]. An MSCNN model has also been fused with a GCN for classification, and its performance was computed by comparing it with three pre-trained models, DenseNet-169, MSCNN, and CapsNet; this model gave a confidence interval of 0.90 [69].

3. Proposed Methodology

In this research, pre-processing, feature extraction, fusion, and selection of informative features using WOA are performed for fractured elbow X-ray images, as presented in Figure 1. In the pre-processing phase, images with dimensions of 512 × 456 are resized to 256 × 256 and converted into 256 × 256 × 3 RGB color space. HOG and LBP features are extracted, and deep features are derived from the pre-trained DarkNet-53 and Xception models. The extracted features are then serially fused, and the best features are selected by PCA. Following PCA, the feature vector dimension is N×2125, which is passed to WOA with optimal parameters for the selection of the most relevant features; these are supplied as input to three families of classifiers: neural network (WNN), SVM, and nearest neighbor (KNN) models.
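As a concrete illustration of this pre-processing stage (the original work was implemented in MATLAB; the Python/OpenCV sketch below is a stand-in under stated assumptions, not the authors' code):

```python
import cv2

def preprocess(path):
    """Resize an elbow X-ray to 256x256 and replicate the single channel
    into the 256x256x3 RGB layout described above."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)     # radiographs are single-channel
    gray = cv2.resize(gray, (256, 256))               # e.g., 512x456 -> 256x256
    return cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)     # stack into 3 identical channels

def augment(img):
    """Horizontal and vertical flips used to enlarge the training set."""
    return [img, cv2.flip(img, 1), cv2.flip(img, 0)]  # original, h-flip, v-flip
```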

3.1. Feature Extraction Method

Feature extraction plays an important role in classification, in which images are represented numerically [70]. In this research, hand-crafted features are extracted, including texture- and shape-based features, i.e., LBP and HOG, and feature learning is performed using pre-trained models, i.e., DarkNet-53 and Xception, for the classification of fractured/healthy elbow images.

3.1.1. Local Binary Pattern (LBP)

LBP is a local texture descriptor that is robust to monotonic illumination changes and has a low computational cost [71]. It is known as a simple yet powerful local texture operator: for every pixel, the LBP operator uses the center pixel as a threshold and encodes the comparisons with the neighboring pixels as binary values. In this work, N×45 LBP features are selected out of N×59 using PCA based on the maximum scores.
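A minimal sketch of this descriptor, assuming scikit-image's `local_binary_pattern` with the non-rotation-invariant uniform mapping (which yields exactly the 59 bins mentioned above) and interpreting the PCA step as keeping 45 components:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_histogram(gray, P=8, R=1):
    """59-bin uniform LBP histogram for one grayscale image (one row
    of the N x 59 matrix described above)."""
    codes = local_binary_pattern(gray, P, R, method="nri_uniform")
    hist, _ = np.histogram(codes, bins=59, range=(0, 59), density=True)
    return hist

# X_lbp is the (N, 59) stack of histograms; PCA keeps the 45 components
# that are carried into the fusion stage.
# X_lbp45 = PCA(n_components=45).fit_transform(X_lbp)
```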

3.1.2. Histogram of Oriented Gradient (HOG)

HOG is a shape-based feature extraction method that can be utilized for gradient-based object detection [72]. The basic concept behind HOG is to describe the presence and shape of a local object within the image by its intensity gradients or edges. In the proposed method, images are divided into small regions known as cells, and a pixel histogram is computed for each cell; N×80 HOG features are then selected out of N×3186 using PCA.
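A comparable sketch for HOG, using scikit-image; the paper does not state the cell and block sizes that produce its N×3186 raw vector, so the values below are placeholders:

```python
from skimage.feature import hog

def hog_descriptor(gray):
    """Concatenated cell-wise gradient-orientation histograms; PCA later
    keeps 80 of the raw HOG dimensions for fusion."""
    return hog(gray,
               orientations=9,            # gradient-direction bins per cell
               pixels_per_cell=(16, 16),  # placeholder; not given in the paper
               cells_per_block=(2, 2),
               block_norm="L2-Hys")
```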

3.2. Deep Feature Extraction Using Fully Connected Layers

In the proposed method, deep features are extracted using two pre-trained models, DarkNet-53 and Xception. The DarkNet-53 model contains 184 layers [73,74,75]: 1 input layer, 53 convolution layers, 52 batch-normalization layers, 52 ReLU layers, 23 addition layers, 1 global average pooling layer, a SoftMax layer, and a classification output layer. In this work, features of length N×1000 are extracted from the fully connected layer named conv53 of DarkNet-53 and supplied as input to the classifiers. The Xception architecture has 170 layers [76]: 1 input layer, 40 convolutional layers, 40 batch-normalization layers, 35 ReLU layers, 34 grouped convolutional layers, a SoftMax layer, and 1 average pooling and classification layer. The input format of Xception is an RGB image with dimensions of 299 × 299, and its depth is 126. From the average pooling layer, N×1000 features out of N×1024 are used in this study.
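For illustration, pooled activations of a pre-trained Xception can be obtained as below (a Keras sketch, not the authors' MATLAB pipeline; DarkNet-53 features would be pulled analogously from its conv53 layer). Note that stock Keras Xception pools to a width of 2048, whereas the text reports N×1000 features per network:

```python
import numpy as np
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input

# Pre-trained backbone without the classification head; global average
# pooling maps each image to a fixed-length activation vector.
backbone = Xception(weights="imagenet", include_top=False, pooling="avg")

def deep_features(batch):
    """batch: (N, 299, 299, 3) array of X-rays resized to Xception's input.
    Returns (N, 2048) pooled activations."""
    return backbone.predict(preprocess_input(batch.astype(np.float32)))
```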

3.3. Serial Feature Fusion/Optimum Features Selection

The feature fusion process is utilized to improve classification outcomes by combining better features selected from different descriptors. The collected useful information is combined serially or in parallel to create a single fused feature vector [77,78]. In this study, texture- and shape-based features, as well as deep features, are extracted and merged serially to generate a single feature vector. WOA is then used to pick prominent features, which are fed into classifiers such as neural networks, SVM, and KNN, as presented in Figure 2.
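Serial fusion is plain column-wise concatenation; with the block widths reported in this paper the arithmetic checks out, since 45 (LBP) + 80 (HOG) + 1000 (DarkNet-53) + 1000 (Xception) = 2125. The variable names below are hypothetical:

```python
import numpy as np

# Column-wise (serial) fusion of the PCA-reduced feature blocks,
# producing the N x 2125 vector passed on to WOA.
X_fused = np.concatenate([X_lbp45, X_hog80, X_dark1000, X_xcep1000], axis=1)
```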

3.4. Features Selection

WOA is a meta-heuristic approach [79,80] commonly employed in a variety of disciplines and fields, including engineering, due to its simple structure and ease of application [81,82]. Therefore, in this research, the N×2125 feature vector is passed as input to WOA for informative feature selection. Table 1 shows the optimized WOA parameters.
Table 1 lists the WOA parameters: 10 solutions, 100 training iterations, a threshold of 0.5, a lower bound of 0, and an upper bound of 1 are used for the selection of optimum features. The convergence plot of the WOA method with these optimum parameters is shown in Figure 3.
Figure 3 shows the fitness value as a function of the number of iterations. After 85 training iterations, the flat pink line shows that the model has reached the convergence point, where the error rate is minimized; the best N×1049 features are selected out of the N×2125 features and used for the classification of fractured elbow X-ray images.
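The paper does not publish its WOA implementation; the sketch below is a minimal binary wrapper-style WOA under the Table 1 settings, with a cross-validated 1-NN error as an assumed fitness function (the actual fitness used by the authors is not specified):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y):
    """Wrapper fitness: cross-validated error of a 1-NN on the masked columns."""
    if not mask.any():
        return 1.0
    return 1.0 - cross_val_score(KNeighborsClassifier(1), X[:, mask], y, cv=3).mean()

def binary_woa(X, y, n_whales=10, n_iter=100, thr=0.5, lb=0.0, ub=1.0, seed=0):
    """Binary whale optimization with the Table 1 settings; positions stay
    continuous in [lb, ub] and are thresholded at thr to form feature masks."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.uniform(lb, ub, (n_whales, dim))
    best, best_fit = None, np.inf
    for p in pos:
        f = fitness(p > thr, X, y)
        if f < best_fit:
            best, best_fit = p.copy(), f
    for t in range(n_iter):
        a = 2.0 * (1 - t / n_iter)                   # encircling coefficient: 2 -> 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:                   # encircling / exploration branch
                if np.abs(A).mean() < 1:             # exploit around the best whale
                    pos[i] = best - A * np.abs(C * best - pos[i])
                else:                                # explore around a random whale
                    rnd = pos[rng.integers(n_whales)]
                    pos[i] = rnd - A * np.abs(C * rnd - pos[i])
            else:                                    # spiral bubble-net update (b = 1)
                l = rng.uniform(-1, 1, dim)
                pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            pos[i] = np.clip(pos[i], lb, ub)
            f = fitness(pos[i] > thr, X, y)
            if f < best_fit:
                best, best_fit = pos[i].copy(), f
    return best > thr                                # boolean mask of kept features

# Usage: mask = binary_woa(X_fused, y); X_selected = X_fused[:, mask]
```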

3.5. Classification of Elbow Fracture

The classification of an elbow fracture is carried out using WNN, SVM, and KNN classifiers. The classifiers are trained with the selected kernels and parameters mentioned in Table 2.
Table 2 describes the classifiers used for elbow fracture classification. In this work, WNN is utilized with 1 FC layer of size 100, ReLU activation, and an iteration limit of 1000. The SVM classifier is used with a cubic kernel, an automatic kernel scale, and a box constraint level of 1. The KNN classifier is employed with one neighbor, the Euclidean distance metric, and equal weights. The selected classifiers are trained with five- and ten-fold cross-validation.
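As an illustration, the Table 2 configurations map onto scikit-learn estimators roughly as follows (the authors used MATLAB classifiers, so these are approximate stand-ins, not an exact reproduction):

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

models = {
    "Fine KNN":  KNeighborsClassifier(n_neighbors=1, metric="euclidean"),
    "Cubic SVM": SVC(kernel="poly", degree=3, C=1.0, gamma="scale"),
    "WNN":       MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                               max_iter=1000),
}

for name, model in models.items():
    pred = cross_val_predict(model, X_selected, y, cv=10)  # ten-fold validation
    print(f"{name}: acc={accuracy_score(y, pred):.3f}, "
          f"kappa={cohen_kappa_score(y, pred):.3f}")
```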

4. Results and Discussion

The suggested method's performance was assessed using the MURA dataset, which was constructed by the Stanford Machine Learning Group. The MURA dataset contains 16,984 elbow X-ray images, of which 8648 are positive and 8336 are negative [83,84]. This research work was implemented in MATLAB R2021a on the Windows 10 operating system.

4.1. Experiment #1: Classification of Fractured Elbow Images Using Five-Fold Cross-Validation

The classification of positive/negative elbow images is performed using five- and ten-fold cross-validation. The classification results for the two fractured elbow classes (positive/negative) are represented in Figure 4.
The computed classification outcomes are shown in Table 3.
Table 3 depicts the classification outcomes for five-fold validation using three types of classifiers: KNN, SVM, and a neural network. The Fine KNN classifier provided 95.3% accuracy, 95% precision, 95% F1-score, 95.0% specificity, 95.6% sensitivity, and a kappa score of 0.906, which is higher than the other benchmark classifiers.
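For context, Cohen's kappa corrects the observed agreement $p_o$ for the chance agreement $p_e$. Because the elbow subset is almost balanced (8648 positive vs. 8336 negative images), $p_e \approx 0.5$, so the accuracy and kappa reported in Table 3 are mutually consistent:

$$\kappa = \frac{p_o - p_e}{1 - p_e} \approx \frac{0.953 - 0.5}{1 - 0.5} = 0.906.$$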
The ROC curves for five-fold validation are presented in Figure 5, where a maximum AUC of 0.97 is achieved.

4.2. Experiment #2: Classification of Fractured Elbow Images Using Ten-Fold Cross-Validation

Table 4 and Figure 6 depict the classification results for the positive/negative elbow fracture classes, computed from the FN, FP, TP, and TN measures.
Figure 6 shows the confusion matrices on the benchmark dataset, in which the minimum FN and FP are achieved by the Fine KNN classifier. The quantitative results are given in Table 4.
In Table 4, 97.1% accuracy, 96% precision, 97% F1-score, 96% specificity, 97% sensitivity, and a 0.94 kappa score are obtained with the Fine KNN classifier. Figure 7 shows the ROC curves for the benchmark classifiers.
In Figure 7, a maximum AUC of 0.96 is obtained with cubic SVM compared to the other kernels.

4.3. Comparison of the Results of the Proposed Method

The proposed model results were compared on the same benchmark MURA dataset to authenticate its performance, as shown in Table 5.
In Table 5, Solovyova et al. [85] utilized DenseNet-169 for classification and achieved a kappa score of 0.715 on elbow X-rays. Rajpurkar et al. [84] suggested a CNN based on a 169-layer DenseNet architecture and obtained a kappa score of 0.71 on elbow X-ray images. Banga and Waiganjo [86] used the Ensemble-200 model and obtained a kappa score of 0.617. Six pre-trained models, including DenseNet-121, Inception-v3, MobileNet-v2, ResNet-152, ResNet-50, and VGG-19 with batch normalization, were tested in another study on the MURA dataset, achieving kappa scores of 0.735, 0.718, 0.734, 0.734, 0.738, and 0.761, respectively; VGG-19 with batch normalization provided a higher kappa score than the other models [87]. An ensemble deep model [88] was utilized for fractured elbow X-ray images and achieved a kappa score of 0.79. MSDNet was used for the classification of normal/abnormal elbow X-ray images on the MURA dataset, obtaining an accuracy of 0.826 [89].
The existing literature does not use a feature optimization approach for the classification of fractured elbow X-ray images. This research aims to overcome this limitation and improve classification accuracy. In the proposed method, data augmentation is applied to increase the number of images; hand-crafted and deep features are extracted from the elbow X-ray images; the extracted features are selected using PCA and fused serially; and prominent features are then selected using WOA, with the selected feature vector passed to the classifiers for the classification of fractured/normal elbow X-ray images.
This research investigates hand-crafted and deep features, with the best features selected using PCA and fused serially. The selection of informative features using WOA before classification allows for discrimination between normal and abnormal elbow X-ray images. The experimental analysis shows that the proposed model significantly improves the classification results.
The proposed method only classifies the fractured elbow. In the future, this method might be extended for the classification of fractured shoulders, arms, fingers, hands, etc.

5. Conclusions

The accurate detection of a fractured elbow in X-rays is a difficult task because of the complex structure of the elbow, including its irregular shape and border. To handle these challenges, a method is proposed in this work in which input images are converted into RGB color space and data augmentation is applied to increase the number of images. Deep features are then extracted using Xception and DarkNet-53, and texture (LBP) and shape-based (HOG) features are extracted from the X-ray images. The score-based features are selected using PCA and serially fused into a vector of dimension N×2125. WOA is then applied with optimum parameters to select N×1049 features out of the N×2125, which are supplied to the SVM, WNN, and KNN classifiers. The proposed classification model's performance is evaluated on the challenging MURA dataset, obtaining an accuracy of 97.1% with a kappa score of 0.943. A comparison with the most recently published work confirms that the proposed method yields better results.
The performance of the proposed method was evaluated on real data, and the method might be implemented as a real-time application for the classification of fractured elbow images. Such an application would assist radiologists in detecting elbow fractures at an early stage.

Author Contributions

Conceptualization, J.A.; Data curation, S.A.; Formal analysis, M.Y.; Investigation, M.Y. and S.K.; Validation, J.A., M.S. and M.Y.; Writing—original draft, M.S.; Writing—review & editing, S.M. and M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Younes, N.; el Hajj, M.-A.; Bizdikian, A.J.; Gannagé-Yared, M.-H. An epidemiological evaluation of fractures and its determinants among Lebanese schoolchildren: A cross-sectional study. Arch. Osteoporos. 2019, 14, 9. [Google Scholar] [CrossRef] [PubMed]
  2. Damilakis, J.; Adams, J.E.; Guglielmi, G.; Link, T.M. Radiation exposure in X-ray-based imaging techniques used in osteoporosis. Eur. Radiol. 2010, 20, 2707–2714. [Google Scholar] [CrossRef] [PubMed]
  3. Zhou, Y.-F. High intensity focused ultrasound in clinical tumor ablation. World J. Clin. Oncol. 2011, 2, 8. [Google Scholar] [CrossRef]
  4. MacDermid, J.C.; Vincent, J.I.; Kieffer, L.; Kieffer, A.; Demaiter, J.; MacIntosh, S. A survey of practice patterns for rehabilitation post elbow fracture. Open J. Orthop. 2012, 6, 429. [Google Scholar] [CrossRef] [PubMed]
  5. Champagne, N.; Eadie, L.; Regan, L.; Wilson, P. The effectiveness of ultrasound in the detection of fractures in adults with suspected upper or lower limb injury: A systematic review and subgroup meta-analysis. BMC Emerg. Med. 2019, 19, 17. [Google Scholar] [CrossRef]
  6. Mateer, J.R.; Ogata, M.; Kefer, M.P.; Wittmann, D.; Aprahamian, C. Prospective analysis of a rapid trauma ultrasound examination performed by emergency physicians. J. Trauma Acute Care Surg. 1995, 38, 879–885. [Google Scholar]
  7. Azizkhani, R.; Yazdi, Z.H.; Heydari, F. Diagnostic accuracy of ultrasonography for diagnosis of elbow fractures in children. Eur. J. Trauma Emerg. Surg. 2021, 184, 1–8. [Google Scholar] [CrossRef]
  8. Al-Aubaidi, Z.; Torfing, T. The role of fat pad sign in diagnosing occult elbow fractures in the pediatric patient: A prospective magnetic resonance imaging study. J. Pediatr. Orthop. B 2012, 21, 514–519. [Google Scholar] [CrossRef]
  9. Dimitrov, D.V. Medical internet of things and big data in healthcare. Healthc. Inform. Res. 2016, 22, 156–163. [Google Scholar] [CrossRef]
  10. Swathika, B.; Anandhanarayanan, K.; Baskaran, B.; Govindaraj, R. Segmentation, Radius Bone Fracture Detection Using Morphological Gradient Based Image Segmentation. Int. J. Comput. Sci. Inf. Technol. 2015, 6, 1616–1619. [Google Scholar]
  11. Niveaditha, S.S.; Pavithra, V.; Jayashree, R.; Tamilselvi, T. Online diagnosis of X-ray image using FLDA image processing algorithm. In Proceedings of the IRF International Conference, Riyadh, Saudi Arabia, 19–22 October 2014; pp. 76–80. [Google Scholar]
  12. Basha, C.Z.; Padmaja, T.M.; Balaji, G. Automatic X-ray Image Classification System. In Smart Computing and Informatics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 43–52. [Google Scholar]
  13. Sarfo-Walters, R. Knowledge, Attitudes and Practices of Postoperative Pain Assessment and Management among Health Care Practitioners in Cape Coast Metropolis, Ghana. Texila Int. J. Nurs. 2015, 96, 1–13. [Google Scholar]
  14. Amin, J.; Anjum, M.A.; Sharif, A.; Raza, M.; Kadry, S.; Nam, Y. Malaria Parasite Detection Using a Quantum-Convolutional Network. CMC-Comput. Mater. Contin. 2022, 70, 6023–6039. [Google Scholar] [CrossRef]
  15. Amin, J.; Anjum, M.A.; Sharif, M.; Rehman, A.; Saba, T.; Zahra, R. Microscopic segmentation and classification of COVID-19 infection with ensemble convolutional neural network. Microsc. Res. Tech. 2022, 85, 385–397. [Google Scholar] [CrossRef] [PubMed]
  16. Amin, J.; Sharif, M.; Fernandes, S.L.; Wang, S.H.; Saba, T.; Khan, A.R. Breast microscopic cancer segmentation and classification using unique 4-qubit-quantum model. Microsc. Res. Tech. 2022, 85, 1926–1936. [Google Scholar] [CrossRef] [PubMed]
  17. Lu, S.; Wang, S.; Wang, G. Automated universal fractures detection in X-ray images based on deep learning approach. Multimed. Tools Appl. 2022, 164, 1–17. [Google Scholar] [CrossRef]
  18. Jahanbakhshi, A.; Abbaspour-Gilandeh, Y.; Heidarbeigi, K.; Momeny, M. A novel method based on machine vision system and deep learning to detect fraud in turmeric powder. Comput. Biol. Med. 2021, 136, 104728. [Google Scholar] [CrossRef]
  19. Luo, J.; Kitamura, G.; Arefan, D.; Doganay, E.; Panigrahy, A.; Wu, S. Knowledge-Guided Multiview Deep Curriculum Learning for Elbow Fracture Classification. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Strasbourg, France, 27 September 2021; pp. 555–564. [Google Scholar]
  20. Nguyen, H.P.; Hoang, T.P.; Nguyen, H.H. A deep learning based fracture detection in arm bone X-ray images. In Proceedings of the International Conference on Multimedia Analysis and Pattern Recognition (MAPR), Hanoi, Vietnam, 15–16 October 2021; pp. 1–6. [Google Scholar]
  21. Huhtanen, J.T.; Nyman, M.; Doncenco, D.; Hamedian, M.; Kawalya, D.; Salminen, L.; Sequeiros, R.B.; Koskinen, S.K.; Pudas, T.K.; Kajander, S.; et al. Deep learning accurately classifies elbow joint effusion in adult and pediatric radiographs. Sci. Rep. 2022, 12, 11803. [Google Scholar] [CrossRef]
  22. Wei, D.; Wu, Q.; Wang, X.; Tian, M.; Li, B. Accurate Instance Segmentation in Pediatric Elbow Radiographs. Sensors 2021, 21, 7966. [Google Scholar] [CrossRef]
  23. Joseph, D.; Nkubli, F.; Christian, N. Radiation Dose Surveys for Adult Radiography Examinations in two Nigerian Hospitals. In Proceedings of the European Congress of Radiology-ECR, Vienna, Austria, 28 February–4 March 2018. [Google Scholar]
  24. Dai, Q.; Pu, Y.-F.; Rahman, Z.; Aamir, M. Fractional-order fusion model for low-light image enhancement. Symmetry 2019, 11, 574. [Google Scholar] [CrossRef]
  25. England, J.R.; Gross, J.S.; White, E.A.; Patel, D.B.; England, J.T.; Cheng, P.M. Detection of traumatic pediatric elbow joint effusion using a deep convolutional neural network. Am. J. Roentgenol. 2018, 211, 1361–1368. [Google Scholar] [CrossRef]
  26. Kim, D.; MacKinnon, T. Artificial intelligence in fracture detection: Transfer learning from deep convolutional neural networks. Clin. Radiol. 2018, 73, 439–445. [Google Scholar] [CrossRef] [PubMed]
  27. Badgeley, M.A.; Zech, J.R.; Oakden-Rayner, L.; Glicksberg, B.S.; Liu, M.; Gale, W.; McConnell, M.V.; Percha, B.; Snyder, T.M.; Dudley, J.T. Deep learning predicts hip fracture using confounding patient and healthcare variables. NPJ Digit. Med. 2019, 2, 31. [Google Scholar] [CrossRef]
  28. Rajpurkar, P.; Irvin, J.; Bagul, A.; Ding, D.; Duan, T.; Mehta, H.; Yang, B.; Zhu, K.; Laird, D.; Ball, R.L.; et al. Mura dataset: Towards radiologist-level abnormality detection in musculoskeletal radiographs. arXiv 2018, arXiv:1712.06957v2. [Google Scholar]
  29. Guan, B.; Yao, J.; Zhang, G.; Wang, X. Thigh fracture detection using deep learning method based on new dilated convolutional feature pyramid network. Pattern Recognit. Lett. 2019, 125, 521–526. [Google Scholar] [CrossRef]
  30. Raza, M.; Sharif, M.; Yasmin, M.; Khan, M.A.; Saba, T.; Fernandes, S.L. Appearance based pedestrians’ gender recognition by employing stacked auto encoders in deep learning. Future Gener. Comput. Syst. 2018, 88, 28–39. [Google Scholar] [CrossRef]
  31. Liaqat, A.; Khan, M.A.; Shah, J.H.; Sharif, M.; Yasmin, M.; Fernandes, S.L. Automated ulcer and bleeding classification from WCE images using multiple features fusion and selection. J. Mech. Med. Biol. 2018, 18, 1850038. [Google Scholar] [CrossRef]
  32. Naqi, S.; Sharif, M.; Yasmin, M.; Fernandes, S.L. Lung nodule detection using polygon approximation and hybrid features from CT images. Curr. Med. Imaging 2018, 14, 108–117. [Google Scholar] [CrossRef]
  33. Amin, J.; Sharif, M.; Yasmin, M.; Fernandes, S.L. Big data analysis for brain tumor detection: Deep convolutional neural networks. Future Gener. Comput. Syst. 2018, 87, 290–297. [Google Scholar] [CrossRef]
  34. Ansari, G.J.; Shah, J.H.; Yasmin, M.; Sharif, M.; Fernandes, S.L. A novel machine learning approach for scene text extraction. Future Gener. Comput. Syst. 2018, 87, 328–340. [Google Scholar] [CrossRef]
  35. Bokhari, S.T.F.; Sharif, M.; Yasmin, M.; Fernandes, S.L. Fundus image segmentation and feature extraction for the detection of glaucoma: A new approach. Curr. Med. Imaging 2018, 14, 77–87. [Google Scholar] [CrossRef]
  36. Jain, V.K.; Kumar, S.; Fernandes, S.L. Extraction of emotions from multilingual text using intelligent text processing and computational linguistics. J. Comput. Sci. 2017, 21, 316–326. [Google Scholar] [CrossRef]
  37. Fernandes, S.L.; Gurupur, V.P.; Lin, H.; Martis, R.J. A novel fusion approach for early lung cancer detection using computer aided diagnosis techniques. J. Med. Imaging Health Inform. 2017, 7, 1841–1850. [Google Scholar] [CrossRef]
  38. Raja, N.; Rajinikanth, V.; Fernandes, S.L.; Satapathy, S.C. Segmentation of breast thermal images using Kapur’s entropy and hidden Markov random field. J. Med. Imaging Health Inform. 2017, 7, 1825–1829. [Google Scholar] [CrossRef]
  39. Rajinikanth, V.; Madhavaraja, N.; Satapathy, S.C.; Fernandes, S.L. Otsu’s multi-thresholding and active contour snake model to segment dermoscopy images. J. Med. Imaging Health Inform. 2017, 7, 1837–1840. [Google Scholar] [CrossRef]
  40. Shah, J.H.; Chen, Z.; Sharif, M.; Yasmin, M.; Fernandes, S.L. A novel biomechanics-based approach for person re-identification by generating dense color sift salience features. J. Mech. Med. Biol. 2017, 17, 1740011. [Google Scholar] [CrossRef]
  41. Fernandes, S.L.; Bala, G.J. A comparative study on various state of the art face recognition techniques under varying facial expressions. Int. Arab. J. Inf. Technol. 2017, 14, 254–259. [Google Scholar]
  42. Lindsey, R.; Daluiski, A.; Chopra, S.; Lachapelle, A.; Mozer, M.; Sicular, S.; Hanel, D.; Gardner, M.; Gupta, A.; Hotchkiss, R.; et al. Deep neural network improves fracture detection by clinicians. Proc. Natl. Acad. Sci. USA 2018, 115, 11591–11596. [Google Scholar] [CrossRef]
  43. Amin, J.; Anjum, M.A.; Sharif, M.; Kadry, S.; Nam, Y.; Wang, S. Convolutional Bi-LSTM Based Human Gait Recognition Using Video Sequences. CMC-Comput. Mater. Contin. 2021, 68, 2693–2709. [Google Scholar] [CrossRef]
  44. Amin, J.; Sharif, M.; Anjum, M.A.; Nam, Y.; Kadry, S.; Taniar, D. Diagnosis of COVID-19 infection using three-dimensional semantic segmentation and classification of computed tomography images. Comput. Mater. Contin. 2021, 68, 2451–2467. [Google Scholar] [CrossRef]
  45. Amin, J.; Sharif, M.; Anjum, M.A.; Raza, M.; Bukhari, S.A.C. Convolutional neural network with batch normalization for glioma and stroke lesion detection using MRI. Cogn. Syst. Res. 2020, 59, 304–311. [Google Scholar] [CrossRef]
  46. Amin, J.; Sharif, M.; Raza, M.; Saba, T.; Rehman, A. Brain tumor classification: Feature fusion. In Proceedings of the International Conference on Computer and Information Sciences (ICCIS), Aljouf, Saudi Arabia, 3–4 April 2019; pp. 1–6. [Google Scholar]
  47. Amin, J.; Sharif, M.; Yasmin, M.; Ali, H.; Fernandes, S.L. A method for the detection and classification of diabetic retinopathy using structural predictors of bright lesions. J. Comput. Sci. 2017, 19, 153–164. [Google Scholar] [CrossRef]
  48. Amin, J.; Sharif, M.; Yasmin, M.; Fernandes, S.L. A distinctive approach in brain tumor detection and classification using MRI. Pattern Recognit. Lett. 2020, 139, 118–127. [Google Scholar] [CrossRef]
  49. Muhammad, N.; Sharif, M.; Amin, J.; Mehboob, R.; Gilani, S.A.; Bibi, N.; Javed, H.; Ahmed, N. Neurochemical Alterations in Sudden Unexplained Perinatal Deaths—A Review. Front. Pediatrics 2018, 6, 6. [Google Scholar] [CrossRef] [PubMed]
  50. Saba, T.; Mohamed, A.S.; El-Affendi, M.; Amin, J.; Sharif, M. Brain tumor detection using fusion of hand crafted and deep learning features. Cogn. Syst. Res. 2020, 59, 221–230. [Google Scholar] [CrossRef]
  51. Sharif, M.; Amin, J.; Raza, M.; Anjum, M.A.; Afzal, H.; Shad, S.A. Brain tumor detection based on extreme learning. Neural Comput. Appl. 2020, 32, 15975–15987. [Google Scholar] [CrossRef]
  52. Sharif, M.; Amin, J.; Raza, M.; Yasmin, M.; Satapathy, S.C. An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognit. Lett. 2020, 129, 150–157. [Google Scholar] [CrossRef]
  53. Umer, M.J.; Amin, J.; Sharif, M.; Anjum, M.A.; Azam, F.; Shah, J.H. An integrated framework for COVID-19 classification based on classical and quantum transfer learning from a chest radiograph. Concurr. Comput. Pract. Exp. 2021, 34, e6434. [Google Scholar] [CrossRef]
  54. Adams, M.; Chen, W.; Holcdorf, D.; McCusker, M.W.; Howe, P.D.; Gaillard, F. Computer vs human: Deep learning versus perceptual training for the detection of neck of femur fractures. J. Med. Imaging Radiat. Oncol. 2019, 63, 27–32. [Google Scholar] [CrossRef]
  55. Urakawa, T.; Tanaka, Y.; Goto, S.; Matsuzawa, H.; Watanabe, K.; Endo, N. Detecting intertrochanteric hip fractures with orthopedist-level accuracy using a deep convolutional neural network. Skelet. Radiol. 2019, 48, 239–244. [Google Scholar] [CrossRef]
  56. Gan, K.; Xu, D.; Lin, Y.; Shen, Y.; Zhang, T.; Hu, K.; Zhou, K.; Bi, M.; Pan, L.; Wu, W.; et al. Artificial intelligence detection of distal radius fractures: A comparison between the convolutional neural network and professional assessments. Acta Orthop. 2019, 90, 394–400. [Google Scholar] [CrossRef]
  57. Ebsim, R.; Naqvi, J.; Cootes, T.F. Automatic detection of wrist fractures from posteroanterior and lateral radiographs: A deep learning-based approach. In Proceedings of the International Workshop on Computational Methods and Clinical Applications in Musculoskeletal Imaging, Granada, Spain, 16 September 2018; pp. 114–125. [Google Scholar]
  58. Tomita, N.; Cheung, Y.Y.; Hassanpour, S. Deep neural networks for automatic detection of osteoporotic vertebral fractures on CT scans. Comput. Biol. Med. 2018, 98, 8–15. [Google Scholar] [CrossRef] [PubMed]
  59. Raghavendra, U.; Bhat, N.S.; Gudigar, A.; Acharya, U.R. Automated system for the detection of thoracolumbar fractures using a CNN architecture. Future Gener. Comput. Syst. 2018, 85, 184–189. [Google Scholar] [CrossRef]
  60. Taves, J.; Skitch, S.; Valani, R. Determining the clinical significance of errors in pediatric radiograph interpretation between emergency physicians and radiologists. Can. J. Emerg. Med. 2018, 20, 420–424. [Google Scholar] [CrossRef] [PubMed]
  61. Rayan, J.C.; Reddy, N.; Kan, J.H.; Zhang, W.; Annapragada, A. Binomial classification of pediatric elbow fractures using a deep learning multiview approach emulating radiologist decision making. Radiol. Artif. Intell. 2019, 1, e180015. [Google Scholar] [CrossRef]
  62. Lee, S.H.; Yun, S.J. Diagnostic performance of ultrasonography for detection of pediatric elbow fracture: A meta-analysis. Ann. Emerg. Med. 2019, 74, 493–502. [Google Scholar] [CrossRef]
  63. Chartrand, G.; Cheng, P.M.; Vorontsov, E.; Drozdzal, M.; Turcotte, S.; Pal, C.J.; Kadoury, S.; Tang, A. Deep learning: A primer for radiologists. Radiographics 2017, 37, 2113–2131. [Google Scholar] [CrossRef]
  64. Luong, H.H.; Le, L.T.T.; Nguyen, H.T.; Hua, V.Q.; Nguyen, K.V.; Bach, T.N.P.; Nguyen, T.N.A.; Nguyen, H.T.Q. Transfer Learning with Fine-Tuning on MobileNet and GRAD-CAM for Bones Abnormalities Diagnosis. In Proceedings of the Computational Intelligence in Security for Information Systems Conference, Asan, Korea, 29 June–1 July 2022; pp. 171–179. [Google Scholar]
  65. Jia, Y.; Wang, H.; Chen, W.; Wang, Y.; Yang, B. An attention-based cascade R-CNN model for sternum fracture detection in X-ray images. CAAI Trans. Intell. Technol. 2022. [Google Scholar] [CrossRef]
  66. Kandel, I.; Castelli, M.; Popovič, A. Musculoskeletal images classification for detection of fractures using transfer learning. J. Imaging 2020, 6, 127. [Google Scholar] [CrossRef]
  67. Ghoti, K.; Baid, U.; Talbar, S. MURA: Bone Fracture Segmentation Using a U-net Deep Learning in X-ray Images. In Techno-Societal 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 519–531. [Google Scholar]
  68. Ananda, A.; Ngan, K.H.; Karabağ, C.; Ter-Sarkisov, A.; Alonso, E.; Reyes-Aldasoro, C.C. Classification and visualisation of normal and abnormal radiographs; A comparison between eleven convolutional neural network architectures. Sensors 2021, 21, 5381. [Google Scholar] [CrossRef]
  69. Liang, S.; Gu, Y. Towards robust and accurate detection of abnormalities in musculoskeletal radiographs with a multi-network model. Sensors 2020, 20, 3153. [Google Scholar] [CrossRef]
  70. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  71. Praveena, K.S. A Classical Hierarchy method for Bone X-Ray Image Classification using SVM. Int. Res. J. Eng. Technol. (IRJET) 2017, 4, 991–993. [Google Scholar]
  72. Wang, C.; Li, Z.; Dey, N.; Li, Z.; Ashour, A.S.; Fong, S.J.; Sherratt, R.S.; Wu, L.; Shi, F. Histogram of oriented gradient based plantar pressure image feature extraction and classification employing fuzzy support vector machine. J. Med. Imaging Health Inform. 2018, 8, 842–854. [Google Scholar] [CrossRef]
  73. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  74. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
  75. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  76. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  77. Cao, Y.; Wang, H.; Moradi, M.; Prasanna, P.; Syeda-Mahmood, T.F. Fracture detection in x-ray images through stacked random forests feature fusion. In Proceedings of the IEEE 12th International Symposium on Biomedical Imaging (ISBI), New York, NY, USA, 16–19 April 2015; pp. 801–805. [Google Scholar]
  78. Umadevi, N.; Geethalakshmi, S. Multiple classification system for fracture detection in human bone X-ray images. In Proceedings of the Third International Conference on Computing, Communication and Networking Technologies (ICCCNT’12), Karur, India, 26–28 July 2012; pp. 1–8. [Google Scholar]
  79. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  80. Jin, Q.; Xu, Z.; Cai, W. An Improved Whale Optimization Algorithm with Random Evolution and Special Reinforcement Dual-Operation Strategy Collaboration. Symmetry 2021, 13, 238. [Google Scholar] [CrossRef]
  81. Prakash, D.B.; Lakshminarayana, C. Optimal siting of capacitors in radial distribution network using whale optimization algorithm. Alex. Eng. J. 2017, 56, 499–509. [Google Scholar] [CrossRef]
  82. Bhatt, U.R.; Dhakad, A.; Chouhan, N.; Upadhyay, R. Fiber wireless (FiWi) access network: ONU placement and reduction in average communication distance using whale optimization algorithm. Heliyon 2019, 5, e01311. [Google Scholar] [CrossRef]
  83. Guan, B.; Zhang, G.; Yao, J.; Wang, X.; Wang, M. Arm fracture detection in X-rays based on improved deep convolutional neural network. Comput. Electr. Eng. 2020, 81, 106530. [Google Scholar] [CrossRef]
  84. Rajpurkar, P.; Irvin, J.; Bagul, A.; Ding, D.; Duan, T.; Mehta, H.; Yang, B.; Zhu, K.; Laird, D.; Ball, R.L.; et al. Mura: Large dataset for abnormality detection in musculoskeletal radiographs. arXiv 2017, arXiv:1712.06957. [Google Scholar]
  85. Solovyova, A.; Solovyov, I. X-Ray bone abnormalities detection using MURA dataset. arXiv 2020, arXiv:2008.03356. [Google Scholar]
  86. Banga, D.; Waiganjo, P. Abnormality detection in musculoskeletal radiographs with convolutional neural networks (ensembles) and performance optimization. arXiv 2019, arXiv:1908.02170v1. [Google Scholar]
  87. Mehr, G. Automating Abnormality Detection in Musculoskeletal Radiographs through Deep Learning. arXiv 2020, arXiv:2010.12030. [Google Scholar]
  88. He, M.; Wang, X.; Zhao, Y. A calibrated deep learning ensemble for abnormality detection in musculoskeletal radiographs. Sci. Rep. 2021, 11, 9097. [Google Scholar] [CrossRef] [PubMed]
  89. Karthik, K.; Kamath, S.S. MSDNet: A deep neural ensemble model for abnormality detection and classification of plain radiographs. J. Ambient. Intell. Humaniz. Comput. 2022, 1–15. [Google Scholar] [CrossRef]
Figure 1. Proposed model for classification of the fractured elbow using X-ray images.
Figure 2. Proposed feature extraction, score-based feature selection, fusion, and best feature selection using WOA and classification.
Figure 3. Graphical representation of WOA.
Figure 4. Confusion matrix on five-fold validation. (a) Fine KNN; (b) WNN; (c) SVM-cubic.
Figure 5. ROC on five-fold validation. (a) Fine KNN; (b) WNN; (c) SVM-cubic.
Figure 6. Confusion matrix on ten-fold validation. (a) Fine KNN; (b) WNN; (c) SVM-cubic.
Figure 7. ROC on ten-fold cross-validation. (a) Fine KNN; (b) WNN; (c) SVM-cubic.
Table 1. Parameters of WOA.

| Parameter        | Value |
|------------------|-------|
| Solutions        | 10    |
| Total iterations | 100   |
| Threshold        | 0.5   |
| Lower bound      | 0     |
| Upper bound      | 1     |
Table 2. Description of classifiers.

| Classifier                    | Selected Parameters                                                                              |
|-------------------------------|--------------------------------------------------------------------------------------------------|
| Wide neural network (WNN)     | Fully connected layers = 1; size of first layer = 100; activation unit = ReLU; iteration limit = 1000 |
| Support vector machine (SVM)  | Kernel = cubic; kernel scale = automatic; box constraint level = 1                                |
| K-nearest neighbor (KNN)      | Neighbors = 1; distance = Euclidean                                                               |
Table 3. Classification results on MURA elbow fracture dataset using five-fold validation.

| Classifier | Accuracy % | Precision % | F1 Score % | Specificity % | Sensitivity % | Kappa Score |
|------------|------------|-------------|------------|---------------|---------------|-------------|
| Fine KNN   | 95.3       | 95          | 95         | 95.0          | 95.6          | 0.906       |
| Cubic SVM  | 85.1       | 87          | 85         | 92.7          | 87.7          | 0.802       |
| WNN        | 90.1       | 93          | 90         | 86.7          | 83.5          | 0.703       |
Table 4. Results of classification on MURA elbow fracture dataset using ten-fold validation.

| Classifier | Accuracy % | Precision % | F1 Score % | Specificity % | Sensitivity % | Kappa Score |
|------------|------------|-------------|------------|---------------|---------------|-------------|
| Fine KNN   | 97.1       | 96          | 97         | 96            | 97            | 0.94        |
| Cubic SVM  | 91.4       | 93          | 91         | 93            | 89            | 0.82        |
| WNN        | 86.5       | 87          | 86         | 87            | 85            | 0.73        |
Table 5. Comparison of classification results.

| Methodology     | Year | Reported Score           |
|-----------------|------|--------------------------|
| [85]            | 2020 | 0.715 (kappa)            |
| [84]            | 2018 | 0.710 (kappa)            |
| [86]            | 2019 | 0.617 (kappa)            |
| [87]            | 2020 | 0.761 (kappa)            |
| [88]            | 2021 | 0.790 (kappa)            |
| [89]            | 2022 | 0.826 (accuracy)         |
| [21]            | 2022 | 0.95 (AUC)               |
| [17]            | 2022 | 0.68 (precision)         |
| Proposed Method | —    | 0.943 (kappa), 0.96 (AUC) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Malik, S.; Amin, J.; Sharif, M.; Yasmin, M.; Kadry, S.; Anjum, S. Fractured Elbow Classification Using Hand-Crafted and Deep Feature Fusion and Selection Based on Whale Optimization Approach. Mathematics 2022, 10, 3291. https://doi.org/10.3390/math10183291
