Article

A Dual-Stage Vocabulary of Features (VoF)-Based Technique for COVID-19 Variants’ Classification

1
Department of Electronics Engineering, Sejong University, Seoul 05006, Korea
2
Department of Electrical Engineering, Polytechnique Montreal, Montreal, QC H3T 1J4, Canada
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(24), 11902; https://doi.org/10.3390/app112411902
Submission received: 18 November 2021 / Revised: 6 December 2021 / Accepted: 13 December 2021 / Published: 14 December 2021
(This article belongs to the Special Issue Medical Signal and Image Processing)

Abstract

Novel coronavirus, known as COVID-19, is a very dangerous virus. Initially detected in China, it has since spread all over the world, causing many deaths. There are several variants of COVID-19, which have been categorized into two major groups: variants of concern and variants of interest. Variants of concern are more dangerous, and there is a need to develop a system that can detect and classify COVID-19 and its variants without touching an infected person. In this paper, we propose a dual-stage deep learning framework to detect and classify COVID-19 and its variants. CT scans and chest X-ray images are used. Initially, detection is done through a convolutional neural network; then, spatial features are extracted with deep convolutional models, while handcrafted features are extracted from several handcrafted descriptors. Both spatial and handcrafted features are combined into a single feature vector, called the vocabulary of features (VoF) as it contains both spatial and handcrafted features. This feature vector is fed as an input to the classifier to classify the different variants. The proposed model is evaluated based on accuracy, F1-score, sensitivity, specificity, Cohen’s kappa, and classification error. The experimental results show that the proposed method outperforms all the existing state-of-the-art methods.

1. Introduction

Coronavirus, known as COVID-19, is a deadly virus that was discovered in Wuhan, China in December 2019 and swiftly spread all over the world. It has taken the lives of millions of people. The World Health Organization (WHO) declared it a global pandemic [1]. It has several variants, which are categorized into three major groups: variants of concern, variants of interest, and variants under monitoring. Variants of concern can cause an increase in transmission. These variants are alpha (α), beta (β), gamma (γ) [2], and delta (δ) [3]. On the other hand, variants of interest such as lambda (λ) and mu (µ) can cause community transmission or multiple clusters. The epsilon (ε), eta (η), iota (ι), and kappa (κ) variants were downgraded to variants under monitoring, while zeta (ζ) and theta (θ) have not been formally labeled [4]. The variants of concern are more dangerous and cause death. These variants transmit from one individual to another through physical contact. The most recent and dangerous variant of COVID-19 is the delta variant, which emerged in India. It is highly contagious and has the potential to evade some types of antibodies. Figure 1 shows the variants of concern and their places of emergence. It is necessary to develop a system that detects COVID-19 and classifies its variants without making any physical contact with the infected person. Medical professionals are focused on developing technology to fight this deadly virus, which has taken the lives of many people around the world. Artificial intelligence is one of the most promising technologies for detecting this virus, and many systems have been developed to detect COVID-19 [5]. One of the most widely used approaches to detect COVID-19 from chest X-ray images is the convolutional neural network (CNN) [6].
Several state-of-the-art techniques using machine learning and deep learning have been proposed for disease classification and prognostics. In [7], the authors proposed a machine learning framework for the prediction of brain strokes using brain images. Similarly, in [8], a machine learning algorithm was proposed for the prediction of heart diseases based on electrocardiogram (ECG) signals. In [9], the authors introduced a novel framework for the evaluation of the outcome of brain neurons after a stroke and demonstrated the use of machine learning algorithms for quantitative evaluation. Meanwhile, the authors of [10] demonstrated a framework for feature extraction from electroencephalogram (EEG) signals. Machine learning and deep learning have thus proven proficient for the detection and classification of several diseases.
In this paper, we propose a vocabulary of features (VoF)-based deep learning framework to detect COVID-19 and classify its variants. The framework is divided into two stages: the detection of COVID-19 and the classification of its variants. Detection is performed by a convolutional neural network on preprocessed images. After detection, the COVID-19 strain is classified into its variant using the vocabulary of features (VoF), which combines spatial and handcrafted features. These VoF vectors are used to train the models. Several VoFs are used, and the results verify that the proposed method gives the best performance.
The rest of the paper is organized as follows: Section 2 explains a comprehensive literature review, and Section 3 illustrates the proposed methodology. In Section 4, a comparative analysis of experimental results is presented, while in Section 5, a brief conclusion is drawn.

2. Literature Review

Coronavirus is a highly dangerous virus, and many systems have been developed to detect COVID-19. The analysis of radiology images to assist in detecting coronavirus has received considerable attention from researchers around the world. In [11], a deep convolutional neural network-based technique is proposed to detect the virus. The authors fused two datasets into a combined dataset, and the accuracy achieved by the model was 98.7%. Another model used for classification is the support vector machine (SVM). The researchers studied different neural networks; among them, ResNet-50 emerged as the best, with an accuracy of 95%. However, this approach is complex and requires a massive dataset as well as abundant execution time [12]. Furthermore, Narin et al. suggested three pre-trained CNN architectures, Inceptionv3, Inception-ResNet-v2, and ResNet-50, to detect COVID-19-positive cases from X-rays. An accuracy of 98% was achieved for ResNet-50, whereas an 87% accuracy was obtained for Inception-ResNet-v2, and Inceptionv3 gained an accuracy of 97%. The drawback of all these models is that they took only 100 images for examination; for a large dataset, the performance of the models may decline [12]. Another method proposed for the detection of COVID-19, named COVID-Net, classifies several classes of illness: COVID-19, severe pneumonia, and normal. However, this approach gained only 92.4% accuracy, which is lower than that of other techniques [13]. Another article used a SqueezeNet model and achieved an accuracy of 98.3%; however, this model is not recommended, as it requires high processing power and speed [14].
Many techniques have been discussed for the detection of coronavirus pneumonia, some of which use deep learning on computed tomography (CT) images, achieving up to 89.5% accuracy, 87% sensitivity, and 88% specificity [15]. An advanced deep learning technique that automatically detects COVID-19 from X-ray images was developed using a DCNN-based Inception V3 model, achieving an accuracy of up to 98% [16].
In [17], COVID-19 was detected using X-ray images with three algorithms, and F-scores of 95–99% were achieved. In [18], an exemplar model was developed that first applies a fuzzy tree transformation and then uses a multi-kernel local binary pattern. Features are then extracted from the images and classified with algorithms such as the support vector machine (SVM) and decision tree. SVM produced the best results, with an accuracy of 97.01%.
Similarly, in [19], a CNN model called CoroDet is proposed to detect COVID-19 from raw CT scan and X-ray images. This model is used for three different types of classification, with accuracies of 94.2%, 99.1%, and 91.2% for the three-class, two-class, and four-class classifications, respectively. In [19], a deep learning model is introduced whose architecture relies on the ResNet-101 CNN network; it recognizes objects from millions of images and then detects anomalies in chest X-ray images. The accuracy was found to be about 71.9%. In [20], another automatic COVID-19 detection model is presented using X-ray images, covering binary and multi-class classifications. A real-time object detection system named DarkNet, which uses 17 convolutional layers, was employed. The accuracies achieved were 87.02% and 98.08% for multi-class and binary classifications, respectively. Ahsan et al. proposed a novel deep learning-based technique for the detection of COVID-19 from chest X-rays and CT scans, achieving accuracies of 82.94% and 93.94%, respectively [21].
CT scans can also be used for the detection of coronavirus. In [22], the authors demonstrated the presence of COVID-19 in the CT scan of a 44-year-old patient. The lesion in the lungs can be found in the virus-affected CT scan. Similarly, in [23], the authors illustrated the detection of the novel coronavirus in a CT scan by detecting the presence of non-invasive fluid. The authors of [24] presented recent advances and emerging techniques for the detection of the virus.
From the literature, it is evident that DCNNs are efficient in the detection of COVID-19. Motivated by this, we propose a novel algorithm to detect COVID-19 from chest X-rays and CT scans and then classify it into variants. The following are the major contributions of this article.
  • Detection of COVID-19 from CT scans and chest X-rays;
  • Classification of COVID-19 variants based on a unique vocabulary of features (VoF) technique;
  • Comparison of the proposed method with state-of-the art techniques.
The following section explains the proposed methodology in detail.

3. Proposed Methodology

Chest X-rays and CT scans are utilized in this methodology to detect the presence of COVID-19. Initially, the dataset is preprocessed. In preprocessing, all the input images of the dataset are resized to an equal size. Then, 2D discrete wavelet transform (DWT) is applied, which yields a wavelet-domain representation of the 2D images. These preprocessed images are used as the input for the deep convolutional neural network. If the output of the first CNN is COVID-19, then handcrafted features are extracted from the gray-scale images, while spatial features are extracted from the three-channel gray-scale images and RGB images. These spatial features are integrated with the handcrafted features to obtain the vocabulary of features (VoF) vector, which is fed as an input to the classifier to obtain the output label. The output label indicates which COVID-19 variant is present. The proposed framework is illustrated in Figure 2.

3.1. Preprocessing

All the images of the dataset are preprocessed before being passed as input to the feature extractors. The preprocessing steps are explained as follows.

3.1.1. CT Scan Image Slicing Planes

CT scan images can be sliced using three planes: the axial, coronal, and sagittal planes. These planes are visually represented in Figure 3. In our research, we used the axial plane for the slicing of the CT scan images because it is the standard acquisition plane for CT and provides a clear perception of the type and distribution of the abnormalities.

3.1.2. Sample Size of Chest X-rays and CT Scans

In medical image processing, particularly with CT scans and chest X-rays, it is important to consider the image dimensions. In this article, all the CT scans had a dimension of 365 × 260 pixels, and the chest X-rays had dimensions of 299 × 299 pixels.

3.1.3. Image Resize

It is necessary to make all the images an equal size for better performance of the models. In this paper, all the images were resized to 227 × 227 pixels.
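The paper does not name a resizing library; as a minimal pure-NumPy sketch of this preprocessing step (in practice OpenCV or Pillow would normally be used):

```python
import numpy as np

def resize_image(img, size=(227, 227)):
    """Nearest-neighbour resize to a fixed size (227x227 as in the paper).
    Illustrative only; a library resampler would give smoother results."""
    h, w = img.shape[:2]
    # map each output row/column back to the nearest source row/column
    rows = (np.arange(size[0]) * h // size[0]).clip(0, h - 1)
    cols = (np.arange(size[1]) * w // size[1]).clip(0, w - 1)
    return img[rows[:, None], cols]
```

Applying this to a 365 × 260 CT slice yields a 227 × 227 array ready for the feature extractors.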

3.1.4. Discrete Wavelet Transform

After image resizing, discrete wavelet transform (DWT) is applied to obtain the compressed and approximated image pixels. Wavelets of an image are the functions generated from a single function by dilation and translations. A simple illustration of DWT is shown in Figure 4. The approximation coefficient of the DWT is used as an input for the feature extractors.
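A wavelet library such as PyWavelets would normally compute the DWT; the following NumPy sketch shows only how the one-level Haar approximation (LL) sub-band, the coefficient used as the extractor input, halves each dimension while keeping a smoothed copy of the image:

```python
import numpy as np

def haar_dwt2_approx(img):
    """One-level 2-D Haar DWT approximation (LL) coefficients.
    For the orthonormal Haar filters, LL equals the sum of each
    2x2 block divided by 2."""
    # trim to even dimensions so 2x2 blocks tile the image exactly
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0
```

The detail sub-bands (LH, HL, HH) are discarded here, since only the approximation coefficients feed the feature extractors.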

3.2. COVID-19 Detection Using DCNN

All the preprocessed images are fed into the deep convolutional neural network (DCNN) to detect COVID-19. The DCNN used for the detection of COVID-19 has five convolutional layers (Conv) with a rectified linear unit (ReLU) as the activation function. This DCNN has two fully connected layers, named FC-6 and FC-7. The input layer takes images of size 227 × 227 × 3. The features are extracted by the Conv layers, while prediction and detection are done by the fully connected layers and the final softmax layer. The detection of COVID-19 with the help of the DCNN is shown in Figure 5.
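In practice a framework such as PyTorch or Keras would implement the five Conv layers, FC-6/FC-7, and the softmax output; the NumPy sketch below only illustrates the core building blocks (a valid convolution followed by ReLU, and the softmax used for the final prediction), not the paper's trained network:

```python
import numpy as np

def conv2d_relu(img, kernel):
    """Valid 2-D convolution followed by ReLU - the basic building
    block of each Conv layer in the detection DCNN."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)  # ReLU activation

def softmax(z):
    """Softmax over class scores, as in the final layer of the detector."""
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()
```

Stacking five such conv layers, two fully connected layers, and the softmax reproduces the overall structure described above.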

3.3. Features Extraction

If the output of the DCNN from the first stage is COVID-19, then features are extracted from the COVID-19-affected image. Two kinds of features are extracted: handcrafted features, obtained from handcrafted descriptors, and spatial features, obtained from the DCNN.

3.3.1. Handcrafted Features Extraction

Handcrafted features are extracted with the help of histogram of oriented gradient (HOG) [25], local binary pattern (LBP) [26], and oriented FAST and rotated BRIEF (ORB) [27].
HOG is a handcrafted descriptor used in image processing. The basic purpose of HOG is to detect objects based on the orientation of the gradient. It counts occurrences of gradient orientation in the localized segments of an image.
LBP is a very simple texture operator. LBP assigns a label to each pixel by thresholding its neighborhood against the pixel's own value. The output is a binary number, which is why it is known as a local binary pattern.
ORB is a fast and reliable local feature detector for computer vision tasks such as object recognition and 3D reconstruction. It uses a modified version of the visual descriptor BRIEF and the FAST key-point detector. Its goal is to provide a quick and effective replacement for the scale-invariant feature transform (SIFT).
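Libraries such as scikit-image and OpenCV provide ready-made HOG, LBP, and ORB extractors; the paper does not say which it used. As a NumPy-only illustration of just the LBP thresholding idea described above:

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP codes for the interior pixels: each neighbour
    is compared against the centre pixel and contributes one bit."""
    c = img[1:-1, 1:-1]
    # neighbours in clockwise order starting from the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```

A histogram of these codes over the image (or over cells) forms the handcrafted LBP feature vector.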

3.3.2. Spatial Features Extraction

Spatial features are extracted with the help of deep convolutional neural network (DCNN) models. A DCNN has convolutional layers as well as max-pooling layers. The convolutional layers are used for feature extraction from the images, while the max-pooling layers downsample the feature maps. The output feature vectors are collected at the fully connected layers of the DCNN. Feature extraction from an image via a DCNN is demonstrated in Figure 6.

3.4. Vocabulary of Features (VoF) Vector

Vocabulary of features (VoF) is the vector containing handcrafted as well as spatial features. The VoF vector is the combined vector of the spatial feature vector and handcrafted feature vector.
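The combination step is a straightforward concatenation; a minimal sketch (function name is illustrative):

```python
import numpy as np

def build_vof(spatial_feats, handcrafted_feats):
    """Concatenate the spatial (DCNN) and handcrafted feature vectors
    into a single vocabulary-of-features (VoF) vector."""
    return np.concatenate([np.ravel(spatial_feats),
                           np.ravel(handcrafted_feats)])
```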

3.5. Classifier

The VoF vector is applied to the classifier to perform classification. The classifier used is the support vector machine (SVM), which classifies objects using support vectors. The classification is done by drawing a hyperplane: the greater the margin between the support vectors and the hyperplane, the better the accuracy, and vice versa. There are three basic SVM kernels: linear, Gaussian, and polynomial.
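The paper does not state its SVM implementation; scikit-learn's `SVC` is assumed in this sketch, with the kernel argument covering the three kernels mentioned above (`'linear'`, `'rbf'` for Gaussian, `'poly'`):

```python
from sklearn.svm import SVC

def train_variant_classifier(vof_vectors, labels, kernel="linear"):
    """Fit an SVM on VoF vectors to predict the variant label.
    kernel may be 'linear', 'rbf' (Gaussian), or 'poly'."""
    clf = SVC(kernel=kernel)
    clf.fit(vof_vectors, labels)
    return clf
```

At inference time, `clf.predict` on a new VoF vector returns the predicted variant label.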

3.6. Dataset

The datasets used for the experiments are chest X-ray and CT scan images. The chest X-ray dataset contains chest X-rays of normal and COVID-19-affected persons. Similarly, the CT scan dataset contains the CT scans of normal and COVID-19-affected patients. The COVID-19 images are categorized as alpha, beta, gamma, and delta. Then, 60% of the dataset is used for training the model, while 20% is used for validation and 20% is used for testing. The dataset contains 1345 images for the alpha variant, 10,192 for the beta variant, 6012 for the gamma variant, and 3616 for the delta variant. The chest X-ray and CT scan images are publicly available at [4]. For the classification of the delta variant, we followed [28] to arrange our database into different classes of COVID-19 variants. Figure 7 shows sample images from the database.
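The 60/20/20 split can be sketched as follows (the random seed and function name are illustrative assumptions, not details from the paper):

```python
import numpy as np

def split_indices(n, seed=0):
    """60/20/20 train/validation/test index split over n samples."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])
```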
The next section presents the experimental results and discussion.

4. Experimental Results

The proposed framework was evaluated based on different performance parameters. The performance parameters include accuracy, F1-score, sensitivity, specificity, Cohen’s kappa, and classification error. The accuracy of the model can be evaluated by Equation (1).
Accuracy = (TP + TN) / (TP + FP + TN + FN)    (1)
where TP denotes true positive, TN denotes true negative, FP represents false positive, and FN means false negative. Accuracy can be stated as the ratio of correct predictions of COVID-19 to the total number of images in the database, and it is a very important performance parameter. Similarly, specificity, the proportion of normal images in the database that are correctly predicted, can be calculated by Equation (2).
Specificity (Sp) = TN / (TN + FP)    (2)
Another performance parameter is the sensitivity of the model, defined as the proportion of COVID-19-positive images that are correctly predicted. It can be determined by Equation (3).
Sensitivity (Se) = TP / (TP + FN)    (3)
Like accuracy, the F1-score is an important parameter for evaluating model performance. It is the harmonic mean of the specificity and sensitivity, and the formula to evaluate the F1-score is given in Equation (4).
F1-score = 2 × (Se × Sp) / (Se + Sp)    (4)
where Se and Sp denote sensitivity and specificity, respectively. Another performance parameter considered in this article is Cohen’s kappa. It can be calculated by using Equation (5).
Cohen’s Kappa (κ) = 2 × (TP × TN − FP × FN) / ((TP + FP)(FP + TN) + (TP + FN)(FN + TN))    (5)
These parameters were used to evaluate the performance of our proposed method.
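Equations (1)–(5) can be computed directly from the confusion-matrix counts; a small helper matching the authors' definitions (the function name is illustrative):

```python
def covid_metrics(tp, tn, fp, fn):
    """Evaluation metrics as defined in Equations (1)-(5)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)       # Eq. (1)
    sp = tn / (tn + fp)                              # specificity, Eq. (2)
    se = tp / (tp + fn)                              # sensitivity, Eq. (3)
    f1 = 2 * se * sp / (se + sp)                     # Eq. (4)
    kappa = 2 * (tp * tn - fp * fn) / (
        (tp + fp) * (fp + tn) + (tp + fn) * (fn + tn))  # Eq. (5)
    return accuracy, sp, se, f1, kappa
```

For a perfect detector (FP = FN = 0), every metric evaluates to 1, which is a quick sanity check on the formulas.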

4.1. Simulation Parameters

In our framework, DCNNs were used for the extraction of spatial features. The training of the DCNN is performed by fine-tuning the different hyperparameters. The final hyperparameters are shown in Table 1.

4.2. Simulation Results

The performance of the trained model was evaluated. The training accuracy and loss of the model were 99.95% and 1.02%, respectively. First, the performance of the first stage is presented. Table 2 and Table 3 show the performance of different DCNNs applied for the detection of COVID-19 from chest X-rays and CT scans, respectively.
The highest accuracy achieved by the proposed model for the first stage was 99.5% and 99.74% for CT scans and chest X-rays, respectively. The confusion matrices of the proposed framework for chest X-rays and CT scans are shown in Figure 8. The ROC curve for the training and testing of the first stage of the proposed methodology is shown in Figure 9.

4.3. Validation of Results

We performed a k-fold cross-validation to validate our model as well as to deal with the data imbalance problem. We applied a 10-fold cross-validation to our dataset, and the optimal accuracy achieved by the cross-validation was 98.9% with a 1.1% loss.
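The paper does not state its cross-validation implementation; as a NumPy-only sketch of a 10-fold index generator (the seed is an assumption):

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n samples, after a single random shuffle."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)  # everything not in this fold
        yield train, fold
```

Each of the k iterations trains on k−1 folds and evaluates on the held-out fold; the reported cross-validation accuracy is the best (or averaged) score across folds.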

4.4. Comparison with Existing Models

We also compared the accuracy of the first stage of our proposed framework with other state-of-the-art techniques. The proposed framework outperformed the first stage of the existing frameworks for COVID-19 detection. Figure 10 shows the first-stage performance of the proposed method for chest X-rays with [41,42,43,44,45,46,47], and Figure 11 shows the comparison of the first-stage performance of the proposed technique for CT scans with [21] and [48,49,50,51,52,53,54,55].
We can conclude from Figure 10 and Figure 11 that the proposed method achieves better accuracy than the other methods. The accuracy of the proposed method is lower than that of one model of Loey et al. [41] for X-ray images, but the authors of that article used a small dataset compared with the database used in this paper. Similarly, the accuracy of the proposed model for the CT scan images is lower than that of Hasan et al. [52], as those authors used a limited database.
After the detection of COVID-19, we further classified COVID-19 into its variant using the vocabulary of features (VoF) technique, where the handcrafted and spatial features were considered for training the model. The comparison of all variants in terms of handcrafted features in X-ray images is shown in Figure 12.
The classification results of the X-rays and CT scans are shown in Figure 13 in the form of confusion matrices. The accuracy for X-ray images was 99.12%, while for CT scan it was 98.54%.
The proposed method achieved a 99.12% accuracy with a classification error of 0.88% for the classification of variants of COVID-19 with the X-ray images. Similarly, the proposed technique achieved 98.54% accuracy with a classification error of 1.46% for the classification of different variants of COVID-19 with the CT scans. The upcoming section presents the conclusion.

5. Conclusions

COVID-19 was discovered in December 2019, and it quickly became a global pandemic. Many people have lost their lives due to COVID-19, which transmits from one person to another through physical contact. Since its discovery, several variants have been identified. In this article, we propose a dual-stage framework for the classification of different variants of COVID-19. The proposed method achieved 99.5% and 99.74% accuracy for the detection of COVID-19 in CT scan and X-ray images, respectively. The maximum accuracy of the second stage of the method was 98.54% and 99.12% for the classification of variants of COVID-19 in CT scans and X-ray images, respectively. The proposed framework thus achieved a state-of-the-art performance for the classification of the variants of concern. In the future, we plan to extend this work to three stages: the detection of COVID-19, classification into variants of concern and variants of interest, and then further classification into their names.

Author Contributions

Conceptualization, S.J. and M.R.; methodology, S.J.; software, S.J.; validation, M.R.; formal analysis, S.J.; investigation, M.R.; resources, M.R.; data curation, S.J.; writing—original draft preparation, S.J.; writing—review and editing, M.R.; visualization, M.R.; supervision, M.R.; project administration, M.R.; funding acquisition, M.R. Both authors contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chamola, V.; Hassija, V.; Gupta, V.; Guizani, M. A Comprehensive Review of the COVID-19 Pandemic and the Role of IoT, Drones, AI, Blockchain, and 5G in Managing its Impact. IEEE Access 2020, 8, 90225–90265. [Google Scholar] [CrossRef]
  2. Bose, P.; Roy, S.; Ghosh, P. A Comparative NLP-Based Study on the Current Trends and Future Directions in COVID-19 Research. IEEE Access 2021, 9, 78341–78355. [Google Scholar] [CrossRef]
  3. Mahase, E. Delta variant: What is Happening with Transmission, Hospital Admissions, and Restrictions? BMJ 2021, 373, n1513. [Google Scholar] [CrossRef] [PubMed]
  4. Bell, D.; Worsley, C. COVID-19. Radiopaedia.org. Available online: https://radiopaedia.org/articles/covid-19-4?lang=us (accessed on 21 October 2021).
  5. Rehman, A.; Iqbal, M.A.; Xing, H.; Ahmed, I. COVID-19 Detection Empowered with Machine Learning and Deep Learning Techniques: A Systematic Review. Appl. Sci. 2021, 11, 3414. [Google Scholar] [CrossRef]
  6. Allam, Z.; Dey, G.; Jones, D.S. Artificial Intelligence (AI) Provided Early Detection of the Coronavirus (COVID-19) in China and Will Influence Future Urban Health Policy Internationally. AI 2020, 1, 9. [Google Scholar] [CrossRef] [Green Version]
  7. Hussain, I.; Park, S.-J. HealthSOS: Real-Time Health Monitoring System for Stroke Prognostics. IEEE Access 2020, 8, 213574–213586. [Google Scholar] [CrossRef]
  8. Hussain, I.; Park, S.-J. Big-ECG: Cardiographic Predictive Cyber-Physical System for Stroke Management. IEEE Access 2021, 9, 123146–123164. [Google Scholar] [CrossRef]
  9. Hussain, I.; Park, S.-J. Quantitative Evaluation of Task-Induced Neurological Outcome after Stroke. Brain Sci. 2021, 11, 900. [Google Scholar] [CrossRef]
  10. Hussain, I.; Young, S.; Kim, C.H.; Benjamin, H.C.M.; Park, S.J. Quantifying Physiological Biomarkers of a Microwave Brain Stimulation Device. Sensors 2021, 21, 1896. [Google Scholar] [CrossRef] [PubMed]
  11. Khasawneh, N.; Fraiwan, M.; Fraiwan, L.; Khassawneh, B.; Ibnian, A. Detection of COVID-19 from Chest X-ray Images Using Deep Convolutional Neural Networks. Sensors 2021, 21, 5940. [Google Scholar] [CrossRef]
  12. Narin, A.; Kaya, C.; Pamuk, Z. Automatic Detection of Coronavirus Disease (COVID-19) using X-ray images and Deep Convolutional Neural Networks. Pattern Anal. Appl. 2021, 24, 1207–1220. [Google Scholar] [CrossRef] [PubMed]
  13. Wang, L.; Lin, Z.Q.; Wong, A. Covid-net: A Tailored Deep Convolutional Neural Network Design for Detection of Covid-19 Cases from Chest X-ray images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef] [PubMed]
  14. Ucar, F.; Korkmaz, D. COVIDiagnosis-Net: Deep Bayes SqueezeNet Based Diagnosis of the Coronavirus Disease 2019 (COVID-19) from X-ray images. Med. Hypotheses 2020, 140, 109761. [Google Scholar] [CrossRef] [PubMed]
  15. Wang, S.; Kang, B.; Ma, J.; Zeng, X.; Xiao, M.; Guo, J.; Cai, M.; Yang, J.; Li, Y.; Meng, X.; et al. A Deep Learning Algorithm using CT Images to Screen for Corona Virus Disease (COVID-19). Eur. Radiol. 2021, 31, 6096–6104. [Google Scholar] [CrossRef] [PubMed]
  16. Asif, S.; Wenhui, Y.; Jin, H.; Jinhai, S. Classification of COVID-19 from Chest X-ray images using Deep Convolutional Neural Network. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; pp. 426–433. [Google Scholar]
  17. Alazab, M.; Awajan, A.; Mesleh, A.; Abraham, A.; Jatana, V.; Alhyari, S. COVID-19 Prediction and Detection using Deep Learning. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 2020, 12, 168–181. [Google Scholar]
  18. Tuncer, T.; Ozyurt, F.; Dogan, S.; Subasi, A. A Novel COVID-19 and Pneumonia Classification Method Based on F-Transform. Chemom. Intell. Lab. Syst. 2021, 210, 104256. [Google Scholar] [CrossRef]
  19. Che Azemin, M.Z.; Hassan, R.; Mohd Tamrin, M.I.; Md Ali, M.A. COVID-19 Deep Learning Prediction Model using Publicly Available Radiologist-Adjudicated Chest X-ray images as Training data: Preliminary Findings. Int. J. Biomed. Imaging 2020, 2020, 8828855. [Google Scholar] [CrossRef]
  20. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated Detection of COVID-19 Cases using Deep Neural Networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef]
  21. Ahsan, M.M.; Gupta, K.D.; Islam, M.M.; Sen, S.; Rahman, M.L.; Shakhawat Hossain, M. COVID-19 Symptoms Detection Based on NasNetMobile with Explainable AI Using Various Imaging Modalities. Mach. Learn. Knowl. Extr. 2020, 2, 27. [Google Scholar] [CrossRef]
  22. Asadollahi-Amin, A.; Hasibi, M.; Ghadimi, F.; Rezaei, H.; SeyedAlinaghi, S. Lung Involvement Found on Chest CT Scan in a Pre-Symptomatic Person with SARS-CoV-2 Infection: A Case Report. Trop. Med. Infect. Dis. 2020, 5, 56. [Google Scholar] [CrossRef]
  23. Khurshid, Z.; Asiri, F.Y.I.; Al Wadaani, H. Human Saliva: Non-Invasive Fluid for Detecting Novel Coronavirus (2019-nCoV). Int. J. Environ. Res. Public Health 2020, 17, 2225. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Vashist, S.K. In Vitro Diagnostic Assays for COVID-19: Recent Advances and Emerging Trends. Diagnostics 2020, 10, 202. [Google Scholar] [CrossRef] [Green Version]
  25. Zhou, W.; Gao, S.; Zhang, L.; Lou, X. Histogram of Oriented Gradients Feature Extraction From Raw Bayer Pattern Images. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 946–950. [Google Scholar] [CrossRef]
  26. Farooq, A.; Jia, X.; Hu, J.; Zhou, J. Multi-Resolution Weed Classification via Convolutional Neural Network and Superpixel Based Local Binary Pattern Using Remote Sensing Images. Remote Sens. 2019, 11, 1692. [Google Scholar] [CrossRef] [Green Version]
  27. Ma, C.; Hu, X.; Xiao, J.; Du, H.; Zhang, G. Improved ORB Algorithm Using Three-Patch Method and Local Gray Difference. Sensors 2020, 20, 975.
  28. Chohan, F.; Ishak, A.; Alderette, T.; Rad, P.; Michel, G. Clinical Presentation of a COVID-19 Delta Variant Patient: Case Report and Literature Review. Cureus 2021, 13, e18603.
  29. Jamil, S.; Rahman, M.; Ullah, A.; Badnava, S.; Forsat, M.; Mirjavadi, S.S. Malicious UAV Detection Using Integrated Audio and Visual Features for Public Safety Applications. Sensors 2020, 20, 3923.
  30. Fulton, L.V.; Dolezel, D.; Harrop, J.; Yan, Y.; Fulton, C.P. Classification of Alzheimer’s Disease with and without Imagery Using Gradient Boosted Machines and ResNet-50. Brain Sci. 2019, 9, 212.
  31. Awan, M.J.; Bilal, M.H.; Yasin, A.; Nobanee, H.; Khan, N.S.; Zain, A.M. Detection of COVID-19 in Chest X-ray Images: A Big Data Enabled Deep Learning Approach. Int. J. Environ. Res. Public Health 2021, 18, 10147.
  32. Ismail, A.; Elpeltagy, M.; Zaki, M.S.; Eldahshan, K. A New Deep Learning-Based Methodology for Video Deepfake Detection Using XGBoost. Sensors 2021, 21, 5413.
  33. Bansal, P.; Kumar, R.; Kumar, S. Disease Detection in Apple Leaves Using Deep Convolutional Neural Network. Agriculture 2021, 11, 617.
  34. Jamil, S.; Rahman, M.; Haider, A. Bag of Features (BoF) Based Deep Learning Framework for Bleached Corals Detection. Big Data Cogn. Comput. 2021, 5, 53.
  35. Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very Deep Convolutional Neural Networks for Complex Land Cover Mapping Using Multispectral Remote Sensing Imagery. Remote Sens. 2018, 10, 1119.
  36. Srinivasu, P.N.; SivaSai, J.G.; Ijaz, M.F.; Bhoi, A.K.; Kim, W.; Kang, J.J. Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM. Sensors 2021, 21, 2852.
  37. Alhichri, H.; Bazi, Y.; Alajlan, N.; Bin Jdira, B. Helping the Visually Impaired See via Image Multi-labeling Based on SqueezeNet CNN. Appl. Sci. 2019, 9, 4656.
  38. Liu, G.; Zhang, C.; Xu, Q.; Cheng, R.; Song, Y.; Yuan, X.; Sun, J. I3D-Shufflenet Based Human Action Recognition. Algorithms 2020, 13, 301.
  39. Mateen, M.; Wen, J.; Song, S.; Huang, Z. Fundus Image Classification Using VGG-19 Architecture with PCA and SVD. Symmetry 2019, 11, 1.
  40. Zhang, D.; Ren, F.; Li, Y.; Na, L.; Ma, Y. Pneumonia Detection from Chest X-ray Images Based on Convolutional Neural Network. Electronics 2021, 10, 1512.
  41. Loey, M.; Smarandache, F.; Khalifa, N.E.M. Within the Lack of Chest COVID-19 X-ray Dataset: A Novel Detection Model Based on GAN and Deep Transfer Learning. Symmetry 2020, 12, 651.
  42. Misra, S.; Jeon, S.; Lee, S.; Managuli, R.; Jang, I.-S.; Kim, C. Multi-Channel Transfer Learning of Chest X-ray Images for Screening of COVID-19. Electronics 2020, 9, 1388.
  43. Lee, K.S.; Kim, J.Y.; Jeon, E.T.; Choi, W.S.; Kim, N.H.; Lee, K.Y. Evaluation of Scalability and Degree of Fine-Tuning of Deep Convolutional Neural Networks for COVID-19 Screening on Chest X-ray Images Using Explainable Deep-Learning Algorithm. J. Pers. Med. 2020, 10, 213.
  44. Bourouis, S.; Alharbi, A.; Bouguila, N. Bayesian Learning of Shifted-Scaled Dirichlet Mixture Models and Its Application to Early COVID-19 Detection in Chest X-ray Images. J. Imaging 2021, 7, 7.
  45. Alam, N.A.; Ahsan, M.; Based, M.A.; Haider, J.; Kowalski, M. COVID-19 Detection from Chest X-ray Images Using Feature Fusion and Deep Learning. Sensors 2021, 21, 1480.
  46. Duran-Lopez, L.; Dominguez-Morales, J.P.; Corral-Jaime, J.; Vicente-Diaz, S.; Linares-Barranco, A. COVID-XNet: A Custom Deep Learning System to Diagnose and Locate COVID-19 in Chest X-ray Images. Appl. Sci. 2020, 10, 5683.
  47. Civit-Masot, J.; Luna-Perejón, F.; Domínguez Morales, M.; Civit, A. Deep Learning System for COVID-19 Diagnosis Aid Using X-ray Pulmonary Images. Appl. Sci. 2020, 10, 4640.
  48. Alshazly, H.; Linse, C.; Barth, E.; Martinetz, T. Explainable COVID-19 Detection Using Chest CT Scans and Deep Learning. Sensors 2021, 21, 455.
  49. Chattopadhyay, S.; Dey, A.; Singh, P.K.; Geem, Z.W.; Sarkar, R. COVID-19 Detection by Optimizing Deep Residual Features with Improved Clustering-Based Golden Ratio Optimizer. Diagnostics 2021, 11, 315.
  50. Guiot, J.; Vaidyanathan, A.; Deprez, L.; Zerka, F.; Danthine, D.; Frix, A.-N.; Thys, M.; Henket, M.; Canivet, G.; Mathieu, S.; et al. Development and Validation of an Automated Radiomic CT Signature for Detecting COVID-19. Diagnostics 2021, 11, 41.
  51. Irfan, M.; Iftikhar, M.A.; Yasin, S.; Draz, U.; Ali, T.; Hussain, S.; Bukhari, S.; Alwadie, A.S.; Rahman, S.; Glowacz, A.; et al. Role of Hybrid Deep Neural Networks (HDNNs), Computed Tomography, and Chest X-rays for the Detection of COVID-19. Int. J. Environ. Res. Public Health 2021, 18, 3056.
  52. Hasan, A.M.; AL-Jawad, M.M.; Jalab, H.A.; Shaiba, H.; Ibrahim, R.W.; AL-Shamasneh, A.R. Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features. Entropy 2020, 22, 517.
  53. Fujioka, T.; Takahashi, M.; Mori, M.; Tsuchiya, J.; Yamaga, E.; Horii, T.; Yamada, H.; Kimura, M.; Kimura, K.; Kitazume, Y.; et al. Evaluation of the Usefulness of CO-RADS for Chest CT in Patients Suspected of Having COVID-19. Diagnostics 2020, 10, 608.
  54. Sallay, H.; Bourouis, S.; Bouguila, N. Online Learning of Finite and Infinite Gamma Mixture Models for COVID-19 Detection in Medical Images. Computers 2021, 10, 6.
  55. Zulkifley, M.A.; Abdani, S.R.; Zulkifley, N.H. COVID-19 Screening Using a Lightweight Convolutional Neural Network with Generative Adversarial Network Data Augmentation. Symmetry 2020, 12, 1530.
Figure 1. Variants of concern and their origin.
Figure 2. Proposed framework for the classification of COVID-19 variants.
Figure 3. CT scan planes.
Figure 4. 2D DWT applied to chest X-ray of COVID-19-affected patient.
Figure 5. Detection of COVID-19 with DCNN.
Figure 6. Spatial features extraction.
Figure 7. Sample CT scan and X-ray images of the different variants from the database (one column per class): (a) normal samples; (b) alpha-variant samples; (c) beta-variant samples; (d) gamma-variant samples; (e) delta-variant samples.
Figure 8. Confusion matrices of the first stage of the proposed framework for (a) chest X-rays and (b) CT scans.
Figure 9. ROC curves of the first stage of the proposed framework: (a) training ROC; (b) testing ROC.
Figure 10. Comparison of the first-stage accuracy of the proposed method with others using chest X-rays.
Figure 11. Comparison of the first-stage accuracy of the proposed method with others using CT scan images.
Figure 12. Comparison of the alpha, beta, gamma, and delta variants.
Figure 13. Confusion matrices of the second stage of the proposed framework for (a) chest X-rays and (b) CT scans.
Table 1. Summary of hyperparameters for DCNN training.

| Parameter Name      | Parameter Value                   |
|---------------------|-----------------------------------|
| Learning rate       | 10⁻³                              |
| Momentum            | 0.9                               |
| Optimizer           | Stochastic Gradient Descent (SGD) |
| Learning rate decay | 10⁻⁷                              |
| Mini-batch size     | 64                                |
| Loss function       | Cross entropy                     |
Table 2. Testing performance of different DCNNs for the chest X-ray dataset.

| Name of DCNN             | Accuracy | Specificity | Sensitivity | F1-Score | Cohen’s Kappa (κ) | Classification Error |
|--------------------------|----------|-------------|-------------|----------|-------------------|----------------------|
| AlexNet [29]             | 96.4%    | 93.5%       | 92.5%       | 93.00%   | 0.91              | 3.6%                 |
| ResNet-50 [30]           | 97.5%    | 94.6%       | 93.6%       | 94.10%   | 0.92              | 2.5%                 |
| Inception v3 [31]        | 98.5%    | 95.6%       | 94.6%       | 95.10%   | 0.93              | 1.5%                 |
| DarkNet-53 [32]          | 96.2%    | 93.3%       | 92.3%       | 92.80%   | 0.90              | 3.8%                 |
| EfficientNet-b7 [33]     | 98.9%    | 96.0%       | 95.0%       | 95.50%   | 0.93              | 1.1%                 |
| GoogLeNet [34]           | 96.4%    | 97.4%       | 95.4%       | 96.39%   | 0.94              | 3.6%                 |
| Inception ResNet v2 [35] | 98.7%    | 96.5%       | 97.1%       | 96.80%   | 0.95              | 1.3%                 |
| MobileNet v2 [36]        | 98.4%    | 96.3%       | 99.1%       | 97.68%   | 0.95              | 1.6%                 |
| SqueezeNet [37]          | 89.1%    | 90.4%       | 93.3%       | 91.83%   | 0.89              | 10.9%                |
| ShuffleNet [38]          | 99.1%    | 99.1%       | 100%        | 99.55%   | 0.97              | 0.9%                 |
| VGG-19 [39]              | 92.2%    | 94.6%       | 96.1%       | 95.34%   | 0.93              | 7.8%                 |
| Xception [40]            | 98.5%    | 99.2%       | 100%        | 99.60%   | 0.98              | 1.5%                 |
| Proposed                 | 99.74%   | 99.6%       | 100%        | 99.80%   | 0.99              | 0.3%                 |
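The metrics reported in Tables 2 and 3 all follow from the counts of a binary confusion matrix. As an illustrative helper (assumed, not taken from the paper; the function name and the TP/FP/FN/TN toy counts are ours):

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute the evaluation metrics used in Tables 2 and 3
    from binary confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (p_o - p_e) / (1 - p_e)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1,
            "kappa": kappa, "error": 1 - accuracy}

# Toy example: 95 true positives, 3 false positives,
# 5 false negatives, 97 true negatives.
m = binary_metrics(tp=95, fp=3, fn=5, tn=97)
```

For these toy counts the helper yields accuracy 0.96, sensitivity 0.95, specificity 0.97, and kappa 0.92; classification error is simply one minus accuracy, which matches how the last column of each table complements the first.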
Table 3. Testing performance of different DCNNs for the CT scan dataset.

| Name of DCNN             | Accuracy | Specificity | Sensitivity | F1-Score | Cohen’s Kappa (κ) | Classification Error |
|--------------------------|----------|-------------|-------------|----------|-------------------|----------------------|
| AlexNet [29]             | 94.1%    | 93.2%       | 95.3%       | 94.24%   | 0.92              | 5.9%                 |
| ResNet-50 [30]           | 95.2%    | 95.6%       | 94.6%       | 95.10%   | 0.93              | 4.8%                 |
| Inception v3 [31]        | 96.2%    | 94.6%       | 96.1%       | 95.34%   | 0.93              | 3.8%                 |
| DarkNet-53 [32]          | 93.9%    | 96.3%       | 99.1%       | 97.68%   | 0.95              | 6.1%                 |
| EfficientNet-b7 [33]     | 96.6%    | 99.2%       | 100%        | 99.60%   | 0.98              | 3.4%                 |
| GoogLeNet [34]           | 94.1%    | 93.5%       | 92.5%       | 93.00%   | 0.91              | 5.9%                 |
| Inception ResNet v2 [35] | 96.4%    | 93.3%       | 92.3%       | 92.80%   | 0.90              | 3.6%                 |
| MobileNet v2 [36]        | 96.1%    | 96.5%       | 97.1%       | 96.80%   | 0.95              | 3.9%                 |
| SqueezeNet [37]          | 86.8%    | 90.4%       | 93.3%       | 91.83%   | 0.89              | 13.2%                |
| ShuffleNet [38]          | 96.8%    | 97.4%       | 95.4%       | 96.39%   | 0.94              | 3.2%                 |
| VGG-19 [39]              | 89.9%    | 90.4%       | 93.3%       | 91.83%   | 0.90              | 10.1%                |
| Xception [40]            | 96.2%    | 99.1%       | 100%        | 99.55%   | 0.97              | 3.8%                 |
| Proposed                 | 99.5%    | 99.7%       | 99.3%       | 99.50%   | 0.99              | 0.6%                 |
Jamil, S.; Rahman, M. A Dual-Stage Vocabulary of Features (VoF)-Based Technique for COVID-19 Variants’ Classification. Appl. Sci. 2021, 11, 11902. https://doi.org/10.3390/app112411902