Article

Deep Learning Models for COVID-19 Detection

1 Department of Electrical and Electronic Engineering, Near East University, North Cyprus via Mersin 10, Nicosia 99138, Turkey
2 Department of Radiology, Dr. Suat Günsel University Faculty of Medicine Kyrenia, North Cyprus via Mersin 10, Kyrenia 99300, Turkey
3 Artificial Intelligence Engineering Department, Research Center for AI and IoT, AI and Robotics Institute, Near East University, North Cyprus via Mersin 10, Nicosia 99138, Turkey
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(10), 5820; https://doi.org/10.3390/su14105820
Submission received: 25 March 2022 / Revised: 29 April 2022 / Accepted: 29 April 2022 / Published: 11 May 2022
(This article belongs to the Topic Big Data and Artificial Intelligence)

Abstract

Healthcare is one of the crucial application areas of the Internet of Things. Connected machine learning-based systems provide faster healthcare services, and doctors and radiologists can also use these systems collaboratively to provide better help to patients. The recently emerged Coronavirus (COVID-19) is known to be highly infectious. Reverse transcription-polymerase chain reaction (RT-PCR) is recognised as one of the primary diagnostic tools; however, RT-PCR tests are not always accurate. As an alternative, doctors can apply artificial intelligence techniques to X-ray and CT scans for analysis. Artificial intelligence methods need a large number of images, which might not be available during a pandemic. In this paper, a novel data-efficient deep network is proposed for the identification of COVID-19 on CT images. This method enlarges the small number of available CT scans by generating synthetic versions using a generative adversarial network (GAN). Then, the parameters of the convolutional and fully connected layers of the deep networks are estimated using the synthetic and augmented data. The results show that the GAN-based deep learning model provides higher performance than classic deep learning models for COVID-19 detection. The performance evaluation is performed on the COVID19-CT and Mosmed datasets. The best performing models are ResNet-18 and MobileNetV2 on COVID19-CT and Mosmed, respectively, with area under the curve (AUC) values of 0.89 and 0.84.

1. Introduction

A new pneumonia-causing Coronavirus disease (COVID-19) was detected in Wuhan, China, in 2019 [1]. The virus responsible, SARS-CoV-2, is a successor to the earlier SARS-CoV and has a higher infection ability than related viruses. As a result, many people have been infected and require medical care in hospitals. Recent works can be found in [2,3,4,5,6].
A recent survey [7] reviews the different methods for COVID-19 detection. Reverse transcription-polymerase chain reaction (RT-PCR) tests are used to detect the virus in the human body. Computed tomography (CT) scans and X-ray images are other ways of identifying COVID-19. X-ray images show SARS-CoV-2 infection areas in the human lungs, and CT scans can additionally be used to visualise the lungs in 3D to assess the level of severity.
Convolutional neural networks (CNNs) are powerful models for image analysis and can be used to diagnose COVID-19 on X-rays and CT scans. Their main advantage is that they can detect the virus in images faster than doctors and radiologists. Today, CNN models are used to diagnose diseases on medical images: they have been utilised for skin lesion detection on digital images [8,9] and for retinography diagnosis [10,11,12]. Furthermore, CNN models allow the diagnosis of COVID-19 on X-rays and CT scans. Well-known CNN models include AlexNet [13], GoogleNet [14], VGG [15], MobileNetV2 [16], ResNet [17], and DenseNet [18]. These models are typically trained on the ImageNet dataset [19] and then fine-tuned on medical images.
CNN models are data-hungry and require many images for training. However, accessing a large number of CT images during a pandemic might not be possible or may take a long time, and inadequate data might hamper the use of artificial intelligence-based models for COVID-19 detection. Data-efficient CNN models, on the other hand, are built on small sets of available images and allow fast modelling of the disease. Therefore, data-efficient models can make a significant contribution to the rapid diagnosis of COVID-19 during a pandemic.
In this paper, a data-efficient GAN-based CNN method (Figure 1) is proposed for COVID-19 detection from CT scans. The approach first generates synthetic and augmented images and then trains CNN models on these datasets. Synthetic images allow more information to be extracted from the CT scans and modelled by the CNNs. The enhanced models and the CNN models based only on augmented data are compared on two publicly available datasets. The results show that the enhanced models outperform the classic CNN models.
The advantages of the proposed method are as follows:
  • The proposed model builds on augmented and synthetic CT images of the chests of COVID-19 patients, whereas the classic models only employ augmented images, refs. [20,21,22,23].
  • The proposed CNN learns more possible COVID-19 signs from synthetic CT images than the classic CNN models.
The main novelties of this work are:
  • The proposed novel method utilises a GAN model to generate unseen COVID-19 and normal CT images from a small database. This approach allows the CNN model to learn all possible image deformations for better modelling of the CT images. In contrast, classic CNN models rely on data augmentation techniques for improved performance. However, image augmentation only generates more COVID-19 and normal images with different views and orientations; the deformation of the lungs on the CT images remains the same in the generated data.
  • A method is proposed for fusing synthetic and augmented CT scans for generating enhanced CNN models for COVID-19 detection.
  • Data-efficient enhanced ResNet-18, ResNet-50, VGG, MobileNetV2, AlexNet, and DenseNet-121 models are proposed for the diagnosis of COVID-19.
The content of this paper is as follows. First, information about the related databases is given. Second, the generation of augmented images is described. Then, the GAN model for synthetic CT scan image creation is explained. The architecture of the GAN model is also explained. Furthermore, the enhanced CNN models based on GAN models are described. Finally, the performance of the proposed methods is evaluated and discussed.

2. Related Work

2.1. Convolutional Neural Networks

He et al. [24] and Hu et al. [25] used CNN models to detect COVID-19 on CT images. He et al. [24] proposed a new transfer learning approach to train a CNN on available CT images. Furthermore, Hu et al. [25] proposed the generation of a CNN model using a small number of CT images.
Mei et al. [26] combined a convolutional neural network and a support vector machine to classify COVID-19 related CT images. The new model architecture was described for more accurate COVID-19 identification on CT images.
Harmon et al. [27] proposed a DenseNet-121 network for differentiating COVID-19 from viral pneumonia. The classification accuracy was evaluated using several datasets.
Bhandary et al. [28] replaced the last layer of several CNN model architectures and they introduced a support vector machine. The authors evaluated the performance of this new architecture for COVID-19 diagnosis. The proposed network also detected cancer using CT and X-ray images.
Butt et al. [29] used 3D CT scans to classify COVID-19 and viral pneumonia. The authors processed each CT image patch and used these patches as inputs to a ResNet-18 model to detect COVID-19.

2.2. Generative Adversarial Networks

Waheed et al. [1] and Loey et al. [30] utilised a convolutional neural network and generative adversarial network (GAN) for the diagnosis of COVID-19. The authors generated synthetic medical images using a GAN model to create a CNN model.
Generative adversarial networks have been used for medical imaging. Well-known GAN models include the vanilla GAN, the deep convolutional GAN (DCGAN), pix2pix, and CycleGAN [31]. The authors of [1,32,33,34] mainly used vanilla GAN and DCGAN models for generating synthetic images.

3. Method

Figure 1 shows the proposed data-efficient convolutional neural network (CNN) model for COVID-19 detection. The proposed method uses the deep convolutional generative adversarial network (DCGAN) and data augmentation to increase the small number of available CT scans. First, data augmentation creates many images. These images are then used as input to the DCGAN model to produce synthetic images. Next, a CNN model is trained on both the synthetically generated and the augmented images. In the testing stage, the trained CNN classifies CT scans as COVID-19 or non-COVID-19.
The proposed data-efficient CNN models are generated and tested on COVID19-CT [24] and MosMed [35] datasets. Table 1 describes the number of images and related categories.

3.1. COVID19-CT Database

The authors of [24] prepared the COVID19-CT dataset for research purposes. This dataset comprises 349 COVID-19 and 397 normal CT images. Figure 2 presents samples of COVID-19 and normal images.

3.2. Mosmed Database

The authors of [35] collected 1110 COVID-19 and non-COVID-19 CT scans from hospitals in Moscow, Russia. The authors also graded the COVID-19-related CT scans as normal, mild, moderate, or severe. Sample images are presented in Figure 2.

3.3. Augmented Datasets

The DCGAN model produces synthetic versions of the available images in the dataset. However, GAN models require a large number of images for accurate modelling. Therefore, the available images are first augmented to increase their quantity, and the DCGAN then uses this enlarged set to produce synthetic images.
Table 2 shows the number of augmented (Aug.) CT scans for the COVID19-CT and Mosmed datasets. The datasets are split into training and testing sets, and the CT scans of the training set are then rotated to increase their number in both datasets.
Table 2 also shows the number of synthetically generated (GAN) CT scans. The datasets included a combination of synthetic and augmented images (Aug+GAN) and the number of combined images is also reported in Table 2.

3.4. Synthetic CT Image Generation

Synthetic CT image generation is based on the DCGAN method [34] shown in Figure 1. The DCGAN method is an improved version of the GAN method [32]. Figure 3 shows synthetically generated CT scans from the COVID19-CT dataset. This network comprises a generator and a discriminator: the discriminator builds on convolution layers, while the generator builds on convolutional transpose layers.
This method employs the generator G(z) to produce synthetic images and the discriminator D(x) to classify a given image as synthetic or real. Latent vectors drawn from a noise distribution are used as inputs to the generator, which outputs synthetic images. These generated images are fed to the discriminator together with real images, and the discriminator classifies each input as real or synthetic. This process is repeated for many latent vectors while the optimisation function is minimised. This function is defined by
min_G max_D V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))]
where the data are denoted by x, and the generator distribution and noise variables are denoted by p_g and p(z), respectively.
The discriminator employs several convolutions, and this method utilises batch normalisation and LeakyReLU operations after each convolution.
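A minimal PyTorch sketch of such a DCGAN pair is given below; the layer widths and the 32 × 32 single-channel output resolution are illustrative assumptions, not the exact architecture used in the paper:

```python
import torch
import torch.nn as nn

# DCGAN sketch in the style of Radford et al. [34]: the generator stacks
# transposed convolutions, the discriminator stacks convolutions with
# batch normalisation and LeakyReLU, as described in the text.
class Generator(nn.Module):
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 4, 4, 1, 0), nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1), nn.Tanh(),  # 32x32 grayscale output
        )

    def forward(self, z):  # z: (N, z_dim, 1, 1) latent vectors
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.BatchNorm2d(ch * 4), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 4, 1, 4, 1, 0), nn.Sigmoid(),  # probability of "real"
        )

    def forward(self, x):  # x: (N, 1, 32, 32)
        return self.net(x).view(-1)
```

During training, the two networks are optimised adversarially against the minimax objective given above.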
Figure 2. Sample images of COVID19-CT dataset. (a) COVID-19, (b) Normal.
Figure 3. Synthetic images of COVID19-CT dataset. (a) COVID-19, (b) Normal.

3.5. Model Generations

The AlexNet, VGG, ResNet-18, ResNet-50, MobileNetV2, and DenseNet-121 networks were generated using augmented and synthetic images. RGB images of size 224 × 224 × 3 are used as inputs to the fine-tuned convolutional neural networks (CNNs) for training. All networks are pretrained on the ImageNet dataset and then further trained using the augmented and synthetic images. Fine-tuning is achieved by freezing all convolutional layers and adapting the last fully connected layer for COVID-19 and non-COVID-19 classification.

3.6. COVID-19 Prediction

Figure 1 presents the data-efficient deep learning method. Each CT image is used as input to the deep network, and the network classifies it as COVID-19 or non-COVID-19.
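As a sketch, this prediction step can be written as follows; the class index ordering is an assumption for illustration:

```python
import torch

LABELS = ["non-COVID-19", "COVID-19"]  # assumed class index order

@torch.no_grad()
def predict(model, image):
    """Classify one CT image tensor of shape (C, H, W)."""
    model.eval()
    logits = model(image.unsqueeze(0))     # add a batch dimension
    probs = torch.softmax(logits, dim=1)
    return LABELS[int(probs.argmax(dim=1))]
```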

3.7. Optimisation Settings

The optimisation settings are as follows: momentum 0.9, batch size 256, learning rate 0.0001, and 50 training epochs.
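These settings translate directly into a training setup; SGD with momentum is assumed here, since the paper reports a momentum value but does not name the optimiser explicitly:

```python
import torch

BATCH_SIZE = 256   # batch size reported in Section 3.7
EPOCHS = 50        # number of training epochs

def make_optimizer(model):
    """SGD over the trainable (unfrozen) parameters only."""
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.SGD(trainable, lr=1e-4, momentum=0.9)
```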

3.8. Implementation

A desktop computer was used to run the experiments. This computer is equipped with an Intel Corei7-4790 CPU and NVIDIA GeForce GTX-1080Ti graphics card.

3.9. Software

We used the PyTorch deep learning library to implement and test the proposed methodology.

4. Performance Evaluation

The classic deep learning method and the proposed data-efficient methods were evaluated using the COVID19-CT and Mosmed datasets.
Performance evaluation was carried out using the following metrics: the area under the receiver operating characteristic (ROC) curve (AUC), accuracy (ACC), sensitivity (SE), and specificity (SP). Accuracy, sensitivity, and specificity are defined as:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
where true positives, true negatives, false positives, and false negatives are denoted by TP, TN, FP, and FN, respectively.
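The three formulas above reduce to simple ratios over the confusion-matrix counts:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }
```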

Comparison between Classic Deep Learning Method and Proposed Data-Efficient Method

Table 3 reports the performances of the AlexNet, VGG, ResNet-18, ResNet-50, MobileNetV2, and DensNet-121 deep learning models. These models only build on augmented data of the CT scans. These models were evaluated for both the COVID19-CT and Mosmed datasets.
Table 3 also reports the performances of the proposed data-efficient versions of these models, which build on both augmented and synthetic CT data and were likewise evaluated on both the COVID19-CT and Mosmed datasets.
Table 3 also enables a comparison between the classic deep learning method and the proposed data-efficient method. First, the comparisons were performed using the COVID19-CT dataset. All data-efficient AlexNet, VGG, ResNet-18, ResNet-50, MobileNetV2, and DenseNet-121 models outperformed the classic convolutional models. Furthermore, the ResNet-18 model built on augmented and synthetic data outperformed the ResNet-18 model built only on augmented data, providing an AUC value of 0.89 and outperforming all other models.
The comparisons were then performed using the Mosmed dataset. Again, all data-efficient AlexNet, VGG, ResNet-18, ResNet-50, MobileNetV2, and DenseNet-121 models outperformed the classic convolutional models. Furthermore, the MobileNetV2 model built on augmented and synthetic data outperformed the MobileNetV2 model built only on augmented data, providing an AUC value of 0.84 and outperforming all other models.

5. Discussion

The proposed models are also compared with the recent work of ref. [30]. The ResNet-18 and ResNet-50 models give sensitivity values of 0.88 and 0.95, respectively, whereas the authors of ref. [30] report a sensitivity of 0.85 for their ResNet-50 model. Sensitivity measures COVID-19 detection ability, so the proposed method detects COVID-19 more accurately than the other work. Training models on both synthetic and augmented datasets increases the sensitivity values. Table 4 presents a comparison of the proposed and other works.
All data-efficient models outperformed the classic convolutional neural networks on the COVID19-CT and Mosmed datasets. This shows that building a CNN on synthetic and augmented images allows better recognition of COVID-19 on CT scans. The reason is that the GAN produces synthetic CT images reflecting different COVID-19-related deformations of the scans. Since the synthetic images cover large variations of COVID-19 disease signs, the CNN can capture the details of these signs on the images. When synthetically generated images are used in conjunction with augmented data, CNN models perform better than CNN models based on augmented data alone.
This study also shows that the different CNN models exhibited varying performance results for the two datasets. The models and their performances are listed in Table 4. In this table, the best performing model on the COVID19-CT dataset is ResNet-18, while the best performing model on the Mosmed dataset is MobileNetV2. Both of these models employed augmented and synthetically generated CT images for COVID-19 detection.

6. Conclusions

The proposed data-efficient ResNet-18 and ResNet-50 models give sensitivity values of 0.88 and 0.95, respectively. Sensitivity measures COVID-19 detection ability, and the proposed method detects COVID-19 more accurately than other works. We found that accurate deep networks can be generated from limited data, which we achieved using synthetic image generation and augmentation techniques.
The proposed machine learning-based systems provide faster healthcare services. Doctors and radiologists can also use these systems for collaboration to provide better help to patients.
The main novelty of the proposed method is that CNN networks can be generated from the few CT images available during pandemic situations. It is well known that accessing CT images during a pandemic might be problematic. Therefore, this paper presents novel data-efficient networks for the diagnosis of COVID-19 from CT images. The method builds on a convolutional neural network and a deep convolutional generative adversarial network (DCGAN). It utilises the DCGAN model to produce unseen COVID-19 and normal CT images from a small dataset, and the CNN then uses these synthetically generated CT images to learn all possible virus signs on the images. The experiments show that the proposed method achieves higher accuracy than the classic CNN networks.

Author Contributions

Methodology, S.S., M.A.D. and F.A.-T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors do not have a conflict of interest.

References

  1. Waheed, A.; Goyal, M.; Gupta, D.; Khanna, A.; Al-Turjman, F.; Pinheiro, P.R. CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection. IEEE Access 2020, 8, 91916–91923.
  2. Nasir, A.; Shaukat, K.; Hameed, I.A.; Luo, S.; Alam, T.M.; Iqbal, F. A Bibliometric Analysis of Corona Pandemic in Social Sciences: A Review of Influential Aspects and Conceptual Structure. IEEE Access 2020, 8, 133377–133402.
  3. Alali, Y.; Harrou, F.; Sun, Y. A proficient approach to forecast COVID-19 spread via optimized dynamic machine learning models. Sci. Rep. 2022, 12, 2467.
  4. Baig, T.I.; Alam, T.M.; Anjum, T.; Naseer, S.; Wahab, A.; Imtiaz, M.; Raza, M.M. Classification of Human Face: Asian and Non-Asian People. In Proceedings of the 2019 International Conference on Innovative Computing (ICIC), Seoul, Korea, 26–29 August 2019; pp. 1–6.
  5. Alam, T.M.; Shaukat, K.; Khelifi, A.; Khan, W.A.; Raza, H.M.E.; Idrees, M.; Luo, S.; Hameed, I.A. Disease diagnosis system using IoT empowered with fuzzy inference system. Comput. Mater. Contin. 2022, 7, 5305–5319.
  6. Kogilavani, S.; Prabhu, J.; Sandhiya, R.; Kumar, M.S.; Subramaniam, U.; Karthick, A.; Muhibbullah, M.; Imam, S.B.S. COVID-19 detection based on lung CT scan using deep learning techniques. Comput. Math. Methods Med. 2022, 2022, 7672196.
  7. Aileni, M.; Rohela, G.K.; Jogam, P.; Soujanya, S.; Zhang, B. Biotechnological Perspectives to Combat the COVID-19 Pandemic: Precise Diagnostics and Inevitable Vaccine Paradigms. Cells 2022, 11, 1182.
  8. Serte, S.; Demirel, H. Gabor wavelet-based deep learning for skin lesion classification. Comput. Biol. Med. 2019, 113, 103423.
  9. Serte, S.; Demirel, H. Wavelet-based deep learning for skin lesion classification. IET Image Process. 2020, 14, 720–726.
  10. Serener, A.; Serte, S. Geographic variation and ethnicity in diabetic retinopathy detection via deep learning. Turk. J. Electr. Eng. Comput. Sci. 2020, 28, 664–678.
  11. Serener, A.; Serte, S. Transfer Learning for Early and Advanced Glaucoma Detection with Convolutional Neural Networks. In Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019; pp. 1–4.
  12. Serte, S.; Serener, A. A Generalized Deep Learning Model for Glaucoma Detection. In Proceedings of the 2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Ankara, Turkey, 11–13 October 2019; pp. 1–5.
  13. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; pp. 1097–1105.
  14. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  15. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
  16. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
  17. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  18. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2016, arXiv:1608.06993.
  19. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009.
  20. Almezhghwi, K.; Serte, S.; Al-Turjman, F. Convolutional neural networks for the classification of chest X-rays in the IoT era. Multimed. Tools Appl. 2021, 80, 29051–29065.
  21. Serte, S.; Demirel, H. Deep learning for diagnosis of COVID-19 using 3D CT scans. Comput. Biol. Med. 2021, 132, 104306.
  22. Serte, S.; Serener, A. Graph-based saliency and ensembles of convolutional neural networks for glaucoma detection. IET Image Process. 2021, 15, 797–804.
  23. Serte, S.; Serener, A.; Al-Turjman, F. Deep learning in medical imaging: A brief review. Trans. Emerg. Telecommun. Technol. 2020, e4080.
  24. He, X.; Yang, X.; Zhang, S.; Zhao, J.; Zhang, Y.; Xing, E.; Xie, P. Sample-Efficient Deep Learning for COVID-19 Diagnosis Based on CT Scans. medRxiv 2020.
  25. Hu, S.; Gao, Y.; Niu, Z.; Jiang, Y.; Li, L.; Xiao, X.; Wang, M.; Fang, E.F.; Menpes-Smith, W.; Xia, J.; et al. Weakly Supervised Deep Learning for COVID-19 Infection Detection and Classification From CT Images. IEEE Access 2020, 8, 118869–118883.
  26. Mei, X.; Lee, H.C.; Diao, K.Y.; Huang, M.; Lin, B.; Liu, C.; Xie, Z.; Ma, Y.; Robson, P.; Chung, M.; et al. Artificial intelligence–enabled rapid diagnosis of patients with COVID-19. Nat. Med. 2020, 26, 1224–1228.
  27. Harmon, S.A.; Sanford, T.H.; Xu, S.; Turkbey, E.B.; Roth, H.; Xu, Z.; Yang, D.; Myronenko, A.; Anderson, V.; Amalou, A.; et al. Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets. Nat. Commun. 2020, 11, 4080.
  28. Bhandary, A.; Prabhu, G.A.; Rajinikanth, V.; Thanaraj, K.P.; Satapathy, S.C.; Robbins, D.E.; Shasky, C.; Zhang, Y.D.; Tavares, J.M.R.; Raja, N.S.M. Deep-learning framework to detect lung abnormality–A study with chest X-ray and lung CT scan images. Pattern Recognit. Lett. 2020, 129, 271–278.
  29. Butt, C.; Gill, J.; Chun, D.; Babu, B.A. Deep learning system to screen coronavirus disease 2019 pneumonia. Appl. Intell. 2020, 1–7.
  30. Loey, M.; Manogaran, G.; Khalifa, N.E.M. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput. Appl. 2020, 1–13.
  31. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552.
  32. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 2672–2680.
  33. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2014, arXiv:1412.6572.
  34. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2015, arXiv:1511.06434.
  35. Morozov, S.P.; Andreychenko, A.E.; Pavlov, N.A.; Vladzymyrskyy, A.V.; Ledikhova, N.V.; Gombolevskiy, V.A.; Blokhin, I.A.; Gelezhe, P.B.; Gonchar, A.V.; Chernina, V.Y. MosMedData: Chest CT Scans With COVID-19 Related Findings Dataset. arXiv 2020, arXiv:2005.06465.
  36. Acar, E.; Şahin, E.; Yılmaz, İ. Improving effectiveness of different deep learning-based models for detecting COVID-19 from computed tomography (CT) images. Neural Comput. Appl. 2021, 33, 17589–17609.
Figure 1. Proposed data-efficient deep convolutional neural network model for COVID-19 detection.
Table 1. Total number of images in the datasets.
CT-Scan | Dataset | Train | Test
COVID-19 | COVID19-CT | 324 | 40
Normal | COVID19-CT | 293 | 37
COVID-19 | Mosmed | 168 | 20
Normal | Mosmed | 168 | 20
Table 2. Number of augmented (Aug.), synthetic (GAN), and combined (Aug+GAN) images in the datasets.
Data | CT-Scan | Dataset | Train | Test
Aug. | COVID-19 | COVID19-CT | 1393 | 40
Aug. | Normal | COVID19-CT | 1672 | 37
GAN | COVID-19 | COVID19-CT | 500 | 40
GAN | Normal | COVID19-CT | 500 | 37
Aug+GAN | Normal | COVID19-CT | 2172 | 37
Aug+GAN | COVID-19 | COVID19-CT | 1893 | 37
Aug. | COVID-19 | Mosmed | 1087 | 23
Aug. | Normal | Mosmed | 1069 | 23
GAN | COVID-19 | Mosmed | 128 | 23
GAN | Normal | Mosmed | 128 | 23
Aug+GAN | Normal | Mosmed | 1197 | 23
Aug+GAN | COVID-19 | Mosmed | 1218 | 23
Table 3. Performance comparisons.
Network | Dataset | Data | AUC | ACC | SE | SP
Resnet18 | COVID19-CT | Aug | 0.77 | 0.75 | 0.83 | 0.71
Resnet18 | COVID19-CT | Aug+GAN | 0.89 | 0.74 | 0.88 | 0.68
Resnet50 | COVID19-CT | Aug | 0.71 | 0.77 | 0.86 | 0.72
Resnet50 | COVID19-CT | Aug+GAN | 0.81 | 0.73 | 0.95 | 0.66
Vgg | COVID19-CT | Aug | 0.65 | 0.75 | 0.86 | 0.70
Vgg | COVID19-CT | Aug+GAN | 0.67 | 0.76 | 0.87 | 0.70
MobileNetV2 | COVID19-CT | Aug | 0.71 | 0.73 | 0.82 | 0.69
MobileNetV2 | COVID19-CT | Aug+GAN | 0.77 | 0.73 | 0.84 | 0.68
Densenet121 | COVID19-CT | Aug | 0.70 | 0.74 | 0.87 | 0.69
Densenet121 | COVID19-CT | Aug+GAN | 0.77 | 0.67 | 0.92 | 0.61
AlexNet | COVID19-CT | Aug | 0.60 | 0.67 | 0.72 | 0.64
AlexNet | COVID19-CT | Aug+GAN | 0.80 | 0.69 | 0.88 | 0.64
AlexNet | MosMed | Aug | 0.71 | 0.70 | 1.00 | 0.63
AlexNet | MosMed | Aug+GAN | 0.73 | 0.66 | 0.89 | 0.60
MobileNetV2 | MosMed | Aug | 0.77 | 0.67 | 0.69 | 0.65
MobileNetV2 | MosMed | Aug+GAN | 0.84 | 0.62 | 0.65 | 0.60
Resnet50 | MosMed | Aug | 0.74 | 0.69 | 0.69 | 0.69
Resnet50 | MosMed | Aug+GAN | 0.78 | 0.69 | 0.69 | 0.69
Resnet18 | MosMed | Aug | 0.70 | 0.67 | 0.68 | 0.66
Resnet18 | MosMed | Aug+GAN | 0.75 | 0.69 | 0.69 | 0.69
Vgg | MosMed | Aug | 0.63 | 0.69 | 0.71 | 0.68
Vgg | MosMed | Aug+GAN | 0.71 | 0.66 | 0.67 | 0.64
Densenet121 | MosMed | Aug | 0.60 | 0.65 | 0.64 | 0.65
Densenet121 | MosMed | Aug+GAN | 0.62 | 0.61 | 0.63 | 0.60
Table 4. Comparison with other works.
Work | Model | AUC | ACC | SE | SP
[30] | AlexNet | - | 75.73 | 63.83 | 87.62
[36] | VGG16 | - | 0.90 | 0.91 | 0.80
Proposed Work | ResNet18 | 0.89 | 0.74 | 0.88 | 0.68

