COVID-19 Screening Using a Lightweight Convolutional Neural Network with Generative Adversarial Network Data Augmentation
Abstract
1. Introduction
2. Related Work
2.1. Convolutional Neural Network Classifier
2.2. COVID-19 Classification Using Deep Learning Models
3. Material
3.1. Chest X-ray Dataset
3.2. Synthetic Data Generation through Conditional DC-GAN
4. LightCovidNet: A Lightweight Deep Learning Model
5. Results and Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| CNN | Convolutional neural network |
| DC-GAN | Deep convolutional generative adversarial network |
| CT | Computed tomography |
| RT-PCR | Reverse transcription polymerase chain reaction |
| FLOPs | Floating-point operations |
References
- Park, G.S.; Ku, K.; Baek, S.H.; Kim, S.J.; Kim, S.I.; Kim, B.T.; Maeng, J.S. Development of Reverse Transcription Loop-Mediated Isothermal Amplification Assays Targeting Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). J. Mol. Diagn. 2020, 22, 729–735. [Google Scholar] [CrossRef] [PubMed]
- Abdani, S.R.; Zulkifley, M.A.; Zulkifley, N.H. A Lightweight Deep Learning Model for Covid-19 Detection. In Proceedings of the IEEE Symposium on Industrial Electronics & Applications (ISIEA), Shah Alam, Selangor, Malaysia, 17–18 July 2020; pp. 1–5. [Google Scholar] [CrossRef]
- Bampoe, S.; Odor, P.; Lucas, D. Novel coronavirus SARS-CoV-2 and COVID-19. Practice recommendations for obstetric anaesthesia: What we have learned thus far. Int. J. Obstet. Anesth. 2020, 43, 1–8. [Google Scholar] [CrossRef] [PubMed]
- Kosaka, H.; Meno, H. 88 affiliated with school soccer club among 91 infected with virus in western Japan city. The Mainichi, 10 August 2020. [Google Scholar]
- Lv, D.F.; Ying, Q.M.; Weng, Y.S.; Shen, C.B.; Chu, J.G.; Kong, J.P.; Sun, D.H.; Gao, X.; Weng, X.B.; Chen, X.Q. Dynamic change process of target genes by RT-PCR testing of SARS-Cov-2 during the course of a Coronavirus Disease 2019 patient. Clin. Chim. Acta 2020, 506, 172–175. [Google Scholar] [CrossRef]
- Neveu, S.; Saab, I.; Dangeard, S.; Bennani, S.; Tordjman, M.; Chassagnon, G.; Revel, M.P. Incidental diagnosis of Covid-19 pneumonia on chest computed tomography. Diagn. Interv. Imaging 2020, 101, 457–461. [Google Scholar] [CrossRef]
- Mahmud, T.; Rahman, M.A.; Fattah, S.A. CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Comput. Biol. Med. 2020, 122, 103869. [Google Scholar] [CrossRef] [PubMed]
- Moncada, D.C.; Rueda, Z.V.; Macias, A.; Suarez, T.; Ortega, H.; Velez, L.A. Reading and interpretation of chest X-ray in adults with community-acquired pneumonia. Braz. J. Infect. Dis. 2011, 15, 540–546. [Google Scholar] [CrossRef] [Green Version]
- Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv 2017, arXiv:1711.05225. [Google Scholar]
- Hani, C.; Trieu, N.; Saab, I.; Dangeard, S.; Bennani, S.; Chassagnon, G.; Revel, M. COVID-19 pneumonia: A review of typical CT findings and differential diagnosis. Diagn. Interv. Imaging 2020, 101, 263–268. [Google Scholar] [CrossRef] [PubMed]
- Pereira, R.M.; Bertolini, D.; Teixeira, L.O.; Silla, C.N.; Costa, Y.M. COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios. Comput. Methods Prog. Biomed. 2020, 105532. [Google Scholar] [CrossRef]
- Atif, M.; Sulaiman, S.A.; Shafie, A.A.; Saleem, F.; Ahmad, N. Determination of chest x-ray cost using activity based costing approach at Penang General Hospital, Malaysia. Pan Afr. Med. J. 2012, 12, 1–7. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef] [Green Version]
- Jegou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1175–1183. [Google Scholar] [CrossRef] [Green Version]
- Zulkifley, M.A.; Trigoni, N. Multiple-Model Fully Convolutional Neural Networks for Single Object Tracking on Thermal Infrared Video. IEEE Access 2018, 6, 42790–42799. [Google Scholar] [CrossRef]
- Zulkifley, M.A. Two Streams Multiple-Model Object Tracker for Thermal Infrared Video. IEEE Access 2019, 7, 32383–32392. [Google Scholar] [CrossRef]
- Iandola, F.N.; Moskewicz, M.W.; Ashraf, K.; Han, S.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
- Zulkifley, M.A.; Abdani, S.R.; Zulkifley, N.H. Pterygium-Net: A deep learning approach to pterygium detection and localization. Multimed. Tools Appl. 2019. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. Technical Report, University of Oxford. arXiv 2014, arXiv:1409.1556v6. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef] [Green Version]
- Abdani, S.R.; Zulkifley, M.A. DenseNet with Spatial Pyramid Pooling for Industrial Oil Palm Plantation Detection. In Proceedings of the 2019 International Conference on Mechatronics, Robotics and Systems Engineering, Bali, Indonesia, 4–6 December 2019. [Google Scholar]
- Howard, A.; Sandler, M.; Chen, B.; Wang, W.; Chen, L.; Tan, M.; Chu, G.; Vasudevan, V.; Zhu, Y.; Pang, R.; et al. Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar] [CrossRef]
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar] [CrossRef]
- Ma, N.; Zhang, X.; Zheng, H.T.; Sun, J. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131. [Google Scholar]
- Zulkifley, M.A.; Mohamed, N.A.; Zulkifley, N.H. Squat Angle Assessment through Tracking Body Movements. IEEE Access 2019, 7, 48635–48644. [Google Scholar] [CrossRef]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
- Apostolopoulos, I.D.; Mpesiana, T.A. Covid-19: Automatic detection from x-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 2020, 43, 635–640. [Google Scholar] [CrossRef] [Green Version]
- Panwar, H.; Gupta, P.; Siddiqui, M.K.; Morales-Menendez, R.; Singh, V. Application of deep learning for fast detection of COVID-19 in X-Rays using nCOVnet. Chaos Solitons Fractals 2020, 138, 109944. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (covid-19) using X-ray images and deep convolutional neural networks. arXiv 2020, arXiv:2003.10849. [Google Scholar]
- Sethy, P.K.; Behera, S.K. Detection of coronavirus disease (covid-19) based on deep features. Int. J. Math. Eng. Manag. Sci. 2020, 5, 643–651. [Google Scholar]
- Togacar, M.; Ergen, B.; Cömert, Z. COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches. Comput. Biol. Med. 2020, 121, 103805. [Google Scholar] [CrossRef] [PubMed]
- Ucar, F.; Korkmaz, D. COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images. Med. Hypotheses 2020, 140, 109761. [Google Scholar] [CrossRef] [PubMed]
- Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Toraman, S.; Alakus, T.B.; Turkoglu, I. Convolutional capsnet: A novel artificial neural network approach to detect COVID-19 disease from X-ray images using capsule networks. Chaos Solitons Fractals 2020, 140, 110122. [Google Scholar] [CrossRef]
- Loey, M.; Smarandache, F.; Khalifa, M.N.E. Within the Lack of Chest COVID-19 X-ray Dataset: A Novel Detection Model Based on GAN and Deep Transfer Learning. Symmetry 2020, 12, 651. [Google Scholar] [CrossRef] [Green Version]
- Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 image data collection. arXiv 2020, arXiv:2003.11597. [Google Scholar]
- Adeyemia, H.O.; Nabotha, S.A.; Yusufa, S.O.; Dadaa, O.M.; Alaob, P.O. The Development of Fuzzy Logic-Base Diagnosis Expert System for Typhoid Fever. J. Kejuruteraan 2020, 32, 9–16. [Google Scholar] [CrossRef]
- Rahman, F. The Malaysian Response to COVID-19: Building Preparedness for Surge Capacity, Testing Efficiency and Containment; Russell Publishing Ltd.: Brasted, Kent, UK, 2020. [Google Scholar]
- Shibly, K.H.; Dey, S.K.; Islam, M.T.U.; Rahman, M.M. COVID faster R—CNN: A novel framework to Diagnose Novel Coronavirus Disease (COVID-19) in X-ray images. Inform. Med. Unlocked 2020, 20, 100405. [Google Scholar] [CrossRef]
- Kermany, D.; Zhang, K.; Goldbaum, M. Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification. Mendeley Data 2018. [Google Scholar] [CrossRef]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations. arXiv 2014, arXiv:1412.6980v9. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar]
| Method | Network Architecture | Strength | Weakness |
|---|---|---|---|
| Loey et al. [39] | ResNet-18 | • Uses a deep convolutional generative adversarial network to produce synthetic data. | • Does not produce unique synthetic data, as the network is trained separately for each class.<br>• The newest benchmarked method is ResNet-18; many recent state-of-the-art methods are not compared. |
| Ucar et al. [35] | SqueezeNet | • Uses Bayesian optimization to tune the hyper-parameters. | • Only 66 images of COVID-19 cases were used for training and testing.<br>• Uses only basic data augmentation methods: shearing, flipping, contrast variation and additive noise. |
| Panwar et al. [30] | nCOVnet | • Uses a combined architecture of VGG-16 convolutional layers and five trainable dense layers. | • The network is not trained until convergence, as the training loss is still decreasing when training ends.<br>• Considers only a simple two-class problem (normal vs. COVID-19) without reporting any performance comparison to other methods. |
| Mahmud et al. [7] | CovXNet | • Uses multi-dilation convolutional layers, where group convolution is performed with several dilation rates. | • Training convergence is very erratic, fluctuating heavily after 45 epochs; with only 305 images per class, overfitting is easily spotted. |
| Sethy et al. [33] | ResNet-50 | • Uses ResNet-50 as the feature extractor and a support vector machine as the classifier. | • Not an end-to-end network, with a very low number of COVID-19 cases (25 images). |
| Ozturk et al. [36] | DarkCovidNet | • Inspired by the DarkNet object detection architecture.<br>• Performs rigorous testing using 5-fold cross-validation. | • The network does not apply any feed-forward or residual connections.<br>• The network has not been trained until convergence, as the training loss and training accuracy are still decreasing and increasing, respectively. |
| Abdani et al. [2] | SPP-COVID-Net | • Applies a spatial pyramid pooling module to extract features at various scales. | • The experiments include X-ray images taken from a side view for COVID-19 cases, which gives that class an advantage in detection performance. |
| Our Method | LightCovidNet | • Applies data augmentation through a conditional deep convolutional generative adversarial network.<br>• Applies separable convolution and a simplified spatial pyramid pooling module to produce a lightweight network.<br>• Uses the maximum available 446 images of COVID-19 cases and applies 5-fold cross-validation to compare performance with the state-of-the-art methods. | • Does not perform hyper-parameter tuning. |
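A recurring design point in the table above is separable convolution, which keeps LightCovidNet lightweight by factoring a standard convolution into a depthwise and a pointwise stage. A minimal sketch of the parameter-count arithmetic behind that saving; the kernel and channel sizes are illustrative and not taken from the paper:

```python
def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution learns k*k weights
    # for every (input channel, output channel) pair.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise stage: one k x k filter per input channel.
    # Pointwise stage: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Illustrative sizes: 3x3 kernel, 64 -> 128 channels.
std = standard_conv_params(3, 64, 128)   # 73,728 weights
sep = separable_conv_params(3, 64, 128)  # 8,768 weights
print(std, sep, round(std / sep, 1))     # roughly 8.4x fewer weights
```

For a 3×3 kernel the saving approaches a factor of k² = 9 as the channel counts grow, which is why separable convolution appears throughout the lightweight families benchmarked here (MobileNet, Xception, LightCovidNet).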
Method | Mean Accuracy | FLOPs | Total Parameters | Trainable Parameters | Non-Trainable Parameters |
---|---|---|---|---|---|
Loey et al. | 0.8528 | 22,364,607 | 11,192,003 | 11,182,275 | 9728 |
Ucar et al. | 0.8633 | 1,467,994 | 736,963 | 736,963 | 0 |
MobileNet V3 | 0.9033 | 3,304,513 | 1,665,501 | 1,653,477 | 12,024 |
MobileNet V2 | 0.9163 | 4,457,760 | 2,262,979 | 2,228,803 | 34,176 |
ShuffleNet V1 | 0.9237 | 1,801,092 | 939,531 | 900,363 | 39,168 |
MobileNet V1 | 0.9330 | 6,420,178 | 3,231,939 | 3,210,051 | 21,888 |
ShuffleNet V2 | 0.9387 | 10,702,449 | 5,384,859 | 5,351,143 | 33,716 |
Panwar et al. | 0.9416 | 29,684,614 | 14,846,787 | 14,846,787 | 0 |
Mahmud et al. | 0.9473 | 2,641,952 | 1,338,291 | 1,320,979 | 17,312 |
Narin et al. | 0.9556 | 47,134,858 | 23,593,859 | 23,540,739 | 53,120 |
Ozturk et al. | 0.9579 | 2,328,332 | 1,167,363 | 1,164,143 | 3220 |
Abdani et al. | 0.9591 | 1,719,803 | 862,331 | 859,883 | 2448 |
Xception-41 | 0.9639 | 41,626,347 | 20,867,627 | 20,813,099 | 54,528 |
LightCovidNet | 0.9697 | 1,679,643 | 841,771 | 839,803 | 1968 |
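Read as an accuracy-versus-compute trade-off, the table shows LightCovidNet reaching the highest mean accuracy at the lowest FLOPs. A small sketch, using only the FLOPs figures from the table above, of the compute ratio against the closest competitor by accuracy (Xception-41):

```python
# FLOPs figures copied from the benchmark table above.
flops = {
    "Xception-41": 41_626_347,   # mean accuracy 0.9639
    "LightCovidNet": 1_679_643,  # mean accuracy 0.9697
}

# LightCovidNet needs roughly 24.8x fewer floating-point operations
# while scoring slightly higher mean accuracy.
ratio = flops["Xception-41"] / flops["LightCovidNet"]
print(round(ratio, 1))  # 24.8
```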
Method | Mean Accuracy | FLOPs | Total Parameters |
---|---|---|---|
LightCovidNet without spatial pyramid pooling | 0.9237 | 2,364,607 | 1,192,003 |
LightCovidNet without synthetic data | 0.9493 | 1,679,643 | 841,771 |
LightCovidNet without feed-forward layer | 0.9499 | 896,283 | 450,091 |
LightCovidNet without separable convolution | 0.9637 | 2,022,511 | 1,012,731 |
LightCovidNet without dropout | 0.9642 | 1,679,643 | 841,771 |
LightCovidNet | 0.9697 | 1,679,643 | 841,771 |
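The mean accuracies in these tables are averages over 5-fold cross-validation. A minimal sketch of that averaging step; the per-fold values below are hypothetical, included only to show the computation:

```python
def mean_accuracy(fold_accuracies):
    # Average the per-fold test accuracy over all cross-validation folds.
    return sum(fold_accuracies) / len(fold_accuracies)

# Hypothetical per-fold accuracies (not the paper's actual fold results).
folds = [0.95, 0.96, 0.97, 0.98, 0.99]
print(round(mean_accuracy(folds), 4))  # 0.97
```

Averaging over folds, rather than reporting a single train/test split, reduces the variance that a small dataset (446 COVID-19 images here) would otherwise introduce into the reported accuracy.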
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zulkifley, M.A.; Abdani, S.R.; Zulkifley, N.H. COVID-19 Screening Using a Lightweight Convolutional Neural Network with Generative Adversarial Network Data Augmentation. Symmetry 2020, 12, 1530. https://doi.org/10.3390/sym12091530