COVID-19 and Artificial Intelligence: An Approach to Forecast the Severity of Diagnosis
Abstract
1. Introduction
2. Materials and Methods
2.1. Multimodal Approach Description
- At admission, we collected from patients who tested positive for COVID-19 the symptoms, clinical variables, blood tests, and chest X-ray scans, together with the radiologist's report.
- During hospitalization, each patient was assigned a COVID-19 severity score (mild, moderate, or severe), assessed from the oxygen flow rate, the need for mechanical ventilation, or patient death.
- We built artificial-intelligence modules trained on the data collected from patients with COVID-19 to predict the future severity of the diagnosis.
2.2. Retrospective Study
2.3. Clinical and Biological Variables
2.4. Chest X-ray Image Acquisition and Radiologist Report
2.5. Datasets
2.6. CXR-Score Module
2.6.1. VGG Model
Algorithm 1. The VGG-19 model description.

Input: CXR images of dimension 500 × 500 pixels from the Training CXR Dataset
Output: VGG model weights
1. epochs ← 100
2. for each image in the dataset do
3.   resize the image to 224 × 224 pixels
4.   normalize the image pixel values from (0, 255) to (0, 1)
5. end
6. Load the VGG-19 model pre-trained on the ImageNet dataset
7. Remove the last layer of the model
8. Make all layers of the model non-trainable
9. Add a Flatten layer to the model output to obtain a 1-D array of features
10. Apply batch normalization to the 1-D array of features
11. Add a fully connected layer with 256 hidden neurons
12. Apply dropout (40%) to the previous layer
13. Add a fully connected layer with 128 hidden neurons
14. Apply dropout (60%) to the previous layer
15. Apply batch normalization
16. Add a fully connected layer with four units and a softmax activation function
17. Optimize the model with the Adam optimizer (learning_rate = 0.01, decay = learning_rate/epochs)
18. Train the model for the given number of epochs with a batch size of 32
19. Save the final model
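As a sketch, Algorithm 1 maps onto the following Keras code. This is an assumed implementation, not the authors' released code: the `relu` activations on the dense layers are our assumption (the text does not state them), and the per-step learning-rate decay of the original is noted in a comment rather than implemented.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

def build_vgg_severity_model(weights="imagenet", learning_rate=0.01):
    # Steps 6-8: pre-trained VGG-19 backbone without its classifier head,
    # with all convolutional layers frozen.
    base = VGG19(weights=weights, include_top=False, input_shape=(224, 224, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.Flatten(),                       # step 9: 1-D feature array
        layers.BatchNormalization(),            # step 10
        layers.Dense(256, activation="relu"),   # step 11 (activation assumed)
        layers.Dropout(0.40),                   # step 12
        layers.Dense(128, activation="relu"),   # step 13
        layers.Dropout(0.60),                   # step 14
        layers.BatchNormalization(),            # step 15
        layers.Dense(4, activation="softmax"),  # step 16: four severity classes
    ])

    # Step 17: Adam with learning_rate = 0.01; the paper additionally applies
    # a decay of learning_rate/epochs, omitted here for brevity.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Training (step 18) would then be:
# model = build_vgg_severity_model()
# model.fit(train_images, train_labels, epochs=100, batch_size=32)
```

Algorithms 2-4 differ from this sketch only in the backbone, the dense-layer widths, the dropout rates, and the optimizer settings.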
2.6.2. ResNet Model
Algorithm 2. The ResNet50 model description.

Input: CXR images of dimension 500 × 500 pixels from the Training CXR Dataset
Output: ResNet model weights
1. epochs ← 100
2. for each image in the dataset do
3.   resize the image to 224 × 224 pixels
4.   normalize the image pixel values from (0, 255) to (0, 1)
5. end
6. Load the ResNet50 model pre-trained on the ImageNet dataset
7. Make all layers of the model non-trainable
8. Add a Flatten layer to the model output to obtain a 1-D array of features
9. Apply batch normalization to the 1-D array of features
10. Add a fully connected layer with 256 hidden neurons
11. Apply dropout (50%) to the previous layer
12. Add a fully connected layer with 128 hidden neurons
13. Apply dropout (50%) to the previous layer
14. Apply batch normalization
15. Add a fully connected layer with four units and a softmax activation function
16. Optimize the model with the Adam optimizer (learning_rate = 0.0001, decay = learning_rate/epochs)
17. Train the model for the given number of epochs with a batch size of 32
18. Save the final model
2.6.3. Inception Model
Algorithm 3. The InceptionV3 model description.

Input: CXR images of dimension 500 × 500 pixels from the Training CXR Dataset
Output: Inception model weights
1. epochs ← 100
2. for each image in the dataset do
3.   resize the image to 224 × 224 pixels
4.   normalize the image pixel values from (0, 255) to (0, 1)
5. end
6. Load the InceptionV3 model pre-trained on the ImageNet dataset
7. Make all layers of the model non-trainable
8. Add a Flatten layer to the model output to obtain a 1-D array of features
9. Apply batch normalization to the 1-D array of features
10. Add a fully connected layer with 256 hidden neurons
11. Apply dropout (40%) to the previous layer
12. Add a fully connected layer with 128 hidden neurons
13. Apply dropout (60%) to the previous layer
14. Add a fully connected layer with four units and a softmax activation function
15. Optimize the model with the RMSprop optimizer (learning_rate = 0.001, decay = learning_rate/epochs)
16. Train the model for the given number of epochs with a batch size of 32
17. Save the final model
2.6.4. DenseNet Model
Algorithm 4. The DenseNet121 model description.

Input: CXR images of dimension 500 × 500 pixels from the Training CXR Dataset
Output: DenseNet model weights
1. epochs ← 100
2. for each image in the dataset do
3.   resize the image to 224 × 224 pixels
4.   normalize the image pixel values from (0, 255) to (0, 1)
5. end
6. Load the DenseNet121 model pre-trained on the ImageNet dataset
7. Make all layers of the model non-trainable
8. Add a Flatten layer to the model output to obtain a 1-D array of features
9. Apply batch normalization to the 1-D array of features
10. Add a fully connected layer with 512 hidden neurons
11. Apply dropout (20%) to the previous layer
12. Add a fully connected layer with 256 hidden neurons
13. Apply dropout (65%) to the previous layer
14. Apply batch normalization
15. Add a fully connected layer with four units and a softmax activation function
16. Optimize the model with the Adam optimizer (learning_rate = 0.001, decay = learning_rate/epochs)
17. Train the model for the given number of epochs with a batch size of 32
18. Save the final model
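Algorithms 1-4 share the same image preprocessing (resize from 500 × 500 to 224 × 224 pixels, then rescale pixel values from [0, 255] to [0, 1]). A minimal numpy sketch follows; nearest-neighbour resizing is used here only so the example is self-contained, since the paper does not specify the interpolation method:

```python
import numpy as np

def preprocess(image, size=224):
    """Resize a grayscale CXR array to size x size (nearest neighbour)
    and rescale pixel values from [0, 255] to [0, 1]."""
    img = np.asarray(image, dtype=np.float32)
    # Pick the source row/column nearest to each target position.
    rows = np.linspace(0, img.shape[0] - 1, size).round().astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size).round().astype(int)
    resized = img[np.ix_(rows, cols)]
    return resized / 255.0

# A 500 x 500 all-white image becomes a 224 x 224 array of ones.
x = preprocess(np.full((500, 500), 255, dtype=np.uint8))
```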
2.6.5. CXR-Score Ensemble Model
Algorithm 5. The CXR-Score ensemble description.

Input: CXR images of dimension 500 × 500 pixels from the Testing CXR Dataset
Output: prediction probabilities for each diagnosis class (Normal, Mild, Moderate, Severe)
1. for each image in the dataset do
2.   resize the image to 224 × 224 pixels
3.   normalize the image pixel values from (0, 255) to (0, 1)
4. end
5. Load the trained VGG-19 model
6. Load the trained InceptionV3 model
7. Load the trained ResNet50 model
8. Load the trained DenseNet121 model
9. Predict the images with VGG-19, resulting in a list of probabilities (P11, P12, P13, P14)
10. Predict the images with InceptionV3, resulting in a list of probabilities (P21, P22, P23, P24)
11. Predict the images with ResNet50, resulting in a list of probabilities (P31, P32, P33, P34)
12. Predict the images with DenseNet121, resulting in a list of probabilities (P41, P42, P43, P44)
13. Average the four lists of predictions of the four models
14. for each class in the set of diagnoses do
15.   output the prediction probability
16. end
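The soft-voting step (steps 9-13 of Algorithm 5) can be sketched in a few lines of numpy: each trained CNN outputs one probability per class (Normal, Mild, Moderate, Severe) and the ensemble prediction is their element-wise average. The probability values below are illustrative, not from the paper.

```python
import numpy as np

def ensemble_average(model_probs):
    """Average class probabilities across models.
    model_probs: array-like of shape (n_models, n_classes)."""
    probs = np.asarray(model_probs, dtype=float)
    return probs.mean(axis=0)

# Hypothetical outputs of VGG-19, InceptionV3, ResNet50, and DenseNet121
# for one image, over (Normal, Mild, Moderate, Severe):
p = [[0.70, 0.20, 0.05, 0.05],
     [0.60, 0.30, 0.05, 0.05],
     [0.80, 0.10, 0.05, 0.05],
     [0.50, 0.40, 0.05, 0.05]]

avg = ensemble_average(p)            # [0.65, 0.25, 0.05, 0.05]
predicted_class = int(avg.argmax())  # 0 -> "Normal"
```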
2.7. AI-Score Module
2.7.1. AdaBoost Model
2.7.2. Random Forests Model
2.7.3. XGBoost Model
2.7.4. CatBoost Model
2.7.5. AI-Score Ensemble Model
- In the first level, the base machine learning models (AdaBoost, Random Forests, XGBoost) used the same 5-fold splits of the training data.
- In the second level, a meta-model (CatBoost) was fit on the out-of-fold predictions of each first-level model.
Algorithm 6. The AI-Score ensemble description.

Input: SCB training dataset, SCB testing dataset
Output: prediction probabilities for each diagnosis class (Mild, Moderate, Severe)
1. Select a 5-fold split of the SCB training dataset
2. base_models = ["AdaBoost", "Random Forests", "XGBoost"]
3. meta_model = "CatBoost"
4. for each model in base_models do
5.   evaluate the model using 5-fold cross-validation
6.   save all out-of-fold predictions
7.   fit the model on the full training dataset and save it
8. Fit the meta-model on the out-of-fold predictions from the previous level
9. Evaluate the ensemble on the SCB testing dataset
10. for each class in the set of diagnoses do
11.   output the prediction probability
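The two-level stacking scheme can be sketched with scikit-learn's `StackingClassifier`, which fits the base models with cross-validation and trains the meta-model on their out-of-fold probabilities. To keep the example self-contained, `GradientBoostingClassifier` stands in for XGBoost and CatBoost (which are separate libraries), and a synthetic three-class dataset stands in for the SCB dataset; this illustrates the ensemble structure, not the authors' exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)

# Level 1: base models evaluated with the same 5-fold split (cv=5); their
# out-of-fold class probabilities feed the level-2 meta-model.
base_models = [
    ("ada", AdaBoostClassifier(n_estimators=25, random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("gb", GradientBoostingClassifier(n_estimators=25, random_state=0)),  # XGBoost stand-in
]
meta_model = GradientBoostingClassifier(n_estimators=25, random_state=0)  # CatBoost stand-in

stack = StackingClassifier(estimators=base_models, final_estimator=meta_model,
                           cv=5, stack_method="predict_proba")

# Synthetic stand-in for the SCB dataset: three classes (Mild, Moderate, Severe).
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
stack.fit(X, y)
probs = stack.predict_proba(X[:5])  # one probability per severity class, per patient
```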
2.8. Software and Statistical Tools
3. Results
3.1. Selection of Patient Variables through Statistics
3.2. Interpretability of CXR-Score Module
3.3. Interpretability of AI-Score Module for Predicting the Final Diagnosis Severity
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Calina, D.; Hernández, A.F.; Hartung, T.; Egorov, A.M.; Izotov, B.N.; Nikolouzakis, T.K.; Tsatsakis, A.; Vlachoyiannopoulos, P.G.; Docea, A.O. Challenges and Scientific Prospects of the Newest Generation of mRNA-Based Vaccines against SARS-CoV-2. Life 2021, 11, 907.
- Calina, D.; Hartung, T.; Mardare, I.; Mitroi, M.; Poulas, K.; Tsatsakis, A.; Rogoveanu, I.; Docea, A.O. COVID-19 pandemic and alcohol consumption: Impacts and interconnections. Toxicol. Rep. 2021, 8, 529–535.
- Islam, M.T.; Salehi, B.; Karampelas, O.; Sharifi-Rad, J.; Docea, A.O.; Martorell, M.; Calina, D. High skin melanin content, vitamin D deficiency and immunity: Potential interference for severity of COVID-19. Farmacia 2020, 68, 970–983.
- Yüce, M.; Filiztekin, E.; Özkaya, K.G. COVID-19 diagnosis - A review of current methods. Biosens. Bioelectron. 2021, 172, 112752.
- Yang, R.; Li, X.; Liu, H.; Zhen, Y.; Zhang, X.; Xiong, Q.; Luo, Y.; Gao, C.; Zeng, W. Chest CT Severity Score: An Imaging Tool for Assessing Severe COVID-19. Radiol. Cardiothorac. Imaging 2020, 2, e200047.
- Islam, M.T.; Hossen, M.; Kamaz, Z.; Zali, A.; Kumar, M.; Docea, A.O.; Arsene, A.L.; Călina, D.; Sharifi-Rad, J. The role of HMGB1 in the immune response to SARS-COV-2 infection: From pathogenesis towards a new potential therapeutic target. Farmacia 2021, 69, 621–634.
- Docea, A.O.; Tsatsakis, A.; Albulescu, D.; Cristea, O.; Zlatian, O.; Vinceti, M.; Moschos, S.; Tsoukalas, D.; Goumenou, M.; Drakoulis, N.; et al. A new threat from an old enemy: Re-emergence of coronavirus (Review). Int. J. Mol. Med. 2020, 45, 1631–1643.
- Islam, M.; Karray, F.; Alhajj, R.; Zeng, J. A Review on Deep Learning Techniques for the Diagnosis of Novel Coronavirus (COVID-19). IEEE Access 2021, 9, 30551–30572.
- Irmak, E. COVID-19 disease severity assessment using CNN model. IET Image Process. 2021, 15, 1814–1824.
- Lassau, N.; Ammari, S.; Chouzenoux, E.; Gortais, H.; Herent, P.; Devilder, M.; Soliman, S.; Meyrignac, O.; Talabard, M.-P.; Lamarque, J.-P.; et al. Integrating deep learning CT-scan model, biological and clinical variables to predict severity of COVID-19 patients. Nat. Commun. 2021, 12, 634.
- Punn, N.S.; Sonbhadra, S.K.; Agarwal, S. COVID-19 Epidemic Analysis using Machine Learning and Deep Learning Algorithms. medRxiv 2020.
- Almansoor, M.; Hewahi, N.M. Exploring the Relation between Blood Tests and Covid-19 Using Machine Learning. In Proceedings of the 2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy (ICDABI), Sakheer, Bahrain, 26–27 October 2020; pp. 1–6.
- AlJame, M.; Ahmad, I.; Imtiaz, A.; Mohammed, A. Ensemble learning model for diagnosing COVID-19 from routine blood tests. Inform. Med. Unlocked 2020, 21, 100449.
- Kukar, M.; Gunčar, G.; Vovko, T.; Podnar, S.; Černelč, P.; Brvar, M.; Zalaznik, M.; Notar, M.; Moškon, S.; Notar, M. COVID-19 diagnosis by routine blood tests using machine learning. Sci. Rep. 2021, 11, 10738.
- Zoabi, Y.; Deri-Rozov, S.; Shomron, N. Machine learning-based prediction of COVID-19 diagnosis based on symptoms. npj Digit. Med. 2021, 4, 3.
- Lopes, F.P.P.L.; Kitamura, F.C.; Prado, G.F.; Kuriki, P.E.D.A.; Garcia, M.R.T.; COVID-AI-Brasil. Machine learning model for predicting severity prognosis in patients infected with COVID-19: Study protocol from COVID-AI Brasil. PLoS ONE 2021, 16, e0245384.
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359.
- Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188.
- Polley, E.; van der Laan, M. Super Learner in Prediction. Available online: https://biostats.bepress.com/cgi/viewcontent.cgi?article=1269&context=ucbbiostat (accessed on 7 September 2020).
- World Health Organization. Clinical Management of COVID-19: Interim Guidance; WHO/2019-nCoV/clinical/2021.1. Available online: https://apps.who.int/iris/handle/10665/332196 (accessed on 13 August 2020).
- Yu, Z.; Li, X.; Sun, H.; Wang, J.; Zhao, T.; Chen, H.; Ma, Y.; Zhu, S.; Xie, Z. Rapid identification of COVID-19 severity in CT scans through classification of deep features. Biomed. Eng. Online 2020, 19, 1–13.
- Carvalho, A.R.S.; Guimarães, A.; Werberich, G.M.; De Castro, S.N.; Pinto, J.S.F.; Schmitt, W.R.; França, M.; Bozza, F.A.; Guimarães, B.L.D.S.; Zin, W.A.; et al. COVID-19 Chest Computed Tomography to Stratify Severity and Disease Extension by Artificial Neural Network Computer-Aided Diagnosis. Front. Med. 2020, 7, 577609.
- Giovagnoni, A. Facing the COVID-19 emergency: We can and we do. Radiol. Med. 2020, 125, 337–338.
- Neri, E.; Miele, V.; Coppola, F.; Grassi, R. Use of CT and artificial intelligence in suspected or COVID-19 positive patients: Statement of the Italian Society of Medical and Interventional Radiology. Radiol. Med. 2020, 125, 505–508.
- American College of Radiology. ACR Recommendations for the Use of Chest Radiography and Computed Tomography (CT) for Suspected COVID-19 Infection (updated 22 March 2020). Available online: https://www.acr.org/Advocacy-and-Economics/ACR-Position-Statements/Recommendations-for-Chest-Radiography-and-CT-for-Suspected-COVID-19-infection (accessed on 1 August 2020).
- Wong, H.Y.F.; Lam, H.Y.S.; Fong, A.H.-T.; Leung, S.T.; Chin, T.W.-Y.; Lo, C.S.Y.; Lui, M.M.-S.; Lee, J.C.Y.; Chiu, K.W.-H.; Chung, T.W.-H.; et al. Frequency and Distribution of Chest Radiographic Findings in Patients Positive for COVID-19. Radiology 2020, 296, E72–E78.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2016, arXiv:1608.06993.
- Vasile, C.; Udriștoiu, A.; Ghenea, A.; Popescu, M.; Gheonea, C.; Niculescu, C.; Ungureanu, A.; Udriștoiu, S.; Drocaş, A.; Gruionu, L.; et al. Intelligent Diagnosis of Thyroid Ultrasound Imaging Using an Ensemble of Deep Learning Methods. Medicina 2021, 57, 395.
- Freund, Y.; Schapire, R.E. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 1997, 55, 119–139.
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
- Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), San Francisco, CA, USA, 13–17 August 2016; Association for Computing Machinery: New York, NY, USA; pp. 785–794.
- CatBoost. Available online: https://catboost.ai/docs (accessed on 13 December 2020).
- Abadi, M. TensorFlow: Learning functions at scale. In Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming, Nara, Japan, 19–21 September 2016; Volume 51, p. 1.
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
- Matplotlib. Available online: https://matplotlib.org/ (accessed on 1 March 2021).
- Lippi, G.; Plebani, M. Laboratory abnormalities in patients with COVID-2019 infection. Clin. Chem. Lab. Med. 2020, 58, 1131–1134.
- Zhang, J.J.; Dong, X.; Cao, Y.Y.; Yuan, Y.D.; Yang, Y.B.; Yan, Y.Q.; Akdis, C.A.; Gao, Y.D. Clinical characteristics of 140 patients infected by SARS-CoV-2 in Wuhan, China. Allergy 2020, 75, 1730–1741.
- Huang, C.; Wang, Y.; Li, X.; Ren, L.; Zhao, J.; Hu, Y.; Zhang, L.; Fan, G.; Xu, J.; Gu, X.; et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 2020, 395, 497–506.
- Wang, B.; Li, R.; Lu, Z.; Huang, Y. Does comorbidity increase the risk of patients with COVID-19: Evidence from meta-analysis. Aging 2020, 12, 6049–6057.
- Wang, X.; Fang, X.; Cai, Z.; Wu, X.; Gao, X.; Min, J.; Wang, F. Comorbid Chronic Diseases and Acute Organ Injuries Are Strongly Correlated with Disease Severity and Mortality among COVID-19 Patients: A Systemic Review and Meta-Analysis. Research 2020, 2020, 2402961.
- Liang, W.; Liang, H.; Ou, L.; Chen, B.; Chen, A.; Li, C.; Li, Y.; Guan, W.; Sang, L.; Lu, J.; et al. Development and Validation of a Clinical Risk Score to Predict the Occurrence of Critical Illness in Hospitalized Patients With COVID-19. JAMA Intern. Med. 2020, 180, 1081–1089.
- Li, K.; Wu, J.; Wu, F.; Guo, D.; Chen, L.; Fang, Z.; Li, C. The Clinical and Chest CT Features Associated with Severe and Critical COVID-19 Pneumonia. Investig. Radiol. 2020, 55, 327–331.
- Islam, M.T.; Quispe, C.; Martorell, M.; Docea, A.O.; Salehi, B.; Calina, D.; Reiner, Z.; Sharifi-Rad, J. Dietary supplements, vitamins and minerals as potential interventions against viruses: Perspectives for COVID-19. Int. J. Vitam. Nutr. Res. 2021, 1–18.
- Ouahabi, A. A review of wavelet denoising in medical imaging. In Proceedings of the 2013 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), Algiers, Algeria, 12–15 May 2013; pp. 19–26.
- Ouahabi, A.; Taleb-Ahmed, A. Deep learning for real-time semantic segmentation: Application in ultrasound imaging. Pattern Recognit. Lett. 2021, 144, 27–34.
Diagnosis | Training (No. of Patients) | Testing/Control (No. of Patients) | Total (No. of Patients)
---|---|---|---
Mild | 114 | 30 | 144 |
Moderate | 127 | 30 | 157 |
Severe | 139 | 35 | 174 |
Total | 380 | 95 | 475 |
Variable | RR | Difference | Cohen’s d | Statistic | p Value
---|---|---|---|---|---
Age ≥ 65 years | 1.28 | - | - | 4.27 | 0.039 * |
Male sex | 0.47 | - | - | 38.04 | 0.001 * |
Oxygen saturation (%) | - | −5.66 | −1.29 | 10.47 | 0.001 * |
Systolic pressure (mm Hg) | - | 7.39 | 0.80 | 15.1 | 0.001 * |
Diastolic pressure (mm Hg) | - | 13.74 | 1.57 | 17.51 | 0.001 * |
Respiratory rate (/min) | - | 1.18 | 0.96 | 9.72 | 0.001 * |
Cardiac frequency (/min) | - | 16.15 | 1.34 | 12.89 | 0.001 * |
Body temperature (°C) | - | 0.14 | 0.18 | 1.93 | 0.054 |
Coughing | 1.57 | - | - | 12.77 | 0.001 * |
Sore throat | 1.19 | - | - | 1.34 | 0.246 |
Headache | 1.69 | - | - | 18.53 | 0.001 * |
Shortness of breath | 2.78 | - | - | 48.75 | 0.001 * |
Vertigo | 1.53 | - | - | 11.34 | 0.001 * |
Palpitations | 1.33 | - | - | 4.13 | 0.042 * |
Physical asthenia | 1.34 | - | - | 2.91 | 0.088 |
Abdominal pain | 1.35 | - | - | 4.74 | 0.029 * |
Myalgia | 2.83 | - | - | 72.64 | 0.001 * |
Inappetence | 1.35 | - | - | 5.95 | 0.015 * |
Diarrhea | 1.86 | - | - | 17 | 0.001 * |
Diabetes | 1.71 | - | - | 19.21 | 0.001 * |
Cardiac disease | 3.06 | - | - | 78.46 | 0.001 * |
Kidney disease | 1.77 | - | - | 7.99 | 0.004 * |
Asthma | 0.61 | - | - | 2.96 | 0.085 |
Hypertension | 4.24 | - | - | 124.02 | 0.001 * |
Autoimmune thyroid | 2.88 | - | - | 39.3 | 0.001 * |
Obesity | 2.53 | - | - | 56.41 | 0.001 * |
WBC (white blood cells) (×10³/mm³) | - | 2.60 | 0.16 | 1.64 | 0.010 *
LYM (lymphocytes) (%) | - | 3.43 | 0.30 | 2.85 | 0.005 *
MON (monocytes) (%) | - | −1.38 | −0.48 | −5.35 | 0.001 *
NEU (neutrophils) (%) | - | −1.95 | −0.17 | −1.65 | 0.100
EOS (eosinophils) (%) | - | 0.31 | 0.52 | 5.7 | 0.001 *
BAS (basophils) (%) | - | 0.06 | 0.25 | 2.67 | 0.008 *
HGB (hemoglobin) (g/dL) | - | 0.09 | 0.08 | 0.78 | 0.437
PLT (thrombocytes) (×10³/mm³) | - | 7568 | 0.07 | 0.67 | 0.500
AST (UI/L) | - | 3.13 | 0.09 | 1.07 | 0.285 |
ALT (UI/L) | - | 5.26 | 0.24 | 2.3 | 0.022 * |
Glucose (g/dL) | - | 33.97 | 0.27 | 2.76 | 0.006 * |
ESR (mm/h) | - | 13.65 | 0.52 | 5.08 | 0.001 * |
Total bilirubin (g/dL) | - | 0.003 | 0.01 | 0.09 | 0.924 |
CRP (mg/L) | - | 63.09 | 3.1 | 30.26 | 0.001 * |
Creatinine | - | 0.07 | 0.11 | 1.06 | 0.289 |
Urea | - | −2.15 | −0.10 | −0.93 | 0.352 |
Fibrinogen | - | 86.83 | 2.17 | 24.67 | 0.001 * |
D-Dimers | - | 1371 | 2.37 | 19.54 | 0.001 * |
CXR Severity | - | 1.67 | 2.89 | 36.1 | 0.001 * |
Severe Disease | Odds Ratio | Std. Err. | z | p > z | [95% Conf. Interval]
---|---|---|---|---|---
Oxygen saturation | 0.695636 | 0.030356 | −8.32 | <0.001 * | [0.638613, 0.757751] |
Sex | 0.047451 | 0.024916 | −5.8 | <0.001 * | [0.016955, 0.132799] |
Age group | 1.236799 | 0.51204 | 0.51 | 0.608 | [0.549412, 2.784198] |
Diabetes | 0.052452 | 0.044864 | −3.45 | <0.001 * | [0.009811, 0.280428] |
Obesity | 166.5959 | 183.348 | 4.65 | <0.001 * | [19.26948, 1440.319] |
WBC | 1.001285 | 0.000306 | 4.2 | <0.001 * | [1.000686, 1.001885] |
Creatinine | 6.825661 | 10.79885 | 1.21 | 0.225 | [0.307229, 151.6449] |
Urea | 1.029222 | 0.029315 | 1.01 | 0.312 | [0.973341, 1.088312] |
Constant | 0.279416 | 2.470919 | −0.14 | 0.885 | [8.3 × 10⁻⁹, 9409517]
Metric | ResNet50 | VGG-19 | InceptionV3 | DenseNet121 | CXR-Score
---|---|---|---|---|---
Acc_severe (%) | 99 | 98.85 | 99.31 | 99.31 | 99.54 |
Se_severe (%) | 99.05 | 97.17 | 98.11 | 98.11 | 98.11 |
Sp_severe (%) | 99.09 | 99.39 | 99.69 | 99.69 | 100 |
PPV_severe (%) | 97.22 | 98.09 | 99.04 | 99.04 | 100 |
NPV_severe (%) | 99.69 | 99.09 | 99.39 | 99.39 | 99.4 |
Acc_moderate (%) | 97.26 | 97.94 | 97.71 | 98.17 | 98.4 |
Se_moderate (%) | 94.35 | 94.35 | 95.16 | 95.16 | 96.77 |
Sp_moderate (%) | 98.4 | 99.36 | 98.72 | 99.36 | 99.04 |
PPV_moderate (%) | 95.9 | 98.31 | 96.72 | 98.33 | 97.56 |
NPV_moderate (%) | 97.78 | 97.8 | 98.10 | 98.11 | 98.73 |
Acc_normal (%) | 100 | 99.54 | 99.77 | 99.54 | 99.54 |
Se_normal (%) | 100 | 99.08 | 100 | 99.08 | 99.08 |
Sp_normal (%) | 100 | 99.69 | 99.69 | 99.69 | 99.69 |
PPV_normal (%) | 100 | 99.08 | 99.09 | 99.08 | 99.08 |
NPV_normal (%) | 100 | 99.69 | 100 | 99.69 | 99.69 |
Acc_mild (%) | 98.17 | 98.63 | 98.63 | 99.08 | 98.85 |
Se_mild (%) | 95.95 | 100 | 96.96 | 100 | 97.97 |
Sp_mild (%) | 98.82 | 98.23 | 99.11 | 98.82 | 99.11 |
PPV_mild (%) | 95.95 | 94.28 | 96.96 | 96.11 | 97 |
NPV_mild (%) | 98.82 | 100 | 99.11 | 100 | 99.40 |
Metric | Random Forests | XGBoost | AdaBoost | AI-Score
---|---|---|---|---
Acc_severe (%) | 96.84 | 98.94 | 91.57 | 98.94 |
Se_severe (%) | 91.42 | 97.14 | 77.14 | 97.14 |
Sp_severe (%) | 100 | 100 | 100 | 100 |
PPV_severe (%) | 100 | 100 | 100 | 100 |
NPV_severe (%) | 95.23 | 98.36 | 88.23 | 98.36 |
Acc_moderate (%) | 97.89 | 97.89 | 90.52 | 98.94 |
Se_moderate (%) | 93.33 | 93.33 | 100 | 96.66 |
Sp_moderate (%) | 100 | 100 | 86.15 | 100 |
PPV_moderate (%) | 100 | 100 | 76.92 | 100 |
NPV_moderate (%) | 97.01 | 97.01 | 100 | 98.48 |
Acc_mild (%) | 94.73 | 96.84 | 98.94 | 97.89 |
Se_mild (%) | 100 | 100 | 96.66 | 100 |
Sp_mild (%) | 92.30 | 95.38 | 100 | 96.92 |
PPV_mild (%) | 85.71 | 90.90 | 100 | 93.75 |
NPV_mild (%) | 100 | 100 | 98.48 | 100 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Udriștoiu, A.L.; Ghenea, A.E.; Udriștoiu, Ș.; Neaga, M.; Zlatian, O.M.; Vasile, C.M.; Popescu, M.; Țieranu, E.N.; Salan, A.-I.; Turcu, A.A.; et al. COVID-19 and Artificial Intelligence: An Approach to Forecast the Severity of Diagnosis. Life 2021, 11, 1281. https://doi.org/10.3390/life11111281