Comparison of Machine Learning Techniques for Prediction of Hospitalization in Heart Failure Patients
Abstract
1. Introduction
2. Materials and Methods
2.1. Gestione Integrata dello Scompenso Cardiaco (GISC) Study
- Numerical variables: body mass index (BMI), age, heart rate, B-type natriuretic peptide (BNP), pulmonary pressure, serum creatinine, and mean years between clinical examinations at follow-up;
- Categorical variables: gender; the occurrence of myocardial infarction; etiology related to ischemic cardiomyopathy, dilated cardiomyopathy, or valvulopathy; presence of comorbidities, chronic obstructive pulmonary disease (COPD), or anemia (dichotomous data); and New York Heart Association (NYHA) class (ordinal data). An illustrative encoding of these variables is sketched below.
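As a minimal sketch of how such covariates might be prepared for the models compared in this study, assuming a hypothetical data frame `gisc` with one row per patient and illustrative column names (none of which are the study's actual field names), the dichotomous variables can be coded as factors, NYHA class as an ordered factor, and the numerical variables rescaled; min–max rescaling is shown as one possible normalization:

```r
## Illustrative preprocessing of the GISC covariates; `gisc` and all column names
## are assumptions made for this sketch, not the study's actual data dictionary.
library(dplyr)

gisc <- gisc %>%
  mutate(
    # dichotomous covariates as factors
    across(c(gender, ami, etiology_ischemic, etiology_dilated,
             etiology_valvulopathy, copd, anemia, comorbidities), as.factor),
    # NYHA class as an ordered factor
    nyha = factor(nyha, levels = c(2, 3, 4), ordered = TRUE),
    # numerical covariates rescaled to [0, 1] (min-max; one possible normalization)
    across(c(age, bmi, heart_rate, bnp, pulmonary_pressure,
             creatinine, years_between_exams),
           ~ (.x - min(.x, na.rm = TRUE)) / (max(.x, na.rm = TRUE) - min(.x, na.rm = TRUE)))
  )
```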
2.2. Machine Learning Techniques
2.3. Model Training and Testing
3. Results
4. Discussion
5. Conclusions
Supplementary Materials
Author Contributions
Conflicts of Interest
References
- Altman, R.B.; Ashley, E.A. Using “Big Data” to Dissect Clinical Heterogeneity. Circulation 2015, 131, 232–233.
- Feied, C.F.; Handler, J.A.; Smith, M.S.; Gillam, M.; Kanhouwa, M.; Rothenhaus, T.; Conover, K.; Shannon, T. Clinical Information Systems: Instant Ubiquitous Clinical Data for Error Reduction and Improved Clinical Outcomes. Acad. Emerg. Med. 2004, 11, 1162–1169.
- Savarese, G.; Lund, L.H. Global public health burden of heart failure. Card. Fail. Rev. 2017, 3, 7.
- Cowie, M.R.; Mosterd, A.; Wood, D.A.; Deckers, J.W.; Poole-Wilson, P.A.; Sutton, G.C.; Grobbee, D.E. The epidemiology of heart failure. Eur. Heart J. 1997, 18, 208–225.
- Ponikowski, P.; Voors, A.A.; Anker, S.D.; Bueno, H.; Cleland, J.G.F.; Coats, A.J.S.; Falk, V.; González-Juanatey, J.R.; Harjola, V.-P.; Jankowska, E.A.; et al. ESC Guidelines for the diagnosis and treatment of acute and chronic heart failure: The Task Force for the diagnosis and treatment of acute and chronic heart failure of the European Society of Cardiology (ESC). Developed with the special contribution of the Heart Failure Association (HFA) of the ESC. Eur. J. Heart Fail. 2016, 18, 891–975.
- Conrad, N.; Judge, A.; Tran, J.; Mohseni, H.; Hedgecott, D.; Crespillo, A.P.; Allison, M.; Hemingway, H.; Cleland, J.G.; McMurray, J.J. Temporal trends and patterns in heart failure incidence: A population-based study of 4 million individuals. Lancet 2018, 391, 572–580.
- Lorenzoni, G.; Azzolina, D.; Lanera, C.; Brianti, G.; Gregori, D.; Vanuzzo, D.; Baldi, I. Time trends in first hospitalization for heart failure in a community-based population. Int. J. Cardiol. 2018, 271, 195–199.
- Cook, C.; Cole, G.; Asaria, P.; Jabbour, R.; Francis, D.P. The annual global economic burden of heart failure. Int. J. Cardiol. 2014, 171, 368–376.
- Johnson, K.W.; Soto, J.T.; Glicksberg, B.S.; Shameer, K.; Miotto, R.; Ali, M.; Ashley, E.; Dudley, J.T. Artificial intelligence in cardiology. J. Am. Coll. Cardiol. 2018, 71, 2668–2679.
- Awan, S.E.; Sohel, F.; Sanfilippo, F.M.; Bennamoun, M.; Dwivedi, G. Machine learning in heart failure: Ready for prime time. Curr. Opin. Cardiol. 2018, 33, 190–195.
- Tripoliti, E.E.; Papadopoulos, T.G.; Karanasiou, G.S.; Naka, K.K.; Fotiadis, D.I. Heart failure: Diagnosis, severity estimation and prediction of adverse events through machine learning techniques. Comput. Struct. Biotechnol. J. 2017, 15, 26–47.
- Mortazavi, B.J.; Downing, N.S.; Bucholz, E.M.; Dharmarajan, K.; Manhapra, A.; Li, S.-X.; Negahban, S.N.; Krumholz, H.M. Analysis of machine learning techniques for heart failure readmissions. Circ. Cardiovasc. Qual. Outcomes 2016, 9, 629–640.
- Frizzell, J.D.; Liang, L.; Schulte, P.J.; Yancy, C.W.; Heidenreich, P.A.; Hernandez, A.F.; Bhatt, D.L.; Fonarow, G.C.; Laskey, W.K. Prediction of 30-day all-cause readmissions in patients hospitalized for heart failure: Comparison of machine learning and other statistical approaches. JAMA Cardiol. 2017, 2, 204–209.
- Dai, W.; Brisimi, T.S.; Adams, W.G.; Mela, T.; Saligrama, V.; Paschalidis, I.C. Prediction of hospitalization due to heart diseases by supervised learning methods. Int. J. Med. Inf. 2015, 84, 189–197.
- Pisanò, F.; Lorenzoni, G.; Sabato, S.S.; Soriani, N.; Narraci, O.; Accogli, M.; Rosato, C.; De Paolis, P.; Folino, F.; Buja, G.; et al. Networking and data sharing reduces hospitalization cost of heart failure: The experience of GISC study. J. Eval. Clin. Pract. 2015, 21, 103–108.
- Aksoy, S.; Haralick, R.M. Feature normalization and likelihood-based similarity measures for image retrieval. Pattern Recognit. Lett. 2001, 22, 563–582.
- Goldstein, B.A.; Navar, A.M.; Carter, R.E. Moving beyond regression techniques in cardiovascular risk prediction: Applying machine learning to address analytic challenges. Eur. Heart J. 2017, 38, 1805–1814.
- Austin, P.C.; Tu, J.V.; Ho, J.E.; Levy, D.; Lee, D.S. Using methods from the data-mining and machine-learning literature for disease classification and prediction: A case study examining classification of heart failure subtypes. J. Clin. Epidemiol. 2013, 66, 398–407.
- Jain, S. Applications of Logistic Model to Medical Research. Biom. J. 1987, 29, 369–374.
- Kruppa, J.; Liu, Y.; Diener, H.-C.; Holste, T.; Weimar, C.; König, I.R.; Ziegler, A. Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications. Biom. J. 2014, 56, 564–583.
- Steyerberg, E.W.; van der Ploeg, T.; Van Calster, B. Risk prediction with machine learning and regression methods. Biom. J. 2014, 56, 601–606.
- Friedman, J.; Hastie, T.; Tibshirani, R. Regularization Paths for Generalized Linear Models via Coordinate Descent. J. Stat. Softw. 2010, 33, 1–22.
- Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed.; Springer Series in Statistics; Springer-Verlag: New York, NY, USA, 2009; ISBN 978-0-387-84857-0.
- Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; Chapman and Hall: Wadsworth, NY, USA, 1984; ISBN 978-0-412-04841-8.
- Marshall, R.J. The use of classification and regression trees in clinical epidemiology. J. Clin. Epidemiol. 2001, 54, 603–609.
- Austin, P.C.; Lee, D.S. Boosted classification trees result in minor to modest improvement in the accuracy in classifying cardiovascular outcomes compared to conventional classification trees. Am. J. Cardiovasc. Dis. 2011, 1, 1–15.
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
- Sakr, S.; Elshawi, R.; Ahmed, A.; Qureshi, W.T.; Brawner, C.; Keteyian, S.; Blaha, M.J.; Al-Mallah, M.H. Using machine learning on cardiorespiratory fitness data for predicting hypertension: The Henry Ford ExercIse Testing (FIT) Project. PLoS ONE 2018, 13, e0195344.
- Andrews, P.J.D.; Sleeman, D.H.; Statham, P.F.X.; McQuatt, A.; Corruble, V.; Jones, P.A.; Howells, T.P.; Macmillan, C.S.A. Predicting recovery in patients suffering from traumatic brain injury by using admission variables and physiological data: A comparison between decision tree analysis and logistic regression. J. Neurosurg. 2002, 97, 326–336.
- Freund, Y.; Schapire, R.E. Experiments with a new boosting algorithm. In Proceedings of the 13th International Conference on Machine Learning, Bari, Italy, 3–6 July 1996; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1996; pp. 148–156.
- Friedman, J.; Hastie, T.; Tibshirani, R. Additive logistic regression: A statistical view of boosting (with discussion and a rejoinder by the authors). Ann. Stat. 2000, 28, 337–407.
- Blagus, R.; Lusa, L. Boosting for high-dimensional two-class prediction. BMC Bioinform. 2015, 16, 300.
- Chen, P.; Pan, C. Diabetes classification model based on boosting algorithms. BMC Bioinform. 2018, 19, 109.
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
- Rossing, K.; Bosselmann, H.S.; Gustafsson, F.; Zhang, Z.-Y.; Gu, Y.-M.; Kuznetsova, T.; Nkuipou-Kenfack, E.; Mischak, H.; Staessen, J.A.; Koeck, T.; et al. Urinary Proteomics Pilot Study for Biomarker Discovery and Diagnosis in Heart Failure with Reduced Ejection Fraction. PLoS ONE 2016, 11, e0157167.
- Zhang, Z.Y.; Ravassa, S.; Nkuipou-Kenfack, E.; Yang, W.Y.; Kerr, S.M.; Koeck, T.; Campbell, A.; Kuznetsova, T.; Mischak, H.; Padmanabhan, S.; et al. Novel Urinary Peptidomic Classifier Predicts Incident Heart Failure. J. Am. Heart Assoc. 2017, 6, e005432.
- Choi, E.; Schuetz, A.; Stewart, W.F.; Sun, J. Using recurrent neural network models for early detection of heart failure onset. J. Am. Med. Inform. Assoc. 2017, 24, 361–370.
- Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press, Inc.: New York, NY, USA, 1995; ISBN 978-0-19-853864-6.
- Cherry, K.M.; Qian, L. Scaling up molecular pattern recognition with DNA-based winner-take-all neural networks. Nature 2018, 559, 370.
- Wu, D.; Pigou, L.; Kindermans, P.; Le, N.D.; Shao, L.; Dambre, J.; Odobez, J. Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1583–1597.
- Kubilius, J.; Bracci, S.; Op de Beeck, H.P. Deep Neural Networks as a Computational Model for Human Shape Sensitivity. PLoS Comput. Biol. 2016, 12, e1004896.
- Polezer, G.; Tadano, Y.S.; Siqueira, H.V.; Godoi, A.F.L.; Yamamoto, C.I.; de André, P.A.; Pauliquevis, T.; Andrade, M.d.F.; Oliveira, A.; Saldiva, P.H.N.; et al. Assessing the impact of PM2.5 on respiratory disease using artificial neural networks. Environ. Pollut. 2018, 235, 394–403.
- Oweis, R.J.; Abdulhay, E.W.; Khayal, A.; Awad, A. An alternative respiratory sounds classification system utilizing artificial neural networks. Biomed. J. 2015, 38, 153–161.
- Sharifi, M.; Buzatu, D.; Harris, S.; Wilkes, J. Development of models for predicting Torsade de Pointes cardiac arrhythmias using perceptron neural networks. BMC Bioinform. 2017, 18, 497.
- Puddu, P.E.; Menotti, A. Artificial neural networks versus proportional hazards Cox models to predict 45-year all-cause mortality in the Italian Rural Areas of the Seven Countries Study. BMC Med. Res. Methodol. 2012, 12, 100.
- Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46.
- Kuhn, M.; Johnson, K. Applied Predictive Modeling; Springer-Verlag: New York, NY, USA, 2013; ISBN 978-1-4614-6848-6.
- Wahl, S.; Boulesteix, A.-L.; Zierer, A.; Thorand, B.; van de Wiel, M.A. Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation. BMC Med. Res. Methodol. 2016, 16, 144.
- Hickey, G.L.; Grant, S.W.; Dunning, J.; Siepe, M. Statistical primer: Sample size and power calculations—Why, when and how? Eur. J. Cardiothorac. Surg. 2018, 54, 4–9.
- Aranda, J.M.; Johnson, J.W.; Conti, J.B. Current trends in heart failure readmission rates: Analysis of Medicare data. Clin. Cardiol. 2009, 32, 47–52.
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2018.
- Friedman, J.; Hastie, T.; Tibshirani, R.; Simon, N.; Narasimhan, B.; Qian, J. glmnet: Lasso and Elastic-Net Regularized Generalized Linear Models. R package version 2.0.5, 2016. Available online: https://rdrr.io/cran/glmnet/ (accessed on 26 November 2016).
- Therneau, T.; Atkinson, B.; Ripley, B. (producer of the initial R port; maintainer 1999–2017). rpart: Recursive Partitioning and Regression Trees. Available online: https://rdrr.io/cran/rpart/ (accessed on 1 May 2019).
- Wright, M.N.; Wager, S.; Probst, P. ranger: A Fast Implementation of Random Forests. R package version 0.5.0, 2016. Available online: http://CRAN.R-project.org/package=ranger (accessed on 7 July 2019).
- Tuszynski, J. caTools: Tools: Moving Window Statistics, GIF, Base64, ROC AUC, etc. Available online: http://CRAN.R-project.org/package=caTools (accessed on 1 April 2014).
- Alfaro-Cortes, E.; Gamez-Martinez, M.; Garcia-Rubio, N.; Guo, L. adabag: Applies Multiclass AdaBoost.M1, SAMME and Bagging. Available online: https://rdrr.io/cran/adabag/man/adabag-package.html (accessed on 1 May 2019).
- Meyer, D.; Dimitriadou, E.; Hornik, K.; Weingessel, A.; Leisch, F.; Chang, C.-C.; Lin, C.-C. e1071: Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien. Available online: https://rdrr.io/rforge/e1071/ (accessed on 4 June 2019).
- Ripley, B.; Venables, W. nnet: Feed-Forward Neural Networks and Multinomial Log-Linear Models. Available online: https://CRAN.R-project.org/package=nnet (accessed on 15 January 2018).
- Kuhn, M.; Wing, J.; Weston, S.; Williams, A.; Keefer, C.; Engelhardt, A.; Cooper, T.; Mayer, Z.; Kenkel, B.; R Core Team; Benesty, M.; Lescarbeau, R.; Ziem, A.; Scrucca, L.; Tang, Y.; Candan, C. caret: Classification and Regression Training. R package, 2016. Available online: http://CRAN.R-project.org/package=caret (accessed on 23 May 2019).
- Wickham, H. tidyverse: Easily Install and Load “Tidyverse” Packages; R Core Team: Vienna, Austria, 2017.
- Ishwaran, H. Variable importance in binary regression trees and forests. Electron. J. Stat. 2007, 1, 519–537.
- Gregori, D.; Petrinco, M.; Bo, S.; Rosato, R.; Pagano, E.; Berchialla, P.; Merletti, F. Using data mining techniques in monitoring diabetes care. The simpler the better? J. Med. Syst. 2011, 35, 277–281.
- IZSTO; Ru, G.; Crescio, M.; Ingravalle, F.; Maurella, C.; UBESP; Gregori, D.; Lanera, C.; Azzolina, D.; Lorenzoni, G.; et al. Machine Learning Techniques applied in risk assessment related to food safety. EFSA Support. Publ. 2017, 14, 1254E.
- Voigt, J.; John, M.S.; Taylor, A.; Krucoff, M.; Reynolds, M.R.; Gibson, C.M. A Reevaluation of the Costs of Heart Failure and Its Implications for Allocation of Health Resources in the United States. Clin. Cardiol. 2014, 37, 312–321.
- Murdoch, T.B.; Detsky, A.S. The Inevitable Application of Big Data to Health Care. JAMA 2013, 309, 1351–1352.
Variable | Not Hospitalized (N = 170) | Hospitalized (N = 210) | p-Value |
---|---|---|---|
Gender: Female | 54% (92) | 60% (125) | 0.29 |
Age | 72.0/78.0/83.0 | 73.0/79.0/83.0 | 0.357 |
BMI | 25.78/29.33/33.21 | 25.49/29.37/34.75 | 0.99 |
Medical history | |||
AMI | 12% (21) | 12% (26) | 0.993 |
HF etiology—ischemic cardiomyopathy | 15% (25) | 22% (47) | 0.058 |
HF etiology—dilated cardiomyopathy | 9% (16) | 10% (21) | 0.847 |
HF etiology—valvulopathy | 18% (30) | 21% (45) | 0.357 |
COPD | 26% (45) | 45% (94) | <0.001 |
Anemia | 15% (25) | 23% (48) | 0.045 |
Comorbidities | 39% (67) | 48% (101) | 0.09 |
Clinical examination | |||
Heart rate | 75.0/90.0/100.0 | 80.0/90.0/94.25 | 0.098 |
BNP | 850/1335/3000 | 1178/2228/3680 | <0.001 |
Pulmonary pressure | 35/40/47 | 35/41.5/52 | 0.051 |
NYHA class | | | 0.914 |
2 | 24% (39) | 26% (53) | |
3 | 67% (107) | 66% (136) | |
4 | 9% (14) | 8% (16) | |
Creatinine | 0.800/1.000/1.208 | 0.810/1.070/1.450 | 0.021 |
Mean years between clinical examinations | 0.625/1.600/2.900 | 0.900/1.800/2.900 | 0.281 |
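The summaries above (apparently first quartile/median/third quartile for the continuous variables, % (n) for the categorical ones, and a p-value per comparison) can be reproduced from patient-level data. The sketch below assumes the hypothetical `gisc` data frame from the earlier sketch plus a dichotomous `hospitalized` column; the Wilcoxon rank-sum and chi-squared tests are illustrative choices, not a restatement of the tests the authors actually used.

```r
## Sketch of a descriptive comparison by hospitalization status; `gisc` and
## `hospitalized` are hypothetical, and the tests shown are illustrative.

# Continuous example: quartiles of age per group and a rank-based p-value
tapply(gisc$age, gisc$hospitalized, quantile,
       probs = c(0.25, 0.50, 0.75), na.rm = TRUE)
wilcox.test(age ~ hospitalized, data = gisc)$p.value

# Categorical example: COPD prevalence per group and a chi-squared p-value
copd_tab <- table(gisc$copd, gisc$hospitalized)
prop.table(copd_tab, margin = 2)     # column percentages (per outcome group)
chisq.test(copd_tab)$p.value
```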
Technique | Sensitivity | PPV | NPV | Specificity | Accuracy | AUC |
---|---|---|---|---|---|---|
GLMN | 77.8 | 87.5 | 75 | 85.7 | 81.2 | 80.6 |
LR | 54.7 | 51.6 | 64.9 | 61.9 | 58.9 | 64.6 |
CART | 44.3 | 61.6 | 65.4 | 78.1 | 63.5 | 58.6 |
RF | 54.9 | 73.0 | 72.7 | 85.6 | 72.6 | 69.1 |
AB | 57.3 | 63.8 | 70.8 | 74.4 | 67.1 | 64.4 |
LB | 66.7 | 66.7 | 57.1 | 51.1 | 62.5 | 65.4 |
SVM | 57.3 | 69.0 | 72.2 | 79.4 | 69.9 | 69.5 |
NN | 61.6 | 62.8 | 72.4 | 73.1 | 68.2 | 67.7 |
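Every column in the tables above except the AUC can be derived from a confusion matrix of observed versus predicted classes on held-out data; the AUC uses the predicted probabilities directly. A minimal sketch follows, assuming vectors `obs` (observed outcome, a factor with levels "no"/"yes") and `prob` (predicted probability of hospitalization) and an illustrative 0.5 classification threshold; the AUC is computed with the caTools package, which is cited among the study's software.

```r
## Sketch of the reported performance metrics from held-out predictions;
## `obs`, `prob`, and the 0.5 threshold are assumptions made for illustration.
library(caTools)   # provides colAUC()

pred <- factor(ifelse(prob >= 0.5, "yes", "no"), levels = c("no", "yes"))
cm   <- table(Predicted = pred, Observed = obs)

TP <- cm["yes", "yes"]; TN <- cm["no", "no"]
FP <- cm["yes", "no"];  FN <- cm["no", "yes"]

sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
ppv         <- TP / (TP + FP)
npv         <- TN / (TN + FN)
accuracy    <- (TP + TN) / sum(cm)
auc         <- caTools::colAUC(prob, obs)   # area under the ROC curve
# (the tables report these quantities as percentages, i.e., the values above times 100)
```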
Technique | Sensitivity | PPV | NPV | Specificity | Accuracy | AUC |
---|---|---|---|---|---|---|
GLMN | 26.5 | 66.0 | 59.5 | 68.1 | 60.3 | 62.8 |
LR | 54.7 | 57.9 | 65.2 | 68.1 | 62.1 | 64.1 |
CART | 40.0 | 56.6 | 60.9 | 74.3 | 58.9 | 57.2 |
RF | 50.6 | 64.2 | 65.7 | 76.7 | 65.0 | 66.7 |
AB | 56.5 | 62.1 | 67.5 | 72.4 | 65.3 | 68.0 |
LB | 50.0 | 61.2 | 64.8 | 72.5 | 62.5 | 58.9 |
SVM | 66.5 | 57.7 | 69.2 | 60.5 | 63.2 | 63.6 |
NN | 28.8 | 58.2 | 59.1 | 83.3 | 58.9 | 61.9 |
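For completeness, the sketch below shows how one of the compared models (a random forest via the ranger package) might be tuned and fitted with caret, both packages being cited among the study's software. The 10-fold cross-validation control and the `gisc_train`/`gisc_test` split objects are assumptions made for illustration, not a restatement of the authors' exact resampling scheme.

```r
## Sketch of tuning/fitting one of the compared models (random forest via ranger)
## with caret; `gisc_train`, `gisc_test`, and the CV setup are illustrative assumptions.
library(caret)
library(ranger)

set.seed(1)
ctrl   <- trainControl(method = "cv", number = 10, classProbs = TRUE)
rf_fit <- train(hospitalized ~ ., data = gisc_train,
                method = "ranger", trControl = ctrl)

# Held-out class probabilities, e.g., to feed the metric calculations sketched earlier
rf_prob <- predict(rf_fit, newdata = gisc_test, type = "prob")[, "yes"]
```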
Technique | Sensitivity | PPV | NPV | Specificity | Accuracy | AUC |
---|---|---|---|---|---|---|
GLMN | 24.1 | 64.8 | 59.4 | 89.5 | 60.3 | 62.4 |
LR | 54.1 | 57.6 | 64.9 | 68.1 | 61.8 | 63.2 |
CART | 45.3 | 54.4 | 61.2 | 69.5 | 58.7 | 57.8 |
RF | 50.6 | 64.2 | 65.7 | 76.7 | 65.0 | 66.7 |
AB | 53.5 | 60.2 | 65.3 | 71.0 | 63.2 | 65.4 |
LB | 60.7 | 60.3 | 68.8 | 67.9 | 65.0 | 64.2 |
SVM | 53.5 | 57.2 | 64.3 | 67.6 | 61.3 | 62.2 |
NN | 55.9 | 58.5 | 65.9 | 68.1 | 62.6 | 64.1 |
Technique | NN | LB | SVM | LR | AB | CART | RF |
---|---|---|---|---|---|---|---|
GLMN | 0.8 (0.64–0.95) | 1 (1–1) | 0.75 (0.59–0.91) | 0.75 (0.59–0.91) | 0.75 (0.59–0.91) | 0.8 (0.65–0.95) | 0.77 (0.61–0.93) |
NN | _ | 0.77 (0.61–0.93) | 0.51 (0.35–0.68) | 0.51 (0.35–0.68) | 0.51 (0.35–0.68) | 0.92 (0.85–1) | 1 (1–1) |
LB | _ | _ | 0.54 (0.38–0.7) | 0.54 (0.38–0.7) | 0.54 (0.38–0.7) | 0.73 (0.6–0.86) | 0.69 (0.55–0.83) |
SVM | _ | _ | _ | 1 (1–1) | 1 (1–1) | 0.55 (0.39–0.71) | 0.51 (0.35–0.68) |
LR | _ | _ | _ | _ | 1 (1–1) | 0.55 (0.39–0.71) | 0.51 (0.35–0.68) |
AB | _ | _ | _ | _ | _ | 0.55 (0.39–0.71) | 0.51 (0.35–0.68) |
CART | _ | _ | _ | _ | _ | _ | 0.92 (0.85–1) |
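The table above reports pairwise agreement between the classes predicted by each pair of techniques, with 95% confidence intervals; Cohen's kappa (Cohen, 1960, cited in the reference list) is the standard statistic for this. A minimal sketch of the computation follows, assuming two prediction factors `pred_a` and `pred_b` with identical levels; the percentile-bootstrap interval is an illustrative choice, not necessarily how the reported intervals were obtained.

```r
## Sketch of Cohen's kappa between two techniques' predicted classes;
## `pred_a` and `pred_b` are hypothetical, the bootstrap CI is illustrative.
cohen_kappa <- function(a, b) {
  tab <- table(a, b)
  po  <- sum(diag(tab)) / sum(tab)                      # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement
  (po - pe) / (1 - pe)
}

set.seed(1)
boot_kappa <- replicate(2000, {
  idx <- sample(seq_along(pred_a), replace = TRUE)
  cohen_kappa(pred_a[idx], pred_b[idx])
})
c(kappa = cohen_kappa(pred_a, pred_b),
  quantile(boot_kappa, c(0.025, 0.975)))
```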
Variable | GLMN | LR | CART | RF | AB | LB | SVM | NN |
---|---|---|---|---|---|---|---|---|
Gender (female vs. male) | _ | _ | _ | _ | ||||
Age | _ | _ | _ | _ | ||||
BMI | _ | _ | _ | _ | ||||
Medical history | _ | _ | _ | _ | ||||
AMI (yes vs. no) | X | X | X | X | _ | _ | _ | _ |
HF etiology–ischemic cardiomyopathy (yes vs. no) | X | X | X | X | _ | _ | _ | _ |
HF etiology–dilated cardiomyopathy (yes vs. no) | _ | _ | _ | _ | ||||
HF etiology–valvulopathy (yes vs. no) | _ | _ | _ | _ | ||||
COPD (yes vs. no) | X | X | _ | _ | _ | _ | ||
Anemia (yes vs. no) | X | _ | _ | _ | _ | |||
Comorbidities (yes vs. no) | X | X | X | X | _ | _ | _ | _ |
Clinical examination | _ | _ | _ | _ | ||||
Heart rate | X | _ | _ | _ | _ | |||
BNP | X | X | _ | _ | _ | _ | ||
Pulmonary pressure | X | X | X | _ | _ | _ | _ | |
NYHA class | X | _ | _ | _ | _ | |||
Creatinine | X | X | X | _ | _ | _ | _ | |
Mean years between clinical examinations | X | X | X | _ | _ | _ | _ |
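A variable-selection table of this kind can be assembled from the fitted models themselves. The sketch below shows two of the usual routes using packages cited by the authors: covariates kept with nonzero coefficients by the penalized logistic regression (glmnet) and permutation importance from the random forest (ranger). `gisc_train` and its column names are the same hypothetical objects used in the earlier sketches, and how the study actually summarized importance for each technique may differ.

```r
## Sketch of extracting retained/important covariates from two of the models;
## `gisc_train` is hypothetical and the summaries are illustrative.
library(glmnet)
library(ranger)

# Design matrix and outcome for glmnet
X <- model.matrix(hospitalized ~ ., data = gisc_train)[, -1]   # drop intercept column
y <- gisc_train$hospitalized

# LASSO-penalized logistic regression: covariates with nonzero coefficients at lambda.min
cv_fit   <- cv.glmnet(X, y, family = "binomial", alpha = 1)
cf       <- as.matrix(coef(cv_fit, s = "lambda.min"))
retained <- rownames(cf)[cf[, 1] != 0 & rownames(cf) != "(Intercept)"]

# Random forest: permutation variable importance, largest first
rf <- ranger(hospitalized ~ ., data = gisc_train, importance = "permutation")
sort(rf$variable.importance, decreasing = TRUE)
```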
Variable | 95% CI lower limit | Median | 95% CI upper limit |
---|---|---|---|
Gender (female vs. male) | 0.80 | 0.98 | 1.19 |
Age | 0.99 | 1 | 1.02 |
BMI | 0.98 | 1 | 1.01 |
Medical history | |||
AMI (yes vs. no) | 1.08 | 1.41 | 1.74 |
HF etiology—ischemic cardiomyopathy (yes vs. no) | 1.05 | 1.31 | 1.57 |
HF etiology—dilated cardiomyopathy (yes vs. no) | 0.73 | 1 | 1.36 |
HF etiology—valvulopathy (yes vs. no) | 0.71 | 0.90 | 1.15 |
COPD (yes vs. no) | 1 | 1.22 | 1.49 |
Anemia (yes vs. no) | 0.96 | 1.19 | 1.40 |
Comorbidities (yes vs. no) | 1.12 | 1.34 | 1.44 |
Clinical examination | |||
Heart rate | 0.99 | 1 | 1 |
BNP | 1 | 1 | 1 |
Pulmonary pressure | 1 | 1.01 | 1.02 |
NYHA class | 0.72 | 0.91 | 1.14 |
Creatinine | 1.01 | 1.21 | 1.40 |
Mean years between clinical examinations | 0.99 | 1.08 | 1.17 |
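The medians and 95% limits above read like per-covariate odds-ratio summaries. As one way such a table could be produced, the sketch below exponentiates logistic-regression coefficients over bootstrap resamples of the hypothetical `gisc` data and takes percentile quantiles; whether the reported values were obtained this way, or from the penalized model instead, is an assumption of the sketch.

```r
## Sketch of odds-ratio summaries with percentile intervals; the bootstrap
## logistic-regression approach is an assumption made for illustration.
set.seed(1)
boot_or <- replicate(2000, {
  d   <- gisc[sample(nrow(gisc), replace = TRUE), ]
  fit <- glm(hospitalized ~ ., data = d, family = binomial)
  exp(coef(fit))[-1]                     # odds ratios, intercept dropped
})
t(apply(boot_or, 1, quantile, probs = c(0.025, 0.5, 0.975), na.rm = TRUE))
```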
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).