Article

Deep Learning for Risky Cardiovascular and Cerebrovascular Event Prediction in Hypertensive Patients

1 European Laboratory for Non-Linear Spectroscopy (LENS), University of Florence, 50019 Florence, Italy
2 Department of Medical Biotechnologies, University of Siena, 53100 Siena, Italy
3 Dermatology Unit, Department of Medical Science, Surgery and Neuroscience, University of Siena, 53100 Siena, Italy
4 Biomedical Engineering Department, Università Campus Bio-Medico, 00128 Rome, Italy
5 USL Toscana Centro, Department of Cardiology, Ospedale S. Maria Nuova, 50122 Florence, Italy
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(3), 1178; https://doi.org/10.3390/app15031178
Submission received: 13 November 2024 / Revised: 14 January 2025 / Accepted: 21 January 2025 / Published: 24 January 2025

Abstract

In this comprehensive study, we employed a versatile approach to tackle the prediction challenges associated with atrial fibrillation (AF) and cardiovascular events (CE). Exploiting the Gaussian copula synthesizer technique for data generation, we created high-quality synthetic data to overcome the limitations posed by scarce patient records. Heart rate variability (HRV), known to be an efficient indicator of cardiac health often used with artificial intelligence (AI), was used to train and optimize custom-built deep learning (DL) models. Additionally, we explored transfer learning (TL) to enhance the model capabilities by adapting our AF classification model to address CE classification challenges, effectively transferring learned features and patterns, without extensive retraining. As a result, our models achieved accuracy rates of 77% for AF and 82% for CEs, with high sensitivity, highlighting the efficacy of synthetic data generation and transfer learning in improving classification performance across diverse medical datasets. These findings hold significant promise for enhancing diagnostic and predictive capabilities in clinical settings, ultimately contributing to improved patient care and outcomes.

1. Introduction

Atrial fibrillation (AF) and hypertension are significant contributors to cardiovascular risk. One of the most common types of arrhythmia is AF, whose incidence increases exponentially with age [1,2,3]. This condition disrupts the normal sinus rhythm, resulting in an irregular heartbeat that is often associated with several adverse health outcomes, including stroke and heart failure [4,5,6,7]. The economic and clinical burden of AF is substantial due to repeated hospitalizations and healthcare service utilization. Therefore, the prediction of AF is clinically and economically significant [8,9]. As reported by Velleca et al. [10], in Italy, France, Germany, and England alone, more than 11 million patients are affected by this disease, with an average annual cost of more than EUR 600 million. Regarding cardiovascular diseases in general, the American Heart Association has projected that the total cost of cardiovascular diseases, including myocardial infarction (MI), in the United States will rise from USD 627 billion in 2020 to USD 1851 billion by 2050, driven by a near quadrupling of healthcare costs [11]. In the European Union, cardiovascular diseases (CVDs) cost approximately EUR 282 billion annually, equivalent to EUR 630 per EU citizen [12].
Heart rate variability (HRV) and electrocardiography (ECG) are essential methods for diagnosing and managing a wide range of cardiovascular, cerebrovascular, and related diseases. ECG is simply the recording of the bio-electrical activity of the heart. HRV is a measure of the variation in cardiac rhythm; a higher HRV generally reflects the subject's ability to adapt rapidly to different stimuli and is correlated with a healthier condition [13]. HRV analysis has evolved considerably over time, and, to date, many indices can be extracted from an ECG record. Time-based, frequency-based, and non-linear features can be easily calculated from a heart rate record; each of these features represents a different aspect of HRV, and different values have been correlated with different outcomes [14]. These techniques provide detailed insights into the autonomic nervous system, offering crucial prognostic information that can guide clinical decisions and improve patient outcomes [15].
Given the high variability and number of patients, AI-based systems could help physicians in the difficult task of monitoring patients and deciding on the best treatment approach. Developing AI-driven clinical decision support systems (CDSS) requires extensive patient data to ensure accuracy and reliability. However, this requirement introduces significant privacy challenges, as the reliance on large datasets amplifies concerns about data security, informed consent, and the potential for unauthorized exploitation of sensitive information [16]. Synthetic data offer a promising solution to these privacy concerns. By generating artificial datasets that mirror the statistical properties of real patients, synthetic data enable the development and validation of CDSSs without exposing personal health information. This approach facilitates data sharing and collaboration across institutions, while adhering to stringent privacy regulations [17]. However, it is important to acknowledge that synthetic data have limitations, including the potential risks of data leakage and dependency on imputation models.
Not all synthetic datasets precisely replicate the content and properties of the original data, which may impact their applicability across different applications [18]. Despite these challenges, the integration of synthetic data into AI model development has significant potential to advance CDSSs, while protecting patient privacy. In this paper, we introduce an approach that aims to predict high-risk cardiovascular and cerebrovascular events in hypertensive patients well before their occurrence. Our work involved developing a custom deep neural network (DNN) specifically tailored for detecting atrial fibrillation (AF), using a small dataset obtained from a single hospital. To address the limitations posed by the small size of the dataset, we used synthetic data generation techniques to augment our training set. Furthermore, we extended our investigation by applying TL techniques, where parts of the weights obtained from the deep learning (DL) model trained for AF prediction were utilized to train another model tasked with detecting cardiovascular events (CE). By integrating DL with advanced data generation methods, our study aimed to demonstrate the effectiveness of this combined approach in enhancing predictive accuracy and robustness in healthcare applications. To summarize, our goal is to equip clinicians with innovative technological solutions for predicting adverse cardiovascular outcomes. Exploiting deep learning and synthetic data generation, our approach addresses the challenges of data sharing, enhancing model quality while preserving patient privacy. With our approach, the similarities between common and rare CVDs can be exploited to tackle the significant problem of small datasets for the latter class of patients. This will boost the quality of models developed for the prediction and management of CVDs in general. Moreover, with a validated and safe infrastructure, it will be possible to share valuable insights about the features of disease patients without the risk of exposing their real data; this will further improve the efficacy of CDSSs, lowering the burden on the economy and personnel of healthcare systems.
We had the unique opportunity to work with a locally available dataset containing Holter ECG data from patients who developed atrial fibrillation (AF). Leveraging this dataset, we aimed to explore its potential in predicting rarer and more severe cardiac events. To extend the analysis, we incorporated a widely recognized dataset containing detailed HRV features from patients who had experienced different cardiac and cerebrovascular events, including 11 myocardial infarctions, 3 strokes, and 3 syncopal episodes. This work addresses critical gaps in cardiovascular risk prediction by leveraging advanced synthetic data generation and transfer learning methods to enhance model performance on limited datasets. By tailoring deep learning architectures for atrial fibrillation and cardiovascular event classification, this approach not only achieves robust accuracy but also addresses the challenges of data scarcity and privacy in healthcare. These innovations promise significant practical implications, including improving clinical decision support systems, optimizing resource allocation, and ultimately advancing patient care through early risk stratification and targeted intervention strategies.

2. Related Works

Advances in HRV and ECG analysis have greatly improved our ability to predict and manage CVDs. For example, Weimann and Conrad [19] demonstrated the use of deep convolutional neural networks (CNNs) combined with TL to classify ECG recordings, specifically focusing on AF detection. This approach significantly reduces the need for extensive annotations by leveraging large pretrained datasets like Icentia11K and fine-tuning them on more specific datasets, such as PhysioNet/CinC Challenge 2017. Similarly, Melillo et al. [20] explored the use of HRV to predict cardiovascular and cerebrovascular events in hypertensive patients. Their study employed various data-mining algorithms, including support vector machines, tree-based classifiers, and artificial neural networks, achieving notable sensitivity and specificity rates. This demonstrated that HRV analysis could be a reliable method for early risk stratification of hypertensive patients. Additionally, Alkhodari et al. [21] utilized HRV features and the RUSBOOST algorithm to predict cardiovascular and cerebrovascular events in hypertensive patients, achieving high accuracy and F1 scores. This approach highlights the potential of HRV and machine learning to enhance prognostic assessments and clinical decision-making. Deka et al. [22] advanced this further by integrating dual-tree complex wavelet packet transform (DTCWPT) and nonlinear HRV feature extraction with cost-sensitive RUSBoost (CS-RUSBoost) to effectively identify high-risk hypertensive patients.
HRV and ECG analysis have been extended to diagnose a variety of cardiac pathologies. Rajput et al. [23] developed an advanced system for assessing hypertension severity using ECG signals. They employed a bi-orthogonal wavelet filter bank (BOWFB) to decompose ECG signals into sub-bands, followed by extracting features, such as sample entropy and wavelet entropy. Their ensemble bagged trees (EBT) classifier achieved a high accuracy of 99.95%, demonstrating the potential for automated early stage hypertension detection. Furthermore, Jin et al. [24] utilized TL to predict myocardial injury from continuous single-lead ECG signals. By pretraining on labeled 12-lead ECGs and fine-tuning on single-lead data, their models achieved significant improvements in prediction accuracy, surpassing traditional diagnostic methods. Additionally, Alghamdi et al. [25] developed a computer-aided diagnosis (CAD) system for MI detection using deep learning, achieving accuracies of 99.02% and 99.22% with their models, demonstrating the robustness of these methods in clinical applications.
Heart failure prediction and management have also benefited from advancements in HRV and ECG analysis. Kusuma and Jothi [26] presented a model combining deep CNNs for feature extraction with long short-term memory (LSTM) networks for classification, achieving an accuracy of 95.21%. This highlights the potential of these technologies for early and accurate diagnosis of congestive heart failure (CHF). Moses et al. [27] also focused on distinguishing between healthy individuals and those with CHF using HRV data and machine learning algorithms. Their study analyzed ECG recordings from the PhysioNet database, achieving the highest accuracy of 77% using KNN and decision tree models. This demonstrates HRV’s potential as a non-invasive biomarker for early detection and management of CHF.
Machine learning techniques applied to HRV data have also enhanced stroke risk prediction. Chen et al. [28] introduced a hybrid deep transfer learning-based stroke risk prediction (HDTL-SRP) framework, leveraging data from correlated sources like hypertension and diabetes. This approach addressed the challenges of small and imbalanced stroke datasets, demonstrating the significant potential for accurate stroke risk prediction, while preserving patient privacy.
The use of big data in cardiovascular practice is growing rapidly and has great potential to have a positive impact on patients’ quality of life, but the road to implementing such systems in regular care is long and needs further studies, as Rumsfeld et al. [29] covered in their review. It is not easy to achieve high accuracy in real-case scenarios while avoiding bias or information leakage due to oversampling techniques. Mixing data from patients with different outcomes is useful: in our case, we used AF and CE data, demonstrating that TL is an effective technique for grouping cardiovascular data to improve the prediction of various cardiovascular events. Our HRV data for the models were generated synthetically to address the need for a high data volume and to safeguard patients’ data privacy. General ECG data generation has been proven to achieve promising performance in the development and objective assessment of novel machine learning algorithms, such as in the work of Gillette et al. [30], where they generated a novel synthetic database comprising a total of 16,900 12-lead ECGs based on electrophysiological simulations. Chen et al. [28] strongly supported the principle of shared data for clinical applications. They developed a prediction model for stroke based on synthetic data generation and TL. Training generators with real data protects the identity of patients, while harnessing the valuable information contained in the data. By using a model trained on stroke and other pathologies with many common aspects, such as diabetes and hypertension, they achieved an accuracy of 91.2% in predicting stroke risk.

3. Materials and Methods

In this study, we tackled the processing of patient HRV datasets for two classification tasks by employing resource-efficient methods, synthetic data generation, deep learning techniques, and transfer learning. The overall framework is visually outlined in Figure 1.

3.1. Hardware and Software Resources

We conducted this study using a local workstation with macOS Sonoma 14.2, featuring an 8-core Intel Core i9 2.3 GHz central processing unit (CPU) (Intel Corporation, Santa Clara, CA, USA) and 16 GB of random access memory (RAM). Our computational environment was built on a suite of open-source tools, including Python (v3.11.11); Keras (v2.15.0) for DL; the Conda environment management system (v24.11.1); the PyCharm Edu (v2022.1.3) integrated development environment; the Synthetic Data Vault (SDV), a Python library for creating tabular synthetic data; and its Gaussian Copula Synthesizer (GCS, 2023).

3.2. Datasets

We utilized two distinct datasets comprising patient ECG records obtained from different medical facilities. Descriptions of the datasets, namely the AF and CE datasets, are provided below.

3.2.1. Atrial Fibrillation Dataset

The ECG data pertaining to AF were acquired from a study conducted by Goretti et al. [31]. The data were provided by the Santa Maria Nuova Hospital in Florence, Italy. This dataset encompasses 102 patient records, categorized into two classes: class 0, comprising 60 patients without AF, and class 1, comprising 42 patients diagnosed with AF. For each patient, three-lead Holter ECG signals were recorded for 24 h at a sampling rate of 200 samples per second; then, in the preprocessing stage, a three-minute segment was selected. All the patients involved in this project underwent a Holter recording for various reasons, and a label was assigned according to whether or not AF developed in the subsequent three years.

3.2.2. Cardiovascular Events Dataset

This dataset was sourced from the “Physionet.org” platform under the title “Smart Health for Assessing the Risk of Events via ECG Database” (SHAREEDB) [20,32,33]. It comprises data from 139 hypertensive patients aged 55 and above, recruited between January 2012 and November 2013 at the Centre of Hypertension of the University Hospital Federico II in Naples, Italy. Each record in the dataset is composed of 24 h worth of signals recorded with a 3-lead ECG and a sampling frequency of 128 samples per second. During a 12-month follow-up period aimed at monitoring their cardiac health, 17 patients experienced adverse CEs: 11 suffered from myocardial infarction, 3 experienced a stroke, and 3 experienced syncope. These 17 patients were categorized as class 1, “high-risk patients”, while the remaining 122 subjects were classified as class 0, “low-risk patients”.

3.3. Data Pre-Processing

Both datasets were initially provided as ECG recordings. The CE dataset comprised 24 h recordings, while the AF dataset consisted of 3-min segments, manually chosen based on data quality, specifically those segments where the PQRS components were clearly visible and were not affected by additional, spurious spikes. HRV features were extracted from both datasets using Kubios Premium 3.4.1, a specialized tool for HRV analysis recognized for its scientific and professional utility [34]. This same tool could be used to detect segments of high-quality data in cases where researchers are not acquainted with visual inspection of ECG recordings. It incorporates a built-in Pan–Tompkins QRS detector and advanced options for correcting RR interval artifacts, efficiently rectifying missed, additional, and ectopic beats with high precision. In the CE dataset, we utilized 5-min samples from the 24 h recordings for each subject, selecting segments with minimal beat corrections and the least noise. For the AF dataset, clinical personnel manually selected 3-min segments of high-quality data, eliminating the need for a window selection phase. Upon selecting the appropriate ECG time window, Kubios automatically extracted time-based, frequency-based, and non-linear features. The resulting HRV features were exported as tabular data.
We used ultra-short-term recordings to test a system that, among other advantages, has the potential to become a cheap, easy, and fast tool for screening. In [35], the authors highlighted the risks of ultra-short-term HRV analysis, but also suggested that further development and validation are needed, because this approach could provide new tools for overcoming the high cost associated with long-term recordings. They framed this as a new challenge for HRV-based applications, and we believe that we are contributing further evidence on the effectiveness of using very short ECG recordings; to give another recent example, Orini et al. [36] successfully predicted an increased risk of cardiovascular events by extracting HRV from recordings shorter than 15 s.
Next, to address class imbalances, both datasets underwent undersampling, a simple procedure that removes random samples from the majority class until a balance is reached. In the CE dataset, we balanced the classes by undersampling the larger class 0 (with 122 records) to match the size of class 1 (with 17 records). For the AF dataset, we achieved a balance by randomly selecting 42 records from the available 60 in class 0 to match the size of class 1, also with 42 records. We acknowledge that our dataset is small and that undersampling further reduced its size. However, balanced datasets are essential for achieving unbiased data augmentation with a generative algorithm. For this reason, we applied undersampling before generating the synthetic data. Subsequently, we retained all features with complete data across the patient records, resulting in 64 feature columns. The final datasets were shuffled and divided into a 70% training set and a 30% test set. All features were normalized to values between 0 and 1.
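To make this preprocessing stage concrete, the sketch below shows how the undersampling, 70/30 split, and 0–1 normalization could be implemented; the "label" column name, the random seeds, and the use of scikit-learn's MinMaxScaler are illustrative assumptions rather than the exact code used in this study.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# df: tabular HRV features exported from Kubios, with a hypothetical binary
# "label" column (1 = AF / high-risk CE, 0 = control).
minority = df[df["label"] == 1]
majority = df[df["label"] == 0].sample(n=len(minority), random_state=0)  # random undersampling
balanced = pd.concat([minority, majority]).sample(frac=1, random_state=0)  # shuffle

X = balanced.drop(columns=["label"]).values
y = balanced["label"].values

# 70% training / 30% test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Normalize every feature to the [0, 1] range.
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)  # reuse the training-set scaling to avoid leakage
```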

3.4. Synthetic Data Generation

Synthetic data generation is the creation of artificial datasets that mimic the characteristics of real-world data in structure, statistical properties, and complexity, without containing actual sensitive or personally identifiable information. This technique is commonly used to protect privacy, enhance data availability, and facilitate model training and validation, without solely relying on original data [37].
Given the limited number of patient records in both datasets, we chose to bolster their size and diversity by incorporating synthetic data generation. To achieve this, we utilized the Gaussian Copula Synthesizer (GCS), a versatile open-source tool renowned for its efficacy in generating synthetic datasets [38]. Prior to synthesizing the additional data, we created tailored metadata tables for each dataset. Employing the SDV single-table metadata method, we emulated the structure and attributes of the original datasets. Subsequently, a validation process was undertaken to verify the fidelity and compatibility of the created metadata with the actual data frames. With the metadata curated, we constructed dataset-specific GCS models. These models, informed by the corresponding metadata and a normal distribution, were fitted to the original training data, and synthetic records were then sampled from them. The data augmentation process was carried out on the training data for both datasets, i.e., on 70% of the original records. In the case of the training set of the AF dataset, which initially comprised 58 records, a GCS model was fitted to generate 1000 synthetic records. For the training set of the CE dataset, which originally contained 23 real records, a GCS model was trained to produce 100 synthetic patient records.
The choice of synthetic data ranges was motivated by two key considerations:
  • Minimum number of records for generalization: To ensure the deep learning model was able to generalize well, we needed a dataset size sufficient to capture the variability in the underlying data distribution. Based on previous empirical experiments, a minimum of 100 records was found to provide enough variability to train a model that avoids overfitting while maintaining predictive accuracy. This ensures the model can generalize effectively to unseen data.
  • Maximum number of records to optimize model size: Given that the trained model was intended for deployment across multiple devices, we aimed to limit the number of records, to keep the model size manageable. Generating excessively large datasets, such as 10,000 records, can unnecessarily increase model complexity, resulting in longer training times and larger model sizes. This can hinder deployment efficiency, particularly on devices with limited computational and storage capacities.
For the AF model, 1000 records were needed to better generalize on the underlying HRV features; however, only 100 synthetic records were needed for the CE model, as general knowledge had already been captured by the larger AF model and then transferred to the CE model.
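A minimal sketch of this generation step, assuming the SDV 1.x single-table API and using the AF training set as an example, could look as follows; variable names are illustrative.

```python
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer

# train_df: the real, balanced 70% training portion of the AF dataset (58 records).
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(data=train_df)  # emulate the structure of the real table
metadata.validate()                            # check fidelity/compatibility of the metadata

# Fit a Gaussian copula model with normal marginals to the real training records ...
synthesizer = GaussianCopulaSynthesizer(metadata, default_distribution="norm")
synthesizer.fit(train_df)

# ... then sample synthetic records (1000 for AF, 100 for CE in this study).
synthetic_df = synthesizer.sample(num_rows=1000)
augmented_train_df = pd.concat([train_df, synthetic_df], ignore_index=True)
```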

3.5. Deep Learning

Deep learning is a subset of machine learning that focuses on training artificial neural networks to recognize patterns and solve complex tasks by learning from large datasets. Inspired by the structure and function of the human brain, deep learning models consist of layers of interconnected nodes (neurons) that progressively extract higher-level features from input data, enabling applications such as image recognition, natural language processing, and autonomous systems. Its effectiveness has been demonstrated in numerous domains, from healthcare to robotics, showcasing its transformative potential in addressing real-world challenges [39]. In our case, we developed a simple neural network and we tested its accuracy in predicting patients at high risk of developing AF and CE. Moreover, we tested if, and how much, the prediction accuracy of the model changed if we applied transfer learning, a technique where a model trained on one task is repurposed or fine-tuned to perform a different but related task, leveraging the knowledge it has already acquired. This approach is especially effective when the target task has limited data, as the pretrained model can provide a robust feature representation from the source domain. Commonly used in fields like computer vision and natural language processing, transfer learning has significantly improved performance in tasks like object recognition and text classification [40].

3.6. Atrial Fibrillation Deep Learning Model

3.6.1. Model Architecture

The model was developed using Keras, featuring a custom design for optimal performance. The first hidden layer comprises 64 neurons with a rectified linear unit (ReLU) activation function. The second hidden layer also consists of 64 neurons with ReLU activation, coupled with a 0.5 dropout layer to enhance generalization and prevent overfitting. The third hidden layer mirrors this structure, incorporating 64 neurons with ReLU activation and a 0.5 dropout layer. Finally, the output layer is composed of a single neuron employing a Sigmoid activation function, to perform binary classification. This architecture aims to capture intricate patterns within the data, promoting efficient learning and robust predictive capabilities; see the AF model architecture in Table 1.
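A Keras sketch of this architecture, assuming the 64 retained HRV features as input, is given below; the layer names are illustrative and only intended to make the transfer learning step in Section 3.7 easier to follow.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_features = 64  # HRV feature columns retained after preprocessing

af_model = keras.Sequential(
    [
        keras.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu", name="dense_1"),      # first hidden layer
        layers.Dense(64, activation="relu", name="dense_2"),      # second hidden layer
        layers.Dropout(0.5),
        layers.Dense(64, activation="relu", name="dense_3"),      # third hidden layer
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid", name="af_output"),  # binary AF output
    ],
    name="af_classifier",
)
af_model.summary()
```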

3.6.2. Training

The model was compiled using the Adam optimizer [41] with a learning rate of 5 × 10−4 and a binary cross-entropy loss function to effectively guide the training process. The training regimen extended over 50 epochs, employing a modest batch size of 32 to balance computational efficiency and model convergence. Throughout the training phase, a validation split of 20% was applied to assess the model’s generalization on an independent subset of the training data.
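A hedged sketch of the corresponding compile/fit calls with the stated hyperparameters is shown below; X_train_aug and y_train_aug denote the training set augmented with synthetic records and are illustrative names.

```python
from tensorflow import keras

af_model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=5e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

history = af_model.fit(
    X_train_aug,           # AF training set augmented with 1000 synthetic records
    y_train_aug,
    epochs=50,
    batch_size=32,
    validation_split=0.2,  # 20% of the training data held out for validation
)
```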

3.7. Cardiovascular Event Transfer Learning Model

3.7.1. Model Architecture

In pursuit of enhancing the model’s classification capabilities to encompass CEs, a transfer learning strategy was implemented. The pre-existing model, initially designed for AF classification, served as the foundation. To facilitate the adaptation to CEs, the first dense layer of the AF model was retained and frozen, while the layers beyond it, responsible for AF classification, were replaced. This retention of foundational aspects allowed the model to keep its learned features specific to AF, while accommodating the incorporation of new knowledge related to CEs. For model extension, additional layers were introduced and fine-tuned to the distinctive characteristics of CE data. A new dense layer, comprising 32 units and employing ReLU activation, was introduced, followed by a 0.5 dropout layer to enhance generalization. The output layer, a single dense layer, utilized a Sigmoid activation function to tailor the model’s output for the binary classification of CEs. This strategic combination of frozen and extended layers ensured a synergistic transfer of knowledge from the original AF-focused model, promoting robust classification capabilities across diverse cardiac events; see the CE model architecture in Table 2.
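Assuming the reading above (the first dense layer of the AF model kept frozen and the remaining layers replaced), a minimal Keras sketch of the transfer could look as follows; it reuses the illustrative layer names from the AF sketch in Section 3.6.1.

```python
from tensorflow import keras
from tensorflow.keras import layers

# af_model is the trained AF network from Section 3.6; "dense_1" is its first hidden layer.
feature_extractor = keras.Model(
    inputs=af_model.inputs,
    outputs=af_model.get_layer("dense_1").output,
)
feature_extractor.trainable = False  # keep the transferred AF features frozen

ce_model = keras.Sequential(
    [
        feature_extractor,                      # frozen Dense(64) transferred from AF
        layers.Dense(32, activation="relu"),    # new CE-specific layer
        layers.Dropout(0.5),                    # dropout for generalization
        layers.Dense(1, activation="sigmoid"),  # binary CE output
    ],
    name="ce_classifier",
)
```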

3.7.2. Training

For the training of the transfer learning model, the model was compiled using the Adam optimizer with a learning rate of 5 × 10−4 and a binary cross-entropy loss function to effectively guide the learning process. Training unfolded over 10 epochs, using batches of 32 samples to balance computational efficiency and convergence stability. This training strategy ensured that the transfer learning model not only leveraged the foundational knowledge acquired during training of the initial model but also fine-tuned its parameters to adapt specifically to the nuances of cardiac event classification. Through this iterative and focused training process, the model strove to attain a refined and robust understanding of the target domain, optimizing its predictive performance for CEs.
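A brief sketch of the corresponding training call, again with illustrative variable names for the augmented CE training set:

```python
from tensorflow import keras

ce_model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=5e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

ce_model.fit(
    X_ce_train_aug,  # CE training set augmented with 100 synthetic records
    y_ce_train_aug,
    epochs=10,
    batch_size=32,
)
```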

4. Results

4.1. Accuracy of Synthetic Data Generation

To evaluate the correct distribution of the synthetically generated data with respect to the original dataset, we employed statistical analysis. Once validated, we tested the model on unseen data to evaluate the performance of the described custom network.

4.1.1. Statistical Analysis

In this work, we used two sets of methods to analyze our synthetic AF and CE data: the "evaluate quality" function [42] of the SDV single-table module and the Mann–Whitney test [43].
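A minimal sketch of how these two checks could be run, assuming the SDV 1.x evaluation module and SciPy, is shown below; train_df, synthetic_df, and metadata carry over from the generation step and are illustrative names.

```python
from scipy.stats import mannwhitneyu
from sdv.evaluation.single_table import evaluate_quality

# Overall quality report: column shapes, column pair trends, and an overall score.
quality_report = evaluate_quality(
    real_data=train_df,
    synthetic_data=synthetic_df,
    metadata=metadata,
)
print(quality_report.get_score())
print(quality_report.get_details(property_name="Column Shapes"))

# Feature-wise Mann–Whitney U test: a high p-value means no statistically
# significant difference between the real and synthetic distributions.
for column in train_df.columns:
    _, p_value = mannwhitneyu(train_df[column], synthetic_df[column])
    if p_value < 0.05:
        print(f"{column}: distributions differ significantly (p = {p_value:.3f})")
```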
Table 3 displays the quality function results. Below, we provide a detailed interpretation of the results and their implications:
  • Column shapes (computed for single columns of data; often called the marginal distribution of each column): While these values may appear slightly lower than other metrics, they still indicate a strong resemblance between real and synthetic column distributions.
  • Column pair trends (computed for pairs of columns; the correlation or bivariate distribution of the columns): These high percentages indicate that relationships and dependencies between pairs of columns (e.g., correlations) in the synthetic data closely mimic those in the real data. This demonstrates that the structural integrity of the dataset is preserved, which is critical for maintaining its utility in downstream tasks like machine learning model training.
  • Overall score: These scores combine multiple aspects of statistical similarity and indicate a strong match between real and synthetic datasets. An overall score above 85% provides confidence that the generated data retain the key representational features of the real data, while reducing risks associated with direct data sharing.
Within the Open-CESP initiative, a key aspect of synthetic data utility lies in its capacity to mirror the statistical characteristics of the original data. This necessitates a significant degree of distributional similarity between the synthetic data and the original dataset [44]. When generating synthetic data, it is crucial to balance the resemblance to the original data with privacy requirements. While a higher similarity enhances usability in practical applications, the generated data must differ enough to prevent leakage of sensitive information. This balance, often referred to as “realism” or “resemblance”, is validated through statistical comparisons and expert qualitative assessments to ensure the synthetic data are both plausible and privacy-preserving [45]. We considered these critical points in our approach, and the reported statistical similarities between the real and synthetic datasets for AF and CEs highlight our commitment to effectively addressing both utility and privacy requirements.
The Mann–Whitney test was used to investigate the differences in HRV parameters. The median, interquartile range (IQR), and p-value were estimated for each parameter. We did not interpret the Mann–Whitney U test's results as direct "quality scores"; instead, the p-values indicated whether there was a statistically significant difference between the synthetic and real data distributions for each feature. A high p-value suggests similarity, implying that the synthetic data closely reflect the real data distribution. In this way, the test helped us assess the "quality" of the synthetic data: if distributions did not differ significantly, we considered the synthetic data to be of good quality in representing that feature. Our findings for both datasets are described next.

4.1.2. Atrial Fibrillation

The analysis revealed that the median values for most HRV parameters in the synthetic dataset were comparable to the real dataset. Notably, the following parameters showed significant differences between the two datasets (p < 0.05), suggesting potential variations:
  • NNxx (beats): Number of successive RR interval pairs that differ more than xx ms;
  • pNNxx (%): NNxx divided by the total number of RR intervals;
  • Very low frequency (VLF) (Hz) AR spectrum: The overall activity of the various slow mechanisms of the sympathetic function.
See the parameter differences in Table 4.

4.1.3. Cardiovascular Events

Only TINN (ms) (triangular interpolation of NN intervals, measured in milliseconds) showed significant differences between the synthetic and real datasets (p < 0.001), suggesting potential variations; see the parameter differences in Table 5. Furthermore, in Figure 2, we visually show the similarity between two pairs of our features (synthetic vs. real).

4.2. Experiment Setup

In the following section, we present the results obtained by testing our models trained in different conditions:
  • Atrial fibrillation prediction:
    - Training model on synthetic and real data.
    - Training model on real data only.
  • Cardiovascular events prediction:
    - Training model using transfer learning with synthetic and real data.
    - Training model using transfer learning and real data.
    - Training model on real data only, without transfer learning.

4.2.1. Classification Performance on Atrial Fibrillation

The AF classification model was tested on the held-out 30% of the dataset, consisting of 26 real patient records. The best-performing model had an accuracy of 77% on the test set; see its confusion matrix in Table A1 in Appendix A.
Table 6 includes all the performance metrics of our AF models. Model (1), trained on synthetic + real data, outperformed model (2), trained on real data only, in almost all metrics, demonstrating its effectiveness. For the without AF class, (1) achieved a balanced precision and recall (0.71 and 0.83) with an F1-score of 0.77, compared to (2)'s 0.62 precision and 0.71 F1-score. Similarly, for the diagnosed with AF class, (1) showed a higher precision (0.83 vs. 0.80) and recall (0.71 vs. 0.57), leading to a better F1-score (0.77 vs. 0.67). Model (1) also had higher specificity (0.83 vs. 0.80) and AUROC (0.77 vs. 0.70), indicating better overall discrimination and reliability. These results highlight the benefit of combining synthetic data with real data to improve predictive performance.
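For reference, a hedged sketch of how these metrics could be computed for the AF model with scikit-learn, assuming the 0.5 classification threshold reported in Table A1; variable names are illustrative.

```python
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score

# Probabilities predicted by the trained AF model on the held-out real test set.
y_prob = af_model.predict(X_test).ravel()
y_pred = (y_prob >= 0.5).astype(int)  # classification threshold of 0.5 (Table A1)

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred,
                            target_names=["without AF", "diagnosed with AF"]))

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("specificity:", tn / (tn + fp))
print("AUROC:", roc_auc_score(y_test, y_prob))
```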

4.2.2. Classification Performance for Cardiovascular Events

The CE classification model was tested on the held-out 30% of the dataset, consisting of 11 real patient records. The model had an accuracy of 82% on the test set; see the confusion matrix in Table A2 in Appendix A.
Table 7 includes all the performance metrics of our CE models. Model (1), trained on synthetic + real data, outperformed model (2), trained on real data only, and model (3), trained without transfer learning, in almost all metrics, demonstrating its effectiveness. For the without CE class, (1) achieved a balanced precision and recall (0.83 each) with an F1-score of 0.83, compared to (2)'s 0.80 precision, 0.67 recall, and 0.73 F1-score. Model (3) performed the worst, with a precision of 0.56 and an F1-score of 0.67. Similarly, for the diagnosed with CE class, (1) showed a higher precision (0.80 vs. 0.67 for (2) and 0.50 for (3)) and recall (0.80 vs. 0.80 for (2) and 0.20 for (3)), leading to a better F1-score (0.80 vs. 0.73 and 0.29, respectively). Model (1) also had a higher specificity (0.80 vs. 0.67 and 0.50) and AUROC (0.82 vs. 0.73 and 0.52), indicating better overall discrimination and reliability.
These results highlight the benefit of combining synthetic data with real data and the importance of transfer learning for improved predictive performance.

5. Discussion

In our study, we aimed to predict high-risk events, like AF and CEs, in hypertensive patients using DL and TL methodologies, leveraging synthetic data generation to address the challenge of small datasets. The results obtained, from testing both models on real datasets, showed the promising quality of the synthetic data and its ability to generalize on unseen data. Furthermore, with the limited real data testing set of the CE dataset, we showed the value of transfer learning. By freezing one layer from the AF model, we transferred a basic understanding of the HRV data to help in the training of another task, namely predicting severe cardiovascular events. The evaluation results demonstrated the effectiveness of the models in discriminating between individuals with and without a risk of developing AF/CEs, with moderate to high performance across different metrics. Confusion matrices provided insights into the models’ predictive capabilities, highlighting their ability to accurately classify patients based on their cardiac health status. The model decision criteria were consistent with those presented in the literature; in fact, the most predictive features, listed in Figure A1 and Figure A2 in Appendix A, were mainly related to the frequency group, such as VLF, low frequency (LF), and high frequency (HF). Our results can be understood in the context of the comparable studies summarized in Table 8. Other AI-based models have been tested for similar purposes, showing promising performance. However, a weakness in most cases is that different preprocessing steps are executed before splitting the data into training and test sets, leading to classification results being reported on the same data used to extract insights for class differentiation. For example, improperly applying undersampling or oversampling techniques, such as performing these steps before splitting the data, can introduce bias into an algorithm’s predictive performance.
Our model achieved an accuracy of 82% for CE prediction. This performance is on par with other leading methods. For instance, Melillo et al. [20] utilized a random forest model for cardiovascular event prediction using HRV data, achieving a sensitivity, specificity, and accuracy of 71.4%, 87.8%, and 85.7%, respectively. Their model demonstrated effective early risk prediction, and our approach further enhances the overall stratification by integrating synthetic data and transfer learning techniques.
Similarly, Alkhodari et al. [21] employed the RUSBOOST algorithm to predict cardiovascular events, achieving an accuracy of 97.08% and an F1 score of 86.67%. Both their study and ours used balanced datasets with undersampling techniques to address the class imbalance. This methodological choice ensures our model maintains a balance between predictive performance and practical applicability in diverse clinical settings.
Deka et al. [22] applied a cost-sensitive RUSBoost algorithm to identify high-risk hypertensive patients, achieving an F1 score of 93.47%. Similarly, Moshawrab et al. [46] used support vector machines (SVM) and reported an accuracy of 91.80% and an F1-score of 92.06%. These studies highlight the effectiveness of machine learning in cardiovascular risk prediction. Our inclusion of synthetic data generation and TL offers a unique advantage, by enhancing model adaptability and performance, particularly in scenarios with limited real-world data.
Table 8. Comparison of different studies on HRV analysis methods.
| Study | Dataset | Method | Sensitivity | Specificity | Accuracy | F1 |
| Melillo et al. [20] (2015) | HRV for risk of vascular events; 139 Holter recordings | Random Forest (RF) | 71.4% | 87.8% | 85.7% | N/A |
| Zhang et al. [47] (2019) | EMR for cardiovascular disease; 659 records | Enhanced Character-level Deep CNNs (EnDCNNs) | N/A | N/A | 95.22% | 95.16% |
| Alkhodari et al. [21] (2020) | HRV for cardio events; 139 Holter recordings | Random Under-Sampling Boosting (RUSBOOST) | N/A | N/A | 97.08% | 86.67% |
| Deka et al. [22] (2021) | HRV for cardio events; 139 Holter recordings | Cost-Sensitive RUSBoost (CS-RUSBoost) | N/A | N/A | N/A | 93.47% |
| Moshawrab et al. [46] (2023) | HRV for cardiovascular events; 139 Holter recordings | Support Vector Machines (SVM) | N/A | 87.09% | 91.8% | 92.06% |
| Moses et al. [27] (2024) | HRV for Heart Failure Prediction; 99 records | Support Vector Machines (SVM) | 74% | 74% | 74% | 73% |
| Our AF Model (2024) | HRV for Atrial Fibrillation | Deep Learning with Synthetic Data | 71% | 83% | 77% | 77% |
| Our CE Model (2024) | HRV for Cardiovascular Events; 139 Holter records | Transfer Learning with Synthetic Data | 83% | 80% | 82% | 82% |
Our study’s results are also consistent with those of Zhang et al. [47], who proposed an enhanced deep convolutional neural network (EnDCNN) for cardiovascular disease prediction using electronic medical records (EMRs), achieving high accuracy and F-scores. This consistency across different methodologies and datasets underscores the robustness of deep learning techniques in HRV-based predictions. Additionally, their use of EMRs and our integration of synthetic data both reflect a broader trend towards leveraging diverse data sources to improve model performance.
Moses et al. [27] focused on distinguishing between healthy individuals and those with congestive heart failure using HRV data and machine learning algorithms. Their study achieved a highest accuracy of 74% using support vector machines. Our approach achieved a higher accuracy, and this difference may be owing to the combination of synthetic data and TL to address the limitations of the small dataset, a methodology not employed by Moses et al. [27].
The incorporation of synthetic data generation and transfer learning in our study provides a robust framework for improving predictive accuracy in medical research, particularly for conditions with limited datasets. Using synthetic data generation techniques, we generated high-quality synthetic data to augment our training sets, ensuring our models could generalize well to unseen data. We showed that, by using transfer learning techniques and synthetic data, our deep learning architecture outperformed the same model when trained with only real data and without transfer learning, proving the efficacy of mixing these methodological approaches when developing clinical decision support systems. This method addresses a significant limitation in medical research, where small datasets cannot be improved due to privacy reasons, often hindering the development of reliable predictive models.

6. Conclusions

This study addresses the challenge of processing patient HRV datasets for classification tasks using resource-efficient methods, synthetic data generation, deep learning techniques, and transfer learning. The utilization of these techniques offers a promising approach to overcome the limitations posed by small datasets in medical research, enabling robust analysis and model development. The schematic overview provided in Figure 1 illustrates the pipeline employed in the classification of atrial fibrillation and cardiovascular events based on HRV data, highlighting the integration of various methodologies for comprehensive data analysis with final prediction performance. Such methods can provide clinicians with useful insights about the condition of patients in advance of event outcomes, allowing them to design the best therapeutic path to prevent severe consequences.
The aim was to prove that synthetic data can improve clinical decision support system performance, but, in future works, it will be interesting to explore the performance of the models when changing and optimizing parameters like the size of the synthetically generated data and the model hyperparameters.
The study's limitations include the reliance on small, imbalanced datasets despite synthetic augmentation, which may not fully capture real-world variability. The fidelity of the generated data to actual clinical scenarios needs further validation. Additionally, the approach's generalizability across diverse populations and its integration into routine clinical workflows remain untested, warranting broader studies. We generated synthetic data to overcome the small size and class imbalance of the dataset, and we proved that this approach provided a boost in classification performance. The main contributions of this work can be summarized as follows:
  • Usage of a small fraction (3- or 5-min segments) of ECG data to extract HRV parameters.
  • Usage of synthetic data generation to improve performance, even with unbalanced datasets, while safeguarding the privacy of the patients.
  • Prediction of CEs is an important aim for the optimal allocation of clinical resources. Estimating event outcomes can help design efficient therapeutic paths.
  • The models trained with synthetic data and transfer learning outperformed the same architecture trained with only real data and without transfer learning.
  • TL is efficient in predicting CEs with a model trained for atrial fibrillation. Merging pathologies with similar features could help improve the efficacy of the prediction model.

Author Contributions

Conceptualization, F.G., A.S., L.P. and E.I.; Methodology, F.G., A.S. and L.P.; Software, F.G., A.S., L.P. and E.I.; Validation, A.S., M.M. and E.I.; Formal analysis, A.C.; Resources, M.M. and E.I.; Writing—original draft, F.G. and A.S.; Writing—review & editing, A.S. and A.L.; Visualization, A.S. and A.C.; Supervision, A.L. and E.I.; Project administration, E.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the fact that there was no deviation from the “normal care” of patients who had provided consent for the management of personal data.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets and code scripts for this study are accessible at https://github.com/alexsalman/heart_rate_variability (accessed on 12 November 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Confusion matrix of the best performing deep learning model of atrial fibrillation (AF) trained on synthetic dataset and tested on real patient dataset.
| Predicted Class | Actual: Without AF | Actual: Diagnosed with AF |
| Without AF | 10 | 2 |
| Diagnosed with AF | 4 | 10 |
Classification threshold = 0.5.
Table A2. Confusion matrix of the best performing model of cardiovascular events (CE) trained on (transfer learning and CE synthetic dataset) and tested on real patient dataset.
| Predicted Class | Actual: Without CE | Actual: Diagnosed with CE |
| Without CE | 5 | 1 |
| Diagnosed with CE | 1 | 4 |
Classification threshold = 0.4.
Figure A1. Feature importance in atrial fibrillation (AF) dataset. Top 20 important HRV features for AF classification.
Figure A2. Feature importance in cardiovascular events (CE) dataset. Top 20 important HRV features for CE classification.
Table A3. Comparison of HRV features: Synthetic vs. real data of atrial fibrillation (AF).
| Feature Number | Feature Name | Synthetic (1000), Median [IQR] | Real (58), Median [IQR] | p-Value |
| 1 | PNS index | −0.63 [−1.46, 0.23] | −0.57 [−1.23, 0.14] | 0.738 |
| 2 | SNS index | 2.13 [0.16, 4.19] | 1.55 [0.39, 2.97] | 0.170 |
| 3 | Stress index | 20.89 [14.12, 28.23] | 17.74 [13.87, 23.55] | 0.234 |
| 4 | Mean RR (ms) | 900.77 [780.75, 1019.23] | 905.19 [793.98, 1030.33] | 0.727 |
| 5 | SDNN (ms) | 22.11 [11.72, 32.16] | 20.70 [13.14, 26.75] | 0.531 |
| 6 | Mean HR (beats/min) | 70.34 [59.07, 81.94] | 66.28 [58.23, 75.57] | 0.207 |
| 7 | SD HR (beats/min) | 1.73 [0.98, 2.48] | 1.41 [0.99, 2.12] | 0.352 |
| 8 | Min HR (beats/min) | 65.19 [55.19, 76.52] | 62.07 [54.87, 70.26] | 0.231 |
| 9 | Max HR (beats/min) | 75.54 [63.29, 88.03] | 71.99 [62.53, 81.32] | 0.238 |
| 10 | RMSSD (ms) | 21.85 [7.71, 37.11] | 17.38 [11.44, 30.23] | 0.459 |
| 11 | NNxx (beats) | 10.00 [0.00, 24.00] | 1.50 [0.00, 13.25] | 0.009 |
| 12 | pNNxx (%) | 5.62 [0.00, 13.14] | 0.92 [0.00, 7.75] | 0.007 |
| 13 | RR tri index | 5.60 [3.77, 7.46] | 5.62 [3.60, 7.27] | 0.848 |
| 14 | TINN (ms) | 107.00 [57.00, 152.00] | 101.00 [60.00, 127.00] | 0.554 |
| 15 | DC (ms) | 13.67 [6.96, 19.91] | 11.06 [7.72, 19.21] | 0.594 |
| 16 | DCmod (ms) | 22.82 [8.93, 37.95] | 17.62 [11.91, 33.10] | 0.518 |
| 17 | AC (ms) | −12.97 [−18.20, −6.88] | −11.32 [−17.30, −7.06] | 0.762 |
| 18 | ACmod (ms) | −22.25 [−36.13, −9.66] | −18.30 [−29.50, −12.65] | 0.534 |
| 19 | VLF (Hz) | 0.04 [0.03, 0.04] | 0.04 [0.03, 0.04] | 0.235 |
| 20 | LF (Hz) | 0.06 [0.05, 0.08] | 0.05 [0.04, 0.07] | 0.291 |
| 21 | HF (Hz) | 0.26 [0.21, 0.31] | 0.27 [0.20, 0.32] | 0.995 |
| 22 | VLF (ms2) | 54.03 [2.92, 112.87] | 26.56 [12.01, 69.15] | 0.218 |
| 23 | LF (ms2) | 319.61 [8.79, 641.64] | 164.21 [43.76, 399.63] | 0.246 |
| 24 | HF (ms2) | 256.24 [0.97, 1025.54] | 85.15 [42.10, 203.47] | 0.343 |
| 25 | VLF (log) | 3.28 [2.53, 4.12] | 3.28 [2.49, 4.24] | 0.983 |
| 26 | LF (log) | 4.99 [4.03, 5.86] | 5.10 [3.78, 5.99] | 0.853 |
| 27 | HF (log) | 4.45 [3.40, 5.37] | 4.44 [3.74, 5.32] | 0.704 |
| 28 | VLF (%) | 12.41 [6.57, 18.24] | 9.40 [5.47, 16.93] | 0.331 |
| 29 | LF (%) | 52.28 [40.11, 64.88] | 52.74 [34.52, 67.47] | 0.693 |
| 30 | HF (%) | 34.64 [21.58, 49.04] | 32.39 [18.05, 56.61] | 0.994 |
| 31 | LF (n.u.) | 61.15 [46.63, 74.91] | 62.71 [38.68, 78.18] | 0.887 |
| 32 | HF (n.u.) | 38.93 [25.06, 53.25] | 37.24 [21.81, 61.30] | 0.881 |
| 33 | Total power (ms2) | 636.92 [16.26, 1729.63] | 333.62 [150.94, 659.64] | 0.371 |
| 34 | LF/HF ratio | 2.92 [0.89, 4.86] | 1.69 [0.63, 3.58] | 0.072 |
| 35 | RESP (Hz) | 0.31 [0.27, 0.34] | 0.30 [0.28, 0.33] | 0.922 |
| 36 | SD1 (ms) | 15.49 [5.47, 26.31] | 12.32 [8.11, 21.45] | 0.459 |
| 37 | SD2 (ms) | 26.73 [15.18, 36.61] | 25.10 [15.17, 32.29] | 0.593 |
| 38 | SD2/SD1 ratio | 1.93 [1.43, 2.46] | 1.73 [1.35, 2.27] | 0.197 |
| 39 | Approximate entropy | 0.92 [0.84, 0.99] | 0.91 [0.84, 0.99] | 0.495 |
| 40 | Sample entropy | 1.74 [1.52, 1.97] | 1.74 [1.54, 1.92] | 0.918 |
| 41 | alpha 1 | 1.04 [0.85, 1.23] | 1.01 [0.77, 1.22] | 0.668 |
| 42 | alpha 2 | 0.45 [0.30, 0.60] | 0.41 [0.30, 0.59] | 0.476 |
| 43 | Correlation dimension | 0.63 [0.00, 1.39] | 0.20 [0.01, 0.85] | 0.065 |
| 44 | Mean line length | 9.40 [7.44, 11.37] | 8.46 [7.36, 10.01] | 0.145 |
| 45 | Max line length (beats) | 87.00 [38.00, 131.00] | 58.50 [42.00, 92.50] | 0.055 |
| 46 | Recurrence rate (%) | 27.34 [20.94, 33.45] | 24.96 [20.31, 30.58] | 0.164 |
| 47 | Determinism (DET) (%) | 96.69 [95.40, 97.91] | 96.80 [95.38, 97.94] | 0.857 |
| 48 | Shannon entropy | 2.90 [2.69, 3.12] | 2.84 [2.70, 3.03] | 0.388 |
| 49 | MSE(1) | 1.74 [1.52, 1.97] | 1.74 [1.54, 1.92] | 0.917 |
| 50 | MSE(2) | 1.68 [1.45, 1.91] | 1.63 [1.43, 1.86] | 0.593 |
| 51 | VLF (Hz) AR spectrum | 0.04 [0.03, 0.04] | 0.04 [0.04, 0.04] | <0.001 |
| 52 | LF (Hz) AR spectrum | 0.08 [0.06, 0.10] | 0.07 [0.06, 0.08] | 0.435 |
| 53 | HF (Hz) AR spectrum | 0.25 [0.20, 0.30] | 0.26 [0.15, 0.31] | 0.920 |
| 54 | VLF (ms2) AR spect. | 70.02 [8.91, 128.74] | 52.90 [19.33, 98.78] | 0.537 |
| 55 | LF (ms2) AR spectrum | 287.63 [50.10, 515.18] | 212.48 [62.39, 388.24] | 0.248 |
| 56 | HF (ms2) AR spectrum | 222.73 [0.76, 692.79] | 94.33 [37.32, 224.52] | 0.327 |
| 57 | VLF (log) AR spectrum | 3.69 [2.89, 4.50] | 3.96 [2.96, 4.59] | 0.697 |
| 58 | LF (log) AR spectrum | 5.04 [4.09, 5.87] | 5.36 [4.13, 5.96] | 0.750 |
| 59 | HF (log) AR spectrum | 4.49 [3.46, 5.45] | 4.55 [3.62, 5.41] | 0.634 |
| 60 | VLF (%) AR spectrum | 14.86 [10.05, 19.48] | 12.49 [8.96, 20.15] | 0.495 |
| 61 | LF (%) AR spectrum | 50.73 [40.52, 61.63] | 53.53 [35.22, 61.21] | 0.839 |
| 62 | HF (%) AR spectrum | 34.06 [22.11, 47.25] | 29.93 [20.11, 50.98] | 0.994 |
| 63 | LF (n.u.) AR spectrum | 60.35 [47.52, 73.18] | 64.12 [42.67, 75.74] | 0.890 |
| 64 | HF (n.u.) AR spectrum | 39.62 [26.69, 52.35] | 35.82 [24.25, 57.28] | 0.890 |
Table A4. Comparison of HRV features: Synthetic vs. real data of cardiovascular events (CE).
| Feature Number | Feature Name | Synthetic (100), Median [IQR] | Real (23), Median [IQR] | p-Value |
| 1 | PNS index | 0.06 [−0.96, 1.17] | 0.34 [−0.78, 1.03] | 0.810 |
| 2 | SNS index | 1.24 [0.07, 2.92] | 0.20 [−0.28, 2.53] | 0.235 |
| 3 | Stress index | 18.14 [11.79, 26.05] | 14.80 [10.23, 23.06] | 0.198 |
| 4 | Mean RR (ms) | 903.23 [800.83, 975.13] | 942.76 [790.94, 1028.20] | 0.381 |
| 5 | SDNN (ms) | 27.70 [13.29, 46.17] | 30.90 [13.73, 47.81] | 0.840 |
| 6 | Mean HR (beats/min) | 67.94 [62.97, 77.89] | 63.64 [58.39, 75.88] | 0.233 |
| 7 | SD HR (beats/min) | 2.11 [1.24, 3.26] | 2.02 [1.29, 3.00] | 0.948 |
| 8 | Min HR (beats/min) | 64.44 [58.09, 71.49] | 61.34 [54.02, 72.67] | 0.308 |
| 9 | Max HR (beats/min) | 73.84 [66.51, 83.09] | 69.82 [64.56, 81.36] | 0.290 |
| 10 | RMSSD (ms) | 40.39 [22.13, 70.09] | 31.31 [18.62, 73.47] | 0.935 |
| 11 | NNxx (beats) | 53.87 [0.00, 131.65] | 13.00 [3.00, 141.50] | 0.685 |
| 12 | pNNxx (%) | 22.36 [2.46, 38.18] | 12.00 [1.06, 49.60] | 0.827 |
| 13 | RR tri index | 6.87 [3.15, 9.59] | 5.58 [4.02, 9.41] | 0.851 |
| 14 | TINN (ms) | 49.50 [24.75, 74.25] | 115.00 [71.50, 189.00] | <0.001 |
| 15 | DC (ms) | 16.76 [5.82, 26.78] | 12.15 [6.16, 25.69] | 0.933 |
| 16 | DCmod (ms) | 45.48 [20.10, 70.00] | 31.52 [19.01, 75.73] | 0.933 |
| 17 | AC (ms) | −16.02 [−25.90, −6.40] | −13.91 [−25.98, −6.56] | 0.992 |
| 18 | ACmod (ms) | −43.01 [−70.02, −18.92] | −30.71 [−72.49, −18.87] | 0.984 |
| 19 | VLF (Hz) | 0.04 [0.03, 0.04] | 0.04 [0.03, 0.04] | 0.681 |
| 20 | LF (Hz) | 0.07 [0.04, 0.08] | 0.06 [0.05, 0.07] | 0.932 |
| 21 | HF (Hz) | 0.29 [0.25, 0.32] | 0.29 [0.24, 0.33] | 0.695 |
| 22 | VLF (ms2) | 42.31 [0.77, 98.63] | 21.34 [11.92, 45.23] | 0.462 |
| 23 | LF (ms2) | 161.15 [2.01, 492.59] | 103.57 [53.49, 205.44] | 0.976 |
| 24 | HF (ms2) | 675.83 [10.66, 1621.47] | 141.55 [51.96, 1096.91] | 0.680 |
| 25 | VLF (log) | 3.04 [2.32, 3.74] | 3.06 [2.48, 3.81] | 0.815 |
| 26 | LF (log) | 4.47 [3.64, 5.63] | 4.64 [3.98, 5.32] | 0.433 |
| 27 | HF (log) | 5.12 [4.12, 6.63] | 4.95 [3.95, 6.99] | 0.570 |
| 28 | VLF (%) | 9.11 [3.51, 14.69] | 6.80 [1.88, 13.70] | 0.481 |
| 29 | LF (%) | 37.17 [19.07, 49.42] | 32.42 [16.26, 47.79] | 0.575 |
| 30 | HF (%) | 55.59 [36.92, 74.30] | 55.41 [38.24, 79.80] | 0.604 |
| 31 | LF (n.u.) | 40.49 [19.63, 59.00] | 36.82 [16.93, 55.82] | 0.626 |
| 32 | HF (n.u.) | 58.94 [40.95, 79.56] | 62.93 [44.06, 81.93] | 0.601 |
| 33 | Total power (ms2) | 819.21 [13.72, 2243.34] | 399.90 [160.46, 1495.71] | 0.804 |
| 34 | LF/HF ratio | 1.15 [0.06, 2.55] | 0.59 [0.21, 1.27] | 0.329 |
| 35 | RESP (Hz) | 0.34 [0.29, 0.38] | 0.30 [0.25, 0.38] | 0.287 |
| 36 | SD1 (ms) | 28.66 [15.82, 49.66] | 22.16 [13.19, 52.02] | 0.946 |
| 37 | SD2 (ms) | 26.37 [14.00, 41.11] | 31.84 [14.87, 39.66] | 0.719 |
| 38 | SD2/SD1 ratio | 1.11 [0.73, 1.52] | 0.91 [0.72, 1.40] | 0.512 |
| 39 | Approximate entropy | 1.06 [0.96, 1.20] | 1.08 [1.03, 1.14] | 0.726 |
| 40 | Sample entropy | 1.66 [1.43, 1.89] | 1.61 [1.40, 1.90] | 0.910 |
| 41 | alpha 1 | 0.62 [0.39, 0.91] | 0.59 [0.37, 0.87] | 0.743 |
| 42 | alpha 2 | 0.37 [0.26, 0.48] | 0.33 [0.24, 0.45] | 0.394 |
| 43 | Correlation dimension | 1.41 [0.55, 2.87] | 0.66 [0.01, 3.36] | 0.943 |
| 44 | Mean line length | 7.19 [5.25, 8.52] | 6.71 [5.29, 7.67] | 0.630 |
| 45 | Max line length (beats) | 58.73 [29.13, 83.13] | 41.00 [30.50, 60.40] | 0.379 |
| 46 | Recurrence rate (%) | 18.55 [13.15, 23.24] | 15.85 [13.01, 20.57] | 0.630 |
| 47 | Determinism (DET) (%) | 93.81 [92.43, 95.40] | 93.87 [92.47, 95.85] | 0.841 |
| 48 | Shannon entropy | 2.59 [2.34, 2.82] | 2.61 [2.32, 2.74] | 0.775 |
| 49 | MSE(1) | 1.66 [1.43, 1.89] | 1.61 [1.40, 1.90] | 0.910 |
| 50 | MSE(2) | 1.61 [1.36, 1.88] | 1.63 [1.46, 1.76] | 0.982 |
| 51 | VLF (Hz) AR spectrum | 0.02 [0.01, 0.04] | 0.04 [0.00, 0.04] | 0.331 |
| 52 | LF (Hz) AR spectrum | 0.08 [0.04, 0.10] | 0.06 [0.04, 0.10] | 0.475 |
| 53 | HF (Hz) AR spectrum | 0.28 [0.24, 0.34] | 0.29 [0.21, 0.36] | 0.676 |
| 54 | VLF (ms2) AR spect. | 46.75 [3.91, 91.92] | 31.48 [20.02, 72.95] | 0.825 |
| 55 | LF (ms2) AR spectrum | 156.85 [1.47, 436.10] | 101.50 [46.95, 230.65] | 0.990 |
| 56 | HF (ms2) AR spectrum | 607.06 [8.90, 1594.85] | 122.87 [53.45, 1005.80] | 0.728 |
| 57 | VLF (log) AR spectrum | 3.52 [2.66, 4.23] | 3.45 [2.99, 4.28] | 0.604 |
| 58 | LF (log) AR spectrum | 4.42 [3.40, 5.63] | 4.62 [3.85, 5.44] | 0.436 |
| 59 | HF (log) AR spectrum | 5.04 [4.01, 6.36] | 4.81 [3.97, 6.91] | 0.468 |
| 60 | VLF (%) AR spectrum | 11.64 [6.68, 18.66] | 9.03 [4.15, 16.48] | 0.463 |
| 61 | LF (%) AR spectrum | 32.35 [17.56, 47.24] | 27.74 [13.41, 47.65] | 0.518 |
| 62 | HF (%) AR spectrum | 53.88 [33.83, 73.24] | 62.74 [31.26, 83.11] | 0.606 |
| 63 | LF (n.u.) AR spectrum | 38.96 [22.33, 58.89] | 31.09 [13.93, 60.25] | 0.588 |
| 64 | HF (n.u.) AR spectrum | 60.95 [41.13, 77.17] | 68.66 [39.62, 85.66] | 0.575 |

References

  1. Kannel, W.B.; Wolf, P.A.; Benjamin, E.J.; Levy, D. Prevalence, incidence, prognosis, and predisposing conditions for atrial fibrillation: Population-based estimates. Am. J. Cardiol. 1998, 82, 2N–9N.
  2. Vizzardi, E.; Curnis, A.; Latini, M.G.; Salghetti, F.; Rocco, E.; Lupi, L.; Rovetta, R.; Quinzani, F.; Bonadei, I.; Bontempi, L.; et al. Risk factors for atrial fibrillation recurrence: A literature review. J. Cardiovasc. Med. 2014, 15, 235–253.
  3. Lloyd-Jones, D.M.; Wang, T.J.; Leip, E.P.; Larson, M.G.; Levy, D.; Vasan, R.S.; D’Agostino, R.B.; Massaro, J.M.; Beiser, A.; Wolf, P.A.; et al. Lifetime risk for development of atrial fibrillation: The Framingham Heart Study. Circulation 2004, 110, 1042–1046.
  4. Britton, M.; Gustafsson, C. Non-rheumatic atrial fibrillation as a risk factor for stroke. Stroke 1985, 16, 182–188.
  5. Wolf, P.A.; Dawber, T.R.; Thomas, H.E.; Kannel, W.B. Epidemiologic assessment of chronic atrial fibrillation and risk of stroke: The Framingham Study. Neurology 1978, 28, 973.
  6. Stewart, S.; Hart, C.L.; Hole, D.J.; McMurray, J.J. A population-based study of the long-term risks associated with atrial fibrillation: 20-year follow-up of the Renfrew/Paisley study. Am. J. Med. 2002, 113, 359–364.
  7. Gopinathannair, R.; Etheridge, S.P.; Marchlinski, F.E.; Spinale, F.G.; Lakkireddy, D.; Olshansky, B. Arrhythmia-induced cardiomyopathies: Mechanisms, recognition, and management. J. Am. Coll. Cardiol. 2015, 66, 1714–1728.
  8. January, C.T.; Wann, L.S.; Calkins, H.; Chen, L.Y.; Cigarroa, J.E.; Cleveland, J.C.; Ellinor, P.T.; Ezekowitz, M.D.; Field, M.E.; Furie, K.L.; et al. 2019 AHA/ACC/HRS focused update of the 2014 AHA/ACC/HRS guideline for the management of patients with atrial fibrillation: A report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Rhythm Society. J. Am. Coll. Cardiol. 2019, 74, 104–132.
  9. Jeong, J.H. Prevalence of and risk factors for atrial fibrillation in Korean adults older than 40 years. J. Korean Med. Sci. 2005, 20, 26.
  10. Velleca, M.; Costa, G.; Goldstein, L.; Bishara, M.; Ming, L. A review of the burden of atrial fibrillation: Understanding the impact of the new millennium epidemic across Europe. Cardiology 2019, 7, 110–118.
  11. Kazi, D.S.; Elkind, M.S.; Deutsch, A.; Dowd, W.N.; Heidenreich, P.; Khavjou, O.; Mark, D.; Mussolino, M.E.; Ovbiagele, B.; Patel, S.S.; et al. Forecasting the Economic Burden of Cardiovascular Disease and Stroke in the United States Through 2050: A Presidential Advisory from the American Heart Association. Circulation 2024, 150, 4.
  12. Luengo-Fernandez, R.; Walli-Attaei, M.; Gray, A.; Torbica, A.; Maggioni, A.P.; Huculeci, R.; Bairami, F.; Aboyans, V.; Timmis, A.D.; Vardas, P.; et al. Economic burden of cardiovascular diseases in the European Union: A population-based cost study. Eur. Heart J. 2023, 44, 4752–4767.
  13. Malik, M.; Camm, A.J. Heart rate variability. Clin. Cardiol. 1990, 13, 570–576.
  14. Cygankiewicz, I.; Zareba, W. Heart rate variability. Handb. Clin. Neurol. 2013, 117, 379–393.
  15. Faust, O.; Hong, W.; Loh, H.W.; Xu, S.; Tan, R.S.; Chakraborty, S.; Barua, P.D.; Molinari, F.; Acharya, U.R. Heart rate variability for medical decision support systems: A review. Comput. Biol. Med. 2022, 145, 105407.
  16. Murdoch, B. Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Med. Ethics 2021, 22, 1–5.
  17. Gonzales, A.; Guruswamy, G.; Smith, S.R. Synthetic data in health care: A narrative review. PLoS Digit. Health 2023, 2, e0000082.
  18. Giuffrè, M.; Shung, D.L. Harnessing the power of synthetic data in healthcare: Innovation, application, and privacy. NPJ Digit. Med. 2023, 6, 186.
  19. Weimann, K.; Conrad, T.O.F. Transfer learning for ECG classification. Sci. Rep. 2021, 11, 5251.
  20. Melillo, P.; Izzo, R.; Orrico, A.; Scala, P.; Attanasio, M.; Mirra, M.; De Luca, N.; Pecchia, L. Automatic prediction of cardiovascular and cerebrovascular events using heart rate variability analysis. PLoS ONE 2015, 10, e0118504.
  21. Alkhodari, M.; Islayem, D.K.; Alskafi, F.A.; Khandoker, A.H. Predicting Hypertensive Patients with Higher Risk of Developing Vascular Events Using Heart Rate Variability and Machine Learning. IEEE Access 2020, 8, 192727–192739.
  22. Deka, D.; Deka, B. Stratification of High-Risk Hypertensive Patients Using Hybrid Heart Rate Variability Features and Boosting Algorithms. IEEE Access 2021, 9, 62665–62675.
  23. Rajput, J.S.; Sharma, M.; Tan, R.S.; Acharya, U.R. Automated detection of severity of hypertension ECG signals using an optimal bi-orthogonal wavelet filter bank. Comput. Biol. Med. 2020, 123, 103924.
  24. Jin, B.T.; Palleti, R.; Shi, S.; Ng, A.Y.; Quinn, J.V.; Rajpurkar, P.; Kim, D. Transfer learning enables prediction of myocardial injury from continuous single-lead electrocardiography. J. Am. Med. Informatics Assoc. 2022, 29, 1908–1918.
  25. Alghamdi, A.; Hammad, M.; Ugail, H.; Abdel-Raheem, A.; Muhammad, K.; Khalifa, H.S.; El-Latif, A.A.A. Detection of myocardial infarction based on novel deep transfer learning methods for urban healthcare in smart cities. Multimed. Tools Appl. 2020, 83, 14913–14934.
  26. Kusuma, S.; Jothi, K.R. ECG signals-based automated diagnosis of congestive heart failure using Deep CNN and LSTM architecture. Biocybern. Biomed. Eng. 2022, 42, 247–257.
  27. Moses, J.C.; Adibi, S.; Angelova, M.; Islam, S.M.S. Time-domain heart rate variability features for automatic congestive heart failure prediction. ESC Heart Fail. 2024, 11, 378–389.
  28. Chen, J.; Chen, Y.; Li, J.; Wang, J.; Lin, Z.; Nandi, A.K. Stroke Risk Prediction with Hybrid Deep Transfer Learning Framework. IEEE J. Biomed. Health Inform. 2022, 26, 411–422.
  29. Rumsfeld, J.S.; Joynt, K.E.; Maddox, T.M. Big data analytics to improve cardiovascular care: Promise and challenges. Nat. Rev. Cardiol. 2016, 13, 350–359.
  30. Gillette, K.; Gsell, M.A.; Nagel, C.; Bender, J.; Winkler, B.; Williams, S.E.; Bär, M.; Schäffter, T.; Dössel, O.; Plank, G.; et al. MedalCare-XL: 16,900 healthy and pathological synthetic 12 lead ECGs from electrophysiological simulations. Sci. Data 2023, 10, 531.
  31. Goretti, F.; Marzullo, A.; Milli, M.; Iadanza, E. Prediction of Atrial Fibrillation using Deep Learning techniques. In Convegno Nazionale di Bioingegneria; Patron Editore Srl: Bologna, Italy, 2023; pp. 1–4.
  32. Melillo, P.; Izzo, R.; Orrico, A.; Scala, P.; Attanasio, M.; Mirra, M.; De Luca, N.; Pecchia, L. Smart Health for Assessing the Risk of Events via ECG Database. 2015. Available online: https://physionet.org/content/shareedb/1.0.0/ (accessed on 15 June 2021).
  33. Goldberger, A.; Amaral, L.; Glass, L.; Hausdorff, J.; Ivanov, P.; Mark, R.; Mietus, J.; Moody, G.; Peng, C.; Stanley, H. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 2000, 101, E215–E220.
  34. Kubios, O. HRV Analysis Methods. 2016. Available online: https://www.kubios.com/blog/about-heart-rate-variability/ (accessed on 15 June 2021).
  35. Shaffer, F.; Meehan, Z.M.; Zerr, C.L. A critical review of ultra-short-term heart rate variability norms research. Front. Neurosci. 2020, 14, 594880.
  36. Orini, M.; van Duijvenboden, S.; Young, W.J.; Ramírez, J.; Jones, A.R.; Hughes, A.D.; Tinker, A.; Munroe, P.B.; Lambiase, P.D. Long-term association of ultra-short heart rate variability with cardiovascular events. Sci. Rep. 2023, 13, 18966.
  37. Buggineni, V.; Chen, C.; Camelio, J. Enhancing Manufacturing Operations with Synthetic Data: A Systematic Framework for Data Generation, Accuracy, and Utility. 2023. Available online: https://www.researchgate.net/publication/377698850_Enhancing_Manufacturing_Operations_with_Synthetic_Data_A_Systematic_Framework_for_Data_Generation_Accuracy_and_Utility (accessed on 17 December 2024).
  38. Ravens, B. An Introduction to Copulas; Taylor & Francis: London, UK, 2000.
  39. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
  40. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359.
  41. Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
  42. Synthetic Data Vault. Data Quality. 2023. Available online: https://docs.sdv.dev/sdv/single-table-data/evaluation/data-quality (accessed on 9 February 2024).
  43. Mann, H.B.; Whitney, D.R. On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other. Ann. Math. Stat. 1947, 18, 50–60.
  44. Chapelle, R.; Falissard, B. Statistical properties and privacy guarantees of an original distance-based fully synthetic data generation method. arXiv 2023, arXiv:2310.06571.
  45. Murtaza, H.; Ahmed, M.; Khan, N.F.; Murtaza, G.; Zafar, S.; Bano, A. Synthetic data generation: State of the art in health care domain. Comput. Sci. Rev. 2023, 48, 100546.
  46. Moshawrab, M.; Adda, M.; Bouzouane, A.; Ibrahim, H.; Raad, A. Predicting Cardiovascular Events with Machine Learning Models and Heart Rate Variability. Int. J. Ubiquitous Syst. Pervasive Netw. 2023, 18, 49–59.
  47. Zhang, Z.; Qiu, Y.; Yang, X.; Zhang, M. Enhanced character-level deep convolutional neural networks for cardiovascular disease prediction. BMC Med. Informatics Decis. Mak. 2020, 20, 123.
Figure 1. Schematic overview of the HRV classification pipeline: (1) data preprocessing (undersampling, feature selection, shuffling, and train/test split), (2) synthetic data generation, (3) data normalization (0–1), (4) deep learning model trained on AF data, and (5) transfer learning of the AF model and fine-tuning on CE data. The same preprocessing steps were applied separately to each dataset.
Figure 2. Box plots comparing two examples of HRV features, SDNN (ms) and pNNxx (%), between synthetic (blue) and real (green) data for cardiovascular events (CE). Each box shows the median (horizontal line), interquartile range (IQR, box boundaries), and mean (represented by the triangle inside the box), with whiskers indicating the data range. p-values (SDNN: 0.840, pNNxx: 0.827) are displayed inside the chart, highlighting the similarity between the synthetic and real data distributions.
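To illustrate how a comparison such as Figure 2 can be reproduced, the short sketch below draws side-by-side box plots for one HRV feature and annotates the Mann–Whitney p-value. The gamma-distributed arrays are placeholders, not the study data; in practice they would be the SDNN (ms) values of the synthetic and real CE records.

```python
# Illustrative synthetic-vs-real box plot in the style of Figure 2.
# The arrays below are placeholders for the SDNN (ms) values of the
# synthetic (n = 100) and real (n = 23) CE records.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
sdnn_synthetic = rng.gamma(shape=4.0, scale=10.0, size=100)  # hypothetical synthetic SDNN values
sdnn_real = rng.gamma(shape=4.0, scale=10.0, size=23)        # hypothetical real SDNN values

_, p_value = mannwhitneyu(sdnn_synthetic, sdnn_real, alternative="two-sided")

fig, ax = plt.subplots(figsize=(4, 4))
ax.boxplot([sdnn_synthetic, sdnn_real], showmeans=True)  # triangle marks the mean, as in Figure 2
ax.set_xticklabels(["Synthetic", "Real"])
ax.set_ylabel("SDNN (ms)")
ax.set_title(f"SDNN: synthetic vs. real (p = {p_value:.3f})")
plt.tight_layout()
plt.show()
```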
Table 1. Atrial fibrillation (AF) model architecture summary. Computational complexity analysis of the proposed deep learning network architecture: for a given number of epochs E, training samples N, input dimensionality D, and hidden layer size H, the complexity per epoch is on the order of O(N·D·H). Over E epochs, it scales to O(E·N·D·H).
Model: “Keras Sequential”
Layer (type) | Output Shape
Input layer (Dense) | (, 64)
Activation (Activation) | (, 64)
Hidden layer 1 (Dense) | (, 64)
Activation (Activation) | (, 64)
Dropout 1 (Dropout) | (, 64)
Hidden layer 2 (Dense) | (, 64)
Activation (Activation) | (, 64)
Dropout 2 (Dropout) | (, 64)
Output layer (Dense) | (, 1)
Trainable params: 12,545
Non-trainable params: 0
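For readers who want to reproduce the layer stack in Table 1, a minimal Keras sketch is given below. It is not the authors' code: the activation function, dropout rate, optimizer, and loss are assumptions, since the table only lists layer types and output shapes. Assuming a 64-dimensional HRV feature vector as input, this configuration reproduces the 12,545 trainable parameters reported above.

```python
# Minimal sketch of the AF classifier implied by Table 1 (assumptions noted below).
# Assumed: 64 input features, ReLU activations, 0.3 dropout, sigmoid output,
# Adam optimizer with binary cross-entropy loss.
from tensorflow import keras
from tensorflow.keras import layers

def build_af_model(input_dim: int = 64, dropout_rate: float = 0.3) -> keras.Model:
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        layers.Dense(64), layers.Activation("relu"),   # Input layer (Dense)  -> (, 64)
        layers.Dense(64), layers.Activation("relu"),   # Hidden layer 1       -> (, 64)
        layers.Dropout(dropout_rate),                  # Dropout 1
        layers.Dense(64), layers.Activation("relu"),   # Hidden layer 2       -> (, 64)
        layers.Dropout(dropout_rate),                  # Dropout 2
        layers.Dense(1, activation="sigmoid"),         # Output layer         -> (, 1)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

af_model = build_af_model()
af_model.summary()   # 12,545 trainable parameters with input_dim = 64
```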
Table 2. Cardiovascular event (CE) model architecture summary. Computational complexity analysis of the transfer learning architecture: by freezing the earlier layers, gradients need only be computed for the newly added (unfrozen) layers. Thus, if D is the input dimensionality fed into the first unfrozen layer and H is the size of the newly added layers, the computational complexity per epoch reduces to O(N·D·H). Over E epochs, it becomes O(E·N·D·H). This maintains linear scaling with the number of samples, epochs, and the size of the unfrozen portion of the model.
Model: “Keras Sequential”
Layer (type) | Output Shape
Input layer (Dense) | (, 64)
Activation (Activation) | (, 64)
Hidden layer 1 (Dense) | (, 32)
Activation (Activation) | (, 32)
Dropout 1 (Dropout) | (, 32)
Output layer (Dense) | (, 1)
Trainable params: 2113
Non-trainable params: 4160
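One possible reading of Table 2, continuing the sketch above: the first dense block of the trained AF model is reused and frozen (its 4,160 parameters become non-trainable), and a smaller trainable head is added (2,113 trainable parameters). The exact freezing strategy, learning rate, and other hyper-parameters below are assumptions.

```python
# Sketch of the transfer-learning model in Table 2, reusing build_af_model() above.
from tensorflow import keras
from tensorflow.keras import layers

def build_ce_model(af_model: keras.Model, dropout_rate: float = 0.3) -> keras.Model:
    frozen_dense = af_model.layers[0]   # first Dense(64) of the trained AF model
    frozen_dense.trainable = False      # freeze transferred weights (4,160 non-trainable params)

    model = keras.Sequential([
        keras.Input(shape=(64,)),
        frozen_dense,                                   # Input layer (Dense) -> (, 64), frozen
        layers.Activation("relu"),
        layers.Dense(32), layers.Activation("relu"),    # Hidden layer 1      -> (, 32)
        layers.Dropout(dropout_rate),                   # Dropout 1
        layers.Dense(1, activation="sigmoid"),          # Output layer        -> (, 1)
    ])
    # A reduced learning rate for fine-tuning is an assumption, not taken from the paper.
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Usage (with build_af_model from the previous sketch):
# ce_model = build_ce_model(build_af_model())
# ce_model.summary()   # 2,113 trainable and 4,160 non-trainable parameters
```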
Table 3. Statistical similarities between real and synthetic datasets of AF and CEs.
Property | Atrial Fibrillation (AF) | Cardiovascular Events (CE)
Column shapes | 82.05% | 79.61%
Column pair trends | 95.74% | 96.12%
Overall score | 88.89% | 87.86%
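The "Column shapes", "Column pair trends", and "Overall score" properties in Table 3 correspond to the quality report of the Synthetic Data Vault (SDV) library [42]. A hedged sketch of how such scores can be obtained with a Gaussian copula synthesizer is shown below (SDV 1.x API); `real_df` and the number of sampled rows are placeholders.

```python
# Sketch of Gaussian-copula synthesis and quality scoring with the SDV library.
# real_df is a placeholder pandas DataFrame of HRV features (one row per patient).
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer
from sdv.evaluation.single_table import evaluate_quality

def synthesize_and_score(real_df: pd.DataFrame, n_rows: int = 1000):
    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(real_df)            # infer column types from the real data

    synthesizer = GaussianCopulaSynthesizer(metadata)
    synthesizer.fit(real_df)                           # learn marginals and copula correlations
    synthetic_df = synthesizer.sample(num_rows=n_rows)

    report = evaluate_quality(real_df, synthetic_df, metadata)
    print(report.get_score())        # overall score, as reported in Table 3
    print(report.get_properties())   # "Column Shapes" and "Column Pair Trends" scores
    return synthetic_df, report
```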
Table 4. Synthetic features that are statistically different from the original AF data. The full table with all features is reported in Table A3 in Appendix A.
Feature Number | Feature Name | Synthetic (1000), Median [IQR] | Real (58), Median [IQR] | p-Value
11 | NNxx (beats) | 10.00 [0.00, 24.00] | 1.50 [0.00, 13.25] | 0.009
12 | pNNxx (%) | 5.62 [0.00, 13.14] | 0.92 [0.00, 7.75] | 0.007
51 | VLF (Hz) AR spectrum | 0.04 [0.03, 0.04] | 0.04 [0.04, 0.04] | <0.001
Table 5. Synthetic features that are statistically different from the original CE data. The full table with all features is reported in Table A4 in Appendix A.
Feature Number | Feature Name | Synthetic (100), Median [IQR] | Real (23), Median [IQR] | p-Value
14 | TINN (ms) | 49.50 [24.75, 74.25] | 115.00 [71.50, 189.00] | <0.001
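Tables 4 and 5 (and the full Tables A3 and A4) flag the synthetic features whose distributions differ from the real data according to the Mann–Whitney U test [43]. A minimal sketch of that feature-wise screening is given below; the DataFrame names `real_df` and `synthetic_df` and the 0.05 significance threshold are assumptions for illustration.

```python
# Feature-wise Mann-Whitney U screening in the spirit of Tables 4, 5, A3, and A4.
# real_df and synthetic_df are assumed pandas DataFrames with identical HRV feature columns.
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_distributions(real_df: pd.DataFrame,
                          synthetic_df: pd.DataFrame,
                          alpha: float = 0.05) -> pd.DataFrame:
    rows = []
    for feature in real_df.columns:
        _, p_value = mannwhitneyu(synthetic_df[feature], real_df[feature],
                                  alternative="two-sided")
        rows.append({
            "feature": feature,
            "synthetic_median": synthetic_df[feature].median(),
            "real_median": real_df[feature].median(),
            "p_value": p_value,
            "different": p_value < alpha,   # rows that would appear in Tables 4 and 5
        })
    return pd.DataFrame(rows).sort_values("p_value")

# Example usage:
# report = compare_distributions(real_df, synthetic_df)
# print(report[report["different"]])
```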
Table 6. Classification report of model performance on atrial fibrillation (AF) datasets: (1) Synthetic + Real data: Deep learning model trained on synthetic + real data. (2) Real data: Deep learning model trained on real data.
Trained on | Classes | Precision | Recall | F1-Score | Specificity | AUROC
(1) Synthetic + Real data | (Without AF) | 0.71 | 0.83 | 0.77 | 0.83 | 0.77
 | (Diagnosed with AF) | 0.83 | 0.71 | 0.77 | |
(2) Real data | (Without AF) | 0.62 | 0.83 | 0.71 | 0.80 | 0.70
 | (Diagnosed with AF) | 0.80 | 0.57 | 0.67 | |
Precision: The proportion of true positives among the predicted positives. Measures accuracy in positive predictions. Recall (Sensitivity): The proportion of true positives among the actual positives. Indicates the ability to detect positive instances. F1-Score: The harmonic mean of precision and recall. Balances precision and recall in a single metric. Specificity: The proportion of true negatives among the actual negatives. Reflects the ability to correctly identify negatives. AUROC (Area Under Receiver Operating Characteristic Curve): A summary of the trade-off between true positive rate and false positive rate across all thresholds. Measures overall classification performance.
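The metrics defined above can be computed from a model's predicted probabilities as in the short sketch below; the label and score arrays are illustrative placeholders, and the 0.5 decision threshold is an assumption.

```python
# Illustrative computation of the metrics reported in Tables 6 and 7 with scikit-learn.
# y_true and y_score are placeholder arrays, not study data.
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score, confusion_matrix)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                     # hypothetical labels (1 = AF)
y_score = np.array([0.2, 0.6, 0.8, 0.4, 0.9, 0.1, 0.7, 0.3])    # model output probabilities
y_pred = (y_score >= 0.5).astype(int)                           # assumed 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Precision:  ", precision_score(y_true, y_pred))
print("Recall:     ", recall_score(y_true, y_pred))       # sensitivity
print("F1-score:   ", f1_score(y_true, y_pred))
print("Specificity:", tn / (tn + fp))                     # not provided directly by sklearn
print("AUROC:      ", roc_auc_score(y_true, y_score))     # threshold-independent summary
```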
Table 7. Classification report of model performance on the CE dataset: (1) Synthetic + Real data: Transfer learning model trained on synthetic + real data. (2) Real data: Transfer learning model trained on real data. (3) No Transfer Learning: Deep learning model trained on real data without transfer learning.
Trained on | Classes | Precision | Recall | F1-Score | Specificity | AUROC
(1) Synthetic + Real data | (Without CE) | 0.83 | 0.83 | 0.83 | 0.80 | 0.82
 | (Diagnosed with CE) | 0.80 | 0.80 | 0.80 | |
(2) Real data | (Without CE) | 0.80 | 0.67 | 0.73 | 0.67 | 0.73
 | (Diagnosed with CE) | 0.67 | 0.80 | 0.73 | |
(3) No Transfer Learning | (Without CE) | 0.56 | 0.83 | 0.67 | 0.50 | 0.52
 | (Diagnosed with CE) | 0.50 | 0.20 | 0.29 | |
Metric definitions (Precision, Recall, F1-Score, Specificity, AUROC) are the same as those given in the footnote of Table 6.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
