Article

End-to-End Electrocardiogram Signal Transformation from Continuous-Wave Radar Signal Using Deep Learning Model with Maximum-Overlap Discrete Wavelet Transform and Adaptive Neuro-Fuzzy Network Layers

by
Tae-Wan Kim
and
Keun-Chang Kwak
*
Interdisciplinary Program in IT-Bio Convergence System, Department of Electronics Engineering, Chosun University, Gwangju 61452, Republic of Korea
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(19), 8730; https://doi.org/10.3390/app14198730
Submission received: 2 September 2024 / Revised: 13 September 2024 / Accepted: 17 September 2024 / Published: 27 September 2024

Abstract

This paper is concerned with end-to-end electrocardiogram (ECG) signal transformation from a continuous-wave (CW) radar signal using a specialized deep learning model. For this purpose, the presented model is designed using convolutional neural networks (CNNs) and bidirectional long short-term memory (Bi-LSTM) with a maximum-overlap discrete wavelet transform (MODWT) layer and an adaptive neuro-fuzzy network (ANFN) layer. The proposed method extends existing deep networks and machine learning so that ECG biological information can be acquired in a non-contact manner by reconstructing signals received through a CW radar. The fully connected (FC) layer of the CNN is replaced by an ANFN layer, which mitigates the black-box problem and is suited to handling complex nonlinear data. The MODWT layer performs maximum-overlap discrete wavelet frequency decomposition to extract ECG-related frequency components from the radar signal and generate essential information. To evaluate the performance of the proposed model, we use a dataset of clinically recorded vital signs with a synchronized reference sensor signal measured simultaneously. The performance is evaluated by the mean squared error (MSE) between the measured and reconstructed ECG signals. The experimental results reveal that the proposed model performs well in comparison to existing deep learning models. From this comparison, we confirm that the ANFN layer preserves the nonlinearity of the information passed through the model when it replaces the fully connected layer used in conventional deep learning models.

1. Introduction

The systematic management of individual mental and physical health, the identification of emotional states that are not directly expressed, and the provision of corresponding conveniences and services have become goals for many people and businesses. Artificial intelligence has recently attracted attention as an essential technology for realizing these goals. Information obtained from human biological signals, facial expressions, voices, and behaviors can be used to infer unexpressed health conditions or opinions, enabling an appropriate response even when people are physically challenged or unable to express their own intentions [1]. In particular, ECG signals from a person’s heart contain high-quality information that can be used to identify their health and emotional status [2]. In general, acquiring an ECG signal requires a professional procedure in which pads and devices are attached to a motionless body. Such contact-based pads and devices, however, are not suitable for analyzing ECG signals in everyday environments [3]. Deep learning classifies and predicts using high-dimensional nonlinear features extracted from large-scale data, making it well suited to acquiring ECG information from radar, where high-frequency signals pass through or are reflected by the human body and the reflected signals are analyzed. Rebuilding an ECG in a non-contact manner from radar signals is limited, however, when it comes to checking changes in the signal over time. When a high-frequency radar signal passes through or is reflected by the human body, the received signal carries information about the physiological activity inside. In this case, signal-processing technologies such as noise filtering and heart rate extraction are applied by examining the frequency band that contains the desired information [4,5].
In order to check the frequency which changes over time, a method of analyzing signals in a high-dimensional manner by converting a one-dimensional radar signal into a two-dimensional time–frequency image is also being studied. A signal transformed into a two-dimensional format can be treated as an image, enabling the use of image preprocessing techniques and learning through CNN-based models, which are capable of handling a vast amount of embedded information.
Existing models extract features from signals and transfer them to the FC layer, resulting in linear weights being applied, leading to output values. In contrast, fuzzy logic is applied to the input data using various membership functions (MFs), expressing the degree of membership for each one and generating rules based on these memberships, thereby creating linguistic rules which capture the relationships between feature channels. Although feature extraction abilities for large-scale data are somewhat lacking, this model can generate linguistic rules, enabling transparent learning and excellent processing abilities for uncertainty and ambiguous data acquired in the real world. In the case of the conventional fully connected (FC) layer, it is difficult to represent channel-specific relationships by the linear combination of high-dimensional nonlinear representations obtained from large-scale input data. As a result, the nonlinear features passing through the FC layer risk being weakened or lost, and the expressive power may decrease due to the flat combination approach. However, in the case of the ANFN layer, it usually has useful characteristics to solve these problems. This layer has the capability of handling complex nonlinear data and is advantageous for identifying relationships through clustering between input channels. Moreover, even if there is noise in the delivered feature map, it can obtain the robustness and stability of signal reconstruction.
Thus, a deep–fuzzy method is being studied, combining two different methods to simultaneously use nonlinear feature extraction capabilities for large-scale data and processing capabilities for uncertain data [6].
In this study, we propose an improvement compared to the traditional fully connected (FC) layer—a linear classification layer commonly used in deep network models—using a fuzzy classification layer. This layer emphasizes the nonlinear features extracted by the model and enhances robustness against noise by incorporating fuzzy logic. Consequently, it becomes more effective in handling uncertain and ambiguous data and is better suited for processing nonlinear features, thereby increasing the model’s robustness to heterogeneous characteristics.
We describe, in Section 2, the study of converting radar signals into ECGs and the study of fusing deep networks and fuzzy layers. Section 3 describes the MODWT theory and the algorithms needed to design the output layer using fuzzy logic. Section 4 compares general deep network performance using the MODWT layer and that of the proposed model using the fuzzy layer, and, finally, Section 5 describes the conclusions.

2. Related Work

This section describes deep learning techniques applied to acquire vital-sign signals through external devices and convert them into an ECG or heart rate. The signal reflected from the body and received by the radar includes information on the heart rate. The aim is to recover heart-rate-related information by frequency decomposition, or by passing the signal through a deep network or algorithm, and to generate the desired ECG signal.
Sharma [7] applied an ANFN and the undecimated wavelet transform to extract fetal ECGs from maternal ECGs. This technique is useful for handling nonlinear and time-series data, recording lower MSE values than the discrete wavelet transform. Cao [8] applied a lightweight residual U-Net to pursue simple ECG measurement monitoring through a millimeter-wave (MW) radar. The experiment was carried out and verified with data measured from 10 subjects; the reconstructed ECG signal and the actual signal showed a correlation coefficient of 0.8 or higher, proving the effectiveness of the approach. Wu [9] converted radar and ECG data obtained from 30 subjects into spectrograms through short-time Fourier transforms and then conducted signal reconstruction research with RSSRnet. Verifying the performance of the signal converted through cross-channel attention in the model via the median normalized root mean square error yielded a value of 0.049. Chen [10] designed an end-to-end DNN model to rebuild a non-contact ECG signal with an MW radar system. Signal reconstruction via a deep network that extracts four-dimensional cardiac motion achieved a median timing error of 14 ms or less for the electrical events of the heart, and the results are significantly similar to real ECGs, with a median Pearson correlation coefficient of 90%. Abdelmadjid [11] reconstructed ECG signals with a deep network, using signal-processing algorithms applied to radio-frequency signals acquired in four different situations across 35 experiments. The experimental results confirmed that the dimensional transformation algorithm and the deep network are effective in transforming signals. Toda [12] conducted a study that received signals from outside the body with a frequency-modulated continuous-wave (FMCW) radar and rebuilt them into ECG signals through a CNN. In six adult males without heart disease, the reconstruction performance was confirmed with signals obtained in both contactless and contact manners.
Although noise was added, the signal reconstruction method using deep learning was confirmed to perform well even at a low signal-to-noise ratio. Li [13] conducted a wavelet-scattering network–long short-term memory (WSN-LSTM) study that enabled the measurement of the ECG signal and heart rate from FMCW radar signals using data from PhysioNet. Variational-mode extraction was used to extract the heart rate signal from the bio-signal, and the heart rate signal was reconstructed with the trained WSN-LSTM. When the heart rate was then estimated by fast Fourier transform, a lower error was recorded compared to the conventional method. Jang [14]’s study aimed to achieve contactless heart monitoring through Doppler sensors. DCG signals and deep networks were utilized to generate ECG signals containing heart rate variability similar to ECG information, rather than diagnosing through conventional ECG alone. For data processing, they used a variational autoencoder network and found a 58% improvement in consistency.
Yamamoto [15] proposed an ECG reconstruction model combining a CNN and LSTM to measure the heart rate in a non-contact manner through Doppler sensors. For verification, data from nine healthy subjects without heart disease were used, and the correlation coefficient between the reconstructed signal and the actual signal was 0.86, confirming a similar performance. Cerda [16] used a deep learning method to reconstruct ECG signals from photoplethysmogram signals. Reconstructing the signal by applying Bi-LSTM to public data, the average correlation value confirmed a high performance above 0.8. Shyu [17] proposed using a UWB radar together with the first valley peak of the energy function of the intrinsic mode functions obtained by ensemble empirical mode decomposition to properly separate signals reflected from the body without disturbance from the environment or movement. The bio-signal calculated by the above method could simultaneously capture breathing and heart rate information and effectively separate even fine signals. Yang [18] proposed methods based on permutation entropy (PE) and ensemble empirical mode decomposition algorithms to measure bio-signals in a non-contact manner. The proposed algorithm calculated the distance between the person and the radar through PE and decomposed the combined signal into intrinsic mode functions by the ensemble empirical mode decomposition algorithm. When breathing and heart rate signals were collected in this manner, it was demonstrated that they could be extracted efficiently. Kim [19] proposed a method to measure the heart rate through a frequency-modulated continuous-wave radar and a lightweight deep network.

3. CNN and Bi-LSTM Model with MODWT and ANFN Layers

We performed the following experiments to demonstrate whether the model’s performance was sufficient when the fully connected (FC) layer in the ECG-generating deep network was replaced with an adaptive neuro-fuzzy network (ANFN) layer incorporating nonlinear clustering, as shown in Figure 1. This section addresses the MODWT algorithm used as a deep network layer, as well as the algorithmic methodology for calculating the membership functions (MFs) and applying rule weights to the values delivered from the intermediate layer to the ANFN layer.
In the existing FC layer, the received feature values are weighted per channel to obtain a linear sum over the desired number of output channels. While this existing method of applying weights to nonlinear channel information is valid, we sought higher prediction accuracy by leveraging fuzzy layers that can handle nonlinearities more appropriately. Figure 2 shows the proposed method for converting the CW signals obtained through radar into an ECG signal. Obtaining ECG-related information from radar signals in the time domain alone was insufficient; thus, we converted them into two-dimensional image data in the time–frequency domain to extract the ECG information contained in the low-frequency band. Afterwards, a combined CNN and Bi-LSTM model was employed. To generate inferences from the nonlinear feature channel data using fuzzy concepts rather than conventional linear methods, we obtained reconstructed ECG signals by replacing the fully connected layer with an ANFN layer. Here, the fuzzy c-means (FCM) clustering method in the ANFN layer was used to efficiently compress the numerous received channel data.

3.1. MODWT (Maximum-Overlap Discrete Wavelet Transform) Layer

The MODWT is an extended transformation of the discrete wavelet transform (DWT) that can represent data in the time domain with multi-resolution representations such as spectrograms and scalograms. Unlike the DWT, the MODWT can be applied without limitation on the signal length. By overlapping signal sampling, a high-resolution time–frequency representation can be obtained. In addition, it is shift-invariant with respect to time shifts; thus, it can be used for noise removal or voice feature extraction. The MODWT can be computed through the scaling filter and the wavelet filter, as shown in Equations (1)–(4) below, depending on the scale level and the index of the filter coefficient.
H_{j,k} = H\left(2^{j-1}k \bmod N\right) \prod_{m=0}^{j-2} G\left(2^{m}k \bmod N\right)    (1)

W_{j,t} = \frac{1}{N} \sum_{k=0}^{N-1} H_{j,k}\, X_{k}\, e^{i 2\pi t k / N}    (2)

G_{j,k} = \prod_{m=0}^{j-1} G\left(2^{m}k \bmod N\right)    (3)

V_{j,t} = \frac{1}{N} \sum_{k=0}^{N-1} G_{j,k}\, X_{k}\, e^{i 2\pi t k / N}    (4)

where H(·) and G(·) denote the DFTs of the wavelet and scaling filters, respectively, and X_k is the DFT of the input signal.
The signal converted by the above method can be returned to the original signal through inverse transformation. The standard MODWT algorithm implements the synthesis directly as a circular convolution in the time domain; the MODWT implementation in our experiment instead performed the circular convolution in the Fourier domain. The wavelet and scaling filter coefficients at level j were calculated by taking the inverse discrete Fourier transform (DFT) of the product of the DFT of the signal and the DFT of the level-j wavelet or scaling filter. The MODWT has fixed frequency bands, so it may not be sensitive to nonlinear changes in the signal. However, the purpose of ECG reconstruction in this study was to extract cardiac information in the low-frequency band. For this reason, we used the MODWT, which enables multi-resolution analysis and allows the desired frequency band to be separated easily.
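The Fourier-domain computation described above can be sketched in a few lines of numpy. This is a minimal illustration (not the authors' implementation), assuming rescaled Haar filters; it returns the level-j wavelet coefficients W_j and the final scaling coefficients V_J per Equations (1)–(4):

```python
import numpy as np

def modwt_fft(x, levels):
    """Compute MODWT coefficients by circular convolution in the Fourier
    domain: multiply the DFT of the signal by the DFT of the level-j
    wavelet/scaling filter and take the inverse DFT (Equations (1)-(4))."""
    N = len(x)
    # Rescaled Haar MODWT filters, zero-padded to the signal length (assumption)
    g = np.zeros(N); g[:2] = [0.5, 0.5]     # scaling (low-pass) filter
    h = np.zeros(N); h[:2] = [0.5, -0.5]    # wavelet (high-pass) filter
    G1, H1 = np.fft.fft(g), np.fft.fft(h)
    X = np.fft.fft(x)
    k = np.arange(N)
    Gj = np.ones(N, dtype=complex)          # running product of scaling-filter DFTs
    ws = []
    for j in range(1, levels + 1):
        Hj = H1[(2 ** (j - 1) * k) % N] * Gj    # level-j wavelet transfer function
        ws.append(np.fft.ifft(Hj * X).real)     # W_j
        Gj = Gj * G1[(2 ** (j - 1) * k) % N]    # level-j scaling transfer function
    vJ = np.fft.ifft(Gj * X).real               # V_J
    return ws, vJ
```

Because the MODWT preserves energy, the coefficient energies across all levels sum to the signal energy, which makes a convenient sanity check on any implementation.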

3.2. CNN and Bi-LSTM Model

The CNN is a technique for extracting features from image or time-series input data through a convolutional filter. As shown in Equation (5), where X is the input data, W is the filter matrix, and b is the bias value, a window filter of the set size moves across the input to perform the convolution, and a feature map is output with as many channels as there are filters. The output feature map emphasizes nonlinearity through normalization and activation functions.
\mathrm{Feature\ map} = (X * W)_{i,j} = \sum_{m=1}^{M} \sum_{n=1}^{N} X_{i+m-1,\, j+n-1} \cdot W_{m,n} + b    (5)
We designed a hybrid convolutional autoencoder and a Bi-LSTM network to reconstruct the ECG signal. The first one-dimensional (1D) convolutional layer filtered the signal. Then, the convolutional autoencoder removed most of the high-frequency noise and captured the high-level patterns of the entire signal. Furthermore, the transposed 1D convolution layer was used to upsample 1D feature maps in the final stage of the CNN. The convolution operation downsampled the input by applying a sliding convolution filter to the input. By flattening the input and output, the convolution operation was computed by the convolution matrix and the bias vector that could be derived from the layer weights and biases. Similarly, the transposed convolution operation upsampled the input by applying a sliding convolution filter to the input.
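The matrix view of convolution and transposed convolution described above can be illustrated with a toy numpy example (the filter values and lengths are hypothetical, chosen only for illustration): a strided convolution is multiplication by a sparse matrix M, and the transposed convolution that upsamples the feature map back to the input length is simply multiplication by Mᵀ:

```python
import numpy as np

def conv_matrix(w, in_len, stride):
    """Dense matrix form of a 1D 'valid' convolution with a given stride."""
    K = len(w)
    out_len = (in_len - K) // stride + 1
    M = np.zeros((out_len, in_len))
    for i in range(out_len):
        M[i, i * stride:i * stride + K] = w   # place the filter at each stride
    return M

w = np.array([0.5, 1.0, 0.5])       # example filter weights (assumption)
x = np.arange(8, dtype=float)       # input signal of length 8
M = conv_matrix(w, len(x), stride=2)
y = M @ x                           # downsampling convolution -> length 3
x_up = M.T @ y                      # transposed convolution -> back to length 8
```

The result `y` matches a sliding correlation of the filter over the input taken every second position, which is exactly the strided convolution a conv layer computes.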
The Bi-LSTM is a model that learns the bidirectional dependencies of time-series data; extended features can be obtained because the LSTM is applied in both directions. As shown in Equations (6)–(11), where W indicates the input weights, R the recurrent weights, and b the bias, this model consists of a forget gate, which determines whether to discard cell information from the previous time step, together with an input gate and an output gate. The role of the Bi-LSTM layer in this paper is to further refine the signal details.
\mathrm{Input\ gate:}\quad I_t = \sigma_g\left(W_i x_t + R_i H_{t-1} + b_i\right)    (6)

\mathrm{Forget\ gate:}\quad F_t = \sigma_g\left(W_f x_t + R_f H_{t-1} + b_f\right)    (7)

\mathrm{Cell\ candidate:}\quad G_t = \sigma_c\left(W_g x_t + R_g H_{t-1} + b_g\right)    (8)

\mathrm{Output\ gate:}\quad O_t = \sigma_g\left(W_o x_t + R_o H_{t-1} + b_o\right)    (9)

C_t = F_t \cdot C_{t-1} + I_t \cdot G_t    (10)

H_t = O_t \cdot \sigma_c\left(C_t\right)    (11)
The Bi-LSTM requires the number of hidden units to be specified; eight units are used in this paper. The number of hidden units corresponds to the amount of information that the layer maintains between time steps. The hidden state can contain information from all previous time steps, regardless of the sequence length. If the number of hidden units is too large, the layer may overfit the training data. The hidden state does not limit the number of time steps that the layer processes in one iteration.
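Equations (6)–(11) can be written out directly. The sketch below is a single forward step for one LSTM direction (a Bi-LSTM runs one such pass forward and one backward over the sequence and concatenates the hidden states), with the gate activation σ_g taken as the sigmoid and the state activation σ_c as tanh; the toy weights are random placeholders, not trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, R, b):
    """One LSTM time step following Equations (6)-(11).
    W: input weights, R: recurrent weights, b: biases, keyed by gate."""
    i_t = sigmoid(W["i"] @ x_t + R["i"] @ h_prev + b["i"])  # input gate  (6)
    f_t = sigmoid(W["f"] @ x_t + R["f"] @ h_prev + b["f"])  # forget gate (7)
    g_t = np.tanh(W["g"] @ x_t + R["g"] @ h_prev + b["g"])  # candidate   (8)
    o_t = sigmoid(W["o"] @ x_t + R["o"] @ h_prev + b["o"])  # output gate (9)
    c_t = f_t * c_prev + i_t * g_t                          # cell state  (10)
    h_t = o_t * np.tanh(c_t)                                # hidden state (11)
    return h_t, c_t

# Toy dimensions: 4 input features, 8 hidden units (as used in this paper)
rng = np.random.default_rng(1)
W = {k: rng.standard_normal((8, 4)) * 0.1 for k in "ifgo"}
R = {k: rng.standard_normal((8, 8)) * 0.1 for k in "ifgo"}
b = {k: np.zeros(8) for k in "ifgo"}
h, c = lstm_step(rng.standard_normal(4), np.zeros(8), np.zeros(8), W, R, b)
```

Note that the hidden state is always bounded in magnitude by 1, since it is the product of a sigmoid-gated value and a tanh of the cell state.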

3.3. ANFN (Adaptive Neuro-Fuzzy Network) Layer

Deep learning extracts nonlinear features from large-scale input data and transforms them into high-dimensional feature channels. However, the transferred data are cumbersome to handle with conventional fuzzy systems, creating problems as the dimensionality and the amount of computation increase. The fuzzy c-means (FCM) clustering method can be used to estimate appropriate clusters from the transferred data and compress them to the desired number of output channels. The FCM algorithm is one of various data clustering techniques; it represents the degree to which a data point belongs to each cluster as a membership value and generates a rule value by synthesizing the memberships of the configured clusters. As shown in Figure 3, clusters are generated for each input data channel using FCM clustering. Each data point expresses its membership in the clusters of each channel as a real number between 0 and 1, representing which cluster the data point belongs to more strongly. The membership values are then leveraged to generate and learn the rules through rule consequents and rule weights, and the final output is calculated. The output value is compared against the actual ECG to compute the mean squared error (MSE) loss, which updates the training parameters contained in the CNN and Bi-LSTM with the ANFN layer. The pseudo-code of the ANFN is described in Algorithm 1.
Algorithm 1. Pseudo-code of the ANFN layer
Initialization: centers, sigma, ruleConsequents, ruleWeights
X = feature data propagated by the Bi-LSTM
NormalizedData = (X − min(X)) / (max(X) − min(X))
For C = 1:numClusters
    sig = sigma(C,:); cnt = centers(C,:)
    Distance = NormalizedData − cnt; Membership = 0
    SquaredDistance = sum(Distance^2); SquaredSig = sig^2
    For K = 1:input_channels
        Membership = Membership + exp(−SquaredDistance / (2 × SquaredSig(K)))
    End
    MembershipValues(C) = Membership
End
ruleOutput = MembershipValues × ruleConsequents
weightedSum = sum(ruleOutput × ruleWeights)
sumWeights = sum(ruleWeights)
finalOutput = weightedSum / sumWeights
loss = MSE(finalOutput, trueSignal)
Update the weights and biases in the CNN and Bi-LSTM, and centers, sigma, ruleConsequents, and ruleWeights in the ANFN layer
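A minimal numpy rendition of the forward pass in Algorithm 1 is sketched below. This is illustrative only, not the trained layer: for simplicity it uses one sigma per cluster rather than the per-channel sigma loop of the algorithm, computing a Gaussian membership for each cluster and then the weighted average of the rule outputs:

```python
import numpy as np

def anfn_forward(x, centers, sigma, rule_consequents, rule_weights):
    """ANFN layer forward pass (simplified sketch of Algorithm 1):
    Gaussian membership of the normalized feature vector to each cluster,
    followed by a weighted average of the rule outputs (defuzzification)."""
    x_n = (x - x.min()) / (x.max() - x.min())        # min-max normalization
    d2 = np.sum((x_n - centers) ** 2, axis=1)        # squared distance to each center
    memberships = np.exp(-d2 / (2.0 * sigma ** 2))   # one membership per cluster
    rule_output = memberships * rule_consequents     # rule firing values
    return np.sum(rule_output * rule_weights) / np.sum(rule_weights)
```

A data point lying exactly on a cluster center receives a membership of 1 for that cluster, so its rule consequent dominates the defuzzified output.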

4. Experimental Results

In this section, we evaluate the performance of an end-to-end model designed to convert a CW (continuous-wave) radar signal recorded in a non-contact manner into an ECG through frequency decomposition and feature extraction from the public dataset. The performance of the proposed model is compared with the conventional CNN with an FC (fully connected) layer through the mean square error (MSE). The compared deep learning networks generate complex nonlinear features from the input data, and the FC layers derive the final classification results by applying weights to these features. However, this approach does not fully reflect the interaction between the feature channels, potentially overlooking important relationships, and has the disadvantage of limiting nonlinearity. Therefore, the proposed method can preserve the relationships between the input channels and improves the nonlinear data-processing power, enabling a more accurate reconstruction.
The synchronized radar-ECG dataset consisted of synchronized data collected over 24 h through a CW radar and a reference device, the Task Force Monitor 3040i from CNSystems Medizintechnik GmbH (Graz, Austria) [20]. The equipment operated at 24 GHz in the ISM band and was based on six-port technology, expanded into individual components to form a portable radar system. For efficient learning, a portion of the acquired data (data from 6 of the 30 healthy subjects) was used. All subjects were required to complete a questionnaire on epidemiological data such as age, sex, weight, and medical history. In addition, the condition of the subjects was briefly checked by examining their blood pressure, heart rate, and heart sounds. If all the criteria were satisfied, the subject was included in the study and their measurements were made available. Figure 4 shows the CW radar signal and the corresponding ECG signal. We can recognize that it is almost impossible to identify any correlation between the CW radar signals and the corresponding reference ECG measurements.
We used the Adam optimizer and shuffled the data at every epoch over 600 epochs as the training options. The first convolution1dLayer was replaced by an MODWT layer, configured with the same filter size and number of output channels to maintain the same number of learnable parameters. Based on our previous observation, only components in a certain frequency range were kept. In place of the conventional FC layer, we used an ANFN layer based on FCM clustering to process the nonlinear characteristics effectively and to handle ambiguous results robustly. Here, FCM clustering was selected to process the high-dimensional representation. The rules were generated from the membership values of the input data points, and the weights were then obtained so that each rule could make a valid decision through the ANFN. Figure 5 shows the membership function of each input channel. We could obtain a higher accuracy compared to conventional linear processing by learning the center value, the sigma value, and the shape of each MF through the ANFN layer.
Figure 6 shows some samples of reconstructed ECG signals and measured ECG signals. As shown in Figure 6, it can be confirmed that the proposed method predicted the pattern of the heart rate more accurately. Figure 7 compares the ECG signal reconstructed by the proposed model from CW radar signals recorded in a non-contact manner and the measured ECG signals. As shown in Figure 7, the experimental results reveal that the proposed deep learning model showed a good reconstruction performance with a small loss.
To evaluate the performance of the proposed method, we used three measures: the MSE, the SNR, and the PSNR. Firstly, the MSE is the average obtained by squaring the difference between the original signal and the reconstructed signal. A lower MSE indicates a better reconstruction performance, while a higher MSE reflects a poorer performance. The SNR represents the ratio between the strength of the signal and the noise. A high SNR indicates a good quality of the reconstruction signal, while a low SNR indicates a poor quality. Finally, the PSNR calculates the peak signal-to-noise ratio between the original signal and the reconstructed signal. This ratio is often used as a quality measure between two signals: the higher the PSNR, the better the quality of the reconstructed signal.
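The three measures can be written down compactly. The definitions below follow one common convention (signal power over error power for the SNR, peak squared amplitude over the MSE for the PSNR, both in decibels), assumed here for illustration since the paper does not spell out its exact formulas:

```python
import numpy as np

def mse(ref, rec):
    """Mean squared error between reference and reconstructed signals."""
    return np.mean((ref - rec) ** 2)

def snr_db(ref, rec):
    """Signal-to-noise ratio: signal power over error power, in dB."""
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum((ref - rec) ** 2))

def psnr_db(ref, rec):
    """Peak signal-to-noise ratio: peak squared amplitude over MSE, in dB."""
    return 10.0 * np.log10(np.max(np.abs(ref)) ** 2 / mse(ref, rec))
```

With these conventions, a perfect reconstruction gives an MSE of zero and an unbounded SNR/PSNR, while a lower MSE always corresponds to a higher SNR for the same reference signal.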
Comparing the loss values through the MSE, the conventional CNN with only the FC layer showed an average loss value of 0.0138, while the proposed model with the ANFN layer recorded a lower loss value of 0.010. Table 1 lists the performance of the previous two models and the proposed model. As listed in Table 1, the SNR was 7.514 and the PSNR 18.909 for the signal reconstructed using the existing deep network with only the FC layer. Meanwhile, the deep network with the ANFN layer showed a better performance than the previous models, with an SNR of 9.023 and a PSNR of 20.418.
Figure 8 and Table 1 show the MSE histograms and the performance table of the predicted ECG signals obtained using the proposed model, respectively. As shown in Figure 8 and Table 1, the design of the deep learning model with an FC layer exhibited a higher error distribution compared to the other methods. The model combining the MODWT and deep learning with FC layers showed the most moderate error distribution out of all three methods. Finally, the proposed model with MODWT and ANFN layers demonstrated the lowest error distribution, indicating a superior reconstruction performance with less deviation from the original signal in comparison to the previous two methods.
In conclusion, to reconstruct an ECG signal from a CW radar signal, it is essential to check changes in the frequency domain alongside the analysis in the time domain. Accordingly, the reconstruction performance was lowest when the MODWT was not used. The existing FC method based on the MODWT was sufficiently effective; however, when the FC layer was replaced with an ANFN layer, we achieved reconstruction stability by maintaining the nonlinear characteristics learned by the deep network. The proposed method therefore demonstrated that nonlinear processing through an ANFN layer is more effective than through an FC layer.

5. Conclusions

We designed a specialized deep learning model for end-to-end electrocardiogram (ECG) signal transformation from a continuous-wave (CW) radar signal. The proposed model was composed of convolutional neural networks (CNNs) and bidirectional long short-term memory (Bi-LSTM) with a maximum-overlap discrete wavelet transform (MODWT) layer and an adaptive neuro-fuzzy network (ANFN) layer. From the experimental results, we observed that the FC layer linearly summed the input nonlinear data by applying channel-specific weights, but this process did not fully represent the complex nonlinear patterns extracted by the model and failed to reflect channel-specific interactions. The ANFN layer, by contrast, has a high processing power for complex patterns and nonlinear data and was more effective in dealing with these problems because it generates linguistic rules reflecting the interactions between feature channels. The experimental results showed that, when the deep learning model was combined with the MODWT and ANFN layers, the reconstruction handled the nonlinear representation of the model effectively. Furthermore, we found that the overall ECG signal reconstruction was stable, presenting superior performance in comparison to the other methods. However, it took more processing time than the existing models owing to the ANFN layer. In future research, we will study ANFN transformations to make the overall model explainable when designing deep learning models.

Author Contributions

Conceptualization, T.-W.K. and K.-C.K.; methodology, T.-W.K. and K.-C.K.; software, T.-W.K. and K.-C.K.; validation, T.-W.K. and K.-C.K.; formal analysis, T.-W.K. and K.-C.K.; investigation, T.-W.K. and K.-C.K.; resources, K.-C.K.; data curation, K.-C.K.; writing—original draft preparation, T.-W.K.; writing—review and editing, K.-C.K.; visualization, T.-W.K. and K.-C.K.; supervision, K.-C.K.; project administration, K.-C.K.; and funding acquisition, K.-C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by a research fund from Chosun University (2024).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available in a publicly accessible repository [20].

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Casado, C.; Cañellas, M.L.; López, M.B. Depression Recognition Using Remote Photoplethysmography from Facial Videos. IEEE Trans. Affect. Comput. 2023, 14, 3305–3316.
  2. Melzi, P.; Tolosana, R.; Vera-Rodriguez, R. ECG Biometric Recognition: Review, System Proposal, and Benchmark Evaluation. IEEE Access 2023, 11, 15555–15566.
  3. Alam, A.; Ansari, A.Q.; Urooj, S. Design of Contactless Capacitive Electrocardiogram (ECG) Belt System. In Proceedings of the 2022 IEEE Delhi Section Conference (DELCON), New Delhi, India, 11–13 February 2022.
  4. Hossain, S.; Uddin, S.D.; Islam, S.M.M. Heart Rate Variability Assessment Using Single Channel CW Doppler Radar. In Proceedings of the 2023 IEEE Microwaves, Antennas, and Propagation Conference (MAPCON), Ahmedabad, India, 11–14 December 2023.
  5. Li, J.F.; Yang, C.L. High-Accuracy Cardiac Activity Extraction Using RLMD-Based Frequency Envelogram in FMCW Radar Systems. In Proceedings of the 2023 IEEE/MTT-S International Microwave Symposium—IMS 2023, San Diego, CA, USA, 11–16 June 2023.
  6. Pashikanti, R.S.; Patil, C.Y.; Shinde, A.A. Cardiac Arrhythmia Classification Using Deep Convolutional Neural Network and Fuzzy Inference System. In Proceedings of the 2022 International Conference on Artificial Intelligence of Things (ICAIoT), Istanbul, Turkey, 29–30 December 2022.
  7. Sharma, A.; Gowda, D.; Anandaram, H. Extraction of Fetal ECG Using ANFIS and the Undecimated-Wavelet Transform. In Proceedings of the 2022 IEEE 3rd Global Conference for Advancement in Technology (GCAT), Bangalore, India, 7–9 October 2022.
  8. Cao, B.; Zhao, M.; Liu, B.; Ping, Q.; He, M. Research on Non-Contact Electrocardiogram Monitoring Based on Millimeter-Wave Radar and Residual Unet. In Proceedings of the IET International Radar Conference (IRC 2023), Chongqing, China, 3–5 December 2023.
  9. Wu, Y.; Ni, H.; Mao, C.; Han, J. Contactless Reconstruction of ECG and Respiration Signals with mmWave Radar Based on RSSRnet. IEEE Sens. J. 2023, 24, 6358–6368.
  10. Chen, J.; Zhang, D.; Wu, Z.; Zhou, F.; Sun, Q.; Chen, Y. Contactless Electrocardiogram Monitoring with Millimeter Wave Radar. IEEE Trans. Mob. Comput. 2022, 23, 270–285.
  11. Abdelmadjid, M.A.; Boukadoum, M. Neural Network-Based Signal Translation with Application to the ECG. In Proceedings of the 2022 20th IEEE Interregional NEWCAS Conference (NEWCAS), Quebec City, QC, Canada, 19–22 June 2022.
  12. Toda, D.; Anzai, R.; Ichige, K.; Saito, R.; Ueki, D. ECG Signal Reconstruction Using FMCW Radar and Convolutional Neural Network. In Proceedings of the 2021 20th International Symposium on Communications and Information Technologies (ISCIT), Tottori, Japan, 19–22 October 2021.
  13. Li, H.; Liu, Y.; Zhou, M.; Cao, Z.; Zhai, X.; Zhang, Y. Non-Contact Heart Rate Detection Technology Based on Deep Learning. In Proceedings of the 2023 International Seminar on Computer Science and Engineering Technology (SCSET), New York, NY, USA, 29–30 April 2023.
  14. Jang, Y.I.; Kyu Kwon, N. Comparison of the Signal Processing Methods to Enhance the Performance of the Signal Reconstruction System with Deep Learning. In Proceedings of the 2022 13th Asian Control Conference (ASCC), Jeju, Republic of Korea, 4–7 May 2022.
  15. Yamamoto, K.; Hiromatsu, R.; Ohtsuki, T. ECG Signal Reconstruction via Doppler Sensor by Hybrid Deep Learning Model with CNN and LSTM. IEEE Access 2020, 8, 130551–130560.
  16. Cerda-Dávila, D.A.; Reyes, B.A. Exploring the Reconstruction of Electrocardiograms from Photoplethysmograms via Deep Learning. In Proceedings of the 2023 IEEE EMBS R9 Conference, Guadalajara, Mexico, 5–7 October 2023.
  17. Shyu, K.K.; Chiu, L.J.; Lee, P.L.; Tung, T.H.; Yang, S.H. Detection of Breathing and Heart Rates in UWB Radar Sensor Data Using FVPIEF-Based Two-Layer EEMD. IEEE Sens. J. 2019, 19, 774–784.
  18. Yang, D.; Zhu, Z.; Liang, B. Vital Sign Signal Extraction Method Based on Permutation Entropy and EEMD Algorithm for Ultra-Wideband Radar. IEEE Access 2019, 7, 178879–178890.
  19. Kim, D.; Choi, J.; Yoon, J.; Cheon, S.; Kim, B. HeartBeatNet: Enhancing Fast and Accurate Heart Rate Estimation with FMCW Radar and Lightweight Deep Learning. IEEE Sens. Lett. 2024, 8, 6004004.
  20. Schellenberger, S.; Shi, K.; Steigleder, T.; Malessa, A.; Michler, F.; Hameyer, L.; Neumann, N.; Lurz, F.; Weigel, R.; Ostgathe, C.; et al. A Dataset of Clinically Recorded Radar Vital Signs with Synchronised Reference Sensor Signals. Sci. Data 2020, 7, 297.
Figure 1. Overview of the reconstruction of a CW radar signal into an ECG signal.
Figure 2. Overview of the ECG signal’s reconstruction process using MODWT, deep learning, and ANFN.
Figure 3. Design procedure of the ANFN.
Figure 4. Plot of demodulated radar signal and synchronized ECG: (a) CW radar samples and (b) synchronized ECG samples.
Figure 5. Membership function for each input channel.
Figure 6. Comparison between the signal predicted through MCBF-net and the true signal: (a) reconstructed ECG signal by MCBF-net and (b) actual ECG signal.
Figure 7. Prediction performance of the actual ECG signal and the reconstructed ECG signal. (Blue signal: the actual ECG; red signal: the reconstructed ECG; and green signal: the difference between the actual ECG and the reconstructed ECG.)
Figure 8. Performance comparison by MSE between measured and reconstructed ECG signals.
Table 1. Performance comparison between the previous two models and the proposed model.

Metric    DL Model with Only FC Layer    DL Model with MODWT and FC Layers    DL Model with MODWT and ANFN
MSE       0.0138                         0.0126                               0.0101
SNR       7.5139                         7.9240                               9.0227
PSNR      18.9088                        19.3189                              20.4176
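The three metrics in Table 1 can be computed for any pair of measured and reconstructed signals. The sketch below uses the standard definitions (MSE of the residual; SNR as the ratio of signal power to error power in dB; PSNR referenced to the peak of the measured signal); the paper's exact PSNR peak convention is an assumption, and the synthetic signals are illustrative only.

```python
import numpy as np

def reconstruction_metrics(ref, rec):
    """Return (MSE, SNR in dB, PSNR in dB) between a reference signal
    and its reconstruction, using the common textbook definitions."""
    ref = np.asarray(ref, dtype=float)
    rec = np.asarray(rec, dtype=float)
    err = ref - rec
    mse = np.mean(err ** 2)
    # SNR: total reference power relative to total error power
    snr = 10.0 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))
    # PSNR: peak amplitude of the reference relative to the MSE
    psnr = 10.0 * np.log10(np.max(np.abs(ref)) ** 2 / mse)
    return mse, snr, psnr

# Illustrative use with a synthetic periodic signal and a noisy copy
t = np.linspace(0.0, 1.0, 500)
ref = np.sin(2 * np.pi * 5 * t)
rec = ref + 0.05 * np.random.default_rng(0).normal(size=t.size)
mse, snr, psnr = reconstruction_metrics(ref, rec)
```

Because PSNR normalizes by the signal peak rather than its average power, it exceeds SNR whenever the peak power is above the mean power, which matches the ordering of the two rows in Table 1.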
