Article

Pruning-Based Sparse Recovery for Electrocardiogram Reconstruction from Compressed Measurements

Department of Information & Communication Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 771-813, Korea
*
Author to whom correspondence should be addressed.
Sensors 2017, 17(1), 105; https://doi.org/10.3390/s17010105
Submission received: 24 October 2016 / Revised: 18 December 2016 / Accepted: 3 January 2017 / Published: 7 January 2017
(This article belongs to the Collection Sensors for Globalized Healthy Living and Wellbeing)

Abstract

Due to the necessity of the low-power implementation of newly-developed electrocardiogram (ECG) sensors, exact ECG data reconstruction from the compressed measurements has received much attention in recent years. Our interest lies in improving the compression ratio (CR), as well as the ECG reconstruction performance of the sparse signal recovery. To this end, we propose a sparse signal reconstruction method by pruning-based tree search, which attempts to choose the globally-optimal solution by minimizing the cost function. In order to achieve low complexity for the real-time implementation, we employ a novel pruning strategy to avoid exhaustive tree search. Through the restricted isometry property (RIP)-based analysis, we show that the exact recovery condition of our approach is more relaxed than any of the existing methods. Through the simulations, we demonstrate that the proposed approach outperforms the existing sparse recovery methods for ECG reconstruction.

1. Introduction

It is well known that electrocardiogram (ECG) sensors enable effective medical diagnosis of heart problems, such as arrhythmia and myocardial infarction, in everyday life [1,2,3]. In this regard, implanted ECG-based pacemakers and wearable ECG monitoring devices have been developed to detect critical problems in the cardiovascular system [4]. Meanwhile, recently-developed ECG sensors require stable, long-term operation to support wearable devices in ambulatory environments [5,6]. Due to the growing demand for smart wearable devices, the major issue for recent ECG sensors is the efficient management of large quantities of real-time biosignals in ambulatory environments [7]. As a means of ECG signal processing with low power and small data storage, one promising paradigm that has received much attention recently is compressed sensing (CS)-based signal compression and reconstruction [8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. The well-known finding of CS-based data reconstruction is that signals can be recovered from far fewer measurements than traditional schemes whenever the signal is sparse (i.e., it has a very small number of nonzero coefficients: $s$ is a $K$-sparse signal if $\|s\|_0 = K \ll \dim(s)$, and it can be exactly reconstructed from the underdetermined measurement $y = \Phi s$) and the sensing mechanism satisfies the restricted isometry property (RIP), which is given as follows.
Definition 1.
The sensing matrix $\Phi \in \mathbb{R}^{M \times N}$ ($M \ll N$) is said to satisfy the RIP of order $K$ if there exists a restricted isometry constant (RIC) $\delta_K \in (0,1)$ such that:
$$(1 - \delta_K)\|s\|_2^2 \le \|\Phi s\|_2^2 \le (1 + \delta_K)\|s\|_2^2$$
for any $K$-sparse vector $s \in \mathbb{R}^N$.
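To make the measurement model concrete, the following Python sketch (an illustrative example with arbitrary dimensions, not the setup used in the experiments of Section 4) builds a K-sparse vector, compresses it with a random sensing matrix, and checks how well the squared norm is preserved, in the spirit of Definition 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 1000, 350, 100                  # ambient dimension, measurements, sparsity

s = np.zeros(N)                           # K-sparse signal
support = rng.choice(N, K, replace=False)
s[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random sensing matrix (RIP w.h.p.)
y = Phi @ s                                      # compressed measurement, dim(y) = M << N

ratio = np.linalg.norm(y) ** 2 / np.linalg.norm(s) ** 2
print(f"M/N = {M/N:.2f}, ||Phi s||^2 / ||s||^2 = {ratio:.3f}")   # close to 1
```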
Additional benefits of CS-based ECG processing are (1) computationally-efficient data compression and (2) the guarantee of exact reconstruction from far fewer measurements than conventional methods. While the computational efficiency of the data compression is easily seen, since it requires only a simple linear matrix multiplication (see Definition 1), our main interest lies in reducing the number of measurements required to ensure exact sparse signal recovery from compressed ECG data. A popular method for identifying the sparsest signal $s$ from the measurements $y = \Phi s$ is to formulate the $\ell_0$-minimization problem:
$$\min_{s} \|s\|_0 \quad \text{subject to} \quad y = \Phi s$$
where $\Phi \in \mathbb{R}^{M \times N}$ ($M \ll N$) is often called the sensing matrix. Since Equation (1) is known to be NP-hard [9], $\ell_1$-relaxation methods, such as basis pursuit (BP) [9], BP denoising (BPDN) [10] (or Lasso [11]) and the Dantzig selector [12], were introduced. Beyond $\ell_1$-relaxation, a further reduction in complexity can be achieved by greedy approaches. To be specific, greedy algorithms attempt to identify the support $T = \{ j \mid 1 \le j \le N,\ s_j \ne 0 \}$ (the index set of nonzero entries of $s$) in an iterative fashion, returning a sequence of estimates of the sparse input vector. However, although greedy algorithms, such as orthogonal matching pursuit (OMP) [21], enable computationally-efficient implementations, their performance is in general not satisfactory, in particular in the presence of noise. This is mainly because the stepwise identification of support elements might lead to a myopic decision in each iteration. Moreover, such a criterion does not provide any further chance to correct the mistake of selecting an incorrect index (i.e., $j$ such that $s_j = 0$) once it has been selected [21,22,23] (see Figure 1a).
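For reference, a minimal OMP routine in the above spirit might look as follows (a standard textbook sketch, not the implementation benchmarked in Section 4); it makes explicit why the stepwise selection is myopic: an index, once added to the support estimate, is never revisited.

```python
import numpy as np

def omp(A, y, K):
    """Minimal orthogonal matching pursuit: greedily select K columns of A."""
    residual = y.copy()
    support = []
    s_hat = np.zeros(0)
    for _ in range(K):
        j = int(np.argmax(np.abs(A.T @ residual)))                 # most correlated column
        support.append(j)
        s_hat, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # LS re-fit on selected columns
        residual = y - A[:, support] @ s_hat                       # chosen indices are never removed
    s = np.zeros(A.shape[1])
    s[support] = s_hat
    return s, support
```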
The aim of this work is to introduce a new sparse signal recovery scheme that overcomes these drawbacks of conventional methods and achieves effective ECG reconstruction. By employing a tree search with an aggressive pruning strategy, our method achieves not only accurate ECG reconstruction, but also a real-time implementation suitable for ambulatory environments. The main benefit of the tree search is that it enables the investigation of multiple candidates for the support $T = \{ j \mid 1 \le j \le N,\ s_j \ne 0 \}$ (see Figure 1b). That is, since the tree search examines the reliability of multiple index sets simultaneously, it improves the reconstruction performance by reducing the misdetection as well as the false alarm probabilities (in this manuscript, these denote the probabilities of a support index not being selected and of an incorrect index ($j \in T^c$) being identified as part of the support, respectively). In fact, many previous works in the literature have focused on recovering sparse signals using a tree structure [16,17,18,19,20]. For instance, tree search-based orthogonal matching pursuit (TB-OMP) constructs the search tree by spreading multiple branches for each path [16], and its modified version was introduced in [17]. In fast Bayesian matching pursuit (FBMP), a fixed number of paths with the best posterior probabilities survives in each layer [18]. Further, the multipath matching pursuit (MMP) [19] selects multiple branches ($L \ge 2$) by choosing the indices maximally correlated with the residual, and a combination of A* search [25,26] and orthogonal matching pursuit (OMP) [21] was introduced as a stage-wise residual minimization employing an effective pruning strategy [20].
Our approach, referred to as tree pruning-based matching pursuit (TPMP), provides further improvement by exploiting the full dictionary information with an aggressive pruning strategy for each path. To be specific, the proposed TPMP considerably reduces the computational burden of a brute-force tree search, yet achieves excellent recovery performance, by jointly implementing two pruning criteria: (1) pre-scanning and (2) a pruning-based tree search. In the pre-scanning stage, we greedily choose a small number of promising column indices of the sensing matrix. If we denote the set of column indices obtained in the pre-scanning stage as $\Theta$, then we set $K \le |\Theta| \ll N$, where $|\Theta|$ is the cardinality of $\Theta$. Once the pre-scanning is finished, the search tree is initialized by spreading the paths using only the elements of $\Theta$, so that the number of total possible paths in the tree is reduced from $\binom{N}{K}$ to $\binom{|\Theta|}{K}$, where $\binom{a}{b} = \frac{a!}{b!(a-b)!}$. To further alleviate the computational burden, TPMP employs a pruning strategy for removing unpromising paths from the tree. Similar to sphere decoding (SD) or list sphere decoding (LSD) with probabilistic pruning criteria [27,28,29,30], the pruning strategy is based on computing the cost function by greedily estimating the remainder of each path. Instead of obtaining the probabilistic characteristics of each path as in previous works, our method exploits a full-blown candidate with cardinality $K$, which is constructed by combining the current path with greedily estimated further indices using the complete dictionary information. By doing so, TPMP reduces the probability of missing a support element (as well as of selecting an incorrect index) subject to the sparsity-level constraint $K$. In addition, we demonstrate that this can in fact reduce the running time by shutting down the search in an early stage of the search tree.
While a preliminary version of this work was presented for an arbitrary system in [31], we show here that the proposed method is, with some modifications, highly suitable for ECG processing and performs close to the best possible estimator (the oracle least squares (LS) estimator, for which the support information is given) [32]. To be specific, we reduce the cardinality of $\Theta$ to construct a smaller number of paths in the search tree for real-time implementation. In addition, we modify the cost function computation to maintain high reconstruction accuracy, since investigating a smaller number of paths might degrade the performance. In order to achieve a further reduction in complexity, we also employ a new stopping criterion with marginal performance loss by limiting the minimum pruning threshold. Moreover, compared to [31], we demonstrate that such modifications not only reduce the search complexity, but also improve the exact recovery condition (ERC) bound. Through numerical simulations, we show that the proposed method outperforms existing methods with practical complexity and provides additional flexibility for hardware implementation.
The rest of this manuscript is organized as follows. In Section 2, we briefly provide our setup for compressing and reconstructing ECG and then propose the TPMP algorithm. In Section 3, we analyze the exact recovery condition under which TPMP identifies the support accurately. In Section 4, we provide the numerical performance of the proposed method and then conclude in Section 5.

2. Tree Search-Based ECG Reconstruction

In this section, we introduce a low-power ECG reconstruction method based on the tree search where the system model is provided in Figure 2. We first introduce an existing ECG compression procedure following the compressed sensing-based system architecture (Section 2.1) and then discuss our proposed method for reconstructing the ECG data from compressed measurements (Section 2.2).

2.1. ECG Compression

The digitized signal $\tilde{x}$ of the original ECG is approximated by $x$ by selecting only the $K$ dominant elements of $\tilde{s} = \Psi^{-1}\tilde{x}$, where $\Psi$ is the $N \times N$ discrete cosine transform (DCT) basis matrix. In other words, $\tilde{s}$ can be approximated by a $K$-sparse signal $s$ with negligible information loss as:
$$\tilde{x} = \Psi \tilde{s} \approx x = \Psi s.$$
After that, $x$ is compressed into $y \in \mathbb{R}^{M}$ ($M \ll N$) as:
$$y = \Phi x + v = \Phi \Psi s + v = A s + v = \sum_{j \in T} a_j s_j + v$$
where $\Phi \in \mathbb{R}^{M \times N}$ is the sensing matrix (or compression matrix), $A = \Phi\Psi$, $a_j$ and $s_j$ are the $j$-th column of $A$ and the $j$-th entry of $s$, respectively, and $v \in \mathbb{R}^{M}$ is additive noise (while the sparse structure in the DCT domain is preserved, $v$ denotes the measurement distortion of $y$ introduced by the compression procedure or during transmission). Note that since $|T| = K \ll N$, CS-based compression amounts to a linear superposition of $K$ elements of $s$ and thus can be implemented with a substantially small number of digital components. From the measurement reduction perspective, it is worth noting that the support information available at the compression stage cannot be jointly provided to the reconstruction part. That is, for $T$ to be given at the reconstruction stage, the amount of information to be delivered increases from $\dim(y) = M$ to $\dim(y) + |T| = M + K$ (where $\dim(y)$ is the dimension of $y$), which is against our intention. Furthermore, since the compression is based on the approximated signal $s$ with a sparse structure in the DCT basis, the sensing matrix should obey the restricted isometry property (RIP):
$$(1-\delta_K)\|s\|_2^2 \le \|\Phi\Psi s\|_2^2 = \|A s\|_2^2 \le (1+\delta_K)\|s\|_2^2.$$
Considering this property, one good choice for the sensing matrix $\Phi$ is a random matrix, since such a matrix obeys the RIP with high probability [33].
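As a rough illustration of this compression stage, the following Python sketch (hypothetical helper names and dimensions; SciPy's orthonormal DCT plays the role of Ψ) keeps the K dominant DCT coefficients of a digitized window and projects the result with a ±1/√M Bernoulli matrix, mirroring the two equations above.

```python
import numpy as np
from scipy.fft import dct, idct

def compress_ecg(x_tilde, M, K, rng):
    """Sketch of the compression in Section 2.1: K-term DCT approximation
    followed by a random Bernoulli projection."""
    N = len(x_tilde)
    s_tilde = dct(x_tilde, norm='ortho')             # s~ = Psi^{-1} x~ (orthonormal DCT)
    s = np.zeros(N)
    keep = np.argsort(np.abs(s_tilde))[-K:]          # K dominant DCT coefficients
    s[keep] = s_tilde[keep]
    x = idct(s, norm='ortho')                        # x = Psi s, the K-sparse approximation
    Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)   # Bernoulli sensing matrix
    y = Phi @ x                                      # compressed measurement, length M
    return y, Phi, s

rng = np.random.default_rng(1)
x_tilde = np.sin(np.linspace(0, 20 * np.pi, 1000))   # stand-in for a digitized ECG window
y, Phi, s = compress_ecg(x_tilde, M=350, K=100, rng=rng)
print(y.shape, np.count_nonzero(s))
```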

2.2. ECG Reconstruction via Tree Search

In order to achieve low-complexity ECG reconstruction with improved recovery accuracy, the two key pruning criteria of our method are the pre-scanning and the pruning-based tree search. In the pre-scanning stage, we greedily choose the columns of $A = \Phi\Psi$ that are most likely to be associated with nonzero entries of the sparse vector. In other words, the pre-scanning reduces the index set to be investigated from $\Omega = \{1, 2, \ldots, N\}$ to a small subset $\Theta$ of $\Omega$ (i.e., $\Theta \subset \Omega$ and $|\Theta| \ll N$). Then, the tree search is performed using only the elements of $\Theta$ (see Figure 3). While any existing sparse recovery algorithm can be used to obtain $\Theta$, in this work we use a simple method for complexity reduction, choosing only $K$ indices. That is, we select the $K$ column indices of $A = \Phi\Psi$ corresponding to the columns with the maximum correlation in magnitude with $y$ as:
$$\Theta = \arg\max_{|I_K| = K} \|A_{I_K}^{\top} y\|_2$$
where $I_K$ is an arbitrary subset of $\Omega$ with cardinality $K$ and $A_{I_K}$ is the submatrix of $A$ containing the columns associated with the indices in $I_K$. Note that since $\Theta$ is constructed by simply choosing the $K$ indices of the columns maximally correlated with $y$, the computational burden of this first stage is nearly negligible.
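In code, the pre-scanning step amounts to one matrix–vector product followed by a partial sort; a minimal sketch (illustrative only, assuming $A$ and $y$ are available as NumPy arrays) is:

```python
import numpy as np

def prescan(A, y, K):
    """Pre-scanning: indices of the K columns of A most correlated (in magnitude) with y."""
    corr = np.abs(A.T @ y)            # |a_j^T y| for every column j
    return np.argsort(corr)[-K:]      # the set Theta (as an index array)
```

Since only the $K$ largest correlations are kept, the cost of this stage is negligible compared with the tree search that follows.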
Once the pre-scanning is finished, a pruning-based tree search is performed to select the index set that minimizes the cost function. In this stage, an aggressive pruning strategy is employed to remove the paths with a small chance of being the support (the index set of nonzero entries). As shown in Figure 3, only the paths that are not removed in the $i$-th layer spread branches in the $(i+1)$-th layer. The pruning strategy removes the paths whose cost function exceeds the pruning threshold $\epsilon$, since such paths have little hope of being the support. At the beginning of the search, the initial pruning threshold $\epsilon$ can be set to any positive number, since the cost function of the support $T$ is zero ($\|r_T\|_2 = 0$) and thus the true path can survive as long as $T$ is found at least once. In order to compute the cost function $J(\Lambda_i) = \|y - A_{\Lambda_i}\hat{s}_{\Lambda_i}\|_2$ of the path $\Lambda_i = \{t_1, t_2, \ldots, t_i\}$, we obtain the indices temporarily required to complete the current path, the so-called posterior indices. By doing so, the proposed TPMP greedily obtains the remaining part of each path $\Lambda_i$ and estimates its residual magnitude at the end of the search (i.e., at the bottom layer). To this end, the posterior index set $\{\hat{t}_{i+1}, \hat{t}_{i+2}, \ldots, \hat{t}_K\}$ of each path is temporarily chosen, where the $\hat{t}_\ell$ ($i+1 \le \ell \le K$) are the indices most likely to belong to the support among the remaining indices $\Omega \setminus \Lambda_i$. In fact, a similar concept of estimating the residual magnitude at the completion of the search was proposed in [20]. While [20] presented three cost models to directly estimate the residual magnitude (for example, with the multiplicative cost model, the estimated residual magnitude at the bottom layer is the product of a constant $\alpha$ and the residual magnitude of the current path, i.e., $\alpha\|r_{\Lambda_i}\|_2$), we focus on obtaining the actual child nodes of each path. This is yet another problem of reconstructing a $(K-i)$-sparse signal, and a proper choice of algorithm enables sufficient reconstruction accuracy with practical computational complexity (in our numerical simulations, we used the subspace pursuit (SP) algorithm). For instance, one can attempt to find $\{\hat{t}_{i+1}, \ldots, \hat{t}_K\}$ minimizing the residual magnitude:
$$\{\hat{t}_{i+1}, \ldots, \hat{t}_K\} = \arg\min_{I \subset \Omega \setminus \Lambda_i,\ |I| = K-i} \|r_{\Lambda_i} - A_I \hat{s}_I\|_2$$
where $r_{\Lambda_i} = y - A_{\Lambda_i}\hat{s}_{\Lambda_i}$, $\hat{s}_{\Lambda_i} = A_{\Lambda_i}^{\dagger} y$ and $A_{\Lambda_i}^{\dagger} = (A_{\Lambda_i}^{\top}A_{\Lambda_i})^{-1}A_{\Lambda_i}^{\top}$. To be specific, the posterior indices $\{\hat{t}_{i+1}, \ldots, \hat{t}_K\}$ in Equation (5) can be obtained by MMP [19], a choice made to pursue an accurate estimate of the cost function. On the other hand, other greedy methods, such as orthogonal matching pursuit (OMP) [21] or subspace pursuit (SP) [23], can also be used for simpler hardware implementations.
After the posterior index set is obtained, the cost function of $\Lambda_i$ is computed using $\Lambda_K = \Lambda_i \cup \{\hat{t}_{i+1}, \ldots, \hat{t}_K\}$ (note that the cost function is computed using the candidate with cardinality $K$). That is, if the $\ell_2$-norm of the residual exceeds the threshold $\epsilon$ ($\|r_{\Lambda_K}\|_2 > \epsilon$), the path is removed, and whenever the search of a layer is finished, $\epsilon$ is replaced by the smallest $\|r_{\Lambda_K}\|_2$ found so far. Constructing the posterior index set with an existing greedy method might be computationally burdensome if a nontrivial number of paths remains in the tree. Therefore, we additionally alleviate the search complexity by employing a stopping criterion, which bounds the minimum residual magnitude by $c\,\mathrm{E}[\|v\|_2^2] = cN\sigma^2$ for some non-negative constant $0 \le c \le 1$. In fact, although we assume $c$ to satisfy $0 \le c \le 1$ since $\|r_T\|_2 = \|P_T^{\perp} v\|_2 \le \|v\|_2$, if a larger error tolerance is acceptable, $c$ can be set to any proper positive constant larger than one. In the noise-free scenario ($v = 0$), any positive $\epsilon$ is larger than the cost function of the support $T$ because $\|r_T\|_2 = 0$, and thus the true path survives as long as $\Lambda_K$ is obtained as $T$ at least once. Therefore, if we set $c = 0$, then whenever any path satisfying $\|r_{\Lambda_K}\|_2 = 0$ is found, we regard $\Lambda_K$ as the support and immediately shut down the search. On the other hand, in the noisy scenario ($v \ne 0$), we assume a positive $c$ ($c > 0$), since $\|r_T\|_2 = \|P_T^{\perp} v\|_2 > 0$. Note that from the perspective of accurate reconstruction, overly aggressive pruning should be avoided and a small $c$ should be chosen, and vice versa for complexity reduction. Through the performance guarantee analyses in Section 3 and the numerical simulations in Section 4, we demonstrate that this stopping criterion not only improves the recovery performance and the condition bound, but also achieves a substantial reduction in search complexity. We summarize the proposed TPMP algorithm in Table 1.
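The following Python sketch summarizes the search loop described above and in Table 1 in simplified form (it is not the authors' MATLAB implementation: a plain OMP-style completion stands in for the SP/MMP completion of the posterior indices, and some bookkeeping is omitted).

```python
import numpy as np

def _residual(A, y, idx):
    """Residual of the least-squares fit of y on the columns of A indexed by idx."""
    if len(idx) == 0:
        return y.copy()
    cols = A[:, sorted(idx)]
    s_hat, *_ = np.linalg.lstsq(cols, y, rcond=None)
    return y - cols @ s_hat

def _complete(A, y, path, K):
    """Greedy (OMP-style) completion of a partial path up to cardinality K;
    a stand-in for the SP/MMP completion used in the paper."""
    cand = list(path)
    r = _residual(A, y, cand)
    while len(cand) < K:
        corr = np.abs(A.T @ r)
        corr[cand] = -np.inf
        cand.append(int(np.argmax(corr)))
        r = _residual(A, y, cand)
    return cand, np.linalg.norm(r)

def tpmp(A, y, K, c=0.0, sigma2=0.0):
    """Simplified sketch of the pruning-based tree search (TPMP)."""
    N = A.shape[1]
    theta = np.argsort(np.abs(A.T @ y))[-K:].tolist()      # pre-scanning set Theta
    eps = np.inf                                            # initial pruning threshold
    paths = [[]]                                            # surviving paths of the current layer
    best, best_norm = None, np.inf
    for _layer in range(K):
        survivors, next_eps, seen = [], eps, set()
        for path in paths:
            for t in theta:
                if t in path:
                    continue
                new_path = path + [t]
                key = frozenset(new_path)
                if key in seen:                             # skip duplicated paths
                    continue
                seen.add(key)
                full, rn = _complete(A, y, new_path, K)     # posterior indices -> cost
                if rn > eps:                                # pruning decision
                    continue
                survivors.append(new_path)
                if rn < best_norm:
                    best, best_norm = full, rn
                next_eps = min(next_eps, rn)
                if rn ** 2 <= c * N * sigma2:               # early termination
                    return sorted(best)
        paths, eps = survivors, next_eps
        if not paths:
            break
    return sorted(best) if best is not None else []
```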

3. Recovery Bound for Exact Reconstruction

In this section, we provide the sufficient condition under which TPMP accurately reconstructs the K-sparse signal s . In our analysis, we assume that the posterior index set of each path is constructed by MMP to show how maximally our bound can be relaxed.
The following lemmas are useful for our analysis.
Lemma 1.
(Lemma 3 in [8]): If the sensing matrix $A$ satisfies the RIP of both orders $K_1$ and $K_2$, then $\delta_{K_1} \le \delta_{K_2}$ for any $K_1 < K_2$.
Lemma 2.
(Consequences of the RIP [8,22]): If $0 < \delta_{|I|} < 1$ for $I \subset \Omega$, then for any vector $s \in \mathbb{R}^{|I|}$:
$$(1-\delta_{|I|})\|s\|_2 \le \|A_I^{\top} A_I s\|_2 \le (1+\delta_{|I|})\|s\|_2, \qquad \frac{1}{1+\delta_{|I|}}\|s\|_2 \le \|(A_I^{\top} A_I)^{-1} s\|_2 \le \frac{1}{1-\delta_{|I|}}\|s\|_2.$$
Lemma 3.
(Lemma 2.1 in [34]): Let $I_1, I_2 \subset \Omega$ and $I_1 \cap I_2 = \emptyset$. If $0 < \delta_{|I_1|+|I_2|} < 1$, then:
$$\|A_{I_1}^{\top} A_{I_2} s\|_2 \le \delta_{|I_1|+|I_2|}\|s\|_2.$$
Lemma 4.
For an $M \times N$ matrix $A$, the spectral norm $\|A\|_2$ is bounded as:
$$\|A\|_2 \le \sqrt{1 + \delta_{\min(M,N)}}.$$
Definition 2.
(Residual definition in [35]): For an index set $\Lambda_i$ with cardinality $|\Lambda_i| = i$, the residual $r_{\Lambda_i}$ can be rewritten as:
$$r_{\Lambda_i} = P_{\Lambda_i}^{\perp} y = y - A_{\Lambda_i}\hat{s}_{\Lambda_i} = y - A_{\Lambda_i}(A_{\Lambda_i}^{\top}A_{\Lambda_i})^{-1}A_{\Lambda_i}^{\top} y = A_{T\setminus\Lambda_i}s_{T\setminus\Lambda_i} - A_{\Lambda_i}(A_{\Lambda_i}^{\top}A_{\Lambda_i})^{-1}A_{\Lambda_i}^{\top}A_{T\setminus\Lambda_i}s_{T\setminus\Lambda_i} = A_{T\setminus\Lambda_i}s_{T\setminus\Lambda_i} - A_{\Lambda_i}z_{\Lambda_i} = A_{T\cup\Lambda_i}\bar{s}_{T\cup\Lambda_i}$$
where $P_{\Lambda_i}^{\perp} = I - A_{\Lambda_i}(A_{\Lambda_i}^{\top}A_{\Lambda_i})^{-1}A_{\Lambda_i}^{\top}$ is the projection matrix onto the orthogonal complement of $\mathrm{span}(A_{\Lambda_i})$, $z_{\Lambda_i} = (A_{\Lambda_i}^{\top}A_{\Lambda_i})^{-1}A_{\Lambda_i}^{\top}A_{T\setminus\Lambda_i}s_{T\setminus\Lambda_i}$, and $\bar{s}_{T\cup\Lambda_i}$ is the vector supported on $T\cup\Lambda_i$ whose entries on $T\setminus\Lambda_i$ equal $s_{T\setminus\Lambda_i}$ and whose entries on $\Lambda_i$ equal $-z_{\Lambda_i}$.
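The identity in Definition 2 is easy to verify numerically; the following sketch (arbitrary small dimensions, noiseless measurements) compares the projected residual with the expression in terms of the unselected support columns.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K, i = 60, 200, 8, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)
T = rng.choice(N, K, replace=False)                 # support of the K-sparse signal
s = np.zeros(N)
s[T] = rng.standard_normal(K)
y = A @ s                                           # noiseless measurement
Lam = T[:i]                                         # a partial path contained in T

# r_Lam = P_Lam^perp y: projection onto the orthogonal complement of span(A_Lam)
A_L = A[:, Lam]
P_perp = np.eye(M) - A_L @ np.linalg.inv(A_L.T @ A_L) @ A_L.T
r = P_perp @ y

# the same residual via Definition 2: A_{T\Lam} s_{T\Lam} - A_Lam z_Lam
rest = np.setdiff1d(T, Lam)
z = np.linalg.inv(A_L.T @ A_L) @ A_L.T @ (A[:, rest] @ s[rest])
r_alt = A[:, rest] @ s[rest] - A_L @ z
print(np.allclose(r, r_alt))                        # True
```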

3.1. Exact Recovery from Noiseless Measurements

TPMP is guaranteed to exactly reconstruct s if the following two conditions are jointly satisfied:
  • (3-1) (Theorem 1) At least one support index should be found in the pre-scanning (i.e., $T \cap \Theta \ne \emptyset$).
  • (3-2) (Theorem 2) At least one true path $\Lambda_i \subset T$ has to survive the pruning strategy in each layer.
If (3-1) holds, then at least one branch in each layer of the tree is a support element. Therefore, whenever there is at least one true path in the current layer that is not removed by the pruning strategy, (3-1) enables the true path to proceed further. Along with (3-1), an additional condition ensuring the survival of the true path is necessary for exact support identification, which is given as (3-2). In our analysis, (3-1) and (3-2) are guaranteed by the results in Theorems 1 and 2, respectively, and these theorems jointly provide the overall sufficient condition for exact recovery in Theorem 3.
First, we obtain the sufficient condition for (3-1). Let $\kappa$ be the largest correlation in magnitude between $y$ and the columns associated with correct indices ($j \in T$), and let $\zeta$ be the $K$-th largest correlation in magnitude between $y$ and the columns corresponding to incorrect indices ($j \in T^c$). That is,
$$\kappa = \max_{j\in T}|a_j^{\top} y|, \qquad \zeta = \min_{j\in I_K}|a_j^{\top} y|$$
where $a_j$ is the $j$-th column of $A$ and $I_K = \arg\max_{|I|=K,\ I\subset T^c}\|A_I^{\top} y\|_2$. The following lemma provides the lower bound of $\kappa$ and the upper bound of $\zeta$, respectively.
Lemma 5.
κ and ζ satisfy:
$$\kappa \ge \frac{1-\delta_{2K}}{\sqrt{K}}\|s_T\|_2, \qquad \zeta \le \frac{\delta_{2K}}{\sqrt{K}}\|s_T\|_2.$$
Proof of Lemma 5.
See Appendix A. ☐
Using Lemma 5, one can obtain the sufficient condition for (3-1).
Theorem 1 (Sufficient condition for (3-1)).
At least one support element is found in the pre-scanning stage under:
$$\delta_{2K} < 0.5.$$
Proof of Theorem 1.
In order to choose at least one correct index in the pre-scanning stage, we should have κ > ζ . From Lemma 5, we can easily obtain the desired result.  ☐
Next, the condition (3-2) is guaranteed if the posterior indices of a true path always consist of support elements, that is, $\{\hat{t}_{i+1}, \ldots, \hat{t}_K\} = T \setminus \Lambda_i$ where $\Lambda_i \subset T$. This is because $\|r_T\|_2 = 0$, and thus the condition (3-2) holds for any positive pruning threshold $\epsilon$:
$$\|r_{\Lambda_i \cup \{\hat{t}_{i+1}, \ldots, \hat{t}_K\}}\|_2 = \|r_T\|_2 < \epsilon.$$
As mentioned, the problem of choosing the posterior index set for a given true path $\Lambda_i \subset T$ is equivalent to the problem of reconstructing a $(K-i)$-sparse signal from the measurement $r_{\Lambda_i}$. Before we proceed, we provide useful definitions for our analysis. Let $\Upsilon_l$ be the combination of $\Lambda_i$ and $\{\hat{t}_{i+1}, \ldots, \hat{t}_{i+l}\}$ where $1 \le l \le K-i$ (i.e., $\Upsilon_l = \Lambda_i \cup \{\hat{t}_{i+1}, \ldots, \hat{t}_{i+l}\}$). Next, let $\lambda_l$ be the largest correlation in magnitude between the residual $r_{\Upsilon_l}$ and the columns associated with correct indices, and let $\gamma_l$ be the $(K-i)$-th largest correlation in magnitude between $r_{\Upsilon_l}$ and the columns associated with incorrect indices. That is,
$$\lambda_l = \max_{j\in T\setminus\Upsilon_l}|a_j^{\top} r_{\Upsilon_l}|, \qquad \gamma_l = \min_{j\in D_i}|a_j^{\top} r_{\Upsilon_l}|$$
where $D_i = \arg\max_{|D|=K-i,\ D\subset\Omega\setminus T}\|A_D^{\top} r_{\Upsilon_l}\|_2$. In the following lemma, we provide the lower bound of $\lambda_l$ and the upper bound of $\gamma_l$.
Lemma 6.
If $\Upsilon_l \subset T$, then:
$$\lambda_l \ge \frac{1-\delta_K}{\sqrt{K-1}}\|\bar{s}_{T\cup\Upsilon_l}\|_2, \qquad \gamma_l \le \frac{\delta_{2K-1}}{\sqrt{K-1}}\|\bar{s}_{T\cup\Upsilon_l}\|_2$$
where $\bar{s}_{T\cup\Upsilon_l}$ is defined as in Definition 2, i.e., its entries on $T\setminus\Upsilon_l$ equal $s_{T\setminus\Upsilon_l}$ and its entries on $\Upsilon_l$ equal $-(A_{\Upsilon_l}^{\top}A_{\Upsilon_l})^{-1}A_{\Upsilon_l}^{\top}A_{T\setminus\Upsilon_l}s_{T\setminus\Upsilon_l}$.
Proof of Lemma 6.
See Appendix B.  ☐
The following theorem provides the sufficient condition for (3-2).
Theorem 2 (Sufficient condition for (3-2)).
The posterior index set of a true path $\Lambda_i \subset T$ consists only of correct indices under:
$$\delta_{2K-1} < 0.5$$
for any $1 \le i \le K-1$.
Proof of Theorem 2.
Every element of the posterior index set satisfies $\hat{t}_{i+l} \in T$ for any $1 \le l \le K-i$ if the inequality $\lambda_l > \gamma_l$ is satisfied. That is, from Lemma 6, this holds under:
$$\delta_K + \delta_{2K-1} < 1.$$
From Lemma 1, this inequality is satisfied whenever $2\delta_{2K-1} < 1$, which is the desired result.  ☐
The overall recovery condition of TPMP can be obtained by combining Theorems 1 and 2.
Theorem 3 (Recovery condition of TPMP).
TPMP exactly identifies the support of any $K$-sparse signal $s$ from $y = As$ under:
$$\delta_{2K} < 0.5.$$
Proof of Theorem 3.
The condition in Equation (13) is obtained by taking the stricter of the conditions in Theorems 1 and 2.  ☐
It is worthwhile to note that TPMP provides a more relaxed recovery condition than conventional greedy algorithms, such as OMP ($\delta_{K+1} < \frac{1}{\sqrt{K+1}}$) [21], SP ($\delta_{3K} < 0.165$) [23], gOMP ($\delta_{LK} < \frac{\sqrt{L}}{\sqrt{L}+2\sqrt{K}}$) [35] and MMP ($\delta_{2K} < 0.33$). In addition, our condition matches the state-of-the-art recovery bound for greedy algorithms presented in [24].

3.2. Reconstruction from Noisy Measurements

We also consider reconstructing ECG when the compressed signal y is distorted by noise. In this scenario, the measurement y is defined as:
$$y = \Phi x + v = As + v$$
where $v$ is an additive noise vector. Using this expression of $y$ containing $v$, we analyze the condition for TPMP to accurately identify the support, following the main architecture of the proofs for the noiseless scenario. The two requirements for TPMP to identify the support are that (1) at least one support element should be chosen in the pre-scanning process (i.e., $T \cap \Theta \ne \emptyset$) (Theorem 4) and (2) a true path ($\Lambda_i \subset T$) should survive the pruning strategy (Theorem 7). It is worth noting that while the pre-scanning condition (Theorem 4) is similar to that in the previous section, the search tree condition (Theorem 7) has to satisfy an additional requirement compared to the result in Theorem 2. In the noiseless scenario (i.e., $v = 0$), the support $T$ always survives the pruning strategy once it is detected, since $\|r_T\|_2 = 0$ is the minimum residual magnitude. On the other hand, in the presence of noise (i.e., $v \ne 0$), the additional guarantee that the support has the minimum residual magnitude is required, that is,
$$\arg\min_{\Lambda_K} \|r_{\Lambda_K}\|_2 = T$$
should hold to ensure the search tree condition.
Before we proceed, we provide a useful lemma in our analysis.
Lemma 7.
For any $\Upsilon_l \subset T$, $\bar{s}_{T\cup\Upsilon_l}$ (whose entries on $T\setminus\Upsilon_l$ equal $s_{T\setminus\Upsilon_l}$ and whose entries on $\Upsilon_l$ equal $-z_{\Upsilon_l}$) satisfies:
$$\|\bar{s}_{T\cup\Upsilon_l}\|_2 \ge \frac{\sqrt{2(1+\delta_K^2)}}{1+\delta_K}\|s_{T\setminus\Upsilon_l}\|_2.$$
Proof of Lemma 7.
Since $z_{\Upsilon_l} = (A_{\Upsilon_l}^{\top}A_{\Upsilon_l})^{-1}A_{\Upsilon_l}^{\top}A_{T\setminus\Upsilon_l}s_{T\setminus\Upsilon_l}$ (see Definition 2), $\|\bar{s}_{T\cup\Upsilon_l}\|_2^2$ is:
$$\|\bar{s}_{T\cup\Upsilon_l}\|_2^2 = \|s_{T\setminus\Upsilon_l}\|_2^2 + \|(A_{\Upsilon_l}^{\top}A_{\Upsilon_l})^{-1}A_{\Upsilon_l}^{\top}A_{T\setminus\Upsilon_l}s_{T\setminus\Upsilon_l}\|_2^2.$$
From Lemma 2 and Definition 1, we then have:
$$\|(A_{\Upsilon_l}^{\top}A_{\Upsilon_l})^{-1}A_{\Upsilon_l}^{\top}A_{T\setminus\Upsilon_l}s_{T\setminus\Upsilon_l}\|_2^2 \ge \frac{1}{(1+\delta_{|\Upsilon_l|})^2}\|A_{\Upsilon_l}^{\top}A_{T\setminus\Upsilon_l}s_{T\setminus\Upsilon_l}\|_2^2 \ge \frac{1-\delta_{|\Upsilon_l|}}{(1+\delta_{|\Upsilon_l|})^2}\|A_{T\setminus\Upsilon_l}s_{T\setminus\Upsilon_l}\|_2^2 \ge \frac{(1-\delta_{|\Upsilon_l|})(1-\delta_{|T\setminus\Upsilon_l|})}{(1+\delta_{|\Upsilon_l|})^2}\|s_{T\setminus\Upsilon_l}\|_2^2 \ge \frac{(1-\delta_K)^2}{(1+\delta_K)^2}\|s_{T\setminus\Upsilon_l}\|_2^2.$$
From Equations (16) and (17), we obtain the lower bound of $\|\bar{s}_{T\cup\Upsilon_l}\|_2$ as:
$$\|\bar{s}_{T\cup\Upsilon_l}\|_2^2 \ge \|s_{T\setminus\Upsilon_l}\|_2^2 + \frac{(1-\delta_K)^2}{(1+\delta_K)^2}\|s_{T\setminus\Upsilon_l}\|_2^2 = \frac{2(1+\delta_K^2)}{(1+\delta_K)^2}\|s_{T\setminus\Upsilon_l}\|_2^2$$
which is the desired result.  ☐
We first analyze the condition ensuring that at least one support element is chosen by the pre-scanning from noisy measurements. Let $\rho$ be the largest correlation in magnitude between $y$ and the columns associated with correct indices, and let $\eta$ be the $K$-th largest correlation in magnitude between $y$ and the columns associated with incorrect indices. That is,
$$\rho = \max_{j\in T}|a_j^{\top} y|, \qquad \eta = \min_{j\in I_K}|a_j^{\top} y|$$
where $I_K = \arg\max_{|I|=K,\ I\subset T^c}\|A_I^{\top} y\|_2$.
In the following lemmas, we provide the lower bound of ρ and the upper bound of η.
Lemma 8.
ρ satisfies:
$$\rho \ge \frac{1}{\sqrt{K}}\left[(1-\delta_K)\|s_T\|_2 - \sqrt{1+\delta_K}\,\|v\|_2\right]$$
and η satisfies:
$$\eta \le \frac{1}{\sqrt{K}}\left[\delta_{2K}\|s_T\|_2 + \sqrt{1+\delta_K}\,\|v\|_2\right].$$
Proof of Lemma 8.
See Appendix C.  ☐
The following theorem provides the condition ensuring that at least one support element is identified by the pre-scanning.
Theorem 4 (Pre-scanning condition).
At least one element in the support is found in the pre-scanning stage if the nonzero entries of the original sparse signal s satisfy:
$$\min_{j\in T}|s_j| > \frac{2\sqrt{1+\delta_K}}{\sqrt{K}\,(1-\delta_K-\delta_{2K})}\|v\|_2.$$
Proof of Theorem 4.
It is clear that at least one support element ($j \in T$) is chosen in the pre-scanning if:
$$\rho > \eta.$$
From Lemma 8, Equation (22) is satisfied if:
$$\frac{1}{\sqrt{K}}\left[(1-\delta_K)\|s_T\|_2 - \sqrt{1+\delta_K}\,\|v\|_2\right] > \frac{1}{\sqrt{K}}\left[\delta_{2K}\|s_T\|_2 + \sqrt{1+\delta_K}\,\|v\|_2\right]$$
and thus we have:
$$\|s_T\|_2 > \frac{2\sqrt{1+\delta_K}}{1-\delta_K-\delta_{2K}}\|v\|_2.$$
Since $\|s_T\|_2 \ge \sqrt{K}\min_{j\in T}|s_j|$, this is guaranteed under the desired condition:
$$\min_{j\in T}|s_j| > \frac{2\sqrt{1+\delta_K}}{\sqrt{K}\,(1-\delta_K-\delta_{2K})}\|v\|_2.$$
Next, we analyze the sufficient condition ensuring that the true path is not removed from the search tree. This requirement holds if (1) the posterior indices $\{\hat{t}_{i+1}, \ldots, \hat{t}_K\}$ of any true path $\Lambda_i \subset T$ satisfy $\{\hat{t}_{i+1}, \ldots, \hat{t}_K\} = T \setminus \Lambda_i$ and (2) the corresponding $\Lambda_K = \Lambda_i \cup \{\hat{t}_{i+1}, \ldots, \hat{t}_K\} = T$ satisfies $\|r_{\Lambda_K}\|_2 < \epsilon$. To obtain the condition ensuring $\{\hat{t}_{i+1}, \ldots, \hat{t}_K\} = T \setminus \Lambda_i$ for any $\Lambda_i \subset T$, let $\beta_l$ be the largest correlation in magnitude between $r_{\Upsilon_l}$ and the columns associated with correct indices, and let $\alpha_l$ be the $(K-i)$-th largest correlation in magnitude between $r_{\Upsilon_l}$ and the columns associated with incorrect indices. That is,
$$\beta_l = \max_{j\in T\setminus\Upsilon_l}|a_j^{\top} r_{\Upsilon_l}|, \qquad \alpha_l = \min_{j\in D_i}|a_j^{\top} r_{\Upsilon_l}|$$
where $D_i = \arg\max_{|D|=K-i,\ D\subset\Omega\setminus T}\|A_D^{\top} r_{\Upsilon_l}\|_2$. The following lemma provides the lower bound of $\beta_l$ and the upper bound of $\alpha_l$.
Lemma 9.
For any $\Upsilon_l \subset T$, $\beta_l$ and $\alpha_l$ satisfy:
$$\beta_l > \frac{1}{\sqrt{K-1}}\left[(1-\delta_K)\|\bar{s}_{T\cup\Upsilon_l}\|_2 - \sqrt{1+\delta_{K-1}}\,\|v\|_2\right]$$
and:
$$\alpha_l < \frac{1}{\sqrt{K-1}}\left[\delta_{2K-1}\|\bar{s}_{T\cup\Upsilon_l}\|_2 + \sqrt{1+\delta_{K-1}}\,\|v\|_2\right],$$
respectively.
Proof of Lemma 9.
See Appendix D.  ☐
The guaranteed condition of the posterior indices to satisfy { t ^ i + 1 ,   ,   t ^ K } = T Λ i can be identified by combining Lemmas 7 and 9.
Theorem 5.
For any $\Lambda_i \subset T$, the posterior indices satisfy $\{\hat{t}_{i+1}, \ldots, \hat{t}_K\} = T \setminus \Lambda_i$ under:
$$\min_{j\in T}|s_j| > \frac{(1+\delta_K)\sqrt{2(1+\delta_{K-1})}}{(1-\delta_K-\delta_{2K-1})\sqrt{1+\delta_K^2}}\|v\|_2.$$
Proof of Theorem 5.
Similar to Theorem 2, one can see that the posterior indices of the true path $\Lambda_i$ contain only true indices for any $1 \le l \le K-i$ if:
$$\beta_l > \alpha_l,$$
which, using Lemma 9, is satisfied if:
$$\frac{1}{\sqrt{K-1}}\left[(1-\delta_K)\|\bar{s}_{T\cup\Upsilon_l}\|_2 - \sqrt{1+\delta_{K-1}}\,\|v\|_2\right] > \frac{1}{\sqrt{K-1}}\left[\delta_{2K-1}\|\bar{s}_{T\cup\Upsilon_l}\|_2 + \sqrt{1+\delta_{K-1}}\,\|v\|_2\right]$$
where $\Upsilon_l \subset T$. After some manipulation, we have:
$$\|\bar{s}_{T\cup\Upsilon_l}\|_2 > \frac{2\sqrt{1+\delta_{K-1}}}{1-\delta_K-\delta_{2K-1}}\|v\|_2.$$
Recall Equation (14) from Lemma 7 that:
$$\|\bar{s}_{T\cup\Upsilon_l}\|_2 \ge \frac{\sqrt{2(1+\delta_K^2)}}{1+\delta_K}\|s_{T\setminus\Upsilon_l}\|_2$$
and since $\|s_{T\setminus\Upsilon_l}\|_2 \ge \min_{j\in T}|s_j|$, we get the desired result.  ☐
Next, we provide the guaranteed condition under which the residual of the support satisfies:
$$\|r_T\|_2 \le \epsilon$$
for any positive pruning threshold ϵ. Recall that the pruning threshold is updated by the smallest residual in magnitude among all Λ K found in each layer of the tree. Therefore, as long as Equation (32) holds and Λ K = T is found at least once, T has a smaller residual in magnitude than any possible pruning threshold ϵ and, thus, cannot be removed from the search tree.
Theorem 6.
The support has the minimum residual magnitude among all possible $\Lambda_K$ ($|\Lambda_K| = K$) if:
$$\min_{j\in T}|s_j| > \frac{2(1+\delta_K)}{\sqrt{2(1-\delta_{2K})(1+\delta_K^2)}}\|v\|_2.$$
Proof of Theorem 6.
See Appendix E.  ☐
If Theorems 5 and 6 jointly hold, then the condition that the true path Λ i is not removed can be guaranteed as follows.
Theorem 7 (Search tree condition).
The true path $\Lambda_i \subset T$ survives the pruning strategy for any $i$ under:
$$\min_{j\in T}|s_j| > \max(\mu, \omega)\,\|v\|_2$$
where $\mu = \frac{(1+\delta_K)\sqrt{2(1+\delta_{K-1})}}{(1-\delta_K-\delta_{2K-1})\sqrt{1+\delta_K^2}}$ and $\omega = \frac{2(1+\delta_K)}{\sqrt{2(1-\delta_{2K})(1+\delta_K^2)}}$.
Proof of Theorem 7.
Immediate from Theorems 5 and 6.  ☐
By combining the results from Theorems 4 and 7, we obtain the sufficient condition of exact support identification from noisy measurements.
Theorem 8 (Exact support identification of TPMP).
The TPMP algorithm accurately identifies the support from the noisy measurement $y = As + v$ under:
$$\min_{j\in T}|s_j| > \max(\nu, \mu, \omega)\,\|v\|_2 = \gamma\,\|v\|_2$$
where $\nu = \frac{2\sqrt{1+\delta_K}}{\sqrt{K}\,(1-\delta_K-\delta_{2K})}$, $\mu = \frac{(1+\delta_K)\sqrt{2(1+\delta_{K-1})}}{(1-\delta_K-\delta_{2K-1})\sqrt{1+\delta_K^2}}$ and $\omega = \frac{2(1+\delta_K)}{\sqrt{2(1-\delta_{2K})(1+\delta_K^2)}}$.
Proof of Theorem 8.
Immediate from Theorems 4 and 7.  ☐
Note that the sufficient condition given in Equation (35) implies that the signal-to-noise ratio (SNR) of the sparse signal should be higher than the constant γ. If Equation (35) holds, the support $T$ can be exactly identified, and the signal reconstruction is based on the columns of $A$ associated with $T$. In this sense, the system becomes equivalent to the overdetermined system $y = A_T s_T + v$ and achieves performance identical to the best possible estimator, referred to as the oracle estimator $\hat{s}_T = A_T^{\dagger} y$.
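In other words, once the support is known, the oracle benchmark reduces to an ordinary least-squares fit on the columns indexed by $T$; a minimal sketch (illustrative only) is:

```python
import numpy as np

def oracle_ls(A, y, T):
    """Oracle least-squares estimator: LS fit of y on the (known) support T."""
    T = sorted(T)
    s_hat = np.zeros(A.shape[1])
    coef, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
    s_hat[T] = coef
    return s_hat
```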

4. Simulation and Discussion

In this section, we evaluate the numerical ECG recovery performance of the proposed TPMP algorithm and of existing sparse recovery algorithms. The simulation is based on the discrete cosine transform (DCT) basis matrix $\Psi \in \mathbb{R}^{N\times N}$ and the random Bernoulli sensing matrix $\Phi \in \mathbb{R}^{M\times N}$ ($M \ll N$), where each element of $\Phi$ is $\pm\frac{1}{\sqrt{M}}$, and the measurement is distorted by an additive noise vector $v \sim \mathcal{N}(0, \sigma^2 I)$ (following [19], we set the signal-to-noise ratio (SNR) to 40 dB when $v \ne 0$). In the simulation, we check the reconstruction performance by performing at least 5000 independent trials for each number of measurements $M$, which is directly related to the compression ratio (CR) defined as $\mathrm{CR} = \frac{N-M}{N}\times 100\ (\%)$. In addition, we use two measures for the performance evaluation: (1) the exact recovery ratio (ERR), which is the probability of exact identification of the support of $s$ ($T = \{ j \mid s_j \ne 0 \}$), and (2) the percentage root-mean-square difference (PRD), which is defined as:
$$\mathrm{PRD} = \frac{\|\tilde{x} - \hat{x}\|_2}{\|\tilde{x} - \mathrm{E}[\tilde{x}]\|_2}$$
where $\tilde{x}$ is the digitized signal of the original ECG and $\hat{x}$ is the reconstructed ECG. We exploit six ECG samples from the European ST-T Database in PhysioNet [36], and for each trial a randomly chosen window of 1000 consecutive signal samples is used, with sparsity level $K = 100$. The samples are measured from distinct patients, including people with normal status, left circumflex artery (LCA) or right coronary artery (RCA) diseases (see Figure 4). Each record is two hours in duration and contains two signals, each sampled at 250 samples per second with 12-bit resolution over a nominal 20-mV input range. The sample values were rescaled after digitization with reference to calibration signals in the original analog recordings, in order to obtain a uniform scale of 200 ADC units per mV for all signals, and each of the signal files is 5,400,000 bytes long. All algorithms under test are implemented in MATLAB and run on a personal computer with an Intel Core i5 processor and Microsoft Windows 7. The algorithms under comparison are as follows:
  • Oracle estimator [32]
  • Linear MMSE
  • Basis pursuit (BP) [9]
  • Orthogonal matching pursuit (OMP) [21]
  • Subspace pursuit (SP) [23]
  • Multipath matching pursuit (MMP) [19] with L = 2
  • Depth-first multipath matching pursuit (MMP-DF) [19] with L = 2 : MMP with reduced complexity
  • TPMP ( c = 0 ,   1 )
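For completeness, the two evaluation metrics defined at the beginning of this section can be computed with a few lines of code (a small helper sketch; names and values are illustrative):

```python
import numpy as np

def compression_ratio(N, M):
    """CR = (N - M) / N * 100 (%)"""
    return (N - M) / N * 100.0

def prd(x_tilde, x_hat):
    """Percentage root-mean-square difference between the digitized ECG
    x_tilde and its reconstruction x_hat."""
    return np.linalg.norm(x_tilde - x_hat) / np.linalg.norm(x_tilde - np.mean(x_tilde))

print(compression_ratio(N=1000, M=350))             # 65.0 (%)
```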
Before the discussion, it is clear that higher CR (or smaller M) degrades the reconstruction performance (see Figure 5), and thus, we demonstrate the effectiveness of the proposed method by showing that TPMP requires minimum M for accurately identifying the support.
First, we evaluate the ECG reconstruction performance from noiseless measurements ($y = \Phi x$). Figure 6 provides the ERR performance as a function of $M$ when $K = 100$ (i.e., $K/N = 10\%$). Overall, we observe that TPMP performs better than the existing algorithms, in particular for small $M$. While the ERR of TPMP drops moderately as $M$ decreases, that of the other conventional algorithms drops sharply and fails to provide reliable recovery. In Figure 7, we plot the PRD performance of the sparse recovery methods. Note that since the exact support information is given to the oracle estimator, it can be regarded as the lower bound of the PRD (since $\tilde{x}$ is approximated by $x$, the PRD determined by the approximation error $\|\tilde{x} - x\|_2$ is the lower bound of the PRD). Due to the multiple candidate investigation, we observe that TPMP reaches the optimal performance with the minimum $M$ among the tested algorithms. For instance, while SP requires at least $M = 385$ measurements for optimal performance, TPMP requires only $M = 325$. In addition, while MMP provides a lower PRD for very small $M$, the PRD of TPMP reaches the best possible performance with smaller $M$. This demonstrates that TPMP not only outperforms conventional methods in reconstruction accuracy, but also enables a reduction in data storage. Figure 8 shows the running time complexity. Since TPMP performs a tree search, its running time can naturally be large. Interestingly, we observe that the complexity of TPMP becomes similar to that of conventional greedy algorithms in the large-$M$ regime. This is because $\|r_T\|_2 = 0$, and thus, whenever any path satisfying $\|r_{\Lambda_K}\|_2 = 0$ is found, we regard $\Lambda_K$ as the support and immediately stop the search. In this sense, the support can be identified in an early layer of the search.
Next, we provide the recovery performance in the presence of noise, that is, when the measurement is $y = \Phi x + v$. Recall that in the noiseless scenario, the search could be finished in an early stage whenever any $\Lambda_K$ satisfying $\|r_{\Lambda_K}\|_2 = 0$ was found with $c = 0$. However, this is no longer valid in the presence of noise, since $\|r_T\|_2 = \|P_T^{\perp} v\|_2 \ne 0$, and thus we assume a positive $c$ in the noisy setting. Note that this stopping criterion does not affect the recovery condition: if Theorem 8 holds, then $\|r_T\|_2$ is still the minimum residual magnitude, and thus $T$ is selected as the support whether $\|r_T\|_2^2 > cN\sigma^2$ or not. Therefore, $c$ is used only to shut down the search earlier than in the original TPMP, and a proper choice of $c$ is required for the best tradeoff between numerical performance and complexity. In Figure 9, we plot the PRD performance of the sparse recovery algorithms. Similar to the noiseless scenario, we observe that the proposed TPMP algorithm outperforms the conventional methods. In particular, the PRDs of TPMP with both $c = 0$ and $c = 1$ are smaller than that of MMP. To be specific, we observe in Figure 9 that TPMP performs closest to the lower bound of the PRD (the PRD of the oracle LS) among all of the tested algorithms. In order to demonstrate the validity of a real-time implementation, Figure 10 provides the average running time of the sparse recovery algorithms as a function of $M$. Similar to the results in Figure 8, the running time of TPMP is the highest due to the tree search, especially for small $M$. Nevertheless, the computational burden of TPMP can be substantially reduced by limiting the minimum pruning threshold through $c$. In particular, if $c = 1$, a significant complexity reduction over the original TPMP ($c = 0$) is achieved, and it runs with a complexity similar to that of OMP. In addition, since TPMP with $c = 1$ performs similarly to MMP with lower complexity than MMP and MMP-DF, TPMP provides a better tradeoff between performance and complexity than MMP.

5. Conclusions

In this work, we proposed an effective ECG reconstruction method referred to as tree pruning-based matching pursuit (TPMP). In order to improve the accuracy of ECG recovery for large CR (or small $M$), the TPMP algorithm performs a tree search and investigates multiple promising candidates, while the complexity overhead caused by the tree search is reduced by the pruning strategy. We derived a sufficient condition under which TPMP exactly identifies the support, which provides an improved recovery bound compared to existing methods. In addition, our numerical results demonstrate that TPMP provides improved performance at a complexity competitive with conventional algorithms.

Acknowledgments

This work was partly supported by the Daegu Gyeongbuk Institute of Science and Technology (DGIST) R & D Program of the Ministry of Science, ICT and Future Planning (16-BD-0404), the convergence technology development program for bionic arm through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2016M3C1B2912987), and the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2015R1A2A2A01008218).

Author Contributions

Jaeseok Lee and Ji-Woong Choi contributed to the algorithm design, recovery bound analysis and simulation. Kyungsoo Kim contributed to the background knowledge of electrocardiogram processing and the intuition for proper mathematical modeling.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 5

From the definition of κ, we have:
$$\kappa = \max_{j\in T}|a_j^{\top} y| = \|A_T^{\top} y\|_{\infty} \ge \frac{1}{\sqrt{K}}\|A_T^{\top} y\|_2$$
where (A1) follows from the inequality $\|u\|_{\infty} \ge \frac{1}{\sqrt{\|u\|_0}}\|u\|_2$ for any vector $u$. Using Definition 1, we further have:
$$\kappa \ge \frac{1}{\sqrt{K}}\|A_T^{\top} y\|_2 = \frac{1}{\sqrt{K}}\|A_T^{\top} A s\|_2 = \frac{1}{\sqrt{K}}\|A_T^{\top} A_T s_T\|_2 \ge \frac{1-\delta_K}{\sqrt{K}}\|s_T\|_2 \ge \frac{1-\delta_{2K}}{\sqrt{K}}\|s_T\|_2$$
where the first inequality in (A2) is from Lemma 2 and the second is from Lemma 1.
Next, from the definition of ζ, we have:
$$\sqrt{K}\,\zeta \le \sqrt{\sum_{j\in I_K}|a_j^{\top} y|^2} = \|A_{I_K}^{\top} y\|_2$$
where $I_K = \arg\max_{|I|=K,\ I\subset T^c}\|A_I^{\top} y\|_2$. Since $I_K$ and $T$ are disjoint ($I_K \subset T^c$) and $|I_K| = K$, we have:
$$\|A_{I_K}^{\top} y\|_2 = \|A_{I_K}^{\top} A_T s_T\|_2 \le \delta_{2K}\|s_T\|_2$$
where (A4) is from Lemma 3. From (A3) and (A4), we get the desired result.

Appendix B. Proof of Lemma 6

For any $\Upsilon_l = \Lambda_i \cup \{\hat{t}_{i+1}, \ldots, \hat{t}_{i+l}\} \subset T$ and $\lambda_l = \max_{j\in T\setminus\Upsilon_l}|a_j^{\top} r_{\Upsilon_l}|$, we have:
$$\max_{j\in T\setminus\Upsilon_l}|a_j^{\top} r_{\Upsilon_l}| = \|A_{T\setminus\Upsilon_l}^{\top} r_{\Upsilon_l}\|_{\infty} \ge \frac{1}{\sqrt{|T\setminus\Upsilon_l|}}\|A_{T\setminus\Upsilon_l}^{\top} r_{\Upsilon_l}\|_2 = \frac{1}{\sqrt{K-i-l}}\|A_T^{\top} A_{T\cup\Upsilon_l}\bar{s}_{T\cup\Upsilon_l}\|_2$$
where $P_{\Upsilon_l}^{\perp} = I - A_{\Upsilon_l}(A_{\Upsilon_l}^{\top}A_{\Upsilon_l})^{-1}A_{\Upsilon_l}^{\top}$ and (B1) follows from Definition 2. Since $T \cup \Upsilon_l = T$ ($\Upsilon_l \subset T$), (B1) can be further bounded using Lemma 2 as:
$$\max_{j\in T\setminus\Upsilon_l}|a_j^{\top} r_{\Upsilon_l}| \ge \frac{1}{\sqrt{K-i-l}}\|A_T^{\top} A_T \bar{s}_{T\cup\Upsilon_l}\|_2 \ge \frac{1-\delta_{|T|}}{\sqrt{K-i-l}}\|\bar{s}_{T\cup\Upsilon_l}\|_2 \ge \frac{1-\delta_{|T|}}{\sqrt{K-1}}\|\bar{s}_{T\cup\Upsilon_l}\|_2 = \frac{1-\delta_K}{\sqrt{K-1}}\|\bar{s}_{T\cup\Upsilon_l}\|_2$$
where the last inequality holds since $K-i-l \le K-1$.
Next, from the definition of $\gamma_l = \min_{j\in D_i}|a_j^{\top} r_{\Upsilon_l}|$, we have:
$$\sqrt{|D_i|}\,\gamma_l \le \sqrt{\sum_{j\in D_i}|a_j^{\top} r_{\Upsilon_l}|^2} = \|A_{D_i}^{\top} r_{\Upsilon_l}\|_2$$
where $D_i = \arg\max_{|D|=K-i,\ D\subset\Omega\setminus T}\|A_D^{\top} r_{\Upsilon_l}\|_2$ and $\Upsilon_l \subset T$. Then:
$$\|A_{D_i}^{\top} r_{\Upsilon_l}\|_2 = \|A_{D_i}^{\top} A_{T\cup\Upsilon_l}\bar{s}_{T\cup\Upsilon_l}\|_2 = \|A_{D_i}^{\top} A_T \bar{s}_{T\cup\Upsilon_l}\|_2 \le \delta_{2K-i}\|\bar{s}_{T\cup\Upsilon_l}\|_2 \le \delta_{2K-1}\|\bar{s}_{T\cup\Upsilon_l}\|_2$$
where the first equality is from Definition 2, the second holds because $T \cup \Upsilon_l = T$, the first inequality is from Lemma 3 and the last is from Lemma 1.

Appendix C. Proof of Lemma 8

From the definition of ρ, we have:
$$\rho = \max_{j\in T}|a_j^{\top} y| = \|A_T^{\top} y\|_{\infty} \ge \frac{1}{\sqrt{|T|}}\|A_T^{\top} y\|_2 = \frac{1}{\sqrt{K}}\|A_T^{\top}(A_T s_T + v)\|_2 \ge \frac{1}{\sqrt{K}}\left[\|A_T^{\top} A_T s_T\|_2 - \|A_T^{\top} v\|_2\right].$$
From Lemma 2 and Definition 1, we have:
$$\|A_T^{\top} A_T s_T\|_2 \ge (1-\delta_K)\|s_T\|_2$$
and:
$$\|A_T^{\top} v\|_2 \le \sqrt{1+\delta_K}\,\|v\|_2,$$
respectively. Using (C2) and (C3), ρ is lower bounded as:
$$\rho \ge \frac{1}{\sqrt{K}}\left[(1-\delta_K)\|s_T\|_2 - \sqrt{1+\delta_K}\,\|v\|_2\right],$$
which is the desired result.
From the definition of η, we have:
$$\sqrt{K}\,\eta \le \sqrt{\sum_{j\in I_K}|a_j^{\top} y|^2} = \|A_{I_K}^{\top} y\|_2$$
where $I_K = \arg\max_{|I|=K,\ I\subset T^c}\|A_I^{\top} y\|_2$. Using the triangle inequality, we have:
$$\|A_{I_K}^{\top} y\|_2 = \|A_{I_K}^{\top}(A_T s_T + v)\|_2 \le \|A_{I_K}^{\top} A_T s_T\|_2 + \|A_{I_K}^{\top} v\|_2.$$
Since $I_K$ and $T$ are disjoint ($I_K \subset T^c$), we further have:
$$\|A_{I_K}^{\top} A_T s_T\|_2 \le \delta_{2K}\|s_T\|_2$$
from Lemma 3 and:
$$\|A_{I_K}^{\top} v\|_2 \le \sqrt{1+\delta_K}\,\|v\|_2$$
from Definition 1. Using (C7) and (C8), we have:
$$\|A_{I_K}^{\top} y\|_2 \le \delta_{2K}\|s_T\|_2 + \sqrt{1+\delta_K}\,\|v\|_2$$
and since $\|A_{I_K}^{\top} y\|_2 \ge \sqrt{K}\,\eta$, we have:
$$\eta \le \frac{1}{\sqrt{K}}\left[\delta_{2K}\|s_T\|_2 + \sqrt{1+\delta_K}\,\|v\|_2\right],$$
which is the desired result.

Appendix D. Proof of Lemma 9

Since $\Upsilon_l \subset T$ and $\beta_l = \max_{j\in T\setminus\Upsilon_l}|a_j^{\top} r_{\Upsilon_l}|$, we have:
$$\beta_l = \|A_{T\setminus\Upsilon_l}^{\top} r_{\Upsilon_l}\|_{\infty} \ge \frac{1}{\sqrt{|T\setminus\Upsilon_l|}}\|A_{T\setminus\Upsilon_l}^{\top} P_{\Upsilon_l}^{\perp} y\|_2 = \frac{1}{\sqrt{|T\setminus\Upsilon_l|}}\|A_{T\setminus\Upsilon_l}^{\top} P_{\Upsilon_l}^{\perp}(A_T s_T + v)\|_2 = \frac{1}{\sqrt{K-i-l}}\|A_T^{\top} P_{\Upsilon_l}^{\perp} A_{T\setminus\Upsilon_l}s_{T\setminus\Upsilon_l} + A_{T\setminus\Upsilon_l}^{\top} P_{\Upsilon_l}^{\perp} v\|_2 \ge \frac{1}{\sqrt{K-i-l}}\left[\|A_T^{\top} P_{\Upsilon_l}^{\perp} A_{T\setminus\Upsilon_l}s_{T\setminus\Upsilon_l}\|_2 - \|A_{T\setminus\Upsilon_l}^{\top} P_{\Upsilon_l}^{\perp} v\|_2\right] \ge \frac{1}{\sqrt{K-i-l}}\left[\|A_T^{\top} A_T \bar{s}_{T\cup\Upsilon_l}\|_2 - \|A_{T\setminus\Upsilon_l}^{\top} P_{\Upsilon_l}^{\perp} v\|_2\right]$$
where the last step uses Definition 2. From the corresponding bound in Appendix B and Lemma 4, we have:
$$\|A_T^{\top} A_T \bar{s}_{T\cup\Upsilon_l}\|_2 \ge (1-\delta_K)\|\bar{s}_{T\cup\Upsilon_l}\|_2$$
and:
$$\|A_{T\setminus\Upsilon_l}^{\top} P_{\Upsilon_l}^{\perp} v\|_2 \le \|A_{T\setminus\Upsilon_l}\|_2\,\|P_{\Upsilon_l}^{\perp} v\|_2 \le \sqrt{1+\delta_{|T\setminus\Upsilon_l|}}\,\|P_{\Upsilon_l}^{\perp} v\|_2 \le \sqrt{1+\delta_{|T\setminus\Upsilon_l|}}\,\|v\|_2 = \sqrt{1+\delta_{K-i-l}}\,\|v\|_2,$$
respectively. Combining the three bounds above, we have:
$$\beta_l \ge \frac{1}{\sqrt{K-i-l}}\left[\|A_T^{\top} A_T \bar{s}_{T\cup\Upsilon_l}\|_2 - \|A_{T\setminus\Upsilon_l}^{\top} P_{\Upsilon_l}^{\perp} v\|_2\right] \ge \frac{1}{\sqrt{K-i-l}}\left[(1-\delta_K)\|\bar{s}_{T\cup\Upsilon_l}\|_2 - \sqrt{1+\delta_{K-i-l}}\,\|v\|_2\right] > \frac{1}{\sqrt{K-1}}\left[(1-\delta_K)\|\bar{s}_{T\cup\Upsilon_l}\|_2 - \sqrt{1+\delta_{K-1}}\,\|v\|_2\right].$$
Next, from the definition of $\alpha_l = \min_{j\in D_i}|a_j^{\top} r_{\Upsilon_l}|$, we have:
$$\sqrt{|D_i|}\,\alpha_l \le \sqrt{\sum_{j\in D_i}|a_j^{\top} r_{\Upsilon_l}|^2} = \|A_{D_i}^{\top} r_{\Upsilon_l}\|_2$$
where $D_i = \arg\max_{|D|=K-i,\ D\subset\Omega\setminus T}\|A_D^{\top} r_{\Upsilon_l}\|_2$. Using the triangle inequality, this can be rewritten as:
$$\|A_{D_i}^{\top} r_{\Upsilon_l}\|_2 = \|A_{D_i}^{\top} P_{\Upsilon_l}^{\perp} y\|_2 = \|A_{D_i}^{\top} P_{\Upsilon_l}^{\perp}(A_T s_T + v)\|_2 = \|A_{D_i}^{\top} P_{\Upsilon_l}^{\perp} A_{T\setminus\Upsilon_l}s_{T\setminus\Upsilon_l} + A_{D_i}^{\top} P_{\Upsilon_l}^{\perp} v\|_2 \le \|A_{D_i}^{\top} A_{T\cup\Upsilon_l}\bar{s}_{T\cup\Upsilon_l}\|_2 + \|A_{D_i}^{\top} P_{\Upsilon_l}^{\perp} v\|_2$$
where the last step uses the triangle inequality and Definition 2. From the corresponding bound in Appendix B and Lemma 4, this is upper bounded as:
$$\|A_{D_i}^{\top} r_{\Upsilon_l}\|_2 \le \delta_{2K-i}\|\bar{s}_{T\cup\Upsilon_l}\|_2 + \|A_{D_i}\|_2\,\|P_{\Upsilon_l}^{\perp} v\|_2 \le \delta_{2K-i}\|\bar{s}_{T\cup\Upsilon_l}\|_2 + \sqrt{1+\delta_{|D_i|}}\,\|P_{\Upsilon_l}^{\perp} v\|_2 \le \delta_{2K-i}\|\bar{s}_{T\cup\Upsilon_l}\|_2 + \sqrt{1+\delta_{K-i}}\,\|v\|_2.$$
Then, by combining the last two displays, we get:
$$\alpha_l \le \frac{1}{\sqrt{|D_i|}}\left[\delta_{2K-i}\|\bar{s}_{T\cup\Upsilon_l}\|_2 + \sqrt{1+\delta_{K-i}}\,\|v\|_2\right] = \frac{1}{\sqrt{K-i}}\left[\delta_{2K-i}\|\bar{s}_{T\cup\Upsilon_l}\|_2 + \sqrt{1+\delta_{K-i}}\,\|v\|_2\right].$$
Since $1 \le i \le K-1$, we obtain the desired result.

Appendix E. Proof of Theorem 6

If the support has the minimum residual magnitude among all possible candidates with cardinality $K$, then:
$$\|r_T\|_2 < \|r_{\Lambda_K}\|_2$$
for any $\Lambda_K \ne T$. Note that the above inequality always holds if the upper bound of $\|r_T\|_2$ is smaller than the lower bound of $\|r_{\Lambda_K}\|_2$. In this regard, we first obtain the upper bound of $\|r_T\|_2$ as:
$$\|r_T\|_2 = \|P_T^{\perp} y\|_2 = \|P_T^{\perp} A_T s_T + P_T^{\perp} v\|_2 = \|P_T^{\perp} v\|_2 \le \|v\|_2$$
where the third equality is because $P_T^{\perp} A_T s_T = 0$.
Next, we obtain the lower bound of $\|r_{\Lambda_K}\|_2$ when $\Lambda_K \ne T$. For any $\Lambda_K \ne T$, we have:
$$\|r_{\Lambda_K}\|_2 = \|P_{\Lambda_K}^{\perp} y\|_2 = \|P_{\Lambda_K}^{\perp}(As + v)\|_2 = \|P_{\Lambda_K}^{\perp}(A_{T\setminus\Lambda_K}s_{T\setminus\Lambda_K} + v)\|_2 \ge \|A_{T\cup\Lambda_K}\bar{s}_{T\cup\Lambda_K}\|_2 - \|P_{\Lambda_K}^{\perp} v\|_2$$
where the last step is from the triangle inequality and Definition 2. This is further lower bounded as:
$$\|r_{\Lambda_K}\|_2 \ge \|A_{T\cup\Lambda_K}\bar{s}_{T\cup\Lambda_K}\|_2 - \|P_{\Lambda_K}^{\perp} v\|_2 \ge \sqrt{1-\delta_{|T\cup\Lambda_K|}}\,\|\bar{s}_{T\cup\Lambda_K}\|_2 - \|P_{\Lambda_K}^{\perp} v\|_2 \ge \sqrt{1-\delta_{|T\cup\Lambda_K|}}\,\|\bar{s}_{T\cup\Lambda_K}\|_2 - \|v\|_2 \ge \sqrt{1-\delta_{2K}}\,\|\bar{s}_{T\cup\Lambda_K}\|_2 - \|v\|_2$$
where the second inequality is from Definition 1, the third is because $\|P_{\Lambda_K}^{\perp} v\|_2 \le \|v\|_2$ and the last is from Lemma 1. Comparing this lower bound with the upper bound of $\|r_T\|_2$, the condition $\|r_T\|_2 < \|r_{\Lambda_K}\|_2$ holds if:
$$\|\bar{s}_{T\cup\Lambda_K}\|_2 > \frac{2\|v\|_2}{\sqrt{1-\delta_{2K}}}.$$
Using the result from Lemma 7, this is in turn guaranteed if:
$$\|s_{T\setminus\Lambda_K}\|_2 > \frac{2(1+\delta_K)\|v\|_2}{\sqrt{2(1-\delta_{2K})(1+\delta_K^2)}}$$
and since $\|s_{T\setminus\Lambda_K}\|_2 \ge \min_{j\in T}|s_j|$, we have the desired result.

References

  1. Birnbaum, Y.; Wilson, J.M.; Fiol, M.; Luna, A.B.; Eskola, M.; Nikus, K. ECG diagnosis and classification of acute coronary syndromes. Ann. Noninvasive Electrocardiol. 2014, 19, 4–14. [Google Scholar] [CrossRef] [PubMed]
  2. Singh, B.; Singh, D.; Jaryal, A.; Deepak, K. Ectopic beats in approximate entropy and sample entropy-based HRV assessment. Int. J. Syst. Sci. 2012, 43, 884–893. [Google Scholar] [CrossRef]
  3. Steg, P.G.; James, S.K.; Atar, D.; Badano, L.P.; Blömstrom-Lundqvist, C.; Borger, M.A.; Di Mario, C.; Dickstein, K.; Ducrocq, G.; Fernandez-Aviles, F.; et al. ESC guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation. Eur. Heart J. 2012, 33, 2569–2619. [Google Scholar] [CrossRef] [PubMed]
  4. Manina, G.; Agnelli, G.; Becattini, C.; Zingarini, G.; Paciaroni, M. 96 hours ECG monitoring for patients with ischemic cryptogenic stroke or transient ischaemic attack. Intern. Emerg. Med. 2014, 9, 65–67. [Google Scholar] [CrossRef] [PubMed]
  5. Weder, M.; Hegemann, D.; Amberg, M.; Hess, M.; Boesel, L.; Abächerli, R.; Meyer, V.; Rossi, R. Embroidered electrode with silver/titanium coating for long-term ECG monitoring. Sensors 2015, 15, 1750–1759. [Google Scholar] [CrossRef] [PubMed]
  6. Chen, Y.H.; de Beeck, M.; Vanderheyden, L.; Carrette, E.; Mihajlović, V.; Vanstreels, K.; Grundlehner, B.; Gadeyne, S.; Boon, P.; van Hoof, C. Soft, comfortable polymer dry electrodes for high quality ECG and EEG recording. Sensors 2014, 14, 23758–23780. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Wang, C.; Lu, W.; Narayanan, M.R.; Redmond, S.J.; Lovell, N.H. Low-power technologies for wearable telecare and telehealth systems: A review. Biomed. Eng. Lett. 2015, 5, 1–9. [Google Scholar] [CrossRef]
  8. Candes, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215. [Google Scholar] [CrossRef]
  9. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef]
  10. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic Decomposition by Basis Pursuit. SIAM J. Sci. Comput. 1998, 20, 33–61. [Google Scholar] [CrossRef]
  11. Tibshirani, R. Regression Shrinkage and Selection via the Lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar]
  12. Candes, E.; Tao, T. The Dantzig Selector: Statistical Estimation When p Is Much Larger than n. Ann. Stat. 2007, 35, 2313–2351. [Google Scholar] [CrossRef]
  13. Mamaghanian, H.; Khaled, N.; Atienza, D.; Vandergheynst, P. Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes. IEEE Trans. Biomed. Eng. 2011, 58, 2456–2466. [Google Scholar] [CrossRef] [PubMed]
  14. Craven, D.; McGinley, B.; Kilmartin, L.; Glavin, M.; Jones, E. Compressed sensing for bioelectric signals: A review. IEEE J. Biomed. Health Inf. 2015, 19, 529–540. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, Z.; Jung, T.P.; Makeig, S.; Rao, B.D. Compressed sensing for energy-efficient wireless telemonitoring of noninvasive fetal ECG via block sparse bayesian learning. IEEE Trans. Biomed. Eng. 2013, 60, 300–309. [Google Scholar] [CrossRef] [PubMed]
  16. Cotter, S.F.; Rao, B.D. Application of tree-based searches to matching pursuit. In Proceedings of the IEEE International Conference Acoustics, Speech, and Signal Process, Salt Lake City, UT, USA, 7–11 May 2001; pp. 3933–3936.
  17. Karabulut, G.Z.; Moura, L.; Panario, D.; Yongacoglu, A. Integrating flexible tree searches to orthogonal matching pursuit algorithm. IEE Proc. Vis. Image Signal Process. 2006, 153, 538–548. [Google Scholar] [CrossRef]
  18. Schniter, P.; Potter, L.C.; Ziniel, J. Fast Bayesian matching pursuit. In Proceedings of the Information Theory and Applications Workshop, San Diego, CA, USA, 27 January–1 February 2008; pp. 326–333.
  19. Kwon, S.; Wang, J.; Shim, B. Multipath matching pursuit. IEEE Trans. Inf. Theory 2014, 60, 2986–3001. [Google Scholar]
  20. Karahanoglu, N.B.; Erdogan, H. A* orthogonal matching pursuit: Best-first search for compressed sensing signal recovery. Digit. Signal Process. 2012, 22, 555–568. [Google Scholar] [CrossRef]
  21. Cai, T.; Wang, L. Orthogonal matching pursuit for sparse signal recovery with noise. IEEE Trans. Inf. Theory 2011, 57, 4680–4688. [Google Scholar] [CrossRef]
  22. Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Commun. ACM 2010, 53, 93–100. [Google Scholar] [CrossRef]
  23. Dai, W.; Milenkovic, O. Subspace Pursuit for Compressive Sensing Signal Reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249. [Google Scholar] [CrossRef]
  24. Karahanoglu, N.; Erdogan, H. Improving A*OMP: Theoretical and empirical analyses with a novel dynamic cost model. J. Signal Process. 2016, 118, 62–74. [Google Scholar] [CrossRef]
  25. Dechter, R.; Pearl, J. Generalized best-first search strategies and the optimality of A*. J. ACM 1985, 32, 505–536. [Google Scholar] [CrossRef]
  26. Jelinek, F. Statistical Methods for Speech Recognition; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  27. Fincke, U.; Pohst, M. Improved methods for calculating vectors of short length in a lattice, including a complexity analysis. Math. Comput. 1985, 44, 463–471. [Google Scholar] [CrossRef]
  28. Hochwald, B.M.; Ten Brink, S. Achieving near-capacity on a multiple-antenna channel. IEEE Trans. Commun. 2003, 51, 389–399. [Google Scholar] [CrossRef]
  29. Shim, B.; Kang, I. Sphere Decoding With a Probabilistic Tree Pruning. IEEE Trans. Signal Process. 2008, 56, 4867–4878. [Google Scholar] [CrossRef]
  30. Lee, J.; Shim, B.; Kang, I. Soft-Input Soft-Output List Sphere Detection with a Probabilistic Radius Tightening. IEEE Trans. Wirel. Commun. 2012, 11, 2848–2857. [Google Scholar] [CrossRef]
  31. Lee, J.; Kwon, S.; Shim, B. A greedy search algorithm with tree pruning for sparse signal recovery. In Proceedings of the 2014 IEEE International Symposium on Information Theory (ISIT), Honolulu, HI, USA, 29 June–4 July 2014; pp. 1847–1851.
  32. Chen, W.; Rodrigues, M.R.D.; Wassell, I.J. Projection Design for Statistical Compressive Sensing: A Tight Frame Based Approach. IEEE Trans. Signal Process. 2013, 61, 2016–2029. [Google Scholar] [CrossRef]
  33. Baraniuk, R.G.; Davenport, M.A.; DeVore, R.; Wakin, M.B. A Simple Proof of the Restricted Isometry Property for Random Matrices. Constr. Approx. 2008, 28, 253–263. [Google Scholar] [CrossRef]
  34. Candes, E.J. The restricted isometry property and its implications for compressed sensing. C. R. Math. 2008, 346, 589–592. [Google Scholar] [CrossRef]
  35. Satpathi, S.; Lochan Das, R.; Chakraborty, M. Improving the Bound on the RIP Constant in Generalized Orthogonal Matching Pursuit. IEEE Signal Process. Lett. 2013, 20, 1074–1077. [Google Scholar] [CrossRef]
  36. Taddei, A.; Distante, G.; Emdin, M.; Pisani, P.; Moody, G.B.; Zeelenberg, C.; Marchesi, C. The European ST-T database: Standard for evaluating systems for the analysis of ST-T changes in ambulatory electrocardiography. Eur. Heart J. 1992, 13, 1164–1172. [Google Scholar] [PubMed]
Figure 1. Illustration of conventional orthogonal matching pursuit (OMP) and recovery based on tree search where the true paths contain only elements in the support T, and incorrect paths consist of at least one element from T c . While OMP does not provide any further chances after it selects an incorrect index in the second layer (a), multiple path investigation in (b) enables a reduction in the misdetection of the support element.
Figure 2. Basic structure of ECG compression and reconstruction. Note that the reconstruction is based on the discrete cosine transform (DCT) basis ( s ^ ), while compression is performed in the time domain ( y = Φ x ).
Figure 3. Illustration of the proposed method when | Θ | = 4 and K = 3 . The branches of each node consist of only the elements in Θ, and the paths with large cost functions are removed from the search.
Figure 4. Examples of the tested ECG samples in our simulations and the reconstructed signal for each sample signal by tree pruning-based matching pursuit (TPMP) at compression ratio (CR) = 65 , where the solid and dotted lines denote the original ECG ( x ) and the reconstructed ECG ( x ^ = Ψ s ^ ), respectively.
Figure 5. ECG reconstruction using the proposed method when N = 1000 and K = 100 where the solid and dotted lines denote the original ECG ( x ) and the reconstructed ECG ( x ^ = Ψ s ^ ), respectively.
Figure 6. Exact recovery ratio (ERR) performance of the sparse signal recovery methods when N = 1000 and K = 100 .
Figure 7. Percentage root-mean-square difference (PRD) performance of the sparse signal recovery methods when N = 1000 and K = 100 .
Figure 8. Complexity of the sparse signal recovery methods when N = 1000 and K = 100 .
Figure 9. PRD performance of the sparse signal reconstruction from noisy measurements.
Figure 10. Average running time complexity of the sparse signal reconstruction from noisy measurements.
Table 1. The TPMP algorithm.
Input: measurement $y$, sensing matrix $A$, sparsity $K$, constant $0 \le c \le 1$
Output: reconstructed ECG $\hat{x}$
Initialization: $i := 0$, $W_0 := \{\emptyset\}$, $\epsilon_1 := \infty$
$\Theta = \arg\max_{|I_K|=K}\|A_{I_K}^{\top} y\|_2$ (pre-scanning)
while $i < K$ do
   $i := i+1$, $W_i := \emptyset$, $\epsilon_{i+1} := \epsilon_i$
  for $l = 1$ to $|W_{i-1}|$ do
   $\theta := \Theta \setminus \Lambda_{i-1}(l)$
   for $j = 1$ to $|\theta|$ do
    $\Lambda_i := \Lambda_{i-1}(l) \cup t_i(j)$ (update $j$-th path)
    if $\Lambda_i \notin W_i$ then (check for a duplicated path)
      Obtain $\{\hat{t}_{i+1}, \ldots, \hat{t}_K\}$ (posterior index set construction)
       $\Lambda_K = \Lambda_i \cup \{\hat{t}_{i+1}, \ldots, \hat{t}_K\}$
       $r_{\Lambda_K} = P_{\Lambda_K}^{\perp} y$
       if $\|r_{\Lambda_K}\|_2 \le \epsilon_i$ then (pruning decision)
        $W_i := W_i \cup \Lambda_i$, $I := \Lambda_K$
        if $\|r_I\|_2^2 \le cN\sigma^2$ then (search termination)
          $j = |\theta|+1$, $l = |W_{i-1}|+1$, $i = K+1$
        end if
        if $\|r_I\|_2 \le \epsilon_{i+1}$ then
          $\epsilon_{i+1} := \|r_I\|_2$ (update pruning threshold)
        end if
      end if
    end if
   end for
  end for
end while
$\hat{s} = A_I^{\dagger} y$
return $\hat{x} = \Psi\hat{s}$ (ECG reconstruction)
