Article

Applicability of Compressive Sensing for Wireless Energy Harvesting Nodes

1 School of Electronic Engineering, Soongsil University, Seoul 06978, Korea
2 Department of Wireless Communications Engineering, Kwangwoon University, Seoul 01897, Korea
3 College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, Gyeonggi-do, Korea
* Author to whom correspondence should be addressed.
Energies 2017, 10(11), 1776; https://doi.org/10.3390/en10111776
Submission received: 30 September 2017 / Revised: 24 October 2017 / Accepted: 2 November 2017 / Published: 3 November 2017
(This article belongs to the Special Issue Wireless Power Transfer and Energy Harvesting Technologies)

Abstract

This paper proposes an approach to the problem of measuring compressible data in large-scale energy-harvesting wireless sensor networks with channel fading. We consider a scenario in which N sensors observe hidden phenomenon values and transmit their observations to a fusion center (FC) using the amplify-and-forward protocol over fading channels; the FC then chooses a number of sensors from which to collect data and recovers the data to a desired approximation error using compressive sensing. In order to reduce the communication cost, sparse random matrices are exploited in the pre-processing procedure. We first investigate the sparse representation for sensors with regard to recovery accuracy. Then, we present the construction of sparse random projection matrices based on the fact that the energy consumption can vary across the energy-harvesting sensor nodes. The key ingredient is the sparsity level of the random projection, which can greatly reduce the communication costs. The corresponding number of measurements is chosen according to the desired approximation error. Analysis and simulation results validate the potential of the proposed approach.

1. Introduction

A wireless sensor network (WSN) is an intelligent system for data collection, data fusion and autonomous transmission, and it is involved in many applications such as military surveillance, embedded systems, computer networks and communications. It consists of several sensors, and each node is generally small in size with a battery of limited capacity and energy. The lifetime of a WSN is thus strictly limited by the total energy available in the batteries. By using optimal techniques for energy management such as energy harvesting (EH), we can prolong the lifetime and the duration of maintenance-free operation of WSNs. For instance, the energy existing in our environment from solar, wind, and thermal sources can be converted into electrical energy. The advantages of EH-WSN solutions include high reliability, low energy needs, time savings, ecological compatibility and cost benefits [1,2,3]. In an EH-WSN, each sensor node provides two functionalities: sensing and transmitting data to the fusion center (FC), and harvesting energy from ambient energy sources. The FC collects and reconstructs the observed signal by querying only a subset of sensors [4,5]. In order to reduce the energy consumption while forwarding observations to the FC, we consider an innovative data gathering and reconstruction process based on three key subproblems: (i) compressive sensing (CS) based data acquisition; (ii) transmission of sparse random projections under fading, adapted to the random energy availability in EH systems; and (iii) CS based data reconstruction.
Data collected from wireless sensors are typically correlated, and are thus compressible in some appropriate domains. According to CS theory, if a signal $\mathbf{x} \in \mathbb{C}^N$ is compressible, it can be well approximated using a small number $k \ll N$ of orthogonal transform coefficients [6,7,8]. In the CS model, the FC receives a compressed approximation of the original signal observed at multiple nodes by exploiting a dense random matrix, i.e., all of the EH sensors in the network participate in forwarding observations, and the FC chooses among them at random. In order to avoid this situation, which consumes a large amount of energy, we first have to build a sparse random projection such that the information can be extracted from any $k$-sparse signal. Second, we need to design a suitable recovery algorithm that reconstructs the original signal with good accuracy under given energy-neutral conditions. In this way the EH sensors prolong their lifetime, while the FC queries an appropriate number of random projections and still reconstructs a good approximation. Regarding sparse random projections, a good random projection preserves all pairwise distances with high probability. Thus, it can be used as a reliable estimator of distances in the original space. In [9], the authors proposed a distributed compressive sensing scheme for WSNs, where the sparsity of the random projections is used to reduce both the computational complexity and the communication cost. They also proved that sparse random projections are sufficient to recover a data approximation that is comparable to the optimal $k$-term approximation with high probability. Under fading channel and energy-harvesting constraints, the sparsity of random projections is studied in [10,11]. In [10], the authors only considered additive white Gaussian noise (AWGN) channels, while, in [11], the authors focused on Rayleigh fading channels and investigated sufficient conditions for guaranteeing a reliable and computationally efficient data approximation with sparse random projections. Due to the harvesting conditions, the sensors typically have different energy harvesting rates, which lead to different available energy constraints. However, the sparsity factors in the aforementioned works were assumed to be homogeneous across all sensors and were kept fixed for all transmission states. Thus, they cannot be responsive to battery dynamics and channel conditions. To overcome these issues, we consider a dynamic sparsity factor that is tied to the available energy constraints of the wireless sensors and the transmission between them, and then build a sparse random projection matrix that is stable and robust under channel fading effects and CS recovery.
The main purpose of this paper is to study sparse representation and sparse random projection for EH-WSNs under fading channels. We consider a data transmission problem in EH-WSNs where multiple sensors send spatially-correlated data to a fusion center using the amplify-and-forward (AF) protocol over independent Rayleigh fading channels with additive noise. Supposing that the measured data are compressible under an appropriate orthogonal transform, our task is first to choose a certain number of sensors to query according to the desired approximation error by designing a sparse random projection matrix, and then to exploit a CS recovery algorithm to obtain an optimal approximation. Inspired by the work in [12] on sparse random projections for heavy-tailed data, we propose a random projection-based CS scheme in which the sparsity factor is dynamic due to the energy constraints. We also prove that, under the fading channel condition, our projection matrix still satisfies the restricted isometry property (RIP) required for successful CS recovery.
The organization of this paper is as follows. In Section 2, we introduce the problem of recovering a signal observed by an EH-WSN under channel fading, and briefly review compressive sensing. In Section 3, we present our construction of basis representations for compressible data and the sparse random projection design. Section 4 proves that our sparse random matrices preserve pairwise distances under fading and guarantee the reconstruction accuracy subject to the energy constraints. The simulation results and conclusions are presented in Section 5 and Section 6, respectively.
Notations: We denote by $\mathbf{A} = [a_{ij}]$ a matrix with entries $a_{ij}$, by $(\cdot)^{-1}$ the matrix inverse, by $(\cdot)^{*}$ the conjugate transpose, by $\lfloor\cdot\rfloor$ the floor operation, by $|T| = |\mathrm{supp}(T)|$ the number of elements in a given set $T$, and by $\mathbb{E}(\cdot)$ and $\mathrm{Var}(\cdot)$ the expectation and variance operators, respectively. The $\ell_p$ norm of a vector $\mathbf{x} = [x_1, \ldots, x_n]^T$ is defined as $\|\mathbf{x}\|_p = \left(\sum_{i=1}^{n}|x_i|^p\right)^{1/p}$ for a positive integer $p$. We call a signal $\mathbf{x}$ a $k$-sparse vector if $\|\mathbf{x}\|_0 = |\mathrm{supp}(\mathbf{x})| \le k$. The notation $\mathcal{O}(\cdot)$ denotes complexity, $\mathcal{CN}(\mu, \Sigma)$ denotes the circularly symmetric complex Gaussian distribution with mean $\mu$ and covariance $\Sigma$, and $\mathbf{w} \sim \mathcal{CN}(\mu, \Sigma)$ means that $\mathbf{w}$ is distributed according to $\mathcal{CN}(\mu, \Sigma)$; equivalently, in real-valued form, $\mathbf{w} \sim \mathcal{N}\!\left(\mu, \frac{1}{2}\begin{bmatrix}\mathrm{Re}(\Sigma) & -\mathrm{Im}(\Sigma)\\ \mathrm{Im}(\Sigma) & \mathrm{Re}(\Sigma)\end{bmatrix}\right)$.

2. System Model

2.1. Problem Formulation

We consider an EH-WSN consisting of $N$ sensors, each of which observes a single value $x_j \in \mathbb{C}$ and then transmits it to the FC with the AF protocol. Note that the decode-and-forward (DF) approach can also be used for sensor transmissions, in which case digital modulation schemes are applied to transmit the data. However, as shown in [13,14,15], for a simple distributed sensor network, the AF approach over a multiple-access channel (MAC) is optimal for signal detection and estimation, as well as for saving energy in relaying data. Thus, we restrict our analysis in this paper to analog transmission suitable for energy-constrained EH-WSNs, while digitally modulated signals will be considered in future work. The FC collects the received signals in $M$ ($M < N$) time slots and recovers the original signal based on this measurement as
$$\mathbf{y} = \mathbf{B}\mathbf{x} + \mathbf{w}, \qquad (1)$$
where $\mathbf{B} = \mathbf{H} \odot \mathbf{A}$, with $\odot$ denoting the Hadamard product. The matrix $\mathbf{H} = [h_{ij}] \in \mathbb{C}^{M\times N}$ represents the flat fading channels between the sensors and the FC; it is a random matrix with independent and identically distributed (i.i.d.) complex circular Gaussian entries with zero mean and unit variance, i.e., $h_{ij} = h_{ij}^{R} + \mathrm{j}\, h_{ij}^{I}$, where $h_{ij}^{R} \sim \mathcal{N}(0, \tfrac{1}{2})$ and $h_{ij}^{I} \sim \mathcal{N}(0, \tfrac{1}{2})$. The matrix $\mathbf{A} = [a_{ij}] \in \mathbb{R}^{M\times N}$ represents the random projection with energy constraints, which will be described later, $\mathbf{x} = [x_1, \ldots, x_N]^T$ is the transmitted vector, and $\mathbf{w} = [w_1, \ldots, w_M]^T$ is the additive noise with each $w_j \sim \mathcal{CN}(0, \sigma_w^2)$. The real-valued form of Equation (1) can be written as
$$\hat{\mathbf{y}} = \begin{bmatrix} \mathrm{Re}(\mathbf{H}\odot\mathbf{A}) & -\mathrm{Im}(\mathbf{H}\odot\mathbf{A}) \\ \mathrm{Im}(\mathbf{H}\odot\mathbf{A}) & \mathrm{Re}(\mathbf{H}\odot\mathbf{A}) \end{bmatrix}\begin{bmatrix}\mathrm{Re}(\mathbf{x})\\ \mathrm{Im}(\mathbf{x})\end{bmatrix} + \begin{bmatrix}\mathrm{Re}(\mathbf{w})\\ \mathrm{Im}(\mathbf{w})\end{bmatrix} = \hat{\mathbf{B}}\begin{bmatrix}\mathrm{Re}(\mathbf{x})\\ \mathrm{Im}(\mathbf{x})\end{bmatrix} + \begin{bmatrix}\mathrm{Re}(\mathbf{w})\\ \mathrm{Im}(\mathbf{w})\end{bmatrix}, \qquad (2)$$
where
$$\hat{\mathbf{B}} = \begin{bmatrix} \mathrm{Re}(\mathbf{H})\odot\mathbf{A} & -\mathrm{Im}(\mathbf{H})\odot\mathbf{A} \\ \mathrm{Im}(\mathbf{H})\odot\mathbf{A} & \mathrm{Re}(\mathbf{H})\odot\mathbf{A} \end{bmatrix} \in \mathbb{R}^{2M\times 2N}. \qquad (3)$$
Our goal is to find a good approximation of $\mathbf{x}$ given $\mathbf{y}$ and $\mathbf{B}$. According to CS theory, for a given error upper bound $\epsilon$, the FC can recover $\mathbf{x}$ by solving the following optimization problem [6]:
$$\hat{\mathbf{x}} = \arg\min_{\mathbf{z}\in\mathbb{C}^N} \|\mathbf{z}\|_1 \quad \text{subject to} \quad \|\mathbf{y} - \mathbf{B}\mathbf{z}\|_2^2 \le \epsilon. \qquad (4)$$
However, in order to obtain recovery guarantees for a given $\mathbf{x}$ based on Equation (4), some essential conditions on $\mathbf{x}$ and $\mathbf{B}$ are considered in the next subsection.
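To make the measurement model concrete, the following sketch (in Python with NumPy; the dimensions and the dense placeholder projection are illustrative assumptions, and the energy-aware sparse projection of Section 3.3 is substituted later) builds a toy instance of Equations (1)–(3) and verifies the real-valued stacking.

```python
# A minimal sketch of the measurement model in Equations (1)-(3).
# Dimensions and the dense placeholder for A are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, M, sigma_w2 = 100, 40, 1.0

# Flat Rayleigh fading: i.i.d. CN(0, 1) entries.
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

# Placeholder random projection (dense +/-1, for illustration only).
A = rng.choice([-1.0, 1.0], size=(M, N))

# Complex source vector and additive CN(0, sigma_w^2) noise.
x = rng.uniform(1, 10, N) + 1j * rng.uniform(1, 10, N)
w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

B = H * A                      # Hadamard product, Equation (1)
y = B @ x + w

# Real-valued stacking of Equations (2)-(3).
B_hat = np.block([[np.real(B), -np.imag(B)],
                  [np.imag(B),  np.real(B)]])
y_hat = np.concatenate([np.real(y), np.imag(y)])
assert np.allclose(y_hat,
                   B_hat @ np.concatenate([np.real(x), np.imag(x)])
                   + np.concatenate([np.real(w), np.imag(w)]))
print(B_hat.shape)             # (2M, 2N)
```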

2.2. Signal Reconstruction with Compressive Sensing

We define $\Sigma_k$ as the set of all $k$-sparse signals, $\Sigma_k = \{\mathbf{x} \in \mathbb{C}^N : \|\mathbf{x}\|_0 \le k\}$. We say that the matrix $\mathbf{B}$ satisfies the restricted isometry property (RIP) of order $k$ if there exists a constant $\delta_k \in (0,1)$ such that $(1-\delta_k)\|\mathbf{x}\|_2^2 \le \|\mathbf{B}\mathbf{x}\|_2^2 \le (1+\delta_k)\|\mathbf{x}\|_2^2$ for all $\mathbf{x}\in\Sigma_k$. The best $k$-term approximation, denoted by $\mathbf{x}_k$, can be obtained by
$$\mathbf{x}_k = \arg\min_{\mathbf{z}\in\Sigma_k}\|\mathbf{x} - \mathbf{z}\|_1. \qquad (5)$$
We suppose that $\mathbf{B}$ satisfies the RIP of order $2k$, i.e., $\mathbf{B}\mathbf{x}_1 \ne \mathbf{B}\mathbf{x}_2$ for any pair $\mathbf{x}_1, \mathbf{x}_2 \in \Sigma_k$ with $\mathbf{x}_1 \ne \mathbf{x}_2$, and that the corresponding restricted isometry constant satisfies $\delta_{2k} < \sqrt{2} - 1$. Then, for a given measurement $\mathbf{y}$ in Equation (1) with $\|\mathbf{w}\|_2 \le \epsilon$, the solution of Equation (4) obeys
$$\|\hat{\mathbf{x}} - \mathbf{x}\|_2 \le c_0\,\epsilon + c_1\,\frac{\|\mathbf{x} - \mathbf{x}_k\|_1}{\sqrt{k}}, \qquad (6)$$
where $c_0 = \dfrac{4\sqrt{1+\delta_{2k}}}{1 - (1+\sqrt{2})\delta_{2k}}$ and $c_1 = \dfrac{2\left(1 - (1-\sqrt{2})\delta_{2k}\right)}{1 - (1+\sqrt{2})\delta_{2k}}$, according to the results in [8].
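The recovery step of Equation (4) can be illustrated with the short sketch below. The paper's simulations use the SPGL1 basis pursuit de-noising solver [20]; as a stand-in, this sketch uses the iterative soft-thresholding algorithm (ISTA) on the Lagrangian (LASSO) surrogate $\min_{\mathbf{z}} \frac{1}{2}\|\mathbf{y}-\mathbf{B}\mathbf{z}\|_2^2 + \lambda\|\mathbf{z}\|_1$, with illustrative dimensions and regularization weight.

```python
# A minimal l1-recovery sketch (ISTA on the LASSO surrogate of problem (4)).
import numpy as np

def ista(B, y, lam=0.05, n_iter=500):
    """Real-valued ISTA; step size 1/L with L the squared spectral norm of B."""
    L = np.linalg.norm(B, 2) ** 2
    z = np.zeros(B.shape[1])
    for _ in range(n_iter):
        u = z - B.T @ (B @ z - y) / L          # gradient step on the quadratic term
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
    return z

# Illustrative test: recover a k-sparse vector from noisy random projections.
rng = np.random.default_rng(1)
N, M, k = 200, 80, 5
B = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N); x[rng.choice(N, k, replace=False)] = rng.uniform(1, 10, k)
y = B @ x + 0.01 * rng.standard_normal(M)

x_hat = ista(B, y, lam=0.02)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```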

3. Compressive Sensing for Wireless Energy Harvesting Nodes

3.1. Motivation

In this section, we present the idea of constructing a sparse random projection and a basis representation that achieve a significant speed-up with little loss in recovery accuracy. First, we introduce an appropriate sparse representation basis for complex data, which takes into account the data transmission cost and the data recovery quality. The goal of this step is to obtain a sparse representation learned from the sensor data, so that it has the ability to adapt to the signal under fading. Second, we provide the concept of the sparse random projection, which is used for the measurement matrix design. This design ensures that each sampled value represents one CS measurement guaranteeing a successful recovery, and satisfies the sensing energy assumptions. Finally, we verify the sparsity level and the RIP condition for the sensing matrix.

3.2. Basis Representation for Compressible Data

In practice, the signal $\mathbf{x}$ may not be sparse, but we may be able to resolve it in a certain sparse basis $\boldsymbol{\Psi}$, i.e., $\mathbf{x} = \boldsymbol{\Psi}\boldsymbol{\alpha}$, where $\boldsymbol{\Psi}$ is a unitary $N\times N$ matrix and $\boldsymbol{\alpha}\in\mathbb{C}^N$ has at most $k < N/2$ non-zero components. The matrix $\boldsymbol{\Psi}$ can be obtained either from an appropriate transform (e.g., the wavelet transform, the discrete Fourier transform, etc.) or by learning a dictionary that performs best on a training set [16]. In this case, we require that $\mathbf{B}\boldsymbol{\Psi}$ satisfies the RIP, and the performance will depend on $\|\hat{\boldsymbol{\alpha}} - \boldsymbol{\alpha}\|_2$. We can sort the elements of the vector $\mathbf{x}$ in decreasing order of magnitude, where the $j$-th largest coefficient satisfies $|x_{I(j)}| \le G\, j^{-1/r}$, $j = 1,\ldots,N$. Here, $I \subseteq \{1,\ldots,N\}$ represents the index set of the sorted elements. For a rate of decay $0 < r < 2$, the approximation error in the $\ell_2$-norm obtained by keeping the $k$ largest coefficients is
$$\|\hat{\mathbf{x}} - \mathbf{x}\|_2 = \|\hat{\boldsymbol{\alpha}} - \boldsymbol{\alpha}\|_2 \le (r s)^{-1/2}\, G\, k^{-s}, \qquad (7)$$
where $G$ is a constant and $s = \frac{1}{r} - \frac{1}{2}$ [7].
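The following sketch illustrates this $k$-term approximation on a toy sensor field: a smooth, spatially correlated signal is nearly sparse in an explicitly constructed DCT basis $\boldsymbol{\Psi}$, and keeping the $k$ largest transform coefficients already gives a small $\ell_2$ error. The signal model and dimensions are illustrative assumptions.

```python
# A minimal sketch of the k-term approximation behind Equation (7).
import numpy as np

N, k = 256, 10
n = np.arange(N)

# Orthonormal DCT-II basis Psi, built explicitly (columns are basis vectors).
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
Psi[:, 0] /= np.sqrt(2.0)
assert np.allclose(Psi.T @ Psi, np.eye(N), atol=1e-10)

# Smooth field sampled by the sensors (a few slow cosines plus small noise).
x = 5 + 2 * np.cos(2 * np.pi * n / N) + np.cos(6 * np.pi * n / N) + 0.01 * np.random.randn(N)

alpha = Psi.T @ x                         # transform coefficients, x = Psi @ alpha
idx = np.argsort(np.abs(alpha))[::-1]     # sort by magnitude (the index set I)
alpha_k = np.zeros(N); alpha_k[idx[:k]] = alpha[idx[:k]]

err = np.linalg.norm(Psi @ alpha_k - x) / np.linalg.norm(x)
print(f"relative l2 error of the best {k}-term approximation: {err:.2e}")
```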

3.3. Measurement Matrix: Sparse Random Projection Design

We first introduce the Johnson–Lindenstrauss (JL) lemma and its connection with the random matrix constructions of CS with regard to the stable embedding of a finite set of points under a random dimensionality-reducing projection. This lemma is stated as follows.
Lemma 1.
Let $Q$ be a collection of finite points in $\mathbb{R}^N$. Given $0 < \epsilon < 1$ and $\beta > 0$, let $\mathbf{A}$ be a random orthogonal projection from $\mathbb{R}^N$ to $\mathbb{R}^M$ with $M \le N$ and
$$M \ge \frac{4 + 2\beta}{\epsilon^2/2 - \epsilon^3/3}\,\log(|Q|). \qquad (8)$$
According to [17], with probability at least $1 - |Q|^{-\beta}$, for all $q_i, q_j \in Q$ with $i \ne j$, the following statement holds:
$$(1-\epsilon)\,\frac{M}{N}\,\|q_i - q_j\|_2^2 \le \|\mathbf{A}q_i - \mathbf{A}q_j\|_2^2 \le (1+\epsilon)\,\frac{M}{N}\,\|q_i - q_j\|_2^2. \qquad (9)$$
With respect to the JL lemma, the authors in [9,10] consider the AWGN channel, where the random projection matrix A with i.i.d. entries is defined as
$$a_{ij} = \sqrt{\frac{1}{\rho}}\times\begin{cases} +1, & \text{with probability (w.p.) } \frac{\rho}{2},\\ 0, & \text{w.p. } 1-\rho,\\ -1, & \text{w.p. } \frac{\rho}{2}, \end{cases} \qquad (10)$$
where $\rho$ is a factor that gives the probability of measurement and controls the sparsity level of $\mathbf{A}$. For example, if $\rho = 1$, the random matrix has no sparsity, and if $\rho = \frac{\log N}{N}$, the expected number of non-zeros in each row is $\log N$. Moreover, these authors proved that the $a_{ij}$ are four-wise independent within rows and independent across rows, i.e.,
$$\mathbb{E}[a_{ij}] = 0, \quad \mathbb{E}[a_{ij}^2] = 1, \quad \mathbb{E}[a_{ij}^4] = \frac{1}{\rho}. \qquad (11)$$
Note that the energy consumed for wireless transmission cannot exceed the energy available in each slot. Thus, it is reasonable to take into account both the energy constraint and the sparse random projection, in order to reduce the data transmission cost and improve the data recovery quality. With regard to fading channels, Ref. [11] provides an improvement of Equation (10), in which the projection matrix $\mathbf{A}$ is associated with a squared amplitude $b_j > 0$. Each entry is
$$a_{ij} = \sqrt{b_j}\times\begin{cases} +1, & \text{w.p. } \frac{\rho}{2},\\ 0, & \text{w.p. } 1-\rho,\\ -1, & \text{w.p. } \frac{\rho}{2}, \end{cases} \qquad (12)$$
for $j = 1, \ldots, M$. Given an available energy $E_j$, the value of $b_j$ is chosen such that
$$\rho\, b_j \le E_j, \quad j = 1, \ldots, N. \qquad (13)$$
This leads to $\mathbb{E}(a_{ij}^2) = \rho\, b_j \le E_j$, i.e., energy can be saved for future transmissions. In this paper, we define our sparse measurement matrix as $\mathbf{A} = [a_{ij}]$, where each entry $a_{ij}$ is given by
$$a_{ij} = \sqrt{\frac{N}{4\rho_{ij}}}\times\begin{cases} +1, & \text{w.p. } \frac{\rho_{ij}}{2N},\\ 0, & \text{w.p. } 1-\frac{\rho_{ij}}{N},\\ -1, & \text{w.p. } \frac{\rho_{ij}}{2N}. \end{cases} \qquad (14)$$
Based on the definition of A in Equation (14), the measurement vector y can be expressed as in Equation (1). In the sequel, we explain the reasons for choosing this sparse random projection.
(i)
It has been shown that conventional random projections with $a_{ij} \sim \mathcal{N}(0,1)$ are appropriate only for the $\ell_2$ norm, while, in many applications, there is greater concern for the inner product [12]. Moreover, one can use $\rho = \frac{1}{3}$ in Equation (10) to speed up the computation process [12].
(ii)
The projection given in Equation (14) selects the sensors to transmit and assigns weights to the data according to the harvested energy at each sensor. For instance, the sparsity of the random projection, given $E_{ij}$, can be defined as
$$\rho_{ij} = \frac{p_{ij}^{\mathrm{opt}}}{E_{ij}}, \qquad (15)$$
where $p_{ij}^{\mathrm{opt}}$ is the optimal power allocation and $E_{ij}$ is the available energy of node $j$ during the $i$-th slot.
Therefore, we obtain
$$\mathbb{E}(a_{ij}) = 0, \quad \mathbb{E}(a_{ij}^2) = \mathrm{Var}(a_{ij}) = 1, \quad \mathbb{E}(a_{ij}^4) = \frac{N}{\rho_{ij}}. \qquad (16)$$
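A short sketch of drawing such energy-aware projection entries is given below. Each entry is nonzero with probability $\rho_{ij}/N$, as in Equation (14); here the nonzero amplitude is normalized to $\sqrt{N/\rho_{ij}}$ so that the empirical moments match Equation (16), and the harvested-energy model behind $\rho_{ij}$ is an illustrative assumption.

```python
# A minimal sketch of the energy-aware sparse projection of Equations (14)-(16).
import numpy as np

rng = np.random.default_rng(2)
M, N = 40, 100

# Sparsity levels driven by per-slot, per-node available energy, Equation (15):
# rho_ij = p_opt_ij / E_ij, clipped to (0, 1]. Values here are synthetic.
E = rng.uniform(1.0, 2.0, size=(M, N))        # available energy E_ij
p_opt = rng.uniform(0.1, 0.5, size=(M, N))    # stand-in for the optimal power p_ij^opt
rho = np.clip(p_opt / E, 1e-3, 1.0)

u = rng.random((M, N))
sign = rng.choice([-1.0, 1.0], size=(M, N))
A = np.where(u < rho / N, sign * np.sqrt(N / rho), 0.0)   # sparse, energy-aware entries

print("fraction of nonzeros:", np.mean(A != 0))            # ~ mean(rho) / N
print("empirical E[a^2]    :", np.mean(A ** 2))             # ~ 1, matching Equation (16)
```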

4. Proposed Distributed Algorithm and Analysis

4.1. Sparse Random Projection with Fading Channels

Suppose that we have two input vectors $\mathbf{x}_1 = [x_1^{(1)}, \ldots, x_N^{(1)}]^T$ and $\mathbf{x}_2 = [x_1^{(2)}, \ldots, x_N^{(2)}]^T \in \mathbb{C}^N$ (equivalently, $\mathbf{x}_1, \mathbf{x}_2 \in \mathbb{R}^{2N}$ in the real-valued form) and the random matrix $\hat{\mathbf{B}} = [\hat{b}_{ij}]$ given in Equation (3). The corresponding projections of $\mathbf{x}_1$ and $\mathbf{x}_2$ are defined by
$$\mathbf{u} = \frac{1}{\sqrt{2M}}\,\hat{\mathbf{B}}\mathbf{x}_1, \qquad \mathbf{v} = \frac{1}{\sqrt{2M}}\,\hat{\mathbf{B}}\mathbf{x}_2 \in \mathbb{R}^{2M}.$$
We also assume that under fading the channel matrix H is independent of the random matrix A . Thus, we have
$$\mathbb{E}(\hat{b}_{ij}) = 0, \quad \mathbb{E}(\hat{b}_{ij}^2) = \mathrm{Var}(\hat{b}_{ij}) = 1, \quad \text{and} \quad \mathbb{E}(\hat{b}_{ij}^4) = \frac{3N}{\rho_{ij}}.$$
The sparse random projection $\mathbf{A}$ is desired to have the properties of length, distance, and inner-product preservation. We need to check that these properties are still preserved under fading channel conditions. In order to check the length preservation of the sparse random matrix $\hat{\mathbf{B}}$, we first express $\mathbb{E}(\|\mathbf{u}\|_2^2) = \sum_{i=1}^{2M}\mathbb{E}(u_i^2)$, where
$$\mathbb{E}(u_i^2) = \frac{1}{2M}\,\mathbb{E}\!\left[\sum_{j=1}^{2N}\{x_j^{(1)}\}^2\hat{b}_{ij}^2 + \sum_{l\ne m} x_l^{(1)}x_m^{(1)}\hat{b}_{il}\hat{b}_{im}\right] = \frac{1}{2M}\left[\sum_{j=1}^{2N}\{x_j^{(1)}\}^2\,\mathbb{E}(\hat{b}_{ij}^2) + \sum_{l\ne m} x_l^{(1)}x_m^{(1)}\,\mathbb{E}(\hat{b}_{il})\,\mathbb{E}(\hat{b}_{im})\right] = \frac{1}{2M}\sum_{j=1}^{2N}\{x_j^{(1)}\}^2 = \frac{1}{2M}\|\mathbf{x}_1\|_2^2.$$
Thus, $\mathbb{E}(\|\mathbf{u}\|_2^2) = \sum_{i=1}^{2M}\frac{1}{2M}\|\mathbf{x}_1\|_2^2 = \|\mathbf{x}_1\|_2^2$.
For the distance preservation, we have
$$\mathbb{E}(\|\mathbf{u}-\mathbf{v}\|_2^2) = \sum_{i=1}^{2M}\mathbb{E}[(u_i - v_i)^2] = \sum_{i=1}^{2M}\frac{1}{2M}\sum_{j=1}^{2N}\{x_j^{(1)} - x_j^{(2)}\}^2\,\mathbb{E}(\hat{b}_{ij}^2) = \sum_{i=1}^{2M}\frac{1}{2M}\|\mathbf{x}_1-\mathbf{x}_2\|_2^2 = \|\mathbf{x}_1-\mathbf{x}_2\|_2^2.$$
Similarly, we can compute the inner product as $\mathbb{E}(\mathbf{u}\cdot\mathbf{v}) = \mathbb{E}(\mathbf{u}^T\mathbf{v}) = \sum_{i=1}^{2M}\mathbb{E}(u_i v_i)$, where
$$\mathbb{E}(u_i v_i) = \mathbb{E}\!\left[\frac{1}{2M}\left(\sum_{j=1}^{2N}x_j^{(1)}\hat{b}_{ij}\right)\left(\sum_{j=1}^{2N}x_j^{(2)}\hat{b}_{ij}\right)\right] = \frac{1}{2M}\left[\sum_{j=1}^{2N}x_j^{(1)}x_j^{(2)}\,\mathbb{E}(\hat{b}_{ij}^2) + \sum_{l\ne m}x_l^{(1)}x_m^{(2)}\,\mathbb{E}(\hat{b}_{il})\,\mathbb{E}(\hat{b}_{im})\right] = \frac{1}{2M}\sum_{j=1}^{2N}x_j^{(1)}x_j^{(2)} = \frac{1}{2M}\mathbf{x}_1^T\mathbf{x}_2.$$
Thus, the inner product is still preserved by applying $\hat{\mathbf{B}}$, since we have $\mathbb{E}(\mathbf{u}\cdot\mathbf{v}) = \mathbf{x}_1\cdot\mathbf{x}_2$.
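A quick Monte-Carlo sanity check of these preservation properties is sketched below. The fixed sparsity level, dimensions and number of trials are illustrative; the nonzero projection amplitude is normalized so that each entry of $\hat{\mathbf{B}}$ has unit variance, matching the moment assumptions used in the derivation above.

```python
# A minimal Monte-Carlo check of length, distance and inner-product preservation
# of the scaled projections u, v under Rayleigh fading (illustrative parameters).
import numpy as np

rng = np.random.default_rng(3)
N, M, trials, rho = 60, 50, 2000, 0.5

x1 = rng.standard_normal(2 * N)
x2 = rng.standard_normal(2 * N)

acc_len, acc_dist, acc_inner = 0.0, 0.0, 0.0
for _ in range(trials):
    H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    mask = rng.random((M, N)) < rho
    # Amplitude sqrt(2/rho) makes the entries of B_hat have unit variance.
    A = np.where(mask, rng.choice([-1.0, 1.0], (M, N)) * np.sqrt(2.0 / rho), 0.0)
    B = H * A
    B_hat = np.block([[np.real(B), -np.imag(B)], [np.imag(B), np.real(B)]])
    u = B_hat @ x1 / np.sqrt(2 * M)
    v = B_hat @ x2 / np.sqrt(2 * M)
    acc_len += u @ u
    acc_dist += (u - v) @ (u - v)
    acc_inner += u @ v

print("E||u||^2   vs ||x1||^2     :", acc_len / trials, x1 @ x1)
print("E||u-v||^2 vs ||x1-x2||^2  :", acc_dist / trials, (x1 - x2) @ (x1 - x2))
print("E<u,v>     vs <x1,x2>      :", acc_inner / trials, x1 @ x2)
```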

4.2. Stability and Robustness of Sparse Random Projections

By partitioning the sparse random matrix $\hat{\mathbf{B}}$ into $\hat{\mathbf{B}}^{(1)}, \ldots, \hat{\mathbf{B}}^{(M_2)}$, where each $\hat{\mathbf{B}}^{(\ell)}$ ($\ell = 1, \ldots, M_2$) has size $M_1 \times 2N$ and $2M = M_1 M_2$ ($M_1$ and $M_2$ will be determined later), the corresponding measurement $\mathbf{y}\in\mathbb{R}^{2M}$ can be split into $M_2$ vectors $\{\mathbf{y}^{(1)}, \ldots, \mathbf{y}^{(M_2)}\}$. Each $\mathbf{y}^{(\ell)}\in\mathbb{R}^{M_1}$ is defined as
$$\mathbf{y}^{(\ell)} = \hat{\mathbf{B}}^{(\ell)}\mathbf{x} + \mathbf{w}^{(\ell)}.$$
We let $\mathbf{z}^{(\ell)} = \hat{\mathbf{B}}^{(\ell)}\boldsymbol{\psi}$, where $\boldsymbol{\psi}\in\mathbb{R}^{2N}$ and $\|\boldsymbol{\psi}\|_2^2 = 1$. Thus, we can compute
$$\mathbf{z}^{(\ell)T}\mathbf{y}^{(\ell)} = \boldsymbol{\psi}^T\hat{\mathbf{B}}^{(\ell)T}\hat{\mathbf{B}}^{(\ell)}\mathbf{x} + \boldsymbol{\psi}^T\hat{\mathbf{B}}^{(\ell)T}\mathbf{w}^{(\ell)} = \sum_{i_0=1}^{M_1}u_{i_0}^{(\ell)} + \sum_{i_0=1}^{M_1}v_{i_0}^{(\ell)},$$
where $u_{i_0}^{(\ell)} = \left(\sum_{j=1}^{2N}\hat{b}_{i_0 j}^{(\ell)}\psi_j\right)\left(\sum_{j=1}^{2N}\hat{b}_{i_0 j}^{(\ell)}x_j\right)$ and $v_{i_0}^{(\ell)} = \left(\sum_{j=1}^{2N}\hat{b}_{i_0 j}^{(\ell)}\psi_j\right)w_{i_0}$. The corresponding means and variances of $u_{i_0}^{(\ell)}$ and $v_{i_0}^{(\ell)}$, and their covariance, can be calculated as
$$\mathbb{E}[u_{i_0}^{(\ell)}] = \boldsymbol{\psi}^T\mathbf{x}, \qquad (21)$$
$$\mathrm{Var}[u_{i_0}^{(\ell)}] = (\boldsymbol{\psi}^T\mathbf{x})^2 + \|\boldsymbol{\psi}\|_2^2\|\mathbf{x}\|_2^2 + \sum_{j=1}^{2N}\left(\frac{3N}{\rho_{i_0 j}^{(\ell)}} - 3\right)x_j^2\psi_j^2, \qquad (22)$$
$$\mathbb{E}[v_{i_0}^{(\ell)}] = 0, \qquad (23)$$
$$\mathrm{Var}[v_{i_0}^{(\ell)}] = \sigma_w^2\,\|\boldsymbol{\psi}\|_2^2, \qquad (24)$$
$$\mathrm{Cov}[u_{i_0}^{(\ell)}, v_{\bar{i}_0}^{(\ell)}] = 0. \qquad (25)$$
The detailed derivations of Equations (21)–(25) are given in Appendix A. Thus, we obtain
$$\mathbb{E}\!\left[\frac{1}{M_1}\mathbf{z}^{(\ell)T}\mathbf{y}^{(\ell)}\right] = \frac{1}{M_1}\left(\mathbb{E}\!\left[\sum_{i=1}^{M_1}u_i^{(\ell)}\right] + \mathbb{E}\!\left[\sum_{i=1}^{M_1}v_i^{(\ell)}\right]\right) = \boldsymbol{\psi}^T\mathbf{x}, \qquad (26)$$
$$\mathrm{Var}\!\left[\frac{1}{M_1}\mathbf{z}^{(\ell)T}\mathbf{y}^{(\ell)}\right] = \frac{1}{M_1^2}\left[M_1\left((\boldsymbol{\psi}^T\mathbf{x})^2 + (\|\mathbf{x}\|_2^2 + \sigma_w^2)\|\boldsymbol{\psi}\|_2^2\right) + \sum_{i_0=1}^{M_1}\sum_{j=1}^{2N}\left(\frac{3N}{\rho_{i_0 j}^{(\ell)}} - 3\right)x_j^2\psi_j^2\right]. \qquad (27)$$
For any $\epsilon > 0$, using Chebyshev's inequality and the fact that $\|\boldsymbol{\psi}\|_2^2 = 1$, we have
$$\begin{aligned} P\!\left(\left|\tfrac{1}{M_1}\mathbf{z}^{(\ell)T}\mathbf{y}^{(\ell)} - \boldsymbol{\psi}^T\mathbf{x}\right| \ge \epsilon\|\mathbf{x}\|_2\right) &\le \frac{\mathrm{Var}\!\left[\tfrac{1}{M_1}\mathbf{z}^{(\ell)T}\mathbf{y}^{(\ell)}\right]}{\epsilon^2\|\mathbf{x}\|_2^2} = \frac{1}{\epsilon^2 M_1^2}\left[M_1\!\left(\frac{(\boldsymbol{\psi}^T\mathbf{x})^2}{\|\mathbf{x}\|_2^2} + \frac{\|\mathbf{x}\|_2^2 + \sigma_w^2}{\|\mathbf{x}\|_2^2}\right) + \sum_{i_0=1}^{M_1}\sum_{j=1}^{2N}\left(\frac{3N}{\rho_{i_0 j}^{(\ell)}} - 3\right)\frac{x_j^2\psi_j^2}{\|\mathbf{x}\|_2^2}\right] \\ &\le \frac{1}{\epsilon^2 M_1}\left(2 + \frac{\sigma_w^2}{\|\mathbf{x}\|_2^2} + \sum_{j=1}^{2N}\frac{3N}{\min_{i_0}\rho_{i_0 j}^{(\ell)}}\,\mu^2\right) \le \delta. \qquad (28) \end{aligned}$$
Here, we have also used the fact that any data vector $\mathbf{u}\in\mathbb{R}^{2N}$ satisfies the peak-to-total energy condition, i.e., $\|\mathbf{u}\|_\infty / \|\mathbf{u}\|_2 \le \mu$ [10]. Following the approach given in [9], the probability that an estimate lies outside the tolerable approximation interval cannot exceed $e^{-c^2 M_2/12}$, where $0 < c < 1$. Setting $M_1 = \mathcal{O}\!\left(\frac{1}{\epsilon^2}\left(2 + \frac{\sigma_w^2}{\|\mathbf{x}\|_2^2} + \sum_{j=1}^{2N}\frac{3N}{\min_{i_0}\rho_{i_0 j}^{(\ell)}}\mu^2\right)\right)$ yields $\delta = \frac{1}{4}$, and setting $M_2 = \mathcal{O}[(1+\eta)\log 2N]$ gives $p_e \le (2N)^{-\eta}$ for some constant $\eta > 0$. Finally, for $M = \frac{1}{2}M_1 M_2 = \frac{1}{2}\,\mathcal{O}\!\left(\frac{\mu^2(1+\eta)}{\epsilon^2}\left(2 + \frac{\sigma_w^2}{\|\mathbf{x}\|_2^2} + \sum_{j=1}^{2N}\frac{3N}{\min_{i_0}\rho_{i_0 j}^{(\ell)}}\right)\log 2N\right)$, the sparse random matrix $\hat{\mathbf{B}}$ preserves all pairwise inner products within an approximation error $\epsilon$ with probability at least $1-(2N)^{-\eta}$.
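The scaling of the measurement count can be illustrated numerically, as in the short sketch below; the hidden constants inside $\mathcal{O}(\cdot)$ are set to one and all parameter values are assumptions, so the printed numbers are only a scaling illustration (the bound is intentionally loose compared with the operating points used in Section 5).

```python
# A small numeric illustration of M = (1/2) M1 M2 from the bound above,
# with assumed constants (the O(.) factors are taken as 1 for illustration).
import numpy as np

N, eps, eta = 100, 0.2, 1.0
sigma_w2, x_energy = 1.0, 10.0            # noise variance and ||x||_2^2 (assumed)
mu = 1.0 / np.sqrt(2 * N)                 # peak-to-total ratio of a "flat" vector
rho_min = np.full(2 * N, 0.25)            # min_i rho_ij per column j (assumed)

M1 = (1.0 / eps**2) * (2 + sigma_w2 / x_energy + np.sum(3 * N / rho_min) * mu**2)
M2 = (1 + eta) * np.log(2 * N)
M = 0.5 * M1 * M2
print(f"M1 ~ {M1:.0f}, M2 ~ {M2:.1f}, M ~ {M:.0f}")
```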
Remark 1 (Complexity Analysis).
According to the CS model with a dense random matrix [6,8], at least $\mathcal{O}(k\log(N/\eta))$ measurements are required to obtain an approximation via the $\ell_1$-minimization problem in Equation (4) with probability exceeding $1-\eta$, and the CS decoding has computational complexity $\mathcal{O}(N^3)$. On the other hand, the sparse random projection scheme requires at least $\mathcal{O}(k^2\log N)$ random projections, and the corresponding decoding process takes $\mathcal{O}(MN\log N)$, where $M$ is the number of measurements. Since $k \ll N$, using sparse random projections attains a low decoding complexity, which makes it applicable for EH sensors, while the FC can request a few more measurements from the sensors and recover the signal with a better approximation. Our proposed scheme inherits this advantage and optimizes the sparsity level, which adapts to channel conditions and energy constraints.

4.3. Sparsity Level and RIP Verification

Following the signal model in Equation (1), i.e., $\mathbf{y} = \hat{\mathbf{B}}\boldsymbol{\Psi}\boldsymbol{\alpha} + \mathbf{w}$, the decoding process recovers the sparse signal $\boldsymbol{\alpha}$ instead of the sensor data $\mathbf{x}$ by using Equation (4). However, we must verify that $\boldsymbol{\Psi}$ provides a sparse basis representation in $\mathbb{R}^N$ and that the matrix $\bar{\mathbf{Z}} = \hat{\mathbf{B}}\boldsymbol{\Psi}$ obeys the RIP condition, in order to guarantee successful recovery via $\ell_1$-minimization.
To analyze the feasibility of the measurement matrix and the sparse basis design, we have to answer the following two questions:
(1)
Is it reasonable to select Ψ obtained from Section 3.2 as an orthogonal basis for x ?
(2)
For the matrix A obtained from Section 3.3, does Z ¯ = B ^ Ψ obey the RIP condition?
First, as we showed in Section 3.2, the matrix $\boldsymbol{\Psi}$ is obviously an orthogonal basis in $\mathbb{C}^N$ when it is obtained from an appropriate transformation. Otherwise, it can be an overcomplete dictionary obtained from a data learning approach, which promises to represent a wider range of signal phenomena [16].
Second, in order to show that the random variable $\|\mathbf{Z}\boldsymbol{\alpha}\|_2$ is highly concentrated about $\|\boldsymbol{\alpha}\|_2$, we can assume that the rows of $\hat{\mathbf{B}}$ are independent of $\boldsymbol{\Psi}$. Fixing $\epsilon\in(0,1)$, and with each row of $\bar{\mathbf{Z}}$ following a sub-Gaussian distribution, we prove that $\mathbf{Z} = [\mathbf{z}_i]_{i=1}^{M} = \frac{1}{\sqrt{M}}[\bar{\mathbf{z}}_1,\ldots,\bar{\mathbf{z}}_M]^T$ satisfies the RIP with high probability, i.e.,
$$(1-\epsilon) \le \frac{\|\mathbf{Z}\boldsymbol{\alpha}\|_2^2}{\|\boldsymbol{\alpha}\|_2^2} \le (1+\epsilon), \quad \forall\,\boldsymbol{\alpha}\in\Sigma_k. \qquad (29)$$
To do so, we prove that each part of the matrix $\mathbf{Z} = \mathbf{Z}_R + \mathrm{j}\mathbf{Z}_I$ satisfies the RIP for complex data $\boldsymbol{\alpha}$, i.e.,
$$\frac{1}{2}(1-\epsilon) \le \frac{\|\mathbf{Z}_R\boldsymbol{\alpha}\|_2^2}{\|\boldsymbol{\alpha}\|_2^2} \le \frac{1}{2}(1+\epsilon), \qquad (30)$$
and
$$\frac{1}{2}(1-\epsilon) \le \frac{\|\mathbf{Z}_I\boldsymbol{\alpha}\|_2^2}{\|\boldsymbol{\alpha}\|_2^2} \le \frac{1}{2}(1+\epsilon), \quad \forall\,\boldsymbol{\alpha}\in\Sigma_k. \qquad (31)$$
First, in order to prove Equation (30), letting $\mathbf{Z}_R = [z_{ij}^R]$, we have
$$\begin{aligned} \mathbb{E}(\mathbf{z}_i^R\cdot\boldsymbol{\alpha}) &= \mathbb{E}\!\left[\sum_{j=1}^{N}z_{ij}^R\alpha_j\right] = \sum_{j=1}^{N}\mathbb{E}(z_{ij}^R)\,\alpha_j = 0,\\ \mathrm{Var}(\mathbf{z}_i^R\cdot\boldsymbol{\alpha}) &= \mathrm{Var}\!\left[\sum_{j=1}^{N}z_{ij}^R\alpha_j\right] = \sum_{j=1}^{N}\mathrm{Var}(z_{ij}^R)\,\alpha_j^2 = \frac{\|\boldsymbol{\alpha}\|_2^2}{M},\\ \mathbb{E}(\|\mathbf{Z}_R\boldsymbol{\alpha}\|_2^2) &= \mathbb{E}\!\left[\sum_{i=1}^{M}(\mathbf{z}_i^R\cdot\boldsymbol{\alpha})^2\right] = \sum_{i=1}^{M}\mathbb{E}[(\mathbf{z}_i^R\cdot\boldsymbol{\alpha})^2] = \sum_{i=1}^{M}\mathrm{Var}(\mathbf{z}_i^R\cdot\boldsymbol{\alpha}) = \sum_{i=1}^{M}\frac{\|\boldsymbol{\alpha}\|_2^2}{M} = \|\boldsymbol{\alpha}\|_2^2. \end{aligned}$$
Then, following Theorem 4.2 of [18], we obtain $P\!\left(\frac{\|\mathbf{Z}_R\boldsymbol{\alpha}\|_2^2}{\|\boldsymbol{\alpha}\|_2^2} \le \frac{1}{2}(1-\epsilon)\right) \le e^{-M\epsilon^2/4c^2}$ and $P\!\left(\frac{\|\mathbf{Z}_R\boldsymbol{\alpha}\|_2^2}{\|\boldsymbol{\alpha}\|_2^2} \ge \frac{1}{2}(1+\epsilon)\right) \le e^{-M\epsilon^2/4c^2}$. Thus,
$$P\!\left(\left|\frac{\|\mathbf{Z}_R\boldsymbol{\alpha}\|_2^2}{\|\boldsymbol{\alpha}\|_2^2} - \frac{1}{2}\right| \ge \frac{\epsilon}{2}\right) \le 2\,e^{-M\epsilon^2/4c^2}. \qquad (32)$$
Fixing an index set $I\subseteq\{1,\ldots,N\}$ with $|I| = k$, there are $\binom{N}{k}$ possible $k$-dimensional subspaces of $\mathbf{Z}_R$, and the probability that a $k$-sparse vector $\boldsymbol{\alpha}$ violates $\left|\frac{\|\mathbf{Z}\boldsymbol{\alpha}\|_2^2}{\|\boldsymbol{\alpha}\|_2^2} - \frac{1}{2}\right| \le \frac{\epsilon}{2}$ is upper bounded by $2(eN/k)^k e^{-M_2\epsilon^2/4c^2}$, which is small for $M_2 = \mathcal{O}(k\log(N/k))$. Here, we use Stirling's approximation, which states that $k! \ge (k/e)^k$ and thus leads to $\binom{N}{k} \le (eN/k)^k$. Finally, we conclude that the probability of $\mathbf{Z}_R$ satisfying the RIP for all $k$-sparse vectors $\boldsymbol{\alpha}$ approaches 1. Similarly, we obtain the same result for $\mathbf{Z}_I$.
Remark 2 (trade-off between the MSE and the system delay).
There exists a trade-off between the system delay and the approximation error, which is described as follows. For an allowable mean-square error (MSE) ξ > 0 , the achievable system delay D ( ξ ) is defined as
$$D(\xi) \triangleq \min\left\{M : \mathbb{E}\!\left(\|\hat{\boldsymbol{\alpha}} - \boldsymbol{\alpha}\|_2^2\right) \le \xi\right\}, \qquad (33)$$
where ξ relates to the bounded error in Equation (6).
Thus, the total energy consumption over all sensors is $D(\xi)\times\sum_{j=1}^{N}\frac{N}{\rho_{ij}}$. In order to minimize the total network energy consumption, $\rho_{ij}$ should be chosen as large as possible, i.e., $p_{ij}^{\mathrm{opt}}$ should be maximized as shown in Equation (15), which leads to the following problem.
Remark 3 (throughput maximization problem).
The optimal power allocation $p_{ij}^{\mathrm{opt}}$ in Equation (15) can be obtained by solving the throughput maximization problem [19], which is given by
$$\begin{aligned} \max_{p_{ij}} \quad & \sum_{i=1}^{M} C_{ij} \\ \text{s.t.} \quad & \sum_{k=1}^{i} p_{kj} \le \sum_{k=1}^{i-1} E_{kj}, && i = 1, \ldots, M, \\ & \sum_{k=0}^{i} E_{kj} - \sum_{k=1}^{i} p_{kj} \le P_{\max}, && i = 1, \ldots, M-1, \\ & p_{ij} \ge 0. \end{aligned} \qquad (34)$$
Here, $C_{ij} = \sum_{i=1}^{M}\log_2\!\left(1 + \dfrac{p_{ij}|h_{ij}|^2}{\sum_{l=1, l\ne j}^{n}|h_{il}|^2 E_{il} + \sigma_w^2}\right)$, and $P_{\max}$ is a constant that depends on the hardware limitations. The above problem can be efficiently solved by using the iterative resource allocation algorithm of [19].
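A minimal sketch of this per-node allocation (not the iterative algorithm of [19]) is given below: it treats the interference-plus-noise term as a fixed effective gain and solves the resulting convex program with CVXPY. The channel gains, harvested energies, initial battery level E0 and the simplified peak constraint are illustrative assumptions.

```python
# A minimal sketch of per-node throughput maximization under energy causality.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
M = 8
g = rng.exponential(1.0, M)            # |h_ij|^2 / (sum_l |h_il|^2 E_il + sigma_w^2), assumed known
E = rng.uniform(0.5, 2.0, M)           # energy harvested in each slot
E0, P_max = 1.0, 5.0                   # assumed initial battery level and peak cap

p = cp.Variable(M, nonneg=True)
rate = cp.sum(cp.log(1 + cp.multiply(g, p))) / np.log(2)   # sum_i log2(1 + g_i p_i)

# Energy causality: cumulative consumption cannot exceed cumulative harvest.
constraints = [cp.sum(p[:i + 1]) <= E0 + float(np.sum(E[:i])) for i in range(M)]
constraints += [p <= P_max]            # simplified stand-in for the battery-capacity constraint

prob = cp.Problem(cp.Maximize(rate), constraints)
prob.solve()
print("throughput (bits):", round(prob.value, 3))
print("power allocation :", np.round(p.value, 3))
```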

5. Simulation Results

We now present the results of a number of numerical simulations that illustrate the effectiveness of our approach. All simulations are performed in MATLAB R2015a (version 8.5.0.197613 (R2015a), The MathWorks Inc., Seoul, Korea) on a 3.60 GHz Intel Core i7 machine with 8 GB of RAM. We use MATLAB codes of the competing algorithms for our numerical studies. The vector x was assumed to be uniformly distributed in the interval [ 1 , 10 ] . In our work, we used the basis pursuit de-noising algorithm [20] to compute the sparse solution in Equation (4). We evaluate the performance based on the MSE, which is given by
$$\mathrm{MSE} = \mathbb{E}\!\left[\frac{\|\hat{\mathbf{x}} - \mathbf{x}\|_2}{\|\mathbf{x}\|_2}\right]. \qquad (35)$$
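An end-to-end evaluation loop in the spirit of Equation (35) is sketched below: a sparse signal is projected through a sparse, energy-aware matrix under fading, recovered with an ISTA-based $\ell_1$ solver (a stand-in for the basis pursuit de-noising solver [20]), and the normalized error is averaged over trials. Taking $\boldsymbol{\Psi} = \mathbf{I}$, using only the real part of the fading, and all parameter values are simplifying assumptions.

```python
# A minimal Monte-Carlo sketch of the MSE evaluation of Equation (35).
import numpy as np

def ista(B, y, lam=0.05, n_iter=400):
    L = np.linalg.norm(B, 2) ** 2
    z = np.zeros(B.shape[1])
    for _ in range(n_iter):
        u = z - B.T @ (B @ z - y) / L
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
    return z

rng = np.random.default_rng(5)
N, M, k, rho, sigma_w, trials = 100, 60, 5, 0.25, 0.1, 50
errs = []
for _ in range(trials):
    x = np.zeros(N); x[rng.choice(N, k, replace=False)] = rng.uniform(1, 10, k)
    H = rng.standard_normal((M, N)) / np.sqrt(2)          # real part of the fading only
    mask = rng.random((M, N)) < rho
    A = np.where(mask, rng.choice([-1.0, 1.0], (M, N)) / np.sqrt(rho * M), 0.0)
    B = H * A
    y = B @ x + sigma_w * rng.standard_normal(M)
    x_hat = ista(B, y)
    errs.append(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
print("empirical normalized error (Equation (35)):", np.mean(errs))
```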
Figure 1 plots the MSE versus the compression ratio $M/N$ for several support cardinalities $k$ under fading channels, with $N = 100$, $\sigma_w^2 = 1$, and $E_{ij}$ uniformly distributed in the interval $[0~\mathrm{dB}, E_{\max}]$, where $E_{\max} = 2$ dB and $M$ is on the order of $\mathcal{O}(k\log N)$ to guarantee a stable recovery [6]. When $E_{ij} = 0$ dB, $\rho_{ij}$ was set to 0.25 as the conventional baseline. In order to minimize the total energy consumption, we can perform power allocation among the different transmission time slots subject to the causality of the harvested energy, which corresponds to the resource allocation problem with energy constraints. Note that the sparsity level in Equation (15) remains adaptive, since the optimal power allocations $p_{ij}^{\mathrm{opt}}$ obtained by solving the throughput maximization problem [1] are dynamic. The MSE values decrease as $k$ decreases, as expected. We observe that the proposed scheme performs well compared to the conventional ones over AWGN and Rayleigh fading channels. The performance gap between these schemes becomes smaller as the ratio $M/N$ increases; this is because, when $M$ is large enough, the MSE does not improve any further. We note that the sparsity level of the random projection determines the amount of communication: increasing the sparsity level decreases the pre-processing cost but unfortunately increases the latency of recovering a CS approximation. This trade-off is shown in the simulation results.
Figure 2 shows the outage probability for several $\rho_{ij}$ intervals when $k = 5$ and $N = 100$. The outage probability is defined as the probability that the matrix $\mathbf{Z}$ does not satisfy the RIP, which scales as $\exp\{-M\epsilon^2/4c^2\}$ as shown in Equation (32). For a sufficiently large $M$, we observe that the optimal compression ratio $M/N$ decreases as the sparsity level $\rho_{ij}$ increases. This means that the number of measurements $M$ must approximately obey $\mathcal{O}(k\log(N/k))$ for effective CS recovery, while being large enough to minimize the outage probability. Moreover, from the result in Section 4.2, since $M$ is inversely proportional to $\rho_{ij}$, a larger value of $\rho_{ij}$ leads to a smaller compression ratio $M/N$ for a fixed $N$.
Figure 3 illustrates the trade-off between the system delay and the MSE threshold $\xi$ for the proposed approach, as discussed in Remark 2, when $k = 5$ and $N = 100$. We observe that the proposed scheme achieves a better trade-off when either the SNR or $\xi$ increases, as expected. This is because a higher SNR means the signal is more clearly readable, so the CS recovery procedure becomes much easier. Moreover, for a tight MSE threshold, the procedure of choosing the estimate that minimizes the expected MSE takes longer, since the best MSE scaling depends on the value of $M$.
Remark 4 (sparsity level option).
This scheme is developed for transformative sensing mechanisms, which can be used in conjunction with current or upcoming EH capabilities in order to enable the deployment of energy-neutral EH-WSNs with practical network lifetimes and improved data gathering rates. However, the sparsity level in Equation (15) should be carefully chosen to maintain a good trade-off between the MSE and the system complexity. For example, when the channel condition is poor, we should select $\rho_{ij}$ large enough (e.g., $\rho_{ij} \ge 1/4$) to guarantee an acceptable MSE.

6. Conclusions

In this paper, we have addressed the problem of recovering a sparse signal observed by resource-constrained EH-WSNs with an optimal data transmission strategy. By exploiting sparse random projections, the number of data measurements to be made is significantly reduced. First, we studied a basis representation that can make the measurement matrix sufficiently sparse. The EH sensors store the sparse random projections of the data, and thus the FC can form an estimate using compressive sensing with a sufficient number of measurements from the sensors. Under fading channels, the sparsity level can be adaptively chosen according to the available harvested energy at each EH sensor. This approach provides a better trade-off between the query latency and the desired approximation error, and also speeds up the processing time. We plan to generalize this concept in future work to incorporate sparsity of user activity and imperfect channel information as well. In addition, we would like to emphasize that there are many ideas in the literature that could certainly enhance our proposed scheme. We mention a few such possibilities in the following. First, we limited our analysis to the AF protocol, while different approaches, such as applying channel coding and then using a modulation scheme for data transmission, remain a wide open field. Second, approximation recovery for imperfect data via different norms, e.g., the $\ell_q$-norm with $0 < q \le 1$, can be promising due to the high quality of its solutions and the various types of sensing matrices that can be used in the CS reconstruction algorithms. Third, further study is needed on how to design the sparsity parameter of the random projection matrix based on different channel fading statistics, so that the number of measurements required for signal recovery at the FC is minimized. Finally, for many applications of interest, we often have prior information on additional constraints, e.g., the rate-energy trade-off for simultaneous information and power transfer in EH-WSNs. Thus, other sparse random projections can be designed according to those constraints.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (2014R1A5A1011478).

Author Contributions

The work was realized with the collaboration of all of the authors. Thu L. N. Nguyen contributed to the main results and code implementation. Yoan Shin, Jin Young Kim, and Dong In Kim organized the work, provided the funding, supervised the research and reviewed the draft of the paper. All authors discussed the results, approved the final version, and wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AWGN	Additive white Gaussian noise
AF	Amplify-and-forward
CS	Compressive sensing
DF	Decode-and-forward
EH	Energy harvesting
FC	Fusion center
i.i.d.	Independent and identically distributed
MAC	Multiple-access channel
MSE	Mean-square error
JL	Johnson–Lindenstrauss
RIP	Restricted isometry property
SNR	Signal-to-noise ratio
s.t.	Subject to
WSN	Wireless sensor network

Appendix A. Proofs of Equations (21)–(25)

First, the mean and the variance of u i 0 ( ) are calculated as follows:
$$\mathbb{E}[u_{i_0}^{(\ell)}] = \mathbb{E}\!\left[\sum_{j=1}^{2N}(\hat{b}_{i_0 j}^{(\ell)})^2\psi_j x_j + \sum_{j\ne j_0}\hat{b}_{i_0 j}^{(\ell)}\hat{b}_{i_0 j_0}^{(\ell)}\psi_j x_{j_0}\right] = \sum_{j=1}^{2N}\mathbb{E}[(\hat{b}_{i_0 j}^{(\ell)})^2]\,\psi_j x_j + \sum_{j\ne j_0}\mathbb{E}[\hat{b}_{i_0 j}^{(\ell)}]\,\mathbb{E}[\hat{b}_{i_0 j_0}^{(\ell)}]\,\psi_j x_{j_0} = \boldsymbol{\psi}^T\mathbf{x};$$
$$\begin{aligned} \mathbb{E}[(u_{i_0}^{(\ell)})^2] &= \mathbb{E}\!\left[\left(\sum_{j=1}^{2N}(\hat{b}_{i_0 j}^{(\ell)})^2\psi_j x_j\right)^{\!2} + \left(\sum_{j\ne j_0}\hat{b}_{i_0 j}^{(\ell)}\hat{b}_{i_0 j_0}^{(\ell)}\psi_j x_{j_0}\right)^{\!2} + 2\sum_{j=1}^{2N}(\hat{b}_{i_0 j}^{(\ell)})^2\psi_j x_j\sum_{j\ne j_0}\hat{b}_{i_0 j}^{(\ell)}\hat{b}_{i_0 j_0}^{(\ell)}\psi_j x_{j_0}\right] \\ &= \sum_{j=1}^{2N}\mathbb{E}[(\hat{b}_{i_0 j}^{(\ell)})^4]\,\psi_j^2 x_j^2 + 2\sum_{j<j_0}\psi_j x_j\psi_{j_0}x_{j_0}\,\mathbb{E}[(\hat{b}_{i_0 j}^{(\ell)})^2]\,\mathbb{E}[(\hat{b}_{i_0 j_0}^{(\ell)})^2] + \sum_{j\ne j_0}\psi_j^2 x_{j_0}^2\,\mathbb{E}[(\hat{b}_{i_0 j}^{(\ell)})^2]\,\mathbb{E}[(\hat{b}_{i_0 j_0}^{(\ell)})^2] + 2\sum_{j<j_0}\psi_j x_{j_0}\psi_{j_0}x_j\,\mathbb{E}[(\hat{b}_{i_0 j}^{(\ell)})^2]\,\mathbb{E}[(\hat{b}_{i_0 j_0}^{(\ell)})^2] \\ &= \sum_{j=1}^{2N}\frac{3N}{\rho_{i_0 j}^{(\ell)}}\psi_j^2 x_j^2 + 2\sum_{j\ne j_0}\psi_j x_j\psi_{j_0}x_{j_0} + \sum_{j\ne j_0}\psi_j^2 x_{j_0}^2 \\ &= 2\left(\sum_{j=1}^{2N}\psi_j^2 x_j^2 + \sum_{j\ne j_0}\psi_j x_j\psi_{j_0}x_{j_0}\right) + \left(\sum_{j=1}^{2N}\psi_j^2 x_j^2 + \sum_{j\ne j_0}\psi_j^2 x_{j_0}^2\right) + \sum_{j=1}^{2N}\left(\frac{3N}{\rho_{i_0 j}^{(\ell)}} - 3\right)\psi_j^2 x_j^2 \\ &= 2(\boldsymbol{\psi}^T\mathbf{x})^2 + \|\boldsymbol{\psi}\|_2^2\|\mathbf{x}\|_2^2 + \sum_{j=1}^{2N}\left(\frac{3N}{\rho_{i_0 j}^{(\ell)}} - 3\right)x_j^2\psi_j^2. \end{aligned}$$
$$\mathrm{Var}[u_{i_0}^{(\ell)}] = \mathbb{E}[(u_{i_0}^{(\ell)})^2] - \left(\mathbb{E}[u_{i_0}^{(\ell)}]\right)^2 = (\boldsymbol{\psi}^T\mathbf{x})^2 + \|\boldsymbol{\psi}\|_2^2\|\mathbf{x}\|_2^2 + \sum_{j=1}^{2N}\left(\frac{3N}{\rho_{i_0 j}^{(\ell)}} - 3\right)x_j^2\psi_j^2.$$
Similarly, we have
$$\mathbb{E}[v_{i_0}^{(\ell)}] = \sum_{j=1}^{2N}\mathbb{E}[\hat{b}_{i_0 j}^{(\ell)}]\,\psi_j\,\mathbb{E}[w_{i_0}] = 0,$$
$$\mathbb{E}[(v_{i_0}^{(\ell)})^2] = \mathbb{E}\!\left[\left(\sum_{j=1}^{2N}\hat{b}_{i_0 j}^{(\ell)}\psi_j\right)^{\!2} w_{i_0}^2\right] = \left(\sum_{j=1}^{2N}\mathbb{E}[(\hat{b}_{i_0 j}^{(\ell)})^2]\,\psi_j^2 + 2\sum_{j\ne j_0}\mathbb{E}[\hat{b}_{i_0 j}^{(\ell)}]\,\mathbb{E}[\hat{b}_{i_0 j_0}^{(\ell)}]\,\psi_j\psi_{j_0}\right)\sigma_w^2 = \sum_{j=1}^{2N}\psi_j^2\,\sigma_w^2 = \sigma_w^2\|\boldsymbol{\psi}\|_2^2,$$
$$\mathrm{Var}[v_{i_0}^{(\ell)}] = \mathbb{E}[(v_{i_0}^{(\ell)})^2] - \left(\mathbb{E}[v_{i_0}^{(\ell)}]\right)^2 = \sigma_w^2\|\boldsymbol{\psi}\|_2^2.$$
The covariance of u i 0 ( ) and v i ¯ 0 ( ) is obtained as
$$\mathrm{Cov}[u_{i_0}^{(\ell)}, v_{\bar{i}_0}^{(\ell)}] = \mathbb{E}[u_{i_0}^{(\ell)} v_{\bar{i}_0}^{(\ell)}] - \mathbb{E}[u_{i_0}^{(\ell)}]\,\mathbb{E}[v_{\bar{i}_0}^{(\ell)}] = \mathbb{E}\!\left[\left(\sum_{j=1}^{2N}\hat{b}_{i_0 j}^{(\ell)}\psi_j\right)\left(\sum_{j=1}^{2N}\hat{b}_{i_0 j}^{(\ell)}x_j\right)\left(\sum_{j=1}^{2N}\hat{b}_{\bar{i}_0 j}^{(\ell)}\psi_j\right)\right]\mathbb{E}[w_{\bar{i}_0}] - \mathbb{E}[u_{i_0}^{(\ell)}]\,\mathbb{E}[v_{\bar{i}_0}^{(\ell)}] = 0, \quad \text{since } \mathbb{E}[w_{\bar{i}_0}] = 0 \text{ and } \mathbb{E}[v_{\bar{i}_0}^{(\ell)}] = 0.$$

References

  1. Ho, C.-K.; Zhang, R. Optimal energy allocation for wireless communications with energy harvesting constraints. IEEE Trans. Signal Process. 2012, 60, 4808–4818. [Google Scholar] [CrossRef]
  2. Sharma, V.; Mukherji, U.; Joseph, V.; Gupta, S. Optimal energy management policies for energy harvesting sensor nodes. IEEE Trans. Wirel. Commun. 2010, 9, 1326–1336. [Google Scholar] [CrossRef]
  3. Lu, X.; Wang, P.; Niyato, P.; Kim, D.I.; Han, Z. Wireless networks with RF energy harvesting: A contemporary survey. IEEE Commun. Surv. Tutor. 2014, 17, 757–789. [Google Scholar] [CrossRef]
  4. Shaikh, F.K.; Zeadally, S. Energy harvesting in wireless sensor networks: A comprehensive review. Renew. Sustain. Energy Rev. 2016, 55, 1041–1054. [Google Scholar] [CrossRef]
  5. Dong, M.; Ota, K.; Liu, A. RMER: Reliable and energy-efficient data collection for large-Scale wireless sensor networks. IEEE Internet Things J. 2016, 3, 511–519. [Google Scholar] [CrossRef]
  6. Candès, E.J.; Romberg, J.; Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2006, 59, 1207–1223. [Google Scholar] [CrossRef]
  7. Baraniuk, R.G.; Cevher, V.; Duarte, M.F.; Hegde, C. Model-based compressive sensing. IEEE Trans. Inf. Theory 2010, 56, 1982–2001. [Google Scholar] [CrossRef]
  8. Davenport, M.A.; Boufounos, P.T.; Wakin, M.B.; Baraniuk, R.G. Signal processing with compressive measurements. IEEE J. Sel. Top. Signal Process. 2010, 4, 445–460. [Google Scholar] [CrossRef]
  9. Wang, W.; Garofalakis, M.; Ramchandran, K. Distributed sparse random projections for refinable approximation. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks, Cambridge, MA, USA, 25–27 April 2007; pp. 331–339. [Google Scholar]
  10. Rana, R.; Hu, W.; Chou, C. Energy-aware sparse approximation technique (EAST) for rechargeable wireless sensor networks. In Proceedings of the 7th European Conference on Wireless Sensor Networks, EWSN 2010, Coimbra, Portugal, 17–19 February 2010; pp. 306–321. [Google Scholar]
  11. Yang, G.; Tan, V.Y.F.; Ho, C.-K.; Ting, S.H.; Guan, Y.L. Wireless compressive sensing for energy harvesting sensor nodes. IEEE Trans. Signal Process. 2013, 61, 4491–4505. [Google Scholar] [CrossRef]
  12. Li, P.; Hastie, T.R.; Church, K.W. Very sparse random projections. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, PA, USA, 20–23 August 2006; ACM: New York, NY, USA, 2006; pp. 287–296. [Google Scholar]
  13. Bajwa, W.U.; Haupt, J.D.; Sayeed, A.M.; Nowak, R.D. Joint source channel communication for distributed estimation in sensor networks. IEEE Trans. Inf. Theory 2007, 53, 3629–3653. [Google Scholar] [CrossRef]
  14. Gastpar, M. Uncoded transmission is exactly optimal for a simple Gaussian sensor network. IEEE Trans. Inf. Theory 2008, 54, 5247–5251. [Google Scholar] [CrossRef]
  15. Marano, S.; Matta, V.; Tong, L.; Willett, P. A likelihood-based multiple access for estimation in sensor networks. IEEE Trans. Signal Process. 2007, 55, 5155–5166. [Google Scholar] [CrossRef]
  16. Rubinstein, R.; Bruckstein, A.M.; Elad, M. Dictionaries for sparse representation modeling. Proc. IEEE 2010, 98, 1045–1057. [Google Scholar] [CrossRef]
  17. Dasgupta, S.; Gupta, A. An elementary proof of the Johnson-Lindenstrauss lemma. Random Struct. Algorithms 2003, 22, 60–65. [Google Scholar] [CrossRef]
  18. Davenport, M. Random Observations on Random Observations: Sparse Signal Acquisition and Processing. Ph.D. Thesis, Rice University, Houston, TX, USA, 2010. [Google Scholar]
  19. Ng, D.W.K.; Lo, E.S.; Schober, R. Wireless information and power transfer: Energy efficiency optimization in OFDMA systems. IEEE Trans. Wirel. Commun. 2013, 12, 6352–6370. [Google Scholar] [CrossRef]
  20. van den Berg, E.; Friedlander, M.P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2008, 31, 890–912. [Google Scholar]
Figure 1. MSE versus different compression ratios with fading channels.
Figure 2. Outage probability with different sparsity levels.
Figure 3. System delay for several allowable MSE thresholds.
