Article

A Low-Complexity and Asymptotically Optimal Coding Strategy for Gaussian Vector Sources

by Marta Zárraga-Rodríguez, Jesús Gutiérrez-Gutiérrez * and Xabier Insausti
Tecnun, University of Navarra, Paseo de Manuel Lardizábal 13, 20018 San Sebastián, Spain
* Author to whom correspondence should be addressed.
Entropy 2019, 21(10), 965; https://doi.org/10.3390/e21100965
Submission received: 4 August 2019 / Revised: 27 September 2019 / Accepted: 28 September 2019 / Published: 2 October 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract: In this paper, we present a low-complexity coding strategy to encode (compress) finite-length data blocks of Gaussian vector sources. We show that for large enough data blocks of a Gaussian asymptotically wide sense stationary (AWSS) vector source, the rate of the coding strategy tends to the lowest possible rate. Besides being a low-complexity strategy, it does not require knowledge of the correlation matrix of such data blocks. We also show that this coding strategy is appropriate to encode the most relevant Gaussian vector sources, namely, wide sense stationary (WSS), moving average (MA), autoregressive (AR), and ARMA vector sources.

1. Introduction

The rate distortion function (RDF) of a source provides the minimum rate at which data can be encoded in order to be able to recover them with a mean squared error (MSE) per dimension not larger than a given distortion.
In this paper, we present a low-complexity coding strategy to encode (compress) finite-length data blocks of Gaussian N-dimensional vector sources. Moreover, we show that for large enough data blocks of a Gaussian asymptotically wide sense stationary (AWSS) vector source, the rate of our coding strategy tends to the RDF of the source. The definition of an AWSS vector process can be found in ([1] (Definition 7.1)). This definition was first introduced for the scalar case N = 1 (see ([2] (Section 6)) or [3]), and it is based on Gray's concept of asymptotically equivalent sequences of matrices [4].
A low-complexity coding strategy can be found in [5] for finite-length data blocks of Gaussian wide sense stationary (WSS) sources and in [6] for finite-length data blocks of Gaussian AWSS autoregressive (AR) sources. Both precedents deal with scalar processes. The low-complexity coding strategy presented in this paper generalizes the aforementioned strategies to Gaussian AWSS vector sources.
Our coding strategy is based on the block discrete Fourier transform (DFT), and therefore, it turns out to be a low-complexity coding strategy when the fast Fourier transform (FFT) algorithm is used. Specifically, the computational complexity of our coding strategy is O ( n N log n ) , where n is the length of the data blocks. Besides being a low-complexity strategy, it does not require the knowledge of the correlation matrix of such data blocks.
We show that this coding strategy is appropriate to encode the most relevant Gaussian vector sources, namely, WSS, moving average (MA), autoregressive (AR), and ARMA vector sources. Observe that our coding strategy is then appropriate to encode Gaussian vector sources found in the literature, such as the corrupted WSS vector sources considered in [7,8] for the quadratic Gaussian CEO problem.
The paper is organized as follows. In Section 2, we obtain several new mathematical results on the block DFT, and we present an upper bound for the RDF of a complex Gaussian vector. In Section 3, using the results given in Section 2, we present a new coding strategy based on the block DFT to encode finite-length data blocks of Gaussian vector sources. In Section 4, we show that for large enough data blocks of a Gaussian AWSS vector source, the rate of our coding strategy tends to the RDF of the source. In Section 5, we show that our coding strategy is appropriate to encode WSS, MA, AR, and ARMA vector sources. In Section 6, conclusions and numerical examples are presented.

2. Preliminaries

2.1. Notation

In this paper, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$ are the set of positive integers, the set of integers, the set of real numbers, and the set of complex numbers, respectively. The symbol $\top$ denotes transpose and the symbol $*$ denotes conjugate transpose. $\|\cdot\|_2$ and $\|\cdot\|_F$ are the spectral and the Frobenius norm, respectively. $\lceil x\rceil$ denotes the smallest integer greater than or equal to $x$. $E$ stands for expectation, $\otimes$ is the Kronecker product, and $\lambda_j(A)$, $j\in\{1,\dots,n\}$, denote the eigenvalues of an $n\times n$ Hermitian matrix $A$ arranged in decreasing order. $\mathbb{R}^{n\times 1}$ is the set of real $n$-dimensional (column) vectors, $\mathbb{C}^{m\times n}$ denotes the set of $m\times n$ complex matrices, $0_{m\times n}$ is the $m\times n$ zero matrix, $I_n$ denotes the $n\times n$ identity matrix, and $V_n$ is the $n\times n$ Fourier unitary matrix, i.e.,
\[
[V_n]_{j,k}=\frac{1}{\sqrt n}\,e^{-\frac{2\pi(j-1)(k-1)}{n}i},\qquad j,k\in\{1,\dots,n\},
\]
where $i$ is the imaginary unit.
If $A_j\in\mathbb{C}^{N\times N}$ for all $j\in\{1,\dots,n\}$, then $\operatorname{diag}_{1\le j\le n}(A_j)$ denotes the $n\times n$ block diagonal matrix with $N\times N$ blocks given by $\operatorname{diag}_{1\le j\le n}(A_j)=(A_j\delta_{j,k})_{j,k=1}^{n}$, where $\delta$ is the Kronecker delta.
$\mathrm{Re}$ and $\mathrm{Im}$ denote the real part and the imaginary part of a complex number, respectively. If $A\in\mathbb{C}^{m\times n}$, then $\mathrm{Re}(A)$ and $\mathrm{Im}(A)$ are the $m\times n$ real matrices given by $[\mathrm{Re}(A)]_{j,k}=\mathrm{Re}([A]_{j,k})$ and $[\mathrm{Im}(A)]_{j,k}=\mathrm{Im}([A]_{j,k})$ with $j\in\{1,\dots,m\}$ and $k\in\{1,\dots,n\}$, respectively.
If $z\in\mathbb{C}^{N\times 1}$, then $\hat z$ denotes the real $2N$-dimensional vector given by
\[
\hat z=\begin{pmatrix}\mathrm{Re}(z)\\ \mathrm{Im}(z)\end{pmatrix}.
\]
If $z_k\in\mathbb{C}^{N\times 1}$ for all $k\in\{1,\dots,n\}$, then $z_{n:1}$ is the $nN$-dimensional vector given by
\[
z_{n:1}=\begin{pmatrix}z_n\\ z_{n-1}\\ \vdots\\ z_1\end{pmatrix}.
\]
Finally, if $z_k$ is a (complex) random $N$-dimensional vector for all $k\in\mathbb{N}$, $\{z_k\}$ denotes the corresponding (complex) random $N$-dimensional vector process.
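As a quick illustration of this notation, the following Python sketch (assuming NumPy is available; the helper names are ours and do not appear in the paper) builds the Fourier unitary matrix $V_n$, stacks vectors according to the $z_{n:1}$ convention, and applies the block DFT matrix $(V_n\otimes I_N)^*$ used throughout the paper.

```python
import numpy as np

def fourier_unitary(n):
    """n x n Fourier unitary matrix V_n with [V_n]_{j,k} = e^{-2*pi*(j-1)(k-1)i/n}/sqrt(n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

def stack(vectors):
    """Form z_{n:1} = (z_n; z_{n-1}; ...; z_1) from the list [z_1, ..., z_n]."""
    return np.concatenate(list(reversed(vectors)))

def block_dft(x_stacked, n, N):
    """Compute y_{n:1} = (V_n kron I_N)^* x_{n:1}."""
    W = np.kron(fourier_unitary(n), np.eye(N))
    return W.conj().T @ x_stacked

if __name__ == "__main__":
    n, N = 8, 2
    Vn = fourier_unitary(n)
    # V_n is unitary: V_n V_n^* = I_n (up to floating-point error).
    assert np.allclose(Vn @ Vn.conj().T, np.eye(n))
    x_blocks = [np.random.randn(N) for _ in range(n)]   # x_1, ..., x_n (real)
    y = block_dft(stack(x_blocks), n, N)
    print(y[:N])  # top block of y_{n:1}, i.e., y_n
```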

2.2. New Mathematical Results on the Block DFT

We first give a simple result on the block DFT of real vectors.
Lemma 1.
Let $n,N\in\mathbb{N}$. Consider $x_k\in\mathbb{C}^{N\times 1}$ for all $k\in\{1,\dots,n\}$. Suppose that $y_{n:1}$ is the block DFT of $x_{n:1}$, i.e.,
\[
y_{n:1}=(V_n^*\otimes I_N)\,x_{n:1}=(V_n\otimes I_N)^*x_{n:1}.\qquad(1)
\]
Then the two following assertions are equivalent:
1. 
$x_{n:1}\in\mathbb{R}^{nN\times 1}$.
2. 
$y_k=\overline{y_{n-k}}$ for all $k\in\{1,\dots,n-1\}$ and $y_n\in\mathbb{R}^{N\times 1}$.
Proof. 
See Appendix A. □
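The conjugate-symmetry property in Lemma 1 is easy to check numerically. A minimal sketch (our own helper names, assuming NumPy) applies the block DFT to a real stacked vector and verifies assertion 2:

```python
import numpy as np

n, N = 7, 3
V = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
W = np.kron(V, np.eye(N)).conj().T                     # (V_n kron I_N)^*

x_blocks = [np.random.randn(N) for _ in range(n)]      # real x_1, ..., x_n
x_stacked = np.concatenate(x_blocks[::-1])             # x_{n:1} = (x_n; ...; x_1)
y_stacked = W @ x_stacked

def y(k):
    """Extract y_k from y_{n:1}; block j of the stack holds y_{n-j+1}."""
    return y_stacked[(n - k) * N:(n - k + 1) * N]

# Assertion 2 of Lemma 1: y_k = conj(y_{n-k}) for k = 1, ..., n-1, and y_n is real.
for k in range(1, n):
    assert np.allclose(y(k), np.conj(y(n - k)))
assert np.allclose(np.imag(y(n)), 0.0)
```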
We now give three new mathematical results on the block DFT of random vectors that are used in Section 3.
Theorem 1.
Consider $n,N\in\mathbb{N}$. Let $x_k$ be a random $N$-dimensional vector for all $k\in\{1,\dots,n\}$. Suppose that $y_{n:1}$ is given by Equation (1). If $k\in\{1,\dots,n\}$, then
\[
\lambda_{nN}\bigl(E(x_{n:1}x_{n:1}^*)\bigr)\le\lambda_N\bigl(E(x_kx_k^*)\bigr)\le\lambda_1\bigl(E(x_kx_k^*)\bigr)\le\lambda_1\bigl(E(x_{n:1}x_{n:1}^*)\bigr)\qquad(2)
\]
and
\[
\lambda_{nN}\bigl(E(x_{n:1}x_{n:1}^*)\bigr)\le\lambda_N\bigl(E(y_ky_k^*)\bigr)\le\lambda_1\bigl(E(y_ky_k^*)\bigr)\le\lambda_1\bigl(E(x_{n:1}x_{n:1}^*)\bigr).\qquad(3)
\]
Proof. 
See Appendix B. □
Theorem 2.
Let $x_{n:1}$ and $y_{n:1}$ be as in Theorem 1. Suppose that $x_{n:1}$ is real. If $k\in\{1,\dots,n-1\}\setminus\{\frac n2\}$, then
\[
\frac{\lambda_{nN}\bigl(E(x_{n:1}x_{n:1}^\top)\bigr)}{2}\le\lambda_{2N}\bigl(E(\hat y_k\hat y_k^{\,\top})\bigr)\le\lambda_1\bigl(E(\hat y_k\hat y_k^{\,\top})\bigr)\le\frac{\lambda_1\bigl(E(x_{n:1}x_{n:1}^\top)\bigr)}{2}.
\]
Proof. 
See Appendix C. □
Lemma 2.
Let $x_{n:1}$ and $y_{n:1}$ be as in Theorem 1. If $k\in\{1,\dots,n\}$, then
1. 
$E(y_ky_k^*)=\bigl[(V_n\otimes I_N)^*\,E(x_{n:1}x_{n:1}^*)\,(V_n\otimes I_N)\bigr]_{n-k+1,n-k+1}$.
2. 
$E(y_ky_k^\top)=\bigl[(V_n\otimes I_N)^*\,E(x_{n:1}x_{n:1}^\top)\,\overline{(V_n\otimes I_N)}\bigr]_{n-k+1,n-k+1}$.
3. 
$E(\hat y_k\hat y_k^{\,\top})=\dfrac12\begin{pmatrix}\mathrm{Re}\bigl(E(y_ky_k^*)\bigr)+\mathrm{Re}\bigl(E(y_ky_k^\top)\bigr) & \mathrm{Im}\bigl(E(y_ky_k^\top)\bigr)-\mathrm{Im}\bigl(E(y_ky_k^*)\bigr)\\[1ex] \mathrm{Im}\bigl(E(y_ky_k^*)\bigr)+\mathrm{Im}\bigl(E(y_ky_k^\top)\bigr) & \mathrm{Re}\bigl(E(y_ky_k^*)\bigr)-\mathrm{Re}\bigl(E(y_ky_k^\top)\bigr)\end{pmatrix}$.
Proof. 
See Appendix D. □

2.3. Upper Bound for the RDF of a Complex Gaussian Vector

In [9], Kolmogorov gave a formula for the RDF of a real zero-mean Gaussian $N$-dimensional vector $x$ with positive definite correlation matrix $E(xx^\top)$, namely,
\[
R_x(D)=\frac1N\sum_{k=1}^{N}\max\Bigl(0,\frac12\ln\frac{\lambda_k(E(xx^\top))}{\theta}\Bigr),\qquad D\in\Bigl(0,\frac{\operatorname{tr}(E(xx^\top))}{N}\Bigr],\qquad(4)
\]
where $\operatorname{tr}$ denotes trace and $\theta$ is a real number satisfying
\[
D=\frac1N\sum_{k=1}^{N}\min\bigl(\theta,\lambda_k(E(xx^\top))\bigr).
\]
If $D\in\bigl(0,\lambda_N(E(xx^\top))\bigr]$, an optimal coding strategy to achieve $R_x(D)$ is to encode $[z]_{1,1},\dots,[z]_{N,1}$ separately, where $z=U^\top x$ with $U$ being a real orthogonal eigenvector matrix of $E(xx^\top)$ (see ([6] (Corollary 1))). Observe that in order to obtain $U$, we need to know the correlation matrix $E(xx^\top)$. This coding strategy also requires an optimal coding method for real Gaussian random variables. Moreover, as $0<D\le\lambda_N(E(xx^\top))\le\frac1N\sum_{k=1}^N\lambda_k(E(xx^\top))=\frac{\operatorname{tr}(E(xx^\top))}{N}$, if $D\in\bigl(0,\lambda_N(E(xx^\top))\bigr]$, then from Equation (4) we obtain
\[
R_x(D)=\frac1N\sum_{k=1}^{N}\frac12\ln\frac{\lambda_k(E(xx^\top))}{D}=\frac{1}{2N}\ln\frac{\prod_{k=1}^{N}\lambda_k(E(xx^\top))}{D^{N}}=\frac{1}{2N}\ln\frac{\det(E(xx^\top))}{D^{N}}.\qquad(5)
\]
We recall that R x ( D ) can be thought of as the minimum rate (measured in nats) at which x can be encoded (compressed) in order to be able to recover it with an MSE per dimension not larger than D, that is:
\[
\frac{E\bigl(\|x-\tilde x\|_2^2\bigr)}{N}\le D,
\]
where x ˜ denotes the estimation of x .
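For concreteness, here is a small sketch of Kolmogorov's reverse water-filling computation in Equation (4) (our own function names, assuming NumPy; this is an illustration, not code from the paper): given the eigenvalues of $E(xx^\top)$ and a distortion $D$, it finds $\theta$ by bisection and returns $R_x(D)$ in nats.

```python
import numpy as np

def kolmogorov_rdf(eigenvalues, D, tol=1e-12):
    """RDF of a real zero-mean Gaussian vector, Equation (4).

    eigenvalues: eigenvalues of the positive definite correlation matrix E(x x^T).
    D: distortion with 0 < D <= trace(E(x x^T)) / N.
    Returns the rate in nats per dimension.
    """
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    # Find theta such that D = (1/N) * sum_k min(theta, lambda_k) by bisection.
    lo, hi = 0.0, lam[0]
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, lam).mean() < D:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return np.maximum(0.0, 0.5 * np.log(lam / theta)).mean()

# When D <= lambda_N, the result reduces to (1/(2N)) ln(det(E(x x^T)) / D^N), Equation (5).
lam = np.array([4.0, 2.0, 1.0])
D = 0.5
print(kolmogorov_rdf(lam, D), np.log(np.prod(lam) / D**3) / (2 * 3))
```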
The following result gives an upper bound for the RDF of a complex zero-mean Gaussian N-dimensional vector (i.e., a real zero-mean Gaussian 2 N -dimensional vector).
Lemma 3.
Consider $N\in\mathbb{N}$. Let $z$ be a complex zero-mean Gaussian $N$-dimensional vector. If $E(\hat z\hat z^{\,\top})$ is a positive definite matrix, then
\[
R_{\hat z}(D)\le\frac{1}{2N}\ln\frac{\det(E(zz^*))}{(2D)^{N}},\qquad D\in\bigl(0,\lambda_{2N}(E(\hat z\hat z^{\,\top}))\bigr].\qquad(6)
\]
Proof. 
We divide the proof into three steps:
Step 1: We prove that $E(zz^*)$ is a positive definite matrix. We have
\[
E(\hat z\hat z^{\,\top})=\begin{pmatrix}E\bigl(\mathrm{Re}(z)\,\mathrm{Re}(z)^\top\bigr) & E\bigl(\mathrm{Re}(z)\,\mathrm{Im}(z)^\top\bigr)\\ E\bigl(\mathrm{Im}(z)\,\mathrm{Re}(z)^\top\bigr) & E\bigl(\mathrm{Im}(z)\,\mathrm{Im}(z)^\top\bigr)\end{pmatrix}
\]
and
\[
E(zz^*)=E\bigl((\mathrm{Re}(z)+i\,\mathrm{Im}(z))(\mathrm{Re}(z)-i\,\mathrm{Im}(z))^\top\bigr)=E\bigl(\mathrm{Re}(z)\,\mathrm{Re}(z)^\top\bigr)+E\bigl(\mathrm{Im}(z)\,\mathrm{Im}(z)^\top\bigr)+i\Bigl(E\bigl(\mathrm{Im}(z)\,\mathrm{Re}(z)^\top\bigr)-E\bigl(\mathrm{Re}(z)\,\mathrm{Im}(z)^\top\bigr)\Bigr).
\]
Consider $u\in\mathbb{C}^{N\times 1}$, and suppose that $u^*E(zz^*)u=0$. We only need to show that $u=0_{N\times 1}$. As $E(\hat z\hat z^{\,\top})$ is a positive definite matrix and
\[
\begin{pmatrix}u\\ -iu\end{pmatrix}^{*}E(\hat z\hat z^{\,\top})\begin{pmatrix}u\\ -iu\end{pmatrix}
= u^*E\bigl(\mathrm{Re}(z)\,\mathrm{Re}(z)^\top\bigr)u - i\,u^*E\bigl(\mathrm{Re}(z)\,\mathrm{Im}(z)^\top\bigr)u + i\,u^*E\bigl(\mathrm{Im}(z)\,\mathrm{Re}(z)^\top\bigr)u + u^*E\bigl(\mathrm{Im}(z)\,\mathrm{Im}(z)^\top\bigr)u = u^*E(zz^*)u=0,
\]
we obtain $\begin{pmatrix}u\\ -iu\end{pmatrix}=0_{2N\times 1}$, or equivalently, $u=0_{N\times 1}$.
Step 2: We show that $\det\bigl(E(\hat z\hat z^{\,\top})\bigr)\le\frac{(\det(E(zz^*)))^2}{2^{2N}}$. We have $E(zz^*)=\Lambda_c+i\Lambda_s$, where $\Lambda_c=E\bigl(\mathrm{Re}(z)\,\mathrm{Re}(z)^\top\bigr)+E\bigl(\mathrm{Im}(z)\,\mathrm{Im}(z)^\top\bigr)$ and $\Lambda_s=E\bigl(\mathrm{Im}(z)\,\mathrm{Re}(z)^\top\bigr)-E\bigl(\mathrm{Re}(z)\,\mathrm{Im}(z)^\top\bigr)$. Applying ([10] (Corollary 1)), we obtain
\[
\begin{aligned}
\det\bigl(E(\hat z\hat z^{\,\top})\bigr)&\le\frac{\det\bigl(\Lambda_c+\Lambda_s\Lambda_c^{-1}\Lambda_s\bigr)\det(\Lambda_c)}{2^{2N}}
=\frac{\det\bigl(I_N+\Lambda_s\Lambda_c^{-1}\Lambda_s\Lambda_c^{-1}\bigr)(\det(\Lambda_c))^2}{2^{2N}}\\
&=\frac{\det\bigl(I_N+i\Lambda_s\Lambda_c^{-1}\bigr)\det\bigl(I_N-i\Lambda_s\Lambda_c^{-1}\bigr)(\det(\Lambda_c))^2}{2^{2N}}
=\frac{\det(\Lambda_c+i\Lambda_s)\det(\Lambda_c-i\Lambda_s)}{2^{2N}}\\
&=\frac{\det(E(zz^*))\det\bigl(\overline{E(zz^*)}\bigr)}{2^{2N}}
=\frac{(\det(E(zz^*)))^2}{2^{2N}}.
\end{aligned}
\]
Step 3: We now prove Equation (6). From Equation (5), we conclude that
\[
R_{\hat z}(D)=\frac{1}{4N}\ln\frac{\det\bigl(E(\hat z\hat z^{\,\top})\bigr)}{D^{2N}}\le\frac{1}{4N}\ln\frac{(\det(E(zz^*)))^2}{(2D)^{2N}}=\frac{1}{2N}\ln\frac{\det(E(zz^*))}{(2D)^{N}}.
\]
 □

3. Low-Complexity Coding Strategy for Gaussian Vector Sources

In this section (see Theorem 3), we present our coding strategy for Gaussian vector sources. To encode a finite-length data block $x_{n:1}$ of a Gaussian $N$-dimensional vector source $\{x_k\}$, we compute the block DFT $y_{n:1}$ of $x_{n:1}$ and we encode $y_{\lceil\frac n2\rceil},\dots,y_n$ separately with $\frac{E(\|y_k-\tilde y_k\|_2^2)}{N}\le D$ for all $k\in\{\lceil\frac n2\rceil,\dots,n\}$ (see Figure 1).
We denote by R ˜ x n : 1 ( D ) the rate of our strategy. Theorem 3 also provides an upper bound of R ˜ x n : 1 ( D ) . This upper bound is used in Section 4 to prove that our coding strategy is asymptotically optimal whenever the Gaussian vector source is AWSS.
In Theorem 3, $C_{A_n}$ denotes the matrix $(V_n\otimes I_N)\operatorname{diag}_{1\le k\le n}\bigl(\bigl[(V_n\otimes I_N)^*A_n(V_n\otimes I_N)\bigr]_{k,k}\bigr)(V_n\otimes I_N)^*$, where $A_n\in\mathbb{C}^{nN\times nN}$.
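A direct (dense) construction of $C_{A_n}$ is straightforward; the sketch below (our own helper names, assuming NumPy) extracts the $N\times N$ diagonal blocks of $(V_n\otimes I_N)^*A_n(V_n\otimes I_N)$ and transforms back. It is meant only to make the definition concrete and does not exploit the FFT.

```python
import numpy as np

def fourier_unitary(n):
    jk = np.outer(np.arange(n), np.arange(n))
    return np.exp(-2j * np.pi * jk / n) / np.sqrt(n)

def C_of(A, n, N):
    """C_{A_n} = (V_n kron I_N) diag_k([ (V_n kron I_N)^* A (V_n kron I_N) ]_{k,k}) (V_n kron I_N)^*."""
    W = np.kron(fourier_unitary(n), np.eye(N))
    B = W.conj().T @ A @ W
    # Keep only the N x N diagonal blocks of B.
    D = np.zeros_like(B)
    for k in range(n):
        sl = slice(k * N, (k + 1) * N)
        D[sl, sl] = B[sl, sl]
    return W @ D @ W.conj().T

if __name__ == "__main__":
    n, N = 6, 2
    M = np.random.randn(n * N, n * N)
    A = M @ M.T + np.eye(n * N)          # a positive definite "correlation" matrix
    C = C_of(A, n, N)
    # C_{A_n} is Hermitian whenever A_n is (cf. Step 3 of the proof of Theorem 3).
    assert np.allclose(C, C.conj().T)
```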
Theorem 3.
Consider $n,N\in\mathbb{N}$. Let $x_k$ be a random $N$-dimensional vector for all $k\in\{1,\dots,n\}$. Suppose that $x_{n:1}$ is a real zero-mean Gaussian vector with a positive definite correlation matrix (or equivalently, $\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))>0$). Let $y_{n:1}$ be the random vector given by Equation (1). If $D\in\bigl(0,\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))\bigr]$, then
\[
R_{x_{n:1}}(D)\le\widetilde R_{x_{n:1}}(D)\le\frac{1}{2nN}\ln\frac{\det\bigl(C_{E(x_{n:1}x_{n:1}^\top)}\bigr)}{D^{nN}},\qquad(7)
\]
where
\[
\widetilde R_{x_{n:1}}(D)=\begin{cases}\dfrac{R_{y_{\frac n2}}(D)+2\sum_{k=\frac n2+1}^{n-1}R_{\hat y_k}\bigl(\frac D2\bigr)+R_{y_n}(D)}{n} & \text{if $n$ is even},\\[2ex] \dfrac{2\sum_{k=\frac{n+1}2}^{n-1}R_{\hat y_k}\bigl(\frac D2\bigr)+R_{y_n}(D)}{n} & \text{if $n$ is odd}.\end{cases}
\]
Moreover,
\[
0\le\frac{1}{2nN}\ln\frac{\det\bigl(C_{E(x_{n:1}x_{n:1}^\top)}\bigr)}{D^{nN}}-R_{x_{n:1}}(D)\le\frac12\ln\Biggl(1+\frac{\bigl\|E(x_{n:1}x_{n:1}^\top)-C_{E(x_{n:1}x_{n:1}^\top)}\bigr\|_F}{\sqrt{nN}\,\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))}\Biggr).\qquad(8)
\]
Proof. 
We divide the proof into three steps:
Step 1: We show that $R_{x_{n:1}}(D)\le\widetilde R_{x_{n:1}}(D)$. From Lemma 1, $y_k=\overline{y_{n-k}}$ for all $k\in\{1,\dots,\lceil\frac n2\rceil-1\}$, and $y_k\in\mathbb{R}^{N\times 1}$ with $k\in\{\frac n2,n\}\cap\mathbb{N}$. We encode $y_{\lceil\frac n2\rceil},\dots,y_n$ separately (i.e., if $n$ is even, we encode $y_{\frac n2},\hat y_{\frac n2+1},\dots,\hat y_{n-1},y_n$ separately, and if $n$ is odd, we encode $\hat y_{\frac{n+1}2},\dots,\hat y_{n-1},y_n$ separately) with
\[
\frac{E\bigl(\|\hat y_k-\widetilde{\hat y}_k\|_2^2\bigr)}{2N}\le\frac D2,\qquad k\in\bigl\{\lceil\tfrac n2\rceil,\dots,n-1\bigr\}\setminus\bigl\{\tfrac n2\bigr\},
\]
and
\[
\frac{E\bigl(\|y_k-\tilde y_k\|_2^2\bigr)}{N}\le D,\qquad k\in\bigl\{\tfrac n2,n\bigr\}\cap\mathbb{N}.
\]
Let $\widetilde{x_{n:1}}=(V_n\otimes I_N)\,\widetilde{y_{n:1}}$ with
\[
\widetilde{y_{n:1}}=\begin{pmatrix}\tilde y_n\\ \vdots\\ \tilde y_1\end{pmatrix},
\]
where $\widehat{\tilde y_k}=\widetilde{\hat y}_k$ for all $k\in\{\lceil\frac n2\rceil,\dots,n-1\}\setminus\{\frac n2\}$, and $\tilde y_k=\overline{\tilde y_{n-k}}$ for all $k\in\{1,\dots,\lceil\frac n2\rceil-1\}$. Applying Lemma 1 yields $\widetilde{x_{n:1}}\in\mathbb{R}^{nN\times 1}$. As $(V_n\otimes I_N)^*$ is unitary and $\|\cdot\|_2$ is unitarily invariant, we have
\[
\begin{aligned}
\frac{E\bigl(\|x_{n:1}-\widetilde{x_{n:1}}\|_2^2\bigr)}{nN}
&=\frac{E\bigl(\|(V_n\otimes I_N)^*x_{n:1}-(V_n\otimes I_N)^*\widetilde{x_{n:1}}\|_2^2\bigr)}{nN}
=\frac{E\bigl(\|y_{n:1}-\widetilde{y_{n:1}}\|_2^2\bigr)}{nN}
=\frac{1}{nN}\sum_{k=1}^{n}E\bigl(\|y_k-\tilde y_k\|_2^2\bigr)\\
&=\frac{1}{nN}\Biggl(2\sum_{k_1\in\{\lceil\frac n2\rceil,\dots,n-1\}\setminus\{\frac n2\}}E\bigl(\|y_{k_1}-\tilde y_{k_1}\|_2^2\bigr)+\sum_{k_2\in\{\frac n2,n\}\cap\mathbb{N}}E\bigl(\|y_{k_2}-\tilde y_{k_2}\|_2^2\bigr)\Biggr)\\
&=\frac{1}{nN}\Biggl(2\sum_{k_1\in\{\lceil\frac n2\rceil,\dots,n-1\}\setminus\{\frac n2\}}E\bigl(\|\hat y_{k_1}-\widetilde{\hat y}_{k_1}\|_2^2\bigr)+\sum_{k_2\in\{\frac n2,n\}\cap\mathbb{N}}E\bigl(\|y_{k_2}-\tilde y_{k_2}\|_2^2\bigr)\Biggr)\\
&\le\begin{cases}\frac{1}{nN}\bigl(2\bigl(\frac n2-1\bigr)ND+2ND\bigr) & \text{if $n$ is even},\\ \frac{1}{nN}\bigl(2\bigl(n-\frac{n+1}2\bigr)ND+ND\bigr) & \text{if $n$ is odd},\end{cases}\\
&=D.
\end{aligned}
\]
Consequently,
\[
R_{x_{n:1}}(D)\le\begin{cases}\dfrac{N R_{y_{\frac n2}}(D)+2N\sum_{k=\frac n2+1}^{n-1}R_{\hat y_k}\bigl(\frac D2\bigr)+N R_{y_n}(D)}{nN} & \text{if $n$ is even},\\[2ex] \dfrac{2N\sum_{k=\frac{n+1}2}^{n-1}R_{\hat y_k}\bigl(\frac D2\bigr)+N R_{y_n}(D)}{nN} & \text{if $n$ is odd},\end{cases}\;=\;\widetilde R_{x_{n:1}}(D).
\]
Step 2: We prove that $\widetilde R_{x_{n:1}}(D)\le\frac{1}{2nN}\ln\frac{\det(C_{E(x_{n:1}x_{n:1}^\top)})}{D^{nN}}$. From Equations (3) and (5), we obtain
\[
R_{y_k}(D)=\frac{1}{2N}\ln\frac{\det\bigl(E(y_ky_k^\top)\bigr)}{D^{N}},\qquad k\in\bigl\{\tfrac n2,n\bigr\}\cap\mathbb{N},\qquad(9)
\]
and applying Theorem 2 and Equation (5) yields
\[
R_{\hat y_k}\Bigl(\frac D2\Bigr)=\frac{1}{4N}\ln\frac{\det\bigl(E(\hat y_k\hat y_k^{\,\top})\bigr)}{\bigl(\frac D2\bigr)^{2N}},\qquad k\in\{1,\dots,n-1\}\setminus\bigl\{\tfrac n2\bigr\}.\qquad(10)
\]
From Lemma 3, we have
\[
\begin{aligned}
\widetilde R_{x_{n:1}}(D)&\le\frac1n\Biggl(2\sum_{k_1\in\{\lceil\frac n2\rceil,\dots,n-1\}\setminus\{\frac n2\}}\frac{1}{2N}\ln\frac{\det\bigl(E(y_{k_1}y_{k_1}^*)\bigr)}{D^{N}}+\sum_{k_2\in\{\frac n2,n\}\cap\mathbb{N}}\frac{1}{2N}\ln\frac{\det\bigl(E(y_{k_2}y_{k_2}^*)\bigr)}{D^{N}}\Biggr)\\
&=\frac{1}{2nN}\Biggl(\sum_{k_1\in\{\lceil\frac n2\rceil,\dots,n-1\}\setminus\{\frac n2\}}\biggl(\ln\frac{\det\bigl(E(y_{k_1}y_{k_1}^*)\bigr)}{D^{N}}+\ln\frac{\det\bigl(\overline{E(y_{k_1}y_{k_1}^*)}\bigr)}{D^{N}}\biggr)+\sum_{k_2\in\{\frac n2,n\}\cap\mathbb{N}}\ln\frac{\det\bigl(E(y_{k_2}y_{k_2}^*)\bigr)}{D^{N}}\Biggr)\\
&=\frac{1}{2nN}\Biggl(\sum_{k_1\in\{\lceil\frac n2\rceil,\dots,n-1\}\setminus\{\frac n2\}}\biggl(\ln\frac{\det\bigl(E(y_{k_1}y_{k_1}^*)\bigr)}{D^{N}}+\ln\frac{\det\bigl(E(y_{n-k_1}y_{n-k_1}^*)\bigr)}{D^{N}}\biggr)+\sum_{k_2\in\{\frac n2,n\}\cap\mathbb{N}}\ln\frac{\det\bigl(E(y_{k_2}y_{k_2}^*)\bigr)}{D^{N}}\Biggr)\\
&=\frac{1}{2nN}\sum_{k=1}^{n}\ln\frac{\det\bigl(E(y_ky_k^*)\bigr)}{D^{N}}
=\frac{1}{2nN}\ln\frac{\prod_{k=1}^{n}\det\bigl(E(y_ky_k^*)\bigr)}{D^{nN}}.
\end{aligned}
\]
As
\[
\begin{aligned}
\bigl\{\lambda_j\bigl(E(y_ky_k^*)\bigr):\ j\in\{1,\dots,N\},\ k\in\{1,\dots,n\}\bigr\}
&=\bigl\{\lambda_j\bigl(\bigl[E(y_{n:1}y_{n:1}^*)\bigr]_{k,k}\bigr):\ j\in\{1,\dots,N\},\ k\in\{1,\dots,n\}\bigr\}\\
&=\bigl\{\lambda_j\bigl(\bigl[(V_n\otimes I_N)^*E(x_{n:1}x_{n:1}^\top)(V_n\otimes I_N)\bigr]_{k,k}\bigr):\ j\in\{1,\dots,N\},\ k\in\{1,\dots,n\}\bigr\}\\
&=\Bigl\{\lambda_j\Bigl(\operatorname{diag}_{1\le k\le n}\bigl(\bigl[(V_n\otimes I_N)^*E(x_{n:1}x_{n:1}^\top)(V_n\otimes I_N)\bigr]_{k,k}\bigr)\Bigr):\ j\in\{1,\dots,nN\}\Bigr\}\\
&=\Bigl\{\lambda_j\Bigl((V_n\otimes I_N)\operatorname{diag}_{1\le k\le n}\bigl(\bigl[(V_n\otimes I_N)^*E(x_{n:1}x_{n:1}^\top)(V_n\otimes I_N)\bigr]_{k,k}\bigr)(V_n\otimes I_N)^{-1}\Bigr):\ j\in\{1,\dots,nN\}\Bigr\}\\
&=\bigl\{\lambda_j\bigl(C_{E(x_{n:1}x_{n:1}^\top)}\bigr):\ j\in\{1,\dots,nN\}\bigr\},\qquad(11)
\end{aligned}
\]
we obtain
\[
\prod_{k=1}^{n}\det\bigl(E(y_ky_k^*)\bigr)=\prod_{k=1}^{n}\prod_{j=1}^{N}\lambda_j\bigl(E(y_ky_k^*)\bigr)=\prod_{j=1}^{nN}\lambda_j\bigl(C_{E(x_{n:1}x_{n:1}^\top)}\bigr)=\det\bigl(C_{E(x_{n:1}x_{n:1}^\top)}\bigr).
\]
Step 3: We show Equation (8). To shorten notation, in this step we write $E$ for $E(x_{n:1}x_{n:1}^\top)$ and $C_E$ for $C_{E(x_{n:1}x_{n:1}^\top)}$.
As $E$ is a positive definite matrix (or equivalently, $E$ is Hermitian and $\lambda_j(E)>0$ for all $j\in\{1,\dots,nN\}$), $(V_n\otimes I_N)^*E(V_n\otimes I_N)$ is Hermitian. Hence, $[(V_n\otimes I_N)^*E(V_n\otimes I_N)]_{k,k}$ is Hermitian for all $k\in\{1,\dots,n\}$, and therefore, $\operatorname{diag}_{1\le k\le n}([(V_n\otimes I_N)^*E(V_n\otimes I_N)]_{k,k})$ is also Hermitian. Consequently, $(V_n\otimes I_N)\operatorname{diag}_{1\le k\le n}([(V_n\otimes I_N)^*E(V_n\otimes I_N)]_{k,k})(V_n\otimes I_N)^*$ is Hermitian, and applying Equations (3) and (11), we have that $C_E$ is a positive definite matrix.
Let $E=U\operatorname{diag}_{1\le j\le nN}(\lambda_j(E))U^{-1}$ be an eigenvalue decomposition (EVD) of $E$, where $U$ is unitary. Thus, $\sqrt E=U\operatorname{diag}_{1\le j\le nN}\bigl(\sqrt{\lambda_j(E)}\bigr)U^*$ and $\bigl(\sqrt E\bigr)^{-1}=U\operatorname{diag}_{1\le j\le nN}\Bigl(\tfrac{1}{\sqrt{\lambda_j(E)}}\Bigr)U^*$.
Since $\bigl(\sqrt E\bigr)^{-1}$ is Hermitian and $C_E$ is a positive definite matrix, $\bigl(\sqrt E\bigr)^{-1}C_E\bigl(\sqrt E\bigr)^{-1}$ is also a positive definite matrix.
From Equation (5), we have
\[
R_{x_{n:1}}(D)=\frac{1}{2nN}\ln\frac{\det(E)}{D^{nN}},\qquad(12)
\]
and applying the arithmetic mean-geometric mean inequality yields
\[
\begin{aligned}
0&\le\frac{1}{2nN}\ln\frac{\det(C_E)}{D^{nN}}-R_{x_{n:1}}(D)
=\frac{1}{2nN}\ln\frac{\det(C_E)}{\det(E)}
=\frac{1}{2nN}\ln\frac{\det(C_E)}{\det\bigl(\sqrt E\bigr)\det\bigl(\sqrt E\bigr)}\\
&=\frac{1}{2nN}\ln\Bigl(\det\bigl((\sqrt E)^{-1}\bigr)\det(C_E)\det\bigl((\sqrt E)^{-1}\bigr)\Bigr)
=\frac{1}{2nN}\ln\det\Bigl((\sqrt E)^{-1}C_E(\sqrt E)^{-1}\Bigr)
=\frac{1}{2nN}\ln\prod_{j=1}^{nN}\lambda_j\Bigl((\sqrt E)^{-1}C_E(\sqrt E)^{-1}\Bigr)\\
&\le\frac{1}{2nN}\ln\Biggl(\frac{1}{nN}\sum_{j=1}^{nN}\lambda_j\Bigl((\sqrt E)^{-1}C_E(\sqrt E)^{-1}\Bigr)\Biggr)^{nN}
=\frac12\ln\Bigl(\frac{1}{nN}\operatorname{tr}\bigl((\sqrt E)^{-1}C_E(\sqrt E)^{-1}\bigr)\Bigr)\\
&=\frac12\ln\Bigl(\frac{1}{nN}\operatorname{tr}\bigl(C_E(\sqrt E)^{-1}(\sqrt E)^{-1}\bigr)\Bigr)
=\frac12\ln\Bigl(\frac{1}{nN}\operatorname{tr}\bigl(C_E\,E^{-1}\bigr)\Bigr)
\le\frac12\ln\Bigl(\frac{\sqrt{nN}}{nN}\bigl\|C_E\,E^{-1}\bigr\|_F\Bigr)\\
&=\frac12\ln\Bigl(\frac{1}{\sqrt{nN}}\bigl\|(C_E-E)E^{-1}+I_{nN}\bigr\|_F\Bigr)
\le\frac12\ln\Bigl(\frac{1}{\sqrt{nN}}\bigl(\bigl\|(C_E-E)E^{-1}\bigr\|_F+\sqrt{nN}\bigr)\Bigr)\\
&\le\frac12\ln\Bigl(\frac{1}{\sqrt{nN}}\bigl(\|C_E-E\|_F\,\|E^{-1}\|_2+\sqrt{nN}\bigr)\Bigr)
=\frac12\ln\Biggl(1+\frac{\|E-C_E\|_F}{\sqrt{nN}\,\lambda_{nN}(E)}\Biggr).
\end{aligned}
\]
 □
In Equation (12), $R_{x_{n:1}}(D)$ is written in terms of $E(x_{n:1}x_{n:1}^\top)$. $\widetilde R_{x_{n:1}}(D)$ can be written in terms of $E(x_{n:1}x_{n:1}^\top)$ and $V_n$ by using Lemma 2 and Equations (9) and (10), as the sketch below illustrates.
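The following sketch (our own code, assuming NumPy; not from the paper) computes $\widetilde R_{x_{n:1}}(D)$ from $E(x_{n:1}x_{n:1}^\top)$ in exactly this way: it obtains $E(y_ky_k^*)$ and $E(y_ky_k^\top)$ via Lemma 2, forms $E(\hat y_k\hat y_k^{\,\top})$, and applies Equations (9) and (10).

```python
import numpy as np

def rate_tilde(Rxx, n, N, D):
    """R~_{x_{n:1}}(D) of Theorem 3, computed from Rxx = E(x_{n:1} x_{n:1}^T) and V_n only."""
    V = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
    W = np.kron(V, np.eye(N))
    B = W.conj().T @ Rxx @ W            # E(y_{n:1} y_{n:1}^*)   (Lemma 2, item 1)
    T = W.conj().T @ Rxx @ W.conj()     # E(y_{n:1} y_{n:1}^T)   (Lemma 2, item 2)

    def block(M, k):                    # block (n-k+1, n-k+1) carries the statistics of y_k
        sl = slice((n - k) * N, (n - k + 1) * N)
        return M[sl, sl]

    def nats(C, d):                     # total nats for a real Gaussian vector, Equation (5)
        return 0.5 * np.log(np.linalg.det(C).real / d ** C.shape[0])

    total = 0.0
    for k in ([n // 2] if n % 2 == 0 else []) + [n]:        # real blocks, Equation (9)
        total += nats(block(B, k).real, D)
    for k in range((n + 1) // 2, n):                         # complex blocks via hat{y}_k, Equation (10)
        if 2 * k == n:
            continue
        Byy, Tyy = block(B, k), block(T, k)
        Chat = 0.5 * np.block([[(Byy + Tyy).real, (Tyy - Byy).imag],
                               [(Byy + Tyy).imag, (Byy - Tyy).real]])  # Lemma 2, item 3
        total += nats(Chat, D / 2)
    return total / (n * N)

if __name__ == "__main__":
    n, N, D = 16, 2, 0.01
    M = np.random.randn(n * N, n * N)
    Rxx = M @ M.T + np.eye(n * N)                            # some positive definite correlation matrix
    R = 0.5 * np.log(np.linalg.det(Rxx) / D ** (n * N)) / (n * N)   # Equation (12)
    print(R, rate_tilde(Rxx, n, N, D))                       # Equation (7): R <= R~
```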
As our coding strategy requires the computation of the block DFT, its computational complexity is $O(nN\log n)$ whenever the FFT algorithm is used. We recall that the computational complexity of the optimal coding strategy for $x_{n:1}$ is $O(n^2N^2)$, since it requires the computation of $U_n^\top x_{n:1}$, where $U_n$ is a real orthogonal eigenvector matrix of $E(x_{n:1}x_{n:1}^\top)$. Observe that such an eigenvector matrix $U_n$ also needs to be computed, which further increases the complexity. Hence, the main advantage of our coding strategy is that it notably reduces the computational complexity of coding $x_{n:1}$. Moreover, our coding strategy does not require knowledge of $E(x_{n:1}x_{n:1}^\top)$; it only requires knowledge of $E(\hat y_k\hat y_k^{\,\top})$ with $k\in\{\lceil\frac n2\rceil,\dots,n\}$.
It should be mentioned that Equation (7) provides two upper bounds for the RDF of finite-length data blocks of a real zero-mean Gaussian N-dimensional vector source { x k } . The greatest upper bound in Equation (7) was given in [11] for the case in which the random vector source { x k } is WSS, and therefore, the correlation matrix of the Gaussian vector, E x n : 1 x n : 1 , is a block Toeplitz matrix. Such upper bound was first presented by Pearl in [12] for the case in which the source is WSS and N = 1 . However, neither [11] nor [12] propose a coding strategy for { x k } .

4. Optimality of the Proposed Coding Strategy for Gaussian AWSS Vector Sources

In this section (see Theorem 4), we show that our coding strategy is asymptotically optimal, i.e., we show that for large enough data blocks of a Gaussian AWSS vector source { x k } , the rate of our coding strategy, presented in Section 3, tends to the RDF of the source.
We begin by introducing some notation. If $X:\mathbb{R}\to\mathbb{C}^{N\times N}$ is a continuous and $2\pi$-periodic matrix-valued function of a real variable, we denote by $T_n(X)$ the $n\times n$ block Toeplitz matrix with $N\times N$ blocks given by
\[
T_n(X)=(X_{j-k})_{j,k=1}^{n},
\]
where $\{X_k\}_{k\in\mathbb{Z}}$ is the sequence of Fourier coefficients of $X$:
\[
X_k=\frac{1}{2\pi}\int_{0}^{2\pi}e^{-k\omega i}X(\omega)\,d\omega,\qquad k\in\mathbb{Z}.
\]
If $A_n$ and $B_n$ are $nN\times nN$ matrices for all $n\in\mathbb{N}$, we write $\{A_n\}\sim\{B_n\}$ when the sequences $\{A_n\}$ and $\{B_n\}$ are asymptotically equivalent (see ([13] (p. 5673))), that is, $\{\|A_n\|_2\}$ and $\{\|B_n\|_2\}$ are bounded and
\[
\lim_{n\to\infty}\frac{\|A_n-B_n\|_F}{\sqrt n}=0.
\]
The original definition of asymptotically equivalent sequences of matrices was given by Gray (see ([2] (Section 2.3)) or [4]) for N = 1 .
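As an illustration (our own code, assuming NumPy), the sketch below assembles $T_n(X)$ from a finite set of Fourier coefficients $X_k$, which is all that is needed for the WSS, VMA, VAR, and VARMA examples of Sections 5 and 6.

```python
import numpy as np

def block_toeplitz(coeffs, n, N):
    """Build T_n(X) = (X_{j-k})_{j,k=1..n} from a dict {k: X_k} of N x N Fourier coefficients.

    Coefficients not present in the dict are taken to be the N x N zero matrix.
    """
    T = np.zeros((n * N, n * N), dtype=complex)
    for j in range(n):
        for k in range(n):
            Xjk = coeffs.get(j - k)
            if Xjk is not None:
                T[j * N:(j + 1) * N, k * N:(k + 1) * N] = Xjk
    return T

# Example: a scalar (N = 1) symbol with X_0 = 2 and X_1 = X_{-1} = 1 gives a tridiagonal Toeplitz matrix.
coeffs = {0: np.array([[2.0]]), 1: np.array([[1.0]]), -1: np.array([[1.0]])}
print(block_toeplitz(coeffs, 4, 1).real)
```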
We now review the definition of the AWSS vector process given in ([1] (Definition 7.1)). This definition was first introduced for the scalar case N = 1 (see ([2] (Section 6)) or [3]).
Definition 1.
Let $X:\mathbb{R}\to\mathbb{C}^{N\times N}$, and suppose that it is continuous and $2\pi$-periodic. A random $N$-dimensional vector process $\{x_k\}$ is said to be AWSS with asymptotic power spectral density (APSD) $X$ if it has constant mean (i.e., $E(x_{k_1})=E(x_{k_2})$ for all $k_1,k_2\in\mathbb{N}$) and $\{E(x_{n:1}x_{n:1}^*)\}\sim\{T_n(X)\}$.
We recall that the RDF of $\{x_k\}$ is defined as $\lim_{n\to\infty}R_{x_{n:1}}(D)$.
Theorem 4.
Let $\{x_k\}$ be a real zero-mean Gaussian AWSS $N$-dimensional vector process with APSD $X$. Suppose that $\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))>0$. If $D\in\bigl(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))\bigr]$, then
\[
\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde R_{x_{n:1}}(D)=\frac{1}{4\pi N}\int_{0}^{2\pi}\ln\frac{\det(X(\omega))}{D^{N}}\,d\omega.\qquad(13)
\]
Proof. 
We divide the proof into two steps:
Step 1: We show that $\lim_{n\to\infty}R_{x_{n:1}}(D)=\frac{1}{4\pi N}\int_{0}^{2\pi}\ln\frac{\det(X(\omega))}{D^{N}}\,d\omega$. Applying Equation (12), ([1] (Theorem 6.6)), and ([14] (Proposition 2)) yields
\[
\begin{aligned}
\lim_{n\to\infty}R_{x_{n:1}}(D)&=\lim_{n\to\infty}\frac{1}{2nN}\ln\frac{\prod_{k=1}^{nN}\lambda_k(E(x_{n:1}x_{n:1}^\top))}{D^{nN}}
=\lim_{n\to\infty}\frac{1}{2nN}\sum_{k=1}^{nN}\ln\frac{\lambda_k(E(x_{n:1}x_{n:1}^\top))}{D}\\
&=\frac{1}{4\pi}\int_{0}^{2\pi}\frac1N\sum_{k=1}^{N}\ln\frac{\lambda_k(X(\omega))}{D}\,d\omega
=\frac{1}{4\pi N}\int_{0}^{2\pi}\ln\frac{\det(X(\omega))}{D^{N}}\,d\omega.
\end{aligned}
\]
Step 2: We prove that $\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde R_{x_{n:1}}(D)$. Applying Equations (7) and (8), we obtain
\[
\begin{aligned}
0&\le\widetilde R_{x_{n:1}}(D)-R_{x_{n:1}}(D)\le\frac{1}{2nN}\ln\frac{\det\bigl(C_{E(x_{n:1}x_{n:1}^\top)}\bigr)}{D^{nN}}-R_{x_{n:1}}(D)
\le\frac12\ln\Biggl(1+\frac{\bigl\|E(x_{n:1}x_{n:1}^\top)-C_{E(x_{n:1}x_{n:1}^\top)}\bigr\|_F}{\sqrt{nN}\,\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))}\Biggr)\\
&\le\frac12\ln\Biggl(1+\frac{\bigl\|E(x_{n:1}x_{n:1}^\top)-C_{E(x_{n:1}x_{n:1}^\top)}\bigr\|_F}{\sqrt{nN}\,\inf_{m\in\mathbb{N}}\lambda_{mN}(E(x_{m:1}x_{m:1}^\top))}\Biggr),\qquad n\in\mathbb{N}.
\end{aligned}
\]
To finish the proof, we only need to show that
\[
\lim_{n\to\infty}\frac{\bigl\|E(x_{n:1}x_{n:1}^\top)-C_{E(x_{n:1}x_{n:1}^\top)}\bigr\|_F}{\sqrt n}=0.\qquad(15)
\]
Let $C_n(X)$ be the $n\times n$ block circulant matrix with $N\times N$ blocks defined in ([13] (p. 5674)), i.e.,
\[
C_n(X)=(V_n\otimes I_N)\operatorname{diag}_{1\le k\le n}\Bigl(X\Bigl(\frac{2\pi(k-1)}{n}\Bigr)\Bigr)(V_n\otimes I_N)^*,\qquad n\in\mathbb{N}.
\]
Observe that
\[
\begin{aligned}
C_{C_n(X)}&=(V_n\otimes I_N)\operatorname{diag}_{1\le k\le n}\bigl(\bigl[(V_n\otimes I_N)^*C_n(X)(V_n\otimes I_N)\bigr]_{k,k}\bigr)(V_n\otimes I_N)^*\\
&=(V_n\otimes I_N)\operatorname{diag}_{1\le k\le n}\Bigl(\Bigl[\operatorname{diag}_{1\le j\le n}\Bigl(X\Bigl(\frac{2\pi(j-1)}{n}\Bigr)\Bigr)\Bigr]_{k,k}\Bigr)(V_n\otimes I_N)^*\\
&=(V_n\otimes I_N)\operatorname{diag}_{1\le k\le n}\Bigl(X\Bigl(\frac{2\pi(k-1)}{n}\Bigr)\Bigr)(V_n\otimes I_N)^*=C_n(X),\qquad n\in\mathbb{N}.
\end{aligned}
\]
Consequently, as the Frobenius norm is unitarily invariant, we have
\[
\begin{aligned}
\bigl\|C_n(X)-C_{E(x_{n:1}x_{n:1}^\top)}\bigr\|_F&=\bigl\|C_{C_n(X)}-C_{E(x_{n:1}x_{n:1}^\top)}\bigr\|_F\\
&=\Bigl\|(V_n\otimes I_N)\operatorname{diag}_{1\le k\le n}\bigl(\bigl[(V_n\otimes I_N)^*\bigl(C_n(X)-E(x_{n:1}x_{n:1}^\top)\bigr)(V_n\otimes I_N)\bigr]_{k,k}\bigr)(V_n\otimes I_N)^*\Bigr\|_F\\
&=\Bigl\|\operatorname{diag}_{1\le k\le n}\bigl(\bigl[(V_n\otimes I_N)^*\bigl(C_n(X)-E(x_{n:1}x_{n:1}^\top)\bigr)(V_n\otimes I_N)\bigr]_{k,k}\bigr)\Bigr\|_F\\
&\le\bigl\|(V_n\otimes I_N)^*\bigl(C_n(X)-E(x_{n:1}x_{n:1}^\top)\bigr)(V_n\otimes I_N)\bigr\|_F
=\bigl\|C_n(X)-E(x_{n:1}x_{n:1}^\top)\bigr\|_F,\qquad n\in\mathbb{N}.
\end{aligned}
\]
Therefore,
\[
\begin{aligned}
0&\le\frac{\bigl\|E(x_{n:1}x_{n:1}^\top)-C_{E(x_{n:1}x_{n:1}^\top)}\bigr\|_F}{\sqrt n}
\le\frac{\bigl\|E(x_{n:1}x_{n:1}^\top)-C_n(X)\bigr\|_F}{\sqrt n}+\frac{\bigl\|C_n(X)-C_{E(x_{n:1}x_{n:1}^\top)}\bigr\|_F}{\sqrt n}\\
&\le\frac{2\bigl\|E(x_{n:1}x_{n:1}^\top)-C_n(X)\bigr\|_F}{\sqrt n}
\le2\Biggl(\frac{\bigl\|E(x_{n:1}x_{n:1}^\top)-T_n(X)\bigr\|_F}{\sqrt n}+\frac{\bigl\|T_n(X)-C_n(X)\bigr\|_F}{\sqrt n}\Biggr),\qquad n\in\mathbb{N}.\qquad(16)
\end{aligned}
\]
Since $\{E(x_{n:1}x_{n:1}^\top)\}\sim\{T_n(X)\}$, Equation (16) and ([1] (Lemma 6.1)) yield Equation (15). □
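A quick numerical check of this last step (our own code, assuming NumPy): build $C_n(X)$ by sampling the symbol on the grid $\frac{2\pi(k-1)}{n}$ and observe that $\|T_n(X)-C_n(X)\|_F/\sqrt n$ shrinks as $n$ grows, as ([1] (Lemma 6.1)) guarantees.

```python
import numpy as np

def symbol(w, coeffs):
    """X(w) = sum_k X_k e^{i k w} for a finite dict of Fourier coefficients."""
    return sum(Xk * np.exp(1j * k * w) for k, Xk in coeffs.items())

def block_circulant(coeffs, n, N):
    """C_n(X) = (V_n kron I_N) diag_k(X(2*pi*(k-1)/n)) (V_n kron I_N)^*."""
    V = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
    W = np.kron(V, np.eye(N))
    D = np.zeros((n * N, n * N), dtype=complex)
    for k in range(n):
        D[k * N:(k + 1) * N, k * N:(k + 1) * N] = symbol(2 * np.pi * k / n, coeffs)
    return W @ D @ W.conj().T

def block_toeplitz(coeffs, n, N):
    T = np.zeros((n * N, n * N), dtype=complex)
    for j in range(n):
        for k in range(n):
            if (j - k) in coeffs:
                T[j * N:(j + 1) * N, k * N:(k + 1) * N] = coeffs[j - k]
    return T

N = 2
coeffs = {0: np.array([[2.0, 0.5], [0.5, 2.0]]),
          1: np.array([[0.3, 0.1], [0.2, 0.3]])}
coeffs[-1] = coeffs[1].conj().T                       # Hermitian symbol
for n in (8, 32, 128):
    diff = block_toeplitz(coeffs, n, N) - block_circulant(coeffs, n, N)
    print(n, np.linalg.norm(diff, "fro") / np.sqrt(n))
```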
Observe that the integral formula in Equation (13) provides the value of the RDF of the Gaussian AWSS vector source whenever $D\in\bigl(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))\bigr]$. An integral formula of such an RDF for any $D>0$ can be found in ([15] (Theorem 1)). It should be mentioned that ([15] (Theorem 1)) generalized the integral formulas previously given in the literature for the RDF of certain Gaussian AWSS sources, namely, WSS scalar sources [9], AR AWSS scalar sources [16], and AR AWSS vector sources of finite order [17].

5. Relevant AWSS Vector Sources

WSS, MA, AR, and ARMA vector processes are frequently used to model multivariate time series (see, e.g., [18]) that arise in any domain that involves temporal measurements. In this section, we show that our coding strategy is appropriate to encode the aforementioned vector sources whenever they are Gaussian and AWSS.
It should be mentioned that Gaussian AWSS MA vector (VMA) processes, Gaussian AWSS AR vector (VAR) processes, and Gaussian AWSS ARMA vector (VARMA) processes are frequently called Gaussian stationary VMA processes, Gaussian stationary VAR processes, and Gaussian stationary VARMA processes, respectively (see, e.g., [18]). However, they are asymptotically stationary but not stationary, because their corresponding correlation matrices are not block Toeplitz.

5.1. WSS Vector Sources

In this subsection (see Theorem 5), we give conditions under which our coding strategy is asymptotically optimal for WSS vector sources.
We first recall the well-known concept of WSS vector process.
Definition 2.
Let $X:\mathbb{R}\to\mathbb{C}^{N\times N}$, and suppose that it is continuous and $2\pi$-periodic. A random $N$-dimensional vector process $\{x_k\}$ is said to be WSS (or weakly stationary) with PSD $X$ if it has constant mean and $\{E(x_{n:1}x_{n:1}^*)\}=\{T_n(X)\}$.
Theorem 5.
Let $\{x_k\}$ be a real zero-mean Gaussian WSS $N$-dimensional vector process with PSD $X$. Suppose that $\min_{\omega\in[0,2\pi]}\lambda_N(X(\omega))>0$ (or equivalently, $\det(X(\omega))\ne0$ for all $\omega\in\mathbb{R}$). If $D\in\bigl(0,\min_{\omega\in[0,2\pi]}\lambda_N(X(\omega))\bigr]$, then
\[
\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde R_{x_{n:1}}(D)=\frac{1}{4\pi N}\int_{0}^{2\pi}\ln\frac{\det(X(\omega))}{D^{N}}\,d\omega.
\]
Proof. 
Applying ([1] (Lemma 3.3)) and ([1] (Theorem 4.3)) yields $\{E(x_{n:1}x_{n:1}^\top)\}=\{T_n(X)\}\sim\{T_n(X)\}$. Theorem 5 now follows from ([14] (Proposition 3)) and Theorem 4. □
Theorem 5 was presented in [5] for the case N = 1 (i.e., just for WSS sources but not for vector WSS sources).

5.2. VMA Sources

In this subsection (see Theorem 6), we give conditions under which our coding strategy is asymptotically optimal for VMA sources.
We start by reviewing the concept of VMA process.
Definition 3.
A real zero-mean random $N$-dimensional vector process $\{x_k\}$ is said to be MA if
\[
x_k=w_k+\sum_{j=1}^{k-1}G_jw_{k-j},\qquad k\in\mathbb{N},
\]
where $G_j$, $j\in\mathbb{N}$, are real $N\times N$ matrices, $\{w_k\}$ is a real zero-mean random $N$-dimensional vector process, and $E(w_{k_1}w_{k_2}^\top)=\delta_{k_1,k_2}\Lambda$ for all $k_1,k_2\in\mathbb{N}$ with $\Lambda$ being a real $N\times N$ positive definite matrix. If there exists $q\in\mathbb{N}$ such that $G_j=0_{N\times N}$ for all $j>q$, then $\{x_k\}$ is called a VMA(q) process.
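For intuition, here is a minimal simulation of a VMA(q) process under this definition (our own code, assuming NumPy; the particular coefficient matrices are illustrative values, not taken from the paper):

```python
import numpy as np

def simulate_vma(G, Lam, n, rng):
    """Generate x_1, ..., x_n of a VMA(q) process: x_k = w_k + sum_{j=1}^{min(k-1,q)} G_j w_{k-j}."""
    N = Lam.shape[0]
    L = np.linalg.cholesky(Lam)                    # so that w_k = L u_k has E(w_k w_k^T) = Lam
    w = [L @ rng.standard_normal(N) for _ in range(n)]
    x = []
    for k in range(1, n + 1):
        xk = w[k - 1].copy()
        for j, Gj in enumerate(G, start=1):        # G = [G_1, ..., G_q]
            if k - j >= 1:
                xk += Gj @ w[k - j - 1]
        x.append(xk)
    return x

rng = np.random.default_rng(0)
G1 = np.array([[0.8, 0.7], [0.4, 0.6]])            # illustrative VMA(1) coefficient
Lam = np.array([[4.0, 1.0], [1.0, 2.0]])
x = simulate_vma([G1], Lam, n=5, rng=rng)
print(x[0], x[4])
```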
Theorem 6.
Let $\{x_k\}$ be as in Definition 3. Assume that $\{G_k\}_{k=-\infty}^{\infty}$, with $G_0=I_N$ and $G_{-k}=0_{N\times N}$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $G:\mathbb{R}\to\mathbb{C}^{N\times N}$, which is continuous and $2\pi$-periodic. Suppose that $\{T_n(G)\}$ is stable (that is, $\{\|(T_n(G))^{-1}\|_2\}$ is bounded). If $\{x_k\}$ is Gaussian and $D\in\bigl(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))\bigr]$, then
\[
\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}.\qquad(17)
\]
Moreover, $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$.
Proof. 
We divide the proof into three steps:
Step 1: We show that $\det(E(x_{n:1}x_{n:1}^\top))=(\det(\Lambda))^n$ for all $n\in\mathbb{N}$. From ([15] (Equation (A3))) we have that $E(x_{n:1}x_{n:1}^\top)=T_n(G)T_n(\Lambda)(T_n(G))^*$. Consequently,
\[
\det\bigl(E(x_{n:1}x_{n:1}^\top)\bigr)=\det(T_n(G))\det(T_n(\Lambda))\overline{\det(T_n(G))}=|\det(T_n(G))|^2(\det(\Lambda))^n=(\det(\Lambda))^n,\qquad n\in\mathbb{N}.
\]
Step 2: We prove the first equality in Equation (17). Applying ([15] (Theorem 2)), we obtain that $\{x_k\}$ is AWSS. From Theorem 4, we only need to show that $\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))>0$. We have
\[
\begin{aligned}
\lambda_{nN}\bigl(E(x_{n:1}x_{n:1}^\top)\bigr)&=\frac{1}{\lambda_1\bigl((E(x_{n:1}x_{n:1}^\top))^{-1}\bigr)}=\frac{1}{\bigl\|(E(x_{n:1}x_{n:1}^\top))^{-1}\bigr\|_2}=\frac{1}{\bigl\|(T_n(G)T_n(\Lambda)(T_n(G))^*)^{-1}\bigr\|_2}
=\frac{1}{\bigl\|((T_n(G))^{-1})^*T_n(\Lambda^{-1})(T_n(G))^{-1}\bigr\|_2}\\
&\ge\frac{1}{\bigl\|((T_n(G))^{-1})^*\bigr\|_2\,\bigl\|T_n(\Lambda^{-1})\bigr\|_2\,\bigl\|(T_n(G))^{-1}\bigr\|_2}
=\frac{1}{\bigl\|(T_n(G))^{-1}\bigr\|_2^2\,\lambda_1(\Lambda^{-1})}
=\frac{\lambda_N(\Lambda)}{\bigl\|(T_n(G))^{-1}\bigr\|_2^2}
\ge\frac{\lambda_N(\Lambda)}{\sup_{m\in\mathbb{N}}\bigl\|(T_m(G))^{-1}\bigr\|_2^2}>0,\qquad n\in\mathbb{N}.
\end{aligned}
\]
Step 3: We show that $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$. Applying Equation (12) yields
\[
R_{x_{n:1}}(D)=\frac{1}{2nN}\ln\frac{(\det(\Lambda))^n}{D^{nN}}=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}},\qquad n\in\mathbb{N}.
\]
 □

5.3. VAR AWSS Sources

In this subsection (see Theorem 7), we give conditions under which our coding strategy is asymptotically optimal for VAR sources.
We first recall the concept of VAR process.
Definition 4.
A real zero-mean random $N$-dimensional vector process $\{x_k\}$ is said to be AR if
\[
x_k=w_k-\sum_{j=1}^{k-1}F_jx_{k-j},\qquad k\in\mathbb{N},
\]
where $F_j$, $j\in\mathbb{N}$, are real $N\times N$ matrices, $\{w_k\}$ is a real zero-mean random $N$-dimensional vector process, and $E(w_{k_1}w_{k_2}^\top)=\delta_{k_1,k_2}\Lambda$ for all $k_1,k_2\in\mathbb{N}$ with $\Lambda$ being a real $N\times N$ positive definite matrix. If there exists $p\in\mathbb{N}$ such that $F_j=0_{N\times N}$ for all $j>p$, then $\{x_k\}$ is called a VAR(p) process.
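A matching simulation sketch for a VAR(p) process under this definition (our own code, assuming NumPy; illustrative parameter values):

```python
import numpy as np

def simulate_var(F, Lam, n, rng):
    """Generate x_1, ..., x_n of a VAR(p) process: x_k = w_k - sum_{j=1}^{min(k-1,p)} F_j x_{k-j}."""
    N = Lam.shape[0]
    L = np.linalg.cholesky(Lam)
    x = []
    for k in range(1, n + 1):
        xk = L @ rng.standard_normal(N)            # w_k
        for j, Fj in enumerate(F, start=1):        # F = [F_1, ..., F_p]
            if k - j >= 1:
                xk -= Fj @ x[k - j - 1]
        x.append(xk)
    return x

rng = np.random.default_rng(0)
F1 = np.array([[0.5, 0.1], [0.2, 0.4]])            # illustrative VAR(1) coefficient
Lam = np.array([[4.0, 1.0], [1.0, 2.0]])
print(simulate_var([F1], Lam, n=5, rng=rng)[-1])
```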
Theorem 7.
Let $\{x_k\}$ be as in Definition 4. Assume that $\{F_k\}_{k=-\infty}^{\infty}$, with $F_0=I_N$ and $F_{-k}=0_{N\times N}$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $F:\mathbb{R}\to\mathbb{C}^{N\times N}$, which is continuous and $2\pi$-periodic. Suppose that $\{T_n(F)\}$ is stable and $\det(F(\omega))\ne0$ for all $\omega\in\mathbb{R}$. If $\{x_k\}$ is Gaussian and $D\in\bigl(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))\bigr]$, then
\[
\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}.\qquad(18)
\]
Moreover, $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$.
Proof. 
We divide the proof into three steps:
Step 1: We show that $\det(E(x_{n:1}x_{n:1}^\top))=(\det(\Lambda))^n$ for all $n\in\mathbb{N}$. From ([19] (Equation (19))), we have that $E(x_{n:1}x_{n:1}^\top)=(T_n(F))^{-1}T_n(\Lambda)((T_n(F))^*)^{-1}$. Consequently,
\[
\det\bigl(E(x_{n:1}x_{n:1}^\top)\bigr)=\frac{\det(T_n(\Lambda))}{\det(T_n(F))\det((T_n(F))^*)}=\frac{(\det(\Lambda))^n}{|\det(T_n(F))|^2}=(\det(\Lambda))^n,\qquad n\in\mathbb{N}.
\]
Step 2: We prove the first equality in Equation (18). Applying ([15] (Theorem 3)), we obtain that $\{x_k\}$ is AWSS. From Theorem 4, we only need to show that $\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))>0$. Applying ([1] (Theorem 4.3)) yields
\[
\lambda_{nN}\bigl(E(x_{n:1}x_{n:1}^\top)\bigr)=\frac{1}{\bigl\|(E(x_{n:1}x_{n:1}^\top))^{-1}\bigr\|_2}=\frac{1}{\bigl\|\bigl((T_n(F))^{-1}T_n(\Lambda)((T_n(F))^*)^{-1}\bigr)^{-1}\bigr\|_2}\ge\frac{\lambda_N(\Lambda)}{\|T_n(F)\|_2^2}\ge\frac{\lambda_N(\Lambda)}{\sup_{m\in\mathbb{N}}\|T_m(F)\|_2^2}>0,\qquad n\in\mathbb{N}.
\]
Step 3: We show that $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$. This can be directly obtained from Equation (12). □
Theorem 7 was presented in [6] for the case of N = 1 (i.e., just for AR sources but not for VAR sources).

5.4. VARMA AWSS Sources

In this subsection (see Theorem 8), we give conditions under which our coding strategy is asymptotically optimal for VARMA sources.
We start by reviewing the concept of VARMA process.
Definition 5.
A real zero-mean random $N$-dimensional vector process $\{x_k\}$ is said to be ARMA if
\[
x_k=w_k+\sum_{j=1}^{k-1}G_jw_{k-j}-\sum_{j=1}^{k-1}F_jx_{k-j},\qquad k\in\mathbb{N},
\]
where $G_j$ and $F_j$, $j\in\mathbb{N}$, are real $N\times N$ matrices, $\{w_k\}$ is a real zero-mean random $N$-dimensional vector process, and $E(w_{k_1}w_{k_2}^\top)=\delta_{k_1,k_2}\Lambda$ for all $k_1,k_2\in\mathbb{N}$ with $\Lambda$ being a real $N\times N$ positive definite matrix. If there exist $p,q\in\mathbb{N}$ such that $F_j=0_{N\times N}$ for all $j>p$ and $G_j=0_{N\times N}$ for all $j>q$, then $\{x_k\}$ is called a VARMA(p,q) process (or a VARMA process of (finite) order (p,q)).
Theorem 8.
Let $\{x_k\}$ be as in Definition 5. Assume that $\{G_k\}_{k=-\infty}^{\infty}$, with $G_0=I_N$ and $G_{-k}=0_{N\times N}$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $G:\mathbb{R}\to\mathbb{C}^{N\times N}$ which is continuous and $2\pi$-periodic. Suppose that $\{F_k\}_{k=-\infty}^{\infty}$, with $F_0=I_N$ and $F_{-k}=0_{N\times N}$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $F:\mathbb{R}\to\mathbb{C}^{N\times N}$ which is continuous and $2\pi$-periodic. Assume that $\{T_n(G)\}$ and $\{T_n(F)\}$ are stable, and $\det(F(\omega))\ne0$ for all $\omega\in\mathbb{R}$. If $\{x_k\}$ is Gaussian and $D\in\bigl(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))\bigr]$, then
\[
\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}.\qquad(19)
\]
Moreover, $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$.
Proof. 
We divide the proof into three steps:
Step 1: We show that $\det(E(x_{n:1}x_{n:1}^\top))=(\det(\Lambda))^n$ for all $n\in\mathbb{N}$. From ([15] (Appendix D)) and ([1] (Lemma 4.2)), we have that $E(x_{n:1}x_{n:1}^\top)=(T_n(F))^{-1}T_n(G)T_n(\Lambda)(T_n(G))^*((T_n(F))^*)^{-1}$. Consequently,
\[
\det\bigl(E(x_{n:1}x_{n:1}^\top)\bigr)=\frac{|\det(T_n(G))|^2(\det(\Lambda))^n}{|\det(T_n(F))|^2}=(\det(\Lambda))^n,\qquad n\in\mathbb{N}.
\]
Step 2: We prove the first equality in Equation (19). Applying ([15] (Theorem 3)), we obtain that $\{x_k\}$ is AWSS. From Theorem 4, we only need to show that $\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^\top))>0$. Applying ([1] (Theorem 4.3)) yields
\[
\lambda_{nN}\bigl(E(x_{n:1}x_{n:1}^\top)\bigr)=\frac{1}{\bigl\|(E(x_{n:1}x_{n:1}^\top))^{-1}\bigr\|_2}=\frac{1}{\bigl\|\bigl((T_n(F))^{-1}T_n(G)T_n(\Lambda)(T_n(G))^*((T_n(F))^*)^{-1}\bigr)^{-1}\bigr\|_2}\ge\frac{\lambda_N(\Lambda)}{\|T_n(F)\|_2^2\,\bigl\|(T_n(G))^{-1}\bigr\|_2^2}\ge\frac{\lambda_N(\Lambda)}{\sup_{m\in\mathbb{N}}\|T_m(F)\|_2^2\,\sup_{m\in\mathbb{N}}\bigl\|(T_m(G))^{-1}\bigr\|_2^2}>0,\qquad n\in\mathbb{N}.
\]
Step 3: We show that $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$. This can be directly obtained from Equation (12). □

6. Numerical Examples

We first consider four AWSS vector processes, namely, the zero-mean WSS vector process in ([20] (Section 4)), the VMA(1) process in ([18] (Example 2.1)), the VAR(1) process in ([18] (Example 2.3)), and the VARMA(1,1) process in ([18] (Example 3.2)). In ([20] (Section 4)), $N=2$ and the Fourier coefficients of its PSD $X$ are
\[
X_0=\begin{pmatrix}2.0002 & 0.7058\\ 0.7058 & 2.0000\end{pmatrix},\quad
X_1=X_{-1}^{*}=\begin{pmatrix}0.3542 & 0.1016\\ 0.1839 & 0.2524\end{pmatrix},\quad
X_2=X_{-2}^{*}=\begin{pmatrix}0.0923 & 0.0153\\ 0.1490 & 0.0696\end{pmatrix},
\]
\[
X_3=X_{-3}^{*}=\begin{pmatrix}0.1443 & 0.0904\\ 0.0602 & 0.0704\end{pmatrix},\quad
X_4=X_{-4}^{*}=\begin{pmatrix}0.0516 & 0.0603\\ 0 & 0\end{pmatrix},
\]
and $X_j=0_{2\times 2}$ with $|j|>4$. In ([18] (Example 2.1)), $N=2$, $G_1$ is given by
\[
G_1=\begin{pmatrix}0.8 & 0.7\\ 0.4 & 0.6\end{pmatrix},\qquad(20)
\]
$G_j=0_{2\times 2}$ for all $j\in\mathbb{N}\setminus\{1\}$, and
\[
\Lambda=\begin{pmatrix}4 & 1\\ 1 & 2\end{pmatrix}.\qquad(21)
\]
In ([18] (Example 2.3)), $N=2$, $F_j=0_{2\times 2}$ for all $j\in\mathbb{N}\setminus\{1\}$, and $F_1$ and $\Lambda$ are given by Equations (20) and (21), respectively. In ([18] (Example 3.2)), $N=2$,
\[
G_1=\begin{pmatrix}0.6 & 0.3\\ 0.3 & 0.6\end{pmatrix},\qquad
F_1=\begin{pmatrix}1.2 & 0.5\\ 0.6 & 0.3\end{pmatrix},\qquad
\Lambda=\begin{pmatrix}1 & 0.5\\ 0.5 & 1.25\end{pmatrix},
\]
$G_j=0_{2\times 2}$ for all $j\in\mathbb{N}\setminus\{1\}$, and $F_j=0_{2\times 2}$ for all $j\in\mathbb{N}\setminus\{1\}$.
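For the three Reinsel examples, Theorems 6, 7 and 8 give the limit rate in closed form, so the limit values shown in the figures below can be computed directly (a small sketch, our own code, using the parameter values as printed above):

```python
import numpy as np

def limit_rate(Lam, D):
    """(1/(2N)) ln(det(Lambda) / D^N): the common limit in Theorems 6, 7, and 8 (in nats)."""
    N = Lam.shape[0]
    return 0.5 * np.log(np.linalg.det(Lam) / D ** N) / N

D = 0.001
Lam_ma_ar = np.array([[4.0, 1.0], [1.0, 2.0]])       # VMA(1) and VAR(1) examples
Lam_arma = np.array([[1.0, 0.5], [0.5, 1.25]])       # VARMA(1,1) example
print(limit_rate(Lam_ma_ar, D))                      # limit for Figures 3 and 4
print(limit_rate(Lam_arma, D))                       # limit for Figure 5
```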
Figure 2, Figure 3, Figure 4 and Figure 5 show $R_{x_{n:1}}(D)$ and $\widetilde R_{x_{n:1}}(D)$ with $n\le100$ and $D=0.001$ for the four vector processes considered, assuming that they are Gaussian. The figures show that the rate of our coding strategy tends to the RDF of the source.
We finish with a numerical example to explore how our method performs in the presence of a perturbation. Specifically, we consider a perturbed version of the WSS vector process in ([20] (Section 4)) (Figure 6). The correlation matrices of the perturbed process are
\[
T_n(X)+\begin{pmatrix}0_{(2n-2)\times(2n-2)} & 0_{(2n-2)\times 2}\\ 0_{2\times(2n-2)} & I_2\end{pmatrix},\qquad n\in\mathbb{N}.
\]

7. Conclusions

The computational complexity of coding finite-length data blocks of Gaussian N-dimensional vector sources can be reduced by using the low-complexity coding strategy presented here instead of the optimal coding strategy. Specifically, the computational complexity is reduced from O ( n 2 N 2 ) to O ( n N log n ) , where n is the length of the data blocks. Moreover, our coding strategy is asymptotically optimal (i.e., the rate of our coding strategy tends to the RDF of the source) whenever the Gaussian vector source is AWSS and the considered data blocks are large enough. Besides being a low-complexity strategy, it does not require the knowledge of the correlation matrix of such data blocks. Furthermore, our coding strategy is appropriate to encode the most relevant Gaussian vector sources, namely, WSS, MA, AR, and ARMA vector sources.

Author Contributions

Authors are listed in order of their degree of involvement in the work, with the most active contributors listed first. J.G.-G. conceived the research question. All authors proved the main results and wrote the paper. All authors have read and approved the final manuscript.

Funding

This work was supported in part by the Spanish Ministry of Economy and Competitiveness through the CARMEN project (TEC2016-75067-C4-3-R).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 1

Proof. 
(1) ⇒(2) We have
y k = [ y n : 1 ] n k + 1 , 1 = j = 1 n V n * I N n k + 1 , j x n : 1 j , 1 = j = 1 n V n * n k + 1 , j I N x n : 1 j , 1 = j = 1 n V n j , n k + 1 ¯ x n : 1 j , 1 = 1 n j = 1 n e 2 π ( j 1 ) ( n k ) n i x n : 1 j , 1 = 1 n j = 1 n e 2 π ( j 1 ) i e 2 π ( j 1 ) k n i x n : 1 j , 1 = 1 n j = 1 n e 2 π ( j 1 ) k n i x n : 1 j , 1 = 1 n j = 1 n e 2 π ( j 1 ) k n i x n : 1 j , 1 ¯ = y n k ¯
for all k { 1 , , n 1 } and y n = 1 n j = 1 n x n : 1 j , 1 R N × 1 .
(2)⇒(1) Since V n I N is a unitary matrix and
V n k , n j + 1 = 1 n e 2 π ( k 1 ) ( n j ) n i = 1 n e 2 π ( k 1 ) i e 2 π ( k 1 ) j n i = V n k , j + 1 ¯
for all k { 1 , , n } and j { 1 , , n 1 } , we conclude that
x k = [ x n : 1 ] n k + 1 , 1 = [ V n I N y n : 1 ] n k + 1 , 1 = j = 1 n V n n k + 1 , j y n : 1 j , 1 = j = 1 n V n n k + 1 , j y n j + 1 = V n n k + 1 , 1 y n + h = 1 n 1 V n n k + 1 , n h + 1 y h = 1 n y n + h = 1 n 2 1 V n n k + 1 , n h + 1 y h + V n n k + 1 , n h + 1 ¯ y h ¯ + 1 + ( 1 ) n 2 V n n k + 1 , n 2 + 1 y n 2 = 1 n y n + h = 1 n 2 1 V n n k + 1 , n h + 1 y h + V n n k + 1 , n h + 1 y h ¯ + 1 + ( 1 ) n 2 1 n e π ( n k ) i y n 2 = 1 n y n + 2 h = 1 n 2 1 Re V n n k + 1 , n h + 1 y h + 1 + ( 1 ) n 2 ( 1 ) n k n y n 2 R N × 1
for all k { 1 , , n } . □

Appendix B. Proof of Theorem 1

Proof. 
Fix k { 1 , , n } . Let E x n : 1 x n : 1 * = U diag 1 j n N λ j E x n : 1 x n : 1 * U 1 and E ( x k x k * ) = W diag 1 j N λ j E ( x k x k * W 1 be an eigenvalue decomposition (EVD) of E x n : 1 x n : 1 * and E ( x k x k * ) , respectively. We can assume that the eigenvector matrices U and W are unitary. We have
λ j E ( x k x k * ) = W * E ( x k x k * ) W j , j = h = 1 N [ W * ] j , h l = 1 N E ( x k x k * ) h , l W l , j = h = 1 N [ W * ] j , h l = 1 N E x n : 1 x n : 1 * ( n k ) N + h , ( n k ) N + l W l , j = h = 1 N [ W * ] j , h l = 1 N U diag 1 p n N λ p E x n : 1 x n : 1 * U * ( n k ) N + h , ( n k ) N + l W l , j = h = 1 N [ W * ] j , h l = 1 N p = 1 n N U ( n k ) N + h , p λ p E x n : 1 x n : 1 * U * p , ( n k ) N + l W l , j = p = 1 n N λ p E x n : 1 x n : 1 * h = 1 N [ W ] h , j ¯ U ( n k ) N + h , p l = 1 N U ( n k ) N + l , p ¯ W l , j = p = 1 n N λ p E x n : 1 x n : 1 * h = 1 N [ W ] h , j ¯ U ( n k ) N + h , p l = 1 N W l , j ¯ U ( n k ) N + l , p ¯ = p = 1 n N λ p E x n : 1 x n : 1 * h = 1 N [ W ] h , j ¯ U ( n k ) N + h , p 2 ,
and consequently,
λ n N E x n : 1 x n : 1 * p = 1 n N h = 1 N [ W ] h , j ¯ U ( n k ) N + h , p 2 λ j E ( x k x k * ) λ 1 E x n : 1 x n : 1 * p = 1 n N h = 1 N [ W ] h , j ¯ U ( n k ) N + h , p 2
for all j { 1 , , N } . Therefore, since
p = 1 n N h = 1 N [ W ] h , j ¯ U ( n k ) N + h , p 2 = p = 1 n N h = 1 N [ W ] h , j ¯ U ( n k ) N + h , p l = 1 N U ( n k ) N + l , p ¯ W l , j = h = 1 N [ W * ] j , h l = 1 N p = 1 n N U ( n k ) N + h , p U * p , ( n k ) N + l W l , j = h = 1 N [ W * ] j , h l = 1 N U U * ( n k ) N + h , ( n k ) N + l W l , j = h = 1 N [ W * ] j , h l = 1 N I n N ( n k ) N + h , ( n k ) N + l W l , j = h = 1 N [ W * ] j , h W h , j = W * W j , j = I N j , j = 1 ,
Equation (2) holds. We now prove Equation (3). Let E ( y k y k * ) = M diag 1 j N λ j E ( y k y k * M 1 be an EVD of E ( y k y k * ) , where M is unitary. We have
λ j E ( y k y k * ) = h = 1 N [ M * ] j , h l = 1 N E y n : 1 y n : 1 * ( n k ) N + h , ( n k ) N + l M l , j = h = 1 N [ M * ] j , h l = 1 N E V n I N * x n : 1 x n : 1 * V n I N ( n k ) N + h , ( n k ) N + l M l , j = h = 1 N [ M * ] j , h l = 1 N V n I N * E x n : 1 x n : 1 * V n I N ( n k ) N + h , ( n k ) N + l M l , j = h = 1 N [ M * ] j , h l = 1 N V n I N * U diag 1 p n N λ p E x n : 1 x n : 1 * V n I N * U * ( n k ) N + h , ( n k ) N + l M l , j = p = 1 n N λ p E x n : 1 x n : 1 * h = 1 N [ M ] h , j ¯ V n I N * U ( n k ) N + h , p 2 ,
and thus,
λ n N E x n : 1 x n : 1 * p = 1 n N h = 1 N [ M ] h , j ¯ V n I N * U ( n k ) N + h , p 2 λ j E ( y k y k * ) λ 1 E x n : 1 x n : 1 * p = 1 n N h = 1 N [ M ] h , j ¯ V n I N * U ( n k ) N + h , p 2
for all j { 1 , , N } . Hence, as
p = 1 n N h = 1 N [ M ] h , j ¯ V n I N * U ( n k ) N + h , p 2 = h = 1 N [ M * ] j , h l = 1 N V n I N * U V n I N * U * ( n k ) N + h , ( n k ) N + l M l , j = h = 1 N [ M * ] j , h l = 1 N V n I N * I n N V n I N ( n k ) N + h , ( n k ) N + l M l , j = h = 1 N [ M * ] j , h l = 1 N I n N ( n k ) N + h , ( n k ) N + l M l , j = 1 ,
Equation (3) holds. □

Appendix C. Proof of Theorem 2

Proof. 
Fix k { 1 , , n 1 } { n 2 } . Since
y k = 1 n j = 1 n e 2 π ( j 1 ) k n i x n : 1 j , 1 = 1 n j = 1 n cos 2 π ( 1 j ) k n + i sin 2 π ( 1 j ) k n x n j + 1 ,
we obtain
E y k ^ y k ^ = E Re ( y k ) Im ( y k ) Re ( y k ) | Im ( y k ) = E Re ( y k ) Re ( y k ) E Re ( y k ) Im ( y k ) E Im ( y k ) Re ( y k ) E Im ( y k ) Im ( y k ) = 1 n j 1 , j 2 = 1 n cos 2 π ( 1 j 1 ) k n cos 2 π ( 1 j 2 ) k n E x n j 1 + 1 x n j 2 + 1 cos 2 π ( 1 j 1 ) k n sin 2 π ( 1 j 2 ) k n E x n j 1 + 1 x n j 2 + 1 sin 2 π ( 1 j 1 ) k n cos 2 π ( 1 j 2 ) k n E x n j 1 + 1 x n j 2 + 1 sin 2 π ( 1 j 1 ) k n sin 2 π ( 1 j 2 ) k n E x n j 1 + 1 x n j 2 + 1 = 1 n j 1 , j 2 = 1 n A j 1 E x n j 1 + 1 x n j 2 + 1 A j 2 ,
where A j = cos 2 π ( 1 j ) k n I N | sin 2 π ( 1 j ) k n I N with j { 1 , , n } . Fix r { 1 , , 2 N } , and consider a real eigenvector v corresponding to λ r E y k ^ y k ^ with v v = 1 . Let E x n : 1 x n : 1 = U diag 1 j n N λ j E x n : 1 x n : 1 U 1 be an EVD of E x n : 1 x n : 1 , where U is real and orthogonal. Then
λ r E y k ^ y k ^ = λ r E y k ^ y k ^ v v = v λ r E y k ^ y k ^ v = v E y k ^ y k ^ v = 1 n j 1 , j 2 = 1 n v A j 1 E x n j 1 + 1 x n j 2 + 1 A j 2 v = 1 n j 1 , j 2 = 1 n v A j 1 E x n : 1 x n : 1 j 1 , j 2 A j 2 v = 1 n j 1 , j 2 = 1 n v A j 1 e j 1 E x n : 1 x n : 1 e j 2 A j 2 v = 1 n j 1 , j 2 = 1 n v A j 1 e j 1 U diag 1 p n N λ p E x n : 1 x n : 1 U e j 2 A j 2 v = 1 n j 1 = 1 n v A j 1 e j 1 U diag 1 p n N λ p E x n : 1 x n : 1 j 2 = 1 n U e j 2 A j 2 v = 1 n B diag 1 p n N λ p E x n : 1 x n : 1 B 1 , 1 = 1 n p = 1 n N B 1 , p λ p E x n : 1 x n : 1 B p , 1 = 1 n p = 1 n N λ p E x n : 1 x n : 1 B p , 1 2 ,
where e l C n N × N with e l j , 1 = δ j , l I N for all j , l { 1 , , n } and B = j = 1 n U e j A j v . Consequently,
λ n N E x n : 1 x n : 1 1 n p = 1 n N B p , 1 2 λ r E y k ^ y k ^ λ 1 E x n : 1 x n : 1 1 n p = 1 n N B p , 1 2 .
Therefore, to finish the proof we only need to show that 1 n p = 1 n N B p , 1 2 = 1 2 . Applying ([5] (Equations (14) and (15))) yields
1 n p = 1 n N B p , 1 2 = 1 n p = 1 n N B 1 , p B p , 1 = 1 n B B = 1 n j 1 = 1 n U e j 1 A j 1 v j 2 = 1 n U e j 2 A j 2 v = 1 n j 1 , j 2 = 1 n v A j 1 e j 1 e j 2 A j 2 v = 1 n j = 1 n v A j A j v = 1 n j = 1 n ( A j v ) ( A j v ) = 1 n j = 1 n s = 1 N A j v s , 1 2 = 1 n j = 1 n s = 1 N cos 2 π ( 1 j ) k n [ v ] s , 1 + sin 2 π ( 1 j ) k n [ v ] N + s , 1 2 = 1 n s = 1 N j = 1 n cos 2 π ( 1 j ) k n 2 [ v ] s , 1 2 + sin 2 π ( 1 j ) k n 2 [ v ] N + s , 1 2 + 2 cos 2 π ( 1 j ) k n sin 2 π ( 1 j ) k n [ v ] s , 1 [ v ] N + s , 1 = s = 1 N [ v ] s , 1 2 1 n j = 1 n cos 2 π ( 1 j ) k n 2 + [ v ] N + s , 1 2 1 n j = 1 n sin 2 π ( 1 j ) k n 2 + [ v ] s , 1 [ v ] N + s , 1 1 n j = 1 n 2 sin 2 π ( 1 j ) k n cos 2 π ( 1 j ) k n = s = 1 N [ v ] s , 1 2 1 n j = 1 n 1 sin 2 π ( 1 j ) k n 2 + [ v ] N + s , 1 2 2 + [ v ] s , 1 [ v ] N + s , 1 1 n j = 1 n sin 4 π ( 1 j ) k n = s = 1 N [ v ] s , 1 2 1 1 n j = 1 n sin 2 π ( 1 j ) k n 2 + [ v ] N + s , 1 2 2 [ v ] s , 1 [ v ] N + s , 1 1 n j = 1 n sin 4 π ( j 1 ) k n = s = 1 N [ v ] s , 1 2 2 + [ v ] N + s , 1 2 2 [ v ] s , 1 [ v ] N + s , 1 1 n j = 1 n Im e 4 π ( j 1 ) k n i = s = 1 N [ v ] s , 1 2 2 + [ v ] N + s , 1 2 2 [ v ] s , 1 [ v ] N + s , 1 1 n Im j = 1 n e 4 π ( j 1 ) k n i = s = 1 N [ v ] s , 1 2 2 + [ v ] N + s , 1 2 2 = 1 2 h = 1 2 N [ v ] h , 1 2 = 1 2 v v = 1 2 .
 □

Appendix D. Proof of Lemma 2

Proof. 
(1) $E(y_ky_k^*)=\bigl[E(y_{n:1}y_{n:1}^*)\bigr]_{n-k+1,n-k+1}=\bigl[(V_n\otimes I_N)^*E(x_{n:1}x_{n:1}^*)(V_n\otimes I_N)\bigr]_{n-k+1,n-k+1}$.
(2) $E(y_ky_k^\top)=\bigl[E(y_{n:1}y_{n:1}^\top)\bigr]_{n-k+1,n-k+1}=\bigl[(V_n\otimes I_N)^*E(x_{n:1}x_{n:1}^\top)\bigl((V_n\otimes I_N)^*\bigr)^\top\bigr]_{n-k+1,n-k+1}$.
(3) We have
\[
E(y_ky_k^*)=E\bigl((\mathrm{Re}(y_k)+i\,\mathrm{Im}(y_k))(\mathrm{Re}(y_k)-i\,\mathrm{Im}(y_k))^\top\bigr)=E\bigl(\mathrm{Re}(y_k)\,\mathrm{Re}(y_k)^\top\bigr)+E\bigl(\mathrm{Im}(y_k)\,\mathrm{Im}(y_k)^\top\bigr)+i\Bigl(E\bigl(\mathrm{Im}(y_k)\,\mathrm{Re}(y_k)^\top\bigr)-E\bigl(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k)^\top\bigr)\Bigr)\qquad(\mathrm{A}1)
\]
and
\[
E(y_ky_k^\top)=E\bigl((\mathrm{Re}(y_k)+i\,\mathrm{Im}(y_k))(\mathrm{Re}(y_k)+i\,\mathrm{Im}(y_k))^\top\bigr)=E\bigl(\mathrm{Re}(y_k)\,\mathrm{Re}(y_k)^\top\bigr)-E\bigl(\mathrm{Im}(y_k)\,\mathrm{Im}(y_k)^\top\bigr)+i\Bigl(E\bigl(\mathrm{Im}(y_k)\,\mathrm{Re}(y_k)^\top\bigr)+E\bigl(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k)^\top\bigr)\Bigr).\qquad(\mathrm{A}2)
\]
As
\[
E(\hat y_k\hat y_k^{\,\top})=\begin{pmatrix}E\bigl(\mathrm{Re}(y_k)\,\mathrm{Re}(y_k)^\top\bigr) & E\bigl(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k)^\top\bigr)\\ E\bigl(\mathrm{Im}(y_k)\,\mathrm{Re}(y_k)^\top\bigr) & E\bigl(\mathrm{Im}(y_k)\,\mathrm{Im}(y_k)^\top\bigr)\end{pmatrix},
\]
assertion (3) follows directly from Equations (A1) and (A2).  □

References

  1. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Block Toeplitz matrices: Asymptotic results and applications. Found. Trends Commun. Inf. Theory 2011, 8, 179–257. [Google Scholar] [CrossRef]
  2. Gray, R.M. Toeplitz and circulant matrices: A review. Found. Trends Commun. Inf. Theory 2006, 2, 155–239. [Google Scholar] [CrossRef]
  3. Ephraim, Y.; Lev-Ari, H.; Gray, R.M. Asymptotic minimum discrimination information measure for asymptotically weakly stationary processes. IEEE Trans. Inf. Theory 1988, 34, 1033–1040. [Google Scholar] [CrossRef]
  4. Gray, R.M. On the asymptotic eigenvalue distribution of Toeplitz matrices. IEEE Trans. Inf. Theory 1972, IT-18, 725–730. [Google Scholar] [CrossRef]
  5. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Insausti, X. Upper bounds for the rate distortion function of finite-length data blocks of Gaussian WSS sources. Entropy 2017, 19, 554. [Google Scholar] [CrossRef]
  6. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Villar-Rosety, F.M.; Insausti, X. Rate-Distortion function upper bounds for Gaussian vectors and their applications in coding AR sources. Entropy 2018, 20, 399. [Google Scholar] [CrossRef]
  7. Viswanathan, H.; Berger, T. The quadratic Gaussian CEO problem. IEEE Trans. Inf. Theory 1997, 43, 1549–1559. [Google Scholar] [CrossRef]
  8. Torezzan, C.; Panek, L.; Firer, M. A low complexity coding and decoding strategy for the quadratic Gaussian CEO problem. J. Frankl. Inst. 2016, 353, 643–656. [Google Scholar] [CrossRef]
  9. Kolmogorov, A.N. On the Shannon theory of information transmission in the case of continuous signals. IRE Trans. Inf. Theory 1956, IT-2, 102–108. [Google Scholar] [CrossRef]
  10. Neeser, F.D.; Massey, J.L. Proper complex random processes with applications to information theory. IEEE Trans. Inf. Theory 1993, 39, 1293–1302. [Google Scholar] [CrossRef]
  11. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Insausti, X.; Hogstad, B.O. On the complexity reduction of coding WSS vector processes by using a sequence of block circulant matrices. Entropy 2017, 19, 95. [Google Scholar] [CrossRef]
  12. Pearl, J. On coding and filtering stationary signals by discrete Fourier transforms. IEEE Trans. Inf. Theory 1973, 19, 229–232. [Google Scholar] [CrossRef]
  13. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Asymptotically equivalent sequences of matrices and Hermitian block Toeplitz matrices with continuous symbols: Applications to MIMO systems. IEEE Trans. Inf. Theory 2008, 54, 5671–5680. [Google Scholar] [CrossRef]
  14. Gutiérrez-Gutiérrez, J. A modified version of the Pisarenko method to estimate the power spectral density of any asymptotically wide sense stationary vector process. Appl. Math. Comput. 2019, 362, 124526. [Google Scholar] [CrossRef]
  15. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Crespo, P.M.; Insausti, X. Rate distortion function of Gaussian asymptotically WSS vector processes. Entropy 2018, 20, 719. [Google Scholar] [CrossRef]
  16. Gray, R.M. Information rates of autoregressive processes. IEEE Trans. Inf. Theory 1970, IT-16, 412–421. [Google Scholar] [CrossRef]
  17. Toms, W.; Berger, T. Information rates of stochastically driven dynamic systems. IEEE Trans. Inf. Theory 1971, 17, 113–114. [Google Scholar] [CrossRef]
  18. Reinsel, G.C. Elements of Multivariate Time Series Analysis; Springer: Berlin/Heidelberg, Germany, 1993. [Google Scholar]
  19. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Asymptotically equivalent sequences of matrices and multivariate ARMA processes. IEEE Trans. Inf. Theory 2011, 57, 5444–5454. [Google Scholar] [CrossRef]
  20. Gutiérrez-Gutiérrez, J.; Iglesias, I.; Podhorski, A. Geometric MMSE for one-sided and two-sided vector linear predictors: From the finite-length case to the infinite-length case. Signal Process. 2011, 91, 2237–2245. [Google Scholar] [CrossRef]
Figure 1. Proposed coding strategy for Gaussian vector sources. In this figure, Encoder$_k$ (Decoder$_k$) denotes the optimal encoder (decoder) for the Gaussian $N$-dimensional vector $y_k$ with $k\in\{\lceil\frac n2\rceil,\dots,n\}$.
Figure 2. Considered rates for the wide sense stationary (WSS) vector process in ([20] (Section 4)).
Figure 3. Considered rates for the VMA(1) process in ([18] (Example 2.1)).
Figure 4. Considered rates for the VAR(1) process in ([18] (Example 2.3)).
Figure 5. Considered rates for the VARMA(1,1) process in ([18] (Example 3.2)).
Figure 6. Considered rates for the perturbed WSS vector process with D = 0.001.


