Article

Equivalent MIMO Channel Matrix Sparsification for Enhancement of Sensor Capabilities

1 Faculty of Radio and Television, Moscow Technical University of Communications and Informatics (MTUCI), 111024 Moscow, Russia
2 Suisse Department of International Telecommunication Academy, 1202 Geneva, Switzerland
3 Department of Information Technology, Peoples’ Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya Str., 117198 Moscow, Russia
* Author to whom correspondence should be addressed.
Sensors 2022, 22(5), 2041; https://doi.org/10.3390/s22052041
Submission received: 10 January 2022 / Revised: 24 February 2022 / Accepted: 28 February 2022 / Published: 5 March 2022

Abstract:
One of the development directions of new-generation mobile communications is the use of multiple-input multiple-output (MIMO) channels with a large number of antennas. This requires the development and utilization of new approaches to signal detection in MIMO channels, since the difference in energy efficiency and complexity between the optimal maximum likelihood algorithm and simpler linear algorithms becomes very large. The goal of the presented study is the development of a method for transforming a MIMO channel into a model based on a sparse matrix with a limited number of non-zero elements in a row. It is shown that the MIMO channel can be represented in the form of a Markov process. Hence, it becomes possible to use simple iterative MIMO demodulation algorithms such as message-passing algorithms (MPAs) and Turbo detection.

1. Introduction

This paper is an extended version of the conference paper [1]. The study introduces and compares various multiple-input multiple-output (MIMO) channel matrix sparsification methods. A description of the Turbo detection algorithm in a MIMO system with a sparsified equivalent channel matrix based on a minimum mean square error (MMSE) detector is also considered. Finally, a comparison of the link-level performances of all the analyzed algorithms is provided.
One of the directions of the development of new generations of mobile communications is the use of MIMO channels with a large number of antennas. The MIMO technology is one of the most efficient technologies, providing a significant increase in the throughput and an increase in the number of active users. Moreover, the performance grows linearly with an increase in the number of transmitting and receiving antennas. This addresses the requirements for communication systems of 5G and 6G generations [2,3,4].
However, an increase in the number of antennas leads to a significant complication of signal-processing algorithms in MIMO systems. There are many options for constructing MIMO detectors, which differ both in their characteristics and in the complexity of implementation [5,6]. Among this variety, two limiting cases can be distinguished [7]. At one extreme is the optimal maximum likelihood (ML) receiver, which has the best characteristics but is also the most difficult to implement. Its complexity grows exponentially with the number of antennas and the modulation order. At the other extreme is the linear MMSE receiver, which has a simple implementation (its complexity grows in proportion to the third power of the number of antennas) but significantly underperforms the ML receiver.
In [5], a comprehensive overview is given of MIMO detectors whose characteristics lie between those of the ML and MMSE receivers. There are two directions for improving MIMO detectors: simplifying the ML detector at the cost of some energy efficiency, and complicating the MMSE detector to increase its energy efficiency. The first direction includes algorithms based on spherical decoding, e.g., K-best [6,8,9]. The second direction includes various iterative algorithms using the MMSE detector as a basis, e.g., the ESA, EPA, and V-BLAST detectors [2,10,11,12,13,14,15,16].
It should be noted that a similar problem also exists in the implementation of multiuser detectors in communication systems with multiple access, especially in systems of non-orthogonal multiple access (NOMA) [17,18,19]. However, unlike MIMO systems, communication systems with NOMA have the ability to select various templates and user signals matching a specific type of processing at the receiver. This makes it possible to provide good energy performance with acceptable implementation complexity.
One effective approach to signal detection is based on iterative methods, such as low-density signature (LDS) [20] or sparse code multiple access (SCMA) [3,20,21,22,23,24,25,26,27]. A necessary condition for their use is the ability to represent the channel model in the form of a sparse matrix, as a result of which an individual observation is a superposition not of all symbols at once but only of a subset of the symbols. In this case, the number of combinations for one observation is significantly less than the number of combinations for the entire set of symbols. This makes it possible to use the sequential (optimal) processing of each observation and to transfer the received information to the processing of the next observation [8,16,19,28,29,30,31,32]. Similar algorithms are used in Markov processes with finite fixed connectivity.
In MIMO systems, in the general case, the channel matrix is completely filled. Therefore, it is impossible to directly use iterative algorithms to detect MIMO channel signals [14,15,25,28]. This paper proposes a new direction in the development of MIMO detectors. It is based on transforming the MIMO channel model into a form convenient for simple algorithms; i.e., the processing algorithm is not adjusted to the channel model, but the channel model is adapted to the processing algorithm.
The goal of the paper is the development of a method for transforming a MIMO channel model into a channel with a sparse matrix containing a limited number of non-zero elements in a row, or, equivalently, representing a MIMO channel signal in the form of a Markov process. Hence, it becomes possible to use simple iterative MIMO demodulation algorithms (message-passing algorithms (MPAs), Turbo, etc.). The utilization of these algorithms is especially beneficial in low-power devices, such as sensors, meters, Internet of Things (IoT) devices, and reduced-capability (RedCap) devices.
The rest of the paper is arranged as follows. In the next section, we introduce a MIMO system model and lay down the theoretical prerequisites and potential possibilities of an accurate representation of the MIMO channel matrix in the sparse format. Then, in Section 3, we derive several methods for the approximation of the MIMO channel matrix by a sparse matrix. In Section 4, the modeling and verification of the efficiency of the methods introduced above are presented. Finally, we draw the conclusions in Section 5.
In the paper, the following mathematical symbols, parameters, and operators are used:
  • capital letters, e.g., H, are used for matrices;
  • $\det$: determinant of a matrix;
  • $\mathrm{tr}$: trace of a matrix;
  • $\mathrm{chol}$: Cholesky decomposition of a matrix;
  • $X^H$: conjugate (Hermitian) transpose of matrix X;
  • $X^{-1}$: inverse of matrix X;
  • $|h_{m,n}|$: modulus of $h_{m,n}$;
  • $V^{(ij)}$: block component of matrix V;
  • $p_{pr}(x)$: the prior distribution;
  • $\Lambda_{eq}(x)$: the equivalent likelihood function.

2. Representation of the MIMO Channel Matrix in the Sparse Format

Let there be an observation:
$Y = HX + \eta$,
where Y is an N-dimensional complex vector of signal samples at the input of the MIMO detector, which can be considered a vector of output samples of the MIMO channel; X is an M-dimensional complex vector of transmitted QAM symbols, which can be considered a vector of input samples of the MIMO channel; η is the N-dimensional vector of complex samples of the observation noise with zero mathematical expectation and a correlation matrix $R_\eta = 2\sigma_\eta^2 I_N$, where $I_N$ is the identity matrix of size (N × N); and H is an (N × M)-dimensional complex matrix of the MIMO channel, the elements of which are complex Gaussian random variables with zero mathematical expectation and unit variance for Rayleigh fading. The observation noise variance $\sigma_\eta^2$ determines the signal-to-noise ratio (SNR) at the input of the receiving antenna:
$SNR_{Tx-Rx} = \dfrac{1}{2\sigma_\eta^2}$.
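The observation model of Equations (1) and (2) can be sketched numerically. The following Python fragment is a minimal illustration, not the authors' simulator; the antenna counts, the random seed, and the use of QPSK as a stand-in for the QAM constellation are all assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 4          # receive / transmit dimensions (illustrative sizes)
sigma2 = 0.1         # noise variance per complex dimension, sigma_eta^2

# Rayleigh-fading channel: i.i.d. CN(0, 1) entries
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
# QPSK symbols as a simple stand-in for the QAM vector X
X = (rng.choice([-1.0, 1.0], M) + 1j * rng.choice([-1.0, 1.0], M)) / np.sqrt(2)
eta = np.sqrt(sigma2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

Y = H @ X + eta              # observation, Equation (1)
snr = 1.0 / (2.0 * sigma2)   # SNR at the receive antenna input, Equation (2)
```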
It is known that for any square matrix H of full rank, there exists a QR decomposition:
H = Q R ,
where R is an upper triangular matrix, which is completely described by N × (N + 1)/2 coefficients, and Q is an orthogonal matrix, which is also uniquely determined by N × (N − 1)/2 coefficients.
From the point of view of the amount of information, such a decomposition does not lead to the appearance of redundancy or to the loss of information, due to the following equation:
$N(N+1)/2 + N(N-1)/2 = N^2$,
i.e., the amount of information contained in the original matrix H is fully preserved.
Therefore, it can be assumed there exists such a decomposition of the channel matrix:
H = Q S ,
for which matrix S contains N × (N + 1)/2 non-zero elements. The difference is that these non-zero elements should be evenly distributed over all rows, with the number of non-zero elements in each row less than or equal to (N + 1)/2.
If such a transformation shown in Equation (4) is found, then it will allow the use of simpler iterative algorithms for demodulation.
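The QR decomposition and the parameter-counting argument above can be checked numerically. A minimal real-valued sketch (the complex case differs only in using the conjugate transpose); sizes and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
H = rng.standard_normal((N, N))      # real-valued example for simplicity

Q, R = np.linalg.qr(H)               # H = QR, Equation (3)
assert np.allclose(Q @ R, H)
assert np.allclose(Q.T @ Q, np.eye(N))   # Q is orthogonal
assert np.allclose(R, np.triu(R))        # R is upper triangular

# Parameter counting: R has N(N+1)/2 entries, Q has N(N-1)/2 free parameters,
# together exactly the N^2 entries of H -- no redundancy, no loss
assert N * (N + 1) // 2 + N * (N - 1) // 2 == N ** 2
```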

3. Methods for the Approximation of the Channel Matrix by a Sparse Matrix

Unfortunately, it is not possible to find an exact decomposition of the channel matrix into an orthogonal matrix and a sparse matrix with a fixed number of non-zero entries in a row. The application of numerical methods to test this approach also shows that, most likely, such a decomposition does not exist. In addition, even if we assume that such a decomposition exists, it would allow iterative demodulation algorithms with a complexity of $N_{it} N 2^{M M_{bit}/2}$, instead of an optimal demodulator with a complexity of $2^{M M_{bit}}$, where $M_{bit} = \log_2(M)$ is the number of bits in one M-QAM symbol. For large M, such an algorithm will also be difficult to implement. Therefore, we considered approximate methods for representing a MIMO channel in the form of decimated channels as follows, i.e., channels in which each observation contains only part of the symbols.

3.1. Kullback Distance for the Approximated Channel Model

Let us consider as an approximation criterion the Kullback distance between two distributions, which is determined as follows [33,34]:
$D(p,q) = \int p(X)\ln\dfrac{p(X)}{q(X)}\,dX$,
where p(X) is the original distribution and q(X) is the approximated distribution.
Let distributions p(X) and q(X) be multivariate complex Gaussian distributions represented in the following forms:
$q(X) = \dfrac{1}{\pi^M \det(Q)} \exp\left\{-(X-\tilde X)^H Q^{-1}(X-\tilde X)\right\}$,
$p(X) = \dfrac{1}{\pi^M \det(V)} \exp\left\{-(X-\bar X)^H V^{-1}(X-\bar X)\right\}$.
Substituting Equation (6) into Equation (5), we can obtain:
$D(p,q) = -\ln\det(Q^{-1}V) + \mathrm{tr}\left\{Q^{-1}V - I_M\right\} + (\bar X - \tilde X)^H Q^{-1}(\bar X - \tilde X)$.
Let us assume we need to approximate a distribution with a fully filled correlation matrix V by a distribution determined by a partially filled correlation matrix Q (for example, diagonal, block-diagonal, or sparse) or by a correlation matrix with a certain relationship between its elements. In this case, minimizing the distance D(p,q) reduces to fulfilling the equality:
X ˜ = X ¯ ,
and the minimization of the following functional:
$D(p,q) = -\ln\det(Q^{-1}V) + \mathrm{tr}\left\{Q^{-1}V - I_M\right\}$,
which is minimized subject to the restrictions on the form of the matrix Q.
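The Kullback distance between two zero-mean Gaussians can be evaluated numerically, reading the functional as $D = -\ln\det(Q^{-1}V) + \mathrm{tr}\{Q^{-1}V - I_M\}$. A real-valued sketch (the complex case is analogous); the matrices and seed are illustrative:

```python
import numpy as np

def kl_gauss(V, Q):
    # D(p, q) = -ln det(Q^{-1} V) + tr{Q^{-1} V - I} for two zero-mean
    # Gaussians with correlation matrices V (original) and Q (approximation)
    M = V.shape[0]
    QinvV = np.linalg.solve(Q, V)
    sign, logdet = np.linalg.slogdet(QinvV)
    return -logdet + np.trace(QinvV) - M

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
V = A @ A.T + 3 * np.eye(3)        # a fully filled "true" correlation matrix
Qd = np.diag(np.diag(V))           # its diagonal approximation

assert abs(kl_gauss(V, V)) < 1e-10   # distance of a distribution to itself is zero
assert kl_gauss(V, Qd) >= 0          # the Kullback distance is nonnegative
```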
We use the criterion $D(p,q) \to \min_{\tilde X, Q}$ for approximating the channel matrix H by a sparse matrix $\tilde H$; i.e., instead of Equation (1), we use the following model:
Y ˜ = H ˜ X + η ,
where matrix H ˜ in each row has at most m non-zero elements. For now, we consider the case when the placement of these elements in the matrix is given.
As the initial and approximated distributions, we use the posterior distributions obtained for Equations (1) and (10) using the Gaussian prior distribution $X \sim N(\bar X_{pr}, V_{pr})$. The posterior distributions can be found from Bayes' theorem:
$p_{ps}(X \mid Y) = \dfrac{L(Y \mid X)\, p_{pr}(X)}{\int L(Y \mid X)\, p_{pr}(X)\, dX} = \dfrac{1}{\pi^M \det(V)}\exp\left\{-(X - \hat X)^H V^{-1}(X - \hat X)\right\}$,
$q_{ps}(X \mid \tilde Y) = \dfrac{L(\tilde Y \mid X)\, p_{pr}(X)}{\int L(\tilde Y \mid X)\, p_{pr}(X)\, dX} = \dfrac{1}{\pi^M \det(Q)}\exp\left\{-(X - \tilde X)^H Q^{-1}(X - \tilde X)\right\}$,
where L ( Y | X ) and L ( Y ˜ | X ) are likelihood functions that are Gaussian according to the models (1) and (10).
It is easy to show that the parameters of the distributions $p_{ps}(X)$ and $q_{ps}(X)$ are MMSE solutions (i.e., the estimates and correlation matrices) for the corresponding models and are determined by the following expressions [7]:
For an accurate model:
$\hat X = \bar X_{pr} + K(Y - H\bar X_{pr})$,
$V = (H^H R_\eta^{-1} H + V_{pr}^{-1})^{-1} = V_{pr} - K H V_{pr}$,
$K = (H^H R_\eta^{-1} H + V_{pr}^{-1})^{-1} H^H R_\eta^{-1} = V_{pr} H^H (H V_{pr} H^H + R_\eta)^{-1}$;
For the approximated model:
$\tilde X = \bar X_{pr} + \tilde K(\tilde Y - \tilde H\bar X_{pr})$,
$Q = (\tilde H^H R_\eta^{-1} \tilde H + V_{pr}^{-1})^{-1} = V_{pr} - \tilde K \tilde H V_{pr}$,
$\tilde K = (\tilde H^H R_\eta^{-1} \tilde H + V_{pr}^{-1})^{-1} \tilde H^H R_\eta^{-1} = V_{pr} \tilde H^H (\tilde H V_{pr} \tilde H^H + R_\eta)^{-1}$.
According to Equation (8), the equality of the mathematical expectations of the original and approximated distributions should be fulfilled; these are the MMSE estimates:
X ^ = X ˜ .
From Equation (15), we obtain that:
$\tilde K\tilde Y = \hat X \;\Rightarrow\; \tilde Y = \tilde K^{-1}\hat X = (\tilde H^H)^{-1}(\tilde H^H\tilde H + 2\sigma_\eta^2 I_M)(H^H H + 2\sigma_\eta^2 I_M)^{-1} H^H Y$.
Taking into account that the QAM symbols are independent and equiprobable ($\bar X_{pr} = 0$, $V_{pr} = I_M$), as well as the independence of the noise samples ($R_\eta = 2\sigma_\eta^2 I_N$), and substituting the expression for the correlation matrix Q into Equation (9), we find the conditions that the channel matrix must meet:
$\tilde H = \arg\min_{\tilde H} D(p,q) = \arg\min_{\tilde H}\left(-\ln\det(Q^{-1}V) + \mathrm{tr}\{Q^{-1}V - I_M\}\right) = \arg\min_{\tilde H}\left(-\ln\det\left(\dfrac{1}{2\sigma_\eta^2}\tilde H^H\tilde H + I_M\right) + \mathrm{tr}\left\{\left(\dfrac{1}{2\sigma_\eta^2}\tilde H^H\tilde H + I_M\right)V\right\}\right)$.
This functional is minimized under the given constraints on the form of the matrix $\tilde H$, i.e., on the arrangement of its non-zero elements. Obviously, the minimum value of D(p,q) depends not only on the values of these elements but also on their locations.

3.2. Diagonal Matrix Approximation

The simplest form of the approximating matrix is the diagonal matrix $\tilde H = \mathrm{diag}\{\tilde h_{mm},\; m = \overline{1,M}\}$. For it, we obtain:
$\tilde H = \arg\min_{\tilde H}\left(-\sum_{m=1}^{M}\ln\left(\dfrac{1}{2\sigma_\eta^2}|\tilde h_{mm}|^2 + 1\right) + \dfrac{1}{2\sigma_\eta^2}\sum_{m=1}^{M}|\tilde h_{mm}|^2 v_{mm}\right)$,
where v m m are the diagonal elements of the correlation matrix V.
Differentiating with respect to $|\tilde h_{mm}|^2$ and equating the result to zero, we obtain the system of equations:
$-\dfrac{1}{|\tilde h_{mm}|^2 + 2\sigma_\eta^2} + \dfrac{v_{mm}}{2\sigma_\eta^2} = 0$.
From here, we obtain the solution:
$|\tilde h_{mm}|^2 = \dfrac{2\sigma_\eta^2(1 - v_{mm})}{v_{mm}}$.
In this case, the accuracy of the approximation depends only on the squared modulus $|\tilde h_{mm}|^2$.
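The diagonal solution can be sketched as follows. The posterior correlation matrix V is computed from the MMSE expressions of Equation (13) with $V_{pr} = I$ and $R_\eta = 2\sigma_\eta^2 I$; the channel, sizes, and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
M, sigma2 = 4, 0.1
H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)

# MMSE posterior correlation matrix, Equation (13), with Vpr = I, R = 2*sigma2*I
V = np.linalg.inv(H.conj().T @ H / (2 * sigma2) + np.eye(M))
v = np.diag(V).real          # posterior variances; 0 < v_mm < 1 holds here

# Squared moduli of the diagonal approximating matrix: 2*sigma2*(1 - v)/v
h2 = 2 * sigma2 * (1 - v) / v
assert np.all((v > 0) & (v < 1))
assert np.all(h2 > 0)
```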

3.3. Block Diagonal Matrix Approximation

Let the matrix H ˜ be represented as a block-diagonal matrix with blocks of size ( m × m ) :
$\tilde H = \begin{bmatrix} \tilde H^{(11)} & O_{mm} & \cdots & O_{mm} \\ O_{mm} & \tilde H^{(22)} & \cdots & O_{mm} \\ \vdots & \vdots & \ddots & \vdots \\ O_{mm} & O_{mm} & \cdots & \tilde H^{(KK)} \end{bmatrix}$,
where O m m is a zero matrix of size ( m × m ) .
In this case, the Kullback distance is equal to:
$D = -\ln\det V + \sum_{i=1}^{K}\left(-\ln\det\left(\dfrac{1}{2\sigma_\eta^2}\tilde H^{(ii)H}\tilde H^{(ii)} + I_m\right) + \mathrm{tr}\left\{\left(\dfrac{1}{2\sigma_\eta^2}\tilde H^{(ii)H}\tilde H^{(ii)} + I_m\right)V^{(ii)} - I_m\right\}\right)$,
where V ( i i ) is the corresponding diagonal block of size ( m × m ) from the general correlation matrix V.
Optimal matrices minimizing this distance are determined by the expressions:
$\dfrac{1}{2\sigma_\eta^2}\tilde H^{(ii)H}\tilde H^{(ii)} + I_m = \left(V^{(ii)}\right)^{-1}$.
From this, we obtain:
$\tilde H^{(ii)H}\tilde H^{(ii)} = 2\sigma_\eta^2\left(\left(V^{(ii)}\right)^{-1} - I_m\right)$.
To find the matrix H ˜ ( i i ) , one can use known variants of matrix decomposition, for example, SVD decomposition or Cholesky transform [35,36,37,38,39,40]:
$\tilde H^{(ii)} = \mathrm{chol}\left(2\sigma_\eta^2\left(\left(V^{(ii)}\right)^{-1} - I_m\right)\right)$.
In this case, the matrix H ˜ ( i i ) is the upper triangular matrix. As a result of this procedure, we obtain the channel matrix of the following form:
$\tilde H = \begin{bmatrix} x & x & x & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & x & x & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & x & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & x & x & x & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & x & x & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & x & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & x & x & x \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & x & x \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & x \end{bmatrix}$.
The accuracy of such an approximation is determined by the following value of the Kullback distance:
$D = -\ln\det V + \sum_{i=1}^{K}\ln\det V^{(ii)}$.
It can be noted that the value of D depends on how the original matrix is split into blocks. Therefore, Equation (27) can be used as an optimization criterion for splitting V into V ( i i ) blocks (groups of symbols).
The detection for Equation (10) with a matrix of Equation (26) was carried out separately for each i-th block ($i = \overline{1,K}$). The complexity of the optimal demodulation of one such block is proportional to $2^{m M_{bit}}$.
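The block computation of Equations (24) and (25) can be sketched with NumPy. Note that `np.linalg.cholesky` returns a lower-triangular factor, so its conjugate transpose gives the upper-triangular block the text refers to; the block size, noise variance, and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
m, sigma2 = 3, 0.1
H = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
Vii = np.linalg.inv(H.conj().T @ H / (2 * sigma2) + np.eye(m))  # one diagonal block of V

# Equation (24): H~(ii)^H H~(ii) = 2*sigma2*((V(ii))^{-1} - I_m)
A = 2 * sigma2 * (np.linalg.inv(Vii) - np.eye(m))
# np.linalg.cholesky returns a lower factor L with A = L L^H;
# its conjugate transpose is the upper-triangular factor of Equation (25)
Htil = np.linalg.cholesky(A).conj().T

assert np.allclose(Htil.conj().T @ Htil, A)
assert np.allclose(Htil, np.triu(Htil))   # upper triangular, as in the text
```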

3.4. Strip Matrix Approximation

The next version of the sparse matrix representation is a strip matrix, which for m = 2 has the following form:
$\tilde H = \begin{bmatrix} \tilde h_{11} & \tilde h_{12} & 0 & \cdots & 0 & 0 & 0 \\ 0 & \tilde h_{22} & \tilde h_{23} & \cdots & 0 & 0 & 0 \\ 0 & 0 & \tilde h_{33} & \cdots & 0 & 0 & 0 \\ \vdots & & & \ddots & & & \vdots \\ 0 & 0 & 0 & \cdots & \tilde h_{M-2,M-2} & \tilde h_{M-2,M-1} & 0 \\ 0 & 0 & 0 & \cdots & 0 & \tilde h_{M-1,M-1} & \tilde h_{M-1,M} \\ 0 & 0 & 0 & \cdots & 0 & 0 & \tilde h_{M,M} \end{bmatrix}$.
It is known that the correlation matrix can be represented by the Cholesky expansion:
$V = U^H U$,
where U is an upper triangular matrix. Likewise, $Q^{-1} = G^H G$.
Then,
$D = -\ln\det(Q^{-1}V) + \mathrm{tr}\{Q^{-1}V - I\} = -\ln\det V - \ln\det(G^H G) + \mathrm{tr}\{G^H G V - I\} = -\ln\det V - \sum_{i=1}^{M}\ln|g_{ii}|^2 + \mathrm{tr}\{G V G^H - I\} = -\ln\det V - \sum_{i=1}^{M}\ln|g_{ii}|^2 + \mathrm{tr}\{G U^H U G^H - I\} = -\ln\det V - \sum_{i=1}^{M}\ln|g_{ii}|^2 + \sum_{j=1}^{M}\sum_{i=1}^{M}\left|(ug)_{ji}\right|^2 - M$,
where $(ug)_{ji}$ is the ji-th element of the matrix $UG^H$. For a strip matrix G with m diagonals, we obtain:
$(ug)_{ji} = \sum_{n=0}^{m-1} g^{*}_{i,i+n}\, u_{j,i+n},\quad j \le i+n$.

3.5. Approximation by a Markov Process

Sequential iterative procedures can be used in MIMO detection if the evaluated process can be represented in the form of a Markov process. In this case, the multivariate posterior distribution is represented as a product of conditional distributions with a finite fixed connection:
$p_{ps}(x_1,\ldots,x_M) = \left(\prod_{i=1}^{M-k-1} p_{ps}(x_i \mid x_{i+1},\ldots,x_{i+k})\right) p_{ps}(x_{M-k},\ldots,x_M)$,
where k is the order (memory, connectivity coefficient) of the Markov process.
Having obtained the parameters of the distribution $p_{ps}(x_{i+1},\ldots,x_{i+k})$, it is possible to calculate the parameters of the next distribution $p_{ps}(x_i,\ldots,x_{i+k-1})$ in Equation (32). In this case, the complexity is determined by the value of the parameter k, not by the dimension M of the entire vector.
It should be noted that the representation of the channel matrix in the form of a strip matrix with k non-zero diagonals also leads to the representation of the posterior distribution in Equation (32). Next, we consider an approach that also follows the MMSE solution, but without using a direct transition to the observation equation.
Let the posterior distribution be obtained as:
$p_{ps}(x_1, x_2, \ldots, x_M) = p_{ps}(X \mid Y) = \dfrac{1}{\pi^M\det(V)}\exp\left\{-(X-\hat X)^H V^{-1}(X-\hat X)\right\}$.
This distribution can be written in a factorized form:
$p_{ps}(x_1, x_2, \ldots, x_M) = p_{ps}(x_1 \mid x_2,\ldots,x_M)\, p_{ps}(x_2 \mid x_3,\ldots,x_M)\cdots p_{ps}(x_{M-k+1},\ldots,x_M)$,
where
$p_{ps}(x_i \mid x_{i+1},\ldots,x_M) = \dfrac{p_{ps}(x_i, x_{i+1},\ldots,x_M)}{p_{ps}(x_{i+1},\ldots,x_M)} = \dfrac{1}{\pi v_{i|i+1\ldots M}}\exp\left\{-\dfrac{1}{v_{i|i+1\ldots M}}\left|x_i - \hat x_i(x_{i+1},\ldots,x_M)\right|^2\right\}$.
This conditional distribution can be represented as the result of combining the prior distribution p pr ( x i ) and the equivalent likelihood function:
$p_{ps}(x_i \mid x_{i+1},\ldots,x_M) = \dfrac{\Lambda_{eq,i}(x_i, x_{i+1},\ldots,x_M)\, p_{pr}(x_i)}{\int_{x_i}\Lambda_{eq,i}(x_i, x_{i+1},\ldots,x_M)\, p_{pr}(x_i)\, dx_i}$.
Hence, the equivalent likelihood function is determined by the expression:
$\Lambda_{eq,i}(x_i, x_{i+1},\ldots,x_M) = C\,\dfrac{p_{ps}(x_i \mid x_{i+1},\ldots,x_M)}{p_{pr}(x_i)}$.
The concept of an equivalent likelihood function, in general, is associated with the concept of external information, which was obtained from general observation and refers only to the considered vector [ x i , x i + 1 , x M ] [7].
The idea of representing a sequence $x_1, x_2, \ldots, x_M$ by a Markov process with a connection of the k-th order is to approximate $p_{ps}(x_i \mid x_{i+1},\ldots,x_M) \approx p_{ps}(x_i \mid x_{i+1},\ldots,x_{i+k})$ and describe the general distribution by Equation (32). In this case, the equivalent likelihood functions depend only on k + 1 symbols:
$\Lambda_{eq,i}(x_i, x_{i+1},\ldots,x_{i+k}) = C\,\dfrac{p_{ps}(x_i \mid x_{i+1},\ldots,x_{i+k})}{p_{pr}(x_i)}$.
For k = 0, we obtain a usual approximation by a diagonal matrix.
Taking into account Equation (32), the complete equivalent likelihood function is the product:
$\Lambda_{eq}(X) \equiv C\,\dfrac{p_{ps}(X \mid Y)}{p_{pr}(X)} = \left(\prod_{i=1}^{M-k-1}\Lambda_{eq,i}(x_i, x_{i+1},\ldots,x_{i+k})\right)\Lambda_{eq,M-k}(x_{M-k},\ldots,x_M)$,
where
$\Lambda_{eq,M-k}(x_{M-k},\ldots,x_M) = C\,\dfrac{p_{ps}(x_{M-k},\ldots,x_M)}{p_{pr}(x_{M-k})\cdots p_{pr}(x_M)}$.
Below, we present the rules for calculating the parameters of equivalent likelihood functions.
Firstly, there is an initial distribution:
$p_{ps}(x_1, x_2, \ldots, x_M) = p_{ps}(X \mid Y) \sim N(\bar X, V)$.
From Equation (41), one can select the distribution parameters of the truncated vector:
$p_{ps}(x_i, x_{i+1}, \ldots, x_{i+k}) \sim N(\bar X_{i\ldots i+k},\, V_{i\ldots i+k})$.
We represent the vector of mathematical expectations and the correlation matrix in the block form:
$\bar X_{i\ldots i+k} = \begin{bmatrix} \bar x_i \\ \bar X_{i+1\ldots i+k} \end{bmatrix}$,
$V_{i\ldots i+k} = \begin{bmatrix} v_{ii} & v^{H}_{i,i+1\ldots i+k} \\ v_{i,i+1\ldots i+k} & V_{i+1\ldots i+k} \end{bmatrix}$.
Conditional distribution parameters $p_{ps}(x_i \mid x_{i+1},\ldots,x_{i+k}) \sim N(\bar x_{i|i+1\ldots i+k},\, v_{i|i+1\ldots i+k})$ are determined by the expressions:
$\bar x_{i|i+1\ldots i+k} = \bar x_i + T_{i|i+1\ldots i+k}\left(X_{i+1\ldots i+k} - \bar X_{i+1\ldots i+k}\right) = F_{i|i+1\ldots i+k}\,\bar X_{i\ldots i+k} + T_{i|i+1\ldots i+k}\, X_{i+1\ldots i+k}$,
$v_{i|i+1\ldots i+k} = v_{ii} - v^{H}_{i,i+1\ldots i+k}\, V^{-1}_{i+1\ldots i+k}\, v_{i,i+1\ldots i+k} = v_{ii} - T_{i|i+1\ldots i+k}\, v_{i,i+1\ldots i+k}$,
where
$T_{i|i+1\ldots i+k} = v^{H}_{i,i+1\ldots i+k}\, V^{-1}_{i+1\ldots i+k}, \qquad F_{i|i+1\ldots i+k} = \begin{bmatrix} 1 & -T_{i|i+1\ldots i+k} \end{bmatrix}$.
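The conditional parameters above are standard Gaussian conditioning (a Schur complement of the correlation block). A real-valued sketch with an illustrative correlation block and seed:

```python
import numpy as np

rng = np.random.default_rng(5)
k = 2
# A (k+1)x(k+1) correlation block V_{i..i+k} (real-valued for brevity)
A = rng.standard_normal((k + 1, k + 1))
V = A @ A.T + (k + 1) * np.eye(k + 1)

v_ii = V[0, 0]          # variance of x_i
v_cross = V[1:, 0]      # cross-correlations v_{i, i+1..i+k}
V_sub = V[1:, 1:]       # V_{i+1..i+k}

# T = v_cross^H V_sub^{-1}; the conditional variance is the Schur complement
T = np.linalg.solve(V_sub, v_cross)
v_cond = v_ii - v_cross @ T

assert v_cond > 0                # still a valid variance
assert v_cond <= v_ii            # conditioning cannot increase the variance
```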
The equivalent likelihood function is determined by the following expression:
$\Lambda_{eq,i}(x_i,\ldots,x_{i+k}) = C\,\dfrac{p_{ps}(x_i \mid x_{i+1},\ldots,x_{i+k})}{p_{pr}(x_i)} = C\,\dfrac{\exp\left(-(x_i - \bar x_{i|i+1\ldots i+k})^H v^{-1}_{i|i+1\ldots i+k}(x_i - \bar x_{i|i+1\ldots i+k})\right)}{\exp\left(-x_i^H x_i\right)} = C\exp\left(-\dfrac{1}{2\sigma_\eta^2}\left|z_i - f_i X_{i\ldots i+k}\right|^2 + |x_i|^2\right)$,
where
$z_i = f_i \bar X_{i\ldots i+k}, \qquad f_i = \sqrt{\dfrac{2\sigma_\eta^2}{v_{i|i+1\ldots i+k}}}\; F_{i|i+1\ldots i+k}$.
The equivalent likelihood function $\Lambda_{eq,M-k}(x_{M-k},\ldots,x_M)$ used as the initial one for constructing the demodulation procedure is determined based on its definition—Equation (40)—and has the following form:
$\Lambda_{eq,M-k}(x_{M-k},\ldots,x_M) = C\exp\left\{-(X_{M-k\ldots M} - \bar X_{M-k\ldots M})^H V^{-1}_{M-k\ldots M}(X_{M-k\ldots M} - \bar X_{M-k\ldots M}) + X^{H}_{M-k\ldots M} X_{M-k\ldots M}\right\}$.
After transformation, this function can be represented as follows:
$\Lambda_{eq,M-k}(x_{M-k},\ldots,x_M) = C\exp\left\{-\dfrac{1}{2\sigma_\eta^2}\left\|Z_{M-k} - F_{M-k}\, X_{M-k\ldots M}\right\|^2\right\}$,
where
$Z_{M-k} = F_{M-k} K_{M-k} \bar X_{M-k\ldots M}$, $\quad F_{M-k} = \mathrm{chol}\left(2\sigma_\eta^2\left(V^{-1}_{M-k\ldots M} - I_{k+1}\right)\right)$, $\quad K_{M-k} = \left(I_{k+1} - V_{M-k\ldots M}\right)^{-1}$.
The obtained likelihood functions can be used to calculate metrics and implement an iterative demodulation procedure (MPA, Turbo detector, etc.):
$\mu_i(t) = \log\Lambda_{eq,i}\left(X^{(t)}_{i\ldots i+k}\right) = -\dfrac{1}{2\sigma_\eta^2}\left|z_i - f_i X^{(t)}_{i\ldots i+k}\right|^2 + |x_i(t)|^2, \quad i = \overline{1, M-k-1}$;
$\mu_{M-k}(t) = \log\Lambda_{eq,M-k}\left(X^{(t)}_{M-k\ldots M}\right) = -\dfrac{1}{2\sigma_\eta^2}\left\|Z_{M-k} - F_{M-k} X^{(t)}_{M-k\ldots M}\right\|^2 + \left\|X^{(t)}_{M-k\ldots M}\right\|^2$,
where $X^{(t)}_{i\ldots i+k}$ is the t-th combination of the (k+1)-dimensional vector $X_{i\ldots i+k} = [x_i \cdots x_{i+k}]^T$, and $X^{(t)}_{M-k\ldots M}$ is the t-th combination of the (k+1)-dimensional vector $X_{M-k\ldots M} = [x_{M-k} \cdots x_M]^T$.
It is easy to notice that $(M - k)$ metric sets are calculated with this algorithm, each over $2^{(k+1)M_{bit}}$ combinations. This is significantly less than the $2^{M M_{bit}}$ combinations of the optimal enumeration algorithm.
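The saving can be illustrated with a small count; the symbol count, connectivity, and modulation order below are illustrative choices, not values from the paper:

```python
# Illustrative counts for M = 8 symbols, connectivity k = 2, QPSK (Mbit = 2)
M, k, Mbit = 8, 2, 2

markov_metrics = (M - k) * 2 ** ((k + 1) * Mbit)   # metric sets of 2^{(k+1)Mbit} each
optimal_metrics = 2 ** (M * Mbit)                  # full enumeration, 2^{M Mbit}

assert markov_metrics == 384       # 6 * 64
assert optimal_metrics == 65536    # 2^16
assert markov_metrics < optimal_metrics
```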
Thus, the considered method makes it possible to factorize the likelihood function into factors with lower connectivity, i.e., with a smaller number of observed symbols in the individual likelihood function components. This is equivalent to using a decimated channel matrix. Therefore, the methods of MIMO channel matrix sparsification described in the previous sub-sections and the Markov approximation method described in this sub-section solve the same problem of reducing the number of enumerated combinations through the separate processing of the likelihood function components. Such an approach makes it possible to use simple iterative methods.
The considered methods of channel matrix sparsification or likelihood function factorization are approximate, and their accuracy depends on the connectivity coefficient k (or on the fill factor m/M of the rows of the sparse channel matrix). However, the accuracy of the approximation also depends on the ordering of the symbols during the sparsification procedure, since the considered approaches mainly take into account the correlation of neighboring symbols. Therefore, to improve the accuracy of the approximation, one can preliminarily introduce some ordering of symbols $\chi = \Pi X$, where $\Pi$ is a permutation matrix that depends on the correlation matrix of the original distribution. Obviously, before sparsification, it is necessary to arrange the symbols so that the largest values of the correlation coefficients are concentrated near the main diagonal.
Figure 1 shows the curves of the Kullback distance D(p,q) between the original posterior distribution and the approximated one as a function of the number of non-zero symbols in the rows of the sparsified channel matrix (the connectivity coefficient) for the following algorithms:
  • Algorithm with a block-diagonal matrix—“block”;
  • Algorithm with a strip matrix and the calculation of the coefficients of the sparse matrix by the method of stochastic optimization—“Opt.”;
  • Algorithm with approximation by a Markov process—“Markov’s”.
These curves are shown for three options of symbol ordering:
  • Without ordering—“without order”;
  • Simple ordering—“simple order”. In this option, the symbol with the highest total power of all mutual correlation coefficients is put in the first place, and the rest are arranged in descending order of the magnitudes of their mutual correlation coefficients with the first symbol;
  • Serial ordering—“serial order”. In this option, the symbol with the highest total power of (m − 1) cross-correlation coefficients is selected, and a set $\Omega^{(1)}$ of (m − 1) symbols having the maximum correlation with the first symbol is formed. Next, from this set, the second symbol is selected, with the largest total power of (m − 1) cross-correlation coefficients, where (m − 2) symbols are already specified and taken from the set $\Omega^{(1)}$ (except for the first and second selected symbols), and one symbol is selected from the rest, not included in this set. Then, the third symbol is determined according to the same principle, etc.
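As an illustration, the "simple order" option might be sketched as follows. The correlation matrix, the power measure, and the tie-breaking are assumptions of this sketch, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 5))
V = A @ A.T + 5 * np.eye(5)       # stand-in posterior correlation matrix

# First take the symbol with the largest total cross-correlation power,
# then order the rest by descending |correlation| with that first symbol
power = np.sum(np.abs(V) ** 2, axis=1) - np.abs(np.diag(V)) ** 2
first = int(np.argmax(power))
rest = sorted((j for j in range(5) if j != first),
              key=lambda j: -abs(V[first, j]))
order = [first] + rest

Pi = np.eye(5)[order]             # permutation matrix, chi = Pi @ X
assert sorted(order) == [0, 1, 2, 3, 4]
assert np.allclose(Pi @ Pi.T, np.eye(5))
```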
With the number of non-zero symbols in a row m = 1, all algorithms have the same maximum Kullback distance. This case corresponds to the diagonal approximation and, in terms of demodulation characteristics, to a conventional MMSE receiver. As m increases, as expected, the Kullback distance decreases. Moreover, for the strip approximation of the channel model and the approximation by a Markov process, the distance decreases to a greater extent than with the block approximation. A particularly strong decrease is observed when symbol ordering is used. Serial ordering is most effective for m > 2.
Figure 2 shows a generalized block diagram of the MIMO detection algorithm using the proposed Markov approximation approach. It is based on the MMSE receiver, which calculates the MMSE estimates vector X ^ M M S E and the correlation matrix V M M S E . These parameters, after ordering, are used to calculate the parameters of the equivalent likelihood functions, which in turn are used to calculate the metrics, which are then used in a soft iterative QAM detector. The output of this MIMO detector is soft bit estimates or a log-likelihood ratio (LLR) for each bit.
Next, we consider the application of Turbo processing in the detection of the signal received after the observation model is sparsified. As a result of the performed sparsification procedure, the parameters describing the following likelihood functions can be obtained:
$\Lambda_{eq,i}(x_i,\ldots,x_{i+k}) = C\exp\left(-\dfrac{1}{2\sigma_\eta^2}\left|z_i - f_i X_{i\ldots i+k}\right|^2 + |x_i|^2\right),\quad i = \overline{1, M-k-1}$;
$\Lambda_{eq,M-k}(x_{M-k},\ldots,x_M) = C\exp\left\{-\dfrac{1}{2\sigma_\eta^2}\left\|Z_{M-k} - F_{M-k}\, X_{M-k\ldots M}\right\|^2\right\}$,
where the parameters $z_i$, $f_i$, $Z_{M-k}$, $F_{M-k}$ can be calculated using Equations (48) and (51).
Let each QAM symbol $x_i$ contain information about m bits, i.e., depend on an m-dimensional vector of binary symbols $B_i$. The total number of bits transmitted by the vector X is mM. Therefore, we can introduce an (m × M)-dimensional vector of binary symbols B. Let us introduce the notation $\Psi_i$ for the set of indices of size m(k + 1) that joins the numbers of the bits included in the vector $X_i = [x_i\; x_{i+1} \cdots x_{i+k}]^T$, $i = \overline{1, M-k}$.
It is easy to see that the likelihood function $\Lambda_{eq,i}$ contains information about the transmitted bits with numbers $n \in \Psi_i$. Therefore, by processing this likelihood function, the transmitted bits with numbers $n \in \Psi_i$ can be estimated. It should be noted that the sets $\Psi_i$ intersect with each other, so one can obtain several estimates of the same bit. The task of demodulation is to process all likelihood functions and evaluate each bit using all of the received information. To solve this problem, it is possible to use Turbo processing.
Before processing the i-th likelihood function, there is an a priori distribution of binary symbols:
$p_{pr,i}(B) = \prod_{n=1}^{mM} p_{pr,n,i}(b_n) = \prod_{n=1}^{mM}\dfrac{1}{2}\left(1 + b_n\,\nu_{n,i,pr}\right) = \prod_{n=1}^{mM}\dfrac{e^{b_n\lambda_{n,i,pr}}}{e^{\lambda_{n,i,pr}} + e^{-\lambda_{n,i,pr}}}$,
where b n = ± 1 is the transmitted information bit.
The parameters describing this distribution are determined by the following expressions
$\nu_{n,i,pr} = p_{pr,n,i}(b_n = 1) - p_{pr,n,i}(b_n = -1) = \tanh(\lambda_{n,i,pr})$,
$\lambda_{n,i,pr} = \dfrac{1}{2}\log\dfrac{p_{pr,n,i}(b_n = 1)}{p_{pr,n,i}(b_n = -1)} = \tanh^{-1}(\nu_{n,i,pr})$.
Notice that the parameter ν n , i , pr is the a priori mathematical expectation of the n-th bit before processing the i-th likelihood function, and the parameter λ n , i , pr is proportional to the logarithm of the ratio of the prior probabilities of the n-th bit before processing the i-th likelihood function. At the first processing step, the combinations of all bits are equiprobable, therefore, ν n , 1 , pr = 0 .
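The relations between ν and λ above can be verified directly; the prior probabilities here are an arbitrary illustrative choice:

```python
import numpy as np

# An illustrative bit prior: P(b = +1) = 0.8, P(b = -1) = 0.2
p_plus, p_minus = 0.8, 0.2

nu = p_plus - p_minus                     # a priori mean of the bit
lam = 0.5 * np.log(p_plus / p_minus)      # half log-probability ratio

assert np.isclose(np.tanh(lam), nu)       # nu = tanh(lambda)
assert np.isclose(np.arctanh(nu), lam)    # lambda = tanh^{-1}(nu)
```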
With this in mind, the posterior distribution is written as:
$p_{ps,i}(B) = C\,\Lambda_{eq,i}(B)\, p_{pr}(B) = C\left(\Lambda_{eq,i}\left(\{b_{n\in\Psi_i}\}\right)\prod_{n\in\Psi_i} p_{pr}(b_n)\right)\prod_{n\notin\Psi_i} p_{pr}(b_n)$,
where { b n Ψ i } denotes a set of symbols bn, with numbers n Ψ i .
For those bits that do not belong to the set $\Psi_i$, we can obtain:
$\lambda_{n,i,ps} = \dfrac{1}{2}\log\dfrac{p_{ps,i}(b_n = 1)}{p_{ps,i}(b_n = -1)} = \dfrac{1}{2}\log\dfrac{p_{pr,i}(b_n = 1)}{p_{pr,i}(b_n = -1)} = \lambda_{n,i,pr},\quad n \notin \Psi_i$.
Taking into account Equations (52), (54) and (57), after a series of transformations, we can obtain:
$\lambda_{n,i,ps} = \begin{cases} \dfrac{1}{2}\log\dfrac{\sum_{t\in t_{n+}}\exp\left(\mu_i(t) + \sum_{n'\in\Psi_i} b_{n'}(t)\,\lambda_{n',i,pr}\right)}{\sum_{t\in t_{n-}}\exp\left(\mu_i(t) + \sum_{n'\in\Psi_i} b_{n'}(t)\,\lambda_{n',i,pr}\right)} & \text{for } n\in\Psi_i \\ \lambda_{n,i,pr} & \text{for } n\notin\Psi_i \end{cases},\quad i = \overline{1, M-k}$,
where tn+ is the set of numbers of combinations t in which the value of the n-th bit in the binary bipolar representation is equal to +1 (or equal to 0 in the binary representation); tn is the set of numbers of combinations t in which the value of the n-th bit in the binary bipolar representation is equal to −1 or is equal to 1 in the binary representation.
These posterior parameters describe an independent distribution of the same form as the input prior distribution. Thus, they can be used as the a priori parameters for the next, (i + 1)-th, step:
$$\lambda_{n,i+1,\mathrm{pr}} = \lambda_{n,i,\mathrm{ps}}.$$
As a result, we obtain a sequential multi-step procedure for calculating the posterior parameters of the distribution of the full vector of binary bits B.
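One processing step of this procedure can be sketched in Python. This is an illustrative implementation only: representing the equivalent likelihood function as a dictionary `mu` of log-likelihoods over bipolar bit combinations, and the name `process_likelihood`, are assumptions made for this sketch and are not from the paper.

```python
import numpy as np
from itertools import product

def process_likelihood(mu, psi, lam_pr):
    """One processing step (sketch of Eq. (59)).

    mu     : dict mapping each bipolar bit tuple t (one entry per index in
             psi, values +/-1) to the log-likelihood mu_i(t).
    psi    : list of global bit indices covered by this likelihood function.
    lam_pr : array of prior half-LLRs for all bits.
    Returns the posterior half-LLRs; bits outside psi keep their prior
    values, as in the second branch of Eq. (59).
    """
    lam_ps = lam_pr.astype(float).copy()
    combos = list(product((+1, -1), repeat=len(psi)))
    # Joint log-score of combination t: mu_i(t) + sum over psi of b_n(t) * lam_pr[n]
    score = {t: mu[t] + sum(b * lam_pr[n] for b, n in zip(t, psi)) for t in combos}
    for j, n in enumerate(psi):
        num = sum(np.exp(score[t]) for t in combos if t[j] == +1)  # t in t_n^+
        den = sum(np.exp(score[t]) for t in combos if t[j] == -1)  # t in t_n^-
        lam_ps[n] = 0.5 * np.log(num / den)
    return lam_ps
```

With equiprobable priors and a likelihood that favors the combination (+1, +1), both covered bits acquire positive posterior LLRs, while uncovered bits keep their prior value of zero.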
Figure 3 shows a block diagram of one signal-processing cycle using a sequential multi-step procedure for a model with a Markov approximation.
The resulting multi-step signal-processing procedure uses, at each step, the optimal algorithm for estimating the vector of the observed binary bits. However, when passing to the next step, only the parameters describing an independent (factorized) posterior distribution are transferred, not the complete posterior distribution. Thus, at each step of the multi-step procedure, the full posterior distribution is approximated by an independent one, which inevitably leads to losses. In such cases, iterative Turbo processing allows for the reduction of these losses.
The considered algorithm allows closing the loop and implementing the iterative procedure, using the posterior distribution obtained at the last step of the (l − 1)-th iteration as the prior distribution at the first step of the l-th iteration, i.e.,
$$\lambda_{n,1,\mathrm{pr}}^{(l)} = \lambda_{n,M-k,\mathrm{ps}}^{(l-1)}, \quad l = 1, 2, \ldots$$
However, according to Equation (59), the a posteriori parameters $\lambda_{n,M-k,\mathrm{ps}}^{(l-1)}$, and therefore the a priori parameters $\lambda_{n,1,\mathrm{pr}}^{(l)}$, already partly contain information obtained when processing the $i$-th equivalent likelihood function at the previous iteration, expressed through the increment:
$$\delta\lambda_{n,i}^{(l-1)} = \lambda_{n,i,\mathrm{ps}}^{(l-1)} - \lambda_{n,i,\mathrm{pr}}^{(l-1)}.$$
Therefore, to avoid the duplication and incorrect accumulation of information, the result of processing the $i$-th equivalent likelihood function at the (l − 1)-th iteration must be excluded from the prior distribution used at the $i$-th processing step of the l-th iteration. Consequently, we use the following parameters:
$$\lambda_{n,i,\mathrm{pr}}^{(l)} = \lambda_{n,i-1,\mathrm{ps}}^{(l)} - \delta\lambda_{n,i}^{(l-1)}.$$
The result is an algorithm with the block diagram shown in Figure 4.
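The bookkeeping behind this exclusion of stale extrinsic information can be sketched as follows. This is a minimal illustration, not the authors' implementation; `turbo_pass`, `step_fn`, and the data layout are hypothetical names introduced for this sketch.

```python
import numpy as np

def turbo_pass(steps, lam_init, n_iter, step_fn):
    """Iterative loop over the equivalent likelihood functions (sketch).

    steps   : list of per-step likelihood descriptions (one per step i)
    step_fn : callable(step_data, lam_pr) -> lam_ps, e.g. implementing Eq. (59)
    The increment delta = lam_ps - lam_pr produced by step i is stored and
    subtracted before step i is revisited at the next iteration, so the
    same information is never accumulated twice.
    """
    lam = np.asarray(lam_init, dtype=float).copy()
    delta = np.zeros((len(steps), lam.size))  # extrinsic increment per step
    for _ in range(n_iter):
        for i, s in enumerate(steps):
            lam_pr = lam - delta[i]       # strip the stale contribution of step i
            lam = step_fn(s, lam_pr)      # re-process step i with refreshed prior
            delta[i] = lam - lam_pr       # updated extrinsic increment
    return lam
```

A useful sanity check on the subtraction: if each step simply adds a fixed LLR increment, the result stabilizes after the first iteration instead of growing without bound, which is exactly the double-counting this correction prevents.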
The complexity of the proposed algorithm is determined by the complexity of the MMSE detector, which is proportional to $M^3$, and by the complexities of Turbo processing and QAM demodulation, which are proportional to $N_{it} M 2^{m(k+1)}$, where $M$ is the number of transmitting antennas, $m$ is the number of bits per symbol of the QAM constellation, $N_{it}$ is the number of iterations in the Turbo algorithm, and $k$ is the connectivity parameter of the model. $N_{it}$ and $k$ are parameters of the algorithm and can be changed during operation to trade off complexity against performance. For $N_{it} = 1$ and $k = 0$, the complexity and characteristics of the algorithm coincide with those of the MMSE detector. Note that the complexity of the ML algorithm is proportional to $2^{mM}$, which is significantly higher than that of the proposed Turbo algorithm with the sparse transformation of the channel model.
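For orientation, these growth rates can be evaluated for the 8 × 8 QPSK configuration considered in Section 4 ($M = 8$, $m = 2$, two Turbo iterations, $k = 1$). This is a back-of-the-envelope sketch: the expressions track scaling only and ignore all constant factors, so the numbers should not be read as exact operation counts.

```python
# Orders of growth only; all constant factors are ignored.
M, m = 8, 2            # transmit antennas; bits per modulation symbol (QPSK)
N_it, k = 2, 1         # Turbo iterations; Markov connectivity parameter

mmse_ops = M ** 3                                     # MMSE detector ~ M^3
turbo_ops = mmse_ops + N_it * M * 2 ** (m * (k + 1))  # + Turbo/QAM demodulation
ml_ops = 2 ** (m * M)                                 # ML search over 2^(mM) vectors

print(mmse_ops, turbo_ops, ml_ops)  # 512 768 65536
```

Even at this crude level of accounting, the exhaustive ML search is roughly two orders of magnitude more expensive than the proposed scheme, consistent with the complexity reduction reported in the conclusions.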
Thus, the proposed approach to sparsifying the MIMO channel matrix by the Markov approximation method enables the use of simplified Turbo processing algorithms for signal demodulation in MIMO systems.

4. Modeling and Verification

In order to verify the efficiency of the proposed channel matrix sparsification algorithm, link-level simulations were carried out with various types of MIMO detectors. The frame error rate (FER) is a generally accepted performance characteristic for the analysis of various algorithms used in communication systems. It allows a comparison of energy efficiency with different types of modulation, coding, and processing algorithms.
Figure 5 shows the dependence of the FER on the SNR per bit (Eb/N0) for an 8 × 8 MIMO channel. Results are shown for different approaches to the sparsification of the channel matrix (for m = 2) and for different types of MIMO detectors. The simulations were carried out with QPSK modulation and a Turbo code with a rate of 1/2. Results are given for the following options:
  • “Opt.”—optimal soft MIMO demodulator for the exact model (1);
  • “MMSE”—MMSE demodulator for the exact model. It also corresponds to the variant of the channel model approximation by a diagonal matrix (Section 3.1);
  • “Block”—a demodulator using an approximated block-diagonal MIMO channel model (Section 3.2; Equations (21) and (25)) with a block size of 2 × 2, with an optimal demodulator for each block;
  • “Band, Stoh., Opt”—a demodulator using a striped two-diagonal MIMO channel model (Section 3.3; Equation (28)), in which the parameters of the striped channel matrix are calculated by the stochastic optimization method and the optimal demodulator for this model is used;
  • “Markov’s, Turbo det”—a Markov approximation of the channel model (Section 3.4) with connectivity parameters and iterative detection using the method of equivalent likelihood functions (Equations (47) and (50)) and the principle of Turbo processing (two iterations);
  • “Band, Stoh., MPA det”—iterative MPA detector (three iterations) using a two-diagonal striped MIMO channel model (Section 3.3), in which the channel strip matrix parameters are calculated by the stochastic optimization method.
The results from the fourth option above (“Band, Stoh., Opt”) are presented in order to determine the potential capabilities of the different variants of the channel model approximation without taking into account the complexity of the detector implementation, i.e., without the influence of suboptimal post-processing.
The curves show that the Markov approximation, combined with the iterative algorithm based on equivalent likelihood functions and Turbo processing, achieves the same performance as the approximation of the channel model by a striped sparse matrix with stochastically optimized coefficients and an optimal soft demodulator. Thus, this approximation (Markov) together with this demodulation method (Turbo processing + equivalent likelihood functions) achieves the theoretically attainable characteristics for a given value of the connectivity parameter. In this case, the loss compared with the optimal detector operating on the exact channel model is ~0.8 dB. The difference in energy efficiency between the Markov approximation (or the striped-matrix approximation) and the block-diagonal approximation is about 0.5 dB in favor of the Markov one.
The MPA demodulation method (variant 6) is comparable in complexity to variant 5 but exhibits losses of about 0.25 dB.
Figure 6 shows similar curves for a coding rate of 3/4, as well as the dependence on the connectivity coefficient of the Markov model (k = 1, 2, 3).
For this coding rate, the difference between the block approximation and the Markov model is about 1 dB in favor of the Markov model. Increasing the connectivity coefficient of the Markov model improves the energy efficiency but, at the same time, complicates the demodulation algorithm. For k = 3, which is equivalent to m = 4 non-zero elements per row of the channel matrix (half of the total number of symbols), the loss of the proposed approach relative to the optimal demodulation algorithm is 1–1.5 dB.
Figure 7 shows the FER curves for a 16 × 16 MIMO channel, 16QAM modulation, and Turbo coding with a rate of 3/4. The MIMO detection algorithm with the Markov approximation (connectivity coefficient k = 1) and Turbo processing is shown for different numbers of iterations. The second iteration improves the energy characteristics by only 0.15 dB, and the third iteration brings no further improvement. This is largely due to a fairly effective symbol-ordering procedure.
Compared to the MMSE algorithm (diagonal approximation of the model) for a given MIMO channel configuration, modulation, and coding parameters, the gain in the performance is in the range of 8–14 dB.

5. Conclusions

In this paper, efficient methods to approximate the MIMO channel model by sparse matrices or by an equivalent Markov process were introduced and thoroughly analyzed. The proposed approach allows the use of iterative detection methods and provides a trade-off between the implementation complexity of the MIMO detector and its energy efficiency.
Based on the results of link-level simulations, it was demonstrated that the proposed approach makes it possible to achieve performance characteristics of a MIMO communication system that are close to the theoretical maximum but with significantly lower complexity. For example, the reduction in complexity compared to the optimal ML receiver is ~60 times, whereas the SNR loss does not exceed 0.5–1 dB for an 8 × 8 MIMO configuration with QPSK modulation, two iterations, and a Markov model connectivity of k = 1.
The proposed methods are based on the MMSE solution and can be applied adaptively as the SNR deteriorates. If the MMSE algorithm alone provides the required FER characteristics, further processing can be omitted. However, if the quality of the MMSE detection is not satisfactory (low SNR), a first- or second-order Markov model is used. In most cases, one iteration is sufficient.

Author Contributions

Conceptualization, V.K. and S.M.; methodology, M.B. and V.K.; software, M.B.; validation, M.B. and D.P.; investigation, M.B., V.K. and D.P.; writing—original draft preparation, M.B. and V.K.; writing—review and editing, D.P.; visualization, M.B.; supervision, V.S.; funding acquisition, D.P. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This paper has been supported by the RUDN University Strategic Academic Leadership Program (the recipient is Dmitry Petrov).

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Figure 1. The dependence of the Kullback distance on the number of non-zero symbols in the sparsed channel matrix for different methods of multiple-input multiple-output (MIMO) channel matrix sparsification and for different types of symbol ordering.
Figure 2. Generalized block diagram of a MIMO demodulation algorithm using an MMSE detector for factorization or MIMO channel matrix sparsification.
Figure 3. Block diagram of signal processing using a sequential multi-step procedure for a model with Markov approximation.
Figure 4. Block diagram of a single-signal Turbo processing using a sequential multi-step procedure for a Markov approximation model.
Figure 5. Dependences of the frame error rate (FER) on the signal-to-noise ratio per bit for an 8 × 8 MIMO channel with different options for channel matrix sparsification and for different detectors (QPSK modulation and Turbo code with a rate of 1/2).
Figure 6. Dependences of the FER on the signal-to-noise ratio per bit for an 8 × 8 MIMO channel with different options for the channel matrix sparsification and with different detectors (QPSK modulation and Turbo code with a rate of 3/4).
Figure 7. Dependences of the FER on the signal-to-noise ratio per bit for a 16 × 16 MIMO channel for the algorithm with Markov approximation having a different number of iterations and with an MMSE detector (16QAM modulation and Turbo code with a rate of 3/4).
