On Superposition Lattice Codes for the K-User Gaussian Interference Channel

by María Constanza Estela * and Claudio Valencia-Cordero
Department of Electrical Engineering, Universidad de Santiago de Chile (USACH), Santiago 9170022, Chile
* Author to whom correspondence should be addressed.
Entropy 2024, 26(7), 575; https://doi.org/10.3390/e26070575
Submission received: 20 May 2024 / Revised: 18 June 2024 / Accepted: 18 June 2024 / Published: 3 July 2024
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
In this study, we work with lattice Gaussian coding for the K-user Gaussian interference channel. Following the procedure of Etkin et al., in which random codes are shown to achieve within 1 bit/s/Hz of the capacity of the two-user Gaussian interference channel for each type of interference, we work with lattices to take advantage of their structure and their potential for interference alignment. We mimic random codes using a Gaussian distribution over the lattice. By imposing constraints on the flatness factor of the lattices, on the common and private message powers, and on the channel coefficients, we find the conditions under which we obtain the same constant gap to the optimal rate for the two-user weak Gaussian interference channel, and the same generalized degrees of freedom, as those obtained with random codes by Etkin et al. Finally, we show how these results can be extended to the K-user weak Gaussian interference channel using lattice alignment.

1. Introduction

Interference is one of the major issues in wireless communications. One important scenario is the interference channel, where each transmitter wishes to communicate with its corresponding receiver but, as all users share the wireless medium, they interfere with one another. Interference is classified according to its level, from very strong to low. When interference is very strong, it has been demonstrated [1] that the capacity is the same as if there were no interference at all, because the interference can be decoded first. Interference is low when it falls below the noise level; in this case, there is no loss of data rate due to interference. The problem remains open for moderate or weak interference. For this case, the conventional technique consists of orthogonalizing the signals using frequency- or time-division multiple access schemes. From the scope of information theory, interference alignment has been proposed to align interference at each receiver so that it occupies only half of the signal space, leaving the other half for the intended signal, independently of the number of users in the channel. The sum capacity of the K-user interference channel was characterized in [2], where it was found that, at a high signal-to-noise ratio (SNR), a factor of K/2 dominates the capacity. This factor represents the degrees of freedom (DoF).
One of the main achievements in finding the capacity of the two-user interference channel is the work of Han and Kobayashi [3], who found an inner bound for the two-user interference channel using superposition coding. The method consists of splitting each transmitter's signal into a private and a common message. The private message of the interferer is treated as noise, while both common messages and the desired private message are decoded at each receiver. Obtaining similar results when K users are considered is desirable. It has been shown in [4] that, by using lattice codes, the interference caused by many interferers can be made to behave as that caused by a single interferer. At each receiver, the signals can be scaled in such a way that the interference signals lie in a lattice that can be distinguished from the lattice containing the desired signal. This was defined in [4] as lattice alignment. The signal-scaling idea has been studied in [5,6,7,8] to obtain the DoF of different interference channel models. In [5], a deterministic channel approach was applied to the interference channel, where signals are represented in base Q. In [9], the generalized degrees of freedom (GDoF) were found for different types of interference according to the SNR and interference-to-noise ratio (INR) for the two-user interference channel. Following the ideas of [5,9], in [6], the GDoF was found for different levels of interference for the K-user interference channel; the signals are represented in base Q, and a detailed scheme was given for each type of interference. New approaches have since been proposed to find the GDoF of the K-user interference channel.
In particular, in [10], the GDoF of a K-user interference channel was studied when treating interference as noise, which was found to be optimal depending on the relationship between the desired signal strength and the sum of the strengths of the strongest interference from and to this user. In [11], the GDoF of a K-user interference channel was studied using a multi-layer interference alignment scheme with successive decoding. The optimal sum of the GDoF was characterized by the exponents of each of the channel strengths.
Recently, interference alignment has been applied to different scenarios, such as wireless interference channels for smart grids [12], unmanned aerial vehicles in heterogeneous networks [13], and space–air–ground integrated networks [14]. On the other hand, many of the lattice code techniques used in this paper have previously been considered for security. This is the case in [15,16], which studied the secrecy capacity of wiretap channels, and in [17], which studied the secure DoF of the K-user interference channel. However, to the best of our knowledge, few researchers have recently studied the GDoF or the constant gap to the optimal rate of the K-user interference channel using lattice alignment.
Following the ideas of [9], in [18], the GDoF of the two-user symmetric interference channel was found using a lattice Gaussian distribution. In this study, we propose extending these results to the K-user interference channel using additive white Gaussian noise (AWGN)-good lattices. First, we begin with the two-user Gaussian interference channel and work with lattice codes, as we want to exploit the potential of lattices to align interference in the K-user Gaussian interference channel. For this purpose, we propose a lattice Gaussian coding scheme with constraints on the message powers and on the flatness factor of the lattices. Using the intersection of two two-user multiple access channel rate regions, we find conditions under which we obtain, with lattice Gaussian codes, the same constant gap to the optimal rate and, thus, the same GDoF for the two-user weak interference channel as found in [9]. Finally, we show how to apply these results to the K-user interference channel using lattice alignment, with a careful selection of the lattices for each user.

Roadmap

The remainder of this paper is organized as follows. In Section 2, the outer and inner bounds and the GDoF of the two-user interference channel obtained in [9] are shown, and the important lemmas and theorems of lattice Gaussian coding [19] are explained. The main results of this work are stated in Theorems 3 and 4 in Section 4, which identify the channel coefficient conditions under which the same GDoF as in [9] is obtained. To prove this, we proceed as follows:
  • In Section 3.1.1, we show that it is possible to obtain the HK rate region of a two-user interference channel as the intersection of the rate regions of two two-user multiple access channels.
  • In Section 3.1.2, we express the HK rate region for a two-user Gaussian interference channel with lattice distribution (Section 3.1.2 for a K-user Gaussian interference channel). For this, we introduce restrictions over the flatness factor of lattices given by Lemmas 3 and 4, as well as Theorem 2.
  • Finally, in Section 4.1, for Lemma 9, we apply power constraints to the private and common messages of a two-user weak Gaussian interference channel (Lemma 10 for a K-user weak Gaussian interference channel). These constraints are then applied to obtain conditions for the channel coefficients (Theorem 3 for a two-user weak Gaussian interference channel and Theorem 4 for a K-user weak Gaussian interference channel), which finally lead to the constant gap to the optimal rate and the GDoF of the two-user interference channel obtained in [9].
In Section 5, we discuss and highlight the results obtained. Finally, the conclusions of this work are drawn in Section 6.

2. Preliminaries

A study by Etkin et al. [9] characterized the capacity of the two-user interference channel to within 1 bit/s/Hz. When the power of the interference is smaller than the power of the desired signal, a range of values in which the Han and Kobayashi achievable rate (hereafter, the HK rate) is contained can be found. The GDoF is found by normalizing this rate by the capacity of the point-to-point AWGN channel and taking the limit of this ratio as the SNR and INR tend to infinity. To do this, random Gaussian codes and a simple HK scheme are used. In this section, we show the results of [9] for the two-user weak interference channel and, later, we present the main results on lattice Gaussian coding [19], whose lemmas and theorems are used for our subsequent results.

2.1. Outer and Inner Bounds for the Two-User Weak Gaussian Interference Channel [9]

The channel model given in [9] is expressed as
$y_i = \sum_{j=1}^{2} h_{ji} x_j + z_i,$
where $i, j = 1, 2$, $x_j \in \mathbb{C}$ is subject to the power constraint $E[|x_j|^2] = P_j$, and the noise is $z_i \sim \mathcal{CN}(0, N_0)$. The channel coefficient from transmitter j to receiver i is denoted by $h_{ji}$. Let $\mathrm{SNR}_i = |h_{ii}|^2 P_i / N_0$ be the SNR of user i, and let $\mathrm{INR}_1 = |h_{21}|^2 P_2 / N_0$ and $\mathrm{INR}_2 = |h_{12}|^2 P_1 / N_0$. The authors in [9] provide a new outer bound for the two-user weak and mixed Gaussian interference channel. Here, we show their results for the weak interference case:
$R_1 \le \log\left(1 + \mathrm{SNR}_1\right)$
$R_2 \le \log\left(1 + \mathrm{SNR}_2\right)$
$R_1 + R_2 \le \log\left(1 + \mathrm{SNR}_2\right) + \log\left(1 + \frac{\mathrm{SNR}_1}{\mathrm{INR}_1 + 1}\right)$
$R_1 + R_2 \le \log\left(1 + \mathrm{SNR}_1\right) + \log\left(1 + \frac{\mathrm{SNR}_2}{\mathrm{INR}_2 + 1}\right)$
$R_1 + R_2 \le \log\left(1 + \mathrm{INR}_1 + \frac{\mathrm{SNR}_1}{\mathrm{INR}_2 + 1}\right) + \log\left(1 + \mathrm{INR}_2 + \frac{\mathrm{SNR}_2}{\mathrm{INR}_1 + 1}\right)$
$2R_1 + R_2 \le \log\left(1 + \mathrm{SNR}_1 + \mathrm{INR}_1\right) + \log\left(1 + \mathrm{INR}_2 + \frac{\mathrm{SNR}_2}{\mathrm{INR}_1 + 1}\right) + \log\left(1 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_2}\right)$
$R_1 + 2R_2 \le \log\left(1 + \mathrm{SNR}_2 + \mathrm{INR}_2\right) + \log\left(1 + \mathrm{INR}_1 + \frac{\mathrm{SNR}_1}{\mathrm{INR}_2 + 1}\right) + \log\left(1 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_1}\right).$
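For concreteness, these outer bounds are easy to evaluate numerically. The following is a minimal sketch of our own (not part of [9]; the function name and the sample operating point are assumptions), with base-2 logarithms so that rates are in bit/s/Hz:

```python
import math

def etkin_outer_bounds(snr1, snr2, inr1, inr2):
    """Evaluate the outer bounds of [9] for the weak interference case;
    the three sum-rate bounds are collected under a single min."""
    lg = math.log2
    return {
        "R1": lg(1 + snr1),
        "R2": lg(1 + snr2),
        "R1+R2": min(
            lg(1 + snr2) + lg(1 + snr1 / (inr1 + 1)),
            lg(1 + snr1) + lg(1 + snr2 / (inr2 + 1)),
            lg(1 + inr1 + snr1 / (inr2 + 1)) + lg(1 + inr2 + snr2 / (inr1 + 1)),
        ),
        "2R1+R2": (lg(1 + snr1 + inr1) + lg(1 + inr2 + snr2 / (inr1 + 1))
                   + lg(1 + snr1 / (1 + inr2))),
        "R1+2R2": (lg(1 + snr2 + inr2) + lg(1 + inr1 + snr1 / (inr2 + 1))
                   + lg(1 + snr2 / (1 + inr1))),
    }

# weak interference example: INR < SNR at both receivers
bounds = etkin_outer_bounds(snr1=100.0, snr2=100.0, inr1=10.0, inr2=10.0)
```

At such an operating point the sum-rate bound is strictly tighter than the sum of the individual rate bounds, as expected in the weak interference regime.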
Later, as presented in [3], superposition coding is considered. The private message of user $i = 1, 2$ is represented as $u_i$, while the common message is represented as $w_i$; user i transmits the signal $x_i = u_i + w_i$. The private codeword $u_i$ is meant to be decoded only by user i, while it is treated as noise by the other user. Both $w_1$ and $w_2$ are decoded by both users. In [9], the codebooks are generated using i.i.d. random Gaussian variables, and the interference-to-noise ratio created by the private message, denoted $\mathrm{INR}_p$, is set equal to 1. A simplified HK scheme is used to find an achievable region within 1 bit/s/Hz of the outer bound. To begin, in ([20] [Section 3.2]), a simplification of the HK rate region is found, which relies on the fact that many of the bounds found in [3] are redundant; this has also been acknowledged by Han and Kobayashi in [21]. Consider the auxiliary variables given in [3], $U_1, U_2, W_1, W_2$ and Q, where $U_i$ represents the private information of user i, $W_i$ represents the common information of user $i = 1, 2$, and Q is a time-sharing parameter. Given the set $Z = (Q, U_1, W_1, U_2, W_2, X_1, X_2, Y_1, Y_2)$, the HK rate region $\mathcal{R}(Z)$ is the set of all simultaneously achievable rate pairs $(R_1, R_2)$ satisfying ([20] [Section 3.2]):
$R_1 \le \min\{ I(Y_1; W_1 | W_2, Q),\; I(Y_2; W_1 | U_2, W_2, Q) \} + I(Y_1; U_1 | W_1, W_2, Q),$
$R_2 \le \min\{ I(Y_2; W_2 | W_1, Q),\; I(Y_1; W_2 | U_1, W_1, Q) \} + I(Y_2; U_2 | W_1, W_2, Q),$
$R_1 + R_2 \le \min\{ I(Y_1; W_1, W_2 | Q),\; I(Y_2; W_1, W_2 | Q),\; I(Y_1; W_2 | W_1, Q) + I(Y_2; W_1 | W_2, Q) \} + I(Y_1; U_1 | W_1, W_2, Q) + I(Y_2; U_2 | W_1, W_2, Q),$
$2R_1 + R_2 \le I(Y_1; W_1, W_2 | Q) + I(Y_2; W_1 | W_2, Q) + 2 I(Y_1; U_1 | W_1, W_2, Q) + I(Y_2; U_2 | W_1, W_2, Q),$
$R_1 + 2R_2 \le I(Y_2; W_1, W_2 | Q) + I(Y_1; W_2 | W_1, Q) + I(Y_1; U_1 | W_1, W_2, Q) + 2 I(Y_2; U_2 | W_1, W_2, Q).$
Later, in [9], the authors showed that a simple HK scheme can achieve within one bit of the capacity of the two-user interference channel, considering three cases: (1) a weak interference channel, where $\mathrm{INR}_1 < \mathrm{SNR}_2$ and $\mathrm{INR}_2 < \mathrm{SNR}_1$; (2) a mixed interference channel, where $\mathrm{INR}_1 \ge \mathrm{SNR}_2$ and $\mathrm{INR}_2 < \mathrm{SNR}_1$, or $\mathrm{INR}_1 < \mathrm{SNR}_2$ and $\mathrm{INR}_2 \ge \mathrm{SNR}_1$; and (3) a strong interference channel, where $\mathrm{INR}_1 \ge \mathrm{SNR}_2$ and $\mathrm{INR}_2 \ge \mathrm{SNR}_1$. To complete this study, we present their within-one-bit results for the weak interference channel. In Theorem 5 of [9], the authors proved that the achievable region $\mathcal{R}(\min\{1, \mathrm{INR}_2\}, \min\{1, \mathrm{INR}_1\})$ is within one bit of the capacity region of the two-user weak Gaussian interference channel. For this, note that both the outer bound region and the HK rate region are delimited by straight lines of slopes $0, -1/2, -1, -2, \infty$, defined by the bounds on $R_1$, $R_2$, $R_1 + R_2$, $2R_1 + R_2$ and $R_1 + 2R_2$. In [9], this outer bound is given by (2)–(8). These bounds are denoted by $UB_{R_1}, UB_{R_2}, UB_{R_1+R_2}, UB_{2R_1+R_2}, UB_{R_1+2R_2}$ for the outer bound region and $HK_{R_1}, HK_{R_2}, HK_{R_1+R_2}, HK_{2R_1+R_2}, HK_{R_1+2R_2}$ for the HK region. The differences between these bounds are denoted by $\Delta_{R_1} = UB_{R_1} - HK_{R_1}$, $\Delta_{R_2} = UB_{R_2} - HK_{R_2}$, $\Delta_{R_1+R_2} = UB_{R_1+R_2} - HK_{R_1+R_2}$, $\Delta_{2R_1+R_2} = UB_{2R_1+R_2} - HK_{2R_1+R_2}$ and $\Delta_{R_1+2R_2} = UB_{R_1+2R_2} - HK_{R_1+2R_2}$. Thus, the following condition is sufficient for the achievable region to be within 1 bit/s/Hz [9]:
$\Delta_{R_1} < 1, \quad \Delta_{R_2} < 1, \quad \Delta_{R_1+R_2} < 2, \quad \Delta_{2R_1+R_2} < 3, \quad \Delta_{R_1+2R_2} < 3.$
This is achieved by dividing the proof into four cases [9]:
(1)
$\mathrm{INR}_1 \ge 1$ and $\mathrm{INR}_2 \ge 1$. In this case ([9] [Corollary 1]), the achievable region $\mathcal{R}(1, 1)$ contains all the rate pairs $(R_1, R_2)$ satisfying:
$R_1 \le \log\left(2 + \mathrm{SNR}_1\right) - 1$
$R_2 \le \log\left(2 + \mathrm{SNR}_2\right) - 1$
$R_1 + R_2 \le \log\left(\mathrm{SNR}_2 + 2\,\mathrm{INR}_1\right) + \log\left(1 + \frac{\mathrm{SNR}_1 + 1}{\mathrm{INR}_1}\right) - 2$
$R_1 + R_2 \le \log\left(\mathrm{SNR}_1 + 2\,\mathrm{INR}_2\right) + \log\left(1 + \frac{\mathrm{SNR}_2 + 1}{\mathrm{INR}_2}\right) - 2$
$R_1 + R_2 \le \log\left(1 + \mathrm{INR}_1 + \frac{\mathrm{SNR}_1}{\mathrm{INR}_2}\right) + \log\left(1 + \mathrm{INR}_2 + \frac{\mathrm{SNR}_2}{\mathrm{INR}_1}\right) - 2$
$2R_1 + R_2 \le \log\left(1 + \mathrm{SNR}_1 + \mathrm{INR}_1\right) + \log\left(1 + \mathrm{INR}_2 + \frac{\mathrm{SNR}_2}{\mathrm{INR}_1}\right) + \log\left(2 + \frac{\mathrm{SNR}_1}{\mathrm{INR}_2}\right) - 3$
$R_1 + 2R_2 \le \log\left(1 + \mathrm{SNR}_2 + \mathrm{INR}_2\right) + \log\left(1 + \mathrm{INR}_1 + \frac{\mathrm{SNR}_1}{\mathrm{INR}_2}\right) + \log\left(2 + \frac{\mathrm{SNR}_2}{\mathrm{INR}_1}\right) - 3.$
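The gap conditions can likewise be checked numerically. The sketch below is our own illustration (the function names and the symmetric operating point are assumptions): it evaluates the outer bounds and the $\mathcal{R}(1,1)$ bounds above and verifies that the differences fall under the thresholds 1, 1, 2, 3, 3:

```python
import math
lg = math.log2

def outer_weak(S1, S2, I1, I2):
    # outer bounds of [9] for the weak interference case (base-2 logs)
    return {
        "R1": lg(1 + S1),
        "R2": lg(1 + S2),
        "sum": min(lg(1 + S2) + lg(1 + S1 / (I1 + 1)),
                   lg(1 + S1) + lg(1 + S2 / (I2 + 1)),
                   lg(1 + I1 + S1 / (I2 + 1)) + lg(1 + I2 + S2 / (I1 + 1))),
        "2R1+R2": (lg(1 + S1 + I1) + lg(1 + I2 + S2 / (I1 + 1))
                   + lg(1 + S1 / (1 + I2))),
        "R1+2R2": (lg(1 + S2 + I2) + lg(1 + I1 + S1 / (I2 + 1))
                   + lg(1 + S2 / (1 + I1))),
    }

def hk_R11(S1, S2, I1, I2):
    # achievable region R(1,1), valid for INR1 >= 1 and INR2 >= 1
    return {
        "R1": lg(2 + S1) - 1,
        "R2": lg(2 + S2) - 1,
        "sum": min(lg(S2 + 2 * I1) + lg(1 + (S1 + 1) / I1) - 2,
                   lg(S1 + 2 * I2) + lg(1 + (S2 + 1) / I2) - 2,
                   lg(1 + I1 + S1 / I2) + lg(1 + I2 + S2 / I1) - 2),
        "2R1+R2": (lg(1 + S1 + I1) + lg(1 + I2 + S2 / I1)
                   + lg(2 + S1 / I2) - 3),
        "R1+2R2": (lg(1 + S2 + I2) + lg(1 + I1 + S1 / I2)
                   + lg(2 + S2 / I1) - 3),
    }

S, I = 1000.0, 30.0                        # symmetric, weak: 1 <= INR < SNR
ub, hk = outer_weak(S, S, I, I), hk_R11(S, S, I, I)
gaps = {k: ub[k] - hk[k] for k in ub}      # differences Delta of each bound
```

At this point the gaps come out close to, but below, their thresholds, which is consistent with the sufficient condition being tight.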
(2)
$\mathrm{INR}_1 < 1$ and $\mathrm{INR}_2 \ge 1$. In this case, the achievable region $\mathcal{R}(1, \mathrm{INR}_1)$ contains all the rate pairs satisfying:
$R_1 \le \log\left(1 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_1}\right) - 1$
$R_2 \le \log\left(2 + \mathrm{SNR}_2\right) - 1$
$R_1 + R_2 \le \log\left(\mathrm{INR}_2 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_1}\right) + \log\left(1 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_2}\right) - 1$
$R_1 + R_2 \le \log\left(1 + \frac{\mathrm{SNR}_1}{1 + 1/\mathrm{INR}_1}\right) + \log\left(2 + \mathrm{SNR}_2\right) - 1$
$R_1 + R_2 \le \log\left(\mathrm{INR}_2 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_1}\right) + \log\left(1 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_2}\right) - 1$
$2R_1 + R_2 \le \log\left(1 + \mathrm{SNR}_1 + \mathrm{INR}_1\right) + \log\left(1 + \mathrm{SNR}_2 + \mathrm{INR}_2\right) + \log\left(1 + \mathrm{INR}_1 + \frac{\mathrm{SNR}_1}{\mathrm{INR}_2}\right) + \log\frac{2}{1 + \mathrm{INR}_1} - 2$
$R_1 + 2R_2 \le \log\left(2 + \mathrm{SNR}_2\right) + \log\left(1 + \mathrm{INR}_2 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_2}\right) + \log\left(1 + \frac{1 + \mathrm{SNR}_2}{\mathrm{INR}_2}\right) - 2.$
(3)
$\mathrm{INR}_1 \ge 1$ and $\mathrm{INR}_2 < 1$. In this case, the achievable region $\mathcal{R}(\mathrm{INR}_2, 1)$ is analogous to the previous one.
(4)
$\mathrm{INR}_1 < 1$ and $\mathrm{INR}_2 < 1$. In this case, the achievable region $\mathcal{R}(\mathrm{INR}_2, \mathrm{INR}_1)$ contains the rate pairs satisfying:
$R_1 \le \log\left(1 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_1}\right)$
$R_2 \le \log\left(1 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_2}\right).$
The capacity region is denoted in [9] by $\mathcal{C}(\mathrm{SNR}_1, \mathrm{SNR}_2, \mathrm{INR}_1, \mathrm{INR}_2)$, with parameters $\mathrm{SNR}_1, \mathrm{SNR}_2, \mathrm{INR}_1$ and $\mathrm{INR}_2$. The GDoF is defined as [9]
$D(\alpha_1, \alpha_2, \alpha_3) = \lim \left\{ \left( \frac{R_1}{\log \mathrm{SNR}_1}, \frac{R_2}{\log \mathrm{SNR}_2} \right) : (R_1, R_2) \in \mathcal{C}(\mathrm{SNR}_1, \mathrm{SNR}_2, \mathrm{INR}_1, \mathrm{INR}_2) \right\},$
where the limit is taken as $\mathrm{SNR}_1, \mathrm{SNR}_2, \mathrm{INR}_1, \mathrm{INR}_2 \to \infty$ with $\alpha_1 = \frac{\log \mathrm{SNR}_2}{\log \mathrm{SNR}_1}$, $\alpha_2 = \frac{\log \mathrm{INR}_1}{\log \mathrm{SNR}_1}$ and $\alpha_3 = \frac{\log \mathrm{INR}_2}{\log \mathrm{SNR}_1}$ held fixed, and $R_1 = d_1 \log \mathrm{SNR}_1$, $R_2 = d_2 \log \mathrm{SNR}_2$ for $(d_1, d_2) \in D$. Using various approximations, the GDoF for the weak interference channel is given by [9]:
$d_1 \le 1$
$d_2 \le 1$
$d_1 + \alpha_1 d_2 \le \min\left\{ 1 + (\alpha_1 - \alpha_3)^{+},\; \alpha_1 + (1 - \alpha_2)^{+},\; \max(\alpha_2, 1 - \alpha_3) + \max(\alpha_3, \alpha_1 - \alpha_2) \right\}$
$2 d_1 + \alpha_1 d_2 \le \max(1, \alpha_2) + \max(\alpha_3, \alpha_1 - \alpha_2) + 1 - \alpha_3$
$d_1 + 2 \alpha_1 d_2 \le \max(\alpha_1, \alpha_3) + \max(\alpha_2, 1 - \alpha_3) + \alpha_1 - \alpha_2.$
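The GDoF bounds above are simple piecewise-linear functions and can be evaluated directly. In the following sketch (our own illustration; $(x)^{+} = \max(x, 0)$, and the symmetric evaluation point is an assumption), the case $\alpha_1 = 1$, $\alpha_2 = \alpha_3 = \alpha$ recovers a point of the well-known "W" curve of [9]:

```python
def gdof_bounds(a1, a2, a3):
    """Right-hand sides of the weak-interference GDoF bounds of [9]
    on d1 + a1*d2, 2*d1 + a1*d2 and d1 + 2*a1*d2 (also d1, d2 <= 1)."""
    pos = lambda x: max(x, 0.0)            # the (x)^+ operator
    b_sum = min(1 + pos(a1 - a3),
                a1 + pos(1 - a2),
                max(a2, 1 - a3) + max(a3, a1 - a2))
    b_21 = max(1, a2) + max(a3, a1 - a2) + 1 - a3
    b_12 = max(a1, a3) + max(a2, 1 - a3) + a1 - a2
    return b_sum, b_21, b_12

# symmetric weak interference: a1 = 1, a2 = a3 = alpha; with d1 = d2 = d the
# bounds read 2d <= b_sum, 3d <= b_21 and 3d <= b_12
alpha = 0.8
b_sum, b_21, b_12 = gdof_bounds(1.0, alpha, alpha)
d_sym = min(1.0, b_sum / 2, b_21 / 3, b_12 / 3)
# for alpha = 0.8 this gives d_sym = 1 - alpha/2 = 0.6
```

The value $1 - \alpha/2$ for $2/3 \le \alpha \le 1$ is the corresponding segment of the symmetric GDoF curve of [9].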

2.2. Lattice Gaussian Coding

In this study, we use lattices due to their potential to align interference by means of lattice alignment for any number of users in an interference channel. Lattice codes also allow us to work in high dimensions, and certain lattice sequences are AWGN-good, i.e., well suited for AWGN channels (Definition 3). We also note that the randomness of the codewords is useful, particularly when part of the codeword has to be treated as noise. Furthermore, the within-one-bit capacity characterization demonstrated in [9], as well as the GDoF, is based on Gaussian random codes. For these reasons, lattice Gaussian codes [19] are considered. In this section, we present the main results on this topic, which will be applied in the following sections to the interference channel.
Definition 1 
(Lattice [22]). A lattice is a regularly spaced array of points, properly defined as $\Lambda = \left\{ x = \sum_{i=1}^{m} \lambda_i v_i, \; \lambda_i \in \mathbb{Z} \right\}$. The lattice has dimension m, where $v_1, v_2, \ldots, v_m$ are linearly independent vectors in $\mathbb{R}^n$ and $\{v_1, v_2, \ldots, v_m\}$ is a basis of the lattice.
Definition 2 
(Theta series). Let $\Theta_\Lambda(q)$ be the theta series of a lattice Λ: $\Theta_\Lambda(q) = \sum_{\lambda \in \Lambda} q^{\|\lambda\|^2}$, where, in this paper, $q = e^{-\frac{1}{2\sigma^2}}$.
Definition 3 
(AWGN-good [19]). Consider a sequence of lattices $\Lambda^{(n)}$ of increasing dimension n with volume-to-noise ratio (VNR) defined as $\gamma_\Lambda(\sigma) = \frac{V(\Lambda)^{2/n}}{\sigma^2}$, where $V(\Lambda)$ is the fundamental volume of the lattice Λ and $\sigma^2$ is the power of the i.i.d. Gaussian noise $W^n$. The sequence is AWGN-good if, for all $P_e \in (0, 1)$, $\lim_{n \to \infty} \gamma_{\Lambda^{(n)}}(\sigma) = 2\pi e$ and if, for a fixed VNR greater than $2\pi e$, $P_e$ vanishes in n, where $P_e = P\{ W^n \notin \mathcal{V}(\Lambda) \}$ is the error probability of minimum-distance lattice decoding.
Definition 4 
(Discrete Gaussian distribution [19]). Define the discrete Gaussian distribution over Λ centered at $c \in \mathbb{R}^n$ as the discrete distribution taking values in $\lambda \in \Lambda$: $D_{\Lambda,\sigma,c}(\lambda) = \frac{f_{\sigma,c}(\lambda)}{f_{\sigma,c}(\Lambda)}, \; \forall \lambda \in \Lambda$, where $f_{\sigma,c}(\Lambda) \triangleq \sum_{\lambda \in \Lambda} f_{\sigma,c}(\lambda)$ and $f_{\sigma,c}(x)$ is the Gaussian density of variance $\sigma^2$ centered at $c \in \mathbb{R}^n$, $f_{\sigma,c}(x) = \frac{1}{(\sqrt{2\pi}\sigma)^n} e^{-\frac{\|x - c\|^2}{2\sigma^2}}$. For convenience, $f_\sigma(x) = f_{\sigma,0}(x)$ and $D_{\Lambda,\sigma} = D_{\Lambda,\sigma,0}$.
We also consider the Λ-periodic function
$f_{\sigma,\Lambda}(x) = \sum_{\lambda \in \Lambda} f_{\sigma,\lambda}(x) = \frac{1}{(\sqrt{2\pi}\sigma)^n} \sum_{\lambda \in \Lambda} e^{-\frac{\|x - \lambda\|^2}{2\sigma^2}},$
for all $x \in \mathbb{R}^n$.
Definition 5 
(Discrete Gaussian distribution over a coset [19]). The discrete Gaussian distribution over a coset of Λ, that is, the shifted lattice $\Lambda - c$, is defined as $D_{\Lambda - c, \sigma}(\lambda - c) = \frac{f_\sigma(\lambda - c)}{f_{\sigma,c}(\Lambda)}, \; \forall \lambda \in \Lambda$. Note that $D_{\Lambda - c, \sigma}(\lambda - c) = D_{\Lambda, \sigma, c}(\lambda)$; thus, they are shifted versions of each other.
Definition 6 
(Flatness factor [23]). In [24], the notion of the flatness factor of a lattice Λ was introduced. An equivalent definition is applied in [15,23]: $\epsilon_\Lambda(\sigma) \triangleq \max_{w \in \mathcal{R}(\Lambda)} \left| V(\Lambda) f_{\sigma,\Lambda}(w) - 1 \right|$. Thus, the ratio between $f_{\sigma,\Lambda}(w)$ and the uniform distribution over $\mathcal{R}(\Lambda) \subset \mathbb{R}^n$ lies within the range $[1 - \epsilon_\Lambda(\sigma), 1 + \epsilon_\Lambda(\sigma)]$, where $\mathcal{R}(\Lambda)$ is a fundamental region of the lattice Λ. The flatness factor of Λ is then given by [15]: $\epsilon_\Lambda(\sigma) = \left( \frac{\gamma_\Lambda(\sigma)}{2\pi} \right)^{\frac{n}{2}} \Theta_\Lambda\left( e^{-\frac{1}{2\sigma^2}} \right) - 1.$
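To make the definition concrete, the flatness factor can be computed numerically from a truncated theta series. The sketch below is our own toy example using the cubic lattice $\mathbb{Z}^n$, for which $V(\Lambda) = 1$ and hence $\gamma_\Lambda(\sigma) = 1/\sigma^2$; note that $\mathbb{Z}^n$ is not one of the good lattice sequences of Theorem 1 below, so its flatness factor is only small when σ is large enough per dimension:

```python
import math

def flatness_factor_Zn(sigma, n, kmax=50):
    """epsilon_{Z^n}(sigma) = (gamma/(2*pi))^(n/2) * Theta(q) - 1, with
    gamma = 1/sigma^2, Theta_{Z^n}(q) = (sum_k q^(k^2))^n, q = e^(-1/(2 sigma^2)).
    The one-dimensional theta series is truncated at |k| = kmax."""
    q = math.exp(-1.0 / (2.0 * sigma ** 2))
    theta_Z = sum(q ** (k * k) for k in range(-kmax, kmax + 1))
    return (1.0 / (2.0 * math.pi * sigma ** 2)) ** (n / 2.0) * theta_Z ** n - 1.0

eps_flat = flatness_factor_Zn(sigma=2.0, n=16)    # essentially zero: very flat
eps_peaky = flatness_factor_Zn(sigma=0.35, n=16)  # large: far from flat
```

For σ well above the spacing of $\mathbb{Z}$, the periodic Gaussian is numerically indistinguishable from uniform; for small σ, the distribution concentrates around the lattice points and the factor blows up.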
Theorem 1 
([19]). For all σ and all δ > 0, there exists a sequence of mod-p lattices $\Lambda^{(n)}$ such that
$\epsilon_{\Lambda^{(n)}}(\sigma) \le (1 + \delta) \cdot \left( \frac{\gamma_{\Lambda^{(n)}}(\sigma)}{2\pi} \right)^{\frac{n}{2}};$
that is, the flatness factor goes to zero exponentially in n for any fixed VNR $\gamma_{\Lambda^{(n)}}(\sigma) < 2\pi$.
From [25], mod-p lattices are defined as $\Lambda_C = p\mathbb{Z}^n + C$, where p is a prime and C is a linear code over $\mathbb{Z}_p$, the ring of integers modulo p.
The following lemma shows that, when the flatness factor is small, the variance per dimension of the discrete Gaussian $D_{\Lambda,\sigma,c}$ is close to that of the continuous Gaussian.
Lemma 1 
([15,19]). Let x be sampled from the discrete Gaussian distribution $D_{\Lambda,\sigma,c}$. If $\varepsilon \triangleq \epsilon_\Lambda\!\left( \sigma \big/ \sqrt{\pi/(\pi - t)} \right) < 1$ for $0 < t < \pi$, then
$\left| \frac{E\left[ \|x - c\|^2 \right]}{n} - \sigma^2 \right| \le \frac{2\pi \varepsilon_t}{1 - \varepsilon}\, \sigma^2,$
where
$\varepsilon_t = \begin{cases} \varepsilon, & t \ge 1/e \\ (t^{-4} + 1)\,\varepsilon, & 0 < t < 1/e. \end{cases}$
Lemma 2 
(Entropy of the discrete Gaussian [15]). Let $x \sim D_{\Lambda,\sigma,c}$. If $\varepsilon \triangleq \epsilon_\Lambda\!\left( \sigma \big/ \sqrt{\pi/(\pi - t)} \right) < 1$ for $0 < t < \pi$, then the entropy rate of x satisfies
$\left| \frac{1}{n} H(x) - \left[ \log\left( \sqrt{2\pi e}\, \sigma \right) - \frac{1}{n} \log V(\Lambda) \right] \right| \le \varepsilon',$
where $\varepsilon' = -\frac{\log(1 - \varepsilon)}{n} + \frac{\pi \varepsilon_t}{n(1 - \varepsilon)}$.
The next lemma shows that the probability of a lattice Gaussian distribution falling outside a ball of radius larger than $\sqrt{n}\,\sigma$ is exponentially small.
Lemma 3 
([19]). Let $x \sim D_{\Lambda,\sigma,c}$ and $\varepsilon \triangleq \epsilon_\Lambda(\sigma) < 1$. Then, for any $\rho > 1$,
$P\left( \|x - c\| > \rho \cdot \sqrt{n}\,\sigma \right) \le \frac{1 + \varepsilon}{1 - \varepsilon} \cdot e^{-n E_{sp}(\rho^2)},$
where $E_{sp}(x) = \frac{1}{2}\left( x - 1 - \log x \right)$ for $x > 1$ is the sphere packing exponent.
Definition 7 
(Semi-spherical noise [26]). Let $\mathcal{B}(0, r)$ denote a ball of radius r centered at zero. A sequence $Z^n$ is semi-spherical if, $\forall \delta > 0$, $P\left( Z^n \in \mathcal{B}(0, (1 + \epsilon)\sqrt{n}\,\sigma) \right) > 1 - \delta$ for sufficiently large n.
Therefore, x D Λ , σ , c can be seen as semi-spherical noise. It is known that the sum of semi-spherical noise and AWGN is semi-spherical [26].
The following Lemma shows that if the flatness factor is small, the sum of the discrete Gaussian distribution and a continuous Gaussian distribution is very close to a continuous Gaussian distribution.
Lemma 4 
([19]). Given any vector, c R n and σ 0 , σ > 0 . Let σ ˜ = σ σ 0 σ 2 + σ 0 2 and σ s = σ 0 2 + σ 2 . Consider the continuous distribution r on R n obtained by adding a continuous Gaussian of variance σ 2 to a discrete Gaussian D Λ c , σ 0 :
r ( x ) = 1 f σ 0 Λ c t Λ c f σ 0 ( t ) f σ ( x t ) , x R n
If ε = ϵ Λ ( σ ˜ ) < 1 2 , then r ( x ) f σ s ( x ) is uniformly close to 1:
x R n , r ( x ) f σ s ( x ) 1 4 ε .
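Lemma 4 is easy to check numerically in one dimension. The following sketch (our own illustration, with the toy choices Λ = Z, c = 0 and equal widths) convolves a discrete Gaussian on Z with a continuous Gaussian and compares the result against the continuous Gaussian of variance $\sigma_s^2$:

```python
import math

def r_over_f(x, sigma0, sigma, kmax=60):
    """Ratio r(x)/f_{sigma_s}(x) for Lambda = Z and c = 0: the discrete
    Gaussian D_{Z, sigma0} convolved with a continuous Gaussian of std sigma.
    The lattice sums are truncated at |k| = kmax."""
    f = lambda t, s: math.exp(-t * t / (2.0 * s * s)) / (math.sqrt(2.0 * math.pi) * s)
    norm = sum(f(k, sigma0) for k in range(-kmax, kmax + 1))     # f_{sigma0}(Z)
    r = sum(f(k, sigma0) * f(x - k, sigma)
            for k in range(-kmax, kmax + 1)) / norm
    sigma_s = math.sqrt(sigma0 ** 2 + sigma ** 2)
    return r / f(x, sigma_s)

# here sigma_tilde = 4/sqrt(8) ~ 1.41, so eps_Z(sigma_tilde) is tiny and the
# mixture is numerically indistinguishable from the continuous Gaussian
ratios = [r_over_f(x / 10.0, sigma0=2.0, sigma=2.0) for x in range(40)]
```

With these widths the ratio stays within a band far narrower than the $4\varepsilon$ guaranteed by the lemma, since $\epsilon_{\mathbb{Z}}(\tilde{\sigma})$ is itself tiny.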
As the distance between points is not uniform, the decoding is performed using MAP decoding. It is demonstrated in [19] that MAP decoding is equivalent to MMSE lattice decoding. The following lemma is given for the error performance of the AWGN-good lattices.
Lemma 5 
([19]). If L is AWGN-good, the average error probability of the MAP decoder is bounded by
P e 1 + ϵ L σ 0 2 σ 0 2 + σ 2 1 ϵ L σ 0 e n L E p γ L σ ˜
where σ ˜ is defined in Lemma 4 and E p ( μ ) denotes the Poltyrev exponent
E p ( μ ) = 1 2 ( μ 1 ) log μ , 1 < μ 2 1 2 log e μ 4 , 2 μ 4 μ 8 , μ 4
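The Poltyrev exponent is a simple piecewise function; a direct transcription (our own, with natural logarithms) makes its continuity at the breakpoints and its positivity for μ > 1 explicit:

```python
import math

def poltyrev_exponent(mu):
    """Poltyrev exponent E_p(mu) of Lemma 5 (natural logs), for mu > 1."""
    if mu <= 1.0:
        raise ValueError("E_p is defined for mu > 1")
    if mu <= 2.0:
        return 0.5 * ((mu - 1.0) - math.log(mu))   # 1 < mu <= 2
    if mu <= 4.0:
        return 0.5 * math.log(math.e * mu / 4.0)   # 2 <= mu <= 4
    return mu / 8.0                                # mu >= 4
```

The three branches agree at μ = 2 and μ = 4, so the exponent is continuous, and it is strictly positive for every μ > 1, which is what makes the error bound of Lemma 5 vanish exponentially.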
It is shown in [19] that, in order to achieve this bound, the condition $\sigma_0^2 / \sigma^2 > e$ must be fulfilled; that is, the SNR must be larger than e. Thus, the following theorem shows that, by using a lattice Gaussian codebook, we can achieve a rate arbitrarily close to the channel capacity while making the error probability vanish exponentially, as long as $\mathrm{SNR} > e$.
Theorem 2 
([19]). Consider a lattice code whose codewords are drawn from the discrete Gaussian distribution $D_{L - c, \sigma_s}$ for an AWGN-good lattice L. Let $\varepsilon_t$ and $\varepsilon$ be as defined in Lemma 1, $\varepsilon'$ as defined in Lemma 2, and let $\varepsilon'' > 0$ be small. If $\mathrm{SNR} > e$, then any rate (as defined in [19])
$R \ge \frac{1}{2} \log(1 + \mathrm{SNR}) - \frac{\pi \varepsilon_t}{n_L (1 - \varepsilon)} - \frac{1}{2} \varepsilon'' - \varepsilon'$
up to the channel capacity
$\frac{1}{2} \log\left( 1 + \mathrm{SNR} \right)$
is achievable, while the error probability of MMSE lattice decoding vanishes exponentially fast, as in (39).
The development and proof of Theorem 2 can be found in [19].

3. Materials and Methods

3.1. Lattice Gaussian Coding for the Two-User Gaussian Interference Channel

In this section, we analyze the case for the two-user weak Gaussian interference channel using lattice Gaussian codes. Consider the following channel model:
$y_i = h_{ii} x_i + \sum_{j \ne i} h_{ji} x_j + z_i,$
where $h_{ii}$ and $h_{ji}$ are the real direct and indirect channel gains, respectively; $x_i$ is the signal transmitted by transmitter i; $x_j$ is the signal transmitted by transmitter j; and $z_i$ is additive white Gaussian noise with zero mean and variance $\sigma^2$, $i, j = 1, 2$, $i \ne j$. As in [3,9], the transmitted symbols are constructed by superposing a common and a private message, given by $w_i$ and $u_i$, respectively, for user $i = 1, 2$; thus, $x_i = w_i + u_i$. At receiver i, the common messages of both transmitters, $h_{ii} w_i$ and $h_{ji} w_j$, and the desired private message $h_{ii} u_i$ are decoded, while the interfering private message $h_{ji} u_j$ is treated as noise, where $j = 1, 2$, $j \ne i$. Define $S_i$ as the signal-to-noise ratio of user i and $I_i$ as the interference-to-noise ratio of user i. Furthermore, define, as presented in [9],
$S_i^c \triangleq \frac{h_{ii}^2 \sigma_{w_i}^2}{\sigma^2}, \qquad S_i^p \triangleq \frac{h_{ii}^2 \sigma_{u_i}^2}{\sigma^2},$
as the common and private signal-to-noise ratios of user i, respectively, and
$I_i^c \triangleq \frac{h_{ji}^2 \sigma_{w_j}^2}{\sigma^2}, \qquad I_i^p \triangleq \frac{h_{ji}^2 \sigma_{u_j}^2}{\sigma^2},$
as the common and private interference-to-noise ratios of user i, respectively. Thus, $S_i = S_i^c + S_i^p$ and $I_i = I_i^c + I_i^p$. We consider the weak interference case, where $I_i < S_j$.

3.1.1. Finding the Han–Kobayashi Rate Region with the Intersection of Two Two-User MACs

In [3], the best achievable rate region for a two-user interference channel was found using superposition coding. We will show that we can separate the problem into two multiple access channels (MACs), which can be intersected to obtain the achievable rate region obtained in [3].
Lemma 6. 
The extreme points of the achievable regions of MAC 1 and MAC 2, respectively, are given by (see Figure 1):
$G = \left( I(Y_1; W_1 | W_2, Q) + I(Y_1; U_1 | W_1, W_2, Q),\; 0 \right)$
$A = \left( I(Y_1; W_1 | W_2, Q) + I(Y_1; U_1 | W_1, W_2, Q),\; I(Y_1; W_2 | Q) \right)$
$E = \left( 0,\; I(Y_1; W_2 | U_1, W_1, Q) \right)$
$C = \left( I(Y_1; W_1 | Q) + I(Y_1; U_1 | W_1, Q),\; I(Y_1; W_2 | U_1, W_1, Q) \right)$
$G' = \left( I(Y_2; W_1 | U_2, W_2, Q),\; 0 \right)$
$A' = \left( I(Y_2; W_1 | U_2, W_2, Q),\; I(Y_2; W_2 | Q) + I(Y_2; U_2 | W_2, Q) \right)$
$E' = \left( 0,\; I(Y_2; W_2 | W_1, Q) + I(Y_2; U_2 | W_1, W_2, Q) \right)$
$C' = \left( I(Y_2; W_1 | Q),\; I(Y_2; W_2 | W_1, Q) + I(Y_2; U_2 | W_1, W_2, Q) \right)$
Proof: In order to find each of the MAC rate regions, we follow the procedure explained in [3], Appendix A. First, we notice that each MAC rate region is delimited from above by only four straight lines, as opposed to the IC region, which is delimited by five. This is because each MAC receiver only needs to decode both common messages and its own private message. Thus, the only possible slopes for MAC 1 are $0, -1/2, -1, \infty$, and for MAC 2, they are $0, -1, -2, \infty$. Following the procedure explained in [3], Appendix A, it is straightforward to find (47)–(50). We find that point B is equal to C; therefore, we only have three slopes, given by $0, -1, \infty$. It is also possible to find point H, where $R_1^C + R_2^C = R_1^H + R_2^H$.
$H = \left( I(Y_1; W_1 | Q) + I(Y_1; U_1 | W_1, W_2, Q),\; I(Y_1; W_2 | W_1, Q) \right)$
We can follow a similar analysis for MAC 2, from (51) to (54). In this case, we find that point B′ is equal to A′; therefore, we again have only three slopes, given by $0, -1, \infty$. It is also possible to find point H′, where $R_1^{A'} + R_2^{A'} = R_1^{H'} + R_2^{H'}$.
$H' = \left( I(Y_2; W_1 | W_2, Q),\; I(Y_2; W_2 | Q) + I(Y_2; U_2 | W_1, W_2, Q) \right)$
Lemma 7. 
The achievable rate region found in ([3] [Theorem 4.1]) for a two-user interference channel can be obtained by intersecting the achievable rate regions of two two-user multiple access channels, and it is given by:
$R_1 \le \underbrace{I(Y_1; U_1 | W_1, W_2, Q)}_{D_1} + \underbrace{\min\{ I(Y_1; W_1 | W_2, Q),\; I(Y_2; W_1 | U_2, W_2, Q) \}}_{T_1}$
$R_2 \le \underbrace{I(Y_2; U_2 | W_1, W_2, Q)}_{D_2} + \underbrace{\min\{ I(Y_2; W_2 | W_1, Q),\; I(Y_1; W_2 | U_1, W_1, Q) \}}_{T_2}$
$R_1 + R_2 \le \underbrace{I(Y_1; U_1 | W_1, W_2, Q)}_{D_1} + \underbrace{I(Y_2; U_2 | W_1, W_2, Q)}_{D_2} + \underbrace{I(Y_1; W_1, W_2 | Q)}_{T_1 + T_2}$
$R_1 + R_2 \le \underbrace{I(Y_1; U_1 | W_1, W_2, Q)}_{D_1} + \underbrace{I(Y_2; U_2 | W_1, W_2, Q)}_{D_2} + \underbrace{I(Y_2; W_1, W_2 | Q)}_{T_1 + T_2}$
$R_1 + R_2 \le \underbrace{I(Y_1; U_1 | W_1, Q)}_{D_1} + \underbrace{I(Y_2; U_2 | W_2, Q)}_{D_2} + \underbrace{I(Y_2; W_1 | U_2, W_2, Q)}_{T_1} + \underbrace{I(Y_1; W_2 | U_1, W_1, Q)}_{T_2}$
$2R_1 + R_2 \le 2\underbrace{I(Y_1; U_1 | W_1, W_2, Q)}_{D_1} + \underbrace{I(Y_2; U_2 | W_2, Q)}_{D_2} + \underbrace{I(Y_2; W_1 | U_2, W_2, Q)}_{T_1} + \underbrace{I(Y_1; W_1, W_2 | Q)}_{T_1 + T_2}$
$R_1 + 2R_2 \le \underbrace{I(Y_1; U_1 | W_1, Q)}_{D_1} + 2\underbrace{I(Y_2; U_2 | W_1, W_2, Q)}_{D_2} + \underbrace{I(Y_1; W_2 | U_1, W_1, Q)}_{T_2} + \underbrace{I(Y_2; W_1, W_2 | Q)}_{T_1 + T_2}$
The proof of Lemma 7 can be found in Appendix A.
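The corner points of each MAC region correspond to successive decoding orders, whose per-step rates telescope to a sum rate via the chain rule. The Gaussian-case sketch below is our own illustration (the powers A, B, C, D, N are hypothetical placeholders): it evaluates the order $w_i \to u_i \to w_j$, with the private interference of power D always left as noise:

```python
import math

def successive_rates(A, B, C, D, N):
    """Per-step rates of the decoding order w_i -> u_i -> w_j at receiver i.
    A = |h_ii|^2 s_wi^2 and C = |h_ii|^2 s_ui^2 (desired common/private powers),
    B = |h_ji|^2 s_wj^2 and D = |h_ji|^2 s_uj^2 (interference), N = noise power."""
    R_wi = 0.5 * math.log2(1.0 + A / (B + C + D + N))   # everything else as noise
    R_ui = 0.5 * math.log2(1.0 + C / (B + D + N))       # after removing w_i
    R_wj = 0.5 * math.log2(1.0 + B / (D + N))           # after removing w_i, u_i
    return R_wi, R_ui, R_wj

A, B, C, D, N = 40.0, 10.0, 4.0, 1.0, 1.0
total = sum(successive_rates(A, B, C, D, N))
# chain rule: the three steps telescope to 0.5*log2((A+B+C+D+N)/(D+N)), the
# sum rate of the three decodable messages over the residual noise D + N
```

This telescoping is why the MAC corner points can be written as sums of conditional mutual informations in Lemma 6, with the private interference term appearing in every denominator.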

3.1.2. Two-User Gaussian Interference Channel Using Lattice Gaussian Coding

We assume that $h_{ji} w_j$ and $h_{ji} u_j$, for any $i, j = 1, 2$, are lattice codewords and, more importantly, follow a lattice Gaussian distribution. Let us define the lattices properly. We set $h_{ii} w_i \sim D_{\Delta_i, \delta_i}$ with $\delta_i = |h_{ii}| \sigma_{w_i}$; $h_{ji} w_j \sim D_{\Pi_i, \rho_i}$ with $\rho_i = |h_{ji}| \sigma_{w_j}$; $h_{ii} u_i \sim D_{\Gamma_i, \gamma_i}$ with $\gamma_i = |h_{ii}| \sigma_{u_i}$; and $h_{ji} u_j \sim D_{\Psi_i, \tau_i}$ with $\tau_i = |h_{ji}| \sigma_{u_j}$, where $s \sim D_{\Lambda, \sigma}$ indicates that s follows the discrete lattice Gaussian distribution over Λ, centered at zero and with variance $\sigma^2$. Note that $x_i$ is the superposition of two lattice Gaussians. Figure 2 illustrates an example of the lattice Gaussian distribution of the private and common messages of $x_i$.
Based on the ideas of [3,9], at each receiver, both common and private desired messages must be decoded, along with the common interference message. However, the private interference message is considered noise. To decode similarly to [3], we consider successive decoding. Thus, while decoding one of the messages, the others are considered noise. This is not a problem when the codes used are Gaussian codes, as presented in [3,9]. Therefore, we not only work with lattice codes but with lattice Gaussian codes. The common messages are designed such that they are decodable at both receivers, while the private message must be designed such that it is decodable only at the desired receiver, and at the other receiver, it must be considered noise. Let us define the power of the private and common messages for transmitter i as σ u i 2 and σ w i 2 , respectively, where i = 1 , 2 . From Lemma 4 it can be observed that if
$\epsilon_{\Psi_i}\!\left( \frac{\tau_i \sigma}{\sqrt{\tau_i^2 + \sigma^2}} \right) = \epsilon_{\Psi_i}\!\left( \frac{|h_{ji}| \sigma_{u_j} \sigma}{\sqrt{h_{ji}^2 \sigma_{u_j}^2 + \sigma^2}} \right) < \frac{1}{2},$
then $h_{ji} u_j + z_i$ is not far from a continuous Gaussian distribution, and we can treat $h_{ji} u_j$ as noise. Thus, the new noise $\tilde{z}_i = h_{ji} u_j + z_i$ is an AWGN with variance $h_{ji}^2 \sigma_{u_j}^2 + \sigma^2$. Lemma 3 is applied to $h_{ii} w_i$, $h_{ji} w_j$ and $h_{ii} u_i$ with the flatness factor conditions:
$\epsilon_{\Delta_i}(\delta_i) < 1,$
$\epsilon_{\Pi_i}(\rho_i) < 1,$
$\epsilon_{\Gamma_i}(\gamma_i) < 1.$
One important consequence of separating the problem into two MAC regions is that it makes the decoding strategy visible. As in [3], the decoding strategy is the following. For MAC 1, as can be observed from (47)–(50), we either decode $W_2$, then $W_1$ and finally $U_1$, or $W_1$, then $U_1$ and finally $W_2$, in both cases treating the private interference message as noise. For MAC 2, the approach is similar: from (51)–(54), we either decode $W_2$, then $U_2$ and finally $W_1$, or $W_1$, then $W_2$ and finally $U_2$. This can be formally expressed as follows. Considering the system given by (42), we show the two possible decoding orders at receiver i, $i = 1, 2$:
  • Decoding $h_{ii} w_i$, then $h_{ii} u_i$ and finally $h_{ji} w_j$: If we decode the desired common message $w_i$ first, to treat the remaining messages as noise we must apply Lemmas 4 and 3. Lemma 4 is applied to $h_{ji} u_j$, while Lemma 3 is applied to $h_{ji} w_j$ and $h_{ii} u_i$. Thus, we decode $w_i$ from $y_i = h_{ii} w_i + \check{z}_i$, where $\check{z}_i = h_{ji} w_j + h_{ii} u_i + h_{ji} u_j + z_i$ is the new semi-spherical noise. This is valid from Lemma 4 with the flatness factor condition
    $\epsilon_{\Psi_i}\!\left( \frac{|h_{ji}| \sigma_{u_j} \sigma}{\sqrt{h_{ji}^2 \sigma_{u_j}^2 + \sigma^2}} \right) < \frac{1}{2}$
    and from Lemma 3 with the flatness factor conditions
    $\epsilon_{\Pi_i}(\rho_i) < 1$
    and
    $\epsilon_{\Gamma_i}(\gamma_i) < 1.$
    Consider now Theorem 2. We have that
    $I(Y_i; W_i) = \frac{1}{2} \log\left( 1 + \frac{h_{ii}^2 \sigma_{w_i}^2}{h_{ji}^2 \sigma_{w_j}^2 + h_{ii}^2 \sigma_{u_i}^2 + h_{ji}^2 \sigma_{u_j}^2 + \sigma^2} \right),$
    with the condition
    $\frac{h_{ii}^2 \sigma_{w_i}^2}{h_{ji}^2 \sigma_{w_j}^2 + h_{ii}^2 \sigma_{u_i}^2 + h_{ji}^2 \sigma_{u_j}^2 + \sigma^2} > e.$
    Next, we decode the desired private message with a subset of the flatness factor conditions already defined in the first step. Thus, we decode $h_{ii} u_i$ from $y_i - h_{ii} \hat{w}_i = h_{ii} u_i + h_{ji} w_j + h_{ji} u_j + z_i$, where $\hat{w}_i$ is the estimate of $w_i$, considering (64) and (66), which are the flatness factor conditions that make $h_{ji} u_j$ and $h_{ji} w_j$ part of the noise. Utilizing Theorem 2, we obtain
    I Y i ; U i W i = 1 2 log 1 + h i i 2 σ u i 2 h j i 2 σ w j 2 + h j i 2 σ u j 2 + σ 2
    where
    h i i 2 σ u i 2 h j i 2 σ w j 2 + h j i 2 σ u j 2 + σ 2 > e
    Finally, we can decode w j using y i − h i i w ^ i − h i i u ^ i = h j i w j + ( h j i u j + z i ) , where w ^ i and u ^ i are the estimates of w i and u i , respectively. Again, using Lemma 4, we can consider h j i u j as part of the noise, with its respective flatness factor condition (64), and we can apply Theorem 2 to obtain
    I Y i ; W j U i W i = 1 2 log 1 + h j i 2 σ w j 2 h j i 2 σ u j 2 + σ 2
    where
    h j i 2 σ w j 2 h j i 2 σ u j 2 + σ 2 > e
  • Decoding h j i w j , then h i i w i and finally h i i u i :
    If we start by decoding the interference common message first, h j i w j , to consider the rest of the messages as noise, we apply Lemma 3 to h i i w i and h i i u i with the flatness factor conditions (65) and (67), and Lemma 4 to h j i u j with the flatness factor condition (64).
    Then, using Theorem 2, we obtain
    I Y i ; W j = 1 2 log 1 + h j i 2 σ w j 2 h i i 2 σ w i 2 + h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2
    where
    h j i 2 σ w j 2 h i i 2 σ w i 2 + h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2 > e
    Here, we decode the desired common message w i from y i − h j i w ^ j = h i i w i + h i i u i + h j i u j + z i , where w ^ j is the estimate of w j , considering, as previously mentioned, h i i u i and h j i u j as noise with the conditions (67) and (64). Using Theorem 2, we obtain
    I Y i ; W i W j = 1 2 log 1 + h i i 2 σ w i 2 h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2
    where
    h i i 2 σ w i 2 h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2 > e
    Finally, once both common messages have been found, we can decode u i using y i − h i i w ^ i − h j i w ^ j = h i i u i + ( h j i u j + z i ) , where w ^ i and w ^ j are the estimates of w i and w j , respectively. Again, using Lemma 4, we can consider h j i u j as part of the noise, with its respective flatness factor condition (64), and we can apply Theorem 2 to obtain
    I Y i ; U i W i W j = 1 2 log 1 + h i i 2 σ u i 2 h j i 2 σ u j 2 + σ 2
    where
    h i i 2 σ u i 2 h j i 2 σ u j 2 + σ 2 > e
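As a numerical sanity check of the first decoding order above, the three successive rates can be computed directly from Theorem 2 (a sketch with example gains and variances of our own choosing; the second order swaps the roles of the common messages and generally requires stronger cross gains):

```python
import math

E = math.e  # SINR threshold required by Theorem 2

def rate_bits(signal_power, noise_power):
    """0.5*log2(1 + SINR); Theorem 2 requires SINR > e, so check it."""
    sinr = signal_power / noise_power
    assert sinr > E, "SINR condition of Theorem 2 violated"
    return 0.5 * math.log2(1 + sinr)

# Example (arbitrary) gains and variances at receiver i, interferer j.
h_ii, h_ji = 4.0, 0.5
s_wi = s_wj = 100.0   # common message variances
s_ui = s_uj = 5.0     # private message variances
s2 = 1.0              # noise variance

# Decoding order w_i -> u_i -> w_j: at each step the not-yet-decoded
# messages (plus the interferer's private message) act as noise.
R_wi = rate_bits(h_ii**2 * s_wi,
                 h_ji**2 * s_wj + h_ii**2 * s_ui + h_ji**2 * s_uj + s2)
R_ui = rate_bits(h_ii**2 * s_ui, h_ji**2 * s_wj + h_ji**2 * s_uj + s2)
R_wj = rate_bits(h_ji**2 * s_wj, h_ji**2 * s_uj + s2)
```

Note that each call checks its own `> e` condition, mirroring the flatness factor requirements of the lemmas.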

3.2. Lattice Gaussian Coding for the K-User Interference Channel

In this section, we demonstrate how to use the previous results for the K-user interference channel utilizing lattice Gaussian coding.
Consider a K-user interference channel model given by:
y i = h i i x i + j i h j i x j + z i ,
where h i i and h j i are the real direct and indirect channel gains, respectively; x i is the signal transmitted by transmitter i; x j is the signal transmitted by transmitter j; and z i is the additive white Gaussian noise with variance σ 2 and zero mean, where i , j = 1 , , K .
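This channel model can be simulated directly; a minimal sketch (the function name and the gain-matrix convention H[j][i] = h j i are our own):

```python
import random

def interference_channel(H, x, sigma):
    """y_i = h_ii x_i + sum over j != i of h_ji x_j + z_i, z_i ~ N(0, sigma^2).

    H[j][i] holds the real gain h_ji from transmitter j to receiver i."""
    K = len(x)
    outputs = []
    for i in range(K):
        desired = H[i][i] * x[i]
        interference = sum(H[j][i] * x[j] for j in range(K) if j != i)
        outputs.append(desired + interference + random.gauss(0.0, sigma))
    return outputs

y = interference_channel([[2.0, 0.5], [0.3, 3.0]], [1.0, 1.0], 0.0)
```

With sigma = 0 the model reduces to a deterministic superposition, which is convenient for testing.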

K-User Gaussian Interference Channel Using Lattice Gaussian Coding

The main idea of using lattice codes is to apply lattice alignment to the receivers so that we can ensure the model mimics a two-user interference channel.
Lemma 8. 
In a K-user Gaussian interference channel where lattice alignment is used, such that the channel resembles K two-user MACs, the number of lattice Gaussian codes needed to align interference is given by 2 + 4 K .
Proof. 
To prove this, let us begin with a three-user interference channel example. Our goal is to mimic the idea of the two-user interference channel, where we can intersect the two-user MAC regions. For simplicity, let us consider the following channel model with only common messages:
y 1 = h 11 w 1 + h 21 w 2 + h 31 w 3 + z 1
y 2 = h 22 w 2 + h 12 w 1 + h 32 w 3 + z 2
y 3 = h 33 w 3 + h 13 w 1 + h 23 w 2 + z 3
In order to mimic a two-user interference channel, we will say that each user will see only one interference user in the following way:
For user i:
y i = h i i w i + j i h j i w j + z i
y i n t i = j i y j = j i h j j w j + j , l i , l j h l j w j + j i h i j w i + j i z j .
We have user i and the interference user, which is now the addition of two interferers; namely, user int i (see Figure 3).
Assigning the lattices when K 3 is more challenging than for the two-user case. Suppose we assign the lattices in the same way as the two-user interference channel. In this case, we would have that h i i w i Δ , h j i w j Θ , h j l w j Θ , h i j w i Θ , for i , j = 1 , 2 , 3 and i j . Then, we would have
y i Δ + Θ + Θ
y i n t i Δ + Δ + Θ + Θ + Θ + Θ .
We can see that it would not be possible to decode at int i , as we cannot decode j , l i , l j h l j w j from j i h i j w i . Let us instead consider the strategy shown in Table 1.
In the first column, we show the perspective of each user. User 1 assigns h i i w i Δ for i = 1 , 2 , 3 , while it assigns h j 1 w j Π and h 1 j w 1 Π for j = 2 , 3 . User 2 assigns h i i w i Δ for i = 1 , 2 , 3 , while it assigns h j 2 w j Θ and h 2 j w 2 Θ for j = 1 , 3 . User 3 assigns h i i w i Δ for i = 1 , 2 , 3 , while it assigns h j 3 w j Υ and h 3 j w 3 Υ for j = 1 , 2 . The combination of all possible lattices for the case of h j i w j for i , j = 1 , 2 , 3 , i j is given in the last line of Table 1. Note that, for example, the lattice Π Θ is not necessarily a combination of lattices Π and Θ . It simply symbolizes a lattice that is useful for both h 21 w 2 for users 1 and 2 and h 12 w 1 for users 1 and 2. The same can be applied for h 31 w 3 for users 1 and 3, h 13 w 1 for users 1 and 3, h 23 w 2 for users 2 and 3 and h 32 w 3 for users 2 and 3. Thus, for each user, we obtain the following:
For user 1:
y 1 Δ + Π Θ + Π Υ
y i n t 1 Δ + Θ Υ + Θ Υ + Δ + Π Θ + Π Υ .
Similarly, for user 2:
y 2 Δ + Π Θ + Θ Υ
y i n t 2 Δ + Π Υ + Π Υ + Δ + Π Θ + Θ Υ .
Furthermore, for user 3:
y 3 Δ + Π Υ + Θ Υ
y i n t 3 Δ + Π Θ + Π Θ + Δ + Π Υ + Θ Υ .
Here, let us focus on decoding. In order to obtain the same decoding rates at the desired receiver as in the two-user case, we need to be able to decode in one of two orders: common interference messages, then common desired messages and, finally, private desired messages; or common desired messages, then private desired messages and, finally, common interference messages. At the interference receiver, we likewise need to decode either common interferer messages, then private interferer messages and, finally, the common messages of user i; or the common messages of user i, then common interferer messages and, finally, private interferer messages. In our three-user example without private messages, this means the following:
  • At receiver 1, we decode to lattice Δ and then to lattice Π 1 , where Π Θ + Π Υ Π 1 , or to lattice Π 1 and then to lattice Δ .
  • At receiver int 1 , we decode to lattice Δ 1 , where Δ + Θ Υ + Θ Υ + Δ Δ 1 , and then to Π 1 or first to Π 1 and then to Δ 1 .
    The process is similar for the other users. Thus, for our three-user interference channel example using only common messages, we need seven lattices to be able to decode three users and three interference users. This can be observed in Figure 4.
Following the same strategy, we find that for any K-user interference channel with common and private messages, we would need 2 + 4 K lattices. Note that by using this strategy, there are lattices that repeat both at user i and user int i , thus allowing us to reduce the number of lattices that are needed to decode. □
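The lattice count of Lemma 8 can be sanity-checked with a trivial counter (a sketch; the decomposition into shared and per-user lattices is our reading of the strategy above):

```python
def lattices_common_only(K):
    # One shared lattice (Delta) for all direct common messages, plus two
    # decoding lattices per user (Pi_i for the aligned interference, Delta_i
    # for the aggregate at receiver int_i): cf. the example, 1 + 2*3 = 7.
    return 1 + 2 * K

def lattices_common_and_private(K):
    # With private messages the shared lattices double (Delta, Gamma) and so
    # do the per-user ones, giving the 2 + 4K count stated in Lemma 8.
    return 2 + 4 * K
```

For the three-user common-only example this recovers the seven lattices found in the proof.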
The channel model is now given by:
y i = h i i w i + j i h j i w j + h i i u i + j i h j i u j + z i
y i n t i = j i h j l w j + j i h j j w j + j i h i j w i + j i h j l u j + j i h j j u j + j i h i j u i + z i n t i ,
where i , j , l = 1 , … , K . Let us properly define the lattices as follows:
  • h i i w i ∼ D Δ , δ , where δ = h i i σ w i ;
  • j i h j i w j ∼ D Π i , ρ i and j i h i j w i ∼ D Π i , ρ i , where ρ i = j i h j i 2 σ w j 2 = j i h i j 2 σ w i 2 ;
  • h i i u i ∼ D Γ , γ , where γ = h i i σ u i ;
  • j i h j i u j ∼ D Ψ i , τ i and j i h i j u i ∼ D Ψ i , τ i , where τ i = j i h j i 2 σ u j 2 = j i h i j 2 σ u i 2 ;
  • j i h j l w j + h j j w j ∼ D Λ i , λ i , where λ i = j i h j l 2 + j i h j j 2 σ w j 2 ;
  • j i h j l u j + h j j u j ∼ D Υ i , υ i , where υ i = j i h j l 2 + j i h j j 2 σ u j 2 ;
for i , j , l = 1 , … , K , j ≠ i , and where we will assume that σ w = σ w i = σ w j , σ u = σ u i = σ u j and h j i 2 = h i j 2 for any i , j = 1 , … , K .
We can now define S i c h i i 2 σ w i 2 / σ 2 , S i p h i i 2 σ u i 2 / σ 2 , I i c j i h j i 2 σ w j 2 / σ 2 , I i p j i h j i 2 σ u j 2 / σ 2 , S i n t i c ( j i h j l 2 σ w j 2 + j i h j j 2 σ w j 2 ) / σ 2 , S i n t i p ( j i h j l 2 σ u j 2 + j i h j j 2 σ u j 2 ) / σ 2 , I i n t i c j i h i j 2 σ w i 2 / σ 2 , I i n t i p j i h i j 2 σ u i 2 / σ 2 .
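These normalized powers are straightforward to compute; a sketch at receiver i (0-indexed, with our own convention H[j][i] = h j i and equal message variances across users, as assumed above):

```python
def normalized_powers(i, H, s_w, s_u, sigma2):
    """Return S_i^c, S_i^p, I_i^c and I_i^p at receiver i."""
    K = len(H)
    # Aggregate cross-gain power seen at receiver i.
    cross = sum(H[j][i] ** 2 for j in range(K) if j != i)
    return {
        "S_c": H[i][i] ** 2 * s_w / sigma2,
        "S_p": H[i][i] ** 2 * s_u / sigma2,
        "I_c": cross * s_w / sigma2,
        "I_p": cross * s_u / sigma2,
    }

H = [[2.0, 0.5, 0.5], [0.5, 2.0, 0.5], [0.5, 0.5, 2.0]]
p = normalized_powers(0, H, 4.0, 1.0, 1.0)
```

The int i quantities follow the same pattern with the aggregate gains in place of the cross gains.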
From (97) and (98), for each i = 1 , … , K , we have two MAC regions with two possible rates, R i and R i n t i . Therefore, the interference channel rate region is given by:
R i min { I Y i ; W i W i n t i Q , I Y i n t i ; W i U i n t i W i n t i Q } + I Y i ; U i W i W i n t i Q ,
R i n t i min { I Y i n t i ; W i n t i W i Q , I Y i ; W i n t i U i W i Q } + I Y i n t i ; U i n t i W i W i n t i Q ,
R i + R i n t i min { I Y i ; W i W i n t i Q , I Y i n t i ; W i W i n t i Q , I Y i ; W i n t i W i Q + I Y i n t i ; W i W i n t i Q } + I Y i ; U i W i W i n t i Q + I Y i n t i ; U i n t i W i W i n t i Q ,
2 R i + R i n t i I Y i ; W i W i n t i Q + I Y i n t i ; W i W i n t i Q + 2 I Y i ; U i W i W i n t i Q + I Y i n t i ; U i n t i W i W i n t i Q
R i + 2 R i n t i I Y i n t i ; W i W i n t i Q + I Y i ; W i n t i W i Q + I Y i ; U i W i W i n t i Q + 2 I Y i n t i ; U i n t i W i W i n t i Q .
for i = 1 , … , K .
In this case, Lemmas 3 and 4 can still be fulfilled using:
ϵ Ψ i τ i σ τ i 2 + σ 2 < 1 2 ,
ϵ Δ δ < 1 ,
ϵ Π i ρ i < 1 ,
ϵ Λ i λ i < 1 ,
ϵ Γ γ < 1 ,
ϵ Υ i υ i < 1 .
As for the two-user case, we will consider decoding in the following manner, based on (97) and (98):
  • Decoding at receiver i:
    (a)
    Decoding h i i w i , then h i i u i and finally h j i w j : If we decode the desired common message h i i w i first, treating the rest of the messages as noise, we have to apply Lemma 4 to h j i u j and Lemma 3 to h j i w j and h i i u i . Thus, we decode w i from y i = h i i w i + z ˇ i , where z ˇ i = h j i w j + h i i u i + h j i u j + z i is the new semi-spherical noise. This is valid from Lemma 4 with the flatness factor condition
    ϵ Ψ i τ i σ τ i 2 + σ 2 < 1 2 ,
    and from Lemma 3 with the flatness factor conditions
    ϵ Π i ρ i < 1
    and
    ϵ Γ γ < 1 ,
    From Theorem 2, we have that
    I Y i ; W i = 1 2 log 1 + h i i 2 σ w i 2 h j i 2 σ w j 2 + h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2
    with the condition
    h i i 2 σ w i 2 h j i 2 σ w j 2 + h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2 > e
    We now decode the desired private message with a subset of the flatness factor conditions, which were already defined in the first step. Thus, we decode h i i u i from y i − w ^ i = h i i u i + h j i w j + h j i u j + z i , where w ^ i is the estimate of h i i w i , considering (104) and (106), which are the flatness factor conditions that make h j i u j and h j i w j part of the noise. Utilizing Theorem 2, we obtain
    I Y i ; U i W i = 1 2 log 1 + h i i 2 σ u i 2 h j i 2 σ w j 2 + h j i 2 σ u j 2 + σ 2
    where
    h i i 2 σ u i 2 h j i 2 σ w j 2 + h j i 2 σ u j 2 + σ 2 > e
    Finally, we can decode h j i w j using y i − w ^ i − u ^ i = h j i w j + ( h j i u j + z i ) , where w ^ i and u ^ i are the estimates of h i i w i and h i i u i , respectively. Again, using Lemma 4, we can consider h j i u j as part of the noise, with its respective flatness factor condition (104), and we can apply Theorem 2 to obtain
    I Y i ; W i n t i U i W i = 1 2 log 1 + h j i 2 σ w j 2 h j i 2 σ u j 2 + σ 2
    where
    h j i 2 σ w j 2 h j i 2 σ u j 2 + σ 2 > e
    (b)
    Decoding h j i w j , then h i i w i and finally h i i u i :
    If we start by decoding the interference common message h j i w j first, treating the rest of the messages as noise, we apply Lemma 3 to h i i w i and h i i u i with the flatness factor conditions (105) and (108) and Lemma 4 to h j i u j with the flatness factor condition (104).
    Then, using Theorem 2, we obtain
    I Y i ; W i n t i = 1 2 log 1 + h j i 2 σ w j 2 h i i 2 σ w i 2 + h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2
    where
    h j i 2 σ w j 2 h i i 2 σ w i 2 + h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2 > e
    Here, we decode the desired common message h i i w i from y i − w ^ j = h i i w i + h i i u i + h j i u j + z i , where w ^ j is the estimate of h j i w j , considering, as previously mentioned, h i i u i and h j i u j as noise with the conditions (108) and (104). Using Theorem 2, we obtain
    I Y i ; W i W i n t i = 1 2 log 1 + h i i 2 σ w i 2 h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2
    where
    h i i 2 σ w i 2 h i i 2 σ u i 2 + h j i 2 σ u j 2 + σ 2 > e
    Finally, once both common messages have been found, we can decode h i i u i using y i − w ^ i − w ^ j = h i i u i + ( h j i u j + z i ) , where w ^ i and w ^ j are the estimates of h i i w i and h j i w j , respectively. Again, using Lemma 4, we can consider h j i u j as part of the noise, with its respective flatness factor condition (104), and we can apply Theorem 2 to obtain
    I Y i ; U i W i W i n t i = 1 2 log 1 + h i i 2 σ u i 2 h j i 2 σ u j 2 + σ 2
    where
    h i i 2 σ u i 2 h j i 2 σ u j 2 + σ 2 > e
  • We will now decode at receiver i n t i :
    (a)
    Decoding h j l w j + h j j w j , then h j l u j + h j j u j and finally h i j w i : If we decode the desired common message h j l w j + h j j w j first, treating the rest of the messages as noise, we must apply Lemmas 4 and 3. Lemma 4 is applied to h i j u i , while Lemma 3 is applied to h j l u j + h j j u j and h i j w i . Thus, we decode h j l w j + h j j w j from y i n t i = h j l w j + h j j w j + z ˇ i , where z ˇ i = h i j w i + h j l u j + h j j u j + h i j u i + z i n t i is the new semi-spherical noise. This is valid from Lemma 4 with the flatness factor condition (104) and from Lemma 3 with the flatness factor conditions (109) and (106).
    Thus, from Theorem 2, we have that
    I Y i n t i ; W i n t i = 1 2 log 1 + h j l 2 σ w j 2 + h j j 2 σ w j 2 h i j 2 σ w i 2 + h j l 2 σ u j 2 + h j j 2 σ u j 2 + h i j 2 σ u i 2 + σ 2
    with the condition
    h j l 2 σ w j 2 + h j j 2 σ w j 2 h i j 2 σ w i 2 + h j l 2 σ u j 2 + h j j 2 σ u j 2 + h i j 2 σ u i 2 + σ 2 > e
    Here, we decode the desired private message with a subset of the flatness factor conditions, which were already defined in the first step. Thus, we decode h j l u j + h j j u j from y i n t i − w ^ j = h j l u j + h j j u j + h i j w i + h i j u i + z i n t i , where w ^ j is the estimate of h j l w j + h j j w j , considering (106) and (104), which are the flatness factor conditions that make h i j w i and h i j u i part of the noise. Using Theorem 2, we obtain
    I Y i n t i ; U i n t i W i n t i = 1 2 log 1 + h j l 2 σ u j 2 + h j j 2 σ u j 2 h i j 2 σ w i 2 + h i j 2 σ u i 2 + σ 2
    where
    h j l 2 σ u j 2 + h j j 2 σ u j 2 h i j 2 σ w i 2 + h i j 2 σ u i 2 + σ 2 > e
    Finally, we can decode h i j w i using y i n t i − w ^ j − u ^ j = h i j w i + ( h i j u i + z i n t i ) , where w ^ j and u ^ j are the estimates of h j l w j + h j j w j and h j l u j + h j j u j , respectively. Again, using Lemma 4, we can consider h i j u i as part of the noise, with its respective flatness factor condition (104), and we can apply Theorem 2 to obtain
    I Y i n t i ; W i U i n t i W i n t i = 1 2 log 1 + h i j 2 σ w i 2 h i j 2 σ u i 2 + σ 2
    where
    h i j 2 σ w i 2 h i j 2 σ u i 2 + σ 2 > e
    (b)
    Decoding h i j w i , then h j l w j + h j j w j and, finally, h j l u j + h j j u j :
    If we start by decoding the interference common message h i j w i first, treating the rest of the messages as noise, we apply Lemma 3 to h j l w j + h j j w j and h j l u j + h j j u j with the flatness factor conditions (107) and (109) and Lemma 4 to h i j u i with the flatness factor condition (104).
    Then, utilizing Theorem 2, we obtain
    I Y i n t i ; W i = 1 2 log 1 + h i j 2 σ w i 2 h j l 2 σ w j 2 + h j j 2 σ w j 2 + h j l 2 σ u j 2 + h j j 2 σ u j 2 + h i j 2 σ u i 2 + σ 2
    where
    h i j 2 σ w i 2 h j l 2 σ w j 2 + h j j 2 σ w j 2 + h j l 2 σ u j 2 + h j j 2 σ u j 2 + h i j 2 σ u i 2 + σ 2 > e
    Here, we decode the desired common message h j l w j + h j j w j from y i n t i − w ^ i = h j l w j + h j j w j + h j l u j + h j j u j + h i j u i + z i n t i , where w ^ i is the estimate of h i j w i , considering, as previously mentioned, h j l u j + h j j u j and h i j u i as noise with the conditions (109) and (104). Using Theorem 2, we obtain
    I Y i n t i ; W i n t i W i = 1 2 log 1 + h j l 2 σ w j 2 + h j j 2 σ w j 2 h j l 2 σ u j 2 + h j j 2 σ u j 2 + h i j 2 σ u i 2 + σ 2
    where
    h j l 2 σ w j 2 + h j j 2 σ w j 2 h j l 2 σ u j 2 + h j j 2 σ u j 2 + h i j 2 σ u i 2 + σ 2 > e
    Finally, once both common messages have been found, we can decode h j l u j + h j j u j using y i n t i − w ^ i − w ^ j = h j l u j + h j j u j + ( h i j u i + z i n t i ) , where w ^ i and w ^ j are the estimates of h i j w i and h j l w j + h j j w j , respectively. Again, using Lemma 4, we can consider h i j u i as part of the noise, with its respective flatness factor condition (104), and we can apply Theorem 2 to obtain
    I Y i n t i ; U i n t i W i W i n t i = 1 2 log 1 + h j l 2 σ u j 2 + h j j 2 σ u j 2 h i j 2 σ u i 2 + σ 2
    where
    h j l 2 σ u j 2 + h j j 2 σ u j 2 h i j 2 σ u i 2 + σ 2 > e

4. Results

Some results were already obtained in Section 3 (Lemmas 6–8); in this section, we present the main results of this work.

4.1. The Power Constraints and GDoF of the Two-User Weak Gaussian Interference Channel with Lattice Gaussian Coding

Using the results from the previous section, we now find the power constraints for the private and common messages. These are stated in the next Lemma:
Lemma 9. 
For any type of interference, we have the following power constraints from (72), (74), (76), (78), (80) and (82),
σ u i 2 > σ 2 e e + 1 h i i 2 e e + 1 h j i 2 h j j 2 + 1 1 h i j 2 h j i 2 h i i 2 h j j 2 e + 1 2 e 2
σ w i 2 > max { e e + 1 2 h i j 2 σ u i 2 + σ 2 h i j 2 ,
e e + 1 2 h j i 2 σ u j 2 + σ 2 h i i 2 } ,
for i , j = 1 , 2 , j ≠ i , and where we consider that h 11 2 h 22 2 / h 12 2 h 21 2 > e 2 ( e + 1 ) 2 .
The proof of Lemma 9 can be found in Appendix B. Note that we choose h 11 2 h 22 2 / h 12 2 h 21 2 > e 2 ( e + 1 ) 2 , which does not contradict the weak interference scenario as, for weak interference, we need h i i 2 / h i j 2 > 1 . In order to fulfill the restrictions on the flatness factors, we can apply the same approach as in [19] where, for mod-p lattices, we can satisfy a small flatness factor if:
V ( L ) 2 / n 2 π σ s 2 < 1 ,
where we consider a discrete lattice Gaussian distribution over L, centered on zero and with variance σ s 2 . Then, for each of the defined lattices, to satisfy each of the flatness factor conditions, we must satisfy the following volume constraints:
V ( Δ i ) 2 / n < 2 π h i i 2 σ w i 2 ,
V ( Π i ) 2 / n < 2 π h j i 2 σ w j 2 ,
V ( Γ i ) 2 / n < 2 π h i i 2 σ u i 2 ,
V ( Ψ i ) 2 / n < 2 π h j i 2 σ u j 2 σ 2 h j i 2 σ u j 2 + σ 2 ,
where we consider that the dimension n is the same for all lattices.
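The volume constraints above are easy to check programmatically; a sketch (function and dictionary-key names are ours):

```python
import math

def volume_bounds(h_ii, h_ji, s_wi, s_wj, s_ui, s_uj, sigma2):
    """Upper bounds on V(L)^(2/n) for each of the four lattices above."""
    return {
        "Delta_i": 2 * math.pi * h_ii**2 * s_wi,
        "Pi_i": 2 * math.pi * h_ji**2 * s_wj,
        "Gamma_i": 2 * math.pi * h_ii**2 * s_ui,
        "Psi_i": 2 * math.pi * h_ji**2 * s_uj * sigma2
                 / (h_ji**2 * s_uj + sigma2),
    }

def volume_ok(V, n, bound):
    # A mod-p lattice with volume V in dimension n keeps a small flatness
    # factor whenever V^(2/n) stays below the corresponding bound.
    return V ** (2.0 / n) < bound

b = volume_bounds(4.0, 0.5, 100.0, 100.0, 5.0, 5.0, 1.0)
```

The Ψ bound is the tightest one in practice, since its effective variance is a harmonic-type combination of the interference power and the noise.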
From (43)–(46), we can express the rates obtained in Section 3.1.1, Lemma 7 (equivalently (71), (73), (75), (77), (79) and (81)), as follows, where we have simplified the expressions where possible:
R 1 min { 1 2 log 1 + S 1 c S 1 p + I 1 p + 1 , 1 2 log 1 + I 2 c I 2 p + 1 } + 1 2 log S 1 p + I 1 p + 1 I 1 p + 1
R 2 min { 1 2 log 1 + S 2 c S 2 p + I 2 p + 1 , 1 2 log 1 + I 1 c I 1 p + 1 } + 1 2 log S 2 p + I 2 p + 1 I 2 p + 1
R 1 + R 2 1 2 log S 1 + I 1 + 1 I 1 p + 1 I 2 p + S 2 p + 1 I 2 p + 1
R 1 + R 2 1 2 log S 2 + I 2 + 1 I 2 p + 1 I 1 p + S 1 p + 1 I 1 p + 1
R 1 + R 2 1 2 log S 1 p + I 1 + 1 I 1 p + 1 S 2 p + I 2 + 1 I 2 p + 1
2 R 1 + R 2 1 2 log S 1 + I 1 + 1 S 1 p + I 1 p + 1 S 2 p + I 2 + 1 I 2 p + 1 I 1 p + S 1 p + 1 I 1 p + 1 2
R 1 + 2 R 2 1 2 log S 2 + I 2 + 1 S 2 p + I 2 p + 1 S 1 p + I 1 + 1 I 1 p + 1 I 2 p + S 2 p + 1 I 2 p + 1 2 .
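The region above can be evaluated numerically. A sketch restricted to the single-rate and sum-rate bounds (function names are ours; the common parts are taken as the totals minus the private parts, which is our reading of the definitions):

```python
import math

def lg(x):
    return 0.5 * math.log2(x)

def hk_region(S1, S2, I1, I2, S1p, S2p, I1p, I2p):
    """Evaluate the R1, R2 and R1+R2 bounds above for given SNRs/INRs."""
    S1c, S2c, I1c, I2c = S1 - S1p, S2 - S2p, I1 - I1p, I2 - I2p
    R1 = min(lg(1 + S1c / (S1p + I1p + 1)),
             lg(1 + I2c / (I2p + 1))) + lg((S1p + I1p + 1) / (I1p + 1))
    R2 = min(lg(1 + S2c / (S2p + I2p + 1)),
             lg(1 + I1c / (I1p + 1))) + lg((S2p + I2p + 1) / (I2p + 1))
    # The three sum-rate bounds; the minimum is the active one.
    sum_rate = min(
        lg((S1 + I1 + 1) / (I1p + 1)) + lg((I2p + S2p + 1) / (I2p + 1)),
        lg((S2 + I2 + 1) / (I2p + 1)) + lg((I1p + S1p + 1) / (I1p + 1)),
        lg((S1p + I1 + 1) / (I1p + 1)) + lg((S2p + I2 + 1) / (I2p + 1)),
    )
    return {"R1": R1, "R2": R2, "R1+R2": sum_rate}

r = hk_region(100.0, 100.0, 10.0, 10.0, 10.0, 10.0, 1.0, 1.0)
```

In this symmetric example the sum-rate constraint is the binding one, i.e., it is smaller than R1 + R2.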
For the weak interference scenario, S 1 > I 2 and S 2 > I 1 . As in [9], the aim is to prove a constant gap and, ultimately, to show that we can obtain the same GDoF as in [9].
In [9], the HK region R ( I 1 p , I 2 p ) is used, where I i p is approximated by 1. The aim is to find the difference between the outer bound rate region and the HK rate region; in particular, a constant gap. In [9], the authors found that, in some cases, I i p = 1 is not enough to reduce the gap between the outer bound and the HK rate for R 1 and R 2 ; therefore, it is necessary to assign more power to the private interference. Thus, the achievable rate region is given by [9] R ( min { 1 , I 2 } , min { 1 , I 1 } ) . This leads to four cases of reaching the constant gap. As in ([20], Section 3.2), we define k i = I j p , where k i can take values from 1 to I j , and S i p = S i I j p / I j . We obtain the following:
R 1 min { 1 2 log S 1 + I 1 p + 1 I 1 p + 1 , 1 2 log I 2 + 1 I 2 p + 1 }
+ 1 2 log S 1 p + I 1 p + 1 I 1 p + 1
1 2 log k 2 + 1 + S 1 k 2 + 1
R 2 1 2 log k 1 + 1 + S 2 k 1 + 1
R 1 + R 2 1 2 log S 1 + I 1 + 1 I 1 k 2 + 1 + 1 2 log k 2 S 2 + k 1 I 1 + I 1 k 1 + 1
R 1 + R 2 1 2 log S 2 + I 2 + 1 I 2 k 1 + 1 + 1 2 log k 1 S 1 + k 2 I 2 + I 2 k 2 + 1
R 1 + R 2 1 2 log k 1 S 1 I 2 + I 1 + 1 k 2 + 1 + 1 2 log k 2 S 2 I 1 + I 2 + 1 k 1 + 1
2 R 1 + R 2 1 2 log S 1 + I 1 + 1 + 1 2 log k 2 S 2 I 1 + I 2 + 1 k 1 + 1 + 1 2 log k 1 S 1 + k 2 I 2 + I 2 I 2 k 2 + 1 2
R 1 + 2 R 2 1 2 log S 2 + I 2 + 1 + 1 2 log k 1 S 1 I 2 + I 1 + 1 k 2 + 1 + 1 2 log k 2 S 2 + k 1 I 1 + I 1 I 1 k 2 + 1 2 ,
Let us define the difference between the outer bound rate region and the HK rate region, as presented in [9], as: Δ R 1 = U B R 1 H K R 1 , Δ R 2 = U B R 2 H K R 2 , Δ R 1 + R 2 = U B R 1 + R 2 H K R 1 + R 2 , Δ 2 R 1 + R 2 = U B 2 R 1 + R 2 H K 2 R 1 + R 2 and Δ R 1 + 2 R 2 = U B R 1 + 2 R 2 H K R 1 + 2 R 2 . Let us focus on Δ R 1 and Δ R 2 . Depending on the values of k 1 and k 2 , the left or right part of the term inside the min in (154) or (155) is active. It was found in [9,20] that reassigning the value of k i and assigning more power to the private interference allows for the reduction in the gap between the outer bound and the HK rate for R 1 and R 2 . In [9], the authors also consider the case I i < 1 . In our case, that is not possible, since I i > 1 by construction. Thus, the lowest gap in R 1 and R 2 , as presented in [9], is given by:
Δ R 1 < 1 2 log k 2 + 1
Δ R 2 < 1 2 log k 1 + 1
Δ R 1 + R 2 < 1 2 log k 1 + 1 k 2 + 1
Δ 2 R 1 + R 2 < 1 2 log k 2 + 1 2 k 1 + 1
Δ R 1 + 2 R 2 < 1 2 log k 1 + 1 2 k 2 + 1 .
The above leads to the main Theorem of this section.
Theorem 3. 
The constant gap obtained in [9] for a two-user Gaussian interference channel using Gaussian codes is the same as that obtained using a lattice Gaussian distribution when h i i 2 / h i j 2 > 2 e ( e + 1 ) for i , j = 1 , 2 and i ≠ j .
The proof of Theorem 3 can be found in Appendix C.
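The gap bounds above are simple functions of k 1 and k 2 ; a sketch (in bits, base-2 logarithms; names are ours):

```python
import math

def gap_bounds(k1, k2):
    """Upper bounds on the gaps Delta R1 ... Delta R1+2R2 above, in bits."""
    lg = lambda x: 0.5 * math.log2(x)
    return {
        "R1": lg(k2 + 1),
        "R2": lg(k1 + 1),
        "R1+R2": lg((k1 + 1) * (k2 + 1)),
        "2R1+R2": lg((k2 + 1) ** 2 * (k1 + 1)),
        "R1+2R2": lg((k1 + 1) ** 2 * (k2 + 1)),
    }

g = gap_bounds(1, 1)
```

With k 1 = k 2 = 1, each single-rate gap is half a bit and the sum-rate gap is one bit, consistent with the one-bit result of [9].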
It is then straightforward to obtain the GDoF, which is the same as in (30)–(34).
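For reference, the symmetric GDoF obtained this way is the well-known "W curve" of [9]; a sketch of its standard piecewise form, with α the ratio of log INR to log SNR (the function name is ours):

```python
def gdof_symmetric(alpha):
    """Per-user generalized degrees of freedom for the symmetric two-user
    Gaussian interference channel (the W curve of Etkin, Tse and Wang)."""
    if alpha <= 0.5:        # noisy/weak interference: treat as noise
        return 1.0 - alpha
    if alpha <= 2.0 / 3.0:  # weak interference: HK with private + common
        return alpha
    if alpha <= 1.0:        # moderately weak interference
        return 1.0 - alpha / 2.0
    if alpha <= 2.0:        # strong interference: decode interference first
        return alpha / 2.0
    return 1.0              # very strong interference: no rate loss
```

The five branches match the five regimes of (30)–(34), and the curve is continuous at every breakpoint.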

4.2. The Power Constraints and GDoF of the K-User Weak Gaussian Interference Channel with Lattice Gaussian Coding

Here, let us consider the case of the K-user Gaussian interference channel, as presented in Section 3. Assume that σ w i = σ w and σ u i = σ u for i = 1 , … , K . The following Lemma gives the power constraints obtained for the private and common messages:
Lemma 10. 
For any type of interference, we have the following power constraints from (114), (116), (118), (120), (122), (124), (126), (128), (130), (132), (134) and (136):
σ u 2 > max { σ 2 e e + 1 h i i 2 h j i 2 e e + 1 ,
σ 2 e h i i 2 h i i 2 e h j i 2 2 e h j i 2 h i i 2 + h j i 2 ,
σ 2 e h j l 2 + h j j 2 h j l 2 + h j j 2 e h i j 2 2 e 2 h i j 2 h j l 2 + h j j 2 + h i j 2 ,
σ 2 e e + 1 h j l 2 + h j j 2 e e + 1 h i j 2 ,
σ 2 e h i i 2 e h j i 2 ,
σ 2 e h j l 2 + h j j 2 e h i j 2 }
σ w 2 > max { e h i i 2 + h j i 2 σ u 2 + σ 2 e h i i 2 e h j i 2 ,
e h j i 2 σ u 2 + σ 2 e h j i 2 ,
e h j l 2 + h j j 2 + h i j 2 σ u 2 + σ 2 e h j l 2 + h j j 2 e h i j 2 ,
e h i j 2 σ u 2 + σ 2 e h i j 2 ,
σ 2 e h i i 2 h i j 2 e h i i 2 e + 1 h i i 2 e h j i 2 ,
σ 2 e h j l 2 + h j j 2 h i j 2 e h j l 2 + h j j 2 e + 1 h j l 2 + h j j 2 e h j i 2 }
where h i i 2 h j i 2 > e e + 1 , h i i 2 e h j i 2 2 > e h j i 2 h i i 2 + h j i 2 , h j l 2 + h j j 2 > e e + 1 h i j 2 , h j l 2 + h j j 2 e 2 h i j 2 2 > e 2 h i j 2 h j l 2 + h j j 2 + h i j 2 and h j i 2 = h i j 2 , for i , j , l = 1 , , K .
The proof of Lemma 10 can be found in Appendix D.
As for the two-user interference channel, we must satisfy the following volume conditions for each lattice:
V ( Δ ) 2 / n < 2 π h i i 2 σ w 2
V ( Π i ) 2 / n < 2 π h j i 2 σ w 2 = 2 π h i j 2 σ w 2
V ( Λ i ) 2 / n < 2 π h j l 2 + h j j 2 σ w 2
V ( Γ ) 2 / n < 2 π h i i 2 σ u 2
V ( Υ i ) 2 / n < 2 π h j l 2 + h j j 2 σ u 2
V ( Ψ i ) 2 / n < 2 π h j i 2 σ u 2 σ 2 h j i 2 σ u 2 + σ 2 = 2 π h i j 2 σ u 2 σ 2 h i j 2 σ u 2 + σ 2 .
As for the two-user case, we can formally express the K-user interference channel rates with alignment as follows:
R i min { 1 2 log 1 + S i c S i p + I i p + 1 , 1 2 log 1 + I i n t i c I i n t i p + 1 } + 1 2 log S i p + I i p + 1 I i p + 1
R i n t i min { 1 2 log 1 + S i n t i c S i n t i p + I i n t i p + 1 , 1 2 log 1 + I i c I i p + 1 } + 1 2 log S i n t i p + I i n t i p + 1 I i n t i p + 1
R i + R i n t i 1 2 log S i + I i + 1 I i p + 1 I i n t i p + S i n t i p + 1 I i n t i p + 1
R i + R i n t i 1 2 log S i n t i + I i n t i + 1 I i n t i p + 1 I i p + S i p + 1 I i p + 1
R i + R i n t i 1 2 log S i p + I i + 1 I i p + 1 S i n t i p + I i n t i + 1 I i n t i p + 1
2 R i + R i n t i 1 2 log S i + I i + 1 S i p + I i p + 1 S i n t i p + I i n t i + 1 I i n t i p + 1 I i p + S i p + 1 I i p + 1 2
R i + 2 R i n t i 1 2 log S i n t i + I i n t i + 1 S i n t i p + I i n t i p + 1 S i p + I i + 1 I i p + 1 I i n t i p + S i n t i p + 1 I i n t i p + 1 2
We can observe that this is equivalent to the results obtained in (145)–(151) for the two-user case. Thus, the procedure is the same as the one for (161)–(165).
The main result of this section can be stated in the following theorem:
Theorem 4. 
The constant gap obtained in [9] for a two-user weak Gaussian interference channel using Gaussian codes is the same as that obtained for a K-user weak Gaussian interference channel using a lattice Gaussian distribution when h i i 2 / h i j 2 > e ( e + 1 ) and ( h j l 2 + h j j 2 ) / h i j 2 > e ( e + 1 ) , where h i j 2 = h j i 2 .
The proof of Theorem 4 can be found in Appendix E.

5. Discussion

In this section, we summarize and highlight the results of this research.
First, in Section 3, to understand the achievable rate of the two-user interference channel presented in [3], we divide the problem into two MAC regions. This allows us to visualize both the contribution of each user to the HK rate and the decoding order. Both are key for later designing the lattice Gaussian codes, particularly when extending to a K-user interference channel.
Second, in Section 3, in order to use the HK decoding method, we want to use a lattice Gaussian distribution. For this, we define the lattices and the constraints for each of the lattice distributions. We begin with the two-user interference channel, where we must apply Lemmas 3 and 4 to treat the common and private messages as noise in each step of the decoding process. Next, we apply Theorem 2 to each of the rates found for the two-user interference channel, following the decoding order defined before.
From there, we demonstrate how to extend these results to the K-user interference channel, as explained in Section 3. We mimic a two-user interference channel using alignment, thus obtaining two users: i and i n t i . The challenge lies in the strategy for choosing the lattices to decode both at user i and at user i n t i . The chosen strategy (illustrated in the example of Table 1) shows that some lattices repeat for both user i and user i n t i , allowing us to reduce the number of lattices used. We again apply Lemmas 3 and 4 and Theorem 2 to each of the rates obtained.
In Section 4, the main results are presented. First, for the two-user weak Gaussian interference channel, we obtain the common and private power constraints. From this, we verify that we can approximate I i p by 1 for i = 1 , 2 if the condition h i i 2 / h i j 2 > 2 e ( e + 1 ) holds (Theorem 3). This allows us to apply the same constraint as in [9], which naturally leads to the same constant gap and GDoF. We repeat the process for the K-user weak Gaussian interference channel, finding that we can also approximate I i p by 1 if the conditions h i i 2 / h i j 2 > e ( e + 1 ) and ( h j l 2 + h j j 2 ) / h i j 2 > e ( e + 1 ) hold (Theorem 4). Note that, in this case, the conditions are weaker than for the two-user interference channel, but we incur an extra penalty, given by j i h j i 2 = j i h i j 2 .

6. Conclusions

In this paper, we presented a lattice Gaussian coding scheme for the K-user interference channel. We showed that we can obtain the same conditions that lead to the constant gap to the optimal rate and the GDoF for a two-user interference channel as those obtained with random coding in [9]. Herein, we use the HK scheme with private and common messages together with lattice Gaussian coding, which provides randomness within the structure of the lattice. We proved that we can obtain the conditions to find the same constant gap and GDoF as with random coding for the weak interference scenario. This was achieved by using various properties of the flatness factor of the lattices, with some constraints on the common and private message powers as well as the channel coefficients. We also showed how this can be extended to a K-user weak Gaussian interference channel, as the interference can be aligned at the receivers using lattice Gaussian coding.

Author Contributions

M.C.E. presented and developed the idea and wrote the paper. C.V.-C. reviewed the manuscript and provided some ideas to improve it. All authors have read and agreed to the published version of the manuscript.

Funding

The authors are grateful for the financial support of ANID (ex CONICYT), FONDECYT Postdoctorado No. 3180323, “LATTICE CODES FOR THE K-USER INTERFERENCE CHANNEL”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

M.C.E. acknowledges ANID (ex-CONICYT), Chile, for the post-doctoral grant FONDECYT project no. 3180323. The authors would also like to thank Cong Ling and Laura Luzzi for their guidance and many fruitful discussions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SNR, S: Signal-to-noise ratio
DoF: Degrees of freedom
HK: Han and Kobayashi
GDoF: Generalized degrees of freedom
AWGN: Additive white Gaussian noise
INR, I: Interference-to-noise ratio
MAC: Multiple access channel
IC: Interference channel

Appendix A. Proof of Lemma 7

The two MAC regions must be intersected to obtain the same rate region of the two-user interference channel as in [3,20]. The strategy is as follows. As in [3], we define R i = D i + T i , where D i represents the rate of the private message U i , while T i represents the rate of the common message W i . The rates of users 1 and 2 when the other user is not transmitting are trivial and given by:
R 1 I Y 1 ; U 1 W 1 W 2 Q D 1 + min { I Y 1 ; W 1 W 2 Q , I Y 2 ; W 1 U 2 W 2 Q } T 1
R 2 I Y 2 ; U 2 W 1 W 2 Q D 2 + min { I Y 2 ; W 2 W 1 Q , I Y 1 ; W 2 U 1 W 1 Q } T 2
For R 1 + R 2 , we have four possibilities:
The contribution from MAC 1 and the contribution of the private message rate given by MAC 2
R 1 + R 2 I Y 1 ; U 1 W 1 W 2 Q D 1 + I Y 2 ; U 2 W 1 W 2 Q D 2 + I Y 1 ; W 1 W 2 Q T 1 + T 2
The contribution from MAC 2 and the contribution of the private message rate given by MAC 1
$R_1+R_2 \le \underbrace{I(Y_1;U_1\,|\,W_1,W_2,Q)}_{D_1} + \underbrace{I(Y_2;U_2\,|\,W_1,W_2,Q)}_{D_2} + \underbrace{I(Y_2;W_1,W_2\,|\,Q)}_{T_1+T_2}$
The contribution from the intersection of both MACs, where $R_1^A < R_1^C$ and $R_2^A > R_2^C$:
$R_1+R_2 \le \underbrace{I(Y_1;U_1\,|\,W_1,Q)}_{D_1} + \underbrace{I(Y_2;U_2\,|\,W_2,Q)}_{D_2} + \underbrace{I(Y_2;W_1\,|\,U_2,W_2,Q)}_{T_1} + \underbrace{I(Y_1;W_2\,|\,U_1,W_1,Q)}_{T_2}$
Finally, the contribution from the intersection of both MACs, where $R_1^A > R_1^C$ and $R_2^A < R_2^C$, is actually redundant and is therefore discarded, as shown in [20].
We also know that the HK rate region is bounded by $2R_1+R_2$ and $R_1+2R_2$. These bounds can be found as follows. For $2R_1+R_2$, we have two possibilities:
The contribution of R 1 + R 2 from MAC 1 and the contribution of T 1 and D 2 from MAC 2:
$2R_1+R_2 \le 2\,\underbrace{I(Y_1;U_1\,|\,W_1,W_2,Q)}_{D_1} + \underbrace{I(Y_2;U_2\,|\,W_2,Q)}_{D_2} + \underbrace{I(Y_2;W_1\,|\,U_2,W_2,Q)}_{T_1} + \underbrace{I(Y_1;W_1,W_2\,|\,Q)}_{T_1+T_2}$
The contribution of R 1 + R 2 from MAC 2 and the contribution of T 1 and D 1 from MAC 1, which is actually redundant and, therefore, discarded, as shown in [20]
For R 1 + 2 R 2 , we have two possibilities:
The contribution of R 1 + R 2 from MAC 1 and the contribution of T 2 and D 2 from MAC 2, which is actually redundant and, therefore, discarded, as shown in [20],
The contribution of R 1 + R 2 from MAC 2 and the contribution of T 2 and D 1 from MAC 1:
$R_1+2R_2 \le \underbrace{I(Y_1;U_1\,|\,W_1,Q)}_{D_1} + 2\,\underbrace{I(Y_2;U_2\,|\,W_1,W_2,Q)}_{D_2} + \underbrace{I(Y_1;W_2\,|\,U_1,W_1,Q)}_{T_2} + \underbrace{I(Y_2;W_1,W_2\,|\,Q)}_{T_1+T_2}$

Appendix B. Proof of Lemma 9

At user i, if we first decode w i , then u i and finally w j , from (72), (74) and (76), we obtain the following power constraints:
$\sigma_{w_j}^2 > \dfrac{e\left(h_{ji}^2\,\sigma_{u_j}^2+\sigma^2\right)}{h_{ji}^2}$
$\sigma_{u_i}^2 > \dfrac{e(e+1)\left(h_{ji}^2\,\sigma_{u_j}^2+\sigma^2\right)}{h_{ii}^2}$
$\sigma_{w_i}^2 > \dfrac{e(e+1)^2\left(h_{ji}^2\,\sigma_{u_j}^2+\sigma^2\right)}{h_{ii}^2}$
Similarly, at user i, if we first decode w j , then w i and finally u i , from (78), (80) and (82), we obtain the following power constraints:
$\sigma_{u_i}^2 > \dfrac{e\left(h_{ji}^2\,\sigma_{u_j}^2+\sigma^2\right)}{h_{ii}^2}$
$\sigma_{w_i}^2 > \dfrac{e(e+1)\left(h_{ji}^2\,\sigma_{u_j}^2+\sigma^2\right)}{h_{ii}^2}$
$\sigma_{w_j}^2 > \dfrac{e(e+1)^2\left(h_{ji}^2\,\sigma_{u_j}^2+\sigma^2\right)}{h_{ji}^2}$
If we analyze each of the equations above assigning the values of i , j = 1 , 2 , we obtain twelve equations, six for user 1 and six for user 2. Then, we can consider four cases when decoding (i.e., at user 1, decode w 1 , then u 1 and finally w 2 , while at user 2, decode w 2 , then u 2 and finally w 1 , or at user 1 decode w 1 , then u 1 and finally w 2 , while at user 2, decode w 1 , then w 2 and finally u 2 and so on). These cases are shown in Table A1.
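The four case combinations described above can be enumerated programmatically; the following minimal Python sketch only reproduces the bookkeeping (the labels "A" and "B" for the two decoding orders are an illustrative naming choice, not notation from the text):

```python
from itertools import product

# The six constraints per user come from two decoding orders. Order "A" decodes
# the own common message, then the own private message, then the interfering
# common message; order "B" decodes the interfering common message first.
# Choosing one order per user yields the four cases of Table A1.
orders = {"A": ("w_own", "u_own", "w_other"),
          "B": ("w_other", "w_own", "u_own")}
cases = [(o1, o2) for o1, o2 in product(orders, repeat=2)]
assert len(cases) == 4  # four decoding-order combinations, as in Table A1
```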
Table A1. Common and private power messages obtained for each user, considering the decoding strategy presented in Section 3.1.2.
User 1, decoding order $h_{11}w_1 \to h_{11}u_1 \to h_{21}w_2$:
$\sigma_{w_1}^2 > \dfrac{e(e+1)^2\left(h_{21}^2\sigma_{u_2}^2+\sigma^2\right)}{h_{11}^2}$, $\sigma_{u_1}^2 > \dfrac{e(e+1)\left(h_{21}^2\sigma_{u_2}^2+\sigma^2\right)}{h_{11}^2}$, $\sigma_{w_2}^2 > \dfrac{e\left(h_{21}^2\sigma_{u_2}^2+\sigma^2\right)}{h_{21}^2}$
User 1, decoding order $h_{21}w_2 \to h_{11}w_1 \to h_{11}u_1$:
$\sigma_{w_1}^2 > \dfrac{e(e+1)\left(h_{21}^2\sigma_{u_2}^2+\sigma^2\right)}{h_{11}^2}$, $\sigma_{u_1}^2 > \dfrac{e\left(h_{21}^2\sigma_{u_2}^2+\sigma^2\right)}{h_{11}^2}$, $\sigma_{w_2}^2 > \dfrac{e(e+1)^2\left(h_{21}^2\sigma_{u_2}^2+\sigma^2\right)}{h_{21}^2}$
User 2, decoding order $h_{22}w_2 \to h_{22}u_2 \to h_{12}w_1$:
$\sigma_{w_2}^2 > \dfrac{e(e+1)^2\left(h_{12}^2\sigma_{u_1}^2+\sigma^2\right)}{h_{22}^2}$, $\sigma_{u_2}^2 > \dfrac{e(e+1)\left(h_{12}^2\sigma_{u_1}^2+\sigma^2\right)}{h_{22}^2}$, $\sigma_{w_1}^2 > \dfrac{e\left(h_{12}^2\sigma_{u_1}^2+\sigma^2\right)}{h_{12}^2}$
User 2, decoding order $h_{12}w_1 \to h_{22}w_2 \to h_{22}u_2$:
$\sigma_{w_2}^2 > \dfrac{e(e+1)\left(h_{12}^2\sigma_{u_1}^2+\sigma^2\right)}{h_{22}^2}$, $\sigma_{u_2}^2 > \dfrac{e\left(h_{12}^2\sigma_{u_1}^2+\sigma^2\right)}{h_{22}^2}$, $\sigma_{w_1}^2 > \dfrac{e(e+1)^2\left(h_{12}^2\sigma_{u_1}^2+\sigma^2\right)}{h_{12}^2}$
For each of these cases, we solve σ u 1 2 and σ u 2 2 . For example, for the first case, we find:
$\sigma_{u_1}^2 > \dfrac{e(e+1)\left(h_{21}^2\,\sigma_{u_2}^2+\sigma^2\right)}{h_{11}^2},$
$\sigma_{u_2}^2 > \dfrac{e(e+1)\left(h_{12}^2\,\sigma_{u_1}^2+\sigma^2\right)}{h_{22}^2}$
If we solve (A14) with (A15), we obtain:
$\sigma_{u_1}^2 > \dfrac{\dfrac{\sigma^2 e(e+1)}{h_{11}^2}\left(\dfrac{e(e+1)h_{21}^2}{h_{22}^2}+1\right)}{1-\dfrac{h_{12}^2 h_{21}^2}{h_{11}^2 h_{22}^2}\,e^2(e+1)^2},$
where we assume that $\dfrac{h_{11}^2 h_{22}^2}{h_{12}^2 h_{21}^2} > e^2(e+1)^2$.
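As a sanity check, the closed form above can be compared against the fixed point of the coupled bounds (A14) and (A15). The following Python sketch is illustrative only: the channel gains, noise power, and the choice of Euler's number for the constant $e$ of the text are assumptions made to satisfy the hypothesis $h_{11}^2 h_{22}^2 / (h_{12}^2 h_{21}^2) > e^2(e+1)^2$:

```python
import math

e = math.e  # stand-in for the constant e used in the bounds; illustrative choice
h11s, h22s, h12s, h21s, sigma2 = 400.0, 400.0, 1.0, 1.0, 1.0  # squared gains, noise
# Hypothesis of the derivation above
assert h11s * h22s / (h12s * h21s) > e**2 * (e + 1) ** 2

# Closed-form lower bound on sigma_u1^2 obtained by substituting (A15) into (A14)
num = (sigma2 * e * (e + 1) / h11s) * (e * (e + 1) * h21s / h22s + 1)
den = 1 - (h12s * h21s) / (h11s * h22s) * e**2 * (e + 1) ** 2
sigma_u1_sq = num / den

# Iterate the two coupled bounds to their fixed point; the contraction factor is
# e^2(e+1)^2 h12^2 h21^2 / (h11^2 h22^2) < 1, so the iteration converges and
# must agree with the closed form.
x = 0.0  # running value of sigma_u1^2
for _ in range(100):
    y = e * (e + 1) * (h12s * x + sigma2) / h22s  # bound on sigma_u2^2, cf. (A15)
    x = e * (e + 1) * (h21s * y + sigma2) / h11s  # bound on sigma_u1^2, cf. (A14)
assert abs(x - sigma_u1_sq) < 1e-9
```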
If we repeat the previous reasoning for the four cases, we obtain four possible values for each of $\sigma_{u_1}^2$ and $\sigma_{u_2}^2$, and we choose the most restrictive one, which is the one presented in (137). We then take these values of $\sigma_{u_1}^2$ and $\sigma_{u_2}^2$, replace them in all the possible results for $\sigma_{w_1}^2$ and $\sigma_{w_2}^2$, and find the most restrictive ones, which are those presented in (139) and (138).

Appendix C. Proof of Theorem 3

From (137) to (139), we have:
$I_{1p} = \dfrac{h_{21}^2}{\sigma^2}\,\sigma_{u_2}^2 > \dfrac{h_{21}^2}{\sigma^2}\cdot\dfrac{\dfrac{\sigma^2 e(e+1)}{h_{22}^2}\left(\dfrac{e(e+1)h_{12}^2}{h_{11}^2}+1\right)}{1-\dfrac{h_{21}^2 h_{12}^2}{h_{22}^2 h_{11}^2}\,e^2(e+1)^2},$
$I_{1c} = \dfrac{h_{21}^2}{\sigma^2}\,\sigma_{w_2}^2 > \max\left\{\dfrac{h_{21}^2}{\sigma^2}\cdot\dfrac{e(e+1)^2\left(h_{21}^2\sigma_{u_2}^2+\sigma^2\right)}{h_{21}^2},\;\dfrac{h_{21}^2}{\sigma^2}\cdot\dfrac{e(e+1)^2\left(h_{12}^2\sigma_{u_1}^2+\sigma^2\right)}{h_{22}^2}\right\},$
$I_{2p} = \dfrac{h_{12}^2}{\sigma^2}\,\sigma_{u_1}^2 > \dfrac{h_{12}^2}{\sigma^2}\cdot\dfrac{\dfrac{\sigma^2 e(e+1)}{h_{11}^2}\left(\dfrac{e(e+1)h_{21}^2}{h_{22}^2}+1\right)}{1-\dfrac{h_{12}^2 h_{21}^2}{h_{11}^2 h_{22}^2}\,e^2(e+1)^2},$
$I_{2c} = \dfrac{h_{12}^2}{\sigma^2}\,\sigma_{w_1}^2 > \max\left\{\dfrac{h_{12}^2}{\sigma^2}\cdot\dfrac{e(e+1)^2\left(h_{12}^2\sigma_{u_1}^2+\sigma^2\right)}{h_{12}^2},\;\dfrac{h_{12}^2}{\sigma^2}\cdot\dfrac{e(e+1)^2\left(h_{21}^2\sigma_{u_2}^2+\sigma^2\right)}{h_{11}^2}\right\},$
Define:
$\beta_1 \triangleq \dfrac{h_{11}^2}{h_{12}^2}\cdot\dfrac{1}{e(e+1)}$
$\beta_2 \triangleq \dfrac{h_{22}^2}{h_{21}^2}\cdot\dfrac{1}{e(e+1)}$
where β 1 , β 2 > 1 . Thus, we have:
$I_{1p} > \dfrac{\beta_1+1}{\beta_1\beta_2-1},$
$I_{1c} > \max\left\{e(e+1)^2\left(I_{1p}+1\right),\;\dfrac{e+1}{\beta_2}\left(I_{2p}+1\right)\right\}$
$I_{2p} > \dfrac{\beta_2+1}{\beta_1\beta_2-1},$
$I_{2c} > \max\left\{e(e+1)^2\left(I_{2p}+1\right),\;\dfrac{e+1}{\beta_1}\left(I_{1p}+1\right)\right\}$
Consider the bounds obtained for $I_{1c}$ and $I_{2c}$ in (A26) and (A28). The left term in (A26) (or in (A28)) is always bigger than 1, as $I_{1p}>0$ and $I_{2p}>0$. It is then straightforward to see that $I_1 = I_{1c}+I_{1p} > 1$ and $I_2 = I_{2c}+I_{2p} > 1$. Hence, we only need to analyze the constant gap for the case where $I_1, I_2 > 1$ and $I_{1p} = I_{2p} = 1$. For this choice to be consistent with the lower bounds on $I_{1p}$ and $I_{2p}$, it suffices that $1 > \frac{\beta_1+1}{\beta_1\beta_2-1}$ and $1 > \frac{\beta_2+1}{\beta_1\beta_2-1}$, which hold whenever
$\beta_1 > 2$
$\beta_2 > 2.$
This completes the proof.
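The final step can be spot-checked numerically: whenever $\beta_1, \beta_2 > 2$, both lower bounds fall below 1, so the choice $I_{1p} = I_{2p} = 1$ is admissible. A minimal Python sketch (the grid values are arbitrary test points, not values from the text):

```python
from itertools import product

# Check that beta1 > 2 and beta2 > 2 imply (beta1+1)/(beta1*beta2-1) < 1 and
# (beta2+1)/(beta1*beta2-1) < 1 over an arbitrary grid of test points.
grid = [2.0 + 1e-6, 2.1, 3.0, 10.0, 1e3]
violations = 0
for b1, b2 in product(grid, repeat=2):
    d = b1 * b2 - 1.0
    if not ((b1 + 1.0) / d < 1.0 and (b2 + 1.0) / d < 1.0):
        violations += 1
assert violations == 0  # both bounds below 1 at every grid point
```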

Appendix D. Proof of Lemma 10

As σ w i 2 = σ w j 2 = σ w 2 and σ u i 2 = σ u j 2 = σ u 2 , it is straightforward to find the following:
  • Equations (172) and (173) from (114) and (118)
  • Equations (174) and (175) from (126) and (130)
  • Equation (176). From (120) and (122), we obtain
$\sigma_w^2 > \max\left\{\dfrac{e\left(\left(h_{ii}^2+h_{ji}^2\right)\sigma_u^2+\sigma^2\right)}{h_{ji}^2-h_{ii}^2\,e},\;\dfrac{e\left(\left(h_{ii}^2+h_{ji}^2\right)\sigma_u^2+\sigma^2\right)}{h_{ii}^2}\right\}$
    From (124), we obtain:
$\sigma_u^2 > \dfrac{e\,\sigma^2}{h_{ii}^2-e\,h_{ji}^2}$
    If we evaluate (A32) in (A31) and the maximum value, we obtain (176).
  • Equation (177). From (132) and (134), we obtain
$\sigma_w^2 > \max\left\{\dfrac{e\left(\left(h_{jl}^2+h_{jj}^2+h_{ij}^2\right)\sigma_u^2+\sigma^2\right)}{h_{ij}^2-e\left(h_{jl}^2+h_{jj}^2\right)},\;\dfrac{e\left(\left(h_{jl}^2+h_{jj}^2+h_{ij}^2\right)\sigma_u^2+\sigma^2\right)}{h_{jl}^2+h_{jj}^2}\right\}$
    From Equation (136), we obtain:
$\sigma_u^2 > \dfrac{e\,\sigma^2}{h_{jl}^2+h_{jj}^2-e\,h_{ij}^2}$
    If we evaluate (A34) in (A33) and the maximum value, we obtain (177).
  • Equations (166) and (167): From (116), we obtain:
$\sigma_u^2 > \dfrac{e\left(h_{ji}^2\,\sigma_w^2+\sigma^2\right)}{h_{ii}^2-e\,h_{ji}^2}$
    This is evaluated by taking (173) to obtain (166) or (172) to obtain (167).
  • Equations (168) and (169): From (128), we obtain:
$\sigma_u^2 > \dfrac{e\left(h_{ji}^2\,\sigma_w^2+\sigma^2\right)}{h_{jl}^2+h_{jj}^2-e\,h_{ij}^2}$
    This is evaluated by taking (174) to obtain (168) and by taking (175) to obtain (169).
  • Equation (170) can easily be derived from (124), while (171) can easily be derived from (136).

Appendix E. Proof of Theorem 4

From (166) to (177), we have:
$I_{ip} = I_{\mathrm{int}\,ip} = \dfrac{h_{ji}^2}{\sigma^2}\,\sigma_u^2$
$> \max\left\{\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{\sigma^2 e(e+1)}{h_{ii}^2-h_{ji}^2\,e(e+1)},\right.$
$\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{\sigma^2 e\,h_{ii}^2}{\left(h_{ii}^2-e\,h_{ji}^2\right)^2-e\,h_{ji}^2\left(h_{ii}^2+h_{ji}^2\right)},$
$\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{\sigma^2 e\left(h_{jl}^2+h_{jj}^2\right)}{\left(h_{jl}^2+h_{jj}^2-e\,h_{ij}^2\right)^2-e^2 h_{ij}^2\left(h_{jl}^2+h_{jj}^2+h_{ij}^2\right)},$
$\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{\sigma^2 e(e+1)}{h_{jl}^2+h_{jj}^2-e(e+1)h_{ij}^2},$
$\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{\sigma^2 e}{h_{ii}^2-e\,h_{ij}^2},$
$\left.\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{\sigma^2 e}{h_{jl}^2+h_{jj}^2-e\,h_{ij}^2}\right\}$
$I_{ic} = I_{\mathrm{int}\,ic} = \dfrac{h_{ji}^2}{\sigma^2}\,\sigma_w^2$
$> \max\left\{\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{e\left(\left(h_{ii}^2+h_{ji}^2\right)\sigma_u^2+\sigma^2\right)}{h_{ii}^2-e\,h_{ji}^2},\right.$
$\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{e\left(h_{ji}^2\,\sigma_u^2+\sigma^2\right)}{h_{ji}^2},$
$\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{e\left(\left(h_{jl}^2+h_{jj}^2+h_{ij}^2\right)\sigma_u^2+\sigma^2\right)}{h_{jl}^2+h_{jj}^2-e\,h_{ij}^2},$
$\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{e\left(h_{ij}^2\,\sigma_u^2+\sigma^2\right)}{h_{ij}^2},$
$\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{\sigma^2 e(e+1)\,h_{ii}^2}{\left(h_{ij}^2-e\,h_{ii}^2\right)\left(h_{ii}^2-e\,h_{ji}^2\right)},$
$\left.\dfrac{h_{ji}^2}{\sigma^2}\cdot\dfrac{\sigma^2 e(e+1)\left(h_{jl}^2+h_{jj}^2\right)}{\left(h_{ij}^2-e\left(h_{jl}^2+h_{jj}^2\right)\right)\left(h_{jl}^2+h_{jj}^2-e\,h_{ji}^2\right)}\right\}$
We define the following:
$\tilde{\beta}_1 \triangleq \dfrac{h_{ii}^2}{h_{ij}^2}\cdot\dfrac{1}{e(e+1)}$
$\tilde{\beta}_2 \triangleq \dfrac{h_{jl}^2+h_{jj}^2}{h_{ji}^2}\cdot\dfrac{1}{e(e+1)} = \dfrac{h_{jl}^2+h_{jj}^2}{h_{ij}^2}\cdot\dfrac{1}{e(e+1)}.$
where β 1 ˜ , β 2 ˜ > 1 . Thus, we have:
$I_{ip} > \max\left\{\dfrac{1}{\tilde{\beta}_1-1},\right.$
$\dfrac{\tilde{\beta}_1 e^2(e+1)}{\left(\tilde{\beta}_1 e(e+1)-e\right)^2-e\left(\tilde{\beta}_1 e(e+1)+1\right)},$
$\dfrac{\tilde{\beta}_2 e^2(e+1)}{\left(\tilde{\beta}_2 e(e+1)-e\right)^2-e^2\left(\tilde{\beta}_2 e(e+1)+1\right)},$
$\dfrac{1}{\tilde{\beta}_2-1},$
$\dfrac{1}{\tilde{\beta}_1(e+1)-1},$
$\left.\dfrac{1}{\tilde{\beta}_2(e+1)-1}\right\}$
$I_{ic} > \max\left\{\dfrac{e}{\tilde{\beta}_1 e(e+1)-e}+\dfrac{\left(\tilde{\beta}_1 e(e+1)+1\right)e\,I_{ip}}{\tilde{\beta}_1 e(e+1)-e},\right.$
$e\left(1+I_{ip}\right),$
$\dfrac{e}{\tilde{\beta}_2 e(e+1)-e}+\dfrac{\left(\tilde{\beta}_2 e(e+1)+1\right)e\,I_{ip}}{\tilde{\beta}_2 e(e+1)-e},$
$\dfrac{\tilde{\beta}_1 e^2(e+1)^2}{\left(1-\tilde{\beta}_1 e^2(e+1)\right)\left(\tilde{\beta}_1 e(e+1)-e\right)},$
$\left.\dfrac{\tilde{\beta}_2 e^2(e+1)^2}{\left(1-\tilde{\beta}_2 e^2(e+1)\right)\left(\tilde{\beta}_2 e(e+1)-e\right)}\right\}$
In this case, we can verify that (A53) is bigger than (A54) and (A57); similarly, (A56) is bigger than (A55) and (A58); finally, (A62) and (A63) are negative. Thus, we have:
$I_{ip} > \max\left\{\dfrac{1}{\tilde{\beta}_1-1},\;\dfrac{1}{\tilde{\beta}_2-1}\right\}$
$I_{ic} > \max\left\{e\left(1+I_{ip}\right),\;\dfrac{\left(\tilde{\beta}_1 e(e+1)+1\right)\left(I_{ip}+1\right)}{\tilde{\beta}_1(e+1)-1},\;\dfrac{\left(\tilde{\beta}_2 e(e+1)+1\right)\left(I_{ip}+1\right)}{\tilde{\beta}_2(e+1)-1}\right\}$
From $I_{ip}$ and $I_{ic}$, we see that it suffices that $\tilde{\beta}_1, \tilde{\beta}_2 > \frac{1}{e+1}$, which is already fulfilled as $\tilde{\beta}_1, \tilde{\beta}_2 > 1$. This completes the proof.
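As with Theorem 3, the final claim can be spot-checked numerically: for any $\tilde{\beta} > 1$ the denominators $\tilde{\beta}-1$ and $\tilde{\beta}(e+1)-1$ stay positive, so the max defining the $I_{ic}$ bound is finite for every $I_{ip} > 0$. A minimal Python sketch, where setting $e$ to Euler's number and the grid values are illustrative assumptions:

```python
import math

e = math.e  # stand-in for the constant e used throughout the appendix
for beta in [1.0 + 1e-9, 1.5, 2.0, 10.0, 1e6]:
    # beta > 1 already implies beta > 1/(e+1), and both denominators are positive
    assert beta > 1.0 / (e + 1.0)
    assert beta - 1.0 > 0.0 and beta * (e + 1.0) - 1.0 > 0.0
for beta, I_ip in [(1.5, 0.1), (2.0, 1.0), (10.0, 5.0)]:
    # Evaluate the two kinds of terms in the simplified I_ic bound
    bound = max(e * (1.0 + I_ip),
                (beta * e * (e + 1.0) + 1.0) * (I_ip + 1.0)
                / (beta * (e + 1.0) - 1.0))
    assert math.isfinite(bound) and bound > 0.0
```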

References

  1. Carleial, A. A case where interference does not reduce capacity (corresp.). IEEE Trans. Inf. Theory 1975, 21, 569–570. [Google Scholar] [CrossRef]
  2. Cadambe, V.; Jafar, S. Interference alignment and degrees of freedom of the K-user interference channel. IEEE Trans. Inf. Theory 2008, 54, 3425–3441. [Google Scholar] [CrossRef]
  3. Han, T.; Kobayashi, K. A new achievable rate region for the interference channel. IEEE Trans. Inf. Theory 1981, 27, 49–60. [Google Scholar] [CrossRef]
  4. Bresler, G.; Parekh, A.; Tse, D. The approximate capacity of the many-to-one and one-to-many Gaussian interference channels. IEEE Trans. Inf. Theory 2010, 56, 4566–4592. [Google Scholar] [CrossRef]
  5. Cadambe, V.; Jafar, S.; Shamai, S. Interference alignment on the deterministic channel and application to fully connected Gaussian interference networks. IEEE Trans. Inf. Theory 2009, 55, 269–274. [Google Scholar] [CrossRef]
  6. Jafar, S.A.; Vishwanath, S. Generalized degrees of freedom of the symmetric Gaussian K user interference channel. IEEE Trans. Inf. Theory 2010, 56, 3297–3303. [Google Scholar] [CrossRef]
  7. Sridharan, S.; Jafarian, A.; Vishwanath, S.; Jafar, S. Capacity of symmetric k-user gaussian very strong interference channels. In Proceedings of the IEEE GLOBECOM 2008—2008 IEEE Global Telecommunications Conference, New Orleans, LA, USA, 30 November–4 December 2008; pp. 1–5. [Google Scholar]
  8. Sridharan, S.; Jafarian, A.; Vishwanath, S.; Jafar, S.; Shamai, S. A layered lattice coding scheme for a class of three user Gaussian interference channel. In Proceedings of the 46th Annual Allerton Conference on Communication, Control, and Computing 2008, Monticello, IL, USA, 23–26 September 2008; pp. 531–538. [Google Scholar]
  9. Etkin, R.; Tse, D.; Wang, H. Gaussian interference channel capacity to within one bit. IEEE Trans. Inf. Theory 2008, 54, 5534–5562. [Google Scholar] [CrossRef]
  10. Geng, C.; Naderializadeh, N.; Avestimehr, A.S.; Jafar, S.A. On the optimality of treating interference as noise. IEEE Trans. Inf. Theory 2015, 61, 1753–1767. [Google Scholar] [CrossRef]
  11. Chen, J. Multi-layer interference alignment and GDoF of the k-user asymmetric interference channel. IEEE Trans. Inf. Theory 2021, 67, 3986–4000. [Google Scholar] [CrossRef]
  12. Peng, S.; Chen, X.; Lu, W.; Deng, C.; Chen, J. Spatial Interference Alignment with Limited Precoding Matrix Feedback in a Wireless Multi-User Interference Channel for Smart Grids. Energies 2022, 15, 1820. [Google Scholar] [CrossRef]
  13. Lu, J.; Li, J.; Yu, F.R.; Jiang, W.; Feng, W. UAV-Assisted Heterogeneous Cloud Radio Access Network With Comprehensive Interference Management. IEEE Trans. Veh. Technol. 2024, 73, 843–859. [Google Scholar] [CrossRef]
  14. Li, J.; Chen, G.; Zhang, T.; Feng, W.; Jiang, W.; Quek, T.Q.S.; Tafazolli, R. UAV-RIS-Aided Space-Air-Ground Integrated Network: Interference Alignment Design and DoF Analysis. IEEE Trans. Wirel. Commun. 2024; early access. [Google Scholar] [CrossRef]
  15. Ling, C.; Luzzi, L.; Belfiore, J.-C.; Stehlé, D. Semantically Secure Lattice Codes for the Gaussian Wiretap Channel. IEEE Trans. Inf. Theory 2014, 60, 6399–6416. [Google Scholar] [CrossRef]
  16. Campello, A.; Ling, C.; Belfiore, J.-C. Semantically Secure Lattice Codes for Compound MIMO Channels. IEEE Trans. Inf. Theory 2020, 66, 1572–1584. [Google Scholar] [CrossRef]
  17. Xie, J.; Ulukus, S. Secure Degrees of Freedom of K -User Gaussian Interference Channels: A Unified View. IEEE Trans. Inf. Theory 2015, 61, 2647–2661. [Google Scholar] [CrossRef]
  18. Estela, M. Interference Management for Interference Channels: Performance Improvement and Lattice Techniques. Ph.D. Thesis, Imperial College London, London, UK, 2014. Available online: http://hdl.handle.net/10044/1/24678 (accessed on 2 July 2021).
  19. Ling, C.; Belfiore, J.-C. Achieving AWGN channel capacity with lattice Gaussian coding. IEEE Trans. Inf. Theory 2014, 60, 5918–5929. [Google Scholar] [CrossRef]
  20. Etkin, R. Spectrum Sharing: Fundamental Limits, Scaling Laws, and Self-Enforcing Protocols. Ph.D. Thesis, University of California at Berkeley, Berkeley, CA, USA, 2006. [Google Scholar]
  21. Han, T.; Kobayashi, K. A further consideration of the HK and CMG regions for the interference channel. In Proceedings of the ITA Workshop 2007, San Diego, CA, USA, 29 January–2 February 2007. [Google Scholar]
  22. Oggier, F.; Viterbo, E. Algebraic Number Theory and Code Design for Rayleigh Fading Channels; Now Foundations and Trends: Boston, MA, USA, 2004; Volume 1, pp. 333–415. [Google Scholar]
  23. Estela, M.; Luzzi, L.; Ling, C.; Belfiore, J. Analysis of lattice codes for the many-to-one interference channel. In Proceedings of the 2012 IEEE Information Theory Workshop, Lausanne, Switzerland, 3–7 September 2012; pp. 417–421. [Google Scholar]
  24. Belfiore, J.-C. Lattice codes for the compute-and-forward protocol: The flatness factor. In Proceedings of the 2011 IEEE Information Theory Workshop, Paraty, Brazil, 16–20 October 2011; pp. 1–4. [Google Scholar]
  25. Loeliger, H.-A. Averaging bounds for lattices and linear codes. IEEE Trans. Inf. Theory 1997, 43, 1767–1773. [Google Scholar] [CrossRef]
  26. Zamir, R. Lattice Coding for Signals and Networks; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
Figure 1. Representation of MAC 1 and MAC 2 rate regions.
Figure 2. Representation of two superposed lattice Gaussians.
Figure 3. Representation of a three-user interference channel without (left) and with (right) the proposed alignment scheme, for i = 1, 2, 3.
Figure 4. Lattice codes as seen by each receiver for the example described.
Table 1. Example of a three-user interference channel lattice assignment.
h 11 w 1 h 21 w 2 h 31 w 3 h 12 w 1 h 22 w 2 h 32 w 3 h 13 w 1 h 23 w 2 h 33 w 3
User 1 Δ Π Π Π Δ Π Δ
User 2 Δ Θ Θ Δ Θ Θ Δ
User 3 Δ Υ Δ Υ Υ Υ Δ
Δ Π Θ Π Υ Π Θ Δ Θ Υ Π Υ Θ Υ Δ
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
