Article

Iterative List Patterned Reed-Muller Projection Detection-Based Packetized Unsourced Massive Random Access

1 School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
2 Department of Architecture and Built Environment, University of Nottingham, Nottingham NG7 2RD, UK
* Author to whom correspondence should be addressed.
Sensors 2023, 23(14), 6596; https://doi.org/10.3390/s23146596
Submission received: 12 June 2023 / Revised: 19 July 2023 / Accepted: 19 July 2023 / Published: 21 July 2023
(This article belongs to the Special Issue Sustainable Solutions for 6G-Enabled Internet of Things Networks)

Abstract:
In this paper, we consider a slot-controlled coded compressed sensing protocol for unsourced massive random access (URA) that concatenates a shared patterned Reed–Muller (PRM) inner codebook to an outer error-correction code. Due to the limitations of the geometry-based decoding algorithm in single-sequence settings and due to the message interference that may degrade decoding performance under multi-sequence circumstances, a list PRM projection algorithm and an iterative list PRM projection algorithm are proposed in this paper to supplant the signal detector associated with the inner PRM sequences. In detail, we first propose an enhanced path-saving algorithm, called list PRM projection detection, for use in single-user scenarios; it maintains multiple candidates during the first few layers so as to mitigate the risk of error propagation. On this basis, we further propose an iterative list PRM projection algorithm for use in multi-user scenarios. The vectors of the PRM codes and the channel coefficients are jointly detected in an iterative manner, which offers significant improvements in the convergence rate of signal recovery. Furthermore, the performances of the proposed algorithms are analyzed mathematically, and we verify that the theoretical results are consistent with the numerical simulations. Finally, we concatenate the inner PRM codes, decoded with the proposed iterative list detection, with two practical error-correction outer codes. According to the simulation results, we conclude that the packetized URA with the proposed iterative list projection detection works better than the benchmarks in terms of the number of active users it can support in each slot and the amount of energy needed per bit to meet an expected error probability.

1. Introduction

One of the most prominent use cases for 6G wireless networks remains the continuous evolution of massive machine-type communication (mMTC+) [1]. Indeed, mMTC+ services feature a large number of machines that sporadically connect to the system while carrying short data packets. Notably, these devices are battery-limited and are intended to achieve low transmission latency [2,3,4].
Grant-free transmission [5,6], in which packets are transmitted to the network without informing the base station in advance, is the most promising approach. “Sourced Random Access” [7] uses grant-free transmission and allocates distinct dictionaries to different users.
Given that there are so many users, employing various encoders would complicate reception, since the decoder would first have to determine which encoders were being used. Therefore, using the same coding technique for all users is the most promising strategy. These systems are referred to as “Unsourced Random Access (URA)” [8]. In this way, the identification and decoding responsibilities are decoupled, and the URA receiver only needs to decode the messages without knowing who sent them. A comparison of uncoordinated/unsourced grant-free schemes and coordinated/sourced grant-free schemes (based on compressed sensing techniques) in terms of application scenarios, typical access performance, and their characteristics is shown in Table 1. The per-user probability of error (PUPE) over all active users is used to assess the access and transmission performance of the entire system.
The fundamental limits of the Gaussian multiple-access channel are described in Reference [8], which formulates the URA problem from an information-theoretic perspective. Based on Reference [8], an asymptotic improvement is presented in Reference [13]. Both examine a scenario in which the number of active users is fixed and known at the receiver. These findings are extended in Reference [14] to a case in which the number of active users is random and unknown to the receiver. For the Gaussian multiple-access channel, a range of low-complexity URA algorithms has been suggested. The main schemes are T-fold irregular repetition (which includes the T-fold irregular repetition slotted ALOHA protocol with collision detection [15,16]), the sparse interleave division multiple access (IDMA) scheme [17], random spreading and correlation-based energy detection [18,19], and the coded compressed sensing (CCS)-based URA scheme [20]. Coded compressed sensing is a divide-and-conquer strategy that concatenates the inner codes to outer A-channel codes, where a tree code is a specific instance of an outer A-channel code devised for this kind of application [21,22]. The adaptability of the concatenated form makes it simple to accommodate novel channel models in URA. Several later investigations on URA employed an outer A-channel code [23,24,25,26,27,28]. In Reference [29], the use of a tree code as the outer A-channel code is extended by integrating the approximate message passing (AMP) mechanism with sparse regression codes [20]. The coded compressed sensing approach is further enhanced in Reference [23] by enabling the inner AMP decoder and the outer tree decoder to exchange soft information through a common message-passing protocol. In order to simplify the joint detection of high-dimensional sparse signals over several bases, Reference [30] suggests a coded demixing method.
Compressed sensing has made extensive use of codebook construction techniques related to second-order Reed–Muller (RM) sequences [31]. RM sequences are excellent options for massive access scenarios during the continuous evolution of massive machine-type communication owing to their high capacity and capability for low-complexity random access [32].
In Reference [33], non-orthogonal Reed–Muller sequences are used as user identifiers (IDs) for active user identification in the grant-based access mechanism in mMTC+, which suffers from low access efficiency and high signaling overhead. In Reference [34], a strategy for massive random access is proposed that makes use of the nested characteristics of RM codes. Additionally, the coded compressed sensing scheme makes extensive use of inner codebook construction techniques related to second-order RM sequences [35,36], i.e., RM sequences are utilized as inner codes for the CCS protocol, and the outer tree codes are employed for coupling messages across slotted transmissions. In Reference [37], a shared patterned Reed–Muller (PRM) codebook that embeds zero patterns in the second-order RM codes according to a binary vector space partition principle is employed as the inner codebook for the coded compressed sensing protocol.
This paper is inspired by a slot-controlled CCS protocol that concatenates outer error-correction codes to a common PRM inner codebook [37]. In this scheme, a slot-pattern-control (SPC) criterion that corresponds one-to-one to an information segment is used to construct the slot occupation rule for each active user. Each user's message is partitioned into several sub-blocks, and the same SPC segment is added to each block as a prefix to make up the input signal of the inner encoder. On this basis, each sequence is related to a single codeword from a proposed common codebook called the patterned Reed–Muller codes. For recovering the PRM sequences at the receiver, an algorithm exploiting the geometry of PRM sequences is proposed. Moreover, the outer tree code is replaced with an error-correcting code, and codewords with a distance of at most t from the channel output are recovered by the outer codes. In view of this scheme, there is room to improve the efficiency of the geometry-based decoding algorithm for PRM sequences in single-sequence scenarios, which limits the number of active users that can be served over a slot. In addition, message interference may also lead to decreased decoding efficiency in multi-sequence scenarios. In view of these shortcomings, we replace the inner decoder related to PRM with a proposed list PRM projection algorithm to enhance the efficiency of the inner decoder for a single sequence. Additionally, we construct an iterative list PRM projection algorithm in order to decrease the mutual interference between sequences in the multi-sequence scenario. We then deduce the theoretical successful detection probabilities of the proposed algorithms and validate their correctness via numerical simulations. Subsequently, we demonstrate via numerical simulations that the proposed schemes can accommodate more active users within each slot. We finally assess the performance of the slot-controlled URA system with the proposed iterative list algorithm as the inner-code detector. According to the simulation results, we demonstrate that the schemes using the proposed detection help improve the overall system's performance.

1.1. Contributions and Arrangement

The contributions of this article are summarized as follows:
  • A shared PRM codebook that embeds zero patterns in the second-order RM codes according to a binary vector space partition principle is employed as the inner codebook for the coded compressed sensing protocol. On this basis, we propose an enhanced algorithm called the “list PRM projection algorithm” that uses a list of candidates to prevent error propagation in each detection layer.
  • As for the multi-sequence scenario, an iterative list PRM projection algorithm is proposed. More specifically, except for the information of the user considered in the current loop, we treat all other signals as interference. We first remove this interference and then employ the proposed list projection algorithm to recover the PRM sequence. The PRM estimates are then fed into the channel estimator to enhance precision. After that, the refined channel estimates are utilized in the subsequent iteration to further improve the PRM detection. As a result, the proposed iterative list PRM projection algorithm offers significant advantages regarding the convergence rate.
  • The theoretical successful detection probabilities of the proposed list projection and iterative list projection algorithms are analyzed mathematically, and the results demonstrate that: (i) the factors that affect the efficiency of the list PRM projection algorithm are shown in the simulation results, which validate that the recovery reliability of the first few layers is crucial to the overall performance; (ii) the simulation results further explore how the relationship between the rank Υ and m of the PRM sequence affects the theoretical successful detection probability; (iii) we verify that the theoretical results are consistent with the simulation results.
  • We substitute the inner decoder of the scheme in [37] with the proposed iterative list PRM projection algorithm. Simulations over the quasi-static Rayleigh fading multiple-access channel (MAC) are performed numerically, and we summarize the observations as follows: (i) the proposed scheme is superior to the OMP-based inner-code detection, i.e., the chance of successful recovery is significantly improved since the PRM detector and channel estimator share information, and the negative impacts can be further reduced by eliminating those sequences that are incorrectly detected in each operation; (ii) by increasing the number of candidates in the first few layers, we can boost the original PRM projection algorithm's performance, which further confirms the above proof of the importance of ensuring reliable recovery of the first few layers; (iii) the simulation results of the overall URA system confirm that the packetized URA with the proposed iterative list projection detection method works better than the benchmarks in terms of the number of active users it can support in each slot and the amount of energy needed per bit to meet an expected error probability.
The rest of this paper is arranged as follows: In Section 2, we introduce the system model and review the patterned Reed–Muller sequences as well as their projection algorithm. An enhanced list PRM projection algorithm and an iterative list PRM projection algorithm are proposed in Section 3. The theoretical successful detection probabilities of the proposed algorithms are derived in Section 4. The simulation results are presented in Section 5, and the paper is concluded in Section 6.

1.2. Conventions

It is stipulated that all vectors, whether complex or binary, are column vectors. We employ $\mathbb{F}_2^m$ to represent the finite (binary) field of dimension $m$ in this paper. $\mathbf{P}_m$ denotes a matrix of size $m \times m$. $\mathbf{P}^{\mathrm{T}}$ stands for the transpose of matrix $\mathbf{P}$, and $\mathbf{P}^{-\mathrm{T}}$ stands for the inverse transpose of matrix $\mathbf{P}$. The superscript $\mathrm{H}$ stands for the conjugate transpose of a matrix. Vectors $\mathbf{0}_N$ and $\mathbf{1}_N$ are the all-zero and all-one vectors of size $N \times 1$, respectively. We use $A = \{a_i\}_{i=1}^{I}$ to denote a vector of size $I$ consisting of the components $a_i$ ($i = 1, \ldots, I$). The vectors $\mathbf{B}_1 \in \mathbb{C}^{T_1 \times 1}$ and $\mathbf{B}_2 \in \mathbb{C}^{T_2 \times 1}$ are stacked together to produce $\mathbf{B} = [\mathbf{B}_1; \mathbf{B}_2] \in \mathbb{C}^{(T_1 + T_2) \times 1}$. $\mathcal{CN}(\mathbf{0}, \mathbf{I}_N)$ stands for a complex standard normal random vector. The notation $x \sim \mathcal{N}(\mu, \sigma^2)$ indicates that $x$ is a Gaussian random variable with mean $\mu$ and variance $\sigma^2$; its probability density function (PDF) is written as $f_{\mathcal{N}}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$, while $x \sim \mathcal{CN}(\mu, \sigma^2)$ denotes that the random variable $x$ follows the circularly symmetric complex Gaussian distribution with PDF $f_{\mathcal{CN}}(x; \mu, \sigma^2) = \frac{1}{\pi\sigma^2} \exp\left(-\frac{|x-\mu|^2}{\sigma^2}\right)$.
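As a quick numerical check of these conventions, the following sketch draws samples from $\mathcal{CN}(\mu, \sigma^2)$ by splitting the variance equally between the real and imaginary parts and compares the empirical mean and variance with the nominal values; the helper name sample_cn is illustrative and not part of the original text.

```python
import numpy as np

def sample_cn(mu, sigma2, size, rng=None):
    """Draw samples from CN(mu, sigma^2): a circularly symmetric complex Gaussian
    whose variance sigma^2 is split equally between the real and imaginary parts."""
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(sigma2 / 2)
    return mu + std * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

x = sample_cn(mu=1 + 2j, sigma2=4.0, size=200_000, rng=np.random.default_rng(0))
print(np.round(x.mean(), 2), np.round(np.var(x), 2))  # close to 1+2j and 4.0
```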

2. System Model

In this section, we first review the slot-controlled URA scheme of Reference [37] and then propose an enhanced slot-controlled URA scheme. In the scheme, each active user partitions its message into several sub-blocks and repeatedly adds a slot-pattern-control (SPC) sequence as a prefix to form the outer-encoder input signal. Then, error-correction encoding is employed to stitch the message through all sub-blocks. A geometry-based technique is employed to detect the sub-blocks transmitted by active users in all sub-slots, and an error-correction algorithm stitches these sub-blocks together in order to recover the original messages.

2.1. Encoding Process

The transmission strategy includes two encoders: an inner-code encoder and an error-correction-code/outer-code encoder. The error-correction-code encoder uses two practical outer A-channel codes, i.e., the t-tree code and the Reed–Solomon code with Guruswami–Sudan list decoding. The inner encoder maps each sub-block into a codeword in the common PRM codebook.
As for the transmitter in a practical transmission, a $B$-bit message is carried by each active user as part of the set $\{W_1, W_2, \ldots, W_{K_a}\}$ for accessing the system. We let $N = 2^B$, and each message $W_k$ ($1 \le k \le K_a$) is encoded into an $H_N$-length outer error-correction code, thus $W_k \in [N]$, where $[N] = \{1, \ldots, N\}$. The A-channel code used for the outer encoding is derived from paper [27]: the $k$-th user's message $U^{(k)}$ of length $B$ is mapped to a $J$-ary Reed–Solomon code of length $H_N$, designated $\{\underline{M}_{\mathrm{RS},h}^{(k)}\}_{h=1}^{H_N} = \big(M_{\mathrm{RS},1}^{(k)}, M_{\mathrm{RS},2}^{(k)}, \ldots, M_{\mathrm{RS},H_N}^{(k)}\big) \in [J]^{H_N}$, $0 \le h \le H_N$. We then repeatedly append the binary prefix sequence $X_p^{(k)}$ to each $J$-ary symbol $M_{\mathrm{RS},h}^{(k)}$ to create the $G$-ary sequence. Thus, the length of the binary sequence $X_p^{(k)}$ is set to $\log_2(G/J)$, and we denote $x_p = \log_2(G/J)$. For further interpretation, a $G$-ary symbol is divided into $2^{x_p}$ cosets, each containing $2^{J}$ components, and this process can be summarized as $\breve{m}_{\mathrm{PRM},h}^{(k)} = \mathcal{F}\big(X_p^{(k)}; M_{\mathrm{RS},h}^{(k)}\big)$, where $\mathcal{F}(\cdot)$ is the bijective mapping $\mathcal{F}(\cdot): [2^{x_p}] \times [J] \rightarrow [G]$ and $\breve{m}_{\mathrm{PRM},h}^{(k)} \in [G]$. By repeating the above processes $H_N$ times, an $H_N$-length sequence $\{\underline{m}_{\mathrm{PRM},h}^{(k)}\}_{h=1}^{H_N} = \big(\breve{m}_{\mathrm{PRM},1}^{(k)}, \breve{m}_{\mathrm{PRM},2}^{(k)}, \ldots, \breve{m}_{\mathrm{PRM},H_N}^{(k)}\big)$ is obtained, which serves as the input sequence for the inner encoder. We require a one-to-one matching between $\breve{m}_{\mathrm{PRM}}^{(k)}$ and the codeword $C_{\mathrm{PRM}}^{(k)}$ from the common codebook (we omit the subscript here for simplicity); herein, $\breve{m}_{\mathrm{PRM}}^{(k)}$ is the input signal of the inner encoder, while the output codeword is $C_{\mathrm{PRM}}^{(k)}$. Finally, the output of inner coding over $H_N$ slots is recorded as $\{\underline{C}_{\mathrm{PRM},h}^{(k)}\}_{h=1}^{H_N} = \big(C_{\mathrm{PRM},1}^{(k)}, C_{\mathrm{PRM},2}^{(k)}, \ldots, C_{\mathrm{PRM},H_N}^{(k)}\big)$. This process at the transmitter is depicted in Figure 1. In detail, a length-$N = 2^m$ sequence $C_{\mathrm{PRM},h}^{(k)}$ represents an inner PRM code randomly picked from a shared codebook $\Gamma$ that is sent by the $k$-th user within the $n$-th chunk, where $1 \le n \le H_N$. There is a one-to-one match between the sequence $C_{\mathrm{PRM},h}^{(k)}$ and a partitioned message of length $B/H_N$. Next, using the slot-pattern-control rule, the $H_N$ PRM sequences are arranged into $N_T$ chunks, which leads to the system employing $T = N \cdot N_T$ channel uses to operate a slot-controlled transmission.
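To make the prefix-and-map step concrete, the short Python sketch below assembles the inner-encoder input symbols from a user's Reed–Solomon symbols and SPC prefix. The helper name and the choice of realizing the bijection $\mathcal{F}$ as a simple bit concatenation are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def spc_prefix_map(rs_symbols, x_p_bits, J_bits):
    """Hypothetical sketch of the bijective map F: [2^x_p] x [2^J] -> [G].

    rs_symbols : integer RS symbols for one user, each in [0, 2**J_bits)
    x_p_bits   : list of x_p prefix bits (the SPC segment)
    J_bits     : number of bits carried by each RS symbol
    Returns the G-ary inner-encoder inputs, assuming G = 2**(x_p + J).
    """
    prefix_index = int("".join(str(b) for b in x_p_bits), 2)  # coset index in [0, 2^x_p)
    # Concatenate the prefix bits (MSBs) with the RS symbol bits (LSBs):
    # one possible realization of the bijection [2^x_p] x [2^J] -> [2^(x_p+J)].
    return [(prefix_index << J_bits) | int(s) for s in rs_symbols]

# Toy usage: x_p = 2 prefix bits, J = 6 bits per RS symbol, H_N = 4 outer symbols.
rs = np.random.randint(0, 2**6, size=4)
inner_inputs = spc_prefix_map(rs, x_p_bits=[1, 0], J_bits=6)
print(inner_inputs)  # four G-ary indices sharing the same SPC coset
```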
The inner codeword in the $h$-th slot, $C_{\mathrm{PRM},h}^{(k)}$, is derived as follows: a new version of the binary subspace chirp (BSSC) [38,39,40], which embeds zero patterns in the second-order RM codes according to the partition rule of the binary vector space, is proposed by reducing the subspace matrix $\mathbf{H}_\chi$ to $\mathbf{I}_\chi$, where $\chi \subseteq [m]$ indicates all possible subsets of rank $\Upsilon \in \{1, 2, \ldots, m\}$ under full dimension $m$. The simplified sequence is denoted as the patterned Reed–Muller (PRM) sequence and can be written as follows:
$$C_{\mathrm{PRM},h}^{(k)}(\mathbf{v}) = \begin{cases} \dfrac{1}{\sqrt{2^{\Upsilon}}}\, i^{\,2\mathbf{w}^{\mathrm{T}}\mathbf{B}|_{1}^{\Upsilon} + \mathbf{w}^{\mathrm{T}}\hat{\mathbf{P}}\mathbf{w}}, & \text{if } \mathbf{v} = \mathbf{I}_{\chi}\mathbf{w} + \mathbf{I}_{\tilde{\chi}}\mathbf{B}|_{\Upsilon+1}^{m}, \\ 0, & \text{otherwise}, \end{cases}$$
where $\mathbf{w} \in \mathbb{F}_2^{\Upsilon}$, the symmetric matrix $\hat{\mathbf{P}}$ and the vector $\mathbf{B}$ are the dominant parameters of $\mathrm{RM}(\Upsilon, 2)$; additionally, the sub-vector $\mathbf{B}|_{\Upsilon+1}^{m}$ identifies the last $(m-\Upsilon)$-bit segment of the $m$-length vector $\mathbf{B}$, and the subspace $\mathbf{I}_{\chi}$ is a matrix of size $m \times \Upsilon$ with column vectors $\mathbf{e}_{\chi(1)}, \mathbf{e}_{\chi(2)}, \ldots, \mathbf{e}_{\chi(\Upsilon)}$, where $\mathbf{e}_{\chi(r)}$ is the unit vector that is non-zero at the $\chi(r)$-th position, $1 \le r \le \Upsilon$. Similarly, the matrix $\mathbf{I}_{\tilde{\chi}}$ follows the same rule.
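For intuition, the following sketch generates one PRM codeword directly from the expression above; representing $\chi$, $\hat{\mathbf{P}}$, and $\mathbf{B}$ as NumPy arrays and reading $\mathbf{v}$ as an integer index are illustrative choices of this write-up.

```python
import numpy as np

def prm_codeword(m, chi, P_hat, B):
    """Build a length-2^m PRM sequence from (chi, P_hat, B), following Eq. (1).

    chi   : sorted list of Upsilon positions in {0, ..., m-1} (the subspace support)
    P_hat : Upsilon x Upsilon binary symmetric matrix
    B     : length-m binary vector; B[:Upsilon] enters the phase, B[Upsilon:] fixes
            the bits on the complementary positions chi_tilde.
    """
    Upsilon = len(chi)
    chi_tilde = [j for j in range(m) if j not in chi]
    C = np.zeros(2**m, dtype=complex)
    for idx in range(2**Upsilon):
        w = np.array([(idx >> r) & 1 for r in range(Upsilon)])       # w in F_2^Upsilon
        v = np.zeros(m, dtype=int)
        v[chi] = w                                                    # I_chi * w
        v[chi_tilde] = B[Upsilon:]                                    # I_chi_tilde * B|_{Upsilon+1}^m
        phase = (2 * int(w @ B[:Upsilon]) + int(w @ P_hat @ w)) % 4   # exponent of i, mod 4
        pos = int("".join(str(b) for b in v[::-1]), 2)                # v read as an integer index
        C[pos] = (1j ** phase) / np.sqrt(2**Upsilon)
    return C

# Toy usage: m = 4, rank Upsilon = 2.
m, chi = 4, [0, 2]
P_hat = np.array([[1, 0], [0, 1]])
B = np.array([1, 0, 1, 1])
C = prm_codeword(m, chi, P_hat, B)
print(np.count_nonzero(C), np.round(np.linalg.norm(C), 3))  # 2^Upsilon non-zeros, unit norm
```

The resulting sequence has $2^{\Upsilon}$ non-zero entries of magnitude $1/\sqrt{2^{\Upsilon}}$ and therefore unit energy.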

2.2. Decoding Process

The signal received at the receiver in the $n$-th slot ($1 \le n \le N_T$) is expressed as
$$\mathbf{y}_n = \sum_{k=1}^{K_a} h_{k,n} \cdot \mathbf{x}_{k,n} + \mathbf{n}_n,$$
where $h_{k,n} \sim \mathcal{CN}(0, 1)$ represents the channel coefficient between active user $k$ and the access point (AP) in the $n$-th chunk, which is assumed to remain constant for the duration of a user's transmission, and $\mathbf{n}_n \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_N)$ is the complex additive white Gaussian noise (AWGN). For the inner-code decoder, PRM codes possess the following geometric property: for any $\mathbf{z} \in \mathbb{F}_2^{m-\Upsilon}$, the PRM sequence satisfies
$$\mathbf{E}_{\mathbf{I}_{\chi}\mathbf{e}_r,\, \mathbf{I}_{\tilde{\chi}}\mathbf{z} + \mathbf{I}_{\chi}\hat{\mathbf{P}}_r} \cdot C_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}} = (-1)^{\mathbf{B}^{\mathrm{T}}(\mathbf{e}_r; \mathbf{0}_{m-\Upsilon}) + (\mathbf{B}|_{\Upsilon+1}^{m})^{\mathrm{T}}\mathbf{z}} \cdot C_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}},$$
where $\hat{\mathbf{P}}_r = \hat{\mathbf{P}}\mathbf{e}_r$. According to Equation (3), the PRM sequence obtains its specific projective property by letting $\mathbf{z} = \mathbf{0}_{m-\Upsilon}$:
$$\lambda_{PRM,\varpi}^{(r)}(\mathbf{0}_{m-\Upsilon}) \cdot C_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}} = \begin{cases} C_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}}, & \text{if } \varpi = (-1)^{B_r}, \\ 0, & \text{if } \varpi \neq (-1)^{B_r}, \end{cases}$$
where
$$\lambda_{PRM,\varpi}^{(r)}(\mathbf{0}_{m-\Upsilon}) = \frac{\mathbf{I}_N + \varpi \cdot \mathbf{E}_{\mathbf{I}_{\chi}\mathbf{e}_r,\, \mathbf{I}_{\chi}\hat{\mathbf{P}}_r}}{2}$$
is a projection operator for PRM sequences. We decode the inner code using OMP and the projection-based algorithm. Owing to the slot-controlled URA structure, the inner-code sequences (based on the projective property) and the channel estimations are recovered sequentially. For the outer-decoding process, the number of output codewords equals the number of decoding iterations $K_0$, and outer Reed–Solomon decoding is adopted. The efficiency of the method described above can be improved using a modified version based on list decoding, which is described in the next section.
We assess performance from the perspective of messages and use the miss-detection rate (MDR) and the false-alarm rate (FAR) as the primary indicators, which are given as
$$P_e \triangleq \frac{1}{K_a} \sum_{k \in \mathcal{K}_a} \Pr\big(W_k \notin \hat{\mathcal{D}}(\mathbf{Y})\big)$$
and
$$P_f \triangleq \Pr\big(\hat{\mathcal{D}}(\mathbf{Y}) \not\subseteq \{W_k \mid k \in \mathcal{K}_a\}\big),$$
respectively, where $\mathcal{K}_a = \{1, \ldots, K_a\}$.
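As a concrete illustration of these two metrics, the sketch below estimates them empirically from Monte-Carlo trials; representing the transmitted and decoded message lists as Python sets is an illustrative choice.

```python
import numpy as np

def empirical_mdr_far(transmitted_lists, decoded_lists):
    """Estimate the miss-detection rate P_e and false-alarm rate P_f
    over Monte-Carlo trials, following the definitions in Eqs. (5)-(6).

    transmitted_lists : list of sets, transmitted messages per trial
    decoded_lists     : list of sets, decoder output per trial
    """
    mdr_terms, far_terms = [], []
    for tx, rx in zip(transmitted_lists, decoded_lists):
        # Fraction of active users whose message is missing from the decoded list.
        mdr_terms.append(sum(w not in rx for w in tx) / len(tx))
        # Event that the decoded list contains a message nobody transmitted.
        far_terms.append(float(not rx.issubset(tx)))
    return np.mean(mdr_terms), np.mean(far_terms)

# Toy usage with two trials of K_a = 3 users.
tx_trials = [{1, 7, 42}, {3, 9, 11}]
rx_trials = [{1, 42}, {3, 9, 11, 99}]
print(empirical_mdr_far(tx_trials, rx_trials))  # (~0.167, 0.5)
```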

3. Iterative List PRM Projection Algorithm-Based Inner-Code Detector

Our proposed detection scheme for inner codes is described in this section. In addition to the modified list PRM projection detection adopted for a single sequence, we propose an iterative list PRM projection algorithm implemented at the AP for multiple sequences.

3.1. List PRM Projection Algorithm

The received signal is $\mathbf{y} = C_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}} + \mathbf{n}$, and the enhanced algorithm based on the detection proposed in Reference [37] is executed for the single-user scenario. Specifically, in this single-user scenario, we propose a modified algorithm that utilizes a candidate-saving technique in each detection layer to boost performance. The particular steps of Algorithm 1 are outlined below.
Three elements $\{\mathbf{I}_{\chi}, \hat{\mathbf{P}}, \mathbf{B}\}$ of the PRM sequence need to be recovered, and the subspace matrix $\mathbf{I}_{\chi}$ should be retrieved first. Since the rank is unknown, we go over every conceivable rank $1 \le \Upsilon \le m$ and set aside $L$ potential subspace matrices for each one (Lines 3 and 4): after we calculate $F_{\mathrm{sum}}\big(\mathbf{f}_{\chi(\Upsilon)}\big) = \sum_{j=1}^{2^{(m-\Upsilon)}} \mathbf{y}^{\mathrm{H}} \mathbf{E}_{\mathbf{0}, \mathbf{f}_{\chi(\Upsilon)}(j)}\, \mathbf{y}$ for every $\mathbf{f}_{\chi(\Upsilon)}(j) \in \mathbb{F}_2^{m}$, employ the relation $\{\mathbf{f}_{\chi(\Upsilon)}(j)\}_{j=1}^{2^{(m-\Upsilon)}} = \mathrm{span}\big(\mathbf{I}_{\tilde{\chi}(\Upsilon)}\big)$, and seek the largest $2^{(m-\Upsilon)}$ values of $F_{\mathrm{sum}}$, the set $\{\mathbf{f}_{\chi(\Upsilon)}(j)\}_{j=1}^{2^{(m-\Upsilon)}}$ and the subspace matrix $\mathbf{I}_{\tilde{\chi}(\Upsilon)}$ can be identified. We finally keep $L$ candidates as $\mathcal{I}_{\chi(\Upsilon)} = \{\mathbf{I}_{\tilde{\chi}(\Upsilon)}(l)\}_{l=1}^{L}$ for each $\Upsilon$.
The recovery of the remaining elements $\{\hat{\mathbf{P}}, \mathbf{B}\}$ depends on the subspaces in $\mathcal{I}_{\chi(\Upsilon)}$. In the following, we decode the vector $\hat{\mathbf{P}}_l$ and the bit $\mathbf{B}_l$ in a layer-by-layer manner, and we keep $H$ parent candidates for the next layer at the end of each iteration. We initialize the corresponding parameters in Line 6: we denote the list of updated symmetric matrices and vectors (under $\mathbf{I}_{\tilde{\chi}(\Upsilon)}(l)$) as $\mathcal{H}_l$ during each layer and update the list of search spaces as $\{\mathcal{F}_{l,1}(h)\}_{h=1}^{H}$ using (6). During each iteration, for each candidate derived from the previous layers, we obtain a list of $H$ candidates for the current layer (Lines 8–11); we first update the search space from the previous knowledge using (6) (Lines 6 and 19). The detected layers are denoted as $\mathcal{R}$ ($\mathcal{R} = 1, \ldots, r-1$), and the recovered vectors $\hat{\mathbf{P}}_{r,l}(h)$ (where $\hat{\mathbf{P}}_{r,l}(h)$ is the $h$-th candidate for the $r$-th column vector of $\hat{\mathbf{P}}$ under subspace $\mathbf{I}_{\chi(\Upsilon)}(l)$) are utilized to fill out the symmetric matrix; thus, the matrix in the $r$-th layer can be represented as $\underline{\hat{\mathbf{P}}}_{\mathcal{R}}^{(h)}$. On this basis, the search space is limited to
$$\mathcal{F}_{l,r}(h) = \Big\{ \mathbf{f} \in \mathbb{F}_2^m \;\Big|\; f_i = \underline{\hat{\mathbf{P}}}_{\mathcal{R}}^{(h)}(i, r) \text{ for all } i \in \mathcal{R} \Big\},$$
where $\underline{\hat{\mathbf{P}}}_{\mathcal{R}}^{(h)}(i, r)$ refers to the entry in the $i$-th row and $r$-th column of matrix $\underline{\hat{\mathbf{P}}}_{\mathcal{R}}^{(h)}$. Next, we derive $H$ paths for each candidate from the previous decoding step (Lines 9 and 10).
For all vectors $\mathbf{f} \in \mathcal{F}_{l,r}(h)$, we compute $F_{\mathrm{sum}}(\mathbf{f}) = \sum_{j=1}^{2^{(m-\Upsilon)}} \mathbf{y}^{\mathrm{H}} \mathbf{E}_{\mathbf{I}_{\chi(\Upsilon)}(l) \cdot \mathbf{e}_r,\, \mathbf{f}}\, \mathbf{y}$ and search for the $H$ largest values of $F_{\mathrm{sum}}$; the corresponding $H$ sets $\{\mathbf{f}_j\}_{j=1}^{2^{(m-\Upsilon)}}$ are thereby derived. Based on $\{\mathbf{f}_j\}_{j=1}^{2^{(m-\Upsilon)}} = \mathrm{span}\big(\mathbf{I}_{\tilde{\chi}(\Upsilon)}(l)\big) + \mathbf{I}_{\chi(\Upsilon)}(l)\hat{\mathbf{P}}_{r,l}\mathbf{e}_r$, the vector $\hat{\mathbf{P}}_{r,l}(h)$ can be obtained. Once the $2H$ iterations are complete, we finally obtain $\{\hat{\mathbf{P}}_{r,l}(h)\}_{h=1}^{H}$ from among the $2H$ possibilities.
Algorithm 1: List PRM projection algorithm. (The full pseudocode is provided as a figure in the original article.)
As for the detection of $\mathbf{B}_l(h)$ (Lines 14–17), we set $\mathbf{z}_{(m-\Upsilon)} = \mathbf{0}_{m-\Upsilon}$ and compute $F_r(\mathbf{f}) = \mathbf{y}^{\mathrm{H}} \mathbf{E}_{\mathbf{I}_{\chi(\Upsilon)}(l) \cdot \mathbf{e}_r,\, \mathbf{I}_{\chi(\Upsilon)}(l)\hat{\mathbf{P}}_{r,l}\mathbf{e}_r}\, \mathbf{y}$; we then obtain $\mathrm{sign}\big(F_r(\mathbf{f})\big) = (-1)^{B_{r,l}}$. In light of this, $B_r$ can be obtained as follows:
$$B_{r,l} = \begin{cases} 1, & \text{if } \mathrm{sign}\big(F_r(\mathbf{f})\big) = -1, \\ 0, & \text{if } \mathrm{sign}\big(F_r(\mathbf{f})\big) = 1. \end{cases}$$
Line 19 updates all parameters for the next layer's detection. In Line 23, after conducting all iterations, we evaluate the $(L \times H)$ results of each rank and retain the optimal ones. The final result over all ranks is selected at the end of the procedure (Line 25).
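The path-saving idea behind Algorithm 1 can be pictured as a small beam search over per-layer candidates. The sketch below keeps the top-$H$ scoring partial paths at every layer; the scoring function and data layout are illustrative placeholders rather than the paper's exact $F_{\mathrm{sum}}$ statistic.

```python
import numpy as np

def beam_layer_detection(score_fn, candidate_sets, H):
    """Generic top-H path-saving loop, mirroring the layer-by-layer structure of Algorithm 1.

    score_fn       : callable(path, extension) -> real score (an F_sum-like statistic)
    candidate_sets : list over layers; each entry is the search space for that layer
    H              : number of parent candidates kept per layer (the list size)
    Returns the best-scoring path and its accumulated score after the final layer.
    """
    beam = [((), 0.0)]                                  # (partial path, accumulated score)
    for layer_candidates in candidate_sets:
        expansions = [(path + (ext,), acc + score_fn(path, ext))
                      for path, acc in beam
                      for ext in layer_candidates]
        # Keep only the H best partial paths; an early mis-detection cannot
        # eliminate the true path as long as it stays inside the beam.
        expansions.sort(key=lambda t: t[1], reverse=True)
        beam = expansions[:H]
    return max(beam, key=lambda t: t[1])

# Toy usage: three layers, four candidates per layer, a random score surface.
rng = np.random.default_rng(0)
score = lambda path, ext: float(rng.normal()) + 0.1 * ext
best_path, best_score = beam_layer_detection(score, [range(4)] * 3, H=2)
print(best_path, round(best_score, 3))
```

Keeping several parents per layer is what prevents a single early mis-detection from discarding the true path, which is exactly the failure mode the list extension targets.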

3.2. Iterative List PRM Projection Algorithm

We consider a scenario with the simultaneous presence of multiple active users in the system. As described below, we detect multiple PRM sequences iteratively. We begin by introducing some notation: it is possible to detect a maximum of $K_0$ users. We denote by $\hat{C}_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}}^{(s,k)}$ the detected PRM sequence and by $\hat{h}_k^{(s)}$ the channel estimate, both for the $k$-th user during the $s$-th iteration ($1 \le s \le S_{\mathrm{max}}$). The particular steps of Algorithm 2 are outlined below.
Algorithm 2: Iterative list PRM projection algorithm. (The full pseudocode is provided as a figure in the original article.)
First, the PRM sequence and the channel coefficient are initialized as $\hat{C}_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}}^{(0,k)} = \mathbf{0}_N$ and $\hat{h}_k^{(0)} = 0$ for $s = 0$, respectively (Line 1). We then denote the interference for user $k$ during the $s$-th iteration as
$$\Lambda_k^{(s)} = \sum_{k'=1,\, k' \neq k}^{K_0} \hat{h}_{k'}^{(s')} \cdot \hat{C}_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}}^{(s', k')},$$
where
$$s' = \begin{cases} s, & \text{if } 1 \le k' \le k-1, \\ s-1, & \text{if } k+1 \le k' \le K_0. \end{cases}$$
Then, in order to retrieve a PRM sequence $\hat{C}_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}}^{(s,k)}$, we feed the signal $\big(\mathbf{y} - \Lambda_k^{(s)}\big)$ into Algorithm 1.
Next, when detecting each user's PRM sequence, the detected sequences are inserted into the linear least-squares channel estimator in order to further improve their accuracy, as specified in (8) of Algorithm 2. Owing to the exchange of information between the sequence detector and the channel estimator, the effectiveness gradually improves, and the algorithm terminates once the results have converged or the maximum number of iterations $S_{\mathrm{max}}$ has been reached.
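A minimal sketch of this detector/estimator loop is given below, assuming a generic single-sequence detector in place of Algorithm 1 and a standard per-user least-squares channel update standing in for Equation (8); the function names are hypothetical.

```python
import numpy as np

def iterative_list_sic(y, detect_single, K0, S_max, tol=1e-6):
    """Sketch of the iterative interference-cancellation loop of Algorithm 2.

    y             : received slot signal, shape (N,)
    detect_single : callable(residual) -> unit-norm codeword estimate, shape (N,)
                    (stands in for the list PRM projection detector, Algorithm 1)
    K0            : maximum number of sequences to detect
    S_max         : maximum number of outer iterations
    """
    N = y.shape[0]
    C_hat = np.zeros((K0, N), dtype=complex)   # detected sequences
    h_hat = np.zeros(K0, dtype=complex)        # channel estimates
    for s in range(S_max):
        prev = h_hat.copy()
        for k in range(K0):
            # Interference from all other users, using the freshest available estimates.
            interference = (h_hat[:, None] * C_hat).sum(axis=0) - h_hat[k] * C_hat[k]
            residual = y - interference
            C_hat[k] = detect_single(residual)
            # Per-user least-squares channel estimate given the detected codeword
            # (an assumed stand-in for Eq. (8) of the paper).
            energy = np.vdot(C_hat[k], C_hat[k]).real
            h_hat[k] = np.vdot(C_hat[k], residual) / max(energy, 1e-12)
        if np.linalg.norm(h_hat - prev) < tol:   # stop once the estimates converge
            break
    return C_hat, h_hat
```

A toy detect_single could, for instance, return the best-correlating codeword from a small candidate codebook.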

4. Performance Analysis

4.1. Successful Recovery Probability of the List PRM Projection Algorithm

For detecting a single PRM sequence, Algorithm 1 recovers the column vectors of the matrix $\hat{\mathbf{P}}$ layer by layer. In this section, we derive the probability of successful recovery of each layer element $\hat{\mathbf{P}}_r$ for $1 \le r \le m$. The corresponding conclusions are summarized in Lemma 1 and Theorem 1.
Lemma 1. 
Suppose that Algorithm 1's input signal is $\mathbf{y} = h\, C_{\hat{\mathbf{P}}, \mathbf{B}, \mathbf{I}_{\chi}} + \mathbf{n}$, where $h \sim \mathcal{CN}(0, 1)$ is the channel coefficient and $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_N)$ denotes the complex additive white Gaussian noise (AWGN). The notation $\mathcal{P}(r)$ ($1 \le r \le m$) represents the event "the vector $\hat{\mathbf{P}}_r$ is recovered successfully", and the notation $\mathcal{P}(1, \ldots, r-1)$ denotes the event "the vectors $\hat{\mathbf{P}}_1, \ldots, \hat{\mathbf{P}}_{r-1}$ are correctly recovered". On this basis, if $h$ and $\mathcal{P}(1, \ldots, r-1)$ are given, then for any $\mathbf{z} \in \mathbb{F}_2^{m-\Upsilon}$, $\Pr\big(\mathcal{P}(r) \mid \mathcal{P}(1, \ldots, r-1), h\big)$ can be expressed as
$$\Pr\big(\mathcal{P}(r) \mid \mathcal{P}(1, \ldots, r-1), h\big) \approx Q\!\left(-\frac{2^{\Upsilon}|h|^2 \cdot (-1)^{B_r + (\mathbf{B}|_{\Upsilon+1}^{m})^{\mathrm{T}}\mathbf{z}}}{\sqrt{2}\,\sigma_{F,r}}\right) \cdot \left[Q\!\left(-\frac{2^{\Upsilon}|h|^2 \cdot (-1)^{B_r + (\mathbf{B}|_{\Upsilon+1}^{m})^{\mathrm{T}}\mathbf{z}}}{2\sigma_{F,r}}\right)\right]^{2^{(\Upsilon-r+1)}-1}.$$
In (13), the function $Q(\cdot)$ is $Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right) dt$, and the variance $\sigma_{F,r}^2$ is defined as
$$\sigma_{F,r}^2 = 2^{m-1}\left(\frac{2|h|^2 N_0}{2^{r-1}} + \left(\frac{N_0}{2^{r-1}}\right)^2\right).$$
Theorem 1. 
Taking into account the channel coefficient $h$ and the result of (13), the average probability of successful detection for Algorithm 1 can be calculated as
$$P_{\mathrm{succ}} = \int_{h} f_{\mathcal{CN}}(h; 0, 1) \cdot \check{P}\big(\Upsilon, N_0 \mid h\big)\, dh,$$
where $\check{P}\big(\Upsilon, N_0 \mid h\big) = \prod_{r=1}^{\Upsilon} \Pr\big(\mathcal{P}(r) \mid \mathcal{P}(1, \ldots, r-1), h\big)$ and $f_{\mathcal{CN}}(h; 0, 1) = \frac{1}{\pi}\exp\big(-|h|^2\big)$.
Proof 
We assume that the subspace matrix $\mathbf{I}_{\chi}$ is decoded correctly and that $H = 1$. Define a quantity $F_S(\mathbf{f}_r)$ related to $\mathbf{f}_r \in \mathbb{F}_2^m$ as
$$F_S(\mathbf{f}_r) = \big(h C_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_{\chi}} + \mathbf{n}\big)^{\mathrm{H}} \mathbf{E}_{\mathbf{I}_{\chi}\mathbf{e}_r, \mathbf{f}_r} \big(h C_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_{\chi}} + \mathbf{n}\big) = \underbrace{|h|^2\, C_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_{\chi}}^{\mathrm{H}} \mathbf{E}_{\mathbf{I}_{\chi}\mathbf{e}_r, \mathbf{f}_r} C_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_{\chi}}}_{F_A(\mathbf{f}_r)} + \underbrace{\mathbf{n}^{\mathrm{H}} \mathbf{E}_{\mathbf{I}_{\chi}\mathbf{e}_r, \mathbf{f}_r} \mathbf{n}}_{Z_1(\mathbf{f}_r)} + \underbrace{h\, C_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_{\chi}}^{\mathrm{H}} \mathbf{E}_{\mathbf{I}_{\chi}\mathbf{e}_r, \mathbf{f}_r} \mathbf{n} + \mathbf{n}^{\mathrm{H}} \mathbf{E}_{\mathbf{I}_{\chi}\mathbf{e}_r, \mathbf{f}_r} C_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_{\chi}}}_{Z_2(\mathbf{f}_r)}.$$
According to (16), $Z_1(\mathbf{f}_r)$ and $Z_2(\mathbf{f}_r)$ are the sources of interference, and we use $Z(\mathbf{f}_r) = Z_1(\mathbf{f}_r) + Z_2(\mathbf{f}_r)$ to collect all interference terms. The following discussion demonstrates Lemma 1 through mathematical induction.
(1) When $r = 1$: according to Algorithm 1, the event $\mathcal{P}(1)$ occurring is described as $\sum_{\mathbf{z} \in \mathbb{F}_2^{m-\Upsilon}} F\big(\mathbf{I}_{\tilde{\chi}}\mathbf{z} + \tilde{\mathbf{p}}_{\chi,1}\big) > \sum_{\mathbf{z} \in \mathbb{F}_2^{m-\Upsilon}} F\big(\mathbf{I}_{\tilde{\chi}}\mathbf{z} + \bar{\bar{\mathbf{p}}}_{\chi,1}\big)$ (we omit the subscript $S$ and write $F(\cdot)$ in this section), where $\tilde{\mathbf{p}}_{\chi,1} = \mathbf{I}_{\chi}\hat{\mathbf{P}}\mathbf{e}_1$, and the symbol $\bar{\bar{\mathbf{p}}}_{\chi,1}$ denotes any possible vector that is different from $\tilde{\mathbf{p}}_{\chi,1}$. Since the vector $\bar{\bar{\mathbf{p}}}_{\chi,1}$ has a total of $(2^{\Upsilon}-1)$ possibilities, we denote the probability of the event $\mathcal{P}(1)$ as follows:
Pr P ( 1 ) | h = Pr z F 2 m Υ F I χ ˜ z + p ˜ χ , 1 > max z F 2 m Υ F I χ ˜ z + p ¯ ¯ χ , 1 = [ p ¯ ¯ χ , 1 ( j ) ] j = 1 2 Υ 1 Pr z F 2 m Υ F I χ ˜ z + p ˜ χ , 1 > z F 2 m Υ F I χ ˜ z + p ¯ ¯ χ , 1 ( j ) .
In (17), $\{\bar{\bar{\mathbf{p}}}_{\chi,1}(j)\}_{j=1}^{2^{\Upsilon}-1}$ is the set of all possible vectors except $\tilde{\mathbf{p}}_{\chi,1}$. Since each term obeys the same distribution, calculating (17) is equivalent to computing its approximate probability as
[ p ¯ ¯ χ , 1 ( j ) ] j = 1 2 Υ 1 Pr F I χ ˜ z + p ˜ χ , 1 > F I χ ˜ z + p ¯ ¯ χ , 1 ( j ) .
For convenience, we let $\tilde{\Xi}_{\chi,\mathbf{z}}(1) = \mathbf{I}_{\tilde{\chi}}\mathbf{z} + \tilde{\mathbf{p}}_{\chi,1}$ and $\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1) = \mathbf{I}_{\tilde{\chi}}\mathbf{z} + \bar{\bar{\mathbf{p}}}_{\chi,1}$. Since the $(2^{\Upsilon}-1)$ events in (18) are independent of one another, we calculate the probability of one event first:
Pr F I χ ˜ z + p ˜ χ , 1 > F I χ ˜ z + p ¯ ¯ χ , 1 = ( a ) Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ + Z ( Ξ ˜ χ , z ( 1 ) ) > Z ( Ξ ¯ ¯ χ , z ( 1 ) ) = ( b ) Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ + Z ( Ξ ˜ χ , z ( 1 ) ) > Z ( Ξ ¯ ¯ χ , z ( 1 ) ) · Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ + Z ( Ξ ˜ χ , z ( 1 ) ) > 0 + Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ Z ( Ξ ˜ χ , z ( 1 ) ) > Z ( Ξ ¯ ¯ χ , z ( 1 ) ) · Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ + Z ( Ξ ˜ χ , z ( 1 ) ) < 0 ( c ) Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ + Z ( Ξ ˜ χ , z ( 1 ) ) > Z ( Ξ ¯ ¯ χ , z ( 1 ) ) · Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ + Z ( Ξ ˜ χ , z ( 1 ) ) > 0 ,
where the equality $(a)$ is based on $C_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_{\chi}}^{\mathrm{H}} \mathbf{E}_{\mathbf{I}_{\chi}\mathbf{e}_1, \bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)} C_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_{\chi}} = 0$, and $(b)$ and $(c)$ are obtained according to the following relations:
$$\Pr\big(|A| \ge |B|\big) = \Pr\big(A > |B|\big)\Pr\big(A > 0\big) + \Pr\big(-A > |B|\big)\Pr\big(A < 0\big) \ge \Pr\big(A > |B|\big)\Pr\big(A > 0\big),$$
and $\big||A| - |B|\big| \le |A \pm B| \le |A| + |B|$.
In order to calculate the two probabilities of the result of (19), it is necessary to study the distribution of the variables | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ , Z ( Ξ ˜ χ , z ( 1 ) ) and Z ( Ξ ¯ ¯ χ , z ( 1 ) ) , respectively. We prioritize the study of | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ : we omit the factor 1 / 2 r for the representation of C P ^ , B , I χ and denote F A ( Ξ ˜ χ , z ( 1 ) ) as follows
F A ( Ξ ˜ χ , z ( 1 ) ) = | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ = | h | 2 · i ( I χ e 1 ) T · Ξ ˜ χ , z ( 1 ) w F 2 m C P ^ , B , I χ w + I χ e 1 ¯ · ( 1 ) w T · Ξ ˜ χ , z ( 1 ) · C P ^ , B , I χ ( w ) = | h | 2 · w F 2 m i 2 ( P ^ 1 ; 0 ) T · Q 1 w + P 1 , 1 + 2 B T ( e 1 ; 0 ) · i ( I χ e 1 ) T · Ξ ˜ χ , z ( 1 ) · ( 1 ) w T · Ξ ˜ χ , z ( 1 ) · ϕ B , Q 1 w + ( e 1 ; 0 ) , Υ · ϕ B , Q 1 w , Υ = | h | 2 · i 2 B T ( e 1 ; 0 ) + P 1 , 1 w F 2 m i 2 ( P ^ 1 ; 0 ) T · Q 1 w + 2 w T · Ξ ˜ χ , z ( 1 ) + ( I χ e 1 ) T · Ξ ˜ χ , z ( 1 ) · ϕ B , Q 1 w + ( e 1 ; 0 ) , Υ · ϕ B , Q 1 w , Υ ,
where P 1 , 1 = e 1 T P e 1 = e 1 T P 1 and ϕ B , Q 1 w , Υ = 1 in the case of B | Υ + 1 m = ( Q 1 w ) | Υ + 1 m . We then substitute Ξ ˜ χ , z ( 1 ) = I χ ˜ z + p ˜ χ , 1 in (21), and the equation expression becomes:
F A ( Ξ ˜ χ , z ( 1 ) ) = F A I χ ˜ z + p ˜ χ , 1 = | h | 2 · i 2 B T ( e 1 ; 0 ) + 2 P 1 , 1 · w F 2 m i 2 ( I χ ˜ z ) T w · ϕ B , Q 1 w + ( e 1 ; 0 ) , Υ · ϕ B , Q 1 w , Υ = | h | 2 · i 2 B T ( e 1 ; 0 ) + 2 P 1 , 1 · w F 2 m i 2 ( B | Υ + 1 m ) T z = 2 Υ · | h | 2 ( 1 ) B 1 + ( B | Υ + 1 m ) T z ,
where $B_1 = \mathbf{e}_1^{\mathrm{T}}\mathbf{B}$ and $\mathbf{z} \in \mathbb{F}_2^{m-\Upsilon}$. To sum up, $F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)$ takes the value $2^{\Upsilon} \cdot |h|^2 \cdot (-1)^{B_1 + (\mathbf{B}|_{\Upsilon+1}^{m})^{\mathrm{T}}\mathbf{z}}$ if and only if $\tilde{\Xi}_{\chi,\mathbf{z}}(1) = \mathbf{I}_{\tilde{\chi}}\mathbf{z} + \tilde{\mathbf{p}}_{\chi,1}$.
Next, we calculate the distribution of $Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)$; we expand $Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)$ into two parts:
Z 1 ( Ξ ˜ χ , z ( 1 ) ) = n 1 H E I χ e 1 , I χ ˜ z + p ˜ χ , 1 n 1 = i ( I χ e 1 ) T · z F 2 m r ( I χ ˜ z + I χ P ^ 1 ) w F 2 m n 1 w + I χ e 1 ¯ · ( 1 ) w T · z F 2 m r ( I χ ˜ z + I χ P ^ 1 ) · n ( w ) .
Therefore, the variance of $Z_1\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)$ is $2^m \cdot N_0^2$. Following the same rule, $Z_2$ becomes
Z 2 ( Ξ ˜ χ , z ( 1 ) ) = h · C P ^ , B , I χ H E I χ e 1 , I χ ˜ z + p ˜ χ , 1 n + n H E I χ e 1 , I χ ˜ z + p ˜ χ , 1 C P ^ , B , I χ = h · i ( I χ e 1 ) T · ( I χ ˜ z + I χ P ^ 1 ) w F 2 m C P ^ , B , I χ w + I χ e 1 ¯ · ( 1 ) w T · ( I χ ˜ z + I χ P ^ 1 ) · n ( w ) + h · i ( I χ e 1 ) T · ( I χ ˜ z + I χ P ^ 1 ) w F 2 m n w + I χ e 1 ¯ · ( 1 ) w T · ( I χ ˜ z + I χ P ^ 1 ) · C P ^ , B , I χ ( w ) .
According to (24), the mean of the variable $Z_2\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)$ is 0 and its variance is $2 \cdot 2^m |h|^2 N_0$. We denote $\sigma_{F,1}^2 = 2^{m-1}\big(2|h|^2 N_0 + N_0^2\big)$. Since the distribution of $Z\big(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)\big)$ is the same as that of $Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)$, we have $Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) \sim \mathcal{CN}(0, 2\sigma_{F,1}^2)$, $Z\big(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)\big) \sim \mathcal{CN}(0, 2\sigma_{F,1}^2)$, and $F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) + Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) \sim \mathcal{CN}\big(2^{\Upsilon} \cdot |h|^2 \cdot (-1)^{B_1 + (\mathbf{B}|_{\Upsilon+1}^{m})^{\mathrm{T}}\mathbf{z}}, 2\sigma_{F,1}^2\big)$.
Based on the above distributions, the first term of the last expression in (19) can be approximated as:
Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ + Z ( Ξ ˜ χ , z ( 1 ) ) > Z ( Ξ ¯ ¯ χ , z ( 1 ) ) = 1 Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) Z ( Ξ ˜ χ , z ( 1 ) ) | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ · Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) > 0 Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) Z ( Ξ ˜ χ , z ( 1 ) ) | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ · Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) < 0 = 1 2 Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) Z ( Ξ ˜ χ , z ( 1 ) ) | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ · Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) > 0 ( d ) 1 Q 2 Υ | h | 2 · ( 1 ) B 1 + ( B | Υ + 1 m ) T z 2 σ F , 1 ,
where $(d)$ relies on $Z_1\big(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)\big) + Z_2\big(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)\big) - Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) \sim \mathcal{N}(0, 4\sigma_{F,1}^2)$. We then compute the latter term of (19) as
Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ + Z ( Ξ ˜ χ , z ( 1 ) ) > 0 = Pr Z ( Ξ ˜ χ , z ( 1 ) ) > | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ = Pr Z ( Ξ ˜ χ , z ( 1 ) ) > 2 Υ | h | 2 · ( 1 ) B 1 + ( B | Υ + 1 m ) T z = Q 2 Υ | h | 2 · ( 1 ) B 1 + ( B | Υ + 1 m ) T z 2 σ F , 1 .
By substituting (25) and (26) into (18), we finally get
[ p ¯ ¯ χ , 1 ( j ) ] j = 1 2 Υ 1 Pr F I χ ˜ z + p ˜ χ , 1 > F I χ ˜ z + p ¯ ¯ χ , 1 ( j ) Q 2 Υ | h | 2 · ( 1 ) B 1 + ( B | Υ + 1 m ) T z 2 σ F , 1 · Q 2 Υ | h | 2 · ( 1 ) B 1 + ( B | Υ + 1 m ) T z 2 σ F , 1 2 Υ 1 .
Equation (27) is consistent with (13) in Lemma 1, so Lemma 1 holds for r = 1 .
(2) When 2 r Υ : Algorithm 1’s input signal becomes
y r = λ P R M , ϖ ( r ) · y r 1 = C P ^ , B , I χ + i = 1 r 1 λ P R M , ϖ ( i ) · n = C P ^ , B , I χ + 1 2 i 1 i = 1 r 1 I N + ϖ i · E I χ e i , I χ P ^ i · n .
By substituting (28) into (16), we have $\sigma_{F,r}^2 = 2^{m-1}\big(\frac{2|h|^2 N_0}{2^{r-1}} + (\frac{N_0}{2^{r-1}})^2\big)$. Thus, $Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(r)\big) \sim \mathcal{CN}(0, 2\sigma_{F,r}^2)$ and $Z\big(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(r)\big) \sim \mathcal{CN}(0, 2\sigma_{F,r}^2)$. Furthermore, we get $F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(r)\big) + Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(r)\big) \sim \mathcal{CN}\big(2^{\Upsilon} \cdot |h|^2 (-1)^{B_r + (\mathbf{B}|_{\Upsilon+1}^{m})^{\mathrm{T}}\mathbf{z}}, 2\sigma_{F,r}^2\big)$. The derivation process is similar to the $r = 1$ case, and we omit it here. The result is consistent with (13) and (14) in Lemma 1.
The PRM sequence is reconstructed successfully if the estimations for all layers are correct. Therefore, Algorithm 1 has a successful detection probability as follows:
$$\check{P}\big(\Upsilon, N_0 \mid h\big) = \prod_{r=1}^{\Upsilon} \Pr\big(\mathcal{P}(r) \mid \mathcal{P}(1, \ldots, r-1), h\big).$$
Taking the channel coefficient h into account, we get the following equation
$$P_{\mathrm{succ}} = \int_{h} f_{\mathcal{CN}}(h; 0, 1) \cdot \check{P}\big(\Upsilon, N_0 \mid h\big)\, dh.$$
Thus, the proof of Theorem 1 is complete. □
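The average success probability in Theorem 1 can be evaluated numerically by Monte-Carlo averaging over $h \sim \mathcal{CN}(0, 1)$. The sketch below does this under the Q-function arguments as reconstructed in Eq. (13) above (with $\mathbf{z} = \mathbf{0}_{m-\Upsilon}$ and the sign term taken as $+1$); those reconstructed scalings are assumptions of this write-up, not a verbatim reproduction of the paper's expression.

```python
import numpy as np
from scipy.stats import norm

def layer_success_prob(h_abs2, Upsilon, m, r, N0):
    """Per-layer success probability following the structure of Lemma 1 (Eq. (13)).
    Sign/scaling follow the reconstruction used here; z = 0 and the (-1)^(...) factor is +1."""
    sigma2 = 2**(m - 1) * (2 * h_abs2 * N0 / 2**(r - 1) + (N0 / 2**(r - 1))**2)
    A = 2**Upsilon * h_abs2
    Q = lambda x: norm.sf(x)  # the Gaussian Q-function
    return Q(-A / np.sqrt(2 * sigma2)) * Q(-A / (2 * np.sqrt(sigma2)))**(2**(Upsilon - r + 1) - 1)

def p_succ_monte_carlo(Upsilon, m, N0, trials=20000, seed=0):
    """Average success probability of Algorithm 1 over h ~ CN(0,1), as in Theorem 1 (Eq. (15))."""
    rng = np.random.default_rng(seed)
    h_abs2 = 0.5 * (rng.standard_normal(trials)**2 + rng.standard_normal(trials)**2)
    probs = np.ones(trials)
    for r in range(1, Upsilon + 1):
        probs *= layer_success_prob(h_abs2, Upsilon, m, r, N0)
    return probs.mean()

print(round(p_succ_monte_carlo(Upsilon=6, m=6, N0=0.5), 4))
```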
The factors that affect the efficiency of the layer-by-layer PRM projection algorithm are discussed below. According to Lemma 1, the conditional probability $\Pr\big(\mathcal{P}(r) \mid \mathcal{P}(1, \ldots, r-1), h\big)$ of the vector $\hat{\mathbf{P}}_r$ is related to the factors $\Upsilon$, $r$, $m$, and $N_0$. Since it is challenging to express the relationship between them mathematically, we fix the other factors and track the changes in the conditional probability as one factor varies. In accordance with Figure 2, Figure 3, Figure 4 and Figure 5, the following results are obtained:
  • Assume that the PRM code is of full rank with $\Upsilon = m$. When $m$ and $N_0$ are given, as $r$ increases from 1 to $m$, the value of $\Pr\big(\mathcal{P}(r) \mid \mathcal{P}(1, \ldots, r-1), h\big)$ increases (Figure 2). The results illustrate a property of the projection algorithm: if the first few layers are error-free, then the remaining layers are highly likely to be recovered correctly. Additionally, a further conclusion can be drawn from the above proof: $\mathbf{n}_r \sim \mathcal{CN}\big(0, \frac{N_0}{2^{r-1}}\big)$ indicates that the noise power in each layer decreases exponentially as $r$ increases, leading to the successful recovery probability of the vector increasing layer by layer. Overall, the recovery reliability of the first few layers of the layer-by-layer PRM projection algorithm is crucial to its performance. This motivates us to increase the number of candidate paths in the first few layers to obtain better detection.
  • As shown in Figure 3, when r , Υ and N 0 are given, Pr P ( r ) | P ( 1 , , r 1 ) , h increases as m increases, which shows that we can improve the performance of the PRM sequence detection algorithm by increasing the length of the PRM sequence.
  • Figure 4 details the variation of Pr P ( r ) | P ( 1 , , r 1 ) , h with N 0 for fixed r , m and Υ . It is clear that as N 0 increases, the signal-to-noise ratio (SNR) decreases, resulting in Pr P ( r ) | P ( 1 , , r 1 ) , h decreasing consequently.
  • Figure 5 gives the variation of $\Pr\big(\mathcal{P}(r) \mid \mathcal{P}(1, \ldots, r-1), h\big)$ with $r$ for fixed $N_0$, $m$, and $\Upsilon$: a larger $\Upsilon$ indicates better performance under the same $m$, and this result is consistent with the fact that a greater signal power results in a greater signal-to-noise ratio. Furthermore, the larger $m$ is, the worse the performance under the same layer $r$, indicating that the closer $\Upsilon$ is to $m$, the better the performance will be.

4.2. Successful Recovery Probability of the Iterative List PRM Projection Algorithm

In this section, Lemma 2 and Theorem 2 analyze the performance of the iterative list PRM projection algorithm in a multi-user scenario.
Lemma 2. 
Suppose that the input signal of Algorithm 2 contains a linear combination of all $K_a$ active users' sequences. We let these sequences be detected in the order $(\rho_1, \rho_2, \ldots, \rho_{K_a})$. We denote the event "the $\rho_k$-th recovered sequence has been successfully detected" as $\mathcal{B}_{\rho_k}$ and the event "the sequences with order $(\rho_1, \ldots, \rho_{k-1})$ have been successfully detected and have been subtracted from the received signal" as $\mathcal{B}_{\rho_1, \ldots, \rho_{k-1}}$. Thus, given the channel parameters $\underline{h} = [h_1, \ldots, h_{K_a}]$ and the event $\mathcal{B}_{\rho_1, \ldots, \rho_{k-1}}$, the probability that the event $\mathcal{B}_{\rho_k}$ occurs in the first iteration of Algorithm 2 can be approximated as
$$\Pr\big(\mathcal{B}_{\rho_k} \mid \mathcal{B}_{\rho_1, \ldots, \rho_{k-1}}, \underline{h}\big) = \Pr\big(\mathcal{P}_{\rho_k}(1) \mid \underline{h}\big) \cdot \check{P}\big(\Upsilon_{\rho_k}-1, N_0(\Upsilon_{\rho_k}-1) \mid \underline{h}\big).$$
In (31), $\Pr\big(\mathcal{P}_{\rho_k}(1) \mid \underline{h}\big)$ refers to the probability of $\hat{\mathbf{P}}_{\rho_k}\mathbf{e}_1$ being successfully recovered under the condition $\underline{h}$, which is expressed as
$$\Pr\big(\mathcal{P}_{\rho_k}(1) \mid \underline{h}\big) = \prod_{j=1}^{2^{\Upsilon_{\rho_k}}-1} Q\!\left(-\frac{2^{\Upsilon_{\rho_k}}|h_{\rho_k}|^2 + \sum_{i=\rho_{k+1}}^{\rho_{K_a}} \big(n_{\rho_k, i} - n_{i,j}\big) \cdot 2^{\Upsilon_{\rho_i}}|h_i|^2}{\sqrt{2}\,\sigma_{\Lambda,1}}\right) \cdot \left[Q\!\left(-\frac{2^{\Upsilon_{\rho_k}}|h_{\rho_k}|^2}{\sigma_{\Lambda,1}}\right)\right]^{2^{\Upsilon_{\rho_k}}-1},$$
where the notation $n_{\rho_k, i}$ indicates the number of intersections of the positions corresponding to non-zero $F(\cdot)$ for the $i$-th detected sequence and the $\rho_k$-th detected sequence. The notation $n_{i,j}$ represents the number of intersections of the positions corresponding to non-zero $F(\cdot)$ for the $i$-th detected sequence and the $j$-th coset of the set related to non-zero $F(\cdot)$ for the first-layer recovery of the $\rho_k$-th detected sequence. Additionally, the variance in (32) is
$$\sigma_{\Lambda,1}^2 = \sum_{\rho=\rho_k}^{\rho_{K_a}} \sum_{\rho'=\rho_k,\, \rho' \neq \rho}^{\rho_{K_a}} \big|h_{\rho}^{\mathrm{H}} \cdot h_{\rho'}\big|^2 \cdot \sigma_{\mu,1}^2 + 2^{m-1} \cdot \left(\sum_{\rho=\rho_k}^{\rho_{K_a}} 2|h_{\rho}|^2 N_0 + N_0^2\right),$$
in which $\mu$ is the inner product formed by two PRM sequences of length $2^m$, and $\sigma_{\mu,1}^2$ denotes the variance of this inner product. The second component $\check{P}\big(\Upsilon_{\rho_k}-1, N_0(\Upsilon_{\rho_k}-1) \mid \underline{h}\big)$ in Equation (31) represents the likelihood of a successful recovery of the matrix $\hat{\mathbf{P}}^{\Upsilon_{\rho_k}-1}$ given $\underline{h}$, where $\hat{\mathbf{P}}^{\Upsilon_{\rho_k}-1}$ denotes the submatrix of the symmetric matrix $\hat{\mathbf{P}}_{\rho_k}$ with rank $(\Upsilon_{\rho_k}-1)$ for order $\rho_k$. Additionally, the notation $N_0(\Upsilon_{\rho_k}-1) = \frac{1}{2}\big(\sum_{\rho=\rho_{k+1}}^{\rho_{K_a}} |h_{\rho}|^2 + N_0\big)$.
Theorem 2. 
Suppose that the input signal of Algorithm 2 contains a linear combination of all $K_a$ active users' sequences. We denote each sequence by the index $k$ ($1 \le k \le K_a$) and let these sequences be detected in the order $(\rho_1, \rho_2, \ldots, \rho_{K_a})$. Then, given the channel parameters $\underline{h}$, the probability that Algorithm 2 can successfully detect the sequence $\rho$ ($1 \le \rho \le K_a$) in the first iteration is
$$\check{\check{P}}_{\rho}\big(K_a, m, N_0 \mid \underline{h}\big) = \Pr\big(\mathcal{B}_{\rho} \mid \underline{h}\big) + \sum_{k=2}^{K_a} \sum_{(\rho_1, \ldots, \rho_{k-1}) \in \mathcal{N}_k} \prod_{k'=1}^{k-1} \Pr\big(\mathcal{B}_{\rho_{k'}} \mid \mathcal{B}_{\rho_1, \ldots, \rho_{k'-1}}, \underline{h}\big) \cdot \Pr\big(\mathcal{B}_{\rho} \mid \mathcal{B}_{\rho_1, \ldots, \rho_{k-1}}, \underline{h}\big),$$
where the set $\mathcal{N}_k$ contains all permutations of $(k-1)$ detection-order indexes drawn from the set $\mathcal{K}_a \setminus \rho$. Furthermore, by taking the channel parameters into consideration, the average probability of successful detection for the first iteration of the iterative PRM detection can be approximated as
$$\check{\check{P}}_{\mathrm{ave}}\big(K_a, m, N_0 \mid \underline{h}\big) = \frac{1}{K_a} \int_{\underline{h}} \prod_{\rho=1}^{K_a} f_{\mathcal{CN}}\big(h_{\rho}; 0, 1\big) \cdot \sum_{\rho=1}^{K_a} \check{\check{P}}_{\rho}\big(K_a, m, N_0 \mid \underline{h}\big)\, d\underline{h}.$$
Proof. 
For convenience, two users are assumed to be active in the system. Since users are not provided with IDs in the URA scenario, we can deduce the theoretical successful detection probabilities with the help of the detection order between sequences; we denote the two sequences as $a$ and $b$, respectively, and the detection order between them is represented by $\rho_a$ and $\rho_b$. At this point, the input signal of the algorithm is rewritten as a linear superposition of the two users of the form $\mathbf{y} = h_a C_{\hat{\mathbf{P}}_a, \mathbf{B}_a, \mathbf{I}_{\chi,a}} + h_b C_{\hat{\mathbf{P}}_b, \mathbf{B}_b, \mathbf{I}_{\chi,b}} + \mathbf{n}$. The ranks of the PRM codes $C_{\hat{\mathbf{P}}_a, \mathbf{B}_a, \mathbf{I}_{\chi,a}}$ and $C_{\hat{\mathbf{P}}_b, \mathbf{B}_b, \mathbf{I}_{\chi,b}}$ are denoted as $\Upsilon_a$ and $\Upsilon_b$, respectively. According to the expansion in (16), $F_M$ can be expanded for the $r$-th layer as
F M ( f r ) = ( h a C P ^ a , B a , I χ , a + h b C P ^ b , B b , I χ , b + n ) H · E I χ e r , f r · ( h a C P ^ a , B a , I χ , a + h b C P ^ b , B b , I χ , b + n ) = | h a | 2 C P ^ a , B a , I χ , a H E I χ e r , f r C P ^ a , B a , I χ , a F a ( f r ) + | h b | 2 C P ^ b , B b , I χ , b H E I χ e r , f r C P ^ b , B b , I χ , b F b ( f r ) + h a H · h b C P ^ a , B a , I χ , a H E I χ e r , f r C P ^ b , B b , I χ , b Z a b , 1 ( f r ) + h a · h b H C P ^ b , B b , I χ , b H E I χ e r , f r C P ^ a , B a , I χ , a Z a b , 2 ( f r ) + h a C P ^ a , B a , I χ , a H E I χ e r , f r n + n H E I χ e r , f r C P ^ a , B a , I χ , a X a , 1 ( f r ) + h b C P ^ b , B b , I χ , b H E I χ e r , f r n + n H E I χ e r , f r C P ^ b , B b , I χ , b X a , 2 ( f r ) + n H E I χ e r , f r n N ( f r ) .
For ease of notation, we define $\Lambda(\mathbf{f}_r) = \big(Z_{ab,1} + Z_{ab,2} + X_{a,1} + X_{a,2} + N\big)(\mathbf{f}_r)$.
We first assess the performance of the first-layer ($r = 1$) detection for the $\rho_a$-th sequence: suppose that, in the first iteration of Algorithm 2, active users are detected successively according to the order $(\rho_a, \rho_b)$. In the following, we write the probability of the event $\mathcal{P}_{\rho_a}(1)$ in the form
Pr P ρ a ( 1 ) | h = Pr z F 2 m Υ a F I ˜ χ , a z + p ˜ χ a , 1 > max z F 2 m Υ a F I ˜ χ , a z + p ¯ ¯ χ a , 1 = [ p ¯ ¯ χ a , 1 ( j ) ] j 2 Υ a 1 Pr z F 2 m Υ a F I ˜ χ , a z + p ˜ χ a , 1 > z F 2 m Υ a F I ˜ χ , a z + p ¯ ¯ χ a , 1 ( j ) .
We omit the subscript $M$ and write $F(\cdot)$ in this section. As seen in (37), the present situation is more complicated than the single-user scenario. One of the problems is that the $2^{m-\Upsilon_a}$ jointly summed events are no longer independent and no longer obey the same distribution. In addition, owing to the properties of the PRM sequence, the vector $(\tilde{\mathbf{I}}_{\chi,a}\mathbf{z} + \tilde{\mathbf{p}}_{\chi_a,1})$ in (37) is specific to the $\rho_a$-th detected sequence, and the $\rho_b$-th sequence utilizes a PRM whose rank $\Upsilon_b$ is not necessarily equivalent to $\Upsilon_a$; therefore, the analysis requires a detailed consideration of the interference of the other user. For simplicity, we first approximate a single term in (37) as follows:
Pr z F 2 m Υ a F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + F b ( Ξ ˜ χ a , z ( 1 ) ) z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) = ( e ) Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + z F 2 m Υ a F b ( Ξ ˜ χ a , z ( 1 ) ) Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) ( f ) Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + z F 2 m Υ a F b ( Ξ ˜ χ a , z ( 1 ) ) Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) · Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > 0 ,
where $\tilde{\Xi}_{\chi_a,\mathbf{z}}(1) = \tilde{\mathbf{I}}_{\chi,a}\mathbf{z} + \tilde{\mathbf{p}}_{\chi_a,1}$ and $\bar{\bar{\Xi}}_{\chi_a,\mathbf{z}}(j,1) = \tilde{\mathbf{I}}_{\chi,a}\mathbf{z} + \bar{\bar{\mathbf{p}}}_{\chi_a,1}(j)$ (unlike in (19), the superscript "$j$" cannot be dropped from $\bar{\bar{\Xi}}_{\chi_a,\mathbf{z}}(j,1)$, since the jointly multiplied events in (37) are no longer independent), and $\bar{\bar{\Xi}}_{\chi_a,\mathbf{z}}(j,1)$ is the $j$-th coset of the set $\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)$ ($1 \le j \le 2^{\Upsilon_a}-1$). It is known that $F_a\big(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)\big)$ takes non-zero values on the set $\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)$. Additionally, the equality $(e)$ retains one of the terms on both sides owing to the identical distribution, and the inequality $(f)$ follows Formula (18). In the following, we calculate each of the two probabilities in the result of (38).
We first solve the probability of the latter term. Since we know that $F_a\big(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)\big)$ takes a fixed value, it is sufficient to obtain the distribution of $\Lambda\big(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)\big)$. The terms $Z_{ab,1}\big(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)\big)$ and $Z_{ab,2}\big(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)\big)$ in $\Lambda\big(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)\big)$ can be expanded as
Z a b , 1 ( Ξ ˜ χ a , z ( 1 ) ) = h a H · h b · C P ^ a , B a , I χ , a H E I χ a e r , Ξ ˜ χ a , z ( 1 ) C P ^ b , B b , I χ , b = h a H · h b · i ( I χ e r ) T Ξ ˜ χ a , z ( 1 ) w F 2 m C P a ^ , B a , I χ a w + I χ a e r ¯ · ( 1 ) w T · Ξ ˜ χ a , z ( 1 ) · C P b ^ , B b , I χ b ( w ) = h a H · h b · i ( I χ · e r + 2 w ) T · Ξ ˜ χ a , z ( 1 ) · μ ρ a ¯ , ρ b ( 1 )
and
Z a b , 2 ( Ξ ˜ χ a , z ( 1 ) ) = h a · h b H · C P ^ b , B b , I χ , b H E I χ a e r , Ξ ˜ χ a , z ( 1 ) C P ^ a , B a , I χ , a = h a · h b H · i ( I χ e r ) T Ξ ˜ χ a , z ( 1 ) w F 2 m C P b ^ , B b , I χ b w + I χ a e r ¯ · ( 1 ) w T · Ξ ˜ χ a , z ( 1 ) · C P a ^ , B a , I χ a ( w ) = h a · h b H · i ( I χ · e r + 2 w ) T · Ξ ˜ χ a , z ( 1 ) · μ ρ a , ρ b ¯ ( 1 ) ,
respectively, where $\mu_{\bar{\rho}_a, \rho_b}(1)$ and $\mu_{\rho_a, \bar{\rho}_b}(1)$ are inner products of the two sequences. The inner product is related to the pattern form of the PRM. Since the occupancy patterns are equiprobable, the mean of the inner product is 0, and we denote its variance as $\sigma_{\mu,1}^2$. Thus, we obtain that $\Lambda\big(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)\big)$ obeys the distribution $\mathcal{N}(0, \sigma_{\Lambda,1}^2)$, where the variance is
$$\sigma_{\Lambda,1}^2 = \big(|h_a^{\mathrm{H}} \cdot h_b|^2 + |h_a \cdot h_b^{\mathrm{H}}|^2\big) \cdot \sigma_{\mu,1}^2 + 2^{m-1}\big(2(|h_a|^2 + |h_b|^2) N_0 + N_0^2\big).$$
In light of this, the latter term of (38) can be computed as
$$\Pr\big(F_a + \Lambda(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)) > 0\big) = \Pr\big(\Lambda(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)) > -F_a(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1))\big) = \Pr\big(\Lambda(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)) > -2^{\Upsilon_a}|h_a|^2 \cdot (-1)^{B_{a,1} + (\mathbf{B}_a|_{\Upsilon+1}^{m})^{\mathrm{T}}\mathbf{z}}\big) \overset{(g)}{=} Q\!\left(-\frac{2^{\Upsilon_a}|h_a|^2}{\sigma_{\Lambda,1}}\right).$$
In the equality $(g)$, we set $\mathbf{z} = \mathbf{0}_{m-\Upsilon}$; this setting is also used in the following formulas.
We next solve the probability of the first term in (38). We suppose that the set corresponding to non-zero $F_b(\cdot)$ is denoted as $\tilde{\Xi}_{\chi_b,\mathbf{z}}(1)$, and we let $|\tilde{\Xi}_{\chi_a,\mathbf{z}}(1) \cap \tilde{\Xi}_{\chi_b,\mathbf{z}}(1)| = n_{ab}$; obviously, $|\tilde{\Xi}_{\chi_b,\mathbf{z}}(1)| = 2^{m-\Upsilon_b}$. We further denote $|\bar{\bar{\Xi}}_{\chi_a,\mathbf{z}}(j,1) \cap \tilde{\Xi}_{\chi_b,\mathbf{z}}(1)| = n_{b,j}$ and $n_{ab} + \sum_{j=1}^{2^{\Upsilon_a}-1} n_{b,j} = 2^{m-\Upsilon_b}$. On this basis, the first term of the result of (38) yields the following equation:
Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + z F 2 m Υ a F b ( Ξ ˜ χ a , z ( 1 ) ) Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) = Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + n a b · 2 Υ b | h b | 2 Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + n b , j · 2 Υ b | h b | 2 2 Υ b = 1 Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) Λ ( Ξ ˜ χ a , z ( 1 ) ) 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 · Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) > 0 Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) Λ ( Ξ ˜ χ a , z ( 1 ) ) 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 · Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) < 0 1 Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) Λ ( Ξ ˜ χ a , z ( 1 ) ) 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 = ( h ) 1 Q 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 2 σ Λ , 1 ,
where the equality $(h)$ is based on $\Lambda\big(\bar{\bar{\Xi}}_{\chi_a,\mathbf{z}}(j,1)\big) - \Lambda\big(\tilde{\Xi}_{\chi_a,\mathbf{z}}(1)\big) \sim \mathcal{N}(0, 2\sigma_{\Lambda,1}^2)$. On this basis, we return to (38) and obtain the final result
j = 1 2 Υ a 1 Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + z F 2 m Υ a F b ( Ξ ˜ χ a , z ( 1 ) ) Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) = j = 1 2 Υ a 1 Q 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 2 σ Λ , 1 · Q 2 Υ a | h a | 2 σ Λ , 1 .
This equation is consistent with (32) in Lemma 2.
Now we present a special example to provide a further explanation for the result of (43): We suppose that two users have the same submatrix I χ and different vectors P ^ 1 ; thus, (37) directly becomes
j = 1 2 Υ a 1 Pr z F 2 m Υ a F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > z F 2 m Υ a F b + Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) j = 1 2 Υ a 1 Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) · Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > 0 = Q 2 Υ a | h a | 2 σ Λ , 1 2 Υ a 1 · j = 1 2 Υ a 2 Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) I · Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) I I = Q 2 Υ a | h a | 2 σ Λ , 1 2 Υ a 1 · Q 2 Υ a | h a | 2 2 σ Λ , 1 2 Υ a 2 · Q 2 Υ a | h a | 2 2 Υ b | h b | 2 2 σ Λ , 1 .
In this case, term I refers to interference imposed by the cosets other than set Ξ ˜ χ a , z ( 1 ) on the ρ a -th detected sequence. Since | Ξ ¯ ¯ χ a , z ( j , 1 ) Ξ ˜ χ b , z ( 1 ) | = 2 Υ b for one particular j, we conclude that there is only one case in term I I . Additionally, owing to F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) = 0 for the remaining ( 2 Υ a 2 ) possible j, it allows ( 2 Υ a 2 ) possibilities for the term I. The results of (44) satisfy (43) provided that n a b = 0 and n b , j = 1 for one j, 1 j 2 Υ a 1 . This equation is also consistent with (32) in Lemma 2.
Based on the above results, after recovering vector P ^ ρ a e 1 , the y 2 becomes
y 2 = λ P R M , ρ a ( 1 ) · y 1 = h ρ a C P ^ ρ a , B ρ a , I χ , ρ a + λ P R M , ρ a ( 1 ) · h ρ b C P ^ ρ b , B ρ b , I χ , ρ b + n = h ρ a C P ^ ρ a , B ρ a , I χ , ρ a + λ P R M , ρ a ( 1 ) · h ρ b C P ^ ρ b , B ρ b , I χ , ρ b + λ P R M , ρ a ( 1 ) · n ,
where $\lambda_{PRM,\rho_a}^{(1)}$ is the first-layer projection operator for the user whose detection order is $\rho_a$. We can approximate the noise as a Gaussian random variable whose mean is 0 and whose variance is set to $N_0(\Upsilon_a - 1) = \frac{1}{2}\big(|h_{\rho_b}|^2 + N_0\big)$. In light of this, the subsequent detection procedure in layers $2 \le r \le \Upsilon_a$ is equivalent to the recovery of the symmetric matrix $\hat{\mathbf{P}}_{\rho_a}^{\Upsilon_{\rho_a}-1}$ in a single-sequence scenario under the condition that the noise variance is $N_0(\Upsilon_a - 1)$. Since the $\rho_a$-th detected sequence can be successfully recovered if and only if both $\hat{\mathbf{P}}_{\rho_a}\mathbf{e}_1$ and $\hat{\mathbf{P}}_{\rho_a}^{\Upsilon_{\rho_a}-1}$ are correct, the successful detection probability of the event "the $\rho_a$-th recovered sequence has been successfully detected" is
$$\Pr\big(\mathcal{B}_{\rho_a} \mid \underline{h}\big) = \Pr\big(\mathcal{P}_{\rho_a}(1) \mid \underline{h}\big) \cdot \check{P}\big(\Upsilon_{\rho_a}-1, N_0(\Upsilon_a - 1) \mid \underline{h}\big).$$
This result is consistent with (31) in Lemma 2.
Next, we assume that the sequence $C_{\hat{\mathbf{P}}_{\rho_a}, \mathbf{B}_{\rho_a}, \mathbf{I}_{\chi,\rho_a}}$ is ideally eliminated from the signal $\mathbf{y}$; the receiver then proceeds to recover $C_{\hat{\mathbf{P}}_{\rho_b}, \mathbf{B}_{\rho_b}, \mathbf{I}_{\chi,\rho_b}}$, and we denote its probability of successful detection as $\Pr\big(\mathcal{B}_{\rho_b} \mid \mathcal{B}_{\rho_a}, \underline{h}\big) = \check{P}\big(\Upsilon_{\rho_b}, N_0 \mid h_{\rho_b}\big)$.
As a next step, we calculate the average probability of the two sequences $a$ and $b$ being detected successfully. Sequence $a$ can be recovered in the following two scenarios:
  • In the first instance, sequence a is detected successfully first, which has a chance of Pr B a | h ̲ .
  • In the second instance, sequence b is detected in priority with the probability of Pr B b | h ̲ , followed by sequence a being detected from the subtracted signal with the probability Pr B a | B b , h ̲ .
As a result, the probability that sequence a will be successfully detected equals
$$\check{\check{P}}\big(K_a, m, N_0 \mid \underline{h}\big) = \Pr\big(\mathcal{B}_1 \mid \underline{h}\big) + \Pr\big(\mathcal{B}_2 \mid \underline{h}\big) \cdot \Pr\big(\mathcal{B}_1 \mid \mathcal{B}_2, \underline{h}\big).$$
The equation is consistent with (34) in Theorem 2. Finally, the average probability of successful detection equals
$$\check{\check{P}}_{\mathrm{ave}}\big(K_a, m, N_0 \mid \underline{h}\big) = \frac{1}{2} \int_{\underline{h}} \prod_{\rho=1}^{2} f_{\mathcal{CN}}\big(h_{\rho}; 0, 1\big) \cdot \Big[\check{\check{P}}_1\big(K_a, m, N_0 \mid \underline{h}\big) + \check{\check{P}}_2\big(K_a, m, N_0 \mid \underline{h}\big)\Big]\, d\underline{h}.$$
The equation is consistent with (35) in Theorem 2. □

5. Simulation Results

In this section, we first examine the effectiveness of the proposed algorithms by evaluating numerical simulations of the successful detection probability in Section 5.1. Additionally, in Section 5.2, a slot-controlled CCS scheme with the proposed iterative list projection algorithm in place of the original inner decoder is measured against several benchmarks.

5.1. Performances of the Algorithms 1 and 2

In this section, we will examine the detection capability of Algorithms 1 and 2, and our proposed scheme’s performance is determined by computing the successful access probability. In the sequel, the received signal-to-noise ratio (SNR) at the AP is given by
$$\mathrm{SNR}\,(\mathrm{dB}) = \mathbb{E}\left[10 \lg \frac{\|\mathbf{y}(t) - \mathbf{e}(t)\|^2}{\|\mathbf{e}(t)\|^2}\right].$$
The successful detection probability is the mean proportion of correctly detected active users to the total number of active users in the system.
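The two quantities above can be computed directly from simulated slot data; the sketch below does so, interpreting $\mathbf{e}(t)$ in Eq. (49) as the additive noise so that $\mathbf{y}(t) - \mathbf{e}(t)$ is the superimposed signal part, which is an assumption of this write-up.

```python
import numpy as np

def received_snr_db(y, noise):
    """Empirical received SNR following Eq. (49): signal part = y - e, noise part = e."""
    signal = y - noise
    return 10 * np.log10(np.linalg.norm(signal)**2 / np.linalg.norm(noise)**2)

def success_probability(detected_sets, active_sets):
    """Mean fraction of active users whose sequences were correctly detected."""
    return np.mean([len(det & act) / len(act) for det, act in zip(detected_sets, active_sets)])

# Toy usage: one slot with two unit-energy users and AWGN.
rng = np.random.default_rng(1)
N, N0 = 256, 0.1
x = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2 * N)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
e = np.sqrt(N0 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = h @ x + e
print(round(received_snr_db(y, e), 2))
print(success_probability([{1, 4}], [{1, 4, 7}]))  # 2 of 3 users recovered -> 0.667
```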
As for the performance of Algorithm 1: Figure 6 compares the numerical simulation results of the list PRM projection algorithm (Algorithm 1) with the theoretical analysis results based on (13) for different settings of $m$. We fix $\Upsilon = m$ and $H = 1$, and the transmit power of an active user is normalized to one. This figure illustrates that, although we employed a Gaussian approximation in the theoretical analysis, the numerical simulation results and the theoretical analysis are closely matched; the small gap between the two curves results from the Gaussian approximation. Thus, this result confirms the correctness of the theoretical analysis in Lemma 1. In addition, comparing the successful detection probability for different $m$, we find that the performance of the list PRM projection algorithm improves as the sequence length increases, further validating the conclusion in Section 4.1.
Next, we compare the performance of the list PRM projection algorithm (Algorithm 1) with existing compressed sensing decoding algorithms when RM sequences serve as the common codebook for all active users in a random access system. The numerical simulation results are shown in Figure 7, where "Origin PRM" and "List PRM" represent the original projection detection algorithm in [37] and its list version (Algorithm 1), respectively. "RM LLD" represents the RM sequence detection algorithm proposed in Reference [33], which is based on a shift-and-multiply operation on the received signal, and "List RM LLD" represents its list-version detection. For all algorithms, we set the sequence length to $N = 2^8$ and the list size to $H = 8$. As can be seen from Figure 7, the successful detection probabilities of all algorithms are close to one when the signal-to-noise ratio (SNR) is larger than −2 dB. However, the "RM LLD" algorithm suffers from severe performance degradation under low-SNR conditions compared to "Origin PRM", which shows strong robustness. In addition, owing to the enhanced list detection, "List RM LLD" and "List PRM" provide a more pronounced performance gain at lower SNR when compared with the algorithms without candidate saving.
As for the performance of Algorithm 2: in Figure 8, we compare the numerical simulation results of the iterative list PRM projection detection (Algorithm 2) with the theoretical analysis derived from Lemma 2. The number of iterations in the numerical simulation is set to one. As shown in Figure 8, the theoretical results fit the numerical results well, especially under high SNR conditions, verifying the correctness of Theorem 2.
In addition, both simulation and theoretical results show that the detection capability of the algorithm decreases as the number of active users in the system increases.

5.2. Iterative List PRM Projection Algorithm-Based Slot-Controlled URA

In this section, we combine an inner code employing the iterative list PRM projection detector with two types of practical outer codes (t-tree and Reed–Solomon codes) to form a slot-pattern-control-based CCS system (as illustrated in Section 2), and we demonstrate the benefit of the proposed algorithms for the overall system via simulation results. We consider a communication system in which each user transmits B = 100 bits by employing an H_N-length G-ary outer code over T = 2^15 channel uses. The H_N positions are carefully selected within the N_T chunks. There are G codewords in the inner codebook, each of length N = T/N_T. We employ the PRM sequences under a single rank Υ rather than all ranks in this section, and Algorithm 2 is used to decode the inner codes in the multi-sequence scenario. The energy efficiency of the different schemes is then plotted: energy efficiency is defined as the minimum E_b/N_0, over the possible scheme-specific parameters, for which the per-user error probability satisfies P_e < 0.1 and the false-alarm rate (FAR) satisfies P_f < 0.001. The numerical results are presented in Figure 9.
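The relations among these parameters can be checked directly. The short script below verifies, for the setups later reported in Tables 2 and 3, that the inner sequence length, the per-slot bit budget, and the outer-code rate are mutually consistent; the formulas N = T/N_T, G = x_p + J and rate = B/(H_N · J) are our reading of the text and tables, and the script is a sanity check rather than part of the decoding chain.

```python
# Consistency check of the slot-controlled CCS parameters (B = 100 bits, T = 2^15).
B, T = 100, 2 ** 15

setups = {
    "PRM-tree, t = 1":    dict(N_T=2 ** 8, H_N=2 ** 5, x_p=9, J=10),
    "PRM-tree, t = 2, 3": dict(N_T=2 ** 8, H_N=2 ** 6, x_p=7, J=8),
    "PRM-RS":             dict(N_T=2 ** 8, H_N=2 ** 5, x_p=9, J=6),
}

for name, p in setups.items():
    N = T // p["N_T"]                # inner PRM sequence length, N = T / N_T
    G = p["x_p"] + p["J"]            # bits per slot: slot-occupation control + message
    rate = B / (p["H_N"] * p["J"])   # outer-code rate implied by the tabulated values
    print(f"{name}: N = {N}, G-ary = {G}, outer-code rate = {rate:.4f}")
# Output matches Tables 2 and 3: N = 128, G = 19/15/15, rates 0.3125/0.1953/0.5208.
```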
We first compare the numerical simulations of the enhanced slot-controlled URA system whose outer code is a t-tree code against several benchmarks. We denote by “list PRM-tree” the scheme that concatenates the inner code (with an OMP decoder) to an outer decoder capable of correcting up to t errors; the benchmarks “t = 3, list PRM-tree, H = 1”, “t = 2, list PRM-tree, H = 1” and “t = 1, list PRM-tree, H = 1” are plotted in light blue in Figure 9 and are drawn from Reference [27], where the corresponding scheme is named “PRM-RS”. Blue lines illustrate the simulations of the proposed “t = 3, iterative list PRM-tree, H = 8”, “t = 2, iterative list PRM-tree, H = 8” and “t = 1, iterative list PRM-tree, H = 8” schemes. Specifically, in these proposed “iterative list PRM-tree, H = 8” schemes, the PRM sequences serve as the inner codewords, in which the first x_p bits are filled with the selected SPC and the remaining J bits transport the user’s message. The enhanced iterative list PRM projection algorithm is adopted to jointly recover the list of messages and the channel estimates of all sequences within one slot, while the outer code is still decoded by a decoder capable of correcting t errors. A greedy information-bit allocation method is applied to all list PRM-tree schemes: each subsequent slot is assigned the maximum number of information bits that maintains the E[|V_l|] ≤ 2^25 constraint while seeking the minimum E_b/N_0 (a sketch of this allocation is given after this paragraph). The energy efficiency is the minimum power needed to serve K_a users with a per-user error probability below 0.1; the corresponding curves are depicted in blue in Figure 9 under the optimal setups. The optimal setups of H_N, N, N_T, G, J and x_p are listed in Table 2. From Table 2, the outer-code length H_N = 32 is used for t = 1 and H_N = 64 for t = 2, 3; the latter case performs relatively poorly because it yields a greater number of simultaneous appearances per slot, which leads to inner-code failure at K_a = 185 for t = 1 and at K_a = 110 and K_a = 130 for t = 2 and t = 3, respectively, under the “list PRM-tree, H = 1” schemes, and at K_a = 220 for t = 1 and at K_a = 125 and K_a = 150 for t = 2 and t = 3, respectively, under the “iterative list PRM-tree, H = 8” schemes.
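The greedy information-bit allocation described above can be sketched as follows. The helper expected_list_size is a hypothetical placeholder assumed to return E[|V_l|] for a candidate allocation up to slot l (in the actual scheme this quantity would come from the outer tree-code analysis); the sketch only illustrates the greedy loop, not the paper’s implementation.

```python
def greedy_bit_allocation(num_slots, max_bits_per_slot, total_info_bits,
                          expected_list_size, list_size_budget=2 ** 25):
    """Assign to each successive slot the largest number of information bits
    such that the expected decoder list size E[|V_l|] stays within the budget;
    the remaining bit positions in each slot are left for parity."""
    allocation, remaining = [], total_info_bits
    for l in range(num_slots):
        chosen = 0
        for b in range(min(max_bits_per_slot, remaining), -1, -1):
            if expected_list_size(l, allocation + [b]) <= list_size_budget:
                chosen = b
                break
        allocation.append(chosen)
        remaining -= chosen
    return allocation
```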
Next, we compare the numerical simulations of the enhanced slot-controlled URA system whose outer code is a Reed–Solomon code against several benchmarks. The first benchmark, denoted the “RS scheme” (named the “Reed–Solomon scheme” in Reference [27]), is the CCS scheme that concatenates inner codes drawn i.i.d. uniformly on the power shell to an outer Reed–Solomon code. The second benchmark, “list PRM-RS, H = 1”, is taken directly from Reference [37] (where it is named “PRM-RS”); this slot-controlled URA scheme follows the system model in Section 2 but does not employ the enhanced detection algorithms. In addition, the “iterative list PRM-RS, H = 1” scheme is the proposed scheme in which the multi-user inner-code detector is replaced by Algorithm 2 but no extra candidates are saved when calling Algorithm 1 (since H = 1), whereas the proposed “iterative list PRM-RS, H = 8” scheme keeps H = 8 candidates in each layer while running Algorithm 1. The optimal setups of “list PRM-RS, H = 1”, “iterative list PRM-RS, H = 1” and “iterative list PRM-RS, H = 8” are listed in Table 3, and the energy-efficiency results are illustrated in Figure 9.
From the simulations depicted in Figure 9, the following observations can be drawn:
1.
For all “list PRM-tree” schemes, since the outer-code length H_N = 32 suits the t = 1 case and H_N = 64 suits the t = 2, 3 cases, the t = 1 case can accommodate more active users than t = 2, 3 owing to a smaller number of simultaneous appearances in each slot (the outer-code length affects the probability of appearance over slots; see the setup details in Table 2). Additionally, for a fixed inner-code length, a larger t for the outer code improves performance, so t = 3 is superior to t = 2. Furthermore, comparing the “list PRM-tree” and “iterative list PRM-tree” curves, we conclude that the enhanced inner code with the proposed iterative list projection algorithm lowers the energy-per-bit requirement needed to meet a target error probability and increases the number of users the system can accommodate.
2.
Figure 9 shows that the “iterative list PRM-RS, H = 1” curve outperforms the “list PRM-RS, H = 1” curve in terms of energy consumption for a fixed number of active users. This observation clearly demonstrates the benefit of Algorithm 2, which carries out iterative detection.
3.
Comparing the “iterative list PRM-RS, H = 1” and “iterative list PRM-RS, H = 8” schemes shows that detection performance improves as the number of candidates retained in the first few layers increases, confirming the significance of Algorithm 1, which secures reliable recovery of those layers.
Figure 10 shows the per-user error probability P_e as a function of E_b/N_0 for K_a = 130, 180 and 220 under the “list PRM-RS, H = 1”, “iterative list PRM-RS, H = 1” and “iterative list PRM-RS, H = 8” schemes. The optimal system parameters are given in Table 3. From Figure 10, we observe the following:
1.
For the “list PRM-RS, H = 1” and “iterative list PRM-RS, H = 1” schemes, the minimum P_e exceeds 0.1 when the user count reaches 220. In contrast, the minimum P_e of the “iterative list PRM-RS, H = 8” scheme stays below 0.1 once E_b/N_0 exceeds 16 dB.
2.
The simulation curves in Figure 10 are consistent with the performance shown in Figure 9: for the “list PRM-RS, H = 1” and “iterative list PRM-RS, H = 1” schemes, when the number of users exceeds 205 and 213, respectively, the number of simultaneous transmissions exceeds the identifiable maximum.
For the “iterative list PRM-RS, H = 8” scheme, this limit is reached when the number of users exceeds 243.

6. Conclusions

In this paper, we address a packetized and slotted-transmission CCS framework that concatenates an inner PRM code to an outer error-correction code. First, we improve the decoder of the inner code: we propose an enhanced algorithm that maintains multiple candidates so as to mitigate the risk of error propagation. On this basis, an iterative list PRM projection algorithm is proposed for the multi-sequence scenario. We then derive the theoretical success probabilities of the list projection and iterative list projection detection methods, and numerical simulations show that the theoretical and simulated results agree. Finally, we apply the iterative list PRM projection algorithm with two practical outer error-correction codes. The simulation results show that the packetized URA with the proposed iterative list projection detection outperforms the benchmarks in terms of both the number of active users supported in each slot and the energy per bit required to meet a target error probability.
Additional interesting research avenues include: (i) extending the PRM-based system to support multiple antennas; and (ii) exploiting the decomposition of the Clifford matrix G_F associated with the patterned RM code for information reconstruction at the transmitter, with the iterative approach generalized accordingly at the receiver.

Author Contributions

Methodology, W.X. and H.Z.; software, W.X. and R.T.; validation, W.X.; formal analysis, W.X. and R.T.; investigation, W.X. and R.T.; resources, W.X.; data curation, W.X.; writing—original draft preparation, W.X.; writing—review and editing, W.X. and R.T.; visualization, W.X. and R.T.; supervision, H.Z.; project administration, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Saad, W.; Bennis, M.; Chen, M. A vision of 6G wireless systems: Applications, trends, technologies, and open research problems. IEEE Netw. 2019, 34, 134–142.
2. Guo, F.; Yu, F.R.; Zhang, H.; Li, X.; Ji, H.; Leung, V.C. Enabling massive IoT toward 6G: A comprehensive survey. IEEE Internet Things J. 2021, 8, 11891–11915.
3. Pan, C.; Mehrpouyan, H.; Liu, Y.; Elkashlan, M.; Arumugam, N. Joint pilot allocation and robust transmission design for ultra-dense user-centric TDD C-RAN with imperfect CSI. IEEE Wirel. Commun. 2018, 17, 2038–2053.
4. Chettri, L.; Bera, R. A comprehensive survey on Internet of Things (IoT) toward 5G wireless systems. IEEE Internet Things J. 2020, 7, 16–32.
5. Masoudi, M.; Azari, A.; Yavuz, E.A.; Cavdar, C. Grant-free radio access IoT networks: Scalability analysis in coexistence scenarios. In Proceedings of the IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018; pp. 1–7.
6. Shahab, M.B.; Abbas, R.; Shirvanimoghaddam, M.; Johnson, S.J. Grant-free non-orthogonal multiple access for IoT: A survey. IEEE Commun. Surv. Tut. 2020, 22, 1805–1838.
7. Chen, X.; Chen, T.Y.; Guo, D. Capacity of Gaussian many-access channels. IEEE Trans. Inf. Theory 2017, 63, 3516–3539.
8. Polyanskiy, Y. A perspective on massive random-access. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 2523–2527.
9. Chen, Z.; Sohrabi, F.; Yu, W. Sparse activity detection for massive connectivity. IEEE Trans. Signal Process. 2018, 66, 1890–1904.
10. Liu, L.; Yu, W. Massive connectivity with massive MIMO—Part I: Device activity detection and channel estimation. IEEE Trans. Signal Process. 2018, 66, 2933–2946.
11. Senel, K.; Larsson, E.G. Grant-free massive MTC-enabled massive MIMO: A compressive sensing approach. IEEE Trans. Commun. 2018, 66, 6164–6175.
12. Yang, L.; Fan, P.; Li, L.; Ding, Z.; Hao, L. Cross validation aided approximated message passing algorithm for user identification in mMTC. IEEE Commun. Lett. 2021, 25, 2077–2081.
13. Zadik, I.; Polyanskiy, Y.; Thrampoulidis, C. Improved bounds on Gaussian MAC and sparse regression via Gaussian inequalities. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 430–434.
14. Ngo, K.H.; Lancho, A.; Durisi, G. Unsourced multiple access with random user activity. IEEE Trans. Inf. Theory 2023, 69, 4537–4558.
15. Ordentlich, O.; Polyanskiy, Y. Low complexity schemes for the random access Gaussian channel. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 2528–2532.
16. Marshakov, E.; Balitskiy, G.; Andreev, K.; Frolov, A. A polar code based unsourced random access for the Gaussian MAC. In Proceedings of the 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), Honolulu, HI, USA, 22–25 September 2019; pp. 1–5.
17. Pradhan, A.K.; Amalladinne, V.K.; Vem, A.; Narayanan, K.R.; Chamberland, J.F. Sparse IDMA: A Joint Graph-Based Coding Scheme for Unsourced Random Access. IEEE Trans. Commun. 2022, 70, 7124–7133.
18. Ahmadi, M.J.; Duman, T.M. Random spreading for unsourced MAC with power diversity. IEEE Commun. Lett. 2021, 25, 3995–3999.
19. Pradhan, A.K.; Amalladinne, V.K.; Narayanan, K.R.; Chamberland, J.F. LDPC Codes with Soft Interference Cancellation for Uncoordinated Unsourced Multiple Access. In Proceedings of the IEEE International Conference on Communications (ICC), Montreal, QC, Canada, 14–23 June 2021; pp. 1–6.
20. Amalladinne, V.K.; Chamberland, J.F.; Narayanan, K.R. A Coded Compressed Sensing Scheme for Unsourced Multiple Access. IEEE Trans. Inf. Theory 2020, 66, 6509–6533.
21. Amalladinne, V.K.; Vem, A.; Soma, D.K.; Narayanan, K.R.; Chamberland, J.F. A coupled compressive sensing scheme for unsourced multiple access. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 6628–6632.
22. Lancho, A.; Fengler, A.; Polyanskiy, Y. Finite-blocklength results for the A-channel: Applications to unsourced random access and group testing. In Proceedings of the 2022 58th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 28–30 September 2022; pp. 1–8.
23. Amalladinne, V.K.; Pradhan, A.K.; Rush, C.; Chamberland, J.F.; Narayanan, K.R. Unsourced random access with coded compressed sensing: Integrating AMP and belief propagation. IEEE Trans. Inf. Theory 2022, 68, 2384–2409.
24. Andreev, K.; Rybin, P.; Frolov, A. Reed-Solomon coded compressed sensing for the unsourced random access. In Proceedings of the 2021 17th International Symposium on Wireless Communication Systems (ISWCS), Berlin, Germany, 6–9 September 2021; pp. 1–5.
25. Che, J.; Zhang, Z.; Yang, Z.; Chen, X.; Zhong, C.; Ng, D.W.K. Unsourced random massive access with beam-space tree decoding. IEEE J. Sel. Areas Commun. 2021, 40, 1146–1161.
26. Fengler, A.; Haghighatshoar, S.; Jung, P.; Caire, G. Non-Bayesian Activity Detection, Large-Scale Fading Coefficient Estimation, and Unsourced Random Access With a Massive MIMO Receiver. IEEE Trans. Inf. Theory 2021, 67, 2925–2951.
27. Andreev, K.; Rybin, P.; Frolov, A. Coded Compressed Sensing with List Recoverable Codes for the Unsourced Random Access. IEEE Trans. Commun. 2022, 70, 7886–7898.
28. Liang, Z.; Zheng, J.; Ni, J. Index modulation–aided mixed massive random access. Front. Commun. Networks 2021, 2, 694557.
29. Fengler, A.; Jung, P.; Caire, G. SPARCs for unsourced random access. IEEE Trans. Inf. Theory 2021, 67, 6894–6915.
30. Ebert, J.R.; Amalladinne, V.K.; Rini, S.; Chamberland, J.F.; Narayanan, K.R. Coded demixing for unsourced random access. IEEE Trans. Signal Process. 2022, 70, 2972–2984.
31. Zhang, L.; Luo, J.; Guo, D. Neighbor discovery for wireless networks via compressed sensing. Perform. Eval. 2013, 70, 457–471.
32. Zhang, H.; Li, R.; Wang, J.; Chen, Y.; Zhang, Z. Reed-Muller sequences for 5G grant-free massive access. In Proceedings of the GLOBECOM 2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–7.
33. Wang, J.; Zhang, Z.; Hanzo, L. Joint active user detection and channel estimation in massive access systems exploiting Reed–Muller sequences. IEEE J. Sel. Top. Signal Process. 2019, 13, 739–752.
34. Wang, J.; Zhang, Z.; Hanzo, L. Incremental massive random access exploiting the nested Reed-Muller sequences. IEEE Trans. Wirel. Commun. 2020, 20, 2917–2932.
35. Calderbank, R.; Thompson, A. CHIRRUP: A practical algorithm for unsourced multiple access. Inform. Inference J. IMA 2020, 9, 875–897.
36. Wang, J.; Zhang, Z.; Chen, X.; Zhong, C.; Hanzo, L. Unsourced massive random access scheme exploiting Reed-Muller sequences. IEEE Trans. Commun. 2020, 70, 1290–1303.
37. Xie, W.; Zhang, H. Patterned Reed–Muller Sequences with Outer A-Channel Codes and Projective Decoding for Slot-Controlled Unsourced Random Access. Sensors 2023, 23, 5239.
38. Pllaha, T.; Tirkkonen, O.; Calderbank, R. Binary subspace chirps. IEEE Trans. Inf. Theory 2022, 68, 7735–7752.
39. Pllaha, T.; Tirkkonen, O.; Calderbank, R. Reconstruction of multi-user binary subspace chirps. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 531–536.
40. Tirkkonen, O.; Calderbank, R. Codebooks of complex lines based on binary subspace chirps. In Proceedings of the 2019 IEEE Information Theory Workshop (ITW), Visby, Sweden, 25–28 August 2019; pp. 1–5.
Figure 1. The procedures of the transmitter for the slot-control URA.
Figure 2. Given m, N_0 = 10 and Υ, the variation of Pr{P(r) | P(1, …, r−1), h} with r.
Figure 3. Given r, N_0 = 10 and Υ, the variation of Pr{P(r) | P(1, …, r−1), h} with m.
Figure 4. Given r, m and Υ, the variation of Pr{P(r) | P(1, …, r−1), h} with N_0.
Figure 5. Given N_0, m and Υ, the variation of Pr{P(r) | P(1, …, r−1), h} with r.
Figure 6. The comparison of numerical simulation results and theoretical analysis results of the list PRM projection algorithm.
Figure 7. The successful detection probability versus SNR for different schemes, m = 8.
Figure 8. The successful detection probability of Algorithm 1 versus SNR.
Figure 9. The simulations of energy efficiency: list PRM-tree scheme for t = 1, …, 3 from [37], iterative list PRM-tree scheme for t = 1, …, 3, Reed–Solomon scheme from [27], list PRM-RS scheme without iteration from [37], and iterative list PRM-RS schemes for H = 1 and H = 8, respectively.
Figure 10. Probability of error P_e vs. E_b/N_0 for K_a = 130, 180 and 220 under three schemes, i.e., “list PRM-RS, H = 1”, “iterative list PRM-RS, H = 1” and “iterative list PRM-RS, H = 8”.
Table 1. Uncoordinated/unsourced grant-free vs. coordinated/sourced grant-free (based on the compressed sensing technique).
Scheme: Uncoordinated/unsourced grant-free [8]. Application scenario: providing complete data transfer for active users in a large-scale random access environment. Typical access performance: a BER of 0.05 is achievable using 30,000 channel uses, a 3 dB SNR, and 200 active users. Advantages: can support a large number of total users, does not need a user-identity recovery process, and is suitable for low-latency transmission scenarios. Disadvantages: no pilot sequence, channel estimation is difficult, and the decoder complexity is high.
Scheme: Coordinated/sourced grant-free based on the compressed sensing technique [9,10,11,12]. Application scenario: providing active-user identity detection and channel estimation in a large-scale random access environment. Typical access performance: using a pilot sequence of length 800, an SNR of 3 dB, and a setup of 40,000 users, a BER of 0.0125 is achievable. Advantage: can jointly recover users’ IDs and their channel estimates. Disadvantages: the pilot-sequence overhead is large, and the total number of supported users is constrained by the length of the pilot sequence.
Table 2. URA system parameters for the “PRM-tree” scheme (Parameter Description: Specific Value).
PRM sequence length, N = 2^m: 2^7
The length of the outer code, H_N: 2^5 (t = 1) / 2^6 (t = 2, 3)
The number of slots, N_T: 2^8
The number of complex channel uses, T: 2^15
The length of slot-occupation control, x_p: 9 (t = 1) / 7 (t = 2, 3)
G-ary: 19 (t = 1) / 15 (t = 2, 3)
J-ary: 10 (t = 1) / 8 (t = 2, 3)
The capacity of the PRM codebook, |Γ_Υ|: 286720 (t = 1) / 21504 (t = 2, 3)
The code rate: 0.3125 (t = 1) / 0.1953 (t = 2, 3)
Table 3. URA system parameters for the “RS” scheme (Parameter Description: Specific Value).
PRM sequence length, N = 2^m: 2^7
The length of RS codes, H_N: 2^5
The number of slots, N_T: 2^8
The number of complex channel uses, T: 2^15
The length of slot-occupation control, x_p: 9
G-ary: 15
J-ary: 6
The capacity of the PRM codebook, |Γ_Υ|: 21504
The code rate of RS: 0.5208