1. Introduction
Over the past four decades, the rapid development of mobile networks has steadily transformed the way people work and live. The development of 6G not only means that society is gradually moving towards a new era of the “Internet of Everything”; it also means that the target audience of mobile communication systems is expanding rapidly and the demand for communications is rising sharply [1,2]. This poses serious challenges to terrestrial communications. First, it is difficult for terrestrial communication systems to achieve global coverage, and terrestrial base stations are relatively vulnerable to natural disasters [3,4]. Second, terrestrial communication systems cannot meet the surging communication demands with high quality. Therefore, 6G proposes a space–air–ground integrated network, namely, the Integrated Satellite-Terrestrial Network. The network adopts a vertically heterogeneous framework, integrating terrestrial cellular networks; high-, medium-, and low-orbit satellite networks; and UAV networks into a seamless three-dimensional network [5]. Integrated Satellite-Terrestrial Networks can break through terrain limitations and provide a truly seamless high-speed service experience for the public and industries. They can meet the demand for connecting everything seamlessly and provide global, uninterrupted, and consistent information services.
However, the serious shortage of frequency resources forces extensive frequency sharing in 6G satellite–terrestrial convergence scenarios. This introduces serious interference problems and degrades system performance. Therefore, the spectrum utilization scheme and the corresponding interference management technology will be key technologies for Integrated Satellite-Terrestrial Networks. Integrated Sensing and Communication (ISAC) technology can mitigate the impact of limited spectrum resources and improve spectrum utilization. The aim of ISAC is to exploit the highly convergent characteristics of communication and sensing systems in terms of hardware and information processing, so that the two independent functions of communication and sensing are unified in a single system for mutual benefit [6]. To realize the full integration of communication and sensing, ISAC technology still needs a long stage of development. Among the existing technologies, the Modulated Wideband Converter (MWC), based on sub-Nyquist sampling, can serve as a basis for the development of ISAC technology [7]. On the one hand, the MWC system fulfills the dynamic spectrum sensing function by recovering the signal support set; on the other hand, it achieves communication by reconstructing the signal. This means that the MWC system can support the miniaturization and intelligence of wireless device systems. However, under the influence of noise, the performance of the MWC in signal processing is not stable, which affects the accuracy of information transfer in the sensing and communication process [8,9,10,11]. It is therefore necessary to process the sampled signals in a timely manner, which enhances the accuracy of the subsequent communication and forms a virtuous cycle of sensing and communication.
Since most real signals are non-stationary, separate time-domain and frequency-domain analyses cannot provide comprehensive information about the signal. Time-frequency analysis is appropriate for research on non-stationary signals, and the corresponding denoising methods are applied in various fields. The main time-frequency analysis methods include the short-time Fourier transform (STFT), wavelet transform (WT), Empirical Mode Decomposition (EMD), and Singular Spectrum Analysis (SSA). The WT can focus on the local characteristics of the signal through multi-scale analysis. Wavelet threshold denoising can retain the peaks and mutations of signals while suppressing the interference of high-frequency noise. The choice of wavelet basis function has a large impact on the denoising effect. Traditional threshold determination methods include unbiased risk estimation, heuristic thresholds, fixed thresholds, and Minimax thresholds. To achieve a better denoising effect, different threshold determination methods have been proposed for various situations. To improve the accuracy of motion parameter estimation, ref. [12] introduces inter-scale correlation to determine the threshold. Such an approach not only improves the adaptability of the estimation function but also makes the threshold estimation smoother. For partial-discharge high-voltage electrical signals, ref. [13] reduces the computational load by constructing dual-tree complex wavelet pairs for signal decomposition; it calculates the wavelet coefficients with a block threshold, considering the correlation of neighboring wavelet coefficients. Ref. [14] uses the statistical properties of wavelet coefficients to construct adaptive wavelet threshold denoising. Ref. [15] combines the simulated annealing algorithm with the artificial bee colony algorithm and uses it to select the optimal threshold.
Reconstruction algorithms are also crucial in MWC systems, and their optimization has attracted considerable research in recent years. Refs. [16,17,18] optimize the algorithm by reducing the amount of computation. Generalized Orthogonal Matching Pursuit (GOMP) is a generalization of OMP [16]; although GOMP is a simple improvement, its indices are significantly better than those of OMP. Ref. [17] proposed the nearest orthogonal matching pursuit (N-OMP) algorithm, which directly determines the occupancy of two neighboring sub-bands by calculating the correlation coefficients between the residual vectors and those sub-bands. Ref. [18] reduces the number of iterations and the computational complexity by exploiting the correlation between the observation matrix and the samples. Refs. [19,20,21,22] enhance the algorithms’ accuracy by filtering multiple times or by proposing multiple filtering conditions. Ref. [19] uses a backtracking elimination strategy to filter the initially selected support set indices. Ref. [20] screens the sampling channels before reconstruction. Ref. [21] sets additional support set screening conditions to achieve accurate reconstruction. Double Threshold Matching Pursuit (DTMP) is an energy-based dual-threshold algorithm [22]. For each index in the initially selected support set, DTMP compares the power of its reconstructed signal with the corresponding noise power; if the power is below a certain threshold, the index is judged to contain only noise and is removed from the support set. DTMP improves the false alarm rate of the support set.
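The dual-threshold screening idea behind DTMP can be sketched in a few lines; the function name, the power inputs, and the factor `alpha` are our own illustrative choices, not taken from ref. [22]:

```python
import numpy as np

def dtmp_energy_screen(candidate_set, band_powers, noise_power, alpha=2.0):
    """Keep only the candidate support indices whose reconstructed band power
    exceeds alpha times the estimated noise power (DTMP-style second threshold).
    All names and the value of alpha are illustrative."""
    kept = [idx for idx, p in zip(candidate_set, band_powers)
            if p > alpha * noise_power]
    return kept

# toy usage: bands 0 and 3 carry signal energy; band 7 is a noise-only false alarm
cands = [0, 3, 7]
powers = [5.0, 4.2, 0.3]
print(dtmp_energy_screen(cands, powers, noise_power=0.2))  # -> [0, 3]
```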
In summary, we propose the following optimization scheme for the communication and sensing system of wireless devices:
To minimize the impact of noise in the sampled signal on subsequent sensing and communication, we propose a dynamic flexible processing method based on spatial information entropy. The method first dynamically denoises the signal according to the noise characteristics; second, to prepare for the subsequent reconstruction, it constructs the spatial information entropy and uses it to further process the denoised signal.
To further improve the accuracy of communication and sensing, we optimize the OMP algorithm based on concepts from the genetic algorithm. According to the characteristics of the traditional OMP algorithm, the spatial information gain and spatial information characteristics are combined into an information feature factor, which helps the algorithm to further filter the support set. The incorporation of randomness reduces the false alarm rate of the support set and the Bit Error Rate (BER) of the signal, and overcomes the tendency of the OMP algorithm to fall into local optima.
The remainder of this paper is organized as follows. Section 2 introduces the MWC system model. Section 3 and Section 4 detail the adaptive threshold denoising framework based on spatial information entropy and the Genetic Orthogonal Matching Pursuit (Genetic-OMP) algorithm based on spatial information entropy, respectively. In Section 5, the superiority of the proposed scheme is evaluated through simulations with different indices. In Section 6, the conclusion is given.
2. MWC System Model
The MWC system framework is shown in Figure 1. The whole system can be divided into three parts: signal sampling, support set recovery, and signal reconstruction. The dynamic sensing of spectrum resources by wireless devices can be based on the results of support set recovery, and the reconstructed signals serve as communication information when the devices need to communicate.
2.1. Signal Sampling
The framework of the MWC sampling section is illustrated in Figure 2. The input signal x(t) is a sparse multi-band signal: it contains a large number of zero elements, and the support set of x(t) is defined as the smallest closed set containing all points that are not mapped to zero. The number of sub-bands of x(t) is N, and the bandwidth of each sub-band is no more than B.
The Nyquist frequency of x(t) is denoted f_NYQ, and the frequency span of x(t) lies in the range [−f_NYQ/2, f_NYQ/2]. The MWC system has m channels in the analog sampling front-end, and each channel takes x(t) as input. Taking the ith channel as an example, the first step is to multiply x(t) by the mixing function p_i(t). The mixing function p_i(t), which has period T_p, aliases the sub-bands of x(t) down to the baseband.
Let f_p = 1/T_p; the Fourier expansion of p_i(t) can then be written as
p_i(t) = Σ_{l=−∞}^{+∞} c_{il} e^{j2πl f_p t}.
Therefore, after the mixing function, the Fourier transform of the mixed signal x̃_i(t) = x(t)p_i(t) is as follows:
X̃_i(f) = Σ_{l=−∞}^{+∞} c_{il} X(f − l f_p).
Then, x̃_i(t) is passed through an ideal low-pass filter with cutoff frequency f_s/2 and sampled at the rate f_s to obtain the sample sequence y_i[n], where f_s is well below f_NYQ. The discrete-time Fourier transform of y_i[n] is as follows:
Y_i(e^{j2πfT_s}) = Σ_{l=−L_0}^{+L_0} c_{il} X(f − l f_p),  f ∈ F_s = [−f_s/2, f_s/2],
where L_0 is the smallest integer for which the sum covers the entire frequency span of x(t).
2.2. Reconstruction
Convert (5) to matrix form:
Y(f) = A Z(f),  f ∈ F_s.
Therefore, the signal reconstruction operation can be regarded as the process of acquiring the unknown Z(f) from the known Y(f) and the observation matrix A. The direct solution of (7) is an NP-hard problem. Since most of the elements z_l(f) in Z(f) are 0, meaning that Z(f) is sparse, this process can be regarded as the recovery of the support set S of Z(f), where
S = supp(Z) = {l : z_l(f) ≢ 0}.
Since the sampling front-end continuously generates observation vectors y[n], the solution of (7) is an Infinite Measurement Vectors (IMV) problem. Using the Continuous-to-Finite (CTF) module, it can be transformed into a Multiple Measurement Vectors (MMV) problem [23]. Before using the CTF module, the covariance matrix Q needs to be constructed first:
Q = Σ_n y[n] y[n]^H.
The architecture of the CTF module is shown in Figure 3. The CTF module consists of two parts: eigenvalue decomposition and support set recovery. The main purpose of the eigenvalue decomposition is to reduce the dimensionality of the sampling matrix. A matrix V is constructed from the decomposition of Q such that Q = V V^H. We set the support set of U to be the same as that of Z(f), so the solution of supp(Z) can be converted into the following finite MMV problem:
V = A U.
After solving for the support set of U, the device has an initial overview of the current spectrum availability. The device can further communicate with the outside world by reconstructing the signal based on the support set. The reconstructed signal can be obtained from the following equation:
z_S[n] = A_S^† y[n],
where S is the support set of U, A_S is the sub-matrix of A formed by the columns indexed by S, and A_S^† is the pseudo-inverse of A_S.
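As a sketch of the CTF step described above, the following NumPy snippet builds Q from a block of MWC samples and factors it into a finite frame V with Q = V V^H; the function name and the rank tolerance are our own illustrative choices:

```python
import numpy as np

def ctf_frame(Y):
    """Collapse an (m x n_samples) block of MWC samples Y into a finite frame V
    with Q = V V^H, reducing the IMV problem to an MMV problem (CTF step).
    Illustrative sketch, not the paper's exact implementation."""
    Q = Y @ Y.conj().T                        # covariance of the sample streams
    eigvals, eigvecs = np.linalg.eigh(Q)      # Q is Hermitian positive semi-definite
    keep = eigvals > 1e-10 * eigvals.max()    # drop numerically-zero directions
    V = eigvecs[:, keep] * np.sqrt(eigvals[keep])  # scale kept eigenvectors
    return V  # solve V = A U with an MMV solver; supp(U) = supp(Z)

# toy check: V V^H reproduces Q for a full-rank sample block
Y = np.random.default_rng(1).standard_normal((6, 40))
V = ctf_frame(Y)
print(np.allclose(V @ V.conj().T, Y @ Y.conj().T))  # True
```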
Many algorithms are used for signal reconstruction, but the two common families are greedy algorithms and convex optimization algorithms. Convex optimization algorithms are highly accurate but imply high complexity and slow computation. Compared to convex optimization, greedy algorithms are easier to implement, with higher execution speed and lower computational complexity. Therefore, in this paper, we use the OMP algorithm as the basis. The OMP algorithm forms its final solution through multiple iterations, selecting the locally optimal atom in each iteration. This means that the final solution of the greedy algorithm may not escape the range of local optima. To balance the global and local optimal solutions, the OMP algorithm needs to be optimized.
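The baseline OMP loop referred to above can be sketched in a few lines of Python. This is the textbook single-measurement-vector algorithm, not the paper's optimized variant:

```python
import numpy as np

def omp(A, v, k):
    """Minimal OMP for v = A u: in each iteration, pick the column most
    correlated with the residual (the single greedy criterion), then re-fit
    the selected columns by least squares and update the residual."""
    residual = v.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))  # greedy atom choice
        support.append(j)
        A_s = A[:, support]
        u_s, *_ = np.linalg.lstsq(A_s, v, rcond=None)      # least squares re-fit
        residual = v - A_s @ u_s                           # residual update
    return sorted(support)

# toy usage: recover a 2-sparse vector from 20 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
u = np.zeros(40); u[[5, 17]] = [1.0, -2.0]
print(omp(A, A @ u, 2))  # -> [5, 17]
```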
In the above analysis, only the noiseless signal x(t) is considered. In fact, noise also enters the system, that is, the sampled signal is noisy. The MWC system does not handle the noise between sampling and support set recovery, so noise also enters the signal reconstruction process. In the OMP algorithm, there is no operation to separate the useful signal from the noise. The presence of a large amount of noise interferes with support set recovery, thus affecting the device’s judgment of spectrum resources and the accuracy of the acquired information. The MWC system is based on compressed sensing theory; after sampling and reconstruction, the energy of the noise is weakened to some degree. However, although the recovered signal’s Signal-to-Noise Ratio (SNR) is improved, its accuracy cannot be guaranteed. As a result, it is necessary to integrate signal denoising between sampling and support set recovery to reduce the impact of noise on the sensing and communication of devices.
3. Dynamic Flexible Processing of Sampled Signals
The MWC system has a limited noise suppression capability; at low SNR, it cannot meet the wireless device’s requirement for reconstructed signal accuracy. Therefore, in this paper, we add a separate dynamic flexible processing module before signal support set recovery. The module is divided into two parts: one is denoising, and the other is processing the denoised signal based on spatial information entropy.
3.1. Dynamic Wavelet Threshold Denoising
Due to the noise folding effect, the sampled signal y contains folded noise, so it is necessary to denoise it first. The dynamic denoising of y is performed according to the following four steps:
- (1)
Select the wavelet basis function ψ and the number of wavelet decomposition layers L. Different wavelet functions have different characteristics. According to the experimental results and comprehensive considerations, the db1 wavelet is finally selected as the wavelet basis function ψ.
- (2)
Use ψ to perform the L-layer wavelet transformation of y.
- (3)
Determine the threshold T and handle the wavelet coefficients below T. The determination of T is crucial: if T is too large, too many wavelet coefficients are suppressed and the characteristics of the signal are lost; if T is too small, the denoising effect is not guaranteed. Therefore, we incorporate a dynamic factor λ into the noise variance estimation to determine the threshold:
T = λσ√(2 ln n),
where n is the signal length and σ is the noise standard deviation estimate of the signal, obtained as σ = M/0.6745, in which M is the median of the absolute values of the approximation-component wavelet coefficients after decomposition for each channel. Considering that the noise in each channel is different, the SNR of the sampled signal obtained in each channel also differs, so T is mainly determined by the SNR of each channel. Meanwhile, a large amount of noise is mixed with the signal, making the two difficult to distinguish. Therefore, the determination of T also requires λ to be adjusted dynamically. When the channel SNR is low, λ should be larger, so that noise can be removed to the maximum extent; when the channel SNR is high, λ should be scaled down appropriately to retain more signal details [24]. The noise power P_n is determined from the noise standard deviation σ, and the signal power P_s is derived from P_n and the overall power of the signal. Based on the P_n and P_s of each channel, the scaling factor λ is defined.
After the threshold is determined, the wavelet coefficients below T are set to zero.
- (4)
The inverse wavelet transform of the processed coefficients gives the denoised signal.
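The four steps above can be sketched with the PyWavelets library as follows. The SNR-to-λ mapping shown is an assumed placeholder for the paper's channel-dependent scaling factor, and the noise estimate here uses the common finest-detail-band choice:

```python
import numpy as np
import pywt

def dynamic_wavelet_denoise(y, snr_db, wavelet="db1", level=3):
    """Sketch of the four denoising steps: db1 decomposition, a universal
    threshold scaled by an SNR-dependent factor lam (illustrative mapping,
    not the paper's exact formula), soft thresholding, reconstruction."""
    coeffs = pywt.wavedec(y, wavelet, level=level)          # step 2: decompose
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # MAD noise std estimate
    lam = 1.5 if snr_db < 5 else 0.8                        # dynamic factor (assumed)
    T = lam * sigma * np.sqrt(2 * np.log(len(y)))           # step 3: threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, T, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(y)]          # step 4: inverse transform

# usage: denoise a noisy tone
t = np.linspace(0, 1, 1024)
rng = np.random.default_rng(1)
noisy = np.sin(2 * np.pi * 8 * t) + 0.4 * rng.standard_normal(1024)
clean = dynamic_wavelet_denoise(noisy, snr_db=3)
```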
3.2. Flexible Processing of Denoised Signal
After dynamic denoising, it is difficult for the denoised signal to retain all the useful signal features. Therefore, we propose the concept of spatial information entropy and, based on it, further flexibly process the denoised signal. The purpose of flexible processing is not only to compensate for the signal features lost during dynamic denoising but also to characterize the signal before recovering the support set; this facilitates the subsequent recovery of the support set. Therefore, we need a criterion to determine whether the denoised signal needs to be processed or not.
For the OMP algorithm, the reconstructed signal’s value is the least squares solution corresponding to the support set S: once S is determined, the reconstructed signal is fixed. Before S is recovered, there are multiple possibilities for the reconstructed signal, which make up the range of possible values of the reconstructed signal. When the signal is distorted, the uncertainty of the reconstructed signal’s value increases. The smaller the range of possible values, the more beneficial it is for recovering the support set. Therefore, when the value range of the reconstructed signal increases, it is necessary to linearly compensate the distorted signal. In this paper, the spatial information entropy H is used to reflect the size of the solution space of the reconstructed signal. H is defined in the form of Shannon entropy over the normalized signal, H = −Σ_i p_i log p_i, where p_i is the normalized amplitude of the ith sample and satisfies 0 ≤ p_i ≤ 1.
In information theory, when an event occurs with probability p, the information quantity −log p is obtained. An event X can develop into various outcomes, and the information entropy is defined as the expectation of the amount of information brought by these outcomes. Borrowing this concept, we define the size of the least squares solution space of the signal mapping as the spatial information entropy H. A higher H represents more uncertainty in the reconstructed signal’s value, i.e., an enlarged solution space.
Firstly, the sampled signal and the denoised signal are normalized by their absolute values. The two processed signals are then divided into signal fragments of length R, and compensation is performed fragment by fragment. The spatial information entropy corresponding to each fragment of the two signals is computed as defined above.
When the spatial information entropy of a denoised fragment deviates from that of the corresponding sampled fragment, the signal in that fragment needs to be compensated. The signal segment of the denoised signal is amplified by a factor γ, and the amplified signal is normalized again. After amplification, the spatial information entropy of the fragment is recomputed, and it can be proved that the entropy relation required by the criterion is restored after compensation.
Proof. Consider the difference Δ between the spatial information entropy of the amplified fragment and that of the original fragment. For convenience of proof, substitute variables so that Δ can be written as a single-variable function f(u), and take its derivative f′(u). From f′(u), the monotonic intervals of f(u) can be calculated: f(u) decreases monotonically on one interval and increases monotonically on the other. Evaluating f(u) at the boundaries of the admissible range of u shows that the required inequality holds after compensation. □
The amplification factor γ is determined by three main factors of the signal: the peaks, the valleys, and the power. The change in signal power reflects the overall increase or decrease in signal amplitude, while the changes in the signal’s peaks and valleys are the most intuitive. Based on these factors, a specific formula determines the final γ. Considering the definition of spatial information entropy, we choose γ in the form of an exponential function. Since this paper deals with IQ signals, the exponential part of γ is determined jointly by sine and cosine functions. The signal power and the peaks and valleys are equally important for reflecting the variation in signal amplitude, so their weights are set equal. The specific values of the remaining constants are shown in Table 1.
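The fragment-wise flexible processing of this section can be sketched as follows. The exact entropy formula and compensation criterion are not reproduced above, so this is one plausible Shannon-style reading, with illustrative values for R and γ:

```python
import numpy as np

def spatial_entropy(seg, eps=1e-12):
    """Shannon-style entropy of a normalized |signal| fragment -- one plausible
    reading of the spatial information entropy; the exact formula is assumed."""
    p = np.abs(seg) / (np.sum(np.abs(seg)) + eps)
    return float(-np.sum(p * np.log(p + eps)))

def compensate(denoised, reference, R=64, gamma=1.5):
    """Fragment-wise flexible processing: where the denoised fragment's entropy
    exceeds the reference (sampled) fragment's, amplify it by gamma and
    renormalize. The criterion direction, gamma, and R are illustrative."""
    out = denoised.astype(float).copy()
    for s in range(0, len(out), R):
        d, r = out[s:s + R], reference[s:s + R]
        if spatial_entropy(d) > spatial_entropy(r):   # assumed compensation test
            d = d * gamma
            out[s:s + R] = d / (np.max(np.abs(d)) + 1e-12)  # renormalize fragment
    return out
```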
4. Genetic-OMP Algorithm
The sampled signal enters the reconstruction stage after dynamic flexible processing. The reconstruction algorithm directly determines the false alarm rate of the recovered support set and the BER of the reconstructed signal. We therefore want a reconstruction algorithm that is both easy to implement and highly accurate. Although the OMP algorithm is easy to implement, its principle limits its accuracy, so it needs to be optimized.
The genetic algorithm simulates the process of natural selection and reproduction. Fitness in the genetic algorithm quantifies the strengths and weaknesses of individuals in the current environment, which can diminish the influence of the OMP algorithm’s single selection criterion on the accuracy of the results. Random selection in genetic algorithms helps keep the OMP algorithm from falling into local optimal solutions, which usually do not represent the global optimum. Therefore, we propose a Genetic-OMP algorithm based on an information feature factor.
The degree of match between the observation matrix and the measurement vector is measured by the inner product. We use the inner product as an initial screening strategy to filter out a backup set: a larger inner product means a higher match and a higher probability of being selected for the final support set. For the subsequent selection, we utilize the concepts of fitness and random selection from genetic algorithms, and precise selection is performed by constructing the feature factor. Since the signal has already undergone dynamic flexible processing based on spatial information entropy before reconstruction, the design of the feature factor is based on the following two principles.
- (1)
Spatial information gain
As mentioned above, reconstruction is the process of continually determining the support set using observation matrices and measurement vectors. Referring to (7), we have V = A U. Here, A is known and V is observed, while U is determined by the recovered support set. When U has a unique solution, the spatial information entropies of both sides should be equal. If the difference between the spatial information entropy of the residual and that of a candidate’s projection is large, the selectable range of possible values of U increases, making it difficult to determine the optimal solution. On the contrary, a smaller selectable range facilitates the determination of the final value. Therefore, the spatial information gain is used as a factor in designing the fitness. The spatial information gain is defined as the difference between the spatial information entropy of the ith iteration residual matrix and the spatial information entropy of the inner product of a candidate column with the residual matrix. A larger spatial information gain means that the candidate corresponds to support sets with a larger range of values, which is not conducive to determining the reconstructed signal. Therefore, a lower spatial information gain marks the superior individual.
- (2)
Spatial information characteristics
The OMP algorithm only considers the relationship between the observation matrix and the measurement vectors and does not take into account the effect of the current choice on the subsequent support set selection. If an index is chosen in the ith iteration, the updated residual matrix has a direct impact on the choice of the (i+1)th support set element. To facilitate later identification, the result of the ith selection should lead to a smaller possible range of support sets. Therefore, we introduce the spatial information characteristic as a judgment tool. The spatial information characteristic is numerically equal to the spatial information entropy of the updated residual, and it represents the uncertainty that the current choice brings to the selection of the support set in the next iteration. The support set is obtained through continuous iteration, and as the number of iterations increases, the range of its possible values should continue to shrink. Accordingly, the spatial information characteristic of each selected index should be as small as possible.
In summary, the spatial information gain and the spatial information characteristic constitute the information characterization norm. Based on this norm, individuals with lower spatial information gain and smaller spatial information characteristic are superior, and the feature factor is constructed accordingly.
After calculating the feature factor of each individual, individuals can be chosen. Based on the norm, we use a roulette wheel selection strategy. In the ith iteration, several roulette choices are performed, and each choice produces a single individual; the number of choices is decided by the outcome of the previous iteration’s selection. With the roulette strategy, an individual with a high selection probability may be chosen multiple times, so after the choices, the number of distinct individuals selected is usually smaller than the number of draws. Thus, the number of selections is gradually reduced in each iteration, which also reflects the “meritocracy” of the genetic algorithm.
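The roulette wheel step can be sketched as follows. Mapping a lower feature factor to a higher selection probability via the reciprocal is our own assumption about the weighting:

```python
import numpy as np

def roulette_select(indices, feature_factor, n_picks, rng=None):
    """Roulette-wheel choice over candidate support indices: a lower feature
    factor marks a fitter individual, so its selection probability is taken
    inversely proportional to the factor (assumed mapping). Duplicate picks
    are merged, so fewer than n_picks unique indices may survive."""
    rng = rng or np.random.default_rng()
    weights = 1.0 / (np.asarray(feature_factor, dtype=float) + 1e-12)
    probs = weights / weights.sum()                    # normalize to a wheel
    picks = rng.choice(indices, size=n_picks, replace=True, p=probs)
    return sorted(set(int(i) for i in picks))

# usage: index 3 has the lowest factor, so it dominates the wheel
print(roulette_select([3, 8, 12], feature_factor=[0.1, 0.9, 1.4],
                      n_picks=5, rng=np.random.default_rng(0)))
```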
The detailed steps of the Genetic-OMP are described in Algorithm 1.
Algorithm 1 Genetic-OMP Algorithm
Input: Observation matrix A, measurement vector V, sparsity k
Output: Support set S
- 1: Initialize: residual r = V, iteration counter i = 0, preliminary index set Λ = ∅, roulette selection number K.
- 2: Calculate the inner products between the columns of A and the residual r, and select K individuals according to their magnitudes as the preliminary index set Λ.
- 3: Calculate the feature factor of each element in Λ, roulette-select the elements in Λ according to the feature factor, and choose the support set indices of this iteration; update them into S.
- 4: Update the residual according to the least squares estimate: r = V − A_S A_S^† V.
- 5: i = i + 1. If the stopping condition is reached, stop the iteration; otherwise return to step 2.
5. Simulation and Analysis
To test the performance of the proposed schemes, we constructed a multi-band signal x(t) corrupted by Gaussian white noise n(t). QPSK is a quadrature phase modulation with excellent noise immunity and band utilization, widely used in wireless communication systems; therefore, we chose QPSK signals to simulate the sub-band contents of x(t). In the signal model, N is the sparsity, the energy coefficient of each sub-band is randomly generated and i.i.d., B is the bandwidth of each sub-band (in MHz), and each sub-band has a randomly generated carrier frequency. The I-channel and Q-channel signals are also randomly generated. The added Gaussian white noise n(t) is used to simulate different average SNR (ASNR) conditions.
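A hypothetical generator for such a test signal is sketched below. All parameter values (band count, bandwidth, Nyquist frequency, duration, target SNR) are illustrative defaults, not the Table 2 settings, and each band carries a single QPSK symbol for simplicity:

```python
import numpy as np

def multiband_qpsk(n_bands=3, B=50e6, f_nyq=10e9, T=2e-6, fs=None, rng=None):
    """Generate a sparse multi-band test signal: n_bands QPSK-like carriers at
    random center frequencies plus white Gaussian noise at a target SNR.
    All parameter values are illustrative."""
    rng = rng or np.random.default_rng()
    fs = fs or f_nyq
    t = np.arange(0, T, 1 / fs)
    x = np.zeros_like(t)
    for _ in range(n_bands):
        E = rng.uniform(1, 3)                      # random energy coefficient
        fc = rng.uniform(B, f_nyq / 2 - B)         # random carrier frequency
        I, Q = rng.choice([-1.0, 1.0], 2)          # one QPSK symbol per band (simplified)
        x += E * (I * np.cos(2 * np.pi * fc * t)
                  + Q * np.sin(2 * np.pi * fc * t)) * np.sinc(B * (t - T / 2))
    noise = rng.standard_normal(t.size)
    snr_db = 10                                    # target average SNR (assumed)
    noise *= np.sqrt(x.var() / 10 ** (snr_db / 10)) / (noise.std() + 1e-12)
    return t, x + noise
```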
We used Monte Carlo methods to validate the performance of the schemes. Each data point is a statistical average obtained by running the program independently 1000 times. The parameters of the MWC system are set as shown in Table 2.
In the subsequent simulations, we concentrate on the support set recovery accuracy and on the reconstructed signals. The support set recovery accuracy is shown by the false alarm rate, and the reconstructed signal accuracy is analyzed in terms of the correlation coefficient, the BER, and other dimensions.
5.1. The Analysis of the Compensated Denoising Module
To evaluate the effect of the compensated denoising module, two groups are set up in this section: one processed by the compensated denoising module (CD-MWC) and the other unprocessed (MWC). We will analyze the two reconstructed signals in terms of different metrics to derive the performance gains of our scheme.
Figure 4 shows the root-mean-square error (RMSE) comparison of the two reconstructed signals. The RMSE measures the amplitude difference between the recovered signal and the original signal; a smaller RMSE means the recovered signal is closer to the original. Figure 4 compares the cases with 30 and 50 sampling channels, respectively. With 30 sampling channels, the RMSE of the CD-MWC reconstruction decreases by an average of 13.52% compared to that of the unprocessed MWC at different SNRs. With 50 sampling channels, the RMSE decreases by 7.8% on average.
The correlation coefficients of the two reconstructed signals are illustrated in Figure 5. Correlation coefficients measure the similarity of the signals’ variation trends. As shown in Figure 5, there is a significant improvement in the correlation coefficient of the CD-MWC reconstruction compared to the unprocessed one. In particular, with 30 channels, the correlation coefficient increases by 36.4% on average across the different SNRs.
The RMSE and the correlation coefficient reflect the consistency of the two signals at the macro and micro levels, respectively. Figure 4 and Figure 5 directly demonstrate the performance gain of the compensated denoising module at the signal level. At high SNRs, adaptive threshold denoising preserves signal features better and reduces the possibility of excessive denoising. At low SNRs, the compensated denoising module not only denoises to the maximum extent but also minimizes the impact of distortion on the subsequent reconstruction. Therefore, in the optimized system, the noise in the sampled signal is suppressed while the useful signal features are enhanced, which leads to an improvement in system performance.
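The two metrics used in these comparisons can be computed with their standard definitions, as in the following sketch:

```python
import numpy as np

def rmse(x_rec, x_ref):
    """Root-mean-square amplitude error between recovered and original signal."""
    return float(np.sqrt(np.mean((x_rec - x_ref) ** 2)))

def corrcoef(x_rec, x_ref):
    """Pearson correlation coefficient: similarity of the variation trends."""
    return float(np.corrcoef(x_rec, x_ref)[0, 1])

# usage: a constant offset changes the RMSE but not the trend
a = np.array([0.0, 1.0, 2.0, 3.0])
print(rmse(a + 0.1, a))      # ~0.1
print(corrcoef(a + 0.1, a))  # ~1.0
```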
Figure 6 compares the support set false alarm rates of the two schemes. A false alarm is the event of incorrectly including an element that does not belong to the support set. In this paper, we calculate the false alarm rate for each experiment separately and take the average as the final result; the false alarm rate reflects the accuracy of spectrum sensing. In Figure 6, compared to the unprocessed scheme, there is a clear decreasing trend in the support set false alarm rate after the dynamic flexible processing. Compensated denoising helps the support set choices to some extent. However, the compensated denoising process does not change the support set selection criterion of the OMP algorithm, so the false alarm rate is not dramatically improved. To further improve the performance of the system, it is necessary to optimize the reconstruction algorithm.
The BER measures, at the information level, the impact of the compensated denoising module on communication accuracy. The compensated denoising process weakens the noise, so the recovered signal is more accurate than the unprocessed one, and the BER improves to different degrees. As can be seen from Figure 7, with 30 and 50 sampling channels, the BER of the CD-MWC reconstruction decreases by 6.16% and 3.17%, respectively, compared to the unprocessed MWC.
5.2. The Analysis of the Genetic-OMP Algorithm
To reduce the false alarm rate of the support set, the reconstruction algorithm needs to be optimized. To better compare the effect of the algorithm optimization, this simulation sets up four groups: the Genetic-OMP algorithm with the compensated denoising process (CD-Genetic-OMP), CD-OMP, GOMP, and DTMP.
Figure 8 and Figure 9 illustrate that CD-Genetic-OMP achieves a lower RMSE and higher correlation coefficients than the other schemes. Although CD-OMP and CD-Genetic-OMP both undergo dynamic flexible signal processing, the difference in the reconstruction algorithms leads to different performance in these metrics. The Genetic-OMP algorithm defines multidimensional criteria based on the characteristics of the signal pre-processing during support set recovery, avoiding the loss of accuracy caused by the single criterion of traditional algorithms. At the same time, the incorporation of random selection pushes the reconstruction results toward the global optimum. Therefore, the CD-Genetic-OMP scheme performs better in terms of both RMSE and correlation coefficients.
Figure 10 shows the comparison of the support set false alarm rate under different algorithms. Under the same algorithm, the false alarm rate decreases slightly as the SNR and the number of sampling channels increase; the main influencing factor is therefore the reconstruction algorithm itself. The traditional GOMP and OMP algorithms are similar in principle, with a single criterion for selecting elements of the support set, so their false alarm rates are skewed higher than those of the other groups. Both the DTMP algorithm and the Genetic-OMP algorithm add further criteria for secondary screening. The scheme proposed in this paper is based on the feature factor, which is constructed from the signal characteristics of the current environment. The feature factor effectively measures the match between an individual and the residual matrix and is used as the probability that the individual is identified as an element of the support set, which prevents the algorithm from falling into local optima. Therefore, the support set false alarm rate of our proposed scheme is well suppressed.
Figure 11 shows the spectrum of the reconstructed signal with 50 sampling channels. From the figure, it can be seen more intuitively that Genetic-OMP greatly helps to reduce the support set false alarm rate and to increase the detection probability.
Figure 12 shows the BER comparison of the reconstructed signals at different SNRs. The least squares solution over the support set is the reconstructed signal, and its accuracy affects the BER after demodulation. Since our proposed scheme outperforms the conventional schemes in support set recovery accuracy and reconstructed signal accuracy, its BER is also lower.