1. Introduction
Direct-sequence spread spectrum (DSSS) has become one of the most widely used spread-spectrum (SS) techniques in secure wireless communications [1]. In DSSS, the baseband digital code streams are spread over a much wider band through modulation with pseudo-noise (PN) sequences. Due to the wide bandwidth, DSSS signals usually have a low power spectral density and are hidden under the channel noise. A cooperative receiver knows the PN sequence exactly, so the baseband signal can be recovered directly through demodulation. In contrast, to a non-cooperative receiver without exact knowledge of the PN sequence, a DSSS signal appears to be mere noise [2]. Moreover, without precise knowledge of the PN sequence, a conventional non-cooperative receiver must operate at a high sampling rate to capture the signal, according to the Nyquist sampling theorem. This significantly increases the system cost and sometimes makes the system infeasible to implement.
The detection of DSSS signals is the prerequisite to the subsequent signal processing and information extraction steps [3,4]. It has been studied intensively since the early years of the DSSS technique, and many detection methods have been proposed, such as energy-based detection, fluctuation analysis based on second-order statistics, and dirty-template-based detection. The most commonly used among them is energy-based detection [5], which is simpler and relatively less expensive to implement [6,7]. However, due to the wide bandwidths of DSSS signals, these methods require high sampling rates to capture the entire spread spectrum, which usually burdens the hardware cost.
In the last decade, the compressive sensing (CS) theory was introduced [8,9], offering new perspectives on sub-Nyquist sampling in image and communication signal processing [10,11,12]. Motivated by CS, many compressive signal detection methods have been proposed, such as sparse-signal-reconstruction-based methods. However, most of these methods employed only random measurement kernels. In [13], measurement kernels were designed based on recursive information optimization; however, the high time and computational costs made that method infeasible in adaptive measurement and detection scenarios.
Recently, with the rapid development of computer technologies, especially parallel computing, artificial neural networks and deep learning [14] have been widely used in signal processing, for example in biomedical and civil engineering applications [15,16]. Well-trained neural networks can efficiently extract signal features and have shown good performance in pattern recognition, signal parameter estimation, prediction, etc. Therefore, it is possible to improve detection performance and adaptability through neural networks.
In this paper, we propose methods to detect DSSS signals non-cooperatively and adaptively, based on knowledge-enhanced compressive measurements and artificial neural networks. The measurements are taken at compressive rates to reduce the cost of the sampling process, and the detection decisions are made by thresholding the measurement energy. A quantitative analysis of the detection task-specific information (TSI), with posterior probability updates on the signal, is introduced into the adaptive measurement design to improve the detection accuracy. To greatly improve the efficiency of the algorithm, artificial neural networks are trained based on the TSI optimization. The resulting networks take the posterior probabilities of the signals from the Bayes updates as inputs and directly output the adaptively designed measurement kernels.
Our work makes several novel contributions:
- (1)
Compared to the existing compressive detection methods, the proposed methods enable an adaptive compressive measurement framework, where the measurement kernels can be flexibly adjusted to track the DSSS signals without the exact prior knowledge of the PN sequences in the detection.
- (2)
To ensure the gain in detection accuracy, quantitative information analysis of the previous measurements is incorporated into the subsequent adaptive measurement matrix design, ensuring a gradually increasing correlation with the most probable signals.
- (3)
Through the effective combination of knowledge-enhanced compressive measurement with TSI optimization and artificial neural network techniques, the compressive measurement matrix can be designed in a manner that is both adaptive and efficient. In contrast to the recursive measurement kernels directly optimized by quantitative information analysis in the literature, the artificial neural networks are trained offline based on TSI optimization and then applied repeatedly and efficiently in the online adaptive measurement kernel design, which not only improves adaptability but also saves considerable detection time.
From the perspective of the signal processing system, the proposed method achieves both the efficiency of an adaptive measurement system and the adaptability of a neural network-based system.
The remainder of the paper is organized as follows: In Section 2, the existing methods for DSSS signal detection are briefly discussed as the background of this paper. In Section 3, the framework and principle of the proposed adaptive compressive measurement and detection methods for DSSS signals are introduced. In Section 4, the design and implementation of the artificial neural networks in the proposed adaptive measurement and detection framework are detailed. In Section 5, the proposed methods are evaluated and discussed through theoretical analysis and simulations with DSSS signals. Finally, conclusions are drawn in Section 6.
2. Related Works
With the rapid development of communication technology, spread-spectrum communication has become an important part of modern communication systems, and DSSS systems have been widely used in both military and civil domains. From the electronic countermeasure perspective, to intercept or interfere with signals that may be transmitted in DSSS mode, it is necessary to first detect whether a DSSS signal is present in the wireless channel before finally recovering the information contained in the signal. Therefore, the detection of DSSS signals is indispensable in the entire DSSS signal reception process.
Due to the importance of the DSSS signal detection step, much research has been conducted in this area, and a series of detection methods have been proposed, which can be broadly classified into non-compressive and compressive detection methods. In the remainder of this section, the two types of methods are introduced as the background of DSSS signal detection techniques.
2.1. Non-Compressive Detection Methods
In conventional signal processing methods, signals are sampled at the Nyquist rate to capture their entire spectrum and avoid aliasing. Before CS theory was introduced, researchers proposed many non-compressive detection methods, such as energy-based detection [17], autocorrelation-based detection [18,19] and spectrum-based detection [20,21,22]. These methods are introduced in the remainder of this subsection.
As early as the 1960s, H. Urkowitz proposed the energy-based detection method [17], which exploits the fact that the energy of noise alone is smaller than the total energy of signal plus noise. By calculating the energy of the received signal and selecting an appropriate threshold, DSSS signals can be detected when present. Among the existing non-compressive detection methods, energy-based detection is the simplest and least expensive, and is thus commonly used.
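The principle can be sketched in a few lines; the sample sizes, amplitudes and threshold below are illustrative only:

```python
import numpy as np

def energy_detect(samples, threshold):
    """Declare 'signal present' when the total received energy exceeds the threshold."""
    return np.sum(np.abs(samples) ** 2) > threshold

# Illustrative experiment: unit-variance noise vs. noise plus a BPSK-like chip stream.
rng = np.random.default_rng(0)
noise_only = rng.normal(0.0, 1.0, 1000)
with_signal = noise_only + 0.8 * np.sign(rng.normal(size=1000))

# 1200 sits several standard deviations above the noise-only energy mean
# (about 1000 for 1000 unit-variance samples), so false positives are rare.
threshold = 1200.0
```

With these settings, `energy_detect(noise_only, threshold)` is almost surely `False` while `energy_detect(with_signal, threshold)` is almost surely `True`; in practice the threshold is derived from a target false-positive rate, as described in Section 3.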
The autocorrelation-based detection methods [18,19] were first used to detect frequency-hopping spread-spectrum (FHSS) signals and were later extended to the detection of DSSS signals. These methods perform an autocorrelation operation on the received signal, exploiting the difference between signal and noise in the autocorrelation domain; the correlation peaks are then used to detect the DSSS signals. Burel et al. proposed a detection method based on fluctuation analysis of second-order statistics, which divides the received signal into analysis windows, calculates second-order statistics on each window, and then uses the results to compute the fluctuations [23]. However, a drawback shared by all of these methods is that the correlation peaks or second-order statistics are difficult to extract when the signal-to-noise ratio (SNR) is low, which makes these detection schemes inviable in low-SNR scenarios.
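The windowed second-order analysis can be illustrated as follows; the function and all parameters are a sketch of the general idea, not the exact estimator of [23]:

```python
import numpy as np

def correlation_fluctuation(x, window, max_lag):
    """Estimate the autocorrelation within consecutive windows, then return the
    variance of the estimates across windows at each lag. A DSSS signal produces
    fluctuation peaks at multiples of its PN-sequence period; noise does not."""
    n_win = len(x) // window
    corrs = np.empty((n_win, max_lag))
    for w in range(n_win):
        seg = x[w * window:(w + 1) * window]
        for lag in range(max_lag):
            corrs[w, lag] = np.mean(seg[:window - lag] * seg[lag:])
    return np.var(corrs, axis=0)

# Illustrative input: a +/-1 chip pattern of period 32, repeated, plus noise.
rng = np.random.default_rng(2)
chips = np.sign(rng.normal(size=32))
x = np.tile(chips, 64) + 0.5 * rng.normal(size=32 * 64)
fluct = correlation_fluctuation(x, window=64, max_lag=32)
```

As noted above, the window statistics become unreliable at low SNR, which is exactly where this family of methods degrades.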
The spectrum-based detection methods (time-frequency analysis-based [20,21], short-time Fourier transform-based [22], etc.) model the DSSS signals as cyclostationary and make the detection decisions in a transform domain. These methods perform well for non-stationary signals and in low-SNR environments, but suffer from cross-term interference. Moreover, their computational complexity is high, which results in slow detection and difficulty in real-time implementation. In 2019, Lee and Oh proposed a dirty-template-based scheme for the detection of SS signals [24], which can also be applied to DSSS signal detection. This method calculates the cross-correlation between the template and the received signals in the frequency domain. However, since the 'dirty' template is obtained from the received signal in the frequency domain, the template is difficult to obtain in low-SNR scenarios.
Although various non-compressive detection methods have been proposed in the past few decades, they share a common shortcoming: high sampling rates are required to capture the entire spectrum of the DSSS signals, resulting in expensive sampling and signal processing hardware. Especially in the case of ultra-wideband DSSS signals, these methods may become infeasible. Moreover, these methods lack adaptability, which constrains further improvement of their detection performance.
2.2. Compressive Detection Methods
In 2006, the compressive sensing (CS) theory [8,9] was introduced by Candès et al. and Donoho. In contrast to the conventional Shannon-Nyquist sampling theorem, CS states that a signal can be recovered from a much smaller number of linear projections (i.e., at low measurement rates), provided the signal has a sparse representation on some transform or dictionary. The recovery is performed by solving a non-linear optimization problem with respect to the sparse representation. Motivated by the low measurement rates of this theory, a series of CS-based DSSS signal detection methods have been proposed.
Most existing CS-based DSSS signal detection methods rely on random measurement kernels and CS recovery methods. Some of these methods retain the information carried by a signal that can be sparsely represented on a transform or a dictionary [25,26,27,28,29]. Others detect the signal cooperatively based on signal reconstruction or a representation of the original signal [30,31]. However, the reconstruction algorithms usually incur high computational complexity, which greatly reduces the efficiency of these algorithms, especially in online signal detection scenarios.
Although most of the CS literature uses random measurement kernels, Gu et al. [32] and Neifeld et al. [33] showed that the signal recovery accuracy can be improved if the compressive measurement kernels are designed using prior knowledge of the signal. More recently, Liu et al. proposed non-cooperative compressive DSSS signal detection methods [13]. In contrast to most of the existing literature on CS-based DSSS signal processing, which includes an intermediate step of signal or information recovery, the detection decision was made directly from the compressive measurements. Besides random measurement kernels, designed measurement kernels were also proposed based on prior knowledge of the signals and quantitative information optimization [33]. However, since the measurement kernel optimizations were conducted with a recursive method and could take an extremely long time, the measurement kernels in Liu et al. [13] had to be designed before the measurement procedure and could not be used in adaptive regimes.
In this paper, we propose methods to detect DSSS signals adaptively, based on knowledge-enhanced compressive measurements and artificial neural networks. The compressive measurements relieve the hardware burden incurred by non-compressive detection methods. The detection decisions are made by observing the measurement energy, which is simpler and less expensive than most compressive-measurement-based detection methods. Moreover, with the posterior knowledge of the signal updated and artificial neural networks applied, the measurement kernels are designed adaptively and efficiently with the quantitative TSI optimized, leading to improved detection performance.
3. The Framework and Principle of the Adaptive Compressive Measurement and Detection of the DSSS Signals
The proposed compressive measurement and detection framework is shown in Figure 1. In the measurement step, the received signal is first preprocessed by a band-pass filter to remove frequency components outside the spectrum of interest. The preprocessed signal is then multiplied by the compressive measurement kernels and passed through a low-pass filter, which acts as an integrator. The filtered result is sampled at a rate much lower than the Nyquist rate implied by the DSSS spectrum, and the samples form the measurement vector. The measurement vector is analyzed with Bayes' rule, and the results are used in the adaptive measurement kernel design for the following measurements, serving as the knowledge enhancement in the compressive measurement procedure. Finally, in the detection step, the energy of the measurement vector is calculated and thresholded to determine whether the DSSS signal is present.
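In discrete form, one symbol period of this pipeline reduces to projecting the filtered signal block onto the measurement kernels and thresholding the energy of the resulting vector. A minimal sketch (the sizes N = 62 and M = 6 mirror the simulations in Section 5; all variable names are illustrative):

```python
import numpy as np

def compressive_measure(x, kernels):
    """Project one symbol period of the (filtered) signal onto the M kernels.
    `kernels` is an M x N array whose rows are unit-energy measurement kernels;
    the result is the length-M measurement vector."""
    return kernels @ x

def energy_detect(y, threshold):
    """Detection step: threshold the energy of the measurement vector."""
    return np.sum(np.abs(y) ** 2) > threshold

rng = np.random.default_rng(1)
N, M = 62, 6  # Nyquist samples per symbol and measurements per symbol
kernels = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
kernels /= np.linalg.norm(kernels, axis=1, keepdims=True)  # unit-energy rows
x = rng.normal(size=N)            # stand-in for the filtered input block
y = compressive_measure(x, kernels)
```

In the adaptive framework, `kernels` would be redesigned between measurements from the Bayes-updated posteriors rather than drawn at random as here.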
In this paper, we focus on non-fading communication channels, and the signal detection using the framework in Figure 1 can be formulated as a decision between two hypotheses:

$$\mathcal{H}_0:\ \mathbf{y} = \mathbf{\Phi}\mathbf{n}, \qquad \mathcal{H}_1:\ \mathbf{y} = \mathbf{\Phi}(\mathbf{s} + \mathbf{n}), \tag{1}$$

where $\mathcal{H}_0$ and $\mathcal{H}_1$ represent the signal-absent and signal-present hypotheses of the DSSS signal, respectively, $\mathbf{\Phi}$ is the compressive measurement matrix, $\mathbf{s}$ is the DSSS signal at the receiver in the signal-present case, $\mathbf{n}$ is the channel noise and $\mathbf{y}$ is the compressive measurement vector. The compression ratio ($CR$) of the system is defined as the ratio between the number of Nyquist samples with respect to the spread bandwidth and the number of compressive measurements in a given time period. For the system in Figure 1, the measurement matrix is block-diagonal, where each block is a row vector. The coefficients in each row block of the measurement matrix form the measurement kernel of the corresponding measurement.
In this paper, we take phase-shift-keying (PSK) DSSS signals as examples, which can be represented as:

$$s(t) = d(t)\,c(t)\cos(2\pi f_c t + \theta_0), \tag{2}$$

where $d(t)$ is the baseband PSK signal, $c(t)$ is the binary-valued (1 or −1) periodic waveform modulated by the PN sequence, $f_c$ is the carrier frequency and $\theta_0$ is the initial random phase.
In this paper, we model the wireless channel as an additive white Gaussian noise (AWGN) channel over the DSSS spectrum, with noise variance $\sigma^2$, and assume that the rows of the measurement matrix are normalized to unit energy. According to the noise folding theory [34], each measurement then becomes a zero-mean circularly symmetric complex random variable with variance $\sigma^2$. In the signal-absent case, the theoretical probability density function (PDF) of the energy of a measurement vector $\mathbf{y}$ of length $M$ can be expressed as:

$$p(E \mid \mathcal{H}_0) = \frac{E^{M-1}}{(M-1)!\,\sigma^{2M}}\,e^{-E/\sigma^2}, \tag{3}$$

where $E = \|\mathbf{y}\|_2^2$ is the energy of $\mathbf{y}$.
If the coefficients in each block of the measurement matrix are randomly drawn from independent and identically distributed complex Gaussian distributions and normalized to unit energy, the DSSS signal can also be modeled as AWGN over the DSSS spectrum for the purpose of detection. In the signal-present case, the theoretical PDF of the energy of an $M$-length measurement vector can then be expressed as:

$$p(E \mid \mathcal{H}_1) = \frac{E^{M-1}}{(M-1)!\,(\sigma^2 + P)^{M}}\,e^{-E/(\sigma^2 + P)}, \tag{4}$$

where $P$ is the signal power.
The signal detection is performed by energy thresholding. More specifically, given a threshold $T$, the theoretical false positive rate ($P_{FP}$) and false negative rate ($P_{FN}$) follow:

$$P_{FP} = \int_{T}^{\infty} p(E \mid \mathcal{H}_0)\, dE \tag{5}$$

and

$$P_{FN} = \int_{0}^{T} p(E \mid \mathcal{H}_1)\, dE, \tag{6}$$

respectively.
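Because the measurement energy under each hypothesis is a sum of $M$ squared magnitudes of independent complex Gaussians, i.e., Gamma-distributed with integer shape $M$, the error rates in Equations (5) and (6) have closed forms. A sketch, with the threshold solved from a target false-positive rate by bisection (the bisection is our illustrative choice, not necessarily how the thresholds in Section 5 were computed):

```python
import math

def gamma_cdf_int(x, m, scale):
    """CDF of a Gamma(m, scale) variable with integer shape m:
    P(E <= x) = 1 - exp(-x/scale) * sum_{k<m} (x/scale)^k / k!."""
    z = x / scale
    return 1.0 - math.exp(-z) * sum(z ** k / math.factorial(k) for k in range(m))

def false_positive_rate(T, M, noise_var):
    # Under H0 the energy is Gamma(M, sigma^2): P_FP = P(E > T | H0).
    return 1.0 - gamma_cdf_int(T, M, noise_var)

def false_negative_rate(T, M, noise_var, sig_power):
    # Under H1 the scale grows to sigma^2 + P: P_FN = P(E <= T | H1).
    return gamma_cdf_int(T, M, noise_var + sig_power)

def threshold_for_pfp(target, M, noise_var):
    """Bisect for the threshold T achieving the target false-positive rate
    (P_FP decreases monotonically in T)."""
    lo, hi = 0.0, 1000.0 * noise_var * M
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if false_positive_rate(mid, M, noise_var) > target:
            lo = mid  # P_FP still too high: raise the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with M = 6 measurements and unit noise variance, `threshold_for_pfp(0.01, 6, 1.0)` recovers the $P_{FP}$ = 0.01 operating point used in the evaluations of Section 5.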
In this paper, we focus on adaptive knowledge-enhanced compressive measurements based on TSI optimization. If we conduct the adaptations within symbol periods and design the measurement kernels (i.e., the row blocks of the measurement matrix) sequentially, the kernel for the $m$th measurement is designed by solving the following optimization problem:

$$\boldsymbol{\phi}_m = \arg\max_{\boldsymbol{\phi}}\ I\big(y_m; \mathcal{H} \mid \mathcal{D}_{m-1}\big) \quad \text{s.t.} \quad \|\boldsymbol{\phi}\|_2 = 1, \tag{7}$$

where $\mathcal{D}_{m-1}$ is the collection of the measurement kernels and the measurement data of the 1st through the $(m-1)$th measurements; $\boldsymbol{\phi}_m$, $\mathbf{x}_m$ and $y_m$ represent the measurement kernel, the preprocessed signal from the input filter and the measurement data of the $m$th measurement, respectively; and $\|\cdot\|_2$ represents the $\ell_2$-norm operation. The mutual information between $y_m$ and $\mathcal{H}$, i.e., $I(y_m; \mathcal{H} \mid \mathcal{D}_{m-1})$, is defined as the TSI in the signal detection.
During the operation period of $\boldsymbol{\phi}_m$, if the channel noise and the DSSS signal in the signal-present case are denoted as $\mathbf{n}_m$ and $\mathbf{s}_m$, then:

$$y_m = \boldsymbol{\phi}_m^{T}(\mathbf{s}_m + \mathbf{n}_m). \tag{8}$$

According to information theory,

$$I\big(y_m; \mathcal{H} \mid \mathcal{D}_{m-1}\big) = h\big(y_m \mid \mathcal{D}_{m-1}\big) - h\big(y_m \mid \mathcal{H}, \mathcal{D}_{m-1}\big), \tag{9}$$

where $h(\cdot \mid \cdot)$ denotes the conditional differential entropy. If $\mathcal{H}$ is known, in either the signal-present or the signal-absent case, the measurement data $y_m$ depends only on $\mathbf{s}_m$ and the channel noise. Therefore, $h(y_m \mid \mathcal{H}, \mathcal{D}_{m-1}) = h(\boldsymbol{\phi}_m^{T}\mathbf{n}_m)$. As the measurement noise $\boldsymbol{\phi}_m^{T}\mathbf{n}_m$ is a zero-mean circularly symmetric complex random variable with variance $\sigma^2$ according to the noise folding theory, $h(y_m \mid \mathcal{H}, \mathcal{D}_{m-1})$ is a constant given the noise power. Thus, the optimization problem in Equation (7) is equivalent to:

$$\boldsymbol{\phi}_m = \arg\max_{\boldsymbol{\phi}}\ h\big(y_m \mid \mathcal{D}_{m-1}\big) \quad \text{s.t.} \quad \|\boldsymbol{\phi}\|_2 = 1. \tag{10}$$
In this paper, we focus on short-code DSSS (SC-DSSS) signals, where the period of the PN sequence is equal to the symbol period. In the case of measurements within a symbol period, the measurement kernel $\boldsymbol{\phi}_m$ is designed to cover at most one period of the PN sequence. Mixture-of-Gaussian (MoG) models are widely used to solve statistical signal processing problems [35,36]. In the measurement design stage of this paper, we establish a dictionary $\mathbf{D}$ of the DSSS signals. The atoms of the dictionary, denoted by $\mathbf{d}_l$ ($l = 1, \ldots, L$), are taken to be the Nyquist-rate-sampled DSSS signals in a symbol period, which carry a fixed symbol content and are modulated by the possible PN sequences. Based on the dictionary, we establish a MoG model of the posterior distribution of the signal $\mathbf{s}_m$ in the DSSS signal-present case:

$$p\big(\mathbf{s}_m \mid \mathcal{D}_{m-1}\big) = \sum_{l=1}^{L} p_l^{(m-1)}\, \mathcal{CN}\big(\mathbf{s}_m;\, \mathbf{0},\, \mathbf{\Sigma}_{l,m}\big), \tag{11}$$

where $L$ is the number of possible PN sequences, and $p_l^{(m-1)}$ ($l = 1, \ldots, L$) denotes the posterior probability that the $l$th PN sequence is used in the DSSS signal-present case, given the measurement kernels and data of the 1st through the $(m-1)$th measurements. Each component $\mathcal{CN}(\mathbf{s}_m; \mathbf{0}, \mathbf{\Sigma}_{l,m})$ is modeled as a complex zero-mean Gaussian distribution with covariance matrix:

$$\mathbf{\Sigma}_{l,m} = P\,\mathbf{d}_{l,m}\mathbf{d}_{l,m}^{H}, \tag{12}$$

where $\mathbf{d}_{l,m}$ is a vector taken from the dictionary atom $\mathbf{d}_l$, according to the locations of the coefficient block in the $m$th row of the measurement matrix, and $(\cdot)^{H}$ represents the Hermitian operation.
With the simplifying assumption that the single measurements are independent of each other, $p_l^{(m-1)}$ ($l = 1, \ldots, L$) in Equation (11) can be obtained by:

$$p_l^{(m-1)} \propto p_l^{(0)} \prod_{i=1}^{m-1} \frac{1}{\pi\,\sigma_{l,i}^{2}}\, \exp\!\left(-\frac{|y_i|^{2}}{\sigma_{l,i}^{2}}\right), \tag{13}$$

where $\sigma_{l,i}^{2} = \boldsymbol{\phi}_i^{T}\big(\mathbf{\Sigma}_{l,i} + \sigma^2\mathbf{I}\big)\boldsymbol{\phi}_i^{*}$, with $p_l^{(0)}$ the prior probability of the $l$th PN sequence. $\mathbf{I}$ denotes the identity matrix of the same size as $\mathbf{\Sigma}_{l,i}$.
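The per-measurement Bayes update of the PN-sequence posteriors can be sketched as follows. The notation is ours: under the independence assumption above, each dictionary atom induces a scalar measurement variance `sig_power * |<phi_m, d_l>|^2 + noise_var`, and the posteriors are renormalized after each measurement:

```python
import numpy as np

def update_posteriors(p_prev, y_m, phi_m, atoms, sig_power, noise_var):
    """One Bayes update of the L PN-sequence posteriors after measurement m.
    atoms: L x N array of dictionary atoms; phi_m: length-N kernel;
    y_m: the complex scalar measurement."""
    # Per-component variance of y_m under the MoG model.
    var = sig_power * np.abs(atoms @ phi_m.conj()) ** 2 + noise_var
    # Complex Gaussian likelihood of the observed measurement.
    lik = np.exp(-np.abs(y_m) ** 2 / var) / (np.pi * var)
    post = p_prev * lik
    return post / post.sum()

# Toy example with three orthogonal atoms: measuring with a kernel aligned to
# atom 0 and observing a large value concentrates the posterior on atom 0.
atoms = np.eye(3, dtype=complex)
p0 = np.full(3, 1.0 / 3.0)
post = update_posteriors(p0, 3.0, atoms[0], atoms, sig_power=10.0, noise_var=1.0)
```

The posterior mass moves toward the atom whose induced variance best explains the observation, which is what lets the subsequent kernel design track the most probable PN sequence.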
With the MoG signal and AWGN channel models, if the rows of the compressive measurement matrix are normalized, it can further be proved that the signal-absent case can be ignored in the optimization problem. Therefore, Equation (10) can be derived into the following form:

$$\boldsymbol{\phi}_m = \arg\max_{\boldsymbol{\phi}}\ h\big(y_m \mid \mathcal{H}_1, \mathcal{D}_{m-1}\big) \quad \text{s.t.} \quad \|\boldsymbol{\phi}\|_2 = 1, \tag{14}$$

where $h(y_m \mid \mathcal{H}_1, \mathcal{D}_{m-1})$ is the conditional differential entropy of $y_m$ in the signal-present case, given the known measurement kernels and data of the 1st through the $(m-1)$th measurements. It can be approximated as:

$$h\big(y_m \mid \mathcal{H}_1, \mathcal{D}_{m-1}\big) \approx \sum_{l=1}^{L} p_l^{(m-1)} \log\!\big(\pi e\, \sigma_{l,m}^{2}\big), \tag{15}$$

where $\sigma_{l,m}^{2} = \boldsymbol{\phi}_m^{T}\big(\mathbf{\Sigma}_{l,m} + \sigma^2\mathbf{I}\big)\boldsymbol{\phi}_m^{*}$, with $\mathbf{I}$ representing the identity matrix of the same size as $\mathbf{\Sigma}_{l,m}$.
In the literature, a recursive gradient method has usually been used to solve an optimization problem such as Equation (14) [13,37]. In this method, the refinement of the measurement kernel $\boldsymbol{\phi}_m$ at the $k$th iteration is performed using:

$$\boldsymbol{\phi}_m^{(k+1)} = \boldsymbol{\phi}_m^{(k)} + \eta\, \nabla_{\boldsymbol{\phi}}\, h\big(y_m \mid \mathcal{H}_1, \mathcal{D}_{m-1}\big)\Big|_{\boldsymbol{\phi}_m^{(k)}}, \tag{16}$$

where $\eta$ is the optimization step size, and the gradient term can be approximated as:

$$\nabla_{\boldsymbol{\phi}}\, h \approx \sum_{l=1}^{L} p_l^{(m-1)}\, \frac{\big(\mathbf{\Sigma}_{l,m} + \sigma^2\mathbf{I}\big)\,\boldsymbol{\phi}_m^{(k)}}{\sigma_{l,m}^{2}}. \tag{17}$$

The derivations of Equations (14), (15) and (17) are provided in Appendix A.
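The recursive refinement of Equation (16) is ordinary projected gradient ascent: step along the gradient of the entropy surrogate, then re-normalize the kernel to unit energy. A generic sketch (the toy gradient below maximizes the simpler objective |⟨phi, d⟩|² for a fixed direction `d`, standing in for the TSI gradient of Equation (17)):

```python
import numpy as np

def refine_kernel(phi, grad_fn, step, iters):
    """Projected gradient ascent for a measurement kernel: take a step along
    grad_fn(phi), then re-normalize to unit energy."""
    for _ in range(iters):
        phi = phi + step * grad_fn(phi)
        phi = phi / np.linalg.norm(phi)
    return phi

# Toy objective |<phi, d>|^2 with gradient 2 d (d . phi): the refined kernel
# should converge to the target direction d.
d = np.array([1.0, 0.0])
phi0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
phi = refine_kernel(phi0, lambda p: 2.0 * d * (d @ p), step=0.1, iters=200)
```

This per-kernel iteration loop (about 2000 iterations per kernel, per Section 5) is exactly the cost that the offline-trained neural networks of Section 4 are designed to avoid.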
5. Evaluations and Discussions through Theoretical Analysis and Simulations
In this paper, we used binary PSK (BPSK) modulated SC-DSSS signals in the theoretical analysis and simulations. The candidate PN sequences were taken from the maximum-length sequences (m-sequences) [19,38] of orders 1 through 5, which are commonly used in DSSS communications. The m-sequences of order N were generated with feedback shift-registers with the structure described in Figure 4.
In Figure 4, a seed of the shift-registers is a binary sequence of length N in which not all entries are zero-valued. The registers store the current state values, and the binary multipliers are determined by the coefficients of the primitive polynomials. The additions in Figure 4 are binary (modulo-2) additions, and the module at the end of the registers converts the binary values {0, 1} into the values in {1, −1}. The primitive polynomials of orders 1 to 5 and the numbers of the corresponding m-sequences are shown in Table 1.
The maximum length of the m-sequences specified in Table 1 is 31. Therefore, the number of Nyquist samples in each symbol period was taken to be 62 in the theoretical analysis and the simulations.
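The register structure of Figure 4 is a standard Fibonacci LFSR. A sketch of one-period m-sequence generation (the tap indexing below is one of the two common conventions; the reciprocal convention yields an equivalent, time-reversed m-sequence):

```python
def m_sequence(taps, order, seed=None):
    """Generate one period (2**order - 1 chips) of an m-sequence from a
    Fibonacci LFSR. `taps` lists the exponents of the feedback polynomial
    with nonzero coefficients, e.g. [5, 2] for x^5 + x^2 + 1. The final
    stage output is mapped from {0, 1} to {+1, -1} as in Figure 4."""
    state = list(seed) if seed is not None else [1] * order  # any nonzero seed
    out = []
    for _ in range(2 ** order - 1):
        out.append(1 if state[-1] else -1)   # binary-to-bipolar mapping
        fb = 0
        for t in taps:
            fb ^= state[t - 1]               # modulo-2 feedback sum
        state = [fb] + state[:-1]            # shift the register
    return out

seq = m_sequence([5, 2], 5)  # one of the order-5 m-sequences, length 31
```

An m-sequence is balanced (its ±1 chips sum to ±1) and its cyclic autocorrelation at every nonzero shift equals −1, which is what makes these sequences attractive for DSSS spreading.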
Both the multiple neural network strategy and the single neural network strategy described in Section 4 were evaluated in this section. Each of the neural networks trained in this paper contains 3 hidden layers. For the single neural network strategy, the widths of the 3 hidden layers were 350, 128 and 64, respectively; for the multiple neural network strategy, they were 512, 350 and 256, respectively.
The neural networks were optimized with the GPU version of TensorFlow 2.0 [39] under Python 3.7. To train each of the neural networks, 20,000 random probability vectors (with a batch size of 100 and 10 epochs) were used as the training data. The resulting neural networks were used to evaluate the performance of the proposed adaptive methods.
In the simulations, we define the SNR as the ratio between the signal power and the noise variance, i.e.,

$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{P}{\sigma^{2}}\right)\ \mathrm{dB}. \tag{18}$$
As discussed in Section 4, the measurements and detections can be performed within a single symbol period or across multiple symbol periods. In the remainder of this section, we first evaluate the theoretical and simulated performance of the proposed adaptive measurement and detection methods on a single-symbol-period basis. Then, the simulated results with measurements and detections across multiple symbol periods are provided and discussed as an extension to the theory in Section 3 and Section 4.
5.1. The Theoretical Analysis and Simulations of DSSS Detection through Single Symbol-Period Measurements
As specified above, in the theoretical analysis and simulations of the measurements and detection within a single symbol period, the number of Nyquist samples was taken to be 62. For the compressive measurements, the number of measurements per detection was chosen as 6, 9 or 12, resulting in $CR$ values of about 10, 7 and 5, respectively.
We first analyzed the theoretical detection accuracy of the proposed methods through single-symbol-period measurements. As the adaptive measurement processes are stochastic with feedback, it is difficult to express their theoretical detection performance in closed form. As a surrogate, we used an approximation in which the PN sequence used in the DSSS signal was exactly known in each detection, and the posterior probabilities of the PN sequence usage in each measurement kernel adaptation were given as a binary 1-sparse vector. In this case, we ran Monte-Carlo simulations with SNR values ranging from −30 dB to 20 dB. The curves of the detection probability ($P_D$) versus the SNR are plotted in Figure 5 for the 3 $CR$ cases, where each point on the curves was generated from 100,000 simulations. In each simulation, the PN sequence was selected randomly from the 234 possible candidates with equal probabilities. The detection thresholds were obtained by setting the theoretical $P_{FP}$ to 0.01, according to Equations (3) and (5). Consequently, the curves of the proposed adaptive methods in Figure 5 represent their best possible results and are regarded as their theoretical detection accuracy.
For comparison, the theoretical performance of the non-compressive energy detection method and of the conventional compressive detection method with random measurement kernels at the 3 $CR$ values was also analyzed according to Equations (3)-(6). The results are shown in Figure 5. In the non-compressive energy detection, the measurement matrix was an identity matrix, so no compression was performed during the measurements and the number of measurements equaled the number of Nyquist samples. In the conventional compressive detection method, the coefficient blocks of each row in the measurement matrix were randomly drawn from independent and identically distributed complex zero-mean Gaussian distributions and then normalized to unit energy. The thresholds for these two methods were also obtained by setting their theoretical $P_{FP}$ to 0.01.
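The random-kernel baseline just described can be reproduced in simulation. The sketch below estimates its detection probability, with the threshold taken as an empirical noise-only energy quantile rather than from Equation (5); the constant-modulus signal stand-in and all sizes are illustrative:

```python
import numpy as np

def measurement_energies(M, N, snr_db, trials, signal_present, rng):
    """Energies of M random-kernel compressive measurements per trial
    (noise variance fixed to 1; constant-modulus stand-in for the DSSS signal)."""
    P = 10.0 ** (snr_db / 10.0)
    e = np.empty(trials)
    for t in range(trials):
        k = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
        k /= np.linalg.norm(k, axis=1, keepdims=True)       # unit-energy rows
        x = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2.0)
        if signal_present:
            x = x + np.sqrt(P) * np.exp(2j * np.pi * rng.random(N))
        e[t] = np.sum(np.abs(k @ x) ** 2)
    return e

rng = np.random.default_rng(3)
# Threshold = empirical 99th percentile of noise-only energies (P_FP ~ 0.01).
thr = np.quantile(measurement_energies(6, 62, 10.0, 2000, False, rng), 0.99)
pd = np.mean(measurement_energies(6, 62, 10.0, 2000, True, rng) > thr)
```

At 10 dB SNR the estimated `pd` is close to 1; sweeping `snr_db` reproduces the shape of the random-kernel curves in Figure 5.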
From Figure 5, we observe that the detection accuracies of all the methods generally improve as the $CR$ value decreases. This improvement results from the increasingly distinguishable statistics of the measurement energies between the signal-absent and signal-present cases. The non-compressive method achieves the best detection accuracy and can be treated as a benchmark. Comparing the detection accuracies of the compressive methods, we observe that the theoretical optimal performance of the proposed methods is significantly better than that of the conventional compressive detection method with random measurement kernels. For example, to achieve a given $P_D$ value at $CR \approx 5$, the proposed methods at their theoretical best can save up to about 5 dB in SNR compared to the conventional compressive detection system with random measurement kernels.
For the proposed adaptive methods, if we compare the multiple neural network and single neural network strategies, we observe that the adaptive method with the multiple neural network strategy shows slightly better performance in the detection accuracy than the adaptive method with single neural network strategy. As a trade-off, a higher cost in the hardware and network training time is introduced by the multiple neural network strategy.
Besides the theoretical analysis, Monte-Carlo simulations of DSSS signal detection using the proposed adaptive methods were also performed for the 3 $CR$ cases, with system setups similar to those of the theoretical analysis. The simulated $P_D$ versus SNR results for the proposed methods are shown in Figure 6; each point on these curves was generated from 5,000,000 Monte-Carlo simulations. The simulation results of the non-compressive detection method and of the conventional compressive detection method with random measurement kernels are also shown in Figure 6 for comparison. As in the theoretical analysis, the detection thresholds were generated according to Equations (3) and (5), with the theoretical $P_{FP}$ set to 0.01.
Comparing Figure 5 and Figure 6, we observe that the simulated performance of the non-compressive detection method and of the conventional compressive detection method with random measurement kernels matches their theoretical results well. The simulated detection accuracies of the proposed methods, although sometimes slightly below their theoretical optima at given $CR$ and SNR values, are still significantly better than those of the conventional compressive method with random measurement kernels. In addition, the signal can be detected even when the SNR is below 0 dB. This is because the designed measurement kernels concentrate more and more on the signal as the adaptive measurements proceed, which increases the SNR of the measurement data.
To validate the discussions above and gain deeper insight into the proposed adaptive methods, we further studied the correlations between the rows of the designed measurement matrix and the PN sequence actually used in the DSSS signal generation. In this paper, the correlation between the $m$th row of the measurement matrix and the used PN sequence (assuming the $v$th PN sequence was actually used) is defined by:

$$\rho_m = \frac{\big|\langle \boldsymbol{\phi}_m, \mathbf{d}_v \rangle\big|}{\|\boldsymbol{\phi}_m\|_2\, \|\mathbf{d}_v\|_2}, \tag{19}$$

where $\langle \cdot, \cdot \rangle$ and $|\cdot|$ denote the inner product and the absolute value, respectively, $\boldsymbol{\phi}_m$ is the $m$th row of the measurement matrix $\mathbf{\Phi}$, and $\mathbf{d}_v$ is the $v$th dictionary atom discussed in the MoG model in Section 3. A larger correlation value in Equation (19) indicates a higher SNR in the measurement result, which in turn leads to a higher detection accuracy.
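Under the normalized inner-product reading of Equation (19), the correlation metric is one line; the aligned and orthogonal examples below illustrate its range:

```python
import numpy as np

def kernel_correlation(phi_m, d_v):
    """Normalized correlation between the m-th measurement kernel and the
    dictionary atom of the PN sequence actually in use (Equation (19))."""
    return np.abs(np.vdot(phi_m, d_v)) / (np.linalg.norm(phi_m) * np.linalg.norm(d_v))

aligned = kernel_correlation(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0]))
orthogonal = kernel_correlation(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

The metric lies in [0, 1]: 1 when the kernel is a scaled copy of the atom, 0 when they are orthogonal.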
As a representative example, Figure 7 depicts the correlation values versus the measurement indices within a symbol-period adaptive procedure for the proposed adaptive methods at $CR \approx 5$ and a fixed SNR value. For comparison, the curve for the conventional compressive detection method with random measurement kernels is also shown in Figure 7. Each point on the curves was generated from 100,000 Monte-Carlo simulations, in which the PN sequence used in each simulation was randomly selected from the 234 possible candidates with equal probabilities, and the resulting correlation values at each measurement index were averaged.
In Figure 7, it can be observed that for the proposed adaptive methods the correlation values gradually increase as the measurements proceed. This indicates that the designed measurement kernels concentrate more and more on the signal, leading to gradually increasing SNRs in the measurement data and improved detection accuracies. In contrast, for the random measurement kernels, the correlation values fluctuate randomly around 0.4 and are lower than those of the proposed adaptive case over almost the entire measurement procedure. Thus, the SNR of the measurement data is relatively lower for the conventional compressive detection method, which in turn results in a lower detection accuracy. Comparing the curves of the two proposed neural network strategies, we find that the correlation value of the multiple neural network strategy increases slightly faster than that of the single neural network strategy, which in turn yields slightly better detection accuracy.
Besides the studies on detection accuracy and the measurement procedure, we also studied the time costs of the proposed neural-network-based adaptive measurement kernel design methods to assess their efficiency. For validation, the time cost of the recursive optimization method described in Section 3 was also measured for comparison. For a quantitative evaluation, the time costs of the measurement kernel design for 500 measurements (i.e., the adaptive measurements over 100 single-symbol-period detections) were measured for the proposed methods and for the recursive optimization method at $CR \approx 10$ and a fixed SNR value. For the recursive optimization method, about 2000 iterations are usually needed to design the measurement kernel for a single measurement in order to reach the detection accuracies of the artificial neural networks, and this setting was used in this study. The simulations were run on a computer with an Intel Core i5-9400 CPU @ 2.90 GHz and 32.00 GB of RAM. The timing results of the measurement kernel design for a single measurement are shown in Table 2.
From
Table 2, it can be seen that the efficiency of the proposed methods in the measurement kernel design is significantly improved over that of the recursive optimization method; the improvement can be as high as around 10,000 times. Comparing the two strategies of the proposed methods, the multiple neural network strategy results in a slightly lower time cost, as the structure of each neural network in this strategy is relatively simpler. Although the time costs of the proposed methods shown in
Table 2 are still relatively high for practical DSSS signal detection, the efficiency can be expected to improve significantly with specially designed hardware and software. This improvement will be studied in our future work.
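The efficiency gap has a simple structural explanation: the neural network designs each kernel with a single forward pass, whereas the recursive optimization refines each kernel over roughly 2000 iterations. The following toy sketch (with made-up network weights and a placeholder refinement rule, standing in for the paper's actual algorithms) illustrates this contrast:

```python
import time
import numpy as np

rng = np.random.default_rng(1)
n = 128  # kernel length (assumed)
# Toy weights of a one-hidden-layer "trained" network.
W1 = rng.standard_normal((n, n)) * 0.01
W2 = rng.standard_normal((n, n)) * 0.01

def nn_kernel(posterior):
    """One forward pass: map the current posterior summary to the next kernel."""
    h = np.tanh(W1 @ posterior)
    phi = W2 @ h
    return phi / np.linalg.norm(phi)

def recursive_kernel(posterior, iters=2000):
    """Stand-in for the recursive optimization: many refinement
    iterations per kernel (the paper reports ~2000)."""
    phi = rng.standard_normal(n)
    for _ in range(iters):
        phi += 1e-3 * (posterior - phi)  # placeholder update rule
        phi /= np.linalg.norm(phi)
    return phi

posterior = rng.standard_normal(n)
t0 = time.perf_counter(); nn_kernel(posterior); t_nn = time.perf_counter() - t0
t0 = time.perf_counter(); recursive_kernel(posterior); t_opt = time.perf_counter() - t0
print(f"NN design: {t_nn:.2e} s, recursive design: {t_opt:.2e} s")
```

Even in this crude sketch the per-kernel cost of the iterative design dominates, which is the structural source of the speedup reported in Table 2.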
5.2. The Simulations of DSSS Detection through Multi-Symbol-Period Measurements
It has been discussed in
Section 4 that the proposed adaptive methods can be extended to DSSS signal detection with measurements over multiple symbol periods. With setups similar to those of the single symbol-period detection simulations, we also conducted Monte Carlo simulations for DSSS signal detection based on multi-symbol-period measurements. In these simulations, the coefficients of the measurement kernel for the first measurement of the first symbol period were generated according to independent and identically distributed complex zero-mean Gaussian distributions and then normalized. The measurement kernels for the other measurements were then designed sequentially. The adaptations within each symbol period were done in the same way as in the single symbol-period simulations. For the inter-symbol adaptations, the measurement kernels corresponding to the first measurements in the second through the last symbol periods were adaptively designed based on the posterior information from the measurements in the previous symbol periods. The simulated curves of the detection error probability versus the compression ratio for the 3 SNR values are shown in
Figure 8 and
Figure 9, which correspond to the multiple neural network and single neural network strategies, respectively. The numbers of symbol periods included in the entire measurement procedure were selected as 20 and 40. For comparison, the simulated results of the conventional compressive method with random kernels over single and multiple symbol periods, as well as those of the proposed adaptive methods with single symbol-period measurements, are also plotted in
Figure 8 and
Figure 9.
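The multi-symbol-period measurement procedure described above can be sketched as follows (a minimal toy loop in which placeholder kernel-design and posterior-update rules stand in for the neural networks and the Bayesian updates; all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m_per_symbol, n_symbols = 64, 5, 3  # toy sizes (assumed)

def design_kernel(posterior):
    """Stand-in for the neural-network kernel design: here we simply
    normalize the posterior summary to unit norm."""
    return posterior / np.linalg.norm(posterior)

def update_posterior(posterior, phi, y):
    """Placeholder posterior update: nudge the summary toward the
    kernel direction, weighted by the measurement value."""
    return posterior + np.real(y) * phi

signal = rng.standard_normal(n) + 1j * rng.standard_normal(n)
posterior = rng.standard_normal(n)
measurements = []

for k in range(n_symbols):
    for j in range(m_per_symbol):
        if k == 0 and j == 0:
            # First kernel of the first symbol period: i.i.d. complex
            # zero-mean Gaussian coefficients, then normalized.
            phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
            phi /= np.linalg.norm(phi)
        else:
            # All later kernels, including the first kernel of each
            # subsequent symbol period, are designed from the posterior
            # accumulated over all previous measurements.
            phi = design_kernel(posterior)
        y = np.vdot(phi, signal)  # one compressive measurement
        posterior = update_posterior(posterior, phi, y)
        measurements.append(y)

print(len(measurements))  # → 15 (= m_per_symbol * n_symbols)
```

The key structural point is that the posterior carries over the symbol-period boundary, so the first measurement of every period after the first is already adapted rather than random.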
In
Figure 8 and
Figure 9, we find that at any given compression ratio, the multi-symbol-period implementations of the proposed adaptive methods achieve better detection accuracies than the conventional compressive method with random measurement kernels, similar to the single symbol-period detection. For example, at a compression ratio of approximately 5 and a given SNR, the conventional compressive detection method with 20 symbol periods yields a detection error probability of 0.1244, whereas the adaptive method with the single neural network strategy achieves a detection error probability lower than 0.001, an improvement of about 100 times. We also observe that the multi-symbol-period signal detection performance of all systems, especially the proposed adaptive methods, improves over that of their single symbol-period implementations. In particular, the more symbol periods that are included in the measurements, the better the signal detection accuracies the systems can achieve. For example, at a given compression ratio and SNR, the adaptive method using the multiple neural network strategy over a single symbol period yields a detection error probability of 0.7432, while the resulting detection error probabilities of the adaptive methods over 20 and 40 symbol periods are lower than 0.001 and 0.0001, improvements of about 740 times and 7400 times, respectively. Similar to the single symbol-period detection simulations, at the expense of higher hardware and training time costs, the adaptive method with the multiple neural network strategy also shows slightly better performance than the single neural network strategy in the multi-symbol-period implementations.
For the simulation results in
Figure 8 and
Figure 9, besides the increased number of measurements, which makes the energy statistics in the signal-absent and signal-present cases more and more distinguishable, the gradually increased correlation between the designed measurement kernels and the signal (as more posterior-information updates and measurement kernel adaptations are performed in this scenario) also plays an important role in the detection accuracy improvement of the proposed adaptive methods. On the other hand, the time cost of the detection task is also increased in this scenario. Therefore, in practical implementations, the trade-off between the time cost and the detection accuracy needs to be considered comprehensively, according to the specific detection tasks.
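The role of the number of measurements can be made concrete with a toy energy detector (random unit-norm kernels and unit-variance complex noise; all parameters are illustrative assumptions, and the adaptive kernel gain is deliberately left out): as the number of accumulated measurements grows, the energy statistics under the signal-present and signal-absent hypotheses separate more reliably.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64  # kernel length (assumed)

signal = rng.standard_normal(n) + 1j * rng.standard_normal(n)
# Scale so the average per-measurement signal energy is about 1.
signal *= np.sqrt(n) / np.linalg.norm(signal)

def energy_statistic(present, m_used):
    """Sum of |y_k|^2 over `m_used` compressive measurements taken with
    random unit-norm Gaussian kernels; `present` toggles the signal."""
    total = 0.0
    for _ in range(m_used):
        phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        phi /= np.linalg.norm(phi)
        noise = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        y = (np.vdot(phi, signal) if present else 0.0) + noise
        total += abs(y) ** 2
    return total

# With more measurements, the gap between the two hypotheses widens.
for m_used in (10, 100):
    print(m_used, energy_statistic(True, m_used), energy_statistic(False, m_used))
```

In the adaptive methods the separation grows even faster, because the per-measurement signal energy itself increases as the kernels align with the signal, which is the second effect noted above.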