Article

Interference Response Prediction of Receiver Based on Wavelet Transform and a Temporal Convolution Network

1 Science and Technology on Electromagnetic Compatibility Laboratory, China Ship Development and Design Center, Wuhan 430064, China
2 School of Naval Architecture, Ocean and Energy Power Engineering, Wuhan University of Technology, Wuhan 430063, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(1), 162; https://doi.org/10.3390/electronics13010162
Submission received: 24 November 2023 / Revised: 22 December 2023 / Accepted: 27 December 2023 / Published: 29 December 2023
(This article belongs to the Special Issue Cognition and Utilization of Electromagnetic Space Signals)

Abstract

To improve the prediction performance of existing methods under the coupled interference of multiple modulation types in complex electromagnetic environments, this paper introduces a novel approach that integrates the wavelet transform with a temporal convolutional network. The model begins with a data preprocessing stage in which the wavelet transform decomposes the original signal into various scales. This step generates scale coefficients in different frequency bands and effectively reduces the signal length. To enhance the model's ability to capture long-term dependencies in time series data, temporal convolutional networks are employed for feature extraction. Moreover, the model's performance is further refined by an attention-driven feature fusion strategy that methodically combines high- and low-frequency features as well as local and global characteristics. The model's efficacy is validated on a custom MATLAB dataset, and the simulation results confirm a significant improvement in prediction accuracy.

1. Introduction

With the continuous advancement of radio frequency (RF) microwave device technology, the electromagnetic environment faced by RF microwave equipment in naval formations has become increasingly complex [1,2], and co-frequency interference has become increasingly prominent. When spectrum allocation is inappropriate, the RF transmitting and receiving equipment on different platforms may interfere with each other; this is commonly referred to as the electromagnetic compatibility problem within a ship formation [3,4]. To ensure electromagnetic compatibility within the system, it is crucial to establish a behavioral model for RF devices and predict their operational conditions under potential interference. Because the front-end receivers of such equipment are highly sensitive [5,6], modeling the front-end receivers of RF microwave devices is a key step toward interference response prediction. Establishing a behavioral model of the receiver to predict its response to potential interference is a key prerequisite for scientific and rational spectrum management.
Current behavior-level modeling of RF microwave devices falls into two approaches. The first is represented by polynomial models [7,8,9,10]. Ref. [7] established a nonlinear model of an operational amplifier based on the Volterra series. As the memory depth and the order of nonlinearity increase, the number of coefficients in the Volterra series grows exponentially, so the Volterra series is mainly suitable for weakly nonlinear systems. To reduce the complexity of the Volterra model and ease the computational burden, Refs. [11,12] discuss nonlinear behavior models based on Wiener and Hammerstein structures; however, the accuracy of these models remains unsatisfactory [13]. The second approach uses neural networks, with commonly used methods including BP neural networks [14], support vector machines [15], and radial basis function neural networks [16,17], among others. However, when the input signal and the system exhibit complex nonlinear characteristics, low-complexity neural network models struggle to achieve good modeling performance, because their accuracy depends heavily on the quality of the hand-crafted features used.
In contrast, deep learning approaches eliminate the reliance on manually engineered features, as they autonomously extract and learn relevant characteristics from the data. In recent years, deep learning methods have been actively researched for predicting the nonlinear behavior of RF microwave devices, focusing on sequence models such as the CNN [18], RNN [19], and LSTM [20]. In [18], researchers leveraged the weight-sharing property of CNNs to extract features from one-dimensional time-domain signals and established a nonlinear model for power amplifiers. Ref. [21] discussed the use of Recurrent Neural Networks (RNNs) for modeling RF nonlinear devices. Ref. [19] built on the RNN model by adding derivative information to the training dataset, making training more efficient and achieving the same accuracy as traditional RNNs with less data. To address the gradient vanishing issue encountered in traditional RNN training, Ref. [20] proposed a nonlinear model based on LSTM networks, which prevents gradient disappearance through the forget gate. Overall, LSTM networks are adept at discovering and exploiting the intrinsic patterns of long sequences and offer better temporal prediction accuracy than CNNs and RNNs. However, when the number of signal features is large [22] and several modulation types are superposed in the environment, the performance of LSTM may degrade.
To overcome the limitations of existing methods, we propose a novel approach that combines the wavelet transform with a Temporal Convolutional Network (TCN) to enhance the nonlinear predictive performance of the receiver interference response prediction model. The TCN employs sparse connections, so its receptive field can easily be controlled by adjusting the convolution kernel size or adding layers; it is therefore widely used in time series modeling and prediction [23,24]. The specific work of this paper is as follows:
  • The electromagnetic environment in which a radar receiver operates is often riddled with various types of interference, and this complexity hampers the modeling performance of existing models [20]. To tackle this, we propose a TCN-based architecture for extracting the receiver behavior model, designed specifically to enhance nonlinear modeling performance under the influence of multiple interfering signals;
  • To simplify the feature extraction process, we employ wavelet transform to decompose signals into coefficients across various frequency scales. This approach not only diminishes the coupling among different frequency components but also condenses the signal length, thereby facilitating more efficient feature extraction by the model. Furthermore, to effectively integrate features across these different frequency scales, we have incorporated a feature fusion method named attention feature fusion;
  • Compared with existing methods, our numerical experiments indicate that the proposed method has stronger generalization ability and can adapt to complex and changing electromagnetic environments.
In summary, this paper proposes a radar receiver interference response prediction model named Wavelet-TCN-AFF (Wavelet-Temporal Convolutional Network-Attention Feature Fusion), which addresses the performance degradation of existing methods when processing long, complex time series signals. Moreover, the model adapts to interference signals of various typical modulation types in complex electromagnetic environments. The effectiveness of the proposed model in predicting typical radar radiation interference is validated on a dataset of operational data collected from a typical receiver model established in MATLAB.

2. Methods

The Wavelet-TCN-AFF model in this paper is shown in Figure 1. The model consists of four components: a wavelet transform module, a TCN feature extraction module, a stacked AFF feature fusion module, and a predictor. First, the time-domain signal at the antenna aperture is taken as the input and decomposed by the wavelet decomposition module into coefficients representing different frequency components. Second, the TCN module extracts features from each layer of coefficients; these serve as the local features of the signal. Concurrently, to avoid losing global features of the original signal after the wavelet transform, the input signal is compressed through pooling operations to obtain its global features. Third, the stacked AFF module fuses the local and global features across the different frequency scales. Finally, the predictor, consisting of fully connected layers, predicts the time-domain output at the receiver's video end from the fused features. This output is the interference response of the receiver, i.e., the model's prediction of the time-domain waveform at the receiver's video output. The following sections describe each component of the Wavelet-TCN-AFF model in detail.
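For orientation, the following PyTorch-style sketch outlines this forward pass. The sub-module interfaces (wavelet decomposer, per-band TCN branches, stacked AFF, predictor) are our assumed abstractions of the components detailed in Sections 2.1, 2.2 and 2.3, not the authors' released code.

```python
import torch.nn as nn

class WaveletTCNAFF(nn.Module):
    """Sketch of the four-stage pipeline: wavelet decomposition -> per-band TCN
    feature extraction -> stacked AFF fusion -> fully connected predictor.
    All sub-modules are assumed interfaces, not the authors' exact implementation."""

    def __init__(self, wavelet_decomposer, tcn_branches, stacked_aff, predictor, pool_size=8):
        super().__init__()
        self.wavelet = wavelet_decomposer          # returns a list of coefficient bands
        self.tcns = nn.ModuleList(tcn_branches)    # one TCN per frequency band (local features)
        self.pool = nn.AvgPool1d(pool_size)        # compresses the raw input (global features)
        self.aff = stacked_aff                     # fuses local bands, then local with global
        self.predictor = predictor                 # fully connected layers -> video-end waveform

    def forward(self, x):                          # x: (batch, 1, signal_length)
        bands = self.wavelet(x)                    # e.g. [cA3, cD3, cD2, cD1]
        local_feats = [tcn(b) for tcn, b in zip(self.tcns, bands)]
        global_feat = self.pool(x)
        fused = self.aff(local_feats, global_feat)
        return self.predictor(fused)
```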

2.1. Wavelet Transform

In signal processing, both the high-frequency and low-frequency components of the input signal carry information about the interference response at the receiver's output. However, the time-domain coupling of signals at different frequencies makes it challenging for deep learning models to extract features. In previous studies [20], when the model input is an unprocessed raw signal sequence, it is difficult for the model to extract deep features directly from the raw sequence. Unlike the Fourier transform, which provides only frequency-domain information and loses local time-domain information, the wavelet transform obtains local time information by shifting the mother wavelet and frequency information by scaling it. The wavelet transform is a multiscale analysis method: the high-frequency components of the decomposition have high time resolution but low frequency resolution, whereas the low-frequency components have low time resolution but high frequency resolution. This property makes the wavelet transform an effective tool for capturing both the high-frequency and low-frequency features of a signal, thereby improving signal processing performance.
In wavelet decomposition, the original signal is decomposed into sub-signals of different frequency bands through the scale function ϕ(t) and wavelet basis function ψ(t). The wavelet approximation coefficients a0(k) and wavelet detail coefficients dj(k) of a discrete signal s(t) with a length of M can be represented as:
$$ a_0(k) = \frac{1}{\sqrt{M}} \sum_{m=1}^{M} s(m)\,\phi_{0,k}(m) \quad (1) $$
$$ d_j(k) = \frac{1}{\sqrt{M}} \sum_{m=1}^{M} s(m)\,\psi_{j,k}(m) \quad (2) $$
where the subscript 0 in Equation (1) denotes the coarsest scale level, corresponding to the lowest-frequency component in the multi-resolution analysis; k is the discrete time (translation) index at the current scale; M is the signal length; and j in Equation (2) is the scale level of the detail coefficients.
It is noteworthy that the choice of decomposition level and wavelet basis is a key factor in the effectiveness of the wavelet decomposition. To avoid an adverse effect of the scale span on subsequent processing, the selected parameters should decompose the signal into low-frequency components with clear periodicity and clean waveforms and high-frequency components with distinct detail information. To this end, this study selects Daubechies wavelets as the wavelet basis function [25].
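As a concrete illustration, a decomposition of the form of Equations (1) and (2) can be reproduced with the PyWavelets library. The specific Daubechies order ('db4'), the toy two-tone signal, and the sampling rate below are our assumptions; the three decomposition levels follow the setting reported later in Section 3.2.

```python
import numpy as np
import pywt

fs = 100e6                                # assumed sampling rate, for illustration only
t = np.arange(0, 1e-6, 1 / fs)
s = np.cos(2 * np.pi * 5e6 * t) + 0.5 * np.cos(2 * np.pi * 20e6 * t)   # toy two-tone signal

# Three-level discrete wavelet transform with a Daubechies basis ('db4' is assumed).
coeffs = pywt.wavedec(s, 'db4', level=3)
cA3, cD3, cD2, cD1 = coeffs               # approximation (low-freq) + detail (high-freq) bands

# Each band is roughly half the length of the previous level, which is what
# shortens the sequences handed to the TCN feature extractors.
print([len(c) for c in coeffs])
```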

2.2. TCN Model

Although the wavelet transform converts the signal into coefficients at different scales, the resulting data volume is still very large and unsuitable for direct use in decision making. To better capture the relationships between interference signal sequences at different time scales, this study introduces the TCN [26] as the feature extraction module. The TCN consists of three main elements: causal convolution, dilated convolution, and residual connections. Causal convolution ensures a causal relationship between network layers; dilated convolution expands the receptive field without increasing the network depth, giving a better grasp of the time-domain coupling introduced by interference signals of different frequency components in complex electromagnetic environments; and residual connections mitigate the gradient vanishing or explosion that may occur as layers are added. By combining the wavelet transform with the TCN, our approach compresses the signal length to reduce the complexity of feature extraction and enhances the model's ability to process complex time series data.
The structure of the temporal convolutional network is shown in Figure 2, where d denotes the dilation rate. As the network depth increases, the dilation factor typically grows exponentially by a factor of 2, exponentially expanding the receptive field; this provides a broader field of view with fewer layers and allows deeper temporal features to be extracted from more distant historical data.
Dilated convolution is the first step of the TCN. For a one-dimensional time series x, the dilated convolution operation [26] is:
$$ F(s) = \sum_{i=0}^{N-1} f(i)\, x_{s - d \cdot i} \quad (3) $$
where x represents the input sequence; f is the filter; d is the dilation factor; i is the position within the convolution kernel; N is the convolution kernel size; and x_{s−d·i} is the (s − d·i)-th element of the previous layer, so the output at position s depends only on current and past samples.
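To make the indexing in Equation (3) concrete, the following minimal NumPy sketch evaluates it directly for a toy sequence and kernel (both arbitrary), treating samples before the start of the sequence as zero.

```python
import numpy as np

def dilated_causal_conv(x, f, d):
    """Direct evaluation of Eq. (3): F(s) = sum_i f(i) * x[s - d*i], with x[t] = 0 for t < 0."""
    N = len(f)
    out = np.zeros(len(x))
    for s in range(len(x)):
        for i in range(N):
            idx = s - d * i
            if idx >= 0:
                out[s] += f[i] * x[idx]
    return out

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
f = np.array([0.5, -1.0])                 # kernel size N = 2
# Each output F(s) depends only on x[s] and x[s - 2]: causal, dilated by 2.
print(dilated_causal_conv(x, f, d=2))
```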
As the causal convolution layers and dilated convolution layers are stacked, residual connections are introduced to prevent the gradient vanishing that may occur as the network deepens. The residual structure combines the input x with the output F(x) of the convolutional layers through an addition operation, calculated as follows:
$$ \mathrm{TCN}_{\mathrm{out}} = \mathrm{Activation}\left( x + F(x) \right) \quad (4) $$
where Activation(·) is the activation function.
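A minimal PyTorch realization of such a residual block, combining the dilated causal convolution of Equation (3) with the residual connection of Equation (4), might look as follows. The filter counts, kernel size, dilations, and dropout mirror the hyperparameters reported later in Section 3.2, but the exact layer ordering inside the authors' blocks is our assumption.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """One TCN residual block: dilated causal conv -> ReLU -> dropout, plus a
    residual connection as in Eq. (4). Causality is enforced by left padding only."""

    def __init__(self, in_ch, out_ch, kernel_size=2, dilation=1, dropout=0.2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(dropout)
        # 1x1 conv matches channel counts so that x + F(x) is well defined
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):                                  # x: (batch, in_ch, time)
        y = nn.functional.pad(x, (self.pad, 0))            # left-pad only -> causal
        y = self.drop(self.relu(self.conv(y)))
        return self.relu(self.downsample(x) + y)           # Eq. (4): Activation(x + F(x))

# Two blocks with 8 and 16 filters, dilations 1 and 2, kernel size 2 (Section 3.2).
tcn = nn.Sequential(TemporalBlock(1, 8, dilation=1), TemporalBlock(8, 16, dilation=2))
out = tcn(torch.randn(4, 1, 1000))                         # -> shape (4, 16, 1000)
```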

2.3. Attention Feature Fusion

The purpose of incorporating AFF [27] is to enable adaptive amalgamation of features from different frequency components, ensuring that the fusion is not excessively biased towards particularly prominent traits. In research areas such as radar signal type identification [28,29,30], the core idea is to identify the most distinctive and prominent features of the signal. This research, however, differs from such classification tasks: less prominent frequency components are not necessarily irrelevant, because they can still influence the response at the output end after passing through the power amplifier in the receiver. Therefore, the adaptive feature fusion of AFF is crucial for integrating both significant and less significant features.
Inspired by the work of [27], an Attention Feature Fusion (AFF) block was introduced to fuse the features of multiscale frequency components. The AFF structure is shown in Figure 3. The AFF module performs initial feature fusion on two input features, such as C1 and C2, and then processes them through channel attention calculations for both local and global features. The channel attention for local features begins with a point-wise convolution applied to the input feature X. This convolutional step, employing a 1 × 1 kernel, primarily serves to modify the channel dimensions of the input feature map which can be described as
$$ X = C_1 + C_2 \quad (5) $$
$$ Y = \mathrm{PWConv}_1(X) \quad (6) $$
where PWConv1 denotes the point-wise convolution.
Following the convolution, the output undergoes batch normalization, a standard process in deep learning architectures. The output of batch normalization is
$$ Z = B(Y) \quad (7) $$
where B denotes the batch normalization operation.
Subsequent to batch normalization, the Rectified Linear Unit (ReLU) activation function is applied. The output of ReLU is
$$ W = \delta(Z) \quad (8) $$
where δ denotes the ReLU activation function.
The process is then repeated: a second point-wise convolution is applied, followed by a final batch normalization step
$$ V = \mathrm{PWConv}_2(W) \quad (9) $$
$$ L(X) = B(V) \quad (10) $$
The channel attention for the global features is computed in the same way as for the local features, except that a global average pooling operation is first applied to the input. In this study, a stacked fusion scheme integrates the local features extracted from the three layers of wavelet coefficients with the global features: the local features are fused from high frequency to low frequency, and the result is then fused with the global features. The calculation is illustrated in Figure 3b.
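A compact sketch of one AFF block in the spirit of [27], combining the local bottleneck of Equations (5)-(10) with a global branch and a sigmoid fusion weight, is given below. The channel count and bottleneck ratio follow Section 3.2; the sigmoid-weighted combination and the remaining details are our assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AFF(nn.Module):
    """Attentional feature fusion block (after [27]): fuse two feature maps C1, C2
    with an attention mask built from local and global channel context.
    The bottleneck ratio r = 4 and 16 channels follow Section 3.2; the rest is assumed."""

    def __init__(self, channels=16, r=4):
        super().__init__()
        mid = channels // r

        def bottleneck():
            # PWConv -> BN -> ReLU -> PWConv -> BN, i.e. the chain of Eqs. (5)-(10)
            return nn.Sequential(
                nn.Conv1d(channels, mid, 1), nn.BatchNorm1d(mid), nn.ReLU(),
                nn.Conv1d(mid, channels, 1), nn.BatchNorm1d(channels),
            )

        self.local_att = bottleneck()
        self.global_att = nn.Sequential(nn.AdaptiveAvgPool1d(1), bottleneck())

    def forward(self, c1, c2):                  # c1, c2: (batch, channels, time)
        x = c1 + c2                             # Eq. (5): initial fusion
        w = torch.sigmoid(self.local_att(x) + self.global_att(x))
        return w * c1 + (1.0 - w) * c2          # attention-weighted combination

# Stacked use (Figure 3b): fuse high- to low-frequency local features first,
# then fuse the result with the pooled global feature (all assumed to share 16 channels).
```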

3. Numerical Experiments and Analysis

3.1. Sample Generation Considering the Diversity of Interference Forms

The modeling subject of this study is a typical receiver, simulated in MATLAB version 2021b; its configuration is presented in Table 1. The input to the simulated receiver is the time-domain superposition of an echo signal and an interference signal. The echo is a linear frequency modulated signal with a center frequency of 3.2 GHz, a bandwidth of 5 MHz, and a pulse width of 1 μs. For the interference signal, considering the variety of interference types in complex electromagnetic environments, we set up six different interference scenarios during sample generation. The dataset is divided according to interference type: Continuous Wave Interference (dataset CW Jam), Linear Frequency Modulated Interference (dataset LFM Jam), Cosine Modulated Nonlinear Frequency Modulated Interference (dataset NLFM Jam), Binary Phase Shift Keying Interference (dataset BPSK Jam), Quaternary Phase Shift Keying Interference (dataset QPSK Jam), and Binary Frequency Shift Keying Interference (dataset BFSK Jam).
The interference signal parameters in these datasets are as follows: the center frequency ranges from 3197.5 MHz to 3202.5 MHz in steps of 1 MHz; the bandwidth ranges from 1 MHz to 5 MHz in steps of 1 MHz; and the interference signal power ranges from −15 dBm to 15 dBm in steps of 5 dBm. The nonlinear frequency modulated signal uses cosine modulation. The binary phase shift keying signal uses a 10-bit random sequence with a roll-off factor of 0.35. In the binary frequency shift keying signal, f1 = 0.5 × B and f2 = 0.5 × B, where B is the bandwidth.
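For illustration, one such sample (the echo chirp plus a single LFM jammer drawn from the grid above) can be synthesised as follows. This NumPy sketch only mimics the MATLAB sample generation; the sampling rate, the 50-ohm reference used for the dBm-to-amplitude conversion, and the echo power are our assumptions.

```python
import numpy as np

fs = 10e9                                   # assumed sampling rate (covers the 3.2 GHz band)
pw = 1e-6                                   # pulse width: 1 us
t = np.arange(0, pw, 1 / fs)

def lfm(fc, bw, power_dbm, t):
    """Linear-frequency-modulated pulse centred at fc with sweep bandwidth bw."""
    amp = np.sqrt(2 * 50 * 1e-3 * 10 ** (power_dbm / 10))   # dBm -> peak volts over 50 ohms
    return amp * np.cos(2 * np.pi * ((fc - bw / 2) * t + 0.5 * (bw / pw) * t ** 2))

echo = lfm(3.2e9, 5e6, 0, t)                # echo: 3.2 GHz, 5 MHz, 1 us (0 dBm is assumed)
jam = lfm(3.2e9 - 5e6, 5e6, 15, t)          # LFM jammer: -5 MHz offset, 5 MHz, 15 dBm
antenna_signal = echo + jam                 # time-domain superposition at the aperture
```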
The continuous wave interference dataset contains 77 samples, while each of the other datasets contains 395 samples. For each sample, we simulate the receiver's mixing process in MATLAB according to the interference parameters and collect the time-domain waveforms at the input and output ends, namely the antenna aperture signal and the video-end output. Finally, each dataset is divided in an 8:1:1 ratio into the training, validation, and test sets used in the subsequent analysis.
To evaluate model quality comprehensively, the coefficient of determination R2 and the normalized mean square error (NMSE) are used to measure model accuracy. R2 measures how closely the model's predictions track the actual values, while the NMSE is a common figure of merit for comparing model performance. R2 is calculated as
$$ R^2 = 1 - \frac{\sum_{i=1}^{n} \left( y_{\mathrm{pre}} - y_{\mathrm{true}} \right)^2}{\sum_{i=1}^{n} \left( \overline{y_{\mathrm{true}}} - y_{\mathrm{true}} \right)^2} \quad (11) $$
where y_pre represents the predicted value of the model, y_true the actual observed value, and ȳ_true the average of the actual observed values. The range of R2 is (−∞, 1); the closer R2 is to 1, the closer the predicted waveform is to the actual waveform, indicating better prediction performance.
The calculation method for NMSE is
$$ \mathrm{NMSE} = 10 \lg \frac{\sum_{i=1}^{n} \left( y_{\mathrm{pre}} - y_{\mathrm{true}} \right)^2}{\sum_{i=1}^{n} y_{\mathrm{true}}^2} \quad (12) $$
where y_pre represents the predicted value of the model and y_true the actual observed value.
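Both metrics are simple to compute from a predicted waveform and its target; a minimal NumPy version of Equations (11) and (12) is shown below, with a synthetic example (not taken from the paper's data) for checking.

```python
import numpy as np

def r_squared(y_pre, y_true):
    """Coefficient of determination, Eq. (11)."""
    ss_res = np.sum((y_pre - y_true) ** 2)
    ss_tot = np.sum((np.mean(y_true) - y_true) ** 2)
    return 1.0 - ss_res / ss_tot

def nmse_db(y_pre, y_true):
    """Normalized mean square error in dB, Eq. (12)."""
    return 10.0 * np.log10(np.sum((y_pre - y_true) ** 2) / np.sum(y_true ** 2))

y_true = np.sin(np.linspace(0, 10, 1000))
y_pre = y_true + 0.01 * np.random.randn(1000)     # synthetic "prediction" with small noise
print(f"R2 = {r_squared(y_pre, y_true):.4f}, NMSE = {nmse_db(y_pre, y_true):.2f} dB")
```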

3.2. Model Verification

The experiments were run on a desktop computer with an Intel(R) Core(TM) i5-12600KF CPU (base frequency 3.69 GHz), 16.8 GB of memory, and an NVIDIA GeForce RTX 2060 GPU, using PyTorch version 1.12.0 as the computing framework. After repeated testing, the hyperparameters were set as follows: the TCN has 2 convolutional layers with 8 and 16 filters, dilation factors of 1 and 2, a kernel size of 2, and a dropout rate of 0.2; the AFF module has 16 channels and a scaling factor of 4; and the fully connected layers have 16,000 and 8000 neurons, respectively. During training, we used a batch size of 64, 200 training iterations, the Adam optimizer with a learning rate of 0.0005, and the Mean Square Error (MSE) loss function for training and evaluation.
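A condensed training loop consistent with these settings is sketched below; `model`, `train_loader`, and `val_loader` are assumed to be provided by the pipeline of Section 2 and a dataset built as in Section 3.1.

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, device="cuda", epochs=200, lr=5e-4):
    """Training loop with the hyperparameters reported in Section 3.2 (Adam, MSE, lr 0.0005)."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()

    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:               # x: aperture signal, y: video-end output
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x.to(device)), y.to(device)).item()
                           for x, y in val_loader) / len(val_loader)
        print(f"epoch {epoch + 1:3d}  val MSE = {val_loss:.6f}")
```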
To evaluate the impact of the number of wavelet decomposition levels on the predictive performance of the proposed model, we set the decomposition level to 2, 3, and 4 while keeping the other parameters constant. As shown in Figure 4, with three decomposition levels the average coefficient of determination R2 increases and the prediction accuracy exceeds 97%. Increasing the decomposition level to 4 yields only a slight further improvement. Therefore, weighing model complexity against prediction accuracy, we set the wavelet decomposition level to 3 to balance performance and complexity.
To verify the effectiveness of the proposed model under different interference conditions, Figure 5 presents the time-domain waveforms of the video-end output under the various interferences, with (a) to (f) corresponding to NLFM, QPSK, LFM, CW, BPSK, and BFSK interference, respectively. The ordinate represents the signal amplitude and the abscissa represents time. In Figure 5, the blue solid line is the time-domain signal (target value) output from the video end of the receiver, and the red line is the prediction of the model proposed in this paper. To show the numerical results in more detail, the local intervals marked by the dotted boxes are enlarged in Figure 6 and Figure 7, where the blue and red lines have the same meaning as in Figure 5. It can be seen that the predicted results are very close to the observed results. In (a), the Nonlinear Frequency Modulation (NLFM) interference has a center frequency offset of 3 MHz, a bandwidth of 3 MHz, and an amplitude of 15 dBm. In (b), the Quadrature Phase Shift Keying (QPSK) interference has a center frequency offset of 4 MHz, a bandwidth of 3 MHz, and an amplitude of 5 dBm. In (c), the Linear Frequency Modulation (LFM) interference has a center frequency offset of −5 MHz, a bandwidth of 5 MHz, and an amplitude of 15 dBm. In (d), the Continuous Wave (CW) interference has a center frequency offset of −3 MHz and an amplitude of 5 dBm. In (e), the Binary Phase Shift Keying (BPSK) interference has a center frequency offset of 5 MHz, a bandwidth of 5 MHz, and an amplitude of −5 dBm. Lastly, in (f), the Binary Frequency Shift Keying (BFSK) interference has a center frequency offset of 0 MHz, a bandwidth of 5 MHz, and an amplitude of 5 dBm.
To validate the predictive performance of the proposed model in different electromagnetic interference environments, we selected four common data-driven methods for comparative analysis. Table 2 lists the chosen models and their hyperparameters. The RNN and LSTM configurations follow [20] and [31], respectively, and the CNN is a one-dimensional CNN.
Comparing the TCN and Wavelet-TCN results highlights the contribution of the wavelet transform, while the comparison with the proposed model highlights the contribution of the Attentional Feature Fusion (AFF) introduced in this paper. In the Wavelet-TCN model, the features extracted from the wavelet coefficients at different scales are simply concatenated.
Table 3 shows the results of prediction accuracy for different models on various datasets. The first row lists the different model names, while the first column shows the different datasets. From Table 3, it can be observed that models achieve higher prediction accuracy more easily with continuous wave interference at a single frequency. However, when the modulation of the interference signals becomes complex, the prediction accuracy of the models generally shows a varying degree of decline. The model proposed in this paper demonstrates the highest prediction accuracy across six different datasets.
In the ablation experiments designed in this paper, the predictive accuracy of the TCN surpasses that of the RNN [20], LSTM [31], and CNN networks. After the wavelet transformation, the prediction accuracy of the Wavelet-TCN model increases by about 1% compared with the TCN model. The model proposed in this paper achieves the highest prediction accuracy, further validating its effectiveness in enhancing predictive performance.
Next, we compare the NMSE of the different models' predictions. The results in Table 4 show that the method proposed in this paper achieves the highest accuracy on all test sets. Under continuous wave interference, the average prediction error on the test set is the smallest, at −26.6538 dB.
In addition, we compare the complexity of the different models with that of the proposed method, measured by the time taken to complete training under the same conditions. The results are shown in Table 5. The complexity of the RNN [20] and LSTM [31] models is strongly affected by the sequence length, so their training time is long when the input is long. The CNN model benefits from shared convolution kernel parameters, which keeps its operation simple and efficient and its training time short. The training times of the TCN and W-TCN models fall between those of the CNN and the RNN [20]/LSTM [31]. Owing to the added feature fusion module, the training time of the proposed model is longer than that of the TCN and W-TCN, but it is still shorter than that of the RNN [20] and LSTM [31]. In summary, the proposed model achieves better prediction performance at an acceptable training cost.

3.3. Model Generalization Ability Analysis

In complex electromagnetic environments, the presence of multiple transmitters within a formation can lead to the superposition of multiple interference signals. Existing methods often maintain predictive performance under changing electromagnetic interference environments by generating new datasets and updating the models. However, this approach incurs significant time and personnel costs, and, more importantly, it is impossible to enumerate every potential combination of overlapping electromagnetic interference. Enhancing the generalization ability of models in unlearned interference environments therefore has significant practical importance. This section validates the performance advantage of the proposed method under changing interference environments by comparing the prediction accuracy of different models in unlearned interference environments.
In this study, two types of unlearned interference environments were established: cosine modulated nonlinear frequency modulation combined with binary phase shift keying interference (dataset NLFM + BPSK Jam), and cosine modulated nonlinear frequency modulation combined with binary frequency shift keying interference (dataset NLFM + BFSK Jam). They represent typical combinations of frequency modulation interference with phase shift keying and frequency shift keying interference, respectively. The number of test samples in each interference environment is consistent with that in Section 3.2. The models trained on the datasets of Section 3.2, which contain no samples generated under these two combined interferences, were used to predict the interference response in the new interference environments. The average results are shown in Table 6 and Table 7.
From Table 6, it can be seen that the Wavelet-TCN-AFF model proposed in this paper achieves R2 values of 99.15 and 97.45 on the test sets of the new interference environments. Table 7 further shows that the proposed model has the lowest prediction error. Compared with the RNN [20], LSTM [31], CNN, TCN, and Wavelet-TCN, the proposed model achieves higher prediction accuracy on the unlearned test sets. This means that the proposed model predicts unfamiliar interference signals more accurately in changing, complex electromagnetic environments and has better generalization capability, making it more promising for predicting receiver interference responses in real electromagnetic environments where interference signals change frequently.

4. Conclusions

In this article, we propose a Wavelet-TCN-AFF model for predicting the interference response of a radar receiver. The wavelet decomposition coefficients of different frequency components are obtained by applying the wavelet transform to the receiver antenna aperture signal. We propose a feature extraction model based on temporal convolutional networks and introduce an attention feature fusion mechanism for feature fusion. Finally, a simulation study demonstrates the effectiveness of the proposed framework. The simulation results show that the proposed method has excellent predictive performance for receiver interference response prediction and is superior to existing methods. In addition, the method generalizes better to unlearned interference scenarios and is thus better suited to complex and changing electromagnetic environments.

Author Contributions

Conceptualization, L.Z. and H.T.; data curation, Z.W.; writing—original draft preparation, L.Z.; writing—review and editing, H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Poisel, R. Information Warfare and Electronic Warfare Systems; Artech House: Boston, MA, USA, 2013. [Google Scholar]
  2. Pirich, R.; Basanez, C.; Anumolu, P. Electromagnetic environmental effects modeling, simulation & test validation for cosite mitigation—An overview. In Proceedings of the 2008 IEEE Long Island Systems, Applications and Technology Conference, Farmingdale, NY, USA, 2 May 2008; pp. 1–6. [Google Scholar]
  3. Tang, S.P.; Mao, J.F. Evaluation model and method of margin in electromagnetic environmental effects for complex systems. Chin. J. Electron. 2021, 30, 171–179. [Google Scholar]
  4. Hu, Y.; Ding, W.R.; Liu, C.L. Assessment and prediction of complex electromagnetic environment based on Bayesian network. In Proceedings of the 2017 IEEE International Conference on Unmanned Systems (ICUS), Beijing, China, 27–29 October 2017; pp. 120–125. [Google Scholar]
  5. Lee, D.G.; Mercier, P.P. Noise Analysis of Phase-Demodulating Receivers Employing Super-Regenerative Amplification. IEEE Trans. Microw. Theory Tech. 2017, 65, 3299–3311. [Google Scholar] [CrossRef]
  6. Månsson, D.; Thottappillil, R.; Nilsson, T.; Lundén, O.; Bäckström, M. Susceptibility of civilian GPS receivers to electromagnetic radiation. IEEE Trans. Electromagn. Compat. 2008, 50, 434–437. [Google Scholar] [CrossRef]
  7. Zhu, A.; Pedro, J.C.; Brazil, T. Dynamic Deviation Reduction-Based Volterra Behavioral Modeling of RF Power Amplifiers. IEEE Trans. Microw. Theory Tech. 2006, 54, 4323–4332. [Google Scholar] [CrossRef]
  8. Zhu, A.; Brazil, T.J. Behavioral modeling of RF power amplifiers based on pruned volterra series. IEEE Microw. Wirel. Compon. Lett. 2004, 14, 563–565. [Google Scholar] [CrossRef]
  9. Braithwaite, R.N. Digital predistortion of an RF power amplifier using a reduced volterra series model with a memory polynomial estimator. IEEE Trans. Microw. Theory Tech. 2017, 65, 3613–3623. [Google Scholar] [CrossRef]
  10. Rahati Belabad, A.; Motamedi, S.A.; Sharifian, S. A novel generalized parallel two-box structure for behavior modeling and digital predistortion of RF power amplifiers at LTE applications. Circuits Syst. Signal Process 2018, 37, 2714–2735. [Google Scholar] [CrossRef]
  11. Xu, G.M.; Liu, T.J.; Ye, Y.; Xu, T.F.; Wen, H.F.; Zhang, X.P. Generalized two-box cascaded nonlinear behavioral model for radio frequency power amplifiers with strong memory effects. IEEE Trans. Microw. Theory Tech. 2014, 62, 2888–2899. [Google Scholar] [CrossRef]
  12. Hammi, O.; Ghannouchi, F.M. Twin nonlinear two-box models for power amplifiers and transmitters exhibiting memory effects with application to digital predistortion. IEEE Microw. Wirel. Compon. Lett. 2009, 19, 530–532. [Google Scholar] [CrossRef]
  13. Ghannouchi, F.M.; Hammi, O. Behavioral modeling and predistortion. IEEE Microw. Mag. 2009, 10, 52–64. [Google Scholar] [CrossRef]
  14. Reina, H.; Yoshimasa, E.; Thomas, M.H.; Keiichi, Y. Deep Neural Network-Based Digital Predistorter for Doherty Power Amplifiers. IEEE Microw. Wirel. Compon. Lett. 2019, 29, 146–148. [Google Scholar]
  15. Cai, J.L.; Yu, C.; Sun, L.L.; Chen, S.C.; King, J.B. Dynamic Behavioral Modeling of RF Power Amplifier Based on Time-Delay Support Vector Regression. IEEE Trans. Microw. Theory Tech. 2019, 67, 533–543. [Google Scholar] [CrossRef]
  16. Liu, Y.J.; Boumaiza, S.; Ghannouchi, F.M. Dynamic behavioral modeling of 3G power amplifiers using real-valued time-delay neural networks. IEEE Trans. Microw. Theory Tech. 2004, 52, 1025–1033. [Google Scholar] [CrossRef]
  17. Wang, D.M.; Aziz, M.; Helaoui, M.; Ghannouchi, F.M. Augmented real-valued time-delay neural network for compensation of distortions and impairments in wireless transmitters. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 242–254. [Google Scholar] [CrossRef] [PubMed]
  18. Hu, X.; Liu, Z.J.; Yu, X.F.; Zhao, Y.L.; Chen, W.H.; Hu, B.; Du, X.K.; Li, X.; Helaoui, M.; Wang, W.D.; et al. Convolutional Neural Network for Behavioral Modeling and Predistortion of Wideband Power Amplifiers. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 3923–3937. [Google Scholar] [CrossRef] [PubMed]
  19. Naghibi, Z.; Sadrossadat, S.A.; Safari, S. Adjoint recurrent neural network technique for nonlinear electronic component modeling. Int. J. Circuit Theory Appl. 2022, 50, 1119–1129. [Google Scholar] [CrossRef]
  20. Mahvash, M.A.; Sayed, A.S.; Vali, D. Long short-term memory neural networks for modeling nonlinear electronic components. IEEE Trans. Compon. Packag. Manuf. Technol. 2021, 11, 840–847. [Google Scholar]
  21. Naghibi, Z.; Sadrossadat, S.A.; Safari, S. Time-domain modeling of nonlinear circuits using deep recurrent neural network technique. AEU-Int. J. Electron. Commun. 2019, 100, 66–74. [Google Scholar] [CrossRef]
  22. Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the Association for the Advancement of Artificial Intelligence, Online, 2–9 February 2021. [Google Scholar]
  23. Deng, F.Y.; Yan, B.; Liu, Y.Q.; Yang, S.P. Remaining Useful Life Prediction of Machinery: A New Multiscale Temporal Convolutional Network Framework. IEEE Trans. Instrum. Meas. 2022, 71, 1–13. [Google Scholar] [CrossRef]
  24. Chao, M.; Jiang, X.S.; Wei, X.M.; Wei, T. A Time Convolutional Network Based Outlier Detection for Multidimensional Time Series in Cyber-Physical-Social Systems. IEEE Access 2020, 8, 74933–74942. [Google Scholar]
  25. Su, B.Y.; Ho, K.C.; Rantz, M.J.; Skubic, M. Doppler Radar Fall Activity Detection Using the Wavelet Transform. IEEE Trans. Biomed. Eng. 2015, 62, 865–875. [Google Scholar] [CrossRef] [PubMed]
  26. Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar]
  27. Dai, Y.; Gieseke, F.; Oehmcke, S.; Wu, Y.Q.; Barnard, K. Attentional feature fusion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2021; pp. 3559–3568. [Google Scholar]
  28. Liu, M.; Zhang, Z.; Chen, Y.; Ge, J.; Zhao, N. Adversarial attack and defense on deep learning for air transportation communication jamming. IEEE Trans. Intell. Transp. Syst. 2023, 1–14. [Google Scholar] [CrossRef]
  29. Liu, M.; Liu, Z.; Lu, W.; Chen, Y.; Gao, X.; Zhao, N. Distributed Few-Shot Learning for Intelligent Recognition of Communication Jamming. IEEE J. Sel. Top. Signal Process. 2022, 16, 395–405. [Google Scholar] [CrossRef]
  30. Zhang, H.; Liu, M.; Chen, Y.; Zhao, N. Attacking Modulation Recognition with Adversarial Federated Learning in Cognitive Radio-Enabled IoT. IEEE Internet Things J. 2023. [Google Scholar] [CrossRef]
  31. Roche, J.P.; Friebe, J.; Niggemann, O. Neural Network Modeling of Nonlinear Filters for EMC Simulation in Discrete Time Domain. In Proceedings of the IECON 2021–47th Annual Conference of the IEEE Industrial Electronics Society, Toronto, ON, Canada, 13–16 October 2021; pp. 1–7. [Google Scholar]
Figure 1. Proposed Wavelet-TCN-AFF model block diagram.
Figure 2. TCN structure diagram of the proposed Wavelet-TCN-AFF model.
Figure 3. AFF process of the proposed Wavelet-TCN-AFF model. (a) The internal structure of the AFF block; (b) the feature fusion method.
Figure 4. The influence of different wavelet decomposition layers on the prediction accuracy.
Figure 5. Interference response prediction of the proposed model in different interference situations. (a) NLFM interference response; (b) QPSK interference response; (c) LFM interference response; (d) CW interference response; (e) BPSK interference response; (f) BFSK interference response.
Figure 6. The comparison between the predicted waveform and the actual waveform, magnification 1. (a) Details of NLFM interference response, part 1; (b) details of QPSK interference response, part 1; (c) details of LFM interference response, part 1; (d) details of CW interference response, part 1; (e) details of BPSK interference response, part 1; (f) details of BFSK interference response, part 1.
Figure 7. The comparison between the predicted waveform and the actual waveform, magnification 2. (a) Details of NLFM interference response, part 2; (b) details of QPSK interference response, part 2; (c) details of LFM interference response, part 2; (d) details of CW interference response, part 2; (e) details of BPSK interference response, part 2; (f) details of BFSK interference response, part 2.
Table 1. The configuration of the receiver.

Receiver Parameter            Value
Center frequency              3.2 GHz
Filter bandwidth              5 MHz
Lo1                           1700 MHz
Lo2                           1200 MHz
Front-end gain of receiver    75 dB
Table 2. Network Structure of Different Methods.

Model             Parameter                               Settings
RNN [20]          Num. of input neurons                   8000
                  Hidden layer structure                  [8000, 8000]
                  Num. of output neurons                  8000
LSTM [31]         Num. of input neurons                   8000
                  Num. of neurons in the LSTM layer       8000
                  Num. of output neurons                  8000
CNN               Num. of input neurons                   8000
                  Num. of convolution kernels             [2, 4]
                  Size of convolution kernels             [3, 5]
                  Num. of output neurons                  8000
TCN               Num. of input neurons                   8000
                  Num. of convolution kernels             [8, 16]
                  Size of convolution kernels             2
                  Num. of dilation size                   [2, 4]
                  Activation function                     ReLU
                  Num. of output neurons                  8000
Wavelet-TCN       Num. of wavelet decomposition layers    3
                  Num. of input neurons                   8000
                  Num. of convolution kernels             [8, 16]
                  Size of convolution kernels             2
                  Num. of dilation size                   [2, 4]
                  Activation function                     ReLU
                  Num. of output neurons                  8000
Wavelet-TCN-AFF   Num. of wavelet decomposition layers    3
                  Num. of input neurons                   8000
                  Num. of convolution kernels             [8, 16]
                  Size of convolution kernels             2
                  Num. of dilation size                   [2, 4]
                  Activation function                     ReLU
                  Dilation factors                        [1, 2]
                  Num. of output neurons                  8000
Table 3. Comparison of the Performance of Different Methods.

Dataset   RNN [20]   LSTM [31]   CNN     TCN     W-TCN   Proposed
NLFM      74.20      78.40       66.36   94.68   94.75   98.49
BFSK      78.26      88.35       76.66   96.45   97.97   99.29
CW        85.40      95.70       80.86   96.21   98.63   99.58
BPSK      75.48      77.54       73.38   92.89   91.66   97.50
QPSK      75.21      78.69       69.90   92.47   92.49   97.25
LFM       85.99      90.04       86.24   96.54   98.39   99.31
Table 4. NMSE (dB) Comparison of Prediction Results of Different Models.

Dataset   RNN [20]   LSTM [31]   CNN        TCN        W-TCN      Proposed
NLFM      −10.4006   −11.5640    −8.3757    −13.6545   −17.0597   −22.7598
BFSK      −12.2136   −14.4782    −10.6178   −14.5228   −19.5205   −25.3828
CW        −12.6655   −17.4662    −10.5532   −14.3326   −19.5247   −26.6538
BPSK      −10.2372   −11.7250    −9.6844    −12.6244   −11.7624   −20.3491
QPSK      −10.0212   −10.4211    −8.6167    −12.9668   −12.4582   −20.0446
LFM       −13.9662   −14.7934    −11.7354   −15.3819   −21.8907   −25.0067
Table 5. Comparison of Training Time of Different Models.

Model    RNN [20]    LSTM [31]   CNN        TCN         W-TCN       Proposed
Time     4220.63 s   5763.15 s   840.75 s   1926.25 s   2722.32 s   3717.64 s
Table 6. Comparison of the Performance of Different Methods in the unlearned situation.

Dataset       RNN [20]   LSTM [31]   CNN     TCN     W-TCN   Proposed
NLFM + BFSK   81.64      87.26       69.41   96.03   97.40   99.15
NLFM + BPSK   70.56      73.67       84.75   88.48   91.80   97.45
Table 7. NMSE (dB) Comparison of Different Methods in the unlearned situation.

Dataset       RNN [20]   LSTM [31]   CNN        TCN        W-TCN      Proposed
NLFM + BFSK   −11.6694   −13.4606    −4.7313    −14.1272   −18.3439   −25.2874
NLFM + BPSK   −4.7559    −6.0672     −10.3204   −12.5407   −10.3415   −17.6547
