Cooperative Multi-Node Jamming Recognition Method Based on Deep Residual Network

1 63rd Research Institute, National University of Defense Technology, Nanjing 210007, China
2 School of Electronic Science, National University of Defense Technology, Changsha 410000, China
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(20), 3280; https://doi.org/10.3390/electronics11203280
Submission received: 16 September 2022 / Revised: 6 October 2022 / Accepted: 10 October 2022 / Published: 12 October 2022

Abstract

Anti-jamming is central to the viability of wireless communication in complex electromagnetic environments, and jamming recognition is the precondition and foundation of cognitive anti-jamming. Existing convolutional-network-based jamming recognition methods are limited by their small number of layers and the amount of feature information they can extract, and simply stacking more layers leads to vanishing gradients and a drop in the correct recognition rate. Moreover, most jamming recognition methods are single-node methods, which are easily affected by the channel and achieve a low recognition rate under a low jamming-to-signal ratio (JSR). To solve these problems, this paper proposes a multi-node cooperative jamming recognition method based on deep residual networks and designs two data fusion algorithms for jamming recognition, one based on hard fusion and one on soft fusion. Simulation results show that replacing the original shallow CNN structure with a deep residual network improves the correct recognition rate of jamming signals by 6–14%, and that, compared with the existing single-node method, the hard- and soft-fusion-based methods improve the correct recognition rate by about 3–7% and 5–12%, respectively, under low JSR conditions.

1. Introduction

Due to its openness and broadcast nature, a wireless communication signal transmitted over the wireless channel is susceptible to attack by malicious enemy jamming, which is a major threat to the viability of communication [1]. To address this problem, various anti-jamming techniques have been proposed; jamming recognition is a key part of cognitive anti-jamming and also the basis and prerequisite for its implementation [2,3,4,5]. Jamming recognition methods can be divided into two categories according to whether features are extracted explicitly. The feature-based category relies on hand-crafted descriptors such as the carrier factor coefficient (C), the average frequency-domain flatness coefficient (FSE) and other feature parameters of the jamming [5], time–frequency diagrams of the signal [6], and decision tree classification algorithms [7]; all such methods depend on threshold values that are strongly influenced by the channel environment and are prone to losing useful information. Compared with threshold-based feature analysis, the deep learning category [8,9,10] maps the raw signal directly to deep features, requires no pre-extraction of features and can therefore achieve better recognition performance. The Convolutional Neural Network (CNN) has a multilayer structure suited to image feature extraction and is particularly good at extracting spatial (two-dimensional) features [11], which makes it a natural candidate for jamming and modulation recognition; researchers have accordingly introduced CNNs into these fields, where they achieve leading performance. Moreover, one active research area on jamming concerns the jamming of classical and quantum radar [12]. Researchers have used ResNet-AE [13] and quantum-secured imaging [14] to recognize radar jamming, and multi-node methods have been used to support anti-jamming decisions for radar [15,16]. In future work, we can apply the multi-node method to jamming recognition for classical and quantum radar to improve overall performance compared with the single-node method.
Because most current research concentrates on single-node jamming recognition, whose performance is often unsatisfactory, the authors used a multi-node method based on convolutional neural networks for jamming recognition in a previous work [17], addressing the susceptibility of the single-node method to channel effects and effectively improving the correct recognition rate of jamming at lower JSR. To the authors' knowledge, that was the first paper to apply a multi-node approach to the field of jamming recognition. However, existing CNN jamming recognition methods still suffer from a small number of network layers and limited feature extraction, and as the number of training layers increases, the gradient of the CNN vanishes and the correct recognition rate of jamming decreases.
The shortcut connection [18] was applied early to Hopfield networks by adding a cross-layer loop between two neurons of the neural network, enabling the gradient to skip the current neuron and pass directly to the next one, which aids gradient transfer during training and improves the efficiency of information propagation. Earlier multilayer perceptrons also added shortcut connections between network layers to address gradient disappearance, thus enabling the stacking of more network layers [19]. A deep network structure helps to improve the correct rate of signal classification and recognition. Furthermore, introducing residual connections [20] into the jamming recognition problem helps the gradient propagate to deeper layers, overcoming the limited feature extraction of existing shallow convolutional jamming recognition methods and the gradient disappearance caused by deeper layers.
The simple structure of the shallow CNN jamming recognition method limits the feature information extracted from the received signal, and stacking layers directly causes the correct recognition rate to decrease. To solve this problem, this paper adds residual connections to the CNN and proposes a multi-node cooperative jamming recognition method based on deep residual networks, with reference to the cooperative spectrum sensing scheme [21]. The method uses multiple nodes to jointly perceive the jamming signal and a deep residual network to recognize it, and two data fusion schemes, one based on hard fusion and one on soft fusion, are proposed for the central node.

2. System Model

Assume that the multi-node cooperative jamming recognition network consists of one central node and M cooperative cognitive nodes [22,23], as shown in Figure 1. After receiving the signals from the M cognitive nodes, the central node trains a network for recognition. The signal yi(n) received by the i-th cognitive node can be expressed as:
$$y_i(n) = S_i(n) + J_i(n) + v_i(n), \quad n = 1, 2, \ldots, N \tag{1}$$
Here Si(n) is the communication signal, Ji(n) is the jamming signal, and vi(n) is the noise, which obeys a complex Gaussian distribution with mean 0 and variance σ2; N is the number of signal sampling points.
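For concreteness, the NumPy sketch below generates one cognitive node's received sequence according to Equation (1), as the sum of a QPSK communication signal, a single-tone jamming signal and complex Gaussian noise. It is only an illustrative sketch: the helper name received_signal, the one-symbol-per-sample QPSK model and the jammer frequency offset are assumptions, while fs, fc, N and the SNR follow the simulation settings reported in Section 4.

```python
import numpy as np

def received_signal(N=512, fs=10e6, fc=2.5e6, jsr_db=-6.0, snr_db=-5.0, rng=None):
    """Sketch of Eq. (1): y_i(n) = S_i(n) + J_i(n) + v_i(n).

    fs = 10 MHz, fc = 2.5 MHz, N = 512 and SNR = -5 dB follow the paper's
    simulation setup; the single-tone jammer offset is an illustrative assumption.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = np.arange(N)

    # QPSK communication signal S_i(n) on carrier fc (unit average power).
    bits = rng.integers(0, 4, size=N)                 # one symbol per sample (simplified)
    qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
    s = qpsk * np.exp(2j * np.pi * fc / fs * n)

    # Single-tone jamming J_i(n) near fc, scaled to the requested JSR.
    j = np.exp(2j * np.pi * (fc + 0.1e6) / fs * n)
    j *= np.sqrt(10 ** (jsr_db / 10))                 # JSR relative to the unit-power signal

    # Complex Gaussian noise v_i(n) scaled to the requested SNR.
    noise_power = 10 ** (-snr_db / 10)
    v = np.sqrt(noise_power / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

    return s + j + v
```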
The vector of sample points received by the M cognitive nodes at moment n can be expressed as: yn = [y1(n), y2(n), …, yM(n)]T [24,25]. The received information is aggregated to the central node and can be written as y = [y1, y2, …, yN], which is composed of the information of M cognitive nodes in the cognitive time period. It can also be presented as the received signal matrix, i.e.,
$$\mathbf{y} = [\,\mathbf{y}_1\ \mathbf{y}_2\ \cdots\ \mathbf{y}_N\,] = \begin{bmatrix} y_1(1) & y_1(2) & \cdots & y_1(N) \\ y_2(1) & y_2(2) & \cdots & y_2(N) \\ \vdots & \vdots & \ddots & \vdots \\ y_M(1) & y_M(2) & \cdots & y_M(N) \end{bmatrix}_{M \times N} \tag{2}$$
The communication signal in this paper is a QPSK signal, and the channel is an additive white Gaussian noise channel. After receiving the information from each cooperative cognitive node, the central node aggregates it into the input of the neural network. The input layer performs an FFT of the received signal matrix, i.e., Y = FFT{y}; the real and imaginary parts of the FFT of the received signal matrix can then be expressed as:
$$\mathbf{Y}_I = \begin{bmatrix} I_1(1) & I_1(2) & \cdots & I_1(N) \\ I_2(1) & I_2(2) & \cdots & I_2(N) \\ \vdots & \vdots & \ddots & \vdots \\ I_M(1) & I_M(2) & \cdots & I_M(N) \end{bmatrix}_{M \times N} \tag{3}$$
$$\mathbf{Y}_Q = \begin{bmatrix} Q_1(1) & Q_1(2) & \cdots & Q_1(N) \\ Q_2(1) & Q_2(2) & \cdots & Q_2(N) \\ \vdots & \vdots & \ddots & \vdots \\ Q_M(1) & Q_M(2) & \cdots & Q_M(N) \end{bmatrix}_{M \times N} \tag{4}$$
YI and YQ are fed to the neural network model as two "pages", i.e., the network input is a two-page matrix. Compared with feeding the I and Q information into the network row by row, the page-wise input makes better use of the interrelationship between the I and Q components during training and provides more information, which improves the performance of the trained network [26].
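The following sketch illustrates how the M × N received matrix can be turned into the two-page network input of Equations (3) and (4); the function name and the channel-last array layout are assumptions rather than the authors' actual preprocessing code.

```python
import numpy as np

def build_two_page_input(y):
    """y: complex received matrix of shape (M, N), one row per cognitive node.

    Returns a real array of shape (M, N, 2): page 0 holds Y_I = Re{FFT{y}},
    page 1 holds Y_Q = Im{FFT{y}}, matching Equations (3) and (4).
    """
    Y = np.fft.fft(y, axis=1)                 # FFT of each node's N-point sequence
    return np.stack((Y.real, Y.imag), axis=-1)

# Example: M = 6 cooperative nodes, N = 512 samples each
# (using the illustrative received_signal sketch above).
# y = np.vstack([received_signal() for _ in range(6)])
# x = build_two_page_input(y)                # shape (6, 512, 2)
```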
Finally, the trained network is used to classify the six malicious jamming signals. In this paper, the six common malicious jamming styles [27] to be classified are single-tone jamming, multi-tone jamming, narrow-band jamming, broad-band jamming, comb jamming and swept jamming. The amplitude spectra of the six jamming signals at a Jammer-to-Noise Ratio (JNR) of 10 dB are given in Figure 2.

3. Cooperative Multi-Node Jamming Recognition Method Based on Deep Residual Network

Existing CNN jamming recognition methods generally use the LeNet-5 network structure [6]. Although a CNN with more layers can learn richer features, common CNN models are relatively simple, and directly stacking more layers causes the gradient to vanish and the correct recognition rate of the jamming signal to drop. Hence, this paper adds residual connections to the existing CNN and proposes a multi-node cooperative jamming recognition method based on deep residual neural networks, with reference to a multi-node cooperative spectrum sensing scheme. The process of the method is shown in Figure 3. The jamming signals are perceived by multiple cooperative perception nodes and transmitted back to the central node for data fusion and training of the neural network. The central node then receives the current jamming signal information and uses the trained neural network for jamming recognition.

3.1. Residual Connections

Gradient disappearance occurs from time to time when training deeper CNNs, while residual learning can alleviate the vanishing-gradient problem to some extent. Suppose that, in a deep network, we expect a non-linear unit (which can be one or more convolutional layers) f(x; θ) to approximate an objective function h(x). The objective function can be split into two parts: the identity function x and the residual function h(x) − x:
$$h(x) = x + \big(h(x) - x\big) \tag{5}$$
According to the universal approximation theorem, a nonlinear unit constituted by a neural network has sufficient capacity to approximate either the original objective function or the residual function, but, in practice, the latter is easier to learn [20]. Thus, the original optimisation problem can be transformed: let the non-linear unit f(x; θ) approximate the residual function h(x) − x and use f(x; θ) + x to approximate the objective function h(x).
An example of a typical residual module is given in Figure 4. The residual module consists of multiple cascaded convolutional layers and a shortcut edge across those layers, followed by an activation function that produces the output; the shortcut connection bypasses the convolutional layers within the module. A residual network is a deep network consisting of several residual modules connected in series, similar to a highway network. To facilitate analysis without loss of generality, a simplified structure of a residual network with five residual modules is given in Figure 5.
The residual network learns features from a shallow layer l to a deep layer L via shortcut connections, which can be expressed as:
$$x_L = x_l + \sum_{i=l}^{L-1} F(x_i, \{W_i\}) \tag{6}$$
Using the chain rule of differentiation, the gradient in the backpropagation process can be expressed as:
$$\frac{\partial loss}{\partial x_l} = \frac{\partial loss}{\partial x_L}\,\frac{\partial x_L}{\partial x_l} = \frac{\partial loss}{\partial x_L}\left(1 + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} F(x_i, \{W_i\})\right) \tag{7}$$
where ∂loss/∂x_L denotes the gradient of the loss function at layer x_L, and the "1" in Equation (7) indicates that the shortcut connection passes the gradient without loss, while the remaining residual gradient cannot propagate directly and must pass from layer l to layer (L − 1) through the convolutional layers with weights W_i. Even if the residual gradient is close to 0, the presence of the "1" prevents the gradient of the residual network from vanishing. This is why adding residual connections to a deep network allows the gradient to propagate deeper and allows very deep networks to be trained.
In the forward propagation process of neural network training, there is an error between the output of the residual network and the actual output, and the loss function can be expressed as:
$$Loss = \frac{1}{2}\sum_{i=1}^{c}\left(s_{train}^{(n)}(i) - \hat{s}_{train}^{(n)}(i)\right)^2 \tag{8}$$
where c denotes the number of jamming classifications, s t r a i n ( n ) denotes the true jamming classification label, and s ^ t r a i n ( n ) denotes the predicted jamming classification label.
In the backpropagation process, the gradient of the last layer k can be expressed as:
$$\frac{\partial loss}{\partial x_k} = \frac{\partial}{\partial x_k}\left(\frac{1}{2}\sum_{i=1}^{c}\left(s_{train}^{(n)}(i) - \hat{s}_{train}^{(n)}(i)\right)^2\right) = \left|s_{train}^{(n)}(i) - \hat{s}_{train}^{(n)}(i)\right|\frac{\partial \hat{s}_{train}^{(n)}(i)}{\partial x_k} = e_i\,\frac{\partial f(x)}{\partial x}\bigg|_{x=x_k} \tag{9}$$
Substituting l = 0, 3, 6, …, k − 3 (with L = l + 3) into Equation (7) in turn and combining the result with Equation (9), the gradient at x0 can be deduced as:
$$\frac{\partial loss}{\partial x_0} = e_i\,\frac{\partial f(x)}{\partial x}\bigg|_{x=x_k}\left(1 + \frac{\partial}{\partial x_0}\sum_{i=0}^{2} F(x_i, \{W_i\})\right)\left(1 + \frac{\partial}{\partial x_3}\sum_{i=3}^{5} F(x_i, \{W_i\})\right)\cdots\left(1 + \frac{\partial}{\partial x_{k-3}}\sum_{i=k-3}^{k-1} F(x_i, \{W_i\})\right) \tag{10}$$
In addition, we can use a similar approach to derive the gradient of x0 in a CNN without residual connections as:
$$\frac{\partial loss}{\partial x_0} = \frac{\partial loss}{\partial x_k}\,\frac{\partial x_k}{\partial x_{k-1}}\,\frac{\partial x_{k-1}}{\partial x_{k-2}}\cdots\frac{\partial x_1}{\partial x_0} = e_i\,\frac{\partial f(x)}{\partial x}\bigg|_{x=x_k}\,\frac{\partial f(W_k x_{k-1})}{\partial x_{k-1}}\,\frac{\partial f(W_{k-1} x_{k-2})}{\partial x_{k-2}}\cdots\frac{\partial f(W_1 x_0)}{\partial x_0} = e_i\,\frac{\partial f(x)}{\partial x}\bigg|_{x=x_k}\,W_k W_{k-1}\cdots W_1 \tag{11}$$
Comparing Equations (10) and (11), it can be concluded that as the number of layers of the network increases, the gradient values of each convolutional layer are cumulatively multiplied in the CNN without residual connections, causing the gradient reaching x0 to become smaller and smaller until it disappears. In contrast, the presence of the coefficient “1” in the network with residual connections does not lead to the disappearance of the gradient even if the value obtained by cumulative multiplication of the weights W of each convolutional layer is close to 0.
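This effect can be illustrated numerically. In the toy sketch below (the per-layer weight magnitude 0.5, the residual gradient −0.05 and the depth of 30 are arbitrary illustrative values), the plain chain of Equation (11) yields a product of small weights that collapses towards zero, whereas the residual chain of Equation (10) multiplies terms of the form (1 + small residual gradient) and stays of order one.

```python
import numpy as np

depth = 30                      # number of stacked layers / residual terms (assumed)
w = np.full(depth, 0.5)         # plain-chain per-layer gradient factors |W_k| < 1 (assumed)
r = np.full(depth, -0.05)       # per-block residual gradients, close to 0 (assumed)

plain_gradient = np.prod(w)            # Eq. (11): shrinks geometrically, ~9e-10
residual_gradient = np.prod(1.0 + r)   # Eq. (10): remains of order 1, ~0.21

print(f"plain chain:    {plain_gradient:.2e}")
print(f"residual chain: {residual_gradient:.2e}")
```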

3.2. Residual Network Structure

The residual network designed in this paper is composed of an input layer, convolutional layers (conv), batch normalization (BN) layers, ReLU layers, a fully connected layer (fc), a classification layer and residual connections. Figure 6 shows the structure of the two residual modules; when the feature-map size changes, the shortcut branch of the residual module uses a 1 × 1 convolutional layer together with BN to match the dimensions. In the two residual modules, "conv" denotes a convolutional layer, "K" denotes the number of convolution kernels, "S" denotes equal-width ("same") convolution and "/2" denotes downsampling by a factor of 2. Residual module (a) performs downsampling, while residual module (b) performs equal-width convolution.
The structural parameters of this residual network are given in Table 1, where block1 indicates the use of structure (a) in Figure 6, and block2 indicates the use of structure (b).
Some of the parameters for the training of the neural network in this paper are given in Table 2.
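To make the structure in Table 1 and Figure 6 concrete, the following PyTorch sketch assembles a comparable 1-D residual network. It is an illustrative reconstruction rather than the authors' implementation: the two-channel input (for the I/Q pages), the padding choices, the ReLU between the fully connected layers and the use of cross-entropy loss in place of an explicit softmax layer are assumptions.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """1-D residual block. downsample=True mirrors variant (a) in Figure 6
    (stride-2 convolution, with a 1 x 1 convolution + BN on the shortcut);
    downsample=False mirrors the equal-width variant (b)."""
    def __init__(self, in_ch, out_ch, downsample=False):
        super().__init__()
        stride = 2 if downsample else 1
        self.body = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
        )
        self.shortcut = (
            nn.Sequential(nn.Conv1d(in_ch, out_ch, kernel_size=1, stride=stride),
                          nn.BatchNorm1d(out_ch))
            if downsample or in_ch != out_ch else nn.Identity()
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))

class JammingResNet(nn.Module):
    """Skeleton loosely following Table 1: stem convolutions (15 x 1 and 7 x 1),
    max pooling, five residual blocks, average pooling and the fc/softmax head."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(2, 48, kernel_size=15, padding=7), nn.BatchNorm1d(48), nn.ReLU(),
            nn.Conv1d(48, 64, kernel_size=7, padding=3), nn.BatchNorm1d(64), nn.ReLU(),
            nn.MaxPool1d(kernel_size=3, stride=2, padding=1),
        )
        self.blocks = nn.Sequential(
            ResidualBlock(64, 64, downsample=True),    # Residual-block1 (64)
            ResidualBlock(64, 64),                     # Residual-block2 (64)
            ResidualBlock(64, 128, downsample=True),   # Residual-block1 (128)
            ResidualBlock(128, 128),                   # Residual-block2 (128)
            ResidualBlock(128, 128),                   # Residual-block2 (128)
        )
        self.pool = nn.AvgPool1d(kernel_size=32, stride=32)
        self.head = nn.Sequential(
            nn.Flatten(),               # 128 channels x length 2 -> 256 features
            nn.Linear(256, 32),
            nn.ReLU(),                  # activation between fc layers is an assumption
            nn.Linear(32, num_classes)  # logits; softmax is applied inside the loss
        )

    def forward(self, x):               # x: (batch, 2, 512) two-page I/Q input
        return self.head(self.pool(self.blocks(self.stem(x))))
```

With a (batch, 2, 512) input, the intermediate sequence lengths follow Table 1: 512 → 256 after the stem pooling, 128 and 64 inside the residual stages, then a flattened vector of 256, the fc × 32 layer and the 6-way output. Training would use the hyperparameters of Table 2 with the Adam optimizer.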

3.3. Multi-Node Cooperative Jamming Recognition Method

In single-node jamming recognition methods, the recognition results tend to deteriorate due to channel effects. Cooperative spectrum sensing uses multiple cooperative sensing nodes to overcome this problem. Referring to this scheme and its data fusion scheme [28,29,30,31], two multi-node jamming recognition schemes are proposed in this paper, using two different data fusion algorithms, namely, the hard-fusion-based and the soft-fusion-based cooperative jamming recognition methods.
In the hard-fusion-based multi-node cooperative jamming recognition method, the recognition network consists of one central node and M cooperative cognitive nodes, and the algorithm can be divided into three phases. In the training phase, each of the M cognitive nodes perceives the jamming signal, and the central node uses this perception information to train a neural network and distributes the network parameters to the individual cognitive nodes. In the test phase, the M cognitive nodes use the received neural network to recognize the current jamming signal, generate the recognition results Hi and transmit them back to the central node. In the data fusion phase, the central node derives the global jamming recognition result Hw based on the majority judgment criterion, which can be expressed as:
$$H_w = \Psi(H_1, H_2, \ldots, H_M) \tag{12}$$
where the function Ψ returns the most frequently occurring value (the first such value in the case of a tie). The hard-fusion-based multi-node jamming recognition method is summarized in Table 3.
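As a concrete illustration of Equation (12), the minimal sketch below performs the majority judgment over the per-node labels; the integer labels 0–5 standing for the six jamming types are an assumed encoding.

```python
from collections import Counter

def hard_fusion(decisions):
    """Majority judgment over the per-node labels H_1, ..., H_M (Eq. (12)).
    Counter.most_common keeps first-encountered order among equal counts, so
    ties resolve to the first value with the most occurrences."""
    return Counter(decisions).most_common(1)[0][0]

# Example: 6 cooperative nodes, labels 0-5 for the six jamming types.
print(hard_fusion([2, 2, 3, 2, 5, 3]))   # -> 2
```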
In the soft-fusion-based multi-node cooperative jamming recognition method, the structure of the recognition network is the same as in the hard-fusion case, and the algorithm is likewise divided into three phases. The difference lies in the test phase, where each cooperative cognitive node makes an independent judgement based on the neural network and returns a recognition vector Vi whose length equals the number of jamming types to be recognized, which can be expressed as:
$$V_i = [\,V_i(1)\ V_i(2)\ V_i(3)\ V_i(4)\ V_i(5)\ V_i(6)\,], \quad 1 \le i \le M \tag{13}$$
where Vi(j) denotes the probability that the co-cognitive node considers the current jamming signal to be the jth type of jamming, and each component satisfies the following relation:
$$\sum_{j=1}^{6} V_i(j) = 1, \quad 1 \le i \le M \tag{14}$$
In the data fusion phase, after the central node receives the information from each cooperative cognitive node, all vectors are added to produce a global judgment vector Vw, and the jamming type with the highest probability in Vw is taken as the global recognition result Hw, i.e.,
$$H_w = \mathrm{MAX}(V_w) \tag{15}$$
The soft-fusion-based multi-node jamming recognition method is summarized in Table 4.
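A matching sketch of the soft-fusion rule of Equations (13)–(15) is given below: each node returns a length-6 probability vector, the central node sums them into Vw, and the index of the largest component is taken as the global result Hw (the example probabilities are made up for illustration).

```python
import numpy as np

def soft_fusion(prob_vectors):
    """prob_vectors: array of shape (M, 6); each row V_i sums to 1 (Eq. (14)).
    Returns the index of the jamming type with the highest fused probability."""
    v_w = np.sum(prob_vectors, axis=0)   # global judgment vector V_w
    return int(np.argmax(v_w))           # H_w = MAX(V_w), Eq. (15)

# Example with M = 3 nodes.
V = np.array([[0.1, 0.6, 0.1, 0.1, 0.05, 0.05],
              [0.2, 0.5, 0.1, 0.1, 0.05, 0.05],
              [0.3, 0.2, 0.3, 0.1, 0.05, 0.05]])
print(soft_fusion(V))                    # -> 1
```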

4. Simulation Result and Analysis

4.1. Parameter Settings

Matlab simulations were used to generate QPSK-modulated communication signals and six common jamming signals as the training and test sets for the neural network. In the simulation, the sampling rate fs is 10 MHz, the number of sampling points is 512, the QPSK carrier frequency fc is 2.5 MHz, the channel model is additive white Gaussian noise, the SNR is −5 dB, and the frequency bands of the jamming signals are all located near the QPSK signal. The JSR of the training set ranges from −16 dB to 10 dB with an interval of 2 dB, with 100 samples per jamming type at each JSR. The JSR of the test set ranges from −16 dB to 10 dB with an interval of 2 dB, with 500 samples per jamming type at each JSR. The parameters of the jamming signals are shown in Table 5.
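The sketch below outlines how such a labeled training set could be assembled over the JSR grid. The six jamming generators are simplified stand-ins for the randomized parameters of Table 5, the label encoding is assumed, and for brevity only the (power-normalized) jamming waveforms are produced; in the paper's setup each sample would additionally contain the QPSK signal and channel noise.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, fc, N = 10e6, 2.5e6, 512          # sampling rate, carrier, samples (per the paper)
n = np.arange(N)

def tone(f):                           # complex exponential at frequency f
    return np.exp(2j * np.pi * f / fs * n)

# Simplified stand-ins for the six jamming styles of Table 5 (parameter ranges
# are illustrative assumptions, not the paper's exact randomized settings).
jammers = {
    0: lambda: tone(fc + rng.uniform(-0.5e6, 0.5e6)),                                    # single-tone
    1: lambda: sum(tone(fc + f) for f in rng.uniform(-1e6, 1e6, rng.integers(2, 6))),    # multi-tone
    2: lambda: np.convolve(rng.standard_normal(N) * tone(fc), np.ones(8) / 8, "same"),   # narrow-band
    3: lambda: rng.standard_normal(N) + 1j * rng.standard_normal(N),                     # broad-band
    4: lambda: sum(tone(fc + k * 0.3e6) for k in range(-2, 3)),                          # comb
    5: lambda: np.exp(2j * np.pi * (fc * n / fs + 0.5 * 1e10 * (n / fs) ** 2)),          # swept (chirp)
}

def make_dataset(jsr_grid_db, samples_per_class):
    X, y = [], []
    for jsr_db in jsr_grid_db:
        for label, gen in jammers.items():
            for _ in range(samples_per_class):
                j = gen()
                # scale the jamming power to the requested JSR (unit-power signal assumed)
                j = j * np.sqrt(10 ** (jsr_db / 10)) / np.sqrt(np.mean(np.abs(j) ** 2))
                X.append(j)
                y.append(label)
    return np.array(X), np.array(y)

# Training grid: JSR from -16 dB to 10 dB in 2 dB steps, 100 samples per class.
X_train, y_train = make_dataset(np.arange(-16, 12, 2), 100)
```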

4.2. Performance Analysis

Four sets of simulation experiments are designed to verify the performance of the proposed algorithm. In the experiments, the average correct recognition rate over the six types of jamming signals is used as the main performance index.
Experiment 1: Advantages of residual networks with different layers over shallow CNNs.
Figure 7 compares the performance of residual networks with different numbers of layers and the CNN under the single-node recognition method (the number of co-cognitive nodes is 6). The figure shows that residual networks with any number of layers outperform the CNN. The average correct recognition rate of the residual network is highest when the number of network layers is 18; the correct recognition rate first increases with the number of layers and then starts to decrease beyond about 20 layers. This is because, when the number of layers is small, adding layers helps to extract more features, which increases the correct recognition rate. When the network is deeper, although additional layers enable the network to extract more features, they also increase the training parameters, the training time and the complexity of the recognition model; too deep a network overfits during training, which reduces the correct recognition rate. In this paper, the correct recognition rate peaked at 18 layers, so the 18-layer residual network was chosen as the jamming recognition model.
To further compare the two networks, Table 6 shows their average correct recognition rates under the single-node scheme when both networks have 18 layers. It can be seen in Figure 8 that the residual network effectively improves the correct recognition rate, and the improvement is more pronounced at lower JSR, with the average correct recognition rate improving by approximately 6–12%. This is because the additional residual connections allow more efficient training and gradient transfer, resulting in better performance of the trained network.
Experiment 2: Performance comparison of jamming recognition under different numbers of nodes in the hard-fusion based method.
Figure 9 gives the performance curves of the hard-fusion based method with different numbers of co-cognitive nodes. It can be seen from the figure that the correct recognition rates of the hard-fusion based method are all high; at JSR = −5 dB, all curves achieve a correct recognition rate of over 90%. Comparing the curves for different numbers of nodes, the correct recognition rate gradually increases as the number of nodes increases, and the improvement from adding nodes is more obvious at low JSR. This is because, as the number of nodes increases, the cognitive nodes receive more samples of the jamming signal and return more information to the central node, enabling a more comprehensive judgement and, ultimately, better network performance. The correct recognition rate also increases with the JSR, because a higher JSR means more jamming energy, making it easier for the network to extract the jamming signal information and make a correct recognition.
Experiment 3: Performance comparison of jamming recognition under different numbers of nodes in the soft-fusion based method.
The performance curves of the soft-fusion based method with different numbers of nodes are given in Figure 10. Similar to the hard-fusion based method, the correct recognition rate of every curve reaches over 90% at JSR = −5 dB. In addition, as the number of nodes increases, the correct recognition rate of the soft-fusion based method gradually increases.
Experiment 4: Performance comparison of the hard-fusion based and soft-fusion based multi-node cooperative jamming recognition.
Figure 11 gives the performance curves of the residual network and the existing CNN under both the soft-fusion based and hard-fusion based methods when the number of co-cognitive nodes is six. It can be observed that the residual network outperforms the CNN under both fusion methods, and the soft-fusion based method outperforms the hard-fusion based method overall. This is because the residual connections allow better gradient and training-information transfer in the deeper network, resulting in better performance of the trained network, while the soft-fusion based method fuses more information than the hard-fusion based method and takes greater advantage of multi-node jamming perception. However, the soft-fusion based method also increases the amount of transmitted information, placing higher demands on the data processing capability of the central node and adding system complexity.
Under the conditions of Experiment 4, to examine how well the two networks and data fusion methods identify each type of jamming, confusion matrices were produced for the CNN under the hard-fusion based method (Figure 12) and for the residual network under both fusion methods (Figure 13 and Figure 14), with six co-cognitive nodes and JSR = −6 dB. Overall, the confusion matrices are consistent with Figure 11: the residual network outperforms the CNN, and the soft-fusion based method outperforms the hard one. The misclassifications mainly involve confusion between narrow-band jamming and broad-band jamming, because the spectra of these two types are similar at low JSR, which makes it difficult for the neural network to distinguish their characteristics. Comparing Figure 12 with Figure 13, the residual network significantly reduces the misclassification probability and mitigates the confusion between different jamming types; incorrect judgements occur mainly for broad-band jamming, while the remaining types are recognized correctly at close to 100%. Comparing Figure 13 with Figure 14, the soft-fusion based method further improves performance, reducing the misclassification probability of broad-band jamming from 68.2% to 56% compared with the hard-fusion based method.

5. Conclusions

In this paper, we investigated a cooperative multi-node jamming recognition method based on deep residual networks and proposed two judgement algorithms, one based on hard fusion and one on soft fusion. Multiple nodes cooperatively perceive the jamming signal, the central node trains the neural network, and the jamming signal is then recognized using either the hard- or the soft-fusion method. Simulation results show that replacing the original shallow CNN structure with a deep residual network improves the correct recognition rate of jamming signals by 6–14%, and that the hard-fusion based method improves the correct recognition rate by about 3–7% under low JSR conditions compared with the single-node method. The soft-fusion based method further improves performance by about 2–5% over the hard-fusion based method, at the cost of relatively higher system complexity and processing time.

Author Contributions

Conceptualization, J.S. and Y.L.; methodology, J.S. and Y.L.; software, J.S.; validation, J.S., L.W. and Y.Z.; formal analysis, J.S. and Y.L.; investigation, J.S. and L.W.; resources, Y.L.; data curation, J.S. and L.W.; writing—original draft preparation, J.S.; writing—review and editing, J.S., Y.L., L.W. and Y.Z.; visualization, J.S.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation of China (NSFC grants: U19B214, 61901502), the Foundation Strengthening Plan Area Fund (grants: 2019-JCJQ-JJ-212, 2019-JCJQ-JJ226) and the Research Project of the National University of Defense Technology (grants: 18-QNCXJ-029).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yao, F. Communication Anti-Interference Engineering and Practice; Electronic Industry Press: Beijing, China, 2012.
  2. Lang, B.; Gong, J. JR-TFViT: A Lightweight Efficient Radar Jamming Recognition Network Based on Global Representation of the Time–Frequency Domain. Electronics 2022, 11, 2794.
  3. Choi, H.; Park, S.; Lee, H. Covert Anti-Jamming Communication Based on Gaussian Coded Modulation. Appl. Sci. 2021, 11, 3759.
  4. Jia, L.; Xu, Y.; Sun, Y.; Feng, S.; Anpalagan, A. Stackelberg game approaches for anti-jamming defence in wireless networks. IEEE Wirel. Commun. 2018, 25, 120–128.
  5. Xuan, Y.; Shen, Y.; Shin, I.; Thai, M.T. On Trigger Detection against Reactive Jamming Attacks: A Clique-Independent Set Based Approach. In Proceedings of the PERFORMANCE Computing and Communications Conference, Scottsdale, AZ, USA, 14–16 December 2009; pp. 223–230.
  6. Yang, X.; Ruan, H. A Recognition Method of Deception Jamming Based on Image Zernike Moment Feature of Time-Frequency Distribution. Mod. Radar 2018, 40, 91–95.
  7. Fang, F.; Li, Y.; Niu, Y. Jamming signal recognition based on decision tree algorithm. Commun. Technol. 2019, 52, 2617–2623.
  8. Julian, W.; Kazuto, Y.; Norisato, S.; Yafei, H.; Eiji, N.; Toshihide, H.; Abolfazl, M.; Yoshinori, S. WLAN Interference Identification Using a Convolutional Neural Network for Factory Environments. J. Commun. 2021, 16, 276–283.
  9. Lan, X.; Wan, T.; Jiang, K.; Xiong, Y.; Tang, B. Intelligent Recognition of Chirp Radar Deceptive Jamming Based on Multi-Pulse Information Fusion. Sensors 2021, 21, 2693.
  10. Zhou, H.; Dong, C.; Wu, R.; Xu, X.; Guo, Z. Feature Fusion Based on Bayesian Decision Theory for Radar Deception Jamming Recognition. IEEE Access 2021, 9, 16296–16304.
  11. Yu, C.; Zhao, M.; Song, M.; Wang, Y.; Li, F.; Han, R.; Chang, C.-I. Hyperspectral image classification method based on CNN architecture embedding with hashing semantic feature. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1866–1881.
  12. Lu, S.; Meng, Z.; Huang, J.; Yi, M.; Wang, Z. Study on Quantum Radar Detection Probability Based on Flying-Wing Stealth Aircraft. Sensors 2022, 22, 5944.
  13. Cheng, D.; Fan, Y.; Fang, S.; Wang, M.; Liu, H. ResNet-AE for Radar Signal Anomaly Detection. Sensors 2022, 22, 6249.
  14. Malik, M.; Magaña-Loaiza, O.S.; Boyd, R.W. Quantum-secured imaging. Appl. Phys. Lett. 2012, 101, 241103.
  15. You, S.; Diao, M.; Gao, L. Implementation of a combinatorial-optimisation-based threat evaluation and jamming allocation system. IET Radar Sonar Navig. 2019, 13, 1636–1645.
  16. Zhao, S.; Yi, M.; Liu, Z. Cooperative Anti-Deception Jamming in a Distributed Multiple-Radar System under Registration Errors. Sensors 2022, 22, 7216.
  17. Shen, J.; Li, Y.; Shi, Y.; An, K. Multi-node Cooperative Jamming Recognition Method Based on Deep Convolutional Neural Network. Radio Commun. Technol. 2022, 48, 711–717.
  18. Zhao, Y.; Zhang, X.; Feng, W.; Xu, J. Deep Learning Classification by ResNet-18 Based on the Real Spectral Dataset from Multispectral Remote Sensing Images. Remote Sens. 2022, 14, 4883.
  19. Li, X.-X.; Li, D.; Ren, W.-X.; Zhang, J.-S. Loosening Identification of Multi-Bolt Connections Based on Wavelet Transform and ResNet-50 Convolutional Neural Network. Sensors 2022, 22, 6825.
  20. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  21. Nair, R.G.; Narayanan, K. Cooperative spectrum sensing in cognitive radio networks using machine learning techniques. Appl. Nanosci. 2022.
  22. Sarikhani, R.; Keynia, F. Cooperative Spectrum Sensing Meets Machine Learning: Deep Reinforcement Learning Approach. IEEE Commun. Lett. 2020, 24, 1459–1462.
  23. Xie, J.; Fang, J.; Liu, C.; Li, X. Deep Learning-Based Spectrum Sensing in Cognitive Radio: A CNN-LSTM Approach. IEEE Commun. Lett. 2020, 24, 2196–2200.
  24. Zhou, F.; Beaulieu, N.; Li, Z.; Si, J. Feasibility of maximum eigenvalue cooperative spectrum sensing based on Cholesky factorisation. IET Commun. 2016, 10, 199–206.
  25. Thilina, K.; Choi, K.; Saquib, N.; Hossain, E. Machine learning techniques for cooperative spectrum sensing in cognitive radio networks. IEEE J. Sel. Areas Commun. 2013, 31, 2209–2221.
  26. Liu, X.; Yang, D.; Gamal, A. Deep Neural Network Architectures for Modulation Classification. In Proceedings of the 2017 51st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 29 October–1 November 2017.
  27. Niu, Y.; Yao, F.; Chen, J. Fuzzy Jamming Pattern Recognition Based on Statistic Parameters of Signal's PSD. J. China Ordnance 2011, 7, 15–23.
  28. Gai, J.; Xue, X.; Wu, J. Cooperative Spectrum Sensing Method Based on Deep Convolutional Neural Network. J. Electron. Inf. 2021, 43, 2911–2919.
  29. Valadão, M.; Amoedo, D.; Costa, A.; Carvalho, C.; Sabino, W. Deep Cooperative Spectrum Sensing Based on Residual Neural Network Using Feature Extraction and Random Forest Classifier. Sensors 2021, 21, 7146.
  30. Janu, D.; Singh, K.; Kumar, S. Machine learning for cooperative spectrum sensing and sharing: A survey. Trans. Emerg. Telecommun. Technol. 2021, 33, e4352.
  31. Woongsup, L.; Minhoe, K.; Dong-Ho, C. Deep Cooperative Sensing: Cooperative Spectrum Sensing Based on Convolutional Neural Networks. IEEE Trans. Veh. Technol. 2019, 68, 3005–3009.
Figure 1. Framework of multi-node cooperative jamming recognition.
Figure 2. Amplitude spectra of six common jamming patterns (JNR = 10 dB). (a) Single-tone Jamming. (b) Multi-tone Jamming. (c) Narrow-band Jamming. (d) Broad-band Jamming. (e) Comb Jamming. (f) Sweep Jamming.
Figure 3. Multi-node cooperative jamming recognition process.
Figure 4. Example of a residual module.
Figure 5. Simple residual network structure diagram with six residual modules.
Figure 6. Example of two residual modules. The residual module (a) indicates that it has been downsampled, while the residual module (b) indicates equal-width convolution.
Figure 7. Performance comparison of ResNet under different levels.
Figure 8. Performance comparison of ResNet and CNN under single-node method.
Figure 9. Performance comparison of hard-fusion based methods under a different number of nodes.
Figure 10. Performance comparison of soft-fusion based methods under a different number of nodes.
Figure 11. Performance comparison between ResNet and CNN under two fusion methods.
Figure 12. Confusion matrix of the hard-fusion based method (CNN, the number of co-cognitive nodes is 6 under JSR = −6 dB).
Figure 13. Confusion matrix of the hard-fusion based method (ResNet, the number of co-cognitive nodes is 6 under JSR = −6 dB).
Figure 14. Confusion matrix of the soft-fusion based method (ResNet, the number of co-cognitive nodes is 6 under JSR = −6 dB).
Table 1. Residual network structure parameters.

Index   Layer                           Output Dimension
1       Input                           512 × 1
2       15 × 1, conv, 48                512 × 1
3       7 × 1, conv, 64                 512 × 1
4       3 × 1, maxpool, /2              256 × 1
5       Residual-block1 (64)            128 × 1
6       Residual-block2 (64)            128 × 1
7       Residual-block1 (128)           64 × 1
8       Residual-block2 (128)           64 × 1
9       Residual-block2 (128)           64 × 1
10      32 × 1, average pooling layer   1 × 256
11      fc × 32                         1 × 32
12      Softmax, fc × 6                 1 × 6
Table 2. Training parameters of the networks.

Training Parameter              Numerical Value
Initial learning rate           0.001
Number of iterative rounds      6
Mini-batch size                 128
Parameter optimizer             Adam
Learning rate drop period       9
Convolution kernel sizes        1 × 1, 3 × 1, 7 × 1, 15 × 1
Downsampling step size          2
Table 3. Multi-node cooperative jamming recognition method based on hard decision.
Input: sensory information from M co-cognitive nodes
Output: global jamming signal classification judgement
Step 1: (Training phase) The central node trains a neural network “Trainednet” using the perceptual information of the M co-cognitive nodes and distributes the network parameters to each co-cognitive node.
Step 2: (Testing phase) The M co-cognitive nodes use the trained neural network to perform classification judgments, independently obtain recognition results Hi and send the recognition results back to the central node.
Step 3: (Data fusion phase) The central node performs data fusion based on the sensory information of each cooperative cognitive node and derives the global recognition result Hw based on the majority judgment criterion, which is used as the final recognition result of the multi-node cooperative jamming recognition network.
Table 4. Multi-node cooperative jamming recognition method based on soft decision.
Input: sensory information from M co-cognitive nodes
Output: global jamming signal classification judgement
Step 1: (Training phase) The central node trains a neural network “Trainednet” using the sensory information of the M co-cognitive nodes and distributes the network parameters to each co-cognitive node.
Step 2: (Testing phase) The M co-cognitive nodes use the trained neural network to perform classification judgments, obtain recognition vectors Vi independently and transmit the recognition results back to the central node.
Step 3: (Data fusion phase) The central node performs data fusion based on the sensory information of each cooperative cognitive node, all vectors are added to obtain the global judgment vector Vw, and the jamming with the highest probability is used as the global recognition result Hw.
Table 5. Simulation parameters of the jamming signals.

Jamming Pattern         Jamming Parameters
Single-tone jamming     The frequency point is randomly located near the carrier frequency fc.
Multi-tone jamming      The frequency points are randomly located near the carrier frequency fc; the number of tones is random.
Narrow-band jamming     The central frequency is located at fc and the bandwidth is randomly generated.
Broad-band jamming      The central frequency is located at fc and the bandwidth is randomly generated.
Comb jamming            The initial frequency is random around fc; the number and intervals of frequency points are random.
Sweep jamming           The initial frequency is random around fc.
Table 6. Performance comparison of ResNet and CNN under the single-node method.

Net             Average Correct Recognition Rate
ResNet          88.08%
CNN (LeNet)     82.41%