Article

Semi-Supervised Classification for Intra-Pulse Modulation of Radar Emitter Signals Using Convolutional Neural Network

1 School of Electronic Engineering, Xidian University, Xi’an 710071, China
2 Southwest China Research Institute of Electronic Equipment, Chengdu 610036, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(9), 2059; https://doi.org/10.3390/rs14092059
Submission received: 18 March 2022 / Revised: 19 April 2022 / Accepted: 23 April 2022 / Published: 25 April 2022

Abstract

Intra-pulse modulation classification of radar emitter signals is beneficial in analyzing radar systems. Recently, convolutional neural networks (CNNs) have been applied to this task, with results that surpass those of traditional methods. However, these CNN-based methods share a key disadvantage: the CNN requires a sufficient number of labeled samples, and labeling the modulations of radar emitter signal samples demands a tremendous amount of prior knowledge and human effort. In many circumstances, the labeled samples are quite limited compared with the unlabeled ones, which makes the classification semi-supervised. In this paper, we propose a method that adapts the CNN-based intra-pulse classification approach to the case where only a very limited number of labeled samples and a large number of unlabeled samples are available. The method is based on a one-dimensional CNN and uses pseudo labels and self-paced data augmentation to improve the accuracy of intra-pulse classification. Extensive experiments show that the proposed method improves intra-pulse modulation classification performance in semi-supervised situations.

1. Introduction

Intra-pulse modulation classification of radar emitter signals is a key technology in electronic support measure systems, electronic intelligence (ELINT) systems and radar warning receivers [1,2,3], and it is beneficial in analyzing radar systems. Accurate classification of the intra-pulse modulation of radar emitter signals increases the reliability of estimating the function of a radar and indicates the presence of a potential threat, so that the ELINT system can take necessary measures or countermeasures against enemy radars [4].
In recent years, deep learning [5] has attracted great attention in the field of artificial intelligence. Deep-learning-based methods, especially convolutional neural networks (CNNs) [6,7,8,9,10,11], have been applied to many tasks in radar systems, such as synthetic aperture radar (SAR) automatic target recognition [12], and they have also been used in intra-pulse modulation classification of radar emitter signals. In [13], Kong et al. used Choi–Williams distribution (CWD) images of low probability of intercept (LPI) radar signals to recognize the intra-pulse modulations. Similarly, Yu et al. [14] obtained time-frequency images (TFIs) of radar signals using the CWD and extracted the signal contours from the TFIs; a CNN model was then trained with the contour images. In [15], Zhang et al. used Wigner–Ville distribution (WVD) images of radar signals covering five types of intra-pulse modulations to train a CNN model. In [16], Huynh-The et al. designed a cost-efficient CNN that learns spatiotemporal signal correlations via different asymmetric convolution kernels to perform automatic modulation classification. In [17], Peng et al. compared two CNN models with three conventional machine-learning-based methods for modulation classification, and the results showed that the CNN models are superior. In [18], Yu et al. first preprocessed the radar signal by Short-Time Fourier Transform (STFT) and then trained a CNN model to classify the intra-pulse modulation of the radar signals. The methods mentioned above are based on two-dimensional (2-D) CNN models. Given the dimensionality of radar signals, one-dimensional (1-D) CNN-based methods are better suited to the modulation classification task. Wu et al. [19] designed a 1-D CNN with an attention mechanism to recognize seven types of radar emitter signals with different intra-pulse modulations in the time domain. In our previous work [20], a 1-D Selective Kernel Convolutional Neural Network (SKCNN) was proposed to classify eleven types of intra-pulse modulations of radar emitter signals. According to our experimental results, the 1-D SKCNN-based method has the advantages of faster data preprocessing and higher accuracy in intra-pulse modulation classification of radar emitter signals.
For the methods mentioned above, the performance of the CNN models depends on the number of labeled samples, and all of them assume that sufficient labeled samples are available. However, labeling radar emitter signal samples requires a tremendous amount of prior knowledge and human effort, such as probability models or other signal characteristics including spectrum autocorrelation, time-frequency analysis, high-order cumulants and various transform-domain analyses, which increases the difficulty of labeling and leaves the labeled samples quite limited. In real applications of intra-pulse modulation classification of radar emitter signals, unlabeled samples greatly outnumber labeled ones. Therefore, the fully supervised setting often does not hold, and the task becomes semi-supervised. Semi-supervised tasks have been widely studied in image classification. In [21], Lee proposed a pseudo-label learning method for deep neural networks that showed state-of-the-art performance on the MNIST dataset. In [22], the authors proposed an algorithm called MixMatch, which guesses low-entropy labels for data-augmented unlabeled examples and mixes labeled and unlabeled data using MixUp, reducing the error rate on the CIFAR-10 dataset from 38% to 11%. In [23], the authors simplified existing semi-supervised methods and proposed FixMatch, which trains a CNN with pseudo labels generated from weakly augmented unlabeled samples together with the corresponding strongly augmented samples, combining pseudo-label learning and consistency regularization [24,25,26]. The experimental results show that FixMatch achieved state-of-the-art performance across a variety of standard semi-supervised learning benchmarks.
For intra-pulse modulation classification of radar emitter signals, most deep-learning-based methods focus on the fully supervised case. In this paper, we therefore propose a method to classify the intra-pulse modulation of radar emitter signals when only a very limited number of labeled samples and a large number of unlabeled samples are provided. The method combines a CNN based on a mixed attention mechanism, pseudo labels and self-paced data augmentation, which improves the accuracy of intra-pulse classification. It can classify eleven types of intra-pulse modulation signals: single-carrier frequency (SCF), linear frequency modulation (LFM), sinusoidal frequency modulation (SFM), binary frequency shift keying (BFSK), quadrature frequency shift keying (QFSK), even quadratic frequency modulation (EQFM), dual linear frequency modulation (DLFM), multiple linear frequency modulation (MLFM), binary phase shift keying (BPSK), Frank phase-coded (Frank), and composite LFM and BPSK (LFM-BPSK) signals.
The main contribution of this paper is that, unlike most fully supervised work, we focus on the more common semi-supervised situation, where a very limited number of labeled samples and a large number of unlabeled samples are provided. Another contribution is that our method is based on a one-dimensional CNN and uses pseudo labels and self-paced data augmentation, which improves the accuracy of intra-pulse classification. The specific results can be seen in Section 4 and Section 5.
This paper is organized as follows: In Section 2, the proposed method will be introduced in detail. The dataset and setting of parameters will be shown in Section 3. The extensive experiments and corresponding analysis will be carried out in Section 4. The ablation study and application scenarios will be given in Section 5. Finally, the conclusion will be presented in Section 6.

2. The Proposed Method

Semi-supervised classification for intra-pulse modulation of radar emitter signals refers to the case in which a very limited number of labeled samples and a huge number of unlabeled samples are provided. For an L-class classification problem, let $X = \{(x_b, y_b) : b \in (1, \ldots, B)\}$ be the $B$ labeled samples, where $x_b$ are the training samples and $y_b$ are their true labels. Let $U = \{u_{b'} : b' \in (1, \ldots, B')\}$ be the $B'$ unlabeled samples, where $B'$ is always much larger than $B$. In this paper, the method mainly consists of the following steps:
(1) Preprocess the original raw radar emitter signal samples.
(2) Generate pseudo labels for the unlabeled samples together with their confidence, and select the high-confidence unlabeled samples.
(3) Apply self-paced data augmentation to the selected unlabeled samples, and train the CNN model with the true-label samples and the selected pseudo-label samples.
The flow chart of our proposed method is provided in Figure 1, where the parts linked by red arrows are executed only once, and the parts inside the black circle are executed several times. Finally, the well-trained CNN is obtained for the testing sessions.

2.1. The Structure of CNN

It is widely known that a more complicated CNN model such as a VGG network or ResNet, whose depth, floating-point operations and parameter counts are very high, can bring better classification results. However, the training cost would be huge. As radar systems have strict real-time requirements, the model should be lightweight. Attention mechanisms have been widely used in CNN design because of their lightweight structure, for instance, channel attention [27] and spatial attention [28]. In addition, inspired by our previous work in [20], which employed convolutional kernels of different scales, in this paper we design a lightweight CNN model with a mixed attention mechanism (MA-CNN) as the classification backbone, where the attention features are extracted by multi-scale components. The structures of the proposed MA-CNN and of the mixed attention block are shown in Figure 2 and Figure 3, respectively.
The MA-CNN contains four main blocks, and each block contains a convolutional layer with a ReLU activation function [29], a max-pooling layer, a batch-normalization layer [30] and a mixed attention block. The details of the MA-CNN will be introduced in Section 3.2.
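For a concrete picture, a minimal Keras sketch of one such main block follows; the filter count, kernel size and pool size are illustrative assumptions rather than the exact values of Table 2, and MixedAttention refers to the layer sketched at the end of this subsection.

# Sketch of one MA-CNN main block: Conv1D + ReLU, max pooling,
# batch normalization, then the mixed attention block.
from tensorflow.keras import layers

def ma_cnn_block(x, filters: int, kernel_size: int):
    x = layers.Conv1D(filters, kernel_size, padding="same",
                      activation="relu")(x)   # convolution with ReLU [29]
    x = layers.MaxPooling1D(pool_size=2)(x)   # max pooling
    x = layers.BatchNormalization()(x)        # batch normalization [30]
    return MixedAttention(filters)(x)         # mixed attention (sketched below)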
In the mixed attention block, for a given input feature map $F_{in} \in \mathbb{R}^{W \times C}$, where $W$ is the length and $C$ is the number of channels, $F_{in}$ is first split channel-wise into two groups, $X_{group1} \in \mathbb{R}^{W \times \frac{C}{2}}$ and $X_{group2} \in \mathbb{R}^{W \times \frac{C}{2}}$. Then, each group is further split channel-wise into two parts, $X_{channel} \in \mathbb{R}^{W \times \frac{C}{4}}$ and $X_{spatial} \in \mathbb{R}^{W \times \frac{C}{4}}$.
For $X_{channel}$, we extract the channel attention by fusing the outputs of two different multi-layer perceptrons (MLPs). This process can be written as:
$X_{channel\_attention} = \mathrm{MLP}_1(\mathrm{GAP}_s(X_{group})) + \mathrm{MLP}_2(\mathrm{GAP}_s(X_{group}))$ (1)
where $X_{channel\_attention} \in \mathbb{R}^{1 \times \frac{C}{4}}$, and $\mathrm{MLP}_1(\cdot)$ and $\mathrm{MLP}_2(\cdot)$ denote two MLPs with different numbers of nodes in the hidden layer. $\mathrm{GAP}_s(\cdot)$ denotes global average pooling in a width-wise way. $X_{group}$ refers to $X_{group1}$ or $X_{group2}$.
Similarly, for $X_{spatial}$, we extract the spatial attention by fusing the outputs of two different convolutional layers. This process can be written as:
$X_{spatial\_attention} = \mathrm{Conv}_1(\mathrm{GAP}_c(X_{group})) + \mathrm{Conv}_2(\mathrm{GAP}_c(X_{group}))$ (2)
where $X_{spatial\_attention} \in \mathbb{R}^{W \times 1}$, $\mathrm{GAP}_c(\cdot)$ denotes global average pooling in a channel-wise way, and $\mathrm{Conv}_1(\cdot)$ and $\mathrm{Conv}_2(\cdot)$ denote convolution by two convolutional layers with different kernel sizes.
After that, we can obtain the weighted feature maps according to the channel attention and the spatial attention, respectively. The output result for $X_{group}$ is then obtained by concatenating the two weighted feature maps in a channel-wise way. This process can be written as:
$X_{group\_out} = \mathrm{Concat}(X_{channel\_attention} \otimes_c X_{channel},\; X_{spatial\_attention} \otimes_s X_{spatial})$ (3)
where $X_{group\_out} \in \mathbb{R}^{W \times \frac{C}{2}}$, and $\otimes_c$ and $\otimes_s$ denote the multiplications for channel attention and spatial attention, respectively.
After obtaining the output results for $X_{group1}$ and $X_{group2}$, the final result of the mixed attention block can be expressed as:
$X_{mix\_attention\_out} = \mathrm{Concat}(X_{group\_out1}, X_{group\_out2})$ (4)
where $X_{mix\_attention\_out} \in \mathbb{R}^{W \times C}$, and $X_{group\_out1} \in \mathbb{R}^{W \times \frac{C}{2}}$ and $X_{group\_out2} \in \mathbb{R}^{W \times \frac{C}{2}}$ are the output results for $X_{group1}$ and $X_{group2}$, respectively.
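To make the block concrete, the following Keras sketch implements Equations (1)-(4). The hidden-layer widths, kernel sizes and sigmoid activations are our own illustrative assumptions (the paper's exact settings are in Table 2 and Figure 3), and the two groups share the attention sub-networks here purely for brevity.

import tensorflow as tf
from tensorflow.keras import layers

class MixedAttention(layers.Layer):
    """Sketch of the mixed attention block of Equations (1)-(4). Hidden
    widths and kernel sizes are illustrative assumptions, and the two
    groups share the attention sub-networks here for brevity."""

    def __init__(self, channels: int, **kwargs):
        super().__init__(**kwargs)
        c4 = channels // 4
        # Two MLPs with different hidden-layer widths (channel attention)
        self.mlp1 = tf.keras.Sequential([layers.Dense(c4, activation="relu"),
                                         layers.Dense(c4, activation="sigmoid")])
        self.mlp2 = tf.keras.Sequential([layers.Dense(c4 // 2, activation="relu"),
                                         layers.Dense(c4, activation="sigmoid")])
        # Two convolutions with different kernel sizes (spatial attention)
        self.conv1 = layers.Conv1D(1, 3, padding="same", activation="sigmoid")
        self.conv2 = layers.Conv1D(1, 7, padding="same", activation="sigmoid")

    def _group(self, x):
        # Split the group (B, W, C/2) into X_channel and X_spatial, (B, W, C/4) each
        x_channel, x_spatial = tf.split(x, 2, axis=-1)
        # Eq. (1): channel attention from width-wise global average pooling
        gap_w = tf.reduce_mean(x, axis=1)                      # (B, C/2)
        channel_att = self.mlp1(gap_w) + self.mlp2(gap_w)      # (B, C/4)
        channel_att = channel_att[:, tf.newaxis, :]            # (B, 1, C/4)
        # Eq. (2): spatial attention from channel-wise global average pooling
        gap_c = tf.reduce_mean(x, axis=-1, keepdims=True)      # (B, W, 1)
        spatial_att = self.conv1(gap_c) + self.conv2(gap_c)    # (B, W, 1)
        # Eq. (3): weight the two halves and concatenate channel-wise
        return tf.concat([x_channel * channel_att, x_spatial * spatial_att], axis=-1)

    def call(self, f_in):
        group1, group2 = tf.split(f_in, 2, axis=-1)            # (B, W, C/2) each
        # Eq. (4): concatenate the two group outputs channel-wise
        return tf.concat([self._group(group1), self._group(group2)], axis=-1)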

2.2. Preprocessing of Data

In intra-pulse modulation classification of radar emitter signals, radars can operate in multiple wave modes, with pulse widths ranging from a microsecond to hundreds of microseconds. However, the most commonly used wave mode is the short-pulse-width mode, and pulses can be collected separately according to their pulse widths.
In this paper, we assume that the parameters of the collected radar signals vary within certain ranges. Moreover, we use IQ sampling to obtain the time-domain sequence in the digital domain. The sampled radar signal can therefore be written as:
$y_{receive}(n) = A(n)\, e^{j\varphi(n)} + \mu(n), \quad n = 0, 1, 2, \ldots, N-1$ (5)
$I_{receive}(n) = \mathrm{real}(y_{receive}(n))$ (6)
$Q_{receive}(n) = \mathrm{imag}(y_{receive}(n))$ (7)
where $N$ is the total number of sampling points of $y_{receive}(n)$, $A(n)$ is the amplitude of the radar signal at sampling point $n$, $\varphi(n)$ is the phase at sampling point $n$, and $\mu(n)$ is the noise at sampling point $n$, which is considered to be additive white Gaussian noise (AWGN) [22]. $I_{receive}(n)$ and $Q_{receive}(n)$, the real and imaginary parts of $y_{receive}(n)$, are the time-domain sampled sequences in the I-channel and Q-channel, respectively. When the sampling frequency is given, the length of $y_{receive}(n)$ is determined.
We then use the Fast Fourier Transform (FFT) to speed up the Discrete Fourier Transform (DFT) and convert the time-domain sequence into the frequency domain. As the FFT requires the sequence length to be the k-th power of two, $y_{receive}(n)$ is padded with zeros. The value of $k$ is calculated as:
$k = \lceil \log_2(Max\_len) \rceil$ (8)
where $Max\_len$ is the maximum length over all sampled sequences, and $\lceil \cdot \rceil$ rounds its input up to the nearest integer greater than or equal to it.
According to $k$, the number of padded zeros can be determined. The padded sequence $y_{padded}$ can be written as:
$y_{padded} = [y_{receive}(n), 0, \ldots, 0]$ (9)
where the length of $y_{padded}$ is $2^k$.
The frequency-domain sequence of the sample is then obtained by:
$y_{frequency}(m) = \sum_{i=0}^{2^k - 1} y_{padded}(i)\, e^{-j \frac{2\pi m i}{2^k}}$ (10)
where $m \in [0, 1, \ldots, 2^k - 1]$, and the calculation can use the FFT to reduce the number of additions and multiplications. Next, we calculate the modulus of $y_{frequency}(m)$ and normalize the result to reduce the influence of different amplitudes on classification. This process can be written as:
$y_{abs}(m) = \sqrt{y_{frequency}(m)\, y_{frequency}^{*}(m)}$ (11)
$y_{normalization}(m) = \dfrac{y_{abs}(m)}{\max\{y_{abs}(a) \mid a \in [0, 1, \ldots, 2^k - 1]\}}$ (12)
where $y_{frequency}^{*}(m)$ is the conjugate of $y_{frequency}(m)$, and the value of $y_{normalization}(m)$ ranges from 0 to 1. $a$ is the element index of $y_{abs}$, and $\max\{\cdot\}$ in Equation (12) takes the maximum value of the vector $y_{abs}$ over its $2^k$ elements.
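A minimal NumPy sketch of this preprocessing chain (Equations (8)-(12)) could look as follows; the function and variable names are ours, and max_len denotes the longest sampled sequence in the dataset.

# Sketch of the preprocessing of Section 2.2: zero-pad the complex IQ
# sequence to 2**k points, apply the FFT, take the modulus and normalize.
import numpy as np

def preprocess(y_receive: np.ndarray, max_len: int) -> np.ndarray:
    # Eq. (8): k = ceil(log2(Max_len))
    k = int(np.ceil(np.log2(max_len)))
    # Eq. (9): pad with zeros to length 2**k
    y_padded = np.zeros(2 ** k, dtype=complex)
    y_padded[: len(y_receive)] = y_receive
    # Eq. (10): frequency-domain sequence via the FFT
    y_frequency = np.fft.fft(y_padded)
    # Eq. (11): modulus, sqrt(y * conj(y))
    y_abs = np.abs(y_frequency)
    # Eq. (12): normalize amplitudes into [0, 1]
    return y_abs / y_abs.max()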

2.3. Generating Pseudo Labels and Selecting Unlabeled Samples

Pseudo labeling is a standard technique for training CNNs in semi-supervised classification tasks. In semi-supervised classification for intra-pulse modulation of radar emitter signals, an initial CNN model is first trained on the limited number of labeled samples. A general way to generate pseudo labels for unlabeled samples is to use this pre-trained CNN to predict their classes. However, as the number of training samples is limited, the pre-trained CNN cannot predict the unlabeled samples accurately. For an L-class classification problem, the last layer of the CNN contains $L$ nodes with a SoftMax activation function. The predicted class of a sample, $y_{out}$, and its confidence, $y_{confidence}$, can be written as:
$y_{out} = \arg\max(y_0, y_1, \ldots, y_{L-1})$ (13)
$y_{confidence} = \max(y_0, y_1, \ldots, y_{L-1})$ (14)
where $[y_0, y_1, \ldots, y_{L-1}]$ is the output of the last layer of the CNN model and $\sum_{i=0}^{L-1} y_i = 1$.
Generally, a typical way to select high-confidence samples is to set a threshold:
$y_{selected} = \begin{cases} 1 & \text{if } y_{confidence} \ge threshold \\ 0 & \text{if } y_{confidence} < threshold \end{cases}$ (15)
where $y_{selected}$ is binary and indicates whether a pseudo-labeled sample should be selected: “1” for “selected” and “0” for “unselected”.
In this case, the algorithm for accomplishing the semi-supervised classification task based on a CNN, selecting samples according to a confidence threshold, is shown in Algorithm 1:
Algorithm 1. The algorithm of accomplishing the semi-supervised classification task based on CNN and selecting samples according to the threshold of confidence
Step 1: Train a CNN model $f$ with the labeled samples $X = \{(x_b, y_b) : b \in (1, \ldots, B)\}$. Thus, the pre-trained CNN model $f_{pretrained}$ is obtained.
Step 2: Generate pseudo labels for the unlabeled samples with the latest trained CNN. This can be written as:
$U \xrightarrow{f_{latest}} U_{pseudo} = \{(u_{b'}, y'_{b'}) : b' \in (1, \ldots, B')\}$ (16)
where $f_{latest}$ is the latest trained CNN and $y'_{b'}$ is the pseudo label. The first time pseudo labels are generated, $f_{latest}$ is $f_{pretrained}$; afterwards, $f_{latest}$ is $f_{better}$ from Step 4.
Step 3: Set the threshold and select the high-confidence samples based on Equation (15), obtaining a sub-collection $U_{pseudo\_selected}$ of $U_{pseudo}$.
Step 4: Train a better CNN model $f_{better}$ with $U_{pseudo\_selected}$ and $X$.
Step 5: Repeat Step 2 to Step 4 until the threshold meets the stop condition or other conditions.
However, the threshold is difficult to set. Some samples are hard for the CNN to classify, meaning that the classification result is correct but the value of $y_{confidence}$ is low; a higher threshold would prevent these samples from participating in training. On the other hand, with a lower threshold, some unlabeled samples whose classification results are actually wrong will be selected, which has a negative impact on training the CNN.
Considering these limitations, we do not apply a threshold when selecting samples. Instead, we first count the labeled samples and the unlabeled samples. Based on these two numbers, we control the number of selected samples, $NUM$, used for training the CNN in each cycle. The algorithm for accomplishing the semi-supervised classification task based on a CNN, selecting samples by controlling their number, is shown in Algorithm 2:
Algorithm 2. The algorithm of accomplishing the semi-supervised classification task based on CNN and selecting samples by controlling the number
Step 1: Train a CNN model $f$ with the labeled samples $X = \{(x_b, y_b) : b \in (1, \ldots, B)\}$ to obtain the pre-trained CNN model $f_{pretrained}$.
Step 2: Generate pseudo labels for the unlabeled samples with the latest trained CNN according to Equation (16):
$U \xrightarrow{f_{latest}} U_{pseudo} = \{(u_{b'}, y'_{b'}) : b' \in (1, \ldots, B')\}$
where $f_{latest}$ is the latest trained CNN and $y'_{b'}$ is the pseudo label. The first time pseudo labels are generated, $f_{latest}$ is $f_{pretrained}$; afterwards, $f_{latest}$ is $f_{better}$ from Step 4.
Step 3: Set the number of selected samples $NUM$ and select the $NUM$ samples with the highest confidence, obtaining the selected pseudo-labeled samples $U_{pseudo\_selected}$ for this session, where the number of samples in $U_{pseudo\_selected}$ is $NUM$.
Step 4: Train a better CNN model $f_{better}$ with $U_{pseudo\_selected}$ and $X$.
Step 5: Repeat Step 2 to Step 4 until $NUM$ meets the stop condition or other conditions.
Compared with the threshold method, our method selects samples more smoothly. In the threshold method, as training proceeds, more unlabeled samples should be added, so the threshold must be lowered. However, the decrement is hard to control, and even a small decrement can bring in many unlabeled samples containing more wrong pseudo labels. In our method, we can increase the value of $NUM$ linearly so that the uncertainty stays low, and the initial value of $NUM$ can be set small so that only correctly pseudo-labeled samples are likely to be selected, which helps the next training cycle.
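A minimal sketch of this selection rule, assuming a trained Keras-style classifier model with a SoftMax output layer (names and signature are illustrative):

# Sketch of Steps 2-3 of Algorithm 2: predict pseudo labels for the
# unlabeled samples and keep the NUM most confident ones.
import numpy as np

def select_top_num(model, unlabeled_x: np.ndarray, num: int):
    probs = model.predict(unlabeled_x)      # SoftMax outputs, shape (B', L)
    pseudo_labels = probs.argmax(axis=1)    # Eq. (13): predicted class
    confidence = probs.max(axis=1)          # Eq. (14): its probability
    top = np.argsort(confidence)[-num:]     # indices of the NUM most confident
    return unlabeled_x[top], pseudo_labels[top]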

2.4. Self-Paced Data Augmentation

Data augmentation is widely used in classification tasks and can improve the performance of CNN models. In [23], the authors first used weakly augmented unlabeled samples to obtain their pseudo labels and computed predictions for the corresponding strongly augmented samples. The model was then trained, via a cross-entropy loss, to make the predictions on the strongly augmented versions match the pseudo labels.
For the received radar signals, the frequency-domain amplitude spectra are already polluted by noise. Unlike the technique in [23], in this paper we obtain the pseudo labels of the unlabeled samples without weak augmentation. For the strong augmentation part, we propose a self-paced data augmentation strategy for the unlabeled samples.
As described in Section 2.3, once the pseudo labels and the selected samples are determined, we add a uniformly distributed noise sequence to the selected, preprocessed samples and normalize the results. The amplitude of the noise sequence ranges from 0 to $d$, where $d \ge 0$ and the value of $d$ depends on the number of training cycles. The data augmentation for the unlabeled samples can be written as:
$y_{extra\_noise}(m) = y_{normalization}(m) + d \cdot \gamma(m)$ (17)
$y_{augmentation}(m) = \dfrac{y_{extra\_noise}(m)}{\max\{y_{extra\_noise}(a) \mid a \in [0, 1, \ldots, 2^k - 1]\}}$ (18)
where $\gamma(m)$ is a uniformly distributed noise sequence with amplitudes ranging from 0 to 1.
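A minimal NumPy sketch of Equations (17) and (18), with the cycle-dependent noise amplitude d passed in (the function name is ours):

# Sketch of the self-paced augmentation: add uniform noise of amplitude
# d to the normalized amplitude spectrum, then renormalize into [0, 1].
import numpy as np

def self_paced_augment(y_normalization: np.ndarray, d: float) -> np.ndarray:
    gamma = np.random.uniform(0.0, 1.0, size=y_normalization.shape)  # noise in [0, 1]
    y_extra_noise = y_normalization + d * gamma                      # Eq. (17)
    return y_extra_noise / y_extra_noise.max()                       # Eq. (18)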
The CNN is then trained with the selected pseudo-labeled augmented samples and the labeled samples. The algorithm combining pseudo-label generation, selection by controlling the number and self-paced data augmentation is shown in Algorithm 3.
Algorithm 3. The algorithm of accomplishing the semi-supervised classification task based on CNN, selecting samples by controlling the number and self-paced data augmentation
Step 1: Train a CNN model $f$ with the labeled samples $X = \{(x_b, y_b) : b \in (1, \ldots, B)\}$ to obtain the pre-trained CNN model $f_{pretrained}$.
Step 2: Generate pseudo labels for the unlabeled samples with the latest trained CNN according to Equation (16):
$U \xrightarrow{f_{latest}} U_{pseudo} = \{(u_{b'}, y'_{b'}) : b' \in (1, \ldots, B')\}$
where $f_{latest}$ is the latest trained CNN and $y'_{b'}$ is the pseudo label. The first time pseudo labels are generated, $f_{latest}$ is $f_{pretrained}$; afterwards, $f_{latest}$ is $f_{better}$ from Step 5.
Step 3: Set the number of selected samples $NUM$ and select the $NUM$ samples with the highest confidence, obtaining the selected pseudo-labeled samples $U_{pseudo\_selected}$ for this session.
Step 4: Set the value of $d$. Augment the selected samples and normalize the results based on Equations (17) and (18) to obtain the selected pseudo-labeled augmented samples $U_{pseudo\_selected\&augmented}$.
Step 5: Train a better CNN model $f_{better}$ with $U_{pseudo\_selected\&augmented}$ and $X$.
Step 6: Repeat Step 2 to Step 5 until $NUM$ meets the stop condition or other conditions.

3. Dataset and Parameters of MA-CNN

In this section, the simulation dataset is introduced, and the parameters of the MA-CNN are given in detail. A computer with an Intel Core i9-10900K CPU, 128 GB of RAM and an RTX 3090 GPU was used, with MATLAB 2021a, Python and the Keras library.

3.1. Dataset

Typically, the carrier frequency of a radar ranges from 300 MHz to 300 GHz, and receivers usually have adaptive local oscillators, which down-convert the received signals and output lower-frequency signals after a low-pass filter. In addition, the relatively short pulse-width mode is commonly used and is the typical wave mode of radars.
In this case, we used a simulation dataset to examine the proposed method. Eleven types of radar emitter signals whose pulse widths varied from 4 μs to 40 μs were used, including SCF, LFM, SFM, BFSK, QFSK, EQFM, DLFM, MLFM, BPSK, Frank and LFM-BPSK signals. The sampling method is IQ sampling, the sampling frequency is 400 MHz, and the signal parameters are shown in Table 1.
The signal-to-noise ratio (SNR) is defined as the power of the signal over the power of the noise. In Equation (5), the received sampled signal contains the pure signal part $A(n)e^{j\varphi(n)}$ and the noise part $\mu(n)$. Therefore, the SNR can be defined as:
$SNR\,[\mathrm{dB}] = 10 \log_{10}\!\left( \dfrac{\frac{1}{N}\sum_{n=0}^{N-1} A(n)e^{j\varphi(n)} \left(A(n)e^{j\varphi(n)}\right)^{*}}{\frac{1}{N}\sum_{n=0}^{N-1} \mu(n)\mu^{*}(n)} \right) = 10 \log_{10}\!\left( \dfrac{\sum_{n=0}^{N-1} \left(A(n)\right)^{2}}{\sum_{n=0}^{N-1} \mu(n)\mu^{*}(n)} \right)$ (19)
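For illustration, AWGN can be scaled to a target SNR under this definition as in the following sketch (the helper is our own, not from the paper):

# Scale complex AWGN so the sampled signal has the desired SNR per Eq. (19).
import numpy as np

def add_awgn(signal: np.ndarray, snr_db: float) -> np.ndarray:
    signal_power = np.mean(np.abs(signal) ** 2)         # mean of A(n)^2
    noise_power = signal_power / 10 ** (snr_db / 10)    # solve Eq. (19) for noise power
    noise = np.sqrt(noise_power / 2) * (np.random.randn(len(signal))
                                        + 1j * np.random.randn(len(signal)))
    return signal + noise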
Figure 4 shows the noise-free frequency-domain amplitude spectra of the eleven intra-pulse modulations of the radar emitter signal samples and the corresponding spectra at an SNR of 0 dB.
The SNR ranges from −12 dB to 0 dB in 1 dB increments. At each SNR value, there are 1200 samples for each type of intra-pulse modulation, i.e., 171,600 samples in the dataset in total. As the proposed method aims at the semi-supervised classification problem, we split the dataset with the following steps:
(1) For the testing dataset, we select 400 samples for each type of intra-pulse modulation at each SNR value from the original dataset.
(2) For the labeled dataset, we select 30 samples for each type of intra-pulse modulation at each SNR value from the original dataset.
(3) The labeled dataset is divided into a training dataset and a validation dataset: 15 samples for each type of intra-pulse modulation at each SNR value are selected for the training dataset, and likewise 15 for the validation dataset.
(4) For the unlabeled dataset, we select 770 samples for each type of intra-pulse modulation at each SNR value from the original dataset.
(5) The unlabeled dataset is divided into two datasets, the 1st unlabeled dataset and the 2nd unlabeled dataset, each containing 385 samples for each type of intra-pulse modulation at each SNR value.
Note that there is no intersection of samples among the training dataset (2145 samples), the validation dataset (2145 samples), the 1st unlabeled dataset (55,055 samples), the 2nd unlabeled dataset (55,055 samples) and the testing dataset (57,200 samples). In addition, the number of samples per modulation at each SNR value is the same within each dataset.
The validation dataset is as large as the training dataset because the accuracy on the validation dataset is the metric that decides when the weights are saved for the next training cycle. Based on the experimental results in [19], if the validation dataset contains fewer samples, the minimum step size of the accuracy metric becomes coarser, which causes larger differences in the testing sessions. As the number of labeled samples is limited, splitting the labeled dataset half-and-half is suitable.
In addition, using two unlabeled datasets, the 1st and the 2nd, reduces the uncertainty of the results and increases the universality of the proposed method.

3.2. Parameters of MA-CNN

Given the sampling frequency and the maximum pulse width, the longest sampled sequence contains 400 MHz × 40 μs = 16,000 points, so by Equation (8) the value of k is 14, corresponding to a 16,384-point FFT. The parameters of the designed MA-CNN are shown in Table 2.
The time usage for training the proposed MA-CNN based on different settings in Section 4 and Section 5 will be shown in Appendix A.8.

4. Experiments

To test the effectiveness of the proposed method, we apply it to the dataset and provide an extensive ablation study to tease apart the contribution of each of the proposed components. For convenience, we list the situations of the datasets that are used for training the MA-CNN in Table 3.
The datasets in Situation Dataset 1, Situation Dataset 2 and Situation Dataset 3 are based on fully supervised conditions; the MA-CNN models trained on these three datasets provide reference classification results for different numbers of labeled samples.
The samples in Situation Dataset 4 and Situation Dataset 5 follow the same distribution; these two situations are conducted to reduce the uncertainty and increase the universality of the proposed method. The datasets in Situation Dataset 4 and Situation Dataset 5 are based on semi-supervised conditions and are used in our proposed method.
To help readers understand the datasets used in the different situations more intuitively, a visual diagram is provided in Figure 5.

4.1. Implementation Details

In all experiments, we use the same MA-CNN structure. The learning rate is set to 0.001 and the batch size to 64. The cross-entropy function is selected as the loss function, and the optimization algorithm is adaptive moment estimation (ADAM) [31]. There are 50 epochs in each training cycle. The initial weights of the MA-CNN are generated once and loaded at the start of each training cycle for initialization. In each training cycle, the weights with the highest accuracy on the validation dataset are saved; with 100 training cycles, for example, there are 100 groups of weights. When all training sessions are finished, we select the group of weights with the best validation accuracy across all training cycles to test the real performance.
The framework of our proposed method is shown in Algorithm 4.
Algorithm 4. The framework of our proposed method
Step 1: Train an MA-CNN model $f$ with the labeled samples $X$ and obtain the pre-trained model $f_{pretrained}$.
Step 2: Set the number of training cycles $T_{cycle}$. Set the values of $NUM$ and $d$ for each round of the training cycle.
Step 3: Generate pseudo labels for the unlabeled samples with the latest well-trained MA-CNN $f_{latest}$ and obtain the pseudo-labeled samples $U_{pseudo}$. The first time pseudo labels are generated, $f_{latest}$ is $f_{pretrained}$; afterwards, $f_{latest}$ is $f_{better}$ from Step 6.
Step 4: Based on the current round of the training cycle, select the $NUM$ samples with the highest confidence in this round.
Step 5: Augment the selected pseudo-labeled samples and normalize the results based on the value of $d$ in this round.
Step 6: Train a better MA-CNN model $f_{better}$ with the selected pseudo-labeled augmented samples and the labeled samples.
Step 7: Repeat Step 3 to Step 6 until the training cycle reaches $T_{cycle}$.
In our proposed method, the value of $T_{cycle}$ is 100. The value of $NUM$ increases linearly, starting from 551 with increments of 551 (in the last training cycle, $NUM$ covers all 55,055 unlabeled samples). The value of $d$ decreases linearly with the training cycle, from 1 to 0.
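A sketch of these two schedules, under our assumption about how NUM and d map to the cycle index (the endpoints match the description above):

# Sketch of the schedules in Algorithm 4: NUM grows linearly with the
# training cycle and is capped at the total number of unlabeled samples,
# while d decays linearly from 1 to 0 over the cycles.
T_CYCLE = 100
NUM_STEP = 551
TOTAL_UNLABELED = 55055

def schedules(cycle: int):
    """`cycle` runs from 1 to T_CYCLE."""
    num = min(NUM_STEP * cycle, TOTAL_UNLABELED)     # 551, 1102, ..., 55,055
    d = 1.0 - (cycle - 1) / (T_CYCLE - 1)            # 1 at the first cycle, 0 at the last
    return num, d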

4.2. Experiments of Fully Supervised Baseline Methods

In this section, we analyze the classification performance of the models trained on the datasets of Situation Dataset 1, Situation Dataset 2 and Situation Dataset 3. Three fully supervised experiments are carried out with these datasets, and their classification results are shown in Table 4 and Table 5. Detailed results can be found in Table A1, Table A2 and Table A3 of Appendix A.1.
From the tables, we find that LFM, MLFM and LFM-BPSK signals are more difficult to classify than the other intra-pulse modulations. Moreover, for these three signal types, the classification accuracy of the model trained only on the labeled dataset is quite low. For other modulations, such as DLFM and EQFM signals, this model cannot classify them accurately at lower SNRs. In addition, there is a large average accuracy gap between the model trained on the labeled dataset alone and the model trained on both the labeled dataset and the unlabeled dataset with true labels. In summary, using only the limited labeled dataset cannot produce good classification results.

4.3. Experiments of the Proposed Method

In this section, we evaluate the accuracy of the proposed method trained on the datasets of Situation Dataset 4 and Situation Dataset 5. The results can be seen in Table 6 and Table 7, with detailed results in Table A4 and Table A5 of Appendix A.2.
Moreover, Figure 6 provides the average classification accuracy of the model on the validation dataset at the end of each training cycle.
Compared with the results in Section 4.2, the classification accuracy of LFM, EQFM, DLFM, MLFM and LFM-BPSK signals improves substantially, especially for the LFM-BPSK signals.
Although an accuracy gap between our proposed method and the fully supervised method remains for LFM, DLFM and MLFM signals, for the other modulations the classification performance differs little. In terms of average accuracy, the proposed method improves the classification of the eleven intra-pulse modulations and can compete with the fully supervised model.
Figure 6 shows that the average accuracy of the two experiments on the validation dataset over the first 30 training cycles is unstable and fluctuates over a wide range. However, the accuracy becomes stable and higher than the initial accuracy (at training cycle 0) as training proceeds, which shows the effectiveness of our proposed framework.

5. Discussion

Since our proposed method consists of selecting unlabeled samples and self-paced data augmentation, we study the effect of changing its components to provide additional insight into what makes the method perform well. Specifically, we measure the effects of changing the values of $T_{cycle}$ and $NUM$, removing the data augmentation, and changing the strategy of self-paced data augmentation. The dataset situations are Situation Dataset 4 and Situation Dataset 5.
In addition, to verify the generality of the framework (specifically, generating pseudo labels, selecting unlabeled samples and self-paced data augmentation), we replace the proposed MA-CNN with the 1-D SKCNN of [20]; the details can be found in Section 5.4. In Section 5.5, we provide some scenarios in which our method could be applied.

5.1. Ablation Study on the Values of $T_{cycle}$ and $NUM$

In this section, we change the values of $T_{cycle}$ and $NUM$. Specifically, three further experiments with different pairs of ($T_{cycle}$, $NUM$) are carried out, with $d$ still decreasing linearly from 1 to 0. The experimental results for ($T_{cycle}=26$, $NUM=2145$), ($T_{cycle}=50$, $NUM=1102$), ($T_{cycle}=100$, $NUM=551$) and ($T_{cycle}=200$, $NUM=276$) are shown in Table 8. Detailed results can be found in Table A6, Table A7, Table A8, Table A9, Table A10 and Table A11 of Appendix A.3.
Combined with the results in Section 4.3, we find that the average accuracy tends to increase as $T_{cycle}$ increases. Although the experiments using the 2nd unlabeled dataset with ($T_{cycle}=26$, $NUM=2145$) and ($T_{cycle}=50$, $NUM=1102$) do not follow this tendency, their average accuracies are similarly lower than those of the other experimental groups using the same datasets. In addition, while more training cycles tend to bring higher accuracy, the price is longer training time, as they contain more iterations in total. The time usage for training the model with these four pairs can be found in Appendix A.8.
As a result, it is important to balance the number of training cycles against the required classification accuracy. In this paper, the pair ($T_{cycle}=100$, $NUM=551$) is the most suitable.

5.2. Ablation Study on Data Augmentation

In this section, we remove the self-paced data augmentation, so the unlabeled samples are used for training the MA-CNN without any augmentation. Four experiments with the pairs ($T_{cycle}=26$, $NUM=2145$), ($T_{cycle}=50$, $NUM=1102$), ($T_{cycle}=100$, $NUM=551$) and ($T_{cycle}=200$, $NUM=276$) are carried out. The experimental results are shown in Table 9, with detailed results in Table A12, Table A13, Table A14, Table A15, Table A16, Table A17, Table A18 and Table A19 of Appendix A.4.
Combined with the tables in Section 4.3 and Section 5.1, we find that pseudo-label learning without self-paced data augmentation still improves classification accuracy compared with using only the labeled datasets. However, the performance of all four models is worse than that of the models in Section 5.1, with average accuracies at least 5% lower than those of the corresponding models. Based on these results, we conclude that the self-paced data augmentation in our proposed framework is necessary and effective and improves the performance of the MA-CNN.

5.3. Ablation Study on the Strategy of Self-Paced Data Augmentation

In this section, we change the strategy of self-paced data augmentation. Specifically, three experiments with the same pair ($T_{cycle}=100$, $NUM=551$) are carried out. In the first experiment, the value of $d$ in Equation (17) is constant at 1; in the second, $d$ is constant at 0.5; and in the third, $d$ decreases linearly from 0.5 to 0. The experimental results are shown in Table 10, with detailed results in Table A20, Table A21, Table A22, Table A23, Table A24 and Table A25 of Appendix A.5.
Combined with the tables in Section 4.3, we find that the value of $d$ influences the classification performance. Specifically, the self-paced strategy that decreases $d$ with the training cycles improves the accuracy of the model. Moreover, under this self-paced strategy, a higher initial value of $d$ leads to a better result than a lower initial value.

5.4. Generality Study on Generating Pseudo Labels, Selecting Unlabeled Samples and Self-Paced Data Augmentation

In this section, we evaluate the generality of generating pseudo labels, selecting unlabeled samples and self-paced data augmentation. Specifically, the MA-CNN is replaced by the existing, more complicated 1-D SKCNN model of [20], whose structure can be found in Appendix A.6. As before, three fully supervised experiments with the datasets of Situation Dataset 1, Situation Dataset 2 and Situation Dataset 3 are carried out. Meanwhile, the semi-supervised classification is conducted with the same strategies as in Section 4.1. The classification results are shown in Table 11, with detailed results in Table A26, Table A27, Table A28, Table A29 and Table A30 of Appendix A.7.
According to the table, although the semi-supervised classification results of the 1-D SKCNN are better than those of the MA-CNN, the accuracy follows the same tendency as with the MA-CNN: the average accuracy rises by about 16% compared with the original 1-D SKCNN model trained only on the labeled samples. Focusing on this improvement, we conclude that the framework of generating pseudo labels, selecting unlabeled samples and self-paced data augmentation can be applied to different CNN structures for semi-supervised intra-pulse modulation classification.

5.5. Application Scenarios

CNNs have been widely used in classification tasks, and the existing literature has proven their effectiveness in intra-pulse modulation classification. However, in real environments, unlabeled samples greatly outnumber labeled samples, making intra-pulse modulation classification a semi-supervised task. Through the experiments with the MA-CNN and the extra experiments using the 1-D SKCNN as the classification backbone, we conclude that generating pseudo labels, selecting unlabeled samples and self-paced data augmentation increase the performance of a CNN model in a semi-supervised condition. At the same time, our framework requires more training cycles, so it is essential to design the classification backbone in a lightweight way when computation resources are limited.
Currently, few studies focus on semi-supervised classification of intra-pulse modulations of radar signals; our method provides guidance for this type of classification.

6. Conclusions

In this paper, we focused on the more common semi-supervised situation rather than the fully supervised one and proposed a method to classify the intra-pulse modulation of radar emitter signals when only a very limited number of labeled samples and a large number of unlabeled samples are provided. Through extensive experiments, including fully supervised baselines and ablation studies, we found that our proposed method, which combines generating pseudo labels, selecting high-confidence samples and self-paced data augmentation, can significantly improve the classification accuracy of eleven types of intra-pulse modulations of radar emitter signals in semi-supervised situations. Additionally, the generality study confirmed the generality of the components of our proposed method. The method could serve as guidance for this type of semi-supervised classification.
In future work, we will attempt to address the extreme situation where the dataset includes only limited labeled samples. Furthermore, reducing the training time and applying this framework to real radar systems remain problems for the future.

Author Contributions

Conceptualization, S.Y. and B.W.; methodology, S.Y. and B.W.; software, S.Y., X.L. and J.W.; validation, S.Y.; formal analysis, P.L. and S.Y.; investigation, S.Y. and B.W.; resources, P.L.; data curation, S.Y.; writing—original draft preparation, S.Y.; writing—review and editing, S.Y.; supervision, P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. The Classification Results of the Experiments

Table A1. The fully supervised classification results with the datasets in Situation Dataset 1.

SNR (dB)   −12     −11     −10     −9      −8      −7      −6      −5      −4      −3      −2      −1      0       Average
SCF        0.9200  0.9375  0.9350  0.9700  0.9750  0.9650  0.9675  0.9875  0.9650  0.9650  0.9675  0.9675  0.9600  0.9602
LFM        0.2300  0.2125  0.2525  0.2425  0.2175  0.2650  0.2650  0.2925  0.3125  0.3775  0.4600  0.5325  0.5225  0.3217
SFM        0.7450  0.8475  0.9025  0.9275  0.9525  0.9550  0.9500  0.9600  0.9875  0.9775  0.9775  0.9775  0.9825  0.9340
BFSK       0.6900  0.7550  0.7900  0.8525  0.8650  0.8675  0.8725  0.8750  0.8900  0.9025  0.8825  0.9275  0.9225  0.8533
QFSK       0.7975  0.8700  0.9025  0.9275  0.9125  0.9275  0.9150  0.9050  0.9350  0.9575  0.9525  0.9625  0.9500  0.9165
EQFM       0.2900  0.3925  0.4825  0.5425  0.6450  0.7025  0.8100  0.8325  0.9025  0.9275  0.9250  0.9250  0.9425  0.7169
DLFM       0.2550  0.2550  0.3325  0.3425  0.4550  0.5375  0.6625  0.6750  0.7900  0.8125  0.8600  0.8950  0.8725  0.5958
MLFM       0.2100  0.1900  0.2125  0.2525  0.2675  0.2500  0.3325  0.3350  0.3475  0.4175  0.4400  0.4875  0.4875  0.3254
BPSK       0.9550  0.9475  0.9725  0.9625  0.9425  0.9400  0.9525  0.9550  0.9600  0.9200  0.9475  0.9525  0.9500  0.9506
FRANK      0.8575  0.9300  0.9500  0.9750  0.9875  0.9825  0.9900  0.9950  0.9775  0.9900  0.9975  0.9925  0.9925  0.9706
LFM-BPSK   0.2375  0.2300  0.2750  0.2325  0.2825  0.3150  0.3900  0.3875  0.4350  0.4850  0.4700  0.4750  0.4300  0.3573
Average    0.5625  0.5970  0.6370  0.6570  0.6820  0.7007  0.7370  0.7455  0.7730  0.7939  0.8073  0.8268  0.8193  0.7184
Table A2. The fully supervised classification results with the datasets in Situation Dataset 2.

SNR (dB)   −12     −11     −10     −9      −8      −7      −6      −5      −4      −3      −2      −1      0       Average
SCF        0.9950  0.9925  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9990
LFM        0.4500  0.5725  0.6775  0.7725  0.8075  0.9000  0.9600  0.9700  0.9800  0.9900  0.9975  0.9975  0.9975  0.8517
SFM        0.9250  0.9700  0.9850  0.9925  0.9950  0.9975  0.9975  0.9950  0.9975  0.9975  0.9975  1.0000  0.9925  0.9879
BFSK       0.9800  0.9875  0.9950  0.9950  0.9975  0.9925  0.9975  0.9975  1.0000  0.9975  1.0000  1.0000  1.0000  0.9954
QFSK       0.9800  0.9875  0.9975  0.9975  1.0000  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9969
EQFM       0.8825  0.9425  0.9775  0.9825  0.9950  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9829
DLFM       0.5125  0.6000  0.7325  0.8375  0.9075  0.9300  0.9500  0.9600  0.9875  0.9900  0.9975  0.9950  1.0000  0.8769
MLFM       0.5200  0.5150  0.6400  0.6775  0.7400  0.8275  0.8800  0.9275  0.9475  0.9675  0.9675  0.9650  0.9725  0.8113
BPSK       0.9925  0.9975  1.0000  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9990
FRANK      0.9950  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9996
LFM-BPSK   0.4400  0.4425  0.5100  0.5650  0.6000  0.6375  0.7175  0.7900  0.8925  0.9275  0.9550  0.9675  0.9900  0.7258
Average    0.7884  0.8189  0.8650  0.8925  0.9130  0.9345  0.9548  0.9673  0.9823  0.9882  0.9923  0.9932  0.9957  0.9297
Table A3. The fully supervised classification results with the datasets in Situation Dataset 3.

SNR (dB)   −12     −11     −10     −9      −8      −7      −6      −5      −4      −3      −2      −1      0       Average
SCF        0.9975  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9996
LFM        0.4425  0.5575  0.6400  0.7200  0.7525  0.7975  0.8750  0.9300  0.9600  0.9775  0.9825  0.9975  0.9950  0.8175
SFM        0.9150  0.9475  0.9825  0.9850  0.9975  0.9900  0.9900  0.9925  1.0000  0.9925  0.9950  0.9950  0.9875  0.9823
BFSK       0.9750  0.9800  0.9900  0.9950  0.9950  0.9950  0.9925  0.9975  1.0000  1.0000  1.0000  0.9975  0.9950  0.9933
QFSK       0.9675  0.9800  0.9950  0.9925  0.9950  0.9975  0.9975  1.0000  0.9975  0.9975  1.0000  1.0000  1.0000  0.9938
EQFM       0.9250  0.9850  0.9950  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9925
DLFM       0.7125  0.7900  0.8700  0.9225  0.9550  0.9700  0.9850  0.9875  1.0000  0.9950  0.9925  0.9975  1.0000  0.9367
MLFM       0.5600  0.5775  0.6250  0.6600  0.7325  0.7900  0.8625  0.9000  0.9150  0.9350  0.9625  0.9575  0.9800  0.8044
BPSK       0.9850  0.9975  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9985
FRANK      0.9950  0.9975  1.0000  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  0.9975  0.9925  0.9950  0.9875  0.9971
LFM-BPSK   0.4650  0.4725  0.5300  0.6175  0.6625  0.7325  0.8325  0.8725  0.9600  0.9625  0.9725  0.9800  0.9975  0.7737
Average    0.8127  0.8439  0.8750  0.8989  0.9173  0.9339  0.9577  0.9709  0.9848  0.9870  0.9907  0.9927  0.9948  0.9354

Appendix A.2. The Classification Results of the Experiments

Table A4. Semi-supervised classification results of our proposed method with the datasets in Situation Dataset 4.

SNR (dB)   −12     −11     −10     −9      −8      −7      −6      −5      −4      −3      −2      −1      0       Average
SCF        0.9975  0.9900  0.9975  0.9975  0.9900  0.9950  0.9975  0.9925  0.9925  0.9900  0.9925  0.9925  0.9900  0.9935
LFM        0.4075  0.5025  0.5450  0.6300  0.6300  0.6875  0.7675  0.7900  0.7700  0.7950  0.8050  0.8350  0.8375  0.6925
SFM        0.8600  0.9375  0.9700  0.9800  0.9875  0.9975  0.9925  0.9950  1.0000  1.0000  1.0000  0.9975  0.9975  0.9781
BFSK       0.9675  0.9750  0.9700  0.9850  0.9875  0.9950  0.9975  0.9950  0.9975  0.9950  0.9950  0.9925  0.9975  0.9885
QFSK       0.9825  0.9750  0.9900  0.9850  0.9950  0.9925  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9938
EQFM       0.7850  0.8675  0.9300  0.9500  0.9925  0.9775  0.9925  1.0000  0.9975  1.0000  1.0000  1.0000  1.0000  0.9610
DLFM       0.3900  0.4225  0.5500  0.6000  0.6975  0.7875  0.8325  0.8650  0.8975  0.8850  0.9325  0.9175  0.9000  0.7444
MLFM       0.5000  0.5250  0.6175  0.6675  0.6125  0.6600  0.6475  0.6325  0.6650  0.6425  0.6450  0.6525  0.5975  0.6204
BPSK       0.9925  0.9900  0.9925  0.9950  0.9950  0.9975  1.0000  0.9975  1.0000  1.0000  1.0000  1.0000  0.9950  0.9965
FRANK      0.9825  0.9950  0.9975  1.0000  0.9950  1.0000  0.9975  1.0000  0.9950  0.9975  0.9925  0.9975  0.9850  0.9950
LFM-BPSK   0.2725  0.3600  0.3900  0.4700  0.5725  0.6325  0.7075  0.7700  0.8325  0.8750  0.8875  0.8825  0.9400  0.6610
Average    0.7398  0.7764  0.8136  0.8418  0.8595  0.8839  0.9030  0.9125  0.9225  0.9255  0.9318  0.9334  0.9309  0.8750
Table A5. Semi-supervised classification results of our proposed method with the datasets in Situation Dataset 5.

SNR (dB)   −12     −11     −10     −9      −8      −7      −6      −5      −4      −3      −2      −1      0       Average
SCF        1.0000  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9998
LFM        0.2975  0.3475  0.3500  0.3775  0.4400  0.4975  0.5075  0.5350  0.6025  0.6775  0.6975  0.7475  0.7375  0.5242
SFM        0.9075  0.9525  0.9725  0.9850  0.9900  0.9975  0.9900  0.9900  1.0000  0.9950  0.9975  0.9975  0.9925  0.9821
BFSK       0.8400  0.9400  0.9550  0.9750  0.9775  0.9875  0.9925  0.9850  1.0000  0.9950  0.9900  0.9975  0.9925  0.9713
QFSK       0.9750  0.9850  1.0000  0.9925  0.9950  0.9900  0.9950  0.9950  0.9975  0.9975  1.0000  1.0000  1.0000  0.9940
EQFM       0.8550  0.9475  0.9650  0.9850  0.9975  0.9875  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9798
DLFM       0.3600  0.4500  0.5900  0.6625  0.7625  0.8275  0.8950  0.9175  0.9375  0.9550  0.9800  0.9700  0.9625  0.7900
MLFM       0.5275  0.5125  0.6075  0.6575  0.6175  0.6750  0.7275  0.6900  0.6575  0.6650  0.7250  0.7325  0.6375  0.6487
BPSK       0.9750  0.9825  0.9900  0.9875  1.0000  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9948
FRANK      0.9700  0.9875  0.9975  0.9975  0.9975  1.0000  1.0000  1.0000  1.0000  0.9950  0.9975  0.9975  0.9925  0.9948
LFM-BPSK   0.3275  0.4125  0.5575  0.6325  0.7375  0.7700  0.8250  0.8850  0.9100  0.9350  0.9600  0.9625  0.9700  0.7604
Average    0.7305  0.7741  0.8168  0.8411  0.8650  0.8845  0.9030  0.9089  0.9186  0.9286  0.9407  0.9459  0.9350  0.8764

Appendix A.3. The Classification Results of the Experiments

Table A6. Semi-supervised classification results with the datasets in Situation Dataset 4 ($T_{cycle}=26$, $NUM=2145$).

SNR (dB)   −12     −11     −10     −9      −8      −7      −6      −5      −4      −3      −2      −1      0       Average
SCF        0.9750  0.9675  0.9625  0.9625  0.9650  0.9750  0.9675  0.9800  0.9700  0.9550  0.9525  0.9625  0.9500  0.9650
LFM        0.4725  0.4925  0.5675  0.6625  0.7075  0.7325  0.7600  0.7825  0.7675  0.8000  0.7875  0.8300  0.7975  0.7046
SFM        0.8925  0.9400  0.9700  0.9775  0.9875  0.9900  0.9850  0.9900  0.9950  0.9975  0.9975  0.9950  0.9925  0.9777
BFSK       0.9425  0.9675  0.9925  0.9750  0.9900  0.9875  0.9950  0.9950  0.9950  0.9950  0.9975  0.9950  0.9950  0.9863
QFSK       0.9500  0.9750  0.9850  0.9775  0.9875  0.9925  0.9975  0.9950  1.0000  1.0000  0.9975  1.0000  1.0000  0.9890
EQFM       0.8800  0.9525  0.9625  0.9850  1.0000  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9829
DLFM       0.3350  0.3775  0.5600  0.6200  0.6975  0.8100  0.8850  0.9025  0.9150  0.9100  0.9250  0.9325  0.9125  0.7525
MLFM       0.4250  0.4775  0.5575  0.6400  0.5800  0.6450  0.6475  0.6975  0.6525  0.6625  0.6500  0.6825  0.6175  0.6104
BPSK       0.9325  0.9525  0.9500  0.9600  0.9775  0.9675  0.9875  0.9875  0.9875  0.9800  0.9850  0.9825  0.9700  0.9708
FRANK      0.9675  0.9825  0.9825  0.9975  0.9975  0.9950  0.9950  0.9975  0.9975  0.9950  0.9975  0.9975  0.9975  0.9923
LFM-BPSK   0.2675  0.2975  0.3675  0.4300  0.4925  0.5875  0.6550  0.7500  0.8400  0.8825  0.8925  0.9225  0.9475  0.6410
Average    0.7309  0.7620  0.8052  0.8352  0.8530  0.8800  0.8977  0.9161  0.9200  0.9252  0.9257  0.9364  0.9255  0.8702
Table A7. Semi-supervised classification results with the datasets in Situation Dataset 5 ($T_{cycle}=26$, $NUM=2145$).

SNR (dB)   −12     −11     −10     −9      −8      −7      −6      −5      −4      −3      −2      −1      0       Average
SCF        0.9750  0.9650  0.9675  0.9575  0.9600  0.9600  0.9700  0.9875  0.9625  0.9425  0.9575  0.9650  0.9500  0.9631
LFM        0.3650  0.4200  0.4350  0.4550  0.5300  0.5425  0.6075  0.6525  0.7100  0.7500  0.7725  0.8100  0.8225  0.6056
SFM        0.8400  0.9050  0.9450  0.9550  0.9750  0.9900  0.9775  0.9900  1.0000  0.9925  0.9950  0.9975  0.9850  0.9652
BFSK       0.9300  0.9225  0.9200  0.9525  0.9625  0.9700  0.9800  0.9775  0.9875  0.9850  0.9800  0.9875  0.9825  0.9644
QFSK       0.8950  0.9150  0.9500  0.9300  0.9550  0.9800  0.9850  0.9975  0.9975  1.0000  0.9975  1.0000  1.0000  0.9694
EQFM       0.8700  0.9400  0.9575  0.9775  0.9950  0.9775  0.9950  0.9950  1.0000  0.9975  0.9975  0.9975  1.0000  0.9769
DLFM       0.2925  0.3300  0.5450  0.6500  0.6925  0.7550  0.8375  0.8775  0.9025  0.8975  0.9150  0.9350  0.9325  0.7356
MLFM       0.4300  0.4650  0.5600  0.5850  0.5250  0.6450  0.6050  0.6675  0.6100  0.6375  0.6625  0.6550  0.6100  0.5890
BPSK       0.9500  0.9600  0.9550  0.9725  0.9625  0.9700  0.9875  0.9850  0.9825  0.9925  0.9950  0.9825  0.9875  0.9756
FRANK      0.9400  0.9750  0.9950  0.9875  0.9925  0.9900  0.9850  0.9900  0.9850  0.9725  0.9675  0.9550  0.9750  0.9777
LFM-BPSK   0.2925  0.3850  0.4850  0.5825  0.6600  0.7375  0.7700  0.8275  0.8550  0.8675  0.8650  0.9125  0.9125  0.7040
Average    0.7073  0.7439  0.7923  0.8186  0.8373  0.8652  0.8818  0.9043  0.9084  0.9123  0.9186  0.9270  0.9234  0.8570
Table A8. Semi-supervised classification results with the datasets in Situation Dataset 4 ($T_{cycle}=50$, $NUM=1102$).

SNR (dB)   −12     −11     −10     −9      −8      −7      −6      −5      −4      −3      −2      −1      0       Average
SCF        0.9975  1.0000  0.9975  1.0000  0.9975  1.0000  0.9950  1.0000  0.9950  0.9800  0.9750  0.9925  0.9750  0.9927
LFM        0.3250  0.3600  0.3850  0.4875  0.5575  0.5400  0.6200  0.6800  0.6750  0.7325  0.7375  0.7725  0.7650  0.5875
SFM        0.9175  0.9375  0.9775  0.9900  0.9925  0.9975  0.9950  0.9950  1.0000  1.0000  1.0000  0.9975  0.9950  0.9842
BFSK       0.9725  0.9700  0.9850  0.9825  0.9950  0.9925  0.9850  0.9950  0.9975  0.9975  0.9950  0.9925  0.9900  0.9885
QFSK       0.9000  0.9250  0.9525  0.9775  0.9850  0.9925  0.9975  0.9975  1.0000  1.0000  1.0000  1.0000  1.0000  0.9790
EQFM       0.9175  0.9700  0.9725  0.9925  0.9975  0.9975  0.9950  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9879
DLFM       0.3300  0.4000  0.5250  0.5925  0.6900  0.7275  0.7800  0.8100  0.8875  0.8900  0.9225  0.9525  0.9500  0.7275
MLFM       0.4475  0.4975  0.5775  0.6750  0.7025  0.7125  0.7625  0.7850  0.8050  0.8075  0.8650  0.8575  0.7900  0.7142
BPSK       0.9200  0.9175  0.9375  0.9525  0.9425  0.9500  0.9650  0.9725  0.9575  0.9500  0.9500  0.9350  0.9250  0.9442
FRANK      0.9750  0.9975  0.9925  0.9975  0.9950  1.0000  1.0000  1.0000  1.0000  0.9975  1.0000  1.0000  1.0000  0.9965
LFM-BPSK   0.3000  0.3925  0.4375  0.5450  0.5950  0.6800  0.7650  0.8375  0.8700  0.9125  0.9375  0.9400  0.9650  0.7060
Average    0.7275  0.7607  0.7945  0.8357  0.8591  0.8718  0.8964  0.9157  0.9261  0.9334  0.9439  0.9491  0.9414  0.8735
Table A9. Semi-supervised classification results with the datasets in Situation Dataset 5 (T_cycle = 50, NUM = 1102).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9950 0.9975 0.9975 0.9925 0.9950 1.0000 1.0000 1.0000 0.9975 0.9950 0.9950 0.9975 0.9950  0.9967
LFM  0.2225 0.3350 0.3325 0.3900 0.4475 0.5550 0.6250 0.6675 0.6850 0.7825 0.7900 0.8000 0.8250  0.5737
SFM  0.8600 0.9275 0.9475 0.9700 0.9925 0.9950 0.9950 0.9925 1.0000 0.9950 1.0000 1.0000 0.9900  0.9742
BFSK  0.8175 0.9000 0.9100 0.9550 0.9675 0.9825 0.9850 0.9850 0.9850 0.9750 0.9850 0.9775 0.9800  0.9542
QFSK  0.9975 0.9950 0.9975 0.9975 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9950  0.9985
EQFM  0.8575 0.9225 0.9525 0.9725 0.9925 0.9750 0.9950 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000  0.9742
DLFM  0.4050 0.3825 0.5050 0.6175 0.6975 0.7400 0.8475 0.8650 0.9275 0.9275 0.9625 0.9575 0.9550  0.7531
MLFM  0.4475 0.4925 0.5575 0.6475 0.6675 0.6975 0.7325 0.7450 0.7800 0.8300 0.8450 0.8250 0.8050  0.6979
BPSK  0.9825 0.9875 0.9775 0.9650 0.9750 0.9775 0.9925 0.9950 0.9975 1.0000 0.9975 0.9950 1.0000  0.9879
FRANK  0.9775 0.9950 0.9950 0.9975 0.9975 1.0000 1.0000 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000  0.9969
LFM-BPSK  0.2325 0.3275 0.4175 0.5450 0.6275 0.6825 0.7900 0.7950 0.8775 0.9300 0.9400 0.9375 0.9600  0.6971
Average  0.7086 0.7511 0.7809 0.8227 0.8509 0.8730 0.9057 0.9132 0.9314 0.9486 0.9559 0.9536 0.9550  0.8731
Table A10. Semi-supervised classification results with the datasets in Situation Dataset 4 (T_cycle = 200, NUM = 276).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9975 1.0000 0.9975 0.9975 1.0000 0.9975 0.9950 1.0000 1.0000 1.0000 0.9975 1.0000 0.9900  0.9979
LFM  0.2625 0.3825 0.4275 0.5300 0.5975 0.6400 0.7300 0.7275 0.7600 0.8000 0.8050 0.8175 0.8175  0.6383
SFM  0.8925 0.9475 0.9750 0.9800 0.9875 1.0000 0.9950 0.9950 1.0000 0.9975 1.0000 0.9975 0.9900  0.9813
BFSK  0.9175 0.9300 0.9500 0.9750 0.9825 0.9850 0.9900 0.9800 0.9975 0.9925 0.9875 0.9900 0.9850  0.9740
QFSK  0.9900 0.9925 0.9950 0.9950 0.9950 0.9900 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9967
EQFM  0.9075 0.9400 0.9725 0.9825 0.9975 0.9925 0.9975 1.0000 1.0000 0.9975 0.9975 1.0000 1.0000  0.9835
DLFM  0.3550 0.4375 0.6050 0.6800 0.7675 0.8725 0.9050 0.9500 0.9875 0.9950 0.9925 0.9950 0.9800  0.8094
MLFM  0.5825 0.5625 0.6725 0.6750 0.7050 0.7175 0.7725 0.7375 0.7250 0.7400 0.7450 0.7600 0.6725  0.6975
BPSK  0.9550 0.9675 0.9750 0.9950 0.9950 0.9950 1.0000 0.9975 0.9950 0.9975 0.9875 0.9725 0.9675  0.9846
FRANK  0.9700 0.9825 0.9900 0.9900 0.9950 1.0000 0.9950 0.9975 0.9975 0.9975 1.0000 0.9975 0.9925  0.9927
LFM-BPSK  0.3250 0.4075 0.4400 0.5175 0.6050 0.6900 0.7675 0.8250 0.8850 0.9200 0.9150 0.9450 0.9575  0.7077
Average  0.7414 0.7773 0.8182 0.8470 0.8752 0.8982 0.9225 0.9282 0.9407 0.9489 0.9480 0.9523 0.9411  0.8876
Table A11. Semi-supervised classification results with the datasets in Situation Dataset 5 (T_cycle = 200, NUM = 276).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 0.9975 1.0000 0.9975 0.9875 0.9825 0.9950 0.9775  0.9948
LFM  0.3325 0.4375 0.5150 0.5600 0.6125 0.6850 0.7500 0.7550 0.7925 0.8075 0.8075 0.8275 0.8200  0.6694
SFM  0.8175 0.9100 0.9475 0.9800 0.9875 0.9975 0.9975 0.9925 0.9975 0.9975 1.0000 1.0000 0.9950  0.9708
BFSK  0.9275 0.9575 0.9750 0.9925 0.9950 0.9900 0.9925 0.9975 1.0000 0.9950 0.9975 0.9900 0.9975  0.9852
QFSK  0.9925 0.9900 0.9925 0.9950 0.9950 0.9925 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9967
EQFM  0.8675 0.9450 0.9775 0.9800 0.9975 0.9900 0.9975 0.9900 0.9900 1.0000 0.9950 0.9975 1.0000  0.9790
DLFM  0.4075 0.4475 0.5950 0.6750 0.7450 0.8475 0.8800 0.9100 0.9450 0.9200 0.9500 0.9750 0.9550  0.7887
MLFM  0.5625 0.5125 0.5950 0.5900 0.5750 0.6325 0.6450 0.6900 0.6525 0.7100 0.7275 0.7200 0.6550  0.6360
BPSK  0.9325 0.9525 0.9775 0.9800 0.9875 0.9800 0.9875 0.9950 0.9975 0.9925 0.9975 0.9975 0.9975  0.9827
FRANK  0.9850 0.9925 0.9950 1.0000 0.9925 0.9950 0.9975 0.9925 0.9950 0.9950 0.9950 0.9975 1.0000  0.9948
LFM-BPSK  0.3575 0.3900 0.4600 0.5850 0.6450 0.7250 0.7750 0.8275 0.8700 0.9125 0.9150 0.9325 0.9500  0.7188
Average  0.7436 0.7757 0.8209 0.8489 0.8666 0.8941 0.9109 0.9227 0.9307 0.9380 0.9425 0.9484 0.9407  0.8834

Appendix A.4. The Classification Results of the Experiments without Self-Paced Data Augmentation

Table A12. Semi-supervised classification results with the datasets in Situation Dataset 4 without self-paced data augmentation (T_cycle = 26, NUM = 2145).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9996
LFM  0.1725 0.2175 0.2200 0.2475 0.3425 0.3850 0.4425 0.4875 0.5275 0.5475 0.5875 0.6050 0.6050  0.4144
SFM  0.8950 0.9350 0.9525 0.9750 0.9675 0.9800 0.9825 0.9850 1.0000 0.9900 0.9900 0.9875 0.9825  0.9710
BFSK  0.9700 0.9825 0.9700 0.9850 0.9975 0.9900 0.9925 0.9900 1.0000 0.9925 0.9925 0.9975 0.9950  0.9888
QFSK  0.9750 0.9750 0.9775 0.9675 0.9950 0.9925 0.9975 0.9950 1.0000 0.9975 1.0000 1.0000 1.0000  0.9902
EQFM  0.8125 0.8925 0.9500 0.9775 0.9900 0.9850 0.9950 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000  0.9692
DLFM  0.3650 0.3950 0.5000 0.6325 0.7525 0.8200 0.8950 0.9025 0.9575 0.9575 0.9875 0.9825 0.9925  0.7800
MLFM  0.4600 0.4300 0.4550 0.4900 0.4900 0.5150 0.4575 0.4775 0.4275 0.5125 0.4800 0.5175 0.5000  0.4779
BPSK  0.9625 0.9800 0.9825 0.9850 0.9875 0.9700 0.9900 0.9925 0.9825 0.9800 0.9800 0.9900 0.9950  0.9829
FRANK  0.9800 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000  0.9981
LFM-BPSK  0.1150 0.0825 0.1450 0.1450 0.1450 0.1625 0.1725 0.2700 0.2600 0.2875 0.3050 0.3475 0.3375  0.2135
Average  0.7005 0.7170 0.7409 0.7641 0.7880 0.8000 0.8114 0.8273 0.8318 0.8423 0.8475 0.8570 0.8552  0.7987
Table A13. Semi-supervised classification results with the datasets in Situation Dataset 5 without self-paced data augmentation (T_cycle = 26, NUM = 2145).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9950 0.9875 0.9925 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9977
LFM  0.3125 0.3225 0.3750 0.3800 0.4025 0.4025 0.4725 0.4450 0.4700 0.4900 0.4775 0.4875 0.4750  0.4240
SFM  0.9225 0.9500 0.9725 0.9825 0.9825 0.9900 0.9825 0.9975 1.0000 0.9900 0.9950 0.9900 0.9875  0.9802
BFSK  0.9125 0.9600 0.9625 0.9925 0.9950 0.9950 0.9975 0.9900 0.9975 0.9950 0.9925 0.9975 0.9950  0.9833
QFSK  0.9800 0.9875 0.9950 0.9900 1.0000 0.9975 0.9950 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9958
EQFM  0.8425 0.9025 0.9475 0.9675 0.9950 0.9775 1.0000 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000  0.9715
DLFM  0.2475 0.2800 0.3925 0.5025 0.6375 0.7475 0.8275 0.8825 0.9425 0.9700 0.9875 0.9950 0.9975  0.7238
MLFM  0.4725 0.4675 0.4225 0.4675 0.4325 0.4500 0.4475 0.4350 0.4125 0.4525 0.4775 0.5150 0.4725  0.4558
BPSK  0.9300 0.9400 0.9425 0.9375 0.9550 0.9350 0.9625 0.9375 0.9425 0.9450 0.9300 0.9200 0.9225  0.9385
FRANK  0.9725 0.9950 0.9975 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9971
LFM-BPSK  0.1925 0.1850 0.2300 0.2175 0.2325 0.2600 0.2925 0.3525 0.4350 0.3900 0.4225 0.4925 0.4125  0.3165
Average  0.7073 0.7252 0.7482 0.7668 0.7843 0.7959 0.8161 0.8218 0.8361 0.8393 0.8439 0.8543 0.8420  0.7986
Table A14. Semi-supervised classification results with the datasets in Situation Dataset 4 without self-paced data augmentation (T_cycle = 50, NUM = 1102).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  1.0000
LFM  0.2900 0.2750 0.3300 0.3500 0.3325 0.3325 0.3450 0.3750 0.4425 0.4275 0.4725 0.5000 0.5025  0.3827
SFM  0.8825 0.9325 0.9550 0.9775 0.9800 0.9900 0.9825 0.9875 1.0000 0.9925 0.9925 0.9875 0.9825  0.9725
BFSK  0.9425 0.9700 0.9850 0.9925 0.9950 0.9950 0.9975 0.9925 1.0000 0.9950 1.0000 0.9975 0.9950  0.9890
QFSK  0.9950 0.9900 1.0000 0.9950 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9985
EQFM  0.8225 0.9125 0.9450 0.9750 0.9975 0.9850 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9719
DLFM  0.1775 0.2050 0.3500 0.4800 0.5800 0.6900 0.7650 0.8275 0.9075 0.9350 0.9725 0.9925 0.9675  0.6808
MLFM  0.5825 0.5225 0.5200 0.5100 0.4825 0.4700 0.4250 0.4450 0.3975 0.4575 0.4250 0.4375 0.4050  0.4677
BPSK  0.9775 0.9825 0.9875 0.9850 0.9950 0.9975 0.9975 1.0000 0.9975 1.0000 0.9975 1.0000 1.0000  0.9937
FRANK  0.9750 0.9925 0.9950 1.0000 1.0000 0.9975 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000  0.9967
LFM-BPSK  0.1875 0.1850 0.2600 0.2550 0.3100 0.3525 0.3475 0.4075 0.4000 0.4125 0.4225 0.4150 0.3900  0.3342
Average  0.7120 0.7243 0.7570 0.7745 0.7884 0.8009 0.8052 0.8211 0.8314 0.8382 0.8439 0.8482 0.8402  0.7989
Table A15. Semi-supervised classification results with the datasets in Situation Dataset 5 without self-paced data augmentation (T_cycle = 50, NUM = 1102).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9950 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9992
LFM  0.3075 0.2650 0.2975 0.2850 0.3750 0.3300 0.3275 0.3700 0.3950 0.4375 0.4500 0.4525 0.4475  0.3646
SFM  0.8825 0.9175 0.9675 0.9750 0.9775 0.9850 0.9875 0.9950 1.0000 0.9950 1.0000 0.9925 0.9875  0.9740
BFSK  0.8550 0.9200 0.9400 0.9550 0.9800 0.9900 0.9925 0.9725 0.9950 0.9900 0.9900 0.9925 0.9925  0.9665
QFSK  0.9825 0.9925 0.9950 0.9850 0.9850 0.9975 0.9925 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9946
EQFM  0.8150 0.9025 0.9425 0.9675 0.9950 0.9825 0.9975 0.9975 0.9975 0.9975 0.9950 1.0000 1.0000  0.9685
DLFM  0.0875 0.0750 0.1600 0.3150 0.4550 0.6175 0.7425 0.8100 0.8875 0.8950 0.9500 0.9675 0.9600  0.6094
MLFM  0.5025 0.5325 0.6075 0.6200 0.6050 0.6350 0.6675 0.6600 0.5875 0.6825 0.6775 0.6550 0.6350  0.6206
BPSK  0.9900 0.9975 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000  0.9985
FRANK  0.9600 0.9850 0.9850 0.9875 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9975  0.9931
LFM-BPSK  0.1425 0.1600 0.1800 0.2200 0.2100 0.2575 0.2875 0.3400 0.3625 0.3875 0.4125 0.4625 0.4325  0.2965
Average  0.6836 0.7041 0.7336 0.7552 0.7800 0.7993 0.8177 0.8314 0.8384 0.8532 0.8614 0.8657 0.8593  0.7987
Table A16. Semi-supervised classification results with the datasets in Situation Dataset 4 without self-paced data augmentation (T_cycle = 100, NUM = 551).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9998
LFM  0.1875 0.1850 0.2600 0.2650 0.3325 0.3450 0.3925 0.3900 0.4175 0.4150 0.4725 0.4700 0.5175  0.3577
SFM  0.9250 0.9450 0.9800 0.9800 0.9800 0.9925 0.9825 0.9925 1.0000 0.9950 1.0000 0.9900 0.9875  0.9808
BFSK  0.9525 0.9700 0.9750 0.9975 0.9950 0.9900 0.9950 0.9950 0.9975 0.9975 0.9950 0.9950 0.9975  0.9887
QFSK  0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9998
EQFM  0.8100 0.9100 0.9375 0.9675 0.9875 0.9900 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9694
DLFM  0.1375 0.2050 0.3600 0.4825 0.5950 0.7300 0.8125 0.8675 0.9275 0.9275 0.9650 0.9900 0.9875  0.6913
MLFM  0.5500 0.5800 0.6200 0.6225 0.5825 0.6075 0.6100 0.6050 0.5800 0.6325 0.6025 0.6325 0.5800  0.6004
BPSK  0.9800 0.9925 0.9950 0.9950 0.9975 1.0000 0.9950 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9965
FRANK  0.9775 0.9950 0.9975 1.0000 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9973
LFM-BPSK  0.2475 0.2325 0.2700 0.2975 0.2900 0.3450 0.3400 0.4125 0.3850 0.4250 0.3925 0.4400 0.4375  0.3473
Average  0.7057 0.7286 0.7632 0.7825 0.7961 0.8180 0.8298 0.8420 0.8461 0.8539 0.8570 0.8652 0.8643  0.8117
Table A17. Semi-supervised classification results with the datasets in Situation Dataset 5 without self-paced data augmentation (T_cycle = 100, NUM = 551).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9900 0.9875 0.9900 0.9875 1.0000 0.9950 1.0000 1.0000 1.0000 0.9950 0.9850 0.9500 0.9325  0.9856
LFM  0.3850 0.3275 0.4050 0.3600 0.3625 0.3200 0.3650 0.3600 0.4300 0.4300 0.4300 0.4425 0.4725  0.3915
SFM  0.8400 0.9100 0.9400 0.9575 0.9775 0.9775 0.9700 0.9825 1.0000 0.9875 0.9850 0.9850 0.9825  0.9612
BFSK  0.9225 0.9375 0.9525 0.9850 0.9800 0.9850 0.9875 0.9850 0.9975 0.9925 0.9900 0.9950 0.9925  0.9771
QFSK  0.9575 0.9575 0.9775 0.9750 0.9850 0.9825 0.9850 0.9975 0.9925 0.9925 1.0000 1.0000 1.0000  0.9848
EQFM  0.9125 0.9550 0.9650 0.9725 0.9975 0.9900 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9840
DLFM  0.2550 0.2925 0.5050 0.6500 0.7650 0.8800 0.9350 0.9600 0.9925 0.9875 0.9925 1.0000 1.0000  0.7858
MLFM  0.4025 0.4625 0.5400 0.5750 0.5650 0.6075 0.6200 0.6025 0.6150 0.6450 0.6275 0.6725 0.6375  0.5825
BPSK  0.9525 0.9700 0.9900 0.9925 0.9950 1.0000 0.9950 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9919
FRANK  0.9800 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9981
LFM-BPSK  0.2175 0.2275 0.2325 0.2725 0.3325 0.3700 0.4125 0.4725 0.4900 0.4550 0.5150 0.5750 0.5550  0.3944
Average  0.7105 0.7295 0.7723 0.7934 0.8145 0.8280 0.8427 0.8509 0.8652 0.8623 0.8659 0.8745 0.8702  0.8215
Table A18. Semi-supervised classification results with the datasets in Situation Dataset 4 without self-paced data augmentation (T_cycle = 200, NUM = 276).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9925 0.9925 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9985
LFM  0.2350 0.2150 0.2300 0.2450 0.2750 0.2850 0.3375 0.3525 0.3575 0.4150 0.4125 0.4425 0.4150  0.3244
SFM  0.8950 0.9425 0.9725 0.9775 0.9825 0.9950 0.9825 0.9900 1.0000 0.9950 0.9975 0.9875 0.9825  0.9769
BFSK  0.8800 0.9350 0.9600 0.9850 0.9950 0.9925 0.9975 0.9975 1.0000 1.0000 0.9975 0.9975 0.9975  0.9796
QFSK  0.9950 0.9950 1.0000 1.0000 1.0000 1.0000 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000  0.9988
EQFM  0.8325 0.8725 0.9325 0.9600 0.9825 0.9675 0.9975 1.0000 1.0000 1.0000 0.9975 0.9975 1.0000  0.9646
DLFM  0.1775 0.1875 0.2975 0.4300 0.5175 0.6900 0.7925 0.8525 0.9250 0.9600 0.9675 0.9825 0.9975  0.6752
MLFM  0.5275 0.5450 0.6675 0.7200 0.7125 0.7750 0.8250 0.8350 0.8675 0.8900 0.9025 0.8900 0.8900  0.7729
BPSK  0.9775 0.9850 0.9975 0.9875 0.9925 0.9900 0.9950 0.9975 0.9925 0.9975 0.9950 0.9925 0.9975  0.9921
FRANK  0.9850 0.9975 0.9975 0.9925 0.9975 0.9975 1.0000 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000  0.9971
LFM-BPSK  0.3400 0.3250 0.4025 0.4000 0.3725 0.4725 0.4800 0.5100 0.5000 0.5400 0.5825 0.6050 0.5775  0.4698
Average  0.7125 0.7266 0.7686 0.7905 0.8025 0.8332 0.8550 0.8664 0.8764 0.8907 0.8957 0.8995 0.8961  0.8318
Table A19. Semi-supervised classification results with the datasets in Situation Dataset 5 without self-paced data augmentation (T_cycle = 200, NUM = 276).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9950 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9996
LFM  0.3300 0.2650 0.2850 0.2675 0.2950 0.2350 0.3075 0.2575 0.2825 0.2875 0.3000 0.2900 0.2900  0.2840
SFM  0.8400 0.9075 0.9300 0.9600 0.9700 0.9825 0.9775 0.9900 0.9975 0.9875 0.9850 0.9825 0.9800  0.9608
BFSK  0.9125 0.9425 0.9575 0.9850 0.9800 0.9875 0.9950 0.9875 0.9925 0.9900 0.9900 0.9950 0.9950  0.9777
QFSK  0.9675 0.9675 0.9850 0.9825 0.9950 0.9850 0.9875 0.9950 1.0000 0.9975 1.0000 1.0000 0.9975  0.9892
EQFM  0.7850 0.8950 0.9600 0.9650 0.9975 0.9800 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9677
DLFM  0.2325 0.2825 0.4200 0.5650 0.6775 0.7775 0.8300 0.8500 0.9150 0.9575 0.9700 0.9925 0.9925  0.7279
MLFM  0.4900 0.4525 0.5450 0.5900 0.5575 0.6550 0.7125 0.7475 0.7575 0.8300 0.8525 0.8650 0.8775  0.6871
BPSK  0.9875 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9988
FRANK  0.9725 0.9800 0.9950 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9958
LFM-BPSK  0.2700 0.3175 0.3800 0.4025 0.4750 0.5225 0.6100 0.6450 0.6850 0.6675 0.6700 0.7150 0.6650  0.5404
Average  0.7075 0.7282 0.7686 0.7923 0.8134 0.8295 0.8561 0.8611 0.8755 0.8834 0.8880 0.8945 0.8907  0.8299

Appendix A.5. The Classification Results of the Experiments with Different Strategies of Self-Paced Data Augmentation

Table A20. Semi-supervised classification results with the datasets in Situation Dataset 4 (d = 1 constant).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9725 0.9650 0.9525 0.9625 0.9550 0.9575 0.9425 0.9750 0.9450 0.9225 0.9250 0.9425 0.9425  0.9508
LFM  0.3475 0.3025 0.3175 0.3700 0.3600 0.3600 0.4525 0.5075 0.5950 0.6675 0.6975 0.7700 0.8075  0.5042
SFM  0.8125 0.8750 0.9275 0.9425 0.9575 0.9675 0.9825 0.9775 1.0000 0.9875 0.9900 0.9925 0.9850  0.9537
BFSK  0.9350 0.9450 0.9650 0.9625 0.9525 0.9450 0.9475 0.9600 0.9650 0.9825 0.9575 0.9475 0.9550  0.9554
QFSK  0.7625 0.8225 0.8900 0.9125 0.9200 0.9300 0.9250 0.9575 0.9525 0.9775 0.9750 0.9900 0.9675  0.9217
EQFM  0.6150 0.7650 0.8225 0.8875 0.9500 0.9425 0.9700 0.9800 0.9875 0.9900 0.9900 0.9975 0.9975  0.9150
DLFM  0.1925 0.2700 0.3275 0.3725 0.4675 0.5600 0.7150 0.7175 0.8425 0.8825 0.9350 0.9325 0.9450  0.6277
MLFM  0.3575 0.3525 0.4275 0.4225 0.4025 0.4975 0.5250 0.5775 0.6125 0.6200 0.6225 0.6025 0.5825  0.5079
BPSK  0.8625 0.8625 0.8550 0.9025 0.8850 0.8400 0.8500 0.8625 0.8425 0.8000 0.8200 0.7975 0.8050  0.8450
FRANK  0.9450 0.9775 0.9700 0.9825 0.9925 0.9925 0.9900 0.9950 0.9950 0.9900 0.9975 0.9925 0.9925  0.9856
LFM-BPSK  0.3600 0.4175 0.4625 0.5625 0.5775 0.6775 0.7675 0.8000 0.8575 0.9250 0.9125 0.9375 0.9750  0.7102
Average  0.6511 0.6868 0.7198 0.7527 0.7655 0.7882 0.8243 0.8464 0.8723 0.8859 0.8930 0.9002 0.9050  0.8070
Table A21. Semi-supervised classification results with the datasets in Situation Dataset 5 (d = 1 constant).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9275 0.9050 0.9075 0.9050 0.9125 0.9000 0.9150 0.9125 0.9125 0.8800 0.8825 0.8825 0.9025  0.9035
LFM  0.3375 0.4025 0.4325 0.4900 0.4875 0.4925 0.5850 0.6250 0.7000 0.7225 0.7400 0.7600 0.8150  0.5838
SFM  0.8500 0.8925 0.9250 0.9325 0.9475 0.9650 0.9775 0.9725 0.9900 0.9850 0.9900 0.9850 0.9800  0.9533
BFSK  0.9350 0.9275 0.9400 0.9425 0.9275 0.9325 0.9225 0.8875 0.9175 0.9375 0.9300 0.9000 0.9025  0.9233
QFSK  0.8425 0.8300 0.9025 0.9325 0.9300 0.9300 0.9450 0.9600 0.9600 0.9775 0.9650 0.9850 0.9800  0.9338
EQFM  0.7075 0.8350 0.8925 0.9375 0.9725 0.9550 0.9750 0.9950 0.9875 0.9875 0.9925 0.9875 0.9900  0.9396
DLFM  0.2325 0.2400 0.3425 0.3825 0.4950 0.5925 0.6900 0.7075 0.7975 0.8325 0.8850 0.8700 0.8725  0.6108
MLFM  0.2850 0.2900 0.3550 0.3175 0.3850 0.4375 0.4600 0.5300 0.5600 0.5725 0.6200 0.6200 0.5550  0.4606
BPSK  0.9150 0.9250 0.9050 0.9025 0.9175 0.8800 0.9075 0.8825 0.8550 0.8625 0.8400 0.8600 0.8800  0.8871
FRANK  0.9450 0.9725 0.9825 0.9700 0.9750 0.9850 0.9650 0.9850 0.9650 0.9725 0.9725 0.9825 0.9625  0.9719
LFM-BPSK  0.1850 0.2250 0.3200 0.4500 0.4525 0.5800 0.6625 0.7675 0.8350 0.9025 0.9075 0.9375 0.9700  0.6304
Average  0.6511 0.6768 0.7186 0.7420 0.7639 0.7864 0.8186 0.8386 0.8618 0.8757 0.8841 0.8882 0.8918  0.7998
Table A22. Semi-supervised classification results with the datasets in Situation Dataset 4 (d = 0.5 constant).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9800 0.9625 0.9650 0.9525 0.9475 0.9500 0.9550 0.9725 0.9425 0.9200 0.9250 0.9450 0.9300  0.9498
LFM  0.2100 0.2125 0.3075 0.3525 0.4125 0.4525 0.5525 0.6500 0.6300 0.7025 0.7550 0.7700 0.7900  0.5229
SFM  0.7525 0.8300 0.8875 0.9325 0.9550 0.9650 0.9675 0.9800 1.0000 0.9875 0.9950 0.9875 0.9875  0.9406
BFSK  0.9375 0.9575 0.9800 0.9875 0.9950 0.9875 0.9950 0.9925 0.9875 0.9925 0.9800 0.9900 0.9750  0.9813
QFSK  0.9650 0.9650 0.9850 0.9850 0.9875 0.9875 0.9850 0.9875 0.9875 0.9975 0.9925 0.9950 0.9900  0.9854
EQFM  0.7525 0.8625 0.9050 0.9125 0.9800 0.9775 0.9825 0.9900 0.9975 1.0000 0.9950 0.9900 0.9950  0.9492
DLFM  0.2675 0.2775 0.4025 0.4575 0.6100 0.7025 0.7625 0.8375 0.8900 0.8850 0.9475 0.9500 0.9575  0.6883
MLFM  0.4800 0.5125 0.5250 0.5275 0.5275 0.6100 0.6800 0.7100 0.7500 0.7275 0.7950 0.8100 0.7300  0.6450
BPSK  0.9325 0.9550 0.9650 0.9675 0.9650 0.9575 0.9575 0.9675 0.9475 0.9375 0.9450 0.9450 0.9475  0.9531
FRANK  0.9225 0.9700 0.9600 0.9650 0.9850 0.9825 0.9750 0.9825 0.9800 0.9750 0.9750 0.9775 0.9700  0.9708
LFM-BPSK  0.2525 0.3650 0.3875 0.5050 0.5475 0.6650 0.7175 0.7600 0.8125 0.8675 0.9000 0.9025 0.9550  0.6644
Average  0.6775 0.7155 0.7518 0.7768 0.8102 0.8398 0.8664 0.8936 0.9023 0.9084 0.9277 0.9330 0.9298  0.8410
Table A23. Semi-supervised classification results with the datasets in Situation Dataset 5 (d = 0.5 constant).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9850 0.9575 0.9600 0.9625 0.9525 0.9575 0.9475 0.9850 0.9500 0.9250 0.9300 0.9425 0.9400  0.9535
LFM  0.3325 0.3275 0.3500 0.4200 0.4925 0.5250 0.6175 0.6500 0.6850 0.7325 0.7175 0.7625 0.7875  0.5692
SFM  0.8125 0.8825 0.9300 0.9500 0.9600 0.9775 0.9625 0.9800 1.0000 0.9875 0.9925 0.9950 0.9925  0.9556
BFSK  0.9325 0.9500 0.9700 0.9725 0.9825 0.9850 0.9800 0.9675 0.9825 0.9875 0.9825 0.9750 0.9650  0.9717
QFSK  0.9625 0.9500 0.9675 0.9775 0.9800 0.9725 0.9800 0.9900 0.9975 0.9975 0.9975 1.0000 0.9975  0.9823
EQFM  0.7425 0.8625 0.8825 0.9225 0.9625 0.9650 0.9700 0.9825 0.9925 0.9900 0.9800 0.9950 0.9875  0.9412
DLFM  0.3300 0.3850 0.4525 0.5150 0.6525 0.7300 0.8000 0.8175 0.8900 0.8700 0.9275 0.9450 0.9250  0.7108
MLFM  0.3950 0.4000 0.4750 0.5550 0.5025 0.6000 0.6500 0.6475 0.6775 0.6800 0.7075 0.7475 0.6875  0.5942
BPSK  0.9375 0.9625 0.9750 0.9650 0.9500 0.9525 0.9700 0.9700 0.9575 0.9350 0.9475 0.9550 0.9550  0.9563
FRANK  0.9600 0.9725 0.9775 0.9625 0.9675 0.9750 0.9725 0.9875 0.9825 0.9650 0.9675 0.9675 0.9650  0.9710
LFM-BPSK  0.2375 0.2475 0.2975 0.4500 0.4975 0.5700 0.7050 0.7850 0.8475 0.9075 0.9525 0.9425 0.9650  0.6465
Average  0.6934 0.7180 0.7489 0.7866 0.8091 0.8373 0.8686 0.8875 0.9057 0.9070 0.9184 0.9298 0.9243  0.8411
Table A24. Semi-supervised classification results with the datasets in Situation Dataset 4 (d = 0.5 decreasing linearly).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9800 0.9800 0.9950 0.9950 0.9975 1.0000 0.9975 0.9925 0.9925 0.9850 0.9850 0.9875 0.9800  0.9898
LFM  0.1700 0.2175 0.2500 0.3275 0.3825 0.4625 0.5450 0.6650 0.7450 0.8025 0.8325 0.8850 0.8825  0.5513
SFM  0.8500 0.9100 0.9500 0.9725 0.9775 0.9925 0.9850 0.9875 0.9975 0.9925 1.0000 0.9975 0.9925  0.9696
BFSK  0.9300 0.9500 0.9475 0.9750 0.9825 0.9850 0.9925 0.9850 0.9975 0.9925 0.9925 0.9975 0.9975  0.9788
QFSK  0.9900 0.9825 0.9950 0.9875 0.9900 0.9950 0.9975 1.0000 0.9975 1.0000 0.9950 1.0000 1.0000  0.9946
EQFM  0.7250 0.8600 0.9425 0.9550 0.9925 0.9850 0.9950 1.0000 0.9950 1.0000 0.9975 0.9950 0.9850  0.9560
DLFM  0.1575 0.2100 0.3375 0.4650 0.5675 0.7150 0.8125 0.8875 0.9400 0.9675 0.9875 0.9950 0.9875  0.6946
MLFM  0.5650 0.5800 0.5875 0.6000 0.5525 0.6200 0.6275 0.6800 0.7200 0.7350 0.7325 0.7075 0.6925  0.6462
BPSK  0.9850 0.9825 0.9800 0.9900 0.9850 0.9825 0.9925 0.9950 0.9975 0.9975 1.0000 0.9900 0.9975  0.9904
FRANK  0.9825 0.9950 0.9925 0.9975 1.0000 1.0000 1.0000 1.0000 0.9975 1.0000 0.9975 0.9975 0.9950  0.9965
LFM-BPSK  0.3175 0.3350 0.4650 0.5875 0.6500 0.7175 0.7575 0.8275 0.8975 0.9325 0.9475 0.9650 0.9925  0.7225
Average  0.6957 0.7275 0.7675 0.8048 0.8252 0.8595 0.8820 0.9109 0.9343 0.9459 0.9516 0.9561 0.9548  0.8628
Table A25. Semi-supervised classification results with the datasets in Situation Dataset 5 (d = 0.5 decreasing linearly).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9875 0.9875 0.9875 0.9925 0.9850 0.9950 0.9725 0.9925 0.9350 0.9025 0.9100 0.9025 0.9150  0.9588
LFM  0.2275 0.2525 0.3375 0.4050 0.4900 0.5025 0.5450 0.5975 0.6925 0.7225 0.7525 0.8175 0.8425  0.5527
SFM  0.8425 0.9125 0.9400 0.9625 0.9725 0.9925 0.9875 0.9900 1.0000 0.9925 0.9975 0.9975 0.9900  0.9675
BFSK  0.9625 0.9700 0.9750 0.9850 0.9900 0.9900 0.9950 0.9900 0.9900 0.9925 0.9900 0.9950 0.9925  0.9860
QFSK  0.9675 0.9650 0.9850 0.9775 0.9825 0.9825 0.9675 0.9900 0.9900 0.9925 0.9950 1.0000 1.0000  0.9842
EQFM  0.8400 0.8950 0.9450 0.9575 0.9875 0.9775 0.9925 0.9975 0.9950 0.9950 1.0000 0.9975 0.9975  0.9675
DLFM  0.4100 0.3600 0.4850 0.5175 0.5975 0.7350 0.8050 0.8500 0.9200 0.9675 0.9775 0.9975 1.0000  0.7402
MLFM  0.4675 0.4450 0.5650 0.5650 0.5550 0.6550 0.6850 0.7075 0.6550 0.6875 0.6775 0.6550 0.6250  0.6112
BPSK  0.9850 0.9900 0.9825 0.9925 0.9950 0.9900 0.9900 0.9875 0.9900 0.9950 0.9950 0.9775 0.9800  0.9885
FRANK  0.9800 0.9950 0.9925 0.9925 0.9950 0.9950 0.9950 0.9975 0.9975 0.9950 0.9950 0.9850 0.9750  0.9915
LFM-BPSK  0.1750 0.2525 0.3425 0.4825 0.5725 0.6800 0.8025 0.8300 0.9400 0.9550 0.9775 0.9750 0.9900  0.6904
Average  0.7132 0.7295 0.7761 0.8027 0.8293 0.8632 0.8852 0.9027 0.9186 0.9270 0.9334 0.9364 0.9370  0.8580
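For readers who wish to reproduce the three augmentation-ratio strategies compared in Tables A20 to A25, the following is a minimal Python sketch. Only the starting values (d = 1 and d = 0.5) and the linear decrease are stated in the captions; the assumption that the linear schedule reaches zero at the final training cycle, and the function name augmentation_ratio, are ours and are for illustration only.

```python
# A minimal sketch of the three self-paced augmentation-ratio strategies
# compared in Tables A20-A25. Only the start values (d = 1, d = 0.5) and
# the linear decrease come from the captions; decaying to 0 at the final
# cycle is an assumption made for illustration.

def augmentation_ratio(strategy: str, cycle: int, t_cycle: int) -> float:
    """Fraction d of samples to augment during the given training cycle."""
    if strategy == "d1_constant":
        return 1.0
    if strategy == "d05_constant":
        return 0.5
    if strategy == "d05_linear":
        # Start at 0.5 and decrease linearly to 0 at the last cycle.
        return 0.5 * (1.0 - cycle / max(t_cycle - 1, 1))
    raise ValueError(f"unknown strategy: {strategy}")


if __name__ == "__main__":
    for cycle in (0, 13, 25):
        print(cycle, round(augmentation_ratio("d05_linear", cycle, 26), 3))
```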

Appendix A.6. The Parameters of 1-D SKCNN

Input: Input Layer (shape: 16384)
Main Block 1:
- Selective Convolutional Block (filters: 16; first kernel size: 9; second kernel size: 16; nodes of first hidden layer in MLP: 8)
- Max-Pooling (stride: 6; pooling size: 6)
- BatchNormalization
Main Block 2:
- Selective Convolutional Block (filters: 32; first kernel size: 9; second kernel size: 16; nodes of first hidden layer in MLP: 8)
- Max-Pooling (stride: 6; pooling size: 6)
- BatchNormalization
Main Block 3:
- Selective Convolutional Block (filters: 64; first kernel size: 9; second kernel size: 16; nodes of first hidden layer in MLP: 8)
- Max-Pooling (stride: 6; pooling size: 6)
- BatchNormalization
Main Block 4:
- Selective Convolutional Block (filters: 128; first kernel size: 9; second kernel size: 16; nodes of first hidden layer in MLP: 8)
- Max-Pooling (stride: 6; pooling size: 6)
- BatchNormalization
Full Connection Unit: Full Connection Layer (nodes: 512; activation: "ReLU")
Output: Full Connection Layer (nodes: 11; activation: "SoftMax")
Total parameters: 1,096,467
Floating point operations: 2,191,445
Training time per 10,000 iterations with 64 batch size: 485 s
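As a companion to the listing above, the following PyTorch sketch shows one plausible reading of the 1-D selective convolutional block: two convolutional branches with kernel sizes 9 and 16 that are fused by channel-wise soft attention whose first hidden layer has 8 nodes, as in the table. The exact wiring of the published 1-D SKCNN may differ; the class and variable names here are ours.

```python
import torch
import torch.nn as nn


class SelectiveConvBlock1D(nn.Module):
    """A plausible 1-D selective kernel block per the listing above:
    two conv branches (kernel sizes 9 and 16) whose outputs are fused
    by channel-wise attention computed from a small MLP."""

    def __init__(self, in_ch: int, filters: int,
                 k1: int = 9, k2: int = 16, hidden: int = 8):
        super().__init__()
        self.branch1 = nn.Conv1d(in_ch, filters, k1, padding="same")
        self.branch2 = nn.Conv1d(in_ch, filters, k2, padding="same")
        self.fc_reduce = nn.Linear(filters, hidden)  # first hidden layer: 8 nodes
        self.fc_a = nn.Linear(hidden, filters)       # attention logits, branch 1
        self.fc_b = nn.Linear(hidden, filters)       # attention logits, branch 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u1, u2 = self.branch1(x), self.branch2(x)    # each (N, C, L)
        s = (u1 + u2).mean(dim=2)                    # squeeze: global average pool
        z = torch.relu(self.fc_reduce(s))            # (N, hidden)
        logits = torch.stack([self.fc_a(z), self.fc_b(z)], dim=0)
        w = torch.softmax(logits, dim=0)             # soft selection across branches
        return w[0].unsqueeze(2) * u1 + w[1].unsqueeze(2) * u2


if __name__ == "__main__":
    block = SelectiveConvBlock1D(in_ch=1, filters=16)
    print(block(torch.randn(2, 1, 16384)).shape)     # torch.Size([2, 16, 16384])
```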

Appendix A.7. The Classification Results of the Experiments Based on 1-D SKCNN

Table A26. The fully supervised classification results with the datasets in Situation Dataset 1.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9725 0.9725 0.9850 0.9925 0.9900 0.9950 0.9900 1.0000 0.9800 0.9850 0.9875 0.9875 0.9750  0.9856
LFM  0.2525 0.2400 0.2750 0.2350 0.3025 0.3475 0.3550 0.3850 0.4375 0.5100 0.5300 0.5850 0.4925  0.3806
SFM  0.7975 0.8550 0.9100 0.9350 0.9600 0.9675 0.9750 0.9800 0.9975 0.9875 0.9900 0.9900 0.9850  0.9485
BFSK  0.7650 0.8225 0.8550 0.9000 0.8950 0.9425 0.9175 0.9225 0.9250 0.9250 0.9150 0.9475 0.9275  0.8969
QFSK  0.8825 0.9275 0.9500 0.9500 0.9450 0.9600 0.9650 0.9500 0.9825 0.9675 0.9675 0.9800 0.9750  0.9540
EQFM  0.2600 0.3875 0.4600 0.5400 0.6200 0.7025 0.8200 0.8875 0.9275 0.9500 0.9575 0.9850 0.9925  0.7300
DLFM  0.2575 0.2175 0.3150 0.3425 0.5350 0.5900 0.6750 0.7075 0.8100 0.8575 0.8750 0.8950 0.8875  0.6127
MLFM  0.3025 0.3275 0.2975 0.2950 0.2925 0.3450 0.3525 0.3450 0.3750 0.3775 0.3925 0.4025 0.4025  0.3467
BPSK  0.9550 0.9675 0.9775 0.9800 0.9850 0.9850 0.9900 0.9900 0.9875 0.9825 0.9750 0.9875 0.9925  0.9812
FRANK  0.8800 0.9400 0.9750 0.9775 0.9975 0.9950 1.0000 0.9925 0.9925 0.9925 0.9975 0.9925 0.9750  0.9775
LFM-BPSK  0.1950 0.2200 0.1850 0.2075 0.2375 0.2850 0.2800 0.3150 0.3150 0.3250 0.4125 0.4375 0.4225  0.2952
Average  0.5927 0.6252 0.6532 0.6686 0.7055 0.7377 0.7564 0.7705 0.7936 0.8055 0.8182 0.8355 0.8207  0.7372
Table A27. The fully supervised classification results with the datasets in Situation Dataset 2.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9900 0.9975 1.0000 1.0000 1.0000 0.9975 1.0000 1.0000 1.0000 0.9975 1.0000 1.0000 1.0000  0.9987
LFM  0.3675 0.4375 0.5550 0.6200 0.6700 0.7875 0.8425 0.9075 0.9400 0.9575 0.9675 0.9925 0.9850  0.7715
SFM  0.9300 0.9575 0.9825 0.9925 1.0000 0.9950 0.9975 0.9975 0.9975 0.9975 1.0000 1.0000 0.9950  0.9879
BFSK  0.8575 0.9025 0.9350 0.9775 0.9600 0.9750 0.9750 0.9750 0.9850 0.9775 0.9725 0.9875 0.9850  0.9588
QFSK  0.9925 0.9875 1.0000 1.0000 1.0000 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9983
EQFM  0.9175 0.9575 0.9825 0.9800 1.0000 0.9950 1.0000 1.0000 1.0000 1.0000 0.9975 0.9975 1.0000  0.9867
DLFM  0.6150 0.7125 0.8350 0.8850 0.9200 0.9425 0.9725 0.9825 0.9825 0.9975 0.9950 1.0000 1.0000  0.9108
MLFM  0.5900 0.6400 0.6575 0.7250 0.7825 0.8225 0.8850 0.9150 0.9175 0.9600 0.9750 0.9650 0.9550  0.8300
BPSK  1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 0.9975 1.0000 0.9950 1.0000 1.0000 1.0000 1.0000  0.9992
FRANK  0.9850 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9987
LFM-BPSK  0.5050 0.5450 0.6325 0.6775 0.7400 0.8075 0.8775 0.9225 0.9750 0.9750 1.0000 0.9925 1.0000  0.8192
Average  0.7955 0.8305 0.8707 0.8961 0.9157 0.9384 0.9586 0.9727 0.9811 0.9875 0.9916 0.9941 0.9927  0.9327
Table A28. The fully supervised classification results with the datasets in Situation Dataset 3.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  1.0000
LFM  0.4950 0.5950 0.6325 0.7550 0.7375 0.8275 0.8900 0.9400 0.9750 0.9750 0.9925 0.9975 1.0000  0.8317
SFM  0.9050 0.9575 0.9875 0.9925 0.9925 0.9950 0.9925 0.9950 1.0000 0.9975 1.0000 0.9975 0.9925  0.9850
BFSK  0.9775 0.9925 0.9975 0.9975 1.0000 0.9975 0.9975 1.0000 1.0000 0.9975 1.0000 0.9975 0.9975  0.9963
QFSK  0.9925 0.9950 0.9950 1.0000 0.9975 1.0000 1.0000 1.0000 0.9975 1.0000 0.9975 0.9975 1.0000  0.9979
EQFM  0.9075 0.9550 0.9775 0.9800 1.0000 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9858
DLFM  0.6350 0.7500 0.8350 0.8900 0.9300 0.9725 0.9775 0.9975 0.9925 0.9975 0.9975 0.9975 1.0000  0.9210
MLFM  0.6000 0.5800 0.6375 0.6625 0.7350 0.7950 0.8325 0.9225 0.9300 0.9525 0.9625 0.9725 0.9775  0.8123
BPSK  0.9975 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9996
FRANK  0.9925 1.0000 0.9975 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9990
LFM-BPSK  0.4000 0.4875 0.5675 0.6675 0.7075 0.8025 0.8600 0.9025 0.9575 0.9700 0.9825 0.9875 1.0000  0.7917
Average  0.8093 0.8466 0.8750 0.9039 0.9182 0.9443 0.9589 0.9780 0.9866 0.9900 0.9939 0.9952 0.9970  0.9382
Table A29. Semi-supervised classification results of our proposed method with the datasets in Situation Dataset 4.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9950 0.9875  0.9987
LFM  0.3600 0.3475 0.4225 0.4075 0.4650 0.5350 0.5925 0.6650 0.7425 0.8000 0.8375 0.8900 0.8900  0.6119
SFM  0.8700 0.9300 0.9475 0.9750 0.9875 1.0000 0.9975 0.9975 0.9975 1.0000 1.0000 1.0000 0.9950  0.9767
BFSK  0.9750 0.9950 0.9825 0.9925 0.9900 0.9925 0.9925 0.9950 1.0000 0.9950 0.9900 0.9925 0.9950  0.9913
QFSK  0.9900 0.9950 0.9950 0.9975 0.9975 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000  0.9979
EQFM  0.8950 0.9700 0.9825 0.9825 1.0000 0.9875 0.9925 0.9950 0.9975 0.9975 1.0000 1.0000 1.0000  0.9846
DLFM  0.3700 0.4175 0.5700 0.7150 0.8050 0.8750 0.9075 0.9425 0.9300 0.9550 0.9575 0.9475 0.9625  0.7965
MLFM  0.5050 0.5500 0.5975 0.6400 0.6800 0.7550 0.8200 0.7850 0.8100 0.8300 0.8450 0.8650 0.8350  0.7321
BPSK  0.9825 0.9850 0.9875 0.9925 0.9950 0.9950 0.9950 0.9875 0.9875 0.9975 0.9950 0.9950 0.9900  0.9912
FRANK  0.9875 0.9975 0.9925 1.0000 1.0000 1.0000 1.0000 1.0000 0.9975 1.0000 0.9975 1.0000 0.9975  0.9977
LFM-BPSK  0.3800 0.4300 0.5425 0.6775 0.7275 0.8525 0.9075 0.9475 0.9825 0.9775 0.9775 0.9925 0.9900  0.7988
Average  0.7559 0.7834 0.8200 0.8527 0.8770 0.9084 0.9275 0.9377 0.9495 0.9593 0.9636 0.9707 0.9675  0.8980
Table A30. Semi-supervised classification results of our proposed method with the datasets in Situation Dataset 5.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
SCF  0.9925 0.9875 0.9975 0.9875 0.9900 0.9975 0.9875 0.9950 0.9875 0.9725 0.9650 0.9750 0.9625  0.9844
LFM  0.5050 0.5000 0.5725 0.6250 0.6850 0.7150 0.8175 0.8150 0.8500 0.8575 0.8875 0.9275 0.9175  0.7442
SFM  0.8875 0.9500 0.9725 0.9825 0.9875 0.9975 0.9975 0.9950 0.9975 1.0000 1.0000 0.9975 0.9950  0.9815
BFSK  0.9550 0.9775 1.0000 0.9900 0.9975 0.9975 1.0000 0.9975 1.0000 1.0000 1.0000 1.0000 0.9925  0.9929
QFSK  0.9700 0.9925 0.9900 0.9975 1.0000 0.9950 0.9950 1.0000 1.0000 0.9975 1.0000 1.0000 1.0000  0.9952
EQFM  0.8650 0.9275 0.9600 0.9600 0.9875 0.9925 0.9975 1.0000 1.0000 0.9975 1.0000 1.0000 1.0000  0.9760
DLFM  0.3850 0.5000 0.6425 0.7825 0.8700 0.9350 0.9425 0.9725 0.9775 0.9875 0.9850 0.9900 0.9800  0.8423
MLFM  0.3825 0.4125 0.5150 0.5125 0.6250 0.6375 0.7350 0.7400 0.7725 0.8150 0.8475 0.8525 0.8250  0.6671
BPSK  0.9400 0.9225 0.9575 0.9750 0.9800 0.9900 0.9775 0.9700 0.9850 0.9900 0.9550 0.9900 0.9850  0.9706
FRANK  0.9850 0.9875 0.9975 0.9900 0.9950 0.9950 0.9975 0.9950 1.0000 1.0000 0.9975 0.9950 0.9975  0.9948
LFM-BPSK  0.3100 0.3325 0.4550 0.5625 0.5825 0.7250 0.8150 0.8700 0.9225 0.9350 0.9725 0.9700 0.9875  0.7262
Average  0.7434 0.7718 0.8236 0.8514 0.8818 0.9070 0.9330 0.9409 0.9539 0.9593 0.9645 0.9725 0.9675  0.8977

Appendix A.8. The Time Usage for Training the Proposed MA-CNN Based on Different Pairs of T_cycle and NUM

Table 2 shows that the training time of the proposed MA-CNN is approximately 410 s per 10,000 iterations with a batch size of 64. Since the experiments in Section 4 and Section 5 use four different pairs of T_cycle and NUM, the total training time differs between settings. The training time for each pair is given in Table A31.
Table A31. The details of training time for different values of T_cycle and NUM with 64 batch size and 50 epochs per training cycle.
(T_cycle, NUM)  (26, 2145)  (50, 1102)  (100, 551)  (200, 276)
Iterations in total  610 k  1181 k  2341 k  4977 k
Training time  7 h  14 h  27 h  58 h
Based on the experimental results in Section 4 and Section 5, a larger value of T_cycle combined with a smaller value of NUM increases the accuracy. However, the training cost grows with the number of iterations, so more training time is needed to obtain the final well-trained MA-CNN. As Table A31 shows, when T_cycle is doubled and NUM is halved, the total number of iterations roughly doubles, and the training time approximately doubles with it.
As a result, in real applications it is important to balance the accuracy requirement against the training time. Moreover, additional GPU cards could reduce the training time if the computational resources permit.
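The consistency between the iteration counts and the times in Table A31 can be verified with a few lines of arithmetic; the rate of 410 s per 10,000 iterations is taken from Table 2. The sketch below is only a back-of-the-envelope check, not a profiling tool.

```python
# Sanity check of Table A31: estimated training time from the total
# iteration count, using the measured rate of 410 s per 10,000
# iterations (64 batch size) reported in Table 2.

SECONDS_PER_10K_ITERS = 410.0


def training_hours(total_iterations: int) -> float:
    return total_iterations / 10_000 * SECONDS_PER_10K_ITERS / 3600.0


for pair, iters in {(26, 2145): 610_000, (50, 1102): 1_181_000,
                    (100, 551): 2_341_000, (200, 276): 4_977_000}.items():
    print(pair, f"{training_hours(iters):.1f} h")
# Prints roughly 6.9 h, 13.4 h, 26.7 h and 56.7 h, matching the
# 7 h / 14 h / 27 h / 58 h figures in Table A31.
```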

Figure 1. The flow chart of our proposed method.
Figure 2. The structure of the mixed attention convolutional neural network (MA-CNN).
Figure 3. The structure of the mixed attention block.
Figure 4. The frequency-domain amplitude spectra of the eleven intra-pulse modulations of radar emitter signal samples without noise (upper) and the corresponding spectra when the SNR is 0 dB (lower).
Figure 5. The datasets in different situations.
Figure 6. The average classification accuracy of the model at the end of each training cycle on the validation dataset.
Table 1. Parameters of radar emitter signals with eleven different intra-pulse modulations.
Class ID | Type | Carrier Frequency | Parameters | Details
1 | SCF | 0.05–0.95 | None | None
2 | LFM | 0.05–0.95 | Bandwidth: 0.05–0.8 | 1. Up LFM and down LFM are included. 2. Both the max and min values of the instantaneous frequency range from 0.05 to 0.95.
3 | SFM | 0.05–0.95 | Bandwidth: 0.05–0.8 | Both the max and min values of the instantaneous frequency range from 0.05 to 0.95.
4 | BFSK | 0.05–0.95 (two carrier frequencies) | 5, 7, 11, 13-bit Barker code | The distance between the two carrier frequencies is longer than 0.05.
5 | QFSK | 0.05–0.95 (four carrier frequencies) | 16-bit Frank code | The distance between each pair of carrier frequencies is longer than 0.05.
6 | EQFM | 0.05–0.95 | Bandwidth: 0.05–0.8 | 1. The instantaneous frequency increases first and then decreases, or decreases first and then increases. 2. Both the max and min values of the instantaneous frequency range from 0.05 to 0.95.
7 | DLFM | 0.05–0.95 | Bandwidth: 0.05–0.8 | 1. The instantaneous frequency increases first and then decreases, or decreases first and then increases. 2. Both the max and min values of the instantaneous frequency range from 0.05 to 0.95.
8 | MLFM | 0.05–0.95 (two segments) | Bandwidth: 0.05–0.8 for each segment; segment split: 20–80% | 1. Up LFM and down LFM are included in each of the two parts. 2. Both the max and min values of the instantaneous frequency of each part range from 0.05 to 0.95. 3. The distance between the instantaneous frequency at the end of the first part and that at the start of the second part is longer than 0.05.
9 | BPSK | 0.05–0.95 | 5, 7, 11, 13-bit Barker code | None
10 | FRANK | 0.05–0.95 | Phase number: 6, 7, 8 | None
11 | LFM-BPSK | 0.05–0.95 | Bandwidth: 0.05–0.8; 5, 7, 11, 13-bit Barker code | 1. Up LFM and down LFM are included. 2. Both the max and min values of the instantaneous frequency range from 0.05 to 0.95.
Note: The carrier frequency and bandwidth are normalized. For instance, if the sampling frequency is 400 MHz, the carrier frequency of LFM (0.05–0.95) ranges from 20 MHz to 380 MHz, and the bandwidth of LFM (0.05–0.8) ranges from 20 MHz to 320 MHz.
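To make the normalization concrete, the following NumPy sketch generates a complex-baseband LFM pulse with normalized start frequency 0.2 and bandwidth 0.4, adds white Gaussian noise at 0 dB SNR, and computes a 16,384-point amplitude spectrum matching the input shape in Table 2. The pulse length of 8192 samples, the complex-baseband sampling, and the max-normalization are our assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of the normalized-frequency convention in Table 1:
# an LFM pulse with normalized start frequency 0.2 and bandwidth 0.4
# (i.e., 80 MHz and 160 MHz at a 400 MHz sampling rate), white Gaussian
# noise at 0 dB SNR, and a 16,384-point amplitude spectrum as in Table 2.
# The pulse length (8192 samples) and complex sampling are assumptions.

rng = np.random.default_rng(0)
n_samples, n_fft, snr_db = 8192, 16384, 0.0
f0, bw = 0.2, 0.4                                   # normalized, cycles/sample

t = np.arange(n_samples)
k = bw / n_samples                                  # chirp rate, cycles/sample^2
signal = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Complex AWGN at the requested SNR (unit signal power).
noise_power = 10 ** (-snr_db / 10)
noise = rng.normal(scale=np.sqrt(noise_power / 2), size=(2, n_samples))
noisy = signal + noise[0] + 1j * noise[1]

# Zero-padded amplitude spectrum, max-normalized, as the network input.
spectrum = np.abs(np.fft.fft(noisy, n=n_fft))
spectrum /= spectrum.max()
print(spectrum.shape)                               # (16384,)
```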
Table 2. The parameters of MA-CNN.
Input: Input Layer (shape: 16384)
Main Block 1:
- Convolutional Layer (filters: 16; kernel size: 16)
- Max-Pooling (stride: 6; pooling size: 6)
- BatchNormalization
- Mixed Attention Block: MLP_1 (hidden layer 1: 16 nodes, "ReLU"; hidden layer 2: 4 nodes, "Sigmoid"); MLP_2 (hidden layer 1: 32 nodes, "ReLU"; hidden layer 2: 4 nodes, "Sigmoid"); Convolution Layer_1 (kernel size: 9, "Sigmoid"); Convolution Layer_2 (kernel size: 16, "Sigmoid")
Main Block 2:
- Selective Convolutional Block (filters: 32; first kernel size: 9; second kernel size: 16; nodes of first hidden layer in MLP: 8)
- Max-Pooling (stride: 6; pooling size: 6)
- BatchNormalization
- Mixed Attention Block: MLP_1 (hidden layer 1: 32 nodes, "ReLU"; hidden layer 2: 8 nodes, "Sigmoid"); MLP_2 (hidden layer 1: 64 nodes, "ReLU"; hidden layer 2: 8 nodes, "Sigmoid"); Convolution Layer_1 (kernel size: 9, "Sigmoid"); Convolution Layer_2 (kernel size: 16, "Sigmoid")
Main Block 3:
- Selective Convolutional Block (filters: 64; first kernel size: 9; second kernel size: 16; nodes of first hidden layer in MLP: 8)
- Max-Pooling (stride: 6; pooling size: 6)
- BatchNormalization
- Mixed Attention Block: MLP_1 (hidden layer 1: 64 nodes, "ReLU"; hidden layer 2: 16 nodes, "Sigmoid"); MLP_2 (hidden layer 1: 128 nodes, "ReLU"; hidden layer 2: 16 nodes, "Sigmoid"); Convolution Layer_1 (kernel size: 5, "Sigmoid"); Convolution Layer_2 (kernel size: 9, "Sigmoid")
Main Block 4:
- Selective Convolutional Block (filters: 128; first kernel size: 9; second kernel size: 16; nodes of first hidden layer in MLP: 8)
- Max-Pooling (stride: 6; pooling size: 6)
- BatchNormalization
- Mixed Attention Block: MLP_1 (hidden layer 1: 128 nodes, "ReLU"; hidden layer 2: 32 nodes, "Sigmoid"); MLP_2 (hidden layer 1: 256 nodes, "ReLU"; hidden layer 2: 32 nodes, "Sigmoid"); Convolution Layer_1 (kernel size: 3, "Sigmoid"); Convolution Layer_2 (kernel size: 5, "Sigmoid")
Full Connection Unit: Full Connection Layer (nodes: 512; activation: "ReLU")
Output: Full Connection Layer (nodes: 11; activation: "SoftMax")
Total parameters: 1,033,195
Floating point operations: 2,063,221
Training time per 10,000 iterations with 64 batch size: 410 s
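The exact combination rule inside the mixed attention block is not fully specified by the listing above, so the following PyTorch sketch should be read as one plausible wiring only: channel attention from two small MLPs applied to pooled channel descriptors, followed by spatial attention from two 1-D convolutions with different kernel sizes, all with sigmoid gates. The hidden sizes follow Main Block 1, but the MLP output dimensions here are changed to match the channel count, which deviates from the 4-node second hidden layer in the table.

```python
import torch
import torch.nn as nn


class MixedAttentionBlock1D(nn.Module):
    """One plausible wiring of the mixed attention block in Table 2:
    channel attention from two MLPs on pooled channel descriptors, then
    spatial attention from two 1-D convolutions with different kernel
    sizes. Illustrative only; the published wiring may differ."""

    def __init__(self, channels: int, hidden1: int, hidden2: int,
                 k1: int = 9, k2: int = 16):
        super().__init__()
        # Channel attention (MLP_1 / MLP_2 in Table 2).
        self.mlp1 = nn.Sequential(nn.Linear(channels, hidden1), nn.ReLU(),
                                  nn.Linear(hidden1, channels), nn.Sigmoid())
        self.mlp2 = nn.Sequential(nn.Linear(channels, hidden2), nn.ReLU(),
                                  nn.Linear(hidden2, channels), nn.Sigmoid())
        # Spatial attention (Convolution Layer_1 / Layer_2 in Table 2).
        self.conv1 = nn.Conv1d(1, 1, k1, padding="same")
        self.conv2 = nn.Conv1d(1, 1, k2, padding="same")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, L)
        avg_c = x.mean(dim=2)                        # (N, C) average descriptor
        max_c = x.amax(dim=2)                        # (N, C) max descriptor
        ch = (self.mlp1(avg_c) + self.mlp2(max_c)).unsqueeze(2)
        x = x * ch                                   # channel-wise reweighting
        sp_in = x.mean(dim=1, keepdim=True)          # (N, 1, L) pooled over channels
        sp = torch.sigmoid(self.conv1(sp_in) + self.conv2(sp_in))
        return x * sp                                # spatial reweighting


if __name__ == "__main__":
    block = MixedAttentionBlock1D(channels=16, hidden1=16, hidden2=32)
    print(block(torch.randn(2, 16, 2730)).shape)     # torch.Size([2, 16, 2730])
```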
Table 3. The situations of the datasets that are used for training the MA-CNN.
Situation | Details
Situation Dataset 1 | Labeled dataset
Situation Dataset 2 | Labeled dataset + 1st unlabeled dataset with true labels
Situation Dataset 3 | Labeled dataset + 2nd unlabeled dataset with true labels
Situation Dataset 4 | Labeled dataset + 1st unlabeled dataset without true labels
Situation Dataset 5 | Labeled dataset + 2nd unlabeled dataset without true labels
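Situation Datasets 4 and 5 are the semi-supervised cases: the unlabeled samples carry no true labels, so the model must assign pseudo labels itself. A scaled-down sketch of one plausible pseudo-label training loop follows; a logistic regression stands in for the MA-CNN, the most-confident-first selection rule is an assumption, and T_cycle and NUM are shrunk so the toy example runs quickly. (As an aside, the four (T_cycle, NUM) pairs in Table 8 all multiply to roughly 55,000, i.e., every setting works through approximately the same unlabeled pool.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative pseudo-label training loop in the spirit of Situation
# Datasets 4 and 5: in each of T_CYCLE cycles, fit on the current labeled
# pool, then move the NUM most confident unlabeled samples into the pool
# with their predicted labels. Toy data and a logistic regression stand
# in for the radar spectra and the MA-CNN; details may differ.

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(200, 32))
y_lab = rng.integers(0, 11, 200)                    # 11 modulation classes
X_unl = rng.normal(size=(5000, 32))                 # no true labels available
T_CYCLE, NUM = 26, 2145 // 26                       # scaled-down toy values

for cycle in range(T_CYCLE):
    if len(X_unl) == 0:
        break
    model = LogisticRegression(max_iter=200).fit(X_lab, y_lab)
    proba = model.predict_proba(X_unl)              # (n_unlabeled, n_classes)
    pick = np.argsort(proba.max(axis=1))[-NUM:]     # NUM most confident samples
    X_lab = np.vstack([X_lab, X_unl[pick]])
    y_lab = np.concatenate([y_lab, proba[pick].argmax(axis=1)])  # pseudo labels
    X_unl = np.delete(X_unl, pick, axis=0)
```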
Table 4. The fully supervised classification accuracy for SNR with the datasets in Situation Dataset 1, Situation Dataset 2 and Situation Dataset 3.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
Situation Dataset 1  0.5625 0.5970 0.6370 0.6570 0.6820 0.7007 0.7370 0.7455 0.7730 0.7939 0.8073 0.8268 0.8193  0.7184
Situation Dataset 2  0.7884 0.8189 0.8650 0.8925 0.9130 0.9345 0.9548 0.9673 0.9823 0.9882 0.9923 0.9932 0.9957  0.9297
Situation Dataset 3  0.8127 0.8439 0.8750 0.8989 0.9173 0.9339 0.9577 0.9709 0.9848 0.9870 0.9907 0.9927 0.9948  0.9354
Table 5. The fully supervised classification accuracy for the classes with the datasets in Situation Dataset 1, Situation Dataset 2 and Situation Dataset 3.
Class  Situation Dataset 1  Situation Dataset 2  Situation Dataset 3
SCF  0.9602  0.9990  0.9996
LFM  0.3217  0.8517  0.8175
SFM  0.9340  0.9879  0.9823
BFSK  0.8533  0.9954  0.9933
QFSK  0.9165  0.9969  0.9938
EQFM  0.7169  0.9829  0.9925
DLFM  0.5958  0.8769  0.9367
MLFM  0.3254  0.8113  0.8044
BPSK  0.9506  0.9990  0.9985
FRANK  0.9706  0.9996  0.9971
LFM-BPSK  0.3573  0.7258  0.7737
Average  0.7184  0.9297  0.9354
Table 6. The semi-supervised classification accuracy of our proposed method for SNR with the datasets in Situation Dataset 4 and Situation Dataset 5.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
Situation Dataset 4  0.7398 0.7764 0.8136 0.8418 0.8595 0.8839 0.9030 0.9125 0.9225 0.9255 0.9318 0.9334 0.9309  0.8750
Situation Dataset 5  0.7305 0.7741 0.8168 0.8411 0.8650 0.8845 0.9030 0.9089 0.9186 0.9286 0.9407 0.9459 0.9350  0.8764
Table 7. The semi-supervised classification accuracy of our proposed method for the classes with the datasets in Situation Dataset 4 and Situation Dataset 5.
Class  Situation Dataset 4  Situation Dataset 5
SCF  0.9935  0.9998
LFM  0.6925  0.5242
SFM  0.9781  0.9821
BFSK  0.9885  0.9713
QFSK  0.9938  0.9940
EQFM  0.9610  0.9798
DLFM  0.7444  0.7900
MLFM  0.6204  0.6487
BPSK  0.9965  0.9948
FRANK  0.9950  0.9948
LFM-BPSK  0.6610  0.7604
Average  0.8750  0.8764
Table 8. Semi-supervised classification results with the datasets with different values of T_cycle and NUM.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
Situation Dataset 4:
(26, 2145)  0.7309 0.7620 0.8052 0.8352 0.8530 0.8800 0.8977 0.9161 0.9200 0.9252 0.9257 0.9364 0.9255  0.8702
(50, 1102)  0.7275 0.7607 0.7945 0.8357 0.8591 0.8718 0.8964 0.9157 0.9261 0.9334 0.9439 0.9491 0.9414  0.8735
(100, 551)  0.7398 0.7764 0.8136 0.8418 0.8595 0.8839 0.9030 0.9125 0.9225 0.9255 0.9318 0.9334 0.9309  0.8750
(200, 276)  0.7414 0.7773 0.8182 0.8470 0.8752 0.8982 0.9225 0.9282 0.9407 0.9489 0.9480 0.9523 0.9411  0.8876
Situation Dataset 5:
(26, 2145)  0.7073 0.7439 0.7923 0.8186 0.8373 0.8652 0.8818 0.9043 0.9084 0.9123 0.9186 0.9270 0.9234  0.8570
(50, 1102)  0.7086 0.7511 0.7809 0.8227 0.8509 0.8730 0.9057 0.9132 0.9314 0.9486 0.9559 0.9536 0.9550  0.8731
(100, 551)  0.7305 0.7741 0.8168 0.8411 0.8650 0.8845 0.9030 0.9089 0.9186 0.9286 0.9407 0.9459 0.9350  0.8764
(200, 276)  0.7436 0.7757 0.8209 0.8489 0.8666 0.8941 0.9109 0.9227 0.9307 0.9380 0.9425 0.9484 0.9407  0.8834
Table 9. Semi-supervised classification results with the datasets without self-paced data augmentation.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
Situation Dataset 4:
(26, 2145)  0.7005 0.7170 0.7409 0.7641 0.7880 0.8000 0.8114 0.8273 0.8318 0.8423 0.8475 0.8570 0.8552  0.7987
(50, 1102)  0.7120 0.7243 0.7570 0.7745 0.7884 0.8009 0.8052 0.8211 0.8314 0.8382 0.8439 0.8482 0.8402  0.7989
(100, 551)  0.7057 0.7286 0.7632 0.7825 0.7961 0.8180 0.8298 0.8420 0.8461 0.8539 0.8570 0.8652 0.8643  0.8117
(200, 276)  0.7125 0.7266 0.7686 0.7905 0.8025 0.8332 0.8550 0.8664 0.8764 0.8907 0.8957 0.8995 0.8961  0.8318
Situation Dataset 5:
(26, 2145)  0.7073 0.7252 0.7482 0.7668 0.7843 0.7959 0.8161 0.8218 0.8361 0.8393 0.8439 0.8543 0.8420  0.7986
(50, 1102)  0.6836 0.7041 0.7336 0.7552 0.7800 0.7993 0.8177 0.8314 0.8384 0.8532 0.8614 0.8657 0.8593  0.7987
(100, 551)  0.7105 0.7295 0.7723 0.7934 0.8145 0.8280 0.8427 0.8509 0.8652 0.8623 0.8659 0.8745 0.8702  0.8215
(200, 276)  0.7075 0.7282 0.7686 0.7923 0.8134 0.8295 0.8561 0.8611 0.8755 0.8834 0.8880 0.8945 0.8907  0.8299
Table 10. Semi-supervised classification results with the datasets with different strategies of self-paced data augmentation.
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
Situation Dataset 4:
d = 1 constant  0.6511 0.6868 0.7198 0.7527 0.7655 0.7882 0.8243 0.8464 0.8723 0.8859 0.8930 0.9002 0.9050  0.8070
d = 0.5 constant  0.6775 0.7155 0.7518 0.7768 0.8102 0.8398 0.8664 0.8936 0.9023 0.9084 0.9277 0.9330 0.9298  0.8410
d = 0.5 decreasing linearly  0.6957 0.7275 0.7675 0.8048 0.8252 0.8595 0.8820 0.9109 0.9343 0.9459 0.9516 0.9561 0.9548  0.8628
Situation Dataset 5:
d = 1 constant  0.6511 0.6768 0.7186 0.7420 0.7639 0.7864 0.8186 0.8386 0.8618 0.8757 0.8841 0.8882 0.8918  0.7998
d = 0.5 constant  0.6934 0.7180 0.7489 0.7866 0.8091 0.8373 0.8686 0.8875 0.9057 0.9070 0.9184 0.9298 0.9243  0.8411
d = 0.5 decreasing linearly  0.7132 0.7295 0.7761 0.8027 0.8293 0.8632 0.8852 0.9027 0.9186 0.9270 0.9334 0.9364 0.9370  0.8580
Table 11. The classification accuracy for SNR with the datasets in Situation Dataset 1 to Situation Dataset 5 based on 1-D SKCNN (fully supervised for Situation Datasets 1–3; semi-supervised with our proposed method for Situation Datasets 4 and 5).
SNR (dB)  −12  −11  −10  −9  −8  −7  −6  −5  −4  −3  −2  −1  0  Average
Situation Dataset 1  0.5927 0.6252 0.6532 0.6686 0.7055 0.7377 0.7564 0.7705 0.7936 0.8055 0.8182 0.8355 0.8207  0.7372
Situation Dataset 2  0.7955 0.8305 0.8707 0.8961 0.9157 0.9384 0.9586 0.9727 0.9811 0.9875 0.9916 0.9941 0.9927  0.9327
Situation Dataset 3  0.8093 0.8466 0.8750 0.9039 0.9182 0.9443 0.9589 0.9780 0.9866 0.9900 0.9939 0.9952 0.9970  0.9382
Situation Dataset 4  0.7559 0.7834 0.8200 0.8527 0.8770 0.9084 0.9275 0.9377 0.9495 0.9593 0.9636 0.9707 0.9675  0.8980
Situation Dataset 5  0.7434 0.7718 0.8236 0.8514 0.8818 0.9070 0.9330 0.9409 0.9539 0.9593 0.9645 0.9725 0.9675  0.8977