Article

Modulation Recognition of Communication Signal Based on Convolutional Neural Network

Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin 150080, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(12), 2302; https://doi.org/10.3390/sym13122302
Submission received: 2 October 2021 / Revised: 3 November 2021 / Accepted: 25 November 2021 / Published: 2 December 2021

Abstract

In noncooperative communication scenarios, digital signal modulation recognition helps identify communication targets and manage them more effectively. To address the high complexity, low accuracy and cumbersome manual feature extraction of traditional machine learning algorithms, a communication signal modulation recognition model based on a convolutional neural network (CNN) is proposed. In this paper, a convolutional neural network is combined with a bidirectional long short-term memory (BiLSTM) network with a symmetrical structure to successively extract the frequency-domain and timing features of signals, and importance weights are then assigned by an attention mechanism to complete the recognition task. Seven typical digital modulation schemes, including 2ASK, 4ASK, 4FSK, BPSK, QPSK, 8PSK and 64QAM, are used in the simulation test. The results show that, compared with classical machine learning algorithms, the proposed algorithm achieves higher recognition accuracy at low SNR, confirming that the proposed modulation recognition method is effective in noncooperative communication systems.

1. Introduction

Modulation recognition is a technique for judging the modulation mode of a received signal when the content of the modulated information is unknown. It is widely used in radio signal monitoring, electronic countermeasures, intelligent communication and other fields. In real environments, noncooperative communication and background noise blur some features of the received signal, which degrades the recognition result. Achieving high recognition accuracy of the modulation mode at low SNR is therefore an important research topic. Automatic modulation recognition is the main modulation recognition approach in wireless communication.
The idea that recognizing a signal's modulation mode is essentially a pattern recognition problem dates to a 1969 report entitled "Automatic Classification of Modulation Types by Pattern Recognition Technology"; this line of work continues in recent studies such as [1]. At present, automatic modulation recognition methods can be divided into three categories: modulation recognition based on hypothesis testing, modulation recognition based on feature extraction and modulation recognition based on deep learning. The research status of these three methods is reviewed below.
In hypothesis-testing-based modulation recognition, the maximum likelihood function and the most appropriate decision threshold are derived from the signal characteristics using probability theory and hypothesis testing; the test statistic of the signal under examination is then compared with the decision threshold to identify its modulation mode. In 1988, Kim and Polydoros first proposed a recognition method based on the average likelihood ratio test (ALRT) to distinguish the modulation patterns of BPSK and QPSK [2]. In 1997, Schreyogg et al. proposed a robust method for asynchronous modulation recognition without prior knowledge of the modulation and transmission parameters and demonstrated the robustness of the classifier on simulated ASK, BPSK, QPSK, 2FSK, MSK and CW signals [3]. In 2000, Panagiotou et al. proposed a recognition method based on the hybrid likelihood ratio test (HLRT), which combines the two approaches above and absorbs their respective advantages to improve recognition performance [4]. Hypothesis-testing-based recognition requires substantial prior information, such as the mean and variance of the modulated signal, which is difficult to obtain accurately in noncooperative communication. In addition, its recognition accuracy is low at low SNR because of the strong influence of noise. Therefore, this traditional method is gradually being replaced by newer approaches.
Modulation recognition based on feature extraction extracts the most representative and discriminative features in the time or frequency domain of signals of different modulation types, so that those types can be accurately identified. Nandi proposed a recognition method based on instantaneous statistical features and used a decision tree as the classifier to identify 13 kinds of communication signals; at an SNR of 10 dB, the recognition accuracy exceeds 90% [5]. Tan proposed a recognition method based on time-domain characteristics in which feature parameters are assembled into feature vectors and a random forest is used as the classifier, enabling automatic recognition of six low-order digital modulation modes such as 2ASK and 2FSK [6]. Swami used fourth-order cumulants to identify MPSK and MQAM signals [7]. Wang combined fourth-order and sixth-order cumulants and used a support vector machine (SVM) as the classifier to identify MPSK modulations [8]. Orlic proposed a recognition method based on normalized sixth-order cumulants that can efficiently identify BPSK, QPSK, 16QAM and 64QAM [9].
With the rise of big data, deep learning has become a research hotspot in artificial intelligence. Owing to its powerful ability to extract features automatically, it has been widely applied in computer vision and natural language processing, and more and more researchers are bringing it into their own fields. In recent years, deep learning has gradually been introduced into research on digital signal modulation recognition. Automatic modulation recognition based on deep learning requires no prior information: it exploits the autonomous learning mechanism of the neural network, taking the original signal or transformed signal data as network input for supervised training against the labels. Nonlinear features of the data are extracted by the nonlinear functions in the network and then passed to the output layer to recognize the modulation mode of the communication signal.
Tu proposed a modulation recognition method for digital signals based on deep autoencoder networks: exploiting the distinct cyclic spectrum and wavelet characteristics of different modulated signals, the original signal features are extracted to identify an unknown signal's modulation mode [10]. O’Shea takes the baseband complex signal directly as input and uses a CNN to extract signal characteristics and identify the modulation methods of 11 signals; at an SNR of 0 dB, the recognition accuracy reaches 80% [11]. Peng proposed a modulation recognition method based on a deep neural network (DNN) that preprocesses the signal under test, generates its constellation map and takes the constellation map as the input of the neural network; with the trained network, the model recognizes the modulated signal with over 95% accuracy when the SNR is greater than 4 dB [12]. Hou proposed a deep-learning-based modulation recognition method for communication signals that solves the end-to-end signal recognition problem with a deep neural network model, thereby avoiding the tedious process of manual feature extraction [13]. Peng proposed representing the data as a grid topology and combining it with a CNN to recognize modulated signals [14].
Wang proposed a constellation-based convolutional neural network recognition algorithm to identify the modulation modes of different signals [15]. Xie proposed a new modulation identification method in which high-order cumulant features are first extracted, further features of signals of different modulation types are then derived from them, and the features are fed to a DNN to improve recognition accuracy [16]. Tang proposed a modulation recognition method for communication signals based on generative adversarial networks: an auxiliary classifier GAN (ACGAN) is first used to augment the data, and the classic AlexNet model is then used as the classifier; compared with the original data set, the expanded data set yields a significant improvement in accuracy [17]. Tu used pruning to reduce the number of convolution parameters and floating-point operations in a CNN and applied this lightweight CNN to digital signal modulation recognition; compared with the original CNN, it reduces training time to 33–35% while maintaining recognition accuracy [18]. Shi proposed a particle swarm optimization algorithm for the problem of DNNs falling into local minima: the number of hidden layer nodes of the DNN is optimized, and the extracted characteristics of the signal under test are used as network input, improving recognition accuracy at low SNR [19]. Liu [20] proposed a deep complex network cascaded with a bidirectional long short-term memory network (DCN-BiLSTM) for automatic modulation recognition; its recognition rate for 11 modulation signals reaches 90% when the SNR exceeds 4 dB.
In this paper, a novel CNN-based modulation recognition method for digital signals is proposed, called CNN + BiLSTM + attention (C-BiLSTM-A). The algorithm comprises two stages, feature extraction and recognition. In feature extraction, a convolutional neural network is first used to extract the frequency-domain features of the signal; these features are then fed to a BiLSTM, which extracts the timing features of the signal. Finally, the feature parameters are weighted and summed through the attention mechanism, and a softmax classifier identifies the modulation mode of the signal.

2. Algorithm Design

The structure of the C-BiLSTM-A network model proposed in this paper is shown in Figure 1. The digital signal data set is used as the input of C-BiLSTM-A. First, a two-layer one-dimensional convolution extracts the signal's coarse features, the ReLU activation function maps the features into a nonlinear space, and a fully connected layer maps the nonlinear features to the sample label space. The signal features are then fed into the BiLSTM, whose two-way gating structure filters the features and retains part of the feature information to further extract the timing features of the digital signal. Because each short subsequence in a long sequence carries different feature information, its importance also differs. Therefore, an attention mechanism is introduced after the BiLSTM: the attention values of the different short sequences are obtained by computing the attention distribution over the input feature information and taking a weighted sum. Finally, the result is passed to a fully connected layer and combined with the classifier to recognize the modulation mode of the digital signal; a code sketch of this pipeline is given below.
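As an illustration, the pipeline above can be sketched in Keras. This is a minimal sketch, not the authors' exact implementation: the input length, filter counts and layer widths are not specified in the text and are assumed here; only the overall Conv1D → ReLU → Dense → BiLSTM → attention → softmax structure follows the description.

```python
# Minimal sketch of the C-BiLSTM-A pipeline (assumed sizes, see lead-in).
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, NUM_CLASSES = 1024, 7  # 1024 is an assumed sample length; 7 modulation types

def attention_pool(x):
    # Soft attention: score each time step, normalize with softmax,
    # then take the attention-weighted sum over time.
    scores = layers.Dense(1)(x)                   # (batch, T, 1)
    weights = layers.Softmax(axis=1)(scores)      # attention distribution over time
    return layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

inputs = layers.Input(shape=(SEQ_LEN, 1))
x = layers.Conv1D(64, 1, strides=2, activation="relu")(inputs)       # 1x1 kernel, stride 2
x = layers.Conv1D(64, 3, strides=2, activation="relu")(x)            # 1x3 kernel, stride 2
x = layers.Dense(64, activation="relu")(x)                           # map to label space
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)  # timing features
x = attention_pool(x)                                                # weighted feature sum
x = layers.Dropout(0.1)(x)                        # dropout 0.1, as in Section 5.1
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)
```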

3. Bi-Directional Long Short-Term Memory

At present, deep learning architectures are mainly applied to images and natural language processing, and fewer network structures have been applied to sequence signal classification. The data processed in this paper are one-dimensional time-series signals. Starting from the convolutional neural network designed by Li, the two-dimensional convolutional layers were replaced with one-dimensional convolutional layers [21]. The rectified linear unit (ReLU) was selected as the activation function, which effectively alleviates the vanishing-gradient problem in the network and thus makes training of the deep learning model more stable.
Digital modulation signals can be regarded as a kind of unit sequence, and the long short-term memory network (LSTM), a powerful structure for processing sequential signals, is a special kind of recurrent neural network (RNN) [22]. Compared with the RNN, the LSTM can better express long-term dependencies in the input and therefore performs better on longer sequences [23,24]. Whereas the RNN has only one transitive state, $h_t$, the LSTM has two: a cell state, $c_t$, and a hidden state, $h_t$. The cell state, $c_t$, gives the input data its time dependence and temporal characteristics, and the LSTM realizes long-term control through the cell state. This long-term control is achieved mainly through three kinds of gate structures: the forget gate, the input gate and the output gate. The structure of the LSTM is shown in Figure 2.
The positions of the three gate structures in Figure 2 correspond to the forget gate, $f$; the input gate, $i$; and the output gate, $o$, respectively, while the candidate value, $g$, adds information to the cell state. The cell state can filter and update the input timing signals and endow them with time dependence, which benefits the classification and prediction of timing signals.
For the structure in Figure 2, the learnable weights of the LSTM layer are the input weights, $w$; the recurrent weights, $R$; and the bias, $b$. The gate and candidate activations are computed as shown in Equation (1):
$$\begin{cases} f_t = \sigma_g(w_f x_t + R_f h_{t-1} + b_f) \\ i_t = \sigma_g(w_i x_t + R_i h_{t-1} + b_i) \\ o_t = \sigma_g(w_o x_t + R_o h_{t-1} + b_o) \\ g_t = \sigma_c(w_g x_t + R_g h_{t-1} + b_g) \end{cases} \tag{1}$$
where $x_t$ is the current input, $h_{t-1}$ is the output of the previous hidden state, $\sigma_g$ is the sigmoid activation function and $\sigma_c$ is the hyperbolic tangent ($\tanh$) activation. The matrices $w$, $R$ and $b$ are the concatenations of the input weights, recurrent weights and biases of the individual components, respectively:
$$w = \begin{bmatrix} w_f \\ w_i \\ w_g \\ w_o \end{bmatrix}, \quad R = \begin{bmatrix} R_f \\ R_i \\ R_g \\ R_o \end{bmatrix}, \quad b = \begin{bmatrix} b_f \\ b_i \\ b_g \\ b_o \end{bmatrix} \tag{2}$$
The cell state at time $t$ is expressed as follows:
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t \tag{3}$$
where $\odot$ denotes the Hadamard product (element-wise multiplication of vectors). The hidden state at time $t$ is expressed as follows:
$$h_t = o_t \odot \sigma_c(c_t) \tag{4}$$
In general, LSTM layers use the hyperbolic tangent function ($\tanh$) as the state activation function, $\sigma_c$.
The specific working principle is as follows. The forget gate implements the forgetting stage, which selectively forgets the input passed in from the previous node. Specifically, $f_t$ controls which parts of the previous cell state, $c_{t-1}$, are retained and which are forgotten. Its expression is as follows:
$$f_t = \sigma_g(w_f x_t + R_f h_{t-1} + b_f) \tag{5}$$
where $f_t$ is the forget gate activation, $h_{t-1}$ is the output of the previous time step and $x_t$ is the input data at the current moment.
The input gate implements the input stage, which restricts the information entering at this stage; here, it mainly processes the input, $x_t$, at the current moment. Specifically, the current input is controlled through the input gate, $i_t$ (i stands for information). Its expression is as follows:
$$i_t = \sigma_g(w_i x_t + R_i h_{t-1} + b_i) \tag{6}$$
The output gate implements the output stage, which determines which feature information becomes the output of the current state; this is controlled by the output gate, $o_t$. Its expression is as follows:
$$o_t = \sigma_g(w_o x_t + R_o h_{t-1} + b_o) \tag{7}$$
The role of the cell state in Figure 2 is to let information from past moments run directly along the chain, with only a few linear interactions. The candidate value is expressed as follows:
$$g_t = \sigma_c(w_g x_t + R_g h_{t-1} + b_g) \tag{8}$$
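To make Equations (1)–(8) concrete, one LSTM time step can be written out directly. This is an illustrative numpy sketch under assumed parameter shapes, not the authors' implementation:

```python
# One LSTM time step implementing the gate equations above (numpy sketch).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w, R, b):
    """w, R, b are dicts holding the per-gate parameters w_f..w_o, etc."""
    f_t = sigmoid(w["f"] @ x_t + R["f"] @ h_prev + b["f"])  # forget gate, Eq. (5)
    i_t = sigmoid(w["i"] @ x_t + R["i"] @ h_prev + b["i"])  # input gate, Eq. (6)
    o_t = sigmoid(w["o"] @ x_t + R["o"] @ h_prev + b["o"])  # output gate, Eq. (7)
    g_t = np.tanh(w["g"] @ x_t + R["g"] @ h_prev + b["g"])  # candidate, Eq. (8)
    c_t = f_t * c_prev + i_t * g_t   # cell state, Eq. (3): Hadamard products
    h_t = o_t * np.tanh(c_t)         # hidden state, Eq. (4)
    return h_t, c_t

# Tiny usage example with an assumed hidden size of 4 and input size of 2.
n, d = 4, 2
rng = np.random.default_rng(0)
w = {k: rng.standard_normal((n, d)) for k in "figo"}
R = {k: rng.standard_normal((n, n)) for k in "figo"}
b = {k: np.zeros(n) for k in "figo"}
h, c = lstm_step(rng.standard_normal(d), np.zeros(n), np.zeros(n), w, R, b)
```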
BiLSTM performs the same calculation as the LSTM but adds a reverse operation on top of it: the input sequence is reversed and processed again in the LSTM manner, and the final result is the stacking of the forward LSTM and reverse LSTM outputs, giving a symmetrical structure [25]. The structure of BiLSTM is shown in Figure 3. As before, the forget gate selectively forgets the input passed in by the previous node, the input gate learns new information to replace what was forgotten, and the output gate determines which feature information is output as the current unit state. A sketch of the bidirectional computation follows.
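Reusing lstm_step from the sketch above, the bidirectional computation can be expressed as follows (again illustrative; params_fwd and params_bwd are assumed (w, R, b) tuples for the two directions):

```python
# BiLSTM sketch: forward pass, reverse pass on the flipped sequence,
# then per-time-step concatenation of the two hidden-state sequences.
def bilstm(xs, params_fwd, params_bwd, n):
    def run(seq, params):
        h, c, outs = np.zeros(n), np.zeros(n), []
        for x_t in seq:
            h, c = lstm_step(x_t, h, c, *params)
            outs.append(h)
        return outs
    fwd = run(xs, params_fwd)
    bwd = run(xs[::-1], params_bwd)[::-1]  # reverse pass, re-aligned in time
    return [np.concatenate([f, bk]) for f, bk in zip(fwd, bwd)]
```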

4. Attention Mechanism

The attention mechanism stems from the study of human vision and can be described as a resource allocation model: at a given moment, attention focuses on one part of the target while ignoring the rest. Mathematically, it amounts to a weighting of the target data, and its core objective is to select, from the large volume of data, the information most critical to the current task [26,27]. Because different short subsequences in a long time series vary in importance, and salient features usually carry more information, an attention mechanism is introduced to analyze the importance of the different parts of the target sequence.
According to how task-relevant information is selected from the input, attention mechanisms can be divided into hard attention and soft attention. Hard attention selects the information at a single position in the input sequence as input, for example a randomly chosen piece of information or the piece with the highest probability of occurrence. Soft attention instead uses all pieces of information in the input sequence as input, weighted by relevance. Since soft attention is more common in practical applications, this paper adopts it as the attention weighting method. The input information is represented as $n+1$ groups $[x_0, x_1, \ldots, x_n]$, where $x_i$, $i = 0, 1, \ldots, n$, denotes one group. From the perspective of computing resources, it is unnecessary to feed all the information into the network; rather, the information most relevant to the task should be selected. The computation of the attention mechanism can be expressed in two stages: first, the attention distribution is calculated over all input information; then, the input information is weighted according to that distribution. The specific calculation proceeds in the following three steps.
In the first step, a scoring function is introduced to calculate the correlation between the $query$ and each input, here the cosine similarity:
$$\mathrm{similarity}(query, x_i) = \frac{query \cdot x_i}{\lVert query \rVert \, \lVert x_i \rVert}$$
In the second step, a softmax function numerically transforms the scores from the first step. Normalization converts the raw scores into a probability distribution in which the element weights sum to 1, and the exponential form also highlights the weights of the important elements in the sequence:
$$a_i = \mathrm{softmax}(simil_i) = \frac{e^{simil_i}}{\sum_{j=0}^{n} e^{simil_j}}$$
where $a_i$ represents the attention distribution and $simil_i$ is shorthand for $\mathrm{similarity}(query, x_i)$, the attention scoring function.
In the third step, the inputs, $x_i$, are weighted by the coefficients from the second step and summed to obtain the attention value:
$$\mathrm{Attention} = \sum_{i=0}^{n} a_i \cdot x_i$$
The attention value can then be used to complete the classification prediction task; a sketch of the three-step computation is given below. In view of the characteristics of the digital modulation signal itself, a new deep learning model is proposed in this paper; the network structure of the proposed C-BiLSTM-A model is shown in Figure 4.
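The three-step soft-attention computation above can be sketched in a few lines of numpy (illustrative only; the query vector and its dimension are assumptions):

```python
# Soft attention in three steps: cosine similarity, softmax, weighted sum.
import numpy as np

def soft_attention(query, xs):
    """query: vector of shape (d,); xs: the n+1 input vectors x_0..x_n."""
    X = np.stack(xs)                                   # (n+1, d)
    # Step 1: cosine similarity between the query and each input.
    scores = X @ query / (np.linalg.norm(X, axis=1) * np.linalg.norm(query))
    # Step 2: softmax turns the scores into the attention distribution a_i.
    e = np.exp(scores - scores.max())                  # shifted for stability
    a = e / e.sum()
    # Step 3: the attention value is the a_i-weighted sum of the inputs.
    return a @ X
```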
In this paper, a CNN was first applied to the training data set to extract spatial features; a BiLSTM was then introduced to further extract the temporal features of the signals, and both types of features were used for training, which improves recognition accuracy. Although BiLSTM is well suited to sequence classification and prediction, the signals to be identified are all long sequences in which different short subsequences differ in importance, so BiLSTM alone cannot accurately distinguish the contributions of the individual subsequences. Therefore, the attention mechanism was introduced after the BiLSTM to weigh this importance effectively by computing the attention distribution of the short subsequences within the long sequence. This further improves the recognition accuracy of digital signal modulation at low SNR.

5. Experimental Results and Analysis

5.1. Experimental Environment and Hyperparameter Selection

To verify the universality and effectiveness of the proposed algorithm, MATLAB simulation was used to generate the seven signal sets 2ASK, 4ASK, 4FSK, BPSK, QPSK, 8PSK and 64QAM. Each signal was obtained by pulse shaping of the baseband signal, shaping filtering, carrier modulation and the addition of simulated Gaussian noise. The TensorFlow-1.8.0 + Keras-2.2.4 framework was used to train the CNN + BiLSTM + attention (C-BiLSTM-A) network. The data set was divided into a training set and a test set in a ratio of 9:1, and k-fold cross-validation was used to evaluate the model, as sketched below.
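A sketch of this evaluation protocol, assuming the MATLAB-generated arrays signals and labels are already loaded, and assuming k = 10 (the paper does not state k):

```python
# 9:1 train/test split plus k-fold cross-validation on the training portion.
from sklearn.model_selection import train_test_split, KFold

X_train, X_test, y_train, y_test = train_test_split(
    signals, labels, test_size=0.1, random_state=42)

kfold = KFold(n_splits=10, shuffle=True, random_state=42)  # assumed k = 10
for fold, (tr_idx, va_idx) in enumerate(kfold.split(X_train)):
    # Train on X_train[tr_idx], y_train[tr_idx];
    # validate on X_train[va_idx], y_train[va_idx].
    ...
```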
Since the signal data used for training were not long, the stride of the convolution kernels was set to 2, and the kernel sizes were set to $1 \times 1$ and $1 \times 3$. The spatial features of the digital signals were extracted with the convolutional neural network. The cross-entropy loss was selected as the loss function, with the following formula:
$$L = -\sum_{k=1}^{7} p(k) \log(q(k))$$
where $p(k)$ represents the true probability distribution over the seven modulation classes and $q(k)$ represents the distribution predicted by the trained model. The cross-entropy loss measures the similarity between two distributions and is commonly used in multi-class classification problems. An optimizer was used to reduce the loss value and update the hidden layer parameters; Adam was selected, which offers low memory requirements, simple implementation and high efficiency. The learning rate was fixed at 0.001, and dropout was set to 0.1, which avoids the overfitting problem caused by too many parameters in the fully connected layer.
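Under the configuration above, compiling and training the model might look as follows (a sketch written against the current tf.keras API rather than Keras 2.2.4; batch size and epoch count are not given in the paper and are assumed):

```python
# Adam optimizer with fixed learning rate 0.001 and cross-entropy loss,
# matching the hyperparameters listed above.
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=0.001),
              loss="categorical_crossentropy",   # cross-entropy over 7 classes
              metrics=["accuracy"])
model.fit(X_train, y_train, validation_split=0.1,
          batch_size=128, epochs=50)             # assumed values
```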

5.2. Recognition Performance Comparison

This paper adopts the C-BiLSTM-A algorithm to identify the modulation mode of digital signals, with CNN, C-BiLSTM, SVM [28], random forest (RF) [29] and KNN [30] as comparison algorithms. The evaluation index was the recognition accuracy of the seven typical signals at SNRs from −20 dB to 18 dB. The experimental results are shown in Figure 5 below.
Figure 5 shows the recognition rates of the seven signals at different SNRs. As can be seen from the figure, the recognition rate of each digital signal increases with the SNR, and the recognition rate of the proposed algorithm was clearly higher than that of the comparison algorithms. At an SNR of 0 dB, the recognition accuracy of the proposed algorithm was 3–10% higher than that of the other algorithms.
Figure 5b shows the recognition accuracy for 4ASK at different signal-to-noise ratios. The recognition accuracy of the C-BiLSTM-A model increased rapidly from SNR = −14 dB, and by SNR = 2 dB it had stabilized and approached 100%; at low SNR in particular, it outperformed the other comparison models. Figure 5d shows that the recognition rate of the C-BiLSTM-A model for BPSK increased rapidly from SNR = −6 dB, was close to 90% at SNR = 6 dB and stabilized by SNR = 10 dB, whereas the comparison algorithms only reached a stable state at around SNR = 14 dB.
Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 list the recognition rates of the seven signals obtained with the different modulation recognition algorithms: C-BiLSTM-A, C-BiLSTM, CNN, random forest, KNN and SVM. The tables show that the C-BiLSTM-A algorithm had the highest recognition accuracy for all seven signals.
Table 1 shows the recognition accuracy for BPSK. The accuracy of the C-BiLSTM-A model increased steadily from SNR = −6 dB, reaching 0.914870 at SNR = 8 dB, whereas CNN and C-BiLSTM reached only 0.873964 and 0.830846 at the same SNR; C-BiLSTM-A was thus more than 4% more accurate than CNN and C-BiLSTM. Among the three traditional machine learning algorithms, SVM had the highest recognition rate at SNR = 8 dB, 0.826976. These results show that the recognition performance of the C-BiLSTM-A model was better than that of the other comparison models.
Table 2 shows the recognition accuracy for QPSK. The accuracy of the C-BiLSTM-A model increased rapidly from SNR = −4 dB, significantly outperforming the other models, and reached 0.829187 at SNR = 8 dB, whereas CNN and C-BiLSTM reached only 0.784964 and 0.819237 at the same SNR. C-BiLSTM-A therefore recognized QPSK better than the other comparison models.
Table 7 shows the recognition accuracy for 64QAM. The accuracy of the C-BiLSTM-A model increased rapidly from SNR = −4 dB, significantly outperforming the other models, and reached 0.933112 at SNR = 10 dB, after which it stabilized; at the same SNR, CNN and C-BiLSTM achieved only 0.883914 and 0.878386, so C-BiLSTM-A was about 5% and 6% more accurate, respectively. Among the three traditional machine learning algorithms, the highest recognition rate at SNR = 10 dB was 0.883361. These results show that the recognition performance of the C-BiLSTM-A model on 64QAM was better than that of the other comparison models.
Overall, the tables show that the recognition accuracy of the proposed method was higher than that of the comparison algorithms at low SNR.
The confusion matrices of the various algorithm models at SNR = 0 dB are shown in Figure 6. The rightmost column of each chart shows, for each predicted class, the percentage of samples correctly and incorrectly classified, i.e., the precision; the bottom row shows, for each true class, the percentage of samples correctly and incorrectly classified, i.e., the recall; and the lower-right cell shows the overall accuracy. As Figure 6 shows, the confusion matrix visualizes how each algorithm misclassifies samples, and the error rate of the proposed algorithm was smaller than that of the comparison algorithms. The C-BiLSTM-A model most easily confused the pair [QPSK, 8PSK], meaning its recognition of this group of modulation types was only moderate. In terms of precision, the recognition of 4ASK, 4FSK, BPSK and 64QAM was high, exceeding 90%, while that of QPSK, 2ASK and 8PSK was more moderate but still above 70%. In terms of recall, 4ASK exceeded 90% and 2ASK reached 70%, while the result for BPSK was comparatively moderate. Overall, however, the deep learning algorithm designed in this paper achieved very good recognition performance for digital signal modulation. A sketch of how such a matrix can be computed is given below.
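For reference, the confusion matrix and the per-class precision and recall described above can be computed from the trained model's predictions, for example with scikit-learn (a sketch; it assumes one-hot test labels y_test and the model from Section 5.1):

```python
# Confusion matrix plus per-class precision/recall and overall accuracy.
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

classes = ["2ASK", "4ASK", "4FSK", "BPSK", "QPSK", "8PSK", "64QAM"]
y_pred = model.predict(X_test).argmax(axis=1)
y_true = y_test.argmax(axis=1)              # assumes one-hot labels

cm = confusion_matrix(y_true, y_pred)
precision = precision_score(y_true, y_pred, average=None)  # rightmost column
recall = recall_score(y_true, y_pred, average=None)        # bottom row
overall_acc = np.trace(cm) / cm.sum()                      # lower-right cell
```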

6. Conclusions

This paper proposed the C-BiLSTM-A algorithm, which improves the recognition accuracy of digital signal modulation by combining a CNN, BiLSTM and the attention mechanism. Extensive experimental results show that the proposed C-BiLSTM-A model achieves good recognition results for the 2ASK, 4ASK, 4FSK, BPSK, QPSK, 8PSK and 64QAM modulation types, with recognition accuracy about 5% higher than the comparison algorithms at low signal-to-noise ratio. This shows that adding BiLSTM and an attention mechanism to a CNN is very helpful for improving the recognition performance of the model, and it also demonstrates that deep learning models generalize better than traditional machine learning methods in the field of digital signal modulation recognition. Several aspects of this work can still be improved; for example, the network structure and hyperparameters leave some room for optimization, which will be addressed in follow-up work.

Author Contributions

Conceptualization, writing—review and editing, A.W. and K.J.; methodology, software and validation, X.Q. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China under grant no. NSFC-61671190.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank Haibin Wu for his valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Alarabi, A.; Alkishriwo, O.A.S. Modulation Classification Based on Statistical Features and Artificial Neural Network. In Proceedings of the 2021 IEEE 1st International Maghreb Meeting of the Conference on Sciences and Techniques of Automatic Control and Computer Engineering MI-STA, Tripoli, Libya, 25–27 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 748–751.
2. Kim, K.; Polydoros, A. Digital Modulation Classification: The BPSK versus QPSK Case. In Proceedings of the Military Communications Conference, MILCOM 88, San Diego, CA, USA, 23–26 October 1988; IEEE: Piscataway, NJ, USA, 2003; Volume 2, pp. 431–436.
3. Freund, Y.; Schapire, R.E. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 1997, 55, 119–139.
4. Panagiotou, P.; Anastasopoulos, A.; Polydoros, A. Likelihood Ratio Tests for Modulation Classification. In Proceedings of the 21st Century Military Communications: Architectures and Technologies for Information Superiority, Los Angeles, CA, USA, 22–25 October 2000; IEEE: Piscataway, NJ, USA, 2002; Volume 2, pp. 670–674.
5. Nandi, A.; Azzouz, E. Algorithms for Automatic Modulation Recognition of Communication Signals. IEEE Trans. Commun. 1998, 46, 431–436.
6. Tan, Z.; Shi, J.; Hu, J. Low-Order Digital Modulation Recognition Algorithm Based on Random Forest. Commun. Technol. 2018, 51, 527–532.
7. Swami, A.; Sadler, B.M. Hierarchical Digital Modulation Classification Using Cumulants. IEEE Trans. Commun. 2000, 48, 416–429.
8. Wang, L.-X.; Ren, Y.-J.; Zhang, R.-H. Algorithm of Digital Modulation Recognition Based on Support Vector Machines. In Proceedings of the 2009 International Conference on Machine Learning and Cybernetics, Baoding, China, 12–15 July 2009; IEEE: Piscataway, NJ, USA, 2009; Volume 2, pp. 980–983.
9. Orlic, V.; Dukic, M.L. Automatic Modulation Classification Algorithm Using Higher-Order Cumulants under Real-World Channel Conditions. IEEE Commun. Lett. 2009, 13, 917–919.
10. Ya, T.; Lin, Y.; Wang, H. Modulation Recognition of Digital Signal Based on Deep Auto-Ancoder Network. In Proceedings of the 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C), Prague, Czech Republic, 25–29 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 256–260.
11. O’Shea, T.J.; Corgan, J.; Clancy, T.C. Convolutional Radio Modulation Recognition Networks. In Proceedings of the Communications in Computer and Information Science, Aberdeen, UK, 2–5 September 2016; Springer: Cham, Switzerland, 2016; pp. 213–226.
12. Peng, C.; Diao, W.; Du, Z. Digital Modulation Recognition Based on Deep Convolutional Neural Network. Comput. Meas. Control 2018, 26, 222–226.
13. Hou, T.; Zheng, Y. Communication Signal Modulation Recognition Based on Deep Learning. Radio Eng. 2019, 49, 796–800.
14. Peng, S.; Jiang, H.; Wang, H.; Alwageed, H.; Zhou, Y.; Sebdani, M.M.; Yao, Y.-D. Modulation Classification Based on Signal Constellation Diagrams and Deep Learning. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 718–727.
15. Wang, Y.; Liu, M.; Yang, J.; Gui, G. Data-Driven Deep Learning for Automatic Modulation Recognition in Cognitive Radios. IEEE Trans. Veh. Technol. 2019, 68, 4074–4077.
16. Xie, W.; Hu, S.; Liao, J.; Yu, C.; Zhu, P.; Peng, X. Deep Learning in Digital Modulation Recognition Using High Order Cumulants. IEEE Access 2019, 7, 63760–63766.
17. Tang, B.; Tu, Y.; Zhang, S.; Lin, Y. Digital Signal Modulation Classification with Data Augmentation Using Generative Adversarial Nets in Cognitive Radio Networks. IEEE Access 2018, 6, 15713–15722.
18. Tu, Y.; Lin, Y. Deep Neural Network Compression Technique towards Efficient Digital Signal Modulation Recognition in Edge Device. IEEE Access 2019, 7, 58113–58119.
19. Shi, W.; Liu, D.; Cheng, X.; Li, Y.; Zhao, Y. Particle Swarm Optimization-Based Deep Neural Network for Digital Modulation Recognition. IEEE Access 2019, 7, 104591–104600.
20. Liu, K.; Gao, W.; Huang, Q. Automatic Modulation Recognition Based on a DCN-BiLSTM Network. Sensors 2021, 21, 1577.
21. Li, G.; RangZhuoma, C.; Zhijie, C.; Chen, D. Tibetan Voice Activity Detection Based on One-Dimensional Convolutional Neural Network. In Proceedings of the 2021 3rd International Conference on Natural Language Processing (ICNLP), Beijing, China, 26–28 March 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 129–133.
22. Yu, J.; Yi, Z.; Zhou, J. Continuous Attractors of Lotka–Volterra Recurrent Neural Networks with Infinite Neurons. IEEE Trans. Neural Netw. 2010, 21, 1690–1695.
23. Palangi, H.; Deng, L.; Shen, Y.; Gao, J.; He, X.; Chen, J.; Song, X.; Ward, R. Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval. IEEE/ACM Trans. Audio Speech Lang. Process. 2016, 24, 694–707.
24. Kim, K.; Kim, D.K.; Noh, J.; Kim, M. Stable Forecasting of Environmental Time Series via Long Short Term Memory Recurrent Neural Network. IEEE Access 2018, 6, 75216–75228.
25. Wang, J.; Zhang, J.; Wang, X. Bilateral LSTM: A Two-Dimensional Long Short-Term Memory Model with Multiply Memory Units for Short-Term Cycle Time Forecasting in Re-entrant Manufacturing Systems. IEEE Trans. Ind. Inform. 2017, 14, 748–758.
26. Jiao, S.; Wang, J.; Hu, G.; Pan, Z.; Du, L.; Zhang, J. Joint Attention Mechanism for Person Re-Identification. IEEE Access 2019, 7, 90497–90506.
27. Cheng, K.; Yue, Y.; Song, Z. Sentiment Classification Based on Part-of-Speech and Self-Attention Mechanism. IEEE Access 2020, 8, 16387–16396.
28. Wu, J.; Yang, H. Linear Regression-Based Efficient SVM Learning for Large-Scale Classification. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2357–2369.
29. Yuan, H.; Fang, M.; Zhu, X. Hierarchical Sampling for Multi-Instance Ensemble Learning. IEEE Trans. Knowl. Data Eng. 2012, 25, 2900–2905.
30. Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Wang, R. Efficient kNN Classification with Different Numbers of Nearest Neighbors. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1774–1785.
Figure 1. Algorithm block diagram of C-BiLSTM-A.
Figure 2. LSTM internal unit structure.
Figure 3. Internal structure of BiLSTM.
Figure 4. The structure of C-BiLSTM-A.
Figure 5. Signal recognition rate comparison by different methods. (a) 2ASK, (b) 4ASK, (c) 4FSK, (d) BPSK, (e) QPSK, (f) 8PSK, (g) 64QAM.
Figure 6. Confusion matrix of C-BiLSTM-A.
Table 1. Recognition rate comparison of BPSK.

SNR (dB)  C-BiLSTM-A  CNN       C-BiLSTM  RF        KNN       SVM
−10       0           0         0         0         0         0
−8        0           0         0         0         0         0
−6        0.005528    0         0         0         0         0
−4        0.058043    0.030956  0.049198  0         0.019348  0.005528
−2        0.176893    0.142620  0.149254  0.105030  0.035931  0.029851
0         0.447761    0.434494  0.415699  0.372029  0.133223  0.145384
2         0.608071    0.552792  0.552239  0.508015  0.381426  0.405749
4         0.699281    0.608071  0.655611  0.622443  0.558320  0.553897
6         0.884467    0.829187  0.718629  0.767828  0.655611  0.655058
8         0.914870    0.873964  0.830846  0.802653  0.821448  0.826976
10        0.953566    0.897181  0.916528  0.886678  0.883914  0.883361
12        0.970149    0.938640  0.939746  0.903814  0.939193  0.929795
14        0.975677    0.939746  0.953566  0.909895  0.946379  0.941404
16        0.988944    0.975677  0.976783  0.931454  0.954118  0.964069
18        0.996130    0.995025  0.993920  0.950249  0.975677  0.982311
Table 2. Recognition rate comparison of QPSK.

SNR (dB)  C-BiLSTM-A  CNN       C-BiLSTM  RF        KNN       SVM
−10       0           0         0         0         0         0
−8        0           0         0         0         0         0
−6        0           0         0         0         0         0
−4        0.082919    0.005528  0.049198  0         0         0
−2        0.264787    0.110558  0.221669  0.036484  0.032615  0.034826
0         0.375898    0.276396  0.386401  0.224433  0.235489  0.220564
2         0.552792    0.387507  0.552239  0.378662  0.374793  0.374240
4         0.663350    0.608071  0.662797  0.546711  0.541183  0.532891
6         0.749033    0.696517  0.718629  0.663350  0.666114  0.672195
8         0.829187    0.784964  0.819237  0.754561  0.771697  0.766722
10        0.895522    0.841349  0.914870  0.804865  0.828082  0.815920
12        0.956329    0.939746  0.950802  0.883914  0.886678  0.923715
14        0.975677    0.939746  0.957435  0.959646  0.970702  0.965174
16        0.988944    0.952460  0.970149  0.970702  0.970702  0.970149
18        0.997789    0.957988  0.993920  0.988944  0.986733  0.976230
Table 3. Recognition rate comparison of 8PSK.

SNR (dB)  C-BiLSTM-A  CNN       C-BiLSTM  RF        KNN       SVM
−10       0           0         0         0         0         0
−8        0           0         0         0         0         0
−6        0           0         0         0         0         0
−4        0           0         0         0         0         0
−2        0.006081    0         0         0         0         0
0         0.055279    0.005528  0.003870  0.004975  0.002211  0.003317
2         0.113322    0.048093  0.035932  0.042565  0.054174  0.053068
4         0.252073    0.110558  0.104478  0.103372  0.103925  0.102266
6         0.498618    0.386954  0.388613  0.381426  0.320619  0.387507
8         0.671642    0.552792  0.552239  0.608071  0.541736  0.608071
10        0.826976    0.662797  0.608071  0.624102  0.660586  0.656716
12        0.937535    0.829187  0.816473  0.729685  0.820896  0.811498
14        0.972360    0.883914  0.876728  0.882808  0.880597  0.880044
16        0.994472    0.939746  0.945274  0.941404  0.943615  0.944168
18        0.995578    0.970702  0.993920  0.988391  0.975677  0.992261
Table 4. Recognition rate comparison of 2ASK.

SNR (dB)  C-BiLSTM-A  CNN       C-BiLSTM  RF        KNN       SVM
−10       0           0         0         0         0         0
−8        0.005528    0         0         0.002211  0.003317  0
−6        0.077944    0.014373  0.064677  0.048093  0.053068  0.004422
−4        0.184080    0.096186  0.183527  0.164732  0.183527  0.132670
−2        0.270315    0.190713  0.266446  0.215589  0.248756  0.239912
0         0.388060    0.336650  0.297402  0.331675  0.326147  0.309563
2         0.567164    0.532891  0.498065  0.577114  0.566611  0.508568
4         0.717523    0.649530  0.652294  0.634052  0.618574  0.613599
6         0.897734    0.818132  0.795467  0.771144  0.703151  0.714207
8         0.957435    0.927584  0.920398  0.825871  0.743505  0.773908
10        0.987286    0.953013  0.941404  0.829187  0.779436  0.847430
12        0.987839    0.976230  0.976230  0.845771  0.829187  0.884467
14        0.988944    0.988944  0.976783  0.884467  0.939746  0.945826
16        0.995025    0.988944  0.991155  0.934218  0.988944  0.994472
18        0.996683    0.991708  0.993920  0.967385  0.995578  0.996130
Table 5. Recognition rate comparison of 4ASK.

SNR (dB)  C-BiLSTM-A  CNN       C-BiLSTM  RF        KNN       SVM
−10       0.233831    0.174129  0.134881  0.165837  0.141515  0.164732
−8        0.482587    0.431730  0.342178  0.436705  0.360973  0.381426
−6        0.689884    0.563295  0.563847  0.552792  0.546711  0.558320
−4        0.779989    0.765064  0.663903  0.767828  0.663350  0.662797
−2        0.847430    0.813156  0.780542  0.829187  0.823107  0.801548
0         0.896628    0.862908  0.856827  0.883914  0.864013  0.877833
2         0.929243    0.946379  0.901050  0.884467  0.884467  0.885019
4         0.970149    0.952460  0.930901  0.957435  0.939746  0.939193
6         0.976230    0.982311  0.960199  0.970149  0.963516  0.957988
8         0.980652    0.983416  0.973466  0.977888  0.975677  0.974572
10        0.969044    0.981758  0.949143  0.957988  0.975677  0.974572
12        0.984522    0.987839  0.960199  0.983416  0.978441  0.978441
14        0.982863    0.976783  0.960752  0.980100  0.982863  0.982311
16        0.970702    0.990050  0.947485  0.970702  0.983969  0.982311
18        0.964069    0.974019  0.993920  0.964069  0.989497  0.993919
Table 6. Recognition rate comparison of 4FSK.

SNR (dB)  C-BiLSTM-A  CNN       C-BiLSTM  RF        KNN       SVM
−10       0.008292    0.001106  0.011609  0.012161  0.010503  0.005528
−8        0.116086    0.005528  0.112769  0.099502  0.104478  0.093975
−6        0.278054    0.221117  0.269210  0.220564  0.221669  0.215036
−4        0.497512    0.331675  0.463239  0.442233  0.441128  0.435600
−2        0.632394    0.552792  0.571034  0.552792  0.552239  0.553344
0         0.791598    0.718629  0.769486  0.749033  0.756219  0.663350
2         0.856827    0.828635  0.830293  0.773908  0.784964  0.774461
4         0.928690    0.884467  0.884467  0.884467  0.886125  0.883914
6         0.970149    0.939193  0.933665  0.914870  0.920951  0.913765
8         0.970149    0.939746  0.956329  0.939746  0.940851  0.939193
10        0.994472    0.967385  0.987286  0.970149  0.969596  0.941404
12        0.994472    0.983969  0.994472  0.970702  0.971808  0.941957
14        0.995578    0.983969  0.994472  0.994472  0.988391  0.970702
16        0.996683    0.989497  0.995025  0.994472  0.988944  0.986180
18        0.997789    0.989497  0.993920  0.995578  0.995025  0.994472
Table 7. Recognition rate comparison of 64QAM.

SNR (dB)  C-BiLSTM-A  CNN       C-BiLSTM  RF        KNN       SVM
−10       0           0         0         0         0         0
−8        0           0         0         0         0         0
−6        0.014373    0         0         0         0         0.000553
−4        0.116639    0.009950  0.008292  0.004975  0.001658  0.044223
−2        0.276396    0.104478  0.098397  0.098950  0.093422  0.185738
0         0.497512    0.275843  0.274185  0.251520  0.275843  0.425650
2         0.583195    0.497512  0.497512  0.496407  0.477612  0.521835
4         0.717523    0.563847  0.608071  0.574903  0.627971  0.627971
6         0.816473    0.711996  0.704256  0.663350  0.718629  0.742952
8         0.884467    0.808734  0.829187  0.773908  0.873411  0.828082
10        0.933112    0.883914  0.878386  0.883361  0.883361  0.854063
12        0.986733    0.885572  0.923162  0.927032  0.914870  0.909342
14        0.994472    0.939746  0.943062  0.944721  0.956329  0.922609
16        0.995025    0.968491  0.962963  0.965174  0.970149  0.928690
18        0.998342    0.994472  0.993920  0.988391  0.977336  0.939746