Article

Feature Fusion Based on Graph Convolution Network for Modulation Classification in Underwater Communication

School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(7), 1096; https://doi.org/10.3390/e25071096
Submission received: 19 June 2023 / Revised: 16 July 2023 / Accepted: 18 July 2023 / Published: 21 July 2023
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics III)

Abstract

Automatic modulation classification (AMC) of underwater acoustic communication signals is of great significance in national defense and marine military applications, since accurate modulation classification helps to grasp the parameters and characteristics of enemy communication systems. However, the poor underwater acoustic channel makes it difficult to classify modulation types correctly. Feature extraction and deep learning have proven to be effective methods for the modulation classification of underwater acoustic communication signals, but their performance is still limited by the complex underwater communication environment. Graph convolution networks (GCN) can learn the graph-structured information of data, making them effective for processing structured data. To improve the stability and robustness of AMC in underwater channels, we combined feature extraction and deep learning by fusing multi-domain features and deep features with a GCN, taking the relationships among the different features into account. Firstly, a feature graph was built using the properties of the features. Secondly, multi-domain features were extracted from the received signals and deep features were learned from the signals by a deep neural network. Thirdly, these features and the graph were used to construct the input of the GCN. The multi-domain features and deep features were then fused by the GCN, and the modulation types were finally classified from the GCN output by a softmax layer. We conducted experiments on a simulated dataset and a real-world dataset. The results show that AMC based on GCN achieves a significant performance improvement over current state-of-the-art methods and is robust in underwater acoustic channels.

1. Introduction

AMC is an important method for identifying the modulation types of received signals in underwater communication scenarios; it is useful for the monitoring and identification of communication interference, which are core technologies in spectrum surveillance and underwater acoustic countermeasures. Advanced AMC technology has broad application prospects on underwater unmanned platforms [1]. However, the underwater acoustic channel is full of multi-path fading and ocean ambient noise, which can significantly decrease the AMC performance for underwater acoustic communication signals.
AMC methods fall into two categories: maximum likelihood ratio algorithms and feature extraction algorithms. Due to the high computational complexity of the maximum likelihood ratio algorithm, most current studies on AMC focus on feature extraction. Commonly used features in AMC include instantaneous statistics (envelope, frequency, phase, etc.) [2], high-order cumulant features (HOC) [3,4], spectrum features [5,6,7], cyclostationary statistics features (CS) [8,9,10], and wavelet features [4,11]. In recent years, new feature extraction methods based on entropy have shown effectiveness in underwater signal processing [12,13]. These feature extraction methods are always followed by a classifier; common classifiers include neural networks [14], support vector machines (SVM) [15], and decision trees [16]. Several of these feature extraction methods and classifiers have been applied to the AMC of underwater acoustic communication signals. Zhao [17] introduced the Stockwell transform, a kind of spectrum feature, together with an SVM into modulation classification in underwater acoustic channels. Sanderson [8] proposed hierarchical blind modulation classification for underwater acoustic communication signals, using second-order cyclostationary features to separate binary phase shift keying (BPSK) and non-BPSK signals. Wu [9] proposed a modulation detection scheme for underwater acoustic communication signals through cyclostationary analysis, extracting the cyclic frequency/frequency-peak ratio to identify the modulation types. Ge [18] used HOC features and a spectrum correlation function for the AMC of underwater acoustic communication signals. The performance of feature-extraction-based AMC depends on the quality of the features.
As deep learning has shown remarkable results in many fields, many deep neural networks (DNN) have been proposed for various tasks. Convolution neural networks (CNN) [19] are used to process computer vision and natural language and to build advanced deep learning models such as ResNet [20], GoogleNet [21], VGGNet [22], and generative adversarial networks (GAN) [23]. Recurrent neural networks (RNN) [24] are commonly used to process time series data; widely used variants include long short-term memory (LSTM) [25] and the gated recurrent unit (GRU) [26]. Several AMC methods based on deep learning have been proposed in recent years. DNNs can learn high-level features from raw data automatically, without much prior knowledge, or can accept the features from a feature extraction algorithm and work as a classifier. Yao [1] proposed an AMC method based on deep complex networks, building two complex physical signal processing layers to improve AMC performance in underwater acoustic communication. Zhang [27] proposed an AMC method based on a multi-scale network to address the inter-class diversity problem. Zhou [28] proposed an AMC relation network for few-shot conditions. Yao [29] used a GAN to enhance the signals and showed good robustness under different underwater acoustic channels. O'Shea [30] studied the performance of deep learning for AMC, considering the effects of carrier frequency offset, symbol rate, and multi-path fading. Wang [31] combined deep learning with a zero-center normalized instantaneous amplitude tightness parameter to overcome the intra-class diversity problem, improving the classification of quadrature amplitude modulation signals. Yu [32] used LSTM for the AMC of non-cooperative underwater acoustic communication signals. Jiang [33] used a sparse autoencoder network to realize data transfer for the AMC of underwater acoustic communication signals. Ding [34] proposed a deep neural network for the AMC of underwater acoustic communication signals that combined a CNN with LSTM, using the CNN to learn from time domain IQ data and the LSTM to learn from amplitude and phase data.
In recent years, the underlying relationships among data have attracted more and more attention in several areas of machine learning. There have been studies that attempted to exploit the graph structure information in data processing [35]. A graph convolution network (GCN) builds a neural network based on the topology of the data graph. GCN can be used to classify elements of the graph or the graph itself. There have been many applications of GCN in the field of identification. Long [36] proposed a multi-modal relational graph network to dynamically integrate visual and kinematics information to boost gesture recognition accuracy in robotic surgery. Kipf [37] presented a graph convolution network for the semi-supervised classification of graph-structured data; the performance of the proposed model was validated on different datasets. In the field of AMC, Xuan [38] proposed an adaptive visibility graph algorithm to map a time series into a graph adaptively; they used the proposed method and GCN to achieve modulation classification of radio signals.
In this paper, we propose a new GCN-based method for the AMC of underwater acoustic communication signals. Traditional feature extraction methods have proven effective under some conditions. To improve the stability and robustness of AMC in underwater scenarios, we used a GCN to integrate the multi-domain features and deep features of the received underwater acoustic communication signals. The multi-domain features come from HOC, CS, and high-order moments (HOM). We extracted multi-domain features from the received signals and learned deep features from the signals. A feature graph was built using the properties of the features. Then, the multi-domain features and deep features were fused by the GCN. Finally, we classified the modulation type using the fused features. Our contributions are as follows:
  • We adopted GCN to AMC to improve the stability and robustness of AMC in underwater communication scenarios. GCN was used to fuse the multi-domain features and deep features of the received signals.
  • To take the relationships between multi-domain features and deep features into account, we built a graph of the multi-domain features and deep features using their properties.
  • The performance of the proposed method was validated using the simulated dataset in different underwater acoustic channels and a real-world dataset.
This paper is organized as follows. Section 2 introduces the proposed AMC method of underwater acoustic communication signals based on GCN. In Section 3, we evaluated the performance of the proposed method with a series of contrastive experiments using simulation and real-world datasets. Finally, the conclusion of the paper is given in Section 4.

2. Materials and Methods

2.1. Multi-Domain Features

We chose three feature extraction methods to extract the multi-domain features from the received signals: HOC, CS, and HOM.

2.1.1. High-Order Cumulant

High-order cumulant (HOC) [3,4,39] is a common feature extraction method for AMC. Since all cumulants of order higher than two are zero for a Gaussian distribution, the HOC of a signal in additive white Gaussian noise is ideally the HOC of the noise-free signal. Given a received signal $x(t)$, the p-th order mixed moment with q conjugate terms can be expressed as:
$M_{pq} = \mathrm{E}\left[ x(t)^{p-q} \, x^{*}(t)^{q} \right],$  (1)
where $\mathrm{E}[\cdot]$ is the expectation operator and $*$ denotes the complex conjugate. The HOC features of different orders used in our work can be expressed as:
$C_{20} = M_{20}$  (2)
$C_{21} = M_{21}$  (3)
$C_{40} = M_{40} - 3M_{20}^{2}$  (4)
$C_{41} = M_{41} - 3M_{21}M_{20}$  (5)
$C_{42} = M_{42} - M_{20}^{2} - 2M_{21}^{2}$  (6)
$C_{60} = M_{60} - 15M_{20}M_{40} + 30M_{20}^{3}$  (7)
$C_{61} = M_{61} - 5M_{40}M_{21} - 10M_{20}M_{41} + 30M_{21}M_{20}^{2}$  (8)
$C_{63} = M_{63} - 9M_{42}M_{21} - 9M_{20}^{2}M_{21} + 12M_{21}^{3}$  (9)
$C_{80} = M_{80} - 28M_{20}M_{60} - 35M_{40}^{2} + 420M_{20}^{2}M_{40} - 630M_{20}^{4}.$  (10)
The relationships among these HOC features were used to construct the feature graph. Each feature obviously has a relationship with $x(t)$; the internal relationships follow from Equations (2)–(10) and are summarized in Table 1.
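For concreteness, the following NumPy sketch shows how Equations (1)–(10) might be estimated from a sampled complex baseband vector. The sample-mean moment estimator and the function names are our own assumptions, not code from the paper.

```python
import numpy as np

def mixed_moment(x, p, q):
    """Sample estimate of M_pq = E[x(t)^(p-q) * conj(x(t))^q], Equation (1)."""
    return np.mean(x ** (p - q) * np.conj(x) ** q)

def hoc_features(x):
    """The nine HOC features of Equations (2)-(10) for a complex signal x."""
    M = {(p, q): mixed_moment(x, p, q)
         for p, q in [(2, 0), (2, 1), (4, 0), (4, 1), (4, 2),
                      (6, 0), (6, 1), (6, 3), (8, 0)]}
    C20 = M[2, 0]
    C21 = M[2, 1]
    C40 = M[4, 0] - 3 * M[2, 0] ** 2
    C41 = M[4, 1] - 3 * M[2, 1] * M[2, 0]
    C42 = M[4, 2] - M[2, 0] ** 2 - 2 * M[2, 1] ** 2
    C60 = M[6, 0] - 15 * M[2, 0] * M[4, 0] + 30 * M[2, 0] ** 3
    C61 = (M[6, 1] - 5 * M[4, 0] * M[2, 1] - 10 * M[2, 0] * M[4, 1]
           + 30 * M[2, 1] * M[2, 0] ** 2)
    C63 = (M[6, 3] - 9 * M[4, 2] * M[2, 1] - 9 * M[2, 0] ** 2 * M[2, 1]
           + 12 * M[2, 1] ** 3)
    C80 = (M[8, 0] - 28 * M[2, 0] * M[6, 0] - 35 * M[4, 0] ** 2
           + 420 * M[2, 0] ** 2 * M[4, 0] - 630 * M[2, 0] ** 4)
    return np.array([C20, C21, C40, C41, C42, C60, C61, C63, C80])
```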

2.1.2. Cyclostationary Statistics

Cyclostationary statistics (CS) is an important tool for signal detection, modulation classification, signal parameter estimation, etc. CS exploits the fact that communication signals are not accurately described as stationary but are more appropriately modeled as cyclostationary. We used second-order CS features in the proposed framework, including the spectral correlation density (SCD), denoted $S_X^{\alpha}(f)$ [10,40]. The SCD of a signal $x(t)$ is defined as:
$S_X^{\alpha}(f) = \lim_{T \to \infty} \lim_{\Delta T \to \infty} \frac{1}{\Delta T} \int_{-\Delta T/2}^{\Delta T/2} \frac{1}{T}\, X_T\!\left(t, f + \frac{\alpha}{2}\right) X_T^{*}\!\left(t, f - \frac{\alpha}{2}\right) \mathrm{d}t$  (11)

$X_T(t, f) = \int_{t - T/2}^{t + T/2} x(u)\, e^{-j 2 \pi f u}\, \mathrm{d}u,$  (12)
where $\alpha$ is the cyclic frequency. The normalized version of the SCD is the spectral coherence function (SCF), which can be calculated as:
$C_X^{\alpha}(f) = \dfrac{S_X^{\alpha}(f)}{\left[ S_X^{0}\!\left(f + \frac{\alpha}{2}\right) S_X^{0}\!\left(f - \frac{\alpha}{2}\right) \right]^{1/2}}.$  (13)
Both $S_X^{\alpha}(f)$ and $C_X^{\alpha}(f)$ of a signal can be visualized as images. To simplify the CS features, we used the frequency profile and the cyclic frequency profile of $C_X^{\alpha}(f)$ [10]:
$I(\alpha) = \max_{f} \left| C_X^{\alpha}(f) \right|$  (14)

$I(f) = \max_{\alpha} \left| C_X^{\alpha}(f) \right|.$  (15)
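As a rough illustration of Equations (11)–(15), the sketch below estimates the SCF on a discrete $(f, \alpha)$ grid using a time-smoothed cyclic periodogram and then takes the two profiles. The segmentation, windowing, and grid mapping are simplifications we assume for illustration; they are not the estimator of [10].

```python
import numpy as np

def cyclic_profiles(x, n_seg=64):
    """I(alpha) and I(f) of Equations (14)-(15) from a crude SCF estimate."""
    L = len(x) // n_seg
    segs = x[:n_seg * L].reshape(n_seg, L) * np.hanning(L)
    X = np.fft.fft(segs, axis=1)                   # short-time spectra X_T(t, f)
    S0 = np.mean(np.abs(X) ** 2, axis=0)           # SCD at alpha = 0 (the PSD)
    C = np.zeros((L, L))
    for a in range(L):                             # cyclic-frequency bin alpha
        Sa = np.mean(X * np.conj(np.roll(X, a, axis=1)), axis=0)
        C[a] = np.abs(Sa) / np.sqrt(S0 * np.roll(S0, a) + 1e-12)
    return C.max(axis=1), C.max(axis=0)            # I(alpha), I(f)
```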

2.1.3. High Order Moment

The high-order moment (HOM) [41] is a kind of spectrum feature. It is associated with the modulation order and is often used for the intra-class classification of phase shift keying signals. The K-th order HOM $U_K(f)$ of a signal $x(t)$ can be represented as:
$U_K(f) = \mathcal{F}\left( x^{K}(t) \right),$  (16)
where $\mathcal{F}$ denotes the Fourier transform and K is the order of the HOM. When K is an integer multiple of the modulation order, distinct spectral lines appear in $U_K(f)$. $U_2(f)$ and $U_4(f)$ are used in the following work.
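Equation (16) is essentially one line of code. The sketch below (function names are ours) returns the magnitude spectrum in which the lines appear; for example, squaring a BPSK signal removes the binary phase and exposes a line at twice the carrier, while QPSK needs the fourth power.

```python
import numpy as np

def hom_spectrum(x, K):
    """U_K(f) = F(x^K(t)), Equation (16); magnitude is used for line detection."""
    return np.abs(np.fft.fft(x ** K))

# The two HOM features used as graph nodes:
# u2, u4 = hom_spectrum(x, 2), hom_spectrum(x, 4)
```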

2.2. The Proposed AMC Method

The framework of the proposed method is illustrated in Figure 1. The graph was built based on the properties of the multi-domain features and deep features. The multi-domain features were extracted using different feature extraction methods. Different deep features were learned from the time domain and short-time Fourier transform (STFT) of the received signals, respectively. These features and the graph were used to construct the input matrices of GCN. We used GCN to fuse these features and used a softmax layer to classify the modulation types.

2.2.1. Graph Convolution Network

A graph convolution network (GCN) learns features from a graph. Unlike CNNs, whose convolutions operate on local regions of an image, the convolution operations in a GCN compute the response at a node based on its neighboring nodes as defined by the adjacency graph. A graph can be denoted as $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of edges; nodes represent objects or concepts, and edges represent their relationships. The adjacency matrix is denoted as $A$ and the node feature matrix as $F \in \mathbb{R}^{n \times d}$, where $n$ is the number of nodes and $d$ is the length of each node feature. The propagation rule of the GCN can be expressed as:
$F^{l+1} = \sigma\!\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} F^{l} W^{l} \right),$  (17)
where $\tilde{A} = A + I_N$ is the adjacency matrix of the graph $G$ with added self-connections, $I_N$ is the identity matrix, $\tilde{D}$ is the degree matrix of $\tilde{A}$, $W^{l}$ is a layer-specific trainable weight matrix, $F^{l}$ is the matrix of activations in the $l$-th layer, and $\sigma$ denotes an activation function; we used the rectified linear unit (ReLU).
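For reference, a single layer of Equation (17) can be written directly in NumPy. This is a hedged sketch of a generic GCN layer in the sense of [37]; the function and variable names are ours, and the trained weights W are supplied by the caller.

```python
import numpy as np

def gcn_layer(F, A_tilde, W):
    """One GCN layer: ReLU(D~^(-1/2) A~ D~^(-1/2) F W), i.e., Equation (17)."""
    d = A_tilde.sum(axis=1)                    # node degrees, the diagonal of D~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetrically normalized adjacency
    return np.maximum(0.0, A_hat @ F @ W)      # ReLU activation
```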

2.2.2. Features Fusion Based on GCN

(a)
Build graph for the features.
We built an undirected graph of the features. There are 15 nodes in the graph ($N = 15$): the time domain signal $x(t)$, the STFT $F(x)$, the nine HOC features, the two CS features, and the two HOM features. We denote each node as $v_i$; the node–feature pairs are shown in Table 2. The graph was built using the properties of the features: nodes were connected according to the mathematics of the feature extraction algorithms. For example, $C_{80}$ is calculated from $x(t)$, $C_{20}$, $C_{40}$, and $C_{60}$, so there are four edges between $C_{80}$ and these four nodes. The graph of these features is shown in Figure 2.
(b)
Extract features for each node.
Deep features include features from the time domain and the STFT of the received signals. We used deep autoencoder networks (DAE) [42] to extract the deep features from the time domain signals and their STFT. The architecture of the DAE is shown in Figure 3. Since the time domain signal is a 1D complex vector and the STFT is a 2D matrix, we used a 1D-DAE and a 2D-DAE to extract deep features from the time domain and the STFT, respectively. The real and imaginary parts of the time domain signal were treated as two channels. Each DAE outputs a 1D deep feature vector of length 128.
The multi-domain features were extracted using the corresponding feature extraction methods. Each HOC feature is a single value, while the CS and HOM features are 1D vectors. We used a 1D-DAE to compress these features to the same length as the deep features.
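The paper does not state the DAE layer sizes; the Keras sketch below shows one plausible 1D-DAE whose 128-unit bottleneck serves as the deep feature. All layer widths and the training configuration here are assumptions.

```python
import tensorflow as tf

def build_1d_dae(input_len, code_len=128):
    """Dense 1D autoencoder; the bottleneck output is the length-128 feature."""
    inp = tf.keras.Input(shape=(input_len,))
    h = tf.keras.layers.Dense(512, activation="relu")(inp)
    code = tf.keras.layers.Dense(code_len, activation="relu")(h)
    h = tf.keras.layers.Dense(512, activation="relu")(code)
    out = tf.keras.layers.Dense(input_len)(h)
    autoencoder = tf.keras.Model(inp, out)
    autoencoder.compile(optimizer="adam", loss="mse")
    encoder = tf.keras.Model(inp, code)          # used to extract deep features
    return autoencoder, encoder
```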
(c)
Construct the input of GCN.
The input of the GCN includes three matrices: the adjacency matrix $\tilde{A}$, the degree matrix $\tilde{D}$, and the feature matrix $F$. $\tilde{A}$ and $\tilde{D}$ can be extracted from the feature graph. The number of nodes is 15, sorted in the order shown in Table 2. $\tilde{A}$ expresses the relationships between the nodes: element $(v_i, v_j) = 1$ indicates that the two nodes are related, $(v_i, v_j) = 0$ indicates that they are not, and $(v_i, v_j) = 1$ when $i = j$ (self-connection). Then, $\tilde{A}$ can be expressed as Equation (18). The rows and columns correspond to the nodes in Table 2 and are separated by dotted lines according to the corresponding feature domains.
(Equation (18): the 15 × 15 adjacency matrix $\tilde{A}$, rendered as an image in the original article; its entries encode the edges of Figure 2.)
D ˜ is a diagonal matrix, which can be expressed as:
$\tilde{D} = \mathrm{diag}(15, 2, 9, 6, 6, 5, 5, 5, 6, 5, 5, 2, 2, 2, 2).$  (19)
The size of $F$ was set to 15 × 128, so each node feature must have length 128. The HOC features were zero-padded to length 128.
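Putting step (c) together, the sketch below builds the three input matrices. The edge list encodes Figure 2 as we read it from Table 1 (node indices are 0-based versions of Table 2), and its degrees reproduce Equation (19); the padding helper is an assumed implementation of the zero-padding described above.

```python
import numpy as np

N, d = 15, 128
edges = [(0, j) for j in range(1, N)]                 # x(t) connects to every feature
edges += [(2, 4), (2, 5), (2, 6), (2, 7), (2, 8), (2, 9), (2, 10),  # C20
          (3, 5), (3, 6), (3, 8), (3, 9),             # C21 with C41, C42, C61, C63
          (4, 7), (4, 8), (4, 10),                    # C40 with C60, C61, C80
          (5, 8), (6, 9), (7, 10)]                    # C41-C61, C42-C63, C60-C80
A_tilde = np.eye(N)                                   # self-connections (A + I_N)
for i, j in edges:
    A_tilde[i, j] = A_tilde[j, i] = 1.0
D_tilde = np.diag(A_tilde.sum(axis=1))                # diag(15, 2, 9, 6, ...), Eq. (19)

def pad_to(v, d=128):
    """Zero-pad a (possibly scalar) feature to length d, as done for HOC values."""
    v = np.ravel(np.asarray(v, dtype=float))
    return np.pad(v, (0, d - v.size))

# F = np.stack([pad_to(f) for f in node_features])    # node_features: 15 entries
```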
(d)
Feature fusion and modulation classification.
We used two GCN layers to learn features from the input graph and features. The term $\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$ in Equation (17) can be calculated in a pre-processing step. The output of the last GCN layer was flattened to a 1D vector; a fully connected layer then connects the GCN output to a softmax layer that classifies the modulation types. The weights of these layers were trained using gradient descent.
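A forward pass of this fusion-and-classification stage might then look as follows, reusing gcn_layer and A_tilde from the sketches above. The hidden widths and the random (untrained) weights are placeholders for the parameters learned by gradient descent; n_classes = 9 matches the modulation set of Section 3.1.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((128, 64))       # first GCN layer (width assumed)
W2 = 0.1 * rng.standard_normal((64, 32))        # second GCN layer (width assumed)
W_fc = 0.1 * rng.standard_normal((15 * 32, 9))  # fully connected layer, 9 classes

def classify(F, A_tilde):
    """Two GCN layers -> flatten -> fully connected -> softmax over 9 types."""
    H = gcn_layer(gcn_layer(F, A_tilde, W1), A_tilde, W2)
    logits = H.reshape(-1) @ W_fc
    e = np.exp(logits - logits.max())
    return e / e.sum()
```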

3. Experiments and Discussion

We conducted a series of contrastive experiments in this section to verify the performance of the proposed AMC method:
(1)
We analyzed the influence of the different features.
(2)
We analyzed the influence of the edges inside HOC.
(3)
We compared the performance of the proposed method with other AMC methods.
(4)
The performance of the proposed method was verified using real-world underwater acoustic communication signals.
The results in this section were the average values over multiple runs.

3.1. Dataset and Parameters

Signal Generation

We considered several commonly used modulation types in underwater acoustic communication scenarios, including frequency shift keying (FSK) (2FSK, 4FSK, 8FSK), phase shift keying (PSK) (BPSK, QPSK, 8PSK), and quadrature amplitude modulation (QAM) (16QAM, 32QAM, 64QAM). In the simulation condition, the SNR ranges from −9 dB to 21 dB with an interval of 3 dB. The received signals were expressed as sampled complex baseband; the dimension of each sample was 3000 × 2 and the duration was 0.25 s. The number of samples of each modulation type at each SNR was 10,000, so the total number of samples was 990,000. Of the samples, 75% were used for training and 25% for testing. The parameters of each modulation type are shown in Table 3; the frequency separation of the FSK modulations was 200 Hz.
We used simulated underwater acoustic channels with multi-path fading. The sound velocity profile is shown in Figure 4; the depth of the sea is 460 m.
There was one transmitter ( T x ) and two receivers ( R x 1 and R x 2 ) in the simulated underwater acoustic communication channel, as shown in Figure 5. The horizontal distances between the transmitter and the two receivers were 3 km and 5 km, respectively. The depths of the transmitter and receivers were 30 m and 80 m, respectively.
The time delays and amplitudes of the two multi-path fading channels are shown in Figure 6, in which the modules of the amplitudes are normalized to [0,1].

3.2. Experiment Results Analysis

A series of contrastive experiments was carried out in the following work. In each simulation experiment, we calculated the classification accuracy at each SNR point and the average accuracy over all SNRs, which can be expressed as:
$\overline{Acc} = \dfrac{1}{N_{snr}} \sum_{i=1}^{N_{snr}} Acc_i.$  (20)
Here, $Acc_i$ is the classification accuracy at the $i$-th SNR point from −9 dB to 21 dB, $\overline{Acc}$ is the average accuracy over all SNRs, and $N_{snr}$ is the number of SNR points. We analyzed the performance in the contrastive experiments mainly using the average accuracy.

3.2.1. The Analysis of the Influence of the Different Features

We used an ablation experiment to analyze the influence of the different features and verify the effectiveness of the proposed method. The features were extracted individually from the signals. In the following contrastive experiments, the features from each domain were replaced by white Gaussian noise (WGN) in turn, as sketched after this paragraph. Each experiment was carried out in the two multi-path channels.
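The per-domain replacement can be expressed as overwriting the corresponding rows of the feature matrix F with WGN; the row indices follow Table 2 (0-based here), and the helper name is ours.

```python
import numpy as np

def replace_with_wgn(F, rows, rng=np.random.default_rng(0)):
    """Ablate one feature domain by overwriting its node rows with WGN."""
    rows = list(rows)
    F = F.copy()
    F[rows] = rng.standard_normal((len(rows), F.shape[1]))
    return F

# e.g., the "without HOC" setting replaces nodes v3..v11:
# F_no_hoc = replace_with_wgn(F, rows=range(2, 11))
```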
(a)
Baseline performance.
The performance of the proposed method with all features was used as the baseline; the classification performance is shown in Figure 7.
The mean accuracies in Ch1 and Ch2 are 82.9% and 81.4%, respectively. To analyze the classification of each modulation type, we visualized the features from the fully connected layer using t-SNE [43], as shown in Figure 8. We can see from Figure 8 that, in the multi-path fading channels, the classification errors mainly occur among different modulation orders of the same modulation mode.
(b)
Deep feature of time domain.
To analyze the contribution of the deep features from the time domain, we first replaced them with WGN while keeping the other conditions the same. The performance comparison is shown in Figure 9. The average accuracies using deep features from the time domain in Ch1 and Ch2 are 82.9% and 81.4%; without them, they fall to 59.3% and 50.8%, drops of 23.6 and 30.6 percentage points in the two channels, respectively. It is obvious that the deep features from the time domain contribute greatly to the AMC performance.
(c)
Deep features of STFT.
Secondly, the deep features from the STFT were replaced by WGN and the other conditions were kept the same. Figure 10 shows the performance comparison. The average accuracies without deep features from the STFT in Ch1 and Ch2 are 79.7% and 71.6%, drops of 3.2 and 9.8 percentage points in the two channels, respectively. The influence of the deep features from the STFT was smaller than that of the time domain features.
(d)
HOC features.
Thirdly, the nine HOC features were replaced by WGN and the other conditions were kept the same. The performance comparison is illustrated in Figure 11. The average accuracies without HOC features in Ch1 and Ch2 are 74.3% and 73.5%, drops of 8.6 and 7.9 percentage points in the two channels, respectively. Figure 11 shows that the HOC features mainly influence the AMC performance at higher SNRs.
(e)
CS features.
Fourthly, the two CS features were replaced by WGN and the other conditions were kept the same. The performance comparison is illustrated in Figure 12. The average accuracies without CS features in Ch1 and Ch2 are 78.7% and 78.9%, drops of 4.2 and 2.5 percentage points in the two channels, respectively.
(f)
HOM features.
Finally, the two HOM features were replaced by WGN and the other conditions were kept the same. The performance comparison is shown in Figure 13. The average accuracies without HOM features in Ch1 and Ch2 are 79.2% and 78.1%, drops of 3.7 and 3.3 percentage points in the two channels, respectively.
A summary of this ablation experiment is given in Table 4, which shows that multi-domain feature fusion based on GCN is quite effective for the AMC of underwater acoustic communication signals. All the features contribute to the AMC performance; the deep features from the time domain are the most indispensable for exact classification.

3.2.2. The Analysis of the Influence of the Edges Inside HOC

Nine features were extracted using the HOC algorithm, and the relationships among them are complex; we constructed the corresponding edges based on the calculation relationships of these features. To analyze the influence of these edges, a contrastive experiment was carried out in which a new adjacency matrix $\tilde{A}_1$ and degree matrix $\tilde{D}_1$ were used as the input of the GCN. Since the edges inside HOC are no longer considered, $\tilde{A}_1$ and $\tilde{D}_1$ can be expressed as:
(Equation (21): the adjacency matrix $\tilde{A}_1$, equal to $\tilde{A}$ in Equation (18) with all edges among the HOC nodes removed; rendered as an image in the original article.)
$\tilde{D}_1 = \mathrm{diag}(15, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2).$  (22)
The network was trained in the same way as the baseline. The classification results are shown in Figure 14, in which the baseline performance was used for comparison.
The average accuracies without the edges inside HOC are 79.5% and 78.4%. The comparison of accuracies is shown in Table 5; removing the edges inside HOC reduces the accuracy by 3.4 and 3.0 percentage points in the two channels, respectively. This comparison shows that making use of the relationships between the HOC features improves the classification performance.

3.2.3. Comparison with Other State-of-the-Art AMC Methods

To demonstrate the effectiveness of the proposed GCN-based AMC method, we compared its performance with state-of-the-art AMC methods. The compared methods include deep learning methods (basic CNN [44], InceptionV3 [45], GAN [29], VGGnet [30], ResNet [46,47], LSTM [48,49], and a deep complex network (DCN) [1]) and feature extraction methods (HOC [3,4] with an SVM classifier, CS [50] with a neural network classifier, and the continuous wavelet transform (CWT) [11,51] with an SVM classifier). We carried out the comparison experiments in Ch1 and Ch2, respectively. The performance comparison is shown in Figure 15 and the average accuracy comparison in Table 6. The proposed method has obvious advantages in both underwater acoustic channels.

3.2.4. Performance Analysis Using Real-World Dataset

To verify the performance of the proposed AMC method in a real-world underwater scenario, we carried out an experiment using a real-world underwater acoustic communication dataset recorded in the South China Sea. The data were recorded with an omnidirectional hydrophone placed about 10 m below the surface; the transmitter was about 3 km from the receiver, and the relative speed of the transmitter and receiver was less than 5 m/s. The modulation types in this dataset were 2FSK, 4FSK, BPSK, QPSK, 16QAM, and 32QAM; the SNR of the received signals was about 3–5 dB, and there were 100 samples of each modulation type. The classification results are shown in Table 7. The proposed method classifies the real-world dataset well, with an average accuracy of 75.3%.

3.2.5. Computational Cost Analysis

Computational cost is an important performance metric for AMC. To analyze the computational cost of our proposed method, we measured the time consumed by the modulation type prediction process, which consists of two steps: extracting the multi-domain features and deep features, and the forward propagation of the GCN and its subsequent layers. The first step involves complex calculations and requires a significant amount of computation. The second step was implemented in the CUDA environment and consumed fewer computing resources thanks to GPU acceleration. To accelerate the first step, we redesigned the feature extraction operations using TensorFlow in the CUDA environment, which not only sped up the computation but also integrated feature extraction and the forward propagation of the GCN into one computational framework. We compared our proposed method with the DCN from our previous work [1]. Figure 16 shows the computational cost comparison of the different methods; GCN1 denotes the first step without GPU acceleration and GCN2 the first step with GPU acceleration.
As can be seen, the duration of the feature extraction process was greatly reduced by the redesigned operations. Although the proposed method involves much more complex calculations, it achieves better performance while maintaining a computational cost close to that of the DCN.

4. Conclusions

In this paper, we presented a novel feature fusion method based on GCN for the AMC of underwater acoustic communication signals. The experimental results indicate that the proposed method can integrate multi-domain features and deep features to achieve a state-of-the-art AMC performance. The conclusions are highlighted as follows:
(1)
To improve the stability and robustness of AMC in underwater scenarios, a new feature fusion method based on a graph convolution network was proposed to fuse the multi-domain features and deep features of underwater acoustic communication signals. The feature extraction methods and deep learning methods were effectively integrated into the constructed feature fusion framework.
(2)
A graph was built for the multi-domain features and deep features based on their properties. The proposed feature fusion method can make full use of the relationships among these features. The experiments have shown that making use of the relationships can improve the AMC performance.
(3)
The comparative experiments indicate that the feature fusion method based on GCN can significantly improve the AMC performance in underwater scenarios and achieve excellent classification performance in different simulated and real-world underwater acoustic channels.

Author Contributions

Conceptualization, H.Y.; Data curation, H.Y.; Formal analysis, X.Y.; Funding acquisition, H.Y.; Methodology, X.Y.; Project administration, H.Y.; Resources, X.Y.; Software, X.Y.; Supervision, M.S.; Validation, X.Y. and H.Y.; Visualization, X.Y.; Writing—original draft, X.Y.; Writing—review & editing, H.Y. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 52171339).

Data Availability Statement

The data presented in this paper are available after contacting the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AMC: Automatic modulation classification
GCN: Graph convolution network
HOC: High-order cumulant
CS: Cyclostationary statistics
DNN: Deep neural network
CNN: Convolution neural network
GAN: Generative adversarial network
RNN: Recurrent neural network
LSTM: Long short-term memory
GRU: Gated recurrent unit
HOM: High-order moment
SCD: Spectral correlation density
SCF: Spectral coherence function
STFT: Short-time Fourier transform
WGN: White Gaussian noise
FSK: Frequency shift keying
PSK: Phase shift keying
QAM: Quadrature amplitude modulation
DCN: Deep complex network

References

  1. Yao, X.H.; Yang, H.H.; Sheng, M.P. Automatic Modulation Classification for Underwater Acoustic Communication Signals Based on Deep Complex Networks. Entropy 2023, 25, 318.
  2. Azzouz, E.E.; Nandi, A.K. Automatic identification of digital modulation types. Signal Process. 1995, 47, 55–69.
  3. Vanhoy, G.; Asadi, H.; Volos, H.; Bose, T. Multi-receiver modulation classification for non-cooperative scenarios based on higher-order cumulants. Analog. Integr. Circuits Signal Process. 2021, 106, 1–7.
  4. Wenxuan, C.; Yuan, J.; Lin, Z.; Yang, Z. A New Modulation Recognition Method Based on Wavelet Transform and High-order Cumulants. J. Phys. Conf. Ser. 2021, 1738, 12025.
  5. Fang, T.; Wang, Q.; Zhang, L.; Liu, S. Modulation Mode Recognition Method of Non-Cooperative Underwater Acoustic Communication Signal Based on Spectral Peak Feature Extraction and Random Forest. Remote Sens. 2022, 14, 1603.
  6. Jeong, S.; Lee, U.; Kim, S.C. Spectrogram-Based Automatic Modulation Recognition Using Convolutional Neural Network. In Proceedings of the 10th International Conference on Ubiquitous and Future Networks, ICUFN 2018, Prague, Czech Republic, 3–6 July 2018; pp. 843–845.
  7. Fan, H.; Yang, Z.; Cao, Z. Automatic Recognition for common used modulations in satellite communication. J. China Inst. Commun. 2004, 25, 140–149.
  8. Sanderson, J.; Li, X.; Liu, Z.; Wu, Z. Hierarchical Blind Modulation Classification for Underwater Acoustic Communication Signal via Cyclostationary and Maximal Likelihood Analysis. In Proceedings of the Military Communications Conference, MILCOM 2013, San Diego, CA, USA, 18–20 November 2013; pp. 29–34.
  9. Wu, Z.; Yang, T.C.; Liu, Z.; Chakarvarthy, V. Modulation detection of underwater acoustic communication signals through cyclostationary analysis. In Proceedings of the Military Communications Conference, MILCOM 2012, Orlando, FL, USA, 29 October–1 November 2012; pp. 1–6.
  10. Like, E.; Chakravarthy, V.; Ratazzi, P.; Wu, Z. Signal Classification in Fading Channels Using Cyclic Spectral Analysis. EURASIP J. Wirel. Comm. Netw. 2009, 2009, 879812.
  11. Chen, J.; Kuo, Y.H.; Li, J.D.; Ma, Y.B. Modulation Identification of Digital Signals with Wavelet Transform. J. Electron. Inf. Technol. 2006, 28, 2026–2029.
  12. Li, Y.; Tang, B.; Geng, B.; Jiao, S. Fractional Order Fuzzy Dispersion Entropy and Its Application in Bearing Fault Diagnosis. Fractal Fract. 2022, 6, 544.
  13. Li, Y.; Tang, B.; Jiao, S. SO-slope entropy coupled with SVMD: A novel adaptive feature extraction method for ship-radiated noise. Ocean. Eng. 2023, 280, 114677.
  14. Lippmann, R.P. Pattern classification using neural networks. IEEE Commun. Mag. 1989, 27, 47–50.
  15. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  16. Safavian, S.R.; Landgrebe, D. A survey of decision tree classifier methodology. IEEE Trans. Syst. Man Cybern. 1991, 21, 660–674.
  17. Zhao, Z.; Wang, S.; Zhang, W.; Xie, Y. A novel Automatic Modulation Classification method based on Stockwell-transform and energy entropy for underwater acoustic signals. In Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing, Hong Kong, China, 5–8 August 2016.
  18. Ge, Y.; Zhang, X.; Zhou, Q. Modulation Recognition of Underwater Acoustic Communication Signals Based on Joint Feature Extraction. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; pp. 1–4.
  19. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 5, pp. 1097–1105.
  20. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  21. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  22. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations 2015, ICLR, San Diego, CA, USA, 7–9 May 2015.
  23. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014, Montreal, QC, Canada, 8–13 December 2014; Volume 3, pp. 2672–2680.
  24. Arjovsky, M.; Shah, A.; Bengio, Y. Unitary Evolution Recurrent Neural Networks. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 20–22 June 2016; Volume 48, pp. 1120–1128.
  25. Danihelka, I.; Wayne, G.; Uria, B.; Kalchbrenner, N.; Graves, A. Associative Long Short-Term Memory. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 20–22 June 2016; Volume 48, pp. 1986–1994.
  26. Cho, K.; Merrienboer, B.V.; Bahdanau, D.; Bengio, Y. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. Comput. Sci. 2014, 103–111.
  27. Zhang, H.; Zhou, F.; Wu, Q.; Wu, W.; Hu, R.Q. A Novel Automatic Modulation Classification Scheme Based on Multi-Scale Networks. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 97–110.
  28. Zhou, Q.; Zhang, R.H.; Mu, J.S.; Zhang, H.M.; Zhang, F.P.; Jing, X.J. AMCRN: Few-Shot Learning for Automatic Modulation Classification. IEEE Commun. Lett. 2022, 26, 542–546.
  29. Yao, X.; Yang, H.; Li, Y. Modulation Identification of Underwater Acoustic Communications Signals Based on Generative Adversarial Networks. In Proceedings of the OCEANS 2019—Marseille, Marseille, France, 17–20 June 2019; pp. 1–6.
  30. O’Shea, T.J.; Roy, T.; Clancy, T.C. Over-the-Air Deep Learning Based Radio Signal Classification. IEEE J. Sel. Top. Signal Process. 2018, 12, 168–179.
  31. Wang, M.; Fan, Y.; Fang, S.; Cui, T.; Cheng, D. A Joint Automatic Modulation Classification Scheme in Spatial Cognitive Communication. Sensors 2022, 22, 6500.
  32. Yu, X.; Li, L.; Yin, J.; Shao, M.; Han, X. Modulation Pattern Recognition of Non-cooperative Underwater Acoustic Communication Signals Based on LSTM Network. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; pp. 1–5.
  33. Jiang, N.; Wang, B. Modulation Recognition of Underwater Acoustic Communication Signals Based on Data Transfer. In Proceedings of the 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 24–26 May 2019; pp. 243–246.
  34. Lida, D.; Shilian, W.; Wei, Z. Modulation Classification of Underwater Acoustic Communication Signals Based on Deep Learning. In Proceedings of the 2018 OCEANS-MTS/IEEE Kobe Techno-Oceans, OCEANS-Kobe 2018, Kobe, Japan, 28–31 May 2018; pp. 1–4.
  35. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The Graph Neural Network Model. IEEE Trans. Neural Netw. 2009, 20, 61–80.
  36. Long, Y.; Wu, J.Y.; Lu, B.; Jin, Y.; Unberath, M.; Liu, Y.H.; Heng, P.A.; Dou, Q. Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 13346–13353.
  37. Kipf, T.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
  38. Xuan, Q.; Zhou, J.; Qiu, K.; Chen, Z.; Xu, D.; Zheng, S.; Yang, X. AvgNet: Adaptive Visibility Graph Neural Network and Its Application in Modulation Classification. IEEE Trans. Netw. Sci. Eng. 2022, 9, 1516–1526.
  39. Gao, Y.Q.; Chen, J.A. Recognition of Digital Modulation Signals Based on High Order Cumulants. Wirel. Commun. Technol. 2006, 1, 26–29.
  40. Wu, Z.; Yang, T.C. Blind cyclostationary carrier frequency and symbol rate estimation for underwater acoustic communication. In Proceedings of the 2012 IEEE International Conference on Communications (ICC), Ottawa, ON, Canada, 10–15 June 2012; pp. 3482–3486.
  41. Reichert, J. Automatic classification of communication signals using higher order statistics. In Proceedings of the 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, San Francisco, CA, USA, 23–26 March 1992; Volume 5, pp. 221–224.
  42. Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, ICML 2008, Helsinki, Finland, 5–9 July 2008; pp. 1096–1103.
  43. van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
  44. Yao, X.; Yang, H.; Li, Y. Modulation recognition of underwater acoustic communication signals based on convolutional neural networks. Unmanned Syst. Technol. 2018, 1, 68–74.
  45. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
  46. Wu, P.; Sun, B.; Su, S.; Wei, J.; Wen, X. Automatic Modulation Classification Based on Deep Learning for Software-Defined Radio. Math. Probl. Eng. 2020, 2020, 1–13.
  47. Qi, P.; Zhou, X.; Zheng, S.; Li, Z. Automatic Modulation Classification Based on Deep Residual Networks With Multimodal Information. IEEE Trans. Cogn. Commun. Netw. 2021, 7, 21–33.
  48. Karahan, S.N.; Kalaycioğlu, A. Deep Learning Based Automatic Modulation Classification With Long-Short Term Memory Networks. In Proceedings of the 2020 28th Signal Processing and Communications Applications Conference (SIU), Gaziantep, Turkey, 5–7 October 2020; pp. 1–4.
  49. Hamza, M.A.; Hassine, S.B.; Larabi-Marie-Sainte, S.; Nour, M.K.; Al-Wesabi, F.N.; Motwakel, A.; Hilal, A.M.; Al Duhayyim, M. Optimal Bidirectional LSTM for Modulation Signal Classification in Communication Systems. Cmc-Comput. Mater. Contin. 2022, 72, 3055–3071.
  50. Liu, X.; Li, C.J.; Jin, C.T.; Leong, P.H.W. Wireless Signal Representation Techniques for Automatic Modulation Classification. IEEE Access 2022, 10, 84166–84187.
  51. Chen, J.; Kuo, Y.; Li, J.; Fu, F.; Ma, Y. Digital modulation identification by wavelet analysis. In Proceedings of the Sixth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA’05), Las Vegas, NV, USA, 16–18 August 2005; pp. 29–34.
Figure 1. Framework of the proposed AMC method based on GCN.
Figure 2. The graph of the multi-domain features and deep features. The graph is undirected; black edges denote relationships between different domains, and blue edges denote relationships between nodes of the same domain.
Figure 3. The architecture of the DAE.
Figure 4. Sound velocity profile.
Figure 5. Diagram of the underwater acoustic channel.
Figure 6. Time delays and amplitudes of the two multi-path fading channels.
Figure 7. Performance of the proposed method in the two underwater multi-path channels.
Figure 8. Visualization of the features of the fully connected layer: (a) features learned from the signals in Ch1 (SNR = 6 dB); (b) features learned from the signals in Ch2 (SNR = 6 dB).
Figure 9. Performance comparison with and without deep features from the time domain; F-Y and F-N mean with and without such deep features, respectively.
Figure 10. Performance comparison with and without deep features from the STFT; F-Y and F-N mean with and without such deep features, respectively.
Figure 11. Performance comparison with and without HOC features; F-Y and F-N mean with and without HOC features, respectively.
Figure 12. Performance comparison with and without CS features; F-Y and F-N mean with and without CS features, respectively.
Figure 13. Performance comparison with and without HOM features; F-Y and F-N mean with and without HOM features, respectively.
Figure 14. Performance comparison with and without edges inside HOC features; E-Y and E-N mean with and without such edges, respectively.
Figure 15. Performance comparison with state-of-the-art AMC methods: (a) comparison result in Ch1, (b) comparison result in Ch2.
Figure 16. Computational cost comparison of different methods. GCN1 denotes the first step without GPU acceleration; GCN2 denotes the first step with GPU acceleration.
Table 1. Relationships among the HOC features.

|     | C20 | C21 | C40 | C41 | C42 | C60 | C61 | C63 | C80 |
| C20 |  -  |  ∘  |  •  |  •  |  •  |  •  |  •  |  •  |  •  |
| C21 |  ∘  |  -  |  ∘  |  •  |  •  |  ∘  |  •  |  •  |  ∘  |
| C40 |  •  |  ∘  |  -  |  ∘  |  ∘  |  •  |  •  |  ∘  |  •  |
| C41 |  •  |  •  |  ∘  |  -  |  ∘  |  ∘  |  •  |  ∘  |  ∘  |
| C42 |  •  |  •  |  ∘  |  ∘  |  -  |  ∘  |  ∘  |  •  |  ∘  |
| C60 |  •  |  ∘  |  •  |  ∘  |  ∘  |  -  |  ∘  |  ∘  |  •  |
| C61 |  •  |  •  |  •  |  •  |  ∘  |  ∘  |  -  |  ∘  |  ∘  |
| C63 |  •  |  •  |  ∘  |  ∘  |  •  |  ∘  |  ∘  |  -  |  ∘  |
| C80 |  •  |  ∘  |  •  |  ∘  |  ∘  |  •  |  ∘  |  ∘  |  -  |

∘: has no relationship, •: has relationship, -: not available.
Table 2. The node–feature pairs.

| Node | Feature | Node | Feature | Node | Feature |
| v1   | x(t)    | v6   | C41     | v11  | C80     |
| v2   | F(x)    | v7   | C42     | v12  | I(α)    |
| v3   | C20     | v8   | C60     | v13  | I(f)    |
| v4   | C21     | v9   | C61     | v14  | U2(f)   |
| v5   | C40     | v10  | C63     | v15  | U4(f)   |
Table 3. Parameters of each modulation type.

| Modulation Type | Sampling Rate (kHz) | Carrier Frequency Offset (Hz) | Symbol Rate (Baud) | Roll-off Value | SNR (dB) |
| FSK             | 12                  | 300                           | 100∼200            | -              | −9∼21    |
| PSK             | 12                  | 300                           | 800∼1200           | 0.1∼0.4        | −9∼21    |
| QAM             | 12                  | 300                           | 800∼1200           | 0.1∼0.4        | −9∼21    |
Table 4. Summary of the comparison of different feature sets (AMC accuracy).

| Feature Set          | Ch1   | Ch2   | Average |
| All features         | 82.9% | 81.4% | 82.2%   |
| Without time domain  | 59.3% | 50.8% | 55.1%   |
| Without STFT         | 79.7% | 71.6% | 75.7%   |
| Without HOC          | 74.3% | 73.5% | 73.9%   |
| Without CS           | 78.7% | 78.9% | 78.8%   |
| Without HOM          | 79.2% | 78.1% | 78.7%   |
Table 5. Comparison of the influence of the edges inside HOC (AMC accuracy).

| Feature Set        | Ch1   | Ch2   | Average |
| With HOC edges     | 82.9% | 81.4% | 82.2%   |
| Without HOC edges  | 79.5% | 78.4% | 79.0%   |
Table 6. Average accuracy of different methods in the two channels.

| Method      | Ch1   | Ch2   | Average |
| GCN         | 82.9% | 81.4% | 82.2%   |
| InceptionV3 | 74.5% | 73.4% | 74.0%   |
| VGGnet      | 69.3% | 76.0% | 72.7%   |
| GAN         | 78.8% | 77.5% | 78.2%   |
| ResNet      | 68.5% | 70.4% | 69.5%   |
| HOC         | 59.7% | 69.6% | 64.7%   |
| LSTM        | 73.2% | 69.2% | 71.2%   |
| CS          | 60.6% | 63.6% | 62.1%   |
| DCN         | 74.9% | 78.1% | 76.5%   |
| CWT         | 61.7% | 65.3% | 63.5%   |
| CNN         | 68.4% | 69.0% | 68.7%   |
Table 7. Classification results of the real-world underwater acoustic communication signals.

| Modulation Type | 2FSK | 4FSK | BPSK | QPSK | 16QAM | 32QAM |
| Accuracy        | 84%  | 80%  | 77%  | 71%  | 67%   | 73%   |