A Lightweight Dual-Branch Complex-Valued Neural Network for Automatic Modulation Classification of Communication Signals
Abstract
1. Introduction
- To integrate the advantages of the two main approaches in existing CVNN-based AMC, a dual-branch feature extraction structure is designed that captures both features carrying rich phase information and features with complex-scaling equivariance. The two feature sets are then fused by a trainable weighted fusion (a minimal sketch of such a fusion follows this list).
- To reduce the redundant complex-valued features produced by CVNNs, spatial and channel reconstruction convolution (SCConv) is extended to the complex domain.
- To further enhance feature diversity and enable efficient feature mining and dimensionality reduction, the fused features are further processed by complex-valued spatial and channel reconstruction convolution (CSCConv), complex-valued depthwise separable convolution blocks (CBlock), and complex-valued average pooling (CAP).
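A minimal PyTorch sketch of the trainable weighted fusion mentioned in the first bullet. The softmax-normalized scalar branch weights and the names (`WeightedFusion`, `feat_pife`, `feat_csefe`) are illustrative assumptions rather than the paper's exact formulation, and both branches are assumed to have already been projected to the same shape:

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Trainable weighted fusion of two branch outputs (illustrative sketch)."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(2))  # one learnable logit per branch

    def forward(self, feat_pife, feat_csefe):
        a = torch.softmax(self.w, dim=0)       # normalized branch weights
        return a[0] * feat_pife + a[1] * feat_csefe
```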
2. Methodology
2.1. Problem Statement and Signal Model
2.2. Complex-Valued Building Blocks
2.2.1. Complex-Valued Convolution (CConv)
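Complex convolution is typically realized with two real-valued convolutions, following Trabelsi et al. [14]. The sketch below shows this construction together with the split-type CReLU of Section 2.2.2, which applies ReLU to the real and imaginary parts independently. The tuple-of-tensors representation and the layer hyperparameters are implementation choices, not necessarily the paper's:

```python
import torch
import torch.nn as nn

class CConv1d(nn.Module):
    """Complex 1-D convolution from two real Conv1d layers:
    (W_r + jW_i)(x_r + jx_i) = (W_r*x_r - W_i*x_i) + j(W_r*x_i + W_i*x_r)."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, stride)
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, stride)

    def forward(self, x_r, x_i):
        return (self.conv_r(x_r) - self.conv_i(x_i),
                self.conv_r(x_i) + self.conv_i(x_r))

def crelu(x_r, x_i):
    """Split CReLU: ReLU applied to real and imaginary parts separately."""
    return torch.relu(x_r), torch.relu(x_i)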
2.2.2. Complex-Valued Activations
2.2.3. Complex-Valued Max Pooling (CMP)
2.2.4. Complex-Valued Average Pooling (CAP)
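For the two pooling layers (Sections 2.2.3 and 2.2.4), a common construction, assumed here, is to select max-pooled elements by their modulus and to average the real and imaginary parts independently (averaging is linear, so it commutes with the complex structure):

```python
import torch
import torch.nn.functional as F

def complex_max_pool1d(x_r, x_i, kernel_size, stride=None):
    """CMP sketch: keep the elements whose modulus is locally maximal."""
    mag = torch.sqrt(x_r ** 2 + x_i ** 2)
    _, idx = F.max_pool1d(mag, kernel_size, stride, return_indices=True)
    return x_r.gather(-1, idx), x_i.gather(-1, idx)

def complex_avg_pool1d(x_r, x_i, output_size=1):
    """CAP sketch: average pooling applied to each part independently."""
    return (F.adaptive_avg_pool1d(x_r, output_size),
            F.adaptive_avg_pool1d(x_i, output_size))
```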
2.2.5. Complex-Valued Batch Normalization (CBN)
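A sketch of complex batch normalization in the style of Trabelsi et al. [14]: each channel is whitened by the analytic inverse square root of its 2×2 real/imaginary covariance matrix. Running statistics and the learnable affine transform are omitted, so this is only the normalization core:

```python
import torch

def complex_batch_norm(x_r, x_i, eps=1e-5):
    """CBN core for (N, C, L) inputs: per-channel 2x2 covariance whitening."""
    dims = [0, 2]                                   # average over batch and length
    x_r = x_r - x_r.mean(dim=dims, keepdim=True)
    x_i = x_i - x_i.mean(dim=dims, keepdim=True)
    vrr = (x_r * x_r).mean(dim=dims, keepdim=True) + eps
    vii = (x_i * x_i).mean(dim=dims, keepdim=True) + eps
    vri = (x_r * x_i).mean(dim=dims, keepdim=True)
    s = torch.sqrt(vrr * vii - vri * vri)           # sqrt of the determinant
    t = torch.sqrt(vrr + vii + 2 * s)               # trace-based normalizer
    inv = 1.0 / (s * t)                             # analytic inverse sqrt terms
    wrr, wii, wri = (vii + s) * inv, (vrr + s) * inv, -vri * inv
    return wrr * x_r + wri * x_i, wri * x_r + wii * x_i
```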
2.2.6. Complex-Valued Full Connection (CFC)
2.2.7. Complex-Valued Depthwise Separable Convolution Block (CBlock)
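The CBlock builds on complex-valued depthwise separable convolution (CDSC): a complex depthwise convolution (CDWC, `groups` equal to the channel count) followed by a complex pointwise convolution (CPWC, kernel size 1). A minimal sketch, with the normalization, activation, and any residual wiring of the full CBlock omitted:

```python
import torch.nn as nn

class CDSC1d(nn.Module):
    """Complex depthwise separable convolution: CDWC followed by CPWC.
    Real/imaginary parts are carried as a tuple of real tensors."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1):
        super().__init__()
        # depthwise pair (one filter per input channel)
        self.dw_r = nn.Conv1d(in_ch, in_ch, kernel_size, stride, groups=in_ch)
        self.dw_i = nn.Conv1d(in_ch, in_ch, kernel_size, stride, groups=in_ch)
        # pointwise pair (1x1 channel mixing)
        self.pw_r = nn.Conv1d(in_ch, out_ch, 1)
        self.pw_i = nn.Conv1d(in_ch, out_ch, 1)

    def forward(self, x_r, x_i):
        h_r = self.dw_r(x_r) - self.dw_i(x_i)
        h_i = self.dw_r(x_i) + self.dw_i(x_r)
        return (self.pw_r(h_r) - self.pw_i(h_i),
                self.pw_r(h_i) + self.pw_i(h_r))
```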
2.2.8. Complex-Valued Spatial and Channel Reconstruction Convolution (CSCConv)
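SCConv [35] combines a spatial reconstruction unit (SRU) and a channel reconstruction unit (CRU). The sketch below shows only an SRU-style channel gating adapted to complex inputs: informativeness weights are derived from GroupNorm scale parameters on the modulus, and the same real-valued gate is applied to both parts. This coupling, and the omission of the split-and-reconstruct step and the entire CRU, are assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ComplexSRUGate(nn.Module):
    """SRU-style informativeness gating on complex features (illustrative)."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.gn = nn.GroupNorm(groups, channels)

    def forward(self, x_r, x_i):
        mag = torch.sqrt(x_r ** 2 + x_i ** 2)                  # modulus map
        w = (self.gn.weight / self.gn.weight.sum()).view(1, -1, 1)
        gate = torch.sigmoid(self.gn(mag) * w)                 # informativeness
        return gate * x_r, gate * x_i                          # shared real gate
```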
2.2.9. Complex-Valued Softmax (CSoftmax)
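One common way to define a complex softmax, assumed here since the paper's exact variant is not reproduced in this excerpt, is to apply the ordinary softmax to the modulus of the complex logits:

```python
import torch

def complex_softmax(z_r, z_i, dim=-1):
    """Softmax over the modulus of complex logits (one common variant)."""
    return torch.softmax(torch.sqrt(z_r ** 2 + z_i ** 2), dim=dim)
```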
2.3. Proposed Model
3. Experiments
3.1. Datasets and Experimental Condition
3.2. Evaluation Method
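The abbreviations list defines TP/FP/TN/FN, which points to precision/recall-based scoring; a standard macro-averaged F1, as reported in the comparison tables, can be computed as below (the paper's exact averaging convention is assumed):

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1 from per-class TP/FP/FN counts."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))
```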
3.3. Experiment Settings
4. Results and Discussions
4.1. Pre-Experiment
4.2. Ablation Experiments
4.3. Comparative Experiments
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| AMC | automatic modulation classification |
| LDCVNN | lightweight dual-branch complex-valued neural network |
| CVNN | complex-valued neural network |
| RVNN | real-valued neural network |
| IQ | in-phase and quadrature |
| SCConv | spatial and channel reconstruction convolution |
| CBlock | complex-valued depthwise separable convolution block |
| CDSC | complex-valued depthwise separable convolution |
| CR | cognitive radio |
| CNN | convolutional neural network |
| GRU | gated recurrent unit |
| DA | data augmentation |
| DSC | depthwise separable convolution |
| ReLU | rectified linear unit |
| DWC | depthwise convolution |
| SRU | spatial reconstruction unit |
| CRU | channel reconstruction unit |
| GN | group normalization |
| PIFE | phase information feature extraction |
| CSEFE | complex-valued scaling equivariant feature extraction |
| FEDR | feature extraction and dimension reduction |
| TP | true positive |
| FP | false positive |
| TN | true negative |
| FN | false negative |
| FLOPs | floating-point operations |
| DNN | dense neural network |
| LSTM | long short-term memory network |
| CConv | complex-valued convolution |
| wFM | weighted Fréchet mean filtering |
| tReLU | tangent rectified linear unit |
| CMP | complex-valued max pooling |
| CAP | complex-valued average pooling |
| CBN | complex-valued batch normalization |
| CFC | complex-valued full connection |
| CPWC | complex-valued pointwise convolution |
| CSCConv | complex-valued spatial and channel reconstruction convolution |
References
1. Huynh-The, T.; Pham, Q.V.; Nguyen, T.V.; Nguyen, T.T.; Ruby, R.; Zeng, M.; Kim, D.S. Automatic Modulation Classification: A Deep Architecture Survey. IEEE Access 2021, 9, 142950–142971.
2. Zhang, F.; Luo, C.; Xu, J.; Luo, Y.; Zheng, F.-C. Deep learning based automatic modulation recognition: Models, datasets, and challenges. Digit. Signal Process. 2022, 129, 103650.
3. Dulek, B. Online Hybrid Likelihood Based Modulation Classification Using Multiple Sensors. IEEE Trans. Wirel. Commun. 2017, 16, 4984–5000.
4. Wen, W.; Mendel, J.M. Maximum-likelihood classification for digital amplitude-phase modulations. IEEE Trans. Commun. 2000, 48, 189–193.
5. Xu, J.L.; Su, W.; Zhou, M. Likelihood-Ratio Approaches to Automatic Modulation Classification. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2011, 41, 455–469.
6. Hazza, A.; Shoaib, M.; Alshebeili, S.A.; Fahad, A. An overview of feature-based methods for digital modulation classification. In Proceedings of the 2013 1st International Conference on Communications, Signal Processing, and their Applications (ICCSPA), Sharjah, United Arab Emirates, 12–14 February 2013; pp. 1–6.
7. Meng, F.; Chen, P.; Wu, L.; Wang, X. Automatic modulation classification: A deep learning enabled approach. IEEE Trans. Veh. Technol. 2018, 67, 10760–10772.
8. Snoap, J.A.; Popescu, D.C.; Latshaw, J.A.; Spooner, C.M. Deep-Learning-Based Classification of Digitally Modulated Signals Using Capsule Networks and Cyclic Cumulants. Sensors 2023, 23, 5735.
9. Snoap, J.A.; Popescu, D.C.; Spooner, C.M. Deep-Learning-Based Classifier With Custom Feature-Extraction Layers for Digitally Modulated Signals. IEEE Trans. Broadcast. 2024, 70, 763–773.
10. Rajendran, S.; Meert, W.; Giustiniano, D.; Lenders, V.; Pollin, S. Deep Learning Models for Wireless Signal Classification with Distributed Low-Cost Spectrum Sensors. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 433–445.
11. Xu, J.; Luo, C.; Parr, G.; Luo, Y. A Spatiotemporal Multi-Channel Learning Framework for Automatic Modulation Recognition. IEEE Wirel. Commun. Lett. 2020, 9, 1629–1632.
12. Liu, X.; Yang, D.; El Gamal, A. Deep Neural Network Architectures for Modulation Classification. In Proceedings of the 51st IEEE Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 29 October–1 November 2017; pp. 915–919.
13. Clarke, T.L. Generalization of neural networks to the complex plane. In Proceedings of the 1990 IJCNN International Joint Conference on Neural Networks, San Diego, CA, USA, 17–21 June 1990; pp. 435–440.
14. Trabelsi, C.; Bilaniuk, O.; Zhang, Y.; Serdyuk, D.; Subramanian, S.; Santos, J.F.; Mehri, S.; Rostamzadeh, N.; Bengio, Y.; Pal, C.J. Deep Complex Networks. arXiv 2017, arXiv:1705.09792.
15. Hirose, A.; Yoshida, S. Comparison of Complex- and Real-Valued Feedforward Neural Networks in Their Generalization Ability. In Neural Information Processing; Lu, B.-L., Zhang, L., Kwok, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 7062, pp. 526–531.
16. Lee, C.; Hasegawa, H.; Gao, S. Complex-Valued Neural Networks: A Comprehensive Survey. IEEE-CAA J. Autom. Sin. 2022, 9, 1406–1426.
17. Xu, J.; Wu, C.; Ying, S.; Li, H. The Performance Analysis of Complex-Valued Neural Network in Radio Signal Recognition. IEEE Access 2022, 10, 48708–48718.
18. Li, W.; Xie, W.; Wang, Z. Complex-Valued Densely Connected Convolutional Networks. In Data Science; Zeng, J., Jing, W., Song, X., Lu, Z., Eds.; Springer: Singapore, 2020; Volume 1257, pp. 299–309.
19. Tu, Y.; Lin, Y.; Hou, C.; Mao, S. Complex-Valued Networks for Automatic Modulation Classification. IEEE Trans. Veh. Technol. 2020, 69, 10085–10089.
20. Kim, S.; Yang, H.-Y.; Kim, D. Fully Complex Deep Learning Classifiers for Signal Modulation Recognition in Non-Cooperative Environment. IEEE Access 2022, 10, 20295–20311.
21. Cheng, R.; Chen, Q.; Huang, M. Automatic modulation recognition using deep CVCNN-LSTM architecture. Alex. Eng. J. 2024, 104, 162–170.
22. Chakraborty, R.; Wang, J.; Yu, S.X. SurReal: Fréchet Mean and Distance Transform for Complex-Valued Deep Learning. arXiv 2019, arXiv:1906.10048.
23. Chakraborty, R.; Xing, Y.; Yu, S.X. SurReal: Complex-Valued Learning as Principled Transformations on a Scaling and Rotation Manifold. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 940–951.
24. Singhal, U.; Xing, Y.; Yu, S.X. Co-domain Symmetry for Complex-Valued Deep Learning. arXiv 2021, arXiv:2112.01525.
25. Zhang, F.; Luo, C.; Xu, J.; Luo, Y. An Efficient Deep Learning Model for Automatic Modulation Recognition Based on Parameter Estimation and Transformation. IEEE Commun. Lett. 2021, 25, 3287–3290.
26. Lin, Y.; Tu, Y.; Dou, Z. An Improved Neural Network Pruning Technology for Automatic Modulation Classification in Edge Devices. IEEE Trans. Veh. Technol. 2020, 69, 5703–5706.
27. Tu, Y.; Lin, Y. Deep Neural Network Compression Technique Towards Efficient Digital Signal Modulation Recognition in Edge Device. IEEE Access 2019, 7, 58113–58119.
28. Zhang, X.; Zhao, H.; Zhu, H.; Adebisi, B.; Gui, G.; Gacanin, H.; Adachi, F. NAS-AMR: Neural Architecture Search-Based Automatic Modulation Recognition for Integrated Sensing and Communication Systems. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 1374–1386.
29. Fu, X.; Gui, G.; Wang, Y.; Ohtsuki, T.; Adachi, F. Lightweight Automatic Modulation Classification Based on Decentralized Learning. IEEE Trans. Cogn. Commun. Netw. 2021, 8, 57–70.
30. Lin, Y.; Zha, H.; Tu, Y.; Zhang, S.; Yan, W.; Xu, C. GLR-SEI: Green and Low Resource Specific Emitter Identification Based on Complex Networks and Fisher Pruning. IEEE Trans. Emerg. Top. Comput. Intell. 2023, 8, 3239–3250.
31. Wang, F.; Shang, T.; Hu, C.; Liu, Q. Automatic Modulation Classification Using Hybrid Data Augmentation and Lightweight Neural Network. Sensors 2023, 23, 4187.
32. Guo, L.; Wang, Y.; Liu, Y.; Lin, Y.; Zhao, H.; Gui, G. Ultralight Convolutional Neural Network for Automatic Modulation Classification in Internet of Unmanned Aerial Vehicles. IEEE Internet Things J. 2024, 11, 20831–20839.
33. Xiao, C.; Yang, S.; Feng, Z. Complex-Valued Depthwise Separable Convolutional Neural Network for Automatic Modulation Classification. IEEE Trans. Instrum. Meas. 2023, 72, 2522310.
34. Fréchet, M.R. Les éléments aléatoires de nature quelconque dans un espace distancié. Ann. L’institut Henri Poincaré 1948, 10, 215–310.
35. Li, J.; Wen, Y.; He, L. SCConv: Spatial and Channel Reconstruction Convolution for Feature Redundancy. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 6153–6162.
36. O’Shea, T.J.; West, N. Radio Machine Learning Dataset Generation with GNU Radio. In Proceedings of the GNU Radio Conference, Boulder, CO, USA, 12–16 September 2016.
37. O’Shea, T.J.; Corgan, J.; Clancy, T.C. Convolutional radio modulation recognition networks. In Proceedings of the Engineering Applications of Neural Networks: 17th International Conference, EANN 2016, Aberdeen, UK, 2–5 September 2016; pp. 213–226.
38. O’Shea, T.J.; Roy, T.; Clancy, T.C. Over-the-Air Deep Learning Based Radio Signal Classification. IEEE J. Sel. Top. Signal Process. 2018, 12, 168–179.
39. Tekbıyık, K.; Ekti, A.R.; Görçin, A.; Kurt, G.K.; Keçeci, C. Robust and Fast Automatic Modulation Classification with CNN under Multipath Fading Channels. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020; pp. 1–6.
40. Njoku, J.N.; Morocho-Cayamcela, M.E.; Lim, W. CGDNet: Efficient Hybrid Deep Learning Model for Robust Automatic Modulation Recognition. IEEE Netw. Lett. 2021, 3, 47–51.
41. Hermawan, A.P.; Ginanjar, R.R.; Kim, D.S.; Lee, J.M. CNN-Based Automatic Modulation Classification for Beyond 5G Communications. IEEE Commun. Lett. 2020, 24, 1038–1041.
42. Zeng, Y.; Zhang, M.; Han, F.; Gong, Y.; Zhang, J. Spectrum Analysis and Convolutional Neural Network for Automatic Modulation Recognition. IEEE Wirel. Commun. Lett. 2019, 8, 929–932.
| Dataset | RML2016.10a [36] | RML2016.10b [37] | RML2018.01a [38] | HisarMod2019.1 [39] |
|---|---|---|---|---|
| Classes * | 11 | 10 | 24 | 26 |
| SNR (dB) | −20:2:18 | −20:2:18 | −20:2:30 | −20:2:18 |
| Dimension | 2 × 128 | 2 × 128 | 2 × 1024 | 2 × 1024 |
| Size | 220 K | 1.2 M | 2.56 M | 780 K |
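For reference, RML2016.10a/b are distributed as pickled Python dicts keyed by (modulation, SNR) pairs, each mapping to an array of 2 × 128 IQ frames; a minimal loading sketch (the local file name is an assumption):

```python
import pickle
import numpy as np

# Load the standard RML2016.10a pickle (latin1 encoding is required
# because the file was written with Python 2).
with open("RML2016.10a_dict.pkl", "rb") as f:
    data = pickle.load(f, encoding="latin1")

X, mods, snrs = [], [], []
for (mod, snr), samples in data.items():
    X.append(samples)                 # (N, 2, 128) IQ frames
    mods += [mod] * len(samples)
    snrs += [snr] * len(samples)
X = np.concatenate(X, axis=0)
```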
| Layer | Input Shape (L1) | Input Shape (L2) | Kernel (L1) | Kernel (L2) | Stride | Output Shape (L1) | Output Shape (L2) |
|---|---|---|---|---|---|---|---|
| wFM | [2,1,1,128] | [2,1,1,1024] | 7 | 9 | 2 | [2,16,1,61] | [2,16,1,508] |
| tReLU | [2,16,1,61] | [2,16,1,508] | - | - | - | [2,16,1,61] | [2,16,1,508] |
| wFM | [2,16,1,61] | [2,16,1,508] | 7 | 9 | 1 | [2,16,1,55] | [2,16,1,500] |
| tReLU | [2,16,1,55] | [2,16,1,500] | - | - | - | [2,16,1,55] | [2,16,1,500] |
| CDSC1d | [2,128] | [2,1024] | 7 | 9 | 2 | [32,61] | [32,508] |
| CBN + CReLU | [32,61] | [32,508] | - | - | - | [32,61] | [32,508] |
| CDSC1d | [32,61] | [32,508] | 7 | 9 | 1 | [32,55] | [32,500] |
| CBN + CReLU | [32,55] | [32,500] | - | - | - | [32,55] | [32,500] |
| Weighted feature fusion | [2,16,1,55], [32,55] | [2,16,1,500], [32,500] | - | - | - | [32,55] | [32,500] |
| CBlock (Rep = 5) | [32,55] | [32,500] | 3 | 3 | 2 | [48,28] | [48,250] |
| CBlock (Rep = 1) | [48,28] | [48,250] | 1 | 1 | 1 | [48,28] | [48,250] |
| CAP | [48,28] | [48,250] | - | - | - | [48,1] | [48,1] |
| CFC | [48] | [48] | - | - | - | [classes *] | [classes *] |

L1 denotes the 2 × 128 input configuration (RML2016 datasets); L2 denotes the 2 × 1024 configuration (RML2018.01a and HisarMod2019.1).
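The spatial dimensions in the table follow the valid (unpadded) convolution length formula; a quick check of both branch geometries:

```python
def conv_out_len(L, k, s, p=0):
    """Convolution output length: floor((L + 2p - k) / s) + 1."""
    return (L + 2 * p - k) // s + 1

# Branch dimensions from the table above (no padding assumed):
assert conv_out_len(128, 7, 2) == 61 and conv_out_len(61, 7, 1) == 55
assert conv_out_len(1024, 9, 2) == 508 and conv_out_len(508, 9, 1) == 500
```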
| Model | Main Structure | Network Type |
|---|---|---|
| CNN2 [39] | CNN + DNN | RVNN |
| CLDNN [12] | CNN + LSTM + DNN | RVNN |
| CGDNN [40] | CNN + GRU + DNN | RVNN |
| ResNet [12] | ResNet | RVNN |
| MCLDNN [11] | CNN + LSTM + multi-channel | RVNN |
| ICAMC [41] | CNN + Gaussian noise | RVNN |
| SCNN [42] | CNN | RVNN |
| PET-CGDNN [25] | CNN + GRU + DNN | PET + RVNN |
| ULCNN [32] | CV-CNN + GRU + DNN | DA + RVNN (with CV-CNN) |
| CDSN [19] | CV-CNN + CV-DNN | CVNN |
| CSDNN [33] | CV-CNN + residual connection + CDSC + CV-DNN | CVNN |
| Position | Capacity * | Average Accuracy |
|---|---|---|
| ① | 9025 | 62.41% |
| ② | 9977 | 61.40% |
| ③ | 9977 | 61.43% |
| Model | Ablation Part | Average Accuracy (All SNRs) | Capacity * |
|---|---|---|---|
| LDCVNN | - | 62.41% | 9025 |
| LDCVNN-A | w/o CSEFE | 61.89% (↓0.52%) | 8332 |
| LDCVNN-B | w/o PIFE | 56.23% (↓6.18%) | 8051 |
| LDCVNN-C | w/o CSCConv | 60.80% (↓1.61%) | 8225 |
| LDCVNN-D | w/o CBlock | 61.53% (↓0.88%) | 7609 |
| LDCVNN + DA | - | 62.46% (↑0.05%) | - |
| Model | Average Accuracy (All SNRs) | Average Accuracy (≥0 dB) | Highest Accuracy | F1-Score (%) | Capacity * (K) | FLOPs (G/Sample) | Test Time (ms/Sample) |
|---|---|---|---|---|---|---|---|
| CNN2 | 51.46% | 74.85% | 76.45% | 47.99 | 858.1 | 0.07878 | 0.6350 |
| CLDNN | 59.53% | 88.23% | 89.36% | 56.88 | 76.7 | 0.01991 | 0.1340 |
| CGDNN | 59.93% | 89.48% | 90.77% | 57.59 | 52.0 | 0.00170 | 0.0454 |
| ResNet | 46.32% | 71.34% | 74.64% | 42.33 | 3098.3 | 0.12413 | 0.8657 |
| MCLDNN | 60.83% | 89.82% | 90.77% | 58.73 | 405.8 | 0.04869 | 0.1028 |
| ICAMC | 51.47% | 75.34% | 76.91% | 53.09 | 1263.6 | 0.01466 | 0.0624 |
| SCNN | 55.45% | 82.81% | 84.36% | 47.92 | 104.1 | 0.00189 | 0.0635 |
| PET-CGDNN | 60.85% | 90.31% | 91.55% | 58.68 | 71.8 | 0.00831 | 0.0515 |
| ULCNN | 60.58% | 90.11% | 91.19% | 58.65 | 9.4 | 0.00082 | 0.0605 |
| CDSN | 50.59% | 74.82% | 76.55% | 49.29 | 1336.1 | 0.00923 | 0.0570 |
| CSDNN | 58.66% | 86.91% | 88.09% | 56.90 | 327.1 | 1.1811 | 0.0787 |
| Our Model | 62.41% | 91.63% | 92.36% | 59.58 | 9.0 | 0.00060 | 0.4210 |
| Model | Average Accuracy (All SNRs) | Average Accuracy (≥0 dB) | Highest Accuracy | F1-Score (%) | Capacity * (K) | FLOPs (G/Sample) | Test Time (ms/Sample) |
|---|---|---|---|---|---|---|---|
| CNN2 | 48.74% | 70.88% | 71.98% | 45.22 | 858.0 | 0.07878 | 0.6980 |
| CLDNN | 61.38% | 89.41% | 90.25% | 59.70 | 76.4 | 0.01991 | 0.1403 |
| CGDNN | 63.71% | 92.31% | 92.90% | 62.47 | 49.7 | 0.00170 | 0.0448 |
| ResNet | 47.65% | 72.64% | 75.59% | 45.61 | 3098.2 | 0.12413 | 0.8758 |
| MCLDNN | 65.49% | 93.50% | 94.09% | 64.31 | 405.7 | 0.04869 | 0.1103 |
| ICAMC | 59.85% | 87.56% | 88.67% | 58.51 | 1263.5 | 0.01466 | 0.0590 |
| SCNN | 53.48% | 76.76% | 77.67% | 50.62 | 95.9 | 0.00188 | 0.0582 |
| PET-CGDNN | 65.00% | 93.30% | 93.86% | 64.30 | 71.6 | 0.00831 | 0.0508 |
| ULCNN | 63.09% | 91.72% | 92.46% | 62.18 | 9.3 | 0.00082 | 0.0653 |
| CDSN | 64.04% | 86.19% | 87.08% | 63.24 | 1331.6 | 0.00337 | 0.0606 |
| CSDNN | 65.82% | 91.34% | 91.73% | 65.13 | 315.4 | 0.00923 | 0.0900 |
| Our Model | 63.97% | 92.29% | 93.18% | 63.00 | 9.0 | 0.00062 | 0.4209 |
| Model | Average Accuracy (All SNRs) | Average Accuracy (≥0 dB) | Highest Accuracy | F1-Score (%) | Capacity * (K) | FLOPs (G/Sample) | Test Time (ms/Sample) |
|---|---|---|---|---|---|---|---|
| CNN2 | 52.81% | 76.15% | 81.73% | 50.78 | 1777.3 | 0.6302 | 2.5694 |
| CLDNN | 47.42% | 69.03% | 73.93% | 44.90 | 80.0 | 0.1669 | 2.0549 |
| CGDNN | 39.29% | 56.38% | 58.61% | 33.44 | 512.3 | 0.0151 | 1.8561 |
| ResNet | 41.95% | 63.65% | 70.65% | 41.30 | 21450.0 | 0.9930 | 2.7234 |
| MCLDNN | 61.56% | 90.04% | 96.76% | 60.11 | 407.5 | 0.4019 | 2.0184 |
| ICAMC | 47.21% | 68.31% | 72.39% | 44.64 | 8605.3 | 0.1183 | 2.4862 |
| SCNN | 30.09% | 41.22% | 43.03% | 24.81 | 1586.9 | 0.0160 | 2.4636 |
| PET-CGDNN | 61.01% | 89.16% | 96.21% | 59.85 | 75.2 | 0.0719 | 1.8814 |
| ULCNN | 58.55% | 85.40% | 92.57% | 57.17 | 9.9 | 0.0067 | 2.3571 |
| CDSN | 40.20% | 56.75% | 58.58% | 37.06 | 1362.7 | 0.0120 | 3.4023 |
| CSDNN | 57.91% | 84.19% | 90.82% | 56.27 | 333.8 | 0.0741 | 3.5972 |
| Our Model | 60.17% | 88.91% | 96.12% | 59.18 | 11.2 | 0.0066 | 3.6759 |
| Model | Average Accuracy (All SNRs) | Average Accuracy (≥0 dB) | Highest Accuracy | F1-Score (%) | Capacity * (K) | FLOPs (G/Sample) | Test Time (ms/Sample) |
|---|---|---|---|---|---|---|---|
| CNN2 | 36.31% | 39.84% | 40.54% | 35.39 | 1777.6 | 0.6302 | 2.5212 |
| CLDNN | 39.82% | 49.74% | 51.50% | 39.16 | 80.5 | 0.1669 | 1.9662 |
| CGDNN | 33.45% | 37.45% | 38.31% | 32.67 | 552.7 | 0.0151 | 1.8690 |
| ResNet | 32.47% | 35.82% | 37.04% | 31.91 | 21450.3 | 0.0210 | 2.7768 |
| MCLDNN | 43.09% | 49.60% | 51.65% | 42.93 | 407.7 | 0.1452 | 2.0062 |
| ICAMC | 26.49% | 29.02% | 29.69% | 24.26 | 8605.6 | 0.1183 | 2.3548 |
| SCNN | 30.89% | 34.01% | 34.54% | 30.23 | 1718.0 | 0.0161 | 2.3327 |
| PET-CGDNN | 47.05% | 52.13% | 53.35% | 46.77 | 75.5 | 0.0719 | 1.8153 |
| ULCNN | 43.85% | 48.10% | 49.77% | 42.32 | 9.9 | 0.0054 | 2.3179 |
| CDSN | 32.04% | 36.13% | 37.23% | 29.24 | 1360.3 | 0.0120 | 2.3788 |
| CSDNN | 43.33% | 48.95% | 50.35% | 41.76 | 334.8 | 0.0741 | 2.4996 |
| Our Model | 43.97% | 50.47% | 53.08% | 43.02 | 11.3 | 0.0067 | 2.5524 |