Specific Emitter Identification Model Based on Improved BYOL Self-Supervised Learning
Abstract
1. Introduction
- We first propose an SEI model based on improved BYOL self-supervised learning. To the best of our knowledge, this is the first scheme applying self-supervised learning to SEI. Compared with traditional data augmentation and residual networks, the improved BYOL scheme achieves better recognition accuracy and anti-noise performance with small samples.
- We design three new data augmentation methods: phase rotation, random cropping, and jitter. Through these augmentations, the network obtains the differently augmented views of each signal needed for contrastive learning to carry out self-supervised training.
- Recent self-supervised learning methods based on contrastive learning require negative samples. They need careful treatment of negative pairs, relying on large batch sizes or memory banks, which significantly increases the network's computing-resource demands and makes them impractical on small terminals. Our scheme removes negative samples, so it can be implemented with minimal resources, and its recognition accuracy exceeds that of the latest self-supervised learning algorithms, significantly broadening the algorithm's applicability.
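The three augmentations named above (phase rotation, random cropping, and jitter) can be sketched on complex-baseband I/Q samples as follows. This is a minimal NumPy illustration; the angle range, crop length, and noise scale are assumptions for demonstration, not the paper's exact settings.

```python
import numpy as np

def phase_rotation(iq, theta=None, rng=None):
    """Rotate complex I/Q samples by a (random) phase angle theta."""
    if rng is None:
        rng = np.random.default_rng()
    if theta is None:
        theta = rng.uniform(0, 2 * np.pi)
    return iq * np.exp(1j * theta)

def random_crop(iq, crop_len, rng=None):
    """Take a random contiguous slice of length crop_len from the signal."""
    if rng is None:
        rng = np.random.default_rng()
    start = rng.integers(0, len(iq) - crop_len + 1)
    return iq[start:start + crop_len]

def jitter(iq, sigma=0.01, rng=None):
    """Add small complex Gaussian noise to the samples."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(0, sigma, iq.shape) + 1j * rng.normal(0, sigma, iq.shape)
    return iq + noise
```

Applying a random chain of these transforms twice to the same signal yields the two augmented views that self-supervised training compares.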
2. System Model
3. Related Work
3.1. Context-Based Self-Supervised Learning
3.2. Temporal-Based Self-Supervised Learning
3.3. Contrastive-Based Self-Supervised Learning
4. Methodology
4.1. Data Augmentation
4.2. Improved BYOL
Algorithm 1 Pseudocode of improved BYOL
Require:
1: f_o, g_o, q_o # online encoder, projector, and predictor
2: f_t, g_t # target encoder, projector
3: f_t.params = f_o.params
4: g_t.params = g_o.params
5: for s in loader do # load a batch s
6:     s_1 = aug(s) # a randomly augmented version
7:     s_2 = aug(s) # another randomly augmented version
8:     z_1 = g_o(f_o(s_1)) # online projection of view 1
9:     z_2 = g_o(f_o(s_2)) # online projection of view 2
10:    p_1, p_2 = q_o(z_1), q_o(z_2) # online predictions
11:    t_1, t_2 = g_t(f_t(s_2)), g_t(f_t(s_1)) # target projections of swapped views
12:    t_1, t_2 = t_1.detach(), t_2.detach() # no gradient to target network
13:    L = D(p_1, t_1) + D(p_2, t_2) # symmetrized mean-squared-error loss
14:    update(f_o, g_o, q_o) # optimizer step on the online network
15:    f_t.params = τ·f_t.params + (1 − τ)·f_o.params # momentum update of target encoder
16:    g_t.params = τ·g_t.params + (1 − τ)·g_o.params # momentum update of target projector
17: end for
Ensure: encoder f_o
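The loss D in the algorithm above is BYOL's regression loss: the mean squared error between the L2-normalized online prediction and target projection, which equals 2 − 2·cos(p, t). A minimal NumPy sketch (batch-of-vectors convention assumed):

```python
import numpy as np

def byol_loss(p_online, z_target):
    """BYOL regression loss: mean squared error between L2-normalized
    online predictions and target projections.
    Equivalent to 2 - 2 * cosine_similarity(p, z), averaged over the batch."""
    p = p_online / np.linalg.norm(p_online, axis=-1, keepdims=True)
    z = z_target / np.linalg.norm(z_target, axis=-1, keepdims=True)
    return np.mean(np.sum((p - z) ** 2, axis=-1))
```

Because both arguments are normalized first, the loss is bounded in [0, 4]: it is 0 when prediction and target point in the same direction and 4 when they are opposite.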
5. Experiments
5.1. Datasets and Parameter Settings
5.2. Performance vs. Sample Number
5.3. Performance vs. SNR
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Settings for the improved BYOL pre-training stage:

Training Parameter | Parameter Value |
---|---|
Learning rate | 0.0001 |
Batch size | 64 |
Epochs | 1000 |
Momentum value | 0.99 |
Optimizer | Adam |
Loss function | Mean squared error |
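The momentum value of 0.99 in the pre-training table governs how the target network trails the online network: an exponential moving average (EMA) of its parameters, as in lines 15–16 of Algorithm 1. A minimal sketch with illustrative parameter names:

```python
def ema_update(target_params, online_params, tau=0.99):
    """Momentum (EMA) update of the target network:
    theta_target <- tau * theta_target + (1 - tau) * theta_online.
    With tau = 0.99 the target moves slowly toward the online network,
    which stabilizes the self-supervised regression targets."""
    return [tau * t + (1 - tau) * o
            for t, o in zip(target_params, online_params)]
```

In a full training loop this is called once per batch, after the optimizer step on the online network; the target network itself receives no gradients.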
Settings for training the downstream classifier:

Training Parameter | Parameter Value |
---|---|
Learning rate | 0.01 |
Batch size | 64 |
Epochs | 1000 |
Optimizer | Adam |
Loss function | Cross entropy |
Acc (%) / Number of samples | 10 | 15 | 20 | 25 | 200 | 400 |
---|---|---|---|---|---|---|
Residual network | 23.65 | 25.18 | 26.28 | 27.50 | 48.18 | 65.65 |
DA+residual network | 55.09 | 60.62 | 67.09 | 76.84 | 89.84 | 91.00 |
MoCo | 68.12 | 72.00 | 75.21 | 79.62 | 90.91 | 93.06 |
Improved BYOL | 70.56 | 76.84 | 78.87 | 80.96 | 92.13 | 96.25 |
Acc (%) / SNR (dB) | 3 | 4 | 5 | 6 | 7 |
---|---|---|---|---|---|
Residual network | 16.43 | 24.62 | 30.65 | 32.22 | 34.65 |
DA+residual network | 81.37 | 82.43 | 83.71 | 83.84 | 86.34 |
MoCo | 83.15 | 83.49 | 84.18 | 84.72 | 85.78 |
Improved BYOL | 85.18 | 86.75 | 89.03 | 90.78 | 91.34 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhao, D.; Yang, J.; Liu, H.; Huang, K. Specific Emitter Identification Model Based on Improved BYOL Self-Supervised Learning. Electronics 2022, 11, 3485. https://doi.org/10.3390/electronics11213485