1D-CNN-Transformer for Radar Emitter Identification and Implemented on FPGA
Abstract
1. Introduction
- In terms of algorithms, we propose a lightweight 1D-CNN-Transformer network that is easy to implement in hardware. Through parameter sharing, the model achieves high specific emitter identification accuracy at low complexity while preserving its expressiveness.
- We developed an instruction-based hardware accelerator architecture that enables rapid deployment of neural networks through instructions. In addition, we design a reconfigurable buffer, the “Ping-pong CBUF”, which allows the CNN-layer and LW-CT-layer data streams to be read and written efficiently in parallel. This buffer maximizes the reuse of on-chip memory resources while significantly improving the accelerator’s overall throughput.
- Based on algorithm and hardware co-optimization, we implemented an efficient accelerator on the XCKU040 platform. Experimental results show that our model outperforms those in related work on radar emitter identification, and that the energy efficiency of the Transformer part of our accelerator exceeds that of comparable accelerators.
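The “Ping-pong CBUF” described above follows the classic double-buffering pattern: while one memory bank is being filled by a producer stage (e.g., a CNN layer), the other is read by a consumer stage (e.g., the LW-CT layer), and the two banks swap roles between layers. The following is a minimal Python sketch of that pattern; all names (`PingPongBuffer`, `write`, `read`, `swap`) are hypothetical illustrations, not the paper’s actual hardware interface.

```python
# Illustrative software model of a ping-pong (double) buffer.
# In hardware the "swap" is a single select-bit toggle, so producer
# and consumer never contend for the same bank.

class PingPongBuffer:
    """Two banks: one owned by the writer, the other by the reader."""

    def __init__(self, depth):
        self.banks = [[0] * depth, [0] * depth]
        self.write_sel = 0  # index of the bank the producer writes

    @property
    def read_sel(self):
        # The reader always owns the opposite bank.
        return 1 - self.write_sel

    def write(self, addr, value):
        self.banks[self.write_sel][addr] = value

    def read(self, addr):
        return self.banks[self.read_sel][addr]

    def swap(self):
        # Hand the freshly written bank to the reader; reclaim the other.
        self.write_sel = 1 - self.write_sel


# Usage: a "CNN layer" fills bank 0 while bank 1 is readable; after the
# swap, the "LW-CT layer" reads the results from bank 0.
buf = PingPongBuffer(depth=4)
for i in range(4):
    buf.write(i, i * i)
buf.swap()
print([buf.read(i) for i in range(4)])  # [0, 1, 4, 9]
```

Because reads and writes target disjoint banks at all times, the producer of the next layer can start immediately after a swap, which is the source of the throughput gain the paper attributes to this buffer.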
2. Related Works
2.1. Specific Emitter Identification
2.2. Hardware Accelerator Design
3. Methods of Software
3.1. Characteristic Analysis of Radar Emitter Signal
3.2. Algorithm
4. Hardware Accelerator Architecture
4.1. Overall Structure
4.1.1. Global Controller
4.1.2. Central Processing Core
4.1.3. Multifunctional Processing Array
4.1.4. AvgPool Array
4.1.5. On-Chip Storage
4.2. Hardware Accelerated Optimization Algorithm
4.2.1. Instruction Set Control Logic Operations
4.2.2. Convolution Operation Module Optimization Algorithm
4.3. Convolution Processing Module
4.4. MHSA Processing Module
4.5. Fully Connected Layer Processing Module
5. Experiment Results and Analysis
5.1. Data Set Generation
5.2. Experiment
5.3. Hardware Performance Analysis and Comparison
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zohuri, B. Electronic countermeasure and electronic counter-countermeasure. In Radar Energy Warfare and the Challenges of Stealth Technology; Springer: Berlin/Heidelberg, Germany, 2020; pp. 111–145. [Google Scholar]
- Cao, R.; Cao, J.; Mei, J.-P.; Yin, C.; Huang, X. Radar emitter identification with bispectrum and hierarchical extreme learning machine. Multimed. Tools Appl. 2019, 78, 28953–28970. [Google Scholar] [CrossRef]
- Zhang, W.; Yin, X.; Cao, X.; Xie, Y.; Nie, W. Radar emitter identification using hidden Markov model. In Proceedings of the 2019 IEEE 3rd Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 11–13 October 2019; pp. 1997–2000. [Google Scholar]
- Sui, J.; Liu, Z.; Liu, L.; Peng, B.; Liu, T.; Li, X. Online non-cooperative radar emitter classification from evolving and imbalanced pulse streams. IEEE Sens. J. 2020, 20, 7721–7730. [Google Scholar] [CrossRef]
- Xue, J.; Tang, L.; Zhang, X.; Jin, L. A novel method of radar emitter identification based on the coherent feature. Appl. Sci. 2020, 10, 5256. [Google Scholar] [CrossRef]
- Zhang, J.; Wang, F.; Dobre, O.A.; Zhong, Z. Specific emitter identification via Hilbert–Huang transform in single-hop and relaying scenarios. IEEE Trans. Inf. Forensics Secur. 2016, 11, 1192–1205. [Google Scholar] [CrossRef]
- Ramkumar, B. Automatic modulation classification for cognitive radios using cyclic feature detection. IEEE Circuits Syst. Mag. 2009, 9, 27–45. [Google Scholar] [CrossRef]
- Wang, C.; Wang, J.; Zhang, X. Automatic radar waveform recognition based on time-frequency analysis and convolutional neural network. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 2437–2441. [Google Scholar]
- Swami, A.; Sadler, B.M. Hierarchical digital modulation classification using cumulants. IEEE Trans. Commun. 2000, 48, 416–429. [Google Scholar] [CrossRef]
- Wei, Y.; Fang, S.; Wang, X. Automatic modulation classification of digital communication signals using SVM based on hybrid features, cyclostationary, and information entropy. Entropy 2019, 21, 745. [Google Scholar] [CrossRef]
- Jiang, H.; Guan, W.; Ai, L. Specific radar emitter identification based on a digital channelized receiver. In Proceedings of the 2012 5th International Congress on Image and Signal Processing, Chongqing, China, 16–18 October 2012; pp. 1855–1860. [Google Scholar]
- Tang, L.; Zhang, K.; Dai, H.; Zhu, P.; Liang, Y.-C. Analysis and optimization of ambiguity function in radar-communication integrated systems using MPSK-DSSS. IEEE Wirel. Commun. Lett. 2019, 8, 1546–1549. [Google Scholar] [CrossRef]
- Zhu, M.; Zhang, X.; Qi, Y.; Ji, H. Compressed sensing mask feature in time-frequency domain for civil flight radar emitter recognition. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 2146–2150. [Google Scholar]
- Zhang, X.-D.; Shi, Y.; Bao, Z. A new feature vector using selected bispectra for signal classification with application in radar target recognition. IEEE Trans. Signal Process. 2001, 49, 1875–1885. [Google Scholar] [CrossRef]
- López-Risueño, G.; Grajal, J.; Sanz-Osorio, A. Digital channelized receiver based on time-frequency analysis for signal interception. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 879–898. [Google Scholar] [CrossRef]
- Triantafyllakis, K.; Surligas, M.; Vardakis, G.; Papadakis, S. Phasma: An automatic modulation classification system based on random forest. In Proceedings of the 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Baltimore, MD, USA, 6–9 March 2017; pp. 1–3. [Google Scholar]
- Javed, Y.; Bhatti, A. Emitter recognition based on modified x-means clustering. In Proceedings of the IEEE Symposium on Emerging Technologies, Islamabad, Pakistan, 18 September 2005; pp. 352–358. [Google Scholar]
- Yuan, S.-X.; Lu, S.-J.; Wang, S.-L.; Zhang, W. Modified communication emitter recognition method based on D-S theory. In Proceedings of the 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Ningbo, China, 19–22 September 2015; pp. 1–4. [Google Scholar]
- O’Shea, T.J.; Corgan, J.; Clancy, T.C. Convolutional radio modulation recognition networks. In Engineering Applications of Neural Networks, Proceedings of the 17th International Conference, EANN 2016, Aberdeen, UK, 2–5 September 2016; Springer International Publishing: Cham, Switzerland, 2016; pp. 213–216. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Zhu, M.; Feng, Z.; Zhou, X. A Novel Data-Driven Specific Emitter Identification Feature Based on Machine Cognition. Electronics 2020, 9, 1308. [Google Scholar] [CrossRef]
- Man, P.; Ding, C.; Ren, W.; Xu, G. A Specific Emitter Identification Algorithm under Zero Sample Condition Based on Metric Learning. Remote Sens. 2021, 13, 4919. [Google Scholar] [CrossRef]
- Merchant, K.; Revay, S.; Stantchev, G.; Nousain, B. Deep Learning for RF Device Fingerprinting in Cognitive Communication Networks. IEEE J. Sel. Top. Signal Process. 2018, 12, 160–167. [Google Scholar] [CrossRef]
- Qasaimeh, M.; Denolf, K.; Lo, J.; Vissers, K.; Zambreno, J.; Jones, P.H. Comparing energy efficiency of CPU, GPU and FPGA implementations for vision kernels. In Proceedings of the 2019 IEEE International Conference on Embedded Software and Systems (ICESS), Las Vegas, NV, USA, 2–3 June 2019; pp. 1–8. [Google Scholar]
- Lee, J.; Shin, D.; Lee, J.; Lee, J.; Kang, S.; Yoo, H.-J. A full HD 60 fps CNN super resolution processor with selective caching based layer fusion for mobile devices. In Proceedings of the 2019 Symposium on VLSI Circuits, Kyoto, Japan, 9–14 June 2019; pp. C302–C303. [Google Scholar]
- Jo, J.; Cha, S.; Rho, D.; Park, I.-C. DSIP: A scalable inference accelerator for convolutional neural networks. IEEE J. Solid-State Circuits 2018, 53, 605–618. [Google Scholar] [CrossRef]
- Kim, S.; Jo, J.; Park, I.-C. Hybrid convolution architecture for energy-efficient deep neural network processing. IEEE Trans. Circuits Syst. I Regul. Pap. 2021, 68, 2017–2029. [Google Scholar] [CrossRef]
- Hsieh, Y.-Y.; Lee, Y.-C.; Yang, C.-H. A CycleGAN accelerator for unsupervised learning on mobile devices. In Proceedings of the 2020 IEEE International Symposium on Circuits and Systems (ISCAS), Seville, Spain, 12–14 October 2020; pp. 1–5. [Google Scholar]
- Chang, J.-W.; Kang, K.-W.; Kang, S.-J. An energy-efficient FPGA-based deconvolutional neural networks accelerator for single image super-resolution. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 281–295. [Google Scholar] [CrossRef]
- Yu, Y.; Zhao, T.; Wang, M.; Wang, K.; He, L. Uni-OPU: An FPGA-based uniform accelerator for convolutional and transposed convolutional networks. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2020, 28, 1545–1556. [Google Scholar] [CrossRef]
- Smart SSD. Available online: https://semiconductor.samsung.com/ssd/smart-ssd/ (accessed on 20 November 2022).
- Boards-and-Kits. Available online: https://www.xilinx.com/products/boards-and-kits.html (accessed on 20 November 2022).
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Howard, A.; Sandler, M.; Chen, B.; Wang, W.; Chen, L.-C.; Tan, M.; Chu, G.; Vasudevan, V.; Zhu, Y.; Pang, R.; et al. Searching for MobileNetV3. arXiv 2019, arXiv:1905.02244. [Google Scholar]
- Han, S.; Mao, H.; Dally, W.J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In Proceedings of the 2016 International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, 2–4 May 2016; pp. 1–14. [Google Scholar]
- Cheng, Y.; Wang, D.; Zhou, P.; Zhang, T. A survey of model compression and acceleration for deep neural networks. arXiv 2017, arXiv:1710.09282. [Google Scholar]
- Han, S.; Liu, X.; Mao, H.; Pu, J.; Pedram, A.; Horowitz, M.A.; Dally, W.J. EIE: Efficient inference engine on compressed deep neural network. In Proceedings of the 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), Seoul, Republic of Korea, 18–22 June 2016; pp. 393–405. [Google Scholar]
- Song, K.; Zhou, X.; Yu, H.; Huang, Z.; Zhang, Y.; Luo, W.; Duan, X.; Zhang, M. Towards Better Word Alignment in Transformer. IEEE/ACM Trans. Audio Speech Lang. Process. 2020, 28, 1801–1812. [Google Scholar] [CrossRef]
- Lu, Y.; Zhang, J.; Zeng, J.; Wu, S.; Zong, C. Attention Analysis and Calibration for Transformer in Natural Language Generation. IEEE/ACM Trans. Audio Speech Lang. Process. 2022, 30, 1927–1938. [Google Scholar] [CrossRef]
- Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. In Proceedings of the International Conference on Machine Learning (ICML), Online, 18–24 July 2021; pp. 10347–10357. [Google Scholar]
- Li, Q.; Zhang, X.; Xiong, J.; Hwu, W.-M.; Chen, D. Efficient methods for mapping neural machine translator on FPGAs. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 1866–1877. [Google Scholar] [CrossRef]
- Zhang, X.; Wu, Y.; Zhou, P.; Tang, X.; Hu, J. Algorithm-hardware co-design of attention mechanism on FPGA devices. ACM Trans. Embed. Comput. Syst. 2021, 20, 1–24. [Google Scholar] [CrossRef]
- Wang, T.; Gong, L.; Wang, C.; Yang, Y.; Gao, Y.; Zhou, X.; Chen, H. ViA: A novel vision-transformer accelerator based on FPGA. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2022, 41, 4088–4099. [Google Scholar] [CrossRef]
- Dong, Q.; Xie, X.; Wang, Z. SWAT: An Efficient Swin Transformer Accelerator Based on FPGA. In Proceedings of the 2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC), Incheon, Republic of Korea, 22–25 January 2024; pp. 515–520. [Google Scholar] [CrossRef]
- Huang, M.; Luo, J.; Ding, C.; Wei, Z.; Huang, S.; Yu, H. An Integer-Only and Group-Vector Systolic Accelerator for Efficiently Mapping Vision Transformer on Edge. IEEE Trans. Circuits Syst. I Regul. Pap. 2023, 70, 5289–5301. [Google Scholar] [CrossRef]
- Chen, Y.; Dai, X.; Chen, D.; Liu, M.; Dong, X.; Yuan, L.; Liu, Z. Mobile-Former: Bridging MobileNet and Transformer. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5260–5269. [Google Scholar]
- Li, J.; Xia, X.; Li, W.; Li, H.; Wang, X.; Xiao, X.; Wang, R.; Zheng, M.; Pan, X. Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios. arXiv 2022, arXiv:2207.05501. [Google Scholar]
- Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11966–11976. [Google Scholar] [CrossRef]
- Guo, J.; Han, K.; Wu, H.; Tang, Y.; Chen, X.; Wang, Y.; Xu, C. CMT: Convolutional Neural Networks Meet Vision Transformers. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 12165–12175. [Google Scholar] [CrossRef]
- Chen, Y.; Li, P.; Yan, E.; Jing, Z.; Liu, G.; Wang, Z. A Knowledge Graph-Driven CNN for Radar Emitter Identification. Remote Sens. 2023, 15, 3289. [Google Scholar] [CrossRef]
- Jing, Z.; Li, P.; Wu, B.; Yan, E.; Chen, Y.; Gao, Y. Attention-Enhanced Dual-Branch Residual Network with Adaptive L-Softmax Loss for Specific Emitter Identification under Low-Signal-to-Noise Ratio Conditions. Remote Sens. 2024, 16, 1332. [Google Scholar] [CrossRef]
- Demir, A.; Mehrotra, A.; Roychowdhury, J. Phase noise in oscillators: A unifying theory and numerical methods for characterization. IEEE Trans. Circuits Syst. I Regul. Pap. 2000, 47, 655–674. [Google Scholar] [CrossRef]
- Wortsman, M.; Lee, J.; Gilmer, J.; Kornblith, S. Replacing softmax with ReLU in Vision Transformers. arXiv 2023, arXiv:2309.08586. [Google Scholar]
- Du, G.; Tian, C.; Li, Z.; Zhang, D.; Yin, Y.-S.; Ouyang, Y. Efficient softmax hardware architecture for deep neural networks. In Proceedings of the 2019 on Great Lakes Symposium on VLSI (GLSVLSI), Tysons Corner, VA, USA, 9–11 May 2019; pp. 75–80. [Google Scholar]
- Spagnolo, F.; Perri, S.; Corsonello, P. Aggressive Approximation of the SoftMax Function for Power-Efficient Hardware Implementations. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 1652–1656. [Google Scholar] [CrossRef]
- Lu, J.; Hu, J. Radar Emitter Identification based on CNN and FPGA Implementation. In Proceedings of the 2024 IEEE 6th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 24–26 May 2024; pp. 190–194. [Google Scholar] [CrossRef]
- Notaro, P.; Paschali, M.; Hopke, C.; Wittmann, D.; Navab, N. Radar Emitter Classification with Attribute-specific Recurrent Neural Networks. arXiv 2019, arXiv:1911.07683. [Google Scholar]
- Wu, B.; Wu, X.; Li, P.; Gao, Y.; Si, J.; Al-Dhahir, N. Efficient FPGA Implementation of Convolutional Neural Networks and Long Short-Term Memory for Radar Emitter Signal Recognition. Sensors 2024, 24, 889. [Google Scholar] [CrossRef] [PubMed]
Radar Emitter | Carrier Frequency | Frequency Bandwidth | Pulse Width |
---|---|---|---|
Emitter 1 | 10 GHz | 30 MHz | 2 μs |
Emitter 2 | 10 GHz | 30 MHz | 2 μs |
Emitter 3 | 10 GHz | 30 MHz | 2 μs |
Emitter 4 | 8 GHz | 20 MHz | 1.5 μs |
Emitter 5 | 8 GHz | 20 MHz | 1.5 μs |
Emitter 6 | 8 GHz | 20 MHz | 1.5 μs |
Radar Emitter | Phase Noise Frequency Offset (kHz) | Phase Modulation Coefficient | Filter Sampling Frequency (kHz) | Filter Cutoff Frequency (Hz) |
---|---|---|---|---|
Emitter 1 | [1, 10, 100, 1000, 10000] | [1, 0.1, 0.01, 0.001, 0.0001] | 20 | 200
Emitter 2 | [1, 60, 100, 4000, 20000] | [0.2, 0.3, 0.05, 0.007, 0.0004] | 20 | 200
Emitter 3 | [5, 30, 200, 1000, 15000] | [0.9, 0.6, 0.05, 0.008, 0.0006] | 20 | 200
Emitter 4 | [1, 10, 100, 1000, 10000] | [1, 0.1, 0.01, 0.001, 0.0001] | 30 | 150
Emitter 5 | [1, 60, 100, 4000, 20000] | [0.2, 0.3, 0.05, 0.007, 0.0004] | 30 | 150
Emitter 6 | [5, 30, 200, 1000, 15000] | [0.9, 0.6, 0.05, 0.008, 0.0006] | 30 | 150
Resource | LUT | FF | BRAM | DSP |
---|---|---|---|---|
Available | 242,400 | 484,800 | 600 | 1920 |
Utilization (%) | 139 K (57.3) | 134 K (27.6) | 386 (64.33) | 992 (51.67) |
Metric | CPU | GPU | FPGA
---|---|---|---|
Device | I7-13700 | RTX4090 | Xilinx XCKU040 |
FPS | 1576.3 | 21,429.0 | 2962.7 |
Average Power (W) | 152.00 a | 69.57 b | 5.72
Throughput (GOPS) | 66.56 | 904.84 | 125.10 |
Power Efficiency (GOPS/W) | 0.44 | 13.01 | 21.87 |
Related Work | [42] | [43] | [44] | [45] | [46] | Our Work |
---|---|---|---|---|---|---|
Model | NMT | Multi30k | Swin-T | Swin-T | ViT-s | LW-CT part |
Year | 2021 | 2021 | 2022 | 2024 | 2023 | 2024 |
Platform | VCU118 | ZCU102 | AlveoU50 | AlveoU50 | ZCU102 | XCKU040
Frequency (MHz) | 100 | 125 | 300 | 200 | 300 | 150 |
Quantization | float32/half16 | int8 | float16 | int16/int8 | int8 | int16 |
DSP Utilization | 4838 (70.7%) | 2500 (99.2%) | 2420 (40.7%) | 1863 (31.3%) | 1268 (50.3%) | 992 (51.7%) |
LUT Utilization | - | 252 K (91.9%) | 258 K (29.6%) | 271 K (31.1%) | 144 K (52.7%) | 139 K (57.3%) |
BRAM Utilization | - | 699 (71.0%) | 1002 (74.6%) | 609.5 (45.3%) | - | 386 (64.33%) |
Throughput (GOPS) | 22.0 | 190.1 | 309.6 | 301.9 | 762.7 | 153.2 |
Power (W) | 20.95 | 22.5 | 39 | 14.35 | 29.6 | 5.72 |
Power Efficiency (GOPS/W) | 1.05 | 8.44 | 7.94 | 21.04 | 25.76 | 26.78 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Gao, X.; Wu, B.; Li, P.; Jing, Z. 1D-CNN-Transformer for Radar Emitter Identification and Implemented on FPGA. Remote Sens. 2024, 16, 2962. https://doi.org/10.3390/rs16162962