SAR Target Recognition Based on Inception and Fully Convolutional Neural Network Combining Amplitude Domain Multiplicative Filtering Method
Abstract
1. Introduction
- We designed the ADMF image processing method to improve image quality and achieve data augmentation while avoiding the problems introduced by other augmentation schemes. Because target and background returns differ markedly in amplitude in radar imaging, the ADMF method performs noise reduction from the amplitude-domain perspective without changing position information, which is highly beneficial for SAR image recognition and detection. In addition, the ADMF method suggests a new way to segment the target and its shadow in an image; it is simple to implement, easy to understand and extend, and has a clear noise-reduction effect;
- We utilize the fully convolutional structure of the FCNN to alleviate overfitting on small samples, and introduce the Inception structure together with mixed progressive convolution layers into the FCNN to improve the generalization performance of the network and the convergence rate of training. The small-scale convolution kernel decomposition of the Inception structure not only accelerates convergence but also increases the depth of the network. The mixed progressive convolution layers further accelerate training and reduce the computational load;
- For the initialization of network parameters, we do not follow transfer learning or metric learning; instead, we construct pretraining samples with the ADMF method and initialize the network through pretraining. This initializes the network parameters well while avoiding the problems that transfer learning creates. Because no other models are introduced, the process of migrating data between two or more models, and the associated problems [19], is avoided. In addition, optimizing the initial parameters through pretraining reduces training time and the need to design additional models. Although the proposed method includes a pretraining stage, the overall training time of the pretrained network is significantly less than that of the non-pretrained network.
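The amplitude-domain idea in the first contribution can be sketched in code. The exact ADMF formulation appears in Section 2.2; the sketch below is only an illustrative amplitude-dependent multiplicative mask, where the `gain` and `floor` parameters and the clipping rule are assumptions of this example, not values from the paper. It suppresses low-amplitude clutter and preserves high-amplitude target pixels without moving any pixel, matching the property that position information is unchanged:

```python
import numpy as np

def admf_filter(sar_amplitude, gain=1.5, floor=0.2):
    """Illustrative amplitude-domain multiplicative filter (hypothetical
    parameters): each pixel is scaled by a weight derived from its own
    normalized amplitude, so positions are never altered."""
    a = sar_amplitude.astype(np.float64)
    # normalize amplitude to [0, 1]
    a = (a - a.min()) / (a.max() - a.min() + 1e-12)
    # multiplicative weight: high-amplitude (target) pixels keep nearly
    # full energy, low-amplitude (clutter) pixels are attenuated
    weight = np.clip(gain * a, floor, 1.0)
    return a * weight
```

Because the weight depends only on amplitude, thresholding the filtered output is also one possible starting point for separating the target and shadow regions mentioned above.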
2. Methods
2.1. Overall Architecture
2.2. Design of ADMF Method
2.3. Design of IFCNN Model
2.4. The FCNN and Mixed Progressive Convolution Layers
2.5. The Inception Structure
3. Experiments
3.1. Experimental Models and Datasets
3.2. Experimental Results and Analysis
3.2.1. Experimental Simulation of ADMF Method
- Configure the experiment environment and build the CNN, FCNN, and IFCNN. The convolution kernel weights of the three networks are initialized from a normal distribution with a mean of 0 and a standard deviation of √(2/n), where n denotes the number of inputs to each unit [37];
- Then train the three networks on the SAR subset-20 (see the dataset table for details). When the training accuracy and loss no longer change, reduce the learning rate to 10% of its current value, until it reaches 10⁻⁷. Finally, test and verify the three models and record the experimental results (the test sample targets are BMP2, T62, 2S1, and D7);
- Utilize the ADMF image processing method to construct the pretraining set, subset-50 (see the dataset table for details); after pretraining of the network reaches convergence, repeat step 2;
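The initialization and learning-rate protocol above can be sketched with NumPy. The √(2/n) initialization follows [37]; the plateau-triggered decay below assumes a factor-of-10 reduction per plateau (one plausible reading of "reduce the learning rate to 10%", consistent with eventually reaching 10⁻⁷), and the starting rate of 10⁻³ is an assumption, since the paper's value is not given in this excerpt:

```python
import numpy as np

def he_init(shape, rng=None):
    """Kernel weights drawn from N(0, sqrt(2/n)), where n is the number
    of inputs to each unit, as in [37]."""
    rng = rng or np.random.default_rng(0)
    # for a conv kernel (out_ch, in_ch, kh, kw), each output unit sees
    # in_ch * kh * kw inputs
    n = int(np.prod(shape[1:])) if len(shape) > 1 else shape[0]
    return rng.normal(0.0, np.sqrt(2.0 / n), size=shape)

def lr_schedule(lr0=1e-3, factor=0.1, lr_min=1e-7):
    """Yield the learning rate for each plateau-to-plateau phase,
    stopping once the rate drops below lr_min (10^-7 in the protocol)."""
    lr = lr0
    while lr >= lr_min:
        yield lr
        lr *= factor
```

In training, one would move to the next value from `lr_schedule` each time accuracy and loss stop changing, and terminate after the 10⁻⁷ phase.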
3.2.2. Comparative Experiments of IFCNN
3.2.3. Ablation Experiment of IFCNN
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Wang, S.; Wang, Y.; Liu, H. Attribute-Guided Multi-Scale Prototypical Network for Few-Shot SAR Target Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 12224–12245.
- Pei, J.; Huang, Y.; Huo, W.; Zhang, Y.; Yang, J.; Yeo, T.S. SAR Automatic Target Recognition Based on Multiview Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 2196–2210.
- Chen, S.; Wang, H.; Xu, F.; Jin, Y. Target Classification Using the Deep Convolutional Networks for SAR Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817.
- Wu, W.; Li, H.; Zhang, L.X. High-Resolution PolSAR Scene Classification With Pretrained Deep Convnets and Manifold Polarimetric Parameters. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6159–6168.
- Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 12–25.
- El Housseini, A.; Toumi, A. Deep Learning for Target Recognition from SAR Images. In Proceedings of the Seminar on Detection Systems Architectures and Technologies, Algiers, Algeria, 20–22 February 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 111–115.
- Amrani, M.; Jiang, F. Deep Feature Extraction and Combination for Synthetic Aperture Radar Target Classification. J. Appl. Remote Sens. 2017, 11, 1.
- Du, Y.; Liu, J.; Song, W.; He, Q.; Huang, D. Ocean Eddy Recognition in SAR Images With Adaptive Weighted Feature Fusion. IEEE Access 2019, 7, 152023–152033.
- Xiao, Z.; Liu, W. Research on SAR Target Recognition Based on Convolutional Neural Networks. Electron. Meas. Technol. 2018, 44, 15–27.
- Ding, J.; Chen, B.; Liu, H.W. Convolutional Neural Network With Data Augmentation for SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M. Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680.
- Guo, J.Y.; Lei, B.; Ding, C.B. Synthetic Aperture Radar Image Synthesis by Using Generative Adversarial Nets. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1111–1115.
- Bao, X.J.; Pan, Z.X.; Liu, L. SAR Image Simulation by Generative Adversarial Networks. In Proceedings of the IGARSS 2019, Yokohama, Japan, 28 July–2 August 2019; pp. 9995–9998.
- Li, S.; Yue, X.; Shi, M. A Depth Neural Network SAR Occlusion Target Recognition Method. J. Xi'an Univ. Electron. Sci. Technol. (Nat. Sci. Ed.) 2015, 31, 154–160.
- Huang, Z.; Pan, Z.; Lei, B. Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data. Remote Sens. 2017, 9, 907.
- Zhang, W.; Zhu, Y.; Fu, Q. Semi-Supervised Deep Transfer Learning-Based on Adversarial Feature Learning for Label Limited SAR Target Recognition. IEEE Access 2019, 7, 152412–152420.
- Rui, Q.; Ping, L. A Semi-Greedy Neural Network CAE-HL-CNN for SAR Target Recognition with Limited Training Data. Int. J. Remote Sens. 2020, 41, 7889–7911.
- Lingjuan, Y.; Ya, W. SAR ATR Based on FCNN and ICAE. J. Radars 2018, 7, 136–139.
- Huang, Z.; Pan, Z.; Lei, B. What, Where, and How to Transfer in SAR Target Recognition Based on Deep CNNs. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2324–2336.
- Zhong, C.; Mu, X.; He, X. SAR Target Image Classification Based on Transfer Learning and Model Compression. IEEE Geosci. Remote Sens. Lett. 2019, 16, 412–416.
- Wang, K.; Zhang, G.; Xu, Y. SAR Target Recognition Based on Probabilistic Meta-Learning. IEEE Geosci. Remote Sens. Lett. 2020, 18, 682–686.
- Li, L.; Liu, J.; Su, L. A Novel Graph Metalearning Method for SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
- Fu, K.; Zhang, T.; Zhang, Y. Few-Shot SAR Target Classification via Meta-Learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
- Zi, Y.; Fa, W. Self-Attention Multiscale Feature Fusion Network for Small-Sample SAR Image Recognition. Signal Process. 2020, 36, 1846–1858.
- Hang, W.; Xiao, C. SAR Image Recognition Based on Small-Sample Learning. Comput. Sci. 2020, 47, 132–136.
- Pan, Z.X.; Bao, X.J.; Wang, B.W. Siamese Network Based Metric Learning for SAR Target Classification. In Proceedings of the IGARSS, Yokohama, Japan, 28 July–2 August 2019; pp. 1342–1345.
- Chen, Y.; Lingjuan, Y.U.; Xie, X. SAR Image Target Classification Based on All Convolutional Neural Network. Radar Sci. Technol. 2018.
- Ling, L.; Qin, W. A SAR Target Classification Method Based on Fast Full Convolution Neural Network. Patent CN111814608A, 2020; pp. 20–34.
- Chen, L.; Wu, H.; Cui, X. Convolution Neural Network SAR Image Target Recognition Based on Transfer Learning. Chin. Space Sci. Technol. 2018, 38, 45–51.
- Long, F.; Zheng, N. A Visual Computing Model Based on Attention Mechanism. J. Image Graph. 1998, 7, 62–65.
- Gao, K.; Yu, X.; Tan, X. Small Sample Classification for Hyperspectral Imagery Using Temporal Convolution and Attention Mechanism. Remote Sens. Lett. 2021, 12, 510–519.
- Bouvrie, J. Notes on Convolutional Neural Networks. Tech Report; MIT CBCL: Cambridge, MA, USA, 2006; pp. 22–30.
- Lin, M.; Chen, Q.; Yan, S.C. Network in Network. In Proceedings of the 2014 International Conference on Learning Representations (ICLR), Banff, AB, Canada, 14–16 April 2014; pp. 24–37.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S. Rethinking the Inception Architecture for Computer Vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: Piscataway, NJ, USA, 2016; pp. 2818–2826.
- Szegedy, C.; Ioffe, S.; Vanhoucke, V. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; AAAI Press: Palo Alto, CA, USA, 2017; pp. 2913–2921.
- Min, X.; Hai, L.; Li, M. Research on Fingerprint Liveness Detection Algorithm Based on Deep Convolutional Neural Network. Inf. Netw. Secur. 2018, 28–35.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the International Conference on Computer Vision (ICCV), Santiago, Chile, 11–18 December 2015; pp. 1026–1034.
- Barak, O.; Rigotti, M.; Fusi, S. The Sparseness of Mixed Selectivity Neurons Controls the Generalization-Discrimination Trade-Off. J. Neurosci. 2013, 33, 3844–3856.
- Huesmann, K.; Rodriguez, L.G.; Linsen, L. The Impact of Activation Sparsity on Overfitting in Convolutional Neural Networks. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2021; Volume 12663, pp. 130–145.
- Ping, L.; Xiongjun, F.; Cheng, F.; Jian, D.; Rui, Q.; Marco, M. LW-CMDANet: A Novel Attention Network for SAR Automatic Target Recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 6615–6630.
- Chen, Y.; Wang, X.; Liu, Z.; Xu, H.; Darrell, T. A New Meta-Baseline for Few-Shot Learning. CoRR 2020, 6, 12–19.
- Sun, Y.; Wang, Y.; Liu, H.; Wang, N.; Wang, J. SAR Target Recognition With Limited Training Data Based on Angular Rotation Generative Network. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1928–1932.
Dataset | BMP2 | BRDM2 | 2S1 | BTR60 | BTR70 | D7 | T62 | T72 | ZIL131 | ZSU234 |
---|---|---|---|---|---|---|---|---|---|---|
Training samples | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
Pretraining samples | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 |
Testing samples | 195 | 274 | 274 | 195 | 197 | 274 | 273 | 196 | 274 | 274 |
Accuracy | BMP2 | 2S1 | D7 | T62 | Average Accuracy | Training Time (s) |
---|---|---|---|---|---|---|
CNN | 70.85% | 66.5% | 71.33% | 64.58% | 68.32% | 1050 |
ADMF-CNN | 83.85% | 82.4% | 86.98% | 80.39% | 83.4% | 68 |
FCNN | 74.96% | 72.5% | 75.45% | 67.33% | 72.56% | 900 |
ADMF-FCNN | 87.03% | 88.15% | 89.59% | 84.37% | 87.28% | 56 |
IFCNN | 75.43% | 73.58% | 74.33% | 70.93% | 73.58% | 850 |
ADMF-IFCNN | 91.11% | 88.12% | 91.15% | 87.25% | 89.4% | 48 |
Class | BMP2 | BRDM2 | 2S1 | BTR60 | BTR70 | D7 | T62 | T72 | ZIL131 | ZSU234 | Accuracy |
---|---|---|---|---|---|---|---|---|---|---|---|
BMP2 | 174 | 9 | 0 | 6 | 4 | 0 | 0 | 2 | 0 | 0 | 88.91% |
BRDM2 | 18 | 236 | 7 | 0 | 1 | 2 | 0 | 7 | 3 | 0 | 86.13% |
2S1 | 9 | 15 | 240 | 0 | 0 | 0 | 5 | 5 | 1 | 0 | 87.59% |
BTR60 | 5 | 9 | 16 | 163 | 14 | 0 | 0 | 2 | 0 | 0 | 83.58% |
BTR70 | 11 | 0 | 0 | 5 | 181 | 0 | 0 | 0 | 0 | 0 | 91.87% |
D7 | 4 | 0 | 6 | 0 | 0 | 248 | 15 | 0 | 0 | 0 | 90.51% |
T62 | 0 | 0 | 9 | 0 | 0 | 0 | 233 | 0 | 22 | 7 | 85.04% |
T72 | 0 | 0 | 0 | 8 | 11 | 0 | 0 | 177 | 0 | 0 | 90.12% |
ZIL131 | 0 | 0 | 1 | 0 | 0 | 5 | 0 | 0 | 255 | 13 | 93.06% |
ZSU234 | 0 | 1 | 1 | 0 | 0 | 7 | 0 | 0 | 11 | 254 | 92.7% |
Total | 88.95% |
Class | BMP2 | BRDM2 | 2S1 | BTR60 | BTR70 | D7 | T62 | T72 | ZIL131 | ZSU234 | Accuracy |
---|---|---|---|---|---|---|---|---|---|---|---|
BMP2 | 168 | 0 | 13 | 0 | 5 | 0 | 0 | 0 | 9 | 0 | 86.12% |
BRDM2 | 1 | 232 | 0 | 24 | 0 | 0 | 0 | 0 | 17 | 0 | 84.94% |
2S1 | 10 | 29 | 235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 86.03% |
BTR60 | 0 | 8 | 3 | 151 | 33 | 0 | 0 | 0 | 0 | 0 | 78.41% |
BTR70 | 28 | 0 | 0 | 0 | 169 | 0 | 0 | 0 | 0 | 0 | 85.77% |
D7 | 0 | 9 | 0 | 25 | 0 | 240 | 0 | 0 | 0 | 0 | 88.13% |
T62 | 0 | 0 | 0 | 0 | 0 | 0 | 227 | 0 | 7 | 39 | 83.41% |
T72 | 0 | 0 | 0 | 5 | 21 | 0 | 0 | 170 | 0 | 0 | 86.71% |
ZIL131 | 0 | 1 | 5 | 1 | 0 | 7 | 0 | 0 | 246 | 13 | 89.95% |
ZSU234 | 0 | 0 | 7 | 0 | 0 | 9 | 0 | 0 | 17 | 241 | 88.49% |
Total | 85.6% |
Class | BMP2 | BRDM2 | 2S1 | BTR60 | BTR70 | D7 | T62 | T72 | ZIL131 | ZSU234 | Accuracy |
---|---|---|---|---|---|---|---|---|---|---|---|
BMP2 | 162 | 0 | 13 | 0 | 9 | 0 | 0 | 0 | 11 | 0 | 83.07% |
BRDM2 | 33 | 210 | 0 | 15 | 11 | 0 | 0 | 1 | 0 | 0 | 77.92% |
2S1 | 17 | 31 | 224 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 81.75% |
BTR60 | 0 | 0 | 18 | 157 | 20 | 0 | 0 | 0 | 0 | 0 | 80.51% |
BTR70 | 20 | 0 | 0 | 12 | 165 | 0 | 0 | 0 | 0 | 0 | 83.75% |
D7 | 9 | 0 | 11 | 19 | 0 | 225 | 0 | 0 | 0 | 0 | 82.11% |
T62 | 0 | 0 | 13 | 0 | 0 | 0 | 215 | 0 | 7 | 38 | 78.75% |
T72 | 1 | 1 | 0 | 14 | 18 | 0 | 0 | 162 | 0 | 0 | 82.65% |
ZIL131 | 0 | 0 | 9 | 0 | 0 | 3 | 0 | 0 | 239 | 23 | 86.03% |
ZSU234 | 0 | 0 | 0 | 0 | 0 | 8 | 0 | 0 | 35 | 231 | 84.57% |
Total | 82.11% |
Methods | Dataset | Training Sample Size | Testing Accuracy | Training Time(s) |
---|---|---|---|---|
Baseline CNN [40] | MSTAR | 200 | 54.20% | 956 |
A-ConvNet [3] | MSTAR | 200 | 78.82% | - |
CAE-CNN [17] | MSTAR | 200 | 78.56% | 486 |
Meta-Baseline [41] | MSTAR | 200 | 83.25% | - |
DA-Net [10] | MSTAR | 200 | 73.5% | - |
CAE-HL-CNN [17] | MSTAR | 200 | 81.03% | 488 |
LW-CMDANet [40] | MSTAR | 200 | 55.34% | 450 |
Unnamed method * [25] | MSTAR | 275 | 88.09% | - |
AG-MsPN [1] | MSTAR | 200 | 92.68% | - |
ARGN [42] | MSTAR | 200 | 82.73% | - |
ADMF-CNN (ours) | MSTAR | 200 | 82.11% | 279 |
ADMF-FCNN (ours) | MSTAR | 200 | 85.6% | 253 |
ADMF-IFCNN (ours) | MSTAR | 200 | 88.95% | 235 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chen, H.; Fu, X.; Dong, J. SAR Target Recognition Based on Inception and Fully Convolutional Neural Network Combining Amplitude Domain Multiplicative Filtering Method. Remote Sens. 2022, 14, 5718. https://doi.org/10.3390/rs14225718