Single-Scene SAR Image Data Augmentation Based on SBR and GAN for Target Recognition
Abstract
1. Introduction
- Simulation methods focusing on amplitude correctness: These methods typically require appropriate physical scattering models and approximate electromagnetic computation techniques to derive the final simulated SAR images. As early as 1991, J. M. Nasr and D. Vidal-Madjar used high-frequency approximation techniques to compute the RCS of electrically large targets and then modulated the selected SAR pulse response to generate simulated images [3] (a minimal sketch of this RCS-modulation idea follows this list). Feng Xu and Ya-Qiu Jin applied mapping and projection principles together with a vector radiative transfer model and the integral equation method to calculate the scattering field, successfully simulating polarimetric SAR images of complex terrain scenes [4]. Wenna Fan et al. [5] combined geometric optics (GO) and physical optics (PO) to obtain spatially distributed scattering fields and processed the echo data with an amendatory frequency scaling algorithm (AFSA), producing highly squinted spotlight simulated SAR images. Lamei Zhang et al. [6] integrated the Kirchhoff approximation with the GO and PO methods to compute the frequency-domain echo of linear frequency modulation (LFM) signals and then applied the inverse fast Fourier transform (IFFT) to produce simulated polarimetric SAR images. While these methods offer well-defined physical mechanisms and high numerical accuracy, they demand significant computational resources because of their reliance on detailed physical modeling and numerical techniques. In addition, SAR imaging involves multiple signal processing steps, each introducing further computational overhead. These challenges become particularly pronounced when generating large-scale datasets, which significantly limits the practicality of these approaches in deep learning applications.
- Simulation methods focusing on geometric accuracy: These methods typically require precise CAD models and employ various ray tracing algorithms to simulate the interaction between electromagnetic waves and target models, thereby generating simulated SAR images. RaySAR [7] utilizes the ray tracing algorithm of the open-source software POV-Ray to simulate radar signals in three dimensions: azimuth, range, and elevation. However, RaySAR primarily emphasizes the geometric accuracy of the simulated signals while neglecting random scattering effects, and it simplifies target scattering with the diffuse and specular reflection models of Phong shading. Because the physical properties of light differ significantly from those of electromagnetic waves, and because there are no precise analytical expressions for the specular and diffuse reflection coefficients used to model scattering, the pixel amplitudes of SAR images generated by RaySAR have no exact physical meaning. Nonetheless, RaySAR remains a commendable tool for SAR image simulation. The coherent ray-tracing SAR simulator [8], developed from the ray traversal scheme introduced by Amanatides and Woo [9], generates coherent SAR images. Like RaySAR, it models scattering with diffuse and specular reflection coefficients. It also requires that all polygons in the 3D model be convex, which limits its applicability in certain scenarios. SARViz [10], in contrast, is a rasterization method implemented on graphics processing units (GPUs), capable of generating approximately 20 simulated images of 1024 × 768 pixels per second [2]. However, in pursuit of higher simulation efficiency, SARViz compromises geometric and radiometric accuracy to some extent; notably, it cannot simulate reflections of third order or higher.
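To make the first category concrete, the sketch below illustrates the amplitude-oriented idea of [3] in a heavily simplified form: a pre-computed RCS/scattering map is modulated by an assumed separable sinc-squared point-spread function and multiplied by unit-mean exponential speckle. The function name, PSF shape, and all parameter values are our own illustrative assumptions (requiring NumPy and SciPy), not the implementation used in the cited works.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_sar_chip(rcs_map, range_res=0.5, azimuth_res=0.5, pixel_spacing=0.25, seed=0):
    """Modulate an RCS map with a separable sinc^2 PSF and multiply by exponential speckle."""
    rng = np.random.default_rng(seed)
    x = np.arange(-8, 9) * pixel_spacing
    psf = np.outer(np.sinc(x / azimuth_res) ** 2, np.sinc(x / range_res) ** 2)
    psf /= psf.sum()
    blurred = fftconvolve(rcs_map, psf, mode="same")      # resolution-limited "image"
    speckle = rng.exponential(1.0, size=blurred.shape)    # fully developed, unit-mean speckle
    return blurred * speckle

chip = simulate_sar_chip(np.random.default_rng(1).random((128, 128)))
print(chip.shape, chip.mean())
```

The sinc-squared PSF and exponential speckle are standard first-order choices; a physically faithful simulator would instead derive the response from the sensor parameters and scattering model, as the methods above do.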
- Develop an RCS image simulation technique based on the SBR algorithm that meets the need for large-scale, high-resolution SAR image training sets while ensuring geometric accuracy and amplitude correctness (an illustrative ray-tracing sketch follows this list).
- Use cycle GAN to obtain simulated images that are closer to real data. The training dataset strictly limits the number of samples and the sample parameters: it contains only 30 images, all drawn from a single-scene SAR image.
- The framework can simulate and generate a large number of realistic, usable simulated images under other imaging parameters, such as different incidence angles and different aircraft models.
- Based on public data, construct Aircraft-VariedSAR, a 0.5 m resolution aircraft target dataset with varied imaging parameters and randomly chosen aircraft models, and verify the improvement that the simulated data generated by this framework brings to target recognition performance on this dataset.
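For the SBR-based contribution, the minimal sketch below traces rays through a triangle mesh, covering only the geometric half of the method (Sections 2.1.1 and 2.1.2): each ray is intersected with the mesh using the Möller–Trumbore test and reflected specularly up to a fixed bounce limit. Amplitude tracking and the far-field integration of Sections 2.1.3 and 2.1.4 are deliberately omitted, and all names and parameters are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: return hit distance t along ray orig + t*d, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                      # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(tvec, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def trace_ray(orig, d, triangles, max_bounce=3):
    """Bounce one ray through a list of (v0, v1, v2) triangles; return path length and bounces."""
    path, bounces = 0.0, 0
    for _ in range(max_bounce):
        hits = [(ray_triangle(orig, d, *tri), tri) for tri in triangles]
        hits = [(t, tri) for t, tri in hits if t is not None]
        if not hits:
            break
        t, (v0, v1, v2) = min(hits, key=lambda h: h[0])
        n = np.cross(v1 - v0, v2 - v0)
        n /= np.linalg.norm(n)
        hit = orig + t * d
        d = d - 2.0 * np.dot(d, n) * n       # specular bounce (sign of n does not matter)
        orig = hit + 1e-6 * d                # nudge off the surface to avoid self-hits
        path += t
        bounces += 1
    return path, bounces

# One ray fired straight down at a single triangle lying in the z = 0 plane.
tri = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(trace_ray(np.array([0.2, 0.2, 1.0]), np.array([0.0, 0.0, -1.0]), [tri]))  # -> (1.0, 1)
```

In a full SBR implementation, each bounce would also carry a field amplitude and phase, and the contributions along every ray path would be summed in a physical-optics far-field integral to obtain the RCS.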
2. Background
2.1. Shooting and Bouncing Rays
2.1.1. Ray Tracing
2.1.2. Amplitude Tracking
2.1.3. Far-Field Integration
2.1.4. Radar Cross Section (RCS)
2.2. Cycle GAN-Based Unpaired Image-to-Image Translation
3. Proposed Method
3.1. Framework of the SingleScene–SAR Simulator
3.2. SAR Image Simulation
3.2.1. Parameter Parsing Module
3.2.2. SBR Simulation Module
3.2.3. RCS Synthesis Module
3.2.4. Cycle GAN-Based Translation Model
3.2.5. Noise Model
3.3. Aircraft Recognition with Data Augmentation
3.4. Program Implementation
- Parameter parsing module;
- SBR simulation module;
- RCS synthesis module;
- Cycle GAN-based translation;
- Aircraft recognition network.
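As a reading aid, the placeholder driver below shows how the five modules could be chained into one pipeline. None of these function names, signatures, or parameter values come from the released implementation; every body is a stub, shown only to make the data flow between modules explicit.

```python
from dataclasses import dataclass, field

@dataclass
class SimulationParams:
    aircraft_models: list[str] = field(default_factory=lambda: ["Aircraft A"])
    incidence_angle_deg: float = 30.0
    range_resolution_m: float = 0.5
    azimuth_resolution_m: float = 0.5

def parse_parameters(path: str) -> SimulationParams:         # 3.2.1 parameter parsing
    return SimulationParams()                                 # placeholder

def run_sbr_simulation(model: str, p: SimulationParams):      # 3.2.2 SBR simulation
    return f"rcs({model}, {p.incidence_angle_deg} deg)"       # placeholder

def synthesize_rcs_image(rcs, p: SimulationParams):           # 3.2.3 RCS synthesis
    return f"rcs_image[{rcs}]"                                 # placeholder

def translate_with_cyclegan(img):                              # 3.2.4 cycle GAN translation
    return f"translated[{img}]"                                 # placeholder

def add_estimated_noise(img, p: SimulationParams):              # 3.2.5 noise model
    return f"noisy[{img}]"                                       # placeholder

def run_pipeline(config_path: str) -> list:
    p = parse_parameters(config_path)
    chips = []
    for model in p.aircraft_models:
        rcs = run_sbr_simulation(model, p)
        img = synthesize_rcs_image(rcs, p)
        img = translate_with_cyclegan(img)
        chips.append(add_estimated_noise(img, p))
    return chips        # the chips are then fed to the recognition network (Section 3.3)

print(run_pipeline("simulation_config.xml"))
```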
4. Experimental Results
4.1. Data Description
4.2. Implementation Details
- Original SAR Images D/T: The training dataset for these experiments comprises unprocessed real SAR images. Specifically, the aircraft samples are the 30 SAR image chips corresponding to Table 3. The clutter samples are drawn from the same SAR images, with 60 or 90 samples identified by the suffixes “D” (double) and “T” (triple), respectively.
- Enhanced SAR Images D/T: In these experiments, we enlarge the training dataset by shifting and cropping the real data D/T to improve the network’s generalization capability (a minimal shift-and-crop sketch follows this list). The number of aircraft samples thus increases from 30 to 510, and the clutter samples expand from 60 or 90 to 1020 or 1530, again distinguished by the suffixes “D” (double) and “T” (triple).
- Data Augmentation D/T: These experiments add simulated SAR images from the RCS2SAR-xx dataset to the real data D/T. The aircraft sample set contains the 30 real SAR images plus 480 simulated SAR images, while the clutter samples remain the same as in the enhanced real data D/T.
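The 17-fold expansion above (30 to 510 aircraft chips, and 60/90 to 1020/1530 clutter chips) is consistent with 17 shifted crops per original image. The sketch below shows one plausible shift-and-crop routine; the crop size, shift range, and random grid are our own illustrative assumptions, not the exact augmentation used in the experiments.

```python
import numpy as np

def shifted_crops(chip, crop=128, n=17, max_shift=16, seed=0):
    """Return n crops of size crop x crop, each offset by a random shift from the center."""
    rng = np.random.default_rng(seed)
    h, w = chip.shape
    cy, cx = (h - crop) // 2, (w - crop) // 2
    out = []
    for _ in range(n):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        y = int(np.clip(cy + dy, 0, h - crop))
        x = int(np.clip(cx + dx, 0, w - crop))
        out.append(chip[y:y + crop, x:x + crop].copy())
    return out

crops = shifted_crops(np.zeros((160, 160)), crop=128)
print(len(crops), crops[0].shape)   # 17 (128, 128)
```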
4.3. Evaluation Metrics
4.3.1. Similarity Evaluation Metrics Between Image Domains
4.3.2. Usability Evaluation Metrics of Data Augmentation
4.4. Performance of SingleScene–SAR Simulator
4.5. Ablation Experiments
- SBR only: These experiments use only the unprocessed simulated RCS images as additional training data. The real SAR samples in the training set remain the same as in the baseline experiment (illustrative configuration flags for the three settings follow this list).
- SBR and noise estimator only: These experiments add noise to the simulated RCS images using the noise estimator module based on the SBR-only D/T but do not perform image translation.
- SBR and cycle GAN only: These experiments perform image translation on the simulated RCS images based on the SBR-only D/T but do not add noise.
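For clarity, the three ablation settings can be summarized as configuration flags that toggle the noise estimator and the cycle GAN translation on top of the SBR output; the flag names below are ours, not taken from the released code.

```python
ABLATION_SETTINGS = {
    "SBR only":              {"add_noise": False, "cyclegan_translation": False},
    "SBR + noise estimator": {"add_noise": True,  "cyclegan_translation": False},
    "SBR + cycle GAN":       {"add_noise": False, "cyclegan_translation": True},
}

for name, flags in ABLATION_SETTINGS.items():
    print(f"{name}: {flags}")
```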
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Cha, M.; Majumdar, A.; Kung, H.T.; Barber, J. Improving SAR automatic target recognition using simulated images under deep residual refinements. In Proceedings of the ICASSP, Calgary, AB, Canada, 15–20 April 2018; pp. 2606–2610.
- Balz, T.; Hammer, H.; Auer, S. Potentials and limitations of SAR image simulators—A comparative study of three simulation approaches. ISPRS J. Photogramm. Remote Sens. 2015, 101, 102–109.
- Nasr, J.M.; Vidal-Madjar, D. Image simulation of geometric targets for spaceborne synthetic aperture radar. IEEE Trans. Geosci. Remote Sens. 1991, 29, 986–996.
- Xu, F.; Jin, Y.-Q. Imaging Simulation of Polarimetric SAR for a Comprehensive Terrain Scene Using the Mapping and Projection Algorithm. IEEE Trans. Geosci. Remote Sens. 2006, 44, 3219–3234.
- Fan, W.; Zhang, M.; Li, J. Numerical simulation of highly squinted spotlight SAR images from complex targets using an amendatory frequency scaling algorithm. Int. J. Remote Sens. 2018, 39, 3306–3319.
- He, W.; Yokoya, N. Multi-Temporal Sentinel-1 and -2 Data Fusion for Optical Image Simulation. ISPRS Int. J. Geo-Inf. 2018, 7, 389.
- Auer, S.; Hinz, S.; Bamler, R. Ray-Tracing Simulation Techniques for Understanding High-Resolution SAR Images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1445–1456.
- Hammer, H.; Schulz, K. Coherent simulation of SAR images. Image Signal Process. Remote Sens. XV 2009, 7477, 406–414.
- Amanatides, J.; Woo, A. A fast voxel traversal algorithm for ray tracing. Eurographics 1987, 87, 3–10.
- Balz, T.; Stilla, U. Hybrid GPU-Based Single- and Double-Bounce SAR Simulation. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3519–3529.
- Bao, X.; Pan, Z.; Liu, L.; Lei, B. SAR Image Simulation by Generative Adversarial Networks. In Proceedings of the IGARSS, Yokohama, Japan, 28 July–2 August 2019; pp. 9995–9998.
- Cao, C.; Cao, Z.; Cui, Z. LDGAN: A Synthetic Aperture Radar Image Generation Method for Automatic Target Recognition. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3495–3508.
- Zhang, M.; Cui, Z.; Wang, X.; Cao, Z. Data Augmentation Method of SAR Image Dataset. In Proceedings of the IGARSS, Valencia, Spain, 22–27 July 2018; pp. 5292–5295.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the NIPS, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
- Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; Ganguli, S. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; Volume 37, pp. 2256–2265.
- Qosja, D.; Wagner, S.; O’Hagan, D. SAR Image Synthesis with Diffusion Models. In Proceedings of the 2024 IEEE Radar Conference, Denver, CO, USA, 6–10 May 2024; pp. 1–6.
- Ling, H.; Chou, R.-C.; Lee, S.-W. Shooting and bouncing rays: Calculating the RCS of an arbitrarily shaped cavity. IEEE Trans. Antennas Propag. 1989, 37, 194–205.
- Xia, W.; Li, H.; Wang, F.; Li, H.; Zhang, J. SAR image simulation for urban structures based on SBR. In Proceedings of the IEEE Radar Conference, Cincinnati, OH, USA, 19–23 May 2014; pp. 0792–0795.
- Dong, C.-L.; Meng, X.; Guo, L.-X. Research on SAR Imaging Simulation Based on Time-Domain Shooting and Bouncing Ray Algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 1519–1530.
- Chang, Y.-L.; Chiang, C.-Y.; Chen, K. SAR image simulation with application to target recognition. Progr. Electromagn. Res. 2011, 119, 35–57.
- Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the ICCV, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
- Chang, Y.; Liu, C.; Cao, L.; Yu, W.; Chen, H.; Cui, T. Generating SAR Images Based on Neural Network. In Proceedings of the ICCEM, Shanghai, China, 20–22 March 2019; pp. 1–3.
- Liu, L.; Pan, Z.; Qiu, X.; Peng, L. SAR Target Classification with Cycle GAN Transferred Simulated Samples. In Proceedings of the IGARSS, Valencia, Spain, 22–27 July 2018; pp. 4411–4414.
- Rosales, R.; Achan, K.; Frey, B. Unsupervised image translation. In Proceedings of the ICCV, Nice, France, 13–16 October 2003; pp. 472–478.
- Liu, M.-Y.; Breuel, T.; Kautz, J. Unsupervised image-to-image translation networks. In Proceedings of the NIPS, Long Beach, CA, USA, 4–9 December 2017; pp. 700–708.
- Taigman, Y.; Polyak, A.; Wolf, L. Unsupervised Cross-Domain Image Generation. arXiv 2016, arXiv:1611.02200.
- Wu, K.; Jin, G.; Xiong, X.; Zhang, H.; Wang, L. SAR Image Simulation Based on Effective View and Ray Tracing. Remote Sens. 2022, 14, 5754.
- Xu, Z. Wavelength-Resolution SAR Speckle Model. IEEE Geosci. Remote Sens. Lett. 2022, 9, 4504005.
- Voigt, G.H.M.; Alves, D.I.; Muller, C.; Machado, R.; Ramos, L.P.; Vu, V.T.; Pettersson, M.I. A Statistical Analysis for Intensity Wavelength-Resolution SAR Difference Images. Remote Sens. 2023, 15, 2401.
- Chen, S.; Wang, H.; Xu, F.; Jin, Y.-Q. Target Classification Using the Deep Convolutional Networks for SAR Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817.
- RaytrAMP. Available online: https://github.com/RedBlight/RaytrAMP (accessed on 30 March 2024).
- Umbra Synthetic Aperture Radar (SAR) Open Data. Available online: https://registry.opendata.aws/umbra-open-data (accessed on 30 March 2024).
- Keydel, E.R.; Lee, S.W.; Moore, J.T. MSTAR extended operating conditions: A tutorial. In Proceedings of the 3rd SPIE Conference Algorithms SAR Imagery, Orlando, FL, USA, 10 June 1996; SPIE: Bellingham, WA, USA, 1996; Volume 2757, pp. 228–242.
- Niu, S.; Qiu, X.; Lei, B.; Ding, C.; Fu, K. Parameter Extraction Based on Deep Neural Network for SAR Target Simulation. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4901–4914.
- Niu, S.; Qiu, X.; Lei, B.; Fu, K. A SAR Target Image Simulation Method With DNN Embedded to Calculate Electromagnetic Reflection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2593–2610.
- Sampat, M.P.; Wang, Z.; Gupta, S.; Bovik, A.C.; Markey, M.K. Complex Wavelet Structural Similarity: A New Image Similarity Index. IEEE Trans. Image Process. 2009, 18, 2385–2401.
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595.
- Chen, J.; Wang, H.; Lu, H. Aircraft Detection in SAR Images via Point Features. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4004205.
- Zhao, Y.; Zhao, L.; Liu, Z.; Hu, D.; Kuang, G.; Liu, L. Attentional Feature Refinement and Alignment Network for Aircraft Detection in SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5220616.
- Chen, Y.; Cong, Y.; Zhang, L. Deformable Scattering Feature Correlation Network for Aircraft Detection in SAR Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 4007205.
Name | Length (Meter) | Width (Meter) | Number of Triangular Faces |
---|---|---|---|
Aircraft A | 39.5 | 35.8 | 152,633 |
Aircraft B | 76.3 | 64.7 | 5706 |
Aircraft C | 69.8 | 66.3 | 79,517 |
Aircraft D | 63.0 | 60.0 | 9250 |
Aircraft E | 37.6 | 33.7 | 371,197 |
Aircraft F | 63.6 | 62.7 | 196,994 |
Aircraft G | 66.8 | 66.1 | 72,200 |
Aircraft H | 77.2 | 79.8 | 545,862 |
Symbol | Parameter | Description
---|---|---
 | VV | Polarization mode of the antenna
 | 1 | Antenna left-right viewing parameter (1 for left view and −1 for right view)
 | 2.49 | Scanning angle of the antenna along the azimuth direction (degree)
 | 0 | Squint angle (rad)
 | 0.5 | Range resolution (meter)
 | 0.5 | Azimuth resolution (meter)
 | 9.6 × | Central operating frequency of the SAR platform
 | 5.708 × | Pulse repetition frequency
 | 10 | Line density of the ray tube per unit wavelength
Name | Incidence Angle (Degree) | SAR Image Serial Number |
---|---|---|
RCS2SAR-30 | 30° | 2023-02-01-03-05-24 |
RCS2SAR-40 | 40° | 2023-03-05-14-25-30 |
RCS2SAR-50 | 50° | 2023-03-25-14-19-09 |
Metric | ||||
---|---|---|---|---|
Real SAR vs. Real SAR | 0.000 | 0.000 | 0.000 | 0.000 |
Real SAR vs. RCS Images | 0.528 | 0.533 | 2.245 | 0.845 |
Real SAR vs. RCS2SAR-30 | 0.049 | 0.014 | 0.200 | 0.479 |
Real SAR vs. RCS2SAR-40 | 0.038 | 0.014 | 0.175 | 0.487 |
Real SAR vs. RCS2SAR-50 | 0.046 | 0.018 | 0.257 | 0.463 |
Dataset | Training Data | ConvNet Precision | ConvNet Recall | ConvNet F1 | VGGNet Precision | VGGNet Recall | VGGNet F1
---|---|---|---|---|---|---|---
RCS2SAR-30 | Original SAR Images D | 0.9944 | 0.5551 | 0.7124 | 0.9971 | 0.5325 | 0.6942
 | Enhanced SAR Images D | 0.9908 | 0.6294 | 0.7696 | 0.9933 | 0.5544 | 0.7116
 | Data Augmentation D | 0.9813 | 0.7538 | 0.8520 | 0.9835 | 0.7219 | 0.8326
 | Difference | −0.0095 | 0.1244 | 0.0824 | −0.0098 | 0.1675 | 0.1210
 | Original SAR Images T | 0.9945 | 0.5657 | 0.7210 | 0.9971 | 0.5163 | 0.6802
 | Enhanced SAR Images T | 0.9918 | 0.5778 | 0.7302 | 0.9922 | 0.5491 | 0.7068
 | Data Augmentation T | 0.9938 | 0.6900 | 0.8136 | 0.9897 | 0.6712 | 0.7996
 | Difference | 0.0020 | 0.1122 | 0.0834 | −0.0025 | 0.1221 | 0.0928
RCS2SAR-40 | Original SAR Images D | 0.9846 | 0.6899 | 0.8110 | 0.9820 | 0.6249 | 0.7638
 | Enhanced SAR Images D | 0.9757 | 0.8030 | 0.8808 | 0.9625 | 0.7305 | 0.8306
 | Data Augmentation D | 0.9730 | 0.9014 | 0.9358 | 0.9563 | 0.7933 | 0.8672
 | Difference | −0.0027 | 0.0984 | 0.0550 | −0.0061 | 0.0627 | 0.0366
 | Original SAR Images T | 0.9962 | 0.5751 | 0.7290 | 0.9942 | 0.5267 | 0.6886
 | Enhanced SAR Images T | 0.9780 | 0.7536 | 0.8512 | 0.9860 | 0.6901 | 0.8118
 | Data Augmentation T | 0.9803 | 0.8895 | 0.9326 | 0.9783 | 0.7791 | 0.8674
 | Difference | 0.0023 | 0.1359 | 0.0814 | −0.0076 | 0.0889 | 0.0556
RCS2SAR-50 | Original SAR Images D | 0.9812 | 0.7607 | 0.8568 | 0.9700 | 0.6839 | 0.8022
 | Enhanced SAR Images D | 0.9528 | 0.7998 | 0.8694 | 0.9489 | 0.6406 | 0.7648
 | Data Augmentation D | 0.9596 | 0.9008 | 0.9292 | 0.9729 | 0.7751 | 0.8628
 | Difference | 0.0068 | 0.1011 | 0.0598 | 0.0240 | 0.1345 | 0.0980
 | Original SAR Images T | 0.9830 | 0.6840 | 0.8066 | 0.9858 | 0.6019 | 0.7474
 | Enhanced SAR Images T | 0.9647 | 0.7455 | 0.8410 | 0.9650 | 0.6431 | 0.7718
 | Data Augmentation T | 0.9834 | 0.8224 | 0.8954 | 0.9839 | 0.7181 | 0.8302
 | Difference | 0.0187 | 0.0769 | 0.0544 | 0.0189 | 0.0750 | 0.0584
Dataset | Training Data | ConvNet Precision (Difference) | ConvNet Recall (Difference) | ConvNet F1 (Difference) | VGGNet Precision (Difference) | VGGNet Recall (Difference) | VGGNet F1 (Difference)
---|---|---|---|---|---|---|---
Real & Simulated Group D/T with RCS2SAR-30 Dataset | SBR D | 0.0117 | −0.2196 | −0.1622 | 0.0107 | −0.2737 | −0.2148
 | SBR + Noise Estimator D | 0.0158 | −0.2402 | −0.1742 | 0.0123 | −0.2393 | −0.1826
 | SBR + Image Translation D | −0.0464 | −0.1782 | −0.1396 | −0.0421 | −0.1405 | −0.1140
 | SBR T | 0.0045 | −0.1919 | −0.1504 | 0.0090 | −0.2395 | −0.1968
 | SBR + Noise Estimator T | 0.0056 | −0.1918 | −0.1490 | 0.0097 | −0.1972 | −0.1570
 | SBR + Image Translation T | −0.0484 | −0.1471 | −0.1248 | −0.0478 | −0.1137 | −0.0992
Real & Simulated Group D/T with RCS2SAR-40 Dataset | SBR D | 0.0156 | −0.3299 | −0.2118 | 0.0347 | −0.2954 | −0.2044
 | SBR + Noise Estimator D | 0.0072 | −0.2927 | −0.1848 | 0.0321 | −0.2724 | −0.1850
 | SBR + Image Translation D | −0.0385 | −0.1455 | −0.1004 | −0.0323 | −0.1507 | −0.1096
 | SBR T | 0.0095 | −0.3056 | −0.1982 | 0.0133 | −0.2794 | −0.2030
 | SBR + Noise Estimator T | 0.0117 | −0.2961 | −0.1900 | 0.0102 | −0.2483 | −0.1768
 | SBR + Image Translation T | −0.0333 | −0.1946 | −0.1318 | −0.0134 | −0.1828 | −0.1304
Real & Simulated Group D/T with RCS2SAR-50 Dataset | SBR D | 0.0161 | −0.0970 | −0.0480 | 0.0091 | −0.0874 | −0.0540
 | SBR + Noise Estimator D | 0.0148 | −0.0932 | −0.0460 | 0.0080 | −0.0379 | −0.0212
 | SBR + Image Translation D | −0.0331 | −0.1202 | −0.0824 | −0.0165 | −0.0503 | −0.0382
 | SBR T | −0.0049 | −0.1476 | −0.0970 | 0.0057 | −0.1480 | −0.1068
 | SBR + Noise Estimator T | −0.0074 | −0.1229 | −0.0806 | 0.0048 | −0.0933 | −0.0646
 | SBR + Image Translation T | −0.0328 | −0.0844 | −0.0646 | −0.0254 | −0.0675 | −0.0552