Article

SAR Image Generation Method Using DH-GAN for Automatic Target Recognition

1 Department of Aerospace Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
2 Hanwha Systems, Yongin-si 17121, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2024, 24(2), 670; https://doi.org/10.3390/s24020670
Submission received: 23 October 2023 / Revised: 4 January 2024 / Accepted: 16 January 2024 / Published: 20 January 2024
(This article belongs to the Special Issue Target Detection and Classification Based on SAR)

Abstract

In recent years, target recognition technology for synthetic aperture radar (SAR) images has witnessed significant advancements, particularly with the development of convolutional neural networks (CNNs). However, acquiring SAR images requires significant resources, in terms of both time and cost. Moreover, due to the inherent properties of radar sensors, SAR images are often marred by speckle noise, a form of high-frequency noise. To address this issue, we introduce a Generative Adversarial Network (GAN) with a dual discriminator and a high-frequency pass filter, named DH-GAN, specifically designed for generating simulated images. DH-GAN produces images that emulate the high-frequency characteristics of real SAR images. Through power spectral density (PSD) analysis and experiments, we demonstrate the validity of the DH-GAN approach. The experimental results show that not only do the SAR images generated using DH-GAN closely resemble the high-frequency components of real SAR images, but the proficiency of CNNs in target recognition, when trained with these simulated images, is also notably enhanced.

1. Introduction

In recent years, a dramatic surge in computing power has enabled the training of intricate neural network architectures, spurring advancements in machine learning. Among these deep learning models, convolutional neural networks (CNNs) are particularly noteworthy, as they can learn features directly from data, which enables them to identify and extract intricate patterns or objects from images with remarkable accuracy [1]. CNNs have shown remarkable potential, surpassing traditional image processing and analysis methods.
CNNs’ feature extraction capabilities have garnered considerable attention in target recognition for SAR images. While SAR images hold substantial commercial and military value, traditional target recognition methods face challenges due to the intrinsic noise and intricate patterns characteristic of these images. Along with research utilizing classical techniques such as the support vector machine (SVM) and clustering for target recognition [2,3], the CNN-based approach has notably enhanced target recognition performance by addressing the inherent complexities of SAR images [4,5,6,7,8,9,10].
Nevertheless, there are several challenges in SAR automatic target recognition (SAR-ATR) using CNNs. A primary issue is that acquiring SAR images requires considerable resources and time, making it difficult to secure enough samples for CNN training. A lack of samples can lead CNNs to overfit the training data, resulting in low recognition rates for new data not included in the training dataset.
Various studies have been conducted to address the scarcity of SAR images and improve SAR-ATR performance. One approach is data augmentation. This technique expands the dataset by applying simple transformations to the original data, such as translation, rotation, and brightness adjustment [11,12,13]. While data augmentation can easily increase the sample size and improve the network’s generalization ability, its impact on performance improvement is limited. Another strategy involves generating simulated data for specific targets using 3D modeling [14]. This method can produce a range of SAR images that match user-defined characteristics. However, it necessitates a deep understanding of both SAR imaging and 3D modeling techniques. A further alternative involves leveraging data from other domains. For instance, transfer learning with infrared (IR) data, which are more accessible than SAR images, has been proposed as a viable strategy [15].
Recently, researchers have been employing various GAN models to generate simulated SAR images that resemble real SAR images. GANs learn the distribution of the real (training) dataset through competition between two neural networks: a discriminator and a generator. Due to these characteristics, GANs are gaining attention as an innovative solution to the sample scarcity of SAR images. The conditional GAN (cGAN), enhanced with conditional variables, was proposed to generate images of specific targets [16]. The deep convolutional GAN (DCGAN) based on semi-supervised learning was developed to enhance the quality of generated SAR images [17]. Cui et al. introduced a study employing the Wasserstein GAN (WGAN) for generating SAR images at specific azimuth angles [18]. Furthermore, a method employing CycleGAN has been suggested for the style transfer of images from the simulated to the real domain [19].
SAR images acquired using radar sensors inevitably contain speckle noise, a type of high-frequency noise. Despite this, many studies that use GANs to create simulated images overlook this noise. Consequently, images produced using GANs may look real when viewed in the spatial domain but are easily distinguishable in the frequency domain. This issue arises because GAN models primarily utilize low-frequency components to distinguish between real and simulated data [20,21,22]. Although this characteristic can make GANs robust to noise in general, speckle noise is a defining feature of SAR images, so high-frequency components must be properly considered in GAN models to achieve results similar to real SAR images.
This paper introduces a GAN model with a dual discriminator designed to enhance the high-frequency component of generated SAR images. The proposed Dual Discriminator with High-frequency Pass Filter GAN (DH-GAN) is built upon the foundation of the conditional DCGAN. The first discriminator differentiates between the simulated images produced by the generator and the real SAR images, similar to the discriminator in conventional GANs. The newly added second discriminator utilizes the high-frequency components of the image to distinguish between real SAR images and simulated images. The generator is trained to deceive both discriminators simultaneously, generating simulated images that emulate the high-frequency traits of real SAR images more accurately.
The contributions of this paper are as follows: First, the proposed method can generate SAR images with high-frequency components similar to those of real SAR images. This is significant in that it accounts for the speckle noise characteristics of real SAR images, which existing GAN models have overlooked. Second, it is possible to generate high-quality labeled data by designing a GAN model based on the structures of cGAN and DCGAN. This is an important advantage, as CNN training datasets require accurate labels. Finally, the simulated images created with the DH-GAN model can be used to efficiently mitigate the overfitting issue in the ATR CNN. Notably, the high-frequency components in the simulated SAR images closely resemble those in authentic SAR images, significantly enhancing the generalization capability of CNNs and the recognition accuracy for SAR-ATR. The proposed DH-GAN is validated through various experiments.

2. Methods

2.1. Theoretical Background

The GAN framework involves a competitive process between a discriminator (D) and a generator (G) to simulate data. Given a latent vector $z$ drawn from the probability distribution $p_z$ in the latent space, the generator produces simulated data. Simultaneously, the discriminator aims to differentiate between the simulated data and real data sampled from the data distribution $p_{data}$. The neural network parameters of the generator and discriminator are iteratively updated using various loss functions. The basic architecture of a GAN is depicted in Figure 1.
Here, $x$ denotes a real SAR image sampled from the real data and $\bar{x} = G(z)$ represents the simulated image produced by the generator.
The original GAN loss function formulates training as a minimax game between the two networks [23]; in practice, however, the generator’s loss can saturate. If the generator’s learning does not keep pace with that of the discriminator, the latter tends to dominate. This can lead to the premature termination of training, preventing the generator from learning effectively. To address this challenge, the Non-saturating GAN (NSGAN) was proposed, which adjusts the generator’s loss function as follows:
$$L_D^{NSGAN} = -\mathbb{E}_{x \sim p_{data}}\left[\log D(x)\right] - \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right] \tag{1a}$$

$$L_G^{NSGAN} = -\mathbb{E}_{z \sim p_z}\left[\log D(G(z))\right] \tag{1b}$$
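For illustration, the non-saturating losses in Equations (1a) and (1b) can be expressed as a short PyTorch sketch; the discriminator D and generator G are placeholder modules (assumed to output raw logits), not the exact implementations used in this work.

```python
import torch
import torch.nn.functional as F

def nsgan_discriminator_loss(D, G, x_real, z):
    """Equation (1a): -E[log D(x)] - E[log(1 - D(G(z)))]."""
    real_logits = D(x_real)
    fake_logits = D(G(z).detach())  # detach so the D update does not flow into G
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake

def nsgan_generator_loss(D, G, z):
    """Equation (1b): -E[log D(G(z))], the non-saturating generator loss."""
    fake_logits = D(G(z))
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```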
A limitation of using NSGAN for SAR-ATR is the challenge in discerning the specific target represented by the simulated images. This is attributed to the inherent complexity of SAR images, which typically require specialized knowledge for visual interpretation. The conditional GAN (cGAN) offers a solution to this problem. cGAN extends the basic GAN architecture by incorporating condition variables [24]. Its primary objective is to produce data in alignment with given conditions. In this context, if the condition is set based on the target label, the cGAN generates an image corresponding to that label. Consequently, the user can obtain a SAR image of the desired target.

2.2. DH-GAN Model Framework

In this study, we propose a new model called DH-GAN. This model aims to generate simulated SAR images reflecting the high-frequency components of real SAR images. For this purpose, we augment the basic NSGAN with an additional discriminator network. The first discriminator distinguishes between real and simulated data. The second discriminator also distinguishes between real and simulated data, but both of its inputs are first emphasized in their high-frequency components via a high-frequency pass filter (HPF). The generator is trained to fool both discriminators, thereby ensuring that it generates images reflecting the inherent high-frequency characteristics of real images. Note that the inputs to all neural networks contain conditional parameters, similar to cGAN, to ensure that the generator can produce the desired target data. The architecture of the proposed DH-GAN is illustrated in Figure 2.
Here, $y$ represents the label of the target, serving as the condition, and $D_2$ denotes the newly introduced discriminator. The real SAR images and simulated images, filtered using the HPF, are used as inputs to $D_2$. The loss functions for the two discriminators and the generator are defined as follows:
$$L_{D_1}^{DH\text{-}GAN} = -\mathbb{E}_{x \sim p_{data}}\left[\log D_1(x \mid y)\right] - \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D_1(G(z \mid y))\right)\right] \tag{2a}$$

$$L_{D_2}^{DH\text{-}GAN} = -\mathbb{E}_{x \sim p_{data}}\left[\log D_2(f(x) \mid y)\right] - \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D_2(f(G(z \mid y)))\right)\right] \tag{2b}$$

$$L_G^{DH\text{-}GAN} = -\frac{1}{2}\mathbb{E}_{z \sim p_z}\left[\log D_1(G(z \mid y))\right] - \frac{1}{2}\mathbb{E}_{z \sim p_z}\left[\log D_2(f(G(z \mid y)))\right] \tag{2c}$$
For a given image $A$, the function $f(A)$ is defined as the convolution of $A$ with $K_{HPF}$, a user-defined high-pass filter kernel.
The first discriminator is trained with the loss function given by Equation (2a), and the second discriminator with the loss function in Equation (2b). To train the generator, we average its adversarial losses against the two discriminators, as shown in Equation (2c).
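The training step implied by Equations (2a)–(2c) can be sketched as follows; D1, D2, G, and the high-pass filtering function hpf are placeholders, and the label y is assumed to be injected inside each network call (e.g., by concatenation), which may differ from the exact conditioning used in the paper.

```python
import torch
import torch.nn.functional as F

def _bce(logits, real):
    target = torch.ones_like(logits) if real else torch.zeros_like(logits)
    return F.binary_cross_entropy_with_logits(logits, target)

def dhgan_losses(D1, D2, G, hpf, x_real, z, y):
    x_fake = G(z, y)
    # Equation (2a): spatial-domain discriminator
    loss_d1 = _bce(D1(x_real, y), True) + _bce(D1(x_fake.detach(), y), False)
    # Equation (2b): high-frequency discriminator sees HPF-filtered images
    loss_d2 = _bce(D2(hpf(x_real), y), True) + _bce(D2(hpf(x_fake.detach()), y), False)
    # Equation (2c): the generator tries to fool both discriminators equally
    loss_g = 0.5 * _bce(D1(x_fake, y), True) + 0.5 * _bce(D2(hpf(x_fake), y), True)
    return loss_d1, loss_d2, loss_g
```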
In this study, the neural network structure of DH-GAN is founded on the DCGAN, which comprises convolutional layers, batch normalization, and ReLU activation functions. Figure 3 shows the structure of the generator’s neural network. The generator takes as input a latent vector $z$ of size 100 and produces a simulated image of size 128 × 128. Here, the latent vector is drawn from a normal distribution. Figure 4 shows the structure of the discriminator’s neural network. The discriminator takes a 128 × 128 SAR image as input and outputs the probability that the input image is a real SAR image. Thus, the input can be either a real SAR image or a simulated image from the generator. Note that both discriminators in DH-GAN have the same structure.
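A minimal DCGAN-style generator consistent with this description is sketched below in PyTorch; the channel widths are illustrative assumptions, and the conditional label input is omitted for brevity.

```python
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector of size 100 to a 128 x 128 single-channel image."""
    def __init__(self, z_dim=100, base=64):
        super().__init__()
        def up(c_in, c_out):  # upsampling block: transposed conv + BN + ReLU
            return nn.Sequential(
                nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True))
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 16, 4, stride=1, padding=0),  # 1x1 -> 4x4
            nn.BatchNorm2d(base * 16),
            nn.ReLU(inplace=True),
            up(base * 16, base * 8),   # 8x8
            up(base * 8, base * 4),    # 16x16
            up(base * 4, base * 2),    # 32x32
            up(base * 2, base),        # 64x64
            nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1),  # 128x128
            nn.Tanh())

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))
```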

2.3. Power Spectrum Density Analysis

There exist several methods to assess simulated data produced using GANs [25,26,27,28]. Given that this paper aims to generate simulated data reflecting the high-frequency components of the original dataset, we employ power spectral density (PSD) analysis. PSD leverages the Fourier transform to depict the distribution of power across the frequency domain of an image. The image’s Fourier transform is expressed as follows:
$$F(u, v) = \frac{1}{WH}\sum_{x=0}^{W-1}\sum_{y=0}^{H-1} f(x, y)\, e^{-j 2\pi \left(\frac{ux}{W} + \frac{vy}{H}\right)} \tag{3}$$
where u and v are coordinates of spatial frequencies, and W and H are the width and height of the image. The PSD of an image is calculated as follows:
$$\mathrm{PSD}(u, v) = \left| F(u, v) \right|^2 \tag{4}$$
For the PSD analysis of images, the PSD is expressed as a function of the wavenumber. The wavenumber is defined as a distance in the frequency domain; thus, it corresponds to the Euclidean distance from the center of the image to each pixel, $\omega_k = \sqrt{u^2 + v^2}$. Due to the circular symmetry of the PSD, calculating the average PSD with respect to the wavenumber requires averaging all the PSD values equidistant from the center of the image:
$$\mathrm{PSD}(\omega_k) = \frac{1}{N_{\omega_k}} \sum_{\sqrt{u^2 + v^2} = \omega_k} \mathrm{PSD}(u, v) \tag{5}$$

where $N_{\omega_k}$ is the number of frequency bins at wavenumber $\omega_k$.
A high PSD value means the image has substantial energy at that particular frequency. Specifically, elevated PSD values at higher wavenumbers indicate that the image contains complex patterns or high-frequency components, such as edges or noise. Conversely, elevated PSD values at lower wavenumbers point to smoother regions or low-frequency components within the image.
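A radially averaged PSD in the sense of Equations (3)–(5) can be computed with NumPy as follows; this is a sketch for illustration, not the exact analysis code used in the experiments.

```python
import numpy as np

def radial_psd(image):
    """Return (wavenumbers, mean PSD per wavenumber) for a 2-D image."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image) / (w * h))  # Equation (3), centred
    psd = np.abs(spectrum) ** 2                               # Equation (4)
    # Integer Euclidean distance of each frequency bin from the spectrum centre
    v, u = np.indices(psd.shape)
    r = np.sqrt((u - w // 2) ** 2 + (v - h // 2) ** 2).astype(int)
    # Average all PSD values sharing the same wavenumber (Equation (5))
    sums = np.bincount(r.ravel(), weights=psd.ravel())
    counts = np.bincount(r.ravel())
    return np.arange(len(sums)), sums / np.maximum(counts, 1)
```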

2.4. CNN Model Framework

We employed a straightforward, shallow CNN model for SAR-ATR [4]. Because this model contains no structure or procedure specific to SAR images, the impact of the proposed DH-GAN can be evaluated distinctly. The CNN ingests SAR images directly as input. Features extracted by the convolutional layers are subsequently passed to a softmax activation function for classification. The CNN model adopted in this research is summarized in Table 1.
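For reference, a PyTorch sketch of the shallow ATR CNN summarized in Table 1 is given below; the layer sizes follow the table, while the softmax is applied implicitly through the cross-entropy loss during training.

```python
import torch.nn as nn

class AtrCnn(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 18, kernel_size=9), nn.ReLU(),    # 128 -> 120
            nn.MaxPool2d(kernel_size=6),                    # 120 -> 20
            nn.Conv2d(18, 36, kernel_size=5), nn.ReLU(),    # 20 -> 16
            nn.MaxPool2d(kernel_size=4),                    # 16 -> 4
            nn.Conv2d(36, 120, kernel_size=4), nn.ReLU())   # 4 -> 1
        self.classifier = nn.Linear(120, num_classes)       # softmax applied in the loss

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```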

3. Experimental Study

3.1. MSTAR Dataset

We trained the DH-GAN and CNN models using the MSTAR dataset, released by DARPA (Defense Advanced Research Projects Agency) for benchmarking ATR algorithms. The MSTAR dataset comprises SAR images representing 10 different ground targets. For each target, SAR images have been acquired at various depression and azimuth angles. Table 2 lists the number of SAR images of the 10 targets for the two depression angles of 15 deg and 17 deg.
The experiment was conducted in two parts. The first part involved training a GAN model using the real dataset and then using the trained generator to produce simulated images of the 10 ground targets. The second part consisted of training the CNN and evaluating its recognition accuracy. In this section, we compare the results of training the CNN exclusively with the MSTAR dataset against those obtained when the training included the simulated dataset. These experimental procedures were also conducted for NSGAN and the least squares GAN (LSGAN) for a comparative analysis with the DH-GAN results. Here, the real dataset refers to the MSTAR dataset, and the simulated dataset refers to the dataset generated by the GAN models.

3.2. SAR Image Generation

For CNN training, we utilized SAR images with a depression angle of 17 degrees. Meanwhile, to assess the recognition rate of the trained CNN, images with a depression angle of 15 degrees were employed. Consequently, to ensure compatibility and consistency, the DH-GAN was also trained using images with a depression angle of 17 degrees, enabling it to generate SAR images at the same depression angle.
The approximately 2500 original images with a depression angle of 17 degrees are insufficient for training the DH-GAN. To address this sample scarcity, we employed three data augmentation techniques. By flipping, rotating, and brightening the original SAR images, we generated 3000 images per target, which, together with the original images, yielded around 32,500 images for training the DH-GAN.
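An augmentation pipeline of this kind can be sketched with torchvision as follows; the flip probability, rotation range, and brightness factor are illustrative assumptions, not the exact parameters used to build the training set.

```python
from torchvision import transforms

# Flip, rotate, and adjust the brightness of the original SAR chips (PIL images)
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])
```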
Given that the generator employs the tanh activation function, the input SAR images were scaled to fall within the [−1, 1] range. This adjustment enhanced the training speed and stability by aligning the input and output ranges. Both the discriminators and the generator utilized the Adam optimizer, with optimizer parameters $\beta_1$ and $\beta_2$ set to 0.5 and 0.999, respectively. The learning rate was configured at 0.00002, the batch size was 16, and the training was performed for 10 epochs.
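These settings correspond to a preprocessing and optimizer configuration along the following lines (a sketch; G, D1, and D2 denote the DH-GAN networks passed in by the caller):

```python
import torch

def scale_to_tanh_range(img):
    """Scale an image from [0, 1] to [-1, 1] to match the generator's tanh output."""
    return img * 2.0 - 1.0

def make_optimizers(G, D1, D2, lr=2e-5, betas=(0.5, 0.999)):
    """Adam optimizers with beta1 = 0.5, beta2 = 0.999 for all three networks."""
    return (torch.optim.Adam(G.parameters(), lr=lr, betas=betas),
            torch.optim.Adam(D1.parameters(), lr=lr, betas=betas),
            torch.optim.Adam(D2.parameters(), lr=lr, betas=betas))
```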
The input for the second discriminator requires SAR images with emphasized high-frequency components. To achieve this, we introduced a preprocessing step that accentuates these high-frequency elements in the SAR image. We employed a high-frequency pass filter (HPF) for this purpose, with the kernel $K_{HPF}$ defined as follows:
$$K_{HPF} = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix} \tag{6}$$
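Applying this kernel to a batch of images, i.e., the preprocessing function f(·) fed to the second discriminator, can be written as a short PyTorch sketch:

```python
import torch
import torch.nn.functional as F

# 3 x 3 high-pass (Laplacian-style) kernel from Equation (6), shaped for conv2d
K_HPF = torch.tensor([[-1., -1., -1.],
                      [-1.,  8., -1.],
                      [-1., -1., -1.]]).view(1, 1, 3, 3)

def hpf(images):
    """images: (N, 1, H, W) tensor; returns high-frequency-emphasized images."""
    return F.conv2d(images, K_HPF, padding=1)
```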
Figure 5 displays the simulated images produced using DH-GAN, as well as those generated using NSGAN and LSGAN for comparison. Both NSGAN and LSGAN are built upon DCGAN with a conditional parameter. The sole distinction between NSGAN, LSGAN, and DH-GAN lies in the absence of a second discriminator and HPF for NSGAN and LSGAN. Furthermore, the training conditions and datasets for NSGAN and LSGAN align with those used for DH-GAN.
When assessing the simulated datasets visually, all three models produced results similar to real images, and differentiating them was challenging. However, this parity was disrupted by power spectral density (PSD) analysis. As depicted in Figure 6, the PSDs of the real dataset and of the datasets generated using the three models differed considerably. The real dataset’s PSD, represented by a solid blue line, decreased on a log scale as the frequency increased. In contrast, the results for NSGAN and LSGAN, illustrated with green dotted and red dash–dot lines, respectively, showed elevated values beyond a specific frequency. These results occur because the discriminators of LSGAN and NSGAN mainly distinguish between real SAR images and simulated images based on low-frequency features of the images, such as structural or morphological aspects.
Conversely, the PSD of the dataset generated by DH-GAN, depicted as an orange dotted line, closely resembles the spectrum of the real data. This similarity indicates that the second discriminator, trained specifically on images with emphasized high-frequency components, effectively guides the generator toward producing images that reflect the high-frequency components of the actual data. This demonstrates that the proposed dual-discriminator structure and loss function are functioning as intended.

3.3. Recognition Performance

We subsequently assessed the recognition performance of the CNN model for SAR-ATR using the previously created simulated datasets. For training, the CNN utilized the original dataset, which has a depression angle of 17 degrees, along with the simulated datasets. Three distinct simulated datasets were available, generated using DH-GAN, NSGAN, and LSGAN, each containing 1000 images per target. Throughout the CNN training phase, all 10 targets served as inputs, with each input SAR image having a resolution of 128 × 128. The CNN was trained using Stochastic Gradient Descent (SGD) with a learning rate of 0.0125, a momentum coefficient of 0.9, and a weight decay coefficient of $5 \times 10^{-5}$. The training was conducted over 100 epochs with a batch size of 64.
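This training configuration corresponds to a loop of the following form (a sketch; AtrCnn is the model outlined in Section 2.4, and train_loader is assumed to yield (image, label) batches):

```python
import torch
import torch.nn as nn

def train_atr_cnn(model, train_loader, epochs=100, device="cpu"):
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.0125,
                                momentum=0.9, weight_decay=5e-5)
    criterion = nn.CrossEntropyLoss()  # applies log-softmax internally
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
    return model
```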
Figure 7 displays the CNN training loss for each simulated dataset. As the training results in Figures 7 and 8 indicate, the use of simulated datasets accelerated the training process, and the training loss was lower compared to using only the MSTAR data. Among these datasets, the data generated by DH-GAN exhibited the lowest training loss and slightly outpaced the training speed achieved with the NSGAN and LSGAN datasets. These results suggest that the dataset generated by DH-GAN more effectively addresses the issue of insufficient training data than those generated by the other GAN models. In other words, the data generated by our proposed method exhibit characteristics more closely resembling real data.
Next, we assessed the recognition performance of each CNN model using SAR images with a depression angle of 15 degrees, which were not included in the training dataset. Table 3, which summarizes the average recognition rate for the 10 targets, shows that the CNN model’s recognition performance is enhanced when the simulated datasets are used in conjunction with the MSTAR data. In particular, the recognition rate showed a slight improvement with the DH-GAN dataset compared to those of the other two GAN models. While this improvement in recognition rate might appear modest, the DH-GAN dataset offers a significant advantage by achieving high recognition performance in fewer training epochs.
These experimental results demonstrate the effectiveness of the proposed DH-GAN. The second discriminator with the HPF and the loss functions in Equation (2) are instrumental in helping the generator produce more realistic images that reflect the high-frequency components of real images. Notably, the dataset generated by DH-GAN enhances the training speed and recognition performance of the CNN to a greater extent than those generated by the other GAN models. This result indirectly highlights the significance of the high-frequency components of the real dataset, which are often overlooked by existing GAN models.

4. Conclusions

The advent of CNNs has brought about remarkable advancements in target recognition for SAR images. Nevertheless, the broader application of CNNs is limited by challenges in acquiring ample SAR images. To address this sample scarcity, GAN-based methods for generating simulated images have been proposed. However, when these images are assessed in the frequency domain, there is a clear distinction between the real and simulated images. To bridge this gap, we present the DH-GAN. This model is designed to generate simulated images that emulate the high-frequency characteristics intrinsic to real SAR images. The structure of the proposed DH-GAN is based on a conditional GAN with convolutional neural networks, to which a second discriminator equipped with a high-frequency pass filter is added.
Through PSD analysis, we are able to show that the images generated using DH-GAN closely resemble the inherent high-frequency characteristics in real SAR images. By incorporating these synthetic SAR images into the training dataset, the target recognition capabilities of the CNN were notably enhanced. Moreover, when compared to datasets produced using NSGAN and LSGAN, the validity of the DH-GAN approach became evident. Our proposed approach can generate more realistic SAR images for specific targets. This approach shows promise in enhancing recognition performance and addressing the overfitting challenges commonly faced by CNNs in SAR-ATR applications.
In future work, we plan to utilize not only PSD analysis but also explore new ways to evaluate the simulated images. In addition, we will simulate data that reflect various characteristics in addition to the high-frequency components of the real data, and analyze the impact of specific components of the SAR image on the recognition rate of the CNN. This research will be applied to various SAR imagery beyond the MSTAR dataset to propose data generation methods in the field and improve the recognition rate of CNNs. The final goal is to develop a CNN with robust recognition rates that can be applied to wide-area SAR images containing multiple targets acquired under mission scenarios. Moreover, the implications of DH-GAN’s enhanced accuracy in SAR image generation can be extended beyond military applications to disaster response and environmental monitoring. Future research could explore the integration of DH-GAN with satellite imaging technologies for real-time global monitoring capabilities.

Author Contributions

Conceptualization, S.O.; methodology, S.O.; software, S.O. and Y.K.; validation, H.B.; formal analysis, H.B.; investigation, Y.K.; resources, Y.K.; data curation, S.O. and Y.K.; writing—original draft preparation, S.O.; writing—review and editing, H.B.; visualization, S.O. and Y.K.; supervision, H.B., D.L. and J.K.; project administration, D.L. and J.K.; funding acquisition, H.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant-in-aid from Hanwha Systems.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Authors Deoksu Lim and Junyoung Ko were employed by the company Hanwha Systems. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Chen, X.; Wang, Z.; Hua, Q.; Shang, W.L.; Luo, Q.; Yu, K. AI-empowered speed extraction via port-like videos for vehicular trajectory analysis. IEEE Trans. Intell. Transp. Syst. 2022, 24, 4541–4552. [Google Scholar] [CrossRef]
  2. Zhao, Q.; Principe, J.C. Support vector machines for SAR automatic target recognition. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 643–654. [Google Scholar]
  3. Pengcheng, G.; Zheng, L.; Jingjing, W. Radar group target recognition based on HRRPs and weighted mean shift clustering. J. Syst. Eng. Electron. 2020, 31, 1152–1159. [Google Scholar] [CrossRef]
  4. Morgan, D.A. Deep convolutional neural networks for ATR from SAR imagery. In Algorithms for Synthetic Aperture Radar Imagery XXII; SPIE: Bellingham, WA, USA, 2015; Volume 9475, pp. 116–128. [Google Scholar]
  5. Park, J.H.; Seo, S.M.; Yoo, J.H. SAR ATR for Limited Training Data Using DS-AE Network. Sensors 2021, 21, 4538. [Google Scholar] [CrossRef] [PubMed]
  6. Ying, Z.; Xuan, C.; Zhai, Y.; Sun, B.; Li, J.; Deng, W.; Mai, C.; Wang, F.; Labati, R.D.; Piuri, V.; et al. TAI-SARNET: Deep Transferred Atrous-Inception CNN for Small Samples SAR ATR. Sensors 2020, 20, 1724. [Google Scholar] [CrossRef] [PubMed]
  7. Du, K.; Deng, Y.; Wang, R.; Zhao, T.; Li, N. SAR ATR based on displacement- and rotation-insensitive CNN. Remote Sens. Lett. 2016, 7, 895–904. [Google Scholar] [CrossRef]
  8. Shao, J.; Qu, C.; Li, J.; Peng, S. A Lightweight Convolutional Neural Network Based on Visual Attention for SAR Image Target Classification. Sensors 2018, 18, 3039. [Google Scholar] [CrossRef]
  9. Wang, L.; Bai, X.; Zhou, F. SAR ATR of Ground Vehicles Based on ESENet. Remote Sens. 2019, 11, 1316. [Google Scholar] [CrossRef]
  10. Zhang, F.; Wang, Y.; Ni, J.; Zhou, Y.; Hu, W. SAR Target Small Sample Recognition Based on CNN Cascaded Features and AdaBoost Rotation Forest. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1008–1012. [Google Scholar] [CrossRef]
  11. Ding, J.; Chen, B.; Liu, H.; Huang, M. Convolutional neural network with data augmentation for SAR target recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368. [Google Scholar] [CrossRef]
  12. Lv, J.; Liu, Y. Data Augmentation Based on Attributed Scattering Centers to Train Robust CNN for SAR ATR. IEEE Access 2019, 7, 25459–25473. [Google Scholar] [CrossRef]
  13. Ding, B.; Wen, G.; Huang, X.; Ma, C.; Yang, X. Data augmentation by multilevel reconstruction using attributed scattering center for SAR target recognition. IEEE Geosci. Remote Sens. Lett. 2017, 14, 979–983. [Google Scholar] [CrossRef]
  14. Malmgren-Hansen, D.; Kusk, A.; Dall, J.; Nielsen, A.A.; Engholm, R.; Skriver, H. Improving SAR automatic target recognition models with transfer learning from simulated data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1484–1488. [Google Scholar] [CrossRef]
  15. Kim, S.; Song, W.J.; Kim, S.H. Double weight-based SAR and infrared sensor fusion for automatic ground target recognition with deep learning. Remote Sens. 2018, 10, 72. [Google Scholar] [CrossRef]
  16. Guo, J.; Lei, B.; Ding, C.; Zhang, Y. Synthetic aperture radar image synthesis by using generative adversarial nets. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1111–1115. [Google Scholar] [CrossRef]
  17. Gao, F.; Yang, Y.; Wang, J.; Sun, J.; Yang, E.; Zhou, H. A deep convolutional generative adversarial networks (DCGANs)-based semi-supervised method for object recognition in synthetic aperture radar (SAR) images. Remote Sens. 2018, 10, 846. [Google Scholar]
  18. Cui, Z.; Zhang, M.; Cao, Z.; Cao, C. Image data augmentation for SAR sensor via generative adversarial nets. IEEE Access 2019, 7, 42255–42268. [Google Scholar]
  19. Liu, L.; Pan, Z.; Qiu, X.; Peng, L. SAR target classification with CycleGAN transferred simulated samples. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4411–4414. [Google Scholar]
  20. Durall, R.; Keuper, M.; Keuper, J. Watch your up-convolution: Cnn based generative deep neural networks are failing to reproduce spectral distributions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 7890–7899. [Google Scholar]
  21. Dzanic, T.; Shah, K.; Witherden, F. Fourier spectrum discrepancies in deep network generated images. Adv. Neural Inf. Process. Syst. 2020, 33, 3022–3032. [Google Scholar]
  22. Frank, J.; Eisenhofer, T.; Schönherr, L.; Fischer, A.; Kolossa, D.; Holz, T. Leveraging frequency analysis for deep fake image recognition. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 3247–3258. [Google Scholar]
  23. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 28th Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
  24. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  25. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  26. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  27. Van der Schaaf, V.A.; van Hateren, J.V. Modelling the power spectra of natural images: Statistics and information. Vis. Res. 1996, 36, 2759–2770. [Google Scholar]
  28. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
Figure 1. Basic structure of a GAN.
Figure 2. The structure of DH-GAN.
Figure 3. The network structure of the generator.
Figure 4. The network structure of the two discriminators.
Figure 5. The result of DH-GAN, NSGAN, and LSGAN at epoch 10.
Figure 6. Power spectrum density analysis for simulated datasets.
Figure 7. The training loss graph.
Figure 8. The recognition rate graph.
Table 1. The CNN structure for SAR-ATR.

Layer Type      | Image Size | Feature Maps | Kernel Size | Function
Input           | 128 × 128  | 1            | -           | -
Convolution     | 120 × 120  | 18           | 9 × 9       | ReLU
Pooling         | 20 × 20    | 18           | 6 × 6       | Max Pooling
Convolution     | 16 × 16    | 36           | 5 × 5       | ReLU
Pooling         | 4 × 4      | 36           | 4 × 4       | Max Pooling
Convolution     | 1 × 1      | 120          | 4 × 4       | ReLU
Fully Connected | -          | 1            | 120 × 10    | Softmax
Output          | 10         | -            | -           | -
Table 2. MSTAR datasets used for training and validation.

Target Name | Depression Angle = 17 deg | Depression Angle = 15 deg
2S1         | 299 | 274
BMP2        | 232 | 195
BRDM2       | 298 | 274
BTR60       | 256 | 195
BTR70       | 233 | 196
D7          | 299 | 274
T62         | 299 | 273
T72         | 232 | 196
ZIL131      | 299 | 274
ZSU234      | 299 | 274
Table 3. The recognition rate results of CNNs trained with different simulated datasets.

Name    | MSTAR Only | With DH-GAN | With NSGAN | With LSGAN
2S1     | 88.32 | 94.53 | 91.61 | 94.16
BMP2    | 80.51 | 89.23 | 93.33 | 93.85
BRDM2   | 93.07 | 91.97 | 98.54 | 89.78
BTR60   | 96.41 | 97.44 | 94.36 | 93.85
BTR70   | 92.86 | 98.47 | 97.96 | 97.96
D7      | 98.91 | 98.54 | 99.27 | 98.91
T62     | 92.31 | 97.80 | 91.94 | 97.07
T72     | 98.47 | 95.41 | 92.86 | 95.92
ZIL131  | 98.18 | 95.62 | 96.35 | 95.26
ZSU234  | 98.18 | 99.27 | 97.45 | 97.81
Avg.    | 93.73 | 95.83 | 95.37 | 95.46
