1. Introduction
With its rapid development, inverse synthetic aperture radar (ISAR) technology can acquire high-resolution radar images of non-cooperative targets under all-day and all-weather conditions, and such images are widely used in military and civil fields [1]. The high range resolution of ISAR comes from the wide bandwidth of the transmitted signal, while the high azimuth resolution is determined by the virtual aperture synthesized by the relative motion between the radar and the target. ISAR images contain abundant feature information of targets, which is vital for target recognition [2]. However, it is not easy to obtain satisfactory ISAR images in the actual imaging process. Actual ISAR images are often blurred and their resolution is limited because of the non-cooperation of targets and noise interference. In addition, the radar echo of the target is often incomplete, which further degrades the quality of the ISAR image. Therefore, it is important to find a method to improve the resolution of ISAR images.
Since non-cooperative targets can always be regarded as a combination of scattering points, sparse reconstruction algorithms based on compressive sensing (CS) have been used to handle the imaging problem and have attracted the attention of many scholars in recent years [3,4,5]. The echo received by ISAR is physically sparse, so the CS method is well suited to ISAR imaging with sparse aperture (SA). Many well-known CS algorithms have been applied to ISAR imaging, such as the orthogonal matching pursuit (OMP) algorithm [6], the smoothed $\ell_0$-norm (SL0) algorithm [7], the fast iterative shrinkage-thresholding algorithm (FISTA) [8], the alternating direction method of multipliers (ADMM) [9], and so on. These CS methods obtain high-resolution ISAR images by $\ell_1$-norm or $\ell_0$-norm optimization. Compared with traditional methods, the CS method achieves better resolution on ISAR images. However, the image quality obtained by the CS method is limited by the sparse observation model: if the model is inaccurate, some fake points are inevitably produced, which affects the resolution performance. In addition, the computational complexity of the CS method is high, so its runtime is too long to realize real-time imaging.
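As a concrete illustration of this family of algorithms, the following is a minimal FISTA sketch for a generic $\ell_1$-regularized reconstruction. It is real-valued for simplicity; the actual ISAR formulation is complex-valued with a problem-specific measurement matrix, so this is only a sketch of the iterative machinery, not the cited papers' implementations.

```python
import numpy as np

def fista(A, y, lam=0.1, n_iter=100):
    """FISTA sketch for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    Generic LASSO form, not the exact ISAR formulation of [8]."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = z - (A.T @ (A @ z - y)) / L    # gradient step on the data-fit term
        # soft-thresholding: proximal operator of the l1 norm
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov momentum step
        x, t = x_new, t_new
    return x
```

The soft-thresholding step is what promotes sparsity: coefficients below the threshold are set exactly to zero, which is why CS methods suppress sidelobes but may also discard weak scatterers.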
Recently, deep learning has shown great potential in the field of radar signal processing, including ISAR imaging. Scholars exploit the powerful end-to-end learning ability of neural networks to solve the nonlinear mapping. Deep learning breaks the limitations of traditional methods and obtains better results. Many specific neural networks have been applied to ISAR imaging. A complex-valued convolutional neural network (CV-CNN) is proposed in [10] and has advantages in imaging quality and computational efficiency. A deep neural network needs massive data for training, but adequate training data cannot be guaranteed in the field of radar imaging. Therefore, a fully convolutional neural network (FCNN) is applied to ISAR imaging in [11]; an FCNN has no fully connected layers, so the number of weights to be trained is lower and it can be trained with a smaller training set. In [12], a deep residual network is proposed to achieve resolution enhancement for ISAR images, obtaining better performance than the CS method. In [13], a novel U-net-based imaging method is proposed to improve the quality of ISAR images. A super-resolution ISAR imaging method based on deep-learning-assisted time-frequency analysis (TFA) is proposed in [14], in which the basic structure of the neural network also adopts the U-net. In [15], the authors propose a joint FISTA and Visual Geometry Group Network (VGGNet) ISAR imaging method to improve imaging performance and reduce complexity. In [16], the authors note that many articles use the generative adversarial network (GAN) for SAR image translation, while few apply GAN to ISAR imaging; at the same time, most studies aim to minimize a reconstruction error based on the mean square error (MSE), which limits the quality of the ISAR image. Therefore, a GAN-based framework using a combined loss is proposed in [16]. Besides, many scholars use the deep unrolling methodology to design deep neural networks. They usually unfold the iterative procedure of a CS algorithm into a deep network: one layer of the network represents one iteration of the algorithm, and the parameters of the algorithm become network parameters. Deep unrolling makes the network physically interpretable. A complex-valued ADMM net is proposed in [17] to achieve SA ISAR imaging and autofocusing simultaneously. With the same purpose, a joint approximate message passing (AMP) and phase error estimation method is proposed in [18], and the corresponding deep learning network is constructed. In [19], a 2D ADMM method is unrolled to form a deep network for high-resolution 2D ISAR imaging and autofocusing under the condition of a low signal-to-noise ratio (SNR) and SA.
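The deep-unrolling idea can be illustrated by writing a single ADMM iteration as a "layer": stacking K such layers gives a K-iteration network, and the penalty and threshold parameters of each layer become trainable. The sketch below uses the generic LASSO form, not the exact complex-valued formulations of [17] or [19].

```python
import numpy as np

def admm_layer(x, z, u, A, y, rho, lam):
    """One unrolled ADMM iteration for
    min 0.5*||A x - y||^2 + lam*||z||_1  s.t.  x = z.
    In an unrolled network, rho and lam would be learnable per-layer
    parameters (a generic sketch, not the cited papers' networks)."""
    n = A.shape[1]
    # x-update: least squares coupled to z through the quadratic penalty
    x = np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ y + rho * (z - u))
    # z-update: soft-thresholding, the proximal operator of the l1 norm
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # u-update: dual ascent on the constraint x = z
    u = u + x - z
    return x, z, u
```

Running the same layer repeatedly recovers classical ADMM; replacing the fixed (rho, lam) with learned values per layer is the unrolling step that turns the algorithm into an interpretable network.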
Although deep learning has made progress in ISAR resolution enhancement, some problems remain to be solved. First, the ISAR images reconstructed by deep networks often lose weak scattering points, resulting in the loss of many target details. Second, most existing deep neural networks adopt the MSE as the loss function; however, the reconstructed ISAR images obtained with this loss are often over-smoothed, which is not conducive to target recognition. Third, the universality and generalization of deep neural networks are not good enough. For example, the U-net in [13] is trained separately in a noisy environment and under SA, which makes each trained network suitable only for its specific situation.
To tackle the above problems, this paper proposes a GAN-based ISAR resolution enhancement method to obtain better ISAR images. The key novelties are as follows: (1) Inspired by [16,20], we adopt the GAN as our basic deep neural network structure. Compared with other networks, a GAN has a more powerful ability to describe target details. We adopt the relativistic average discriminator (RaD) to improve the resolution of the ISAR image, and the Residual-in-Residual Dense Block (RRDB) is used in the generator network. (2) The loss function of the proposed GAN is composed of a feature loss, an adversarial loss, and an absolute loss. The feature loss maintains the main characteristics of ISAR images; the adversarial loss recovers the detailed features of weak scattering points; and the absolute loss keeps the ISAR images from being over-smoothed. The proposed loss function achieves superior reconstruction with resolution enhancement. (3) We train the proposed GAN only under the condition of no noise and full aperture; the trained network is then used to reconstruct ISAR images under low SNRs and SA, respectively. Simulated data and measured data under different parameter conditions are used to verify the effectiveness and universality of the proposed GAN, and the simulation results show better-focused performance.
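The combined objective can be sketched as follows. The relativistic average discriminator losses follow the general RaGAN formulation (as popularized by ESRGAN-style networks), while the loss weights and the feature extractor are illustrative placeholders rather than this paper's exact values.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def rad_losses(d_real, d_fake):
    """Relativistic average discriminator (RaD) losses: the discriminator
    judges whether a real image is more realistic *on average* than the
    generated ones. d_real/d_fake are raw pre-sigmoid outputs for batches
    of real/generated images. A generic RaGAN sketch."""
    d_ra_real = d_real - d_fake.mean()   # real relative to the average fake
    d_ra_fake = d_fake - d_real.mean()   # fake relative to the average real
    eps = 1e-12                          # numerical guard for log(0)
    d_loss = (-np.mean(np.log(sigmoid(d_ra_real) + eps))
              - np.mean(np.log(1 - sigmoid(d_ra_fake) + eps)))
    g_loss = (-np.mean(np.log(1 - sigmoid(d_ra_real) + eps))
              - np.mean(np.log(sigmoid(d_ra_fake) + eps)))
    return d_loss, g_loss

def total_generator_loss(g_adv, feat_hr, feat_sr, img_hr, img_sr,
                         w_feat=1.0, w_adv=5e-3, w_abs=1e-2):
    """Combined loss = feature loss + adversarial loss + absolute (L1) loss.
    The weights are illustrative placeholders, not the paper's values."""
    l_feat = np.mean((feat_hr - feat_sr) ** 2)  # distance in feature space
    l_abs = np.mean(np.abs(img_hr - img_sr))    # L1 term keeps edges sharp
    return w_feat * l_feat + w_adv * g_adv + w_abs * l_abs
```

The L1 (absolute) term is the component that counteracts the over-smoothing typical of pure MSE training, while the adversarial term pushes the generator to reproduce fine details such as weak scatterers.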
The rest of this article is organized as follows. In Section 2, the ISAR imaging model is constructed. Section 3 introduces the proposed GAN in detail and gives the network loss function. Section 4 describes the details of data acquisition and the testing strategy. In Section 5, various experiments are carried out to evaluate the performance of the proposed GAN. Section 6 draws a conclusion.
2. ISAR Imaging Model
After translational motion compensation of the target, the imaging model can be simplified to a classic turntable model. When the target is in uniform motion and the coherent processing interval (CPI) is short, the target motion can be treated as uniform rotation. Here, we take the monostatic radar as an example. Take the origin as the phase center, and let a point scatterer $p$ be situated on the target, as shown in Figure 1.
The radar echo from the point scatterer can be expressed as
$$s(k,\theta) = \sigma_p \exp\left(-j\,\mathbf{K}\cdot\mathbf{r}_p\right), \tag{1}$$
where $\theta$ is the azimuth angle, $\sigma_p$ is the scattering coefficient of $p$, $\mathbf{K}$ is the vector wave number in the propagation direction, and $\mathbf{r}_p$ stands for the vector from the origin to point scatterer $p$.
As for $\mathbf{K}$, it can be represented by the wave numbers in the $x$ and $y$ directions in Equation (2):
$$\mathbf{K} = k_x\hat{\mathbf{e}}_x + k_y\hat{\mathbf{e}}_y = k\cos\theta\,\hat{\mathbf{e}}_x + k\sin\theta\,\hat{\mathbf{e}}_y, \tag{2}$$
where $k$ is the scalar wave number, and $\hat{\mathbf{e}}_x$ and $\hat{\mathbf{e}}_y$ stand for the unit vectors in the $x$ and $y$ directions, respectively. So, the $\mathbf{K}\cdot\mathbf{r}_p$ in Equation (1) can be reorganized to obtain:
$$\mathbf{K}\cdot\mathbf{r}_p = k_x x_p + k_y y_p = k x_p\cos\theta + k y_p\sin\theta. \tag{3}$$
Therefore, $s(k,\theta)$ can be written as:
$$s(k,\theta) = \sigma_p \exp\left[-j\left(k x_p\cos\theta + k y_p\sin\theta\right)\right]. \tag{4}$$
Under the condition of narrow bandwidth and small azimuth angle, the $k$ in the second phase term of Equation (4) can be approximated as $k_c$, and $\cos\theta \approx 1$, $\sin\theta \approx \theta$, where $k_c = 4\pi f_c/c$ is the wave number corresponding to the center frequency $f_c$ and $c$ is the electromagnetic wave velocity. So, Equation (4) can be further simplified to:
$$s(k,\theta) \approx \sigma_p \exp\left[-j\left(k x_p + k_c\theta y_p\right)\right]. \tag{5}$$
So, the radar echo from all the scattering centers can be expressed as:
$$S(k_x, k_y) = \sum_{p=1}^{P} \sigma_p \exp\left[-j\left(k_x x_p + k_y y_p\right)\right], \tag{6}$$
where $P$ is the number of scattering centers. Finally, the ISAR image can be obtained by taking the 2D inverse Fourier transform of Equation (6) in the $(k_x, k_y)$ plane:
$$g(x, y) = \mathrm{IFT}_{2\mathrm{D}}\left[S(k_x, k_y)\right] = \sum_{p=1}^{P} \sigma_p\,\delta(x - x_p, y - y_p), \tag{7}$$
where $\delta(\cdot,\cdot)$ represents the 2D impulse function. However, in practice the values of $k_x$ and $k_y$ are finite. After some derivation, Equation (7) can be expressed as [21]:
$$I(x, y) = g(x, y) \otimes \mathrm{PSF}(x, y), \tag{8}$$
where $\otimes$ is the convolution operation and $\mathrm{PSF}(x, y)$ is the point spread function, which is a 2D sinc function. According to Equation (8), the PSF can be regarded as the impulse response of the system to an ideal point scatterer, and $g(x, y)$ represents the scattering function of the target.
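As a quick numerical illustration of Equations (6) and (7), the following sketch simulates the wavenumber-domain echo of a few ideal point scatterers and forms the image with a 2D inverse FFT. The grids and units are illustrative, not the paper's radar parameters.

```python
import numpy as np

def isar_image(scatterers, n_k=128, n_theta=128):
    """Simulate the wavenumber-domain echo of ideal point scatterers
    (Equation (6)) and form the ISAR image by a 2D inverse FFT
    (Equation (7)). Grids/units are illustrative placeholders."""
    kx = np.linspace(-np.pi, np.pi, n_k, endpoint=False)      # range wavenumbers
    ky = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)  # cross-range wavenumbers
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    S = np.zeros((n_k, n_theta), dtype=complex)
    for sigma, x_p, y_p in scatterers:                        # sum of Equation (6)
        S += sigma * np.exp(-1j * (KX * x_p + KY * y_p))
    # 2D inverse Fourier transform; because the wavenumber support is finite,
    # each point appears convolved with a sinc-like PSF (Equation (8))
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(S)))
    return np.abs(img)
```

A single scatterer at the origin produces a peak at the image center, spread by the PSF; this is exactly the finite-support effect that limits the resolution of Fourier-based (IFFT) imaging.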
5. Experiment Results and Analysis
In the experiment part, we select simulated data and measured data to verify the performance of the different methods. In most articles, the number of scattering points for a simulated aircraft is relatively small, while in our experiment the simulated aircraft data consists of 276 scattering points. The relevant parameters of the simulated data are the same as those used to train the network. The measured data contains the Yak42 aircraft data and the F-16 airplane model data. The carrier frequency and bandwidth of the Yak42 aircraft data are 5.52 GHz and 400 MHz, respectively. The F-16 airplane model is measured in the microwave chamber, with a frequency range from 34.2857 to 37.9428 GHz.
Under the conditions of no noise and full aperture, we first compare the performance of the different methods. Then we consider the influence of random Gaussian noise on the ISAR imaging performance of the different methods; the noise is added to the radar echo of both the simulated data and the measured data, with corresponding SNRs of 2 and −4 dB. Next, the different methods are tested under the condition of sparse aperture, where the echo data of the LR ISAR images is under-sampled; here, we consider that only 224 pulses are recorded, and zero padding is used to obtain the test pictures. Finally, the universality and generalization of the proposed GAN are verified with the F-16 airplane model data.
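The noise and sparse-aperture test data described above can be prepared as in the following sketch; the exact procedure in the paper may differ, and the function names are illustrative.

```python
import numpy as np

def add_noise(echo, snr_db, rng=None):
    """Add complex Gaussian noise to the radar echo at a prescribed SNR
    in dB (a sketch of the test-data preparation described above)."""
    rng = np.random.default_rng() if rng is None else rng
    p_sig = np.mean(np.abs(echo) ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(echo.shape)
                                    + 1j * rng.standard_normal(echo.shape))
    return echo + noise

def sparse_aperture(echo, n_keep, rng=None):
    """Keep n_keep randomly chosen pulses (rows) and zero the rest,
    emulating sparse-aperture data with zero padding."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros(echo.shape[0], dtype=bool)
    mask[rng.choice(echo.shape[0], n_keep, replace=False)] = True
    out = np.zeros_like(echo)
    out[mask] = echo[mask]
    return out
```

Zeroing the missing pulses (rather than dropping them) keeps the data matrix size fixed, so the same trained network can be applied directly to both full-aperture and sparse-aperture inputs.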
Regarding the complexity of the different networks, the training of the proposed GAN and GAN3 each costs 6.5 h, while GAN1 and GAN2 each cost 3 h; this is because the proposed GAN adopts a more complex network structure. For the trained networks, the imaging time of the proposed GAN and GAN3 is 0.51 s, while that of GAN1 and GAN2 is 0.46 s.
5.1. Comparison of No Noise and Full Aperture
The imaging results of the simulated aircraft obtained by different methods under the condition of no noise and full aperture are shown in Figure 5, and the corresponding metrics are presented in Table 1. The LR ISAR image is shown in Figure 5b; its resolution is limited, and strong sidelobes submerge many weak scattering points. It is observed that all four GAN-based methods achieve better imaging performance than IFFT, which indicates the superiority of GAN. The proposed GAN has the smallest IE and acceptable PSNR and SSIM. From Figure 5f, it is visually obvious that the proposed GAN has the best resolution performance and correctly reconstructs the most weak scattering points compared with the other methods. The proposed GAN describes the target details correctly, such as the tails of the simulated aircraft. Comparing Figure 5d,f, we can see that the reconstructed result of GAN2 recovers some weak points incorrectly, which shows that the network structure proposed in this paper is better than that in [16]. Comparing Figure 5c,d, we can find that GAN2 achieves better performance than GAN1 precisely because it adopts the loss function proposed in this paper, which shows the effectiveness of the proposed loss function. In addition, the metrics of GAN3 are not bad, but Figure 5e has some unpleasant shadows around the scattering points, whereas the ISAR image in Figure 5f has sharp edges. So, it is verified that the $\ell_1$ loss has better performance than the MSE loss for ISAR images.
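The image entropy (IE) metric used in these comparisons can be computed as below. This is one common definition (entropy of the power-normalized image); the paper's exact normalization is an assumption here.

```python
import numpy as np

def image_entropy(img):
    """Image entropy (IE) of an ISAR magnitude image: a lower IE indicates
    a better-focused, sparser image. One common power-normalized
    definition; the paper's exact normalization may differ."""
    power = np.abs(img) ** 2
    p = power / power.sum()          # normalize power to a distribution
    p = p[p > 0]                     # drop zeros to avoid log(0)
    return float(-np.sum(p * np.log(p)))
```

An image whose energy is concentrated in a few pixels (well-focused scatterers) has low entropy, while a defocused or noisy image spreads energy over many pixels and has high entropy, which is why the smallest IE is taken as the best focusing result.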
The imaging results of Yak42 obtained by the different methods are shown in Figure 6, and the corresponding metrics are shown in Table 2. As can be seen from Figure 6a, the imaging result of the traditional method is not focused and has many strong sidelobes. Compared with the other methods, the proposed GAN obtains the best-focused image and the smallest IE. At the same time, the proposed GAN does not produce many fake points, while the imaging results of GAN1 and GAN2 have some fake points in the background. As in the simulated aircraft experiment, GAN2 has a better imaging effect than GAN1, which further confirms the effectiveness of the proposed loss function. In addition, the imaging result of GAN3 is over-smoothed and has shadows in the background, which also confirms that the MSE loss is not good.
5.2. Comparison of Different SNRs
The imaging results of the simulated aircraft under different SNRs are shown in Figure 7 and Figure 8, and the corresponding metrics are shown in Table 3 and Table 4. The proposed GAN gets the smallest IE, the highest SSIM, and acceptable PSNR when the SNR is 2 dB and −4 dB. As shown in Figure 8f, the ISAR image formed by the proposed GAN remains focused with a clear background even when the SNR is −4 dB. The proposed GAN improves the resolution significantly and recovers the target details as much as possible; of course, some scattering-point information is inevitably lost. In the other ISAR images, some fake points appear in the background because of the strong noise, which proves the superiority of the proposed method. Similarly, GAN2 has better performance than GAN1 because of the proposed loss function, and GAN3 has shadows around the target because of the MSE loss.
The imaging results of Yak42 under different SNRs are shown in Figure 9 and Figure 10, and the corresponding metrics are shown in Table 5 and Table 6. The proposed GAN has the smallest IE. It recovers the target details correctly and improves the resolution of Yak42, and the image quality does not degrade significantly as the SNR decreases. The proposed GAN depicts the outline of Yak42 clearly, which is helpful for target recognition. In addition, it produces as few fake points as possible and keeps a clear background, while for GAN1 and GAN2 some fake points still exist in the images; this shows the effectiveness of the proposed method. The outline of Yak42 for GAN3 is blurred because of the MSE loss. The above analysis shows that the proposed GAN is not sensitive to low SNR.
5.3. Comparison of Sparse Aperture
The imaging results of the simulated aircraft under SA are shown in Figure 11, and the corresponding metrics are shown in Table 7. It can be seen from Figure 11b that the ISAR image of IFFT does not have good focusing performance: the scattering points are seriously defocused in the azimuth direction. The proposed GAN gets the smallest IE and the highest SSIM and PSNR. It improves the resolution significantly and describes more target details, recovering the weak scattering points as much as possible; however, some weak scattering points inevitably vanish because of the SA. Compared with the other ISAR images, Figure 11f does not generate many fake points around the target, while for the other networks there are many fake points in the background. So, the proposed GAN achieves the best imaging performance.
The imaging results of Yak42 under SA are shown in Figure 12, and the corresponding metrics are shown in Table 8. The proposed GAN has the smallest IE. It recovers more target details than the other networks and improves the resolution performance of Yak42. However, some fake points appear around the target, which shows that the proposed GAN is sensitive to SA. Similarly, GAN2 has better imaging performance than GAN1 because of the proposed loss function. The ISAR image of GAN3 has a blurred outline of Yak42, which results from the MSE loss.
5.4. Universality and Generalization of the Proposed GAN
To validate the universality and generalization of the proposed GAN, an all-metal scaled model of an F-16 is measured in the microwave chamber, as shown in Figure 13a. From Figure 13b, we can see that the proposed GAN improves the resolution performance and yields a clear outline of the F-16, although it does not describe the scattering characteristics of the F-16 perfectly. In fact, the carrier frequency of this data is different from that of the training data and the Yak42 data used in the previous experiments, which further proves the universality and generalization of the proposed GAN.