Article

A Novel Adversarial Learning Framework for Passive Bistatic Radar Signal Enhancement

Key Laboratory of Electronic Information Countermeasure and Simulation Technology of Ministry of Education, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(14), 3072; https://doi.org/10.3390/electronics12143072
Submission received: 14 May 2023 / Revised: 7 July 2023 / Accepted: 12 July 2023 / Published: 14 July 2023

Abstract

Passive Bistatic Radar (PBR) has significant civilian and military applications due to its ability to detect low-altitude targets. However, the uncontrollable characteristics of the transmitter often lead to subpar target detection performance, primarily due to a low signal-to-noise ratio (SNR). Coherent accumulation typically has limited ability to improve the SNR in the presence of strong noise and clutter. In this paper, we propose an adversarial learning-based radar signal enhancement method, called the radar signal enhancement generative adversarial network (RSEGAN), to overcome this challenge. On one hand, an encoder-decoder structure is designed to map noisy signals to clean ones without intervention in the adversarial training stage. On the other hand, a hybrid loss constrained by $L_1$ regularization, $L_2$ regularization, and a gradient penalty is proposed to ensure effective training of RSEGAN. Experimental results demonstrate that RSEGAN can reliably remove noise while preserving target information, providing an SNR gain of more than 5 dB over the basic coherent integration method even under low SNR conditions.

1. Introduction

Passive bistatic radar (PBR) has garnered significant attention due to its low cost, covert operation, low vulnerability, and reduced environmental impact [1,2]. It receives signals exclusively from non-cooperative transmitters, such as FM radio [3,4], television [5], digital video broadcasting satellite (DVB-S) [6], wireless local area network (WiFi) [7,8], integrated services digital broadcast-terrestrial (ISDB-T) [9], and global navigation satellite system (GNSS) [10,11]. PBR has been successfully employed in numerous military applications, including airborne detection [12] and maritime detection [13]. Of these, the DVB-S signal offers the advantages of a large beam angle and wide coverage area, making it ideal for constructing large-scale airborne target surveillance systems. PBR can operate silently and avoid being targeted by anti-radiation missiles or enemy jammers by utilizing existing broadcast transmitters without the need for frequency allocation. PBR also excels in detecting low-altitude targets.
Compared to active radar, PBR has several drawbacks because its received signals are not designed to match existing hardware and signal processing algorithms. Typically, PBR systems receive two-channel signals: the direct-path signal in the reference channel and the echo signal in the surveillance channel. The surveillance channel generally has a much lower SNR than the reference channel, and weak target echoes can be easily disturbed by strong noise [14,15]. PBR uses the correlation between the surveillance channel and the reference channel to detect moving targets. The most commonly used target detection method is Constant False Alarm Rate (CFAR) [16,17], which provides robust detection performance with low false alarm probability under high SNR conditions. However, the SNR of the signals received by PBR is typically low, resulting in a high corresponding false alarm probability.
Therefore, it is necessary to design appropriate signal enhancement algorithms to ensure the PBR system operates normally in complex signal environments and detects targets effectively. Various signal enhancement methods have been proposed by analyzing the characteristics of different signals and noise. Adaptive filtering is a common signal enhancement method, which includes Least Mean Square (LMS) [18], Recursive Least Square (RLS) [19], Normalized Least Mean Square (NLMS) [20], and so on. However, these methods have high computational complexity and the filtering order is often difficult to determine. The Extensive Cancellation Algorithm (ECA)-based methods [21,22] project signals onto a subspace orthogonal to the clutter subspace. Clutter cancellation has also been investigated [23]. The above methods require sophisticated mathematical operations that rely heavily on the prior knowledge of experts. In some cases, the surveillance signal is much weaker than the reference signal, and traditional methods cannot effectively enhance the surveillance signal [24].
Recently, there has been a significant interest in deep learning models, such as autoencoder (AE) [25], generative adversarial network (GAN) [26], and convolutional neural networks (CNN) [27], due to their remarkable ability to automatically extract features and process data. GAN is an effective generative model based on game theory, and various GAN versions have been proposed for different tasks, such as image-to-image translation [28], speech enhancement [29], classification [30,31,32], sample generation [33,34], redundant information mitigation [35,36,37], and image dehazing [38]. Moreover, GAN has been applied to various radar systems, such as synthetic aperture radar (SAR) [39,40,41,42], inverse synthetic aperture radar [43,44], LPI radar [45], and weather radar [46].
To address the target detection problem for PBR signals under clutter interference, a two-stage generalized likelihood ratio test algorithm [47] and a joint delay-Doppler estimation method [48] have been proposed, and a principal subspace similarity algorithm has been introduced to reduce the impact of noise in the reference channel [49]. To fully exploit the target information present in the surveillance channel of PBR and effectively detect targets under low SNR conditions, this paper proposes a signal enhancement method called Radar Signal Enhancement GAN (RSEGAN). By using an adversarial learning strategy, RSEGAN can automatically enhance the low SNR signal of PBR. Compared to existing signal enhancement methods, RSEGAN's primary contributions include:
(1)
An end-to-end PBR signal enhancement framework based on adversarial learning strategy. The framework leverages transferable knowledge from synthetic noisy signals to provide significant SNR gain for PBR and reduce target detection difficulty. Experiments on both simulation and real PBR data have demonstrated the effectiveness of the proposed method.
(2)
Automatically processing noisy PBR signals using a multi-level generator based on an encoder-decoder structure. The generator gradually filters out clutter information by compressing the feature dimension of the signal and recovers the clean enhanced signal through the decoder. An additional discriminator based on a convolutional network is designed to evaluate the enhanced signal’s quality.
(3)
A hybrid cost function that combines the advantages of gradient penalty and L1/L2-norm constraint. This is proposed to ensure the convergence of the training stage. The gradient penalty and L1/L2-norm constraints are applied to the discriminator and generator, respectively, to ensure that the gradient remains smooth during backward propagation.

2. Related Work

2.1. Signal Model of PBR

PBR does not actively transmit signals; instead, it detects the target by receiving electromagnetic signals reflected by the target from a non-cooperative radiation source [1,2]. Figure 1 illustrates a simple PBR configuration, which consists of a transmitter located at point T, a radar located at point R, and a target located at point P. The reference channel signal $u(t)$ can be expressed as

$$u(t) = \alpha s(t - t_\tau) + w(t),$$

where $s(t)$ is the direct-path signal transmitted by the transmitter, $\alpha$ is the amplitude of the direct-path signal, $w(t)$ is the thermal noise in the reference channel, and $t_\tau$ is the time delay of the direct-path signal, calculated as

$$t_\tau = \frac{\|RT\|_2}{c},$$

where $c$ is the speed of light. The target echo signal in the surveillance channel is then given by

$$x(t) = \beta s(t - t_x) \exp(j 2\pi \Omega_x t) + \tilde{w}(t),$$

where $\beta$ is a scaling parameter that accounts for the target reflectivity as well as the propagation effects in the surveillance channel, $\Omega_x$ is the Doppler frequency, $t_x$ is the time delay of the echo signal, and $\tilde{w}(t)$ is the Gaussian noise in the surveillance channel. Note that $t_x$ and $\Omega_x$ can be expressed as

$$t_x = \frac{\|PR\|_2 + \|TP\|_2}{c}$$

and

$$\Omega_x = \frac{f_c}{c}\, v(t) \cdot \left( \frac{PR}{\|PR\|_2} + \frac{PT}{\|PT\|_2} \right),$$

where $f_c$ is the carrier frequency of $s(t)$ and $v(t)$ is the velocity vector of the target.

In the noiseless case, the time delay $\Delta t = t_x - t_\tau$ between the two channels and the Doppler frequency $\Omega_x$ can be calculated by maximizing the following cross-ambiguity function (CAF):

$$\max_{s, \Omega} |\chi(s, \Omega)|^2 = \left| \int_0^{T_i} u^*(\tau - s)\, x(\tau) \exp(-j 2\pi \Omega \tau)\, d\tau \right|^2,$$

where $T_i$ is the integration time. For convenient computation, the above problem can be converted to its discrete version, i.e.,

$$\max_{n, m} |A(n, m)|^2 = \left| \frac{1}{N_{\mathrm{int}}} \sum_{k=1}^{N_{\mathrm{int}}} u^*[k - n]\, x[k] \exp\!\left( -j \frac{2\pi}{N_{\mathrm{int}}} m k \right) \right|^2,$$

where $N_{\mathrm{int}}$ is the number of integration points, and $u[k]$ and $x[k]$ are the discrete versions of $u(t)$ and $x(t)$.
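The discrete cross-ambiguity surface above can be sketched directly in NumPy. This is an illustrative toy implementation, not the authors' code; the function name and the use of a circular shift to realize the delay $u[k-n]$ are assumptions made to keep the example self-contained.

```python
import numpy as np

def discrete_caf(u, x, delays, dopplers):
    """Discrete cross-ambiguity surface |A(n, m)|^2 for a PBR channel pair.

    u: reference-channel samples, x: surveillance-channel samples,
    delays: candidate delay bins n, dopplers: candidate Doppler bins m.
    """
    N = len(x)
    k = np.arange(N)
    A = np.zeros((len(delays), len(dopplers)), dtype=complex)
    for i, n in enumerate(delays):
        u_shift = np.roll(u, n)  # circular approximation of u[k - n]
        for j, m in enumerate(dopplers):
            phase = np.exp(-1j * 2 * np.pi * m * k / N)
            A[i, j] = np.sum(np.conj(u_shift) * x * phase) / N
    return np.abs(A) ** 2

# Toy check: an echo that is a delayed, Doppler-shifted copy of the reference.
rng = np.random.default_rng(0)
N = 256
u = rng.standard_normal(N) + 1j * rng.standard_normal(N)
k = np.arange(N)
x = np.roll(u, 5) * np.exp(1j * 2 * np.pi * 3 * k / N)  # delay bin 5, Doppler bin 3
surface = discrete_caf(u, x, delays=range(10), dopplers=range(10))
n_hat, m_hat = np.unravel_index(np.argmax(surface), surface.shape)
```

Maximizing the surface recovers the injected delay and Doppler bins, which is exactly the search that PBR target detection performs.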

2.2. Generative Adversarial Network

GAN can generate new data samples that exhibit specific characteristics by learning the data distribution of the training samples. Typically, GAN comprises two main components: a generator $G$ and a discriminator $D$. The generator attempts to map latent data $z$ with distribution $p_z(z)$ into a new sample, which is expected to have distribution $p_g$. Ideally, $p_g$ should closely resemble the real data distribution $p_{data}$. The generated sample, along with a real sample, is then input into the discriminator, which aims to distinguish the generated sample from the true samples and provide the corresponding confidence. In other words, the generator creates samples to confuse the discriminator, while the discriminator tries to distinguish real from generated samples accurately. The entire framework is trained through an adversarial strategy represented by the following cost function:

$$\min_G \max_D V(D, G) = \mathbb{E}_x[\log D(x)] + \mathbb{E}_z[\log(1 - D(G(z)))],$$

where $V(D, G)$ is the typical cross-entropy loss, $x$ is a real sample from the distribution $p_{data}$, and $z$ is latent noise from the distribution $p_z(z)$. During the training process, the generator $G$ and discriminator $D$ are optimized alternately in a zero-sum game framework, repeated until the model converges. Theoretically, if both $D$ and $G$ are well-designed and have appropriate structures, they reach a Nash equilibrium at the end of training. At this point, the fake samples generated by $G$ are infinitely close to real samples, and $D$ cannot distinguish whether a sample was generated by the generator or comes from the real world.
In some cases, GAN is difficult to train due to the model’s difficulty in converging during training. The original GAN uses the Jensen-Shannon (JS) divergence between the distribution of real samples and generated samples as the cost function. However, it has been shown that the cost function can vanish if the distributions are supported by low-dimensional manifolds. To solve this problem, Wasserstein GAN (WGAN) was proposed, which replaces the JS divergence with the Wasserstein distance [50]. Additionally, a gradient penalty term is added to the cost function to ensure convergence of WGAN [51].
Despite its ability to generate new samples, GAN cannot precisely control the type of new samples. To solve this problem, the Conditional Generative Adversarial Network (CGAN) [52] introduces a supervised learning strategy into the training of GAN by adding extra information to both $G$ and $D$. This additional information can be used to generate new samples that satisfy certain conditions. The objective of CGAN can be expressed as follows:

$$L_{CGAN}(D, G) = \mathbb{E}_{x, y}[\log D(x, y)] + \mathbb{E}_{x, z}[\log(1 - D(x, G(x, z)))],$$

where $y$ is the extra conditional information and the other components are the same as in the original GAN. CGAN can perform transformations between samples of different styles and has been successfully applied in many areas, such as image-to-image translation [28], image matching [53], and signal denoising [54].
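As a quick numerical illustration of the minimax objective above, the sketch below (function and variable names are hypothetical) evaluates a Monte-Carlo estimate of $V(D, G)$ and checks the well-known equilibrium value: when the two distributions coincide, the optimal discriminator outputs $1/2$ everywhere, giving $V = \log\frac{1}{2} + \log\frac{1}{2} = -\log 4$.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))].

    d_real: discriminator outputs on real samples (probabilities in (0, 1)).
    d_fake: discriminator outputs on generated samples.
    """
    return np.mean(np.log(d_real)) + np.mean(np.log1p(-d_fake))

# At the theoretical Nash equilibrium, D(.) = 1/2 on every sample.
d_half = np.full(1000, 0.5)
v_equilibrium = gan_value(d_half, d_half)  # should equal -log(4)
```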

3. Radar Signal Enhancement GAN for PBR Target Detection

This study is inspired by CGAN and aims to develop a signal-to-signal framework that can enhance low SNR noisy PBR signals into high SNR clean PBR signals. To this end, we propose a novel signal enhancement method known as the radar signal enhancement GAN (RSEGAN), which consists of a training stage and a test stage, as shown in Figure 2. The generator in RSEGAN works as a mapping function $G: x_{noisy} \rightarrow x_{clean}^{generated}$. During the training stage, RSEGAN is fed with pairs of noisy-clean signals, comprising noisy signals and their corresponding ground truth. Each noisy signal $x_{noisy}$ is mapped into a generated clean signal $x_{clean}^{generated}$ by the generator. The discriminator $D$ then distinguishes the generated clean signal $x_{clean}^{generated}$ from the corresponding ground truth $x_{clean}$. The adversarial training process brings the generated signal progressively closer to the ground truth, i.e., the real clean signal. Training ends when the discriminator cannot differentiate between the true clean signal and a signal generated by the generator. At this point, the RSEGAN parameters are fixed, and the generator can enhance the SNR of noisy signals. In the test stage, each noisy signal is input to $G$, and the corresponding clean signal is obtained. It should be noted that all signals in this study have undergone coherent integration. The following sections elaborate on the components of RSEGAN.

3.1. Structure of the Generator

The generator $G$ addresses the problem of signal-to-signal translation, which involves transferring a noisy signal $x_{noisy}$ into an enhanced signal $x_{clean}^{generated} = G(x_{noisy}, z)$. The objective is to make $x_{clean}^{generated}$ as similar as possible to the real clean signal $x_{clean}$, making it indistinguishable from $x_{clean}$ by the discriminator $D$. The encoder-decoder network is a common architecture for mapping samples from one domain to another while satisfying certain conditions. The input sample passes through a series of downsampling layers until it reaches a bottleneck layer; the propagation process is then reversed, and the information in the bottleneck layer is recovered to generate a sample of the same size as the original. In RSEGAN, the generated signal $x_{clean}^{generated}$ must retain the target information while suppressing the noise. To achieve this goal, the generator uses an encoder-decoder structure with cross-layer interconnections, known as U-net. This architecture utilizes skip connections to combine low-level features with high-level features and has demonstrated excellent performance in image segmentation tasks; it allows the network to preserve the structural information of the original noisy signal. Figure 3 shows that the encoder comprises 8 convolutional layers, which can be expressed as

$$x_j^{(h)}(x, y) = f\left( \sum_{i=1}^{m} \sum_{u=0}^{S-1} \sum_{v=0}^{S-1} k_{ji}^{(h)}(u, v)\, x_i^{(h-1)}(x + u, y + v) + b_j^{(h)} \right),$$

where $h$ is the index of the layer, $S \times S$ is the kernel size, $x_i^{(h-1)}$ ($i = 1, \ldots, m$) is the $i$th input feature map from the previous layer, $x_j^{(h)}$ ($j = 1, \ldots, n$) is the $j$th output feature map of the current layer, $(x, y)$ are the pixel coordinates, $k_{ji}^{(h)}$ is the convolutional kernel connecting the $i$th feature map to the $j$th feature map, and $b_j^{(h)}$ is the bias of the $j$th feature map. In this paper, the kernel size and stride are set to $2 \times 1$ and 2, respectively, in each layer. The nonlinear activation function $f(\cdot)$ is the PReLU activation function, expressed as

$$\mathrm{PReLU}(x) = \begin{cases} x, & x \geq 0 \\ \alpha x, & x < 0 \end{cases},$$

where $\alpha$ is a hyperparameter that controls the sparsity of the network and is set to 0.3 in this paper. The size of the input signal is $2048 \times 1$, and the output dimensions of each layer in the encoder, represented as length $\times$ channels, are $2048 \times 1$, $1024 \times 8$, $512 \times 16$, $256 \times 16$, $128 \times 32$, $64 \times 32$, $32 \times 64$, $16 \times 64$, and $8 \times 128$. Then, a random noise vector $z$ sampled from an $8 \times 128$-dimensional normal distribution $N(0, 1)$ is added to the bottleneck layer (the last convolutional layer). The decoder network performs the inverse operation of the encoder. It consists of 8 deconvolutional (transposed convolution) layers with a stride of 2. The output dimension of each layer in the decoder is recovered layer by layer to $16 \times 64$, $32 \times 64$, $64 \times 32$, $128 \times 32$, $256 \times 16$, $512 \times 16$, $1024 \times 8$, and $2048 \times 1$. PReLU activation is also added to each deconvolution layer. Finally, skip connections link each layer in the encoder to its mirror layer in the decoder through an addition operation in the channel dimension.
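The PReLU activation and the stride-2 length halving described above are easy to verify in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def prelu(x, alpha=0.3):
    """PReLU with the negative-slope hyperparameter alpha (0.3 in the paper)."""
    return np.where(x >= 0, x, alpha * x)

# Each stride-2 convolution halves the signal length, so the encoder
# compresses the 2048-sample input down to 8 over its 8 layers.
encoder_lengths = [2048 // 2 ** h for h in range(9)]  # 2048, 1024, ..., 8

out = prelu(np.array([-2.0, 0.0, 3.0]))  # negative inputs are scaled by 0.3
```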

3.2. Structure of the Discriminator

Figure 4 illustrates that the discriminator D is a typical convolutional neural network that takes in one-dimensional input. The input size is 2048 × 1. The input data sequentially pass through 8 convolutional layers, each with a 2 × 1 kernel size. Each convolutional layer in the discriminator outputs dimensions of 1024 × 8, 512 × 16, 256 × 16, 128 × 32, 64 × 32, 32 × 64, 16 × 64, and 8 × 128. Additionally, a 1 × 1 kernel convolutional layer is utilized to flatten the feature maps into 1-dimensional vectors. The final layer is a fully connected layer with 1 node, which classifies whether the input signal is a real or generated clean signal.

3.3. Cost Function of RSEGAN

To address the issue of gradient vanishing in the original GAN, the Wasserstein GAN (WGAN) was introduced, which utilizes the Wasserstein distance to enhance training stability. Additionally, CGAN effectively resolves the issue of sample-to-sample translation. To combine the benefits of both WGAN and CGAN, a conditional Wasserstein generative adversarial network (CWGAN) was created, whose cost function can be expressed as follows:

$$L_{CWGAN} = \mathbb{E}_{x_{noisy}, x_{clean}} \left[ D(x_{noisy} \mid x_{clean}) \right] - \mathbb{E}_{x_{noisy}, z} \left[ D(G(z \mid x_{noisy})) \right].$$

However, CWGAN is sometimes not well-trained due to weight clipping, which forces the Lipschitz constraint to be satisfied. Therefore, a gradient penalty term is added to the cost function of CWGAN:

$$L_{GP} = \mathbb{E}_{\hat{x}} \left[ \left( \left\| \nabla_{\hat{x}} D(\hat{x}) \right\|_2 - 1 \right)^2 \right],$$

where $\| \cdot \|_2$ is the $L_2$-norm; $\hat{x}$ is a random linear combination of the real sample $x_{clean}$ and the generated sample $G(z \mid x_{noisy})$, i.e., $\hat{x} = \varepsilon x_{clean} + (1 - \varepsilon) G(z \mid x_{noisy})$; and $\varepsilon$ is randomly sampled from the uniform distribution on [0, 1]. Furthermore, to constrain the generator $G$ to output samples close to the real sample, $L_1$ and $L_2$ regularization terms are added to the cost function:

$$L_{L_1} = \mathbb{E}_{x_{noisy}, x_{clean}, z} \left[ \left\| x_{clean} - G(z \mid x_{noisy}) \right\|_1 \right],$$

$$L_{L_2} = \mathbb{E}_{x_{noisy}, x_{clean}, z} \left[ \left\| x_{clean} - G(z \mid x_{noisy}) \right\|_2 \right].$$
The final cost function of RSEGAN is the combination of $L_{CWGAN}$, $L_{GP}$, $L_{L_1}$, and $L_{L_2}$, given by

$$L_{final} = \min_G \max_D L_{CWGAN} + \lambda_{GP} L_{GP} + \lambda_{L_1} L_{L_1} + \lambda_{L_2} L_{L_2},$$

where $\lambda_{GP}$ is the coefficient of the gradient penalty, $\lambda_{L_1}$ is the coefficient of the $L_1$ regularization, and $\lambda_{L_2}$ is the coefficient of the $L_2$ regularization. Specifically, an alternate optimization method is adopted during the training process, and the cost functions of the discriminator and generator follow from (16), respectively, as

$$\min_D L_{Dloss}(D, G) = -\mathbb{E}_{x_{noisy}, x_{clean}} \left[ D(x_{noisy} \mid x_{clean}) \right] + \mathbb{E}_{x_{noisy}, z} \left[ D(G(z \mid x_{noisy})) \right] + \lambda_{GP}\, \mathbb{E}_{\hat{x}} \left[ \left( \left\| \nabla_{\hat{x}} D(\hat{x}) \right\|_2 - 1 \right)^2 \right],$$

$$\min_G L_{Gloss}(D, G) = -\mathbb{E}_{x_{noisy}, z} \left[ D(G(z \mid x_{noisy})) \right] + \lambda_{L_1} \mathbb{E}_{x_{noisy}, x_{clean}, z} \left[ \left\| x_{clean} - G(z \mid x_{noisy}) \right\|_1 \right] + \lambda_{L_2} \mathbb{E}_{x_{noisy}, x_{clean}, z} \left[ \left\| x_{clean} - G(z \mid x_{noisy}) \right\|_2 \right].$$
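The gradient penalty and the interpolation $\hat{x} = \varepsilon x_{clean} + (1 - \varepsilon) G(z \mid x_{noisy})$ can be sketched without any deep-learning framework by using a toy linear critic $D(x) = w \cdot x$, whose gradient is $w$ everywhere. This is an illustrative sketch under that assumption, not the RSEGAN training code:

```python
import numpy as np

rng = np.random.default_rng(1)

def gradient_penalty(grads):
    """Batch average of (||grad_xhat D(x_hat)||_2 - 1)^2."""
    norms = np.linalg.norm(grads, axis=1)
    return np.mean((norms - 1.0) ** 2)

# Random linear interpolation between a clean batch and a generated batch.
x_clean = rng.standard_normal((4, 2))
x_generated = rng.standard_normal((4, 2))
eps = rng.uniform(size=(4, 1))
x_hat = eps * x_clean + (1 - eps) * x_generated

# Toy linear critic D(x) = w . x: its gradient at every x_hat is w,
# so a unit-norm w incurs zero penalty (the 1-Lipschitz target).
w = np.array([0.6, 0.8])  # ||w||_2 = 1
gp = gradient_penalty(np.tile(w, (len(x_hat), 1)))
```

In a real implementation the gradients at the interpolation points are obtained by automatic differentiation; the penalty pushes their norms toward 1, keeping the critic approximately 1-Lipschitz.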

3.4. CA-CFAR Detection

The Cell Averaging Constant False Alarm Rate (CA-CFAR) method is a commonly used technique for identifying targets in signals, and its reliable performance has made it a popular choice for real radar systems [55]. In this study, we propose using the enhanced clean signal output by RSEGAN as the input of a CA-CFAR detector to improve target detection. Figure 5 provides the details of the CA-CFAR detector. Here, $x_i$ ($i = 1, 2, \ldots, N$) is the sampling value of the reference cells and $N$ is the sliding window length. The decision criterion of CA-CFAR can be described as

$$\begin{cases} D \geq TZ, & H_1 \\ D < TZ, & H_0 \end{cases},$$

where $D$ is the cell under test, $T$ is the nominal factor, $Z$ is the estimated total clutter power, $H_1$ represents target present, and $H_0$ represents target absent. We assume the background noise follows the Rayleigh distribution and that detection is based on a single pulse, so the sample value of each reference cell can be described by the exponential distribution. The $\Gamma$ distribution is a general form of the exponential distribution, and its probability density function is

$$f(x) = \frac{\beta^{\alpha} x^{\alpha - 1} e^{-\beta x}}{\Gamma(\alpha)}, \quad x \geq 0,$$

where $\alpha$ and $\beta$ are hyperparameters, and $\Gamma(\alpha)$ is the common Gamma function, i.e., $\Gamma(x) = \int_0^{+\infty} t^{x-1} e^{-t}\, dt$ ($x > 0$). In CA-CFAR detection, $Z$ follows a Gamma distribution. The detection probability and false alarm rate of the CA-CFAR detector are expressed as

$$P_D = \left( 1 + \frac{T}{1 + \lambda} \right)^{-N},$$

$$P_{fa} = (1 + T)^{-N},$$

where $T$ is the nominal factor, $N$ is the length of the sliding window, and $\lambda$ is the SNR of the cell under test. As the formulations show, $P_D$ and $P_{fa}$ are independent of the noise power $\delta$.
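The decision rule $D \gtrless TZ$ and the relation $P_{fa} = (1 + T)^{-N}$ combine into a minimal 1-D CA-CFAR sketch, shown below. The window sizes and the edge handling are illustrative assumptions, not the paper's detector configuration:

```python
import numpy as np

def ca_cfar(power, n_ref=16, n_guard=4, p_fa=1e-6):
    """Minimal 1-D cell-averaging CFAR on a power sequence.

    Z is the summed power of the 2*n_ref reference cells around the cell
    under test; the nominal factor T is solved from P_fa = (1 + T)^(-N),
    i.e. T = P_fa^(-1/N) - 1.
    """
    N = 2 * n_ref
    T = p_fa ** (-1.0 / N) - 1.0
    half = n_ref + n_guard
    det = np.zeros(len(power), dtype=bool)
    for i in range(half, len(power) - half):
        lead = power[i - half:i - n_guard]
        lag = power[i + n_guard + 1:i + half + 1]
        Z = lead.sum() + lag.sum()   # estimated total clutter power
        det[i] = power[i] >= T * Z   # H1 if D >= T*Z, else H0
    return det

rng = np.random.default_rng(0)
noise_power = rng.exponential(scale=1.0, size=200)  # Rayleigh envelope -> exponential power
noise_power[100] = 1000.0                           # strong target injected in cell 100
detections = ca_cfar(noise_power)
```

Because the threshold adapts to the locally estimated clutter level $Z$, the false alarm rate stays at the design value regardless of the absolute noise power, which is the CFAR property discussed above.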

4. Experiments

4.1. Training of RSEGAN

In the training process, the parameters of RSEGAN are updated by root mean square propagation (RMSprop), which is defined by

$$E[g^2]_t = \alpha E[g^2]_{t-1} + (1 - \alpha) g_t^2,$$

$$W_{t+1} = W_t - \frac{\eta_0}{\sqrt{E[g^2]_t + \varepsilon}} \odot g_t,$$

where $t$ is the number of training iterations, $W_t$ is the parameter vector at time step $t$, $g_t$ is the gradient with respect to $W_t$, $E[g^2]_t$ is the running mean of $g^2$, $\alpha$ is the momentum coefficient, $\eta_0$ is the initial learning rate, $\varepsilon$ is a small constant (set to $10^{-8}$ in this paper) that prevents the denominator from being 0, and $\odot$ denotes element-wise multiplication. All of the hyperparameters adopted for training RSEGAN are listed in Table 1.
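The two RMSprop update equations translate directly into NumPy. The sketch below uses the paper's $\varepsilon = 10^{-8}$; the remaining hyperparameter values are illustrative rather than those of Table 1:

```python
import numpy as np

def rmsprop_step(w, g, eg2, alpha=0.9, eta0=1e-3, eps=1e-8):
    """One RMSprop update:
    E[g^2]_t = alpha * E[g^2]_{t-1} + (1 - alpha) * g_t^2
    W_{t+1}  = W_t - eta0 / sqrt(E[g^2]_t + eps) * g_t   (element-wise)
    """
    eg2 = alpha * eg2 + (1.0 - alpha) * g ** 2
    w = w - eta0 / np.sqrt(eg2 + eps) * g
    return w, eg2

w, eg2 = np.zeros(2), np.zeros(2)
g = np.array([1.0, -1.0])
w, eg2 = rmsprop_step(w, g, eg2)  # step against the gradient direction
```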

4.2. Simulation Dataset

A simulation dataset was created to validate the effectiveness of the proposed algorithm. Specifically, the proposed signal enhancement method was evaluated using the DVB-S signal of a Chinese satellite, with the ground station located in Beijing. Table 2 lists the parameters of the direct-path signal, whose SNR is approximately 21 dB; a real example is shown in Figure 6. The simulation data were generated using FEKO software, which first generated radar cross-section (RCS) information for simulated targets; the locations of the transmitter, receiver, and target were then converted to geocentric coordinates (Table 3). The RCS of 20 frequency points within the signal bandwidth containing target information was calculated by FEKO, and a clean, noise-free echo signal was obtained by modulating the RCS of the different frequency points onto the direct-path signal and applying the corresponding time and frequency delays. To evaluate the proposed method's performance under different noise environments, random Gaussian noise was added to the clean echo signal to generate simulated noisy signals with varying SNRs. Finally, the slice data corresponding to the true frequency delay were generated by correlating the simulated echo signal with the direct-path signal. To assess the generalization performance of the proposed algorithm, signal samples with varying SNRs were gathered as training and test datasets, as shown in Table 4. The process of generating the training and test data is shown in Figure 7. It is worth noting that the range of SNRs in the test set is broader than that in the training set: in addition to samples with the same SNRs as the training set, the test set includes samples with lower SNRs. For each SNR, there were 768 training samples and 255 test samples. In Table 4, SNR is defined as the energy difference between the target echo and the noise in the surveillance channel.
All samples were generated by searching the frequency in the time dimension, and each sample comprised a 2048 × 1 vector with 2048 time steps. The simulation environment involved detecting a Boeing 737 plane, with a radial speed of 100 m/s and an accumulating time of 0.6 s.
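The noise-injection step described above, scaling white Gaussian noise so that the echo sits at a prescribed SNR, can be sketched as follows. The function name and the sinusoidal stand-in for a clean echo are assumptions for illustration:

```python
import numpy as np

def add_noise_at_snr(clean, snr_db, rng):
    """Add white Gaussian noise so that 10*log10(P_signal / P_noise) = snr_db."""
    p_signal = np.mean(np.abs(clean) ** 2)
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)
    noise = rng.normal(scale=np.sqrt(p_noise), size=clean.shape)
    return clean + noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 5 * np.linspace(0.0, 1.0, 2048))  # 2048-sample stand-in echo
noisy = add_noise_at_snr(clean, snr_db=0.0, rng=rng)
# Measure the realized SNR from the injected noise.
measured_snr_db = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
```

Sweeping `snr_db` over the values in Table 4 produces the graded training and test conditions used in the experiments.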

4.3. Enhancement Results in the Simulation Scenario

In this section, we present an analysis of the proposed method. To demonstrate the enhancement performance of RSEGAN, we provide examples of initial signals and their corresponding enhanced signals, as shown in Figure 8. The initial signal is the result of coherent accumulation of the target echo and the reference signal. The SNR, which represents the energy ratio of the signal to the noise in the surveillance channel prior to coherent accumulation, is also shown in Figure 8. The results indicate that RSEGAN can effectively improve the SNR of signals. Specifically, Table 5 lists the mean SNR values of test samples before coherent integration, after coherent integration, enhanced by RSEGAN, and improved by RSEGAN. The mean SNR of test samples enhanced by RSEGAN is close to 20 dB when the SNR of test samples is included in the training set. The results indicate that RSEGAN performs well in enhancing samples with SNR intensities similar to those in the training set. Additionally, a lower SNR of the initial signal results in a smaller SNR gain provided by RSEGAN. Specifically, when the SNR of the target echo before coherent integration is −74 dB, the SNR gain provided by RSEGAN is approximately 5 dB. Figure 9 shows the SNR of all initial signals and their corresponding enhanced signals to further demonstrate the effectiveness of RSEGAN. In Figure 9a–e, RSEGAN provides a consistent SNR gain when the test sample’s SNR is similar to that of the training set. However, in Figure 9f–j, when the test sample’s SNR is lower than that of the training set, the SNR gain provided by RSEGAN decreases.
In Figure 10, the CA-CFAR detection results for both initial signals and enhanced signals are presented. The CA-CFAR detector's false alarm rate is set to $10^{-6}$ and the number of guard cells is set to 8. The threshold of the CA-CFAR detector is represented by a black solid line in each subfigure. The target can be detected accurately when the signal's peak value exceeds the detection threshold. In the simulation, each sample contains a target that is situated near the image's center. As shown in Figure 10, the CA-CFAR detector effectively detects targets in samples that previously failed to detect targets after RSEGAN enhancement. The CA-CFAR detector can still detect targets correctly in test samples with SNR ranging from −70 dB to −66 dB. However, if the SNR of the test sample before coherent integration is below −70 dB, the CA-CFAR detector will likely detect false targets. This situation arises because RSEGAN provides a lower SNR gain of less than 8 dB, and the SNR after RSEGAN enhancement is less than 16 dB. These results are consistent with those presented in Figure 9, where RSEGAN's enhancement performance becomes unstable when the test sample's SNR is lower than the training sample's SNR.

4.4. Analysis of Detection Probability and False Alarm Probability

The detection probability and false alarm probability are crucial indicators in radar target detection. Figure 11 and Figure 12 demonstrate the changes in detection probability and false alarm probability with the detection threshold of the CA-CFAR detector for both the initial signal and the enhanced signal. As depicted in Figure 11a–e, the detection probability decreases with an increase in the detection threshold. Specifically, the detection probability for the initial signal decreases rapidly when the threshold is higher than 11 dB. Additionally, the detection probability of the initial signal reduces to 0 if the threshold surpasses 15 dB, whereas the detection probability of the enhanced signal approaches 1 when the threshold is in the range of 15 dB to 18 dB. Conversely, in Figure 11f–j, the detection probability decreases more rapidly with the increase in the detection threshold, indicating that it is challenging for the CA-CFAR to accurately detect the target at the cost of low false alarm rate when RSEGAN provides low SNR gain. Figure 12 illustrates that a lower threshold leads to a higher false alarm probability. When the threshold is set higher than 14 dB, the false alarm probability of the CA-CFAR detector is minimal. Thus, the CA-CFAR detector can obtain reliable detection performance for the enhanced signal when the threshold is set between 14 dB and 18 dB. However, in Figure 12f–j, the CA-CFAR detector requires the threshold to be set higher than 15 dB to ensure that the detection probability is significant. Furthermore, the detection probability of the samples with SNR lower than −72 dB is less than 60%, and the CA-CFAR detector can hardly detect the target.

4.5. Enhancement Results in the Real Scenario

The proposed method is evaluated using measured data in a real scenario, with a civil airplane on the route from Beijing to Taiyuan as the target, as shown in Figure 13. The radar observation station is located in Shijiazhuang, and the Ku band multicarrier signal from the Asia-Pacific 5 satellite is used as the direct-path signal. After frequency down-conversion and sampling, the received signal is converted to an intermediate-frequency signal. The sampling bandwidth and center frequency are 800 MHz and 1200 MHz, respectively. Sub-band signals with center frequencies of 1265 MHz and 1305 MHz and bandwidths of 37 MHz are selected as training and testing samples, respectively. The structure of RSEGAN used in this section is the same as that used in the simulation experiment. Figure 14 compares a test sample before and after RSEGAN enhancement, along with their CA-CFAR detection results. The initial signal fails to exceed the CA-CFAR detection line, but the enhanced signal succeeds. In Figure 15, the variation of detection probability and false alarm probability with the threshold in the real scenario is shown. For the initial signal, it is difficult to achieve low false alarm probability and high detection probability simultaneously, even with changes in the threshold. When the detection probability is higher than 80%, the corresponding false alarm probability is also higher than 15%. Additionally, when the false alarm probability is less than 5%, the detection probability drops to less than 60%, indicating low reliability of target detection results based on the initial signal. However, for the enhanced signal, when the detection threshold is between 15 dB and 15.6 dB, the detection probability exceeds 98%, while the corresponding probability of false alarm is lower than 0.1%. This result indicates that the proposed method can effectively adapt to the complex electromagnetic environment and achieve high detection probability and low false alarm probability simultaneously.

5. Conclusions

In this paper, we propose a method for enhancing PBR signals, called RSEGAN, which improves the signal-to-noise ratio (SNR) of PBR signals and thereby eases target detection. Using an adversarial learning structure, low-SNR signals are mapped end-to-end to high-SNR signals. Furthermore, we propose a hybrid loss function that combines the advantages of WGAN-GP, CGAN, L 1 -norm regularization, and L 2 -norm regularization to train RSEGAN. Our experimental results demonstrate that after RSEGAN enhancement, the CA-CFAR detector achieves a high detection probability and a low false alarm probability.
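The hybrid loss can be sketched in NumPy as below. This is an illustrative reconstruction, not the paper's exact formulation: the adversarial term follows the standard WGAN generator objective, the L 1 and L 2 weights are taken from Table 1, and the gradient penalty (applied to the discriminator in WGAN-GP) is shown separately.

```python
import numpy as np

def generator_loss(d_fake_scores, enhanced, clean,
                   lambda_l1=22.5, lambda_l2=127.5):
    """Hybrid generator objective: WGAN adversarial term plus L1 and L2
    distances between the enhanced signal and the clean reference.

    Sketch under assumptions; weights follow Table 1, but the paper's
    exact formulation may differ.
    """
    adv = -np.mean(d_fake_scores)             # WGAN generator term
    l1 = np.mean(np.abs(enhanced - clean))    # L1 regularization
    l2 = np.mean((enhanced - clean) ** 2)     # L2 regularization
    return adv + lambda_l1 * l1 + lambda_l2 * l2

def gradient_penalty(grad_norms, lambda_gp=10.0):
    """WGAN-GP penalty pushing critic gradient norms toward 1
    (lambda_GP = 10 as in Table 1)."""
    return lambda_gp * np.mean((grad_norms - 1.0) ** 2)
```

The L1 term encourages sparse, peak-preserving residuals while the heavier L2 term penalizes large deviations from the clean signal, complementing the adversarial term that pulls enhanced signals toward the clean-signal distribution.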

Author Contributions

J.C. and F.Z. designed the algorithm and the experiments; J.C. and L.W. performed the experiments; J.C. and L.W. wrote the paper; J.C. and C.W. analyzed the data; F.Z. corrected technical errors in the paper and provided extensive advice. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Fundamental Research Funds for the Central Universities under Grants XJS220211, XJS200204, XJS210210 and NSIY031403; by the Natural Science Foundation of China under Grants 62101412, 61201418, 62001350, 61801347, 61801344, 61522114, 61471284, 61571349, 61631019, 61871459, 61801390 and 11871392; by the China Postdoctoral Science Foundation under Grants 2020M673346, 2017M613076 and 2016M602775; by the NSAF under Grant U1430123; by the Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2018JM6051; by the Aeronautical Science Foundation of China under Grant 20181081003; by the Science, Technology and Innovation Commission of Shenzhen Municipality under Grant JCYJ20170306154716846; and by the Innovation Fund of Xidian University.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Klintberg, J.; Mckelvey, T.; Dammert, P. A Parametric Approach to Space-Time Adaptive Processing in Bistatic Radar Systems. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 1149–1160.
2. Liu, J.; Li, H.-B.; Himed, B. On the performance of the cross-correlation detector for passive radar applications. Signal Process. 2015, 113, 32–37.
3. Colone, F.; Bongioanni, C.; Lombardo, P. Multifrequency Integration in FM Radio-Based Passive Bistatic Radar. Part I: Target detection. IEEE Aerosp. Electron. Syst. Mag. 2013, 28, 28–39.
4. Zaimbashi, A.; Derakhtian, M.; Sheikhi, A. Invariant Target Detection in Multiband FM-Based Passive Bistatic Radar. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 720–736.
5. Griffiths, H.-D.; Long, N.-R.-W. Television Based Bistatic Radar. IEE Proc. F Commun. Radar Signal Process. 1986, 133, 649–657.
6. Jin, W.; Lu, X.D.; Xiang, M.-S. Research on Performance Influence of Direct-Path signal for DVB-S Based Passive Radar. J. Electron. 2013, 30, 111–117.
7. Chetty, K.; Smith, G.-E.; Woodbridge, K. Through-the-Wall Sensing of Personnel Using Passive Bistatic WiFi Radar at Standoff Distances. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1218–1226.
8. Colone, F.; Falcone, P.; Bongioanni, C.; Long, P.-L. WiFi-Based Passive Bistatic Radar: Data Processing Schemes and Experimental Results. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 1061–1079.
9. Honda, J.; Otsuyama, T. Feasibility Study on Aircraft Positioning by Using ISDB-T Signal Delay. IEEE Antennas Wirel. Propag. Lett. 2016, 15, 1787–1790.
10. Clemente, C.; Soraghan, J.J. GNSS-Based Passive Bistatic Radar for Micro-Doppler Analysis of Helicopter Rotor Blades. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 491–500.
11. Pastina, D.; Santi, F.; Pieralice, F.; Bucciarelli, M.; Ma, H.; Tzagkas, D.; Antoniou, M.N.; Cherniakov, M. Maritime Moving Target Long Time Integration for GNSS-Based Passive Bistatic Radar. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 3060–3083.
12. Wang, Y.; Bao, Q.; Wang, D.; Chen, Z. An Experimental Study of Passive Bistatic Radar Using Uncooperative Radar as a Transmitter. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1868–1872.
13. O’Hagan, D.W.; Capria, A.; Kubica, V. Passive Bistatic Radar (PBR) for Harbor Protection Applications. In Proceedings of the IEEE Radar Conference, Atlanta, GA, USA, 7–11 May 2012; pp. 446–450.
14. Chen, G.; Wang, J. Target Detection Method in Passive Bistatic Radar. J. Syst. Eng. Electron. 2020, 31, 510–519.
15. Kulpa, K.S.; Czekala, N. Masking Effect and Its Removal in PCL Radar. IEE Proc.-Radar Sonar Navig. 2005, 152, 174–178.
16. Yang, Y.; Su, H.; Hu, Q.; Zhou, S.; Huang, J. Centralized Adaptive CFAR Detection With Registration Errors in Multistatic Radar. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 2370–2382.
17. Abbadi, A.; Bouhedjeur, H.; Bellabas, A.; Menni, T.; Soltani, F. Generalized Closed-Form Expressions for CFAR Detection in Heterogeneous Environment. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1011–1015.
18. Tan, D.K.P.; Lesturgie, M.; Sun, H.; Lu, Y. Space Time Interference Analysis and Suppression for Airborne Passive Radar Using Transmissions of Opportunity. IET Radar Sonar Navig. 2014, 8, 142–152.
19. Zhu, X.L.; Zhang, X.D. Adaptive RLS Algorithm for Blind Source Separation Using a Natural Gradient. IEEE Signal Process. Lett. 2002, 9, 432–435.
20. Ma, Y.; Shan, T.; Zhang, Y.D.; Amin, M.G.; Tao, R.; Feng, Y. A Novel Two-Dimensional Sparse-Weight NLMS Filtering Scheme for Passive Bistatic Radar. IEEE Geosci. Remote Sens. Lett. 2016, 13, 676–680.
21. Colone, F.; O’Hagan, D.W.; Lombardo, P.; Baker, C.J. A Multistage Processing Algorithm for Disturbance Removal and Target Detection in Passive Bistatic Radar. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 698–722.
22. Colone, F.; Palmarini, C.; Martelli, T.; Tilli, E. Sliding Extensive Cancellation Algorithm for Disturbance Removal in Passive Radar. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 1309–1326.
23. Cardillo, E.; Li, C.; Caddemi, A. Vital Sign Detection and Radar Self-Motion Cancellation through Clutter Identification. IEEE Trans. Microw. Theory Tech. 2021, 69, 1932–1942.
24. Wang, H.; Wang, J.; Zhong, L. Mismatched Filter for Analogue TV Based Passive Bistatic Radar. IET Radar Sonar Navig. 2011, 5, 573–581.
25. Miranda, V.; Krstulovic, J.; Keko, H.; Moreira, C.; Pereira, J. Reconstructing Missing Data in State Estimation with Autoencoders. IEEE Trans. Power Syst. 2012, 27, 604–611.
26. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661.
27. Wang, L.; Yang, X.; Tan, H.; Bai, X.; Zhou, F. Few-Shot Class-Incremental SAR Target Recognition Based on Hierarchical Embedding and Incremental Evolutionary Network. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–11.
28. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976.
29. Pascual, S.; Bonafonte, A.; Serra, J. SEGAN: Speech Enhancement Generative Adversarial Network. Proc. Interspeech 2017, 2017, 174–178.
30. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative Adversarial Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063.
31. Wang, H.; Hu, T.; Xu, Z.; Wu, D.; Dai, H.; Lv, P.; Zhang, Y.; Wang, Z. A radar waveform recognition method based on ambiguity function generative adversarial network data enhancement under the condition of small samples. IET Radar Sonar Navig. 2023, 17, 86–98.
32. Rahman, M.; Gurbuz, S. Physics-Aware Generative Adversarial Networks for Radar-Based Human Activity Recognition. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 2994–3008.
33. Truong, T.; Yanushkevich, S. Generative adversarial network for radar signal synthesis. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–7.
34. Yue, Y.; Liu, H.; Meng, X.; Li, Y.; Du, Y. Generation of High-precision Ground Penetrating Radar Images Using Improved Least Square Generative Adversarial Networks. Remote Sens. 2021, 13, 4590.
35. Chen, C.; Su, Y.; He, Z.; Liu, T.; Song, X. Clutter mitigation in holographic subsurface radar imaging using generative adversarial network with attentive subspace projection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5116214.
36. Chen, S.; Shangguan, W.; Taghia, J. Automotive Radar interference mitigation based on a generative adversarial network. In Proceedings of the 2020 IEEE Asia-Pacific Microwave Conference (APMC), Hong Kong, China, 10–13 November 2020; pp. 728–730.
37. Wang, S.; An, Q.; Li, S.; Zhao, G.; Sun, H. Wiring effects mitigation for through-wall human motion micro-Doppler signatures using a generative adversarial network. IEEE Sens. J. 2021, 21, 10007–10016.
38. Zheng, Y.; Su, J.; Zhang, S.; Tao, M.; Wang, L. Dehaze-AGGAN: Unpaired Remote Sensing Image Dehazing Using Enhanced Attention-Guide Generative Adversarial Networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5630413.
39. Wu, Z.; Zhao, Z.; Ma, P.; Huang, B. Real-world DEM super-resolution based on generative adversarial networks for improving InSAR topographic phase simulation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8373–8385.
40. Du, C.; Zhang, L. Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network. Remote Sens. 2021, 13, 4358.
41. Zheng, C.; Jiang, X.; Liu, X. Semi-supervised SAR ATR via multi-discriminator generative adversarial network. IEEE Sens. J. 2019, 19, 7525–7533.
42. Ren, Z.; Hou, B.; Wu, Q.; Wen, Z. A distribution and structure match generative adversarial network for SAR image classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3864–3880.
43. Wang, H.; Li, K.; Lu, X.; Zhang, Q.; Luo, Y.; Kang, L. ISAR Resolution Enhancement Method Exploiting Generative Adversarial Network. Remote Sens. 2022, 14, 1291.
44. Qin, D.; Gao, X.Z. Enhancing ISAR Resolution by a Generative Adversarial Network. IEEE Geosci. Remote Sens. Lett. 2021, 18, 127–131.
45. Jiang, W.; Li, Y.; Tian, Z. LPI Radar Signal Enhancement Based on Generative Adversarial Networks under Small Samples. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications, Chengdu, China, 11–14 December 2020; pp. 1171–1175.
46. Wang, C.; Wang, P.; Wang, P.; Xue, B.; Wang, D. Using Conditional Generative Adversarial 3-D Convolutional Neural Network for Precise Radar Extrapolation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5735–5749.
47. Greco, A.; Bandiera, F.; Maio, D.; Ricci, G. Adaptive Radar Detection of Distributed Targets in Partially-Homogeneous Noise Plus Subspace Interference. In Proceedings of the IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, Toulouse, France, 14–19 May 2006.
48. Zaimbashi, A. Target Detection in Analog Terrestrial TV-Based Passive Radar Sensor: Joint Delay-Doppler Estimation. IEEE Sens. J. 2017, 17, 5569–5580.
49. Gogineni, S.; Setlur, P.; Rangaswamy, M.; Nadakuditi, R. Passive Radar Detection with Noisy Reference Channel Using Principal Subspace Similarity. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 18–36.
50. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875.
51. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved Training of Wasserstein GANs. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5769–5779.
52. Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784.
53. Merkle, N.; Auer, S.; Müller, R.; Reinartz, P. Exploring the Potential of Conditional Adversarial Networks for Optical and SAR Image Matching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1811–1820.
54. Ahmed, I.; Zabit, U.; Salman, A. Self-Mixing Interferometric Signal Enhancement Using Generative Adversarial Network for Laser Metric Sensing Applications. IEEE Access 2019, 7, 174641–174650.
55. Garcia, F.D.A.; Rodriguez, A.; Fraidenraich, G.; Filho, J.C.S.S. CA-CFAR Detection Performance in Homogeneous Weibull Clutter. IEEE Geosci. Remote Sens. Lett. 2019, 16, 887–891.
Figure 1. A simple passive bistatic radar configuration.
Figure 2. Framework of the proposed method.
Figure 3. Detailed structure of the generator G.
Figure 4. Detailed structure of the discriminator D.
Figure 5. Illustration of the CA-CFAR detector.
Figure 6. A sample of the direct-path signal.
Figure 7. Process of generating the training and test data.
Figure 8. Samples of the initial signals and the corresponding enhanced signals. (a) −60 dB; (b) −61 dB; (c) −62 dB; (d) −63 dB; (e) −64 dB; (f) −66 dB; (g) −68 dB; (h) −70 dB; (i) −72 dB; (j) −74 dB.
Figure 9. SNR of the initial signals and enhanced signals. (a) −60 dB; (b) −61 dB; (c) −62 dB; (d) −63 dB; (e) −64 dB; (f) −66 dB; (g) −68 dB; (h) −70 dB; (i) −72 dB; (j) −74 dB.
Figure 10. CA-CFAR detection samples of the initial and enhanced signals. (a) −60 dB initial signal; (b) −60 dB enhanced signal; (c) −61 dB initial signal; (d) −61 dB enhanced signal; (e) −62 dB initial signal; (f) −62 dB enhanced signal; (g) −63 dB initial signal; (h) −63 dB enhanced signal; (i) −64 dB initial signal; (j) −64 dB enhanced signal; (k) −66 dB initial signal; (l) −66 dB enhanced signal; (m) −68 dB initial signal; (n) −68 dB enhanced signal; (o) −70 dB initial signal; (p) −70 dB enhanced signal; (q) −72 dB initial signal; (r) −72 dB enhanced signal; (s) −74 dB initial signal; (t) −74 dB enhanced signal.
Figure 11. Change of detection probability with the threshold of CA-CFAR. (a) −60 dB; (b) −61 dB; (c) −62 dB; (d) −63 dB; (e) −64 dB; (f) −66 dB; (g) −68 dB; (h) −70 dB; (i) −72 dB; (j) −74 dB.
Figure 12. Change of false alarm probability with the threshold of CA-CFAR. (a) −60 dB; (b) −61 dB; (c) −62 dB; (d) −63 dB; (e) −64 dB; (f) −66 dB; (g) −68 dB; (h) −70 dB; (i) −72 dB; (j) −74 dB.
Figure 13. The flight path of the airplane.
Figure 14. Samples of the initial and enhanced signals. (a) Comparison. (b) CA-CFAR detection of the initial signal. (c) CA-CFAR detection of the enhanced signal.
Figure 15. Detection and false alarm probability in the real scenario. (a) Initial signal. (b) Enhanced signal.
Table 1. Hyperparameters of training RSEGAN.

Parameter     Value
η0            0.0003
α             0.9
Batch size    256
λ_L1          22.5
λ_L2          127.5
λ_GP          10
Epochs        100

Table 2. Parameters of the direct-path signal in the simulation scenario.

Parameter                            Value
Center Frequency/GHz                 12.5
Bandwidth of Receiver/MHz            30
Baseband Frequency/MHz               28
Sampling Frequency/MHz               187.5
Latitude and Longitude of Receiver   (116° E, 39.54° N)
Coordinate of Nadir Point            (110.5° E, 0)

Table 3. Locations of the transmitter, receiver, and target in geocentric coordinates.

Coordinate    x/m            y/m            z/m
Transmitter   1.476 × 10^7   3.948 × 10^7   0
Target        2.137 × 10^6   4.546 × 10^6   3.933 × 10^6
Receiver      2.067 × 10^6   4.570 × 10^6   3.927 × 10^6

Table 4. Training and test set for the simulation scenario.

Training: SNR/dB −60, −61, −62, −63, −64; No. of samples per SNR: 768.
Test: SNR/dB −60, −61, −62, −63, −64, −66, −68, −70, −72, −74; No. of samples per SNR: 255.

Table 5. Mean SNR of the signals before/after coherent integration, enhanced by RSEGAN, and improved by RSEGAN.

SNR Before Coherent Integration/dB    −60     −61     −62     −63     −64     −66     −68     −70     −72     −74
SNR After Coherent Integration/dB     12.79   12.31   11.67   11.21   10.54   9.80    8.53    7.64    7.28    6.91
SNR after Enhancement by RSEGAN/dB    19.99   19.97   19.91   19.88   19.81   20.71   19.73   17.36   15.19   12.73
SNR Improved by RSEGAN/dB             7.20    7.66    8.24    8.67    9.27    10.91   11.20   9.72    7.91    5.82
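As a quick arithmetic check (values transcribed from Table 5), the "SNR Improved" row is simply the difference between the RSEGAN-enhanced and coherently integrated rows, and every entry exceeds the 5 dB gain claimed in the abstract:

```python
# SNR values transcribed from Table 5, in dB.
after_ci = [12.79, 12.31, 11.67, 11.21, 10.54, 9.80, 8.53, 7.64, 7.28, 6.91]
after_gan = [19.99, 19.97, 19.91, 19.88, 19.81, 20.71, 19.73, 17.36, 15.19, 12.73]

# Per-SNR gain of RSEGAN over coherent integration alone.
gain = [round(g - c, 2) for g, c in zip(after_gan, after_ci)]

# Every SNR level gains more than 5 dB, consistent with the abstract.
assert all(g > 5.0 for g in gain)
```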
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Che, J.; Wang, L.; Wang, C.; Zhou, F. A Novel Adversarial Learning Framework for Passive Bistatic Radar Signal Enhancement. Electronics 2023, 12, 3072. https://doi.org/10.3390/electronics12143072

