Article

Globally Conditioned Conditional FLOW (GCC-FLOW) for Sea Clutter Data Augmentation

Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(15), 6538; https://doi.org/10.3390/app14156538
Submission received: 19 June 2024 / Revised: 22 July 2024 / Accepted: 23 July 2024 / Published: 26 July 2024

Abstract

In this paper, a novel deep learning approach based on conditional FLOW is proposed for sea clutter augmentation. Sea clutter data augmentation is important for testing detection algorithms for maritime remote sensing and surveillance because sea clutter data acquisition is expensive and time-consuming. Whereas conventional parametric methods face challenges in finding appropriate distributions and modeling the time correlation of sea clutter data, the proposed deep learning approach, GCC-FLOW, can learn the data distribution from the data without explicitly defining a mathematical model. Furthermore, unlike existing generative deep learning approaches, the proposed GCC-FLOW is able to synthesize sea clutter data of arbitrary length through a stable autoregressive structure based on conditional FLOW. In addition, by introducing a global condition variable corresponding to a target characteristic such as sea state, the proposed algorithm generates sea clutter data not only with the same characteristics as the training data but also with characteristics interpolated between different training data. Experimental results demonstrate the effectiveness of the proposed GCC-FLOW in generating sea clutter data of arbitrary length under different sea state conditions.

1. Introduction

Sea clutter, or backscatter from the ocean surface, makes it challenging for radars to detect and classify objects at sea; it is therefore important for radars to handle sea clutter in various sea conditions for accurate detection and performance evaluation [1].
Ideally, accurate modeling of the statistical properties of sea clutter allows for the correct determination of a target detection threshold as a function of the probability of detection and the probability of false alarm. In reality, however, sea clutter data are used to adaptively optimize the threshold. In particular, advanced clutter mitigation algorithms such as space–time adaptive processing [2] require rich data to optimize algorithm parameters in supervised mode. Therefore, it is critical to generate a variety of sea clutter data to test and optimize radar detection algorithms for different sea environments.
However, due to the expensive and time-consuming nature of acquiring the desired sea clutter data, intensive studies have been focused on mathematical modeling of sea clutter data based on the random process model to generate abundant sea clutter data in a short time [3,4,5,6,7,8,9].
Several distributions have been considered for modeling sea clutter amplitude. Early models adopted the Rayleigh distribution to characterize sea clutter [10], but measurements have shown deviations from this model, and alternatives have emerged, such as the log-normal [11], K- [12], and Weibull [13] distributions. Among them, the K-distribution is the most widely used due to its accuracy [14,15], and its modified versions, the KA [16] and KK [17] distributions, have been proposed to improve accuracy in the tail of the distribution at low grazing angles [18]. To reduce the model complexity of the KA and KK distributions, the relatively simple Pareto distribution [19,20] has been considered to fit both high and low grazing angles. In addition, the memoryless nonlinear transform (MNLT) method [21], which allows the generation of samples with a specific correlation function, has been used to model the spatiotemporal correlation. Although these parametric methods based on mathematical modeling have succeeded in finding appropriate distributions to fit sea clutter data under ideal assumptions, they remain insufficient to model the full range of geometries and sea conditions in practice, especially the time correlation. In addition, it is difficult to determine the model parameters properly, because the amount of measured data available to fit them in a given sea state is always limited due to the time-varying nature of the sea state [22].
Recently, the use of deep learning for data augmentation has become widespread in many fields. The advantages of deep learning approaches to data augmentation are that data with approximately the same statistical properties as the sample data, such as distribution and correlation, can be generated without explicit mathematical models, and several generative techniques can be used to generate unlearned data with interpolated features [23,24]. There have been a few attempts to generate sea clutter data using deep learning [25,26]. In [25], a generative adversarial network (GAN) with two generators that independently generate the real and imaginary components of sea clutter data is proposed, and in [26], an auxiliary classifier variational autoencoder GAN (AC-VAEGAN) is proposed to generate one-dimensional (1-D) sea–land clutter amplitude data to deal with imbalanced datasets in classification tasks.
These existing methods are based on one-dimensional data generation, which makes multi-dimensional structures such as the real–imaginary dependency difficult to capture. More importantly, since a GAN generates data independently block by block, the length of the coherently generated data is limited to the length of the training data. Meanwhile, non-blockwise generative algorithms based on RNNs and their variants tend to become inaccurate as the length of the sample sequence increases [27,28]. Therefore, it is desirable to design a deep generative algorithm capable of generating arbitrarily long multidimensional data from relatively short training data.
This paper proposes a novel sea clutter data augmentation scheme based on conditional FLOW [29], a variant of FLOW [30], one of the most robust deep generative methods. The proposed algorithm, named globally conditioned conditional FLOW (GCC-FLOW), is a modified version of conditional FLOW that introduces a global condition variable to reflect the sea state. Used autoregressively, GCC-FLOW can generate arbitrarily long sea clutter sequences with approximately the same statistical characteristics as the given sea clutter data. Furthermore, by varying the global condition variable of GCC-FLOW across different sea states, the proposed method can generate diverse sea clutter data with interpolated sea state features.
The rest of this paper is organized as follows: In Section 2, the architecture of the proposed GCC-FLOW is described in detail. In Section 3, the training sea clutter data and the hyperparameter settings for the GCC-FLOW to generate sea clutter data are described. In Section 4, the experimental results are presented in terms of ensemble distribution and time correlations of the generated sea clutter compared to those of the training data. Finally, a conclusion is drawn in Section 5.

2. Methods

2.1. Autoregressive Data Generation Using GCC-FLOW

We assume that sea clutter generation is a stationary random process, as assumed in most of the literature [3,4,6,7,31]. Existing GAN-based generative methods sample a sea clutter sequence $x = [x_1, \ldots, x_L]$ following the joint distribution $p^*(x)$ learned from the training data. In order to increase $L$, the size of the training data must be increased. The concatenation of two GAN-generated sequences $x$ and $x'$ cannot be used as a sampled sequence of the random process of length $2L$, since $x$ and $x'$ are independent due to the block-by-block generation of the GAN.
Figure 1 illustrates the autoregressive use of the proposed globally conditioned conditional FLOW (GCC-FLOW) to generate long data sequences. GCC-FLOW generates a sequence $x_o = [x_o^1, \ldots, x_o^L]$ given a condition sequence $x_c = [x_c^1, \ldots, x_c^L]$, following a conditional distribution $p^*(x_o \mid x_c; \alpha)$ parameterized by a global condition variable $\alpha \in [0, 1]$. The conditional pdf $p^*(x_o \mid x_c; \alpha)$ is learned by GCC-FLOW to approximate the conditional distribution of the successive data $[x_{L+1+k}, \ldots, x_{2L+k}]$ given $[x_{1+k}, \ldots, x_{L+k}]$, which is the same for any $k$ under the stationarity assumption. Given the initial sequence $x_c^{(0)}$ from the data, $x_o^{(0)}$ is generated by GCC-FLOW, and $x_o^{(k)}$ is used as the condition sequence for $x_o^{(k+1)}$ for $k = 0, 1, \ldots$. The joint distribution of the concatenated sample sequence $x = [x_o^{(0)}, x_o^{(1)}, \ldots, x_o^{(T)}]$ satisfies
$p(x_o^{(0)}, x_o^{(1)}, \ldots, x_o^{(T)}) = \prod_{k=0}^{T} p(x_o^{(k)} \mid x_o^{(k-1)}, \ldots, x_o^{(0)}) \approx \prod_{k=0}^{T} p(x_o^{(k)} \mid x_o^{(k-1)}),$
assuming the block Markov property,
$p(x_{N+L+1}, \ldots, x_{N+2L} \mid x_1, \ldots, x_{N+L}) \approx p(x_{N+L+1}, \ldots, x_{N+2L} \mid x_{N+1}, \ldots, x_{N+L}).$
Since each conditional distribution $p(x_o^{(k)} \mid x_o^{(k-1)})$ is approximated by the $p^*(x_o^{(k)} \mid x_o^{(k-1)}; \alpha)$ learned by GCC-FLOW, the joint distribution of $x = [x_o^{(0)}, x_o^{(1)}, \ldots, x_o^{(T)}]$ generated autoregressively by GCC-FLOW approximates the true data distribution, as desired.
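The autoregressive generation described above can be sketched as follows. `sample_fn` is a hypothetical stand-in for the trained GCC-FLOW sampler $p^*(x_o \mid x_c; \alpha)$; a toy AR(1)-style sampler is substituted here so that the sketch runs end to end:

```python
import numpy as np

def generate_autoregressive(sample_fn, x_init, num_blocks, alpha=0.0):
    """Autoregressive generation sketch: each new block of length L is
    sampled conditioned on the previously generated block, and the
    blocks are concatenated into one long sequence."""
    blocks = []
    cond = x_init                       # x_c^(0): seed block taken from real data
    for _ in range(num_blocks):
        x_new = sample_fn(cond, alpha)  # draw x_o^(k) ~ p*(x_o | x_c; alpha)
        blocks.append(x_new)
        cond = x_new                    # generated block conditions the next one
    return np.concatenate(blocks)

# Toy stand-in for the trained model: an AR(1)-style sampler whose new
# block continues from the last sample of the conditioning block, so
# that consecutive blocks remain correlated (unlike block-by-block GANs).
rng = np.random.default_rng(0)
def toy_sampler(cond, alpha):
    out = np.empty_like(cond)
    prev = cond[-1]
    for i in range(len(out)):
        prev = 0.9 * prev + 0.1 * rng.standard_normal()
        out[i] = prev
    return out

seq = generate_autoregressive(toy_sampler, np.zeros(512), num_blocks=4)
print(seq.shape)  # (2048,)
```

The key point of the design is that the conditioning keeps the correlation structure continuous across block boundaries, which block-independent generation cannot do.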

2.2. Architecture of GCC-FLOW

The structure of GCC-FLOW is similar to that of conditional FLOW [29], except that the global condition variable $\alpha$ is fed into the coupling layer in each FLOW step, as shown in Figure 2. GCC-FLOW consists of a pair of FLOW networks, the condition part and the output part, built from two structurally identical but independently parameterized deep networks that are used repeatedly in the multiscale frame: $f_{\phi_{a,n,m}}$ and $g_{\psi_{a,n,m}}$, parameterized by the network weights $\phi_{a,n,m}$ and $\psi_{a,n,m}$, respectively, where $m = 1, \ldots, M$ indexes the multiscale stages, $n = 1, \ldots, N$ indexes the flow steps within a stage, and $a = c$ for the condition part and $a = o$ for the output part.
In each flow step, $x_c$ and $x_o$ are updated as follows:
$\bar{x}_c = f_{\phi_{c,n,m}}(x_c^n)$
$\bar{x}_o = f_{\phi_{o,n,m}}(x_o^n)$
$(s_c, b_c) = g_{\psi_{c,n,m}}(\bar{x}_c^{up}, \alpha)$
$(s_o, b_o) = g_{\psi_{o,n,m}}(\bar{x}_o^{up}, \bar{x}_c^{up}, \alpha)$
$x_c^{n+1} = [\bar{x}_c^{up},\ s_c \odot \bar{x}_c^{down} + b_c]$
$x_o^{n+1} = [\bar{x}_o^{up},\ s_o \odot \bar{x}_o^{down} + b_o]$
where $\odot$ denotes element-wise multiplication, and $(\bar{x}_c^{up}, \bar{x}_c^{down})$ and $(\bar{x}_o^{up}, \bar{x}_o^{down})$ are the results of halving the channels of $\bar{x}_c$ and $\bar{x}_o$, respectively.
The structure of the coupling layer function $g_c$ and the conditional coupling layer function $g_o$ is described in Figure 3. The input tensors $\bar{x}_c^{up}$ and $\bar{x}_o^{up}$, both with channel size $c$, are converted by convolution networks into tensors with channel size $C_{coupling}$, denoted by $y_c$, $y_{co}$, and $y_o$. The global variable $\alpha \in [0, 1]$ is multiplied by a learned linear weight $v_a$ with a single channel ($a \in \{c, o\}$). The resulting tensors $v_c\alpha$ and $v_o\alpha$ are properly scaled and added channel-wise via actnorm. For example, in the coupling layer part,
$w_c^i = (a_i y_c^i + b_i) + (a_0 v_c\alpha + b_0), \quad i = 1, \ldots, C_{coupling},$
where $w_c$ denotes the intermediate output tensor with channel size $C_{coupling}$, $w_c^i$ is the $i$th channel vector of $w_c$, $\{a_i, b_i \mid i = 1, \ldots, C_{coupling}\}$ are the actnorm parameters for $y_c$, and $a_0, b_0$ are the actnorm parameters for $v_c\alpha$. Finally, $w_c$ is processed through a convolution network that restores the original channel size $c$, and the result is split in half channel-wise to produce $s_c$ and $b_c$.
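Assuming the update above is a standard affine coupling step in the FLOW/Glow sense, its invertibility can be sketched as follows; `toy_g` is an illustrative stand-in for the convolutional coupling networks, not the authors' architecture:

```python
import numpy as np

def affine_coupling_forward(x, scale_bias_fn):
    """One affine coupling step (channels-first: x has shape (C, L)).
    The upper channel half passes through unchanged and parameterizes an
    affine map applied to the lower half, which keeps the Jacobian
    triangular and the step exactly invertible."""
    C = x.shape[0]
    x_up, x_down = x[:C // 2], x[C // 2:]
    s, b = scale_bias_fn(x_up)          # (s, b) = g(x_up, ...)
    y_down = s * x_down + b             # element-wise affine transform
    return np.concatenate([x_up, y_down], axis=0)

def affine_coupling_inverse(y, scale_bias_fn):
    C = y.shape[0]
    y_up, y_down = y[:C // 2], y[C // 2:]
    s, b = scale_bias_fn(y_up)          # recompute (s, b) from the untouched half
    x_down = (y_down - b) / s           # exact inverse of the affine map
    return np.concatenate([y_up, x_down], axis=0)

# Toy scale/bias network: any function of the upper half works, since
# invertibility never requires inverting scale_bias_fn itself.
def toy_g(x_up):
    s = 1.0 + 0.5 * np.tanh(x_up)       # keep scales strictly positive
    b = 0.1 * x_up
    return s, b

x = np.random.default_rng(1).standard_normal((4, 8))
y = affine_coupling_forward(x, toy_g)
x_rec = affine_coupling_inverse(y, toy_g)
print(np.allclose(x, x_rec))  # True
```

This is why the coupling networks $g$ can be arbitrarily complex without affecting invertibility: the inverse only re-evaluates $g$ on the unchanged half.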
At each multiscale stage, the output $x_a^{mN}$ is split in half channel-wise: the first half becomes the $m$th stage output, $z_a^m$, and the remaining half is squeezed, i.e., its length is halved and its channel size doubled, and fed into the next stage. Note that, in this split–squeeze setting, the length of $z_a^m$ at the $m$th multiscale stage becomes $2^{-(m-1)}L$, where $L$ is the input length of GCC-FLOW.
In the training phase, the networks are trained so that each $z_a^m$ is i.i.d. Gaussian, $z_a^m \sim \mathcal{N}(0, I_{2^{-(m-1)}L})$, and in the generation phase, samples $z_a^m$ drawn from the i.i.d. Gaussian $\mathcal{N}(0, I_{2^{-(m-1)}L})$ are used as inputs to the inverse network of GCC-FLOW, which is canonically given by the structure of FLOW [30].
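The split–squeeze bookkeeping can be sketched with a minimal NumPy illustration (not the authors' implementation); it reproduces the halving of the stage-output lengths for $L = 512$ and $M = 4$ stages:

```python
import numpy as np

def squeeze(x):
    """Halve the length and double the channels: (C, L) -> (2C, L/2)."""
    C, L = x.shape
    return x.reshape(C, L // 2, 2).transpose(0, 2, 1).reshape(2 * C, L // 2)

def split(x):
    """Channel-wise split: first half becomes the stage output z,
    the second half is carried on to the next stage."""
    C = x.shape[0]
    return x[:C // 2], x[C // 2:]

# Walk a (2, 512) tensor through M = 4 multiscale stages.
h = np.arange(2 * 512, dtype=float).reshape(2, 512)
z_lengths = []
for m in range(4):
    z, h = split(h)              # factor out half the channels as z^m
    z_lengths.append(z.shape[1])
    h = squeeze(h)               # length halves, channels double
print(z_lengths)  # [512, 256, 128, 64]
```

Each stage output keeps half the remaining dimensions, so the total dimensionality of all $z$ tensors plus the final carry equals that of the input, as required for an invertible map.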

3. Experiment Settings

3.1. Dataset

The sea clutter data used in the experiments come from the Dartmouth and Grimsby databases collected by the IPIX radar [32]. The following three datasets are used:
(1)
High sea state data: The 3rd range bin of data file #269 at Dartmouth labeled as high. The average wave height is 1.8 m (max 2.9 m);
(2)
Low sea state data: The 5th range bin of data file #287 at Dartmouth labeled as low. The average wave height is 0.8 m (max 1.3 m);
(3)
Unidentified sea state data: The 11th range bin of data file #155 at Grimsby.
All data samples are complex-valued, and the length of the high sea state data and the low sea state data is 131,072, whereas the unidentified sea state data length is 60,000. The detailed specification of the data, such as resolution and sampling rate, can be found in [32].

3.2. GCC-FLOW Settings

The length of $x_o$ and $x_c$ is set to 512, i.e., $L = 512$, and the channel size of the clutter data is set to 2, i.e., $C = 2$, since the data are complex-valued samples from a single range bin. The number of multiscale stages of GCC-FLOW is set to $M = 4$, and the number of flow steps in each multiscale stage is set to $N = 4$. The channel size in the coupling layer is $C_{coupling} = 32$. The model is trained with a learning rate of $3 \times 10^{-4}$ and a batch size of 4, on 512 randomly selected sequences of length 1024 with randomly sampled starting points.
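For reference, the hyperparameters above can be collected into a configuration sketch; the dictionary keys are illustrative names, not the authors' code:

```python
# Hyperparameter summary for GCC-FLOW as described in Section 3.2.
config = {
    "L": 512,                # block length of x_o and x_c
    "C": 2,                  # channels: real and imaginary parts
    "M": 4,                  # number of multiscale stages
    "N": 4,                  # flow steps per multiscale stage
    "C_coupling": 32,        # channel size in the coupling layers
    "learning_rate": 3e-4,
    "batch_size": 4,
    "num_train_sequences": 512,
    "train_sequence_length": 1024,   # one (x_c, x_o) pair: 2 * L samples
}

# Sanity checks on the derived quantities.
assert config["train_sequence_length"] == 2 * config["L"]
print(config["M"] * config["N"])  # 16 flow steps in total
```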

3.3. Complexity

Table 1 summarizes the complexity of GCC-FLOW in terms of the number of parameters, training time, and generation time. To avoid overfitting, the model is trained with an early-stop patience of 20, resulting in different training times depending on the dataset. The experiments were performed under identical GPU conditions on a single NVIDIA RTX 3090. The generation time is the time required to generate a sequence of length 512.

4. Experiment Results

4.1. Sea Clutter Augmentation Using GCC-FLOW

Figure 4 compares the magnitude histogram of the sea clutter data generated by the GCC-FLOW trained on the high sea state data and on the low sea state data, respectively, with the magnitude histograms of the original data. Data of length 131,072, the same as the training data, were generated for comparison: (a) compares data in the high sea state, and (b) in the low sea state. As illustrated in histograms (a) and (b), the histogram of the generated data closely matches that of the corresponding training data, indicating that the model successfully learned the distribution characteristics associated with the high and low sea states, respectively.
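A histogram comparison of this kind can be sketched as follows; the histogram-intersection score and the toy Rayleigh-like samples are illustrative assumptions, not the paper's evaluation protocol:

```python
import numpy as np

# Two toy magnitude datasets standing in for real and generated clutter:
# magnitudes of complex Gaussian samples, i.e., Rayleigh-distributed.
rng = np.random.default_rng(2)
real = np.abs(rng.standard_normal(131072) + 1j * rng.standard_normal(131072))
gen = np.abs(rng.standard_normal(131072) + 1j * rng.standard_normal(131072))

# Magnitude histograms on a shared set of bins, normalized to densities.
bins = np.histogram_bin_edges(np.concatenate([real, gen]), bins=100)
h_real, _ = np.histogram(real, bins=bins, density=True)
h_gen, _ = np.histogram(gen, bins=bins, density=True)

# Histogram intersection in [0, 1]; 1 means identical binned distributions.
width = np.diff(bins)
overlap = np.sum(np.minimum(h_real, h_gen) * width)
print(f"histogram overlap = {overlap:.3f}")
```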
It is crucial not only to evaluate the magnitude histogram but also to ascertain whether the autocorrelation characteristics are analogous. Figure 5 compares the magnitude autocorrelation of the sea clutter data generated by the GCC-FLOW trained on the high sea state data and on the low sea state data, respectively, with that of the original high sea state and low sea state data, where the magnitude autocorrelation is defined as
$\rho(s) = \dfrac{\sum_{l=1}^{L-s} (|x_l| - \bar{x})(|x_{l+s}| - \bar{x})}{\sum_{l=1}^{L} (|x_l| - \bar{x})^2},$
where $\bar{x} = \frac{1}{L}\sum_{l=1}^{L} |x_l|$, $s = 0, \ldots, L$, and $L$ is set to 512.
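A direct NumPy implementation of this magnitude autocorrelation (a sketch, with the 1-indexed formula mapped to 0-indexed arrays) is:

```python
import numpy as np

def magnitude_autocorrelation(x, max_lag=512):
    """Normalized autocorrelation of |x|:
    rho(s) = sum_{l=1}^{L-s} (|x_l|-m)(|x_{l+s}|-m) / sum_{l=1}^{L} (|x_l|-m)^2."""
    a = np.abs(np.asarray(x))
    m = a.mean()
    d = a - m                      # centered magnitudes
    denom = np.sum(d ** 2)
    return np.array([np.sum(d[:len(d) - s] * d[s:]) / denom
                     for s in range(max_lag + 1)])

# Toy complex sequence standing in for a range-bin clutter record.
rng = np.random.default_rng(3)
x = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
rho = magnitude_autocorrelation(x, max_lag=64)
print(rho[0])  # 1.0 at zero lag by construction
```

Note that normalizing by the full-length sum (rather than per-lag sums) matches the definition above and guarantees $|\rho(s)| \le 1$.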
In Figure 5, (a) is the autocorrelation of the high sea state and (b) is the autocorrelation of the low sea state. The original data pertaining to high sea states (a) are observed to exhibit autocorrelation with high frequency fluctuations, whereas the data associated with low sea states (b) demonstrate autocorrelation with low frequency fluctuations. A comparison of the autocorrelation of the generated data with that of the original data reveals that the generated data and the original data are essentially indistinguishable for each sea state (a) and (b), respectively.
The GCC-FLOW is trained using 256 blocks of 512 consecutive samples (131,072 training samples in total) with the global condition variable α set to 0. GCC-FLOW generates 262,144 consecutive samples, and the first 131,072 samples are used to plot the histogram and the autocorrelation function.
To confirm that the generated samples have the same statistical properties at each time point, 10,000 consecutive samples are taken from the beginning (1∼10,000), the middle (130,001∼140,000), and the end (252,145∼262,144). Figure 6 and Figure 7 show the magnitude histogram and the magnitude autocorrelation of the sea clutter data generated by GCC-FLOW trained on the high sea state data and the low sea state data, respectively, at the three time points.
In Figure 6, the magnitude histograms of the generated high sea state data are plotted for 10,000 samples at three different time points (beginning 1-(a), middle 1-(b), end 1-(c)), and the autocorrelations are presented for 512 samples at each time point, as illustrated in 2-(a), 2-(b), and 2-(c). The results for the low sea state are illustrated in Figure 7 in a manner analogous to Figure 6. Note that the characteristics of the high sea state data (concentrated distribution and high-frequency fluctuation in correlation) and of the low sea state data (spread distribution and low-frequency fluctuation in correlation) are preserved at all time points.
Figure 8 shows the performance of the proposed GCC-FLOW on the Grimsby #155 dataset, collected at a different geographic location from the Dartmouth dataset, in terms of histogram (1) and autocorrelation (2). A total of 262,144 data points were generated by the trained model. The three time points are designated as the beginning (1∼10,000), middle (130,001∼140,000), and end (252,145∼262,144) of the sequence. The histogram was plotted for 10,000 consecutive samples of the generated data at the three time points (beginning 1-(a), middle 1-(b), end 1-(c)), and the autocorrelation was plotted for 512 samples corresponding to each time point (beginning 2-(a), middle 2-(b), end 2-(c)). As illustrated in Figure 8, the proposed GCC-FLOW effectively learns the statistical characteristics of the Grimsby data, generating data that closely resemble the training data.
In Figure 9, the proposed GCC-FLOW is compared with the existing block-by-block deep generative approach [25]. Sub-Figure 1-(a) depicts the autocorrelation of the original high sea state data, 1-(b) illustrates the autocorrelation of data generated by the block-by-block GAN [25], and 1-(c) portrays the autocorrelation of data generated by the proposed GCC-FLOW. Whereas 1-(c) illustrates a similar autocorrelation pattern to 1-(a), the autocorrelation in 1-(b) is corrupted by the independent nature of the block-by-block generation (highlighted by the dashed box). Similarly, 2-(a), 2-(b), and 2-(c) plot the autocorrelation of the original low sea state data, the data generated by the block-by-block GAN [25], and the data generated by the proposed GCC-FLOW, respectively. It is notable that the same corruption in autocorrelation is observed in 2-(b).

4.2. Sea Clutter Generation Using GCC-FLOW with Global Condition

Using the global condition variable α , a single GCC-FLOW can be trained on two different datasets: high sea state data with α = 1 and low sea state data with α = 0 . Figure 10 and Figure 11 plot the magnitude histogram and magnitude autocorrelation of the generated sea clutter data using GCC-FLOW with α = 1 and α = 0 , respectively, at three different time points (beginning, middle, and end) from the 262 , 144 samples. The three time points are designated by the beginning (1∼10,000), middle (130,001∼140,000), and end (252,145∼262,144) of the sequence.
In Figure 10, the histograms of generated high sea state data were plotted for 10 , 000 samples at three different time points (beginning 1-(a), middle 1-(b), end 1-(c)), and the autocorrelations of generated high sea state data were plotted for 512 samples corresponding to each time point (beginning 2-(a), middle 2-(b), end 2-(c)). In Figure 11, the magnitude histograms and autocorrelations of low sea state were plotted in a similar fashion to Figure 10. The statistical characteristics, including distribution and autocorrelation, remain consistent across all time points, exhibiting a resemblance to those observed in the original data. Figure 10 and Figure 11 demonstrate that both high and low sea state data are successfully aligned with the corresponding global conditions in a single GCC-FLOW.
The benefit of this single-model training on two different datasets is feature interpolation: by adjusting α between 0 and 1, GCC-FLOW can generate data with statistical characteristics interpolated between the high and low sea states.
Figure 12 shows the magnitude histogram of the generated data for α = 0.2, 0.5, 0.8 at three different time points (beginning, middle, and end) of the 262,144 samples. The magnitude histograms were plotted for 10,000 samples at each time point (beginning (a), middle (b), end (c)). Figure 13 presents the autocorrelation of the generated data for α = 0.2, 0.5, 0.8 at the same three time points. The autocorrelations were plotted for 512 samples corresponding to each time point (beginning (a), middle (b), end (c)) and each value of α (0.2 (1), 0.5 (2), 0.8 (3)).
In both figures, as α increases from 0 to 1, the magnitude distribution and autocorrelation characteristics transform from those of the low sea state to those of the high sea state. As the value of α increases, the distribution becomes increasingly concentrated and the correlation increasingly fast-fluctuating at all time points. This suggests that, by using GCC-FLOW with a global condition variable tuned to the sea state, sea clutter for various sea states can be generated without dedicated training datasets for each sea state. As a result, data exhibiting consistent statistical characteristics at all time points can be generated as intended for a global sea state, not only for the high and low sea states but also for intermediate sea states through interpolation.

5. Conclusions

A novel method, globally conditioned conditional FLOW (GCC-FLOW), has been proposed for sea clutter data augmentation, motivated by the expensive and time-consuming nature of acquiring the desired sea clutter data. Experimental results show that GCC-FLOW successfully generates sequences of arbitrary length with the same statistical properties as the given sea clutter data, in contrast to the existing block-by-block methods. Moreover, by introducing a global condition variable, the proposed GCC-FLOW can generate sea clutter for different sea states through feature interpolation, without requiring training data with the target feature. Future studies will focus on interpolating not only the sea state but also other marine environmental conditions, such as wind speed and average wave height, through global condition variables.

Author Contributions

Software, S.L.; Writing—original draft, S.L.; Writing—review and editing, W.C.; Supervision, W.C.; Project administration, W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Agency for Defense Development of Korea, grant number UI220077JD.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available at http://soma.ece.mcmaster.ca/ipix/index.html (accessed on 25 April 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Keith, W.; Robert, T.; Simon, W. Sea clutter: Scattering, the K distribution and radar performance. Waves Random Complex Media 2007, 17, 233–234. [Google Scholar]
  2. Klemm, R. Introduction to space-time adaptive processing. In Proceedings of the IEE Colloquium on Space-Time Adaptive Processing (Ref. No. 1998/241), London, UK, 6 April 1998; pp. 1–11. [Google Scholar]
  3. Ernesto, C.; Maurizio, L. Characterisation of radar clutter as a spherically invariant random process. IEE Proc. F (Commun. Radar Signal Process.) 1987, 134, 191–197. [Google Scholar]
  4. Rangaswamy, M.; Weiner, D.; Ozturk, A. Computer generation of correlated non-Gaussian radar clutter. IEEE Trans. Aerosp. Electron. Syst. 1995, 31, 106–116. [Google Scholar] [CrossRef]
  5. Rangaswamy, M. Spherically invariant random processes for modeling non-Gaussian radar clutter. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; Volume 2, pp. 1106–1110. [Google Scholar]
  6. Barnard, T.; Weiner, D. Non-Gaussian clutter modeling with generalized spherically invariant random vectors. IEEE Trans. Signal Process. 1996, 44, 2384–2390. [Google Scholar] [CrossRef]
  7. Armstrong, B.; Griffiths, H. Modelling spatially correlated K-distributed clutter. Electron. Lett. 1991, 27, 1355–1356. [Google Scholar] [CrossRef]
  8. Li, X.; Xu, X. A Statistical Model for Correlated K-distributed Sea Clutter. In Proceedings of the 2008 Congress on Image and Signal Processing, Hainan, China, 27–30 May 2008; Volume 5, pp. 408–412. [Google Scholar]
  9. Wang, J.; Xu, X. Simulation of Correlated Low-Grazing-Angle Sea Clutter Based on Phase Retrieval. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3917–3930. [Google Scholar] [CrossRef]
  10. Goldstein, H. Sea echo. In Propagation of Short Radio Waves; McGraw-Hill: New York, NY, USA, 1951; Volume 13, pp. 481–527. [Google Scholar]
  11. Trunk, G.V.; George, S.F. Detection of Targets in Non-Gaussian Sea Clutter. IEEE Trans. Aerosp. Electron. Syst. 1970, AES-6, 620–628. [Google Scholar]
  12. Jakeman, E.; Pusey, P. A model for non-Rayleigh sea echo. IEEE Trans. Antennas Propag. 1976, 24, 806–814. [Google Scholar] [CrossRef]
  13. Matsuo, S.; Yuhai, H.M. Weibull Radar Clutter; Radar, Sonar and Navigation; Institution of Engineering and Technology: London, UK, 1990. [Google Scholar]
  14. Ishikawa, Y.; Sekine, M.; Musha, T. Observation of k-distributed sea clutter via an x-band radar. Electron. Commun. Jpn. Part I Commun. 1994, 77, 72–82. [Google Scholar] [CrossRef]
  15. Bouvier, C.; Martinet, L.; Favier, G.; Artaud, M. Simulation of radar sea clutter using autoregressive modelling and K-distribution. In Proceedings of the International Radar Conference, Alexandria, VA, USA, 8–11 May 1995; pp. 425–430. [Google Scholar]
  16. Middleton, D. New physical-statistical methods and models for clutter and reverberation: The KA-distribution and related probability structures. IEEE J. Ocean. Eng. 1999, 24, 261–284. [Google Scholar] [CrossRef]
  17. Rosenberg, L.; Crisp, D.J.; Stacy, N.J. Analysis of the KK-distribution with X-band medium grazing angle sea-clutter. In Proceedings of the 2009 International Radar Conference “Surveillance for a Safer World” (RADAR 2009), Bordeaux, France, 12–16 October 2009; pp. 1–6. [Google Scholar]
  18. Dong, Y. Distribution of X-Band High Resolution and High Grazing Angle Sea Clutter; Citeseer: San Diego, CA, USA, 2006. [Google Scholar]
  19. Farshchian, M.; Posner, F.L. The Pareto distribution for low grazing angle and high resolution X-band sea clutter. In Proceedings of the 2010 IEEE Radar Conference, Paris, France, 30 September–1 October 2010; pp. 789–793. [Google Scholar]
  20. Bocquet, S. Simulation of correlated Pareto distributed sea clutter. In Proceedings of the 2013 International Conference on Radar, Adelaide, Australia, 9–12 September 2013; pp. 258–261. [Google Scholar]
  21. Tough, R.J.A.; Ward, K.D. The correlation properties of gamma and other non-Gaussian processes generated by memoryless nonlinear transformation. J. Phys. D Appl. Phys. 1999, 32, 3075. [Google Scholar] [CrossRef]
  22. Watts, S.; Rosenberg, L. Challenges in radar sea clutter modelling. IET Radar Sonar Navig. 2022, 16, 1403–1414. [Google Scholar] [CrossRef]
  23. Lample, G.; Zeghidour, N.; Usunier, N.; Bordes, A.; DENOYER, L.; Ranzato, M.A. Fader Networks:Manipulating Images by Sliding Attributes. In Proceedings of the Advances in Neural Information Processing Systems; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Brooklyn, NY, USA, 2017; Volume 30. [Google Scholar]
  24. He, Z.; Zuo, W.; Kan, M.; Shan, S.; Chen, X. AttGAN: Facial Attribute Editing by Only Changing What You Want. IEEE Trans. Image Process. 2019, 28, 5464–5478. [Google Scholar] [CrossRef] [PubMed]
  25. Bin, D.; Xue, X.; Xuefeng, L. Sea Clutter Data Augmentation Method Based on Deep Generative Adversarial Network. J. Electron. Inf. Technol. 2021, 43, 1985. [Google Scholar]
  26. Zhang, X.; Wang, Z.; Lu, K.; Pan, Q.; Li, Y. Data Augmentation and Classification of Sea–Land Clutter for Over-the-Horizon Radar Using AC-VAEGAN. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16. [Google Scholar] [CrossRef]
  27. Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 1994, 5, 157–166. [Google Scholar] [CrossRef] [PubMed]
  28. Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 17–19 June 2013; Dasgupta, S., McAllester, D., Eds.; Proceedings of Machine Learning Research. JMLR: Norfolk, MA, USA, 2013; Volume 28, pp. 1310–1318. [Google Scholar]
  29. Pumarola, A.; Popov, S.; Moreno-Noguer, F.; Ferrari, V. C-flow: Conditional generative flow models for images and 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7949–7958. [Google Scholar]
  30. Kingma, D.P.; Dhariwal, P. Glow: Generative Flow with Invertible 1x1 Convolutions. In Proceedings of the Advances in Neural Information Processing Systems; Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R., Eds.; Curran Associates, Inc.: Brooklyn, NY, USA, 2018; Volume 31. [Google Scholar]
  31. Carretero-Moya, J.; Gismero-Menoyo, J.; Blanco-del Campo, A.; Asensio-Lopez, A. Statistical Analysis of a High-Resolution Sea-Clutter Database. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2024–2037. [Google Scholar] [CrossRef]
  32. IPIX Radar. Available online: http://soma.ece.mcmaster.ca/ipix/dartmouth/index.html (accessed on 25 April 2023).
Figure 1. Autoregressive generation using GCC-FLOW.
Figure 2. Architecture of GCC-FLOW.
Figure 3. Architecture of coupling networks in GCC-FLOW.
Figure 4. Magnitude distribution of the generated data by GCC-FLOW in comparison with the true data: high sea state (a), low sea state (b).
Figure 5. Autocorrelation of the generated data by GCC-FLOW in comparison with the true data: high sea state (a), low sea state (b).
Figure 6. Distribution (1) and autocorrelation (2) of data generated by the model trained on high sea state data: beginning (a), middle (b), end (c).
Figure 7. Distribution (1) and autocorrelation (2) of data generated by the model trained on low sea state data: beginning (a), middle (b), end (c).
Figure 8. Distribution (1) and autocorrelation (2) of the generated data and the 11th range bin of Grimsby #155 data: beginning (a), middle (b), end (c).
Figure 9. Autocorrelation comparison of the block-by-block method and the proposed GCC-FLOW: original data (1-(a),2-(a)), block-by-block method (1-(b),2-(b)), and GCC-FLOW (1-(c),2-(c)).
Figure 10. Distribution (1) and autocorrelation (2) of the data generated by GCC-FLOW with α = 1: beginning (a), middle (b), end (c).
Figure 11. Distribution (1) and autocorrelation (2) of the data generated by GCC-FLOW with α = 0: beginning (a), middle (b), end (c).
Figure 12. Interpolated distribution by the global condition variable α = 0.2, 0.5, 0.8: beginning (a), middle (b), end (c).
Figure 13. Interpolated autocorrelation by the global condition variable α = 0.2 (1), 0.5 (2), 0.8 (3): beginning (a), middle (b), end (c).
Table 1. Complexity analysis of GCC-FLOW.

Global Condition   Training Data                       Training Time   # of Params   Generation Time (per 512 Samples)
Fixed              Dartmouth high sea state            2.6 h           71 k          1.17 s
Fixed              Dartmouth low sea state             2.9 h           71 k          1.17 s
Fixed              Grimsby, 11th range bin of #155     2.2 h           71 k          1.17 s
Variable           Dartmouth high and low sea state    18.5 h          71 k          1.17 s

Lee, S.; Chung, W. Globally Conditioned Conditional FLOW (GCC-FLOW) for Sea Clutter Data Augmentation. Appl. Sci. 2024, 14, 6538. https://doi.org/10.3390/app14156538
