Article

A Novel Batch Streaming Pipeline for Radar Emitter Classification

1 School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Chungcheong, Republic of Korea
2 LIG Nex1, Seongnam 13488, Gyeonggi, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(22), 12395; https://doi.org/10.3390/app132212395
Submission received: 1 October 2023 / Revised: 12 November 2023 / Accepted: 13 November 2023 / Published: 16 November 2023
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract
In electronic warfare, radar emitter classification plays a crucial role in identifying threats in complex radar signal environments. Traditionally, this has been achieved using heuristic-based methods and handcrafted features. However, these methods struggle to adapt to the complexities of modern combat environments and varying radar signal characteristics. To address these challenges, this paper introduces a novel batch streaming pipeline for radar emitter classification. Our pipeline consists of two key components: radar deinterleaving and radar pattern recognition. We leveraged the DBSCAN algorithm and an RNN encoder, which are relatively light and simple models, considering the limited hardware resource environment of a military weapon system. Although we chose to utilize lightweight machine learning and deep learning models, we designed our pipeline to perform optimally through hyperparameter optimization of each component. We demonstrate the effectiveness of our proposed model and pipeline through experimental validation and analysis. Overall, this paper provides background knowledge on each model, introduces the proposed pipeline, and presents experimental results.

1. Introduction

Electronic warfare (EW) is a modern form of combat that utilizes various electronic devices, such as electronic sensors, signal jammers, radar systems, and communication networks, as well as electromagnetic waves and telecommunications. It can be broadly divided into three categories: electronic support, electronic attack, and electronic protection [1]. The process of radar emitter classification falls under the domain of electronic support and plays a significant role in analyzing and identifying threats in complex radar signal environments in electronic warfare [2]. When an EW system receives radar signals, it analyzes their features, such as the frequency pattern and pulse repetition interval (PRI) pattern of the signals. These analyzed results are then matched with the threat library of the EW systems, to perform threat identification [3].
However, as electronic countermeasures become more sophisticated and new types of complex radar systems are developed and deployed, the classification of these radar emitters has become more challenging [4,5,6]. Modern radar emitters often employ a variety of frequency and PRI modulations, which increases the complexity and diversity of radar signals [7]. Consequently, there is a need to develop and implement new methods for the classification of these radar emitters. Such methods could provide a significant advantage in the complex and fast-paced arena of modern electronic warfare.
In electronic warfare, classification of the radar emitters involves several steps [8]. First, pulse descriptor words (PDWs) are generated from the received radar signals. Each PDW includes details about a specific radar pulse, such as TOA (time of arrival), AOA (angle of arrival), frequency, PW (pulse width), PA (pulse amplitude), etc. [9]. Second, the generated PDWs are deinterleaved. Deinterleaving is the process of separating jumbled PDWs back into individual sequences of pulses, which are referred to as PDW trains, each one corresponding to a different emitter [10,11]. Figure 1 illustrates the radar deinterleaving process.
Third, we extract the PRI from each PDW train, using its TOA information, as in [11,12,13]:
PRI_i = TOA_i − TOA_{i−1}.
The PRI is the time interval between consecutive pulses, and is one of the key features that helps us classify the emitter [7,13]. With this calculated PRI and frequency information, we proceed to the final step, which is the radar pattern recognition process. Each radar emitter has a significant pattern, like a fingerprint, which allows us to recognize it among others [14]. Therefore, by recognizing the frequency pattern and PRI pattern, we can classify the radar emitter of the received signals. The crucial components in the entire radar emitter classification process are radar deinterleaving and radar pattern recognition. These two components have been actively researched and developed, to improve their effectiveness and efficiency.
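The PRI extraction above can be sketched in a few lines (a hypothetical helper; the paper does not prescribe an implementation):

```python
import numpy as np

def extract_pri(toa):
    """Compute pulse repetition intervals from times of arrival.

    PRI_i = TOA_i - TOA_{i-1}, so a train of n pulses yields n-1 PRIs.
    """
    toa = np.sort(np.asarray(toa, dtype=float))  # ensure chronological order
    return np.diff(toa)

# Example: a stable PRI train emitting every 0.5 ms
pris = extract_pri([0.0, 0.5, 1.0, 1.5])
print(pris)  # -> [0.5 0.5 0.5]
```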
The radar deinterleaving process has traditionally been approached using heuristic-based methods. These methods rely on predefined rules and thresholds to deinterleave the received radar signals. However, such conventional techniques often face limitations in handling complex signal environments and adapting to varying radar signal characteristics [15]. In recent years, machine learning techniques have gained significant attention as an effective approach to radar deinterleaving. By utilizing data-driven algorithms, these methods can learn to recognize and separate different radar signals based on the intricate relationships within radar pulse data. Guo et al. [15] and Li et al. [16] performed radar deinterleaving by fusing support vector clustering and data field theory, respectively, with the K-means clustering technique. Stefan et al. [17] conducted radar deinterleaving using the incremental DBSCAN algorithm, whose performance was experimentally validated on simulated radar data. Other machine learning algorithms that can be considered for the radar deinterleaving process include OPTICS (ordering points to identify the clustering structure) [18], HDBSCAN (hierarchical DBSCAN) [19], mean shift [20], and agglomerative clustering [21].
In the conventional radar pattern recognition approach, which also relies on handcrafted features and rule-based methods, similar issues to those encountered in traditional radar deinterleaving are present. To address these issues, recent research has been focusing on the application of deep learning techniques in radar pattern recognition. These methods leverage the power of neural networks to automatically learn relevant features and patterns from PDW trains, allowing for more accurate and robust recognition of radar patterns. Nguyen et al. [22] performed radar pattern recognition, using a fully connected layer-based MLP (multi-layer perceptron) classifier. Li et al. [23] and Sun et al. [24] used a 1D CNN (convolutional neural network) to effectively extract features from radar signals and use them to improve classification performance. Li et al. [25] demonstrated excellent performance by an attention-based multi-recurrent neural network, which was effective for sequential data processing, compared to existing MLP- and CNN-based radar classification models. Following current research trends, we can consider utilizing recurrent neural networks or attention-mechanism-based transformers [26] for sequential data processing problems like radar pattern recognition.
Although extensive research has been conducted on each of these processes individually, to the best of the authors’ knowledge, a comprehensive pipeline that encompasses both processes has yet to be established. For this paper, we used a modified DBSCAN method for radar deinterleaving and an RNN-based classifier for radar pattern recognition on batch streaming radar data. By combining these two components, we propose a comprehensive radar emitter classification pipeline. Although some methods outperform the DBSCAN algorithm and RNN-based classifiers at their respective steps, we designed the pipeline around these two models to account for the limited hardware resources of real military weapon systems. Even though we adopted machine learning and deep learning models with simple architectures, the pipeline was designed to achieve optimal performance through hyperparameter optimization.
This paper is structured as follows: In Section 2, we provide background knowledge on DBSCAN, RNN encoder, and Bayesian optimization, which is essential for understanding our proposed radar emitter classification pipeline. In Section 3, we introduce our proposed radar deinterleaving model and radar pattern recognition model, utilizing the background knowledge from Section 2. Moreover, we present our complete radar emitter classification pipeline, which includes both of the components. In Section 4, we set up our proposed models in the pipeline, we experimentally validate the functionality, and we analyze the results. Lastly, in Section 5, we provide a comprehensive conclusion and outline future work.

2. Background Knowledge

In this section, we discuss three fundamental algorithms, DBSCAN, the RNN encoder, and Bayesian optimization, which are needed to understand the radar deinterleaving and radar pattern recognition models of our proposed pipeline.

2.1. DBSCAN

DBSCAN is a popular density-based clustering algorithm introduced by Martin Ester et al. in 1996 [27]. Unlike other clustering algorithms, such as K-means or hierarchical clustering, DBSCAN does not require the number of clusters to be specified beforehand, making it particularly suitable for scenarios where the number of clusters is unknown or varying. The key idea behind DBSCAN is to group data points that are closely packed together, while also identifying noise points that do not belong to any cluster. DBSCAN employs the following core concepts, all defined with regard to two parameters, ϵ and MinPts.
In DBSCAN, a core point is a data point in domain D that has at least a minimum number of neighboring data points (MinPts) within its ϵ-neighborhood. The ϵ-neighborhood of a point p, denoted by N_ϵ(p), is defined as
N_ϵ(p) = { q ∈ D | √( Σ_{i=1}^{n} (p_i − q_i)² ) ≤ ϵ }.
Border points, on the other hand, do not meet the MinPts requirement themselves but are reachable from a core point within the distance ϵ.
A point p is said to be directly density reachable from a point q if:
(1) p ∈ N_ϵ(q) (reachability); (2) |N_ϵ(q)| ≥ MinPts (core point condition).
We can expand the concept of density reachability. A point p is density reachable from a point q if there is a chain of points p_1, p_2, …, p_n with p_1 = q and p_n = p, such that p_{i+1} is directly density reachable from p_i for all 1 ≤ i ≤ n − 1.
The clustering process in DBSCAN involves iteratively exploring the entire dataset and determining core points, border points, and noise points. The algorithm proceeds as follows:
  1. DBSCAN starts by selecting an unvisited data point and checks its type.
  2. If the selected point is a core point, create a new cluster, and expand the cluster by iteratively adding density-reachable points.
  3. If the selected point is a border point, assign it to the cluster of a reachable core point.
  4. If the selected point is neither a core point nor a border point, it is considered a noise point, meaning it does not belong to any cluster.
  5. Repeat steps 1–4 until all points have been visited.
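The steps above can be sketched as a minimal, from-scratch DBSCAN (illustrative only; a production system would use an optimized library implementation):

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
    n = len(X)
    labels = np.full(n, -1)            # -1 means noise / unassigned
    visited = np.zeros(n, dtype=bool)
    # Pairwise Euclidean distances define each point's eps-neighborhood
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = [np.where(dists[i] <= eps)[0] for i in range(n)]
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        if len(neighbors[i]) < min_pts:
            continue                    # not core (may still become a border point)
        # Core point: start a new cluster and expand via density reachability
        labels[i] = cluster
        seeds = list(neighbors[i])
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster     # border or core point joins the cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    seeds.extend(neighbors[j])  # j is core: keep expanding
        cluster += 1
    return labels

# Two dense groups and one far-away outlier
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
              [20.0, 20.0]])
labels = dbscan(X, eps=0.5, min_pts=3)
print(labels)  # first three points share a cluster, the outlier stays -1
```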
Due to its many advantages, DBSCAN continues to be widely used in various clustering problems to this day. However, the performance of the DBSCAN model heavily relies on how the parameters (ϵ, MinPts) are set [28]. Thus, selecting optimal parameter values remains an ongoing task for researchers and engineers aiming to achieve optimal clustering performance with DBSCAN.

2.2. RNN Encoder

RNN is a type of neural network that is designed to process sequential data by considering the temporal dependencies within the input sequences [29]. However, initial vanilla RNNs have limitations in capturing long-term dependencies, due to the vanishing gradient problem. As the gradients back-propagate through time, they can become exponentially small, causing the network to struggle in preserving information over long sequences. To address this issue, LSTM (long short-term memory) was introduced.
LSTM is a variant of RNN that incorporates memory cells to selectively store and forget information over time [30]. It consists of different gates, including input, forget, and output gates, which control the flow of information within the cells. These gates enable LSTM to capture long-term dependencies more effectively and to alleviate the vanishing gradient problem. Another variant of RNN that addresses the vanishing gradient problem is GRU (gated recurrent unit). GRU simplifies the architecture compared to LSTM by combining the memory update and reset operations into a single gate. This results in a more computationally efficient model while still maintaining the ability to capture long-term dependencies [31].
The RNN encoder is used when we need to transform an input sequence into a fixed-length vector representation [32,33]. The operation of the RNN encoder can be described as follows. Starting from the first step of the input sequence, it receives an input and updates its hidden state. Then, it moves on to the next step, combining the previous hidden state with the current input, to compute a new hidden state. This process is repeated for each step of the sequence, gradually encoding the information into the hidden states. At the end of this process, the final hidden state serves as the context vector. This context vector representation summarizes the input sequence with the shape of the fixed-length vector. The RNN encoder has been widely used in various tasks, such as natural language processing, speech recognition, and action recognition, demonstrating its effectiveness in encoding sequential data and extracting meaningful features.
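The step-by-step encoding described above can be sketched with a vanilla RNN that folds a variable-length sequence into a fixed-length context vector; the weights and dimensions here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rnn_encode(x_seq, W_xh, W_hh, b_h):
    """Vanilla RNN encoder: the final hidden state is the context vector."""
    h = np.zeros(W_hh.shape[0])            # initial hidden state
    for x_t in x_seq:                      # one update per sequence step
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    return h                               # fixed-length context vector

rng = np.random.default_rng(0)
input_dim, hidden_dim = 2, 8               # e.g. (frequency, PRI) per pulse
W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# Sequences of different lengths map to context vectors of the same size
short = rng.normal(size=(5, input_dim))
long = rng.normal(size=(50, input_dim))
assert rnn_encode(short, W_xh, W_hh, b_h).shape == rnn_encode(long, W_xh, W_hh, b_h).shape
```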

2.3. Bayesian Optimization

In machine learning and deep learning, model performance is often dependent on a set of hyperparameters that need to be carefully tuned, to achieve optimal results. Bayesian optimization [34] is a powerful and efficient method for the hyperparameter tuning process. Bayesian optimization is a global optimization technique that aims to find the maximum (or minimum) of an unknown, expensive-to-evaluate, and possibly noisy objective function. In the context of machine learning or deep learning, this objective function is typically the performance score of the model (e.g., accuracy, loss, or F1-score) on a validation dataset. The objective is to find the best set of hyperparameters that maximize the model’s performance. The key idea behind Bayesian optimization is to build a probabilistic surrogate model that approximates the true objective function. This surrogate model is typically a Gaussian process or other probabilistic models. The surrogate model helps in modeling the uncertainty of the objective function and guides the search for the optimal hyperparameters.
The Bayesian optimization process can be summarized as follows:
  1. An acquisition function, often based on the surrogate model, is used to determine the next set of hyperparameters to evaluate. Common acquisition functions include expected improvement, probability of improvement, and upper confidence bound.
  2. The acquisition function suggests a set of hyperparameters for evaluation, and the true objective function is queried to obtain its performance on the validation dataset.
  3. The surrogate model is updated with the new data point, which improves its approximation of the objective function.
  4. Steps 1 to 3 are repeated for a predefined number of iterations, and the process converges to the optimal set of hyperparameters.
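The loop above can be sketched with a simple RBF-kernel Gaussian-process surrogate and an upper-confidence-bound acquisition function; the one-dimensional objective, search range, and all constants are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def objective(lr):
    """Stand-in for an expensive validation score (illustrative assumption)."""
    return -(np.log10(lr) + 3.0) ** 2        # peaks at lr = 1e-3

def gp_posterior(X, y, Xs, length=1.0, noise=1e-6):
    """Gaussian-process posterior mean/std with an RBF kernel (the surrogate)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y                     # posterior mean on the candidate grid
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

# Tune log10(learning rate) over [-6, -1], starting from 2 random evaluations
rng = np.random.default_rng(1)
X = rng.uniform(-6, -1, size=2)
y = np.array([objective(10.0 ** x) for x in X])
grid = np.linspace(-6, -1, 200)

for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    nxt = grid[np.argmax(mu + 2.0 * sd)]     # upper-confidence-bound acquisition
    X = np.append(X, nxt)                    # evaluate the suggested setting...
    y = np.append(y, objective(10.0 ** nxt)) # ...and update the surrogate data

best_lr = 10.0 ** X[np.argmax(y)]
print(best_lr)
```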
Bayesian optimization offers several advantages, including its ability to handle noisy objective functions, make efficient use of limited computational resources, and focus on promising regions of the hyperparameter space. By incorporating Bayesian optimization into a machine learning or deep learning workflow, we can streamline the process of hyperparameter tuning and achieve state-of-the-art results with less manual trial and error.

3. The Pipeline

In this section, we introduce our radar emitter classification pipeline. The pipeline is primarily divided into two stages: radar deinterleaving and radar pattern recognition. As radar emitters used in electronic warfare have the values of AOA, frequency, PW, and PA in each unique band, we use these four features to deinterleave the radar data to PDW trains. Following the formation of these PDW trains, we proceed to the radar pattern recognition stage. At this stage, we analyze the frequency and PRI patterns within each PDW train set. By analyzing these patterns, we identify the specific modulation technique deployed by the emitter. The models used in each stage are designed based on the background techniques introduced in the previous section. Our pipeline operates under the assumption that the input PDW data has been preprocessed to remove noise. The radar recognition pipeline diagram we propose is shown in Figure 2.

3.1. Radar Deinterleaving Model

Given the batch streaming processing approach, the DBSCAN algorithm introduced in Section 2.1 is well suited to our radar deinterleaving model. This is because, in a batch streaming environment, the number of radar emitters we seek to deinterleave can vary from batch to batch. However, directly applying the DBSCAN algorithm from [27] presents two significant challenges. Firstly, finding an appropriate value of ϵ is challenging. When we select AOA, frequency, PW, and PA as our feature vector for implementing DBSCAN, it is not straightforward to set an appropriate value of ϵ in Equation (1). This is because the distance defining the neighborhood weights each vector element equally, even though the elements have different variances and scales. Secondly, we may encounter issues related to MinPts. In the context of our task, even though the data have been pre-filtered for noise, if an emitter emits pulses at a low rate, the number of its pulses within a given batch may not meet the MinPts threshold. In this case, DBSCAN will fail to cluster these pulses into a PDW train, incorrectly identifying them as noise.
To solve the first problem, we propose modifying Equation (1), used in DBSCAN, as follows:
N_thres(p) = { q ∈ D | ‖p − q‖ ⪯ thres },
where p, q ∈ R⁴ denote the (AOA, frequency, PW, PA) vectors of two PDWs in D;
thres = (AOA_thres, Freq_thres, PW_thres, PA_thres)
is a threshold vector; and the componentwise comparison ⪯ is defined by
x ⪯ r ⟺ |x_i| ≤ r_i, i = 1, …, 4,
for x ∈ R⁴ and r ∈ (R_{≥0})⁴. To account for the different variances and scales of each vector element, the distance calculation between the two vectors that defines the neighborhood is performed independently for each element, and the corresponding thresholds (thres) are set separately. Defining the thresholds as in Equation (2) has the advantage of enabling radar engineers to determine the threshold for each component more intuitively, based on the characteristics of each physical quantity.
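The per-feature neighborhood test of Equation (2) can be sketched as follows; the feature ordering matches the text, but the threshold values are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np

def neighborhood(p, D, thres):
    """Per-feature thresholded neighborhood: q is a neighbor of p iff
    |p_i - q_i| <= thres_i for every feature i (AOA, frequency, PW, PA)."""
    mask = np.all(np.abs(D - p) <= thres, axis=1)
    return D[mask]

# PDWs as rows of (AOA [deg], frequency [MHz], PW [us], PA [dB]);
# values and units are hypothetical examples
D = np.array([[10.0, 9400.0, 1.0, -40.0],
              [10.2, 9405.0, 1.1, -41.0],
              [55.0, 3100.0, 20.0, -60.0]])
thres = np.array([1.0, 50.0, 0.5, 5.0])
print(neighborhood(D[0], D, thres))  # only the first two PDWs match
```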
The second problem could be solved by increasing the batch size, to avoid situations where the MinPts condition is not met, but larger batch sizes result in longer computation times and lower memory efficiency. Therefore, rather than increasing the batch size, we focus on how to handle data that do not meet the MinPts condition when they occur. As illustrated in the deinterleaving model in Figure 2, we propose carrying over data with fewer PDWs than MinPts to the next batch, instead of classifying them as noise. This approach stems from our assumption that the PDW data in our pipeline have already been pre-filtered for noise. Thus, rather than treating unclustered pulses as noise, we consider them insufficient data and forward them to the next batch. The unclustered data from the previous batch are combined with new data from the next batch, so that there can be enough data in that batch to satisfy the MinPts condition; in other words, they can then be deinterleaved into PDW trains.
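The carry-over scheme can be sketched as a wrapper around any clustering routine; the toy clustering function and the small MinPts used here are illustrative assumptions, not the paper's deinterleaving model:

```python
import numpy as np

def deinterleave_stream(batches, cluster_fn):
    """Batch-streaming wrapper: PDWs left unclustered in one batch are
    carried over and prepended to the next batch instead of being
    discarded as noise. `cluster_fn` returns one label per row (-1 =
    unclustered); this is a sketch, not the paper's implementation."""
    carry = np.empty((0, 4))
    for batch in batches:
        data = np.vstack([carry, batch])
        labels = cluster_fn(data)
        trains = {c: data[labels == c] for c in set(labels) if c != -1}
        carry = data[labels == -1]       # insufficient data, not noise
        yield trains, len(carry)

def toy_cluster(data, min_pts=3):
    """Toy stand-in: group PDWs by rounded AOA; groups smaller than
    min_pts stay unclustered (-1)."""
    keys = np.round(data[:, 0]).astype(int)
    labels = np.full(len(data), -1)
    for c, k in enumerate(np.unique(keys)):
        idx = keys == k
        if idx.sum() >= min_pts:
            labels[idx] = c
    return labels

# An emitter at AOA=10 emits only 2 pulses per batch: each batch alone
# fails min_pts=3, but with carry-over the pulses accumulate and cluster.
b1 = np.tile([10.0, 9400.0, 1.0, -40.0], (2, 1))
b2 = np.tile([10.0, 9400.0, 1.0, -40.0], (2, 1))
results = list(deinterleave_stream([b1, b2], toy_cluster))
print([len(trains) for trains, _ in results])  # -> [0, 1]
```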

3.2. Radar Recognition Model

Following the deinterleaving step, the next task in our pipeline is the pattern recognition process. Our radar pattern recognition model is designed to accommodate varying sizes of input data—in this case, the PDW trains produced by the previous deinterleaving step. The lengths of the PDW trains within a single batch can vary, due to the different pulse time intervals of radar emitters [35]. To handle these varying-sized inputs, we propose using the RNN encoder introduced in Section 2.2, which converts them into a fixed-length vector.
The structure of our RNN encoder is multi-layered. We chose a multi-layered RNN to allow the model to learn a richer representation of the input data sequence, as each layer in the RNN encoder can capture different aspects of the input. Therefore, we do not rely solely on the top layer’s hidden state as our context vector, as a standard RNN encoder does. Instead, we concatenate the final hidden states from all the layers to create a composite representation. This enhances the model’s ability to understand the data, incorporating features at different levels of complexity into the context vector. Finally, this context vector passes through a fully connected (FC) layer, leading to the final classification of frequency and PRI patterns. Figure 3 shows an example of the radar recognition model structure. The RNN encoder in the example is structured in four layers.
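The concatenation of per-layer hidden states can be sketched as follows (a four-layer vanilla RNN with hidden size 8, so the context vector has dimension 4 × 8 = 32; the random weights are placeholders, not trained parameters):

```python
import numpy as np

def stacked_rnn_context(x_seq, layers):
    """Multi-layer RNN encoder whose context vector concatenates the
    final hidden state of every layer, not just the top one."""
    finals = []
    seq = x_seq
    for W_xh, W_hh, b in layers:
        h = np.zeros(W_hh.shape[0])
        outs = []
        for x_t in seq:
            h = np.tanh(W_xh @ x_t + W_hh @ h + b)
            outs.append(h)
        finals.append(h)               # this layer's last hidden state
        seq = outs                     # feed hidden states to the next layer
    return np.concatenate(finals)      # composite context vector

rng = np.random.default_rng(0)
hidden = 8
dims = [2, hidden, hidden, hidden]     # input dim 2, then 4 layers of width 8
layers = [(rng.normal(scale=0.1, size=(hidden, d)),
           rng.normal(scale=0.1, size=(hidden, hidden)),
           np.zeros(hidden)) for d in dims]
ctx = stacked_rnn_context(rng.normal(size=(30, 2)), layers)
print(ctx.shape)  # -> (32,) = 4 layers x 8 hidden units
```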

4. Experiment

In this section, we conducted experiments to evaluate the performance of our proposed radar emitter classification pipeline, which consisted of the radar deinterleaving and radar pattern recognition models introduced in Section 3. The primary focus of this section is the experimental dataset, the configuration of the two models, and the analysis of the experimental results. The pipeline in this paper was run with a batch size of 3000 samples.

4.1. Dataset

For the experiment in this paper, 500 episodes of radar signals were generated using a simulated signal generator. Among these episodes, 400 were used as training data, while the remaining 100 were used as test data. The episodes were alike in composition but had different values of AOA, frequency, PW, and PA within the margins of error, and their pulses were organized in different TOA orders. Thus, by validating the model, which was designed based on the training data, with the test data, we could check whether generalization performance was guaranteed. Within each episode, 43 different radar emitters were active, emitting a total of 1500 pulses. These pulses were captured and stored as PDWs, serving as the input for the subsequent deinterleaving and recognition stages. The PDW dataset covered a wide range of radar emitter classes, each characterized by distinct PRI and frequency patterns. The frequency patterns encompassed agile, fixed, hopping, negative, positive, sine, and triangular, while the PRI patterns included dwell and switch, stagger, negative, positive, stable, triangular, and sine. Figure 4 and Figure 5 illustrate examples of these frequency patterns and PRI patterns, respectively. We performed one-hot encoding on the class information, yielding a total of 43 classes used as experimental data.

4.2. Deinterleaving Model Setup

To set up the deinterleaving model, we needed to set MinPts and the threshold values for AOA, frequency, PW, and PA. We set MinPts to 64 and configured the model to forward to the next batch any PDW data that did not meet the MinPts condition. In other words, the minimum sample length of detected PDW trains was 64. As the accuracy and computational speed of the deinterleaving model depended on how the threshold values were set, we used Bayesian optimization to find a good set of threshold values. Unlike the threshold values, MinPts affects the entire pipeline, not just the deinterleaving model, so we kept its value fixed and tuned only the thresholds. Within the search space shown in Table 1, a total of 16 threshold value sets were evaluated during the Bayesian optimization process, as shown in Table 2. The Bayesian optimization score for the deinterleaving model was based on the deinterleaving accuracy on the training episode data; when the accuracy reached 100%, the reciprocal of the average time to complete deinterleaving of each batch was added as an extra point. The model with the best score was selected as the final model.

4.3. Pattern Recognition Model Setup

We performed data preprocessing, model training, and hyperparameter optimization for the pattern recognition model, to process the PDW trains output by the deinterleaving model. In the preprocessing step, we combined the frequency values and the PRI values computed from each PDW train into a multivariate time-series vector, x ∈ R^(2×samples). We applied a modified min–max normalization, to accommodate the different scales of the frequency and PRI values, as follows:
x_normalized = (x − min(x) + α) / (max(x) − min(x) + 2α),
where
α = 1000 · std(x).
We added α to the min–max normalization process to deal effectively with fixed frequency patterns and stable PRI patterns. As shown in Figure 6, without α, even slight noise could be amplified, making it difficult to discern fixed or stable patterns.
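A sketch of the modified min–max normalization above, assuming the signal is not exactly constant (so that std(x), and hence α, is nonzero):

```python
import numpy as np

def modified_minmax(x):
    """Min-max normalization with an alpha offset: for near-constant
    signals (fixed frequency / stable PRI), alpha dominates the
    denominator and keeps noise from being stretched across [0, 1]."""
    alpha = 1000.0 * np.std(x)
    return (x - np.min(x) + alpha) / (np.max(x) - np.min(x) + 2.0 * alpha)

# A "fixed" frequency train: 9400 MHz with tiny measurement noise
fixed = 9400.0 + 1e-3 * np.random.default_rng(0).normal(size=100)
out = modified_minmax(fixed)
print(out.min(), out.max())  # values stay tightly packed around 0.5
```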
The preprocessed data were divided into training and validation sets in a ratio of 9:1. We divided the dataset this way to assess the model’s generalization performance and ensure it did not overfit to the training set. The training set was used to train the radar pattern recognition model. We tried different types of RNN encoder, namely vanilla RNN, LSTM, and GRU, while keeping all other conditions fixed. Initially, we set the dimension of the hidden state in the RNN encoder to 8. The final context vector obtained from the RNN encoder with four layers was of dimension 4 × 8 = 32. The classifier consisted of three fully connected layers with 1024, 512, and 256 nodes, respectively. The model architecture is presented in Table 3.
The optimizer used was Adam, with a learning rate of 0.00001, and the batch size was set to 128. We chose categorical cross-entropy as the model's loss function, defined as
Categorical Cross-Entropy = − Σ_{i=1}^{C} y_i log(p_i).
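With one-hot labels, the loss reduces to the negative log-probability assigned to the true class; a minimal sketch:

```python
import numpy as np

def categorical_cross_entropy(y_true, p_pred, eps=1e-12):
    """CE = -sum_i y_i * log(p_i); with one-hot y this is the negative
    log-probability of the true class."""
    p_pred = np.clip(p_pred, eps, 1.0)   # guard against log(0)
    return -np.sum(y_true * np.log(p_pred), axis=-1)

y = np.array([0.0, 1.0, 0.0])            # one-hot: true class is index 1
good = np.array([0.1, 0.8, 0.1])
bad = np.array([0.8, 0.1, 0.1])
print(categorical_cross_entropy(y, good))  # ~0.223, low loss
print(categorical_cross_entropy(y, bad))   # ~2.303, high loss
```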
The training was conducted on the training set for 1000 epochs. At each epoch, the accuracy on the validation set was computed, and if it surpassed the previous best accuracy, the model was saved. After training, the model with the highest validation accuracy was selected as the final model, ensuring the final model was not overfitted to the training set. The validation accuracy for each RNN encoder type is presented in Table 4. In the experimental results, both LSTM and GRU outperformed the vanilla RNN, achieving an impressive accuracy of approximately 97%. This superiority can be attributed to their more advanced architectures: unlike the vanilla RNN, LSTM and GRU are equipped with mechanisms such as memory cells and gates, as discussed in Section 2.2, allowing them to capture and retain long-term dependencies in the input sequence.
We compared the experimental results between the LSTM and GRU models in Figure 7 and Table 5. Figure 7 shows the accuracy and loss graphs on the validation set during the training process. As evident from the graph, the GRU model demonstrated faster learning compared to the LSTM model. This can be attributed to the fact that the GRU model was a simplified version of the LSTM model, with fewer parameters to learn, resulting in faster training. Table 5 provides information on the parameter counts for each encoder and indicates the epoch at which they first reached 90% and 95% accuracy. While there was not a significant difference in performance between the LSTM and GRU models, given sufficient training time, if we had constraints on available resources or time, opting for the GRU model would be a reasonable choice.
As shown in Section 4.2, the pattern recognition model could be further improved by optimizing the hyperparameters. The hyperparameters used in the pattern recognition model were the context vector dimension (context_vec_dim), the number of RNN layers (rnn_layers), the number of fully connected layer nodes (fc_1, fc_2, fc_3), and the learning rate (lr). For the LSTM and GRU units, we found 16 sets of hyperparameters in the search space in Table 6, using Bayesian optimization.
The score in Bayesian optimization for the pattern recognition model was the pattern recognition accuracy on the validation data. The results of the Bayesian optimization for the pattern recognition model are shown in Table 7 and Table 8. The pattern recognition accuracy of the model with the best-scoring hyperparameter setting in each Bayesian optimization run was higher than that of the model with the initial hyperparameter values in Table 4. Therefore, we selected the model with the best hyperparameter values from the Bayesian optimization as our final pattern recognition model.

4.4. Radar Emitter Classification Pipeline Test

Finally, we tested our entire pipeline on the test dataset, including the radar deinterleaving model and the radar pattern recognition model with their optimized hyperparameter sets. The experimental results are summarized in Table 9. We verified that our proposed pipeline could process the entire test dataset in batch streaming without any problems; in other words, our radar emitter classification pipeline showed excellent performance.

5. Conclusions

In this paper, we presented a novel batch streaming radar emitter classification pipeline. The proposed pipeline addresses the challenges of radar deinterleaving and radar pattern recognition, both of which are essential components of radar emitter classification. By designing machine learning and deep learning models, we achieved 100% deinterleaving accuracy and up to 98% pattern recognition accuracy on the test dataset in our pipeline.
We propose three directions for future work. First, we can improve the pipeline to handle noisy data by adding a noise filter appropriate for radar data at the front of the pipeline. Second, the hyperparameter optimization technique can be improved with a variety of tuning schedulers, which can produce faster and better hyperparameter optimization results. Lastly, we can validate the generalization performance of our model by deploying the proposed pipeline on a real military weapon system and processing real radar data.

Author Contributions

Conceptualization, D.H.P., D.-H.S., J.-H.B. and W.-J.L.; methodology, D.H.P.; software, D.H.P.; validation, D.H.P., D.-H.S., J.-H.B. and W.-J.L.; formal analysis, D.H.P.; investigation, D.H.P., D.-H.S., J.-H.B. and W.-J.L.; resources, D.H.P., D.-H.S., J.-H.B. and W.-J.L.; data curation, D.H.P., D.-H.S., J.-H.B. and W.-J.L.; writing—original draft preparation, D.H.P.; writing—review and editing, D.E.C.; visualization, D.H.P.; supervision, D.E.C.; project administration, D.E.C.; funding acquisition, D.E.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was performed based on cooperation with KAIST-LIG Nex1 Cooperation (No. G01220631).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to legal restrictions.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figure 1. Radar deinterleaving process.
Figure 2. Radar recognition pipeline. The proposed radar recognition pipeline operates in a batch streaming manner. In each batch, unprocessed data from the previous batch are combined with new incoming data to create a batch of size N. This batch is deinterleaved by our deinterleaving model (pink box) into PDW trains, which then move to the pattern recognition phase; data that could not be deinterleaved are carried over to the next batch. In the pattern recognition phase, our pattern recognition model (yellow block) analyzes each deinterleaved PDW train to uncover the frequency and PRI patterns. Each model is defined later.
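The batch streaming flow described in the Figure 2 caption can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `deinterleave` and `recognize` are placeholders standing in for the DBSCAN-based deinterleaving model and the RNN-based pattern recognition model, and the toy functions below exist only to exercise the loop.

```python
def run_batch_streaming(pdw_stream, deinterleave, recognize, batch_size):
    """Process a PDW stream batch by batch, carrying leftovers forward."""
    carry = []     # PDWs the previous batch could not deinterleave
    buffer = []    # newly arrived PDWs
    patterns = []  # recognized pattern classes, one per PDW train
    for pdw in pdw_stream:
        buffer.append(pdw)
        if len(carry) + len(buffer) >= batch_size:
            batch = carry + buffer                 # merge leftovers with new data
            trains, carry = deinterleave(batch)    # -> PDW trains + leftovers
            patterns.extend(recognize(train) for train in trains)
            buffer = []
    return patterns, carry + buffer

# toy usage: two emitters interleaved in one stream
stream = [{"emitter": "A"}, {"emitter": "B"}, {"emitter": "A"}, {"emitter": "A"}]

def toy_deinterleave(batch):
    groups = {}
    for p in batch:
        groups.setdefault(p["emitter"], []).append(p)
    return list(groups.values()), []  # all PDWs assigned, no leftovers

def toy_recognize(train):
    return train[0]["emitter"]

patterns, leftover = run_batch_streaming(stream, toy_deinterleave, toy_recognize, batch_size=2)
```

The carry list is what makes the scheme streaming rather than purely batch: pulses that cannot yet be attributed to an emitter simply wait for the next batch.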
Figure 3. Radar pattern recognition model architecture example. The RNN encoder has a multi-layered structure and converts variable-sized frequency and PRI time series data into fixed-sized vectors. The hidden states of all RNN encoder layers are concatenated and passed to the classifier so that various features of the input time series are considered. The classifier is built on fully connected layers.
Figure 4. Types of frequency pattern.
Figure 5. Types of PRI pattern.
Figure 6. The original minmax normalization and the modified minmax normalization compared on frequency pattern examples. For patterns other than the fixed pattern (below), the two methods produce nearly identical results. For the fixed pattern (above), however, the original minmax normalization fails to preserve the pattern's fixed shape on the 0-to-1 scale because noise is amplified, whereas the modified minmax normalization successfully maintains the fixed shape.
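The behavior described in Figure 6 can be illustrated with a small sketch. The exact modification used in the paper is not spelled out in this section, so the range test below is an assumption: when the dynamic range of the signal is negligible relative to its magnitude (a fixed pattern plus measurement noise), the sketch scales by the maximum instead of stretching the noise across the whole 0-to-1 range.

```python
def modified_minmax(values, rel_eps=1e-3):
    """Min-max normalization that avoids noise amplification on fixed patterns.

    rel_eps is an assumed relative-range threshold, not a value from the paper.
    """
    lo, hi = min(values), max(values)
    span = hi - lo
    if span <= rel_eps * max(abs(hi), abs(lo)):
        # near-constant (fixed) pattern: keep the flat shape instead of
        # stretching measurement noise over the full 0-1 scale
        return [v / hi for v in values]
    return [(v - lo) / span for v in values]

fixed = [1000.0, 1000.2, 999.9, 1000.1]  # fixed frequency plus small noise
agile = [900.0, 950.0, 1000.0, 1050.0]   # genuinely varying pattern
```

With these inputs, the fixed pattern stays flat near 1.0 while the agile pattern is stretched over the full 0-to-1 scale, matching the qualitative behavior shown in Figure 6.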
Figure 7. Learning curves of loss and accuracy for the validation dataset.
Table 1. Search space for Bayesian optimization for the deinterleaving model.

| Hyperparameter | Search Space |
|---|---|
| AOA_thres (°) | [1, 5] |
| Freq_thres (kHz) | [100,000, 1,000,000] |
| PW_thres (ms) | [1, 1000] |
| PA_thres (dB) | [1, 10] |
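The thresholds in Table 1 can be folded into the clustering step by rescaling each PDW feature by its threshold, so that a single DBSCAN eps of 1.0 treats two pulses as neighbors exactly when every feature difference is within its threshold (under a Chebyshev metric). This is an illustrative sketch, not the authors' code; the threshold values below are placeholders chosen inside Table 1's search ranges, and the feature names are made up.

```python
# assumed example thresholds, chosen inside Table 1's search ranges
THRESHOLDS = {
    "aoa_deg":  3.0,        # AOA_thres, degrees
    "freq_khz": 500_000.0,  # Freq_thres, kHz
    "pw_ms":    500.0,      # PW_thres, ms
    "pa_db":    5.0,        # PA_thres, dB
}

def scale_pdw(pdw):
    """Divide each PDW feature by its threshold so that eps = 1.0 is meaningful."""
    return [pdw[name] / thres for name, thres in THRESHOLDS.items()]

def within_eps(a, b, eps=1.0):
    """Chebyshev neighborhood test on threshold-scaled features."""
    return max(abs(x - y) for x, y in zip(scale_pdw(a), scale_pdw(b))) <= eps

p1 = {"aoa_deg": 10.0, "freq_khz": 1_000_000.0, "pw_ms": 100.0, "pa_db": 20.0}
p2 = {"aoa_deg": 12.0, "freq_khz": 1_200_000.0, "pw_ms": 150.0, "pa_db": 22.0}
p3 = {"aoa_deg": 10.0, "freq_khz": 1_600_000.0, "pw_ms": 100.0, "pa_db": 20.0}
```

Here p1 and p2 differ by less than each threshold and so fall in the same neighborhood, while p3's frequency offset exceeds Freq_thres and separates it.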
Table 2. Bayesian optimization result for the deinterleaving model.

| AOA (°) | Frequency (kHz) | PW (ms) | PA (dB) | Score |
|---|---|---|---|---|
| 3.7454 | 4,756,035.8167 | 2993.2924 | 14.9079 | 81.8952 |
| 7.7518 | 4,416,538.1391 | 1862.1235 | 5.3162 | 92.3387 |
| 8.8703 | 2,678,465.3465 | 2655.9410 | 5.8810 | 93.7143 |
| 1.9605 | 1,071,099.6430 | 3491.7165 | 1.1764 | 95.4718 |
| 6.0112 | 3,554,959.2601 | 4849.5493 | 1.3911 | 95.4718 |
| 3.8239 | 1,135,701.5285 | 4002.3206 | 5.5540 | 95.4718 |
| 6.9129 | 1,212,423.3194 | 3500.1928 | 14.2985 | 95.4718 |
| 5.1407 | 1,211,903.6938 | 2839.4390 | 15.1432 | 95.4718 |
| 2.6256 | 1,124,927.1818 | 4329.3377 | 11.2010 | 95.4718 |
| 7.6594 | 1,082,501.6882 | 2062.7617 | 1.8047 | 95.4718 |
| 1.5602 | 822,172.8757 | 4330.8807 | 2.1036 | 95.4983 |
| 7.1875 | 1,101,082.8165 | 928.2235 | 11.0920 | 105.7289 |
| 7.4260 | 2,649,166.9935 | 98.3829 | 6.7065 | 105.7543 |
| 0.0630 | 2,323,468.2073 | 2633.7424 | 19.5565 | 105.7915 |
| 8.9805 | 1,065,441.2204 | 182.7483 | 4.6231 | 105.8310 |
| 8.3244 | 1,101,078.5979 | 917.0225 | 4.4547 | 106.8509 |
Table 3. The overall model architecture.

| Component | Layer Type | Input–Output Shape |
|---|---|---|
| RNN encoder | 4-layered RNN encoder | 2 × samples → 32 |
| Classifier | Fully connected + Batch normalization + ReLU | 32 → 1024 |
| | Fully connected + Batch normalization + ReLU | 1024 → 512 |
| | Fully connected + Batch normalization + ReLU | 512 → 256 |
| | Fully connected + Softmax | 256 → 43 ¹ |

¹ The final model output was the predicted radar pattern class corresponding to the highest value among the softmax outputs.
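As a quick sanity check on Table 3, the fully connected stack's parameter count follows directly from the layer shapes (weights plus biases; batch normalization parameters, two per feature, are omitted for brevity):

```python
# dimension chain of the classifier in Table 3: 32 -> 1024 -> 512 -> 256 -> 43
dims = [32, 1024, 512, 256, 43]

# each fully connected layer contributes in*out weights plus out biases
fc_params = sum(i * o + o for i, o in zip(dims, dims[1:]))
# 33,792 + 524,800 + 131,328 + 11,051 = 700,971 parameters in the FC layers
```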
Table 4. Radar pattern recognition model result for the validation dataset.

| RNN Encoder Type | Vanilla RNN | LSTM | GRU |
|---|---|---|---|
| Pattern recognition accuracy | 87.80% | 97.55% | 97.80% |
Table 5. Radar pattern recognition model result.

| RNN Encoder Type | LSTM | GRU |
|---|---|---|
| Number of parameters | 2112 | 1584 |
| Epochs to reach 90% accuracy | 174 | 65 |
| Epochs to reach 95% accuracy | 536 | 387 |
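The parameter counts in Table 5 follow the standard 4:3 ratio between an LSTM cell (four gates) and a GRU cell (three gates). The input and hidden sizes below are an assumption chosen to reproduce the reported counts; the paper does not state them at this point.

```python
def rnn_cell_params(input_size, hidden_size, n_gates):
    # per gate: input weights + recurrent weights + biases
    return n_gates * (hidden_size * (input_size + hidden_size) + hidden_size)

lstm_params = rnn_cell_params(16, 16, n_gates=4)  # assumed sizes; 4 gates (i, f, g, o)
gru_params = rnn_cell_params(16, 16, n_gates=3)   # 3 gates (r, z, n)
```

With input size 16 and hidden size 16, this yields 2112 and 1584 parameters, matching Table 5 and explaining why the GRU encoder is the lighter of the two.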
Table 6. Search space for Bayesian optimization in the pattern recognition model.

| Hyperparameter | Search Space |
|---|---|
| context_vec_dim | [2, 32] |
| rnn_layers | [1, 8] |
| fc_1 | [256, 1024] |
| fc_2 | [256, 1024] |
| fc_3 | [256, 1024] |
| lr | [0.000005, 0.00005] |

Integer-valued hyperparameters were obtained by rounding the sampled continuous values up to the nearest integer.
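The footnote's rounding rule can be made concrete: continuous values sampled by the Bayesian optimizer are rounded up for integer-valued hyperparameters before a trial is trained. The parameter names follow Table 6; the sample values are made up for illustration.

```python
import math

# hyperparameters from Table 6 that must be integers
INT_PARAMS = {"context_vec_dim", "rnn_layers", "fc_1", "fc_2", "fc_3"}

def finalize_sample(sample):
    """Round sampled values up to integers where Table 6 requires integers."""
    return {k: math.ceil(v) if k in INT_PARAMS else v for k, v in sample.items()}

trial = finalize_sample({"context_vec_dim": 14.2, "rnn_layers": 4.7,
                         "fc_1": 511.3, "fc_2": 640.0, "fc_3": 300.9,
                         "lr": 2.6e-5})  # lr stays continuous
```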
Table 7. Bayesian optimization result for the pattern recognition model (LSTM).

| context_vec_dim | rnn_layers | fc_1 | fc_2 | fc_3 | lr | Score |
|---|---|---|---|---|---|---|
| 3 | 6 | 807 | 281 | 1000 | 2.615 × 10^-5 | 50.1031 |
| 24 | 6 | 768 | 786 | 668 | 6.291 × 10^-6 | 53.1601 |
| 6 | 6 | 376 | 743 | 936 | 3.707 × 10^-5 | 62.2538 |
| 15 | 3 | 659 | 473 | 695 | 4.915 × 10^-5 | 83.7921 |
| 15 | 2 | 257 | 1022 | 665 | 1.226 × 10^-5 | 83.8592 |
| 15 | 5 | 363 | 480 | 537 | 1.811 × 10^-5 | 88.1225 |
| 13 | 6 | 716 | 376 | 376 | 4.778 × 10^-5 | 88.6277 |
| 12 | 4 | 326 | 268 | 426 | 3.549 × 10^-5 | 91.1847 |
| 3 | 5 | 800 | 272 | 1001 | 4.398 × 10^-5 | 92.0817 |
| 8 | 6 | 804 | 270 | 989 | 3.103 × 10^-5 | 92.7828 |
| 21 | 5 | 510 | 986 | 684 | 4.416 × 10^-5 | 95.5356 |
| 22 | 8 | 399 | 488 | 644 | 2.112 × 10^-5 | 95.5872 |
| 24 | 5 | 556 | 696 | 313 | 4.062 × 10^-5 | 96.4275 |
| 26 | 2 | 397 | 490 | 659 | 1.456 × 10^-5 | 96.8141 |
| 28 | 3 | 979 | 677 | 428 | 2.006 × 10^-5 | 97.9637 |
| 14 | 5 | 826 | 316 | 281 | 3.764 × 10^-5 | 98.6442 |
Table 8. Bayesian optimization result for the pattern recognition model (GRU).

| context_vec_dim | rnn_layers | fc_1 | fc_2 | fc_3 | lr | Score |
|---|---|---|---|---|---|---|
| 2 | 1 | 1024 | 433 | 1024 | 2.910 × 10^-5 | 15.7387 |
| 2 | 1 | 1024 | 432 | 1024 | 4.900 × 10^-5 | 17.8266 |
| 15 | 4 | 359 | 487 | 545 | 1.017 × 10^-5 | 77.2090 |
| 6 | 6 | 376 | 743 | 936 | 3.707 × 10^-5 | 77.7451 |
| 13 | 6 | 716 | 376 | 376 | 4.778 × 10^-5 | 78.4101 |
| 9 | 4 | 820 | 294 | 999 | 7.085 × 10^-6 | 79.7196 |
| 15 | 2 | 257 | 1022 | 665 | 1.226 × 10^-5 | 82.7147 |
| 15 | 5 | 363 | 480 | 537 | 1.811 × 10^-5 | 86.2460 |
| 27 | 2 | 398 | 490 | 659 | 1.455 × 10^-5 | 86.3491 |
| 18 | 1 | 724 | 367 | 378 | 1.711 × 10^-5 | 87.4884 |
| 3 | 5 | 789 | 269 | 997 | 4.641 × 10^-5 | 89.5969 |
| 3 | 6 | 807 | 281 | 1000 | 2.615 × 10^-5 | 89.9010 |
| 12 | 1 | 815 | 294 | 995 | 4.367 × 10^-5 | 91.1434 |
| 4 | 5 | 800 | 272 | 1001 | 4.398 × 10^-5 | 98.0823 |
| 5 | 7 | 667 | 256 | 980 | 4.639 × 10^-5 | 98.2472 |
| 8 | 6 | 372 | 745 | 942 | 2.506 × 10^-5 | 98.9587 |
Table 9. Radar emitter classification pipeline test for test data.

| RNN Encoder Type | LSTM | GRU |
|---|---|---|
| Radar emitter classification pipeline accuracy | 98.715% | 98.717% |