Article

A MFR Work Modes Recognition Method Based on Dual-Scale Feature Extraction

Zhiyuan Li, Xuan Fu, Chengjian Mo, Jianlong Tang, Ronghua Guo and Wenbo Li
1 School of Electronic Engineering, Xidian University, Xi’an 710071, China
2 Guilin Changhai Development Co., Ltd., Guilin 541001, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(6), 1054; https://doi.org/10.3390/rs17061054
Submission received: 6 February 2025 / Revised: 3 March 2025 / Accepted: 11 March 2025 / Published: 17 March 2025
(This article belongs to the Section Engineering Remote Sensing)

Abstract

Multi-function radar (MFR) work mode recognition is an important research component of the electronic reconnaissance field. When facing MFR systems equipped with complex mode-waveform mapping relationships and flexible beam scanning techniques, the intercepted work mode pulse sequences have a wide temporal range of feature distributions and variable durations, which poses significant challenges to accurate recognition. To address this issue, this study constructs a novel hierarchical MFR signal model that incorporates waveform multiplexing and waveform scheduling laws based on spatial beam arrangement and proposes a work mode recognition method based on dual-scale feature extraction. The recognition method first obtains variable-length sequence processing capability through pulse sequence segmentation. Then, a structure composed of a convolutional neural network (CNN) and long short-term memory (LSTM) network extracts deep time-series features at the internal-segment scale, and the features of each segment are concatenated along the time dimension. Subsequently, an LSTM-Attention network extracts external-segment-scale features while adaptively assigning higher weights to important waveform segments. Ultimately, the work mode recognition results are obtained. The experimental results show that the proposed method’s performance is advantageous in recognizing work modes under the comprehensive MFR signal model.

1. Introduction

With the development of technologies such as phased arrays, MFR has evolved into a multi-mode, high-data-rate, highly reliable, and highly intelligent perception system, playing a crucial role in various applications. Its strong resource management capability and adaptive spatiotemporal resource allocation allow it to flexibly schedule among multiple tasks, such as range resolution, velocity resolution, and target tracking [1,2], demonstrating complex signal modulation patterns and agile waveform scheduling features. This makes behavioral analysis of non-cooperative MFRs particularly challenging. Work mode recognition is the process of converting intercepted pulse sequences into radar behavioral state symbols and is an important component of assessing non-cooperative MFRs.
Early identification methods use template matching to determine the mode based on pulse description word (PDW) parameters [3]; such methods are simple and easy to use but are no longer applicable to advanced radar systems with overlapping parameters and complex modulation. In order to obtain the system status and the corresponding signal regularity, scholars have modeled the intercepted MFR signals and studied the corresponding analysis methods. Visnevski applied the theory of discrete event systems (DES) to establish a syntactic model of radar signals and proposed the hierarchical structure of MFR for the first time [4,5]. On this basis, models based on hidden Markov models (HMM) [6,7], predictive state representation (PSR) [8,9], and genetic models [10,11], with the corresponding analysis methods, have been proposed. These model-based methods can achieve good results with theoretical support. However, they still have several limitations: (1) they rely on precise prior knowledge, which is difficult to obtain for non-cooperative radars; (2) their representation capability is limited, making it challenging to handle radars with multiple states and complex signal structures; (3) they have high requirements for data quality, leading to significant performance degradation under strongly non-ideal conditions; and (4) they have high computational complexity and demand substantial computational resources [2,12,13,14]. With the development of machine learning and deep learning, researchers have gradually introduced these methods into the fields of radar work mode recognition and inter-pulse modulation recognition. The methods proposed in [15,16] combined the attention mechanism with CNNs to extract high-dimensional features of pulse sequences and achieved satisfactory recognition performance. References [17,18,19] used gated recurrent unit (GRU) and LSTM networks to mine the temporal features in pulse sequences and address the classification of pulse sequence patterns [1,13].
The intercepted MFR signals always arrive in the form of continuous pulse streams. However, the above deep learning methods all take fixed-length pulse parameter segments as input, which limits the potential for feature extraction and recognition of variable-length pulse sequences. To address this problem, Li [20] first introduced the seq2seq framework into the domain of work mode recognition and proposed an LSTM-based seq2seq network to achieve recognition for variable-length sequence inputs. This approach cleverly exploits the variable-length sequence processing capability of the LSTM network and achieves pulse-level work mode recognition by assigning a label to each pulse. Compared to padding variable-length PDW sequences to a fixed length, this method ensures that the network architecture and algorithm performance are not influenced by the choice of a fixed sample length. Additionally, when the MFR work mode samples span a wide range of lengths, it avoids introducing redundant information into shorter samples, thereby preventing wasted computational resources and interference with gradient propagation, ultimately leading to good recognition performance. However, LSTM encounters issues such as gradient vanishing when dealing with extremely long time-series samples, making it difficult to extract large-scale features of long-term task scheduling. Tian [21] proposed a dual-path attention temporal convolution network (DP-ATCN) that can effectively handle recognition of variable-length inputs.
Furthermore, most of the above deep learning-based methods do not establish or adopt a well-defined radar signal model but simply adopt a one-to-one or one-to-many mapping relationship between work modes and waveforms as the basis for analysis and validation. The Mercury radar model was previously made public for algorithm verification; it retains the complete behavioral characteristics of the radar and demonstrates the reuse characteristics between work modes and tasks. However, this radar dates from an early generation, so its model has overly fixed work modes and task templates, which limits its representational capability; it also does not consider the radar’s beam scanning characteristics in space, making it a poor approximation of advanced radars. Therefore, it is necessary to proceed from the radar’s intrinsic behavior, introducing waveform reuse and waveform scheduling rules to establish a high-fidelity MFR model that can support experimental verification.
The main contributions of this paper can be summarized as follows: (1) On the basis of the original MFR model, a more comprehensive hierarchical MFR signal model is established through task subdivision of work modes, construction of a reusable mapping relationship between work modes and tasks, and the development of a scheduling law based on spatial beam arrangement. In this model, the work modes present both time-domain waveform scheduling characteristics and spatial antenna scanning features, making it approximate the behavior of real radars and able to support related research. (2) A work mode recognition framework based on dual-scale feature extraction is adopted, which achieves recognition through pulse segmentation processing and dual-scale temporal feature extraction within and between pulse segments. This framework enables the processing of variable-length sequences through pulse segmentation. It then extracts local waveform parameter features and global waveform scheduling features from the pulse sequence in two steps: internal-segment feature extraction and external-segment feature extraction. This enables the framework to obtain good recognition performance under complex mapping conditions between work modes and waveforms.
The rest of this paper is structured as follows: In Section 2, MFR work mode mapping structure and signal generation model are constructed. In Section 3, the proposed recognition processing framework is introduced, while the experimental data design and analysis of experimental results are presented in Section 4. Section 5 summarizes this paper.

2. MFR Signal Model

2.1. Hierarchical MFR Work Mode Mapping Structure

In the existing research on MFR hierarchical modeling and work mode recognition, MFR signals are mostly modeled by one-to-one or one-to-multiple mapping relationships between work modes and waveforms [5,15,16,17,18,19,20,21], with less consideration given to the multiple-to-multiple mapping case. However, modern MFR work modes exhibit waveform multiplexing, as in the Mercury radar [4]. Based on the analysis of work modes and the refinement of tasks, and considering the scheduling laws of radar antennas, a multi-layer mapping model of work modes, tasks, and waveforms that better aligns with the radar’s complex scheduling mechanism is established [22,23]. For the convenience of further discussion, some terms are defined as follows:
  • Definition 1: A radar pulse $x$ is a vector containing $M$ features, $x = [x_1, x_2, \ldots, x_M]^T \in \mathbb{R}^M$.
  • Definition 2: A radar task $\mathbf{X}$ is described by a fixed combination of finitely many pulses, $\mathbf{X} = [x_1, x_2, \ldots, x_L] \in \mathbb{R}^{M \times L}$. It is regarded as the most basic component of the MFR signal structure, used to perform a specific detection function in a single beam direction.
  • Definition 3: A work mode $\mathcal{X}$ is an ordered arrangement of finitely many tasks, $\mathcal{X} = \{\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_K\}$. It reflects the temporal and spatial behavior of the MFR in task scheduling across multiple beam directions. (A minimal data-structure sketch of these three definitions follows below.)
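To make the hierarchy concrete, the following is a minimal sketch of Definitions 1–3 as Python data structures; the class names (RadarPulse, RadarTask, WorkMode) are illustrative and not part of the paper’s notation.

```python
# A minimal sketch of Definitions 1-3; class names are illustrative only.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class RadarPulse:
    """Definition 1: a vector of M features, e.g., [CF, PW, DTOA, PA]."""
    features: np.ndarray  # shape (M,)

@dataclass
class RadarTask:
    """Definition 2: a fixed combination of L pulses in one beam direction."""
    pulses: List[RadarPulse]  # length L, i.e., an M x L matrix

@dataclass
class WorkMode:
    """Definition 3: an ordered arrangement of K tasks across beam directions."""
    tasks: List[RadarTask]  # length K
```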
The mode structure of a modern fighter aircraft radar arises from its task profiles [22], with the MFR completing its work modes through the scheduling of several tasks. Under distance quantization and function quantization, four basic radar tasks are first defined: long-range velocity resolution, medium-range velocity and range resolution, medium-range tracking, and close-range tracking.
  • Long-range velocity resolution task uses fixed modulated high pulse repetition frequency (HPRF) waveforms to measure the speed of a nose-on target with high approach speed.
  • Medium-range velocity and range resolution task uses medium or high PRF waveforms with inter-pulse modulation for high-precision speed and distance measurement of a wide-angle target.
  • Medium-range tracking task uses medium or high PRF waveforms to provide stable target distance and speed information that meets the requirements of the guidance system.
  • The close-range tracking task also uses medium or high PRF waveforms to provide distance and velocity information for close-range targets. Its duty cycle is relatively small to achieve a better minimum detection distance.
Typical airborne MFRs can perform work modes such as velocity search with ranging (VSR), track-while-scan (TWS), track and search (TAS), multitarget tracking (MTT), and single-target tracking (STT). These work modes are completed by the MFR through the scheduling of task waveforms in time and space, relying on efficient resource management capabilities [16,22,23,24,25].
  • VSR realizes all-aspect autonomous searching by interleaving long-range velocity resolution and medium-range velocity and range resolution tasks between bars and frames. The searching process is illustrated in Figure 1. A bar is a scan segment along a single angular trajectory, and each bar consists of multiple beam positions. A frame includes multiple bars of beam positions, which are arranged to optimally cover the selected space.
  • TWS uses the medium-range velocity and range resolution task to monitor a small area, providing a fast update rate for targets.
  • TAS uses the medium-range velocity and range resolution task to perform searching. It also allocates tracking beams separately and utilizes the medium-range tracking task to track targets at a certain data rate.
  • MTT employs both medium-range and close-range tracking tasks to track multiple targets at medium or close ranges at high data rates.
  • STT utilizes the close-range tracking task to provide continuous and precise detection of close targets.
Based on the above analysis, the mapping and scheduling relationship of work mode and task is established, as shown in Table 1.

2.2. MFR Signal Generation Model

The structure of MFR signals is akin to the grammar of human language, and the signal generation process can be simplified into two steps: task scheduling and radar control. Taking the threat-evaluated work mode decisions from the radar manager as inputs, the task scheduler plans task queues based on the mapping relationship between work modes and tasks. The radar controller then maps the task symbols to appropriate waveforms, and these waveforms are retrieved by the radar for execution [1]. This paper establishes a three-layer MFR signal generation model, which is depicted in Figure 2.
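As an illustration of this two-step process, the sketch below chains a toy task scheduler and radar controller; the mode-to-task and task-to-waveform tables here are abbreviated placeholders, whereas the actual mappings follow Table 1 and the waveform list in Table 2.

```python
# An illustrative sketch of the two-step generation process in Figure 2.
# The mapping tables are abbreviated placeholders, not the full Table 1/2.
import random

MODE_TO_TASKS = {
    "VSR": ["long_range_velocity", "medium_range_vel_range"],
    "TWS": ["medium_range_vel_range"],
    "STT": ["close_range_tracking"],
}
TASK_TO_WAVEFORMS = {  # waveform numbers are hypothetical here
    "long_range_velocity": [1],
    "medium_range_vel_range": [2, 3, 4],  # waveform reuse across modes
    "close_range_tracking": [10, 11],
}

def schedule_tasks(work_mode: str, n_dwells: int) -> list:
    """Task scheduler: plan a task queue for the requested work mode
    (simplified round-robin interleaving over beam dwells)."""
    tasks = MODE_TO_TASKS[work_mode]
    return [tasks[i % len(tasks)] for i in range(n_dwells)]

def control_radar(task_queue: list) -> list:
    """Radar controller: map each task symbol to a concrete waveform number."""
    return [random.choice(TASK_TO_WAVEFORMS[task]) for task in task_queue]

waveform_sequence = control_radar(schedule_tasks("VSR", n_dwells=8))
```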

3. Proposed Method

Faced with data sample inputs of variable length, a dual-scale network structure can effectively extract features at different time scales [26], and a work mode recognition method based on dual-scale feature extraction is therefore proposed in this paper. This method achieves the classification of work modes under the hierarchical MFR signal model by extracting features at the dual scale of pulse segments, and it can handle the recognition of long samples and variable-length samples. The proposed network framework is presented, and the major modules in the framework are elaborated in this section.

3.1. Introduction of the Network Structure

Figure 3 shows a schematic diagram of the MFR working mode recognition framework. The first step is data preprocessing, where the PDW sequence is segmented into multiple fixed-size PDW segments with a certain segment length and shift length. Then, the network model uses CNN, LSTM, and attention mechanisms to carry out internal- and external-segment-scale feature extraction of pulse segments. Finally, the features are mapped to separable space to achieve the purpose of recognition. The details of each processing step are described below.

3.2. Data Preprocessing

Given the nature of the data, it is necessary to preprocess the raw data to ensure the performance and generalization ability of the deep learning model. The preprocessing in this paper includes normalization and segmentation of pulse sequences, as shown in Figure 4.
Normalization eliminates the effect of inconsistent data magnitudes on the model and balances the influence of different features. Additionally, normalization prevents vanishing or exploding gradients, thereby accelerating the network’s convergence. A radar pulse $x$ is represented by parameters including carrier frequency (CF), pulse width (PW), differential pulse arrival time (DTOA), and pulse amplitude (PA), namely $x = [CF, PW, DTOA, PA]^T$. The parameters CF, PW, and DTOA are normalized by max–min normalization, as shown in Equation (1), where $\max(x)$ and $\min(x)$ are the maximum and minimum values of the parameter over all samples.
$$x = \frac{x - \min(x)}{\max(x) - \min(x)} \tag{1}$$
The pulse amplitude sequence exhibits the scanning pattern of the radar antenna over a period of time. It is the overall switching pattern, rather than the specific power values, that reflects the characteristics of the work mode. Therefore, each PA sequence is normalized individually as in Equation (2), where $\min(PA)$ is the minimum value in the single PA sequence, and $F$ represents the typical main-lobe gain of the radar transmitting antenna. Through this processing, the PA sequence is normalized to the same interval as the other parameters.
$$PA = \frac{PA - \min(PA)}{F} \tag{2}$$
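A minimal sketch of both normalization steps follows, assuming each sample is stored as an $(N, 4)$ array with columns $[CF, PW, DTOA, PA]$; the function name and the dataset-wide extrema arguments are illustrative.

```python
# A sketch of Eqs. (1)-(2), assuming an (N, 4) PDW array [CF, PW, DTOA, PA].
import numpy as np

def normalize_pdw(pdw: np.ndarray, global_min: np.ndarray,
                  global_max: np.ndarray, F: float) -> np.ndarray:
    out = pdw.astype(np.float64).copy()
    # Eq. (1): max-min normalization of CF, PW, DTOA using dataset-wide extrema
    out[:, :3] = (out[:, :3] - global_min[:3]) / (global_max[:3] - global_min[:3])
    # Eq. (2): each PA sequence is normalized individually, scaled by the
    # typical main-lobe gain F of the transmitting antenna
    out[:, 3] = (out[:, 3] - out[:, 3].min()) / F
    return out
```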
After normalization, the segmentation operation is performed. Through segmentation, the recognition framework can process PDW sequences of any length, and the individual pulse segments make it easier for the network model to extract features. Segmenting with segment length $w$ and shift length $s$, the PDW sequence $X = [x_1, x_2, \ldots, x_N]$ of length $N$ is divided into $\lceil N/s \rceil$ fixed-size ordered PDW segments, with zero-padding applied to pulse segments shorter than $w$. The $j$-th PDW segment can be represented by Equation (3):
$$P_j = \begin{cases} [x_{(j-1)s+1}, x_{(j-1)s+2}, \ldots, x_{(j-1)s+w}], & j = 1, \ldots, \lceil N/s \rceil - 1 \\ [x_{(j-1)s+1}, x_{(j-1)s+2}, \ldots, x_N, O_{4 \times (\lceil N/s \rceil s - N)}], & j = \lceil N/s \rceil \end{cases} \tag{3}$$
where $O_{4 \times (\lceil N/s \rceil s - N)}$ represents a $4 \times (\lceil N/s \rceil s - N)$ zero matrix.
After preprocessing, radar work mode samples are fed into the network model as a number of ordered pulse segments $P = [p_1, p_2, \ldots, p_{\lceil N/s \rceil}] \in \mathbb{R}^{\lceil N/s \rceil \times w \times 4}$.
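The segmentation of Eq. (3) can be sketched as follows; the helper name is illustrative, and the loop copies each (possibly overlapping) window into a pre-zeroed array so that the final short segment is zero-padded automatically.

```python
# A sketch of the segmentation step in Eq. (3): segment length w, shift s.
import numpy as np

def segment_pdw(pdw: np.ndarray, w: int, s: int) -> np.ndarray:
    """Split an (N, 4) PDW sequence into ceil(N/s) segments of shape (w, 4)."""
    N = pdw.shape[0]
    n_seg = int(np.ceil(N / s))
    segments = np.zeros((n_seg, w, 4), dtype=pdw.dtype)  # zero-padding target
    for j in range(n_seg):
        chunk = pdw[j * s : j * s + w]        # may be shorter than w at the end
        segments[j, : chunk.shape[0]] = chunk
    return segments
```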

3.3. Recognition Model for Work Mode Based on Dual-Scale Feature Extraction

3.3.1. Feature Extraction in Internal-Segment Scale

The CNN-LSTM network structure is designed to extract internal-segment-scale features from each input pulse segment, with the network’s parameters shared across segments. Due to their inherent architectural advantages, CNNs are currently widely used in the processing of images, signals, and other types of data. By employing local connections and weight sharing, CNNs significantly reduce computational complexity compared to fully connected networks. This also helps to avoid model overfitting and improves the generalization ability of the trained model. Furthermore, CNNs can extract shallow waveform parameter features and high-level inter-pulse modulation features from pulse segments through hierarchical feature learning. For the $j$-th pulse segment $p_j \in \mathbb{R}^{w \times 4}$, the CNN network is first utilized to extract the deep feature $p_j^1 \in \mathbb{R}^{w_1 \times u_1}$, where $w_1$ is the feature size output by the CNN network, and $u_1$ is the number of output channels.
Although CNNs can effectively extract deep hidden features, they still struggle to learn temporal structure when pulse segments are treated as time-series inputs. Therefore, an LSTM network is connected after the CNN structure to capture long-term dependencies between deep features. Since its introduction, LSTM has become an important tool in the field of sequence processing [18]. By introducing a forgetting mechanism into the RNN, LSTM alleviates the issues of gradient explosion and gradient vanishing, allowing for better transmission of long-term states. The LSTM calculation is given in Equation (4). At pulse index $t$, $f_t$ is the forget gate vector, $i_t$ is the input gate vector, $\tilde{C}_t$ is the new candidate cell state, $C_t$ is the current cell state, and $o_t$ is the output gate vector. The state vector $h_t$ is obtained from $o_t$ and $C_t$ [20].
$$\begin{aligned} f_t &= \sigma(W_f [h_{t-1}, x_t] + b_f) \\ i_t &= \sigma(W_i [h_{t-1}, x_t] + b_i) \\ \tilde{C}_t &= \tanh(W_c [h_{t-1}, x_t] + b_c) \\ C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \\ o_t &= \sigma(W_o [h_{t-1}, x_t] + b_o) \\ h_t &= o_t \odot \tanh(C_t) \end{aligned} \tag{4}$$
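For illustration, Equation (4) can be transcribed directly as a single LSTM step (in practice the model uses the library implementation torch.nn.LSTM); here W stacks the four gate weight matrices and b the corresponding biases.

```python
# A direct transcription of Eq. (4) as one LSTM update step.
import torch

def lstm_step(x_t, h_prev, c_prev, W, b):
    """W maps the concatenation [h_{t-1}, x_t] to four stacked gate
    pre-activations (forget, input, candidate, output); b is the bias."""
    z = torch.cat([h_prev, x_t], dim=-1) @ W.T + b
    f, i, g, o = z.chunk(4, dim=-1)
    f, i, o = torch.sigmoid(f), torch.sigmoid(i), torch.sigmoid(o)
    c_t = f * c_prev + i * torch.tanh(g)   # cell state update
    h_t = o * torch.tanh(c_t)              # hidden state output
    return h_t, c_t
```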
In the CNN-LSTM structure, to avoid the potential loss of important early sequence information by a unidirectional LSTM, a Bi-LSTM (bidirectional LSTM) structure is employed to capture both the forward and backward contextual information of the sequence, thereby achieving greater global feature extraction capability and enhancing the model’s robustness. Bi-LSTM operates in a bidirectional form, with two independent LSTMs processing the sequence in the forward and backward directions and obtaining the bidirectional hidden state vectors $H^f = [h_1^f, \ldots, h_{w_1}^f]$ and $H^b = [h_1^b, \ldots, h_{w_1}^b]$. Then, $h_{w_1}^f$ and $h_1^b$ are concatenated along the feature dimension to form the feature $p_j^2 = [h_{w_1}^f, h_1^b] \in \mathbb{R}^{2 \times u_2}$, where $u_2$ is the hidden layer dimension of the LSTM network. $p_j^2$ is further reshaped into a one-dimensional feature $p_j^3 \in \mathbb{R}^{1 \times 2u_2}$.
After the LSTM layer, $p_j^3$ passes through a dropout layer as the staged output of the internal-segment-scale feature extraction step for $p_j$. The dropout layer increases the model’s generalization ability and prevents overfitting by randomly shutting down a certain proportion of neurons. The computation is shown in Equation (5).
$$p_j^4 = \mathrm{Dropout}(p_j^3) \tag{5}$$
The internal-segment-scale features extracted by the parameter-shared CNN-LSTM module are concatenated along the time dimension to obtain the feature matrix output $P_I = [p_1^4, p_2^4, \ldots, p_j^4, \ldots, p_{\lceil N/s \rceil}^4] \in \mathbb{R}^{\lceil N/s \rceil \times 2u_2}$. With its LSTM structure, this module can effectively reduce the feature volume. Moreover, by keeping the output feature dimensions unchanged, it ensures that the network’s hyperparameters do not need to be adjusted when the sample length or pulse segment length varies.
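A minimal PyTorch sketch of this internal-segment module is given below, using the layer sizes reported in Section 4.1.2 (Conv1d with 4 input and 32 output channels, kernel 7, stride 3; two max-pooling layers with kernel 3 and stride 2; a Bi-LSTM with hidden size 32). The exact layer ordering and the dropout probability are assumptions.

```python
# A sketch of the internal-segment CNN-(Bi)LSTM module; layer sizes follow
# Section 4.1.2, while the layer ordering and p_drop are assumptions.
import torch
import torch.nn as nn

class InternalSegmentNet(nn.Module):
    def __init__(self, in_ch=4, cnn_ch=32, hidden=32, p_drop=0.5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_ch, cnn_ch, kernel_size=7, stride=3), nn.ReLU(),
            nn.MaxPool1d(kernel_size=3, stride=2),
            nn.MaxPool1d(kernel_size=3, stride=2),
        )
        self.bilstm = nn.LSTM(cnn_ch, hidden, batch_first=True,
                              bidirectional=True)
        self.drop = nn.Dropout(p_drop)

    def forward(self, seg):                  # seg: (batch, w, 4)
        z = self.cnn(seg.transpose(1, 2))    # -> (batch, 32, w1)
        _, (h_n, _) = self.bilstm(z.transpose(1, 2))
        # concatenate the final forward state h_{w1}^f and the backward
        # state h_1^b along the feature dimension -> (batch, 2 * hidden)
        return self.drop(torch.cat([h_n[0], h_n[1]], dim=-1))
```

The same module is applied to every segment, and the per-segment outputs are stacked along the time dimension to form $P_I$.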

3.3.2. Feature Extraction in External-Segment Scale

The LSTM-Attention network structure is designed to extract external-segment-scale features from the sequence of internal-segment-scale features. First, a second Bi-LSTM network structure is used to extract temporal features; since the raw pulse sequence has already been compressed into a feature matrix of manageable length, issues such as gradient vanishing do not arise. Consistent with the description above, the hidden state vectors output by the Bi-LSTM are concatenated and reshaped to obtain the external-segment feature vector $P_E = [\tilde{h}_{\lceil N/s \rceil}^f, \tilde{h}_1^b] \in \mathbb{R}^{1 \times 2u_3}$, where $u_3$ is the hidden layer dimension of the second Bi-LSTM network.
In order to make the network pay more attention to valuable feature channels and suppress unnecessary features, a multi-head attention mechanism is further introduced to achieve adaptive feature weight allocation. The attention mechanism is a method that mimics the human visual and cognitive system and can be described as mapping queries and a set of key-value pairs to an output [27]. The vector $P_E$ is multiplied by three trainable parameter matrices, $W^Q$, $W^K$, and $W^V$, and the output vectors $Q$, $K$, and $V$ are as follows:
$$[Q, K, V] = [W^Q, W^K, W^V] \times P_E \tag{6}$$
Equation (7) shows the calculation of attention, where $d_k$ is the input vector dimension, and $QK^T$ is the self-attention score.
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \tag{7}$$
Compared to the single-head self-attention mechanism, the multi-head attention mechanism captures different interaction information in multiple projection spaces, which enhances the model’s attention to different positions and yields more accurate feature representations. Its computation is as follows:
$$P_{multihead} = \mathrm{concat}(head_1, head_2, \ldots, head_h) W^O \tag{8}$$
$$head_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V), \quad 1 \le i \le h$$
where $W^O$ is an additional weight matrix of the model used to obtain the spatial location information, $head_i$ is the output of the $i$-th self-attention head, and $P_{multihead}$ represents the concatenation of the $h$ self-attention output vectors [26,28].
Based on the output of the attention module, a combination of fully connected layers and a $\mathrm{softmax}$ activation layer is used to classify the work mode categories. $P_{multihead}$ is mapped to the classification space, and the recognition result of the sample is output as follows:
$$\hat{Y} = \mathrm{softmax}(W_{FC} P_{multihead} + b_{FC}) \tag{9}$$
where $W_{FC}$ and $b_{FC}$ are the weights and biases of the fully connected layer.
Through this module, the raw long PDW sequence samples are further processed to extract external-segment features and are mapped into the classification space. Compared to other deep learning methods, the proposed approach, utilizing pulse segmentation processing and a dual-scale feature extraction structure, can handle flexible input data and effectively extract local and global features from long sequence inputs, while preventing issues such as gradient explosion caused by an excessive number of features.
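The external-segment module can be sketched as follows, again using the sizes from Section 4.1.2 (second Bi-LSTM with input size 64 and hidden size 64, giving a 128-dimensional $P_E$; 8 attention heads; a 128-to-6 linear classifier). Treating the single pooled vector $P_E$ as a length-1 attention sequence and deferring the softmax to the loss are implementation assumptions.

```python
# A sketch of the external-segment Bi-LSTM + multi-head attention module;
# sizes follow Section 4.1.2, structural details are assumptions.
import torch
import torch.nn as nn

class ExternalSegmentNet(nn.Module):
    def __init__(self, in_dim=64, hidden=64, n_heads=8, n_classes=6):
        super().__init__()
        self.bilstm = nn.LSTM(in_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, n_heads,
                                          batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, seg_feats):              # (batch, n_segments, 64)
        _, (h_n, _) = self.bilstm(seg_feats)
        p_e = torch.cat([h_n[0], h_n[1]], dim=-1).unsqueeze(1)  # (batch, 1, 128)
        ctx, _ = self.attn(p_e, p_e, p_e)      # multi-head self-attention
        return self.fc(ctx.squeeze(1))         # logits; softmax is in the loss
```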

3.3.3. Loss Function

In model training, the cross-entropy between the true labels and the predicted labels is used as the loss function for backpropagation. The cross-entropy loss function measures the difference between the model’s predicted distribution and the true label distribution. Due to its well-behaved and easily optimized gradient properties, the combination of the cross-entropy loss function with the $\mathrm{softmax}$ activation function converges quickly and yields excellent results in classification tasks such as image and text classification, making it the standard choice for classification networks.
The loss function calculation for M training samples is as follows:
$$loss = -\frac{1}{M} \sum_{i=1}^{M} Y_i \log \hat{Y}_i \tag{10}$$
where $Y_i$ represents the true labels, and $\hat{Y}_i$ represents the predicted outputs.
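In PyTorch, Eq. (10) corresponds to the standard cross-entropy criterion; note that torch.nn.CrossEntropyLoss fuses the softmax of Eq. (9) with the negative log-likelihood, so the classifier is applied to raw logits. The tensors below are placeholders for a real batch.

```python
# Eq. (10) via the standard fused softmax + cross-entropy criterion.
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(64, 6)           # a batch of 64 samples, 6 work modes
labels = torch.randint(0, 6, (64,))   # integer class labels
loss = criterion(logits, labels)
```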

4. Simulation and Analysis

4.1. Dataset Description and Experimental Setup

4.1.1. Dataset Description

  • Waveform Parameters and Mapping Structure
We use simulated data to validate the effectiveness of the proposed method; the data from reference [29] were slightly augmented to create a parameter table containing 11 waveforms, as shown in Table 2. The parameter characteristics of different waveforms have significant statistical overlap.
Based on the prepared waveform parameters, radar work mode data simulation is conducted under two MFR signal models, respectively. Signal model 1 establishes a one-to-multiple mapping relationship between work mode and waveform, as shown in Figure 5a, which uses the MFR two-layer mapping structure described in [20]. Signal model 2 uses the more comprehensive mapping structure and scheduling law described in Section 2, where the work mode is mapped to the waveform in a multiple-to-multiple manner through tasks, as shown in Figure 5b.
  • Antenna Scanning and Scene Parameters
We set the antenna scanning model and the spatial scene parameters of radar, target, and reconnaissance receiver, as shown in Table 3 [16]. The intercepted MFR pulse sequence can be simulated based on Table 1 and Table 3. The pulse sequence includes CF, PW, DTOA, and PA, in which the PA sequence has the antenna scanning characteristics of work mode.
  • Dataset
Based on the established waveform parameters, the PDW sequences of work modes are simulated for the two MFR signal models. The intercepted MFR signal is inevitably affected by non-ideal conditions, such as low signal-to-noise ratio (SNR) transmission, which lead to incomplete observation of the pulse sequences and manifest as measurement errors, pulse loss, etc. [30]. Measurement error is simulated by adding Gaussian-distributed deviations to the CF, PW, DTOA, and PA parameters of the pulses, as shown in Table 4. Additionally, we randomly drop $p\%$ of the pulses from the pulse train to simulate the pulse loss scenario; the simulated pulse loss rate levels are shown in Table 5. Further, different degrees of measurement error and pulse loss are combined to simulate complex scenarios; Table 6 shows the simulation levels of pulse loss and measurement error. Finally, by constructing different sample pulse-number distributions, the case where the sample length is not fixed in the intercepted work mode is simulated; the sample length distribution is shown in Table 7.
Two sets of training datasets were generated for the two MFR signal models based on the three scenarios in Table 4, Table 5 and Table 6, where each work mode category contains 1800 samples, and each sample is a PDW sequence with 4000 to 10,000 pulses.
For the two signal models, six test datasets $D_i, i = 1, 2, \ldots, 6$ were generated based on the three scenarios in Table 4, Table 5, and Table 6, respectively, for comparing the impact of non-ideal conditions on the different methods across the signal models. Further, under the second signal model, the test dataset $D_7$ is generated under the last complex scenario shown in Table 6, with the sample length distribution conditions shown in Table 7. This dataset is used to analyze the impact of sample length, segment length, and shift length on the method’s performance under the complex scenario. In the test datasets, each category contains 300 samples.
The original training and test datasets established above are organized into tensors of shape $N \times 4$ as input for the methods that can handle variable-length sequences and are further divided into tensors of shape $\lceil N/w \rceil \times w \times 4$ as input for the other methods.

4.1.2. Experimental Setup

  • Implementation Details
The experimental platform is built on the Win 10 system equipped with an Intel(R) Xeon(R) Gold 6133 CPU @2.50 GHz and an NVIDIA RTX A6000. Simulation data were generated by MATLAB R2022a, and the deep learning network framework was developed using PyCharm 2022.2.3, based on Python 3.9 and PyTorch 1.12.
In the experiment, the CNN block has an input channel count of 4, an output channel count of 32, a convolution kernel size of 7, a convolution stride of 3, two max-pooling layers with a pooling kernel size of 3 and a stride of 2, and uses the ReLU activation function. The first Bi-LSTM has 32 input channels, 1 layer, and a hidden layer size of 32. The second Bi-LSTM has 64 input channels, 1 layer, and a hidden layer size of 64. The attention layer has 128 input channels and 8 attention heads. The linear layer has 128 input channels and 6 output channels. Adam is chosen for optimization, with a learning rate of 0.0001, a training batch size of 64, and 80 training epochs (see the training-loop sketch below).
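These settings translate into the following training-loop sketch; the model and the data loader are stand-ins for the full pipeline assembled from the two modules sketched in Section 3.3.

```python
# A training-loop sketch with the reported settings (Adam, lr 1e-4,
# batch size 64, 80 epochs, cross-entropy); model and loader are stand-ins.
import torch
import torch.nn as nn

def train(model: nn.Module, loader, device: str = "cuda") -> None:
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(80):
        for segments, labels in loader:   # segments: (64, n_seg, w, 4)
            optimizer.zero_grad()
            loss = criterion(model(segments.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
```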
  • Evaluation metric
The accuracy (acc) is used to verify the classification recognition ability. Accuracy is the proportion of correctly identified samples to the total samples, which is defined as follows:
$$acc = \frac{1}{M} \sum_{i=1}^{M} m_i, \quad m_i = \begin{cases} 1, & \text{if } \hat{Y}_i = Y_i \\ 0, & \text{otherwise} \end{cases} \tag{11}$$
where M is the total number of test samples.
Since the baseline method processing framework only handles segmented pulse segments, a PDW sequence sample of length $N$ yields $\lceil N/w \rceil$ predicted results. Therefore, the majority vote over these predictions is taken as the recognition result of the work mode sample:
$$\hat{Y} = \mathrm{Majority}(\hat{y}_1, \ldots, \hat{y}_j, \ldots, \hat{y}_{\lceil N/w \rceil}) \tag{12}$$
where $\hat{y}_j$ is the $j$-th segment’s predicted result.
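A small sketch of both evaluation rules, the Eq. (11) accuracy and the Eq. (12) majority vote used for the segment-wise baselines, is given below; the function names are illustrative.

```python
# Sketches of Eq. (11) accuracy and the Eq. (12) majority vote.
from collections import Counter

def accuracy(y_pred: list, y_true: list) -> float:
    """Eq. (11): fraction of samples whose predicted mode matches the label."""
    return sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)

def majority_vote(segment_preds: list):
    """Eq. (12): the most frequent segment-level prediction labels the sample."""
    return Counter(segment_preds).most_common(1)[0][0]
```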

4.2. Validation Result

This section presents experiments to verify the feasibility of the proposed method and its recognition capability under different conditions. Experiment 1 compares the recognition performance of the proposed method with baseline methods for the two MFR signal models. Experiment 2 is conducted only on signal model 2 to verify the impact of sample length distribution, segment length, and shift length on recognition capability.
  • Experiment 1: Non-ideal Scenario Experiment
This experiment was carried out on the datasets $D_i, i = 1, 2, \ldots, 6$ of the two MFR signal models to verify the recognition performance of the proposed method in non-ideal environments. The baseline methods CNN [15], bi-GRU [18], bi-LSTM [20], and CNN-LSTM were used on the dataset generated by MFR model 1. Additionally, when validating the dataset generated by MFR model 2, the DP-ATCN [21] method, which can handle long sequence inputs, was also introduced.
Figure 6 shows the recognition results on the signal model 1 dataset under the three test scenarios. At a lower pulse loss level, the accuracy of these methods decreases as the measurement error worsens, but all of them still maintain more than 97% category discrimination. Pulse loss can disrupt the temporal distribution of a PRI parameter sequence with overlapping parameter ranges and complex modulation, while the PW, RF, and PA parameters still retain recognition separability within their respective ranges, resulting in a recognition accuracy above 94%. In the complex scenarios with both measurement error and pulse loss, the accuracy shows a more obvious downward trend. However, because the multi-dimensional features expand the separability space and the deep network model extracts effective features from noisy data, the proposed method and the baseline methods can all maintain a recognition accuracy above 90%. The overall results show that, under signal model 1, the proposed method is generally superior to the baseline methods. Although its recognition accuracy is not dramatically higher, this result verifies that our method has competitive recognition performance under a simple mapping structure. The superiority of our method is mainly manifested in recognition under complex waveform mapping models.
Figure 7 shows the recognition results on the signal model 2 dataset under the three test scenarios, while Figure 8 shows the confusion matrices of the five methods for the last scenario of the complex dataset $D_7$. Owing to its special processing structure, the proposed method can effectively extract dual-scale features, and its recognition performance in the three scenarios is significantly better than that of the comparison methods. DP-ATCN can also extract waveform scheduling features, but due to its lack of bidirectional causal feature extraction capability and the redundant features introduced by its deeper model, its performance is slightly inferior to the proposed method. The other waveform-based recognition methods are clearly unsuitable for recognition under this MFR model because they can only extract features within individual pulse segments, ultimately resulting in poor recognition performance. As non-ideal conditions intensify, the separability of PRIs with overlapping parameter ranges and complex modulation types is poor, while the preserved discriminability of the PW, RF, and PA parameters within their respective ranges ensures recognition separability. Therefore, the proposed method maintains over 93% recognition accuracy in the three scenarios, and the other methods also show relatively stable recognition results. As shown in Figure 5b, there is waveform reuse between TAS, TWS, and VSR, as well as between STT and MTT. However, the CNN, bi-GRU, bi-LSTM, and CNN-LSTM methods segment work mode samples as network inputs, which produces identical pulse segment features across different modes and leads to widespread misidentification. As a result, the overall recognition accuracy of these methods can only be maintained between 70% and 78%. The experimental results indicate that the proposed method has significant advantages when facing radars with complex waveform mapping models.
  • Experiment 2: Impact of Sample Length, Segment Length, and Shift Length
This experiment is conducted solely on the dataset $D_7$, which is based on MFR signal model 2. The proposed method can handle sample inputs of variable length, and this experiment verifies the impact of sample length, segment length, and shift length on its recognition ability. The recognition accuracy at five sample length distribution levels and five segment length and shift length selections is shown in Table 8.
The proposed method exhibits differential performance when processing samples with varying sample length distributions. As shown in Table 8, when the sample length is between 100 and 2000, the recognition performance is quite low. This is because short samples only capture a small part of the MFR’s spatial scanning and waveform scheduling pulse sequence over a brief period, which is insufficient to reflect the complete work mode of the radar. This ultimately leads to incorrect discrimination between work modes such as VSR, TWS, and TAS, which use the same waveforms. When the sample length falls within the other four distributions, the samples contain complete work mode information, and the proposed method achieves over 90% recognition accuracy in the complex scenario. It can also be noted from Table 8 that, as the sample length increases, the accuracy under each segment selection condition shows a slight downward trend. This is because long samples are divided into more pulse segments, which affects the accurate extraction of features.
Selecting longer segment lengths does not necessarily have a positive impact on recognition in the complex scenario. As shown in Table 8, under a fixed sample length, as the pulse segment length and shift length increase, recognition performance exhibits a downward trend, which is more pronounced for long samples and long segments. This is because, in the Bi-LSTM structure of the internal-segment-scale feature extraction, longer segments require a larger time span to output the hidden state vector of the last time step, which increases the model’s complexity and introduces redundant information. Moreover, through compact segmentation, the local features of PRI sequences with complex modulation can be fully preserved.
This experiment verified that appropriate sample length and pulse segmentation methods could help the network extract more representative features, resulting in better recognition performance.

5. Conclusions

Regarding the issue of work mode recognition under the complex mapping between MFR work modes and waveforms, this paper proposes a novel hierarchical MFR signal model and a recognition method based on dual-scale feature extraction. First, by introducing multiplexing mapping of work modes and tasks and scheduling laws based on spatial beam arrangement, a multiple-to-multiple mapping relationship between work modes and waveforms is established, and a comprehensive MFR signal model is constructed as the basis for analysis. Second, a pulse sequence processing framework based on deep learning is built. This framework achieves the ability to process variable-length pulse sequences through pulse sequence segmentation. Furthermore, dual-scale feature extraction, comprising internal-segment-scale and external-segment-scale feature extraction, is realized so as to complete the extraction of large-scale features and the classification of work modes. Simulation experiments have verified that the proposed method exhibits better performance in non-ideal environments with measurement errors and pulse loss. Additionally, the impact of sample length, segment length, and shift length on recognition performance is analyzed.
This research still has certain limitations. First, due to the sensitivity of radar technology, this paper does not validate the performance of radar work mode recognition in real-world scenarios and lacks research related to the MFR model’s syntactic representation and waveform parameter derivation, which are common issues in this field. Additionally, it should be noted that an MFR emits a fixed combination of pulse sequences within a beam position, which acts as the most basic component of the signal structure; however, the proposed processing method uses the divided pulse segment as the minimum unit for fusing deep features. Due to computational costs and other reasons, alternative methods such as transformers were not explored, which presents certain limitations. In the future, we plan to introduce methods from the field of natural language processing (NLP) to investigate MFR work mode recognition based on beam position boundary segmentation structures and validate it under more stringent conditions.

Author Contributions

Conceptualization, Z.L.; methodology, Z.L.; software, Z.L. and X.F.; validation, Z.L. and X.F.; formal analysis, J.T.; investigation, Z.L.; resources, J.T.; data curation, C.M.; writing—original draft preparation, Z.L.; writing—review and editing, J.T. and R.G.; visualization, R.G.; supervision, W.L.; project administration, J.T.; funding acquisition, J.T. and C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets analyzed during this study are available from the corresponding author on reasonable request.

Conflicts of Interest

Author Chengjian Mo was employed by the company Guilin Changhai Development Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Wang, A.; Krishnamurthy, V. Modeling and Interpretation of Multifunction Radars with Stochastic Grammar. In Proceedings of the 2008 IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2008; pp. 1–13. [Google Scholar]
  2. Zhang, Z.; Shi, X.; Zhou, F. An Incremental Recognition Method for MFR Working Modes Based on Deep Feature Extension in Dynamic Observation Scenarios. IEEE Sens. J. 2023, 23, 21574–21587. [Google Scholar] [CrossRef]
  3. Zhang, S. Study on the Methods of Radar Function Inference and Software Design. Master’s Thesis, Xidian University, Xi’an, China, 2017. [Google Scholar]
  4. Visnevski, N. Syntactic Modeling of Multi-Function Radars. Ph.D. Thesis, McMaster University, Hamilton, ON, Canada, 2005. [Google Scholar]
  5. Visnevski, N.; Krishnamurthy, V.; Haykin, S.; Currie, B.; Dilkes, F.; Lavoie, P. Multi-Function Radar Emitter Modelling: A Stochastic Discrete Event System Approach. In Proceedings of the 42nd IEEE International Conference on Decision and Control (IEEE Cat. No.03CH37475), Maui, HI, USA, 9–12 December 2003; Volume 6, pp. 6295–6300. [Google Scholar]
  6. Visnevski, N.; Haykin, S.; Krishnamurthy, V.; Dilkes, F.A.; Lavoie, P. Hidden Markov Models for Radar Pulse Train Analysis in Electronic Warfare. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’05), Philadelphia, PA, USA, 18–23 March 2005; Volume 5, pp. v/597–v/600. [Google Scholar]
  7. Li, C.; Wang, W.; Wang, X. A Method for Extracting Radar Words of Multi-Function Radar at Data Level. In Proceedings of the IET International Radar Conference 2013, Xi’an, China, 14–16 April 2013; pp. 1–5. [Google Scholar]
  8. Ou, J.; Chen, Y.; Zhao, F.; Liu, J.; Xiao, S. Novel Approach for the Recognition and Prediction of Multi-Function Radar Behaviours Based on Predictive State Representations. Sensors 2017, 17, 632. [Google Scholar] [CrossRef] [PubMed]
  9. Ou, J.; Chen, Y.; Zhao, F.; Liu, J.; Xiao, S. Method for operating mode identification of multi-function radars based on predictive state representations. IET Radar Sonar Navig. 2017, 11, 426–433. [Google Scholar] [CrossRef]
  10. Ma, S.; Liu, Z.; Jiang, L. The application of gene techniques to multi-function radar signal analysis. ACTA Electron. Sin. 2013, 41, 2374–2381. [Google Scholar]
  11. Ma, S.; Liu, Z.; Jiang, L. A method for multifunction radar pulse train analysis based on amplitude change point detection. ACTA Electron. Sin. 2013, 41, 1436–1441. [Google Scholar]
  12. Wang, S.; Zhu, M.; Li, Y. Recognition, Inference and Prediction of Advanced Multi-Function Radar System Behaviors: Overview and Prospects. J. Signal Process. 2024, 40, 17–55. [Google Scholar] [CrossRef]
  13. Zhao, Y.; Wang, X.; Huang, Z. Multi-Function Radar Modeling: A Review. IEEE Sens. J. 2024, 24, 31658–31680. [Google Scholar] [CrossRef]
  14. Feng, H.C.; Jiang, K.L.; Zhou, Z.; Zhao, Y.; Yan, H.X.; Tian, K.; Tang, B. Syntactic Modeling and Neural-Based Parsing for Multifunction Radar Signal Interpretation. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 5060–5072. [Google Scholar] [CrossRef]
  15. Wu, B.; Yuan, S.; Li, P.; Jing, Z.; Huang, S.; Zhao, Y. Radar Emitter Signal Recognition Based on One-Dimensional Convolutional Neural Network with Attention Mechanism. Sensors 2020, 20, 6350. [Google Scholar] [CrossRef] [PubMed]
  16. Li, W.; Dong, Y.-Y.; Zhang, L.; Dong, C. Time Domain Attention Mechanism Based Multi-Functional Radar Working Mode Recognition. In Proceedings of the 2023 8th International Conference on Communication, Image and Signal Processing (CCISP), Chengdu, China, 17–19 November 2023; pp. 457–462. [Google Scholar]
  17. Liu, Z.-M. Recognition of Multifunction Radars Via Hierarchically Mining and Exploiting Pulse Group Patterns. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 4659–4672. [Google Scholar] [CrossRef]
  18. Liu, Z.-M.; Yu, P.S. Classification, Denoising, and Deinterleaving of Pulse Streams With Recurrent Neural Networks. IEEE Trans. Aerosp. Electron. Syst. 2019, 55, 1624–1639. [Google Scholar] [CrossRef]
  19. He, C.; Zhang, L.; Wei, S.; Fang, Y. Multifunction Radar Working Mode Recognition With Unsupervised Hierarchical Modeling and Functional Semantics Embedding Based LSTM. IEEE Sens. J. 2024, 24, 22698–22710. [Google Scholar] [CrossRef]
  20. Li, Y.; Zhu, M.; Ma, Y.; Yang, J. Work Modes Recognition and Boundary Identification of MFR Pulse Sequences with a Hierarchical Seq2seq LSTM. IET Radar Sonar Navig. 2020, 14, 1343–1353. [Google Scholar] [CrossRef]
  21. Tian, T.; Zhang, Q.; Zhang, Z.; Niu, F.; Guo, X.; Zhou, F. Shipborne Multi-Function Radar Working Mode Recognition Based on DP-ATCN. Remote Sens. 2023, 15, 3415. [Google Scholar] [CrossRef]
  22. Skolnik, M. Radar Handbook, 3rd ed.; McGraw Hill LLC: New York, NY, USA, 2008. [Google Scholar]
  23. Huang, Y.; Feng, K.; Ji, K.; Wang, M.; Yu, X.; Yi, W.; Zhang, L. Multi-Function Radar Electronic Scan Type Recognition Based On Connected Component Analysis. In Proceedings of the 2024 9th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 12–14 July 2024; pp. 32–36. [Google Scholar]
  24. Ma, K. Research on Air-to-Air Operation States Analysis and Identification of Airborne Fire Control Radar. Master’s Thesis, National University of Defense Technology, Changsha, China, 2021. [Google Scholar]
  25. Tian, W. Research on Working Pattern Recognition and Intention Reasoning Technology of Phased Array Radar. Master’s Thesis, Xidian University, Xi’an, China, 2022. [Google Scholar]
  26. Zhang, Z.; Zhu, M.; Li, Y.; Wang, S. JDMR-Net: Joint Detection and Modulation Recognition Networks for LPI Radar Signals. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 7575–7589. [Google Scholar] [CrossRef]
  27. Vaswani, A. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  28. Fang, Y.; Zhai, Q.; Zhang, Z.; Yang, J. Change Point Detection for Fine-Grained MFR Work Modes with Multi-Head Attention-Based Bi-LSTM Network. Sensors 2023, 23, 3326. [Google Scholar] [CrossRef] [PubMed]
  29. Feng, H.C.; Jiang, K.L.; Zhao, Y.X.; Al-Malahi, A.; Tang, B. Self-Supervised Contrastive Learning for Extracting Radar Word in the Hierarchical Model of Multifunction Radar. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 9621–9634. [Google Scholar] [CrossRef]
  30. Guo, R.; Dong, Y.-Y.; Zhang, L.; Dong, C.; Bao, D.; Li, W.; Li, Z. Radar Reconnaissance Pulse-Splitting Modeling and Detection Method. Remote Sens. 2024, 16, 521. [Google Scholar] [CrossRef]
Figure 1. VSR searching process.
Figure 2. MFR signal generation model.
Figure 3. MFR work mode recognition framework.
Figure 4. Preprocessing description.
Figure 5. MFR work mode mapping structure. (a) Mapping structure of signal model 1; (b) mapping structure of signal model 2.
Figure 6. Comparison of accuracy between different methods for signal model 1 in non-ideal conditions. (a) Measurement error scenarios; (b) lost pulse scenarios; (c) complex scenarios.
Figure 7. Comparison of accuracy between different methods for signal model 2 in non-ideal conditions. (a) Measurement error scenarios; (b) lost pulse scenarios; (c) complex scenarios.
Figure 8. Confusion matrices for signal model 2 in complex scenario 6. (a) Proposed method; (b) DP-ATCN; (c) CNN; (d) CNN-LSTM; (e) GRU; (f) LSTM.
Table 1. Work mode mapping and scheduling relationship.

| Work Mode | Task Mapping | Task Scheduling |
| --- | --- | --- |
| VSR | Long-range velocity resolution; medium-range velocity and range resolution | Schedule in scanning cycles, interleaving the two tasks |
| TWS | Medium-range velocity and range resolution | Schedule in scanning cycles |
| TAS | Medium-range velocity and range resolution; medium-range tracking | Schedule in scanning cycles; update at the data rate for multiple targets |
| MTT | Medium-range tracking and close-range tracking | Update at the data rate for multiple targets |
| STT | Close-range tracking | Update at the data rate for a single target |
Table 2. Parameters for the waveforms.

| Waveform Number | CF (GHz) | PRI (μs) | PW (μs) | Number of Pulses |
| --- | --- | --- | --- | --- |
| 1 | Group agility/9.5~9.6 | Fixed/15~18 | Fixed/2 | 200 |
| 2 | Group agility/9.5~9.6 | Group change/15, 18, 20.5, 22 | Fixed/2.5 | 200 |
| 3 | Group agility/9.5~9.6 | Stagger/12, 20, 24, 27 | Fixed/2.5 | 200 |
| 4 | Group agility/9.6~9.7 | Group change/75, 90, 95, 115, 125 | Fixed/6 | 100 |
| 5 | Group agility/9.6~9.7 | Wobulate/100~200 | Fixed/6 | 100 |
| 6 | Group agility/9.6~9.7 | Sliding/70~170 | Fixed/6 | 100 |
| 7 | Group agility/9.65~9.75 | Fixed/90~110 | Fixed/3 | 100 |
| 8 | Group agility/9.7~9.8 | Group change/70, 78, 85, 90 | Fixed/2.5 | 100 |
| 9 | Group agility/9.7~9.8 | Fixed/20~30 | Fixed/2.5 | 100 |
| 10 | Group agility/9.7~9.8 | Group change/60, 65, 73, 80 | Fixed/2 | 80 |
| 11 | Group agility/9.7~9.8 | Fixed/15~25 | Fixed/2 | 80 |
Table 3. Antenna model and equipment parameters.

| | Parameters | Value |
| --- | --- | --- |
| Radar | Spatial wave position arrangement mode | Stagger |
| | Antenna scanning mode | Raster scan ¹ |
| | Antenna half-power beamwidth (degree) | 3.6 |
| | Search airspace azimuth range (degree) | −30~30 |
| | Search airspace pitch range (line) | 2~4 |
| Target | Target number | 1 (STT); 1~5 (TAS); 1~7 (MTT) |
| | Tracking data rate (times/second) | 9~11 (STT); 1~3 (TAS); 2~4 (MTT) |
| | Relative radar azimuth range (degree) | 6~8 |
| | Relative radar pitch range (degree) | 0 |
| | Relative radar azimuthal velocity range (degree/second) | −0.003~0.003 |
| | Relative radar pitch velocity range (degree/second) | 0 |
| Reconnaissance receiver | Relative radar azimuth range (degree) | 6~8 |
| | Relative radar pitch range (degree) | 0 |

¹ The scanning mode with task scheduling law is determined by Table 1.
Table 4. Level of measurement error.

| Number | σ_CF (MHz) | σ_PW (μs) | σ_PRI (μs) | σ_PA (dB) | Lost Rate (%) |
| --- | --- | --- | --- | --- | --- |
| 1 | 3 | 0.3 | 3 | 0.3 | 5 |
| 2 | 4 | 0.4 | 4 | 0.4 | 5 |
| 3 | 5 | 0.5 | 5 | 0.5 | 5 |
| 4 | 6 | 0.6 | 6 | 0.6 | 5 |
| 5 | 7 | 0.7 | 7 | 0.7 | 5 |
| 6 | 8 | 0.8 | 8 | 0.8 | 5 |
Table 5. Level of lost pulse.

| Number | σ_CF (MHz) | σ_PW (μs) | σ_PRI (μs) | σ_PA (dB) | Lost Rate (%) |
| --- | --- | --- | --- | --- | --- |
| 1 | 3 | 0.3 | 3 | 0.3 | 10 |
| 2 | 3 | 0.3 | 3 | 0.3 | 20 |
| 3 | 3 | 0.3 | 3 | 0.3 | 30 |
| 4 | 3 | 0.3 | 3 | 0.3 | 40 |
| 5 | 3 | 0.3 | 3 | 0.3 | 50 |
| 6 | 3 | 0.3 | 3 | 0.3 | 60 |
Table 6. Level of measurement error and lost pulse.

| Number | σ_CF (MHz) | σ_PW (μs) | σ_PRI (μs) | σ_PA (dB) | Lost Rate (%) |
| --- | --- | --- | --- | --- | --- |
| 1 | 3 | 0.3 | 3 | 0.3 | 10 |
| 2 | 4 | 0.4 | 4 | 0.4 | 20 |
| 3 | 5 | 0.5 | 5 | 0.5 | 30 |
| 4 | 6 | 0.6 | 6 | 0.6 | 40 |
| 5 | 7 | 0.7 | 7 | 0.7 | 50 |
| 6 | 8 | 0.8 | 8 | 0.8 | 60 |
Table 7. Pulse sample length distribution.

| Number | Distribution Range |
| --- | --- |
| 1 | 100~2000 |
| 2 | 2000~4000 |
| 3 | 4000~6000 |
| 4 | 6000~8000 |
| 5 | 8000~10,000 |
Table 8. Recognition accuracy of different sample lengths, segment lengths, and shift lengths (column headers give segment length, shift length pairs).

| Sample Length | 64, 64 | 128, 64 | 256, 128 | 512, 256 | 1024, 512 | Average Accuracy |
| --- | --- | --- | --- | --- | --- | --- |
| 100~2000 | 91.60% | 91.33% | 84.80% | 81.20% | 61.60% | 82.11% |
| 2000~4000 | 96.57% | 96.33% | 94.00% | 92.33% | 88.60% | 94.56% |
| 4000~6000 | 96.40% | 94.73% | 93.87% | 90.80% | 88.47% | 92.85% |
| 6000~8000 | 96.13% | 93.13% | 91.33% | 89.93% | 85.93% | 91.37% |
| 8000~10,000 | 96.00% | 91.93% | 90.87% | 89.07% | 83.53% | 90.20% |
| Average accuracy | 95.32% | 93.49% | 90.97% | 88.66% | 83.22% | |
