Article

Realizing Small UAV Targets Recognition via Multi-Dimensional Feature Fusion of High-Resolution Radar

by Wen Jiang, Zhen Liu, Yanping Wang *, Yun Lin, Yang Li and Fukun Bi
Radar Monitoring Technology Laboratory, School of Information Science and Technology, North China University of Technology, Beijing 100144, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(15), 2710; https://doi.org/10.3390/rs16152710
Submission received: 7 June 2024 / Revised: 19 July 2024 / Accepted: 23 July 2024 / Published: 24 July 2024

Abstract:
For modern radar systems, small unmanned aerial vehicles (UAVs) are a typical class of targets with 'low, slow, and small' characteristics. In complex operational environments, radar systems must not only achieve stable detection and tracking but also effectively recognize small UAV targets. In this paper, a multi-dimensional feature fusion framework for small UAV target recognition is proposed that uses a small-sized, low-cost, high-resolution radar and fully extracts and combines the geometric structure features and micro-motion features of small UAV targets. For the performance analysis, echo data of different small UAV targets were measured with a millimeter-wave radar, and a dataset consisting of high-resolution range profiles (HRRP) and micro-Doppler time–frequency spectrograms was constructed for training and testing. The effectiveness of the proposed method was demonstrated by a series of comparison experiments: its overall accuracy reaches 98.5%, showing that the multi-dimensional feature fusion method achieves better recognition performance than classical algorithms and higher robustness than single-feature methods for small UAV targets.

1. Introduction

A small unmanned aerial vehicle (UAV) is an aircraft that can fly autonomously or be piloted remotely and has broad application prospects and potential. With advances in science and technology and falling costs, small UAVs play a vital role in economic construction and social development, offering high efficiency, low cost, and multi-aspect monitoring in agricultural plant protection, forest firefighting, environmental monitoring, etc. [1]. Although the development of small UAVs has brought convenience to our lives and work, it has also adversely affected public safety. Because of their low price and simple operation, small UAVs are also abused in unsafe and even criminal behavior, threatening economic development and national security [2]. Therefore, the recognition of small UAV targets has important application value.
Existing small UAV detection and recognition technologies fall into four main categories: radio-based [3], photoelectric-based [4], audio-based [5], and radar-based detection [6]. Radio-based detection uses radio frequency scanning to monitor, analyze, and direction-find the transmission signal band in real time. It can capture the UAV control signal waveform by scanning the frequency band and compare it with waveforms in a system library to determine whether a small UAV is present and identify its type. However, single-station radio-based measurement usually yields only target bearing information, with low measurement accuracy [7]. Photoelectric-based detection mainly uses visible light and infrared images to detect and recognize small UAV targets, but it is easily affected by light and weather. In addition, the photoelectric signature of small UAVs is weak, the signal-to-noise ratio is low, and target masking further increases the difficulty of detection, tracking, and recognition [8]. Audio-based detection measures the sound of propeller rotation and performs well on large UAVs; however, it is susceptible to noise and clutter, and since small and medium-sized UAVs are quiet, its performance on small UAVs is poor [9]. As an important means of air target detection, radar is widely used for detecting and recognizing small UAV targets since it works well in bad weather or weak light and does not require any cooperative target signal. However, small UAV targets usually fly at low altitudes and low speeds, and their radar cross-section is small and susceptible to background clutter, so radar-based recognition of small UAV targets also faces great challenges [10].
High-resolution radars are capable of capturing subtle characteristics of small UAV targets, such as structural and motion characteristics. In [11], the German Institute of Applied Sciences built a multi-channel external radiation source radar detection system using ambient Global System for Mobile communications signals to explore the possibility of external radiation source radar acquiring the micro-motion characteristics of small UAV targets. Knoedler et al. [12] adopted ultra-high-frequency signals with multi-frequency and single-frequency networked detection, which can not only continuously locate and track small UAV targets at 3 km but also detect the micro-Doppler effect of multi-rotor UAVs. Hoffmann et al. [13] used a new-generation frequency-modulated continuous-wave radar to distinguish small UAV targets from birds via micro-motion characteristics. Jahangir et al. [14] introduced a holographic radar using a digital array system, which can obtain fine feature representations of targets; the system classifies small UAV targets and non-UAV targets through a machine learning algorithm based on a decision tree classifier. High-resolution radar systems thus push the detection and recognition of small UAV targets toward more refined applications. However, small UAV targets may appear in the radar's field of view at a variety of angles and directions, and these viewpoint changes can partially obscure or deform the target, increasing the difficulty of recognition.
Deep learning is an efficient intelligent processing method, which is better suited to mining higher-dimensional abstract features than traditional machine learning methods and has good generalization ability. It has been applied in the field of high-resolution radar target recognition [15]. A deep neural network can acquire various hidden features of the target from the data without constructing complex high-fidelity models, so it has very good application prospects in the recognition of high-resolution range profiles (HRRP), micro-Doppler spectra, and range-Doppler spectra. Dong et al. [16] proposed a lightweight UAV detection model to achieve high-precision, lightweight detection and recognition of fixed-wing and multi-rotor UAVs in low-altitude complex environments. Yang et al. [9] used deep learning to analyze the radar signal time series of UAV targets and estimate their micro-motion parameters for target recognition. Solaiman et al. [10] designed a convolutional neural network (CNN) model to extract and recognize the features of UAV and non-UAV targets. Wei et al. [17] proposed an information fusion strategy combining radar signals and computer vision for UAV target recognition. Deep learning-based target recognition performs well on the small UAV recognition task, but it also has obvious shortcomings: it requires a large amount of labeled training data; its end-to-end black-box model makes the classification process difficult to explain; and it is prone to poor generalization in practical applications [18].
Although great progress has been made in high-resolution radar systems and target recognition algorithms, small UAVs remain difficult to recognize due to their unique nature. This research field is still at an early stage, and CNN construction, deep feature extraction, parameter setting, and dataset construction all need further study. To improve the recognition performance of small UAV targets, this paper proposes a multi-dimensional feature fusion framework for small UAV target recognition using a small-sized, low-cost, high-resolution millimeter-wave radar, which fully exploits the structure features from HRRP and the micro-motion features from the micro-Doppler spectrum of small UAVs. The main contributions are summarized as follows:
  • The geometric structure features and micro-motion feature parameters of small UAVs are closely related to the UAV type, motion state, radar observation mode, environment background, etc. The relationships among these parameters are therefore explored from the perspective of echo-signal mathematical modeling and radar target characteristic cognition, and small UAV target recognition is carried out based on the resulting feature differences.
  • Considering that, under the complex motion and environmental factors of real applications, the internal relationships are often difficult to describe with models and parameters, a multi-dimensional feature fusion framework for small UAV target recognition is designed, which comprehensively utilizes structure features from HRRP and micro-motion features from micro-Doppler spectrograms to improve the recognition accuracy of small UAV targets.
  • To verify the performance of the fusion model, measured data of two kinds of small UAVs are collected with a high-resolution millimeter-wave radar, and a dataset of HRRP and micro-Doppler spectrograms is constructed for training and testing. The proposed multi-dimensional feature fusion model is evaluated on the collected dataset, and different experiments are conducted for comparison.
The rest of this paper is organized as follows: Section 2 introduces the measuring devices and experimental conditions. Radar echo modeling and target characteristic analysis for high-resolution radar are described in Section 3. Section 4 introduces the proposed multi-dimensional feature fusion network for small UAV target recognition. Section 5 presents and evaluates the experimental results and the comparative recognition performance of the proposed model. Section 6 discusses the remaining challenges and research opportunities. Finally, the conclusion is presented in Section 7.

2. Related Work

2.1. FMCW Radar System

The FMCW radar sensor used in this paper for small UAV target recognition is based on the AWR1642 radar board, which is presented in Figure 1. It is a highly integrated 76–81 GHz radar-on-chip millimeter-wave sensor designed for automotive applications and manufactured by Texas Instruments. The raw radar echo data are captured by the DCA1000 board. The radar device comprises a radio frequency module, a radio processor module, and a sensing evaluation module. The radio frequency module consists of an onboard etched antenna with four receivers and two transmitters, implementing the radio frequency and analog baseband signal chains. The customer-programmable radio processor module uses a 600 MHz digital signal processor and a 200 MHz ARM Cortex-R4F microcontroller for signal generation and data processing. The sensing evaluation module is used for the storage and extension of real-time radar data.
In this AWR1642-based radar system, the transmitted sawtooth-modulated FMCW radar signal can be expressed as (1):
$$S_T(t) = \exp\left( j 2 \pi f_0 t + j \pi \mu t^2 \right)$$
where $f_0$ represents the start frequency of the signal and $\mu = B/T$ denotes the slope of the sweep frequency, in which $B$ represents the bandwidth and $T$ represents the sweep period.
The returned echo signal of radar can be denoted as (2):
$$S_R(t) = \exp\left( j 2 \pi f_0 (t - \tau) + j \pi \mu (t - \tau)^2 \right)$$
where $\tau = 2(R + \nu t)/c$ represents the round-trip time delay, in which $R$ represents the range between radar and target, $\nu$ represents the velocity of the target, and $c$ denotes the speed of light.
The received signal is mixed with the transmitted signal, and an intermediate-frequency signal with a fixed frequency is obtained through a low-pass filter. The beat signal is expressed as (3):
$$S_B(t) = \exp\left( j 2 \pi f_0 \tau + j 2 \pi \mu t \tau - j \pi \mu \tau^2 \right)$$
The intermediate frequency is denoted as (4):
$$f_b = f_t - f_r = \frac{2 \mu R}{c} + \frac{2 f_0 \nu}{c}$$
where $f_t$ represents the transmitted signal frequency and $f_r$ represents the received signal frequency.
The related radar waveform parameters and experimental parameters are set in Table 1.
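As a quick illustration (not code from the paper), the following Python sketch relates the Table 1 chirp parameters to the beat frequency of Equation (4) and the quoted 3.75 cm range resolution; the 77 GHz start frequency and the example target are assumptions.

```python
# Sanity-check sketch for the Table 1 FMCW parameters (illustrative values).
c = 3e8                  # speed of light (m/s)
B = 4e9                  # sweep bandwidth (Hz)
T = 80e-6                # sweep period (s)
mu = B / T               # sweep slope: 50 MHz/us, as in Table 1
f0 = 77e9                # assumed start frequency within the 76-81 GHz band

R, v = 10.0, 2.0         # example target range (m) and radial velocity (m/s)
f_b = 2 * mu * R / c + 2 * f0 * v / c   # beat frequency, Eq. (4)

print(f"beat frequency: {f_b / 1e6:.3f} MHz")
print(f"range resolution c/(2B): {c / (2 * B) * 100:.2f} cm")  # 3.75 cm
```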

2.2. Multi-Rotor UAVs

To evaluate the recognition performance on small UAV targets, radar echo data of several types of rotary-wing UAVs flying in the air were recorded. To ensure data diversity, two types of rotorcraft, one from Tenxind (Figure 2a) and one from Rgds (Figure 2b), were used to collect returned radar signals. The two UAVs differ in size, shape, material, etc. More specifically, the four-rotor drone is larger and heavier, while the coaxial helicopter is thinner and lighter. The body of the four-rotor drone is made of polycarbonate, magnesium alloy, and plastic, and its rotary wings are made of carbon fiber. The coaxial helicopter is made of magnesium-aluminum alloy and plastic, and its blades are made of carbon fiber-reinforced nylon. In addition, the measurement data are influenced by trees, buildings, and telegraph poles surrounding the collection site. To support analysis and comparison across environments, we also collected a small amount of non-UAV measurement data, such as moving cars, running people, and flying birds.

3. Multi-Feature Extraction of Small UAV Targets

3.1. Modeling of High-Resolution Radar Echo of Small UAV Targets

UAV targets are unique in that almost all UAVs have one or more propellers. The rotating parts of common aerial targets are usually made of metal or alloy materials, which strongly reflect electromagnetic waves and form strong target echoes; this is the physical basis for studying the radar characteristics of aerial targets. For a general high-resolution radar, the radar cross-section (RCS) of aerial targets lies in the optical region, so the target can be regarded as a set of independent scattering points, and the target echo is the sum of the contributions of these scattering points [19]. Considering that the body and rotating components of small UAV targets maintain a certain size proportion, the radar echo is a superposition of the body component and the rotating components in a corresponding proportion.
It is assumed that the target echo is composed only of the body component of the UAV and the micro-motion component of the rotating parts, ignoring the echoes of other components such as the wheel hub. For a multi-rotor UAV, the RCS of each rotor blade is assumed to be identical and set to 1. Based on the helicopter rotor model [20,21], the model of the multi-rotor UAV can be constructed as (5):
$$s_{sum} = \sum_{m=1}^{M} L \exp\left( -j \frac{4\pi}{\lambda} \left( R_m + z_m \sin\beta_m \right) \right) \sum_{k=0}^{N-1} \operatorname{sinc}\left( \frac{2\pi L}{\lambda} \cos\beta_m \cos\left( \omega_m t + \varphi_m + \frac{2\pi k}{N} \right) \right) \exp\left( j 2 \pi f_d t \right)$$
where $M$ represents the total number of rotors, $N$ the number of blades of a single rotor, $L$ the length of the rotor blades, $R_m$ the distance from the radar to the center of the $m$-th rotor, $z_m$ the height of the $m$-th rotor blade, $\beta_m$ the pitch angle from the radar to the center of the $m$-th rotor, approximately equal to the pitch angle from the radar to the center of the UAV axis, namely $\beta_1 = \beta_2 = \cdots = \beta_M = \beta$, $\omega_m$ the rotation angular frequency of the $m$-th rotor, and $\varphi_m$ the initial rotation angle of the $m$-th rotor.
The instantaneous Doppler frequency of the echo signal can be obtained by taking the time derivative of the signal's phase function, and the equivalent instantaneous micro-Doppler frequency of the $k$-th blade of the $m$-th rotor can be expressed as (6):
$$f_{m,k}(t) = \frac{L \omega_m}{\lambda} \cos\beta_m \sin\left( \omega_m t + \varphi_m + \frac{2\pi k}{N} \right)$$
The instantaneous Doppler frequency of a scattering point $P$ on the blade can be denoted as (7):
$$f_{m,k,P}(t) = \frac{2 l_P \omega_m}{\lambda} \cos\beta_m \sin\left( \omega_m t + \varphi_m + \frac{2\pi k}{N} \right)$$
where $l_P$ represents the distance from the scattering point $P$ to the rotor rotation center, and $0 \le l_P \le L$.
The above formulas show that the micro-Doppler frequency of a scattering point on a blade follows a sinusoidal curve; the number of curves indicates the number of blades, and the frequency of each sinusoid equals the blade rotation angular frequency [22]. The Doppler amplitude is largest at the blade tip, so the maximum micro-Doppler frequency can be expressed as (8):
$$f_{md\max} = \frac{2 L \omega_m}{\lambda} \cos\beta_m$$
When the target translational speed equals zero, the maximum spread of the micro-Doppler frequency of the blade is expressed as (9):
$$f_{s\max} = f_{md\max} - f_{md\min} = \frac{4 L \omega_m}{\lambda} \cos\beta_m$$
The blade length of the rotor can be deduced as (10):
$$L = \frac{\lambda f_{s\max}}{4 \omega_m \cos\beta_m}$$
Therefore, the type and motion state of a small UAV target can be determined from estimated parameters such as the number of rotors, the number of blades, the blade length, and the rotor speed, thereby enabling the recognition of multi-rotor small UAV targets.
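To make the parameter relationships concrete, the following Python sketch simulates the blade-tip micro-Doppler of Equation (7) for one rotor and recovers the blade length via Equations (9) and (10); the carrier frequency, rotor speed, and blade geometry are illustrative assumptions, not measured values from this paper.

```python
import numpy as np

lam = 3e8 / 77e9            # wavelength (assumed 77 GHz carrier)
L_blade = 0.12              # true blade length (m), illustrative
omega = 2 * np.pi * 80      # rotation angular frequency (rad/s), ~80 rev/s
beta = np.deg2rad(10)       # pitch angle to the rotor center
N_blades = 2
t = np.linspace(0, 0.05, 5000)   # ~4 rotations, densely sampled

# Instantaneous micro-Doppler of each blade tip (Eq. (7) with l_P = L,
# zero initial rotation angle assumed).
f_md = np.array([
    (2 * L_blade * omega / lam) * np.cos(beta)
    * np.sin(omega * t + 2 * np.pi * k / N_blades)
    for k in range(N_blades)
])

# Maximum Doppler spread (Eq. (9)), then invert for blade length (Eq. (10)).
f_spread = f_md.max() - f_md.min()
L_est = lam * f_spread / (4 * omega * np.cos(beta))
print(f"estimated blade length: {L_est * 100:.1f} cm (true {L_blade * 100:.1f} cm)")
```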

3.2. HRRP of Small UAVs Target and Structure Feature Extraction

The range resolution improves greatly once a wide-band signal is adopted in radar, and the received echo is no longer a "point" echo but a one-dimensional range profile distributed over radial range units along the radar line of sight (LOS), forming a "range-extended target". The HRRP is the coherent sum of the echoes of the target scatterers in each range unit; it represents the projection of the target's complex scattering-center returns onto the radar LOS and reflects the size, shape, structure, and energy distribution of the target. Measurement is an important approach for studying the HRRP characteristics of small UAV targets; the HRRPs of the four-rotor drone and the coaxial helicopter are presented in Figure 3.
Compared with the raw echo signal returned from small UAV targets, the HRRP, obtained through a one-dimensional fast Fourier transform of the echo signal, presents richer details and more distinct feature contours. As shown in Figure 3, these features reflect the target structure information and have stable statistical characteristics. The physically meaningful features include the target length, the strong scattering centers, and the radial energy distribution, which can be expressed as follows.
The HRRP sequence corresponding to the radial length L is denoted as (11):
$$\hat{x}_H = \left[ \hat{x}_H(n), \hat{x}_H(n+1), \ldots, \hat{x}_H(m) \right]^{T}$$
where $\hat{x}_H(\cdot)$ represents the HRRP amplitude and $i = n, n+1, \ldots, m$ indexes the range cells.
(1) Center of scattering $M$. The center of scattering, or center of mass, reflects the shape characteristic of the HRRP and is normalized between 0 and 1. It can be denoted as (12):
$$M = \frac{\sum_{i=n}^{m} i \, \hat{x}_H(i) \Big/ \sum_{i=n}^{m} \hat{x}_H(i) - n}{m - n}$$
(2) Radial length $L$. Although it cannot be directly estimated from the HRRP at some anomalous LOS angles, the radial length is one of the most significant features in HRRP-based target recognition. It is estimated as the difference between the first and the last range units exceeding the noise threshold, which can be denoted as (13):
$$L = \left( \max\left\{ i \mid \hat{x}_H(i) > Th \right\} - \min\left\{ i \mid \hat{x}_H(i) > Th \right\} \right) \cdot l$$
where $Th$ represents the noise threshold and $l$ represents the range-unit size, which accounts for the effect of the LOS on the length measurement.
(3) Number of peaks $N_P$. A peak-seeking algorithm is used to count the peaks of the HRRP, which reflects the scattering-point distribution and the structural complexity of the target. It is calculated as (14):
$$N_P = \sum_{i=n}^{m} u(i), \qquad u(i) = \begin{cases} 1, & \hat{x}_H(i) > \hat{x}_H(i \pm 1) \\ 0, & \text{else} \end{cases}$$
(4) Range between peak points $D_1$. The peak-seeking algorithm is used to compute the range between the two largest peaks of the extracted HRRP, which can be expressed as (15):
$$D_1 = \left| \operatorname*{arg\,max}_{n \le i \le m} \hat{x}_H(i) - \operatorname*{arg\,max}_{n \le j \le m,\, j \ne i} \hat{x}_H(j) \right| \cdot l$$
(5) Range between maximum peak and nearest edge $D_2$. The range between the maximum peak point and the nearest edge of the extracted HRRP is computed by the peak-seeking method, which can be denoted as (16):
$$D_2 = \left| \operatorname*{arg\,max}_{n \le i \le m} \hat{x}_H(i) - i_{edge} \right| \cdot l, \qquad i_{edge} = \begin{cases} n, & |i - n| \le |i - m| \\ m, & \text{else} \end{cases}$$
The above features were selected because they depend on the physical structure of small UAV targets and are relatively stable. However, the HRRP is sensitive to the LOS: even though the scattering-point model of the target changes slowly with the LOS, the HRRP changes much faster. It is therefore important to explore robust HRRP-based structural feature extraction methods for small UAV target recognition.
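As an illustration of the five features above, the following Python sketch computes them from a one-dimensional HRRP amplitude profile; it is a minimal interpretation of Equations (12)-(16), with the noise threshold and range-cell size treated as inputs.

```python
import numpy as np
from scipy.signal import find_peaks

def hrrp_structure_features(x, noise_th, cell_size=0.0375):
    """Sketch of the five HRRP structure features of Section 3.2.
    x: 1-D HRRP amplitude profile; noise_th: noise threshold;
    cell_size: range-cell size in metres (3.75 cm per Table 1)."""
    i = np.arange(len(x))

    # (1) normalized centre of scattering, Eq. (12)
    M = (np.sum(i * x) / np.sum(x) - i[0]) / (i[-1] - i[0])

    # (2) radial length, Eq. (13): span of cells above the noise threshold
    above = np.where(x > noise_th)[0]
    L = (above.max() - above.min()) * cell_size if above.size else 0.0

    # (3) number of local peaks, Eq. (14)
    peaks, _ = find_peaks(x)
    N_P = len(peaks)

    # (4) range between the two largest peaks, Eq. (15)
    if len(peaks) >= 2:
        top2 = peaks[np.argsort(x[peaks])[-2:]]
        D1 = abs(int(top2[1]) - int(top2[0])) * cell_size
    else:
        D1 = 0.0

    # (5) range from the maximum peak to the nearest profile edge, Eq. (16)
    i_max = int(np.argmax(x))
    D2 = min(i_max - i[0], i[-1] - i_max) * cell_size

    return M, L, N_P, D1, D2
```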

3.3. Micro-Doppler Time-Frequency Spectrograms of Small UAVs Target and Micro-Motion Feature Analysis

Motion characteristics are another main basis for target recognition, among which micro-motion (vibration, spin, precession, etc.) features provide new means of target recognition. UAVs are unique in that almost all of them have one or more rotating parts, such as the rotating blades of helicopters or the propellers of fixed-wing aircraft. Due to the periodic rotation, the amplitude and phase of the electromagnetic wave scattered by the rotating parts change periodically, producing the micro-Doppler effect. Because the rotating parts belong to the fine structure of the target, they are hard to control and their micro-motion characteristics are difficult to imitate; the micro-motion feature thus constitutes a unique motion signature of the radar target. Extracting these fine motion features with modern techniques can provide new features with good stability and high discriminability for radar target recognition.
Time-frequency analysis can depict both the time-domain and frequency-domain information of non-stationary signals, explicitly exhibiting the change in frequency over time and revealing transient variation characteristics. Unlike the pure time domain or frequency domain, the time-frequency domain contains the full time-frequency distribution of the signal, intuitively presenting more time-frequency features, and the signal also exhibits stronger anti-noise performance there. Classical time-frequency analysis methods include the Wigner-Ville distribution (WVD) [23], the smoothed pseudo-Wigner-Ville distribution (SPWVD) [24], and the short-time Fourier transform (STFT) [25]. The WVD readily generates severe cross-terms, which hampers the study of micro-Doppler characteristics and the extraction of micro-motion parameters. Although the SPWVD greatly reduces the cross-terms of the WVD, its calculation is more complicated and its computational complexity is high. The STFT is a linear transform that produces no cross-terms and requires less computation.
The main idea of the STFT is to window the time-domain signal, dividing it into many short segments of equal length, each of which is approximately stationary. The signal frequency in each time period is obtained by Fourier analysis of the corresponding short segment, yielding the distribution of signal energy over time and frequency. The transform can be expressed as (17):
$$STFT(t, f) = \int_{-\infty}^{+\infty} x(\tau)\, g(\tau - t)\, e^{-j 2 \pi f \tau}\, d\tau$$
where $x(\tau)$ represents the signal and $g(t)$ denotes a window function of very short duration.
The STFT is used to convert the echo signal into a two-dimensional time-frequency image. The STFT cannot achieve high resolution in the time domain and frequency domain simultaneously: the longer the sliding window, the higher the frequency resolution, and the shorter the sliding window, the higher the time resolution. The micro-Doppler time-frequency spectrograms of the four-rotor drone and the coaxial helicopter are presented in Figure 4.
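For reference, the following Python sketch shows one plausible way to produce such a spectrogram with the STFT of Equation (17); the pulse repetition frequency, window length, and the toy micro-motion signal are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import stft

prf = 12500.0                          # assumed chirp repetition frequency (Hz)
t = np.arange(4096) / prf
# Toy slow-time signal with a sinusoidal micro-Doppler modulation; in practice
# `echo` would be the complex slow-time samples of one range cell.
echo = np.exp(2j * np.pi * 500 * np.sin(2 * np.pi * 80 * t))

f, ts, Z = stft(echo, fs=prf, window='hann', nperseg=128, noverlap=96,
                return_onesided=False)
# Shift zero Doppler to the centre and convert to dB for display.
spectrogram = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-12)
# `spectrogram` (Doppler bins x time frames) can be replicated into three
# channels and resized to form the CNN input described in Section 4.2.
```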
As can be seen from Equation (6), the micro-Doppler signature of a multi-rotor UAV is composed of several sinusoidal curves and is affected by the carrier frequency, number of rotors, rotor speed, number of blades, blade length, initial phase, and radar LOS. Among these, the blade length, carrier frequency, and radar LOS affect only the amplitude of the micro-Doppler frequency, while the rotor speed, number of rotors, number of blades, and initial signal phase affect both the amplitude and the phase of the micro-Doppler time-frequency curves.

4. Multi-Feature Fusion Network for Small UAVs Target Recognition

The radar echo signal contains rich target characteristic information, such as geometric structure and motion information. In this section, a small UAV target recognition method based on multi-feature fusion is introduced, which combines an LSTM and a CNN to fully exploit the geometric structure features from the HRRP and the micro-motion features from the micro-Doppler time-frequency representation of the radar echo for fusion recognition. The multi-feature fusion architecture for small UAV target recognition is presented in Figure 5.
The proposed architecture consists of two parallel feature extraction sub-networks and a classifier, with HRRP and time-frequency spectrograms as the respective inputs. In the first sub-network, the structural features of a small UAV target are extracted from the HRRP by an LSTM network; in the second, deep micro-motion features are extracted directly from the time-frequency spectrograms by a multi-layer CNN. The two feature vectors are then concatenated and fed into the classifier to obtain the final classification result.
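The following PyTorch sketch outlines this two-branch idea; the layer sizes and the 16-cells-per-step HRRP segmentation are placeholders rather than the exact configuration of Table 2.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Minimal sketch of the two-branch fusion idea in Figure 5."""
    def __init__(self, n_classes=3, hidden=128):
        super().__init__()
        # Branch 1: LSTM over sequential HRRP (batch, 16 steps, 16 cells/step)
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        # Branch 2: CNN over 3-channel micro-Doppler spectrograms
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Classifier over the concatenated features of the two branches
        self.classifier = nn.Sequential(
            nn.Linear(hidden + 32, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, n_classes),
        )

    def forward(self, hrrp_seq, spec):
        _, (h_n, _) = self.lstm(hrrp_seq)      # structure features from HRRP
        f1 = h_n[-1]                           # (batch, hidden)
        f2 = self.cnn(spec).flatten(1)         # micro-motion features
        return self.classifier(torch.cat([f1, f2], dim=1))
```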

4.1. The Network of Geometric Structure Feature Extraction from HRRP

According to the analysis of the HRRP characteristics of small UAV targets in Section 3.2, an HRRP sample can be considered a projection of the radar echoes from a series of scatterers distributed over the range units along the radar LOS. To explore the internal temporal dependence between the range cells of each HRRP sample and predict the corresponding UAV target type from its structural characteristics, the HRRP samples are first converted into sequential inputs. For an HRRP sample $x$ with $D$ range cells, the amplitude of the $n$-th HRRP sample can be denoted as (18):
$$\hat{x}^{(n)} = \left( \left| x_1^{(n)} \right|, \left| x_2^{(n)} \right|, \ldots, \left| x_D^{(n)} \right| \right)$$
where $|x_i^{(n)}|$ represents the amplitude of the $i$-th range cell of the $n$-th HRRP sample, and $D$ represents the total number of range cells in one sample.
The sequential HRRP sample can be segmented by time step $t$ as (19):
$$\hat{x}_t^{(n)} = \left( x_{t_1}^{(n)}, x_{t_2}^{(n)}, \ldots, x_{T}^{(n)} \right)$$
Taking the segmented HRRP samples as the inputs of the sequential model, an LSTM is adopted to explore the temporal correlation within an HRRP sample, and the corresponding hidden state at time step $t$ is introduced in (20):
$$\hat{y}_t = W_{lf} h_t + b_{fn} = W_{lf} \left( g_o \odot \tanh(c_t) \right) + b_{fn}$$
where
$$g_{f(i,o)} = \sigma\left( W_{f(i,o)} [h_{t-1}, x_t] + b_{f(i,o)} \right), \qquad \hat{c}_t = \tanh\left( W_c [h_{t-1}, x_t] + b_c \right), \qquad c_t = g_f \odot c_{t-1} + g_i \odot \hat{c}_t$$
where $g_f$, $g_i$, and $g_o$ denote the forget gate, input gate, and output gate, respectively; $W_f$, $W_i$, $W_o$ and $b_f$, $b_i$, $b_o$ represent the corresponding weights and biases; $W_{lf}$ represents the connection weight between the fully connected layer and the LSTM; $b_{fn}$ denotes the bias of the fully connected layer; and $\hat{c}_t$ represents the candidate state of the current input cell.
As discussed above, the target region of an HRRP sample carries the essential geometric structure information of the small UAV target and deserves special attention in the recognition task. Therefore, an attention mechanism is adopted to assign larger weights to the outputs of the corresponding target regions, thereby emphasizing the discriminative features of the target regions in the HRRP. The attention weight of each hidden state can be expressed as (22):
$$a_t = \frac{\exp(W_t)}{\sum_{l=1}^{T} W_l h_t}$$
where $h_t$ represents the hidden state and $W_t$ denotes the attention weights.
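Since the paper does not spell out the scoring function, the following PyTorch sketch implements a common softmax-normalized formulation of this attention step over the LSTM hidden states; it should be read as one plausible realization of Equation (22), not the authors' exact module.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Score each LSTM hidden state, softmax-normalize over time, and pool."""
    def __init__(self, hidden):
        super().__init__()
        self.score = nn.Linear(hidden, 1, bias=False)  # learnable weights W

    def forward(self, h):                    # h: (batch, T, hidden)
        a = torch.softmax(self.score(h), dim=1)        # attention weights a_t
        return (a * h).sum(dim=1)                      # weighted sum over time
```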

4.2. The Network of Micro-Motion Feature Learning from Time-Frequency Spectrograms

The target micro-motion features reflect the electromagnetic scattering characteristics, geometric structure, and motion characteristics of the target, which is helpful for small UAV target recognition. Time-frequency analysis is a commonly used method to extract micro-motion features. Following the analysis in Section 3.3, a three-channel spectrogram can be obtained and processed as the input of the micro-motion feature extraction network.
A deep CNN architecture is adopted for small UAV target recognition based on micro-Doppler time-frequency spectrograms, performing both feature extraction and classification within a single architecture. Cascaded features of increasing complexity are extracted from the time-frequency representation by seven convolution layers and seven pooling layers, and the feature map of the last layer is flattened and passed to the classifier. To accelerate training and prevent overfitting, batch normalization and dropout are used in the deep CNN. The output feature map can be expressed as (23):
$$y_m = x_{m-1} \ast K_m + b_m$$
where $x_{m-1}$, $K_m$, and $b_m$ represent the input, convolution kernel, and bias of the $m$-th convolutional layer, and $\ast$ denotes the two-dimensional convolution operator.
The flattened output features of the two subnetworks are concatenated with weights and fed into the classifier, which is implemented by four fully connected layers and an output layer with a softmax activation function.
The cost function is defined in the form of cross entropy (24):
$$L_{clc} = -\frac{1}{K} \sum y \log(p)$$
where $K$ represents the total number of samples in one batch, $y$ represents the label value, and $p$ represents the predicted value.
Stochastic gradient descent (SGD) is adopted to minimize the cost function through backpropagation until the network converges. The parameters of the multi-feature fusion network for small UAV target recognition are summarized in Table 2.
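A minimal training-loop sketch under these choices might look as follows; `FusionNet` refers to the two-branch sketch above, and `train_loader` is an assumed iterator over (HRRP sequence, spectrogram, label) batches.

```python
import torch
import torch.nn as nn

model = FusionNet(n_classes=3)
criterion = nn.CrossEntropyLoss()                  # cross entropy, Eq. (24)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

for epoch in range(50):
    for hrrp_seq, spec, label in train_loader:    # assumed data iterator
        optimizer.zero_grad()
        loss = criterion(model(hrrp_seq, spec), label)
        loss.backward()                            # backpropagation
        optimizer.step()                           # SGD update
```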

5. Experimental Results and Performance Analysis

5.1. Data Collection and Experiment Settings

To verify the performance of the proposed multi-dimensional feature fusion network for small UAV target recognition, radar echo data returned from a four-rotor drone and a coaxial helicopter were collected at different radar LOS angles with the AWR1642 radar system and preprocessed into one-dimensional sequential HRRPs and two-dimensional time-frequency spectrograms, respectively. Data acquisition experiments were mainly conducted on the drill ground at North China University of Technology, as shown in Figure 6. The observation angles range from −60° to 60°, and the attitude angles of the helicopter or drone range from −15° to 15° relative to the normal direction of the radar antenna. In addition, to obtain measurement data from different environments, a small amount of non-UAV measurement data, such as moving cars, running people, and flying birds, was also collected for analysis and comparison.
The proposed multi-feature fusion algorithm is evaluated on the self-built small UAV target dataset, which mainly contains HRRP samples and micro-Doppler time–frequency spectrograms. The dataset is depicted as follows:
  • A total of 3180 HRRP samples and 3180 micro-Doppler time-frequency spectrograms are included in the dataset, of which 1800 pairs of samples are used to train the multi-feature fusion model, 900 pairs for validation, and 480 pairs for testing.
  • Each HRRP sample is represented by its amplitude and contains 256 range cells; the input three-channel time-frequency spectrograms have a size of 900 × 1200.
  • Three types of targets are included in the dataset: 1200 samples of four-rotor drones, 1200 samples of coaxial helicopters and 780 samples of other targets (moving cars, running people, flying birds).
The hyperparameters, such as the hidden size, number of layers, kernel size, number of kernels, network depth, learning rate, and dropout rate, were optimized. The network was trained using backpropagation and the SGD optimizer with an initial learning rate of 0.001, a dropout rate of 0.3, and a batch size of 100. The measured data were collected and processed on the MATLAB 2020 platform, and all experiments were conducted on a server equipped with a 3060 Ti GPU, using Python 3.7 for model implementation.
For clear and convenient analysis and comparison, four evaluation metrics, namely overall accuracy (OA), precision (average accuracy), recall, and F1 score, were used to evaluate the recognition performance on small UAV targets. They are defined as follows:
$$OA = \frac{1}{N} \sum_{i=1}^{M} \frac{TP_i + TN_i}{TP_i + TN_i + FP_i + FN_i}$$
$$P = \frac{1}{M} \sum_{i=1}^{M} \frac{TP_i}{TP_i + FP_i}$$
$$R = \frac{1}{M} \sum_{i=1}^{M} \frac{TP_i}{TP_i + FN_i}$$
$$F1 = \frac{1}{M} \sum_{i=1}^{M} \frac{2 \times P_i \times R_i}{P_i + R_i}$$
where $TP_i$, $TN_i$, $FP_i$, and $FN_i$ represent the true positive, true negative, false positive, and false negative counts of the $i$-th class, respectively, and $M$ is the number of classes.
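For reference, the following sketch computes these metrics from a confusion matrix; note that it uses the standard trace-based form of overall accuracy rather than the per-class averaged form written above.

```python
import numpy as np

def macro_metrics(conf):
    """Compute OA and macro-averaged precision, recall, and F1 from a
    confusion matrix `conf` (rows: true class, columns: predicted class)."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    oa = tp.sum() / conf.sum()            # overall accuracy (trace form)
    p = tp / (tp + fp)                    # per-class precision
    r = tp / (tp + fn)                    # per-class recall
    f1 = 2 * p * r / (p + r)              # per-class F1
    return oa, p.mean(), r.mean(), f1.mean()
```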

5.2. Experiment Results and Performance Analysis

To validate the superiority of the proposed multi-feature fusion algorithm in small UAV target recognition, comparison experiments were conducted with the support vector machine (SVM), random forest (RF), and AdaBoost algorithms based on manually extracted HRRP structural features, the LSTM algorithm based on one-dimensional sequential HRRPs, and the CNN algorithm based on two-dimensional time-frequency spectrograms.
(1)
Geometric structure feature extraction and analysis
According to the HRRP characteristics defined in Section 3.2, the HRRP geometric structure features of the four-rotor drone and the coaxial helicopter are calculated and shown in Table 3 and Table 4, respectively. The statistical results demonstrate strong structural similarity between adjacent HRRP samples at the same LOS angle. Because of the rotating components, the HRRP amplitude is modulated by their echoes; the amplitude therefore fluctuates noticeably within a small angle range, with a certain amplitude disturbance. In other words, the peak positions of adjacent HRRP samples remain basically unchanged, while the peak amplitudes fluctuate slightly.
In addition, the strongly correlated component within a single HRRP frame mainly depends on the structure of the target scattering centers. Since the scattering centers of the HRRP samples in the same frame do not cross range cells, their structure remains basically unchanged, so the strongly correlated components of different HRRP samples in the same frame are similar or approximately equal. The unknown amplitude disturbance component mainly results from changes in the target attitude.
In summary, for the same target, the statistical features of the HRRP are relatively stable and differ only slightly across LOS angles, while for different targets, the statistical structure features differ markedly. Making full use of the structural features of the HRRP is therefore useful and meaningful for small UAV target recognition.
(2)
Performance analysis of different algorithms
To verify the performance of the proposed multi-dimensional feature fusion algorithm for small UAV target recognition, it is compared with related machine learning and deep learning algorithms, including SVM, RF, AdaBoost, LSTM, and CNN, on the self-built dataset. The SVM, RF, and AdaBoost methods operate on manually extracted geometric structure features, while the LSTM operates on the one-dimensional sequential HRRP data and the CNN on the two-dimensional time-frequency spectrograms. Recognition performance is evaluated by OA, precision, recall, and F1 score. The experimental results of the different algorithms are compared in Table 5 and Figure 7, with the best result for each item highlighted in bold.
The experimental results show that, on the same dataset, the three deep learning-based architectures, including the proposed multi-feature fusion algorithm, significantly outperform the three statistical machine learning methods, such as the SVM with manually extracted statistical features. Since the echo data from the high-resolution radar contain both the geometric structure features and the motion features of the small UAV targets, the proposed multi-feature fusion algorithm attains the highest recognition metrics. Specifically, its OA, precision, recall, and F1 score are 2.05–10.96%, 2.37–10.52%, 2.36–9.81%, and 2.38–10.18% higher than those of the other five algorithms, respectively, which proves the effectiveness and stability of fusing the geometric structure features from the HRRP with the micro-motion features from the micro-Doppler time-frequency spectrograms.
As can be seen from Figure 8, the recognition accuracy and F1 score of the proposed multi-dimensional feature fusion algorithm reach 98.5%, which is superior to the other algorithms. Regardless of which classifier is used (SVM, RF, or AdaBoost), recognition algorithms relying only on manual features show stable but relatively poor performance. The methods using only the CNN or the LSTM score higher than those using only manual features, yet remain about 2% below the proposed model. The results show that the multi-feature fusion method achieves higher recognition accuracy than single-feature methods.

6. Discussion

With the continuous development of deep learning, radar target recognition technology has made remarkable theoretical progress, but many serious challenges remain in practical applications. Regarding the application of deep learning to real-world small UAV target recognition, the following aspects are still worth considering.
(1)
Complex Environment.
The complex environments of small UAV target recognition involve atmospheric disturbance, changing light conditions, background interference, multi-target tracking, etc. In addition, the presence of numerous occlusions in a complex environment, such as buildings and trees, may leave small UAV targets completely occluded, making recognition more difficult. Moreover, complex environments are often dynamic, with frequent changes in targets and backgrounds, such as birds or other moving objects, requiring timely updates of the target's location and properties, which also challenges small UAV target recognition.
To overcome these difficulties, researchers have tried a number of solutions. By using deep learning to learn target features and context information from large amounts of data [26], more accurate recognition can be achieved in complex environments. When the target is occluded or lost, target re-identification techniques [27] can model and match the appearance features of the target to recognize it again. Furthermore, motion prediction and model updating can analyze the motion patterns and behavior of the target, helping to recognize it and cope with changes in dynamic environments.
(2)
Non-cooperative Target Recognition.
As non-cooperative targets, small UAVs have low detectability, and their shape, material, or coating may resemble the surrounding environment, making them difficult to distinguish from the background in sensor data [28]. Moreover, due to the lack of public high-quality datasets, deep learning algorithms proposed by different research teams are difficult to compare and cross-validate, which limits the application and development of deep learning in radar target recognition. Since radar target recognition generally concerns non-cooperative targets, the recognition ability of a model in real scenarios also needs verification. In the next stage, deep learning-based radar target recognition can focus on improving recognition with new network structures and on applying the algorithms in practical scenarios.
Multi-spectral image sensors, infrared imaging sensors and high-resolution radar can be adopted to improve the recognition ability of the non-cooperative small UAV targets [29]. Multi-spectral image sensors can capture small differences between a UAV target and its surroundings, infrared imaging sensors can rely on thermal radiation from the UAV for recognition, and radar can use the echo signal to recognize the presence of the small UAV target. These advanced sensors can provide a diverse source of data, increasing the chances of recognizing non-cooperative targets.
(3)
“Low-Slow-Small” Target Characteristics.
Because "low, slow, and small" UAVs are usually small in size, with few target pixels and easy blending into the background, long-distance visual detection is difficult. Their fast and agile movement produces rapid position and shape changes in the image sequence, which causes blur and position instability and complicates the recognition task. In addition, the low-altitude flight environment leads to low signal-to-noise ratios, where sensor signals are affected by noise and interference, reducing the visibility of the target in the sensor data.
One promising solution is to use high-resolution radar sensors to obtain more detailed information and enhance the visibility of the target. Through sensor data fusion and motion-model prediction [30], interference caused by motion can be reduced, as can the influence of complex environments on target recognition. The comprehensive use of multi-view or multi-sensor information can mitigate occlusion and improve the recognition performance for small UAV targets.

7. Conclusions

In this paper, a multi-dimensional feature fusion framework for small UAV target recognition based on high-resolution radar is proposed, which fully utilizes the geometric structure features and micro-motion features of small UAVs. The echo data of different small UAV targets were measured with a high-resolution millimeter-wave radar and processed into HRRPs and micro-Doppler time-frequency spectrograms for training and testing. The effectiveness of the proposed method was verified by a series of comparison experiments, and the results demonstrate that it achieves better recognition performance and higher robustness than single-feature methods for small UAV targets, providing a new, feasible approach for anti-UAV applications in complex scenarios.

Author Contributions

Conceptualization, W.J. and Y.W.; Data curation, W.J. and Z.L.; Investigation, W.J. and Y.L. (Yang Li); Methodology, W.J.; Resources, Y.L. (Yun Lin) and F.B.; Supervision, Y.W.; Writing—original draft, W.J.; Writing—review and editing, W.J. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Science Foundation of China (Key Program) under Grant 62131001 and (General Program) under Grant 62371005. It was also supported by the Beijing Natural Science Foundation under Grant 4234082 and the Yuxiu Innovation Project of NCUT (Project No. 2024NCUTYXCX119).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ajakwe, S.O.; Ihekoronye, V.U.; Kim, D.-S.; Lee, J.M. DRONET: Multi-tasking Framework for Real-time Industrial Facility Aerial Surveillance and Safety. Drones 2022, 6, 46. [Google Scholar] [CrossRef]
  2. Wang, C.X.; Tian, J.M.; Cao, J.W.; Wang, X. Deep Learning-based UAV Detection in Pulse-Doppler Radar. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 5105612. [Google Scholar] [CrossRef]
  3. Cabrera-Ponce, A.A.; Martinez-Carranza, J.; Rascon, C. Detection of Nearby UAVs Using a Multi-microphone Array on Board a UAV. Int. J. Micro Air Veh. 2023, 12, 1756829320925748. [Google Scholar] [CrossRef]
  4. Shan, P.; Yang, R.; Xiao, H.M.; Zhang, L.; Liu, Y.H.; Fu, Q.; Zhao, Y. UAVPNet: A Balanced and Enhanced UAV Object Detection and Pose Recognition Network. Measurement 2023, 222, 113654. [Google Scholar] [CrossRef]
  5. Jiang, W.; Wang, Y.; Li, Y.; Lin, Y.; Shen, W. Radar Target Characterization and Deep Learning in Radar Automatic Target Recognition: A Review. Remote. Sens. 2023, 15, 3742. [Google Scholar] [CrossRef]
  6. Victor, H.; Alves, R.; Roberto, S.; Gilberto, R.M. Random Vector Functional Link Forests and Extreme Learning Forests Applied to UAV Automatic Target Recognition. Eng. Appl. Artif. Intell. 2023, 117, 105538. [Google Scholar]
  7. Ghazlane, Y.; Gmira, M.; Medromi, H. Anti-drone Systems: An Attention Based Improved YOLOv7 Model for a Real-time Detection and Identification of Multi-airborne Target. Intell. Syst. Appl. 2023, 20, 200296. [Google Scholar]
  8. Yi, L.; Xin, Y.C.; Chen, Z.D.; Lin, J.W.; Liu, X.W. Research on UAV Target Detection and Substation Equipment Status Recognition Technology Based on Computer Vision. J. Physics Conf. Ser. 2022, 2400, 012033. [Google Scholar] [CrossRef]
  9. Yang, J.C.; Zhang, Z.; Mao, W.; Yang, Y. Identification and Micro-motion Parameter Estimation of Non-cooperative UAV Targets. Phys. Commun. 2021, 46, 101314. [Google Scholar] [CrossRef]
  10. Suhare, S.; Emad, A.; Rajwa, A. Simultaneous Tracking and Recognizing Drone Targets with Millimeter-Wave Radar and Convolutional Neural Network. Appl. Syst. Innov. 2023, 6, 68. [Google Scholar] [CrossRef]
  11. Patel, J.S.; Fioranelli, F.; Anderson, D. Review of Radar Classification and RCS Characterisation Techniques for Small UAVs or Drones. IET Radar Sonar Navig. 2018, 12, 911–919. [Google Scholar] [CrossRef]
  12. Knoedler, B.; Zemmari, R.; Koch, W. On the Detection of Small UAV Using a GSM Passive Coherent Location System. In Proceedings of the 2016 17th International Radar Symposium, Krakow, Poland, 10–12 May 2016. [Google Scholar]
  13. Hoffmann, F.; Ritchie, M.; Fioranelli, F.; Charlish, A.; Griffiths, H. Micro-Doppler Based Detection and Tracking of UAVs with Multistatic Radar. In Proceedings of the 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 1–6 May 2016. [Google Scholar]
  14. Jahangir, M.; Baker, C.J.; Oswald, G.A. Doppler Characteristics of Micro-drones with L-Band Multibeam Staring Radar. In Proceedings of the 2017 IEEE Radar Conference (RadarConf17), Seattle, WA, USA, 8–12 May 2017. [Google Scholar]
  15. Jiang, W.; Ren, Y.; Liu, Y.; Leng, J. A Method of Radar Target Detection Based on Convolutional Neural Network. Neural Comput. Appl. 2021, 33, 9835–9847. [Google Scholar] [CrossRef]
  16. Dong, Y.; Ma, Y.; Li, Y.; Li, Z. High-precision Real-time UAV Target Recognition Based on Improved YOLOv4. Comput. Commun. 2023, 206, 124–132. [Google Scholar] [CrossRef]
  17. Wei, Y.; Hong, T.; Fang, C.Q. Research on Information Fusion of Computer Vision and Radar Signals in UAV Target Identification. Discret. Dyn. Nat. Soc. 2022, 2022, 3898277. [Google Scholar] [CrossRef]
  18. Li, B.; Yan, J.; Wu, W.; Zhu, Z.; Hu, X. High Performance Visual Tracking with Siamese Region Proposal Network. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8971–8980. [Google Scholar]
  19. Chen, V.C. The Micro-Doppler Effect in Radar; Artech House: Washington, DC, USA, 2011; pp. 110–112. [Google Scholar]
  20. Chen, V.; Li, F.; Ho, S.-S.; Wechsler, H. Micro-Doppler Effect in Radar: Phenomenon, Model, and Simulation Study. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 2–21. [Google Scholar] [CrossRef]
  21. Nanzer, J.A.; Chen, V.C. Microwave Interferometric and Doppler Radar Measurements of a UAV. In Proceedings of the 2017 IEEE Radar Conference, Seattle, WA, USA, 8–12 May 2017; pp. 1628–1633. [Google Scholar]
  22. Li, T.; Wen, B.; Tian, Y.; Li, Z.; Wang, S. Numerical Simulation and Experimental Analysis of Small Drone Rotor Blade Polarimetry Based on RCS and Micro-Doppler Signature. IEEE Antennas Wirel. Propag. Lett. 2019, 1, 187–191. [Google Scholar] [CrossRef]
  23. de Wit, J.J.M.; Harmanny, R.I.A.; Molchanov, P. Radar Micro-Doppler Feature Extraction Using the Singular Value Decomposition. In Proceedings of the 2014 International Radar Conference, Lille, France, 13–17 October 2014; pp. 1–6. [Google Scholar]
  24. Ritchie, M.; Fioranelli, F.; Griffiths, H.; Torvik, B. Monostatic and Bistatic Radar Measurements of Birds and Micro-drone. In Proceedings of the 2016 IEEE Radar Conference, Philadelphia, PA, USA, 2–6 May 2016; pp. 1–5. [Google Scholar]
  25. Chen, Z.; Li, G.; Fioranelli, F.; Griffiths, H. Personnel Recognition and Gait Classification Based on Multistatic Micro-Doppler Signatures Using Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 669–673. [Google Scholar] [CrossRef]
  26. Tordesillas, J.; How, J.P. Deep-panther: Learning-based Perception-aware Trajectory Planner in Dynamic Environments. IEEE Robot. Autom. Lett. 2023, 8, 1399–1406. [Google Scholar] [CrossRef]
  27. Zhou, X.; Zhong, Y.J.; Cheng, Z.; Liang, F.; Ma, L. Adaptive Sparse Pairwise Loss for Object Re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 19691–19701. [Google Scholar]
  28. Jiang, X.P.; Yuan, H.; He, X.; Du, T.; Ma, H.; Li, X.; Luo, M.; Zhang, Z.; Chen, H.; Yu, Y.; et al. Implementing of Infrared Camouflage with Thermal Management Based on Inverse Design and Hierarchical Metamaterial. Nanophotonics 2023, 12, 1891–1902. [Google Scholar] [CrossRef]
  29. Jiang, X.P.; Zhang, Z.J.; Ma, H.S.; Du, T.; Luo, M.; Liu, D.; Yang, J. Tunable Mid-infrared Selective Emitter Based on Inverse Design Metasurface for Infrared Stealth with Thermal Management. Opt. Express 2022, 30, 18250–18263. [Google Scholar] [CrossRef] [PubMed]
  30. Huang, K.L.; Shi, B.T.; Li, X.; Li, X.; Huang, S.; Li, Y. Multi-modals Sensor Fusion for Auto Driving Perception: A survey. arXiv 2022, arXiv:2202.02703. [Google Scholar]
Figure 1. Illustration of the HRRP of a plane target.
Figure 2. Overview of small UAV targets for capturing radar echo data.
Figure 3. HRRP of four-rotor drone and coaxial helicopter from different LOS.
Figure 4. The time–frequency spectrograms of a four-rotor drone and a coaxial helicopter from different LOS.
Figure 5. The proposed multi-feature fusion architecture for small UAV target recognition.
Figure 6. Data collection setup and experiment scenario.
Figure 7. Confusion matrix between the proposed method and other comparison methods.
Figure 8. Comparison of evaluation metrics of different methods.
Table 1. Specifications of the FMCW radar utilized in experiments.

| Radar Experimental Parameters | Value |
| --- | --- |
| Radar waveform | FMCW |
| Radar antenna | 2 TX & 4 RX |
| Slope of sweep frequency $\mu$ | 50 MHz/µs |
| Signal sweep bandwidth $B$ | 4 GHz |
| Signal sweep period $T$ | 80 µs |
| Speed of light $c$ | 3 × 10⁸ m/s |
| Number of chirps $N$ | 256 |
| Signal sampling frequency $f_s$ | 10 MHz |
| Radar range resolution $R$ | 3.75 cm |
| Radar velocity resolution $\nu$ | 4.75 cm/s |
Table 2. Specifications of the proposed fusion network for small UAV target recognition.

| Network Name | Network Layer No. | Kernels | Pooling |
| --- | --- | --- | --- |
| Micro-motion feature extractor | Conv1 | 3 × 3@6 | 2 × 2 max pooling |
| | Conv2 | 3 × 3@16 | 2 × 2 max pooling |
| | Conv3 | 3 × 3@32 | 2 × 2 max pooling |
| | Conv4 | 3 × 3@64 | 2 × 2 max pooling |
| | Conv5 | 3 × 3@128 | 2 × 2 max pooling |
| | Conv6 | 3 × 3@64 | 2 × 2 max pooling |
| | Conv7 | 3 × 3@32 | 2 × 2 max pooling |
| Classifier | Fc5 | @2021 | |
| | Fc6 | @512 | |
| | Fc7 | @128 | |
| | Fc8 | @3 | |
Table 3. Geometric structure features of the four-rotor drone target from HRRP.

| Structure Features | LOS Angle 1 | LOS Angle 2 | LOS Angle 3 | LOS Angle 4 |
| --- | --- | --- | --- | --- |
| Center of scattering | 7.109 | 7.101 | 7.043 | 7.028 |
| Length of radial | 0.424 | 0.423 | 0.413 | 0.392 |
| Number of peaks | 1.110 | 1.115 | 1.116 | 1.108 |
| Range between peak points | 0.198 | 0.286 | 0.290 | 0.275 |
| Range between maximum peak and nearest edge | 0.330 | 0.334 | 0.345 | 0.328 |
Table 4. Geometric structure features of the coaxial helicopter target from HRRP.

| Structure Features | LOS Angle 1 | LOS Angle 2 | LOS Angle 3 | LOS Angle 4 |
| --- | --- | --- | --- | --- |
| Center of scattering | 7.629 | 7.431 | 7.123 | 6.944 |
| Length of radial | 0.413 | 0.435 | 0.447 | 0.397 |
| Number of peaks | 1.119 | 1.119 | 1.111 | 1.113 |
| Range between peak points | 0.128 | 0.106 | 0.169 | 0.102 |
| Range between maximum peak and nearest edge | 0.360 | 0.316 | 0.304 | 0.395 |
Table 5. Recognition result (%) comparison of different methods for small UAV targets.

| Type | SVM | RF | AdaBoost | LSTM | CNN | Ours |
| --- | --- | --- | --- | --- | --- | --- |
| Four-rotor drone | 87.39 ± 2.17 | 89.83 ± 2.45 | 89.56 ± 1.83 | 94.28 ± 1.47 | 96.08 ± 1.48 | 98.50 ± 1.26 |
| Coaxial helicopter | 86.40 ± 1.13 | 88.85 ± 2.71 | 90.73 ± 1.78 | 95.65 ± 1.75 | 96.52 ± 1.34 | 98.25 ± 0.78 |
| Others | 89.16 ± 2.34 | 88.70 ± 2.73 | 91.29 ± 2.46 | 94.21 ± 2.11 | 96.01 ± 1.72 | 98.74 ± 0.34 |
| OA (%) | 87.60 ± 2.13 | 89.01 ± 1.58 | 90.63 ± 1.86 | 94.27 ± 1.08 | 96.51 ± 0.79 | 98.56 ± 0.47 |
| Precision (%) | 87.91 ± 2.27 | 88.15 ± 2.37 | 90.45 ± 1.79 | 94.39 ± 1.53 | 96.06 ± 1.82 | 98.43 ± 0.14 |
| Recall (%) | 88.68 ± 2.67 | 89.06 ± 2.42 | 89.66 ± 2.53 | 94.28 ± 1.45 | 96.13 ± 1.32 | 98.49 ± 0.55 |
| F1 (%) | 88.29 ± 2.92 | 89.03 ± 2.63 | 90.05 ± 2.32 | 94.33 ± 1.73 | 96.09 ± 1.40 | 98.47 ± 0.69 |
