Communication

A Weighted Decision-Level Fusion Architecture for Ballistic Target Classification in Midcourse Phase

School of Electronic Science and Engineering, Nanjing University, Nanjing 210023, China
*
Authors to whom correspondence should be addressed.
Sensors 2022, 22(17), 6649; https://doi.org/10.3390/s22176649
Submission received: 31 July 2022 / Revised: 27 August 2022 / Accepted: 31 August 2022 / Published: 2 September 2022
(This article belongs to the Special Issue RADAR Sensors and Digital Signal Processing)

Abstract:
The recognition of warheads in the target cloud of the ballistic midcourse phase remains a challenging issue for missile defense systems. Considering factors such as the differing dimensions of the features between sensors and the different recognition credibility of each sensor, this paper proposes a weighted decision-level fusion architecture to take advantage of data from multiple radar sensors, and an online feature reliability evaluation method is also used to comprehensively generate sensor weight coefficients. The weighted decision-level fusion method can overcome the deficiency of a single sensor and enhance the recognition rate for warheads in the midcourse phase by considering the changes in the reliability of the sensor’s performance caused by the influence of the environment, location, and other factors during observation. Based on the simulation dataset, the experiment was carried out with multiple sensors and multiple bandwidths, and the results showed that the proposed model could work well with various classifiers involving traditional learning algorithms and ensemble learning algorithms.

1. Introduction

All ballistic missiles follow a common trajectory from launch to impact, divided into three phases: boost, midcourse, and terminal. Each phase presents a different level of difficulty for interception. Interception in the boost phase is hard because the interceptor must be within attack range within a few minutes, while the missile engines are still firing. Defensive technologies for the terminal phase are usually the easiest to build because they require only short-range missiles and radars, but their main disadvantage is that there may not be enough time to schedule all interceptions when countering a large-scale attack. The midcourse phase represents the majority of the flight time of a ballistic missile, from minutes to the better part of an hour depending on the range of the missile. It provides the best opportunity to intercept an incoming warhead and gives the defense system more time to observe and discriminate countermeasures from the targets.
The efficient identification of a true warhead is a prerequisite for accurate interception by defense systems. With the increasingly complex battlefield environment, the mature application of warhead attitude control technology, and the development of hypersonic weapons, traditional ballistic missile defense systems face great challenges. Modern high-tech warfare can be regarded as information warfare. When using multi-source information to describe all aspects of an incoming target, the defense system can identify the target more accurately, and more reliably [1,2].
Multi-sensor data fusion technology for target recognition can be represented at three different levels ([3], pp. 51–56): signal-level fusion, feature-level fusion, and decision-level fusion.
Fusion at the signal-level applies a combination operator to each set of registered pixels, which correspond to associated measurements from each sensor. The merged signal has a higher quality than the original source. Then, feature extraction and pattern classification are performed to achieve target identification [4,5,6]. In feature-level fusion, the measurement signals of each sensor are first converted to the original source features and then merged as a new feature to classify the target. Classification of the merged feature is essentially a general pattern recognition problem [7,8]. Decision-level fusion involves performing signal preprocessing, feature extraction, and pattern classification locally and then establishing a preliminary conclusion about the observed target. The fusion center combines the recognition results from each sensor to obtain the final ballistic target identity description [1,9,10,11].
Generally, signal-level fusion requires a wide communication bandwidth if sensors are located on different platforms. Feature-level fusion can synthesize homogeneous or heterogeneous sensors, and the dimensionality of the merged feature is generally relatively high, which leads to difficulties in the subsequent pattern classification. The decision-level fusion structure has low communication bandwidth requirements and can asynchronously process echo signals, which makes it more appropriate for complex ballistic missile problems.
However, most research on multi-sensor fusion for ballistic target recognition ignores a critical problem: the quality of the target echo signal is influenced by the environment, the sensor's working mode, physical factors, and so on. The quality of the echo signals of the same target received by the same radar sensor at different times may therefore differ considerably. For a new target, it is necessary to know how reliable the online feature provided by each sensor is and how credible the output of each sensor's classifier is. Proper dynamic reliability and credibility evaluation methods are therefore helpful in improving recognition performance.
This paper considers the following two factors of the ballistic target classification problem in the midcourse phase: (1) each radar sensor has different working modes and signal resolutions, and (2) each sensor has different weights at each process time. Therefore, a weighted decision-level fusion architecture is proposed to take advantage of data from multiple radar sensors, for which an online feature reliability evaluation method is used to comprehensively generate sensor weight coefficients.
The remainder of this paper is organized as follows. Section 2 introduces the background knowledge on the characteristics of ballistic missiles and radar observation for the recognition of a ballistic target. Section 3 and Section 4 provide a novel reliability evaluation algorithm and the proposed architecture. Section 5 illustrates the results of the experiment. Section 6 provides the discussion and summary.

2. Radar Network System for Ballistic Target Classification

This section first describes the flight characteristics of ballistic targets and the characteristics of their wide-band and narrow-band radar observation signals. It then introduces the radar echo signal simulation method used to prepare data for the subsequent multi-sensor fusion target classification research.

2.1. Ballistic Target Characteristics

Figure 1 shows the general characteristics of ballistic missile flight, where the launch of the threat missile is detected by forward-based radars at (1), the threat missile releases its warhead and decoys at (2), the ground-based radar begins tracking the targets at (3), and discrimination radars observe the targets to try to determine which object is the warhead at (4).
Targets released by missiles in the midcourse phase follow the ballistic trajectory but have special micro-motion characteristics. Warheads and decoys move with precession motion due to the separation disturbance and retain this motion until they re-enter the atmosphere [12]. Compared to warheads and decoys, debris is lighter in weight and generally tumbles due to gravity and because of the absence of a spinning motor.
Figure 2a illustrates a typical cone target with precession, where the precession motion can be viewed as a combination of two types of rotational motion: spinning of the target around its symmetry axis and conical rotation, such that the symmetry axis rotates conically around the precession axis. Cylinder debris tumbles around the precession axis, as illustrated in Figure 2b. The object micro-motion can be modeled by these parameters: spin frequency ω_s, precession frequency ω_p, tumbling frequency ω_t, and nutation angle θ, which serve as a vital theoretical basis for distinguishing different types of micro-motion. Chen [13] and Liu [14] give a more detailed mathematical analysis of the micro-motion of ballistic targets.

2.2. Radar Observation

Radar systems used for ballistic target recognition usually comprise low-resolution radars and high-resolution imaging radars, which have different advantages and complementary resources. Low-resolution radar can obtain radar cross section (RCS) time series, target polarization information, and micro-Doppler information [15]; these signals have good real-time recognition performance, but it is difficult to extract the fine features of the target from them. High-resolution radar can achieve more accurate measurement of the target structure and micro-motion information, but it is expensive and requires extensive processing resources. This study utilized RCS time series and high-resolution range profile (HRRP) time sequences.

2.2.1. RCS Time Series

RCS measurement is affected by factors such as the scattering properties and the attitude of the target. For a given target, the value of the monostatic RCS is related to the incident wavelength and the observation angle, so the RCS can be defined as σ(f, φ, θ), where f is the frequency of the incident wave, φ is the elevation angle, and θ is the aspect angle [16]. Here, the aspect angle of the target is fixed at a constant value θ_0, so the RCS becomes σ(f, φ)|_{θ_0}. Three typical metal ballistic targets, shown in Figure 3, are considered here: cone, cone plus cylinder, and cylinder. Setting the frequency of the incident wave to 3 GHz and θ_0 to 90°, the scattering characteristics at different φ values for the three targets can be calculated with FEKO (FEldberechnung bei Körpern mit beliebiger Oberfläche), as shown in Figure 4. Considering φ ∈ [50°, 80°], the usual observation range of a defense radar, the RCS of the cylinder is the largest and the RCSs of the cone and the cone plus cylinder are similar, but the fluctuations in the RCS of the cone plus cylinder are more pronounced than those of the cone.

2.2.2. HRRP Sequences

An HRRP is the sum of the projections of the sub-echoes of the target's scattering points along the radar line of sight (RLOS). When the bandwidth of the radar is large enough that the range resolution is much smaller than the size of the target, the equivalent scattering centers of the target are separated along the RLOS. The HRRP therefore contains information on the structure, size, and shape of the target, and effectively acquiring and using this information is important in the field of ballistic target recognition. In addition, the HRRP can also be used to extract the target radial length feature.
The HRRP can be obtained by inverse Fourier transformation of the frequency-domain response of the target, which can be written as:

X(f) = \sum_{k=1}^{K} B_k e^{-j \frac{4 \pi R_k}{c} f}

where f is the frequency, c is the speed of light, K is the number of scattering points on the target, and B_k and R_k are, respectively, the amplitude of the k-th scattering point and the range between the k-th scattering point and the observation radar at a given time. Setting the radar center frequency f_0 to 9.5 GHz, the frequency step Δf to 15.625 MHz, and the range sample number N to 64, the number of visible scattering points varies with the radar line of sight for the three targets, as shown in Figure 5, where the cylinder has the largest number of visible scattering points and the cone has the smallest in φ ∈ [50°, 80°].
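The stepped-frequency HRRP model above can be sketched numerically. The radar parameters (f_0 = 9.5 GHz, Δf = 15.625 MHz, N = 64) are taken from the text; the scatterer amplitudes and ranges below are illustrative values of our own, not from the paper:

```python
import numpy as np

# Stepped-frequency parameters from the text.
c = 3e8                                  # speed of light, m/s
f0, df, N = 9.5e9, 15.625e6, 64
freqs = f0 + np.arange(N) * df

# Illustrative scattering centers: (amplitude B_k, range R_k in metres).
scatterers = [(1.0, 1000.0), (0.6, 1000.9), (0.3, 1001.7)]

# Frequency response X(f) = sum_k B_k * exp(-j 4 pi R_k f / c)
X = sum(B * np.exp(-1j * 4 * np.pi * R * freqs / c) for B, R in scatterers)

# The HRRP is the magnitude of the inverse DFT of X(f);
# the range-bin size is c / (2 N df) = 0.15 m here.
hrrp = np.abs(np.fft.ifft(X))
range_res = c / (2 * N * df)
```

The 1 GHz synthetic bandwidth (N·Δf) gives a 0.15 m range resolution, fine enough to separate scattering centers spaced about a metre apart, which is what makes HRRP features useful for the targets in Figure 3.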

2.3. Radar Echo Signal Generation

The generation of radar echo signals by the scheme is depicted in Figure 6. It begins with three basic components: the object 3D model, the ballistic missile trajectory, and the object micro-motion. The details of the process are introduced below.
The typical types of ballistic targets in the midcourse phase are warheads, decoys, and debris. The 3D models shown in Figure 3 are used to represent these targets. The monostatic RCSs of the targets at the different carrier frequencies were computed with the FEKO software.
The ballistic trajectory was formulated using the Systems Tool Kit (STK). The STK establishes the missile launch scene. The missile is launched from point A (44, 60) and lands at point B (110, 42). Five radars are located along the trajectory to observe the targets, and their positions are R1 (85,45), R2 (95,45), R3 (94,45), R4 (85,40), and R5 (95,40), respectively. The types of radar and the pulse repetition frequency (PRF) are shown in Table 1, including two narrowband radars and three wideband radars. Figure 7a illustrates the simulation ballistic missile trajectory, with radars measuring the ballistic target from 540 s to 820 s after its launch, as shown with the red bold line. The five radars’ observation angles as functions of the time sequence are shown in Figure 7b.
From practical experience, the nutation angle of the warhead is small due to attitude control, and the precession axis is often the reentry direction. In contrast, the nutation angle of the decoy is large. Debris usually presents a tumbling motion. Therefore, eight targets with various types and micro-motions were simulated and classified as four kinds according to the motion characteristics of the ballistic target, as shown in Table 2.
The target attitude time series can be obtained from the missile trajectory and target micro-motion using STK simulation. Then, theoretical echo signals can be obtained by combining the attitude time series and electromagnetic calculation results. Finally, complex white Gaussian noise with an SNR of 12 dB can be added to the theoretical radar echo signals to obtain the radar observation echo signals.

3. A Novel Online Feature Reliability Evaluation Based on Dependence

As is well known, the quality of the echo signal received by the radar changes in a real-world online environment, and these constraints and correlations that exist in the physical world can be seen as dependencies. Therefore, Bayes’ theorem can be used to establish a dependence measure, which can automatically adjust to various conditions and track the characteristics of a dynamic system for ballistic target classification.
For an input feature vector x of a newly arriving target, the online feature reliability is denoted as
r_n(x) = p(c(x) = \theta_p \mid \hat{c}(x) = \theta_p)

which expresses the conditional probability that x truly belongs to the class θ_p given that it is classified into the class θ_p by the radar's classifier M_n. Here, c(x) = θ_p denotes the true class of x, and ĉ(x) represents the predicted class declared by M_n.
Training information is used to measure r_n(x). The neighborhood patterns of x in the training feature space χ_n generally have close attribute values. The k-nearest neighbors of x, denoted x_k, k = 1, …, K, are first found in the training feature space χ_n according to the Euclidean distance. p(c(x_k) = θ_p | x_k) expresses the probability that sample x_k comes from the class θ_p and is obtained through the following dependence formula:

p(\theta_p \mid \theta_j) = \sum_{k=1}^{K} p(c(x_k) = \theta_p \mid x_k) \, p(x_k \mid \theta_j)

where the conditional probability p(x_k | θ_j) is the probability of the neighbor sample x_k when drawn from the class θ_j. When the terms p(θ_p | x_k) are regarded as unknown variables, Equation (3) can be written as a multiplication between a coefficient matrix A and a vector y:
A y = b

where A is a C × K matrix,

A = \begin{bmatrix} p(x_1 \mid \theta_1) & \cdots & p(x_K \mid \theta_1) \\ \vdots & \ddots & \vdots \\ p(x_1 \mid \theta_C) & \cdots & p(x_K \mid \theta_C) \end{bmatrix}

b is a C × 1 column vector containing the right-hand conditional probabilities:

b = [\, p(\theta_p \mid \theta_1) \ \cdots \ p(\theta_p \mid \theta_C) \,]^T

and

y = [\, p(\theta_p \mid x_1) \ \cdots \ p(\theta_p \mid x_K) \,]^T

With K unknown variables and K equations (taking K = C so that A is square and invertible), the solution of this system of linear equations is:

y = A^{-1} b

Finally, the online feature reliability r_n(x) is obtained by calculating the mean of the K dependence probabilities:

r_n(x) = p(c(x) = \theta_p \mid x) = \frac{1}{K} \sum_{k=1}^{K} p(\theta_p \mid x_k)
Supposing that the training data of each class fit a Gaussian distribution, the neighbor sample x_k is assigned to a class θ_j only with its intensity level, and the conditional density function is defined by:

p(x_k \mid \theta_j) = \frac{1}{(2\pi)^{d/2} |\epsilon_j|^{1/2}} \exp\left( -\frac{1}{2} (x_k - u_j)^T \epsilon_j^{-1} (x_k - u_j) \right)

where u_j and ϵ_j are, respectively, the mean d-vector and the d × d covariance matrix associated with θ_j (d is the dimension of the feature vector). The elements p(θ_p | θ_j) of b represent conditional probabilities obtained through the confusion matrix of the classifier M_n, which can be regarded as expert experience:

p(\theta_p \mid \theta_j) = \frac{q_{pj}}{\sum_{i=1}^{C} q_{ij}}

where the element q_{pj} is the number of samples of the class θ_j that the classifier M_n predicts as the class θ_p.
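The reliability computation above can be sketched as follows. This is a minimal illustration, assuming K = C so that A is invertible and using class-conditional Gaussians for p(x_k | θ_j); the function name and clipping of the solved probabilities to [0, 1] are our own choices, not from the paper:

```python
import numpy as np
from scipy.stats import multivariate_normal

def online_reliability(x, X_train, conf_matrix, means, covs, pred_class, K=None):
    """Sketch of the dependence-based reliability r_n(x), Eqs. (2)-(11).

    X_train: (M, d) training features; conf_matrix: (C, C) counts q[p, j];
    means/covs: per-class Gaussian parameters; pred_class: index of the
    class theta_p declared by the sensor's classifier. Assumes K = C.
    """
    C = conf_matrix.shape[0]
    K = K or C
    # K nearest neighbours of x in the training feature space (Euclidean).
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:K]
    neighbours = X_train[idx]
    # A[j, k] = p(x_k | theta_j) under the class-conditional Gaussians.
    A = np.array([[multivariate_normal.pdf(xk, means[j], covs[j])
                   for xk in neighbours] for j in range(C)])
    # b[j] = p(theta_p | theta_j) = q[p, j] / sum_i q[i, j]  (confusion matrix).
    b = conf_matrix[pred_class] / conf_matrix.sum(axis=0)
    # Solve A y = b, giving y[k] = p(theta_p | x_k); clip to valid range.
    y = np.clip(np.linalg.solve(A, b), 0.0, 1.0)
    # r_n(x) is the mean of the K dependence probabilities.
    return float(np.mean(y))
```

In practice A can be poorly conditioned when neighbours are far from all class means, which is why a regularized solve or clipping, as above, is a sensible safeguard.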

4. Proposed Multi-Sensor Fusion Architecture for Ballistic Target Classification

The workflow of the proposed multi-sensor fusion architecture is described in Figure 8, and it comprises five stages as follows:
(a)
Observation. This stage involves the collection of measured signals from the radar network, such as the RCS time series and the HRRP time sequence. The ballistic missile observation network is assumed to comprise N radars, i.e., S = {S_1, …, S_N}.
(b)
Signal preprocessing. At this stage, the radar echo signal is processed using feature extraction, and relatively stable and highly separable features are selected for identification. The intuition behind the signal preprocessing is to extract key signal features that depict the target details. In the proposed architecture, two measurement signals are mainly considered: the RCS time series and the HRRP time series. The RCS time series contains the following target characteristic information: (1) location characteristics, which describe the average and specific locations of the target RCS, such as the mean, quantiles, minimum, and maximum; (2) dispersion characteristics, which indicate the dispersion of the target RCS sequence across the entire real number axis, such as the variance, standard deviation, standard mean deviation, and coefficient of variation; and (3) distribution characteristics, such as the standard skewness coefficient and the standard kurtosis coefficient. HRRP sequences contain information about the structure, size, and shape of the target, and it is important to effectively acquire and use this information in the field of ballistic target recognition. HRRPs make it possible not only to extract features such as target distance and speed but also to obtain target features such as the number, positions, and scattering intensities of the scattering centers. In addition, HRRPs can also be used to extract the target radial length feature. The features extracted from the raw signal can be expressed as A_1, …, A_{d_n}, where d_n is the total number of features of the sensor S_n.
(c)
Feature transformation. At this stage, the signal features of each sensor are merged into a long vector whose dimension is the sum of the dimensions of the original signal features. Feature transformation reduces the impact of the high-dimensional feature space by removing redundant and irrelevant features. Principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA) are popular methods for feature transformation. A transformation method is applied to the signal feature vector of the radar S_n, and the result is a new feature vector F_s = [f_1 ⋯ f_{d_n}].
(d)
BPA generation using trained classifier and sensor weight evaluation. At this stage, each piece of information extracted from the sensor is modeled as a basic probability assignment (BPA). The BPAs are generated based on the output of the trained classifier. Online feature quality evaluation and dynamic sensor credibility evaluation are used to obtain a comprehensive weight, which is used to modify the BPA of each sensor.
(e)
Weighted decision-level fusion. At this stage, weighted decisions are made.
More details about stage (d) and stage (e) are introduced in the following section.
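Before turning to those stages, the statistical RCS features named in stage (b) can be sketched directly. The three feature groups follow the text; the specific feature names and the function below are our own illustrative labels:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def rcs_features(rcs):
    """Illustrative statistical features of an RCS time series (stage (b)):
    location, dispersion, and distribution characteristics."""
    rcs = np.asarray(rcs, dtype=float)
    mean, std = rcs.mean(), rcs.std(ddof=1)
    return {
        # location characteristics
        "mean": mean, "median": np.median(rcs),
        "q25": np.quantile(rcs, 0.25), "q75": np.quantile(rcs, 0.75),
        "min": rcs.min(), "max": rcs.max(),
        # dispersion characteristics
        "var": rcs.var(ddof=1), "std": std,
        "coef_variation": std / mean if mean else np.nan,
        # distribution characteristics
        "skewness": skew(rcs), "kurtosis": kurtosis(rcs),
    }
```

Each radar's feature vector A_1, …, A_{d_n} would then be the concatenation of such statistics (plus HRRP-derived features for the wideband radars) over a sliding signal window.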

4.1. BPA Generation Using Trained Classifier

There are many mathematical theories available to represent the imperfection of data, such as Bayesian probability theory [17], fuzzy set theory [18], or belief function theory [19]. Most of these approaches can represent specific aspects of imperfect data. For example, probabilistic methods rely on probability distribution functions to express the uncertainty of the data. Fuzzy set theory introduces the novel notion of partial set membership, which enables imprecise reasoning. Belief function theory is a popular method for dealing with uncertainty and imprecision with a theoretically evidential reasoning framework.
In our work, belief function theory is exploited to construct the evidence given by each radar because of its advantages in being able to separate the two sources of uncertainty and its fairly simple modeling of doubt and lack of information. Let Θ = {θ_1, …, θ_C} represent the frame of discernment. The elements of the power set 2^Θ = {H | H ⊆ Θ} are called hypotheses. A basic probability assignment (BPA) defines a belief function m from 2^Θ to [0, 1], satisfying:

m(\emptyset) = 0

\sum_{H \subseteq \Theta} m(H) = 1

where ∅ denotes the empty set and H is any subset of Θ. The value taken by the BPA at H is called the basic probability mass and represents the exact degree of trust in the evidence for H in the recognition framework.
The sensor's BPA is constructed from the likelihoods of the different classes output by M_n. Let x = F_s be the input feature vector of the trained classifier M_n, whose output is the posterior probability μ_i^n(x), i = 1, …, C, for each possible class.
μ_i^n(x) ∈ [0, 1] represents the degree to which x belongs to the class θ_i according to the classifier M_n. The likelihood value for each hypothesis under the framework Θ is then defined by:

l_n(\theta_i) = \mu_i^n(x)

and the likelihood of the unknown (universal) set Ω is determined by:

l_n(\Omega) = 1 - \max_i \, l_n(\theta_i)

Every likelihood can be normalized to obtain the BPA:

m_n(\theta_i) = \frac{l_n(\theta_i)}{l_n(\theta_1) + \cdots + l_n(\theta_C) + l_n(\Omega)}

m_n(\Omega) = \frac{l_n(\Omega)}{l_n(\theta_1) + \cdots + l_n(\theta_C) + l_n(\Omega)}

where m_n(Ω) captures the total ignorance about the classification undertaken by the classifier M_n and plays a neutral role in the combination with the outputs of the other classifiers.
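The BPA construction of Equations (14)-(17) reduces to a few lines; the function below is a direct sketch of those equations:

```python
import numpy as np

def bpa_from_scores(posteriors):
    """BPA from classifier posteriors, Eqs. (14)-(17).

    posteriors: length-C vector of mu_i(x) in [0, 1]. Returns (m, m_omega)
    where m[i] = m(theta_i) and m_omega = m(Omega), the ignorance mass.
    """
    l = np.asarray(posteriors, dtype=float)   # l(theta_i) = mu_i(x)
    l_omega = 1.0 - l.max()                   # l(Omega) = 1 - max_i l(theta_i)
    total = l.sum() + l_omega                 # normalising denominator
    return l / total, l_omega / total
```

A confident classifier (one posterior near 1) yields a small m(Ω), while a flat posterior pushes most of the mass into m(Ω), letting that sensor defer to the others during combination.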

4.2. Dynamic Sensor Weight Evaluation

Here, the online feature reliability and the sensor credibility derived from classifier performance are employed to determine the weights used in a given scenario. For an input feature vector x of a newly arriving target, each sensor obtains two reliability values: (1) r_n(x), the reliability of the online feature, and (2) β_n, the credibility of the sensor.
The credibility of the sensors is evaluated based on the degree of support between the basic probability assignments (BPAs) provided by the sensors, using the evaluation method proposed by Yong [20]. The sensor weight obtained after combining the two values is:

v_n = r_n(x) \times \beta_n

and it is then normalized by:

w_n = \frac{v_n}{\sum_{i=1}^{N} v_i}
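Equations (18)-(19) amount to an element-wise product followed by normalization; a minimal sketch:

```python
import numpy as np

def sensor_weights(reliability, credibility):
    """Combine online feature reliability r_n(x) with sensor credibility
    beta_n: v_n = r_n(x) * beta_n, then normalise so the weights sum to 1."""
    v = np.asarray(reliability, dtype=float) * np.asarray(credibility, dtype=float)
    return v / v.sum()
```

Note that a sensor is down-weighted if either quantity is low: a normally reliable sensor whose current echo is degraded still receives a small weight for this observation.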

4.3. Weighted Decision-Level Fusion

There are several decision-level fusion techniques, such as voting, weighted decision, Bayesian inference, the Dempster–Shafer method, and generalized evidential processing theory. The selection of an appropriate fusion strategy depends mainly on the output format of the classifiers. The purpose of this section is to obtain the fused BPA m_fused using a variety of fusion techniques.

4.3.1. Dempster–Shafer

The Dempster–Shafer method enables the fusion of several sources using the Dempster combination operator. Given two distinct BPAs on the frame Θ, the aggregation can be achieved using the conjunctive combination rule [21]:

m(H) = m_1(H) \oplus m_2(H) = \frac{1}{K} \sum_{A \cap B = H} m_1(A) m_2(B), \quad \forall A, B \subseteq \Theta

where K is defined by:

K = 1 - \sum_{A \cap B = \emptyset} m_1(A) m_2(B)

The normalization coefficient K evaluates the conflict between m_1 and m_2. The fused BPA m_fused can be obtained by using Equation (20) to fuse the weighted BPAs of each sensor (N − 1) times.
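For the BPAs produced in Section 4.1, the focal elements are the C singletons plus Ω, so the intersections in Dempster's rule are easy to enumerate. A sketch under that assumption (general subsets would need set intersections):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs whose focal elements are singleton
    class labels plus 'Omega'. m1, m2: dicts mapping label -> mass."""
    fused, conflict = {}, 0.0
    for A, ma in m1.items():
        for B, mb in m2.items():
            if A == B:
                inter = A              # A ∩ A = A, Omega ∩ Omega = Omega
            elif A == "Omega":
                inter = B              # Omega ∩ B = B
            elif B == "Omega":
                inter = A              # A ∩ Omega = A
            else:
                conflict += ma * mb    # distinct singletons: A ∩ B = ∅
                continue
            fused[inter] = fused.get(inter, 0.0) + ma * mb
    K = 1.0 - conflict                 # normalisation coefficient, Eq. (21)
    return {H: mass / K for H, mass in fused.items()}
```

Fusing N sensors is then a left fold: combine sensor 1 with sensor 2, combine the result with sensor 3, and so on, i.e., (N − 1) applications of the rule.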

4.3.2. Bayesian Inference

The Bayesian fusion structure uses a priori information on the probability that a hypothesis exists and the likelihood that a sensor can classify the data to the correct hypothesis [22]. The inputs to the structure are (1) p(θ_j), the prior probability that the object θ_j exists; (2) p(D_{n,i} | θ_j), the likelihood that each sensor S_n will classify the data as belonging to any one of the C hypotheses; and (3) D_n, the input decision from the n-th sensor.
Under the independence assumption, the estimated probability of the true class label θ_j can be calculated by

p(D \mid \theta_j) = p(D_1, \ldots, D_N \mid \theta_j) = \prod_{n=1}^{N} p(D_n \mid \theta_j)

where D = [D_1 ⋯ D_N] denotes the vector of labels generated by the ensemble, p(D_n) denotes the probability that the n-th classifier labels x with the class D_n ∈ Θ, N is the number of sensors, and C is the number of classes. The posterior probability needed to label x is then

p(\theta_j \mid D) = \frac{p(\theta_j) \, p(D \mid \theta_j)}{p(D)}

The denominator does not depend on θ_j and can be ignored, so the final support for the class θ_j is

m_{fused}(\theta_j) = p(\theta_j \mid D) \propto p(\theta_j) \prod_{n=1}^{N} p(D_n \mid \theta_j)

For each sensor's classifier model, a C × C confusion matrix CM_n is calculated from the testing dataset, where cm^n_{k,s} is the number of elements in the dataset whose true class label is θ_k and which are assigned by the classifier to the class θ_s. Denoting by c_k the total number of elements in the dataset from the class θ_k, we have

p(D_n \mid \theta_k) = \frac{cm^n_{k,s_n}}{c_k}

and the prior knowledge p(θ_k) can be taken as uniform when unknown. Considering the sensor weights, the final support for the class θ_k is

m_{fused}(\theta_k) \propto \prod_{n=1}^{N} w_n \, \frac{cm^n_{k,s_n}}{c_k}
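The confusion-matrix-based Bayesian fusion can be sketched as below. The unweighted product follows Equations (23)-(24); for the sensor weights we use them as exponents on each likelihood, which is one common realization but an assumption on our part, as the paper's exact weighting in Equation (25) may differ:

```python
import numpy as np

def bayes_fusion(conf_mats, decisions, weights=None):
    """Bayesian decision fusion from per-sensor confusion matrices.

    conf_mats: list of (C, C) count matrices, cm[k, s] = # of class-k test
    samples labelled s by that sensor. decisions: label index s_n from each
    sensor. Uniform priors; weights (optional) exponentiate each likelihood.
    """
    N = len(conf_mats)
    C = np.asarray(conf_mats[0]).shape[0]
    weights = np.ones(N) if weights is None else np.asarray(weights, dtype=float)
    support = np.ones(C)
    for n in range(N):
        cm = np.asarray(conf_mats[n], dtype=float)
        # p(D_n = s_n | theta_k) = cm[k, s_n] / c_k  (row-normalised column)
        lik = cm[:, decisions[n]] / cm.sum(axis=1)
        support *= lik ** weights[n]
    return support / support.sum()
```

With exponent weighting, a weight near zero flattens that sensor's likelihood toward 1, effectively removing its influence, while a weight of 1 recovers the plain Bayesian product.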

4.3.3. Majority Vote

Voting is the simplest method: it counts the number of decisions for each class and assigns the object to the class that obtains the highest number of votes. The weighted voting fusion structure is described by:

m_{fused}(\theta_j) = \sum_{n=1}^{N} \left[ w_n m_n(\theta_j) + \frac{w_n m_n(\Omega)}{C} \right]

4.3.4. Winner Takes All

The output of the most reliable sensor is taken as the final judgment:

k = \arg\max_{n = 1, \ldots, N} w_n

m_{fused} = m_k
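The last two strategies are short enough to sketch together, assuming each sensor provides its singleton masses and its ignorance mass m_n(Ω) separately:

```python
import numpy as np

def weighted_vote(bpas, omegas, weights):
    """Weighted majority vote, Eq. (26): each sensor's ignorance mass
    m_n(Omega) is shared evenly over the C classes before weighting."""
    bpas, omegas, w = (np.asarray(a, dtype=float) for a in (bpas, omegas, weights))
    C = bpas.shape[1]
    return (w[:, None] * (bpas + omegas[:, None] / C)).sum(axis=0)

def winner_takes_all(bpas, weights):
    """Eqs. (27)-(28): output the BPA of the most heavily weighted sensor."""
    return np.asarray(bpas, dtype=float)[int(np.argmax(weights))]
```

Winner-takes-all discards all but one sensor, which explains the instability reported in Section 5: a single miscalibrated but highly weighted sensor dictates the decision.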

4.4. Final Decision

To make the final decision, a probability function must be constructed from the mass functions, and the decision is the one that maximizes the expected utility. The Pignistic transformation [23] is used, defined by

BetP(H) = \sum_{A \subseteq \Theta} \frac{|H \cap A|}{|A|} \, \frac{m_{fused}(A)}{1 - m_{fused}(\emptyset)}, \quad \forall H \subseteq \Theta

where |H ∩ A| is the cardinality of the set H ∩ A. Given m_fused(∅) = 0 and θ_1, …, θ_C ∈ Θ, BetP can be expressed for singletons as follows:

BetP(\{\theta\}) = \sum_{\theta \in B, \, B \subseteq \Theta} \frac{m_{fused}(B)}{|B|}

and the class of the unknown target is the one with the highest Pignistic probability:

A^* = \arg\max_{\theta \in \Theta} BetP(\{\theta\})
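For the fused BPAs used here, whose focal elements are singletons plus Ω, the Pignistic transformation simplifies to adding m(Ω)/C to each class mass. A sketch under that assumption:

```python
def pignistic(m, C):
    """Pignistic transformation (Eq. (29)) for a BPA whose focal elements
    are the C singletons plus 'Omega': BetP(theta) = m(theta) + m(Omega)/C.
    Returns the BetP dict and the argmax decision (Eq. (31))."""
    bet = {theta: mass + m.get("Omega", 0.0) / C
           for theta, mass in m.items() if theta != "Omega"}
    decided = max(bet, key=bet.get)
    return bet, decided
```

Because the ignorance mass is spread uniformly, it never changes the ranking between classes; it only makes the final probabilities sum to one.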

5. Experimental Results

The proposed architecture was tested on a simulation dataset. Through the progressive simulation described in Section 2.3, a ballistic missile dataset with four types of ballistic targets measured by five radars was obtained. In this section, the experiment parameters and the results are described.

5.1. Experiment Setup

Firstly, the midcourse echo signal of the target was recorded over 280 s. Then, the records of each radar were sliced to form the signal samples; the signal slicing window sizes are shown in Table 1, with a window step size of 1 s. Finally, the available datasets were randomly divided into a training set (four-fifths of the data) and a testing set (the remaining fifth). It is worth noting that each data-splitting operation was synchronized across the five radars.
The signal features of each radar are shown in Table 3. In the feature transformation stage, PCA and ICA were exploited separately. In pattern classification, traditional learning algorithms and ensemble learning algorithms are applied to verify the adaptability of the proposed model. Traditional learning algorithms include decision tree (DT), k-nearest neighbor (kNN), and Gaussian naïve Bayes (NB) algorithms. Ensemble learning algorithms include bagging, random forest bagging (RFB), adaptive boosting for multiclass classification (AdaBoostM2), random subspace boosting (RSB), stacking, and linear programming boosting (LPB) algorithms. The classifiers’ output is a probabilistic score for each hypothesis class for an unknown object. In the experiment, the class scores from the stacking classifier are calculated using Equation (11), that is, the confusion matrix of the classifier is used to calculate the posterior probability of the target. The target scores of other classifiers are given by their respective classifiers. When the target score is obtained, the BPA of each sensor is calculated according to Equations (16) and (17).
Due to the balanced sample size of each class in our simulation dataset, model performance was evaluated using the mean of the accuracy and the F1-score.

5.2. Accuracy of the Weighted Decision-Level Fusion Model

Table 4 shows the classification accuracy of the multi-sensor weighted decision-level fusion model, which uses the PCA method in the feature transformation stage. The result for a single sensor was obtained by directly inputting the transformed feature vector into the classifier. Comparing the five single sensors, R1 had the worst classification performance, while R2 had the best. Compared to R2, the average accuracy rates of the four fusion strategies DS, BAYES, MV, and WTA increased by 4.43%, 8.46%, 8.23%, and 3.75%, respectively.
Table 5 shows the classification accuracy of the fusion model with the ICA method in the feature transformation stage. After weighted decision-level fusion, compared to R2, which had the best performance, the average accuracy rates of the four fusion strategies DS, BAYES, MV and WTA increased by 0.69%, 4.24%, 4.07%, and 0.32%, respectively.

5.3. F1-Scores of Classes

Table 6 shows the F1-scores of each class using the multi-sensor weighted decision-level fusion method with the PCA method in the feature transformation stage. Compared to R2, which had the best recognition performance, the F1-scores for the warheads increased by 3.31%, 9.19%, 8.77%, and 0.46%, respectively, with the four fusion strategies DS, BAYES, MV, and WTA.
Table 7 shows the fusion methods when using the ICA method in the feature transformation stage. Compared to R2, the warhead F1-score was improved by 0.14%, 5.29%, 4.77%, and −3.29%, respectively, with the four fusion methods DS, BAYES, MV, and WTA.
In summary, the Bayesian fusion rule showed the best performance among the four fusion algorithms, possibly because it integrates the a priori probability of each class. The winner-takes-all method was not as stable as the other fusion methods, possibly because it selects the output of the single most reliable sensor each time, so that sensor's classification performance directly determines the final judgment.
The most significant contribution of this work is that the accuracy of ballistic target classification was increased by taking advantage of a combination of sensor data. Due to differences in working bandwidth, carrier frequency, and data rate, the recognition performances of individual radars can be quite distinct. As mentioned before, micro-motion feature extraction is an important means of distinguishing the real warhead from the other targets in the ballistic midcourse phase. Debris is easy to identify, as it tumbles randomly and its motion pattern is simple. The real warhead and the decoys, however, have similar shapes and similar motion patterns, so when only a single sensor's observation is relied on, a certain degree of misjudgment about the true warhead can occur, especially when the data rate is relatively low. With the comprehensive fusion model proposed in this study, the advantages of the individual radar systems complement each other. The proposed model has good applicability and showed improved performance under different classification algorithms: the average accuracy rate increased by 0.32% to 8.46%, and the improvement in the warhead F1-score ranged from 0.14% up to 9.19%.

6. Discussion and Summary

Considering factors such as the different dimensions of the features between sensors and the different levels of recognition credibility of each sensor, a weighted decision-level fusion architecture using multiple radar sensors was proposed. An online feature reliability evaluation method was also used to comprehensively generate the sensor weight coefficients, overcoming the deficiencies of any single sensor and enhancing the recognition rate for warheads in the midcourse phase.
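One standard way to realize the weight-coefficient idea within evidence theory is Shafer's classical discounting, which scales a sensor's BPA by its reliability before Dempster's combination. The two-class frame, mass values, and reliability factors below form a made-up toy example, not the paper's actual BPA construction.

```python
import numpy as np

# Toy frame of discernment: {warhead, decoy}. Masses are stored as
# [m(warhead), m(decoy), m(theta)], where theta is the full frame (ignorance).

def discount(mass, alpha):
    """Shafer discounting: scale the masses by reliability alpha and
    move the remaining belief onto the whole frame theta."""
    m = alpha * np.asarray(mass, dtype=float)
    m[2] += 1.0 - alpha
    return m

def dempster(m1, m2):
    """Dempster's rule of combination for the two-singleton frame."""
    k = m1[0] * m2[1] + m1[1] * m2[0]                 # conflicting mass
    w = m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]  # -> warhead
    d = m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]  # -> decoy
    t = m1[2] * m2[2]                                  # -> theta
    return np.array([w, d, t]) / (1.0 - k)

m_r1 = discount([0.7, 0.2, 0.1], alpha=0.6)   # less reliable sensor
m_r2 = discount([0.8, 0.1, 0.1], alpha=0.9)   # more reliable sensor
fused = dempster(m_r1, m_r2)
print(fused)   # fused belief still favors "warhead"
```

A low reliability weight pushes a sensor's mass towards total ignorance, so an unreliable sensor contributes little evidence rather than injecting conflict into the combination.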
Firstly, background knowledge on ballistic missiles was introduced. Then, the multi-sensor fusion architecture was described, which was divided into five stages: the observation stage, the signal preprocessing stage, the feature transformation stage, the sensor BPA and weight coefficient generation stage, and the weighted decision-level fusion stage. Finally, we described the experiment carried out with multiple sensor locations and multiple bandwidths, which showed that the proposed model works well with various classifiers, including traditional learning algorithms and ensemble learning algorithms: the average accuracy rate increased by 0.32% to 8.46%, and the improvement in the warhead F1-score ranged from 0.14% up to 9.19%.
However, the experiments and performances described above had limitations for ballistic target identification. Firstly, the model employs decision-level fusion at the highest level, so it is not necessary for all five radars to work simultaneously at each fusion node. If only one radar system is active at a fusion node, the online feature reliability calculation uses that sensor's own training data and observations to optimize the BPA function, and the output of the fusion stage then depends only on the BPA-adjusted sensor. Secondly, the fusion performance was assessed only on limited ballistic simulation data; the fusion performance on multi-track and multi-position data should be analyzed in the future. Finally, anti-missile operations place increasing demands on the real-time performance of a system, while the micro-feature extraction and target imaging of ballistic targets often rely on long-term observations. The evaluation metrics in this paper considered only classification accuracy, so how to extract stable features and achieve maximum classification accuracy under restrictions on the data rate, accumulation time, bandwidth, and other conditions remains to be studied.

Author Contributions

Conceptualization, L.Z.; Data curation, L.Z.; Formal analysis, N.W. and L.Z.; Funding acquisition, L.Z.; Investigation, N.W.; Methodology, N.W. and L.Z.; Project administration, L.Z. and X.Z.; Supervision, X.Z.; Writing—original draft, N.W.; Writing—review and editing, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (nos. 92059204, 22090054, and 91850204).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The scenario for a ballistic missile defense system includes a complex, global network of components. (1) The launch of the threat missile is detected by forward-based radars, (2) the threat missile releases its warhead and decoys, (3) the ground-based radar begins tracking the targets, (4) discrimination radars observe the target to try to determine which object is the warhead. The red dashed box highlights the specific functions that are addressed in this paper.
Figure 2. Micro-motion model of ballistic target: (a) precession motion; (b) tumbling motion.
Figure 3. Sketch of three typical metal target models: (a) cone; (b) cone plus cylinder; (c) cylinder.
Figure 4. The full attitude-angle static RCS for φ ∈ [0°, 180° ]: (a) static RCS of cone; (b) static RCS of cone plus cylinder; (c) static RCS of cylinder.
Figure 5. Normalized HRRPs of three models for φ ∈ [0°, 180°]: (a) HRRPs of cone; (b) HRRPs of cone plus cylinder; (c) HRRPs of cylinder.
Figure 6. Flowchart for radar echo signal simulation.
Figure 6. Flowchart for radar echo signal simulation.
Sensors 22 06649 g006
Figure 7. Ballistic missile trajectory and radar network simulation data: (a) model of ballistic missile trajectory; (b) multiple radar observation angle sequence.
Figure 8. The workflow of our proposed multi-sensor fusion architecture for ballistic target classification.
Figure 8. The workflow of our proposed multi-sensor fusion architecture for ballistic target classification.
Sensors 22 06649 g008
Table 1. Ground-based radar net parameters.
| Radar | Work Type | Prf, Hz | Window Length, s |
|---|---|---|---|
| R1 | Narrowband radar, carrier frequency: 3 GHz | 1 | 10 |
| R2 | Narrowband radar, carrier frequency: 1.5 GHz | 500 | 2 |
| R3 | Wideband radar, center frequency: 10.5 GHz, bandwidth: 1 GHz, frequency interval: 15.625 MHz | 1 | 10 |
| R4 | Wideband radar, center frequency: 10.5 GHz, bandwidth: 1 GHz, frequency interval: 15.625 MHz | 10 | 4 |
| R5 | Wideband radar, center frequency: 10.5 GHz, bandwidth: 1 GHz, frequency interval: 15.625 MHz | 500 | 2 |
Table 2. Target micro-motion parameters.
| Class | 3D Model Type | ωs, Hz | ωp, Hz | θ, deg |
|---|---|---|---|---|
| Warhead | Cone | 3 | 1.5 | 5 |
| Warhead | Cone plus cylinder | 3 | 2 | 8 |
| Heavy decoy | Cone | 3 | 1.5 | 10 |
| Heavy decoy | Cone plus cylinder | 3 | 1.5 | 12 |
| Light decoy | Cone | 3 | 2 | 15 |
| Light decoy | Cone plus cylinder | 3 | 1.8 | 20 |
| Debris | Cylinder | Tumbling: ωt = 2 Hz | — | 90 |
| Debris | Cylinder | Tumbling: ωt = 4 Hz | — | 90 |
Table 3. List of radar signal features.
| Radar | Signal Type | Signal Features | Total |
|---|---|---|---|
| R1 | RCS time series | Mean, standard deviation, kurtosis, skewness, second-order central moment, third-order central moment, range, energy spectrum entropy, coefficient of variation, standard mean difference | 10 |
| R2 | RCS time series | Mean, standard deviation, kurtosis, skewness, second-order central moment, third-order central moment, range, energy spectrum entropy, coefficient of variation, standard mean difference, period | 11 |
| R3 | HRRP time series | Number of scattering points, skewness, target length, SVD principal component, entropy, echo power, irregularity, length change range, length change period | 9 |
| R4 | HRRP time series | Number of scattering points, skewness, target length, SVD principal component, entropy, echo power, irregularity, length change range, length change period, precession frequency | 10 |
| R5 | HRRP time series | Number of scattering points, skewness, target length, SVD principal component, entropy, echo power, irregularity, length change range, length change period, precession frequency | 10 |
Table 4. Mean accuracy of independent sensor and fusion model with PCA (in %).
(R1–R5: single sensors; MDS, MBAYES, MMV, MWTA: proposed fusion model.)

| Classifier | R1 | R2 | R3 | R4 | R5 | MDS | MBAYES | MMV | MWTA |
|---|---|---|---|---|---|---|---|---|---|
| DT | 66.52 | 89.51 | 77.23 | 83.71 | 83.71 | 79.02 | 95.76 | 96.43 | 94.87 |
| kNN | 68.53 | 89.29 | 83.71 | 86.38 | 81.7 | 94.87 | 97.99 | 98.88 | 95.09 |
| NB | 64.06 | 69.87 | 64.51 | 62.72 | 69.42 | 80.8 | 91.52 | 88.62 | 75.45 |
| Bagging | 73.88 | 91.07 | 85.94 | 87.95 | 90.18 | 98.88 | 97.99 | 99.55 | 95.54 |
| RFB | 75.89 | 91.07 | 83.71 | 89.06 | 90.18 | 98.88 | 98.21 | 99.11 | 96.88 |
| AdaBoostM2 | 59.38 | 75.67 | 49.55 | 63.39 | 72.32 | 84.82 | 94.42 | 89.06 | 79.69 |
| LPB | 66.96 | 99.55 | 87.05 | 93.97 | 97.54 | 99.78 | 100 | 100 | 98.66 |
| RSB | 66.96 | 95.76 | 69.87 | 72.1 | 92.63 | 98.21 | 95.98 | 98.21 | 94.42 |
| Stacking | 67.19 | 91.96 | 81.47 | 89.73 | 83.04 | 97.54 | 97.99 | 97.99 | 96.88 |
| Average | 67.71 | 88.19 | 75.89 | 81 | 84.52 | 92.53 | 96.65 | 96.43 | 91.94 |
Table 5. Mean accuracy of independent sensor and fusion model with ICA (in %).
(R1–R5: single sensors; MDS, MBAYES, MMV, MWTA: proposed fusion model.)

| Classifier | R1 | R2 | R3 | R4 | R5 | MDS | MBAYES | MMV | MWTA |
|---|---|---|---|---|---|---|---|---|---|
| DT | 67.19 | 92.19 | 82.81 | 84.15 | 89.73 | 78.35 | 97.1 | 98.44 | 97.54 |
| kNN | 66.29 | 98.66 | 89.06 | 89.96 | 91.52 | 96.88 | 99.78 | 99.55 | 98.44 |
| NB | 60.94 | 80.36 | 65.18 | 72.77 | 85.04 | 88.84 | 95.09 | 95.54 | 85.49 |
| Bagging | 73.44 | 96.88 | 85.94 | 91.07 | 92.41 | 99.78 | 98.66 | 99.55 | 97.32 |
| RFB | 75.45 | 99.11 | 89.51 | 91.96 | 90.85 | 100 | 98.88 | 99.78 | 98.21 |
| AdaBoostM2 | 58.04 | 87.05 | 60.27 | 73.44 | 82.81 | 84.82 | 96.21 | 85.04 | 89.51 |
| LPB | 52.9 | 98.44 | 78.57 | 76.34 | 86.61 | 99.33 | 97.32 | 99.78 | 93.75 |
| RSB | 56.92 | 86.61 | 73.21 | 63.84 | 83.93 | 96.65 | 93.3 | 97.54 | 75 |
| Stacking | 66.74 | 98.21 | 85.04 | 89.51 | 89.51 | 99.11 | 99.33 | 98.88 | 99.33 |
| Average | 64.21 | 93.06 | 78.84 | 81.45 | 88.05 | 93.75 | 97.3 | 97.12 | 92.73 |
Table 6. F1-scores of four classes in the proposed model with PCA (in %).
(R1–R5: single sensors; MDS, MBAYES, MMV, MWTA: proposed fusion model.)

| Classifier | Class | R1 | R2 | R3 | R4 | R5 | MDS | MBAYES | MMV | MWTA |
|---|---|---|---|---|---|---|---|---|---|---|
| DT | Warhead | 47.75 | 86.32 | 75.21 | 68.49 | 72.17 | 69.59 | 92.64 | 93.33 | 92.04 |
| | Heavy decoy | 72.25 | 80.36 | 70.39 | 80.53 | 85.58 | 78.76 | 93.58 | 96.33 | 92.24 |
| | Light decoy | 64.6 | 92.45 | 65.7 | 85.46 | 77.53 | 75.79 | 96.86 | 96.07 | 95.15 |
| | Debris | 81.45 | 99.11 | 97.3 | 100 | 100 | 94.93 | 100 | 100 | 100 |
| kNN | Warhead | 44.21 | 87.61 | 84.43 | 74.11 | 69.16 | 90.76 | 96.43 | 97.78 | 92.73 |
| | Heavy decoy | 75.52 | 81.61 | 76.02 | 83.84 | 86.49 | 94.93 | 97.76 | 98.64 | 93.69 |
| | Light decoy | 67.3 | 90.83 | 75 | 88.18 | 71.19 | 94.01 | 97.78 | 99.11 | 95.15 |
| | Debris | 81.1 | 96.94 | 98.65 | 99.55 | 100 | 100 | 100 | 100 | 98.68 |
| NB | Warhead | 24.49 | 71.37 | 50.75 | 40.37 | 60.14 | 75.94 | 90.5 | 86.54 | 64.39 |
| | Heavy decoy | 71.08 | 53.54 | 54.04 | 51.95 | 73.96 | 79.7 | 86.96 | 86.17 | 69.27 |
| | Light decoy | 60.77 | 58.75 | 55.14 | 58.3 | 44.12 | 77.6 | 88.69 | 85.57 | 71.77 |
| | Debris | 79.72 | 92.95 | 97.78 | 100 | 100 | 87.84 | 100 | 95.73 | 94.12 |
| Bagging | Warhead | 49 | 89.18 | 87.76 | 75.7 | 82.19 | 97.78 | 97.76 | 99.11 | 92.66 |
| | Heavy decoy | 80.83 | 83.04 | 79.82 | 83.12 | 92.59 | 98.21 | 96.83 | 99.11 | 94.12 |
| | Light decoy | 71.7 | 93.02 | 77.57 | 92.51 | 86.08 | 99.55 | 97.37 | 100 | 95.69 |
| | Debris | 89.34 | 99.11 | 98.2 | 100 | 100 | 100 | 100 | 100 | 99.56 |
| RFB | Warhead | 57.71 | 89.47 | 84.17 | 77.93 | 82.51 | 98.21 | 97.35 | 98.2 | 95.02 |
| | Heavy decoy | 79.17 | 83.33 | 75 | 84.62 | 92.66 | 99.11 | 96.83 | 99.11 | 95.54 |
| | Light decoy | 74.77 | 92.52 | 76.99 | 93.33 | 85.71 | 98.21 | 98.67 | 99.11 | 97.35 |
| | Debris | 88.8 | 99.11 | 98.2 | 100 | 100 | 100 | 100 | 100 | 99.56 |
| AdaBoostM2 | Warhead | 15.38 | 67.2 | 33.36 | 55.98 | 65.09 | 80.19 | 92.05 | 86.27 | 58.39 |
| | Heavy decoy | 63.93 | 50.59 | 48.12 | 24.64 | 90.74 | 78.23 | 90.65 | 82.35 | 78.07 |
| | Light decoy | 56.72 | 86.42 | 50.68 | 61.7 | 86.78 | 81.69 | 94.98 | 88.26 | 87.74 |
| | Debris | 78.85 | 90.32 | 96.04 | 100 | 100 | 99.55 | 100 | 100 | 88.19 |
| LPB | Warhead | 55.23 | 100 | 92.24 | 88.61 | 96.86 | 100 | 100 | 100 | 97.74 |
| | Heavy decoy | 62.98 | 99.11 | 76.6 | 87.74 | 96.04 | 99.55 | 100 | 100 | 97.78 |
| | Light decoy | 54.08 | 99.11 | 79.82 | 99.55 | 97.3 | 99.56 | 100 | 100 | 99.11 |
| | Debris | 94.69 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| RSB | Warhead | 47.78 | 96.83 | 68.07 | 57 | 91.74 | 98.65 | 97.3 | 98.65 | 94.27 |
| | Heavy decoy | 67.46 | 91.7 | 61.73 | 66.08 | 91.89 | 96.4 | 92.17 | 96.4 | 92.58 |
| | Light decoy | 61.69 | 94.59 | 51.61 | 64.71 | 87.07 | 97.8 | 94.55 | 97.8 | 92.02 |
| | Debris | 83.65 | 100 | 95.2 | 100 | 100 | 100 | 100 | 100 | 98.68 |
| Stacking | Warhead | 44.55 | 90.21 | 81.39 | 79.64 | 71.86 | 96.86 | 96.89 | 97.32 | 95.15 |
| | Heavy decoy | 72.73 | 84.16 | 72.46 | 84.21 | 85.58 | 95.54 | 96.83 | 96.43 | 96.43 |
| | Light decoy | 70 | 93.52 | 74.26 | 95.07 | 75.22 | 97.78 | 98.23 | 98.21 | 95.93 |
| | Debris | 79.82 | 100 | 97.74 | 100 | 100 | 100 | 100 | 100 | 100 |
Table 7. F1-scores of four classes in the proposed model with ICA (in %).
(R1–R5: single sensors; MDS, MBAYES, MMV, MWTA: proposed fusion model.)

| Classifier | Class | R1 | R2 | R3 | R4 | R5 | MDS | MBAYES | MMV | MWTA |
|---|---|---|---|---|---|---|---|---|---|---|
| DT | Warhead | 49.06 | 88.45 | 84.75 | 71.36 | 86.22 | 68.06 | 95.81 | 97.78 | 96.89 |
| | Heavy decoy | 70.74 | 82.76 | 72.25 | 76.39 | 86.73 | 74 | 95.15 | 97.3 | 95.45 |
| | Light decoy | 67.62 | 97.25 | 75.6 | 88.5 | 85.97 | 78.76 | 97.39 | 98.67 | 97.8 |
| | Debris | 79.18 | 100 | 98.21 | 100 | 100 | 95.81 | 100 | 100 | 100 |
| kNN | Warhead | 43.69 | 98.64 | 92.77 | 79.25 | 87.18 | 94.12 | 99.56 | 99.56 | 98.67 |
| | Heavy decoy | 72.5 | 97.76 | 79.61 | 86.96 | 90.41 | 97.72 | 99.55 | 99.11 | 97.74 |
| | Light decoy | 67.69 | 98.68 | 83.12 | 93.04 | 88.58 | 95.81 | 100 | 99.55 | 97.78 |
| | Debris | 77.65 | 99.56 | 100 | 100 | 100 | 100 | 100 | 100 | 99.56 |
| NB | Warhead | 12.5 | 77.98 | 51.58 | 47.73 | 74.07 | 86.67 | 93.1 | 91.87 | 73.96 |
| | Heavy decoy | 71.16 | 57.71 | 53.66 | 64.44 | 91.15 | 89.45 | 92.31 | 94.42 | 83.76 |
| | Light decoy | 62.43 | 84.54 | 56.54 | 75.22 | 74.78 | 87.25 | 94.98 | 95.65 | 83.61 |
| | Debris | 71.15 | 100 | 98.25 | 100 | 100 | 91.43 | 100 | 100 | 99.11 |
| Bagging | Warhead | 53.81 | 95.65 | 87.8 | 82.41 | 86.88 | 100 | 97.78 | 99.55 | 95.65 |
| | Heavy decoy | 77.97 | 93.64 | 77.98 | 87.93 | 92.04 | 99.55 | 97.3 | 99.11 | 96.4 |
| | Light decoy | 69.91 | 98.2 | 80 | 93.75 | 90.67 | 99.56 | 99.56 | 99.56 | 97.27 |
| | Debris | 88.61 | 100 | 97.3 | 100 | 100 | 100 | 100 | 100 | 100 |
| RFB | Warhead | 53.47 | 98.18 | 89.54 | 83.64 | 82.95 | 100 | 97.82 | 99.55 | 96.52 |
| | Heavy decoy | 81.36 | 98.25 | 82.51 | 86.84 | 93.09 | 100 | 97.72 | 99.56 | 96.8 |
| | Light decoy | 75.68 | 100 | 86.12 | 97.32 | 87.39 | 100 | 100 | 100 | 99.55 |
| | Debris | 88.14 | 100 | 99.56 | 100 | 100 | 100 | 100 | 100 | 100 |
| AdaBoostM2 | Warhead | 14.06 | 81.75 | 60.93 | 39.74 | 65.06 | 79.45 | 95.58 | 80.37 | 84.16 |
| | Heavy decoy | 63.79 | 71.84 | 21.52 | 63.29 | 90 | 73.39 | 92.31 | 73.39 | 84.08 |
| | Light decoy | 63.59 | 94.39 | 50.64 | 79.57 | 74.13 | 88.35 | 96.89 | 87.8 | 90.99 |
| | Debris | 67.71 | 100 | 96.46 | 100 | 100 | 99.55 | 100 | 100 | 98.68 |
| LPB | Warhead | 38.76 | 96.77 | 83.7 | 61.59 | 78.51 | 98.64 | 95.32 | 99.55 | 89.17 |
| | Heavy decoy | 49.5 | 96.97 | 59.62 | 60.1 | 84.91 | 98.68 | 94.39 | 99.56 | 88.12 |
| | Light decoy | 51.1 | 100 | 72.73 | 85.71 | 83.49 | 100 | 99.55 | 100 | 97.39 |
| | Debris | 75.6 | 100 | 97.72 | 100 | 100 | 100 | 100 | 100 | 100 |
| RSB | Warhead | 37.62 | 83 | 72.1 | 51.14 | 78.51 | 95.15 | 93.02 | 96 | 56.25 |
| | Heavy decoy | 62.34 | 73.43 | 59.26 | 51.16 | 86.49 | 95.5 | 88.31 | 96.83 | 72.2 |
| | Light decoy | 53.39 | 90.57 | 67.94 | 65.79 | 71.22 | 95.96 | 92.04 | 97.35 | 79.85 |
| | Debris | 71.07 | 99.11 | 91.6 | 85.47 | 98.68 | 100 | 100 | 100 | 83.58 |
| Stacking | Warhead | 46.85 | 99.1 | 87.22 | 78.7 | 83.76 | 98.65 | 99.11 | 98.21 | 98.67 |
| | Heavy decoy | 73.68 | 96.52 | 77.48 | 85.71 | 88.48 | 98.65 | 98.64 | 98.2 | 99.1 |
| | Light decoy | 69.03 | 97.27 | 77.88 | 93.1 | 85.97 | 99.56 | 99.56 | 99.11 | 99.56 |
| | Debris | 77.27 | 100 | 97.74 | 100 | 100 | 99.56 | 100 | 100 | 100 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Wei, N.; Zhang, L.; Zhang, X. A Weighted Decision-Level Fusion Architecture for Ballistic Target Classification in Midcourse Phase. Sensors 2022, 22, 6649. https://doi.org/10.3390/s22176649

