Article

A Space Infrared Dim Target Recognition Algorithm Based on Improved DS Theory and Multi-Dimensional Feature Decision Level Fusion Ensemble Classifier

1 Department of Aeronautics and Astronautics, Fudan University, Shanghai 200433, China
2 Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai 200083, China
3 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
4 University of Chinese Academy of Sciences, Beijing 100049, China
5 Innovation Academy for Microsatellites of Chinese Academy of Sciences, Shanghai 201304, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(3), 510; https://doi.org/10.3390/rs16030510
Submission received: 16 December 2023 / Revised: 24 January 2024 / Accepted: 26 January 2024 / Published: 29 January 2024

Abstract:
Space infrared dim target recognition is an important application of space situational awareness (SSA). Due to the weak observability and lack of geometric texture of the target, it may be unreliable to rely only on grayscale features for recognition. In this paper, an intelligent information decision-level fusion method for target recognition which takes full advantage of the ensemble classifier and Dempster–Shafer (DS) theory is proposed. To deal with the problem that DS produces counterintuitive results when evidence conflicts, a contraction–expansion function is introduced to modify the body of evidence to mitigate conflicts between pieces of evidence. In this method, preprocessing and feature extraction are first performed on the multi-frame dual-band infrared images to obtain the features of the target, which include long-wave radiant intensity, medium–long-wave radiant intensity, temperature, emissivity–area product, micromotion period, and velocity. Then, the radiation intensities are fed to the random convolutional kernel transform (ROCKET) architecture for recognition. For the micromotion period feature, a support vector machine (SVM) classifier is used, and the remaining categories of features are input into the long short-term memory network (LSTM) for recognition, respectively. The posterior probabilities corresponding to each category, which are output by each classifier, are used to construct the basic probability assignment (BPA) function of the DS. Finally, the discrimination of the space target category is implemented according to improved DS fusion rules and decision rules. Continuous multi-frame infrared images of six flight scenes are used to evaluate the effectiveness of the proposed method. The experimental results indicate that the recognition accuracy of the proposed method can reach 93% under a strong noise level (signal-to-noise ratio of 5). Its performance outperforms single-feature recognition and other benchmark algorithms based on DS theory, which demonstrates that the proposed method can effectively enhance the recognition accuracy of space infrared dim targets.

Graphical Abstract

1. Introduction

Space dim target recognition using infrared sensors is a challenging task for SSA systems [1]. The detection and recognition of space targets directly affect the security of the space environment, and early recognition of targets enables the decision center to take proper initiatives quickly and efficiently to secure space stations and satellites [2,3]. In the target detection stage, researchers have conducted a large amount of research [4,5,6,7], but the recognition stage remains under-explored. Due to the long distance between the infrared sensors on the satellite platform and the target, and because the size of the space target is smaller than the instantaneous field of view of the detector, the space target appears as a small dot on the infrared image plane, which lacks shape and attitude information, leaving few features to exploit [8]. Additionally, because the target imaging signal is weak, the feature information of the target is seriously affected by interference such as noise, resulting in poor recognition when relying on only a single feature.
In recent years, space infrared dim target recognition has been initially studied, and effective results have been presented. For example, Silberman et al. [9] extracted the mean and variance statistical features of the radiation sequence of the target, constructed a classifier model based on a parameterization method, and thereby realized the classification and recognition of space targets. Gu et al. [10] constructed features such as the length–width ratio, the brightness of the centroid, the pixel area, and the invariant moment based on the imaging area of the target and combined these with DS fusion theory to discriminate the targets. Dai et al. selected radiation intensity as the initial recognition feature and used an Adaboost classifier to achieve initial recognition of the target, after which the temperature feature was used to discriminate the unrecognized targets again [11]. Zhang et al. simulated the irradiance signal of the target based on the Bhattacharyya optical decoy evaluation (BODE) model and extracted the temperature and effective radiation area of the target, which were input into a Gaussian particle swarm optimization probabilistic neural network (GPSO-PNN) to recognize four types of targets [12]. Ma extracted only the radiation intensity of each target as the feature and combined it with a random projection recurrent neural network (R-RNN) architecture to achieve target recognition [13]. However, since the gray value of the target and the radiation can be considered linearly related within the normal operating range of the infrared detector, this still essentially uses the infrared radiation of the target for recognition. These papers [14,15] also used only infrared radiation as input data for target recognition; the difference lies in the recognition algorithms.
From the above, the features extracted for space target recognition mainly fall into two categories: (1) Image feature parameters of the target, such as the length–width ratio, energy concentration, etc. These features can be extracted very quickly and require few computational resources, but they cannot reflect the intrinsic physical information of the target. Once the parameters of the imaging system of the infrared detector change, the feature parameters of the target also change, invalidating the training dataset established earlier for the classifier. (2) Physical features of the target, such as temperature, radiation intensity, etc., which reflect changes in the target's properties and constitute its core data. However, some features can only be obtained after model calculation, so their extraction is slower than that of image feature parameters. Moreover, most existing studies focus on using a single feature or two features of the target for recognition, and the comprehensive use of multiple features is rare. When only a single feature is used to describe the state of the whole target, the robustness of discrimination is low, and if the feature is unstable during the motion, the recognition performance is often unsatisfactory. Therefore, it is extremely valuable to extract complete, comprehensive feature information with strong discriminating ability from the images produced by infrared detectors and to comprehensively utilize these features to improve the recognition performance for space targets.
If multidimensional effective features are to be obtained, a preliminary analysis of the radiation and motion state of the space target during flight must be performed. In fact, because the position of the target changes relative to the Sun, Earth, and other radiation sources, the solar radiation and Earth radiation absorbed by the target are also dynamic. Additionally, there are differences in the material, infrared emissivity, shape, and size of each target, and these factors lead to differences in the trend of the target's temperature change, which provides a potential characteristic parameter for target recognition. In addition to temperature changes, the target will experience micromotion during flight due to external disturbances [16]. The target will either precess or tumble until entering the atmosphere, depending on whether it is regular in shape and has attitude controllers. The micromotion produces differences in the emissivity–area product along the line of sight of the infrared detector, in addition to variability in the micromotion period, which gives these two features recognition potential. According to Planck's law, the temperature, the emissivity–area product, and the micromotion period determine the variation of the radiation intensity of the target, which makes the radiation intensity the most critical reference feature. Moreover, during the splitting of a space target, the newly generated targets have lighter masses, and the velocities of the targets differ according to the law of momentum conservation; therefore, velocity may also provide a useful reference feature for target recognition [17]. Notably, the irradiance of the target can be inverted from the grayscale values of the image through the calibration relation of the detector and, combined with the distance between the detector and the target, yields the radiation intensity of the target, so that the multi-frame images of the space target are converted into an infrared radiation intensity time series. Additionally, the relationship between the coordinates of the target's imaging points and its space locations can be obtained via matrix transformations, which provides the underlying theory for velocity extraction based on the pixel coordinates of the target. Therefore, in combination with the relevant physical laws, the multi-dimensional features of the target can be extracted from the infrared images.
Data fusion is the integrated processing of heterogeneous information from multiple sources to make a complete and accurate assessment of the state of the observed object, which helps us make full use of the multidimensional feature information of the space target [18]. It has been widely developed in several fields, such as machine fault diagnosis [19], health discrimination [20], emotion understanding [21], and target intention judgment [22]. However, in practical scenarios, the acquisition of feature data is easily disturbed by noise and other factors, making the feature data of targets uncertain, and how this uncertainty should be handled remains an open problem. To address it, various theories have been proposed, such as fuzzy set theory [23,24], rough set theory [25], Z-number theory [26], D-number theory [27], Bayesian theory [28,29], and Dempster–Shafer theory [30,31].
Fuzzy sets can describe the fuzziness hidden in uncertain real-world scenarios [23]. Rough set theory plays an important role in simplifying information processing, studying representation learning, and discovering imprecise information [25]. However, quantifying uncertainty remains a huge challenge for fuzzy set and rough set theories. Bayesian theory is a mathematical model based on probabilistic inference that can effectively deal with uncertainty and incompleteness; however, it presupposes prior knowledge of events [28]. Dempster–Shafer theory, an efficient method for data fusion, requires weaker conditions than Bayesian theory and can operate without any prior information regarding event probabilities. It also provides evidence fusion rules that facilitate quantifying and handling the uncertainty of events. However, when the fused evidence is highly contradictory, DS theory may produce counterintuitive results, leading to erroneous inferences and conclusions. To address this problem, a large number of research results have emerged, which can be divided into two categories. One is to modify Dempster's combination rule; for example, Yager [32] held that conflicting evidence cannot provide favorable support for the final decision and proposed a new fusion rule to reassign the conflicting mass. Dubois and Prade proposed a disjunctive combination rule that reassigns the conflicting part of each piece of evidence to the union set [33]. However, modifying the fusion rule often destroys the associativity and commutativity of the DS combination rule. The other is to modify the body of evidence. This approach does not change the fusion rule, so the original good properties are retained; it includes Murphy's method [34], Deng et al.'s method [35], Sun et al.'s method [36], and others. Murphy proposed modifying and averaging the initial bodies of evidence before performing fusion. Deng et al. used an improved averaging method based on the distance of evidence to combine mass functions. Sun et al. used a weighting and averaging method to reassign conflicting values, introduced an evidence credibility metric, and successfully alleviated the problem of conflict between pieces of evidence.
Classifier fusion is an effective strategy for improving classification performance on complex pattern recognition problems. In practice, the classifiers to be combined may have different reliabilities, and an appropriate ensemble classifier plays a crucial role in achieving optimal classification performance during fusion [37]. Ensemble classifiers are widely applied in practical scenarios. For instance, Mai et al. used different classifiers learned from three features and combined them using convolutional layers to achieve fruit detection [38]. Kaur et al. proposed a classifier fusion strategy that aggregates the predictions of three classifiers (SVM, logistic regression, and K-nearest neighbor) through majority voting [39]. Bhowal et al. introduced an alternative low-complexity method for computing fuzzy measures and applied it to Choquet integrals to fuse deep learning classifiers from different application domains [40]. Furthermore, the aforementioned methods [32,33,34,35,36] can also be considered classifier fusion, as they modify DS evidence theory by formulating different fusion rules to fuse the evidence produced by different classifiers.
To comprehensively utilize the multidimensional features and solve the recognition problem of space infrared dim targets, this paper combines multiple classifiers with DS theory and proposes an ensemble classifier with an improved DS fusion rule recognition model. Specifically, the ensemble classifier performs preliminary classification and recognition based on the extracted multidimensional features to obtain the BPA of each feature's evidence, and the improved DS fusion rule modifies the current body of evidence to mitigate conflicts among the feature evidence.
The main contributions of this paper are as follows:
(1) The infrared radiation intensity model and imaging model of the space target are developed. Firstly, the characteristics of external radiation, micromotion, temperature, and projected area change are analyzed to derive the radiation model of the space target. The imaging model is established according to the relationship between the target position and the imaging point of the infrared detector. Thus, multi-frame infrared images of space targets can be easily acquired.
(2) An ensemble classifier based on ROCKET, LSTM, and SVM is constructed, and corresponding conversion methods are designed to convert the probability outputs and the classification accuracies of the three classifiers into the corresponding BPAs and weight values, respectively.
(3) A contraction–expansion function for the BPA is proposed, which scales the current BPA value up or down according to whether the value exceeds a threshold, thereby achieving the modification of the evidence.
(4) Through testing and evaluation on space target data from multiple scenes, it is verified that the recognition accuracy of the proposed method improves on that of any single dimension; in addition, it significantly improves recognition performance compared to other DST-based benchmark algorithms in strong noise scenes.
The rest of this paper is organized as follows. Section 2 briefly describes the infrared radiation intensity model and the imaging model of the space target. In Section 3, ROCKET, LSTM, SVM, and DST are briefly introduced. The main framework of our proposed algorithm is presented in Section 4. Section 5 presents the experimental results and performance comparison. Section 6 concludes the paper and provides an outlook on future work.

2. Infrared Radiation Intensity and Target Imaging Modeling

Since the flight region of the targets is outside the atmosphere and there is no prior information about their attributes, the real infrared radiation information of the objects is difficult to collect. Therefore, in this paper, the factors affecting infrared radiation, such as temperature, material emissivity, shape, and micromotion, are considered comprehensively to establish a simulated infrared radiation model. Additionally, considering that the positions of the targets and the infrared detector are changing, the image plane coordinates of the targets on the infrared detector vary dynamically, which affects the speed extraction of the targets. Therefore, in this section, the infrared radiation intensity model and the imaging model of the targets are established.

2.1. Micromotion Model

Some targets split during flight, producing multiple shapes that pose threats to satellites and space stations. Most of these shapes are ball-base cones, flat-base cones, cone–cylinders, cylinders, spheres, and arc debris. Since axisymmetric shapes are representative and the variation law of the sphere is simple, this paper focuses on the four axisymmetric target shapes. In addition, during the splitting process, the targets are subject to interference moments in addition to the gravitational force, which produce micromotion [41]. In general, the conventional forms of micromotion are spinning, coning, and tumbling. If a target has a regular shape and is equipped with an attitude control device, precession occurs, which can be decomposed into a spinning component and a coning component.
In contrast, a target lacking a control device will tumble around its tumbling axis at a certain angular velocity. The micromotion not only changes the projected area of the target in the line of sight (LOS) of the infrared detector over a short period but also changes the temperature of the target's surface elements. Additionally, the micromotion states of differently shaped targets are different. Therefore, it is important to model the micromotion law of space targets and analyze its influence on infrared radiation in order to grasp the multi-dimensional characteristic information of the targets.
Assuming the cone–cylinder is coning and spinning during the flight, the local coordinate system is $(x,y,z)$, the reference coordinate system is $(X,Y,Z)$, and the point $o$ ($O$) is the common origin of the two systems, as shown in Figure 1, which depicts the motion of the target from $t_0$ to $t_0+t$. The azimuth and elevation angles of the precession axis are $\alpha_1$ and $\beta_1$ at the initial time $t_0$, $\mathbf{LOS}$ is the sight vector from the detector to the target centroid in the reference coordinate system, and $w_s$ and $w_c$ are the angular velocities of the spinning and the coning, respectively.
The first step is to obtain the Euler rotation matrix for vector conversion between the coordinate systems. We assume that the matrix rotation order is $Z$–$X$–$Z$ with Euler angles $(\psi,\theta,\zeta)$. The Euler rotation matrix $R_{euler}$ from the local coordinate system to the reference coordinate system is:
$$R_{euler}=\begin{bmatrix}\cos\psi & -\sin\psi & 0\\ \sin\psi & \cos\psi & 0\\ 0 & 0 & 1\end{bmatrix}\cdot\begin{bmatrix}1 & 0 & 0\\ 0 & \cos\theta & -\sin\theta\\ 0 & \sin\theta & \cos\theta\end{bmatrix}\cdot\begin{bmatrix}\cos\zeta & -\sin\zeta & 0\\ \sin\zeta & \cos\zeta & 0\\ 0 & 0 & 1\end{bmatrix}$$
Then, the micromotion rotation matrices are calculated. For a precessing target, the spinning axis is taken to coincide with the $z$-axis of the local coordinate system, so the spinning angular velocity vector in $(x,y,z)$ is $\mathbf{w}=(0;0;w_s)$. According to the Rodrigues formula, the rotation matrices of spinning and coning can be calculated as:
$$R_{spinning}(t)=I+E_0\sin(w_s t)+E_0^2\left(1-\cos(w_s t)\right)$$
$$R_{coning}(t)=I+E_1\sin(w_c t)+E_1^2\left(1-\cos(w_c t)\right)$$
where $E_0$ and $E_1$ are skew-symmetric matrices. By multiplying the three matrices, the global rotation matrix can be mathematically expressed as:
$$R(t)=R_{coning}(t)\cdot R_{spinning}(t)\cdot R_{euler}$$
Similarly, let $\alpha_2$, $\beta_2$ be the azimuth and elevation angles of the tumbling axis, respectively. The tumbling rotation matrix is:
$$R_{tumbling}(t)=I+E_2\sin(w_t t)+E_2^2\left(1-\cos(w_t t)\right)$$
where $E_2$ is the corresponding skew-symmetric matrix. Based on the above, the normal vector $\mathbf{n}_p$ of any target surface facet in $(x,y,z)$ can be translated to the normal vector $\mathbf{n}_p'$ in $(X,Y,Z)$ by
$$\mathbf{n}_p'=R(t)\cdot\mathbf{n}_p=\begin{cases}R_{coning}(t)\cdot R_{spinning}(t)\cdot R_{euler}\cdot\mathbf{n}_p, & \text{precession}\\ R_{tumbling}(t)\cdot R_{euler}\cdot\mathbf{n}_p, & \text{tumbling}\end{cases}$$
The above equations constitute the rotation matrix of the micromotion model, relating the vectors of the target facets in the different coordinate systems. They allow the normal vectors of the surface facets in the reference coordinate system $(X,Y,Z)$ to be calculated at any moment, which provides the mathematical basis for computing the projected area of the target in the LOS of the infrared detector.
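To make the micromotion model concrete, the following minimal NumPy sketch builds the Z–X–Z Euler matrix and the Rodrigues rotation matrices defined above and composes them into the precession rotation $R(t)$ applied to a facet normal. The axis directions, angular velocities, and Euler angles are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def skew(axis):
    """Skew-symmetric matrix of a unit rotation axis (for the Rodrigues formula)."""
    x, y, z = axis / np.linalg.norm(axis)
    return np.array([[0, -z, y],
                     [z, 0, -x],
                     [-y, x, 0]])

def rodrigues(axis, omega, t):
    """Rotation by angle omega*t about `axis`: R = I + E sin(wt) + E^2 (1 - cos(wt))."""
    E = skew(axis)
    wt = omega * t
    return np.eye(3) + E * np.sin(wt) + E @ E * (1.0 - np.cos(wt))

def euler_zxz(psi, theta, zeta):
    """Z-X-Z Euler rotation matrix from the local to the reference frame."""
    cz = lambda a: np.array([[np.cos(a), -np.sin(a), 0],
                             [np.sin(a),  np.cos(a), 0],
                             [0, 0, 1]])
    cx = lambda a: np.array([[1, 0, 0],
                             [0, np.cos(a), -np.sin(a)],
                             [0, np.sin(a),  np.cos(a)]])
    return cz(psi) @ cx(theta) @ cz(zeta)

def precession_matrix(t, w_s, w_c, cone_axis, psi, theta, zeta):
    """R(t) = R_coning(t) . R_spinning(t) . R_euler, spinning about the local z-axis."""
    R_spin = rodrigues(np.array([0.0, 0.0, 1.0]), w_s, t)
    R_cone = rodrigues(cone_axis, w_c, t)
    return R_cone @ R_spin @ euler_zxz(psi, theta, zeta)

# Example: a facet normal in the local frame mapped to the reference frame at t = 2 s.
n_p = np.array([0.0, 0.0, 1.0])
R_t = precession_matrix(2.0, w_s=2 * np.pi, w_c=0.5 * np.pi,
                        cone_axis=np.array([0.0, 1.0, 1.0]),
                        psi=0.1, theta=0.2, zeta=0.3)
print(R_t @ n_p)
```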

2.2. Infrared Radiation Intensity Model

Based on Planck’s Law, the infrared radiation intensity of a target is mainly determined by temperature, material emissivity, projected area, and so on. Therefore, it is imperative to analyze the temperature state of the target. The temperature change of a space target is mainly influenced by external radiation, heat conduction, and internal heat source, as shown in Figure 2.
To analyze the temperature distribution of the target more finely, the surface of the target is divided into facets, and the effect of micromotion on the temperature of the facets is considered. The following equations present the external radiation absorbed by the facets:
$$q_{i1}(t)=\alpha_i S_0 A_i\cdot\max\left(\cos\left\langle R(t)\mathbf{n}_i,\mathbf{n}_{sun}\right\rangle,0\right)$$
$$q_{i2}(t)=\alpha_i E_0 A_i F_{2i}\left(\cos\left\langle R(t)\mathbf{n}_i,\mathbf{oe}_i\right\rangle,t\right)$$
$$q_{i3}(t)=\alpha_i\rho S_0 A_i F_{3i}\left(\max\left(\cos\left\langle R(t)\mathbf{n}_i,\mathbf{n}_{sun}\right\rangle,0\right),\cos\left\langle\mathbf{oe}_i,\mathbf{n}_{sun}\right\rangle,t\right)$$
where $q_{\cdot1}$, $q_{\cdot2}$, $q_{\cdot3}$ are the solar radiation, the Earth radiation, and the solar radiation reflected from the Earth absorbed by the facet, respectively; $\alpha$ is the absorption coefficient; $S_0$ is the solar constant; $A$ is the area of the facet; $R$ is the micromotion rotation matrix; $\mathbf{n}$ is the facet normal vector; and $\mathbf{n}_{sun}$ is the solar radiation vector. $F_{2i}(\cdot)$, $F_{3i}(\cdot)$ are the angle coefficients of radiation, and $\rho$ is the average albedo of the Earth. According to the heat balance equation, the temperature of every facet satisfies the following:
$$m_i c_i\frac{\partial T_i(t)}{\partial t}+A_i\epsilon_i\sigma T_i^4(t)=\sum_{j=1}^{N}K_{i,j}\left(T_j(t)-T_i(t)\right)+q_{i1}(t)+q_{i2}(t)+q_{i3}(t)+\alpha_p p_i$$
where $N$ is the number of facets. The relation is linearized, and the temperature $T$ of the target can be calculated using the Gauss–Seidel method. At any time, the area observed by the infrared detector is always smaller than the total surface area of the target. Therefore, it is necessary to calculate the projected area of the target along the LOS using the micromotion model, as follows:
$$A_{project}(t)=\sum_{i\in S}A_i\cdot\left|\frac{\left(R(t)\mathbf{n}_i\right)\cdot\mathbf{LOS}}{\left\|R(t)\mathbf{n}_i\right\|\left\|\mathbf{LOS}\right\|}\right|,\qquad S=\left\{i\,\middle|\,\left(R(t)\mathbf{n}_i\right)\cdot\mathbf{LOS}<0,\ 1\le i\le N,\ i\in\mathbb{Z}\right\}$$
The models of temperature and projected area have now been given. However, one important factor must still be considered: the reflected radiation of the target, which affects the magnitude of the radiation intensity at the detector pupil. This component affects the accuracy of extracting the temperature and the emissivity–area product, especially for targets with low emissivity. In this paper, only three types of reflected radiation are considered: surface-reflected solar radiation $L_{rs}$, surface-reflected Earth radiation $L_{re}$, and surface-reflected solar radiation reflected from the Earth $L_{res}$. The corresponding radiances are:
$$L_{rs}^i(t)=f_r S_0\max\left(\cos\left\langle R(t)\mathbf{n}_i,\mathbf{n}_{sun}\right\rangle,0\right)$$
$$L_{re}^i(t)=f_r E_0 F_{2i}(t)$$
$$L_{res}^i(t)=f_r\rho S_0 F_{3i}(t)$$
where $f_r$ is the bidirectional reflectance distribution coefficient. The self-radiance of a facet is calculated as follows:
$$L_{self}^i(T_i,t)=\frac{c_1\epsilon_i}{\lambda^5\left[\exp\left(\frac{c_2}{\lambda T_i}\right)-1\right]}$$
where $c_1$, $c_2$ are the radiation constants. Assuming that $B$ is the set of facets within the detector's field of view, the infrared radiation intensity at the pupil of the detector, whose spectrum is $[\lambda_1,\lambda_2]$, is modeled as:
$$I(t)=\sum_{i\in B}A_i\cdot\left|\frac{\left(R(t)\mathbf{n}_i\right)\cdot\mathbf{LOS}}{\left\|R(t)\mathbf{n}_i\right\|\left\|\mathbf{LOS}\right\|}\right|\cdot\int_{\lambda_1}^{\lambda_2}\left[L_{self}^i(T_i,t)+L_{rs}^i(t)+L_{re}^i(t)+L_{res}^i(t)\right]d\lambda$$
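As a numerical illustration of this model, the sketch below integrates the facet self-radiance over a band and sums it over the facets visible to the detector. The reflected-radiation terms are omitted for brevity, the radiation constants assume wavelengths in μm, and the facet geometry and temperatures are illustrative assumptions.

```python
import numpy as np

C1 = 3.7418e8    # first radiation constant (W um^4 m^-2), wavelengths in um
C2 = 1.4388e4    # second radiation constant (um K)

def band_self_radiance(T, lam_lo, lam_hi, emissivity, n=1000):
    """Integrate L_self = c1*eps / (lam^5 (exp(c2/(lam*T)) - 1)) over [lam_lo, lam_hi]."""
    lam = np.linspace(lam_lo, lam_hi, n)
    L = emissivity * C1 / (lam**5 * np.expm1(C2 / (lam * T)))
    return np.trapz(L, lam)

def pupil_intensity(areas, normals, temps, eps, R_t, los, lam_lo, lam_hi):
    """Sum band radiance over the facets facing the detector (R(t) n_i . LOS < 0)."""
    los = los / np.linalg.norm(los)
    I = 0.0
    for A, n, T, e in zip(areas, normals, temps, eps):
        n_ref = R_t @ n                               # facet normal in the reference frame
        c = (n_ref @ los) / np.linalg.norm(n_ref)     # cosine between normal and LOS
        if c < 0:                                     # facet is visible to the detector
            I += A * abs(c) * band_self_radiance(T, lam_lo, lam_hi, e)
    return I

# Two illustrative facets, 8-12 um band, identity micromotion matrix.
areas = [0.5, 0.5]; temps = [280.0, 300.0]; eps = [0.8, 0.8]
normals = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])]
print(pupil_intensity(areas, normals, temps, eps, np.eye(3),
                      np.array([0.0, 0.0, -1.0]), 8.0, 12.0))
```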

2.3. Simulating Imaging Model of the Target

The angle subtended by the space target at the detector is usually smaller than the instantaneous field of view of the system, so the target is treated as a point source [42]. The image point of the target is determined by the projection position and the irradiance. The mapping from the space coordinates of the target to the image plane coordinates requires successive conversions from the Earth-centered inertial (ECI) coordinate system through the orbital and sensor coordinate systems to the image plane coordinate system. The transformation is as follows:
$$\mathbf{r}_{sen}=R_{orb}^{sen}\,B\left(R_{ECI}^{orb}\,\mathbf{r}_T+\mathbf{c}\right)$$
where $R_{ECI}^{orb}$ is the rotation matrix from the ECI coordinate system to the orbital coordinate system and $R_{orb}^{sen}$ is the rotation matrix from the orbital coordinate system to the sensor coordinate system. Assuming that the image plane coordinate of the target is the point $T$, the imaging principle gives the following relationship between the image plane coordinates and the sensor coordinates:
$$\begin{bmatrix}x_I\\ y_I\end{bmatrix}=\frac{f}{z_{sen}}\begin{bmatrix}x_{sen}\\ y_{sen}\end{bmatrix}$$
Convert the image plane coordinates to the pixel coordinates of the target using the following equation:
$$\begin{bmatrix}x_p\\ y_p\end{bmatrix}=\begin{bmatrix}x_I/d\\ y_I/d\end{bmatrix}+\begin{bmatrix}N_x/2\\ N_y/2\end{bmatrix}$$
where $f$ is the focal length, $d$ is the pixel size, and $N_x$, $N_y$ are the numbers of pixels in the rows and columns of the image plane, respectively. When a point source is imaged, a diffraction pattern with a bright central spot and several alternating bright and dark rings forms on the imaging surface; the central bright spot is called the Airy spot, and its energy accounts for about 84% of the entire image spot. The irradiance response of the target after diffraction at any image position is calculated by the two-dimensional Gaussian point spread function:
$$p(x,y)=\frac{I}{R^2}\cdot\frac{1}{2\pi\sigma_{psf}^2}\exp\left(-\frac{\left(x-x_i\right)^2+\left(y-y_i\right)^2}{2\sigma_{psf}^2}\right)$$
where $(x_i,y_i)$ is the position of the target on the image plane and $\sigma_{psf}$ is the energy diffusion range. When $\sigma_{psf}=0.5$, about 98% of the target energy diffuses into the 3 × 3 pixel area centered on the target.
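The following short sketch renders this point spread function on a pixel grid and checks the energy concentration; the image size, target position, and irradiance value are illustrative assumptions.

```python
import numpy as np

def psf_image(irradiance, xi, yi, nx=64, ny=64, sigma=0.5):
    """Render a point target as a 2-D Gaussian energy spread on the image plane."""
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    g = np.exp(-((x - xi)**2 + (y - yi)**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return irradiance * g

img = psf_image(1.0, 31.6, 32.2)     # sub-pixel target position
patch = img[31:34, 31:34]            # 3x3 pixel neighborhood around the peak
print(patch.sum() / img.sum())       # ~0.98 of the energy for sigma = 0.5
```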
According to the above radiation intensity model and imaging model, the position and irradiance value of the target at any moment during the flight can be simulated. Combined with the dual-band temperature measurement algorithm, the CAMDF algorithm, dual-satellite positioning, and the velocity extraction algorithm, six feature dimensions can be extracted from the multi-frame infrared images: long-wave radiation intensity (8–12 μm), medium–long-wave radiation intensity (6–7 μm), temperature, emissivity–area product, micromotion period, and velocity. This provides the data support for the multi-dimensional feature decision-level fusion and recognition below.

3. Preliminaries

3.1. Structure of ROCKET

ROCKET was proposed by Dempster et al. in 2020 to solve univariate time series classification [43], drawing on the successful application of convolutional kernels in image, signal, and other fields. It has been shown to achieve state-of-the-art classification accuracy with less training time and computational complexity than MrSEQL [44], TS-CHIEF [45], and InceptionTime [46] on the 85 datasets of the UCR archive. ROCKET initializes a series of random convolution kernels determined by five parameters: length, weight, bias, dilation, and padding. All the kernels are convolved with the original time series to produce a series of feature maps. Then two values, the proportion of positive values (PPV) and the maximum value, are extracted as the representation of each feature map. Finally, the vectors formed by these two types of values are used to train the classifier, and the PPV and maximum values of a new time series are used as its input for classification.
Define the set of time series $\left\{\left(x_1^N,y_1\right),\dots,\left(x_i^N,y_i\right),\dots,\left(x_T^N,y_T\right)\right\}$, where $x_i^N$ is the $i$-th time series, $y_i$ is the corresponding category label, $N$ is the length of each time series, and $T$ is the size of the set. ROCKET mainly consists of three steps:
(1) Setting Random Convolution Kernels. A series of convolution kernels are defined by the following five parameters:
  • Length: the length of the convolution kernel is chosen from {7, 9, 11}; in most cases, this guarantees that the kernel is shorter than $N$.
  • Weights: the weights are randomly drawn from a standard normal distribution.
  • Bias: the bias is randomly drawn from a uniform distribution, $b\sim\mathcal{U}(-1,1)$.
  • Dilation: denoting the effective kernel length by $L_{kernel}$, the dilation is drawn on an exponential scale $e=2^a$, where $a\sim\mathcal{U}\left(0,\log_2\frac{N-1}{L_{kernel}-1}\right)$.
  • Padding: whether padding is performed is decided at random. When padding is required, zeros are appended to the start and end of the series so that the kernel can be centered on both the start and the end of the series.
(2) Extracting features by transforming. ROCKET generates a feature map by convolving each time series with each kernel and then extracts the PPV and the maximum value. The convolution operation is:
$$f=x * w=\sum_{j=1}^{L_{kernel}}x_{i+j\cdot e}\cdot w_j,\qquad i=1,\dots,N$$
where $f$ is the feature map and $w$ is the kernel applied to the time series. The PPV is calculated as:
$$PPV=\frac{1}{N}\sum_{j=0}^{N-1}\left[x_{ij}>0\right]$$
where $x_{ij}$ is the $j$-th value of the $i$-th feature map and $\left[x_{ij}>0\right]$ equals 1 when $x_{ij}>0$ and 0 otherwise. The extracted feature matrix for all input time series is then $V=\left[\left(PPV_1,max_1\right),\dots,\left(PPV_T,max_T\right)\right]\in\mathbb{R}^{T\times 2}$.
(3) Training the Classifier. Two linear classifiers, logistic regression and ridge regression, are recommended; both capture the rich information in the extracted matrix well. Considering the size of the training set in this paper, we choose ridge regression, which has two hyperparameters. The matrix is then input to the classifier, and the PPV and maximum values of a new time series are used as its input for classification.
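For intuition, the following condensed sketch reproduces the core of the ROCKET pipeline: random dilated kernels, PPV/max feature extraction, and a ridge classifier. The kernel count and data are toy assumptions; a production implementation differs in detail.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(0)

def make_kernel(n):
    """Random kernel: length, weights, bias, dilation, padding (per the ROCKET recipe)."""
    length = rng.choice([7, 9, 11])
    weights = rng.standard_normal(length)
    weights -= weights.mean()                       # mean-centered weights
    bias = rng.uniform(-1, 1)
    dilation = int(2 ** rng.uniform(0, np.log2((n - 1) / (length - 1))))
    padding = ((length - 1) * dilation // 2) if rng.integers(2) else 0
    return weights, bias, dilation, padding

def apply_kernel(x, kernel):
    """Dilated convolution followed by PPV and max feature extraction."""
    w, b, d, p = kernel
    x = np.pad(x, p) if p else x
    L = len(w)
    out_len = len(x) - (L - 1) * d
    f = np.array([x[i:i + (L - 1) * d + 1:d] @ w + b for i in range(out_len)])
    return (f > 0).mean(), f.max()                  # PPV and maximum value

def rocket_transform(X, kernels):
    return np.array([[v for k in kernels for v in apply_kernel(x, k)] for x in X])

# Toy usage: 60 series of length 150, two classes, 100 kernels for brevity.
X = rng.standard_normal((60, 150)); y = (X[:, :10].mean(axis=1) > 0).astype(int)
kernels = [make_kernel(150) for _ in range(100)]
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(rocket_transform(X, kernels), y)
```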

3.2. Long Short-Term Memory (LSTM) Network Architecture

LSTM is one of the famous variants of recurrent neural networks (RNN) [47]. It has shown an excellent ability to learn long-term and short-term dependencies in time series, partially overcoming the vanishing and exploding gradient problems encountered during training. LSTM has a wide range of applications in fields such as wind speed forecasting, speech classification, and emotion recognition. The architecture of the LSTM network is shown in Figure 3.
The most critical components of LSTM are the memory cell and three kinds of gates: the forget gate, the input gate, and the output gate. The flow of information through the structure is described in three stages. The first stage determines what information from the previous state should be removed and forgotten, which is governed by the value $f_t$ of the forget gate:
$$f_t=\sigma\left(W_f\cdot\left[h_{t-1},x_t\right]+b_f\right)$$
where $\sigma$ is the sigmoid function, $W_f$ is the weight matrix, $h_{t-1}$ is the output state from the previous step, $x_t$ is the input value at the current moment, and $b_f$ is the bias term. The second stage decides which input information is to be stored in the memory cell. It consists of two main parts: an input gate, which decides the information to be updated, and a $\tanh(\cdot)$ function, which generates a vector of new candidate information. The mathematical expressions are, respectively:
$$i_t=\sigma\left(W_i\cdot\left[h_{t-1},x_t\right]+b_i\right)$$
$$\hat{C}_t=\tanh\left(W_C\cdot\left[h_{t-1},x_t\right]+b_C\right)$$
where $i_t$ is the value of the input gate, $\hat{C}_t$ is the candidate vector, $W_i$, $W_C$ are weight matrices, and $b_i$, $b_C$ are bias terms.
The value $i_t$ controls how much new data from $\hat{C}_t$ is adopted, while the value $f_t$ controls how much of the memory element $C_{t-1}$ is retained. The new cell state $C_t$ is calculated as follows:
$$C_t=f_t\cdot C_{t-1}+i_t\cdot\hat{C}_t$$
If the values of $f_t$ and $i_t$ are always 1 and 0, respectively, the past memory $C_{t-1}$ is preserved over time and passed to the current state $C_t$. This design alleviates the vanishing gradient problem and enables the structure to better capture long-range dependencies in the time series.
The third stage determines what information from the cell state $C_t$ becomes the output of the current state. This stage is controlled mainly by the value $o_t$ of the output gate, with $C_t$ scaled by a $\tanh(\cdot)$ function. The output gate value $o_t$ and the current output state $h_t$ are calculated as follows:
$$o_t=\sigma\left(W_o\cdot\left[h_{t-1},x_t\right]+b_o\right)$$
$$h_t=o_t\cdot\tanh\left(C_t\right)$$
When $o_t$ is close to 1, all the memory information is effectively passed to the output value; when it is close to 0, the information within the memory element is retained without updating the output state $h_t$. The output state $h_t$, whose range is $(-1,1)$, carries the short-term memory. After passing through the three stages, invalid information is filtered out and the effective part is output.
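The gate equations above can be condensed into a single-step function; the following NumPy sketch uses random parameters purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, params):
    """One LSTM step implementing the gate equations above."""
    Wf, bf, Wi, bi, Wc, bc, Wo, bo = params
    z = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    f_t = sigmoid(Wf @ z + bf)                 # forget gate
    i_t = sigmoid(Wi @ z + bi)                 # input gate
    C_hat = np.tanh(Wc @ z + bc)               # candidate memory
    C_t = f_t * C_prev + i_t * C_hat           # new cell state
    o_t = sigmoid(Wo @ z + bo)                 # output gate
    h_t = o_t * np.tanh(C_t)                   # new output state
    return h_t, C_t

# Toy usage: hidden size 4, input size 3, random parameters.
rng = np.random.default_rng(1)
H, D = 4, 3
params = tuple(v for _ in range(4) for v in (rng.standard_normal((H, H + D)), np.zeros(H)))
h, C = np.zeros(H), np.zeros(H)
for x_t in rng.standard_normal((10, D)):       # run a length-10 sequence
    h, C = lstm_step(x_t, h, C, params)
```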

3.3. The SVM Classifier

The traditional SVM is only suitable for binary classification, so many researchers have focused on extending SVM to multi-class scenarios, such as one-versus-one SVM (OVO-SVM) [48], one-versus-rest SVM (OVR-SVM) [49], error correction coding SVM (ECC-SVM) [50], and decision binomial tree SVM (DBT-SVM) [51]. Considering that the number of recognition categories is 5, the maximum number of binary SVMs to be constructed is 10, and the training time is short, OVO-SVM is chosen as one of the classifiers in this paper. For a test sample, all 10 classifiers classify and vote, and the category with the most votes is the final result for that sample.
The output of the SVM is a category label, which needs to be mapped to a posterior probability for each category in order to construct the BPA of DS. Platt [52] used the sigmoid-fitting method to map the SVM outputs to probabilities:
$$P=\frac{1}{1+\exp\left(Af+B\right)}$$
where $P$ is the posterior probability, $f$ is the output of the SVM, and $A$, $B$ are parameters obtained by minimizing the negative log-likelihood.
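A minimal sketch of this sigmoid fitting, assuming binary labels and using a general-purpose optimizer for the negative log-likelihood (libsvm-style implementations use a more specialized Newton iteration):

```python
import numpy as np
from scipy.optimize import minimize

def platt_fit(f, y):
    """Fit P = 1 / (1 + exp(A f + B)) by minimizing the negative log-likelihood.
    f: SVM decision values; y: binary labels in {0, 1}."""
    def nll(params):
        A, B = params
        p = 1.0 / (1.0 + np.exp(A * f + B))
        eps = 1e-12                         # guard against log(0)
        return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return minimize(nll, x0=np.array([-1.0, 0.0]), method="Nelder-Mead").x

# Toy usage: decision values correlated with the labels.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 200).astype(float)
f = rng.normal(loc=2.0 * y - 1.0, scale=1.0)
A, B = platt_fit(f, y)
posterior = 1.0 / (1.0 + np.exp(A * f + B))   # per-sample probability of class 1
```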

3.4. DS Evidence Theory

The Dempster–Shafer theory is one of the powerful processing algorithms in the field of information fusion and was proposed and improved by Dempster and Shafer. It can flexibly handle incomplete, uncertain, and imprecise information in multi-dimensional data fusion without any prior knowledge. Applying DS evidence theory to decision-level fusion for space infrared dim target recognition is a new attempt.
Definition 1.
Frame of Discernment.
Assuming the elements in the framework $\Theta=\{\theta_1,\theta_2,\dots,\theta_n\}$ are mutually exclusive without intersection, $\Theta$ is called the frame of discernment (FOD). Additionally, all subsets of $\Theta$ form the power set $2^{\Theta}$, which is denoted in the following form:
$$2^{\Theta}=\left\{\varnothing,\{\theta_1\},\{\theta_2\},\dots,\{\theta_1,\theta_2\},\{\theta_1,\theta_3\},\dots,\Theta\right\}$$
Definition 2.
Basic Probability Assignment.
When a mapping $m:2^{\Theta}\to[0,1]$ satisfies the following relations:
$$m(\varnothing)=0,\qquad 0\le m(A)\le 1\ \ \forall A\subseteq\Theta,\qquad \sum_{A\subseteq\Theta}m(A)=1$$
the mapping $m$ is called the basic probability assignment (BPA) over the recognition framework $\Theta$, also known as the mass function. Each $m(A)$ represents the probability mass assigned to event $A$, with $m(\varnothing)$ the mass of the empty set. If $m(A)>0$, $A$ is called a focal element. The values $m(A)$ are the basis of the fusion decision.
Definition 3.
Combination Rules.
When $n$ evidence sources or feature sources make judgments on the type of one sample $A$, with corresponding output mass functions $m_1,m_2,\dots,m_n$, all of them are combined according to the following equations:
$$\left(m_1\oplus m_2\oplus\cdots\oplus m_n\right)(A)=\frac{1}{1-K}\sum_{A_1\cap A_2\cap\cdots\cap A_n=A}m_1(A_1)\cdot m_2(A_2)\cdots m_n(A_n)$$
$$K=\sum_{A_1\cap A_2\cap\cdots\cap A_n=\varnothing}m_1(A_1)\cdot m_2(A_2)\cdots m_n(A_n)=1-\sum_{A_1\cap A_2\cap\cdots\cap A_n\ne\varnothing}m_1(A_1)\cdot m_2(A_2)\cdots m_n(A_n)$$
where $K$ is a measure of the degree of conflict among the evidence. Note that the above fusion rule is valid only when $0\le K<1$.
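The combination rule for two bodies of evidence can be written compactly as below; mass functions are represented as dictionaries over focal elements, and the frame values are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset focal elements via Dempster's rule."""
    fused, K = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            fused[inter] = fused.get(inter, 0.0) + a * b
        else:
            K += a * b                              # conflict mass
    if K >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {A: v / (1.0 - K) for A, v in fused.items()}

# Toy usage on the frame {'real', 'decoy', 'cabin'}.
m1 = {frozenset({'real'}): 0.6, frozenset({'decoy'}): 0.3,
      frozenset({'real', 'decoy', 'cabin'}): 0.1}
m2 = {frozenset({'real'}): 0.5, frozenset({'cabin'}): 0.4,
      frozenset({'real', 'decoy', 'cabin'}): 0.1}
print(dempster_combine(m1, m2))
```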

4. The Proposed Method of Target Recognition

Uncertainty feature data processing is an indispensable step for space target recognition. In the process of judging the type of a space target, it is often necessary to fuse different feature data from the observed target to implement a comprehensive judgment and evaluation of the target type. In addition, in an ensemble classifier architecture, the variability in how well different classifiers learn their feature data sources leads to divergent recognition results for space targets, which is called information source conflict. To enhance the rationality of feature data fusion, attenuate the conflicts among feature data, and further improve the recognition accuracy, this paper proposes a new space target recognition method that combines an ensemble classifier with an improved DS. Firstly, three kinds of trained classifiers (ROCKET + Ridge, LSTM, and SVM) are tested on the six-dimensional target features (long-wave radiation intensity (8–12 μm), medium–long-wave radiation intensity (6–7 μm), temperature, emissivity–area product, velocity, and micromotion period) extracted from the multi-frame infrared images to construct the basic BPA.
The proposed method consists of three main modules: an ensemble-classifier-based information processing module for the multidimensional feature data, an improved DS information fusion module, and a decision module. The information processing module uses the three types of classifiers on the six-dimensional space target features for initial classification and outputs the posterior probability values of each category to construct the basic BPA matrix for the next module. In the information fusion module, the credibility of each BPA value is judged based on the number of space target categories, the contraction–expansion function is introduced for evidence scaling, and the accuracy of each classifier is used as the discount coefficient to improve the reasonableness of the evidence. Finally, the target decision recognition module makes fusion judgments on the target category by setting decision thresholds. The flowchart of the proposed algorithm is shown in Figure 4.
In the information processing stage, the main purpose is to perform pre-processing of the dual-band multi-frame infrared images generated from the infrared detector, including blind element removal, infrared dim target detection, target signal enhancement, target tracking, and so on. Then a series of pixel point coordinates and gray value information of the target can be extracted. In this paper, assuming that these two types of information about the target have been accurately obtained, the focus is on the feature extraction and the decision recognition of the target category. Extraction of infrared radiation intensity, temperature, emissivity–area product, and micromotion period of the target is achieved by using the gray value of the target in two infrared bands, and the velocity information of the target is extracted by using the pixel point coordinates of the target in the image combined with the infrared detector position and LOS vector. The main contents of this section are as follows:

4.1. Multi-Dimensional Feature Extraction

Features are the intrinsic information describing the properties of objects, and different categories of targets have distributional variability in the feature space. Space targets vary in the above six-dimensional features due to differences in materials, emissivity, mass distribution, etc., which provides feature data references for recognizing the targets. For the infrared radiation intensity, its magnitude is reflected by the level of the pixel gray value of the target. The functional relationship between the radiation of the target and the gray value must be obtained by blackbody radiation calibration. Once the radiation equation is obtained, the inverse function can be used to convert gray values to radiation. In general, the equation is characterized as a linear relationship:
$$L=a\cdot DN+b$$
where $L$ is the infrared radiation of the target, $DN$ is the gray value, and $a$, $b$ are coefficients obtained by fitting.
By performing the above calculation, the consecutive radiation signal of the target in the two infrared bands can be extracted. Because of the long distance between the target and the detector, the irradiance of the target reaching the pupil is weak, so the effect of detector noise cannot be ignored. Considering that the infrared signal is regular, the DA-VMD algorithm [53] is used to denoise the radiation signal. This approach avoids the subjective manual setting of the mode number $K$ and the quadratic penalty term $\alpha$ in the VMD and finds the optimal parameter values by iterating the objective function through the optimization algorithm. After that, the infrared radiation intensity signals (8–12 μm, 6–7 μm) can be obtained. The temperature can be extracted with the help of dual-band thermometry. However, since the radiation at the pupil of the detector contains not only the self-radiation of the target but also the external radiation reflected by the target, the real temperature and emissivity of the target can be accurately derived only if the value of the reflected external radiation is known. When the target is flying in space, the external radiation changes dynamically and is difficult to know, which leads to a deviation of the extracted temperature from the real temperature of the target. Previous studies have shown that using 8–12 μm and 6–7 μm as the detector bands can reduce this bias to some extent. The principle of dual-band thermometry is shown in the following equation:
$$\frac{\epsilon_s(T)\int_{\lambda_1}^{\lambda_2}L_{\lambda}(T)\,d\lambda}{\epsilon_s(T)\int_{\lambda_3}^{\lambda_4}L_{\lambda}(T)\,d\lambda}=\frac{\int_{\lambda_1}^{\lambda_2}L_{\lambda}(T)\,d\lambda}{\int_{\lambda_3}^{\lambda_4}L_{\lambda}(T)\,d\lambda}=\frac{S_1}{S_2}=R(T)$$
where $\epsilon_s$ is the emissivity of the target; $\lambda_1$, $\lambda_3$ are the lower edges of the bands; $\lambda_2$, $\lambda_4$ are the upper edges of the bands; $T$ is the temperature; and $L_{\lambda}(\cdot)$ is the Planck's law equation. $R(T)$ is the relation between the temperature and the ratio of the radiation in the two infrared bands, which can be obtained with a blackbody. In general, $R(T)$ is a monotonic function, and the target temperature $T$ can be solved using the bisection method.
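Since $R(T)$ is monotonic, the temperature can be recovered by bisection. The sketch below assumes blackbody band radiances computed directly from Planck's law (in practice $R(T)$ is calibrated with a blackbody) and the paper's 8–12 μm and 6–7 μm bands.

```python
import numpy as np

C1, C2 = 3.7418e8, 1.4388e4     # radiation constants (wavelengths in um)

def band_integral(T, lo, hi, n=500):
    lam = np.linspace(lo, hi, n)
    return np.trapz(C1 / (lam**5 * np.expm1(C2 / (lam * T))), lam)

def R(T):
    """Ratio of band radiances S1/S2 for the 8-12 um and 6-7 um bands."""
    return band_integral(T, 8.0, 12.0) / band_integral(T, 6.0, 7.0)

def solve_T(ratio, T_lo=150.0, T_hi=1500.0, tol=1e-3):
    """Invert the monotonic R(T) with bisection to recover the temperature."""
    while T_hi - T_lo > tol:
        T_mid = 0.5 * (T_lo + T_hi)
        # R(T) decreases with T for these bands (hotter -> more short-wave energy)
        if R(T_mid) > ratio:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)

print(solve_T(R(300.0)))         # recovers ~300 K
```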
Then, knowing the distance between the infrared detector and the target, the emissivity–area product can be calculated as follows:
$$\epsilon_s A=\frac{\pi R^2 S}{\int_{\lambda_1}^{\lambda_2}L_{\lambda}(T)\,d\lambda}=\frac{\pi I}{\int_{\lambda_1}^{\lambda_2}L_{\lambda}(T)\,d\lambda}$$
where $\epsilon_s A$ is the emissivity–area product of the target, $R$ is the distance, $S$ is the irradiance at the pupil, and $I$ is the radiant intensity.
Since the micromotion leads to periodic fluctuations of the target's radiation signal in the time domain, the CAMDF is used to extract the micromotion period of the target [54]; it can, to some extent, overcome a drawback of related algorithms, namely that false valley points lead to period misestimation. The function is as follows:
$$F(k)=\frac{1}{N}\sum_{i=1}^{N}\left|I\left(\mathrm{mod}(i+k,N)\right)-I(i)\right|$$
where $I$ is the radiation signal, $N$ is the length of the signal, $k$ is the delay length, and $\mathrm{mod}(\cdot)$ is the remainder operator.
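A compact implementation of CAMDF and period estimation on a synthetic radiation signal (the 2 s period, 10 Hz frame rate, and noise level are illustrative assumptions):

```python
import numpy as np

def camdf(signal, max_lag):
    """Circular average magnitude difference function F(k) for k = 1..max_lag."""
    N = len(signal)
    idx = np.arange(N)
    return np.array([np.abs(signal[(idx + k) % N] - signal[idx]).mean()
                     for k in range(1, max_lag + 1)])

# Toy usage: a 2 s micromotion period sampled at the 10 Hz frame rate.
fs, T = 10.0, 2.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(3)
radiation = 1.0 + 0.3 * np.sin(2 * np.pi * t / T) + 0.02 * rng.normal(size=t.size)
F = camdf(radiation, max_lag=30)
period = (np.argmin(F) + 1) / fs    # lag of the deepest valley, in seconds
print(period)                       # ~2.0
```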
When a single-satellite infrared detector is used for target observation, if the motion of the target is assumed to conform to the two-body motion law and the position and velocity of the target at the initial observation moment are known, the space position and velocity of the target during the whole flight can be deduced by combining the Kalman filter and related derived algorithms. However, there are two drawbacks: (1) in an actual observation scenario, when the LOS of the infrared detector captures the flying target and tracks it stably, the position and velocity of the target at the initial moment are unknown; (2) when the target performs variable orbital motion, its motion does not conform to the two-body model, making velocity extraction with a single-satellite infrared detector fail. Therefore, a combination of dual infrared detectors and the least-squares method is used to extract the target velocity in this paper [55].
We first establish the mathematical model for solving the space position of the target and extracting the velocity from the position. According to the geometric positioning model of the dual infrared detectors, the equations for solving the space position are:
$$\begin{cases}u_j x-l_j y=u_j X_j-l_j Y_j\\ v_j y-u_j z=v_j Y_j-u_j Z_j\\ v_j x-l_j z=v_j X_j-l_j Z_j\end{cases},\qquad j=1,2$$
where $(x,y,z)$ is the position coordinate of the space target and $\mathbf{V}_j=(l_j,u_j,v_j)$, $\mathbf{R}_j=(X_j,Y_j,Z_j)$ denote the unit observation vector and the position of the $j$-th detector in the J2000 coordinate system, respectively. In the above system, the unknowns are the three position coordinates of the target, and the number of independent equations is six, so the target position $(x,y,z)$ can be solved by the least-squares method. Then, the velocity $(v_x,v_y,v_z)$ of the target is calculated according to the following equation:
$$\left(v_x,v_y,v_z\right)=\left(\frac{\Delta x}{\Delta t},\frac{\Delta y}{\Delta t},\frac{\Delta z}{\Delta t}\right)=\left(\frac{x(t+\Delta t)-x(t)}{\Delta t},\frac{y(t+\Delta t)-y(t)}{\Delta t},\frac{z(t+\Delta t)-z(t)}{\Delta t}\right)$$
where $\Delta t$ is the time difference and $\Delta x$, $\Delta y$, $\Delta z$ are the displacement variations.
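The six positioning equations form an overdetermined linear system solved by least squares; the velocity then follows from finite differences of successive positions. The sketch below verifies this on synthetic detector geometry (all positions and directions are illustrative assumptions):

```python
import numpy as np

def locate(dirs, poss):
    """Least-squares target position from unit LOS vectors and detector positions."""
    A, b = [], []
    for (l, u, v), (X, Y, Z) in zip(dirs, poss):
        A += [[u, -l, 0.0], [0.0, v, -u], [v, 0.0, -l]]
        b += [u * X - l * Y, v * Y - u * Z, v * X - l * Z]
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Synthetic check: two detectors observing a target at (7000, 1000, 500) km.
target = np.array([7000.0, 1000.0, 500.0])
poss = [np.array([0.0, 0.0, 0.0]), np.array([0.0, 8000.0, 0.0])]
dirs = [(target - p) / np.linalg.norm(target - p) for p in poss]
print(locate(dirs, poss))      # recovers the target position

# Velocity from positions at t and t + dt (dt = 0.1 s at the 10 Hz frame rate):
# v = (locate(dirs_t1, poss_t1) - locate(dirs_t0, poss_t0)) / 0.1
```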
Based on the above, the six-dimensional target features are extracted from the multi-frame images. For the velocity feature, the extraction accuracy is independent of the error in the target's radiation magnitude and depends instead on error sources such as the location of the centroid of the target imaging point, the space coordinates of the infrared detector, the velocity of the infrared detector, and the pose of the infrared detector. The remaining features are not independent: they are all derived from the infrared radiation signal, so the accuracy of the infrared radiation affects their extraction. In particular, the emissivity–area product feature is affected not only by the infrared radiation but also by the accuracy of the temperature and of the distance between the target and the detector.

4.2. Construction of the BPA and Decision Making

As mentioned above, building an ensemble classifier involves two main tasks: (1) selecting several independent classifiers; (2) combining the classification probability values to construct the basic BPA. Three classification algorithms, namely ROCKET + Ridge, LSTM, and SVM + sigmoid-fitting, are selected as the individual classifiers in this paper. It should be clarified that ROCKET + Ridge is used to recognize the radiation intensity sequences of the two infrared bands, and LSTM is used to separately recognize the temperature, emissivity–area product, and velocity sequences. In the general case, the micromotion period of a space target flying outside the atmosphere remains a constant rather than a time series, so the micromotion period is classified using SVM instead of LSTM or ROCKET, which increases the computational speed and reduces the time to build the BPA. It should be explained that LSTM can also handle the classification of radiation intensity sequences; we choose additional classification algorithms because their different classification mechanisms complement each other with as much information as possible to provide more accurate classification results. In addition, we found that, for radiation intensity classification, ROCKET + Ridge gives better results than LSTM.
For a test sample, the process of calculating the basic BPA corresponding to the radiation intensity sequence based on ROCKET is as follows:

4.2.1. Determine the Initial BPA

The first step is to establish a basic recognition framework, use the classifiers' recognition results on the six-dimensional features to construct an initial BPA, and assign evidence to each target category.
Assuming that there are $M$ categories of space targets, the initial recognition framework can be constructed as $\Theta=\{K_1,K_2,\dots,K_M\}$. The radiation intensity signals of all space targets are divided into a training set and a test set; the ROCKET + Ridge training stage is carried out on the training set, after which the denoised test set is input to the algorithm to obtain its accuracy $\varphi$. When the infrared radiation intensity signal of a test sample is input to the algorithm, the probability values corresponding to the categories are $(p_1,p_2,\dots,p_M)$, which constitute the basic BPA values for this feature.
Therefore, the overall initial BPA matrix $\overline{BPA}$ over all features is:
$$\overline{BPA}=\begin{bmatrix}p_{11} & p_{12} & \cdots & p_{1M}\\ \vdots & \vdots & \ddots & \vdots\\ p_{61} & p_{62} & \cdots & p_{6M}\end{bmatrix}_{6\times M}$$

4.2.2. Information Fusion

This step accomplishes the modification of the initial BPA, discount processing, and evidence fusion, amplifying the differences between the probability values, which helps the final category judgment.
From the previous description, it is known that there can be conflicts among the evidence, which lead to fusion results contrary to the facts. The sum of the BPA values is 1, and the average BPA of each category is $1/M$. In the extreme case where the BPA value of a feature is $1/M$ for all categories, the feature is not credible and fails to recognize the target. Therefore, $1/M$ can be used as a threshold for scaling the BPA values. Here, we introduce a contraction–expansion function to improve the DS algorithm by modifying the basic BPA values:
$$p_{ij}'=10^{\,2\left(p_{ij}-1/M\right)}\cdot p_{ij},\qquad i=1,\dots,6;\ j=1,\dots,M$$
When the BPA value of a feature for a certain target category is less than $1/M$, the probability that the test sample belongs to this category is considered small, and the scaling factor $10^{2\left(p_{ij}-1/M\right)}<1$ compresses the BPA value. When the BPA value is greater than $1/M$, the probability that the sample belongs to this category is considered large, and the factor exceeds 1, scaling the BPA value up. The modified BPA values are then normalized as follows:
$$p_{ij}''=\frac{p_{ij}'}{\sum_{j=1}^{M}p_{ij}'}$$
After scaling, the accuracy of each recognition algorithm is used to weight the BPA of each category, and the algorithm uncertainty $1-\varphi_i$ is assigned to the BPA of the overall recognition framework for that feature source. The calculation for each feature is:
$$p_i(K_1)=\varphi_i\cdot p_{i1}'',\quad\dots,\quad p_i(K_M)=\varphi_i\cdot p_{iM}'',\quad p_i(\Theta)=1-\varphi_i$$
where $\varphi_i$ denotes the accuracy of the $i$-th classifier and $p_i(\Theta)$ denotes the probability that the recognition result belongs to all categories, i.e., that it cannot be determined to which category the test sample belongs.
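The fusion-stage operations (contraction–expansion, renormalization, and accuracy discounting) can be summarized as below; this sketch uses the single-factor form of the contraction–expansion function, with random BPAs and accuracies as illustrative inputs.

```python
import numpy as np

def modify_bpa(bpa, acc):
    """Contraction-expansion, renormalization, and accuracy discounting of the BPA.
    bpa: (6, M) matrix of classifier posteriors; acc: length-6 accuracies phi_i."""
    M = bpa.shape[1]
    scaled = bpa * 10.0 ** (2.0 * (bpa - 1.0 / M))   # compress below 1/M, amplify above
    scaled /= scaled.sum(axis=1, keepdims=True)      # renormalize each evidence row
    discounted = scaled * acc[:, None]               # weight by classifier accuracy
    m_theta = 1.0 - acc                              # residual mass on the whole frame
    return discounted, m_theta

# Toy usage: M = 5 categories, six feature-level evidence sources.
rng = np.random.default_rng(4)
bpa = rng.dirichlet(np.ones(5), size=6)
acc = np.array([0.95, 0.93, 0.85, 0.84, 0.82, 0.70])
m, m_theta = modify_bpa(bpa, acc)
```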

4.2.3. Category Result Output

This step makes a target category judgment based on the results of evidence fusion. The modified BPA values obtained above are fused using Equation (32). Denote the fused probabilities by $p(M_1),p(M_2),\dots,p(M_m),p(\Theta)$; the probability-assignment-based decision method is then applied as follows:
Suppose $M_1,M_2\subseteq\Theta$ with $p(M_1)=\max\left\{p(M_i),M_i\subseteq\Theta\right\}$ and $p(M_2)=\max\left\{p(M_i),M_i\subseteq\Theta,M_i\ne M_1\right\}$. If $M_1$ and $M_2$ satisfy the following conditions:
$$\begin{cases}p(M_1)-p(M_2)>\epsilon_1\\ p(\Theta)<\epsilon_2\\ p(M_1)>p(\Theta)\end{cases}$$
where $\epsilon_1$, $\epsilon_2$ are the thresholds, then the recognition result for the target is taken to be type $M_1$.
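A sketch of the decision rule applied to the fused masses, with illustrative threshold values matching those used later in the experiments ($\epsilon_1=0$, $\epsilon_2=0.10$):

```python
def decide(masses, m_theta, eps1=0.0, eps2=0.10):
    """Decision rule: accept the top category only if it clearly dominates.
    masses: dict category -> fused singleton mass; m_theta: mass on the whole frame."""
    ranked = sorted(masses.items(), key=lambda kv: kv[1], reverse=True)
    (best, p1), (_, p2) = ranked[0], ranked[1]
    if p1 - p2 > eps1 and m_theta < eps2 and p1 > m_theta:
        return best
    return None    # conditions not met: no confident decision

print(decide({'real': 0.62, 'cabin': 0.21, 'decoy': 0.12}, m_theta=0.05))  # -> 'real'
```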

5. Experimental Results

The purpose of this section is to verify the necessity of multi-dimensional feature decision-level fusion and the performance of the proposed algorithm through multiple experiments. First, the ranges of the space target attribute values and the flight scene parameters used in the experiments are described in detail, and the space target dataset is built based on the infrared radiation model and target imaging model of the previous section. Subsequently, the recognition performance of multidimensional feature decision-level fusion is compared with the performance under a single feature. Then, the performance of the proposed algorithm and the comparison algorithms under different SNRs and different observation time lengths is discussed.

5.1. Experimental Parameter Setting

For ROCKET, we choose the default parameters of the original paper [43]; namely, the number of kernels is 10,000, producing 20,000 features for each infrared radiation sequence. The kernel function of the SVM is the RBF, the sigma of the kernel is 0.5, and the penalty factor is 1. The LSTM network is a five-layer neural network: one input layer, one LSTM layer with 200 hidden units, one fully connected layer, one softmax layer, and one classification layer; the classification layer outputs the probability of each category. For the training stage, the Adam optimizer is used with a learning rate of 0.001, the maximum number of epochs is 120, and the loss function is the cross-entropy. In the decision rule, $\epsilon_1$ and $\epsilon_2$ are set to 0 and 0.10, respectively.

5.2. Flight Scene and Space Target Property Setting

In the second section, the infrared radiation intensity model and imaging model of the space target were established in detail, and the main factors affecting the variation of infrared radiation intensity were analyzed. In this subsection, the corresponding parameters are set for the two models. To be more realistic, we simulate the infrared radiation image data of five types of space targets in six flight scenarios. The flight scenario environment, the flight time, and the starting and landing positions of the targets are given in Table 1. Although the starting and landing positions of the targets in Scenes 1–3 are set to be the same, the flight paths of the targets differ in each scene. The target flight path and the motion state of the observation satellite are the same in Scene 5 and Scene 6; the difference is the lighting environment. Three lighting conditions of the targets in flight are set: full sunlight; full shadow; and sunlight for the first half of the flight time with shadow for the remainder.
The detector bands of the infrared detectors are 8–12 μm and 6–7 μm, respectively, the frame frequency is 10 Hz, and the LOS of the detectors always points to the centroid of the real (threatening) target. The properties of each target are shown in Table 2. The four target shapes are flat-base cone, ball-base cone, cone–cylinder, and cylinder. The target categories are real target (flat-base cone), master cabin (cone–cylinder), light decoys (ball-base cone, cylinder), and heavy decoy (cone–cylinder). There are 100 samples for each flight scene and 600 samples over all flight scenes, with the five target categories in a 1:1:1:1:1 ratio. We used the infrared-observed images from 85 s to 100 s for recognition.
It should be noted that the infrared radiation intensity of the target extracted from the images can be disturbed by various factors: non-uniformity of the sensor image, target point coordinate extraction residual, measurement distance error, and others. These factors are usually modeled as Gaussian white noise to improve the realism of the data. Figure 5a shows the normalized infrared radiation intensity of the five targets under ideal conditions; the radiation intensity is characterized by long-term variation with local fluctuation, which results from the micromotion. The master cabin and light decoy 1 have relatively small periodic variations, while the real target, light decoy 2, and heavy decoy have large ones. Figure 5b shows the normalized radiation intensity extracted under noise interference (SNR = 10): the detailed information of the target radiation signal is severely corrupted by noise, which may cause a sharp decrease in recognition performance. Therefore, 70% of the original data are randomly assigned to the training set and 30% to the test set, and noise of different levels is added to verify the performance of the proposed algorithm under different SNRs.
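A minimal sketch of the noise injection, assuming the usual definition of SNR in decibels as the signal-to-noise power ratio; the function and variable names are illustrative.

```python
# Corrupt a radiation-intensity sequence with additive white Gaussian
# noise at a target SNR: noise power = signal power / 10^(SNR_dB / 10).
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    signal = np.asarray(signal, dtype=float)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

# e.g., noisy = add_awgn(clean_intensity, snr_db=10)
```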
As shown in Figure 6, the five targets are imaged in six different flight scenarios. The image size is 1024 × 1024 pixels, and it is assumed that the line of sight of the detector always points to the real target, which is therefore always located at the center pixel of the image. The other targets have a separation velocity relative to the real target as they follow their set trajectories, so each target moves outward in the image, and the variability in the velocities of the individual targets is clearly visible.

5.3. Performance Comparison with the Single Feature

Since each feature dimension can be used for recognition on its own, taking an observation time of 15 s as an example, the recognition accuracies of single features under different noise levels are first calculated. In addition, to compare the effect of fusing different feature subsets, the accuracies of the proposed algorithm when fusing four features (temperature, emissivity–area product, period, and velocity) and when fusing all features are also calculated, as shown in Table 3.
For single-feature recognition, the accuracies of long-wave and medium–long-wave radiation intensity are higher than those of the other four features at every noise level; in particular, when SNR > 10, the radiation intensity accuracies exceed 85%. The temperature, emissivity–area product, and period features perform similarly. Surprisingly, the velocity feature performs poorly; we conjecture that the velocity differences among the space flight targets are small, which reduces the discriminability of this feature during the observation process. Comparing the results in the table, fusing all six features outperforms fusing four: for SNR = 5, the accuracy of the proposed algorithm with six-dimensional fusion is 93.33%, higher than the 83.89% of four-dimensional fusion. Regarding the contribution of individual features, radiation intensity contributes the most, velocity the least, and the remaining features differ little. Although fused recognition is only slightly better than radiation intensity alone at SNR ≥ 15, fusion has a clear advantage at low SNR. Moreover, even though each of the weaker features (temperature, emissivity–area product, period, velocity) has low individual accuracy, the fused accuracy still exceeds that of radiation intensity alone at low SNR, which shows that decision-level recognition with multi-dimensional feature fusion is necessary for space target recognition.

5.4. Comparison with Other Baseline Methods

We evaluate the performance of the proposed method against five classical baseline algorithms: the traditional DST [31], the Murphy method [34], the Gao method [56], the Zhang method [24], and the Zhou method [29]. The Murphy method averages the BPAs generated by the features and fuses the averaged BPA several times to obtain the final recognition result. The Gao method introduces a cross-entropy-based similarity criterion to modify the BPAs of the features and fuses the modified values for decision recognition. The Zhang and Zhou methods fuse multi-dimensional features based on fuzzy sets and Bayesian theory, respectively; since they differ from the DS evidence theory used in this paper, they provide broader reference value as comparison methods. It should be noted that the initial BPAs of the first three baseline algorithms are also obtained from the ensemble classifier of this paper. To illustrate the recognition process of the proposed method, a test sample is used as an example, and Table 4 shows the initial BPA values obtained after processing by the ensemble classifier.
During the whole observation process, given that the number of target categories is 5, the threshold of the contraction–expansion function is 0.2. The modified BPA is calculated according to Formula (41), and the BPA is then weighted by the accuracy of each classifier to obtain the final BPA values. Finally, the fusion results for this sample are obtained according to the DS fusion and decision rules, as shown in Table 5, together with the fusion results of the comparison algorithms. In this case, both traditional Dempster's rule and Murphy's method make wrong decisions. Gao's method improves the decision to some degree and makes the correct judgment. The result of the proposed method is more discriminable, which is more conducive to the control center making a reasonable judgment on the target category. Furthermore, both Zhang's and Zhou's methods make correct judgments, but their fusion masses on the correct class are still lower than that of the proposed method.
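To make the fusion step concrete, the sketch below implements the two standard ingredients it builds on: reliability discounting by classifier accuracy and Dempster's rule of combination for BPAs defined on singleton hypotheses plus the frame Θ. The contraction–expansion modification of Formula (41) is specific to this paper and is not reproduced here.

```python
# Shafer-style discounting and Dempster's rule over singletons plus Θ.
from functools import reduce

def discount(m, alpha):
    # m'(A) = alpha * m(A) for A != Θ; the removed mass is moved to Θ.
    md = {a: alpha * v for a, v in m.items() if a != "Θ"}
    md["Θ"] = 1.0 - alpha * (1.0 - m.get("Θ", 0.0))
    return md

def dempster(m1, m2):
    # Masses agree on a singleton A via A∩A, A∩Θ, and Θ∩A; everything
    # else is conflict K, renormalized away by dividing by 1 - K.
    singles = {a for a in (*m1, *m2) if a != "Θ"}
    t1, t2 = m1.get("Θ", 0.0), m2.get("Θ", 0.0)
    raw = {a: m1.get(a, 0.0) * m2.get(a, 0.0)
              + m1.get(a, 0.0) * t2 + t1 * m2.get(a, 0.0)
           for a in singles}
    raw["Θ"] = t1 * t2
    total = sum(raw.values())  # equals 1 - K
    return {a: v / total for a, v in raw.items()}

# e.g., fuse six per-feature BPAs discounted by classifier accuracies:
# fused = reduce(dempster, [discount(m, acc) for m, acc in zip(bpas, accs)])
```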
We compare the recognition performance of these algorithms under different observation times (L = 5 s, 10 s, 15 s) and different noise levels (SNR = 5, 10, 15, 20, 25, 30). The results are shown in Figure 7.
These results illustrate that, for a fixed observation time, the recognition accuracy of each algorithm improves to varying degrees as the SNR increases and then plateaus. Once in the plateau stage, further increasing the SNR has no obvious effect on the final recognition accuracy, which provides a useful design reference for the infrared detection system of space targets. Additionally, although the space target signal is pre-processed, residual noise still distorts the extracted features, which negatively affects recognition accuracy. The proposed algorithm outperforms the other five benchmark algorithms in most cases; for example, at an observation time of 15 s and an SNR of 5, its accuracy is 93.33%, compared with 81.11% for the traditional DST algorithm. In summary, the proposed algorithm achieves higher recognition accuracy and better robustness than the other algorithms, especially at low SNR.
The process of observing a space target with an infrared detector is dynamic, and the number of collected frames grows with time, so the effect of observation time on recognition accuracy must be analyzed. Comparing the three figures, the recognition accuracy of all six algorithms improves as the observation time increases, and the proposed method outperforms the others. This is because a longer observation time yields more information about the target, which benefits recognition. Note that when the observation time (L = 5 s) is shorter than the micromotion period of some targets, a complete cycle of the target cannot be collected, leading to a larger recognition error rate. In this case, the observation time needs to be increased to ensure that the detector acquires complete information about the target.

5.5. Comparison of ROC Curves

To assess the recognition performance of the proposed method more comprehensively, Figure 8 shows the receiver operating characteristic (ROC) curves and the area under the ROC curve (AUC) of the above six methods at an observation time of 15 s and an SNR of 5. The conventional ROC describes how a binary classifier's performance varies with its decision threshold and is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at different thresholds.
Since space target recognition is a multi-class problem, the micro-averaged ROC is used to evaluate the algorithms in this paper. The AUC is defined as the area under the micro-ROC curve and can be interpreted as the probability that, for a randomly selected positive sample and negative sample, the classifier assigns higher confidence to the positive sample. The AUC generally lies in the range 0.5–1, and a larger AUC indicates a more effective classifier. According to the results, the AUC of the proposed algorithm is the highest, which confirms its prominent performance.
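For reference, a minimal sketch of the micro-averaged ROC/AUC computation with scikit-learn, where y_true (integer labels) and y_score (per-class posterior scores on the test set) are hypothetical arrays.

```python
# Micro-ROC: one-hot the labels, pool all (sample, class) pairs, then
# compute a single ROC curve and its AUC over the pooled decisions.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.preprocessing import label_binarize

y_bin = label_binarize(y_true, classes=np.arange(5))  # five categories
fpr, tpr, _ = roc_curve(y_bin.ravel(), y_score.ravel())
auc_micro = roc_auc_score(y_bin, y_score, average="micro")
```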

5.6. Comparison of Different Frequencies

Since the signals sampled by the infrared detector are discrete and the infrared radiation intensity of the target is periodic, the detector frame frequency affects the sampled waveform. Figure 9 shows the recognition accuracy of space targets at different sampling frequencies when the SNR is 10. Recognition performance is lowest at 1 Hz and second-lowest at 2 Hz: because the micromotion of some targets is fast, by the Nyquist sampling theorem the sampled signal loses important local information and deviates from the true radiation intensity of the target, and the period extracted from the sampled signal also has a larger error. These factors degrade recognition accuracy at low sampling frequencies. As the sampling frequency increases, the information content of the sampled signal grows and the recognition accuracy improves gradually; at a sampling frequency of 10 Hz, the accuracy is the highest for a given noise level. Comparing the methods, at low frequencies (1 Hz and 2 Hz) the proposed method is the most accurate, followed by the Zhou method; at higher frequencies (5 Hz and 10 Hz) the proposed method remains the most accurate, followed by the Zhang method. This verifies the effectiveness of the proposed method across sampling frequencies.
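As a hedged back-of-the-envelope check (assuming one signature cycle per rotation; a faster harmonic would raise the bound), the fastest micromotion in Table 2 already shows why the 1 Hz and 2 Hz settings fail:

```python
# Nyquist check for the fastest micromotion in Table 2 (heavy decoy,
# up to 4π + 0.5π rad/s).
import numpy as np

omega_max = 4.5 * np.pi             # rad/s
f_signal = omega_max / (2 * np.pi)  # ≈ 2.25 Hz micromotion frequency
f_min = 2 * f_signal                # ≈ 4.5 Hz minimum frame frequency
# 1 Hz and 2 Hz alias the signal; 5 Hz and 10 Hz satisfy the bound.
```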

6. Conclusions

For the problem of space infrared dim target recognition, a novel intelligent method combining an ensemble classifier and improved Dempster–Shafer evidence theory for multi-feature decision-level fusion is proposed. The method innovatively combines ROCKET, LSTM, and SVM classifiers with information fusion theory, taking full advantage of the strengths of different classifiers to generate the BPAs required in the fusion decision stage. A contraction–expansion function is then defined to attenuate the contradiction between the BPAs of the features: each BPA value is scaled by comparison with a threshold determined by the type and number of targets. Next, a discount operation is performed on the values according to classifier accuracy to improve the consistency among the features, and the final discrimination of the target category is made according to the decision rules. The experimental results show that the recognition accuracy of the method is significantly better than that of any single feature, especially at low SNR (at an observation duration of 15 s, the accuracy can still reach 90%). This fully exploits the advantages of data fusion in improving recognition performance. Under different observation times, the method also identifies the target categories more accurately than other existing fusion decision methods: at an observation time of 5 s and an SNR of 5, its accuracy reaches 87%, which is higher than 71.67%, 72.22%, and 84.44%. In addition, the proposed method can be applied to space situational awareness and multi-source data fusion to provide technical support for the decision-making process.
Beyond space infrared technology, this method can also be extended to security monitoring, industrial automation, medical diagnosis, and other fields. However, applying it to other domains may raise certain challenges and limitations: variations in target characteristics and background environments may necessitate adjustments and optimizations, and practical requirements such as real-time performance, robustness, and computational efficiency must also be addressed. Enhancing the algorithm's usability and applicability will be a focal point of our future research.
In future research work, we will further optimize our algorithm using more scene datasets, fully exploit the support of the velocity feature for recognition, and continue to study the correction rules of the BPA function to generate more appropriate weight data.

Author Contributions

Conceptualization, X.C. and H.Z.; methodology, X.C. and H.Z.; software, H.Z.; validation, J.F. and H.X.; formal analysis, S.Z.; investigation, H.X.; resources, P.R.; data curation, J.F.; writing—original draft preparation, X.C.; writing—review and editing, S.Z.; visualization, H.Z.; supervision, J.A.; project administration, X.C.; funding acquisition, P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author (P.R.). The data are not publicly available due to privacy.

Acknowledgments

This research is supported by the Key Laboratory of Intelligent Infrared Perception, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, X.; Chen, Y. Application and development of multi-source information fusion in space situational awareness. Spacecr. Recovery Remote Sens. 2021, 42, 11. [Google Scholar] [CrossRef]
  2. Hanif, A.; Muaz, M.; Hasan, A.; Adeel, M. Micro-doppler based target recognition with radars: A review. IEEE Sens. J. 2022, 22, 2948–2961. [Google Scholar] [CrossRef]
  3. Zhang, X.; Wang, W.; Zheng, X.; Wei, Y. A novel radar target recognition method for open and imbalanced high-resolution range profile. Digital Signal Process. 2021, 118, 103212. [Google Scholar] [CrossRef]
  4. Li, S.; Li, C.; Yang, X.; Zhang, K.; Yin, J. Infrared dim target detection method inspired by human vision system. Optik 2020, 206, 164167. [Google Scholar] [CrossRef]
  5. Zhang, T.; Zhang, X.; Ke, X. Quad-FPN: A Novel Quad Feature Pyramid Network for SAR Ship Detection. Remote Sens. 2021, 13, 2771. [Google Scholar] [CrossRef]
  6. Li, A.; Niu, Y.; Wang, Z.; Liu, Z.; Yang, H. Inception-Det: Large aspect ratio rotating object detector for remote sensing images. Wirel. Netw. 2023. [Google Scholar] [CrossRef]
  7. Yang, X.; Yan, J.; Ming, Q.; Wang, W.; Zhang, X.; Tian, Q. Rethinking rotated object detection with gaussian wasserstein distance loss. In Proceedings of the 38th International Conference on Machine Learning (PMLR), Virtual, 18–24 July 2021; pp. 11830–11841. [Google Scholar]
  8. Zhang, H.; Rao, P.; Xia, H.; Weng, D.; Chen, X.; Li, Y. Modeling and analysis of infrared radiation dynamic characteristics for space micromotion target recognition. Infrared Phys. Technol. 2021, 116, 103795. [Google Scholar] [CrossRef]
  9. Silberman, G.L. Parametric classification techniques for theater ballistic missile defense. Johns Hopkins APL Tech. Dig. 1998, 19, 322–339. [Google Scholar]
  10. Gu, X.; Gao, K.; Zhu, Z.; Zhang, X.; Han, L. Fusion recognition based on grey relativity for multi-source infrared dim target. Laser Infrared 2018, 48, 1258–1263. [Google Scholar]
  11. Dai, H.; Zhou, Y.; Huang, S.; Yin, X. Target recognition of ballistic middle segment based on infrared multiple features. J. Command Control 2019, 5, 302–307. [Google Scholar]
  12. Zhang, G.; Yang, C. Discrimination of exo-atmospheric targets based on optimization of probabilistic neural network and IR multispectral fusion. J. Electron. Inf. Technol. 2014, 36, 896–902. [Google Scholar]
  13. Ma, Y.; Hu, M.; Lu, H.; Chang, Q. Recurrent neural networks for discrimination of exo-atmospheric targets based on infrared radiation signature. Infrared Phys. Technol. 2019, 96, 123–132. [Google Scholar] [CrossRef]
  14. Wu, D.; Lu, H.; Hu, M.; Zhao, B. Independent Random Recurrent Neural Networks for Infrared Spatial Point Targets Classification. Appl. Sci. 2019, 9, 4622. [Google Scholar] [CrossRef]
  15. Zhang, S.; Rao, P.; Zhang, H.; Chen, X.; Hu, T. Spatial Infrared Objects Discrimination based on Multi-Channel CNN with Attention Mechanism. Infrared Phys. Technol. 2023, 132, 104670. [Google Scholar] [CrossRef]
  16. Deng, Q.; Lu, H.; Xiao, S.; Wu, Y. Analysis of infrared signatures of exo-atmosphere micromotion objects based on inertial parameters. Infrared Phys. Technol. 2018, 88, 32–40. [Google Scholar] [CrossRef]
  17. Rizik, A.; Tavanti, E.; Chible, H.; Caviglia, D.; Randazzo, A. Cost-efficient FMCW radar for multi-target classification in security gate monitoring. IEEE Sens. J. 2021, 21, 20447–20461. [Google Scholar] [CrossRef]
  18. Gao, X.; Deng, Y. The generalization negation of probability distribution and its application in target recognition based on sensor fusion. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719849381. [Google Scholar] [CrossRef]
  19. Zhang, Z.; Jiang, W.; Geng, J.; Deng, X.; Li, X. Fault diagnosis based on non-negative sparse constrained deep neural networks and Dempster–Shafer theory. IEEE Access 2020, 8, 18182–18195. [Google Scholar] [CrossRef]
  20. Li, J.; Ke, L.; Du, Q.; Chen, X.; Ding, X. Multi-modal cardiac function signals classification algorithm based on improved DS evidence theory. Biomed. Signal Process. Control 2022, 71, 103078. [Google Scholar] [CrossRef]
  21. Chen, T.; Yin, X.; Yuan, X.; Gu, Y.; Ren, F.; Sun, X. Emotion recognition based on fusion of long short-term memory networks and SVMs. Digital Signal Process. 2021, 117, 103153. [Google Scholar] [CrossRef]
  22. Zhang, Z.; Wang, H.; Geng, J.; Jiang, W.; Deng, X.; Miao, W. An information fusion method based on deep learning and fuzzy discount-weighting for target intention recognition. Eng. Appl. Artif. Intell. 2022, 109, 104610. [Google Scholar] [CrossRef]
  23. Song, Y.; Fu, Q.; Wang, Y.; Wang, X. Divergence-based cross entropy and uncertainty measures of Atanassov’s intuitionistic fuzzy sets with their application in decision making. Appl. Soft Comput. 2019, 84, 105703. [Google Scholar] [CrossRef]
  24. Zhang, S.; Rao, P.; Hu, T.; Chen, X.; Xia, H. A Multi-Dimensional Feature Fusion Recognition Method for Space Infrared Dim Targets Based on Fuzzy Comprehensive with Spatio-Temporal Correlation. Remote Sens. 2024, 16, 343. [Google Scholar] [CrossRef]
  25. Zhang, P.; Li, T.; Wang, G.; Luo, C.; Chen, H.; Zhang, J.; Wang, D.; Yu, Z. Multi-source information fusion based on rough set theory: A review. Inf. Fusion 2021, 68, 85–117. [Google Scholar] [CrossRef]
  26. Kang, B.; Deng, Y.; Hewage, K.; Sadiq, R. A method of measuring uncertainty for Z-number. IEEE Trans. Fuzzy Syst. 2018, 27, 731–738. [Google Scholar] [CrossRef]
  27. Lai, H.; Liao, H. A multi-criteria decision making method based on DNMA and CRITIC with linguistic D numbers for blockchain platform evaluation. Eng. Appl. Artif. Intell. 2021, 101, 104200. [Google Scholar] [CrossRef]
  28. Yang, F.-J. An implementation of naive bayes classifier. In Proceedings of the 2018 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 12–14 December 2018; pp. 301–306. [Google Scholar]
  29. Zhou, H.; Dong, C.; Wu, R.; Xu, X.; Guo, Z. Feature Fusion Based on Bayesian Decision Theory for Radar Deception Jamming Recognition. IEEE Access 2021, 9, 16296–16304. [Google Scholar] [CrossRef]
  30. Xiao, F. A new divergence measure for belief functions in D–S evidence theory for multisensor data fusion. Inf. Sci. 2020, 514, 462–483. [Google Scholar] [CrossRef]
  31. Shafer, G. Dempster-shafer theory. Encycl. Artif. Intell. 1992, 1, 330–331. [Google Scholar]
  32. Yager, R. On the Dempster-Shafer framework and new combination rules. Inf. Sci. 1987, 41, 93–137. [Google Scholar] [CrossRef]
  33. Dubois, D.; Prade, H. Representation and combination of uncertainty with belief functions and possibility measures. Comput. Intell. 1988, 4, 244–264. [Google Scholar] [CrossRef]
  34. Murphy, C. Combining belief functions when evidence conflicts. Decis. Support Syst. 2000, 29, 1–9. [Google Scholar] [CrossRef]
  35. Deng, Y.; Shi, W.; Zhu, Z.; Liu, Q. Combining belief functions based on distance of evidence. Decis. Support Syst. 2004, 38, 489–493. [Google Scholar]
  36. Sun, Q.; Ye, X.; Guo, W. A new combination rules of evidence theory. Acta Electon. Sin. 2000, 28, 117. [Google Scholar]
  37. Liu, Z.; Pan, Q.; Dezert, J.; Han, J.; He, Y. Classifier fusion with contextual reliability evaluation. IEEE Trans. Cybern. 2017, 48, 1605–1618. [Google Scholar] [CrossRef] [PubMed]
  38. Mai, X.; Zhang, H.; Jia, X.; Meng, M. Faster R-CNN with classifier fusion for automatic detection of small fruits. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1555–1569. [Google Scholar] [CrossRef]
  39. Kaur, T.; Gandhi, T.K. Classifier fusion for detection of COVID-19 from CT scans. Circuits Syst. Signal Process. 2022, 41, 3397–3414. [Google Scholar] [CrossRef] [PubMed]
  40. Bhowal, P.; Sen, S.; Yoon, J.H.; Geem, Z.W.; Sarkar, R. Evaluation of fuzzy measures using dempster-shafer belief structure: A classifier fusion framework. IEEE Trans. Fuzzy Syst. 2023, 31, 1593–1603. [Google Scholar] [CrossRef]
  41. Liu, J. Research on Features Extraction and Recognition Based on Infrared Signatures of Space Targets; National University of Defense Technology: Changsha, China, 2017. [Google Scholar]
  42. Zhang, H. Tracking Techniques for Midcourse Target Complex via Space-Based Infrared Sensors; National University of Defense Technology: Changsha, China, 2014. [Google Scholar]
  43. Dempster, A.; Petitjean, F.; Webb, G. ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. Data Min. Knowl. Discovery 2020, 34, 1454–1495. [Google Scholar] [CrossRef]
  44. Nguyen, T.; Gsponer, S.; Ilie, L.; O’Reilly, M.; Ifrim, G. Interpretable time series classification using linear models and multi-resolution multi-domain symbolic representations. Data Min. Knowl. Discov. 2019, 33, 1183–1222. [Google Scholar] [CrossRef]
  45. Shifaz, A.; Pelletier, C.; Petitjean, F.; Webb, G. TS-CHIEF: A scalable and accurate forest algorithm for time series classification. Data Min. Knowl. Discovery 2020, 34, 742–775. [Google Scholar] [CrossRef]
  46. Fawaz, H.; Lucas, B.; Forestier, G.; Pelletier, C.; Schmidt, D.; Weber, J.; Webb, G.; Idoumghar, L.; Muller, P.; Petitjean, F. Inceptiontime: Finding alexnet for time series classification. Data Min. Knowl. Discov. 2020, 34, 1936–1962. [Google Scholar] [CrossRef]
  47. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270. [Google Scholar] [CrossRef]
  48. Songsiri, P.; Cherkassky, V.; Kijsirikul, B. Universum selection for boosting the performance of multiclass support vector machines based on one-versus-one strategy. Knowl.-Based Syst. 2018, 159, 9–19. [Google Scholar] [CrossRef]
  49. Mohd Amidon, A.F.; Mahabob, N.Z.; Ismail, N.; Mohd Yusoff, Z.; Taib, M. Agarwood oil quality classification using one versus all strategies in multiclass on SVM model. In Proceedings of the International Jasin Multimedia & Computer Science Invention and Innovation Exhibition (i-JaMCSIIX 2021), Virtual, 15 February–31 March 2021; pp. 84–86. [Google Scholar]
  50. Al-Shargie, F.; Tang, T.B.; Badruddin, N.; Kiguchi, M. Towards multilevel mental stress assessment using SVM with ECOC: An EEG approach. Med. Biol. Eng. Comput. 2018, 56, 125–136. [Google Scholar] [CrossRef]
  51. Kumar, M.A.; Gopal, M. A hybrid SVM based decision tree. Pattern Recognit. 2010, 43, 3977–3987. [Google Scholar] [CrossRef]
  52. Platt, J. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Adv. Large Margin Classif. 1999, 10, 61–74. [Google Scholar]
  53. Zhang, H.; Rao, P.; Chen, X.; Xia, H.; Zhang, S. Denoising and Feature Extraction for Space Infrared Dim Target Recognition Utilizing Optimal VMD and Dual-Band Thermometry. Machines 2022, 10, 168. [Google Scholar] [CrossRef]
  54. Cha, L. Parameter Estimation of Space Precession Target Based on HRRPs. Radar Sci. Technol. 2020, 18, 591–598. [Google Scholar]
  55. Gao, Y.; Cui, K. Method of Space Target Optical Detection and Position Based on Microsatellite. J. Nav. Aviat. Univ. 2016, 31, 365–371. [Google Scholar]
  56. Gao, X.; Pan, L.; Deng, Y. Cross entropy of mass function and its application in similarity measure. Appl. Intell. 2022, 52, 8337–8350. [Google Scholar] [CrossRef]
Figure 1. The motion process of the target with coning and spinning.
Figure 2. Facets heat transfer under micromotion.
Figure 3. The architecture of the LSTM cell with gates.
Figure 4. The flowchart of the proposed algorithm.
Figure 5. The normalized radiation intensity signal (a) under ideal conditions; (b) under noise interference.
Figure 6. Five target images for six different flight scenarios. (Top): Scenes 1–3. (Bottom): Scenes 4–6.
Figure 7. The performance of target recognition of six methods [24,29,31,34,56] with different SNRs and observation times: (a) L = 5 s; (b) L = 10 s; (c) L = 15 s.
Figure 8. The micro-ROC curves and AUCs of the six methods [24,29,31,34,56].
Figure 9. The recognition accuracy of six methods [24,29,31,34,56] with different sampling frequencies.
Table 1. The scenario environment, the flight time, the starting position, and the landing position of the targets.

Scene 1. Environment: sunlight. Flight time: 21 March 2021, 03:59:55–04:12:00 UTCG. Starting position: 26.34°N, 127.8°E. Landing position: 40.001°N, 116.314°E.
Scene 2. Environment: shadow. Flight time: 21 March 2021, 13:00:00–13:12:00 UTCG. Starting position: 26.34°N, 127.8°E. Landing position: 40.001°N, 116.314°E.
Scene 3. Environment: sunlight for the first half of the flight, shadow for the rest. Flight time: 21 March 2021, 20:10:00–20:22:00 UTCG. Starting position: 26.34°N, 127.8°E. Landing position: 40.001°N, 116.314°E.
Scene 4. Environment: sunlight. Flight time: 21 March 2021, 00:00:00–00:30:36 UTCG. Starting position: 47.6062°N, 122.332°W. Landing position: 39.913°N, 116.302°E.
Scene 5. Environment: shadow. Flight time: 21 March 2021, 11:30:00–12:00:36 UTCG. Starting position: 47.6062°N, 122.332°W. Landing position: 39.913°N, 116.302°E.
Scene 6. Environment: sunlight for the first half of the flight, shadow for the rest. Flight time: 21 March 2021, 15:30:00–16:00:36 UTCG. Starting position: 47.6062°N, 122.332°W. Landing position: 39.913°N, 116.302°E.
Table 2. The properties of each target.

Master cabin: shape, cone–cylinder; size, r = 3 ± 0.5 m, h1 = 3 ± 0.5 m, h2 = 3 ± 0.5 m; material, white TiO2 paint; absorption of solar radiation, 0.19; emissivity, 0.94; density, 4260 kg/m³; specific heat capacity, 811 J/(kg·K); micromotion mode, spinning and coning; micromotion velocity, 0.5π ± 0.2π rad/s; initial temperature, 400 K.
Real target: shape, flat-base cone; size, r = 1 ± 0.5 m, h = 1 ± 0.5 m; material, grey TiO2 paint; absorption of solar radiation, 0.87; emissivity, 0.87; density, 4260 kg/m³; specific heat capacity, 811 J/(kg·K); micromotion mode, spinning and coning; micromotion velocity, 2π ± 0.2π rad/s; initial temperature, 300 K.
Light decoy 1: shape, cylinder; size, r = 1.5 ± 0.5 m, h = 1.5 ± 0.5 m; material, white epoxy paint; absorption of solar radiation, 0.248; emissivity, 0.924; density, 980 kg/m³; specific heat capacity, 550 J/(kg·K); micromotion mode, tumbling; micromotion velocity, 0.5π ± 0.2π rad/s; initial temperature, 300 K.
Light decoy 2: shape, ball-base cone; size, r = 1 ± 0.5 m, h = 1 ± 0.5 m; material, black paint; absorption of solar radiation, 0.975; emissivity, 0.874; density, 1300 kg/m³; specific heat capacity, 910 J/(kg·K); micromotion mode, spinning and coning; micromotion velocity, 1π ± 0.2π rad/s; initial temperature, 300 K.
Heavy decoy: shape, cone–cylinder; size, r = 0.5 ± 0.1 m, h1 = 0.5 ± 0.1 m, h2 = 0.5 ± 0.1 m; material, aluminum; absorption of solar radiation, 0.192; emissivity, 0.036; density, 2710 kg/m³; specific heat capacity, 880 J/(kg·K); micromotion mode, spinning and coning; micromotion velocity, 4π ± 0.5π rad/s; initial temperature, 300 K.
Table 3. The recognition accuracies of the proposed algorithm under fusing different features.

Feature | SNR = 5 | SNR = 10 | SNR = 15 | SNR = 20 | SNR = 25 | SNR = 30
Long-wave radiation intensity | 67.32% | 85.14% | 89.61% | 92.26% | 95.67% | 93.89%
Medium–long-wave radiation intensity | 78.89% | 88.33% | 93.88% | 95.56% | 96.67% | 97.22%
Temperature | 62.22% | 73.33% | 70.00% | 70.00% | 71.11% | 70.56%
Emissivity–area product | 66.11% | 70.56% | 66.67% | 70.00% | 66.67% | 66.11%
Period | 54.11% | 61.11% | 63.33% | 62.78% | 65.00% | 65.56%
Velocity | 30.00% | 30.00% | 30.00% | 36.67% | 40.00% | 33.33%
Four features | 83.89% | 92.22% | 93.33% | 94.11% | 93.33% | 95.00%
All features | 93.33% | 97.22% | 97.22% | 98.33% | 98.89% | 97.78%
Table 4. The initial BPA values obtained when SNR is 5.

Feature | {Master Cabin} | {Real Target} | {Light Decoy 1} | {Light Decoy 2} | {Heavy Decoy}
Long-wave radiation intensity | 0.1515 | 0.3198 | 0.1902 | 0.1657 | 0.1728
Medium–long-wave radiation intensity | 0.1522 | 0.3502 | 0.1770 | 0.1670 | 0.1536
Temperature | 2.6642 × 10⁻⁵ | 0.0273 | 0.0002 | 0.0136 | 0.9588
Emissivity–area product | 0.0015 | 0.1768 | 0.0023 | 0.0796 | 0.7399
Period | 0.0035 | 0.9801 | 0.0033 | 0.0026 | 0.0105
Velocity | 0.0090 | 0.2558 | 0.2733 | 0.2491 | 0.2128
Table 5. The fusion results of different methods.

Method | {Master Cabin} | {Real Target} | {Light Decoy 1} | {Light Decoy 2} | {Heavy Decoy} | {Θ} | Result
Dempster | 0.0011 | 0.2435 | 0.0129 | 0.0318 | 0.6086 | 0.1021 | Heavy decoy
Murphy | 0.0241 | 0.3681 | 0.0426 | 0.0473 | 0.4842 | 0.0337 | Heavy decoy
Gao | 0.0400 | 0.3581 | 0.1158 | 0.1108 | 0.2915 | 0.0838 | Real target
Zhang | 0.0350 | 0.6195 | 0.1173 | 0.1858 | 0.0349 | 0.0075 | Real target
Zhou | 0.0001 | 0.5407 | 0.0020 | 0.1152 | 0.3420 | 0 | Real target
Proposed | 0.0096 | 0.6562 | 0.0153 | 0.0142 | 0.2773 | 0.0273 | Real target
