Article

Information Entropy-Based Intention Prediction of Aerial Targets under Uncertain and Incomplete Information

Tongle Zhou, Mou Chen, Yuhui Wang, Jianliang He and Chenguang Yang
1 College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 Science and Technology on Electro-Optic Control Laboratory, Luoyang 471000, China
3 Bristol Robotics Laboratory, University of the West of England, Bristol BS16 1QY, UK
* Author to whom correspondence should be addressed.
Entropy 2020, 22(3), 279; https://doi.org/10.3390/e22030279
Submission received: 5 February 2020 / Revised: 24 February 2020 / Accepted: 26 February 2020 / Published: 28 February 2020

Abstract:
To improve the effectiveness of air combat decision-making systems, target intention has been extensively studied. In general, the intention of an aerial target comprises attack, surveillance, penetration, feint, defense, reconnaissance, cover and electronic interference, and it is related to the state of the target in air combat. Predicting the target intention makes it possible to anticipate the target's actions, and thus lays a solid foundation for air combat decision-making. In this work, an intention prediction method is developed that combines the advantages of long short-term memory (LSTM) networks and decision trees. The future state information of a target is predicted with LSTM networks from real-time series data, and decision tree technology is utilized to extract rules from uncertain and incomplete priori knowledge. Then, the target intention is obtained by applying the built decision tree to the predicted data. A simulation example shows that the proposed method is effective and feasible for state prediction and intention recognition of aerial targets under uncertain and incomplete information. Furthermore, the proposed method can provide direction and support for subsequent attack decision-making.

1. Introduction

In modern air combat, the vigorous development of aviation science and military technology leads to increasingly severe threats from aerial targets. Meanwhile, due to the application of advanced technologies to the Unmanned Combat Air Vehicle (UCAV), such as space early warning systems, radar stealthy composites and artificial intelligence, the complexity of the battlefield environment, including its uncertainty and incompleteness, is increasing [1]. Therefore, predicting the target state and recognizing the target intention in advance helps air combat autonomous attack and defense decision-making systems make adequate preparations. Furthermore, state prediction and intention recognition of aerial targets also contribute greatly to increasing the operational efficiency of weapon systems and saving air combat resources. Hence, state prediction and intention recognition of aerial targets are important components of air combat decision support systems, and they will play vital roles in future command and control systems [2].
In recent years, much decision support research has been carried out in the military field to satisfy the requirements of combat decision-making systems. In [3], a rough set theory-based multi-criteria decision-making (MCDM) model was proposed, which demonstrated the exceptional importance of software support to decision-making in security forces operations. In [4], a hybrid MCDM model for determining and evaluating the criteria for selecting an aircraft was presented for the protection of air traffic. A decision support system is an enabling technology leading to numerous disruptive changes in the military field, and using decision support systems reasonably and effectively can greatly enhance the competitiveness of combat decision-making systems. As important parts of decision support systems, state prediction and intention recognition technologies can make use of detected information to reflect the actual situation and lay a foundation for decision-making in air combat. Some related works can be found in the literature. In [5], a novel method based on support vector machines and Bayesian filtering was studied for online lane change intention prediction in road vehicle driving. To predict air combat data effectively and accurately, a target state prediction method based on the autoregressive integrated moving average (ARIMA) model was introduced in [6] for aerial targets. An intention prediction method for aerial targets based on an improved grey incidence analysis was studied in [7]. An algorithm for assessing the target maneuvering intention in beyond-visual-range air combat was proposed in [8], in which the target maneuvering intention was divided into nine categories, the characteristic parameters of the target were extracted from measured and predicted real-time data, and the threat level and maneuvering intention were then estimated. To address the difficulty of quantifying the mapping between attribute features and combat intentions when domain expert knowledge is insufficient, a combat intention recognition method based on deep neural networks was proposed in [9]. In [10], a self-learning method based on decision trees was studied to solve the naval vessel intention recognition problem. Although state prediction and intention recognition of aerial targets have been studied in recent years, uncertainty and incompleteness, especially when both exist simultaneously in the air combat environment, are seldom discussed in existing results.
The long short-term memory (LSTM) network is an improvement over the general recurrent neural network (RNN) [11]. Unlike traditional RNNs, LSTM networks are suitable for learning from experience to classify, process and predict time series when there are very long time lags of unknown size between important events [12,13,14,15,16]. Based on the recent success of LSTM networks in time series domains, a deep framework based on convolutional and LSTM recurrent units was proposed in [17] for activity recognition. To predict the time series of traffic and user mobility in telecommunication networks, a random connectivity LSTM model was put forward in [18]. In air combat, the state data of a target is also a kind of time series. Thus, LSTM networks are employed in this paper to solve the aerial target state prediction problem. After state prediction, a decision tree is used to extract rules from the uncertain and incomplete historical data, and the intention of the aerial target is then obtained from the predicted state data and the decision tree classification rules. As a decision support tool, a decision tree employs a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs and utility [19]. Decision trees are widely applied in operations research, particularly in decision-making analysis, to help determine the strategy most likely to achieve a goal, and they are also useful tools in machine learning [20,21,22].
In this paper, to overcome the difficulties of aerial target intention prediction under uncertainty and incompleteness, the state prediction approach and the intention recognition method are designed jointly. Firstly, the target state data is predicted with LSTM networks from the real-time series data collected by multiple sensors. Then, an improved decision tree is used to extract rules from the historical data and to handle the information system with uncertainty and incompleteness. Finally, the intention is recognized by inputting the predicted state data into the built decision tree.
The rest of the paper is organized as follows. In Section 2, the air combat situation is illustrated and the definitions of air combat uncertainty and incompleteness are elaborated. In Section 3, the state prediction method based on LSTM networks is presented. In Section 4, the intention recognition decision tree is generated from the uncertain and incomplete historical data based on information entropy. The simulation results are shown in Section 5, and the conclusions are summarized in Section 6.

2. Problem Statement

In the modern air combat environment with uncertainty and incompleteness, air combat data consists of real-time data and priori knowledge [23]. The real-time data is obtained through various kinds of sensors during the air combat process, while the priori knowledge comprises the historical information and rules from past air combat. The aerial target state can be predicted according to the air combat situation and the motion properties of the target, and the future state data can be obtained by analyzing the real-time data. Furthermore, the intention recognition rules are acquired from the priori knowledge, so that the intention can be recognized.
In this paper, the intention of a target is divided into attack, surveillance, penetration, feint, defense, reconnaissance, cover and electronic interference. Because of the regularity of aerial target intention, different target states reflect different intentions. For example, a high-speed target whose line of sight points towards the UCAV is more aggressive and has a larger probability of attack intention.
The air combat situation diagram between the target and the UCAV is shown in Figure 1 [24].
In Figure 1, the line between the target and the UCAV is the target line of sight. $A$ is the angle between the target line of sight and due north, which is called the azimuth of the target. $D$ is the distance between the target and the UCAV. $V$ is the velocity of the target, and $H_a$ is the heading angle of the target, i.e., the angle between the target velocity and the target line of sight. For convenience, $H$ denotes the height difference in this paper. Obviously, the air combat situation factors are numerical data.
Apart from the above state factors, the intention of a target is also related to its operational task supplementary information, such as the air-to-air radar status, marine radar status, disturbing state and disturbed state. These operational task factors are nonnumerical data lying on a nominal scale, which can be expressed as 0 and 1. For instance, the ranges of the air-to-air radar status and the marine radar status are $\{0, 1\}$, where 0 represents that the radar is off and 1 represents that the radar is on.
Thus, all major factors of the air combat situation and the operational task supplementary information shall be taken into consideration, from which other factors can be derived. The advantage is that the state of the UCAV itself is not required. The tree chart of the target intention characteristic description in air combat is shown in Figure 2.
The objective of this paper consists of two parts. The first part is to design a target state prediction algorithm based on real-time data. The second part is to extract rules from the priori knowledge, and identify the intention in accordance with the predicted state data.
The state prediction and intention recognition system diagram is shown in Figure 3.
It should be noted that the priori knowledge may be uncertain and incomplete due to the complexity of air combat and the confusability of the target. The uncertainty and incompleteness are defined as follows:
  • Uncertainty: The exact value of an aerial target state is hard to obtain because of the limitations of sensors and the rapidity of air combat. In such cases, only a specific range can be detected by the multi-sensors. Hence, some state information is expressed as interval-valued numbers in this paper to describe the uncertainty of air combat.
  • Incompleteness: In the air combat process, some information of a target may not be detected due to the application of innovative military technology. In addition, missing values may occur in the historical data. Therefore, the priori knowledge is incomplete. A simple sketch of how such a record can be represented is given after this list.
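To make these two notions concrete, the following minimal Python sketch (an illustration added here, not part of the original method) represents one hypothetical row of priori knowledge with interval-valued numeric attributes and None standing for the missing symbol "*":

```python
# Minimal sketch of one priori-knowledge record: numeric attributes are
# intervals [low, high] to express uncertainty, None plays the role of the
# missing symbol "*", and operational-task factors are 0/1 flags.
from dataclasses import dataclass
from typing import Optional, Tuple

Interval = Tuple[float, float]

@dataclass
class PrioriRecord:
    azimuth: Optional[Interval]        # mil
    distance: Optional[Interval]       # km
    velocity: Optional[Interval]       # m/s
    heading_angle: Optional[Interval]  # degrees
    height: Optional[Interval]         # km
    air_radar: Optional[int]           # 0 = off, 1 = on
    marine_radar: Optional[int]
    disturbing: Optional[int]
    disturbed: Optional[int]
    intention: str                     # decision attribute, never missing

# Hypothetical record: azimuth and disturbed state are unknown ("*").
record = PrioriRecord(azimuth=None, distance=(100.0, 110.0),
                      velocity=(300.0, 320.0), heading_angle=(10.0, 30.0),
                      height=(4.0, 5.0), air_radar=1, marine_radar=0,
                      disturbing=1, disturbed=None, intention="A")
```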

3. State Prediction based on LSTM Networks

The LSTM network is a recurrent neural network (RNN) architecture introduced by Hochreiter and Schmidhuber in 1997. Compared with traditional RNNs, LSTM networks contain four interacting parts, which are called the cell state, forget gate, input gate and output gate [24]. According to [25], the structure of LSTM networks is shown in Figure 4.
In Figure 4, $C_t$ and $C_{t-1}$ are the cell states, $h_t$ and $h_{t-1}$ are the hidden layer states, and $x_t$ is the input.
Obviously, the key of LSTM networks is the self-connected memory cell state, the horizontal line running through the top of the LSTM structure in Figure 4. LSTM networks can add information to or delete information from the cell state through structures called gates. Under the control of the gates, information passes through the cell state selectively. In LSTM networks, each gate consists of a sigmoid neural net layer $\sigma_g(x) = \frac{1}{1 + e^{-x}}$ and a pointwise multiplication operation [25].
Firstly, the forget gate layer decides which information is deleted from the cell state. From Figure 4, the output of the forget gate can be expressed as [25]
$f_t = \sigma_g(W_f \cdot [h_{t-1}, x_t] + b_f)$ (1)
where $W_f$ is the input weight matrix and $b_f$ is the bias weight matrix of the forget gate layer, and $[h_{t-1}, x_t]$ denotes the vector formed by concatenating the previous hidden state and the current input.
If $f_t = 1$, the previous information is retained completely, while $f_t = 0$ means that it is completely discarded. Namely, a greater $f_t$ means that more information is retained.
The next step of LSTM networks is to determine what new information will be stored in the cell state. This step has two parts. Firstly, the input gate layer decides which values will be updated; its output $i_t$ can be expressed as [25]
$i_t = \sigma_g(W_i \cdot [h_{t-1}, x_t] + b_i)$ (2)
where $W_i$ is the input weight matrix and $b_i$ is the bias weight matrix of the input gate layer.
Next, a $\tanh$ layer generates a vector of new candidate values $\tilde{C}_t$ [25]:
$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$ (3)
where $W_C$ is the input weight matrix and $b_C$ is the bias weight matrix of the cell state.
Then, these two parts are combined to update the cell state and obtain $C_t$ [26]:
$C_t = f_t \cdot C_{t-1} + i_t \cdot \tilde{C}_t$ (4)
Finally, the output $o_t$ of the LSTM networks should be determined. The output gate layer decides which parts of the cell state are output. It can be expressed as [25]
$o_t = \sigma_g(W_o \cdot [h_{t-1}, x_t] + b_o)$ (5)
where $W_o$ is the input weight matrix and $b_o$ is the bias weight matrix of the output gate layer.
Then, the LSTM networks put the cell state through the $\tanh$ function and multiply it by the output of the sigmoid layer, so that only the selected parts are output [26]:
$h_t = o_t \cdot \tanh(C_t)$ (6)
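For illustration only, one forward step of the LSTM cell described by Equations (1)–(6) can be sketched in Python with NumPy; the weight shapes, random initialization and toy dimensions below are assumptions of this sketch, not values taken from the paper.

```python
import numpy as np

def sigmoid(x):
    # Gate activation sigma_g(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following Equations (1)-(6); W and b hold the
    weights and biases of the forget, input, candidate and output parts."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate, Eq. (1)
    i_t = sigmoid(W["i"] @ z + b["i"])       # input gate, Eq. (2)
    c_tilde = np.tanh(W["c"] @ z + b["c"])   # candidate cell state, Eq. (3)
    c_t = f_t * c_prev + i_t * c_tilde       # updated cell state, Eq. (4)
    o_t = sigmoid(W["o"] @ z + b["o"])       # output gate, Eq. (5)
    h_t = o_t * np.tanh(c_t)                 # new hidden state, Eq. (6)
    return h_t, c_t

# Toy dimensions: 1 input feature and 12 hidden units (the cell count used in Section 5).
rng = np.random.default_rng(0)
n_in, n_hid = 1, 12
W = {k: 0.1 * rng.standard_normal((n_hid, n_hid + n_in)) for k in "fico"}
b = {k: np.zeros(n_hid) for k in "fico"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(np.array([0.5]), h, c, W, b)
print(h.shape)  # (12,)
```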
In general, the real-time numeric data of a target is essentially a time series, which can be expressed as $f(1), f(2), \ldots, f(t)$. The function of the LSTM networks is to predict $f(t+1)$ based on $f(1), f(2), \ldots, f(t)$.
To address the lack of training data in real-time air combat, suppose that there are $N$ time-lagged observations $f(1), f(2), \ldots, f(N)$ in the training set and that a one-step-ahead prediction is required. In order to mitigate the limited training sample problem and improve the prediction accuracy, a network with $p$ input nodes and one output node is used in this section. Hence, we have $N - p$ training patterns.
Assume the size of the time window is $p$. The first training pattern is composed of $f(1), f(2), \ldots, f(p)$ as the inputs and $f(p+1)$ as the target output. The second training pattern is composed of $f(2), f(3), \ldots, f(p+1)$ as the inputs and $f(p+2)$ as the target output. By analogy, the last training pattern takes $f(N-p), f(N-p+1), \ldots, f(N-1)$ as the inputs and $f(N)$ as the target output. The additional benefit is that the real-time data is fully used to train the LSTM networks. So far, the aerial target state prediction training model is established. Finally, $f(N-p+1), f(N-p+2), \ldots, f(N)$ are chosen as the input pattern, and the output $f(N+1)$ is the predicted state data. A sketch of this sliding-window construction is given below.
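The sliding-window construction described above can be sketched with a small helper function; the function name and the use of the Table 1 distance column are choices made for this example, not code from the paper.

```python
import numpy as np

def make_windows(series, p):
    """Build the N - p training patterns: inputs f(k), ..., f(k+p-1), target f(k+p)."""
    X, y = [], []
    for k in range(len(series) - p):
        X.append(series[k:k + p])
        y.append(series[k + p])
    return np.asarray(X), np.asarray(y)

# Example with the distance column of Table 1 (km) and a window size p = 4.
distance = [310.0, 297.0, 291.0, 280.0, 267.0, 251.0, 235.0, 214.0,
            199.0, 178.0, 154.0, 138.0, 120.0, 97.0, 75.0]
X, y = make_windows(distance, p=4)
print(X.shape, y.shape)  # (11, 4) (11,): N - p = 15 - 4 = 11 training patterns
```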
For the nonnumeric factors, the observations are expressed as $f_n(1), f_n(2), \ldots, f_n(t)$. Thanks to the high-performance early warning radar, we assume that the operational task supplementary information of the target remains the same as at the last time step. Namely, we have
$f_n(N+1) = f_n(N)$ (7)
where $f_n(t)$ $(t = 1, 2, \ldots, N)$ are the nonnumeric observations of the target operational task supplementary information.
The structure of aerial target state prediction is shown in Figure 5.
As mentioned above, the state factors and the operational task supplementary information are chosen as the intention features in this paper. Hence, for each feature, the predicted value can be obtained by the prediction model, and the next step is intention recognition.

4. Target Intention Recognition Based on Decision Tree and Information Entropy

After the state has been predicted, intention recognition is considered according to the predicted state information. Hence, intention recognition is also an important part of an air combat decision-making system of UCAV.
Because of the incompleteness and uncertainty of air combat, it is hard to extract rules from the historical data. In this paper, incompleteness refers to missing data (null values) that may exist in the historical data, and uncertainty is expressed by interval numbers. The purpose is to build a decision tree from an incomplete and interval-valued historical information decision table. The input of this part is the predicted value produced by the target state prediction model, and the output is the target intention.
A decision tree is a flowchart-like structure, in which each internal node represents a test of an attribute, each branch represents the output of the test, and each leaf node represents a class label [27]. The paths from root to leaf represent classification rules. Decision trees have been widely used in classification, information retrieval and dimensionality reduction, and there are broad prospects for development. They can be trained in either supervised or unsupervised ways, depending on the task.
The two major problems in building a decision tree are the selection of the node splitting order and the choice of the best split criterion for each node. In this paper, a decision support degree is applied for node splitting order selection, and the split criterion is determined by the information entropy of partitioning.
The structure of decision tree generation for aerial target intention recognition is shown in Figure 6.
As is well known, the priori knowledge of air combat $S = (U, A \cup D)$ is a kind of incomplete information system, where $U$ is a finite nonempty set of statistical objects of the historical data, $A$ is a finite nonempty set of condition attributes, namely the threat factors, and $D$ is a finite nonempty set of decision attributes, namely the intentions of the aerial targets in the historical data. For any $a_i \in A$, the value $a_i = *$ is allowed, where the special symbol "*" denotes that the value of an attribute is unknown. On the other hand, for any $d_i \in D$, $d_i \neq *$.
To handle the incomplete information, the following existing definitions are needed.
Definition 1.
Let $S = (U, A \cup D)$ be an interval-valued attribute-based incomplete system, where $A$ is the set of condition attributes and $D$ is the set of decision attributes with $* \notin D$. A similarity relation $SIM(R)$ $(R \subseteq A)$ on $U$ is defined as follows [28]:
$SIM(R) = \{(u, v) \in U \times U \mid \forall a \in R,\; f(u, a) = f(v, a) \text{ or } f(u, a) = * \text{ or } f(v, a) = *\}$ (8)
where $f(u, a)$ denotes the value of attribute $a$ for object $u$.
According to the definition of $SIM(R)$, if $(u, v) \in U \times U$ belongs to $SIM(R)$, then $u$ and $v$ are perceived as similar; namely, they may have the same properties with respect to $R$ in reality.
In the similarity relation $SIM(R)$, for $u \in U$, let $S_P(u) = \{v \in U \mid (u, v) \in SIM(R)\}$, where $S_P(u)$ is called the consistent block of $u$. In other words, $S_P(u)$ is the maximal set of objects indistinguishable from $u$ [24].
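A minimal sketch of the similarity relation of Equation (8) and the resulting consistent block is given below; the helper names are assumptions of this example, and None stands for the missing symbol "*".

```python
def similar(u, v, attrs):
    """(u, v) are similar w.r.t. attrs if every attribute value matches or is missing (None)."""
    return all(u[a] is None or v[a] is None or u[a] == v[a] for a in attrs)

def consistent_block(obj, universe, attrs):
    """S_P(u): all objects of the universe indistinguishable from obj under SIM(R)."""
    return [v for v in universe if similar(obj, v, attrs)]

# Tiny example with a single condition attribute.
U = [{"id": 1, "velocity_class": "Fast"},
     {"id": 2, "velocity_class": None},   # unknown value "*"
     {"id": 3, "velocity_class": "Slow"}]
print([v["id"] for v in consistent_block(U[0], U, ["velocity_class"])])  # [1, 2]
```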
Actually, the process of intention recognition is to extract rules from the historical knowledge and to identify the intention based on the predicted values. On this basis, building an incomplete decision tree is a classification problem. In the literature, guessing techniques are often used to build decision trees for incomplete systems [29]. In this paper, the decision support degree of a condition attribute with respect to the decision attribute is defined as follows:
Definition 2.
Let $S = (U, A \cup D)$ be an interval-valued attribute-based incomplete system, where $A$ is the set of condition attributes and $D$ is the set of decision attributes with $* \notin D$. For $R \subseteq A$, let $U/R = \{R_1, R_2, \ldots, R_m\}$, $U/D = \{D_1, D_2, \ldots, D_n\}$ and $|U/R| = \sum_{i=1}^{m} |R_i|$. The decision support degree $DSD(R, D)$ of condition attribute $R$ with respect to decision attribute $D$ is defined as [30]
$DSD(R, D) = 1 - \dfrac{\sum_{i=1}^{m} \sum_{j=1}^{n} |R_i \cap D_j| \times \left(|R_i \cap D_j| - 1\right)}{\dfrac{|U/R| \times (|U| - 1)}{\sum_{l=1}^{n} \left(|D_l| \times (|D_l| - 1)\right)}}$ (9)
The decision support degree indicates the support level of condition attribute $R$ to the decision partition. A larger value of the decision support degree implies a better classification effect based on condition attribute $R$. Thus, the attribute with the greater decision support degree should be split preferentially.
For the air combat incomplete interval-valued information system, the set of condition attributes is
$A_{ac} = \{A, D, V, H_a, H, A_{rs}, M_{rs}, D_s, D_{ds}\}$ (10)
where $A$, $D$, $V$, $H_a$, $H$, $A_{rs}$, $M_{rs}$, $D_s$ and $D_{ds}$ are the azimuth, distance, velocity, heading angle, height, air-to-air radar status, marine radar status, disturbing state and disturbed state of the target. In $A_{ac}$, each $a_i \in A_{ac}$ satisfies either $a_i = *$ or $a_i = [a_i^L, a_i^U]$ with $a_i^L, a_i^U \in \mathbb{R}$ and $a_i^L \leq a_i^U$, where $a_i^L$ and $a_i^U$ are the endpoints of the interval number $a_i$.
If the set of decision attributes is the intention set, we have
$D_{ac} = \{A, S, P, F, D, R, C, E\}$ (11)
where $A$, $S$, $P$, $F$, $D$, $R$, $C$ and $E$ express attack, surveillance, penetration, feint, defense, reconnaissance, cover and electronic interference, respectively.
For the numeric data in $A_{ac}$, we define
$V_A = \{East, South, West, North\}$, $V_D = \{Short, Medium, Long\}$, $V_V = \{Slow, Medium, Fast\}$, $V_{H_a} = \{Small, Medium, Large\}$, $V_H = \{Low, Medium, High\}$ (12)
where $V_A$, $V_D$, $V_V$, $V_{H_a}$ and $V_H$ are the ranges of azimuth, distance, velocity, heading angle and height.
For the nonnumeric data in $A_{ac}$, we define
$V_{A_{rs}} = \{0, 1\}$, $V_{M_{rs}} = \{0, 1\}$, $V_{D_s} = \{0, 1\}$, $V_{D_{ds}} = \{0, 1\}$ (13)
In (13), $V_{A_{rs}}$ and $V_{M_{rs}}$ are the ranges of the air-to-air radar status and the marine radar status, where 0 represents that the radar is off and 1 represents that the radar is on. $V_{D_s}$ is the range of the disturbing state, where 0 means that the target is not jamming the UCAV and 1 means that the target is jamming the UCAV. $V_{D_{ds}}$ is the range of the disturbed state, where 0 means that the target is not being jammed by the UCAV and 1 means that the target is being jammed by the UCAV.
Therefore, the first step is to determine the condition attribute class of each interval number $a_i = [a_i^L, a_i^U] \in A_{ac}$. Fuzzy inference is used to solve this problem. As an online decision support tool, fuzzy inference theory has been applied to classification tasks, process simulation, diagnosis and process control [31].
Take $\bar{a}_i$ as the representative point of the interval $[a_i^L, a_i^U]$, which is given by
$\bar{a}_i = \dfrac{\int_{a_i^L}^{a_i^U} y\, \mu_i(y)\, dy}{\int_{a_i^L}^{a_i^U} \mu_i(y)\, dy}$ (14)
where $\mu_i(y)$ is the membership function of attribute $i$.
Then, the condition attribute class of $a_i$ is obtained from the membership function and $\bar{a}_i$; the membership functions of azimuth, distance, velocity, heading angle and height are designed as shown in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. A sketch of this representative-point computation is given below.
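As a sketch of Equation (14), the representative point of an interval can be approximated by discretizing the integrals; the ramp-shaped membership function below is a placeholder assumption, since the exact curves of Figures 7–11 are not given numerically in the text.

```python
import numpy as np

def representative_point(low, high, mu, n=1001):
    """Discretized version of Equation (14): the centroid of mu over [low, high]."""
    y = np.linspace(low, high, n)
    w = mu(y)
    return float((y * w).sum() / w.sum())

# Placeholder membership function for the "Fast" velocity class (ramp from 200 to 300 m/s);
# the actual membership curves are those of Figure 9.
def mu_fast(y, a=200.0, b=300.0):
    return np.clip((y - a) / (b - a), 0.0, 1.0)

# Representative point of the interval-valued velocity [280, 300] m/s.
print(round(representative_point(280.0, 300.0, mu_fast), 2))
```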
In this way, the air combat incomplete interval-valued information system is converted to a traditional information system and the node splitting order can be selected by the decision support degree.
The next step is to determine the best split criterion (i.e., to select the cutpoint) of each node, and this part requires considering the air combat incomplete interval-valued information system once again.
Definition 3.
[30]: Cutpoint. For an interval-valued condition attribute $a = [a^L, a^U]$ (a finite interval), the cutpoint is a threshold $C$ $(a^L \leq C \leq a^U)$ which splits the interval-valued condition attribute $a$ into two branches $a_1 = [a^L, C]$ and $a_2 = [C, a^U]$.
Suppose that condition attribute $B$ of the air combat state is chosen to be split. Its nonempty elements are $b_i = [b_i^L, b_i^U] \in A_{ac}$ $(i = 1, 2, \ldots, M)$, where $M$ is the number of attribute samples. A sequence of $2M$ points can be obtained by sorting the endpoints of the $b_i$ in ascending order. After deleting the repeated endpoints, the midpoints of each two neighboring points of the sequence are defined as the alternative cutpoints. The objective is to select the optimal cutpoint from the alternative cutpoints, and it is determined by the information entropy of partitioning.
Assume that the decision attribute set of the selected condition attribute $B$ is $D_B = \{D_1, D_2, \ldots, D_k\}$, where $k$ is the number of decision classes. The information entropy of the selected condition attribute $B$ is defined as [32]
$I(B) = -\sum_{j=1}^{k} \dfrac{|D_j|}{|D_B|} \log \dfrac{|D_j|}{|D_B|}$ (15)
where $|\cdot|$ expresses the number of elements in a set.
The selected condition attribute $B$ can be divided into two subsets $B_1$ and $B_2$ by an alternative cutpoint $C$, where $B_1$ contains the elements below $C$ and $B_2$ the elements not below $C$. The information entropy of partitioning $IEP(B, C)$ is defined as [33]
$IEP(B, C) = \dfrac{|B_1|}{|B|} \cdot I(B_1) + \dfrac{|B_2|}{|B|} \cdot I(B_2)$ (16)
The alternative cutpoint $C$ that minimizes $IEP(B, C)$ among all alternative cutpoints of the selected condition attribute $B$ is marked as the optimal cutpoint. A sketch of this cutpoint selection is given below.
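The following sketch illustrates the cutpoint selection of Equations (15) and (16). How an interval-valued element is assigned to $B_1$ or $B_2$ is not spelled out in the text, so this example assigns each interval by its midpoint; the data is a toy subset, not the full Table 3.

```python
import math
from collections import Counter

def entropy(labels):
    """I(B) of Equation (15): entropy of the decision labels (log base 2)."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values()) if n else 0.0

def best_cutpoint(intervals, labels):
    """Choose the alternative cutpoint minimizing IEP(B, C) of Equation (16)."""
    points = sorted({e for lo, hi in intervals for e in (lo, hi)})
    candidates = [(a + b) / 2 for a, b in zip(points, points[1:])]

    def iep(c):
        b1 = [l for (lo, hi), l in zip(intervals, labels) if (lo + hi) / 2 < c]
        b2 = [l for (lo, hi), l in zip(intervals, labels) if (lo + hi) / 2 >= c]
        n = len(labels)
        return len(b1) / n * entropy(b1) + len(b2) / n * entropy(b2)

    return min(candidates, key=iep)

# Toy velocity intervals (m/s) with their intentions.
vel = [(300.0, 320.0), (120.0, 140.0), (280.0, 300.0), (150.0, 170.0)]
lab = ["A", "R", "P", "R"]
print(best_cutpoint(vel, lab))
```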
Finally, the decision tree is generated by splitting the nodes in the order given by the decision support degree and splitting each condition attribute at its optimal cutpoint, until all the nonempty elements of each node belong to the same class.
The whole decision tree generation algorithm is summarized in Algorithm 1.
At this point, rule extraction from the uncertain and incomplete priori knowledge is accomplished, and the future aerial target intention can be recognized on the basis of the state data predicted by the LSTM networks and the generated decision tree.
Algorithm 1 Decision tree generation algorithm for the air combat incomplete interval-valued information system.
Input: The air combat incomplete interval-valued information decision table of priori knowledge.
Output: A decision tree.
1: Determine the condition attribute classes of all interval numbers based on fuzzy inference;
2: Generate a fuzzy incomplete decision table;
3: repeat
4:   Calculate the decision support degree of each condition attribute according to (9);
5:   Choose the attribute with the maximum decision support degree as the split node;
6:   Determine the alternative cutpoints of the selected condition attribute;
7:   Calculate the information entropy of partitioning of each alternative cutpoint based on (15) and (16);
8:   Choose the alternative cutpoint with the minimum information entropy of partitioning as the optimal cutpoint to split the condition attribute;
9:   Delete the split condition attribute from the incomplete interval-valued information decision table and the fuzzy incomplete decision table;
10: until all the condition attributes are split and all the nonempty elements of each node belong to the same class;
11: Output: A decision tree.

5. Simulation Results

To demonstrate the effectiveness of the proposed method for aerial target state prediction and intention recognition, the simulation data is given in Table 1, Table 2 and Table 3.
Table 1 gives the real-time numeric data of the aerial target. In this paper, a typical scenario of a target with attack intention is considered. It is assumed that the azimuth basically remains unchanged, the distance between the target and the UCAV is reduced gradually, the velocity of the target increases to a stable value, the heading angle fluctuates within a certain range, and the height decreases to a narrow range.
Table 2 is the real-time nonnumeric data of aerial target.
Table 3 is the historical data of past air combat.
In Table 3, “*” denotes that the value of the corresponding air combat attribute is unknown.
In order to utilize the real-time data to train the LSTM networks, we choose the size of the time window as $p = 4$. The number of cells of the LSTM networks is 12. A sketch of such a setup is given below.
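The paper does not list its implementation; a comparable setup using the Keras API (an assumption of this sketch, not the authors' code), with 12 LSTM cells and a time window of p = 4, trained on the Table 1 distance series for one-step-ahead prediction, could look as follows. The scaling, optimizer and number of epochs are likewise assumptions.

```python
import numpy as np
import tensorflow as tf

p = 4  # size of the time window
# Distance (km) at times 1-14 of Table 1, used here as the training series.
series = np.array([310.0, 297.0, 291.0, 280.0, 267.0, 251.0, 235.0, 214.0,
                   199.0, 178.0, 154.0, 138.0, 120.0, 97.0], dtype="float32")
scale = series.max()
s = series / scale  # simple max scaling; the paper's preprocessing is not stated

X = np.stack([s[k:k + p] for k in range(len(s) - p)])[..., None]  # shape (N - p, p, 1)
y = s[p:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(12, input_shape=(p, 1)),  # 12 cells, as in this section
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)

# One-step-ahead prediction of the distance at time 15 from the last p observations.
pred = model.predict(s[-p:].reshape(1, p, 1), verbose=0)[0, 0] * scale
print(float(pred))
```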
Furthermore, in order to demonstrate the efficiency and feasibility of the proposed state prediction approach, the ARIMA method [6] is compared with the LSTM networks in this part. We choose the real-time numeric data from time 1 to time 14 as the training set and the data at time 15 as the test set to evaluate the predicted results. The simulation results are shown in Table 4 and Table 5.
From the performance comparison of ARIMA and the LSTM networks, the prediction accuracy of the LSTM networks is significantly better than that of ARIMA. For example, the ARIMA prediction of the azimuth is totally wrong. On the other hand, the training time of the LSTM networks is one of their drawbacks, but the state prediction can be parallelized on military systems with high-performance computers.
The output of LSTM networks and the predicted state data of target are shown in Table 6.
In Table 6, the predicted values of azimuth, distance, velocity, heading angle and height are obtained by the LSTM networks.
In accordance with fuzzy inference, a fuzzy incomplete decision table of historical numeric data part is generated as Table 7.
From Table 3 and Table 7, the partitions of all condition attributes can be obtained:
$U/A = \{A_E, A_S, A_N, A_W\}$, $U/D = \{D_S, D_M, D_L\}$, $U/V = \{V_S, V_M, V_F\}$, $U/H_a = \{H_{aS}, H_{aM}, H_{aL}\}$, $U/H = \{H_L, H_M, H_H\}$, $U/A_{rs} = \{A_{rs0}, A_{rs1}\}$, $U/M_{rs} = \{M_{rs0}, M_{rs1}\}$, $U/D_s = \{D_{s0}, D_{s1}\}$, $U/D_{ds} = \{D_{ds0}, D_{ds1}\}$,
where $A_E$, $A_S$, $A_N$ and $A_W$ are the partitions of $East$, $South$, $North$ and $West$ in the condition attribute azimuth; $D_S$, $D_M$ and $D_L$ are the partitions of $Short$, $Medium$ and $Long$ in the condition attribute distance; $V_S$, $V_M$ and $V_F$ are the partitions of $Slow$, $Medium$ and $Fast$ in the condition attribute velocity; $H_{aS}$, $H_{aM}$ and $H_{aL}$ are the partitions of $Small$, $Medium$ and $Large$ in the condition attribute heading angle; $H_L$, $H_M$ and $H_H$ are the partitions of $Low$, $Medium$ and $High$ in the condition attribute height; $A_{rs0}$ and $A_{rs1}$ are the partitions of 0 and 1 in the condition attribute air-to-air radar status; $M_{rs0}$ and $M_{rs1}$ are the partitions of 0 and 1 in the condition attribute marine radar status; $D_{s0}$ and $D_{s1}$ are the partitions of 0 and 1 in the condition attribute disturbing state; and $D_{ds0}$ and $D_{ds1}$ are the partitions of 0 and 1 in the condition attribute disturbed state.
Then, we have
$A_E = \{1, 2, 3, 8, 9, 10, 14, 15, 17, 20, 21, 22\}$, $A_S = \{2, 4, 5, 6, 9, 13, 14, 15, 16, 20, 22, 23\}$, $A_N = \{2, 7, 9, 14, 15, 18, 20, 23\}$, $A_W = \{2, 9, 11, 12, 14, 15, 19, 20, 22\}$;
$D_S = \{2, 3, 4, 12, 13, 14, 15, 17, 19\}$, $D_M = \{1, 3, 4, 7, 8, 9, 11, 16, 18, 19, 20, 22, 23\}$, $D_L = \{3, 4, 5, 6, 10, 19, 21\}$;
$V_S = \{8, 9, 10, 18, 22\}$, $V_M = \{8, 11, 12, 16, 17, 19, 20, 21, 23\}$, $V_F = \{1, 2, 3, 4, 5, 6, 7, 8, 13, 14, 15\}$;
$H_{aS} = \{1, 2, 3, 4, 6, 16, 17, 19, 20, 22, 23\}$, $H_{aM} = \{5, 6, 7, 8, 9, 10, 11, 12, 16, 19, 20, 22, 23\}$, $H_{aL} = \{6, 13, 14, 15, 16, 18, 19, 20, 21, 22, 23\}$;
$H_L = \{1, 2, 3, 4, 9, 11, 12, 13, 14, 15, 21, 23\}$, $H_M = \{2, 5, 6, 7, 8, 9, 10, 16, 17, 19, 21, 22, 23\}$, $H_H = \{2, 9, 18, 20, 21, 23\}$;
$A_{rs0} = \{11, 15, 18, 19, 20, 21, 23\}$, $A_{rs1} = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 22, 23\}$;
$M_{rs0} = \{1, 2, 3, 10, 14, 16, 17, 19, 20, 21\}$, $M_{rs1} = \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 20, 21, 22, 23\}$;
$D_{s0} = \{2, 5, 7, 8, 9, 11, 12, 18, 19, 20, 21, 22\}$, $D_{s1} = \{1, 2, 3, 4, 5, 6, 10, 12, 13, 14, 15, 16, 17, 18, 22, 23\}$;
$D_{ds0} = \{1, 2, 3, 5, 6, 7, 8, 9, 11, 12, 16, 17, 18, 19, 20, 21, 22, 23\}$, $D_{ds1} = \{3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 15, 17, 22\}$.
According to (9), the decision support degree of each condition attribute with respect to the decision attribute (intention $I$) can be calculated, which gives
$DSD(A, I) = 0.5836$, $DSD(D, I) = 0.6154$, $DSD(V, I) = 0.6538$, $DSD(H_a, I) = 0.5746$, $DSD(H, I) = 0.5938$, $DSD(A_{rs}, I) = 0.3493$, $DSD(M_{rs}, I) = 0.3561$, $DSD(D_s, I) = 0.4933$, $DSD(D_{ds}, I) = 0.5219$.
Hence, the condition attribute velocity is selected to be split first.
Returning to Table 3, sorting the endpoints of the velocity elements in ascending order and deleting the repeated endpoints, we have
110, 120, 140, 150, 170, 200, 210, 220, 230, 240, 250, 270, 280, 290, 300, 315, 320, 330.
The alternative cutpoints are
115, 130, 145, 160, 185, 205, 215, 225, 235, 245, 260, 275, 285, 295, 307.5, 317.5, 325.
The corresponding information entropies of partitioning can be obtained with (15) and (16), which are shown as follows:
2.0353, 1.9372, 1.8260, 1.8132, 1.5516, 1.5516, 1.5516, 1.5688, 1.5688, 1.4681, 1.4093, 1.4093, 1.4093, 1.5240, 1.6618, 1.8064, 1.8567.
The minimum information entropy of partitioning is 1.4093, and the first corresponding alternative cutpoint is 260.0. Therefore, 260.0 is chosen as the optimal cutpoint.
The process is repeated until the decision tree is generated, which is shown in Figure 12.
Finally, after inputting the predicted state data into the decision tree, the intention of the target is recognized as attack, which is consistent with the simulation scenario.
The LSTM prediction results show that the prediction model can effectively capture the variation trend of the real-time numeric data, and a suitable time window also alleviates the lack of training data. In addition, the decision tree generated from the uncertain and incomplete priori knowledge indicates that the proposed method is applicable to handling the uncertainty and incompleteness in air combat. Thus, the presented method is practical and effective for intention prediction of aerial targets.

6. Conclusions

As the basis of future air combat autonomous decision-making systems, intention prediction of aerial targets makes a positive contribution to unmanned systems. In this paper, a state prediction and intention recognition method is developed. The future state information of a target is predicted with LSTM networks from real-time series data, and the uncertainty and incompleteness of the priori knowledge are expressed as interval-valued numbers and null values in the air combat information system. To generate a decision tree from the uncertain and incomplete historical data, a decision support degree is applied for node splitting order selection, and the split criterion is determined by the information entropy of partitioning. Then, the target intention is obtained from the predicted data and the built decision tree. The simulation results show that the designed algorithm can predict the state and recognize the intention of an aerial target. Therefore, the proposed method can handle air combat information systems containing a great deal of uncertainty and incompleteness. However, the relationship between target state and intention needs to be further explored. For future work, air combat attack decision-making based on the target intention is worth considering.

Author Contributions

All authors contributed to this article. Conceptualization, T.Z. and M.C.; methodology, T.Z. and M.C.; project administration, Y.W. and C.Y.; supervision, M.C.; validation, T.Z.; writing—original draft preparation, T.Z.; writing—review and editing, T.Z., M.C., Y.W., C.Y. and J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Major Projects for Science and Technology Innovation 2030 grant number 2018AA0100800, and Equipment Pre-research Foundation of Laboratory grant number 61425040104.

Acknowledgments

The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhou, K.; Wei, R.; Xu, Z.; Zhang, Q.; Lu, H.; Zhang, G. An air combat decision learning system based on a brain-like cognitive mechanism. Cogn. Comput. 2019, 4, 1–12. [Google Scholar] [CrossRef]
  2. Zhou, T.; Chen, M.; Yang, C. Data fusion using Bayesian theory and reinforcement learning method. Sci. China Inf. Sci. in press. [CrossRef]
  3. Karavidic, Z.; Projovic, D. A multi-criteria decision-making (MCDM) model in the security forces operations based on rough sets. Decis. Mak. Appl. Manag. Eng. 2018, 1, 97–120. [Google Scholar] [CrossRef]
  4. Petrovic, I.; Kankaras, M. DEMATEL-AHP multi-criteria decision making model for the selection and evaluation of criteria for selecting an aircraft for the protection of air traffic. Decis. Mak. Appl. Manag. Eng. 2018, 2, 93–110. [Google Scholar]
  5. Kumar, P.; Perrollaz, M.; Lefevre, S.; Laugier, C. Learning-based approach for online lane change intention prediction. IEEE Intell. Veh. Symp. 2013, 1, 797–802. [Google Scholar]
  6. Zhou, T.; Wu, Q.; Chen, M. State prediction based on ARIMA model for aerial target. In Proceedings of the 2018 Chinese Intelligent Systems Conference, Wenzhou, China, 13–14 October 2018. [Google Scholar]
  7. Zhou, T.; Chen, M.; Chen, S.; Zou, J. Intention prediction of aerial target under incomplete information. ICIC Express Lett. 2017, 8, 623–631. [Google Scholar]
  8. Wang, L.; Li, L.; Nie, Z. Assessment of target maneuvering intention in beyond-visual-range air-combat. Electron. Opt. Control 2012, 19, 68–71. [Google Scholar]
  9. Zhou, W.; Zhang, J.; Gu, N.; Yan, G. Recognition of combat intention with insufficient expert knowledge. In Proceedings of the 3rd International Conference on Computational Modeling, Simulation and Applied Mathematics, Wuhan, China, 27–28 September 2018; pp. 316–321. [Google Scholar]
  10. Niu, X.; Zhao, H.; Zhang, Y. Naval vessel intention recognition based on decision tree. Ordnance Ind. Autom. 2010, 29, 44–53. [Google Scholar]
  11. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  12. Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2222–2232. [Google Scholar] [CrossRef] [Green Version]
  13. Karim, F.; Majumdar, S.; Darabi, H.; Chen, S. LSTM fully convolutional networks for time series classification. IEEE Access 2017, 9, 1662–1669. [Google Scholar] [CrossRef]
  14. Wang, J.; Zhang, J.; Wang, X. Bilateral LSTM: A two-dimensional long short-term memory model with multiply memory units for short-term cycle time forecasting in re-entrant manufacturing Systems. IEEE Trans. Ind. Inform. 2018, 14, 748–758. [Google Scholar] [CrossRef]
  15. Tsironi, E.; Barros, P.; Weber, C.; Wermter, S. An analysis of convolutional long-short term memory recurrent neural networks for gesture recognition. Neurocomputing 2017, 268, 76–86. [Google Scholar] [CrossRef]
  16. Bagnall, A.; Lines, J.; Hills, J.; Bostrom, A. Time-series classification with COTE: The collective of transformation-based ensembles. IEEE Trans. Knowl. Data Eng. 2015, 27, 2522–2535. [Google Scholar] [CrossRef]
  17. Francisco, O.; Roggen, D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 79–95. [Google Scholar]
  18. Hua, Y.; Zhao, Z.; Li, R.; Chen, X.; Liu, Z.; Zhang, H. Deep learning with long short-term memory for time series prediction. IEEE Commun. Mag. 2019, 57, 114–119. [Google Scholar] [CrossRef] [Green Version]
  19. Safavian, S.R.; Landgrebe, D. A survey of decision tree classifier methodology. IEEE Trans. Syst. Man Cybern. 1991, 21, 660–674. [Google Scholar] [CrossRef] [Green Version]
  20. Wang, S.; Fan, C.; Hsu, C.-H.; Sun, Q.; Yang, F. A vertical handoff method via self-selection decision tree for internet of vehicles. IEEE Syst. J. 2014, 10, 1183–1192. [Google Scholar] [CrossRef]
  21. Biswal, M.; Dash, P.K. Detection and characterization of multiple power quality disturbances with a fast S-transform and decision tree based classifier. Digit. Signal Process. 2013, 23, 1071–1083. [Google Scholar] [CrossRef]
  22. He, M.; Zhang, J.; Vijay, V. Robust online dynamic security assessment using adaptive ensemble decision-tree learning. IEEE Trans. Power Syst. 2013, 28, 4089–4098. [Google Scholar] [CrossRef]
  23. Herbst, W. Dynamics of air combat. J. Aircr. 1983, 20, 594–598. [Google Scholar] [CrossRef]
  24. Jiang, C.; Ding, Q. Research on threat assessment and target distribution for multi-aircraft cooperative air combat. Fire Control Command Control 2008, 33, 8–12. [Google Scholar]
  25. Liu, Z.; Chen, M.; Wu, Q.; Chen, S. Prediction of unmanned aerial vehicle target intention under incomplete information. Sci. Sin. Inform. in press (In Chinese). [CrossRef]
  26. Alberto, D.; Jose, A.; Ricardo, S.; Basilio, S. Deep evolutionary modeling of condition monitoring data in marine propulsion systems. Soft Comput. 2019, 23, 9937–9953. [Google Scholar]
  27. Quinlan, J.R. Induction of decision trees. Mach. Learn. 1986, 1, 1. [Google Scholar] [CrossRef] [Green Version]
  28. Leung, Y.; Li, D. Maximal consistent block technique for rule acquisition in incomplete information systems. Inf. Sci. 2003, 153, 85–106. [Google Scholar] [CrossRef]
  29. Elouedi, Z.; Mellouli, K.; Smets, P. Decision trees using the belief function theory. In Proceedings of the International Conference on Information Processing and Management of Uncertainty IPMU, Madrid, Spain, 3–7 July 2000. [Google Scholar]
  30. Guan, X.; Liang, J.; Qian, Y. Decision trees generation algorithm based on decision support degree. Comput. Eng. Appl. 2008, 39, 156–158. [Google Scholar]
  31. Guillaume, S. Designing fuzzy inference systems from data: An interpretability-oriented review. IEEE Trans. Fuzzy Syst. 2001, 9, 426–433. [Google Scholar] [CrossRef] [Green Version]
  32. Chen, J.; Wang, X.; He, Q. Interval-valued attributes based monotonic decision tree algorithm. Pattern Recognit. Artif. Intell. 2016, 1, 47–53. [Google Scholar]
  33. Zhu, H.; Zhai, J.; Wang, S.; Wang, X. Monotonic decision tree for interval valued data. Int. Conf. Mach. Learn. Cybern. 2014, 481, 231–240. [Google Scholar]
Figure 1. Air combat situation diagram.
Figure 2. The tree chart of target intention characteristic description in air combat.
Figure 3. The intention prediction system diagram.
Figure 4. The structure of LSTM networks.
Figure 5. The structure of aerial target state prediction.
Figure 6. The structure of decision tree generation for aerial target intention recognition.
Figure 7. The membership function curve of azimuth.
Figure 8. The membership function curve of distance.
Figure 9. The membership function curve of velocity.
Figure 10. The membership function curve of heading angle.
Figure 11. The membership function curve of height.
Figure 12. The decision tree of uncertain and incomplete priori knowledge in air combat.
Table 1. Real-time numeric data of the aerial target.

Time | Azimuth (mil) | Distance (km) | Velocity (m/s) | Heading Angle (°) | Height (km)
1 | 2230.0 | 310.0 | 220.0 | 12.0 | 15.8
2 | 2245.0 | 297.0 | 242.0 | 11.0 | 13.2
3 | 2257.0 | 291.0 | 228.0 | 14.0 | 11.7
4 | 2300.0 | 280.0 | 241.0 | 13.0 | 10.1
5 | 2364.0 | 267.0 | 255.0 | 10.0 | 8.6
6 | 2413.0 | 251.0 | 267.0 | 8.0 | 7.4
7 | 2467.0 | 235.0 | 263.0 | 5.0 | 6.0
8 | 2489.0 | 214.0 | 285.0 | 14.0 | 5.2
9 | 2488.0 | 199.0 | 274.0 | 4.0 | 4.5
10 | 2514.0 | 178.0 | 286.0 | 7.0 | 3.7
11 | 2516.0 | 154.0 | 293.0 | 11.0 | 3.2
12 | 2524.0 | 138.0 | 285.0 | 10.0 | 3.1
13 | 2517.0 | 120.0 | 284.0 | 9.0 | 2.8
14 | 2536.0 | 97.0 | 292.0 | 10.0 | 2.5
15 | 2528.0 | 75.0 | 291.0 | 6.0 | 2.6
Table 2. Real-time nonnumeric data of the aerial target.

Time | Air-to-Air Radar Status | Marine Radar Status | Disturbing State | Disturbed State
15 | 1 | 0 | 1 | 0
Table 3. Given knowledge of air target intention (the bold is used to mark off decision attribute and condition attributes).

Index | Azimuth (mil), Distance (km), Velocity (m/s), Heading Angle (°), Height (km)
1 | [2200.0, 2300.0] [100.0, 110.0] [300.0, 320.0] [10.0, 30.0] [4.0, 5.0]
2 | [45.0, 55.0] [320.0, 330.0] [320.0, 350.0]
3 | [2200.0, 2300.0] [300.0, 330.0] [30.0, 40.0] [2.0, 2.5]
4 | [2800.0, 3000.0] [270.0, 290.0] [330.0, 350.0] [3.6, 4.0]
5 | [2800.0, 2850.0] [260.0, 290.0] [315.0, 320.0] [80.0, 90.0] [7.7, 8.0]
6 | [2800.0, 2850.0] [240.0, 260.0] [300.0, 315.0] [6.7, 7.2]
7 | [750.0, 810.0] [180.0, 190.0] [150.0, 170.0] [60.0, 80.0] [6.0, 6.5]
8 | [820.0, 830.0] [180.0, 185.0] [70.0, 90.0] [6.5, 7.7]
9 | [160.0, 180.0] [110.0, 120.0] [40.0, 60.0]
10 | [820.0, 860.0] [200.0, 220.0] [120.0, 140.0] [50.0, 70.0] [5.4, 6.0]
11 | [4000.0, 4100.0] [50.0, 60.0] [210.0, 220.0] [60.0, 90.0] [3.4, 3.6]
12 | [4020.0, 4050.0] [35.0, 45.0] [210.0, 250.0] [70.0, 100.0] [2.0, 2.6]
13 | [2600.0, 2800.0] [30.0, 40.0] [280.0, 300.0] [140.0, 160.0] [2.0, 3.0]
14 | [0, 20.0] [300.0, 320.0] [160.0, 180.0] [2.0, 2.4]
15 | [25.0, 35.0] [290.0, 300.0] [210.0, 240.0] [0, 3.0]
16 | [2400.0, 2500.0] [150.0, 160.0] [230.0, 240.0] [4.4, 5.0]
17 | [1700.0, 1800.0] [50.0, 60.0] [300.0, 320.0] [20.0, 30.0] [5.0, 5.4]
18 | [600.0, 800.0] [160.0, 180.0] [120.0, 140.0] [150.0, 170.0] [9.6, 10.6]
19 | [5000.0, 5200.0] [220.0, 240.0] [8.0, 9.0]
20 | [150.0, 160.0] [230.0, 240.0] [10.0, 10.6]
21 | [810.0, 860.0] [200.0, 220.0] [200.0, 220.0] [200.0, 210.0]
22 | [180.0, 200.0] [120.0, 150.0] [6.0, 7.2]
23 | [2800.0, 2900.0] [160.0, 180.0] [140.0, 170.0]

Index | Air-to-Air Radar Status, Marine Radar Status, Disturbing State, Disturbed State | Intention
1 | 1 0 1 0 | A
2 | 1 0 | A
3 | 1 1 | A
4 | 1 1 1 1 | A
5 | 1 1 | S
6 | 1 1 1 | S
7 | 1 1 0 | R
8 | 1 1 0 0 | R
9 | 1 1 0 | R
10 | 1 0 1 | R
11 | 1 0 | C
12 | 1 1 0 | C
13 | 1 1 1 1 | P
14 | 1 1 1 | P
15 | 1 1 1 | P
16 | 1 1 0 | F
17 | 1 0 1 | F
18 | 0 1 0 | D
19 | 0 0 0 | D
20 | 0 0 0 | D
21 | 0 0 0 | D
22 | 1 1 | E
23 | 1 1 0 | E
Table 4. The state predicted value at time 15 of ARIMA and LSTM networks.

Predicted Value
Method | Azimuth (mil) | Distance (km) | Velocity (m/s) | Heading Angle (°) | Height (km)
ARIMA | 2996.0 | 65.7 | 297.7 | 9.6 | 1.7
LSTM networks | 2512.5 | 71.8 | 294.3 | 9.3 | 2.6
Table 5. The performance comparisons of ARIMA and LSTM networks.

Error
Method | Azimuth | Distance | Velocity | Heading Angle | Height
ARIMA | 468.0 | 9.3 | 6.7 | 3.6 | 0.9
LSTM networks | 15.5 | 3.2 | 3.3 | 3.3 | 0

Test time (s)
Method | Azimuth | Distance | Velocity | Heading Angle | Height
ARIMA | 3.13 | 2.20 | 2.34 | 3.77 | 1.69
LSTM networks | 2.42 | 2.11 | 2.65 | 0.86 | 0.92
Table 6. The predicted state data of the target.

Time | Azimuth (mil) | Distance (km) | Velocity (m/s) | Heading Angle (°) | Height (km)
16 | 2526.0 | 60.7468 | 284.9901 | 8.4227 | 2.4236

Time | Air-to-Air Radar Status | Marine Radar Status | Disturbing State | Disturbed State
16 | 1 | 0 | 1 | 0
Table 7. The fuzzy incomplete decision table of the historical numeric data part (the bold is used to mark off decision attribute and condition attributes).

Index | Azimuth (mil), Distance (km), Velocity (m/s), Heading Angle (°), Height (km) | Intention
1 | East Medium Fast Small Low | A
2 | Short Fast Small | A
3 | East Fast Small Low | A
4 | South Fast Small Low | A
5 | South Long Fast Medium Medium | S
6 | South Long Fast Medium | S
7 | North Medium Medium Medium Medium | R
8 | East Medium Medium Medium | R
9 | Medium Slow Medium | R
10 | East Long Slow Medium Medium | R
11 | West Medium Medium Medium Low | C
12 | West Short Medium Medium Low | C
13 | South Short Fast Large Low | P
14 | Short Fast Large Low | P
15 | Short Fast Large Low | P
16 | South Medium Medium Medium | F
17 | East Short Medium Small Medium | F
18 | North Medium Slow Large High | D
19 | West Medium Medium | D
20 | Medium Medium High | D
21 | East Long Medium Large | D
22 | Medium Slow Medium | E
23 | South Medium Medium | E
"∗" denotes that the value of the corresponding air combat attribute is unknown.
