Article

Multi-AUV Dynamic Maneuver Countermeasure Algorithm Based on Interval Information Game and Fractional-Order DE

1
Research & Development Institute, Northwestern Polytechnical University, Shenzhen 518057, China
2
School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
3
MOE Key Laboratory of Marine Intelligent Equipment and System, School of Naval Architecture, Ocean & Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
4
School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an 710072, China
*
Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(5), 235; https://doi.org/10.3390/fractalfract6050235
Submission received: 20 March 2022 / Revised: 11 April 2022 / Accepted: 17 April 2022 / Published: 25 April 2022
(This article belongs to the Special Issue Fractional Dynamical Systems: Applications and Theoretical Results)

Abstract:
The instability of the underwater environment and underwater communication brings great challenges to the coordination and cooperation of multi-Autonomous Underwater Vehicle (AUV) systems. In this paper, a multi-AUV dynamic maneuver countermeasure algorithm is proposed based on interval information game theory and fractional-order Differential Evolution (DE), in order to capture the features of the underwater countermeasure. Firstly, an advantage function comprising the situation and energy efficiency advantages is proposed on the basis of the multi-AUV maneuver strategies. Then, the payoff matrix with interval information is established, and the payment interval ranking is achieved based on relative entropy. Subsequently, the maneuver countermeasure model is presented along with the Nash equilibrium condition satisfying the interval information game. The fractional-order DE algorithm is applied to solve the established problem and determine the optimal strategy. Finally, the superiority of the proposed multi-AUV maneuver countermeasure algorithm is verified through an example.

1. Introduction

Autonomous underwater vehicles (AUVs) are widely used for ocean observation, sea rescue, minefield search, enemy reconnaissance, and other related fields [1,2]. Compared to manned equipment, AUVs present the advantages of low cost, zero casualties, good concealment, strong mobility, low energy consumption, recyclability, etc. [3,4,5]. However, in the face of high-efficiency and large-scale task requirements, a single AUV can hardly fulfill all these demands. A multi-AUV system is a new alternative for complex ocean tasks due to the high efficiency and reliability achieved through its time-space distribution and redundant configuration. In recent years, research on multi-AUV cooperative formation and navigation has made significant headway [6,7,8], but studies of multi-AUV cooperative counter-games are still limited. The multi-AUV cooperative countermeasure can be used in marine research and military tasks, including underwater multi-target tracking, monitoring, and detection, which effectively expands the underwater combat radius and reduces underwater equipment loss and casualties.
The maneuver countermeasure is a special case of maneuver decision-making, which can be roughly divided into two categories. One is the ‘one-sided’ optimization algorithm, which only considers one’s own strategy optimization. The other is the ‘two-sided’ game algorithm, which fully analyzes the influence of both sides and emphasizes the conflict and confrontation of the situation. The one-sided optimization algorithms typically include intelligent algorithms, guidance law algorithms, and expert systems [9]. The expert system-based maneuver decision-making algorithm presented in Ref. [10] is one of the earliest works. However, this method relies heavily on experience, the chosen strategy is not guaranteed to be optimal, and its adaptability is unreliable. An air combat maneuver decision-making algorithm based on heuristic reinforcement learning is proposed in Ref. [11], which realizes the online task allocation of actual confrontation. In Ref. [12], a machine learning algorithm is used to train a large-scale UAV cluster, showcasing its highly intelligent decision-making ability. This kind of one-sided optimization algorithm may achieve the desired results to a certain extent, but the optimization process is not completely objective, because the influence of the opponent’s strategy is not considered and the interaction and conflict of the game are ignored.
The matrix game and the differential game are the most widely studied two-sided counter-game algorithms [13]. In Ref. [14], the matrix game is first used to study the counter-game between two UAVs, but its model and strategy are relatively simple and cannot be applied to actual confrontation. The integration of a game strategy solution into this model, based on intelligent algorithms, is proposed in Ref. [15], which improves the applicability of the former model. Ref. [16] discusses the maneuver counter-game of UAV clusters based on the intuitionistic fuzzy game, mapping the uncertainty of information such as the game environment to fuzzy membership, which is more aligned with the reality of a counter-game scenario. Furthermore, Ref. [17] examines the problem of an active defense cooperative differential game with three players, and Ref. [18] discusses the problem of a qualitative differential game in which multiple pursuers hunt an evader with a speed advantage. Overall, the two-sided game algorithm can capture the complex conflict of an unmanned system cluster countermeasure and realize the maneuver strategy in a more scientific and accurate way. However, owing to the complex marine environment [19], it is necessary to further explore the multi-stage dynamic countermeasure algorithm of multi-AUV systems with regard to underwater characteristics.
Classical game theory only discusses games with clear payment information [20]. However, most of the information in the marine environment is uncertain, which is the main problem in the counter-game process. If the uncertain information is directly converted to a certain value, the maneuver countermeasure algorithm may lose its credibility in the strategy selection. In this paper, this problem is solved by introducing interval information into the cooperative dynamic maneuver countermeasure algorithm of the multi-AUV. The uncertainty in the multi-AUV counter-game, the confrontation effectiveness, and the marine environment is regarded as an interval information series. Meanwhile, an interval payment multi-attribute evaluation of the multi-AUV maneuver strategies is carried out, and the interval payoff matrix is obtained. Therefore, the uncertain information is well accounted for in the proposed counter-game algorithm, which makes the algorithm suitable for application in a marine environment. Moreover, the Nash equilibrium condition satisfying the interval payment is then proposed, and the countermeasure model of a Nash equilibrium strategy in a dynamic marine environment is established. To determine the optimal strategy, an improved fractional-order Differential Evolution (DE) algorithm is presented by combining the DE algorithm with fractional calculus, which can raise the convergence rate of the optimization process. Fractional calculus has long-memory ability and does not rely on current gradients [21]. Thus, the fractional-order DE has great potential for avoiding slow convergence and local extrema.
Overall, the main contributions of the proposed dynamic maneuver countermeasure algorithm for the multi-AUV can be summarized as follows. Firstly, the underwater environment and communication conditions are well represented in the countermeasure process using the interval information game; thus, the modeling of the dynamic maneuver counter-game of the multi-AUV is more accurate. Then, an improved fractional-order DE algorithm is proposed to improve the efficiency of achieving the optimal strategy of a real-time underwater countermeasure. The order of the fractional-order DE affects the convergence rate and optimization error, and it can be tuned to satisfy different underwater requirements. However, it should be noted that the standard for determining the optimal fractional order is neither analytic nor unified, and the tuning process requires additional computation at the beginning of the proposed algorithm.
The remainder of this paper is organized as follows: Section 2 presents preliminaries on fractional derivatives and the DE algorithm; Section 3 presents the maneuver attributes, the advantage function, and the payoff matrix; Section 4 presents the payment ranking based on the four-parameter interval set and relative entropy, and the Nash equilibrium optimal solution; the fractional-order DE algorithm is presented in Section 5; Section 6 demonstrates the example of the multi-AUV counter-game; finally, the conclusions are stated in Section 7.

2. Preliminary

2.1. Fractional Derivatives

Fractional calculus is a generalization of the classical integer-order derivatives and integrals. The common definitions of fractional calculus include the Riemann–Liouville (R-L) [22], Grünwald–Letnikov (G-L), and Caputo definitions [23,24], which have been applied in different fields, such as mathematics, engineering, computer science, etc. [25,26]. These definitions are introduced in the following:
Definition 1
([22]). The R-L fractional derivative of a continuous function $f : (0, +\infty) \to \mathbb{R}$ with order $q > 0$ is defined as:
$${}_{0}^{R}D_{t}^{q} f(t) = \frac{1}{\Gamma(k-q)} \frac{d^k}{dt^k} \int_{0}^{t} \frac{f(s)}{(t-s)^{q-k+1}} \, ds,$$
where $k$ is a positive integer with $k - 1 \le q < k$, and $\Gamma(\cdot)$ is the gamma function, i.e.,
$$\Gamma(x) = \int_{0}^{+\infty} t^{x-1} e^{-t} \, dt.$$
Definition 2
([27]). The Caputo fractional derivative of a continuous function $f : (0, +\infty) \to \mathbb{R}$ with order $q > 0$ is defined as:
$${}_{0}^{C}D_{t}^{q} f(t) = \frac{1}{\Gamma(k-q)} \int_{0}^{t} \frac{f^{(k)}(s)}{(t-s)^{q-k+1}} \, ds,$$
where $k$ is a positive integer with $k - 1 \le q < k$.
The G-L definition is a discretization definition, which is equivalent to the discretized R-L definition. When $0 < q < 1$ ($k = 1$), the G-L definition or the discretized R-L definition can be described by
$${}_{0}^{G,R}D_{t}^{q} f(t) = \frac{1}{h^q} \left[ f(s_{n+1}) + \sum_{i=1}^{n+1} (-1)^i \binom{q}{i} f(s_{n+1-i}) \right], \quad h \to 0^{+}.$$
The Caputo definition has a practical physical interpretation of the initial values and offers advantages over the R-L and G-L definitions. The discretized Caputo definition differs from the R-L and G-L ones and satisfies the following relationship, i.e.,
$${}_{0}^{C}D_{t}^{q} f(t) = {}_{0}^{G,R}D_{t}^{q} f(t) - \frac{t^{-q}}{\Gamma(1-q)} f_0, \quad 0 < q < 1.$$
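As a numerical illustration, the recursive G-L coefficients of Equation (3) can be combined with the initial-value correction of Equation (4) to approximate the Caputo derivative. This is only a sketch: the function names, the uniform grid, and the test function are our own illustrative choices.

```python
import math

def gl_coeffs(q, n):
    """Recursively build c_i = (-1)^i * binom(q, i) for i = 0..n."""
    c = [1.0]
    for i in range(1, n + 1):
        c.append(c[-1] * (1 - (q + 1) / i))
    return c

def caputo_derivative(f, t, q, n):
    """Approximate the Caputo derivative of f at t for 0 < q < 1:
    the discretized G-L sum on an n-step grid minus the
    initial-value correction t^(-q) / Gamma(1-q) * f(0)."""
    h = t / n
    c = gl_coeffs(q, n)
    # G-L sum: newest sample first, weighted by the recursive coefficients
    gl = sum(c[i] * f(t - i * h) for i in range(n + 1)) / h ** q
    return gl - t ** (-q) / math.gamma(1 - q) * f(0.0)

# For f(t) = t, the Caputo derivative of order q at t is t^(1-q) / Gamma(2-q)
approx = caputo_derivative(lambda t: t, 1.0, 0.5, 2000)
exact = 1.0 / math.gamma(1.5)
```

For $f(t) = t$ and $q = 0.5$, the approximation agrees with the closed form $1/\Gamma(1.5)$ to within the first-order truncation error of the G-L scheme.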

2.2. Differential Evolution Algorithm

The differential evolution (DE) algorithm is an intelligent optimization method based on population differences. DE uses the differences between population individuals to produce a disturbance for individual evolution, and it uses a greedy rule to search the whole optimization space for the optimal solution. In other words, the population individuals are greedy for better fitness and will update to a solution with better fitness, as shown in Equation (7) below. The update process of the population includes mutation, crossover, and selection, and finally finds the optimal solution. DE has the characteristics of simple operation, high robustness, strong optimization ability, etc. [16]. The main population update operations in DE are introduced as follows:
(1) Mutation.
Mutation is the first step of the population update in DE. According to the DE/current-to-best strategy [28], the variation vector can be obtained as follows:
$$w_{i,g} = v_{i,g} + F_i (v_{best,g} - v_{i,g}) + F_i (v_{r_1,g} - v_{r_2,g}),$$
where $w_{i,g}$ is the variation vector, $F_i$ is the scaling factor of the individual, $v_{i,g}$ represents the current individual vector, and $v_{best,g}$ represents the optimal individual of the population. $r_1$ and $r_2$ are two distinct integers randomly selected from $\{1, 2, \ldots, NP\}$, where $NP$ is the population size. $v_{r_1,g}$ and $v_{r_2,g}$ represent the $r_1$-th and $r_2$-th individual vectors, respectively.
(2) Crossover.
Crossover is an exchange of each dimension vector between the mutated individual and the original individual. The crossover operation can be expressed as follows:
$$u_{ij,g} = \begin{cases} w_{ij,g}, & \text{if } rand[0,1] \le CR_i \ \text{or} \ j = j_{rand} \\ v_{ij,g}, & \text{otherwise} \end{cases},$$
where $u_{ij,g}$ is the $j$-th component of the trial vector $u_{i,g}$, $CR_i$ is the crossover rate, $j_{rand}$ is a random integer in $\{1, \ldots, D\}$ with $D$ the individual dimension, and $rand[0,1]$ is a random number between 0 and 1. The guaranteed component $j = j_{rand}$ ensures that the trial vector differs from the original individual in at least one dimension.
(3) Selection.
The selection operation chooses, between the newly generated trial vector and the original target vector, the one with the better fitness to become a member of the next population generation. It is a greedy selection operation, which can be described as follows:
$$v_{i,g+1} = \begin{cases} v_{i,g}, & f(u_{i,g}) < f(v_{i,g}) \\ u_{i,g}, & \text{otherwise} \end{cases},$$
where v i , g + 1 is the next generation individual.
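The three operations above can be sketched as a minimal DE loop. The sphere objective, the bounds, and the control parameters ($NP = 30$, $F = 0.6$, $CR = 0.9$) are illustrative choices, and the greedy selection is written for a minimization objective:

```python
import random

def de_optimize(fitness, dim, bounds, np_=30, F=0.6, CR=0.9, gens=200):
    """Minimal DE/current-to-best/1 sketch (minimization)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [fitness(v) for v in pop]
    for _ in range(gens):
        best = pop[min(range(np_), key=lambda i: fit[i])]
        for i in range(np_):
            r1, r2 = random.sample([j for j in range(np_) if j != i], 2)
            # Mutation: DE/current-to-best variation vector
            w = [pop[i][d] + F * (best[d] - pop[i][d])
                 + F * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
            # Binomial crossover with one guaranteed mutant component j_rand
            jr = random.randrange(dim)
            u = [w[d] if (random.random() < CR or d == jr) else pop[i][d]
                 for d in range(dim)]
            # Greedy selection: keep the trial vector if its fitness is better
            fu = fitness(u)
            if fu < fit[i]:
                pop[i], fit[i] = u, fu
    return min(pop, key=fitness)
```

On the 2-D sphere function, the loop converges to the neighborhood of the origin within a few hundred generations.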

3. Multi-AUV Maneuver Countermeasure Model

3.1. Maneuver Strategies

In order to establish the interval information payoff matrix, the evaluation of multi-AUV maneuver attributes is presented according to the situation information of the different confronting sides. The counter-game trajectory of the multi-AUV is treated as a combination of individual maneuver actions. The enemy multi-AUV system and our multi-AUV system are regarded as the two players in the game.
The game model of multi-AUV systems based on uncertain information can be expressed as:
$$G = \{N, S, U\},$$
where $N = \{N_1, N_2\}$ are the two players in the game: player $N_1$ represents our multi-AUV system with $a$ AUVs, and player $N_2$ represents the enemy multi-AUV system with $d$ AUVs. $S = \{s_{1i}^k, s_{2j}^k\}$ are the strategy spaces of the game players, where $s_{1i}^k$ denotes that we choose the $i$-th maneuver strategy and $s_{2j}^k$ denotes that the enemy chooses the $j$-th maneuver strategy at the $k$-th stage. $U = \{u_1(s^k, i), u_2(s^k, j)\}$ are the benefit intervals corresponding to each strategy that may be selected by the multi-AUV systems participating in this game. The action of a multi-AUV system at the $k$-th game stage can be represented by an information set; thus, the maneuver strategy is actually the action rule of the multi-AUV system in each information set.

3.2. Advantage Function

The main difference between the multi-AUV counter-game and the counter-games of other autonomous robots is the information transmission. Due to the marine environment, information in the multi-AUV game process is mainly received through underwater acoustic channels. The shallow water acoustic channel is a channel with time-space-frequency variation [19]. It has strong multi-path interference, high environmental noise, large transmission loss, and a serious Doppler shift effect [19]. Therefore, the information provided in the multi-AUV confrontation process has strong uncertainties, and it is difficult to quantify the threat degree of each side accurately during the countermeasure process [29]. Hence, in this paper, each attribute is represented by an interval information set in the countermeasure process. The advantage evaluation function that evaluates the payment of each AUV consists of two parts: the situation advantage and the energy efficiency advantage.

3.2.1. Situation Advantage Function

In order to attack the enemy multi-AUV system, it is necessary to occupy a favorable attack position and minimize the attack risk of our multi-AUV system. The situation advantage function consists of angle advantage, speed advantage, and distance advantage [30].
The angle advantage function A a g can be expressed as:
$$A_{ag} = 1 - \frac{|AA|/180 + |ATA|/180}{2},$$
where $|AA| < 180^{\circ}$ is the eye angle of the two game players and $|ATA| < 180^{\circ}$ is the target entry angle.
The speed advantage function A s is achieved as:
$$A_s = \begin{cases} 0.1, & v_{n1i} \le 0.6\, v_{n2j} \\ \dfrac{v_{n1i}}{v_{n2j}} - 0.5, & 0.6\, v_{n2j} < v_{n1i} < 1.5\, v_{n2j} \\ 1, & v_{n1i} \ge 1.5\, v_{n2j} \end{cases},$$
where $v_{n1i}, v_{n2j}$ are the speeds of $n_{1i}$ and $n_{2j}$ in the game. Equation (10) indicates that a higher attack speed leads to a greater attack advantage. According to Ref. [30], when the speed ratio $v_{n1i}/v_{n2j}$ is larger than 1.5, $n_{1i}$ has an absolute speed advantage, and when the speed ratio is less than 0.6, the speed advantage of $n_{1i}$ is negligible.
Then, the distance advantage A d i s can be obtained as:
$$A_{dis} = e^{-\left( \frac{D_{ij} - R_0}{R_{max} - R_{min}} \right)^2},$$
where $D_{ij}$ is the distance between the AUVs, $R_0 = (R_{max} + R_{min})/2$, $R_{max}$ is the maximum attack distance, and $R_{min}$ is the minimum attack distance. When $D_{ij} > R_{max}$, the distance advantage is considered to be zero; with the decrease in distance, the distance advantage increases gradually; when $D_{ij} = R_0$, the distance advantage reaches its maximum value; with a further reduction in distance, the distance advantage gradually decreases.
Hence, the overall situation advantage function W A can be achieved by:
$$W_A = k_1 A_{ag} + k_2 A_s + k_3 A_{dis},$$
where $k_1, k_2, k_3$ are weighting coefficients, and $k_1 + k_2 + k_3 = 1$.
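The three components above can be transcribed directly as a sketch of Equations (9)–(12); here angles are in degrees, speeds are scalar magnitudes, and the equal weights $k_1 = k_2 = k_3 = 1/3$ are an illustrative assumption:

```python
import math

def situation_advantage(AA, ATA, v1, v2, D, R_min, R_max,
                        k=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted sum of angle, speed, and distance advantages."""
    # Angle advantage, Eq. (9): largest when both angles are zero
    A_ag = 1 - (abs(AA) / 180 + abs(ATA) / 180) / 2
    # Speed advantage, Eq. (10): piecewise in the speed ratio v1/v2
    if v1 <= 0.6 * v2:
        A_s = 0.1
    elif v1 < 1.5 * v2:
        A_s = v1 / v2 - 0.5
    else:
        A_s = 1.0
    # Distance advantage, Eq. (11): peaks at the midpoint R0
    R0 = (R_max + R_min) / 2
    A_dis = math.exp(-((D - R0) / (R_max - R_min)) ** 2)
    k1, k2, k3 = k
    return k1 * A_ag + k2 * A_s + k3 * A_dis
```

For a head-on geometry ($AA = ATA = 0$), equal speeds, and $D = R_0$, the angle and distance components are at their peaks and the speed component is 0.5.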

3.2.2. Energy Efficiency Advantage Function

The energy efficiency advantage $C$ is mainly measured by seven factors, including maneuverability $B$, firepower $P$, target detection capability $T$, manipulating ability $\varepsilon_1$, viability $\varepsilon_2$, voyage $\varepsilon_3$, and confrontation ability $\varepsilon_4$ [30]:
$$C = \left[ \ln B + \ln (P + 1) + \ln (T + 1) \right] \varepsilon_1 \varepsilon_2 \varepsilon_3 \varepsilon_4.$$
The energy efficiency advantage function in Equation (13) is quite different from the situation advantage function. In order to simplify the total advantage function, the energy efficiency advantage function is transformed into the following form according to the actual situation:
$$W_C = \begin{cases} 0, & C_{N_1}^{a} / C_{N_2}^{d} < 0.3 \\ 0.25, & 0.3 \le C_{N_1}^{a} / C_{N_2}^{d} < 1 \\ 0.5, & C_{N_1}^{a} / C_{N_2}^{d} = 1 \\ 0.75, & 1 < C_{N_1}^{a} / C_{N_2}^{d} < 1.5 \\ 1, & C_{N_1}^{a} / C_{N_2}^{d} \ge 1.5 \end{cases},$$
where $C_{N_1}^{a}, C_{N_2}^{d}$ are the energy efficiencies of the AUVs on each side. According to Ref. [30], the energy efficiency advantage function has five levels depending on the energy efficiency ratio $C_{N_1}^{a} / C_{N_2}^{d}$, as shown in Equation (14), which indicates that a higher energy efficiency provides a greater advantage.
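The five-level step function of Equation (14) reduces to a short lookup on the efficiency ratio; the function name is our own illustrative choice:

```python
def energy_efficiency_advantage(C1, C2):
    """Energy efficiency advantage W_C as a step function of the ratio C1/C2."""
    r = C1 / C2
    if r < 0.3:
        return 0.0
    if r < 1.0:
        return 0.25
    if r == 1.0:
        return 0.5
    if r < 1.5:
        return 0.75
    return 1.0
```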

3.3. Payoff Matrix Based on Interval Information

Payment in the game refers to the final gain or loss of players in the strategic choice. In the multi-AUV counter-game, the income of our AUV must be the loss of the enemy’s AUV. Therefore, the game in this paper belongs to the category of the two-person-zero-sum game.
Due to the abovementioned various underwater interference factors, the multi-AUV system is often unable to obtain all kinds of information accurately in an actual underwater maneuver countermeasure. After a reasonable analysis of the counter-game situation, each interference factor often changes within a certain interval. Therefore, the payoff matrix of each multi-AUV system is established based on interval information.
According to the situation and energy efficiency advantage functions proposed in the last section, the overall advantage function of our multi-AUV system, including situation advantage and energy efficiency advantage, can be obtained as:
$$W_1 = \delta_1 W_A + \delta_2 W_C,$$
where $\delta_1, \delta_2$ are weighting coefficients with $\delta_1 + \delta_2 = 1$; $W = [W^L, W^R]$, $W_A = [W_A^L, W_A^R]$, and $W_C = [W_C^L, W_C^R]$ represent the advantage functions with lower and upper boundaries.
In the same way, the overall advantage function W 2 of the enemy side in the game can be achieved by exchanging the situation information parameters of both sides.
The payment function of the multi-AUV game under uncertain information is established as:
$$f = \sum_{i=1}^{m} x_{ij} W_1 - \sum_{j=1}^{n} y_{ji} W_2,$$
where $x_{ij}, y_{ji}$ are binary decision variables: $x_{ij} = 1$ represents that our $i$-th AUV attacks the $j$-th AUV of the enemy, and $x_{ij} = 0$ means that it does not; in the same way, $y_{ji}$ represents whether the $j$-th AUV of the enemy attacks our $i$-th AUV or not.
Then, the payment matrix under uncertain information is obtained as:
$$F = (f_{ij})_{m \times n} = \begin{pmatrix} [f_{11}^L, f_{11}^R] & [f_{12}^L, f_{12}^R] & \cdots & [f_{1n}^L, f_{1n}^R] \\ [f_{21}^L, f_{21}^R] & [f_{22}^L, f_{22}^R] & \cdots & [f_{2n}^L, f_{2n}^R] \\ \vdots & \vdots & \ddots & \vdots \\ [f_{m1}^L, f_{m1}^R] & [f_{m2}^L, f_{m2}^R] & \cdots & [f_{mn}^L, f_{mn}^R] \end{pmatrix},$$
where the rows correspond to the maneuver strategies $a_1, a_2, \ldots, a_m$ of our multi-AUV system, the columns correspond to the maneuver strategies $d_1, d_2, \ldots, d_n$ of the enemy multi-AUV system, and $f_{mn}$ represents the gain when our multi-AUV system uses the $m$-th strategy and the enemy multi-AUV system uses the $n$-th strategy.

4. Optimal Solution of Multi-AUV Dynamic Maneuver Countermeasure

4.1. Payment Interval Ranking Based on Relative Entropy

Interval information sets cannot be compared by magnitude in the way real numbers can. The sorting method based on the possibility degree may fail, and that based on geometric distance may cause serious information loss [31]. In order to avoid these shortcomings, a method combining the four-parameter interval set and relative entropy is proposed in this paper.
According to the interval information given in the last subsection, the payment interval $[f_{mn}^L, f_{mn}^R]$ is obtained from the comprehensive information of both sides in the game. However, this payment does not consider the distribution of points within the interval. In fact, the interior points of the payment interval cannot simply be regarded as uniformly distributed; their distribution changes with the underwater game situation. For a given strategy $x_i$, when the confrontation situation is favorable to the attacker, the payment of the attacker inevitably tends toward $f^R$; otherwise, it tends toward $f^L$. In order to fully exploit the information of the advantage matrix, the payment interval is transformed into a four-parameter interval set in this subsection.
The four-parameter interval set is similar to the trapezoidal fuzzy set [32]. Combined with the advantage function in the last section, the payment interval $[f_{mn}^L, f_{mn}^R]$ is transformed into a four-parameter interval set $[f_{mn}^L, f_{mn}^{ML}, f_{mn}^{MR}, f_{mn}^R]$, where $f_{mn}^{ML} = f_{mn}^L + W_{mn}^{L} (f_{mn}^R - f_{mn}^L)$, $f_{mn}^{MR} = f_{mn}^L + W_{mn}^{R} (f_{mn}^R - f_{mn}^L)$, and $W_{mn}$ is the normalized advantage function with $0 < W_{mn}^L < W_{mn}^R < 1$.
The basic idea of this ranking method is to use relative entropy to measure the difference between an AUV's own revenue and the maximum revenue (or minimum revenue) under different strategies, and to choose the strategy whose revenue differs least from the maximum revenue (or, equivalently, the strategy whose revenue differs most from the minimum revenue). In fact, the highest return indicates that the AUV has completed the scheduled task without casualties; the lowest return indicates that the AUV has failed to complete the scheduled task with the largest casualties.
First, the Kullback–Leibler distance concept is introduced. For two systems P and Q, the relative information entropy for them in state P i and Q i can be expressed as [33]:
$$M_i = P_i \log_2 (P_i / Q_i) + (1 - P_i) \log_2 \left[ (1 - P_i) / (1 - Q_i) \right],$$
where the unit of M i is the bit.
In order to overcome the meaninglessness of Equation (18) when Q i = 0 or Q i = 1 , the relative information entropy is improved as [33]:
$$H_i = P_i \log_2 \frac{P_i}{\frac{1}{2}(P_i + Q_i)} + (1 - P_i) \log_2 \frac{1 - P_i}{1 - \frac{1}{2}(P_i + Q_i)},$$
where a smaller $H_i$ means a smaller difference between $P_i$ and $Q_i$. Equation (19) measures the relative information entropy of the two systems under a specific attribute; when the information entropy is zero, the two systems are identical under the evaluation criteria of that attribute.
Definition 3.
For two four-parameter interval sets $p = [p_L, p_{ML}, p_{MR}, p_R]$ and $q = [q_L, q_{ML}, q_{MR}, q_R]$, the relative information entropy of $p$ and $q$ is defined as:
$$E(p, q) = \sum_{i = L, ML, MR, R} \omega_i \left( p_i \log_2 \frac{p_i}{\frac{1}{2}(p_i + q_i)} + (1 - p_i) \log_2 \frac{1 - p_i}{1 - \frac{1}{2}(p_i + q_i)} \right),$$
where $\omega_i$ is the weighting coefficient. Equation (20) is asymmetric in $p$ and $q$, which does not correspond to the actual underwater counter-game. Therefore, the definition of the relative information entropy is improved in the following:
$$IE(p, q) = E(p, q) + E(q, p),$$
where $IE(p, q)$ denotes the improved relative information entropy between two four-parameter interval sets $p$ and $q$.
Property 1.
$IE(p, q) \ge 0$; the equal sign holds if and only if $p = q$.
Property 2.
$IE(p, q) = IE(q, p)$.
Here, we give the proof of the equality condition in Property 1.
Proof. 
For any $i \in \{L, ML, MR, R\}$, define the corresponding component of the relative information entropy $IE_i$ as
$$IE_i = p_i \log_2 \frac{p_i}{\frac{1}{2}(p_i + q_i)} + (1 - p_i) \log_2 \frac{1 - p_i}{1 - \frac{1}{2}(p_i + q_i)} = -p_i \log_2 \frac{\frac{1}{2}(p_i + q_i)}{p_i} - (1 - p_i) \log_2 \frac{1 - \frac{1}{2}(p_i + q_i)}{1 - p_i}.$$
Because $f(x) = \log_2 x$ is a concave function, Jensen's inequality yields:
$$IE_i \ge -\log_2 \left[ p_i \cdot \frac{\frac{1}{2}(p_i + q_i)}{p_i} + (1 - p_i) \cdot \frac{1 - \frac{1}{2}(p_i + q_i)}{1 - p_i} \right] = -\log_2 1 = 0,$$
and the equality holds if and only if $p_i = q_i$, in which case $IE_i(p, q) = 0$. In the same way, it can be proven that $IE_i(q, p) = 0$ when $p_i = q_i$. This completes the proof of Property 1. □
Let $d_{max}^{i}$ denote the relative information entropy between the payment of strategy $i$ and the maximum payment, and $d_{min}^{i}$ that between the payment of strategy $i$ and the minimum payment, both computed from Equation (21). The relative closeness of the payment of strategy $i$ can then be expressed as:
$$C_i = \frac{d_{min}^{i}}{d_{max}^{i} + d_{min}^{i}}.$$
The multi-AUV maneuver strategies can be sorted according to $C_i$ in the last equation. In the countermeasure process, the strategy with the largest $C$ value gets the highest priority; when the $C$ values of different strategies are the same, the strategy with the smaller $d_{max}$ gets the higher priority.
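The ranking pipeline of Equations (19)–(22) can be sketched as follows. The equal weights $\omega_i$, the use of componentwise maximum/minimum payments as the reference "best" and "worst" four-parameter sets, and the assumption that all components are normalized to $[0, 1]$ are our own illustrative choices:

```python
import math

def _h(p, q):
    """One component of the improved relative entropy, Eq. (19); 0*log(0) := 0."""
    m = 0.5 * (p + q)
    t1 = p * math.log2(p / m) if p > 0 else 0.0
    t2 = (1 - p) * math.log2((1 - p) / (1 - m)) if p < 1 else 0.0
    return t1 + t2

def ie(p, q, w=(0.25, 0.25, 0.25, 0.25)):
    """Symmetric relative entropy IE(p, q) of four-parameter sets, Eq. (21)."""
    return (sum(wi * _h(pi, qi) for wi, pi, qi in zip(w, p, q))
            + sum(wi * _h(qi, pi) for wi, pi, qi in zip(w, p, q)))

def relative_closeness(payoffs):
    """Closeness C_i of each strategy payoff to the best payoff, Eq. (22)."""
    best = tuple(max(p[i] for p in payoffs) for i in range(4))
    worst = tuple(min(p[i] for p in payoffs) for i in range(4))
    cs = []
    for p in payoffs:
        d_max, d_min = ie(p, best), ie(p, worst)
        cs.append(d_min / (d_max + d_min) if d_max + d_min > 0 else 1.0)
    return cs
```

A strategy whose payoff set coincides with the best reference gets closeness 1, and one coinciding with the worst reference gets closeness 0.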

4.2. State Transition of Payment Functions

When one multi-AUV system completes the $(k-1)$-th game stage and reaches the $k$-th stage, the multi-AUV strategy is $i = F(h)$, which means the $i$-th strategy is used in this stage. At this moment, the game situation of the last stage is $S^{k-1}$, and the estimated confrontation situation after the current stage $s^k$ is $S^k$. When the multi-AUV system completes the maneuver strategy $s^k$, the situation state of the $k$-th stage transfers to:
$$\begin{cases} W_A(s^k) = W_A(s^{k-1}) + \Delta W_A(s_i^k) \\ W_C(s^k) = W_C(s^{k-1}) + \Delta W_C(s_j^k) \end{cases},$$
where $W_A(s^k)$ is the situation advantage state interval of the current strategy, and $\Delta W_A(s_i^k)$ represents the variation between the state intervals $W_A(s^k)$ and $W_A(s^{k-1})$. $\Delta W_A(s_i^k)$ can be determined directly from the $(k-1)$-th state $W_A(s^{k-1})$ and the definition of the $i$-th strategy. For example, if the $i$-th strategy is to keep the current speed, $\Delta W_A(s_i^k)$ is the product of the speed vector in $W_A(s^{k-1})$ and the stage interval time. $W_C(s^k)$ is the energy efficiency advantage state interval of the current strategy under the $j$-th strategy.

4.3. Nash Equilibrium Optimal Solution

The payment of the complete information zero-sum game is replaced by the interval information set in this paper; thus, the payment of the interval information zero-sum game discussed here is analogous to that of the complete information zero-sum game. For the multi-AUV attack and defense game, a saddle point is not guaranteed to exist in pure strategies; thus, the attacker and the defender can only choose strategies randomly, with certain probabilities, from the strategy set. Therefore, the Nash equilibrium of the mixed strategy is discussed here.
Definition 4.
For game $G = \{N, S, U\}$, define $x_i, y_j$ as the probabilities with which players $N_1, N_2$ choose strategies $s_{1i}^k, s_{2j}^k$ at the $k$-th stage of the game from strategy sets $S_1, S_2$. The mixed strategies of the players in this game can be expressed as follows.
$$x = (x_1, x_2, \ldots, x_m), \quad \sum_{i=1}^{m} x_i = 1, \; x_i \ge 0; \qquad y = (y_1, y_2, \ldots, y_n), \quad \sum_{j=1}^{n} y_j = 1, \; y_j \ge 0.$$
Definition 5.
If $\max_{x \in S^m} \min_{y \in S^n} IE(x, y) = \min_{y \in S^n} \max_{x \in S^m} IE(x, y) = IE(x^*, y^*) = v$, then $x^*, y^*$ are the optimal mixed strategies of players $N_1, N_2$ at this stage of the game, $(x^*, y^*)$ is the optimal mixed situation, and $v$ is the expected benefit to the players.
According to the proposed interval information game, $IE(x, y)$ in this paper is an interval information set. Assume $f_u = [f_u^L, f_u^R]$, $u = 1, 2, \ldots, mn$, is the interval payment when our multi-AUV system chooses maneuver strategy $x_m$ and the enemy multi-AUV system chooses maneuver strategy $y_n$ at this stage of the game.
According to Definition 3 and Definition 4, the Nash equilibrium [34] of our multi-AUV system under the mixed strategy x = x 1 , x 2 , , x m can be achieved as follows:
$$v = \max_{x} \min_{1 \le j \le n} \sum_{i=1}^{m} F_{ij} x_i.$$
Considering the actual underwater environment constraints, Equation (25) can be transformed into an optimization problem with interval uncertain parameters, as follows:
$$\begin{aligned} v = \max_{x} \ & v^R(x) \\ \text{s.t.} \ & \sum_{i=1}^{m} F_{ij} x_i \ge v^R(x), \quad j = 1, 2, \ldots, n \\ & \sum_{i=1}^{m} x_i = 1 \\ & x_i \ge 0, \quad i = 1, 2, \ldots, m. \end{aligned}$$
Therefore, the optimal mixed strategy of the multi-AUV maneuver counter-game can be obtained by solving this optimization problem.
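Equation (26) is a standard linear program once the interval payoffs are reduced to representative scalars (e.g., interval midpoints), so any LP solver applies. As a dependency-free sketch, the max-min value of a scalar payoff matrix can also be approximated by fictitious play, which converges for two-person zero-sum games; the matrix, the iteration count, and the function name are illustrative:

```python
def solve_zero_sum(F, iters=20000):
    """Approximate v = max_x min_j sum_i F[i][j] * x[i] and the mixed
    strategy x of the row player by fictitious play on matrix F."""
    m, n = len(F), len(F[0])
    row_counts = [0] * m
    row_payoff = [0.0] * m   # cumulative payoff of each row pure strategy
    col_payoff = [0.0] * n   # cumulative loss of each column pure strategy
    i = j = 0                # arbitrary initial pure strategies
    for _ in range(iters):
        row_counts[i] += 1
        for a in range(m):
            row_payoff[a] += F[a][j]
        for b in range(n):
            col_payoff[b] += F[i][b]
        # Each player best-responds to the opponent's empirical play
        i = max(range(m), key=lambda a: row_payoff[a])
        j = min(range(n), key=lambda b: col_payoff[b])
    x = [c / iters for c in row_counts]
    v = min(sum(F[a][b] * x[a] for a in range(m)) for b in range(n))
    return x, v
```

For the matching-pennies matrix [[1, -1], [-1, 1]], the empirical frequencies approach the equilibrium (0.5, 0.5) and the value approaches 0.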

5. Fractional-Order Differential Evolution Algorithm

The DE algorithm has the characteristics of simple operation, high robustness, strong optimization ability, etc. [16]. However, slow convergence and trapping in a local extremum may occur when the DE algorithm is applied in the optimization process, which makes satisfying the real-time counter-game requirements difficult. To address this problem, a fractional-order differential evolution (FDE) algorithm is presented in this section.
(1) Fitness function.
Normally, the optimal strategy of game player $N_1$ is to maximize the payoff benefit under the constraint conditions, while game player $N_2$ pursues the opposite. Therefore, the fitness function here can be the optimization objective function presented in Equation (26).
(2) Mutation.
According to Equations (3)–(5), the variation vector of FDE mutation is proposed within the Caputo definition:
$$w_{i,g} = h^q \left[ v_{i,g} + F_i (v_{best,g} - v_{i,g}) + F_i (v_{r_1,g} - v_{r_2,g}) \right] + \frac{(g+1)^{-q}}{\Gamma(1-q)} v_{i,0} - \sum_{k=1}^{g+1} s_k v_{i,g+1-k},$$
where
$$s_0 = 1, \quad s_k = \left( 1 - \frac{q+1}{k} \right) s_{k-1},$$
$h$ is the iteration step of $v_{i,g}$, and the other parameters are the same as defined in Equation (5).
In particular, the variation vector (27) with h = 1 becomes
$$w_{i,g} = v_{i,g} + F_i (v_{best,g} - v_{i,g}) + F_i (v_{r_1,g} - v_{r_2,g}) + \frac{(g+1)^{-q}}{\Gamma(1-q)} v_{i,0} - \sum_{k=1}^{g+1} s_k v_{i,g+1-k}.$$
Remark 1.
The variation vector (27) of the FDE mutation is derived by the combination of Equation (5) and Caputo fractional derivatives. In fact, Equation (5) is a special case with h = 1 of the following discrete iteration:
$$\frac{w_{i,g} - v_{i,g}}{h} = F_i (v_{best,g} - v_{i,g}) + F_i (v_{r_1,g} - v_{r_2,g}).$$
The first-order difference w i , g v i , g h is replaced by the Caputo fractional one (4), i.e.,
$$\frac{1}{h^q} \left[ w_{i,g} + \sum_{k=1}^{g+1} (-1)^k \binom{q}{k} v_{i,g+1-k} \right] - \frac{\left[ (g+1)h \right]^{-q}}{\Gamma(1-q)} v_{i,0} = F_i (v_{best,g} - v_{i,g}) + F_i (v_{r_1,g} - v_{r_2,g}).$$
In addition, denote
$$ s_k = (-1)^{k} \binom{q}{k}, \quad k = 1, 2, \ldots, g+1, $$
where $s_k$ can be calculated through the recursive scheme
$$ s_0 = 1, \quad s_k = \left( 1 - \frac{1+q}{k} \right) s_{k-1}. $$
Then, the variation vector (27) of the FDE mutation follows directly.
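As an illustrative sketch (not the authors' code), the recursion for $s_k$ and the $h = 1$ variation vector above can be written in Python; the function names and the list-based vector representation are our own choices:

```python
from math import gamma

def gl_coeffs(q, n):
    """Coefficients s_k = (-1)^k * C(q, k), k = 0..n, computed via the
    recursion s_0 = 1, s_k = (1 - (1 + q) / k) * s_{k-1}."""
    s = [1.0]
    for k in range(1, n + 1):
        s.append((1.0 - (1.0 + q) / k) * s[-1])
    return s

def fde_mutation(history, v_best, v_r1, v_r2, F_i, q):
    """Variation vector of the FDE mutation with iteration step h = 1.
    `history` holds this individual's past positions v_{i,0}, ..., v_{i,g}
    (oldest first), each a list of floats; the trailing sum is the
    fractional memory term over all previous generations."""
    g = len(history) - 1
    v_i0, v_ig = history[0], history[-1]
    s = gl_coeffs(q, g + 1)
    w = []
    for d in range(len(v_ig)):
        # memory term: sum of s_k * v_{i,g+1-k} for k = 1 .. g+1
        mem = sum(s[k] * history[g + 1 - k][d] for k in range(1, g + 2))
        w.append(v_ig[d]
                 + F_i * (v_best[d] - v_ig[d])
                 + F_i * (v_r1[d] - v_r2[d])
                 + (g + 1) ** (-q) * v_i0[d] / gamma(1 - q)
                 - mem)
    return w
```

The memory term is what distinguishes the FDE mutation from classical DE: each new variation vector depends on the individual's entire trajectory, weighted by the decaying coefficients $s_k$.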
Note that in Equation (27), the scaling factor $F_i$ scales each base vector to generate a new variation vector. A larger $F_i$ searches for a potentially optimal solution over a wider range, whereas a smaller one speeds up convergence and improves accuracy. Meanwhile, when the fitness of each individual is relatively good, a small $F_i$ is preferable to reduce the disturbance of the better individuals; conversely, when the fitness of each individual is relatively poor, a larger $F_i$ is preferred to expand the search range of the solution. Combined with the game algorithm proposed in this paper, the scaling factor $F_i$ is determined by the evolution time and the difference between the best and worst individuals, as follows:
$$ F_i = (F_{max} - F_{min}) \times \frac{f_{best} - f_i}{f_{best} - f_{worst}} \times \Delta g + F_{min}, $$
where $\Delta g = (G - g)/G$, $G$ is the maximum number of iterations, and $g$ is the current iteration; $f_{best}$, $f_{worst}$, and $f_i$ are the best, worst, and current individual fitness, respectively; $F_{max}$ and $F_{min}$ are the maximum and minimum values of $F$, respectively. If the fitness difference between the current individual and the optimal individual is large, the individual is far from the optimal individual in the search space. A larger $F_i$ then applies a larger disturbance to the individual, which expands the search scope and enhances the global search ability of the algorithm. If the fitness difference is small, $F_i$ takes a smaller value and the disturbance is correspondingly small, so the search is carried out only in a small neighborhood of the individual, enhancing the exploitation ability of the algorithm. Moreover, at a later stage of the evolution, $\Delta g$ becomes relatively small, which confines the search to the neighborhood of the current individual and ensures the accuracy of the algorithm.
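A minimal sketch of this adaptive scaling factor; the function name, the default bounds $F_{min} = 0.2$ and $F_{max} = 0.9$, and the fallback for a population with uniform fitness are our assumptions:

```python
def scaling_factor(f_i, f_best, f_worst, g, G, F_min=0.2, F_max=0.9):
    """Adaptive scaling factor: individuals far from the best (in fitness)
    receive a larger F_i, and delta_g = (G - g) / G shrinks the factor as
    the evolution approaches the maximum iteration count G."""
    if f_best == f_worst:
        # degenerate population (all fitness equal): our fallback choice
        return F_min
    delta_g = (G - g) / G
    return (F_max - F_min) * (f_best - f_i) / (f_best - f_worst) * delta_g + F_min
```

The normalized fitness gap $(f_{best} - f_i)/(f_{best} - f_{worst})$ lies in $[0, 1]$ for any individual, so $F_i$ always stays within $[F_{min}, F_{max}]$.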
(3) Crossover.
As shown in Equation (6), the crossover operation is described by
$$ u_{ij,g} = \begin{cases} w_{ij,g}, & \text{if } rand[0,1] < CR_i \ \text{or} \ j = j_{rand}, \\ v_{ij,g}, & \text{otherwise}. \end{cases} $$
Note that $CR_i$ is the crossover rate that determines, on each dimension, the probability of crossing the mutated individual with the original individual. A larger $CR_i$ for an individual with poor fitness accelerates the change of its structure. Conversely, a smaller $CR_i$ is preferred in the late evolution stage to reduce the disturbance of the target individual on the trial individual and preserve the convergence speed of the algorithm. The designed crossover rate is as follows:
$$ CR_i = \begin{cases} CR_{min}, & \bar{f} > f(v_{i,g}), \\ CR_{min} + (CR_{max} - CR_{min}) \times \Delta g, & \bar{f} \le f(v_{i,g}), \end{cases} \tag{30} $$
where $\bar{f}$ is the average fitness of the current population, $CR_i$ is the current crossover rate, and $CR_{max}$ and $CR_{min}$ are the maximum and minimum values of $CR$, respectively. Equation (30) shows that when the fitness of the target individual $v_{i,g}$ is smaller than the average fitness, the target individual is relatively superior; a smaller $CR_i$ should then be chosen so that more information of the trial vector is obtained from the target vector $v_{i,g}$. Otherwise, more information of the trial vector $u_{i,g}$ is obtained from the variation vector $w_{i,g}$, which improves the diversity of the population. $\Delta g$ ensures a large $CR_i$ at an early stage of evolution, increasing population diversity and speeding up convergence, while a small $CR_i$ in the late evolution stage is conducive to finding the optimal solution.
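The adaptive rate of Equation (30) and the binomial crossover can be sketched as follows; this assumes the minimization convention of Equation (30) (smaller fitness is better), and the function names and default $CR$ bounds are our own:

```python
import random

def crossover_rate(f_v, f_avg, g, G, CR_min=0.1, CR_max=0.9):
    """Adaptive crossover rate of Eq. (30): a below-average (superior)
    target individual keeps CR_min, while an inferior one receives a
    rate that decays with delta_g = (G - g) / G."""
    if f_avg > f_v:
        return CR_min
    return CR_min + (CR_max - CR_min) * (G - g) / G

def crossover(v_i, w_i, CR_i, j_rand):
    """Binomial crossover: each trial component comes from the mutant
    w_i when rand < CR_i or at the forced index j_rand, otherwise it
    is inherited from the target v_i."""
    return [w if (random.random() < CR_i or j == j_rand) else v
            for j, (v, w) in enumerate(zip(v_i, w_i))]
```

The forced index $j_{rand}$ guarantees that the trial vector differs from the target in at least one dimension, as in classical DE.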
(4) Selection.
According to Equation (7), the selection operation is restated as
$$ v_{i,g+1} = \begin{cases} u_{i,g}, & f(u_{i,g}) < f(v_{i,g}), \\ v_{i,g}, & \text{otherwise}, \end{cases} $$
where $v_{i,g+1}$ is the next-generation individual.
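A one-line sketch of this greedy selection under the minimization convention (the function name is ours):

```python
def select(v_i, u_i, fitness):
    """One-to-one greedy selection: the trial vector u_i replaces the
    target v_i only if it is strictly better (smaller fitness)."""
    return u_i if fitness(u_i) < fitness(v_i) else v_i
```

Because each target competes only against its own trial vector, the best fitness in the population is non-increasing from one generation to the next.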

6. Example

In this section, a multi-AUV countermeasure example is given to verify the effectiveness of the proposed algorithm. In the initialization of the proposed algorithm, several parameters need to be determined according to the following guidelines.
  • The initial states of game players $W_A(s_1)$ and $W_C(s_1)$, such as the initial positions, velocity, deflection angle, and pitch angle, as well as the time interval of the game steps and the weighting coefficients $k_1, k_2, k_3$ in Equation (12).
  • The strategy spaces of both sides $S = \{ s_{1i}^{k}, s_{2j}^{k} \}$, such as keep the current speed, turn left, turn right, pitch up, pitch down, etc.
  • The maximum number of iterations $G$ in the fractional-order DE, as well as the population $N_P$, iteration step $h$, and fractional order $q$.
Suppose ‘A’ and ‘D’ are engaged in a two-versus-two underwater AUV counter-game. The initial positions of ‘A1’ and ‘A2’ are (0 m, 100 m, 200 m) and (0 m, −100 m, 200 m), and those of ‘D1’ and ‘D2’ are (800 m, 100 m, 200 m) and (800 m, −100 m, 200 m), respectively. The velocity, deflection angle, and pitch angle of both ‘A1’ and ‘A2’ are 23 m/s, 60°, and 5°, respectively, and those of both ‘D1’ and ‘D2’ are 25 m/s, 120°, and 3°. Both sides have the same control ability, and the time interval of the game steps is 5 s. The weighting coefficients are set as $k_1 = 0.2$, $k_2 = 0.4$, $k_3 = 0.4$ referring to [30]. The strategy space of each AUV on both sides consists of: keep the current speed, speed up, speed down, turn left, turn right, pitch up, and pitch down. In addition, the maximum number of iterations is set as $G = 500$, the population as $N_P = 100$, the iteration step as $h = 1$, and the fractional order as $0 < q \le 1$. With all the above initial conditions, the multi-AUV maneuver countermeasure model is constructed according to Section 3 and Section 4, and the fractional-order differential evolution of Section 5 is employed for strategy optimization. For different fractional orders $q$, the optimization errors of the first game step are calculated and shown in Figure 1. When the optimization errors are close to 0, the convergence rates are not clearly visible in Figure 1; thus, the base-10 logarithms ($\log_{10}$) of the optimization errors for the different fractional orders $q$ are exhibited in Figure 2.
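For concreteness, the scenario parameters above can be gathered into a single record; the structure and field names below are illustrative, not from the paper, and $q = 0.8$ is merely one admissible order in $(0, 1]$:

```python
# Scenario parameters of the two-vs-two example (values quoted in the text).
config = {
    "initial_positions": {            # (x, y, z) in metres
        "A1": (0.0, 100.0, 200.0),
        "A2": (0.0, -100.0, 200.0),
        "D1": (800.0, 100.0, 200.0),
        "D2": (800.0, -100.0, 200.0),
    },
    "speed_mps": {"A": 23.0, "D": 25.0},
    "deflection_deg": {"A": 60.0, "D": 120.0},
    "pitch_deg": {"A": 5.0, "D": 3.0},
    "step_interval_s": 5.0,
    "weights": {"k1": 0.2, "k2": 0.4, "k3": 0.4},   # per [30]
    "strategies": ["keep speed", "speed up", "speed down",
                   "turn left", "turn right", "pitch up", "pitch down"],
    "fde": {"G": 500, "NP": 100, "h": 1, "q": 0.8},  # q in (0, 1]
}
```

Keeping the setup in one record makes it straightforward to rerun the experiment while sweeping only the fractional order $q$.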
According to Figure 1 and Figure 2, the optimization error of $q = 0.2$ is the best among all orders at 300 iterations, whereas $q = 0.8$ is the most superior at 500 iterations. Thus, a smaller order $q$ brings a faster optimization rate when the iteration budget is limited, while the optimization performance of larger orders visibly improves as the iteration count increases. Accordingly, the fractional-order DE algorithm can adjust the fractional order $q$ to the iteration budget to achieve the optimal strategy calculation.
It is obvious that ‘D’ possesses some advantage at the beginning. It should also be noted that the maximum number of maneuver steps should be determined according to the effectiveness of the AUVs used in the confrontation. There are 50 steps in the game process, whose expected benefits are shown in Figure 3. According to the previous section, the obtained expected benefits demonstrate that the Nash equilibrium of the interval information game is satisfied.
In order to compare the game performance, ‘A’ uses the cooperative dynamic maneuver countermeasure algorithm proposed in this paper, while ‘D’ uses the max–min countermeasure algorithm during the multi-AUV confrontation process [34]. The three-dimensional counter-game process, with five main stages, is shown in Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. The red dotted line represents the path of ‘A1’, the red solid line represents ‘A2’, and the blue dotted and solid lines represent ‘D1’ and ‘D2’, respectively. The ‘*’ marks the initial position of each AUV, and the end of each line marks its current position. The confrontation ends when one side’s expected benefit reaches the absolute advantage. In stage 1 (Figure 4), ‘A’ possesses the dominant position and decides to attack ‘D’ actively: ‘A1’ tries to attack ‘D2’, and ‘A2’ moves towards ‘D1’. In stage 2 (Figure 5), ‘A1’ misses the attack on ‘D2’ and tries to attack ‘D1’ together with ‘A2’; ‘D2’ escapes the attack of ‘A1’ and tries to outflank ‘A2’. The situation changes when ‘D’ takes the dominating position in stage 3; this can also be seen in Figure 3, where the expected benefits change from positive to negative. In Figure 6, ‘D1’ and ‘D2’ attempt a converging attack on ‘A2’, while ‘A1’ tries to attack ‘D1’ and support ‘A2’. Then, in stage 4, the situation changes again: ‘A’ regains the dominating position and the expected benefits change from negative to positive. ‘A2’ turns continuously and successfully drives ‘D1’ away; then ‘A1’ and ‘A2’ try to attack ‘D2’, but ‘D1’ and ‘D2’ escape in two different directions. In the end, both ‘A1’ and ‘A2’ possess dominating positions, so ‘A’ achieves the absolute advantage and ends the game, as illustrated in Figure 8. This example validates the effectiveness of the proposed multi-AUV dynamic maneuver countermeasure algorithm.

7. Conclusions

In this paper, the interval information set is introduced into game theory to study a dynamic maneuver countermeasure algorithm for multi-AUV systems. Marine environment characteristics, including various kinds of uncertainty, are expressed by interval information sets. The maneuver countermeasure model of the multi-AUV system is established based on the aforementioned interval information sets, and the payment interval ranking is achieved based on the four-parameter interval set and relative entropy. Combined with the background and model characteristics, the optimal maneuver strategy satisfying the Nash equilibrium condition is obtained using the fractional-order DE algorithm in each step of the dynamic counter-game process. A multi-AUV dynamic counter-game example with several maneuver countermeasure steps is provided to illustrate the superiority and effectiveness of the proposed algorithm. Our future work will focus on different communication conditions in underwater environments, which may introduce additional constraints and computational burden into the countermeasure process. Moreover, intelligent algorithms incorporating learning mechanisms may be explored to obtain the optimal strategy of the multi-AUV game.

Author Contributions

Conceptualization, L.L. and S.Z.; methodology and software, S.Z.; validation, J.W.; writing—original draft preparation, L.L.; writing—review and editing, S.Z.; supervision, L.Z.; project administration, J.W.; funding acquisition, L.L. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shenzhen Science and Technology Program under Grant JCYJ20210324122010027; the Guangdong Basic and Applied Basic Research Foundation under Grant 2019A1515111073; the Science and Development Program of Local Lead by Central Government, Shenzhen Science and Technology Innovation Committee under Grant 2021Szvup111; the National Natural Science Foundation of China under Grant 52001259, 11902252 and 51979229; the Young Talent fund of University Association for Science and Technology in Shaanxi, China under Grant 20200502.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fossen, T.I. Handbook of Marine Craft Hydrodynamics and Motion Control; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  2. Li, D.L.; Ling, D. AUV trajectory tracking models and control strategies: A review. J. Mar. Sci. Eng. 2021, 9, 1020. [Google Scholar] [CrossRef]
  3. Wynn, R.B.; Huvenne, V.A.; Le Bas, T.P.; Murton, B.J.; Connelly, D.P.; Bett, B.J.; Ruhl, H.A.; Morris, K.J.; Peakall, J.; Parsons, D.R.; et al. Autonomous Underwater Vehicles (AUVs): Their past, present and future contributions to the advancement of marine geoscience. Mar. Geol. 2014, 352, 451–468. [Google Scholar] [CrossRef] [Green Version]
  4. Uchihori, H.; Cavanini, L.; Tasaki, M.; Majecki, P.; Yashiro, Y.; Grimble, M.; Yamamoto, I.; van der Molen, G.; Morinaga, A.; Eguchi, K. Linear parameter-varying model predictive control of AUV for docking scenarios. Appl. Sci. 2021, 11, 4368. [Google Scholar] [CrossRef]
  5. Cavanini, L.; Majecki, P.; Grimble, M.J.; Uchihori, H.; Tasaki, M.; Yamamoto, I. LPV-MPC Path Planner for Autonomous Underwater Vehicles. IFAC-Papers Online 2021, 54, 301–306. [Google Scholar] [CrossRef]
  6. Fossen, T.I.; Lekkas, A.M. Direct and indirect adaptive integral line-of-sight path-following controllers for marine craft exposed to ocean currents. Int. J. Adapt. Control. Signal Process. 2017, 31, 445–463. [Google Scholar] [CrossRef]
  7. Zhang, L.C.; Li, Y.; Liu, L.; Tao, X. Cooperative navigation based on cross entropy: Dual leaders. IEEE Access 2019, 7, 151378–151388. [Google Scholar] [CrossRef]
  8. Qi, X.; Cai, Z.J. Three-dimensional formation control based on nonlinear small gain method for multiple underactuated underwater vehicles. Ocean. Eng. 2019, 123, 45–54. [Google Scholar] [CrossRef]
  9. Chen, J.; Zhu, H.; Zhang, L.; Sun, Y. Research on fuzzy control of path tracking for underwater vehicle based on genetic algorithm optimization. Ocean. Eng. 2018, 156, 217–223. [Google Scholar] [CrossRef]
  10. Chin, H. Knowledge-based system of supermaneuver selection for pilot aiding. J. Aircr. 1989, 26, 1111–1117. [Google Scholar] [CrossRef]
  11. Zuo, J.; Yang, R.; Zhang, C.; Li, Z. Intelligent decision of air combat maneuver based on heuristic reinforcement learning. J. Aeronaut. 2017, 38, 217–230. [Google Scholar]
  12. Fan, D.D.; Theodorou, E.; Reeder, J. Model-based stochastic search for large scale optimization of multi-agent UAV swarm. In Proceedings of the IEEE Symposium Series on Computational Intelligence, Bangalore, India, 18–21 November 2017; pp. 2216–2222. [Google Scholar]
  13. Poropudas, J.; Kai, V. Game-theoretic validation and analysis of air combat simulation models. IEEE Trans. Syst. Man Cybern.-Part A Syst. Hum. 2017, 40, 1057–1070. [Google Scholar] [CrossRef]
  14. Austin, F.; Carbone, G.; Falco, M.; Hinz, H.; Lewis, M. Game theory for automated maneuvering during air-to-air combat. J. Guid. Control. Dyn. 1990, 13, 1143–1149. [Google Scholar] [CrossRef]
  15. Gu, J.; Zhao, J.; Liu, W. Decision framework of air combat maneuver based on game theory and Memetic algorithm. Electron. Opt. Control 1990, 11, 20–23. [Google Scholar]
  16. Li, S. UAV Maneuver Decision Based on Game Model in Complex Air Combat Environment. Master’s Thesis, Nanjing University of Aeronautics and Astronautics, Nanjing, China, 2019. [Google Scholar]
  17. Garcia, E.; Casbeer, D.W.; Pachter, M. Active target defence differential game: Fast defender case. IET Control. Theory Appl. 2017, 11, 2985–2993. [Google Scholar] [CrossRef]
  18. Chen, J.; Zha, W.; Peng, Z.; Gu, D. Multi-player pursuit-evasion games with one superior evader. Automatica 2016, 71, 24–32. [Google Scholar] [CrossRef] [Green Version]
  19. Kilfoyle, D.B.; Baggeroer, A.B. The state of the art in underwater acoustic telemetry. IEEE J. Ocean. Eng. 2000, 25, 4–27. [Google Scholar] [CrossRef]
  20. Osborne, M.J. An Introduction to Game Theory; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  21. Tarasov, V.E. Generalized memory: Fractional calculus approach. Fractal Fract. 2018, 2, 23. [Google Scholar] [CrossRef] [Green Version]
  22. Hristova, S.; Tersian, S.; Terzieva, R. Lipschitz Stability in Time for Riemann–Liouville Fractional Differential Equations. Fractal Fract. 2021, 5, 37. [Google Scholar] [CrossRef]
  23. Zhang, S.; Liu, L.; Xue, D. Nyquist-based stability analysis of non-commensurate fractional-order delay systems. Appl. Math. Comput. 2020, 377, 125111. [Google Scholar] [CrossRef]
  24. Zhang, S.; Liu, L.; Xue, D.; Chen, Y. Stability and resonance analysis of a general non-commensurate elementary fractional-order system. Fract. Calc. Appl. Anal. 2020, 23, 183–210. [Google Scholar] [CrossRef]
  25. Wu, G.; Baleanu, D.; Luo, W. Lyapunov functions for Riemann-Liouville-like fractional difference equations. Appl. Math. Comput. 2017, 314, 228–236. [Google Scholar] [CrossRef]
  26. Liu, L.; Zhang, S.; Xue, D.; Chen, Y. Robust stability analysis for fractional-order systems with time delay based on finite spectrum assignment. Int. J. Robust Nonlinear Control 2019, 29, 2283–2295. [Google Scholar] [CrossRef]
  27. Hristova, S.; Ivanova, K. Caputo fractional differential equations with non-instantaneous random erlang distributed impulses. Fractal Fract. 2019, 3, 28. [Google Scholar] [CrossRef] [Green Version]
  28. Fleetwood, K. An introduction to differential evolution. In Proceedings of the 26th Mathematics and Statistics of Complex Systems (MASCOS) One Day Symposium, Brisbane, Australia, 26 November 2004. [Google Scholar]
  29. Chen, T. The inclusion-based TOPSIS method with interval-valued intuitionistic fuzzy sets for multiple criteria group decision making. Appl. Soft Comput. 2015, 26, 57–73. [Google Scholar] [CrossRef]
  30. Zhao, M.; Li, B.; Wang, M. Game strategy research of multi-UAV aerial combat beyond visual range. Electron. Opt. Control 2015, 22, 41–45. [Google Scholar]
  31. Sengupta, A.; Pal, T.K. On comparing interval numbers. Eur. J. Oper. Res. 2000, 127, 28–43. [Google Scholar] [CrossRef]
  32. Liu, X.; Zhao, K. Interval Number Decision Set Pair Analysis; Science Press: Beijing, China, 2014. [Google Scholar]
  33. Cover, T.M.; Thomas, J.A. Foundation of Cybernetics; Mechanical Engineering Press: Beijing, China, 2005. [Google Scholar]
  34. La, Q.D.; Yong, H.C.; Soong, B.H. An Introduction to Game Theory; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
Figure 1. Optimization errors of different fractional orders q.
Figure 2. Base-10 logarithm ($\log_{10}$) of optimization errors of different fractional orders q.
Figure 3. Expected benefit $I_E(x^*, y^*)$ referring to Definition 5.
Figure 4. Multi-AUV dynamic maneuver countermeasure: Stage 1. (‘*’ initial position; line end, current position).
Figure 5. Multi-AUV dynamic maneuver countermeasure: Stage 2. (‘*’ initial position; line end, current position).
Figure 6. Multi-AUV dynamic maneuver countermeasure: Stage 3. (‘*’ initial position; line end, current position).
Figure 7. Multi-AUV dynamic maneuver countermeasure: Stage 4. (‘*’ initial position; line end, current position).
Figure 8. Multi-AUV dynamic maneuver countermeasure: Stage 5. (‘*’ initial position; line end, current position).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Liu, L.; Wang, J.; Zhang, L.; Zhang, S. Multi-AUV Dynamic Maneuver Countermeasure Algorithm Based on Interval Information Game and Fractional-Order DE. Fractal Fract. 2022, 6, 235. https://doi.org/10.3390/fractalfract6050235

