Article

Pursuit Problem of Unmanned Aerial Vehicles

Department of Modelling in Social and Economical Systems, Saint-Petersburg State University, Universitetskaya Embankment, 7/9, 199034 St. Petersburg, Russia
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(19), 4162; https://doi.org/10.3390/math11194162
Submission received: 14 July 2023 / Revised: 18 September 2023 / Accepted: 19 September 2023 / Published: 4 October 2023
(This article belongs to the Special Issue Multi-Agent Systems of Competitive and Cooperative Interaction)

Abstract

The study examines scenarios involving a single pursuer tracking a single evader, as well as situations where multiple pursuers chase multiple evaders. We formulate this as a search and pursuit problem for unmanned aerial vehicles (UAVs). Game theory offers a mathematical framework to model and examine strategic interactions involving multiple decision-makers. By employing game-theoretic principles to address the search and pursuit problem, our objective is to optimize the efficiency of strategies for detecting and capturing UAVs.

1. Introduction

The intensive research on search problems commenced during World War II. Subsequently, the methods of search theory found applications in solving a wide range of other problems [1,2]. Notably, two significant monographs on the theory of searching for moving objects have emerged, each delving into various aspects of search theory [3]. The developments in the theory of object search can be broadly categorized into three groups: discrete search, continuous search, and search game problems.
The first group consists of work related to the search for an object having a finite or countable set of states. The second group consists of studies looking for an object whose positions form a specific area (finite or infinite) in a search plane or space. The first and second groups cover situations in which the sought object does not actively counteract the search system. The third group, search game problems, consists of studies that consider search problems with active opposition.
From the works in the first group, the following articles stand out. In [4], Staroverova O.V. introduces a search algorithm designed to minimize the average search time, assuming that the detection probability $\alpha_i$ is unaffected by the cell number and that the search concludes only upon detection of the evading object by the pursuer. Kelin explored the feasibility of solving the optimal allocation of search efforts using a Markov model, particularly when the evading object E moves randomly. He addressed two distinct scenarios:
1. In the first scenario, the evader’s movement is independent of its location, and the pursuer does not receive any information about the evader’s whereabouts throughout the search process.
2. In the second scenario, the evader’s movement is influenced by its previous location, and the search system possesses information about the evader’s prior location at each moment in time.
In both cases, it is assumed that the search system is aware of the probabilistic pattern governing the evader’s movements [5].
Koopman addressed the search problem on a surface where the objects being sought are uniformly distributed and their course angles are unknown (equally probable within the range $[0, 2\pi)$). Assuming that the searcher P maintains a constant velocity, he calculated the average number of targets entering a designated circle centered at P per unit of time, both at an arbitrary angle $\gamma \in [0, 2\pi)$ and at a specified angle within $[\gamma, \gamma + d\gamma]$, where $\gamma$ is the angle between the searcher's velocity vector and the line connecting points P and E [6].
In a related work [7], Koopman successfully tackled the problem of optimizing the allocation of search efforts. This optimization aims to maximize the likelihood of detecting a stationary target, given the prior distribution of its location and an exponential conditional detection function. This achievement marked the first significant result in this domain.
Charnes and Cooper [8] formalized the problem of optimal distribution of search efforts as a convex programming problem. MacQueen and Miller [9] investigated the search problem in which one must decide whether to start the search at all and, once started, whether to continue or stop it; they obtained a general functional equation. Pozner [10] solved the two-stage (preliminary and final) search problem for a lost satellite under the condition that the searcher obtains supplementary information during each inspection: namely, which of two cell numbers is greater, that of the cell containing the hidden object or that of the cell just inspected.
Danskin analyzed an antagonistic game concerning the allocation of sequential search efforts across various areas, wherein false targets coexist alongside the actual desired object [11].
Halpern delved into a minimax problem that differs from the “Princess and Monster” scenario in that E possesses knowledge about P’s trajectory [12]. For the case of a rectangular area, the “Princess and Monster” problem was resolved by Gal [13].
This study centers around investigating search problems involving mobile targets. Depending on the available information for the search participants, different approaches are proposed for devising a solution, thereby determining optimal behaviors.

2. Fundamentals of Object Search Theory (Background Information)

Currently, the method of mathematical modeling has gained wide popularity. It involves constructing and studying mathematical models to analyze and forecast various processes in natural, technical, economic, and other sciences [14,15,16,17,18,19,20,21]. Mathematical models can also be used to prevent illegal activities. Terrorists and members of drug cartels utilize modern methods of communication and transportation in their operations, and inspection measures are needed to prevent the spread of their activities. To implement countermeasures effectively, it is essential to utilize modern technical tools and a system for optimizing resource utilization. Additionally, it is important to develop dynamic inspection models.
Subject of object search theory: The subject of object search theory revolves around the pursuit of tangible items within various environments. A search can be described as the deliberate exploration of a specific spatial area to detect the presence of an object located therein. Detection, in this context, pertains to acquiring information about an object’s location through direct energetic interaction. This process employs detection tools such as optical, radar, hydroacoustic, and other devices.
One way to study the search process is to build and analyze mathematical models that reflect the objective laws of search and enable us to establish causal relationships between the conditions of search performance and its results. The search process involves two sides: the search object and the observer, either of which may be an individual or a group. Search objects are various items located in different environments, such as aircraft, objects on the surface of the Earth, ships and vessels, etc.
The search object has two characteristic features:
1. Its properties differ from the properties of the environment in which the search is conducted.
2. Information about the location of the object before the start of the search and during its execution is usually uncertain.
It is this uncertainty that causes search actions, the essence of which is to obtain information about the location of the object. The contrast of the search object against the background of the environment creates the possibility of its detection.
The presence or absence of radiation is a characteristic of search objects. The operation of detection tools is based on detecting a signal reflected from the search object or receiving the object’s own radiation. The search process largely depends on the properties of the detection object, as well as on the parameters of the detection equipment and the characteristics of the surrounding environment. The physical basis of all these issues underlies the theory of search.
In the search process, the use of detection tools is combined with the active movement of the observer carrying them. Therefore, the study of the patterns of mutual movement between the observer and the search object becomes especially important. These patterns are a crucial component of the theory of object search and constitute the kinematics of search.
An important place in the theory of object search is occupied by the justification of search success criteria and methods of calculating indicators of search effectiveness. The ultimate goal of the theory of search is to choose the optimal methods for performing search actions in a specific situation and under specific conditions, the so-called search situation. The choice of the optimal search method is based on an analysis of the mathematical model of the corresponding search situation and reduces to establishing control parameters of the search that ensure the solution of the search task in the shortest or specified time with minimal search efforts.
Mathematical Models for Object Search: Constructing a mathematical model requires identifying all the essential factors and conditions that determine both the state and the development of the search process, as well as the possible control of this process. The model is composed of variables and conditions that are called elements. Variables that can be modified are called controllable, while those that cannot be modified are called uncontrollable. Depending on the nature of the search process, its mathematical model can contain only uncontrollable variables or both controllable and uncontrollable variables. Mathematical models of the first type are called descriptive, while models of the second type are normative.
In a descriptive model, there is no observer deciding about the search, nor is there a search object deciding to evade.
A normative model is characterized by the presence of at least one of the parties making a decision. Depending on the amount of information about the search situation and the regularities underlying it, such models can be classified into one of the following levels: deterministic, stochastic, and uncertain.
At the deterministic level, a normative model is constructed when the outcome of the search situation is subject to regularities and the factors influencing this outcome can be accurately measured or estimated, while random factors are either absent or can be neglected. In this case, it is quite difficult to collect and process data, except for simple situations that fully characterize the search conditions. In addition, purely mathematical difficulties arise in constructing and analyzing such a model. The task of choosing the optimal search method in the conditions of a normative model of the deterministic level is reduced to maximizing or minimizing the efficiency criterion.
At the stochastic level, the normative model, in accordance with probabilistic regularities, is represented as a random process, the course and outcome of which are described by certain characteristics of random variables. Construction of a model at this level is possible if there is sufficient factual material to estimate the necessary probability distributions. In constructing a model at the stochastic level, the method of statistical trials is widely used in addition to the classical apparatus of probability theory, and the principle of optimality is based on the maximization of the mathematical expectation of the efficiency criterion. Thus, the task is practically transferred to the deterministic level.
The indeterminate level is characterized by a volume of information at which only a set of possible search situations is known, but without any a priori information about the probability of each of them. Usually, such a volume of information is characteristic of a conflict situation in which the object of the search and the observer pursue directly opposing goals, choosing a certain course of action to achieve them.
Building a model and choosing the optimal way at the indeterminate level encounters some difficulties, since the principles of optimality in this case may not be entirely clear. Establishing such principles of optimality and finding solutions to such problems constitute the content of game theory.
Game theory deals with the study of mathematical models of decision-making in conditions of conflict. To construct a formal mathematical model of decision-making in conditions of conflict, it is necessary to mathematically describe all possible actions of the participants in the conflict and the results of these actions. The results of the players’ actions are evaluated using a numerical function called the payoff function.

3. Task Formulation

Let us move on to the description of the search process, which is the focus of the main part of this work. An interceptor equipped with a hydrolocator has detected the periscope of a submarine, which immediately disappeared in an unknown direction. A hydrolocator is a means of sound detection of underwater objects using acoustic radiation. It consists of a transceiver that sends sound pulses in the required direction and receives the pulses reflected from any object encountered along their path. After the initial detection of the submarine, the task of the interceptor is to catch the submarine in the shortest possible time. It is assumed that although the ship does not know the exact speed of the submarine, it knows a discrete set of speeds, one of which is the actual speed of the submarine. The formulated problem is a problem of secondary search for a moving object. The interceptor will be referred to as the pursuer and the submarine as the evader, denoted as P and E, respectively.
The work considers cases of continuous search, in which the counteraction between the boat and the ship is not considered, game problems of search, and the case of searching for n submarines. Thus, the goals of this research are the mathematical formalization of the process of search and interception of moving objects under various information conditions, the development of a procedure for finding the optimal solution, and the implementation of the algorithm using the Maple 2023 software package.
At the same time, we will convert the problem into a UAV problem and simulate it as follows.

4. One Pursuer and One Evader Scenario

Algorithm for finding the guaranteed capture time.
Let us present an algorithm for finding the completion time of the search under conditions when the pursuer does not know the speed of the evader with certainty. To do this, we start by describing the pursuer's behavior strategy. Assume that the speed of the pursuer is so much greater than the speed of the evading submarine that completion of the search is assured. At the initial moment $t_0$ of detection, the pursuer P accurately determines the location of the submarine and thus knows the distance $D_0$ between himself and the evader. To determine the guaranteed completion time of the search, we construct a normative model at the deterministic level. We establish a polar coordinate system $(\rho, \varphi, O)$ in which the origin $O$ corresponds to the position where the submarine was detected, and the polar axis $\rho$ extends through the location of the intercepting ship. The pursuer does not know the evader's speed $v$ precisely; however, it is known to be chosen from the discrete set $V_E = \{v_1, \ldots, v_n\}$. Denote by $v_P$ the maximum achievable speed of the pursuer's ship. Consequently, the evader E's dynamics are described by the following equations:
$\dot{\rho}_E = v$
$\dot{\varphi}_E = 0$
The dynamics of the pursuer are described by the equations:
$\dot{\rho}_P = \alpha, \quad |\alpha| \le v_\rho$
$\dot{\varphi}_P = \beta, \quad |\beta| \le v_\varphi$
$v_P = \sqrt{v_\rho^2 + v_\varphi^2}$
Since the speed of the fleeing vessel is not known with certainty, the pursuer assumes that E has speed $v_1 \in V_E$. To capture the submarine, at time $t_0$ the pursuer begins moving towards point $O$ with speed $v_P$ and continues until time $t_1$, at which point both players are at the same distance from point $O$, i.e.,
$\rho_P(t_1) = \rho_E(t_1)$
and
$\int_{t_0}^{t_1} v_1\,dt + v_P (t_1 - t_0) = D_0$
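Since $v_1$ is constant, the integral evaluates to $v_1(t_1 - t_0)$, so the two players close the gap $D_0$ at the combined speed $v_1 + v_P$, and $t_1$ follows in closed form. A minimal sketch (the function name is illustrative, not from the paper):

```python
def first_approach_time(D0, v1, vP, t0=0.0):
    """Time t1 of equal radii on the initial straight run:
    (v1 + vP) * (t1 - t0) = D0  =>  t1 = t0 + D0 / (v1 + vP)."""
    return t0 + D0 / (v1 + vP)
```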
If the encounter has not taken place by time $t_1$, the pursuer, choosing a direction of circumnavigation, continues to move around point $O$ so as to remain constantly at the same distance from point $O$ as the evading ship. Let us find the trajectory of motion corresponding to this behavior strategy. We will take the direction of circumnavigation to coincide with the positive direction of the polar angle. The speed of the interceptor ship can be decomposed into two components: radial $v_\rho$ and tangential $v_\varphi$. The radial component is the speed at which the ship moves away from the pole, i.e.,
$v_\rho = \dot{\rho}$
The tangential component is the linear speed of rotation relative to the pole, i.e.,
$v_\varphi = \rho \dot{\varphi}$
In order for the encounter to occur, the pursuer moves at maximum speed, keeping the radial component of the velocity equal to the speed of the fleeing vessel. Then, to find the trajectory of the pursuer, it is necessary to solve the system of differential equations.
$\dot{\rho} = v_1$
$\dot{\varphi}^2 \rho^2 = v_P^2 - v_1^2$
The initial conditions for this system are:
$\varphi(t_1) = 0$
$\rho(t_1) = v_1 t_1$
Solving it, we find:
$\varphi(t) = \frac{\sqrt{v_P^2 - v_1^2}}{v_1} \ln\frac{t}{t_1}$
$\rho(t) = v_1 t$
Then the search time can be expressed as a function of the polar angle:
$t(\varphi) = t_1 \exp\left(\frac{v_1 \varphi}{\sqrt{v_P^2 - v_1^2}}\right)$
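The two relations can be checked numerically, since $t(\varphi)$ and $\varphi(t)$ are mutually inverse. A small sketch (function names are illustrative):

```python
import math

def spiral_angle(t, t1, v1, vP):
    """phi(t) = sqrt(vP^2 - v1^2) / v1 * ln(t / t1)."""
    return math.sqrt(vP**2 - v1**2) / v1 * math.log(t / t1)

def spiral_time(phi, t1, v1, vP):
    """Inverse relation: t(phi) = t1 * exp(v1 * phi / sqrt(vP^2 - v1^2))."""
    return t1 * math.exp(v1 * phi / math.sqrt(vP**2 - v1**2))
```

One full turn of the spiral corresponds to $\varphi = 2\pi$, i.e., a turn completed by time $t_1 \exp\left(2\pi v_1 / \sqrt{v_P^2 - v_1^2}\right)$.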
Thus, the trajectory consists of both straight-line segments and logarithmic-spiral segments. By adhering to this behavior strategy, the pursuer will detect the submarine within a timeframe not exceeding one spiral turn. If the ship, having completed the spiral turn, does not find the submarine, the initial assumption about the speed of the evader was incorrect. Therefore, it is necessary to choose the next speed $v_2 \in V_E$ and assume that it is the actual speed. By time $t_2$, the evader would have covered a distance $\rho_E(t_2) = v_2 t_2$, while the pursuer has covered $\rho_P(t_2) = v_1 t_2$. There are two cases. If $\rho_P(t_2) > \rho_E(t_2)$, then the distance between the players equals $D_2 = \rho_P(t_2) - \rho_E(t_2)$, and to find the time $t_3$, the following equation must be solved:
$\int_{t_2}^{t_3} v_2\,dt + v_P (t_3 - t_2) = D_2$
If $\rho_P(t_2) < \rho_E(t_2)$, then the distance between the players equals $D_2 = \rho_E(t_2) - \rho_P(t_2)$; to find the moment $t_3$, we need to solve the equation:
$v_P (t_3 - t_2) - \int_{t_2}^{t_3} v_2\,dt = D_2$
After moving along a straight segment, the pursuer moves along a spiral. Thus, the pursuer can guarantee capture by iterating through all elements of the set V E . This algorithm for computing the guaranteed capture time is implemented using the software package Maple 2023.
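Under the assumptions above (detection at the origin at $t_0 = 0$, and $v_P$ greater than every candidate speed), the iteration over $V_E$ can be sketched as follows. This is an illustrative reimplementation of the procedure, not the authors' Maple code:

```python
import math

def guaranteed_capture_time(D0, vP, speeds):
    """Guaranteed search time when the candidate evader speeds are
    checked in the given order. Each phase consists of a straight run
    until the pursuer's radius matches the assumed evader radius,
    followed by one full turn of the logarithmic spiral at radial
    speed v."""
    assert vP > max(speeds), "capture is assured only if vP exceeds all speeds"
    v = speeds[0]
    t = D0 / (v + vP)                 # first straight run: (v1 + vP) t1 = D0
    rho_P = v * t
    for i, v in enumerate(speeds):
        if i > 0:
            gap = rho_P - v * t       # signed D2 = rho_P(t2) - rho_E(t2)
            if gap >= 0:
                t += gap / (vP + v)   # move inward, closing speed vP + v
            else:
                t += -gap / (vP - v)  # move outward, closing speed vP - v
        # one spiral turn: t -> t * exp(2*pi*v / sqrt(vP^2 - v^2))
        t *= math.exp(2 * math.pi * v / math.sqrt(vP**2 - v**2))
        rho_P = v * t                 # radial speed equals v on the spiral
    return t
```

Checking additional candidate speeds can only lengthen the guaranteed time, which is why the order of enumeration (Section 4.1) matters.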

4.1. Sorting the Set of Velocities for Pursuit

In order to minimize the assured search time, it is beneficial for the pursuer to arrange the set of velocities in a certain order. This involves the application of scheduling theory, which addresses such optimization problems. The primary challenge in establishing this order arises from the fact that the resultant tasks cannot be divided into separate, unrelated components. The sequence of actions taken influences the nature of the tasks at hand and the methods employed to address them. Scheduling theory addresses common elements found in most ordering dilemmas. While the resultant theoretical models may not precisely match specific real-world scenarios, their broad applicability allows for approximations to be derived across a wide spectrum of problems.
Scheduling theory extensively leverages a variety of established concepts and techniques aimed at making optimal decisions. These decisions are often arrived at through the creation and examination of relevant operational models. The advancement of methods for tackling extreme problems, statistical testing, heuristic programming, and similar domains has played a role in shaping the techniques for crafting optimal schedules. The maturation of scheduling theory into a distinct branch of applied mathematics coincided with the era when linear programming had reached a significant level of development. During this period, numerous linear models were studied, and tangible outcomes were achieved in the practical application of these models and methods.
Methods centered around sequential construction, analysis, and the elimination of scheduling options have become well-established approaches in scheduling theory. A multitude of domination rules have been put forth, aiming to discard entire sets of schedules through the examination of individually created segments. Computational strategies have been devised wherein the comparative merits of one schedule over others are methodically assessed. These strategies might incorporate heuristic components, such as the application of diverse preference functions, as well as certain training techniques paired with elements of statistical modeling and the like.
Dynamic programming techniques can be directly applied to address numerous problems in scheduling theory. In straightforward scenarios, optimal schedules can be devised by employing basic logic to assess alterations in schedule attributes brought about by elementary transformations. This set of methods constitutes the foundation of what is known as the combinatorial approach within scheduling theory. This approach has yielded the most efficient algorithms for solving several scheduling problems.
In our specific task, the number of speeds, denoted $n$, is finite and predetermined, and each speed must be validated. It is assumed that during schedule creation, processing can commence with any of the available speeds. The time required to validate each speed depends on the schedule being developed. Leveraging the algorithm delineated earlier for determining the guaranteed search time, we formulate a matrix of time values $T = (t_{ij})$, where $t_{ij}$ represents the duration of checking a speed when speed $v_i$ precedes speed $v_j$. The time taken by the pursuer to check all speeds, i.e., the guaranteed search time, then depends on the order. Denote $F_{\max} = F_{[n]} = \sum_{i=1}^{n} t_{[i-1],[i]}$.
Maximum test duration: The task is to check each of the $n$ speeds once and only once, in an order that minimizes the maximum duration of the pass. It is necessary to find a matrix $X$ of order $n$ with elements
$x_{ik} \in \{0, 1\}, \quad i = 1, \ldots, n, \quad k = 1, \ldots, n$
$\sum_{i=1}^{n} x_{ik} = 1, \quad k = 1, \ldots, n$
$\sum_{k=1}^{n} x_{ik} = 1, \quad i = 1, \ldots, n$
The sum $\sum_{i=1}^{n} \sum_{k=1}^{n} x_{ik} t_{ik}$ is minimized. Two methodologies are employed: they attain exact optimal solutions when the number of speeds is small, while also providing approximations for scenarios involving a larger array of speeds. The primary algorithm, known as the "branch and bound" method, engages in a sequential transformation of matrices into one of three standard configurations through a specific procedure.
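For a small number of speeds, the optimal order can also be found by direct enumeration of permutations, which serves as a correctness baseline for the more sophisticated methods. A brute-force sketch (the matrix layout follows $T = (t_{ij})$ above; the starting index is illustrative):

```python
from itertools import permutations

def best_check_order(T):
    """Brute-force minimization of t[[1],[2]] + ... + t[[n-1],[n]]
    over all orders of checking the n speeds."""
    n = len(T)
    best_cost, best_order = float('inf'), None
    for order in permutations(range(n)):
        cost = sum(T[order[i]][order[i + 1]] for i in range(n - 1))
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_order, best_cost
```

This scales as $n!$ and is practical only for validating the branch-and-bound and dynamic-programming methods on small instances.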
Initially, an initial matrix is formulated based on the provided problem. Subsequently, this matrix undergoes a systematic process following a predefined scheme to generate more straightforward iterations, which are also depicted in matrix form. The established procedures are repeatedly applied to each of these iterations until a conclusive solution to the problem is achieved. Thus, the analysis of each matrix results in one of three potential outcomes:
  • Directly acquiring a solution from the original matrix, particularly when the problem is of sufficient simplicity.
  • Disregarding a matrix from further consideration upon establishing that it does not yield a solution to the problem.
  • Employing branching, a method that involves simplifying the problem by contemplating two reduced-complexity variations of the original.
The solution to the problem involves checking all speeds and is given by a permutation of the indices $1, \ldots, n$:
$([1], [2], \ldots, [n])$
The obtained solution is a sum of $n$ terms, each of which is determined by an element of the matrix $T$ according to the adopted order:
$t_{[1],[2]} + t_{[2],[3]} + \cdots + t_{[n-1],[n]}$
The optimal solution is the permutation that minimizes this sum. At each step of the algorithm, the problem involves $n$ speeds, of which $k$ are already determined and the remaining $n - k$ must be chosen optimally. For all selections, a value $Y$ must be assigned as a lower bound for all possible solutions of the problem, including the optimal one. There are trivial lower bounds, such as the minimum element of matrix $T$ or the sum of the minimum elements of its $n$ rows. The subtlety of the algorithm lies in constructing this lower bound so as to make it as large as possible.
Thus, the matrix is characterized by the remaining number of undetermined steps, $n - k$, and the lower bound $Y$ of the solution. Moreover, it can be assumed that for the remaining set of steps at least one solution of the problem is known (for example, the permutation $1, \ldots, n$ is a solution); let $Z$ be the best of them. The matrix then undergoes further changes depending on the following possibilities:
If $n - k = 2$, then no more than two steps remain, and the solution is found immediately. If its value is less than $Z$, then $Z$ is set equal to this new value and is considered the best known solution.
If Y is greater than or equal to Z , the matrix is excluded because the checks presented in it do not lead to better solutions than what is already known.
If none of the above situations apply, then two matrices are created instead of the original one. The branching of the original verification occurs in two directions, and each direction corresponds to its own matrix:
In one of them, the transition from i to j is chosen, as a result of which the lower bound of the solutions may increase.
In the other one, the transition from $i$ to $j$ is prohibited (the element $t_{ij}$ is set equal to $\infty$), because of which the lower bound of the solutions will undoubtedly increase.
Hence, the resultant matrices are distinguished by a progressively increasing lower bound and, potentially, a larger count of established steps. Furthermore, with each subsequent matrix, the quantity of evaluations is less than that of its predecessor, eventually culminating in a state where the permutation is definitively defined.
Situations wherein the solution is readily derived or the matrix is eliminated are readily apparent. The core of branching revolves around the concepts of reduction and selection. Reduction endeavors to ensure that at least one zero is present in every row and column of the original matrix T . Given that each solution to the problem encompasses a solitary element from every row or column of matrix T , altering all elements within a column or row by a constant, either subtracting or adding, does not displace the optimal solution.
Subtract a constant $h$ from each element of a row or column of matrix $T$, and let the resulting matrix be $T'$. Then the optimal solution found from $T'$ is also optimal for $T$, i.e., both matrices are minimized by the same permutation. We can take $Y = h$ as a lower bound for the solutions obtained from $T'$. Subtraction can continue until each column and row contains at least one zero (i.e., the minimum element of each row and column is zero). The sum of all reduction constants determines the lower bound $Y$ for the original problem. The matrix $T$ is called reduced if it cannot be reduced further. In this case, the search for route options is associated with studying a particular transition, say from $i$ to $j$. As a result, instead of the original matrix, we consider two matrices:
  • Matrix $T_{ij}$, which is associated with finding the best of all solutions given by matrix $T$ that include the order $(i, j)$.
  • Matrix $T_{n(ij)}$, which is associated with choosing the best of all solutions that do not include the order $(i, j)$.
After fixing the transition from $i$ to $j$, we need to exclude transitions from $i$ to speeds other than $j$ and transitions to $j$ from speeds other than $i$ by setting all elements of row $i$ and column $j$, except $t_{ij}$, to infinity. We also need to prohibit the order $(j, i)$ in the future by setting $t_{ji} = \infty$, because checking all speeds during a single pass cannot include both $(i, j)$ and $(j, i)$ simultaneously. Since these prohibitions may eliminate some zeros in the matrix, further reduction of $T_{ij}$ and a new, larger lower bound for the solutions associated with $T_{ij}$ are not excluded.
In the matrix $T_{n(ij)}$, the transition from $i$ to $j$ is prohibited, i.e., $t_{ij}$ is set to infinity. In this case, the possibility of further reducing the matrix, and the resulting increase in the lower bound for solutions obtained from $T_{n(ij)}$, is likewise not excluded. The choice of $(i, j)$ should be such as to maximize the lower bound for $T_{n(ij)}$, which may allow the elimination of trajectories without further branching. To achieve this, all possible pairs $(i, j)$ in the matrix $T_{n(ij)}$ are examined, and the choice is made so that the sum of two consecutive reducing constants is maximal. Obviously, transitions $(i, j)$ corresponding to zero elements of matrix $T$ should be prohibited first, since choosing nonzero elements does not contribute to further reducing $T_{n(ij)}$.
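The row-and-column reduction step that produces the lower bound can be sketched as follows. Prohibited transitions (including the diagonal) are marked with infinity; names are illustrative:

```python
import math

def reduce_matrix(T):
    """Subtract each row's minimum, then each column's minimum.
    Returns the reduced matrix and the lower bound Y, the sum of
    the subtracted constants."""
    T = [row[:] for row in T]        # do not mutate the caller's matrix
    n, Y = len(T), 0.0
    for i in range(n):               # row reduction
        m = min(T[i])
        if 0 < m < math.inf:
            Y += m
            T[i] = [x - m for x in T[i]]
    for j in range(n):               # column reduction
        m = min(T[i][j] for i in range(n))
        if 0 < m < math.inf:
            Y += m
            for i in range(n):
                T[i][j] -= m
    return T, Y
```

After reduction, every row and column of the returned matrix contains at least one zero, and $Y$ is a valid lower bound for any permutation cost over $T$.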
The second way to order the enumeration of velocities is via the method of dynamic programming. Without loss of generality, choose a certain velocity v 0 as the initial one. After that, divide the set of all velocities into four non-intersecting subsets:
$\{v_0\}$: the set consisting only of the initial speed.
$\{v_i\}$: the set consisting of one non-initial speed.
$\{V_k\}$: the set consisting of $k$ speeds, excluding $v_0$ and $v_i$.
$\{V_{n-k-2}\}$: the set consisting of the remaining $n - k - 2$ speeds.
Let us assume that the optimal order for checking the speeds, beginning with speed v_0, is known. Then we can choose a speed v_i and a subset {V_k} of k speeds in such a way that this optimal permutation begins with {v_0}, passes through the set {V_{n−k−2}}, then reaches {v_i}, after which it checks the set {V_k}.
Let us now focus exclusively on the part of the permutation that lies between {v_i} and {v_0}, with an intermediate check of {V_k}. Note that this segment must itself take the minimum possible time. If this were not the case, then, without changing the part of the permutation up to speed v_i, we could replace the remainder with a faster completion and thereby reduce the time of the whole permutation, which contradicts the initial assumption that the permutation is optimal.
Let f(v_i; {V_k}) denote the time of the best permutation from v_i to v_0 that passes through the set {V_k}. Note that when k = 0,

f(v_i; ∅) = t_i0
Here the t_ij are elements of the matrix T. When k = n − 1 and v_i coincides with the start of the movement, f(v_0; {V_{n−1}}) is the time of the optimal permutation for the original problem. The idea of dynamic programming is to increase k step by step, starting from k = 0; the permutation is thus built up in reverse order, from v_0 backwards, until the optimal solution is found.
For the problem under consideration, the main functional equation of dynamic programming is given by:
f(v_i; {V_k}) = min_{v_j ∈ {V_k}} [ t_ij + f(v_j; {V_k} \ {v_j}) ]
This equation demonstrates that to find the optimal permutation starting from v i and ending with v 0 , with k intermediate velocities, one needs to choose the best among the k permutations, starting from the transition from v i to one of the k velocities and then moving in the fastest manner to v 0 with intermediate visits to k 1 options. Each of these k options, in turn, represents the fastest of the k 1 permutations, according to the equation mentioned earlier. Eventually, a point is reached where the right-hand side of the equation simply represents an element of T .
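The functional equation above can be sketched in code. The following is a minimal illustration of our own (the paper's computations were implemented in Maple, not in this form): a memoized recursion over subsets of velocities that evaluates f and recovers the optimal checking order, assuming a matrix t[i][j] of guaranteed transition times and, as in the text, a route that starts and ends at the initial velocity.

```python
from functools import lru_cache

def optimal_check_order(t, start=0):
    """Evaluate f(v_start; all other velocities) by the recursion
    f(v_i; S) = min_{v_j in S} [ t[i][j] + f(v_j; S \\ {v_j}) ],
    with f(v_i; {}) = t[i][start] (the route closes at the start velocity).
    Returns (minimal guaranteed time, order in which velocities are checked)."""
    n = len(t)
    rest = frozenset(i for i in range(n) if i != start)

    @lru_cache(maxsize=None)
    def f(i, remaining):
        if not remaining:
            return t[i][start], (start,)
        best_time, best_tail = None, None
        for j in remaining:
            time_j, tail_j = f(j, remaining - {j})
            cand = t[i][j] + time_j
            if best_time is None or cand < best_time:
                best_time, best_tail = cand, (j,) + tail_j
        return best_time, best_tail

    total, order = f(start, rest)
    return total, (start,) + order
```

Each subproblem f(v_i; S) is computed once and cached, which is exactly what distinguishes this scheme from exhaustive enumeration of permutations.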
The solution to the problem for five velocities will be considered as an example, with the fifth velocity taken as the starting point. Then, f ( v 5 ; { v 1 , v 2 , v 3 , v 4 }) represents the shortest time for the best permutation, and any sequence of checking velocities that leads to such time is optimal. At step 0 , the solution is sought for five options with k = 0 .
f(v_1; ∅) = t_16
f(v_2; ∅) = t_26
f(v_3; ∅) = t_36
f(v_4; ∅) = t_46
At the first step, solutions for k = 1 are expressed in terms of known solutions for k = 0 .
f(v_1; {v_2}) = t_12 + f(v_2; ∅)
f(v_1; {v_3}) = t_13 + f(v_3; ∅)
f(v_1; {v_4}) = t_14 + f(v_4; ∅)
f(v_2; {v_1}) = t_21 + f(v_1; ∅)
f(v_2; {v_3}) = t_23 + f(v_3; ∅)
f(v_4; {v_3}) = t_43 + f(v_3; ∅)
At the second step, solutions for k = 2 are expressed in terms of known solutions for k = 1 :
f(v_1; {v_2, v_3}) = min[ t_12 + f(v_2; {v_3}), t_13 + f(v_3; {v_2}) ]
f(v_1; {v_2, v_4}) = min[ t_12 + f(v_2; {v_4}), t_14 + f(v_4; {v_2}) ]
f(v_1; {v_2, v_5}) = min[ t_12 + f(v_2; {v_5}), t_15 + f(v_5; {v_2}) ]
f(v_1; {v_3, v_4}) = min[ t_13 + f(v_3; {v_4}), t_14 + f(v_4; {v_3}) ]
f(v_1; {v_3, v_5}) = min[ t_13 + f(v_3; {v_5}), t_15 + f(v_5; {v_3}) ]
f(v_1; {v_4, v_5}) = min[ t_14 + f(v_4; {v_5}), t_15 + f(v_5; {v_4}) ]
f(v_2; {v_1, v_4}) = min[ t_21 + f(v_1; {v_4}), t_24 + f(v_4; {v_1}) ]
f(v_4; {v_2, v_3}) = min[ t_42 + f(v_2; {v_3}), t_43 + f(v_3; {v_2}) ]
We proceed to the third step, using each of the solutions of the second step.
f(v_1; {v_2, v_3, v_4}) = min[ t_12 + f(v_2; {v_3, v_4}), t_13 + f(v_3; {v_2, v_4}), t_14 + f(v_4; {v_2, v_3}) ]
f(v_2; {v_1, v_3, v_4}) = min[ t_21 + f(v_1; {v_3, v_4}), t_23 + f(v_3; {v_1, v_4}), t_24 + f(v_4; {v_1, v_3}) ]
f(v_3; {v_1, v_2, v_4}) = min[ t_31 + f(v_1; {v_2, v_4}), t_32 + f(v_2; {v_1, v_4}), t_34 + f(v_4; {v_1, v_2}) ]
f(v_4; {v_1, v_2, v_3}) = min[ t_41 + f(v_1; {v_2, v_3}), t_42 + f(v_2; {v_1, v_3}), t_43 + f(v_3; {v_1, v_2}) ]
At the fourth step, the solution of the original problem is obtained.
f(v_5; {v_1, v_2, v_3, v_4}) = min[ t_51 + f(v_1; {v_2, v_3, v_4}), t_52 + f(v_2; {v_1, v_3, v_4}), t_53 + f(v_3; {v_1, v_2, v_4}), t_54 + f(v_4; {v_1, v_2, v_3}) ]
At step k there are (n − 1)!/(k!(n − k − 2)!) variants, and for k ≥ 1 choosing among them requires k comparisons. The total number of computations over all stages is therefore

2 ∑_{k=1}^{n−1} k (n − 1)!/(k!(n − k − 2)!) + n − 1 < n² · 2ⁿ
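To illustrate the scale of this bound (a small illustrative script of our own, not part of the original algorithm), one can compare the dynamic-programming bound n² · 2ⁿ with the (n − 1)! permutations examined by exhaustive enumeration:

```python
from math import factorial

def dp_bound(n):
    """Upper bound n^2 * 2^n on the total number of DP computations."""
    return n * n * 2 ** n

def brute_force_count(n):
    """Number of permutations of the n - 1 non-initial velocities."""
    return factorial(n - 1)

# Already for a modest number of velocities the DP bound is far smaller:
# dp_bound(15) = 7,372,800 while brute_force_count(15) = 87,178,291,200.
```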
As an example, let us consider solving the problem for six speeds V_E = {10, 20, 30, 40, 50, 60}. Table 1 is obtained by applying the algorithm for computing the guaranteed search time, implemented in the Maple 2023 software package.

4.2. Theoretical Game Model of Search and Interception

To reduce the guaranteed interception time, it is advisable for the pursuer to order the search for escape speeds. However, if the escapee becomes aware of this, they can move at a speed that the pursuer intends to check last, which would allow the escapee to maximize their search time. Thus, the search problem can be considered a game problem under the conditions of opposition.
The system G = (X, Y, K), where X and Y are non-empty sets and K: X × Y → R¹, is called an antagonistic game in normal form. Elements x ∈ X and y ∈ Y are called strategies of player 1 and player 2, respectively, in the game G. Elements of the Cartesian product X × Y (i.e., strategy pairs (x, y), where x ∈ X and y ∈ Y) are called situations, and the function K is player 1's payoff function. Player 2's payoff in an antagonistic game in situation (x, y) is taken to be −K(x, y), so the function K is also called the payoff function of the game, and the game G is a zero-sum game.
Let us define the game for the search problem under consideration. Let the escapee choose any speed from the set V_E = {v_1, …, v_n} and any direction from the set α = {α_1, …, α_n}. Then the set of pure strategies of the escapee (player 1) is the set of combinations of possible movement speeds v_i and movement directions α, and the set of pure strategies of the pursuer is the set of all possible permutations of the escapee's speeds. The payoff is the time needed to catch the escapee, found using the algorithm described above. The game G is interpreted as follows: the players independently and simultaneously choose strategies x ∈ X and y ∈ Y. After that, player 1 receives a payoff equal to K(x, y), and player 2 receives a payoff equal to −K(x, y). Antagonistic games in which both players have finite sets of strategies are called matrix games.
Let player 1 in a matrix game have a total of m strategies. We establish a one-to-one correspondence between the set X of strategies and the set M = {1, 2, …, m}. Similarly, if player 2 has n strategies, we can establish a one-to-one correspondence between the sets N = {1, 2, …, n} and Y. Then the game G is fully determined by the matrix A = {α_ij}, where α_ij = K(x_i, y_j), (i, j) ∈ M × N, (x_i, y_j) ∈ X × Y. In this case, the game G is played as follows: player 1 chooses a row i ∈ M, and player 2 (simultaneously with player 1) chooses a column j ∈ N. After that, player 1 receives a payoff of α_ij, and player 2 receives −α_ij.
Each player aims to maximize their own winnings by choosing a strategy. However, for player 1 the winnings are determined by the function K(x, y), while for player 2 they are −K(x, y); i.e., the players' goals are directly opposite. Note that the winnings of player 1 (respectively, player 2) are determined by the situations (x, y) ∈ X × Y that arise during the game. Each situation, and therefore each player's winnings, depends not only on their own choice but also on the strategy their opponent chooses. Therefore, in seeking the maximum possible winnings, each player must take the opponent's behavior into account.
In game theory, it is assumed that both players act rationally, i.e., strive to achieve maximum winnings, assuming that their opponent acts in the best possible way for themselves. Let player 1 choose a strategy x . Then, in the worst case, they will win min y K ( x , y ) . Therefore, player 1 can always guarantee themselves a win of max x min y K ( x , y ) . If we abandon the assumption of the attainability of the extremum, then player 1 can always obtain winnings that are arbitrarily close to this value.
v̲ = sup_{x∈X} inf_{y∈Y} K(x, y)

This quantity is called the lower value of the game. If the external extremum is attained, the value v̲ is also called the maximin; the principle of constructing a strategy x based on maximizing the minimal payoff is called the maximin principle, and a strategy x chosen in accordance with this principle is a maximin strategy of player 1.
For player 2, similar reasoning can be applied. Suppose they choose strategy y. Then, in the worst case, they will lose max_x K(x, y). Therefore, player 2 can always guarantee a loss of no more than min_y max_x K(x, y). The number

v̄ = inf_{y∈Y} sup_{x∈X} K(x, y)

is called the upper value of the game G. When the external extremum is attained, v̄ is also called the minimax. The principle of constructing a strategy y aimed at minimizing the maximum loss is known as the minimax principle, and a strategy y chosen in accordance with this principle is a minimax strategy of player 2. It should be emphasized that the existence of a minimax (maximin) strategy is determined by the attainability of the external extremum. In the matrix game G these extrema are attained, and the upper and lower values of the game are, respectively, equal to:
v̄ = min_{1≤j≤n} max_{1≤i≤m} α_ij

v̲ = max_{1≤i≤m} min_{1≤j≤n} α_ij
The minimax and maximin for the game G can be found as follows:
To each row i of the matrix A we assign its row minimum min_j α_ij; the largest of these row minima gives the lower value, max_i min_j α_ij = v̲. To each column j we assign its column maximum max_i α_ij; the smallest of these column maxima gives the upper value, min_j max_i α_ij = v̄.
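The two computations just described reduce to a few lines of code (an illustrative sketch of our own; the function and variable names are ours):

```python
def game_values(A):
    """Lower and upper values of the matrix game with payoff matrix A,
    where player 1 chooses a row and player 2 chooses a column:
    lower = max_i min_j a_ij,  upper = min_j max_i a_ij."""
    lower = max(min(row) for row in A)
    upper = min(max(row[j] for row in A) for j in range(len(A[0])))
    return lower, upper
```

A saddle point in pure strategies exists exactly when the two returned values coincide.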
Let us consider the issue of optimal behavior of the players in an antagonistic game. It is natural to consider a situation (x*, y*) ∈ X × Y in the game G = (X, Y, K) optimal if neither player has an incentive to deviate from it. Such a situation (x*, y*) is called an equilibrium, and the optimality principle based on constructing an equilibrium situation is called the principle of equilibrium. For antagonistic games, the principle of equilibrium is equivalent to the principles of minimax and maximin. In an antagonistic game G = (X, Y, K), a situation (x*, y*) is called an equilibrium, or a saddle point, if:
K(x, y*) ≤ K(x*, y*)

K(x*, y) ≥ K(x*, y*)
for all x ∈ X, y ∈ Y.
For the matrix game G, we speak of saddle points of the payoff matrix A, i.e., points (i*, j*) such that for all i ∈ M, j ∈ N the following inequalities hold:
α_{i j*} ≤ α_{i* j*} ≤ α_{i* j}
Theorem 1.
Let ( x 1 * , y 1 * ) and ( x 2 * , y 2 * ) be two arbitrary equilibrium situations in the antagonistic game G . Then:
K(x_1*, y_1*) = K(x_2*, y_2*); K(x_1*, y_2*) = K(x_2*, y_1*)
(x_1*, y_2*) ∈ Z(G), (x_2*, y_1*) ∈ Z(G), where Z(G) is the set of all equilibrium situations.
Let ( x * , y * ) be an equilibrium situation in game G . The number v = K x * , y * is called the value of game G .
Now we establish a connection between the principle of equilibrium and the principles of minimax in an antagonistic game.
Theorem 2.
In order for there to exist an equilibrium situation in game G = ( X , Y , K ) , it is necessary and sufficient that the minimax and maximin min y sup x K ( x , y ) , max x inf y K ( x , y ) exist and the equality is satisfied.
v̲ = max_x inf_y K(x, y) = min_y sup_x K(x, y) = v̄
If an equilibrium situation exists in a matrix game, then the minimax equals the maximin. Moreover, by the definition of an equilibrium situation, each player can announce their optimal (maximin) strategy to the opponent, and neither player can thereby gain any additional advantage. Now suppose that there is no equilibrium situation in the game G. In such a case, we have
min_j max_i α_ij − max_i min_j α_ij > 0
In the absence of an equilibrium situation, both the maximin and minimax strategies cease to be optimal choices. Adhering to them may not be advantageous, since a player could secure a greater gain by deviating from them; however, divulging the selected strategy to the opponent might then result in even greater losses than sticking with the maximin or minimax approach.
In such intricate scenarios, it becomes reasonable for players to introduce an element of randomness into their actions, thereby amplifying the uncertainty in strategy selection. By embracing randomness, the outcome of their choice remains concealed from the opponent, just as it is initially uncertain to the players themselves until the random mechanism is set in motion. This stochastic element, encapsulating the player’s strategies, is termed a “mixed strategy.” As a random variable, a mixed strategy, denoted as x for player 1, is represented as an m-dimensional vector, characterized by its distribution.
x = (ξ_1, …, ξ_m) ∈ R^m, ∑_{i=1}^{m} ξ_i = 1, ξ_i ≥ 0, i = 1, …, m
Similarly, a mixed strategy y of player 2 is an n-dimensional vector
y = (η_1, …, η_n) ∈ R^n, ∑_{j=1}^{n} η_j = 1, η_j ≥ 0, j = 1, …, n
Here, ξ_i ≥ 0 and η_j ≥ 0 are the probabilities of selecting the pure strategies i ∈ M and j ∈ N, respectively, when the players employ the mixed strategies x and y. We will use X and Y to denote the sets of mixed strategies of the first and second players, respectively; the set of mixed strategies of a player is an extension of their pure strategy space. A pair (x, y) of mixed strategies of the players in the matrix game G is called a situation in mixed strategies.
Let us define the payoff of player 1 in the situation x , y in mixed strategies for the matrix game G as the mathematical expectation of their payoff given that players use mixed strategies x and y, respectively. The players choose their strategies independently of each other; therefore, the expected payoff K x , y in the situation x , y in mixed strategies x = ( ξ 1 , …, ξ m ) and y = ( η 1 , …, η n ) is equal to:
K(x, y) = ∑_{i=1}^{m} ∑_{j=1}^{n} α_ij ξ_i η_j
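In code, this bilinear form is a direct two-line computation (an illustrative sketch of our own):

```python
def expected_payoff(A, x, y):
    """Expected payoff K(x, y) = sum_{i,j} a_ij * xi_i * eta_j of player 1
    when the mixed strategies x and y are chosen independently."""
    return sum(A[i][j] * x[i] * y[j]
               for i in range(len(A)) for j in range(len(A[0])))
```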
The situation x * , y * is called an equilibrium situation if:
K(x, y*) ≤ K(x*, y*)

K(x*, y) ≥ K(x*, y*)
for all x     X ,   y     Y .
Theorem 3.
Every matrix game has a situation of equilibrium in mixed strategies.
A common way to solve a matrix game is by reducing it to a linear programming problem. However, difficulties arise when solving matrix games of large dimensions. Therefore, the iterative Brown-Robinson method is often used to find a solution. The idea of the method is to repeatedly play a fictitious game with a given payoff matrix. One repetition of the game is called a round. Let A = { α i j } be an ( m × n ) -matrix game. In the first round, both players choose their pure strategies completely randomly. In the k-th round, each player chooses the pure strategy that maximizes their expected payoff against the observed empirical probability distribution of the opponent’s moves in the previous ( k 1 ) rounds.
So, suppose that in the first k rounds, player 1 used the i-th strategy ξ i k times, and player 2 used the j-th strategy η j k times. Then in the ( k + 1 ) -th round, player 1 will use the i k + 1 -th strategy, and player 2 will use their j k + 1 strategy, where:
v̄_k = max_i ∑_j α_ij η_j^k = ∑_j α_{i_{k+1} j} η_j^k

v̲_k = min_j ∑_i α_ij ξ_i^k = ∑_i α_{i j_{k+1}} ξ_i^k

Let v be the value of the matrix game G. Consider the relations

v̄_k / k = max_i ∑_j α_ij η_j^k / k = ∑_j α_{i_{k+1} j} η_j^k / k

v̲_k / k = min_j ∑_i α_ij ξ_i^k / k = ∑_i α_{i j_{k+1}} ξ_i^k / k

The vectors x^k = (ξ_1^k / k, …, ξ_m^k / k) and y^k = (η_1^k / k, …, η_n^k / k) are mixed strategies of players 1 and 2, respectively, so by the definition of the value of the game we have:
max_k v̲_k / k ≤ v ≤ min_k v̄_k / k
Thus, an iterative process is obtained that yields an approximate solution of the matrix game; the accuracy of the approximation to the true value of the game is determined by the length of the interval. The convergence of the algorithm is guaranteed by the theorem:

lim_{k→∞} (min_k v̄_k / k) = lim_{k→∞} (max_k v̲_k / k) = v
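The iteration described above can be sketched as follows (a minimal fictitious-play implementation under our own naming; for reproducibility, ties are broken by the smallest index and the first-round pure strategies are fixed rather than random):

```python
def brown_robinson(A, rounds=2000):
    """Brown-Robinson (fictitious play) iteration for the matrix game A.
    Returns the empirical mixed strategies of both players and the
    lower/upper bounds v_lower_k/k and v_upper_k/k on the game value."""
    m, n = len(A), len(A[0])
    row_counts = [0] * m        # how often player 1 used each row
    col_counts = [0] * n        # how often player 2 used each column
    row_payoff = [0.0] * m      # cumulative payoff of each row vs player 2's play
    col_payoff = [0.0] * n      # cumulative payoff of each column vs player 1's play
    i = j = 0                   # first round: fixed pure strategies
    for _ in range(rounds):
        row_counts[i] += 1
        col_counts[j] += 1
        for r in range(m):
            row_payoff[r] += A[r][j]
        for c in range(n):
            col_payoff[c] += A[i][c]
        i = max(range(m), key=lambda r: row_payoff[r])  # best reply of player 1
        j = min(range(n), key=lambda c: col_payoff[c])  # best reply of player 2
    k = rounds
    x = [c / k for c in row_counts]
    y = [c / k for c in col_counts]
    lower = min(col_payoff) / k     # v_lower_k / k
    upper = max(row_payoff) / k     # v_upper_k / k
    return x, y, lower, upper
```

The returned interval [lower, upper] always contains the value of the game, and by the convergence theorem it shrinks as the number of rounds grows.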
During our simulations, we employed the MPC [22] module as the primary instrument for governing the flight trajectory of the UAV.
Model predictive control (MPC), also known as receding-horizon control (RHC), is a control strategy that operates as follows: at each sampling instant, a finite-horizon open-loop optimal control problem is solved online using the current measurements, and only the first element of the resulting control sequence is applied to the controlled object; at the next sampling instant, the process is repeated. Model predictive control can therefore anticipate future events and act on them accordingly. MPC handles soft and hard constraints on inputs, states, and outputs, and is widely used for long-horizon multiple-input, multiple-output systems in the process industries, such as chemical plants, oil refineries, metallurgical manufacturing, and power systems. For nonlinear and constrained systems, an exact analytical solution cannot be obtained by directly solving the Hamilton-Jacobi-Bellman equation of the system, so this method, which relies on real-time numerical optimization, has attracted extensive attention.
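As a toy illustration of the receding-horizon idea (a sketch of our own, not the MPC [22] module used in the simulations), the controller below steers a one-dimensional double integrator toward a target: it enumerates short control sequences, scores each by the distance to the target accumulated over the horizon, and applies only the first control of the best sequence.

```python
from itertools import product

def mpc_step(pos, vel, target, horizon=5, dt=1.0, actions=(-1.0, 0.0, 1.0)):
    """One receding-horizon step for a 1-D double integrator: enumerate all
    control sequences of length `horizon`, score each by the cumulative
    distance to the target, and return only the first control of the best
    sequence; the rest of the plan is discarded and recomputed next step."""
    best_cost, best_first = float("inf"), 0.0
    for seq in product(actions, repeat=horizon):
        p, v, cost = pos, vel, 0.0
        for u in seq:
            v += u * dt                 # simple Euler model of the dynamics
            p += v * dt
            cost += abs(p - target)     # running cost over the horizon
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first
```

Calling mpc_step repeatedly in closed loop re-plans at every sampling instant, which is exactly the "apply the first element, then repeat" scheme described above.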
Example 1. The initial distance between the pursuer and the evader is 200 km. The evader chooses a velocity from the set V E = {8, 56, 78} and a direction from the set α = {23, 137, 182}. The maximum speed of the pursuer is V P = 100 km/h. Then the set of strategies for the evader is:
(α_1, v_1), (α_1, v_2), (α_1, v_3), (α_2, v_1), (α_2, v_2), (α_2, v_3), (α_3, v_1), (α_3, v_2), (α_3, v_3)
set of pursuer strategies:
(v_1, v_2, v_3), (v_1, v_3, v_2), (v_2, v_1, v_3), (v_2, v_3, v_1), (v_3, v_1, v_2), (v_3, v_2, v_1)
The resulting game matrix looks like Table 2.
The game is solved by the Brown-Robinson method, and the value of the game is 35,189.49. The optimal strategy of the evader is (1/20, 0, 0, 0, 0, 0, 1/20, 3/10, 3/5), and that of the pursuer is (1/4, 1/20, 3/5, 0, 0, 1/10).
We now recast the problem as one of search and pursuit between quadrotor UAVs, with slight modifications.
Let the distance between the fugitive UAV and the ground be 100 m. The fugitive UAV selects a speed from the set V_E = {8, 56, 78} as its X-axis speed and a value from the set α = {23, 37, 82} as its Y-axis direction. The maximum speed of the pursuer is V_P = 100 m/min.
Then the fugitive policy set is:
(α_1, v_1), (α_1, v_2), (α_1, v_3), (α_2, v_1), (α_2, v_2), (α_2, v_3), (α_3, v_1), (α_3, v_2), (α_3, v_3)
The modeling process is shown in Figure 1; the movement routes plotted by the evader and the pursuer on the basis of their strategy sets are shown in Figure 2.
Example 2. Let the initial distance between the pursuer and the evader be 50 km. The evader chooses a speed from the set V_E = {4, 10, 16} and a direction from the set α = {8, 10, 16}. The maximum speed of the pursuer is V_P = 80 km/h. Then the set of strategies for the evader is (α_1, v_1), (α_1, v_2), (α_1, v_3), (α_2, v_1), (α_2, v_2), (α_2, v_3), (α_3, v_1), (α_3, v_2), (α_3, v_3), and the set of strategies for the pursuer is:
(v_1, v_2, v_3), (v_1, v_3, v_2), (v_2, v_1, v_3), (v_2, v_3, v_1), (v_3, v_1, v_2), (v_3, v_2, v_1)
The resulting game matrix looks like Table 3.
We now recast the problem as one of search and pursuit between quadrotor UAVs, with slight modifications.
Let the distance between the fugitive UAV and the ground be 100 m. The fugitive UAV selects a speed from the set V_E = {4, 10, 16} as its X-axis speed and a value from the set α = {8, 10, 16} as its Y-axis direction. The maximum speed of the pursuer is V_P = 100 m/min.
The game was solved using the Brown-Robinson method, and the value of the game is 1.57. The strategy of the evader is (1/20, 0, 0, 0, 0, 0, 1/10, 1/4, 3/5), and the strategy of the pursuer is (9/20, 1/20, 3/20, 1/20, 1/4, 1/20). The solutions to the examples show that the most probable speed for the evader is the maximum of the possible speeds. Therefore, the pursuer should begin checking with the maximum possible speed.
The movement routes plotted by the evader and the pursuer on the basis of their strategy sets are shown in Figure 3.
Then the fugitive policy set is:
(α_1, v_1), (α_1, v_2), (α_1, v_3), (α_2, v_1), (α_2, v_2), (α_2, v_3), (α_3, v_1), (α_3, v_2), (α_3, v_3)

5. One Pursuer and Group of Fugitives

Given a set J of n submarines; for each J ∈ J, the time of its capture by the pursuer is known, and |J| denotes the capture time of J.
For each fugitive, a readiness time D(J) is also known, as well as a due date D̄(J) (possibly the time at which the submarine reaches a place where pursuit can no longer continue).
For each fugitive, a weight coefficient w ( J ) is given, which participates in the objective function that needs to be optimized.
The schedule of the pursuer is determined by specifying the order J_1, J_2, …, J_n in which the fleeing vessels are searched for, together with the times t_i at which the search for J_i begins, satisfying the inequalities:

t_i ≥ D_i = D(J_i) and t_i ≥ t_{i−1} + T_{i−1}, where T_{i−1} = T(J_{i−1})

The end time of the i-th search is denoted by t̄_i; thus, t̄_i = t_i + T_i.
For a number x, x⁺ denotes its positive part, defined by the formula x⁺ = (x + |x|)/2. Then the delay in catching the i-th evader equals (t̄_i − D̄_i)⁺.
Examples of criteria functions:
1. Minimize the total penalty for delays:

f_1 = ∑_{k=1}^{n} w(J_k) (t̄_k − D̄_k)⁺

2. Minimize the maximum penalty for delays:

f_2 = max_k w(J_k) (t̄_k − D̄_k)⁺

3. Minimize the amount of tied-up funds:

f_3 = ∑_{k=1}^{n} w(J_k) (t̄_k − D̄_k)

4. Minimize the total weighted capture time:

f_4 = ∑_{k=1}^{n} w(J_k) t̄_k
Solution by the criterion f_4.
Let us consider the solution for the criterion f_4 only in the case where D̄(J) = 0 for every J ∈ J. Take the optimal sequence and swap two adjacent elements J_k and J_{k+1} in it. The capture time of the later of the two may then increase (otherwise the considered sequence would not be optimal); in particular, the difference in the criterion function between searching for the first k + 1 fugitives in the modified order and in the optimal order is nonnegative:
w(J_{k+1})|J_{k+1}| + w(J_k)(|J_k| + |J_{k+1}|) − w(J_k)|J_k| − w(J_{k+1})(|J_k| + |J_{k+1}|) ≥ 0
Hence, after cancellations, we obtain

w(J_k)|J_{k+1}| ≥ w(J_{k+1})|J_k|
Therefore, for the optimal schedule and any k, we obtain the inequality of ratios:

|J_k| / w(J_k) ≤ |J_{k+1}| / w(J_{k+1})
Note that if these ratios are equal, swapping elements k and k + 1 does not change the value of the criterion. Therefore, any schedule in which the evaders are arranged in nondecreasing order of the ratios |J_k| / w(J_k) is f_4-optimal.
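This ordering rule (a weighted-shortest-capture-time rule, analogous to Smith's rule in scheduling theory) is easy to sketch in code. The snippet below is our own illustration, assuming all readiness times are zero, as in the derivation above:

```python
def f4_optimal_schedule(jobs):
    """Arrange fugitives in nondecreasing order of |J| / w(J) and evaluate
    f4 = sum_k w(J_k) * completion_time_k.  `jobs` is a list of
    (capture_duration, weight) pairs; all readiness times are zero."""
    order = sorted(range(len(jobs)), key=lambda k: jobs[k][0] / jobs[k][1])
    t = f4 = 0.0
    for k in order:
        t += jobs[k][0]          # completion time of the fugitive scheduled next
        f4 += jobs[k][1] * t
    return order, f4
```

For example, with two equally weighted fugitives taking 3 and 1 time units to catch, the rule catches the quick one first, giving f_4 = 1 + 4 = 5 instead of 3 + 4 = 7.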
Consider the Following Situation
Suppose an intercepting ship carrying n boats with depth charges on board detects, at time t, the periscopes of n submarines at various distances from it on the surface of the sea, and at that same moment the submarines dive underwater and begin to move in different directions at fixed speeds. It is required to dispatch the boats to intercept the submarines in an optimal way, that is, so that the sum of the guaranteed interception times is minimal. To solve the problem, we construct an efficiency matrix A = {τ_ij}, where each element is the guaranteed time of interception of submarine j by boat i, consisting of the time for the boat to reach the periscope detection point plus its total time of travel along the logarithmic interception spiral. Let x_ij be variables that can take only two values, 0 or 1, as follows:
x_ij = 1, if boat i is assigned to submarine j; x_ij = 0, otherwise.
It is necessary to find an assignment plan, i.e., a matrix X = {x_ij}, i = 1, …, m, j = 1, …, n, that minimizes the total search time while ensuring that each boat is assigned to search for no more than one submarine and each submarine is searched for by no more than one boat.
Mathematical formulation of the optimal assignment problem:
min z = min ∑_{i=1}^{m} ∑_{j=1}^{n} τ_ij x_ij

∑_{i=1}^{m} x_ij ≤ 1, j = 1, …, n

∑_{j=1}^{n} x_ij ≤ 1, i = 1, …, m

x_ij ≥ 0
In order for the optimal assignment problem to have an optimal solution, it is necessary and sufficient that the number of boats equal the number of submarines, i.e., m = n. Under this condition, the inequality constraints become equality constraints:
min z = min ∑_{i=1}^{n} ∑_{j=1}^{n} τ_ij x_ij

∑_{i=1}^{n} x_ij = 1, j = 1, …, n

∑_{j=1}^{n} x_ij = 1, i = 1, …, n

x_ij ≥ 0
If n ≠ m, the assignment problem is unbalanced. Any assignment problem can be balanced by introducing the necessary number of dummy boats or submarines. The dual of the optimal assignment problem is:

max ω = max (∑_{i=1}^{n} u_i + ∑_{j=1}^{n} v_j)

u_i + v_j ≤ τ_ij, i = 1, …, n, j = 1, …, n
The Hungarian method can be used to solve the assignment problem. The essence of the method is as follows:
1. In the original efficiency matrix A, determine the minimum element in each row and subtract it from all other elements of that row.
2. In the matrix obtained at the first step, determine the minimum element in each column and subtract it from all other elements of that column. If a feasible solution is not obtained after steps 1 and 2, perform:
(a) In the last matrix, draw the minimum number of horizontal and vertical lines through rows and columns so as to cross out all zero elements.
(b) Find the minimum element that is not crossed out, subtract it from all other elements that are not crossed out, and add it to all elements at the intersections of the lines drawn in the previous step.
(c) If the new distribution of zero elements does not allow a feasible solution to be constructed, repeat step (a). Otherwise, construct the optimal assignment from the zero elements.
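For the small instances considered below, the balanced assignment problem can also be checked by exhaustive enumeration. The snippet below is an illustrative stdlib-only sketch of our own; the Hungarian method described above obtains the same optimum in polynomial time:

```python
from itertools import permutations

def min_assignment(tau):
    """Exhaustive solution of the balanced assignment problem, where
    tau[i][j] is the guaranteed interception time of submarine j by boat i.
    Feasible only for small n.  Returns (minimal total time, assignment),
    where assignment[i] is the submarine assigned to boat i."""
    n = len(tau)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(tau[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm
```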

6. Group of Pursuers and Group of Fugitives

Example 3. An interceptor ship detects 4 submarines. The initial distance to each of them is 100 km, 200 km, 50 km, and 163 km, respectively. The pursuer has 4 boats for catching the submarines. The maximum speed of each boat is 74 km/h, 90 km/h, 178 km/h, and 124 km/h, respectively. The first submarine moves along the straight line α 1 = 23, with the speed v 1 = 23 km/h; the second one α 2 = 137, v 2 = 50 km/h; the third one α 3 = 187, v 3 = 67 km/h; and the fourth one α 4 = 50, v 4 = 70 km/h. The matrix for the assignment problem is Table 4.
We solve the problem using the Hungarian method. The value of the objective function is 8.08, and the final table is shown in Table 5.
We now recast the problem as search and pursuit between quadrotor UAVs, with slight modifications. Suppose an intercepting quadcopter detects 4 intruding quadcopters, and the pursuer has 4 UAVs for the chase. The maximum speeds of the four pursuing UAVs along the XYZ axes are 74 m/min, 90 m/min, 178 m/min, and 124 m/min, respectively. The first intruding quadrotor UAV has a maximum X-axis speed v_1 = 23 m/min, a maximum Y-axis speed α_1 = 23 m/min, and a height of 100 m. The second has a maximum X-axis speed v_2 = 50 m/min, a maximum Y-axis speed α_2 = 137 m/min, and a height of 200 m. The third has a maximum X-axis speed v_3 = 67 m/min, a maximum Y-axis speed α_3 = 7 m/min, and a height of 50 m. The fourth has a maximum X-axis speed v_4 = 70 m/min, a maximum Y-axis speed α_4 = 50 m/min, and a height of 163 m. The modeling process is shown in Figure 4, the matching matrix in Table 6, and the movement routes plotted by the evaders and pursuers on the basis of the strategy in Figure 5.
The value of the objective function is 3.0888.
Example 4. An interceptor ship detects 4 submarines. The initial distance to each of them is 30 km, 11 km, 62 km, and 8 km, respectively. The pursuer has 4 boats for catching the submarines. The maximum speed of each boat is 60 km/h, 65 km/h, 95 km/h, and 105 km/h, respectively. The first submarine moves along the straight line α 1 = 7, with the speed v 1 = 7 km/h; the second one α 2 = 11, v 2 = 11 km/h; the third one α 3 = 30, v 3 = 30 km/h; and the fourth one α 4 = 44, v 4 = 44 km/h. The matrix for the assignment problem is Table 7.
We solve the problem using the Hungarian method. The value of the objective function is 1.147, and the final table is shown in Table 8.
We now recast the problem as search and pursuit between quadrotor UAVs, with slight modifications.
Suppose an intercepting quadcopter detects 4 intruding quadcopters, and the pursuer has 4 UAVs for the chase. The maximum speeds of the four pursuing UAVs along the XYZ axes are 60 m/min, 65 m/min, 95 m/min, and 105 m/min, respectively. The first intruding quadrotor UAV has a maximum X-axis speed v_1 = 7 m/min, a maximum Y-axis speed α_1 = 7 m/min, and a height of 30 m. The second has a maximum X-axis speed v_2 = 11 m/min, a maximum Y-axis speed α_2 = 11 m/min, and a height of 11 m. The third has a maximum X-axis speed v_3 = 30 m/min, a maximum Y-axis speed α_3 = 30 m/min, and a height of 62 m. The fourth has a maximum X-axis speed v_4 = 44 m/min, a maximum Y-axis speed α_4 = 44 m/min, and a height of 44 m. The matching matrix is shown in Table 9, and the movement routes plotted by the evaders and pursuers on the basis of the strategy in Figure 6.
The value of the objective function is 0.8390.

7. Conclusions

This paper transforms the pursuit assignment problem for submarines into a pursuit assignment problem among UAVs and carries out SIMULINK modeling and simulation, combining the game-theoretic approach of the articles [23,24,25,26,27,28,29,30,31,32] with UAV modeling ideas.
With reasonable parameters for the pursuing UAVs, and after computing all UAV motion parameters with the Hungarian algorithm, all escaping quadrotor UAVs can be pursued and successfully intercepted, taking into account the success rate, the interception efficiency, and performance parameters such as the speed of each interceptor and the matching between the quadrotor UAVs of the interceptor camp and those of the escaping camp.
In this study, we examine the Brown-Robinson method for two-player games and the Hungarian method for the one-pursuer, group-of-fugitives setting. By applying the MPC module, we simulate the interception choices and paths of the UAVs and provide intuitive demonstration figures, thus giving visual support to our study.
In addition, we discuss the process of matrix iteration in detail and provide an in-depth analysis of the three potential outcomes that may arise. The results of these analyses not only help us to better understand the problem but also provide a solid theoretical foundation for future research on multi-target interception and interception in complex environments.
We also note that some related studies have applied multi-agent systems (MAS) [33,34,35], large language models (LLMs) [36,37,38], and visual language models (VLMs) [39] to robot navigation and guidance. These systems use sensors such as cameras, laser scanners, or radar to gather detailed environmental data.
LLMs, trained on vast text datasets, generate human-like text using neural networks. They learn statistical patterns, often making their output difficult to distinguish from human writing. Their ability to capture various language structures and patterns, including syntax, grammar, and vocabulary, enables them to perform tasks like translation, summarization, and text generation. VLMs excel at generating textual descriptions for images, offering detailed insights into objects, people, and events within images. They can also perform image-related tasks such as classification, object detection, and generating grammatically correct image captions [40].
Our goal is to apply these techniques to broader and more complex environments so as to handle interception and escape problems more effectively. Our research provides an important exploration of policy choices and interception scenarios in asymmetric competitive situations (e.g., fewer interceptors facing more intruders). In future research, combining LLMs and VLMs to analyze complex, realistic scenarios should improve the efficiency of interception efforts.

Author Contributions

Conceptualization, M.O. and K.Z.; methodology, M.O. and K.Z.; software, M.O. and K.Z.; validation, M.O. and K.Z.; formal analysis, M.O. and K.Z.; investigation, M.O. and K.Z.; resources, M.O. and K.Z.; data curation, M.O. and K.Z.; writing—original draft preparation, M.O. and K.Z.; writing—review and editing, M.O. and K.Z.; visualization, M.O. and K.Z.; supervision, M.O.; project administration, M.O. and K.Z.; funding acquisition, M.O. and K.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Koopman, B.O. Search and Screening; Report No. 56; Operation Evolution Group Office of the Chief of Naval Operations: Washington, DC, USA, 1946. Available online: https://www.loc.gov/item/2009655247/ (accessed on 7 May 2023).
  2. Hellman, O. Introduction to the Theory of Optimal Search; Nauka: Moscow, Russia, 1985. [Google Scholar]
  3. Abchuk, V.A.; Suzdal, V.G. Search for Objects; Sovetskoe Radio: Moscow, Russia, 1977. [Google Scholar]
  4. Staroverov, O.V. On one search problem. Probab. Theory Its Appl. 1963, 8, 184–187. [Google Scholar] [CrossRef]
  5. Klein, M. Note on sequential search. Nav. Res. Logist. Q. 1968, 15, 469–475. [Google Scholar] [CrossRef]
  6. Koopman, B.O. Theory of search: I. Kinematic bases. Oper. Res. 1956, 4, 324–346. [Google Scholar] [CrossRef]
  7. Koopman, B.O. Theory of search: III. The optimum distribution of searching efforts. Oper. Res. 1957, 5, 613–626. [Google Scholar] [CrossRef]
  8. Charnes, A.; Cooper, W. Theory of search: Optimal distribution of search effort. Manag. Sci. 1958, 5, 44–50. [Google Scholar] [CrossRef]
  9. MacQueen, J.; Miller, R.G. Optimal persistence policies. Oper. Res. 1960, 8, 362–380. [Google Scholar] [CrossRef]
  10. Posner, E.C. Optimal search procedures. IEEE Trans. Inf. Theory 1963, 9, 157–160. [Google Scholar] [CrossRef]
  11. Danskin, J.M. A theory of reconnaissance. I. Oper. Res. 1963, 10, 285–299. [Google Scholar] [CrossRef]
  12. Halpern, B. The robot and the rabbit pursuit problem. Am. Math. Mon. 1969, 76, 140–145. [Google Scholar]
  13. Gal, S. Search games with mobile and immobile hider. SIAM J. Control. Optim. 1979, 17, 99–122. [Google Scholar] [CrossRef]
  14. Cournot, A.O. Recherches sur les Principes Mathematiques de la Theorie des Richesses; Paris, France, 1838. Available online: https://www.persee.fr/doc/ahess_0395-2649_1975_num_30_5_293667_t1_1141_0000_001 (accessed on 7 May 2023).
  15. Nash, J. Non-Cooperative Games. Ann. Math. 1951, 54, 286–295. [Google Scholar] [CrossRef]
  16. Malafeyev, O.A.; Chernykh, K.S. Mathematical modeling of company development. Econ. Revival Russ. 2004, 1, 60–67. [Google Scholar]
  17. Grigorieva, K.V.; Malafeyev, O.A. Dynamic process of cooperative interaction in the multicriteria (multi-agent) problem of the postman. Vestn. Grazhdanskikh Inzhenerov 2011, 1, 150–156. [Google Scholar]
  18. Alferov, G.V. Generation of robot strategy under conditions of incomplete information about the environment. In Problems of Mechanics and Control: Nonlinear Dynamic Systems; Perm, Russia, 2003; pp. 4–24. [Google Scholar]
  19. Grigoryeva, K.V.; Ivanov, A.S.; Malafeyev, O.A. Static coalition model of investing in innovative projects. Econ. Revival Russ. 2011, 90–98. [Google Scholar]
  20. Kolokoltsov, V.N.; Malafeyev, O.A. Dynamic competitive systems of multi-agent interaction and their asymptotic behavior (part 1). Vestn. Grazhdanskikh Inzhenerov 2010, 144–153. [Google Scholar]
  21. Kolokoltsov, V.N.; Malafeyev, O.A. Corruption and botnet defense: A mean field game approach. Int. J. Game Theory 2018, 47, 977–999. [Google Scholar] [CrossRef]
  22. Andersson, A.; Näsholm, E. Fast Real-Time MPC for Fighter Aircraft. Master of Science Thesis in Electrical Engineering, Department of Electrical Engineering, Linköping University, Linköping, Sweden, 2018. Available online: https://liu.diva-portal.org/smash/get/diva2:1217945/FULLTEXT01.pdf (accessed on 7 May 2023).
  23. Zaitseva, I.; Malafeyev, O.; Strekopytov, S.; Ermakova, A.; Shlaev, D. Game-theoretical model of labour force training. J. Theor. Appl. Inf. Technol. 2018, 96, 978–983. [Google Scholar]
  24. Malafeyev, O.; Saifullina, D.; Ivaniukovich, G.; Marakhov, V.; Zaytseva, I. The model of multi-agent interaction in a transportation problem with a corruption component. AIP Conf. Proc. 2017, 1863, 170015. [Google Scholar] [CrossRef]
  25. Alferov, G.V.; Malafeyev, O.A.; Maltseva, A.S. Programming the robot in tasks of inspection and interception. In Proceedings of the 2015 International Conference on Mechanics—Seventh Polyakhov’s Reading, St. Petersburg, Russia, 2–6 February 2015. [Google Scholar] [CrossRef]
  26. Malafeyev, O.A.; Neverova, E.G.; Nemnyugin, S.A. Multi-criteria Model of Laser Radiation Control. In Proceedings of the 2014 2nd International Conference on Emission Electronics (ICEE), St. Petersburg, Russia, 30 June–4 July 2014; pp. 33–37. [Google Scholar] [CrossRef]
  27. Yahuza, M.; Idris, M.Y.; Ahmedy, I.B.; Wahab, A.W.; Nandy, T.; Noor, N.M.; Bala, A. Internet of Drones Security and Privacy Issues: Taxonomy and Open Challenges. IEEE Access 2021, 9, 57243–57270. [Google Scholar] [CrossRef]
  28. Soleimani, E.; Nikoofard, A.; Yektamoghadam, H. Multiagent UAVs Routing in Distributed vs. Decentralized models: Game theory approach. In Proceedings of the 2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 17–19 November 2021; pp. 316–321. [Google Scholar] [CrossRef]
  29. Roldán, J.J.; Del Cerro, J.; Barrientos, A. Should We Compete or Should We Cooperate? Applying Game Theory to Task Allocation in Drone Swarms. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5366–5371. [Google Scholar] [CrossRef]
  30. Yang, K.; Zhu, M.; Guo, X. A Multi-drones Target Tracing Strategy Based on the Pursuit-Evasion Game Formula. In Proceedings of the 2022 China Automation Congress (CAC), Xiamen, China, 25–27 November 2022; pp. 4985–4990. [Google Scholar] [CrossRef]
  31. Wang, C.; Shi, W.; Zhang, P.; Wang, J.; Shan, J. Evader Cooperative Capture by Multiple Pursuers with Area-Minimization Policy. In Proceedings of the 2019 IEEE 15th International Conference on Control and Automation (ICCA), Edinburgh, UK, 16–19 July 2019; pp. 875–880. [Google Scholar] [CrossRef]
  32. Tomashevich, S.; Andrievsky, B. High-order adaptive control in multi-agent quadrotor formation. In Mathematics in Engineering, Science and Aerospace MESA; CSP: Cambridge, UK; I&S: Fort Lauderdale, FL, USA, 2019; Volume 10, pp. 681–693. Available online: https://pureportal.spbu.ru/files/86553810/MESA_Vol_10_No_4_2019_Paper_7_pages_681_693.pdf (accessed on 7 May 2023).
  33. de Curtò, J.; de Zarzà, I.; Roig, G.; Cano, J.C.; Manzoni, P.; Calafate, C.T. LLM-Informed Multi-Armed Bandit Strategies for Non-Stationary Environments. Electronics 2023, 12, 2814. [Google Scholar] [CrossRef]
  34. de Zarzà, I.; de Curtò, J.; Roig, G.; Manzoni, P.; Calafate, C.T. Emergent Cooperation and Strategy Adaptation in Multi-Agent Systems: An Extended Coevolutionary Theory with LLMs. Electronics 2023, 12, 2722. [Google Scholar] [CrossRef]
  35. Dubanov, A.A. Group pursuit on a plane with modeling of the detection area. Bull. South Ural. State Univ. Ser. Constr. Archit. 2022, 22, 71–78. [Google Scholar]
  36. Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; et al. Flamingo: A visual language model for few-shot learning. arXiv 2022, arXiv:2204.14198. [Google Scholar]
  37. Gu, X.; Lin, T.-Y.; Kuo, W.; Cui, Y. Open-vocabulary object detection via vision and language knowledge distillation. arXiv 2022, arXiv:2104.13921. [Google Scholar]
  38. Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; et al. Emergent abilities of large language models. arXiv 2022, arXiv:2206.07682. [Google Scholar]
  39. Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021. [Google Scholar]
  40. de Curtò, J.; de Zarzà, I.; Calafate, C.T. Semantic Scene Understanding with Large Language Models on Unmanned Aerial Vehicles. Drones 2023, 7, 114. [Google Scholar] [CrossRef]
Figure 1. Modeling flow chart.
Figure 2. All optional paths and pursuit results of the quadrotor UAV.
Figure 3. All optional paths and pursuit results of the quadrotor UAV.
Figure 4. Modeling flow chart.
Figure 5. Quadrotor drone matching path.
Figure 6. Quadrotor drone matching path.
Table 1. The initial matrix T.
0.13  0.42  1.13  3.05  9.13  34.1
0.25  0.24  1.74  4.73  14.23  53.27
0.54  1.29  0.44  7.6  22.94  86.04
1.23  2.84  6  0.89  39.15  147.2
3.14  7.04  14.7  31.4  2  277.2
9.62  21.16  43.8  93.13  217.76  5.6
Table 2. Game matrix.
1.9  1.9  1347  68,423.4  4817.73  36,724.9
8.4  48,345  1.7  1.7  21,184.8  4236.3
1478  21  1478  295.6  1.9  1.85
2.2  2.2  156.8  901,470  5651.8  395,023.2
321  85,532.7  6.5  6.5  81,300.9  16,257.6
17,651  253  17,651  3529.7  22.1  22.1
2.4  2.4  1679  60,122.3  6019.6  420,727.8
54.93  15,482.3  11  11  138,245  2764.7
46,981  672  46,981.4  9394.8  58.9  58.9
Table 3. Game matrix.
0.6  0.6  1.325  5.56  2.161  4.77
0.9  3.79  0.57  0.57  3.249  2.039
2.2  0.9  2.19  1.38  0.536  0.536
0.6  0.6  1.33  5.57  2.165  4.778
0.9  3.8  0.57  0.568  3.263  0.048
2.21  1  2.21  1.39  0.54  0.54
0.6  0.6  1.33  5.6  2.177  4.803
0.92  3.86  0.58  0.576  3.306  2.075
2.26  1.02  2.26  1.42  0.551  0.551
Table 4. Game matrix.
1.18  0.98  0.52  0.73
14.43  7.06  1.77  3.3
373.78  12.12  0.77  2.13
14.43  3  0.96  1.53
Table 5. The final matrix for the assignment problem.
0  0  2.37  1.22
9.63  2.46  0  0.17
369.98  8.52  0  0
11.22  0  0.79  0
Table 6. Matching matrix.
0  0  1  0
1  0  0  0
0  1  0  0
0  0  0  1
Table 7. Game matrix.
0.46  0.42  0.297  0.277
0.16  0.15  0.11  0.097
0.93  0.63  0.59  0.54
0.18  0.15  0.09  0.08
Table 8. The final matrix for the assignment problem.
0.093  0.063  0  0
0  0  0.02  0.034
0.29  0.23  0.023  0
0.02  0  0  0.017
Table 9. Matching matrix.
0  1  0  0
0  0  0  1
1  0  0  0
0  0  1  0

Share and Cite

MDPI and ACS Style

Oleg, M.; Zhang, K. Pursuit Problem of Unmanned Aerial Vehicles. Mathematics 2023, 11, 4162. https://doi.org/10.3390/math11194162

