Article

Relative Localization and Circumnavigation of a UGV0 Based on Mixed Measurements of Multi-UAVs by Employing Intelligent Sensors

State Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(7), 2347; https://doi.org/10.3390/s24072347
Submission received: 9 March 2024 / Revised: 3 April 2024 / Accepted: 4 April 2024 / Published: 7 April 2024
(This article belongs to the Special Issue Deep Learning-Based Neural Networks for Sensing and Imaging)

Abstract
Relative localization (RL) and circumnavigation is a highly challenging problem that is crucial for the safe flight of multi-UAVs (multiple unmanned aerial vehicles). Most methods depend on some external infrastructure for positioning. However, in some complex environments such as forests, it is difficult to set up such infrastructures. In this paper, an approach to infrastructure-free RL estimations of multi-UAVs is investigated for circumnavigating a slowly drifting UGV0 (unmanned ground vehicle 0), where UGV0 serves as the RL and circumnavigation target. Firstly, a discrete-time direct RL estimator is proposed to ascertain the coordinates of each UAV relative to the UGV0 based on intelligent sensing. Secondly, an RL fusion estimation method is proposed to obtain the final estimate of UGV0. Thirdly, an integrated estimation control scheme is also proposed for the application of the RL fusion estimation method to circumnavigation. The convergence and the performance are analyzed. The simulation results validate the effectiveness of the proposed algorithm for RL fusion estimations and of the integrated scheme.

1. Introduction

Multi-UAV RL and navigation not only play an increasingly crucial role in civil and military fields [1,2], but have also achieved notable successes in commerce, agriculture, and medical rescue [3,4,5]. Among these capabilities, UAV positioning technology [6] is a core element ensuring safe and efficient operation. By controlling the circumnavigation of UAVs around a central point, the system [7] can achieve precise positioning and navigation in complex environments. In circumnavigation applications, a multi-UAV RL system [8] must be capable of responding to changing environments and mission requirements in real time, and achieving good real-time performance may necessitate more complex algorithms and hardware. Therefore, in the design and implementation of a multi-UAV RL system, it is essential to address these challenges comprehensively and identify corresponding solutions to enhance the robustness and adaptability of the system.
The most common methods include external positioning systems, such as the Global Positioning System (GPS) [9,10] and anchor-based ultra-wideband (UWB) positioning [11,12], which have notably enhanced positioning accuracies. However, in specific environments, such as urban canyons, indoors, or during severe weather conditions, satellite signals may be obstructed or interfered with, thereby limiting the reliability and accuracy of traditional Global Navigation Satellite Systems (GNSSs) [13,14] in these situations. Furthermore, the deployment of an external positioning system can lead to complexities in maintenance and updating. In particular, in systems necessitating long-term operation, the maintenance of both hardware and software can pose challenges.
To address these challenges, researchers are integrating other positioning technologies into UAV systems, including vision sensors [15,16,17]. The integrated use of these technologies enables UAVs to operate in more intricate and demanding environments, accomplishing tasks such as navigating through urban buildings or conducting search and rescue operations in forests. While this approach eliminates the need for infrastructure, visual localization often entails extensive image processing and computational tasks, demanding high-performance hardware and sophisticated algorithms. This can pose challenges in meeting real-time requirements, particularly on resource-constrained UAV platforms.
On the other hand, there are also examples of infrastructure-independent solutions, but they may be inadequate in certain aspects. In [18], a method for RL using radar is proposed. Radar is capable of providing high-precision distance measurements and is very useful for precise RL. However, its equipment cost is high, and it is sensitive to ambient light and transparent objects. Radio Frequency Identification (RFID) systems are suitable for tag recognition at short distances and are applicable for indoor positioning [19]. However, for applications with large-range, high-precision RL requirements, the accuracy of the RFID system may be low. In [20], an RL algorithm based on the fusion of relative navigation sensors was proposed. It mainly involves combining information from different sensors, such as inertial navigation systems and magnetometers, to improve the robustness of RL. Nevertheless, the design and calibration of sensor fusion algorithms are relatively complex and can be susceptible to sensor errors. In [21], an RL algorithm based on visual–inertial navigation fusion was proposed. Combining visual and inertial navigation can overcome the sensor switching problem when transitioning between indoors and outdoors, providing more comprehensive RL information. However, covering large areas may require additional devices to capture enough feature points. The main idea of [22] is to present a weight matrix, simplifying the average consensus algorithm over mobile wireless sensor networks and thereby prolonging the network lifetime as well as ensuring the proper operation of the algorithm. However, the majority of wireless sensor networks discussed in this article are depicted as undirected graphs, which may not adequately address the complexities and fluctuations present in real-world environments.
More recently, a consensus-based leader–follower algorithm was developed in mobile sensor networks, where the goal for the entire network is to converge to the state of the leader [23]. However, this article does not address the positioning issue. A control strategy for a quadrotor elliptical target orbit based on uncertain and non-periodic updates of angle measurements was proposed in [24]. At the translational level, only orientation data are utilized, without incorporating the target’s prior position and velocity information. A position estimator is developed for locating unknown targets. However, the influence of measurement noise is not addressed. In [25], a UAV group circumnavigation control strategy is proposed, in which the UAV circumnavigation orbit is an ellipse whose size can be adjusted arbitrarily. However, the article does not include any research on RL-related aspects. Unlike the work in [26], earlier research focused on utilizing multiple agents to localize targets under fixed topology assumptions. This paper addresses the challenge of cooperative localization in a time-varying topology without an infrastructure, such as anchor points.
In this paper, a directed graph model is employed to represent exchanged information. As measurement failures may occur or a UAV could move beyond the sensing range of its neighbors, the directed graph describing the information flow relationship is time-varying. When a UAV strays beyond the operation area, it triggers a return-to-base protocol, where the UAV autonomously navigates back to the designated area using predefined waypoints or by following a path generated by the control system. To address the RL problem in this scenario, a discrete-time RL direct estimation is proposed for each UAV. The estimation of the relative position concerning its neighbors leverages distance and rate-of-change measurements, the angle of arrival, and the velocities of both itself and its neighbor. The displacements of neighbors during the intervals when distance and rate of change measurements are lost are also taken into account. Moreover, an RL fusion estimation method is devised for each UAV. This fusion estimation involves fusing the relative position estimates of the UAV concerning UGV0 and its neighbors. This approach enables UAVs without direct distance or angle measurements to locate UGV0 with the assistance of their neighbors. Subsequently, we apply the proposed RL fusion estimation algorithm to circumnavigation. In contrast to [25], our system integrates RL fusion estimations with a circumnavigation controller.
The primary contributions are outlined as follows:
(1) When the information flow graph between adjacent UAVs is unidirectional and time-varying, this paper proposes a distributed state observer with state switching to dynamically estimate the positions of UGV0. Only local measurements and limited information exchanges between nearby UAVs are used to estimate the relative coordinates of a group of UAVs concerning a single UGV0. The RL direct estimation error is bounded even in the presence of measurement noise.
(2) To enhance the robustness of RL, consensus-based RL fusion estimation is proposed. The boundedness of the RL fused estimation error is analyzed, and the experimental results demonstrate the effectiveness of the proposed method. The proposed RL method enables each UAV to continuously estimate its relative coordinates to UGV0, even in the absence of any relative measurements concerning UGV0 or its neighbors.
(3) The effectiveness of the entire system was demonstrated through numerical simulations of UAVs using RL fusion estimation for circumnavigation. The system integrates RL into circumnavigation control through UWB ranging and communication networks. The RL scheme proposed in this article applies not only to two-dimensional space but also to three-dimensional space.
The remainder of this article is structured as follows: Section 2 presents the problem formulation. Section 3 proposes the direct and indirect RL estimation methods and consensus-based RL fusion estimation. Section 4 discusses the use of RL fusion estimation for circumnavigation. Section 5 conducts simulation experiments, and the article is summarized in Section 6.

2. Problem Formulation

This paper considers a network consisting of a single dynamic UGV and N UAVs, labeled 0 and 1, 2, …, N, respectively. Define the position of each UAV as ψ_i. If j ∈ ζ_i, then in the local coordinate system of UAVi, the relative position of UAVj is ψ_ij = ψ_j − ψ_i. Simultaneously, each UAV uses these relative estimates for circumnavigation. Let ζ_i represent the set of neighbors of UAVi, which may include UGV0. When j ∈ ζ_i, UAVi can obtain the distance d_ij to UAVj. Utilizing the obtained angle measurement α_ij, UAVi can deduce the relative position ψ_ij of UAVj, as illustrated in Figure 1. Subsequently, leveraging the relative position estimates of its neighbors, each UAV generates corresponding circumnavigation control commands. During circumnavigation, the neighboring UAVj is effectively within the sensing radius; i.e., d_ij is less than the sensing radius. Assuming a sampling period of T and denoting the sampling instant as kT, for simplicity, k is used to represent kT.
Assuming the UAVs follow a standard particle model with speeds denoted as v_i, the relationship between relative speed and position is given by ψ_ij(k+1) = ψ_ij(k) + T v_ij(k), where v_ij(k) = v_j(k) − v_i(k) represents the relative speed of UAVj in the coordinate system of UAVi. The angle measurement of neighbor j is denoted as α_ij(k), and this neighbor can be the target UGV0. It is assumed that each UAVi has access to its own speed, the relative speed v_ij(k), the distance d_ij(k), and the angle measurement α_ij(k) in its own inertial frame. The reference frame of UAVi, i ∈ {1, 2, …, N}, is aligned with the frame of UGV0; this can be achieved if the UAV is equipped with a compass. Furthermore, assume that each UAVi is equipped with airborne sensors, allowing it to obtain the angle rate α̇_ij(k) and distance rate ḋ_ij(k) of its neighbor UAVj, or the angle rate α̇_i0(k) and distance rate ḋ_i0(k) of UGV0. As shown in Figure 1, d_ij(k) = ‖ψ_ij(k)‖ and ψ̇_ij(k) = v_ij(k) can be obtained. The goal is to develop an estimator so that each UAV can estimate its relative coordinate ψ_i0(k) in UGV0's frame. With these RL estimates and measurements of distance and orientation between UAVs, the next goal is to integrate RL into circumnavigation control. Next, let us introduce some graph theory.
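As a concrete illustration, the discrete-time propagation ψ_ij(k+1) = ψ_ij(k) + T·v_ij(k) and the distance/bearing quantities it supports can be sketched in a few lines; the sampling period and numeric values below are illustrative assumptions:

```python
import numpy as np

T = 0.1  # sampling period (illustrative value)

def relative_step(psi_ij, v_i, v_j):
    """Propagate the relative position: psi_ij(k+1) = psi_ij(k) + T * v_ij(k)."""
    v_ij = v_j - v_i          # relative speed of UAVj in the frame of UAVi
    return psi_ij + T * v_ij

# Example: UAVi hovers while UAVj moves along the x-axis.
psi = np.array([3.0, 4.0])                 # psi_ij(0), so d_ij(0) = 5
psi = relative_step(psi, np.zeros(2), np.array([1.0, 0.0]))
d_ij = np.linalg.norm(psi)                 # distance measurement d_ij(1)
alpha_ij = np.arctan2(psi[1], psi[0])      # bearing of UAVj seen from UAVi
```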
If each UAV is viewed as a node, their interrelationships can be represented by a directed graph Γ = (u, E), where u = {1, 2, …, N} is the set of all nodes, corresponding to the N UAVs. If j ∈ ζ_i, then there is a corresponding arc (i, j) ∈ E in the directed graph, and UAVi can measure the distance, the angle, and their rates of change. To study RL problems (e.g., estimating the position relative to UAVj), another weighted directed graph 𝒢 = (u_j, E_j, A) is also considered. Here, u_j = {1, 2, …, N} is the set of all nodes, E_j ⊆ u_j × u_j is the set of all arcs in the graph, and A = [m_ij] ∈ R^{N×N} is the weighted adjacency matrix, whose elements are nonnegative. Given i ∈ u_j, m_ii = 0; if there is an arc (i, j) in 𝒢, then m_ij > 0; otherwise, m_ij = 0. It is worth noting that 𝒢 may be time-varying due to possible interruptions in the measurements. Assume that the graph Γ comprises all ordinary UAVs, referred to as the original graph. UGV0 is introduced as the source point to form a new graph Γ̄ = (ū, Ē), known as the expanded graph. Here, ū = {0} ∪ u and Ē = E_0 ∪ E, where E_0 denotes the edge set between UGV0 and its neighboring UAVs. (i, j) ∈ Ē signifies that UAVi and UAVj can exchange speed and data packets.
Define the Laplacian matrix of the weighted directed graph 𝒢 as L, and the diagonal matrix P = diag{p_1, p_2, …, p_N} ∈ R^{N×N} as the degree matrix of 𝒢, where the diagonal element is p_i = Σ_{j∈ζ_i} m_ij, i ∈ {1, 2, …, N}. To investigate the RL problem, the system comprising the N UAVs and UGV0 is associated with another graph; the graph composed of the N UAVs is a subgraph of Γ̄. Let ζ̄_i represent the set of neighbor nodes of node i in Γ̄, which may include UGV0. If UAVi can obtain a direct observation of the UGV0 distance d_i0(k) or angle measurement α_i0(k), then 0 ∈ ζ̄_i; otherwise, 0 ∉ ζ̄_i. Define a diagonal matrix β ∈ R^{N×N} as the target adjacency matrix associated with Γ̄, with diagonal elements s_i, i ∈ {1, 2, …, N}. If UGV0 is a neighbor of UAVi and UAVi can directly obtain the distance and angle measurement of UGV0, then s_i > 0; otherwise, s_i = 0. For Γ̄, if there exists a pathway from UGV0 to UAVi, we say that UAVi is jointly reachable from UGV0.
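The degree matrix, Laplacian, and joint-reachability check described above can be sketched as follows; the adjacency values and the four-node expanded graph with UGV0 as node 0 are illustrative assumptions:

```python
import numpy as np
from collections import deque

def degree_and_laplacian(A):
    """Degree matrix P = diag(p_i), p_i = sum_j m_ij, and Laplacian L = P - A."""
    P = np.diag(A.sum(axis=1))
    return P, P - A

def jointly_reachable(adj, source=0):
    """BFS over directed arcs: True iff every node is reachable from `source`."""
    n = len(adj)
    seen, queue = {source}, deque([source])
    while queue:
        i = queue.popleft()
        for j in range(n):
            if adj[i][j] and j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == n

# Weighted adjacency of three UAVs (illustrative values).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [1.0, 0.0, 0.0]])
P, L = degree_and_laplacian(A)   # rows of L sum to zero

# Expanded graph with UGV0 as node 0: arcs 0->1, 0->2, 1->3 (illustrative).
adj = [[0, 1, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
reachable = jointly_reachable(adj)  # every UAV is reachable from UGV0
```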

3. Cooperative RL Algorithm

In this section, we propose a distributed RL algorithm based on mixed measurements. It addresses the challenge of estimating the relative coordinates of a UAV in the local frame of its neighbors. Through this approach, if UGV0 is a neighbor of the UAV, the UAV can acquire relative measurements of UGV0. Subsequently, the UAV can directly estimate its coordinates in the local frame of UGV0, termed a direct estimate. Conversely, if UGV0 is not a neighbor of the UAV, the UAV cannot utilize the relative measurements between UGV0 and itself to estimate its coordinates concerning UGV0. In this scenario, if the UAV can access the relative coordinates of its neighbors and the neighbors know the relative coordinates of UGV0, the UAV can deduce the relative coordinates of UGV0 through its neighbors. Nevertheless, multiple neighbors could potentially aid the UAV in establishing the relative coordinates of UGV0 through this method. To avoid dependence on unique neighbors, it is essential to combine multiple estimates. Furthermore, even with the availability of direct estimation, combining indirect estimation can enhance accuracy and expedite convergence. Hence, a consensus-based fusion estimation method is devised for each UAV in the second part of this section to fuse both direct estimates and all accessible indirect estimates.

3.1. RL Direct Estimation Relies on Persistent Excitation

As is well known, UAVs can experience measurement and communication losses due to harsh environments or sensor failures. In this subsection, we assume that UAVi (i = 1, 2, …, N) can communicate with UGV0 and has access to the distance measurement d_i0(k) and its rate of change ḋ_i0(k) in certain time intervals, in addition to the angle measurement α_i0(k) and its rate α̇_i0(k). A direct estimator is designed to estimate the relative coordinates ψ_ij(k) of UAVi in the local frame of UAVj.
Due to unreliability in local relative measurements, assume that UAVi obtains measurements relative to UAVj, d_ij(k) and ḋ_ij(k), or angle measurements α_ij(k) and α̇_ij(k), at times k ∈ [k_0, k_1) ∪ [k_2, k_3) ∪ …, with measurement breaks at k ∈ [k_1, k_2) ∪ [k_3, k_4) ∪ …. An indicator function σ_ij(k) is defined to represent this status: σ_ij(k) = 1 when UAVi can acquire the local relative measurements d_ij(k) and ḋ_ij(k) or the angle measurements α_ij(k) and α̇_ij(k); otherwise, σ_ij(k) = 0. As illustrated in Figure 2, σ_ij(k) = 1 for k ∈ [k_{2t}, k_{2t+1}) and σ_ij(k) = 0 for k ∈ [k_{2t+1}, k_{2t+2}), t = 0, 1, …. Taking the derivative on both sides of d_ij²(k) = ‖ψ_ij(k)‖², we obtain d_ij(k)ḋ_ij(k) = v_ij^T(k)ψ_ij(k). Two unit vectors are constructed from the angle measurement α_ij(k): the unit vector Φ_ij(k) = [cos α_ij(k), sin α_ij(k)]^T pointing from UAVi to UAVj, and the vector Ψ_ij(k) = [−sin α_ij(k), cos α_ij(k)]^T obtained by rotating it 90° counterclockwise. Because the vectors Φ_ij(k) and ψ_ij(k) have the same direction, and Φ_ij(k) and Ψ_ij(k) are perpendicular to each other, the constraint equation Ψ_ij^T(k)ψ_ij(k) = 0 is obtained. When σ_ij(k) = 1, UAVi can run the estimation algorithm for ψ̂_ij(k) using the measured and communicated information. Estimates can become inaccurate during interruptions because of UAV motion and sensor interference; the estimated value ψ̂_ij(k) obtained before the interruption is held during the break. Once communication and measurement are restored, i.e., when σ_ij(k) = 1 again, UAVi corrects the deviation accumulated while σ_ij(k) = 0. Considering sensor noise, the RL direct estimate of UAVi in the local coordinate system of UAVj at time k is updated as follows:
$$\hat{\psi}_{ij}(k+1)=\begin{cases}\hat{\psi}_{ij}(k)+T\big(v_{ij}(k)+\tau(k)\big)+KT\big(v_{ij}(k)+\tau(k)\big)\Big[\hat{\Upsilon}_{ij}(k)-\big(v_{ij}(k)+\tau(k)\big)^{T}\hat{\psi}_{ij}(k)\Big], & k\in[k_{2t},k_{2t+1}),\\[4pt]\hat{\psi}_{ij}(k), & k\in[k_{2t+1},k_{2t+2}),\end{cases}\tag{1}$$

where

$$\hat{\Upsilon}_{ij}(k)=\begin{cases}\big(d_{ij}(k)+\sigma(k)\big)\big(\dot{d}_{ij}(k)+\dot{\sigma}(k)\big), & \delta=1,\\[4pt]\big\|v_{ij}(k)+\tau(k)\big\|^{2}\big[\cos\big(\alpha_{ij}(k)+\Delta(k)\big)\;\;\sin\big(\alpha_{ij}(k)+\Delta(k)\big)\big]\big(\dot{\alpha}_{ij}(k)+\dot{\Delta}(k)+\dot{\theta}_{ij}\big), & \delta=0.\end{cases}$$
Here, τ(k), σ(k), σ̇(k), Δ(k), and Δ̇(k) represent the measurement noise of v_ij(k), d_ij(k), ḋ_ij(k), α_ij(k), and α̇_ij(k) at time k, respectively, and K is a tunable constant gain. ψ̂_ij(k) represents the estimated coordinate of UAVi in the local coordinate system of UAVj. When δ = 1, the distance measurement is available, and when δ = 0, the angle measurement is available. It will be demonstrated later that the RL direct estimation error is bounded in the presence of noise.
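To make the distance-driven branch concrete, the following noise-free sketch simulates a direct estimator of this type; the gain K, period T, circular relative motion, and the specific innovation d_ij·ḋ_ij − v_ij^T ψ̂_ij fed back along v_ij are assumptions of the sketch rather than the exact published form:

```python
import numpy as np

T, K = 0.05, 0.8  # sampling period and estimator gain (assumed values)

def direct_estimate_step(psi_hat, v_ij, d, d_dot):
    """One noise-free update of a distance-driven RL estimator:
    psi_hat <- psi_hat + T*v_ij + K*T*v_ij*(d*d_dot - v_ij^T psi_hat)."""
    innovation = d * d_dot - v_ij @ psi_hat
    return psi_hat + T * v_ij + K * T * v_ij * innovation

psi = np.array([4.0, 2.0])       # true relative position psi_ij(0)
psi_hat = np.zeros(2)            # initial estimate
for k in range(4000):
    v_ij = np.array([np.sin(0.05 * k), np.cos(0.05 * k)])  # rotating velocity
    d = np.linalg.norm(psi)
    d_dot = (v_ij @ psi) / d     # from the identity d * d_dot = v_ij^T psi_ij
    psi_hat = direct_estimate_step(psi_hat, v_ij, d, d_dot)
    psi = psi + T * v_ij         # the true relative position also propagates
err = float(np.linalg.norm(psi_hat - psi))
```

Because the relative velocity keeps rotating, persistent excitation holds and the estimate converges; with a constant v_ij, the error component orthogonal to v_ij would never be corrected.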
Let the estimation error of the above observer (1) be denoted as ψ̃_ij(k) = ψ̂_ij(k) − ψ_ij(k); the dynamic equation of the estimation error is

$$\tilde{\psi}_{ij}(k+1)=\begin{cases}\Big[I-KT\big(v_{ij}(k)+\tau(k)\big)\big(v_{ij}(k)+\tau(k)\big)^{T}\Big]\tilde{\psi}_{ij}(k), & k\in[k_{2t},k_{2t+1}),\\[4pt]\tilde{\psi}_{ij}(k)-T\big(v_{ij}(k)+\tau(k)\big), & k\in[k_{2t+1},k_{2t+2}).\end{cases}\tag{2}$$
Let us begin by introducing the concept of persistent excitation [27] before discussing the convergence of the error system (2). There exist a positive integer m and constants α_1, α_2 > 0 such that, for any k ∈ Z_+,

$$\alpha_{1}I\le\sum_{f=k}^{k+m}\big(v_{ij}(f)+\tau(f)\big)\big(v_{ij}(f)+\tau(f)\big)^{T}T\le\alpha_{2}I,\tag{3}$$
where T is the sampling period. Next, let us analyze the physical meaning of persistent excitation. Assuming that the speed of each UAV is continuously differentiable and bounded, the upper bound obviously holds; therefore, the main focus is on the lower bound of (3). Expanding (3), we obtain

$$\sum_{f=k}^{k+m}\big(v_{ij}(f)+\tau(f)\big)\big(v_{ij}(f)+\tau(f)\big)^{T}=\begin{bmatrix}\sum_{f=k}^{k+m}\big(v_{ij}^{x}(f)+\tau(f)\big)^{2} & \sum_{f=k}^{k+m}\big(v_{ij}^{x}(f)+\tau(f)\big)\big(v_{ij}^{y}(f)+\tau(f)\big)\\[4pt]\sum_{f=k}^{k+m}\big(v_{ij}^{x}(f)+\tau(f)\big)\big(v_{ij}^{y}(f)+\tau(f)\big) & \sum_{f=k}^{k+m}\big(v_{ij}^{y}(f)+\tau(f)\big)^{2}\end{bmatrix}.$$
The two components of v_ij(f) are denoted v_ij^x(f) and v_ij^y(f). According to the Cauchy–Bunyakovsky inequality, for any k ≥ 0, the inequality

$$\Big(\sum_{f=k}^{k+m}\big(v_{ij}^{x}(f)+\tau(f)\big)\big(v_{ij}^{y}(f)+\tau(f)\big)\Big)^{2}\le\sum_{f=k}^{k+m}\big(v_{ij}^{x}(f)+\tau(f)\big)^{2}\sum_{f=k}^{k+m}\big(v_{ij}^{y}(f)+\tau(f)\big)^{2}$$

holds, with equality if and only if v_ij^x(f) and v_ij^y(f) are linearly dependent on f ∈ {k, …, k+m}. Hence, the lower bound in (3) holds precisely when the two velocity components are not linearly dependent over the window.
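The persistent-excitation bound can be probed numerically: over a window, the sum of velocity outer products should have a strictly positive smallest eigenvalue. The window length and velocity profiles below are illustrative assumptions:

```python
import numpy as np

def pe_bounds(v_seq, T):
    """Eigenvalue bounds of sum_f (v(f) v(f)^T) * T over a window, i.e. the
    alpha_1 and alpha_2 constants of the persistent-excitation condition."""
    S = sum(np.outer(v, v) for v in v_seq) * T
    w = np.linalg.eigvalsh(S)
    return w[0], w[-1]

T = 0.1
# A rotating velocity direction is persistently exciting ...
rotating = [np.array([np.cos(0.3 * f), np.sin(0.3 * f)]) for f in range(20)]
# ... while a fixed direction gives a rank-one sum, so alpha_1 = 0.
constant = [np.array([1.0, 0.0])] * 20
a1_rot, a2_rot = pe_bounds(rotating, T)
a1_const, _ = pe_bounds(constant, T)
```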
Theorem 1.
According to Equation (1), when persistent excitation (3) holds for UAVi and the sampling period T satisfies

$$0<T<\frac{2}{(\bar{v}+\bar{\tau})K^{2}},\tag{4}$$

the estimation error of system (2) is bounded. Here, v̄ > 0 is a constant bounding the relative speeds, 0 < ‖v_ij(k)‖² ≤ v̄, and τ̄ > 0 is a constant such that ‖τ(k)‖ ≤ τ̄.
Proof. 
Firstly, consider the system related to (2):
$$\varepsilon(k+1)=\Big[I-KT\sigma_{ij}(k)\big(v_{ij}(k)+\tau(k)\big)\big(v_{ij}(k)+\tau(k)\big)^{T}\Big]\varepsilon(k).\tag{5}$$
Construct the Lyapunov function V(ψ̃_ij(k)) = ψ̃_ij^T(k)ψ̃_ij(k); the difference in the Lyapunov function over m time steps can be expressed as

$$\Delta V_{m}(k)=V\big(\tilde{\psi}_{ij}(k+m)\big)-V\big(\tilde{\psi}_{ij}(k)\big)=-\sum_{f=k}^{k+m}C(k)\Big(\big(v_{ij}(f)+\tau(f)\big)^{T}\tilde{\psi}_{ij}(f)\Big)^{2}.\tag{6}$$
In addition, applying induction to system (5), the f-step RL direct estimation error satisfies ‖ψ̃_ij(f)‖ ≤ ‖ψ̃_ij(k)‖ + τ̄T(1 + K(2v̄ + τ̄))(f − k). Now, (6) can be bounded as

$$\Delta V_{m}(k)\le-\sum_{f=k}^{k+m}C(k)\Big\|\big(v_{ij}(f)+\tau(f)\big)\big(v_{ij}(f)+\tau(f)\big)^{T}\Big\|\Big(\big\|\tilde{\psi}_{ij}(k)\big\|+m\bar{\tau}T\big(1+K(2\bar{v}+\bar{\tau})\big)\Big)^{2}.\tag{7}$$
Observing (2) and (5), we can see that if ψ̃_ij(0) = ε(0), then ψ̃_ij(k_{2t}) = ε(k_{2t}) holds for every t, and ψ̃_ij(k) = ε(k) for any k ∈ [k_{2t}, k_{2t+1}). In the interval [k_{2t+1}, k_{2t+2}), ε(k) remains unchanged, while ψ̃_ij(k) is time-varying.
In addition, according to the definition of σ_ij(k), when k ∈ [k_{2t}, k_{2t+1}), σ_ij(k)ψ̃_ij(k) = ψ̃_ij(k); when k ∈ [k_{2t+1}, k_{2t+2}), σ_ij(k)ψ̃_ij(k) = 0. It can be seen from (3) that the RL direct estimation error of (5) is bounded under noise interference; that is, there exist α > 0 and C(k) > 0 such that ‖ε(k)‖ ≤ C(k)e^{−αk} for any k ≥ 0, if and only if there exist α_1 > 0, α_2 > 0, and T > 0 such that (3) holds for any k ≥ 0. The RL direct estimation error bound can now be expressed as follows. Combining (4) and (7) leads to ΔV_m(k) ≤ −σ_ij(k)‖ψ̃_ij(k)‖ + α_1 C(k)e^{−αk} + α_2, where σ_ij(k) > 0, α > 0, and C(k) > 0. Therefore, when k ∈ [k_{2t+1}, k_{2t+2}), ‖ψ̃_ij(k)‖ ≤ α_1 C(k)e^{−αk} + α_2, and when k ∈ [k_{2t}, k_{2t+1}), ‖ψ̃_ij(k)‖ ≤ (1 + α_1)C(k)e^{−αk} + α_2. The proof of Theorem 1 is complete. □

3.2. Fusion-Based RL Estimation

In Section 3.1, we assumed that the local relative measurements (distance d_ij(k), its rate of change ḋ_ij(k), and angle measurements α_ij(k), α̇_ij(k)) are unreliable, with measurement values available only at certain moments, and an estimator was designed for UAVi to localize its neighbor. If UGV0 is a neighbor of UAVi, then UAVi can obtain a direct estimate ψ̂_i0(k) in the local coordinate system of UGV0. However, due to harsh environments or temporary sensor failures, a UAV may lose its local relative measurements. Worse still, some UAVs may never have relative measurements with respect to UGV0 because UGV0 is outside their sensing range. In this case, cooperation among neighboring UAVs is needed to localize UGV0. In this subsection, an RL indirect estimation method is developed so that each UAVi can estimate its relative coordinates with respect to UGV0, even when it lacks direct measurements of UGV0 or experiences measurement failures over time.
If UAVi can estimate its coordinate x̂_ir(k) relative to a neighboring UAVr, and UAVr shares its own estimate Z_r(k) with UAVi, then UAVi can indirectly obtain its estimate relative to UAVj through UAVr, as illustrated in Figure 3. The formula is as follows:

$$\hat{x}_{ij}(k)=\hat{x}_{ir}(k)+Z_{r}(k).\tag{8}$$
Expanding on both direct and indirect RL algorithms, we also explore RL fusion estimation between UAVs to achieve target positioning. RL fusion estimation serves a dual purpose. It aids UAVs lacking direct target measurements, enhancing their ability to locate targets. Simultaneously, it bolsters the robustness of relying on RL direct estimation. Utilizing information gathered by UAVs and integrating both direct (1) and indirect (8) RL estimators, an RL fusion estimation method is proposed. Each UAVi employs the following estimation method to update its final estimate, as expressed in the formula:
$$Z_{i}(k+1)=Z_{i}(k)+T\big(v_{ij}(k)+\tau(k)\big)+\sum_{j\in\zeta_{i}(k)}\beta_{ij}(k)\big[\hat{x}_{i0}(k)-Z_{i}(k)\big].\tag{9}$$
Consequently, each UAVi updates its fused estimate using (9), irrespective of its ability to directly obtain relative measurements about UGV0.
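A minimal noise-free sketch of this consensus-style fusion is given below; the static target, the fixed chain of neighbors, and the single scalar fusion weight are simplifying assumptions (the full update (9) also propagates the relative-velocity term):

```python
import numpy as np

# True positions with UGV0 at the origin (illustrative values).
pos = {0: np.zeros(2), 1: np.array([2.0, 1.0]),
       2: np.array([-1.0, 3.0]), 3: np.array([4.0, -2.0])}
rel = lambda i, j: pos[j] - pos[i]       # psi_ij = psi_j - psi_i

beta = 0.3                               # fusion weight (assumed, 0 < beta < 1)
Z = {i: np.zeros(2) for i in (1, 2, 3)}  # fused estimates of psi_i0

for _ in range(200):
    x1 = rel(1, 0)            # UAV1: direct estimate w.r.t. UGV0
    x2 = rel(2, 1) + Z[1]     # UAV2: indirect estimate via UAV1, as in (8)
    x3 = rel(3, 2) + Z[2]     # UAV3: indirect estimate via UAV2
    for i, x in ((1, x1), (2, x2), (3, x3)):
        Z[i] = Z[i] + beta * (x - Z[i])  # consensus-style fusion correction

err = max(float(np.linalg.norm(Z[i] - rel(i, 0))) for i in (1, 2, 3))
```

UAVs 2 and 3 never measure UGV0 directly, yet their fused estimates converge through the chain of neighbors.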
Theorem 2.
If the conditions of Theorem 1 are met; if, for every node pair (i, j) in Γ(k) that appears infinitely many times, the persistent excitation condition (3) is satisfied; if each node in Γ(k) is uniformly jointly reachable from UGV0; and if the fusion weights satisfy 0 < Σ_{j∈ζ_i(k)} β_ij(k) < 1, then the fusion estimate Z_i(k) of each UAVi asymptotically converges to its true relative coordinates. In the presence of measurement noise, the RL fused estimation error is bounded.
Definition 1
(Input-to-state stability of a discrete system [28]). Consider a nonlinear system x(k+1) = f(x(k), u(k)). If there exist a class-KL function β: R_{≥0} × R_{≥0} → R_{≥0} and a class-K function γ such that, for any control input u ∈ l_∞^m and any initial state ξ ∈ R^n, the inequality ‖x(k, ξ, u)‖ ≤ β(‖ξ‖, k) + γ(‖u‖_∞) holds for any positive integer k, then the system is said to be input-to-state stable (ISS).
Lemma 1
([28]). If A is a Schur matrix, then the discrete system x ( k + 1 ) = A x ( k ) + B u ( k ) is ISS.
Lemma 2
([29]). The matrix L_j + B_j is positively stable, i.e., all of its eigenvalues have positive real parts, if and only if UAVj is jointly reachable in Γ̄.
Proof. 
For any given i ∈ {1, 2, …, N}, let y_i(k) = Z_i(k) − ψ_ij(k). Then, (9) can be rewritten as

$$y_{i}(k+1)=y_{i}(k)+\sum_{j\in\zeta_{i}(k)}\beta_{ij}(k)\big(y_{j}(k)-y_{i}(k)\big)+u_{ij}(k),\tag{10}$$

where u_ij(k) = Σ_{j∈ζ_i(k)} β_ij(k)ψ̃_ij(k) + Tτ(k) serves as the input signal. Aggregating the equations for i ∈ {0, 1, 2, …, N}, (10) can be expressed in matrix form as

$$y(k+1)=\big[(I-L_{j}-B_{j})\otimes I_{p}\big]y(k)+u(k),\tag{11}$$

where y(k) = [y_0^T(k), y_1^T(k), …, y_N^T(k)]^T and u(k) = [u_0^T(k), u_1^T(k), …, u_N^T(k)]^T. Here, p is two or three, determined by the dimension of the environmental space, L_j represents the weighted Laplacian matrix of the graph Γ, and B_j is the target adjacency matrix of node j associated with Γ.
Firstly, consider the unforced system

$$y(k+1)=\big[(I-L_{j}-B_{j})\otimes I_{p}\big]y(k).\tag{12}$$

It can be verified that all eigenvalues of I − (L_j + B_j) lie strictly within the unit circle centered at the origin; therefore, I − (L_j + B_j) is a Schur matrix. As UAVi and UGV0 are uniformly jointly reachable, according to Lemmas 1 and 2, all components of the solution of (12) uniformly converge exponentially to a common value as k → ∞; that is, y_i(k) → c (with c a constant), indicating the exponential stability of (12).
Consider (11) next. According to Definition 1, for any k ≥ 0, ‖y(k)‖ ≤ β(‖y(0)‖, k) + γ(‖u‖_∞). According to Theorem 1, for any positive fusion weights β_ij(k), ‖u_ij(k)‖ ≤ Σ_{j∈ζ_i(k)} β_ij(k)‖ψ̃_ij(k)‖ + Tτ̄. Since γ(·) is a class-K function, γ(‖u‖_∞) is bounded by some constant c. Because ‖y(0)‖ is bounded and β(·,·) is a class-KL function, β(‖y(0)‖, k) → 0 as k → ∞. Therefore, ‖y(k)‖ ≤ c as k → ∞. The proof of Theorem 2 is complete. □
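The Schur property invoked above can be checked numerically for a small example: with the illustrative chain topology and weights below (all assumptions of this sketch), the spectral radius of I − (L_j + B_j) is strictly below one when the target is reachable, and rises to one when direct target access is removed:

```python
import numpy as np

def schur_radius(A_w, b_diag):
    """Spectral radius of I - (L + B): L = D - A_w is the weighted Laplacian,
    and B = diag(b_diag) marks UAVs with direct access to the target."""
    L = np.diag(A_w.sum(axis=1)) - A_w
    B = np.diag(b_diag)
    M = np.eye(len(A_w)) - (L + B)
    return float(max(abs(np.linalg.eigvals(M))))

# Chain 1 <- 2 <- 3 with only UAV1 measuring the target (toy weights).
A_w = np.array([[0.0, 0.0, 0.0],
                [0.4, 0.0, 0.0],
                [0.0, 0.4, 0.0]])
rho_reach = schur_radius(A_w, [0.4, 0.0, 0.0])  # target reachable: rho < 1
rho_cut = schur_radius(A_w, [0.0, 0.0, 0.0])    # no target access: rho = 1
```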

4. Integrated Solutions for RL and Circumnavigation

In this section, we propose an integrated solution combining RL and circumnavigation to facilitate the rotation of multi-UAVs around UGV0 while maintaining a circular formation. This capability is particularly valuable in practical scenarios like surrounding and entrapping a hostile target. As depicted in Figure 4, the solution relies on distance measurements d_i(k) = ‖Z_i(k) − p‖ and angle measurements α_ij(k). An adaptive estimator will be formulated to attain relative positioning with the assistance of a specifically designed bounded input u_i(k). Let q_i(k) = Z_i(k) − p denote the relative position with respect to UGV0, and let q̂_i(k) denote its estimate. Note that u_i(k) should also satisfy ‖u_i(k)‖ ≤ U to address the circumnavigation problem (13), where U is the maximum velocity.
Given UGV0 at any location p(k), the objective is to enable each UAV to orbit around UGV0:

$$\lim_{k\to\infty}\big\|Z_{i}(k)-p(k)\big\|=\rho,\tag{13}$$

where Z_i(k) represents the position of UAVi at time step k and ρ > 0 denotes the prescribed orbit radius. For trajectory planning purposes, consider a discrete-time integrator model with bounded velocity: Z_i(k+1) = Z_i(k) + T u_i(k), where T is the sampling period.
Based on this, a circumnavigation control law involving RL fusion estimation is proposed:
$$u_{i}(k)=\begin{cases}u_{0}(k)-\big(\hat{d}_{i}^{2}(k)-d_{i}^{2}(k)\big)\big(I-\beta E\big)\hat{q}_{i}(k), & \delta=1,\\[4pt]\big(\hat{d}_{i}(k)-d_{i}(k)\big)\Phi_{ij}(k)+\alpha\Psi_{ij}(k), & \delta=0,\end{cases}\tag{14}$$

where β is any non-zero real scalar, E is the rotation matrix, d̂_i(k) represents the estimated value of d_i(k), and α is a constant positive scalar. By integrating the previously developed RL algorithm (9) with the circumnavigation control algorithm (14), we obtain an integrated RL fusion estimation and circumnavigation control scheme.
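In the spirit of the bearing-based branch, a minimal discrete-time orbit sketch is given below: a radial term drives the range toward a desired radius while a tangential term produces rotation. The radius ρ, the gains, the static target, and the exact feedback form are assumptions of this sketch, not the paper's law:

```python
import numpy as np

T, alpha, rho = 0.05, 1.0, 2.0  # step, tangential gain, orbit radius (assumed)

def circumnavigate_step(z, p):
    """One step of a bearing-style orbit law: a radial term drives the range
    toward rho while a tangential term makes the UAV circle the target."""
    q = p - z                             # line of sight toward the target
    d = np.linalg.norm(q)
    Phi = q / d                           # unit vector toward the target
    Psi = np.array([-Phi[1], Phi[0]])     # Phi rotated 90 deg counterclockwise
    u = (d - rho) * Phi + alpha * Psi     # radial correction + tangential orbit
    return z + T * u

z, p = np.array([6.0, 0.0]), np.zeros(2)  # UAV start and static target
for _ in range(2000):
    z = circumnavigate_step(z, p)
radius_err = abs(float(np.linalg.norm(z - p)) - rho)
```

In discrete time, the tangential term slightly inflates the steady-state radius (on the order of Tα²/(2ρ)), so the final range settles just above ρ rather than exactly on it.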

5. Simulation

In this part, we consider a cooperative RL fusion estimation involving five mobile UAVs and a slow UGV0 and subsequently apply the estimated values to circumnavigation control. The simulation workflow diagram is illustrated in Figure 5.

5.1. Results of RL Fusion Estimation Simulations

In this section, a simulation of the RL of five mobile UAVs and a slowly moving UGV0 in two-dimensional space is conducted. The information flow graph Γ(k) between the five UAVs and UGV0 either switches periodically between two graphs, Γ(1) and Γ(2), as illustrated in Figure 6, or switches randomly among three graphs, Γ(1), Γ(2), and Γ(3), as illustrated in Figure 7. It can be confirmed that UGV0 is jointly reachable in the information flow graph. UAVs 1 and 3 periodically acquire direct measurements of UGV0, while UAVs 2, 4, and 5 do not; they rely solely on indirect estimates obtained through their neighbors. To validate the proposed estimation scheme, the target UGV0 is positioned at the origin, and the five UAVs are driven by the following dynamics:
ẋ_1(k) = [sin k, cos k]ᵀ,  ẋ_2(k) = [0, 1/4]ᵀ,  ẋ_3(k) = [1/6, 0]ᵀ,  ẋ_4(k) = [0, 1/8]ᵀ,  ẋ_5(k) = [2 sin k, 2 cos k]ᵀ.
Furthermore, the initial positions of the five UAVs are labeled as (−1, −2), (6, 8), (8, 9), (11, −2), and (1, 9), respectively. The initial position of UGV0 is (0.3, 0).
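Under these velocity profiles and initial positions, the UAV trajectories can be generated by simple Euler integration; the sampling period T below is an assumed value:

```python
import numpy as np

def velocities(k):
    """Velocity profiles of the five UAVs, as stated in the text;
    k is the discrete time index."""
    return np.array([
        [np.sin(k),   np.cos(k)],    # UAV1
        [0.0,         1/4],          # UAV2
        [1/6,         0.0],          # UAV3
        [0.0,         1/8],          # UAV4
        [2*np.sin(k), 2*np.cos(k)],  # UAV5
    ])

def simulate(steps=100, T=0.1):
    """Euler-integrate the UAV trajectories from the stated initial positions."""
    X = np.array([[-1.0, -2.0], [6.0, 8.0], [8.0, 9.0],
                  [11.0, -2.0], [1.0, 9.0]])   # initial positions of the UAVs
    traj = [X.copy()]
    for k in range(steps):
        X = X + T * velocities(k)
        traj.append(X.copy())
    return np.array(traj)                       # shape: (steps + 1, 5, 2)
```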
To validate the feasibility of the proposed RL fusion estimation scheme, we designated UGV0 as the relative target for estimation. In the first simulation, the trajectory diagram is depicted in Figure 8. In the second simulation, all of the UAVs move randomly within the range [−50, 50]. Communication protocols ensure that the UAVs are aware of each other's positions and velocities, allowing them to maintain safe distances: upon receiving position and velocity updates from its neighbors, each UAV determines their relative positions and velocities, compares them with its own trajectory, assesses the risk of a potential collision, and takes action to avoid it. Energy constraints are not modeled, so the UAVs can fly for extended periods, cover long distances, and execute complex maneuvers without the risk of power depletion.
For the first simulation, it can be verified that, under the time-varying information flow graph, each UAV is uniformly jointly reachable to UGV0, and that the persistent excitation condition (3) is satisfied. For the second simulation, these conditions are difficult to confirm strictly. The estimator described in (9) is employed by each UAV to estimate the coordinates of UGV0, and the direct and indirect estimates are fused to derive the final fusion estimate Z_i(k). According to Theorem 2, each estimate Z_i(k) converges to the true relative coordinates x_i0(k) of UGV0, so the evolution of the estimation error is represented by err_i0(k) = ‖Z_i(k) − x_i0(k)‖. For the first and second simulations, the evolutions of the estimation errors under periodic and random switching are depicted in Figure 9b,e and Figure 10a,b, respectively. Even in the presence of noise, the fusion estimation error remains uniformly bounded, within 0.6 m in both cases. The proposed method thus demonstrates robustness, performing well in both prescribed-trajectory and random scenarios.
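The error metric and the reported 0.6 m bound can be checked numerically; the array shapes below are illustrative, not taken from the paper:

```python
import numpy as np

def fusion_errors(Z_hat, x_true):
    """Estimation-error norms err_i0(k) = ||Z_i(k) - x_i0(k)|| for each
    UAV i and time step k. Z_hat and x_true are (K, 5, 2) arrays holding
    the fusion estimates and the true relative localizations."""
    return np.linalg.norm(Z_hat - x_true, axis=-1)   # shape (K, 5)

def bounded(err, bound=0.6):
    """Check the uniform error bound reported in the simulations (0.6 m)."""
    return bool(np.all(err <= bound))
```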
In addition, to validate the effectiveness of the proposed RL fusion estimation scheme, we conducted comparative experiments. Figure 9a–c depict the simulation results under periodic switching, while Figure 9d–f show the results under random switching. Panels (a) and (d) show the error curves for the method in paper [30], panels (c) and (f) those for the method in paper [31], and panels (b) and (e) those for the method proposed in this article. In the first simulation, a comparison of panels (a) and (b) reveals that the error of the proposed algorithm is significantly smaller than that of the method in paper [30], and that the proposed algorithm also converges faster. Comparing panels (b) and (c), although the error in paper [31] is lower, its convergence is significantly slower.
A comparison of the results in Figure 9d,e, illustrating the first simulation's outcomes under random switching, likewise shows that the algorithm proposed in this study exhibits a substantially lower error and faster convergence than the method presented in paper [30]. While the error in [31] is lower, its rate of convergence is relatively slower, as is evident in panels (e) and (f). This demonstrates the robustness of the proposed cooperative RL method in various scenarios. The errors in Figure 9a are one to two orders of magnitude larger than those in Figure 9b and are unacceptable, as they are on the meter scale; the comparison between panels (d) and (e) is similar. In the presence of noise and data loss, the fusion estimator mitigates the impact of these factors to some extent, and the additional information in indirect estimation expedites the convergence of RL fusion estimation. Comparing the errors in Figure 9b,e (first simulation) with those in Figure 10a,b (second simulation), the convergence of the second simulation is slower but still satisfies the convergence criteria. Based on the proposed method, the RL fusion estimation errors are bounded by 0.6 m, whereas the errors of the methods in [30,31] can reach 1.5 m, much larger than those of the proposed method.

5.2. Integration of RL Fusion Estimation and Circumnavigation Control [25]

In this section, we validate the integrated RL and circumnavigation scheme proposed in Section 4 through numerical simulations to confirm its robustness. Consider a five-UAV team, where each UAV is able to measure the distance or angle to UGV0. The control goal is to enable the five UAVs to hover around UGV0. In the third simulation, we verified the proposed scheme when UGV0 slowly drifts with an angular velocity of 0.005 rad/s; each UAV maintains its distance within a neighborhood of the desired range from UGV0. It is worth noting that the speed of UGV0 is consistently much lower than that of the UAVs. In this scenario, we assume that p(0) = [−20, −20] is the initial position of UGV0, and the initial positions of the UAVs are set as [15, −8], [−20, 10], [−12, −6], [16, −13], and [18, 10], respectively. We can observe from Figure 11 that the UAVs are capable of orbiting UGV0 very well. Under these conditions, the positive constants β = 5 and α = 4 are chosen to satisfy the given criteria. As depicted in Figures 12–16, all trajectories converge to zero, validating our theoretical analysis.
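A minimal sketch of the slowly drifting target can be written as follows, assuming a circular drift model; the paper specifies only the angular velocity of 0.005 rad/s, so the drift radius and sampling period below are assumptions:

```python
import numpy as np

def ugv0_position(k, T=0.1, omega=0.005, radius=5.0,
                  p0=np.array([-20.0, -20.0])):
    """Hypothetical drift model for UGV0: a slow circular drift with
    angular velocity omega = 0.005 rad/s starting from p(0) = p0.
    Only omega and p0 come from the text; radius and T are assumed."""
    theta = omega * k * T
    return p0 + radius * np.array([np.cos(theta) - 1.0, np.sin(theta)])
```

With these values the per-step displacement of UGV0 is on the order of millimeters, consistent with the assumption that its speed is much lower than that of the UAVs.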
To demonstrate the effectiveness of the integrated RL and circumnavigation solution, tests were conducted with five UAVs. Figure 17 illustrates the reduction in distance between the UAVs and UGV0. It is evident that the planar distance decreases rapidly and consistently until the circumnavigation radius of 0.7 m is reached, which is greater than the minimum spacing of 0.2 m. In summary, the experimental results demonstrate the superior performance of the integrated RL and circumnavigation solution. It is anticipated that similar integration concepts can be applied to further practical scenarios.

6. Conclusions

This paper proposes an RL fusion estimation and distance- or angle-based UAV circumnavigation control scheme that does not rely on infrastructure or GPS. The proposed algorithm enables a UAV to locate UGV0 using only the distance or the azimuth angle, without any explicit distinction between the measured data. By combining the RL estimator with the circumnavigation controller, an integrated cooperative RL and circumnavigation control algorithm is obtained. The concepts in this algorithm can also be extended to scenarios where the UAV's motion is subject to non-holonomic constraints; a possible generalization is discussed in [32]. Numerous simulation experiments have verified the effectiveness of the proposed algorithm.
Given recent innovations and research outcomes in artificial intelligence (AI), we will explore opportunities to enable and improve UAV functionalities using AI techniques in the future. Both AI and UAV technology have become popular in recent years, along with research that brings the two fields together, and many problems inherent in UAVs today can be addressed with AI. Future research directions include addressing challenges in three-dimensional space and enhancing the algorithm's convergence speed to reduce tracking and estimation errors within a limited timeframe. Considering a more general UAV model and accounting for the impact of noise are also crucial aspects of future investigations. Another avenue for research involves scenarios with UAV swarms and UGVs: estimating the location of UGVs may be easier with a swarm, as different UAVs can share their estimates of a UGV's location, and these shared estimates can expedite the estimation of UGV positions. Collision avoidance technology should also be taken into account when deploying UAV swarms, and it is essential to ensure that the distance between UAVs does not exceed their communication range, preventing disruption of the communication links between them.

Author Contributions

Conceptualization, J.G. and K.H.; methodology, J.G. and K.H.; software, J.G.; validation, J.G. and K.H.; formal analysis, M.G.; investigation, J.G.; resources, K.H.; writing—original draft preparation, J.G.; writing—review and editing, M.G. and K.H.; visualization, J.G.; supervision, K.H.; project administration, J.G. and K.H.; funding acquisition, M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China under Grant 2020YFB1708500.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the editor and the anonymous reviewers for their careful reading and valuable suggestions that helped to improve the quality of this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xu, L.; Cao, X.; Du, W.; Li, Y. Cooperative path planning optimization for multiple UAVs with communication constraints. Knowl.-Based Syst. 2023, 260, 110164. [Google Scholar] [CrossRef]
  2. Dang, Y.; Benzaïd, C.; Yang, B.; Taleb, T.; Shen, Y. Deep-ensemble-learning-based GPS spoofing detection for cellular-connected UAVs. IEEE Internet Things J. 2022, 9, 25068–25085. [Google Scholar] [CrossRef]
  3. Zhang, G.; Hsu, L.T. Intelligent GNSS/INS integrated navigation system for a commercial UAV flight control system. Aerosp. Sci. Technol. 2018, 80, 368–380. [Google Scholar] [CrossRef]
  4. Basiri, A.; Mariani, V.; Silano, G.; Aatif, M.; Iannelli, L.; Glielmo, L. A survey on the application of path-planning algorithms for multi-rotor UAVs in precision agriculture. J. Navig. 2022, 75, 364–383. [Google Scholar] [CrossRef]
  5. Khan, S.I.; Qadir, Z.; Munawar, H.S.; Nayak, S.R.; Budati, A.K.; Verma, K.D.; Prakash, D. UAVs path planning architecture for effective medical emergency response in future networks. Phys. Commun. 2021, 47, 101337. [Google Scholar] [CrossRef]
  6. Yang, K. Research on key technologies of UAV navigation and positioning system. In Proceedings of the 2021 International Conference on Wireless Communications and Smart Grid (ICWCSG), Hangzhou, China, 13–15 August 2021; pp. 29–33. [Google Scholar]
  7. Yao, W.; Lu, H.; Zeng, Z.; Xiao, J.; Zheng, Z. Distributed static and dynamic circumnavigation control with arbitrary spacings for a heterogeneous multi-robot system. J. Intell. Robot. Syst. 2019, 94, 883–905. [Google Scholar] [CrossRef]
  8. Liu, Y.; Wang, Y.; Shen, X.; Wang, J.; Shen, Y. UAV-aided relative localization of terminals. IEEE Internet Things J. 2021, 8, 12999–13013. [Google Scholar] [CrossRef]
  9. Nemra, A.; Aouf, N. Robust INS/GPS sensor fusion for UAV localization using SDRE nonlinear filtering. IEEE Sens. J. 2010, 10, 789–798. [Google Scholar] [CrossRef]
  10. Bianchi, M.; Barfoot, T.D. UAV localization using autoencoded satellite images. IEEE Robot. Autom. Lett. 2021, 6, 1761–1768. [Google Scholar] [CrossRef]
  11. Niculescu, V.; Palossi, D.; Magno, M.; Benini, L. Energy-efficient, precise uwb-based 3-d localization of sensor nodes with a nano-uav. IEEE Internet Things J. 2022, 10, 5760–5777. [Google Scholar] [CrossRef]
  12. Xu, H.; Zhang, Y.; Zhou, B.; Wang, L.; Yao, X.; Meng, G.; Shen, S. Omni-swarm: A decentralized omnidirectional visual–inertial–uwb state estimation system for aerial swarms. IEEE Trans. Robot. 2022, 38, 3374–3394. [Google Scholar] [CrossRef]
  13. Zhang, J.; Yu, B.; Ge, Y.; Gao, J.; Sheng, C. An Effective GNSS/PDR Fusion Positioning Algorithm on Smartphones for Challenging Scenarios. Sensors 2024, 24, 1452. [Google Scholar] [CrossRef]
  14. Yasyukevich, Y.V.; Zhang, B.; Devanaboyina, V.R. Advances in GNSS Positioning and GNSS Remote Sensing. Sensors 2024, 24, 1200. [Google Scholar] [CrossRef] [PubMed]
  15. Couturier, A.; Akhloufi, M.A. A review on absolute visual localization for UAV. Robot. Auton. Syst. 2021, 135, 103666. [Google Scholar] [CrossRef]
  16. Qian, J.; Chen, K.; Chen, Q.; Yang, Y.; Zhang, J.; Chen, S. Robust visual-lidar simultaneous localization and mapping system for UAV. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  17. Zheng, F.; Zhou, L.; Lin, W.; Liu, J.; Sun, L. LRPL-VIO: A Lightweight and Robust Visual–Inertial Odometry with Point and Line Features. Sensors 2024, 24, 1322. [Google Scholar] [CrossRef] [PubMed]
  18. Caballero, F.; Merino, L. DLL: Direct LIDAR Localization. A map-based localization approach for aerial robots. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 5491–5498. [Google Scholar]
  19. Li, C.; Tanghe, E.; Suanet, P.; Plets, D.; Hoebeke, J.; De Poorter, E.; Joseph, W. ReLoc 2.0: UHF-RFID relative localization for drone-based inventory management. IEEE Trans. Instrum. Meas. 2021, 70, 1–13. [Google Scholar] [CrossRef]
  20. Zeng, Q.; Jin, Y.; Yu, H.; You, X. A UAV Localization System Based on Double UWB Tags and IMU for Landing Platform. IEEE Sens. J. 2023, 23, 10100–10108. [Google Scholar] [CrossRef]
  21. Dong, X.; Gao, Y.; Guo, J.; Zuo, S.; Xiang, J.; Li, D.; Tu, Z. An Integrated UWB-IMU-Vision Framework for Autonomous Approaching and Landing of UAVs. Aerospace 2022, 9, 797. [Google Scholar] [CrossRef]
  22. Kenyeres, M.; Kenyeres, J. Average consensus over mobile wireless sensor networks: Weight matrix guaranteeing convergence without reconfiguration of edge weights. Sensors 2020, 20, 3677. [Google Scholar] [CrossRef]
  23. Safavi, S.; Khan, U.A. Leader-follower consensus in mobile sensor networks. IEEE Signal Process. Lett. 2015, 22, 2249–2253. [Google Scholar] [CrossRef]
  24. Yue, X.; Shao, X.; Zhang, W. Elliptical encircling of quadrotors for a dynamic target subject to aperiodic signals updating. IEEE Trans. Intell. Transp. Syst. 2021, 23, 14375–14388. [Google Scholar] [CrossRef]
  25. Wang, Z.; Luo, Y. Elliptical Multi-Orbit Circumnavigation Control of UAVS in Three-Dimensional Space Depending on Angle Information Only. Drones 2022, 6, 296. [Google Scholar] [CrossRef]
  26. Chen, L. Triangular angle rigidity for distributed localization in 2D. Automatica 2022, 143, 110414. [Google Scholar] [CrossRef]
  27. Anderson, B. Exponential stability of linear equations arising in adaptive identification. IEEE Trans. Autom. Control 1977, 22, 83–88. [Google Scholar] [CrossRef]
  28. Ha, M.; Wang, D.; Liu, D. Discounted iterative adaptive critic designs with novel stability analysis for tracking control. IEEE/CAA J. Autom. Sin. 2022, 9, 1262–1272. [Google Scholar] [CrossRef]
  29. Xiao, S.; Ge, X.; Han, Q.L.; Zhang, Y. Secure distributed adaptive platooning control of automated vehicles over vehicular ad hoc networks under denial-of-service attacks. IEEE Trans. Cybern. 2021, 52, 12003–12015. [Google Scholar] [CrossRef] [PubMed]
  30. Fang, X.; Li, X.; Xie, L. 3-D distributed localization with mixed local relative measurements. IEEE Trans. Signal Process. 2020, 68, 5869–5881. [Google Scholar] [CrossRef]
  31. Cao, K.; Han, Z.; Lin, Z.; Xie, L. Bearing-only distributed localization: A unified barycentric approach. Automatica 2021, 133, 109834. [Google Scholar] [CrossRef]
  32. Cao, S.; Li, R.; Shi, Y.; Song, Y. Safe convex-circumnavigation of one agent around multiple targets using bearing-only measurements. Automatica 2021, 134, 109934. [Google Scholar] [CrossRef]
Figure 1. Local measurements and relative positions.
Figure 2. Diagram illustrating the indicator function.
Figure 3. Obtaining the indirect RL estimate x̂_ij(k) from UAVi to UAVj through UAVr.
Figure 4. An integrated RL and circumnavigation solution.
Figure 5. Workflow diagram for circumnavigation control.
Figure 6. The periodic switching graph Γ(k) that switches between two different topologies Γ(1) and Γ(2).
Figure 7. The three graphs Γ(1), Γ(2), and Γ(3), among which Γ(k) randomly switches.
Figure 8. Movement trajectories of the five UAVs and UGV0.
Figure 9. The first simulation. (a) Results of RL fusion estimation errors under periodic switching conditions using the method in [30]. (b) Results of RL fusion estimation errors under periodic switching conditions using the proposed method. (c) Results of RL fusion estimation errors under periodic switching conditions using the method in [31]. (d) Results of RL fusion estimation errors under random switching conditions using the method in [30]. (e) Results of RL fusion estimation errors under random switching conditions using the proposed method. (f) Results of RL fusion estimation errors under random switching conditions using the method in [31].
Figure 10. The second simulation. (a) Results of RL fusion estimation errors under periodic switching conditions using the proposed method. (b) Results of RL fusion estimation errors under random switching conditions using the proposed method.
Figure 11. Simulation of a slowly drifting UGV0. The green circle denotes the initial position of UGV0, while the other circles indicate the initial positions of the UAVs.
Figure 12. Trajectory diagram depicting the relative positions between UAV1 and UGV0.
Figure 13. Trajectory diagram depicting the relative positions between UAV2 and UGV0.
Figure 14. Trajectory diagram depicting the relative positions between UAV3 and UGV0.
Figure 15. Trajectory diagram depicting the relative positions between UAV4 and UGV0.
Figure 16. Trajectory diagram depicting the relative positions between UAV5 and UGV0.
Figure 17. The rapid and stable approach of UGV0.
Share and Cite

Guo, J.; Gan, M.; Hu, K. Relative Localization and Circumnavigation of a UGV0 Based on Mixed Measurements of Multi-UAVs by Employing Intelligent Sensors. Sensors 2024, 24, 2347. https://doi.org/10.3390/s24072347