1. Introduction
In recent years, Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) have captured increasing attention and interest. These vehicles have shown significant potential in various domains, such as search and rescue, surveillance, and environmental monitoring [1]. UAVs and UGVs are typically equipped with a variety of sensors, cameras, and other devices, enabling them to perform tasks such as search and rescue at disaster sites, monitoring of potential danger zones, and environmental surveillance [2]. Their unmanned nature allows them to execute hazardous tasks without putting human lives at risk [3].
When these UAVs and UGVs form a cooperative system, they complement each other’s capabilities, leveraging their respective strengths to accomplish more complex missions [4]. For instance, UAVs can provide high-altitude perspectives and rapid response capabilities [5], while UGVs can offer a stable platform and longer durations of sustained operation [6]. Their collaboration enhances the efficiency of the entire system, improving both the effectiveness and the accuracy of the work [7].
However, operating these vehicles in a coordinated manner proves to be a challenging task, especially in dynamic and uncertain environments [8]. Localizing UAVs becomes a specific challenge when they operate in tandem with other UAVs or UGVs, particularly in situations where Global Positioning System (GPS) signals are unavailable [9]. This urgent need for localization has led to pioneering studies aimed at addressing the challenge [10]; for example, an algorithm that localizes UAVs using satellite images is proposed in [11]. The objective of these studies is to devise inventive solutions that enable UAVs to determine their position precisely, even in environments devoid of GPS signals.
Accurate localization information is essential for collaborative operations involving multiple unmanned vehicles, and advancements in sensor technology have played an important role in enhancing the localization capabilities of UAVs and UGVs [12]. For instance, visual odometry, which analyzes camera images to estimate the motion of a vehicle, has emerged as a promising approach in environments where traditional GPS-based navigation is ineffective. Additionally, the use of Light Detection and Ranging (LiDAR) and radar technologies for mapping and navigation in complex terrains has gained traction [13]. These technologies enable UAVs and UGVs to create high-resolution maps of their surroundings, facilitating more precise positioning and maneuvering. Nevertheless, sensors face limitations in precision, leading to less accurate data acquisition and consequently degrading the localization information. Moreover, some sensors are sensitive to environmental conditions such as changes in weather and variations in lighting, which can compromise sensor performance and, in turn, localization accuracy [14].
Simultaneously, machine learning algorithms, and deep learning in particular, have significantly advanced the localization capabilities of UAVs and UGVs. In [15], a vision-based algorithm is introduced that localizes targets using UAV–UGV cooperative systems. Through the use of neural networks, these vehicles can now process extensive sensor data in real time, enabling more precise and adaptable decisions [16]. Such capability proves critical in scenarios characterized by rapidly changing environmental conditions [17]. On the other hand, deep learning algorithms often require extensive datasets for training to achieve precise models [18]. Acquiring substantial training data for UAVs and UGVs can be difficult, particularly in intricate or perilous environments. Deep learning models may also be sensitive to alterations in the environment, necessitating retraining or recalibration when conditions fluctuate, which can degrade the accuracy of the localization information.
Furthermore, the integration of UAVs and UGVs into cooperative systems also offers prospects for leveraging swarm intelligence [19]. Through coordinated operations, these vehicles can exchange localization data and insights, enhancing the accuracy and efficiency of their endeavors. This swarm-based approach not only bolsters the capabilities of individual vehicles but also extends their operational scope and resilience in demanding environments [20]. In [21], a coalition formation game approach is proposed to localize a UAV swarm in a GNSS-denied environment.
Moreover, the establishment of standardized communication protocols and interoperable software frameworks is imperative for the smooth integration of UAVs and UGVs into a collaborative system [22]. These frameworks guarantee dependable and efficient data exchange and coordination among diverse vehicle types, thereby contributing to their overall performance as a collective entity.
Despite the aforementioned challenges, the expanding range of applications of UAVs and UGVs within cooperative systems signifies a significant advancement in autonomous technology. Their versatility across various sectors, from agricultural monitoring [23] to urban planning [24], demonstrates their adaptability to diverse environments and tasks. Particularly notable is their ability to operate effectively in GPS-denied environments, overcoming existing limitations and unlocking new opportunities for these technologies. This adaptability is crucial in addressing the challenges posed by inaccessible or compromised GPS signals, ensuring uninterrupted operations and broadening the application scope of UAVs and UGVs. Consequently, their capability to autonomously navigate and perform tasks under such conditions represents an essential step forward in the evolution of autonomous vehicle technology.
Recently, numerous solutions for UAV–UGV collaboration have emerged. UAVs and UGVs have garnered attention for their potential in high-risk missions, necessitating an effective cooperative control framework to optimize mission completion and resource utilization, particularly in applications like wildfire fighting [25]. However, their adaptability to dynamic environments may be constrained, and relying on a central mobile mission controller for decision-making and planning could elevate the risk of system failure through a single point of failure. In hazardous industrial settings, a decentralized scheme employs a drone to lead ground mobile robots transporting objects: one robot navigates obstacles using drone-provided waypoints and a human operator, while the others maintain distance and bearing through predictive vision-based tracking [26]. However, this scheme places high demands on the reliability and precision of both the drone and the human operator, potentially leading to a lack of robustness; failures or errors by either could adversely affect the overall performance of the system. Reference [27] presents a cooperative UAV–UGV team for aerial tasks, proposing a control framework validated experimentally for efficient operation and reduced energy use. While UAVs and UGVs can assist each other, ensuring that they do not cause damage or pose danger during task execution remains a challenge in real-world environments.
Distributed algorithms are computational procedures designed for systems consisting of multiple computing nodes [28]. These algorithms distribute tasks among the nodes, enabling them to collaborate and communicate in achieving a common objective, with each node possessing its own computational resources and local state [29]. Distributed algorithms play a critical role in the coordination and operation of UAVs and UGVs. In [30], a distributed solution mechanism is employed to determine information-maximizing trajectories subject to the constraints of the individual vehicle and sensor sub-systems. Additionally, ref. [31] demonstrates a considerable advantage of distributed UGVs over the static placement of control stations. Such distributed algorithms facilitate communication and collaboration among the UAVs and UGVs, allowing them to work together effectively to accomplish tasks.
The Information Consensus Filter (ICF) is a distributed algorithm used in multi-agent systems for estimating a common state [32]. It iteratively updates each agent’s estimate based on local observations and information received from neighboring agents, using a consensus matrix to determine the weights of these updates. Through repeated consensus iterations, the ICF enables the agents to converge towards a consistent estimate of the common state, making it valuable for distributed estimation and control in dynamic environments. In UAV and UGV systems, the ICF promotes information exchange and maintains state coherence across the vehicles. By fostering communication and collaboration among individual UAVs and UGVs, the ICF ensures a unified estimation of the shared states, thereby optimizing task performance and resource utilization. Its implementation bolsters system coordination and resilience, allowing for adept navigation in intricate environments and expanded utility across diverse applications. In [33], a distributed information filter algorithm is developed that lets each UGV locally estimate the position and velocity of a quadrotor using its own information and information from neighboring UGVs. Despite these pioneering advances, the aforementioned research overlooks the integration of localization and control in unmanned vehicles.
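To make the mechanism concrete, the following minimal Python sketch implements one ICF time step for a network of nodes under a linear Gaussian measurement model; the identical observation matrices at every node, the even split of the prior, and the consensus step size are simplifying assumptions for illustration rather than the exact formulation developed later in this paper.

```python
import numpy as np

def icf_update(z_list, H, R, x_prior, P_prior, A, K, eps=0.2):
    """One Information Consensus Filter update over an N-node network.

    z_list : list of per-node measurements of the common state
    A      : adjacency matrix of the communication graph
    K      : number of consensus iterations
    Assumes every node shares the same H and R (illustrative only).
    """
    N = len(z_list)
    Rinv = np.linalg.inv(R)
    Y_prior = np.linalg.inv(P_prior)
    # Each node forms an information pair: an even share of the prior
    # plus its own measurement contribution in information form.
    V = [Y_prior / N + H.T @ Rinv @ H for _ in range(N)]
    v = [Y_prior @ x_prior / N + H.T @ Rinv @ z for z in z_list]
    # K rounds of average consensus on the information pairs; each
    # round only uses messages from immediate neighbors (A[i, j] = 1).
    for _ in range(K):
        v = [v[i] + eps * sum(A[i, j] * (v[j] - v[i]) for j in range(N))
             for i in range(N)]
        V = [V[i] + eps * sum(A[i, j] * (V[j] - V[i]) for j in range(N))
             for i in range(N)]
    # As K grows, every node recovers (nearly) the same fused estimate.
    return [np.linalg.solve(Vi, vi) for Vi, vi in zip(V, v)]
```

Because the estimate is recovered as a ratio of the consensus pair, the even prior split cancels out, which is why each node converges to the centralized fusion result rather than a diluted one.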
1.1. Motivations
The drive to improve the robustness and reliability of cooperative UAV–UGV systems stems from the challenges identified above: lessening the burdens on individual drones and human operators, ensuring adaptability to changing environments, and decreasing dependence on centralized control, all with the goal of enhancing the overall performance and safety of these systems in real-world scenarios. In response to the complexities of coordinating vehicle operations and guaranteeing precise localization data, we devised and implemented a cost-effective vision-centered collaborative framework. While fully exploiting the advantages of distributed algorithms such as the ICF, our framework draws inspiration from the integration of UAVs and UGVs into cooperative systems and also incorporates standardized communication protocols. It is engineered to dynamically adapt to changing environmental conditions, prioritize vehicle safety, and establish a reliable and efficient communication infrastructure. Additionally, it accommodates multiple vehicles and facilitates seamless collaboration among them.
1.2. Contributions
We have developed and implemented a system that incorporates an advanced decentralized control algorithm and employs a Control Barrier Function–Control Lyapunov Function (CBF–CLF) strategy for UAVs, aimed at enhancing safety and efficiency. The efficacy of our system is validated through simulations of real-world scenarios.
In summary, the contributions are elaborated as follows:
Distributed Algorithm Design: Our system integrates an advanced decentralized control algorithm, allowing UAVs to autonomously adapt control inputs in real time by leveraging data from the Information Consensus Filter (ICF). This decentralized strategy enables each UAV to function independently yet maintain cohesion within a larger fleet, bolstering system resilience and adaptability.
Control Algorithm Design: At the core of our system is the Control Barrier Function–Control Lyapunov Function (CBF–CLF) strategy. This innovative control scheme emphasizes the safety of UAVs, which is essential for ensuring operational integrity. By enforcing safe separation distances between UAVs and obstacles, the CBF–CLF strategy minimizes collision risks. The continuous adjustment of control inputs, informed by real-time assessments of the vehicle’s state and surroundings, ensures effective implementation.
Real-World Implementation: We evaluated the effectiveness and robustness of our system through extensive simulations in various real-world scenarios. This practical testing confirms the readiness of our system for real-world applications and sets the stage for its deployment in diverse UAV operational domains.
In comparison to existing research methods, our system addresses several key limitations. Whereas prior schemes place high demands on the reliability and precision of both the drone and the human operator, which can lead to a lack of robustness, our system mitigates this risk through an advanced decentralized control algorithm that enables UAVs to independently adjust their control inputs in real time via the Information Consensus Filter (ICF), reducing the dependency on precise human intervention and enhancing overall system resilience. Additionally, where adaptability to dynamic environments has been a constraint, our decentralized approach allows for greater flexibility in responding to changing conditions. Furthermore, the reliance on a central mobile mission controller for decision-making and planning, which elevates the risk of system failure through a single point of failure, is minimized in our system by distributing the decision-making processes across the UAV fleet. Finally, while UAVs and UGVs can assist each other, ensuring that they do not cause damage or pose danger during task execution remains a challenge in real-world environments; our system addresses this challenge through the Control Barrier Function–Control Lyapunov Function (CBF–CLF) strategy, which prioritizes UAV safety by enforcing safe separation distances from obstacles, thereby reducing collision risks. Through extensive simulation-based evaluations across various real-world scenarios, the effectiveness and robustness of our system were confirmed, affirming its readiness for diverse UAV operational domains.
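To illustrate the flavor of the CBF–CLF strategy, the sketch below poses the per-step control choice as a small quadratic program for a planar single-integrator UAV in Python (using cvxpy); the dynamics, the single obstacle, and all gains are illustrative assumptions and not the exact controller derived in Section 2.2.

```python
import numpy as np
import cvxpy as cp

def cbf_clf_controller(x, x_goal, x_obs, d_safe, u_ref,
                       gamma=1.0, alpha=1.0, slack_weight=1e3, u_max=2.0):
    """CBF-CLF quadratic program for a planar single-integrator model.

    Tracks x_goal through a relaxed CLF decrease condition while a
    hard CBF constraint keeps at least d_safe from the obstacle x_obs.
    """
    u = cp.Variable(2)      # planar velocity command
    delta = cp.Variable()   # slack relaxing the CLF condition
    # CLF: V = ||x - x_goal||^2, so Vdot = 2 (x - x_goal)^T u.
    V = float(np.sum((x - x_goal) ** 2))
    grad_V = 2.0 * (x - x_goal)
    # CBF: h = ||x - x_obs||^2 - d_safe^2, so hdot = 2 (x - x_obs)^T u.
    h = float(np.sum((x - x_obs) ** 2) - d_safe ** 2)
    grad_h = 2.0 * (x - x_obs)
    cost = cp.sum_squares(u - u_ref) + slack_weight * cp.square(delta)
    constraints = [
        grad_V @ u <= -gamma * V + delta,  # CLF: drive V down (soft)
        grad_h @ u >= -alpha * h,          # CBF: keep h >= 0 (hard)
        cp.norm(u, 2) <= u_max,            # actuator limit
    ]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value
```

The key design choice is that safety (the CBF row) is enforced as a hard constraint while goal convergence (the CLF row) is softened by a slack variable, so the controller degrades tracking, never safety, when the two conflict.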
The rest of this paper is organized as follows. Section 2.1 introduces the preliminary knowledge on distributed information filtering for cooperative systems and on Control Lyapunov/Barrier Functions. Section 2.2 then details the proposed ICF-based distributed UAV–UGV cooperative system, including the system overview, the dynamics, the ICF for the UAVs, and the CBF–CLF design for the UAV–UGV cooperative system. Extensive experimental results are presented in Section 3. Finally, the conclusion is provided in Section 4.
3. Results
3.1. Vision-Based Localization
In the practical scenario where a UAV tracks a UGV, the UAV often needs to acquire the pose of the UGV for self-adjustment and decision-making. Computer vision algorithms are frequently employed to recognize the object’s pose. However, due to limitations in sensor precision and the inherent design of these algorithms, the computed results often exhibit errors. This issue becomes particularly pronounced in distance measurements using the Perspective-n-Point (PnP) algorithm, where increased distance can cause objects to appear blurry or smaller in the image. Consequently, the visual features used for localization become less distinct or sparse, challenging the algorithm’s ability to accurately match feature points and degrading the precision of the pose estimation.
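For concreteness, the following Python/OpenCV sketch shows the distance computation in question, assuming the four corner points of the Armor have already been extracted by the light-bar detector; the plate dimensions, the IPPE solver choice, and the calibration inputs are illustrative assumptions rather than the experimental configuration.

```python
import numpy as np
import cv2

# Armor corner coordinates in the plate's own frame (metres); the
# plate size here is an illustrative assumption, not the actual target's.
ARMOR_W, ARMOR_H = 0.230, 0.127
OBJECT_POINTS = np.array([
    [-ARMOR_W / 2,  ARMOR_H / 2, 0.0],   # top-left
    [ ARMOR_W / 2,  ARMOR_H / 2, 0.0],   # top-right
    [ ARMOR_W / 2, -ARMOR_H / 2, 0.0],   # bottom-right
    [-ARMOR_W / 2, -ARMOR_H / 2, 0.0],   # bottom-left
], dtype=np.float64)

def estimate_armor_pose(image_points, camera_matrix, dist_coeffs):
    """Solve PnP for the armor plate and return (rvec, tvec, distance).

    image_points must be a 4x2 pixel array ordered like OBJECT_POINTS;
    the norm of tvec is the camera-to-armor distance whose error
    behavior is studied in this section.
    """
    ok, rvec, tvec = cv2.solvePnP(
        OBJECT_POINTS, np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_IPPE)
    if not ok:
        raise RuntimeError("PnP failed on the given corner configuration")
    return rvec, tvec, float(np.linalg.norm(tvec))
```

As the plate recedes, its corners span fewer pixels, so the same one-pixel detection error maps to a larger range error, which is the degradation quantified in the experiments below.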
Given the aforementioned considerations, it is imperative to investigate the impact of distance variation on visual computation. With knowledge of the camera’s performance across different distances, it becomes possible to regulate the accuracy of data acquisition by adjusting the observation distance, thus optimizing system performance. The parameters used for the experiment are shown in Table 1.
The camera was positioned at various distances and angles relative to the Armor, and the previously mentioned visual algorithm was employed to identify the light bars on the Armor and calculate their poses and angles. The distance between the camera and the Armor was computed and compared with the ground-truth distance to obtain the measurement error. The positions of the UAVs in the experiment were predetermined based on the distribution of the experimental variables, and each position was carefully measured and set; in this way, the error of the algorithm can be accurately calculated. The experimental setup is shown in Figure 5.
The experiment involved two variables: distance and angle. The distance varied from 3 m to 5 m, in increments of ten centimeters, resulting in 11 groups. The angle started with the camera facing the Armor at 0 degrees and ranged up to 40 degrees to the left, in increments of ten degrees, creating five groups. Considering the symmetry in the recognition of the left and right deviations, focusing solely on the left deviations was deemed sufficient for meeting the research objective. Consequently, this experiment comprised a total of 110 datasets.
Considering that an excessively large yaw angle could result in the loss of one side of the light bars, thereby making it impossible to form the visual features necessary for the Armor recognition, the experiment avoided overly large angles. Consequently, 40 degrees was selected as the maximum angle for this experiment.
As depicted in Figure 6, the error margin of the measurement data tends to widen as the distance increases. Within the range of 300 to 410 cm, the measurement error mostly fluctuates within 5 cm. Beginning at 420 cm, the measurement error starts to escalate, exhibiting significant deviations from previous values, with fluctuations between 10 and 15 cm. Consequently, we identify 410 cm as the threshold for accurate measurement: beyond this distance, the camera pose estimation performed by the UAVs is deemed inaccurate, whereas, within this range, the estimation is considered reasonably accurate.
Meanwhile, the results in Figure 6 indicate that deviations in angle have a relatively minor impact on the accuracy. Changing the yaw angle of the UAV relative to the UGV from 0 to 40 degrees has a negligible effect on the measurement data, which is theoretically justifiable: the algorithm extracts features from the upper and lower vertices of the two light bars and then obtains the rectangle corners through sorting, so the accuracy of the calculation does not depend on the thickness of the light bars. Although an increase in the yaw angle may cause the captured light bars to appear thinner, it does not hinder the calculation. However, excessively large yaw angles may render one side of the light bars invisible, creating an unsolvable scenario. It is therefore imperative to maintain the yaw angle below a critical threshold to ensure continuous observation and data integrity.
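The sorting step described above can be sketched as follows, assuming exactly four light-bar vertices have been detected; the ordering convention is chosen to match the PnP object points from the earlier sketch and may differ from the original detector's.

```python
import numpy as np

def order_armor_corners(pts):
    """Order four light-bar vertices as [top-left, top-right,
    bottom-right, bottom-left] in image coordinates (y grows downward).

    Because only the bar endpoints are used, the computed corners are
    insensitive to how thick the bars appear at large yaw angles.
    """
    pts = np.asarray(pts, dtype=np.float64)
    by_x = pts[np.argsort(pts[:, 0])]      # split into left/right bar
    left, right = by_x[:2], by_x[2:]
    tl, bl = left[np.argsort(left[:, 1])]  # smaller y = upper vertex
    tr, br = right[np.argsort(right[:, 1])]
    return np.array([tl, tr, br, bl])
```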
3.2. ICF
The tracking system described in this paper closely resembles a linear system. Regarding algorithm performance, we posit that the ICF is somewhat less effective than the Centralized Kalman Filter (CKF): in the CKF, the sensor nodes feed a substantial amount of target information to a single fusion center, enabling high-precision processing of the accumulated information.
However, a significant characteristic of the CKF is its high-dimensional state space, which leads to a marked increase in computational complexity. This makes it challenging to fulfill the real-time requirements of navigation systems. Moreover, the fault tolerance of the CKF is relatively low. Should any node within the system fail, its impact spreads through the filter, affecting the other states and making the navigation information output of the combined system unreliable. Therefore, in practical applications, the ICF is preferred for mobile systems due to these considerations.
Based on the analysis above, if the average error curve of the ICF closely approximates that of the CKF, the ICF is considered to have achieved a satisfactory performance. In this section, we evaluate the performance of the proposed ICF in a MATLAB 2023a simulation and compare it with that of the CKF. The tracking trajectory computed by the ICF algorithm is shown in Figure 7.
Initially, we investigate the influence of the number of cameras on the experimental results. Given the hardware limitations, the field of view (FOV) of the cameras remains constant. To ensure comprehensive UGV observations, increasing the number of cameras and placing them strategically becomes imperative for accurate UGV tracking. It is essential that the UGV trajectory falls within the FOV of at least one camera; otherwise, it is considered unobservable and disregarded.
In this paper, we introduce two experimental variables: the process covariance Q and the number of consensus iterations K. The UAVs are symmetrically distributed on both sides of the UGV. The input variables encompass the random refreshing of the UGV’s position and an initial distance of 2 m between each UAV and the UGV. The velocity of the UGV is set to 1.5 m/s, with both the UAV’s tracking speed and the UGV’s leading speed undergoing slight random variations. These settings determine the inputs for the simulation. The observation matrix H and the state transition matrix F follow a linear tracking model.
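As a stand-in for the exact matrices, the sketch below instantiates a generic planar constant-velocity model in Python; the state ordering, the sampling period, and the scalar-diagonal form of Q are illustrative assumptions, not this paper's exact values.

```python
import numpy as np

dt = 0.1  # sampling period in seconds (illustrative value)

# State x = [px, py, vx, vy]^T with constant-velocity propagation.
F = np.array([[1.0, 0.0,  dt, 0.0],
              [0.0, 1.0, 0.0,  dt],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# Each camera observes the target position only.
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])

def process_cov(q):
    """Process covariance with a scalar diagonal weight q, the
    quantity varied across the four settings of Section 3.2.1."""
    return q * np.eye(4)
```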
3.2.1. Process Covariance Q
The elements on the diagonal of the process noise matrix Q represent the noise variance of the corresponding state variables. The magnitude of these diagonal elements significantly influences the filter’s estimation accuracy of the system state: larger diagonal elements indicate higher noise in the corresponding state variables, rendering the system more sensitive to its impact. In this experiment, we configured four sets of Q values with increasing diagonal magnitudes.
In Figure 8, the CKF error curve shows that the error values across all the datasets remain predominantly within 10, maintaining a consistently low level. Among the ICF error curves, the first dataset displays the lowest error values, with the errors generally staying within 10 and peaking at 12 at most. Initially, the ICF error curve closely mirrors the CKF curve, indicating the efficient performance of the ICF. However, as the noise values on the diagonal escalate, the error values increase correspondingly, leading to a divergence between the ICF and CKF curves. In the fourth dataset, the ICF’s error values notably exceed 20, occasionally nearing 30, and the curve deviates from the CKF curve almost everywhere.
3.2.2. Consensus Iterations K
The parameter K, the total number of consensus iterations, dictates how many consensus rounds the algorithm undertakes, significantly impacting its convergence and performance. The purpose of these consensus iterations is to ensure uniformity among the various nodes in a distributed system, allowing them to collaboratively estimate the system’s state. As illustrated in Figure 9, the error curve of the ICF initially shows a significant deviation from the CKF error curve. However, once the number of iterations exceeds a modest threshold, the error curve of the ICF effectively converges and becomes comparable to the CKF curve. Beyond this point, further increasing the number of iterations does not yield a marked improvement in the ICF’s performance.
A higher value of K denotes more iterations, which aids in bolstering the algorithm’s stability in achieving consensus, but this can lead to prolonged computation times, consequently affecting the algorithm’s convergence speed. Augmenting the number of consensus iterations could impose additional computational demands; hence, choosing an optimal K value necessitates a balance between algorithm performance and computational efficiency. Given the dynamic nature of system states, the selection of K should also consider these dynamics. Larger K values are preferable in scenarios where the system states evolve gradually.
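For intuition about the role of K, the consensus stage can be written in the generic average-consensus form below, with step size $\epsilon$ and neighbor set $\mathcal{N}_i$; this is the textbook form, and the exact weighting used in our implementation may differ:

$$ v_i^{(k+1)} = v_i^{(k)} + \epsilon \sum_{j \in \mathcal{N}_i} \left( v_j^{(k)} - v_i^{(k)} \right), \qquad k = 0, 1, \ldots, K-1. $$

For a connected communication graph and a suitably small step size, each $v_i^{(K)}$ approaches the network-wide average of the initial values as K grows, which explains both why the ICF error curve approaches the centralized curve once K is large enough and why additional iterations beyond that point bring diminishing returns.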
3.3. CBF–CLF
The findings from the preceding visual experiments suggest that maintaining a reasonable distance between the UAVs and the UGV is crucial to ensure the integrity and clarity of the Armor imaging. This experiment was simulated in MATLAB 2023a, and the parameters are shown in Table 2. The initial velocities of the UAV and UGV were selected in consideration of platform capabilities, safety, and efficiency.
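Concretely, the tracking requirement examined in the two cases below can be encoded as a pair of distance barriers; this is a sketch of the encoding, with $d(t)$ denoting the UAV–UGV separation and $d_{\min}$, $d_{\max}$ standing in for the roughly 1 m and 2.4 m bounds observed in Figure 11:

$$ h_1(x) = d_{\max} - d(t) \ge 0, \qquad h_2(x) = d(t) - d_{\min} \ge 0. $$

Keeping $h_1$ nonnegative keeps the Armor close enough for accurate measurement, while keeping $h_2$ nonnegative prevents the Armor from overflowing the frame and the vehicles from colliding.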
In this paper, we consider the following two cases:
Case 1: the initial velocity of the UGV is greater than that of the UAV:
As depicted in Figure 10, the UGV tends to move away from the UAV. To close the distance gap while ensuring that the Armor imaging remains sufficiently large in the frame, the UAV increases its speed to catch up with the UGV. In the initial 0–10 s, the UAV’s velocity increases exponentially as it swiftly narrows the gap with the UGV. Between 20 and 60 s, the UAV’s velocity gradually aligns with the UGV’s velocity and stabilizes. After 60 s, the two velocities essentially equalize.
Figure 11 (top) illustrates that the distance between the UAV and the UGV stabilizes at a value not exceeding 2.4 m. Meanwhile, Figure 11 (bottom) presents the cumulative distribution function (CDF) of the catch-up time, indicating that the UAV can catch up with the UGV within a bounded time. This constraint naturally regulates the UAV’s horizontal speed to ensure continuous tracking of the UGV.
Case 2: the initial velocity of the UAV is greater than that of the UGV:
As shown in Figure 10, the UGV tends to move closer to the UAV. To increase the distance while keeping the Armor fully and recognizably within the image, the UAV decreases its velocity. In the initial 0–10 s, the UAV’s velocity decreases exponentially as the UAV slows down to match the UGV’s pace. Between 20 and 60 s, the UAV’s velocity gradually aligns with the UGV’s velocity and stabilizes. After 60 s, the two velocities essentially become consistent, achieving the desired speed.
Figure 11 (top) indicates that the distance between the UAV and the UGV stabilizes at a value not less than 1 m, while Figure 11 (bottom) displays the CDF of the convergence time, demonstrating that the UAV can match the UGV’s pace within a bounded time.