Article

UAV Autonomous Navigation System Based on Air–Ground Collaboration in GPS-Denied Environments

by Pengyu Yue 1, Jing Xin 1,*, Yan Huang 1, Jiahang Zhao 1, Christopher Zhang 2, Wei Chen 3 and Mao Shan 4

1 Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xi’an University of Technology, Xi’an 710048, China
2 Department of Mechanical Engineering, McGill University, Montreal, QC H3A 0G4, Canada
3 Xi’an Xiangteng Micro-Electronic Technology Co., Ltd., Xi’an 710075, China
4 Australian Centre for Field Robotics, University of Sydney, Camperdown, NSW 2050, Australia
* Author to whom correspondence should be addressed.
Drones 2025, 9(6), 442; https://doi.org/10.3390/drones9060442
Submission received: 15 May 2025 / Revised: 9 June 2025 / Accepted: 11 June 2025 / Published: 16 June 2025
(This article belongs to the Special Issue Autonomous Drone Navigation in GPS-Denied Environments)

Abstract

To address the challenge of UAV navigation in GPS-denied and unknown environments, this paper approaches the problem from the perspective of the UAV navigation architecture and proposes an autonomous navigation method based on aerial–ground cooperative perception. The approach consists of two key components. Firstly, a mobile anchor trilateration and environmental modeling method is developed using a multi-UAV system by integrating the visual sensing capabilities of aerial surveillance UAVs with ultra-wideband technology. It constructs a real-time global 3D environmental model and provides precise positioning information, supporting autonomous planning and target guidance for near-ground UAV navigation. Secondly, based on real-time environmental perception, an improved D* Lite algorithm is employed to plan rapid, collision-free flight trajectories for near-ground navigation, allowing the UAV to autonomously execute collision-free movement from the initial position to the target position in complex environments. The results of real-world flight experiments demonstrate that the system can efficiently construct a global 3D environmental model in real time, provide accurate flight trajectories for the near-ground navigation of UAVs, and deliver real-time positional updates during flight. The system enables UAVs to autonomously navigate in GPS-denied and unknown environments, and this work verifies the practicality and effectiveness of the proposed air–ground cooperative perception navigation system, as well as the mobile anchor trilateration and environmental modeling method.

1. Introduction

Multi-UAV systems have been widely used in complex mission scenarios, such as disaster rescue, anti-terrorism drills, environmental monitoring, etc., thanks to their excellent environmental adaptability, fault tolerance, and parallel mission processing capabilities [1,2]. However, achieving the autonomous navigation of UAVs in GPS-denied and unknown environments remains a key challenge in enabling multi-UAV systems to deal with complex tasks [3,4]. Achieving efficient environmental perception and autonomous planning through multi-UAV collaboration has become a current research hotspot, especially in the absence of satellite data [5].
UAV autonomous navigation tasks can generally be divided into two main stages: the perception stage and the decision stage [6]. Firstly, in the upstream perception stage, the multi-UAV system conducts environmental sampling through its surveillance capabilities. Through data association and fusion techniques, the collected multi-source information is transformed into high-precision map data, which UAVs can interpret and use [7,8]. The multi-UAV system achieves the abstract representation of the UAV swarm’s working scenarios and provides a basis for subsequent downstream decision-making and planning tasks. Secondly, in the downstream decision stage, based on prior knowledge and historical experience, the UAV swarm relies on the prior map and integrates spatial scale, obstacle distribution, and target position information in the current scenario [9]. It then autonomously plans and selects the optimal strategy to execute flight commands, thereby maximizing the efficiency of the autonomous navigation task [10,11,12].
However, in urban areas, canyons, or regions with strong electromagnetic interference, satellite-based systems such as GPS often fail or lose precision, which creates serious challenges for perception and mapping in unknown environments. Without prior information, near-ground UAVs suffer from a limited sensor field of view and cannot obtain global environmental data in a timely manner, so both path planning efficiency and operational safety degrade significantly. In addition, the perception–decision latency introduced by global map construction and planning further hampers the system’s real-time responsiveness.
To address the aforementioned challenges, this paper proposes a complementary air–ground cooperative UAV navigation system designed to address the difficulties of UAV autonomous navigation in GPS-denied and unknown environments. The proposed system leverages the cooperative operation and visual perception advantages of aerial surveillance UAVs, transforming abstract environmental states into real-time data that are comprehensible and applicable. Simultaneously, the system constructs a global 3D environmental data model while providing precise, real-time positioning information from ultra-wideband (UWB) devices, thereby offering a reliable foundation for the decision-making task of the near-ground navigation UAV. This system can improve the task execution efficiency under constrained conditions and enable the near-ground navigation UAV to autonomously and efficiently complete navigation tasks from the start position to the target. This paper provides a novel idea and solution for the advancement of multi-UAV coordination navigation. The main contributions of this paper are as follows:
  • An autonomous navigation system based on air–ground collaboration is proposed. The system leverages the perspective advantage of an aerial surveillance UAV swarm and utilizes its wide-area bird’s-eye view (BEV), perceiving and efficiently constructing global 3D data models of unknown environments. Based on the constructed model, the system performs rapid path planning and trajectory tracking control for the near-ground navigation UAV from the start position to the target. Concurrently, the aerial surveillance UAV swarm provides the real-time position information of the near-ground navigation UAV during flight. The proposed system achieves the collision-free autonomous navigation of UAVs from the start position to the target in GPS-denied scenarios with no prior global environmental data model.
  • A global 3D environmental data modeling and cooperative positioning method based on multi-UAV swarm perception is proposed. This method utilizes the powerful environmental perception capabilities of aerial surveillance UAV swarms. Through UAV BEV image stitching and object detection, the distribution of obstacles and semantic information about the unknown environment are obtained, and a map of the unknown environment is constructed in real time and efficiently. Meanwhile, the position of the near-ground navigation UAV in the environment can be computed using the UWB range perception between the aerial surveillance swarm and the near-ground navigation UAV.
  • An outdoor UAV autonomous navigation live flight experiment is conducted. The results demonstrate the feasibility and effectiveness of the proposed system, as well as the cooperative positioning and global 3D environmental data modeling method.

2. Related Work

In this section, based on the differences in swarm composition and operational modes, multi-UAV cooperative navigation methods are primarily categorized into two types: mutual assistance cooperative task systems, which mainly consist of homogeneous UAV swarms, and complementary coordinative task systems, which mainly consist of heterogeneous UAV swarms. Then, the pertinent literature is reviewed [13,14].

2.1. Mutual Assistance Systems

In mutual assistance systems, homogeneous swarm members perform identical functions and assume equivalent roles to achieve the cooperative execution of swarm tasks [15]. This method exhibits strong fault tolerance due to the high degree of consistency among the individual members of the swarm, and it reduces the operational time by distributing tasks evenly [16]. In this process, each member shares perception information to establish a unified positioning and decision-making framework, ultimately completing tasks such as cooperative exploration, environmental perception, path planning, and coordinated motion [17]. Mutual assistance systems are typically simple in structure and easy to manage, thereby achieving rapid task allocation and cooperative operation in structured environments [18,19]. However, because the individual members of a mutual assistance system are highly homogeneous in function and lack complementary capabilities, its cooperative efficiency cannot match that of systems with heterogeneous, complementary members, and it fails to achieve superlinear gains (i.e., the whole exceeding the sum of its parts, or 1 + 1 > 2) [20].

2.2. Complementary Systems

In contrast, the heterogeneous swarm members of complementary systems utilize the structural or sensory diversity of the swarm members to achieve collective objectives through a complementary collaboration paradigm [21,22]. In such systems, swarm members are assigned specialized tasks based on their unique functionalities and then engage in cooperative operations [23]. During the perception phase, aerial surveillance members equipped with high-precision visual cameras, LiDAR, and other sensors acquire fundamental environmental data [24,25,26]. Subsequently, with object detection, semantic segmentation, and Simultaneous Localization and Mapping (SLAM) technologies, the UAV swarm can comprehend the global context, extracting obstacle dimensions, spatial positions, and semantic classifications in complex unknown environments [27]. Meanwhile, the aerial surveillance members collaborate with the inertial measurement units (IMUs) carried by the navigation member to compute its precise position information, thereby providing reliable information for its decision tasks [28]. In the final decision phase, the navigation member generates optimal flight trajectories based on the prior perception information [29]. It achieves collision-free autonomous motion from the current position to the target position under given constraints, minimizing either the flight distance or flight time [30]. For complementary systems, Reference [31] addresses uplink transmission rate optimization over wireless links. Under conditions of missing prior information, it targets the trajectory planning needs of surveillance UAVs. It proposes a trajectory planning scheme based solely on discretized grid signal states. This method requires no high-precision prior map. Instead, it generates a flight trajectory for the surveillance UAV from ground-collected network signal grids. It thus offers innovative theoretical support for aerial perception and flight control under resource constraints. Similarly, Reference [32] also focuses on surveillance UAVs. It addresses the continuous flight requirements of cellular-connected UAVs in weak prior scenarios. The authors design an intelligent trajectory planning framework. This framework aims to minimize the total flight time and expected outage duration. It further enhances task reliability in unknown environments.
However, in existing complementary systems, the aerial surveillance unit can generate flight trajectories efficiently under weak prior conditions, but the near-ground navigation unit must handle multi-source, heterogeneous sensor data. Data matching and fusion impose a heavy computational burden [33]. This burden significantly reduces the navigation unit’s environmental perception efficiency and localization update rate. Moreover, excessive processing delays mean that these systems cannot satisfy the navigation requirements in GPS-denied, complex environments [34]. Furthermore, complex unknown environments and large volumes of perception data further increase the perception–decision latency, severely limiting the applicability of air–ground cooperative systems [35].
In summary, the mutual assistance cooperative architecture features a simple configuration and easy deployment but exhibits clear efficiency bottlenecks and cannot achieve superlinear gains. Complementary systems, by contrast, impose high computational overheads and incur significant latency when fusing heterogeneous perception data, making them ill suited for real-time autonomous navigation in GPS-denied, unknown environments. To overcome these limitations, this paper exploits the aerial surveillance UAV’s BEV perspective and the reliability of short-range wireless communications to build a real-time global 3D environment mapping and high-precision localization framework at the navigation end. By relying on mobile anchor trilateration, this design eliminates the computational burden of cross-modal data matching. On this basis, we incorporate an improved D* Lite planner that, leveraging live environmental perception, efficiently generates collision-free flight trajectories for near-ground navigation UAVs. The resulting system successfully accomplishes autonomous navigation tasks under GPS denial in unknown environments.

3. Methods

3.1. Overall System Architecture

To address the issue of UAV navigation in GPS-denied and unknown environments, this paper designs a UAV autonomous navigation system based on air–ground collaboration architecture. The schematic diagram of the proposed system is illustrated in Figure 1. The system leverages the visual perception advantages of an aerial surveillance UAV swarm, utilizing the wide-area BEV provided by the downward-facing monocular cameras mounted on two UAVs to perceive the unknown environment. This process also constructs real-time global 3D data models, accurately describing the geometric structure of the environment. Based on the constructed model, this architecture performs rapid path planning for the near-ground navigation UAV, followed by motion control execution to avoid obstacles according to the planned trajectory. Concurrently, unlike traditional fixed ground station localization, we mount UWB anchor nodes directly on the aerial UAV formation. This creates a set of mobile anchors that combine perception and localization functions. The system continuously measures the ranges between the aerial and near-ground UAVs. It obtains the UAV altitude from a belly-mounted laser sensor. By leveraging the predefined relative geometry of the mobile anchors within the formation, we apply trilateration to compute the precise position of the near-ground navigation UAV. This mobile anchor trilateration approach delivers reliable localization data to guide the UAV’s flight path. Ultimately, the proposed system assists the near-ground navigation UAV in completing collision-free autonomous navigation from the start position to the target.
Figure 2 shows the overall block diagram of the proposed system. In addition to the aerial surveillance UAV swarm and the near-ground navigation UAV, the system also includes two ground control platforms. All components operate within the same communication network for data communication. Each UAV in the system not only has fundamental flight and communication capabilities but also carries devices such as cameras, infrared sensors, and UWB ranging modules to enhance the system’s functionality.
Control Platform 1 issues flight commands to the aerial surveillance UAV swarm, directing them to hover over the mission area in a formation defined by preset geometric parameters. The UAVs stream real-time video from their BEV cameras back to Platform 1 for environmental perception and map construction. The mapping module then stitches the two video feeds and applies object detection to produce a 3D map model for the path planning of the near-ground navigation UAV. Finally, according to the constructed environmental model, the path planning module rapidly plans an optimal 3D path from the start position to the target for the near-ground navigation UAV and transmits the path information to Control Platform 2.
Control Platform 2 computes control commands from the path planning results. It then sends navigation instructions to the near-ground navigation UAV via Wi-Fi. This drives the UAV to fly along the planned trajectory. Simultaneously, the aerial surveillance UAV swarm and the near-ground UAV exchange range measurements through their UWB modules. Control Platform 2 collects these measurements and, leveraging the fixed geometry of the three aerial anchors, employs a trilateration algorithm for localization and real-time position updates of the near-ground UAV. Finally, the system completes collision-free autonomous navigation under GPS denial in unknown environments.

3.2. Multi-UAV Swarm Perception-Based Environmental Modeling and Cooperative Positioning Method

To address the UAV environmental modeling requirements in unknown environments, this paper proposes a global 3D environmental data modeling algorithm based on multi-UAV swarm image perception. The fundamental goal of the algorithm is to achieve global perception and real-time 3D environmental data modeling of the unknown environment through UAV BEV image stitching and object detection. The resulting model format meets the requirements of UAV path planning and autonomous navigation. The flowchart for the developed algorithm is illustrated in Figure 3.
The detailed steps of the algorithm are as follows.
Step 1: Stitch the downward-facing camera images of the two aerial surveillance UAVs, which are hovering above the environment. The flowchart of the UAV image stitching algorithm is illustrated in Figure 4. Initially, the SuperPoint [36] and SuperGlue [37] networks are employed to extract and match feature points between the two UAV images, respectively. Subsequently, based on the matched feature point pairs, a homography matrix is computed to estimate the geometric transformation relationship between the two images. Finally, image transformation and fusion are performed using the homography matrix.
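For concreteness, the sketch below shows one way to carry out the homography estimation and fusion step once matched keypoints are available. It is a minimal OpenCV illustration, assuming SuperPoint/SuperGlue have already produced matched pixel coordinates pts_a and pts_b for the two BEV images; the exact blending and seam handling used in the paper are not reproduced.

```python
# Minimal homography-based stitching sketch (assumes matched keypoints already exist).
import cv2
import numpy as np

def stitch_pair(img_a, img_b, pts_a, pts_b):
    """pts_a, pts_b: N x 2 arrays of matched pixel coordinates from SuperGlue."""
    pts_a = np.asarray(pts_a, dtype=np.float32)
    pts_b = np.asarray(pts_b, dtype=np.float32)

    # Homography mapping image B into image A's frame, with RANSAC outlier rejection.
    H, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 5.0)

    # Warp B onto a canvas large enough for both views (side-by-side overlap assumed).
    h_a, w_a = img_a.shape[:2]
    h_b, w_b = img_b.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w_a + w_b, max(h_a, h_b)))

    # Place A on the left; a production pipeline would blend the seam region.
    canvas[:h_a, :w_a] = img_a
    return canvas, H
```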
Step 2: Extract the semantic information and bounding box coordinates of obstacles by performing object detection on the stitched global environmental image. Here, the semantic information indicates the obstacle category, while the bounding box coordinates represent the pixel positions of obstacles in the global environmental perception image. Figure 5 contrasts the real-world scene with the stitched image detection results, visually illustrating the coordinate calculation workflow employed during environmental model construction.
Step 3: Compute the real-world positions and dimensions of the obstacles in the environment according to the collected obstacle information. In this paper, the (x, y, z) frame shown in Figure 5a is defined as the real-world environment model coordinate system. Its origin O sits at the figure’s lower-left corner; the x axis points right, and the z axis points upward. This frame describes objects’ actual 3D positions. The (x, y) pixel frame in Figure 5b, with origin O at the upper-left corner, is defined as the screen (pixel) coordinate system. Here, the x axis points right and the y axis increases downward. This frame expresses the 2D pixel locations of objects in the stitched BEV image. According to the above correspondence, and combining the stitched image size with the altitude of the aerial surveillance UAV swarm, the pixel coordinates of the obstacle bounding boxes can be mapped to real-world coordinates. Let the stitched image have pixel width w and height h, and let the physical map dimensions be X × Y (m), determined by the UAV swarm altitude. The bounding box top-left (x_tl, y_tl) and bottom-right (x_br, y_br) pixel coordinates are converted as shown in Equations (1)–(4):
x_{start} = \dfrac{x_{tl}}{w} \times X \quad (1)

y_{start} = \dfrac{h - y_{tl}}{h} \times Y \quad (2)

x_{dim} = \dfrac{x_{br} - x_{tl}}{w} \times X \quad (3)

y_{dim} = \dfrac{y_{br} - y_{tl}}{h} \times Y \quad (4)
Here, (x_start, y_start) gives the obstacle’s real-world position, and (x_dim, y_dim) is its extent along the x and y axes. The obstacle height cannot be measured directly from nadir imagery, so we infer it from each obstacle’s semantic class.
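As a worked illustration of Equations (1)–(4), the function below converts one detected bounding box into the map frame; the class_height argument is illustrative and stands in for the semantic height inference described above.

```python
# Sketch of the pixel-to-world conversion in Equations (1)-(4).
def bbox_to_world(x_tl, y_tl, x_br, y_br, w, h, X, Y, class_height):
    """Map a bounding box in the stitched image (w x h pixels) onto the
    X x Y metre map determined by the surveillance swarm's altitude."""
    x_start = (x_tl / w) * X             # Eq. (1): real-world x position
    y_start = ((h - y_tl) / h) * Y       # Eq. (2): flip the downward pixel y axis
    x_dim = ((x_br - x_tl) / w) * X      # Eq. (3): obstacle extent along x
    y_dim = ((y_br - y_tl) / h) * Y      # Eq. (4): obstacle extent along y
    # Height comes from the semantic class, not from the nadir image.
    return (x_start, y_start), (x_dim, y_dim, class_height)
```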
Step 4: Construct a 3D environmental model for the path planning of the near-ground navigation UAV. The real position and size information of all obstacles is used to construct a 3D map of the environment. In addition, in order to increase the functionality and usability of the map, the semantic information of the obstacles is integrated into the constructed map, and we further construct a 3D semantic (grid) map of the environment.
Considering the limitations of fixed base stations in current UWB positioning methods, and to meet the positioning requirements during the trajectory tracking of the near-ground navigation UAV, a cooperative positioning approach based on multi-UAV swarm UWB range perception is designed. Built upon the multi-UAV environmental modeling method described above, this approach provides the position of the near-ground navigation UAV in the environment in real time. Unlike UWB positioning methods based on fixed base stations, the base stations in the proposed method are not permanently deployed at fixed ground locations. They are mounted on the aerial surveillance UAV swarm to form a group of mobile positioning base stations. Specifically, the aerial surveillance UAV swarm hovers above the environment in a known triangular formation. Two of the UAVs perform global 3D environmental data modeling using the method described above, and all three UAVs serve as UWB positioning base stations, while the near-ground navigation UAV serves as the target tag. The real-time range measurements between the aerial surveillance UAV swarm and the near-ground navigation UAV are obtained by the onboard UWB devices. Based on the known positions of the aerial surveillance UAV swarm and these range measurements, the system calculates the real-time position of the near-ground navigation UAV in the environment using a trilateration algorithm.
As shown in Figure 2, it is assumed that the positions of the three aerial surveillance UAVs are (x_1, y_1, z_1), (x_2, y_2, z_2), and (x_3, y_3, z_3), respectively, and the position of the near-ground navigation UAV is (x_0, y_0, z_0). The three aerial surveillance UAVs send UWB signals to the near-ground navigation UAV via onboard UWB devices. The distances d_1, d_2, d_3 between the aerial surveillance UAV swarm and the near-ground navigation UAV are obtained by measuring each signal’s time of flight and multiplying it by the speed of light. By combining these distances with the known positions of the aerial surveillance UAV swarm, a system of spherical equations can be formulated based on geometric relationships, as shown in Equation (5).
\begin{cases} (x_0 - x_1)^2 + (y_0 - y_1)^2 + (z_0 - z_1)^2 = d_1^2 \\ (x_0 - x_2)^2 + (y_0 - y_2)^2 + (z_0 - z_2)^2 = d_2^2 \\ (x_0 - x_3)^2 + (y_0 - y_3)^2 + (z_0 - z_3)^2 = d_3^2 \end{cases} \quad (5)
The three spheres centered on the UAV base stations generally intersect at two points, so solving Equation (5) yields two sets of 3D coordinates, and the position of the near-ground navigation UAV cannot be determined uniquely from the ranges alone. The designed system keeps the aerial surveillance UAV swarm in the same height plane, so the two candidate solutions are mirror-symmetric about this plane. Because the near-ground navigation UAV always operates below the aerial surveillance UAV swarm, the candidate above the base station plane can be discarded. Thus, the 3D position (x_0, y_0, z_0) of the near-ground navigation UAV in the environmental map can be uniquely determined and updated in real time.
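A minimal sketch of this trilateration and mirror-solution disambiguation is given below. It assumes the three anchors share the same altitude, as in the proposed system, and linearizes Equation (5) by subtracting the sphere equations pairwise; the paper does not specify its exact solver, so this is one straightforward implementation.

```python
# Trilateration sketch for Equation (5): three mobile anchors at a common altitude,
# keeping the mirror solution below the anchor plane.
import numpy as np

def trilaterate(anchors, dists):
    p1, p2, p3 = (np.asarray(a, dtype=float) for a in anchors)
    d1, d2, d3 = dists

    # Subtracting sphere 1 from spheres 2 and 3 cancels the z terms (equal anchor
    # altitudes), leaving two linear equations in (x, y).
    A = 2.0 * np.array([[p2[0] - p1[0], p2[1] - p1[1]],
                        [p3[0] - p1[0], p3[1] - p1[1]]])
    b = np.array([d1**2 - d2**2 + p2[:2] @ p2[:2] - p1[:2] @ p1[:2],
                  d1**2 - d3**2 + p3[:2] @ p3[:2] - p1[:2] @ p1[:2]])
    x, y = np.linalg.solve(A, b)

    # Back-substitute into sphere 1; the navigation UAV flies below the anchors,
    # so take the lower of the two mirror-symmetric candidates.
    dz = np.sqrt(max(d1**2 - (x - p1[0])**2 - (y - p1[1])**2, 0.0))
    return np.array([x, y, p1[2] - dz])

# Quick check with the anchor formation used in the navigation experiment (metres):
anchors = [(3.0, 2.25, 4.5), (4.5, 4.75, 4.5), (6.0, 2.25, 4.5)]
target = np.array([4.0, 3.0, 1.0])
dists = [np.linalg.norm(target - np.asarray(a)) for a in anchors]
print(trilaterate(anchors, dists))   # -> approximately [4.0, 3.0, 1.0]
```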
In summary, unlike fixed ground anchors, this method deploys UWB anchors on a mobile aerial UAV swarm, combining environmental perception with an anchor functionality. This significantly enhances the system’s adaptability and deployment flexibility in GPS-denied and unknown environments. Finally, by seamlessly integrating this cooperative positioning strategy with the multi-UAV mapping method, it delivers continuous, high-precision 3D localization to support near-ground UAV path planning and trajectory tracking, enabling autonomous navigation in complex, unknown scenarios.

4. Results

In order to verify the effectiveness of the proposed method, two groups of experiments are conducted in real scenes, including UAV cooperative positioning and UAV autonomous navigation experiments based on air–ground collaboration. The experimental hardware comprises three main components: four DJI RoboMaster TT UAVs, four open-source controllers, and five Nooploop LinkTrack P-A-RMTT UWB sensors. Each UAV integrates a communication system, propulsion system, flight controller, vision-based localization system, and flight battery. Built-in features include infrared height hold, LED indicators, and Wi-Fi connectivity. The BEV camera offers a 320 × 240-pixel resolution. The open-source controller serves as an expansion module for the TT UAV. It attaches to the UAV’s micro-USB port via a micro-USB cable. Its extended I/O interfaces support the UART, I2C, GPIO, PWM, and SPI protocols. The UWB module connects to the controller through UART. In local positioning mode, the UWB achieves a maximum ranging rate of 50 Hz. This meets the real-time tracking demands of high-speed targets. See Figure 6.
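The paper does not detail its software stack, so the snippet below is only a hedged sketch of how one TT UAV might be brought online and its video stream accessed from a control platform, using the community djitellopy SDK as an assumed interface; the IP address is hypothetical.

```python
# Hedged sketch: connecting to one RoboMaster TT over Wi-Fi with the djitellopy SDK
# (an assumption; the paper does not name its control software). The IP is hypothetical.
from djitellopy import Tello

uav = Tello(host="192.168.10.1")       # hypothetical address of one surveillance UAV
uav.connect()
print("battery (%):", uav.get_battery())

uav.streamon()                         # start the onboard video stream
frame = uav.get_frame_read().frame     # one BGR frame handed to the stitching module
uav.streamoff()
```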

4.1. UAV Cooperative Positioning Experiment

The purpose of this experiment is to verify the feasibility of the proposed cooperative positioning method and evaluate its positioning performance under ideal conditions. In the experiment, five UWB modules are configured to local positioning mode: three are installed on the three base station UAVs, one on the tag UAV, and one on the ground control platform. The three base station UAVs are fixed at the same height with tripods and distributed in a triangular formation, as shown in Figure 7. This scenario design lets the positioning system ignore the cooperative control process of the multiple base station UAVs and mitigates the hardware performance limitations of the UAVs, allowing us to focus on evaluating the cooperative positioning method itself. The UWB modules actively measure the distances between one another and complete the automatic calibration of the base station positions. In the experiment, the tag UAV is controlled to fly along the rectangular trajectory marked in Figure 7, and its position is calculated in real time by the three base station UAVs using the proposed cooperative positioning algorithm.
The experimental results are shown in Figure 8: Figure 8a illustrates the flight process of the tag UAV following the predefined rectangular trajectory, and Figure 8b shows the calculated position information (trajectory) of the tag UAV during the flight. In Figure 8b, the three blue triangles represent the three base stations, corresponding to base station UAV 1, base station UAV 2, and base station UAV 3. The calibrated positions of the three base station UAVs are (0 m, 0 m, 0 m), (2.428 m, 2.356 m, 0 m), and (4.671 m, 0 m, 0 m), respectively. The orange curve represents the flight trajectory of the tag UAV. As can be seen from Figure 8, the proposed cooperative positioning method calculates and updates the position of the tag UAV in real time through the multiple base station UAVs. Whether the tag UAV flies straight or turns, the method responds promptly to changes in the UAV position without obvious positioning loss or data drift. The cooperative positioning result curve clearly reveals the rectangular flight trajectory of the tag UAV, and the system realizes real-time positioning throughout the flight. These results demonstrate the performance of the proposed cooperative positioning method and validate its feasibility and effectiveness in real-world environments.
To further quantify the performance of the proposed mobile anchor trilateration method, we fix the aerial surveillance UAVs on tripods at a uniform altitude according to predefined geometric relationships. These UAVs jointly define a reference plane with its z-coordinate set to zero. Under this convention, any point below the plane has a negative z-value, and any point above has a positive z-value. This coordinate scheme clearly depicts the relative positions and spatial distribution of all UAVs, as shown in Figure 9. Then, the cooperative positioning data of the tag UAV from the three base station UAVs are recorded, while the ground truth positions of the tag UAV in the base station coordinate system are precisely measured. By comparing the cooperative positioning data with the ground truth position data, the positioning error of the cooperative positioning method can be calculated.
The quantitative experimental results of UAV cooperative positioning are presented in Table 1, which includes both the cooperative positioning data and ground truth position data of the tag UAV at 10 path points. Table 2 shows the average positioning error of the cooperative positioning results along the x, y, and z directions. From the quantitative results, it can be seen that the positioning errors of the algorithm along the x, y, and z directions are all less than 10 cm, indicating high positioning accuracy. This level of accuracy can provide precise real-time position information for the UAV’s trajectory tracking process and assist it in stably completing autonomous navigation tasks. The spatial correspondence between the UAV’s estimated and actual positions is depicted in Figure 10.
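The per-axis averages in Table 2 are consistent with the mean absolute error between the two columns of Table 1. The short script below reproduces that computation; only the first three path points are copied here for brevity.

```python
# Mean absolute error per axis between ground truth and cooperative positioning
# (values in cm; only the first three path points of Table 1 are shown here).
import numpy as np

truth = np.array([[ 68.3,  79.5, -155.4],
                  [ 63.0, 157.0, -124.1],
                  [137.5,  76.7, -147.8]])
estimate = np.array([[ 76.2,  71.7, -146.8],
                     [ 72.3, 164.4, -116.6],
                     [144.2,  67.3, -155.4]])

mae = np.mean(np.abs(estimate - truth), axis=0)
print(f"average error x/y/z (cm): {mae[0]:.2f} / {mae[1]:.2f} / {mae[2]:.2f}")
# Running this over all 10 path points yields the 8.07 / 7.29 / 8.32 cm reported in Table 2.
```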

4.2. UAV Autonomous Navigation Experiment Based on Air–Ground Collaboration

To validate the overall feasibility and performance of the designed autonomous UAV navigation system, this subsection presents an autonomous UAV navigation experiment based on air–ground collaboration in an outdoor scenario. The experiment contained the complete process of UAV global 3D environmental map modeling, cooperative positioning, path planning, and trajectory tracking. Before this, according to the image object detection requirements of the UAV environmental modeling algorithm, a sufficient environmental BEV image dataset was collected in the experimental scenario using TT UAVs to train the detection network. Then, the trained network model was deployed using the global 3D environmental data modeling algorithm, and the autonomous navigation experiment was carried out.
Firstly, during the network setup, two ground control platforms and four UAVs were connected to the same local area network. IP addresses and ports were assigned according to the signal flow diagram in Figure 2 to ensure stable data exchange between all modules. Next, the mobile anchor trilateration system was configured and activated at 50 Hz. Three aerial surveillance UAVs were manually calibrated to hover in a triangular formation at (3 m, 2.25 m, 4.5 m), (4.5 m, 4.75 m, 4.5 m), and (6 m, 2.25 m, 4.5 m).
Subsequently, Control Platform 1 controlled the aerial UAV swarm to take off and hover over the target area, streaming downward-facing video to the mapping module. Using images from two of the UAVs, this module stitches the BEV, detects objects, and builds a real-time 3D environment model. Based on the hover altitudes and camera fields of view, the combined perception coverage of the two UAVs was determined to be 9 m × 4.5 m. Accordingly, the experimental arena was defined as 9 m (length) × 4.5 m (width) × 3 m (height). For the four categories of obstacles in the scenario dataset, namely person, carton, box, and chair, their heights were estimated to be 1.8 m, 0.6 m, 0.3 m, and 0.8 m, respectively. They were rendered in distinct colors (blue, brown, yellow, green) to correspond to different semantic information.
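The class-to-height and class-to-color assignments above can be captured in a small lookup table consulted when extruding detected obstacles into the 3D model; the dictionary below simply restates the values listed in the text.

```python
# Semantic class -> estimated height (m) and rendering color, as listed in the text.
OBSTACLE_CLASSES = {
    "person": {"height": 1.8, "color": "blue"},
    "carton": {"height": 0.6, "color": "brown"},
    "box":    {"height": 0.3, "color": "yellow"},
    "chair":  {"height": 0.8, "color": "green"},
}
```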
Finally, it should be noted that the system’s coverage area is determined by the UAV swarm’s flight altitude. As the altitude increases, the monitored region expands. Moreover, the mapping accuracy within this region depends on the BEV camera’s resolution. In our experiments, the computational loads for visual feature matching, BEV stitching, object detection, and map generation were 26.05 GFLOPs, 2.4 GFLOPs, and 3.16 GFLOPs, respectively. The total GPU memory usage was approximately 1.46 GB, and end-to-end global map inference took around 126 ms on the GPU. The CPU inference time for the global path planning algorithm on a single near-ground navigation UAV ranged from 30 ms to 55 ms. Therefore, scaling up the aerial surveillance swarm will proportionally increase the overall computation requirements. Thanks to our algorithmic architecture, this growth remains linear and manageable. Real-time performance on the ground control platform is thus maintained.
Figure 11 illustrates the hovering state of the aerial surveillance UAVs in a real-world scenario, while Figure 12 illustrates the results of UAV BEV image stitching and object detection. The image stitching algorithm achieved the high-precision fusion of the visual data from the two UAV platforms, yielding seams without discernible alignment errors. Multiple obstacles randomly distributed within the panoramas exhibited well-defined contours. The system enables comprehensive global environmental observation that lays the groundwork for accurate target recognition by downstream detection modules. The result shows the effectiveness of the proposed stitching procedure.
Furthermore, the trained YOLOv5 network accurately detected obstacles such as pedestrians, chairs, and storage bins within the composite panoramic images. The detector produced precise pixel-level bounding box coordinates and size estimates for each object, thus supplying reliable prior data for subsequent 3D environmental modeling.
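As a rough illustration of this detection step, the snippet below loads custom YOLOv5 weights through torch.hub and runs them on the stitched panorama; the weights file name, image path, and confidence threshold are assumptions, and the paper’s exact training and inference configuration is not reproduced.

```python
# Hedged detection sketch: custom YOLOv5 weights via torch.hub (assumed workflow;
# the weights file name and confidence threshold are illustrative, not from the paper).
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="bev_obstacles.pt")
model.conf = 0.5                                        # confidence threshold (assumed)

panorama = cv2.imread("stitched_bev.png")[..., ::-1]    # BGR -> RGB for YOLOv5
results = model(panorama)

# Each row: xmin, ymin, xmax, ymax, confidence, class id, class name.
for _, det in results.pandas().xyxy[0].iterrows():
    print(det["name"], det["xmin"], det["ymin"], det["xmax"], det["ymax"])
```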
Figure 13 shows the constructed global 3D environmental data model based on UAV BEV image stitching and object detection, along with the UAV path planning results based on the environmental model. In the model, cubes of different sizes and colors represent various categories of obstacles in the environment. The total time for the global environmental data modeling process was 0.04 s. The white point represents the start position of the near-ground navigation UAV, and the black point represents the target, with target coordinates of (8 m, 2.3 m, 0.8 m). The black line represents the 3D path of the near-ground navigation UAV. The path planning process took a total of 0.019 s for a path length of 6.41 m, corresponding to a planning speed of roughly 337 m of path per second of computation (about 1214 km/h). The algorithm also added different-sized inflation areas to the environmental obstacles based on the TT UAV’s radius. These regions are rendered as the light-colored areas surrounding each cube in the figure.
From the environmental modeling results, it is found that our algorithm can rapidly generate a complete and accurate 3D environmental map from the UAV BEV image stitching and object detection outputs. This model precisely characterizes obstacle distributions, furnishing critical environmental priors for near-ground UAV path planning. Moreover, thanks to the efficient multi-UAV monocular vision pipeline and compact map representation, the method substantially reduces the memory footprint while maintaining extremely low global perception latency. Collectively, these features satisfy the real-time mapping demands in GPS-denied scenarios and validate the practicality of the proposed mapping algorithm.
For path planning, we leverage the constructed 3D environment model and employ an enhanced D* Lite algorithm to generate the near-ground UAV’s optimal path. Experimental results show that the algorithm computes 3D flight paths within milliseconds. By incorporating obstacle inflation zones, it maintains a constant safety buffer around obstacles, thereby improving the safety of autonomous navigation in complex scenarios.
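The inflation idea can be sketched on a coarse 3D occupancy grid as below. The grid resolution and UAV radius are assumptions, and the D* Lite planner itself is omitted; the point is only that every detected obstacle box is grown by the vehicle radius before planning, which is what keeps the safety buffer constant.

```python
# Obstacle inflation sketch on a 3D occupancy grid. Resolution and UAV radius are
# assumptions; the D* Lite planner that consumes this grid is omitted.
import numpy as np

RES = 0.1                                    # grid resolution in metres (assumed)
grid = np.zeros((90, 45, 30), dtype=bool)    # 9 m x 4.5 m x 3 m arena from the experiment

def add_obstacle(grid, origin, size, uav_radius=0.15):
    """Mark an axis-aligned obstacle box (origin and size in metres) as occupied,
    grown by the UAV radius on every side so the planner keeps a safety buffer."""
    lo = [max(int((origin[i] - uav_radius) / RES), 0) for i in range(3)]
    hi = [min(int(np.ceil((origin[i] + size[i] + uav_radius) / RES)), grid.shape[i])
          for i in range(3)]
    grid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True

# Hypothetical example: a chair with a 0.5 m x 0.5 m footprint and 0.8 m height.
add_obstacle(grid, origin=(3.0, 1.2, 0.0), size=(0.5, 0.5, 0.8))
```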
As shown in Figure 14, the near-ground UAV executes flight control along the preplanned path. After takeoff, it performs successive rotations and straight-line maneuvers to avoid multiple obstacle types, ultimately landing precisely at the target. Throughout the flight, the trajectory tracking module computes a sequence of motion commands—specifying the UAV velocity and execution duration—based on the path plan. This module ensures collision-free obstacle traversal and reliable, safe trajectory adherence.
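A minimal version of this waypoint-to-command conversion is sketched below; the cruise speed is an assumed parameter, and the actual module presumably also handles the yaw rotations shown in Figure 14.

```python
# Sketch of converting consecutive path waypoints into (velocity vector, duration)
# commands for the trajectory tracking module; the cruise speed is an assumption.
import numpy as np

CRUISE_SPEED = 0.5  # m/s (assumed)

def waypoints_to_commands(waypoints):
    commands = []
    for p0, p1 in zip(waypoints[:-1], waypoints[1:]):
        delta = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
        dist = float(np.linalg.norm(delta))
        if dist < 1e-6:
            continue                                  # skip duplicate waypoints
        velocity = CRUISE_SPEED * delta / dist        # constant-speed velocity vector (m/s)
        duration = dist / CRUISE_SPEED                # execution time for this segment (s)
        commands.append((velocity, duration))
    return commands
```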
Field measurements reveal that, despite minor control errors, the UAV’s actual endpoint position of (7.80 m, 2.25 m, 0.76 m) deviates from the target (8.00 m, 2.30 m, 0.80 m) by only 0.20 m in x, 0.05 m in y, and 0.04 m in z. The resulting sub-meter accuracy fully meets the requirements of complex navigation tasks.
Figure 15 presents the trilateration results during the trajectory tracking of the near-ground UAV. The three blue triangles denote the aerial surveillance UAVs functioning as mobile UWB anchors, continuously exchanging range measurements with the near-ground UAV. The orange curve shows the UAV’s 3D flight path. The localization module applies trilateration to the high-frequency range data and updates the UAV’s position in a real-time 3D visualization interface. Ultimately, the cooperative positioning result curve completely describes the flight trajectory of the near-ground navigation UAV in the environment, achieving the real-time positioning of its trajectory tracking process. Throughout this process, no obvious positioning data loss or drift occurred. This validates the performance of the UAV cooperative positioning method proposed in this paper and demonstrates its feasibility and effectiveness in real-world environments.
Finally, to fully assess the performance advantages of the proposed method, we compare it with representative mutual assistance and complementary systems along three primary dimensions: the map construction strategy, the localization method, and trajectory planning strategy (including timeliness). The results are summarized in Table 3.
Table 3. Performance comparison of the proposed method with representative mutual assistance and complementary systems.
| Reference | Map Construction Strategy | Localization Method | System Type | Trajectory Planning Strategy | Planning Timeliness |
|---|---|---|---|---|---|
| [10] | Local map update; data sharing | GPS-based | Mutual assistance | Distributed local planning | Real-time |
| [34] | Local map update; data sharing | GPS-denied; LiDAR SLAM | Mutual assistance | Global planning + local obstacle avoidance | Real-time |
| [38] | Global map update | GPS-denied; heterogeneous sensor feature matching | Complementary | Preplanned global path | Non-real-time |
| [24] | Local + global map update | GPS-denied; heterogeneous sensor feature matching | Complementary | Preplanned global path | Non-real-time |
| [39] | No explicit map construction | GPS-based; projected-imaging heterogeneous sensor matching | Complementary | Preplanned global path + local obstacle avoidance | Real-time |
| This work | Global map update | GPS-denied; aerial UWB mobile anchor trilateration | Complementary | Global planning | Real-time |
Mutual assistance systems benefit from homogeneous node capabilities and mature data sharing mechanisms. They excel in planning timeliness. However, their perception is limited by the single sensor perspective. They rely on local map updates and patchwork stitching to achieve global awareness and planning. Consequently, their overall efficiency remains low. Complementary systems leverage aerial surveillance to build global maps directly. However, without GPS, matching features across heterogeneous sensors incurs a heavy computational load. This load degrades the planning responsiveness.
Based on the above comparison and our comprehensive experimental results, the proposed multi-UAV swarm perception and mobile anchor trilateration method constructs a real-time, high-efficiency global 3D environment model for near-ground navigation UAVs. It also supports rapid and safe path planning in this model. Meanwhile, by employing high-frequency trilateration, the system continuously updates the UAV’s position. This delivers robust localization with controlled computational demands. The system successfully executes fully autonomous navigation in GPS-denied, unknown environments, validating the practicality and effectiveness of our air–ground collaborative perception navigation architecture.

5. Conclusions

This paper presents a heterogeneous UAV system built on the air–ground cooperative perception framework. It underscores the value of cross-platform collaboration for autonomous UAV navigation. The framework fuses multi-source, multi-modal sensor data from aerial surveillance UAVs to generate a real-time 3D environmental model. The model then drives fast, collision-free trajectory planning for the near-ground navigation UAV, enabling fully autonomous low-altitude flight.
Compared to single-mode solutions such as visual–inertial SLAM or fixed-anchor UWB, our approach achieves a higher map update rate, sub-0.1-m high-frequency localization accuracy, and enhanced environmental robustness. These improvements tightly integrate with our enhanced D* Lite planner. As a result, the framework senses both the scene state and UAV position in real time and rapidly adjusts the flight trajectories. This cooperative paradigm extends to other multi-agent missions, such as disaster response and precision agriculture. In these domains, environmental uncertainty and task diversity are the norm.
It should be noted that our method is specifically designed for GPS-denied, unknown operational environments. By using the aerial surveillance UAV’s BEV, the near-ground UAV receives a real-time environment map and accelerates global path planning. The current framework handles obstacles via continuous map updates and global replanning. This approach effectively manages static obstacle shifts and slow-moving dynamic obstacles. However, severe line-of-sight occlusions—such as under dense forest canopies—or constrained flight altitudes in narrow indoor spaces can limit the BEV’s coverage. Such conditions may cause the loss of fine obstacle details and delayed updates of moving obstacles or dynamic targets.
Moreover, constrained by the global planner’s update rate, the system must still integrate a local path planning module to invoke immediate avoidance when facing rapidly moving dynamic obstacles. This “global  +  local” hybrid scheme inevitably degrades the overall autonomy and efficiency.
To overcome the global planning latency bottleneck, future work will introduce deep reinforcement learning (DRL). Owing to the consistent resolution and scale of the environment model produced at each map update, we will develop an end-to-end global map to global planner module via DRL. This module will learn to make planning decisions directly based on the current environmental representation. It aims to reduce the global path planning latency to below 20 ms, thereby enhancing the adaptability while preserving the navigation efficiency. This will support dynamic obstacle scenarios and expanded requirements for near-ground UAV navigation.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/drones9060442/s1, Video S1: Experimental Video.mp4.

Author Contributions

Conceptualization, P.Y. and J.X.; methodology, P.Y. and J.X.; software and validation, Y.H. and J.Z.; resources, J.X. and W.C.; writing—original draft preparation, P.Y.; writing—review and editing, J.X., W.C., C.Z. and M.S.; supervision, J.X.; funding acquisition, J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China, Grant Nos. 62473311 and 61873200.

Data Availability Statement

The original contributions presented in this study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Wei Chen was employed by Xi’an Xiangteng Micro-Electronic Technology Co., Ltd. Author Mao Shan was employed by the University of Sydney. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. AlMahamid, F.; Grolinger, K. Autonomous Unmanned Aerial Vehicle Navigation Using Reinforcement Learning: A Systematic Review. Eng. Appl. Artif. Intell. 2022, 115, 105321. [Google Scholar] [CrossRef]
  2. Soria, E.; Schiano, F.; Floreano, D. Predictive Control of Aerial Swarms in Cluttered Environments. Nat. Mach. Intell. 2021, 3, 545–554. [Google Scholar] [CrossRef]
  3. Jarraya, I.; Al-Batati, A.; Kadri, M.B.; Abdelkader, M.; Ammar, A.; Boulila, W.; Koubaa, A. GNSS-Denied Unmanned Aerial Vehicle Navigation: Analyzing Computational Complexity, Sensor Fusion, and Localization Methodologies. Satell. Navig. 2025, 6, 9. [Google Scholar] [CrossRef]
  4. Liu, X.; Nardari, G.V.; Cladera Ojeda, F.; Tao, Y.; Zhou, A.; Donnelly, T.; Qu, C.; Chen, S.W.; Romero, R.A.F.; Taylor, C.J.; et al. Large-Scale Autonomous Flight with Real-Time Semantic SLAM Under Dense Forest Canopy. IEEE Robot. Autom. Lett. 2022, 7, 5512–5519. [Google Scholar] [CrossRef]
  5. Al-Kaff, A.; Martin, D.; Garcia, F.; de la Escalera, A.; Armingol, J.M. Survey of Computer Vision Algorithms and Applications for Unmanned Aerial Vehicles. Expert Syst. Appl. 2018, 92, 447–463. [Google Scholar] [CrossRef]
  6. Rasheed, A.A.A.; Abdullah, M.N.; Al-Araji, A.S. A Review of Multi-Agent Mobile Robot Systems Applications. Int. J. Electr. Comput. Eng. 2022, 12, 3517–3529. [Google Scholar] [CrossRef]
  7. Kaufmann, E.; Bauersfeld, L.; Loquercio, A.; Müller, M.; Koltun, V.; Scaramuzza, D. Champion-Level Drone Racing Using Deep Reinforcement Learning. Nature 2023, 620, 982–987. [Google Scholar] [CrossRef] [PubMed]
  8. Shen, H.; Zong, Q.; Tian, B.; Lu, H. Voxel-Based Localization and Mapping for Multirobot System in GPS-Denied Environments. IEEE Trans. Ind. Electron. 2022, 69, 10333–10342. [Google Scholar] [CrossRef]
  9. Xue, Y.; Chen, W. A UAV Navigation Approach Based on Deep Reinforcement Learning in Large Cluttered 3D Environments. IEEE Trans. Veh. Technol. 2023, 72, 3001–3014. [Google Scholar] [CrossRef]
  10. Ye, Z.; Wang, K.; Chen, Y.; Jiang, X.; Song, G. Multi-UAV Navigation for Partially Observable Communication Coverage by Graph Reinforcement Learning. IEEE Trans. Mob. Comput. 2023, 22, 4056–4069. [Google Scholar] [CrossRef]
  11. Wu, K.; Wang, H.; Esfahani, M.A.; Yuan, S. Learn to Navigate Autonomously Through Deep Reinforcement Learning. IEEE Trans. Ind. Electron. 2022, 69, 5342–5352. [Google Scholar] [CrossRef]
  12. Chen, Q.; Xue, B.; Zhang, M. Genetic Programming for Instance Transfer Learning in Symbolic Regression. IEEE Trans. Cybern. 2022, 52, 25–38. [Google Scholar] [CrossRef] [PubMed]
  13. Shahzad, M.M.; Saeed, Z.; Akhtar, A.; Munawar, H.; Yousaf, M.H.; Baloach, N.K.; Hussain, F. A Review of Swarm Robotics in a NutShell. Drones 2023, 7, 269. [Google Scholar] [CrossRef]
  14. Jin, Y.; Wei, S.; Yuan, J.; Zhang, X. Hierarchical and Stable Multiagent Reinforcement Learning for Cooperative Navigation Control. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 90–103. [Google Scholar] [CrossRef]
  15. Zhou, X.; Wen, X.; Wang, Z.; Gao, Y.; Li, H.; Wang, Q.; Yang, T.; Lu, H.; Cao, Y.; Xu, C.; et al. Swarm of Micro Flying Robots in the Wild. Sci. Robot. 2022, 7, eabm5954. [Google Scholar] [CrossRef]
  16. Berlinger, F.; Gauci, M.; Nagpal, R. Implicit Coordination for 3D Underwater Collective Behaviors in a Fish-Inspired Robot Swarm. Sci. Robot. 2021, 6, eabd8668. [Google Scholar] [CrossRef]
  17. Fu, J.; Wen, G.; Yu, X.; Wu, Z.-G. Distributed Formation Navigation of Constrained Second-Order Multiagent Systems with Collision Avoidance and Connectivity Maintenance. IEEE Trans. Cybern. 2022, 52, 2149–2162. [Google Scholar] [CrossRef]
  18. Xue, Y.; Chen, W. Multi-Agent Deep Reinforcement Learning for UAVs Navigation in Unknown Complex Environment. IEEE Trans. Intell. Veh. 2024, 9, 2290–2303. [Google Scholar] [CrossRef]
  19. Fan, T.; Long, P.; Liu, W.; Pan, J. Distributed Multi-Robot Collision Avoidance via Deep Reinforcement Learning for Navigation in Complex Scenarios. Int. J. Robot. Res. 2020, 39, 856–892. [Google Scholar] [CrossRef]
  20. Miller, I.D.; Cladera, F.; Smith, T.; Taylor, C.J.; Kumar, V. Stronger Together: Air-Ground Robotic Collaboration Using Semantics. IEEE Robot. Autom. Lett. 2022, 7, 9643–9650. [Google Scholar] [CrossRef]
  21. Miller, I.D.; Cladera, F.; Smith, T.; Taylor, C.J.; Kumar, V. Air-Ground Collaboration with SPOMP: Semantic Panoramic online Mapping and Planning. IEEE Trans. Field Robot. 2024, 1, 93–112. [Google Scholar] [CrossRef]
  22. Mayya, S.; D’Antonio, D.S.; Saldaña, D.; Kumar, V. Resilient Task Allocation in Heterogeneous Multi-Robot Systems. IEEE Robot. Autom. Lett. 2021, 6, 1327–1334. [Google Scholar] [CrossRef]
  23. Chandran, I.; Vipin, K. Network Analysis of Decentralized Fault-Tolerant UAV Swarm Coordination in Critical Missions. Drone Syst. Appl. 2024, 12, 1–15. [Google Scholar] [CrossRef]
  24. Zhang, J.; Liu, R.; Yin, K.; Wang, Z.; Gui, M.; Chen, S. Intelligent Collaborative Localization Among Air-Ground Robots for Industrial Environment Perception. IEEE Trans. Ind. Electron. 2019, 66, 9673–9681. [Google Scholar] [CrossRef]
  25. Chen, L.; Wang, Y.; Miao, Z.; Mo, Y.; Feng, M.; Zhou, Z.; Wang, H. Transformer-Based Imitative Reinforcement Learning for Multirobot Path Planning. IEEE Trans. Ind. Inform. 2023, 19, 10233–10243. [Google Scholar] [CrossRef]
  26. Jiang, H.; Esfahani, M.A.; Wu, K.; Wan, K.-W.; Heng, K.-K.; Wang, H.; Jiang, X. iTD3-CLN: Learn to Navigate in Dynamic Scene Through Deep Reinforcement Learning. Neurocomputing 2022, 503, 118–128. [Google Scholar] [CrossRef]
  27. Jin, Z.; Wu, J.; Liu, A.; Zhang, W.A.; Yu, L. Policy-Based Deep Reinforcement Learning for Visual Servoing Control of Mobile Robots with Visibility Constraints. IEEE Trans. Ind. Electron. 2022, 69, 1898–1908. [Google Scholar] [CrossRef]
  28. Wang, D.; Deng, H. Multirobot Coordination with Deep Reinforcement Learning in Complex Environments. Expert Syst. Appl. 2021, 180, 115128. [Google Scholar] [CrossRef]
  29. Dong, L.; He, Z.C.; Song, C.W.; Sun, C.Y. A Review of Mobile Robot Motion Planning Methods: From Classical Motion Planning Workflows to Reinforcement Learning-Based Architectures. J. Syst. Eng. Electron. 2023, 34, 439–459. [Google Scholar] [CrossRef]
  30. Puente-Castro, A.; Rivero, D.; Pedrosa, E.; Pereira, A.; Lau, N.; Fernandez-Blanco, E. Q-Learning Based System for Path Planning with Unmanned Aerial Vehicles Swarms in Obstacle Environments. Expert Syst. Appl. 2024, 235, 121240. [Google Scholar] [CrossRef]
  31. Li, Y.; Aghvami, A.H.; Dong, D. Intelligent Trajectory Planning in UAV-Mounted Wireless Networks: A Quantum-Inspired Reinforcement Learning Perspective. IEEE Wireless Commun. Lett. 2021, 10, 1994–1998. [Google Scholar] [CrossRef]
  32. Li, Y.; Aghvami, A.H.; Dong, D. Path Planning for Cellular-Connected UAV: A DRL Solution with Quantum-Inspired Experience Replay. IEEE Trans. Wirel. Commun. 2022, 21, 7897–7912. [Google Scholar] [CrossRef]
  33. Munasinghe, I.; Perera, A.; Deo, R.C. A Comprehensive Review of UAV-UGV Collaboration: Advancements and Challenges. J. Sens. Actuator Netw. 2024, 13, 81. [Google Scholar] [CrossRef]
  34. Tang, Y.; Hu, Y.; Cui, J.; Liao, F.; Lao, M.; Lin, F.; Teo, R.S.H. Vision-Aided Multi-UAV Autonomous Flocking in GPS-Denied Environment. IEEE Trans. Ind. Electron. 2019, 66, 616–626. [Google Scholar] [CrossRef]
  35. Shi, C.; Xiong, Z.; Chen, M.; Wang, R.; Xiong, J. Cooperative Navigation for Heterogeneous Air-Ground Vehicles Based on Interoperation Strategy. Remote Sens. 2023, 15, 2006. [Google Scholar] [CrossRef]
  36. DeTone, D.; Malisiewicz, T.; Rabinovich, A. Superpoint: Self-Supervised Interest Point Detection and Description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar] [CrossRef]
  37. Sarlin, P.E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. Superglue: Learning Feature Matching with Graph Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar] [CrossRef]
  38. He, J.; Zhou, Y.; Huang, L.; Kong, Y.; Cheng, H. Ground and Aerial Collaborative Mapping in Urban Environments. IEEE Robot. Autom. Lett. 2020, 6, 95–102. [Google Scholar] [CrossRef]
  39. Liu, D.; Bao, W.; Zhu, X.; Fei, B.; Xiao, Z.; Men, T. Vision-Aware Air-Ground Cooperative Target Localization for UAV and UGV. Aerospace Sci. Technol. 2022, 124, 107525. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the proposed UAV autonomous navigation system based on air–ground collaboration.
Figure 2. Overall block diagram of the autonomous UAV navigation system based on air–ground collaboration.
Figure 3. Flowchart of the environmental modeling algorithm.
Figure 4. Workflow of the image stitching algorithm.
Figure 5. Schematic diagram of the environmental model and coordinate calculation. (a) Real-world environment illustration. (b) Image stitching and object detection illustration.
Figure 6. Experimental UAV and its associated equipment.
Figure 7. Experimental scenario of UAV cooperative positioning.
Figure 8. Experimental results of UAV cooperative positioning. (a) Flight process of the tag UAV. (b) The calculated position information (trajectory) of the tag UAV during the flight process.
Figure 9. Visualization of the coordinate system of the experimental scenario.
Figure 10. The visualization of estimated versus actual UAV positions.
Figure 11. Aerial surveillance UAV swarm hovering above the environment.
Figure 12. Results of UAV image stitching and object detection.
Figure 13. Results of global 3D environmental data modeling and path planning.
Figure 14. Flight trajectory tracking process of the near-ground navigation UAV. (a) The UAV takes off. (b) The UAV rotates clockwise. (c) The UAV moves straight. (d) The UAV rotates counterclockwise. (e) The UAV moves straight. (f) The UAV continues straight ahead. (g) The UAV arrives at the target point. (h) The UAV makes a landing.
Figure 15. Cooperative positioning results during the flight trajectory tracking of the near-ground navigation UAV. (a) The UAV adjusts its heading orientation and moves straight after takeoff. (b) The UAV moves straight after rotating counterclockwise. (c) The UAV continues straight ahead. (d) The UAV arrives at the target point.
Table 1. Quantitative experimental results of UAV cooperative positioning.
| UAV Path Point | Ground Truth Position Data/cm | Cooperative Positioning Data/cm |
|---|---|---|
| 1 | (68.3, 79.5, −155.4) | (76.2, 71.7, −146.8) |
| 2 | (63.0, 157.0, −124.1) | (72.3, 164.4, −116.6) |
| 3 | (137.5, 76.7, −147.8) | (144.2, 67.3, −155.4) |
| 4 | (163.4, 153.2, −131.7) | (173.4, 160.3, −139.8) |
| 5 | (216.5, 63.3, −140.2) | (221.7, 69.4, −132.2) |
| 6 | (236.8, 166.4, −140.2) | (246.4, 157.6, −131.7) |
| 7 | (329.4, 69.7, −131.7) | (319.3, 75.6, −141.8) |
| 8 | (341.5, 168.9, −147.8) | (335.8, 162.5, −137.6) |
| 9 | (389.3, 63.5, −124.1) | (398.3, 71.2, −117.3) |
| 10 | (382.1, 162.6, −155.4) | (389.3, 168.9, −147.6) |
Table 2. Average positioning errors of the cooperative positioning results.
| | Average Positioning Error Along X-Axis | Average Positioning Error Along Y-Axis | Average Positioning Error Along Z-Axis |
|---|---|---|---|
| Average positioning error/cm | 8.07 | 7.29 | 8.32 |
