Review

Holistic Review of UAV-Centric Situational Awareness: Applications, Limitations, and Algorithmic Challenges

1 College of Computing and IT, UDST University, Doha Z68, Qatar
2 School of Engineering and Energy, Murdoch University, Perth, WA 6150, Australia
3 Department of Computer Science, University-South Tehran Branch, Tehran 1675914337, Iran
* Author to whom correspondence should be addressed.
Robotics 2024, 13(8), 117; https://doi.org/10.3390/robotics13080117
Submission received: 29 May 2024 / Revised: 24 July 2024 / Accepted: 25 July 2024 / Published: 29 July 2024
(This article belongs to the Special Issue UAV Systems and Swarm Robotics)

Abstract

This paper presents a comprehensive survey of UAV-centric situational awareness (SA), delineating its applications, limitations, and underlying algorithmic challenges. It highlights the pivotal role of advanced algorithmic and strategic insights, including sensor integration, robust communication frameworks, and sophisticated data processing methodologies. The paper critically analyzes multifaceted challenges such as real-time data processing demands, adaptability in dynamic environments, and complexities introduced by advanced AI and machine learning techniques. Key contributions include a detailed exploration of UAV-centric SA’s transformative potential in industries such as precision agriculture, disaster management, and urban infrastructure monitoring, supported by case studies. In addition, the paper delves into algorithmic approaches for path planning and control, as well as strategies for multi-agent cooperative SA, addressing their respective challenges and future directions. Moreover, this paper discusses forthcoming technological advancements, such as energy-efficient AI solutions, aimed at overcoming current limitations. This holistic review provides valuable insights into UAV-centric SA, establishing a foundation for future research and practical applications in this domain.

1. Introduction

1.1. Background and Motivation

The emergence of UAVs has brought promising advances in capturing SA for diverse applications. UAVs offer many advantages, including the ability to perform high-risk flights, navigate precisely, and capture high-quality images and videos using a variety of sensors, such as multispectral, hyperspectral, thermal, LiDAR, gas, or radioactivity sensors. Their versatility in deployment, mobility, and capacity for real-time data acquisition, combined with the latest advancements in onboard processors, make UAVs an excellent choice for SA in various applications.
In the vehicular context, SA refers to the real-time perception, comprehension, and projection of the surrounding environment by the vehicle’s systems. It involves fusing and interpreting multi-sensor data to generate an accurate, dynamic representation of relevant factors, including obstacles, traffic, terrain, and contextual information, enabling informed decision-making and proactive responses that ensure safe and efficient operation.
SA facilitates automatic identification of, and reasoning on, obtained knowledge in a complex and dynamic environment so that the current state of the environment can be understood and its future state anticipated. This is essential for an autonomous vehicle, such as a UAV, to operate successfully amid continuously changing situations, as SA keeps the vehicle focused only on relevant and meaningful information [1]. UAVs can operate as mobile agents to enable continuous SA of a vast area and penetrate constricted spaces out of reach of humans, thereby saving considerable time and effort. Consistently analyzing the information and data captured by distributed sensors throughout operations, including those onboard UAVs, significantly enhances comprehension of the broader situation. However, the substantial volume of generated data requires extensive processes for transfer, handling, and storage. Meanwhile, UAVs must contend with environmental challenges amid limitations such as energy scarcity and unpredictable surroundings [2,3].
Despite significant advancements in edge computing technologies, achieving real-time SA via UAVs remains a crucial challenge, as current solutions struggle to meet the demanding operational requisites. The primary challenge revolves around the limitations in onboard processing capabilities, computational efficiency, and the constrained resource envelope of UAVs, which particularly concerns computing prowess, battery capacity, and endurance. These constraints significantly hinder the integration of machine learning (ML) methodologies crucial for adaptive and robust situational awareness. Moreover, ensuring precise positioning and navigation accuracy while optimizing data acquisition, fusion, memory utilization, and transmission efficiency presents multifaceted complexities. Overcoming these challenges is pivotal in enhancing the efficacy and dependability of UAV-based online SA.

1.2. Objectives and Scope of the Review

SA is the perception of the elements in the environment within a volume of time and space, and it lays a solid basis for UAV networks to implement a wide variety of missions. Another aspect is vehicle-related SA, which ensures that the UAV operates safely, efficiently, and successfully given the environmental and vehicular constraints. Accurate SA is an essential challenge in unleashing the potential of UAVs for practical applications. This article strives to offer advanced techniques and creative ideas for addressing challenges in UAV-based SA. Rather than concentrating solely on particular operations or restricted settings, we take a more comprehensive view to cover a wider range of applications. In this paper, we delve into the concept of SA in relation to UAVs by conducting a thorough analysis of the influential academic literature highlighting UAVs’ potential as powerful tools for SA in critical situations. This review addresses the following aspects:
  • How SA can enhance the overall autonomy of UAVs in different real-world missions;
  • How SA can contribute to persistent and resilient operations;
  • The key barriers and the challenges from both software and hardware parts;
  • Algorithmic and strategic insights into UAV-centric SA;
  • Various perspectives on the effective use of SA in conjunction with UAVs;
  • Insights into ongoing and potential future research directions.
Accordingly, in Section 2, we discuss the fundamentals of UAV-centric SA from both mission-related and vehicle-related perspectives. In Section 3, we categorize UAVs according to their respective roles and determine the most suitable role distribution for different types of UAVs, followed by a particular focus on the rotary UAVs most commonly applied for SA in Section 3.1. Furthermore, this study provides a comprehensive review of the inevitable challenges and limitations associated with rotary UAVs in time-critical operations in Section 3.2. We thoroughly investigate the diverse applications of UAVs for SA using wireless sensor networks (WSN) and distributed sensors, high-calibrated cameras, and LiDARs in Section 4, followed by a discussion of the advantages and drawbacks of each method in the given applications. Section 5 covers various cutting-edge algorithms, including supervised data-driven ML, both deterministic and nondeterministic, and RL-based approaches, along with their derivatives and variations developed to tackle challenges associated with UAV restrictions. Beyond the algorithmic perspective, Section 6 covers strategic insights into UAV-centric SA, including cooperative awareness, object detection/recognition/tracking, and SA-oriented navigation. Through this, we aim to showcase various algorithms and their practical uses in multiple situations, providing readers with helpful insights and motivation for their problem-solving endeavors. Eventually, Section 7 concludes the findings of this research, summarizes the key challenges and potentials, and discusses future trends and anticipated technological developments on the horizon. Through this review, our ultimate goal is to encourage further exploration and innovative research in this field, enhance mission efficiency, and positively impact the community.

2. Fundamentals of UAV-Centric Situational Awareness

Historically, UAV-centric SA has progressed from basic video feeds and telemetry data to more advanced systems that incorporate AI, ML, and advanced sensor technologies. This refers to the advancements made in delivering real-time intelligence and information to UAVs in autonomous systems or to operators in semi-autonomous systems. Accordingly, this has led to improved target detection, tracking, and identification capabilities, as well as the ability to operate in complex and contested environments.
The concept of SA, as proposed by Mica Endsley in 1995, introduced a three-layer model grounded in cognitive principles, shown in Figure 1 [1]. SA involves perceiving environmental information within a defined time and space, comprehending its contextual meaning, and estimating the future states of involved elements. When considering SA in the realm of UAVs, it presents two distinctive perspectives. Distinguishing between these perspectives is crucial as they lead to divergent areas of research.
As depicted in Figure 1, at the first level of SA (perception), the attributes and dynamics of critical environmental factors will be perceived. This process will be followed by the second level where the perceived information will be comprehended through the synthesis of disjointed critical factors and understanding what those factors mean in relation to the mission’s tasks. Eventually, the third level of SA involves projecting the next status of the entities in the environment that enables the system to act and respond to the raised situation promptly. The vehicle-related SA functions are classified as follows.
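The three levels above can be viewed as a simple processing pipeline. The following sketch is purely illustrative, under our own assumptions: the class and function names are hypothetical, and the constant-velocity projection stands in for whatever estimator a real system would use.

```python
from dataclasses import dataclass

@dataclass
class Track:
    position: tuple  # (x, y) in metres
    velocity: tuple  # (vx, vy) in m/s
    label: str       # e.g. "obstacle", "target"

def perceive(raw_detections):
    """Level 1 (perception): turn raw sensor detections into attributed tracks."""
    return [Track(d["pos"], d["vel"], d["label"]) for d in raw_detections]

def comprehend(tracks, mission_goal):
    """Level 2 (comprehension): keep only the elements relevant to the mission."""
    return [t for t in tracks if t.label == mission_goal["relevant_label"]]

def project(tracks, horizon_s):
    """Level 3 (projection): estimate future positions, here with a
    constant-velocity model over a given horizon in seconds."""
    return [
        (t.position[0] + t.velocity[0] * horizon_s,
         t.position[1] + t.velocity[1] * horizon_s)
        for t in tracks
    ]
```

A mission then chains the three stages: `project(comprehend(perceive(detections), goal), horizon)`, mirroring how each SA level consumes the output of the one below it.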

2.1. UAVs for Capturing SA

The first perspective involves utilizing UAVs for SA directly in mission scenarios. The emergence of UAVs has brought a promising advancement to capturing SA for diverse applications, which has garnered significant attention in disaster management, SAR, surveillance [4,5,6,7,8], monitoring aging infrastructures [9,10,11], marine operations [12], flood management systems [13,14,15], surveying underground mines [11,16,17], bushfire incidents, and avalanches [18], etc. A core requirement for the majority of these applications is the real-time awareness of the environment to facilitate early detection of the incident, alert the operator promptly about the detected incident in the area of interest, and respond in a timely manner. While promising, current UAV technology and data processing models, especially ML methods, struggle to meet the specific demands of real-time and continuous SA. UAVs used for SA face constraints, notably in battery energy and storage memory, resulting in limited computing capacity compared to high-performance systems. Moreover, ML methods, although powerful for data processing, demand substantial energy and computational resources, posing challenges for their implementation in UAVs. This convergence of UAV limitations and the resource-intensive nature of ML methods highlights a critical question: How can continuous situational awareness be achieved when using resource-constrained UAVs? Bridging this gap necessitates innovative approaches in technology and methodology to overcome the inherent limitations and achieve effective SA in UAV operations. Figure 2 represents the process of capturing SA and EA within a UAV.

2.2. SA within UAV Operations

The second perspective involves vehicle-related SA, encompassing vital functions like self-awareness (internal awareness), fault awareness, cooperation awareness, collision awareness, and weather awareness. These functions are pivotal for ensuring the successful operation of UAVs, enabling the recognition of unforeseen situations and the identification of environmental changes or dynamic factors essential for executing specific operations within defined spatial and temporal contexts [19].
During pre-flight procedures, such information is gathered to chart suitable flight paths and enable prompt responses to unforeseen challenges that may arise during operations. Effective handling of unstructured and unexpected occurrences hinges on the system’s environmental perception. Achieving this demands concurrent processing of object data and their characteristics, combining pre-attentive knowledge in the working memory with incoming information for continual updates on the evolving situation. This updated insight is crucial for projecting potential future events and aiding real-time decision-making processes in UAV operations. This interplay between SA and the decision-making process, along with the environment’s dynamic state, is depicted in Figure 3, illustrating the significant impact of SA on enhancing the UAV’s decision-making capabilities.

2.2.1. Weather Awareness

The operation of UAVs often takes place at lower altitudes, exposing them to more complex weather conditions compared to higher altitudes. The vulnerability of smaller UAVs to sudden weather changes due to their size and weight underscores the critical importance of weather awareness in all outdoor UAV operations. This awareness includes considerations such as climate zones, temperature fluctuations, wind conditions, and turbulent dynamics. In UAV operations, anticipating and responding to weather conditions is essential for determining suitable airspaces and timeframes before encountering adverse weather. One of the prominent challenges lies in addressing the need for safer flights during harsh weather conditions. For instance, researchers have employed deep CNNs to estimate 3D wind flow within a short timeframe, approximately 120 s, enabling safer UAV flights [19]. Such technologies contribute to enhancing the understanding and prediction of weather patterns, allowing for informed decision-making during flight operations. Researchers, like Sabri et al., have emphasized the need to determine safer flight paths for UAVs in less than thirty seconds [20].

2.2.2. Collision Awareness

Collision awareness stands as a crucial facet of UAV operations, especially in scenarios where airspace congestion or dynamic environments increase the risk of collisions. Ensuring collision-free flights is essential for the safety and efficacy of UAV missions. Advanced collision avoidance systems play a pivotal role in detecting and mitigating potential collisions, preventing accidents and damage [21,22]. In UAV operations, collision awareness involves the integration of sensors, such as LiDAR (light detection and ranging), radar, or cameras, to perceive obstacles or potential collisions in the UAV’s flight path. These sensors enable real-time detection of nearby objects or hazards, allowing for timely course corrections or evasive actions. Additionally, technologies like computer vision and ML algorithms aid in object recognition and decision-making, contributing to more accurate collision avoidance strategies. To maintain comprehensive awareness of the surrounding environment, it is important to identify individual objects as well as the higher-order relations between them in order to assess the level of threat and uncertainty posed by each obstacle. By considering factors such as dynamic deployment, behavior, relative distance, relative angle, and relative velocity with respect to the vehicle, it is possible to classify and recognize groups of obstacles. To appraise potential threats effectively, accurate data fusion and SA should provide answers to questions concerning the following subjects:
  • The object’s type, size, and characteristics;
  • Identification and elimination of the sensor redundancy;
  • The objects’ motion pattern and velocity (moving toward/moving idle);
  • The objects’ predicted behavior one step ahead;
  • The level of danger posed by a particular object;
  • The appropriate reaction when facing a specific object.
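As a minimal sketch of how the relative distance and relative velocity factors listed above might feed a threat screen, consider the following; the thresholds (`near_m`, `fast_mps`) and the three-way classification are our own illustrative assumptions, not values from any cited system.

```python
import math

def closing_speed(rel_pos, rel_vel):
    """Component of relative velocity along the line of sight (m/s).
    Positive means the obstacle is approaching the vehicle."""
    dist = math.hypot(*rel_pos)
    if dist == 0:
        return float("inf")
    # Negative projection of relative velocity onto the line of sight.
    return -(rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / dist

def threat_level(rel_pos, rel_vel, near_m=50.0, fast_mps=5.0):
    """Classify an obstacle by range and closing speed (illustrative rules)."""
    dist = math.hypot(*rel_pos)
    approach = closing_speed(rel_pos, rel_vel)
    if approach <= 0:
        return "low"      # moving away or idle relative to the vehicle
    if dist < near_m and approach > fast_mps:
        return "high"     # close and closing fast: evasive action warranted
    return "medium"       # approaching, but with time to replan
```

A real pipeline would fuse several such cues (size, type, behavior pattern) rather than range and closing speed alone, but the structure of the decision is the same.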
Researchers have made significant strides in collision avoidance systems for UAVs. Studies explore diverse approaches, such as sensor fusion techniques and predictive modeling, to enhance collision awareness and avoidance capabilities. For instance, combining data from multiple sensors and employing predictive algorithms allows UAVs to anticipate potential collision scenarios and take proactive measures to avoid them [23,24,25]. However, challenges persist, particularly in scenarios with rapidly changing environments or unpredictable obstacles. Maintaining robust collision awareness systems capable of adapting to dynamic conditions remains an ongoing research focus.

2.2.3. Self-Awareness (Internal Failure Awareness)

Self-awareness, often referred to as internal failure awareness in the domain of UAV operations, encompasses the UAV’s ability to monitor and assess its own system health and functionality in real-time. The concept of self-awareness involves continuous monitoring of the internal components, systems, and functionalities of the UAV, including sensors, actuators, propulsion systems, and onboard computing units. By constantly evaluating its own status, a UAV equipped with robust self-awareness capabilities can detect and respond to internal failures, anomalies, or malfunctions promptly. This proactive monitoring enables the UAV to take corrective actions, such as adjusting flight parameters or altering operational modes, to mitigate potential failures and ensure mission continuity [26,27].
Depending on the severity of the issue, appropriate action can be taken to ensure safety and mission success. The internal fault tolerance system is usually designed to handle fault occurrences. A common architecture includes three levels. The first level is fault detection, which oversees the availability of the vehicle’s hardware and software. The second level identifies the fault’s severity, impact, and degree of tolerability. The final level is fault accommodation: taking appropriate action by either safely aborting the mission in the case of unbearable faults or, in the event of bearable malfunctions, reconfiguring the vehicle control architecture to continue the mission with new restrictions using alternative sensors or actuators. In fully autonomous missions, onboard sensors and subsystems are constantly monitored to ensure proper functioning, which allows for setting timely and effective recovery plans in the event of any failures. To ensure internal failure awareness, a supervised ML model is recommended [20], where the model is trained on historical data to predict future failures such as sensor malfunction, actuator failure, telemetry issues, or connectivity problems in the UAV [28].
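The three-level architecture (detection, identification, accommodation) can be sketched as follows; the health-score scale, the severity thresholds, and the action names are illustrative assumptions rather than any published design.

```python
def detect_faults(health):
    """Level 1: flag components whose health score (0.0-1.0, assumed scale)
    falls below nominal."""
    return {name: score for name, score in health.items() if score < 0.9}

def identify_severity(faults):
    """Level 2: classify each detected fault as bearable or unbearable
    (threshold of 0.3 is an illustrative assumption)."""
    return {name: ("unbearable" if score < 0.3 else "bearable")
            for name, score in faults.items()}

def accommodate(severities):
    """Level 3: abort on any unbearable fault; otherwise reconfigure the
    control architecture and continue under new restrictions."""
    if "unbearable" in severities.values():
        return "abort_mission"
    if severities:
        return "reconfigure_and_continue"
    return "nominal"
```

For example, a degraded but functional IMU (`{"imu": 0.5}`) would be flagged as bearable and lead to reconfiguration, while a failed motor (`{"motor2": 0.1}`) would trigger a safe abort.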

2.2.4. Cooperation Awareness (Cognitive Load Awareness)

Cooperation awareness, sometimes referred to as cognitive load awareness within the scope of UAV operations, pertains to the UAV’s capability to understand, manage, and distribute cognitive workload during cooperative missions or within a team of multiple UAVs. This aspect of SA is essential for optimizing decision-making, task allocation, and communication among UAVs in collaborative scenarios. In multi-UAV missions or cooperative operations, each UAV must possess a heightened awareness of its own tasks, operational constraints, and available resources, while concurrently understanding the roles, actions, and intents of other collaborating UAVs. Effective cooperation awareness facilitates coordination, reduces redundancy, and enhances overall mission efficiency.
Achieving cooperation awareness involves intricate coordination protocols, communication frameworks, and distributed decision-making algorithms. These systems enable UAVs to share information, synchronize actions, and allocate tasks based on collective objectives and individual capabilities. Advanced technologies, including swarm intelligence, decentralized control algorithms, and cooperative learning models, play a pivotal role in fostering effective cooperation awareness among UAVs. For example, cooperative fault-tolerant mission planners have been designed to allow for parallel operations, collective resource sharing, and adaptive re-planning in the event of a fault, as thoroughly discussed in [29,30]. In [31], a distributed cooperation method for UAV swarms using SA consensus, including situation perception, comprehension, and information processing mechanisms, was proposed. The researchers analyzed the characteristics of swarm cooperative engagement and developed a method to use in complex and antagonistic mission environments. Considering the UAVs as intelligent decision-makers, the authors established the corresponding information processing mechanisms to handle distributed cooperation. The proposed method was demonstrated to be practical and effective through theoretical analysis and a case study.

3. Classification of UAVs

This section introduces the most common types of UAVs (drones) used for SA, discussing their advantages, disadvantages, and structural differences [32]. Generally, UAVs can be classified based on body design, kinematics, and DoF, including multirotor, unmanned helicopters (UH), and fixed-wing UAVs, as shown in Figure 4.
  • Unmanned Helicopters (UH): A typical UH has a top-mounted motor with a large propeller for lift and a tail motor to balance torque. UHs are agile and suitable for complex environments but have high maintenance costs due to their complex structure. They are ideal for surveillance and tracking missions (e.g., traffic monitoring, border surveillance) due to their hovering capability and maneuverability [38,39].
  • Fixed-Wing UAVs: These UAVs share a structure similar to commercial airplanes, generating lift through pressure differences on their surfaces. They are energy efficient, suitable for time-critical and large-scale applications, but require large runways and have limited maneuverability at high speeds [40,41].
  • Multirotor UAVs: The most commonly used UAV for SA, multirotors generate lift through multiple high-speed rotating motors and propellers. They offer high maneuverability, quick response, VTOL capabilities, and are cost effective. However, they have high battery consumption and poor endurance [42,43].
Table 1 summarizes the advantages, disadvantages, and dedicated use cases of UHs, fixed-wing, and multirotor UAVs. The focus of this review paper is on multirotor UAVs, with a critical analysis of their capabilities, limitations, and applications provided in Section 3.1.

3.1. Multirotor UAV Classes for SA and Environmental Assessment (EA) Applications

One of the most significant advantages of multirotor UAVs is their mechanical simplicity and flexibility, enabling them to take off vertically and fly in any direction without needing to adjust the propellers’ pitch to exert momentum or control forces. Multirotor UAVs, particularly quadcopters, are relatively cheap and widely available tools with low maintenance costs compared to manned aircraft and other aerial vehicles, and they offer excellent maneuverability for tackling environmental disturbances (e.g., obstacle avoidance). Multirotor UAVs can expeditiously arrive at the intended destination; their agility, hovering capabilities, and ease of deployment contribute to their quick dispatch to operation zones, enabling them to respond promptly to situations where time is of the essence. They can penetrate narrow spaces, edifices, ravines, woodlands, or perilous terrains to effectively handle mission-related tasks, maneuver around obstacles, and reach areas that are arduous or inaccessible to humans. Their ability to hover in place provides a stable platform for capturing detailed data and imagery. They can be equipped with various sensors and payloads to enhance their operational performance; signals that are challenging to perceive visually, for instance, heat, sound, and electromagnetic radiation, can be detected by these sensors. The most popular multirotor UAVs on the market used for SA in various scenarios are shown in Figure 5, and their properties are listed in Table 2 [44].
Despite multirotor UAVs’ effectiveness in acquiring SA, owing to their maneuverability and small size, they are resource-constrained tools with strict processing, memory, battery, and communication (PMBC) limitations, as well as maneuverability challenges in harsh conditions. Existing autonomous multirotor UAVs suffer a number of serious restrictions on efficiently providing real-time and continuous SA. Discussing these restrictions is important in order to understand where drones can help us and what needs to be done to handle these limitations under different circumstances.

3.2. Limitations and Challenges

As shown in Table 2, the most energy-efficient vehicle on the market, with a full battery of 5935 mAh and a weight of 6300 g, can achieve about 55 min of flight time and an 18.5 km range in ideal weather conditions, without payload, and with the camera switched off. This battery run-time can be even shorter if the vehicle’s motion entails tackling environmental disturbances. Currently, most existing UAVs rely on communications and decisions assigned by a human supervisor or on computations that take place in the cloud or a central compartment. The following are some of the highlighted restrictions of existing multirotor UAVs.

3.2.1. Environmental Constraints

In any general operation scenario, the vehicle and environment experience continuous changes during the mission, and events perceived by the vehicle continuously change its perception of the environment. The critical challenges posed by the environment, such as adverse wind, harsh weather conditions, and the presence of static/dynamic/uncertain collision barriers, significantly impact the UAV’s maneuverability, battery consumption, connectivity, and overall operation performance. Due to their small size and lightweight, multirotor UAVs are susceptible to sudden weather changes. As a result, it is crucial to adapt to dynamic weather conditions in real-time to ensure safe and efficient UAV operations. The urgency of establishing rapid and precise responses to sudden weather changes remains a paramount challenge to be addressed.

3.2.2. Restricted Battery Life

This is one of the most significant challenges in the application of drones. The authors of [45] studied diverse battery types used on UAVs, such as NiMH, NiCad, Pb-acid, and Li-ion, and concluded that lithium-ion, lithium polymer (LiPo), and high-voltage lithium polymer (LiHV) batteries are best suited to UAVs. Even these batteries last less than an hour in ideal conditions, depending on the class of the vehicle, and this period can be even shorter if the vehicle is required to perform processing, data transmission, photography, or filming, as the camera also draws its power from the drone’s battery. The vehicle’s speed and acceleration, altitude, and weight/payload can also impact battery consumption. Environmental factors, such as strong wind, rain, or tackling obstacles, can likewise influence the vehicle’s battery run time [46]. Clearly, this endurance is not sufficient for applications in immersive cases and large landscapes [47]. The application of solar energy to charge UAVs, converting light into electric current, has recently been investigated in [48] for path planning and in [49] for target tracking. However, the issue with this approach is the extra payload of the solar panels as well as the slow charging rate, which cannot effectively support UAVs during flight.
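A back-of-the-envelope endurance estimate makes the scale of the problem concrete. In the sketch below, the nominal pack voltage (22.2 V), average hover power (115 W), and usable-capacity fraction (80%) are our own illustrative assumptions, not specifications of any vehicle surveyed here; only the 5935 mAh capacity comes from the text above.

```python
def flight_time_min(capacity_mah, voltage_v, avg_power_w, usable_frac=0.8):
    """Estimate flight time (minutes) from usable battery energy and
    average power draw: E[Wh] = mAh/1000 * V, t[h] = E*frac / P."""
    energy_wh = capacity_mah / 1000.0 * voltage_v
    return energy_wh * usable_frac / avg_power_w * 60.0

# A 5935 mAh pack at an assumed nominal 22.2 V holds ~132 Wh;
# at an assumed 115 W average draw with 80% usable capacity this
# yields roughly 55 min, consistent in magnitude with Table 2.
t = flight_time_min(5935, 22.2, 115.0)
```

The same formula shows why payload, camera use, or wind compensation shortens flights: each extra watt of average draw subtracts directly from the denominator.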

3.2.3. Limited Connectivity and Bandwidth

UAVs not only have restricted battery life but also connectivity limitations, particularly in remote areas where no internet is accessible or the connection is poor. Although the majority of existing UAVs are equipped with various positioning systems, including GPS and GNSS, and a range of sensors to navigate and avoid colliding with obstacles, data transmission, the essential basis of real-time SA, cannot yet be effectively addressed [50]. Harsh weather conditions such as heavy snow, rain, or fire can also easily break or interrupt the connection (either GPS or internet). Moreover, the system’s power usage grows with the inter-vehicle or vehicle-to-station distance, which is considerably energy-draining in large operational environments [51].

3.2.4. Limited Memory and Onboard Processing Power

Additionally, UAV processors’ computational power and memory are limited; therefore, algorithms and solutions that require more memory and processing ability cannot be implemented on UAVs’ onboard processors [52]. Although relatively powerful onboard processors are used in some UAVs, their processing power is still well below that of high-performance computers.

3.2.5. Regulatory and Ethical Considerations

In addition, aviation regulations impose certain limitations on drones, including restrictions on flight altitude, airspace, and operating conditions. For example, drones are typically not allowed to fly beyond the visual line of sight of the operator or in certain restricted airspace, which can limit their range and coverage area. Compliance with these regulations is crucial to ensure safe and legal drone operations, but it can also pose challenges in many scenarios such as SAR operations where drones may need to operate in remote or restricted areas.
Generally speaking, UAVs still cannot efficiently cope with power supply limitations, processing power scarcity, maximal physical load size, and maneuverability in harsh conditions. Although power generation techniques exist, they are not sufficiently employed in practice. Considering the resource scarcity of existing UAVs, providing onboard computation for ML techniques remains a major obstacle to online SA. In some existing research [53,54,55], computationally heavy, cloud-centric ML approaches are exploited to tackle the computational restrictions of UAVs, where data are sent, stored, and processed on a centralized server. Although cloud computing is currently employed for big data processing worldwide, it has some critical barriers in terms of the wide bandwidth required for data communication, the high amount of energy needed for data transmission, and transmission latency toward the centralized entity, leading to other sorts of challenges when exploiting ML techniques. Thus, devising and exploiting an efficient algorithmic approach to tackle all the given problems is still a widely open area for research.

4. Unveiling the Landscape of UAV-Centric SA in Various Applications

The focus of this section lies in a comprehensive review and critical analysis of existing UAV methodologies concerning both SA and EA in multifaceted contexts [56,57]. These methodologies primarily center on information monitoring and processing for prediction and early detection. In this review, UAV applications for SA are classified into three main categories based on distinct data fusion approaches. Each classification undergoes an in-depth examination of relevant research activities, depicting their methodologies and advancements. Moreover, this review critically addresses persistent research and development challenges that pose ongoing obstacles in the field. Figure 6 presents a conceptual example of capturing SA using distributed sensors, an onboard camera, and LiDAR.

4.1. Expanding SA with UAVs via WSN and Distributed Sensors

Technological advancements in wireless communication, high-precision sensors, and UAV operational capabilities have enabled enhanced SA through data fusion, integrating various information sources to build a unified understanding of the environment.
The integration of WSN with UAVs creates an ideal system for time-critical missions, such as natural disaster management, enabling the capture of SA for damage assessment before, during, and after disasters to identify threats promptly [58].
Effectively utilizing this capability requires establishing a data collection and monitoring structure of air and ground sensors for accurate data analysis and rapid response. This involves deploying various sensors, such as high-resolution cameras for visual data, LiDAR for 3D mapping, and thermal sensors for heat detection, often mounted on UAVs and coordinated through a central control system. Collected data are transmitted to ground control stations via secure channels for preprocessing, including noise reduction and data fusion, to enhance accuracy and reliability. Advanced algorithms, including machine learning, analyze the data to identify patterns and generate actionable insights. UAVs can also serve as communication hubs, re-establishing damaged communication lines post-disaster [13], and as transportation vehicles for delivering medical equipment [9,59]. Stable weather conditions are necessary for effective UAV operations, making them suitable for hydrological disaster assessments like floods [10].
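The data-fusion step described above can be illustrated with a minimal sketch: fusing independent noisy estimates of the same quantity by inverse-variance weighting, the optimal linear combination under a Gaussian-noise assumption. The sensor readings and variances below are hypothetical.

```python
def fuse_measurements(estimates):
    """Fuse independent noisy estimates of one quantity by
    inverse-variance weighting; each estimate is (value, variance)."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    fused_variance = 1.0 / total  # fusion never increases uncertainty
    return fused_value, fused_variance

# Hypothetical example: a precise LiDAR altitude reading fused with a
# noisier barometric one from the same UAV.
value, var = fuse_measurements([(102.0, 0.25), (98.0, 4.0)])
```

The same weighting rule underlies Kalman-style filters, where the variances additionally evolve over time as measurements arrive.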
The study in [11] developed the IMATISSE mechanism, integrating crowd-sensing and UAVs for natural disaster assessment and combining smartphone and WSN data to create comprehensive SA. However, the design overlooked UAV performance under environmental challenges and relied on energy-intensive data collection. The study presented in [16] introduced a cognitive and collaborative navigation framework for the TU-Berlin swarm of small UAVs, tackling connectivity issues and enhancing SA through local sensor data and inter-vehicle information using non-linear solvers and factor graphs, focusing on data fusion and connection quality but not on the comprehension and projection stages.
Several works focus on using UAVs as signal relays to connect ground-based LoRa nodes with remote base stations [17]. These approaches enhance communication range and reliability by using UAVs equipped with high-gain antennas, signal amplifiers, and advanced signal-processing capabilities. UAVs can quickly establish temporary communication links in remote or disaster-stricken areas, offering flexible network configurations. However, practical limitations include the resilience of UAV relay networks, efficient guidance to critical areas, data fusion, power management, and optimal recharging strategies [60]. In [61], a concept is proposed for UAVs to recharge on public transportation and collect data, while ref. [62] explores using fixed-wing UAVs with detection nodes for bushfire inspection, focusing on optimizing communication intervals and data collection without addressing higher SA levels.
The cited works [10,11,13,16,62] focus on providing environmental SA in natural disaster management using WSN and various UAV classes. Key challenges include operation in severe conditions, diverse data sources, sensor battery limitations, and UAV resource constraints. The complexity of the environment and the unpredictable nature of disasters also remain unaddressed. Addressing these challenges requires considering system properties such as sensor types, UAV classes, and environmental and vehicular constraints.
Another significant concern is communication bandwidth and connectivity, which can become challenging due to damaged infrastructure during disasters. UAVs, wireless sensors, central servers, and processors must actively communicate, and any changes in infrastructure and topology must be instantly addressed to retain connectivity. This is difficult due to the lossy nature of wireless connections, especially without a central node, requiring constant tracking of devices in range. Frequent scanning for new nodes allows faster reactions but increases overhead and battery usage.
While using UAVs as relays in LoRa networks shows promise, challenges remain. Ensuring secure and reliable communication between nodes and base stations, optimizing network performance, and managing power consumption are crucial. Power-hungry sensors may quickly deplete their batteries, making synchronization of network parts during disasters difficult. Communication elements face various resource constraints, from processor capabilities to battery-restricted UAVs. Solutions for low-power, low-memory sensors mainly address WSN tasks like data dispatching to sink nodes, but even these sensors have limited energy that must be managed to maximize data transmission over their lifetime.
Exploiting UAVs for WSN data collection faces limitations such as the dynamic wireless channels between sensor nodes (SNs) and UAVs. To mitigate energy wastage from continuous idle listening by SNs, a sleep/wake-up mechanism is employed in [63], and location-based data-gathering frameworks prioritize sensors to minimize collisions and energy consumption [64]. Studies such as [65] optimize these mechanisms but do not address varying energy consumption and data collection frequencies. UAV battery limitations, critical given typical flight times of under an hour [66,67], are compounded by the energy needed for data transmission. Research focuses on improving path planning and SN mechanisms, with studies such as [68,69] addressing issues in fixed-wing UAVs used as mobile data collectors. Despite these challenges, combined WSN and UAV systems remain a strong option for SA, with ongoing advancements paving the way for more efficient communication systems.
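A back-of-the-envelope sketch shows why the sleep/wake-up mechanisms cited above matter for sensor-node lifetime; the battery capacity and current draws below are illustrative values, not figures from the cited studies.

```python
def battery_lifetime(capacity_mah, awake_ma, sleep_ma, duty_cycle):
    """Estimated sensor-node lifetime in hours under a periodic
    sleep/wake schedule; duty_cycle is the awake fraction (0-1]."""
    avg_current_ma = duty_cycle * awake_ma + (1.0 - duty_cycle) * sleep_ma
    return capacity_mah / avg_current_ma

# Illustrative numbers: 2000 mAh battery, 20 mA radio-on, 0.02 mA asleep.
always_on = battery_lifetime(2000.0, 20.0, 0.02, 1.0)   # continuous idle listening
duty_10pct = battery_lifetime(2000.0, 20.0, 0.02, 0.1)  # 10% duty cycle
```

The order-of-magnitude lifetime gain is why duty cycling is the default design choice, at the cost of the synchronization and latency issues discussed above.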

4.2. Expanding SA with UAVs Using Vision Systems

Vision systems are crucial for augmenting SA in UAVs, incorporating technologies like cameras and optical sensors to gather high-resolution visual data in real-time. These systems enable UAVs to capture detailed images and videos, facilitating robust visual analysis for terrain mapping, object recognition, and obstacle detection, thereby enriching their SA capabilities [69,70].
Advanced image processing algorithms and computer vision techniques allow UAVs to interpret visual data, extract crucial information, and make informed decisions during missions [71]. These systems enable UAVs to detect and classify objects, navigate complex environments, and respond to dynamic changes in real-time [72]. The fusion of visual data facilitates the creation of detailed 3D maps, aiding in precise localization, route planning, and situational analysis, enhancing the overall effectiveness and safety of UAV operations [73]. In [74], a single UAV-based system uses real-time data from onboard vision sensors to predict fire spread, integrating historical data and online measurements for model updates. However, the limited battery capacity and computational burden restrict the UAV to covering small wildfire zones. A study using a DJI Phantom 4 Pro (P4P) UAV with a 20-megapixel CMOS camera for flood inundation prediction during storms is presented in [14]. This UAV-based multispectral remote sensing platform addresses data collection, processing, and flood mapping to facilitate road closure plans.
A flood monitoring system using a P4P UAV to gather high-resolution images of shorelines was developed in [15]. The collected images were processed into a digital surface model (DSM) and combined with field measurements for flood risk assessment and georeferencing. This approach requires human-centered processing due to UAV bandwidth limitations and assumes windless conditions, leaving environmental complexities unaddressed. A modular UAV-based SA platform with highly calibrated cameras and optical and thermal sensors is proposed in [75], where onboard processing for object detection and tracking is performed before transmitting data to the Command-and-Control Center (CCC). However, onboard processing imposes a computational burden on the UAV, leading to faster battery depletion and high bandwidth requirements for transmitting abstracted videos. A similar approach for detecting, localizing, and classifying ground elements is proposed in [76], processing visual data for environmental SA during rescue operations. However, fast internet access for video transfer in disaster areas is impractical, and bandwidth limitations remain a significant challenge.
In [77], a UAV vision-based aerial inspection generates a 3D view pyramid for passive collision avoidance during urban SAR operations, facilitating structure identification in unexplored environments. However, the approach lacks practical feasibility for real-time operations due to passive obstacle avoidance and computational burden concerns.
The VisionICE system, used in SAR operations, integrates a camera-mounted helmet, an AR display, and UAV vision sensors to enhance air–ground cognition and awareness [78]. Real-time target detection and tracking are performed using the YOLOv7 algorithm, but communication with the cloud server requires significant bandwidth and energy and suffers from latency issues.
Studies in [79,80,81] demonstrate the feasibility of using UAVs for fire monitoring and ISR missions, employing neural networks and multi-UAV coordination for efficient fire perimeter surveying. A distributed control scheme for camera-mounted UAVs to observe fire spread is presented in [82], focusing on minimizing information loss from cameras due to heat. Another study [83] integrates multi-objective Kalman uncertainty propagation for UAVs to provide ground operators with fire propagation data, emphasizing human-centered decision-making.
The ResponDrone platform [84] coordinates a fleet of UAVs to capture and relay critical imagery in disaster areas through a web-based system, though practical challenges with disrupted communications infrastructure remain. In [85], a 3D-draping system dynamically overlays UAV-captured imagery on terrain data for SA, but the perception stage is primarily addressed, leaving the comprehension and projection stages to human operators.
An extensive review in [86] highlights the cost-effectiveness and versatility of UAV-based photogrammetry and geo-computing for hazard and disaster risk monitoring, despite challenges like weather dependency, regulatory constraints, and data quality variability.
Although vision systems are efficient for capturing high-quality data, their use is hindered by UAV resource limitations for onboard computation, data transmission, and battery consumption. Vision sensors are also vulnerable to lighting conditions and obstructed views. Onboard processing is energy-consuming, and UAV processors cannot handle heavy computational algorithms, leading to accuracy trade-offs. Remote computational platforms require continuous data transmission, draining the battery and causing connectivity issues during disasters, making real-time SA challenging.

4.3. Expanding SA with UAVs Using LiDAR

Leveraging LiDAR technology significantly enhances the SA capabilities of UAVs by emitting laser pulses to create highly accurate 3D maps. These elevation data, combined with precise geospatial information, enable UAVs to navigate complex terrains, perceive the environment accurately, and make informed decisions in real-time. LiDAR supports terrain modeling, topographical analysis, and provides detailed 3D representations of landscapes, crucial for applications such as precision agriculture and environmental monitoring [87].
LiDAR-equipped UAVs excel in forestry management, urban planning, and infrastructure development by capturing detailed spatial information. LiDAR systems, like Ouster LiDAR, provide 360° panoramic images and dense 3D point clouds, supporting computer vision applications [88,89]. These systems utilize learning-based approaches compatible with deep learning models for tasks like object detection and tracking, presenting an efficient alternative to vision sensors that require intensive computational resources [90].
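As a minimal sketch of how a scanning LiDAR's raw returns become the 3D point clouds discussed above, the following converts (range, azimuth, elevation) returns into Cartesian sensor-frame coordinates; it assumes a simple spherical model rather than any particular vendor's data format.

```python
import math

def lidar_to_points(returns):
    """Convert raw LiDAR returns (range in metres, azimuth and elevation
    in radians) into Cartesian (x, y, z) points in the sensor frame."""
    points = []
    for rng, azimuth, elevation in returns:
        x = rng * math.cos(elevation) * math.cos(azimuth)
        y = rng * math.cos(elevation) * math.sin(azimuth)
        z = rng * math.sin(elevation)
        points.append((x, y, z))
    return points

# A single return: 10 m straight ahead, level with the sensor.
pts = lidar_to_points([(10.0, 0.0, 0.0)])
```

In practice, each point is then transformed from the sensor frame into a world frame using the UAV's pose estimate before mapping or detection.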
In [91], LiDAR-equipped UAVs were used to capture 3D digital terrain models of glacier surfaces, integrating high-resolution terrain data with monitoring systems to create 3D models. A LiDAR-based scanning system for UAV detection and tracking over extended distances was employed in [92], providing precise positioning and navigation in GNSS-denied environments. Tracking UAVs in 3D LiDAR point clouds by adjusting point cloud size based on distance and velocity was extended in [93]. However, the detection and tracking of small UAVs with varied shapes and materials remain challenging.
Learning-based LiDAR positioning techniques leverage machine learning algorithms to extract meaningful information from LiDAR data. In [94], a point-to-box approach rendered ground objects as 3D bounding boxes for easier detection and tracking. Although effective on powerful GPUs, this approach faces computational restrictions on UAVs. Transforming point cloud data into images for visual detection algorithms, as carried out in [95], offers 3D localization but imposes a heavy computational load. To alleviate this, ref. [96] used a 3D range profile from Geiger mode Avalanche Photo Diode (Gm-APD) LiDAR with YOLO v3 for real-time object detection, enhanced by a spatial pyramid pooling (SPP) module.
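The point-cloud-to-image transformation exploited in [95,96] can be illustrated, in simplified form, by projecting Cartesian points into a range image whose pixels hold the nearest return per angular bin, so that image-based detectors can operate on LiDAR data. The resolutions and vertical field of view below are arbitrary choices.

```python
import math

def point_cloud_to_range_image(points, h_res_deg=1.0, v_res_deg=1.0,
                               v_fov_deg=(-15.0, 15.0)):
    """Project Cartesian points into a 2D range image: each pixel keeps
    the closest range falling into its (azimuth, elevation) bin."""
    width = int(360.0 / h_res_deg)
    height = int((v_fov_deg[1] - v_fov_deg[0]) / v_res_deg)
    image = [[0.0] * width for _ in range(height)]  # 0.0 marks 'no return'
    for x, y, z in points:
        rng = math.sqrt(x * x + y * y + z * z)
        if rng == 0.0:
            continue
        elevation = math.degrees(math.asin(z / rng))
        if not v_fov_deg[0] <= elevation < v_fov_deg[1]:
            continue  # outside the vertical field of view
        azimuth = math.degrees(math.atan2(y, x)) % 360.0
        col = min(int(azimuth / h_res_deg), width - 1)
        row = int((elevation - v_fov_deg[0]) / v_res_deg)
        if image[row][col] == 0.0 or rng < image[row][col]:
            image[row][col] = rng  # keep the closest return per pixel
    return image

# One point 10 m ahead at sensor height lands in the middle elevation row.
img = point_cloud_to_range_image([(10.0, 0.0, 0.0)])
```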
An adaptive scan planning approach using a building information model (BIM) was proposed in [82,92,94,97] to reduce computational load and battery consumption. Using probabilistic cognition to minimize LiDAR beams while retaining detection accuracy was investigated in [98]. Despite computational limitations, the combination of visual and LiDAR information with learning-based frameworks achieved satisfactory SA [99,100]. However, memory, onboard processor, and battery constraints, as well as environmental factors like wind drift and obstacles, remain critical challenges.
In [97], a centralized ground-based perception platform used UAVs with LiDAR and RGB-D cameras for information fusion, alleviating UAV computational load through ground processing. While effective, long-range data rendering and offloading face communication bandwidth limitations.
Multi-sensor systems combining visual and LiDAR data hold great potential for accurate SA and position estimation [82,92,94,97]. However, integrating these data types to avoid redundancy and preserve essential information is complex. The literature often overlooks important parameters such as computation, memory, and battery restrictions, and the impact of environmental factors on UAV performance. Table 3 summarizes the advantages and disadvantages of three reviewed approaches used for capturing SA by UAVs.

5. Algorithmic and Strategic Insights to UAV-Centric SA

5.1. Data-Driven ML Models

Recent advancements in supervised ML techniques have led to significant improvements in object detection and tracking. These techniques have demonstrated promising performance in tracking-based applications, and their ability to learn complex features and process semantic information makes them powerful tools for addressing some of the challenges in this field [27,75,82].
Supervised learning excels in scenarios such as object detection, image classification, and anomaly detection, where the availability of labeled datasets allows the model to learn precise mappings between inputs and outputs. This ensures reliable performance, making the supervised approach preferable for applications requiring high precision and predictability. In contrast, unsupervised learning is typically used for clustering and pattern recognition tasks where labeled data are scarce, but it may not match the accuracy of supervised learning on specific tasks.
Several studies devoted to forming SA for autonomous UAVs demonstrate the high efficiency of ML technologies for solving this problem, which will be discussed in Section 5.1.1 and Section 5.1.2, followed by a critical analysis in Section 5.1.3. CNNs and DL are widely used in computer vision applications and are specifically designed to learn and extract features from visual data through a combination of convolutional layers, pooling layers, and fully connected layers. These layers help to identify patterns and relationships within the image data, allowing for accurate detection, tracking, and analysis. Arguably, a significant portion of the current advancements in tracking are rooted in DL methodologies.
Supervised learning constitutes a fundamental paradigm in leveraging labeled datasets to train models for specific tasks. In the realm of UAV SA, this subsection explores the applications of and advancements in supervised learning techniques, elucidating their contributions to enhancing the capabilities of UAVs.

5.1.1. Object Detection, Recognition, and Tracking

SA requires onboard processing using single- or multi-sensor fusion to capture data from various visual and radio sensors, assessing the environment [101]. The quality and quantity of fused data significantly impact UAVs’ perception and SA, necessitating effective strategies for data acquisition and semantic processing [102,103]. Current models lack cognitive accuracy, highlighting the need for broader, human-like cognition models for robust decision-making and action selection.
Supervised learning plays a pivotal role in object detection and recognition, which are crucial for identifying and categorizing elements within the UAV’s operational environment. This section delves into state-of-the-art supervised learning models such as CNNs and their adaptations for efficient detection of objects like vehicles, structures, or anomalies, contributing to comprehensive SA.
Using CNNs and, in general, DL-based models, various architectural strategies have been devised to enable the integration of sensors and devices, resulting in enhanced autonomy and increased efficiency in search and rescue (SAR) operations. Convolutional layers in CNNs are particularly adept at extracting spatial features from image data, which is crucial for accurate object detection and classification. Pooling layers help in reducing the dimensionality of the data, making the processing more efficient while retaining essential features. Fully connected layers integrate these features to make final predictions, which can be critical in identifying and locating individuals or objects in SAR missions. Additionally, advanced architectures such as ResNet, which introduces residual connections to mitigate the vanishing gradient problem, and YOLO (You Only Look Once), known for its real-time object detection capabilities, have been employed to improve the performance of UAV systems. These models allow for real-time data processing and decision-making, which are vital in time-sensitive SAR operations. By leveraging these architectural advancements, UAVs can effectively process vast amounts of sensory data, leading to quicker and more accurate situational awareness.
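The division of labor among the three layer types can be made concrete with a toy, pure-Python forward pass; this is a deliberately minimal sketch, not a production CNN, and the "image", kernel, and dense weights are hand-picked for illustration.

```python
def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most DL frameworks):
    slides the kernel over the image to extract spatial features."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def max_pool2(fmap):
    """2x2 max pooling: halves each spatial dimension, keeping the
    strongest activation in every window."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

def dense(features, weights, bias):
    """Fully connected layer: integrates flattened features into a score."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# Toy forward pass: a 4x4 'image' with a dark-to-bright vertical edge.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge_kernel = [[-1, 1], [-1, 1]]        # responds to left-dark/right-bright edges
fmap = conv2d(image, edge_kernel)       # 3x3 feature map, peaks on the edge
pooled = max_pool2(fmap)                # 1x1 after pooling
score = dense([v for row in pooled for v in row], [0.1], 0.0)
```

Architectures such as ResNet and YOLO stack many such layers with learned kernels, but the data flow is the same: convolution extracts features, pooling condenses them, and fully connected layers turn them into predictions.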
In [104], a DL-based person-action-locator approach for SAR operations uses UAVs to capture live video, processed remotely by a DL model on a supercomputer-on-a-module. The Pixel2GPS converter estimates individual locations from the video feeds, addressing limited bandwidth and detection challenges. In [105], UAV-based SA in a SAR mission uses a CNN on a smartphone for human detection. Despite its accuracy and portability, communication bandwidth remains an issue. The choice of a smartphone as the computing unit is unclear, given that more powerful portable devices exist.
Another method uses cellphone GSM signals for target localization through received signal strength indication (RSSI) [106,107,108]. In [107], a deep feed-forward neural network (FFNN) with pseudo-trilateration determines the target location, using CNN features as a secondary input. In [108], fixed-wing UAVs collaboratively track SOS signals from cellphones in wilderness applications. The study presented in [109] proposed a hybrid approach integrating Faster R-CNN with a quadrotor’s aerodynamic model for navigating forests and detecting tree trunks. A hierarchical dynamic system for UAV behavior control in [32] uses visual data processed by the DL-based pyramid scene parsing network (PSPNet) for semantic segmentation. This system anticipates the behavior of entities around the UAV, enhancing SA-based action control and decision-making.
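The RSSI-based localization idea behind [106,107,108] can be sketched with the standard log-distance path-loss model and a brute-force grid search standing in for the learned estimators used in those works; all parameter values below are assumptions for illustration.

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d)."""
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, rssis, span=100.0, step=0.5):
    """Grid-search the 2D position minimizing the squared mismatch between
    RSSI-derived and geometric distances to known anchor positions
    (e.g., UAV waypoints at measurement time)."""
    dists = [rssi_to_distance(r) for r in rssis]
    best, best_err = None, float("inf")
    n = int(span / step)
    for i in range(n + 1):
        for j in range(n + 1):
            x, y = i * step, j * step
            err = sum((math.hypot(x - ax, y - ay) - d) ** 2
                      for (ax, ay), d in zip(anchors, dists))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Hypothetical scenario: three UAV waypoints hear a phone; the RSSIs are
# generated noise-free from a known target position for demonstration.
anchors = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0)]
target = (20.0, 30.0)
rssis = [-40.0 - 20.0 * math.log10(math.hypot(target[0] - ax, target[1] - ay))
         for ax, ay in anchors]
est = trilaterate(anchors, rssis)
```

With real, noisy RSSI the residual surface is what the cited deep models learn to cope with; the grid search merely exposes the underlying geometry.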
A different SAR approach in [110] integrates a pre-trained CNN with UAV cameras to detect survivors’ vital signs, augmented with bio-radar sensors for accuracy and using LoRa ad hoc networks for inter-vehicle communication. UAV SLAM technology coupled with CNNs enhances real-time environmental perception and mapping. In [111], real-time indoor mapping integrates SLAM with a CNN-based single-image depth estimation (SIDE) algorithm.
End-to-end learning solutions for unknown environments in [112] directly obtain geometric information from RGB images without specialized sensors. The active neural SLAM in [113] combines traditional path planners with learning-based SLAM for fast exploration. While C-SLAM offers a global perspective by integrating local UAV perspectives [114], map accuracy is affected by factors like UAV numbers, communication delays, bandwidth limitations, and reference frames [115,116].
UAVs’ SA is hindered by data acquisition noise and environmental uncertainty [117]. A UAV-based SA system in [97] uses a single camera for real-time individual detection and action recognition, which degrades the mean average precision (mAP) when detection is inaccurate. Geraldes et al. [104] proposed the person-action-locator (PAL) system for recognizing people and their actions using a pre-trained DL model, focusing only on basic perception and neglecting event comprehension and projection.

5.1.2. Data-Driven ML-Based Path Planning and Navigation

Exploring various strategies to optimize the UAV’s flight path and relay operations is another area of interest for many researchers. This involves developing efficient algorithms that determine the most optimal UAV trajectory, considering signal strength, interference, and energy consumption factors.
The application of supervised learning extends to path planning and navigation systems, empowering UAVs to navigate through complex terrains with optimal routes. Exploration of supervised algorithms showcases their role in training UAVs to make informed decisions based on labeled training data, leading to more adaptive and responsive navigation. CNN and DL have also demonstrated their usefulness in addressing challenges associated with positioning, planning, perception, and control in the context of autonomous drones’ collision avoidance.
UAVs must detect surrounding objects to avoid collisions and adjust flight paths. Autonomous navigation systems and onboard cameras estimate paths locally, though this can be computationally intensive. To address this, ref. [118] explored DL methods like RL and CNN for real-time obstacle detection and collision avoidance. Awareness of terrain elevations aids in deconfliction strategies and rerouting paths [119,120,121]. ANN performance for UAV-assisted edge computing and parameter optimization techniques for SA and obstacle avoidance are investigated in [9], though high computational demands pose challenges for real-time UAV-assisted SA, particularly in SAR operations.
Deep CNNs excel in feature identification from input images, making them popular for SA in collision avoidance. For instance, a deep ANN for autonomous indoor navigation in [122] showed efficiency in collision avoidance. Combining LSTM with CNN for UAV navigation was explored in [123] using the Gazebo simulator with obstacles [124].
A LiDAR-based approach for UAV space landing using a hybrid CNN-LSTM model was proposed in [125], offering high pose estimation accuracy but insufficient real-time processing speed. Similarly, ref. [126] proposed a hybrid CNN-LSTM model for early diagnosis of rice nutrient status using UAV-captured images, integrating a squeeze-and-excitation (SE) block that improves accuracy at the cost of computation speed.
End-to-end approaches match raw sensory data to actions, learning from skilled pilots’ demonstrations [127,128,129]. These methods often involve depth maps, the UAV position, and odometry for route recalculation [127,130]. A CNN-based relative positioning approach for UAV interception and stable formation was suggested in [131], and DL models were applied to terrain depth maps and obstacle data for collision avoidance in [132].
NNs in flight control methods learn UAVs’ nonlinear dynamics, enhancing adaptability in unknown situations [133]. A cooperative system for UAVs’ collaborative decision-making and adaptive navigation using a trained NN coupled with a consensus architecture was designed in [134], facilitating mission synchronization and environmental adaptation.
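The consensus component of such cooperative schemes can be shown in isolation from the trained NN: in a minimal linear-consensus sketch, each UAV repeatedly nudges its local estimate toward the average reported by its neighbors. The ring topology, gain, and altitude values below are hypothetical.

```python
def consensus_step(values, neighbors, alpha=0.3):
    """One round of linear consensus: each agent moves a fraction alpha
    toward the mean of its neighbors' current estimates."""
    updated = []
    for i, v in enumerate(values):
        if neighbors[i]:
            avg = sum(values[j] for j in neighbors[i]) / len(neighbors[i])
            v = v + alpha * (avg - v)
        updated.append(v)
    return updated

# Four UAVs on a ring network agreeing on a shared altitude estimate.
values = [100.0, 110.0, 90.0, 120.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
for _ in range(60):
    values = consensus_step(values, ring)
# all estimates converge to the initial average, 105.0
```

Because the update is symmetric, the network-wide average is preserved at every round, so the agents settle on the mean without any central node.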
Internal SA is crucial for UAVs to address software/hardware failures. Ref. [135] investigated using DNNs to classify failures and identify recovery paths for malfunctioning UAVs, demonstrating the potential of this approach for enhancing safety and reliability. Further research could improve such autonomous recovery systems.

5.1.3. Challenges and Future Directions

While supervised learning offers substantial advancements in UAV SA, challenges such as limited labeled datasets, domain adaptation, and real-time processing complexities persist. This section outlines the current challenges and suggests potential directions for future research, aiming to address limitations and propel the application of supervised learning in UAV SA to new heights.
Despite the successful use of supervised learning in the studies mentioned above and its ability to meet the needs of various applications, manual labeling requires tremendous time and effort. Moreover, the collected datasets must be qualitatively and quantitatively rich enough for the particular environment in which the UAV will operate, in advance of training. Given that the collected data pertain to specific environments, the entire process must be repeated whenever the model is to be used in a different setting. Exploiting lightweight supervised algorithms, such as shallow NNs, on UAVs is feasible and can be effective, albeit at some cost in performance, but only when sufficient, properly labeled data are available or the environment is fully known.
Nevertheless, since the operating environment is rarely known in advance in real-world applications, relying solely on offline data to train the model is likely inadequate for applications that require real-time sensory data and in which the agent has only a basic understanding of the environment. Additionally, gathering extensive amounts of data in a dynamic spatiotemporal environment is challenging, and labeling the collected data in such an environment further complicates the process.
Another major challenge with supervised methods, including various types of NN, is the intricate training process requiring extensive computation. Consequently, executing these algorithms in UAVs, as resource-constrained devices, is arduous. Although utilizing DNN can aid in achieving satisfactory performance with high accuracy, it can result in additional computation, power usage, and latency. Integrating extra hardware into UAVs to manage complex computations may reduce the thrust-to-weight ratio, shorten battery life, and delay response time, making it an inefficient solution. Thus, the demand for deployable ML techniques for edge devices, including UAVs, has increased as they face PMBC limitations, making it challenging to perform ML with these devices. This entails transferring the data to an external computation center to be processed on high-performance computers, such as a cloud server [136]. However, data communication is associated with enormous energy consumption and is prone to latency due to bandwidth limitations. Acknowledging the significant contributions of academic research to technology development, the majority of these works remain in the simulation stage using various assumptions that may not apply to real-world applications or present challenges in hardware implementation [133].
Innovative hardware designs are being researched to enhance the computational capability of UAVs, while supplying power for heavy onboard computation, as well as the communication restrictions, remains an unavoidable challenge.
Given the importance of real-time continuous SA, it becomes essential to incorporate online learning techniques that allow the model to adapt and improve its performance based on real-time feedback. By continuously updating the agent’s understanding of the environment through interaction and feedback, the agent can better navigate dynamic and uncertain conditions, ultimately enhancing its ability to make accurate and timely decisions. This adaptive approach enables the model to stay relevant and effective in complex, evolving environments, ensuring it can handle real-world applications’ demands.
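The online-learning idea can be sketched in a few lines: a single stochastic-gradient step folds each new sample's feedback into the model as it arrives, so the estimate tracks a changing environment. The linear model, target function, and learning rate below are illustrative, not drawn from any cited work.

```python
def online_update(weights, features, target, lr=0.05):
    """One stochastic-gradient step of an online linear model: the
    prediction error on the newest sample immediately adjusts the weights."""
    prediction = sum(w * f for w, f in zip(weights, features))
    error = prediction - target
    return [w - lr * error * f for w, f in zip(weights, features)]

# Streaming example: learn y = 2*x from samples arriving one at a time.
weights = [0.0]
for x in [1.0, 2.0, 3.0] * 200:
    weights = online_update(weights, [x], 2.0 * x)
# weights[0] approaches 2.0 as feedback accumulates
```

The same update pattern, applied to far richer models, is what lets an agent refine its understanding of the environment from real-time feedback rather than a fixed offline dataset.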

5.2. Stochastic, Deterministic, and Metaheuristics Algorithms

Deterministic and nondeterministic algorithms can provide UAVs with SA in two different aspects. The first is optimizing the route or path the UAV should take to navigate the environment and collect data from its onboard or scattered sensors (see Section 4.1). This approach has been successfully applied in single- and multi-UAV mission planning scenarios (application-based SA).
The second approach is exploiting SA to ensure the UAV’s successful mission completion in unknown environments, considering the important parameters, constraints, and disturbances that can imperil the vehicle’s safety or hamper the operation. This includes enabling vehicles to be aware of and deal with factors such as collision barriers, including trees, cliffs, or objects, the impact of wind on the UAV’s motion, and other harsh environmental conditions that can affect the UAV’s maneuverability, battery consumption, endurance, trajectory length, and overall mission cost (vehicle-based SA).
UAVs are often used to conduct time-critical operations and explore unknown, uncertain environments where neither a GPS signal nor an a priori terrain map is available. Thus, devising an effective and feasible exploration strategy, route/path planning, and trajectory tracking is crucial for optimizing operational performance and minimizing exploration costs. To this end, UAVs ought to perceive their surroundings in real-time and adjust their strategy accordingly, and numerous studies have been dedicated to developing effective exploration strategies.
Both of these perspectives offer great potential for using these groups of algorithms to enhance the vehicle’s autonomy and reduce or eliminate the reliance on remote control or supervision, leading to lower risks of human errors and subjective judgments. This is especially critical in operations like SAR that have strict timelines and require high accuracy.
As one of the crucial considerations for controlling the behavior of autonomous UAVs, threat awareness (mainly of obstacles) has been investigated in [23,24,25]. In [23], collision-free flight over hilly terrain is achieved by performing an evasive maneuver using data obtained from radar. In [24], a similar problem is solved using visual data and a reasoning inference system that discovers relationships between objects in the observed scene. In [11], collision awareness is also addressed, considering different ranges of objects on the Earth’s surface. Available approaches for motion planning and navigation can be categorized by their nature as deterministic, non-deterministic, or stochastic.

5.2.1. Deterministic Methods for Vehicle-Based SA

Deterministic path and motion planning methods ensure consistent results with the same initial conditions and rules.
Potential field-based methods are commonly used for path planning, creating attractive and repulsive fields to guide vehicle motion. Ref. [137] used an artificial potential field (APF) for UAV navigation in a 3D dynamic environment. Ref. [138] applied an APF-based strategy for bilateral teleoperation with 3D collision-aware planning, incorporating visual and force feedback. Ref. [139] integrated APF with multi-UAV conflict detection and resolution (CDR) using rule priority and geometry-based mechanisms, showing a 46% improvement in near-collision detection. Ref. [140] proposed an improved APF for UAV swarm formation, handling dynamic obstacles to prevent collisions.
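To make the attractive/repulsive mechanics concrete, the following is a minimal 2D APF sketch; it is not the formulation of [137,138,139,140], and the gains, obstacle layout, and unit-step descent rule are illustrative assumptions:

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=10.0, rho0=2.0):
    """Net force: quadratic attraction to the goal plus bounded-range repulsion."""
    force = k_att * (goal - pos)
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 1e-9 < rho < rho0:
            # Standard repulsive-gradient term, active only inside radius rho0.
            force += k_rep * (1.0 / rho - 1.0 / rho0) * diff / rho**3
    return force

def apf_plan(start, goal, obstacles, step=0.05, iters=2000, tol=0.1):
    pos, goal = np.asarray(start, float), np.asarray(goal, float)
    path = [pos.copy()]
    for _ in range(iters):
        f = apf_force(pos, goal, obstacles)
        pos = pos + step * f / (np.linalg.norm(f) + 1e-9)  # unit-length descent step
        path.append(pos.copy())
        if np.linalg.norm(goal - pos) < tol:
            break
    return np.array(path)

# One obstacle slightly off the straight start-goal line; the field bends the path around it.
path = apf_plan([0.0, 0.0], [10.0, 8.0], [np.array([5.0, 5.5])])
print(np.linalg.norm(path[-1] - np.array([10.0, 8.0])) < 0.1)
```

An obstacle placed exactly head-on between start and goal would reproduce the local-minimum ("repulsive dilemma") failure mode discussed next.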
While APF is popular for its fast calculations and low computational cost, it can lead to the “repulsive dilemma”, where UAVs become stuck in local minima. Ref. [141] formulated path planning as a multi-constraint optimization problem using linear programming and adaptive density-based clustering, meeting task-completion and execution-time requirements. Ref. [142] developed a decentralized PDE-based approach for optimal multi-UAV path planning, leveraging porosity values to model collision risks, and suggested an ML technique for efficient PDE solutions. This method ensures that the UAVs do not collide while taking their dynamic properties into account. The feasibility of on-board implementation is also highlighted, and a simulation study shows the advantages of this method over centralized and sequential planning.
Mixed-integer linear programming (MILP), as an exact methodology, was applied in [143,144] to facilitate collision-free UAV motion planning at low altitudes over a residential area. Cho et al. [145] also used MILP for multi-UAV path planning to maximize coverage in a maritime SAR operation. In this study, the MILP operates on a grid-based decomposition of the search area translated into graph form to render an optimal coverage solution. However, due to its deterministic and exact nature, MILP is computationally complex and inefficient for real-time applications, which require swift responses to environmental changes.
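Exact methods are tractable only at small scale, which the following toy illustrates. In the spirit of the grid-decomposed coverage-maximization formulation in [145] (though not that formulation itself), it solves a tiny waypoint-placement problem exactly by exhaustive enumeration, standing in for a MILP solver; the grid size, sensor footprint, and waypoint count are hypothetical:

```python
from itertools import combinations, product

def coverage(waypoints, n=4, r=1):
    """Cells of an n-by-n grid within Chebyshev distance r of any chosen waypoint."""
    covered = set()
    for (wx, wy) in waypoints:
        for (x, y) in product(range(n), range(n)):
            if max(abs(x - wx), abs(y - wy)) <= r:
                covered.add((x, y))
    return covered

def best_k_waypoints(k=2, n=4):
    """Exact search over all placements of k waypoints (exponential in k)."""
    cells = list(product(range(n), range(n)))
    best, best_cov = None, -1
    for combo in combinations(cells, k):
        cov = len(coverage(combo, n))
        if cov > best_cov:
            best, best_cov = combo, cov
    return best, best_cov

wps, cov = best_k_waypoints()
print(cov)  # 14 of 16 cells: the optimum for two 3x3 footprints on a 4x4 grid
```

The exhaustive loop makes the exactness (and the scaling problem) explicit: the number of candidate placements grows combinatorially with the grid size and waypoint count, which is the practical obstacle to real-time use noted above.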
Deterministic algorithms guarantee consistency and predictable performance for the same input and are suitable for scenarios where the environment is fully known and static, as they can efficiently compute optimal paths without the need for random sampling or exploration. However, their computational time depends heavily on the size and complexity of the operation field. These algorithms also face several optimization challenges in complex environments, as they rely on prior knowledge of the surroundings, including the potential field, the collision cost map, and, in some cases, a model of the UAV’s kinodynamics [139,140]. Moreover, they are highly vulnerable to noisy data, which can adversely affect their accuracy and performance even when prior knowledge is available. Their slow convergence and dependence on the size and dynamism of the operation field make them inefficient for real-time applications, which require a swift response to environmental changes and may subject the vehicle to multiple replannings during the mission.

5.2.2. Non-Deterministic and Stochastic Methods for Vehicle-Based SA

Non-deterministic methods can exhibit different behaviors for the same input due to inherent variability. Algorithms applied to UAVs’ target motion prediction and trajectory tracking in this setting include the A* algorithm [146] and Markov decision processes (MDPs) [147,148], enabling advanced SA for path planning. Research has focused on these areas since 2011, as discussed in [149].
In [150], a centralized A* algorithm and multi-Bernoulli tracking approach address SA-dependent UAV swarm formation and planning. A mission-centered command and control paradigm using A* for dynamic path planning and UAV-UGV synchronization was proposed in [147]. The A* algorithm also supports UAV path planning in urban air mobility operations, facilitating collision avoidance and deflection updates [151].
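A minimal grid-based A* sketch follows; this is a generic textbook version, not the centralized planner of [150] or the command-and-control system of [147], and the occupancy grid and Manhattan heuristic are illustrative choices:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen and seen[node] <= g:
            continue                                         # a cheaper route was found already
        seen[node] = g
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0]]
path = astar(grid, (0, 0), (4, 4))
print(len(path) - 1)  # number of moves in an optimal path
```

Because the heuristic is admissible, the first time the goal is popped from the priority queue the returned path is optimal, which is why A* is attractive for deflection updates in urban air mobility settings.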
Sampling-based approaches such as rapidly exploring random trees (RRT, RRT*) and probabilistic roadmaps (PRMs) [152,153,154,155] support UAV path planning by connecting sampled obstacle-free points. Wu et al. [153] explored RRT variants, highlighting their slow convergence and high computational cost. A hybrid approach combining a bidirectional APF with IB-RRT* addresses randomness challenges and controls computational costs. Bias-RRT* was used for collision-free path planning in port inspection operations [155].
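The basic RRT mechanics can be sketched as follows; this is a plain RRT, not the IB-RRT* or Bias-RRT* variants of [153,155]. The unit-square workspace, circular obstacle, and goal-bias rate are illustrative assumptions, and collisions are checked only at sampled points (acceptable here because the step size is small relative to the obstacle radius):

```python
import random, math

def rrt(start, goal, obstacles, step=0.05, iters=3000, goal_r=0.1, seed=0):
    """Basic RRT in the unit square; obstacles are (cx, cy, radius) circles."""
    rng = random.Random(seed)
    free = lambda p: all(math.dist(p, (cx, cy)) > r for cx, cy, r in obstacles)
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        # Goal-biased sampling: head for the goal 10% of the time.
        sample = goal if rng.random() < 0.1 else (rng.random(), rng.random())
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d < 1e-9:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_r:
            path, n = [], new
            while n is not None:          # walk back up the tree to the root
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

path = rrt((0.05, 0.05), (0.95, 0.95), obstacles=[(0.5, 0.5, 0.2)])
print(path is not None)
```

The linear nearest-neighbor search and the raw, unsmoothed tree edges illustrate two of the costs discussed below: per-iteration effort grows with tree size, and the returned path typically needs post-processing before a UAV can fly it.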
Conventional motion planning algorithms, including PRM, RRT derivatives, and A*, involve two stages: observing the surroundings and generating control instructions. These methods suffer from high computational complexity, slow convergence, prolonged flight distance/time, and high energy consumption, making them inefficient for resource-constrained UAVs. Non-deterministic algorithms offer near-optimal solutions quickly but are sensitive to environmental changes and disruptions [156,157]. Open-loop systems [158] lack logical prediction and require frequent decision-making iterations.
Stochastic methods involve randomness in decision-making or environmental representation. PRM struggles with non-holonomic constraints and can generate redundant moves depending on how vertices are connected, and, as noted above, sampling-based approaches converge slowly at a high computational cost. Hybrid approaches that combine deterministic and stochastic elements can handle uncertainty and variability while providing stability and predictability.

5.2.3. Metaheuristic and Evolution-Based Methods for Vehicle-Based SA

Metaheuristic and evolution-based algorithms have shown significant progress in UAV motion planning, leveraging swarm intelligence for better convergence in complex target spaces.
The ant colony optimization (ACO) algorithm was applied for optimal collision-free paths in air force operations [159]. A hybrid ACO-A* algorithm addressed multirotor UAV 3D motion planning [160]: a camera-mounted UAV captures patrol images while ACO-A* adaptively generates a path plan, using energy consumption as the cost function and taking wind velocity constraints into account.
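The pheromone-and-heuristic mechanics of ACO can be sketched on a small tour-construction problem; this is a generic textbook ACO, not the ACO-A* hybrid of [160], and the waypoint layout and parameter values are illustrative:

```python
import random, math

def aco_tsp(cities, ants=20, iters=50, alpha=1.0, beta=3.0, rho=0.5, seed=1):
    """Minimal ant colony optimization for a small closed-tour instance."""
    rng = random.Random(seed)
    n = len(cities)
    dist = [[math.dist(a, b) for b in cities] for a in cities]
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # Transition probability ∝ pheromone^alpha * (1/distance)^beta.
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        # Evaporate, then deposit pheromone inversely proportional to tour length.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for length, tour in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len

cities = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
tour, length = aco_tsp(cities)
print(sorted(tour) == list(range(6)))
```

Evaporation (the factor 1 − ρ) is what prevents early tours from dominating permanently; without it, the colony stagnates on its first good tour, which is precisely the local-optimum risk discussed for [161] below.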
Bo Li et al. [161] developed an improved ACO for multi-UAV collision-free path planning that accommodates trajectory smoothness in sharp turns using the inscribed circle (IC) approach. The primary focus of this research is augmenting ACO with the Metropolis criterion to reduce the risk of stagnation or getting stuck in local optima. However, the UAV’s flight is assumed to take place in a static environment with stationary obstacles, which does not resemble a real-world environmental model and neglects factors that can affect the UAV’s practical flight in diverse situations.
Cuckoo search and multi-string chromosome genetic algorithms enhanced the multi-dimensional dynamic list programming (MDLS) algorithm for UAV task planning, showing better global optimization and task completion time [162]. The authors proposed a mission planning method for a UAV course of action that uses a two-segment nested scheme generation strategy incorporating task decomposition and resource scheduling. This method can automatically generate multiple schemes for the entire operational process, which offer clear advantages in diversity and task completion time compared to other methods, and the correlation coefficient can be adjusted to achieve different optimization indexes. The method’s effectiveness was verified through UAV cooperative operation modeling and simulation.
A hybrid grey wolf optimization (GWO) and fruit fly optimization algorithm (FOA) was proposed for UAV route planning in oilfields, improving solution quality in complex environments [163]. However, this work considers only 2D terrain and therefore does not capture the kinodynamic challenges a UAV faces in 3D environments.
Qu et al. [164] introduced a new hybrid algorithm, HSGWO-MSOS, for UAV path planning in complex environments. This algorithm efficiently combines exploration and exploitation abilities by merging a simplified grey wolf optimizer (GWO) with a modified symbiotic organisms search (MSOS). The GWO algorithm phase has been simplified to speed up the convergence rate while maintaining population exploration ability. Additionally, the commensalism phase of the SOS algorithm is modified and synthesized with the GWO to enhance the exploitation ability. To ensure the path is suitable for the UAV, the generated flight route is smoothed using a cubic B-spline curve. The authors provide a convergence analysis of the proposed HSGWO-MSOS algorithm based on the method of linear difference equations. Simulation experiments show that the HSGWO-MSOS algorithm successfully acquires a feasible and effective route, outperforming the GWO, SOS, and SA algorithms.
PSO is a classical metaheuristic algorithm commonly used for multi-objective path planning problems, offering promising performance in UAV path planning and reducing collision risk without adjusting the algorithm setup [165]. Shang et al. [166] developed a co-optimal coverage path planning (CCPP) method based on PSO that simultaneously optimizes the UAV path and the quality of the captured images while reducing computational complexity. This method addresses the limitations of existing solvers that either prioritize efficient flight paths and coverage or reduce the computational complexity of the algorithm, whereas these criteria need to be co-optimized holistically. The CCPP method utilizes a PSO framework that optimizes the coverage paths iteratively without discretizing the space of movement or simplifying the models of perception. The approach comprises a cost function that gauges the effectiveness of a coverage inspection path and a greedy heuristic exploration algorithm that enhances optimization quality by venturing deep into the viewpoint search spaces. The developed method significantly improves the quality of coverage inspection while maintaining path efficiency across different test geometries.
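A minimal PSO sketch with a path-planning-flavored objective follows: optimizing a single intermediate waypoint around a circular no-fly zone. This is not the CCPP method of [166]; the start/goal geometry, penalty weighting, and swarm parameters are illustrative assumptions:

```python
import numpy as np

def seg_point_dist(a, b, p):
    """Distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(a + t * ab - p)

def path_cost(wp, start, goal, center, radius):
    """Length of start->wp->goal plus a penalty if either leg clips the no-fly zone."""
    cost = np.linalg.norm(wp - start) + np.linalg.norm(goal - wp)
    clearance = min(seg_point_dist(start, wp, center),
                    seg_point_dist(wp, goal, center))
    if clearance < radius:
        cost += 1000.0 * (radius - clearance)
    return cost

def pso(n=30, iters=150, seed=0):
    rng = np.random.default_rng(seed)
    start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
    center, radius = np.array([5.0, 0.0]), 2.0     # circular threat area on the direct line
    x = rng.uniform([0, -5], [10, 5], size=(n, 2)) # particle positions (candidate waypoints)
    v = np.zeros((n, 2))
    pbest = x.copy()
    pbest_cost = np.array([path_cost(p, start, goal, center, radius) for p in x])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        # Inertia + cognitive + social terms: the standard PSO velocity update.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        cost = np.array([path_cost(p, start, goal, center, radius) for p in x])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = x[better], cost[better]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

wp, cost = pso()
print(cost < 12.0)  # detour around the zone; the direct 10.0 path is infeasible
```

Real UAV objectives add many more decision variables and constraints, but the same position/velocity update drives them; the penalty term here is a simple stand-in for constraint handling.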
In a recent study, Khan et al. [167] proposed a new approach to the capacitated vehicle routing problem (CVRP) using consensus theory for UAV formation control, aiming to achieve safe, collision-free, and smooth navigation of UAVs from their initial position to the site of a medical emergency. The idea is that a patient notifies the nearest hospital through a GSM band about their health condition, and a doctor drone is immediately dispatched to provide medical assistance. In this study, the vehicle routing problem was solved using CVRP and different evolutionary algorithms, including PSO, ACO, and GA, with a comparative analysis of different vehicle capacities and numbers. The results showed that the CVRP formulation outperformed the others in terms of runtime (0.06 s less) and transportation cost, using fewer vehicles with an almost doubled payload capacity. Though this research tackled computational time and transportation costs, the problem of excessive battery consumption under the increased payload still needs to be addressed.
Zhang et al. [168] introduced a hybrid FWPSALC mechanism for UAV path planning, demonstrating robustness in search operations and constraint handling with improved convergence speed. The study frames UAV global path planning as a multi-constraint optimization problem and proposes a collaborative approach that integrates an enhanced fireworks algorithm (FWA) and PSO. The objective function models the UAV flight path to minimize its length while adhering to stringent constraints posed by multiple threat areas. The α-constrained method, employing a level comparison strategy, is integrated into both FWA and PSO to augment their constraint-handling capabilities. The entire population is divided into fireworks and particles operating in parallel to increase diversity in the search process. A novel mutation strategy in the fireworks aims to prevent convergence to local optima, and an information-sharing stream is designed between the fireworks and individuals in FWA and the particles in PSO to improve the global optimization process. The simulation results demonstrate the efficiency of the adjusted hybrid algorithm in solution quality and constraint adherence.
One major limitation is the non-deterministic nature of evolutionary algorithms, which poses challenges in escaping local optima, potentially leading to suboptimal solutions. Additionally, creating a decision-making model that can be generalized to unknown environments remains a formidable challenge, even with the computational efficiency of evolutionary algorithms. The computational efficiency of metaheuristic approaches can vary widely. Some evolutionary algorithms and other non-deterministic methods can be computationally intensive, especially as the number of agents or the dimensionality of the problem increases. However, many are indeed suitable for resource-constrained devices due to their ability to provide good-enough solutions quickly, even if they are not the optimal solution [169].

5.3. RL-Based Approaches for UAV-Centric SA

Despite the extensive use of UAVs for SA, their effectiveness in providing accurate SA is hindered by environmental uncertainty. Applying a non-deterministic or centralized optimization algorithm is challenging in most environments because they are uncertain, dynamic, and only partially observable. Individual agents must make independent and potentially short-sighted decisions based on the information they receive. RL is suggested in [170] to deal with the shortcomings of supervised, deterministic, and non-deterministic algorithms: the agent (UAV) interacts with its surroundings and learns through trial and error.
RL emerges as a powerful paradigm within UAV-oriented SA, offering a dynamic approach where UAVs learn optimal decision-making strategies through interaction with their environment. RL excels in teaching UAVs to make sequential decisions by learning from feedback received through interactions with the environment. RL also contributes to dynamic object recognition and tracking within UAV SA. By continually learning from interactions, UAVs become adept at recognizing and tracking moving objects in their surroundings. The versatility of RL is highlighted in its ability to facilitate transfer learning across various UAV missions. By accumulating knowledge from one mission, RL-equipped UAVs can adapt and apply learned policies to different scenarios. This section explores how RL algorithms, such as Q-learning and DRL, enable UAVs to formulate optimal decision policies for tasks like navigation, path planning, and adaptive response to environmental changes.
RL is a robust method that lets the agent acquire the correct collection of actions with little or no prior understanding of the environment, and the predictive learning pattern allows the agent to cope with the changing environment easily [171]. Hence, RL is considered to be an effective method supporting adaptive navigation and autonomous motion planning. UAVs equipped with RL models can dynamically adjust their flight paths based on real-time feedback, environmental conditions, and mission objectives. Here we delve into the applications and advancements of RL models, showcasing their significance in training UAVs to navigate, perceive, and respond effectively in complex operational landscapes.
Ref. [172] suggested RL with a Deep-Sarsa mechanism for collision avoidance and path planning to steer multiple UAVs in a dynamic environment. A Q-learning algorithm was suggested by the authors of [173,174] to address UAVs’ autonomous path planning. Similarly, in [175], Q-learning is exploited to define routes while accounting for collision avoidance. However, since traditional RL can only deal with limited action spaces, the authors of that research employed only a discrete action space (e.g., move forward, up, left, or right), whereas operating in a real-world environment necessitates considering UAVs’ kinematics and DoF in actions.
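The discrete-action setting described above can be sketched with tabular Q-learning on a toy grid; this is not the formulation of [173,174,175], and the grid layout, rewards, and hyperparameters are illustrative assumptions:

```python
import random

def train_q(episodes=500, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning on a 4x4 grid with one obstacle; discrete actions U/D/L/R."""
    rng = random.Random(seed)
    n, goal, obstacle = 4, (3, 3), (1, 1)
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    Q = {((r, c), a): 0.0 for r in range(n) for c in range(n) for a in range(4)}

    def step(s, a):
        nr, nc = s[0] + actions[a][0], s[1] + actions[a][1]
        if not (0 <= nr < n and 0 <= nc < n) or (nr, nc) == obstacle:
            return s, -1.0, False           # bumping a wall/obstacle is penalized
        if (nr, nc) == goal:
            return (nr, nc), 10.0, True
        return (nr, nc), -0.1, False        # small step cost encourages short paths

    for _ in range(episodes):
        s, done = (0, 0), False
        for _ in range(100):
            # Epsilon-greedy exploration over the four discrete actions.
            a = rng.randrange(4) if rng.random() < eps else max(range(4), key=lambda a: Q[(s, a)])
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the best next-state action value.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in range(4)) - Q[(s, a)])
            s = s2
            if done:
                break
    return Q, step

Q, step = train_q()
# After training, a greedy rollout from the start should reach the goal.
s, reached = (0, 0), False
for _ in range(20):
    a = max(range(4), key=lambda a: Q[(s, a)])
    s, r, reached = step(s, a)
    if reached:
        break
print(reached)
```

The Q-table has one entry per state-action pair, which is exactly why this approach cannot scale to the continuous states and actions of real UAV kinematics; that limitation motivates the DRL methods discussed next.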
To address traditional RL limitations and deal with continuous action spaces, DRL has been introduced as a combination of DL and RL, benefiting from both methods and thus offering improved learning capacity for intelligent path planning. Several suggested DRL approaches, including DQN [176], double DQN [177], dueling DQN [178], and prioritized DQN [178], have been employed for UAV motion planning with satisfying outcomes. In [179], different DRL methods for vision-based control of UAVs were employed, and their performance was evaluated through the UAVs’ collision avoidance capability. Nonetheless, these value-based methods suffer from the same shortcoming as conventional RL approaches in that they can only handle discrete action spaces. Therefore, policy gradient techniques [180] are integrated into DRL to handle UAVs’ motion control with more flexible DoF. In this technique, parameterized deterministic or stochastic strategies with continuous actions are acquired by implementing gradient descent in the parameter space. In [181], the authors suggested a deterministic policy gradient (DPG) algorithm and showed that it could considerably surpass its stochastic counterparts in defining actions in high-dimensional spaces. In [182], DQN and DPG were combined through the actor–critic framework to introduce a deep deterministic policy gradient (DDPG) algorithm capable of directly mapping continuous states to continuous actions. Although DDPG may show high performance in some circumstances, problems arise when it is employed for a UAV’s time-varying motion control, as UAVs are vulnerable to rapid speed changes, while adding a speed limit module to DDPG causes instability in the training process. Thus, in [183,184,185], only the heading control channel is considered in navigating the UAV, but this is also associated with serious restrictions in practical applications.
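The policy-gradient idea, ascending the gradient of expected return with respect to policy parameters rather than deriving actions from value estimates, can be shown on a problem far simpler than UAV control: a softmax REINFORCE agent on a toy 3-armed bandit. The arm rewards, learning rates, and baseline scheme are illustrative assumptions, not any cited method:

```python
import numpy as np

def reinforce_bandit(iters=2000, lr=0.1, seed=0):
    """REINFORCE with a softmax (stochastic) policy on a toy 3-armed bandit."""
    rng = np.random.default_rng(seed)
    true_means = np.array([0.2, 0.5, 0.9])        # arm 2 pays best on average
    theta = np.zeros(3)                            # policy parameters (logits)
    baseline = 0.0                                 # running average reward
    for _ in range(iters):
        p = np.exp(theta - theta.max())
        p /= p.sum()                               # softmax action probabilities
        a = rng.choice(3, p=p)
        r = true_means[a] + 0.1 * rng.standard_normal()
        grad = -p                                  # grad of log pi(a): one_hot(a) - p
        grad[a] += 1.0
        theta += lr * (r - baseline) * grad        # ascend the expected return
        baseline += 0.05 * (r - baseline)          # baseline reduces gradient variance
    p = np.exp(theta - theta.max())
    return p / p.sum()

p = reinforce_bandit()
print(int(p.argmax()))  # the policy concentrates on the highest-paying arm
```

Replacing the three logits with a neural network over continuous states, and the softmax with a deterministic actor plus a learned critic, yields the DPG/DDPG family described above; the gradient-ascent-on-return principle is unchanged.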
Moreover, since the actor and critic are strictly correlated in the DDPG algorithm, any overestimation by the critic can result in policy instability and oscillation, which can lead to frequent collisions or failure. Another problem with DDPG is its sensitivity to exploration patterns and hyper-parameters, such that unjustified adjustments can destabilize the learning process, which is why trust region policy optimization (TRPO) [186] and proximal policy optimization (PPO) [187] have been suggested. Time variations and changing surroundings lead to action- and state-space expansion in motion control scenarios, increasing exploration complications. However, exploiting a model-based stochastic policy provides adaptive learning of the surroundings with less a priori information, where an action can be extracted from a set of equally reliable options.
The authors of [188] suggest using RL to learn and complete complex UAV trajectory tasks, specifically autonomous navigation in unknown environments. In [189], a DRL model, which integrates a deep network (DN) and RL, is proposed to reach the optimal trajectory policy that maximizes UAV energy efficiency. RL-based algorithms have become popular for SA-oriented motion planning of UAVs, and the research community has devised several notable algorithms, some of which are discussed below.
In [190], the authors propose a control framework that enhances the experimental glide range of a UAV. They suggest an RL derivative called modified model free dynamic programming (MMDP) with an adjusted reward function to address the intricate control challenges in a computationally feasible manner. The approach is unique since the RL-based dynamic programming algorithm was specially altered to set up the issue in the domains of continuous states and control spaces without taking into account the underlying UAVs’ dynamics. The effectiveness of the outcomes and performance characteristics showed that the suggested algorithm could dynamically adjust to the changing environment, which qualified it for use in UAV applications. The suggested methodology outperformed the traditional classical approaches in non-linear simulations conducted under various environmental conditions.
Chronis et al. [191] suggested a practical and secure UAV navigation system in urban areas utilizing a minimal set of sensors and an RL-based algorithm followed by an empirical assessment of the proposed method in different scenarios and simulated operation fields. This approach considers the unpredictable and intricate nature of the environment in UAV operation, using real-time data from onboard sensors. The implemented approach could facilitate SA for the UAV’s robust navigation considering the safety measures, particularly for transportation applications in rural areas.
A wireless power transfer (WPT)-enabled UAV carrying an energy receiver (ER), with energy transmitters (ETs) mounted on top of charging stations, is considered in [192] to find the fastest path and acquire SA over a vast area as quickly as possible. The ideal positioning of every charging station, together with compliance with UAV constraints such as the maximum flying range and energy limitation, is considered to guide the agent to converge on the best course of action and minimize the flying range. The locations of all data points are clustered using a K-means algorithm, with each cluster’s centroid corresponding to a charging station’s position. A deep Q-network (DQN) algorithm with a tailored reward function is then developed to find the best possible trajectory, ensure quick convergence, and accommodate uninterrupted mission progress by scheduling the optimal time for visiting charging docks or waypoints.
Li et al. [193] suggested a knowledge-based RL strategy that incorporates domain knowledge in forming the state space the UAVs should explore. The authors configured the algorithm to operate solely on observable data and considered the vehicle’s inertial law and signal attenuation principle for UAV operation, designing a corresponding reward function to facilitate effective space exploration and accelerate algorithm convergence. Given the comparative analysis in [193], the proposed model converges faster and accumulates reward better than the model-free RL algorithm.
Singh et al. [194] applied RL and DRL to empower UAVs to navigate and undertake missions effectively in disaster response situations, considering resource and network restrictions. This study uses the Kalman filtering approach for the UAVs’ location estimation, data fusion, and information sharing. Afterward, trajectory planning and velocity adjustment are handled by RL models, and the produced trajectory is optimized through a non-ML algorithm to ensure high-quality video capturing and streaming in the aerial monitoring of ground entities.
RL-based algorithms have shown promising applications in training UAVs to prioritize and track objects of interest, enhancing overall SA during mission execution. The next subsection outlines the current challenges and suggests avenues for future research, aiming to unlock the full potential of RL in advancing UAV-centric SA.

Challenges and Future Prospects

RL-based approaches for UAV-centric SA are at the forefront of innovation, offering promising solutions for adaptive and intelligent autonomous systems. They can predict trajectory segments online and proactively in the absence of prior information, which is useful for operations such as wildfire monitoring, target tracking, or SAR, where trajectory information is typically limited or unavailable [195]. RL thus enhances the adaptability of UAVs, enabling them to navigate through challenging terrains and unforeseen obstacles. However, these approaches also face significant challenges that need addressing to fully unleash their potential. A primary concern is the sample inefficiency of RL algorithms, which often require extensive interaction with the environment to learn optimal policies, making them computationally intensive and potentially impractical for real-time applications. Moreover, the high dimensionality of state and action spaces in UAV operations introduces complexity, hindering the convergence of RL algorithms. Safety and reliability are additional critical challenges, as ensuring safe exploration and robust performance in unpredictable and dynamic environments is crucial for practical deployment.
Looking ahead, the future prospects of RL-based approaches for UAV-centric SA are inherently tied to advancements in computational efficiency, algorithmic robustness, and safety assurance. To address the computational limitations of UAVs, implementing scalable algorithms that optimize computational efficiency by adapting to the available processing power is crucial. Lightweight computing solutions, such as edge computing devices and low-power processors, can significantly enhance the processing capabilities of UAVs without overwhelming their onboard systems. Additionally, the development of sample-efficient algorithms, possibly through transfer learning or meta-learning, could significantly reduce the computational burden. Addressing the curse of dimensionality through hierarchical or model-based RL could enhance learning convergence. Moreover, ensuring safe exploration through techniques like safe RL or risk-sensitive learning is paramount for operational viability. Advancements in machine learning and AI algorithms tailored for low-power environments enable real-time data analysis and decision-making within the limited computational resources of UAVs, further enhancing their overall functionality and efficiency in various applications.

6. Multi-Agent Cooperative SA

In the realm of autonomous systems, the concept of multi-agent cooperative SA has gained significant attention, representing a major shift towards distributed intelligence and collaborative decision-making. This approach focuses on the effective combination of data and cognitive processes across multiple agents, enabling a comprehensive understanding of dynamic environments. Multi-UAV cooperative SA has attracted extensive attention in relevant fields [196,197], which entails all UAVs’ consensus on the captured and amalgamated information [198]. Multi-agent cooperative SA involves various strategies and methodologies to ensure efficient and effective communication, decision-making, and coordination among agents, including but not limited to the following:
  • Consensus Algorithms: These algorithms ensure that multiple agents can reach an agreement on certain data points or decisions, despite having individual inputs and possibly conflicting information.
  • Distributed Data Fusion: This involves combining data from multiple sources (agents) to produce more consistent, accurate, and useful information than that provided by any individual data source.
  • Swarm Intelligence: Inspired by the behavior of natural entities like birds or insects, swarm intelligence focuses on the collective behavior of decentralized, self-organized systems, both natural and artificial.
  • Graph Theory-Based Methods: Utilizing graph models to represent the agents and their relationships, allowing for the analysis and optimization of the network structure for better communication and task allocation.
  • Game Theory: Employed to model and analyze interactions among multiple decision-makers (agents) where the outcome for each participant depends on the decisions of others, facilitating the understanding and design of protocols for cooperation and competition.
  • Multi-Agent Reinforcement Learning (MARL): Agents learn to make decisions through trial and error, receiving rewards or penalties, to achieve a common goal in a cooperative or competitive setting.
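As a concrete instance of the consensus-algorithm strategy listed above, the standard linear average-consensus iteration over a communication graph can be sketched as follows; this is a textbook protocol, not a method from the cited works, and the ring topology and altitude estimates are hypothetical:

```python
import numpy as np

def consensus(values, adjacency, eps=0.2, iters=100):
    """Linear average consensus: x_i += eps * sum_j a_ij (x_j - x_i) each round."""
    x = np.asarray(values, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
    for _ in range(iters):
        x = x - eps * L @ x                    # each agent uses only neighbor values
    return x

# Four UAVs on a ring graph, each starting with a different altitude estimate.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
est = consensus([100.0, 104.0, 98.0, 102.0], A)
print(np.allclose(est, 101.0, atol=1e-3))  # all agents agree on the network average
```

Each agent updates using only its neighbors’ values, yet the whole network converges to the global average (provided the graph is connected and eps is smaller than the reciprocal of the largest Laplacian eigenvalue), which is the essential property the consensus strategies above rely on.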
The essence of multi-agent cooperative SA lies in its ability to merge the individual perceptual capabilities of each agent, thus creating a coherent and enriched representation of the operational context. This collective perception, supported by sophisticated communication protocols and algorithms for consensus-building, ensures that decision-making is not just based on the localized knowledge of individual agents but is also strengthened by the collective intelligence of the network. The effectiveness of such systems is particularly evident in complex and uncertain scenarios, where the spatial and temporal diversity of information sources significantly enhances the robustness and adaptability of the decision-making process.
When dealing with multiple autonomous UAVs, controlling their individual behavior and ensuring they operate as a group is essential. A critical issue is accommodating SA in and from a multi-vehicle, multi-vision framework, where the vehicles’ synergism is also part of the problem. In this regard, multimodal knowledge-based information fusion [199] and distributed sensing estimation [200] have been investigated for UAVs’ role recognition in the fleet and cooperative situation assessment. This cooperative approach allows multiple UAVs to work together in a coordinated manner to share data and achieve common SA, enabling more efficient use of UAVs for complex missions that may be beyond the capabilities of a single UAV. Zhang et al. suggested a joint trajectory planning and air-to-ground cooperation framework using a double-objective approach called MSUDC (master-slave UAV data collection) to maximize the sensor data collection rate and minimize flight time [201]. However, the proposed approach suffers from high functional and computational costs, making experimental implementation hard to achieve, while the optimization process also remains unclear.
A multi-UAV mission planning and task allocation system is proposed in [202] to conduct sensor data acquisition collaboratively, tackling the UAVs’ payload and battery constraints and avoiding redundant collection of sensor data (allocated tasks). Although the system performs well in prioritizing sensor nodes and maximizing data collection, the operation environment in this study is assumed to be ideal, which is not the case in real-world scenarios. It is also not stated how communication between vehicles in the fleet is facilitated.
A WSN-driven multi-agent mission coordination system is developed in [18], where an improved computationally fast genetic algorithm (GA) is devised to perform fast and lightweight computation suitable for resource-constrained UAVs addressing continuous and real-time bushfire SA in the western Strzelecki Ranges of Australia. The proposed approach enables the UAVs to traverse the impacted area and identify the locations of fire outbreaks with over 50% probability of fire existence. This information is relayed to the ground control station, where emergency response teams can quickly mobilize and take appropriate action. By leveraging the UAVs’ agility and aerial perspective, the proposed approach significantly enhances the speed and accuracy of fire detection and monitoring efforts.
GA is a widely used optimization technique that can be applied to constrained and unconstrained optimization problems. However, it is important to note that GA does not guarantee finding the optimal path. One of the main challenges is the occurrence of local minima in narrow environments, which can compromise safety and hinder solution quality; lower-security and narrow-corridor scenarios therefore need to be avoided or handled carefully.
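The lightweight-GA idea can be illustrated with a minimal elitist GA selecting which sensor nodes to visit under a battery budget; this is a generic sketch, not the improved GA of [18] or the allocation system of [202], and the node values, battery costs, and capacity are hypothetical:

```python
import random

def ga_select(values, weights, cap, pop=30, gens=60, pm=0.1, seed=3):
    """Elitist GA choosing a subset of sensor nodes whose total cost fits the battery cap."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(ind):
        w = sum(wt for wt, b in zip(weights, ind) if b)
        v = sum(vl for vl, b in zip(values, ind) if b)
        return v if w <= cap else 0           # infeasible selections score zero

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    best = max(population, key=fitness)
    for _ in range(gens):
        def pick():                            # binary tournament selection
            a, b = rng.sample(population, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = [best[:]]                        # elitism: carry over the best individual
        while len(nxt) < pop:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n)          # one-point crossover
            child = [1 - b if rng.random() < pm else b  # bit-flip mutation
                     for b in p1[:cut] + p2[cut:]]
            nxt.append(child)
        population = nxt
        best = max(population + [best], key=fitness)
    return best, fitness(best)

# Hypothetical sensor-node data values and battery costs, with a battery budget of 7.
values, weights = [10, 5, 8, 7, 3], [4, 2, 3, 3, 1]
sel, val = ga_select(values, weights, cap=7)
print(val)
```

The bit-string encoding and cheap fitness evaluation are what make GAs attractive for resource-constrained UAVs, while the mutation rate and elitism illustrate the trade-off noted above: without sufficient diversity, the population can converge prematurely to a local optimum.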
A scalable task allocation and cooperation architecture for UAVs is proposed in [203] to enable the parallel operation of agents over a sensor-cluttered bushfire area. A fire-spread probability model was conducted, and a synchronized communication framework was provided to facilitate UAVs’ adaptive cooperation, maximizing coverage and optimizing flight time through resource sharing. A notable contribution of this study is the dynamic task allocation strategy employed, which assigns tasks based on real-time data and the changing environment. This approach ensures that UAVs are effectively utilized and coordinated to cover critical areas, thereby minimizing the number of deployed UAVs and saving cost and energy compared to non-cooperative operations. By capturing information from spots with a high probability of fire existence, this task allocation strategy paves the way for more proactive and efficient firefighting techniques.
The problem of cooperative SA using a multi-UAV role distribution model is considered in [204], where the system recognizes various uncertainties when reconstructing SA in a centralized manner. The information fusion and processing of the uncertainties are handled using the Dempster–Shafer evidence theory, which successfully recognized highly overlapping situations corresponding to the same event in practice. A multi-vehicle cooperative SA approach is proposed in [205] to decrease or eliminate the impact of information uncertainty through multi-sensor information fusion, where the system applies predefined conditions to combine and process information captured from different sources into a unified, coherent SA. In [206], a hierarchical probabilistic estimation model was developed for cooperative SA consensus captured by multiple UAVs, where only the information uncertainty measures and data fusion were investigated at an abstract level using a feedback mechanism. Although probabilistic estimation methods can address information uncertainty, they rely heavily on prior data to calculate a new probability, making them impractical in many situations [207].
Addressing the challenge of UAV swarm trajectory planning in a military context, ref. [208] introduces a mathematical model and employs an enhanced PSO approach for solution optimization. The paper first formulates an environmental model and the corresponding cost function for a single-aircraft trajectory, incorporating constraints within the UAV swarm, and then defines the objective function for UAV swarm trajectory optimization. Recognizing the susceptibility of PSO to local optima, the study improves the PSO algorithm by incorporating the Holonic structure from multi-agent theory, which enhances the optimization of the objective function. The paper concludes by outlining the algorithmic flow for UAV swarm trajectory planning based on the refined PSO algorithm. A comparative analysis with prevailing enhanced PSO algorithms demonstrates the superior performance of the proposed algorithm.
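For reference, the canonical PSO update that such enhancements build on can be sketched in a few lines. A toy quadratic cost stands in for the swarm trajectory cost function of [208], and all parameter values are illustrative.

```python
import random

def sphere(p):
    """Toy quadratic cost standing in for a trajectory cost function."""
    return sum(v * v for v in p)

def pso(cost, dim=2, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, bound=10.0, seed=0):
    random.seed(seed)
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                       # personal bests
    gbest = min(pbest, key=cost)[:]                   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(sphere)
```

The pull toward a single global best `gbest` is exactly what makes plain PSO prone to premature convergence on multimodal cost surfaces, motivating structural modifications such as the Holonic variant above.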
Shen et al. [209] proposed a model for dynamic multitarget path planning, which enables multiple UAVs to cooperatively detect ship emissions in ports in a dynamic environment. The proposed approach integrates a Tabu table into PSO to augment its optimization capabilities and establishes the initial detection route of each UAV based on a minimum ring technique. The study presents a synergistic algorithm for multiple UAVs that takes into account the path replanning time in a dynamic environment by determining and eliminating the minimum ring.
Phung and Ha [210] introduced a novel technique employing spherical vector-based particle swarm optimization (SPSO) to ensure safety, feasibility, and optimal paths for fixed-wing UAVs. The proposed approach outperformed classic PSO, QPSO, θ-PSO, and various other algorithms. Focused on addressing the challenge of formation control in distributed ad hoc networks for fixed-wing UAVs, the paper presented a route-based formation switching and obstacle avoidance method. The methodology began with the utilization of consistency theory to design a swarm formation control protocol for UAVs. Following this protocol, the self-organized UAV swarm determined formation waypoints based on current position information and adhered to predefined rules to achieve formation switching. Subsequently, the paper employed a combination of geometric methods and an intelligent path search algorithm to establish obstacle avoidance channels within the formation. To implement formation obstacle avoidance, the UAV swarm was divided into multiple smaller formations. Additionally, the paper addressed abnormal conditions during flight. The simulation results demonstrated the reliability and simplicity of the formation control technology based on a distributed ad hoc network. The approach proved to be easy to implement, versatile, and robust, particularly in handling communication and flight anomalies given diverse topologies.
Ali et al. [211] proposed a hybrid meta-heuristic algorithm to enhance the convergence and durability of a UAV swarm by combining the PSO algorithm with a multi-agent system (MAS) in a cluster-based approach. The authors designed a 3D model of the entire environment using graph theory to address the communication gap between clusters and the optimal path planning for each cluster. In this system, PSO identifies the best agents of a cluster, and the MAS assigns the best agent as the cluster leader, which then finds the optimal path for the cluster. Each cluster initially consists of randomly positioned agents that form a formation by implementing PSO with the MAS model, and coordination among agents inside the cluster is achieved through this formation. However, when two clusters combine into a swarm in a dynamic environment, MAS alone cannot bridge the communication gap between clusters; the authors therefore applied Vicsek-based MAS connectivity, a synchronization model, and active leader selection. The study also employs a B-spline curve over a simple waypoint-defined graph to create the flying formations of each cluster and of the swarm. A comparative analysis against NSGA-II, a well-known multi-objective optimization algorithm, verifies the efficiency of the proposed approach.
In a later work [212], Ali et al. used a hybrid of the max–min ant colony optimization (MMACO) algorithm and Cauchy mutation (CM) operators on multiple UAVs for collective data fusion and path planning, addressing cooperative path planning within a dynamic environment. The resulting bio-inspired algorithm reaches a near-optimal global solution in minimal time and eliminates the speed and local-optima limitations of classical ACO and MMACO. The CM operator enhances MMACO by comparing and examining the varying tendency of the fitness function at the local and global optimum positions, while also considering other multi-UAV path planning constraints, and the algorithm picks the shortest route while avoiding collisions. Simulations in a dynamic environment containing tornado and wind disturbances showed the algorithm to be effective and efficient compared with classical MMACO.
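A minimal max–min ACO sketch illustrates the pheromone-bounding idea on a toy waypoint graph. The graph, bounds, and parameters are invented for illustration, and the Cauchy mutation operator of [212] is omitted; only the clamping of pheromone to [tau_min, tau_max], which prevents premature stagnation, is shown.

```python
import random

# toy waypoint graph: adjacency {node: {neighbor: distance}} (illustrative)
G = {0: {1: 2.0, 2: 4.0}, 1: {2: 1.0, 3: 3.0}, 2: {4: 2.0},
     3: {4: 1.0, 5: 4.0}, 4: {5: 2.0}, 5: {}}
TAU_MIN, TAU_MAX = 0.1, 5.0           # max-min pheromone bounds

def mmaco(src=0, dst=5, n_ants=20, iters=50, rho=0.3, seed=0):
    random.seed(seed)
    tau = {(u, v): 1.0 for u in G for v in G[u]}
    best_path, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            node, path, length = src, [src], 0.0
            while node != dst:
                nbrs = [v for v in G[node] if v not in path]
                if not nbrs:                       # dead end: discard this ant
                    length = float("inf")
                    break
                weights = [tau[(node, v)] / G[node][v] for v in nbrs]  # pheromone x heuristic
                nxt = random.choices(nbrs, weights=weights)[0]
                length += G[node][nxt]
                path.append(nxt)
                node = nxt
            if length < best_len:
                best_path, best_len = path, length
        for e in tau:                              # evaporation, clamped below
            tau[e] = max(TAU_MIN, (1 - rho) * tau[e])
        for u, v in zip(best_path, best_path[1:]): # bounded deposit on best path
            tau[(u, v)] = min(TAU_MAX, tau[(u, v)] + 1.0 / best_len)
    return best_path, best_len

path, length = mmaco()
```

On this graph the shortest route from node 0 to node 5 runs through nodes 1, 2, and 4 with total length 7.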
Xu et al. [213] combined PSO with the GWO algorithm to enable cooperative path planning of multiple UAVs under the challenging conditions of complex confrontational environments posed by ground radar, missiles, and terrain threats. A threat model is first designed based on the real situation, followed by a multi-constraint objective optimization model that combines fuel consumption and threat criteria under time and space constraints. The GWO algorithm is then applied to solve the optimization model, taking into account the features of multi-UAV cooperative path planning. This approach is efficient in decay-factor updating and individual position updating, resulting in lower path costs and faster convergence.
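The core GWO position update, including the linearly decaying factor a that governs the exploration-to-exploitation transition, can be sketched as follows. The toy quadratic cost stands in for the threat/fuel objective of [213], and all parameter values are illustrative assumptions.

```python
import random

def gwo(cost, dim=2, n=20, iters=100, bound=10.0, seed=0):
    """Minimal grey wolf optimizer: the pack follows its three best wolves
    (alpha, beta, delta) while the decay factor `a` shrinks from 2 to 0."""
    random.seed(seed)
    wolves = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        wolves.sort(key=cost)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iters)                  # linearly decaying factor
        new = []
        for w in wolves:
            pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * random.random() - 1)
                    C = 2 * random.random()
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                pos.append(x / 3.0)                # average move toward the leaders
            new.append(pos)
        wolves = new
    return min(wolves, key=cost)

best = gwo(lambda p: sum(v * v for v in p))        # toy quadratic objective
```

While a stays above 1 the coefficient A can exceed 1 in magnitude, driving wolves away from the leaders (exploration); as a decays toward 0, the pack contracts around the best solutions (exploitation), which is the decay-factor behavior the cited work refines.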
In [214], Yu et al. proposed a competitive learning pigeon-inspired optimization (CLPIO) algorithm to address the cooperative dynamic combat problem for UAVs, integrating distributed swarm antagonistic motion and centralized attack target allocation, with a threshold trigger strategy to switch seamlessly between the two sub-tasks. A dynamic game approach incorporating a hawk grouping mechanism and situation assessment between sub-groups is used to develop a feasible and optimal combat scheme that guides the optimal attack strategy. Analysis and numerical simulation show that the CLPIO algorithm converges in theory and reasonably solves the mixed Nash equilibrium problem in the given combat scenario.
In a later work by Yu et al. [215], a realistic scenario involving unpredictable and random changes in the UAV target trajectory is studied focusing on real-time target tracking that is formulated as a distributed model predictive control (DMPC) problem. The primary objective is to optimize tracking performance while adhering to various constraints. To achieve this, the study introduces an innovative approach by combining the adaptive differential evolution (ADE) algorithm with Nash optimization, resulting in the proposed Nash-combined ADE method. Specifically, the ADE algorithm is employed to adaptively adjust the predicted trajectory of each UAV agent and the Nash optimization is utilized to efficiently solve the DMPC formulation for global optimization which aims to enhance tracking accuracy while reducing computational complexity. The simulation results show that the proposed method performs well in terms of tracking accuracy, UAV collision avoidance, and tracking stability for realistic target trajectories.
A multi-agent Q-learning approach is used in [216] to attain SA-oriented motion planning for a fleet of UAVs in earthquake SAR missions. The system introduced in this study supports a multi-hop constant connection to the central station and enables the UAVs to cover significantly larger affected areas (by 43%) than current extended variants of the Monte Carlo algorithm.
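A tabular sketch conveys the flavor of such fleet-level Q-learning: two UAVs independently learn greedy routes to assigned coverage cells in a toy one-dimensional corridor. The corridor size, goals, rewards, and hyperparameters are illustrative assumptions, and the multi-hop connectivity constraints of [216] are omitted.

```python
import random

N_CELLS = 7          # toy 1D corridor of coverage cells (illustrative)
ACTIONS = (-1, +1)   # move left / move right

def train_uav(goal, start, episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    """Independent tabular Q-learning for one UAV heading to its coverage cell."""
    Q = [[0.0, 0.0] for _ in range(N_CELLS)]
    for _ in range(episodes):
        pos = start
        for _ in range(20):                        # step budget per episode
            a = random.randrange(2) if random.random() < eps else int(Q[pos][1] >= Q[pos][0])
            nxt = min(max(pos + ACTIONS[a], 0), N_CELLS - 1)
            r = 1.0 if nxt == goal else -0.05      # reward on reaching the assigned cell
            Q[pos][a] += alpha * (r + gamma * max(Q[nxt]) - Q[pos][a])
            pos = nxt
            if pos == goal:
                break
    return Q

def greedy_path(Q, start, goal, limit=20):
    """Follow the learned policy greedily from start to goal."""
    pos, path = start, [start]
    while pos != goal and len(path) < limit:
        a = int(Q[pos][1] >= Q[pos][0])
        pos = min(max(pos + ACTIONS[a], 0), N_CELLS - 1)
        path.append(pos)
    return path

random.seed(0)
# two UAVs with different (goal, start) assignments
paths = [greedy_path(train_uav(g, s), s, g) for g, s in ((2, 0), (5, 6))]
```

In the cited work the agents additionally share a team objective and a connectivity constraint; here each agent's learning is fully independent, which is the simplest multi-agent baseline.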
In [217], the authors tackled the challenge of UAV motion planning modeled as an SA-oriented MDP and applied DRL to make decisions based on the current state, helping UAVs gather and process information more efficiently.
Given that environmental models are often only partially complete and involve a high level of uncertainty, the learning phase of RL can be time-consuming, which disrupts the real-time requirements of time-critical missions. To address this concern, ref. [218] used a distinct approach of information sharing through UAV cooperation toward a shared goal of global SA and multi-target detection. The authors delved into diverse approaches for task allocation and applied distributed multi-agent RL for UAVs to acquire a navigation policy that enables them to navigate unfamiliar terrain while optimizing target detection accuracy. The findings demonstrated that collaboration at various levels improved the learning process for achieving the team objectives.
Qu et al. [219] proposed a collision-aware RL-based approach to attain SA and location estimation for coordinating a heterogeneous UAV fleet in a disaster response management (DRM) scenario. This study accommodates the cooperation among the vehicles through a flying ad hoc network (FANET) disseminated by the control station. In a later work [220], the authors extended their research, considering additional environmental factors, including obstacles and wind disturbance. They introduced an environmental-aware multi-UAV cooperation approach and, similar to [219], used SA-oriented RL for energy-efficient navigation and localization in DRM scenarios.
In a similar approach, Prakash et al. [221] proposed an RL-based model called multidimensional perception and energy awareness optimized link state routing (MPEAOLSR) for multiple UAVs in FANET, which uses a wireless LAN-specific protocol. This technique reduces battery usage by minimizing communication delay and bandwidth consumption, which makes the MPEAOLSR protocol particularly well suited to large, dense operating areas.
A multi-agent RL-based algorithm is applied by Chen and colleagues [222] for UAV path planning, where the training is centralized, and vehicle coordination takes place in a decentralized manner. The algorithm incorporates a hidden state of the RNN to integrate historical observation information, and a joint reward function is designed to facilitate learning optimal policies by the UAV under various constraints. Integrating such a joint reward function can effectively address the challenges posed by incomplete information and partial observations, improving overall RL performance in an unknown environment.
Qu et al. [223] proposed RL-GWO for UAVs' cooperative SA-oriented path planning, where the RL controls each UAV's operation switching in light of collective team performance, and the GWO generates an optimal path for individual vehicles. The proposed approach effectively balances exploration and exploitation, and the GWO uses a cubic B-spline to generate geometrically adjusted flight paths, outperforming the use of RL or GWO alone for optimal path planning and adjustment.
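The cubic B-spline smoothing step mentioned above can be sketched with a uniform cubic B-spline evaluated directly from its basis polynomials. The waypoints below are arbitrary examples, not paths produced by RL-GWO.

```python
def cubic_bspline(ctrl, samples_per_seg=10):
    """Sample a uniform cubic B-spline guided by the control polygon `ctrl`
    (a list of (x, y) waypoints), returning a smooth polyline."""
    pts = []
    for i in range(len(ctrl) - 3):              # one segment per 4 consecutive controls
        p0, p1, p2, p3 = ctrl[i:i + 4]
        for s in range(samples_per_seg + 1):
            t = s / samples_per_seg
            # uniform cubic B-spline basis functions (they sum to 1 for any t)
            b0 = (1 - t) ** 3 / 6.0
            b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
            b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
            b3 = t ** 3 / 6.0
            pts.append((b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0],
                        b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]))
    return pts

# arbitrary example waypoints (not from [223])
smooth = cubic_bspline([(0, 0), (1, 2), (2, 2), (3, 0)])
```

Because the basis weights are non-negative and sum to one, every sampled point stays inside the convex hull of its four control waypoints, which is what makes B-spline paths attractive for bounding a UAV's deviation from its waypoint corridor.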

Challenges and Future Directions

Addressing the challenges and steering the future trajectory of multi-agent cooperative SA necessitates a concentrated focus on both theoretical advancements and practical implementations. A significant challenge lies in enhancing the scalability of these systems, requiring the development of efficient algorithms for real-time data fusion and state estimation that can handle the exponential increase in data volume and agent interactions as the network grows. Practically, this involves optimizing computational resources and ensuring low-latency communication to maintain system coherence and responsiveness.
Robustness against dynamic environmental changes and agent failures is another critical concern. Adaptive control strategies and fault-tolerant designs are essential, as highlighted in recent studies on adaptive mesh refinement and dynamic task reallocation algorithms. These techniques enable the dynamic reconfiguration of agent networks and task redistribution without compromising the system’s overall mission. Implementing redundancy and diversity in sensor types and communication links can further enhance system robustness.
From an implementation standpoint, integrating heterogeneous agents with varying sensory and actuation capabilities introduces complexities in ensuring interoperability and data consistency. Developing standardized communication protocols and data formats, as recommended by the Internet Engineering Task Force (IETF) and IEEE standards, can mitigate these issues. Advanced middleware solutions, such as Robot Operating System (ROS) 2.0, facilitate a plug-and-play architecture for multi-agent systems, ensuring seamless integration and operation.
Security and privacy remain paramount as UAV usage continues to grow. To protect against unauthorized access, robust encryption protocols for data transmission and storage, secure communication channels, and regular security updates are essential. The use of blockchain technology for secure and transparent data sharing is gaining traction. Privacy concerns can be mitigated by establishing clear regulations and guidelines, as well as employing geofencing technologies to restrict UAV flights over sensitive areas. These measures ensure that UAV operations are secure and compliant with privacy laws. Incorporating these security measures, alongside anomaly detection systems, safeguards against cyber threats and ensures data integrity.
On the frontier of artificial intelligence, practical applications of machine learning in multi-agent systems require addressing challenges related to distributed learning. Agents must collaboratively learn and update their models without centralized control. Implementing distributed reinforcement learning, federated learning architectures, and consensus-based model aggregation methods can empower agents to autonomously adapt and optimize their behaviors in complex, uncertain environments. Techniques like gossip algorithms and blockchain for decentralized consensus are promising approaches highlighted in recent research. Gossip algorithms facilitate efficient information dissemination and fault tolerance in large-scale networks, making them suitable for dynamic and distributed UAV systems [224,225]. Blockchain-based decentralized consensus mechanisms offer secure and tamper-proof data sharing, enhancing the reliability and integrity of multi-agent coordination [226,227].
The dependency on a central node for algorithm execution introduces further disadvantages, such as limitations in handling real-time scenarios, increased communication costs, and a notable level of dependency between UAVs and the central station. These drawbacks underscore the need for continued research to address these challenges and enhance the real-time and global optimization capabilities of learning algorithms in dynamic environments [228].
Lastly, fostering cooperative multi-agent learning, possibly through emergent communication or centralized training with decentralized execution, could scale RL applications to complex, real-world scenarios. Integrating multi-agent collaboration within the RL framework without compromising scalability and learning stability poses a substantial challenge; techniques such as communication-efficient learning protocols and multi-agent deep deterministic policy gradients (MADDPG) are being explored to address it. Overcoming these challenges will be pivotal in transitioning RL-based approaches from theoretical constructs to integral components of autonomous UAV-centric operations.

7. Conclusions

7.1. Summary of Key Findings and Ongoing Challenges

This comprehensive review delineates the multifaceted landscape of UAV-centric SA, encapsulating its diverse applications, inherent limitations, and the algorithmic challenges that pervade the field. The key findings underscore the pivotal role of algorithmic and strategic insights in UAV-centric SA, emphasizing the importance of sensor fusion, robust communication protocols, and sophisticated data processing algorithms in enhancing the SA of UAVs. Despite the remarkable progress, ongoing challenges persist, notably in ensuring real-time data processing, maintaining robustness in dynamic and adversarial environments, and achieving scalable and efficient multi-agent coordination. Furthermore, the integration of advanced AI and ML techniques, although promising, introduces complexities related to computational overheads, interpretability, and the need for extensive training data, underscoring a trade-off between sophistication and practical viability.
To address these challenges, future research must focus on developing more efficient algorithms that can handle the vast data and computational demands without compromising performance. Enhancing robustness through adaptive and fault-tolerant systems is crucial for maintaining operational effectiveness in unpredictable environments [229,230,231]. Additionally, the creation of scalable multi-agent systems that can efficiently manage the coordination and communication among numerous UAVs will be pivotal in advancing UAV-centric SA.

7.2. Potential Applications on the Horizon

Focusing on the future of UAV-centric SA, the emphasis shifts towards enhancing the SA capabilities of UAVs themselves, promising a revolution in their operational effectiveness and autonomy. Enhanced SA is poised to transform UAVs into highly autonomous agents capable of complex decision-making and adaptive responses in real-time. One of the prominent applications on the horizon is autonomous swarm coordination, where a fleet of UAVs with advanced SA can collaboratively undertake large-scale operations, from agricultural monitoring to strategic surveillance, without human intervention.
Furthermore, the integration of advanced SA in UAVs is set to significantly advance their capability in dynamic obstacle avoidance, enabling them to navigate through cluttered urban environments or challenging terrains reliably. This leap in SA will also empower UAVs to conduct autonomous predictive maintenance, identifying and addressing potential system failures before they occur, thereby ensuring uninterrupted operations.
In the realm of communication, UAVs with sophisticated SA are anticipated to autonomously manage and optimize network resources, facilitating seamless air-to-ground and air-to-air communication essential for coordinated tasks and data sharing. This enhanced communication SA will be crucial in disaster response scenarios, where UAVs can autonomously establish and manage communication networks in areas with damaged infrastructure.
Lastly, the evolution of cognitive SA in UAVs, integrating advanced perception, learning, and decision-making capabilities, will mark a significant milestone. This will enable UAVs to understand and predict the intentions of other entities in their environment, be it other UAVs, humans, or moving objects, paving the way for harmonious and efficient coexistence and collaboration.

7.3. Future Trends and Anticipated Technological Developments

Anticipated technological advancements are poised to address existing challenges and redefine the capabilities of UAV-centric SA. Breakthroughs in energy-efficient AI chips and lightweight materials promise extended UAV endurance and agility. Moreover, advancements in AI, specifically in unsupervised and reinforcement learning, are anticipated to enhance the autonomy, adaptability, and decision-making process of UAVs. The convergence of these technologies, coupled with evolving regulatory frameworks such as the FAA’s Part 107 regulations in the United States [232], EASA’s regulations in the European Union [233], and ICAO’s global standards [234], and societal acceptance, are set to propel the field of UAV-centric SA into a new era of innovation and application.
Emerging techniques such as distributed reinforcement learning and federated learning can enable collaborative learning among multiple UAVs, reducing the reliance on centralized control. The implementation of these techniques can lead to more resilient and adaptive UAV systems capable of operating efficiently in complex and dynamic environments. Additionally, the development of robust encryption protocols and secure communication channels will be crucial in safeguarding data integrity and privacy, ensuring that UAV operations remain secure and compliant with regulatory standards.
In conclusion, while significant challenges remain, the future of UAV-centric SA is promising, with technological advancements paving the way for more autonomous, efficient, and resilient UAV systems. By addressing the current limitations and leveraging emerging technologies, UAVs can achieve higher levels of situational awareness, transforming their role in various applications and driving innovation across multiple industries.

Author Contributions

In this research article, the authors contributed to various stages of the research, writing, and editing process. A.Y. was involved in conceptualization and material preparation, writing, and editing the paper. Y.K. conducted research and investigations on different types of aerial vehicles and their specifications. B.C. and M.O.A.K. participated in data collection and analysis, interpreting the results, and revision of the manuscript for publication. F.A. contributed to editing, fine-tuning the draft, and obtaining necessary permissions. S.M., the first author, managed project administration, wrote the initial draft, and contributed across all research stages. Each author’s expertise and unique perspective enhanced the overall quality of the research project. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

UAV: Unmanned Aerial Vehicles
GS: ground station
UAS: Unmanned Aerial Systems
CCC: Command-and-Control Center
SA: Situation Awareness
LiDAR: light detection and ranging
SAR: Search and Rescue
Gm-APD: Geiger-mode Avalanche Photo Diode
ML: Machine Learning
YOLO: You Only Look Once
CNN: Convolutional Neural Network
SPP: Spatial Pyramid Pooling
DoF: Degrees of Freedom
BIM: Building Information Model
UH: Unmanned Helicopters
CMaB: Computation, Memory, and Battery
RL: Reinforcement Learning
SE: Squeeze-and-Excitation
DL: deep learning
RSSI: received signal strength indication
VTOL: vertical take-off and landing
FFNN: Feed-Forward Neural Network
LiPo: lithium-ion, lithium polymer
PSPNet: Pyramid Scene Parsing Network
LiHV: Lithium Polymer High Voltage
SIDE: Single Image Depth Estimation
WSN: Wireless Sensor Networks
mAP: mean Average Precision
WPT: Wireless Power Transfer
IB-RRT*: Informed Biased RRT*
LSTM: long short-term memory
PAL: Person-Action-Locator
DNs: detection nodes
MILP: Mixed-Integer Linear Programming
DE: differential evolution
APF: Artificial Potential Field
SOMAC: self-organized multi-agent competition
CDR: Conflict Detection and Resolution
MSUDC: master-slave UAV data collection
RRT: Rapidly Exploring Random Trees
GA: Genetic Algorithm
RDTs: Rapidly Exploring Dense Trees
IMU: inertial measurement units
PRM: Probabilistic Roadmap
DSM: digital surface model
B-RRT*: Bidirectional RRT*
MDP: Markov Decision Process
ACO: Ant Colony Optimization
MR: mixed reality
IC: inscribed circle
PSO: Particle Swarm Optimization
FWA: Fireworks Algorithm
GWO: Grey Wolf Optimization
SPSO: Spherical Vector-Based PSO
FOA: Fruit Fly Optimization
QPSO: Quantum-Based PSO
CVRP: Capacitated Vehicle Routing Problem
SGWO: Simplified Grey Wolf Optimizer
CCPP: co-optimal coverage path planning
MSOS: modified symbiotic organism search
PDE: Partial Differential Equation
MDLS: multi-dimensional dynamic list programming
MAS: Multi-Agent System
CLPIO: Competitive Learning Pigeon-Inspired Optimization
DMPC: Distributed Model Predictive Control
ADE: Adaptive Differential Evolution
PPO: Proximal Policy Optimization
DPG: Deterministic Policy Gradient
DRL: Deep Reinforcement Learning
DDPG: Deep Deterministic Policy Gradient
DRM: disaster response management
MMDP: Modified Model-Free Dynamic Programming
DQN: Deep Q-Network algorithm
FANET: flying ad hoc network
CCS: central command system
MPEAOLSR: Multidimensional Perception and Energy Awareness Optimized Link State Routing
IMATISSE: Inundation Monitoring and Alarm Technology in a System of Systems

References

  1. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  2. Azid, S.I.; Ali, S.A.; Kumar, M.; Cirrincione, M.; Fagiolini, A. Precise Trajectory Tracking of Multi-Rotor UAVs Using Wind Disturbance Rejection Approach. IEEE Access 2023, 11, 91796–91806. [Google Scholar] [CrossRef]
  3. Xu, Z.; Zhang, L.; Ma, X.; Liu, Y.; Yang, L.; Yang, F. An anti-disturbance resilience enhanced algorithm for UAV 3D route planning. Sensors 2022, 22, 2151. [Google Scholar] [CrossRef] [PubMed]
  4. “Airforce-Technology”. Available online: https://www.airforce-technology.com/projects/pc-1-multipurpose-quadcopter/?cf-view (accessed on 16 November 2023).
  5. Available online: https://www.dpreview.com/articles/8351868841/landscape-photography-with-a-drone-disadvantages-and-limitations (accessed on 20 November 2023).
  6. “Aeroexpo.Online”. Available online: https://aeroexpo.online/prod/prioria-robotics/product-180931-26810.html (accessed on 28 November 2023).
  7. Lyu, M.; Zhao, Y.; Huang, C.; Huang, H. Unmanned Aerial Vehicles for Search and Rescue: A Survey. Remote Sens. 2023, 15, 3266. [Google Scholar] [CrossRef]
  8. Sampedro, C.; Rodriguez-Ramos, A.; Bavle, H.; Carrio, A.; de la Puente, P.; Campoy, P. A fully-autonomous aerial robot for search and rescue applications in indoor environments using learning-based techniques. J. Intell. Robot. Syst. 2019, 95, 601–627. [Google Scholar] [CrossRef]
  9. Alsamhi, S.H.; Shvetsov, A.V.; Kumar, S.; Shvetsova, S.V.; Alhartomi, M.A.; Hawbani, A.; Rajput, N.S.; Srivastava, S.; Saif, A.; Nyangaresi, V.O. UAV computing-assisted search and rescue mission framework for disaster and harsh environment mitigation. Drones 2022, 6, 154. [Google Scholar] [CrossRef]
  10. Nikhil, N.; Shreyas, S.; Vyshnavi, G.; Yadav, S. Unmanned aerial vehicles (UAV) in disaster management applications. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 140–148. [Google Scholar]
  11. Erdelj, M.; Natalizio, E.; Chowdhury, K.R.; Akyildiz, I.F. Help from the sky: Leveraging UAVs for disaster management. IEEE Pervasive Comput. 2017, 16, 24–32. [Google Scholar] [CrossRef]
  12. Ajith, V.; Jolly, K. Unmanned aerial systems in search and rescue applications with their path planning: A review. J. Phys. Conf. Ser. 2021, 2115, 012020. [Google Scholar] [CrossRef]
  13. Josipovic, N.; Viergutz, K. Smart Solutions for Municipal Flood Management: Overview of Literature, Trends, and Applications in German Cities. Smart Cities 2023, 6, 944–964. [Google Scholar] [CrossRef]
  14. Wienhold, K.J.; Li, D.; Li, W.; Fang, Z.N. Flood Inundation and Depth Mapping Using Unmanned Aerial Vehicles Combined with High-Resolution Multispectral Imagery. Hydrology 2023, 10, 158. [Google Scholar] [CrossRef]
  15. Karamuz, E.; Romanowicz, R.J.; Doroszkiewicz, J. The use of unmanned aerial vehicles in flood hazard assessment. J. Flood Risk Manag. 2020, 13, e12622. [Google Scholar] [CrossRef]
  16. de Haag, M.U.; Martens, M.; Kotinkar, K.; Dommaschk, J. Flight Test Setup for Cooperative Swarm Navigation in Challenging Environments using UWB, GNSS, and Inertial Fusion. In Proceedings of the 2023 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 24–27 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 286–294. [Google Scholar]
  17. Erdelj, M.; Natalizio, E. UAV-assisted disaster management: Applications and open issues. In Proceedings of the 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, USA, 15–18 February 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–5. [Google Scholar]
  18. Zadeh, R.B.; Zaslavsky, A.; Loke, S.W.; MahmoudZadeh, S. Multi-UAVs for Bushfire Situational Awareness: A Comparison of Environment Traversal Algorithms. In Proceedings of the 2021 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics), Melbourne, VIC, Australia, 6–8 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 107–114. [Google Scholar]
  19. Achermann, F.; Lawrance, N.R.; Ranftl, R.; Dosovitskiy, A.; Chung, J.J.; Siegwart, R. Learning to predict the wind for safe aerial vehicle planning. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2311–2317. [Google Scholar]
  20. Abdalla, A.S.; Marojevic, V. Machine learning-assisted UAV operations with the UTM: Requirements, challenges, and solutions. In Proceedings of the 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall), Virtual, 18 November–16 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–5. [Google Scholar]
  21. Huang, S.; Teo, R.S.H.; Tan, K.K. Collision avoidance of multi unmanned aerial vehicles: A review. Annu. Rev. Control 2019, 48, 147–164. [Google Scholar] [CrossRef]
  22. Yasin, J.N.; Mohamed, S.A.; Haghbayan, M.-H.; Heikkonen, J.; Tenhunen, H.; Plosila, J. Unmanned aerial vehicles (UAVs): Collision avoidance systems and approaches. IEEE Access 2020, 8, 105139–105155. [Google Scholar] [CrossRef]
  23. Kim, Y.-G.; Chang, W.; Kim, K.; Oh, T. Development of an Autonomous Situational Awareness Software for Autonomous Unmanned Aerial Vehicles. J. Aerosp. Syst. Eng. 2021, 15, 36–44. [Google Scholar]
  24. Jeon, M.-J.; Park, H.-K.; Jagvaral, B.; Yoon, H.-S.; Kim, Y.-G.; Park, Y.-T. Relationship between UAVs and ambient objects with threat situational awareness through grid map-based ontology reasoning. Int. J. Comput. Appl. 2022, 44, 101–116. [Google Scholar] [CrossRef]
  25. Cavaliere, D.; Loia, V.; Saggese, A.; Senatore, S.; Vento, M. Semantically enhanced UAVs to increase the aerial scene understanding. IEEE Trans. Syst. Man Cybern. Syst. 2017, 49, 555–567. [Google Scholar] [CrossRef]
26. Rios, J.; Smith, D.; Smith, I. UAS Reports (UREPs): Enabling Exchange of Observation Data between UAS Operations; NASA: Washington, DC, USA, 2017. [Google Scholar]
  27. MahmoudZadeh, S.; Yazdani, A. Distributed task allocation and mission planning of AUVs for persistent underwater ecological monitoring and preservation. Ocean Eng. 2023, 290, 116216. [Google Scholar] [CrossRef]
  28. Zidi, S.; Moulahi, T.; Alaya, B. Fault detection in wireless sensor networks through SVM classifier. IEEE Sens. J. 2017, 18, 340–347. [Google Scholar] [CrossRef]
  29. Abbasi, A.; MahmoudZadeh, S.; Yazdani, A. A cooperative dynamic task assignment framework for COTSBot AUVs. IEEE Trans. Autom. Sci. Eng. 2020, 19, 1163–1179. [Google Scholar] [CrossRef]
  30. MahmoudZadeh, S.; Yazdani, A. A cooperative Fault-Tolerant mission planner system for unmanned surface vehicles in ocean sensor network monitoring and inspection. IEEE Trans. Veh. Technol. 2022, 72, 1101–1115. [Google Scholar] [CrossRef]
  31. Gao, Y.; Li, D. Unmanned aerial vehicle swarm distributed cooperation method based on situation awareness consensus and its information processing mechanism. Knowl.-Based Syst. 2020, 188, 105034. [Google Scholar] [CrossRef]
  32. Sabour, M.; Jafary, P.; Nematiyan, S. Applications and classifications of unmanned aerial vehicles: A literature review with focus on multi-rotors. Aeronaut. J. 2023, 127, 466–490. [Google Scholar] [CrossRef]
  33. “Uavos”. Available online: https://www.uavos.com/uavos-fixed-wing-uav-sitaria-completed-flight-tests/ (accessed on 28 December 2023).
  34. “Goodshoppin”. Available online: https://goodshoppin.top/products.aspx?cname=unmanned+helicopter+drone&cid=226 (accessed on 21 January 2024).
  35. “Satuav”. Available online: https://www.satuav.com/vtol-fixed-wing-drone/long-range-reconnaissance-drone-with-fixed.html (accessed on 21 January 2024).
  36. “Trendhunter”. Available online: https://www.trendhunter.com/trends/allterrain-drone (accessed on 21 January 2024).
  37. “Avinc”. Available online: https://avinc.com/uas/vapor (accessed on 3 February 2024).
  38. Tiseira, A.; Novella, R.; Garcia-Cuevas, L.; Lopez-Juarez, M. Concept design and energy balance optimization of a hydrogen fuel cell helicopter for unmanned aerial vehicle and aerotaxi applications. Energy Convers. Manag. 2023, 288, 117101. [Google Scholar] [CrossRef]
  39. Zhang, M.; Li, W.; Wang, M.; Li, S.; Li, B. Helicopter–UAVs search and rescue task allocation considering UAVs operating environment and performance. Comput. Ind. Eng. 2022, 167, 107994. [Google Scholar] [CrossRef]
  40. Suo, W.; Wang, M.; Zhang, D.; Qu, Z.; Yu, L. Formation control technology of fixed-wing UAV swarm based on distributed ad hoc network. Appl. Sci. 2022, 12, 535. [Google Scholar] [CrossRef]
  41. Aiello, G.; Valavanis, K.P.; Rizzo, A. Fixed-wing UAV energy efficient 3D path planning in cluttered environments. J. Intell. Robot. Syst. 2022, 105, 60. [Google Scholar] [CrossRef]
  42. Liu, X.; Zhao, D.; Oo, N.L. Comparison studies on aerodynamic performances of a rotating propeller for small-size UAVs. Aerosp. Sci. Technol. 2023, 133, 108148. [Google Scholar] [CrossRef]
  43. Phang, S.K.; Ahmed, S.Z.; Hamid, M.R.A. Design, dynamics modelling and control of a H-shape multi-rotor system for indoor navigation. In Proceedings of the 2019 1st International Conference on Unmanned Vehicle Systems-Oman (UVS), Muscat, Oman, 5–7 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
  44. “Dslrpros”. Available online: https://www.dslrpros.com/dslrpros-blog/top-10-drones-for-the-construction-industry-in-2023-full-guide-and-reviews/ (accessed on 3 February 2024).
  45. Townsend, A.; Jiya, I.N.; Martinson, C.; Bessarabov, D.; Gouws, R. A comprehensive review of energy sources for unmanned aerial vehicles, their shortfalls and opportunities for improvements. Heliyon 2020, 6, e05285. [Google Scholar] [CrossRef]
  46. Wu, J.; Wang, H.; Huang, Y.; Su, Z.; Zhang, M. Energy management strategy for solar-powered UAV long-endurance target tracking. IEEE Trans. Aerosp. Electron. Syst. 2018, 55, 1878–1891. [Google Scholar] [CrossRef]
  47. Alwateer, M.; Loke, S.W.; Fernando, N. Enabling drone services: Drone crowdsourcing and drone scripting. IEEE Access 2019, 7, 110035–110049. [Google Scholar] [CrossRef]
  48. Wu, J.; Wang, H.; Li, N.; Yao, P.; Huang, Y.; Yang, H. Path planning for solar-powered UAV in urban environment. Neurocomputing 2018, 275, 2055–2065. [Google Scholar] [CrossRef]
  49. Lun, Y.; Wang, H.; Wu, J.; Liu, Y.; Wang, Y. Target Search in Dynamic Environments with Multiple Solar-Powered UAVs. IEEE Trans. Veh. Technol. 2022, 71, 9309–9321. [Google Scholar] [CrossRef]
  50. Esposito, C.; Rizzo, G. Help from above: UAV-empowered network resiliency in post-disaster scenarios. In Proceedings of the 2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 8–11 January 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 477–480. [Google Scholar]
  51. Ouamri, M.A.; Barb, G.; Singh, D.; Adam, A.B.; Muthanna, M.; Li, X. Nonlinear Energy-Harvesting for D2D Networks Underlaying UAV with SWIPT Using MADQN. IEEE Commun. Lett. 2023, 27, 1804–1808. [Google Scholar] [CrossRef]
  52. Ahmed, F.; Jenihhin, M. A survey on UAV computing platforms: A hardware reliability perspective. Sensors 2022, 22, 6286. [Google Scholar] [CrossRef]
  53. Liu, B.; Zhang, W.; Chen, W.; Huang, H.; Guo, S. Online computation offloading and traffic routing for UAV swarms in edge-cloud computing. IEEE Trans. Veh. Technol. 2020, 69, 8777–8791. [Google Scholar] [CrossRef]
  54. Mao, S.; He, S.; Wu, J. Joint UAV position optimization and resource scheduling in space-air-ground integrated networks with mixed cloud-edge computing. IEEE Syst. J. 2020, 15, 3992–4002. [Google Scholar] [CrossRef]
  55. Koubâa, A.; Ammar, A.; Alahdab, M.; Kanhouch, A.; Azar, A.T. Deepbrain: Experimental evaluation of cloud-based computation offloading and edge computing in the internet-of-drones for deep learning applications. Sensors 2020, 20, 5240. [Google Scholar] [CrossRef]
  56. Saraereh, O.A.; Alsaraira, A.; Khan, I.; Uthansakul, P. Performance evaluation of UAV-enabled LoRa networks for disaster management applications. Sensors 2020, 20, 2396. [Google Scholar] [CrossRef]
  57. Fan, R.; Jiao, J.; Ye, H.; Yu, Y.; Pitas, I.; Liu, M. Key ingredients of self-driving cars. arXiv 2019, arXiv:1906.02939. [Google Scholar]
  58. Masroor, R.; Naeem, M.; Ejaz, W. Efficient deployment of UAVs for disaster management: A multi-criterion optimization approach. Comput. Commun. 2021, 177, 185–194. [Google Scholar] [CrossRef]
  59. Hertelendy, A.J.; Al-Wathinani, A.M.; Sultan, M.A.S.; Goniewicz, K. Health sector transformation in Saudi Arabia: The integration of drones to augment disaster and prehospital care delivery. Disaster Med. Public Health Prep. 2023, 17, e448. [Google Scholar] [CrossRef]
  60. Saif, A.; Dimyati, K.; Noordin, K.A.; Alsamhi, S.H.; Hawbani, A. Multi-UAV and SAR collaboration model for disaster management in B5G networks. Internet Technol. Lett. 2021, 7, e310. [Google Scholar] [CrossRef]
  61. Kim, H.; Mokdad, L.; Ben-Othman, J. Designing UAV surveillance frameworks for smart city and extensive ocean with differential perspectives. IEEE Commun. Mag. 2018, 56, 98–104. [Google Scholar] [CrossRef]
  62. Zhang, H.; Dou, L.; Xin, B.; Chen, J.; Gan, M.; Ding, Y. Data collection task planning of a fixed-wing unmanned aerial vehicle in forest fire monitoring. IEEE Access 2021, 9, 109847–109864. [Google Scholar] [CrossRef]
  63. Hamidi-Alaoui, Z.; El Belrhiti El Alaoui, A. FM-MAC: A fast-mobility adaptive MAC protocol for wireless sensor networks. Trans. Emerg. Telecommun. Technol. 2020, 31, e3782. [Google Scholar] [CrossRef]
  64. Wang, Y.; Sun, J.; He, H.; Sun, C. Deterministic policy gradient with integral compensator for robust quadrotor control. IEEE Trans. Syst. Man Cybern. Syst. 2019, 50, 3713–3725. [Google Scholar] [CrossRef]
  65. Gupta, P.M.; Salpekar, M.; Tejan, P.K. Agricultural practices improvement using IoT enabled SMART sensors. In Proceedings of the 2018 International Conference on Smart City and Emerging Technology (ICSCET), Mumbai, India, 5 January 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5. [Google Scholar]
  66. Mohsan, S.A.H.; Othman, N.Q.H.; Li, Y.; Alsharif, M.H.; Khan, M.A. Unmanned aerial vehicles (UAVs): Practical aspects, applications, open challenges, security issues, and future trends. Intell. Serv. Robot. 2023, 16, 109–137. [Google Scholar] [CrossRef]
  67. Wang, Y.; Su, Z.; Zhang, N.; Li, R. Mobile wireless rechargeable UAV networks: Challenges and solutions. IEEE Commun. Mag. 2022, 60, 33–39. [Google Scholar] [CrossRef]
  68. Dong, F.; Li, L.; Lu, Z.; Pan, Q.; Zheng, W. Energy-efficiency for fixed-wing UAV-enabled data collection and forwarding. In Proceedings of the 2019 IEEE International Conference on Communications Workshops (ICC Workshops), Shanghai, China, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
69. Zhao, Z.-Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
70. Xu, S.; Savvaris, A.; He, S.; Shin, H.-S.; Tsourdos, A. Real-time implementation of YOLO+JPDA for small scale UAV multiple object tracking. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems (ICUAS), Dallas, TX, USA, 12–15 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1336–1341. [Google Scholar]
  71. Benjdira, B.; Khursheed, T.; Koubaa, A.; Ammar, A.; Ouni, K. Car detection using unmanned aerial vehicles: Comparison between faster R-CNN and YOLOv3. In Proceedings of the 2019 1st International Conference on Unmanned Vehicle Systems-Oman (UVS), Muscat, Oman, 5–7 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
  72. Zhu, H.; Qi, Y.; Shi, H.; Li, N.; Zhou, H. Human detection under UAV: An improved faster R-CNN approach. In Proceedings of the 2018 5th International Conference on Systems and Informatics (ICSAI), Nanjing, China, 10–12 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 367–372. [Google Scholar]
  73. Zhu, P.; Wen, L.; Bian, X.; Ling, H.; Hu, Q. Vision meets drones: A challenge. arXiv 2018, arXiv:1804.07437. [Google Scholar]
  74. Heracleous, C.; Kolios, P.; Panayiotou, C.G. UAV-based system for real-time wildfire perimeter propagation tracking. In Proceedings of the 2023 31st Mediterranean Conference on Control and Automation (MED), Limassol, Cyprus, 26–29 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 13–18. [Google Scholar]
  75. Maltezos, E.; Karagiannidis, L.; Douklias, T.; Dadoukis, A.; Amditis, A.; Sdongos, E. Preliminary design of a multipurpose UAV situational awareness platform based on novel computer vision and machine learning techniques. In Proceedings of the 2020 5th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Virtual, 25–27 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–8. [Google Scholar]
  76. Huang, H.; Savkin, A.V.; Ni, W. Online UAV trajectory planning for covert video surveillance of mobile targets. IEEE Trans. Autom. Sci. Eng. 2021, 19, 735–746. [Google Scholar] [CrossRef]
  77. Viswanathan, V.K.; Satpute, S.G.; Nikolakopoulos, G. FLIE: First-Look Enabled Inspect-Explore Autonomy Toward Visual Inspection of Unknown Distributed and Discontinuous Structures. IEEE Access 2023, 11, 28140–28150. [Google Scholar] [CrossRef]
  78. Li, Q.; Yang, X.; Lu, R.; Fan, J.; Wang, S.; Qin, Z. VisionICE: Air–Ground Integrated Intelligent Cognition Visual Enhancement System Based on a UAV. Drones 2023, 7, 268. [Google Scholar] [CrossRef]
  79. Santos, N.P.; Rodrigues, V.B.; Pinto, A.B.; Damas, B. Automatic Detection of Civilian and Military Personnel in Reconnaissance Missions using a UAV. In Proceedings of the 2023 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Tomar, Portugal, 26–27 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 157–162. [Google Scholar]
  80. Hu, X.; Bent, J.; Sun, J. Wildfire monitoring with uneven importance using multiple unmanned aircraft systems. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1270–1279. [Google Scholar]
  81. Savkin, A.V.; Huang, H. Navigation of a network of aerial drones for monitoring a frontier of a moving environmental disaster area. IEEE Syst. J. 2020, 14, 4746–4749. [Google Scholar] [CrossRef]
  82. Song, C.; Chen, Z.; Wang, K.; Luo, H.; Cheng, J.C. BIM-supported scan and flight planning for fully autonomous LiDAR-carrying UAVs. Autom. Constr. 2022, 142, 104533. [Google Scholar] [CrossRef]
  83. Seraj, E.; Gombolay, M. Coordinated control of UAVS for human-centered active sensing of wildfires. In Proceedings of the 2020 American Control Conference (ACC), Denver, CO, USA, 1–3 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1845–1852. [Google Scholar]
  84. Friedrich, M.; Lieb, T.J.; Temme, A.; Almeida, E.N.; Coelho, A.; Fontes, H. ResponDrone—A Situation Awareness Platform for First Responders. In Proceedings of the 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC), Portsmouth, VA, USA, 18–22 September 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7. [Google Scholar]
  85. Yamanouchi, T.; Urakawa, G.; Kashihara, S. UAV 3D-Draping System for Sharing Situational Awareness from Aerial Imagery Data. In Proceedings of the 2021 IEEE Global Humanitarian Technology Conference (GHTC), Seattle, WA, USA, 19–23 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 229–232. [Google Scholar]
  86. Gomez, C.; Purdie, H. UAV-based photogrammetry and geocomputing for hazards and disaster risk monitoring—A review. Geoenviron. Disasters 2016, 3, 23. [Google Scholar] [CrossRef]
  87. Kato, S.; Tokunaga, S.; Maruyama, Y.; Maeda, S.; Hirabayashi, M.; Kitsukawa, Y.; Monrroy, A.; Ando, T.; Fujii, Y.; Azumi, T. Autoware on board: Enabling autonomous vehicles with embedded systems. In Proceedings of the 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS), Porto, Portugal, 11–13 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 287–296. [Google Scholar]
  88. Pacala, A. Lidar as a Camera—Digital Lidar’s Implications for Computer Vision. Ouster Blog, 31 August 2018. Available online: https://ouster.com/insights/blog/the-camera-is-in-the-lidar (accessed on 24 July 2024).
89. Qingqing, L.; Xianjia, Y.; Queralta, J.P.; Westerlund, T. Multi-modal lidar dataset for benchmarking general-purpose localization and mapping algorithms. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 3837–3844. [Google Scholar]
  90. Xianjia, Y.; Salimpour, S.; Queralta, J.P.; Westerlund, T. Analyzing general-purpose deep-learning detection and segmentation models with images from a lidar as a camera sensor. arXiv 2022, arXiv:2203.04064. [Google Scholar]
91. Giordan, D.; Dematteis, N.; Troilo, F. UAV observation of the recent evolution of the Planpincieux Glacier (Mont Blanc-Italy). In Proceedings of the EGU General Assembly Conference Abstracts, Virtual, 4–8 May 2020; p. 9906. [Google Scholar]
  92. Qingqing, L.; Xianjia, Y.; Queralta, J.P.; Westerlund, T. Adaptive lidar scan frame integration: Tracking known MAVs in 3D point clouds. In Proceedings of the 2021 20th International Conference on Advanced Robotics (ICAR), Ljubljana, Slovenia, 6–10 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1079–1086. [Google Scholar]
  93. Quentel, A. A Scanning LiDAR for Long Range Detection and Tracking of UAVs. Ph.D. Thesis, Normandie Université, Normandy, France, 2021. [Google Scholar]
  94. Qi, H.; Feng, C.; Cao, Z.; Zhao, F.; Xiao, Y. P2B: Point-to-box network for 3D object tracking in point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 6329–6338. [Google Scholar]
  95. Sier, H.; Yu, X.; Catalano, I.; Queralta, J.P.; Zou, Z.; Westerlund, T. UAV Tracking with Lidar as a Camera Sensor in GNSS-Denied Environments. In Proceedings of the 2023 International Conference on Localization and GNSS (ICL-GNSS), Castellon, Spain, 6–8 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–7. [Google Scholar]
  96. Ding, Y.; Qu, Y.; Zhang, Q.; Tong, J.; Yang, X.; Sun, J. Research on UAV Detection Technology of Gm-APD Lidar Based on YOLO Model. In Proceedings of the 2021 IEEE International Conference on Unmanned Systems (ICUS), Beijing, China, 15–17 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 105–109. [Google Scholar]
  97. Luo, H.; Wen, C.-Y. A Low-Cost Relative Positioning Method for UAV/UGV Coordinated Heterogeneous System Based on Visual-Lidar Fusion. Aerospace 2023, 10, 924. [Google Scholar] [CrossRef]
  98. Dogru, S.; Marques, L. Drone detection using sparse lidar measurements. IEEE Robot. Autom. Lett. 2022, 7, 3062–3069. [Google Scholar] [CrossRef]
  99. Liu, L.; He, J.; Ren, K.; Xiao, Z.; Hou, Y. A LiDAR–camera fusion 3D object detection algorithm. Information 2022, 13, 169. [Google Scholar] [CrossRef]
  100. An, P.; Liang, J.; Yu, K.; Fang, B.; Ma, J. Deep structural information fusion for 3D object detection on LiDAR–camera system. Comput. Vis. Image Underst. 2022, 214, 103295. [Google Scholar] [CrossRef]
  101. Igonin, D.M.; Kolganov, P.A.; Tiumentsev, Y.V. Situational awareness and problems of its formation in the tasks of UAV behavior control. Appl. Sci. 2021, 11, 11611. [Google Scholar] [CrossRef]
  102. Islam, A.; Shin, S.Y. Bus: A blockchain-enabled data acquisition scheme with the assistance of UAV swarm in internet of things. IEEE Access 2019, 7, 103231–103249. [Google Scholar] [CrossRef]
  103. Na, H.J.; Yoo, S.-J. PSO-based dynamic UAV positioning algorithm for sensing information acquisition in wireless sensor networks. IEEE Access 2019, 7, 77499–77513. [Google Scholar] [CrossRef]
  104. Geraldes, R.; Goncalves, A.; Lai, T.; Villerabel, M.; Deng, W.; Salta, A.; Nakayama, K.; Matsuo, Y.; Prendinger, H. UAV-based situational awareness system using deep learning. IEEE Access 2019, 7, 122583–122594. [Google Scholar] [CrossRef]
  105. Martinez-Alpiste, I.; Golcarenarenji, G.; Wang, Q.; Alcaraz-Calero, J.M. Search and rescue operation using UAVs: A case study. Expert Syst. Appl. 2021, 178, 114937. [Google Scholar] [CrossRef]
  106. Murphy, S.Ó.; Sreenan, C.; Brown, K.N. Autonomous unmanned aerial vehicle for search and rescue using software defined radio. In Proceedings of the 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring), Kuala Lumpur, Malaysia, 28 April–1 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
  107. Albanese, A.; Sciancalepore, V.; Costa-Pérez, X. SARDO: An automated search-and-rescue drone-based solution for victims localization. IEEE Trans. Mob. Comput. 2021, 21, 3312–3325. [Google Scholar] [CrossRef]
  108. Dinh, T.D.; Pirmagomedov, R.; Pham, V.D.; Ahmed, A.A.; Kirichek, R.; Glushakov, R.; Vladyko, A. Unmanned aerial system–assisted wilderness search and rescue mission. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719850719. [Google Scholar] [CrossRef]
  109. Lee, H.; Ho, H.W.; Zhou, Y. Deep Learning-based monocular obstacle avoidance for unmanned aerial vehicle navigation in tree plantations: Faster region-based convolutional neural network approach. J. Intell. Robot. Syst. 2021, 101, 5. [Google Scholar] [CrossRef]
  110. Cao, Y.; Qi, F.; Jing, Y.; Zhu, M.; Lei, T.; Li, Z.; Xia, J.; Wang, J.; Lu, G. Mission Chain Driven Unmanned Aerial Vehicle Swarms Cooperation for the Search and Rescue of Outdoor Injured Human Targets. Drones 2022, 6, 138. [Google Scholar] [CrossRef]
  111. Steenbeek, A.; Nex, F. CNN-based dense monocular visual SLAM for real-time UAV exploration in emergency conditions. Drones 2022, 6, 79. [Google Scholar] [CrossRef]
  112. Chen, T.; Gupta, S.; Gupta, A. Learning exploration policies for navigation. arXiv 2019, arXiv:1903.01959. [Google Scholar]
  113. Chaplot, D.S.; Gandhi, D.; Gupta, S.; Gupta, A.; Salakhutdinov, R. Learning to explore using active neural slam. arXiv 2020, arXiv:2004.05155. [Google Scholar]
  114. Hu, J.; Zhang, H.; Li, Z.; Zhao, C.; Xu, Z.; Pan, Q. Object traversing by monocular UAV in outdoor environment. Asian J. Control 2021, 23, 2766–2775. [Google Scholar] [CrossRef]
  115. Shao, P.; Mo, F.; Chen, Y.; Ding, N.; Huang, R. Monocular object SLAM using quadrics and landmark reference map for outdoor UAV applications. In Proceedings of the 2021 IEEE International Conference on Real-time Computing and Robotics (RCAR), Xining, China, 15–19 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1195–1201. [Google Scholar]
  116. Chen, Q.; Zhu, H.; Yang, L.; Chen, X.; Pollin, S.; Vinogradov, E. Edge computing assisted autonomous flight for UAV: Synergies between vision and communications. IEEE Commun. Mag. 2021, 59, 28–33. [Google Scholar] [CrossRef]
  117. Ziyang, Z.; Ping, Z.; Yixuan, X.; Yuxuan, J. Distributed intelligent self-organized mission planning of multi-UAV for dynamic targets cooperative search-attack. Chin. J. Aeronaut. 2019, 32, 2706–2716. [Google Scholar]
  118. Fraga-Lamas, P.; Ramos, L.; Mondéjar-Guerra, V.; Fernández-Caramés, T.M. A review on IoT deep learning UAV systems for autonomous obstacle detection and collision avoidance. Remote Sens. 2019, 11, 2144. [Google Scholar] [CrossRef]
  119. MahmoudZadeh, S.; Abbasi, A.; Yazdani, A.; Wang, H.; Liu, Y. Uninterrupted path planning system for Multi-USV sampling mission in a cluttered ocean environment. Ocean Eng. 2022, 254, 111328. [Google Scholar] [CrossRef]
  120. MahmoudZadeh, S.; Yazdani, A.; Kalantari, Y.; Ciftler, B. AUVs’ Distributed Mission Planning System for Effective Underwater Chlorophyll Sampling. In Proceedings of the 2024 2nd International Conference on Unmanned Vehicle Systems-Oman (UVS), Muscat, Oman, 12–14 February 2024. [Google Scholar]
  121. Yazdani, A.; MahmoudZadeh, S.; Yakimenko, O.; Wang, H. Perception-aware online trajectory generation for a prescribed manoeuvre of unmanned surface vehicle in cluttered unstructured environment. Robot. Auton. Syst. 2023, 169, 104508. [Google Scholar] [CrossRef]
  122. Akremi, M.S.; Neji, N.; Tabia, H. Visual Navigation of UAVs in Indoor Corridor Environments using Deep Learning. In Proceedings of the 2023 Integrated Communication, Navigation and Surveillance Conference (ICNS), Herndon, VA, USA, 18–20 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  123. Park, B.; Oh, H. Vision-based obstacle avoidance for UAVs via imitation learning with sequential neural networks. Int. J. Aeronaut. Space Sci. 2020, 21, 768–779. [Google Scholar] [CrossRef]
  124. Yusefi, A.; Durdu, A.; Aslan, M.F.; Sungur, C. LSTM and filter based comparison analysis for indoor global localization in UAVs. IEEE Access 2021, 9, 10054–10069. [Google Scholar] [CrossRef]
  125. Chekakta, Z.; Zenati, A.; Aouf, N.; Dubois-Matra, O. Robust deep learning LiDAR-based pose estimation for autonomous space landers. Acta Astronaut. 2022, 201, 59–74. [Google Scholar] [CrossRef]
  126. Liao, F.; Feng, X.; Li, Z.; Wang, D.; Xu, C.; Guang, C.; Qing, Y.; Song, C. A hybrid CNN-LSTM model for diagnosing rice nutrient levels at the rice panicle initiation stage. J. Integr. Agric. 2023, 23, 711–723. [Google Scholar] [CrossRef]
  127. Kouris, A.; Bouganis, C.-S. Learning to fly by myself: A self-supervised cnn-based approach for autonomous navigation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–9. [Google Scholar]
  128. Gandhi, D.; Pinto, L.; Gupta, A. Learning to fly by crashing. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 3948–3955. [Google Scholar]
  129. Padhy, R.P.; Verma, S.; Ahmad, S.; Choudhury, S.K.; Sa, P.K. Deep neural network for autonomous UAV navigation in indoor corridor environments. Procedia Comput. Sci. 2018, 133, 643–650. [Google Scholar] [CrossRef]
  130. Smolyanskiy, N.; Kamenev, A.; Smith, J.; Birchfield, S. Toward low-flying autonomous MAV trail navigation using deep neural networks for environmental awareness. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 4241–4247. [Google Scholar]
  131. Vrba, M.; Saska, M. Marker-less micro aerial vehicle detection and localization using convolutional neural networks. IEEE Robot. Autom. Lett. 2020, 5, 2459–2466. [Google Scholar] [CrossRef]
  132. Carrio, A.; Vemprala, S.; Ripoll, A.; Saripalli, S.; Campoy, P. Drone detection using depth maps. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1034–1037. [Google Scholar]
  133. Gu, W.; Valavanis, K.P.; Rutherford, M.J.; Rizzo, A. A survey of artificial neural networks with model-based control techniques for flight control of unmanned aerial vehicles. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 362–371. [Google Scholar]
  134. Rai, A.; Kannan, R.J. Population coding of generative neuronal cells for collaborative decision making in UAV-based SLAM operations. J. Indian Soc. Remote Sens. 2021, 49, 499–505. [Google Scholar] [CrossRef]
  135. Khan, A.; Hebert, M. Learning safe recovery trajectories with deep neural networks for unmanned aerial vehicles. In Proceedings of the 2018 IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–9. [Google Scholar]
136. Queralta, J.P.; Raitoharju, J.; Gia, T.N.; Passalis, N.; Westerlund, T. AutoSOS: Towards multi-UAV systems supporting maritime search and rescue with lightweight AI and edge computing. arXiv 2020, arXiv:2005.03409. [Google Scholar]
  137. Zhang, Z.; Wu, J.; Dai, J.; He, C. A novel real-time penetration path planning algorithm for stealth UAV in 3D complex dynamic environment. IEEE Access 2020, 8, 122757–122771. [Google Scholar] [CrossRef]
  138. Chicaiza, F.A.; Slawiñski, E.; Salinas, L.R.; Mut, V.A. Evaluation of path planning with force feedback for bilateral teleoperation of unmanned rotorcraft systems. J. Intell. Robot. Syst. 2022, 105, 34. [Google Scholar] [CrossRef]
  139. Huang, X.; Sun, R.; Hu, H. An Artificial Potential Field Based Cooperative UAV Conflict Detection and Resolution under Uncertain Communication Environment. In Proceedings of the 2022 IEEE International Conference on Unmanned Systems (ICUS), Guangzhou, China, 28–30 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 309–314. [Google Scholar]
  140. Zhang, M.; Liu, C.; Wang, P.; Yu, J.; Yuan, Q. UAV swarm real-time path planning algorithm based on improved artificial potential field method. In Proceedings of the 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021), Changsha, China, 24–26 September 2021; Springer: Singapore, 2021; pp. 1933–1945. [Google Scholar]
  141. Chen, J.; Zhang, Y.; Wu, L.; You, T.; Ning, X. An adaptive clustering-based algorithm for automatic path planning of heterogeneous UAVs. IEEE Trans. Intell. Transp. Syst. 2021, 23, 16842–16853. [Google Scholar] [CrossRef]
  142. Radmanesh, R.; Kumar, M.; French, D.; Casbeer, D. Towards a PDE-based large-scale decentralized solution for path planning of UAVs in shared airspace. Aerosp. Sci. Technol. 2020, 105, 105965. [Google Scholar] [CrossRef]
  143. Bahabry, A.; Wan, X.; Ghazzai, H.; Menouar, H.; Vesonder, G.; Massoud, Y. Low-altitude navigation for multi-rotor drones in urban areas. IEEE Access 2019, 7, 87716–87731. [Google Scholar] [CrossRef]
  144. Bahabry, A.; Wan, X.; Ghazzai, H.; Vesonder, G.; Massoud, Y. Collision-free navigation and efficient scheduling for fleet of multi-rotor drones in smart city. In Proceedings of the 2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS), Dallas, TX, USA, 4–7 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 552–555. [Google Scholar]
  145. Cho, S.W.; Park, H.J.; Lee, H.; Shim, D.H.; Kim, S.-Y. Coverage path planning for multiple unmanned aerial vehicles in maritime search and rescue operations. Comput. Ind. Eng. 2021, 161, 107612. [Google Scholar] [CrossRef]
146. Ren, S.; Chen, R.; Gao, W. A UAV UGV Collaboration paradigm based on situation awareness: Framework and simulation. In Proceedings of the 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021), Changsha, China, 24–26 September 2021; Springer: Singapore, 2021; pp. 3398–3406. [Google Scholar]
  147. Atif, M.; Ahmad, R.; Ahmad, W.; Zhao, L.; Rodrigues, J.J. UAV-assisted wireless localization for search and rescue. IEEE Syst. J. 2021, 15, 3261–3272. [Google Scholar] [CrossRef]
  148. Han, Z.; Zhang, R.; Pan, N.; Xu, C.; Gao, F. Fast-tracker: A robust aerial system for tracking agile target in cluttered environments. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 328–334. [Google Scholar]
  149. Almurib, H.A.; Nathan, P.T.; Kumar, T.N. Control and path planning of quadrotor aerial vehicles for search and rescue. In Proceedings of the SICE Annual Conference 2011, Tokyo, Japan, 13–18 September 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 700–705. [Google Scholar]
  150. Hill, V.W.; Thomas, R.W.; Larson, J.D. Autonomous situational awareness for UAS swarms. In Proceedings of the 2021 IEEE Aerospace Conference (50100), Big Sky, MT, USA, 6–13 March 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar]
151. Oh, C.-G. Implementation of Basic MR-Based Digital Twin to Demonstrate Safe UAM Flight Paths in Urban Areas. J. Ergon. Soc. Korea 2023, 42, 385–399. [Google Scholar] [CrossRef]
  152. Li, W.; Wang, L.; Zou, A.; Cai, J.; He, H.; Tan, T. Path planning for UAV based on improved PRM. Energies 2022, 15, 7267. [Google Scholar] [CrossRef]
  153. Wu, X.; Xu, L.; Zhen, R.; Wu, X. Biased sampling potentially guided intelligent bidirectional RRT* algorithm for UAV path planning in 3D environment. Math. Probl. Eng. 2019, 2019, 5157403. [Google Scholar] [CrossRef]
  154. Mu, X.; Gao, W.; Li, X.; Li, G. Coverage Path Planning for UAV Based on Improved Back-and-Forth Mode. IEEE Access 2023, 11, 114840–114854. [Google Scholar] [CrossRef]
  155. Tang, G.; Liu, P.; Hou, Z.; Claramunt, C.; Zhou, P. Motion Planning of UAV for Port Inspection Based on Extended RRT* Algorithm. J. Mar. Sci. Eng. 2023, 11, 702. [Google Scholar] [CrossRef]
  156. Chuang, H.-M.; He, D.; Namiki, A. Autonomous target tracking of UAV using high-speed visual feedback. Appl. Sci. 2019, 9, 4552. [Google Scholar] [CrossRef]
  157. Koch, T.; Körner, M.; Fraundorfer, F. Automatic and semantically-aware 3D UAV flight planning for image-based 3D reconstruction. Remote Sens. 2019, 11, 1550. [Google Scholar] [CrossRef]
  158. Qiming, Y.; Jiandong, Z.; Guoqing, S. Modeling of UAV path planning based on IMM under POMDP framework. J. Syst. Eng. Electron. 2019, 30, 545–554. [Google Scholar]
  159. Chen, J.; Ye, F.; Jiang, T. Path planning under obstacle-avoidance constraints based on ant colony optimization algorithm. In Proceedings of the 2017 IEEE 17th International Conference on Communication Technology (ICCT), Chengdu, China, 27–30 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1434–1438. [Google Scholar]
  160. Changxin, Z.; Ligang, W.; Yiding, W.; Xiao, Z.; Yandong, C.; Anming, H.; Anqiao, H. UAV Electric Patrol Path Planning Based on Improved Ant Colony Optimization-A* Algorithm. In Proceedings of the 2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA), Changchun, China, 25–27 February 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1374–1380. [Google Scholar]
  161. Li, B.; Qi, X.; Yu, B.; Liu, L. Trajectory planning for UAV based on improved ACO algorithm. IEEE Access 2019, 8, 2995–3006. [Google Scholar] [CrossRef]
  162. Zhou, Y.; Zhao, H.; Chen, J.; Jia, Y. A novel mission planning method for UAVs’ course of action. Comput. Commun. 2020, 152, 345–356. [Google Scholar] [CrossRef]
  163. Ge, F.; Li, K.; Xu, W. Path planning of UAV for oilfield inspection based on improved grey wolf optimization algorithm. In Proceedings of the 2019 Chinese Control and Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3666–3671. [Google Scholar]
  164. Qu, C.; Gai, W.; Zhang, J.; Zhong, M. A novel hybrid grey wolf optimizer algorithm for unmanned aerial vehicle (UAV) path planning. Knowl.-Based Syst. 2020, 194, 105530. [Google Scholar] [CrossRef]
  165. Kumar, G.; Anwar, A.; Dikshit, A.; Poddar, A.; Soni, U.; Song, W.K. Obstacle avoidance for a swarm of unmanned aerial vehicles operating on particle swarm optimization: A swarm intelligence approach for search and rescue missions. J. Braz. Soc. Mech. Sci. Eng. 2022, 44, 56. [Google Scholar] [CrossRef]
  166. Shang, Z.; Bradley, J.; Shen, Z. A co-optimal coverage path planning method for aerial scanning of complex structures. Expert Syst. Appl. 2020, 158, 113535. [Google Scholar] [CrossRef]
  167. Khan, S.I.; Qadir, Z.; Munawar, H.S.; Nayak, S.R.; Budati, A.K.; Verma, K.D.; Prakash, D. UAVs path planning architecture for effective medical emergency response in future networks. Phys. Commun. 2021, 47, 101337. [Google Scholar] [CrossRef]
  168. Zhang, X.; Xia, S.; Zhang, T.; Li, X. Hybrid FWPS cooperation algorithm based unmanned aerial vehicle constrained path planning. Aerosp. Sci. Technol. 2021, 118, 107004. [Google Scholar] [CrossRef]
  169. Duan, T.; Wang, W.; Wang, T. A Review for Unmanned Swarm Gaming: Framework, Model and Algorithm. In Proceedings of the 2022 8th International Conference on Big Data and Information Analytics (BigDIA), Guiyang, China, 24–25 August 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 164–170. [Google Scholar]
  170. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  171. Jagannath, J.; Jagannath, A.; Furman, S.; Gwin, T. Deep learning and reinforcement learning for autonomous unmanned aerial systems: Roadmap for theory to deployment. In Deep Learning for Unmanned Systems; Springer: Cham, Switzerland, 2021; pp. 25–82. [Google Scholar]
  172. Path, D.-S.B.M.-U. Planning and Obstacle Avoidance in a Dynamic Environment. In Advances in Swarm Intelligence: Proceedings of the 9th International Conference, ICSI 2018, Shanghai, China, June 17–22, 2018, Proceedings, Part II; Springer: Cham, Switzerland, 2018; p. 102. [Google Scholar]
  173. Yan, C.; Xiang, X. A path planning algorithm for UAV based on improved q-learning. In Proceedings of the 2018 2nd International Conference on Robotics and Automation Sciences (ICRAS), Wuhan, China, 23–25 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5. [Google Scholar]
  174. Bouhamed, O.; Ghazzai, H.; Besbes, H.; Massoud, Y. Q-learning based routing scheduling for a multi-task autonomous agent. In Proceedings of the 2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS), Dallas, TX, USA, 4–7 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 634–637. [Google Scholar]
  175. Sichkar, V.N. Reinforcement learning algorithms in global path planning for mobile robot. In Proceedings of the 2019 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM), Sochi, Russia, 25–29 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5. [Google Scholar]
  176. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef]
  177. Van Hasselt, H.; Guez, A.; Silver, D. Deep reinforcement learning with double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), Phoenix, AZ, USA, 12–17 February 2016; Volume 30. [Google Scholar]
  178. Wang, Z.; Schaul, T.; Hessel, M.; Hasselt, H.; Lanctot, M.; Freitas, N. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 20–22 June 2016; pp. 1995–2003. [Google Scholar]
  179. Kersandt, K. Deep Reinforcement Learning as Control Method for Autonomous UAVs. Master’s Thesis, Universitat Politècnica de Catalunya, Barcelona, Spain, 2018. [Google Scholar]
  180. Peters, J.; Schaal, S. Policy gradient methods for robotics. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–13 October 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 2219–2225. [Google Scholar]
  181. Silver, D.; Lever, G.; Heess, N.; Degris, T.; Wierstra, D.; Riedmiller, M. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 387–395. [Google Scholar]
  182. Lillicrap, T.P.; Hunt, J.J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D. Continuous control with deep reinforcement learning. arXiv 2015, arXiv:1509.02971. [Google Scholar]
  183. Rodriguez-Ramos, A.; Sampedro, C.; Bavle, H.; De La Puente, P.; Campoy, P. A deep reinforcement learning strategy for UAV autonomous landing on a moving platform. J. Intell. Robot. Syst. 2019, 93, 351–366. [Google Scholar] [CrossRef]
  184. Yang, Q.; Zhang, J.; Shi, G.; Hu, J.; Wu, Y. Maneuver decision of UAV in short-range air combat based on deep reinforcement learning. IEEE Access 2019, 8, 363–378. [Google Scholar] [CrossRef]
  185. Wang, C.; Wang, J.; Shen, Y.; Zhang, X. Autonomous navigation of UAVs in large-scale complex environments: A deep reinforcement learning approach. IEEE Trans. Veh. Technol. 2019, 68, 2124–2136. [Google Scholar] [CrossRef]
  186. Li, H.; He, H. Multiagent Trust Region Policy Optimization. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–15. [Google Scholar] [CrossRef] [PubMed]
  187. Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal policy optimization algorithms. arXiv 2017, arXiv:1707.06347. [Google Scholar]
  188. Pham, H.X.; La, H.M.; Feil-Seifer, D.; Van Nguyen, L. Reinforcement learning for autonomous UAV navigation using function approximation. In Proceedings of the 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Philadelphia, PA, USA, 6–8 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar]
  189. Abedin, S.F.; Munir, M.S.; Tran, N.H.; Han, Z.; Hong, C.S. Data freshness and energy-efficient UAV navigation optimization: A deep reinforcement learning approach. IEEE Trans. Intell. Transp. Syst. 2020, 22, 5994–6006. [Google Scholar] [CrossRef]
  190. Din, A.F.U.; Akhtar, S.; Maqsood, A.; Habib, M.; Mir, I. Modified model free dynamic programming: An augmented approach for unmanned aerial vehicle. Appl. Intell. 2023, 53, 3048–3068. [Google Scholar] [CrossRef]
  191. Chronis, C.; Anagnostopoulos, G.; Politi, E.; Dimitrakopoulos, G.; Varlamis, I. Dynamic Navigation in Unconstrained Environments Using Reinforcement Learning Algorithms. IEEE Access 2023, 11, 117984–118001. [Google Scholar] [CrossRef]
  192. Merabet, A.; Lakas, A.; Belkacem, A.N. WPT-enabled multi-UAV path planning for disaster management deep Q-network. In Proceedings of the 2023 International Wireless Communications and Mobile Computing (IWCMC), Marrakesh, Morocco, 19–23 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1672–1678. [Google Scholar]
  193. Li, Z.; Lu, Y.; Li, X.; Wang, Z.; Qiao, W.; Liu, Y. UAV networks against multiple maneuvering smart jamming with knowledge-based reinforcement learning. IEEE Internet Things J. 2021, 8, 12289–12310. [Google Scholar] [CrossRef]
  194. Singh, R.; Qu, C.; Esquivel Morel, A.; Calyam, P. Location Prediction and Trajectory Optimization in Multi-UAV Application Missions. In Intelligent Unmanned Air Vehicles Communications for Public Safety Networks; Springer: Singapore, 2022; pp. 105–131. [Google Scholar]
  195. Scukins, E.; Klein, M.; Ögren, P. Enhancing situation awareness in beyond visual range air combat with reinforcement learning-based decision support. In Proceedings of the 2023 International Conference on Unmanned Aircraft Systems (ICUAS), Warsaw, Poland, 6–9 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 56–62. [Google Scholar]
  196. Liu, D.; Bao, W.; Zhu, X.; Fei, B.; Xiao, Z.; Men, T. Vision-aware air-ground cooperative target localization for UAV and UGV. Aerosp. Sci. Technol. 2022, 124, 107525. [Google Scholar] [CrossRef]
  197. Causa, F.; Opromolla, R.; Fasano, G. Closed loop integration of air-to-air visual measurements for cooperative UAV navigation in GNSS challenging environments. Aerosp. Sci. Technol. 2022, 130, 107947. [Google Scholar] [CrossRef]
  198. Zhang, Y.; Xingjian, W.; Shaoping, W.; Xinyu, T. Distributed bearing-based formation control of unmanned aerial vehicle swarm via global orientation estimation. Chin. J. Aeronaut. 2022, 35, 44–58. [Google Scholar] [CrossRef]
  199. Yang, C.; Wang, D.; Zeng, Y.; Yue, Y.; Siritanawan, P. Knowledge-based multimodal information fusion for role recognition and situation assessment by using mobile robot. Inf. Fusion 2019, 50, 126–138. [Google Scholar] [CrossRef]
  200. Kwon, C.; Hwang, I. Sensing-based distributed state estimation for cooperative multiagent systems. IEEE Trans. Autom. Control 2018, 64, 2368–2382. [Google Scholar] [CrossRef]
  201. Zhang, H.; Tao, Y.; Lv, X.; Liu, Y. Master-slave UAV-assisted WSNs for Improving Data Collection Rate and Shortening Flight Time. IEEE Sens. J. 2023, 99, 26597–26607. [Google Scholar] [CrossRef]
  202. MahmoudZadeh, S.; Yazdani, A.; Elmi, A.; Abbasi, A.; Ghanooni, P. Exploiting a fleet of UAVs for monitoring and data acquisition of a distributed sensor network. Neural Comput. Appl. 2022, 34, 5041–5054. [Google Scholar] [CrossRef]
  203. Zadeh, R.B.; Zaslavsky, A.; Loke, S.W.; MahmoudZadeh, S. A multiagent mission coordination system for continuous situational awareness of bushfires. IEEE Trans. Autom. Sci. Eng. 2022, 20, 1275–1291. [Google Scholar] [CrossRef]
  204. Liao, Z.; Wang, S.; Shi, J.; Sun, Z.; Zhang, Y.; Sial, M.B. Cooperative situational awareness of multi-UAV system based on improved DS evidence theory. Aerosp. Sci. Technol. 2023, 142, 108605. [Google Scholar] [CrossRef]
  205. Yingrong, Y.; Jianglong, Y.; Yishi, L.; Zhang, R. Distributed state estimation for heterogeneous mobile sensor networks with stochastic observation loss. Chin. J. Aeronaut. 2022, 35, 265–275. [Google Scholar]
  206. Zhu, H.; Li, J.; Gao, Y.; Cheng, G. Consensus analysis of UAV swarm cooperative situation awareness. In Proceedings of the 2020 International Conference on Virtual Reality and Intelligent Systems (ICVRIS), Zhangjiajie, China, 18–19 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 415–418. [Google Scholar]
  207. Zhang, Y.; Xiao, Q.; Deng, X.; Jiang, W. A multi-source information fusion method for ship target recognition based on Bayesian inference and evidence theory. J. Intell. Fuzzy Syst. 2022, 42, 2331–2346. [Google Scholar] [CrossRef]
  208. Luo, J.; Liu, J.; Liang, Q. UAV swarm trajectory planning based on a novel particle swarm optimization. In Proceedings of the 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021), Changsha, China, 24–26 September 2021; Springer: Singapore; pp. 509–520. [Google Scholar]
  209. Shen, L.; Wang, Y.; Liu, K.; Yang, Z.; Shi, X.; Yang, X.; Jing, K. Synergistic path planning of multi-UAVs for air pollution detection of ships in ports. Transp. Res. Part E Logist. Transp. Rev. 2020, 144, 102128. [Google Scholar] [CrossRef]
  210. Phung, M.D.; Ha, Q.P. Safety-enhanced UAV path planning with spherical vector-based particle swarm optimization. Appl. Soft Comput. 2021, 107, 107376. [Google Scholar] [CrossRef]
  211. Ali, Z.A.; Han, Z.; Masood, R.J. Collective motion and self-organization of a swarm of UAVs: A cluster-based architecture. Sensors 2021, 21, 3820. [Google Scholar] [CrossRef] [PubMed]
  212. Ali, Z.A.; Zhangang, H.; Hang, W.B. Cooperative path planning of multiple UAVs by using max–min ant colony optimization along with cauchy mutant operator. Fluct. Noise Lett. 2021, 20, 2150002. [Google Scholar] [CrossRef]
  213. Xu, C.; Xu, M.; Yin, C. Optimized multi-UAV cooperative path planning under the complex confrontation environment. Comput. Commun. 2020, 162, 196–203. [Google Scholar] [CrossRef]
  214. Yu, Y.; Liu, J.; Wei, C. Hawk and pigeon’s intelligence for UAV swarm dynamic combat game via competitive learning pigeon-inspired optimization. Sci. China Technol. Sci. 2022, 65, 1072–1086. [Google Scholar] [CrossRef]
  215. Yu, Y.; Wang, H.; Liu, S.; Guo, L.; Yeoh, P.L.; Vucetic, B.; Li, Y. Distributed multi-agent target tracking: A nash-combined adaptive differential evolution method for UAV systems. IEEE Trans. Veh. Technol. 2021, 70, 8122–8133. [Google Scholar] [CrossRef]
  216. Akin, E.; Demir, K.; Yetgin, H. Multiagent Q-learning based UAV trajectory planning for effective situational awareness. Turk. J. Electr. Eng. Comput. Sci. 2021, 29, 2561–2579. [Google Scholar] [CrossRef]
  217. Kim, I.; Morrison, J.R. Learning based framework for joint task allocation and system design in stochastic multi-UAV systems. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems (ICUAS), Dallas, TX, USA, 12–15 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 324–334. [Google Scholar]
  218. Guerra, A.; Guidi, F.; Dardari, D.; Djuric, P.M. Reinforcement Learning for Joint Detection & Mapping using Dynamic UAV Networks. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 2586–2601. [Google Scholar]
  219. Qu, C.; Singh, R.; Morel, A.E.; Sorbelli, F.B.; Calyam, P.; Das, S.K. Obstacle-aware and energy-efficient multi-drone coordination and networking for disaster response. In Proceedings of the 2021 17th International Conference on Network and Service Management (CNSM), Virtual, 25–29 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 446–454. [Google Scholar]
  220. Qu, C.; Sorbelli, F.B.; Singh, R.; Calyam, P.; Das, S.K. Environmentally-aware and energy-efficient multi-drone coordination and networking for disaster response. IEEE Trans. Netw. Serv. Manag. 2023, 20, 1093–1109. [Google Scholar] [CrossRef]
  221. Prakash, M.; Neelakandan, S.; Kim, B.-H. Reinforcement Learning-Based Multidimensional Perception and Energy Awareness Optimized Link State Routing for Flying Ad-Hoc Networks. Mob. Netw. Appl. 2023, 1–19. [Google Scholar] [CrossRef]
  222. Chen, Y.; Dong, Q.; Shang, X.; Wu, Z.; Wang, J. Multi-UAV autonomous path planning in reconnaissance missions considering incomplete information: A reinforcement learning method. Drones 2022, 7, 10. [Google Scholar] [CrossRef]
  223. Qu, C.; Gai, W.; Zhong, M.; Zhang, J. A novel reinforcement learning based grey wolf optimizer algorithm for unmanned aerial vehicles (UAVs) path planning. Appl. Soft Comput. 2020, 89, 106099. [Google Scholar] [CrossRef]
  224. Femminella, M.; Reali, G. Gossip-based monitoring protocol for 6G networks. IEEE Trans. Netw. Serv. Manag. 2023, 20, 4126–4140. [Google Scholar] [CrossRef]
  225. Yeo, S.; Bae, M.; Jeong, M.; Kwon, O.K.; Oh, S. Crossover-SGD: A gossip-based communication in distributed deep learning for alleviating large mini-batch problem and enhancing scalability. Concurr. Comput. Pract. Exp. 2023, 35, e7508. [Google Scholar] [CrossRef]
  226. Mikavica, B.; Kostić-Ljubisavljević, A. Blockchain-based solutions for security, privacy, and trust management in vehicular networks: A survey. J. Supercomput. 2021, 77, 9520–9575. [Google Scholar] [CrossRef]
  227. Vincent, N.E.; Skjellum, A.; Medury, S. Blockchain architecture: A design that helps CPA firms leverage the technology. Int. J. Account. Inf. Syst. 2020, 38, 100466. [Google Scholar] [CrossRef]
  228. Frey, M.A.; Attmanspacher, J.; Schulte, A. A dynamic Bayesian network and Markov decision process for tactical UAV decision making in MUM-T scenarios. In Proceedings of the 2022 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), Salerno, Italy, 6–10 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 47–54. [Google Scholar]
  229. Habibi, H.; Yazdani, A.; Darouach, M.; Wang, H.; Fernando, T.; Howard, I. Observer-based sensor fault tolerant control with prescribed tracking performance for a class of nonlinear systems. IEEE Trans. Autom. Control 2023, 68, 8259–8266. [Google Scholar] [CrossRef]
  230. Ghanooni, P.; Habibi, H.; Yazdani, A.; Wang, H.; MahmoudZadeh, S.; Ferrara, A. Prescribed performance control of a robotic manipulator with unknown control gain and assigned settling time. ISA Trans. 2024, 145, 330–354. [Google Scholar] [CrossRef]
  231. Raoufi, M.; Habibi, H.; Yazdani, A.; Wang, H. Robust prescribed trajectory tracking control of a robot manipulator using adaptive finite-time sliding mode and extreme learning machine method. Robotics 2022, 11, 111. [Google Scholar] [CrossRef]
  232. Federal Aviation Administration (FAA). Summary of Small Unmanned Aircraft Rule (Part 107). Available online: https://www.faa.gov/newsroom/small-unmanned-aircraft-systems-uas-regulations-part-107 (accessed on 6 February 2024).
  233. European Union Aviation Safety Agency (EASA). UAS Regulations. Available online: https://www.easa.europa.eu/en/regulations (accessed on 6 February 2024).
  234. International Civil Aviation Organization (ICAO). Manual on Remotely Piloted Aircraft Systems (RPAS). Available online: https://www.icao.int/Pages/default.aspx (accessed on 6 February 2024).
Figure 1. Endsley’s model of SA in the context of robotics.
Figure 2. UAV for capturing SA and EA.
Figure 3. Components of SA within UAV operations.
Figure 4. Example of the classified UAVs [4,6,33,34,35,36,37].
Figure 5. Popular multirotor UAVs for SA [44].
Figure 6. Conceptual example of capturing SA using distributed sensors, on-board camera, and LiDAR.
Table 1. Summary of the autonomous UAV classification and corresponding pros, cons, and suitable applications.

Unmanned Helicopter
Pros:
- Agile movement
- Efficient structure
- Supports swift pose changes
- Hovering capability
- Flexible maneuverability
- Efficient in complex environments
- Vertical take-off and landing
Cons:
- High maintenance cost
- Top-mounted motor has a complex structure
- Needs multiple steering gears for movement
- Comparatively large size
- Expensive
Applications:
- Large-scale operations
- Monitoring, tracking, and patrolling missions due to hovering capability

Fixed-Wing UAVs
Pros:
- Fast motion
- Long-range maneuvering
- Energy efficient (energy consumption peaks only during altitude climbs, to generate supporting power)
Cons:
- Need a long landing/take-off runway to generate sufficient lift force
- Agile movement at high speed is challenging and imposes significant stress on the vehicle
- Limited maneuverability
- Expensive
Applications:
- Large-scale range of activities
- Time-critical applications

Multirotor or Rotary UAVs
Pros:
- Agile movement and quick response rate
- Ease of deployment
- Small size and efficient structure
- Mechanical simplicity and flexibility
- Vertical take-off and landing
- Excellent maneuverability for tackling environmental disturbances
- Can penetrate narrow spaces inaccessible to larger vehicles or humans
- Hovering in place provides a stable platform for capturing detailed data and imagery
- Supports propeller pitch without exerting momentum or control forces
- Low maintenance costs
- Cheap
Cons:
- Persistent energy consumption to retain a given altitude, due to the use of multiple rotors
- Poor endurance
- Require stable weather conditions due to their small size
- Limited memory and onboard processing power
- Power usage scales with weather conditions, communication load, computational load, payload, and onboard sensors’ energy demand
Applications:
- Operation in narrow spaces and complex environments
- Indoor applications
- Monitoring and tracking
- Patrolling missions due to hovering capability
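The qualitative trade-offs in Table 1 can be made operational as a simple lookup. The sketch below is illustrative only: the requirement keywords, the application sets, and the overlap scoring are assumptions of this example, not part of the reviewed literature.

```python
# Illustrative mission-to-airframe matcher distilled from Table 1.
# Keywords and scoring are assumptions of this sketch, not a standard.
SUITABILITY = {
    "unmanned_helicopter": {"hovering", "large_scale", "monitoring", "tracking", "patrolling"},
    "fixed_wing": {"large_scale", "time_critical", "long_range"},
    "multirotor": {"hovering", "narrow_spaces", "indoor", "monitoring", "tracking", "patrolling"},
}

def best_airframe(requirements: set) -> str:
    """Return the UAV class whose Table 1 application set best overlaps the mission."""
    return max(SUITABILITY, key=lambda cls: len(SUITABILITY[cls] & requirements))
```

For example, a mission tagged with `{"indoor", "narrow_spaces"}` would map to the multirotor class, matching the table's recommendation.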
Table 2. The specifications and properties of the multirotor UAVs used for SA.

DJI Phantom 4 Pro V2.0 | Phantom 4 RTK
- Processor: Quad-core, 4-Plus-1™ ARM® | Quad-core, 4-Plus-1™ ARM®
- Positioning Systems: GPS, GLONASS | RTK Module, GPS
- Max Transmission Distance: ~7 km | ~5 km
- Weight: 1375 g | 1391 g
- Max Directional Speed: 72 km/h | 58 km/h
- Max Altitude: 503 m | 500 m
- Max Flight Time: ~30 min | ~30 min
- Max Flight Range: ~10 km | ~7 km
- Battery Capacity: 5870 mAh | 5870 mAh

Parrot Anafi USA | DJI Inspire 2
- Processor: ARM Cortex A8 @ 1 GHz | Quad-core, 4-Plus-1™ ARM®
- Positioning Systems: GPS, GLONASS, Galileo, QZSS | Vision Positioning System, GPS/GLONASS
- Max Transmission Distance: ~5 km | ~7 km
- Weight: 644 g | 3290 g
- Max Directional Speed: 53 km/h | 108 km/h
- Max Altitude: 5000 m | 2500 m
- Max Flight Time: ~32 min | ~27 min
- Max Flight Range: ~5 km | ~7 km
- Battery Capacity: 3400 mAh | 4280 mAh

DJI Mavic 2 Enterprise Advanced | DJI Mavic 2 Pro
- Processor: Intel® Core™ i3 or AMD Phenom processor | Quad-core, 4-Plus-1™ ARM®
- Positioning Systems: RTK Module, GPS | GPS, GLONASS
- Max Transmission Distance: ~6–10 km | ~18 km
- Weight: 1100 g | 907 g
- Max Directional Speed: 72 km/h | 72 km/h
- Max Altitude: 6000 m | 6000 m
- Max Flight Time: ~31 min | ~30 min
- Max Flight Range: ~10 km | ~18 km
- Battery Capacity: 3850 mAh | 3850 mAh

DJI Mavic Air 2 | DJI Mavic 3 Enterprise
- Processor: Quad-core, 4-Plus-1™ ARM® | Quad-core, 4-Plus-1™ ARM®
- Positioning Systems: GPS, GLONASS | GPS, GLONASS, Galileo
- Max Transmission Distance: ~10 km | ~15 km
- Weight: 570 g | 1050 g
- Max Directional Speed: 72 km/h | 68.4 km/h
- Max Altitude: 5000 m | 6000 m
- Max Flight Time: ~34 min | ~45 min
- Max Flight Range: ~18.5 km | ~32 km
- Battery Capacity: 3500 mAh | 5000 mAh

Yuneec Typhoon H3 | Yuneec H520E
- Processor: DJI-supported Quad-core, 4-Plus-1™ ARM® | Intel quad-core processor with OFDM support
- Positioning Systems: GPS and compass systems | GPS, GLONASS, Galileo, BeiDou
- Max Transmission Distance: ~2 km | ~3.5–7 km
- Weight: 1985 g | 1860 g
- Max Directional Speed: 48.3 km/h | 48.6 km/h
- Max Altitude: 499 m | 500 m
- Max Flight Time: ~25 min | ~28 min
- Max Flight Range: ~2 km | ~3.5 km
- Battery Capacity: 5250 mAh | 6200 mAh

DJI Matrice 300 RTK | Matrice M210 V2
- Processor: Quad-core, 4-Plus-1™ ARM® | NVIDIA Tegra K1, Cortex A-15 32-bit CPU, Kepler GPU with 192 CUDA cores
- Positioning Systems: RTK, GPS, GLONASS, BeiDou, Galileo | RTK Module, GPS, GLONASS
- Max Transmission Distance: ~15 km | ~8 km
- Weight: 6300 g | 4910 g
- Max Directional Speed: 83 km/h | 73.8 km/h
- Max Altitude: 7000 m | 3000 m
- Max Flight Time: ~55 min | ~33 min
- Max Flight Range: ~15 km | ~8 km
- Battery Capacity: 5935 mAh | 4280 mAh
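For platform selection, the specifications in Table 2 lend themselves to a simple programmatic comparison. The sketch below is a minimal example: the field names and the `rank_by` helper are assumptions of this illustration, and only a subset of platforms is transcribed from the table.

```python
# Illustrative data structure for a subset of the Table 2 specifications.
# Field names are assumptions of this sketch; values come from the table.
UAV_SPECS = {
    "DJI Phantom 4 Pro V2.0": {"weight_g": 1375, "max_speed_kmh": 72, "flight_time_min": 30, "range_km": 10},
    "Parrot Anafi USA": {"weight_g": 644, "max_speed_kmh": 53, "flight_time_min": 32, "range_km": 5},
    "DJI Mavic 3 Enterprise": {"weight_g": 1050, "max_speed_kmh": 68.4, "flight_time_min": 45, "range_km": 32},
    "DJI Matrice 300 RTK": {"weight_g": 6300, "max_speed_kmh": 83, "flight_time_min": 55, "range_km": 15},
}

def rank_by(field: str, specs=UAV_SPECS) -> list:
    """Return platform names sorted highest-first by the given numeric field."""
    return sorted(specs, key=lambda name: specs[name][field], reverse=True)
```

On this subset, ranking by `flight_time_min` places the DJI Matrice 300 RTK (~55 min) first, consistent with the table.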
Table 3. Comparison of three reviewed approaches used for capturing SA by UAVs.

WSN and Distributed Sensors
Advantages:
- Programmable and self-configuring.
- Simple and cheap installation.
- Low deployment and maintenance costs.
- High coverage of large landscapes.
- Real-time monitoring and analysis due to lightweight data.
- Uninterrupted connectivity.
- Low collision risk.
- Energy-efficient data transmission.
- Suitable for real-time data transfer and analysis.
Limitations:
- Uneven energy consumption among sensors.
- Extra time burden from sleep/wake-up mechanisms.
- Disrupted connections between distant sensors.
- Difficulty maintaining active communication over lossy wireless links.
- Rapid battery depletion of power-hungry sensors.
- High overhead and battery usage from frequent scanning.
- Accurate timestamping requires extra computation.
- Balancing power between data transmission and aerial movement is crucial.

Vision Systems
Advantages:
- Efficient tools for capturing high-resolution, detailed visual data.
- Robust visual analysis for terrain mapping, object recognition, and obstacle detection.
- Couple with image processing/computer vision techniques for data interpretation and information extraction.
- Fusion of visual data enables detailed 3D maps for precise localization, route planning, and situational analysis.
Limitations:
- Vulnerable to lighting conditions and obstructed views.
- High energy consumption reduces vehicle endurance.
- Intensive computational resource demands for onboard processing.
- Fast battery depletion due to computation and transmission.
- Centralized processing requires wide bandwidth.
- Continuous image transmission quickly drains the battery.
- Suffer from transmission latency issues.
- Online access to a live video feed is challenging for time-critical missions.
- Long-range data offloading remains a significant challenge.

LiDAR and Onboard Sensor Sets
Advantages:
- Provides 3D localization with admissible accuracy.
- Supports depth perception, elevation changes, spatial mapping, terrain modeling, and topographical analysis.
- Eliminates the need for cameras, offering a computationally efficient alternative to vision sensors.
- Maximizes data usage without requiring extra sensors, despite restricted vertical resolution and lack of color information.
- Highly accurate, fast, and easy to mobilize.
Limitations:
- Lack of colorization and contextual detail.
- High-cost technology.
- Integration challenges with small UAVs.
- Precision and accuracy depend on sensor quality and calibration.
- Affected by atmospheric conditions.
- Prone to electronic or thermal noise.
- Ineffective in dense areas.
- Data processing requires expert skills.

Common Challenges
- Regulatory and ethical considerations.
- Multiple recharges needed to cover large landscapes.
- Weather conditions impact motion performance; wind resilience is needed.
- Collision avoidance capability is required in cluttered environments.
- Ensuring secure and reliable communication between the UAV and the ground station.
- Perception and situational awareness are affected by the quality and quantity of fused data.
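The limitations in Table 3 can also serve as pre-mission feasibility checks. The sketch below is illustrative only: the condition names, the rule table, and the `flag_limitations` helper are assumptions of this example, with the flagged messages taken from the table.

```python
# Illustrative pre-mission checker derived from the limitations in Table 3.
# Condition names and the rule table are assumptions of this sketch.
LIMITATION_RULES = {
    "vision": {
        "low_light": "vulnerable to lighting conditions",
        "long_range": "long-range data offloading is challenging",
    },
    "lidar": {
        "dense_area": "ineffective in dense areas",
        "bad_atmosphere": "affected by atmospheric conditions",
    },
    "wsn": {
        "sparse_nodes": "disrupted connections between distant sensors",
    },
}

def flag_limitations(modality: str, conditions: set) -> list:
    """Return the Table 3 limitations triggered by the given mission conditions."""
    rules = LIMITATION_RULES.get(modality, {})
    return [msg for cond, msg in rules.items() if cond in conditions]
```

A planner could run such a check for each candidate sensing modality before dispatch, e.g. flagging LiDAR when the survey area is densely cluttered.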
MahmoudZadeh, S.; Yazdani, A.; Kalantari, Y.; Ciftler, B.; Aidarus, F.; Al Kadri, M.O. Holistic Review of UAV-Centric Situational Awareness: Applications, Limitations, and Algorithmic Challenges. Robotics 2024, 13, 117. https://doi.org/10.3390/robotics13080117