#### *4.1. Mission*

The mission has been designed based on current firefighting operations and on the research contributions addressed in Sections 2 and 3, respectively. It covers the tasks that could require the participation of the drone swarm, but excludes aerial extinguishing, which would require other types of drones that are currently under development.

• Prevention: This phase groups the tasks that seek to prevent fires from occurring and to control their spread.

Vegetation mapping: In this task, the drones fly over an area of interest to take ground pictures and build a vegetation map. The number of drones, the flight pattern and altitude, and other variables can be tuned to cover the area efficiently and obtain high-quality images. The drones must integrate conventional and multispectral cameras to perform this task. The base station processes the images, builds a mosaic, detects trees and plants, and recommends actions to the firefighters.

Fire investigation: This task is performed after a fire has been detected. The objective is to find evidence to identify and pursue the perpetrators of the fire. For this purpose, the drones must search around the fire to detect suspicious people, objects, and situations, monitor static targets, and track mobile targets. Although this task takes place after the fire has occurred and the drones have detected it, it is considered a prevention task because it can prevent further outbreaks of the fire. In practice, a few drones can perform this task while the rest carry out extinguishing tasks.

• Surveillance: This phase considers the tasks that seek to detect fires and alert firefighting teams early.

Risk mapping: This task is very similar to vegetation mapping, but it creates a map of fire risk. This map is useful for knowing in which areas a fire is more likely and for reinforcing surveillance over them. The drones must be equipped with conventional and thermal cameras to perform this task.

Fire surveillance: In this task, the drones fly over an area of interest looking for potential fires. When one of the drones detects a possible fire, it or another drone must fly closer to confirm it. For this purpose, the drones must integrate conventional and thermal cameras, as well as environmental sensors for temperature, humidity, and concentrations of combustion gases.

• Extinguishing: This phase groups the tasks aimed at extinguishing fires and supporting firefighters.

Fire monitoring: This task is performed to collect information about the fire while the teams on the ground extinguish it. Spatial and temporal information is useful to know the outline of the fire, locate new sources, and predict its evolution. For this purpose, the drones must fly around the fire to incorporate new information from the periphery while keeping the information from the center up to date. This task requires the same drone equipment as risk mapping and fire surveillance.

Firefighter support: This task aims at supporting the firefighters that are working on the ground to extinguish the fire. For this purpose, the drones must fly around the firefighting teams to collect data about their surroundings and recommend safe paths and effective actions to them. Additionally, the drones can transport light supplies to the firefighters, such as communication devices and protective equipment.
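The mapping tasks above depend on covering an area of interest systematically. As an illustration only (the text does not prescribe a particular pattern, and all parameter names are assumptions), a back-and-forth (boustrophedon) waypoint generator for a rectangular area could be sketched as:

```python
def lawnmower_waypoints(width, height, swath, altitude):
    """Generate a back-and-forth (boustrophedon) coverage path.

    width, height: dimensions of the rectangular area (m)
    swath: ground footprint width of the camera at the given altitude (m)
    altitude: flight altitude (m), kept constant for a uniform image scale
    """
    waypoints = []
    x = swath / 2.0  # center the first pass on the camera footprint
    leg = 0
    while x <= width:
        # Alternate the sweep direction on each leg
        ys = (0.0, height) if leg % 2 == 0 else (height, 0.0)
        waypoints.append((x, ys[0], altitude))
        waypoints.append((x, ys[1], altitude))
        x += swath
        leg += 1
    return waypoints

path = lawnmower_waypoints(width=100.0, height=60.0, swath=20.0, altitude=50.0)
```

Tuning the swath, derived from the camera footprint at the chosen altitude, trades the number of passes against image overlap, which is one of the variables mentioned above.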

#### *4.2. Drone Swarm*

The mission described in the previous section can be addressed by several types of aerial robot systems. The first approach is a heterogeneous drone fleet, in which different types of drones adapt to different types of tasks, increasing the efficiency of the whole mission. For instance, fixed-wing drones can perform the tasks that require covering large areas, such as surveillance and mapping, whereas rotary-wing drones can perform the tasks that require stationary flight, such as monitoring and support. However, our proposal uses a homogeneous drone swarm to solve this mission. This system relies on cooperation between drones to accomplish the tasks rather than on adapting them to specific tasks. In this case, the same type of drone can perform both surveillance and monitoring, but with a different number of units.

Both systems have advantages and disadvantages in the defined scenario. As already mentioned, heterogeneous fleets can optimize missions by allocating their different resources to different tasks. Additionally, these systems are easier to control because the drones have more capabilities and need less coordination. On the other hand, drone swarms are more scalable and more flexible in adapting to changes in the scenario. Moreover, they have better fault tolerance because they can recover from the loss of one or more members.

We adopt the definition of robot swarm drawn from [53]: a robot swarm is a group of simple robots that individually can only perform rudimentary actions, but collectively form an intelligent system able to perform complex tasks. Therefore, we consider a quadcopter fleet to be a robot swarm when the fleet consists of a dozen robots, a single robot cannot cover the target scenarios, and individual robots are not able to perform the considered tasks. Firefighting missions involve large and complex terrains, where single quadcopters can only collect local information and perform simple actions.

The quadcopters considered for this application shall have the following features:


The size and weight were established as a compromise between versatility and load capacity. On the one hand, the drones must be light enough to be transported to the fire area in a vehicle and deployed in the field by a single person. On the other hand, they must carry up to three cameras, environmental sensors, and communication devices. Finally, we have taken into account the impact of these parameters on flight range and maneuverability. Furthermore, autonomy is an essential aspect of the system: in practice, the longer the flight time of the drones, the better the viability of the system in real missions. Current high-performance commercial drones offer around 30 min of continuous flight, but this figure may increase in the coming years.

The navigation capabilities of the drones are another relevant aspect of the operation of the system. We have chosen to combine multiple sources to achieve high accuracy and fault tolerance. Specifically, we consider a high-performance IMU to provide linear acceleration, rotation rate, and orientation, as well as a GNSS receiver to obtain position, velocity, and time at high frequency. Additionally, the on-board cameras can extract terrain features, which allow estimating drone motion. Multiple models can integrate the data provided by these sources to obtain an accurate location of the drone, such as Kalman [54] and particle filters [55]. In this way, the drones can preserve enough autonomy to perform their tasks even in GNSS-denied or GNSS-degraded environments.
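As a minimal sketch of such a fusion scheme (a 1-D position-velocity state with illustrative noise levels `q` and `r`, not the full estimator of the system), one Kalman filter cycle combining IMU acceleration and a GNSS position fix could look like:

```python
import numpy as np

def kalman_step(x, P, accel, z_gnss, dt, q=0.5, r=4.0):
    """One predict/update cycle of a 1-D position-velocity Kalman filter.

    x: state [position, velocity]; P: 2x2 state covariance
    accel: IMU linear acceleration, used as the control input
    z_gnss: GNSS position measurement
    q, r: assumed process/measurement noise levels (illustrative values)
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
    B = np.array([0.5 * dt**2, dt])        # effect of acceleration
    H = np.array([[1.0, 0.0]])             # GNSS observes position only

    # Predict with the IMU acceleration
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.eye(2)

    # Update with the GNSS fix
    y = z_gnss - H @ x                     # innovation
    S = H @ P @ H.T + r                    # innovation covariance
    K = P @ H.T / S                        # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

A full implementation would use a 3-D state and add the camera-based motion estimates as a second measurement source, but the predict/update structure is the same.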

Finally, communications are often a challenge when applying drones in large and remote scenarios. In firefighting missions, there must be a continuous exchange of information between the different agents: data from the drones to the base station, commands from the base station to the drones, information from the base station to the firefighters, etc. Our proposal for maintaining these communications during the missions is to use the vehicles and robots involved in them as communication relays. Even so, we estimate that the drones must have a communication range of 5 km to enable this scheme in the considered scenarios.
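To illustrate this relay scheme (node names and positions are hypothetical; the 5 km link range is the only value taken from the text), a simple reachability check over the range graph could be:

```python
import math
from collections import deque

def connected_to_base(positions, base, comm_range=5000.0):
    """Return the set of nodes with a relay path to the base station.

    positions: dict name -> (x, y) in metres; any vehicle or drone can
    act as a relay. A link exists when two nodes are within comm_range.
    """
    nodes = dict(positions, base=base)

    def in_range(a, b):
        return math.dist(nodes[a], nodes[b]) <= comm_range

    # Breadth-first search outward from the base station
    reached, queue = {"base"}, deque(["base"])
    while queue:
        current = queue.popleft()
        for name in nodes:
            if name not in reached and in_range(current, name):
                reached.add(name)
                queue.append(name)
    return reached - {"base"}
```

For example, a drone 8 km from the base remains connected if a truck 4 km out relays its traffic, whereas a drone 20 km out is isolated.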

As shown in Figure 3, each quadcopter is only able to fly to waypoints and use its payload, whereas the whole fleet can spread over the scenario and perform the required tasks. For instance, a quadcopter can move through a list of waypoints taking images of the terrain, whereas the fleet can cover the whole area monitoring the evolution of the fire. This is made possible by the control and coordination algorithms executed by the drones, which allow them to make individual decisions based on local data that produce collective behaviors to perform global tasks. The most representative are behavior-based algorithms, whose efficiency has been validated for surveillance, search, and monitoring tasks in previous works [53,56,57].

Behavior-based algorithms usually consist of multiple behaviors, which process the available information and generate possible actions following different patterns, and a decision-making module, which fuses their outputs and computes the final action. Some common behaviors are inspired by nature, such as "keep distance" and "keep velocity", which are followed by bird flocks and fish shoals. Others are devoted to solving specific robot tasks, such as search and surveillance. In both cases, the behaviors have multiple parameters that can be tuned to adapt them to different scenarios.
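As a minimal sketch of this architecture (the "keep distance" behavior is named in the text; the waypoint behavior, gains, weights, and the weighted-sum fusion rule are illustrative assumptions), two behaviors and a decision-making module could look like:

```python
import math

def keep_distance(pos, neighbors, target_dist=30.0, gain=0.1):
    """Repel from close neighbors and attract to far ones (flock spacing)."""
    vx = vy = 0.0
    for nx, ny in neighbors:
        dx, dy = nx - pos[0], ny - pos[1]
        d = math.hypot(dx, dy) or 1e-9
        # Positive when too far (move closer), negative when too close
        vx += gain * (d - target_dist) * dx / d
        vy += gain * (d - target_dist) * dy / d
    return vx, vy

def go_to_waypoint(pos, waypoint, gain=0.05):
    """Steer toward the current task waypoint."""
    return (gain * (waypoint[0] - pos[0]), gain * (waypoint[1] - pos[1]))

def fuse(behavior_outputs, weights, v_max=5.0):
    """Decision-making module: weighted sum of behavior vectors, capped at v_max."""
    vx = sum(w * v[0] for w, v in zip(weights, behavior_outputs))
    vy = sum(w * v[1] for w, v in zip(weights, behavior_outputs))
    speed = math.hypot(vx, vy)
    if speed > v_max:
        vx, vy = vx * v_max / speed, vy * v_max / speed
    return vx, vy
```

The weights are among the tunable parameters mentioned above: changing them shifts the swarm between, e.g., tight formation flight and dispersed surveillance.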

The drone swarm shall perform the following generic tasks partially drawn from [58]:


These generic tasks can be used individually or in combination to represent the specific tasks of the firefighting missions described above. For instance, fire surveillance can be represented as a combination of surveillance and reconnaissance with fires as targets. The specific tasks and their corresponding generic tasks are listed in Table 1.


**Table 1.** List of tasks considered for firefighting missions.

#### *4.3. Team*

Regarding the crew, we consider three principal roles: mission commander, team leaders, and team members. There can be other roles according to the mission and scenario, such as analysts, maintenance workers, communications technicians, etc. As shown in Figure 3 and described below, each role entails different functions, access to information, workplace, and available actions.


#### *4.4. Infrastructure*

A minimal infrastructure is required for the operation of the system. This infrastructure consists of multiple elements that sustain the autonomy of the swarm, enable the communications among the agents, and allow the human-swarm interaction.

As mentioned above, autonomy is a major challenge for applying drone swarms to firefighting missions. Some of the tasks involve continuous flights over target areas, such as fire surveillance and monitoring, whereas others require rapid deployment in those areas, such as fire investigation and firefighter support. Therefore, the drones must be able to charge their batteries in the scenario to increase their availability during the missions. For this purpose, charging stations can be distributed throughout the scenario, even on the ground vehicles involved in the mission.

Adaptive and immersive interfaces can improve situational awareness and reduce the workload of operators in the considered missions. These benefits have been validated in similar missions, such as the control of multiple robots performing complex missions [59] and the analysis of the information collected by a drone swarm in a smart city [57].

These interfaces adapt their displays to the mission state and operator preferences in order to reduce the amount of information presented and the operator's workload. For this purpose, they can integrate mission and operator models. The former allow the interface to follow the state of the mission and select the relevant information accordingly, whereas the latter allow adapting the interface to the operator's preferences. The adaptation can be performed through artificial intelligence models such as neural networks.

These interfaces apply immersive technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR) to immerse the operator in the scenario, improving their perception of the environment where the robots are working. VR reproduces virtual environments and allows interaction with their elements; AR enhances real environments with virtual elements with which the operator can interact; and MR combines real and virtual elements and allows interaction with both [60].

In this work, we consider VR interfaces for the mission commander and AR interfaces for team leaders and members. The mission commander works away from the scenario, so they can focus on the information from the mission. A VR interface can reproduce the scenario, incorporating real-time information about the swarm and its environment and allowing the operator to move around the scene in search of the best point of view. Meanwhile, team leaders and members work in the scenario, so they must pay most of their attention to their surroundings. In this case, an AR interface can provide them with relevant information about the mission while keeping their attention on their environment.
