1. Introduction
1.1. Context
Forest fires have become one of the most destructive natural disasters worldwide, causing catastrophic losses [1,2]. In 2017, rural wildfires in Portugal destroyed a large area of forest and farmland and caused the death of 114 people. In the aftermath of this event, the Portuguese government launched an enquiry to determine the relevant factors that led to this disaster. The experts designated for this task concluded that one of the main factors was the high density and continuity of vegetation in urban–forest and road–forest interfaces.
Figure 1 shows a site where the density of trees near a road contributed to several casualties.
As a result, a new law was created (law 10/2018) [3], which aims to control the density of plant biomass near population centres, buildings, and roads in rural areas. It specifies buffer zones with minimum distances between tree canopies, minimum distances between tree canopies and buildings, and maximum density of the shrub layer near buildings.
Figure 2 illustrates the main buffer zones around buildings.
There are two major mandatory buffer zones: a 10-m buffer zone (A) adjacent to buildings and an additional 40-m buffer zone (B) to the unmanaged wildland, meaning that there must be at least a 50-m zone around buildings where the vegetation and trees must be managed. In zone A, there are two sub-zones with different requirements: in the first 5 m, no trees or shrubs are allowed, but in the following 5 m, small trees and shrubs are allowed, provided they are low-flammable species, are watered, and do not establish horizontal or vertical fuel continuity. In zone B, isolated trees are allowed if their crowns are separated by at least 4 m. However, pine and eucalyptus are considered more dangerous than other species, and the minimum distance between their crowns is 10 m.
For roads and railways, the rules are simpler: there must be a single 10-m buffer zone between the road and the wildland with mandatory fuel management, as can be seen in Figure 3.
Currently, the public authorities responsible for enforcing this law face serious challenges, because they rely solely on direct observations and manual measurements but lack the human resources needed to assess the huge area of wildland–urban/road interfaces that exist in Portugal. There is therefore an opportunity to use technology to automate and simplify the task of verifying compliance with this law, using aerial images, artificial intelligence, and web-based mapping frameworks.
Unmanned aerial vehicles (UAVs), commonly known as drones, have emerged as important tools for tackling various aspects of forest fire science, from prevention to detection [4]. Currently, drones present several advantages when compared with traditional remote sensing platforms, such as satellites and manned aircraft, and both LiDAR and photogrammetry techniques are feasible solutions for measuring and monitoring aspects of complex forest structures [5].
The EU Horizon CHAMELEON Project [6], developed by a consortium of twelve partner organisations distributed across nine European countries, aims to optimise production and identify potential problems in agriculture, livestock, forestry, and rural areas, using drones and software bundles for specific purposes. This study aims to explore the use of drones and some of the CHAMELEON bundles to verify compliance with mandatory fuel management in buffer zones, helping to reduce the risk of forest fires spreading near buildings and roads. This is important because it can leverage the automation of compliance checks and allow a massive expansion of fire prevention operations in wildland–urban interfaces.
1.2. Related Work
The use of remote sensing technologies plays an important role in fire ecology, including risk mapping, fuel mapping, and active fire detection [5]. Guimarães et al. [7] have shown that drones present several advantages when compared with traditional remote sensing platforms, like satellites and manned aircraft, namely, better spatial and temporal resolutions and lower costs. They concluded that drones with LiDAR and RGB sensors are going mainstream and their importance for decision support is becoming increasingly relevant for researchers and foresters, as well as related business professionals. They also confirmed that LiDAR and photogrammetry techniques are both feasible solutions for measuring and monitoring aspects of complex forest structures.
Tomljanović et al. [8] confirm that drones equipped with RGB sensors are a cost-effective option for collecting high-resolution 3D point clouds of forests. They also state that forest fire monitoring and management, as one aspect of integrated forest protection, was one of the first fields that confirmed the importance of drones in forestry applications.
While using photogrammetry techniques based on RGB sensors proved to be a cost-effective way of collecting information, LiDAR techniques provide better results, as Fernández-Álvarez et al. have demonstrated [9]. They used ultra-high-resolution LiDAR point clouds to characterize forest fuels for wildfire prevention and management, allowing for the detection, measurement, and characterization of individual trees and even shrubs. They applied the LiDAR-based methodology to characterize the forest fuels in a wildland–urban interface and along infrastructures, for wildfire protection purposes. Their study also describes the buffer zones that are mandatory in Spain, which are different from those defined in Portuguese law.
Arkin et al. corroborate that using high-density point clouds has the potential to estimate vegetation metrics with high accuracy [10]. Their study demonstrates the relative ability of remotely piloted aerial system (RPAS) LiDAR, digital aerial photogrammetry (DAP), and mobile laser scanning (MLS) point clouds to estimate key canopy and surface fuel metrics used to model fire behaviour.
AI is also being widely used in this field: Boroujeni et al. have pointed out how the use of AI, UAVs, and deep learning models has created unprecedented momentum to implement and develop more effective wildfire management [11]. Identifying the type of biomass to estimate the risk of fire propagation in wildland–urban interfaces is one of the topics that could benefit from AI methodologies. Andrada et al. were able to use a drone with a custom sensor payload comprising LiDAR, stereo cameras, an inertial measurement unit (IMU), and a multispectral camera to identify several types of biomass (fuel), using a semantic segmentation model [12]. They concluded that their system provides a comprehensive solution for forest monitoring with drones, enabling accurate data collection and mapping for effective forest management and conservation.
Kwon et al. [13] have demonstrated how a deep-learning image segmentation method, using a mask region-based convolutional neural network (Mask R-CNN), can be used to detect and identify trees in a dense forest. Trenčanová et al. were also able to use convolutional neural networks (CNNs) with a segmentation network architecture (U-Net) to detect shrubs in heterogeneous landscapes [14]. Carbonell-Rivera et al. went even further: they successfully used a consumer drone with a multispectral camera to classify shrub and tree species in Mediterranean forests, using point clouds obtained from UAV-based digital aerial photogrammetry [15].
Pham et al. have successfully employed an artificial bee colony-adaptive neuro-fuzzy inference system (ABC-ANFIS) to evaluate the risk of wildfires in Dak Nong, a province located in the highland region of Vietnam [16]. Liz-López et al. created a novel wildfire assessment model (WAM) based on a residual-style convolutional network architecture [17]. It performs regression over atmospheric variables and the greenness index, and is able to predict the necessary resources, the control and extinction time, and the expected burnt surface area. This model is useful for anticipating the economic and ecological impact of a wildfire, assisting managers in resource allocation and decision-making.
The automatic identification of anthropogenic objects from aerial images and the computation of buffer zones have been investigated in several studies. Caggiano et al. [18] demonstrated that it was possible to detect individual buildings within the wildland–urban interface using a semi-automated Object-Based Image Analysis (OBIA) approach, which utilizes 4-band multispectral National Agriculture Imagery Program (NAIP) imagery. However, this kind of imagery has a low resolution (50 cm to 100 cm per pixel), and their approach struggles to detect smaller buildings. Moreover, their study calculates buffer zones around buildings but does not evaluate the presence of trees or vegetation within these buffer zones. In their conclusion, they point out that using artificial neural networks could be a better approach for detecting buildings from aerial imagery.
The vulnerability of wildland–urban interfaces has also recently gained attention from researchers, as they pose significant risks to lives and property. Aguirre et al. examined seven fire case studies in central Chile, using satellite and drone imagery and GIS-based analysis of the collected data [19]. They concluded that spatial arrangement factors like distance to vegetation have a greater impact on damage prediction than the structural conditions and fire preparedness of individual buildings. Ortega et al. studied the general effectiveness of buffer zones in wildfires in southern Spain [20]. They concluded that buffer zones were indeed effective in containing fire, especially when supported by ground and aerial firefighting. This confirms the importance of buffer zones around buildings and roads. Novo et al. proposed a methodology to detect the continuity of vegetation near roads, based on aerial LiDAR point clouds in combination with point cloud processing techniques [21]. Their study emphasises the importance of forest management around roads in preventing forest fires and mitigating their effects.
The main gap in this body of knowledge is that these studies address only parts of the topic of this manuscript; no study integrates all these methodologies into a complete system (from planning the drone flight to the final report) to verify compliance with mandatory fuel management in buffer zones. There is also a specific gap mentioned by Caggiano et al. [18]: the automatic detection of building contours using CNN methodologies. Finally, none of these studies explored methodologies for automatically delineating the boundaries of multi-layer buffer zones, as defined by the Portuguese legislation for the protection of buildings.
1.3. Objectives of This Work
The main objective of this work is to create a set of tools, based on off-the-shelf drones equipped with RGB cameras, artificial intelligence, and CHAMELEON bundles, to check compliance with the mandatory buffer zones around buildings and roads, as determined by the Portuguese law 10/2018 for rural areas. These tools should be evaluated in a small-scale pilot, using drones to capture high-resolution aerial imagery of designated areas and then automatically determining whether these areas comply with the applicable rules regarding vegetation management inside buffer zones. A designated area is a polygonal area defined by the user in a WebGIS, which contains roads and/or buildings with a potential risk of fire propagation that is worth analysing.
The tools should be able to identify the type of designated area (isolated building, urban–forest interface, road) and automatically delimit the zones that need to be assessed (buffer zones), according to the type of forest interface and the minimum size of the buffer zone, stipulated by law for each case. The tools should also be able to generate a high-resolution orthophoto and a 3D point cloud of the designated area from a set of aerial photos captured by an off-the-shelf drone. These data (especially the 3D point cloud) are required by the CHAMELEON bundles to extract information about trees and vegetation in the area.
The tools should use two different CHAMELEON bundles: BC1 (vegetation monitoring and census) and BC3 (continuity of vegetation), for extracting several different metrics from the data. After receiving the processed data from the bundles, the tool should be able to automatically determine if a designated area complies with the law, and in case of non-compliance, it should display a WebGIS map highlighting the specific zones that do not comply, informing the user of the reason, and suggesting a way to clear the vegetation in order to remove the non-compliances.
The tool should be developed specifically to assess compliance with Portuguese law but prepared to be easily parameterised to assess different requirements for vegetation density and/or distances, so that it can be used on a European scale.
2. Materials and Methods
2.1. The System Architecture
The architecture and the components of the system that were developed are shown in Figure 4, which also illustrates the flow of data across the system. It all starts with a designated area, which is defined by the end user by drawing a polygon on a WebGIS interface based on Mapbox GL v3. This polygon represents the area that the user wants to evaluate.
The designated area is sent to another module, the drone’s mission planner, which calculates an optimised flight path over the designated area, considering the requirements of each bundle; for example, image resolution and overlapping. After the flight over the designated area using the waypoints calculated by the mission planner, the mosaic of high-resolution images is then processed to generate a high-resolution orthophoto map and a 3D point cloud file, which are required by the CHAMELEON bundles.
Bundle BC1 (vegetation monitoring and census) is used to identify individual trees and extract their traits, namely the crown projection area, allowing the measurement of the distances between tree canopies. Bundle BC3 (continuity of vegetation) is used to identify clusters of connected trees in the designated area.
After the bundles process the input data, the generated results are sent to a post-processor module, which merges all the results and determines if the designated area complies with the parameterised vegetation density and minimum distances between tree crowns. In case of non-compliance, the post-processor displays a WebGIS interface with the designated area, a layer showing the zones that do not comply, and a report describing the reasons for the nonconformity.
2.2. Use Cases
The preparation of the data acquisition started by choosing the designated areas for two different use cases, selecting locations for samples, and requesting official authorization to fly over the designated areas. The designated areas were chosen near the “Pedrógão Grande” region (the location of the deadly wildfires of 2017). This area has a high density of forest, with many challenging urban–forest interfaces that are ideal for validating the tools and the CHAMELEON bundles.
Two different types of designated areas were envisaged: a road heavily surrounded by forest and a site with an isolated building in the forest, with a problematic urban–forest interface. The first designated area is a segment of road N236, near the village of “Derreada Cimeira” (Castanheira), at GPS coordinates 39°58′53″ N 8°09′24″ W, which is illustrated on the left side of Figure 5. This segment was chosen because of the high density of trees next to the road. It is a good example of the high density of vegetation that surrounds many of the region’s roads.
The second use case addresses a site with a single isolated building, near coordinates 39°55′09.0″ N 7°55′28.3″ W, which is shown on the right side of Figure 5. This is an example that represents many other buildings in the area that have been authorised near forests with a high density of trees and vegetation.
2.3. Mission Planner
A mission planner tool was developed to prepare and automate image capturing by drones, with the following requirements:
It should be able to process a polygon consisting of GPS coordinates delimiting the designated area for each use case, allowing drawing and editing of the polygon directly on a WebGIS interface.
It should be able to calculate the GPS locations (waypoints) and the optimal path to capture the mosaic of images for the designated area, according to the sensor characteristics and bundle requirements concerning image overlapping and represent it on a WebGIS interface.
It should be able to export the waypoints as a Waypoint Markup Language (WPML) file (KMZ file), which could then be used with drones compatible with WPML, such as some DJI drones. The use of a KMZ file allows fully automatic flight and image capture along the path.
The module was successfully developed and tested, and all the requirements were met. Initially, the flight pattern was based on south–north sweeps, but testing showed that east–west patterns with a gimbal pitch deviating a few degrees from nadir (−90°) were more effective for point cloud generation.
Figure 6 shows the interface of the mission planner module, where the user can define several parameters related to the drone’s camera, namely the horizontal and vertical resolution, its field of view (FOV), and the gimbal pitch to use. The user can also define the parameters related to the bundle’s recommendations regarding the captured images, specifically resolution in cm/pixel, forward overlapping, and lateral overlapping.
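The relationship between these parameters and the resulting flight plan can be sketched as follows. This is an illustrative calculation only, not the module's actual code: the function name is hypothetical, and a simple nadir-pointing pinhole-camera model is assumed.

```python
import math

def flight_parameters(gsd_cm, img_w_px, img_h_px, hfov_deg,
                      forward_overlap, lateral_overlap):
    """Derive flight altitude and waypoint spacing (hypothetical helper).

    gsd_cm: desired ground resolution in cm/pixel.
    hfov_deg: horizontal field of view of the camera.
    forward_overlap / lateral_overlap: fractions in [0, 1).
    """
    # Ground footprint (in metres) needed to achieve the requested GSD.
    footprint_w = gsd_cm / 100.0 * img_w_px
    footprint_h = gsd_cm / 100.0 * img_h_px
    # For a nadir-pointing camera, footprint_w = 2 * h * tan(HFOV / 2).
    altitude = footprint_w / (2 * math.tan(math.radians(hfov_deg) / 2))
    # Consecutive images along a sweep must overlap by forward_overlap;
    # adjacent sweeps must overlap by lateral_overlap.
    forward_spacing = footprint_h * (1 - forward_overlap)
    lateral_spacing = footprint_w * (1 - lateral_overlap)
    return altitude, forward_spacing, lateral_spacing

# Example: 1 cm/pixel with a 4000 x 3000 sensor, 70° HFOV, 80%/70% overlap.
alt, fwd, lat = flight_parameters(
    gsd_cm=1.0, img_w_px=4000, img_h_px=3000, hfov_deg=70,
    forward_overlap=0.8, lateral_overlap=0.7)
```

With these example values, the plan calls for an altitude of roughly 29 m, waypoints every 6 m along each sweep, and sweeps 12 m apart.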
The module calculates and shows the waypoints (represented as yellow dots in Figure 6) where images must be captured to comply with the required parameters. The path is initially calculated as a simple east–west linear sweep that covers the designated area, but it can be further optimised using a nearest neighbour algorithm followed by a 2-opt optimisation algorithm. Our tests have shown that the linear sweep solution is good enough for most cases: the optimisation steps can slightly reduce the number of points and the total distance travelled, but these marginal gains hardly justify the need for optimisation.
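A minimal version of this two-stage optimisation (greedy nearest neighbour construction followed by 2-opt improvement) can be sketched as follows; the function names are illustrative, and this is a simplified sketch rather than the tool's implementation.

```python
import math

def tour_length(pts, order):
    # Total length of the open flight path visiting pts in the given order.
    return sum(math.dist(pts[order[i]], pts[order[i + 1]])
               for i in range(len(order) - 1))

def nearest_neighbour(pts):
    # Greedy construction: start at waypoint 0 and always fly to the
    # closest unvisited waypoint next.
    unvisited = set(range(1, len(pts)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def two_opt(pts, order):
    # Local improvement: reverse any sub-path whose reversal shortens the
    # route, and repeat until no improving move remains.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if tour_length(pts, candidate) < tour_length(pts, order) - 1e-9:
                    order, improved = candidate, True
    return order

# Toy example: four waypoints on a square.
waypoints = [(0, 0), (0, 10), (10, 10), (10, 0)]
route = two_opt(waypoints, nearest_neighbour(waypoints))
```

On a regular linear sweep the greedy route is already near-optimal, which is consistent with the observation above that the optimisation steps bring only marginal gains.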
These waypoints can then be saved in CSV, PDF, or KMZ format (WPML) so that they can be exported to KMZ-compatible drones (such as some DJI drones), allowing automated mapping operations.
2.4. Data Generated from the Aerial Images
The captured images for each use case were processed by OpenDroneMap (ODM) software v3.5.4 [22] to generate the high-resolution orthophoto map and 3D point cloud file needed by the CHAMELEON bundles to extract information about trees and vegetation.
Figure 7 illustrates the orthophoto (left) and 3D point cloud (right) of the Castanheira use case.
Figure 8 shows the orthophoto (left) and 3D point cloud (right) of the Oleiros use case.
ODM was unable to determine the 3D point cloud of some trees, as the large number of similar trees near each other means that it is difficult to extract unambiguous geo-referenced features. A methodology using LiDAR to enhance the model would probably provide better results.
2.5. Defining the Buffer Zones Around Buildings and Roads
The pre-processor module should be able to identify the relevant objects (buildings, roads) and calculate the mandatory buffer zones around them, in accordance with the law. The goal was to use AI methodologies to detect these objects and then automatically calculate the buffer zones around them. The module should also allow the user to define those objects manually or edit the results generated by the AI components, because they are not 100% accurate.
After some research, it was decided to use YOLOv11 [23] as the AI model for detecting and segmenting buildings and roads from aerial images, due to its speed and efficiency. However, no pretrained model with these characteristics was found; therefore, a new model had to be trained from scratch. Training YOLO models requires a large dataset in a specific format, and the Inria Aerial Image Labeling Dataset [24] was selected as the most suitable source to train the model to detect buildings. However, the images in this dataset are large, and the training labels are delivered as image masks instead of polygons, so the dataset had to be converted to smaller images and polygon metadata compatible with YOLO segmentation training. The converted dataset contains 8820 images with a resolution of 640 × 640 pixels, 8379 of which were used for training and 441 for validation.
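For illustration, once a building contour has been extracted from an Inria mask (e.g., with OpenCV's contour detection), writing it out as a YOLO segmentation label amounts to normalising the pixel coordinates to the [0, 1] range. The helper below is a hypothetical sketch of that final step, not the conversion script used in this study.

```python
def yolo_seg_label(class_id, polygon_px, img_w, img_h):
    # YOLO segmentation labels are one text line per object:
    # "<class> x1 y1 x2 y2 ..." with coordinates normalised to [0, 1].
    coords = []
    for x, y in polygon_px:
        coords.append(f"{x / img_w:.6f}")
        coords.append(f"{y / img_h:.6f}")
    return f"{class_id} " + " ".join(coords)

# A square building contour inside a 640 x 640 tile (class 0 = building).
line = yolo_seg_label(0, [(64, 64), (576, 64), (576, 576), (64, 576)], 640, 640)
```

One such line is written per building into the `.txt` label file that accompanies each image tile.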
Training the model required a powerful GPU with a large amount of VRAM, so it was trained on the cloud platform Paperspace [25], using an instance with 8 vCPUs, 45 GB of RAM, and an NVIDIA A6000 GPU with 48 GB of VRAM. Training started with a learning rate of 0.01 (the default) and a batch size of 8. During training, the learning rate was dynamically adjusted by YOLO’s learning rate scheduler. After 64 epochs, the model converged, and training was stopped. The segmentation performance metrics of the model are as follows: precision: 0.871, recall: 0.835, and mAP50: 0.888.
The trained model was embedded in a Python v3.11.9-based API, which is consumed by the WebGIS JavaScript interface of the tool. To use the AI segmentation, the user just needs to move the map to the desired location and press a button.
Figure 9 shows an example of AI segmentation on a site with a building (the building in the Oleiros use case).
The lines around the building delimit the buffer zones that are automatically calculated after the building’s contour is detected. The first buffer zone is a 10 m buffer around the walls of the building, and the second one is a 50 m buffer from the walls (i.e., 40 m beyond the first buffer zone). Each of these buffer zones must comply with different requirements defined by law: in the first buffer zone, trees and wild vegetation are essentially forbidden, whereas in the second buffer zone, trees are allowed, but with a minimum distance of 4 m between their crowns.
These buffer zones are calculated by a JavaScript module that encodes the rules determined by Portuguese laws, since at this stage, the aim was only to check compliance with Portuguese legislation. However, this module can easily be swapped with different versions, each capable of calculating buffer zones according to different national laws.
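As an illustration of the underlying geometry (sketched here in Python, although the tool's module is written in JavaScript), a parameterised zone classifier could look like the following. The function names, the point-to-segment distance formulation, and the assumption of a projected metric coordinate system are ours, not the tool's; the parameterised zone widths reflect the idea that the module can be swapped to match other national laws.

```python
import math

def point_segment_dist(p, a, b):
    # Euclidean distance from point p to the segment a-b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def buffer_zone(tree_xy, building_polygon, zone_a=10.0, zone_b=50.0):
    # Classify a tree position by its distance (in metres, assuming a
    # projected CRS) to the nearest building wall. Zone widths are
    # parameters so the rules of other countries can be plugged in.
    n = len(building_polygon)
    d = min(point_segment_dist(tree_xy, building_polygon[i],
                               building_polygon[(i + 1) % n])
            for i in range(n))
    if d <= zone_a:
        return "A"        # 0-10 m: no trees or wild vegetation allowed
    if d <= zone_b:
        return "B"        # 10-50 m: trees allowed with crown separation
    return "outside"

# A 20 m x 20 m building footprint.
building = [(0, 0), (20, 0), (20, 20), (0, 20)]
```

For this footprint, a tree 5 m from the east wall falls in zone A, one 35 m away falls in zone B, and one 70 m away lies outside both buffers.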
3. Results
The post-processor module receives the output from the bundles and determines which vegetation and trees in the buffer zones do not comply with the law and should, therefore, be cut down. The post-processor uses data from CHAMELEON bundles BC1 and BC3 to identify the vegetation and trees inside the buffer zones and then applies an algorithm to select the zones where vegetation and trees should be cut down. In the first buffer zone (the 10 m buffer zone), the operation is straightforward, as all vegetation and trees must be removed. However, in the second buffer zone (the 10 m to 50 m buffer zone), the algorithm is more complex, because trees are allowed if their crowns are separated by at least 4 m.
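The zone B rule can be illustrated with a small sketch that models crowns as circles; the tree representation and function names are assumptions made for illustration, and the 10 m threshold for pine and eucalyptus follows the law described in Section 1.1.

```python
import math

CROWN_GAP_M = {"pine": 10.0, "eucalyptus": 10.0}   # stricter species
DEFAULT_GAP_M = 4.0

def crown_gap(t1, t2):
    # Crowns modelled as circles: gap = centre distance minus both radii.
    (x1, y1, r1, _), (x2, y2, r2, _) = t1, t2
    return math.dist((x1, y1), (x2, y2)) - r1 - r2

def non_compliant_pairs(trees):
    # Each tree: (x, y, crown_radius, species), coordinates in metres.
    # A pair violates the rule when the crown gap is below the required
    # minimum: 4 m in general, 10 m if either tree is pine or eucalyptus.
    bad = []
    for i in range(len(trees)):
        for j in range(i + 1, len(trees)):
            required = max(CROWN_GAP_M.get(trees[i][3], DEFAULT_GAP_M),
                           CROWN_GAP_M.get(trees[j][3], DEFAULT_GAP_M))
            if crown_gap(trees[i], trees[j]) < required:
                bad.append((i, j))
    return bad

# Two oaks 3 m apart (gap below 4 m) and a pine 6 m from an oak
# (gap below the stricter 10 m threshold) both violate the rule.
trees = [(0, 0, 2, "oak"), (7, 0, 2, "oak"),
         (40, 0, 2, "pine"), (50, 0, 2, "oak")]
```

In the real system, trees flagged this way are the candidates highlighted in red on the WebGIS map.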
Figure 10 shows the result of the compliance check for the Castanheira use case. The red masks indicate trees and vegetation inside the buffer zones that do not comply with the law and must, therefore, be cut down.
This first use case is very simple because only the first buffer zone (10 m) is required. However, the Oleiros use case is more complex because there is a road and also a building, which requires two different buffer zones. The result of the compliance check for this use case can be observed in Figure 11. The red zones represent vegetation and trees that must be cut down to comply with the law and, thus, reduce the risk of wildfire propagation near the building.
The accuracy of the compliance check depends directly on the accuracy of the CHAMELEON bundles that extract information from the 3D point-cloud files. In practice, the bundles are not faultless and generate two types of errors: false positives (areas wrongly classified as trees or vegetation) and false negatives (trees and vegetation not classified as such).
The accuracy of the bundles was estimated using a black-box testing approach, feeding them the data from the Oleiros use case and comparing their output with the expected results (ground truth). The calculated accuracy is shown in Table 1.
If only the BC3 bundle were used for the compliance check, its high rate of false negatives would have a major impact on the accuracy of the compliance results, as it fails to detect a substantial amount of vegetation. However, the compliance check module joins the detection results of bundle BC3 with the results of bundle BC1 (a union operation); therefore, the high rate of false negatives of bundle BC3 is compensated by the low rate of false negatives of bundle BC1. To put it simply, vegetation that is not detected by bundle BC3 will most likely be detected by bundle BC1.
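The union operation can be illustrated on a rasterised detection grid. This toy sketch assumes detections are reduced to sets of grid cells, which is a simplification of the bundles' actual output format.

```python
def merge_detections(bc1_cells, bc3_cells):
    # Union of the two bundles' vegetation detections: a cell missed by
    # one bundle (false negative) is still flagged if the other bundle
    # detects it, so the merged false-negative rate is at most that of
    # the better bundle.
    return bc1_cells | bc3_cells

# Toy grids of (row, col) cells flagged as vegetation.
bc1 = {(0, 0), (0, 1), (1, 1)}       # BC1 misses cell (2, 2)
bc3 = {(1, 1), (2, 2)}               # BC3 misses cells (0, 0) and (0, 1)
merged = merge_detections(bc1, bc3)  # all four cells recovered
```

The trade-off is that the merged result also inherits the false positives of both bundles, which is acceptable here because false negatives (missed vegetation) are the more costly error for fire prevention.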
4. Discussion
The main objective of this study was to check whether inexpensive off-the-shelf drones equipped with standard RGB cameras could be used to detect the excess of trees and vegetation in buffer zones around buildings and roads, as defined by Portuguese law.
To accomplish this wider goal, several smaller challenges had to be tackled. The first one was related to the input of the CHAMELEON bundles, which require a high-resolution orthophoto map and a high-quality 3D point-cloud file. Therefore, the system had to be able to create these files from a mosaic of aerial RGB images captured by the drone. This was accomplished by using the ODM software, which easily created high-quality orthophoto maps from almost any type of image mosaic, as long as the images were sufficiently overlapped.
Creating high-quality 3D point clouds of trees and vegetation with ODM proved more complicated, requiring several adjustments until the results were satisfactory. The best results were obtained with east–west sweeps, strong image overlap, and a gimbal pitch several degrees off-nadir. Even the best results were not perfect: some trees were not correctly represented, especially in areas with a high density of similar trees. Using a methodology that fuses photogrammetry and LiDAR data [26] would probably lead to better results and is worth exploring to increase the accuracy of tree detection. In addition, LiDAR data could be used to create a digital terrain model (DTM) [27], which would serve as a ground reference for detecting trees from crown images. However, LiDAR sensors for drones are still very expensive: the entry-level LiDAR sensor from a popular drone brand costs EUR 14,520, while a high-quality RGB drone from the same brand can cost just over EUR 1100. This substantial cost difference highlights the importance of continuing to explore and improve methods for detecting trees and vegetation using only RGB sensors.
Another challenge that the system had to overcome was the automatic recognition of buildings using AI and the automatic definition of buffer zones around buildings and roads. The automatic segmentation of buildings was facilitated by the YOLO segmentation model, which is easy to train and use. However, the AI model is not infallible and occasionally generates errors of three types: false positives (a structure that is not a building is identified as one), false negatives (a building is not identified as such), and contour errors (the polygon generated by the model does not exactly match the contours of the building). Although the software allows the user to manually correct the gross errors produced by AI segmentation, small errors in the building’s contour, if ignored, propagate to the calculation of the buffer zones. This error propagation is linear, because the boundaries of the buffer zones are defined as a linear distance from the closest point on the building’s contour.
After the segmentation of buildings (using AI) and roads (manually), the automatic definition of buffer zones was implemented in a separate module, using standard geospatial libraries. This modular approach will allow the incorporation of different modules to calculate the boundaries of buffer zones in other countries, according to local legislation and requirements.
The representation of the buffer zones over the high-resolution orthophoto map, without any additional data, already provides valuable insights to the user. In fact, using drones to capture photos from the ground allows for high-resolution orthophoto maps (1 cm per pixel is easily achievable), where vegetation and trees are then easily spotted by a human user. However, the system is able to automatically identify the offending vegetation and trees within the buffer zones, thanks to the information extracted by the CHAMELEON bundles from the 3D point cloud files. Therefore, the user can be directly warned about the zones within the buffer zones that need attention, which is the main objective of this system.
If this system is deployed as a commercial service for external customers, it must be able to handle multiple concurrent tasks, so it is important to identify potential performance bottlenecks. All operations related to displaying map tiles and orthophotos are handled locally, using JavaScript in the browser, and by a scalable backend managed by the map provider (Mapbox). Therefore, adding concurrent use cases does not overload the FIRECOM server, as the code runs on clients and the scalability of map tiling is ensured by the map provider.
Regarding AI segmentation, which is carried out by the FIRECOM server, one advantage of using YOLO is its performance in inference operations. In fact, the time required to segment an image with a resolution of 640 × 640 pixels has been measured, and it takes only 260 ms on average to complete the task, using a server with ordinary specifications. As this operation is normally carried out only a few times per use case, we do not foresee any performance issues related to AI segmentation.
The real bottleneck of this system is the generation of the orthophoto and 3D point cloud files by ODM. A large use case can easily contain hundreds of high-resolution images, which can take several hours to process on a single server. This means that if several use cases have to be processed at the same time, a horizontally scalable architecture must be adopted to guarantee a reasonable response time.
5. Conclusions
This study aimed to develop and evaluate a complete system to check compliance with the mandatory buffer zones around buildings and roads, as determined by Portuguese law 10/2018 for rural areas. The system has been fully developed and is based on off-the-shelf drones equipped with RGB cameras, an AI model to detect the contour of buildings, and CHAMELEON bundles to detect trees and vegetation within buffer zones.
The AI model that was trained in this study is effective at detecting the contour of buildings from aerial images, with a precision of 0.871, a recall of 0.835 and a mAP50 of 0.888, and it was able to process the aerial images provided by Mapbox. The system is able to automatically delineate the buffer zones around buildings and roads, in accordance with Portuguese legislation. The WebGIS that has been developed is able to display the use case orthophoto on a map, show the boundaries of the buffer zones, and highlight trees and vegetation within buffer zones that are not in compliance. The results from the two use cases show that this system is effective in detecting non-compliant trees and vegetation within buffer zones.
All in all, the main conclusion of this study is that off-the-shelf drones equipped with standard RGB cameras can be effective at detecting non-compliant vegetation and trees within buffer zones. This can be used to identify and cut down the excess vegetation and trees in buffer zones, thus helping to reduce the risk of wildfire propagation in wildland–urban interfaces. The work carried out in this study has paved the way for the launch of a commercial service for estimating wildfire propagation risk in wildland–urban interfaces.