Algorithms
  • Article
  • Open Access

15 December 2022

Evolutionary Statistical System Based on Novelty Search: A Parallel Metaheuristic for Uncertainty Reduction Applied to Wildfire Spread Prediction

1 Consejo Nacional de Investigaciones Científicas y Técnicas—Centro Científico Tecnológico Mendoza (CONICET, CCT-Mendoza), Mendoza 5500, Argentina
2 Laboratorio de Investigación en Cómputo Paralelo/Distribuido (LICPaD), Facultad Regional Mendoza, Universidad Tecnológica Nacional, Mendoza 5500, Argentina
* Author to whom correspondence should be addressed.

Abstract

The problem of wildfire spread prediction presents a high degree of complexity, due in large part to the difficulty of providing accurate input parameters in real time (e.g., wind speed, temperature, soil moisture, etc.). This uncertainty in the environmental values has led to the development of computational methods that search the space of possible combinations of parameters (also called scenarios) in order to obtain better predictions. State-of-the-art methods are based on parallel optimization strategies that use a fitness function to guide this search, and the resulting predictions are based on a combination of multiple solutions from the space of scenarios. These methods have improved the quality of classical predictions; however, they have some limitations, such as premature convergence. In this work, we evaluate a new proposal for the optimization of scenarios that follows the Novelty Search paradigm. Novelty-based algorithms replace the objective function with a measure of the novelty of the solutions, which allows the search to generate solutions that are novel (in their behavior space) with respect to previously evaluated solutions. This approach avoids local optima and maximizes exploration. Our method, the Evolutionary Statistical System based on Novelty Search (ESS-NS), outperforms its competitors in quality in our experiments, and its execution times are faster than those of the other methods in almost all cases. Lastly, several lines of future work are provided in order to significantly improve these results.

1. Introduction

The prediction of the propagation of natural phenomena is a highly challenging task with important applications, such as the prevention and monitoring of forest fires, which affect millions of hectares worldwide every year, with devastating consequences for flora and fauna, as well as for human health, activities, and economies [1]. In most cases, forest fires are human-caused, although there are also natural causes, such as lightning, droughts, or heat waves. Additionally, climate change has effects, such as high temperatures and extreme droughts, that exacerbate the risk of fires. The prevalence of these phenomena makes it crucial to have methods that can aid firefighting efforts, e.g., the prevention of fires and the monitoring and analysis of fire spread on the ground. Basic tools for these analyses are fire simulators, which use computational propagation models in order to predict how the fire line progresses during a period of time. Examples of simulators are BEHAVE [2], FARSITE [3], fireLib [4], BehavePlus [5], and FireStation [6]. Fire spread prediction tasks can be performed either in real time or as a precautionary measure, for example, in order to assess areas of higher risk or to develop contingency plans that allocate resources according to predicted patterns of behavior.
Unfortunately, the propagation models for fire spread prediction involve a high degree of uncertainty. On the one hand, modelling natural phenomena involves the possibility of errors due to characteristics inherent to the computational methods used. On the other hand, the simulators require the definition of a number of environmental parameters, also called a scenario, and although these values can greatly influence the results of a prediction, they are often not known beforehand, cannot be provided in real time, or may be susceptible to measuring errors. These difficulties result in predictions that may be far from the actual spread, especially when using what we call a “classical prediction”, i.e., a single prediction obtained by simulating the fire with only one set of parameters.
Nowadays, there are several frameworks that follow strategies for reducing this uncertainty based on the combination of results from multiple simulations. There are solutions categorized as Data-Driven Methods (DDMs), which perform a number of simulations taking different scenarios as input. From these results, the system chooses the set of parameters that obtained a better prediction in the past and uses it as input for the prediction of the future behavior of the fire. Examples of this strategy are found in [7,8]. While there is an improvement over the single-simulation strategy, these methods still use a single scenario for the prediction. This can be a great limitation due to the uncertainty in the dynamic conditions of the fire, and the possibility of errors; that is, a good scenario for a previous step might not be as good for the next. For example, a scenario might have yielded good results only by chance, or it may be an anomalous case that does not generalize well to the fire progress.
Other approaches have set out to overcome this problem by combining results from multiple simulations and using these combined results for producing a prediction. Such methods are called Data-Driven Methods with Multiple Overlapping Solutions, or DDM-MOS. We provide a summary of the taxonomy of several methods in Figure 1. Except for the Classical Prediction, all methods shown in the figure belong in the DDM-MOS category. One of these solutions is the Statistical System for Forest Fire Management, or S²F²M [9]. This system uses a factorial experiment of the values of variables for choosing scenarios to evaluate, which consists of an exhaustive combination of values based on a discretization of the environmental variables. This strategy presents the challenge of having to deal with a large space of possible scenarios, which makes exhaustive simulation prohibitive. In order to produce acceptable results in a feasible time, other methods introduced the idea of performing a search over this space in order to reduce the sample space considered for the simulations. One of the frameworks that follow this strategy is the Evolutionary Statistical System (ESS) [10], which embeds an evolutionary algorithm for the optimization of scenarios in order to find the best candidates for predicting future propagation. Recently, other proposals based on ESS have been developed: ESSIM-EA [11,12] and ESSIM-DE [13]. These systems use Parallel Evolutionary Algorithms (PEAs) with an Island Model hierarchy: a Genetic Algorithm [14,15] and Differential Evolution [16], respectively. The optimization performed by these methods was able to outperform previous approaches, but it presented several limitations. The three frameworks mentioned use evolutionary algorithms that are based on a process that iteratively modifies a “population” of scenarios by evaluating them according to a fitness function. In this case, the fitness function compares the simulation with the real fire progress.
For problems with high degrees of uncertainty, the fitness function may present features that hinder the search process and might prevent the optimization from reaching the best solutions [17]. Another limitation that becomes relevant in this work is that the state-of-the-art methods ESS, ESSIM-EA, and ESSIM-DE use techniques that were designed with the objective of converging to a single solution, which might negate the benefits of using multiple solutions if the results to be overlapped are too similar to each other.
Figure 1. Taxonomy of wildfire spread prediction methods. S²F²M: Statistical System for Forest Fire Management; ESS: Evolutionary Statistical System; ESSIM: ESS with Island Model; ESSIM-EA: ESSIM based on evolutionary algorithms; ESSIM-DE: ESSIM based on Differential Evolution; ESS-NS: ESS based on Novelty Search; M/W: Master/Worker parallelism; PEA: Parallel Evolutionary Algorithm; IM: Island Model; NS: Novelty Search.
In this work, we propose and evaluate a new method for the optimization of scenarios. Our method avoids the issues of previous works by using a different criterion for guiding the search: the Novelty Search (NS) paradigm [18,19,20]. NS is an alternative approach that ignores the objective as a guide for exploration and instead rewards candidate solutions that exhibit novel (different from previously discovered) behaviors in order to maximize exploration and avoid local optima and other issues related to objective-based algorithms. In an early work [21], we presented the preliminary design of a new framework based on ESS that incorporated the Novelty Search paradigm as the search strategy for the optimization of scenarios. This article is an extended work whose main contributions are the experimental results and their corresponding discussion. These results comprise two sets of experiments: one for the calibration of our method and another for the comparison of our method against other state-of-the-art methods. In addition, we have made some corrections in the pseudocode, reflecting the final implementation that was evaluated in the experiments. Our experimental results support the idea that the application of a novelty-based metaheuristic to the fire propagation prediction problem can obtain comparable or better results in quality with respect to existing methods. Furthermore, the execution times of our method are better than its competitors in most cases. To the best of our knowledge, this is the first application of NS as a parallel genetic algorithm and also the first application of NS in the area of propagation phenomena.
In the next section, we guide the reader throughout previous works related to our present contribution; firstly, in the area of DDM-MOS, with a detailed explanation of existing systems (Section 2.1 and Section 2.2), and secondly, in the field of NS (Section 2.3), where we explain the paradigm and its contributions in general terms. Then, in Section 3, we present a detailed description of the current contribution and provide a pseudocode of the optimization algorithm. Section 4 presents the experimental methods, results, and discussion. Finally, in Section 5, we detail our main conclusions and describe possible lines of future work. In addition, Appendix A presents a static calibration experiment performed on our method.

3. Novelty-Based Approach for the Optimization Stage in a Wildfire Prediction System

In this section, we present the new approach in two parts. First, we explain the operation of the general scheme for the new prediction system and how it differs from its predecessors. Second, we describe in detail the novel evolutionary algorithm that is embedded as part of the Optimization Stage in this system.

3.1. New Framework: ESS-NS

The framework that has been implemented for the new method is called the Evolutionary Statistical System based on Novelty Search, or ESS-NS. Its general scheme is illustrated in Figure 5. Several aspects are analogous to ESS (compare to Figure 2), such as the Master/Worker hierarchy, where Workers carry out the simulations and fitness evaluations while the Master handles the steps of the evolutionary algorithm; stages other than the Optimization Stage remain unchanged. However, the optimization component has important modifications, particularly in the Master process, and these are highlighted in the figure. Like its predecessors, this framework consists of a prediction process with the same stages from Figure 2 (Section 2.1): Optimization Stage (OS), Statistical Stage (SS), Calibration Stage (CS), and Prediction Stage (PS).
Figure 5. Evolutionary Statistical System based on Novelty Search. RFL_i: real fire line of instant t_i; OS-Master: Optimization Stage in Master; OS-Worker{1…n}: Optimization Stage in Workers 1 to n; PEA: Parallel Evolutionary Algorithm; NS-based GA: Novelty Search-based Genetic Algorithm; ρ(x): novelty score function from Equation (1); PV{1…n}: parameter vectors (scenarios); FS: fire simulator; PEA_F: Parallel Evolutionary Algorithm (fitness evaluation); CS: Calibration Stage; SS: Statistical Stage; FF: fitness function; PFL_i: predicted fire line of instant t_i; Kign_i: Key Ignition Value for t_i; SKign: Key Ignition Value search; PS: Prediction Stage.
We have used the same propagation simulator, called fireSim [4], which is implemented in an open-source and portable library, fireLib. This simulator takes the following inputs: a terrain ignition map and the set of parameters concerning the environmental conditions and terrain topography. These parameters are described in Table 1. The first row contains the name of the Rothermel Fuel Model, which is a taxonomy describing 13 models of fire propagation commonly used by a number of simulators, including fireSim. The remaining rows represent environmental aspects, such as wind conditions, humidity, slope, etc. For more information on the parameters modeled by this library, see [42]. The output of this simulator is a map that indicates, for each cell, the estimated time instant of its ignition. If, according to the simulation, a cell is never reached by the fire, its value is set to zero. The order and functioning of the Statistical, Calibration, and Prediction stages, in addition to their assignment to Master and Workers, are the same as those presented in Section 2.1.
Table 1. Parameters used by the fireLib library.
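To make the output format described above concrete, the following is a minimal sketch (not part of the fireLib API; the representation and function name are illustrative) of how an ignition-time map can be interpreted, where each cell holds an estimated ignition instant and zero means the fire never reaches the cell:

```python
def burned_cells(ignition_map, t):
    """Return the set of (row, col) cells ignited at or before instant t.

    A value of 0 means the cell is never reached by the fire, so such
    cells are excluded regardless of t.
    """
    return {
        (r, c)
        for r, row in enumerate(ignition_map)
        for c, t_ign in enumerate(row)
        if t_ign != 0 and t_ign <= t
    }

# Toy 3x4 ignition map: times 1-3, zeros are unburned cells.
ignition = [
    [1, 1, 2, 0],
    [1, 2, 3, 0],
    [2, 3, 0, 0],
]
# Fire line at instant 2: six cells have ignition time 1 or 2.
assert burned_cells(ignition, 2) == {(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)}
```

Iterating t over the map's time steps yields the successive fire lines used when comparing a simulation against the real spread.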
As for the Optimization Stage, there are two crucial differences from ESS. First, the metaheuristic contained in this stage is also an evolutionary algorithm, but its behavior in this case follows the NS paradigm: the strategy implemented is novelty-based with a genetic algorithm, as shown in the shaded block inside the Master (PEA: NS-based GA) in Figure 5. We defer the details of this algorithm to the following section; however, it is important to note here that this novelty-based method requires an additional computation of a score, that is, the novelty score, represented by the function ρ(x) from Equation (1). The second difference is that the output of the optimization algorithm is not the final evolved population, as in previous methods; rather, it is a collection of high-fitness individuals accumulated during the search, which we call bestSet. Although the need for this structure originates from an apparent limitation of NS, that is, its lack of convergence, this mechanism is in fact an advantage of this method, and it is more suitable to this application problem than the previous methods, which are based on single-solution metaheuristics. The difference lies in that our NS design has the ability to record individuals from completely different areas of the search space and include them in the final aggregated matrix. As discussed in Section 2.2, fitness-based evolutionary algorithms converge to a population of similar individuals, which are redundant, and the evolved population will almost inevitably include some random or not-so-fit individuals (generated and selected during the last iteration) that do not contribute to the solution. Since this collection is used for the purpose of reducing uncertainty in the SS, we considered that the advantages of this new design could be an appropriate match for said stage.
It is important to note that ESS-NS differs from the most recent approaches in that it uses the simpler model of Master/Worker (with no islands and only one population). Although the novelty-based strategy performs more steps than the original ESS, the Master process only delegates the simulation and evaluation of individuals to the Workers since this is the most demanding part of the prediction process. Even so, this has not been a problem in our experimentation since the additional steps, such as novelty score computation and updating of sets, do not add significant delays to the execution. The simplification of the parallel hierarchy is motivated by the need to have a baseline algorithm for future comparisons and to be able to analyze the impact of NS alone on the quality of results. Considering that NS uses a strategy that was designed not only to keep diversity and emphasize exploration but to actively seek them, it serves as an alternative route to solve the problem that originally made it necessary to resort to mechanisms such as the island model. Additionally, such a design would require additional mechanisms, for example, for handling the migrations, and these can directly affect both the quality and the efficiency of the method, making it harder to assess the performance of the method. At the moment, these considerations and possible variants are left as future work.

3.2. Novelty-Based Genetic Algorithm with Multiple Solutions

Our proposal consists of applying a novelty-based evolutionary metaheuristic as part of the Optimization Stage of a wildfire prediction system. We have selected a classical genetic algorithm (GA) as the metaheuristic, which has been adapted to the NS paradigm. This choice was made for two reasons: on the one hand, for simplicity of implementation and, on the other hand, for comparative purposes, since existing systems are also based on variants of evolutionary algorithms, and two of them (ESS and ESSIM-DE) use a GA as their optimization method.
The novelty measure selected is computed as in Equation (1). In this context, x is a scenario, and we define dist as the absolute difference between the fitness values of each pair of scenarios:

dist(x, μ_i) = |fitness(x) − fitness(μ_i)|.    (2)
For computing this difference, we used the same fitness function as the one used in the ESS system and its successors: the Jaccard Index [43]. It considers the map of the field as a matrix of square cells (which is the representation used by the simulator):
fitness(A, B) = |A ∩ B| / |A ∪ B|,    (3)
where A represents the set of cells in the real map without the subset of burned cells before starting the simulations, and B represents the set of cells in the simulation map without the subset of burned cells before starting the simulation. (Previously burned cells, which correspond to the initial state of the wildfire in each prediction step, are not considered in order to avoid skewed results.) This formula measures the similarity between prediction and reality and is equal to one when there is a perfect prediction, while a value of zero indicates the worst prediction possible.
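As an illustrative sketch (not the system's actual implementation), the Jaccard-index fitness from Equation (3) can be computed over sets of burned cells, removing the previously burned subset from both maps as described above:

```python
def fitness(real_burned, sim_burned, previously_burned=frozenset()):
    """Jaccard index between real and simulated burned-cell sets.

    Cells burned before the simulation started are excluded from both
    sets, as in the text, to avoid skewing the score.
    """
    a = real_burned - previously_burned
    b = sim_burned - previously_burned
    union = a | b
    if not union:  # degenerate case: nothing new burned in either map
        return 1.0
    return len(a & b) / len(union)

real = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)}
sim = {(0, 1), (1, 0), (1, 1), (2, 1), (2, 2), (3, 0)}
# |A ∩ B| = 3 and |A ∪ B| = 8, so fitness = 3/8
assert fitness(real, sim) == 3 / 8
```

A perfect prediction gives identical sets (fitness 1.0); disjoint sets give 0.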
An example of the computation of this index for ignition maps is represented in Figure 6. Note that the definition of dist in Equation (2) is trivially a metric in the space of fitness values, but it does not provide the same formal guarantees when considering the relationship between these fitness differences and the corresponding scenarios that produced the fitness values. For example, it is possible that two scenarios with the same fitness value (and a distance of 0) are not equal to each other. This is because the similarity between scenarios cannot be measured precisely and because it depends to a great extent on the chosen fire simulator.
Figure 6. Example of the fitness computation with Equation (3). We have that |A ∩ B| = 4 and |A ∪ B| = 8; then fitness(A, B) = 4/8 = 0.5.
We present the pseudocode of ESS-NS in Algorithm 1. In previous work [21], we presented the idea of ESS-NS, including the pseudocode for its novelty-based metaheuristic. The present contribution preserves the same general idea with only minor changes to the pseudocode and extends previous work with experiments based on an implementation of such an algorithm. Although the high-level procedure is partially inspired by the algorithm in [33], our version has an important difference, which is the introduction of a collection of solutions, bestSet. This collection is updated at each iteration of the GA so that, at the conclusion of the main loop of the algorithm, the resulting set contains the solutions of highest fitness found during the entire search. It should be noted that this set is used as the result set instead of the evolved population set, which is used by the previous evolutionary-based systems for both the CS and PS. In addition, this algorithm uses two stopping conditions (line 6): by number of generations and by a threshold of fitness (both present in ESSIM-EA and ESSIM-DE), and also specifies conventional GA parameters, such as a mutation probability and a tournament probability. Mutation works as in classic GAs, while the tournament probability is used for selection, where the algorithm performs a tournament strategy. These parameters are specified as input to the algorithm (the values we used for our experiments are specified in Section 4.1). Another difference is that the archive of novel solutions (archive) is managed with replacement based on novelty only, as opposed to the pseudocode in [33], which uses a randomized approach. These features correspond to a “classical” implementation of the NS paradigm: an optimization guided exclusively by the novelty criterion, and a set of results based on the best values obtained using the fitness function.
These criteria allow us to establish a baseline against which it will be possible to perform comparisons among future variants of the algorithm. In this first version, parallelism has been implemented in the evaluation of the scenarios, i.e., in the simulation process and subsequent computation of the fitness function. The novelty score computation and other steps are not parallelized in this version.
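The parallel evaluation step can be sketched as follows. This is a hedged illustration, not the actual system: ESS-NS distributes scenarios to Worker processes via message passing on a cluster, whereas here a local process pool stands in for the Workers, and `simulate_and_score` is a hypothetical placeholder for running the fire simulator and computing Equation (3):

```python
from multiprocessing import Pool


def simulate_and_score(scenario):
    """Placeholder for: simulate this parameter vector with the fire
    simulator, then compare the predicted and real fire lines via the
    Jaccard index. Here, a toy score over (wind_speed, moisture)."""
    wind_speed, moisture = scenario
    return max(0.0, 1.0 - abs(wind_speed - 5.0) / 10 - moisture)


def evaluate_population(scenarios, workers=4):
    """Evaluate all scenarios, one task per scenario, preserving order."""
    if workers == 1:  # serial fallback, useful for testing
        return [simulate_and_score(s) for s in scenarios]
    with Pool(workers) as pool:
        return pool.map(simulate_and_score, scenarios)


if __name__ == "__main__":
    population = [(4.0, 0.1), (5.0, 0.0), (9.0, 0.3)]
    print(evaluate_population(population))
```

Since simulation dominates the cost of a generation, parallelizing only this step captures most of the available speedup, which matches the design choice described above.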
Algorithm 1 Novelty-based Genetic Algorithm with Multiple Solutions.
Input: population size N, number of offspring m, tournament probability tour_prob, mutation rate mut_prob, crossover rate cr, maximum number of generations maxGen, fitness threshold fThreshold, number of neighbors for novelty score k
Output: the set bestSet of individuals of highest fitness found during the search
 1: population ← initializePopulation(N)
 2: archive ← ∅
 3: bestSet ← ∅
 4: generations ← 0
 5: maxFitness ← 0
 6: while generations < maxGen and maxFitness < fThreshold do
 7:     offspring ← generateOffspring(population, m, tour_prob, mut_prob, cr)
 8:     for each individual ind ∈ (population ∪ offspring) do
 9:         ind.fitness ← evaluateFitness(ind)
10:     end for
11:     noveltySet ← (population ∪ offspring ∪ archive)
12:     for each individual ind ∈ (population ∪ offspring) do
13:         ind.novelty ← evaluateNovelty(ind, noveltySet, k)
14:     end for
15:     archive ← updateArchive(archive, offspring)
16:     population ← replaceByNovelty(population, offspring, N)
17:     bestSet ← updateBest(bestSet, offspring)
18:     maxFitness ← getMaxFitness(bestSet)
19:     generations ← generations + 1
20: end while
21: return bestSet
We now provide a detailed description of Algorithm 1, specifying parameters in italics and functions in typewriter face. The input parameters of the algorithm include the typical GA parameters (N, m, tour_prob, mut_prob, cr), the two stopping conditions (maxGen and fThreshold), and one NS parameter: k, the number of neighbors to be considered for the computation of the novelty score in Equation (1). The algorithm begins by initializing some variables; notably, the evolution process starts with the function initializePopulation (line 1), which generates N scenarios with random values for the unknown variables in a given range; such a range has been determined beforehand for each variable. Afterwards, each iteration of the main loop (lines 6 to 20) corresponds to a generation of the GA. At the beginning of each generation, the algorithm performs the selection and reproduction steps, abstracted in generateOffspring; that is, it generates m offspring based on the current N individuals of the population. Our chosen GA population selection strategy is by tournament. For the tournament phase, a set of individuals is chosen to enter the mating pool, where a percentage of these are the ones with the highest novelty, while the rest are randomly chosen. This proportion is set by the tournament probability, tour_prob. The crossover rate is determined by cr; this is the probability that two individuals from the current population are combined. In the current version, we have set this value to one, but it can be anywhere in the range [0, 1]. Then, a proportion of the offspring is mutated according to mut_prob.
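The selection and reproduction steps just described can be sketched as follows. This is a hedged illustration: individuals are represented as dictionaries, the crossover and mutation operators are generic one-point and uniform variants (not necessarily the exact ones used by ESS-NS), and cr is taken as 1, as in the experiments:

```python
import random


def generate_offspring(population, m, tour_prob, mut_prob, rng):
    """Sketch of generateOffspring: the mating pool mixes the
    highest-novelty individuals with randomly chosen ones, in a
    proportion given by tour_prob; pairs are then crossed over
    (cr = 1 here) and mutated with probability mut_prob."""
    n_best = int(round(tour_prob * len(population)))
    by_novelty = sorted(population, key=lambda ind: ind["novelty"], reverse=True)
    pool = by_novelty[:n_best] + rng.sample(population, len(population) - n_best)
    offspring = []
    while len(offspring) < m:
        p1, p2 = rng.sample(pool, 2)
        cut = rng.randrange(1, len(p1["genes"]))  # one-point crossover
        genes = p1["genes"][:cut] + p2["genes"][cut:]
        if rng.random() < mut_prob:  # uniform mutation of one gene
            i = rng.randrange(len(genes))
            genes[i] += rng.uniform(-1.0, 1.0)
        offspring.append({"genes": genes, "novelty": 0.0, "fitness": 0.0})
    return offspring


rng = random.Random(0)
pop = [{"genes": [float(i), float(-i)], "novelty": float(i), "fitness": 0.0}
       for i in range(6)]
children = generate_offspring(pop, m=4, tour_prob=0.5, mut_prob=0.2, rng=rng)
assert len(children) == 4 and all(len(c["genes"]) == 2 for c in children)
```

In a scenario-optimization setting, each gene would correspond to one uncertain environmental parameter (wind speed, moisture, etc.), constrained to its valid range.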
The next step is the fitness computation, represented by lines 8 to 10. Each individual computation is performed in evaluateFitness by the Worker processes, and the distribution of individuals to each Worker is managed by the Master. The fitness is calculated for all individuals in two steps: first, a simulation is carried out by the fire simulator; then, the fitness is computed with Equation (3). The fitness values are needed both for recording the best solutions in bestSet and for the computation of each individual’s novelty score from Equation (1). Since the fitness scores of multiple neighboring scenarios are required for the computation of the novelty score, a second loop is needed (lines 12 to 14). Internally, evaluateNovelty compares the individual ind with each of the individuals in the reference set noveltySet using the measure dist, takes the k nearest neighbors, i.e., those individuals ind′ ∈ noveltySet for which the smallest values of dist(ind, ind′) are obtained, and uses them to evaluate the novelty function according to Equation (1), where dist is computed by Equation (2).
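A minimal sketch of this novelty computation, assuming the usual NS formulation of Equation (1) as the average distance to the k nearest neighbors, with dist from Equation (2) as the absolute difference of fitness values:

```python
def dist(a, b):
    """Equation (2): absolute difference of fitness values."""
    return abs(a["fitness"] - b["fitness"])


def evaluate_novelty(ind, novelty_set, k):
    """Sketch of evaluateNovelty: average distance from ind to its k
    nearest neighbors in the reference set (Equation (1))."""
    others = [x for x in novelty_set if x is not ind]
    nearest = sorted(dist(ind, x) for x in others)[:k]
    return sum(nearest) / len(nearest)


pop = [{"fitness": f} for f in (0.1, 0.2, 0.4, 0.9)]
# For the individual with fitness 0.9, the 2 nearest neighbors have
# fitness 0.4 and 0.2, so the score is (0.5 + 0.7) / 2 = 0.6.
assert abs(evaluate_novelty(pop[3], pop, k=2) - 0.6) < 1e-9
```

An isolated individual in fitness space (far from all neighbors) thus receives a high novelty score, which is what drives the search toward unexplored behaviors.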
After the novelty computation loop, the next two lines are the ones that define the search as driven by the novelty score. In line 15, updateArchive modifies the archive of novel solutions so that it is updated with the descendants that have higher novelty values. In other words, the replacement strategy is elitist over the whole population: it considers the union of the sets offspring and archive, and the N individuals with the highest novelty in this union are assigned to the archive. Population replacement is performed in replaceByNovelty (line 16), also using the elitist novelty criterion. Then, the function updateBest in line 17 modifies bestSet in order to incorporate the solutions in offspring that have obtained better fitness values. In this case, the strategy is also elitist but based on fitness. In the first iteration, we start with an empty archive and bestSet; therefore, in lines 15 and 17, we begin by assigning the complete offspring set to both sets. For the first version, we have implemented a fixed-size archive and solution set, but these sizes can potentially be parameterized or even designed to change dynamically during the search.
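Both elitist updates reduce to the same operation, differing only in the ranking key. A sketch under that assumption (function and field names are illustrative, not the system's own):

```python
def keep_top(current, offspring, size, key):
    """Elitist update: keep the `size` best elements of the union of the
    current set and the new offspring, ranked by `key` (descending)."""
    return sorted(current + offspring, key=key, reverse=True)[:size]


archive = [{"novelty": 0.5, "fitness": 0.2}, {"novelty": 0.1, "fitness": 0.9}]
offspring = [{"novelty": 0.8, "fitness": 0.4}, {"novelty": 0.3, "fitness": 0.7}]

# Line 15: archive keeps the most novel; line 17: bestSet keeps the fittest.
archive = keep_top(archive, offspring, 2, key=lambda x: x["novelty"])
best_set = keep_top([], offspring, 2, key=lambda x: x["fitness"])

assert [x["novelty"] for x in archive] == [0.8, 0.5]
assert [x["fitness"] for x in best_set] == [0.7, 0.4]
```

When the current set is empty (the first generation), the update simply keeps the best of the offspring, matching the behavior described above for lines 15 and 17.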
Lastly, the algorithm ends the current generation by updating the values used to verify the stopping conditions. In line 18, getMaxFitness returns the maximum fitness value from the set passed as argument; this value corresponds to the maximum fitness found during the search up to the current moment. Line 19 updates the evolutionary generation number. These two values will be verified in line 6 during the next iteration. Once one of these conditions is met, the algorithm will return bestSet, a collection of the best solutions obtained throughout the search.

4. Experimentation and Results

In this section, we present the experimental methodology and results for five application cases, performed in order to assess the quality and execution times of ESS-NS in comparison with ESS, ESSIM-EA, and ESSIM-DE. Section 4.1 describes the application cases and methodology for the experimentation, while Section 4.2 presents the results. Finally, in Section 4.3, we provide interpretations and implications of these findings.

4.1. Experimental Setup

The application cases consist of controlled fires in different lands in Serra da Lousã, Gestosa, Portugal, as part of the SPREAD project [44]. The terrain and environmental characteristics of each controlled fire are shown in Table 2 [22,45]. For each case, the fire progress has been divided into s discrete time intervals t_i (for more details, see [9], Section 5). The terrain is encoded by a matrix, where the advance of the fire from start to finish is represented by a number in each cell, indicating the discrete time step at which that cell was reached by the fire. From this matrix of the whole fire, we obtained the map that is considered the real fire line at each time step t_i (RFL_i) by taking into account only the cells that have been burned at times t_j, j ≤ i. In these experiments, the methods have to perform s − 1 simulation steps, where each simulation step occurs after one step of the fire, taking into account the real fire line at the previous instant. It should be noted that for each application case there are s − 1 simulation steps and s − 2 prediction steps because all methods use the first simulation for the calibration of input parameters for the next iteration.
Table 2. Characteristics of the controlled fires. Each case is identified by a number in the first column.
After the execution of the complete simulation process, we evaluated the quality of prediction by comparing the produced prediction map for each time step with the real fire line at that instant. That is, for 1 ≤ i < s, the resulting map is produced by using RFL_{i−1} and RFL_i as input in order to obtain the prediction PFL_{i+1}; later, this result is compared against RFL_{i+1}. The metric used for the assessment of predictions is the fitness function from Equation (3). As we mentioned before, the input for the initial time step is needed to perform the first evaluation; the prediction steps start at the second time step. For this reason, the initial times in Table 2 are not zero. In addition to the quality evaluation, we measured and compared the execution times for the complete process. For all methods, each run was repeated 30 times with different seeds for random number generation; e.g., ESS-NS uses the seed for the generation of the initial population and for generating probabilities during the mutation and selection steps. For a particular method, using the same seed always produces the same numbers and, therefore, the same results for that method. Using a set of different seeds provides more robust results by taking into account the variability caused by the non-deterministic behavior of the methods when changing seeds.
The parameters of the method ESS-NS and its competitors are shown in Table 3. In previous works, several of the parameters for the competitors have been calibrated in order to improve performance by choosing configuration parameters that are well suited for the application problem [22,46]. For a better comparison, we performed a static calibration experiment for the two main parameters of ESS-NS: tournament probability and probability of mutation. The calibration experiment and results are described in Appendix A. Considering that all competitors share some characteristics, other parameters, such as population size or fitness threshold, have been established equal to the ones in the other methods in order to simplify the calibration of ESS-NS.
Table 3. Parameters used for each method in the experimentation.
Data from previous results were provided by M. Laura Tardivo, and the corresponding plots with these results can be found in [22,45]. These results correspond to an experiment performed using the parameters provided in Table 3. The parameters for ESSIM-DE and some parameters of ESS and ESSIM-EA are reproduced from [45]. Other parameters for ESS and ESSIM-EA (mutation and crossover rate) are the same as published in [46] (Chapters 4 and 5).
In addition, the experiment that produced these results was performed on the same cluster as the execution of the new ESS-NS method, which makes the runtimes comparable. All experiments were executed on a cluster of 64-bit Intel Q9550 Quad Core CPUs at 2.83 GHz, with 4 GB of RAM (DDR3, 1333 MHz). The nodes are connected via Gigabit Ethernet through a 1 Gb Linksys SLM2048 switch. The operating system is Debian Lenny (64-bit), and we used the MPICH library [47] for message passing among the nodes.
We have published the experimental results in an online report at https://jstrappa.quarto.pub/ess-ns-experimentation/, accessed on 13 October 2022. In addition, the source code for the visualization of results is available at https://github.com/jstrappa/ess-ns-supplementary.git, accessed on 13 October 2022. The results from previous experiments are also published with permission from their author.

4.2. Results

The results of the quality assessment are presented in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. For each of the five maps, a set of three related figures is presented. In each set of figures, a graphical representation of the fires at different time instants appears for reference (Figure 7a, Figure 8a, Figure 9a, Figure 10a and Figure 11a). The x- and y-axes represent the terrain in meters, while the colors show which areas are reached by the fire at different time steps. Below each map (in Figure 7b, Figure 8b, Figure 9b, Figure 10b and Figure 11b), the average fitness prediction values (over 30 repetitions) are shown for each prediction step for the respective fire. The x-axis shows the prediction steps (where the first prediction step corresponds to the third time step of the fire), and the y-axis shows the average fitness values. Each method is shown in a different color and shape. At any given step, a higher average fitness represents a better prediction for that step. Lastly, the box plots in Figure 7c, Figure 8c, Figure 9c, Figure 10c and Figure 11c show the distribution of fitness values over the 30 repetitions for each method. In each of these box plot figures, the subplots titled with numbers correspond to individual prediction steps for the fire identified by the main title. Each box shows the fitness distribution over 30 repetitions for one method at that step. For example, the leftmost boxes (in black) show the distribution for ESS-NS.
Figure 7. Case 520: (a) map of the real fire spread; (b) fitness averages; and (c) fitness distributions.
Figure 8. Case 533: (a) map of the real fire spread; (b) fitness averages; and (c) fitness distributions.
Figure 9. Case 751: (a) map of the real fire spread; (b) fitness averages; and (c) fitness distributions.
Figure 10. Case 519: (a) map of the real fire spread; (b) fitness averages; and (c) fitness distributions.
Figure 11. Case 534: (a) map of the real fire spread; (b) fitness averages; and (c) fitness distributions.
In order to assess the time efficiency of the new method, we computed the average execution times of the 30 seeds for each case, which are shown in Table 4.
Table 4. Average execution times (hh:mm:ss).

4.3. Discussion

In general, the fitness averages (Figure 7b, Figure 8b, Figure 9b, Figure 10b and Figure 11b) show that ESS-NS is the best method for most steps in all cases; in particular, it outperforms all other methods in Cases 520, 751, and 519 (Figure 7, Figure 9 and Figure 10). For Case 534 (Figure 11), ESS-NS gives a slightly lower average than only one method, ESSIM-EA; however, this occurs only at one particular step, so the difference may not be significant. Case 533 (Figure 8) has some peculiarities that have been pointed out in previous works [9,48]. For this fire, ESSIM-DE was shown to provide better predictions than all other methods in the first two steps, but the tendencies are inverted in the next two steps, with ESSIM-EA and ESS yielding lower predictions first and improving later. In addition, the fitness averages for these two methods have similar values throughout all steps, while the values for ESSIM-DE are more variable. ESS-NS seems to follow the same tendency as ESSIM-EA and ESS, but with higher fitness values than both in all prediction steps.
As for the fitness distributions in Figure 7c, Figure 8c, Figure 9c, Figure 10c and Figure 11c, ESS-NS presents a much narrower distribution of predictions than ESS and ESSIM-EA. ESSIM-DE shows more variability in this respect, with the lowest fitness distributions for Cases 533 and 534 throughout all steps and in some of the steps for Cases 520, 751, and 519. In general, the best method regarding fitness distribution is ESS-NS, which is most likely related to its strategy of keeping a set of high-fitness solutions found throughout the search instead of returning a final population as the other methods do. This supports the robustness of the method, which yields similar results regardless of the initial population.
The runtimes for ESS-NS (Table 4) are considerably faster than those of the other methods for Cases 533, 751, 519, and 534; in these cases, the times for ESS-NS are comparable to those of ESSIM-DE. For Case 520, the runtimes of ESS-NS are longer than those of ESSIM-DE and comparable to ESS, but still better than ESSIM-EA. Overall, ESS-NS is the fastest method in these experiments. Regarding time complexity, the main bottleneck for these methods is the simulation time; therefore, this complexity is determined by the population size, the number of iterations, and the number of individuals that differ in each population (assuming that all implementations avoid repeating simulations that have already been performed when the same individual remains in a subsequent iteration). The improvements seen with NS are thus mainly due to its exploration ability, which often allows it to reach the fitness threshold earlier than the other methods. Changing the fitness threshold also affects the time complexity, providing a parameter for the trade-off between speed and quality. As stated in Section 4.1, for simplicity and for the sake of comparison, we decided to keep for ESS-NS the same fitness threshold as the other methods, which had been chosen in previous works based on calibration experiments. Nevertheless, testing how this value affects different methods would be an interesting line of future research.
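The assumption that implementations avoid re-simulating repeated individuals can be illustrated with a simple memoization wrapper (a sketch only, not the actual implementation; `simulate_and_score` is a hypothetical stand-in for one fire simulation plus its fitness evaluation):

```python
def make_cached_evaluator(simulate_and_score):
    """Wrap an expensive simulation so that identical individuals are
    simulated only once across iterations."""
    cache = {}

    def evaluate(individual):
        # Individuals are parameter vectors (wind speed, moisture, ...);
        # a tuple of their values serves as the cache key.
        key = tuple(individual)
        if key not in cache:
            cache[key] = simulate_and_score(individual)
        return cache[key]

    return evaluate
```

When an individual survives unchanged into a subsequent iteration, the cached score is returned instead of repeating the simulation, which is the dominant cost in these methods.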
It is important to note that ESS-NS achieves quality results that are similar to the ESSIM methods but without the island model component. That is, it outperforms the original ESS only by means of a different metaheuristic. One advantage of this is that the time complexity of ESS-NS is not burdened with the additional computations for handling the islands and the migration of individuals present in the ESSIM methods. Another implication is that, just as ESSIM-EA and ESSIM-DE benefited from the use of an island model and were able to improve quality results, this will likely also be true for ESS-NS if the same strategy is applied to it. However, one should also take into account that there exist a number of variants of ESSIM-DE that have shown improved quality and runtimes [13,25]. These results have been excluded because we decided that a fair comparison would use the base methods with their statically calibrated configuration parameters but without the dynamic tuning techniques.
As a final note, it is important to emphasize that this new approach improves the quality of predictions by means of a metaheuristic that employs fewer parameters, which makes it easier to adapt to a specific problem. Another advantage is the method for constructing the final set of results, which provides more control over which solutions will be kept or discarded. In this case, the algorithm keeps a number of the highest fitness solutions found during the search, and this number can be established by a parameter (for simplicity, in our experiments we have used the same number as the population size for the results set). This mechanism improves robustness in the distribution of quality results over many different runs of the prediction process. Finally, this approach is simpler to understand and implement than its predecessors. For all these reasons, we find that, overall, the new method is the best regarding quality, robustness, and efficiency.
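The mechanism of retaining a fixed number of the highest-fitness solutions found during the search can be sketched with a bounded min-heap (a minimal illustration; class and method names are our own and not part of the published implementation):

```python
import heapq

class BestSolutionsArchive:
    """Keep the `capacity` highest-fitness solutions seen so far."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []     # min-heap of (fitness, counter, solution)
        self._counter = 0   # tie-breaker; solutions are never compared

    def offer(self, fitness, solution):
        entry = (fitness, self._counter, solution)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif fitness > self._heap[0][0]:
            # Replace the current worst member of the archive.
            heapq.heapreplace(self._heap, entry)

    def results(self):
        # Highest fitness first.
        return [(f, s) for f, _, s in sorted(self._heap, reverse=True)]
```

Because the heap root is always the worst retained solution, each candidate is accepted or rejected in O(log capacity) time, independently of how many solutions the search visits.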

5. Conclusions

In this work, we have proposed a new parallel metaheuristic approach for uncertainty reduction applied to the problem of wildfire propagation prediction. It is based on previous methods that also use parallel metaheuristics, but in this case, we follow the recently developed Novelty Search paradigm. We designed a novel genetic algorithm based on NS, which guides the evolution of the population according to the novelty of the solutions. During the search, the solutions of highest fitness are stored and then returned as the solution set at the end of the evolution process. The results obtained with this new method show consistent improvements in quality and execution times with respect to previous approaches.
While in this work we have experimented with the particular use case of wildfires, the scope of application of ESS-NS can be extended in at least three ways. Firstly, given that the fire simulator is used as a black box, it could potentially be replaced with another one if necessary. Secondly, as with its predecessors, another propagation simulator might also be used in order to adapt the system for the prediction of different phenomena, such as floods, avalanches, or landslides. Thirdly, although ESS-NS was designed for these kinds of phenomena, the applications of its internal optimization algorithm (Algorithm 1) are wider: it can potentially be applied to any problem that can be adapted to a GA representation.
Regarding the weaknesses and limitations of ESS-NS, these are directly related to the requirements of the system: it depends on a fire simulator (which has its own sources of errors), it needs to obtain maps from the real fire spread at each step, and two simulation steps are needed before a first prediction can be made. In addition, it is possible that many variables are unknown, and this can cause the search process to be more resource-consuming. However, provided that the appropriate resources can be obtained, i.e., hardware and real-time information of the terrain, this method is already capable of producing useful predictions for real-world decision-making tasks.
As next steps, the two most interesting paths are related to the parallelization and the behavioral characterization. On the one hand, our current version of ESS-NS only takes advantage of parallelization at the level of the fitness evaluations. Different parallelization techniques could be applied in order to improve quality, execution times, or both. Regarding quality, the most straightforward of these would be an island model, such as the ones in ESSIM-EA and ESSIM-DE, but with migration strategies designed specifically for NS. The island model would allow the search process to be carried out with several populations at once, increasing the level of exploration and, as a consequence, the quality of the final results. A migration strategy could even introduce hybridization with a fitness-based approach. Other parallelization approaches could be applied to the remaining sequential steps of Novelty Search, e.g., the computation of novelty scores. On the other hand, the current behavioral characterization is based on fitness values, which may bias the search towards high fitness depending on the parameters of the metaheuristic, e.g., the population size. An interesting hypothesis is that a different behavioral characterization based on the simulation results may be a better guide for the exploration of the search space. Therefore, another possible improvement could be a novelty score that relies on the distance between simulated maps instead of on the fitness difference.
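The novelty score mentioned above is commonly computed, as in [18], as the mean distance from a candidate to its k nearest neighbors among the current population and the novelty archive. A minimal sketch follows, using scalar behavior descriptors (e.g., fitness values, matching the current characterization); replacing the `distance` function with a distance between simulated maps would implement the alternative characterization discussed here:

```python
def novelty_score(candidate, population_behaviors, archive_behaviors,
                  k=3, distance=lambda a, b: abs(a - b)):
    """Mean distance from `candidate` to its k nearest neighbors in
    behavior space (population plus archive)."""
    others = list(population_behaviors) + list(archive_behaviors)
    dists = sorted(distance(candidate, b) for b in others)
    nearest = dists[:k]  # uses all neighbors if fewer than k exist
    return sum(nearest) / len(nearest)
```

Since each candidate's score only reads the shared behavior lists, the scores of a whole population can be computed independently, which is what makes this step a natural target for parallelization.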
Lastly, another possibility is the design of a dynamic size archive and/or solution set, a novelty threshold for including solutions in the archive as in [19] or even switching the underlying metaheuristic and adapting its mechanisms to the application problem.

Author Contributions

Conceptualization, J.S., P.C.-S. and G.B.; methodology, P.C.-S. and G.B.; software, J.S.; formal analysis, J.S., P.C.-S. and G.B.; investigation, J.S.; resources, P.C.-S. and G.B.; writing—original draft preparation, J.S.; writing—review and editing, P.C.-S. and G.B.; visualization, J.S.; supervision, P.C.-S. and G.B.; project administration, P.C.-S., G.B. and J.S.; funding acquisition, P.C.-S., G.B. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by Universidad Tecnológica Nacional under the project SIUTIME0007840TC, by FONCyT (Fondo para la Investigación Científica y Tecnológica, Agencia Nacional de Promoción de la Investigación, el Desarrollo Tecnológico y la Innovación, Argentina) under the project UUMM-2019-00042, and by CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas) through a postdoctoral scholarship for the first author.

Data Availability Statement

The data and source code for visualization of results are available online at https://github.com/jstrappa/ess-ns-supplementary.git, accessed on 13 October 2022. Other data and sources that are not openly available may be provided by the corresponding author on reasonable request. The supplementary information consists of primary results together with R code for visualizing them with static and interactive plots. The results contain the same information as published in this work, in a slightly different format, and consist of: figures for the fitness averages; fitness average distributions; heatmap tables as in Appendix A; computation of the MSE score; and runtime average distributions. The online report with interactive figures can be read at https://jstrappa.quarto.pub/ess-ns-experimentation/, accessed on 13 October 2022.

Acknowledgments

We wish to thank María Laura Tardivo (ORCiD ID: 0000-0003-1268-7367, Universidad Nacional de Río Cuarto, Argentina) for providing primary results for the fitness and average runtimes of the methods ESS, ESSIM-EA and ESSIM-DE. Thanks are also due to the LIDIC laboratory (Laboratorio de Investigación y Desarrollo en Inteligencia Computacional), Universidad Nacional de San Luis, Argentina, for providing the hardware equipment for the experimentation.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
DDM        Data-Driven Methods
DDM-MOS    Data-Driven Methods with Multiple Overlapping Solutions
ESS        Evolutionary Statistical System
ESS-NS     Evolutionary Statistical System based on Novelty Search
ESSIM      Evolutionary Statistical System with Island Model
ESSIM-EA   ESSIM based on evolutionary algorithms
ESSIM-DE   ESSIM based on Differential Evolution
GA         Genetic Algorithm
NS         Novelty Search
PEA        Parallel Evolutionary Algorithm
OS         Optimization Stage
SS         Statistical Stage
CS         Calibration Stage
PS         Prediction Stage
RFL        Real Fire Line
PFL        Predicted Fire Line

Appendix A. Calibration Experiment

In this section, we describe a calibration experiment performed for ESS-NS. The motivation of this experiment was to produce information in order to choose sensible parameters for the comparison against the other methods in Section 4. Static calibrations are motivated by the fact that metaheuristics usually have a set of parameters that can be very sensitive to the application problem. Therefore, it is often necessary to test a number of combinations of possible values for these parameters in order to find values that are suitable for the kind of problem to be solved. Previous work [22,46] includes static calibration for several parameters of ESSIM-EA and ESSIM-DE, including the number of islands, the number of workers per island, and the frequency of migration, among others. In the context of our current work, most parameters are fixed in order to perform a fair comparison. For example, the population size and fitness threshold are the same for all methods, and the number of workers is the same for ESS and ESS-NS. This gives all algorithms similar computational resources and restrictions. As a particular case, we have fixed the crossover rate at one, since we consider this to be the most direct approach for a novelty-based strategy: such a value maximizes diversity. These simplifications also help narrow down the space of possible combinations of parameters. Therefore, our calibration was performed by varying only two parameters: tournament probability and mutation probability (or mutation rate). The candidate values for each parameter are, respectively:
tour_prob ∈ {0.75, 0.8, 0.85, 0.9},  mut_prob ∈ {0.1, 0.2, 0.3, 0.4}

Appendix A.1. Results

Table A1, Table A2, Table A3, Table A4 and Table A5 show the fitness averages resulting from running ESS-NS with each combination of parameters. There is one table for each controlled fire. In each table, the rows show the fitness values, averaged over 30 repetitions, for a particular configuration of the parameters. The columns are labeled by numbers indicating the prediction step. The f̄ column is the average fitness over all steps, and the last column, t (s), shows total runtime values in seconds. The runtime for each repetition corresponds to the whole execution (including all steps); the runtimes shown are averaged over the 30 repetitions. The darker the color, the better the result, both for quality (fitness) and runtimes.
Table A1. Calibration results for map 520. Colored columns show fitness averages for each step (identified by step number), the average over all steps (f̄), and runtimes (in seconds). Each row is a combination of two parameters: tournament probability (tour) and mutation rate (mut).
Tour   Mut   1      2      3      4      5      f̄      t (s)
0.75   0.1   0.879  0.720  0.864  0.817  0.883  0.833  2813.000
0.75   0.2   0.882  0.769  0.861  0.837  0.884  0.847  3091.330
0.75   0.3   0.882  0.777  0.855  0.807  0.881  0.840  3202.670
0.75   0.4   0.883  0.769  0.862  0.823  0.882  0.844  3176.670
0.8    0.1   0.888  0.713  0.866  0.765  0.882  0.823  3358.330
0.8    0.2   0.888  0.785  0.862  0.763  0.883  0.836  3087.670
0.8    0.3   0.884  0.777  0.863  0.786  0.883  0.838  3058.000
0.8    0.4   0.882  0.755  0.861  0.790  0.881  0.834  3232.000
0.85   0.1   0.882  0.760  0.862  0.801  0.885  0.838  2760.670
0.85   0.2   0.888  0.781  0.860  0.815  0.879  0.845  2977.000
0.85   0.3   0.883  0.781  0.855  0.741  0.884  0.829  3067.330
0.85   0.4   0.879  0.759  0.857  0.816  0.880  0.838  3162.670
0.9    0.1   0.880  0.711  0.860  0.734  0.884  0.814  2800.330
0.9    0.2   0.882  0.710  0.862  0.786  0.881  0.824  2821.000
0.9    0.3   0.882  0.775  0.866  0.824  0.880  0.845  3055.000
0.9    0.4   0.885  0.779  0.864  0.810  0.882  0.844  3131.000
Table A2. Calibration results for map 533. Colored columns show fitness averages for each step (identified by step number), the average over all steps (f̄), and runtimes (in seconds). Each row is a combination of two parameters: tournament probability (tour) and mutation rate (mut).
Tour   Mut   1      2      3      4      f̄      t (s)
0.75   0.1   0.672  0.675  0.731  0.696  0.694  2122.970
0.75   0.2   0.754  0.773  0.743  0.751  0.755  2239.330
0.75   0.3   0.709  0.784  0.722  0.755  0.742  2148.000
0.75   0.4   0.737  0.790  0.769  0.784  0.770  2284.330
0.8    0.1   0.706  0.722  0.745  0.736  0.727  2003.330
0.8    0.2   0.737  0.707  0.703  0.765  0.728  2192.670
0.8    0.3   0.743  0.737  0.731  0.762  0.743  2165.670
0.8    0.4   0.737  0.741  0.745  0.769  0.748  2247.000
0.85   0.1   0.739  0.801  0.770  0.781  0.773  2176.670
0.85   0.2   0.719  0.806  0.784  0.766  0.769  2392.000
0.85   0.3   0.707  0.762  0.738  0.740  0.737  2262.000
0.85   0.4   0.703  0.781  0.723  0.769  0.744  2296.000
0.9    0.1   0.785  0.801  0.766  0.751  0.776  2128.330
0.9    0.2   0.717  0.817  0.728  0.758  0.755  2271.670
0.9    0.3   0.717  0.743  0.700  0.757  0.730  2317.330
0.9    0.4   0.782  0.784  0.721  0.780  0.767  2304.000
Table A3. Calibration results for map 751. Colored columns show fitness averages for each step (identified by step number), the average over all steps (f̄), and runtimes (in seconds). Each row is a combination of two parameters: tournament probability (tour) and mutation rate (mut).
Tour   Mut   1      2      3      f̄      t (s)
0.75   0.1   0.893  0.888  0.805  0.862  992.433
0.75   0.2   0.942  0.864  0.854  0.886  1064.730
0.75   0.3   0.938  0.888  0.848  0.891  1042.970
0.75   0.4   0.950  0.897  0.843  0.897  1048.870
0.8    0.1   0.954  0.896  0.821  0.890  1011.500
0.8    0.2   0.899  0.875  0.803  0.859  1056.670
0.8    0.3   0.948  0.884  0.836  0.889  1034.700
0.8    0.4   0.924  0.875  0.832  0.877  1064.630
0.85   0.1   0.933  0.861  0.803  0.866  963.733
0.85   0.2   0.941  0.888  0.841  0.890  1002.300
0.85   0.3   0.937  0.900  0.855  0.897  1017.730
0.85   0.4   0.932  0.876  0.822  0.877  1088.230
0.9    0.1   0.898  0.888  0.794  0.860  1043.030
0.9    0.2   0.932  0.892  0.841  0.889  1021.170
0.9    0.3   0.947  0.887  0.843  0.892  1039.770
0.9    0.4   0.947  0.883  0.834  0.888  1055.770
Table A4. Calibration results for map 519. Colored columns show fitness averages for each step (identified by step number), the average over all steps (f̄), and runtimes (in seconds). Each row is a combination of two parameters: tournament probability (tour) and mutation rate (mut).
Tour   Mut   1      2      3      f̄      t (s)
0.75   0.1   0.886  0.931  0.783  0.867  1738.670
0.75   0.2   0.881  0.926  0.834  0.880  1835.000
0.75   0.3   0.897  0.910  0.771  0.859  1879.330
0.75   0.4   0.897  0.882  0.812  0.863  1949.000
0.8    0.1   0.893  0.912  0.728  0.845  1770.000
0.8    0.2   0.890  0.924  0.800  0.871  1844.000
0.8    0.3   0.901  0.926  0.779  0.869  1902.330
0.8    0.4   0.875  0.923  0.811  0.870  1912.000
0.85   0.1   0.865  0.834  0.782  0.827  1787.670
0.85   0.2   0.872  0.925  0.741  0.846  1810.330
0.85   0.3   0.896  0.901  0.772  0.856  1874.670
0.85   0.4   0.904  0.907  0.765  0.859  1965.000
0.9    0.1   0.864  0.914  0.751  0.843  1804.000
0.9    0.2   0.897  0.906  0.805  0.869  1822.670
0.9    0.3   0.863  0.923  0.755  0.847  1892.000
0.9    0.4   0.898  0.917  0.724  0.846  1888.000
Table A5. Calibration results for map 534. Colored columns show fitness averages for each step (identified by step number), the average over all steps (f̄), and runtimes (in seconds). Each row is a combination of two parameters: tournament probability (tour) and mutation rate (mut).
Tour   Mut   1      2      3      4      5      f̄      t (s)
0.75   0.1   0.738  0.573  0.588  0.804  0.768  0.694  1593.330
0.75   0.2   0.697  0.590  0.700  0.796  0.751  0.707  1665.000
0.75   0.3   0.762  0.575  0.692  0.839  0.757  0.725  1655.330
0.75   0.4   0.762  0.575  0.699  0.822  0.742  0.720  1698.670
0.8    0.1   0.696  0.566  0.662  0.809  0.756  0.698  1643.330
0.8    0.2   0.778  0.590  0.687  0.822  0.765  0.728  1644.000
0.8    0.3   0.738  0.582  0.679  0.829  0.723  0.710  1654.670
0.8    0.4   0.793  0.547  0.692  0.811  0.734  0.716  1628.330
0.85   0.1   0.754  0.590  0.708  0.825  0.753  0.726  1599.000
0.85   0.2   0.770  0.547  0.667  0.786  0.780  0.710  1642.000
0.85   0.3   0.692  0.563  0.676  0.839  0.759  0.706  1672.330
0.85   0.4   0.744  0.590  0.668  0.841  0.759  0.720  1702.000
0.9    0.1   0.667  0.547  0.669  0.808  0.756  0.689  1578.500
0.9    0.2   0.778  0.590  0.700  0.811  0.770  0.730  1652.000
0.9    0.3   0.771  0.582  0.708  0.795  0.731  0.717  1686.000
0.9    0.4   0.762  0.582  0.700  0.825  0.755  0.725  1673.330
In order to choose a combination of parameters that better generalizes for all maps, we computed the mean squared error (MSE) [49] for each combination, using 1 − f̄ (where f̄ is the average fitness over all prediction steps) as the error. Formally, for each combination of parameters {tour_prob, mut_prob}, we computed:
MSE_{tour,mut} = (1/n) · Σ_{i=1}^{n} (1 − f̄_i)²
where f̄_i is the fitness average over all steps for map i, and n is the number of experiments (in this case, five, corresponding to the five controlled fires). The combination of parameters that minimizes the MSE is {tour = 0.75, mut = 0.4}; therefore, we chose this combination for the comparison against the competitors in Section 4. Nevertheless, the results suggest that no single combination of parameters clearly outperforms all the others. This adds to the robustness of the method, since it is not very sensitive to variations of these parameters.
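This selection procedure can be sketched as follows (illustrative; `avg_fitness` is a hypothetical mapping from each (tour, mut) pair to its list of per-map averages f̄_i, as reported in Tables A1–A5):

```python
from itertools import product

def select_parameters(avg_fitness, tour_values, mut_values):
    """Pick the (tour, mut) combination minimizing the MSE of the
    errors 1 - f̄_i over the n calibration maps."""
    def mse(combo):
        errors = [1.0 - f for f in avg_fitness[combo]]
        return sum(e * e for e in errors) / len(errors)

    # Evaluate every combination in the calibration grid.
    return min(product(tour_values, mut_values), key=mse)
```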

References

  1. Facts Plus Statistics: Wildfires—III. Available online: https://www.iii.org/fact-statistic/facts-statistics-wildfires#Wildland%20fires (accessed on 13 October 2022).
  2. Burgan, R.E.; Rothermel, R.C. BEHAVE: Fire Behavior Prediction and Fuel Modeling System—FUEL Subsystem; U.S. Department of Agriculture, Forest Service, Intermountain Forest and Range Experiment Station: Ogden, UT, USA, 1984. [Google Scholar] [CrossRef]
  3. Finney, M.A. FARSITE: Fire Area Simulator-Model Development and Evaluation; Res. Pap. RMRS-RP-4, Revised 2004; U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: Ogden, UT, USA, 1998; Volume 4, 47p. [Google Scholar] [CrossRef]
  4. Smith, J.E. vFireLib: A Forest Fire Simulation Library Implemented on the GPU. Master’s Thesis, University of Nevada, Reno, NV, USA, 2016. [Google Scholar]
  5. Heinsch, F.A.; Andrews, P.L. BehavePlus Fire Modeling System, Version 5.0: Design and Features; Gen. Tech. Rep. RMRS-GTR-249; U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: Fort Collins, CO, USA, 2010; Volume 249, 111p. [Google Scholar] [CrossRef]
  6. Lopes, A.; Cruz, M.; Viegas, D. FireStation—An Integrated Software System for the Numerical Simulation of Fire Spread on Complex Topography. Environ. Model. Softw. 2002, 17, 269–285. [Google Scholar] [CrossRef]
  7. Abdalhaq, B.; Cortés, A.; Margalef, T.; Bianchini, G.; Luque, E. Between Classical and Ideal: Enhancing Wildland Fire Prediction Using Cluster Computing. Clust. Comput. 2006, 9, 329–343. [Google Scholar] [CrossRef]
  8. Piñol, J.; Salvador, R.; Beven, K.; Viegas, D.X. Model Calibration and Uncertainty Prediction of Fire Spread. In Forest Fire Research and Wildland Fire Safety: Proceedings of IV International Conference on Forest Fire Research 2002 Wildland Fire Safety Summit, Coimbra, Portugal, 18–23 November 2002; Millpress Science Publishers: Rotterdam, The Netherlands, 2002. [Google Scholar]
  9. Bianchini, G.; Denham, M.; Cortés, A.; Margalef, T.; Luque, E. Wildland Fire Growth Prediction Method Based on Multiple Overlapping Solution. J. Comput. Sci. 2010, 1, 229–237. [Google Scholar] [CrossRef]
  10. Bianchini, G.; Caymes-Scutari, P.; Méndez Garabetti, M. Evolutionary-Statistical System: A Parallel Method for Improving Forest Fire Spread Prediction. J. Comput. Sci. 2015, 6, 58–66. [Google Scholar] [CrossRef]
  11. Méndez Garabetti, M.; Bianchini, G.; Tardivo, M.L.; Caymes Scutari, P. Comparative Analysis of Performance and Quality of Prediction Between ESS and ESS-IM. Electron. Notes Theor. Comput. Sci. 2015, 314, 45–60. [Google Scholar] [CrossRef]
  12. Méndez Garabetti, M.; Bianchini, G.; Caymes Scutari, P.; Tardivo, M.L.; Gil Costa, V. ESSIM-EA Applied to Wildfire Prediction Using Heterogeneous Configuration for Evolutionary Parameters. In Proceedings of the XXIII Congreso Argentino de Ciencias de la Computación, La Plata, Argentina, 9–13 October 2017; p. 10. [Google Scholar]
  13. Tardivo, M.L.; Caymes Scutari, P.; Méndez Garabetti, M.; Bianchini, G. Optimization for an Uncertainty Reduction Method Applied to Forest Fires Spread Prediction. In Computer Science—CACIC 2017; De Giusti, A.E., Ed.; Springer International Publishing: Cham, Switzerland, 2018; Volume 790, pp. 13–23. [Google Scholar] [CrossRef]
  14. Mitchell, M. An Introduction to Genetic Algorithms; The MIT Press: Cambridge, MA, USA, 1998. [Google Scholar] [CrossRef]
  15. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley: Reading, MA, USA, 1988. [Google Scholar]
  16. Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A Review of More than Two Decades of Research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar] [CrossRef]
  17. Malan, K.M.; Engelbrecht, A.P. A Survey of Techniques for Characterising Fitness Landscapes and Some Possible Ways Forward. Inf. Sci. 2013, 241, 148–163. [Google Scholar] [CrossRef]
  18. Lehman, J.; Stanley, K.O. Abandoning Objectives: Evolution Through the Search for Novelty Alone. Evol. Comput. 2011, 19, 189–223. [Google Scholar] [CrossRef]
  19. Lehman, J.; Stanley, K.O. Exploiting Open-Endedness to Solve Problems Through the Search for Novelty. In Artificial Life; 2008; p. 329. ISBN 978-0-262-75017-2. Available online: http://eprints.soton.ac.uk/id/eprint/266740 (accessed on 13 October 2022).
  20. Lehman, J.; Stanley, K.O. Evolvability Is Inevitable: Increasing Evolvability without the Pressure to Adapt. PLoS ONE 2013, 8, 2–10. [Google Scholar] [CrossRef]
  21. Strappa, J.; Caymes-Scutari, P.; Bianchini, G. A Parallel Novelty Search Metaheuristic Applied to a Wildfire Prediction System. In Proceedings of the 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Lyon, France, 30 May–3 June 2022; pp. 798–806. [Google Scholar] [CrossRef]
  22. Tardivo, M.L. Paralelización Y Sintonización De Evolución Diferencial Aplicada a Un Método De Reducción De Incertidumbre Para La Predicción De Incendios Forestales. Ph.D. Thesis, Universidad Nacional de San Luis, San Luis, Argentina.
  23. Naono, K.; Teranishi, K.; Cavazos, J.; Suda, R. (Eds.) Software Automatic Tuning; Springer: New York, NY, USA, 2010. [Google Scholar] [CrossRef]
  24. Caymes Scutari, P.; Bianchini, G.; Sikora, A.; Margalef, T. Environment for Automatic Development and Tuning of Parallel Applications. In Proceedings of the 2016 International Conference on High Performance Computing & Simulation (HPCS), Innsbruck, Austria, 18–22 July 2016; IEEE: Innsbruck, Austria, 2016; pp. 743–750. [Google Scholar] [CrossRef]
  25. Caymes Scutari, P.; Tardivo, M.L.; Bianchini, G.; Méndez Garabetti, M. Dynamic Tuning of a Forest Fire Prediction Parallel Method. In Computer Science—CACIC 2019; Pesado, P., Arroyo, M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; Volume 1184, pp. 19–34. [Google Scholar] [CrossRef]
  26. Zou, F.; Chen, D.; Liu, H.; Cao, S.; Ji, X.; Zhang, Y. A Survey of Fitness Landscape Analysis for Optimization. Neurocomputing 2022, 503, 129–139. [Google Scholar] [CrossRef]
  27. Pugh, J.K.; Soros, L.B.; Stanley, K.O. Quality Diversity: A New Frontier for Evolutionary Computation. Front. Robot. AI 2016, 3, 40. [Google Scholar] [CrossRef]
  28. Gomes, J.; Urbano, P.; Christensen, A.L. Evolution of Swarm Robotics Systems with Novelty Search. Swarm Intell. 2013, 7, 115–144. [Google Scholar] [CrossRef]
  29. Krčah, P. Solving Deceptive Tasks in Robot Body-Brain Co-evolution by Searching for Behavioral Novelty. In Advances in Robotics and Virtual Reality; Kacprzyk, J., Jain, L.C., Gulrez, T., Hassanien, A.E., Eds.; Springer Berlin Heidelberg: Berlin/Heidelberg, Germany, 2012; Volume 26, pp. 167–186. [Google Scholar] [CrossRef]
  30. Lehman, J.; Stanley, K.O. Evolving a Diversity of Virtual Creatures through Novelty Search and Local Competition. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation—GECCO ’11, Dublin, Ireland, 12–16 July 2011; ACM Press: Dublin, Ireland, 2011; p. 211. [Google Scholar] [CrossRef]
  31. Ollion, C.; Doncieux, S. Why and How to Measure Exploration in Behavioral Space. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation—GECCO ’11, Dublin, Ireland, 12–16 July 2011; ACM Press: Dublin, Ireland, 2011; p. 267. [Google Scholar] [CrossRef]
  32. Gomes, J.; Mariano, P.; Christensen, A.L. Devising Effective Novelty Search Algorithms: A Comprehensive Empirical Study. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, Madrid, Spain, 11–15 July 2015; ACM: Madrid, Spain, 2015; pp. 943–950. [Google Scholar] [CrossRef]
  33. Doncieux, S.; Paolo, G.; Laflaquière, A.; Coninx, A. Novelty Search Makes Evolvability Inevitable. arXiv 2020, arXiv:2005.06224. [Google Scholar]
  34. Galvao, D.F.; Lehman, J.; Urbano, P. Novelty-Driven Particle Swarm Optimization. In Artificial Evolution (EA 2015); Bonnevay, S., Legrand, P., Monmarché, N., Lutton, E., Schoenauer, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9554, pp. 177–190. [Google Scholar] [CrossRef]
  35. Cuccu, G.; Gomez, F. When Novelty Is Not Enough. In Applications of Evolutionary Computation; Di Chio, C., Cagnoni, S., Cotta, C., Ebner, M., Ekárt, A., Esparcia-Alcázar, A.I., Merelo, J.J., Neri, F., Preuss, M., Richter, H., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6624, pp. 234–243. [Google Scholar] [CrossRef]
  36. Mouret, J.B.; Doncieux, S. Encouraging Behavioral Diversity in Evolutionary Robotics: An Empirical Study. Evol. Comput. 2012, 20, 91–133. [Google Scholar] [CrossRef] [PubMed]
  37. Pugh, J.K.; Soros, L.B.; Szerlip, P.A.; Stanley, K.O. Confronting the Challenge of Quality Diversity. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, Madrid, Spain, 11–15 July 2015; ACM: Madrid, Spain, 2015; pp. 967–974. [Google Scholar] [CrossRef]
  38. Cully, A.; Clune, J.; Tarapore, D.; Mouret, J.B. Robots That Can Adapt like Animals. Nature 2015, 521, 503–507. [Google Scholar] [CrossRef] [PubMed]
  39. Mouret, J.B.; Clune, J. Illuminating Search Spaces by Mapping Elites. arXiv 2015, arXiv:1504.04909. [Google Scholar]
  40. Hodjat, B.; Shahrzad, H.; Miikkulainen, R. Distributed Age-Layered Novelty Search. In Proceedings of the Artificial Life Conference 2016, Cancun, Mexico, 4–6 July 2016; MIT Press: Cancun, Mexico, 2016; pp. 131–138. [Google Scholar] [CrossRef]
  41. Liu, Q.; Wang, Y.; Liu, X. PNS: Population-Guided Novelty Search for Reinforcement Learning in Hard Exploration Environments. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021. [Google Scholar] [CrossRef]
  42. Andrews, P.L. BehavePlus Fire Modeling System, Version 5.0: Variables; Gen. Tech. Rep. RMRS-GTR-213 Revised; Department of Agriculture, Forest Service, Rocky Mountain Research Station: Fort Collins, CO, USA, 2009; Volume 213, 111p. [Google Scholar] [CrossRef]
  43. Real, R.; Vargas, J.M. The Probabilistic Basis of Jaccard’s Index of Similarity. Syst. Biol. 1996, 45, 380–385. [Google Scholar] [CrossRef]
  44. Forest Fire Spread Prevention and Mitigation, SPREAD Project, Fact Sheet, FP5, CORDIS, European Commission. Available online: https://cordis.europa.eu/project/id/EVG1-CT-2001-00043 (accessed on 13 October 2022).
  45. Tardivo, M.L.; Caymes Scutari, P.; Bianchini, G.; Méndez Garabetti, M.; Cencerrado, A.; Cortés, A. A Comparative Study of Evolutionary Statistical Methods for Uncertainty Reduction in Forest Fire Propagation Prediction. Procedia Comput. Sci. 2017, 108, 2018–2027. [Google Scholar] [CrossRef]
  46. Méndez Garabetti, M.; Bianchini, G.; Gil Costa, V.; Caymes Scutari, P. Método de Reducción de Incertidumbre Basado en Algoritmos Evolutivos y Paralelismo Orientado a la Predicción y Prevención de Desastres Naturales. AJEA 2020, 5. [Google Scholar] [CrossRef]
  47. MPICH—High-Performance Portable MPI. Available online: https://www.mpich.org/ (accessed on 13 October 2022).
  48. Bianchini, G.; Cortés, A.; Margalef, T.; Luque, E. Improved Prediction Methods for Wildfires Using High Performance Computing: A Comparison. In Computational Science—ICCS 2006; Alexandrov, V.N., van Albada, G.D., Sloot, P.M.A., Dongarra, J., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3991. [Google Scholar] [CrossRef]
  49. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning: With Applications in R; Springer Texts in Statistics; Springer: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.