Article

Deep Learning Approach to Improve Spatial Resolution of GOES-17 Wildfire Boundaries Using VIIRS Satellite Data

1 Computer Science and Engineering Department, University of Nevada Reno, Reno, NV 89557, USA
2 Civil and Environmental Engineering Department, University of Nevada Reno, Reno, NV 89557, USA
3 Physics Department, University of Nevada Reno, Reno, NV 89557, USA
4 Department of Atmospheric Science, Desert Research Institute, Reno, NV 89512, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(4), 715; https://doi.org/10.3390/rs16040715
Submission received: 12 November 2023 / Revised: 30 January 2024 / Accepted: 7 February 2024 / Published: 18 February 2024
(This article belongs to the Special Issue The Use of Remote Sensing Technology for Forest Fire)

Abstract:
The rising severity and frequency of wildfires in recent years in the United States have raised numerous concerns regarding the improvement in wildfire emergency response management and decision-making systems, which require operational high temporal and spatial resolution monitoring capabilities. Satellites are one of the tools that can be used for wildfire monitoring. However, none of the currently available satellite systems provide both high temporal and spatial resolution. For example, GOES-17 geostationary satellite fire products have high temporal (1–5 min) but low spatial resolution (≥2 km), and VIIRS polar orbiter satellite fire products have low temporal (~12 h) but high spatial resolution (375 m). This work aims to leverage currently available satellite data sources, such as GOES and VIIRS, along with deep learning (DL) advances to achieve an operational high-resolution, both spatially and temporally, wildfire monitoring tool. Specifically, this study considers the problem of increasing the spatial resolution of high temporal but low spatial resolution GOES-17 data products using low temporal but high spatial resolution VIIRS data products. The main idea is to use an autoencoder DL model to learn how to map GOES-17 geostationary low spatial resolution satellite images to VIIRS polar orbiter high spatial resolution satellite images. In this context, several loss functions and DL architectures are implemented and tested to predict both the fire area and the corresponding brightness temperature. These models are trained and tested on wildfire sites from 2019 to 2021 in the western U.S. The results indicate that DL models can improve the spatial resolution of GOES-17 images, leading to images that mimic the spatial resolution of VIIRS images. Combined with GOES-17's higher temporal resolution, the DL model can provide high-resolution near-real-time wildfire monitoring capability as well as semi-continuous wildfire progression maps.

1. Introduction

Wildfires have increased in number, frequency, and severity in the United States in recent years, with the western states, and particularly California, being severely impacted [1,2,3]. Climate change, anthropogenic activities, and other factors have worsened the frequency and severity of wildfires. Catastrophic wildfire events have short- and long-term impacts on the economy, human health, ecosystems, watersheds, and built environment, highlighting the need for effective wildfire monitoring [4]. Real-time monitoring is crucial for providing timely and accurate information on the location, size, and intensity of wildfires. This can be an effective means for efficient emergency response management efforts and for making decisions related to firefighting, public safety, and evacuation orders.
Remote sensing technologies, including terrestrial-based, aerial-based, and satellite-based systems are used to provide information on wildfires, such as their location, rate of spread, and fire radiative power, which can help analyze wildfire behavior and manage its impact [5,6]. Terrestrial-based systems are highly accurate and have a fast response time, but their coverage is limited, and they are vulnerable to a blockage of the scanner beam by the terrain [6]. Aerial-based systems provide detailed fire progression mapping, but their deployment during emergencies can be challenging [6,7].
Satellite-based systems, such as NASA’s Earth Observing System (EOS), allow for the detection of wildfires over a vast area, including remote or inaccessible areas. However, the spatial and/or temporal resolution of these systems is limited, not providing the level of detail needed for emergency response applications [6]. Low Earth Orbit (LEO) satellites offer high spatial resolution, but they typically capture snapshots of the same area at low temporal resolution (e.g., hours or days) [8]. Landsat-8/9 [9] and Sentinel-2A/2B [10] provide multi-spectral global coverage with a resolution of 10 m to 30 m [11], but their revisit intervals of 8 days for Landsat-8/-9 and 5 days for Sentinel-2A/2B are inadequate for monitoring active fires [12]. Other instruments on board LEO satellites, such as the Visible Infrared Imaging Radiometer Suite (VIIRS) [13] on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) [14] on the Terra and Aqua satellites, are commonly used to detect active fire points with twice-daily revisits [15]. VIIRS offers a spatial resolution of 375 m, while MODIS provides a spatial resolution of 1 km [16,17]. On the other hand, Geosynchronous Equatorial Orbit (GEO) satellites provide high temporal resolution but lower spatial resolution (2 km) due to their higher altitude above the Earth [18]. An example of a GEO satellite is the Geostationary Operational Environmental Satellites R Series (GOES-R), consisting of three geostationary satellites, GOES-16, -17, and -18, which continuously monitor the entire western hemisphere, including North America, South America, the Pacific Ocean, the Atlantic Ocean, and Western Africa. However, due to its low spatial resolution, the GOES-R active fire product has been found to be unreliable, with a false alarm rate of around 60% to 80% for medium- and low-confidence fire pixels [19]. While it may struggle with some fire detections, GOES-R’s performance is commendable for many high-impact fires, as demonstrated by studies such as Lindley et al. (2020) [20] for the Kincade Fire.
Artificial intelligence (AI) has the potential to improve the temporal and spatial accuracy for monitoring wildfires. AI, particularly deep learning (DL) models, has been effective in solving complex tasks such as image classification [21], object detection [22], and semantic segmentation [23]. Building on these successes, recent research has explored the application of DL models on satellite imagery for land-use classification [24] and urban planning [25,26]. Leveraging success in high-level AI tasks such as image segmentation and super-resolution, it might be possible to develop a high-resolution wildfire monitoring system that can improve our ability to detect and respond to wildfire incidents using available data systems.
A few recent studies have employed DL-based approaches for early wildfire detection from streams of remote sensing data. These studies have focused mainly on active fire detection or fire area mapping while overlooking quantities that quantify fire intensity. Toan et al. (2019) proposed a deep convolutional neural network (CNN) architecture for wildfire detection using GOES satellite hyperspectral images [27]. Their model employs 3D convolutional layers to capture spatial patterns across multiple spectral bands of GOES and a patch normalization layer to locate fires at the pixel level. The study demonstrates the potential of using DL models for early wildfire detection and monitoring. In a related study, Toan et al. (2020) proposed another DL-based approach for the early detection of bushfires using multi-modal remote sensing data [28]. Their approach consisted of a DL model incorporating both CNNs and long short-term memory (LSTM) recurrent neural networks. The model was designed to process multi-modal remote sensing data at different scales, ranging from individual pixels (using CNN) to entire images (using LSTM). The authors reported high detection accuracy, outperforming several state-of-the-art methods. Zhao et al. (2022) focused on using time-series data from GOES-R for early detection of wildfires [29]. The study proposed a DL model incorporating a gated recurrent unit (GRU) network to process the time-series data from GOES-R. The model was designed to learn the spatiotemporal patterns of wildfire events and to predict their likelihood at different locations. Using a sliding window technique to capture the temporal dynamics of wildfire events, the authors demonstrated that the model could detect wildfire events several hours before they were reported by official sources. McCarthy et al. (2021) proposed an extension of U-Net CNNs to geostationary remote sensing imagery with the goal of improving the spatial resolution of wildfire detections and high-resolution active-fire monitoring [30]. Their study leverages the complementary properties of GEO and LEO sensors as well as static features related to topography and vegetation to inform the analysis of remote sensing imagery with physical knowledge about the fire behavior. However, the study acknowledged a limitation of the proposed algorithm in terms of false positives and emphasized the need for further research to address this issue. Overall, the published literature demonstrates the potential of DL methods for early detection and monitoring of wildfires using remote sensing data from different sources. Recently, Ghali et al. (2023) provided a comprehensive analysis of recent (between 2018 and 2022) DL models used for wildland fire detection, mapping, and damage and spread prediction using satellite data [31]. However, these studies are limited to fire detection without fire boundary and intensity monitoring. To the best of our knowledge, there is no published study that attempts to improve the spatial resolution of GOES for operational wildfire monitoring.
This work aims to address the mentioned gap in remote monitoring of wildfires by presenting a framework that utilizes DL techniques to enhance the spatial resolution of GOES-17 satellite images using VIIRS data as ground truth. In this context, we have performed an ablation study using different loss functions, evaluation metrics, and variations of a DL architecture known as autoencoder. In the domain of super-resolution and image denoising, autoencoders, Generative Adversarial Networks (GANs), and U-Net feature distinctive architectures tailored to specific tasks [32]. Autoencoders utilize an encoder–decoder structure, compressing input data into a latent representation, and then reconstructing the input. The symmetry of autoencoders emphasizes efficient feature representation. In contrast, GANs employ a generator-discriminator framework, where the generator produces synthetic data, and the discriminator evaluates its realism. Adversarial training enables GANs to excel in generating realistic data [32]. U-Net, initially designed for image segmentation, incorporates an encoder–decoder architecture with skip connections, enhancing its capability to capture both low-level (edges, corners, colors, textures, and simple patterns) and high-level features (shapes, objects, and other complex patterns). The decision to employ these algorithms is based on their simplicity in implementation, training time, and computational requirements. In this study, we prioritize simplicity to effectively address the current problem, while also acknowledging the potential exploration of more advanced methods in future research pursuits. To enable DL models to use contemporaneous data that share similar spectral and projection characteristics, a scalable dataset creation pipeline is developed, which can accommodate the addition of new sites. An automated real-time streaming and visualization dashboard system can utilize the proposed framework to transform relevant GOES data into high spatial resolution images in near-real-time.
The rest of the paper is organized as follows. Section 2 presents an overview of the satellite data streams used in the study and describes the preprocessing steps taken to ensure consistency. Section 3 discusses the proposed approach, including the autoencoder architectures, loss functions, and evaluation metrics. Finally, Section 4 presents the experimental results and comparisons followed by conclusions in Section 5.

2. Materials and Methods

2.1. Data Source

2.1.1. Geostationary Operational Environmental Satellite (GOES)

Launched by the National Oceanic and Atmospheric Administration (NOAA), GOES-17 has been operational as GOES-West since 12 February 2019. This geostationary satellite is 35,700 km above the Earth, providing constant watch over the Pacific Ocean and the western United States [33]. The Advanced Baseline Imager (ABI) is the primary instrument of GOES for imaging the Earth’s weather, oceans, and environment. ABI views the Earth with 16 spectral bands, including two visible channels (channels 1–2 with approximate center wavelengths of 0.47 and 0.64 µm), four near-infrared channels (channels 3–6 with approximate center wavelengths of 0.865, 1.378, 1.61, and 2.25 µm), and ten mid- and long-wave infrared (IR) channels (channels 7–16 with approximate center wavelengths of 3.900, 6.185, 6.950, 7.340, 8.500, 9.610, 10.350, 11.200, 12.300, and 13.300 µm) [33]. These channels are used by various models and tools to monitor different elements on the Earth’s surface, such as trees and water, or in the atmosphere, such as clouds, moisture, and smoke [33]. Dedicated products are available for cloud formation, atmospheric motion, convection, land surface temperature, ocean dynamics, vegetation health, and flow of water, fire, smoke, volcanic ash plumes, aerosols, air quality, etc. [33].
In this study, channel 7 (shortwave IR) of the Level 1B (L1B) Radiances product (ABI-L1B-Rad) is used as input to the DL model. The product, with its scan mode six, captures one observation of the Continental U.S. (CONUS) with a spatial resolution of 2 km every 5 min [34]. The L1B data product contains measurements of the radiance values (measured in milliwatts per square meter per steradian per reciprocal centimeter) from the Earth’s surface and atmosphere. These radiances are used to identify cloudy and hot regions within the satellite’s field of view [33]. These measured radiance values are converted to brightness temperature (BT) in this study to facilitate data analysis. A detailed explanation of this conversion process will be provided in a following section of the paper.

2.1.2. Visible Infrared Imaging Radiometer Suite (VIIRS)

The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument is installed on two polar orbiter satellites, namely Suomi National Polar-orbiting Partnership (S-NPP), operational since 7 March 2012 [35], and NOAA’s Joint Polar Satellite System (JPSS-1), now called NOAA-20, operational since 7 March 2018 [36]. These two satellites are 50 min apart, 833 km above the Earth, and revolve around the Earth in a polar orbit [36]. For each site, these satellites make two passes daily—one during the day and one at night [35]. VIIRS features daily imaging capabilities across multiple electromagnetic spectrum bands to collect high-resolution atmospheric imagery, including visible and infrared images to detect fire, smoke, and particles in the atmosphere [37]. The VIIRS instrument provides 22 spectral bands, including five 375 m resolution imagery bands (I-bands), 16 moderate 750 m resolution bands (M bands), and one day/night band (DNB band) [36]. The I-bands include a visible channel (I1) as well as a near, a shortwave, a mediumwave, and a longwave IR (I2–I5) with center wavelengths of 0.640, 0.865, 1.610, 3.740, and 11.450 µm, respectively. The M bands include five visible channels (M1–M5) together with two near IR (M6–M7), four shortwave IR (M8–M11), two mediumwave IR (M12–M13), and three longwave IR (M14–M16) channels with center wavelengths of 0.415, 0.445, 0.490, 0.555, 0.673, 0.746, 0.865, 1.240, 1.378, 1.610, 2.250, 3.700, 4.050, 8.550, 10.763, and 12.013 µm, respectively [38]. VIIRS also hosts a unique panchromatic day/night band (DNB), which is ultra-sensitive in low-light conditions and is operated at a central wavelength of 0.7 µm [38].
In this study, the VIIRS (S-NPP) I-band Active Fire Near-Real-Time product with 375 m resolution (i.e., VNP14IMGTDL_NRT) [39] is used as ground truth to improve the spatial resolution of GOES imagery due to its relatively high 375 m spatial resolution compared to GOES 2 km spatial resolution. Furthermore, VIIRS shows good agreement with its predecessors in hotspot detection, and it provides an improvement in the detection of relatively small fires as well as the mapping of large fire perimeters [35]. The VIIRS data are available from 20 January 2012 to present [35].

2.2. Data Pre-Processing

DL models require two sets of images: input images (i.e., GOES images herein) and ground truth or reference images (i.e., VIIRS images herein). The prediction of the DL model and ground truth are compared pixel by pixel, and the difference between the prediction and ground truth (i.e., loss function value) is used by the model to learn its parameters via backpropagation [40]. Hence, the input and ground truth images should have the same size, projection, time instance, and location. However, the initial formats of GOES and VIIRS data are different. VIIRS fire data, as obtained from NASA’s Fire Information for Resource Management System (FIRMS), are represented in vector form in CSV format [41,42], whereas GOES data are represented in raster form in NetCDF format [34]. Although raster VIIRS products are available, VNP14IMGTDL_NRT is the only product that has public access. This product includes detected fire hotspots in vector format, which we will use in this study. In the vector form, data features are represented as points or lines, while in the raster form, data features are represented as pixels arranged in a grid. Vector data need to be converted into a pixel-based format for proper display and comparison with raster data [43]. Furthermore, GOES images contain snapshots of both fire and its surrounding background information, whereas VIIRS data contain only the location and radiance value of globally detected fire hotspots. Therefore, data pre-processing is required to make the initial formats and projections between GOES and VIIRS data consistent.
The pre-processing pipeline aims to create a consistent dataset of images from multiple wildfire sites, with standardized dimensions, projections, and formats. Each processed GOES image in the dataset corresponds to a processed VIIRS image representing the same region and time instance of a wildfire event. To facilitate this process, a comprehensive list of wildfire events in the western U.S. between 2019 and 2021 was compiled from multiple sources [44,45] (see Appendix A). This wildfire property list (WPL) plays a key role in several pre-processing steps, such as defining the region of interest (ROI) for each wildfire site. To obtain the four corners of the ROI, a constant value is added to/subtracted from the central coordinates specified in the WPL. For each wildfire event defined by its ROI and its duration as included in the WPL, the pre-processing pipeline conducts the following four steps.
  • Step 1: Extracting wildfire event data from VIIRS and identifying timestamps. The pipeline first extracts the records from the VIIRS CSV file to identify detected fire hotspots that fall within the ROI and duration of the wildfire event. The pipeline also identifies unique timestamps from the extracted records.
  • Step 2: Downloading GOES images for each identified timestamp. In order to ensure a contemporaneous dataset, the pre-processing pipeline downloads GOES images with captured times that are near to each VIIRS timestamp identified in Step 1. GOES has a temporal resolution of 5 min, meaning that there will always be a GOES image within 2.5 min of the VIIRS captured time, except in cases where the GOES file is corrupted due to a cooling system issue [33]. In case of corrupted GOES data, Steps 3 and 4 will be halted and the pipeline will proceed to the next timestamp. It is important to note that the instrument delivers 94% of the intended data [33].
  • Step 3: Creating processed GOES images. The GOES images obtained in Step 2 have a different projection from the corresponding VIIRS images. In this step, the GOES images are cropped to match the site’s ROI and reprojected into a standard coordinate reference system (CRS).
  • Step 4: Creating processed VIIRS images. The VIIRS records obtained in Step 1 are grouped by timestamp and rasterized, interpolated, and saved into GeoTIFF images using the same projection as the one used to reproject GOES images in Step 3.
The above steps are explained in more detail in the following subsections. Once these steps are completed, the GOES and VIIRS images will have the same image size and projection. An example of the processed GOES and VIIRS images is shown in Figure 1a and 1b, respectively. In Figure 1c, where the VIIRS image is overlaid on the GOES image, the VIIRS fire region almost completely covers the GOES fire region, which verifies the data pre-processing pipeline. It is crucial to clarify that the dark color in Figure 1c does not signify low temperatures but rather serves to outline the contours of fires detected in the VIIRS image, distinguishing them from the underlying GOES image. It should be noted that in Figure 1a, the GOES image includes both fire pixels and background information, while in Figure 1b, the output VIIRS image only contains fire pixels without any background information.

2.2.1. GOES Pre-Processing

The NOAA Comprehensive Large Array-data Stewardship System (CLASS) repository [34] is the official site for accessing GOES products. ABI fire products are also available publicly in Amazon Web Services (AWS) S3 Buckets [34]. The Python s3fs [34] library, which is a filesystem in user space (FUSE) that allows mounting an AWS S3 bucket as a local filesystem, is utilized in this study to access the AWS bucket “NOAA-GOES17”. As previously mentioned, the downloaded GOES images cover the western U.S. and Pacific Ocean and need to be cropped to the specific wildfire site’s ROI. Additionally, the unique projection system of GOES, known as the “GOES Imager Projection” [33] must be transformed to a standard CRS to make it comparable with VIIRS. In this study, the WGS84 system (latitude/longitude) was used as the standard CRS. The projection transformation was achieved using the Satpy Python library [46], which is specifically designed for reading, manipulating, and writing data from Earth-observing remote sensing instruments. Specifically, the Satpy scene function was utilized to create a GOES scene from the downloaded GOES file, which allows the transformation of the GOES CRS to the WGS84 system. The Satpy area definition was then applied to crop the GOES scene to match the ROI. Furthermore, Satpy enabled the conversion of radiance values, measured in milliwatts per square meter per steradian per reciprocal centimeter, to brightness temperature (BT) values, expressed in Kelvin (K), using Planck’s law [47]. This transformation was conducted to have the same physical variable as VIIRS data to facilitate training. Additionally, to ensure the accuracy of these converted values, a comparison was conducted with the Level 2 GOES cloud and moisture CMI (Cloud and Moisture Imagery) product. This validation process confirms the reliability and appropriateness of the radiance-to-BT conversion for our analysis. It is important to note that Level 1 data are utilized throughout due to its real-time availability, as Level 2 data are not real-time and may not be suitable for time-sensitive applications.
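To illustrate this step, the following is a minimal sketch of the GOES pre-processing, assuming a hypothetical timestamp, file pattern, and ROI; the bucket name, Satpy reader, and channel identifier are the publicly documented ones, but the exact parameters used in this study may differ.

```python
import s3fs
from satpy import Scene
from pyresample.geometry import AreaDefinition

# Download one CONUS channel-7 L1B file from the public bucket (hypothetical day/hour).
fs = s3fs.S3FileSystem(anon=True)
remote = fs.glob("noaa-goes17/ABI-L1b-RadC/2020/252/20/*C07*.nc")[0]
fs.get(remote, "goes_c07.nc")

# Load channel 7; for ABI IR channels, Satpy calibrates radiances to
# brightness temperature (K) via Planck's law by default.
scn = Scene(reader="abi_l1b", filenames=["goes_c07.nc"])
scn.load(["C07"])

# Hypothetical ROI: fire-site center +/- 0.6 degrees in the WGS84 lat/lon CRS.
lon_c, lat_c, c = -121.0, 40.0, 0.6
roi = AreaDefinition(
    "roi", "wildfire region of interest", "roi",
    {"proj": "latlong", "datum": "WGS84"},
    64, 64,                                   # output grid size (illustrative)
    (lon_c - c, lat_c - c, lon_c + c, lat_c + c),
)

# Crop and reproject the GOES scene to the ROI; the result is a BT array in Kelvin.
goes_bt = scn.resample(roi)["C07"].values
```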

2.2.2. VIIRS Pre-Processing

Annual summaries of VIIRS-detected fire hotspots in CSV format are accessible by country through the Fire Information for Resource Management System (FIRMS) [41], which is a part of NASA’s Land Atmosphere Near-Real-time Capability for Earth Observing System (LANCE) [42]. In this study, annual summaries from 2019 to 2021 were utilized since corresponding GOES-17 files are only available from 2019, the year in which GOES-17 became operational. The CSV files contain the I-4 channel brightness temperature (3.55–3.93 µm) of the fire pixel measured in K, referred to as b-temperature I-4, along with other measurements and information such as the acquisition date, acquisition time, and latitude and longitude fields. Notably, the annual summary files are exclusively available for S-NPP.
The VIIRS-detected fire hotspot vector data are available in CSV format, with the longitude and latitude coordinates of each fire hotspot defining its location at the center of a 375 m by 375 m pixel. To ensure its compatibility with GOES data for the DL model, the vector data was converted into a pixel-based format through rasterization [43]. This process involved mapping the vector data to pixels, resulting in an image that can be displayed [43]. This process had three main steps: (1) defining the ROI, (2) defining the grid system over the ROI, and (3) mapping the fire data to the pixels. This process is illustrated schematically in Figure 2.
The first step involved defining the ROI for the wildfire site. To accomplish this, a constant value C = 0.6 degrees was added to/subtracted from the center coordinates of the fire site as defined in the WPL to obtain the four corners of the ROI. Figure 2a shows the defined ROI around the fire hotspot centers, represented by the red dots. The second step involved overlaying a grid system over the ROI as illustrated in Figure 2b. In this grid system, each cell represented a single pixel in the output image where a specific cell size of 375 m by 375 m was selected to match the resolution of the VIIRS hotspot detection. The process involved transforming the entire ROI from longitude/latitude space to northing/easting or distance space, using the transformation function of the PyProj Python library. The final step was to map the fire locations from the CSV file to the corresponding pixels. If the central coordinates of a fire hotspot fall within the cell associated with a pixel, that pixel was activated and assigned a value based on the measured I-4 BT/I-5 BT value from the VIIRS CSV file, according to Equation (1).
$$BT_{pixel} = \begin{cases} 367, & BT_{I4} = 208 \\ BT_{I5}, & BT_{I4} < BT_{I5} \\ BT_{I4}, & \text{otherwise} \end{cases} \tag{1}$$
where $BT_{I4}$ is channel I-4 BT and $BT_{I5}$ is channel I-5 BT for each hotspot defined in the VIIRS CSV file. Equation (1) is employed to overcome VIIRS data artifacts as outlined in [13]. Specifically, the first condition was for instances where the VIIRS pixel is located at the core area of intense wildfire activity causing pixel saturation. This, in turn, leads to complete folding of the channel I-4 data resulting in I-4 BT values of 208 K. The second condition in Equation (1) was designed to deal with “mixed pixels”, where saturated and unsaturated native pixels were mixed during the process of combining VIIRS channels data, known as the “aggregation process”, to derive fire products, leading to artificially low I-4 BT values. To overcome this, we assume that $BT_{I5}$ could be a proper substitute for $BT_{I4}$. Nonetheless, the appropriateness of this assumption needs further investigation. Figure 2c illustrates this process by showing the activation of pixels without displaying their actual values for simplicity.
Following the rasterization process, nearest neighbor interpolation [48] was performed for non-fire pixels with a neighboring fire pixel. This interpolation method was used to eliminate any artificial patches in and around the fire region, as shown in Figure 3a, to achieve a continuous fire region in the output image, as shown in Figure 3b. However, it should be noted that this method can slightly and erroneously expand the fire region at the outer fire boundaries due to the logic of the interpolation approach, resulting in an expansion of up to one fire grid cell (375 m). Improved interpolation methods are needed in future work to eliminate this erroneous expansion. Finally, the created raster was saved in GeoTIFF format using the GDAL library for future use [49].
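As a concrete illustration, the following is a minimal sketch of the rasterization step and the brightness temperature selection of Equation (1), assuming the standard FIRMS column names (latitude, longitude, bright_ti4, bright_ti5) and an illustrative projected CRS for the northing/easting transformation; the nearest neighbor interpolation and GeoTIFF export are omitted.

```python
import numpy as np
from pyproj import Transformer

CELL = 375.0  # VIIRS I-band pixel size in metres
to_metres = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)  # illustrative CRS

def bt_pixel(bt_i4, bt_i5):
    """Equation (1): handle saturated and mixed VIIRS pixels."""
    if bt_i4 == 208:          # channel I-4 folded due to saturation
        return 367.0
    if bt_i4 < bt_i5:         # mixed pixel: substitute the I-5 BT
        return bt_i5
    return bt_i4

def rasterize_hotspots(records, lon_min, lat_min, lon_max, lat_max):
    """Map VIIRS hotspot records (one timestamp) onto a 375 m grid over the ROI."""
    x0, y0 = to_metres.transform(lon_min, lat_min)
    x1, y1 = to_metres.transform(lon_max, lat_max)
    ncols, nrows = int((x1 - x0) // CELL), int((y1 - y0) // CELL)
    image = np.zeros((nrows, ncols), dtype=np.float32)
    for rec in records:       # each rec is a dict-like row from the FIRMS CSV
        x, y = to_metres.transform(rec["longitude"], rec["latitude"])
        col, row = int((x - x0) // CELL), int((y1 - y) // CELL)  # row 0 at the northern edge
        if 0 <= row < nrows and 0 <= col < ncols:
            image[row, col] = bt_pixel(rec["bright_ti4"], rec["bright_ti5"])
    return image
```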

3. Proposed Approach

3.1. Autoencoder

Autoencoders are one of the popular DL architectures for image super-resolution [50]. An autoencoder takes the low-resolution image as input, learns to recognize the underlying structure and patterns, and generates a high-resolution image that closely resembles the ground truth [51]. Autoencoders have two main components: (1) the encoder, which extracts important features from the input data, and (2) the decoder, which generates an output based on the learned features. Together, they can effectively distill relevant information from an input to generate the desired output [51].
In this study, an autoencoder was tasked with distilling the relevant portions of the input GOES-17 imagery to generate an output with increased resolution and no background noise (i.e., such as reflections from clouds, lakes, etc.) to mimic VIIRS imagery. Figure 4 illustrates the autoencoder architecture along with the dimensions for each layer utilized in this study. As can be seen, the model’s encoder component is composed of five two-dimensional convolutional layers with a kernel size of three and a padding of one, followed by a Rectified Linear Unit (ReLu) activation layer [52]. This architecture allows the model to recognize important features in the low-resolution input images while preserving their spatial information. The use of ReLu activation layers after each convolutional layer helps to introduce nonlinearity in the model, which is critical for the network’s ability to learn complex patterns presented in the input images. Additionally, the second and fourth convolutional layers are followed by max pool layers, which help to down-sample the feature maps and reduce the spatial dimensions of the data. Meanwhile, the decoder component consists of two blocks, each containing one transposed convolutional layer [53] and two normal convolutional layers with a kernel size of three. The use of transposed convolutional layers in the decoder allows the model to up-sample the feature maps and generate a high-resolution output image that closely resembles the ground truth image. The normal convolutional layers that follow the transposed convolutional layers help to refine the features and details in the output image. Lastly, the decoder ends with a final convolutional layer that produces the final output image. This architecture was chosen in this study after testing several modifications such as altering the activation functions, adding or removing convolutional layers, and increasing the number of blocks. The final architecture was chosen based on its ability to produce high-quality output images that accurately represent the ground truth images while minimizing the computational resources required for training.
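The following PyTorch sketch reflects the layer arrangement described above; the kernel size, padding, and the placement of pooling and up-sampling follow the text, while the channel widths are illustrative assumptions (the exact dimensions are given in Figure 4).

```python
import torch
import torch.nn as nn

class FireAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: five 3x3 convolutions (padding 1), each followed by ReLU,
        # with max pooling after the second and fourth convolutions.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
        )
        # Decoder: two blocks of one transposed convolution (2x up-sampling)
        # followed by two 3x3 convolutions, then a final output convolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 2, stride=2), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.ReLU(),  # non-negative BT output
        )

    def forward(self, x):  # x: (N, 1, 128, 128) GOES BT patches
        return self.decoder(self.encoder(x))
```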

3.2. Loss Functions and Architectural Tweaking

The objective of this study is to enhance GOES-17 imagery by improving its spatial resolution and removing background information, as well as predicting improved radiance values. To achieve this, an ablation study is conducted by considering variations in the choice of the loss functions and autoencoder architecture, in order to determine the optimal solution. In particular, four different loss functions are considered, namely (1) global root mean square error, (2) global plus local root means square error, (3) Jaccard loss, and (4) global root mean square error plus Jaccard loss, which are explained in detail in the following sections.

3.2.1. Global Root Mean Square Error (GRMSE)

Root mean square error (RMSE) is a loss function commonly used in image reconstruction and denoising tasks [54], where the goal is to minimize the difference between the predicted and ground truth images. In this study, the autoencoder model is initially trained using the RMSE loss function on the entire input image to predict the BT value for each input image pixel based on the VIIRS ground truth data. This is referred to as the global RMSE (GRMSE) as defined in Equation (2). The term “global” is used here to distinguish it from other RMSE-based losses, which will be discussed in the next section.
$$\mathrm{GRMSE} = \sqrt{\frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}{n}} \tag{2}$$
where $y_i$ represents the BT value of the ith pixel in the VIIRS ground truth image, $\hat{y}_i$ represents the BT value of the ith pixel in the predicted image, and $n$ represents the total number of pixels in the VIIRS/predicted image.

3.2.2. Global Plus Local RMSE (GLRMSE)

Since the background area is often significantly larger than the fire area (e.g., see Figure 1c), it dominates the RMSE calculation, possibly leading to decreased training performance. For this reason, a local RMSE (LRMSE) is defined in Equation (3). The LRMSE applies only to the fire area of the ground truth image (i.e., where the pixel’s BT value is non-zero). Specifically,
$$\mathrm{LRMSE} = \sqrt{\frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 I}{\sum_{i=1}^{n} I}}, \qquad I = \begin{cases} 1, & y_i \neq 0 \\ 0, & \text{otherwise} \end{cases} \tag{3}$$
where I is an identifier variable, which is one for all pixels belonging to the fire area and zero for the background. Based on this, the RMSE is calculated only for the fire region of the VIIRS ground truth image. The LRMSE is combined with the global RMSE resulting in a global plus local RMSE loss function (GLRMSE) as defined in Equation (4).
$$\mathrm{GLRMSE} = W_G \cdot \mathrm{GRMSE} + W_L \cdot \mathrm{LRMSE} \tag{4}$$
where $W_G$ is a weight factor for the global RMSE and $W_L$ is a weight factor for the local RMSE. These weights are hyperparameters and are determined through hyperparameter optimization as will be discussed later.
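A minimal PyTorch sketch of Equations (2)–(4) is given below; the default weight values correspond to those reported in Section 4.1, and the small epsilon is added here only to guard against samples with no fire pixels.

```python
import torch

def grmse(pred, target):
    """Equation (2): RMSE over the entire image."""
    return torch.sqrt(torch.mean((target - pred) ** 2))

def lrmse(pred, target, eps=1e-8):
    """Equation (3): RMSE restricted to the fire pixels of the ground truth."""
    fire = (target != 0).float()
    return torch.sqrt(torch.sum(((target - pred) ** 2) * fire) / (fire.sum() + eps))

def glrmse(pred, target, w_g=8.0, w_l=1.0):
    """Equation (4): weighted combination of global and local RMSE."""
    return w_g * grmse(pred, target) + w_l * lrmse(pred, target)
```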

3.2.3. Jaccard Loss (JL)

For effective wildfire monitoring, it is crucial to not only minimize discrepancies in BT values but also to predict the wildfire perimeters. This is accomplished by binary segmentation in which a binary value is assigned to each pixel based on its category, partitioning the image into foreground (i.e., fire) and background regions [55]. The Jaccard Loss (JL) function defined in Equation (5) is a prevalent loss function utilized in the field of image segmentation [56]. It aims to evaluate and improve the similarity between the predicted and ground truth binary masks, which are also referred to as segmentation masks. Specifically, JL is defined as
$$\mathrm{JL} = -\frac{\sum_{i=1}^{n} y_{b,i} \cdot \hat{y}_{b,i}}{\sum_{i=1}^{n} y_{b,i} + \sum_{i=1}^{n} \hat{y}_{b,i} - \sum_{i=1}^{n} y_{b,i} \cdot \hat{y}_{b,i}} \tag{5}$$
where $y_{b,i}$ represents the presence (1) or absence (0) of fire in the VIIRS ground truth image at the ith pixel, and $\hat{y}_{b,i}$ represents the probability of fire in the predicted image at the ith pixel.
In order to use the JL in the autoencoder training, the VIIRS ground truth image is transformed into a binary image by setting the BT value of all fire pixels to one, while setting the BT value of the background pixels to zero [56]. Additionally, the final activation layer of the autoencoder model is modified from ReLu to Sigmoid [57], essentially generating a probability value for every pixel in the output image. This probability map assigns a value between zero and one to each pixel, indicating the probability of that pixel belonging to the fire region or not. Once the model is properly trained, the resulting probability map is converted to a binary map (0/1). The resulting binary map can be used to identify the fire region and distinguish it from the surrounding environment.
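A minimal sketch of this loss on the Sigmoid probability map is shown below; `prob` is the predicted fire probability, `target` is the binarized VIIRS ground truth, and the sign convention follows Equation (5).

```python
import torch

def jaccard_loss(prob, target, eps=1e-8):
    """Equation (5): soft Jaccard (IOU) loss on the predicted probability map."""
    intersection = torch.sum(prob * target)
    union = torch.sum(prob) + torch.sum(target) - intersection
    return -(intersection / (union + eps))  # minimizing drives the overlap towards 1
```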

3.2.4. RMSE Plus Jaccard Loss Using Two-Branch Architecture

RMSE and Jaccard losses can be combined to predict the shape and location of the fire as well as its BT values. However, as these loss functions require different activation layers, combining them in a single network necessitates architectural changes. To address this issue, a two-branch architecture is introduced, where the first branch uses ReLu as the last activation layer to predict the BT values, while the second branch uses a Sigmoid activation layer to predict the fire probability map. The resulting architecture combines GRMSE and JL loss functions as defined in Equation (6) as follows to enhance learning and improve predictions.
$$\mathrm{TBL} = W_R \cdot \mathrm{GRMSE} + W_J \cdot \mathrm{JL} \tag{6}$$
where $W_R$ is the weight for the GRMSE loss, $W_J$ is the weight for the JL loss, and TBL is the two-branch loss, which is the weighted sum of the two losses. These weights are determined through hyperparameter optimization.
The two-branch architecture, illustrated in Figure 5, branches out before the final convolutional layer. The outputs from the two branches are compared with their respective ground truth images to calculate individual losses, which are then combined to train the model. This approach allows the learning process to utilize information from both branches, resulting in a more effective model. As a result, the output from the GRMSE branch, which captures both the predicted location and BT of the fire, is considered the primary output of the model. From this point forward, this architecture will be referred to as the two-branch loss (TBL) model.
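A minimal sketch of the combined objective in Equation (6) is shown below, assuming a model whose ReLU branch returns the BT map and whose Sigmoid branch returns the fire probability map; `grmse` and `jaccard_loss` are the functions sketched earlier.

```python
def two_branch_loss(bt_pred, prob_pred, bt_target, mask_target, w_r=3.0, w_j=1.0):
    """Equation (6): weighted sum of the GRMSE and Jaccard branch losses."""
    return w_r * grmse(bt_pred, bt_target) + w_j * jaccard_loss(prob_pred, mask_target)
```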

3.3. Evaluation

3.3.1. Pre-Processing: Removing Background Noise

Accurate network prediction evaluation requires taking noise into account. Although noise may not have significant physical relevance due to its typically low BT compared to the actual fire, it can still impact the accuracy of the evaluation metrics. To mitigate this issue, Otsu’s thresholding method has been adopted in this study to effectively remove background noise and improve evaluation accuracy. Otsu’s thresholding automatically determines the optimal threshold level that separates the foreground (relevant data) from the background (noise) [58]. This is accomplished by calculating the variance between two classes of pixels (foreground and background) at different threshold levels and selecting the threshold level that maximizes the variance between these two classes. Figure 6 shows the process of using Otsu’s thresholding to remove background noise from the model’s prediction (Figure 6a) and create a post-thresholding prediction (Figure 6b) that is compared to the ground truth (Figure 6c) to evaluate the performance of the model. The successful removal of background by Otsu’s thresholding improves the consistency between the evaluation metrics and visual inspection.
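The background-removal step can be sketched as follows, using the scikit-image implementation of Otsu's method as an illustrative choice (the study does not specify a particular library).

```python
import numpy as np
from skimage.filters import threshold_otsu

def remove_background(pred_bt):
    """Zero out background pixels below the Otsu threshold of the predicted BT image."""
    threshold = threshold_otsu(pred_bt)   # level maximizing the between-class variance
    cleaned = pred_bt.copy()
    cleaned[cleaned < threshold] = 0.0
    return cleaned
```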

3.3.2. Evaluation Metrics

The performance of DL models is evaluated using two metrics: intersection over union (IOU) and intersection peak signal-to-noise ratio (IPSNR), which is a modified version of PSNR. These metrics are explained below.
IOU measures the agreement between the prediction and ground truth by quantifying the degree of overlap of the fire area between ground truth and network prediction as follows.
$$\mathrm{IOU} = \frac{\sum_{i=1}^{n} y_{b,i} \cdot \hat{y}_{b,i}}{\sum_{i=1}^{n} y_{b,i} + \sum_{i=1}^{n} \hat{y}_{b,i} - \sum_{i=1}^{n} y_{b,i} \cdot \hat{y}_{b,i}} \tag{7}$$
The terms used in this equation are already defined following Equation (5). However, Equations (7) and (5) differ in sign. To compute the IOU metric, both the post-thresholding prediction and ground truth images are converted into binary masks. This is achieved by setting all fire BT values to one, effectively binarizing the image. This step is conducted to simplify the IOU calculation. By representing the images as binary masks, the IOU can be calculated as the intersection of the two masks divided by their union, providing an accurate measure of the overlap between the predicted fire region and the ground truth.
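A minimal sketch of this computation on the binarized masks is shown below.

```python
import numpy as np

def iou(pred_bt, truth_bt):
    """Equation (7): overlap between binarized prediction and ground truth fire masks."""
    pred_mask = (pred_bt > 0).astype(np.float32)
    truth_mask = (truth_bt > 0).astype(np.float32)
    intersection = np.sum(pred_mask * truth_mask)
    union = np.sum(pred_mask) + np.sum(truth_mask) - intersection
    return intersection / union if union > 0 else 0.0
```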
On the other hand, IPSNR quantifies the similarity of BT values in the intersection of the fire areas between the prediction image and its ground truth as shown in Equation (8). Here, the intersection is defined as the region where both the prediction and VIIRS ground truth have fire BT as shown in Equation (9).
$$\mathrm{IPSNR} = 20 \log_{10}\left(\frac{\mathrm{maxval}}{\mathrm{IRMSE}}\right) \tag{8}$$
$$\mathrm{IRMSE} = \sqrt{\frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 I}{\sum_{i=1}^{n} I}}, \qquad I = \begin{cases} 1, & y_i \cdot \hat{y}_i \neq 0 \\ 0, & \text{otherwise} \end{cases} \tag{9}$$
where maxval is the maximum BT in the VIIRS ground truth and IRMSE is the RMSE computed solely in the intersecting fire area using the identifier variable I, which is one for areas where the predicted fire region intersects with the VIIRS ground truth, and zero otherwise.
The motivation for utilizing IPSNR instead of PSNR stems from the difficulty of evaluating the model’s performance based on the similarity of BT values over the whole image, given that the fire area typically occupies only a small portion of the image. If the model’s prediction is incorrect, most of the background still appears similar to the ground truth, resulting in higher PSNR values that do not necessarily reflect accurate performance. Therefore, to obtain a more reliable evaluation metric, it is necessary to focus on assessing only the predicted fire area that matches the ground truth when calculating PSNR. It should be noted that the underlying principle of IPSNR is different from that of LRMSE. IPSNR evaluates the quality of the correctly predicted fire area (i.e., predicted fire pixels that are also present in the ground truth) by considering the RMSE for the intersection of the prediction and ground truth (as described in Equation (9)), while LRMSE is a loss function which focuses on reducing the RMSE specifically for the fire area (as described in Equation (3)).

3.3.3. Dataset Categorization

To obtain a more accurate and precise evaluation of the model’s performance, it is important to account for diversity in input and ground truth samples. Factors such as fire orientation, location, background noise, fire size, and similarity between input and ground truth can vary significantly, affecting the model’s performance. Evaluating the model’s performance on a given test set without separately considering the above factors might provide an inaccurate assessment of the model since average performance may be biased towards the majority of sample types in the test set. To overcome this limitation, the test set was divided into four categories based on (1) the total coverage of distinguishable foreground in GOES images, and (2) the initial IOU between the distinguishable foreground in GOES image and VIIRS ground truth. It should be noted that these categorizations cannot be applied in an operational setting as the IOU between GOES and VIIRS can be unknown. To achieve this categorization, Otsu’s thresholding was utilized herein to eliminate background information from the original GOES image (Figure 7a), resulting in a post-thresholding GOES image (Figure 7b) with distinguishable foreground information that is used to compute coverage and initial IOU. Thus, coverage was determined by calculating the ratio of fire pixels to the total number of pixels in the post-thresholding image. This metric indicates the degree of foreground presence in the GOES image. Meanwhile, the initial IOU was calculated using the binarized post-thresholding GOES image and binarized VIIRS ground truth. This provides a measure of the degree of foreground area similarity between the two images. It should be noted that the foreground area identified by Otsu’s method in the GOES image may not always accurately indicate the fire region, unlike the prediction image. In some cases, the BT values of the background may be comparable to, or even greater than that of the actual fire region, making it difficult to identify the fire region accurately. Additionally, the coverage calculation involves a single iteration of Otsu’s thresholding, which was used to determine the true coverage of GOES. On the other hand, calculating the IOU required multiple iterations of Otsu’s thresholding to accurately assess the fire area. The final IOU result was obtained by selecting the highest IOU value obtained from all iterations, ensuring that only the fire area is considered for evaluation.
Using the calculated coverage and IOU, the test set is categorized into four groups. These groups are as follows: (1) low coverage with high IOU (LCHI), (2) low coverage with low IOU (LCLI), (3) high coverage with high IOU (HCHI), and (4) high coverage with low IOU (HCLI). The threshold values for coverage and IOU are determined through visual inspection of the results and are provided in Table 1. By categorizing the test set, we can perform a more meaningful evaluation of the model’s performance and identify its strengths and weaknesses under various scenarios.
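The categorization can be sketched as below; the coverage and IOU thresholds shown here are hypothetical placeholders, with the values actually used listed in Table 1.

```python
def categorize_sample(coverage, initial_iou, cov_thr=0.05, iou_thr=0.2):
    """Assign a test sample to LCHI, LCLI, HCHI, or HCLI (thresholds are placeholders)."""
    cov = "HC" if coverage >= cov_thr else "LC"
    sim = "HI" if initial_iou >= iou_thr else "LI"
    return cov + sim
```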

3.3.4. Post-Processing: Normalization of Prediction Values

To ensure that the autoencoder’s output aligns with the desired physical range of VIIRS brightness temperature values, Min–Max scaling, a commonly used normalization technique, was employed. This process linearly transformed the autoencoder’s output to fit within the specific range corresponding to VIIRS data, enhancing both the physical interpretability of the results and their compatibility with existing remote sensing algorithms and models tailored to this range.
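A minimal sketch of this normalization is shown below; the target bounds are illustrative assumptions based on the saturation values used in Equation (1).

```python
import numpy as np

def minmax_to_bt_range(pred, bt_min=208.0, bt_max=367.0):
    """Linearly rescale the autoencoder output into a VIIRS-like BT range (K)."""
    lo, hi = float(pred.min()), float(pred.max())
    if hi == lo:                         # constant image: avoid division by zero
        return np.full_like(pred, bt_min)
    return bt_min + (pred - lo) * (bt_max - bt_min) / (hi - lo)
```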

4. Results

4.1. Training

The western U.S. wildfire events that occurred between 2019 and 2021 (listed in Appendix A) were utilized herein to create contemporaneous images of VIIRS and GOES using the preprocessing steps explained in Section 2 for network training and evaluation. The preprocessed images were then partitioned into windows of size 128 by 128 pixels, resulting in a dataset of 5869 samples. These samples were then split with a ratio of 4 to 1 to obtain training and test sets, respectively, that is, 80% for training and 20% for testing. The training set was further divided using a four-to-one ratio to obtain training and validation sets. After splitting the dataset, the training, validation, and test sets included 3756, 939, and 1174 samples, respectively. The validation set was used to identify potential overfitting during the training process, while the test set was kept aside for evaluating the performance of the model on novel samples. To improve the diversity of the training data and prevent overfitting, data augmentation techniques were utilized. In particular, at each epoch, the training samples underwent random horizontal and vertical flips, leading to greater variability in the training data and improved the model’s generalization to novel samples.
To find optimum hyperparameters for the networks defined in Section 3.1 and Section 3.2.4, hyperparameter tuning was performed using the Weights and Biases (WAB) tool. For each subset of hyperparameters, the validation loss was used to identify potential overfitting and determine the optimum hyperparameters. The hyperparameter subset that led to the smallest validation loss and the least overfitting was then chosen as the final set of hyperparameters. In the case of autoencoder models combining two losses, such as GLRMSE and TBL, the weights for the loss functions were also considered as hyperparameters and were determined through the same hyperparameter tuning process.
After hyperparameter tuning was conducted, all autoencoder models, as outlined in Section 3.2, were trained for 150 epochs with a batch size of 16 using the Adam optimizer with a learning rate of 3 × 10−5 [59]. A learning rate decay based on the validation loss plateau, with a decay factor of 0.1, threshold of 1 × 10−5, and patience of 10 epochs, was used during training to help both optimization and generalization performance. For the GLRMSE model, the best results were achieved by setting the weights for the LRMSE and GRMSE to $W_L = 1$ and $W_G = 8$ (see Equation (4)). For the TBL model, the best results were obtained by setting the weights of the GRMSE and Jaccard losses to $W_R = 3$ and $W_J = 1$ (see Equation (6)). The DL model was implemented using the PyTorch (v.1.12) Python package and trained on an Nvidia RTX 3090 Graphical Processing Unit (GPU) with 24 GB of Video RAM (VRAM). With this setup, the models were trained in 15 to 20 min. Figure 8 depicts the training loss for all four cases along with the individual losses for the GLRMSE and TBL models. As can be seen, the loss functions converged to a plateau with little to no overfitting.
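The training configuration described above can be sketched as follows; the data loaders, the `FireAutoencoder` model, and the `glrmse` loss are assumed from the earlier sketches, and the scheduler arguments mirror the reported plateau-based decay settings.

```python
import torch

model = FireAutoencoder().cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.1, threshold=1e-5, patience=10)  # decay on validation-loss plateau

for epoch in range(150):
    model.train()
    for goes_patch, viirs_patch in train_loader:          # batches of 16, randomly flipped
        optimizer.zero_grad()
        loss = glrmse(model(goes_patch.cuda()), viirs_patch.cuda())
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(glrmse(model(g.cuda()), v.cuda()).item()
                       for g, v in val_loader) / len(val_loader)
    scheduler.step(val_loss)
```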

4.2. Testing

The performance of the four models was evaluated using the IOU and IPSNR metrics. Table 2 presents the results obtained on the entire test set. The TBL model is found to produce the best results in terms of IOU between the prediction and ground truth (VIIRS), while GLRMSE yielded the best results in terms of IPSNR. This suggests that the TBL model is influenced by both loss functions (i.e., Jaccard loss for the fire shape and GRMSE loss for BT values), resulting in a higher IOU. On the other hand, the GLRMSE model improved the prediction of the BT values in the fire area by adding the local (i.e., fire area of the ground truth) RMSE calculation, resulting in a higher IPSNR compared to GRMSE. However, as discussed earlier in Section 3.3, there is a possibility for evaluation bias towards the majority of the sample types. Therefore, the evaluation was carried out separately for each of the four categories, namely LCHI, LCLI, HCHI, and HCLI, in the following subsections. For each group, representative results are presented on three distinct samples to visually illustrate the model’s performance. The samples are chosen to present fires that encompass a wide range of temperatures and spatial scales. Average evaluation scores are also presented for each category for completeness.

4.2.1. LCHI: Low Coverage with High IOU

Figure 9 shows representative sample images in this category captured by GOES and the corresponding VIIRS images for three fires, along with the results obtained from the four models, which demonstrate the correspondence and deviation among the LCHI samples. For the Dixie Fire, due to the small size of distinguishable pixels found after applying Otsu’s thresholding, this sample is classified as having low coverage. Furthermore, the initial IOU between the fire regions in the GOES and VIIRS images is relatively high, indicating a significant overlap in the fire area captured by both satellites. This type of scenario is generally less challenging for the DL models to handle, as there is a clear and visible overlap between the fire areas in both images.
The JL model performs slightly worse than the GRMSE and GLRMSE models in accurately predicting the location of fires, with an IOU score of 0.527, compared to 0.531 and 0.528, respectively. The lower performance is counterintuitive since the JL model solely focuses on the shape of the fire and not on the actual BT values. However, the TBL model, using the features of both GRMSE and JL, improves the prediction accuracy compared to other models, achieving an IOU score of 0.531. Although this improvement comes at the cost of a lower IPSNR score, the TBL model provides a compromise between predicting BT values and accurately capturing the shape of wildfires. However, in terms of predicting BT values, the GLRMSE model outperforms all other models, with an IPSNR of 59.24, followed by the GRMSE model at 58.03 and the TBL model at 57.82. The visual results confirm that the BT values predicted by the GLRMSE model are higher than those of the other models and hence closer to the ground truth. This potentially demonstrates the importance of having a higher focus on the fire area than the background to achieve more accurate prediction results.
The August Complex Fire on 5 September 2020 at 10:00 UTC (Figure 9) diverges from both the Dixie Fire and the Holiday Farm Fire, with the TBL model having a low IOU score of 0.315. The GLRMSE model achieved a slightly higher IOU score of 0.319, while the GRMSE model matched the TBL model’s IOU of 0.315. This is most likely due to the fact that either the GOES fire area in this sample has a relatively low BT value or the VIIRS fire region spans across multiple subregions, unlike the previous sample. However, it is worth noting that the GLRMSE model achieved an IPSNR of 59.97, which is still higher than the IPSNRs of both the GRMSE (55.09) and TBL (55.60) models. Figure 9 supports this observation, showing that the GLRMSE model produced higher BT values than the other models, similar to the previous example. The Holiday Farm Fire on 9 September 2020 at 20:06 UTC (Figure 9) shares similarities with the Dixie Fire sample, with both the GOES and VIIRS fire areas spanning roughly a single region and with good initial overlap. The TBL model exhibits superior performance in terms of IOU, followed by the GRMSE, GLRMSE, and JL models. The GLRMSE model overestimated the fire region here, but it still outperformed the other models in terms of IPSNR, followed by the GRMSE and TBL models.
To summarize the results for this category, Table 3 presents evaluation results on the 421 LCHI test samples, revealing a pattern similar to that observed for the entire dataset. While there may be some exceptions, such as the August Complex Fire in Figure 9, the pattern observed for the Dixie and Holiday Farm Fires appears to be generally consistent with the bulk statistics of this category.
Overall, it can be concluded that, for most of the samples in this category, the models’ performance follows a similar pattern. That is, the TBL model performs the best among all the models in terms of IOU, indicating better agreement between the predicted and actual fire areas. This can be attributed to the fact that this model is trained based on both fire shape and BT, providing a compromise between fire perimeter and BT prediction performance. Meanwhile, the GLRMSE model has the highest IPSNR, indicating better performance in predicting BT values. This suggests that the GLRMSE model can predict BT values closer to those in VIIRS images, most likely due to the focus of the local term of its loss function on the fire area.

4.2.2. LCLI: Low Coverage with Low IOU

Figure 10 depicts a comparison of representative images from this category taken by GOES and the corresponding VIIRS images for three different fires. The results obtained from the four models illustrate the agreement and variation among the LCLI samples. Specifically, for the Slater Fire on 8 September 2020 at 22:06 UTC, all models produced an IOU score of zero due to the small size of the VIIRS fire region and the lack of significant overlap between the GOES and VIIRS images. This is expected as there is no underlying pattern in these types of samples that the DL model can learn from.
For the Santiam Fire on 8 September 2020 at 10:42 UTC, the VIIRS fire area is relatively small, similarly to the sample from the Slater Fire. The GOES image for the Santiam Fire contains visible fire areas in two regions, one overlapping with the VIIRS fire region and another region with a higher BT value producing an error of commission, which is most likely noise. Even with these errors, all four models performed reasonably well in predicting the fire location. The network is unable to remove the noise, which warrants the need for improving the network to address regions of noise or false positives. Among the models, the JL model demonstrates a significant improvement over the GRMSE model, with an IOU score of 0.08 compared to 0.04. The TBL model achieves the same IOU score of 0.04 as the GRMSE model. The GLRMSE model shows the best BT value prediction in terms of IPSNR among the considered models.
The Jack Fire on 25 July 2021 at 20:24 UTC (Figure 10) demonstrates a distinguishable fire region in the GOES image with relatively higher coverage than the previous samples of this category. This fire is also dissimilar in shape and orientation to the VIIRS fire region. Additionally, the VIIRS fire area is scattered into multiple subregions. As a result, accurately predicting the fire location is challenging for all four models. The GRMSE model produced an IOU score of 0.067, which is still better than the TBL, GLRMSE, and JL models’ scores of 0.042, 0.03, and 0.02, respectively. These results suggest that the TBL and GLRMSE models may not be effective in cases where the visually distinguishable GOES fire area is scattered and dispersed, resulting in these models enhancing these regions instead of removing them as noise. Nonetheless, similarly to the previous samples, the GLRMSE model has the best BT value prediction.
To summarize the performance of all the models in this category, Table 4 presents evaluation results on 136 LCLI test samples, revealing a pattern similar to what has been observed in the entire dataset. Specifically, the TBL model exhibits better IOU and the GLRMSE model demonstrates superior IPSNR performance compared to the other models. However, for some samples where there is no overlap between GOES and VIIRS fire areas or where VIIRS fire areas are scattered in subregions, the TBL model is not the best performing model. In some cases, the GRMSE model demonstrates better IOU results than the other models.

4.2.3. HCHI: High Coverage with High IOU

Figure 11 shows representative GOES images from this category and the corresponding VIIRS images for three fires, along with the results obtained from the four models, illustrating the correspondence and deviation among the HCHI samples. For the Antelope Fire, the GOES image is considered to have high coverage due to the large background area with high BT values, which is not removed by Otsu’s thresholding. However, the actual fire region in the GOES image still has good overlap with that in the VIIRS image. The predictions of all models removed most of the background and predicted the fire area with reasonable accuracy. Specifically, the TBL model achieved the highest IOU score of 0.407, while the GRMSE, GLRMSE, and JL models scored 0.340, 0.319, and 0.340, respectively. Although the visual results for the GRMSE and TBL models appear similar, the evaluation results suggest that the TBL model reduces background noise more effectively, resulting in a higher IOU score. However, as is evident from visual inspection as well as the IPSNR evaluation (60.19), the GLRMSE model predicts the BT values better than the GRMSE (57.01 IPSNR) and TBL (56.48 IPSNR) models.
For the Telegraph Fire on 15 June 2021 at 9:54 UTC (Figure 11), the TBL model scored 0.244, which is lower than the scores of 0.271 and 0.281 achieved by the GRMSE and JL models, respectively, suggesting that good performance of the GRMSE and JL models does not guarantee that TBL will perform as well. However, despite having the lowest IOU (0.240), the GLRMSE model again performs best in terms of IPSNR, similar to the previous samples.
For the Walker Fire on 7 September 2019 at 10:27 UTC (Figure 11), the GRMSE, JL, and GLRMSE models achieved IOU scores of 0.327, 0.318, and 0.317, respectively, while the TBL model scored 0.247. Hence, the TBL model does not have the highest performance for all samples. Nonetheless, the GLRMSE model still demonstrates the highest IPSNR score, highlighting the strength of this model in predicting BT values.
Table 5 presents evaluation results on 155 HCHI testing samples. The results demonstrate a consistent pattern of GLRMSE’s superior performance in terms of IPSNR and the TBL model’s superior performance in terms of IOU, similar to the previous categories. However, some samples can be found where the TBL model’s IOU is significantly lower than that of the other three models, contradicting the average evaluation for this category (Table 5). This suggests the need for a more precise categorization.
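Since Otsu’s thresholding is what separates predicted fire pixels from the background before the IOU is computed (cf. Figures 6 and 7), a minimal version of that step is sketched below; the use of scikit-image’s threshold_otsu is an assumption, as the study does not name its implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu

def fire_mask_from_prediction(pred_bt: np.ndarray) -> np.ndarray:
    """Binarize a predicted brightness-temperature map with Otsu's method.

    pred_bt: 2D array of predicted BT values (K).
    Returns a boolean mask in which True marks pixels kept as fire.
    """
    t = threshold_otsu(pred_bt)   # data-driven split between background and fire BT values
    return pred_bt > t
```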

4.2.4. HCLI: High Coverage with Low IOU

Figure 12 presents representative sample images from this category acquired by GOES, as well as the corresponding VIIRS images for three sample fires. Additionally, the results obtained from the four models reveal the agreement and disparity within the HCLI samples. For the Tamarack Fire on 26 July 2021 at 20:06 UTC, the high coverage and absence of overlap between the GOES and VIIRS fire regions resulted in all models producing a zero IOU score, i.e., incorrect predictions compared to the ground truth. This outcome is expected due to the small size of the fire region in the VIIRS image. However, it is worth noting that all models successfully removed most of the background information, except for the areas where the GOES image contained high BT values.
In the example of the Elbow Creek Fire on 19 July 2021 at 9:12 UTC (Figure 12), the initial overlap between the GOES and VIIRS fire regions is visually hard to detect, but the GRMSE, GLRMSE, and TBL models accurately predicted the overall location of the fire compared to the ground truth, with IOU scores of 0.126, 0.106, and 0.187, respectively. In this case, the JL model has the lowest IOU score, at zero. Despite this, the GLRMSE model demonstrated the highest IPSNR score, producing the best-matching predicted BT values.
The River Complex Fire on 17 August 2021 at 19:54 UTC (Figure 12) demonstrates that, even though the initial overlap between the GOES and VIIRS fire regions is visually hard to detect, the GRMSE, GLRMSE, and TBL models, each with an IOU of 0.03, predicted the location of the fire reasonably well and removed most of the background information, except for the areas where the GOES image contained high BT values.
Table 6 presents the evaluation results on 436 HCLI test samples. The results demonstrate a consistent pattern: the GLRMSE model performs best in terms of IPSNR and the TBL model performs best in terms of IOU, closely following the pattern observed in the other categories.
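For clarity, the grouping used throughout Section 4.2 can be expressed directly from the Table 1 thresholds; in the sketch below, coverage is taken as the fraction of thresholded GOES pixels flagged as fire, which is an assumed definition rather than one stated by the authors.

```python
import numpy as np

def categorize_sample(goes_mask: np.ndarray, viirs_mask: np.ndarray) -> str:
    """Assign a test sample to LCHI, LCLI, HCHI, or HCLI using the Table 1 thresholds."""
    coverage = goes_mask.mean()                           # fraction of GOES pixels flagged as fire
    inter = np.logical_and(goes_mask, viirs_mask).sum()
    union = np.logical_or(goes_mask, viirs_mask).sum()
    iou = inter / union if union > 0 else 0.0             # initial GOES-vs-VIIRS IOU

    cov = "LC" if coverage < 0.20 else "HC"               # low/high coverage split at 20%
    agr = "HI" if iou > 0.05 else "LI"                    # high/low IOU split at 5%
    return cov + agr
```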

4.3. Blind Testing

To further evaluate the performance of the proposed approach, the best-performing model (i.e., the TBL model, based on the results in Section 4.1) was used for blind testing on two wildfire events, namely the 2020 Bear Fire and the 2021 Caldor Fire. The term “blind testing” refers to the fact that the DL model was never exposed to any data from these sites during training. To conduct the blind testing, GOES images were downloaded at the operational temporal frequency (i.e., 5 min) for the entire duration of the testing. The preprocessing pipeline outlined in Section 2.2.1 was applied to these images, which were then fed to the trained DL model as input. The outputs of the DL model, which are enhanced, VIIRS-like images, were postprocessed (combined to show the entire ROI) for visualization. While various data sources, such as IR imagery, are available to validate the model performance, we utilize high-resolution (i.e., 250 m spatial and 5 min temporal resolution) fire perimeters estimated from NEXRAD reflectivity measurements for blind testing validation due to their high resolution compared to other sources [60].
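A minimal sketch of this blind-testing loop is shown below. The Keras-style model interface, the file names, and the preprocess callable standing in for the Section 2.2.1 pipeline are all assumptions for illustration, not the authors’ operational code.

```python
import numpy as np
from tensorflow import keras  # framework assumed; the study does not name one

def run_blind_test(model, preprocess, goes_paths):
    """Run the trained autoencoder over a sequence of GOES scenes.

    model: trained GOES-to-VIIRS network (assumed here to be a Keras model).
    preprocess: callable standing in for the Section 2.2.1 pipeline
        (reproject, crop to the ROI, convert radiance to BT); returns a 2D array.
    goes_paths: GOES-17 files at the operational 5 min cadence.
    """
    frames = []
    for path in sorted(goes_paths):
        x = preprocess(path)                              # (H, W) brightness-temperature tile
        y = model.predict(x[None, ..., None], verbose=0)  # enhanced, VIIRS-like prediction
        frames.append(y[0, ..., 0])
    return np.stack(frames)                               # (T, H, W) stack for visualization

# Hypothetical usage (file pattern and model file name are placeholders):
# model = keras.models.load_model("tbl_autoencoder.h5")
# maps = run_blind_test(model, preprocess_goes, sorted(glob.glob("goes17_bear_fire/*.nc")))
```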
Figure 13 shows four instances during the 2020 Bear Fire from 8 to 9 September 2020, with the blue boundaries representing the radar-estimated fire perimeter and color shading representing the output of the DL model. As of 8 September 2020 19:35 UTC, the model’s results are in good agreement with the radar data but do not match completely. However, by 8 September 2020 22:25 UTC, the DL model’s predictions fall largely within the radar perimeters. As the fire area expands, by 9 September 2020 02:30 UTC, the prediction is still confined by the radar perimeters with reasonable accuracy. Even as the fire begins to fade and only remains active at the boundaries, by 9 September 2020 03:55 UTC, the prediction remains largely inside the radar perimeters. It should be noted that the DL output only shows the active fire regions, and fitting a fire perimeter to the DL output is out of the scope of this study and will be addressed in future research.
Similarly, Figure 14 depicts four instances during the 2021 Caldor Fire from 16 to 17 August 2021. Reasonable agreement between the predicted fire pixels and the radar perimeters can be observed in the results; however, the agreement is not as close as in the case of the Bear Fire. At 17 August 2021 17:05 UTC, the DL prediction aligns with the general location of the radar perimeters, with no visible fire in the bottom half and overprediction in the top half. This can be related to the fact that the network predictions are limited to active fire regions, which are the only part of the fires captured by the satellite. Nonetheless, the overlap between the advancing fire head in the radar perimeters and the predicted active fire regions in this case is less than that in the Bear Fire case. This can be further improved for operational applications through techniques such as online learning, which is beyond the scope of this study. Additionally, from 17 August 2021 20:30 UTC to 17 August 2021 23:55 UTC, the DL prediction is consistently confined by the radar perimeters, with some fire area outside, but the overall shape is similar to the radar perimeter. These findings suggest that the DL model’s predictions closely align with the radar data, indicating that it has the potential to be a valuable tool for monitoring wildfires in near real time. It is worth mentioning that both the GRMSE and GLRMSE models yielded similar visual results in these cases.

4.4. Opportunities and Limitations

With any network-based learning model, bias is introduced by the quantity and quality of the training data. When assessing the various models using the entire dataset, two key observations were made. Firstly, the two-branch loss (TBL) model demonstrated the best overall performance in accurately predicting the location of fire pixels, as confirmed by the IOU (intersection over union) evaluation. Secondly, the global plus local root mean square error (GLRMSE) model exhibited a notable improvement in predicting brightness temperature (BT) values for fire pixels among all models, as validated by the intersection peak signal-to-noise ratio (IPSNR), which considers only true-positive predictions. This can be attributed to the custom regression loss function designed to focus more on the fire area than on the background. Furthermore, the TBL model achieved the second-highest performance in BT value prediction, highlighting that the TBL architecture can provide a compromise between fire shape and BT predictions. However, it is crucial to consider that these results might be influenced by the abundance of certain sample types in the dataset. As a result, evaluations were conducted based on different categories of fire coverage and initial IOU. Upon analyzing multiple samples from each category, it became evident that while the TBL model performed well, it did not consistently produce the best results. The ranking of model performance varied depending on the specific samples evaluated, underscoring the importance of adopting more effective evaluation metrics and sample categorization. Overall, the GLRMSE model consistently demonstrated superior performance in predicting BT values for fire pixels across all cases.
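For reference, the two evaluation quantities discussed above can be computed as sketched below; the restriction of IPSNR to true-positive fire pixels follows the description in the text, while the peak value used in the PSNR formula is an assumed constant.

```python
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over union of the predicted and VIIRS fire masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(inter) / union if union else 0.0

def ipsnr(pred_bt: np.ndarray, true_bt: np.ndarray,
          pred_mask: np.ndarray, true_mask: np.ndarray,
          peak: float = 500.0) -> float:
    """PSNR of BT values restricted to true-positive fire pixels (peak value assumed)."""
    tp = np.logical_and(pred_mask, true_mask)
    if not tp.any():
        return float("nan")
    mse = np.mean((pred_bt[tp] - true_bt[tp]) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```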
Blind testing of the TBL model showed that its predictions aligned well with the radar-based fire perimeters. However, anecdotally, there appears to be a spatial disconnect between the predicted temperatures and where the active flaming front should be occurring. These effects could be a result of the limited training set used to inform the model, sampling incongruities in the GOES-17 data due to plume effects as a function of plume height and direction, or the representation of heat signatures from post-front combustion. These are a few factors that need to be accounted for in future applications of this approach. From a fire management and safety perspective, continuous 5 min-updated fire progressions hold significant benefit for personnel, resource, and risk planning. In general, large fire incidents rely on fire simulations such as FSPRO [61] and daily National Infrared Operations to build a daily operations picture. Additional filtering and tuning of the proposed model outputs can provide high-resolution fire perimeters that are not only a rich set of observational data but can also potentially be assimilated into fire simulation platforms to create more accurate fire forecasting capabilities, aiding emergency response management. Expanding the training data beyond this proof-of-concept approach should improve model predictions, though more work will be needed to quantify the predicted temperatures before they can be integrated into machine learning approaches for coupled atmosphere–fire modeling, which would help clarify the role of fire energetics in plume dynamics and resulting post-fire effects. It is worth mentioning that, for such an application, the predicted temperatures must be extensively validated. The model devised in this study is designed to predict wildfires across diverse sites in the western United States. However, targeted refinement for specific regions or the incorporation of physics-informed fire dynamics are plausible research paths for future investigations. For instance, leveraging the provided pretrained model and employing advanced transfer learning techniques can contribute to these research paths [62].
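As an illustration of the transfer-learning path mentioned above, a generic fine-tuning pattern is sketched below; the file name, the choice of Keras, and the frozen-encoder heuristic are assumptions rather than a published recipe.

```python
from tensorflow import keras

# Load the pretrained GOES-to-VIIRS autoencoder (hypothetical file name) and
# freeze roughly the encoder half so only the decoder adapts to the new region.
model = keras.models.load_model("pretrained_goes_to_viirs.h5", compile=False)
for layer in model.layers[: len(model.layers) // 2]:
    layer.trainable = False

# Re-compile with a small learning rate and fine-tune on region-specific GOES/VIIRS pairs.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss="mse")
# model.fit(region_goes, region_viirs, epochs=20, batch_size=8, validation_split=0.1)
```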

5. Conclusions

The primary objective of this study was to design and develop a deep learning (DL) framework pursuing two primary objectives. The first focus was directed towards enhancing the spatial resolution of Geostationary Operational Environmental Satellite (GOES) images, seeking to elevate the quality and clarity of the captured satellite data. The second critical objective was to predict brightness temperature (BT) values within the GOES images, aligning them with the ground truth data obtained from VIIRS (Visible Infrared Imaging Radiometer Suite). To these ends, the autoencoder model, an approach widely recognized for its ability to learn and represent intricate patterns within image data, was utilized with different loss functions and architectures. The model was trained utilizing GOES data as input, with the aim of capturing the underlying spatial information, and VIIRS data as the ground truth, providing a reference for the model’s learning process. This training process allowed the DL model to extract valuable features and characteristics from the input GOES images, learning how to enhance their spatial resolution effectively.
The initial phase of this study encompassed data pre-processing and dataset creation, in which GOES and VIIRS data were downloaded and made consistent in terms of location, time, and projection, and VIIRS vector data were converted into raster format. Subsequently, a DL model was designed and trained with the two main goals of enhancing the spatial resolution of GOES images and predicting BT values of active fire regions, closely aligned with the VIIRS ground truth images. To comprehensively assess the model’s performance, an ablation study was conducted, involving four distinct models with different loss functions and autoencoder architecture variations. This analysis provided valuable insights into the impact of different components on the model’s effectiveness. The study further addressed the challenges of evaluating model performance and proposed an evaluation metric that aligns with the physical interpretation of the results, providing a more meaningful assessment. Additionally, it was suggested that assessing different scenarios based on coverage and initial IOU is more meaningful than reporting results on the whole dataset without detailed analysis.
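To make the vector-to-raster step concrete, a simplified version is sketched below: each VIIRS hotspot (latitude, longitude, BT) is projected onto a regular grid over the ROI and written into the nearest 375 m cell (cf. Figure 2). The use of pyproj, the UTM output projection, and the keep-the-hottest-detection rule are assumptions for illustration, not the study’s exact gridding code.

```python
import numpy as np
from pyproj import Transformer

def rasterize_viirs(hotspots, roi_bounds, cell_size=375.0, epsg_out=32610):
    """Burn VIIRS hotspots (lat, lon, BT in K) onto a regular Northing/Easting grid.

    hotspots: iterable of (lat, lon, bt) tuples.
    roi_bounds: (xmin, ymin, xmax, ymax) of the ROI in the output projection (m).
    epsg_out: projected CRS of the grid (a UTM zone is assumed for illustration).
    """
    xmin, ymin, xmax, ymax = roi_bounds
    ncols = int(np.ceil((xmax - xmin) / cell_size))
    nrows = int(np.ceil((ymax - ymin) / cell_size))
    grid = np.zeros((nrows, ncols), dtype=np.float32)

    to_grid = Transformer.from_crs("EPSG:4326", f"EPSG:{epsg_out}", always_xy=True)
    for lat, lon, bt in hotspots:
        x, y = to_grid.transform(lon, lat)
        col = int((x - xmin) // cell_size)
        row = int((ymax - y) // cell_size)              # row 0 at the top of the ROI
        if 0 <= row < nrows and 0 <= col < ncols:
            grid[row, col] = max(grid[row, col], bt)    # keep the hottest detection per cell
    return grid
```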
The findings of this study establish a strong and promising foundation for the creation of high-resolution GOES images. Moving forward, there are several potential avenues for enhancing the accuracy and applicability of DL models in this domain. Firstly, one aspect to focus on is improving the prediction of actual brightness temperature (BT) values. While the current models have shown high performance, further refining the algorithms can lead to even more precise and reliable BT predictions. This will be instrumental in providing more accurate information about the thermal characteristics of different fire regions. Secondly, expanding the training dataset by incorporating other data sources, such as NOAA-20 satellite VIIRS data as well as NEXRAD-estimated fire perimeters, can further improve the model’s performance. Integrating data from multiple sources can provide a broader and more diverse set of inputs, enabling the model to capture a more comprehensive range of features and patterns. This, in turn, will improve the model’s ability to adapt to different conditions and geographical regions. Additionally, fusing other data sources can enable the creation of a network capable of forecasting fire progression ahead of time, acting as a predictive simulation platform. This can be a highly beneficial and accurate tool for fire management. Thirdly, incorporating a time component into the DL models can open opportunities for forecasting wildfire pattern changes. By considering temporal dynamics, the models can capture how fires evolve and spread over time. Additionally, enriching the DL models with land use, vegetation properties, and terrain data can further enhance wildfire pattern predictions. These additional data layers will enable the models to better understand the complex interactions between fire and environmental factors. Incorporating land use information can help identify vulnerable regions, while vegetation properties and terrain data can provide insights into how fires might spread across different landscapes. Overall, by pursuing these avenues for improvement, DL models can become even more powerful tools for generating high-resolution GOES images and advancing our understanding of wildfires and their impact on the environment. These advancements hold the potential to improve how we monitor and respond to wildfires, ultimately contributing to better fire management practices and environmental protection.

Author Contributions

Conceptualization, N.P.L. and H.E.; methodology, M.B., K.S., G.B. and H.E.; software, M.B. and K.S.; validation, M.B., K.S. and N.P.L.; formal analysis, M.B., K.S., H.E. and G.B.; writing—original draft preparation, M.B. and K.S.; writing—review and editing, M.B., K.S., H.E., G.B., N.P.L. and E.R.; visualization, M.B.; supervision, H.E. and G.B.; project administration, H.E.; funding acquisition, H.E. and N.P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported through the National Science Foundation’s Leading Engineering for America’s Prosperity, Health, and Infrastructure (LEAP-HI) program by grant number CMMI 1953333. The opinions and perspectives expressed in this study are those of the authors and do not necessarily reflect the views of the sponsor.

Data Availability Statement

Data and materials supporting the results or analyses presented in this work are available upon reasonable request from the corresponding author.

Acknowledgments

The authors would like to acknowledge technical support from Sharif Amit Kamran at the University of Nevada, Reno in model development.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. List of Wildfire Events Used in This Study

Table A1. Wildfire Events Dataset: Names, Central Coordinates, and Start/End Dates.
| Site | Central Longitude | Central Latitude | Fire Start Date | Fire End Date |
|---|---|---|---|---|
| Kincade | −122.780 | 38.792 | 23 October 2019 | 6 November 2019 |
| Walker | −120.669 | 40.053 | 4 September 2019 | 25 September 2019 |
| Tucker | −121.243 | 41.726 | 28 July 2019 | 15 August 2019 |
| Taboose | −118.345 | 37.034 | 4 September 2019 | 21 November 2019 |
| Maria | −118.997 | 34.302 | 31 October 2019 | 5 November 2019 |
| Redbank | −122.64 | 40.12 | 5 September 2019 | 13 September 2019 |
| Saddle ridge | −118.481 | 34.329 | 10 October 2019 | 31 October 2019 |
| Lone | −121.576 | 39.434 | 5 September 2019 | 13 September 2019 |
| Richter creek fire | −119.66 | 49.04 | 13 May 2019 | 20 May 2019 |
| LNU lighting complex | −122.237 | 38.593 | 18 August 2020 | 30 September 2020 |
| SCU lighting complex | −121.438 | 37.352 | 14 August 2020 | 1 October 2020 |
| CZU lighting complex | −122.280 | 37.097 | 16 August 2020 | 22 September 2020 |
| August complex | −122.97 | 39.868 | 17 August 2020 | 23 September 2020 |
| North complex fire | −120.12 | 39.69 | 14 August 2020 | 3 December 2020 |
| Glass fire | −122.496 | 38.565 | 27 September 2020 | 30 October 2020 |
| Beachie wildfire | −122.138 | 44.745 | 2 September 2020 | 14 September 2020 |
| Beachie wildfire 2 | −122.239 | 45.102 | 2 September 2020 | 14 September 2020 |
| Holiday farm wildfire | −122.49 | 44.15 | 7 September 2020 | 14 September 2020 |
| Cold spring fire | −119.572 | 48.850 | 6 September 2020 | 14 September 2020 |
| Creek fire | −119.3 | 37.2 | 5 September 2020 | 10 September 2020 |
| Blue ridge fire | −117.68 | 33.88 | 26 October 2020 | 30 October 2020 |
| Silverado fire | −117.66 | 33.74 | 26 October 2020 | 27 October 2020 |
| Bond fire | −117.67 | 33.74 | 2 December 2020 | 7 December 2020 |
| Washinton fire | −119.556 | 48.825 | 18 August 2020 | 30 August 2020 |
| Oregon fire | −121.645 | 44.738 | 17 August 2020 | 30 August 2020 |
| Christie mountain | −119.54 | 49.364 | 18 August 2020 | 30 September 2020 |
| Bush fire | −111.564 | 33.629 | 13 June 2020 | 6 July 2020 |
| Magnum fire | −112.34 | 36.61 | 8 June 2020 | 6 July 2020 |
| Bighorn fire | −111.03 | 32.53 | 6 June 2020 | 23 July 2020 |
| Santiam fire | −122.19 | 44.82 | 31 August 2020 | 30 September 2020 |
| Holiday farm fire | −122.45 | 44.15 | 7 September 2020 | 30 September 2020 |
| Slater fire | −123.38 | 41.77 | 7 September 2020 | 30 September 2020 |
| Pinnacle fire | −110.201 | 32.865 | 10 June 2021 | 16 July 2021 |
| Backbone fire | −111.677 | 34.344 | 16 June 2021 | 19 July 2021 |
| Rafael fire | −112.162 | 34.942 | 18 June 2021 | 15 July 2021 |
| Telegraph fire | −111.092 | 33.209 | 4 June 2021 | 3 July 2021 |
| Dixie | −121 | 40 | 15 June 2021 | 15 August 2021 |
| Monument | −123.33 | 40.752 | 30 July 2021 | 25 October 2021 |
| River complex | −123.018 | 41.143 | 30 July 2021 | 25 October 2021 |
| Antelope | −121.919 | 41.521 | 1 August 2021 | 15 October 2021 |
| McFarland | −123.034 | 40.35 | 29 July 2021 | 16 September 2021 |
| Beckwourth complex | −118.811 | 36.567 | 3 July 2021 | 22 September 2021 |
| Windy | −118.631 | 36.047 | 9 September 2021 | 15 November 2021 |
| Mccash | −123.404 | 41.564 | 31 July 2021 | 27 October 2021 |
| Knp complex | −118.811 | 36.567 | 10 September 2021 | 16 December 2021 |
| Tamarack | −119.857 | 38.628 | 4 July 2021 | 8 October 2021 |
| French | −118.55 | 35.687 | 18 August 2021 | 19 October 2021 |
| Lava | −122.329 | 41.459 | 25 June 2021 | 3 September 2021 |
| Alisal | −120.131 | 34.517 | 11 October 2021 | 16 November 2021 |
| Salt | −122.336 | 40.849 | 30 June 2021 | 19 July 2021 |
| Tennant | −122.039 | 41.665 | 28 June 2021 | 12 July 2021 |
| Bootleg | −121.421 | 42.616 | 6 July 2021 | 14 August 2021 |
| Cougar peak | −120.613 | 42.277 | 7 September 2021 | 21 October 2021 |
| Devil’s Knob Complex | −123.268 | 41.915 | 3 August 2021 | 19 October 2021 |
| Roughpatch complex | −122.676 | 43.511 | 29 July 2021 | 29 November 2021 |
| Middlefork complex | −122.409 | 43.869 | 29 July 2021 | 13 December 2021 |
| Bull complex | −122.009 | 44.879 | 2 August 2021 | 19 November 2021 |
| Jack | −122.686 | 43.322 | 5 July 2021 | 29 November 2021 |
| Elbow Creek | −117.619 | 45.867 | 15 July 2021 | 24 September 2021 |
| Black Butte | −118.326 | 44.093 | 3 August 2021 | 27 September 2021 |
| Fox complex | −120.599 | 42.211 | 3 August 2021 | 1 September 2021 |
| Joseph canyon | −117.081 | 45.989 | 4 June 2021 | 15 July 2021 |
| Wrentham market | −121.006 | 45.49 | 29 June 2021 | 3 July 2021 |
| S-503 | −121.476 | 45.087 | 18 June 2021 | 18 August 2021 |
| Grandview | −121.4 | 44.466 | 11 July 2021 | 25 July 2021 |
| Lick Creek fire | −117.416 | 46.262 | 7 July 2021 | 14 August 2021 |
| Richter mountain fire | −119.7 | 49.06 | 26 July 2019 | 30 July 2019 |

References

  1. NIFC Wildfires and Acres|National Interagency Fire Center. Available online: https://www.nifc.gov/fire-information/statistics/wildfires (accessed on 21 January 2024).
  2. Taylor, A.H.; Harris, L.B.; Skinner, C.N. Severity Patterns of the 2021 Dixie Fire Exemplify the Need to Increase Low-Severity Fire Treatments in California’s Forests. Environ. Res. Lett. 2022, 17, 071002. [Google Scholar] [CrossRef]
  3. Liao, Y.; Kousky, C. The Fiscal Impacts of Wildfires on California Municipalities. J. Assoc. Environ. Resour. Econ. 2022, 9, 455–493. [Google Scholar] [CrossRef]
  4. Westerling, A.L.; Hidalgo, H.G.; Cayan, D.R.; Swetnam, T.W. Warming and Earlier Spring Increase Western U.S. Forest Wildfire Activity. Science 2006, 313, 940–943. [Google Scholar] [CrossRef]
  5. Szpakowski, D.M.; Jensen, J.L.R. A Review of the Applications of Remote Sensing in Fire Ecology. Remote Sens. 2019, 11, 2638. [Google Scholar] [CrossRef]
  6. Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A Review on Early Forest Fire Detection Systems Using Optical Remote Sensing. Sensors 2020, 20, 6442. [Google Scholar] [CrossRef] [PubMed]
  7. Radke, L.F.; Clark, T.L.; Coen, J.L.; Walther, C.A.; Lockwood, R.N.; Riggan, P.J.; Brass, J.A.; Higgins, R.G. The Wildfire Experiment (WIFE): Observations with Airborne Remote Sensors. Can. J. Remote Sens. 2000, 26, 406–417. [Google Scholar] [CrossRef]
  8. Loew, A.; Bell, W.; Brocca, L.; Bulgin, C.E.; Burdanowitz, J.; Calbet, X.; Donner, R.V.; Ghent, D.; Gruber, A.; Kaminski, T. Validation Practices for Satellite-Based Earth Observation Data across Communities. Rev. Geophys. 2017, 55, 779–817. [Google Scholar] [CrossRef]
  9. Kumar, S.S.; Roy, D.P. Global Operational Land Imager Landsat-8 Reflectance-Based Active Fire Detection Algorithm. Int. J. Digit. Earth 2018, 11, 154–178. [Google Scholar] [CrossRef]
  10. Li, J.; Roy, D.P. A Global Analysis of Sentinel-2a, Sentinel-2b and Landsat-8 Data Revisit Intervals and Implications for Terrestrial Monitoring. Remote Sens. 2017, 9, 902. [Google Scholar] [CrossRef]
  11. Lentile, L.B.; Holden, Z.A.; Smith, A.M.S.; Falkowski, M.J.; Hudak, A.T.; Morgan, P.; Lewis, S.A.; Gessler, P.E.; Benson, N.C. Remote Sensing Techniques to Assess Active Fire Characteristics and Post-Fire Effects. Int. J. Wildl. Fire 2006, 15, 319–345. [Google Scholar] [CrossRef]
  12. Hu, X.; Ban, Y.; Nascetti, A. Sentinel-2 MSI Data for Active Fire Detection in Major Fire-Prone Biomes: A Multi-Criteria Approach. Int. J. Appl. Earth Obs. Geoinf. 2021, 101, 102347. [Google Scholar] [CrossRef]
  13. Schroeder, W.; Oliva, P.; Giglio, L.; Csiszar, I.A. The New VIIRS 375 m Active Fire Detection Data Product: Algorithm Description and Initial Assessment. Remote Sens. Environ. 2014, 143, 85–96. [Google Scholar] [CrossRef]
  14. Giglio, L.; Schroeder, W.; Justice, C.O. The Collection 6 MODIS Active Fire Detection Algorithm and Fire Products. Remote Sens. Environ. 2016, 178, 31–41. [Google Scholar] [CrossRef]
  15. Xu, W.; Wooster, M.J.; He, J.; Zhang, T. First Study of Sentinel-3 SLSTR Active Fire Detection and FRP Retrieval: Night-Time Algorithm Enhancements and Global Intercomparison to MODIS and VIIRS AF Products. Remote Sens. Environ. 2020, 248, 111947. [Google Scholar] [CrossRef]
  16. Oliva, P.; Schroeder, W. Assessment of VIIRS 375 m Active Fire Detection Product for Direct Burned Area Mapping. Remote Sens. Environ. 2015, 160, 144–155. [Google Scholar] [CrossRef]
  17. Schroeder, W.; Prins, E.; Giglio, L.; Csiszar, I.; Schmidt, C.; Morisette, J.; Morton, D. Validation of GOES and MODIS Active Fire Detection Products Using ASTER and ETM+ Data. Remote Sens. Environ. 2008, 112, 2711–2726. [Google Scholar] [CrossRef]
  18. Koltunov, A.; Ustin, S.L.; Prins, E.M. On Timeliness and Accuracy of Wildfire Detection by the GOES WF-ABBA Algorithm over California during the 2006 Fire Season. Remote Sens. Environ. 2012, 127, 194–209. [Google Scholar] [CrossRef]
  19. Li, F.; Zhang, X.; Kondragunta, S.; Schmidt, C.C.; Holmes, C.D. A Preliminary Evaluation of GOES-16 Active Fire Product Using Landsat-8 and VIIRS Active Fire Data, and Ground-Based Prescribed Fire Records. Remote Sens. Environ. 2020, 237, 111600. [Google Scholar] [CrossRef]
  20. Lindley, T.T.; Zwink, A.B.; Gravelle, C.M.; Schmidt, C.C.; Palmer, C.K.; Rowe, S.T.; Heffernan, R.; Driscoll, N.; Kent, G.M. Ground-Based Corroboration of Goes-17 Fire Detection Capabilities during Ignition of the Kincade Fire. J. Oper. Meteorol. 2020, 8, 105–110. [Google Scholar] [CrossRef]
  21. Rashid, M.; Sulaiman, N.; Mustafa, M.; Khatun, S.; Bari, B.S. The Classification of EEG Signal Using Different Machine Learning Techniques for BCI Application. Commun. Comput. Inf. Sci. 2019, 1015, 207–221. [Google Scholar] [CrossRef]
  22. Ren, S.; Zhou, Y.; He, L. Multi-Object Tracking with Pre-Classified Detection. In Proceedings of the Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2019; Volume 751, pp. 503–513. [Google Scholar]
  23. Nezhad, M.Z.; Sadati, N.; Yang, K.; Zhu, D. A Deep Active Survival Analysis Approach for Precision Treatment Recommendations: Application of Prostate Cancer. Expert Syst. Appl. 2019, 115, 16–26. [Google Scholar] [CrossRef]
  24. Xu, Y.; Wu, L.; Xie, Z.; Chen, Z. Building Extraction in Very High Resolution Remote Sensing Imagery Using Deep Learning and Guided Filters. Remote Sens. 2018, 10, 144. [Google Scholar] [CrossRef]
  25. Albert, A.; Kaur, J.; Gonzalez, M.C. Using Convolutional Networks and Satellite Imagery to Identify Patterns in Urban Environments at a Large Scale. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13 August 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1357–1366. [Google Scholar]
  26. Oh, T.; Chung, M.J.; Myung, H. Accurate Localization in Urban Environments Using Fault Detection of GPS and Multi-Sensor Fusion. In Proceedings of the Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2017; Volume 447, pp. 43–53. [Google Scholar]
  27. Toan, N.T.; Thanh Cong, P.; Viet Hung, N.Q.; Jo, J. A Deep Learning Approach for Early Wildfire Detection from Hyperspectral Satellite Images. In Proceedings of the 2019 7th International Conference on Robot Intelligence Technology and Applications, RiTA 2019, Daejeon, Republic of Korea, 1 November 2019; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA; pp. 38–45. [Google Scholar]
  28. Phan, T.C.; Nguyen, T.T.; Hoang, T.D.; Nguyen, Q.V.H.; Jo, J. Multi-Scale Bushfire Detection from Multi-Modal Streams of Remote Sensing Data. IEEE Access 2020, 8, 228496–228513. [Google Scholar] [CrossRef]
  29. Zhao, Y.; Ban, Y. GOES-R Time Series for Early Detection of Wildfires with Deep GRU-Network. Remote Sens. 2022, 14, 4347. [Google Scholar] [CrossRef]
  30. McCarthy, N.F.; Tohidi, A.; Aziz, Y.; Dennie, M.; Valero, M.M.; Hu, N. A Deep Learning Approach to Downscale Geostationary Satellite Imagery for Decision Support in High Impact Wildfires. Forests 2021, 12, 294. [Google Scholar] [CrossRef]
  31. Ghali, R.; Akhloufi, M.A. Deep Learning Approaches for Wildland Fires Using Satellite Remote Sensing Data: Detection, Mapping, and Prediction. Fire 2023, 6, 192. [Google Scholar] [CrossRef]
  32. Chen, Z.; Pawar, K.; Ekanayake, M.; Pain, C.; Zhong, S.; Egan, G.F. Deep Learning for Image Enhancement and Correction in Magnetic Resonance Imaging-State-of-the-Art and Challenges. J. Digit. Imaging 2023, 36, 204–230. [Google Scholar] [CrossRef] [PubMed]
  33. Valenti, J. Goes R Series Product Definition and Users’ Guide; NOAA: Silver Spring, MD, USA, 2018; Volume 4. [Google Scholar]
  34. Losos, D. Beginner’s Guide to GOES-R Series Data; NOAA: Silver Spring, MD, USA, 2021. [Google Scholar]
  35. Schroeder, W.; Giglio, L. NASA VIIRS Land Science Investigator Processing System (SIPS) Visible Infrared Imaging Radiometer Suite (VIIRS) 375 m & 750 m Active Fire Products: Product User’s Guide; NOAA: Silver Spring, MD, USA, 2018; Volumes 1–4. [Google Scholar]
  36. Cao, C.; Blonski, S.; Wang, W.; Uprety, S.; Shao, X.; Choi, J.; Lynch, E.; Kalluri, S. NOAA-20 VIIRS on-Orbit Performance, Data Quality, and Operational Cal/Val Support. In SPIE 10781, Earth Observing Missions and Sensors: Development, Implementation, and Characterization; SPIE: Bellingham, WA, USA, 2018; p. 21. ISBN 9781510621374. [Google Scholar]
  37. Visible Infrared Imaging Radiometer Suite (VIIRS)|NESDIS. Available online: https://www.nesdis.noaa.gov/current-satellite-missions/currently-flying/joint-polar-satellite-system/jpss-mission-and-2 (accessed on 18 October 2022).
  38. Visible Infrared Imaging Radiometer Suite (VIIRS)—LAADS DAAC. Available online: https://ladsweb.modaps.eosdis.nasa.gov/missions-and-measurements/viirs/ (accessed on 9 February 2023).
  39. VNP14IMGTDL_NRT|Earthdata. Available online: https://www.earthdata.nasa.gov/learn/find-data/near-real-time/firms/vnp14imgtdlnrt (accessed on 4 January 2023).
  40. Wythoff, B.J. Backpropagation Neural Networks. Chemom. Intell. Lab. Syst. 1993, 18, 115–155. [Google Scholar] [CrossRef]
  41. NASA | LANCE | FIRMS. Available online: https://firms.modaps.eosdis.nasa.gov/country/ (accessed on 7 October 2022).
  42. Murphy, K.J.; Davies, D.K.; Michael, K.; Justice, C.O.; Schmaltz, J.E.; Boller, R.; McLemore, B.D.; Ding, F.; Wong, M.M. Lance, Nasa’s Land, Atmosphere near Real-Time Capability for Eos. In Time-Sensitive Remote Sensing; Springer: New York, NY, USA, 2015; pp. 113–127. [Google Scholar]
  43. Congalton, R.G. Exploring and Evaluating the Consequences of Vector-to-Raster and Raster-to-Vector Conversion. Photogramm. Eng. Remote Sens. 1997, 63, 425–434. [Google Scholar]
  44. Incidents|CAL FIRE. Available online: https://www.fire.ca.gov/incidents (accessed on 23 January 2024).
  45. 2020 Western United States Wildfires—Homeland Security Digital Library. Available online: https://www.hsdl.org/c/tl/2020-wildfires/ (accessed on 1 March 2023).
  46. Hoese, D. SatPy: A Python Library for Weather Satellite Processing. Ninth Symp. Adv. Model. Anal. Using Python 2019. [Google Scholar] [CrossRef]
  47. Berk, A. Analytically Derived Conversion of Spectral Band Radiance to Brightness Temperature. J. Quant. Spectrosc. Radiat. Transf. 2008, 109, 1266–1276. [Google Scholar] [CrossRef]
  48. Xing, Y.; Song, Q.; Cheng, G. Benefit of Interpolation in Nearest Neighbor Algorithms. SIAM J. Math. Data Sci. 2022, 4, 935–956. [Google Scholar] [CrossRef]
  49. GDAL/OGR contributors GDAL/OGR Geospatial Data Abstraction Software Library. Open Source Geospat. Found. 2024. [CrossRef]
  50. Park, S.; Gach, H.M.; Kim, S.; Lee, S.J.; Motai, Y. Autoencoder-Inspired Convolutional Network-Based Super-Resolution Method in MRI. IEEE J. Transl. Eng. Health Med. 2021, 9, 1–13. [Google Scholar] [CrossRef]
  51. Wang, W.; Huang, Y.; Wang, Y.; Wang, L. Generalized Autoencoder: A Neural Network Framework for Dimensionality Reduction. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2016; pp. 496–503. [Google Scholar] [CrossRef]
  52. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a Convolutional Neural Network. In Proceedings of the International Conference on Engineering and Technology, ICET 2017, Antalya, Turkey, 7 March 2017; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  53. Shi, W.; Caballero, J.; Theis, L.; Huszar, F.; Aitken, A.; Ledig, C.; Wang, Z. Is the Deconvolution Layer the Same as a Convolutional Layer? arXiv 2016, arXiv:1609.07009. [Google Scholar] [CrossRef]
  54. Chiang, H.T.; Hsieh, Y.Y.; Fu, S.W.; Hung, K.H.; Tsao, Y.; Chien, S.Y. Noise Reduction in ECG Signals Using Fully Convolutional Denoising Autoencoders. IEEE Access 2019, 7, 60806–60813. [Google Scholar] [CrossRef]
  55. Barrowclough, O.J.D.; Muntingh, G.; Nainamalai, V.; Stangeby, I. Binary Segmentation of Medical Images Using Implicit Spline Representations and Deep Learning. Comput. Aided Geom. Des. 2021, 85, 101972. [Google Scholar] [CrossRef]
  56. Duque-Arias, D.; Velasco-Forero, S.; Deschaud, J.E.; Goulette, F.; Serna, A.; Decencière, E.; Marcotegui, B. On Power Jaccard Losses for Semantic Segmentation. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, 10 February 2021; SCITEPRESS—Science and Technology Publications: Lisboa, Portugal, 2021; Volume 5, pp. 561–568. [Google Scholar]
  57. Narayan, S. The Generalized Sigmoid Activation Function: Competitive Supervised Learning. Inf. Sci. 1997, 99, 69–82. [Google Scholar] [CrossRef]
  58. Jun, Z.; Jinglu, H. Image Segmentation Based on 2D Otsu Method with Histogram Analysis. In Proceedings of the International Conference on Computer Science and Software Engineering, CSSE 2008, Wuhan, China, 12–14 December 2008; IEEE: Piscataway, NJ, USA, 2008; Volume 6, pp. 105–108. [Google Scholar]
  59. Kingma, D.P.; Ba, J.L. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015—Conference Track Proceedings, ICLR, San Diego, CA, USA, 22 December 2015. [Google Scholar]
  60. Lareau, N.P.; Donohoe, A.; Roberts, M.; Ebrahimian, H. Tracking Wildfires With Weather Radars. J. Geophys. Res. Atmos. 2022, 127, e2021JD036158. [Google Scholar] [CrossRef]
  61. Finney, M.A.; Grenfell, I.C.; McHugh, C.W.; Seli, R.C.; Trethewey, D.; Stratton, R.D.; Brittain, S. A Method for Ensemble Wildland Fire Simulation. Environ. Model. Assess. 2011, 16, 153–167. [Google Scholar] [CrossRef]
  62. Hong, Z.; Hamdan, E.; Zhao, Y.; Ye, T.; Pan, H.; Cetin, A.E. Wildfire Detection via Transfer Learning: A Survey. Signal, Image Video Process. 2023, 18, 207–214. [Google Scholar] [CrossRef]
Figure 1. An illustration of data pre-processing outcomes for Kincade fire site at 27 October 2019 9:49 UTC: (a) processed GOES image, (b) processed VIIRS image, and (c) overlapped GOES and VIIRS images, where brightness temperature unit is K.
Figure 2. VIIRS data rasterization process, (a) region of interest in longitude/latitude (Ln, Lt) space, (b) grid over region of interest in Northing/Easting space, and (c) output image, where red dots represent centers of VIIRS hotspots, and red squares denote activated pixels in the output image.
Figure 3. VIIRS data interpolation process (a) pre-interpolation VIIRS output, (b) post-interpolation VIIRS output, where brightness temperature unit is K.
Figure 4. Proposed autoencoder architecture.
Figure 5. Two-branch autoencoder architecture.
Figure 6. Illustration of Otsu’s thresholding on model prediction (a) model prediction, (b) post-thresholding model prediction, (c) ground truth.
Figure 7. Otsu’s thresholding on GOES images: (a) input GOES image, (b) post-thresholding GOES image.
Figure 8. Training curves (i.e., loss vs. epoch) of (a) GRMSE, (b) JL, (c) GLRMSE, (d) the individual global and local terms of the GLRMSE loss, (e) TBL, and (f) the individual RMSE and Jaccard losses of TBL.
Figure 9. Low coverage with high IOU fires depicted by input data, ground truth, and predictions from four models, each with corresponding labels provided within the figure, where brightness temperature units are K.
Figure 10. Low coverage with low IOU fires depicted by input data, ground truth, and predictions from four models, each with corresponding labels provided within the figure, where brightness temperature units are K.
Figure 11. High coverage with high IOU fires depicted by input data, ground truth, and predictions from four models, each with corresponding labels provided within the figure, where brightness temperature units are K.
Figure 12. High coverage with low IOU fires depicted by input data, ground truth, and predictions from four models, each with corresponding labels provided within the figure, where brightness temperature units are K.
Figure 13. Validation of model predictions against radar data for the Bear Fire, where brightness temperature units are K. The left column shows GOES imagery and the right column the network prediction. The blue boundary depicts the radar-estimated fire perimeter.
Figure 14. Validation of model predictions against radar data for the Caldor Fire, where brightness temperature units are K. The left column shows GOES imagery and the right column the network prediction. The blue boundary depicts the radar-estimated fire perimeter.
Table 1. Threshold values for defining test set categories.
| Category | LCHI | LCLI | HCHI | HCLI |
|---|---|---|---|---|
| Condition | Coverage < 20%; IOU > 5% | Coverage < 20%; IOU < 5% | Coverage > 20%; IOU > 5% | Coverage > 20%; IOU < 5% |
Table 2. Evaluation based on all test samples.
| Evaluation Metrics | GRMSE | GLRMSE | JL | TBL |
|---|---|---|---|---|
| IOU | 0.1358 | 0.1275 | 0.1197 | 0.1389 |
| IPSNR | 46.6864 | 48.5989 | N/A | 46.0219 |
Table 3. Evaluation based on LCHI test samples.
| Evaluation Metrics | GRMSE | GLRMSE | JL | TBL |
|---|---|---|---|---|
| IOU | 0.2372 | 0.2225 | 0.2294 | 0.2408 |
| IPSNR | 56.4517 | 58.6385 | N/A | 56.2793 |
Table 4. Evaluation based on LCLI test samples.
| Evaluation Metrics | GRMSE | GLRMSE | JL | TBL |
|---|---|---|---|---|
| IOU | 0.0320 | 0.0304 | 0.0208 | 0.0334 |
| IPSNR | 32.4128 | 33.5530 | N/A | 31.5966 |
Table 5. Evaluation based on HCHI test samples.
| Evaluation Metrics | GRMSE | GLRMSE | JL | TBL |
|---|---|---|---|---|
| IOU | 0.1820 | 0.1729 | 0.1400 | 0.1839 |
| IPSNR | 56.5220 | 57.1891 | N/A | 56.0820 |
Table 6. Evaluation based on HCLI test samples.
| Evaluation Metrics | GRMSE | GLRMSE | JL | TBL |
|---|---|---|---|---|
| IOU | 0.0539 | 0.0499 | 0.0375 | 0.0575 |
| IPSNR | 37.8572 | 40.8997 | N/A | 37.0405 |