Article

Robust Damage Estimation of Typhoon Goni on Coconut Crops with Sentinel-2 Imagery

by Andrés C. Rodríguez 1,*, Rodrigo Caye Daudt 1, Stefano D’Aronco 1, Konrad Schindler 1 and Jan D. Wegner 1,2

1 EcoVision Lab—Photogrammetry and Remote Sensing, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland
2 Institute for Computational Science, University of Zürich, Rämistrasse 71, 8006 Zürich, Switzerland
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(21), 4302; https://doi.org/10.3390/rs13214302
Submission received: 15 September 2021 / Revised: 21 October 2021 / Accepted: 22 October 2021 / Published: 26 October 2021
(This article belongs to the Section Earth Observation for Emergency Management)

Abstract

Typhoon Goni crossed several provinces in the Philippines where agriculture has high socioeconomic importance, including the top-3 provinces in terms of planted coconut trees. We have used a computational model to infer coconut tree density from satellite images before and after the typhoon’s passage, and in this way estimate the number of damaged trees. Our area of study around the typhoon’s path covers 15.7 Mha and includes 47 of the 87 provinces in the Philippines. In validation areas, our model predicts coconut tree density with a Mean Absolute Error of 5.9 trees/ha. In Camarines Sur, we estimated that 3.5 M of the 4.6 M existing coconut trees were damaged by the typhoon. Overall, we estimated that 14.1 M coconut trees were affected by the typhoon inside our area of study. Our validation images confirm that trees are rarely uprooted and that damages are largely due to reduced canopy cover of standing trees. On validation areas, our model was able to detect affected coconut trees with 88.6% accuracy, 75% precision and 90% recall. Our method delivers spatially fine-grained change maps for coconut plantations in the area of study, including unchanged, damaged and new trees. Beyond immediate damage assessment, gradual changes in coconut density may serve as a proxy for future changes in yield.

1. Introduction

During the 2020 Pacific typhoon season, typhoon Goni became the strongest storm on record to make landfall, with speeds of 225 km/h [1,2]. At its peak it reached 1-min sustained wind speeds of 315 km/h, the equivalent of a Category 5 hurricane on the Saffir-Simpson scale [2]. An estimated 24 M people live in areas severely affected by Goni, and an estimated 850 k were in need of assistance in the early response phases [1]. According to Aon plc, initial estimates of damage to agriculture and infrastructure stand at 415 million USD [2]. The damage to agriculture alone caused by this event in the Philippines has been estimated at 5.01 billion PHP (102 million USD) [3], and much of it took place on family farms, which are the primary source of income for most of the 2.5 million coconut farmers in the country [4], a population group with a high poverty rate [5].
Although the scale of Typhoon Goni was unprecedented, it was not an isolated event. The Philippine Coconut Authority (PCA) estimated that typhoon Haiyan in 2013 damaged 33 million coconut trees over almost 300,000 ha [6]. For the tropical storm Kai-tak in 2017 (known as Urduja in the Philippines), the PCA estimated that 965,000 trees were affected, of which 6000 coconut trees experienced a 100% crop yield loss, while 42,000 coconut trees were heavily damaged, resulting in 50% crop yield loss [7]. That second group with 50% yield loss mainly consisted of trees that had lost foliage but were not uprooted. Accurate mapping of such canopy damage is highly relevant because the yield loss scales with the number of affected trees, ultimately amounting to massive losses in the short to mid term at country scale. To mitigate negative impacts on the local population, rapid response actions are essential, which calls for accurate, large-scale damage maps. In this paper, we propose a machine learning-based, country-scale damage estimation approach and experimentally validate its performance for the impact of typhoon Goni on coconut trees in the Philippines.
Timely and accurate mapping of natural hazard impacts is important for a precise and efficient disaster response [1]. Although in situ surveys on the ground can provide reliable data for small-scale events, rapid assessment often becomes infeasible if the affected area is large, particularly if its accessibility is compromised [8]. According to van den Homberg et al., only 40% of the information needed during disaster responses is typically supplied in a timely and pertinent manner [9]. Damage maps based on remote sensing data can reduce the response time while scaling to entire countries, if they can be generated with a high degree of automation. A valuable source of evidence is medium-resolution, freely available satellite imagery, either from Synthetic Aperture Radar (SAR) constellations such as Sentinel-1, or from multi-spectral, optical sensors like Sentinel-2 and Landsat.
It has been argued that SAR images, acquired with active illumination in the microwave spectrum, are the first choice due to their ability to observe the Earth’s surface in unfavourable weather conditions and during night time. This robustness against varying atmospheric conditions in principle allows for near-real time change detection, and C-band SAR images have been used for forest density classification [10]. This classification relies on varying back-scatter from different vegetation species. However, SAR-based methods may have limited sensitivity to subtle vegetation changes [11], especially within the same species.
In our study we turn to multi-spectral optical satellite imagery, since we aim to detect small degradations in the canopies of coconut palm trees; i.e., we accept some data loss due to clouds in return for a high sensitivity to intricate vegetation changes. Multi-spectral optical satellite images from Sentinel-2 allow for fine-grained analyses of vegetation, including crop classification [12,13], large-scale mapping of tree crops [14,15], and vegetation height estimation [16]. In previous work we have shown that Sentinel-2 imagery is suitable for mapping oil palms and coconut trees [15,17]. Here, we build on those works and extend them to a change detection framework that has the ability to quantify subtle variations in the canopy structure of coconut trees.
More specifically, we employ a deep learning method to analyse the damage caused by typhoon Goni on coconut trees. First, we apply the active learning method developed in [15] to train a convolutional neural network (CNN) which regresses the number of intact coconut tree canopies in each Sentinel-2 pixel, at 10 m GSD. We then use that CNN to estimate coconut canopy densities both before and after the passage of typhoon Goni. By analysing differences between the two maps we quantify the number of trees that were damaged by the typhoon. As expected, our results show that the largest damages have occurred near the location where typhoon Goni first made landfall. We also demonstrate that our method, based on supervised, class-specific tree densities produces more plausible maps than alternative methods based on unsupervised change detection. Furthermore, we validate our method in selected regions with very high resolution (VHR) images, where individual trees are discernible and the presence or absence of damages can be visually ascertained.
The remainder of this paper is organised as follows. In Section 2 we discuss related work on disaster response and change detection methods for remote sensing applications. In Section 3 we introduce the dataset used in this work and describe the area of interest in detail. In Section 4 we introduce the proposed active learning method for change detection as well as several baselines used for comparison. In Section 5 we evaluate the performance of the proposed method and compare it with the baselines. In Section 6 we discuss the strengths as well as the limitations of the proposed method. Finally, conclusions are provided in Section 7.

2. Related Work

2.1. Disaster Response

With every 1 °C increase in the Anthropogenic Climate Change Index (ACCI, a measure of the anthropogenic portion of climate change), the share of strong Category 4–5 events among all hurricanes across the globe increases by 40% [18]. Given the large human and economic cost that such natural disasters cause every year [19], much effort and attention has been put into studying and understanding their effects [20,21]. Early responses during post-disaster phases are crucial to mitigate short- and mid-term consequences. Governments and non-governmental organisations need to take action quickly, not only in the short term to provide relief materials, health assistance and rescue operations, but also in the mid term to mitigate socioeconomic impacts with suitable relief strategies [22,23]. A clear understanding of the magnitude of the damage is crucial to focus efforts and provide effective policies [24]. Common sources of information are news, satellite imagery, and social-media reports from affected users. Analysing this vast amount of information in a timely and concise way is challenging [25].
Many automated methods have been proposed to forecast hazards, to assess vulnerabilities, and to track the recovery in affected regions [26,27]. Notably, several works attempt to extract information about such events from remote sensing data sources [27,28,29,30,31,32,33]. This is especially valuable for rapid mapping tasks, since access to affected regions is often hindered during and immediately after the event. Other works use alternative data sources, such as social media [34], instead of remote sensing data. Yet others also use field surveys to better understand the consequences of natural disasters and to better prepare for future events [35].
While several methods for damage assessment in natural disasters are presented in general, there are also dedicated works that target a single event or type of disaster, such as tsunamis [36], floods [30,37,38], earthquakes [39], hurricanes [28,40], and typhoons [33,34]. Some studies aim to monitor the evolution of a given region over longer periods of time, usually focusing on measuring the recovery of the considered region [31,33], while others focus on single instants (e.g., single image analysis) [29]. Most works on damage assessment focus on urban areas, usually attempting to detect damaged buildings [30,32,36,41,42,43].
There is a large diversity of data-driven remote sensing methods designed to tackle specific needs after a hazard event. Bejiga et al. [44] aim to detect objects of interest in the context of search-and-rescue missions after avalanche events, using a convolutional neural network coupled with a support vector machine. For flood detection, Ireland et al. [37] use Landsat imagery and compare the performance of different machine learning methods. Yang and Cervone [38] provide damage assessments on thousands of aerial images with limited human input. To assist with the development of data-driven approaches for post-disaster building damage evaluation, Gupta et al. [43] propose the xBD dataset, a collection that is large enough to train modern, data-hungry deep learning methods.
More closely related to our task are data-driven methods to assess damages from tropical storms, i.e., typhoons, cyclones and hurricanes. Ghaffarian et al. [33] adopt a random forest algorithm to derive land cover maps from cloud-free composites of Landsat 7 and 8 images. Damage and recovery maps were produced according to the changes observed between different dates. Such an approach provides valuable insights into the temporal evolution of the region during and after a natural disaster, although post-classification change analysis is prone to errors, especially near region boundaries, and end-to-end classification methods have recently been preferred over random forests [45,46]. Sheykhmousa et al. [31] take a similar approach and extract transition patterns from land use and land cover maps generated from high-resolution WorldView-2 images. Some works have explored the alternative strategy of directly analysing a single post-disaster image. Kovordányi and Roy [40] forecast the direction of a cyclone’s track at a given instant using satellite images; and Jiang and Friedland [28] predict hurricane-induced damage by detecting urban debris zones in high-resolution images.

2.2. Vegetation Mapping

Detection and quantitative analysis of vegetation has been a research topic in remote sensing for decades. Jasinski [47] proposed a physics-inspired method for estimating canopy densities of natural systems with little or no ground truth data using Landsat data. Others have proposed indices such as the normalised difference vegetation index (NDVI) to retrieve fractional vegetation cover and leaf area index [48]. Hansen et al. [49] used a regression method to estimate tree cover from MODIS (500 m GSD) on a planetary scale. More recently, Agapiou [50] used Sentinel-1 and Sentinel-2 data coupled with crowd-sourced OpenStreetMap data to estimate the vegetation cover in the vicinity of archaeological sites. Melville et al. [51] used random forests to identify four vegetation classes in images obtained using an unmanned aircraft system. Some works were specifically aimed at evaluating vegetation changes. Hansen et al. [52] have generated maps that quantify forest loss between the years 2000 and 2012 from satellite data. Khan et al. [53] used deep neural networks to detect various types of forest changes over a period of 29 years.
Crop mapping has received considerable attention in the remote sensing community. It is a frequent use case for multi-spectral images from satellites such as Sentinel-2, Landsat or MODIS. Multi-spectral images allow for species identification, often based on image time series [12,13], as well as crop yield prediction [54,55]. In terms of tropical tree crops, oil palms have been mapped in south-east Asia [15,56] and at global scale [14].
In our previous work we have developed methods to retrieve the density of oil palms, coconuts and olive trees from Sentinel-2 images [17]. That work has then been extended into an active learning scheme to map oil palm density at large scale with a tractable annotation effort [15]. The present paper builds on that active learning approach and utilises a similar pipeline to map the density of coconut canopies and estimate the effect that typhoon Goni had on them.

2.3. Change Detection

An intuitive and very common option to analyse the impact of natural disasters in image data are change detection algorithms. The literature about change detection is vast [57,58,59] and not primarily concerned with natural disasters, although there recently have been efforts to encourage such experiments [43]. A limited number of studies have applied change detection to identify regions whose state significantly differs before and after a given event [33,53].
In their simplest form, change detection algorithms aim to classify pixels into changed and unchanged classes. Most change detection algorithms belong to one of the following groups: (1) image differencing with automatic thresholding [60], (2) change vector analysis [61], and (3) direct change prediction [45,53,62]. The earliest approaches simply form a difference image and turn it into a (unitless) measure of per-pixel change, followed by thresholding or clustering [57,58]. Another family of methods, named change vector analysis (CVA), describes the temporal evolution of each pixel with a high-dimensional vector that encodes the change “direction”, so that changes are not only identified but also characterised to some degree [63]. Object-based approaches were also proposed to increase robustness, by analysing image “objects” (often simply small, homogeneous segments) instead of individual pixels [58]. Finally, the problem can also be formulated as semantic co-segmentation, where a label is assigned directly to each pair of corresponding (before/after) pixels, without explicit image differencing or thresholding [59]. This last formulation is especially suited for data-driven algorithms like convolutional neural networks (CNNs), which are nowadays among the most powerful tools for image analysis [46].
Unsupervised change detection algorithms do not require training data to be annotated by experts, which is often expensive and slow, and at least conceptually should generalise well across different geographical regions, since they do not suffer from training data biases [58,64]. Thus, they may seem particularly well suited to study natural disasters. On the downside, the price to pay for not fitting to the specific task is that these methods cannot discriminate different types of changes that have comparable visual saliency.
At least one representative method from each of the three groups described above is included as a baseline in our experiments; these baselines are described in more detail in Section 4.3.

3. Area of Interest

Typhoon Goni made landfall on Catanduanes island on 31 October 2020 with wind speeds of 225 km/h [65]. Its course covered the provinces of Camarines Sur, Albay and Quezon towards the east. The affected region is not only home to 24 M people, but it also contains an important share of the country’s coconut trees. According to the Food and Agriculture Organization of the United Nations, the Philippines is the world’s second-largest coconut producer after Indonesia, and produced 17.8 Mt of coconut in 2019. The top-3 provinces in terms of coconut trees are Quezon, Northern Samar and Camarines Sur, all of which are located very close to Goni’s path. Figure 1 shows the estimated number of coconut trees per province and Goni’s path. For our analysis, we included all provinces located within 500 km of the centre line of the typhoon’s path (marked with white boundaries in Figure 1). This strip covers 15.7 Mha and 47 of the 87 provinces in the Philippines.

Data

We created a coconut canopy density map before the typhoon landfall on 31 October 2020, using Sentinel-2 images recorded between January 2020 and 15 October 2020. From this time period, we chose the 10 images from each location least affected by clouds to build a density map without gaps. From that map we also count the number of intact coconut trees in each province (Figure 1); see Section 4.1 for details on the method. We then computed a density map after the typhoon with Sentinel-2 images recorded between 15 December 2020 and 15 April 2021. Here we use all available images from that period and predict a density for all cloud-free pixels.
As reference labels for training, we used density maps derived from coconut trees identified by interactive photo-interpretation of Google Earth images (GSD < 30 cm) from 2017. During annotation we manually marked the centre of each coconut tree crown in different areas of the Philippines. Contiguous areas of at least 300 ha constitute the minimal unit for labelling; we refer to such base labelling areas as blocks. We created a density map from the annotated locations on a 0.625 m GSD grid and applied a square kernel of size 20 × 20 m, then summed the densities into a 10 m GSD grid that matches the Sentinel-2 resolution. Using a grid 16 times denser than the 10 m of Sentinel-2, together with a square kernel, allows the model to focus on identifying the tree density rather than the specific (sub-pixel) tree location, thus reducing the impact of potential co-registration errors. That strategy already proved successful in our earlier work on oil palms [15], as well as olive trees and coconut trees [17]. Our training set consisted of an initial set of 37 blocks, plus an additional 50 blocks selected with an active learning approach (see Section 4.1 for details), resulting in a total of 87 blocks for training; 31 further blocks were set aside for validation.
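As a minimal illustration of this rasterisation step (a simplified sketch with hypothetical variable names, not our production pipeline), the following snippet spreads each annotated crown centre over a 20 × 20 m square kernel on the 0.625 m grid and then aggregates the mass into 10 m Sentinel-2 pixels:

```python
import numpy as np
from scipy.signal import convolve2d

FINE_GSD = 0.625                      # m, annotation grid
FACTOR = 16                           # 16 x 0.625 m = 10 m Sentinel-2 pixel
KERNEL_CELLS = int(20.0 / FINE_GSD)   # 20 x 20 m square kernel = 32 x 32 cells

def rasterise_density(tree_xy_m, extent_m):
    """tree_xy_m: (N, 2) crown centres in metres within a block.
    extent_m: (width, height) of the block in metres.
    Returns the expected number of trees per 10 m pixel."""
    w = int(extent_m[0] / FINE_GSD)
    h = int(extent_m[1] / FINE_GSD)
    fine = np.zeros((h, w), dtype=np.float32)
    cols = (tree_xy_m[:, 0] / FINE_GSD).astype(int).clip(0, w - 1)
    rows = (tree_xy_m[:, 1] / FINE_GSD).astype(int).clip(0, h - 1)
    np.add.at(fine, (rows, cols), 1.0)            # one unit of mass per tree

    kernel = np.full((KERNEL_CELLS, KERNEL_CELLS),
                     1.0 / KERNEL_CELLS**2)       # spread mass over 20 x 20 m
    fine = convolve2d(fine, kernel, mode="same")

    # Sum 16 x 16 blocks of fine cells into the 10 m grid.
    H, W = h // FACTOR, w // FACTOR
    return (fine[:H * FACTOR, :W * FACTOR]
            .reshape(H, FACTOR, W, FACTOR).sum(axis=(1, 3)))
```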
To validate our approach against actual, visually recognisable damages caused by the typhoon we relied on high-resolution images. For the state before the typhoon we used images from Google Earth (GSD < 30 cm); for the state after the typhoon we used Worldview-2© images from Maxar Technologies (GSD 50 cm). We selected a variety of locations to inspect different levels of damage across the region of interest.

4. Methods

We estimate the typhoon damage by identifying coconut canopy density changes between images before and after the event. To that end we have created two different maps using a collection of images sensed in two different time periods as described in Section 3. We first train a model to estimate coconut density in the Philippines using input images and ground truth labels exclusively from before the typhoon. We then apply that model to images from before and after the event to obtain coconut density maps, and analyse the changes in density. See Figure 2 for an overview of the complete method.
A naïve, direct evaluation of the density difference could lead to inconsistencies, due to two important issues. First, since our density model was not trained with images collected after the typhoon, a potential domain shift might lead to inaccurate predictions. Second, our after-map is obtained from notably fewer samples than our before-map, which could also increase the uncertainty of the predictions after the typhoon. It is therefore crucial to account for prediction uncertainty, such that change detection can be based on trustworthy predictions.
In the following we first review the active learning strategy used to ensure accurate density estimation without excessive labelling efforts. We then describe how we model uncertainty and filter out unreliable predictions. Finally, we describe other change detection methods that are used as baselines to compare to.

4.1. Active Learning for Density Estimation

Based on our previous work on tree density retrieval from Sentinel-2 [15,17], we derive a map of coconut canopy density, covering the entire region of interest in the Philippines. A Convolutional Neural Network (CNN), with 9 residual blocks each consisting of 3 convolutional layers, predicts the number of intact coconut canopies per pixel. For details of the network architecture we refer to [15].
We rely on active learning [15] to keep the labelling effort as low as possible for the large area of interest. The fundamental idea of active learning is to start from a base model trained with only a small amount of reference data, and to iteratively grow the training set and improve the model, with the additional help of an acquisition function $g(x)$. The acquisition function scores unlabelled samples $x$, so as to identify which samples are most informative and should be added to the training set. For our large region of interest the set of candidate samples to choose from is very large, so $g(x)$ should be cheap to compute and scale well.
As a rule of thumb, samples to be added to the training set should be diverse to avoid annotating redundant information (e.g., annotating several trees of highly similar characteristics) and they should have high prediction uncertainty under the old model, so that their correct label adds a lot of information to the training set. We use the acquisition function proposed in our previous work [15] to balance diversity and uncertainty:
$$
g(x_i) = \underbrace{\frac{\sigma^2(\hat{y}_i)}{\sum_{j \in X} \sigma^2(\hat{y}_j)}}_{\text{Uncertainty}} + \underbrace{\frac{d_z(x_i, \mu)^2}{\sum_{j \in X} d_z(x_j, \mu)^2}}_{\text{Diversity}}, \tag{1}
$$
The first term in Equation (1) estimates the epistemic (model) uncertainty, i.e., the “lack of knowledge” of the model after seeing only a limited set of training examples. This uncertainty is estimated with an ensemble of models trained independently [66,67], where the disagreement between ensemble members serves as a measure of how uncertain the ensemble is about a specific input. In particular, with $M$ models $f_i$ trained independently of each other, we can compute the epistemic uncertainty as the variance of the model ensemble:
$$
\sigma^2_{\hat{y}} = \frac{1}{M-1} \sum_{i=1}^{M} \left( \hat{y}_i - \bar{y} \right)^2, \tag{2}
$$
where $\hat{y}_i = f_i(x)$ is the individual prediction of model $f_i$ and $\bar{y} = \frac{1}{M} \sum_{i=1}^{M} \hat{y}_i$ is the average over all individual predictions. We obtain this estimate for all available samples in our study area $X$.
The second term in Equation (1) represents an estimate of the sample diversity, and aims to maximise the variety of the selected sample set. Diversity is measured via the distance $d_z(x_i, \mu)$ of a sample from the mean feature value $\mu$ in the latent feature space $z$ of the CNN. Please refer to [15] for further details.
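A compact sketch of how Equations (1) and (2) can be evaluated over a pool of candidates is given below (illustrative Python with hypothetical array names; the actual selection operates on labelling blocks and is described in [15]):

```python
import numpy as np

def acquisition_scores(preds, feats):
    """preds: (M, N) predictions of M ensemble members for N candidates.
    feats: (N, D) latent CNN features of the candidates.
    Returns one score per candidate; higher = more informative."""
    var = preds.var(axis=0, ddof=1)          # ensemble variance, Eq. (2)
    uncertainty = var / var.sum()            # normalised uncertainty term

    mu = feats.mean(axis=0)                  # mean latent feature vector
    d2 = ((feats - mu) ** 2).sum(axis=1)     # squared distance to the mean
    diversity = d2 / d2.sum()                # normalised diversity term

    return uncertainty + diversity           # acquisition score, Eq. (1)

# The highest-scoring candidates are sent for manual annotation, e.g.:
# selected = np.argsort(-acquisition_scores(preds, feats))[:50]
```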

4.2. Robust Density Change Detection

After the model has been trained with the actively selected samples, we apply it to our entire area of interest.
More robust change estimates can be obtained by using only predictions with reasonably low uncertainty. In general, predictions are affected by two types of uncertainty: epistemic uncertainty, i.e., the model’s lack of knowledge due to patterns not encountered during training, which was already used to select the training samples; and aleatoric uncertainty, i.e., the uncertainty inherent in the data, where the input does not contain the information needed to unambiguously determine the correct output, even if perfect model parameters were known.
Unobserved conditions that give rise to epistemic uncertainty could include a vegetation type not encountered in the training set or, more relevant to our case, situations that are normally not encountered except after a typhoon, such as destroyed buildings or defoliated forest. This uncertainty can be modelled by running an ensemble of (density) prediction networks on the same input image, in the same way as during active learning.
The aleatoric uncertainty in optical remote sensing can largely be attributed to cloud cover. For Sentinel-2, cloud probability masks are available. To reduce the overall effect of clouds on our predictions, we discard pixels with cloud probability >50% during training, and predict only for pixels with a low chance of dense cloud cover (cloud probability <10%). This, however, does not completely remove the uncertainty, as some part of it stems from other sources, for instance imperfect cloud masks or changes on the ground that are unrelated to density. Rather than using only a single input image per location, we run the prediction for multiple different (cloud-free) images within a time window and average the estimates. A natural approach to model the uncertainty in such a scenario is to compute the standard deviation over the $N_{lat,lon}$ predictions at a given location. This is especially important for the after-map, where only 4 months of Sentinel-2 data were available at the time of our study.
We are interested in computing the overall uncertainty, to filter out unreliable estimates. To that end we evaluate the standard deviation both over the $M$ different models and over the $N$ different samples at each location. The combined epistemic and aleatoric uncertainty of the predicted density is equal to:
$$
\sigma_{\hat{y}} = \sqrt{ \frac{1}{(M-1)(N-1)} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( \hat{y}_{i,j} - \bar{y} \right)^2 }. \tag{3}
$$
The combined standard deviation is transformed into a quantity similar in nature to the coefficient of variation (CV), but adjusted depending on the distribution of $\hat{y}$:
$$
CV = \frac{\sigma_{\hat{y}}}{\epsilon + \hat{y}}, \tag{4}
$$
where $\epsilon$ acts as a regulariser for very low values of $\hat{y}$ that could otherwise inflate the CV. The CV quantifies the standard deviation relative to the predicted density. Depending on the desired recall (i.e., the percentage of predictions that we want to keep), we can select predictions according to a confidence threshold $CV < \tau$ and discard predictions with too high uncertainty. Furthermore, we also remove predictions at pixels that never had clear visibility within the considered time window (i.e., always cloud probability >10% or cloud shadow). Changes are only computed for pixels that have low uncertainty both before and after the typhoon.
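The filtering described by Equations (3) and (4) amounts to only a few array operations. Below is a simplified sketch (our own illustration; the cloud masking and per-acquisition bookkeeping are omitted, and the values $\epsilon = 0.7$ and $\tau = 0.45$ are those chosen in Section 5):

```python
import numpy as np

def reliable_density(preds, eps=0.7, tau=0.45):
    """preds: (M, N, H, W) densities from M ensemble models applied to N
    cloud-free acquisitions. Returns the mean density map with uncertain
    pixels set to NaN."""
    M, N = preds.shape[:2]
    y_bar = preds.mean(axis=(0, 1))                    # mean prediction
    ss = ((preds - y_bar) ** 2).sum(axis=(0, 1))       # sum of squares
    sigma = np.sqrt(ss / ((M - 1) * (N - 1)))          # combined std, Eq. (3)
    cv = sigma / (eps + y_bar)                         # regularised CV, Eq. (4)
    return np.where(cv < tau, y_bar, np.nan)           # keep confident pixels
```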
Using only the reliable density estimates, we create a change image $\Delta = \hat{y}_{\mathrm{before}} - \hat{y}_{\mathrm{after}}$. Based on the values of $\Delta$ and $\hat{y}_{\mathrm{before}}$, pixels are classified according to the rule:
$$
\mathrm{Change}(\Delta, \hat{y}_{\mathrm{before}}) =
\begin{cases}
\text{Damaged}, & \text{if } \hat{y}_{\mathrm{before}} \geq 0.4 \text{ and } \Delta \geq 0.4 \\
\text{New}, & \text{if } \hat{y}_{\mathrm{before}} < 0.4 \text{ and } \Delta \leq -0.4 \\
\text{Unchanged}, & \text{if } \hat{y}_{\mathrm{before}} \geq 0.4 \text{ and } |\Delta| < 0.4 \\
\text{Non-coconut}, & \text{otherwise}
\end{cases} \tag{5}
$$
where $|\cdot|$ denotes the absolute value. We chose the threshold after a preliminary analysis, in which we observed that pixels with densities >0.4 consistently lie within coconut plantations.
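As a sketch, the rule of Equation (5) maps directly onto a few vectorised comparisons (the class codes, and the sign convention for “New” as a density gain, are our own reading of the rule):

```python
import numpy as np

NON_COCONUT, DAMAGED, NEW, UNCHANGED = 0, 1, 2, 3

def classify_change(before, after, thr=0.4):
    """before, after: (H, W) filtered density maps; NaNs stay non-coconut."""
    delta = before - after                             # positive = density loss
    out = np.full(before.shape, NON_COCONUT, dtype=np.uint8)
    out[(before >= thr) & (delta >= thr)] = DAMAGED
    out[(before < thr) & (delta <= -thr)] = NEW        # density gain
    out[(before >= thr) & (np.abs(delta) < thr)] = UNCHANGED
    return out
```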

4.3. Baselines

Unsupervised change detection is a common alternative for evaluating the magnitude of damage after a natural disaster. Here we consider several such approaches against which we compare our method. Since all of them perform a pair-wise image comparison, we compute cloud-free median composites such that they are (like our model) not affected by clouds, and perform change detection between the before and after composites.
Change vector analysis (CVA) [68]: This traditional change detection method is based on analysing change vectors generated by comparing pixel values in coregistered images. Changed regions are expected to contain change vectors with large magnitudes, and the direction of these vectors provides some insight on the type of change that has occurred. For unsupervised change detection, Otsu’s thresholding algorithm is used to separate pixels into changed and unchanged based on the magnitudes of the change vectors.
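In its simplest form, CVA reduces to a change-vector magnitude followed by Otsu’s threshold, as the minimal sketch below illustrates (assuming coregistered multi-band composites):

```python
import numpy as np
from skimage.filters import threshold_otsu

def cva_change(before, after):
    """before, after: (H, W, d) coregistered composites.
    Returns a boolean change mask."""
    vectors = after.astype(np.float32) - before.astype(np.float32)
    magnitude = np.linalg.norm(vectors, axis=-1)    # change-vector length
    return magnitude > threshold_otsu(magnitude)    # True = changed
```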
k-means [60]: This approach computes the difference between before and after, the so-called “delta image”. A neighbourhood around each pixel is then considered to cluster the delta image into changed and unchanged pixels. Given an image with $N$ pixels and $d$ channels, we concatenate the pixel values from an $s \times s$ window around each pixel into a “feature vector” for the pixel and flatten the resulting feature map into a matrix of size $N \times s^2 d$. Next we perform Principal Component Analysis to extract the top eigenvectors $e_j$ of the covariance matrix, project all feature vectors onto the space spanned by these eigenvectors, and cluster them with k-means (with $k = 2$). The underlying assumption is that a pixel that has changed will fall into a different cluster than surrounding pixels with no change. We repeated this procedure for window sizes $s \in \{2, 4, 6, 8, 10\}$ and aggregated the multiple clusterings into a final one with an additional k-means.
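For one window size $s$, this pipeline can be sketched as follows (illustrative Python; the final aggregation over all window sizes is omitted, and the heuristic that the minority cluster is the changed one is an assumption of the sketch):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def kmeans_change(before, after, s=4, n_components=8):
    """before, after: (H, W, d) median composites. Returns a change mask."""
    delta = (after - before).astype(np.float32)
    H, W, d = delta.shape
    pad = s // 2
    padded = np.pad(delta, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    # Stack the s x s neighbourhood of every pixel into one feature vector.
    feats = np.stack(
        [padded[i:i + H, j:j + W].reshape(H * W, d)
         for i in range(s) for j in range(s)],
        axis=1,
    ).reshape(H * W, s * s * d)
    feats = PCA(n_components=n_components).fit_transform(feats)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    # Heuristic: changed pixels form the minority cluster.
    changed = labels == np.argmin(np.bincount(labels))
    return changed.reshape(H, W)
```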
Deep change vector analysis (DCVA) [69]: This change detection method uses a pretrained CNN to obtain pixelwise features to build a change vector. Automatic feature selection is done to compile a group of features that are especially helpful in differentiating changed regions from unchanged ones. Pixels are separated into changed and unchanged using a thresholding algorithm, either Otsu’s method or a local adaptive thresholding algorithm.
Deep slow feature analysis (DSFA) [61]: This approach uses Slow Feature Analysis (SFA) [70], an unsupervised method that projects the input data into a slowly varying (i.e., maximally invariant) feature space. This allows for an easy identification of changed vs. unchanged pixels in the transformed space. Specifically, DSFA uses a neural network to compute a projection matrix that extracts the most invariant features in a non-linear fashion. Once the projection matrices have been learned (with stochastic gradient descent), the projected features are classified into changed and unchanged pixels by clustering; in practice, k-means is again used for that purpose.
Self-supervised Adversarial Change Detection (S2-cGAN) [62]: This method relies on a Generative Adversarial Network (GAN) to identify changes. From a pair of unchanged images, it trains a discriminator to identify semantically unchanged pairs of pixels at different time-steps. At inference time, that discriminator should be able to recognise which pixels have changed, since they constitute outliers compared to the training data. In practice, at training time only one image $x$ is used, and small perturbations are added to obtain a synthetic pair of “unchanged” images for discriminator training.
Deep Kernel PCA (KPCA) [71]: This algorithm uses a Siamese network in combination with kernel PCA to produce change vectors and map them into a 2D polar domain. These vectors are then thresholded automatically using Otsu’s algorithm to separate changed pixels from unchanged ones.

5. Results

We trained our model using only Sentinel-2 pixels that are not marked as cloud, cloud shadow, water or snow in the Sen2Cor [72] scene classification layer. Furthermore, we masked out pixels with a cloud probability >50% for training, which allows the model to observe different cloud coverage conditions. At inference time we used only pixels with cloud probability <10% to avoid possible noise from differing cloud conditions.
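These masks translate into simple boolean operations on the Sen2Cor scene classification layer (SCL) and the cloud probability band; the sketch below uses the standard Sen2Cor class codes (3 cloud shadow, 6 water, 8–9 clouds, 10 thin cirrus, 11 snow) and is only an illustration of the rule described above:

```python
import numpy as np

INVALID_SCL = [3, 6, 8, 9, 10, 11]   # shadow, water, clouds, cirrus, snow

def training_mask(scl, cloud_prob):
    """Pixels usable for training: valid SCL class, cloud probability <= 50%."""
    return ~np.isin(scl, INVALID_SCL) & (cloud_prob <= 50)

def inference_mask(scl, cloud_prob):
    """Pixels usable for prediction: valid SCL class, cloud probability < 10%."""
    return ~np.isin(scl, INVALID_SCL) & (cloud_prob < 10)
```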
To evaluate the performance of density estimation, we aggregate predictions into 1 ha blocks and compare them to reference data. The evaluation on 1 ha blocks sidesteps the influence of residual co-registration errors between reference data and Sentinel-2 images. On validation blocks, our Mean Absolute Error (MAE) after training only with the initial 37 training blocks was 8.9 trees/ha, and dropped to 5.9 trees/ha after including the actively selected samples. This represents a reduction of ≈34% (see Figure 3). For a fair comparison we applied the same robust thresholding in both cases, as explained below. These results are in line with our previous study on oil palm density, where active learning led to a reduction of 28% with respect to the initial MAE.
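Concretely, the 1 ha evaluation sums 10 × 10 windows of 10 m pixels before computing the error (a small sketch with illustrative names):

```python
import numpy as np

def mae_per_hectare(pred, ref, block=10):
    """pred, ref: (H, W) densities at 10 m GSD; 10 x 10 px = 1 ha."""
    H = (pred.shape[0] // block) * block
    W = (pred.shape[1] // block) * block
    agg = lambda a: (a[:H, :W]
                     .reshape(H // block, block, W // block, block)
                     .sum(axis=(1, 3)))
    return np.abs(agg(pred) - agg(ref)).mean()   # MAE in trees/ha
```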
We present a visual comparison of reference and predicted density maps at two different locations in Figure 4. The main structure of the plantations is correctly mapped and the density values in general agree. However, pixels in areas of high density (close to 2 trees per pixel) exhibit some under-estimation bias, as in the middle image of Figure 4.
As mentioned in Section 4, we consider only confident estimates, i.e., those below the threshold $CV < \tau$. A lower value of $\tau$ is expected to increase the accuracy, but to reduce the overall recall. Which exact value of $\tau$ to use depends on the use case. In our case, we aim for high map coverage and remove only the most uncertain locations. See Table 1 for an evaluation of how varying values of $\tau$ affect the output. For these results, we used $\epsilon = 0.7$ to compute the CV and tested different values of $\tau$. We observe that at $\tau = 0.45$ the MAE improves by 9.1% at the cost of leaving out the <2% most uncertain samples, which we regard as the best compromise.
We applied the chosen threshold $\tau = 0.45$ to all predictions in our study area. Due to cloud cover in the before-map, especially in high-altitude regions of several provinces, the overall recall dropped to 89.1% for the complete study area. Fortunately, most discarded pixels lie in mountain areas where no coconut is grown; consequently, the recall over the validation areas, where coconut plantations are found, was much higher (>98%).

5.1. Estimated Coconut Damages

Using Equation (5), we estimate how many trees were damaged between the two considered time-steps. The estimated number of healthy coconut tree canopies in our study area was 58.4 M before and 44.3 M after the passage of the typhoon, meaning that 14.1 M coconut trees were affected. We aggregated our results per province and observed the largest changes in Camarines Sur, with 3.5 M coconut trees estimated to have been damaged. Catanduanes, the point of first landfall, does not have a significant number of coconut trees, see Figure 5. Notably, we observe a clear correlation between the level of damage and the proximity to the storm track, as well as the temporal order along the track.
Table 2 shows the top-15 provinces in terms of overall coconut crop yield (before the passage of Goni). In our change analysis we also predict a few localised areas with new coconut plantings. In 31 of the 47 provinces in our region of interest the estimated changes lie within ±0.1 million trees. In certain areas on Masbate island (which represents 6% of the coconut planted area), we observe regions with an apparent increase in coconut trees. These areas, however, appear to largely be false negatives in the before-map, where only few cloud-free images were available.

5.2. Coconut Damage Evaluation

To validate our large-scale predictions we obtained high-resolution images from Worldview-2© 2021 Maxar Technologies with a GSD of 50 cm. Images were selected in various parts of the study area. We visually analysed the high-resolution images and marked changes between images before and after the typhoon. Although this is a difficult manual task, we focused on areas with clear changes on the ground. In total we annotated 87 different change points, including changes in coconut plantations, and added background points that showed no explicit change. See Figure 6 for examples of annotated points from each class.
In Figure 7 we present example areas located close to the typhoon path in northern Albay. The validation images show that most tree trunks remain rooted in the ground, i.e., the lower density found with our method corresponds mostly to canopy loss, not fallen trees.
Figure 8 and Figure 9 show examples of areas in the provinces of Aurora and Samar, respectively, which exhibited only minor damages. Most of the areas there show no changes, which is consistent with our high-resolution validation images. However, we observed very localised areas where damages are predicted. In the high-resolution validation images, there is no clear sign of damaged trees in these areas. We speculate that these predictions are due to subtle changes of the tree foliage, co-registration errors, or inaccurate predictions from the radiometrically more challenging post-typhoon images. The phenomenon is localised and does not appear systematically; therefore, it does not significantly affect the large-scale analysis.

5.3. Baseline Comparisons

We compare our predicted changes with unsupervised change detection, as described in Section 4.3. The results in Table 3 show that most of the baselines are not sensitive enough to detect changes in the coconut plantations. Our method detects changed vs. unchanged coconut pixels with an accuracy of 88.6%, and a precision and recall of 75% and 90%, respectively. The baseline methods detect any type of change, but we can use validation points that contain coconut to evaluate their performance on coconut plantation areas (see Table 4). k-means, CVA and KPCA all obtain a recall of 27.3%. S2-cGAN, DSFA and DCVA did not predict any correct changes in our validation areas. KPCA has an overall accuracy of 80.5%, which is, however, largely due to the fact that it predicts almost no changes at all.
Furthermore, we visually analysed the different damaged areas. Results are shown in Figure 10. We concentrate our validation efforts on areas with strong damages (top) and mixed change states (bottom). For reference, we also include the Sentinel-2 median composite that served as input for the baselines.
DSFA shows no clear indication of changes in coconut plantations, in either of the analysed locations. It does, however, flag some changes in vegetation and soil areas. As expected, the method homes in on the strongest changes between the two images; however, it does not seem able to detect coconut damages, which are visually more subtle than other changes.
k-means, on the other hand, flags many more changes. Strong, apparent changes seem to be detected much better than with DSFA. This could partially be due to the fact that k-means clusters each pixel based on its neighbourhood, rather than on single pixels as DSFA does. Some of the changes predicted by k-means in the top image lie in areas that were reported as damaged coconut areas. It is unclear, however, which type of change these detections represent, as the composite images show no clear indication of change.
S2-cGAN, similar to k-means, detects some changes in areas that are heavily affected according to our prediction. It seems to have fewer false positives than DSFA and k-means. However, it also returns non-coconut changes on certain urban structures, as in Figure 10 (bottom). Overall, this more recent method delivers more consistent results than the other baselines.
Across all compared methods we observe a high level of agreement on unchanged areas (Figure 10, middle), even though unsupervised methods can of course fail to recognise whether an unchanged area contains coconut plantings. Overall, it was to be expected that the main problem of unsupervised change detection for damage assessment is false alarms due to land-cover changes not associated with coconut trees. But we also observe a significant volume of false negatives (missed changes).

6. Discussion

6.1. Sentinel-2 for Damage Detection

In this work we have used multi-spectral, optical satellite images from the Sentinel-2 constellation to detect damage to coconut trees. We point out that our estimates correspond to a “cumulative net loss” of species-specific canopy cover. Consequently, our method can detect a wide range of changes to coconut trees, but cannot differentiate precisely between a small number of completely denuded or uprooted trees and a larger number of trees with only slightly thinned-out canopies. We use four months of data after the typhoon to compensate for the high cloud coverage. We further alleviate the problem of erroneous predictions by explicitly modelling uncertainty and using only high-confidence estimates, while maintaining a good recall (>80% overall and >98% in areas with coconut plantations).
Although not having 100% recall could lead to a slight underestimation of the damages, we did not observe a systematic effect in our validation areas. It appears that, by retaining only sufficiently reliable estimates, we obtain a map that still contains the large majority of changes, but is a lot less affected by noise, as demonstrated by the clearly reduced MAE in our validation areas.

6.2. SAR Sensors for Damage Estimation

A common alternative for mapping the impact of natural hazards is SAR imagery [11]. SAR data have the advantage that there are no data gaps due to clouds, which are a main challenge for optical remote sensing, particularly in the tropics. Indeed, for our study we rely on four months of data to gather enough cloud-free Sentinel-2 images. It remains unclear whether a SAR sensor (and with which bands or polarisations) could identify changes in the tree foliage and serve as a basis for density estimation; this is an open question for future investigation. On the other hand, it is an advantage of our model that it does not mainly observe tree stems but the actual foliage, and can therefore directly retrieve canopy changes, not only the removal (or planting) of entire trees. This capability is crucial for our application, since the large majority of affected coconut trees suffer damage only to the canopy, which is likely to affect their health and yield, while their stems remain in place.

6.3. Limitations of Our Study

Assessing subtle damage to the foliage of coconut trees is difficult. Collecting data on the ground via field studies would yield the most accurate reference data, but does not scale to large areas and is too time-consuming to be applied directly after an event. Furthermore, after typhoons and similar events the affected regions are often hard to access. We resorted to assessing the performance of our model with the help of commercial VHR satellite imagery of several sample regions scattered across the impact zone. We manually analysed those images and assessed specific changes at 87 diverse locations. The results show that observable changes can be detected with a very high performance (88.9% accuracy), but subtle changes are hard to recognise and presumably were not always detected.
Another current limitation is that the correlation between reduced canopy cover and yield is not well understood. Although one can expect reduced foliage to have an impact on overall tree health and ultimately on yield, and leaves and unripe fruit are often torn off together, the precise relationship is unknown and requires a more thorough investigation. We note that the relation between plant health and yield has been observed for other crops like wheat [73], parsley [74] and almonds [75].

7. Conclusions

We have assessed the damage caused by Typhoon Goni to coconut trees in the Philippines. Our image analysis is based on canopy density estimation in multi-spectral Sentinel-2 images, which is able to recognise a wide range of damages to the tree canopies. Accurate mapping of reduced and damaged foliage may help to anticipate yield losses in the mid and long term that may impact the livelihood and well-being of coconut producers (many of whom are smallholder farmers).
Overall, we estimated that 14.1 M coconut trees were affected by typhoon Goni inside our study area. We observed a larger impact on provinces that were directly under the typhoon’s path, including Camarines Sur, Quezon and northern Samar, all of which are home to large coconut plantations.
While we rely on optical satellite imagery to map even small damages to the trees’ foliage, our method is challenged by cloud cover. We believe that a promising direction for future research is the combination of optical and SAR satellite imagery. Some research already exists that advocates the joint usage of SAR and optical sensors [76,77], which could possibly allow us to combine the best of both worlds: robustness against clouds and sensitivity to subtle changes of foliage.
With climate change increasing the likelihood and severity of natural hazards like typhoons [78], methods that allow for large-scale, consistent damage assessment will play an important role in supporting mitigation measures against negative impacts on the local population. Our change estimates could, for example, be used to predict future economic impacts and ultimately the livelihood of coconut farmers. Studies analysing the mid- and long-term impact of similar events have been published after typhoon Haiyan in 2013 [5,31,33], but took several years to complete. We hope that the large-scale, high-resolution maps that our method can generate in a short time will support stakeholders with quantitative evidence to make better-informed decisions with less time lag.

Author Contributions

A.C.R. was responsible for the design and development of the methods, for running the experiments, and for writing the paper. R.C.D., S.D., K.S. and J.D.W. were responsible for creating the research design, for writing the paper, and for editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from Barry-Callebaut AG as part of a research agreement.

Data Availability Statement

The change maps are available online at https://doi.org/10.5281/zenodo.5595921 (accessed in October 2021). The manually labelled reference data are available on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. United Nations Office for the Coordination of Humanitarian Affairs. Philippines: Super Typhoon Goni (Rolly) Humanitarian Needs and Priorities (Nov 2020–April 2021). 2020. Available online: https://reliefweb.int/report/philippines/philippines-super-typhoon-goni-rolly-humanitarian-needs-and-priorities-nov-2020 (accessed on 30 November 2020).
2. Aon plc. Global Catastrophe Recap November 2020. 2020. Available online: http://thoughtleadership.aon.com/documents/20201210_analytics-if-november-global-recap.pdf (accessed on 30 June 2021).
3. International Federation of Red Cross and Red Crescent Societies. Operation Update Report: Philippines: Floods and Typhoons 2020 (Typhoon Goni); Technical Report; IFRC: Geneva, Switzerland, 2021.
4. Department of Agriculture, Philippines. DA Allots P8.5 B to Enable Typhoon-Affected Farmers, Fishers Recover, Start Anew. November 2020. Available online: https://www.da.gov.ph/da-allots-p8-5-b-to-enable-typhoon-affected-farmers-fishers-recover-start-anew/ (accessed on 30 June 2021).
5. Seriño, M.N.V.; Cavero, J.A.; Cuizon, J.; Ratilla, T.C.; Ramoneda, B.M.; Bellezas, M.H.I.; Ceniza, M.J.C. Impact of the 2013 super typhoon Haiyan on the livelihood of small-scale coconut farmers in Leyte island, Philippines. Int. J. Disaster Risk Reduct. 2021, 52, 101939.
6. Elmer Abonales, R.T.S. Typhoon Yolanda Coconut Damage Report; Philippine Coconut Authority (PCA): Leyte, Philippines, 2013.
7. Philippine Coconut Authority. Initial Report on Damage to Coconut by Tropical Storm Urduja & Typhoon Vinta. December 2017. Available online: https://pca.gov.ph/index.php/about-us/overview/10-news/110-initial-report-on-damage-to-coconut-by-tropical-storm-urduja-typhoon-vinta (accessed on 30 June 2021).
8. Schumann, G.; Kirschbaum, D.; Anderson, E.; Rashid, K. Role of Earth Observation Data in Disaster Response and Recovery: From Science to Capacity Building. In Earth Science Satellite Applications: Current and Future Prospects; Hossain, F., Ed.; Springer International Publishing: Cham, Switzerland, 2016; pp. 119–146.
9. van den Homberg, M.; Monné, R.; Spruit, M. Bridging the information gap of disaster responders by optimizing data selection using cost and quality. Comput. Geosci. 2018, 120, 60–72.
10. Varghese, A.O.; Suryavanshi, A.; Joshi, A.K. Analysis of different polarimetric target decomposition methods in forest density classification using C band SAR data. Int. J. Remote Sens. 2016, 37, 694–709.
11. Gong, M.; Zhao, J.; Liu, J.; Miao, Q.; Jiao, L. Change Detection in Synthetic Aperture Radar Images Based on Deep Neural Networks. IEEE Trans. Neural Networks Learn. Syst. 2016, 27, 125–138.
12. Turkoglu, M.O.; D’Aronco, S.; Perich, G.; Liebisch, F.; Streit, C.; Schindler, K.; Wegner, J.D. Crop mapping from image time series: Deep learning with multi-scale label hierarchies. Remote Sens. Environ. 2021, 264, 112603.
13. Rußwurm, M.; Körner, M. Self-attention for raw optical satellite time series classification. ISPRS J. Photogramm. Remote Sens. 2020, 169, 421–435.
14. Descals, A.; Szantoi, Z.; Meijaard, E.; Sutikno, H.; Rindanata, G.; Wich, S. Oil Palm (Elaeis guineensis) Mapping with Details: Smallholder versus Industrial Plantations and their Extent in Riau, Sumatra. Remote Sens. 2019, 11, 2590.
15. Rodríguez, A.C.; D’Aronco, S.; Schindler, K.; Wegner, J.D. Mapping oil palm density at country scale: An active learning approach. Remote Sens. Environ. 2021, 261, 112479.
16. Lang, N.; Schindler, K.; Wegner, J.D. Country-wide high-resolution vegetation height mapping with Sentinel-2. Remote Sens. Environ. 2019, 233, 111347.
17. Rodriguez, A.C.; Wegner, J.D. Counting the uncountable: Deep semantic density estimation from space. In German Conference on Pattern Recognition; Springer: Stuttgart, Germany, 2018; pp. 351–362.
18. Holland, G.; Bruyère, C.L. Recent intense hurricane response to global climate change. Clim. Dyn. 2014, 42, 617–627.
19. Kousky, C. Informing climate adaptation: A review of the economic costs of natural disasters. Energy Econ. 2014, 46, 576–592.
20. Li, Q.; Wang, C.; Zhang, H. A probabilistic framework for hurricane damage assessment considering non-stationarity and correlation in hurricane actions. Struct. Saf. 2016, 59, 108–117.
21. Wang, W.; Qu, J.J.; Hao, X.; Liu, Y.; Stanturf, J.A. Post-hurricane forest damage assessment using satellite remote sensing. Agric. For. Meteorol. 2010, 150, 122–132.
22. Mondal, D.; Chowdhury, S.; Basu, D. Role of Non Governmental Organization in Disaster Management. Res. J. Agric. Sci. 2015, 6, 1485–1489.
23. de Waal, A.; Hilhorst, D.; Chan, E.Y.Y. Public Health Humanitarian Responses to Natural Disasters; Routledge: London, UK, 2017.
24. Neumayer, E.; Plümper, T.; Barthel, F. The political economy of natural disaster damage. Glob. Environ. Chang. 2014, 24, 8–19.
25. Boccardo, P.; Giulio Tonolo, F. Remote Sensing Role in Emergency Mapping for Disaster Response. In Engineering Geology for Society and Territory—Volume 5; Lollino, G., Manconi, A., Guzzetti, F., Culshaw, M., Bobrowsky, P., Luino, F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 17–24.
26. Sun, W.; Bocchini, P.; Davison, B.D. Applications of artificial intelligence for disaster management. Nat. Hazards 2020, 103, 2631–2689.
27. Dutta, R.; Aryal, J.; Das, A.; Kirkpatrick, J.B. Deep cognitive imaging systems enable estimation of continental-scale fire incidence from climate data. Sci. Rep. 2013, 3, 3188.
28. Jiang, S.; Friedland, C.J. Automatic urban debris zone extraction from post-hurricane very high-resolution satellite and aerial imagery. Geomat. Nat. Hazards Risk 2016, 7, 933–952.
29. Hamdi, Z.M.; Brandmeier, M.; Straub, C. Forest Damage Assessment Using Deep Learning on High Resolution Remote Sensing Data. Remote Sens. 2019, 11, 1976.
30. Rudner, T.G.; Rußwurm, M.; Fil, J.; Pelich, R.; Bischke, B.; Kopačková, V.; Biliński, P. Multi3Net: Segmenting flooded buildings via fusion of multiresolution, multisensor, and multitemporal satellite imagery. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27–28 January 2019; Volume 33, pp. 702–709.
31. Sheykhmousa, M.; Kerle, N.; Kuffer, M.; Ghaffarian, S. Post-Disaster Recovery Assessment with Machine Learning-Derived Land Cover and Land Use Information. Remote Sens. 2019, 11, 1174.
32. Xu, J.Z.; Lu, W.; Li, Z.; Khaitan, P.; Zaytseva, V. Building Damage Detection in Satellite Imagery Using Convolutional Neural Networks. arXiv 2019, arXiv:1910.06444.
33. Ghaffarian, S.; Rezaie Farhadabad, A.; Kerle, N. Post-Disaster Recovery Monitoring with Google Earth Engine. Appl. Sci. 2020, 10, 4574.
34. Zhao, Q.; Chen, Z.; Liu, C.; Luo, N. Extracting and classifying typhoon disaster information based on volunteered geographic information from Chinese Sina microblog. Concurr. Comput. Pract. Exp. 2019, 31, e4910.
35. Escobedo, F.J.; Luley, C.J.; Bond, J.; Staudhammer, C.; Bartel, C. Hurricane debris and damage assessment for Florida urban forests. J. Arboric. 2009, 35, 100.
36. Bai, Y.; Gao, C.; Singh, S.; Koch, M.; Adriano, B.; Mas, E.; Koshimura, S. A Framework of Rapid Regional Tsunami Damage Recognition From Post-event TerraSAR-X Imagery Using Deep Neural Networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 43–47.
37. Ireland, G.; Volpi, M.; Petropoulos, G.P. Examining the Capability of Supervised Machine Learning Classifiers in Extracting Flooded Areas from Landsat TM Imagery: A Case Study from a Mediterranean Flood. Remote Sens. 2015, 7, 3372–3399.
38. Yang, L.; Cervone, G. Analysis of remote sensing imagery for disaster assessment using deep learning: A case study of flooding event. Soft Comput. 2019, 23, 13393–13408.
39. Cooner, A.J.; Shao, Y.; Campbell, J.B. Detection of Urban Damage Using Remote Sensing and Machine Learning Algorithms: Revisiting the 2010 Haiti Earthquake. Remote Sens. 2016, 8, 868.
40. Kovordányi, R.; Roy, C. Cyclone track forecasting based on satellite images using artificial neural networks. ISPRS J. Photogramm. Remote Sens. 2009, 64, 513–521.
41. Vetrivel, A.; Kerle, N.; Gerke, M.; Nex, F.; Vosselman, G. Towards automated satellite image segmentation and classification for assessing disaster damage using data-specific features with incremental learning. In Proceedings of the Geographic Object Based Image Analysis (GEOBIA), Enschede, The Netherlands, 14–16 September 2016.
42. Vetrivel, A.; Gerke, M.; Kerle, N.; Nex, F.; Vosselman, G. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning. ISPRS J. Photogramm. Remote Sens. 2018, 140, 45–59.
43. Gupta, R.; Hosfelt, R.; Sajeev, S.; Patel, N.; Goodman, B.; Doshi, J.; Heim, E.; Choset, H.; Gaston, M. xBD: A dataset for assessing building damage from satellite imagery. arXiv 2019, arXiv:1911.09296.
44. Bejiga, M.B.; Zeggada, A.; Nouffidj, A.; Melgani, F. A Convolutional Neural Network Approach for Assisting Avalanche Search and Rescue Operations with UAV Imagery. Remote Sens. 2017, 9, 100.
45. Daudt, R.C.; Saux, B.L.; Boulch, A.; Gousseau, Y. Multitask Learning for Large-scale Semantic Change Detection. Comput. Vis. Image Underst. 2019, 187, 102783.
46. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36.
47. Jasinski, M. Estimation of subpixel vegetation density of natural regions using satellite multispectral imagery. IEEE Trans. Geosci. Remote Sens. 1996, 34, 804–813.
48. Carlson, T.N.; Ripley, D.A. On the relation between NDVI, fractional vegetation cover, and leaf area index. Remote Sens. Environ. 1997, 62, 241–252.
49. Hansen, M.C.; DeFries, R.S.; Townshend, J.R.G.; Carroll, M.; Dimiceli, C.; Sohlberg, R.A. Global Percent Tree Cover at a Spatial Resolution of 500 Meters: First Results of the MODIS Vegetation Continuous Fields Algorithm. Earth Interact. 2003, 7, 1–15.
50. Agapiou, A. Estimating Proportion of Vegetation Cover at the Vicinity of Archaeological Sites Using Sentinel-1 and -2 Data, Supplemented by Crowdsourced OpenStreetMap Geodata. Appl. Sci. 2020, 10, 4764.
51. Melville, B.; Lucieer, A.; Aryal, J. Classification of lowland native grassland communities using hyperspectral Unmanned Aircraft System (UAS) Imagery in the Tasmanian midlands. Drones 2019, 3, 5.
52. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R.; et al. High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 2013, 342, 850–853.
53. Khan, S.H.; He, X.; Porikli, F.; Bennamoun, M. Forest Change Detection in Incomplete Satellite Images With Deep Neural Networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5407–5423.
54. Hunt, M.L.; Blackburn, G.A.; Carrasco, L.; Redhead, J.W.; Rowland, C.S. High resolution wheat yield mapping using Sentinel-2. Remote Sens. Environ. 2019, 233, 111410.
  55. You, J.; Li, X.; Low, M.; Lobell, D.; Ermon, S. Deep Gaussian Process for Crop Yield Prediction Based on Remote Sensing Data. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar]
  56. Xu, Y.; Yu, L.; Li, W.; Ciais, P.; Cheng, Y.; Gong, P. Annual oil palm plantation maps in Malaysia and Indonesia from 2001 to 2016. Earth Syst. Sci. Data 2020, 12, 847–867. [Google Scholar] [CrossRef] [Green Version]
  57. Singh, A. Review Article Digital Change Detection Techniques Using Remotely-sensed Data. Int. J. Remote Sens. 1989, 10, 989–1003. [Google Scholar] [CrossRef] [Green Version]
  58. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change Detection from Remotely Sensed Images: From Pixel-based to Object-based Approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  59. Caye Daudt, R. Convolutional Neural Networks for Change Analysis in Earth Observation Images with NOISY labels and Domain Shifts. Ph.D. Thesis, Institut Polytechnique de Paris, Paris, France, 2020. [Google Scholar]
  60. Celik, T. Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776. [Google Scholar] [CrossRef]
  61. Du, B.; Ru, L.; Wu, C.; Zhang, L. Unsupervised deep slow feature analysis for change detection in multi-temporal remote sensing images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9976–9992. [Google Scholar] [CrossRef] [Green Version]
  62. Holgado Alvarez, J.L.; Ravanbakhsh, M.; Demir, B. S2-CGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2515–2518. [Google Scholar] [CrossRef]
  63. Singh, S.; Talwar, R. Review on different change vector analysis algorithms based change detection techniques. In Proceedings of the 2013 IEEE Second International Conference on Image Information Processing (ICIIP-2013), Shimla, India, 9–11 December 2013; pp. 136–141. [Google Scholar] [CrossRef]
  64. Melgani, F.; Moser, G.; Serpico, S.B. Unsupervised change detection methods for remote sensing images. In Image and Signal Processing for Remote Sensing VII; Serpico, S.B., Ed.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2002; Volume 4541, pp. 211–222. [Google Scholar]
  65. Knapp, K.; Diamond, H.; Kossin, J.; Kruk, M.; Schreck, C. International Best Track Archive for Climate Stewardship (IBTRACS) Project, Version 4; National Centers for Environmental Information, NESDIS, NOAA, U.S. Department of Commerce: Washington, DC, USA, 2018. [Google Scholar]
  66. Kendall, A.; Gal, Y. What uncertainties do we need in bayesian deep learning for computer vision? In Proceedings of the 31st Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5574–5584. [Google Scholar]
  67. Ovadia, Y.; Fertig, E.; Ren, J.; Nado, Z.; Sculley, D.; Nowozin, S.; Dillon, J.; Lakshminarayanan, B.; Snoek, J. Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 13991–14002. [Google Scholar]
  68. Malila, W.A. Change vector analysis: An approach for detecting forest changes with Landsat. In LARS Symposia; Institute of Electrical and Electronics Engineers: New York, NY, USA, 1980; p. 385. [Google Scholar]
  69. Saha, S.; Bovolo, F.; Bruzzone, L. Unsupervised deep change vector analysis for multiple-change detection in VHR images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3677–3693. [Google Scholar] [CrossRef]
  70. Wiskott, L.; Sejnowski, T.J. Slow feature analysis: Unsupervised learning of invariances. Neural Comput. 2002, 14, 715–770. [Google Scholar] [CrossRef] [PubMed]
  71. Wu, C.; Chen, H.; Du, B.; Zhang, L. Unsupervised Change Detection in Multitemporal VHR Images Based on Deep Kernel PCA Convolutional Mapping Network. IEEE Trans. Cybern. 2021, 1–15. [Google Scholar] [CrossRef]
  72. Louis, J.; Debaecker, V.; Pflug, B.; Main-Knorn, M.; Bieniarz, J.; Mueller-Wilm, U.; Cadau, E.; Gascon, F. Sentinel-2 Sen2Cor: L2A processor for users. In Living Planet Symposium 2016; Spacebooks Online: Prague, Czech Republic, 2016; pp. 1–8. [Google Scholar]
  73. Hoá’ná, D. Development and structure of foliage in wheat stands of different density. Biol. Plant. 1967, 9, 424–438. [Google Scholar] [CrossRef]
  74. Petropoulos, S.A.; Daferera, D.; Polissiou, M.; Passam, H. The effect of water deficit stress on the growth, yield and composition of essential oils of parsley. Sci. Hortic. 2008, 115, 393–397. [Google Scholar] [CrossRef]
  75. Underwood, J.P.; Hung, C.; Whelan, B.; Sukkarieh, S. Mapping almond orchard canopy volume, flowers, fruit and yield using lidar and vision sensors. Comput. Electron. Agric. 2016, 130, 83–96. [Google Scholar] [CrossRef]
  76. Kulkarni, S.C.; Rege, P.P. Pixel level fusion techniques for SAR and optical images: A review. Inf. Fusion 2020, 59, 13–29. [Google Scholar] [CrossRef]
  77. Schmitt, M.; Hughes, L.; Zhu, X. The sen1-2 dataset for deep learning in sar-optical data fusion. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, IV-1, 141–146. [Google Scholar] [CrossRef] [Green Version]
  78. Lin, N.; Emanuel, K.; Oppenheimer, M.; Vanmarcke, E. Physically based assessment of hurricane surge threat under climate change. Nat. Clim. Chang. 2012, 2, 462–467. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Overview of estimated coconut tree counts per province in 2020 and Typhoon Goni’s path. Provinces with white boundaries represent our area of study.
Figure 2. Overview of our method. We use active learning for efficient labelling to train a model for coconut tree density estimation. Damage estimation is performed using images from before and after the typhoon's passage. After a robustness and change analysis, we predict the following change classes: damaged, new or unchanged coconut, and non-coconut pixels.
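To make the final step of Figure 2 concrete, the sketch below shows one plausible per-pixel decision rule for deriving the four change classes from coconut density maps before and after the typhoon. It is a minimal illustration only: the thresholds min_density and rel_change, and the rule itself, are assumptions for this sketch, not the calibrated procedure of the paper.

```python
import numpy as np

def change_classes(d_before, d_after, min_density=5.0, rel_change=0.4):
    """Label pixels as damaged / new / unchanged coconut or non-coconut
    from per-pixel density maps in trees/ha. Thresholds are illustrative."""
    coco_before = d_before >= min_density   # coconut present before
    coco_after = d_after >= min_density     # coconut present after
    # Relative density drop; guard against division by zero.
    drop = (d_before - d_after) / np.maximum(d_before, 1e-6)

    labels = np.full(d_before.shape, "non-coconut", dtype=object)
    labels[coco_before & (drop >= rel_change)] = "damaged"    # strong canopy loss
    labels[~coco_before & coco_after] = "new"                 # newly detected coconut
    labels[coco_before & coco_after & (np.abs(drop) < rel_change)] = "unchanged"
    return labels

before = np.array([[60.0, 2.0], [50.0, 0.0]])
after = np.array([[20.0, 1.0], [48.0, 30.0]])
print(change_classes(before, after))
# [['damaged' 'non-coconut']
#  ['unchanged' 'new']]
```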
Figure 3. Performance of coconut palm density estimation: predictions of our method (horizontal axis) against manually labelled validation data (vertical axis). Each green dot represents one block of 300 ha; both axes are in trees/ha. Blocks with densities below 1 tree/ha were trimmed for visualisation.
Figure 4. Qualitative example of coconut density estimation. Ground truth (GT) in the top row, prediction of our model in the bottom row. Numbers in parentheses give the total count of (intact) coconut trees in the area. Areas without GT (shown in gray) are excluded from both counts.
Figure 5. Overview of Typhoon Goni's estimated damage to coconut crops.
Figure 6. Left: examples of manually annotated validation points. Right: geographical overview of all the very high resolution (VHR) areas used for validation.
Figure 7. Comparison of density estimates with high-resolution reference data in northern Albay, Philippines. The before and after images were acquired on 9 May 2020 and 8 March 2021, respectively.
Figure 8. Comparison of density estimates with high-resolution reference data in Aurora, Philippines. The before and after images were acquired on 19 September 2020 and 24 December 2020, respectively.
Figure 9. Comparison of density estimates with high-resolution reference data in Samar, Philippines. The before and after images were acquired on 11 December 2018 and 3 February 2021, respectively.
Figure 10. Comparison of our method with unsupervised change detection methods DSFA [61], k-means [60] and S2-CGAN [62] in northern Albay (top) and Quezon (bottom).
Table 1. Mean absolute error (MAE) and recall for different levels of τ in the validation areas. The selected τ level is marked with an asterisk.
τ        Recall (%)    MAE (trees/ha)
0.10       46.68          1.08
0.20       64.92          2.16
0.40       95.50          5.33
0.45 *     98.20          5.91
0.50       99.39          6.26
2.00      100.00          6.50
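Table 1 captures the recall/MAE trade-off behind the choice of operating point. As a reproducibility aid, here is a minimal sketch assuming τ acts as a cut-off on the model's per-pixel predictive uncertainty, so that a larger τ retains more pixels (higher recall) at the cost of a higher MAE; the exact definition of τ is given in the paper's methods, and all arrays below are synthetic stand-ins.

```python
import numpy as np

def scan_tau(pred, sigma, gt, taus):
    """For each candidate tau, keep only pixels whose predictive uncertainty
    sigma is at most tau, then report the fraction of ground-truth coconut
    pixels retained (recall) and the MAE of the retained density estimates."""
    for tau in taus:
        keep = sigma <= tau
        recall = keep[gt > 0].mean()                 # fraction of coconut pixels kept
        mae = np.abs(pred[keep] - gt[keep]).mean()   # MAE over kept pixels only
        print(f"tau={tau:4.2f}  recall={recall:6.1%}  MAE={mae:.2f}")

# Synthetic stand-ins: errors grow with the (pseudo) uncertainty, so the
# recall/MAE trade-off of Table 1 emerges qualitatively, not numerically.
rng = np.random.default_rng(42)
shape = (512, 512)
pred = rng.uniform(0, 180, shape)       # predicted density, trees/ha
sigma = rng.uniform(0, 2.5, shape)      # pseudo per-pixel uncertainty
gt = np.maximum(pred + rng.normal(0, 3, shape) * sigma, 0)

scan_tau(pred, sigma, gt, [0.10, 0.20, 0.40, 0.45, 0.50, 2.00])
```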
Table 2. Top-15 provinces in the Philippines in terms of total coconut tree count. Total [%] represents the percentage with respect to the total coconut tree count of the area of study. Recall [%] is the percentage of covered area with respect to the land area of each province.
Province            Before [M]   After [M]   Relative Damage [%]   Total [%]   Recall [%]
Quezon                 12.2         9.3             23.7              20.9        76.8
Northern Samar          6.2         4.2             32.9              10.7        84.0
Camarines Sur           4.6         1.1             77.1               7.9        70.5
Sorsogon                4.5         4.8             −5.5               7.8        73.1
Eastern Samar           4.4         2.9             33.0               7.5        84.0
Masbate                 3.9         4.6            −18.4               6.6        96.0
Samar                   3.3         2.3             30.3               5.7        92.3
Leyte                   3.3         2.0             40.2               5.7        87.0
Albay                   2.4         1.1             52.8               4.1        89.3
Camarines Norte         2.4         1.3             44.1               4.1        52.3
Aurora                  1.8         1.4             21.7               3.1        82.7
Oriental Mindoro        1.3         1.3              2.1               2.2        62.4
Cebu                    1.1         1.1             −7.7               1.8        97.5
Biliran                 1.0         1.1             −6.0               1.7        90.5
Romblon                 1.0         1.1            −12.3               1.6        89.2
Table 3. Confusion matrices for the 87 validation points. For this comparison, our predicted classes new, unchanged and not coconut were merged into a single background class to match the outputs of pure change detection. C: change, N: no change, U: uncertain.
              Ours           GAN        k-Means      DSFA       DCVA       CVA        KPCA
              C    N    U    C    N     C    N      C    N     C    N     C    N     C    N
Coconut
  C (11)      9    1    1    0   11     3    8      0   11     0   11     3    8     3    8
  N (30)      3   22    5    7   23    18   12      0   30     7   23     8   22     0   30
Background
  (46)        0   43    3    0   46    29   17      0   46    10   36    14   32     4   42
Table 4. Validation results in %. Metrics were computed only at points whose true land cover is coconut, since the unsupervised baselines cannot identify coconut and also flag changes in the background, unlike our method.
              Ours    GAN    k-Means    DSFA    DCVA    CVA     KPCA
Accuracy      88.6    56.1    36.6      73.2    56.1    61.0     80.5
Recall        90.0     0.0    27.3       0.0     0.0    27.3     27.3
Precision     75.0     0.0    14.3       0.0     0.0    27.3    100.0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
