Article

Detection of Urban Damage Using Remote Sensing and Machine Learning Algorithms: Revisiting the 2010 Haiti Earthquake

Virginia Tech Department of Geography, 115 Major Williams Hall 220 Stanger St., Blacksburg, VA 24060, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(10), 868; https://doi.org/10.3390/rs8100868
Submission received: 5 August 2016 / Revised: 26 September 2016 / Accepted: 17 October 2016 / Published: 20 October 2016
(This article belongs to the Special Issue Earth Observations for Geohazards)

Abstract

Remote sensing continues to be an invaluable tool in earthquake damage assessments and emergency response. This study evaluates the effectiveness of multilayer feedforward neural networks, radial basis neural networks, and Random Forests in detecting earthquake damage caused by the 2010 Port-au-Prince, Haiti 7.0 moment magnitude (Mw) event. Additionally, textural and structural features including entropy, dissimilarity, Laplacian of Gaussian, and rectangular fit are investigated as key variables for high spatial resolution imagery classification. Our findings show that each of the algorithms achieved nearly a 90% kernel density match using the United Nations Operational Satellite Applications Programme (UNITAR/UNOSAT) dataset as validation. The multilayer feedforward network was able to achieve an error rate below 40% in detecting damaged buildings. Spatial features of texture and structure were far more important in algorithmic classification than spectral information, highlighting the potential for future implementation of machine learning algorithms which use panchromatic or pansharpened imagery alone.


1. Introduction

Earthquakes accounted for over 60% of all natural disaster-related deaths from 2001 to 2011, a danger that will likely increase due to rapid global urbanization [1]. Immediately after an earthquake occurs, satellite imagery is a critical component of damage mapping. Hussain et al. noted that “information derived from remote sensing data greatly helps the authorities in rescue and relief efforts, damage assessment, and the planning of remedial measures to safeguard such events effectively” [2]. For immediate rescue operations, damage maps derived from satellite imagery must be developed quickly. A study of the 1995 Kobe earthquake in Japan showed a drastic reduction in both the total number rescued and the proportion of survivors after the third day of recovery efforts [3,4]. However, because rapid mapping must balance immediacy with in-depth analysis, early mapping efforts often yield coarse damage assessments [5].
Remote sensing has been used widely to map the effects of major disasters such as earthquakes. Numerous studies have utilized electro-optical (EO), synthetic aperture radar (SAR), light detection and ranging (LiDAR), ancillary data, or a combination thereof for post-earthquake damage detection [1,5,6]. One technique involves fusing SAR and EO data for pixel-based damage detection. Stramondo et al. used a maximum likelihood (ML) classifier on SAR features derived from the European Remote Sensing mission in combination with EO data provided by the Indian Remote Sensing satellite in order to identify damaged structures following the 1999 Izmit, Turkey earthquake [7]. A similar approach combined SAR from the COSMO-SkyMed mission and very high resolution (VHR) EO data from the Quickbird satellite to improve damage detection at block level after combining the two datasets in a pixel-based classification following the 2009 L’Aquila earthquake [8].
Object-based image analysis (OBIA) has been used to detect earthquake damage from remote sensing since as early as 1998 [9]. More recently, OBIA has received continued attention in earthquake damage detection, with many studies employing unmanned aerial systems (UAS), LiDAR, and the popular image segmentation and classification software eCognition. Hussain et al. [2] fused GeoEye-1 VHR EO data and airborne LiDAR elevation models derived from the RIT-ImageCAT UAS for image segmentation using the Definiens (now eCognition) software suite. The data were classified using nearest neighbor and fuzzy membership sets to detect damaged buildings and rubble following the 2010 Haiti earthquake. Similarly, Pham et al. [6] used aerial VHR RGB composite and LiDAR data (also from the RIT-ImageCAT UAS) along with eCognition for object segmentation and damage detection.
The application of machine learning algorithms (MLAs) to earthquake damage detection is a relatively new area of study. MLAs actively adapt and learn the problem at hand, often mimicking natural or biological systems, instead of relying on statistical assumptions about data distribution [10]. In addition to overall improved accuracy [11,12], MLAs have several advantages compared to traditional classification and change detection methods. MLAs work with nonlinear datasets [11,13], learn from limited training data [12,14], and successfully solve difficult-to-distinguish classification problems [15].
Ito et al. [16] used learning vector quantization (LVQ), a type of artificial neural network (ANN), to classify SAR features signifying damage after the 1995 Kobe earthquake. Li et al. [17] used a two-class support vector machine (SVM) on pre- and post-earthquake Quickbird imagery along with spatial relations derived from the local indicator of spatial association (LISA) index to detect structures damaged by the Wenchuan earthquake of 2008. Haiyang et al. [18] utilized an SVM approach in combination with eCognition image segmentation on the RIT-ImageCAT RGB and LiDAR data, as well as the textural features of contrast, dissimilarity, and variance derived from the gray level co-occurrence matrix (GLCM), to detect urban damage in Port-au-Prince. Kaya et al. [19] used OBIA in combination with support vector selection and adaptation (a type of SVM) on pansharpened Quickbird imagery to conduct damage detection for specific buildings within Port-au-Prince after the 2010 earthquake. While OBIA using SVMs has been researched extensively, ANNs, particularly radial basis function neural networks (RBFNNs), and Random Forests (RF) have shown promise in pattern recognition and image classification [15,20] and have yet to be examined for earthquake damage detection. All three algorithms require a parameter-tuning process to achieve optimal performance, and a cross-validation approach can be applied to automate this tuning. SVM has an advantage in dealing with small sample size problems due to its sparse characteristics. However, for applications where a large number of training samples are available, SVM often yields a large number of support vectors, resulting in unnecessary complexity and long training times [21].
Evaluating structural dimensions such as the Laplacian of Gaussian (LoG) and object-based metrics in addition to spectral and textural information could greatly increase damage detection rates. LoG, a blob detection technique, has been used for medical applications in nuclei mapping [22] and for the detection of buildings in bitemporal images [23]. As discussed earlier, OBIA has shown strong results in urban scenes and earthquake damage detection. Huang and Zhang successfully applied the popular mean-shift segmentation algorithm for urban classification in hyperspectral scenes [24], while statistical region merging (SRM) is another segmentation approach that is robust to noise and occlusions [25]. Additionally, metrics such as rectangular fit, the morphological shadow index, and the morphological building index can describe the structure of objects in the scene before or after segmentation [26,27]. Applying structural descriptors such as a LoG filter and segmentation-derived metrics to high resolution satellite imagery as additional inputs to an MLA could reveal damage in difficult-to-detect scenarios such as a pancake collapse [22,28]. The robustness and generalizability of RF and ANNs, along with the additional dimensions of texture and structure, may provide higher accuracies in the face of imperfect input data.
In past disasters, by the time an automated change detection scheme is ready for implementation, a crowdsourced team of visual interpreters is already mapping damaged buildings [29]. Dong and Shan mention that while manual digitization of damaged structures requires trained image analysts and is unsuitable for large areas, “visual interpretation remains to be the most reliable and independent evaluation for automated methods” [1]. Additionally, many previous studies propose detection schemes that require ancillary data such as UAS products, LiDAR, or GIS databases. Many of these products are unavailable in developing regions, where the death toll is highest [30]. Using MLAs (RF and ANNs) along with derived textural and structural features, a rapid damage map built from readily available multispectral imagery could minimize the compromise between time and accuracy and allow first responders to allocate their resources more rapidly in a crisis. The main purposes of this study are to:
  • Assess the performance of neural networks (including radial basis function neural networks) and Random Forests on very high resolution satellite imagery in earthquake damage detection
  • Investigate the usefulness of structural feature identifiers, including the Laplacian of Gaussian and rectangular fit, in identifying damaged regions

2. Materials and Methods

2.1. Study Area and Data

The earthquake on 12 January 2010 near Port-au-Prince, Haiti, was an exceptionally devastating event. The initial shock of 7.0 Mw caused an astounding death toll: between 217,000 and 230,000 people were reported dead by the Haitian government [2], and the official estimate has since grown to 316,000 [31]. Additionally, the earthquake and its subsequent aftershocks caused extensive damage in and around Port-au-Prince, including numerous landmark buildings such as the National Palace [2]. Because of the high density of collapsed and damaged structures available for training and validation of MLAs, the 2010 Haiti earthquake is an ideal case study for evaluating automated damage detection methods.
For this research, we obtained high resolution multispectral and panchromatic remote sensing data from the DigitalGlobe Foundation. A pre-disaster panchromatic image was acquired in December 2009 by the WorldView-1 satellite (accessed via DigitalGlobe’s EnhancedView system), and post-disaster multispectral and panchromatic images were acquired on 15 January 2010 by the QuickBird-2 satellite. All datasets (see Table 1) were resampled to 0.6 m using the nearest neighbor technique. Images were atmospherically corrected to top of atmosphere (TOA) reflectance [32] and clipped to the study area of central Port-au-Prince. Pre- and post-earthquake imagery were coregistered using 15 control points and a third order polynomial transformation with a root mean square error (RMSE) of 0.55 m.
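As an illustration of this registration step, the following scikit-image sketch fits a third order polynomial transform to control point pairs and reports the RMSE. The point arrays are placeholders rather than the study's actual GCPs, and the original study does not state the software used for registration.

```python
import numpy as np
from skimage import transform

# Hypothetical GCP pairs standing in for the 15 manually selected control points
pre_pts = np.random.rand(15, 2) * 1000            # points on the pre-event (reference) grid
post_pts = pre_pts + np.random.randn(15, 2) * 2   # matching points in the post-event image

tform = transform.PolynomialTransform()
tform.estimate(pre_pts, post_pts, order=3)        # third order polynomial fit

residuals = tform(pre_pts) - post_pts
rmse = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))

# skimage's warp expects the inverse mapping (output -> input coordinates),
# which is exactly the pre -> post transform estimated above:
# post_registered = transform.warp(post_image, tform, order=0)  # nearest neighbor
```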
The study area (Figure 1) was divided into training and validation regions. Training samples for the algorithms were selected by manually digitizing polygons over damaged structures, ensuring that the training data represented the input space of the entire study area. A total of 1,214,623 undamaged and 134,327 damaged pixels were used for training. For validation purposes, over 900 buildings were digitized and marked as damaged or undamaged according to the UNOSAT Haiti damage assessment [33]. This dataset records damage as points classified according to the European Macroseismic Scale of 1998 (EMS-98) [34], a schema that defines five damage levels ranging from minor/no damage to destruction [1,5,6,35]. In order to simplify the damage detection problem, the UNOSAT points belonging to the most severe damage class (EMS grade V) were extracted for the validation sets and used for the damaged building class. This step is required primarily because visual distinction between classes is difficult even with sub-meter pixel resolution [1]. While these validation points have been used in several previous studies [6,36], it is important to note that the UNOSAT/UNITAR dataset was derived through manual interpretation of satellite and aerial imagery and very few points were ground verified. Nevertheless, the assessment remains the standard for validating damage that occurred in the 2010 Haiti earthquake.

2.2. Texture and Structure

Because several studies have shown that classification and change detection performance can be increased with the addition of spatial information [11,20], textural and structural information was extracted from the pre- and post-earthquake panchromatic images. Figure 2 shows the impact of earthquake damage on two selected textural and structural features. Entropy, energy, dissimilarity, and homogeneity are all second order texture features derived from the GLCM that have been correlated with damage or used as proxies for damage in previous studies [35,37,38,39]. In order to reduce dimensionality and eliminate redundancy, the two consistently correlated GLCM features of Entropy (a measure of gray level randomness) and Dissimilarity (a measure of local gray level difference, weighted linearly rather than quadratically as in contrast) were chosen as texture inputs.
$$\text{Entropy} = -\sum_{i,j} \text{GLCM}_{i,j} \times \log\left(\text{GLCM}_{i,j}\right)$$
$$\text{Dissimilarity} = \sum_{i,j} \text{GLCM}_{i,j} \times \left|i - j\right|$$
In order to reduce noise, a Gaussian filter (σ = 1) was applied to the panchromatic images before measuring image Entropy and Dissimilarity. A 7 × 7 sliding window was used to compute the 0° GLCM, and the corresponding texture values were calculated for both the pre- and post-earthquake panchromatic images.
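A minimal sketch of this texture pipeline is shown below, assuming a recent scikit-image (graycomatrix; older releases spell it greycomatrix) and an illustrative 32-level gray quantization that the text does not specify:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import graycomatrix

def texture_maps(pan, win=7, levels=32):
    """Per-pixel Entropy and Dissimilarity from a 0-degree GLCM in a sliding window."""
    smoothed = gaussian_filter(pan.astype(float), sigma=1.0)   # noise reduction
    # Quantize to a small number of gray levels to keep each window GLCM tractable
    edges = np.linspace(smoothed.min(), smoothed.max(), levels)
    q = np.clip(np.digitize(smoothed, edges) - 1, 0, levels - 1).astype(np.uint8)
    r = win // 2
    ent = np.zeros_like(smoothed)
    dis = np.zeros_like(smoothed)
    i, j = np.indices((levels, levels))
    # Naive loop for clarity; a production version would vectorize or parallelize
    for y in range(r, q.shape[0] - r):
        for x in range(r, q.shape[1] - r):
            glcm = graycomatrix(q[y - r:y + r + 1, x - r:x + r + 1],
                                distances=[1], angles=[0.0],
                                levels=levels, normed=True)[:, :, 0, 0]
            nz = glcm > 0
            ent[y, x] = -np.sum(glcm[nz] * np.log(glcm[nz]))   # gray level randomness
            dis[y, x] = np.sum(glcm * np.abs(i - j))           # gray level difference
    return ent, dis
```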
To define structural features, a Laplacian of Gaussian (LoG) filter was applied as it is one of the most commonly implemented methods of blob detection. A 2-dimensional LoG filter of size (x, y) can be constructed using:
$$\text{LoG}(x, y, \sigma) = -\frac{1}{\pi\sigma^4}\left[1 - \frac{x^2 + y^2}{2\sigma^2}\right] \times e^{-\frac{x^2 + y^2}{2\sigma^2}}$$
Because the Laplacian computes the second derivative, abruptly changing regions of an image will be highlighted. When combined with a Gaussian smoothing filter, blobs can be detected at different scales defined by σ = (r − 1)/3, where r is the radius of a blob of interest [22]. In order to detect buildings of different sizes, a multiscale approach is required. A total of 50 separate convolutions of the LoG filter were applied to the pre- and post-earthquake imagery, with σ adjusted at equal intervals between 15 and 35. Additionally, the LoG filter size was increased from a 10 × 10 to a 25 × 25 window in order to accommodate the larger σ values. The minimum response in scale space was assigned to each pixel because LoG filters produce negative responses for bright areas (the majority of buildings produced high reflectance in the panchromatic images).
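A compact sketch of this multiscale minimum response, assuming SciPy's gaussian_laplace (which sizes its kernel internally rather than using the explicit 10 × 10 to 25 × 25 windows described above):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def min_log_response(pan, sigmas=np.linspace(15, 35, 50)):
    """Minimum LoG response across 50 scales; bright blobs yield negative values."""
    stack = np.stack([gaussian_laplace(pan.astype(float), sigma=s) for s in sigmas])
    return stack.min(axis=0)   # per-pixel minimum in scale space
```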
Other structural identifiers can be derived through object-based image analysis (OBIA). While a number of geometric indices are possible through OBIA, this study chose rectangular fit as an input feature because of its expected correlation with building shape. The analysis of equal area rectangles, drawn according to object moments, has been used as a more robust version of minimum bounding rectangle comparison [26]. Rectangular fit is defined as
$$\text{RectFit} = \frac{A_R - A_D}{A_O}$$
where $A_O$ is the area of the original object, $A_R$ is the area of the equal area rectangle ($A_O = A_R$), and $A_D$ is the overlaid difference between the equal rectangle and the original object [26]. For this study, image segmentation was performed on both the pre- and post-earthquake panchromatic images using statistical region merging (SRM) as posited by Nock and Nielsen because of its fast and simple implementation [40]. Setting the scale parameter Q = 512 enabled the many small buildings in the scene to be distinguished. Each object’s rectangular fit was calculated and included as an input dimension.
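The rectangular fit metric can be sketched with shapely. Note that this stand-in derives the equal area rectangle by rescaling the minimum rotated rectangle rather than from object moments, and the SRM segmentation is assumed to have already produced per-object pixel lists:

```python
import numpy as np
from shapely.geometry import box
from shapely.ops import unary_union
from shapely.affinity import scale

def rectangular_fit(pixel_coords):
    """pixel_coords: iterable of (row, col) pixels belonging to one segment."""
    # Union of unit squares gives the object's footprint polygon (area = A_O)
    obj = unary_union([box(c - 0.5, r - 0.5, c + 0.5, r + 0.5)
                       for r, c in pixel_coords])
    mrr = obj.minimum_rotated_rectangle
    f = np.sqrt(obj.area / mrr.area)              # rescale so that A_R = A_O
    rect = scale(mrr, xfact=f, yfact=f, origin='centroid')
    a_d = rect.difference(obj).area               # overlaid difference A_D
    return (rect.area - a_d) / obj.area           # (A_R - A_D) / A_O
```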

2.3. Machine Learning Algorithms

Once all preprocessing steps were taken and textural and structural features were derived, the training and validation datasets were transformed into 14-dimensional arrays consisting of pre-panchromatic; pre-entropy; pre-dissimilarity; pre-LoG; pre-rectangular fit; post-panchromatic; post-entropy; post-dissimilarity; post-LoG; post-rectangular fit; blue; green; red; and near infrared multispectral layers (see Table 2). Values in these layers were normalized to fall between −1 and 1 in order to standardize the input and validation layers. While cross-validation was implemented for each MLA, exhaustive parameter analysis was not performed due to the large number of training samples available. All MLA design, algorithm implementation, training and validation were performed using MATLAB release 2015a on a 3.5 GHz quad core processor with 64 GB RAM.
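One plausible reading of this stacking and normalization step is sketched below; the per-layer min–max rescaling is an assumption, as the exact normalization formula is not stated:

```python
import numpy as np

def build_stack(layers):
    """layers: list of 14 equally sized 2-D arrays ordered as in Table 2."""
    stack = []
    for band in layers:
        b = band.astype(float)
        lo, hi = b.min(), b.max()
        span = hi - lo
        # Rescale each layer to [-1, 1]; a constant layer maps to zero
        stack.append(2 * (b - lo) / span - 1 if span else np.zeros_like(b))
    return np.stack(stack, axis=-1)   # shape: (rows, cols, 14)
```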

2.3.1. Neural Networks

Artificial neural networks (ANNs) are of continued interest due to their ability to approximate any function given sufficient neurons in each layer, their flexibility with respect to data distribution, and their reduced computational complexity compared with statistical classification methods [11,13,20,41]. A feedforward ANN (also known as a multilayer perceptron) assigns small random multiplicative weights and additive biases to input neurons and iteratively adjusts them with each additional training input, descending the surface of the performance function that represents the error between the training data and the network’s output in order to make the best classification [11,13,41]. Neural network design is difficult due to the number of free parameters and the ambiguous requirements for network depth and complexity, which depend upon the problem at hand [41]. As a result, neurons were grown and pruned in various combinations within one to three hidden layers until network performance was maximized. The resulting ANN consisted of two hidden layers of 20 neurons each with sigmoid transfer functions and a binary softmax output layer (see Figure 3). Training was accomplished using backpropagation and the scaled conjugate gradient algorithm to minimize cross-entropy, and two-fold cross-validation via early stopping (when the decrease of cross-entropy in a validation subset ceased) was used to prevent overfitting [41,42]. Finally, variable importance was measured by retraining the network an additional 14 times with each input dimension in turn set to zero for all data points and recording the change in cross-entropy after a set number of iterations (400).
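A scikit-learn stand-in for this architecture is sketched below. It is not the authors' MATLAB code: scikit-learn offers no scaled conjugate gradient solver, so 'adam' substitutes, and the 0.5 validation fraction only approximates the two-fold early stopping scheme:

```python
from sklearn.neural_network import MLPClassifier

ann = MLPClassifier(hidden_layer_sizes=(20, 20),   # two hidden layers of 20 neurons
                    activation='logistic',         # sigmoid transfer functions
                    solver='adam',                 # stand-in for scaled conjugate gradient
                    early_stopping=True,           # stop when validation loss stalls
                    validation_fraction=0.5,
                    max_iter=400)
# ann.fit(X_train, y_train)   # X_train: n_pixels x 14 features scaled to [-1, 1]

# Variable importance as described above: retrain with one input zeroed and
# record the change in loss, e.g.
# X_zeroed = X_train.copy(); X_zeroed[:, k] = 0
```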
An additional ANN which has been used for classification is the radial basis function neural network (RBFNN). The first layer of an RBFNN consists of basis functions centered throughout the input space, and the second layer consists of a transfer function which combines the results from the basis functions and classifies them into the categories of interest [41]. Similar to the backpropagation network design, the number of basis functions was determined through trial and error and grown until performance was maximized. The radial basis layer consisted of 150 Gaussian functions centered using an unsupervised k-means approach, which intelligently assigns cluster centers in areas of the input space where high activity exists [43,44]. The radial basis outputs fed into the second layer, which consisted of a softmax transfer function, trained by minimizing cross-entropy using the scaled conjugate gradient with early stopping as before.
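A minimal RBFNN sketch under the same caveats (a k-means-centered Gaussian layer followed by a softmax-style output; the Gaussian width heuristic below is an assumption, not taken from the study):

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class SimpleRBFNN:
    def __init__(self, n_centers=150):
        self.km = KMeans(n_clusters=n_centers, n_init=10)
        self.out = LogisticRegression(max_iter=1000)   # softmax-style output layer

    def _phi(self, X):
        # Gaussian activations of each sample against each k-means center
        d2 = cdist(X, self.centers, 'sqeuclidean')
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def fit(self, X, y):
        self.centers = self.km.fit(X).cluster_centers_
        self.sigma = pdist(self.centers).mean() / 2    # heuristic basis width
        self.out.fit(self._phi(X), y)
        return self

    def predict(self, X):
        return self.out.predict(self._phi(X))
```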

2.3.2. Random Forests

Random Forests, originally proposed by Breiman [45], is an ensemble classifier consisting of a large number of classification and regression trees (CART), where final classification is performed by a winner-takes-all voting system. The algorithm trains each tree on an independently drawn subset of the original data (bootstrap aggregating, or bagging) and determines the number of features to be used at each node by the evaluation of a random vector [45,46]. Because RF is an ensemble classifier, the Strong Law of Large Numbers dictates that RF will converge without overfitting the model, so the computationally optimal number of trees can be found by testing the algorithm. Additional benefits include robustness to outliers and noise and built-in estimates of error and variable importance [45,47]. Using MATLAB’s TreeBagger function, training was accomplished using 400 classification trees (additional tree growth resulted in no further decrease in out-of-bag error) grown by selecting three variables at random for each node split. Additionally, the out-of-bag error was collected and variable importance was measured for comparison with the ANN approach.
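A rough scikit-learn equivalent of this TreeBagger configuration is shown below; note that feature_importances_ is impurity-based, whereas TreeBagger reports a permutation-style importance, so the two measures are only loosely comparable:

```python
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=400,   # no OOB improvement beyond this
                            max_features=3,     # variables tried at each split
                            oob_score=True,     # built-in out-of-bag estimate
                            n_jobs=-1)
# rf.fit(X_train, y_train)
# oob_error = 1 - rf.oob_score_
# importance = rf.feature_importances_          # impurity-based importance
```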

2.4. Testing and Accuracy Assessment

After training, validation testing was performed on two different datasets. The first dataset consisted of an area including the National Palace, just to the south of the training area, where building footprints were previously digitized. For this testing area, output from the algorithms was converted to polygons and a building-by-building accuracy assessment was performed, resulting in a simple confusion matrix. This approach was taken in order to ensure that our accuracy measurements were based on actual objects in the scene rather than a pixel-by-pixel assessment. The second validation dataset included a larger area in order to test the algorithms’ performance using kernel density map rank matching. This secondary assessment followed the procedures used by Tiede et al. [36] and Pham et al. [6], in which the centroid points of the damage polygons are computed and kernel density raster maps are generated for both the test damage points and the UNOSAT/UNITAR control data points. The damage density for each dataset was projected onto a 20 m raster grid, and the rank value of each cell was computed according to the quartile of density that the cell belonged to. The two maps were overlaid and accuracy was assessed by subtracting the UNOSAT/UNITAR kernel density map from the test kernel density maps, with final output values ranging from −3 (omission error) to +3 (commission error).
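A hedged sketch of this rank comparison follows; the kernel bandwidth below is SciPy's default (Scott's rule), an assumption on our part, and the cited papers define the full operational procedure:

```python
import numpy as np
from scipy.stats import gaussian_kde

def rank_map(points, grid_x, grid_y):
    """Quartile rank (0-3) of damage-point density on a raster grid."""
    kde = gaussian_kde(points.T)                   # points: (n, 2) x/y centroids
    xx, yy = np.meshgrid(grid_x, grid_y)
    dens = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
    return np.digitize(dens, np.quantile(dens, [0.25, 0.5, 0.75]))

def density_comparison(test_pts, control_pts, grid_x, grid_y):
    diff = rank_map(test_pts, grid_x, grid_y) - rank_map(control_pts, grid_x, grid_y)
    match = np.mean(np.abs(diff) <= 1)             # fraction of adequately matched cells
    return diff, match                             # diff ranges from -3 to +3
```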

3. Results

Table 3 (supplemented by the confusion matrices in Appendix A) compares the overall performance of the three MLAs using building-by-building and kernel density accuracy assessment. Our findings showed that the multilayer feedforward ANN outperformed both the RBFNN and Random Forests with an omission error rate of 37.7%. Figure 4 matches the spatially explicit locations of damage detected by the ANN algorithm with the digitized buildings marked as damaged or undamaged (please refer to Appendix B for the other algorithms’ damage maps). Both the RBFNN and RF had higher overall accuracies, yet drastically underestimated damage: each achieved a high user’s accuracy for the damaged building class but a low producer’s accuracy. The feedforward ANN also had the shortest runtime, an advantage gained primarily through the extremely fast implementation of the ANN for testing. The kappa value for each of the algorithms indicated that the distribution of damaged and undamaged buildings could not be accounted for by random chance; however, both of the ANNs clearly outperformed Random Forests in this measure as well.
For the kernel density accuracy assessment, a good result was measured as a value in the comparison map that ranged between −1 and 1 in accordance with the Tiede et al. [36] approach. RBFNN reached a 90% kernel density map match (shown in Figure 5), outperforming the standard ANN and RF, which were not far behind. This indicates that the radial basis function ANN may be able to generalize to a larger area with greater success than either a feedforward network or Random Forests. Even so, each of the algorithms performed at higher accuracies when generalizing the distribution of damage over a wide area instead of detecting individual, building level damage.
In addition to examining an algorithm’s wide area generalizability, the kernel density accuracy assessment allowed for investigation into common areas of omission and commission error. One of the more interesting results was that these areas were common to all three algorithms. A shared error of commission occurred in the north-center of the test area and coincided with the development of an internally displaced persons (IDP) camp (see Figure 6). For the algorithms, this camp broke up the “structure” of the underlying field and increased the randomness of the texture, leading them to misclassify the area as damaged buildings. Figure 7 shows a common area of omission error in the central region of the testing area. The underlying cause of error in this region is the scene complexity and high density of small structures before the earthquake occurred. This preexisting randomness and highly variable structure and texture were difficult for the algorithms to interpret, leading to an error of omission. Even so, with kernel density matching occurring in nearly 90% of cells for each algorithm, wide area damage density classification was successful for all of the algorithms tested.
Finally, variable importance showed similar trends for the ANN and RF algorithms, although a few variables were utilized rather differently between the algorithms. Figure 8 shows the change in error for each variable and was developed by averaging the assigned variable importance between the pre- and post-earthquake datasets. Overall, the multispectral variables produced lower changes in out-of-bag error and cross-entropy in comparison to the textural and structural values; however, the panchromatic images and the near infrared band were useful to each of the algorithms. Of the two texture measures, entropy was utilized more than dissimilarity in all three algorithms. Rectangular fit was marked as important for both the ANN and RF, while the Laplacian of Gaussian filter was more impactful on the RBFNN assessment and also had a moderately high impact on RF performance.

4. Discussion

It is difficult to determine outright which algorithm is better. While the multilayer ANN outperformed RF in building-by-building assessments and required slightly less overall training time, it required a large number of training samples in order to perform well. This is not necessarily the case for the RF algorithm. Also, it is rather easy to overfit an ANN to the training data, which can be avoided with RF due to its nature as an ensemble classifier [41,45]. Finally, ANNs can become stuck at local minima in the performance surface without reaching the global solution, yielding an insufficient result [41]. However, in our study, the multilayer ANN had the lowest rates of error in detecting damaged buildings without sacrificing much performance in wide area generalization or overall accuracy. SVMs, while not examined here, have shown promise in earthquake damage detection in previous studies [17,18,19]. While our study focused on ANNs and RF, as little research has been done on their applicability to earthquake damage detection problems, future studies may investigate the performance of these algorithms (to include SVM) with respect to training sample size.
Beyond damage detection performance, practical considerations require an investigation into time complexity, particularly when considering any kind of operationalization of an algorithm for automatic damage detection. RF took much longer than either of the ANNs to train and test the datasets. The time complexity of a single classification and regression tree is O(mn log n), where m is the total number of variables and n is the number of samples [48]. Because RF is an ensemble classifier, the overall time complexity of Random Forests can be summarized as O(M·mn log n), where M is the number of trees grown. For a large number of samples with moderate dimensionality, this can be quite time consuming. In contrast, neural network complexity is highly dependent on network architecture. Time complexity for the scaled conjugate gradient algorithm is often polynomial; overall complexity is determined by the problem and the number of free parameters (weights, biases, or, in the case of the RBFNN, basis functions) required to describe that problem [49]. As such, testing showed that the ANNs trained and tested faster, which is important given the requirement to process a potentially large amount of imagery in an operational context.
As previously discussed, a number of preprocessing steps are required to develop each of the textural and structural dimensions. Also, unsupervised k-means clustering was used as part of the RBFNN algorithm to intelligently center the basis functions before training. Each of these steps adds time and complexity to the final product. For future studies, parallelization of many of these processes is one way to greatly reduce computational time. Our data were gathered using serial processing (primarily because a parallel implementation of k-means was not immediately available) in order to establish a baseline and fairly assess each algorithm; however, parallel implementations of both ANN and RF training, including those using graphics processing units (GPUs), are readily available and would greatly speed up training and implementation of these machine learning algorithms.
The results of this study matched expectations well: areas of imagery where texture and structure were broken up were often identified as damage. As mentioned in the results section, one interesting finding was that each of the algorithms erroneously identified IDP camp areas as building damage. These IDP camps are ad hoc structures (tents, tarps, and shanties) built primarily on open spaces. As the camps were erected, they broke up the coherent texture and structure of the underlying terrain, causing the algorithms to mark them as damage. While this is technically an error of commission, it is nevertheless a useful result, demonstrating the power of MLAs in seeking out patterns as well as their ability to simultaneously detect damage and displaced persons. In an operational context, the MLA results in combination with a priori knowledge of building distribution via a GIS database would allow first responders and emergency planners to easily distinguish damaged structures from these IDP camps.
As the experiment on variable importance showed, the textural and structural features were some of the most important factors allowing both the ANNs and RF to detect damage and IDP camps. Stramondo et al. [7] also found textural features important for earthquake damage detection, although their study used a maximum likelihood classifier. This line of thinking, paramount to computer vision applications, is expanded here by using more intelligent algorithms and readily available data. The importance of the panchromatic features, along with the texture and structure products derived exclusively from that panchromatic imagery, suggests that future implementations of machine learning may be able to perform earthquake damage detection from panchromatic imagery alone. One reason that the multispectral imagery was not a driving variable is simply resolution; the native 2.4 m is too coarse to detect many of the features associated with earthquake damage. Interestingly, the only multispectral product found to be an important variable for each of the algorithms was the near infrared band, which may result from a correlation between exposed rubble and higher near infrared reflectance. These findings may guide future research in determining which variables to focus on in earthquake damage studies.
Our focus on simple panchromatic imagery is a departure from many previous earthquake studies. The state of the art focuses on LiDAR, SAR, unmanned aerial vehicles, and the software suite eCognition [1]. However, the access and availability of these additional data sources may be limited in the aftermath of a destructive earthquake in a less developed region such as Haiti. A return to easily accessible data products such as multispectral or even panchromatic imagery alone could allow an MLA (potentially even a pre-trained one) to detect damage without the requirement of ancillary data. One potential disadvantage of the reliance on bitemporal VHR imagery is the requirement for precise coregistration, as different look angles can cause problems in classification and change detection. While image registration is still important to our study, a small look angle difference may not be critical due to our use of textural and structural features rather than the VHR imagery alone. Additionally, registration errors can be seen as a source of noise in the system, and each of the MLAs used here has been shown to be robust to noise [41,45]. The difference in our look angles (~7°) did not appear to cause any damage detection errors in a visual survey of our results. Future research may investigate the limits of acceptable look angle differences or use a complex coregistration approach to eliminate the issue altogether [50].
The future of earthquake damage detection may lie in deep convolutional neural networks (DCNNs) coupled with high performance computing and GPUs. DCNNs have already pushed the boundaries of artificial intelligence and image recognition; rather than being told which textural, structural, or spectral inputs should be used, these networks automatically learn and identify the defining features (convolutions) of the problem at hand in order to classify future samples [51,52]. Initial results are promising. Using MatConvNet (a deep learning library for MATLAB), we experimented with training a DCNN with the VGG-F architecture following the approach and hyperparameters described by Chatfield et al. [52,53]. We segmented the post-earthquake pansharpened image using SRM and trained the DCNN on each labeled, extracted object. The DCNN was not only able to detect buildings at a comparable rate (>55% detection rate) but was also able to distinguish between damage and IDP camps (>65% detection rate), and it did so using an after-only pansharpened image, reducing data requirements and eliminating the need for coregistration. A pre-trained DCNN optimized specifically for earthquake detection may offer a robust and operationally implementable solution to the much studied topic of earthquake damage detection.
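For readers outside the MATLAB ecosystem, the overall idea can be sketched in PyTorch. This is not the authors' MatConvNet/VGG-F pipeline; the architecture, chip size, and class set below are purely illustrative:

```python
import torch.nn as nn

class TinyDamageCNN(nn.Module):
    """Toy CNN classifying fixed-size object chips cut from a pansharpened image."""
    def __init__(self, n_classes=3):   # e.g., intact / damaged / IDP camp
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):              # x: (batch, 1, 64, 64) object chips
        return self.classifier(self.features(x))
```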

5. Conclusions

This study analyzed the use of machine learning algorithms, including feedforward neural networks, radial basis function neural networks, and Random Forests, in detecting earthquake damage caused by the 12 January 2010 event near Port-au-Prince, Haiti. The algorithms’ efficacy was improved by providing coregistered 0.6 m multitemporal imagery, texture features (dissimilarity and entropy), and structure features (Laplacian of Gaussian and rectangular fit) as inputs. Detection results were assessed on a structure-specific basis by digitizing more than 900 buildings and comparing each MLA’s response to the UNITAR/UNOSAT validation set. For wide area generalization, a kernel density map comparison was performed between each of the algorithms’ classification results and the UN damage validation points.
The feedforward ANN consisting of two hidden layers had the lowest error rate (<40%) without sacrificing much performance in overall accuracy or generalization to wider area damage estimates. Additionally, textural and structural features derived from panchromatic imagery were shown to be more important than spectral variables in the algorithms’ classification process. Each algorithm showed common errors of commission around the ad hoc IDP camps that formed spontaneously in open spaces following the earthquake; this technically incorrect result could easily be resolved by integrating a GIS layer containing building footprints.
The results of this study show not only that MLAs have potential for use in earthquake damage detection, but also that panchromatic or pansharpened imagery can be the exclusive data source for training and testing. Measures of variable importance found that the panchromatic-derived texture and structure products are the main drivers behind the success of these “shallow” machine learning algorithms. Future research into an operationally implementable machine learning method is warranted. An attractive next step is to transition into deep learning, where convolutional neural networks move beyond pixel-based or object-based paradigms and begin to detect remotely sensed features in ways akin to natural image recognition.

Acknowledgments

The authors would like to thank Stephen Prisley for his supporting comments and direction in the study. Also, we extend our gratitude to our peer-reviewers and thank them for their time and helpful comments and suggestions. We would like to thank Chris McCormick for his excellent tutorial on RBFNN and corresponding software which was modified for use here. The authors would also like to thank the DigitalGlobe Foundation for their critical support in providing QuickBird imagery and the Virginia Tech Open Access Subvention Fund for supporting this publication. Finally, we would like to thank Earl and Marion Nutter; without their entrusted scholarship this study would not be possible.

Author Contributions

Austin Cooner designed the experiment, coded the software, and wrote the manuscript. James Campbell provided direction and edited the manuscript. Yang Shao offered oversight, guidance, and edited the manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The DigitalGlobe Foundation and scholarship support had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1. Confusion matrices for each of the algorithms, including user's accuracy (UA) and producer's accuracy (PA).

                    Undamaged UNOSAT    Damaged UNOSAT    UA (%)
Undamaged ANN       498                 101               83.14
Damaged ANN         131                 167               56.04
ANN PA (%)          79.17               62.31

Undamaged RBFNN     575                 150               79.31
Damaged RBFNN       54                  118               68.60
RBFNN PA (%)        91.41               44.03

Undamaged RF        606                 191               76.04
Damaged RF          23                  77                77.00
RF PA (%)           96.34               28.73

Appendix B

Appendix B.1. Algorithmic Damage Map for the RBFNN Algorithm

Figure B1. Results of the RBFNN algorithm on the building test dataset. Direct output of the algorithm (pink) overlays the building categories, which are represented by shaded polygons (satellite image courtesy of the DigitalGlobe Foundation).

Appendix B.2. Algorithmic Damage Map for the Random Forests Algorithm

Figure B2. Results of the RF algorithm on the building test dataset. Direct output of the algorithm (pink) overlays the building categories, which are represented by shaded polygons (satellite image courtesy of the DigitalGlobe Foundation).

References

  1. Dong, L.; Shan, J. A Comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS J. Photogramm. Remote Sens. 2013, 84, 85–99. [Google Scholar] [CrossRef]
  2. Ural, S.; Hussain, E.; Kim, K.; Fu, C.; Shan, J. Building extraction and rubble mapping for city Port-au-Prince post-2010 earthquake with GeoEye-1 imagery and Lidar Data. Photogramm. Eng. Remote Sens. 2011, 77, 1011–1023. [Google Scholar] [CrossRef]
  3. Comfort, L.K. Self Organization in Disaster Response: The Great Hanshin, Japan Earthquake of January 17, 1995; Quick Response Report; Natural Hazards Research and Applications Information Center: Boulder, CO, USA, 1996. [Google Scholar]
  4. Smith, K. Environmental Hazards, 3rd ed.; Routledge: London, UK, 2001. [Google Scholar]
  5. Voigt, S.; Schneiderhan, T.; Twele, A.; Gähler, M.; Stein, E.; Mehl, H. Rapid damage assessment and situation mapping: learning from the 2010 Haiti earthquake. Photogramm. Eng. Remote Sens. 2011, 77, 923–931. [Google Scholar] [CrossRef]
  6. Pham, T.T.H.; Apparicio, P.; Gomez, C.; Weber, C.; Mathon, D. Towards a rapid automatic detection of building damage using remote sensing for disaster management: The 2010 Haiti earthquake. Disaster Prev. Manag. 2014, 23, 53–66. [Google Scholar] [CrossRef]
  7. Stramondo, S.; Bignami, C.; Chini, M.; Pierdicca, N.; Tertulliani, A. Satellite radar and optical remote sensing for earthquake damage detection: Results from different case studies. Int. J. Remote Sens. 2006, 27, 4433–4447. [Google Scholar] [CrossRef]
  8. Dell’Acqua, F.; Bignami, C.; Chini, M. Earthquake damages rapid mapping by satellite remote sensing data: L’Aquila April 6th, 2009 event. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 935–943. [Google Scholar] [CrossRef]
  9. Gamba, P.; Casciati, F. GIS and image understanding for near-real-time earthquake damage assessment. Photogramm. Eng. Remote Sens. 1998, 64, 987–994. [Google Scholar]
  10. Goldberg, D.; Holland, J. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  11. Xu, J.B.; Song, L.S.; Zhong, D.F.; Zhao, Z.Z.; Zhao, K. Remote sensing image classification based on a modified self-organizing neural network with a priori knowledge. Sens. Transducers 2013, 153, 29–36. [Google Scholar]
  12. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  13. Benediktsson, J.; Swain, P.H.; Ersoy, O.K. Neural network approaches versus statistical methods in classification of multisource remote sensing data. IEEE Trans. Geosci. Remote Sens. 1990, 28, 540–552. [Google Scholar] [CrossRef]
  14. Shao, Y.; Lunetta, R.S. Comparison of support vector machine, neural network, and CART algorithm for the land-cover classifcation using limited data points. ISPRS J. Photogramm. Remote Sens. 2012, 70, 78–87. [Google Scholar] [CrossRef]
  15. Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222. [Google Scholar] [CrossRef]
  16. Ito, Y.; Hosokawa, M.; Lee, H.; Liu, J.G. Extraction of damaged regions using SAR data and neural networks. Int. Arch. Photogramm. Remote Sens. 2000, 33, 156–163. [Google Scholar]
  17. Li, P.; Xu, H.; Liu, S.; Guo, J. Urban building damage detection from very high resolution imagery using one-class SVM and spatial relations. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009.
  18. Haiyang, Y.; Gang, C.; Xiaosan, G. Earthquake-collapsed building extraction from LiDAR and aerophotograph based on OBIA. In Proceedings of the 2nd International Conference on Information Science and Engineering, Hangzhou, China, 4–6 December 2010.
  19. Kaya, G.; Musaoglu, N.; Ersoy, O.K. Damage assessment of 2010 Haiti earthquake with post-earthquake satellite image by support vector selection and adaptation. Photogramm. Eng. Remote Sens. 2011, 77, 1025–1035. [Google Scholar] [CrossRef]
  20. Li, J.; Du, Q.; Li, Y. An efficient radial basis function neural network for hyperspectral remote sensing image classification. Soft Comput. 2015, 12, 1–7. [Google Scholar] [CrossRef]
  21. Duda, R.; Hart, P.; Stork, D. Pattern Classification, 2nd ed.; Wiley: New York, NY, USA, 2004. [Google Scholar]
  22. Kong, H.; Akakin, H.; Sarma, S. A generalized Laplacian of Gaussian filter for blob detection and its applications. IEEE Trans. Cyber. 2013, 43, 1719–1733. [Google Scholar] [CrossRef] [PubMed]
  23. Ilsever, M.; Ünsalan, C. Two-Dimensional Change Detection Methods; Springer: London, UK, 2012. [Google Scholar]
  24. Huang, X.; Zhang, L. An adaptive mean-shift analysis approach for object extraction and classification from urban hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2008, 46, 4173–4185. [Google Scholar] [CrossRef]
  25. Li, H.; Gu, H.; Han, Y.; Yang, J. An efficient multiscale SRMMHR (Statistical Region Merging and Minimum Heterogeneity Rule) segmentation method for high-resolution remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2009, 2, 67–73. [Google Scholar] [CrossRef]
  26. Sun, Z.; Fang, H.; Deng, M.; Chen, A.; Yue, P.; Di, L. Regular shape similarity index: A novel index for accurate extraction of regular objects from remote sensing images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3737–3748. [Google Scholar] [CrossRef]
  27. Huang, X.; Zhang, L. Morphological building/shadow index for building extraction from high-resolution imagery over urban areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 161–172. [Google Scholar] [CrossRef]
  28. Bignami, C.; Chini, M.; Stramondo, S.; Emery, W.J.; Pierdicca, N. Objects textural features sensitivity for earthquake damage mapping. In Proceedings of the Joint Urban Remote Sensing Event, Munich, Germany, 2011. [Google Scholar]
  29. Marshall, A. How Amateur Mappers Are Helping Recovery Efforts in Nepal. 2015. Available online: http://www.citylab.com/tech/2015/04/how-amateur-mappers-are-helping-recovery-efforts-in-nepal/391703/ (accessed on 15 October 2015).
  30. Kahn, M.E. The death toll from natural disasters: The role of income, geography, and institutions. Rev. Econ. Stat. 2005, 87, 271–284. [Google Scholar] [CrossRef]
  31. USGS. Earthquakes with 1000 or More Deaths 1900–2014. Available online: http://earthquake.usgs.gov/earthquakes/world/world_deaths.php (accessed on 19 October 2015).
  32. Krause, K. Radiometric Use of QuickBird Imagery; DigitalGlobe: Longmont, CO, USA, 2005. [Google Scholar]
  33. UNITAR/UNOSAT, EC JRC, and World Bank. Haiti Earthquake 2010: Remote Sensing Damage Assessment. Available online: http://www.unitar.org/unosat/haiti-earthquake-2010-remote-sensing-based-building-damage-assessment-data (accessed on 25 September 2015).
  34. Grünthal, G. European macroseismic scale 1998: EMS-98. Available online: http://www.franceseisme.fr/EMS98_Original_english.pdf (accessed on 20 October 2016).
  35. Dell’Acqua, F.; Gamba, P. Remote sensing and earthquake damage assessment: Experiences, limits, and perspectives. Proc. IEEE 2012, 100, 2876–2890. [Google Scholar] [CrossRef]
  36. Tiede, D.; Lang, S.; Füreder, P.; Hölbling, D.; Hoffmann, C.; Zeil, P. Automated damage indication for rapid geospatial reporting: An operational object based approach to damage density mapping following the 2010 Haiti earthquake. Photogramm. Eng. Remote Sens. 2011, 77, 1–10. [Google Scholar] [CrossRef]
  37. Gebejes, A.; Huertas, R. Texture characterization based on grey-level co-occurrence matrix. In Proceedings of the Conference on Informatics and Management Sciences, Chongqing, China, 16–19 November 2013.
  38. Miura, H.; Modorikawa, S.; Chen, S.H. Texture characteristics of high-resolution satellite images in damaged areas of the 2010 Haiti earthquake. In Proceedings of the 9th International Workshop on Remote Sensing for Disaster Response, Stanford, CA, USA, 15–16 September 2011.
  39. Tomowski, D.; Klonus, S.; Ehlers, M.; Michel, U.; Reinartz, P. Change visualization through a texture-based analysis approach for disaster applications. In Proceedings of the ISPRS TC VII Symposium, Vienna, Austria, 5–7 July 2010.
  40. Nock, R.; Nielsen, F. Statistical region merging. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1452–1458. [Google Scholar] [CrossRef] [PubMed]
  41. Hagan, M.; Demuth, H.B.; Beale, M.H.; De Jesús, O. Neural Network Design, 2nd ed.; PWS Publishing Company: Lexington, KY, USA, 2015. [Google Scholar]
  42. Shao, Y.; Taff, G.N.; Walsh, S.J. Comparison of early stop criteria for neural network-based sub-pixel classification. IEEE Geosci. Remote Sens. Lett. 2010, 8, 113–117. [Google Scholar] [CrossRef]
  43. Schwenker, F.; Kestler, H.; Palm, G. Three learning phases for radial-basis-function networks. Neural Netw. 2001, 14, 439–458. [Google Scholar] [CrossRef]
  44. McCormick, C. Radial Basis Function Network (RBFN). Available online: http://mccormickml.com/2013/08/15/radial-basis-function-network-rbfn-tutorial/ (accessed on 15 February 2016).
  45. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  46. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
  47. Geiss, C.; Pelizari, P.A.; Marconcini, M.; Sengara, W.; Edwards, M.; Lakes, T.; Taubenbock, H. Estimation of seismic building structural types using multi-sensor remote sensing and machine learning techniques. ISPRS J. Photogramm. Remote Sens. 2015, 104, 175–188. [Google Scholar] [CrossRef]
  48. Witten, I.; Frank, E.; Hall, M. Data Mining, 3rd ed.; Elsevier: Burlington, MA, USA, 2011. [Google Scholar]
  49. Moller, M.F. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 1993, 6, 525–533. [Google Scholar] [CrossRef]
  50. Leprince, S.; Barbot, S.; Ayoub, F.; Avouac, J. Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1529–1558. [Google Scholar] [CrossRef]
  51. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. In Proceedings of 26th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–8 December 2012.
  52. Chatfield, K.; Simonyan, K.; Vedaldi, A.; Zisserman, A. Return of the devil in the details: Delving deep into convolutional nets. In Proceedings of the British Machine Vision Conference, Bristol, UK, 9–13 September 2014.
  53. Vedaldi, A.; Lenc, K. MatConvNet—Convolutional neural networks for MATLAB. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015.
Figure 1. Study area in Port-au-Prince showing training, building test and kernel density test sites (satellite image courtesy of the DigitalGlobe Foundation).
Figure 2. This figure comparing building damage shows the effects of an entropy filter and a LoG filter on pre- and post-earthquake imagery considering two different collapse scenarios: (a) a normal structural collapse; (b) a pancake collapse. Pre-earthquake satellite images © copyright DigitalGlobe. Post-earthquake satellite images courtesy of the DigitalGlobe Foundation.
Figure 3. A simplified design layout of the feedforward ANN used for training and testing. For input layer detail, see Table 2. In reality, each of the twenty neurons in Hidden Layer 1 is connected to each of the twenty neurons in Hidden Layer 2.
Figure 4. Results of the feedforward ANN (highest performer) on the building test dataset. Direct output of the algorithm (pink) overlays the building categories, which are represented by shaded polygons (satellite image courtesy of the DigitalGlobe Foundation).
Figure 5. Results of the RBFNN algorithm on the kernel density test dataset. Each 20 m cell is marked from −3 to 3, where an adequate density match falls between −1 and 1 (satellite image courtesy of the DigitalGlobe Foundation).
Figure 6. Results of the RBFNN on the kernel density test dataset overlaid on before (a) and after (b) panchromatic imagery. This ad hoc IDP camp caused an error of commission in each of the algorithms. Pre-earthquake satellite image copyright © DigitalGlobe. Post-earthquake satellite image courtesy of the DigitalGlobe Foundation.
Figure 7. Results of the RBFNN on the kernel density test dataset overlaid on before (a) and after (b) panchromatic imagery. This area of complex small structures was a common error of omission among the algorithms. Pre-earthquake satellite image copyright © DigitalGlobe. Post-earthquake satellite image courtesy of the DigitalGlobe Foundation.
Figure 8. Chart of variable importance for both ANNs and RF. Overall, structure and texture played a larger role in classification than spectral information. Random Forests use the ΔOOB Error measure while ANN and RBFNN use ΔCross Entropy.
Table 1. Data used for the analysis. Note the similar look angles between the two satellite images used and the relatively short gap between the pre- and post-earthquake images.

Sensor                                                              Native Resolution                          Acquisition Date   Look Angle (°)
WorldView-1                                                         0.5 m panchromatic                         7 December 2009    27.62
QuickBird-2                                                         2.4 m multispectral, 0.6 m panchromatic    15 January 2010    20.7
Remote sensing damage assessment (UNITAR/UNOSAT, EC JRC, World Bank)   Vector (point)                          15 January 2010    N/A
Table 2. List of the input features and their corresponding data sources which were used as predictors for earthquake damage.

Input Feature                 Source
Panchromatic (450–900 nm)     WV1 and QB2
Entropy                       WV1 and QB2
Dissimilarity                 WV1 and QB2
LoG                           WV1 and QB2
Rectangular fit               WV1 and QB2
Blue (450–520 nm)             QB2
Green (520–600 nm)            QB2
Red (630–690 nm)              QB2
Near infrared (760–900 nm)    QB2
Table 3. Results from the test datasets.

Algorithm   Train + Test Runtime (s)   Overall Accuracy (%)   Building Class Omission Error (%)   Cohen's Kappa   Kernel Density Match (%)
ANN         623                        74.14                  37.69                               0.402           87.41
RBFNN       1532                       77.26                  55.97                               0.3951          90.25
RF          8692                       76.14                  71.27                               0.3057          88.77
