Review

Advances in Rapid Damage Identification Methods for Post-Disaster Regional Buildings Based on Remote Sensing Images: A Survey

by Jiancheng Gu 1,†, Zhengtao Xie 1,†, Jiandong Zhang 1 and Xinhao He 2,*

1 Department of Civil Engineering, Nanjing Tech University, Nanjing 211816, China
2 Department of Civil and Environmental Engineering, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Miyagi, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Buildings 2024, 14(4), 898; https://doi.org/10.3390/buildings14040898
Submission received: 15 February 2024 / Revised: 9 March 2024 / Accepted: 24 March 2024 / Published: 26 March 2024
(This article belongs to the Section Construction Management, and Computers & Digitization)

Abstract:
After a disaster, ascertaining the operational state of extensive infrastructures and building clusters on a regional scale is critical for rapid decision-making and initial response. In this context, the use of remote sensing imagery has been acknowledged as a valuable adjunct to simulation model-based prediction methods. However, a key question arises: how can these images be linked to dependable assessment results, given their inherent incompleteness, suboptimal quality, and low resolution? This article comprehensively reviews methods for post-disaster building damage recognition through remote sensing, with particular emphasis on the challenges encountered in building damage detection and the various approaches attempted in response. We delineate the literature review process, the research workflow, and the critical areas of the present study. The analysis highlights the merits of image-based recognition methods, such as low cost, high efficiency, and extensive coverage. On this basis, the evolution of building damage recognition methods using post-disaster remote sensing images is categorized into three critical stages: the visual inspection stage, the pure algorithm stage, and the data-driven algorithm stage. Crucial advances in algorithms pertinent to the present research topic are comprehensively reviewed, with details on their motivation, key innovations, and quantified effectiveness as assessed on test data. Finally, a case study is performed in which seven state-of-the-art AI models are applied to sample sets of remote sensing images obtained from the 2024 Noto Peninsula earthquake in Japan and the 2023 Turkey earthquake. To facilitate a cohesive and thorough grasp of these algorithms in implementation and practical application, we discuss the analytical outcomes and highlight the characteristics of each method from the practitioner's perspective. Additionally, we propose recommendations to be considered in the development of more advanced algorithms.

1. Introduction

Survey reports from past disasters, including earthquakes, typhoons, hurricanes, floods, explosions, and wars, have consistently underscored the crucial need for swift damage assessment of infrastructures and building clusters on a regional scale, particularly in the context of initial response and decision-making. In this regard, leveraging technological advancements across various domains offers promise for establishing a comprehensive disaster response system, as illustrated in Figure 1. Compared to alternative approaches, such as visual inspection, built-in structural health monitoring systems, and simulation-based methods, which excel in delivering high-precision evaluation results, remote sensing techniques play a distinct role in terms of immediacy, directness, and regional coverage under compound disaster scenarios. This study primarily focuses on cutting-edge advancements in applying remote sensing for the rapid assessment of post-disaster damage in regional building clusters.
Damage detection techniques for buildings following disasters, particularly strong earthquakes, have been extensively explored with a focus on structural health monitoring. Conventional methods for building damage detection primarily relied on field inspection techniques, such as field surveys [1,2], vibration characteristics-based methods [3,4], finite element model updating methods [5,6,7,8], and the conventional short-time matrix pencil method [9]. While these methods offer high accuracy at the individual object level, they encounter challenges in terms of cost and management when dealing with extensive damage in large-scale building clusters [10]. Other studies have addressed this issue through predictive simulation. For instance, Mardiyono et al. combined sensors with finite element analysis to simulate building models, and then utilized artificial neural networks to predict building damage indices [11]. Chiauzzi et al. employed finite-layer simulation to establish new relationships between Housner and EMS-98 seismic intensity, applying damage probability matrices to estimate residential building damage levels [12]. However, from a practitioner's view, these methods, associated with elevated costs and demanding technical expertise, still face difficulties in widespread application. For instance, in the aftermath of the devastating 2008 Wenchuan earthquake and the 2023 Turkey earthquake, rescue workers were compelled to rely on labor-intensive on-site surveys characterized by low efficiency and effectiveness. This highlights the pressing need for simpler and more efficient techniques for addressing regional-scale challenges.
In contrast, Ogawa et al. relied solely on visual assessment of remote sensing images to provide initial estimates of building damage across entire regions [13]. Their work underscores the advantages of image-based recognition methods, which offer low cost, high efficiency, and extensive coverage, particularly during the initial response phase following large-scale disasters. Compared to other methods, this approach achieves significant advancements in resource utilization and prediction speed. However, a drawback is its reliance on the quality and completeness of the images, making it challenging to achieve high levels of precision.
Therefore, the improvement in the quality of remote sensing images has played a crucial role in the development of this discipline, particularly with the advancement and widespread use of Synthetic Aperture Radar (SAR) images, which have significantly propelled the field of remote sensing target identification [14]. Li et al. analyzed SAR images, considering the scattering of artificial targets and natural backgrounds [15]. They introduced the concept of Relative Spectral Smoothness (RSS) based on two-dimensional spectral analysis to measure the distance between observed data and manually generated noise, enabling a more comprehensive utilization of the semantic information present in SAR images. In the reference [16], a novel Scattering Model Guided Adversarial Attack (SMGAA) algorithm was proposed by Peng et al. This algorithm does not require careful tuning and effectively finds optimal parameters for adapting SAR classifiers, facilitating robust model training. A city-aware U-Net model was introduced, using dual-polarization Sentinel-1 multi-temporal intensity and coherence data to map flood extents in urban environments. The model was qualitatively evaluated and quantitatively analyzed using a flood case study in one city from Sentinel-1 data acquired across four research locations on three continents [17]. As a remote sensing image recognition subfield, these new technologies have laid a solid foundation for developing and optimizing building damage identification algorithms using remote sensing images.
It is concluded that post-disaster damage identification using remote sensing images hinges on advancements in image processing methods. Three distinct development stages in this discipline can be summarized: (1) the visual inspection stage, (2) the pure algorithm stage, and (3) the data-driven algorithm stage. The visual inspection stage emerged earliest and holds an unshakable position regarding accuracy. The pure algorithm stage has the most extended development history; many efficient algorithms were developed based on signal processing theory, laying a solid foundation for the advancement of neural networks. With the gradual increase in hardware computational power, data-driven algorithms have become mainstream and are steadily approaching the accuracy of visual inspection. On the other hand, the methods can be classified as single-temporal methods, based on a single image after a disaster event, and multi-temporal methods, based on pre- and post-disaster images for change detection. In comparison, the latter offer higher accuracy but have certain constraints.
The key challenge, therefore, lies in enhancing the efficiency and accuracy of utilizing remote sensing imagery for post-disaster building damage identification, and many researchers have organized and summarized the works associated with the development of various approaches. For instance, Dong et al. thoroughly discuss single-temporal and multi-temporal methods [18]. Their study categorizes and evaluates these two techniques based on the type of remote sensing data employed, including optical, LiDAR, and SAR data. However, the article's early publication date has resulted in a significant coverage gap, as it does not encompass the most cutting-edge advancements, such as deep learning technologies. Maxwell et al. tried to fill this void [19]. Their work primarily focuses on early machine learning algorithms such as K-NN, Random Forest, and support vector machines, with limited attention to network architectures built on newer methods like CNNs and transformers. Ge et al. offer a more comprehensive exposition of these novel deep learning algorithms but restrict their study samples to SAR images [20], without emphasis on developments related to optical remote sensing imagery. Ghaedi et al.'s study places less emphasis on image processing algorithms and delves more into the aspects of building damage, leading to an imbalanced discussion [21].
This paper serves as supplementary research aimed at addressing the aforementioned research gaps, drawing upon 91 high-quality articles selected to conduct a comprehensive literature review of the most recent advancements in algorithms for reliable and prompt post-disaster building damage identification. Section 2 elucidates the literature retrieval process, while the analysis results are presented in Section 3. Section 4 offers a detailed categorization and discussion of the pivotal advancements, effectiveness, and applications of these algorithms. Based on the analysis results, Section 5 assesses the effectiveness and precision of seven state-of-the-art algorithms using sample sets obtained from two earthquake disaster events and provides a thorough discussion of their strengths and weaknesses. Finally, Section 6 presents the conclusions drawn from this research and outlines prospects for future research directions in the development of advanced algorithms.

2. Materials and Methods

The review was conducted from January 2023 to January 2024, with regular updates following the completion of each round. In accordance with established academic search methodology, the process encompassed three widely recognized and highly credible search engines and repositories: Scopus, Web of Science, and ScienceDirect. Document types were restricted to "reviews", "articles", "conference papers", and "books/book chapters" to ensure data quality and uniformity, with English as the selected language. Given that this research intersects civil engineering, remote sensing, and deep learning, no specific constraints were placed on publication date. Nevertheless, emphasis was placed on articles published after 2012, the year AlexNet was introduced.
The literature search process consists of two stages. The initial step involves retrieving primary keywords from databases and search engines, with these keywords organized hierarchically in a pyramid-like structure. The primary keywords are as follows: Deep Learning (DL), Image Processing (IP), Remote Sensing Images (RS), Building/Construction/Architectural (in this study, the term "building" is used to cover all three), and Building Damage (BD). Figure 2 illustrates the combinations and progressive relationships among these keywords.
The second step entails extracting secondary keywords after reviewing and organizing the literature obtained from the first step. The secondary keywords are more specific and represent the technical aspects within the research areas indicated by the primary keywords. They include Self-supervised Learning, Change Detection, Distillation Learning, Contrastive Learning, and Siamese Neural Networks. Subsequent search rounds repeated these steps, with some secondary keywords emerging at different times. For instance, "self-supervised learning" was identified as early as January 2023, while "change detection" became a focus in May 2023. Figure 3 provides a conceptual flowchart of the retrieval process.

2.1. Step 1

Initially, at the base level of the pyramid, we conducted a retrieval of image processing methods within the domain of deep learning. Ultimately, we organized and categorized the twenty most frequently employed deep learning algorithms for image processing as follows:
(1) Image Classification
(2) Object Detection
(3) Image Segmentation
(4) Semantic Segmentation
(5) Instance Segmentation
(6) Object Tracking
(7) Pose Estimation
(8) Image Generation
(9) Image Super-resolution
(10) Image Denoising
(11) Image Restoration
(12) Image Style Transfer
(13) Image Captioning
(14) Anomaly Detection
(15) Image Registration
(16) Image Compression
(17) Image Enhancement
(18) Image Reconstruction
(19) Image Transformation
(20) Image Matching
Indeed, this represents an extensive research scope. Therefore, at the second level of the pyramid, we focused on selecting the more relevant remote sensing image processing methods, guided by the five criteria detailed in Section 3. Ultimately, we identified five pertinent remote sensing image processing methods:
(1) Remote Sensing Image Classification
(2) Object Detection
(3) Land Cover Segmentation
(4) Land Surface Temperature Estimation
(5) Land Use/Land Cover Change Detection
These five methods are fundamental and widely utilized in remote sensing image processing, covering various application domains and tasks. Leveraging deep learning techniques, these methods can enhance the accuracy and efficiency of remote sensing image processing, providing valuable information and insights for remote sensing applications. However, not all of these methods are directly related to building targets. Consequently, at this stage, we chose to focus primarily on the first three methods in our retrieval.

2.2. Step 2

Following an extensive review of the literature in the initial phase, our research focus shifted toward optimizing network structures, such as self-supervised learning and Siamese neural networks. A deeper understanding of Siamese neural networks drew our attention to change detection algorithms. Consequently, our research emphasis shifted toward change detection.
In comparison to single-image recognition algorithms, change detection algorithms frequently yield more robust results. Therefore, the subsequent stages of our research emphasized the combination of change detection, remote sensing, and building damage.

3. Results

Following a comprehensive search, we have identified 91 influential and high-quality articles as the culmination of our investigation. These articles were primarily chosen based on the following criteria:
(1) Innovation: whether the methods addressed critical challenges and offered innovative solutions.
(2) Reliability of results: quality of results, adequacy of data, and convincing statistical analysis, comparison, and discussion.
(3) Clarity of algorithm implementation: clear explanation of the principle and sufficient detail to reproduce the results.
(4) Novelty: the first-time application of an innovative method (such as the Transformer around 2020) in this field.
(5) Citation count.
Figure 4 presents statistical data regarding the publication years of the selected articles. The figure illustrates an increasing trend over time in the number of publications related to remote sensing identification of post-disaster buildings. Specifically, within this context, technologies that integrate change detection with deep learning have gained popularity in recent years. Statistical data are presented in Table 1 and Figure 5.
For literature that employs data-driven methods, this paper has compiled data on the network architectures they utilized, as presented in Table 2.
Moreover, the subjects of study in the literature and the datasets employed by neural networks hold significant statistical value. Comparing different algorithms on the same dataset facilitates the assessment of their strengths and weaknesses. Table 3 provides an overview of the datasets utilized in the literature and their sources.
In the subsequent section, leveraging these statistical data, this paper provides a concise introduction to these studies and their contributions, organized by the classification of algorithms adopted in the literature.

4. Analysis: Remote Sensing-Based Methods for Post-Disaster Building Damage

The acquisition of diagnostic results on post-disaster building damage can be broadly categorized into two approaches: (1) Multi-temporal methods that detect changes in remote sensing data before and after the event (change detection). (2) Single-temporal methods that analyze only post-event remote sensing data.
Among these, methods based on change detection are considered mainstream for obtaining building damage information, as they exhibit higher applicability and yield more accurate results. However, this technique faces a significant limitation: many cities, particularly those in developing countries, lack reliable pre-disaster remote sensing data, which can substantially compromise the outcomes when using low-precision images.
On the other hand, the single-temporal method, using only post-event imagery, is more convenient and suitable for rapid response after disasters. However, it often encounters comparatively lower precision and sensitivity issues due to the complexities of post-disaster images. In this context, this section focuses on critical technical advances aimed at ensuring the accuracy of assessment results.
Nevertheless, it should be emphasized that the performance of all these approaches largely depends on the amount and quality of data used. For example, a high-quality database leads to a deeper understanding of the physical mechanisms, which can then be used to modify algorithm assumptions and settings, an essential aspect of data-driven approaches.

4.1. Only Post-Event Data

The single-temporal analysis technique relying solely on post-event imagery emerged early, and with the enhancement of remote sensing data precision, this technique has been continuously refined. Therefore, this paper will provide a comprehensive overview of relevant literature from three perspectives in the following paragraphs. The development trajectory of this direction is illustrated in Figure 6.

4.1.1. Visual Approaches

Traditional single-temporal methods can be divided into visual and automated processes. Visual methods rely on the expertise and efforts of researchers, resulting in low efficiency. In 2000, Ogawa et al. annotated building damage caused by the Kobe earthquake using single-frame and stereoscopic aerial photographs [13]. Their results achieved an accuracy of 70% compared to on-site surveys, with even higher accuracy for heavily damaged structures. In 2004, Yamazaki et al. conducted visual damage detection of buildings based on QuickBird satellite observation data [22]. This study also compared change detection with single-temporal methods, revealing the former's higher accuracy. Similar work was conducted by Kaya et al. [23] in 2005 on the Izmit earthquake in Turkey.
Notably, visual observation of remote sensing imagery is exceptionally constrained by image quality, which is degraded primarily by weather conditions such as wind, frost, clouds, and snow. However, this issue has been mitigated by the development and widespread adoption of Synthetic Aperture Radar (SAR) technology.
SAR imagery can overcome limitations imposed by clouds, fog, rain, snow, and nighttime darkness, enabling all-day, all-weather, high-resolution, large-scale ground imaging. This capability facilitates a finer subdivision of building damage levels. For example, researchers utilized high-resolution X-band airborne SAR imagery to evaluate the building losses caused by an earthquake in Yushu City [24]. Based on the extracted features, researchers quantitatively interpreted the individual building damage conditions within each street block in Yushu City. Subsequently, they established a preliminary quantitative relationship between the SAR image-interpreted remote sensing damage index and the optical image-interpreted remote sensing damage index.

4.1.2. Pure Algorithm-Based Methods

Numerous automated methods have relatively higher efficiency; however, their accuracy varies. In 2007, Yamazaki et al. optimized the threshold selection process of grayscale images using morphological features of remote sensing imagery based on an edge detection algorithm [25]. This approach was applied to QuickBird images captured during the Bam earthquake, yielding results that aligned well with visual detection outcomes. In 2010, Polli et al. attempted to construct a texture-based automated classifier for categorizing building damage levels in post-earthquake COSMO/SkyMed images of L’Aquila and Haiti [26].
Furthermore, an object-oriented method was proposed in [27] by examining various features of objects, including spectral, textural, and spatial-relationship features, achieving an accuracy of over 90%. Balz et al. also employed post-earthquake imagery from the Wenchuan earthquake but chose SAR data and presented a theoretical assumption about the appearance of collapsed buildings in high-resolution SAR images [28]. This work established mapping functions and ultimately achieved the interpretation of visual features.
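For intuition, the following is a minimal sketch of an edge-based heuristic in the spirit of these pure-algorithm approaches; OpenCV is assumed, and the Canny thresholds, patch source, and damage cutoff are illustrative rather than taken from [25] or [27]:

```python
import cv2
import numpy as np

def edge_density(gray_patch: np.ndarray) -> float:
    """Fraction of edge pixels in a grayscale building patch.

    Collapsed buildings tend to produce dense, irregular edges, whereas
    intact roofs yield sparse, regular contours.
    """
    edges = cv2.Canny(gray_patch, 100, 200)  # heuristic thresholds (assumed)
    return float(np.count_nonzero(edges)) / edges.size

# Stand-in patch; a real pipeline would crop patches from post-event imagery.
patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
label = "collapsed" if edge_density(patch) > 0.15 else "intact"  # assumed cutoff
```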

4.1.3. Data-Driven Methods

Since the inception of neural networks, researchers have channeled substantial effort into this domain each year, producing diverse AI models tailored to many tasks. The realm of building damage recognition is no exception. Through the application of these algorithms, the efficiency and accuracy of single-temporal methods have advanced significantly.
(1) Machine Learning-Based Methods
With the advancement of neural networks, the remote sensing image processing domain has witnessed the integration of machine learning (ML) techniques like Support Vector Machines (SVM), Random Forests (RF), and K-Nearest Neighbors (K-NN). While these methods might not achieve breakthroughs in terms of accuracy, they have introduced novel avenues for recognizing building damage in remote sensing imagery [18].
As mentioned earlier, SAR imagery has significantly elevated the precision of remote sensing image recognition. Shi et al. conducted a comprehensive assessment of China’s Dual-Band Airborne SAR Mapping System (CASMSAR), which employed high-resolution X-band and P-band imagery (at 0.50 m and 1.1 m resolution, respectively) collected using interferometric and polarimetric modes [29]. Additionally, the study introduced the RF decision framework to quantify the importance scores of each feature and enhance the discrimination accuracy.
Gong et al. introduced a novel concept for individual building damage assessment using post-event high-resolution SAR imagery and building footprint maps [30]. The experiment employed three machine learning classifiers: RF, SVM, and K-NN. The results indicated a generally high accuracy, with all three classifiers achieving above 80%.
In the context of the 2016 Kumamoto earthquake, Bai et al. combined ALOS-2/PALSAR-2 SAR imagery with a machine learning framework. They compared the accuracy of multi-temporal methods with single-temporal methods, showing an enhancement of 2.3% for the former over the latter [31]. The same year, they developed and tested a random forest-based object classification task on ALOS-2/PALSAR-2 dual-polarization SAR imagery collected from two disaster-affected regions after the 2015 Nepal earthquake, which demonstrated the satisfactory performance of the RF framework in conducting binary classification of building damage levels [32].
Ji et al. introduced a novel seismic loss assessment method addressing urban area extraction and damaged building recognition [33]. It is an unsupervised classification method based on a supervised SVM that leverages single post-event PolSAR data to provide more accurate damage assessment results. Compared to SAR imagery, PolSAR images offer richer semantic information, leading to more precise outcomes.
Additionally, research in the realm of optical imagery continues. Cooner et al. evaluated the performance of Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, and Random Forests in detecting damage from the 2010 Haiti earthquake [34]. This study introduced texture and structural features, such as entropy, dissimilarity, Laplacian of Gaussian, and rectangular fit, as critical variables for high-spatial-resolution image classification. Each algorithm achieved nearly 90% kernel density matching, with the Multilayer Perceptron Network achieving an error rate below 40% in detecting damaged buildings. The outcome showcased the significance of spatial features such as texture and structure in algorithmic classification, as shown in Figure 7.
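As a concrete illustration of this feature-based pipeline, the sketch below computes GLCM texture features (dissimilarity, contrast, and entropy, in the spirit of the variables above) and feeds them to a Random Forest; scikit-image and scikit-learn are assumed, and the patches and labels are placeholders rather than data from [34]:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def texture_features(gray_patch: np.ndarray) -> np.ndarray:
    """GLCM dissimilarity, contrast, and entropy for one 8-bit patch."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    dissimilarity = graycoprops(glcm, "dissimilarity").mean()
    contrast = graycoprops(glcm, "contrast").mean()
    p = glcm[glcm > 0]                 # nonzero co-occurrence probabilities
    entropy = -np.sum(p * np.log2(p))  # GLCM entropy
    return np.array([dissimilarity, contrast, entropy])

# Placeholder patches and damaged/undamaged labels, for illustration only.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
y = rng.integers(0, 2, 20)

X = np.vstack([texture_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```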
(2) Deep Learning-Based Methods
Deep Learning (DL) is a subfield of ML that emphasizes leveraging deep neural networks to address intricate, multifaceted problems. The progression of deep learning techniques has enabled researchers to extract more nuanced semantic information from remote sensing imagery, particularly for identifying post-disaster building damage. Accordingly, this section elucidates the distinctions between deep learning methods and conventional machine learning methods.
I. CNN-based Methods
One of the most prominent deep neural networks in image processing is the Convolutional Neural Network (CNN). As in other image processing disciplines, researchers have made significant efforts to employ CNNs for damage detection in remote sensing images.
A CNN model was designed to detect building damage in very high-resolution (VHR) images of the 2010 Haiti earthquake, achieving impressive accuracy and efficiency on a GPU K80 HPC platform [35]. Ma et al. applied the CNN-based object detection method YOLOv3 to locate collapsed buildings in post-earthquake remote sensing images [36], replacing the Darknet-53 backbone in YOLOv3 with the more lightweight ShuffleNet v2. Valentijn et al. leveraged CNNs and supervised learning to label 175,289 buildings from the xBD dataset, achieving high performance levels [37]. Miura et al. developed a method for automatically identifying building damage from post-disaster aerial images using CNNs and a building damage inventory [38].
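To ground the discussion, here is a minimal PyTorch sketch of a patch-level CNN damage classifier of the general kind used in these studies; the architecture, input size, and four-level damage scale are illustrative assumptions, not any specific published model:

```python
import torch
import torch.nn as nn

class DamageCNN(nn.Module):
    """Toy CNN mapping a 64x64 RGB building patch to damage classes."""
    def __init__(self, num_classes: int = 4):  # e.g., none/minor/major/destroyed (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = DamageCNN()(torch.randn(8, 3, 64, 64))  # batch of 8 placeholder patches
```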
Some researchers focused on the development of novel methods. For example, Bai et al. proposed a deep learning-based framework using only post-event TerraSAR-X data to generate a damage map [39]. Similarly, Qiao W et al. [40] have proposed a novel Weakly Supervised Semantic Segmentation (WSSS) method based on image-level labels for pixel-level extraction of damaged buildings from post-earthquake high-resolution remote sensing images. An advanced CNN for visible structural damage detection was tested by Nex et al. in [41].
Furthermore, Zhan et al. modified a deep learning model called Mask R-CNN to extract residential buildings and estimate their damage levels from post-disaster aerial images [42]. The model employed an improved Feature Pyramid Network and online hard example mining. Seydi et al. constructed a Building Damage Detection Network (BDD-Net) utilizing three deep feature streams (via multi-scale residual depth convolution blocks) fused at different network levels [43]. Unlike other fusion networks that connect only at the first and last levels, BDD-Net was evaluated across three distinct fusion stages. The results demonstrated that fusing optical and LiDAR datasets significantly enhanced the generation of building damage maps, achieving an overall accuracy greater than 88%.
By combining remote sensing imagery and block vector data, an improved CNN architecture based on Inception V3 was proposed by Ma et al. in [44] for assessing the degree of damage to building clusters in post-earthquake SAR imagery of the Yushu earthquake at 0.5 m resolution. On the other hand, by deploying unmanned aerial vehicles (UAVs) to conduct on-site imaging in areas with building damage and then combining the images with satellite remote sensing data, more features can be incorporated into deep learning detection techniques. Asim et al. proposed a structure known as Multi-View-CNN, which aggregates 2D images of building damage from five different angles into a 3D representation [45]. This model achieves a multi-level assessment of building damage by extracting features from images captured in five directions, including satellite top-down views. The model demonstrated an accuracy of 65%/81% (based on five-level and three-level damage scales, respectively) on the Hurricane Harvey test set.
II. Other Network-based Methods
After CNN became the most influential deep learning network, numerous notable neural network models based on CNN technology emerged, such as YOLO, ResNet, and U-Net, among others. Due to their significant architectural innovations and outstanding contributions to neural networks, they are often categorized separately.
For instance, Wang et al. introduced a novel real-time detection model for identifying damaged areas in buildings based on an improved YOLOv5 adapted for UAV images, called DB-YOLOv5 [46]. The model incorporates a spatial feature extraction module utilizing a residual expansion network, greatly enhancing the receptive field. Adriano et al. conducted building classification after the 2018 Sulawesi earthquake [47], using the SqueezeNet network to classify built-up and non-built-up areas and distinguishing different levels of building damage within the built-up areas. Song et al. utilized the Deeplab v2 neural network to obtain initial damaged building areas; the test images were then segmented using the simple linear iterative clustering method [48]. Nie et al. introduced a novel approach using a single post-event fully polarimetric SAR image to detect collapsed buildings [49]. Their method effectively differentiated collapsed from diagonally oriented structures by combining the developed polarimetric features with texture features, identifying collapsed buildings even in error-prone areas.
By combining remote sensing images with multiple ground perspectives, Asim et al. demonstrated that automated building damage detection algorithms achieve higher accuracy [45], highlighting the significance of drones in this process.
Hu et al. conducted a similar study, presenting a new approach for automated reconnaissance of building damage through UAV mission planning for disaster response actions [50]. Researchers utilized satellite remote sensing images to assist in planning UAV mission routes, significantly enhancing the efficiency of post-disaster UAV detection of building damage.
III. Transformer-based Methods
After the emergence of the Transformer in 2017, the high robustness brought by the attention mechanism caught the attention of researchers. Xie et al. introduced a local-global context attention module to enhance the feature detection capability of the network [51]. This module extracts features of damaged buildings from different directions, effectively aggregates global and local features, and considers the correlations between feature maps of different scales during information extraction. The effectiveness of the proposed method was verified using data from the 2010 Haiti earthquake, confirming the superiority of the attention mechanism in remote sensing image recognition.
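The exact local-global context attention module of [51] is not reproduced here; the following PyTorch sketch of a simple spatial attention block merely illustrates the underlying idea of reweighting feature maps by pooled context:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Simple spatial attention: weight each location by pooled context.

    A stand-in for the local-global context attention of [51]; the real
    module also aggregates features across directions and scales.
    """
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)   # average channel response per pixel
        max_pool = x.amax(dim=1, keepdim=True)   # strongest channel response per pixel
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                          # reweighted feature map

feats = SpatialAttention()(torch.randn(1, 64, 32, 32))
```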
IV. Transfer Learning
Due to the labor-intensive labeling required by traditional deep learning methods, many researchers have explored alternative approaches. Yang et al. constructed an example dataset of damaged buildings using multiple disaster images retrieved from the xBD dataset [52]. Utilizing satellite and aerial remote sensing data obtained after the 2008 Wenchuan earthquake, the geographical and data transferability of deep network models pre-trained on the xBD dataset was investigated. The adapted DenseNet121 exhibited the highest robustness among the four tested CNN models. The research results confirmed the reliability of transfer learning, providing a novel solution for the rapid extraction of earthquake-damaged building information.
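As a hedged sketch of this transfer-learning recipe, the snippet below loads an ImageNet-pretrained DenseNet121 from torchvision, swaps the classifier head for a damage-level head, and freezes the backbone; the four-class head and freezing policy are assumptions rather than the exact setup of [52]:

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights; [52] additionally pre-trained on xBD before
# transferring to Wenchuan imagery.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 4)  # 4 damage levels (assumed)

# Optionally freeze the backbone and train only the new head first.
for p in model.features.parameters():
    p.requires_grad = False
```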
V. Pixel-level Single-temporal Building Damage Detection
While the predominant approaches focus on identifying and classifying the damage level of individual buildings, more advanced methods have emerged. Recent research has demonstrated the potential of using only single post-disaster images to achieve pixel-level damage assessment results comparable to those obtained through change detection techniques.
For instance, Wang et al. designed a novel loss function based on U-Net [53], incorporating both a geometric consistency constraint and a cross-entropy loss. This loss function is employed for seismic post-disaster building segmentation across multiple scales with complex geometry. The recognition model equipped with this loss function achieved an average accuracy of 86.98% on training and test sets related to the Yushu and Wenchuan earthquakes.
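The precise geometric consistency constraint of [53] is not detailed here; as a rough sketch of the general pattern, the loss below combines pixel-wise cross-entropy with a total-variation surrogate that penalizes ragged predicted masks (both the surrogate term and its weighting are assumptions):

```python
import torch
import torch.nn.functional as F

def combined_loss(logits: torch.Tensor, target: torch.Tensor, lam: float = 0.1):
    """Cross-entropy plus a surrogate geometric-consistency penalty.

    The auxiliary term penalizes ragged predicted masks via the total
    variation of the softmax map; the actual constraint in [53] differs.
    """
    ce = F.cross_entropy(logits, target)
    prob = logits.softmax(dim=1)
    tv = (prob[:, :, 1:, :] - prob[:, :, :-1, :]).abs().mean() + \
         (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs().mean()
    return ce + lam * tv

# Placeholder batch: 2-class logits over 64x64 pixels and integer labels.
loss = combined_loss(torch.randn(2, 2, 64, 64), torch.randint(0, 2, (2, 64, 64)))
```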

4.2. Building Damage Detection Using Both Pre- and Post-Event Data

In fact, most methods for detecting architectural damage rely on analyzing changes between pre-earthquake and post-earthquake remote sensing images. These methods generally yield superior accuracy compared to those solely reliant on post-event data. For instance, Gong et al. [54] proposed an object-based change detection method, demonstrating its efficacy through a case study using high-resolution pre- and post-Yushu earthquake images. This approach achieved an accuracy of 83% in extracting building damage, significantly surpassing the accuracies obtained through pixel-based direct change detection (72%) and principal component analysis-based methods (61%). These findings underscore the effectiveness of the object-based change detection algorithm for accurate building damage assessment. The development trajectory of this field is illustrated in Figure 8.

4.2.1. Visual Approaches

Due to the challenges in collecting remote sensing imagery and the instability of image quality, accurately overlaying pre- and post-earthquake images for automatic change detection has been a complex task. As a result, early change detection methods often relied on visual interpretation of optical images. For instance, Sakamoto et al. utilized three IKONOS images from before and after the 2003 Bam earthquake in southeastern Iran for visually interpreting collapsed buildings, achieving an accuracy exceeding 75% [55]. Saito et al. compared visual interpretation results of image changes with change detection results and found that building damage tends to be underestimated when using only post-event images [56]. Yamazaki et al. visually compared QuickBird images before and after the Bam earthquake and categorized damaged buildings into four damage levels [57].
In addition, a system called “VIEWS™” [58] was developed to support route planning, progress monitoring, and capturing georeferenced digital images during the emergency response phase of the Bam earthquake.
These works demonstrate the higher accuracy of multi-temporal methods over single-temporal methods. Nevertheless, the time-consuming nature of visual interpretation hinders its usage for large-scale rapid damage assessment tasks.

4.2.2. Pure Algorithm-Based Methods

I. Texture-based Methods
Numerous automated change detection techniques have been developed in building damage recognition to enhance efficiency. For instance, Gamba et al. introduced an automated change detection algorithm based on edge and contour detection in pre- and post-event images, shape analysis, and perceptual grouping [59]. The system correctly detected around 70% of collapsed buildings in two trials compared to visual interpretation.
Integrating GPS information embedded within images and linking it to a pre-existing building database for a specific urban area could considerably enhance the system’s practical applicability. Hancilar et al. [60] partially implemented this concept by merging building damage data acquired following the Haiti earthquake with field survey information. This enabled the development of vulnerability functions for diverse urban zones within Haiti, categorized as low, medium, and high-density areas, including slum districts. The researchers employed field-collected damage data to conduct a structural vulnerability analysis on buildings, grouping them based on construction materials and the number of stories.
II. Threshold Segmentation
Threshold segmentation is an early and straightforward image-processing method that has been widely explored in change detection research. For instance, Yusuf et al. computed the difference between two Landsat-7 satellite images acquired before and after the 2001 Gujarat earthquake in India [61]. They used a threshold classification procedure to analyze each pixel individually, resulting in building damage data. However, owing to algorithmic limitations, the final results did not significantly improve over single-temporal techniques. Similarly, Sugiyama et al. proposed an image difference algorithm that extracted damaged areas by thresholding color differences at the exact geographic locations in pre- and post-event aerial images [62]. This method struggled to handle shadows effectively, limiting its accuracy. Dou et al. calculated block-level damage levels by thresholding the differences in average grayscale and average variance of pre- and post-event images [63]. Kohiyama et al. used 15 m resolution Terra-ASTER images to detect damaged areas after the Bam earthquake [64]. Their method established a standard distribution model to assess the deviation value of each pixel in post-event images and transformed it into a confidence value indicating the likelihood of surface changes caused by the earthquake.
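At its core, this family of methods reduces to pixel-wise differencing followed by a cutoff, as in the minimal NumPy sketch below; the images are assumed to be co-registered, and the data-driven threshold is illustrative:

```python
import numpy as np

def change_mask(pre: np.ndarray, post: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Flag pixels whose pre/post difference exceeds k standard deviations.

    Assumes the two grayscale images are already co-registered; real
    pipelines also filter noise and shadows before thresholding.
    """
    diff = np.abs(post.astype(np.float32) - pre.astype(np.float32))
    threshold = diff.mean() + k * diff.std()  # data-driven cutoff (assumed)
    return diff > threshold                    # boolean damage-candidate mask

# Placeholder imagery standing in for co-registered pre/post scenes.
rng = np.random.default_rng(0)
pre_img = rng.integers(0, 256, (512, 512), dtype=np.uint8)
post_img = rng.integers(0, 256, (512, 512), dtype=np.uint8)
ratio = change_mask(pre_img, post_img).mean()  # block-level damage proxy, cf. [63]
```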
However, threshold segmentation methods often rely on single features, and thus have limited capacity for handling the complex semantic information inherent in remote sensing images. This difficulty is further exacerbated by the presence of noise unless effective filtering techniques are employed. Consequently, these methods require improved robustness.
In addition, threshold segmentation has also been applied in neural network-based approaches. However, unlike pure algorithm methods, the application of threshold segmentation in data-driven methods typically manifests as a binary classification. Adriano et al. developed a global multi-modal and multi-temporal dataset for building damage mapping [65]. This dataset includes building damage features from three disaster types: earthquakes, tsunamis, and typhoons, and considers three damage categories. The global dataset incorporates high-resolution optical images and high- to medium-resolution SAR data captured before and after each disaster event. It annotates image data with on-site information, partially bridging remote sensing image information and urban databases. The study defined a damaged building semantic segmentation mapping framework based on the U-Net deep CNN architecture and validated the network's reliability using field survey data.
III. Multi-feature Methods
Pesaresi et al. designed a multi-feature method combining radiometric, texture, and morphological features from pre- and post-event QuickBird images to detect damaged buildings in tsunami-affected areas [66]. They introduced a correlation index to mitigate the influence of shadows. The highest accuracy obtained after analyzing remote sensing images of the 2004 Southeast Asian tsunami was 93.97%, proving the advantage of multi-feature methods. In contrast, Miura et al. achieved only 70% accuracy using texture features from pre-event QuickBird and post-event WorldView-2 images of the Haiti earthquake, further affirming the superiority of multi-feature methods [67]. Janalipour et al. [68] presented a method for post-earthquake urban building damage detection that leverages pre-event vector maps and post-event high-resolution pan-sharpened images. Their approach involved pre-processing the post-disaster satellite imagery, integrating the results from pixel-based and object-based classifications, extracting building geometry features (area, rectangular fit, and convexity), and developing a decision-making system incorporating an Adaptive Neuro-Fuzzy Inference System (ANFIS) model. Tested on the Bam earthquake dataset in Iran, the method achieved a moderate overall accuracy of 76.36%.
It should be noted that beyond feature selection and algorithm design, the inherent limitations of remote sensing imagery, such as image quality, can significantly impact the effectiveness of such methods.
IV. SAR Sample-based Methods
As such, SAR images with higher precision and less noise are preferable. Done et al. employed change detection methods using Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) data and TerraSAR-X data to acquire building loss information from the Wenchuan earthquake [69]. Miura et al. evaluated individual building damage from the 2010 Haiti earthquake using high-resolution SAR intensity images and auxiliary building footprints [70]. Comparing pre- and post-event images against a list of damaged buildings, they found that the backscattering intensity change for collapsed buildings was more significant than for less damaged buildings.
Park et al. focused on applying full-polarization scattering observations to derive information from damaged areas [71]. They used polarization scattering mechanism indicators to study radar backscatter mechanism changes caused by earthquakes, and proposed a polarization information fusion method based on Context Markov Random Fields (MRF). This approach achieved high-accuracy results on the 2011 Japan Tohoku earthquake dataset. Park et al. also investigated severely damaged urban areas from the 2016 Kumamoto earthquake using polarimetric PALSAR data acquired under pre- and post-event conditions [72]. An optimal damage index under significant natural changes and a fuzzy fusion-based change detection method using a polarization damage index were proposed.
By integrating SAR amplitude and phase coherence change detection, Wang et al. [73] proposed a novel method for seismic building damage assessment and analysis. The study evaluated building losses in five urban areas significantly impacted by the 2023 Turkey earthquake and validated the estimations against high-resolution optical imagery and the AI recognition results produced by the Microsoft team in partnership with Turkey's Ministry of Interior Disaster and Emergency Management Presidency (AFAD). Additionally, the study explored the influence of SAR image parameters on building change detection, revealing that image resolution and observational geometry significantly affect the accuracy of change detection, with improvements in resolution leading to enhanced recognition accuracy.
V. Others
Researchers have also attempted to address the high similarity issues between pre- and post-event images in change detection. For example, Kim et al. detected earthquake-induced damaged buildings using SAR data with different observation modes [74]. They proposed a context change analysis method based on novel texture features to map the damaged buildings. This study utilized dual-phase Kompsat-5 data obtained in different polarization modes. In the 2016 Kumamoto earthquake dataset, the proposed texture analysis enhanced the detectability of damaged building areas while maintaining a low false alarm rate in agricultural regions. The detection rate was approximately 72.5%, and the false alarm rate was about 6.8%.

4.2.3. Data-Driven Methods

(1) Machine Learning-Based Methods
Regarding optical images, Annabelle et al. analyzed Very High-Resolution (VHR) optical images acquired before and after the L'Aquila earthquake in June 2009 [75]. They proposed a texture-based supervised learning method for building damage detection, achieving a five-class classification of building damage levels. The authors of [76] introduced a technical workflow for object-based change detection of building earthquake damage information in VHR remote sensing images, achieving an accuracy of 88.45% and a Kappa coefficient of 0.8411 in small-area change detection for the Yushu earthquake.
Likewise, SAR images have provided new samples for machine learning. For example, SAR and Light Detection and Ranging (LiDAR) data were combined to assess building damage caused by the Kumamoto earthquake [77]. Hajeb et al. employed machine learning algorithms, including RF and SVM, to classify and predict damage rates. Similar to the goals of [78,79], Gong et al. [80] presented an object-based change detection method by overlaying road vector data onto post-event images of the Wenchuan earthquake.
(2) Deep Learning-Based Methods
I. CNN-based Methods
The inherent richness of semantic information in remote sensing imagery, coupled with the challenges posed by varying acquisition angles, colors, shadows, and noise in images captured across significant temporal intervals, hinders the ability of shallow machine learning networks to achieve highly robust damage detection results, as noted in [81]. Therefore, this article emphasizes the exploration of deep learning techniques in this domain.
Sublime et al. introduced an unsupervised deep learning method and applied it to change detection in satellite images captured before and after the 2011 Japan Tohoku earthquake [82]. They achieved unsupervised clustering through a fully convolutional autoencoder (AE) model, ultimately attaining high accuracy while ensuring efficiency. While the unsupervised algorithm has the advantage of fast analysis, it may suffer from weaker generalization and less robust results.
A supervised learning approach based on convolutional operations can effectively address these issues. Qing et al. proposed a layered architectural damage assessment workflow for superpixel-level direct remote sensing change detection using CNNs [83]. A fine-scale building damage detection method based on the Pre-Superpixel Constraint strategy was employed for the extracted building areas. Experimental results demonstrated that the proposed workflow effectively and accurately located and classified damaged buildings.
In particular, a comparison with single-time-phase images further highlighted the advantages of direct change detection. Loerch et al. compared three methods to enhance the value of small Unmanned Aircraft System images [84]: (1) the use of a dual-temporal imaging method known as Repetitive Station Imaging (RSI) that repeats image station positions over time (in contrast to traditional non-RSI imaging); (2) co-registration of dual-temporal image pairs; (3) damage detection using Mask R-CNN. In all cases, the RSI method exhibited the highest accuracy.
II. xView2 Challenge and xBD Dataset
The xView2 Challenge and the introduction of the xBD dataset significantly spurred the development of remote sensing image recognition, particularly for deep learning methods, leading to the creation of several prominent network architectures [85].
Unlike its predecessor, xView, which primarily focused on small object samples within individual images, xBD incorporates additional pixel-level change detection annotations specifically for building damage. It utilizes paired samples and employs better-defined categories, such as building damage, vegetation cover, and cloud layer changes, which better align with the demands of change detection tasks compared to traditional detection and classification tasks. For instance, Weber et al. released their competition-winning code, which leveraged a deep learning algorithm for building damage assessment on the xBD dataset, achieving remarkable accuracy [86].
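For orientation, the sketch below loads one xBD-style pre/post pair; the file naming follows xBD's published "*_pre_disaster.png" / "*_post_disaster.png" convention, while the directory path and scene identifier are hypothetical:

```python
from pathlib import Path
from PIL import Image

def load_pair(images_dir: Path, scene_id: str):
    """Load a co-registered pre/post image pair from an xBD-style folder.

    xBD stores paired tiles as <scene>_pre_disaster.png and
    <scene>_post_disaster.png; damage polygons live in matching JSON labels.
    """
    pre = Image.open(images_dir / f"{scene_id}_pre_disaster.png")
    post = Image.open(images_dir / f"{scene_id}_post_disaster.png")
    return pre, post

# Hypothetical local copy of xBD and a hypothetical scene identifier.
pre, post = load_pair(Path("xBD/train/images"), "hurricane-harvey_00000000")
```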
The emergence of xBD has demonstrably motivated researchers to develop more comprehensive datasets for pre- and post-disaster building damage assessment. One such example is the database compiled by M. Omoya et al., which encompasses information on over 3600 buildings impacted by the 2014 South Napa earthquake, including building characteristics, site attributes, seismic intensity, damage and repair details, and demographic statistics [87]. Additionally, Zhe Dong et al. collected post-disaster drone imagery following the 2021 Yangbi earthquake in Dali, Yunnan and determined damage levels by referencing pre-disaster images from Google Earth [88]. Furthermore, S. Ghaffarian et al. introduced a novel approach using pre-disaster OpenStreetMap building data to automatically generate training samples, offering a new perspective on dataset construction in this domain [89].
In this context, a similar Siam-U-Net-Attn model was proposed by Hao et al. in [90], incorporating an attention mechanism to assess the extent of building damage. The model's efficacy was evaluated on the xView2 dataset for building damage assessment, demonstrating precise damage scale classification and building segmentation. Ismail et al. developed BLDNet, a semi-supervised CNN-based framework, applied to various damage detection scenarios in the xBD dataset [91,92]. A refined Swin-Unet method was introduced by Xu et al. in [93] and applied in different scenarios, such as high-resolution Gaofen-2/Jilin-1 multi-temporal optical images and satellite image datasets (xBD). Karlbrg et al. [94], based on the xBD dataset and samples from the 2023 Turkey earthquake, compare the strengths and weaknesses of single-temporal and multi-temporal methods. Their study suggests that in cases where sample accuracy is relatively low, single-temporal methods might be the preferable choice.
The xBD dataset’s immense size offers another significant advantage: its sample diversity. This characteristic enables effective integration with other datasets, mitigating overfitting and enhancing model generalizability. In a recent study [95], researchers explored the hierarchical transformer architecture (DAHiTrA) for damage assessment. This approach leverages spatial features at various resolutions and captures temporal variations by applying transformer encoders. The proposed network achieved state-of-the-art performance on both the xBD dataset (for building localization and damage classification) and the LEVIR-CD dataset (for change detection tasks).
Furthermore, the interoperability of xBD aligns well with concepts like distillation learning and incremental learning. Ge et al. proposed an incremental learning architecture (Incre-Trans) applicable to various datasets, including xBD [96]. This method achieved Kappa accuracies of 0.70 and 0.68 when tested on the 2010 Haiti and 2015 Nepal earthquake datasets, respectively. Additionally, Bai et al. designed a lightweight knowledge distillation method for damage assessment on the xBD dataset, achieving a 30% reduction in model parameters and a 30–40% improvement in inference speed [97].
In essence, the xView2 Challenge and the xBD dataset have become benchmarks for evaluating the performance of novel networks. Even the ChangeOS framework model, trained on UAV imagery, employs xBD for accuracy testing [98].
III. Siamese Neural Network Structure
The xView2 Challenge has further spurred the adoption of Siamese neural network architectures beyond their initial use in signature recognition [99]. These networks have become a prominent choice for face recognition and change detection tasks [100]. Reference [101] proposes a Siamese neural network for simultaneous building damage localization and classification. Another study [102] introduces a multi-scale segmentation and scene change detection (MSSCD) strategy using pre- and post-disaster Very High-Resolution (VHR) satellite imagery for building damage detection; this approach was applied to the 2010 Haiti earthquake dataset. Following the 2017 Iran Sarpol-e Zahab earthquake, Alizadeh et al. proposed a method for detecting damaged areas using a fusion approach that combines Sentinel-1 radar and Sentinel-2 optical images [103]. This method leverages post-classification fusion of optical and radar image classifications to generate urban change maps.
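The papers above differ in detail, but they share the Siamese pattern: a single weight-shared encoder is applied to both epochs, and a head classifies the feature difference. A minimal PyTorch sketch follows (the architecture and class count are illustrative assumptions):

```python
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Shared-weight encoder for pre/post images; classify their difference."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(  # one encoder, applied to both epochs
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
        f_pre = self.encoder(pre).flatten(1)
        f_post = self.encoder(post).flatten(1)
        return self.head(torch.abs(f_post - f_pre))  # change as feature distance

logits = SiameseChangeNet()(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
```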
IV. SAR Sample-based Methods
The success of Siamese neural networks has extended to SAR image analysis, capitalizing on SAR’s desirable properties of strong penetration and high clarity. Kalantar et al. [104] evaluated the performance of CNN-based Siamese, fusion, and composite models for building damage detection using the 2016 Kumamoto earthquake dataset. Their approach employs uniform transformations to map heterogeneous optical and SAR images into a shared feature space for change detection, contrasting with prior methods.
Jiang et al. [105] proposed a novel deep homogeneous feature fusion (DHFF) model based on Image Style Transfer (IST) for achieving these uniform transformations. DHFF separates semantic content and style features from heterogeneous images, enabling transformation without compromising semantic information, particularly in areas of change. This approach leads to improved detection accuracy through more precise uniform modifications.
Pang et al. [106] presented a method for building damage assessment using semantic change detection on high-resolution SAR images. Their model leverages a Siamese-based module for damage change detection and an attention mechanism-based module for semantic segmentation of the damage map. A new SAR image dataset constructed from the Syrian-Aleppo battle was used for model training and testing to evaluate its effectiveness. The method successfully identified damaged areas of buildings and assessed the extent of damage.
Mazzanti et al. [107] analyzed a SAR image dataset (Sentinel-1, COSMO-SkyMed Spotlight, and COSMO-SkyMed StripMap) acquired following the 2016 Norcia earthquake in central Italy. They employed amplitude and coherence change detection tools to provide an approximation of the most severely affected areas.
Despite their advantages, SAR images have limitations. Their unique characteristics, stemming from radar technology, introduce challenges like radiation distortion, geometric distortions, and increased susceptibility to the Doppler effect compared to optical images. Recognizing these shortcomings, researchers have proposed various models specifically designed to improve SAR image samples for building damage detection.
Sun et al. [108] aimed for rapid earthquake damage assessment in the early post-disaster phase using Sentinel-1A radar images. Their model, applied to the 2021 Maduo County earthquake in China, demonstrated success in accurately obtaining the co-seismic deformation field and identifying damaged and undamaged building areas. Cho et al. [109] conducted a more granular study by classifying building damage levels in the 2016 Kumamoto earthquake dataset to understand the relationship between damaged building characteristics and SAR image features.
Beyond optimizing SAR samples themselves, approaches involving multimodal feature acquisition have also shown promise. Rao et al. [110] combined high-resolution building inventory data, ground shaking intensity maps, and pre- and post-event InSAR-derived surface changes to perform multi-level and binary damage classification for four recent earthquakes. They compared their predicted damage labels with ground truth data from on-site surveys and achieved successful identification of over 50% of damaged buildings using binary classification for three out of the four earthquakes studied.
• V. Studies of Non-natural Disaster Samples
The research domain of building damage assessment using remote sensing imagery extends beyond natural disasters to encompass human-induced events as well. This section highlights the application of deep learning techniques to databases related to wars, explosions, and other such occurrences.
ChangeOS framework: Zheng et al. [111] proposed an object-based deep semantic change detection framework called ChangeOS for building damage assessment in the context of human-induced disasters. This framework seamlessly integrates Object-Based Image Analysis (OBIA) with deep learning by employing a deep object localization network to generate precise building object detections. Furthermore, it integrates the object localization and damage classification networks into a unified semantic change detection network, enabling end-to-end building damage assessment.
Multi-disaster damage classification: Yuan et al. [112] conducted comprehensive studies on pre- and post-disaster data from Hurricane Idai in Beira, Mozambique (2019) and the Beirut explosion in Lebanon (2020). Their research involved classifying damage levels using both human and deep learning models while also exploring the impact of various image acquisition conditions on classification accuracy.
These examples showcase the adaptability and versatility of deep learning approaches in building damage assessment across diverse disaster scenarios, including those beyond natural phenomena.
• VI. Transfer Learning
The field of remote sensing image analysis for building damage assessment has significantly benefited from transfer learning techniques, particularly in the context of change detection. For example, Abdi et al. proposed a multi-feature fusion approach that leverages deep transfer learning [113]. This approach consists of four distinct steps: preprocessing, deep feature extraction, deep feature fusion, and transfer learning. The researchers conducted comparative experiments using post-disaster images from the 2010 Haiti earthquake. Their findings demonstrated the approach’s effectiveness in identifying damaged and undamaged buildings, achieving an overall accuracy exceeding 83%. This highlights the potential of transfer learning to improve change detection accuracy in building damage assessment tasks.
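To make the transfer learning step concrete, the following sketch shows a common fine-tuning recipe in PyTorch: freeze an ImageNet-pretrained backbone and retrain only a new damaged/undamaged classification head. This is a generic illustration under our own assumptions, not the pipeline of [113].

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Load ImageNet weights, freeze the feature extractor, and train only a new
# two-class (damaged / undamaged) head on post-disaster image patches.
model = resnet50(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new head, trainable by default
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```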
• VII. Object-level Change Detection
Object-level change detection focuses on analyzing, detecting, and quantifying changes in individual objects within remote sensing images. This method aims to (1) identify individual objects, including buildings, roads, and vegetation, and (2) detect changes between different time points, revealing what has changed between two or more images captured at different times.
The results of object-level change detection resemble those of object detection, offering a visually intuitive understanding of changes. As object detection technology has matured, researchers have developed several advanced techniques for object-level change detection.
For example, the approach proposed by Zhang et al. [114] prioritizes the overall characteristics and contextual relationships of changed objects to identify modifications in geographical entities. The detected changes are represented by bounding boxes, facilitating feature extraction and interpretation. Additionally, they introduce a data augmentation method called Alternative-Mosaic to accelerate training and improve model performance.
Wang et al. [115] developed a two-stage method for post-earthquake building damage assessment that combines a modified YOLOv4 object detection module with an SVM classification module. On a dataset of pre- and post-earthquake images, the model achieved high accuracy, exceeding 95.7% for building localization and 97.1% for damage classification.
These advancements demonstrate the growing effectiveness of object-level change detection in analyzing and quantifying various changes in remote sensing imagery.

5. Evaluation of Seven Cutting-Edge Open-Source Approaches/Models

Based on the comprehensive review and discussion presented earlier, it becomes evident that all these methods primarily revolve around two core concepts: single-phase methods and multi-phase (i.e., change detection) methods. After more than twenty years of development, these two main ideas have spawned numerous advanced automation algorithms.
However, these cutting-edge algorithms diverge significantly in their development motivation, implementation environment, approach, and performance on different test data. To foster a unified understanding of the distinct characteristics of mainstream and cutting-edge methods, we showcase seven cutting-edge, open-source approaches trained using different remote sensing datasets or network structures. We evaluate their applicability for detecting building damage before and after the 2023 Turkey earthquake and the 2024 Noto Peninsula earthquake in Japan, neither of which was included in the training data of these approaches. In this way, we aim to facilitate a deeper understanding of these methods, encompassing their implementation and respective characteristics, and of the current challenges in utilizing remote sensing images for rapid post-disaster damage assessment.
In addition, the collection and labeling of these images represented a significant undertaking, and we extend our sincere appreciation to the many open-source contributors who assisted with this effort. Table 4 provides basic information about these seven automation algorithms.

5.1. Methods, Datasets, and Implementation Details

The CPU utilized in this article is an Intel i9-13900K, and the GPU is an NVIDIA GeForce RTX 3090. With this hardware setup, we conducted experiments involving five algorithms based on change detection and two algorithms based on single-phase technology. Figure 9 briefly shows their workflows, and Table 5 directly compares their performance.
The first four algorithms are change detection algorithms, and their final output is a binary image where the changed area is marked in white, and the unchanged area is marked in black. Among them:
➀ CMI-Otsu: It employs a context-based adaptive texture extraction algorithm, which initially extracts the change magnitude image and then binarizes it.
➁ CVA-Otsu: This method computes transformation (change) vectors between corresponding pixels of the paired images and then applies binary thresholding to divide the image into changed and unchanged areas.
➂➃ Tras-LEVIR and Siam-CNN: These are two change detection algorithms based on Siamese neural networks. Tras-LEVIR incorporates a novel transformer module but is trained on the LEVIR-CD dataset, which lacks damaged-building labels. Siam-CNN employs a more traditional CNN architecture, emphasizing texture features in feature vector selection and utilizing a dataset that pays more attention to building damage.
➄ Siam-ResNet: It utilizes an improved Siamese neural network based on ResNet50 and was trained on the xBD dataset. The final model provides a five-category evaluation of the degree of building damage.
➅ YOLOv8: This is one of the most popular object detection algorithms. In this article, YOLOv8 is used to identify building damage in the xView dataset. The training set image size was adjusted to 1080 × 1080 pixels for improved accuracy, with batch size = 2 and epochs = 400. It achieved high accuracy for pre-event images but exhibited significant errors for post-event images.
➆ Sate-CNN: This algorithm uses a unique dataset label type that divides the image into small pieces and evaluates whether a building is damaged in each piece, marking the image with different colors based on this evaluation standard. However, it was found to have high instability, performing well in some samples and poorly in others during experiments.
Specifically, we have detailed some algorithm implementations to facilitate a deeper understanding of the usage of these open-source models:
➀ Configuring the environment: version and compatibility issues with library functions are the root cause of many algorithm bugs. For instance, much open-source code calls matrix operations through aliases in the NumPy np namespace that have been removed in newer versions of the library (see the first sketch after this list). Careful and meticulous troubleshooting is needed in the process.
➁ Paying attention to input and output specifications when using existing model weights: for example, the Sate-CNN model is saved as an HDF5 file, so its detailed code structure cannot be inspected directly. However, the input and output specifications can be obtained by probing the first and last network layers, leading to successful replication (see the second sketch below).
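Two short sketches illustrate these points. The first shows the kind of NumPy alias substitution that restores compatibility with older open-source code (np.float and related aliases were deprecated in NumPy 1.20 and removed in 1.24):

```python
import numpy as np

x = np.asarray([1.0, 2.0], dtype=float)  # was: dtype=np.float (removed in 1.24)
mask = np.zeros(x.shape, dtype=bool)     # was: dtype=np.bool  (removed in 1.24)
```

The second probes an HDF5-packaged Keras model to recover its input and output specifications; the file name is a placeholder for the released weights:

```python
from tensorflow import keras

model = keras.models.load_model("sate_cnn.h5", compile=False)  # hypothetical path
model.summary()                                # layer-by-layer structure
print(model.input_shape, model.output_shape)   # what to feed in / expect out
```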

5.2. Results

The detection data used in this article come from satellite images of building damage before and after the 2023 Turkey earthquake and the 2024 Noto Peninsula earthquake in Japan, as well as from the xView2 dataset. Typical prediction results are extracted and displayed below.

5.2.1. CMI-Otsu

The CMI-Otsu algorithm, a pure-algorithm method for change detection, utilizes an adaptive spatial context extraction algorithm to investigate the contextual information surrounding pixels. It then quantitatively assesses the degree of change between paired pixels by computing the band distance established by the paired adaptive regions surrounding the corresponding pixels. Following the generation of the change magnitude image (CMI), a binary thresholding method employing double window flexible step search (DFSS) segments the CMI into a binary change detection map. Figure 10 displays the processing results of this algorithm on images taken before and after the earthquake in the Noto Peninsula, Japan. The method proves sensitive to noise and handles the color information of the three-channel images poorly, resulting in a high number of false positives.
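As a simplified illustration of this pipeline (not the authors' exact implementation), the following sketch computes a change magnitude image from local context windows and binarizes it with Otsu's method; a fixed mean filter stands in for the adaptive context regions, and a global Otsu threshold replaces the DFSS search.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu

def change_magnitude_image(pre, post, window=7):
    """Band distance between local contexts of two co-registered images."""
    mag = np.zeros(pre.shape[:2])
    for band in range(pre.shape[2]):
        # A fixed mean filter stands in for the adaptive context regions.
        ctx_pre = uniform_filter(pre[..., band], size=window)
        ctx_post = uniform_filter(post[..., band], size=window)
        mag += (ctx_pre - ctx_post) ** 2
    return np.sqrt(mag)

# Synthetic stand-ins for real pre-/post-event imagery of shape (H, W, bands).
rng = np.random.default_rng(0)
pre = rng.random((256, 256, 3))
post = pre.copy()
post[100:140, 100:140] += 0.5           # simulate a changed (damaged) block

cmi = change_magnitude_image(pre, post)
binary_map = cmi > threshold_otsu(cmi)  # True (white) = changed pixels
```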

5.2.2. CVA-Otsu

The CVA-Otsu method leverages a change vector analysis (CVA) approach to detect modifications. It quantifies the degree of change between two images by comparing the magnitude of transformation vectors for corresponding pixels. Subsequently, a thresholding technique employing Otsu's method segments the image into regions of change and no change, resulting in a binary change detection map. Notably, the method incorporates a pre-processing step based on imageRegr for efficient noise reduction, enabling robust performance even in scenarios with low building density. Figure 11 shows the results of applying this method to the previous sample and to a new sample taken from an urban area with a denser building cluster. The method performs well on the image shown in Figure 11d but is less effective on Figure 11c, which features the denser building cluster: many undamaged buildings, particularly those in the upper corner of Figure 11c, were erroneously identified as changed.
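A minimal sketch of the CVA-plus-Otsu idea is given below; it assumes co-registered multiband arrays and omits the imageRegr pre-processing step.

```python
import numpy as np
from skimage.filters import threshold_otsu

def cva_change_map(pre, post):
    # Change vector = per-pixel spectral difference across all bands; its
    # Euclidean norm is the change magnitude, which Otsu's method binarizes.
    delta = post.astype(np.float64) - pre.astype(np.float64)
    magnitude = np.linalg.norm(delta, axis=-1)
    return magnitude > threshold_otsu(magnitude)
```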

5.2.3. Tras-LEVIR

The transformer architecture has emerged as a dominant deep learning model, achieving remarkable success in the domain of change detection. However, its application to building damage detection remains relatively unexplored in scholarly works. The Tras-LEVIR algorithm, while demonstrably effective for urban construction change detection, suffers from a limitation: its training dataset, LEVIR-CD, lacks explicit information on building damage.
During testing, it was observed that this method is insensitive to the debris resulting from building collapses and to partial damage to buildings. In particular, no substantial discernible results could be obtained from the data samples taken before and after the earthquake in the Noto Peninsula, Japan. However, on the higher-quality samples from the Turkey earthquake, this algorithm yields good results, and it performs excellently on the xView2 dataset. Figure 12 presents two sets of test data, which demonstrate the robust generalization capability of the transformer model.

5.2.4. Siam-CNN

The Siam-CNN algorithm utilizes a unique Siamese convolutional neural network architecture. Beyond relying solely on convolutional kernels for feature extraction, it incorporates a gray-level co-occurrence matrix (GLCM) module to capture information pertaining to the direction, interval, magnitude, and rate of change within the image's grayscale values. This module facilitates a quantitative description of the statistical attributes of textural features. Notably, the pre-processing stage in Siam-CNN employs the conventional dual-polarization clutter cancellation (DPCA) method for denoising SAR samples.
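For readers unfamiliar with GLCM statistics, the following sketch extracts the classical co-occurrence descriptors from a grayscale patch; the use of scikit-image and the chosen distances, angles, and quantization levels are our own assumptions rather than the authors' toolchain.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, levels=32):
    """Classical co-occurrence texture descriptors for one grayscale patch."""
    q = (patch / (patch.max() + 1e-9) * (levels - 1)).astype(np.uint8)  # quantize
    glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```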
The training samples are SAR images characterized by high clarity, minimal noise, and strong penetration, and the model emphasizes the texture features of remote sensing images. However, it did not yield good results on the two sets of low-resolution samples obtained from the Noto Peninsula, Japan, as depicted in Figure 13. Particularly in the second set of images, which has complex texture features, the detection results are extremely unreasonable and lack any meaningful information. This discrepancy is attributed to the model's more traditional CNN architecture, which, in contrast to Tras-LEVIR, appears more prone to overfitting and exhibits lower versatility.

5.2.5. Siam-ResNet

The xView2 dataset, renowned for remote sensing building damage detection, has garnered significant attention from researchers since its release, leading to the development of numerous open-source network architectures. However, not all networks exhibit high efficacy and generalizability. Considering hardware constraints, prediction accuracy, and speed, we opted for a Siamese neural network architecture based on ResNet-50. This architecture integrates a Feature Pyramid Network (FPN) module within the feature extractor to handle building samples of diverse sizes, accelerating the training process and reducing hardware demands. The authors trained this model for only two days using two NVIDIA 3060 Ti graphics cards. Figure 14 depicts the algorithm's test results and training accuracy curves for pre- and post-earthquake image sets from the Noto Peninsula, Japan. The results demonstrate the network's exceptional performance and suggest that further gains are possible with optimized training parameters on a more powerful hardware platform.
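A minimal sketch of such a Siamese ResNet-50 model is shown below; the FPN module is omitted for brevity, and the decoder head is an illustrative assumption rather than the trained architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class SiamResNet(nn.Module):
    """Shared ResNet-50 encoder; concatenated features feed a 5-class head."""
    def __init__(self, num_classes=5):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V1")
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # drop pool/fc
        self.head = nn.Sequential(
            nn.Conv2d(2 * 2048, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, 1),
        )

    def forward(self, pre, post):
        feats = torch.cat([self.encoder(pre), self.encoder(post)], dim=1)
        logits = self.head(feats)  # coarse damage map at 1/32 resolution
        return F.interpolate(logits, size=pre.shape[-2:], mode="bilinear")
```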
However, despite the visually more intuitive nature of the Turkish samples compared to the Japanese ones, the method did not yield effective results in this case because the Turkish samples were not well pre-processed. Due to variations in camera angle, the same building might present different facades in pre- and post-disaster imagery. While these variations pose minimal challenges for human observation, they can significantly impact AI models.

5.2.6. YOLOv8

The YOLO series of object detection models is renowned for its lightweight design and high robustness. The YOLOv8 model, released in January 2023, introduced new features and beneficial improvements over the previously popular YOLOv5 model. However, its adaptation to the xView dataset remains at the YOLOv5 stage, and the available configuration files exhibit significant issues when invoking external libraries such as NumPy, TensorFlow, and Keras. No open-source YOLOv8 configuration for xView is currently available on GitHub. Consequently, we modified the YOLOv5 configuration file, trained for 400 epochs on the NVIDIA 3090 GPU, and eventually obtained the corresponding weight model; a sketch of an equivalent run with the current ultralytics interface is given below. Figure 15 displays multiple prediction results of this model, while Figure 16 illustrates its training statistics.
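The following sketch shows how such a training and inference run could be launched with the ultralytics package. The dataset file xview.yaml is hypothetical and would need to describe the xView images and classes in the expected format; this is an illustration, not our exact YOLOv5-based configuration.

```python
from ultralytics import YOLO

model = YOLO("yolov8m.pt")  # COCO-pretrained weights as the starting point
model.train(data="xview.yaml", imgsz=1088, batch=2, epochs=400)
# (imgsz must be a multiple of the model stride 32, so 1080 rounds up to 1088.)
metrics = model.val()                           # validation-split metrics
results = model.predict("post_event_tile.png")  # inference on a post-event tile
```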
From the results, it is evident that the model responds well to undamaged buildings but struggles to provide appropriate results for damaged ones. Future studies should focus on improving its capability to identify damaged buildings.

5.2.7. Sate-CNN

The Sate-CNN architecture leverages a conventional CNN structure during training. It employs a dense block, where convolutional layers are repeated in three sets: three, six, and twelve repetitions for dilation factors of 1, 2, and 4, respectively, followed by a final set of eight repetitions with a dilation factor of 2. However, the network’s innovation lies in its unique approach to utilizing data. Unlike conventional CNNs, Sate-CNN segments a post-disaster image containing house damage information into smaller blocks and subsequently assesses the damage level within each individual block.
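A sketch of the described dilation pattern in PyTorch follows; the channel width of 64 is an assumption on our part, as the original implementation is only distributed as HDF5 weights.

```python
import torch
import torch.nn as nn

def conv_block(channels, dilation):
    # A 3x3 convolution whose padding equals its dilation preserves spatial size.
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
    )

# (repetitions, dilation) pattern described in the text.
pattern = [(3, 1), (6, 2), (12, 4), (8, 2)]
backbone = nn.Sequential(
    *[conv_block(64, d) for reps, d in pattern for _ in range(reps)]
)
out = backbone(torch.rand(1, 64, 64, 64))  # spatial dimensions are preserved
```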
During testing, this model achieved excellent results on the samples from the earthquake in the Noto Peninsula, Japan, as well as on the xView2 samples, as shown in Figure 17. However, its most significant drawback is the absence of an automated threshold segmentation system: after the scoring stage, threshold segmentation must be executed manually. During this process, we observed significant variance in the score distributions among different sample groups, so an automated threshold divider would not necessarily yield good tri-classification results.
Nonetheless, in the context of damage identification, this method exhibits strong performance and holds excellent prospects for development. It may achieve more accurate damage-degree evaluations when integrated into a neural network equipped with novel deep learning techniques such as transformers, momentum contrast (MoCo), or zero-shot learning.

5.3. Discussion on Analysis Results

Based on the aforementioned analysis, it is evident that data-driven methods, which integrate advanced deep learning algorithms with past building damage data, outperform pure algorithmic methods in terms of accuracy and generalization, even in situations where image quality is significantly compromised.
Furthermore, data-driven methods offer a distinct advantage in terms of promptness, as once they are well-trained, the input-output process is immediate. For example, the CMI-Otsu algorithm takes approximately 2 min to process a 1080 × 720 image, whereas the Tras-LEVIR model and YOLOv8 only require 3 s and 10 s, respectively.
However, data-driven methods face several challenges:
➀ The overfitting problem: None of the models in the present study demonstrate high robustness across the three different test sample groups (the Noto Peninsula earthquake in Japan, the earthquake in Turkey, and the xView2 dataset). In contrast, the three change detection algorithms show promising results when image quality is high but exhibit significant deviations when dealing with low-precision remote sensing images, such as those from Google Earth images after the Turkey earthquake.
➁ Training reliable deep learning models necessitates substantial hardware resources. In practice, training a large enough model on a single GPU often requires reducing training parameters and rounds to ensure memory constraints are met. Conversely, training a smaller model can make it challenging to meet industrial requirements.
➂ There is a significant demand for samples, especially apparent in models related to the xView and xView2 datasets. Label categories with a large sample capacity are easily detected, while those with a small capacity are frequently overlooked.
Regarding single-phase and multi-phase methods, it is observed that the generalization of multi-phase methods is more sensitive to the quality of remote sensing images. In situations where the images have suboptimal quality, manual intervention is necessary to ensure satisfactory accuracy. This presents a technical gap that needs to be addressed in the development of multi-phase methods. On the other hand, while multi-phase methods can provide more informative data, it is noted that the overfitting problem tends to be more pronounced compared to single-phase methods. Nevertheless, there is optimism as the results of Tras-LEVIR have demonstrated that the introduction of transformers can help mitigate this issue.

6. Challenges and Future Directions

Among various methods for building damage detection, image recognition methods based on remote sensing imagery, especially data-driven methods born after the widespread development of deep learning technology, stand out as the ones with the lowest implementation cost and the fastest speed. Despite the reviewed advancements in data-driven methods, their widespread application in post-disaster decision-making remains limited. In pursuit of practical application, we summarize the challenges encountered in this field and discuss potential future directions.

6.1. Acquisition of High-Quality Samples

A consistent challenge in post-disaster building damage detection, particularly for change detection algorithms, is the acquisition of high-quality samples. The analysis of such algorithms on samples from the Turkey earthquake demonstrates this challenge: significant disparities between pre- and post-event samples, such as large differences in shooting angle, substantial color variations, and outdated pre-event imagery, severely compromise accuracy. However, this issue can be alleviated by introducing transformer modules into deep learning algorithms or by enhancing sample quality through additional pre-processing, such as the image co-registration sketched below.
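As one example of such pre-processing, the following sketch co-registers a post-event image to its pre-event counterpart using OpenCV's ECC algorithm; the file names are placeholders, and an affine motion model is an assumption that may be too weak for large viewpoint changes.

```python
import cv2
import numpy as np

# Placeholder file names; both images must cover the same ground area.
pre = cv2.imread("pre_event.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
post = cv2.imread("post_event.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Estimate an affine warp that best aligns the post-event image to the
# pre-event one by maximizing the ECC similarity measure.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
_, warp = cv2.findTransformECC(pre, post, warp, cv2.MOTION_AFFINE, criteria)
aligned = cv2.warpAffine(post, warp, pre.shape[::-1],
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```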

6.2. Integration of Multimodal Data

It is widely acknowledged that AI models trained solely on one type of data, such as satellite remote sensing imagery, have inherent limitations in their capabilities. One solution is to introduce data from additional dimensions into these methods. For example, unmanned aerial vehicle (UAV) images can be combined with satellite imagery, or physical information, such as peak ground acceleration, earthquake intensity, and building inventory data, can be incorporated alongside remote sensing imagery to form a physics-data-driven method, as sketched below.
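A minimal sketch of such a physics-data-driven fusion model is given below; the choice of backbone, the three physical covariates, and the layer sizes are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PhysicsDataFusion(nn.Module):
    """Image embedding from a CNN concatenated with scalar physical covariates
    (e.g., peak ground acceleration, intensity, building inventory features)."""
    def __init__(self, n_phys=3, num_classes=2):
        super().__init__()
        cnn = resnet18(weights="IMAGENET1K_V1")
        cnn.fc = nn.Identity()  # expose the 512-dimensional image embedding
        self.cnn = cnn
        self.head = nn.Sequential(
            nn.Linear(512 + n_phys, 128), nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),
        )

    def forward(self, image, phys):
        return self.head(torch.cat([self.cnn(image), phys], dim=1))

model = PhysicsDataFusion()
logits = model(torch.rand(2, 3, 224, 224), torch.rand(2, 3))  # batch of two
```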

6.3. Focus on Object-Level Identification Task

It is essential to emphasize that object-level, rather than pixel-level, detection results align better with practical requirements. The primary concern is to obtain a visually comprehensible representation of structural damage information within a given area. Post-disaster, infrastructure and building managers, as well as rescue workers, urgently need to ascertain the existence of collapsed houses, their locations, their damage levels, and the surrounding traffic conditions. Such information goes beyond the capabilities of binary or ternary images, necessitating outcomes derived at the object level. Currently, besides the majority of single-temporal methods, object-level change detection algorithms are also capable of providing such results. The latter can offer a more extensive array of information, theoretically leading to superior outcomes, and should be a central focus of future research endeavors.

6.4. Exploration of New Methods

In the domain of single-temporal technology, attention should be directed toward the development of self-supervised learning techniques, which have made significant progress in remote sensing image processing but remain underexplored in the context of building damage recognition. Similar gaps exist in the domains of distillation learning and transfer learning. On the other hand, among the seven cutting-edge models featured in this study, only one utilized a transformer; its results nonetheless demonstrated formidable processing capability and greater robustness against variations in sample quality, underscoring the need for further exploration of transformer-based methods.

6.5. Method Selection

It is acknowledged that the selection of remote sensing-based methods for rapid damage evaluation must balance prediction accuracy, computation speed, and computational (training) complexity. Achieving high accuracy on unknown or untrained datasets is always challenging for these methods, while reconstructing algorithms, readjusting parameters, or retraining models after a disaster is time-consuming. However, recent advancements in self-supervised learning and incremental learning offer the potential to save time in training new models and to improve speed without compromising accuracy.

7. Conclusions

This study provides a systematic introduction and literature review of advancements in methods for rapidly detecting regional building damage through remote sensing images after disasters, particularly compound disasters. We categorize these damage detection methods into three stages (visual method stage, pure algorithm method stage, and data-driven method stage), two main classes (single-temporal and multi-temporal), and several branches (such as texture-based, threshold segmentation-based, multi-feature-based, machine learning-based, deep learning-based, CNN-based, innovative network-based, transfer learning, and object-level). For each method type, we provide an overview of representative works, detailing methodologies and employed data through statistical analysis.
This comprehensive approach aims to facilitate a deeper understanding of these methods, encompassing their implementation and characteristics, and ultimately fostering a unified perspective on the current challenges associated with utilizing remote sensing imagery for rapid post-disaster damage assessment. Subsequently, the efficacy of seven leading-edge detection algorithms is evaluated using real data samples from the 2024 Noto Peninsula earthquake and the 2023 Turkey earthquake.
The findings highlight that the true potential of data-driven methods (mainly AI-based methods) lies in their ability to augment human decision-making rather than supplant it entirely, as individual models exhibit limitations in diverse scenarios. Therefore, establishing a framework that integrates the results of these models as crucial supplements within the decision-making process becomes crucial. Furthermore, enhancing the visualization of results, exemplified by the innovative Sate-CNN algorithm’s color-coded damage level maps, merits further exploration. Finally, the integration of physics-driven and data-driven approaches, forming a physics-data-driven method, presents a promising avenue for future research.

Author Contributions

Z.X. methodology, validation, writing—original draft preparation. J.G. conceptualization, methodology, writing—original draft preparation. J.Z. supervision, writing—review and editing. X.H. conceptualization—review and critical revision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by Japan Society for the Promotion of Science (JSPS), KAKENHI, Grant-in-Aid for Young Scientists, Grant/Award Number: 22K14313.

Data Availability Statement

All datasets and algorithmic models used in this paper are available in open source at: https://github.com/wenhwu/awesome-remote-sensing-change-detection (accessed on 30 November 2023); https://github.com/likyoo/open-cd (accessed on 19 December 2023); https://github.com/PaddlePaddle/PaddleRS (accessed on 20 December 2023). The other data that support this study are available from the corresponding author upon reasonable request.

Acknowledgments

We would like to express our sincere gratitude to the members of Professor Zhang’s research group for their invaluable assistance and support. We also thank the editors and reviewers for their valuable contributions during the review and proofreading of this manuscript; their meticulousness and constructive feedback significantly enhanced the quality of our paper. Finally, we convey our sincere gratitude to the dedicated open-source contributors in this discipline, such as “wenhwu”, “YWRXJRY Group”, and “PaddlePaddle”, whose efforts have helped us greatly.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Parisi, F.; Augenti, N. Earthquake damages to cultural heritage constructions and simplified assessment of artworks. Eng. Fail. Anal. 2013, 34, 735–760. [Google Scholar] [CrossRef]
  2. Goretti, A.; Di Pasquale, G. An overview of post-earthquake damage assessment in Italy. In Proceedings of the Eeri Invitational Workshop. An Action Plan to Develop Earthquake Damage and Loss Data Protocols, Pasadena, CA, USA, 19–20 September 2002. [Google Scholar]
  3. Wang, Y. Damage Assessment in Asymmetric Buildings Using Vibration Techniques. Ph.D. Thesis, Queensland University of Technology, Brisbane, Australia, 2018. [Google Scholar]
  4. Pan, H.; Kusunoki, K.; Hattori, Y. Capacity-curve-based damage evaluation approach for reinforced concrete buildings using seismic response data. Eng. Struct. 2019, 197, 109386. [Google Scholar] [CrossRef]
  5. Takeuchi, T.; Mita, A. Damage localization for multi-story buildings focusing on shift in the center of rigidity using an adaptive extended Kalman filter. In Proceedings of the Structural Health Monitoring and Inspection of Advanced Materials, Aerospace, and Civil Infrastructure, San Diego, CA, USA, 9–12 March 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9437, pp. 258–271. [Google Scholar]
  6. Lei, Y.; Jiang, Y.; Xu, Z. Structural damage detection with limited input and output measurement signals. Mech. Syst. Signal Process. 2012, 28, 229–243. [Google Scholar] [CrossRef]
  7. Li, D.; Zhang, J. Finite element model updating through derivative-free optimization algorithm. Mech. Syst. Signal Process. 2023, 185, 109726. [Google Scholar] [CrossRef]
  8. He, X.; Unjoh, S.; Li, D. Unscented Kalman filter with performance recovery strategy for parameter estimation of isolation structures. Struct. Control Health Monit. 2022, 29, e3116. [Google Scholar] [CrossRef]
  9. Soltaninejad, M.; Soroushian, S.; Livani, H. Application of short-time matrix pencil method for high-frequency damage detection in structural system. Struct. Control Health Monit. 2020, 27, e2589. [Google Scholar] [CrossRef]
  10. Stramondo, S.; Bignami, C.; Chini, M.; Pierdicca, N.; Tertulliani, A. Satellite radar and optical remote sensing for earthquake damage detection: Results from different case studies. Int. J. Remote Sens. 2006, 27, 4433–4447. [Google Scholar] [CrossRef]
  11. Mardiyono, M.; Suryanita, R.; Adnan, A. Intelligent monitoring system on prediction of building damage index using neural-network. TELKOMNIKA (Telecommun. Comput. Electron. Control) 2012, 10, 155–164. [Google Scholar] [CrossRef]
  12. Chiauzzi, L.; Masi, A.; Mucciarelli, M.; Vona, M.; Pacor, F.; Cultrera, G.; Gallovič, F.; Emolo, A. Building damage scenarios based on exploitation of Housner intensity derived from finite faults ground motion simulations. Bull. Earthq. Eng. 2012, 10, 517–545. [Google Scholar] [CrossRef]
  13. Ogawa, N.; Yamazaki, F. Photo-interpretation of building damage due to earthquakes using aerial photographs. In Proceedings of the 12th World Conference on Earthquake Engineering, Auckland, New Zealand, 20 January–4 February 2000; p. 1906. [Google Scholar]
  14. Zhu, X.X.; Montazeri, S.; Ali, M.; Hua, Y.; Wang, Y.; Mou, L.; Shi, Y.; Xu, F.; Bamler, R. Deep learning meets SAR: Concepts, models, pitfalls, and perspectives. IEEE Geosci. Remote Sens. Mag. 2021, 9, 143–172. [Google Scholar] [CrossRef]
  15. Li, W.; Zou, B.; Zhang, L. A Robust Man-Made Target Detection Method Based on Relative Spectral Stationarity for High-Resolution SAR Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5240320. [Google Scholar] [CrossRef]
  16. Peng, B.; Peng, B.; Zhou, J.; Xie, J.; Liu, L. Scattering model guided adversarial examples for SAR target recognition: Attack and defense. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5236217. [Google Scholar] [CrossRef]
  17. Zhao, J.; Li, Y.; Matgen, P.; Pelich, R.; Hostache, R.; Wagner, W.; Chini, M. Urban-aware U-Net for large-scale urban flood mapping using multitemporal Sentinel-1 intensity and interferometric coherence. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4209121. [Google Scholar] [CrossRef]
  18. Dong, L.; Shan, J. A comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS J. Photogramm. Remote Sens. 2013, 84, 85–99. [Google Scholar] [CrossRef]
  19. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sensing 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
  20. Ge, P.; Gokon, H.; Meguro, K. A review on synthetic aperture radar-based building damage assessment in disasters. Remote Sens. Environ. 2020, 240, 111693. [Google Scholar] [CrossRef]
  21. Ghaedi, K.; Gordan, M.; Ismail, Z.; Hashim, H.; Talebkhah, M. A literature review on the development of remote sensing in damage detection of civil structures. J. Eng. Res. Rep. 2021, 20, 39–56. [Google Scholar] [CrossRef]
  22. Yamazaki, F.; Kouchi, K.; Matsuoka, M.; Kohiyama, M.; Muraoka, N. Damage detection from high-resolution satellite images for the 2003 Boumerdes, Algeria earthquake. In Proceedings of the 13th World Conference on Earthquake Engineering, International Association for Earthquake Engineering, Vancouver, BC, Canada, 1–6 August 2004; p. 13. [Google Scholar]
  23. Kaya, Ş.; Curran, P.; Llewellyn, G. Post-earthquake building collapse: A comparison of government statistics and estimates derived from SPOT HRVIR data. Int. J. Remote Sens. 2005, 26, 2731–2740. [Google Scholar] [CrossRef]
  24. Jin, D.; Wang, X.; Dou, A.; Dong, Y. Post-earthquake building damage assessment in Yushu using airborne SAR imagery. Earthq. Sci. 2011, 24, 463–473. [Google Scholar] [CrossRef]
  25. Yamazaki, F.; Matsuoka, M. Remote sensing technologies in post-disaster damage assessment. J. Earthq. Tsunami 2007, 1, 193–210. [Google Scholar] [CrossRef]
  26. Polli, D.; Dell’Acqua, F.; Gamba, P.; Lisini, G. Earthquake damage assessment from post-event only radar satellite data. In Proceedings of the Eighth International Workshop on Remote Sensing for Disaster Response, Tokyo, Japan, 30 September–1 October 2010; Volume 1. [Google Scholar]
  27. Li, X.; Yang, W.; Ao, T.; Li, H.; Chen, W. An improved approach of information extraction for earthquake-damaged buildings using high-resolution imagery. J. Earthq. Tsunami 2011, 5, 389–399. [Google Scholar] [CrossRef]
  28. Balz, T.; Liao, M. Building-damage detection using post-seismic high-resolution SAR satellite data. Int. J. Remote Sens. 2010, 31, 3369–3391. [Google Scholar] [CrossRef]
  29. Shi, L.; Sun, W.; Yang, J.; Li, P.; Lu, L. Building collapse assessment by the use of postearthquake Chinese VHR airborne SAR. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2021–2025. [Google Scholar] [CrossRef]
  30. Gong, L.; Wang, C.; Wu, F.; Zhang, J.; Zhang, H.; Li, Q. Earthquake-induced building damage detection with post-event sub-meter VHR TerraSAR-X staring spotlight imagery. Remote Sens. 2016, 8, 887. [Google Scholar] [CrossRef]
  31. Bai, Y.; Adriano, B.; Mas, E.; Koshimura, S. Machine learning based building damage mapping from the ALOS-2/PALSAR-2 SAR imagery: Case study of 2016 Kumamoto earthquake. J. Disaster Res. 2017, 12, 646–655. [Google Scholar] [CrossRef]
  32. Bai, Y.; Adriano, B.; Mas, E.; Gokon, H.; Koshimura, S. Object-based building damage assessment methodology using only post event ALOS-2/PALSAR-2 dual polarimetric SAR intensity images. J. Disaster Res. 2017, 12, 259–271. [Google Scholar] [CrossRef]
  33. Ji, Y.; Sri Sumantyo, J.T.; Chua, M.Y.; Waqar, M.M. Earthquake/tsunami damage assessment for urban areas using post-event PolSAR data. Remote Sens. 2018, 10, 1088. [Google Scholar] [CrossRef]
  34. Cooner, A.J.; Shao, Y.; Campbell, J.B. Detection of urban damage using remote sensing and machine learning algorithms: Revisiting the 2010 Haiti earthquake. Remote Sens. 2016, 8, 868. [Google Scholar] [CrossRef]
35. Bhangale, U.; Durbha, S.; Potnis, A.; Shinde, R. Rapid earthquake damage detection using deep learning from VHR remote sensing images. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2654–2657. [Google Scholar]
  36. Ma, H.; Liu, Y.; Ren, Y.; Yu, J. Detection of collapsed buildings in post-earthquake remote sensing images based on the improved YOLOv3. Remote Sens. 2019, 12, 44. [Google Scholar] [CrossRef]
  37. Valentijn, T.; Margutti, J.; van den Homberg, M.; Laaksonen, J. Multi-hazard and spatial transferability of a cnn for automated building damage assessment. Remote Sens. 2020, 12, 2839. [Google Scholar] [CrossRef]
  38. Miura, H.; Aridome, T.; Matsuoka, M. Deep learning-based identification of collapsed, non-collapsed and blue tarp-covered buildings from post-disaster aerial images. Remote Sens. 2020, 12, 1924. [Google Scholar] [CrossRef]
  39. Bai, Y.; Gao, C.; Singh, S.; Koch, M.; Adriano, B.; Mas, E.; Koshimura, S. A framework of rapid regional tsunami damage recognition from post-event TerraSAR-X imagery using deep neural networks. IEEE Geosci. Remote Sens. Lett. 2017, 15, 43–47. [Google Scholar] [CrossRef]
  40. Qiao, W.; Shen, L.; Wang, J.; Yang, X.; Li, Z. A weakly supervised semantic segmentation approach for damaged building extraction from postearthquake high-resolution remote-sensing images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 6002705. [Google Scholar] [CrossRef]
  41. Nex, F.; Duarte, D.; Tonolo, F.G.; Kerle, N. Structural building damage detection with deep learning: Assessment of a state-of-the-art CNN in operational conditions. Remote Sens. 2019, 11, 2765. [Google Scholar] [CrossRef]
  42. Zhan, Y.; Liu, W.; Maruyama, Y. Damaged building extraction using modified mask R-CNN model using post-event aerial images of the 2016 Kumamoto earthquake. Remote Sens. 2022, 14, 1002. [Google Scholar] [CrossRef]
  43. Seydi, S.T.; Rastiveis, H.; Kalantar, B.; Halin, A.A.; Ueda, N. BDD-Net: An End-to-End Multiscale Residual CNN for Earthquake-Induced Building Damage Detection. Remote Sens. 2022, 14, 2214. [Google Scholar] [CrossRef]
  44. Ma, H.; Liu, Y.; Ren, Y.; Wang, D.; Yu, L.; Yu, J. Improved CNN classification method for groups of buildings damaged by earthquake, based on high resolution remote sensing images. Remote Sens. 2020, 12, 260. [Google Scholar] [CrossRef]
  45. Khajwal, A.B.; Cheng, C.; Noshadravan, A. Post-disaster damage classification based on deep multi-view image fusion. Comput.-Aided Civ. Infrastruct. Eng. 2022, 38, 528–544. [Google Scholar] [CrossRef]
  46. Wang, Y.; Feng, W.; Jiang, K.; Li, Q.; Lv, R.; Tu, J. Real-Time Damaged Building Region Detection Based on Improved YOLOv5s and Embedded System From UAV Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 4205–4217. [Google Scholar] [CrossRef]
  47. Adriano, B.; Xia, J.; Baier, G.; Yokoya, N.; Koshimura, S. Multi-source data fusion based on ensemble learning for rapid building damage mapping during the 2018 sulawesi earthquake and tsunami in Palu, Indonesia. Remote Sens. 2019, 11, 886. [Google Scholar] [CrossRef]
  48. Song, D.; Tan, X.; Wang, B.; Zhang, L.; Shan, X.; Cui, J. Integration of super-pixel segmentation and deep-learning methods for evaluating earthquake-damaged buildings using single-phase remote sensing imagery. Int. J. Remote Sens. 2020, 41, 1040–1066. [Google Scholar] [CrossRef]
  49. Nie, Y.; Zeng, Q.; Zhang, H.; Wang, Q. Building damage detection based on OPCE matching algorithm using a single post-event PolSAR data. Remote Sens. 2021, 13, 1146. [Google Scholar] [CrossRef]
  50. Hu, D.; Li, S.; Du, J.; Cai, J. Automating Building Damage Reconnaissance to Optimize Drone Mission Planning for Disaster Response. J. Comput. Civ. Eng. 2023, 37, 04023006. [Google Scholar] [CrossRef]
  51. Xie, Y.; Feng, D.; Chen, H.; Liu, Z.; Mao, W.; Zhu, J.; Hu, Y.; Baik, S.W. Damaged building detection from post-earthquake remote sensing imagery considering heterogeneity characteristics. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4708417. [Google Scholar] [CrossRef]
  52. Yang, W.; Zhang, X.; Luo, P. Transferability of convolutional neural network models for identifying damaged buildings due to earthquake. Remote Sens. 2021, 13, 504. [Google Scholar] [CrossRef]
  53. Wang, Y.; Jing, X.; Xu, Y.; Cui, L.; Zhang, Q.; Li, H. Geometry-guided semantic segmentation for post-earthquake buildings using optical remote sensing images. Earthq. Eng. Struct. Dyn. 2023, 52, 3392–3413. [Google Scholar] [CrossRef]
  54. Gong, L.; Li, Q.; Zhang, J. Earthquake building damage detection with object-oriented change detection. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium-IGARSS, Melbourne, Australia, 21–26 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 3674–3677. [Google Scholar]
  55. Sakamoto, M.; Takasago, Y.; Uto, K.; Kakumoto, S.; Kosugi, Y.; Doihara, T. Automatic detection of damaged area of Iran earthquake by high-resolution satellite imagery. In Proceedings of the IGARSS 2004—2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 2, pp. 1418–1421. [Google Scholar]
  56. Saito, K.; Spence, R.; de C Foley, T.A. Visual damage assessment using high-resolution satellite images following the 2003 Bam, Iran, earthquake. Earthq. Spectra 2005, 21, 309–318. [Google Scholar] [CrossRef]
  57. Yamazaki, F.; Yano, Y.; Matsuoka, M. Visual damage interpretation of buildings in Bam city using QuickBird images following the 2003 Bam, Iran, earthquake. Earthq. Spectra 2005, 21, 329–336. [Google Scholar] [CrossRef]
  58. Adams, B.J.; Mansouri, B.; Huyck, C.K. Streamlining Post-Earthquake Data Collection and Damage Assessment for the 2003 Bam, Iran, Earthquake Using VIEWS™ (Visualizing Impacts of Earthquakes With Satellites). Earthq. Spectra 2005, 21, 213–218. [Google Scholar] [CrossRef]
  59. Gamba, P.; Casciati, F. GIS and image understanding for near-real-time earthquake damage assessment. Photogramm. Eng. Remote Sens. 1998, 64, 987–994. [Google Scholar]
  60. Hancilar, U.; Taucer, F.; Corbane, C. Empirical fragility functions based on remote sensing and field data after the 12 January 2010 Haiti earthquake. Earthq. Spectra 2013, 29, 1275–1310. [Google Scholar] [CrossRef]
  61. Yusuf, Y.; Matsuoka, M.; Yamazaki, F. Damage assessment after 2001 Gujarat earthquake using Landsat-7 satellite images. J. Indian Soc. Remote Sens. 2001, 29, 17–22. [Google Scholar] [CrossRef]
  62. Sugiyama, M.; Abe, H.S.K. Detection of earthquake damaged areas from aerial photographs by using color and edge information. In Proceedings of the 5th Asian Conference on Computer Vision, Melbourne, Australia, 22–25 January 2002; pp. 23–25. [Google Scholar]
  63. Dou, A.; Zhang, J.; Tian, Y. Retrieve seismic damages from remote sensing images by change detection algorithm. In Proceedings of the IGARSS 2003—2003 IEEE International Geoscience and Remote Sensing Symposium. Proceedings (IEEE Cat. No. 03CH37477), Toulouse, France, 21–25 July 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 4, pp. 2407–2409. [Google Scholar]
  64. Kohiyama, M.; Yamazaki, F. Damage detection for 2003 Bam, Iran, earthquake using Terra-ASTER satellite imagery. Earthq. Spectra 2005, 21, 267–274. [Google Scholar] [CrossRef]
  65. Adriano, B.; Yokoya, N.; Xia, J.; Miura, H.; Liu, W.; Matsuoka, M.; Koshimura, S. Learning from multimodal and multitemporal earth observation data for building damage mapping. ISPRS J. Photogramm. Remote Sens. 2021, 175, 132–143. [Google Scholar] [CrossRef]
  66. Pesaresi, M.; Gerhardinger, A.; Haag, F. Rapid damage assessment of built-up structures using VHR satellite data in tsunami-affected areas. Int. J. Remote Sens. 2007, 28, 3013–3036. [Google Scholar] [CrossRef]
  67. Miura, H.; Modorikawa, S.; Chen, S.H. Texture characteristics of high-resolution satellite images in damaged areas of the 2010 Haiti earthquake. In Proceedings of the 9th International Workshop on Remote Sensing for Disaster Response, Stanford, CA, USA, 15–16 September 2011; pp. 15–16. [Google Scholar]
  68. Janalipour, M.; Mohammadzadeh, A. Building damage detection using object-based image analysis and ANFIS from high-resolution image (case study: BAM earthquake, Iran). IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 9, 1937–1945. [Google Scholar] [CrossRef]
  69. Dong, Y.; Li, Q.; Dou, A.; Wang, X. Extracting damages caused by the 2008 Ms 8.0 Wenchuan earthquake from SAR remote sensing data. J. Asian Earth Sci. 2011, 40, 907–914. [Google Scholar] [CrossRef]
  70. Miura, H.; Midorikawa, S.; Matsuoka, M. Building damage assessment using high-resolution satellite SAR images of the 2010 Haiti earthquake. Earthq. Spectra 2016, 32, 591–610. [Google Scholar] [CrossRef]
  71. Park, S.E.; Yamaguchi, Y.; Kim, D.j. Polarimetric SAR remote sensing of the 2011 Tohoku earthquake using ALOS/PALSAR. Remote Sens. Environ. 2013, 132, 212–220. [Google Scholar] [CrossRef]
  72. Park, S.E.; Jung, Y.T. Detection of earthquake-induced building damages using polarimetric SAR data. Remote Sens. 2020, 12, 137. [Google Scholar] [CrossRef]
  73. Wang, X.; Feng, G.; He, L.; An, Q.; Xiong, Z.; Lu, H.; Wang, W.; Li, N.; Zhao, Y.; Wang, Y.; et al. Evaluating urban building damage of 2023 Kahramanmaras, Turkey earthquake sequence using SAR change detection. Sensors 2023, 23, 6342. [Google Scholar] [CrossRef] [PubMed]
  74. Kim, M.; Park, S.E.; Lee, S.J. Detection of Damaged Buildings Using Temporal SAR Data with Different Observation Modes. Remote Sens. 2023, 15, 308. [Google Scholar] [CrossRef]
  75. Anniballe, R.; Noto, F.; Scalia, T.; Bignami, C.; Stramondo, S.; Chini, M.; Pierdicca, N. Earthquake damage mapping: An overall assessment of ground surveys and VHR image change detection after L’Aquila 2009 earthquake. Remote Sens. Environ. 2018, 210, 166–178. [Google Scholar] [CrossRef]
  76. Yan, Z.; Huazhong, R.; Desheng, C. The research of building earthquake damage object-oriented change detection based on ensemble classifier with remote sensing image. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 4950–4953. [Google Scholar]
  77. Hajeb, M.; Karimzadeh, S.; Matsuoka, M. SAR and LIDAR datasets for building damage evaluation based on support vector machine and random forest algorithms—A case study of Kumamoto earthquake, Japan. Appl. Sci. 2020, 10, 8932. [Google Scholar] [CrossRef]
  78. Ma, H.; Lu, N.; Ge, L.; Li, Q.; You, X.; Li, X. Automatic road damage detection using high-resolution satellite images and road maps. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium-IGARSS, Melbourne, Australia, 21–26 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 3718–3721. [Google Scholar]
  79. Zhao, K.; Liu, J.; Wang, Q.; Wu, X.; Tu, J. Road damage detection from post-disaster high-resolution remote sensing images based on tld framework. IEEE Access 2022, 10, 43552–43561. [Google Scholar] [CrossRef]
  80. Gong, L.; An, L.; Liu, M.; Zhang, J. Road damage detection from high-resolution RS image. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 990–993. [Google Scholar]
  81. Moya, L.; Geiß, C.; Hashimoto, M.; Mas, E.; Koshimura, S.; Strunz, G. Disaster intensity-based selection of training samples for remote sensing building damage classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8288–8304. [Google Scholar] [CrossRef]
  82. Sublime, J.; Kalinicheva, E. Automatic post-disaster damage mapping using deep-learning techniques for change detection: Case study of the Tohoku tsunami. Remote Sens. 2019, 11, 1123. [Google Scholar] [CrossRef]
  83. Qing, Y.; Ming, D.; Wen, Q.; Weng, Q.; Xu, L.; Chen, Y.; Zhang, Y.; Zeng, B. Operational earthquake-induced building damage assessment using CNN-based direct remote sensing change detection on superpixel level. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102899. [Google Scholar] [CrossRef]
  84. Loerch, A.C.; Stow, D.A.; Coulter, L.L.; Nara, A.; Frew, J. Comparing the Accuracy of sUAS Navigation, Image Co-Registration and CNN-Based Damage Detection between Traditional and Repeat Station Imaging. Geosciences 2022, 12, 401. [Google Scholar] [CrossRef]
  85. Gupta, R.; Goodman, B.; Patel, N.; Hosfelt, R.; Sajeev, S.; Heim, E.; Doshi, J.; Lucas, K.; Choset, H.; Gaston, M. Creating xBD: A dataset for assessing building damage from satellite imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 4–19 June 2019; pp. 10–17. [Google Scholar]
  86. Weber, E.; Kané, H. Building disaster damage assessment in satellite imagery with multi-temporal fusion. arXiv 2020, arXiv:2004.05525. [Google Scholar]
  87. Omoya, M.; Ero, I.; Zaker Esteghamati, M.; Burton, H.V.; Brandenberg, S.; Sun, H.; Yi, Z.; Kang, H.; Nweke, C.C. A relational database to support post-earthquake building damage and recovery assessment. Earthq. Spectra 2022, 38, 1549–1569. [Google Scholar] [CrossRef]
  88. Dong, Z. Construction of a Sample Set of Earthquake-Damaged Buildings Based on Post-Disaster UAV Remote Sensing Images. Artif. Intell. Robot. Res. 2022. [Google Scholar] [CrossRef]
  89. Ghaffarian, S.; Kerle, N.; Pasolli, E.; Jokar Arsanjani, J. Post-disaster building database updating using automated deep learning: An integration of pre-disaster OpenStreetMap and multi-temporal satellite data. Remote Sens. 2019, 11, 2427. [Google Scholar] [CrossRef]
  90. Hao, H.; Baireddy, S.; Bartusiak, E.R.; Konz, L.; LaTourette, K.; Gribbons, M.; Chan, M.; Delp, E.J.; Comer, M.L. An attention-based system for damage assessment using satellite imagery. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 12–16 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 4396–4399. [Google Scholar]
  91. Ismail, A.; Awad, M. Bldnet: A semi-supervised change detection building damage framework using graph convolutional networks and urban domain knowledge. arXiv 2022, arXiv:2201.10389. [Google Scholar]
  92. Ismail, A.; Awad, M. Towards Cross-Disaster Building Damage Detection with Graph Convolutional Networks. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 223–226. [Google Scholar]
  93. Xu, S.; He, X.; Cao, X.; Hu, J. Damaged Building Detection with Improved Swin-Unet Model. Wirel. Commun. Mob. Comput. 2022, 2022, 2124949. [Google Scholar] [CrossRef]
  94. Karlbrg, T.; Malmgren, J. Deep Learning for Building Damage Assessment of the 2023 Turkey Earthquakes: A comparison of two remote sensing methods. Int. J. Disaster Risk Sci. 2023, 14, 947–962. [Google Scholar]
  95. Kaur, N.; Lee, C.C.; Mostafavi, A.; Mahdavi-Amiri, A. Large-scale building damage assessment using a novel hierarchical transformer architecture on satellite images. Comput.-Aided Civ. Infrastruct. Eng. 2023, 38, 2072–2091. [Google Scholar] [CrossRef]
96. Ge, J.; Tang, H.; Yang, N.; Hu, Y. Rapid identification of damaged buildings using incremental learning with transferred data from historical natural disaster cases. ISPRS J. Photogramm. Remote Sens. 2023, 195, 105–128. [Google Scholar] [CrossRef]
  97. Bai, Y.; Su, J.; Zou, Y.; Adriano, B. Knowledge distillation based lightweight building damage assessment using satellite imagery of natural disasters. GeoInformatica 2023, 27, 237–261. [Google Scholar] [CrossRef]
  98. Gupta, S.; Nair, S. A Novel Approach for Infrastructural Disaster Damage Assessment Using High Spatial Resolution Satellite and UAV Imageries Using Deep Learning Algorithms; Technical report, Copernicus Meetings; EGU: Vienna, Austria, 2023. [Google Scholar]
  99. Bromley, J.; Guyon, I.; LeCun, Y.; Säckinger, E.; Shah, R. Signature verification using a “siamese” time delay neural network. Adv. Neural Inf. Process. Syst. 1993, 6, 25. [Google Scholar] [CrossRef]
  100. Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
  101. Wu, C.; Zhang, F.; Xia, J.; Xu, Y.; Li, G.; Xie, J.; Du, Z.; Liu, R. Building damage detection using U-Net with attention mechanism from pre-and post-disaster remote sensing datasets. Remote Sens. 2021, 13, 905. [Google Scholar] [CrossRef]
  102. Zhang, W.; Shen, L.; Qiao, W. Building damage detection in VHR satellite images via multi-scale scene change detection. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 12–16 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 8570–8573. [Google Scholar]
  103. Alizadeh, N.; Beirami, B.A.; Mokhtarzade, M. Damage Detection After the Earthquake Using Sentinel-1 and 2 Images and Machine Learning Algorithms (Case Study: Sarpol-e Zahab Earthquake). In Proceedings of the 2022 12th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 17–18 November 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 343–347. [Google Scholar]
  104. Kalantar, B.; Ueda, N.; Al-Najjar, H.A.; Halin, A.A. Assessment of convolutional neural network architectures for earthquake-induced building damage detection based on pre-and post-event orthophoto images. Remote Sens. 2020, 12, 3529. [Google Scholar] [CrossRef]
  105. Jiang, X.; Li, G.; Liu, Y.; Zhang, X.P.; He, Y. Change detection in heterogeneous optical and SAR remote sensing images via deep homogeneous feature fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1551–1566. [Google Scholar] [CrossRef]
  106. Pang, L.; Zhang, F.; Li, L.; Huang, Q.; Jiao, Y.; Shao, Y. Assessing Buildings Damage from Multi-Temporal Sar Images Fusion Using Semantic Change Detection. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 6292–6295. [Google Scholar]
  107. Mazzanti, P.; Scancella, S.; Virelli, M.; Frittelli, S.; Nocente, V.; Lombardo, F. Assessing the Performance of Multi-Resolution Satellite SAR Images for Post-Earthquake Damage Detection and Mapping Aimed at Emergency Response Management. Remote Sens. 2022, 14, 2210. [Google Scholar] [CrossRef]
  108. Sun, X.; Chen, X.; Yang, L.; Wang, W.; Zhou, X.; Wang, L.; Yao, Y. Using insar and polsar to assess ground displacement and building damage after a seismic event: Case study of the 2021 baicheng earthquake. Remote Sens. 2022, 14, 3009. [Google Scholar] [CrossRef]
  109. Cho, S.; Xiu, H.; Matsuoka, M. Backscattering Characteristics of SAR Images in Damaged Buildings Due to the 2016 Kumamoto Earthquake. Remote Sens. 2023, 15, 2181. [Google Scholar] [CrossRef]
  110. Rao, A.; Jung, J.; Silva, V.; Molinario, G.; Yun, S.H. Earthquake building damage detection based on synthetic-aperture-radar imagery and machine learning. Nat. Hazards Earth Syst. Sci. 2023, 23, 789–807. [Google Scholar] [CrossRef]
  111. Zheng, Z.; Zhong, Y.; Wang, J.; Ma, A.; Zhang, L. Building damage assessment for rapid disaster response with a deep object-based semantic change detection framework: From natural disasters to human-made disasters. Remote Sens. Environ. 2021, 265, 112636. [Google Scholar] [CrossRef]
  112. Yuan, X.; Azimi, S.; Henry, C.; Gstaiger, V.; Codastefano, M.; Manalili, M.; Cairo, S.; Modugno, S.; Wieland, M.; Schneibel, A.; et al. Automated building segmentation and damage assessment from satellite images for disaster relief. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 43, 741–748. [Google Scholar] [CrossRef]
  113. Abdi, G.; Jabari, S. A multi-feature fusion using deep transfer learning for earthquake building damage detection. Can. J. Remote Sens. 2021, 47, 337–352. [Google Scholar] [CrossRef]
  114. Zhang, L.; Hu, X.; Zhang, M.; Shu, Z.; Zhou, H. Object-level change detection with a dual correlation attention-guided detector. ISPRS J. Photogramm. Remote Sens. 2021, 177, 147–160. [Google Scholar] [CrossRef]
  115. Wang, Y.; Cui, L.; Zhang, C.; Chen, W.; Xu, Y.; Zhang, Q. A Two-Stage Seismic Damage Assessment Method for Small, Dense, and Imbalanced Buildings in Remote Sensing Images. Remote Sens. 2022, 14, 1012. [Google Scholar] [CrossRef]
Figure 1. Various approaches for damage recognition under compound disasters.
Figure 2. The progressive retrieval process in the form of a pyramid.
Figure 3. Overall research workflow.
Figure 4. Publication years of the literature.
Figure 5. Time distribution of various algorithms in the literature.
Figure 6. Main development stages of rapid diagnosis methods for post-disaster building damage with post-event remote sensing data.
Figure 7. Variable importance chart showing the relative significance of features for Artificial Neural Networks (ANNs) and Random Forests (RFs) [34]; structural and textural information played a more prominent role in building damage classification than spectral data.
Figure 8. Main development stages of rapid assessment methods for post-disaster building damage with pre- and post-event remote sensing data.
Figure 9. The process diagrams of the seven algorithms and the visualization of their inputs and outputs.
Figure 10. Results of the CMI-Otsu method: (a) image before the 2024 Noto Peninsula earthquake, Japan; (b) image after the earthquake, taken by MAXAR (Westminster, CO, USA); (c) output image generated by the CMI algorithm; (d) prediction result after binarizing the CMI image. Note: this result is suboptimal owing to its high false-positive rate.
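For readers who wish to reproduce a pipeline of this kind, the following is a minimal sketch of a change-magnitude-image computation followed by Otsu binarization, assuming co-registered, equally sized pre- and post-event RGB images; the file names are placeholders, and the exact CMI formulation in the cited work may differ.

```python
# Minimal CMI + Otsu sketch (illustrative; file names are placeholders).
import cv2
import numpy as np

pre = cv2.imread("pre_event.png").astype(np.float32)
post = cv2.imread("post_event.png").astype(np.float32)

# Per-pixel change magnitude across the three channels.
cmi = np.linalg.norm(post - pre, axis=2)

# Rescale to 8-bit so Otsu's method can select a global threshold.
cmi_u8 = cv2.normalize(cmi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(cmi_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("cmi.png", cmi_u8)   # analogue of panel (c)
cv2.imwrite("binary.png", mask)  # analogue of panel (d)
```

Because every changed pixel (vegetation, shadows, vehicles) contributes to the magnitude image, a global Otsu threshold tends to over-predict damage, which is consistent with the high false-positive rate noted above.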
Figure 11. Results of the CVA-Otsu method: (a) image before the 2024 Noto Peninsula earthquake, Japan; (b) image after the earthquake, taken by MAXAR (Westminster, CO, USA); (c) prediction results of the CVA algorithm; (d) prediction results of the CVA algorithm on the image from Figure 10. Note: the algorithm's noise-reduction capability remains inadequate.
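A hedged sketch of change vector analysis (CVA) with Otsu thresholding follows; a median filter is included as one simple noise-reduction step, since the caption notes that noise handling remains a weakness. Inputs are assumed to be (H, W, B) arrays of B co-registered spectral bands; the function name and smoothing size are illustrative.

```python
# CVA + Otsu sketch with light median filtering (illustrative).
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu

def cva_change_mask(pre: np.ndarray, post: np.ndarray, smooth: int = 3) -> np.ndarray:
    """Return a boolean change mask from bitemporal multi-band imagery."""
    delta = post.astype(np.float32) - pre.astype(np.float32)
    magnitude = np.sqrt((delta ** 2).sum(axis=-1))     # length of the change vector
    magnitude = median_filter(magnitude, size=smooth)  # suppress isolated speckle
    return magnitude > threshold_otsu(magnitude)
```

The median filter removes isolated salt-and-pepper responses but cannot distinguish structural damage from other spectral change, which is one reason the pure-algorithm methods in Table 5 show low precision.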
Figure 12. Results of the Tras-LEVIR method: (a) image of the first building group before the 2023 Turkey earthquake; (b) image of the first building group after the earthquake, from Google Earth; (c) prediction results of the Tras-LEVIR algorithm for the first group; (d) a pre-event image from the xView2 dataset; (e) a post-event image from the xView2 dataset; (f) prediction results of the Tras-LEVIR algorithm for the second image. Note: although the training dataset lacks explicit building damage labels, the algorithm achieves high precision in specific sample regions, highlighting the potential of transformers for building damage detection, albeit one requiring further validation with appropriately labeled data.
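To make the transformer idea concrete, the following is a compact, illustrative sketch in the spirit of transformer-based bitemporal change detectors such as Tras-LEVIR: CNN features are flattened into tokens, refined with self-attention, and then differenced. The architecture, dimensions, and class name are simplified stand-ins, not the published network.

```python
# Toy transformer-based bitemporal change detector (illustrative only).
import torch
import torch.nn as nn

class TinyBitemporalTransformer(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # Shallow CNN stem shared by both epochs of imagery.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, dim, 7, stride=4, padding=3), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Conv2d(dim, 1, 1)  # per-location change logit

    def features(self, x: torch.Tensor) -> torch.Tensor:
        f = self.backbone(x)                   # (B, dim, H/4, W/4)
        b, d, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, HW, dim)
        tokens = self.encoder(tokens)          # global context via self-attention
        return tokens.transpose(1, 2).reshape(b, d, h, w)

    def forward(self, pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
        diff = torch.abs(self.features(pre) - self.features(post))
        return self.head(diff)                 # coarse change logits
```

The self-attention stage is what distinguishes this family from plain Siamese CNNs: each token can attend to the whole scene, which helps with long-range context but, as the test results suggest, does not by itself guarantee generalization across disaster domains.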
Figure 13. Results of the Siam-CNN method: (a) detection results for the first sample from the 2024 Noto Peninsula earthquake, Japan (Figure 10a,b); (b) detection results for the second sample from the 2024 Noto Peninsula earthquake, Japan (Figure 11a,b). Note: while the algorithm performs respectably on the first sample, its effectiveness diminishes on the second, where distinguishing house texture proves more challenging.
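The core of the Siamese family is a weight-shared encoder applied to both epochs followed by feature differencing; a minimal sketch is given below. The toy encoder is illustrative, and replacing it with a torchvision ResNet50 backbone approximates the Siam-ResNet variant shown in Figure 14.

```python
# Schematic Siamese change detector for 256x256 bitemporal RGB tiles.
import torch
import torch.nn as nn

class SiameseCD(nn.Module):
    def __init__(self):
        super().__init__()
        # Weight-shared encoder applied to both pre- and post-event images.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder maps the feature difference to a per-pixel change logit.
        self.head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
        diff = torch.abs(self.encoder(pre) - self.encoder(post))
        return self.head(diff)

model = SiameseCD()
logits = model(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
mask = logits.sigmoid() > 0.5  # binary damage/change map
```

Because both branches share weights, the network learns a representation in which unchanged buildings map to nearby features, so the difference signal concentrates on genuine change.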
Figure 14. Results of the Siam-ResNet method: (a) detection results for the first sample from the 2024 Noto Peninsula earthquake, Japan (Figure 10a,b); (b) detection results for the second sample from the 2024 Noto Peninsula earthquake, Japan (Figure 11a,b). Note: the algorithm generally demonstrates promising results.
Figure 15. Results of the YOLOv8 method: (a,b) pre- and post-event images of the first sample from the 2024 Noto Peninsula earthquake, with the recognition results of the YOLOv8 model; (c,d) pre- and post-event images of the second sample from the 2024 Noto Peninsula earthquake; (e,f) pre- and post-event images of the first sample from the 2023 Turkey earthquake; (g,h) pre- and post-event images of the second sample from the 2023 Turkey earthquake. Note: while the green square correctly identifies an undamaged house, the other colored bounding boxes represent irrelevant labels, and the absence of a dark red bounding box indicates that the algorithm did not detect any damaged buildings. Despite these shortcomings, the results offer potential for future improvement.
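A hedged sketch of how an experiment of this kind could be run with the ultralytics package follows; the dataset YAML, checkpoint, and image path are hypothetical placeholders, and the training recipe used in this study may differ.

```python
# YOLOv8 training/inference sketch (paths and dataset config are placeholders).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                # pretrained checkpoint
model.train(data="xview_buildings.yaml",  # hypothetical dataset config
            epochs=50, imgsz=640)
results = model.predict("noto_post_event.png",  # post-event tile
                        conf=0.25)              # confidence threshold
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)    # class id, score, box corners
```

Lowering `conf` recovers more candidate detections at the cost of precision, which is exactly the trade-off visualized in Figure 16.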
Figure 16. Statistics of the training parameters in the YOLOv8 method: (a) the relationship between confidence and precision; (b) the relationship between confidence and recall.
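Curves such as those in Figure 16 can be recomputed from scored detections; the sketch below assumes `scores` and `is_tp` come from matching predictions against ground-truth boxes (e.g., by IoU), with `n_gt` the number of ground-truth objects. The function name and threshold grid are illustrative.

```python
# Precision/recall as a function of the confidence threshold (illustrative).
import numpy as np

def pr_vs_confidence(scores, is_tp, n_gt, thresholds=np.linspace(0, 1, 101)):
    scores = np.asarray(scores, dtype=float)
    is_tp = np.asarray(is_tp, dtype=bool)
    precision, recall = [], []
    for t in thresholds:
        keep = scores >= t                    # detections surviving the threshold
        tp = int((is_tp & keep).sum())
        precision.append(tp / max(int(keep.sum()), 1))
        recall.append(tp / max(n_gt, 1))
    return thresholds, precision, recall
```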
Figure 17. Results of the Sate-CNN method: (a) detection results for a sample from the xBD dataset; (b,c) detection results for the two samples following the 2024 Noto Peninsula earthquake, Japan. Note: the red block marks the region with the most severe damage, the yellow block highlights an area containing a damaged building, and the remaining uncolored blocks represent areas without detected damage.
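The block-wise output in Figure 17 suggests a simple tile-and-classify scheme: the post-event scene is cut into a grid and each tile is scored by an image classifier. The sketch below illustrates that idea; `classify_tile` is a stand-in for any trained CNN, and the tile size is an assumption.

```python
# Block-wise damage scoring in the spirit of Sate-CNN (illustrative).
import numpy as np

def gridwise_damage_map(image: np.ndarray, classify_tile, tile: int = 64):
    """Return a (rows, cols) array of per-block damage scores in [0, 1]."""
    h, w = image.shape[:2]
    rows, cols = h // tile, w // tile
    scores = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            block = image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            scores[r, c] = classify_tile(block)  # e.g., CNN "damaged" probability
    # Thresholding/color-coding these scores yields the red/yellow blocks of Figure 17.
    return scores
```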
Table 1. Categorization of Literature in Algorithms.

Category | Method | Paper Number
Single-temporal method | Visual | (13, 22–24)
Single-temporal method | Pure algorithm-based | (25–28)
Single-temporal method | Data-driven | (29–53)
Multi-temporal method | Visual | (55–58)
Multi-temporal method | Pure algorithm-based | (54, 59, 60, 62–73, 107)
Multi-temporal method | Data-driven | (61, 74–84, 87, 89–98, 100–106, 108–115)
Table 2. Neural Network Statistics.

Neural Network | Paper Number
CNN-based | (35, 37–41, 45, 61, 81, 82, 83, 94, 102, 112)
ResNet | (43, 44, 46, 96)
Transformer | (51, 93, 105)
KNN | (29, 30, 31, 32, 34, 47, 75)
MLR | (76)
BPNN | (76)
SVM | (30, 33, 75, 77, 78, 99)
RF | (30, 50, 77)
YOLO | (18, 36, 46, 82, 114)
DeepLab | (48)
BDD-Net | (42)
Transfer learning | (52, 113)
RBM | (80)
DHFF (VGG-based) | (104)
ResNet50 | (85)
U-Net | (53, 89, 92, 100, 109)
Siamese | (87, 89, 96, 100, 101, 105, 111)
ChangeOS | (97, 111)
BLDNet | (90, 91)
Incre-Trans | (95)
PANet | (50)
DCA-Det | (115)
Table 3. Natural Disaster and Dataset Statistics.

Dataset or Event Name | Paper Number
2008 Wenchuan Earthquake | (27, 28, 30, 36, 48, 52, 53, 70, 78, 84)
2016 Kumamoto Earthquake | (31, 40, 41, 49, 72, 74, 76)
xBD (xView/xView2) | (39, 52, 83, 85, 89–97, 100)
1995 Kobe Earthquake | (13, 22, 40, 63)
2017 Mexico City Earthquake | (115)
1999 Turkey Earthquake | (23)
2004 Indian Ocean Tsunami | (66)
2019 Changning Earthquake in China | (44)
2003 Bam Earthquake in Iran | (55–58, 65, 68, 109)
2010 Haiti Earthquake | (26, 34, 35, 42, 44, 51, 60, 67, 71, 101, 104)
2010 Yushu Earthquake | (24, 29, 36, 38, 49, 53, 54)
2015 Nepal Earthquake | (32)
2011 Tohoku Earthquake | (33, 43, 49, 70, 79, 80, 104)
2006 Yogyakarta Earthquake in Central Java | (25)
2013 Ya'an (Lushan) Earthquake | (46, 84)
1997 Umbria Earthquake | (59)
2001 Gujarat Earthquake in India | (62)
2003 Xinjiang Earthquake | (64)
2019 Chiba Typhoon | (40)
2023 Turkey Earthquake | (73)
2009 L'Aquila Earthquake in Italy | (75)
2014 Yunnan Earthquake in China | (81)
2016 Central Italy Earthquake | (106)
2021 Maduo Earthquake in China | (44)
2023 Tennessee Tornado | (50)
2019 Beira, Mozambique | (112)
2020 Beirut, Lebanon | (112)
2017 Hurricane Harvey | (45)
2018 Okayama Floods | (79)
Aleppo in the Syrian Civil War | (105)
2020 Lagred Earthquake | (110)
2018 Sarpol-e Zahab Earthquake | (47, 102)
Table 4. Information of Test Models.

Algorithm Name | Category | Publication | Citations | GitHub Stars | Network Architecture | Source of Weights
CMI-Otsu | PA-CD | 2021 | 10 | 10 | — | —
CVA-Otsu | PA-CD | 2019 | / | 148 | — | —
Tras-LEVIR | DL-CD | 2020 | 699 | 371 | CNN and Transformer | Open access
Siam-CNN | DL-CD | 2020 | 73 | 57 | CNN-based | Open access
Siam-ResNet | DL-CD | 2021 | 36 | 78 | ResNet50 | Open access
YOLOv8 | DL-SP | 2023 | / | 18.4 k | YOLOv8 | Acquired through training
Sate-CNN | DL-SP | 2019 | 12 | 54 | CNN-based | Open access
Table 5. Detailed parameters used in the deep learning models.

Algorithm Name | Train Set/Sample Size | OA (%) | Test OA (%) (Jp/Tk/xV2) | Time (s) | Shortages
CMI-Otsu | N | 73.22 | 52.3/46.8/62.4 | 300+ | Low precision
CVA-Otsu | N | 74.46 | 67.2/59.2/65.1 | 60 | Low precision
Tras-LEVIR | LEVIR/314 | 98.92 | 12.8/54.6/73.3 | 3 | Poor generalization ability
Siam-CNN | xView2/11464 | 90.6 | 55.6/67.8/88.7 | 2 | Poor generalization ability
Siam-ResNet | xView2/11464 | 83.7 | 67.4/37.1/80.2 | 5 | Poor generalization ability
YOLOv8 | xView/848 | 84 | 34.7/76.7/79.3 | 10 | Low precision
Sate-CNN | N/1096 | 95 | 94.0/16.7/96.8 | 17 | Unstable

Note: "N" denotes missing data; "Jp" denotes data from the 2024 Noto Peninsula earthquake, Japan; "Tk" denotes data from the 2023 Turkey earthquake; "xV2" denotes the xView2 dataset.
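The overall accuracy (OA) reported above is the fraction of correctly classified pixels (or blocks) in the evaluation masks; a minimal sketch, assuming binary prediction and ground-truth arrays of equal shape, is:

```python
# Overall accuracy (OA) for binary damage masks (illustrative).
import numpy as np

def overall_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    return float((pred == truth).mean())

oa = overall_accuracy(np.array([[1, 0], [1, 1]]), np.array([[1, 0], [0, 1]]))
print(f"OA = {oa:.2%}")  # 75.00%
```

Note that on scenes dominated by undamaged pixels, OA can remain high even when damaged buildings are missed, which is why the per-event test columns diverge sharply from the training-set figures.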
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
