Review

Integrating Machine Learning and Remote Sensing in Disaster Management: A Decadal Review of Post-Disaster Building Damage Assessment

by Sultan Al Shafian and Da Hu *
Department of Civil and Environmental Engineering, Kennesaw State University, Marietta, GA 30060, USA
* Author to whom correspondence should be addressed.
Buildings 2024, 14(8), 2344; https://doi.org/10.3390/buildings14082344
Submission received: 28 June 2024 / Revised: 15 July 2024 / Accepted: 17 July 2024 / Published: 29 July 2024

Abstract:
Natural disasters pose significant threats to human life and property, exacerbated by their sudden onset and increasing frequency. This paper conducts a comprehensive bibliometric review to explore robust methodologies for post-disaster building damage assessment and reconnaissance, focusing on the integration of advanced data collection technologies and computational techniques. The objectives of this study were to assess the current landscape of methodologies, highlight technological advancements, and identify significant trends and gaps in the literature. Using a structured approach for data collection, this review analyzed 370 journal articles from the Scopus database from 2014 to 2024, emphasizing recent developments in remote sensing, including satellite and UAV technologies, and the application of machine learning and deep learning for damage detection and analysis. Our findings reveal substantial advancements in data collection and analysis techniques, underscoring the critical role of machine learning and remote sensing in enhancing disaster damage assessments. The results are significant as they highlight areas requiring further research and development, particularly in data fusion techniques, real-time processing capabilities, model generalization, UAV technology enhancements, and training for the rescue team. These areas are crucial for improving disaster management practices and enhancing community resilience. The application of our research is particularly relevant in developing more effective emergency response strategies and in informing policy-making for disaster-prepared social infrastructure planning. Future research should focus on closing the identified gaps and leveraging cutting-edge technologies to advance the field of disaster management.

1. Introduction

Natural disasters are characterized by their sudden onset, immense destructive power, and inherent unpredictability, posing significant threats to human life and the security of property. These catastrophic events can strike with little to no warning, leading to substantial loss of life, extensive damage to infrastructure, and profound economic disruptions. Between 2000 and 2019, there were 510,837 deaths and 3.9 billion people affected by 6681 climate-related disasters [1]. In 2020 alone, disaster events attributed to natural hazards affected approximately 100 million people, accounted for an estimated USD 190 billion in global economic losses, and resulted in 15,082 deaths [2]. These staggering figures underscore the critical importance of effective disaster management and mitigation strategies. The increasing frequency and severity of natural disasters, exacerbated by climate change and urbanization, necessitate robust methodologies for assessing and responding to building damage post-disaster. Identifying critically affected areas and delivering essential aid to disaster-impacted regions is a pivotal component of effective disaster management [3].
In the wake of natural disasters, researchers are increasingly leveraging advanced technologies to meticulously gather information about buildings affected by such calamities [4,5,6,7]. This critical task of identifying damaged structures is essential for ensuring public safety, as it informs residents about the condition of their homes and supports decisions on whether they can safely reoccupy their living spaces. Given the fundamental role of a home as a place of safety, accurately determining whether a building remains structurally sound after a disaster is paramount. To achieve this, researchers utilize a suite of state-of-the-art techniques, including remote sensing [8,9] and aerial drone surveillance [10,11], to deliver precise and comprehensive assessments. These technologies enable rapid evaluation of damage extent and critical structural weaknesses, providing residents with the information needed to feel secure and confident about the safety of their environments. Furthermore, the application of these advanced technologies extends beyond immediate post-disaster assessments. They are instrumental in shaping long-term urban planning and improving disaster response strategies. By integrating data-driven insights from current and past events, urban planners can design more resilient infrastructures, and disaster response teams can refine their strategies to enhance efficacy and safety.
This paper provides a comprehensive bibliometric review of post-disaster building damage assessment and reconnaissance methods, emphasizing recent advancements in data collection technologies and the incorporation of machine learning and deep learning techniques to enhance damage detection and analysis. This study meticulously compiles and analyzes a vast array of the literature, leveraging the latest developments in remote sensing, UAV surveillance, and automated data processing to provide a holistic view of the current landscape in disaster assessment methodologies. This review makes several notable contributions. First, it systematically catalogs the evolution and application of cutting-edge technologies in post-disaster scenarios, demonstrating how these tools not only accelerate damage assessment but also enhance the accuracy of these evaluations. Second, by exploring the integration of artificial intelligence, particularly machine learning and deep learning, this paper reveals how these technologies are reshaping approaches to damage assessment, providing deeper insights and predictive capabilities that were previously unattainable. Furthermore, this review identifies significant gaps and limitations in current methodologies, offering a critical perspective that is essential for advancing the field. It outlines future directions and proposes potential improvements, such as the integration of multi-modal data and the development of more robust AI models that can adapt to chaotic environments post-disaster. Ultimately, the contributions of this study aim to advance more effective and efficient disaster management practices. By highlighting innovative technologies and identifying areas for further research and development, this paper seeks to enhance the resilience of communities against future disasters, ensuring quicker recovery and reducing the long-term impact on affected populations.

2. Methodology of the Research

Disaster reconnaissance is a vital and complex field that utilizes a range of advanced technologies to perform its functions. To ensure the inclusion of the most relevant research works in this area, it is essential to follow a clear and systematic methodology. This research adopts a structured approach beginning with the collection of data by retrieving relevant publications from a selected database. This initial step is followed by a meticulous data-sorting process to identify additional pertinent research articles for comprehensive analysis. The final stage involves conducting a bibliometric analysis to construct a detailed science map of the existing literature. This map provides an in-depth understanding of the current research landscape, highlighting significant trends and gaps, and ultimately offering suggestions for future research directions.
For this study, the Scopus database has been chosen due to its extensive range of high-quality publications, particularly those related to interdisciplinary and technologically advanced aspects of disaster reconnaissance. Leveraging Scopus ensures a robust foundation for our bibliometric analysis. The following subsections describe the methodology in detail.

2.1. Article Collection from Sources

The quality of input data is crucial for any literature review, necessitating a comprehensive database and a rigorous search strategy before proceeding to bibliometric analysis and discussion. For this research, the literature was derived from the Scopus database, which offers a wide range of disaster reconnaissance-related research articles and broad coverage of interdisciplinary research topics. Articles featured in the Scopus database have undergone peer review, ensuring they meet established criteria for research quality [12]. Relevant publications were identified using keywords; the research questions below were formulated first, and the search keywords were then derived from them.
  • What are the comparative strengths and limitations of remote sensing (satellite, UAV) versus ground-based sensing technologies in the detection and assessment of building damage following various types of disasters (e.g., earthquakes, floods, hurricanes)?
  • How can artificial intelligence and deep learning techniques (e.g., CNNs) improve the accuracy and efficiency of building damage assessment from diverse data sources?
  • Considering the challenges in real-time data collection and analysis in post-disaster scenarios, what are the most effective AI-driven strategies for rapidly assessing building damage to support immediate response and recovery efforts?
  • How do machine learning models compare in their ability to detect, segment, and classify different types of building damage in disaster-affected areas?
Based on the above research questions, the following keywords were chosen for final data collection:
(“Building*” OR “house*” OR “residence*” OR “dwelling*”) AND (“Damage*” OR “Collapse*” OR “Destruction*”) AND (“assess*” OR “Investigat*” OR “Evaluat*” OR “analysis*” OR “survey*” OR “reconnaissance”) AND (“Disaster*” OR “Earthquake*” OR “Tsunami*” OR “seismic*” OR “Hurricane*” OR “Tornado*” OR “typhoon*” OR “flood*” OR “fire*” OR “storm*”) AND (“Machine Learning” OR “Deep Learning” OR “Computer vision*” OR “transformer*” OR “Detect*” OR “segment*” OR “classif*” OR “Artificial Intelligence” OR “AI” OR “ML” OR “DL” OR “Neural Network*” OR “CNN*” OR “DNN*”) AND (“remote sensing” OR “LiDAR*” OR “Point cloud*” OR “radar*” OR “satellite*” OR “imag*” OR “drone*” OR “UAV*” OR “aerial*” OR “camera*”) AND PUBYEAR > 2013 AND PUBYEAR < 2025.
Publications that include the specified keywords in their titles, abstracts, or designated keyword sections are identified using the Scopus database keyword search tool. The search criteria involve selecting the title/abstract/keywords option within the database. This comprehensive search covers a decade, specifically from 2014 to 2024, ensuring a robust collection of the relevant literature over this period. The goal is to capture a wide array of studies and articles that align with the research focus, providing a solid foundation for analysis and review within the chosen timeframe.
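As an illustration of how a structured query of this kind can be assembled programmatically, the sketch below joins the terms of each keyword group with OR and the groups with AND, then appends the publication-year constraints. The helper function is our own illustration (not part of the Scopus API), and the group lists are abbreviated versions of the full query above.

```python
def build_scopus_query(groups, year_from=2013, year_to=2025):
    """Join each keyword group with OR, AND the groups together,
    and wrap the result in a TITLE-ABS-KEY field search."""
    clauses = ["(" + " OR ".join(f'"{kw}"' for kw in group) + ")"
               for group in groups]
    keyword_part = " AND ".join(clauses)
    return (f"TITLE-ABS-KEY({keyword_part}) "
            f"AND PUBYEAR > {year_from} AND PUBYEAR < {year_to}")

# Abbreviated keyword groups (the full query in the text has six groups):
groups = [
    ["Building*", "house*", "residence*", "dwelling*"],
    ["Damage*", "Collapse*", "Destruction*"],
    ["assess*", "Investigat*", "Evaluat*", "analysis*", "survey*", "reconnaissance"],
]
query = build_scopus_query(groups)
```

Building the string from grouped lists keeps the parentheses balanced by construction, which is easy to get wrong when editing a long query by hand.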

2.2. Data Sorting/Article Selection

Using the specified keywords, a total of 698 publications met the initial selection criteria. To refine this collection, a further screening process was implemented. This involved selecting only publications in the English language and focusing exclusively on journal articles pertinent to the research objectives. After this rigorous screening, a final total of 370 journal articles were chosen for in-depth analysis. This curated selection aims to provide a comprehensive understanding of the research landscape, ensuring that the most relevant and high-quality studies are included in the analysis.
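The screening rule described above can be sketched as a simple filter. The record structure and field names below are assumed for illustration and do not correspond to actual Scopus export headers.

```python
# Invented sample records standing in for the 698 initial keyword hits.
records = [
    {"title": "Damage mapping with SAR", "language": "English", "doctype": "Article"},
    {"title": "Conference study", "language": "English", "doctype": "Conference Paper"},
    {"title": "Non-English article", "language": "Chinese", "doctype": "Article"},
]

# Keep only English-language journal articles, mirroring Section 2.2.
selected = [r for r in records
            if r["language"] == "English" and r["doctype"] == "Article"]
```

Only the first sample record survives both criteria.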

2.3. Bibliometric Analysis Using VOSviewer

The network maps were created using VOSviewer (version 1.6.20), a tool widely used in the recent literature for producing high-quality bibliometric visualizations. VOSviewer facilitates the construction of diverse bibliometric networks, including co-authorship, co-occurrence, citation, bibliographic coupling, and co-citation networks, using data from sources such as Web of Science, Scopus, and PubMed. The software features three distinct visualization modes (network, overlay, and density) that enable thorough analysis of extensive datasets and the interconnections among items. Additionally, VOSviewer supports data cleaning and the integration of multiple bibliographic and reference manager databases, significantly enhancing its effectiveness in bibliometric research and analysis [13]. A summary of the methodology is provided in Figure 1.
In this study, the bibliometric analysis using VOSviewer spanned the period from 2014 to April 2024, focusing on scientific articles. The analysis included co-occurrence, citation, co-authorship, and bibliographic coupling, with the full counting method employed throughout. Minimum and maximum thresholds were carefully selected for each analysis to ensure the inclusion of relevant terms and significant articles. The network visualization was created using the default Association Strength algorithm, with attraction and repulsion parameters set to their default values. Clustering was conducted with a resolution parameter of 1.0, and colors were assigned based on cluster membership. Node and label sizes were scaled according to the frequency of various parameters specific to each network map. Key metrics, such as the number of clusters and the size of the largest connected component, were reported. These settings were chosen to enhance visualization quality and ensure the reproducibility of our results.
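For reference, the Association Strength measure used by default in VOSviewer normalizes an observed co-occurrence count by the total occurrence counts of the two items, so it is proportional to the ratio of observed to expected co-occurrences under an independence assumption. A minimal sketch, with the constant scaling factor omitted:

```python
def association_strength(c_ij, w_i, w_j):
    """Association-strength similarity between items i and j:
    observed co-occurrence count divided by the product of the items'
    total occurrence counts. Proportional to observed/expected under
    independence; VOSviewer's constant scaling factor is omitted."""
    if w_i == 0 or w_j == 0:
        return 0.0
    return c_ij / (w_i * w_j)

# Two keywords that co-occur 10 times, with totals 50 and 40:
s = association_strength(10, 50, 40)  # 10 / 2000 = 0.005
```

This normalization prevents very frequent keywords from dominating the map purely because of their overall frequency.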

3. Results of Bibliometric Analysis

3.1. Bibliometric Performance Trends

3.1.1. Yearly Publication Metrics

Over the past decade, there has been a significant surge in the volume of research dedicated to disaster management and mitigation. This body of work has particularly concentrated on the development and implementation of technologies for predicting, monitoring, and responding to disasters. Additionally, there has been substantial progress in disaster documentation through remote sensing, as well as the utilization of various computational and artificial intelligence techniques for the analysis of post-disaster scenarios, among other areas. Figure 2 shows the yearly publication trends on similar research works from 2014 to 2024 (April).
The increasing number of publications over the years, as depicted in Figure 2, illustrates the growing interest from various fields in disaster management and mitigation. This trend underscores the heightened recognition of the critical significance of these topics. Furthermore, the rising frequency and severity of natural disasters, driven by climate change, urbanization, and other environmental stressors, have catalyzed the scientific community to prioritize research aimed at predicting, mitigating, and managing the impacts of such events. These data reveal a marked increase in publications from 2018 onwards, with a pronounced surge in 2023, indicating an accelerated response to the escalating threat posed by these disasters.

3.1.2. Geographical Distribution of Publications

The global response to disaster reconnaissance, depicted in Figure 3, is a testament to the collective acknowledgment of its importance by countries across the world.
Based on the detailed analysis of publications, it is evident that this critical field has garnered significant attention and contributions from a wide range of nations, spanning all continents.
An analysis of the past decade's publications shows that Asia stands out, with substantial contributions from countries such as China and Japan, reflecting their proactive approach to addressing disaster management challenges through scientific research and innovation. Similarly, Europe shows robust engagement, with notable publications from Germany, the United Kingdom, and France, underscoring their commitment to advancing knowledge and practical solutions in disaster reconnaissance.
In North America, the United States and Canada lead with a considerable number of publications, highlighting their emphasis on leveraging advanced technologies and methodologies to enhance disaster preparedness and response. The African continent, although traditionally underrepresented in global research outputs, shows participation from Algeria and Egypt, indicating a recognition of the importance of disaster reconnaissance in mitigating the impacts of natural and man-made disasters.
South America, with contributions from Peru, reflects a regional effort to integrate scientific research into national disaster management frameworks. Oceania, represented primarily by Australia, demonstrates a strong research output that underscores its strategic focus on disaster risk reduction, considering its vulnerability to natural calamities such as bushfires and cyclones. The cumulative number of publications from these diverse regions underscores the universal recognition of disaster reconnaissance as a vital discipline. This global effort is crucial not only for advancing scientific knowledge but also for developing practical tools and strategies that can save lives, protect property, and enhance the resilience of communities against future disasters.

3.2. Bibliometric Mapping

3.2.1. Co-Occurrence of Keywords

To provide an accurate picture of the main research streams and topics covered in the domain of disaster reconnaissance, a co-occurrence network of keywords was created using the VOSviewer software. Keywords capture the central concepts of ongoing research, delineate the explored domains within specific topics, and assist scholars in discovering potential avenues for future investigation [14,15]. In particular, keywords designated by authors are typically regarded as a crucial tool for pinpointing pivotal research areas within a given subject; these author-assigned terms not only provide insight into the focus of the research but also serve as a valuable resource for identifying emergent trends and gaps in the field. To visualize the bibliographic map of author-indexed keywords for the co-occurrence analysis in VOSviewer, a threshold of a minimum of 5 occurrences was set. Keywords with analogous semantic meanings were then merged to create a coherent and insightful map; for example, “convolutional neural network”, “convolutional neural networks”, and “cnn” were merged under “convolutional neural network”. Of the 839 keywords discovered in the included publications, 37 met these criteria and are displayed in the co-occurrence map in Figure 4. As per the VOSviewer manual, node size indicates the frequency of keyword occurrences, with larger nodes representing more frequent keywords. Nodes sharing the same color belong to the same research cluster, and the proximity between nodes illustrates the strength of their relationship, with greater distances implying weaker connections [13].
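The keyword-merging step can be sketched as a thesaurus lookup applied before counting occurrences; in VOSviewer this is typically done with a thesaurus file. The thesaurus entries below follow the CNN example from the text, while the sample records are invented for illustration.

```python
from collections import Counter

# Map semantically equivalent author keywords to one canonical form.
THESAURUS = {
    "cnn": "convolutional neural network",
    "convolutional neural networks": "convolutional neural network",
}

def normalize(keyword):
    """Lowercase, trim, and replace known synonyms with the canonical term."""
    key = keyword.strip().lower()
    return THESAURUS.get(key, key)

# Invented author-keyword lists from two sample publications.
records = [
    ["CNN", "remote sensing"],
    ["convolutional neural networks", "building damage assessment"],
]
counts = Counter(normalize(k) for rec in records for k in rec)
```

After normalization, both CNN variants contribute to a single node rather than fragmenting the map into near-duplicate keywords.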
The largest nodes in the co-occurrence map represent “Building Damage Assessment” and “Remote Sensing.” This prominence is not surprising as these are key research topics within the domain of disaster management and damage evaluation. The figure suggests that many researchers are exploring the synergy between remote sensing technologies and building damage assessment, as well as integrating machine learning techniques. For example, the Building Damage Assessment cluster is centrally located, indicating its pivotal role in the research field. Remote Sensing, positioned nearby, signifies its significant contribution to the domain, often used in conjunction with other analysis approaches. The Satellite Imagery cluster reflects research aimed at using satellite data to assess damage from natural calamities. Additionally, the SAR (Synthetic Aperture Radar) cluster highlights its utility in structural analysis and monitoring. Overall, the map in Figure 4 provides a detailed visualization of the key research areas and their interconnections within the field of building damage assessment, remote sensing, and disaster management. It highlights significant clusters and suggests ongoing and future research directions.

3.2.2. Co-Authorship Network

  • Co-Authorship Network of Countries
This section presents a co-authorship analysis using VOSviewer, focusing on the geographic areas of individual authors, which can help uncover patterns of international partnership, highlight key contributors from different regions, and reveal the extent of global collaboration in the research community. Figure 5 maps the co-authorship network among authors from different countries.
To create the co-authorship network, the minimum number of documents per country was set to 1 and the minimum number of citations to 5. The map illustrates the dominant research productivity and collaboration of the United States and China, depicted as the largest nodes. Specifically, China shows extensive collaborations with countries like the United States, Germany, and the United Kingdom, while the United States has strong links with countries such as Germany and the Netherlands. This visualization reveals the structure of global research collaboration, highlighting key regional and international partnerships.
  • Co-Authorship Network of Authors
A co-authorship network map in VOSviewer offers valuable insight into collaborative relationships among authors, revealing collaboration patterns and helping to identify influential partnerships; the strength of each link reflects the number of co-authored works. In this study, the minimum number of documents per author was set to 1 and the minimum number of citations to 30, resulting in 284 authors meeting the threshold values. Fractional counting was used to provide a nuanced view of individual contributions. Figure 6 shows the co-authorship network for authors around the world.
The co-authorship network map represents each author as a node and collaborative relationships as links, with node size corresponding to the number of documents authored and link thickness indicating the strength of co-authorship. Two key metrics, the number of links per author and the total link strength, are crucial for understanding these collaborations. As Figure 6 shows, notable authors in this network include Bruno Adriano, with 10 documents and a total link strength of 37, as well as Shunichi Koshimura and Masashi Matsuoka, all of whom demonstrate extensive collaborations. The analysis identifies multiple clusters, the largest of which features significant authors such as Masashi Matsuoka and Hiroyuki Miura, indicating strong collaborative efforts.
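The two counting schemes mentioned in this study (full counting for most analyses, fractional counting for the author network) can be contrasted with a short sketch. Under full counting, each co-authored paper adds 1 to a link's strength; under fractional counting, the contribution shrinks for papers with many authors (here 1/(n − 1), following the convention described in the VOSviewer manual). The author lists are invented examples.

```python
from collections import defaultdict
from itertools import combinations

def coauthorship_links(papers, fractional=False):
    """Accumulate link strength for every author pair across papers.
    Full counting adds 1 per shared paper; fractional counting adds
    1/(n_authors - 1), discounting many-author papers."""
    links = defaultdict(float)
    for authors in papers:
        n = len(authors)
        if n < 2:
            continue  # single-author papers create no links
        weight = 1.0 / (n - 1) if fractional else 1.0
        for a, b in combinations(sorted(authors), 2):
            links[(a, b)] += weight
    return dict(links)

papers = [["A", "B", "C"], ["A", "B"]]
full = coauthorship_links(papers)                   # A-B link: 1 + 1 = 2.0
frac = coauthorship_links(papers, fractional=True)  # A-B link: 0.5 + 1 = 1.5
```

Fractional counting thus prevents large author teams from inflating apparent collaboration strength.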

3.2.3. Citation Analysis

  • Citation Analysis by Countries
Understanding citation networks is crucial for evaluating the scholarly impact and academic influence of countries, as they reveal the interconnectedness and flow of knowledge within the global research community. For the citation network map, the minimum number of documents per country was set to 5 and the minimum number of citations per country to 15. Figure 7 provides a detailed overview of the citation network among various countries, focusing on their scholarly impact and interconnectedness. China leads with the highest figures (169 documents, 2325 citations, and 245 total link strength), indicating its significant academic influence. The United States follows with 64 documents, 1499 citations, and a total link strength of 106, showcasing its strong citation impact. The Netherlands and Germany also exhibit substantial citation strength, with the Netherlands having 13 documents, 872 citations, and 116 total link strength, and Germany having 16 documents, 520 citations, and 107 total link strength. Countries such as Iran, Canada, Italy, France, and the United Kingdom display moderate citation metrics, reflecting their roles in the global citation network. South Korea, India, Turkey, Australia, and Indonesia have lower metrics but still contribute meaningfully. The network visualization, created using VOSviewer, illustrates these citation relationships, with nodes representing countries and node sizes reflecting citation counts and link strength. China appears as the largest node, indicating its dominant role, while the United States is another significant node with extensive connections. The Netherlands and Germany also show substantial connectivity, reinforcing their influence.
  • Citation Analysis of Sources
To generate the bibliographic map of citation sources, the minimum number of documents for a source was set to 5 and the minimum number of citations to 30; 21 sources met these thresholds. Figure 8 shows the map, which reveals that “Remote Sensing” and the “IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing” are central, highly cited nodes, signifying their influence in the field. The map’s color-coded clusters represent different subfields, such as geoscience-related research and civil engineering applications. Analysis of the exported data highlights that “Remote Sensing” leads in document count (56) and citation count (1534). Journals such as the “ISPRS Journal of Photogrammetry and Remote Sensing” (568 citations), the “IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing” (282 citations), and “Computer-Aided Civil and Infrastructure Engineering” (263 citations) underscore their significant role in advancing disaster reconnaissance research. These high citation counts reflect the influence and importance of these journals in ongoing academic and practical advancements in disaster reconnaissance.
  • Citation Analysis of Journal Articles
Citation counts indicate a document’s academic impact; high counts often validate the research findings and methodologies, reflecting widespread acceptance and utilization within the academic community [16]. A citation map was generated in VOSviewer for highly cited documents, with the minimum number of citations set to 30. Figure 9 shows the highly cited documents between 2014 and 2024.
From the map, it is visible that the research articles by Vetrivel et al. (2018) [17], Fernandez Galarreta et al. (2015) [18], and Cooner et al. (2016) [19] are highly cited, indicating their significance. The map features various clusters, such as the green cluster (e.g., Fernandez Galarreta 2015) [18], the blue cluster (e.g., Ji 2018) [21], the red cluster (e.g., Moya 2019) [26], and the orange cluster (e.g., Vetrivel 2018 and Pan 2020) [17,24], each representing a distinct research theme.

3.2.4. Bibliographic Coupling

Bibliographic coupling occurs when two documents cite the same references, signaling a shared basis in their research areas. The technique gauges the extent of overlap between the reference lists of two documents, indicating a common intellectual foundation and suggesting aligned research interests. By pinpointing these connections, bibliographic coupling aids in charting the terrain of academic dialogue, uncovering clusters of related works, and supporting the exploration of scholarly networks. The approach is especially valuable for understanding how different research papers are interconnected through their citations, shedding light on the interdependencies and thematic relationships within a specific field of study. Figure 10 shows the bibliographic coupling of documents, with the minimum number of citations per document set to 15.
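The coupling strength of two documents is simply the number of references they cite in common, which can be computed as a set intersection. The reference identifiers below are invented for illustration.

```python
def coupling_strength(refs_a, refs_b):
    """Bibliographic coupling strength: the count of references
    cited by both documents."""
    return len(set(refs_a) & set(refs_b))

# Two invented documents sharing two cited references:
doc_a = ["Smith2015", "Lee2017", "Chen2019"]
doc_b = ["Lee2017", "Chen2019", "Kim2021"]
strength = coupling_strength(doc_a, doc_b)  # 2 shared references
```

Computing this strength for every document pair yields the weighted network that VOSviewer visualizes in Figure 10.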

4. Discussion

4.1. Techniques for Data Collection in Building Damage Assessment

4.1.1. Satellite-Based Data Collection

Accurate data collection for identifying damaged buildings is a crucial aspect of disaster reconnaissance. The prompt and precise identification of structural damage is vital for informing first responders about critically affected structures, enabling them to prioritize rescue operations effectively. Researchers employ various remote sensing technologies, including optical satellite imagery and synthetic aperture radar (SAR), to collect data on damaged buildings. Satellite imagery has become a vital technology in disaster mapping and assessment due to its ability to capture detailed temporal and spatial information over large areas, which is essential in post-disaster scenarios such as earthquakes, typhoons, and hurricanes [21]. Satellites equipped with optical sensors can provide detailed information about the structural integrity of buildings following disasters such as earthquakes, floods, and hurricanes; modern sensors, such as WorldView-4, provide very high-resolution (VHR) imagery with a ground sample distance (GSD) as fine as 0.31 m. Optical sensors provide high-resolution images, while SAR sensors can operate in all weather conditions and during both day and night, ensuring continuous monitoring capability [39,52].
One of the key advantages of optical satellite imagery is its extensive coverage and frequent revisit times, which allow timely monitoring of disaster-affected areas. For instance, the Copernicus Sentinel-2 mission provides optical imagery with a spatial resolution of up to 10 m and a revisit time of five days, making it highly effective for continuous monitoring [53]. SAR, in turn, offers several advantages over optical imagery in disaster monitoring: unlike optical sensors, it can penetrate cloud cover and acquire data at night and in all weather conditions, ensuring reliable data acquisition when timely and accurate information is crucial [54]. SAR systems emit microwave signals that interact with the Earth’s surface, and the reflected signals are used to create high-resolution images. This technology is particularly useful for detecting structural deformations and changes in surface roughness, which are indicative of building damage. Prominent SAR satellite systems such as TerraSAR-X, TanDEM-X, Sentinel-1, RADARSAT-2, ALOS-2, RISAT, Cosmo-SkyMed, and Gaofen-3 have been extensively used in this domain.
TerraSAR-X, launched by the German Aerospace Center (DLR) in 2007, provides high-resolution radar imagery with up to 1 m resolution [55]. Its versatility in capturing data in various modes, such as StripMap, Spotlight, and ScanSAR, allows for detailed monitoring of natural disasters like floods, landslides, and earthquakes [56]. TanDEM-X, launched in 2010 as a twin satellite to TerraSAR-X, forms a unique SAR interferometry constellation capable of generating high-precision digital elevation models (DEMs) [57]. These DEMs are crucial for understanding terrain changes caused by natural disasters. For instance, after an earthquake, the TanDEM-X system can detect ground displacement and deformation, aiding in evaluating the extent of damage and planning reconstruction efforts, as shown by Eineder et al. [58].
Sentinel-1, part of the European Space Agency’s Copernicus program, comprises two satellites, Sentinel-1A and Sentinel-1B, launched in 2014 and 2016, respectively. These satellites provide all-weather, day-and-night radar imagery, with a constellation revisit time of six days [59]. Sentinel-1’s wide coverage and short revisit time make it ideal for monitoring dynamic disaster scenarios, such as floods, landslides, and oil spills [60]. The sensor’s ability to detect surface movement and changes over time has been crucial in assessing the impact of disasters and guiding relief efforts. Using freely available data from the Sentinel satellites, researchers worldwide have explored various methods to analyze earthquake damage, particularly focusing on damaged buildings [61,62,63]. RADARSAT-2, a Canadian satellite launched in 2007, offers high-resolution SAR imagery with various beam modes, including fine, standard, and wide modes. Its versatility and high resolution are beneficial for monitoring disasters such as floods, hurricanes, and earthquakes. RADARSAT-2 has been extensively used to map flood extents, assess hurricane damage, and monitor changes in coastal regions [64]. ALOS-2, the successor to Japan’s ALOS satellite, was launched in 2014 and features the PALSAR-2 sensor, providing high-resolution imagery with enhanced sensitivity. It is particularly useful for monitoring earthquakes, tsunamis, and landslides [65]. Cosmo-SkyMed, an Italian constellation of four satellites launched between 2007 and 2010, offers high-resolution SAR imagery with short revisit times. Its ability to capture frequent images makes it suitable for monitoring rapidly changing disaster scenarios. Cosmo-SkyMed has been used extensively for earthquake damage assessment, flood mapping, and monitoring volcanic activity [66]. The constellation’s data help in understanding the extent of damage and planning effective recovery efforts.
Gaofen-3, a Chinese satellite launched in 2016, provides high-resolution SAR imagery with multiple imaging modes. Its data are valuable for disaster monitoring, including floods, landslides, and earthquakes. Gaofen-3’s ability to capture detailed images in all weather conditions ensures continuous monitoring of disaster-prone areas. Its data support emergency responders in assessing damage and planning relief operations efficiently [67].
Satellite imagery techniques have some limitations as well. High-resolution images may miss subtle structural damage, such as fine cracks or internal issues not visible from above. Processing this rich data requires advanced techniques and substantial computational resources, necessitating continual refinement of automated methods. Post-disaster datasets are often imbalanced, with fewer collapsed buildings compared to intact ones, requiring techniques like over-sampling and cost-sensitive learning to improve classification accuracy. Optical imagery’s reliance on clear weather can impede data acquisition due to cloud cover, necessitating supplementary use of weather-independent methods like SAR. The limitations of current building damage estimation methods using post-event SAR imagery include the impracticality of the physical polarimetric SAR features approach due to the unavailability of fully polarimetric SAR data in real-world scenarios [68].
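The over-sampling remedy for imbalanced post-disaster datasets mentioned above can be sketched in a few lines. The following is a minimal illustration on synthetic data (the feature matrix, class ratio, and 90/10 split are invented for the example), not a method taken from the reviewed studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy post-disaster labels: 0 = intact (majority), 1 = collapsed (minority).
X = rng.normal(size=(100, 4))          # hypothetical per-building features
y = np.array([0] * 90 + [1] * 10)      # strongly imbalanced classes

def random_oversample(X, y):
    """Duplicate minority-class samples until all classes are balanced."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        c_idx = np.flatnonzero(y == c)
        # Sample with replacement up to the majority-class count.
        idx.append(rng.choice(c_idx, size=n_max, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

X_bal, y_bal = random_oversample(X, y)
```

Cost-sensitive learning, the other remedy named above, would instead leave the data unchanged and weight misclassified minority samples more heavily in the loss function.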

4.1.2. UAV-Based Data Collection

The utilization of both manned aircraft and UAVs for data collection is prevalent due to their ability to meet specific user requirements effectively. UAVs are used to collect data on damaged buildings and disaster-affected areas for analysis [17]. UAVs offer significant advantages for disaster damage assessment, including the ability to capture very high-resolution imagery (up to 2 cm) essential for identifying fine damage details such as cracks on façades. Their superior portability and high-resolution imaging capabilities allow them to gather more detailed information on building damage than manned platforms [69]. Their flexible data acquisition also supports multi-angle imaging, providing comprehensive views of building façades and roofs often missed by traditional methods, and they can generate detailed 3D point clouds, which are invaluable for identifying major damage features such as collapsed roofs and rubble piles. UAVs also enable fully controlled flight paths for systematic and reliable data collection, and their rapid deployment facilitates quick data acquisition and assessment, crucial for timely post-disaster decision-making. Furthermore, UAVs can safely access and survey hazardous or inaccessible areas, ensuring thorough damage assessments without risking human lives [18]. However, UAVs have several disadvantages in disaster damage assessment, including short battery life that limits coverage and sensitivity to variable atmospheric conditions that can affect data quality. Their effective use requires skilled pilots, but pilot training is often insufficient. Strict regulations in many countries can also restrict UAV deployment and flexibility. Additionally, the rich, detailed data they provide can complicate automated analysis and necessitate time-consuming expert assessments, introducing subjectivity into the evaluation process.
Airborne LiDAR (Light Detection and Ranging) offers several advantages for identifying damaged buildings, as it provides highly accurate 3D point clouds essential for assessing both surface and structural damages, facilitating effective damage detection and emergency response. The rapid data acquisition capability of airborne LiDAR allows for quick assessment over large areas, aiding timely decision-making and resource allocation during disasters [70]. Additionally, LiDAR operates effectively in various weather conditions, including cloudy and nighttime scenarios, ensuring consistent data collection. The detailed structural analysis capability of LiDAR helps identify specific damage types, such as surface irregularities and structural deformations, enabling comprehensive damage assessments [71,72]. Foroughnia et al. [73] used airborne LiDAR to collect data on the terrain and structures before and after an earthquake. The pre- and post-event point clouds were then aligned using the Iterative Closest Point (ICP) method to determine ground and building displacements, which were in turn used to calculate residual drift ratios. These ratios classify the damage levels of buildings into categories ranging from negligible to complete, providing a detailed assessment of earthquake-induced damage.
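The ICP alignment step described above can be illustrated with a toy point-to-point ICP: nearest-neighbour correspondences followed by an SVD (Kabsch) rigid fit, iterated to convergence. This is a deliberately simplified sketch on synthetic points, not the implementation of Foroughnia et al. [73]:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto B (Kabsch/SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

def icp(source, target, iters=20):
    """Iteratively align `source` (post-event cloud) to `target` (pre-event cloud)."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, nn = tree.query(src)                  # closest-point correspondences
        R, t = best_rigid_transform(src, target[nn])
        src = src @ R.T + t
    return src

# Toy example: a rigidly shifted copy of a random cloud is recovered by ICP.
rng = np.random.default_rng(1)
pre = rng.uniform(size=(200, 3))
post = pre + np.array([0.02, -0.01, 0.005])      # simulated displacement
aligned = icp(post, pre)
residual = float(np.abs(aligned - pre).max())
```

In practice the residual displacements left after such an alignment, normalised by storey height, are what yield the drift ratios used for damage classification.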
The deployment and operation of airborne LiDAR systems can be costly, including expenses for equipment, maintenance, and data processing, which may limit its use in smaller-scale assessments. Additionally, LiDAR data require complex processing and specialized expertise to extract meaningful information from 3D point clouds. Effective damage assessment often necessitates comparison with pre-event data, which may not always be available, hindering accurate damage evaluation. While LiDAR is highly effective for detecting surface and certain structural damages, it may struggle to identify internal deformations or minor cracks, which can affect the comprehensiveness of damage assessments. To address these limitations, data processing is crucial for filtering out noise and creating digital elevation models (DEMs) or digital surface models (DSMs), enhancing the accuracy and reliability of the data for thorough damage evaluation. This involves techniques like ground filtering, classification, and point cloud segmentation to distinguish ground points from non-ground points such as buildings and vegetation. Change detection is achieved by comparing pre-disaster and post-disaster DEMs/DSMs to identify terrain and structural changes, such as landslides and erosion [74]. The detailed 3D models produced allow for accurate measurement of structural damage, including displacement, deformation, and volumetric changes, which are vital for planning reconstruction efforts.
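The DEM/DSM change detection described above reduces to differencing two co-registered elevation grids and flagging large drops. A minimal sketch on synthetic grids follows (the 1 m threshold and grid values are arbitrary choices for illustration):

```python
import numpy as np

def dsm_change_mask(dsm_pre, dsm_post, threshold=1.0):
    """Flag cells whose elevation dropped by more than `threshold` metres
    between two co-registered DSMs (e.g. collapsed structures, erosion)."""
    diff = dsm_post - dsm_pre
    return diff < -threshold, diff

# Toy 5x5 DSMs: a 2x2 "building" loses 4 m of height after the event.
pre = np.full((5, 5), 10.0)
pre[1:3, 1:3] = 16.0                    # intact building footprint
post = pre.copy()
post[1:3, 1:3] = 12.0                   # partially collapsed

mask, diff = dsm_change_mask(pre, post, threshold=1.0)
n_changed = int(mask.sum())             # number of cells flagged as changed
```

Real pipelines add the ground filtering and noise removal mentioned above before differencing, since unfiltered vegetation or debris points would otherwise register as elevation change.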
However, UAV-based radar systems have certain limitations in disaster assessment, including limited flight time and range due to battery constraints, which affect continuous monitoring and rapid response capabilities. Additionally, environmental factors such as adverse weather conditions and complex terrains can interfere with radar signal accuracy and UAV stability. The high data processing requirements of radar systems, coupled with limited onboard computing power, pose challenges for real-time data analysis and decision-making. Moreover, current regulations restrict UAV airspace access, complicating timely deployment in disaster zones. To address these limitations, advancements in battery technology and energy-efficient UAV designs can extend flight times. Enhancing radar signal processing algorithms and integrating artificial intelligence can improve data accuracy and processing speed. Developing UAVs with better weather resistance and more robust designs can mitigate environmental impacts. Finally, regulatory frameworks need to evolve to allow more flexible UAV operations in disaster response scenarios [75,76,77].

4.1.3. Ground-Based Data Collection

Complementing these methods, ground-based investigations such as visual inspections of damaged buildings, though time-consuming, are crucial for identifying structural damage [78]. These inspections provide detailed assessments that are essential for accurate damage evaluation and informing subsequent repair and recovery efforts. One of the primary methods of ground-based data collection is on-site surveys. Teams of experts, including engineers, geologists, urban planners, and disaster response professionals, are deployed to the affected areas to conduct these surveys. These professionals meticulously examine the damage, taking precise measurements and recording their observations. Standardized forms and checklists are often used to ensure that data collection is consistent and comprehensive. This hands-on approach allows for the identification of specific structural weaknesses and failures, offering insights that are not always visible through remote sensing methods. Photographic and video documentation also play a critical role in ground-based disaster data collection. High-resolution cameras and video equipment capture the physical state of the affected areas, providing a visual record of the damage. These images and videos are invaluable for detailed analysis and can be used to support claims for disaster relief funding and insurance [79,80]. They also serve as a historical record that can inform future research and disaster preparedness initiatives.
Ground-based LiDAR data deliver precise details about damaged structures in disaster-stricken areas. Notably, ground-based LiDAR technologies, such as Terrestrial Laser Scanning (TLS) and Mobile Laser Scanning (MLS), provide substantial benefits for evaluating damage post-event, surpassing the capabilities of airborne remote sensing methods [81]. In post-earthquake building loss analysis, ground-based LiDAR data can be used to develop models for analyzing earthquake-induced building damage [82,83]. Ground-based LiDAR has greatly enhanced the ability to assess building damage caused by flooding. Bodoque et al. [84] used high-resolution ground-based LiDAR data to improve the accuracy of flood damage assessments by generating precise Digital Surface Models (DSMs). In addition, vibration-based structural damage assessment is recommended for buildings without evident damage to accurately estimate the severity of the damage [85].
In conclusion, while ground-based data collection offers detailed and direct assessments of disaster-impacted areas, it inherently faces several limitations that can restrict its efficiency and scalability. One primary challenge is the time-intensive nature of ground surveys, which often require significant human resources and can be slow to deploy in the immediate aftermath of a disaster. Additionally, ground-based methods are limited by the physical accessibility of the disaster site. Areas that are severely damaged or hazardous can be inaccessible, posing risks to personnel and potentially leading to gaps in data collection. Moreover, ground-based data collection is often constrained by environmental conditions. Poor weather, ongoing hazardous events, or unstable structures can further delay or prevent comprehensive on-site assessments. Such conditions can diminish the accuracy and timeliness of the data gathered, which are critical for effective disaster response and recovery planning. However, integrating ground-based methods with remote sensing technologies can significantly enhance the efficiency and effectiveness of these surveys. Remote sensing enables the collection of data without physical access to the site, overcoming many limitations associated with direct assessments. It provides a broader view and gathers critical data rapidly, which is especially valuable in extensive or inaccessible disaster zones where quick situational awareness is essential.

4.2. Analytical Techniques for Post-Disaster Building Assessment

4.2.1. Image-Based Analysis

Image-based analysis, utilizing data from Unmanned Aerial Vehicles (UAVs), satellites, and field observations, has emerged as a powerful tool for effective disaster response and recovery. This method allows for the comprehensive evaluation of the extent and severity of damage over large areas, providing vital information for emergency services, government agencies, and reconstruction efforts. Such analysis for disaster reconnaissance involves several critical steps to ensure a comprehensive understanding of the impact. The process begins with a series of image preprocessing steps, including georeferencing [86], ortho-rectification [87], and mosaicking [88], to ensure the images are accurately aligned and free from distortions. The preprocessing of collected images focuses on reducing noise and haze to improve their suitability for machine processing. This step is crucial in enhancing the clarity and quality of the images, ensuring they are optimized for subsequent automated analysis [89]. There are various techniques in image-based analysis to identify and assess the damage to buildings. One common method is change detection, which involves comparing pre- and post-disaster images to identify areas that have undergone significant changes [90]. Techniques such as pixel-based and object-based change detection are employed, with pixel-based methods comparing individual pixels and object-based methods analyzing groups of pixels for a more context-aware assessment. Change detection algorithms can highlight differences in color, texture, and structural integrity, which are indicative of damage. For instance, in the case of an earthquake, the collapse of buildings, rubble, and debris can be easily identified through changes in the landscape [91]. Feature extraction also plays a pivotal role in this analysis. It involves identifying and isolating significant attributes from the imagery, such as edges, corners, textures, and shapes, which can then be used to detect and classify damage. For example, features like building outlines, roof textures, structural edges, road networks, and water bodies can be extracted and analyzed to assess the damage [92]. Advanced algorithms can quantify these features, enabling a detailed comparison between pre- and post-disaster conditions.
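Pixel-based change detection of the kind described above can be sketched as a thresholded grey-level difference. The example below is deliberately simplified and runs on synthetic images (real workflows first apply the co-registration and radiometric preprocessing discussed earlier, and the threshold of 30 grey levels is an arbitrary illustrative choice):

```python
import numpy as np

def pixel_change_map(img_pre, img_post, threshold=30):
    """Pixel-based change detection: absolute grey-level difference of two
    co-registered images, thresholded into a binary change mask."""
    diff = np.abs(img_post.astype(np.int16) - img_pre.astype(np.int16))
    return diff > threshold

# Toy 8-bit images: a bright "roof" goes dark after the event.
pre = np.full((6, 6), 200, dtype=np.uint8)
post = pre.copy()
post[2:4, 2:4] = 60                      # damaged region

mask = pixel_change_map(pre, post)
changed_fraction = float(mask.mean())    # share of pixels flagged as changed
```

An object-based variant would aggregate this per-pixel evidence over segmented building footprints before deciding whether each building has changed.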
Advanced machine learning and deep learning techniques are increasingly being used to automate and enhance the accuracy of damage assessment from images [93,94]. Researchers have been investigating the application of machine learning (ML) algorithms to evaluate the condition of buildings. ML algorithms have been applied to the seismic risk assessment of reinforced concrete (RC) buildings, enabling the prediction of seismic responses and performance levels. Techniques such as Artificial Neural Networks (ANNs), Extra-Trees Regressor (ETR), and Extreme Gradient Boosting (XGBoost) have shown exceptional performance in generating seismic fragility curves, thus expediting seismic risk assessment processes. Incorporating innovative hyperparameter optimization methods, such as halving search and k-fold cross-validation, substantially reduces computational effort compared to traditional seismic fragility assessment procedures [95,96]. These advancements have inspired researchers to utilize advanced machine learning and deep learning techniques for automating and improving damage assessment from images.
Convolutional Neural Networks (CNNs), a type of deep learning algorithm, have shown great promise in identifying damaged buildings from aerial and satellite images. Moreover, these techniques are adept at crack analysis, a crucial indicator for damage identification, which is essential for disaster reconnaissance [97]. These applications extend to infrastructure maintenance, urban planning, monitoring data recovery [98], response prediction for megastructures [99], and environmental monitoring, demonstrating the versatility and broad impact of machine learning and deep learning in managing and mitigating risks across different sectors. CNNs can be trained on large datasets of labeled images to recognize patterns associated with different levels of damage. Once trained, these models can process new images rapidly, providing real-time assessments of disaster-affected areas. For post-earthquake rescue planning, researchers have integrated balancing methods with CNNs, which improved their models’ capability to identify collapsed buildings [21]. Incorporating artificial intelligence with the collected images also significantly enhances the effectiveness of UAVs for disaster assessment, making them an excellent choice for rapid response teams [100]. Numerous researchers have leveraged deep learning-based object detection methods, including Faster R-CNN and YOLO, to identify damaged building regions in post-disaster imagery [101,102]. UAV platforms integrated with deep learning are increasingly being utilized for post-earthquake damage assessment and management [103,104]. Khankeshizadeh et al. [86] explored the use of both deep learning (DL) and machine learning (ML) techniques for building damage assessment using UAV data by employing various feature sets, including spectral features (SFs) and combinations of spectral and geometrical features (SGFs). These artificial intelligence techniques have enhanced the analysis of UAV data, thereby increasing the ability of first responders to provide aid and manage disasters more efficiently.
However, for image-based analysis, high-resolution data availability is crucial for accurate post-disaster building damage assessment as it provides detailed and precise information necessary for evaluating the extent and nature of damage. These data, typically obtained from advanced remote sensing technologies, enable a granular analysis of the affected areas. The importance of high-resolution data lies in its ability to capture fine details of structural damage that lower-resolution data might miss, such as cracks, deformations, and partial collapses. These details are essential for rapid and effective response, planning of relief efforts, and allocation of resources [105]. Moreover, high-quality data ensure the reliability of damage assessment models and algorithms, which are often used to automate the evaluation process [106]. Accurate data inputs lead to more reliable predictions and assessments, reducing the risk of errors that could result in either an underestimation or overestimation of the damage, both of which have significant implications. Underestimation might lead to insufficient aid and delayed recovery, while overestimation could result in unnecessary expenditure and misallocation of resources. High-resolution data also support longitudinal studies to monitor recovery progress and improve future disaster preparedness strategies. Ensuring data quality involves rigorous validation and verification processes to minimize errors and enhance the credibility of the findings. Furthermore, high-resolution data facilitate the integration of various datasets, such as structural design information and socio-economic data, enabling a comprehensive assessment that considers not only physical damage but also the broader impact on communities. Thus, the availability and quality of high-resolution data are fundamental to effective post-disaster building damage assessment, directly influencing the efficiency and effectiveness of disaster response and recovery efforts.

4.2.2. Point Cloud Data Analysis

Point clouds have emerged as a crucial technology for assessing building damage after disasters. This technology involves the creation of a three-dimensional representation of an environment, constructed from a multitude of data points collected through remote sensing techniques such as LiDAR, photogrammetry, and terrestrial laser scanning (TLS). Each point in a point cloud is defined by X, Y, and Z coordinates, and can include additional attributes such as color, intensity, and reflectivity, which are invaluable for detailed analysis and modeling, providing a robust foundation for evaluating structural damage in the aftermath of a disaster [107].
The generation of point clouds begins with data acquisition. LiDAR systems, commonly mounted on drones or aircraft, emit laser pulses toward the ground and measure the time it takes for these pulses to return. This time-of-flight measurement is used to calculate the distance between the sensor and the surface, resulting in a dense, highly accurate 3D map of the surveyed area [108]. LiDAR data leveraged with deep learning are useful for wildfire damage analysis and detection as they enable precise identification and classification of damaged structures from detailed point-cloud datasets [109]. Yang et al. [82] processed and analyzed the LiDAR data using a triangular network vector model (TIN-shaped model) in conjunction with the alpha shapes algorithm to measure deformation and extract detailed damage features, such as wall cracks and tilt. The analysis results provide accurate, quantitative assessments of building damage, enabling the identification of structural deformations and damage levels, which are critical for post-earthquake emergency response and reconstruction efforts.
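The time-of-flight principle described above reduces to a one-line formula, since each pulse travels the sensor-to-surface distance twice:

```python
# Time-of-flight ranging: the pulse travels to the surface and back,
# so range = c * t / 2.
C = 299_792_458.0            # speed of light in vacuum, m/s

def tof_range(round_trip_time_s):
    """Distance from sensor to surface for a given round-trip pulse time."""
    return C * round_trip_time_s / 2.0

# A return delayed by ~6.67 microseconds corresponds to roughly 1 km of range.
r = tof_range(6.671e-6)
```

Centimetre-level ranging therefore requires timing the return to within tens of picoseconds, which is why LiDAR accuracy depends so heavily on the quality of the timing electronics.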
Once the point cloud data are collected, the next step is processing and analysis. The raw data often require cleaning and filtering to remove noise and irrelevant points, which ensures a more accurate representation of the surveyed area. Advanced algorithms are then applied to extract meaningful information from the point cloud. Researchers have shown significant interest in utilizing point clouds for swift damage assessment in disaster scenarios. The ability to quickly collect comprehensive 3D data over large areas allows emergency response teams to gain an immediate understanding of the extent and severity of the damage. This is crucial for prioritizing response efforts and allocating resources effectively. By comparing pre- and post-disaster point clouds, it is possible to measure the deformation of buildings. This deformation analysis helps identify structural displacements, tilting, and other forms of damage that may not be visible to the naked eye. High-resolution point clouds enable the detection of cracks and other minor damages on building surfaces. Advanced image processing and machine learning algorithms can analyze point cloud data to identify and quantify these damages, providing a detailed assessment of the structural integrity of buildings. For example, researchers integrated CNN features with 3D point cloud features, which improved the accuracy of disaster damage detection: the combined approach achieved an average classification accuracy of 94%, compared to 91% when using CNN features alone [17]. Additionally, the combined model demonstrated improved transferability, achieving an average accuracy of 85% when applied to new, unseen sites.
For structures that have partially collapsed, point clouds can be used to calculate the volume of debris [110]. Creating a digital twin of the affected area is another powerful application of point clouds. A digital twin is a highly accurate 3D model that replicates the real-world environment, allowing for detailed analysis and visualization of the damage. These 3D models can be used for planning reconstruction efforts, conducting virtual inspections, and communicating the extent of damage to stakeholders [111]. Researchers have also explored a damage segmentation and evaluation framework, utilizing UAV-based point cloud modeling to inspect surface damage on building facades caused by earthquakes [112]. The advantages of using point clouds in building damage assessment are numerous. They provide highly accurate and detailed 3D representations of buildings and structures, enabling precise damage assessments [113]. The speed of data collection and processing is crucial in disaster scenarios where timely information is essential for effective response. Point clouds also allow for non-intrusive data collection, ensuring the safety of assessment teams. The comprehensive coverage of point clouds, capturing data from multiple angles, provides a thorough understanding of the affected region.
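One simple way to turn pre- and post-event surface models derived from point clouds into a volume estimate is grid differencing: rasterize both clouds onto a common grid and sum the height losses. This is a hedged sketch on synthetic grids (the cell size and elevations are invented, and the meshing approaches cited above are more sophisticated):

```python
import numpy as np

def lost_volume(dsm_pre, dsm_post, cell_area_m2=1.0):
    """Estimate the volume lost between two co-registered DSMs (m^3) by
    summing negative elevation change, scaled by the cell footprint area."""
    drop = np.clip(dsm_pre - dsm_post, 0.0, None)   # keep only height losses
    return float(drop.sum() * cell_area_m2)

# Toy example: a 3x3 m footprint loses 2 m of height -> 18 m^3 lost.
pre = np.zeros((10, 10))
pre[0:3, 0:3] = 5.0
post = pre.copy()
post[0:3, 0:3] = 3.0
vol = lost_volume(pre, post, cell_area_m2=1.0)
```

A debris estimate would apply the same differencing in reverse (height gains around the footprint) and correct for the bulking of rubble relative to the intact structure.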
However, there are challenges and limitations associated with the use of point clouds. The large volume of data generated requires significant processing power and advanced algorithms for effective analysis. Analyzing and interpreting point cloud data also requires specialized skills and expertise, which may not be readily available in all disaster response teams. Additionally, point cloud data often suffer from noise contamination and contain outliers, posing significant challenges for accurate analysis and interpretation [114]. This issue arises from various factors, including sensor inaccuracies, environmental interferences, and reflective surfaces, which can introduce errors and irrelevant data points into the dataset. Despite these challenges, the future of point clouds in building damage assessment looks promising. Advances in technology are likely to enhance the capabilities and accessibility of point cloud-based assessments. The development of machine learning algorithms for automated damage detection and classification will improve the efficiency and accuracy of assessments. The use of cloud computing resources for processing and analyzing large point cloud datasets will overcome the limitations of local processing power. Real-time data collection and processing will enable continuous monitoring of buildings and infrastructure, providing early warnings of potential failures or further damage.

4.2.3. Radar Remote Sensing Data Analysis

Radar remote sensing technologies have revolutionized disaster reconnaissance, particularly in assessing building damage after events such as earthquakes, hurricanes, floods, and other natural disasters. The data collected from these technologies, such as Synthetic Aperture Radar (SAR), Radar Interferometry (InSAR), Polarimetric SAR (PolSAR), and UAV-based radar systems, are analyzed through a series of sophisticated processing and interpretation techniques. SAR imagery contains radiometric and geometric distortions, and preprocessing is necessary to reduce them [115]. Preprocessing corrects geometric distortions, removes noise, and calibrates the signal strength using techniques such as speckle filtering, radiometric calibration, and geocoding. One such technique is radiometric calibration, which accounts for factors such as the antenna gain, system losses, and the effective aperture of the antenna. Left uncorrected, these factors introduce a significant radiometric bias into the SAR image and render it unsuitable for applications that entail quantitative use of the SAR data. Radiometric calibration converts the pixel values in the SAR image from being qualitatively representative of the biased backscatter signal to being quantitatively representative of the radar cross-section (RCS, σ) for point targets and the backscatter coefficient (σo) for extended targets. In applications such as target recognition, this allows proper comparison between the scattering centers of a target imaged with different SAR sensors, or with the same sensor under different operating conditions [116]. Another preprocessing technique is multilooking, which is applied to SAR data to generate square pixels; the number of looks is determined by the image statistics [117].
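Multilooking, mentioned above, is essentially block averaging of the intensity image: resolution is traded for a reduction in speckle variance. A minimal sketch follows (the 4×2 azimuth/range look configuration is an illustrative choice, not a standard, and the exponential intensity model is the usual single-look speckle idealization):

```python
import numpy as np

def multilook(intensity, looks_az=4, looks_rg=2):
    """Average non-overlapping blocks of looks_az x looks_rg pixels.
    Here 4x2 = 8 looks; speckle standard deviation shrinks by ~1/sqrt(8)."""
    rows = (intensity.shape[0] // looks_az) * looks_az
    cols = (intensity.shape[1] // looks_rg) * looks_rg
    img = intensity[:rows, :cols]
    return img.reshape(rows // looks_az, looks_az,
                       cols // looks_rg, looks_rg).mean(axis=(1, 3))

# Fully developed speckle over a homogeneous scene: 1-look intensity is
# exponentially distributed; multilooking reduces its relative variance.
rng = np.random.default_rng(2)
speckle = rng.exponential(scale=1.0, size=(128, 128))
ml = multilook(speckle)
```

The asymmetric look counts mirror the common situation where azimuth and range resolutions differ and unequal averaging is needed to obtain roughly square ground pixels.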
Geometric correction is a crucial preprocessing technique that repositions pixels from their uncorrected image locations to a reference grid through geometric operations. Common methods include the Global Polynomial Model, 2D Direct Linear Transformation Model, Rational Function Model, and Generic Algorithm Model [118]. As an active, coherent imaging system, SAR produces speckle noise, caused by the random interference of numerous elementary reflectors within a single resolution cell [119,120]. Therefore, speckle filtering is a vital preprocessing technique for SAR, as it diminishes noise and improves image quality and interpretability. Co-registration is another key preprocessing technique that aligns slave images with a master image so that each pixel corresponds to the same point on Earth’s surface; this is essential for tomographic SAR (TomoSAR) and InSAR processing. Preprocessing methods such as image cropping, target segmentation, and image enhancement are essential to improve the quality of SAR images for better target recognition. Image cropping removes background redundancy by segmenting a specific area of the image, while target segmentation isolates the target from the background using techniques like histogram equalization, average filtering, thresholding, and mathematical morphology. Image enhancement, such as power exponent enhancement, highlights relevant features and suppresses background noise to facilitate effective feature extraction and classification [121].
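Speckle filtering can be illustrated with a simplified Lee filter, which blends each pixel with its local mean in proportion to the estimated signal-to-total variance ratio: homogeneous areas are smoothed strongly, while high-variance edges and point targets are preserved. This is a textbook-style sketch under the single-look intensity speckle model, not the filter of any particular study cited here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=5, looks=1):
    """Simplified Lee speckle filter on an intensity image.
    out = local_mean + k * (pixel - local_mean), with gain k derived from
    the ratio of estimated signal variance to total local variance."""
    sv2 = 1.0 / looks                       # speckle variance for L-look intensity
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)
    # Multiplicative-noise model: var = signal_var*(1+sv2) + sv2*mean^2.
    signal_var = np.clip((var - sv2 * mean ** 2) / (1.0 + sv2), 0.0, None)
    k = np.where(var > 0, signal_var / np.maximum(var, 1e-12), 0.0)
    return mean + k * (img - mean)

# Homogeneous speckled scene: filtering reduces variance, output stays non-negative.
rng = np.random.default_rng(3)
noisy = rng.exponential(scale=2.0, size=(64, 64))
smooth = lee_filter(noisy)
```

Refined variants (enhanced Lee, Frost, Gamma-MAP, and non-local filters) follow the same local-statistics idea with better edge handling.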
SAR analysis utilizes radar-based remote sensing to capture high-resolution images regardless of weather conditions or daylight, making it ideal for disaster monitoring [122]. Data acquisition involves SAR sensors on satellites like Sentinel-1 and RADARSAT, or on aircraft, emitting microwave signals toward the Earth’s surface and measuring the reflected signals. Researchers have explored various ways of analyzing SAR data to maximize its value for disaster mapping. Kim et al. [123] analyzed SAR data by employing a contextual change analysis method that maps damaged buildings using novel textural features derived from bi-temporal SAR images with different observation modes. The Gray Level Co-occurrence Matrix (GLCM) and Local Ternary Codes (LTCs) were used in the study to enhance damage detection while minimizing false alarms. This approach improved the detection of earthquake-induced damage, achieving a 72.5% detection rate and a 6.8% false alarm rate in the study area affected by the 2016 Kumamoto earthquake. GLCM and LTC enhance damage detection in SAR data by analyzing texture and local pixel intensity variations. GLCM captures spatial relationships between pixel pairs, extracting features like contrast, correlation, and homogeneity, essential for identifying structural damages. LTC, an extension of Local Binary Patterns (LBPs), encodes image texture into three states, improving robustness to noise and lighting changes and highlighting subtle surface changes. Together, these methods enable detailed and accurate damage assessment in SAR data, aiding efficient disaster response and recovery [123,124,125].
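The GLCM features mentioned above can be illustrated in miniature. The sketch below builds a single-offset, normalized co-occurrence matrix and computes only the contrast feature (the cited work [123] uses several features, multiple offsets, and bi-temporal imagery); the two toy textures are invented for the example:

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Normalized Grey Level Co-occurrence Matrix for one pixel offset.
    Assumes a non-negative image with at least one non-zero pixel."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    dr, dc = offset
    a = q[max(0, -dr):q.shape[0] - max(0, dr), max(0, -dc):q.shape[1] - max(0, dc)]
    b = q[max(0, dr):, max(0, dc):][:a.shape[0], :a.shape[1]]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)     # count co-occurring grey-level pairs
    return m / m.sum()

def glcm_contrast(m):
    """Contrast = sum over (i,j) of (i-j)^2 * p(i,j); high for busy textures."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

# A uniform patch has zero contrast; a checkerboard has maximal contrast.
flat = np.full((8, 8), 7)
checker = np.indices((8, 8)).sum(axis=0) % 2 * 7
c_flat = glcm_contrast(glcm(flat))
c_checker = glcm_contrast(glcm(checker))
```

Collapsed buildings tend to raise local contrast and lower homogeneity relative to intact, regularly structured roofs, which is what makes such texture features useful for damage mapping.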
InSAR and PolSAR are two advanced microwave remote-sensing technologies that have emerged in recent years. These technologies have been effectively utilized for the comprehensive analysis and assessment of earthquake disaster impacts and associated losses [126]. Their flexibility and precision in capturing detailed geophysical changes make them invaluable tools in disaster monitoring and response, enhancing our ability to rapidly detect and evaluate the extent of damage in affected areas. InSAR measures surface deformations by analyzing phase differences between pre- and post-earthquake SAR images, effectively mapping subsidence or uplift caused by seismic activity. This capability was demonstrated in the assessment of the 2021 Baicheng earthquake, where InSAR provided accurate coseismic deformation fields crucial for immediate disaster response [127]. PolSAR, on the other hand, enhances building damage detection by utilizing different polarizations of radar waves to differentiate between undamaged and damaged structures. PolSAR’s effectiveness is illustrated in the rapid damage assessment following the Yushu earthquake, where polarization orientation angle compensation and supervised classification methods significantly improved the accuracy of identifying collapsed buildings [42]. These technologies enable comprehensive and reliable earthquake damage assessments, facilitating timely and informed decision-making for disaster management.
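The phase-difference principle behind InSAR can be sketched in a few lines: the interferometric phase is the argument of the complex product of the master image with the conjugate of the co-registered slave, and, after unwrapping and removal of topographic, atmospheric, and orbital contributions, it maps to line-of-sight displacement. The wavelength below assumes a Sentinel-1-like C-band sensor; this is a simplified illustration, not an operational processor:

```python
import cmath

def interferometric_phase(master, slave):
    """Per-pixel interferogram: the phase of master * conj(slave), in radians."""
    return [[cmath.phase(m * s.conjugate())
             for m, s in zip(row_m, row_s)]
            for row_m, row_s in zip(master, slave)]

def displacement_from_phase(phi, wavelength=0.0556):
    """Convert unwrapped repeat-pass phase (rad) to line-of-sight motion in
    meters; 0.0556 m approximates the Sentinel-1 C-band wavelength."""
    return phi * wavelength / (4 * cmath.pi)

# One pixel whose scatterer moved between acquisitions: the slave
# return is phase-shifted by 0.5 rad relative to the master.
master = [[1 + 0j]]
slave = [[cmath.exp(0.5j)]]
phi = interferometric_phase(master, slave)[0][0]   # -0.5 rad
```

The factor 4π (rather than 2π) reflects the two-way travel path in repeat-pass imaging, which is why even centimeter-scale coseismic motion produces easily measurable fringes.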
CNNs and Recurrent Neural Networks (RNNs) offer substantial potential in enhancing SAR data analysis for damage detection. By automatically extracting high-level features from raw data, CNNs enhance the accuracy of building damage classification and detection. According to Xu et al. [128], CNNs can effectively address the complexities of SAR images, such as speckle noise and radiometric distortions, by leveraging robust feature extraction capabilities. Shahzad et al. [129] highlighted the effectiveness of fully convolutional neural networks (FCNNs) in detecting buildings in VHR SAR images, achieving high mean pixel accuracies through advanced techniques like SAR tomography for data labeling and the use of auxiliary information to improve training datasets. On the other hand, RNNs, particularly Long Short-Term Memory (LSTM) networks, are adept at temporal sequence modeling, which is crucial for monitoring changes over time in multi-temporal SAR datasets. This capability allows RNNs to track the progression of damage and distinguish between transient and permanent changes. RNNs can analyze time series of interferometric SAR (InSAR) coherence to detect anomalies indicative of building damage. For instance, Stephenson et al. [130] demonstrated the use of RNNs as probabilistic anomaly detectors on InSAR coherence time series, where the network is trained on pre-event data to forecast expected coherence values. The deviation between these forecasted values and the actual post-event coherence provides a measure of damage, enabling more accurate and localized damage detection compared to traditional methods. This approach allows for customized damage detection thresholds based on local coherence behavior, significantly improving the precision and recall of damage mapping across diverse geographic areas and disaster scenarios. 
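The forecasting idea of Stephenson et al. [130] can be illustrated with a deliberately simplified stand-in: instead of an RNN, the expected post-event coherence is forecast from the pre-event mean, and the anomaly threshold adapts to each pixel's own pre-event variability. All numbers are invented for illustration:

```python
import statistics

def damage_score(pre_event, post_event, z_thresh=3.0):
    """Flag anomalous coherence loss relative to a pixel's own history.

    A stand-in for the RNN forecaster: the expected post-event coherence
    is the pre-event mean, and the decision threshold adapts to the
    pixel's natural variability (its pre-event standard deviation).
    """
    mu = statistics.fmean(pre_event)
    sigma = statistics.pstdev(pre_event) or 1e-6   # guard flat series
    z = (mu - post_event) / sigma                  # positive = coherence drop
    return z, z > z_thresh

# Stable urban pixel: coherence ~0.8 before the event, 0.3 after.
pre = [0.81, 0.79, 0.80, 0.82, 0.78]
z, damaged = damage_score(pre, 0.30)
```

The per-pixel normalization is the key design point: a coherence of 0.5 is alarming for a stable building but unremarkable for vegetation, so a single global threshold would either miss damage or flood the map with false alarms.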
The application of these advanced neural network architectures facilitates more efficient and reliable SAR data analysis, which is vital for rapid disaster response, infrastructure monitoring, and environmental impact assessments. These models’ ability to handle large datasets and their adaptability to different SAR configurations make them invaluable tools in modern remote sensing applications.

5. Challenges and Future Directions

Based on the discussion of previous research works, the following issues remain open challenges; addressing them will make disaster reconnaissance more effective:

5.1. Challenges to Be Solved

  • Data Quality and Availability
The accuracy of building damage assessment heavily relies on the quality and availability of data. High-resolution satellite imagery, UAV footage, and LiDAR data are often required but can be expensive and difficult to obtain promptly after a disaster. The presence of speckle noise in SAR images poses a further limitation: despite filtering, speckle noise cannot be completely removed, and it degrades the accuracy of the features used for classification. Additionally, obstructions such as debris or adverse weather conditions can impede data collection, leading to incomplete or inaccurate assessments. Collecting data during extreme disaster conditions is especially challenging for UAVs. Fire scenarios, for instance, combine high temperatures, smoke, and strong winds that can affect flight stability, limit flying capacity, and reduce sensor effectiveness. Communication breakdowns and limited battery life further complicate continuous operations in these dangerous zones. To obtain high-quality data, it is essential to address and overcome these challenges.
  • Integration of Multiple Data Sources
Integrating data from various sensors, including optical, infrared, LiDAR, radar, and ground-based observations, presents significant challenges. Advances in wireless sensor technologies and computer vision-based monitoring techniques have significantly improved our ability to gather and analyze these data [131]. However, each data type has unique characteristics and limitations, and the heterogeneity of these sources complicates the development of comprehensive damage assessment models. One primary issue is the inconsistency in data formats, resolutions, and spectral characteristics across different sensors. Temporal synchronization is another challenge, as data from various sources may be captured at different times, leading to discrepancies in the observed phenomena. Differing spatial resolutions can also cause misalignment when combining high-resolution data with lower-resolution datasets. Processing and computational demands increase significantly when handling large volumes of multi-sensor data, necessitating advanced algorithms and substantial computing resources. Effective integration therefore requires addressing data alignment, resolution differences, and varying levels of noise and reliability, together with advanced machine learning and data processing methods that harmonize these disparate data streams into a coherent, actionable framework. By overcoming these challenges, more accurate and reliable damage assessment models could be developed, enhancing our capability to respond to disasters, maintain infrastructure, and plan urban environments more effectively.
  • Processing and Analysis Complexity
The processing and analysis of large volumes of data from diverse sources such as radar, LiDAR, and optical imagery in disaster management scenarios present substantial challenges that require significant computational resources and specialized expertise. Each of these data types brings its own set of complexities that compound the difficulty of deriving accurate and actionable insights from the data. Radar, known for its capability to penetrate atmospheric conditions and provide data regardless of weather or lighting, generates large volumes of complex data that include both phase and amplitude information. The handling of these data necessitates advanced algorithms capable of distinguishing meaningful patterns from noise, which are critical for accurate disaster assessment. High-performance computing (HPC) systems are essential for processing these data efficiently, due to the sheer volume and the need for real-time analysis. These systems must not only have robust storage solutions to manage the data but also powerful processors to facilitate rapid and timely analysis. The real-time processing requirements of radar data add another layer of complexity, demanding optimized algorithms and parallel processing techniques to effectively manage the continuous inflow of data. Additionally, noise reduction, calibration, and correction of atmospheric effects are crucial steps that further increase the computational burden.
LiDAR data, while offering unmatched precision in topographical information, generate vast amounts of point cloud data that require extensive processing. Constructing accurate 3D models from these data involves intricate algorithms for noise reduction, data filtering, and feature extraction. These processes are computationally intensive and require sophisticated software that can accurately differentiate relevant features from irrelevant data. The precision of LiDAR data is pivotal in scenarios where detailed spatial analysis is required, such as assessing the structural integrity of buildings or mapping terrain deformations after a natural disaster. Optical imagery complements the data provided by radar and LiDAR by offering high-resolution visual information. This type of data is invaluable for tasks such as assessing visible damage, monitoring recovery progress, and planning rescue missions. However, processing optical imagery also involves significant challenges. The tasks of image stitching, alignment, and classification are computationally demanding and require sophisticated image processing software. Handling large datasets, especially those covering extensive disaster-affected areas, demands not only advanced computational capabilities but also extensive data management strategies to ensure that the data can be processed and analyzed in a timely manner.

5.2. Future Directions

  • Advanced Data Fusion Techniques
Future research should prioritize the development of more sophisticated data fusion techniques that can effectively integrate various types of sensor data. This integration is essential for creating comprehensive and accurate models for applications such as damage assessment, environmental monitoring, and infrastructure management. A key area of focus is the advancement of artificial intelligence (AI) and machine learning (ML) algorithms capable of handling heterogeneous data sources such as remote sensing and satellite imagery, which could be pivotal in addressing the complexities of disaster assessment data. Integrating diverse data from multiple sensors—including optical imagery, LiDAR, and thermal data—can be achieved through the development of robust ML models, leading to more accurate and reliable damage assessments. One of the primary challenges in disaster assessment is the effective combination and interpretation of data from various sensors. For instance, optical imagery provides high-resolution visual data, which are valuable for identifying surface-level damage. LiDAR offers detailed topographical information, essential for assessing structural deformations and landscape changes. Thermal data, on the other hand, help in detecting heat anomalies that can indicate ongoing fires or human activity in disaster zones. Integrating these data types requires sophisticated ML techniques capable of handling their inherent variability and complexity.
For potential improvements involving the integration of multi-modal data and the development of more robust AI models, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) can be used to create a multi-modal deep learning framework [132,133,134]. This framework aims to combine data from diverse sources, such as satellite imagery, UAV footage, LiDAR scans, social media posts, and ground-based observations, ensuring a comprehensive analysis of post-disaster scenarios [135,136,137]. The integration of these varied data types enhances the model’s ability to capture a wide range of damage indicators, from structural deformations visible in satellite images to detailed surface damage captured by UAVs. The primary benefit of this approach lies in its robustness and adaptability to chaotic post-disaster environments. By leveraging the strengths of each data source, the model can provide more accurate and timely assessments, crucial for effective disaster response and resource allocation. Furthermore, the use of advanced AI techniques enables the system to process and analyze large volumes of data rapidly, facilitating real-time decision-making and significantly improving the efficiency and effectiveness of disaster management practices. Another critical aspect is the development of user-friendly software tools and platforms that can facilitate the implementation of these advanced data fusion techniques. These tools should be designed to handle large volumes of data and provide real-time processing capabilities, making them accessible to a wide range of users, including scientists, engineers, and decision-makers.
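As one concrete (and deliberately simple) instance of such fusion, the sketch below performs decision-level ("late") fusion: each modality contributes a damage probability, and the fused score is a reliability-weighted average that degrades gracefully when a modality is missing. The modality names and weights are illustrative assumptions, not values from the cited frameworks:

```python
def late_fusion(scores, weights):
    """Decision-level fusion: weighted average of per-modality damage
    probabilities, renormalized over the modalities actually present.

    Missing modalities (None) are simply dropped, which keeps the fused
    estimate usable when, e.g., optical data are lost to cloud cover.
    """
    num = den = 0.0
    for name, p in scores.items():
        if p is None:
            continue
        w = weights[name]
        num += w * p
        den += w
    if den == 0:
        raise ValueError("no modality available")
    return num / den

weights = {"optical": 0.5, "lidar": 0.3, "thermal": 0.2}
full = late_fusion({"optical": 0.9, "lidar": 0.8, "thermal": 0.4}, weights)
cloudy = late_fusion({"optical": None, "lidar": 0.8, "thermal": 0.4}, weights)
```

A deep multi-modal network would instead fuse learned features rather than final scores, but the renormalization-over-available-inputs idea carries over and is what makes fusion robust in chaotic post-disaster conditions.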
  • Real-Time Data Processing
Real-time processing is an advanced technology that integrates the rapid capture, processing, and export of data. This method ensures that data are handled almost instantaneously, providing immediate insights and responses [138]. It involves the fast acquisition, integration, and analysis of diverse data streams, utilizing cutting-edge technologies such as sensors, satellite imagery, drones, ground-based sensors, and sometimes even social media feeds. The significance of real-time data processing capabilities, especially in urban disaster response, cannot be overstated. Enhancing real-time data processing capabilities is crucial for timely disaster response, particularly in densely populated areas where deploying disaster relief quickly is challenging. Rapid data analysis allows for the quick identification of disaster-affected areas, enabling targeted deployment of resources and minimizing the time taken to reach those in need. Technologies such as the Internet of Things (IoT) play a pivotal role in this process by providing continuous streams of data from various sensors deployed throughout a city [139]. Advanced algorithms, powered by AI and machine learning, can be designed to handle these large and diverse datasets efficiently. These algorithms need to be capable of identifying critical patterns and anomalies that indicate disaster impacts, such as structural damage, flooding, or fires, and provide this information instantly to decision-makers [51]. For instance, in the event of an earthquake, real-time data from seismographs, satellite images, and social media can be fused to create an immediate and accurate picture of the affected areas.
Cloud computing and edge computing technologies can play a vital role in achieving these capabilities, providing the necessary computational power and ensuring data availability and redundancy. In addition to technical advancements, collaboration between various stakeholders, including government agencies, non-profits, and private sector companies, is essential. Sharing data and resources can enhance the effectiveness of real-time disaster response systems, ensuring that the most accurate and up-to-date information is available to all parties involved. Future efforts should focus on developing faster and more efficient algorithms that can process and analyze data in real time, providing immediate and actionable insights to emergency responders. This is especially important in urban areas, where the complexity and density of the infrastructure can significantly hinder rescue and relief operations.
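A minimal example of the kind of lightweight streaming analysis suited to edge deployment is a sliding-window z-score detector: it keeps bounded memory, does a fixed amount of work per reading, and flags values that deviate sharply from the recent baseline. The sensor values below are simulated:

```python
from collections import deque
import statistics

class StreamAnomalyDetector:
    """Flag readings that deviate sharply from the recent sliding window.

    Bounded memory (a deque of size `window`) and O(window) work per
    reading make this suitable for IoT gateways and edge devices that
    must triage sensor streams before anything reaches the cloud.
    """
    def __init__(self, window=50, z_thresh=4.0):
        self.buf = deque(maxlen=window)
        self.z_thresh = z_thresh

    def update(self, x):
        anomalous = False
        if len(self.buf) >= 10:                      # wait for a baseline
            mu = statistics.fmean(self.buf)
            sigma = statistics.pstdev(self.buf) or 1e-9
            anomalous = abs(x - mu) / sigma > self.z_thresh
        self.buf.append(x)
        return anomalous

# Simulated accelerometer stream: quiet baseline, then a shock.
det = StreamAnomalyDetector(window=20)
quiet = [det.update(1.0 + 0.01 * (i % 3)) for i in range(20)]
shock = det.update(9.0)
```

In practice, an edge node running something like this would forward only flagged readings, reserving cloud resources and scarce post-disaster bandwidth for the data that actually matter.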
  • Improving Model Generalization
Developing models that can generalize well across different disaster scenarios and geographic regions is essential for effective disaster response and management. This involves training models on diverse datasets and incorporating transfer learning techniques to improve their adaptability and performance in new, unseen environments. Residential buildings, which are often most at risk during disasters, vary significantly in their structural characteristics across different societies and regions, adding to the complexity of creating robust and versatile models. Training models on diverse datasets is therefore a critical step in ensuring that they can handle various types of disasters, such as earthquakes, floods, hurricanes, and wildfires, in different geographic locations. These datasets should include data from past disasters, encompassing various types of buildings, infrastructure, and environmental conditions. By exposing models to a wide range of scenarios during training, we can enhance their ability to generalize and perform accurately when faced with new and unforeseen disaster situations. In this case, transfer learning can be a powerful technique. Transfer learning can enhance the adaptability of models by pre-training them on a large, diverse dataset and then fine-tuning them on a smaller, specific dataset related to the target disaster scenario or geographic region. This approach leverages knowledge from the broader dataset to improve performance in the specific context, making the model more effective in real-world applications. By utilizing knowledge from past disaster events, transfer learning enhances model generalization in disaster response applications, improving performance on new, unseen disasters. It addresses domain gaps and distribution shifts inherent in different disaster scenarios, such as varying locations, damage types, and climatic conditions.
Pretraining models on diverse datasets from previous disasters enables the extraction of generic features applicable to new disaster contexts. Fine-tuning the models with a limited number of annotated samples from the current disaster allows for quick adaptation and improved accuracy in damage assessment. This process reduces the need for extensive manual annotation, which is time-consuming and costly, and ensures reliable predictions under urgent conditions, supporting timely and effective disaster response efforts [140,141,142].
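The freeze-then-fine-tune recipe can be illustrated with a toy example: a fixed "pretrained" feature map stands in for a frozen backbone, and only a small logistic head is fitted on a handful of annotated samples from the new disaster. Every function and data point here is hypothetical, chosen only to show the mechanics:

```python
import math

def pretrained_features(x):
    """Stand-in for a frozen backbone: a fixed nonlinear feature map."""
    return [x[0], x[1], x[0] * x[1], math.tanh(x[0] + x[1])]

def fine_tune(samples, labels, lr=0.5, epochs=200):
    """Fit only a logistic head on frozen features (the backbone's weights
    are never touched), mimicking few-shot adaptation to a new disaster
    where annotated samples are scarce."""
    feats = [pretrained_features(x) for x in samples]
    w = [0.0] * len(feats[0])
    b = 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of log-loss w.r.t. z
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = pretrained_features(x)
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Ten annotated patches from the "new" disaster: label 1 = damaged.
xs = [(0.9, 0.8), (0.8, 0.9), (0.95, 0.7), (0.7, 0.95), (0.85, 0.85),
      (0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.05, 0.1), (0.2, 0.2)]
ys = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
w, b = fine_tune(xs, ys)
```

Because only the head's few parameters are updated, ten labeled samples suffice here; in a real deployment the frozen extractor would be a CNN pretrained on datasets from previous disasters, and the same head-only update keeps adaptation fast under urgent conditions.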
Incorporating data on residential structures is also crucial, as these are often the most vulnerable during disasters. Different societies have distinct residential building designs, materials, and construction practices, which influence how buildings respond to various types of disasters. For instance, the earthquake resistance of buildings in Japan, which commonly use flexible materials and advanced engineering techniques, differs significantly from that of buildings in less earthquake-prone areas. Therefore, models must account for these differences to provide accurate damage assessments and risk predictions.
Moreover, integrating advanced simulation techniques, such as finite element modeling and agent-based modeling, can help in understanding the impact of disasters on different types of residential structures. These simulations can generate synthetic data to supplement real-world datasets, further enhancing the robustness of the models.
  • Enhancing UAV Capabilities
Enhancing current UAV technologies with additional features is crucial for improving their accuracy and effectiveness in disaster assessment. Integrating advanced capabilities into UAVs will not only make them more versatile but also enable them to provide more detailed and reliable data during disaster response efforts. One of the primary limitations of current UAV technology is battery life. Extending the flight duration of UAVs is crucial for covering larger areas and conducting thorough assessments without the need for frequent recharging or battery replacement. Research into more efficient energy storage solutions, such as advanced lithium-ion batteries, fuel cells, or even solar-powered UAVs, can help achieve longer flight times. Payload capacity is another critical factor that determines the effectiveness of UAVs in disaster scenarios. Increasing the payload capacity allows UAVs to carry more advanced sensors, such as high-resolution cameras, LiDAR, thermal imaging devices, and multispectral sensors. This expanded sensor suite can provide comprehensive data on the disaster’s impact, including structural damage, thermal anomalies, and environmental changes. Developing sophisticated algorithms for autonomous flight, obstacle avoidance, and real-time decision-making can enable UAVs to navigate challenging terrains and reach areas that are difficult for human responders to access. Integrating AI and machine learning techniques can further enhance UAVs’ ability to adapt to changing conditions, such as shifting debris, adverse weather, or extreme disaster conditions like wildfires. UAVs can be adapted by equipping them with robust sensors designed for extreme conditions and employing advanced algorithms for real-time data processing and decision-making.
Enhancing communication systems with stable connection technologies, such as 5G or satellite-based links, can improve real-time data transmission between UAVs and ground control stations, ensuring that emergency responders receive timely and accurate information. Developing more efficient power sources or battery-swapping mechanisms can also improve UAV operations in fire disaster scenarios. Future research should focus on heat-resistant materials, improved cooling systems, and sensors capable of operating in high-temperature environments. These improvements will enable UAVs to quickly assess damage, identify areas needing immediate assistance, and provide valuable data for emergency responders.
  • Training for the Rescue Team/Disaster Management Team
Even with advancements in technology, the goals of disaster reconnaissance cannot be fully realized if the personnel or rescue teams lack the necessary skills to utilize the technology effectively. This gap in skills can lead to delays in rescue operations, potentially endangering lives and worsening the impact of the disaster. Identifying critical structures, such as damaged buildings during emergency situations, remains a challenge without proper training. Virtual reality (VR) offers a promising solution for equipping rescue teams with the skills needed for such tasks. It has been widely utilized by researchers to explore and enhance the effectiveness of training for disaster preparedness [143], hazard identification [144], evacuation drills [145], and much more. VR simulation can dynamically present the progression of a situation, aiding participants in comprehending their decision-making process [146,147]. By analyzing historical disaster data and integrating synthetic data, VR can be used to create immersive training scenarios that enhance the knowledge and skills required for assessing critical building structures in disaster reconnaissance. This technology can enable trainees to experience various disaster scenarios in a controlled, repeatable environment, significantly enhancing their ability to react and adapt in real-life situations [148]. The use of VR has shown tremendous potential in improving knowledge retention and practical skills through experiential learning. For example, simulations can replicate the chaos and complexity of natural disasters like earthquakes, floods, hurricanes, and wildfires, allowing team members to practice critical decision-making, communication, and coordination without the risk of real-world consequences. This hands-on approach enables participants to develop muscle memory and refine their strategies, leading to more effective and timely responses during actual disasters [149]. 
VR training can lead to better preparedness, increased confidence, and enhanced situational awareness among participants, as they can repeatedly practice and refine their responses to a variety of disaster scenarios.
Moreover, VR training facilitates the evaluation of individual and team performance, providing valuable feedback that can be used to identify areas for improvement and further customize training programs. Additionally, VR training is cost-effective compared to large-scale real-life drills, making it accessible for widespread use across various organizations and regions, including those with limited resources [150]. By providing a safe yet realistic platform for disaster response training, VR helps build more resilient and effective disaster management teams. It allows for the simulation of rare or complex disaster scenarios that might be difficult to practice in real life, ensuring comprehensive preparedness. Furthermore, VR can simulate the emotional and psychological stress experienced during disasters, helping trainees develop coping mechanisms and resilience. This holistic approach ensures that disaster management teams are not only technically proficient but also mentally prepared to handle high-pressure situations. Ultimately, VR contributes to better disaster preparedness and response outcomes, reducing the potential for loss of life and property, and enhancing the overall safety and resilience of communities.

6. Conclusions

In conclusion, this comprehensive bibliometric review of post-disaster building damage assessment and reconnaissance methods highlights the significant advancements and challenges within this critical field. The findings on global collaboration and scholarly impact in the field of disaster reconnaissance highlight several key benefits. Firstly, the comprehensive analysis of the global citation network and the geographical distribution of publications underscores the interconnectedness of research efforts worldwide. This interconnectedness facilitates the sharing of knowledge, technologies, and methodologies, thereby accelerating advancements in disaster reconnaissance. Moreover, the collaboration across different regions allows for a diverse range of perspectives and expertise, which enhances the robustness and applicability of research findings. Disasters are a global phenomenon, affecting millions of people each year and resulting in significant economic losses worldwide. Understanding the geographical distribution of publications helps researchers and policymakers identify regions with significant contributions and those that may require more attention and support. This awareness can guide targeted efforts to strengthen research capabilities and disaster response strategies globally. Additionally, such analyses help in understanding the different approaches taken around the world to combat disasters and improve preparedness: the global citation network helps identify influential papers, while the distribution of publications reveals the study areas and affected regions. Ultimately, the enhanced global collaboration fosters a more resilient and prepared international community, better equipped to mitigate the impacts of natural disasters and improve recovery efforts.
The increasing frequency and severity of natural disasters necessitate robust and efficient methodologies for assessing building damage, which is pivotal for effective disaster management and mitigation strategies. This review underscores the importance of leveraging advanced technologies such as satellite imagery and UAVs in conjunction with machine learning and deep learning techniques. These technologies have revolutionized the way researchers collect and analyze data, providing high-resolution, accurate, and timely information crucial for disaster response. Optical satellite imagery, despite its limitations under adverse weather conditions, remains a widely used tool due to its extensive coverage and frequent revisit times. Synthetic aperture radar (SAR), with its all-weather and night-time operational capabilities, offers a reliable alternative, especially in detecting structural deformations. LiDAR, known for its precise 3D mapping capabilities, proves invaluable for detailed structural analysis and damage assessment. The integration of these diverse data sources presents significant challenges, particularly in terms of data fusion and processing. Developing sophisticated algorithms that can effectively merge data from optical, infrared, LiDAR, radar, and ground-based observations is essential for creating comprehensive damage assessment models. Moreover, this review highlights the need for real-time data processing capabilities to provide immediate insights for emergency responders, thereby enhancing the effectiveness of disaster response efforts.
One of the standout insights from this review is the evolving role of machine learning and deep learning technologies in enhancing the accuracy and efficiency of building damage assessments. Innovative applications of convolutional neural networks (CNNs) and transfer learning have demonstrated significant potential in processing large datasets and rapidly adapting to unfamiliar disaster scenarios. These advancements facilitate more precise damage evaluations in real-time, which are critical for effective response and recovery operations. They also play a crucial role in long-term urban planning and resilience building, offering tools that can predict potential damage and optimize urban layouts to mitigate future disaster impacts. Future research directions should focus on overcoming the challenges identified through this review and exploring groundbreaking solutions. There is a particular need to enhance UAV capabilities, such as extending flight durations and increasing payload capacities, which would revolutionize data collection, especially in areas that are difficult to access following a disaster. Moreover, the development of user-friendly software tools and platforms for data fusion and real-time processing is essential. These tools would democratize the use of advanced technologies, making them accessible and practical for a broader range of stakeholders, including local governments, emergency responders, and community planners.
In conclusion, although significant strides have been made in the field of post-disaster building damage assessment, there remains a wealth of opportunities for further research and technological innovation. Addressing the highlighted challenges and leveraging the potential of emerging technologies will enable the development of more effective and efficient disaster management practices. Such progress is vital for enhancing the resilience and safety of communities worldwide, equipping them with the necessary tools and knowledge to better predict, respond to, and recover from disastrous events.

Author Contributions

Conceptualization, D.H.; methodology, D.H.; software, S.A.S.; validation, D.H.; formal analysis, S.A.S.; data curation, S.A.S.; writing—original draft preparation, S.A.S.; writing—review and editing, D.H.; visualization, S.A.S.; supervision, D.H.; project administration, D.H.; funding acquisition, D.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation, grant number 2346936.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors gratefully acknowledge the support from NSF. Any opinions, findings, recommendations, and conclusions in this paper are those of the authors, and do not necessarily reflect the views of NSF and Kennesaw State University.

Conflicts of Interest

The authors declare no conflicts of interest.

  13. Van Eck, N.J.; Waltman, L. Software Survey: VOSviewer, a Computer Program for Bibliometric Mapping. Scientometrics 2010, 84, 523–538. [Google Scholar] [CrossRef]
  14. Su, H.-N.; Lee, P.-C. Mapping Knowledge Structure by Keyword Co-Occurrence: A First Look at Journal Papers in Technology Foresight. Scientometrics 2010, 85, 65–79. [Google Scholar] [CrossRef]
  15. Lee, P.-C.; Su, H.-N. Investigating the Structure of Regional Innovation System Research through Keyword Co-Occurrence and Social Network Analysis. Innovation 2010, 12, 26–40. [Google Scholar] [CrossRef]
  16. Bornmann, L.; Daniel, H. What Do Citation Counts Measure? A Review of Studies on Citing Behavior. J. Doc. 2008, 64, 45–80. [Google Scholar] [CrossRef]
  17. Vetrivel, A.; Gerke, M.; Kerle, N.; Nex, F.; Vosselman, G. Disaster Damage Detection through Synergistic Use of Deep Learning and 3D Point Cloud Features Derived from Very High Resolution Oblique Aerial Images, and Multiple-Kernel-Learning. ISPRS J. Photogramm. Remote Sens. 2018, 140, 45–59. [Google Scholar] [CrossRef]
  18. Fernandez Galarreta, J.; Kerle, N.; Gerke, M. UAV-Based Urban Structural Damage Assessment Using Object-Based Image Analysis and Semantic Reasoning. Nat. Hazards Earth Syst. Sci. 2015, 15, 1087–1101. [Google Scholar] [CrossRef]
  19. Cooner, A.; Shao, Y.; Campbell, J. Detection of Urban Damage Using Remote Sensing and Machine Learning Algorithms: Revisiting the 2010 Haiti Earthquake. Remote Sens. 2016, 8, 868. [Google Scholar] [CrossRef]
  20. Nex, F.; Duarte, D.; Tonolo, F.G.; Kerle, N. Structural Building Damage Detection with Deep Learning: Assessment of a State-of-the-Art CNN in Operational Conditions. Remote Sens. 2019, 11, 2765. [Google Scholar] [CrossRef]
  21. Ji, M.; Liu, L.; Buchroithner, M. Identifying Collapsed Buildings Using Post-Earthquake Satellite Imagery and Convolutional Neural Networks: A Case Study of the 2010 Haiti Earthquake. Remote Sens. 2018, 10, 1689. [Google Scholar] [CrossRef]
  22. Janalipour, M.; Mohammadzadeh, A. Building Damage Detection Using Object-Based Image Analysis and ANFIS From High-Resolution Image (Case Study: BAM Earthquake, Iran). IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1937–1945. [Google Scholar] [CrossRef]
  23. Gong, L.; Wang, C.; Wu, F.; Zhang, J.; Zhang, H.; Li, Q. Earthquake-Induced Building Damage Detection with Post-Event Sub-Meter VHR TerraSAR-X Staring Spotlight Imagery. Remote Sens. 2016, 8, 887. [Google Scholar] [CrossRef]
  24. Pan, X.; Yang, T.Y. Postdisaster Image-based Damage Detection and Repair Cost Estimation of Reinforced Concrete Buildings Using Dual Convolutional Neural Networks. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 495–510. [Google Scholar] [CrossRef]
  25. Ji, M.; Liu, L.; Du, R.; Buchroithner, M.F. A Comparative Study of Texture and Convolutional Neural Network Features for Detecting Collapsed Buildings After Earthquakes Using Pre- and Post-Event Satellite Imagery. Remote Sens. 2019, 11, 1202. [Google Scholar] [CrossRef]
  26. Moya, L.; Zakeri, H.; Yamazaki, F.; Liu, W.; Mas, E.; Koshimura, S. 3D Gray Level Co-Occurrence Matrix and Its Application to Identifying Collapsed Buildings. ISPRS J. Photogramm. Remote Sens. 2019, 149, 14–28. [Google Scholar] [CrossRef]
  27. Adriano, B.; Yokoya, N.; Xia, J.; Miura, H.; Liu, W.; Matsuoka, M.; Koshimura, S. Learning from Multimodal and Multitemporal Earth Observation Data for Building Damage Mapping. ISPRS J. Photogramm. Remote Sens. 2021, 175, 132–143. [Google Scholar] [CrossRef]
  28. Anniballe, R.; Noto, F.; Scalia, T.; Bignami, C.; Stramondo, S.; Chini, M.; Pierdicca, N. Earthquake Damage Mapping: An Overall Assessment of Ground Surveys and VHR Image Change Detection after L’Aquila 2009 Earthquake. Remote Sens. Environ. 2018, 210, 166–178. [Google Scholar] [CrossRef]
  29. Shaodan, L.; Hong, T.; Shi, H.; Yang, S.; Ting, M.; Jing, L.; Zhihua, X. Unsupervised Detection of Earthquake-Triggered Roof-Holes From UAV Images Using Joint Color and Shape Features. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1823–1827. [Google Scholar] [CrossRef]
  30. Shen, Y.; Zhu, S.; Yang, T.; Chen, C.; Pan, D.; Chen, J.; Xiao, L.; Du, Q. BDANet: Multiscale Convolutional Neural Network with Cross-Directional Attention for Building Damage Assessment From Satellite Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5402114. [Google Scholar] [CrossRef]
  31. Moya, L.; Yamazaki, F.; Liu, W.; Yamada, M. Detection of Collapsed Buildings from Lidar Data Due to the 2016 Kumamoto Earthquake in Japan. Nat. Hazards Earth Syst. Sci. 2018, 18, 65–78. [Google Scholar] [CrossRef]
  32. Sun, W.; Shi, L.; Yang, J.; Li, P. Building Collapse Assessment in Urban Areas Using Texture Information From Postevent SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3792–3808. [Google Scholar] [CrossRef]
  33. Song, D.; Tan, X.; Wang, B.; Zhang, L.; Shan, X.; Cui, J. Integration of Super-Pixel Segmentation and Deep-Learning Methods for Evaluating Earthquake-Damaged Buildings Using Single-Phase Remote Sensing Imagery. Int. J. Remote Sens. 2020, 41, 1040–1066. [Google Scholar] [CrossRef]
  34. Qing, Y.; Ming, D.; Wen, Q.; Weng, Q.; Xu, L.; Chen, Y.; Zhang, Y.; Zeng, B. Operational Earthquake-Induced Building Damage Assessment Using CNN-Based Direct Remote Sensing Change Detection on Superpixel Level. Int. J. Appl. EARTH Obs. Geoinf. 2022, 112, 102899. [Google Scholar] [CrossRef]
  35. Sharma, R.; Tateishi, R.; Hara, K.; Nguyen, H.; Gharechelou, S.; Nguyen, L. Earthquake Damage Visualization (EDV) Technique for the Rapid Detection of Earthquake-Induced Damages Using SAR Data. Sensors 2017, 17, 235. [Google Scholar] [CrossRef] [PubMed]
  36. Tu, J.; Li, D.; Feng, W.; Han, Q.; Sui, H. Detecting Damaged Building Regions Based on Semantic Scene Change from Multi-Temporal High-Resolution Remote Sensing Images. ISPRS Int. J. Geo-Inf. 2017, 6, 131. [Google Scholar] [CrossRef]
  37. Li, Y.; Ye, S.; Bartoli, I. Semisupervised Classification of Hurricane Damage from Postevent Aerial Imagery Using Deep Learning. J. Appl. Remote Sens. 2018, 12, 045008. [Google Scholar] [CrossRef]
  38. Qi, J.; Song, D.; Shang, H.; Wang, N.; Hua, C.; Wu, C.; Qi, X.; Han, J. Search and Rescue Rotary-Wing UAV and Its Application to the Lushan Ms 7.0 Earthquake. J. Field Robot. 2016, 33, 290–321. [Google Scholar] [CrossRef]
  39. Moya, L.; Marval Perez, L.; Mas, E.; Adriano, B.; Koshimura, S.; Yamazaki, F. Novel Unsupervised Classification of Collapsed Buildings Using Satellite Imagery, Hazard Scenarios and Fragility Functions. Remote Sens. 2018, 10, 296. [Google Scholar] [CrossRef]
  40. Zheng, Z.; Zhong, Y.; Wang, J.; Ma, A.; Zhang, L. Building Damage Assessment for Rapid Disaster Response with a Deep Object-Based Semantic Change Detection Framework: From Natural Disasters to Man-Made Disasters. Remote Sens. Environ. 2021, 265, 112636. [Google Scholar] [CrossRef]
  41. Brando, G.; Rapone, D.; Spacone, E.; O’Banion, M.S.; Olsen, M.J.; Barbosa, A.R.; Faggella, M.; Gigliotti, R.; Liberatore, D.; Russo, S.; et al. Damage Reconnaissance of Unreinforced Masonry Bearing Wall Buildings after the 2015 Gorkha, Nepal, Earthquake. Earthq. Spectra 2017, 33, 243–273. [Google Scholar] [CrossRef]
  42. Zhai, W.; Shen, H.; Huang, C.; Pei, W. Building Earthquake Damage Information Extraction from a Single Post-Earthquake PolSAR Image. Remote Sens. 2016, 8, 171. [Google Scholar] [CrossRef]
  43. Levine, N.; Spencer, B. Post-Earthquake Building Evaluation Using UAVs: A BIM-Based Digital Twin Framework. Sensors 2022, 22, 873. [Google Scholar] [CrossRef] [PubMed]
  44. Pham, T.-T.-H.; Apparicio, P.; Gomez, C.; Weber, C.; Mathon, D. Towards a Rapid Automatic Detection of Building Damage Using Remote Sensing for Disaster Management: The 2010 Haiti Earthquake. Disaster Prev. Manag. Int. J. 2014, 23, 53–66. [Google Scholar] [CrossRef]
  45. Xie, S.; Duan, J.; Liu, S.; Dai, Q.; Liu, W.; Ma, Y.; Guo, R.; Ma, C. Crowdsourcing Rapid Assessment of Collapsed Buildings Early after the Earthquake Based on Aerial Remote Sensing Image: A Case Study of Yushu Earthquake. Remote Sens. 2016, 8, 759. [Google Scholar] [CrossRef]
  46. Wang, Y.; Cui, L.; Zhang, C.; Chen, W.; Xu, Y.; Zhang, Q. A Two-Stage Seismic Damage Assessment Method for Small, Dense, and Imbalanced Buildings in Remote Sensing Images. Remote Sens. 2022, 14, 1012. [Google Scholar] [CrossRef]
  47. Cui, L.; Jing, X.; Wang, Y.; Huan, Y.; Xu, Y.; Zhang, Q. Improved Swin Transformer-Based Semantic Segmentation of Postearthquake Dense Buildings in Urban Areas Using Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 369–385. [Google Scholar] [CrossRef]
  48. Thomas, J.; Kareem, A.; Bowyer, K.W. Automated Poststorm Damage Classification of Low-Rise Building Roofing Systems Using High-Resolution Aerial Imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3851–3861. [Google Scholar] [CrossRef]
  49. Wang, X.Q.; Dou, A.X.; Wang, L.; Yuan, X.X.; Ding, X.; Zhang, W. RS-based assessment of seismic intensity of the 2013 Lushan, Sichuan, China MS7.0 earthquake. Chin. J. Geophys. 2015, 58, 163–171. (In Chinese) [Google Scholar] [CrossRef]
  50. Kaur, S.; Gupta, S.; Singh, S.; Koundal, D.; Zaguia, A. Convolutional Neural Network Based Hurricane Damage Detection Using Satellite Images. Soft Comput. 2022, 26, 7831–7845. [Google Scholar] [CrossRef]
  51. Liu, C.; Sui, H.; Wang, J.; Ni, Z.; Ge, L. Real-Time Ground-Level Building Damage Detection Based on Lightweight and Accurate YOLOv5 Using Terrestrial Images. Remote Sens. 2022, 14, 2763. [Google Scholar] [CrossRef]
  52. Chen, Q.; Yang, H.; Li, L.; Liu, X. A Novel Statistical Texture Feature for SAR Building Damage Assessment in Different Polarization Modes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 154–165. [Google Scholar] [CrossRef]
  53. ESA Sentinel-2 User Handbook. Available online: https://sentinel.esa.int/documents/247904/685211/Sentinel-2_User_Handbook (accessed on 28 June 2024).
  54. Tamkuan, N.; Nagai, M. Fusion of Multi-Temporal Interferometric Coherence and Optical Image Data for the 2016 Kumamoto Earthquake Damage Assessment. ISPRS Int. J. Geo-Inf. 2017, 6, 188. [Google Scholar] [CrossRef]
  55. Zhu, X.X.; Wang, Y.; Montazeri, S.; Ge, N. A Review of Ten-Year Advances of Multi-Baseline SAR Interferometry Using TerraSAR-X Data. Remote Sens. 2018, 10, 1374. [Google Scholar] [CrossRef]
  56. Endo, Y.; Adriano, B.; Mas, E.; Koshimura, S. New Insights into Multiclass Damage Classification of Tsunami-Induced Building Damage from SAR Images. Remote Sens. 2018, 10, 2059. [Google Scholar] [CrossRef]
  57. Zink, M.; Moreira, A.; Hajnsek, I.; Rizzoli, P.; Bachmann, M.; Kahle, R.; Fritz, T.; Huber, M.; Krieger, G.; Lachaise, M.; et al. TanDEM-X: 10 Years of Formation Flying Bistatic SAR Interferometry. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3546–3565. [Google Scholar] [CrossRef]
  58. Eineder, M.; Minet, C.; Steigenberger, P.; Cong, X.; Fritz, T. Imaging Geodesy—Toward Centimeter-Level Ranging Accuracy With TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 2011, 49, 661–671. [Google Scholar] [CrossRef]
  59. Rao, P.; Zhou, W.; Bhattarai, N.; Srivastava, A.K.; Singh, B.; Poonia, S.; Lobell, D.B.; Jain, M. Using Sentinel-1, Sentinel-2, and Planet Imagery to Map Crop Type of Smallholder Farms. Remote Sens. 2021, 13, 1870. [Google Scholar] [CrossRef]
  60. Tiampo, K.F.; Huang, L.; Simmons, C.; Woods, C.; Glasscoe, M.T. Detection of Flood Extent Using Sentinel-1A/B Synthetic Aperture Radar: An Application for Hurricane Harvey, Houston, TX. Remote Sens. 2022, 14, 2261. [Google Scholar] [CrossRef]
  61. Nur, A.S.; Lee, C.-W. Damage Proxy Map (DPM) of the 2016 Gyeongju and 2017 Pohang Earthquakes Using Sentinel-1 Imagery. Korean J. Remote Sens. 2021, 37, 13–22. [Google Scholar] [CrossRef]
  62. Liu, H.; Song, C.; Li, Z.; Liu, Z.; Ta, L.; Zhang, X.; Chen, B.; Han, B.; Peng, J. A New Method for the Identification of Earthquake-Damaged Buildings Using Sentinel-1 Multitemporal Coherence Optimized by Homogeneous SAR Pixels and Histogram Matching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 7124–7143. [Google Scholar] [CrossRef]
  63. Sandhini Putri, A.F.; Widyatmanti, W.; Umarhadi, D.A. Sentinel-1 and Sentinel-2 Data Fusion to Distinguish Building Damage Level of the 2018 Lombok Earthquake. Remote Sens. Appl. Soc. Environ. 2022, 26, 100724. [Google Scholar] [CrossRef]
  64. Li, X.; Guo, H.; Zhang, L.; Chen, X.; Liang, L. A New Approach to Collapsed Building Extraction Using RADARSAT-2 Polarimetric SAR Imagery. IEEE Geosci. Remote Sens. Lett. 2012, 9, 677–681. [Google Scholar] [CrossRef]
  65. Watanabe, M.; Thapa, R.B.; Ohsumi, T.; Fujiwara, H.; Yonezawa, C.; Tomii, N.; Suzuki, S. Detection of Damaged Urban Areas Using Interferometric SAR Coherence Change with PALSAR-2. Earth Planets Space 2016, 68, 131. [Google Scholar] [CrossRef]
  66. Noda, A.; Suzuki, S.; Shimada, M.; Toda, K.; Miyagi, Y. COSMO-SkyMed and ALOS-1/2 X and L Band Multi-Frequency Results in Satellite Disaster Monitoring. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 211–214. [Google Scholar]
  67. Zhao, L.; Zhang, Q.; Li, Y.; Qi, Y.; Yuan, X.; Liu, J.; Li, H. China’s Gaofen-3 Satellite System and Its Application and Prospect. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11019–11028. [Google Scholar] [CrossRef]
  68. Bai, Y.; Gao, C.; Singh, S.; Koch, M.; Adriano, B.; Mas, E.; Koshimura, S. A Framework of Rapid Regional Tsunami Damage Recognition From Post-Event TerraSAR-X Imagery Using Deep Neural Networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 43–47. [Google Scholar] [CrossRef]
  69. Dong, L.; Shan, J. A Comprehensive Review of Earthquake-Induced Building Damage Detection with Remote Sensing Techniques. ISPRS J. Photogramm. Remote Sens. 2013, 84, 85–99. [Google Scholar] [CrossRef]
  70. Zhou, Z.; Gong, J.; Hu, X. Community-Scale Multi-Level Post-Hurricane Damage Assessment of Residential Buildings Using Multi-Temporal Airborne LiDAR Data. Autom. Constr. 2019, 98, 30–45. [Google Scholar] [CrossRef]
  71. He, M.; Zhu, Q.; Du, Z.; Hu, H.; Ding, Y.; Chen, M. A 3D Shape Descriptor Based on Contour Clusters for Damaged Roof Detection Using Airborne LiDAR Point Clouds. Remote Sens. 2016, 8, 189. [Google Scholar] [CrossRef]
  72. Axel, C.; Van Aardt, J. Building Damage Assessment Using Airborne Lidar. J. Appl. Remote Sens. 2017, 11, 046024. [Google Scholar] [CrossRef]
  73. Foroughnia, F.; Macchiarulo, V.; Berg, L.; DeJong, M.; Milillo, P.; Hudnut, K.W.; Gavin, K.; Giardina, G. Quantitative Assessment of Earthquake-Induced Building Damage at Regional Scale Using LiDAR Data. Int. J. Disaster Risk Reduct. 2024, 106, 104403. [Google Scholar] [CrossRef]
  74. Hauptman, L.; Mitsova, D.; Briggs, T.R. Hurricane Ian Damage Assessment Using Aerial Imagery and LiDAR: A Case Study of Estero Island, Florida. J. Mar. Sci. Eng. 2024, 12, 668. [Google Scholar] [CrossRef]
  75. Hensley, S.; Jones, C.; Lou, Y. Prospects for Operational Use of Airborne Polarimetric SAR for Disaster Response and Management. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; IEEE: Munich, Germany, 2012; pp. 103–106. [Google Scholar]
  76. Hildmann, H.; Kovacs, E. Review: Using Unmanned Aerial Vehicles (UAVs) as Mobile Sensing Platforms (MSPs) for Disaster Response, Civil Security and Public Safety. Drones 2019, 3, 59. [Google Scholar] [CrossRef]
  77. Dong, J.; Ota, K.; Dong, M. UAV-Based Real-Time Survivor Detection System in Post-Disaster Search and Rescue Operations. IEEE J. Miniaturization Air Space Syst. 2021, 2, 209–219. [Google Scholar] [CrossRef]
  78. Wang, Y.; Jing, X.; Cui, L.; Zhang, C.; Xu, Y.; Yuan, J.; Zhang, Q. Geometric Consistency Enhanced Deep Convolutional Encoder-Decoder for Urban Seismic Damage Assessment by UAV Images. Eng. Struct. 2023, 286, 116132. [Google Scholar] [CrossRef]
  79. Berke, P.R.; Kartez, J.; Wenger, D. Recovery after Disaster: Achieving Sustainable Development, Mitigation and Equity. Disasters 1993, 17, 93–109. [Google Scholar] [CrossRef] [PubMed]
  80. Meyer, M.A.; Hendricks, M.D. Using Photography to Assess Housing Damage and Rebuilding Progress for Disaster Recovery Planning. J. Am. Plann. Assoc. 2018, 84, 127–144. [Google Scholar] [CrossRef]
  81. Kashani, A.; Graettinger, A. Cluster-Based Roof Covering Damage Detection in Ground-Based Lidar Data. Autom. Constr. 2015, 58, 19–27. [Google Scholar] [CrossRef]
  82. Yang, F.; Wen, X.; Wang, X.; Li, X.; Li, Z. A Model Study of Building Seismic Damage Information Extraction and Analysis on Ground-Based LiDAR Data. Adv. Civ. Eng. 2021, 2021, 5542012. [Google Scholar] [CrossRef]
  83. Jiang, H.; Li, Q.; Jiao, Q.; Wang, X.; Wu, L. Extraction of Wall Cracks on Earthquake-Damaged Buildings Based on TLS Point Clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3088–3096. [Google Scholar] [CrossRef]
  84. Bodoque, J.; Guardiola-Albert, C.; Aroca-Jiménez, E.; Eguibar, M.; Martínez-Chenoll, M. Flood Damage Analysis: First Floor Elevation Uncertainty Resulting from LiDAR-Derived Digital Surface Models. Remote Sens. 2016, 8, 604. [Google Scholar] [CrossRef]
  85. Huang, M.-S.; Gül, M.; Zhu, H.-P. Vibration-Based Structural Damage Identification under Varying Temperature Effects. J. Aerosp. Eng. 2018, 31, 04018014. [Google Scholar] [CrossRef]
  86. Chiabrando, F.; Giulio Tonolo, F.; Lingua, A. Uav Direct Georeferencing Approach in An Emergency Mapping Context. The 2016 Central Italy Earthquake Case Study. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 247–253. [Google Scholar] [CrossRef]
  87. Umemura, R.; Samura, T.; Tadamura, K. An Efficient Orthorectification of a Satellite SAR Image Used for Monitoring Occurrence of Disaster. In Proceedings of the 2018 International Workshop on Advanced Image Technology (IWAIT), Chiang Mai, Thailand, 7–9 January 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4. [Google Scholar]
  88. Yahyanejad, S.; Wischounig-Strucl, D.; Quaritsch, M.; Rinner, B. Incremental Mosaicking of Images from Autonomous, Small-Scale UAVs. In Proceedings of the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, USA, 29 August–1 September 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 329–336. [Google Scholar]
  89. Joshi, A.R.; Tarte, I.; Suresh, S.; Koolagudi, S.G. Damage Identification and Assessment Using Image Processing on Post-Disaster Satellite Imagery. In Proceedings of the 2017 IEEE Global Humanitarian Technology Conference (GHTC), San Jose, CA, USA, 19–22 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–7. [Google Scholar]
  90. Yeum, C.M.; Dyke, S.J.; Ramirez, J. Visual Data Classification in Post-Event Building Reconnaissance. Eng. Struct. 2018, 155, 16–24. [Google Scholar] [CrossRef]
  91. Coulter, L.L.; Stow, D.A.; Lippitt, C.D.; Fraley, G.W. Repeat Station Imaging for Rapid Airborne Change Detection. In Time-Sensitive Remote Sensing; Lippitt, C.D., Stow, D.A., Coulter, L.L., Eds.; Springer New York: Piscataway, NJ, USA, 2015; pp. 29–43. ISBN 978-1-4939-2601-5. [Google Scholar]
  92. Hong, Z.; Zhong, H.; Pan, H.; Liu, J.; Zhou, R.; Zhang, Y.; Han, Y.; Wang, J.; Yang, S.; Zhong, C. Classification of Building Damage Using a Novel Convolutional Neural Network Based on Post-Disaster Aerial Images. Sensors 2022, 22, 5920. [Google Scholar] [CrossRef] [PubMed]
  93. Ghosh Mondal, T.; Jahanshahi, M.R.; Wu, R.; Wu, Z.Y. Deep Learning-based Multi-class Damage Detection for Autonomous Post-disaster Reconnaissance. Struct. Control Health Monit. 2020, 27, e2507. [Google Scholar] [CrossRef]
  94. Al Shafian, S.; Hu, D.; Yu, W. Deep Learning Enhanced Crack Detection for Tunnel Inspection. In Proceedings of the International Conference on Transportation and Development 2024, Atlanta, GA, USA, 13 June 2024; American Society of Civil Engineers: Reston, VA, USA, 2024; pp. 732–741. [Google Scholar]
  95. Kazemi, F.; Asgarkhani, N.; Jankowski, R. Machine Learning-Based Seismic Fragility and Seismic Vulnerability Assessment of Reinforced Concrete Structures. Soil Dyn. Earthq. Eng. 2023, 166, 107761. [Google Scholar] [CrossRef]
  96. Kazemi, F.; Asgarkhani, N.; Jankowski, R. Machine Learning-Based Seismic Response and Performance Assessment of Reinforced Concrete Buildings. Arch. Civ. Mech. Eng. 2023, 23, 94. [Google Scholar] [CrossRef]
  97. Bai, Y.; Sezen, H.; Yilmaz, A. Detecting Cracks and Spalling Automatically in Extreme Events by End-To-End Deep Learning Frameworks. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 2, 161–168. [Google Scholar] [CrossRef]
  98. Zhang, J.; Huang, M.; Wan, N.; Deng, Z.; He, Z.; Luo, J. Missing Measurement Data Recovery Methods in Structural Health Monitoring: The State, Challenges and Case Study. Measurement 2024, 231, 114528. [Google Scholar] [CrossRef]
  99. Huang, M.; Zhang, J.; Hu, J.; Ye, Z.; Deng, Z.; Wan, N. Nonlinear Modeling of Temperature-Induced Bearing Displacement of Long-Span Single-Pier Rigid Frame Bridge Based on DCNN-LSTM. Case Stud. Therm. Eng. 2024, 53, 103897. [Google Scholar] [CrossRef]
  100. Hu, D.; Li, S.; Du, J.; Cai, J. Automating Building Damage Reconnaissance to Optimize Drone Mission Planning for Disaster Response. J. Comput. Civ. Eng. 2023, 37, 04023006. [Google Scholar] [CrossRef]
  101. Wang, Y.; Feng, W.; Jiang, K.; Li, Q.; Lv, R.; Tu, J. Real-Time Damaged Building Region Detection Based on Improved YOLOv5s and Embedded System From UAV Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 4205–4217. [Google Scholar] [CrossRef]
  102. Ding, J.; Zhang, J.; Zhan, Z.; Tang, X.; Wang, X. A Precision Efficient Method for Collapsed Building Detection in Post-Earthquake UAV Images Based on the Improved NMS Algorithm and Faster R-CNN. Remote Sens. 2022, 14, 663. [Google Scholar] [CrossRef]
  103. Wen, H.; Hu, J.; Xiong, F.; Zhang, C.; Song, C.; Zhou, X. A Random Forest Model for Seismic-Damage Buildings Identification Based on UAV Images Coupled with RFE and Object-Oriented Methods. Nat. Hazards 2023, 119, 1751–1769. [Google Scholar] [CrossRef]
  104. Takhtkeshha, N.; Mohammadzadeh, A.; Salehi, B. A Rapid Self-Supervised Deep-Learning-Based Method for Post-Earthquake Damage Detection Using UAV Data (Case Study: Sarpol-e Zahab, Iran). Remote Sens. 2022, 15, 123. [Google Scholar] [CrossRef]
  105. Hristidis, V.; Chen, S.-C.; Li, T.; Luis, S.; Deng, Y. Survey of Data Management and Analysis in Disaster Situations. J. Syst. Softw. 2010, 83, 1701–1714. [Google Scholar] [CrossRef]
  106. Shah, S.A.; Seker, D.Z.; Hameed, S.; Draheim, D. The Rising Role of Big Data Analytics and IoT in Disaster Management: Recent Advances, Taxonomy and Prospects. IEEE Access 2019, 7, 54595–54614. [Google Scholar] [CrossRef]
  107. Mohammadi, M.E.; Watson, D.P.; Wood, R.L. Deep Learning-Based Damage Detection from Aerial SfM Point Clouds. Drones 2019, 3, 68. [Google Scholar] [CrossRef]
  108. Lohani, B.; Ghosh, S. Airborne LiDAR Technology: A Review of Data Collection and Processing Systems. Proc. Natl. Acad. Sci. India Sect. Phys. Sci. 2017, 87, 567–579. [Google Scholar] [CrossRef]
  109. Sultan, V.; Sarksyan, T.; Yadav, S. Wildfires Damage Assessment Via LiDAR. Int. J. Adv. Appl. Sci. 2022, 9, 34–43. [Google Scholar] [CrossRef]
  110. Yoo, H.T.; Lee, H.; Chi, S.; Hwang, B.-G.; Kim, J. A Preliminary Study on Disaster Waste Detection and Volume Estimation Based on 3D Spatial Information. In Proceedings of the Computing in Civil Engineering 2017, Seattle, WA, USA, 22 June 2017; American Society of Civil Engineers: Reston, VA, USA, 2017; pp. 428–435. [Google Scholar]
  111. Yu, D.; He, Z. Digital Twin-Driven Intelligence Disaster Prevention and Mitigation for Infrastructure: Advances, Challenges, and Opportunities. Nat. Hazards 2022, 112, 1–36. [Google Scholar] [CrossRef]
  112. Yu, R.; Li, P.; Shan, J.; Zhang, Y.; Dong, Y. Multi-Feature Driven Rapid Inspection of Earthquake-Induced Damage on Building Facades Using UAV-Derived Point Cloud. Meas. J. Int. Meas. Confed. 2024, 232, 114679. [Google Scholar] [CrossRef]
  113. Ji, H.; Luo, X. 3D Scene Reconstruction of Landslide Topography Based on Data Fusion between Laser Point Cloud and UAV Image. Environ. Earth Sci. 2019, 78, 534. [Google Scholar] [CrossRef]
  114. Han, X.-F.; Jin, J.S.; Wang, M.-J.; Jiang, W.; Gao, L.; Xiao, L. A Review of Algorithms for Filtering the 3D Point Cloud. Signal Process. Image Commun. 2017, 57, 103–112. [Google Scholar] [CrossRef]
  115. Frulla, L.A.; Milovich, J.A.; Karszenbaum, H.; Gagliardini, D.A. Radiometric Corrections and Calibration of SAR Images. In Proceedings of the IGARSS ’98. Sensing and Managing the Environment. 1998 IEEE International Geoscience and Remote Sensing, Seattle, WA, USA, 6–10 July 1998; IEEE: Piscataway, NJ, USA, 1998; pp. 1147–1149. [Google Scholar]
  116. El-Darymli, K.; McGuire, P.; Gill, E.; Power, D.; Moloney, C. Understanding the Significance of Radiometric Calibration for Synthetic Aperture Radar Imagery. In Proceedings of the 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), Toronto, ON, Canada, 4–7 May 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–6. [Google Scholar]
  117. Garg, R.; Kumar, A.; Prateek, M.; Pandey, K.; Kumar, S. Land Cover Classification of Spaceborne Multifrequency SAR and Optical Multispectral Data Using Machine Learning. Adv. Space Res. 2022, 69, 1726–1742. [Google Scholar] [CrossRef]
  118. Choo, A.L.; Chan, Y.K.; Koo, V.C.; Lim, T.S. Study on Geometric Correction Algorithms for SAR Images. Int. J. Microw. Opt. Technol. 2014, 9, 68–72. [Google Scholar]
  119. Choi, H.; Jeong, J. Speckle Noise Reduction Technique for SAR Images Using Statistical Characteristics of Speckle Noise and Discrete Wavelet Transform. Remote Sens. 2019, 11, 1184. [Google Scholar] [CrossRef]
  120. Liu, S.; Wu, G.; Zhang, X.; Zhang, K.; Wang, P.; Li, Y. SAR Despeckling via Classification-Based Nonlocal and Local Sparse Representation. Neurocomputing 2017, 219, 174–185. [Google Scholar] [CrossRef]
  121. Ma, Q. Improving SAR Target Recognition Performance Using Multiple Preprocessing Techniques. Comput. Intell. Neurosci. 2021, 2021, 6572362. [Google Scholar] [CrossRef] [PubMed]
  122. Ge, P.; Gokon, H.; Meguro, K. A Review on Synthetic Aperture Radar-Based Building Damage Assessment in Disasters. Remote Sens. Environ. 2020, 240, 111693. [Google Scholar] [CrossRef]
  123. Kim, M.; Park, S.-E.; Lee, S.-J. Detection of Damaged Buildings Using Temporal SAR Data with Different Observation Modes. Remote Sens. 2023, 15, 308. [Google Scholar] [CrossRef]
  124. Akhmadiya, A.; Nabiyev, N.; Moldamurat, K.; Dyussekeyev, K.; Atanov, S. Use of Sentinel-1 Dual Polarization Multi-Temporal Data with Gray Level Co-Occurrence Matrix Textural Parameters for Building Damage Assessment. Pattern Recognit. Image Anal. 2021, 31, 240–250. [Google Scholar] [CrossRef]
  125. James, J.; Heddallikar, A.; Choudhari, P.; Chopde, S. Analysis of Features in SAR Imagery Using GLCM Segmentation Algorithm. In Data Science; Verma, G.K., Soni, B., Bourennane, S., Ramos, A.C.B., Eds.; Transactions on Computer Systems and Networks; Springer Singapore: Singapore, 2021; pp. 253–266. ISBN 9789811616808. [Google Scholar]
  126. Raucoules, D.; Ristori, B.; De Michele, M.; Briole, P. Surface Displacement of the Mw 7 Machaze Earthquake (Mozambique): Complementary Use of Multiband InSAR and Radar Amplitude Image Correlation with Elastic Modelling. Remote Sens. Environ. 2010, 114, 2211–2218. [Google Scholar] [CrossRef]
  127. Sun, X.; Chen, X.; Yang, L.; Wang, W.; Zhou, X.; Wang, L.; Yao, Y. Using InSAR and PolSAR to Assess Ground Displacement and Building Damage after a Seismic Event: Case Study of the 2021 Baicheng Earthquake. Remote Sens. 2022, 14, 3009. [Google Scholar] [CrossRef]
  128. Xu, Z.; Wang, R.; Zhang, H.; Li, N.; Zhang, L. Building Extraction from High-Resolution SAR Imagery Based on Deep Neural Networks. Remote Sens. Lett. 2017, 8, 888–896. [Google Scholar] [CrossRef]
  129. Shahzad, M.; Maurer, M.; Fraundorfer, F.; Wang, Y.; Zhu, X.X. Buildings Detection in VHR SAR Images Using Fully Convolution Neural Networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1100–1116. [Google Scholar] [CrossRef]
  130. Stephenson, O.L.; Kohne, T.; Zhan, E.; Cahill, B.E.; Yun, S.-H.; Ross, Z.E.; Simons, M. Deep Learning-Based Damage Mapping With InSAR Coherence Time Series. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5207917. [Google Scholar] [CrossRef]
  131. Deng, Z.; Huang, M.; Wan, N.; Zhang, J. The Current Development of Structural Health Monitoring for Bridges: A Review. Buildings 2023, 13, 1360. [Google Scholar] [CrossRef]
  132. Afrin, A.; Rahman, M.M.; Chowdhury, A.H.; Eshraq, M.; Ukasha, M.R. Fire and Disaster Detection with Multimodal Quadcopter by Machine Learning; BRAC University: Dhaka, Bangladesh, 2023. [Google Scholar]
  133. Wang, T.; Tao, Y.; Chen, S.-C.; Shyu, M.-L. Multi-Task Multimodal Learning for Disaster Situation Assessment. In Proceedings of the 2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Shenzhen, China, 6–8 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 209–212. [Google Scholar]
  134. Qin, W.; Tang, J.; Lu, C.; Lao, S. A Typhoon Trajectory Prediction Model Based on Multimodal and Multitask Learning. Appl. Soft Comput. 2022, 122, 108804. [Google Scholar] [CrossRef]
  135. Morales, J.; Vázquez-Martín, R.; Mandow, A.; Morilla-Cabello, D.; García-Cerezo, A. The UMA-SAR Dataset: Multimodal Data Collection from a Ground Vehicle during Outdoor Disaster Response Training Exercises. Int. J. Robot. Res. 2021, 40, 835–847. [Google Scholar] [CrossRef]
  136. Mohanty, S.D.; Biggers, B.; Sayedahmed, S.; Pourebrahim, N.; Goldstein, E.B.; Bunch, R.; Chi, G.; Sadri, F.; McCoy, T.P.; Cosby, A. A Multi-Modal Approach towards Mining Social Media Data during Natural Disasters—A Case Study of Hurricane Irma. Int. J. Disaster Risk Reduct. 2021, 54, 102032. [Google Scholar] [CrossRef]
  137. Bultmann, S.; Quenzel, J.; Behnke, S. Real-Time Multi-Modal Semantic Fusion on Unmanned Aerial Vehicles. In Proceedings of the 2021 European Conference on Mobile Robots (ECMR), Bonn, Germany, 31 August–1 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–8. [Google Scholar]
  138. Yang, W.; Liu, X.; Zhang, L.; Yang, L.T. Big Data Real-Time Processing Based on Storm. In Proceedings of the 2013 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, Melbourne, Australia, 16–18 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1784–1787. [Google Scholar]
  139. Zeng, F.; Pang, C.; Tang, H. Sensors on the Internet of Things Systems for Urban Disaster Management: A Systematic Literature Review. Sensors 2023, 23, 7475. [Google Scholar] [CrossRef] [PubMed]
  140. Bouchard, I.; Rancourt, M.-È.; Aloise, D.; Kalaitzis, F. On Transfer Learning for Building Damage Assessment from Satellite Imagery in Emergency Contexts. Remote Sens. 2022, 14, 2532. [Google Scholar] [CrossRef]
  141. Sousa, M.J.; Moutinho, A.; Almeida, M. Wildfire Detection Using Transfer Learning on Augmented Datasets. Expert Syst. Appl. 2020, 142, 112975. [Google Scholar] [CrossRef]
  142. Kyrkou, C.; Theocharides, T. Deep-Learning-Based Aerial Image Classification for Emergency Response Applications Using Unmanned Aerial Vehicles. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 517–525. [Google Scholar]
  143. Alshowair, A.; Bail, J.; AlSuwailem, F.; Mostafa, A.; Abdel-Azeem, A. Use of Virtual Reality Exercises in Disaster Preparedness Training: A Scoping Review. SAGE Open Med. 2024, 12, 20503121241241936. [Google Scholar] [CrossRef] [PubMed]
  144. Shafian, S.A.; Hu, D.; Li, Y.; Adhikari, S. Improving Construction Site Safety by Incident Reporting Through Utilizing Virtual Reality. In Proceedings of the 2024 South East Section Meeting, Marietta, GA, USA, 10–12 March 2024. [Google Scholar]
  145. Sharma, S.; Jerripothula, S.; Mackey, S.; Soumare, O. Immersive Virtual Reality Environment of a Subway Evacuation on a Cloud for Disaster Preparedness and Response Training. In Proceedings of the 2014 IEEE Symposium on Computational Intelligence for Human-like Intelligence (CIHLI), Orlando, FL, USA, 9–12 December 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–6. [Google Scholar]
  146. Jane Lamb, K.; Davies, J.; Bowley, R.; Williams, J.-P. Incident Command Training: The Introspect Model. Int. J. Emerg. Serv. 2014, 3, 131–143. [Google Scholar] [CrossRef]
  147. Renganayagalu, S.K.; Mallam, S.C.; Nazir, S. Effectiveness of VR Head Mounted Displays in Professional Training: A Systematic Review. Technol. Knowl. Learn. 2021, 26, 999–1041. [Google Scholar] [CrossRef]
  148. Zhu, Y.; Li, N. Virtual and Augmented Reality Technologies for Emergency Management in the Built Environments: A State-of-the-Art Review. J. Saf. Sci. Resil. 2021, 2, 1–10. [Google Scholar] [CrossRef]
  149. Xie, B.; Liu, H.; Alghofaili, R.; Zhang, Y.; Jiang, Y.; Lobo, F.D.; Li, C.; Li, W.; Huang, H.; Akdere, M.; et al. A Review on Virtual Reality Skill Training Applications. Front. Virtual Real. 2021, 2, 645153. [Google Scholar] [CrossRef]
  150. Grassini, S.; Laumann, K.; Rasmussen Skogstad, M. The Use of Virtual Reality Alone Does Not Promote Training Performance (but Sense of Presence Does). Front. Psychol. 2020, 11, 1743. [Google Scholar] [CrossRef]
Figure 1. Overview of the methodology.
Figure 2. Number of publications per year. (* For 2024, publications through April 2024 have been considered in this paper).
Figure 3. Number of publications by different countries from 2014 to 2024.
Figure 4. Co-occurrence of author-indexed keywords.
Figure 5. Co-authorship network for different countries.
Figure 6. Co-authorship network for authors from around the world.
Figure 7. Citation network for different countries.
Figure 8. Citation of sources.
Figure 9. Citation of journal articles, Vetrivel et al. (2018) [17], Fernandez Galarreta et al. (2015) [18], Cooner et al. (2016) [19], Nex et al. (2019) [20], Ji et al. (2018) [21], Janalipour et al. (2016) [22], Gong et al. (2016) [23], Pan et al. (2020) [24], Ji et al. (2019) [25], Moya et al. (2019) [26], Adriano et al. (2021) [27], Anniballe et al. (2018) [28], Li et al. (2015) [29], Shen et al. (2022) [30], Moya et al. (2018a) [31], Sun et al. (2016) [32], Song et al. (2020) [33], Qing et al. (2022) [34], Sharma et al. (2017) [35], Tu et al. (2017) [36], Li et al. (2018) [37].
Figure 10. Bibliographic coupling of documents. Hu et al. (2022) [4], Pi et al. (2020) [11], Vetrivel et al. (2018) [17], Ji et al. (2018a) [21], Moya et al. (2019) [26], Song et al. (2020) [33], Qing et al. (2022) [34], Qi et al. (2016) [38], Moya et al. (2018c) [39], Zheng et al. (2021) [40], Brando et al. (2017) [41], Zhai et al. (2016) [42], Levine et al. (2022) [43], Pham et al. (2014) [44], Xie et al. (2016) [45], Wang et al. (2022) [46], Cui et al. (2023) [47], Thomas et al. (2014) [48], Wang et al. (2015a) [49], Kaur et al. (2022) [50], Liu et al. (2022) [51].