Advances in Remote Sensing for Disaster Research: Methodologies and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: closed (2 October 2020) | Viewed by 65003

Special Issue Editors

International Research Institute of Disaster Science, Tohoku University, Sendai 980-8572, Japan
Interests: multi-agent systems and agent-based simulation; tsunami simulation; evacuation simulation; remote sensing

Disaster Geo-Informatics Laboratory, International Research Institute of Disaster Science, Tohoku University, Aoba 468-1, Aramaki, Aoba-ku, Sendai 980-8572, Japan
Interests: earth observation; numerical modeling; disaster management; early warning; tsunami; flood; earthquake

Special Issue Information

Dear Colleagues,

Growing attention has been given to the use of satellite- and aircraft-based sensor technologies to detect and classify objects on Earth. Information acquired through remote sensing has been applied in numerous fields, including disaster research and disaster management. Remote sensing supports the detection and monitoring of, and the response to, disasters caused by natural hazards. It also provides the opportunity to identify urban vulnerabilities and exposure to possible disasters.

This Special Issue invites contributions highlighting recent advances in methodologies and applications of remote sensing for disaster research. Research focusing on earthquake, tsunami, and flood disasters is encouraged, but other types of disasters are also welcome. We encourage submissions of review and original research articles related, but not limited, to satellite remote sensing, aerial image analysis, and unmanned aerial vehicle (UAV) technology that focus on the following topics:

  • Gathering data for vulnerability and exposure analysis;
  • Post-disaster field survey using drones;
  • Damage assessment and mapping;
  • Disaster recovery monitoring;
  • Earth observation (EO) for humanitarian aid;
  • AI algorithms applied on remote sensing data for disaster research;
  • Public participation in scientific disaster research (citizen science);
  • Other topics related to remote sensing and disaster research.

IMPORTANT NOTE:

“Remote Sensing” is the Media Partner of the World Bosai Forum/International Disaster Risk Conference 2019, to be held in Sendai (WBF2019).

Although this Special Issue is open to other contributions, it also includes some of the outcomes of the session “Innovative Remote Sensing Technologies for Enhancing Disaster Management”, held at the WBF2019.

http://www.worldbosaiforum.com/2019/english/

Assoc. Prof. Dr. Erick Mas
Prof. Dr. Shunichi Koshimura
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Disaster research
  • Remote sensing
  • Disaster management
  • Satellite remote sensing
  • Unmanned aerial vehicle (UAV)
  • Aerial photo
  • Machine learning
  • Damage assessment
  • Disaster recovery
  • Drone field survey

Published Papers (11 papers)


Research

20 pages, 11041 KiB  
Article
Pyramid Pooling Module-Based Semi-Siamese Network: A Benchmark Model for Assessing Building Damage from xBD Satellite Imagery Datasets
by Yanbing Bai, Junjie Hu, Jinhua Su, Xing Liu, Haoyu Liu, Xianwen He, Shengwang Meng, Erick Mas and Shunichi Koshimura
Remote Sens. 2020, 12(24), 4055; https://doi.org/10.3390/rs12244055 - 11 Dec 2020
Cited by 32 | Viewed by 3985
Abstract
Most mainstream research on assessing building damage using satellite imagery is based on scattered datasets and lacks unified standards and methods to quantify and compare the performance of different models. To mitigate these problems, the present study develops a novel end-to-end benchmark model, termed the pyramid pooling module semi-Siamese network (PPM-SSNet), based on a large-scale xBD satellite imagery dataset. The high precision of the proposed model is achieved by adding residual blocks with dilated convolution and squeeze-and-excitation blocks into the network. Simultaneously, the highly automated process of satellite imagery input and damage classification result output is reached by employing concurrent learned attention mechanisms through a semi-Siamese network for end-to-end input and output purposes. Our proposed method achieves F1 scores of 0.90, 0.41, 0.65, and 0.70 for the undamaged, minor-damaged, major-damaged, and destroyed building classes, respectively. From the perspective of end-to-end methods, the ablation experiments and comparative analysis confirm the effectiveness and originality of the PPM-SSNet method. Finally, the consistent prediction results of our model for data from the 2011 Tohoku Earthquake verify the high performance of our model in terms of the domain shift problem, which implies that it is effective for evaluating future disasters. Full article
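
As an illustration of the semi-Siamese idea described above (a shared encoder applied to pre- and post-event images, a pyramid pooling module on the change features, and a damage-class head), the following minimal PyTorch sketch may help. It is not the authors' PPM-SSNet; the encoder depth, pooling sizes, patch size, and four-class output are placeholder assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PyramidPooling(nn.Module):
        """Pool features at several grid sizes, upsample, and concatenate (PSPNet-style)."""
        def __init__(self, in_ch, sizes=(1, 2, 4)):
            super().__init__()
            self.stages = nn.ModuleList(
                [nn.Sequential(nn.AdaptiveAvgPool2d(s), nn.Conv2d(in_ch, in_ch // len(sizes), 1))
                 for s in sizes]
            )

        def forward(self, x):
            h, w = x.shape[2:]
            pooled = [F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
                      for stage in self.stages]
            return torch.cat([x] + pooled, dim=1)

    class SemiSiameseDamageNet(nn.Module):
        """Toy semi-Siamese classifier: shared encoder, feature difference, damage-class head."""
        def __init__(self, num_classes=4):  # undamaged / minor / major / destroyed (assumption)
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.ppm = PyramidPooling(64, sizes=(1, 2, 4))
            # 64 + 3 * (64 // 3) = 127 channels after pyramid pooling concatenation
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(127, num_classes))

        def forward(self, pre, post):
            f_pre, f_post = self.encoder(pre), self.encoder(post)   # shared weights
            diff = self.ppm(torch.abs(f_post - f_pre))              # change features
            return self.head(diff)

    # Example: classify a batch of 64x64 pre/post building patches.
    model = SemiSiameseDamageNet()
    logits = model(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
    print(logits.shape)  # torch.Size([2, 4])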

25 pages, 14333 KiB  
Article
Technical Solution Discussion for Key Challenges of Operational Convolutional Neural Network-Based Building-Damage Assessment from Satellite Imagery: Perspective from Benchmark xBD Dataset
by Jinhua Su, Yanbing Bai, Xingrui Wang, Dong Lu, Bo Zhao, Hanfang Yang, Erick Mas and Shunichi Koshimura
Remote Sens. 2020, 12(22), 3808; https://doi.org/10.3390/rs12223808 - 20 Nov 2020
Cited by 12 | Viewed by 4158
Abstract
Earth observation satellite imagery supports building damage diagnosis during a disaster. Several models have been put forward on the xBD dataset; they can be divided into two levels, the building level and the pixel level, and have evolved into several versions that are reviewed in this paper. Four key challenges hinder researchers from moving forward on this task, and this paper offers technical solutions. First, metrics at different levels cannot be compared directly. We put forward a fairer metric and give a method to convert between metrics at the two levels. Secondly, drone images may be another important source, but drone data may contain only a post-disaster image. This paper presents and compares methods based on direct detection and on image generation. Thirdly, class imbalance is a typical feature of the xBD dataset and leads to poor F1 scores for the minor-damage and major-damage classes. This paper provides four specific data resampling strategies, namely Main-Label Over-Sampling (MLOS), Discrimination After Cropping (DAC), Dilation of Area with Minority (DAM) and the Synthetic Minority Over-Sampling Technique (SMOTE), as well as cost-sensitive re-weighting schemes. Fourthly, faster prediction is needed in real-time situations. This paper recommends three specific methods: feature-map subtraction, parameter sharing, and knowledge distillation. Finally, we developed our AI-driven Damage Diagnose Platform (ADDP). This paper introduces the structure and technical details of ADDP. Customized settings, interface preview, and satellite-image upload and download are the major services our platform provides. Full article
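
The cost-sensitive re-weighting mentioned among the class-imbalance remedies can be sketched with inverse-frequency class weights passed to a weighted cross-entropy loss; the class counts below are made-up placeholders, not xBD statistics.

    import torch
    import torch.nn as nn

    # Hypothetical per-class sample counts (undamaged, minor, major, destroyed); not xBD statistics.
    class_counts = torch.tensor([90000., 3000., 5000., 2000.])

    # Inverse-frequency class weights (the "balanced" heuristic: n_samples / (n_classes * count_c)).
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    criterion = nn.CrossEntropyLoss(weight=weights)

    # Example: 8 samples with logits over 4 damage classes.
    logits = torch.randn(8, 4)
    labels = torch.randint(0, 4, (8,))
    loss = criterion(logits, labels)
    print(weights, loss.item())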

22 pages, 22905 KiB  
Article
Integrated Methodology for Urban Flood Risk Mapping at the Microscale in Ungauged Regions: A Case Study of Hurghada, Egypt
by Karim I. Abdrabo, Sameh A. Kantoush, Mohamed Saber, Tetsuya Sumi, Omar M. Habiba, Dina Elleithy and Bahaa Elboshy
Remote Sens. 2020, 12(21), 3548; https://doi.org/10.3390/rs12213548 - 29 Oct 2020
Cited by 36 | Viewed by 5145
Abstract
Flood risk mapping forms the basis for disaster risk management and the associated decision-making systems. The effectiveness of this process is highly dependent on the quality of the input data of both hazard and vulnerability maps and the method utilized. On the one hand, for higher-quality hazard maps, the use of 2D models is generally suggested. However, in ungauged regions, such usage becomes a difficult task, especially at the microscale. On the other hand, vulnerability mapping at the microscale suffers limitations as a result of the failure to consider vulnerability components, the low spatial resolution of the input data, and the omission of urban planning aspects that have crucial impacts on the resulting quality. This paper aims to enhance the quality of both hazard and vulnerability maps at the urban microscale in ungauged regions. The proposed methodology integrates remote sensing data and high-quality city strategic plans (CSPs) using geographic information systems (GISs), a 2D rainfall-runoff-inundation (RRI) simulation model, and multicriteria decision-making analysis (MCDA, i.e., the analytic hierarchy process (AHP)). This method was implemented in Hurghada, Egypt, which from 1996 to 2019 was prone to several urban flood events. Current and future physical, social, and economic vulnerability maps were produced based on seven indicators (land use, building height, building conditions, building materials, total population, population density, and land value). The total vulnerability maps were combined with the hazard maps based on the Kron equation for three different return periods (REPs) 50, 10, and 5 years to create the corresponding flood risk maps. In general, this integrated methodology proved to be an economical tool to overcome the scarcity of data, to fill the gap between urban planning and flood risk management (FRM), and to produce comprehensive and high-quality flood risk maps that aid decision-making systems. Full article
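
The AHP step of the methodology can be illustrated with a small sketch that derives indicator weights from a pairwise comparison matrix via its principal eigenvector and reports the consistency ratio; the 3x3 matrix below is a hypothetical example, not the study's seven-indicator comparison.

    import numpy as np

    def ahp_weights(pairwise):
        """Principal-eigenvector weights and consistency ratio for an AHP comparison matrix."""
        vals, vecs = np.linalg.eig(pairwise)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w = w / w.sum()
        n = pairwise.shape[0]
        ci = (vals[k].real - n) / (n - 1)                                # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.0)   # Saaty's random index
        return w, ci / ri                                                # weights, consistency ratio (< 0.1 acceptable)

    # Hypothetical comparison of three vulnerability indicators (e.g., land use, population density, land value).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])
    w, cr = ahp_weights(A)
    print(w.round(3), round(cr, 3))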

31 pages, 10416 KiB  
Article
UAV Framework for Autonomous Onboard Navigation and People/Object Detection in Cluttered Indoor Environments
by Juan Sandino, Fernando Vanegas, Frederic Maire, Peter Caccetta, Conrad Sanderson and Felipe Gonzalez
Remote Sens. 2020, 12(20), 3386; https://doi.org/10.3390/rs12203386 - 16 Oct 2020
Cited by 55 | Viewed by 7616
Abstract
Response efforts in emergency applications such as border protection, humanitarian relief and disaster monitoring have improved with the use of Unmanned Aerial Vehicles (UAVs), which provide a flexibly deployed eye in the sky. These efforts have been further improved with advances in autonomous behaviours such as obstacle avoidance, take-off, landing, hovering and waypoint flight modes. However, most UAVs lack autonomous decision making for navigating in complex environments. This limitation creates a reliance of UAVs on ground control stations and, therefore, on their communication systems. The challenge is even more complex in indoor flight operations, where Global Navigation Satellite System (GNSS) signals are absent or weak and compromise aircraft behaviour. This paper proposes a UAV framework for autonomous navigation to address uncertainty and partial observability from imperfect sensor readings in cluttered indoor scenarios. The framework design allocates the computing processes onboard the flight controller and companion computer of the UAV, allowing it to explore dangerous indoor areas without the supervision and physical presence of the human operator. The system is illustrated under a Search and Rescue (SAR) scenario to detect and locate victims inside a simulated office building. The navigation problem is modelled as a Partially Observable Markov Decision Process (POMDP) and solved in real time through the Augmented Belief Trees (ABT) algorithm. Data are collected using Hardware in the Loop (HIL) simulations and real flight tests. Experimental results show the robustness of the proposed framework in detecting victims at various levels of location uncertainty. The proposed system ensures personal safety by letting the UAV explore dangerous environments without the intervention of the human operator. Full article
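
At its core, the POMDP formulation maintains a belief over hidden states and updates it after every action and observation. The discrete Bayes-filter sketch below illustrates that update on a toy two-room victim-search problem; it is unrelated to the ABT solver used in the paper, and all probabilities are invented.

    import numpy as np

    def belief_update(belief, T, Z, action, obs):
        """Discrete POMDP belief update: b'(s') is proportional to Z[s', obs] * sum_s T[s, action, s'] * b(s)."""
        predicted = belief @ T[:, action, :]   # prediction step using the transition model
        updated = Z[:, obs] * predicted        # correction step using the observation likelihood
        return updated / updated.sum()

    # Toy problem (all numbers invented): two hidden states (victim in room A / room B),
    # two actions (fly to room A / fly to room B), two observations (victim seen / not seen).
    T = np.zeros((2, 2, 2))
    T[:, 0, :] = T[:, 1, :] = np.eye(2)        # the victim does not move
    Z = np.array([[0.8, 0.2],                  # P(observation | victim in room A)
                  [0.3, 0.7]])                 # P(observation | victim in room B)
    belief = np.array([0.5, 0.5])
    belief = belief_update(belief, T, Z, action=0, obs=0)
    print(belief)                              # probability mass shifts toward "victim in room A"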

25 pages, 18391 KiB  
Article
Visual-Based Person Detection for Search-and-Rescue with UAS: Humans vs. Machine Learning Algorithm
by Sven Gotovac, Danijel Zelenika, Željko Marušić and Dunja Božić-Štulić
Remote Sens. 2020, 12(20), 3295; https://doi.org/10.3390/rs12203295 - 10 Oct 2020
Cited by 8 | Viewed by 4422
Abstract
Unmanned Aircraft Systems (UASs) have been recognized as an important resource in search-and-rescue (SAR) missions and, as such, have been used by the Croatian Mountain Search and Rescue (CMRS) service for over seven years. The UAS scans and photographs the terrain, and the high-resolution images are afterwards analyzed by SAR members to detect missing persons or to find a usable trace. It is a drawn-out, tiresome process prone to human error. To facilitate and speed up mission image processing and increase detection accuracy, we have developed several image-processing algorithms, the latest of which are convolutional neural network (CNN)-based. The CNNs were trained on a specially developed image database, named HERIDAL. Although these algorithms achieve excellent recall, their efficiency in actual SAR missions and their comparison with expert detection must be investigated. A series of mission simulations was planned and recorded for this purpose, then processed and labelled by the developed algorithm. A web application was developed through which experts analyzed raw and processed mission images. The algorithm achieved better recall than an expert, but the experts achieved better accuracy when they analyzed images that had already been processed and labelled. Full article
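
The recall and accuracy comparison between the algorithm and the experts rests on matching detections to ground-truth annotations; a minimal sketch of such an evaluation, using greedy IoU matching with placeholder boxes and a placeholder threshold, is given below.

    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    def precision_recall(predictions, ground_truth, thr=0.5):
        """Greedy one-to-one matching of detections to ground truth at an IoU threshold."""
        matched = set()
        tp = 0
        for p in predictions:
            best = max(((iou(p, g), i) for i, g in enumerate(ground_truth) if i not in matched),
                       default=(0, None))
            if best[0] >= thr:
                matched.add(best[1])
                tp += 1
        fp = len(predictions) - tp
        fn = len(ground_truth) - tp
        return tp / (tp + fp), tp / (tp + fn)

    # Hypothetical person detections vs. expert annotations on one aerial image.
    pred = [(10, 10, 50, 50), (200, 200, 240, 260)]
    gt = [(12, 8, 48, 52), (400, 400, 430, 430)]
    print(precision_recall(pred, gt))  # (0.5, 0.5)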

28 pages, 7476 KiB  
Article
Automatic Detection of Earthquake-Damaged Buildings by Integrating UAV Oblique Photography and Infrared Thermal Imaging
by Rui Zhang, Heng Li, Kaifeng Duan, Shucheng You, Ke Liu, Futao Wang and Yong Hu
Remote Sens. 2020, 12(16), 2621; https://doi.org/10.3390/rs12162621 - 13 Aug 2020
Cited by 48 | Viewed by 10586
Abstract
Extracting damage information of buildings after an earthquake is crucial for emergency rescue and loss assessment. Low-altitude remote sensing by unmanned aerial vehicles (UAVs) for emergency rescue has unique advantages. In this study, we establish a remote sensing information-extraction method that combines ultramicro oblique UAV and infrared thermal imaging technology to automatically detect the structural damage of buildings and cracks in external walls. The method consists of four parts: (1) 3D live-action modeling and building structure analysis based on ultramicro oblique images; (2) extraction of damage information of buildings; (3) detection of cracks in walls based on infrared thermal imaging; and (4) integration of detection systems for information of earthquake-damaged buildings. First, a 3D live-action building model is constructed. A multi-view structure image for segmentation can be obtained based on this method. Second, a method of extracting information on damage to building structures using a 3D live-action building model as the geographic reference is proposed. Damage information of the internal structure of the building can be obtained based on this method. Third, based on analyzing the temperature field distribution on the exterior walls of earthquake-damaged buildings, an automatic method of detecting cracks in the walls by using infrared thermal imaging is proposed. Finally, the damage information detection and assessment system is researched and developed, and the system is integrated. Taking earthquake search-and-rescue simulation as an example, the effectiveness of this method is verified. The damage distribution in the internal structure and external walls of buildings in this area is obtained with an accuracy of 78%. Full article
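
The wall-crack detection component exploits anomalies in the exterior-wall temperature field. A minimal sketch of that idea is to threshold the local temperature gradient of a thermal image; the synthetic array, gradient operator, and threshold below are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def thermal_anomaly_mask(temperature, grad_threshold=2.0):
        """Flag pixels whose local temperature gradient exceeds a threshold (candidate cracks)."""
        gy, gx = np.gradient(temperature.astype(float))   # degrees C per pixel in y and x
        grad_mag = np.hypot(gx, gy)
        return grad_mag > grad_threshold

    # Synthetic 100x100 wall at ~25 C with a colder vertical line standing in for a crack.
    wall = np.full((100, 100), 25.0) + np.random.normal(0, 0.2, (100, 100))
    wall[:, 50] -= 5.0
    mask = thermal_anomaly_mask(wall, grad_threshold=2.0)
    print(mask.sum(), "candidate crack pixels")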

16 pages, 5804 KiB  
Article
Learning from the 2018 Western Japan Heavy Rains to Detect Floods during the 2019 Hagibis Typhoon
by Luis Moya, Erick Mas and Shunichi Koshimura
Remote Sens. 2020, 12(14), 2244; https://doi.org/10.3390/rs12142244 - 13 Jul 2020
Cited by 24 | Viewed by 5669
Abstract
Applications of machine learning on remote sensing data appear to be endless. Its use in damage identification for early response in the aftermath of a large-scale disaster has a specific issue. The collection of training data right after a disaster is costly, time-consuming, and many times impossible. This study analyzes a possible solution to the referred issue: the collection of training data from past disaster events to calibrate a discriminant function. Then the identification of affected areas in a current disaster can be performed in near real-time. The performance of a supervised machine learning classifier to learn from training data collected from the 2018 heavy rainfall at Okayama Prefecture, Japan, and to identify floods due to the typhoon Hagibis on 12 October 2019 at eastern Japan is reported in this paper. The results show a moderate agreement with flood maps provided by local governments and public institutions, and support the assumption that previous disaster information can be used to identify a current disaster in near-real time. Full article
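
The central idea, calibrating a discriminant function on a past event and applying it to a new one, can be sketched with scikit-learn's linear discriminant analysis on per-pixel SAR features; the feature arrays below are random placeholders standing in for the 2018 and 2019 scenes.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Placeholder training data from a past event: per-pixel features
    # (e.g., pre/post backscatter and coherence) with flood / non-flood labels.
    X_past = rng.normal(size=(1000, 3))
    y_past = (X_past[:, 0] + 0.5 * X_past[:, 1] > 0).astype(int)

    clf = LinearDiscriminantAnalysis().fit(X_past, y_past)

    # Apply the calibrated discriminant function to pixels from a new event in near real time.
    X_new = rng.normal(size=(500, 3))
    flood_map = clf.predict(X_new)
    print(flood_map.mean(), "fraction of pixels labelled as flooded")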

19 pages, 5759 KiB  
Article
Deep Learning-Based Identification of Collapsed, Non-Collapsed and Blue Tarp-Covered Buildings from Post-Disaster Aerial Images
by Hiroyuki Miura, Tomohiro Aridome and Masashi Matsuoka
Remote Sens. 2020, 12(12), 1924; https://doi.org/10.3390/rs12121924 - 14 Jun 2020
Cited by 49 | Viewed by 6767
Abstract
A methodology for the automated identification of building damage from post-disaster aerial images was developed based on convolutional neural networks (CNNs) and building damage inventories. The aerial images and the building damage data obtained in the 2016 Kumamoto and the 1995 Kobe, Japan, earthquakes were analyzed. Since the roofs of many moderately damaged houses are covered with blue tarps immediately after disasters, not only collapsed and non-collapsed buildings but also the buildings covered with blue tarps were identified by the proposed method. The CNN architecture developed in this study correctly classifies the building damage with an accuracy of approximately 95% for both earthquake datasets. We applied the developed CNN model to aerial images in Chiba, Japan, damaged by the typhoon in September 2019. The result shows that more than 90% of the building damage is correctly classified by the CNN model. Full article
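
A minimal sketch of the three-class patch classification task (collapsed, non-collapsed, blue tarp-covered) is shown below; the architecture and patch size are assumptions for illustration, not the CNN developed by the authors.

    import torch
    import torch.nn as nn

    class RoofDamageCNN(nn.Module):
        """Toy CNN for 3-class roof patches: collapsed, non-collapsed, blue tarp-covered."""
        def __init__(self, num_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # Example: a batch of 64x64 aerial roof patches.
    model = RoofDamageCNN()
    print(model(torch.rand(4, 3, 64, 64)).shape)  # torch.Size([4, 3])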

24 pages, 37778 KiB  
Article
Wetland Surface Water Detection from Multipath SAR Images Using Gaussian Process-Based Temporal Interpolation
by Yukio Endo, Meghan Halabisky, L. Monika Moskal and Shunichi Koshimura
Remote Sens. 2020, 12(11), 1756; https://doi.org/10.3390/rs12111756 - 29 May 2020
Cited by 8 | Viewed by 4107
Abstract
Wetlands provide society with a myriad of ecosystem services, such as water storage, food sources, and flood control. The ecosystem services provided by a wetland are largely dependent on its hydrological dynamics. Constant monitoring of the spatial extent of water surfaces and the duration of flooding of a wetland is necessary to understand the impact of drought on the ecosystem services a wetland provides. Synthetic aperture radar (SAR) has the potential to reveal wetland dynamics. Multitemporal SAR image analysis for wetland monitoring has been extensively studied based on the advances of modern SAR missions. Unfortunately, most previous studies utilized monopath SAR images, which result in limited success. Tracking changes in individual wetlands remains a challenging task because several environmental factors, such as wind-roughened water, degrade image quality. In general, the data acquisition frequency is an important factor in time series analysis. We propose a Gaussian process-based temporal interpolation (GPTI) method that enables the synergistic use of SAR images taken from multiple paths. The proposed model is applied to a series of Sentinel-1 images capturing wetlands in Okanogan County, Washington State. Our experimental analysis demonstrates that the multiple path analysis based on the proposed method can extract seasonal changes more accurately than a single path analysis. Full article
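
Gaussian process-based temporal interpolation of this kind can be illustrated with scikit-learn's GaussianProcessRegressor fitted to irregularly timed observations (as acquisitions merged from several orbit paths are) and evaluated on a regular time grid; the water-fraction series below is synthetic.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic water-extent fraction observed at irregular acquisition dates (days of year),
    # standing in for observations merged from several SAR orbit paths.
    t_obs = np.array([5, 17, 29, 52, 64, 88, 101, 125, 149, 170], dtype=float)
    y_obs = 0.5 + 0.3 * np.sin(2 * np.pi * t_obs / 365) + np.random.normal(0, 0.02, t_obs.size)

    kernel = RBF(length_scale=30.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(t_obs.reshape(-1, 1), y_obs)

    # Interpolate onto a regular 5-day grid with an uncertainty estimate.
    t_grid = np.arange(0, 180, 5, dtype=float).reshape(-1, 1)
    mean, std = gp.predict(t_grid, return_std=True)
    print(mean[:5].round(3), std[:5].round(3))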

19 pages, 16535 KiB  
Article
A Semiautomatic Pixel-Object Method for Detecting Landslides Using Multitemporal ALOS-2 Intensity Images
by Bruno Adriano, Naoto Yokoya, Hiroyuki Miura, Masashi Matsuoka and Shunichi Koshimura
Remote Sens. 2020, 12(3), 561; https://doi.org/10.3390/rs12030561 - 08 Feb 2020
Cited by 23 | Viewed by 5323
Abstract
The rapid and accurate mapping of large-scale landslides and other mass movement disasters is crucial for prompt disaster response efforts and immediate recovery planning. As such, remote sensing information, especially from synthetic aperture radar (SAR) sensors, has significant advantages over cloud-covered optical imagery and conventional field survey campaigns. In this work, we introduced an integrated pixel-object image analysis framework for landslide recognition using SAR data. The robustness of our proposed methodology was demonstrated by mapping two different source-induced landslide events, namely, the debris flows following the torrential rainfall that fell over Hiroshima, Japan, in early July 2018 and the coseismic landslide that followed the 2018 Mw6.7 Hokkaido earthquake. For both events, only a pair of SAR images acquired before and after each disaster by the Advanced Land Observing Satellite-2 (ALOS-2) was used. Additional information, such as digital elevation model (DEM) and land cover information, was employed only to constrain the damage detected in the affected areas. We verified the accuracy of our method by comparing it with the available reference data. The detection results showed an acceptable correlation with the reference data in terms of the locations of damage. Numerical evaluations indicated that our methodology could detect landslides with an accuracy exceeding 80%. In addition, the kappa coefficients for the Hiroshima and Hokkaido events were 0.30 and 0.47, respectively. Full article
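
The overall accuracy and kappa coefficient reported here are both derived from a confusion matrix; the short sketch below computes them for a made-up landslide/non-landslide matrix, not the paper's results.

    import numpy as np

    def overall_accuracy_and_kappa(cm):
        """Overall accuracy and Cohen's kappa from a confusion matrix (rows: reference, cols: predicted)."""
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                                 # observed agreement (overall accuracy)
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
        return po, (po - pe) / (1 - pe)

    # Hypothetical landslide / non-landslide confusion matrix (not the paper's numbers).
    cm = [[120, 40],
          [60, 780]]
    oa, kappa = overall_accuracy_and_kappa(cm)
    print(round(oa, 3), round(kappa, 3))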

23 pages, 7054 KiB  
Article
A Modular Processing Chain for Automated Flood Monitoring from Multi-Spectral Satellite Data
by Marc Wieland and Sandro Martinis
Remote Sens. 2019, 11(19), 2330; https://doi.org/10.3390/rs11192330 - 08 Oct 2019
Cited by 59 | Viewed by 5823
Abstract
Emergency responders frequently request satellite-based crisis information for flood monitoring to target the often-limited resources and to prioritize response actions throughout a disaster situation. We present a generic processing chain that covers all modules required for operational flood monitoring from multi-spectral satellite data. This includes data search, ingestion and preparation, water segmentation and mapping of flooded areas. Segmentation of the water extent is done by a convolutional neural network that has been trained on a global dataset of Landsat TM, ETM+, OLI and Sentinel-2 images. Clouds, cloud shadows and snow/ice are specifically handled by the network to remove potential biases from downstream analysis. Compared to previous work in this direction, the method does not require atmospheric correction or post-processing and does not rely on ancillary data. Our method achieves an Overall Accuracy (OA) of 0.93, Kappa of 0.87 and Dice coefficient of 0.90. It outperforms a widely used Random Forest classifier and a Normalized Difference Water Index (NDWI) threshold method. We introduce an adaptable reference water mask that is derived by time-series analysis of archive imagery to distinguish flood from permanent water. When tested against manually produced rapid mapping products for three flood disasters (Germany 2013, China 2016 and Peru 2017), the method achieves ≥0.92 OA, ≥0.86 Kappa and ≥0.90 Dice coefficient. Furthermore, we present a flood monitoring application centred on Bihar, India. The processing chain produces very high OA (0.94), Kappa (0.92) and Dice coefficient (0.97) and shows consistent performance throughout a monitoring period of one year that involves 19 Landsat OLI (μ_Kappa = 0.92, σ_Kappa = 0.07) and 61 Sentinel-2 images (μ_Kappa = 0.92, σ_Kappa = 0.05). Moreover, we show that the mean effective revisit period (considering cloud cover) can be improved significantly by multi-sensor combination (three days with Sentinel-1, Sentinel-2, and Landsat OLI). Full article
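
The NDWI threshold baseline mentioned in the comparison is straightforward to sketch: compute NDWI = (green - NIR) / (green + NIR) and flag values above a threshold as water. The reflectance arrays and threshold below are placeholders.

    import numpy as np

    def ndwi_water_mask(green, nir, threshold=0.0):
        """Water mask from the Normalized Difference Water Index (McFeeters NDWI)."""
        green = green.astype(float)
        nir = nir.astype(float)
        ndwi = (green - nir) / np.maximum(green + nir, 1e-6)  # avoid division by zero
        return ndwi > threshold

    # Placeholder surface-reflectance bands (e.g., Sentinel-2 B03 and B08 patches).
    green = np.random.uniform(0.02, 0.3, (256, 256))
    nir = np.random.uniform(0.02, 0.4, (256, 256))
    water = ndwi_water_mask(green, nir)
    print(water.mean(), "fraction of pixels classified as water")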