Search Results (239)

Search Parameters:
Keywords = optical and SAR satellite images

26 pages, 49356 KB  
Article
A Methodology to Detect Changes in Water Bodies by Using Radar and Optical Fusion of Images: A Case Study of the Antioquia near East in Colombia
by César Olmos-Severiche, Juan Valdés-Quintero, Jean Pierre Díaz-Paz, Sandra P. Mateus, Andres Felipe Garcia-Henao, Oscar E. Cossio-Madrid, Blanca A. Botero and Juan C. Parra
Appl. Sci. 2025, 15(23), 12559; https://doi.org/10.3390/app152312559 - 27 Nov 2025
Abstract
This study presents a novel methodology for the detection and monitoring of changes in surface water bodies, with a particular emphasis on the near-eastern region of Antioquia, Colombia. The proposed approach integrates remote sensing and artificial intelligence techniques through the fusion of multi-source imagery, specifically Synthetic Aperture Radar (SAR) and optical data. The framework is structured in several stages. First, radar imagery is pre-processed using an autoencoder-based despeckling model, which leverages deep learning to reduce noise while preserving structural information critical for environmental monitoring. Concurrently, optical imagery is processed through the computation of normalized spectral indices, including NDVI, NDWI, and NDBI, capturing essential characteristics related to vegetation, water presence, and surrounding built-up areas. These complementary sources are subsequently fused into synthetic RGB composite representations, ensuring spatial and spectral consistency between radar and optical domains. To operationalize this methodology, a standardized and reproducible workflow was implemented for automated image acquisition, preprocessing, fusion, and segmentation. The Segment Anything Model (SAM) was integrated into the process to generate semantically interpretable classes, enabling more precise delineation of hydrological features, flood-prone areas, and urban expansion near waterways. This automated system was embedded in a software prototype, allowing local users to manage large volumes of satellite data efficiently and consistently. The results demonstrate that the combination of SAR and optical datasets provides a robust solution for monitoring dynamic hydrological environments, particularly in tropical mountainous regions with persistent cloud cover. The fused products enhanced the detection of small streams and complex hydrological patterns that are typically challenging to monitor using optical imagery alone. 
By integrating these technical advancements, the methodology supports improved environmental monitoring and provides actionable insights for decision-makers. At the local scale, municipal governments can use these outputs for urban planning and flood risk mitigation; at the regional level, environmental and territorial authorities can strengthen water resource management and conservation strategies; and at the national level, risk management institutions can incorporate this information into early warning systems and disaster preparedness programs. Overall, this research delivers a scalable and automated tool for surface water monitoring, bridging the gap between scientific innovation and operational decision-making to support sustainable watershed management under increasing pressures from climate change and urbanization. Full article
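The optical indices named in the abstract follow standard normalized band-ratio definitions. A minimal sketch (function and band names are illustrative, not from the paper):

```python
import numpy as np

def normalized_index(band_a, band_b, eps=1e-9):
    """Generic normalized difference: (a - b) / (a + b)."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    return (a - b) / (a + b + eps)

def spectral_indices(red, green, nir, swir):
    """NDVI, NDWI (McFeeters), and NDBI from surface-reflectance bands."""
    return {
        "ndvi": normalized_index(nir, red),    # vegetation
        "ndwi": normalized_index(green, nir),  # open water
        "ndbi": normalized_index(swir, nir),   # built-up areas
    }
```

The `eps` guard avoids division by zero over dark pixels; the paper's exact preprocessing is not specified in the abstract.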

13 pages, 326 KB  
Technical Note
Fast and Accurate System for Onboard Target Recognition on Raw SAR Echo Data
by Gustavo Jacinto, Mário Véstias, Paulo Flores and Rui Policarpo Duarte
Remote Sens. 2025, 17(21), 3547; https://doi.org/10.3390/rs17213547 - 26 Oct 2025
Abstract
Synthetic Aperture Radar (SAR) onboard satellites provides high-resolution Earth imaging independent of weather conditions. SAR data are acquired by an aircraft or satellite and sent to a ground station to be processed. However, for novel applications requiring real-time analysis and decisions, onboard processing is necessary to escape the limited downlink bandwidth and latency. One such application is real-time target recognition, which has emerged as a decisive operation in areas such as defense and surveillance. In recent years, deep learning models have improved the accuracy of target recognition algorithms. However, these are based on optical image processing and are computationally and memory intensive, which requires not only processing the SAR pulse data but also optimized models and architectures for efficient deployment on onboard computers. This paper presents a fast and accurate target recognition system that operates directly on raw SAR data using a neural network model. The network receives and processes SAR echo data directly, avoiding computationally expensive DSP image-formation algorithms such as Backprojection and Range-Doppler. This allows the use of simpler and faster models while maintaining accuracy. The system was designed, optimized, and tested on low-cost embedded devices with low size, weight, and energy requirements (Khadas VIM3 and Raspberry Pi 5). Results demonstrate that the proposed solution achieves target classification accuracy on the MSTAR dataset close to 100% in less than 1.5 ms and under 5.5 W of power. Full article

20 pages, 7699 KB  
Article
Large-Gradient Displacement Monitoring and Parameter Inversion of Mining Collapse with the Optical Flow Method of Synthetic Aperture Radar Images
by Chuanjiu Zhang and Jie Chen
Remote Sens. 2025, 17(21), 3533; https://doi.org/10.3390/rs17213533 - 25 Oct 2025
Abstract
Monitoring large-gradient surface displacement caused by underground mining remains a significant challenge for conventional Synthetic Aperture Radar (SAR)-based techniques. This study introduces optical flow methods to monitor large-gradient displacement in mining areas and conducts a comprehensive comparison with Small Baseline Subset Interferometric SAR (SBAS-InSAR) and Pixel Offset Tracking (POT) methods. Using 12 high-resolution TerraSAR-X (TSX) SAR images over the Daliuta mining area in Yulin, China, we evaluate the performance of each method in terms of sensitivity to displacement gradients, computational efficiency, and monitoring accuracy. Results indicate that SBAS-InSAR is only capable of detecting displacement at the decimeter level in the Daliuta mining area and is unable to monitor rapid, large-gradient displacement exceeding the meter scale. While POT can detect meter-scale displacements, it suffers from low efficiency and low precision. In contrast, the proposed optical flow method (OFM) achieves sub-pixel accuracy, with root mean square errors of 0.17 m (compared to 0.26 m for POT) when validated against Global Navigation Satellite System (GNSS) data, while improving computational efficiency by nearly 30 times compared to POT. Furthermore, based on the optical flow results, mining parameters and three-dimensional (3D) displacement fields were successfully inverted, revealing maximum vertical subsidence exceeding 4.4 m and horizontal displacement over 1.5 m. These findings demonstrate that the OFM is a reliable and efficient tool for large-gradient displacement monitoring in mining areas, offering valuable support for hazard assessment and mining management. Full article
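The displacement-tracking idea shared by POT and optical-flow methods can be sketched as patch matching between two acquisitions; real methods add dense estimation and sub-pixel refinement. A toy integer-pixel version (names and parameters are illustrative):

```python
import numpy as np

def patch_offset(ref, post, center, patch=8, search=5):
    """Estimate the integer-pixel displacement of a patch between two
    co-registered images by exhaustive normalized cross-correlation.
    A simple stand-in for offset tracking, not the paper's OFM."""
    r, c = center
    tpl = ref[r - patch:r + patch, c - patch:c + patch].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
    best, best_off = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = post[r + dr - patch:r + dr + patch,
                       c + dc - patch:c + dc + patch].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = float((tpl * win).mean())  # correlation coefficient
            if score > best:
                best, best_off = score, (dr, dc)
    return best_off
```

Applied per pixel (or per grid node), this yields the dense offset field that the inversion of mining parameters would start from.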

24 pages, 1777 KB  
Systematic Review
Monitoring Biodiversity and Ecosystem Services Using L-Band Synthetic Aperture Radar Satellite Data
by Brian Alan Johnson, Chisa Umemiya, Koji Miwa, Takeo Tadono, Ko Hamamoto, Yasuo Takahashi, Mariko Harada and Osamu Ochiai
Remote Sens. 2025, 17(20), 3489; https://doi.org/10.3390/rs17203489 - 20 Oct 2025
Abstract
Over the last decade, L-band synthetic aperture radar (SAR) satellite data has become more widely available globally, providing new opportunities for biodiversity and ecosystem services (BES) monitoring. To better understand these opportunities, we conducted a systematic scoping review of articles that utilized L-band synthetic aperture radar (SAR) satellite data for BES monitoring. We found that the data have mainly been analyzed using image classification and regression methods, with classification methods attempting to understand how the extent, spatial distribution, and/or changes in different types of land use/land cover affect BES, and regression methods attempting to generate spatially explicit maps of important BES-related indicators like species richness or vegetation above-ground biomass. Random forest classification and regression algorithms, in particular, were used frequently and found to be promising in many recent studies. Deep learning algorithms, while also promising, have seen relatively little usage thus far. PALSAR-1/-2 annual mosaic data was by far the most frequently used dataset. Although free, this data is limited by its low temporal resolution. To help overcome this and other limitations of the existing L-band SAR datasets, 64% of studies combined them with other types of remote sensing data (most commonly, optical multispectral data). Study sites were mainly subnational in scale and located in countries with high species richness. Future research opportunities include investigating the benefits of new free, high temporal resolution L-band SAR datasets (e.g., PALSAR-2 ScanSAR data) and the potential of combining L-band SAR with new sources of SAR data (e.g., P-band SAR data from the “Biomass” satellite) and further exploring the potential of deep learning techniques. Full article
(This article belongs to the Special Issue Global Biospheric Monitoring with Remote Sensing (2nd Edition))

32 pages, 19967 KB  
Article
Monitoring the Recovery Process After Major Hydrological Disasters with GIS, Change Detection and Open and Free Multi-Sensor Satellite Imagery: Demonstration in Haiti After Hurricane Matthew
by Wilson Andres Velasquez Hurtado and Deodato Tapete
Water 2025, 17(19), 2902; https://doi.org/10.3390/w17192902 - 7 Oct 2025
Abstract
Recovery from disasters is a complex process requiring coordinated measures to restore infrastructure, services and quality of life. While remote sensing is a well-established means for damage assessment, so far very few studies have shown how satellite imagery can be used by technical officers of affected countries to provide crucial, up-to-date information to monitor reconstruction progress and natural restoration. To address this gap, the present study proposes a multi-temporal observatory method relying on GIS, change detection techniques and open and free multi-sensor satellite imagery to generate thematic maps documenting, over time, the impact and recovery from hydrological disasters such as hurricanes, tropical storms and induced flooding. The demonstration is carried out with regard to Hurricane Matthew, which struck Haiti in October 2016 and triggered a humanitarian crisis in the Sud and Grand’Anse regions. Synthetic Aperture Radar (SAR) amplitude change detection techniques were applied to pre-, cross- and post-disaster Sentinel-1 image pairs from August 2016 to September 2020, while optical Sentinel-2 images were used for verification and land cover classification. With regard to inundated areas, the analysis allowed us to determine the time needed for water recession and for rural plain areas to be reclaimed for agricultural exploitation. With regard to buildings, the cities of Jérémie and Les Cayes were not only the most impacted areas, but were also those where most reconstruction efforts were made. However, some instances of new settlements located in at-risk zones, and thus susceptible to future hurricanes, were found. This result suggests that the thematic maps can support policy-makers and regulators in reducing risk and making reconstruction more resilient. Finally, to evaluate the replicability of the proposed method, an example at country scale is discussed with regard to the June 2023 flooding event. Full article
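A common form of SAR amplitude change detection, consistent with the pre-/post-disaster pair analysis described above, is the log-ratio operator; the threshold below is an illustrative value, not one taken from the study:

```python
import numpy as np

def log_ratio_change(pre, post, threshold_db=3.0):
    """Amplitude change detection between two co-registered SAR images
    via the log-ratio operator; returns a boolean change mask.
    threshold_db is an assumed example value."""
    ratio_db = 10.0 * np.log10((post + 1e-9) / (pre + 1e-9))
    return np.abs(ratio_db) >= threshold_db
```

Applied to pre-, cross- and post-event pairs, the resulting masks can be polygonized in a GIS to build the thematic recovery maps.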
(This article belongs to the Special Issue Applications of GIS and Remote Sensing in Hydrology and Hydrogeology)

36 pages, 9276 KB  
Article
Understanding Landslide Expression in SAR Backscatter Data: Global Study and Disaster Response Application
by Erin Lindsay, Alexandra Jarna Ganerød, Graziella Devoli, Johannes Reiche, Steinar Nordal and Regula Frauenfelder
Remote Sens. 2025, 17(19), 3313; https://doi.org/10.3390/rs17193313 - 27 Sep 2025
Abstract
Cloud cover can delay landslide detection in optical satellite imagery for weeks, complicating disaster response. Synthetic Aperture Radar (SAR) backscatter imagery, which is widely used for monitoring floods and avalanches, remains underutilised for landslide detection due to a limited understanding of landslide signatures in SAR data. We developed a conceptual model of landslide expression in SAR backscatter (σ°) change images through iterative investigation of over 1000 landslides across 30 diverse study areas. Using multi-temporal composites and dense time series Sentinel-1 C-band SAR data, we identified characteristic patterns linked to land cover, terrain, and landslide material. The results showed either increased or decreased backscatter depending on environmental conditions, with reduced visibility in urban or mixed vegetation areas. Detection was also hindered by geometric distortions and snow cover. The diversity of landslide expression illustrates the need to consider local variability and multi-track (ascending and descending) satellite data in designing representative training datasets for automated detection models. The conceptual model was applied to three recent disaster events using the first post-event Sentinel-1 image, successfully identifying previously unknown landslides before optical imagery became available in two cases. This study provides a theoretical foundation for interpreting landslides in SAR imagery and demonstrates its utility for rapid landslide detection. The findings support further exploration of rapid landslides in SAR backscatter data and future development of automated detection models, offering a valuable tool for disaster response. Full article
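One way to visualize backscatter change of the kind described above is a multi-temporal composite that assigns acquisitions to color channels; the channel assignment and dB stretch below are illustrative choices, not the authors' exact recipe:

```python
import numpy as np

def change_composite(pre_db, post_db, lo=-20.0, hi=0.0):
    """RGB composite of two sigma0 (dB) images: pre-event in red,
    post-event in green and blue, so backscatter decreases appear
    reddish and increases appear cyan. lo/hi define the display stretch."""
    def scale(x):
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return np.dstack([scale(pre_db), scale(post_db), scale(post_db)])
```

Interpreting such composites against land cover and terrain is exactly where the paper's conceptual model applies: the same landslide can brighten or darken depending on geometry and surface change.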

26 pages, 14923 KB  
Article
Multi-Sensor Flood Mapping in Urban and Agricultural Landscapes of the Netherlands Using SAR and Optical Data with Random Forest Classifier
by Omer Gokberk Narin, Aliihsan Sekertekin, Caglar Bayik, Filiz Bektas Balcik, Mahmut Arıkan, Fusun Balik Sanli and Saygin Abdikan
Remote Sens. 2025, 17(15), 2712; https://doi.org/10.3390/rs17152712 - 5 Aug 2025
Abstract
Floods stand as one of the most harmful natural disasters, and they have become more dangerous because of climate change effects on urban structures and agricultural fields. This research presents a comprehensive flood mapping approach that combines multi-sensor satellite data with a machine learning method to evaluate the July 2021 flood in the Netherlands. The research developed 25 different feature scenarios through the combination of Sentinel-1, Landsat-8, and Radarsat-2 imagery, using backscattering coefficients together with optical Normalized Difference Water Index (NDWI) and Hue, Saturation, and Value (HSV) images and Synthetic Aperture Radar (SAR)-derived Grey Level Co-occurrence Matrix (GLCM) texture features. The Random Forest (RF) classifier was optimized before its application to two different flood-prone regions: Zutphen’s urban area and Heijen’s agricultural land. Results demonstrated that the multi-sensor fusion scenarios (S18, S20, and S25) achieved the highest classification performance, with overall accuracy reaching 96.4% (Kappa = 0.906–0.949) in Zutphen and 87.5% (Kappa = 0.754–0.833) in Heijen. F1 scores for the flood class varied from 0.742 to 0.969 in Zutphen and from 0.626 to 0.969 in Heijen across scenarios. The addition of SAR texture metrics enhanced flood boundary identification in both urban and agricultural settings. Radarsat-2 provided limited benefit to the overall results, since the freely available Sentinel-1 and Landsat-8 data proved more effective. This study demonstrates that using SAR and optical features together with texture information creates a powerful and expandable flood mapping system, and that RF classification performs well in diverse landscape settings. Full article
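GLCM texture features like those used here can be computed with a small amount of NumPy; this sketch handles a single non-negative pixel offset and one texture statistic (contrast), whereas the study uses full GLCM feature sets:

```python
import numpy as np

def glcm(q, levels, offset=(0, 1)):
    """Normalized grey-level co-occurrence matrix of a quantized image
    (integer values in [0, levels)) for one non-negative pixel offset."""
    dr, dc = offset
    a = q[: q.shape[0] - dr, : q.shape[1] - dc]  # reference pixels
    b = q[dr:, dc:]                              # neighbor pixels
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)      # count co-occurrences
    return m / m.sum()

def glcm_contrast(m):
    """GLCM contrast: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())
```

In a real workflow these statistics would be computed in a sliding window per pixel and stacked with the backscatter and optical bands before classification.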
(This article belongs to the Special Issue Remote Sensing Applications in Flood Forecasting and Monitoring)

23 pages, 8942 KB  
Article
Optical and SAR Image Registration in Equatorial Cloudy Regions Guided by Automatically Point-Prompted Cloud Masks
by Yifan Liao, Shuo Li, Mingyang Gao, Shizhong Li, Wei Qin, Qiang Xiong, Cong Lin, Qi Chen and Pengjie Tao
Remote Sens. 2025, 17(15), 2630; https://doi.org/10.3390/rs17152630 - 29 Jul 2025
Abstract
The equator’s unique combination of high humidity and temperature renders optical satellite imagery highly susceptible to persistent cloud cover. In contrast, synthetic aperture radar (SAR) offers a robust alternative due to its ability to penetrate clouds with microwave imaging. This study addresses the challenges of cloud-induced data gaps and cross-sensor geometric biases by proposing an advanced optical and SAR image-matching framework specifically designed for cloud-prone equatorial regions. We use a prompt-driven visual segmentation model with automatic prompt point generation to produce cloud masks that guide cross-modal feature-matching and joint adjustment of optical and SAR data. This process results in a comprehensive digital orthophoto map (DOM) with high geometric consistency, retaining the fine spatial detail of optical data and the all-weather reliability of SAR. We validate our approach across four equatorial regions using five satellite platforms with varying spatial resolutions and revisit intervals. Even in areas with more than 50 percent cloud cover, our method maintains sub-pixel accuracy at manual check points and delivers comprehensive DOM products, establishing a reliable foundation for downstream environmental monitoring and ecosystem analysis. Full article
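The automatic prompt point generation mentioned above can be approximated by sampling a regular grid and discarding points that fall on clouds; the grid spacing and interface are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def grid_prompts(cloud_mask, step=16):
    """Candidate prompt points for a segmentation model: a regular grid
    of (row, col) pixel coordinates, keeping only points outside the
    cloud mask (True = cloud). A sketch of the prompting idea."""
    rows = np.arange(step // 2, cloud_mask.shape[0], step)
    cols = np.arange(step // 2, cloud_mask.shape[1], step)
    return [(int(r), int(c)) for r in rows for c in cols
            if not cloud_mask[r, c]]
```

The surviving points would then be fed to the prompt-driven segmentation model, and the resulting cloud masks used to exclude unreliable regions from feature matching.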

23 pages, 4237 KB  
Article
Debris-Flow Erosion Volume Estimation Using a Single High-Resolution Optical Satellite Image
by Peng Zhang, Shang Wang, Guangyao Zhou, Yueze Zheng, Kexin Li and Luyan Ji
Remote Sens. 2025, 17(14), 2413; https://doi.org/10.3390/rs17142413 - 12 Jul 2025
Abstract
Debris flows pose significant risks to mountainous regions, and quick, accurate volume estimation is crucial for hazard assessment and post-disaster response. Traditional volume estimation methods, such as ground surveys and aerial photogrammetry, are often limited by cost, accessibility, and timeliness. While remote sensing offers wide coverage, existing optical and Synthetic Aperture Radar (SAR)-based techniques face challenges in direct volume estimation due to resolution constraints and rapid terrain changes. This study proposes a Super-Resolution Shape from Shading (SRSFS) approach enhanced by a Non-local Piecewise-smooth albedo Constraint (NPC), hereafter referred to as NPC SRSFS, to estimate debris-flow erosion volume using single high-resolution optical satellite imagery. By integrating publicly available global Digital Elevation Model (DEM) data as prior terrain reference, the method enables accurate post-disaster topography reconstruction from a single optical image, thereby reducing reliance on stereo imagery. The NPC constraint improves the robustness of albedo estimation under heterogeneous surface conditions, enhancing depth recovery accuracy. The methodology is evaluated using Gaofen-6 satellite imagery, with quantitative comparisons to aerial Light Detection and Ranging (LiDAR) data. Results show that the proposed method achieves reliable terrain reconstruction and erosion volume estimates, with accuracy comparable to airborne LiDAR. This study demonstrates the potential of NPC SRSFS as a rapid, cost-effective alternative for post-disaster debris-flow assessment. Full article
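Once a post-event surface is reconstructed, erosion volume estimation reduces to DEM differencing; a minimal sketch (ignoring deposition and co-registration error, which a real workflow must handle):

```python
import numpy as np

def erosion_volume(dem_pre, dem_post, cell_size):
    """Erosion volume (m^3) from pre- and post-event DEMs in meters:
    sum of surface lowering times cell area. Cells where the surface
    rose (deposition) are clipped to zero in this simplified sketch."""
    drop = np.clip(dem_pre - dem_post, 0.0, None)
    return float(drop.sum() * cell_size ** 2)
```

Here `dem_pre` would come from the public global DEM used as prior terrain and `dem_post` from the shape-from-shading reconstruction.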
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

19 pages, 34272 KB  
Article
Sequential SAR-to-Optical Image Translation
by Jingbo Wei, Huan Zhou, Peng Ke, Yaobin Ma and Rongxin Tang
Remote Sens. 2025, 17(13), 2287; https://doi.org/10.3390/rs17132287 - 3 Jul 2025
Abstract
There is a common need for optical sequence images with high spatiotemporal resolution. As a solution, Synthetic Aperture Radar (SAR)-to-optical translation promises both high temporal continuity for optical images and low interpretation difficulty for SAR images. Existing studies have focused on converting a single SAR image into a single optical image, failing to utilize the advantage of repeated observations from SAR satellites. To make full use of periodic SAR images, we propose to investigate sequential SAR-to-optical translation, the first effort on this topic. To achieve this, a model based on a diffusion framework has been constructed, with twelve Transformer blocks utilized to capture spatial and temporal features alternately. A variational autoencoder is employed to encode and decode images, enabling the diffusion model to learn the distribution of features within optical image sequences. A conditional branch is specifically designed for SAR sequences to facilitate feature extraction. Additionally, the capture time is encoded and embedded into the Transformers. Two sequence datasets for the sequence translation task were created, comprising Sentinel-1 Ground Range Detected data and Sentinel-2 red/green/blue data. Our method was tested on the new datasets and compared with three state-of-the-art single-image translation methods. Quantitative and qualitative comparisons validate the effectiveness of the proposed method in maintaining radiometric and spectral consistency. Full article
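Encoding capture time for injection into Transformer blocks is commonly done with sinusoidal embeddings; the sketch below shows the standard construction (the paper's exact encoding is not specified in the abstract):

```python
import numpy as np

def time_embedding(t, dim):
    """Sinusoidal embedding of a scalar capture time (e.g. day of year,
    or seconds since a reference epoch), in the style of Transformer
    position encodings. dim must be even."""
    half = dim // 2
    freqs = 1.0 / (10000.0 ** (np.arange(half) / half))  # geometric frequencies
    ang = t * freqs
    return np.concatenate([np.sin(ang), np.cos(ang)])
```

Such a vector can be added to (or concatenated with) the latent tokens so the model can condition on irregular acquisition intervals.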
(This article belongs to the Special Issue SAR Images Processing and Analysis (2nd Edition))

24 pages, 12865 KB  
Article
Mapping Crop Types and Cropping Patterns Using Multiple-Source Satellite Datasets in Subtropical Hilly and Mountainous Region of China
by Yaoliang Chen, Zhiying Xu, Hongfeng Xu, Zhihong Xu, Dacheng Wang and Xiaojian Yan
Remote Sens. 2025, 17(13), 2282; https://doi.org/10.3390/rs17132282 - 3 Jul 2025
Abstract
A timely and accurate distribution of crop types and cropping patterns provides a crucial reference for the management of agriculture and food security. However, accurately mapping crop types and cropping patterns in subtropical hilly and mountainous areas often faces challenges such as mixed pixels resulting from fragmented patches and difficulty in obtaining optical satellite imagery due to a frequently cloudy and rainy climate. Here we propose a crop type and cropping pattern mapping framework for subtropical hilly and mountainous areas that considers multiple sources of satellite data (i.e., Landsat 8/9, Sentinel-2, Sentinel-1, and GF 1/2/7 images). To develop this framework, six types of variables from multi-source data were applied in a random forest classifier to map major summer crop types (single-cropped rice and double-cropped rice) and winter crop types (rapeseed). Multi-scale segmentation methods were applied to improve the boundaries of the classified results. The results show the following: (1) Each type of satellite data has at least one variable selected as an important feature for both winter and summer crop type classification. Apart from the endmember variables, the other five extracted variable types are selected by the RF classifier for both winter and summer crop classifications. (2) SAR data can capture the key information of summer crops when optical data are limited, and the addition of SAR data can significantly improve the accuracy for summer crop types. (3) The overall accuracy (OA) of both summer and winter crop type mapping exceeded 95%, with clear and relatively accurate cropland boundaries. Area evaluation showed a small bias in the classified areas of rapeseed, single-cropped rice, and double-cropped rice relative to statistical records. (4) Further visual examination of the spatial distribution showed better performance of the classified crop types compared to three existing products. The results suggest that the proposed method has great potential for accurately mapping crop types in a complex subtropical planting environment. Full article

21 pages, 2568 KB  
Article
Improved Flood Insights: Diffusion-Based SAR-to-EO Image Translation
by Minseok Seo, Jinwook Jung and Dong-Geol Choi
Remote Sens. 2025, 17(13), 2260; https://doi.org/10.3390/rs17132260 - 1 Jul 2025
Abstract
Floods, exacerbated by climate change, necessitate timely and accurate situational awareness to support effective disaster response. While electro-optical (EO) satellite imagery has been widely employed for flood assessment, its utility is significantly limited under conditions such as cloud cover or nighttime. Synthetic Aperture Radar (SAR) provides consistent imaging regardless of weather or lighting conditions but it remains challenging for human analysts to interpret. To bridge this modality gap, we present diffusion-based SAR-to-EO image translation (DSE), a novel framework designed specifically for enhancing the interpretability of SAR imagery in flood scenarios. Unlike conventional GAN-based approaches, our DSE leverages the Brownian Bridge Diffusion Model to achieve stable and high-fidelity EO synthesis. Furthermore, it integrates a self-supervised SAR denoising module to effectively suppress SAR-specific speckle noise, thereby improving the quality of the translated outputs. Quantitative experiments on the SEN12-FLOOD dataset show that our method improves PSNR by 3.23 dB and SSIM by 0.10 over conventional SAR-to-EO baselines. Additionally, a user study with SAR experts revealed that flood segmentation performance using synthetic EO (SynEO) paired with SAR was nearly equivalent to using true EO–SAR pairs, with only a 0.0068 IoU difference. These results confirm the practicality of the DSE framework as an effective solution for EO image synthesis and flood interpretation in SAR-only environments. Full article
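The PSNR figure quoted above is computed from the mean squared error between the translated and true EO images; for reference:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images with values
    in [0, peak]. Identical images give infinity."""
    mse = float(np.mean((ref.astype(float) - test.astype(float)) ** 2))
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

So the reported +3.23 dB gain corresponds to roughly halving the mean squared reconstruction error relative to the baselines.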
(This article belongs to the Special Issue Deep Learning Innovations in Remote Sensing)

26 pages, 6668 KB  
Article
Dark Ship Detection via Optical and SAR Collaboration: An Improved Multi-Feature Association Method Between Remote Sensing Images and AIS Data
by Fan Li, Kun Yu, Chao Yuan, Yichen Tian, Guang Yang, Kai Yin and Youguang Li
Remote Sens. 2025, 17(13), 2201; https://doi.org/10.3390/rs17132201 - 26 Jun 2025
Abstract
Dark ships, vessels deliberately disabling their AIS signals, constitute a grave maritime safety hazard, with detection efforts hindered by issues like over-reliance on AIS, inadequate surveillance coverage, and significant mismatch rates. This paper proposes an improved multi-feature association method that integrates satellite remote sensing and AIS data, with a focus on oriented bounding box course estimation, to improve the detection of dark ships and enhance maritime surveillance. Firstly, the oriented bounding box object detection model (YOLOv11n-OBB) is trained to break through the limitations of horizontal bounding box orientation representation. Secondly, by integrating position, dimensions (length and width), and course characteristics, we devise a joint cost function to evaluate the combined significance of multiple features. Subsequently, an advanced JVC global optimization algorithm is employed to ensure high-precision association in dense scenes. Finally, by integrating data from Gaofen-6 (optical) and Gaofen-3B (SAR) satellites, a day-and-night collaborative monitoring framework is constructed to address the blind spots of single-sensor monitoring during night-time or adverse weather conditions. Our results indicate that the detection model demonstrates a high average precision (AP50) of 0.986 on the optical dataset and 0.903 on the SAR dataset. The association accuracy of the multi-feature association algorithm is 91.74% in optical image and AIS data matching, and 91.33% in SAR image and AIS data matching. The association rate reaches 96.03% (optical) and 74.24% (SAR), respectively. This study provides an efficient technical tool for maritime safety regulation through multi-source data fusion and algorithm innovation. Full article
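The joint cost over position, dimensions, and course, followed by a globally optimal one-to-one assignment, can be sketched as follows; exhaustive search stands in for the JVC algorithm (feasible only for small scenes), and the weights and record fields are illustrative assumptions:

```python
import numpy as np
from itertools import permutations

def joint_cost(det, ais, w=(1.0, 1.0, 1.0)):
    """Combined position / length / course cost between one detected ship
    and one AIS record (dicts with 'xy', 'len', 'course' keys)."""
    d_pos = float(np.hypot(*np.subtract(det["xy"], ais["xy"])))
    d_len = abs(det["len"] - ais["len"])
    diff = abs(det["course"] - ais["course"]) % 360.0
    d_course = min(diff, 360.0 - diff)  # wrap-around course difference
    return w[0] * d_pos + w[1] * d_len + w[2] * d_course

def associate(dets, ais_list):
    """Globally optimal one-to-one association by exhaustive search over
    permutations; a small-scale stand-in for the JVC algorithm.
    Assumes len(dets) <= len(ais_list)."""
    cost = np.array([[joint_cost(d, a) for a in ais_list] for d in dets])
    best = min(permutations(range(len(ais_list))),
               key=lambda p: sum(cost[i, p[i]] for i in range(len(dets))))
    return list(best[:len(dets)])
```

Detections left unmatched, or matched only at prohibitive cost, are the dark-ship candidates.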

30 pages, 5702 KB  
Article
Monitoring Tropical Forest Disturbance and Recovery: A Multi-Temporal L-Band SAR Methodology from Annual to Decadal Scales
by Derek S. Tesser, Kyle C. McDonald, Erika Podest, Brian T. Lamb, Nico Blüthgen, Constance J. Tremlett, Felicity L. Newell, Edith Villa-Galaviz, H. Martin Schaefer and Raul Nieto
Remote Sens. 2025, 17(13), 2188; https://doi.org/10.3390/rs17132188 - 25 Jun 2025
Viewed by 1367
Abstract
Tropical forests harbor a significant portion of global biodiversity but are increasingly degraded by human activity. Assessing restoration efforts requires the systematic monitoring of tropical ecosystem status and recovery. Satellite-borne synthetic aperture radar (SAR) supports monitoring changes in vegetation structure and is of particular utility in tropical regions where clouds obscure optical satellite observations. To characterize tropical forest recovery in the Lowland Chocó Biodiversity Hotspot of Ecuador, we apply over a decade of dual-polarized (HH + HV) L-band SAR datasets from the Japan Aerospace Exploration Agency's (JAXA) PALSAR and PALSAR-2 sensors. We assess the complementarity of the dual-polarized imagery with less frequently available fully polarimetric imagery, particularly in the context of their respective temporal and informational trade-offs. We examine the radar image texture associated with the dual-pol radar vegetation index (DpRVI) to assess the associated determination of forest and nonforest areas in a topographically complex region, and we compare it with the performance of equivalent texture measures derived from the Freeman–Durden polarimetric decomposition classification scheme applied to the fully polarimetric data. The results demonstrate that employing a dual-polarimetric decomposition classification scheme and subsequently deriving the associated gray-level co-occurrence matrix mean from the DpRVI substantially improved the classification accuracy (from 88.2% to 97.2%). Through this workflow, we develop a new metric, the Radar Forest Regeneration Index (RFRI), and apply it to describe a chronosequence of a tropical forest recovering from naturally regenerating pasture and cacao plots. Our findings from the Lowland Chocó region are particularly relevant to the upcoming NASA-ISRO NISAR mission, which will enable the comprehensive characterization of vegetation structural parameters and significantly enhance the monitoring of biodiversity conservation efforts in tropical forest ecosystems.
(This article belongs to the Special Issue NISAR Global Observations for Ecosystem Science and Applications)
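As a concrete reference for the index at the center of this abstract's workflow, the DpRVI can be computed per pixel from the 2x2 dual-pol covariance matrix as DpRVI = 1 − mβ, where m is the degree of polarization and β the dominant normalized eigenvalue. The closed-form eigenvalue step below follows from the trace and determinant of a 2x2 Hermitian matrix; the array-based layout is an assumption, not the authors' code:

```python
import numpy as np

def dprvi(c11, c12, c22):
    """Dual-pol Radar Vegetation Index per pixel from the 2x2 covariance
    matrix C2 = [[c11, c12], [conj(c12), c22]] (e.g. HH-HV channels).
    DpRVI = 1 - m * beta, with m the degree of polarization and beta the
    dominant eigenvalue normalized by total power."""
    span = c11 + c22                            # total power, trace(C2)
    det = c11 * c22 - np.abs(c12) ** 2          # determinant of C2
    m = np.sqrt(np.clip(1.0 - 4.0 * det / span ** 2, 0.0, 1.0))  # degree of polarization
    lam1 = 0.5 * span * (1.0 + m)               # dominant eigenvalue of C2
    beta = lam1 / span                          # normalized dominant eigenvalue
    return 1.0 - m * beta
```

The index spans 0 for a fully polarized (single dominant scattering) return to 1 for fully depolarized volume scattering, which is why its texture separates structurally complex forest from nonforest.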

22 pages, 4380 KB  
Article
Utilization of Multisensor Satellite Data for Developing Spatial Distribution of Methane Emission on Rice Paddy Field in Subang, West Java
by Khalifah Insan Nur Rahmi, Parwati Sofan, Hilda Ayu Pratikasiwi, Terry Ayu Adriany, Dandy Aditya Novresiandi, Rendi Handika, Rahmat Arief, Helena Lina Susilawati, Wage Ratna Rohaeni, Destika Cahyana, Vidya Nahdhiyatul Fikriyah, Iman Muhardiono, Asmarhansyah, Shinichi Sobue, Kei Oyoshi, Goh Segami and Pegah Hashemvand Khiabani
Remote Sens. 2025, 17(13), 2154; https://doi.org/10.3390/rs17132154 - 23 Jun 2025
Viewed by 1983
Abstract
The Intergovernmental Panel on Climate Change (IPCC) guidelines are standardized and widely used to calculate methane (CH4) emissions from paddy fields. The emission factor (EF) is a key parameter in these guidelines, and it varies across locations both globally and regionally. However, limited studies have measured locally specific EFs (EFlocal) through on-site assessments and modeled their spatial distribution effectively. This study aims to investigate the potential of multisensor satellite data to develop a spatial model of CH4 emission estimation for rice paddy fields under different water management practices, i.e., continuous flooding (CF) and alternate wetting and drying (AWD), in Subang, West Java, Indonesia. The model employed the national EF (EFnational) and EFlocal using the IPCC guidelines. In this study, we employed multisensor satellite data to derive the key parameters for estimating CH4 emission, i.e., rice cultivation area, rice age, and EF. Optical high-resolution images were used to delineate the rice cultivation area, Sentinel-1 SAR imagery was used to identify transplanting and harvesting dates for rice age estimation, and ALOS-2/PALSAR-2 was used to map the water regime for determining the scaling factor of the EF. The closed-chamber method was used to measure the daily CH4 flux rate at the local sites. The results revealed spatial variability in CH4 emissions, ranging from 1–5 kg/crop/season to 20–30 kg/crop/season, depending on the water regime. Fields under CF exhibited higher CH4 emissions than those under AWD, underscoring the critical role of water management in mitigating CH4 emissions. This study demonstrates the feasibility of combining remote sensing data with the IPCC model to spatially estimate CH4 emissions, providing a robust framework for sustainable rice cultivation and greenhouse gas (GHG) mitigation strategies.
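The IPCC-style seasonal estimate underlying such a spatial model multiplies a water-regime-adjusted daily emission factor by the cultivation period and area. This minimal sketch follows that structure; the baseline EF and the AWD scaling factor used in the comparison are placeholders, not the study's measured EFlocal values:

```python
def seasonal_ch4(ef_baseline, sf_water, cultivation_days, area_ha):
    """Seasonal CH4 (kg) for one field, IPCC-style:
    adjusted daily EF (kg CH4/ha/day) x cultivation period (days) x area (ha).
    sf_water scales the baseline for the water regime: 1.0 for continuous
    flooding, below 1 for AWD (the 0.55 below is an illustrative value)."""
    return ef_baseline * sf_water * cultivation_days * area_ha

# Illustrative CF vs. AWD comparison on a 1 ha field over a 100-day season:
cf_kg = seasonal_ch4(1.3, 1.0, 100, 1.0)    # continuous flooding
awd_kg = seasonal_ch4(1.3, 0.55, 100, 1.0)  # alternate wetting and drying
```

Mapping the water regime from ALOS-2/PALSAR-2, as the study does, amounts to choosing `sf_water` per field or pixel before this multiplication, while Sentinel-1 transplanting and harvest dates supply `cultivation_days`.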
