Journal Description
Remote Sensing is an international, peer-reviewed, open access journal about the science and application of remote sensing technology, published semimonthly online by MDPI. The Remote Sensing Society of Japan (RSSJ) and the Japan Society of Photogrammetry and Remote Sensing (JSPRS) are affiliated with Remote Sensing, and their members receive a discount on the article processing charge.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), Ei Compendex, PubAg, GeoRef, Astrophysics Data System, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q1 (Geosciences, Multidisciplinary) / CiteScore - Q1 (General Earth and Planetary Sciences)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 24.9 days after submission; the time from acceptance to publication is 2.5 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Companion journal: Geomatics.
- Journal Cluster of Geospatial and Earth Sciences: Remote Sensing, Geosciences, Quaternary, Earth, Geographies, Geomatics and Fossil Studies.
Impact Factor: 4.1 (2024); 5-Year Impact Factor: 4.8 (2024)
Latest Articles
Improving the Accuracy of Seasonal Crop Coefficients in Grapevine from Sentinel-2 Data
Remote Sens. 2025, 17(19), 3365; https://doi.org/10.3390/rs17193365 - 4 Oct 2025
Abstract
Accurate assessment of a crop’s water requirement is essential for optimising irrigation scheduling and increasing the sustainability of water use. The crop coefficient (Kc) is a dimensionless factor that converts reference evapotranspiration (ET0) into actual crop evapotranspiration (ETc) and is widely used for irrigation scheduling. The Kc reflects canopy cover, phenology, and crop type/variety, but is difficult to measure directly in heterogeneous perennial systems, such as vineyards. Remote sensing (RS) products, especially open-source satellite imagery, offer a cost-effective solution at moderate spatial and temporal scales, although their application in vineyards has been relatively limited due to the large pixel size (~100 m2) relative to vine canopy size (~2 m2). This study aimed to improve grapevine Kc predictions using vegetation indices derived from harmonised Sentinel-2 imagery in combination with spectral unmixing, with ground data obtained from canopy light interception measurements in three winegrape cultivars (Shiraz, Cabernet Sauvignon, and Chardonnay) in the Barossa and Eden Valleys, South Australia. A linear spectral mixture analysis approach was taken, which required estimation of vine canopy cover through beta regression models to improve the accuracy of vegetation indices that were used to build the Kc prediction models. Unmixing improved the prediction of seasonal Kc values in Shiraz (R2 of 0.625, RMSE = 0.078, MAE = 0.063), Cabernet Sauvignon (R2 = 0.686, RMSE = 0.072, MAE = 0.055) and Chardonnay (R2 = 0.814, RMSE = 0.075, MAE = 0.059) compared to unmixed pixels. Furthermore, unmixing improved predictions during the early and late canopy growth stages when pixel variability was greater. Our findings demonstrate that integrating open-source satellite data with machine learning models and spectral unmixing can accurately reproduce the temporal dynamics of Kc values in vineyards. This approach was also shown to be transferable across cultivars and regions, providing a practical tool for crop monitoring and irrigation management in support of sustainable viticulture.
(This article belongs to the Special Issue Remote Sensing for Agricultural Water Management (RSAWM) (Second Edition))
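The abstract above rests on two simple building blocks: the FAO-style conversion ETc = Kc x ET0 and a two-endmember linear mixture of vine canopy and background within a Sentinel-2 pixel. The sketch below is a minimal illustration of those two steps, not the authors' code; the soil endmember value and the linear Kc regression are placeholder assumptions.

```python
# Minimal sketch (not the authors' code): linear unmixing of a pixel-level
# vegetation index into a vine-canopy component, then Kc -> ETc conversion.
# Endmember values and the Kc regression are illustrative assumptions.
import numpy as np

def unmix_index(vi_pixel, f_vine, vi_soil=0.15):
    """Recover the canopy-only index from a mixed Sentinel-2 pixel,
    assuming a two-endmember linear mixture: vi = f*vi_vine + (1-f)*vi_soil."""
    f_vine = np.clip(f_vine, 1e-3, 1.0)
    return (vi_pixel - (1.0 - f_vine) * vi_soil) / f_vine

def crop_et(kc, et0_mm_day):
    """FAO-56 style conversion: ETc = Kc * ET0."""
    return kc * et0_mm_day

vi_vine = unmix_index(vi_pixel=0.42, f_vine=0.35)   # mixed pixel, 35% canopy cover
kc_pred = 0.12 + 0.55 * vi_vine                     # placeholder linear Kc model
print(crop_et(kc_pred, et0_mm_day=6.0))             # daily vine water use (mm/day)
```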
Open Access Article
Estimating Fractional Land Cover Using Sentinel-2 and Multi-Source Data with Traditional Machine Learning and Deep Learning Approaches
by
Sergio Sierra, Rubén Ramo, Marc Padilla, Laura Quirós and Adolfo Cobo
Remote Sens. 2025, 17(19), 3364; https://doi.org/10.3390/rs17193364 - 4 Oct 2025
Abstract
Land cover mapping is essential for territorial management due to its links with ecological, hydrological, climatic, and socioeconomic processes. Traditional methods use discrete classes per pixel, but this study proposes estimating cover fractions with Sentinel-2 imagery (20 m) and AI. We employed the French Land cover from Aerospace ImageRy (FLAIR) dataset (810 km2 in France, 19 classes), with labels co-registered with Sentinel-2 to derive precise fractional proportions per pixel. From these references, we generated training sets combining spectral bands, derived indices, and auxiliary data (climatic and temporal variables). Various machine learning models—including XGBoost, three deep neural network (DNN) architectures with different depths, and convolutional neural networks (CNNs)—were trained and evaluated to identify the optimal configuration for fractional cover estimation. Model validation on the test set employed RMSE, MAE, and R2 metrics at both pixel level (20 m Sentinel-2) and scene level (100 m FLAIR). The training set integrating spectral bands, vegetation indices, and auxiliary variables yielded the best MAE and RMSE results. Among all models, DNN2 achieved the highest performance, with a pixel-level RMSE of 13.83 and MAE of 5.42, and a scene-level RMSE of 4.94 and MAE of 2.36. This fractional approach paves the way for advanced remote sensing applications, including continuous cover-change monitoring, carbon footprint estimation, and sustainability-oriented territorial planning.
(This article belongs to the Special Issue Multimodal Remote Sensing Data Fusion, Analysis and Application)
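As a rough illustration of how fractional reference labels and the reported error metrics can be produced, the sketch below aggregates a fine-resolution class map into per-pixel fractions and scores a prediction with RMSE and MAE. It is an assumption-level example, not the FLAIR processing chain.

```python
# Minimal sketch (illustrative, not the FLAIR pipeline): derive per-pixel class
# fractions by aggregating a fine-resolution label map into coarser cells, then
# score a prediction with RMSE / MAE as in the paper's evaluation.
import numpy as np

def label_fractions(labels, block=10, n_classes=19):
    """labels: (H, W) integer map at fine resolution; returns (H//block, W//block, n_classes)
    with the fraction of each class inside every block x block cell."""
    H, W = labels.shape
    tiles = labels[:H - H % block, :W - W % block].reshape(H // block, block, W // block, block)
    onehot = np.eye(n_classes)[tiles]        # (h, block, w, block, C)
    return onehot.mean(axis=(1, 3))          # per-cell class fractions

def rmse(pred, ref):
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def mae(pred, ref):
    return float(np.mean(np.abs(pred - ref)))

ref = label_fractions(np.random.randint(0, 19, (200, 200)))
pred = np.clip(ref + np.random.normal(0, 0.05, ref.shape), 0, 1)
print(rmse(100 * pred, 100 * ref), mae(100 * pred, 100 * ref))   # percent units
```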
Open Access Article
Estimating Gully Erosion Induced by Heavy Rainfall Events Using Stereoscopic Imagery and UAV LiDAR
by
Lu Wang, Yuan Qi, Wenwei Xie, Rui Yang, Xijun Wang, Shengming Zhou, Yanqing Dong and Xihong Lian
Remote Sens. 2025, 17(19), 3363; https://doi.org/10.3390/rs17193363 - 4 Oct 2025
Abstract
Gully erosion, driven by the interplay of natural processes and human activities, results in severe soil degradation and landscape alteration, yet approaches for accurately quantifying erosion triggered by extreme precipitation using multi-source high-resolution remote sensing remain limited. This study first extracted digital surface models (DSMs) for the years 2014 and 2024 using Ziyuan-3 and GaoFen-7 satellite stereo imagery, respectively. Subsequently, the DSMs were calibrated using high-resolution unmanned aerial vehicle photogrammetry data to enhance elevation accuracy. Based on the corrected DSMs, gully erosion depths from 2014 to 2024 were quantified. Erosion patches were identified through a deep learning framework applied to GaoFen-1 and GaoFen-2 imagery. The analysis further explored the influences of natural processes and anthropogenic activities on elevation changes within the gully erosion watershed. Topographic monitoring in the Sandu River watershed revealed a net elevation loss of 2.6 m over 2014–2024, with erosion depths up to 8 m in some sub-watersheds. Elevation changes are primarily driven by extreme precipitation-induced erosion alongside human activities, resulting in substantial spatial variability in surface lowering across the watershed. This approach provides a refined assessment of the spatial and temporal evolution of gully erosion, offering valuable insights for soil conservation and sustainable land management strategies in the Loess Plateau region.
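The core quantitative step described above is differencing two co-registered DSMs after a vertical calibration against UAV data. The following sketch shows that idea in its simplest form, with synthetic grids and a median-based bias correction standing in for the paper's calibration procedure.

```python
# Minimal sketch (assumption-level, not the authors' workflow): vertical change
# from two co-registered DSM rasters, with a simple bias correction from stable
# (unchanged) reference points such as UAV-surveyed bare surfaces.
import numpy as np

def erosion_depth(dsm_2014, dsm_2024, stable_mask=None):
    """Positive values = surface lowering (erosion) between the two epochs."""
    diff = dsm_2014 - dsm_2024
    if stable_mask is not None:
        diff -= np.nanmedian(diff[stable_mask])   # remove systematic elevation bias
    return diff

dsm_a = np.random.normal(1500, 5, (100, 100))     # placeholder DSM grids (m)
dsm_b = dsm_a - np.abs(np.random.normal(0.5, 1.0, dsm_a.shape))
depth = erosion_depth(dsm_a, dsm_b)
print(float(np.nanmean(depth)), float(np.nanmax(depth)))   # mean / max lowering (m)
```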
Open Access Article
FLDSensing: Remote Sensing Flood Inundation Mapping with FLDPLN
by
Jackson Edwards, Francisco J. Gomez, Son Kim Do, David A. Weiss, Jude Kastens, Sagy Cohen, Hamid Moradkhani, Venkataraman Lakshmi and Xingong Li
Remote Sens. 2025, 17(19), 3362; https://doi.org/10.3390/rs17193362 - 4 Oct 2025
Abstract
Flood inundation mapping (FIM), which is essential for effective disaster response and management, requires rapid and accurate delineation of flood extent and depth. Remote sensing FIM, especially using satellite imagery, offers certain capabilities and advantages, but also faces challenges such as cloud and canopy obstructions and flood depth estimation. This research developed a novel hybrid approach, named FLDSensing, which combines remote sensing imagery with the FLDPLN (pronounced “floodplain”) flood inundation model, to improve remote sensing FIM in both inundation extent and depth estimation. The method first identifies clean flood edge pixels (i.e., floodwater pixels next to bare ground), which, combined with the FLDPLN library, are used to estimate the water stages at certain stream pixels. Water stage is further interpolated and smoothed at additional stream pixels, which is then used with an FLDPLN library to generate flood extent and depth maps. The method was applied over the Verdigris River in Kansas to map the flood event that occurred in late May 2019, where Sentinel-2 imagery was used to generate remote sensing FIM and to identify clean water-edge pixels. The results show a significant improvement in FIM accuracy when compared to a HEC-RAS 2D (Version 6.5) benchmark, with the metrics of CSI/POD/FAR/F1-scores reaching 0.89/0.98/0.09/0.94 from 0.55/0.56/0.03/0.71 using remote sensing alone. The method also performed favorably against several existing hybrid approaches, including FLEXTH and FwDET 2.1. This study demonstrates that integrating remote sensing imagery with the FLDPLN model, which uniquely estimates stream stage through floodwater-edges, offers a more effective hybrid approach to enhancing remote sensing-based FIM.
(This article belongs to the Special Issue Multi-Source Remote Sensing Data in Hydrology and Water Management)
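The CSI/POD/FAR/F1 scores quoted above follow directly from hit, miss, and false-alarm counts between a predicted flood map and a benchmark map. A minimal sketch of that scoring, with placeholder arrays:

```python
# Minimal sketch: binary flood-map skill scores (CSI, POD, FAR, F1) computed from
# hits / misses / false alarms against a benchmark map. Arrays are placeholders.
import numpy as np

def fim_scores(pred, bench):
    hits = np.sum(pred & bench)            # flooded in both maps
    misses = np.sum(~pred & bench)         # flooded in benchmark only
    false_alarms = np.sum(pred & ~bench)   # flooded in prediction only
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    f1 = 2 * hits / (2 * hits + misses + false_alarms)
    return dict(CSI=csi, POD=pod, FAR=far, F1=f1)

pred = np.random.rand(500, 500) > 0.4
bench = np.random.rand(500, 500) > 0.4
print(fim_scores(pred, bench))
```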
Open Access Article
A Progressive Target-Aware Network for Drone-Based Person Detection Using RGB-T Images
by
Zhipeng He, Boya Zhao, Yuanfeng Wu, Yuyang Jiang and Qingzhan Zhao
Remote Sens. 2025, 17(19), 3361; https://doi.org/10.3390/rs17193361 - 4 Oct 2025
Abstract
Drone-based target detection using visible and thermal (RGB-T) images is critical in disaster rescue, intelligent transportation, and wildlife monitoring. However, persons typically occupy fewer pixels and exhibit more varied postures than vehicles or large animals, making them difficult to detect in unmanned aerial vehicle (UAV) remote sensing images with complex backgrounds. We propose a novel progressive target-aware network (PTANet) for person detection using RGB-T images. A global adaptive feature fusion module (GAFFM) is designed to fuse the texture and thermal features of persons. A progressive focusing strategy is used. Specifically, we incorporate a person segmentation auxiliary branch (PSAB) during training to enhance target discrimination, while a cross-modality background mask (CMBM) is applied in the inference phase to suppress irrelevant background regions. Extensive experiments demonstrate that the proposed PTANet achieves high accuracy and generalization performance, reaching 79.5%, 47.8%, and 97.3% mean average precision (mAP)@50 on three drone-based person detection benchmarks (VTUAV-det, RGBTDronePerson, and VTSaR), with only 4.72 M parameters. PTANet deployed on an embedded edge device with TensorRT acceleration and quantization achieves an inference speed of 11.177 ms (640 × 640 pixels), indicating its promising potential for real-time onboard person detection. The source code is publicly available on GitHub.
(This article belongs to the Special Issue Object Detection and Information Extraction Based on Remote Sensing Imagery (Second Edition))
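As a loose structural analogy to fusing visible and thermal features (not the published GAFFM), the sketch below gates two feature maps channel-wise before a detection head; all layer sizes are illustrative assumptions.

```python
# Minimal sketch (an assumption, not the paper's module): simple gated fusion of
# RGB and thermal feature maps, showing how two modalities can be merged.
import torch
import torch.nn as nn

class SimpleRGBTFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feat_rgb, feat_tir):
        x = torch.cat([feat_rgb, feat_tir], dim=1)
        g = self.gate(x)                              # per-pixel modality weight
        fused = g * feat_rgb + (1.0 - g) * feat_tir   # gated blend of modalities
        return fused + self.proj(x)                   # residual refinement

fused = SimpleRGBTFusion()(torch.randn(1, 64, 80, 80), torch.randn(1, 64, 80, 80))
print(fused.shape)   # torch.Size([1, 64, 80, 80])
```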
Open Access Review
Challenges and Opportunities in Predicting Future Beach Evolution: A Review of Processes, Remote Sensing, and Modeling Approaches
by
Thierry Garlan, Rafael Almar and Erwin W. J. Bergsma
Remote Sens. 2025, 17(19), 3360; https://doi.org/10.3390/rs17193360 - 4 Oct 2025
Abstract
This review synthesizes the current knowledge of the various natural and human-caused processes that influence the evolution of sandy beaches and explores ways to improve predictions. Short-term storm-driven dynamics have been extensively studied, but long-term changes remain poorly understood due to a limited grasp of non-wave drivers, outdated topo-bathymetric (land–sea continuum digital elevation model) data, and an absence of systematic uncertainty assessments. In this study, we classify and analyze the various drivers of beach change, including meteorological, oceanographic, geological, biological, and human influences, and we highlight their interactions across spatial and temporal scales. We place special emphasis on the role of remote sensing, detailing the capacities and limitations of optical, radar, lidar, unmanned aerial vehicle (UAV), video systems and satellite Earth observation for monitoring shoreline change, nearshore bathymetry (or seafloor), sediment dynamics, and ecosystem drivers. A case study from the Langue de Barbarie in Senegal, West Africa, illustrates the integration of in situ measurements, satellite observations, and modeling to identify local forcing factors. Based on this synthesis, we propose a structured framework for quantifying uncertainty that encompasses data, parameter, structural, and scenario uncertainties. We also outline ways to dynamically update nearshore bathymetry to improve predictive ability. Finally, we identify key challenges and opportunities for future coastal forecasting and emphasize the need for multi-sensor integration, hybrid modeling approaches, and holistic classifications that move beyond wave-only paradigms.
(This article belongs to the Special Issue Remote Sensing of Coastal Environment and Evolution: Progress, Challenges and Opportunities)
Open Access Article
Optimized Recognition Algorithm for Remotely Sensed Sea Ice in Polar Ship Path Planning
by
Li Zhou, Runxin Xu, Jiayi Bian, Shifeng Ding, Sen Han and Roger Skjetne
Remote Sens. 2025, 17(19), 3359; https://doi.org/10.3390/rs17193359 - 4 Oct 2025
Abstract
Collisions between ships and sea ice pose a significant threat to maritime safety, making it essential to detect sea ice and perform safety-oriented path planning for polar navigation. This paper utilizes an optimized You Only Look Once version 5 (YOLOv5) model, designated as YOLOv5-ICE, for the detection of sea ice in satellite imagery, with the resultant detection data being employed to input obstacle coordinates into a ship path planning system. The enhancements include the Squeeze-and-Excitation (SE) attention mechanism, improved spatial pyramid pooling, and the Flexible ReLU (FReLU) activation function. The improved YOLOv5-ICE shows enhanced performance, with its mAP increasing by 3.5% compared to the baseline YOLOv5 and also by 1.3% compared to YOLOv8. YOLOv5-ICE demonstrates robust performance in detecting small sea ice targets within large-scale satellite images and excels in high ice concentration regions. For path planning, the Any-Angle Path Planning on Grids algorithm is applied to simulate routes based on detected sea ice floes. The objective function incorporates the path length, number of ship turns, and sea ice risk value, enabling path planning under varying ice concentrations. By integrating detection and path planning, this work proposes a novel method to enhance navigational safety in polar regions.
(This article belongs to the Special Issue Remote Sensing of River and Lake Ice/Water Using Spaceborne, Airborne, and Ground Platforms)
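The path planning objective described above combines path length, the number of turns, and accumulated sea-ice risk. The sketch below shows one plausible form of such a composite cost for a grid route; the weights and risk sampling are assumptions, not the paper's exact formulation.

```python
# Minimal sketch (illustrative weights): a composite objective for ice-aware path
# planning, combining route length, number of course changes, and sampled ice risk.
import numpy as np

def route_cost(waypoints, ice_risk, w_len=1.0, w_turn=0.2, w_risk=2.0):
    """waypoints: (N, 2) grid coordinates; ice_risk: 2D array of per-cell risk."""
    pts = np.asarray(waypoints, dtype=float)
    segs = np.diff(pts, axis=0)
    length = np.sum(np.linalg.norm(segs, axis=1))
    headings = np.arctan2(segs[:, 1], segs[:, 0])
    turns = np.sum(np.abs(np.diff(headings)) > 1e-6)        # count of course changes
    risk = sum(ice_risk[int(r), int(c)] for r, c in pts)    # risk sampled at waypoints
    return w_len * length + w_turn * turns + w_risk * risk

risk_map = np.random.rand(100, 100)
print(route_cost([(0, 0), (20, 10), (40, 10), (60, 35)], risk_map))
```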
Open Access Article
2C-Net: A Novel Spatiotemporal Dual-Channel Network for Soil Organic Matter Prediction Using Multi-Temporal Remote Sensing and Environmental Covariates
by
Jiale Geng, Chong Luo, Jun Lu, Depiao Kong, Xue Li and Huanjun Liu
Remote Sens. 2025, 17(19), 3358; https://doi.org/10.3390/rs17193358 - 3 Oct 2025
Abstract
Soil organic matter (SOM) is essential for ecosystem health and agricultural productivity. Accurate prediction of SOM content is critical for modern agricultural management and sustainable soil use. Existing digital soil mapping (DSM) models, when processing temporal data, primarily focus on modeling the changes in input data across successive time steps. However, they do not adequately model the relationships among different input variables, which hinders the capture of complex data patterns and limits the accuracy of predictions. To address this problem, this paper proposes a novel deep learning model, 2-Channel Network (2C-Net), leveraging sequential multi-temporal remote sensing images to improve SOM prediction. The network separates input data into temporal and spatial data, processing them through independent temporal and spatial channels. Temporal data includes multi-temporal Sentinel-2 spectral reflectance, while spatial data consists of environmental covariates including climate and topography. The Multi-sequence Feature Fusion Module (MFFM) is proposed to globally model spectral data across multiple bands and time steps, and the Diverse Convolutional Architecture (DCA) extracts spatial features from environmental data. Experimental results show that 2C-Net outperforms the baseline model (CNN-LSTM) and mainstream machine learning models for DSM, with R2 = 0.524, RMSE = 0.884 (%), MAE = 0.581 (%), and MSE = 0.781 (%)2. Furthermore, this study demonstrates the significant importance of sequential spectral data for the inversion of SOM content and concludes the following: for the SOM inversion task, the bare soil period after tilling is a more important time window than other bare soil periods. The 2C-Net model effectively captures spatiotemporal features, offering high-accuracy SOM predictions and supporting future DSM and soil management.
(This article belongs to the Special Issue Remote Sensing in Soil Organic Carbon Dynamics)
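As a structural analogy to the dual-channel idea (separate temporal and spatial streams fused for one SOM estimate), here is a minimal PyTorch skeleton; the GRU encoder, layer sizes, and covariate count are assumptions rather than the published 2C-Net.

```python
# Minimal sketch (a structural analogy, not 2C-Net): two independent channels,
# a sequence encoder for multi-temporal spectra and a small MLP for static
# environmental covariates, fused for a single SOM regression output.
import torch
import torch.nn as nn

class TwoChannelSOM(nn.Module):
    def __init__(self, n_bands=10, n_steps=6, n_covariates=8, hidden=64):
        super().__init__()
        self.temporal = nn.GRU(input_size=n_bands, hidden_size=hidden, batch_first=True)
        self.spatial = nn.Sequential(nn.Linear(n_covariates, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, spectra_seq, covariates):
        _, h = self.temporal(spectra_seq)    # spectra_seq: (B, n_steps, n_bands)
        t_feat = h.squeeze(0)                # last hidden state per sample
        s_feat = self.spatial(covariates)    # covariates: (B, n_covariates)
        return self.head(torch.cat([t_feat, s_feat], dim=1)).squeeze(1)

model = TwoChannelSOM()
print(model(torch.randn(4, 6, 10), torch.randn(4, 8)).shape)   # torch.Size([4])
```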
Open Access Article
Optimizing Ecosystem Service Patterns with Dynamic Bayesian Networks for Sustainable Land Management Under Climate Change: A Case Study in China’s Sanjiangyuan Region
by
Qingmin Cheng, Xiaofeng Liu, Xiaowen Han, Jiayuan Yin, Junji Li, Xue Cheng, Hucheng Li, Qinyi Huang, Yuefeng Wang, Haotian You, Zhiwei Wang and Jianjun Chen
Remote Sens. 2025, 17(19), 3357; https://doi.org/10.3390/rs17193357 - 3 Oct 2025
Abstract
Identifying suitable areas for ecosystem services (ES) development is essential for balancing economic growth with environmental sustainability in ecologically fragile regions. However, existing studies often neglect integrating future climate and socioeconomic drivers into ES optimization, hindering the design of robust strategies for sustainable resource management. In this study, we propose a novel framework integrating the System Dynamics (SD) model, the Patch-based Land Use Simulation (PLUS) model, the Integrated Valuation of Ecosystem Services and Trade-offs (InVEST) model, and the Dynamic Bayesian Network (DBN) to optimize ES patterns in the Sanjiangyuan region under three climate scenarios (SSP126, SSP245, and SSP585) from 2030 to 2060. Our results show the following: (1) Ecological land (forest) expanded by 0.86% under SSP126, but declined by 11.54% under SSP585 due to unsustainable land use intensification. (2) SSP126 emerged as the optimal scenario for ES sustainability, increasing carbon storage and sequestration, habitat quality, and water conservation by 3.2%, 1%, and 1.4%, respectively, compared to SSP585. (3) The central part of the Sanjiangyuan region, characterized by gentle topography and adequate rainfall, was identified as a priority zone for ES development. This study provides a transferable framework for aligning ecological conservation with low-carbon transitions in global biodiversity hotspots.
(This article belongs to the Section Ecological Remote Sensing)
Open Access Article
Research on Rice Field Identification Methods in Mountainous Regions
by
Yuyao Wang, Jiehai Cheng, Zhanliang Yuan and Wenqian Zang
Remote Sens. 2025, 17(19), 3356; https://doi.org/10.3390/rs17193356 - 2 Oct 2025
Abstract
Rice is one of the most important staple crops in China, and the rapid and accurate extraction of rice planting areas plays a crucial role in agricultural management and food security assessment. However, existing rice field identification methods face significant challenges in mountainous regions due to severe cloud contamination, insufficient utilization of multi-dimensional features, and limited classification accuracy. This study presents a novel rice field identification method based on Graph Convolutional Networks (GCNs) that effectively integrates multi-source remote sensing data tailored to complex mountainous terrain. A coarse-to-fine cloud removal strategy was developed by fusing synthetic aperture radar (SAR) imagery with temporally adjacent optical remote sensing imagery, achieving high cloud removal accuracy and thereby providing reliable, clear optical data for subsequent rice mapping. A comprehensive multi-feature library comprising spectral, texture, polarization, and terrain attributes was constructed and optimized via a stepwise selection process, from which 19 key features were retained to enhance classification performance. The proposed method achieved an overall accuracy of 98.3% for rice field identification in Huoshan County of the Dabie Mountains, and a 96.8% consistency with statistical yearbook data. Ablation experiments demonstrated that incorporating terrain features substantially improved rice field identification accuracy under complex topographic conditions. Comparative evaluations against support vector machine (SVM), random forest (RF), and U-Net models confirmed the superiority of the proposed method in terms of accuracy, local performance, terrain adaptability, training sample requirements, and computational cost, demonstrating its effectiveness and applicability for high-precision rice field distribution mapping in mountainous environments.
(This article belongs to the Special Issue Precision Agriculture and Crop Monitoring Based on Remote Sensing Methods)
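A much-simplified version of the cloud removal idea, substituting cloud-masked optical pixels with values from a temporally adjacent acquisition, is sketched below; the actual method is a coarse-to-fine SAR-optical fusion, so treat this only as an illustration.

```python
# Minimal sketch (an assumption, far simpler than the paper's strategy): fill
# cloud-masked optical pixels with a temporally adjacent acquisition of the scene.
import numpy as np

def fill_clouds(target, cloud_mask, reference):
    """target, reference: (bands, H, W) reflectance; cloud_mask: (H, W) bool."""
    filled = target.copy()
    filled[:, cloud_mask] = reference[:, cloud_mask]   # substitute adjacent-date pixels
    return filled

t0 = np.random.rand(4, 256, 256).astype(np.float32)   # cloudy acquisition
t1 = np.random.rand(4, 256, 256).astype(np.float32)   # adjacent clear acquisition
mask = np.random.rand(256, 256) > 0.8                 # ~20% cloud cover
filled = fill_clouds(t0, mask, t1)
print(float(mask.mean()))                              # fraction of pixels replaced
```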
Open Access Article
Assessing the Sensitivity of Snow Depth Retrieval Algorithms to Inter-Sensor Brightness Temperature Differences
by
Guangjin Liu, Lingmei Jiang, Huizhen Cui, Jinmei Pan, Jianwei Yang and Min Wu
Remote Sens. 2025, 17(19), 3355; https://doi.org/10.3390/rs17193355 - 2 Oct 2025
Abstract
Passive microwave remote sensing provides indispensable observations for constructing long-term snow depth records, which are critical for climatology, hydrology, and operational applications. Nevertheless, despite decades of snow depth monitoring, systematic evaluations of how inter-sensor brightness temperature differences (TBDs) propagate into retrieval uncertainties are still lacking. In this study, TBDs between DMSP-F18/SSMIS, FY-3D/MWRI, and AMSR2 sensors were quantified, and the sensitivity of seven snow depth retrieval algorithms to these discrepancies was systematically assessed. The results indicate that TBDs between SSMIS and AMSR2 are larger than those between MWRI and AMSR2, likely reflecting variations in sensor specifications such as frequency, observation angle, and overpass time. In terms of algorithm sensitivity, SPD, WESTDC, FY-3B, and FY-3D demonstrate lower sensitivity across sensors, with standard deviations of snow depth differences generally below 2 cm. In contrast, the Foster algorithm exhibits pronounced sensitivity to TBDs, with standard deviations exceeding 11 cm and snow depth differences reaching over 20 cm in heavily forested regions (forest fraction >90%). This study provides guidance for SWE virtual constellation design and algorithm selection, supporting long-term, seamless, and consistent snow depth retrievals.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
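The quantity at the center of this study, the inter-sensor brightness temperature difference (TBD), reduces to simple statistics over matched observations. A minimal sketch with synthetic matchups:

```python
# Minimal sketch: summarising inter-sensor brightness-temperature differences (TBDs)
# from matched observations. Input arrays are illustrative placeholders.
import numpy as np

def tbd_stats(tb_sensor_a, tb_sensor_b):
    """Matched brightness temperatures (K) for the same channel and location."""
    d = np.asarray(tb_sensor_a) - np.asarray(tb_sensor_b)
    return {"mean_bias_K": float(d.mean()),
            "std_K": float(d.std()),
            "rmsd_K": float(np.sqrt((d ** 2).mean()))}

tb_amsr2 = 240 + 10 * np.random.rand(5000)               # e.g. one channel, 5000 matchups
tb_ssmis = tb_amsr2 + np.random.normal(1.2, 1.5, 5000)   # offset + noise vs. reference
print(tbd_stats(tb_ssmis, tb_amsr2))
```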
Open Access Article
Spatiotemporal Analysis of Vineyard Dynamics: UAS-Based Monitoring at the Individual Vine Scale
by
Stefan Ruess, Gernot Paulus and Stefan Lang
Remote Sens. 2025, 17(19), 3354; https://doi.org/10.3390/rs17193354 - 2 Oct 2025
Abstract
The rapid and reliable acquisition of canopy-related metrics is essential for improving decision support in viticultural management, particularly when monitoring individual vines for targeted interventions. This study presents a spatially explicit workflow that integrates Uncrewed Aerial System (UAS) imagery, 3D point-cloud analysis, and Object-Based Image Analysis (OBIA) to detect and monitor individual grapevines throughout the growing season. Vines are identified directly from 3D point clouds without the need for prior training data or predefined row structures, achieving a mean Euclidean distance of 10.7 cm to the reference points. The OBIA framework segments vine vegetation based on spectral and geometric features without requiring pre-clipping or manual masking. All non-vine elements—including soil, grass, and infrastructure—are automatically excluded, and detailed canopy masks are created for each plant. Vegetation indices are computed exclusively from vine canopy objects, ensuring that soil signals and internal canopy gaps do not bias the results. This enables accurate per-vine assessment of vigour. NDRE values were calculated at three phenological stages—flowering, veraison, and harvest—and analyzed using Local Indicators of Spatial Association (LISA) to detect spatial clusters and outliers. In contrast to value-based clustering methods, LISA accounts for spatial continuity and neighborhood effects, allowing the detection of stable low-vigour zones, expanding high-vigour clusters, and early identification of isolated stressed vines. A strong correlation (R2 = 0.73) between per-vine NDRE values and actual yield demonstrates that NDRE-derived vigour reliably reflects vine productivity. The method provides a transferable, data-driven framework for site-specific vineyard management, enabling timely interventions at the individual plant level before stress propagates spatially.
(This article belongs to the Special Issue Remote and Proximal Sensing for Precision Agriculture and Viticulture (2nd Edition))
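Two of the reported quantities are easy to reproduce in miniature: the NDRE index, (NIR - RedEdge) / (NIR + RedEdge), and an R2 between per-vine NDRE and yield. The sketch below uses synthetic values and is not the OBIA/LISA workflow itself.

```python
# Minimal sketch (not the full OBIA/LISA workflow): per-vine NDRE from canopy-only
# reflectance and a simple R^2 against yield, mirroring the reported vigour-yield check.
import numpy as np

def ndre(nir, red_edge):
    return (nir - red_edge) / (nir + red_edge + 1e-9)

def r_squared(x, y):
    r = np.corrcoef(x, y)[0, 1]
    return float(r ** 2)

rng = np.random.default_rng(0)
per_vine_ndre = np.array([ndre(rng.uniform(0.3, 0.6), rng.uniform(0.1, 0.3))
                          for _ in range(120)])
yield_kg = 2.0 + 6.0 * per_vine_ndre + rng.normal(0, 0.4, 120)   # synthetic yield
print(r_squared(per_vine_ndre, yield_kg))
```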
Open Access Article
Comparative Evaluation of SNO and Double Difference Calibration Methods for FY-3D MERSI TIR Bands Using MODIS/Aqua as Reference
by
Shufeng An, Fuzhong Weng, Xiuzhen Han and Chengzhi Ye
Remote Sens. 2025, 17(19), 3353; https://doi.org/10.3390/rs17193353 - 2 Oct 2025
Abstract
Radiometric consistency across satellite platforms is fundamental to producing high-quality Climate Data Records (CDRs). Because different cross-calibration methods have distinct advantages and limitations, comparative evaluation is necessary to ensure record accuracy. This study presents a comparative assessment of two widely applied calibration approaches—Simultaneous Nadir Overpass (SNO) and Double Difference (DD)—for the thermal infrared (TIR) bands of FY-3D MERSI. MODIS/Aqua serves as the reference sensor, while radiative transfer simulations driven by ERA5 inputs are generated with the Advanced Radiative Transfer Modeling System (ARMS) to support the analysis. The results show that SNO performs effectively when matchup samples are sufficiently large and globally representative but is less applicable under sparse temporal sampling or orbital drift. In contrast, the DD method consistently achieves higher calibration accuracy for MERSI Bands 24 and 25 under clear-sky conditions. It reduces mean biases from ~−0.5 K to within ±0.1 K and lowers RMSE from ~0.6 K to 0.3–0.4 K during 2021–2022. Under cloudy conditions, DD tends to overcorrect because coefficients derived from clear-sky simulations are not directly transferable to cloud-covered scenes, whereas SNO remains more stable though less precise. Overall, the results suggest that the two methods exhibit complementary strengths, with DD being preferable for high-accuracy calibration in clear-sky scenarios and SNO offering greater stability across variable atmospheric conditions. Future work will validate both methods under varied surface and atmospheric conditions and extend their use to additional sensors and spectral bands.
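The double-difference idea can be written as DD = (O - B) for the monitored sensor minus (O - B) for the reference sensor, where O is the observed and B the simulated brightness temperature, so that common radiative-transfer error cancels. A small numerical sketch with placeholder values:

```python
# Minimal sketch of the double-difference (DD) principle; all values are synthetic.
import numpy as np

def double_difference(obs_mersi, sim_mersi, obs_modis, sim_modis):
    """DD = (O - B)_MERSI - (O - B)_MODIS, in kelvin."""
    return (obs_mersi - sim_mersi) - (obs_modis - sim_modis)

obs_mersi = 260 + np.random.rand(1000)
sim_mersi = obs_mersi + np.random.normal(-0.5, 0.3, 1000)   # ~0.5 K obs-minus-sim bias
obs_modis = 260 + np.random.rand(1000)
sim_modis = obs_modis + np.random.normal(0.0, 0.3, 1000)
dd = double_difference(obs_mersi, sim_mersi, obs_modis, sim_modis)
print(float(dd.mean()))    # relative calibration bias of MERSI against MODIS
```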
Open Access Article
Generating the 500 m Global Satellite Vegetation Productivity Phenology Product from 2001 to 2020
by
Boyu Ren, Yunfeng Cao, Jiaxin Tian, Shunlin Liang and Meng Yu
Remote Sens. 2025, 17(19), 3352; https://doi.org/10.3390/rs17193352 - 2 Oct 2025
Abstract
Accurate monitoring of vegetation phenology is vital for understanding climate change impacts on terrestrial ecosystems. While global vegetation greenness phenology (VGP) products are widely available, vegetation productivity phenology (VPP), which better reflects ecosystems’ carbon dynamics, remains largely inaccessible. This study introduces a novel global 500 m VPP dataset (GLASS VPP) from 2001 to 2020, derived from the GLASS gross primary productivity (GPP) product. Validation against three ground-based datasets—Fluxnet 2015, PhenoCam V2.0, and PEP725—demonstrated the dataset’s superior accuracy. Compared to the widely used MCD12Q2 VGP product, GLASS VPP reduced RMSE and bias by 35% and 63%, respectively, when validated against Fluxnet data. It also showed stronger correlations than MCD12Q2 when compared with PhenoCam (195 sites) and PEP725 (99 sites) observations, and it captured spatial and altitudinal phenology patterns more effectively. Overall, GLASS VPP exhibits higher spatial integrity, stronger ecological interpretability, and improved consistency with ground observations, making it a valuable dataset for phenology modeling, carbon cycle research, and ecological forecasting under climate change.
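Productivity phenology metrics such as start and end of season are typically read off an annual GPP curve with an amplitude threshold. The sketch below uses that generic threshold approach on an idealised curve; it is an assumption, not the GLASS VPP retrieval.

```python
# Minimal sketch (generic threshold method, not the GLASS VPP algorithm): start and
# end of the productivity season from an annual GPP curve.
import numpy as np

def productivity_phenology(gpp, doy, threshold=0.3):
    """gpp, doy: 1D arrays over one year; returns (SOS, EOS) in day-of-year."""
    gmin, gmax = gpp.min(), gpp.max()
    level = gmin + threshold * (gmax - gmin)      # fraction of seasonal amplitude
    above = np.where(gpp >= level)[0]
    return int(doy[above[0]]), int(doy[above[-1]])

doy = np.arange(1, 366, 8)                                 # 8-day composites
gpp = np.exp(-0.5 * ((doy - 200) / 45.0) ** 2) * 9.0       # idealised seasonal GPP
print(productivity_phenology(gpp, doy))                    # (SOS, EOS)
```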
Open Access Article
ILF-BDSNet: A Compressed Network for SAR-to-Optical Image Translation Based on Intermediate-Layer Features and Bio-Inspired Dynamic Search
by
Yingying Kong and Cheng Xu
Remote Sens. 2025, 17(19), 3351; https://doi.org/10.3390/rs17193351 - 1 Oct 2025
Abstract
Synthetic aperture radar (SAR) exhibits all-day and all-weather capabilities, granting it significant application in remote sensing. However, interpreting SAR images requires extensive expertise, making SAR-to-optical remote sensing image translation a crucial research direction. While conditional generative adversarial networks (CGANs) have demonstrated exceptional performance in image translation tasks, their massive number of parameters poses substantial challenges. Therefore, this paper proposes ILF-BDSNet, a compressed network for SAR-to-optical image translation. Specifically, first, standard convolutions in the feature-transformation module of the teacher network are replaced with depthwise separable convolutions to construct the student network, and a dual-resolution collaborative discriminator based on PatchGAN is proposed. Next, knowledge distillation based on intermediate-layer features and channel pruning via weight sharing are designed to train the student network. Then, the bio-inspired dynamic search of channel configuration (BDSCC) algorithm is proposed to efficiently select the optimal subnet. Meanwhile, the pixel-semantic dual-domain alignment loss function is designed. The feature-matching loss within this function establishes an alignment mechanism based on intermediate-layer features from the discriminator. Extensive experiments demonstrate the superiority of ILF-BDSNet, which significantly reduces the number of parameters and computational complexity while still generating high-quality optical images, providing an efficient solution for SAR image translation in resource-constrained environments.
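The main compression step named above, replacing standard convolutions with depthwise separable ones, is easy to illustrate; the parameter comparison below uses an arbitrary 256-channel layer.

```python
# Minimal sketch of the substitution described in the abstract: a depthwise
# separable convolution (depthwise + pointwise) versus a standard convolution.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=padding, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

std = nn.Conv2d(256, 256, 3, padding=1)
sep = DepthwiseSeparableConv(256, 256)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))   # ~590k vs ~68k parameters for the same tensor shapes
```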
Open Access Article
PyGEE-ST-MEDALUS: AI Spatiotemporal Framework Integrating MODIS and Sentinel-1/-2 Data for Desertification Risk Assessment in Northeastern Algeria
by
Zakaria Khaldi, Jingnong Weng, Franz Pablo Antezana Lopez, Guanhua Zhou, Ilyes Ghedjatti and Aamir Ali
Remote Sens. 2025, 17(19), 3350; https://doi.org/10.3390/rs17193350 - 1 Oct 2025
Abstract
Desertification threatens the sustainability of dryland ecosystems, yet many existing monitoring frameworks rely on static maps, coarse spatial resolution, or lack temporal forecasting capacity. To address these limitations, this study introduces PyGEE-ST-MEDALUS, a novel spatiotemporal framework combining the full MEDALUS desertification model with deep learning (CNN, LSTM, DeepMLP) and machine learning (RF, XGBoost, SVM) techniques on the Google Earth Engine (GEE) platform. Applied across Tebessa Province, Algeria (2001–2028), the framework integrates MODIS and Sentinel-1/-2 data to compute four core indices—climatic, soil, vegetation, and land management quality—and create the Desertification Sensitivity Index (DSI). Unlike prior studies that focus on static or spatial-only MEDALUS implementations, PyGEE-ST-MEDALUS introduces scalable, time-series forecasting, yielding superior predictive performance (R2 ≈ 0.96; RMSE < 0.03). Over 71% of the region was classified as having high to very high sensitivity, driven by declining vegetation and thermal stress. Comparative analysis confirms that this study advances the state-of-the-art by integrating interpretable AI, near-real-time satellite analytics, and full MEDALUS indicators into one cloud-based pipeline. These contributions make PyGEE-ST-MEDALUS a transferable, efficient decision-support tool for identifying degradation hotspots, supporting early warning systems, and enabling evidence-based land management in dryland regions.
(This article belongs to the Special Issue Integrating Remote Sensing, Machine Learning, and Process-Based Modelling for Monitoring Environmental and Agricultural Landscapes Under Climate Change)
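The MEDALUS-style sensitivity index referred to above is conventionally the geometric mean of the four quality indices. A minimal sketch on synthetic index grids (value ranges are illustrative):

```python
# Minimal sketch of a MEDALUS-style Desertification Sensitivity Index: the geometric
# mean of the climate, soil, vegetation, and management quality indices.
import numpy as np

def desertification_sensitivity_index(cqi, sqi, vqi, mqi):
    """DSI = (CQI * SQI * VQI * MQI) ** (1/4); inputs typically range ~1 (good) to 2 (poor)."""
    return (cqi * sqi * vqi * mqi) ** 0.25

cqi = np.random.uniform(1.0, 2.0, (50, 50))   # climatic quality index grid
sqi = np.random.uniform(1.0, 2.0, (50, 50))   # soil quality index grid
vqi = np.random.uniform(1.0, 2.0, (50, 50))   # vegetation quality index grid
mqi = np.random.uniform(1.0, 2.0, (50, 50))   # land-management quality index grid
dsi = desertification_sensitivity_index(cqi, sqi, vqi, mqi)
print(float(dsi.mean()), float((dsi > 1.5).mean()))   # mean DSI and high-sensitivity share
```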
Open Access Article
LiteSAM: Lightweight and Robust Feature Matching for Satellite and Aerial Imagery
by
Boya Wang, Shuo Wang, Yibin Han, Linfeng Xu and Dong Ye
Remote Sens. 2025, 17(19), 3349; https://doi.org/10.3390/rs17193349 - 1 Oct 2025
Abstract
We present a (Light)weight (S)atellite–(A)erial feature (M)atching framework (LiteSAM) for robust UAV absolute visual localization (AVL) in GPS-denied environments. Existing satellite–aerial matching methods struggle with large appearance variations, texture-scarce regions, and limited efficiency for real-time UAV applications. LiteSAM integrates three key components to address these issues. First, efficient multi-scale feature extraction optimizes representation, reducing inference latency for edge devices. Second, a Token Aggregation–Interaction Transformer (TAIFormer) with a convolutional token mixer (CTM) models inter- and intra-image correlations, enabling robust global–local feature fusion. Third, a MinGRU-based dynamic subpixel refinement module adaptively learns spatial offsets, enhancing subpixel-level matching accuracy and cross-scenario generalization. The experiments show that LiteSAM achieves competitive performance across multiple datasets. On UAV-VisLoc, LiteSAM attains an RMSE@30 of 17.86 m, outperforming state-of-the-art semi-dense methods such as EfficientLoFTR. Its optimized variant, LiteSAM (opt., without dual softmax), delivers inference times of 61.98 ms on standard GPUs and 497.49 ms on NVIDIA Jetson AGX Orin, which are 22.9% and 19.8% faster than EfficientLoFTR (opt.), respectively. With 6.31M parameters, which is 2.4× fewer than EfficientLoFTR’s 15.05M, LiteSAM proves to be suitable for edge deployment. Extensive evaluations on natural image matching and downstream vision tasks confirm its superior accuracy and efficiency for general feature matching.
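As a generic point of reference for what a satellite-aerial matcher must do (not LiteSAM itself), the sketch below performs mutual nearest-neighbour matching on two descriptor sets; descriptor dimensions and counts are arbitrary.

```python
# Minimal sketch (a generic baseline, not LiteSAM): mutual nearest-neighbour matching
# of two descriptor sets, the basic operation that dense matchers refine and accelerate.
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """desc_a: (N, D), desc_b: (M, D) L2-normalised descriptors; returns index pairs."""
    sim = desc_a @ desc_b.T                        # cosine similarity matrix
    nn_ab = sim.argmax(axis=1)                     # best b for each a
    nn_ba = sim.argmax(axis=0)                     # best a for each b
    keep = nn_ba[nn_ab] == np.arange(len(desc_a))  # keep only mutual agreements
    return np.stack([np.arange(len(desc_a))[keep], nn_ab[keep]], axis=1)

rng = np.random.default_rng(1)
a = rng.normal(size=(300, 128)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(280, 128)); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(mutual_nn_matches(a, b).shape)
```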
Open Access Article
SS3L: Self-Supervised Spectral–Spatial Subspace Learning for Hyperspectral Image Denoising
by
Yinhu Wu, Dongyang Liu and Junping Zhang
Remote Sens. 2025, 17(19), 3348; https://doi.org/10.3390/rs17193348 - 1 Oct 2025
Abstract
Hyperspectral imaging (HSI) systems often suffer from complex noise degradation during the imaging process, significantly impacting downstream applications. Deep learning-based methods, though effective, rely on impractical paired training data, while traditional model-based methods require manually tuned hyperparameters and lack generalization. To address these issues, we propose SS3L (Self-Supervised Spectral-Spatial Subspace Learning), a novel HSI denoising framework that requires neither paired data nor manual tuning. Specifically, we introduce a self-supervised spectral–spatial paradigm that learns noisy features from noisy data, rather than paired training data, based on spatial geometric symmetry and spectral local consistency constraints. To avoid manual hyperparameter tuning, we propose an adaptive rank subspace representation and a loss function designed based on the collaborative integration of spectral and spatial losses via noise-aware spectral-spatial weighting, guided by the estimated noise intensity. These components jointly enable a dynamic trade-off between detail preservation and noise reduction under varying noise levels. The proposed SS3L embeds noise-adaptive subspace representations into the dynamic spectral–spatial hybrid loss-constrained network, enabling cross-sensor denoising through prior-informed self-supervision. Experimental results demonstrate that SS3L effectively removes noise while preserving both structural fidelity and spectral accuracy under diverse noise conditions.
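The classical operation behind spectral subspace representations is a low-rank projection of the spectral dimension. The sketch below shows that baseline with a fixed rank via SVD; SS3L's contribution is to make the rank adaptive and to learn the representation self-supervised, which this example does not attempt.

```python
# Minimal sketch (classical low-rank subspace projection, not the SS3L network):
# project a hyperspectral cube onto its leading spectral subspace via SVD.
import numpy as np

def spectral_subspace_denoise(cube, rank=8):
    """cube: (H, W, B) hyperspectral image; returns its rank-r spectral approximation."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)                          # pixels x bands
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # keep the leading spectral modes
    return X_low.reshape(H, W, B)

noisy = np.random.rand(64, 64, 100).astype(np.float32)
print(spectral_subspace_denoise(noisy, rank=8).shape)
```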
Open Access Article
S2M-Net: A Novel Lightweight Network for Accurate Small Ship Recognition in SAR Images
by
Guobing Wang, Rui Zhang, Junye He, Yuxin Tang, Yue Wang, Yonghuan He, Xunqiang Gong and Jiang Ye
Remote Sens. 2025, 17(19), 3347; https://doi.org/10.3390/rs17193347 - 1 Oct 2025
Abstract
Synthetic aperture radar (SAR) provides all-weather and all-day imaging capabilities and can penetrate clouds and fog, playing an important role in ship detection. However, small ships usually contain weak feature information in such images and are easily affected by noise, which makes detection challenging. In practical deployment, limited computing resources require lightweight models to improve real-time performance, yet achieving a lightweight design while maintaining high detection accuracy for small targets remains a key challenge in object detection. To address this issue, we propose a novel lightweight network for accurate small-ship recognition in SAR images, named S2M-Net. Specifically, the Space-to-Depth Convolution (SPD-Conv) module is introduced in the feature extraction stage to optimize convolutional structures, reducing computation and parameters while retaining rich feature information. The Mixed Local-Channel Attention (MLCA) module integrates local and channel attention mechanisms to enhance adaptation to complex backgrounds and improve small-target detection accuracy. The Multi-Scale Dilated Attention (MSDA) module employs multi-scale dilated convolutions to fuse features from different receptive fields, strengthening detection across ships of various sizes. The experimental results show that S2M-Net achieved mAP50 values of 0.989, 0.955, and 0.883 on the SSDD, HRSID, and SARDet-100k datasets, respectively. Compared with the baseline model, the F1 score increased by 1.13%, 2.71%, and 2.12%. Moreover, S2M-Net outperformed other state-of-the-art algorithms in FPS across all datasets, achieving a well-balanced trade-off between accuracy and efficiency. This work provides an effective solution for accurate ship detection in SAR images.
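The SPD-Conv idea mentioned above rearranges 2x2 spatial blocks into channels instead of discarding information with strided convolution. A minimal PyTorch illustration (not the exact S2M-Net module):

```python
# Minimal sketch of the space-to-depth step behind SPD-Conv: stride-free downsampling
# that moves 2x2 spatial blocks into channels, followed by a 1x1 convolution.
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(scale)    # (C, H, W) -> (C*s*s, H/s, W/s)
        self.conv = nn.Conv2d(in_ch * scale * scale, out_ch, kernel_size=1)

    def forward(self, x):
        return self.conv(self.unshuffle(x))

x = torch.randn(1, 32, 128, 128)
print(SpaceToDepthConv(32, 64)(x).shape)   # torch.Size([1, 64, 64, 64])
```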
Open Access Review
Seeing the Trees from Above: A Survey on Real and Synthetic Agroforestry Datasets for Remote Sensing Applications
by
Babak Chehreh, Alexandra Moutinho and Carlos Viegas
Remote Sens. 2025, 17(19), 3346; https://doi.org/10.3390/rs17193346 - 1 Oct 2025
Abstract
Trees are vital to both environmental health and human well-being. They purify the air we breathe, support biodiversity by providing habitats for wildlife, prevent soil erosion to maintain fertile land, and supply wood for construction, fuel, and a multitude of essential products such as fruits, to name a few. Therefore, it is important to monitor and preserve them to protect the natural environment for future generations and ensure the sustainability of our planet. Remote sensing is a rapidly advancing and powerful tool that enables us to monitor and manage trees and forests efficiently and at large scale. Statistical methods, machine learning, and, more recently, deep learning are essential for analyzing the vast amounts of data collected, making data the fundamental component of these methodologies. The advancement of these methods goes hand in hand with the availability of sample data; therefore, a review of available high-resolution aerial datasets of trees can help pave the way for further development of analytical methods in this field. This study aims to shed light on publicly available datasets by conducting a systematic search and filtering process and an in-depth analysis of the resulting datasets, including their alignment with the FAIR—findable, accessible, interoperable, and reusable—principles and the latest trends concerning applications for such datasets.
(This article belongs to the Special Issue Advances in Deep Learning Approaches: UAV Data Analysis)
Topics
Topic in
Entropy, Environments, Land, Remote Sensing
Bioterraformation: Emergent Function from Systemic Eco-Engineering
Topic Editors: Matteo Convertino, Jie Li. Deadline: 30 November 2025
Topic in
Energies, Aerospace, Applied Sciences, Remote Sensing, Sensors
GNSS Measurement Technique in Aerial Navigation
Topic Editors: Kamil Krasuski, Damian Wierzbicki. Deadline: 31 December 2025
Topic in
Geosciences, Land, Remote Sensing, Sustainability
Disaster and Environment Monitoring Based on Multisource Remote Sensing Images
Topic Editors: Bing Guo, Yuefeng Lu, Yingqiang Song, Rui Zhang, Huihui Zhao. Deadline: 1 January 2026
Topic in
Agriculture, Remote Sensing, Sustainability, Water, Hydrology, Limnological Review, Earth
Water Management in the Age of Climate Change
Topic Editors: Yun Yang, Chong Chen, Hao Sun. Deadline: 31 January 2026

Special Issues
Special Issue in
Remote Sensing
Synthetic Aperture Radar (SAR) Remote Sensing for Civil and Environmental Applications
Guest Editors: Saeid Homayouni, Hossein Aghababaei, Alireza Tabatabaeenejad, Benyamin Hosseiny. Deadline: 10 October 2025
Special Issue in
Remote Sensing
Remote Sensing of Land Surface Temperature: Retrieval, Modeling, and Applications
Guest Editors: Zihan Liu, Jin Ma, Kangning Li, Lluís Pérez-Planells. Deadline: 13 October 2025
Special Issue in
Remote Sensing
Applications of Remote Sensing in the Monitoring of the Mountain Cryosphere
Guest Editors: Pedro Fidel Espín-López, Massimiliano Barbolini, Andreas J. Dietz. Deadline: 15 October 2025
Special Issue in
Remote Sensing
Photogrammetry Meets AI
Guest Editors: Fabio Remondino, Rongjun Qin. Deadline: 15 October 2025
Topical Collections
Topical Collection in
Remote Sensing
Google Earth Engine Applications
Collection Editors: Lalit Kumar, Onisimo Mutanga
Topical Collection in
Remote Sensing
Feature Papers for Section Environmental Remote Sensing
Collection Editor: Magaly Koch
Topical Collection in
Remote Sensing
Discovering A More Diverse Remote Sensing Discipline
Collection Editors: Karen Joyce, Meghan Halabisky, Cristina Gómez, Michelle Kalamandeen, Gopika Suresh, Kate C. Fickas
Topical Collection in
Remote Sensing
The VIIRS Collection: Calibration, Validation, and Application
Collection Editors: Xi Shao, Xiaoxiong Xiong, Changyong Cao