Remote Sens., Volume 17, Issue 8 (April-2 2025) – 161 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 6664 KiB  
Communication
Nonlinear Phase Reconstruction and Compensation Method Based on Orthonormal Complete Basis Functions in Synthetic Aperture Ladar Imaging Technology
by Ruihua Shi, Juanying Zhao, Dong Wang, Wei Li, Yinshen Wang, Bingnan Wang and Maosheng Xiang
Remote Sens. 2025, 17(8), 1480; https://doi.org/10.3390/rs17081480 - 21 Apr 2025
Abstract
By extending synthetic aperture technology from microwave bands to laser wavelengths, synthetic aperture ladar (SAL) achieves extremely high spatial resolution, independent of target distance, in long-range imaging. Nonlinear phase correction is a critical challenge in SAL imaging. To address the issue of phase noise during the imaging process, we first analyze the theoretical impact of nonlinear phase noise on imaging performance. Subsequently, a reconstruction and compensation method based on orthonormal complete basis functions is proposed to mitigate nonlinear phase noise in SAL imaging. The simulation results validate the accuracy and robustness of the proposed method, while experimental data demonstrate its effectiveness in improving system range resolution and reducing the peak side lobe ratio by 3 dB across various target scenarios. This advancement establishes a solid foundation for the application of SAL technology in ground-based remote sensing and space target observation. Full article
(This article belongs to the Section Engineering Remote Sensing)
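The abstract does not specify which orthonormal basis the authors use. As a minimal sketch of the underlying idea only, the following fits a slow nonlinear phase error by least squares on Legendre polynomials (orthonormal on [-1, 1]) and removes it from a toy signal; the basis choice, fit order, and signal model are illustrative assumptions, not the paper's method.

```python
import numpy as np

def compensate_phase(signal, t, max_order=6):
    """Estimate a smooth nonlinear phase error by least-squares projection
    onto Legendre polynomials (orthonormal on [-1, 1]) and remove it.
    Returns the compensated signal and the fitted phase."""
    phase = np.unwrap(np.angle(signal))
    # Map the slow-time axis to [-1, 1], the natural domain of the basis.
    x = 2 * (t - t.min()) / (t.max() - t.min()) - 1
    coeffs = np.polynomial.legendre.legfit(x, phase, deg=max_order)
    fitted = np.polynomial.legendre.legval(x, coeffs)
    return signal * np.exp(-1j * fitted), fitted

# Toy example: a unit-amplitude signal corrupted by a cubic phase error.
t = np.linspace(0.0, 1.0, 512)
true_phase = 4.0 * t**3 - 2.0 * t
corrupted = np.exp(1j * true_phase)
clean, fitted = compensate_phase(corrupted, t)
residual = np.max(np.abs(np.angle(clean)))
```

A degree-6 fit recovers the cubic error essentially exactly here; on real SAL data the usable order would depend on the noise structure.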

24 pages, 12004 KiB  
Article
Rapeseed Area Extraction Based on Time-Series Dual-Polarization Radar Vegetation Indices
by Yiqing Zhu, Hong Cao, Shangrong Wu, Yongli Guo and Qian Song
Remote Sens. 2025, 17(8), 1479; https://doi.org/10.3390/rs17081479 - 21 Apr 2025
Abstract
Accurate, real-time, and dynamic monitoring of crop planting distributions in hilly areas with complex terrain and frequent meteorological changes is highly important for agricultural production. Dual-polarization SAR has high application value in the fields of feature classification and crop distribution extraction because of its all-day, all-weather operation, large mapping bandwidth, and easy data acquisition. To explore the feasibility and applicability of dual-polarization synthetic-aperture radar (SAR) data in crop monitoring, this study draws on two basic methods of dual-polarization decomposition (eigenvalue decomposition and three-component polarization decomposition) to construct time series of crop dual-polarization radar vegetation indices (RVIs), and it performs a full-coverage analysis of crop distribution extraction in dryland mountainous areas of southeastern China. On the basis of the Sentinel-1 dual-polarization RVIs, time-series classification and rapeseed distribution extraction performance were compared using the principal rapeseed (Brassica napus L.) production area of southern Hunan Province as the study area. RVI3c outperformed the other indices in single-point recognition capability and area extraction accuracy, as verified against sampling points and samples; the overall accuracy (OA) and F1 score of rapeseed extraction based on RVI3c were 74.13% and 81.02%, respectively. Therefore, three-component polarization decomposition is more suitable than the other methods for crop information extraction and remote sensing classification applications involving dual-polarized SAR data. Full article
(This article belongs to the Special Issue Radar Remote Sensing for Monitoring Agricultural Management)
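The exact form of RVI3c is not given in the abstract. As background, a commonly used ratio-based dual-polarization radar vegetation index for Sentinel-1 is RVI = 4·VH/(VV + VH) on linear-power backscatter; the sketch below computes it with invented backscatter values.

```python
import numpy as np

def dual_pol_rvi(sigma_vv, sigma_vh):
    """Ratio-based dual-polarization radar vegetation index:
    RVI = 4 * VH / (VV + VH), computed on linear-power backscatter."""
    sigma_vv = np.asarray(sigma_vv, dtype=float)
    sigma_vh = np.asarray(sigma_vh, dtype=float)
    return 4.0 * sigma_vh / (sigma_vv + sigma_vh)

# Bare soil tends to have weak cross-pol returns; dense canopy produces
# stronger volume scattering and hence a higher index.
soil = dual_pol_rvi(0.10, 0.005)
canopy = dual_pol_rvi(0.08, 0.03)
```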

25 pages, 12757 KiB  
Article
CV-YOLO: A Complex-Valued Convolutional Neural Network for Oriented Ship Detection in Single-Polarization Single-Look Complex SAR Images
by Dandan Zhao, Zhe Zhang, Dongdong Lu, Xiaolan Qiu, Wei Li, Hang Li and Yirong Wu
Remote Sens. 2025, 17(8), 1478; https://doi.org/10.3390/rs17081478 - 21 Apr 2025
Abstract
Deep learning has significantly advanced synthetic aperture radar (SAR) ship detection in recent years. However, existing approaches predominantly rely on amplitude information while largely overlooking the critical phase component, limiting further performance improvements. Additionally, unlike optical images, which benefit from a variety of enhancement techniques, complex-valued SAR images lack effective processing methods. To address these challenges, we propose Complex-Valued You Only Look Once (CV-YOLO), an anchor-free, oriented bounding box (OBB)-based ship detection network that fully exploits both amplitude and phase information from single-polarization, single-look complex SAR images. Furthermore, we introduce novel complex-valued data augmentation strategies, including complex-valued Gaussian filtering, complex-valued Mosaic data augmentation, and complex-valued mixed sample data augmentation, to enhance sample diversity and significantly improve the generalization capability of complex-valued networks. Experimental evaluations on the Complex-Valued SAR Images Rotation Ship Detection Dataset (CSRSDD) demonstrate that our method surpasses real-valued networks with identical architectures and outperforms leading real-valued approaches, validating the effectiveness of the proposed methodology. Full article
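The authors' exact complex-valued Gaussian filtering is not given in the abstract; one plausible realization, sketched below, filters the real and imaginary channels of a single-look complex (SLC) image independently, which preserves phase structure rather than operating on amplitude alone.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def complex_gaussian_filter(slc, sigma=1.0):
    """Smooth a single-look complex (SLC) image by Gaussian-filtering the
    real and imaginary parts independently, keeping the result complex."""
    slc = np.asarray(slc, dtype=complex)
    return gaussian_filter(slc.real, sigma) + 1j * gaussian_filter(slc.imag, sigma)

# Synthetic complex speckle patch, purely for illustration.
rng = np.random.default_rng(0)
slc = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
smoothed = complex_gaussian_filter(slc, sigma=2.0)
```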

23 pages, 8516 KiB  
Article
A Geospatial Livestock-Carrying Capacity Model (GLCC) in the Akmola Oblast, Kazakhstan
by Jiaguo Qi, Zihan Lin, Mark A. Weltz, Kenneth E. Spaeth, Gulnaz Iskakova, Jason Nesbit, David Toledo, Tlektes Yespolov, Maira Kussainova, Lyazzat K. Makhmudova and Xiaoping Xin
Remote Sens. 2025, 17(8), 1477; https://doi.org/10.3390/rs17081477 - 21 Apr 2025
Abstract
Spatial disparities in rangeland conditions across Kazakhstan complicate field-based assessments of livestock-carrying capacity (LCC), a critical metric for the country’s food security and economic planning. This study developed a geospatial livestock-carrying capacity (GLCC) modeling framework to quantify LCC spatio-temporal dynamics at the Oblast level by integrating satellite-derived data on vegetation, water resources, and terrain with in situ measurements. By providing ground-truth observations and contextual detail, field-based measurements complement remote sensing data, helping to validate estimates and improve the reliability of the GLCC model. The framework was successfully applied and validated in a case study in the Akmola Oblast, Kazakhstan, to map the spatial and temporal distributions of LCC using publicly available MODIS NPP data and in situ data from 51 field sites. The modeling results showed distinct spatial patterns of LCC across the Oblast, reflecting variability in rangeland productivity, with higher values concentrated in the southern and southeastern regions (up to 0.5 animals/ha). The results also depicted significant interannual LCC fluctuations (ranging from 0.099 to 0.17 animals/ha), possibly due to rainfall variability and thus an indicator of climate-related risks for livestock management. Although there is still room for improvement, particularly in parameterizing the model to account for grazing pressures, forage quality, and livestock species, the GLCC framework represents a simple modeling tool for mapping livestock-carrying capacity, a meaningful indicator for rangeland managers. Further, this work underscores the value of integrating remote sensing with field-based observations to support data-driven rangeland management planning and resilient investment strategies. Full article
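For readers unfamiliar with NPP-to-carrying-capacity arithmetic, the back-of-the-envelope conversion behind models of this kind looks like the following; every coefficient here is an illustrative default, not the GLCC calibration.

```python
def carrying_capacity(npp_gC_m2_yr, area_ha,
                      forage_fraction=0.5, utilization=0.5,
                      intake_kgDM_day=12.0, grazing_days=365):
    """Rough livestock-carrying capacity (animals) from annual NPP.
    Converts carbon to dry matter with a factor of ~2.2 (g DM per g C),
    scales g/m2 to kg/ha (x10), keeps only the forage fraction that may
    be safely utilized, then divides by one animal's annual intake.
    All coefficients are illustrative defaults, not the GLCC values."""
    dry_matter_kg = npp_gC_m2_yr * 2.2 * 10.0 * area_ha
    usable_kg = dry_matter_kg * forage_fraction * utilization
    return usable_kg / (intake_kgDM_day * grazing_days)

# One hectare of steppe at 200 g C/m2/yr supports a fraction of an animal,
# broadly consistent with the per-hectare magnitudes quoted in the abstract.
cap = carrying_capacity(200.0, 1.0)
```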

21 pages, 22809 KiB  
Article
Joint Optimization Loss Function for Tiny Object Detection in Remote Sensing Images
by Shuohao Shi, Qiang Fang and Xin Xu
Remote Sens. 2025, 17(8), 1476; https://doi.org/10.3390/rs17081476 - 21 Apr 2025
Abstract
Tiny object detection remains a formidable challenge in the field of computer vision. Many factors influence tiny object detection performance; in this paper, we focus primarily on the following two. First, due to their diminutive size and inappropriate label assignment strategies, tiny objects yield significantly fewer positive samples than larger objects, resulting in weakened supervisory signals during backpropagation and model training. Second, most existing detectors directly combine the classification loss and bounding box regression loss during training. Some improvement methods focus exclusively on either classification or localization, leading to potential discrepancies in which predictions exhibit precise localization but incorrect classifications, or accurate classifications with imprecise localization. To address these issues, we propose a novel Joint Optimization Loss (JOL) that dynamically assigns optimal weights to each training sample, enabling joint optimization of both the classification and regression losses. Notably, JOL integrates seamlessly with most mainstream detectors and loss functions without requiring alterations to network architectures. Extensive experiments conducted on five benchmark datasets demonstrate the superior performance of our approach, achieving AP improvements of 1.7 and 1.5 points on the AI-TOD and SODA-D datasets, respectively, compared to the state-of-the-art method. Full article
(This article belongs to the Section AI Remote Sensing)
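The abstract does not define JOL's weighting scheme. As a generic sketch of the idea it describes (jointly weighting classification and regression losses per sample), the formulation below couples classification confidence with localization quality; it is an assumption for illustration, not the paper's loss.

```python
def joint_loss(cls_loss, reg_loss, cls_score, iou, alpha=0.5):
    """Weight one sample's combined loss by a joint quality score that
    couples classification confidence (cls_score) with localization
    quality (IoU). The geometric-mean style coupling down-weights samples
    where the two branches disagree. Illustrative formulation only."""
    quality = (cls_score ** alpha) * (iou ** (1.0 - alpha))
    return quality * (cls_loss + reg_loss)

# A well-classified, well-localized sample contributes more strongly than
# one with a confident class prediction but a poor box.
consistent = joint_loss(0.2, 0.3, cls_score=0.9, iou=0.9)
inconsistent = joint_loss(0.2, 0.3, cls_score=0.9, iou=0.1)
```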

26 pages, 10459 KiB  
Article
Research on Camouflage Target Classification and Recognition Based on Mid-Wave Infrared Hyperspectral Imaging
by Shikun Zhang, Yunhua Cao, Lu Bai and Zhensen Wu
Remote Sens. 2025, 17(8), 1475; https://doi.org/10.3390/rs17081475 - 21 Apr 2025
Abstract
Mid-wave infrared (MWIR) hyperspectral imaging integrates MWIR technology with hyperspectral remote sensing, enabling the capture of radiative information that is difficult to obtain in the visible spectrum, thus demonstrating significant value in camouflage recognition and stealth design. However, there is a notable lack of open-source datasets and effective classification methods in this field. To address these challenges, this study proposes a dual-channel attention convolutional neural network (DACNet). First, we constructed four MWIR camouflage datasets (GCL, SSCL, CW, and LC) to fill a critical data gap. Second, to address the issues of spectral confusion between camouflaged targets and backgrounds and blurred spatial boundaries, DACNet employs independent spectral and spatial branches to extract deep spectral–spatial features while dynamically weighting these features through channel and spatial attention mechanisms, significantly enhancing target–background differentiation. Our experimental results demonstrate that DACNet achieves an average accuracy (AA) of 99.96%, 99.45%, 100%, and 95.88%; an overall accuracy (OA) of 99.94%, 99.52%, 100%, and 96.39%; and Kappa coefficients of 99.91%, 99.41%, 100%, and 95.21% across the four datasets. The classification results exhibit sharp edges and minimal noise, outperforming five deep learning methods and three machine learning approaches. Additional generalization experiments on public datasets further validate DACNet’s superiority in providing an efficient and novel approach for hyperspectral camouflage data classification. Full article

19 pages, 19242 KiB  
Article
Semi-Supervised Object Detection for Remote Sensing Images Using Consistent Dense Pseudo-Labels
by Tong Zhao, Yujun Zeng, Qiang Fang, Xin Xu and Haibin Xie
Remote Sens. 2025, 17(8), 1474; https://doi.org/10.3390/rs17081474 - 21 Apr 2025
Abstract
Semi-supervised learning aims to improve the generalization performance of a model by exploiting a large quantity of unlabeled data together with limited labeled data during training. When applied to object detection in remote sensing images, semi-supervised learning can not only effectively alleviate the time-consuming and costly labeling of bounding boxes but also improve the performance and generalization of the corresponding object detection methods. However, most current semi-supervised object detection methods for remote sensing images (especially those based on pseudo-labels) ignore a key issue: the consistency of pseudo-labels. In this paper, a novel semi-supervised method for object detection in remote sensing images, called CDPL, is proposed. It includes an adaptive mechanism that directly incorporates potential object information into the dense pseudo-label selection process and carefully selects appropriate dense pseudo-labels in scenes where objects are densely distributed. CDPL consists of two main components: feature-aligned dense pseudo-label selection and sparse pseudo-label-based regression object alignment. The experimental results on typical remote sensing datasets show that the proposed method achieves a satisfactory performance improvement. Full article

20 pages, 16939 KiB  
Article
A Method for the 3D Reconstruction of Landscape Trees in the Leafless Stage
by Jiaqi Li, Qingqing Huang, Xin Wang, Benye Xi, Jie Duan, Hang Yin and Lingya Li
Remote Sens. 2025, 17(8), 1473; https://doi.org/10.3390/rs17081473 - 20 Apr 2025
Abstract
Three-dimensional models of trees can help simulate forest resource management, field surveys, and urban landscape design. With the advancement of Computer Vision (CV) and laser remote sensing technology, forestry researchers can use images and point cloud data to perform digital modeling. However, producing leafless tree models that conform to tree growth rules and have effective branching remains a major challenge. This article proposes a method based on 3D Gaussian Splatting (3D GS) to address this issue. Firstly, we compared reconstructions of the same tree and confirmed the advantages of the 3D GS method in tree 3D reconstruction. Secondly, seven landscape trees were reconstructed using the 3D GS-based method to verify its effectiveness. Finally, the reconstructed 3D point cloud was used to generate the QSM and extract tree feature parameters to verify the accuracy of the reconstructed model. Our results indicate that this method can effectively reconstruct the structure of real trees and, in particular, completely reconstruct third-order branches. Meanwhile, the error in the model’s Diameter at Breast Height (DBH) is below 1.59 cm, with a relative error of 3.8–14.6%. This proves that 3D GS effectively solves the problems of inconsistency between tree models and real growth rules, as well as poor branch structure, in tree reconstruction models, providing new insights and research directions for the 3D reconstruction and visualization of landscape trees in the leafless stage. Full article

16 pages, 5784 KiB  
Article
Temporal and Spatial Prediction of Column Dust Optical Depth Trend on Mars Based on Deep Learning
by Xiangxiang Yan, Ziteng Li, Tao Yu and Chunliang Xia
Remote Sens. 2025, 17(8), 1472; https://doi.org/10.3390/rs17081472 - 20 Apr 2025
Abstract
Dust storms, as an important extreme weather event on Mars, have significant impacts on the Martian atmosphere and climate and on the activities of Martian probes. It is therefore necessary to analyze and predict the activity trends of Martian dust storms. This study uses historical data on the global Column Dust Optical Depth (CDOD) from Martian years (MYs) 24–36 (1998–2022) to develop a deep learning-based CDOD prediction method and predicts the spatiotemporal trends of dust storms in the landing areas of Martian rovers at high latitudes, in the tropics, and in the equatorial region. Firstly, based on a trained Particle Swarm Optimization (PSO)-optimized Long Short-Term Memory (LSTM) CDOD network, rolling predictions of average CDOD values for several sols into the future are performed. Then, an evaluation method based on the accuracy of the test set gives the maximum predictable number of sols and categorizes the predictions into four accuracy intervals. The effective prediction time of the model is about 100 sols, and the accuracy is higher in the tropics and equatorial region than at high latitudes. Notably, the accuracy for the Zhurong landing area in the northern subtropical region is the highest, with a coefficient of determination (R2) and relative mean error (RME) of 0.98 and 0.035, respectively. Additionally, a Convolutional LSTM (ConvLSTM) network is used to predict the spatial distribution of CDOD intensity over the landing areas at different latitudes for future sols. The results are similar to the temporal predictions. This study shows that the LSTM-based prediction model for the intensity of Martian dust storms is effective. Predicting Martian dust storm activity is of great significance for understanding changes in the Martian atmospheric environment and can also provide a scientific basis for assessing the impact of dust storms on Martian rovers’ landing and operations. Full article
(This article belongs to the Special Issue Planetary Remote Sensing and Applications to Mars and Chang’E-6/7)
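The rolling-prediction scheme described in the abstract can be sketched generically: a one-step predictor (a toy model standing in for the trained PSO-LSTM, which is not reproduced here) is applied repeatedly, feeding each prediction back into its input window.

```python
import numpy as np

def rolling_forecast(history, predict_one, n_sols):
    """Roll a one-step-ahead predictor forward n_sols steps, feeding each
    prediction back into the sliding input window, as a recurrent model
    would be used for multi-sol CDOD forecasting."""
    window = list(history)
    out = []
    for _ in range(n_sols):
        nxt = predict_one(np.asarray(window))
        out.append(nxt)
        window = window[1:] + [nxt]   # slide the window by one sol
    return np.asarray(out)

# Stand-in for a trained LSTM: persistence with mild mean reversion.
def toy_predictor(window):
    return 0.9 * window[-1] + 0.1 * window.mean()

history = [0.12, 0.15, 0.11, 0.14, 0.13]   # CDOD-like averages (illustrative)
forecast = rolling_forecast(history, toy_predictor, n_sols=10)
```

Note that errors compound as predictions are fed back, which is why an effective prediction horizon (about 100 sols in the paper) must be estimated empirically.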

22 pages, 12176 KiB  
Article
Cover Crop Types Influence Biomass Estimation Using Unmanned Aerial Vehicle-Mounted Multispectral Sensors
by Sk Musfiq Us Salehin, Chiranjibi Poudyal, Nithya Rajan and Muthukumar Bagavathiannan
Remote Sens. 2025, 17(8), 1471; https://doi.org/10.3390/rs17081471 - 20 Apr 2025
Abstract
Accurate cover crop biomass estimation is critical for evaluating their ecological benefits. Traditional methods, like destructive sampling, are labor-intensive and time-consuming. This study investigates the application of unmanned aerial vehicle (UAV)-mounted multispectral sensors to estimate biomass in oats, Austrian winter peas (AWP), turnips, and a combination of all three crops across six experimental plots. Five spectral images were collected at two growth stages, analyzing band reflectance, nine vegetation indices, and canopy height models (CHMs) for biomass estimation. Results indicated that most vegetation indices were effective during mid-growth stages but showed reduced accuracy later. Stepwise multiple linear regression revealed that combining the normalized difference red-edge (NDRE) index and CHM provided the best biomass model before termination (R2 = 0.84). For bitemporal images, green reflectance, CHM, and the ratio of near-infrared (NIR) to red achieved the best performance (R2 = 0.85). Cover crop species also influenced the model performance. Oats were best modeled using the enhanced vegetation index (EVI) (R2 = 0.86), AWP with red-edge reflectance (R2 = 0.71), turnips with NIR, GNDVI, and CHM (R2 = 0.95), and mixed species with NIR and blue band reflectance (R2 = 0.93). These findings demonstrate the potential of high-resolution multispectral imaging for efficient biomass assessment in precision agriculture. Full article
(This article belongs to the Special Issue Perspectives of Remote Sensing for Precision Agriculture)
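The NDRE index named in the abstract above has a standard definition, (NIR − RedEdge)/(NIR + RedEdge); a minimal computation, with invented reflectance values, is:

```python
import numpy as np

def ndre(nir, red_edge):
    """Normalized difference red-edge index: (NIR - RE) / (NIR + RE)."""
    nir = np.asarray(nir, dtype=float)
    red_edge = np.asarray(red_edge, dtype=float)
    return (nir - red_edge) / (nir + red_edge)

# Reflectance values are illustrative, not from the study: denser canopy
# typically raises NIR reflectance relative to the red edge.
dense_canopy = ndre(nir=0.45, red_edge=0.20)
sparse_canopy = ndre(nir=0.30, red_edge=0.25)
```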

15 pages, 960 KiB  
Technical Note
ViT–KAN Synergistic Fusion: A Novel Framework for Parameter-Efficient Multi-Band PolSAR Land Cover Classification
by Songli Han, Dawei Ren, Fan Gao, Jian Yang and Hui Ma
Remote Sens. 2025, 17(8), 1470; https://doi.org/10.3390/rs17081470 - 20 Apr 2025
Abstract
Deep learning has shown significant potential in multi-band Polarimetric Synthetic Aperture Radar (PolSAR) land cover classification. However, the existing methods face two main challenges: accurately modeling the complex nonlinear relationships between multiple bands and balancing classifier parameter efficiency with classification accuracy. To address these challenges, this paper proposes a novel decision-level multi-band fusion framework that leverages the synergistic optimization of the Vision Transformer (ViT) and Kolmogorov–Arnold Network (KAN). This innovative architecture effectively captures global spatial–spectral correlations through ViT’s cross-band self-attention mechanism and achieves parameter-efficient decision-level probability space mapping using KAN’s spline basis functions. The proposed method significantly enhances the model’s generalization capability across different band combinations. The experimental results on the quad-band (P/L/C/X) Hainan PolSAR dataset, acquired by the Aerial Remote Sensing System of the Chinese Academy of Sciences, show that the proposed framework achieves an overall accuracy of 96.24%, outperforming conventional methods in both accuracy and parameter efficiency. These results demonstrate the practical potential of the proposed method for high-performance and efficient multi-band PolSAR land cover classification. Full article
(This article belongs to the Special Issue Big Data Era: AI Technology for SAR and PolSAR Image)

22 pages, 28104 KiB  
Article
Spatial and Temporal Characteristics of Mesoscale Eddies in the North Atlantic Ocean Based on SWOT Mission
by Aiqun Cui, Zizhan Zhang, Haoming Yan and Baomin Han
Remote Sens. 2025, 17(8), 1469; https://doi.org/10.3390/rs17081469 - 20 Apr 2025
Abstract
Mesoscale eddies play a crucial role as primary transporters of heat, salinity, and freshwater in oceanic systems. Utilizing the latest Surface Water and Ocean Topography (SWOT) dataset, this study employed the py-eddy-tracker (PET) algorithm to identify and track mesoscale eddies in the North Atlantic (NA). Our investigation focused on evaluating how applying varying filter wavelengths (800, 600, 400, and 200 km) to absolute dynamic topography (ADT) affects the detected spatiotemporal patterns and dynamic properties of mesoscale eddies, encompassing eddy kinetic energy (EKE), effective radius, rotational velocity, amplitude, lifespan, and propagation distance. The analysis reveals a cyclonic-to-anticyclonic eddy ratio of approximately 1.1:1 in the study region. The dynamic parameters of mesoscale eddies identified at filter wavelengths of 800 km and 600 km are similar, while a marked reduction in these parameters becomes evident at the 200 km wavelength. A comparative analysis of the parameters indicates that the effective radius exhibits the highest sensitivity to wavelength reduction, followed by amplitude, whereas rotational velocity remains relatively unaffected by the filtering variations. The lifespan distribution analysis shows that the majority of eddies persist for 7–21 days, with only a small number of robust mesoscale eddies remaining active beyond 45 days. These long-lived, strong mesoscale eddies are primarily generated in the high-energy current zones associated with the Gulf Stream (GS). Full article
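High-pass filtering ADT at a cutoff wavelength before eddy detection can be sketched as subtracting a Gaussian-smoothed large-scale field; the study's actual filter inside py-eddy-tracker may differ, and the grid spacing, cutoff-to-sigma conversion, and synthetic field below are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_adt(adt, grid_km, cutoff_km):
    """High-pass filter absolute dynamic topography by removing a
    Gaussian-smoothed large-scale field, isolating mesoscale signals.
    The sigma below is a crude translation of cutoff wavelength to cells."""
    sigma = cutoff_km / (2.0 * grid_km)
    return adt - gaussian_filter(adt, sigma)

# Synthetic field: a small eddy-like bump riding on a basin-scale slope.
n, grid_km = 128, 10.0
x = np.arange(n) * grid_km
X, Y = np.meshgrid(x, x)
large_scale = 1e-4 * X                                        # basin slope
eddy = 0.2 * np.exp(-((X - 600)**2 + (Y - 600)**2) / (2 * 50.0**2))
adt_hp = highpass_adt(large_scale + eddy, grid_km, cutoff_km=400.0)
```

The filter leaves the compact eddy signal nearly intact while removing the slope, which is the behavior the 800/600 km versus 200 km comparison in the abstract probes.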

21 pages, 8955 KiB  
Article
A Fusion Method Based on Physical Modes and Satellite Remote Sensing for 3D Ocean State Reconstruction
by Yingxiang Hong, Xuan Wang, Bin Wang, Wei Li and Guijun Han
Remote Sens. 2025, 17(8), 1468; https://doi.org/10.3390/rs17081468 - 20 Apr 2025
Abstract
Accurately and timely estimating three-dimensional ocean states is crucial for improving operational ocean forecasting capabilities. Although satellite observations provide valuable evolutionary information, they are confined to surface-level variables. While in situ observations can offer subsurface information, their spatiotemporal distribution is highly uneven, making it difficult to obtain complete three-dimensional ocean structures. This study developed an operational-oriented lightweight framework for three-dimensional ocean state reconstruction by integrating multi-source observations through a computationally efficient multivariate empirical orthogonal function (MEOF) method. The MEOF method can extract physically consistent multivariate ocean evolution modes from high-resolution reanalysis data. We utilized these modes to further integrate satellite remote sensing and buoy observation data, thereby establishing physical connections between the sea surface and subsurface. The framework was tested in the South China Sea, with optimal data integration schemes determined for different reconstruction variables. The experimental results demonstrate that the sea surface height (SSH) and sea surface temperature (SST) are the key factors determining the subsurface temperature reconstruction, while the sea surface salinity (SSS) plays a primary role in enhancing salinity estimation. Meanwhile, current fields are most effectively reconstructed using SSH alone. The evaluations show that the reconstruction results exhibited high consistency with independent Argo observations, outperforming traditional baseline methods and effectively capturing the vertical structure of ocean eddies. Additionally, the framework can easily integrate sparse in situ observations to further improve the reconstruction performance. The high computational efficiency and reasonable reconstruction results confirm the feasibility and reliability of this framework for operational applications. Full article
(This article belongs to the Section Ocean Remote Sensing)
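A minimal MEOF-style reconstruction can be sketched with an SVD: extract joint surface-plus-subsurface modes from a reanalysis matrix, then infer the subsurface part of a new state from its surface observations alone. The dimensions and data below are synthetic and invented; this shows the generic technique, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 'reanalysis': each column is one time snapshot of a stacked anomaly
# vector [surface variables; subsurface variables].
n_surf, n_sub, n_time = 20, 80, 200
modes_true = rng.normal(size=(n_surf + n_sub, 3))
amplitudes = rng.normal(size=(3, n_time))
state = modes_true @ amplitudes + 0.01 * rng.normal(size=(n_surf + n_sub, n_time))

# Multivariate EOFs: left singular vectors of the joint anomaly matrix.
U, s, Vt = np.linalg.svd(state, full_matrices=False)
eofs = U[:, :3]                     # leading joint surface+subsurface modes

# Reconstruction: fit the mode amplitudes to a new surface observation by
# least squares, then use the full modes to fill in the subsurface.
new_state = modes_true @ rng.normal(size=3)
surf_obs = new_state[:n_surf]
coeff, *_ = np.linalg.lstsq(eofs[:n_surf], surf_obs, rcond=None)
reconstructed_sub = eofs[n_surf:] @ coeff

err = (np.linalg.norm(reconstructed_sub - new_state[n_surf:])
       / np.linalg.norm(new_state[n_surf:]))
```

The physical coupling between surface and subsurface lives entirely in the joint modes, which is why consistent multivariate reanalysis data matter for this approach.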

30 pages, 56050 KiB  
Article
Assessing Habitat Quality on Synergetic Land-Cover Dataset Across the Greater Mekong Subregion over the Last Four Decades
by Shu’an Liu, Tianle Sun, Philippe Ciais, Huifang Zhang, Junjun Fang, Jingchun Fang, Tewekel Melese Gemechu and Baozhang Chen
Remote Sens. 2025, 17(8), 1467; https://doi.org/10.3390/rs17081467 - 20 Apr 2025
Abstract
In the face of rapid infrastructure expansion and escalating anthropogenic activities, it becomes imperative to prioritize the examination of long-term transformations in land cover and ecological quality within the Greater Mekong Subregion (GMS). We developed an ecological evaluation system integrating the land cover data assimilation framework (LCDAF) with the InVEST model to accomplish this goal. The LCDAF compensates for the disadvantages of weather interference, difficulty in recognizing complex scenes, and poor generalization in remote sensing image classification, and it also adds a temporal continuity that other fusion methods lack. The synthesized land cover dataset demonstrates superior overall accuracy compared to five existing global products. This enhanced dataset provides a robust foundation for comprehensive analysis and decision making within the ecological evaluation system. We implemented a rigorous, quantitative assessment of changes in land cover and habitat quality spanning 1980 to 2020. The land cover analysis unveiled a noteworthy trend in the dynamic interplay between forested areas and croplands, highlighting simultaneous processes of forest restoration and agricultural expansion, albeit at varying rates. Further analysis showed that habitat quality in the GMS generally remained at a moderate level, with a slight downward trend over the period. Laos attained the highest ranking in habitat quality, followed by Myanmar, China, Cambodia, Vietnam, and Thailand. Among human factors, land use intensity and landscape fragmentation emerged as detrimental contributors to habitat quality. Substantial progress was achieved in implementing forestland conservation measures, exemplified in regions such as Cambodia and the Guangxi Province of China, where these endeavors proved effective in mitigating habitat degradation. Despite these positive endeavors, the GMS’s overall habitat quality did not significantly improve, emphasizing the enduring challenges the region confronts in ecological management and habitat conservation. Full article

19 pages, 3776 KiB  
Article
Research on Weighted Fusion Method for Multi-Source Sea Surface Temperature Based on Cloud Conditions
by Xiangxiang Rong and Haiyong Ding
Remote Sens. 2025, 17(8), 1466; https://doi.org/10.3390/rs17081466 - 20 Apr 2025
Abstract
The sea surface temperature (SST) is an important parameter reflecting the energy exchange between the ocean and the atmosphere, which has a key impact on climate change, marine ecology and fisheries. However, most of the existing SST fusion methods suffer from poor portability [...] Read more.
The sea surface temperature (SST) is an important parameter reflecting the energy exchange between the ocean and the atmosphere, which has a key impact on climate change, marine ecology and fisheries. However, most of the existing SST fusion methods suffer from poor portability and a lack of consideration of cloudy conditions, which can affect the data accuracy and reliability. To address these problems, this paper proposes an infrared and microwave SST fusion method based on cloud conditions. The method categorizes the fusion process according to three scenarios—clear sky, completely cloudy, and partially cloudy—adjusting the fusion approach for each condition. Three representative global datasets, both domestic and international, are selected, while the South China Sea region, which experiences extreme weather, is used as a typical study area for validation. By introducing buoy observation data, the fusion results are evaluated using bias, RMSE, URMSE, r, and coverage. The experimental results show that the biases of the three fusion results of VIRR-RH, AVHRR-RH and MODIS-RH are −0.611 °C, 0.043 °C and 0.012 °C, respectively. In the South China Sea region under extreme weather conditions, the bias is −0.428 °C, the RMSE is 0.941 °C, the URMSE is 0.424 °C and the coverage rate reaches 25.55%. These results confirm that the method not only achieves effective fusion but also exhibits strong generalization and adaptability, unaffected by specific sensors or regions. Full article
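The evaluation metrics named above have standard definitions; as a rough illustration only (not the paper's code), and assuming URMSE denotes the unbiased RMSE obtained by removing the mean bias, a minimal sketch is:

```python
import math

def sst_metrics(fused, buoy):
    """Compare fused SST values against buoy observations (paired lists, degC)."""
    n = len(fused)
    diffs = [f - b for f, b in zip(fused, buoy)]
    bias = sum(diffs) / n                            # mean difference
    rmse = math.sqrt(sum(d * d for d in diffs) / n)  # root mean square error
    # Assumption: URMSE = spread remaining after the mean bias is removed
    urmse = math.sqrt(max(rmse * rmse - bias * bias, 0.0))
    return bias, rmse, urmse

bias, rmse, urmse = sst_metrics([28.1, 27.6, 29.0], [28.0, 27.9, 28.8])
```

With zero mean bias, URMSE under this assumption coincides with the RMSE; a constant offset between products inflates RMSE but leaves URMSE unchanged, which is why both are reported.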

22 pages, 849 KiB  
Article
Moving-Least-Squares-Enhanced 3D Object Detection for 4D Millimeter-Wave Radar
by Weigang Shi, Panpan Tong and Xin Bi
Remote Sens. 2025, 17(8), 1465; https://doi.org/10.3390/rs17081465 - 20 Apr 2025
Abstract
Object detection is a critical task in autonomous driving. Currently, 3D object detection methods for autonomous driving primarily rely on stereo cameras and LiDAR, which are susceptible to adverse weather conditions and low lighting, resulting in limited robustness. In contrast, automotive mmWave radar [...] Read more.
Object detection is a critical task in autonomous driving. Currently, 3D object detection methods for autonomous driving primarily rely on stereo cameras and LiDAR, which are susceptible to adverse weather conditions and low lighting, resulting in limited robustness. In contrast, automotive mmWave radar offers advantages such as resilience to complex weather, independence from lighting conditions, and low cost, making it a widely studied sensor type. Modern 4D millimeter-wave (mmWave) radar can provide spatial dimensions (x, y, z) as well as Doppler information, meeting the requirements for 3D object detection. However, the point cloud density of 4D mmWave radar is significantly lower than that of LiDAR at short ranges, and existing point cloud object detection methods struggle to adapt to such sparse data. To address this challenge, we propose a novel 4D mmWave radar point cloud object detection framework. First, we employ moving least squares (MLS) to densify multi-frame fused point clouds, effectively increasing the point cloud density. Next, we construct a 3D object detection network based on point pillar encoding and utilize an SSD detection head for detection on feature maps. Finally, we validate our method on the VoD dataset. Experimental results demonstrate that our proposed framework outperforms the compared methods, and the MLS-based point cloud densification significantly enhances object detection performance. Full article

18 pages, 4934 KiB  
Article
A Cross-Domain Landslide Extraction Method Utilizing Image Masking and Morphological Information Enhancement
by Jie Chen, Jinge Liu, Xu Zeng, Songshan Zhou, Geng Sun, Siqiang Rao, Ya Guo and Jingru Zhu
Remote Sens. 2025, 17(8), 1464; https://doi.org/10.3390/rs17081464 - 20 Apr 2025
Abstract
The deployment of landslide intelligent recognition models in non-training regions encounters substantial challenges, primarily attributed to heterogeneous remote sensing acquisition parameters and inherent geospatial variability in factors such as topography, vegetation cover, and soil characteristics across distinct geographic zones. Addressing the issue of [...] Read more.
The deployment of landslide intelligent recognition models in non-training regions encounters substantial challenges, primarily attributed to heterogeneous remote sensing acquisition parameters and inherent geospatial variability in factors such as topography, vegetation cover, and soil characteristics across distinct geographic zones. Addressing the issue of underutilization of landslide contextual information and morphological integrity in domain adaptation methods, this paper introduces a cross-domain landslide extraction approach that integrates image masking with enhanced morphological information. Specifically, our approach implements a pixel-level mask on target domain imagery, facilitating the utilization of context information from the masked images. Furthermore, it establishes a morphological information extraction module, grounded in predefined thresholds and rules, to produce morphological pseudo-labels for the target domain. The results demonstrate that our method achieves an IoU (intersection over union) improvement of 1.78% and 6.02% over the suboptimal method in two cross-domain tasks, respectively, and a remarkable performance enhancement of 33.13% and 31.79% compared to scenarios without domain adaptation. This cross-domain extraction method not only substantially boosts the accuracy of cross-domain landslide identification but also enhances the completeness of landslide morphology information, offering robust technical support for landslide disaster monitoring and early warning systems. Full article
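The IoU figures reported above follow the standard intersection-over-union definition for segmentation masks; the sketch below is an illustration of the metric only, not the authors' implementation:

```python
def iou(pred, truth):
    """Intersection over union for two binary masks (flat 0/1 sequences)."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # two empty masks: define IoU as 1

# Example: 3 overlapping pixels out of 5 predicted-or-true pixels
pred  = [1, 1, 1, 1, 0, 0]
truth = [1, 1, 1, 0, 1, 0]
score = iou(pred, truth)  # 3 / 5 = 0.6
```

An improvement of a few IoU percentage points on cross-domain landslide masks is substantial, since both false positives and missed pixels shrink the intersection while enlarging the union.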

25 pages, 17354 KiB  
Article
Frequency–Spatial–Temporal Domain Fusion Network for Remote Sensing Image Change Captioning
by Shiwei Zou, Yingmei Wei, Yuxiang Xie and Xidao Luan
Remote Sens. 2025, 17(8), 1463; https://doi.org/10.3390/rs17081463 - 19 Apr 2025
Abstract
Remote Sensing Image Change Captioning (RSICC) has emerged as a cross-disciplinary technology that automatically generates sentences describing the changes in bi-temporal remote sensing images. While demonstrating significant potential for urban planning, agricultural surveillance, and disaster management, current RSICC methods exhibit two fundamental limitations: [...] Read more.
Remote Sensing Image Change Captioning (RSICC) has emerged as a cross-disciplinary technology that automatically generates sentences describing the changes in bi-temporal remote sensing images. While demonstrating significant potential for urban planning, agricultural surveillance, and disaster management, current RSICC methods exhibit two fundamental limitations: (1) vulnerability to pseudo-changes induced by illumination fluctuations and seasonal transitions and (2) an overemphasis on spatial variations with insufficient modeling of temporal dependencies in multi-temporal contexts. To address these challenges, we present the Frequency–Spatial–Temporal Fusion Network (FST-Net), a novel framework that integrates frequency, spatial, and temporal information for RSICC. Specifically, our Frequency–Spatial Fusion module implements adaptive spectral decomposition to disentangle structural changes from high-frequency noise artifacts, effectively suppressing environmental interference. The Spatio–Temporal Modeling module is further developed to employ state-space guided sequential scanning to capture evolutionary patterns of geospatial changes across temporal dimensions. Additionally, a unified dual-task decoder architecture bridges pixel-level change detection with semantic-level change captioning, achieving joint optimization of localization precision and description accuracy. Experiments on the LEVIR-MCI dataset demonstrate that our FST-Net outperforms previous methods by 3.65% on BLEU-4 and 4.08% on CIDEr-D, establishing new performance standards for RSICC. Full article
(This article belongs to the Special Issue GeoAI and EO Big Data Driven Advances in Earth Environmental Science)

18 pages, 5845 KiB  
Article
Remote Sensing-Based Detection and Analysis of Slow-Moving Landslides in Aba Prefecture, Southwest China
by Juan Ren, Wunian Yang, Zhigang Ma, Weile Li, Shuai Zeng, Hao Fu, Yan Wen and Jiayang He
Remote Sens. 2025, 17(8), 1462; https://doi.org/10.3390/rs17081462 - 19 Apr 2025
Abstract
Aba Tibetan and Qiang Autonomous Prefecture (Aba Prefecture), located in Southwest China, has complex geological conditions and frequent seismic activity, facing an increasing landslide risk that threatens the safety of local communities. This study aims to improve the regional geohazard database by identifying [...] Read more.
Aba Tibetan and Qiang Autonomous Prefecture (Aba Prefecture), located in Southwest China, has complex geological conditions and frequent seismic activity, facing an increasing landslide risk that threatens the safety of local communities. This study aims to improve the regional geohazard database by identifying slow-moving landslides in the area. We combined Stacking Interferometric Synthetic Aperture Radar (Stacking-InSAR) technology for deformation detection, optical satellite imagery for landslide boundary mapping, and field investigations for validation. A total of 474 slow-moving landslides were identified, covering an area of 149.84 km2, with landslides predominantly concentrated in the river valleys of the southern and southeastern regions. The distribution of these landslides is strongly influenced by bedrock lithology, fault distribution, topographic features, proximity to rivers, and folds. Additionally, 236 previously unknown landslides were detected and incorporated into the local geohazard database. This study provides important scientific support for landslide risk management, infrastructure planning, and mitigation strategies in Aba Prefecture, offering valuable insights for disaster response and prevention efforts. Full article
(This article belongs to the Section Engineering Remote Sensing)

26 pages, 5977 KiB  
Article
Hyperspectral Image Classification Using a Multi-Scale CNN Architecture with Asymmetric Convolutions from Small to Large Kernels
by Xun Liu, Alex Hay-Man Ng, Fangyuan Lei, Jinchang Ren, Xuejiao Liao and Linlin Ge
Remote Sens. 2025, 17(8), 1461; https://doi.org/10.3390/rs17081461 - 19 Apr 2025
Abstract
Deep learning-based hyperspectral image (HSI) classification methods, such as Transformers and Mambas, have attracted considerable attention. However, several challenges persist, e.g., (1) Transformers suffer from quadratic computational complexity due to the self-attention mechanism; and (2) both the local and global feature extraction capabilities [...] Read more.
Deep learning-based hyperspectral image (HSI) classification methods, such as Transformers and Mambas, have attracted considerable attention. However, several challenges persist, e.g., (1) Transformers suffer from quadratic computational complexity due to the self-attention mechanism; and (2) both the local and global feature extraction capabilities of large kernel convolutional neural networks (LKCNNs) need to be enhanced. To address these limitations, we introduce a multi-scale large kernel asymmetric CNN (MSLKACNN) with kernel sizes as large as 1×17 and 17×1 for HSI classification. MSLKACNN comprises a spectral feature extraction module (SFEM) and a multi-scale large kernel asymmetric convolution (MSLKAC). Specifically, the SFEM is first utilized to suppress noise, reduce spectral bands, and capture spectral features. Then, MSLKAC, with a large receptive field, joins two parallel multi-scale asymmetric convolution components to extract both local and global spatial features: (C1) a multi-scale large kernel asymmetric depthwise convolution (MLKADC) is designed to capture short-range, middle-range, and long-range spatial features; and (C2) a multi-scale asymmetric dilated depthwise convolution (MADDC) is proposed to aggregate the spatial features between pixels across diverse distances. Extensive experimental results on four widely used HSI datasets show that the proposed MSLKACNN significantly outperforms ten state-of-the-art methods, with overall accuracy (OA) gains ranging from 4.93% to 17.80% on Indian Pines, 2.09% to 15.86% on Botswana, 0.67% to 13.33% on Houston 2013, and 2.20% to 24.33% on LongKou. These results validate the effectiveness of the proposed MSLKACNN. Full article

20 pages, 1187 KiB  
Review
A Summary of Recent Advances in the Literature on Machine Learning Techniques for Remote Sensing of Groundwater Dependent Ecosystems (GDEs) from Space
by Chantel Nthabiseng Chiloane, Timothy Dube, Mbulisi Sibanda, Tatenda Dalu and Cletah Shoko
Remote Sens. 2025, 17(8), 1460; https://doi.org/10.3390/rs17081460 - 19 Apr 2025
Abstract
While groundwater-dependent ecosystems (GDEs) occupy only a small portion of the Earth’s surface, they hold significant ecological value by providing essential ecosystem services such as habitat for flora and fauna, carbon sequestration, and erosion control. However, GDE functionality is increasingly threatened by human [...] Read more.
While groundwater-dependent ecosystems (GDEs) occupy only a small portion of the Earth’s surface, they hold significant ecological value by providing essential ecosystem services such as habitat for flora and fauna, carbon sequestration, and erosion control. However, GDE functionality is increasingly threatened by human activities, rainfall variability, and climate change. To address these challenges, various methods have been developed to assess, monitor, and understand GDEs, aiding sustainable decision-making and conservation policy implementation. Among these, remote sensing and advanced machine learning (ML) techniques have emerged as key tools for improving the evaluation of dryland GDEs. This study provides a comprehensive overview of the progress made in applying advanced ML algorithms to assess and monitor GDEs. It begins with a systematic literature review following the PRISMA framework, followed by an analysis of temporal and geographic trends in ML applications for GDE research. Additionally, it explores different advanced ML algorithms and their applications across various GDE types. The paper also discusses challenges in mapping GDEs and proposes mitigation strategies. Despite the promise of ML in GDE studies, the field remains in its early stages, with most research concentrated in China, the USA, and Germany. While advanced ML techniques enable high-quality dryland GDE classification at local to global scales, model performance is highly dependent on data availability and quality. Overall, the findings underscore the growing importance and potential of geospatial approaches in generating spatially explicit information on dryland GDEs. Future research should focus on enhancing models through hybrid and transformative techniques, as well as fostering interdisciplinary collaboration between ecologists and computer scientists to improve model development and result interpretability. 
The insights presented in this study will help guide future research efforts and contribute to the improved management and conservation of GDEs. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

21 pages, 7459 KiB  
Article
A Cross-Estimation Method for Spaceborne Synthetic Aperture Radar Range Antenna Pattern Using Pseudo-Invariant Natural Scenes
by Chuanzeng Xu, Jitong Duan, Yongsheng Zhou, Fei Teng, Fan Zhang and Wen Hong
Remote Sens. 2025, 17(8), 1459; https://doi.org/10.3390/rs17081459 - 19 Apr 2025
Abstract
The estimation and correction of antenna patterns are essential for ensuring the relative radiometric quality of SAR images. Traditional methods for antenna pattern estimation rely on artificial calibrators or specific stable natural scenes like the Amazon rainforest, which are limited by cost, complexity, [...] Read more.
The estimation and correction of antenna patterns are essential for ensuring the relative radiometric quality of SAR images. Traditional methods for antenna pattern estimation rely on artificial calibrators or specific stable natural scenes like the Amazon rainforest, which are limited by cost, complexity, and geographic constraints, making them unsuitable for frequent imaging demands. Meanwhile, general natural scenes are imaged frequently by SAR systems, but their true scattering characteristics are unknown, posing a challenge for direct antenna pattern estimation. Therefore, a calibrated SAR system can be used to obtain the scattering characteristics of these general scenes, which introduces the concept of cross-calibration. Accordingly, this paper proposes a novel method for estimating the SAR range antenna pattern based on cross-calibration. The method addresses three key challenges: (1) identifying pseudo-invariant natural scenes suitable as reference targets through spatial uniformity and temporal stability assessments using standard deviation and amplitude correlation analyses; (2) achieving pixel-level registration of heterogeneous SAR images with an iterative method despite radiometric imbalances; (3) extracting stable power values by segmenting images and applying differential screening to minimize outlier effects. The proposed method is validated using Gaofen-3 SAR data and shows robust performance in bare soil, grassland, and forest scenarios. Compared with the tropical forest-based calibration method, the maximum shape deviation between the range antenna patterns obtained by the two methods is less than 0.2 dB. Full article
(This article belongs to the Section Engineering Remote Sensing)

24 pages, 7592 KiB  
Article
DB-MFENet: A Dual-Branch Multi-Frequency Feature Enhancement Network for Hyperspectral Image Classification
by Chen Zang, Gaochao Song, Lei Li, Guangrui Zhao, Wanxuan Lu, Guiyuan Jiang and Qian Sun
Remote Sens. 2025, 17(8), 1458; https://doi.org/10.3390/rs17081458 - 18 Apr 2025
Abstract
HSI classification is essential for monitoring and analyzing the Earth’s surface, with methods utilizing convolutional neural networks (CNNs) and transformers rapidly gaining prominence and advancing in recent years. However, CNNs are limited by their restricted receptive fields and can only process local information. [...] Read more.
Hyperspectral image (HSI) classification is essential for monitoring and analyzing the Earth’s surface, with methods utilizing convolutional neural networks (CNNs) and transformers rapidly gaining prominence and advancing in recent years. However, CNNs are limited by their restricted receptive fields and can only process local information. Although transformers excel at establishing long-range dependencies, they underutilize the spatial information of HSIs. To tackle these challenges, we present the dual-branch multi-frequency feature enhancement network (DB-MFENet) for HSI classification. First, orthogonal position encoding (OPE) is employed to map image coordinates into a high-dimensional space, which is then combined with corresponding spectral values to compute a multi-frequency feature. Next, the multi-frequency feature is divided into low-frequency and high-frequency components, which are separately enhanced through a dual-branch structure and then fused. Finally, a transformer encoder and a linear layer are employed to encode and classify the enhanced multi-frequency feature. The experimental results demonstrate that our method is efficient and robust for HSI classification, achieving overall accuracies of 97.05%, 91.92%, 98.72%, and 96.31% on the Indian Pines, Salinas, Pavia University, and WHU-Hi-HanChuan datasets, respectively. Full article
(This article belongs to the Section Remote Sensing Image Processing)

17 pages, 7709 KiB  
Article
Analysis of Factors Affecting Random Measurement Error in LiDAR Point Cloud Feature Matching Positioning
by Guoliang Liu, Wang Gao and Shuguo Pan
Remote Sens. 2025, 17(8), 1457; https://doi.org/10.3390/rs17081457 - 18 Apr 2025
Abstract
Light detection and ranging (LiDAR) has the advantage of simultaneous localization and mapping with high precision, making it one of the important sensors for intelligent robotics navigation, positioning, and perception. It is common knowledge that the random measurement error of global navigation satellite [...] Read more.
Light detection and ranging (LiDAR) has the advantage of simultaneous localization and mapping with high precision, making it one of the important sensors for intelligent robotics navigation, positioning, and perception. It is common knowledge that the random measurement error of global navigation satellite system (GNSS) observations is usually considered to be closely related to the elevation angle factor. However, in the LiDAR point cloud feature matching positioning model, the analysis of factors affecting the random measurement error of observations remains underdeveloped, which limits the ability of LiDAR sensors to estimate pose parameters. Therefore, this work draws on the random measurement error analysis method of GNSS observations to study the impact of factors such as distance, angle, and feature accuracy on the random measurement error of LiDAR. The experimental results show that feature accuracy, rather than distance or angle, is the main factor affecting the measurement error in the LiDAR point cloud feature matching positioning model, even under different sensor specifications, point cloud densities, prior maps, and urban road scenes. Furthermore, a simple random measurement error model based on the feature accuracy factor is used to verify the effect of parameter estimation, and the results show that the random error model can effectively reduce the error of pose parameter estimation, with an improvement of about 50%. Full article

32 pages, 10395 KiB  
Article
Predicting Tree-Level Diameter and Volume for Radiata Pine Using UAV LiDAR-Derived Metrics Across a National Trial Series in New Zealand
by Michael S. Watt, Sadeepa Jayathunga, Midhun Mohan, Robin J. L. Hartley, Nicolò Camarretta, Benjamin S. C. Steer, Weichen Zhang and Mitch Bryson
Remote Sens. 2025, 17(8), 1456; https://doi.org/10.3390/rs17081456 - 18 Apr 2025
Abstract
The rapid development of UAV-LiDAR and data processing capabilities is likely to enable accurate individual-tree inventories in the near future, requiring few on-ground calibration measurements. Using data collected from 20 radiata pine trials dispersed across New Zealand, the objective of this study was [...] Read more.
The rapid development of UAV-LiDAR and data processing capabilities is likely to enable accurate individual-tree inventories in the near future, requiring few on-ground calibration measurements. Using data collected from 20 radiata pine trials dispersed across New Zealand, the objective of this study was to determine the accuracy of high-density UAV-LiDAR for the prediction of tree diameter and volume, under a range of data calibration scenarios. Using all measurements for the calibration (a range of 335–4703 tree measurements across the 20 sites), accurate random forest models for each of the 20 sites were created from a diverse range of LiDAR metrics that characterised the horizontal and vertical structures of the canopy. Averaged across the 20 sites, predictions had a mean R2 and relative RMSE (rRMSE) of, respectively, 0.713 and 9.699% for the tree diameter and 0.746 and 19.57% for the tree volume. Reductions in the numbers of calibration trees per trial had little effect on model accuracy until only 300 trees/site were used; however, accurate, unbiased predictions were still possible using as few as 100 trees/site. More generally applicable random forest models for both tree dimensions were constructed by collating all of the data and tested using leave-one-site-out cross-validation to determine the accuracy of the model predictions when calibration measurements were not available. The predictions using this approach were reasonable but less accurate and more biased than with the use of calibration data, with a mean R2 and rRMSE of, respectively, 0.631 and 15.12% for the tree diameter and 0.631 and 35.6% for the volume. Our research aims to facilitate the transition from a plot-based to tree-level inventory in plantation forests and contribute to the future development of a generalised model that could accurately predict tree dimensions from UAV-LiDAR, relying on minimal field measurements. Full article

16 pages, 958 KiB  
Technical Note
Bayesian Time-Domain Ringing Suppression Approach in Impulse Ultrawideband Synthetic Aperture Radar
by Xinhao Xu, Wenjie Li, Haibo Tang, Longyong Chen, Chengwei Zhang, Tao Jiang, Jie Liu and Xingdong Liang
Remote Sens. 2025, 17(8), 1455; https://doi.org/10.3390/rs17081455 - 18 Apr 2025
Abstract
Impulse ultrawideband (UWB) synthetic aperture radar (SAR) combines high-azimuth-range resolution with robust penetration capabilities, making it ideal for applications such as through-wall detection and subsurface imaging. In such systems, the performance of UWB antennas is critical for transmitting high-power, large-bandwidth impulse signals. However, [...] Read more.
Impulse ultrawideband (UWB) synthetic aperture radar (SAR) combines high-azimuth-range resolution with robust penetration capabilities, making it ideal for applications such as through-wall detection and subsurface imaging. In such systems, the performance of UWB antennas is critical for transmitting high-power, large-bandwidth impulse signals. However, two primary factors degrade radar imaging quality: (1) inherent limitations in antenna radiation efficiency, which lead to low-frequency signal loss and subsequent time-domain ringing artifacts; (2) impedance mismatch at the antenna terminals, causing standing wave reflections that exacerbate the ringing phenomenon. This study systematically analyzes the mechanisms of ringing generation, including its physical origins and mathematical modeling in SAR systems. Building on this analysis, we propose a Bayesian ringing suppression algorithm based on sparse optimization. The method effectively enhances imaging quality while balancing the trade-off between ringing suppression and image fidelity. Validation through numerical simulations and experimental measurements demonstrates significant suppression of time-domain ringing and improved target clarity. The proposed approach holds critical importance for advancing impulse UWB SAR systems, particularly in scenarios requiring high-resolution imaging. Full article

19 pages, 36390 KiB  
Article
TerrAInav Sim: An Open-Source Simulation of UAV Aerial Imaging from Map-Based Data
by Seyedeh Parisa Dajkhosh, Peter M. Le, Orges Furxhi and Eddie L. Jacobs
Remote Sens. 2025, 17(8), 1454; https://doi.org/10.3390/rs17081454 - 18 Apr 2025
Abstract
Capturing real-world aerial images for vision-based navigation (VBN) is challenging due to limited availability and conditions that make it nearly impossible to access all desired images from any location. The complexity increases when multiple locations are involved. State-of-the-art solutions, such as deploying UAVs [...] Read more.
Capturing real-world aerial images for vision-based navigation (VBN) is challenging due to limited availability and conditions that make it nearly impossible to access all desired images from any location. The complexity increases when multiple locations are involved. State-of-the-art solutions, such as deploying UAVs (unmanned aerial vehicles) for aerial imaging or relying on existing research databases, come with significant limitations. TerrAInav Sim offers a compelling alternative by simulating a UAV to capture bird’s-eye view map-based images at zero yaw with real-world visible-band specifications. This open-source tool allows users to specify the bounding box (top-left and bottom-right) coordinates of any region on a map. Without the need to physically fly a drone, the virtual Python UAV performs a raster search to capture images. Users can define parameters such as the flight altitude, aspect ratio, diagonal field of view of the camera, and the overlap between consecutive images. TerrAInav Sim’s capabilities range from capturing a few low-altitude images for basic applications to generating extensive datasets of entire cities for complex tasks like deep learning. This versatility makes TerrAInav a valuable tool for not only VBN but also other applications, including environmental monitoring, construction, and city management. The open-source nature of the tool also allows for the extension of the raster search to other missions. A dataset of Memphis, TN, has been provided along with this simulator. A supplementary dataset is also provided, which includes data from a 3D world generation package for comparison. Full article
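The footprint and overlap geometry implied by the user parameters above (altitude, diagonal field of view, aspect ratio, overlap) can be sketched as follows. This is a simplified nadir-camera model over assumed flat terrain, not TerrAInav Sim's actual code, and the function names are hypothetical:

```python
import math

def footprint(altitude_m, dfov_deg, aspect):
    """Ground footprint (width, height, metres) of a nadir camera, from flight
    altitude, diagonal field of view, and image aspect ratio (width/height)."""
    diag = 2.0 * altitude_m * math.tan(math.radians(dfov_deg) / 2.0)
    h = diag / math.sqrt(aspect * aspect + 1.0)
    return aspect * h, h

def raster_spacing(altitude_m, dfov_deg, aspect, overlap):
    """Grid step between image centres giving the requested fractional overlap
    between consecutive captures in a raster search."""
    w, h = footprint(altitude_m, dfov_deg, aspect)
    return w * (1.0 - overlap), h * (1.0 - overlap)

# e.g. 100 m altitude, 60 deg diagonal FOV, 4:3 images, 25% overlap
dx, dy = raster_spacing(100.0, 60.0, 4 / 3, 0.25)
```

Dividing the bounding-box extent by these steps gives the number of rows and columns of virtual captures, which is how a raster search over a user-specified region could be planned.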

16 pages, 9580 KiB  
Article
Detecting the Distribution of Callery Pear (Pyrus calleryana) in an Urban U.S. Landscape Using High Spatial Resolution Satellite Imagery and Machine Learning
by Justin Krohn, Hong He, Timothy C. Matisziw, Lauren S. Pile Knapp, Jacob S. Fraser and Michael Sunde
Remote Sens. 2025, 17(8), 1453; https://doi.org/10.3390/rs17081453 - 18 Apr 2025
Abstract
Using Planetscope imagery, we trained a random forest model to detect Callery pear (Pyrus calleryana) throughout a diverse urban landscape in Columbia, Missouri. The random forest model had a classification accuracy of 89.78%, a recall score of 0.693, and an F1 [...] Read more.
Using Planetscope imagery, we trained a random forest model to detect Callery pear (Pyrus calleryana) throughout a diverse urban landscape in Columbia, Missouri. The random forest model had a classification accuracy of 89.78%, a recall score of 0.693, and an F1 score of 0.819. The key hyperparameters for model tuning were the cutoff and class-weight parameters. After the distribution of Callery pear was predicted throughout the landscape, we analyzed the distribution pattern of the predictions using Ripley’s K and then associated the distribution patterns with various socio-economic indicators. The analysis identified significant relationships between the distribution of the predicted Callery pear and population density, median household income, median year the housing infrastructure was built, and median housing value at a variety of spatial scales. The findings from this study provide a much-needed method for detecting species of interest in a heterogeneous landscape that is both low cost and does not require specialized hardware or software like some alternative deep learning methods. Full article
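Ripley's K, used above to characterize the predicted distribution pattern, admits a simple naive estimator (no edge correction); the sketch below illustrates the statistic itself and is not the study's implementation:

```python
import math

def ripley_k(points, t, area):
    """Naive Ripley's K at distance t for (x, y) points in a region of given
    area; no edge correction. Under complete spatial randomness K(t) ~ pi*t^2,
    so larger values indicate clustering at scale t."""
    n = len(points)
    pairs = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= t:
                pairs += 1
    return area * pairs / (n * n)

# A tight cluster gives K(t) well above the pi*t^2 CSR baseline
cluster = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
k = ripley_k(cluster, 0.2, area=1.0)
```

Evaluating K over a range of t values is what allows clustering to be assessed "at a variety of spatial scales", as the abstract describes.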
(This article belongs to the Section Ecological Remote Sensing)
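Ripley's K, used above to characterize the predicted distribution, compares the observed count of point pairs within radius r to the complete-spatial-randomness expectation of πr²: values above that baseline indicate clustering. A minimal estimator without edge correction can be written as follows; this is a generic sketch, not the authors' code.

```python
import math

def ripleys_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction) for 2D points in
    a study region of the given area. Under complete spatial randomness
    K(r) ~ pi * r**2; clustered patterns give larger values."""
    n = len(points)
    count = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= r:
                count += 1
    return area * count / (n * (n - 1))
```

On a toy pattern with two points close together and one far away, K(0.2) exceeds π(0.2)², flagging the cluster.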
18 pages, 12172 KiB  
Article
An Improved Point Cloud Filtering Algorithm Applies in Complex Urban Environments
by Guangyu Liang, Ximin Cui, Debao Yuan, Liuya Zhang and Renxu Yang
Remote Sens. 2025, 17(8), 1452; https://doi.org/10.3390/rs17081452 - 18 Apr 2025
Abstract
Point cloud filtering plays a crucial role in ground point extraction in urban environments. It can effectively distinguish ground points from object points, reduce data redundancy, and improve processing efficiency, providing accurate foundational data for urban 3D modeling, environmental monitoring, and intelligent management. However, current point cloud filtering algorithms have significant limitations in handling multi-scale structural complexity and in balancing sparse and dense regions, hindering accurate extraction in complex urban environments. To address these issues, this paper proposes a point cloud filtering algorithm based on cloth simulation and progressive TIN densification (CAP). The algorithm first applies the cloth simulation filtering (CSF) algorithm to perform an initial filtering of the point cloud data and extract the initial ground points. It then constructs a TIN model from the initial ground points, following the concept of the progressive TIN densification (PTD) algorithm, and further refines the ground and object points through point-by-point thresholding. On the urban public point cloud datasets provided by the ISPRS, the CAP algorithm achieves an average total error of 5.90%. For 12 sets of point cloud data in the North Rhine-Westphalia experimental sample area, the CAP algorithm achieves an average total error of 2.86%, which is 2.01% lower than the PTD algorithm and 0.60% lower than the CSF algorithm. The average Kappa coefficient is 94.04%, an improvement of 4.17% and 1.22% over the PTD and CSF algorithms, respectively. These results demonstrate that the CAP algorithm exhibits superior accuracy and adaptability for point cloud filtering tasks in urban environments, with significant application potential. Full article
(This article belongs to the Section Urban Remote Sensing)
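The point-by-point thresholding step in PTD-style densification classically follows Axelsson's criteria: a candidate point is accepted as ground only if its perpendicular distance to the containing TIN facet, and the angles it subtends at the facet's vertices, stay under thresholds. The following is a simplified sketch of that test for a single facet, not the paper's CAP implementation.

```python
import math

def ptd_accepts(p, tri, d_max, angle_max_deg):
    """Simplified PTD acceptance test: accept point p = (x, y, z) as
    ground if its perpendicular distance to the TIN facet `tri` (three
    3D vertices) and the angles it subtends at the vertices are small."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    # Facet normal via the cross product of two edge vectors.
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    px, py, pz = p
    # Perpendicular distance from p to the facet plane.
    d = abs(nx * (px - ax) + ny * (py - ay) + nz * (pz - az)) / norm
    # Angle from each vertex to p, measured against the horizontal.
    angles = [
        math.degrees(math.atan2(d, math.hypot(px - x, py - y)))
        for x, y, _ in tri
    ]
    return d <= d_max and max(angles) <= angle_max_deg
```

Tightening `d_max` and `angle_max_deg` rejects low vegetation and building edges at the cost of thinning ground coverage on steep terrain, which is the balance the densification iterates on.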
20 pages, 7507 KiB  
Article
Undifferenced Ambiguity Resolution for Precise Multi-GNSS Products to Support Global PPP-AR
by Junqiang Li, Jing Guo, Shengyi Xu and Qile Zhao
Remote Sens. 2025, 17(8), 1451; https://doi.org/10.3390/rs17081451 - 18 Apr 2025
Abstract
Precise point positioning ambiguity resolution (PPP-AR) is a key technique for high-precision global navigation satellite system (GNSS) observations, with phase bias products playing a critical role in its implementation. The multi-GNSS experiment analysis center at Wuhan University (WUM) has adopted the undifferenced ambiguity resolution (UDAR) approach to generate high-precision orbit, clock, and observable-specific bias (OSB) products to support PPP-AR since day 162 of 2023. This study presents the analysis strategy employed and assesses the impact of the transition to ambiguity resolution on orbit precision, using metrics such as orbit boundary discontinuities (OBD) and satellite laser ranging (SLR) validation. Additionally, the stability of the OSB products and the overall performance of PPP-AR solutions are evaluated. The OBD shows improvements of 7.1% and 9.5% for GPS and Galileo, respectively, when UDAR is applied. Notably, BDS-3 medium Earth orbit satellites show a remarkable 15.2% improvement over the double-differenced results. For the remaining constellations, however, the improvements are either minimal or there is degradation. Using GPS and GLONASS solutions from the International GNSS Service (IGS) and other solutions from the European Space Agency (ESA) as references, the orbit differences of WUM solutions based on UDAR exhibit a significant reduction. However, the improvements in SLR validation are limited, as the radial orbit precision is primarily influenced by the dynamic model. The narrow-lane ambiguity fixing rate for static PPP-AR, based on data from approximately 430 globally distributed stations, reaches 99.2%, 99.2%, 88.8%, and 98.6% for GPS, Galileo, BDS-2, and BDS-3, respectively. The daily repeatability of station coordinates is approximately 1.4 mm, 1.9 mm, and 3.9 mm in the east, north, and up directions, respectively. Overall, these results demonstrate the effectiveness and potential of WUM's undifferenced ambiguity resolution approach in enhancing GNSS data processing and facilitating PPP-AR applications. Full article
(This article belongs to the Section Earth Observation Data)
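The narrow-lane fixing rates quoted above come from a decision of roughly this shape: round each float ambiguity to the nearest integer and accept the fix only when the fractional residual (and, typically, its formal uncertainty) is small. The sketch below uses illustrative thresholds, not WUM's operational values.

```python
def fix_ambiguity(float_amb, sigma, frac_tol=0.15, sigma_tol=0.1):
    """Round a float narrow-lane ambiguity (in cycles) to the nearest
    integer; accept the fix only if the fractional residual and its
    formal sigma are below the (illustrative) tolerances."""
    n = round(float_amb)
    if abs(float_amb - n) <= frac_tol and sigma <= sigma_tol:
        return n
    return None  # ambiguity stays float (unfixed)
```

The fixing rate is then simply the fraction of ambiguities for which this test succeeds, so tighter tolerances trade a lower rate for fewer wrong fixes.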