Search Results (11,488)

Search Parameters:
Keywords = remote-sensing imaging

26 pages, 707 KB  
Review
Application of Multispectral Imagery and Synthetic Aperture Radar Sensors for Monitoring Algal Blooms: A Review
by Vikash Kumar Mishra, Himanshu Maurya, Fred Nicolls and Amit Kumar Mishra
Phycology 2025, 5(4), 71; https://doi.org/10.3390/phycology5040071 - 2 Nov 2025
Abstract
Water pollution is a growing concern for aquatic ecosystems worldwide, with threats such as plastic waste, nutrient pollution, and oil spills harming biodiversity and impacting human health, fisheries, and local economies. Traditional methods of monitoring water quality, such as ground sampling, are limited in how frequently and widely they can collect data, whereas satellite imagery offers broader and more consistent coverage. This review explores how Multispectral Imagery (MSI) and Synthetic Aperture Radar (SAR), including polarimetric SAR (PolSAR), are used to monitor harmful algal blooms (HABs) and other types of aquatic pollution. It surveys recent advances in satellite sensor technology, highlights the value of combining different data sources (such as MSI and SAR), and discusses the growing use of artificial intelligence for analysing satellite data. Real-world examples from Lake Erie, Vembanad Lake in India, and Korea’s coastal waters show how tools such as the Geostationary Ocean Colour Imager (GOCI) and Environmental Sample Processor (ESP) are being used to track seasonal changes in water quality and support early warning systems. While satellite monitoring still faces challenges such as interference from clouds or water turbidity, continued progress in sensor design, data fusion, and policy support is making remote sensing a key part of managing water health.
18 pages, 3793 KB  
Article
Water Body Identification from Satellite Images Using a Hybrid Evolutionary Algorithm-Optimized U-Net Framework
by Yue Yuan, Peiyang Wei, Zhixiang Qi, Xun Deng, Ji Zhang, Jianhong Gan, Tinghui Chen and Zhibin Li
Biomimetics 2025, 10(11), 732; https://doi.org/10.3390/biomimetics10110732 - 1 Nov 2025
Abstract
Accurate and automated identification of water bodies from satellite imagery is critical for environmental monitoring, water resource management, and disaster response. Current deep learning approaches, however, suffer from a strong dependence on manual hyperparameter tuning, which limits their automation capability and robustness in complex, multi-scale scenarios. To overcome this limitation, this study proposes a fully automated segmentation framework that synergistically integrates an enhanced U-Net model with a novel hybrid evolutionary optimization strategy. Extensive experiments on public Kaggle and Sentinel-2 datasets demonstrate the superior performance of our method, which achieves a Pixel Accuracy of 96.79% and an F1-Score of 94.75, outperforming various mainstream baseline models by over 10% in key metrics. The framework effectively addresses the class imbalance problem and enhances feature representation without human intervention. This work provides a viable and efficient path toward fully automated remote sensing image analysis, with significant potential for application in large-scale water resource monitoring, dynamic environmental assessment, and emergency disaster management. Full article
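The Pixel Accuracy and F1-Score reported above are standard segmentation metrics derived from per-pixel confusion counts. A minimal sketch of how they are computed for a binary water / non-water mask (function and variable names are illustrative, not taken from the authors' code):

```python
# Pixel Accuracy and F1-Score for binary (water / non-water) segmentation,
# computed from true/false positive/negative pixel counts.

def segmentation_metrics(pred, truth):
    """pred, truth: flat sequences of 0/1 labels (1 = water pixel)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    pixel_acc = (tp + tn) / len(truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return pixel_acc, f1

# Six pixels: one false positive, one false negative.
pa, f1 = segmentation_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

In practice these counts are accumulated over every pixel of every validation image before the ratios are taken.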
23 pages, 2222 KB  
Article
Shallow Sea Bathymetric Inversion of Active–Passive Satellite Remote Sensing Data Based on Virtual Control Point Inverse Distance Weighting
by Zhipeng Dong, Junlin Tao, Yanxiong Liu, Yikai Feng, Yilan Chen and Yanli Wang
Remote Sens. 2025, 17(21), 3621; https://doi.org/10.3390/rs17213621 - 31 Oct 2025
Abstract
Satellite-derived bathymetry (SDB) that combines Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) LiDAR data with remote sensing images is hampered by ICESat-2’s linear sampling pattern, which makes it difficult for bathymetric control points to cover the inversion area uniformly. This study proposes a novel virtual control point optimization framework integrating inverse distance weighting (IDW) and spectral confidence analysis (SCA). The methodology first generates baseline bathymetry through semi-empirical band ratio modeling (the control group), then extracts virtual control points via SCA. An optimization scheme based on spectral confidence levels is applied to the control group: high-confidence pixels use a residual correction-based strategy, while low-confidence pixels use IDW interpolation based on virtual control points. Finally, the optimized estimates are fused with the control group through weighting to generate the final bathymetry map (the optimized group). Accuracy assessments over the three research areas revealed a significant increase in accuracy from the control group to the optimized group. Compared with in situ data, the determination coefficient (R2), RMSE, MRE, and MAE in the optimized group are better than 0.83, 1.48 m, 12.36%, and 1.22 m, respectively, and all of these indicators are better than those of the control group. The key innovation lies in overcoming ICESat-2’s spatial sampling limitation through spectral confidence stratification, which uses SCA to generate virtual control points and IDW to adjust low-confidence pixel values. The results also suggest that when applying ICESat-2 data in active–passive-fused SDB, the distribution of training data across the research zone should be adequately considered.
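Inverse distance weighting, the interpolator this framework applies to low-confidence pixels, estimates depth at a query location as a distance-weighted average of nearby (virtual) control points. A generic sketch with the common power parameter p = 2; the authors' exact weighting settings are not reproduced here:

```python
# Inverse distance weighting (IDW): nearer control points get larger weights,
# with weight = 1 / distance**p.

def idw(points, query, p=2.0):
    """points: list of ((x, y), depth); query: (x, y). Returns interpolated depth."""
    num, den = 0.0, 0.0
    for (x, y), z in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return z  # query coincides with a control point
        w = 1.0 / d2 ** (p / 2)
        num += w * z
        den += w
    return num / den

# Query midway between two control points -> mean of their depths.
depth = idw([((0, 0), 10.0), ((2, 0), 20.0)], (1, 0))
```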
26 pages, 4332 KB  
Article
CDSANet: A CNN-ViT-Attention Network for Ship Instance Segmentation
by Weidong Zhu, Piao Wang and Kuifeng Luan
J. Imaging 2025, 11(11), 383; https://doi.org/10.3390/jimaging11110383 - 31 Oct 2025
Abstract
Ship instance segmentation in remote sensing images is essential for maritime applications such as intelligent surveillance and port management. However, this task remains challenging due to dense target distributions, large variations in ship scales and shapes, and limited high-quality datasets. The existing YOLOv8 framework mainly relies on convolutional neural networks and CIoU loss, which are less effective in modeling global–local interactions and producing accurate mask boundaries. To address these issues, we propose CDSANet, a novel one-stage ship instance segmentation network. CDSANet integrates convolutional operations, Vision Transformers, and attention mechanisms within a unified architecture. The backbone adopts a Convolutional Vision Transformer Attention (CVTA) module to enhance both local feature extraction and global context perception. The neck employs dynamic-weighted DOWConv to adaptively handle multi-scale ship instances, while SIoU loss improves localization accuracy and orientation robustness. Additionally, CBAM enhances the network’s focus on salient regions, and a MixUp-based augmentation strategy is used to improve model generalization. Extensive experiments on the proposed VLRSSD dataset demonstrate that CDSANet achieves state-of-the-art performance with a mask AP (50–95) of 75.9%, surpassing the YOLOv8 baseline by 1.8%. Full article
17 pages, 1397 KB  
Article
A Novel Approach for Reliable Classification of Marine Low Cloud Morphologies with Vision–Language Models
by Ehsan Erfani and Farnaz Hosseinpour
Atmosphere 2025, 16(11), 1252; https://doi.org/10.3390/atmos16111252 - 31 Oct 2025
Abstract
Marine low clouds have a strong impact on the Earth system but remain a major source of uncertainty in the anthropogenic radiative forcing simulated by general circulation models. This uncertainty arises from incomplete understanding of the many processes controlling their evolution and interactions. A key feature of these clouds is their diverse mesoscale morphologies, which are closely tied to their microphysical and radiative properties but remain difficult to characterize with satellite retrievals and numerical models. Here, we develop and apply a vision–language model (VLM) to classify marine low cloud morphologies using two independent datasets based on Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery: (1) mesoscale cellular convection types of sugar, gravel, fish, and flower (SGFF; 8800 total samples) and (2) marine stratocumulus (Sc) types of stratus, closed cells, open cells, and other cells (260 total samples). By conditioning frozen image encoders on descriptive prompts, the VLM leverages multimodal priors learned from large-scale image–text training, making it less sensitive to limited sample size. Results show that k-fold cross-validation of the VLM achieves an overall accuracy of 0.84 for SGFF, comparable to prior deep learning benchmarks for the same cloud types, and retains robust performance when the SGFF training set is reduced. For the Sc dataset, the VLM attains 0.86 accuracy, whereas an image-only model is unreliable with such a limited training set. These findings highlight the potential of VLMs as efficient and accurate tools for cloud classification from very few samples, offering new opportunities for satellite remote sensing and climate model evaluation.
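The prompt-conditioning idea behind such a VLM can be illustrated without any real encoder: embed the image and one descriptive text prompt per class into a shared space, then assign the class whose prompt embedding is most similar to the image embedding. The toy vectors below stand in for the outputs of a frozen image/text encoder such as CLIP; nothing here reproduces the paper's actual model:

```python
# Zero-shot classification by cosine similarity between an image embedding
# and per-class text-prompt embeddings in a shared space.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify(image_emb, prompt_embs):
    """prompt_embs: {class_name: embedding}. Returns the best-matching class."""
    return max(prompt_embs, key=lambda c: cosine(image_emb, prompt_embs[c]))

# Toy 3-D embeddings standing in for encoder outputs.
prompts = {
    "closed cells": [0.9, 0.1, 0.0],
    "open cells":   [0.1, 0.9, 0.0],
    "stratus":      [0.0, 0.1, 0.9],
}
label = classify([0.2, 0.8, 0.1], prompts)
```

Because only the prompts change between tasks, the same frozen encoders serve both the SGFF and Sc label sets.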
25 pages, 16046 KB  
Article
UAV-Based Multimodal Monitoring of Tea Anthracnose with Temporal Standardization
by Qimeng Yu, Jingcheng Zhang, Lin Yuan, Xin Li, Fanguo Zeng, Ke Xu, Wenjiang Huang and Zhongting Shen
Agriculture 2025, 15(21), 2270; https://doi.org/10.3390/agriculture15212270 - 31 Oct 2025
Abstract
Tea Anthracnose (TA), caused by fungi of the genus Colletotrichum, is one of the major threats to global tea production. UAV remote sensing has been explored for non-destructive and high-efficiency monitoring of diseases in tea plantations. However, variations in illumination, background, and meteorological factors undermine the stability of cross-temporal data. Data processing and modeling complexity further limits model generalizability and practical application. This study introduced a cross-temporal, generalizable disease monitoring approach based on UAV multimodal data coupled with relative-difference standardization. In an experimental tea garden, we collected multispectral, thermal infrared, and RGB images and extracted four classes of features: spectral (Sp), thermal (Th), texture (Te), and color (Co). The Normalized Difference Vegetation Index (NDVI) was used to identify reference areas and standardize features, which significantly reduced the relative differences in cross-temporal features. Additionally, we developed a vegetation–soil relative temperature (VSRT) index, which exhibits higher temporal-phase consistency than the conventional normalized relative canopy temperature (NRCT). A multimodal optimal feature set was constructed through sensitivity analysis based on the four feature categories. For different modality combinations (single and fused), three machine learning algorithms, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP), were selected to evaluate disease classification performance due to their low computational burden and ease of deployment. Results indicate that the “Sp + Th” combination achieved the highest accuracy (95.51%), with KNN (95.51%) outperforming SVM (94.23%) and MLP (92.95%). Moreover, under the optimal feature combination and KNN algorithm, the model achieved high generalizability (86.41%) on independent temporal data. This study demonstrates that fusing spectral and thermal features with temporal standardization, combined with the simple and effective KNN algorithm, achieves accurate and robust tea anthracnose monitoring, providing a practical solution for efficient and generalizable disease management in tea plantations.
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
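The NDVI-based reference standardization at the heart of the approach can be sketched in a few lines: compute NDVI from red and near-infrared reflectance, pick a stable reference area, and express each feature relative to the reference so that cross-date illumination differences largely cancel. The reference-selection rule below (single highest-NDVI pixel) is a simplification of the paper's reference-area procedure:

```python
# Relative-difference standardization against an NDVI-selected reference.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def standardize(feature_values, ndvi_values):
    """Divide every feature value by the value at the highest-NDVI (reference) pixel."""
    ref_idx = max(range(len(ndvi_values)), key=lambda i: ndvi_values[i])
    ref = feature_values[ref_idx]
    return [v / ref for v in feature_values]

# Three pixels: the first (densest vegetation) becomes the reference.
ndvis = [ndvi(0.8, 0.1), ndvi(0.5, 0.3), ndvi(0.4, 0.35)]
relative = standardize([30.0, 33.0, 36.0], ndvis)
```

Because each date is divided by its own reference, an additive or multiplicative shift affecting a whole image (e.g., different illumination) drops out of the ratios.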
21 pages, 3010 KB  
Article
A Unified Framework with Dynamic Kernel Learning for Bidirectional Feature Resampling in Remote Sensing Images
by Jiajun Xiang, Zixuan Xiao, Shuojie Wang, Ruigang Fu and Ping Zhong
Remote Sens. 2025, 17(21), 3599; https://doi.org/10.3390/rs17213599 - 30 Oct 2025
Abstract
The inherent multiscale nature of objects poses a fundamental challenge in remote sensing object detection. To address this, feature pyramids have been widely adopted as a key architectural component. However, the effectiveness of these pyramids critically depends on the sampling operations used to construct them, highlighting the need to move beyond traditional fixed-kernel methods. While conventional interpolation approaches (e.g., nearest-neighbor and bilinear) are computationally efficient, their content-agnostic nature often leads to detail loss and artifacts. Recent dynamic sampling operators improve performance through content-aware mechanisms, yet they typically incur substantial computational and parametric costs, hindering their applicability in resource-constrained scenarios. To overcome these limitations, we propose Lurker, a learned and unified resampling kernel that supports both upsampling and downsampling within a consistent framework. Lurker constructs a compact source kernel space and employs bilinear interpolation to generate adaptive kernels, enabling content-aware feature reassembly while maintaining a lightweight parameter footprint. Extensive experiments on the DIOR and DOTA datasets demonstrate that Lurker achieves a favorable trade-off between detection accuracy and efficiency, outperforming existing resampling methods in terms of both accuracy and parameter efficiency, making it especially suitable for remote sensing object detection applications. Full article
(This article belongs to the Section Remote Sensing Image Processing)
21 pages, 4033 KB  
Article
Detection of Wheat Powdery Mildew by Combined MVO_RF and Polarized Remote Sensing
by Qijie Qian, Tianquan Liang, Zibing Wu, Xinru Chen, Qingxin Tang and Quanzhou Yu
Agriculture 2025, 15(21), 2268; https://doi.org/10.3390/agriculture15212268 - 30 Oct 2025
Abstract
Wheat powdery mildew poses a serious threat to crop growth and yield, highlighting the critical need for accurate detection to ensure food security and maintain agricultural productivity. This study explores the integration of polarization remote sensing with a Multi-Verse Optimizer (MVO)–enhanced Random Forest (RF) model for disease detection. Polarization imaging equipment was used to extract key polarization parameters, including the degree of polarization (DOP) and angle of polarization (AOP), from wheat leaves to capture subtle structural differences between healthy and diseased tissues. The MVO algorithm was employed to optimize RF hyperparameters, thereby improving classification performance and addressing the limitations of manual parameter tuning and conventional machine learning methods. Several machine learning algorithms were also evaluated for comparison. The results indicate that the proposed MVO_RF approach outperformed traditional methods, achieving an F1-score of 0.9715, a Kappa coefficient of 0.9797, and an overall accuracy of 0.9878. These findings demonstrate that the integration of polarization characteristics with MVO-optimized machine learning establishes a robust and efficient framework for monitoring wheat powdery mildew. More importantly, it facilitates early in-field disease warnings, enhances the accuracy and efficiency of targeted pesticide application, and offers quantitative decision-making support for smart agricultural management and disease prevention strategies. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
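The two polarization parameters the study extracts, degree of polarization (DOP) and angle of polarization (AOP), follow standard Stokes-parameter formulas; the sketch below shows only these formulas, not the imaging pipeline:

```python
# DOP and AOP from Stokes parameters (S0 = total intensity, S1/S2 = linear
# polarization components, S3 = circular component).
import math

def dop(s0, s1, s2, s3=0.0):
    """Degree of polarization: fraction of the light that is polarized."""
    return math.sqrt(s1**2 + s2**2 + s3**2) / s0

def aop(s1, s2):
    """Angle of linear polarization, in degrees."""
    return 0.5 * math.degrees(math.atan2(s2, s1))

d = dop(1.0, 0.6, 0.8)  # fully linearly polarized example
a = aop(0.0, 1.0)       # 45-degree linear polarization
```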
24 pages, 15753 KB  
Article
A Novel Canopy Height Mapping Method Based on UNet++ Deep Neural Network and GEDI, Sentinel-1, Sentinel-2 Data
by Xingsheng Deng, Xu Zhu, Zhongan Tang and Yangsheng You
Forests 2025, 16(11), 1663; https://doi.org/10.3390/f16111663 - 30 Oct 2025
Abstract
As a vital carbon reservoir in terrestrial ecosystems, forest canopy height plays a pivotal role in determining the precision of biomass estimation and carbon storage calculations. Acquiring an accurate Canopy Height Map (CHM) is crucial for building carbon budget models at regional and global scales. A novel UNet++ deep-learning model was constructed using Sentinel-1 and Sentinel-2 multispectral remote sensing images to estimate forest canopy height from full-waveform LiDAR measurements made by the Global Ecosystem Dynamics Investigation (GEDI) satellite. A 10 m resolution CHM was generated for Chaling County, China. The model was evaluated using independent validation samples, achieving an R2 of 0.58 and a Root Mean Square Error (RMSE) of 3.38 m. The relationships between multiple Relative Height (RH) metrics and field validation data were examined; RH98 showed the strongest correlation, with an R2 of 0.56 and RMSE of 5.83 m. Six preprocessing algorithms for GEDI data were evaluated, and the results demonstrated that RH98 processed with the ‘a1’ algorithm achieved the best agreement with the validation data, yielding an R2 of 0.55 and RMSE of 5.54 m. The impacts on inversion accuracy of vegetation coverage, assessed through the Normalized Difference Vegetation Index (NDVI), and of terrain slope were also explored. The highest accuracy was observed where NDVI ranged from 0.25 to 0.50 (R2 = 0.77, RMSE = 2.27 m) and on slopes between 0° and 10° (R2 = 0.61, RMSE = 2.99 m). These results highlight that the choice of GEDI preprocessing method and RH metric, as well as vegetation density and terrain slope, all significantly affect the accuracy of canopy height estimation.
(This article belongs to the Special Issue Applications of LiDAR and Photogrammetry for Forests)
22 pages, 10839 KB  
Article
Multi-Pattern Scanning Mamba for Cloud Removal
by Xiaomeng Xin, Ye Deng, Wenli Huang, Yang Wu, Jie Fang and Jinjun Wang
Remote Sens. 2025, 17(21), 3593; https://doi.org/10.3390/rs17213593 - 30 Oct 2025
Abstract
Change detection in remote sensing relies on clean multi-temporal images, but cloud cover can considerably degrade image quality. Cloud removal, a critical image-restoration task, demands effective modeling of long-range spatial dependencies to reconstruct information under cloud occlusions. While Transformer-based models excel at such spatial modeling, their quadratic computational complexity limits practical application. The recently proposed Mamba, a state space model, offers a computationally efficient alternative for long-range modeling, but its inherent 1D sequential processing is ill-suited to capturing complex 2D spatial contexts in images. To bridge this gap, we propose the multi-pattern scanning Mamba (MPSM) block. Our MPSM block adapts the Mamba architecture for vision tasks by introducing a set of diverse scanning patterns that traverse features along horizontal, vertical, and diagonal paths. This multi-directional approach ensures that each feature aggregates comprehensive contextual information from the entire spatial domain. Furthermore, we introduce a dynamic path-aware (DPA) mechanism to adaptively recalibrate feature contributions from different scanning paths, enhancing the model’s focus on position-sensitive information. To effectively capture both global structures and local details, our MPSM blocks are embedded within a U-Net architecture enhanced with multi-scale supervision. Extensive experiments on the RICE1, RICE2, and T-CLOUD datasets demonstrate that our method achieves state-of-the-art performance while maintaining favorable computational efficiency.
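The multi-pattern scanning idea can be illustrated on a toy index grid: flatten the 2D grid into 1D sequences along horizontal, vertical, and diagonal paths, so a sequential state-space model visits each position in several spatial orders. The real block operates on feature tensors; this sketch shows only the traversal orders:

```python
# Three scan orders over an h-by-w grid of flat indices (row * w + col).

def scans(h, w):
    idx = [[r * w + c for c in range(w)] for r in range(h)]
    horizontal = [idx[r][c] for r in range(h) for c in range(w)]   # row-major
    vertical = [idx[r][c] for c in range(w) for r in range(h)]     # column-major
    # anti-diagonals: positions with constant r + c, corner to corner
    diagonal = [idx[r][s - r] for s in range(h + w - 1)
                for r in range(max(0, s - w + 1), min(h, s + 1))]
    return horizontal, vertical, diagonal

hor, ver, dia = scans(2, 3)  # 2x3 grid with indices 0..5
```

Each order is a permutation of the same positions, so a 1D sequence model run once per order sees every pixel with three different notions of "neighborhood".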
37 pages, 25662 KB  
Article
A Hyperspectral Remote Sensing Image Encryption Algorithm Based on a Novel Two-Dimensional Hyperchaotic Map
by Zongyue Bai, Qingzhan Zhao, Wenzhong Tian, Xuewen Wang, Jingyang Li and Yuzhen Wu
Entropy 2025, 27(11), 1117; https://doi.org/10.3390/e27111117 - 30 Oct 2025
Abstract
With the rapid advancement of hyperspectral remote sensing technology, the security of hyperspectral images (HSIs) has become a critical concern. However, traditional image encryption methods—designed primarily for grayscale or RGB images—fail to address the high dimensionality, large data volume, and spectral-domain characteristics inherent to HSIs. Existing chaotic encryption schemes often suffer from limited chaotic performance, narrow parameter ranges, and inadequate spectral protection, leaving HSIs vulnerable to spectral feature extraction and statistical attacks. To overcome these limitations, this paper proposes a novel hyperspectral image encryption algorithm based on a newly designed two-dimensional cross-coupled hyperchaotic map (2D-CSCM), which synergistically integrates Cubic, Sinusoidal, and Chebyshev maps. The 2D-CSCM exhibits superior hyperchaotic behavior, including a wider hyperchaotic parameter range, enhanced randomness, and higher complexity, as validated by Lyapunov exponents, sample entropy, and NIST tests. Building on this, a layered encryption framework is introduced: spectral-band scrambling to conceal spectral curves while preserving spatial structure, spatial pixel permutation to disrupt correlation, and a bit-level diffusion mechanism based on dynamic DNA encoding, specifically designed to secure high bit-depth digital number (DN) values (typically >8 bits). Experimental results on multiple HSI datasets demonstrate that the proposed algorithm achieves near-ideal information entropy (up to 15.8107 for 16-bit data), negligible adjacent-pixel correlation (below 0.01), and strong resistance to statistical, cropping, and differential attacks (NPCR ≈ 99.998%, UACI ≈ 33.30%). The algorithm not only ensures comprehensive encryption of both spectral and spatial information but also supports lossless decryption, offering a robust and practical solution for secure storage and transmission of hyperspectral remote sensing imagery. Full article
(This article belongs to the Section Signal and Data Analysis)
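NPCR and UACI, the differential-attack metrics quoted in the abstract, have standard definitions: NPCR is the percentage of pixel positions at which two cipher images differ, and UACI is their mean absolute intensity difference normalized by the maximum pixel value. A sketch for flat 8-bit images (the paper's 16-bit data would use max_val = 65535):

```python
# Number of Pixels Change Rate (NPCR) and Unified Average Changing
# Intensity (UACI) between two cipher images, both as percentages.

def npcr(c1, c2):
    return 100.0 * sum(1 for a, b in zip(c1, c2) if a != b) / len(c1)

def uaci(c1, c2, max_val=255):
    return 100.0 * sum(abs(a - b) for a, b in zip(c1, c2)) / (max_val * len(c1))

c1 = [0, 100, 200, 255]
c2 = [0, 150, 100, 255]
n = npcr(c1, c2)  # 2 of 4 pixels differ
u = uaci(c1, c2)
```

For a secure cipher over 8-bit images the expected values are roughly NPCR ≈ 99.6% and UACI ≈ 33.46%, which is why the reported 99.998% and 33.30% indicate strong differential resistance.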
34 pages, 7677 KB  
Article
JSPSR: Joint Spatial Propagation Super-Resolution Networks for Enhancement of Bare-Earth Digital Elevation Models from Global Data
by Xiandong Cai and Matthew D. Wilson
Remote Sens. 2025, 17(21), 3591; https://doi.org/10.3390/rs17213591 - 30 Oct 2025
Abstract
(1) Background: Digital Elevation Models (DEMs) encompass digital bare earth surface representations that are essential for spatial data analysis, such as hydrological and geological modelling, as well as for other applications, such as agriculture and environmental management. However, available bare-earth DEMs can have limited coverage or accessibility. Moreover, the majority of available global DEMs have lower spatial resolutions (∼30–90 m) and contain errors introduced by surface features such as buildings and vegetation. (2) Methods: This research presents an innovative method to convert global DEMs to bare-earth DEMs while enhancing their spatial resolution, as measured by the improved vertical accuracy of each pixel combined with reduced pixel size. We propose the Joint Spatial Propagation Super-Resolution network (JSPSR), which integrates Guided Image Filtering (GIF) and Spatial Propagation Network (SPN). By leveraging guidance features extracted from remote sensing images, with or without auxiliary spatial data, our method can correct elevation errors and enhance the spatial resolution of DEMs. We developed a dataset for real-world bare-earth DEM Super-Resolution (SR) problems in low-relief areas utilising open-access data. Experiments were conducted on the dataset using JSPSR and other methods to predict 3 m and 8 m spatial resolution DEMs from 30 m spatial resolution Copernicus GLO-30 DEMs. (3) Results: JSPSR improved prediction accuracy by 71.74% on Root Mean Squared Error (RMSE) and reconstruction quality by 22.9% on Peak Signal-to-Noise Ratio (PSNR) compared to bicubic interpolated GLO-30 DEMs, and achieves 56.03% and 13.8% improvement on the same metrics against a baseline Single Image Super Resolution (SISR) method. Overall RMSE was 1.06 m at 8 m spatial resolution and 1.1 m at 3 m, compared to 3.8 m for GLO-30, 1.8 m for FABDEM and 1.3 m for FathomDEM, at either resolution. (4) Conclusions: JSPSR outperforms other methods in bare-earth DEM super-resolution tasks, with improved elevation accuracy compared to other state-of-the-art globally available datasets.
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)
20 pages, 8688 KB  
Article
DE-YOLOv13-S: Research on a Biomimetic Vision-Based Model for Yield Detection of Yunnan Large-Leaf Tea Trees
by Shihao Zhang, Xiaoxue Guo, Meng Tan, Chunhua Yang, Zejun Wang, Gongming Li and Baijuan Wang
Biomimetics 2025, 10(11), 724; https://doi.org/10.3390/biomimetics10110724 - 30 Oct 2025
Abstract
To address the challenges of variable target scale, complex background, blurred images, and severe occlusion in yield detection for Yunnan large-leaf tea trees, this study proposes DE-YOLOv13-S, a deep learning network that integrates the visual mechanisms of primates. DynamicConv was used to model the dynamic adjustment of effective receptive field and channel gain in the primate visual system. Efficient Mixed-pooling Channel Attention was introduced to simulate the primate visual system’s strategy of ‘global gain control in parallel with selective integration’. Scale-based Dynamic Loss was used to simulate the primate foveation mechanism, which significantly improved the positioning accuracy and robustness of Yunnan large-leaf tea tree yield detection. The results show that the Box Loss, Cls Loss, and DFL Loss of the DE-YOLOv13-S network decreased by 18.75%, 3.70%, and 2.54% on the training set, and by 18.48%, 14.29%, and 7.46% on the test set, respectively. Compared with YOLOv13, its parameters and gradients increase by only 2.06 M, while computational complexity is reduced by 0.2 G FLOPs; precision, recall, and mAP increase by 3.78%, 2.04%, and 3.35%, respectively. The improved DE-YOLOv13-S network not only provides an efficient and stable yield detection solution for the intelligent management and high-quality development of tea gardens, but also offers solid technical support for the deep integration of bionic vision and agricultural remote sensing.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing 2025)
27 pages, 3492 KB  
Article
Filter-Wise Mask Pruning and FPGA Acceleration for Object Classification and Detection
by Wenjing He, Shaohui Mei, Jian Hu, Lingling Ma, Shiqi Hao and Zhihan Lv
Remote Sens. 2025, 17(21), 3582; https://doi.org/10.3390/rs17213582 - 29 Oct 2025
Abstract
Pruning and acceleration have become essential and promising techniques for convolutional neural networks (CNNs) in remote sensing image processing, especially for deployment on resource-constrained devices. However, maintaining model accuracy while achieving satisfactory acceleration remains a challenging and valuable problem. To break this limitation, we introduce a novel pruning pattern, the filter-wise mask, which enforces extra filter-wise structural constraints on pattern-based pruning and thereby achieves the benefits of both unstructured and structured pruning. The newly introduced filter-wise mask enhances fine-grained sparsity with more hardware-friendly regularity. We further design an acceleration architecture with optimized calculation parallelism and memory access, aiming to fully translate weight pruning into hardware performance gain. The proposed pruning method is first proven on classification networks: the pruning rate reaches 75.1% for VGG-16 and 84.6% for ResNet-50 without compromising accuracy. We then apply our method to the widely used you only look once (YOLO) object detection model. On the aerial image dataset, the pruned YOLOv5s achieves a pruning rate of 53.43% with a slight accuracy degradation of 0.6%. Meanwhile, we implement the acceleration architecture on a field-programmable gate array (FPGA) to evaluate its practical execution performance. The throughput reaches up to 809.46 MOPS. The pruned networks achieve speedups of 2.23× and 4.4× at compression rates of 2.25× and 4.5×, respectively, effectively converting model compression into execution speedup. The proposed pruning and acceleration approach provides crucial technology to facilitate the application of remote sensing with CNNs, especially in scenarios such as on-board real-time processing, emergency response, and low-cost monitoring. Full article
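The filter-wise mask idea, as the abstract presents it, is that every channel within a filter shares one sparsity pattern, giving hardware-friendly regularity while keeping fine-grained sparsity. A hedged NumPy sketch of the selection step follows; the pattern set and the magnitude-based selection criterion are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def filter_wise_mask_prune(weights, patterns):
    """Prune conv weights (F, C, k, k) with filter-wise pattern masks.

    patterns: (P, k, k) binary masks. Every channel within a filter
    shares one pattern (the filter-wise regularity); per filter, the
    pattern preserving the most weight magnitude is kept.
    """
    pruned = np.zeros_like(weights)
    chosen = []
    for f in range(weights.shape[0]):
        mag = np.abs(weights[f]).sum(axis=0)             # (k, k) magnitude map
        best = int(np.argmax([(mag * p).sum() for p in patterns]))
        chosen.append(best)
        pruned[f] = weights[f] * patterns[best]          # mask broadcast over C
    return pruned, chosen
```

Because the kept positions repeat identically across all channels of a filter, an accelerator can skip the zeroed positions with one index list per filter instead of per weight, which is what makes the sparsity translate into actual speedup.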
33 pages, 3814 KB  
Article
Evaluating Various Energy Balance Aggregation Schemes in Cotton Using Unoccupied Aerial Systems (UASs)-Based Latent Heat Flux Estimates
by Haly L. Neely, Cristine L.S. Morgan, Binayak P. Mohanty and Chenghai Yang
Remote Sens. 2025, 17(21), 3579; https://doi.org/10.3390/rs17213579 - 29 Oct 2025
Abstract
Daily evapotranspiration (ET) estimated from an unoccupied aerial system (UAS) could help improve irrigation practices, but its spatial resolution needs to be upscaled to coarser pixel resolutions before applying surface energy balance models. The purpose of this study was to evaluate the impact of various energy balance-based aggregation schemes on generating spatially distributed latent heat flux (LE), and to compare the results with existing occupied aircraft and satellite remote sensing platforms. In 2017, UAS multispectral and thermal imagery, along with ground truth data, were collected at various cotton growth stages. These data sources were combined to model LE using a Two-Source Energy Balance Priestley–Taylor (TSEB-PT) model. Several UAS aggregation schemes were tested, varying both the mode of aggregation (i.e., input image vs. output flux) and the averaging scheme (i.e., simple aggregation vs. Box–Cox). Results indicate that output flux aggregation with Box–Cox averaging produced the lowest relative upscaling pixel-scale variability in LE and the lowest absolute prediction errors (relative to eddy covariance flux tower measurements). Output flux aggregation with simple averaging was also more accurate in reproducing LE from occupied aircraft and satellite imagery. Although results are limited to a single site, UAS LE estimates were reliably aggregated to coarser pixel resolutions, enabling faster image processing for operational applications. Full article
(This article belongs to the Special Issue Remote Sensing Data Fusion and Applications (2nd Edition))
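Output-flux aggregation with Box–Cox averaging, as described in the abstract, averages the fine-resolution flux in a transformed space rather than directly. A minimal sketch follows, assuming a positive flux field, a square block factor, and an illustrative λ = 0.5; the paper's fitted transform parameter and exact scheme may differ.

```python
import numpy as np

def boxcox_aggregate(flux, block, lam=0.5):
    """Upscale a fine-resolution flux field (H, W) to coarse pixels by
    averaging in Box-Cox transformed space (requires lam != 0, flux > 0)."""
    H, W = flux.shape
    tiles = flux.reshape(H // block, block, W // block, block)
    t = (tiles ** lam - 1.0) / lam              # forward Box-Cox transform
    tbar = t.mean(axis=(1, 3))                  # average within each coarse pixel
    return (lam * tbar + 1.0) ** (1.0 / lam)    # inverse transform back to flux
```

For a constant field the scheme reduces to simple averaging; the two differ only where sub-pixel LE is skewed, which is where the transformed-space mean damps the influence of extreme fine-scale values.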