Search Results (99)

Search Parameters:
Keywords = hyperspectral imaging super resolution

19 pages, 7642 KB  
Article
A Graph-Regularized Double-Path Interactive Spectral Super-Resolution Network for Hyperspectral Image Reconstruction
by Shuo Wang, Ting Hu, Siyuan Cheng, Zhe Li, Zhonghua Sun, Kebin Jia and Jinchao Feng
Remote Sens. 2026, 18(6), 875; https://doi.org/10.3390/rs18060875 - 12 Mar 2026
Viewed by 250
Abstract
Deep learning has demonstrated outstanding potential for the spectral super-resolution (S2R) reconstruction of multispectral images (MSIs). However, alleviating spectral distortion during S2R reconstruction remains challenging. Leveraging the representational strengths of graph structures, a graph-regularized double-path interactive S2R network (GDIS2Net) consisting of two parallel branches is proposed to reconstruct hyperspectral images (HSIs) from MSIs. An interactive residual module is designed as the backbone of the S2R network to facilitate feature interaction between the two branches, while an enhanced residual module is constructed for further feature fusion. In addition, a new loss function that accounts for spectral continuity is proposed to optimize GDIS2Net. Experimental analyses show that the proposed GDIS2Net outperforms state-of-the-art methods on both simulated and real datasets. Full article
(This article belongs to the Section Remote Sensing Image Processing)
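A loss that "accounts for spectral continuity", as mentioned in this abstract, is commonly realized as a penalty on band-to-band differences. The NumPy sketch below shows one generic form of such a term; it is an illustration of the idea, not GDIS2Net's published loss, and the function name and weighting are assumptions:

```python
import numpy as np

def spectral_continuity_loss(pred, target):
    """Penalize mismatch in band-to-band differences (generic sketch).

    pred, target: arrays of shape (bands, height, width).
    """
    # First-order differences along the spectral axis.
    d_pred = np.diff(pred, axis=0)
    d_target = np.diff(target, axis=0)
    # Mean squared error between the two difference cubes.
    return float(np.mean((d_pred - d_target) ** 2))
```

A constant radiometric offset leaves band-to-band differences unchanged, so a term like this targets the shape of each spectrum rather than its absolute level, which is why it is paired with a conventional reconstruction loss rather than used alone.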

23 pages, 4825 KB  
Article
Degradation-Aware Dynamic Kernel Generation Network for Hyperspectral Super-Resolution
by Huadong Liu, Haifeng Liang and Qian Wang
Sensors 2026, 26(4), 1362; https://doi.org/10.3390/s26041362 - 20 Feb 2026
Viewed by 424
Abstract
To address the difficulty of reconstructing high-resolution hyperspectral images under dynamic degradation characteristics, the poor adaptability of traditional static degradation models, and oversimplified noise modeling, this paper proposes a degradation-aware dynamic Fourier network (DADFN) for hyperspectral super-resolution. This method employs a dual-channel split module to decouple and encode spectral and spatial degradation information, realizes the independent mapping of spectral and spatial features via a multi-layer perceptron module, and integrates a spectral–spatial dynamic cross-attention fusion module to generate 3D dynamic blur kernels tailored to different bands and spatial positions. The proposed method designs a multi-scale spectral–spatial collaborative constraint (MSSCC) loss function to ensure the coordinated optimization of modeling rationality, spectral continuity, and spatial detail fidelity. Experiments on the CAVE and Harvard benchmark datasets show that DADFN outperforms the baseline methods in all evaluation metrics, demonstrating strong robustness in real-world complex degradation scenarios. This method provides a novel solution balancing physical interpretability and performance for hyperspectral image super-resolution and holds significant value for applications in remote sensing monitoring, precision agriculture, and other related fields. Full article
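The degradation model underlying this line of work, band-wise blur followed by downsampling, can be illustrated with a deliberately simplified sketch. Unlike DADFN's per-band, per-position dynamic kernels, this version assumes a single static kernel shared by all bands; the function name and signature are made up for illustration:

```python
import numpy as np

def degrade(hsi, kernel, stride=4):
    """Blur each band with one 2D kernel, then downsample (static sketch).

    hsi: (bands, H, W); kernel: (k, k), assumed normalized to sum to 1;
    stride: the spatial scale factor.
    """
    _, h, w = hsi.shape
    k = kernel.shape[0]
    pad = k // 2
    out = []
    for band in hsi:
        # Edge-replicate padding so the blur is defined at the borders.
        padded = np.pad(band, pad, mode="edge")
        blurred = np.zeros_like(band)
        for i in range(h):
            for j in range(w):
                blurred[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
        # Keep every stride-th sample in both spatial directions.
        out.append(blurred[::stride, ::stride])
    return np.stack(out)
```

DADFN's contribution, per the abstract, is precisely to replace the fixed `kernel` argument here with kernels predicted from degradation-aware features for each band and spatial position.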

67 pages, 13903 KB  
Article
A Multi-Sensor Framework for Methane Detection and Flux Estimation with Scale-Aware Plume Segmentation and Uncertainty Propagation from High-Resolution Spaceborne Imaging Spectrometers
by Alvise Ferrari, Valerio Pampanoni, Giovanni Laneve, Raul Alejandro Carvajal Tellez and Simone Saquella
Methane 2026, 5(1), 10; https://doi.org/10.3390/methane5010010 - 13 Feb 2026
Viewed by 644
Abstract
Methane is the second most important contributor to global warming, and monitoring super-emitters from space is critical for climate mitigation. Despite the advancements in hyperspectral remote sensing, comparing methane observations across diverse imaging spectrometers remains a challenging task. Different retrieval algorithms, plume segmentation techniques and uncertainty treatments make it very hard to perform fair comparisons between different products. To overcome these difficulties, this study presents HyGAS (Hyperspectral Gas Analysis Suite), a unified, open-source framework for sensor-agnostic methane retrieval and flux estimation. Starting from the established clutter-matched-filter (CMF) formalism and a physical calibration in concentration–path-length units (ppm·m), we propagate both instrument noise and surface-driven background variability consistently from methane enhancement to Integrated Mass Enhancement (IME) and flux. The framework further includes a spectrally matched background-selection strategy, scale-aware segmentation with fixed physical criteria across resolutions, and emission-rate estimation via an IME–Ueff approach informed by Large Eddy Simulation (LES). We demonstrate the framework on near-simultaneous observations of landfills and gas infrastructure in Argentina, Turkmenistan, and Pakistan, spanning Level-1 radiance workflows (PRISMA, EnMAP, Tanager-1) and Level-2 methane products (EMIT, GHGSat). The standardised chain enables systematic inter-comparison of methane enhancement products and reduces methodological bias, supporting robust multi-mission assessment and future global monitoring. Full article
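The clutter-matched-filter (CMF) formalism referenced above has a standard per-pixel form: score each spectrum against a target signature, whitened by the background covariance. The sketch below is a textbook version of that formalism, not HyGAS itself; the regularization constant and the normalization choice are assumptions:

```python
import numpy as np

def clutter_matched_filter(pixels, target):
    """Per-pixel matched-filter score against a background clutter model.

    pixels: (n, bands) radiance spectra; target: (bands,) signature,
    e.g. a unit methane absorption spectrum. Returns (n,) scores.
    """
    mu = pixels.mean(axis=0)
    centered = pixels - mu
    cov = np.cov(centered, rowvar=False)    # background covariance estimate
    cov += 1e-6 * np.eye(cov.shape[0])      # regularize for invertibility
    w = np.linalg.solve(cov, target)        # Sigma^{-1} t without explicit inverse
    return centered @ w / (target @ w)      # normalized per-pixel enhancement
```

With `target` chosen as a unit-absorption methane spectrum, this normalization makes the score interpretable as an enhancement magnitude, which is consistent with the abstract's calibration in concentration–path-length (ppm·m) units.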

23 pages, 14619 KB  
Article
Edge-Distilled and Local–Global Feature Selection Network for Hyperspectral Image Super-Resolution
by Xinzhao Li, Mengzhe Fan, Xiaoqing Zheng and Jiandong Shang
Sensors 2026, 26(3), 1055; https://doi.org/10.3390/s26031055 - 6 Feb 2026
Viewed by 421
Abstract
In recent years, the methods based on convolutional neural networks have achieved significant progress in hyperspectral image super-resolution. However, existing methods still face two key challenges: (1) they fail to fully extract edge detail information from hyperspectral images; (2) they struggle to simultaneously capture local and global features. To address these issues, we propose an Edge-Distilled and Local–Global Feature Selection network (EDLGFS) for hyperspectral image super-resolution. This network aims to effectively leverage edge details and local–global features, thereby enhancing super-resolution reconstruction quality. Firstly, we design an edge-guided super-resolution network based on knowledge distillation. This network transfers edge knowledge to improve the reconstruction. Secondly, we propose a Local–Global Feature Selection mechanism (LGFS), which integrates convolutions of different sizes with the self-attention mechanism. This design models spatial correlations across features with different receptive fields, achieving efficient feature selection to more effectively capture local and global features. Finally, we propose a dynamic loss mechanism to more effectively balance the contribution of each loss term. Extensive experimental results on three public datasets demonstrate that the proposed EDLGFS achieves superior super-resolution reconstruction quality. Full article
(This article belongs to the Special Issue Intelligent Sensing and Artificial Intelligence for Image Processing)

24 pages, 12770 KB  
Article
Multiscale RGB-Guided Fusion for Hyperspectral Image Super-Resolution
by Matteo Kolyszko, Marco Buzzelli, Simone Bianco and Raimondo Schettini
J. Imaging 2026, 12(2), 61; https://doi.org/10.3390/jimaging12020061 - 28 Jan 2026
Viewed by 644
Abstract
Hyperspectral imaging (HSI) enables fine spectral analysis but is often limited by low spatial resolution due to sensor constraints. To address this, we propose CGNet, a color-guided hyperspectral super-resolution network that leverages complementary information from low-resolution hyperspectral inputs and high-resolution RGB images. CGNet adopts a dual-encoder design: the RGB encoder extracts hierarchical spatial features, while the HSI encoder progressively upsamples spectral features. A multi-scale fusion decoder then combines both modalities in a coarse-to-fine manner to reconstruct the high-resolution HSI. Training is driven by a hybrid loss that balances L1 and Spectral Angle Mapper (SAM), which ablation studies confirm as the most effective formulation. Experiments on two benchmarks, ARAD1K and StereoMSI, at ×4 and ×6 upscaling factors demonstrate that CGNet consistently outperforms state-of-the-art baselines. CGNet achieves higher PSNR and SSIM, lower SAM, and reduced ΔE00, confirming its ability to recover sharp spatial structures while preserving spectral fidelity. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
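The hybrid L1 + Spectral Angle Mapper (SAM) loss described above combines a per-pixel magnitude error with the angle between predicted and reference spectra. A minimal NumPy sketch follows; `alpha` is an assumed weighting, since the abstract does not state the paper's actual balance:

```python
import numpy as np

def sam(pred, target, eps=1e-8):
    """Mean spectral angle (radians) between per-pixel spectra.

    pred, target: (bands, H, W) cubes.
    """
    dot = np.sum(pred * target, axis=0)
    norms = np.linalg.norm(pred, axis=0) * np.linalg.norm(target, axis=0)
    # Clip to the valid arccos domain to guard against rounding.
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

def hybrid_loss(pred, target, alpha=0.5):
    """Weighted sum of L1 error and spectral angle; alpha is illustrative."""
    l1 = float(np.mean(np.abs(pred - target)))
    return alpha * l1 + (1.0 - alpha) * sam(pred, target)
```

Because SAM is invariant to per-pixel scaling while L1 is not, the combination penalizes both brightness errors and spectral-shape errors, which matches the abstract's goal of sharp spatial structure with preserved spectral fidelity.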

22 pages, 7392 KB  
Article
Recursive Deep Feature Learning for Hyperspectral Image Super-Resolution
by Jiming Liu, Chen Yi and Hehuan Li
Appl. Sci. 2026, 16(2), 1060; https://doi.org/10.3390/app16021060 - 20 Jan 2026
Viewed by 320
Abstract
The advancement of hyperspectral image super-resolution (HSI-SR) has been significantly propelled by deep learning techniques. However, current methods predominantly rely on 2D or 3D convolutional networks, which are inherently local and thus limited in modeling long-range spectral–depth interactions. This work introduces a novel network architecture designed to address this gap through recursive deep feature learning. Our model initiates with 3D convolutions to extract preliminary spectral–spatial features, which are progressively refined via densely connected grouped convolutions. A core innovation is a recursively formulated generalized self-attention mechanism, which captures long-range dependencies across the spectral dimension with linear complexity. To reconstruct fine spatial details across multiple scales, a progressive upsampling strategy is further incorporated. Evaluations on several public benchmarks demonstrate that the proposed approach outperforms existing state-of-the-art methods in both quantitative metrics and visual quality. Full article
(This article belongs to the Special Issue Remote Sensing Image Processing and Application, 2nd Edition)

17 pages, 6883 KB  
Article
A Comparative Evaluation of Super-Resolution Methods for Spectral Images Using Pretrained RGB Models
by Navid Shokoohi, Abdelhamid N. Fsian, Jean-Baptiste Thomas and Pierre Gouton
Sensors 2026, 26(2), 683; https://doi.org/10.3390/s26020683 - 20 Jan 2026
Viewed by 531
Abstract
The spatial resolution of spectral imaging systems is fundamentally constrained by hardware trade-offs, and the availability of large-scale annotated spectral datasets remains limited. This study presents a comprehensive evaluation of super-resolution (SR) methods across interpolation-based, CNN-based, GAN-based, and diffusion-based approaches. Using a synthetic 30-band spectral representation reconstructed from RGB with the MST++ model as a proxy ground truth, we arrange non-adjacent triplets as three-channel PNG inputs to ensure compatibility with existing SR architectures. A unified pipeline enables reproducible evaluation at ×2, ×4, and ×8 scales on 50 unseen images, with performance assessed using PSNR, SSIM, and SAM. Results confirm that bicubic interpolation remains a spectrally reliable baseline; shallow CNNs (SRCNN, FSRCNN) generalize well without fine-tuning; and ESRGAN improves spatial detail at the expense of spectral accuracy. Diffusion models (SR3, ResShift, SinSR), evaluated in a zero-shot setting without spectral-domain adaptation, exhibit unstable performance and require spectrum-aware training to preserve spectral structure effectively. The findings underscore a persistent trade-off between perceptual sharpness and spectral fidelity, highlighting the importance of domain-aware objectives when applying generative SR models to spectral data. This work provides reproducible baselines and a flexible evaluation framework to support future research in spectral image restoration. Full article
(This article belongs to the Special Issue Feature Papers in Sensing and Imaging 2025&2026)
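PSNR, used throughout these evaluations alongside SSIM and SAM, reduces to a log-scaled mean-squared error. For reference, a minimal implementation:

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the maximum signal value."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0.0:
        return float("inf")   # identical inputs: PSNR is unbounded
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Because PSNR averages over all bands, it can remain high while individual spectra rotate away from the reference; that is why spectral-angle metrics such as SAM are reported alongside it, and it underlies the trade-off between perceptual sharpness and spectral fidelity the abstract highlights.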

28 pages, 11618 KB  
Article
Cascaded Multi-Attention Feature Recurrent Enhancement Network for Spectral Super-Resolution Reconstruction
by He Jin, Jinhui Lan, Zhixuan Zhuang and Yiliang Zeng
Remote Sens. 2026, 18(2), 202; https://doi.org/10.3390/rs18020202 - 8 Jan 2026
Viewed by 493
Abstract
Hyperspectral imaging (HSI) captures the same scene across multiple spectral bands, providing richer spectral characteristics of materials than conventional RGB images. The spectral reconstruction task seeks to map RGB images into hyperspectral images, enabling high-quality HSI data acquisition without additional hardware investment. Traditional methods based on linear models or sparse representations struggle to effectively model the nonlinear characteristics of hyperspectral data. Although deep learning approaches have made significant progress, issues such as detail loss and insufficient modeling of spatial–spectral relationships persist. To address these challenges, this paper proposes the Cascaded Multi-Attention Feature Recurrent Enhancement Network (CMFREN). This method achieves targeted breakthroughs over existing approaches through a cascaded architecture of feature purification, spectral balancing and progressive enhancement. This network comprises two core modules: (1) the Hierarchical Residual Attention (HRA) module, which suppresses artifacts in illumination transition regions through residual connections and multi-scale contextual feature fusion, and (2) the Cascaded Multi-Attention (CMA) module, which incorporates a Spatial–Spectral Balanced Feature Extraction (SSBFE) module and a Spectral Enhancement Module (SEM). The SSBFE combines Multi-Scale Residual Feature Enhancement (MSRFE) with Spectral-wise Multi-head Self-Attention (S-MSA) to achieve dynamic optimization of spatial–spectral features, while the SEM synergistically utilizes attention and convolution to progressively enhance spectral details and mitigate spectral aliasing in low-resolution scenes. Experiments across multiple public datasets demonstrate that CMFREN achieves state-of-the-art (SOTA) performance on metrics including RMSE, PSNR, SAM, and MRAE, validating its superiority under complex illumination conditions and detail-degraded scenarios. Full article

19 pages, 2680 KB  
Article
ESSTformer: A CNN-Transformer Hybrid with Decoupled Spatial Spectral Transformers for Hyperspectral Image Super-Resolution
by Hehuan Li, Chen Yi, Jiming Liu, Zhen Zhang and Yu Dong
Appl. Sci. 2025, 15(21), 11738; https://doi.org/10.3390/app152111738 - 4 Nov 2025
Viewed by 921
Abstract
Hyperspectral images (HSIs) are crucial for ground object classification, target detection, and related applications due to their rich spatial spectral information. However, hardware limitations in imaging systems make it challenging to directly acquire HSIs with a high spatial resolution. While deep learning-based single hyperspectral image super-resolution (SHSR) methods have made significant progress, existing approaches primarily rely on convolutional neural networks (CNNs) with fixed geometric kernels, which struggle to model global spatial spectral dependencies effectively. To address this, we propose ESSTformer, a novel SHSR framework that synergistically integrates CNNs’ local feature extraction and Transformers’ global modeling capabilities. Specifically, we design a multi-scale spectral attention module (MSAM) based on dilated convolutions to capture local multi-scale spatial spectral features. Considering the inherent differences between spatial and spectral information, we adopt a decoupled processing strategy by constructing separate spatial and Spectral Transformers. The Spatial Transformer employs window attention mechanisms and an improved convolutional multi-layer perceptron (CMLP) to model long-range spatial dependencies, while the Spectral Transformer utilizes self-attention mechanisms combined with a spectral enhancement module to focus on discriminative spectral features. Extensive experiments on three hyperspectral datasets demonstrate that the proposed ESSTformer achieves a superior performance in super-resolution reconstruction compared to state-of-the-art methods. Full article
(This article belongs to the Special Issue Advances in Optical Imaging and Deep Learning)

26 pages, 4680 KB  
Article
Onboard Hyperspectral Super-Resolution with Deep Pushbroom Neural Network
by Davide Piccinini, Diego Valsesia and Enrico Magli
Remote Sens. 2025, 17(21), 3634; https://doi.org/10.3390/rs17213634 - 3 Nov 2025
Viewed by 1517
Abstract
Hyperspectral imagers on satellites obtain the fine spectral signatures that are essential in distinguishing one material from another but at the expense of a limited spatial resolution. Enhancing the latter is thus a desirable preprocessing step in order to further improve the detection capabilities offered by hyperspectral images for downstream tasks. At the same time, there is growing interest in deploying inference methods directly onboard satellites, which calls for lightweight image super-resolution methods that can be run on the payload in real time. In this paper, we present a novel neural network design, called Deep Pushbroom Super-Resolution (DPSR), which matches the pushbroom acquisition of hyperspectral sensors by processing an image line by line in the along-track direction with a causal memory mechanism to exploit previously acquired lines. This design greatly limits the memory requirements and computational complexity, achieving onboard real-time performance, i.e., the ability to super-resolve a line in the time that it takes to acquire the next one, on low-power hardware. Experiments show that the quality of the super-resolved images is competitive with or even surpasses that of state-of-the-art methods that are significantly more complex. Full article
(This article belongs to the Section AI Remote Sensing)

30 pages, 1303 KB  
Review
Spectral Reconstruction Applied in Precision Agriculture: On-Field Solutions
by Marco Mingrone, Marco Seracini and Chiara Cevoli
Appl. Sci. 2025, 15(20), 10985; https://doi.org/10.3390/app152010985 - 13 Oct 2025
Cited by 3 | Viewed by 1800
Abstract
Over the past two decades, hyperspectral imaging (HSI) systems have shown significant potential in agriculture, from disease detection to the assessment of plant and fruit nutritional status. However, most applications remain confined to laboratory analyses under controlled conditions, with only a limited fraction implemented in field environments. In this scenario, spectral reconstruction techniques may serve as a bridge between the high accuracy of HSI and the challenges of on-field or even real-time applications. This review outlines the current state of the art of on-field HSI in the agrifood sector, highlighting existing limitations and potential advantages. It then introduces the problem of spectral reconstruction and reviews current techniques used to address it. Laboratory and on-field studies will be taken into account. The final section offers our perspective on the limitations of HSI and the promising potential of spectral super-resolution to overcome current barriers and enable broader adoption of hyperspectral technology in precision agriculture. Full article
(This article belongs to the Special Issue Signal and Image Processing: From Theory to Applications: 2nd Edition)

21 pages, 37484 KB  
Article
Reconstructing Hyperspectral Images from RGB Images by Multi-Scale Spectral–Spatial Sequence Learning
by Wenjing Chen, Lang Liu and Rong Gao
Entropy 2025, 27(9), 959; https://doi.org/10.3390/e27090959 - 15 Sep 2025
Cited by 2 | Viewed by 3130
Abstract
With rapid advancements in transformers, the reconstruction of hyperspectral images from RGB images, also known as spectral super-resolution (SSR), has made significant breakthroughs. However, existing transformer-based methods often struggle to balance computational efficiency with long-range receptive fields. Recently, Mamba has demonstrated linear complexity in modeling long-range dependencies and shown broad applicability in vision tasks. This paper proposes a multi-scale spectral–spatial sequence learning method, named MSS-Mamba, for reconstructing hyperspectral images from RGB images. First, we introduce a continuous spectral–spatial scan (CS3) mechanism to improve cross-dimensional feature extraction of the foundational Mamba model. Second, we propose a sequence tokenization strategy that generates multi-scale-aware sequences to overcome Mamba’s limitations in hierarchically learning multi-scale information. Specifically, we design the multi-scale information fusion (MIF) module, which tokenizes input sequences before feeding them into Mamba. The MIF employs a dual-branch architecture to process global and local information separately, dynamically fusing features through an adaptive router that generates weighting coefficients. This produces feature maps that contain both global contextual information and local details, ultimately reconstructing a high-fidelity hyperspectral image. Experimental results on the ARAD_1k, CAVE, and grss_dfc_2018 datasets demonstrate the effectiveness of MSS-Mamba. Full article

25 pages, 7964 KB  
Article
DSCSRN: Physically Guided Symmetry-Aware Spatial-Spectral Collaborative Network for Single-Image Hyperspectral Super-Resolution
by Xueli Chang, Jintong Liu, Guotao Wen, Xiaoyu Huang and Meng Yan
Symmetry 2025, 17(9), 1520; https://doi.org/10.3390/sym17091520 - 12 Sep 2025
Viewed by 943
Abstract
Hyperspectral images (HSIs), with their rich spectral information, are widely used in remote sensing; yet the inherent trade-off between spectral and spatial resolution in imaging systems often limits spatial details. Single-image hyperspectral super-resolution (HSI-SR) seeks to recover high-resolution HSIs from a single low-resolution input, but the high dimensionality and spectral redundancy of HSIs make this task challenging. In HSIs, spectral signatures and spatial textures often exhibit intrinsic symmetries, and preserving these symmetries provides additional physical constraints that enhance reconstruction fidelity and robustness. To address these challenges, we propose the Dynamic Spectral Collaborative Super-Resolution Network (DSCSRN), an end-to-end framework that integrates physical modeling with deep learning and explicitly embeds spatial–spectral symmetry priors into the network architecture. DSCSRN processes low-resolution HSIs with a Cascaded Residual Spectral Decomposition Network (CRSDN) to compress redundant channels while preserving spatial structures, generating accurate abundance maps. These maps are refined by two Synergistic Progressive Feature Refinement Modules (SPFRMs), which progressively enhance spatial textures and spectral details via a multi-scale dual-domain collaborative attention mechanism. The Dynamic Endmember Adjustment Module (DEAM) then adaptively updates spectral endmembers according to scene context, overcoming the limitations of fixed-endmember assumptions. Grounded in the Linear Mixture Model (LMM), this unmixing–recovery–reconstruction pipeline restores subtle spectral variations alongside improved spatial resolution. Experiments on the Chikusei, Pavia Center, and CAVE datasets show that DSCSRN outperforms state-of-the-art methods in both perceptual quality and quantitative performance, achieving an average PSNR of 43.42 and a SAM of 1.75 (×4 scale) on Chikusei. The integration of symmetry principles offers a unifying perspective aligned with the intrinsic structure of HSIs, producing reconstructions that are both accurate and structurally consistent. Full article
(This article belongs to the Section Computer)
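The Linear Mixture Model (LMM) that grounds the pipeline above writes each pixel as a nonnegative combination of endmember spectra. The following is a generic projected-gradient sketch of abundance estimation, not DSCSRN's learned unmixing; the step size and iteration count are arbitrary choices:

```python
import numpy as np

def unmix_lmm(pixel, endmembers, iters=500, lr=0.01):
    """Estimate nonnegative abundances under the Linear Mixture Model.

    pixel: (bands,); endmembers: (bands, m). Minimizes ||pixel - E a||^2
    subject to a >= 0 via projected gradient descent.
    """
    m = endmembers.shape[1]
    a = np.full(m, 1.0 / m)                 # uniform initial abundances
    for _ in range(iters):
        grad = endmembers.T @ (endmembers @ a - pixel)
        a = np.clip(a - lr * grad, 0.0, None)   # project onto a >= 0
    return a
```

DSCSRN, per the abstract, replaces the fixed `endmembers` matrix with dynamically adjusted endmembers (DEAM) and performs this inversion with learned modules rather than an iterative solver.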

34 pages, 10241 KB  
Review
A Comprehensive Benchmarking Framework for Sentinel-2 Sharpening: Methods, Dataset, and Evaluation Metrics
by Matteo Ciotola, Giuseppe Guarino, Antonio Mazza, Giovanni Poggi and Giuseppe Scarpa
Remote Sens. 2025, 17(12), 1983; https://doi.org/10.3390/rs17121983 - 7 Jun 2025
Cited by 4 | Viewed by 2340
Abstract
The advancement of super-resolution and sharpening algorithms for satellite images has significantly expanded the potential applications of remote sensing data. In the case of Sentinel-2, despite significant progress, the lack of standardized datasets and evaluation protocols has made it difficult to fairly compare [...] Read more.
The advancement of super-resolution and sharpening algorithms for satellite images has significantly expanded the potential applications of remote sensing data. In the case of Sentinel-2, despite significant progress, the lack of standardized datasets and evaluation protocols has made it difficult to fairly compare existing methods and advance the state of the art. This work introduces a comprehensive benchmarking framework for Sentinel-2 sharpening, designed to address these challenges and foster future research. It analyzes several state-of-the-art sharpening algorithms, selecting representative methods ranging from traditional pansharpening to ad hoc model-based optimization and deep learning approaches. All selected methods have been re-implemented within a consistent Python-based (Version 3.10) framework and evaluated on a suitably designed, large-scale Sentinel-2 dataset. This dataset features diverse geographical regions, land cover types, and acquisition conditions, ensuring robust training and testing scenarios. The performance of the sharpening methods is assessed using both reference-based and no-reference quality indexes, highlighting strengths, limitations, and open challenges of current state-of-the-art algorithms. The proposed framework, dataset, and evaluation protocols are openly shared with the research community to promote collaboration and reproducibility. Full article

23 pages, 5811 KB  
Article
Multi-Attitude Hybrid Network for Remote Sensing Hyperspectral Images Super-Resolution
by Chi Chen, Yunhan Sun, Xueyan Hu, Ning Zhang, Hao Feng, Zheng Li and Yongcheng Wang
Remote Sens. 2025, 17(11), 1947; https://doi.org/10.3390/rs17111947 - 4 Jun 2025
Cited by 3 | Viewed by 1630
Abstract
Benefiting from the development of deep learning, the super-resolution technology for remote sensing hyperspectral images (HSIs) has achieved impressive progress. However, due to the high coupling of complex components in remote sensing HSIs, it is challenging to achieve a complete characterization of the [...] Read more.
Benefiting from the development of deep learning, the super-resolution technology for remote sensing hyperspectral images (HSIs) has achieved impressive progress. However, due to the high coupling of complex components in remote sensing HSIs, it is challenging to achieve a complete characterization of the internal information, which in turn limits the precise reconstruction of detailed texture and spectral features. Therefore, we propose the multi-attitude hybrid network (MAHN) for extracting and characterizing information from multiple feature spaces. On the one hand, we construct the spectral hypergraph cross-attention module (SHCAM) and the spatial hypergraph self-attention module (SHSAM) based on the high- and low-frequency features in the spectral and the spatial domains, respectively, which are used to capture the main structure and detail changes within the image. On the other hand, high-level semantic information in mixed pixels is parsed by spectral mixture analysis, and a semantic hypergraph 3D module (SH3M) is constructed based on the abundance of each category to enhance the propagation and reconstruction of semantic information. Furthermore, to mitigate the domain discrepancies among features, we introduce a sensitive bands attention mechanism (SBAM) to enhance the cross-guidance and fusion of multi-domain features. Extensive experiments demonstrate that our method achieves optimal reconstruction results compared to other state-of-the-art algorithms while effectively reducing the computational complexity. Full article
