Article

Self-Supervised Multiscale Contrastive and Attention-Guided Gradient Projection Network for Pansharpening

College of Electronic Information, Sichuan University, Chengdu 610017, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(8), 2560; https://doi.org/10.3390/s25082560
Submission received: 28 February 2025 / Revised: 9 April 2025 / Accepted: 14 April 2025 / Published: 18 April 2025
(This article belongs to the Section Sensor Networks)

Abstract

Pansharpening techniques are crucial in remote sensing image processing, with deep learning emerging as the mainstream solution. In this paper, the pansharpening problem is formulated as two optimization subproblems, and a solution is proposed based on multiscale contrastive learning combined with attention-guided gradient projection networks. First, an efficient and generalized Spectral–Spatial Universal Module (SSUM) is designed and applied to the spectral and spatial enhancement blocks (SpeEB and SpaEB). Then, the multiscale high-frequency features of PAN and MS images are extracted using the discrete wavelet transform (DWT). These features are combined with contrastive learning and residual connections to progressively balance spectral and spatial information. Finally, the high-resolution multispectral image is generated through multiple iterations. Experimental results verify that the proposed method outperforms existing approaches in both visual quality and quantitative evaluation metrics.

1. Introduction

Advances in remote sensing satellite technology have made Earth surface observation possible [1]. However, limited by sensor performance, satellites are unable to capture images that contain both rich spectral and spatial information simultaneously. Instead, they can only acquire low-resolution multispectral (MS) images and corresponding high-resolution panchromatic (PAN) images. The importance of high-resolution multispectral images in fields such as change detection [2], classification [3], and target identification [4] has led to the emergence of the pansharpening technique.
Traditional pansharpening methods are mainly divided into strategies based on component substitution (CS) and multiresolution analysis (MRA). CS methods, such as Brovey [5], principal component analysis (PCA) [6], IHS [7], and GSA [8], project the spectral information of the MS image into a new domain, replace some or all of the spatial information with data from the PAN image, and then apply back-projection. Although histogram matching is performed before replacement to reduce spectral distortion, it is still difficult to completely avoid spectral aberrations. MRA methods, such as ATWT [9,10], SFIM based on smoothing filters [11], and the MTF-matched filtering-based Generalized Laplacian Pyramid (MTF-GLP) [12], extract spatial information from the PAN image through multiscale decomposition and subsequently inject this information into the upsampled MS image. However, aliasing effects may cause spatial distortion.
Deep learning methods have become mainstream tools due to their powerful feature extraction capabilities and nonlinear mapping performance. Inspired by the Super-Resolution (SR) technique, Masi et al. [13] treated the pansharpening task as a super-resolution problem, using convolutional neural networks (CNNs) to address it. Subsequently, residual networks (RNs) [14,15], generative adversarial networks (GANs) [16,17,18,19,20] and MSDCNN [21], a multiscale deep convolutional network, were proposed.
Variational optimization methods, which lie between traditional CS/MRA methods and deep learning, consider generalized pansharpening as an optimization problem. The P+XS method [22] achieves pansharpening by extracting spatial information from a panchromatic (PAN) image, which is then injected into a multispectral (MS) image. Wu et al. [23] combined variational optimization with deep CNNs to enhance the model’s generalization ability, subsequently proposing a pansharpening framework based on low-rank tensor complementation [24]. In addition, meta-heuristic algorithms [25,26] are also widely used in generalized pansharpening tasks due to their superior performance in large-scale search spaces.
Under the variational optimization framework, it is assumed that a multispectral (MS) image is a reduced-quality version of a high-resolution multispectral (HRMS) image, while a panchromatic (PAN) image is a linear combination of the bands of the HRMS image. Based on this assumption, this paper proposes two optimization problems for HRMS image reconstruction, which constrain the generation of HRMS images using the information from both MS and PAN images, respectively.
Although existing variational optimization methods have shown significant effects on pansharpening, there are still several issues that urgently need to be addressed:
(1)
Modal differences between spatial and spectral information lead to inconsistencies in information representation and extraction, resulting in poor fusion performance.
(2)
During the optimization process of HRMS images, the high-frequency noise in MS images is not considered in spectral optimization, leading to an increase in artifacts in the reconstructed image.
(3)
Balancing spectral and spatial information: Overemphasizing one aspect may lead to a decrease in the overall quality of the final reconstructed image.
To address these challenges, this paper applies contrastive learning to the pansharpening task by introducing an innovative method that combines self-supervised multiscale contrastive learning with attention-guided deep gradient projection (MCAGP).
The method first designs a Spectral–Spatial Universal Module (SSUM) for deep gradient projection networks, combining it with a deep prior to design spectral enhancement blocks (SpeEBs) and spatial enhancement blocks (SpaEBs). These blocks are applied serially and stacked alternately in the deep gradient projection network to solve the two optimization problems step by step.
Additionally, a multiscale contrastive learning strategy is applied to optimize the spatial information of PAN images. In this strategy, the high-frequency components of PAN images are considered positive samples, while those of MS images are treated as negative samples. This method strengthens the SpaEB’s focus on the spatial features of PAN images while also enhancing the SpeEB’s ability to preserve the spectral properties of MS images.
Finally, a contrastive loss function is applied to effectively balance spatial and spectral features by maximizing the similarity between the anchor and positive samples while minimizing the similarity to negative samples, with model performance further enhanced by incorporating an L1 loss.
The experimental results demonstrate that the MCAGP method surpasses both traditional and contemporary advanced methods in terms of visual quality and performance metrics, offering a novel approach to the pansharpening field.
The contributions of this paper are summarized as follows:
(1)
Combining contrastive learning with deep gradient projection within a variational optimization framework: this method reduces modal differences by contrasting high-frequency features, strengthens the task focus of the spectral and spatial enhancement blocks, improves feature consistency and reconstruction quality, and overcomes conflicts between modalities through independent optimization strategies.
(2)
Introducing a Spectral–Spatial Universal Module (SSUM) combined with deep priors: This module is extended to the spectral and spatial enhancement blocks, effectively solving the dual optimization problem. Through channel and spatial attention guidance and multilevel residual connections, it balances spatial and spectral features.
(3)
Designing a multiscale contrastive learning strategy: this strategy introduces contrast loss to filter out noise in MS images, allowing the model to perform well in both full-resolution and reduced-resolution tasks.
The structure of the paper is as follows: Section 2 provides a review of related work; Section 3 describes the MCAGP method in detail; Section 4 presents the experimental results; and Section 5 presents the conclusions.

2. Related Work

2.1. Self-Supervised Learning

Self-supervised learning (SSL) generates labels from the data themselves to train the model without manual labeling, thus providing a significant advantage in areas where labeling is costly. Its successful applications in natural language processing (NLP) and computer vision (CV), such as image colorization [27,28] and super-resolution [29], demonstrate that SSL is able to efficiently extract structural, contextual, and semantic features from data. In the field of pansharpening, SSL shows great potential. Xing et al. [30] proposed a cross-predictive diffusion model (CrossDiff) to explore self-supervised representations in panchromatic sharpening; Fernandez-Beltran et al. [31] designed a self-supervised double-U network (W-NetPan); and He et al. [32] developed a self-supervised pansharpening method based on spectral super-resolution (sSRPNet). These studies show that SSL offers innovative ideas for panchromatic sharpening tasks, significantly enhancing performance.

2.2. Contrastive Learning

Contrastive learning has garnered significant attention, with its core idea being to enhance the mutual information of learned representations by reducing the distance between anchor and positive samples in the latent space while pushing negative samples away [33,34,35,36,37,38]. The construction of positive and negative samples is key to contrastive learning. In the field of image super-resolution, positive samples are typically real images, and negative samples are degraded or other images [39,40,41]. SimCLR [34] utilizes data augmentation (e.g., cropping, flipping, and color dithering) to generate pairs of positive samples and learns their similarity through a contrastive loss. MoCo [35] introduces a momentum encoder and dynamic queue, effectively addressing the problem of balancing positive and negative samples and making representation learning more robust. In the pansharpening field, Zhou et al. [42] apply contrastive learning to constrain the distance between the restored features and the ground truth, performing distillation to promote the learning of consistent features.

3. Proposed Method

This subsection describes in detail the proposed pansharpening method MCAGP, whose overall framework is illustrated in Figure 1 and Algorithm 1. In this figure, ms denotes the low-resolution multispectral image, PAN denotes the high-resolution panchromatic image, and HRMS refers to the final high-resolution multispectral image.
Algorithm 1: MCAGP Forward Pass
The framework of MCAGP consists of three key components: a spectral enhancement block (SpeEB), a spatial enhancement block (SpaEB), and a Multiscale Contrastive Learning module (MCL), which are closely coupled through iterative residual learning.
Specifically, the process begins with interpolating the low-resolution MS image to the PAN resolution, obtaining the initial $HRMS_0$. Both the interpolated MS image and the original MS image are then input into the SpeEB, which is designed to enhance the spectral information by learning and compensating the spectral difference between the upsampled image and the original MS image. The output of SpeEB, denoted as $HRMS_l^1$, is subsequently passed through the MCL module, where the multiscale contrastive loss is calculated by extracting high-frequency details and constructing positive and negative samples based on data augmentation and noise injection, effectively guiding the network to focus on fine-grained spatial–spectral consistency.
Afterwards, the contrastive-enhanced $HRMS_l^1$ and the PAN image are jointly fed into the SpaEB, which injects spatial details from the PAN image while preserving spectral consistency. A residual block is embedded after the SpaEB to further refine the fused result and compensate for residual errors.
This procedure is repeated over L iterations, with residual connections linking the outputs at each stage to progressively refine the reconstructed HRMS. Through the interaction of spectral enhancement, spatial enhancement, and contrastive learning, the network gradually improves the fidelity of the pansharpened image. The detailed workflow is summarized in the pseudo-code provided, and the interconnection between modules is visually illustrated in Figure 1.
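For concreteness, the iterative workflow above can be sketched in PyTorch-style code (a minimal sketch only: the module names `SpeEB`/`SpaEB`/residual blocks and `mcl_loss`, their call signatures, and the bicubic initialization mirror the text but are assumptions rather than the authors' released implementation).

```python
import torch
import torch.nn.functional as F

def mcagp_forward(ms, pan, spe_blocks, spa_blocks, res_blocks, mcl_loss, L=8):
    """Sketch of the MCAGP forward pass described above.

    ms  : (B, C, h, w) low-resolution multispectral image
    pan : (B, 1, H, W) high-resolution panchromatic image
    spe_blocks / spa_blocks / res_blocks : lists of L modules (SpeEB, SpaEB, ResBlock)
    mcl_loss : the multiscale contrastive loss module of Section 3.2
    """
    # Initialization: interpolate the MS image to the PAN resolution -> HRMS_0
    hrms = F.interpolate(ms, size=pan.shape[-2:], mode='bicubic', align_corners=False)

    contrastive_terms = []
    for l in range(L):
        # Spectral enhancement: compensate the spectral residual w.r.t. the MS image
        hrms = spe_blocks[l](hrms, ms)

        # Multiscale contrastive loss on the SpeEB output (anchor samples)
        contrastive_terms.append(mcl_loss(hrms, pan, ms))

        # Spatial enhancement: inject PAN details, then refine with a residual block
        hrms = spa_blocks[l](hrms, pan)
        hrms = hrms + res_blocks[l](hrms)

    return hrms, torch.stack(contrastive_terms).mean()
```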

3.1. Attention-Guided Gradient Projection

Problem description: Suppose the LR image is a degraded version of the HR image, while the PAN image is a linear combination of the bands in the HR image. Therefore, the following formula can be obtained:
$$\mathbf{y}_{\mathrm{lr}} = \mathbf{D}\mathbf{K}\,\mathbf{x}_{\mathrm{hr}}$$
$$\mathbf{y}_{\mathrm{pan}} = \mathbf{F}\,\mathbf{x}_{\mathrm{hr}}$$
where $\mathbf{D} \in \mathbb{R}^{mn \times MN}$ denotes the downsampling matrix, $\mathbf{K}$ is the low-pass circular convolution matrix, $\mathbf{F} \in \mathbb{R}^{B \times b}$ is the spectral response function, and $\mathbf{x}_{\mathrm{hr}}$ represents the target high-resolution multispectral image. Since reconstructing the HR image is a typical ill-posed inverse problem, a direct solution is often unstable. Therefore, in order to constrain the reasonableness of the solution, the following optimization problem with a regularization term is proposed:
$$\min_{\mathbf{x}_{\mathrm{hr}}}\; \mathcal{L}_{\mathrm{data}}(\mathbf{x}_{\mathrm{hr}}) + \gamma R(\mathbf{x}_{\mathrm{hr}})$$
where $R(\mathbf{x}_{\mathrm{hr}})$ is the prior term, which controls the smoothness or structure of the image $\mathbf{x}_{\mathrm{hr}}$; in traditional optimization it is typically hand-crafted, while in deep learning it is represented as an implicit prior. $\mathcal{L}_{\mathrm{data}}(\mathbf{x}_{\mathrm{hr}}) = \|\mathbf{y}_{\mathrm{lr}} - \mathbf{D}\mathbf{K}\,\mathbf{x}_{\mathrm{hr}}\|_F^2 + \|\mathbf{y}_{\mathrm{pan}} - \mathbf{F}\,\mathbf{x}_{\mathrm{hr}}\|_F^2$ is the data fidelity term, which constrains the consistency between $\mathbf{x}_{\mathrm{hr}}$, $\mathbf{y}_{\mathrm{lr}}$, and $\mathbf{y}_{\mathrm{pan}}$; and $\gamma$ is the trade-off parameter, which regulates the relative importance of the regularization term and the data fidelity term.
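As a concrete reading of the data fidelity term, the following minimal sketch evaluates $\mathcal{L}_{\mathrm{data}}$ with placeholder operators; the average-pooling and band-mean surrogates for $\mathbf{DK}$ and $\mathbf{F}$ are illustrative assumptions, not the sensor-specific operators.

```python
import torch
import torch.nn.functional as F

def data_fidelity(x_hr, y_lr, y_pan, DK, Fop):
    """L_data(x_hr) = ||y_lr - DK x_hr||_F^2 + ||y_pan - F x_hr||_F^2.
    DK and Fop are callables standing in for the blur/downsample and
    spectral-response operators."""
    return ((y_lr - DK(x_hr)) ** 2).sum() + ((y_pan - Fop(x_hr)) ** 2).sum()

# Toy usage with crude surrogate operators (average pooling for DK, band mean for F).
x_hr = torch.rand(1, 4, 256, 256)
y_lr = torch.rand(1, 4, 64, 64)
y_pan = torch.rand(1, 1, 256, 256)
loss = data_fidelity(
    x_hr, y_lr, y_pan,
    DK=lambda x: F.avg_pool2d(x, 4),              # stand-in for blur + 4x downsampling
    Fop=lambda x: x.mean(dim=1, keepdim=True),    # stand-in for the spectral response
)
```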
In order to better utilize the deep learning framework, the generalized pansharpening problem is decomposed into two complementary subproblems: spectral optimization and spatial optimization. This decomposition allows for the independent optimization of spectral and spatial information, with the final goal of reconstructing the HR image formulated as follows:
$$\min_{\mathbf{x}_{\mathrm{hr}}}\; f(\mathbf{y}_{\mathrm{lr}}, \mathbf{x}_{\mathrm{hr}}) + \gamma R_{l}(\mathbf{x}_{\mathrm{hr}})$$
$$\min_{\mathbf{x}_{\mathrm{hr}}}\; f(\mathbf{y}_{\mathrm{pan}}, \mathbf{x}_{\mathrm{hr}}) + \gamma R_{p}(\mathbf{x}_{\mathrm{hr}})$$
Inspired by generative adversarial network (GAN) algorithms, two generative modules were designed: the spectral enhancement block (SpeEB) and the spatial enhancement block (SpaEB). These two modules implicitly model the regularization terms through deep learning in order to optimize both spectral features and spatial details.
Spectral enhancement block (SpeEB): The focus of the spectral enhancement block is to optimize the spectra by reconstructing spectral distributions consistent with the low-resolution (LR) image. The optimization process of SpeEB consists of the following four steps:
$$\hat{\mathbf{y}}_{\mathrm{lr}}^{m} = \mathbf{D}\mathbf{K}\,\mathbf{x}_{\mathrm{hr}}^{m-1}$$
$$\mathbf{R}_{l}^{m} = \mathbf{y}_{\mathrm{lr}} - \hat{\mathbf{y}}_{\mathrm{lr}}^{m}$$
$$\mathbf{R}_{h}^{m} = \rho\,(\mathbf{D}\mathbf{K})^{T}\mathbf{R}_{l}^{m}$$
$$\mathbf{x}_{\mathrm{hr}}^{m} = \mathrm{prox}_{h_{l}}\!\left(\mathbf{x}_{\mathrm{hr}}^{m-1} + \mathbf{R}_{h}^{m}\right)$$
where $\rho$ is the step size, and $\mathrm{prox}_{h_{l}}$ is the proximal operator corresponding to the penalty term $h_{l}(\cdot)$.
Spatial enhancement block (SpaEB): The spatial enhancement block focuses on spatial optimization; it refines the spatial details by comparing a linear combination of the HR image bands with the PAN image. Its optimization steps are as follows:
$$\hat{\mathbf{y}}_{\mathrm{pan}}^{m} = \mathbf{F}\,\mathbf{x}_{\mathrm{hr}}^{m-1}$$
$$\mathbf{R}_{p}^{m} = \mathbf{y}_{\mathrm{pan}} - \hat{\mathbf{y}}_{\mathrm{pan}}^{m}$$
$$\mathbf{R}_{h}^{m} = \rho\,\mathbf{F}^{T}\mathbf{R}_{p}^{m}$$
$$\mathbf{x}_{\mathrm{hr}}^{m} = \mathrm{prox}_{h_{p}}\!\left(\mathbf{x}_{\mathrm{hr}}^{m-1} + \mathbf{R}_{h}^{m}\right)$$
where $\rho$ is the step size, and $\mathrm{prox}_{h_{p}}$ is the proximal operator corresponding to the penalty term $h_{p}(\cdot)$.
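Each update above amounts to one residual back-projection followed by a learned proximal mapping. A minimal sketch is given below; the operator and proximal-network arguments are placeholders for $\mathbf{DK}$, $(\mathbf{DK})^{T}$, $\mathbf{F}$, $\mathbf{F}^{T}$, and the learned prox, so the interface is an assumption rather than the authors' code.

```python
def speeb_step(x_hr, y_lr, DK, DK_T, prox_net, rho=1.0):
    """One spectral-enhancement (SpeEB) update: project to the LR domain,
    measure the spectral residual, back-project it, apply the learned prox."""
    r_l = y_lr - DK(x_hr)              # R_l^m = y_lr - DK x_hr^{m-1}
    r_h = rho * DK_T(r_l)              # R_h^m = rho (DK)^T R_l^m
    return prox_net(x_hr + r_h)        # x_hr^m = prox_{h_l}(x_hr^{m-1} + R_h^m)

def spaeb_step(x_hr, y_pan, Fop, Fop_T, prox_net, rho=1.0):
    """One spatial-enhancement (SpaEB) update against the PAN image."""
    r_p = y_pan - Fop(x_hr)            # R_p^m = y_pan - F x_hr^{m-1}
    r_h = rho * Fop_T(r_p)             # back-projection of the spatial residual
    return prox_net(x_hr + r_h)        # x_hr^m = prox_{h_p}(x_hr^{m-1} + R_h^m)
```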
Spectral–Spatial Universal Module (SSUM): The detailed structure of the Spectral–Spatial Universal Module (SSUM) is illustrated in Figure 2. To further enhance the fusion efficiency of spectral and spatial information, this paper introduces the SSUM module between the spectral enhancement block (SpeEB) and the spatial enhancement block (SpaEB), aiming to achieve the unified extraction and enhancement of spectral and spatial features. Specifically, SSUM incorporates both channel attention and spatial attention mechanisms, which effectively guide the network to selectively focus on spectral attributes and spatial details, thereby improving the feature representation capability. In the overall framework, SpeEB mainly leverages the residual information between the low-resolution multispectral (MS) image and the interpolated high-resolution MS image to compensate for the spectral distortion caused by upsampling. Conversely, SpaEB focuses on utilizing the spatial structural details contained in the PAN image and compensates for the spatial resolution loss via a residual back-projection strategy. Although both SpeEB and SpaEB share the same SSUM structure as the basic unit for feature mapping and residual feedback, they achieve functional decoupling and complementarity in terms of input design and residual information utilization. This ensures a well-balanced optimization between spectral fidelity and spatial detail enhancement. Furthermore, the structural versatility and efficiency of SSUM enable feature sharing and collaborative optimization between SpeEB and SpaEB, significantly improving the overall quality of feature representation and computational efficiency.
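Since the exact layer configuration of SSUM appears only in Figure 2, the sketch below illustrates one plausible attention-guided block of the kind described (a channel-then-spatial attention unit with a residual path); the channel counts, kernel sizes, and pooling choices are assumptions.

```python
import torch
import torch.nn as nn

class SSUMSketch(nn.Module):
    """Illustrative SSUM-style block: convolutional features refined by channel
    attention, then spatial attention, with a residual connection."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: squeeze spatial dims, excite per-channel weights
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: channel-wise mean/max pooled into a single attention map
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.body(x)
        f = f * self.channel_att(f)                      # channel guidance
        s = torch.cat([f.mean(dim=1, keepdim=True),
                       f.max(dim=1, keepdim=True).values], dim=1)
        f = f * self.spatial_att(s)                      # spatial guidance
        return x + f                                     # residual connection
```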

3.2. Multiscale Contrastive Learning

In the reconstruction of remote sensing images, MS images have poor spatial quality with significant high-frequency noise (Figure 3a). In contrast, PAN images have clear high-frequency spatial details (Figure 3b). Thus, MS images mainly contribute spectral information, while PAN images provide high-quality spatial details. This division prevents artifacts caused by mixing MS image noise with PAN image details.
To this end, discrete wavelet transform (DWT) is introduced in this paper to extract the multiscale high-frequency features of PAN and MS images. DWT is able to capture the spatial details in multiscale and multidirectional forms by decomposing the images into low and high-frequency subbands. The multiscale contrastive learning framework is shown in Figure 4. Specifically, the following applies.
Anchor sample: The reconstructed image generated via the SpeEB is used to extract its multiscale high-frequency features through DWT, with low-dimensional embedded features generated by global pooling and linear projection.
$$Z_{\mathrm{anchor}} = P\!\left(G\!\left(\mathrm{DWT}\!\left(\mathbf{x}_{\mathrm{hr}}^{\mathrm{SpeEB}}\right)\right)\right)$$
where $G(\cdot)$ denotes global pooling, $P(\cdot)$ denotes the linear projection that maps high-dimensional features to a low-dimensional latent space, and $\mathbf{x}_{\mathrm{hr}}^{\mathrm{SpeEB}}$ denotes the HR image generated via the spectral enhancement block.
Positive sample: The PAN images are acquired through spatial matching, with multiscale high-frequency features extracted after data enhancement (e.g., random flip and rotation) and the embedded features generated using the same process as for the anchor samples.
$$Z_{\mathrm{pos}} = P\!\left(G\!\left(\mathrm{DWT}\!\left(\mathrm{Augment}\!\left(\mathbf{y}_{\mathrm{pan}}\right)\right)\right)\right)$$
where $\mathrm{Augment}(\cdot)$ represents data enhancement operations such as random flipping and rotation.
Negative sample: Extracted from the upsampled MS image, diverse negative samples are generated by adding Gaussian noise, extracting their high-frequency features and mapping them to the low-dimensional space. Through multiple negative samples, the distance between the anchor point and negative samples is enlarged to improve the discriminative ability.
$$Z_{\mathrm{neg}}^{i} = P\!\left(G\!\left(\mathrm{DWT}\!\left(\mathrm{AddNoise}\!\left(\mathbf{y}_{\mathrm{lr}}\right)\right)\right)\right)$$
where $\mathbf{y}_{\mathrm{lr}}$ denotes the LR image after upsampling through interpolation, $\mathrm{AddNoise}(\cdot)$ denotes the addition of random Gaussian noise to the high-frequency portion of the MS image, and $i$ indexes the different negative-sample instances.
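A minimal sketch of how these three embeddings could be computed with a standard 2-D DWT is shown below, using the PyWavelets library. The wavelet, decomposition level, pooling scheme (channel-averaged mean absolute subband energy), projection dimension, and the simplification of adding noise to the whole image before decomposition are all assumptions.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def highfreq_embedding(img, proj, wavelet='haar', levels=2):
    """img: (C, H, W) array. Channels are averaged into one band, decomposed with a
    multi-level 2-D DWT, and the mean absolute energy of every high-frequency subband
    is pooled into a fixed-length vector before the linear projection P(.).
    Mirrors Z = P(G(DWT(.))) above."""
    band = np.asarray(img, dtype=np.float64).mean(axis=0)
    coeffs = pywt.wavedec2(band, wavelet, level=levels)
    pooled = [np.abs(sub).mean() for detail in coeffs[1:] for sub in detail]  # LH/HL/HH per level
    return proj(torch.tensor(pooled, dtype=torch.float32))

# Hypothetical inputs: SpeEB output (anchor), PAN (positive), upsampled MS (negatives).
proj = nn.Linear(2 * 3, 64)                    # 2 levels x 3 subbands -> 64-d embedding
hrms_speeb = np.random.rand(4, 256, 256)
pan = np.random.rand(1, 256, 256)
ms_up = np.random.rand(4, 256, 256)

z_anchor = highfreq_embedding(hrms_speeb, proj)
z_pos = highfreq_embedding(np.flip(pan, axis=-1), proj)                  # simple augmentation
z_negs = [highfreq_embedding(ms_up + 0.05 * np.random.randn(*ms_up.shape), proj)
          for _ in range(4)]                                             # noisy negatives
```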
Multiscale contrastive learning (MCL): In the proposed MCAGP framework, a multiscale contrastive learning (MCL) module is introduced. As illustrated in Figure 4, the complete process of positive and negative sample construction, high-frequency feature extraction, and contrastive loss computation is clearly presented, providing readers with a detailed understanding of the implementation and functionality of this module.
The core idea of the MCL module is to guide the network to focus more on the consistency of spatial–spectral details during training by constructing positive and negative sample pairs. Specifically, multiscale high-frequency features are first extracted from the output $HRMS_l^1$ of the SpeEB module, which serves as the anchor sample. Subsequently, a data augmentation strategy (including rotation, flipping, color jittering, and other transformations) is applied to the PAN image to generate positive samples, and their multiscale high-frequency features are also extracted. To provide effective contrastive information, multiple negative samples are further generated by injecting Gaussian noise into the multispectral (MS) image, followed by high-frequency feature extraction.
In the feature space, the similarity between the anchor features and the positive features is maximized (i.e., bringing them closer), while the similarity between the anchor features and the negative features is minimized (i.e., pushing them apart). This forms the positive–negative contrastive training objective, where the similarity measurement is implemented using the InfoNCE loss function.
It is noteworthy that the high-frequency feature extraction in the MCL module not only focuses on single-scale texture information but also leverages multiscale spatial details obtained via discrete wavelet transform (DWT). This ensures the effectiveness of contrastive loss across different scales. Additionally, the generation process of positive and negative samples incorporates diverse data augmentation and noise injection strategies, effectively enhancing the model’s discriminative ability and robustness.
Residual connection and information balance: To avoid the loss of spectral information due to the model's over-reliance on the spatial features of the PAN image, and to improve the fusion efficiency of spectral and spatial features, this paper introduces a multi-stage residual connection mechanism between the SpeEB, the SpaEB, and the subsequent residual blocks, which progressively accumulates the features of each stage and realizes a dynamic balance between spectral and spatial information.

3.3. Loss Functions

L1 loss:
$$\mathcal{L}_{L1} = \left\|\mathbf{x}_{\mathrm{hr}}^{\mathrm{pred}} - \mathbf{x}_{\mathrm{hr}}^{\mathrm{gt}}\right\|_{1}$$
Contrast loss: the InfoNCE loss [33,34,35] is used.
$$D = \sum_{i=1}^{K} \exp\!\left(\mathrm{sim}(Z_{\mathrm{anchor}}, Z_{\mathrm{neg}}^{i})/\tau\right)$$
$$\mathcal{L}_{\mathrm{InfoNCE}} = -\log \frac{\exp\!\left(\mathrm{sim}(Z_{\mathrm{anchor}}, Z_{\mathrm{pos}})/\tau\right)}{\exp\!\left(\mathrm{sim}(Z_{\mathrm{anchor}}, Z_{\mathrm{pos}})/\tau\right) + D}$$
where $Z_{\mathrm{anchor}}$ is the feature representation of the anchor sample, $Z_{\mathrm{pos}}$ is that of the positive sample, and $Z_{\mathrm{neg}}^{i}$ is that of the $i$-th negative sample, with $K$ negative samples in total; $\tau$ is a temperature parameter that regulates the scaling of the similarity; and $\mathrm{sim}(a, b)$ is the similarity function, commonly the dot product or cosine similarity between feature vectors.
In the implementation, the similarities between the anchor and the positive and negative samples are computed in batches and concatenated column-wise to form a logits matrix, in which the first column corresponds to the positive sample and the remaining columns to the negative samples. Cross-entropy loss is a reliable and efficient loss function that is widely utilized in deep networks [43,44,45], and the final contrastive loss is computed as the cross-entropy over this logits matrix.
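A minimal sketch of this logits-plus-cross-entropy formulation of the InfoNCE loss is shown below; the batch shapes and temperature value are illustrative, and cosine similarity is assumed.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_anchor, z_pos, z_negs, tau=0.07):
    """InfoNCE as cross-entropy over a logits matrix whose first column holds the
    positive similarity and the remaining K columns hold the negative similarities.

    z_anchor: (B, d), z_pos: (B, d), z_negs: (B, K, d)
    """
    z_anchor = F.normalize(z_anchor, dim=-1)     # cosine similarity via dot products
    z_pos = F.normalize(z_pos, dim=-1)
    z_negs = F.normalize(z_negs, dim=-1)

    l_pos = (z_anchor * z_pos).sum(dim=-1, keepdim=True)          # (B, 1)
    l_neg = torch.einsum('bd,bkd->bk', z_anchor, z_negs)          # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau               # first column = positive

    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)                       # -log softmax at column 0
```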
Thus, the total loss function of the model is as follows:
$$\mathcal{L} = \mathcal{L}_{L1} + \lambda\,\mathcal{L}_{\mathrm{InfoNCE}}$$
where $\lambda$ is a weight hyperparameter used to balance the contributions of the $\mathcal{L}_{L1}$ loss and the InfoNCE loss.

4. Experiments

4.1. Datasets and Metrics

To verify the superiority of the proposed method, we conduct experiments on the Rio dataset (source: WV3), the Guangzhou dataset (source: GF2), and the Indianapolis dataset (source: QB), all of which have a scale factor of 4; each dataset provides both a reduced-resolution test set and a full-resolution test set, as summarized in Table 1. The data can be found at GitHub: liangjiandeng/PanCollection.
For the reduced-resolution experiments, we use four commonly used metrics: the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM) [46], the spectral angle mapper (SAM) [47], and the relative dimensionless global error in synthesis (ERGAS) [48]. For the full-resolution experiments, we use the spectral distortion index (Dλ), the spatial distortion index (Ds), and the quality with no reference (QNR) index to assess the quality of the results.
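For reference, SAM and ERGAS can be computed as in the sketch below (standard definitions; normalization conventions and the radians-versus-degrees choice for SAM vary slightly across toolboxes).

```python
import numpy as np

def sam(ref, fused, eps=1e-8):
    """Mean spectral angle (in radians) between (H, W, B) reference and fused images."""
    dot = (ref * fused).sum(axis=-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1) + eps
    return float(np.mean(np.arccos(np.clip(dot / denom, -1.0, 1.0))))

def ergas(ref, fused, ratio=4, eps=1e-8):
    """ERGAS for a resolution ratio `ratio` (4 for the datasets used here)."""
    rmse = np.sqrt(((ref - fused) ** 2).mean(axis=(0, 1)))   # per-band RMSE
    mean = ref.mean(axis=(0, 1)) + eps                       # per-band reference mean
    return float(100.0 / ratio * np.sqrt(np.mean((rmse / mean) ** 2)))
```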
Our MCAGP is implemented in the PyTorch framework (Python 3.8) with the Adam optimizer, a learning rate of 5 × 10⁻⁵, L2 regularization with a weight decay factor of 1 × 10⁻⁴, a batch size of 4, and a network depth and width of 64 and 8, respectively. The experiments were performed using MATLAB 2019b and a computer with an NVIDIA RTX 4050 GPU (NVIDIA, Santa Clara, CA, USA). For the other deep learning pansharpening methods, we trained the networks using the default settings from the relevant papers or code repositories, on the same equipment and in the same PyTorch environment.
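The stated training configuration maps onto a PyTorch setup along these lines; the stand-in model and random tensors are placeholders for the MCAGP network and the PanCollection training pairs.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for the MCAGP network and the PanCollection MS/HRMS training pairs.
model = torch.nn.Conv2d(4, 4, 3, padding=1)
train_set = TensorDataset(torch.randn(16, 4, 64, 64), torch.randn(16, 4, 256, 256))

# Settings stated in the text: Adam, lr = 5e-5, weight decay = 1e-4, batch size = 4.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, weight_decay=1e-4)
loader = DataLoader(train_set, batch_size=4, shuffle=True)
```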

4.2. Comparison with SOTA Methods

In this section, we compare the method proposed in this paper with several state-of-the-art methods, including five traditional methods, i.e., EXP [49], C-GSA [50], BDSD-PC [51], TV [52], and PWMBF [53], and nine deep learning-based methods, i.e., DaViT [54] and its variants paDaViT and rDaViT, LeWin [55], MSDCNN [21], PanFormer [56], SSIN [57], PANNET [58], and PNN [13]. We conducted reduced-resolution and full-resolution experiments on three datasets, with the reduced-resolution experiments following the Wald protocol.
Results on the WV3 dataset: Table 2 shows the results of the quantitative experiments on the WV3 dataset, while Figure 5 provides a visualization of the fused images. Overall, the deep learning-based approaches show significant advantages over the traditional approaches. In the reduced-resolution experiments, our method is 1.331 dB ahead of the second-best method in PSNR and 0.013 higher in SSIM, indicating a significant improvement in image restoration quality; the restored images are clearer and more natural, with better preservation of details and structures. Our method also reduces the spectral angle mapper (SAM) by 0.01 and the ERGAS value by 0.458 compared to the second-best method, which further indicates that our method achieves a superior balance between preserving spatial details and spectral accuracy. These metrics show that our method effectively recovers the high-frequency details of the image during reconstruction while reducing the recovery error and enhancing the realism of the image. In the full-resolution experiments, although our Dλ (spectral distortion) is slightly higher than that of other methods, indicating a slight trade-off in spectral recovery, we succeed in minimizing Ds (spatial distortion). This allows us to achieve the best performance in the recovery of spatial details, ensuring high-resolution image recovery. In terms of the final QNR (quality with no reference) value, our method achieves the best performance, indicating an ideal balance between spectral and spatial quality that ensures the detail and visual quality of the image. In terms of visual effect, our method significantly improves the clarity and detail of the image, especially in the rendering of buildings and vegetation, with a sharper restoration effect.
Results on the QB dataset: Table 3 lists the quantitative results on the QB dataset, while Figure 6 shows the corresponding visual effects. Overall, the deep learning-based methods outperform the traditional approaches. In the reduced-resolution experiments, our method surpasses the second-best method by 1.635 dB in PSNR and 0.022 in SSIM, indicating superior performance in noise suppression, detail retention, and structure restoration. The SAM and ERGAS values are lower than those of the second-best method by 0.012 and 1.095, respectively, suggesting that our method maximizes spectral restoration, preserving the spectral features of the original image and effectively reducing reconstruction errors. In the full-resolution experiments, our method slightly sacrifices spectral distortion (Dλ), but this does not affect overall performance. Our spatial distortion (Ds) is the lowest among all methods, demonstrating that we minimize spatial distortion during image restoration and ensure accurate recovery of spatial structure and details. Notably, in the comprehensive QNR (quality with no reference) metric, our method achieves the best performance, indicating an ideal balance between spectral and spatial quality.
Results on the GF2 dataset: Table 4 summarizes the experimental results on the GF2 dataset, while Figure 7 presents a visual representation of the fused images. In the reduced-resolution experiments, our method outperforms the next best method by 0.075 dB in PSNR and 0.019 in SSIM, demonstrating its superiority in image restoration quality, particularly in detail and contrast preservation. Our SAM is the lowest among all methods, indicating better spectral restoration performance, and our ERGAS is 1.238, which is 0.022 higher than that of the best-performing paDaViT method but remains competitive. In the full-resolution experiments, our method continues to significantly outperform the traditional methods, although it performs slightly worse than individual deep learning methods in some metrics, especially in spectral recovery. Overall, our method achieves a balance between spectral and spatial details in image restoration, with superior overall performance. Visually, the fused images exhibit lower noise, fewer artifacts, and sharper details with better contrast.
The performance on the GF2 dataset is not as good as that on the QB and WV3 datasets, mainly due to the noise level in the data, scene complexity, and the stringent demands of the unsupervised full-resolution evaluation protocol on the model’s generalization ability. The GF2 dataset contains more fragmented structures, a mix of vegetation and urban textures, and more pronounced edge aliasing effects, which increase the difficulty of image restoration. Additionally, the performance on the GF2 dataset in the full-resolution experiments is not as good as that on other datasets, partly because the Wald protocol we used has limited applicability to the GF2 dataset. While the Wald protocol works effectively for high-quality commercial sensors such as QB and WV3, it may not hold for GF2, as significant details and noise patterns are lost during the downsampling process, and the generated pseudo-GT exhibits substantial statistical deviation from the true full-resolution images in both spectral and texture domains. Although our method outperforms others in down-resolution experiments, the performance on the GF2 dataset in full-resolution evaluation is slightly worse than on other datasets due to these factors.

4.3. Ablation Experiments

To evaluate the contribution of each module in the proposed method, we conducted ablation experiments on the QB dataset by replacing or removing different modules, comparing the experimental results with the final model (Ours) and analyzing the impact of each module on the model performance. The experimental results are shown in Table 5 and analyzed in detail below:
(1)
Replacing the SSUM module with regular convolution while removing the contrastive learning part.
In the experimental setup (1), the SSUM module is replaced with regular convolution, with the contrastive learning part removed. Compared with our final model (our approach), PSNR decreased by 8.54%, SSIM decreased by 3.99%, SAM increased by 22.09%, ERGAS increased by 39.50%, and QNR decreased by 0.66%. The results show that regular convolution cannot replace the efficient SSUM module, with the removal of contrastive learning significantly reducing the model’s performance in both down-resolution and full-resolution experiments.
(2)
Replacing the SSUM module with regular convolution while retaining only the contrastive learning component.
In experimental setup (2), contrastive learning and its loss function are retained, but the SSUM module is replaced with ordinary convolution. Compared with our approach, PSNR decreased by 8.97%, SSIM decreased by 3.57%, SAM increased by 19.77%, ERGAS increased by 41.66%, and QNR decreased by 0.66%. The results demonstrate the key role of the SSUM module in the model, which can significantly improve the reconstruction quality of image details and effectively reduce errors.
(3)
Retaining the SSUM module while deleting the contrastive learning part.
In experimental setup (3), only the SSUM module is used, and the contrastive learning part is removed. Compared with our approach, PSNR decreased by 3.78%, SSIM decreased by 1.05%, SAM increased by 8.14%, and ERGAS increased by 13.95%. Although the SSUM module improves the reconstruction quality, removing contrastive learning degrades the model's performance in the high-resolution reconstruction task; in particular, the spectral and spatial properties cannot be fully optimized, further validating the importance of contrastive learning.

4.4. Discussion of the Loss Function Parameter, λ

To address the different optimization objectives of the two loss functions, we investigated the impact of introducing contrast loss at different stages on model performance, proposing a new strategy that adds contrast loss at a later stage to fine-tune the already established model. In our experiments, we compared two training strategies: one introduced the contrast loss in the whole process (i.e., the method in this paper, with λ = 1); the other trained the model using the L1 reconstruction loss initially to establish the basic image reconstruction capability, followed by gradually increasing the weight of contrast loss until it matched the L1 loss. The training results are shown in Table 6, Table 7 and Table 8.
On the WV3 dataset, our method prioritizes spectral retention, reflected by a lower SAM and D λ , but with a slight sacrifice in spatial consistency (indicated by the increase in Ds). In contrast, the two-stage training strategy balances spectral and spatial properties better, though at the cost of a slight reduction in PSNR. For tasks requiring high spectral fidelity, such as surface classification and hyperspectral analysis, our method is more suitable. For higher overall performance, the two-stage strategy can be considered. On the QB dataset, our method offers a better balance between spectral and spatial performance, achieving a superior overall performance index. On the GF2 dataset, the two-stage method strikes a better balance between spatial details and spectral consistency, effectively reducing global error (ERGAS); in full-resolution tests, our method shows better spatial detail recovery and noise suppression.

5. Conclusions

In this paper, we have proposed a deep gradient projection generative network based on self-supervised multiscale contrastive learning and attention guidance that improves the balance of spectral and spatial information. We first proposed an efficient SSUM module based on channel and spatial attention, which was combined with a deep prior and generalized to the deep gradient projection network to form the spectral enhancement block and the spatial enhancement block, the basis of our network. Secondly, based on the two proposed optimization problems, we applied contrastive learning by using the multiscale high-frequency components of the PAN image as positive samples and the upsampled multiscale high-frequency information of the MS image as negative samples. This enables the spectral enhancement block and the spatial enhancement block to focus more on their respective optimization tasks. Finally, the contrastive loss was applied throughout the training process to refine the model, leading to improved reconstruction quality. The experiments demonstrate the superiority of the proposed method. In the future, research on contrastive learning losses will be further strengthened, and we believe that contrastive learning will have more room for development in the field of pansharpening.

Author Contributions

Conceptualization, Q.L., B.L. and J.W.; Methodology, Q.L.; Software, Q.L.; Investigation, Q.L. and B.L.; Resources, Q.L.; Data curation, Q.L., B.L. and J.W.; Writing—original draft, Q.L.; Writing—review & editing, X.Y.; Visualization, Q.L.; Supervision, X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Sichuan University, grant number: 24NSFSC2159.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yilmaz, C.S.; Yilmaz, V.; Gungor, O. A theoretical and practical survey of image fusion methods for multispectral pansharpening. Inf. Fusion 2022, 79, 1–43. [Google Scholar] [CrossRef]
  2. Bovolo, F.; Bruzzone, L.; Capobianco, L.; Garzelli, A.; Marchesi, S.; Nencini, F. Analysis of the effects of pansharpening in change detection on VHR images. IEEE Geosci. Remote Sens. Lett. 2009, 7, 53–57. [Google Scholar] [CrossRef]
  3. Zhong, P.; Wang, R. Learning conditional random fields for classification of hyperspectral images. IEEE Trans. Image Process. 2010, 19, 1890–1907. [Google Scholar] [CrossRef] [PubMed]
  4. Yu, X.; Hoff, L.E.; Reed, I.S.; Chen, A.M.; Stotts, L.B. Automatic target detection and recognition in multiband imagery: A unified ML detection and estimation approach. IEEE Trans. Image Process. 1997, 6, 143–156. [Google Scholar] [PubMed]
  5. Hallada, W.A.; Cox, S. Image sharpening for mixed spatial and spectral resolution satellite systems. In Proceedings of the 1983 International Symposium on Remote Sensing of Environment, Ann Arbor, MI, USA, 9–13 May 1983. [Google Scholar]
  6. Kwarteng, P.; Chavez, A. Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis. Photogramm. Eng. Remote Sens. 1989, 55, 339–348. [Google Scholar]
  7. Haydn, R. Application of the IHS color transform to the processing of multisensor data and image enhancement. In Proceedings of the International Symposium on Remote Sensing of Arid and Semi-Arid Lands, Cairo, Egypt, 19–25 January 1982. [Google Scholar]
  8. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS + Pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  9. Nunez, J.; Otazu, X.; Fors, O.; Prades, A.; Pala, V.; Arbiol, R. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1204–1211. [Google Scholar] [CrossRef]
  10. Garzelli, A.; Nencini, F. PAN-sharpening of very high resolution multispectral images using genetic algorithms. Int. J. Remote Sens. 2006, 27, 3273–3292. [Google Scholar] [CrossRef]
  11. Liu, J.; Basaeed, E. Smoothing Filter-based Intensity Modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472. [Google Scholar] [CrossRef]
  12. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored Multiscale Fusion of High-resolution MS and Pan Imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596. [Google Scholar] [CrossRef]
  13. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by convolutional neural networks. Remote Sens. 2016, 8, 594. [Google Scholar] [CrossRef]
  14. Wei, Y.; Yuan, Q.; Shen, H.; Zhang, L. Boosting the accuracy of multispectral image pansharpening by learning a deep residual network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1795–1799. [Google Scholar] [CrossRef]
  15. Benzenati, T.; Kallel, A.; Kessentini, Y. Two stages pan-sharpening details injection approach based on very deep residual networks. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4984–4992. [Google Scholar] [CrossRef]
  16. Liu, Q.; Zhou, H.; Xu, Q.; Liu, X.; Wang, Y. PSGAN: A generative adversarial network for remote sensing image pan-sharpening. IEEE Trans. Geosci. Remote Sens. 2020, 59, 10227–10242. [Google Scholar] [CrossRef]
  17. Ozcelik, F.; Alganci, U.; Sertel, E.; Unal, G. Rethinking CNN-based pansharpening: Guided colorization of panchromatic images via GANs. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3486–3501. [Google Scholar] [CrossRef]
  18. Zhou, H.; Hou, J.; Zhang, Y.; Ma, J.; Ling, H. Unified gradient-and intensity-discriminator generative adversarial network for image fusion. Inf. Fusion 2022, 88, 184–201. [Google Scholar] [CrossRef]
  19. Ma, J.; Yu, W.; Chen, C.; Liang, P.; Guo, X.; Jiang, J. Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion. Inf. Fusion 2020, 62, 110–120. [Google Scholar] [CrossRef]
  20. Dong, W.; Hou, S.; Xiao, S.; Qu, J.; Du, Q.; Li, Y. Generative dual-adversarial network with spectral fidelity and spatial enhancement for hyperspectral pansharpening. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 7303–7317. [Google Scholar] [CrossRef]
  21. Yuan, Q.; Wei, Y.; Meng, X.; Shen, H.; Zhang, L. A multiscale and multidepth convolutional neural network for remote sensing imagery pan-sharpening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 978–989. [Google Scholar] [CrossRef]
  22. Ballester, C.; Caselles, V.; Igual, L.; Verdera, J.; Rougé, B. A variational model for P+ XS image fusion. Int. J. Comput. Vis. 2006, 69, 43–58. [Google Scholar] [CrossRef]
  23. Wu, Z.C.; Huang, T.Z.; Deng, L.J.; Vivone, G.; Miao, J.Q.; Hu, J.F.; Zhao, X.L. A new variational approach based on proximal deep injection and gradient intensity similarity for spatio-spectral image fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6277–6290. [Google Scholar] [CrossRef]
  24. Wu, Z.C.; Huang, T.Z.; Deng, L.J.; Huang, J.; Chanussot, J.; Vivone, G. LRTCFPan: Low-rank tensor completion based framework for pansharpening. IEEE Trans. Image Process. 2023, 32, 1640–1655. [Google Scholar] [CrossRef]
  25. Saeedi, J.; Faez, K. A new pan-sharpening method using multiobjective particle swarm optimization and the shiftable contourlet transform. ISPRS J. Photogramm. Remote Sens. 2011, 66, 365–381. [Google Scholar] [CrossRef]
  26. Yilmaz, V. A Non-Dominated Sorting Genetic Algorithm-II-based approach to optimize the spectral and spatial quality of component substitution-based pansharpened images. Concurr. Comput. Pract. Exp. 2021, 33, e6030. [Google Scholar] [CrossRef]
  27. Larsson, G.; Maire, M.; Shakhnarovich, G. Colorization as a Proxy Task for Visual Understanding. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 840–849. [Google Scholar]
  28. Zhang, R.; Isola, P.; Efros, A.A. Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 June 2017; pp. 645–654. [Google Scholar]
  29. Wang, Z.; Wang, J.; Liu, Z.; Qiu, Q. Energy-Inspired Self-Supervised Pretraining for Vision Models. arXiv 2023, arXiv:2302.01384. [Google Scholar]
  30. Xing, Y.; Qu, L.; Zhang, S.; Zhang, K.; Zhang, Y.; Bruzzone, L. CrossDiff: Exploring Self-Supervised Representation of Pansharpening via Cross-Predictive Diffusion Model. IEEE Trans. Image Process. 2024, 33, 5496–5509. [Google Scholar] [CrossRef]
  31. Fernandez-Beltran, R.; Fernandez, R.; Kang, J.; Pla, F. W-NetPan: Double-U network for inter-sensor self-supervised pan-sharpening. Neurocomputing 2023, 530, 125–138. [Google Scholar] [CrossRef]
  32. He, J.; Yuan, Q.; Li, J.; Xiao, Y.; Zhang, L. A self-supervised remote sensing image fusion framework with dual-stage self-learning and spectral super-resolution injection. ISPRS J. Photogramm. Remote Sens. 2023, 204, 131–144. [Google Scholar] [CrossRef]
  33. Oord, A.; Li, Y.; Vinyals, O. Representation learning with contrastive predictive coding. arXiv 2018, arXiv:1807.03748. [Google Scholar]
  34. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 1597–1607. [Google Scholar]
  35. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738. [Google Scholar]
  36. Sermanet, P.; Lynch, C.; Chebotar, Y.; Hsu, J.; Jang, E.; Schaal, S.; Levine, S.; Brain, G. Time-contrastive networks: Self-supervised learning from video. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1134–1141. [Google Scholar]
  37. Tian, Y.; Krishnan, D.; Isola, P. Contrastive multiview coding. In Proceedings of the 16th European Conference on Computer Vision (ECCV 2020), Glasgow, UK, 23–28 August 2020; Part XI. Springer International Publishing: Cham, Switzerland, 2020; pp. 776–794. [Google Scholar]
  38. Cai, Q.; Wang, Y.; Pan, Y.; Yao, T.; Mei, T. Joint contrastive learning with infinite possibilities. Adv. Neural Inf. Process. Syst. 2020, 33, 12638–12648. [Google Scholar]
  39. Wu, H.; Qu, Y.; Lin, S.; Zhou, J.; Qiao, R.; Zhang, Z.; Xie, Y.; Ma, L. Contrastive learning for compact single image dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10551–10560. [Google Scholar]
  40. Wang, Y.; Lin, S.; Qu, Y.; Wu, H.; Zhang, Z.; Xie, Y.; Yao, A. Towards compact single image super-resolution via contrastive self-distillation. arXiv 2021, arXiv:2105.11683. [Google Scholar]
  41. Han, J.; Shoeiby, M.; Malthus, T.; Botha, E.; Anstee, J.; Anwar, S.; Wei, R.; Petersson, L.; Armin, M.A. Single underwater image restoration by contrastive learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 2385–2388. [Google Scholar]
  42. Zhou, M.; Huang, J.; Yan, K.; Yang, G.; Liu, A.; Li, C.; Zhao, F. Normalization-based feature selection and restitution for pan-sharpening. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022. [Google Scholar]
  43. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  44. Baum, E.; Wilczek, F. Supervised learning of probability distributions by neural networks. In Proceedings of the Neural Information Processing Systems Conference, Denver, CO, USA, 8–12 November 1987. [Google Scholar]
  45. Levin, E.; Fleisher, M. Accelerated learning in layered neural networks. Complex Syst. 1988, 2, 3. [Google Scholar]
  46. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  47. Yuhas, R.H.; Goetz, A.F.H.; Boardman, J.W. Discrimination among semi-arid landscape endmembers using the spectral angle mapper (SAM) algorithm. In Proceedings of the JPL, Summaries of the Third Annual JPL Airborne Geoscience Workshop, Volume 1: AVIRIS Workshop, Pasadena, CA, USA, 1–5 June 1992. [Google Scholar]
  48. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L.M. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3012–3021. [Google Scholar] [CrossRef]
  49. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2300–2312. [Google Scholar] [CrossRef]
  50. Restaino, R.; Mura, M.D.; Vivone, G.; Chanussot, J. Context-adaptive pansharpening based on image segmentation. IEEE Trans. Geosci. Remote Sens. 2017, 55, 753–766. [Google Scholar] [CrossRef]
  51. Vivone, G. Robust band-dependent spatial-detail approaches for panchromatic sharpening. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6421–6433. [Google Scholar] [CrossRef]
  52. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O.; Benediktsson, J.A. Model-based fusion of multi- and hyperspectral images using PCA and wavelets. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2652–2663. [Google Scholar] [CrossRef]
  53. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. A new pansharpening algorithm based on total variation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 318–322. [Google Scholar] [CrossRef]
  54. Ding, M.; Xiao, B.; Codella, N.; Luo, P.; Wang, J.; Yuan, L. Davit: Dual attention vision transformers. In Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland, 2022. [Google Scholar]
  55. Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A general U-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
  56. Zhou, H.; Liu, Q.; Wang, Y. PanFormer: A Transformer-Based Model for Pan-Sharpening. In Proceedings of the 2022 IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan, 18–22 July 2022; pp. 1–6. [Google Scholar]
  57. Nie, Z.; Chen, L.; Jeon, S.; Yang, X. Spectral–spatial interaction network for multispectral image and panchromatic image fusion. Remote Sens. 2022, 14, 4100. [Google Scholar] [CrossRef]
  58. Yang, J.; Fu, X.; Hu, Y.; Huang, Y.; Ding, X.; Paisley, J. PanNet: A deep network architecture for pan-sharpening. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1753–1761. [Google Scholar]
Figure 1. Overall framework diagram of MCAGP.
Figure 2. Spectral–Spatial Universal Module (SSUM) framework diagram.
Figure 3. Comparison of the high-frequency portion of an MS image and a PAN image: (a) shows the obvious noise in the high-frequency portion of the MS image; (b) demonstrates the rich and clear spatial details in the high-frequency portion of the PAN image. (a) High-frequency portion of an MS image (from left to right: MS-Original, MS-LH (Vertical High), MS-HL (Horizontal High), and MS-HH (Diagonal High)). (b) High-frequency portion of a PAN image (from left to right: PAN-Original, PAN-LH (Vertical High), PAN-HL (Horizontal High), and PAN-HH (Diagonal High)).
Figure 4. Multiscale contrastive learning framework. Positive samples are the multiscale high-frequency components of the PAN image, anchor samples are the multiscale high-frequency components of the image reconstructed by the spectral enhancement block, and negative samples are the multiscale high-frequency components of the interpolated LRMS image, from which multiple negative samples are generated.
Figure 5. Visualization on the WV3 dataset.
Figure 6. Visualization of the QB dataset.
Figure 7. Visualization of the GF2 dataset.
Table 1. Dataset Information. B is the number of channels in the multispectral image.

| Data | B | MS Resolution | PAN Resolution |
|------|---|---------------|----------------|
| WV3  | 8 | 64            | 256            |
| QB   | 4 | 64            | 256            |
| GF2  | 4 | 64            | 256            |
Table 2. Test results for the WV3 dataset at reduced and full resolution. (Bold: best; underline: second best).

| Method | PSNR ↑ | SSIM ↑ | SAM ↓ | ERGAS ↓ | Dλ ↓ | Ds ↓ | QNR ↑ |
|--------|--------|--------|-------|---------|------|------|-------|
| EXP | 27.409 | 0.678 | 0.135 | 8.441 | 0.056 | 0.156 | 0.796 |
| C-GSA | 31.245 | 0.853 | 0.138 | 5.567 | 0.102 | 0.075 | 0.831 |
| BDSD-PC | 31.521 | 0.873 | 0.130 | 5.313 | 0.063 | 0.073 | 0.870 |
| TV | 32.381 | 0.905 | 0.094 | 4.855 | 0.078 | 0.102 | 0.829 |
| PWMBF | 32.130 | 0.919 | 0.097 | 3.932 | 0.028 | 0.078 | 0.894 |
| DaViT | 30.950 | 0.892 | 0.097 | 4.465 | 0.031 | 0.085 | 0.887 |
| LeWin | 30.591 | 0.882 | 0.096 | 4.694 | 0.029 | 0.082 | 0.892 |
| MSDCNN | 30.441 | 0.884 | 0.098 | 4.758 | 0.035 | 0.089 | 0.880 |
| paDaViT | 32.392 | 0.924 | 0.087 | 3.761 | 0.030 | 0.072 | 0.901 |
| PanFormer | 32.182 | 0.924 | 0.085 | 3.880 | 0.036 | 0.083 | 0.889 |
| SSIN | 34.252 | 0.955 | 0.070 | 3.036 | 0.029 | 0.075 | 0.898 |
| PANNET | 31.257 | 0.899 | 0.091 | 4.413 | 0.028 | 0.077 | 0.898 |
| PNN | 29.412 | 0.857 | 0.105 | 5.288 | 0.035 | 0.093 | 0.876 |
| rDaViT | 31.213 | 0.900 | 0.092 | 4.337 | 0.031 | 0.085 | 0.888 |
| Ours | 35.583 | 0.968 | 0.060 | 2.578 | 0.033 | 0.062 | 0.907 |
Table 3. Test results for the QB dataset at reduced and full resolution. (Bold: best; underline: second best).

| Method | PSNR ↑ | SSIM ↑ | SAM ↓ | ERGAS ↓ | Dλ ↓ | Ds ↓ | QNR ↑ |
|--------|--------|--------|-------|---------|------|------|-------|
| EXP | 28.038 | 0.682 | 0.145 | 11.927 | 0.079 | 0.186 | 0.750 |
| C-GSA | 32.057 | 0.861 | 0.125 | 7.530 | 0.080 | 0.137 | 0.794 |
| BDSD-PC | 31.920 | 0.855 | 0.136 | 7.648 | 0.029 | 0.069 | 0.904 |
| TV | 32.174 | 0.865 | 0.142 | 7.690 | 0.046 | 0.074 | 0.884 |
| PWMBF | 34.223 | 0.904 | 0.118 | 5.504 | 0.058 | 0.068 | 0.878 |
| DaViT | 32.880 | 0.909 | 0.114 | 6.313 | 0.038 | 0.115 | 0.852 |
| LeWin | 32.722 | 0.901 | 0.119 | 6.427 | 0.047 | 0.102 | 0.857 |
| MSDCNN | 33.060 | 0.917 | 0.113 | 6.183 | 0.039 | 0.110 | 0.855 |
| paDaViT | 33.404 | 0.921 | 0.108 | 5.951 | 0.045 | 0.098 | 0.862 |
| PanFormer | 33.552 | 0.914 | 0.102 | 5.857 | 0.048 | 0.114 | 0.843 |
| SSIN | 33.861 | 0.930 | 0.098 | 5.667 | 0.044 | 0.092 | 0.869 |
| PANNET | 33.496 | 0.920 | 0.108 | 5.888 | 0.053 | 0.073 | 0.878 |
| PNN | 32.506 | 0.899 | 0.121 | 6.601 | 0.035 | 0.106 | 0.864 |
| rDaViT | 33.125 | 0.912 | 0.111 | 6.145 | 0.039 | 0.110 | 0.856 |
| Ours | 35.858 | 0.952 | 0.086 | 4.572 | 0.052 | 0.045 | 0.906 |
Table 4. Test results for the GF2 dataset at reduced and full resolution. (Bold: best; underline: second best).

| Method | PSNR ↑ | SSIM ↑ | SAM ↓ | ERGAS ↓ | Dλ ↓ | Ds ↓ | QNR ↑ |
|--------|--------|--------|-------|---------|------|------|-------|
| EXP | 31.094 | 0.794 | 0.035 | 2.645 | 0.019 | 0.167 | 0.816 |
| C-GSA | 33.944 | 0.895 | 0.033 | 1.924 | 0.053 | 0.134 | 0.820 |
| BDSD-PC | 33.882 | 0.894 | 0.032 | 1.911 | 0.049 | 0.139 | 0.819 |
| TV | 33.900 | 0.904 | 0.030 | 1.598 | 0.067 | 0.074 | 0.865 |
| PWMBF | 34.510 | 0.896 | 0.031 | 1.673 | 0.024 | 0.076 | 0.874 |
| DaViT | 36.897 | 0.933 | 0.022 | 1.269 | 0.036 | 0.054 | 0.913 |
| LeWin | 36.327 | 0.920 | 0.024 | 1.357 | 0.035 | 0.049 | 0.918 |
| MSDCNN | 36.166 | 0.923 | 0.024 | 1.368 | 0.033 | 0.045 | 0.923 |
| paDaViT | 37.232 | 0.934 | 0.021 | 1.216 | 0.038 | 0.056 | 0.908 |
| PanFormer | 36.483 | 0.929 | 0.024 | 1.315 | 0.034 | 0.049 | 0.919 |
| SSIN | 36.411 | 0.929 | 0.023 | 1.330 | 0.037 | 0.041 | 0.924 |
| PANNET | 36.478 | 0.926 | 0.022 | 1.318 | 0.038 | 0.043 | 0.921 |
| PNN | 35.616 | 0.914 | 0.028 | 1.461 | 0.034 | 0.049 | 0.917 |
| rDaViT | 37.023 | 0.935 | 0.021 | 1.247 | 0.034 | 0.056 | 0.912 |
| Ours | 37.307 | 0.954 | 0.021 | 1.238 | 0.046 | 0.064 | 0.892 |
Table 5. Results of ablation experiments on the QB dataset. (Bold: best; underline: second best).

| Setup | SSUM | CL | PSNR ↑ | SSIM ↑ | SAM ↓ | ERGAS ↓ | Dλ ↓ | Ds ↓ | QNR ↑ |
|-------|------|----|--------|--------|-------|---------|------|------|-------|
| 1 | × | × | 32.795 | 0.914 | 0.105 | 6.380 | 0.043 | 0.060 | 0.900 |
| 2 | × | ✓ | 32.641 | 0.918 | 0.103 | 6.477 | 0.039 | 0.064 | 0.900 |
| 3 | ✓ | × | 34.504 | 0.942 | 0.093 | 5.210 | 0.045 | 0.080 | 0.879 |
| Ours | ✓ | ✓ | 35.858 | 0.952 | 0.086 | 4.572 | 0.052 | 0.045 | 0.906 |
Table 6. Test results of the WV3 dataset introducing contrast loss at different time periods. (Bold: best).

| Setup | SSUM | CL | Two-Stage | PSNR ↑ | SSIM ↑ | SAM ↓ | ERGAS ↓ | Dλ ↓ | Ds ↓ | QNR ↑ |
|-------|------|----|-----------|--------|--------|-------|---------|------|------|-------|
| Two-stage | ✓ | ✓ | ✓ | 34.952 | 0.966 | 0.062 | 2.752 | 0.036 | 0.052 | 0.914 |
| Ours | ✓ | ✓ | × | 35.583 | 0.968 | 0.060 | 2.578 | 0.033 | 0.062 | 0.907 |
Table 7. Test results of the QB dataset introducing contrast loss at different time periods. (Bold: best).

| Setup | SSUM | CL | Two-Stage | PSNR ↑ | SSIM ↑ | SAM ↓ | ERGAS ↓ | Dλ ↓ | Ds ↓ | QNR ↑ |
|-------|------|----|-----------|--------|--------|-------|---------|------|------|-------|
| Two-stage | ✓ | ✓ | ✓ | 35.661 | 0.952 | 0.088 | 4.650 | 0.059 | 0.079 | 0.867 |
| Ours | ✓ | ✓ | × | 35.858 | 0.952 | 0.086 | 4.571 | 0.052 | 0.045 | 0.906 |
Table 8. Test results of the GF2 dataset introducing contrast loss at different time periods. (Bold: best).

| Setup | SSUM | CL | Two-Stage | PSNR ↑ | SSIM ↑ | SAM ↓ | ERGAS ↓ | Dλ ↓ | Ds ↓ | QNR ↑ |
|-------|------|----|-----------|--------|--------|-------|---------|------|------|-------|
| Two-stage | ✓ | ✓ | ✓ | 37.394 | 0.954 | 0.020 | 1.203 | 0.045 | 0.071 | 0.887 |
| Ours | ✓ | ✓ | × | 37.307 | 0.954 | 0.021 | 1.238 | 0.046 | 0.064 | 0.892 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
