Article

Hyperspectral Super-Resolution Via Joint Regularization of Low-Rank Tensor Decomposition

1 School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
2 The Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission: IGIPLab, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(20), 4116; https://doi.org/10.3390/rs13204116
Submission received: 10 September 2021 / Revised: 5 October 2021 / Accepted: 6 October 2021 / Published: 14 October 2021

Abstract:
The hyperspectral image super-resolution (HSI-SR) problem aims at reconstructing the high-resolution spatial–spectral information of a scene by fusing a low-resolution hyperspectral image (LR-HSI) with the corresponding high-resolution multispectral image (HR-MSI). In order to effectively preserve the spatial and spectral structure of hyperspectral images, a new joint regularized low-rank tensor decomposition method (JRLTD) is proposed for HSI-SR. This model alleviates the problems that traditional tensor decomposition-based HSI-SR methods fail to adequately take the manifold structure of high-dimensional HR-HSI into account and are sensitive to outliers and noise. The model first applies the classical Tucker decomposition to transform the hyperspectral data into the form of a core tensor multiplied by three mode dictionaries, after which graph regularization and unidirectional total variation (TV) regularization are introduced to constrain the three mode dictionaries. In addition, we impose the $\ell_1$-norm on the core tensor to characterize its sparsity. While effectively preserving the spatial and spectral structures in the fused hyperspectral images, the presence of anomalous noise values in the images is reduced. In this paper, the hyperspectral image super-resolution problem is transformed into a joint regularization optimization problem based on tensor decomposition and solved by a hybrid framework combining the alternating direction method of multipliers (ADMM) and the proximal alternating optimization (PAO) algorithm. Experimental results on two benchmark datasets and one real dataset show that JRLTD outperforms state-of-the-art hyperspectral super-resolution algorithms.

1. Introduction

Hyperspectral images are obtained by hyperspectral sensors mounted on different platforms, which simultaneously image the target area in tens or even hundreds of contiguous and relatively narrow wavelength bands across multiple regions of the electromagnetic spectrum, such as the ultraviolet, visible, near-infrared and infrared, thereby capturing rich spectral information along with surface image information. In other words, hyperspectral imagery combines the image information and the spectral information of the target area. The image information reflects external characteristics such as the size and shape of the sample, while the spectral information reflects the physical structure and chemical differences within the sample. In the field of hyperspectral image processing and applications, fusion [1] is an important element. The hyperspectral image super-resolution (HSI-SR) problem is to fuse a hyperspectral image (LR-HSI) with rich spectral information but poor spatial resolution and a multispectral image (HR-MSI) with less spectral information but higher spatial resolution to obtain a high-resolution hyperspectral image (HR-HSI). It can usually be divided into two categories: hyper-sharpening and MSI-HSI fusion.
The earliest work on hyper-sharpening was an extension of pan-sharpening [2,3]. Pan-sharpening is a fusion method that combines a high-resolution panchromatic (HR-PAN) image with a corresponding low-resolution multispectral (LR-MSI) image to create a high-resolution multispectral image (HR-MSI). Meng et al. [4] first classified existing pan-sharpening methods into component substitution (CS), multi-resolution analysis (MRA), and variational optimization (VO) based methods.
CS-based [5] methods first project the MSI bands into a new space via a spectral transform, then replace the component representing spatial information with the HR-PAN image, and finally obtain the fused image by back-projection. Representative methods include principal component analysis (PCA) [6] and Gram–Schmidt (GS) [7]. Multi-resolution analysis (MRA) [8] is widely used in pan-sharpening and is usually based on the discrete wavelet transform (DWT) [9]. The basic idea is to perform the DWT on the MS and PAN images, retain the approximation coefficients of the MSI, and replace the spatial detail coefficients with those of the PAN image to obtain the fused image. Representative algorithms are smoothing filter-based intensity modulation (SFIM) [10] and the generalized Laplacian pyramid (GLP) [11]. VO-based [12] methods are an important class of pan-sharpening methods: since the main fusion processes of regularization-based methods [13,14,15,16,17], Bayesian methods [18,19,20], model-based optimization (MBO) methods [21,22,23] and sparse reconstruction (SR) methods [24,25,26] are all based on, or can be transformed into, the optimization of a variational model, they can be grouped under variational optimization (VO). A comprehensive review of VO methods based on the concept of super-resolution was first presented by Garzelli [27]. As the availability of HS imaging systems increased, pan-sharpening was extended to HSI-SR by fusing HSI with PAN images, which is referred to as hyper-sharpening [28]. In addition, some hyper-sharpening methods have evolved from MSI-HSI fusion methods [13,14,29].
In this case, MSI consists of only a single band, so MSI can be simplified to PAN images [28], and a more detailed comparison of hyper-sharpening methods can be found in [28].
In recent years, several families of methods have been proposed to realize the hyper-sharpening of hyperspectral data using multispectral images: linear spectral unmixing (LSU)-based techniques [30,31], nonnegative matrix factorization-based methods [29,32,33,34,35,36,37], tensor-based methods [38,39,40,41], and deep learning-based methods. The LSU technique [30] essentially decomposes remote sensing data into endmembers and their corresponding abundances. Song et al. [31] proposed a fast unmixing-based sharpening method that uses an unconstrained least-squares algorithm to solve for the endmember and abundance matrices; its innovation is to apply the procedure to sub-images rather than to the whole data. Yokoya et al. [29] proposed a nonnegative matrix factorization (NMF)-based hyper-sharpening algorithm called coupled NMF (CNMF), which alternately unmixes low-resolution HS data and high-resolution MS data. In CNMF, the endmember and abundance matrices are estimated by alternating the spectral decomposition of NMF under the constraints of the observation model. However, the results of CNMF are not always satisfactory: first, the NMF solution is usually non-unique; second, the solution process is very time-consuming, because NMF unmixing must be applied alternately to the low-spatial-resolution hyperspectral and high-spatial-resolution multispectral data, yielding a hyperspectral endmember matrix and a high-spatial-resolution abundance matrix, which are then combined to obtain fused data with high spatial and spectral resolution. An HSI-SR method based on sparse matrix factorization was proposed in [33], which decomposes the HSI into a basis matrix and a sparse coefficient matrix.
Then the HR-HSI was reconstructed using the spectral basis obtained from LR-HSI and the sparse coefficient matrix estimated by HR-MSI. Other NMF-based sharpening algorithms include spectral constraint NMF [34], sparse constraint NMF [35], joint-criterion NMF-based (JNMF) hyper-sharpening algorithm [36], etc. Specifically, some of the NMF-based methods can also be applied to the fusion process of HS and PAN images, e.g., [34,35]. Furthermore, in order to obtain better fusion results, the work of [37] exploited both the sparsity and non-negativity constraints of HR-HSI and achieved good performance.
Although many matrix factorization methods under different constraints have been proposed and yield good performance, they require the three-dimensional remote sensing data to be unfolded into a two-dimensional matrix, which makes it difficult for the algorithms to take full advantage of the spatial–spectral correlation of HSI. HSI-SR methods based on tensor decomposition have therefore become a hot topic in MSI-HSI fusion research because of their excellent performance. The main idea is to treat the HR-HSI as a three-dimensional tensor and to redefine the HSI-SR problem as the estimation of the core tensor and the dictionaries of the three modes. Dian et al. [38] first proposed a non-local sparse tensor factorization method for the HSI-SR problem (called NLSTF), which treats hyperspectral data as a three-mode tensor and exploits the non-local similarity prior of hyperspectral images to cluster the MSI non-locally; although this method produced good results, the LR-HSI was only used for learning the spectral dictionary and not for core tensor estimation. Li et al. [39] proposed the coupled sparse tensor factorization (CSTF) method, which directly decomposes the target HR-HSI using the Tucker decomposition and then promotes the sparsity of the core tensor using the high spatial–spectral correlation in the target HR-HSI. In order to effectively preserve the spatial–spectral structure in LR-HSI and HR-MSI, Zhang et al. [40] proposed a new low-resolution HS (LRHS) and high-resolution MS (HRMS) image fusion method based on spatial–spectral-graph-regularized low-rank tensor decomposition (SSGLRTD). This method redefines the fusion problem as a low-rank tensor decomposition model by considering the LR-HSI as the sum of the HR-HSI and a sparse difference image, and then explores the spatial–spectral low-rank features of the HR-HSI using the Tucker decomposition.
Finally, the HR-MSI and LR-HSI images were used to construct spatial and spectral graphs, and regularization constraints were applied to the low-rank tensor decomposition model. Xu et al. [41] proposed a new HSI-SR method based on unidirectional total variation (TV). The method decomposes the target HR-HSI into a sparse core tensor multiplied by three mode dictionary matrices using the Tucker decomposition, then imposes the $\ell_1$-norm on the core tensor to represent the sparsity of the target HR-HSI and applies unidirectional TV to the three dictionaries to characterize its piecewise smoothness. In addition, tensor-ring-based super-resolution algorithms for hyperspectral images have recently attracted the attention of researchers. He et al. [42,43] proposed an HSI-SR method based on a constrained tensor ring model, which decomposes a higher-order tensor into a series of three-dimensional tensors. Xu et al. [44] proposed a super-resolution fusion of LR-HSI and HR-MSI using a higher-order tensor ring method, which preserves the spectral information and the core tensor in a tensor ring to reconstruct high-resolution hyperspectral images.
Deep learning has received increasing attention in the field of HSI-SR owing to its superior learning performance and high speed. However, deep learning-based methods usually require a large number of samples to train the neural network and obtain its parameters.
The Tucker tensor decomposition is a valid multilinear representation for high-dimensional tensor data, but it fails to take the manifold structure of high-dimensional HR-HSI into account. Graph regularization, in contrast, can well preserve the local information of high-dimensional data and achieves good performance in many fusion tasks. Moreover, existing tensor decomposition-based methods are sensitive to outliers and noise, so there is still much room for improvement. In this paper, we propose a new method based on joint regularized low-rank tensor decomposition (JRLTD) to solve the HSI-SR problem from the tensor perspective. The model operates on the hyperspectral data using the classical Tucker decomposition and introduces graph regularization and unidirectional total variation (TV) regularization, which effectively preserve the spatial and spectral structures in the fused hyperspectral images while reducing the presence of anomalous noise values. The main contributions of the paper are summarized as follows.
(1)
In the process of recovering the high-resolution hyperspectral image (HR-HSI), joint regularization is applied to the three mode dictionaries. Graph regularization makes full use of the manifold structure in the LR-HSI and HR-MSI, while unidirectional total variation regularization fully considers the piecewise smoothness of the target image; the combination of the two effectively preserves the spatial and spectral structure information of the HR-HSI.
(2)
On top of the unidirectional total variation regularization, the $\ell_{2,1}$-norm is used. The $\ell_{2,1}$-norm not only encourages sparsity in the sum of the absolute values of the matrix elements, but also promotes row sparsity.
(3)
In the experiments, not only standard hyperspectral fusion datasets are adopted, but also a real dataset acquired over the local Ningxia region, which makes the algorithm more widely applicable and its performance more convincing.
The remainder of this paper is organized as follows. Section 2 presents theoretical model and related work. Section 3 describes the solution to the optimization model. Section 4 describes our experimental results and evaluates the algorithm. Conclusions and future research directions are presented in Section 5.

2. Related Works

We introduce the definition and representation of the tensor, discuss the basic problems of image fusion, and introduce the concept of joint regularization.

2.1. Tensor Description

In this paper, the calligraphic letter $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ denotes an $N$th-order tensor, and each element of the tensor is obtained by fixing the subscripts: $\mathcal{T}_{i_1, i_2, \ldots, i_N} \in \mathbb{R}$. In addition, to distinguish the tensor notation, capital letters denote matrices, e.g., $X \in \mathbb{R}^{I_1 \times I_2}$, and lowercase letters denote vectors, e.g., $x \in \mathbb{R}^{I}$. Tensor vectorization is the process of transforming a tensor into a vector: an $N$th-order tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is vectorized to $\tau \in \mathbb{R}^{I_1 I_2 \cdots I_N}$, written $\tau = \mathrm{vec}(\mathcal{T})$. The elemental correspondence between them is as follows:
$$\mathcal{T}_{i_1, i_2, \ldots, i_N} = \tau_{\, i_1 + I_1 (i_2 - 1) + \cdots + I_1 I_2 \cdots I_{N-1} (i_N - 1)} \tag{1}$$
An $n$-mode unfolding matrix is defined by arranging the $n$-mode fibers of a tensor as the columns of a matrix, e.g., $\mathcal{T}_{(n)} = \mathrm{unfold}_n(\mathcal{T}) \in \mathbb{R}^{I_n \times I_1 I_2 \cdots I_{n-1} I_{n+1} \cdots I_N}$. Conversely, the inverse of the unfolding is $\mathcal{T} = \mathrm{fold}_n(\mathcal{T}_{(n)})$. The $n$-mode product of a tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and a matrix $P \in \mathbb{R}^{J \times I_n}$, denoted $\mathcal{T} \times_n P$, is a tensor $\mathcal{A}$ of size $I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N$. The $n$-mode product can also be expressed as each $n$-mode fiber multiplied by the matrix, i.e., $\mathcal{A}_{(n)} = P \mathcal{T}_{(n)}$.
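The unfolding, folding, and $n$-mode product operations above can be sketched in NumPy; the function names `unfold`, `fold`, and `mode_product` are our own illustrative choices (the paper's experiments are in Matlab), and the column ordering of the unfolding is any fixed convention consistent between `unfold` and `fold`.

```python
import numpy as np

def unfold(T, n):
    """n-mode unfolding: arrange the n-mode fibers of T as columns of a matrix."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: restore the tensor of the given shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_product(T, P, n):
    """n-mode product T x_n P, realized via the identity A_(n) = P @ T_(n)."""
    shape = list(T.shape)
    shape[n] = P.shape[0]
    return fold(P @ unfold(T, n), n, tuple(shape))
```

Note that `mode_product` implements the defining property $\mathcal{A}_{(n)} = P \mathcal{T}_{(n)}$ directly: multiply the unfolding, then fold back.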
For tensor data, as the dimensionality and order increase, the number of parameters grows exponentially, which is known as the curse of dimensionality; tensor decomposition can alleviate this problem well. Commonly used tensor decomposition methods include the CP decomposition, the Tucker decomposition, the Tensor-Train decomposition, and the Tensor-Ring decomposition. In this paper, the Tucker decomposition is mainly adopted to operate on the tensor data. The Tucker decomposition, also regarded as a form of higher-order principal component analysis, decomposes a tensor into a core tensor multiplied by a factor matrix along each mode, with the following equation:
$$\mathcal{T} = \mathcal{C} \times_1 P^{(1)} \times_2 P^{(2)} \cdots \times_N P^{(N)} \tag{2}$$
where $P^{(i)} \in \mathbb{R}^{I_i \times r_i}$ denotes the factor matrix along the $i$th mode. The core tensor describing the interaction of the different factor matrices is denoted by $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times \cdots \times r_N}$. The matricized form of the Tucker decomposition can be defined as:
$$\mathcal{T}_{(i)} = P^{(i)} \mathcal{C}_{(i)} \left( P^{(N)} \otimes \cdots \otimes P^{(i+1)} \otimes P^{(i-1)} \otimes \cdots \otimes P^{(1)} \right)^{T} \tag{3}$$
where $\otimes$ is the Kronecker product. The $\ell_1$-norm of a tensor is defined as $\|\mathcal{T}\|_1 = \sum_{i_1, \ldots, i_N} |\mathcal{T}_{i_1, \ldots, i_N}|$ and the Frobenius norm as $\|\mathcal{T}\|_F = \sqrt{\sum_{i_1, \ldots, i_N} \mathcal{T}_{i_1, \ldots, i_N}^2}$.
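For a third-order tensor (the case used throughout this paper), the Tucker reconstruction and the two tensor norms above can be sketched compactly; the function names are illustrative.

```python
import numpy as np

def tucker_reconstruct(C, P1, P2, P3):
    """Third-order Tucker reconstruction T = C x_1 P1 x_2 P2 x_3 P3."""
    return np.einsum('abc,ia,jb,kc->ijk', C, P1, P2, P3)

def tensor_l1(T):
    """l1-norm of a tensor: sum of absolute values of all entries."""
    return np.abs(T).sum()

def tensor_fro(T):
    """Frobenius norm: square root of the sum of squared entries."""
    return np.sqrt((T ** 2).sum())
```

With identity factor matrices the reconstruction returns the core tensor itself, which is a quick sanity check of the definition.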

2.2. Observation Model

The desired HR-HSI is defined as $\mathcal{X} \in \mathbb{R}^{N_W \times N_H \times N_S}$, the LR-HSI as $\mathcal{Y} \in \mathbb{R}^{N_w \times N_h \times N_S}$ ($0 < N_w < N_W$, $0 < N_h < N_H$), and the HR-MSI as $\mathcal{Z} \in \mathbb{R}^{N_W \times N_H \times N_s}$ ($0 < N_s < N_S$). The dimensions of the spatial modes are $N_W$ and $N_H$, and $N_S$ denotes the dimension of the spectral mode. From the definition of the Tucker decomposition, we can derive the basic form of the high-resolution hyperspectral image, i.e.,
$$\mathcal{X} = \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 P_3 \tag{4}$$
The LR-HSI $\mathcal{Y}$ can be expressed as the spatial down-sampling of the desired HR-HSI $\mathcal{X}$, i.e.,
$$\mathcal{Y} = \mathcal{C} \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3 \tag{5}$$
The HR-MSI $\mathcal{Z}$ can be expressed as the spectral down-sampling of the desired HR-HSI $\mathcal{X}$, i.e.,
$$\mathcal{Z} = \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3 \tag{6}$$
where $\mathcal{C} \in \mathbb{R}^{n_w \times n_h \times n_s}$ is the core tensor; $S_1 \in \mathbb{R}^{N_w \times N_W}$, $S_2 \in \mathbb{R}^{N_h \times N_H}$ and $S_3 \in \mathbb{R}^{N_s \times N_S}$ are the down-sampling matrices; and $P_1 \in \mathbb{R}^{N_W \times n_w}$, $P_2 \in \mathbb{R}^{N_H \times n_h}$ and $P_3 \in \mathbb{R}^{N_S \times n_s}$ are the dictionaries of the three modes. Then $\hat{P}_1$, $\hat{P}_2$, $\hat{P}_3$ are the down-sampled dictionaries of the three modes, derived from the following equation:
$$\hat{P}_1 = S_1 P_1 \in \mathbb{R}^{N_w \times n_w}, \quad \hat{P}_2 = S_2 P_2 \in \mathbb{R}^{N_h \times n_h}, \quad \hat{P}_3 = S_3 P_3 \in \mathbb{R}^{N_s \times n_s} \tag{7}$$
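The observation model can be simulated end-to-end on toy data. All sizes below, and the averaging/band-selection forms chosen for $S_1$, $S_2$, $S_3$, are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
nw, nh, ns = 4, 4, 3                          # toy Tucker ranks
C  = rng.random((nw, nh, ns))                 # core tensor
P1 = rng.random((8, nw))                      # width-mode dictionary  (N_W = 8)
P2 = rng.random((8, nh))                      # height-mode dictionary (N_H = 8)
P3 = rng.random((6, ns))                      # spectral dictionary    (N_S = 6)

S1 = np.kron(np.eye(4), np.ones((1, 2)) / 2)  # 2x spatial averaging   (N_w = 4)
S2 = S1.copy()                                #                        (N_h = 4)
S3 = np.eye(6)[::2]                           # spectral band selection (N_s = 3)

tucker = lambda C, A, B, D: np.einsum('abc,ia,jb,kc->ijk', C, A, B, D)
X = tucker(C, P1, P2, P3)                     # desired HR-HSI
Y = tucker(C, S1 @ P1, S2 @ P2, P3)           # LR-HSI: spatial down-sampling
Z = tucker(C, P1, P2, S3 @ P3)                # HR-MSI: spectral down-sampling
```

By multilinearity of the $n$-mode product, forming $\mathcal{Y}$ from the down-sampled dictionaries $\hat{P}_i = S_i P_i$ is the same as spatially down-sampling $\mathcal{X}$ directly.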

2.3. Joint Regularization

Based on the Tucker decomposition and the factor matrices down-sampled along the three modes, the HSI-SR problem can be expressed by the following equation:
$$\min_{P_1, P_2, P_3, \mathcal{C}} \left\| \mathcal{Y} - \mathcal{C} \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3 \right\|_F^2 + \left\| \mathcal{Z} - \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3 \right\|_F^2 \quad \mathrm{s.t.} \ \left\| \mathcal{C} \right\|_0 \le N \tag{8}$$
where $\| \cdot \|_F$ denotes the Frobenius norm and $N$ bounds the number of nonzero entries of the core tensor. Clearly, the optimization problem in (8) is non-convex. Aiming for a tractable and scalable approximation, we impose the $\ell_1$-norm on the core tensor instead of the $\ell_0$-norm to formulate the unconstrained version, which still describes the sparsity in the spatial and spectral dimensions:
$$\min_{P_1, P_2, P_3, \mathcal{C}} \left\| \mathcal{Y} - \mathcal{C} \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3 \right\|_F^2 + \left\| \mathcal{Z} - \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3 \right\|_F^2 + \lambda_1 \left\| \mathcal{C} \right\|_1 \tag{9}$$
Nevertheless, problem (9) is still an ill-posed, non-convex problem. Therefore, solving (9) requires some prior information about the target HR-HSI. In this paper, we consider the spectral correlation and spatial coherence of the hyperspectral and multispectral images.
It is well known that HSI suffers from high correlation and redundancy in the spectral space and retains its fundamental information in a low-dimensional subspace. Because it lacks an appropriate regularization term, the fusion model in (9) is sensitive to outliers and noise. Therefore, to estimate the HSI accurately, we use a joint regularization (graph regularization and unidirectional total variation regularization) in the form of constraints derived from the HR-MSI and LR-HSI. To obtain accurate results for the target HR-HSI, we first assume that the spatial and spectral manifold information of the HR-MSI and LR-HSI is similar to that embedded in the target HR-HSI, and describe the manifold information present in the HR-MSI and LR-HSI in the form of two graphs: one based on the spatial dimension and the other on the spectral dimension. Thus, the spatial and spectral information from the HR-MSI and LR-HSI can be transferred to the HR-HSI by spatial and spectral graph regularization, which preserves the intrinsic geometric structure of the HR-MSI and LR-HSI as much as possible. After that, we use a unidirectional total variation regularization model on the three mode dictionaries for the purpose of eliminating noise in the images.

2.3.1. Graph Regularization

The pixels in the HR-MSI do not exist independently, and the correlation between neighboring pixels is very high. A block strategy is generally used to define adjacent pixels, but this ignores the spatial structure and consistency of the image. As an oversegmentation method, the superpixel technique not only captures redundant image information but also adaptively adjusts the shape and size of spatial regions. Considering the compatibility and computational complexity of superpixels, the entropy rate superpixel (ERS) segmentation method is employed in this paper to find spatial regions adaptively. The construction of the spatial graph consists of four steps: generating an intensity image, superpixel segmentation, defining spatial neighborhoods, and generating the spatial graph. In contrast, for the LR-HSI, neighboring bands are usually contiguous, meaning that they have extremely strong correlation in the spectral domain. To further maintain the correlation and consistency in the HR-HSI, we leverage the nearest-neighbor strategy to establish the spectral graph.
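A minimal sketch of the spectral-graph construction via the nearest-neighbor strategy is shown below. The heat-kernel weighting and the values of `k` and `sigma` are illustrative assumptions, and the ERS-based spatial graph is omitted for brevity.

```python
import numpy as np

def spectral_graph_laplacian(Y, k=2, sigma=1.0):
    """Build a k-nearest-neighbor graph over the bands of an LR-HSI
    Y (width x height x bands) and return the graph Laplacian L = D - W."""
    B = Y.reshape(-1, Y.shape[2]).T                      # one row per band
    d2 = ((B[:, None, :] - B[None, :, :]) ** 2).sum(2)   # pairwise squared distances
    W = np.zeros_like(d2)
    for i in range(B.shape[0]):
        nn = np.argsort(d2[i])[1:k + 1]                  # k nearest bands, skip self
        W[i, nn] = np.exp(-d2[i, nn] / sigma ** 2)       # heat-kernel weight
    W = np.maximum(W, W.T)                               # symmetrize the adjacency
    return np.diag(W.sum(1)) - W
```

The resulting Laplacian is symmetric positive semidefinite with zero row sums, which is what the trace regularization terms in the fusion model require.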

2.3.2. Unidirectional Total Variation Regularization

Hyperspectral images are susceptible to noise, which seriously affects visual quality and reduces the accuracy and robustness of subsequent algorithms for image recognition, image classification and edge extraction. Therefore, it is necessary to study effective noise removal algorithms. Common algorithms fall into three categories: filtering methods, including spatial-domain and transform-domain filtering; matching methods, including moment matching and histogram matching; and variational methods.
The best known of the variational methods is the total variation (TV) model, which has proven to be one of the most effective image denoising techniques. The total variation model is an anisotropic model that relies on gradient descent for image smoothing: it smooths as much as possible in the interior of the image (where differences between adjacent pixels are small), while avoiding smoothing at the edges. Its most distinctive feature is that it preserves the edge information of the image while removing noise. In general, the $\ell_1$-norm is imposed on the total variation model, and better denoising is obtained by improving the model or combining it with other algorithms. When the $\ell_1$-norm is used, the model is insensitive to smaller outliers but sensitive to larger ones; when the $\ell_2$-norm is used, it is insensitive to larger outliers and sensitive to smaller ones; and when the $\ell_\sigma$-norm is used, a tuning parameter adjusts the behavior between the $\ell_2$-norm and the $\ell_1$-norm, so that robustness is retained whether the outliers are large or small, but at the cost of tuning the parameter $\sigma$. The $\ell_{2,1}$-norm lets the total variation model handle outliers well while avoiding this burden, acting as a flexible alternative without the tuning parameter of the $\ell_\sigma$-norm. Therefore, in this paper, we impose the $\ell_{2,1}$-norm on the unidirectional total variation model to achieve noise removal.

2.4. Proposed Algorithm

Combining the observation model proposed in Section 2.2 with the joint regularization constraint proposed in Section 2.3, the following fusion model is obtained to solve the HSI-SR problem, i.e.,
$$\begin{aligned} \min_{P_1, P_2, P_3, \mathcal{C}} \ & \left\| \mathcal{Y} - \mathcal{C} \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3 \right\|_F^2 + \left\| \mathcal{Z} - \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3 \right\|_F^2 + \lambda_1 \left\| \mathcal{C} \right\|_1 \\ & + \beta \, \mathrm{tr}\left( P_3^T P_S P_3 \right) + \gamma \, \mathrm{tr}\left( (P_2 \otimes P_1)^T P_D (P_2 \otimes P_1) \right) \\ & + \lambda_2 \left\| D_y P_1 \right\|_{2,1} + \lambda_3 \left\| D_y P_2 \right\|_{2,1} + \lambda_4 \left\| D_y P_3 \right\|_{2,1} \\ \mathrm{s.t.} \ & \ \mathcal{X} = \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 P_3 \end{aligned} \tag{10}$$
where $\mathcal{X}$ denotes the desired HR-HSI, $\mathcal{Y}$ the acquired LR-HSI, and $\mathcal{Z}$ the HR-MSI of the same scene; $P_1$, $P_2$, $P_3$ are the dictionaries of the three modes; $\mathcal{C}$ is the core tensor; $\hat{P}_1$, $\hat{P}_2$, $\hat{P}_3$ are the down-sampled dictionaries of the three modes; $P_S$, $P_D$ are the graph Laplacian matrices; $\beta$, $\gamma$ are the balancing parameters of the graph regularization; $\lambda_i$ ($i = 1, 2, 3, 4$) are positive regularization parameters; and $D_y$ is a finite difference operator along the vertical direction, given by the following equation:
$$D_y = \begin{bmatrix} 1 & -1 & 0 & \cdots & 0 \\ 0 & 1 & -1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & -1 \end{bmatrix} \tag{11}$$
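The finite difference operator defined above, and the $\ell_{2,1}$ penalty it enters, can be sketched as follows; the function names and the sign convention (1, −1 on each row) are ours.

```python
import numpy as np

def diff_matrix(n):
    """First-order finite difference operator D_y of size (n-1) x n."""
    D = np.zeros((n - 1, n))
    i = np.arange(n - 1)
    D[i, i] = 1.0
    D[i, i + 1] = -1.0
    return D

def l21_norm(M):
    """l_{2,1}-norm: sum of the Euclidean norms of the rows of M."""
    return np.sqrt((M ** 2).sum(axis=1)).sum()

# Unidirectional TV term ||D_y P||_{2,1} for a mode dictionary P
P = np.random.rand(6, 3)
tv_penalty = l21_norm(diff_matrix(6) @ P)
```

A dictionary whose columns are constant has zero unidirectional TV, which matches the piecewise-smoothness intuition behind the penalty.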
Next, we will give an effective algorithm for solving Model (10).

3. Optimization

The proposed model (10) is non-convex when $P_1$, $P_2$, $P_3$ and $\mathcal{C}$ are solved for jointly, and closed-form solutions for $P_1$, $P_2$, $P_3$ and $\mathcal{C}$ can hardly be obtained. Non-convex optimization problems are considered very difficult to solve because the feasible set may contain many local optima; that is to say, the solution of the problem is not unique. However, with respect to each block of variables, with the other variables fixed, the model proposed in (10) is convex. In this context, we utilize the proximal alternating optimization (PAO) scheme [45,46] to solve it, which is guaranteed to converge to a stationary point under certain conditions. Concretely, the iterative updates for model (10) are as follows:
$$\begin{aligned} P_1 &= \arg\min_{P_1} f(P_1, P_2, P_3, \mathcal{C}) + \rho \left\| P_1 - P_1^{pre} \right\|_F^2 \\ P_2 &= \arg\min_{P_2} f(P_1, P_2, P_3, \mathcal{C}) + \rho \left\| P_2 - P_2^{pre} \right\|_F^2 \\ P_3 &= \arg\min_{P_3} f(P_1, P_2, P_3, \mathcal{C}) + \rho \left\| P_3 - P_3^{pre} \right\|_F^2 \\ \mathcal{C} &= \arg\min_{\mathcal{C}} f(P_1, P_2, P_3, \mathcal{C}) + \rho \left\| \mathcal{C} - \mathcal{C}^{pre} \right\|_F^2 \end{aligned} \tag{12}$$
where the objective function $f(P_1, P_2, P_3, \mathcal{C})$ is implicitly defined by (10), and $(\cdot)^{pre}$ and $\rho$ represent the block of variables estimated in the previous iteration and a positive constant, respectively. Next, we present the solutions of the four optimization problems in (12) in detail.
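The block-coordinate scheme in (12) can be sketched as a generic loop; the solver callbacks stand in for the ADMM subroutines (Algorithms A1–A4 in the appendix), and all names and the stopping rule here are illustrative.

```python
import numpy as np

def pao_fuse(solve_P1, solve_P2, solve_P3, solve_C, init,
             rho=1e-3, max_iter=50, tol=1e-8):
    """Proximal alternating optimization: each block is updated by minimizing
    f plus the proximal term rho * ||block - block_pre||_F^2 with the other
    blocks held fixed; iterate until the blocks stop changing."""
    P1, P2, P3, C = init
    for _ in range(max_iter):
        P1_pre, P2_pre, P3_pre, C_pre = P1, P2, P3, C
        P1 = solve_P1(P2, P3, C, P1_pre, rho)
        P2 = solve_P2(P1, P3, C, P2_pre, rho)
        P3 = solve_P3(P1, P2, C, P3_pre, rho)
        C  = solve_C(P1, P2, P3, C_pre, rho)
        change = sum(np.linalg.norm(a - b) for a, b in
                     zip((P1, P2, P3, C), (P1_pre, P2_pre, P3_pre, C_pre)))
        if change < tol:
            break
    return P1, P2, P3, C
```

Note that each block update always sees the most recent estimates of the other blocks, matching the Gauss–Seidel style of (12).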

3.1. Optimization of $P_1$

With $P_2$, $P_3$ and $\mathcal{C}$ fixed, the optimization problem of $P_1$ in (12) is given by
$$\arg\min_{P_1} \left\| \mathcal{Y} - \mathcal{C} \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3 \right\|_F^2 + \left\| \mathcal{Z} - \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3 \right\|_F^2 + \rho \left\| P_1 - P_1^{pre} \right\|_F^2 + \gamma \, \mathrm{tr}\left( (P_2 \otimes P_1)^T P_D (P_2 \otimes P_1) \right) + \lambda_2 \left\| D_y P_1 \right\|_{2,1} \tag{13}$$
where $P_1^{pre}$ denotes the width-mode dictionary estimated in the previous iteration and $D_y \in \mathbb{R}^{(N_W - 1) \times N_W}$ denotes the difference matrix along the vertical direction of $P_1$. Using the properties of $n$-mode matrix unfolding, problem (13) can be formulated as
$$\arg\min_{P_1} \left\| Y_{(1)} - S_1 P_1 A_1 \right\|_F^2 + \left\| Z_{(1)} - P_1 B_1 \right\|_F^2 + \rho \left\| P_1 - P_1^{pre} \right\|_F^2 + \gamma \, \mathrm{tr}\left( (P_2 \otimes P_1)^T P_D (P_2 \otimes P_1) \right) + \lambda_2 \left\| D_y P_1 \right\|_{2,1} \tag{14}$$
where $Y_{(1)}$ and $Z_{(1)}$ are the width-mode (1-mode) unfolding matrices of the tensors $\mathcal{Y}$ and $\mathcal{Z}$, respectively, $A_1 = (\mathcal{C} \times_2 \hat{P}_2 \times_3 P_3)_{(1)}$, and $B_1 = (\mathcal{C} \times_2 P_2 \times_3 \hat{P}_3)_{(1)}$.

3.2. Optimization of $P_2$

With $P_1$, $P_3$ and $\mathcal{C}$ fixed, the optimization problem of $P_2$ in (12) is given by
$$\arg\min_{P_2} \left\| \mathcal{Y} - \mathcal{C} \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3 \right\|_F^2 + \left\| \mathcal{Z} - \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3 \right\|_F^2 + \rho \left\| P_2 - P_2^{pre} \right\|_F^2 + \gamma \, \mathrm{tr}\left( (P_2 \otimes P_1)^T P_D (P_2 \otimes P_1) \right) + \lambda_3 \left\| D_y P_2 \right\|_{2,1} \tag{15}$$
where $P_2^{pre}$ denotes the height-mode dictionary estimated in the previous iteration and $D_y \in \mathbb{R}^{(N_H - 1) \times N_H}$ denotes the difference matrix along the vertical direction of $P_2$. Using the properties of $n$-mode matrix unfolding, problem (15) can be formulated as
$$\arg\min_{P_2} \left\| Y_{(2)} - S_2 P_2 A_2 \right\|_F^2 + \left\| Z_{(2)} - P_2 B_2 \right\|_F^2 + \rho \left\| P_2 - P_2^{pre} \right\|_F^2 + \gamma \, \mathrm{tr}\left( (P_2 \otimes P_1)^T P_D (P_2 \otimes P_1) \right) + \lambda_3 \left\| D_y P_2 \right\|_{2,1} \tag{16}$$
where $Y_{(2)}$ and $Z_{(2)}$ are the height-mode (2-mode) unfolding matrices of the tensors $\mathcal{Y}$ and $\mathcal{Z}$, respectively, $A_2 = (\mathcal{C} \times_1 \hat{P}_1 \times_3 P_3)_{(2)}$, and $B_2 = (\mathcal{C} \times_1 P_1 \times_3 \hat{P}_3)_{(2)}$.

3.3. Optimization of $P_3$

With $P_1$, $P_2$ and $\mathcal{C}$ fixed, the optimization problem of $P_3$ in (12) is given by
$$\arg\min_{P_3} \left\| \mathcal{Y} - \mathcal{C} \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3 \right\|_F^2 + \left\| \mathcal{Z} - \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3 \right\|_F^2 + \rho \left\| P_3 - P_3^{pre} \right\|_F^2 + \beta \, \mathrm{tr}\left( P_3^T P_S P_3 \right) + \lambda_4 \left\| D_y P_3 \right\|_{2,1} \tag{17}$$
where $P_3^{pre}$ denotes the spectral dictionary estimated in the previous iteration and $D_y \in \mathbb{R}^{(N_S - 1) \times N_S}$ denotes the difference matrix along the vertical direction of $P_3$. Using the properties of $n$-mode matrix unfolding, problem (17) can be formulated as
$$\arg\min_{P_3} \left\| Y_{(3)} - P_3 A_3 \right\|_F^2 + \left\| Z_{(3)} - S_3 P_3 B_3 \right\|_F^2 + \rho \left\| P_3 - P_3^{pre} \right\|_F^2 + \beta \, \mathrm{tr}\left( P_3^T P_S P_3 \right) + \lambda_4 \left\| D_y P_3 \right\|_{2,1} \tag{18}$$
where $Y_{(3)}$ and $Z_{(3)}$ are the spectral-mode (3-mode) unfolding matrices of the tensors $\mathcal{Y}$ and $\mathcal{Z}$, respectively, $A_3 = (\mathcal{C} \times_1 \hat{P}_1 \times_2 \hat{P}_2)_{(3)}$, and $B_3 = (\mathcal{C} \times_1 P_1 \times_2 P_2)_{(3)}$.

3.4. Optimization of $\mathcal{C}$

With $P_1$, $P_2$ and $P_3$ fixed, the optimization problem of $\mathcal{C}$ in (12) is given by
$$\arg\min_{\mathcal{C}} \left\| \mathcal{Y} - \mathcal{C} \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3 \right\|_F^2 + \left\| \mathcal{Z} - \mathcal{C} \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3 \right\|_F^2 + \lambda_1 \left\| \mathcal{C} \right\|_1 + \rho \left\| \mathcal{C} - \mathcal{C}^{pre} \right\|_F^2 \tag{19}$$
where $\mathcal{C}^{pre}$ represents the core tensor estimated in the previous iteration.
It should be noted that problems (14), (16), (18) and (19) are convex. Therefore, all four subproblems can be effectively solved using the fast and accurate ADMM technique. Due to the similarity of the solution processes of problems (14), (16), and (18), we include the solution details of the four subproblems and the optimization updates of each variable in the appendices for conciseness. In Appendix A, Algorithms A1–A4 summarize the solution processes of the four subproblems in (12).
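Within such an ADMM solver, the $\ell_1$ term on the core tensor is typically handled by the proximal operator of the $\ell_1$-norm, i.e., elementwise soft-thresholding. The sketch below is this standard building block, not the paper's exact Algorithm A4.

```python
import numpy as np

def soft_threshold(V, tau):
    """Proximal operator of tau*||.||_1: shrink each entry toward zero by tau,
    zeroing entries whose magnitude is below tau."""
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)
```

Applied to the (vectorized or tensor-shaped) core variable inside each ADMM iteration, this step is what produces the sparse core tensor that the $\ell_1$ penalty in (19) asks for.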
Algorithm 1 specifies the steps of the JRLTD-based hyperspectral image super-resolution proposed in this section.
Algorithm 1 JRLTD-Based Hyperspectral Image Super-Resolution.
 1: Initialize $P_1$, $P_2$ through the DUC-KSVD algorithm [47];
 2: Initialize $P_3$ through the SISAL algorithm [48];
 3: Initialize $\mathcal{C}$ through Algorithm A4;
 4: while not converged do
 5:     Step 1: Update the width-mode dictionary matrix $P_1$ via Algorithm A1;
 6:         $\hat{P}_1 = S_1 P_1$, $P_1^{pre} = P_1$;
 7:     Step 2: Update the height-mode dictionary matrix $P_2$ via Algorithm A2;
 8:         $\hat{P}_2 = S_2 P_2$, $P_2^{pre} = P_2$;
 9:     Step 3: Update the spectral dictionary matrix $P_3$ via Algorithm A3;
10:         $\hat{P}_3 = S_3 P_3$, $P_3^{pre} = P_3$;
11:     Step 4: Update the core tensor $\mathcal{C}$ via Algorithm A4;
12:         $\mathcal{C}^{pre} = \mathcal{C}$;
13: end while
14: Estimate the target HR-HSI $\mathcal{X}$ via formula (4)

4. Experiments

4.1. Datasets

In this section, three datasets are used to test the performance of the proposed method.
The first dataset is the Pavia University dataset, acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) optical sensor over the University of Pavia, Italy. The image size is $610 \times 340 \times 115$, with a spatial resolution of 1.3 m. We reduced the number of spectral bands to 93 after removing the water vapor absorption bands. For reasons related to the down-sampling process, only the $256 \times 256 \times 93$ image in the upper left corner was used as the reference image in the experiments.
The second dataset is the Washington DC dataset, acquired over the Washington DC Mall by the HYDICE sensor; the original annotated image is of size $1280 \times 307$. The spatial resolution is 2.5 m, and there are 210 bands in total. We cropped a part of the image of size $256 \times 256 \times 191$ for the experiments and used it as the reference image.
The third dataset is a scene of Sand Lake in Ningxia, China, acquired by the GF-5 AHSI sensor during a flight campaign over Ningxia. The original image size is $2774 \times 2554 \times 330$ with a spatial resolution of 30 m and 330 bands; the experiments reduce the spectral bands to 93, so the Sand Lake reference image is of size $256 \times 256 \times 93$.

4.2. Compared Algorithms

We selected classical and currently popular fusion methods for comparison, including CNMF [29], HySure [18], NLSTF [38], CSTF [39], and UTV-HSISR [41]. The experiments were run on a PC equipped with an Intel Core i5-9300HF CPU, 16 GB RAM and an NVIDIA GTX 1660Ti GPU, under Windows 10 x64, and were implemented in MATLAB R2016a.

4.3. Quantitative Metrics

For the evaluation of image fusion, objective metrics provide more convincing evidence than subjective visual inspection alone. To evaluate the fusion output numerically, we use the following eight metrics: the peak signal-to-noise ratio (PSNR), an objective measure of image distortion or noise level; the relative dimensionless global error in synthesis (ERGAS), which measures the overall quality of the fused result; the spectral angle mapper (SAM), which gives the absolute value of the spectral angle between two images; the root mean square error (RMSE), which measures the deviation between the predicted and true values; the correlation coefficient (CC), which indicates the ability of the fused image to retain spectral information; the degree of distortion (DD), which quantifies the distortion between the fused image and the ground-truth image; and the structural similarity (SSIM) and universal image quality index (UIQI), which measure the degree of structural similarity between two images.
The mean squared error (MSE) is first defined as:
$$\mathrm{MSE} = \frac{1}{N_W N_H}\sum_{i=0}^{N_W-1}\sum_{j=0}^{N_H-1}\big[I(i,j)-J(i,j)\big]^2$$
where $N_W$ and $N_H$ denote the size of the image, $I$ denotes a noise-free image, and $J$ denotes a noisy image. Then the PSNR is defined as:
$$\mathrm{PSNR} = 10\cdot\log_{10}\frac{MAX_I^2}{\mathrm{MSE}}$$
where $MAX_I$ denotes the maximum possible pixel value of the image. The metrics used to evaluate the fused image can then be expressed by the following equations:
$$\mathrm{PSNR}(X,\tilde{X}) = \frac{1}{N_S}\sum_{i=1}^{N_S}\mathrm{PSNR}(X_i,\tilde{X}_i)$$
$$\mathrm{ERGAS}(X,\tilde{X}) = \frac{100}{S}\sqrt{\frac{1}{N_S}\sum_{i=1}^{N_S}\frac{\mathrm{MSE}(X_i,\tilde{X}_i)}{\mathrm{MEAN}(\tilde{X}_i)^2}}$$
$$\mathrm{SAM}(X,\tilde{X}) = \frac{1}{N_W N_H}\sum_{i=1}^{N_W N_H}\arccos\frac{\langle X_i,\tilde{X}_i\rangle}{\|X_i\|_2\cdot\|\tilde{X}_i\|_2}$$
$$\mathrm{RMSE}(X,\tilde{X}) = \frac{\|X-\tilde{X}\|_F}{\sqrt{N_W N_H N_S}}$$
$$\mathrm{CC}(X,\tilde{X}) = \frac{\sum_{i=1}^{N_W}\sum_{j=1}^{N_H}\big(X_{i,j}-V_X\big)\big(\tilde{X}_{i,j}-V_{\tilde{X}}\big)}{\sqrt{\sum_{i=1}^{N_W}\sum_{j=1}^{N_H}\big(X_{i,j}-V_X\big)^2\cdot\sum_{i=1}^{N_W}\sum_{j=1}^{N_H}\big(\tilde{X}_{i,j}-V_{\tilde{X}}\big)^2}}$$
$$\mathrm{SSIM}(X,\tilde{X}) = \frac{1}{M}\sum_{i=1}^{M}\frac{\big(2\bar{X}_i\bar{\tilde{X}}_i + c_1\big)\big(2\sigma_{X_i\tilde{X}_i} + c_2\big)}{\big(\bar{X}_i^2 + \bar{\tilde{X}}_i^2 + c_1\big)\big(\sigma_{X_i}^2 + \sigma_{\tilde{X}_i}^2 + c_2\big)}$$
$$\mathrm{DD}(X,\tilde{X}) = \frac{1}{N_W N_H N_S}\big\|\mathrm{vec}(X)-\mathrm{vec}(\tilde{X})\big\|_1$$
$$\mathrm{UIQI}(X,\tilde{X}) = \frac{1}{M}\sum_{i=1}^{M}\frac{4\sigma_{X_i\tilde{X}_i}\cdot\bar{X}_i\bar{\tilde{X}}_i}{\big(\sigma_{X_i}^2 + \sigma_{\tilde{X}_i}^2\big)\big(\bar{X}_i^2 + \bar{\tilde{X}}_i^2\big)}$$
where $N_S$ denotes the number of bands; $S$ denotes the spatial downsampling factor; $X_i$, $\tilde{X}_i$ denote the $i$th band of the ground-truth image and the fused image, respectively (in SAM, the spectral vectors at the $i$th pixel); $\mathrm{MEAN}(\tilde{X}_i)$ denotes the mean value of each band image; $V_X$ and $V_{\tilde{X}}$ denote the mean pixel values of the original and fused images, respectively; $M$ denotes the number of sliding windows; $\bar{X}_i$, $\bar{\tilde{X}}_i$ denote the mean values of $X$ and $\tilde{X}$ in the $i$th window; $\sigma_{X_i}$, $\sigma_{\tilde{X}_i}$ denote the corresponding standard deviations; $c_1$, $c_2$ are constants; $\sigma_{X_i\tilde{X}_i}$ denotes the covariance of $X_i$, $\tilde{X}_i$; and $\sigma_{X_i}^2$, $\sigma_{\tilde{X}_i}^2$ denote the variances of $X_i$, $\tilde{X}_i$, respectively. It should be noted that the best value of ERGAS, SAM, RMSE and DD is 0, the best value of CC, SSIM and UIQI is 1, and the best value of PSNR is $+\infty$.
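To make the definitions concrete, the band-averaged PSNR, the global RMSE, and the mean SAM can be sketched in NumPy as follows (bands along the last axis; pixel values are assumed scaled to $[0,1]$ so that $MAX_I = 1$ — a simplifying assumption, not a constraint of the paper):

```python
import numpy as np

def psnr(X, Xh):
    """Mean per-band PSNR, assuming data in [0, 1] (MAX = 1)."""
    vals = []
    for b in range(X.shape[-1]):
        mse = np.mean((X[..., b] - Xh[..., b]) ** 2)
        vals.append(10.0 * np.log10(1.0 / mse))
    return float(np.mean(vals))

def rmse(X, Xh):
    """Global root mean square error over all pixels and bands."""
    return float(np.linalg.norm(X - Xh) / np.sqrt(X.size))

def sam(X, Xh, eps=1e-12):
    """Mean spectral angle in radians; each pixel's spectrum lies along the last axis."""
    A = X.reshape(-1, X.shape[-1])
    B = Xh.reshape(-1, Xh.shape[-1])
    num = np.sum(A * B, axis=1)
    den = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1) + eps
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))
```

For example, a fused image that differs from the reference by a constant 0.1 in every band gives a per-band MSE of 0.01 and hence a PSNR of 20 dB.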

4.4. Parameters Discussion

JRLTD is mainly related to the following parameters, i.e., the number of PAO iterations K, the weights of the proximal terms ρ , the sparse regularization parameters λ 1 , the smooth regularization parameters λ 2 , λ 3 and λ 4 , the graph regularization parameters β and γ , and the number of three-mode dictionaries N w , N h and N s .
According to the description of Algorithm 1, we use the PAO scheme to solve problem (10). The change of PSNR with the number of PAO iterations K is shown in Figure 1. For all three datasets, the PSNR increases quickly as K goes from 1 to 10. For the Pavia University dataset, the PSNR fluctuates slightly as K varies from 10 to 50, and its maximum number of iterations is set to 20 in the experiments. The Washington DC dataset reaches its maximum PSNR at K = 25, so we set the maximum number of iterations to 25 there. Similarly, we set the maximum number of iterations for Sand Lake to 20.
The parameter ρ is the weight of the proximal term in (12). To evaluate its influence, we run the method for different values of ρ. Figure 2 presents the PSNR of the fused HSIs of the three datasets for different log ρ values (the base of the log is 10). In the experiments, the range of log ρ is set to [−3, 0]. As displayed in Figure 2, the PSNR rises for all three datasets as log ρ varies from −3 to −1, reaches a maximum when log ρ equals −1, and decreases sharply when log ρ is greater than −1. Therefore, we set log ρ to −1, i.e., ρ = 0.1, for all three datasets.
The regularization parameter λ1 in (10) controls the sparsity of the core tensor and therefore affects the estimation of the HR-HSI; higher values of λ1 yield a sparser core tensor. Figure 3 shows the PSNR of the reconstructed HSI for the Pavia University dataset under different log λ1. In this work, we set the range of log λ1 to [−9, −2]. As shown in Figure 3, when log λ1 lies in [−9, −5], the PSNR stays relatively stable; when log λ1 lies in [−5, −4], the PSNR decreases slowly; and when log λ1 > −4, the PSNR decreases sharply. Therefore, we set log λ1 to −6, that is, $\lambda_1 = 10^{-6}$, for the Pavia University dataset. The values for the Washington DC and Sand Lake datasets can be decided in the same way.
The unidirectional total variation regularization parameters λ2, λ3 and λ4 control the piecewise smoothness of the width-mode, height-mode and spectral-mode dictionaries, respectively. Figures 4 and 5 show the PSNR of the reconstructed HSI for the Pavia University dataset under different log λ2, log λ3 and log λ4. In the experiments, the ranges of log λ2 and log λ3 are both set to [−9, −2] and the range of log λ4 to [−4, 4]. As shown in Figures 4 and 5, the PSNR peaks when log λ2 = −8, log λ3 = −7, and log λ4 = 2, so these values are used for the Pavia University dataset. It is worth noting that the optimal value of λ4 is relatively large compared with λ2 and λ3: since HSI is continuous in the spectral dimension, the total variation of the dictionary along the spectral direction is potentially smaller, so the optimal value of its regularization parameter is correspondingly larger. The values of λ2, λ3 and λ4 for the Washington DC and Sand Lake datasets can be determined in the same way.
The graph regularization parameters β and γ control the spectral structure of the spectral graph and the spatial correlation of the spatial graph, respectively. Figure 6 shows the PSNR of the reconstructed HSI for the Pavia University dataset under different β and γ. In the experiments, the ranges of both log β and log γ are set to [−7, −1]. As shown in Figure 6, the PSNR peaks when log β = −1 and log γ = −1, so we set both log β and log γ to −1 for the Pavia University dataset. The β and γ values for the Washington DC and Sand Lake datasets can be determined in the same way.
The numbers of atoms in the three-mode dictionaries are n_w, n_h and n_s. Figure 7 shows the PSNR of the fused HSI of the Pavia University dataset for different n_w and n_h, and Figure 8 shows it for different n_s. We set the ranges of n_w and n_h to [260, 400] and the range of n_s to [3, 21]; the latter is small because the spectral features of HSI lie on a low-dimensional subspace. As shown in Figure 7, the PSNR increases sharply as n_w varies in [260, 360], reaches a maximum at n_w = 360, and tends to decrease in [360, 400], so we set n_w to 360. The PSNR peaks when n_h is 400, but considering the overall performance on the other evaluation indicators, we set n_h to 380. From Figure 8, the PSNR decreases when n_s > 15, so we set n_s to 15. In summary, we use n_w = 360, n_h = 380, and n_s = 15 for the Pavia University dataset; the values for the Washington DC and Sand Lake datasets can be determined in the same way.
Table 1 lists the tuning ranges of the 11 main parameters, the values chosen for the three HSI datasets described in Section 4.1, and the recommended range of each parameter for convenient tuning.
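The tuning procedure used throughout this subsection — sweeping a parameter over a base-10 logarithmic grid and keeping the value that maximizes PSNR on the reference image — can be sketched generically; the score function below is a toy stand-in for a PSNR evaluation, shaped to peak at ρ = 0.1 as in Figure 2:

```python
import numpy as np

def log_grid_search(score, log_lo, log_hi, num):
    """Return the value on a base-10 logarithmic grid that maximizes `score`."""
    grid = np.logspace(log_lo, log_hi, num)
    return max(grid, key=score)

# toy stand-in for a PSNR curve, peaked at rho = 0.1 (log rho = -1)
score = lambda rho: -(np.log10(rho) + 1.0) ** 2
best_rho = log_grid_search(score, -3, 0, num=7)   # grid: 10^-3, 10^-2.5, ..., 10^0
```

In practice, `score` would run the fusion once per candidate value and return the PSNR against the reference image.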

4.5. Experimental Results

In this section, we show the fusion results of the five tested methods for the Pavia University, Washington DC, and Sand Lake datasets.

4.5.1. Experiment on Pavia University

To display more spatial detail and the fusion results clearly, we select three bands (R: 61, G: 25, B: 13) and synthesize them into a pseudo-color image; the fusion results of the different methods on the Pavia University dataset are shown in the first row of Figure 9. In addition, to show the fusion performance more visually, we generate difference images that present the discrepancy between the reference image and each fused image. The second row of Figure 9 shows the difference images of the Pavia University dataset, corresponding to the fusion results in the first row.
From Figure 9, we can see that the spatial details in the fusion results of the different methods are greatly enhanced. However, compared with the reference image, there are still some spectral differences and noise effects in the fused images. For example, in Figure 9c,d, the fusion results of CNMF [29] and HySure [18] show spectral distortion. Compared with the fusion results in Figure 9e,f, the fused images in Figure 9g,h provide better spectral information and better preserve the spatial structure.
In addition, the difference images in Figure 9c–e show relatively large reconstruction errors, whereas Figure 9g,h are better than, and similar to, Figure 9f. In other words, the UTV-HSISR algorithm [41] and the proposed JRLTD algorithm achieve the best fusion results, with little residual noise.
The quality indicators of the compared methods are shown in Table 2, with the best results highlighted in bold. In terms of spectral features, the proposed algorithm has the smallest RMSE, the CC closest to 1, the smallest ERGAS, the smallest SAM, and the smallest DD, indicating that its result is closest to the reference image, has the smallest spectral distortion, and shows the best spectral agreement with the reference image. In terms of signal-to-noise ratio, the proposed algorithm has the highest PSNR, which indicates that it suppresses noise most effectively. In terms of spatial characteristics, its SSIM is closest to 1, indicating that it is closest to the reference image in brightness, contrast and structure, and its UIQI is closest to 1, indicating that the loss of relevant information is minimal.

4.5.2. Experiment on Washington DC

To display more spatial detail and the fusion results clearly, we select three bands (R: 40, G: 30, B: 5) and synthesize them into a pseudo-color image; the fusion results of the different methods on the Washington DC dataset are shown in the first row of Figure 10. In addition, to show the fusion performance more visually, we generate difference images that present the discrepancy between the reference image and each fused image. The second row of Figure 10 shows the difference images of the Washington DC dataset.
It can be seen that the spectral information is distorted in the results of CNMF [29] and HySure [18]. In addition, there are some blurring effects in the building regions in the results of NLSTF [38] when compared with Figure 10a. Compared with the fusion results of CSTF [39], the fused images of UTV-HSISR [41] and JRLTD are able to provide better spectral information and preserve the spatial structure. From the difference images, we can observe that the error of the UTV-HSISR algorithm [41] and the JRLTD algorithm proposed in the paper is smaller as a whole.
The quality evaluation results are shown in Table 3, with the best values marked in bold. From Table 3, in terms of spectral features, the proposed algorithm has the smallest RMSE, the CC closest to 1, the second smallest ERGAS, the smallest SAM, and the smallest DD. Collectively, the proposed algorithm is closest to the reference image, with the smallest spectral distortion and the best spectral agreement. In terms of signal-to-noise ratio, the proposed algorithm has the highest PSNR, which indicates that it suppresses noise most effectively. In terms of spatial characteristics, its SSIM is closest to 1, indicating that it is closest to the reference image in brightness, contrast and structure, and its UIQI is closest to 1, indicating that the loss of relevant information is minimal. In summary, the proposed JRLTD algorithm outperforms the other algorithms in most cases.

4.5.3. Experiment on Sand Lake in Ningxia of China

To display more spatial detail and the fusion results clearly, we select three bands (R: 41, G: 25, B: 3) and synthesize them into a pseudo-color image; the fusion results of the different methods on the Sand Lake dataset are shown in the first row of Figure 11. In addition, to show the fusion performance more visually, we generate difference images that present the discrepancy between the reference image and each fused image. The second row of Figure 11 shows the difference images of the Sand Lake dataset.
Matching the fusion results in the first row of Figure 11 with the difference images in the second row, we can see that Figure 11c–e show spectral distortion compared with the reference image. These results are also poorly reconstructed, which is why their difference images appear to contain a great deal of residual information. From the difference images, Figure 11g,h are better than, and similar to, Figure 11f; in other words, the UTV-HSISR algorithm [41] and the proposed JRLTD algorithm achieve the best fusion results, with little residual noise.
Furthermore, Table 4 displays the quantitative evaluations with the eight metrics, with the best values indicated in bold. As can be seen from Table 4, in terms of spectral features, the proposed algorithm has the smallest RMSE, the smallest ERGAS, the smallest SAM, and the smallest DD, while its CC equals that of the UTV-HSISR algorithm. Overall, the proposed algorithm is closest to the reference image, with the smallest spectral distortion and the best spectral agreement with the source image. In terms of signal-to-noise ratio, the proposed algorithm has the highest PSNR, which indicates that it suppresses noise most effectively. In terms of spatial characteristics, its SSIM is closest to 1, indicating that it is closest to the reference image in brightness, contrast and structure, and its UIQI is closest to 1, indicating that the loss of relevant information is minimal. In general, the proposed JRLTD algorithm outperforms the other algorithms in most cases.

5. Conclusions

In this paper, a hyperspectral image super-resolution method using joint regularization as prior information is proposed. Considering the geometric structures of LR-HSI and HR-MSI, two graphs are constructed to capture the spatial correlation of HR-MSI and the spectral similarity of LR-HSI. Then, the presence of anomalous noise values in the images was reduced by smoothing the LR-HSI and HR-MSI using unidirectional total variational regularization. In addition, an optimization algorithm based on PAO and ADMM is utilized to efficiently solve the fusion model. Finally, experiments were conducted on two benchmark datasets and one real dataset. Compared with some fusion methods such as CNMF [29], HySure [18], NLSTF [38], CSTF [39], and UTV-HSISR [41], this fusion method produces better spatial details and better preservation of the spectral structure due to the superiority of joint regularization and tensor decomposition.
However, there are still some limitations, and there is room for improvement of the proposed JRLTD algorithm. For example, JRLTD has a high computational complexity, which leads to a relatively long running time. In future work, we aim to extend the method in two directions. On the one hand, although the ADMM algorithm allows a large, complex problem to be divided into multiple smaller problems that can be solved simultaneously in a distributed manner, it also increases the computational effort and decreases the computational speed. We will therefore try to find a closed-form solution for each sub-problem, or accelerate the method using parallel computing techniques. On the other hand, HSI exhibits non-local spatial similarity, that is, duplicate or similar structures within the image, so when processing an image block we can use information from similar surrounding blocks. This prior has been shown to be effective for image super-resolution problems, and we will investigate incorporating non-local spatial similarity into the JRLTD method.

Author Contributions

Funding acquisition, W.B.; Validation, K.Q.; Writing—original draft, M.C.; Writing—review & editing, W.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Natural Science Foundation of Ningxia Province of China (Project No. 2020AAC02028), the Natural Science Foundation of Ningxia Province of China (Project No. 2021AAC03179) and the Innovation Projects for Graduate Students of North Minzu University (Project No.YCX21080).

Acknowledgments

The authors would like to thank the Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission: IGIPLab for their support.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

HSI-SR	Hyperspectral image super-resolution
LR-HSI	Low-resolution hyperspectral image
HR-MSI	High-resolution multispectral image
HR-HSI	High-resolution hyperspectral image
JRLTD	Joint regularized low-rank tensor decomposition
TV	Total variation
ADMM	Alternating direction multiplier method
PAO	Proximal alternate optimization
CNMF	Coupled non-negative matrix factorization
HySure	Hyperspectral image super-resolution via subspace-based regularization
NLSTF	Non-local sparse tensor factorization
CSTF	Coupled sparse tensor factorization
PSNR	Peak signal-to-noise ratio
ERGAS	Relative dimensionless global error in synthesis
SAM	Spectral angle mapper
RMSE	Root mean square error
CC	Correlation coefficient
DD	Degree of distortion
SSIM	Structural similarity
UIQI	Universal image quality index

Appendix A

Appendix A.1. Optimization of P 1

Problem (14) is convex and can be solved efficiently by ADMM. Thus, we introduce the variable $M = P_1$, and the unconstrained optimization in (14) can be re-expressed in an equivalent constrained form, i.e.,
$$\arg\min_{P_1} \|Y_1 - S_1 P_1 A_1\|_F^2 + \|Z_1 - P_1 B_1\|_F^2 + \rho\|P_1 - P_1^{pre}\|_F^2 + \gamma\,\mathrm{tr}\big((P_2 \otimes P_1)^T P_D (P_2 \otimes P_1)\big) + \lambda_2\|D_y P_1\|_{2,1} \quad \mathrm{s.t.}\; P_1 = M$$
The augmented Lagrangian function for problem (A1) is
$$\mathcal{L}(P_1, M, V_1) = \|Y_1 - S_1 P_1 A_1\|_F^2 + \|Z_1 - P_1 B_1\|_F^2 + \rho\|P_1 - P_1^{pre}\|_F^2 + \mu_1\|P_1 - M - V_1\|_F^2 + \gamma\,\mathrm{tr}\big((P_2 \otimes M)^T P_D (P_2 \otimes M)\big) + \lambda_2\|D_y M\|_{2,1}$$
where $V_1$ denotes the Lagrange multiplier and $\mu_1$ represents a positive penalty parameter. We solve (A2) using the ADMM algorithm:
$$P_1^{t+1} = \arg\min_{P_1} \mathcal{L}(P_1, M^t, V_1^t), \quad M^{t+1} = \arg\min_{M} \mathcal{L}(P_1^{t+1}, M, V_1^t), \quad V_1^{t+1} = \arg\min_{V_1} \mathcal{L}(P_1^{t+1}, M^{t+1}, V_1)$$
(1) $P_1$-subproblem: From (A2), we have
$$\arg\min_{P_1} \|Y_1 - S_1 P_1 A_1\|_F^2 + \|Z_1 - P_1 B_1\|_F^2 + \rho\|P_1 - P_1^{pre}\|_F^2 + \mu_1\|P_1 - M - V_1\|_F^2$$
The optimization problem in (A4) is quadratic and has a unique solution, which is obtained by solving the following Sylvester matrix equation, i.e.,
$$S_1^T S_1 P_1 A_1 A_1^T + P_1 B_1 B_1^T + (\rho + \mu_1) P_1 = S_1^T Y_1 A_1^T + Z_1 B_1^T + \rho P_1^{pre} + \mu_1 (M + V_1)$$
We adopt the CG [46] to solve (A5) efficiently.
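Since (A5) is linear in $P_1$ with a symmetric positive-definite operator, a matrix-free conjugate-gradient iteration suffices. The sketch below solves a toy instance of the same form $G P F + P H + c P = \mathrm{RHS}$; all matrices here are random stand-ins, not the paper's operators:

```python
import numpy as np

def cg_matrix(Aop, RHS, tol=1e-10, maxit=500):
    """Conjugate gradient on matrix unknowns: solve Aop(P) = RHS for an SPD operator Aop."""
    P = np.zeros_like(RHS)
    R = RHS - Aop(P)
    D = R.copy()
    rs = np.sum(R * R)
    for _ in range(maxit):
        AD = Aop(D)
        alpha = rs / np.sum(D * AD)
        P = P + alpha * D
        R = R - alpha * AD
        rs_new = np.sum(R * R)
        if np.sqrt(rs_new) < tol:
            break
        D = R + (rs_new / rs) * D
        rs = rs_new
    return P

# toy instance of (A5): G = S^T S, F = A A^T, H = B B^T, c = rho + mu_1
rng = np.random.default_rng(1)
S = rng.standard_normal((6, 4))
A = rng.standard_normal((4, 5))
B = rng.standard_normal((4, 5))
G, F, H, c = S.T @ S, A @ A.T, B @ B.T, 0.5
Aop = lambda P: G @ P @ F + P @ H + c * P

P_true = rng.standard_normal((4, 4))
P_est = cg_matrix(Aop, Aop(P_true))
```

Using Frobenius inner products makes this exactly CG on the vectorized system, without ever forming the large Kronecker-structured matrix.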
(2) $M$-subproblem: From (A2), we have
$$\arg\min_{M} \gamma\,\mathrm{tr}\big((P_2 \otimes M)^T P_D (P_2 \otimes M)\big) + \lambda_2\|D_y M\|_{2,1} + \mu_1\|P_1 - M - V_1\|_F^2$$
This problem is complicated to solve directly because of the Kronecker product involved in the spatial-graph regularization. Taking advantage of the symmetry and positive semidefiniteness of Laplacian matrices, we simplify (A6) by applying the Cholesky factorization [49] to $P_D$, which yields a more compact function of $M$:
$$\arg\min_{M} \gamma\|M^T U_1^1\|_F^2 + \lambda_2\|D_y M\|_{2,1} + \mu_1\|P_1 - M - V_1\|_F^2$$
where $U_1^1$ is the matrix obtained by performing the Cholesky decomposition of $P_D$ and combining it with the Tucker2 decomposition model. The solution of (A7) is obtained from the following equations:
$$\big(\mu_1 I + \lambda_2 D_y \Sigma_D\big) M = \gamma U_1^1 + \mu_1 (P_1 - V_1), \qquad M = \big(\mu_1 I + \lambda_2 D_y \Sigma_D\big)^{-1}\big(\gamma U_1^1 + \mu_1 (P_1 - V_1)\big)$$
where $I$ denotes an identity matrix of appropriate size and $\Sigma_D = \mathrm{diag}\big(\tfrac{1}{\|M_1\|_2}, \tfrac{1}{\|M_2\|_2}, \ldots, \tfrac{1}{\|M_{n_w}\|_2}\big)$.
(3) $V_1$-subproblem: From (A2), the Lagrange multiplier $V_1$ is updated by the following formula:
$$V_1 = V_1 - (P_1 - M)$$
Specifically, each step of solving the $P_1$-subproblem (13) by ADMM is summarized in Algorithm A1.
Algorithm A1 Solve $P_1$-Subproblem (13) with ADMM.
Input: $Y$, $Z$, $P_2$, $\hat{P}_2$, $P_3$, $\hat{P}_3$, $\mathcal{C}$, $D_y$, $P_1^{pre}$, $\rho > 0$, $\gamma > 0$, $\mu_1 > 0$, and $\lambda_2 > 0$.
Output: Dictionary matrix $P_1$.
while not converged do
    Step 1 Update the dictionary matrix $P_1$ via (A5);
    Step 2 Update the variable $M$ via (A8);
    Step 3 Update the Lagrangian multiplier $V_1$ via (A9);
end while

Appendix A.2. Optimization of P2

Like problem (14), problem (16) can be solved efficiently with ADMM. Hence, we introduce the variable $N = P_2$, and the unconstrained optimization in (16) can be rephrased in an equivalent constrained form, i.e.,
$$\arg\min_{P_2} \|Y_2 - S_2 P_2 A_2\|_F^2 + \|Z_2 - P_2 B_2\|_F^2 + \rho\|P_2 - P_2^{pre}\|_F^2 + \gamma\,\mathrm{tr}\big((P_2 \otimes P_1)^T P_D (P_2 \otimes P_1)\big) + \lambda_3\|D_y P_2\|_{2,1} \quad \mathrm{s.t.}\; P_2 = N$$
The augmented Lagrangian function for problem (A10) is
$$\mathcal{L}(P_2, N, V_2) = \|Y_2 - S_2 P_2 A_2\|_F^2 + \|Z_2 - P_2 B_2\|_F^2 + \rho\|P_2 - P_2^{pre}\|_F^2 + \mu_2\|P_2 - N - V_2\|_F^2 + \gamma\,\mathrm{tr}\big((N \otimes P_1)^T P_D (N \otimes P_1)\big) + \lambda_3\|D_y N\|_{2,1}$$
where $V_2$ denotes the Lagrange multiplier and $\mu_2$ represents a positive penalty parameter. We solve (A11) using the ADMM algorithm:
$$P_2^{t+1} = \arg\min_{P_2} \mathcal{L}(P_2, N^t, V_2^t), \quad N^{t+1} = \arg\min_{N} \mathcal{L}(P_2^{t+1}, N, V_2^t), \quad V_2^{t+1} = \arg\min_{V_2} \mathcal{L}(P_2^{t+1}, N^{t+1}, V_2)$$
(1) $P_2$-subproblem: From (A11), we have
$$\arg\min_{P_2} \|Y_2 - S_2 P_2 A_2\|_F^2 + \|Z_2 - P_2 B_2\|_F^2 + \rho\|P_2 - P_2^{pre}\|_F^2 + \mu_2\|P_2 - N - V_2\|_F^2$$
The optimization problem in (A13) is quadratic and has a unique solution, which is obtained by solving the following Sylvester matrix equation, i.e.,
$$S_2^T S_2 P_2 A_2 A_2^T + P_2 B_2 B_2^T + (\rho + \mu_2) P_2 = S_2^T Y_2 A_2^T + Z_2 B_2^T + \rho P_2^{pre} + \mu_2 (N + V_2)$$
We adopt the CG to solve (A14) efficiently.
(2) $N$-subproblem: From (A11), we have
$$\arg\min_{N} \gamma\,\mathrm{tr}\big((N \otimes P_1)^T P_D (N \otimes P_1)\big) + \lambda_3\|D_y N\|_{2,1} + \mu_2\|P_2 - N - V_2\|_F^2$$
Note that the same Kronecker-product computation arises here, and the method used to solve for $M$ can be applied to obtain the solution for $N$.
(3) $V_2$-subproblem: From (A11), the Lagrange multiplier $V_2$ is updated by the following formula:
$$V_2 = V_2 - (P_2 - N)$$
Specifically, each step of solving the $P_2$-subproblem (15) by ADMM is summarized in Algorithm A2.
Algorithm A2 Solve $P_2$-Subproblem (15) with ADMM.
Input: $Y$, $Z$, $P_1$, $\hat{P}_1$, $P_3$, $\hat{P}_3$, $\mathcal{C}$, $D_y$, $P_2^{pre}$, $\rho > 0$, $\gamma > 0$, $\mu_2 > 0$, and $\lambda_3 > 0$.
Output: Dictionary matrix $P_2$.
while not converged do
    Step 1 Update the dictionary matrix $P_2$ via (A14);
    Step 2 Update the variable $N$ via (A15);
    Step 3 Update the Lagrangian multiplier $V_2$ via (A16);
end while

Appendix A.3. Optimization of P3

Like problem (14), problem (18) can be solved efficiently with ADMM. Hence, we introduce the variable $O = P_3$, and the unconstrained optimization in (18) can be rephrased in an equivalent constrained form, i.e.,
$$\arg\min_{P_3} \|Y_3 - S_3 P_3 A_3\|_F^2 + \|Z_3 - P_3 B_3\|_F^2 + \rho\|P_3 - P_3^{pre}\|_F^2 + \beta\,\mathrm{tr}\big(P_3^T P_S P_3\big) + \lambda_4\|D_y P_3\|_{2,1} \quad \mathrm{s.t.}\; P_3 = O$$
The augmented Lagrangian function for problem (A17) is
$$\mathcal{L}(P_3, O, V_3) = \|Y_3 - S_3 P_3 A_3\|_F^2 + \|Z_3 - P_3 B_3\|_F^2 + \rho\|P_3 - P_3^{pre}\|_F^2 + \mu_3\|P_3 - O - V_3\|_F^2 + \beta\,\mathrm{tr}\big(O^T P_S O\big) + \lambda_4\|D_y O\|_{2,1}$$
where $V_3$ denotes the Lagrange multiplier and $\mu_3$ represents a positive penalty parameter. We solve (A18) using the ADMM algorithm:
$$P_3^{t+1} = \arg\min_{P_3} \mathcal{L}(P_3, O^t, V_3^t), \quad O^{t+1} = \arg\min_{O} \mathcal{L}(P_3^{t+1}, O, V_3^t), \quad V_3^{t+1} = \arg\min_{V_3} \mathcal{L}(P_3^{t+1}, O^{t+1}, V_3)$$
(1) $P_3$-subproblem: From (A18), we have
$$\arg\min_{P_3} \|Y_3 - S_3 P_3 A_3\|_F^2 + \|Z_3 - P_3 B_3\|_F^2 + \rho\|P_3 - P_3^{pre}\|_F^2 + \mu_3\|P_3 - O - V_3\|_F^2$$
The optimization problem in (A20) is quadratic and has a unique solution, which is obtained by solving the following Sylvester matrix equation, i.e.,
$$S_3^T S_3 P_3 A_3 A_3^T + P_3 B_3 B_3^T + (\rho + \mu_3) P_3 = S_3^T Y_3 A_3^T + Z_3 B_3^T + \rho P_3^{pre} + \mu_3 (O + V_3)$$
We adopt the CG to solve (A21) efficiently.
(2) $O$-subproblem: From (A18), we have
$$\arg\min_{O} \beta\,\mathrm{tr}\big(O^T P_S O\big) + \lambda_4\|D_y O\|_{2,1} + \mu_3\|P_3 - O - V_3\|_F^2$$
This yields the closed-form solution of $O$:
$$O = \big(2\beta P_S + \mu_3 I + \lambda_4 D_y \Sigma_S\big)^{-1}\mu_3\big(P_3 - V_3\big)$$
where $I$ denotes an identity matrix of appropriate size and $\Sigma_S = \mathrm{diag}\big(\tfrac{1}{\|O_1\|_2}, \tfrac{1}{\|O_2\|_2}, \ldots, \tfrac{1}{\|O_{n_s}\|_2}\big)$.
(3) $V_3$-subproblem: From (A18), the Lagrange multiplier $V_3$ is updated by the following formula:
$$V_3 = V_3 - (P_3 - O)$$
Specifically, each step of solving the $P_3$-subproblem (17) by ADMM is summarized in Algorithm A3.
Algorithm A3 Solve $P_3$-Subproblem (17) with ADMM.
Input: $Y$, $Z$, $P_1$, $\hat{P}_1$, $P_2$, $\hat{P}_2$, $\mathcal{C}$, $D_y$, $P_3^{pre}$, $\rho > 0$, $\beta > 0$, $\mu_3 > 0$, and $\lambda_4 > 0$.
Output: Dictionary matrix $P_3$.
while not converged do
    Step 1 Update the dictionary matrix $P_3$ via (A21);
    Step 2 Update the variable $O$ via (A23);
    Step 3 Update the Lagrangian multiplier $V_3$ via (A24);
end while

Appendix A.4. Optimization of C

Problem (19) is convex and can be solved efficiently by the ADMM algorithm: we introduce two auxiliary variables $\mathcal{C}_1 = \mathcal{C}$ and $\mathcal{C}_2 = \mathcal{C}$ and reformulate problem (19) as follows:
$$\arg\min_{\mathcal{C},\mathcal{C}_1,\mathcal{C}_2} f(\mathcal{C}) + f_1(\mathcal{C}_1) + f_2(\mathcal{C}_2) \quad \mathrm{s.t.}\; \mathcal{C}_1 = \mathcal{C},\; \mathcal{C}_2 = \mathcal{C}$$
where
$$f(\mathcal{C}) = \lambda_1\|\mathcal{C}\|_1 + \rho\|\mathcal{C} - \mathcal{C}^{pre}\|_F^2, \quad f_1(\mathcal{C}_1) = \|\mathcal{Y} - \mathcal{C}_1 \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3\|_F^2, \quad f_2(\mathcal{C}_2) = \|\mathcal{Z} - \mathcal{C}_2 \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3\|_F^2$$
The augmented Lagrangian function for problem (A26) is
$$\mathcal{L}(\mathcal{C},\mathcal{C}_1,\mathcal{C}_2,V_4,V_5) = \lambda_1\|\mathcal{C}\|_1 + \rho\|\mathcal{C} - \mathcal{C}^{pre}\|_F^2 + \|\mathcal{Y} - \mathcal{C}_1 \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3\|_F^2 + \mu_4\|\mathcal{C} - \mathcal{C}_1 - V_4\|_F^2 + \|\mathcal{Z} - \mathcal{C}_2 \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3\|_F^2 + \mu_4\|\mathcal{C} - \mathcal{C}_2 - V_5\|_F^2$$
where $V_4$, $V_5$ denote the Lagrange multipliers and $\mu_4$ represents a positive penalty parameter.
The ADMM iterations of (A27) are shown below:
$$\mathcal{C}^{t+1} = \arg\min_{\mathcal{C}} \mathcal{L}(\mathcal{C},\mathcal{C}_1^t,\mathcal{C}_2^t,V_4^t,V_5^t), \quad \mathcal{C}_1^{t+1} = \arg\min_{\mathcal{C}_1} \mathcal{L}(\mathcal{C}^{t+1},\mathcal{C}_1,\mathcal{C}_2^t,V_4^t,V_5^t), \quad \mathcal{C}_2^{t+1} = \arg\min_{\mathcal{C}_2} \mathcal{L}(\mathcal{C}^{t+1},\mathcal{C}_1^{t+1},\mathcal{C}_2,V_4^t,V_5^t), \quad V_4^{t+1} = \arg\min_{V_4} \mathcal{L}(\mathcal{C}^{t+1},\mathcal{C}_1^{t+1},\mathcal{C}_2^{t+1},V_4,V_5^t), \quad V_5^{t+1} = \arg\min_{V_5} \mathcal{L}(\mathcal{C}^{t+1},\mathcal{C}_1^{t+1},\mathcal{C}_2^{t+1},V_4^{t+1},V_5)$$
(1) $\mathcal{C}$-subproblem: From (A27), we have
$$\arg\min_{\mathcal{C}} \lambda_1\|\mathcal{C}\|_1 + \rho\|\mathcal{C} - \mathcal{C}^{pre}\|_F^2 + \mu_4\|\mathcal{C} - \mathcal{C}_1 - V_4\|_F^2 + \mu_4\|\mathcal{C} - \mathcal{C}_2 - V_5\|_F^2$$
whose solution $\mathcal{C}$ is easily derived by the element-wise soft-threshold function as:
$$\mathcal{C} = \mathrm{soft}\Big(\frac{\mu_4(\mathcal{C}_1 + V_4 + \mathcal{C}_2 + V_5) + \rho\,\mathcal{C}^{pre}}{2\mu_4 + \rho},\; \frac{\lambda_1}{4\mu_4 + 2\rho}\Big)$$
where $\mathrm{soft}(x, y) = \mathrm{sign}(x)\max(|x| - y, 0)$.
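The soft-threshold operator is a one-liner; a quick NumPy sketch of its shrinkage behavior:

```python
import numpy as np

def soft(x, y):
    """Element-wise soft threshold: sign(x) * max(|x| - y, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - y, 0.0)

out = soft(np.array([-2.0, -0.3, 0.0, 0.3, 2.0]), 0.5)
# out is [-1.5, 0, 0, 0, 1.5]: large entries shrink toward zero by 0.5, small ones vanish
```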
(2) $\mathcal{C}_1$-subproblem: From (A27), we have
$$\arg\min_{\mathcal{C}_1} \mu_4\|\mathcal{C}_1 - \mathcal{C} + V_4\|_F^2 + \|\mathcal{Y} - \mathcal{C}_1 \times_1 \hat{P}_1 \times_2 \hat{P}_2 \times_3 P_3\|_F^2$$
Problem (A31) is equal to
$$\arg\min_{c_1} \mu_4\|c_1 - c + v_4\|_2^2 + \|y - Q_1 c_1\|_2^2$$
where the vectors $c_1 = \mathrm{vec}(\mathcal{C}_1)$, $c = \mathrm{vec}(\mathcal{C})$, $v_4 = \mathrm{vec}(V_4)$ and $y = \mathrm{vec}(\mathcal{Y})$ are generated by vectorizing the tensors $\mathcal{C}_1$, $\mathcal{C}$, $V_4$ and $\mathcal{Y}$, respectively, and $Q_1 = P_3 \otimes \hat{P}_2 \otimes \hat{P}_1$.
Problem (A32) has the following closed-form solution, i.e.,
$$c_1 = \big(Q_1^T Q_1 + \mu_4 I\big)^{-1}\big(Q_1^T y + \mu_4 c - \mu_4 v_4\big)$$
Note that $Q_1 \in \mathbb{R}^{N_W N_H N_S \times n_w n_h n_s}$ is extremely large, so the formula in (A33) is expensive to evaluate directly. Fortunately, we find that
$$\big(Q_1^T Q_1 + \mu_4 I\big)^{-1} = \big(D_3 \otimes D_2 \otimes D_1\big)\big(\Sigma_3 \otimes \Sigma_2 \otimes \Sigma_1 + \mu_4 I\big)^{-1}\big(D_3 \otimes D_2 \otimes D_1\big)^T$$
where $\Sigma_i$ and $D_i$ $(i = 1, 2, 3)$ are the diagonal matrices and unitary matrices containing the eigenvalues and eigenvectors of $\hat{P}_1^T \hat{P}_1$, $\hat{P}_2^T \hat{P}_2$, and $P_3^T P_3$, respectively. Therefore, $\big(\Sigma_3 \otimes \Sigma_2 \otimes \Sigma_1 + \mu_4 I\big)^{-1}$ is a diagonal matrix and can be computed easily. In addition, the term $Q_1^T y$ in (A33) can be computed by
$$Q_1^T y = \mathrm{vec}\big(\mathcal{Y} \times_1 \hat{P}_1^T \times_2 \hat{P}_2^T \times_3 P_3^T\big)$$
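The identity in (A35) is the standard relation between Kronecker-structured matrices and mode products: $(U_3 \otimes U_2 \otimes U_1)\,\mathrm{vec}(\mathcal{T}) = \mathrm{vec}(\mathcal{T} \times_1 U_1 \times_2 U_2 \times_3 U_3)$ for column-major vectorization. A NumPy check on toy sizes (all matrices are random stand-ins):

```python
import numpy as np

def mode_product(T, U, mode):
    """n-mode product: contract dimension `mode` of T with the columns of U."""
    T = np.moveaxis(T, mode, 0)
    s = T.shape
    out = U @ T.reshape(s[0], -1)
    return np.moveaxis(out.reshape((U.shape[0],) + s[1:]), 0, mode)

rng = np.random.default_rng(2)
Y = rng.standard_normal((5, 6, 4))    # toy tensor
P1h = rng.standard_normal((5, 3))     # stand-in for \hat{P}_1
P2h = rng.standard_normal((6, 3))     # stand-in for \hat{P}_2
P3 = rng.standard_normal((4, 2))      # stand-in for P_3

# explicit Kronecker form: Q1 = P3 (x) P2h (x) P1h acting on the column-major vec
Q1 = np.kron(P3, np.kron(P2h, P1h))
lhs = Q1.T @ Y.flatten(order='F')

# mode-product form: vec(Y x1 P1h^T x2 P2h^T x3 P3^T), never forming Q1
T = mode_product(mode_product(mode_product(Y, P1h.T, 0), P2h.T, 1), P3.T, 2)
rhs = T.flatten(order='F')
```

Avoiding the explicit `Q1` reduces both memory and arithmetic from the product of all dimensions to a sequence of small matrix multiplications.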
(3) $\mathcal{C}_2$-subproblem: From (A27), we have
$$\arg\min_{\mathcal{C}_2} \mu_4\|\mathcal{C}_2 - \mathcal{C} + V_5\|_F^2 + \|\mathcal{Z} - \mathcal{C}_2 \times_1 P_1 \times_2 P_2 \times_3 \hat{P}_3\|_F^2$$
Problem (A36) is equal to
$$\arg\min_{c_2} \mu_4\|c_2 - c + v_5\|_2^2 + \|z - Q_2 c_2\|_2^2$$
where the vectors $c_2 = \mathrm{vec}(\mathcal{C}_2)$, $c = \mathrm{vec}(\mathcal{C})$, $v_5 = \mathrm{vec}(V_5)$ and $z = \mathrm{vec}(\mathcal{Z})$ are generated by vectorizing the tensors $\mathcal{C}_2$, $\mathcal{C}$, $V_5$ and $\mathcal{Z}$, respectively, and $Q_2 = \hat{P}_3 \otimes P_2 \otimes P_1$.
Problem (A37) has the following closed-form solution, i.e.,
$$c_2 = \big(Q_2^T Q_2 + \mu_4 I\big)^{-1}\big(Q_2^T z + \mu_4 c - \mu_4 v_5\big)$$
Note that $Q_2 \in \mathbb{R}^{N_W N_H N_S \times n_w n_h n_s}$ is also extremely large, so the formula in (A38) is expensive to evaluate directly. Fortunately, we find that
$$\big(Q_2^T Q_2 + \mu_4 I\big)^{-1} = \big(\tilde{D}_3 \otimes \tilde{D}_2 \otimes \tilde{D}_1\big)\big(\tilde{\Sigma}_3 \otimes \tilde{\Sigma}_2 \otimes \tilde{\Sigma}_1 + \mu_4 I\big)^{-1}\big(\tilde{D}_3 \otimes \tilde{D}_2 \otimes \tilde{D}_1\big)^T$$
where $\tilde{\Sigma}_i$ and $\tilde{D}_i$ $(i = 1, 2, 3)$ are the diagonal matrices and unitary matrices containing the eigenvalues and eigenvectors of $P_1^T P_1$, $P_2^T P_2$, and $\hat{P}_3^T \hat{P}_3$, respectively. Therefore, $\big(\tilde{\Sigma}_3 \otimes \tilde{\Sigma}_2 \otimes \tilde{\Sigma}_1 + \mu_4 I\big)^{-1}$ is a diagonal matrix and can be computed easily.
(4) $V_4$- and $V_5$-subproblems: From (A27), the multipliers $V_4$ and $V_5$ are updated by the following formulas:
$$V_4 = V_4 - (\mathcal{C} - \mathcal{C}_1), \qquad V_5 = V_5 - (\mathcal{C} - \mathcal{C}_2)$$
Specifically, each step of solving the $\mathcal{C}$-subproblem (19) by ADMM is summarized in Algorithm A4.
Algorithm A4 Solve $\mathcal{C}$-Subproblem (19) with ADMM.
Input: $Y$, $Z$, $P_1$, $\hat{P}_1$, $P_2$, $\hat{P}_2$, $P_3$, $\hat{P}_3$, $\mathcal{C}^{pre}$, $\rho > 0$, $\mu_4 > 0$, and $\lambda_1 > 0$.
Output: Core tensor $\mathcal{C}$.
while not converged do
    Step 1 Update $\mathcal{C}$ via (A30);
    Step 2 Update $\mathcal{C}_1$ via (A33);
    Step 3 Update $\mathcal{C}_2$ via (A38);
    Step 4 Update $V_4$ and $V_5$ via (A40);
end while
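Structurally, Algorithm A4 is an ADMM for $\ell_1$-regularized least squares. A scalarized analogue with a single data term shows the three kinds of updates — soft threshold, regularized least squares, dual step — in a few lines; the sizes and the values of `mu` and `lam` below are illustrative only:

```python
import numpy as np

def soft(x, t):
    """Element-wise soft threshold."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_l1_ls(Q, y, lam, mu=1.0, iters=1000):
    """ADMM for min_c lam*||c||_1 + ||y - Q c||_2^2, a single-data-term analogue of (19)."""
    n = Q.shape[1]
    inv = np.linalg.inv(Q.T @ Q + mu * np.eye(n))  # small-scale stand-in for the closed form (A33)
    Qty = Q.T @ y
    c = np.zeros(n)
    c1 = np.zeros(n)
    v = np.zeros(n)
    for _ in range(iters):
        c = soft(c1 + v, lam / (2 * mu))    # sparsity step, analogue of (A30)
        c1 = inv @ (Qty + mu * (c - v))     # data-fit step, analogue of (A33)
        v = v - (c - c1)                    # dual update, analogue of (A40)
    return c

# recover a sparse vector from noiseless measurements
rng = np.random.default_rng(3)
Q = rng.standard_normal((30, 10))
c_true = np.zeros(10)
c_true[2], c_true[7] = 1.5, -1.0
c_hat = admm_l1_ls(Q, Q @ c_true, lam=1e-3)
```

In the paper's tensor setting, the explicit inverse is replaced by the Kronecker eigendecomposition of (A34)/(A39), which never forms the large system matrix.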

References

  1. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36.
  2. Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2565–2586.
  3. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. Twenty-five years of pansharpening: A critical review and new developments. In Signal and Image Processing for Remote Sensing, 2nd ed.; Chen, C.H., Ed.; CRC Press: Boca Raton, FL, USA, 2012; pp. 533–548.
  4. Meng, X.; Shen, H.; Li, H.; Zhang, L.; Fu, R. Review of the pansharpening methods for remote sensing images based on the idea of meta-analysis: Practical discussion and challenges. Inf. Fusion 2019, 46, 102–113.
  5. Vivone, G.; Restaino, R.; Licciardi, G.; Dalla Mura, M.; Chanussot, J. Multiresolution analysis and component substitution techniques for hyperspectral pansharpening. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 2649–2652.
  6. Kwarteng, P.; Chavez, A. Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis. Photogramm. Eng. Remote Sens. 1989, 55, 339–348.
  7. Feng-Hua, H.; Lu-Ming, Y. Study on the hyperspectral image fusion based on the Gram–Schmidt improved algorithm. Inf. Technol. J. 2013, 12, 6694.
  8. Alparone, L.; Baronti, S.; Aiazzi, B.; Garzelli, A. Spatial methods for multispectral pansharpening: Multiresolution analysis demystified. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2563–2576.
  9. Li, H.; Manjunath, B.; Mitra, S.K. Multisensor image fusion using the wavelet transform. Graph. Model. Image Process. 1995, 57, 235–245.
  10. Liu, J. Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472.
  11. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596.
  12. Yuan, Q.; Wei, Y.; Meng, X.; Shen, H.; Zhang, L. A multiscale and multidepth convolutional neural network for remote sensing imagery pan-sharpening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 978–989.
  13. Bungert, L.; Coomes, D.A.; Ehrhardt, M.J.; Rasch, J.; Reisenhofer, R.; Schönlieb, C.B. Blind image fusion for hyperspectral imaging with the directional total variation. Inverse Probl. 2018, 34, 044003.
  14. Bajaj, C.; Wang, T. Blind Hyperspectral-Multispectral Image Fusion via Graph Laplacian Regularization. arXiv 2019, arXiv:1902.08224.
  15. Ghaderpour, E. Multichannel antileakage least-squares spectral analysis for seismic data regularization beyond aliasing. Acta Geophys. 2019, 67, 1349–1363.
  16. Miao, J.; Cao, H.; Jin, X.B.; Ma, R.; Fei, X.; Niu, L. Joint sparse regularization for dictionary learning. Cogn. Comput. 2019, 11, 697–710.
  17. He, Z.; Wang, Y.; Hu, J. Joint sparse and low-rank multitask learning with laplacian-like regularization for hyperspectral classification. Remote Sens. 2018, 10, 322.
  18. Simoes, M.; Bioucas-Dias, J.; Almeida, L.B.; Chanussot, J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens. 2014, 53, 3373–3388.
  19. Zhang, Y.; De Backer, S.; Scheunders, P. Noise-resistant wavelet-based Bayesian fusion of multispectral and hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3834–3843.
  20. Akhtar, N.; Shafait, F.; Mian, A. Bayesian sparse representation for hyperspectral image super resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3631–3640.
  21. Ballester, C.; Caselles, V.; Igual, L.; Verdera, J.; Rougé, B. A variational model for P+XS image fusion. Int. J. Comput. Vis. 2006, 69, 43–58.
  22. Ning, M.; Ze-Ming, Z.; Zhang, P.; Li-Min, L. A new variational model for panchromatic and multispectral image fusion. Acta Autom. Sin. 2013, 39, 179–187.
  23. Xing, Y.; Yang, S.; Feng, Z.; Jiao, L. Dual-Collaborative Fusion Model for Multispectral and Panchromatic Image Fusion. IEEE Trans. Geosci. Remote Sens. 2020, 1–15.
  24. Zhu, X.X.; Bamler, R. A sparse image fusion algorithm with application to pan-sharpening. IEEE Trans. Geosci. Remote Sens. 2012, 51, 2827–2836.
  25. Yang, X.; Jian, L.; Yan, B.; Liu, K.; Zhang, L.; Liu, Y. A sparse representation based pansharpening method. Future Gener. Comput. Syst. 2018, 88, 385–399.
  26. Simsek, M.; Polat, E. Performance evaluation of pan-sharpening and dictionary learning methods for sparse representation of hyperspectral super-resolution. In Signal, Image and Video Processing; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–8.
  27. Garzelli, A. A review of image fusion algorithms based on the super-resolution paradigm. Remote Sens. 2016, 8, 797.
  28. Loncan, L.; De Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simoes, M.; et al. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46.
  29. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Trans. Geosci. Remote Sens. 2011, 50, 528–537.
  30. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379.
  31. Bendoumi, M.A.; He, M.; Mei, S. Hyperspectral image resolution enhancement using high-resolution multispectral image based on spectral unmixing. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6574–6583.
  32. Berné, O.; Helens, A.; Pilleri, P.; Joblin, C. Non-negative matrix factorization pansharpening of hyperspectral data: An application to mid-infrared astronomy. In Proceedings of the 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Reykjavik, Iceland, 14–16 June 2010; pp. 1–4.
  33. Kawakami, R.; Matsushita, Y.; Wright, J.; Ben-Ezra, M.; Tai, Y.W.; Ikeuchi, K. High-resolution hyperspectral imaging via matrix factorization. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2329–2336.
  34. An, Z.; Shi, Z. Hyperspectral image fusion by multiplication of spectral constraint and NMF. Optik 2014, 125, 3150–3158.
  35. Chen, Q.; Shi, Z.; An, Z. Hyperspectral image fusion based on sparse constraint NMF. Optik 2014, 125, 832–838.
  36. Karoui, M.S.; Deville, Y.; Benhalouche, F.Z.; Boukerch, I. Hypersharpening by joint-criterion nonnegative matrix factorization. IEEE Trans. Geosci. Remote Sens. 2016, 55, 1660–1670.
  37. Lanaras, C.; Baltsavias, E.; Schindler, K. Hyperspectral super-resolution by coupled spectral unmixing. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3586–3594.
  38. Dian, R.; Fang, L.; Li, S. Hyperspectral image super-resolution via non-local sparse tensor factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 June 2017; pp. 5344–5353.
  39. Li, S.; Dian, R.; Fang, L.; Bioucas-Dias, J.M. Fusing hyperspectral and multispectral images via coupled sparse tensor factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130.
  40. Zhang, K.; Wang, M.; Yang, S.; Jiao, L. Spatial–spectral-graph-regularized low-rank tensor decomposition for multispectral and hyperspectral image fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1030–1040.
  41. Xu, T.; Huang, T.Z.; Deng, L.J.; Zhao, X.L.; Huang, J. Hyperspectral image superresolution using unidirectional total variation with tucker decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4381–4398.
  42. He, W.; Yokoya, N.; Yuan, L.; Zhao, Q. Remote sensing image reconstruction using tensor ring completion and total variation. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8998–9009.
  43. He, W.; Chen, Y.; Yokoya, N.; Li, C.; Zhao, Q. Hyperspectral super-resolution via coupled tensor ring factorization. arXiv 2020, arXiv:2001.01547.
  44. Xu, Y.; Wu, Z.; Chanussot, J.; Wei, Z. Hyperspectral images super-resolution via learning high-order coupled tensor ring representation. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 4747–4760.
  45. Attouch, H.; Bolte, J.; Redont, P.; Soubeyran, A. Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the Kurdyka-Łojasiewicz inequality. Math. Oper. Res. 2010, 35, 438–457.
  46. Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; Johns Hopkins University Press: Baltimore, MD, USA, 2012.
  47. Smith, L.N.; Elad, M. Improving dictionary learning: Multiple dictionary updates and coefficient reuse. IEEE Signal Process. Lett. 2012, 20, 79–82.
  48. Bioucas-Dias, J.M. A variable splitting augmented Lagrangian approach to linear spectral unmixing. In Proceedings of the 2009 First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Grenoble, France, 26–28 August 2009; pp. 1–4.
  49. Dereniowski, D.; Kubale, M. Cholesky factorization of matrices in parallel and ranking of graphs. In International Conference on Parallel Processing and Applied Mathematics; Springer: Berlin/Heidelberg, Germany, 2003; pp. 985–992.
Figure 1. PSNR values for different K.
Figure 2. PSNR values for different log ρ.
Figure 3. PSNR values for different log λ1.
Figure 4. PSNR values for different log λ2 and log λ3.
Figure 5. PSNR values for different λ4.
Figure 6. PSNR values for different log β and log γ.
Figure 7. PSNR values for different n_w and n_h.
Figure 8. PSNR values for different n_s.
Figure 9. Comparison of fusion results on the Pavia University dataset. (a) Reference Image; (b) LR-HSI; (c) CNMF; (d) HySure; (e) NLSTF; (f) CSTF; (g) UTV-HSISR; (h) JRLTD.
Figure 10. Comparison of fusion results on the Washington DC dataset. (a) Reference Image; (b) LR-HSI; (c) CNMF; (d) HySure; (e) NLSTF; (f) CSTF; (g) UTV-HSISR; (h) JRLTD.
Figure 11. Comparison of fusion results on the Sand Lake dataset. (a) Reference Image; (b) LR-HSI; (c) CNMF; (d) HySure; (e) NLSTF; (f) CSTF; (g) UTV-HSISR; (h) JRLTD.
Table 1. Discussion of the main parameters.

Parameters | Tuning Ranges     | Pavia University Dataset | Washington DC Dataset | Sand Lake Dataset | Suggested Ranges
K          | [1, 50]           | 20     | 25     | 20     | [20, 50]
ρ          | [10^-3, 10^0]     | 10^-1  | 10^-1  | 10^-1  | [10^-1, 10^0]
λ1         | [10^-9, 10^-1]    | 10^-6  | 10^-6  | 10^-7  | [10^-7, 10^-6]
λ2         | [10^-9, 10^-2]    | 10^-8  | 10^-7  | 10^-8  | [10^-8, 10^-7]
λ3         | [10^-9, 10^-2]    | 10^-6  | 10^-6  | 10^-5  | [10^-6, 10^-5]
λ4         | [10^-4, 10^4]     | 10^2   | 10^2   | 10^1   | [10^1, 10^2]
β          | [10^-7, 10^-1]    | 10^-1  | 10^-1  | 10^-3  | [10^-3, 10^-1]
γ          | [10^-7, 10^-1]    | 10^-1  | 10^-2  | 10^-1  | [10^-2, 10^-1]
n_w        | [260, 400]        | 360    | 340    | 360    | [340, 360]
n_h        | [260, 400]        | 380    | 380    | 380    | [380, 400]
n_s        | [3, 21]           | 15     | 15     | 18     | [15, 18]
Table 2. Quality evaluation for Pavia University dataset. (RMSE, CC, ERGAS, SAM, and DD assess spectral features; PSNR measures the signal-to-noise ratio; SSIM and UIQI assess spatial features.)

Methods    | RMSE   | CC     | ERGAS  | SAM    | DD     | PSNR    | SSIM   | UIQI
BEST       | 0      | 1      | 0      | 0      | 0      | +∞      | 1      | 1
CNMF       | 6.3889 | 0.9702 | 3.6300 | 3.7427 | 3.9586 | 32.1227 | 0.9366 | 0.9492
HySure     | 4.0104 | 0.9880 | 2.2397 | 3.3363 | 2.5411 | 36.4850 | 0.9703 | 0.9790
NLSTF      | 2.0265 | 0.9966 | 1.1602 | 2.0873 | 1.3064 | 44.4323 | 0.9706 | 0.9928
CSTF       | 1.7673 | 0.9974 | 0.9886 | 1.8391 | 1.1610 | 43.9473 | 0.9881 | 0.9942
UTV-HSISR  | 1.6881 | 0.9976 | 0.9294 | 1.7635 | 1.0460 | 44.6407 | 0.9898 | 0.9950
Proposed   | 1.6552 | 0.9977 | 0.9072 | 1.7097 | 1.0105 | 44.8388 | 0.9905 | 0.9952
Table 3. Quality evaluation for Washington DC dataset. (RMSE, CC, ERGAS, SAM, and DD assess spectral features; PSNR measures the signal-to-noise ratio; SSIM and UIQI assess spatial features.)

Methods    | RMSE   | CC     | ERGAS  | SAM    | DD     | PSNR    | SSIM   | UIQI
BEST       | 0      | 1      | 0      | 0      | 0      | +∞      | 1      | 1
CNMF       | 4.1122 | 0.9745 | 3.4984 | 3.2825 | 2.9279 | 37.5546 | 0.9585 | 0.9569
HySure     | 3.0588 | 0.9837 | 3.7441 | 3.4822 | 1.9632 | 39.7109 | 0.9778 | 0.9749
NLSTF      | 1.2778 | 0.9947 | 2.2339 | 1.7381 | 0.7840 | 48.1596 | 0.9923 | 0.9919
CSTF       | 1.0618 | 0.9950 | 2.3983 | 1.5433 | 0.6865 | 48.3925 | 0.9945 | 0.9926
UTV-HSISR  | 0.9397 | 0.9962 | 2.0301 | 1.3421 | 0.5444 | 49.7023 | 0.9961 | 0.9945
Proposed   | 0.8847 | 0.9963 | 2.0478 | 1.2454 | 0.4871 | 50.2731 | 0.9966 | 0.9946
Table 4. Quality evaluation for Sand Lake dataset. (RMSE, CC, ERGAS, SAM, and DD assess spectral features; PSNR measures the signal-to-noise ratio; SSIM and UIQI assess spatial features.)

Methods    | RMSE   | CC     | ERGAS  | SAM    | DD     | PSNR    | SSIM   | UIQI
BEST       | 0      | 1      | 0      | 0      | 0      | +∞      | 1      | 1
CNMF       | 3.5512 | 0.9752 | 1.1293 | 1.1495 | 2.4822 | 37.6549 | 0.9681 | 0.9688
HySure     | 2.9776 | 0.9935 | 1.8847 | 1.3881 | 1.9273 | 39.6945 | 0.9732 | 0.9820
NLSTF      | 2.0026 | 0.9965 | 0.6263 | 1.1535 | 1.4592 | 44.4597 | 0.9841 | 0.9828
CSTF       | 1.5303 | 0.9980 | 0.4853 | 0.9782 | 1.1343 | 44.7850 | 0.9860 | 0.9859
UTV-HSISR  | 0.8926 | 0.9994 | 0.2932 | 0.5514 | 0.5054 | 50.5421 | 0.9956 | 0.9959
Proposed   | 0.8452 | 0.9994 | 0.2786 | 0.5191 | 0.4606 | 51.0214 | 0.9962 | 0.9965
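For readers reimplementing the evaluation, three of the metrics tabulated above can be sketched in NumPy as follows. This is a generic sketch on synthetic data, not the paper's evaluation code: exact conventions (per-band averaging for PSNR, the peak value, data scaling) may differ from those used here.

```python
import numpy as np

# Hypothetical reference and fused HSI cubes (H x W x bands), values in [0, 255].
rng = np.random.default_rng(3)
ref = rng.uniform(0, 255, size=(32, 32, 10))
fused = ref + rng.normal(0, 1.5, size=ref.shape)   # small reconstruction error

# RMSE over all voxels (lower is better; BEST = 0).
rmse = np.sqrt(np.mean((ref - fused) ** 2))

# PSNR in dB against the 8-bit peak value (higher is better).
psnr = 20 * np.log10(255.0 / rmse)

# SAM: mean spectral angle, in degrees, between per-pixel spectra
# (lower is better; BEST = 0).
r = ref.reshape(-1, ref.shape[-1])
f = fused.reshape(-1, fused.shape[-1])
cos = np.sum(r * f, axis=1) / (np.linalg.norm(r, axis=1) * np.linalg.norm(f, axis=1))
sam = np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

CC, ERGAS, DD, SSIM, and UIQI follow the same pattern of per-band or per-pixel statistics aggregated over the cube.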
Cao, M.; Bao, W.; Qu, K. Hyperspectral Super-Resolution Via Joint Regularization of Low-Rank Tensor Decomposition. Remote Sens. 2021, 13, 4116. https://doi.org/10.3390/rs13204116
