Article

An Efficient Compressive Hyperspectral Imaging Algorithm Based on Sequential Computations of Alternating Least Squares

Division of Global Business and Technology, Hankuk University of Foreign Studies, Yongin 17035, Korea
Remote Sens. 2019, 11(24), 2932; https://doi.org/10.3390/rs11242932
Submission received: 23 October 2019 / Revised: 28 November 2019 / Accepted: 3 December 2019 / Published: 6 December 2019

Abstract

Hyperspectral imaging is widely used in many applications because it captures both the spatial and spectral distributions of a target scene. However, compression, or a low multilinear rank approximation of hyperspectral imaging data, is required because the massive amount of data is difficult to manipulate directly. In this paper, we propose an efficient algorithm for higher order singular value decomposition, which decomposes a tensor into a compressed tensor multiplied by orthogonal factor matrices. Specifically, we sequentially compute low rank factor matrices from Tucker-1 model optimization problems via an alternating least squares approach. Experiments with real-world hyperspectral imaging data revealed that the proposed algorithm computes the compressed tensor faster than other tensor decomposition-based compression algorithms, with no significant difference in compression accuracy.


1. Introduction

Hyperspectral imaging (HSI) provides information on the spatial and spectral distributions of a target scene simultaneously by acquiring up to hundreds of narrow, adjacent spectral band images ranging from the ultraviolet to the far-infrared part of the electromagnetic spectrum [1,2]. To do this, an imaging sensor such as a charge-coupled device collects the different wavelengths dispersed from the incoming light. The signals captured by the imaging sensor are then digitized and arranged into the pixels of a two-dimensional image $\mathbf{T} \in \mathbb{R}^{I \times \lambda}$, where $I$ denotes the size of the X-directional spatial information and $\lambda$ is the number of quantized spectra of the signals. The procedure of capturing the pixels as a single X-directional line continues until the spatial range reaches $J$, the Y-directional size of the entire target scene. Finally, HSI constructs a three-dimensional data cube $\mathcal{T} \in \mathbb{R}^{I \times J \times \lambda}$. Once the HSI data are obtained, they can be used in many applications, such as detecting and identifying objects at a distance in environmental monitoring [3] or medical image processing [4], finding anomalies in automatic visual inspection [5], or detecting and identifying targets of interest [6,7]. However, as the area of the target scene $I \times J$ or the number of quantized spectra $\lambda$ increases, the manipulation of $\mathcal{T}$ demands prohibitively large computational resources and storage space. To overcome this, efficient compression techniques must be employed as preprocessing that filters out redundancy along adjacent spectral bands or spatial dimensions, thereby reducing the size of $\mathcal{T}$.
Owing to the shape of HSI data, compression techniques are typically built on tensor decompositions, because these facilitate the simultaneous preservation and analysis of the spatial and spectral structures of the data [8]. Fang et al. used canonical polyadic decomposition (CPD) for dimension reduction of HSI data, where CPD decomposes a tensor into a sum of rank-one tensors [9,10]. De Lathauwer et al. suggested a Tucker decomposition-based low rank approximation algorithm, where Tucker decomposition factors a tensor into a smaller core tensor multiplied by a factor matrix along each mode [11]. In this study, we considered Tucker decomposition for compressing HSI data. Specifically, we focused on developing an efficient algorithm to compute a higher order singular value decomposition (HOSVD), which is a special case of Tucker decomposition with orthogonality constraints on the factor matrices. Subsequently, we applied it to compression problems with real-world HSI data.
The remainder of this paper is organized as follows. Section 2 defines the notations and preliminaries frequently used in this paper. Section 3 briefly explains the well-known algorithms for computing HOSVD for a compression. Section 4 introduces the algorithm we propose. Section 5 provides the experimental results, and Section 6 concludes the paper.

2. Notations and Preliminaries

Here, we define the symbols and terminology for simplicity of notation and presentation. We use calligraphic letters to denote tensors, e.g., $\mathcal{A}$; boldface capital letters for matrices, e.g., $\mathbf{A}$; boldface lowercase letters for vectors, e.g., $\mathbf{a}$; and lowercase letters for scalars, e.g., $a$. We define the tensor-matrix multiplication between an arbitrary $N$-th order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and a matrix $\mathbf{B} \in \mathbb{R}^{K \times I_n}$, $1 \le n \le N$, along mode-$n$ as
$$\mathcal{C} = \mathcal{A} \times_n \mathbf{B},$$
where $\mathcal{C} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times K \times I_{n+1} \times \cdots \times I_N}$ and the $(i_1 \cdots i_{n-1}, k, i_{n+1} \cdots i_N)$-th element of $\mathcal{C}$ is computed by
$$c_{i_1 \cdots i_{n-1},\, k,\, i_{n+1} \cdots i_N} = \sum_{i_n = 1}^{I_n} a_{i_1 i_2 \cdots i_N}\, b_{k, i_n},$$
for arbitrary $1 \le i_n \le I_n$ and $1 \le k \le K$. Note that the values $a_{i_1 i_2 \cdots i_N}$ and $b_{k, i_n}$ are the elements of $\mathcal{A}$ and $\mathbf{B}$, respectively [12].
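To make the mode-$n$ product concrete, the following NumPy sketch (an illustration under the definitions above, not code from the paper; the function name and test shapes are arbitrary, and modes are indexed from 0 in the code) computes $\mathcal{C} = \mathcal{A} \times_n \mathbf{B}$ by unfolding $\mathcal{A}$ along mode-$n$, multiplying by $\mathbf{B}$, and folding the result back.

```python
import numpy as np

def mode_n_product(A, B, n):
    """Compute C = A x_n B for a dense tensor A and a matrix B of shape (K, I_n).

    Mode n of the result has size K; all other modes keep their original sizes.
    """
    # Unfold along mode n, multiply, and fold back with the same ordering.
    A_n = np.moveaxis(A, n, 0).reshape(A.shape[n], -1)
    C_n = B @ A_n
    new_shape = (B.shape[0],) + tuple(np.delete(A.shape, n))
    return np.moveaxis(C_n.reshape(new_shape), 0, n)

# Usage: a 4 x 5 x 6 tensor multiplied along mode 1 (0-based) by a 3 x 5 matrix.
A = np.random.rand(4, 5, 6)
B = np.random.rand(3, 5)
print(mode_n_product(A, B, 1).shape)   # (4, 3, 6)
```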
A tensor $\mathcal{A}$ can be matricized along mode-$n$ by rearranging its elements appropriately. For example, a third order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is matricized along each mode as shown in Figure 1.
The Frobenius norm of a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is the square root of the sum of the squares of all its elements, such that
$$\|\mathcal{A}\|_F = \sqrt{\sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} a_{i_1 i_2 \cdots i_N}^2}.$$
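For illustration only (again, not the paper's code), the mode-$n$ unfolding and the Frobenius norm can be written in a few lines of NumPy; the column ordering of the unfolding below is one common convention and is an implementation choice.

```python
import numpy as np

def unfold(A, n):
    """Mode-n matricization: mode n becomes the rows; the remaining modes are
    flattened into the columns (the column ordering is an implementation choice)."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def frobenius_norm(A):
    """Square root of the sum of squares of all elements of the tensor."""
    return np.sqrt(np.sum(A ** 2))

A = np.random.rand(3, 4, 5)
print(unfold(A, 2).shape)   # (5, 12)
# The Frobenius norm does not depend on how the tensor is unfolded.
print(np.isclose(frobenius_norm(A), np.linalg.norm(unfold(A, 0))))   # True
```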

3. Related Works

Here, we briefly revisit well-known algorithms for computing the HOSVD of a tensor. HOSVD, a special case of Tucker decomposition with an orthogonality constraint, decomposes an $N$-th order tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ into factor matrices $\mathbf{U}_n \in \mathbb{R}^{I_n \times R_n}$, $1 \le n \le N$, and a core tensor $\mathcal{G} \in \mathbb{R}^{R_1 \times R_2 \times \cdots \times R_N}$ such that
$$\mathcal{T} = \mathcal{G} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \cdots \times_N \mathbf{U}_N, \qquad (1)$$
while satisfying the constraint $\mathbf{U}_n^T \mathbf{U}_n = \mathbf{I}$, where $\mathbf{I} \in \mathbb{R}^{R_n \times R_n}$ represents an identity matrix [11]. Here, a factor matrix $\mathbf{U}_n$ contains the principal components of mode-$n$, and the elements of the core tensor $\mathcal{G}$ reveal the level of interaction between the different components. Note that $R_n \le I_n$, and we denote $(R_1, R_2, \ldots, R_N)$ as the multilinear ranks of $\mathcal{T}$. The representation of $\mathcal{T}$ in the form of (1) is not unique; thus, there are many algorithms to compute (1). One of the simplest methods to obtain $\mathbf{U}_n$ and $\mathcal{G}$ is to compute the leading singular vectors of each unfolding matrix of $\mathcal{T}$ such that
$$\mathbf{T}_{(n)} = \mathbf{U}_n \boldsymbol{\Sigma}_n \mathbf{V}_n^T, \qquad (2)$$
where $\mathbf{T}_{(n)}$ indicates the mode-$n$ unfolding matrix of $\mathcal{T}$. A core tensor $\mathcal{G}$ is then obtained from $\mathcal{G} = \mathcal{T} \times_1 \mathbf{U}_1^T \times_2 \mathbf{U}_2^T \cdots \times_N \mathbf{U}_N^T$, and $\mathcal{G}$ is regarded as a compressed tensor of $\mathcal{T}$. Despite its simple computation, a single step of applying the singular value decomposition (SVD) is insufficient to obtain a good approximation of $\mathcal{T}$ in many contexts. De Lathauwer et al. proposed a more accurate algorithm for computing the HOSVD via iterative computations of the SVD, called higher order orthogonal iterations (HOOI) [13]. HOOI is designed to solve the optimization problem of finding $\mathbf{U}_n$ and $\mathcal{G}$ such that
$$\min_{\mathcal{G},\, \mathbf{U}_n,\, n=1,\ldots,N} \left\| \mathcal{T} - \mathcal{G} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N \right\|_F, \quad \text{subject to } \mathbf{U}_n^T \mathbf{U}_n = \mathbf{I}. \qquad (3)$$
Since $\left\| \mathcal{T} - \mathcal{G} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N \right\|_F^2 = \left\| \mathcal{T} \right\|_F^2 - \left\| \mathcal{G} \right\|_F^2$, the minimization problem (3) is equivalent to maximizing $\left\| \mathcal{G} \right\|_F$; thus, by definition,
$$\max_{\mathbf{U}_n,\, n=1,\ldots,N} \left\| \mathcal{T} \times_1 \mathbf{U}_1^T \cdots \times_N \mathbf{U}_N^T \right\|_F. \qquad (4)$$
Therefore, HOOI obtains each factor matrix $\mathbf{U}_k$ from the $R_k$ leading singular vectors of the mode-$k$ unfolding matrix of $\mathcal{T} \times_{n \ne k} \mathbf{U}_n^T$ while keeping the other factor matrices fixed. The iterations continue until the output converges. The procedure of HOOI is summarized in Algorithm 1 for $N = 3$. In practice, HOOI produces more accurate outputs than the algorithm based on (2); it is considered one of the most accurate algorithms for obtaining the HOSVD of a tensor. Therefore, many hyperspectral compression techniques have been developed based on HOOI. For example, Zhang et al. applied HOOI to the compression of HSI [14]. An et al. suggested a method based on [11] with adaptive multilinear rank estimation and applied it to HSI compression [15]. However, iterative computations of the SVD require huge computational resources, and the number of iterations required for convergence is difficult to predict.
To overcome these limitations, Eldén and Savas proposed a Newton–Grassmann based algorithm that guarantees quadratic convergence and fewer iteration steps than HOOI [16]. However, a single iteration of this algorithm is much more expensive owing to the computation of the Hessian. Sorber et al. introduced a quasi-Newton optimization algorithm that iteratively refines an initial guess by minimizing a nonlinear cost function [17]. Hassanzadeh and Karami proposed a block coordinate descent search based algorithm [18], which updates factor matrices initialized by compressed sensing. Instead of employing the SVD on unfolding matrices, Phan et al. proposed a fast algorithm based on a Crank–Nicholson-like technique, which has a lower computational cost per step than HOOI [19]. Lee proposed a HOSVD algorithm based on an alternating least squares method that recycles the intermediate results of the computation of one factor matrix in the computations of the others [20]. In contrast to the algorithm proposed in [20], which solves the Tucker-1 model optimization individually along each mode, the proposed algorithm considers the sub-problems of computing the factor matrices simultaneously in a single iteration. This approach enables more accurate computation of the intermediate results in each iteration.
Algorithm 1 HOOI
Input: T, G_0, U_1, U_2, U_3, ϵ
Output: U_1, U_2, U_3, G_l
 1: for l = 1, 2, 3, … do
 2:   for n = 1, 2, 3 do
 3:     k = [1:n−1, n+1:3]
 4:     S = T ×_k U_k^T
 5:     U_n = R_n leading left singular vectors of S_(n)
 6:   end for
 7:   G_l = T ×_{n=1,2,3} U_n^T
 8:   if ‖G_l − G_{l−1}‖_F / ‖G_{l−1}‖_F ≤ ϵ then
 9:     break
 10:  end if
 11: end for
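For reference, the sketch below gives a compact NumPy version of Algorithm 1. It initializes the factor matrices with the truncated SVD of each unfolding, as in (2), and then performs the HOOI updates; it reuses the `unfold` and `mode_n_product` helpers sketched in Section 2 and is a simplified illustration, not the Tensorlab implementation [24] used in the experiments.

```python
import numpy as np

def multi_mode_product(T, mats):
    """Multiply the tensor T by one matrix along every mode."""
    for n, M in enumerate(mats):
        T = mode_n_product(T, M, n)
    return T

def hooi(T, ranks, eps=1e-6, max_iter=100):
    """Higher order orthogonal iterations (Algorithm 1) for a third order tensor T.

    ranks = (R1, R2, R3); returns the orthogonal factors U and the core tensor G.
    """
    N = T.ndim
    # Initialization as in (2): leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :ranks[n]] for n in range(N)]
    G_prev = multi_mode_product(T, [u.T for u in U])
    for _ in range(max_iter):
        for n in range(N):
            # Project T on every mode except n, then keep the R_n leading singular vectors.
            S = T
            for m in range(N):
                if m != n:
                    S = mode_n_product(S, U[m].T, m)
            U[n] = np.linalg.svd(unfold(S, n), full_matrices=False)[0][:, :ranks[n]]
        G = multi_mode_product(T, [u.T for u in U])
        if np.linalg.norm(G - G_prev) / np.linalg.norm(G_prev) <= eps:
            break
        G_prev = G
    return U, G
```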

4. Sequential Computations of Alternating Least Squares for Efficient HOSVD

In the proposed algorithm, we sequentially compute low multilinear rank factor matrices from Tucker-1 model optimization problems via an alternating least squares approach. For simplicity, and to match the shape of HSI data, we only consider third order tensors in this study. However, the extension of the algorithm to tensors of any order is straightforward and incurs no loss of generality.
Assume that we have a third order tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$. Our goal is to decompose $\mathcal{T}$ as
$$\mathcal{T} \approx \mathcal{G} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3, \qquad (5)$$
where $\mathbf{U}_n \in \mathbb{R}^{I_n \times R_n}$ represents an orthogonal factor matrix along mode-$n$, $R_n$ is the appropriate truncation level or $n$-th multilinear rank, and $\mathcal{G} \in \mathbb{R}^{R_1 \times R_2 \times R_3}$ denotes the core tensor. Then, we rewrite the optimization problem of finding $\mathbf{U}_n$, $n = 1, 2, 3$, and $\mathcal{G}$ in (3) as
$$\min_{\mathcal{G},\, \mathbf{U}_n,\, n=1,2,3} \left\| \mathcal{T} - \mathcal{G} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3 \right\|_F^2 \quad \text{subject to } \mathbf{U}_n^T \mathbf{U}_n = \mathbf{I}. \qquad (6)$$
Before starting the explanation, we note that the order of computation of the factor matrices does not need to be fixed; however, for convenience, we present the procedure in the mode order $(1, 2, 3)$. Let $\mathcal{S}_1 = \mathcal{G} \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3$. Then, the problem of finding $\mathbf{U}_1$ in (6) is equivalent to the Tucker-1 model optimization of $\mathcal{T}$, such that
$$\min_{\mathbf{U}_1, \mathcal{S}_1} \left\| \mathcal{T} - \mathcal{S}_1 \times_1 \mathbf{U}_1 \right\|_F^2 \quad \text{subject to } \mathbf{U}_1^T \mathbf{U}_1 = \mathbf{I}, \qquad (7)$$
where $\mathbf{I} \in \mathbb{R}^{R_1 \times R_1}$ denotes an identity matrix. Here, the tensor $\mathcal{S}_1$ is a Tucker-2 model, but it can be used to formulate another Tucker-1 optimization problem for finding $\mathbf{U}_2$, such that
$$\min_{\mathbf{U}_2, \mathcal{S}_2} \left\| \mathcal{S}_1 - \mathcal{S}_2 \times_2 \mathbf{U}_2 \right\|_F^2 \quad \text{subject to } \mathbf{U}_2^T \mathbf{U}_2 = \mathbf{I}, \qquad (8)$$
where $\mathcal{S}_2 = \mathcal{G} \times_3 \mathbf{U}_3$. Similarly, to find $\mathbf{U}_3$, we use the Tucker-1 model optimization problem
$$\min_{\mathbf{U}_3, \mathcal{G}} \left\| \mathcal{S}_2 - \mathcal{G} \times_3 \mathbf{U}_3 \right\|_F^2 \quad \text{subject to } \mathbf{U}_3^T \mathbf{U}_3 = \mathbf{I}. \qquad (9)$$
We illustrate the sequential procedure for the Tucker-1 model optimization problems in Figure 2. Let us consider the optimization problems (7)–(9) simultaneously. Then, the optimal factor matrices of the tensor $\mathcal{T}$ are computed by minimizing the cost function $C(\mathbf{U}_k, k = 1, 2, 3)$, defined as
$$C(\mathbf{U}_k, k=1,2,3) = \left\| \mathcal{T} - \mathcal{S}_1 \times_1 \mathbf{U}_1 \right\|_F^2 + \left\| \mathcal{S}_1 - \mathcal{S}_2 \times_2 \mathbf{U}_2 \right\|_F^2 + \left\| \mathcal{S}_2 - \mathcal{G} \times_3 \mathbf{U}_3 \right\|_F^2 + \sum_{n=1,2,3} \lambda_n\, \mathrm{tr}\!\left( \mathbf{U}_n^T \mathbf{U}_n - \mathbf{I} \right), \qquad (10)$$
where the parameters $\lambda_n$ represent Lagrangian multipliers and the function $\mathrm{tr}(\mathbf{A})$ computes the trace of an arbitrary matrix $\mathbf{A}$. Because the cost function $C(\mathbf{U}_k, k=1,2,3)$ in (10) has many unknown variables, an alternating least squares method, which optimizes one variable while keeping the others fixed, is a suitable approach. First, by taking the derivative of $C(\mathbf{U}_k, k=1,2,3)$ with respect to $\mathbf{U}_1$ while regarding the other variables as constants, and matricizing along mode-1, we obtain
$$\frac{\partial C(\mathbf{U}_k, k=1,2,3)}{\partial \mathbf{U}_1} = -2\left( \mathbf{T}_{(1)} - \mathbf{U}_1 \mathbf{S}_{1(1)} \right) \mathbf{S}_{1(1)}^T + 2 \lambda_1 \mathbf{U}_1 = \mathbf{0}, \qquad (11)$$
where $\mathbf{S}_{1(1)}$ is the unfolding matrix of $\mathcal{S}_1$ along mode-1. Thus, from (11), we can compute $\mathbf{U}_1$ as follows:
$$\mathbf{U}_1 = \mathbf{T}_{(1)} \mathbf{S}_{1(1)}^T \left( \lambda_1 \mathbf{I} + \mathbf{S}_{1(1)} \mathbf{S}_{1(1)}^T \right)^{-1}. \qquad (12)$$
After $\mathbf{U}_1$ is obtained, the next step is to compute $\mathcal{S}_1$ while keeping $\mathbf{U}_1$ and the remaining variables fixed, so that the intermediate result can be recycled in the subsequent computations. After updating $\mathbf{U}_1$ in (12) and taking the derivative of (10) with respect to $\mathcal{S}_1$, we can compute $\mathcal{S}_1$ such that
$$\mathcal{S}_1 = \left( \mathcal{T} \times_1 \mathbf{U}_1^T + \mathcal{S}_2 \times_2 \mathbf{U}_2 \right) \times_1 \left( \mathbf{I} + \mathbf{U}_1^T \mathbf{U}_1 \right)^{-1}. \qquad (13)$$
Because the orthogonality constraint must be satisfied, we reorthogonalize $\mathbf{U}_1$ by simply applying a QR decomposition.
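As a concrete illustration of the mode-1 pass described above (a sketch under the paper's notation, reusing the helpers from the Section 2 sketches; it is not the author's implementation), the update of $\mathbf{U}_1$ in (12), the subsequent update of $\mathcal{S}_1$, and the reorthogonalization can be written as follows.

```python
import numpy as np

def update_U1_S1(T, S1, S2, U1, U2, lam1=0.0):
    """One mode-1 pass: the U_1 update in (12), the S_1 update, and reorthogonalization.

    Shapes (third order case): T is (I1, I2, I3), S1 is (R1, I2, I3),
    S2 is (R1, R2, I3), U1 is (I1, R1), U2 is (I2, R2).
    """
    T1, S1_1 = unfold(T, 0), unfold(S1, 0)
    R1 = S1_1.shape[0]
    # U1 = T(1) S1(1)^T (lam1*I + S1(1) S1(1)^T)^{-1}, computed as a linear solve.
    U1 = np.linalg.solve(lam1 * np.eye(R1) + S1_1 @ S1_1.T, S1_1 @ T1.T).T
    # S1 = (T x_1 U1^T + S2 x_2 U2) x_1 (I + U1^T U1)^{-1}
    rhs = mode_n_product(T, U1.T, 0) + mode_n_product(S2, U2, 1)
    S1 = mode_n_product(rhs, np.linalg.inv(np.eye(R1) + U1.T @ U1), 0)
    # Reorthogonalize U1 with a thin QR decomposition.
    U1, _ = np.linalg.qr(U1)
    return U1, S1
```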
Next, we find $\mathbf{U}_2$. After updating $\mathcal{S}_1$ and $\mathbf{U}_1$, we take the derivative of $C(\mathbf{U}_k, k=1,2,3)$ with respect to $\mathbf{U}_2$ and rearrange the terms similarly to the procedure for computing $\mathbf{U}_1$. Then, we obtain
$$\mathbf{U}_2 = \mathbf{S}_{1(2)} \mathbf{S}_{2(2)}^T \left( \lambda_2 \mathbf{I} + \mathbf{S}_{2(2)} \mathbf{S}_{2(2)}^T \right)^{-1},$$
where $\mathbf{S}_{1(2)}$ and $\mathbf{S}_{2(2)}$ are the unfolding matrices of $\mathcal{S}_1$ and $\mathcal{S}_2$ along mode-2, respectively. Then, we can compute $\mathcal{S}_2$ with $\mathbf{U}_2$ fixed such that
$$\mathcal{S}_2 = \left( \mathcal{S}_1 \times_2 \mathbf{U}_2^T + \mathcal{G} \times_3 \mathbf{U}_3 \right) \times_2 \left( \mathbf{I} + \mathbf{U}_2^T \mathbf{U}_2 \right)^{-1}.$$
Finally, $\mathbf{U}_3$ is obtained by solving
$$\mathbf{U}_3 = \mathbf{S}_{2(3)} \mathbf{G}_{(3)}^T \left( \lambda_3 \mathbf{I} + \mathbf{G}_{(3)} \mathbf{G}_{(3)}^T \right)^{-1},$$
where $\mathbf{S}_{2(3)}$ and $\mathbf{G}_{(3)}$ are the unfolding matrices of the tensors $\mathcal{S}_2$ and $\mathcal{G}$ along mode-3, respectively.
Algorithm 2 summarizes the procedure explained in this section. Note that the function $\mathbf{A}_{(i)} = \mathrm{unfolding}(\mathcal{A}, i)$ in steps 6, 10, and 14 returns the unfolding matrix $\mathbf{A}_{(i)}$ of an arbitrary tensor $\mathcal{A}$ along mode-$i$, and the function $\mathbf{C} = \mathrm{Reorth}(\mathbf{B})$ in steps 9, 13, and 16 returns the reorthogonalized matrix $\mathbf{C}$ obtained from $\mathbf{B}$ by applying a QR decomposition. If we assume that the size of the input tensor is $(I, I, I)$ and its initial multilinear rank is $(R, R, R)$, then the most expensive step in Algorithm 2 is the computation of $\mathbf{T}_{(i)} \mathbf{S}_{i(i)}^T$ in step 7, whose computational complexity is approximately $O(I^3 R)$ operations, similar to that of the other HOSVD algorithms. Additionally, unlike HOOI, the computations of the factor matrices are no longer independent of one another, because intermediate tensors and factor matrices are reused when a specific factor matrix is computed. Thus, we expect the proposed algorithm to achieve better convergence to the solution.
Algorithm 2 HOSVD_ALS
Input: T, G_0, U_1, U_2, U_3, λ_1, λ_2, λ_3, ϵ
Output: U_1, U_2, U_3, G_l
 1: for l = 1, 2, 3, … do
 2:   for n = 1 to 3 do
 3:     Rearrange the order [i, j, k] such that [i, j, k] = [n, 1:n−1, n+1:3]
 4:     S_j = G_{l−1} ×_k U_k
 5:     S_i = S_j ×_j U_j
 6:     T_(i) = unfolding(T, i), and S_i(i) = unfolding(S_i, i)
 7:     U_i = T_(i) S_i(i)^T (λ_i I + S_i(i) S_i(i)^T)^{−1}
 8:     S_i = (T ×_i U_i^T + S_j ×_j U_j) ×_i (I + U_i^T U_i)^{−1}
 9:     U_i = Reorth(U_i)
 10:    S_i(j) = unfolding(S_i, j), and S_j(j) = unfolding(S_j, j)
 11:    U_j = S_i(j) S_j(j)^T (λ_j I + S_j(j) S_j(j)^T)^{−1}
 12:    S_j = (S_i ×_j U_j^T + G_{l−1} ×_k U_k) ×_j (I + U_j^T U_j)^{−1}
 13:    U_j = Reorth(U_j)
 14:    S_j(k) = unfolding(S_j, k), and G_(k) = unfolding(G_{l−1}, k)
 15:    U_k = S_j(k) G_(k)^T (λ_k I + G_(k) G_(k)^T)^{−1}
 16:    U_k = Reorth(U_k)
 17:    G_l = T ×_{m=1,2,3} U_m^T
 18:    if ‖G_l − G_{l−1}‖_F / ‖G_{l−1}‖_F ≤ ϵ then
 19:      break
 20:    end if
 21:  end for
 22: end for
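The following NumPy sketch mirrors Algorithm 2 end to end. It is only an illustrative reimplementation under stated assumptions: it reuses the `unfold`, `mode_n_product`, and `multi_mode_product` helpers from the earlier sketches, initializes the factors with the truncated SVD of each unfolding as in (2), and uses one cyclic choice for the mode ordering $[i, j, k]$, which, as noted above, need not be fixed.

```python
import numpy as np

def hosvd_als(T, ranks, lams=(0.0, 0.0, 0.0), eps=1e-6, max_iter=100):
    """Sketch of Algorithm 2 (HOSVD_ALS) for a third order tensor T.

    ranks = (R1, R2, R3); lams holds the Lagrangian multipliers lambda_1..lambda_3.
    """
    # Initialization as in (2): truncated SVD of each unfolding.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :ranks[n]] for n in range(3)]
    G = multi_mode_product(T, [u.T for u in U])

    def ridge_solve(A, B, lam, r):
        # Returns A B^T (lam*I + B B^T)^{-1}, computed as a linear solve.
        return np.linalg.solve(lam * np.eye(r) + B @ B.T, B @ A.T).T

    for _ in range(max_iter):
        for n in range(3):
            G_prev = G
            i, j, k = n, (n + 1) % 3, (n + 2) % 3   # one possible mode ordering
            S_j = mode_n_product(G_prev, U[k], k)                                     # step 4
            S_i = mode_n_product(S_j, U[j], j)                                        # step 5
            U[i] = ridge_solve(unfold(T, i), unfold(S_i, i), lams[i], ranks[i])       # steps 6-7
            S_i = mode_n_product(mode_n_product(T, U[i].T, i) + mode_n_product(S_j, U[j], j),
                                 np.linalg.inv(np.eye(ranks[i]) + U[i].T @ U[i]), i)  # step 8
            U[i] = np.linalg.qr(U[i])[0]                                              # step 9
            U[j] = ridge_solve(unfold(S_i, j), unfold(S_j, j), lams[j], ranks[j])     # steps 10-11
            S_j = mode_n_product(mode_n_product(S_i, U[j].T, j) + mode_n_product(G_prev, U[k], k),
                                 np.linalg.inv(np.eye(ranks[j]) + U[j].T @ U[j]), j)  # step 12
            U[j] = np.linalg.qr(U[j])[0]                                              # step 13
            U[k] = ridge_solve(unfold(S_j, k), unfold(G_prev, k), lams[k], ranks[k])  # steps 14-15
            U[k] = np.linalg.qr(U[k])[0]                                              # step 16
            G = multi_mode_product(T, [u.T for u in U])                               # step 17
            if np.linalg.norm(G - G_prev) / np.linalg.norm(G_prev) <= eps:            # step 18
                return U, G
    return U, G
```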

5. Experiments

We begin this section by introducing the experimental settings. Then, we compare the performance of Algorithm 2 with that of other well-known HOSVD algorithms through an application to real-world HSI data.

5.1. Experimental Settings

Our experiments were performed on an Intel i9 processor with 32 GB of memory. The software for the experiments was developed in MATLAB version 9.6.0.1135713. For the real-world HSI data, we used three datasets that are widely used for testing the classification or compression performance of HSI algorithms. Details of these datasets are as follows.
Jasper Ridge: The Jasper Ridge dataset was captured by the airborne visible/infrared imaging spectrometer (AVIRIS) sensor of the Jet Propulsion Laboratory. The spatial size of the dataset is 100 × 100 with 224 channels, and its quantized spectral range is from 380 nm to 2500 nm. There are four endmembers in this dataset: “road,” “soil,” “water,” and “tree.” Detailed information regarding the Jasper Ridge dataset is provided in [21].
Indian Pines: The Indian Pines dataset was captured by the AVIRIS sensor over the Indian Pines test site in northwestern Indiana. The scene comprises agriculture, forest, and natural perennial vegetation. The spatial size of the dataset used in the experiments was 145 × 145 with 224 channels ranging from 400 nm to 2500 nm. Detailed information on the Indian Pines dataset is provided in [22].
Urban: The Urban dataset was recorded by the hyperspectral digital image collection experiment (HYDICE) sensor over an urban area at Copperas Cove, Texas. The spatial size of the dataset is 307 × 307 with 221 channels, and its quantized spectral range is from 400 nm to 2500 nm. There are four endmembers to be classified: “asphalt,” “grass,” “tree,” and “roof.” More information about the Urban dataset is provided in [21].
Figure 3 depicts the average intensities of the pixels over the spectrum. The datasets are represented as tensors; for example, the Jasper Ridge dataset is denoted by $\mathcal{T} \in \mathbb{R}^{100 \times 100 \times 224}$.
To compare the performance of the proposed algorithm with previous algorithms, we evaluated the relative errors and execution times of HOOI; a Crank–Nicholson-like algorithm for HOSVD (henceforth CrNc) [19]; a quasi-Newton-based nonlinear least squares algorithm (henceforth HOSVD_NLS) [17]; and a method based on block coordinate descent search [18], which is a slight modification of the algorithm described in [23] (henceforth BCD-CD). Here, the relative error, denoted as relerr, is defined as
$$\mathrm{relerr} = \frac{\left\| \mathcal{T}_{gt} - \mathcal{T}_{output} \right\|_F}{\left\| \mathcal{T}_{gt} \right\|_F},$$
where $\mathcal{T}_{gt}$ and $\mathcal{T}_{output}$ are the tensors before and after applying the compression algorithms, respectively. The programs implementing the HOOI and HOSVD_NLS algorithms are from [24]. Note that the stopping criterion is identical for all the algorithms in the comparison, and it is defined as
$$\frac{\left\| \mathcal{G}_{n+1} - \mathcal{G}_{n} \right\|_F}{\left\| \mathcal{G}_{n} \right\|_F} \le \epsilon,$$
where $\epsilon$ indicates a user-defined threshold. The maximum number of iterations is 100. To measure the execution time of each algorithm, we repeated each experiment 10 times and averaged the results.
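The relative error and the timing protocol above can be reproduced with a short script. The sketch below is illustrative only: the random stand-in tensor merely has the same shape as the Jasper Ridge data, `hosvd_als` and `multi_mode_product` refer to the earlier sketches, and the rank (48, 54, 3) is the γ = 0.1 Jasper Ridge entry of Table 1.

```python
import numpy as np
import time

def relative_error(T_gt, T_output):
    """relerr = ||T_gt - T_output||_F / ||T_gt||_F."""
    return np.linalg.norm(T_gt - T_output) / np.linalg.norm(T_gt)

# Illustrative run on a random stand-in with the Jasper Ridge shape (100 x 100 x 224),
# compressed to the gamma = 0.1 multilinear rank (48, 54, 3) from Table 1.
T_gt = np.random.rand(100, 100, 224)
start = time.perf_counter()
U, G = hosvd_als(T_gt, ranks=(48, 54, 3), eps=1e-6)
elapsed = time.perf_counter() - start
T_output = multi_mode_product(G, U)   # reconstruct the tensor from the core and factors
print(relative_error(T_gt, T_output), elapsed)
```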
The initial factor matrices with low multilinear ranks for all algorithms are computed from the SVD-based HOSVD algorithm described in (2) (henceforth, HOSVD), where the truncation level $R_n$ in mode-$n$, $1 \le n \le 3$, satisfies the condition
$$\left\| \mathbf{T}_{(n)} \right\|_F^2 - \sum_{i=1}^{R_n} \sigma_i \le \gamma,$$
where $\sigma_i$ represents the $i$-th largest singular value of the mode-$n$ unfolding matrix $\mathbf{T}_{(n)}$ of $\mathcal{T}$, and $\gamma$ is a user-defined threshold that adjusts the compression rate of the spectral and spatial dimensions of $\mathcal{T}$. Specifically, we set $\gamma$ to 0.05, 0.1, 0.2, and 0.3. Table 1 lists the low multilinear ranks in each case after applying HOSVD. Note that the sizes of the core tensors are identical for all algorithms.
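The exact typeset form of the truncation rule above is difficult to recover here, so the sketch below encodes only one plausible reading of it: for each mode, keep the smallest $R_n$ such that the relative singular-value energy discarded from the mode-$n$ unfolding is at most $\gamma$. This is an assumption made for illustration, not necessarily the exact rule used to produce Table 1.

```python
import numpy as np

def select_ranks(T, gamma):
    """Pick a multilinear rank per mode from the singular values of each unfolding.

    Assumed rule (one plausible reading of the condition above): keep the smallest
    R_n such that the discarded relative energy of the mode-n unfolding is <= gamma.
    """
    ranks = []
    for n in range(T.ndim):
        s = np.linalg.svd(unfold(T, n), compute_uv=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = min(int(np.searchsorted(energy, 1.0 - gamma)) + 1, len(s))
        ranks.append(r)
    return tuple(ranks)

# A larger gamma discards more energy and therefore yields smaller multilinear ranks.
T = np.random.rand(50, 50, 60)
print(select_ranks(T, 0.05), select_ranks(T, 0.3))
```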

5.2. Experimental Results

The first experiment measured the performance of the algorithms when the user-defined threshold $\epsilon$ in the stopping criterion was $1.0 \times 10^{-6}$. We measured the relerrs and execution times while changing the value of $\gamma$ to 0.05, 0.1, 0.2, and 0.3. When $\gamma = 0.3$, we set the Lagrangian multipliers $\lambda_1 = \lambda_2 = \lambda_3 = 1.0 \times 10^{-8}$ in (10), and set them to 0 for the other values of $\gamma$. The experimental results are given in Table 2, Table 3 and Table 4. For an unknown reason, HOOI occasionally failed to converge to the solutions; for example, when $\gamma = 0.05$, as shown in Table 2. From these results, we can see that the overall execution speed of the proposed algorithm is the fastest, with fewer iterations, while its relative errors are very close to those of HOOI, which is the most accurate algorithm in this experiment. CrNc converged to the solutions with the fewest iterations and occasionally produced its outputs in the fastest time; however, its relative errors are less accurate than those of the other algorithms. HOSVD_NLS produced relative errors closest to those of HOOI; however, its execution time was the slowest among all algorithms. The overall performance of BCD-CD appears to be unsatisfactory in all cases, especially with regard to convergence speed and relative errors. Except for the one case presented in Table 3, BCD-CD failed to converge to the solutions within the predefined maximum number of iterations.
The second experiment measured the performance of the algorithms when $\epsilon$ in the stopping criterion was $1.0 \times 10^{-8}$. The results of this experiment are provided in Table 5, Table 6 and Table 7. Similar to the results of the first experiment, the proposed algorithm computes the compressed tensor more efficiently than the other algorithms.
The third experiment measured the performance of the algorithms under noisy conditions. We added white Gaussian noise with different signal-to-noise ratios to the HSI data such that
$$\mathcal{T}_{noise} = \mathcal{T}_{gt} + 10^{-\sigma/20}\, \frac{\left\| \mathcal{T}_{gt} \right\|_F}{\left\| \mathcal{N} \right\|_F}\, \mathcal{N},$$
where $\sigma$ represents the signal-to-noise ratio and $\mathcal{N}$ is a randomly generated tensor. We set $\sigma$ = +60 dB, +30 dB, and +20 dB, respectively. Table 8 summarizes the outputs of the experiment; similar to the first experiment, it shows that the most accurate algorithm in many cases is HOOI. However, the proposed algorithm produces outputs with relative errors very similar to those of HOOI, while maintaining robust convergence to the solutions. In some cases, when Indian Pines and Urban are used as input data, the proposed algorithm produces even smaller relative errors than HOOI. Note that the numbers in parentheses represent the average number of iterations required for convergence under noisy conditions. Additionally, Figure 4 shows the first channel of the compressed data when the Jasper Ridge dataset is used. There are no significant differences except in the case of BCD-CD.
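The additive-noise model above is straightforward to reproduce; the sketch below scales a white Gaussian tensor so that the resulting signal-to-noise ratio equals σ dB (illustrative only; `T_gt` stands for any of the dataset tensors).

```python
import numpy as np

def add_noise(T_gt, snr_db, rng=None):
    """Add white Gaussian noise scaled so that the signal-to-noise ratio is snr_db dB."""
    rng = np.random.default_rng(0) if rng is None else rng
    N = rng.standard_normal(T_gt.shape)
    scale = 10.0 ** (-snr_db / 20.0) * np.linalg.norm(T_gt) / np.linalg.norm(N)
    return T_gt + scale * N

T_gt = np.random.rand(100, 100, 224)
T_noise = add_noise(T_gt, 30)   # the +30 dB case
# Sanity check: the realized SNR should be close to 30 dB.
print(20 * np.log10(np.linalg.norm(T_gt) / np.linalg.norm(T_noise - T_gt)))
```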
The last experiment examined how the algorithms converge to the solutions. In this experiment, the algorithms were forced to continue for 100 iterations regardless of the stopping criterion. Figure 5, Figure 6, Figure 7 and Figure 8 depict the convergence histories for $\gamma = 0.05$, $\gamma = 0.1$, $\gamma = 0.2$, and $\gamma = 0.3$, respectively. Additionally, Table 9 shows the relative errors of the outputs after the iterations reached 100 steps. In this experiment, even though the overall shapes of the convergence histories of CrNc appear better than those of the others, the outputs of CrNc converge to relatively inaccurate local minima, as shown in Table 9. The convergence speed of HOSVD_NLS is very slow in this experiment, but no meaningful differences from HOOI in terms of accuracy were observed. Furthermore, HOOI occasionally produces an unstable convergence history. In all cases, Algorithm 2 produces robust outputs with stable convergence histories and with accuracy close to that of HOOI.

6. Conclusions

Hyperspectral imaging is widely used, as it enables the simultaneous manipulation of the spatial and spectral distribution information of a target scene. Owing to the massive amount of information, tensor compression techniques such as the higher order singular value decomposition must be applied. In this paper, we proposed an efficient method for computing the higher order singular value decomposition by using sequential computations of an alternating least squares approach. Experiments on real-world hyperspectral imaging datasets highlight the faster computation of the proposed algorithm, with no meaningful difference in accuracy compared to higher order orthogonal iterations, which is generally regarded as the most accurate algorithm for computing the higher order singular value decomposition.

Funding

This work was supported by Hankuk University of Foreign Studies Research Fund and the National Research Foundation of Korea (NRF) grant funded by the Korean government (2018R1C1B5085022).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379.
  2. Grahn, H.; Geladi, P. Techniques and Applications of Hyperspectral Image Analysis; John Wiley & Sons: New York, NY, USA, 2007.
  3. Transon, J.; Andrimont, R.; Maugnard, A. Survey of Hyperspectral Earth Observation Applications from Space in the Sentinel-2 Context. Remote Sens. 2018, 10, 157.
  4. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 1–23.
  5. Lee, H.; Yang, C.; Kim, M.; Lim, J.; Cho, B.; Lefcourt, A.; Chao, K.; Everard, C. A Simple Multispectral Imaging Algorithm for Detection of Defects on Red Delicious Apples. J. Biosyst. Eng. 2014, 39, 142–149.
  6. Nasrabadi, N.M. Hyperspectral Target Detection. IEEE Signal Process. Mag. 2013, 31, 34–44.
  7. Poojary, N.; D’Souza, H.; Puttaswamy, M.R.; Kumar, G.H. Automatic target detection in hyperspectral image processing: A review of algorithms. In Proceedings of the 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Zhangjiajie, China, 15–17 August 2015; pp. 1991–1996.
  8. Renard, N.; Bourennane, S. Dimensionality Reduction Based on Tensor Modeling for Classification Methods. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1123–1131.
  9. Fang, L.; He, N.; Lin, H. CP tensor-based compression of hyperspectral images. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2017, 34, 252–258.
  10. Yan, R.; Peng, J.; Wen, D.; Ma, D. Denoising and dimensionality reduction based on PARAFAC decomposition for hyperspectral imaging. Proc. SPIE Opt. Sens. Image. Tech. Appl. 2018, 10846, 538–549.
  11. De Lathauwer, L.; De Moor, B.; Vandewalle, J. A Multilinear Singular Value Decomposition. SIAM J. Matrix Anal. Appl. 2000, 21, 1253–1278.
  12. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 51, 455–500.
  13. De Lathauwer, L.; De Moor, B.; Vandewalle, J. On the Best Rank-1 and Rank-(R1, R2, …, RN) Approximation of Higher-Order Tensors. SIAM J. Matrix Anal. Appl. 2000, 21, 1324–1342.
  14. Zhang, L.; Zhang, L.; Tao, D.; Huang, X.; Du, B. Compression of hyperspectral remote sensing images by tensor approach. Neurocomputing 2015, 147, 358–363.
  15. An, J.; Lei, J.; Song, Y.; Zhang, X.; Guo, J. Tensor Based Multiscale Low Rank Decomposition for Hyperspectral Images Dimensionality Reduction. Remote Sens. 2019, 11, 1485.
  16. Eldén, L.; Savas, B. A Newton–Grassmann Method for Computing the Best Multilinear Rank-(r1, r2, r3) Approximation of a Tensor. SIAM J. Matrix Anal. Appl. 2009, 31, 248–271.
  17. Sorber, L.; Van Barel, M.; De Lathauwer, L. Structured Data Fusion. IEEE J. Sel. Top. Signal Process. 2015, 9, 586–600.
  18. Hassanzadeh, S.; Karami, A. Compression and noise reduction of hyperspectral images using non-negative tensor decomposition and compressed sensing. Eur. J. Remote Sens. 2016, 49, 587–598.
  19. Phan, A.; Cichocki, A.; Tichavsky, P. On Fast Algorithms for Orthogonal Tucker Decomposition. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 6766–6770.
  20. Lee, G. Fast computation of the compressive hyperspectral imaging by using alternating least squares methods. Signal Process. Image Commun. 2018, 60, 100–106.
  21. Zhu, F.; Wang, Y.; Xiang, S.; Fan, B.; Pan, C. Structured Sparse Method for Hyperspectral Unmixing. ISPRS J. Photogramm. Remote Sens. 2014, 88, 101–118.
  22. Baumgardner, M.F.; Biehl, L.L.; Landgrebe, D.A. 220 Band AVIRIS Hyperspectral Image Data Set: June 12, 1992 Indian Pine Test Site 3. Purdue Univ. Res. Repos. 2015, 10, R7RX991C.
  23. Xu, Y.; Yin, W. A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion. SIAM J. Imaging Sci. 2013, 6, 1758–1789.
  24. Vervliet, N.; Debals, O.; Sorber, L.; Van Barel, M.; De Lathauwer, L. Tensorlab 3.0. Available online: https://www.tensorlab.net/ (accessed on 10 March 2016).
Figure 1. Example of third order tensor matricization along each mode.
Figure 2. Sequential computations of factor matrices from Tucker-1 model optimization problems.
Figure 3. Real-world HSI dataset for performance comparison of the proposed algorithm. Images display the average intensities of pixels throughout the spectrum.
Figure 4. The first channel image of the compressed HSI when the Jasper Ridge dataset was used; σ = +20 dB, γ = 0.3, and ϵ = 1.0 × 10^−6.
Figure 5. Convergence history of the algorithms when γ = 0.05 .
Figure 6. Convergence history of the algorithms when γ = 0.1 .
Figure 7. Convergence history of the algorithms when γ = 0.2 .
Figure 8. Convergence history of the algorithms when γ = 0.3 .
Table 1. Multilinear ranks of the initial factor matrices computed from HOSVD when γ = 0.05, 0.1, 0.2, and 0.3, respectively.

Dataset | γ = 0.05 | γ = 0.1 | γ = 0.2 | γ = 0.3
Jasper Ridge | (71, 75, 5) | (48, 54, 3) | (28, 32, 2) | (17, 20, 2)
Indian Pines | (97, 117, 8) | (44, 51, 2) | (11, 10, 2) | (3, 3, 2)
Urban | (292, 227, 13) | (265, 165, 5) | (186, 93, 3) | (108, 52, 3)
Table 2. Experimental results of the algorithms when the Jasper Ridge dataset was used and ϵ = 1.0 × 10^−6.

γ | Metric | HOOI | CrNc | HOSVD_NLS | BCD-CD | Algorithm 2
0.05 | iteration | 100 | 2 | 29 | 100 | 2
0.05 | relerr | 0.0291179 | 0.0291398 | 0.0291181 | 0.155959 | 0.0291264
0.05 | time (s) | 8.1602 | 0.2674 | 8.3305 | 4.6889 | 0.1905
0.1 | iteration | 5 | 2 | 26 | 100 | 2
0.1 | relerr | 0.0591755 | 0.0592419 | 0.0591757 | 0.152589 | 0.0591861
0.1 | time (s) | 0.3509 | 0.2197 | 4.4906 | 3.4722 | 0.1401
0.2 | iteration | 8 | 4 | 68 | 100 | 6
0.2 | relerr | 0.118573 | 0.118831 | 0.118572 | 0.161462 | 0.118703
0.2 | time (s) | 0.4058 | 0.2921 | 6.3683 | 2.4399 | 0.2031
0.3 | iteration | 6 | 6 | 23 | 100 | 6
0.3 | relerr | 0.141258 | 0.141271 | 0.141258 | 0.179646 | 0.141280
0.3 | time (s) | 0.2339 | 0.2978 | 1.9192 | 3.4341 | 0.1442
Table 3. Experimental results of the algorithms when the Indian Pines dataset was used and ϵ = 1.0 × 10^−6.

γ | Metric | HOOI | CrNc | HOSVD_NLS | BCD-CD | Algorithm 2
0.05 | iteration | 100 | 2 | 57 | 100 | 4
0.05 | relerr | 0.0303899 | 0.0304928 | 0.0303906 | 0.071481 | 0.0303906
0.05 | time (s) | 20.6298 | 0.5488 | 45.4292 | 12.0304 | 0.7744
0.1 | iteration | 6 | 2 | 36 | 100 | 2
0.1 | relerr | 0.0527313 | 0.0528268 | 0.0527315 | 0.073079 | 0.0527553
0.1 | time (s) | 0.6524 | 0.2927 | 7.5470 | 4.6133 | 0.2011
0.2 | iteration | 7 | 4 | 48 | 100 | 6
0.2 | relerr | 0.0768619 | 0.0770075 | 0.0768605 | 0.124355 | 0.0769648
0.2 | time (s) | 0.2839 | 0.2696 | 6.1851 | 2.0955 | 0.1630
0.3 | iteration | 2 | 2 | 9 | 50 | 2
0.3 | relerr | 0.105915 | 0.105915 | 0.105915 | 0.127037 | 0.105915
0.3 | time (s) | 0.0814 | 0.1522 | 1.1482 | 0.6996 | 0.0662
Table 4. Experimental results of the algorithms when the Urban dataset was used and ϵ = 1.0 × 10^−6.

γ | Metric | HOOI | CrNc | HOSVD_NLS | BCD-CD | Algorithm 2
0.05 | iteration | 100 | 2 | 47 | 100 | 2
0.05 | relerr | 0.0310416 | 0.0311142 | 0.0310424 | 0.298475 | 0.0310626
0.05 | time (s) | 84.2434 | 1.8922 | 303.6298 | 63.0052 | 1.9077
0.1 | iteration | 100 | 2 | 91 | 100 | 5
0.1 | relerr | 0.0617698 | 0.0621153 | 0.0617709 | 0.290838 | 0.0617950
0.1 | time (s) | 63.7905 | 1.5078 | 242.1523 | 45.4830 | 2.9882
0.2 | iteration | 12 | 4 | 42 | 100 | 6
0.2 | relerr | 0.120992 | 0.121752 | 0.120992 | 0.299642 | 0.12102
0.2 | time (s) | 4.7328 | 1.8160 | 39.5247 | 27.2452 | 1.8032
0.3 | iteration | 12 | 6 | 44 | 100 | 8
0.3 | relerr | 0.180622 | 0.181341 | 0.180624 | 0.295660 | 0.180645
0.3 | time (s) | 3.0067 | 1.7068 | 27.0944 | 1.8032 | 27.2452
Table 5. Experimental results of the algorithms when the Jasper Ridge dataset was used and ϵ = 1.0 × 10^−8.

γ | Metric | HOOI | CrNc | HOSVD_NLS | BCD-CD | Algorithm 2
0.05 | iteration | 100 | 2 | 81 | 100 | 6
0.05 | relerr | 0.0291179 | 0.0291398 | 0.0291179 | 0.155959 | 0.0291195
0.05 | time (s) | 8.1761 | 0.2747 | 22.4445 | 4.7015 | 0.4218
0.1 | iteration | 10 | 2 | 63 | 100 | 11
0.1 | relerr | 0.0591755 | 0.0592419 | 0.0591755 | 0.152589 | 0.0591765
0.1 | time (s) | 0.6939 | 0.2399 | 10.3936 | 3.4566 | 0.5311
0.2 | iteration | 97 | 6 | 100 | 100 | 39
0.2 | relerr | 0.118564 | 0.118782 | 0.118571 | 0.174686 | 0.118575
0.2 | time (s) | 4.4129 | 0.6047 | 9.3648 | 2.4832 | 1.1770
0.3 | iteration | 9 | 15 | 39 | 100 | 15
0.3 | relerr | 0.141258 | 0.141259 | 0.141258 | 0.179646 | 0.141259
0.3 | time (s) | 0.3618 | 0.5357 | 3.1429 | 1.8633 | 0.3152
Table 6. Experimental results of the algorithms when the Indian Pines dataset was used and ϵ = 1.0 × 10^−8.

γ | Metric | HOOI | CrNc | HOSVD_NLS | BCD-CD | Algorithm 2
0.05 | iteration | 100 | 2 | 100 | 100 | 14
0.05 | relerr | 0.0303899 | 0.0304928 | 0.0303903 | 0.071481 | 0.0303932
0.05 | time (s) | 20.4660 | 0.5455 | 79.4811 | 11.8359 | 2.5751
0.1 | iteration | 9 | 8 | 89 | 100 | 12
0.1 | relerr | 0.0527312 | 0.0527693 | 0.0527312 | 0.073079 | 0.0527339
0.1 | time (s) | 0.9499 | 0.9088 | 18.6477 | 4.6742 | 0.8240
0.2 | iteration | 12 | 16 | 76 | 100 | 27
0.2 | relerr | 0.0768604 | 0.0768700 | 0.0768604 | 0.124355 | 0.0768864
0.2 | time (s) | 0.4698 | 0.8457 | 9.6352 | 2.1600 | 0.5941
0.3 | iteration | 4 | 4 | 18 | 100 | 6
0.3 | relerr | 0.105915 | 0.105915 | 0.105915 | 0.126850 | 0.105915
0.3 | time (s) | 0.1358 | 0.2284 | 2.2056 | 1.3507 | 0.1252
Table 7. Experimental results of the algorithms when the Urban dataset was used and ϵ = 1.0 × 10^−8.

γ | Metric | HOOI | CrNc | HOSVD_NLS | BCD-CD | Algorithm 2
0.05 | iteration | 100 | 2 | 100 | 100 | 9
0.05 | relerr | 0.0310416 | 0.0311142 | 0.0310421 | 0.298475 | 0.0310468
0.05 | time (s) | 84.8450 | 1.9226 | 665.8282 | 63.4775 | 6.4988
0.1 | iteration | 100 | 8 | 100 | 100 | 18
0.1 | relerr | 0.0617698 | 0.0620033 | 0.0617706 | 0.290838 | 0.0617754
0.1 | time (s) | 64.3367 | 5.3712 | 268.656 | 45.847 | 19.2198
0.2 | iteration | 100 | 13 | 100 | 100 | 24
0.2 | relerr | 0.120991 | 0.121355 | 0.120993 | 0.299642 | 0.120993
0.2 | time (s) | 39.3584 | 5.5898 | 95.7108 | 27.6234 | 6.8905
0.3 | iteration | 38 | 13 | 100 | 100 | 27
0.3 | relerr | 0.180622 | 0.180855 | 0.180622 | 0.29566 | 0.180625
0.3 | time (s) | 9.5898 | 3.5682 | 61.3466 | 16.7007 | 4.4569
Table 8. Experimental results of the algorithms on the HSI datasets when Gaussian white noise was added and ϵ = 1.0 × 10^−6. The numbers in parentheses represent the average number of iterations.

Dataset | γ | σ | HOOI | CrNc | HOSVD_NLS | BCD-CD | Algorithm 2
Jasper Ridge | 0.1 | +60 dB | 0.0591755 (5) | 0.0592419 (2) | 0.0591757 (26) | 0.153469 (100) | 0.0591861 (2)
Jasper Ridge | 0.1 | +30 dB | 0.0449392 (100) | 0.0449707 (2) | 0.0449392 (25.33) | 0.152897 (100) | 0.0449440 (2.67)
Jasper Ridge | 0.1 | +20 dB | 0.0788476 (100) | 0.0788524 (2) | 0.0788469 (100) | 0.118547 (100) | 0.0788477 (2.67)
Jasper Ridge | 0.3 | +60 dB | 0.141258 (5.67) | 0.141296 (5.17) | 0.141258 (23) | 0.179359 (100) | 0.141280 (6)
Jasper Ridge | 0.3 | +30 dB | 0.137677 (5.83) | 0.137713 (4.67) | 0.137677 (24.33) | 0.175483 (100) | 0.137699 (5)
Jasper Ridge | 0.3 | +20 dB | 0.113076 (6.33) | 0.113200 (2.83) | 0.113077 (32.5) | 0.166953 (100) | 0.113094 (4.33)
Indian Pines | 0.1 | +60 dB | 0.0527313 (6.33) | 0.0528347 (2) | 0.0527315 (36) | 0.0731127 (100) | 0.0527554 (2)
Indian Pines | 0.1 | +30 dB | 0.0491485 (6.83) | 0.0492972 (2) | 0.0491491 (100) | 0.0739519 (100) | 0.0491542 (5.16)
Indian Pines | 0.1 | +20 dB | 0.0791087 (100) | 0.0790856 (2) | 0.0790949 (100) | 0.0962606 (100) | 0.0790843 (2.83)
Indian Pines | 0.3 | +60 dB | 0.105915 (2) | 0.105915 (2) | 0.105915 (9) | 0.127037 (50) | 0.105915 (2)
Indian Pines | 0.3 | +30 dB | 0.1053231 (2.67) | 0.103234 (2) | 0.103231 (11) | 0.124117 (100) | 0.103233 (5)
Indian Pines | 0.3 | +20 dB | 0.0618722 (7.33) | 0.0620632 (2.5) | 0.0618740 (41.83) | 0.0746809 (100) | 0.0619321 (2.33)
Urban | 0.1 | +60 dB | 0.0617699 (100) | 0.0621156 (2) | 0.0617710 (90.67) | 0.289897 (100) | 0.0618057 (5)
Urban | 0.1 | +30 dB | 0.0539840 (71.67) | 0.0543273 (2) | 0.0539852 (52.33) | 0.296169 (100) | 0.0540272 (4.5)
Urban | 0.1 | +20 dB | 0.0853106 (70.67) | 0.08527384 (2) | 0.0853042 (100) | 0.302653 (100) | 0.0852731 (2.83)
Urban | 0.3 | +60 dB | 0.180622 (13) | 0.181286 (5.83) | 0.180624 (43.5) | 0.295660 (100) | 0.180645 (8)
Urban | 0.3 | +30 dB | 0.178234 (12.83) | 0.179252 (4.33) | 0.178236 (36.17) | 0.298176 (100) | 0.178262 (5.5)
Urban | 0.3 | +20 dB | 0.158404 (68.33) | 0.159271 (2) | 0.158406 (20.83) | 0.297824 (100) | 0.158410 (7.16)
Table 9. Relative errors when the algorithms continue for 100 iterations.

γ | Dataset | HOOI | CrNc | HOSVD_NLS | BCD-CD | Algorithm 2
0.05 | Jasper Ridge | 0.0291179 | 0.0291283 | 0.0291179 | 0.155959 | 0.0291179
0.05 | Indian Pines | 0.033899 | 0.0304179 | 0.0303900 | 0.071481 | 0.0303903
0.05 | Urban | 0.0310416 | 0.0310716 | 0.0310418 | 0.298475 | 0.0310421
0.1 | Jasper Ridge | 0.0591755 | 0.0591779 | 0.0591755 | 0.152589 | 0.0591755
0.1 | Indian Pines | 0.0527312 | 0.0527466 | 0.0527312 | 0.073079 | 0.0527312
0.1 | Urban | 0.0617698 | 0.0618683 | 0.0617698 | 0.290838 | 0.0617706
0.2 | Jasper Ridge | 0.118564 | 0.118621 | 0.118564 | 0.161462 | 0.118571
0.2 | Indian Pines | 0.0768604 | 0.0768700 | 0.0768604 | 0.124355 | 0.0768604
0.2 | Urban | 0.120991 | 0.121298 | 0.120991 | 0.299642 | 0.120992
0.3 | Jasper Ridge | 0.141258 | 0.141258 | 0.141258 | 0.179646 | 0.141258
0.3 | Indian Pines | 0.105915 | 0.105915 | 0.105915 | 0.126850 | 0.105915
0.3 | Urban | 0.180622 | 0.180855 | 0.180622 | 0.295660 | 0.180622
