Article

Hyperspectral Image Inpainting Based on Robust Spectral Dictionary Learning

Science and Technology on Complex Electronic System Simulation Laboratory, Space Engineering University, Beijing 101416, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(15), 3062; https://doi.org/10.3390/app9153062
Submission received: 27 June 2019 / Revised: 25 July 2019 / Accepted: 26 July 2019 / Published: 29 July 2019

Abstract

To address the problems of defective pixels and strips in hyperspectral images affecting subsequent processing and applications, we modeled the hyperspectral image (HSI) inpainting problem as a sparse signal reconstruction problem under incomplete observations, using the theory of sparse representation, and proposed an HSI inpainting algorithm based on spectral dictionary learning. First, we studied the HSI observation model under the assumption of additive noise. We then proposed a new algorithm for constructing a spectral dictionary directly from the hyperspectral data, introducing an online learning optimization method and performing the dictionary learning with a robust function. Afterwards, the image was sparsely encoded by applying variable decomposition and augmented Lagrangian sparse regression. Finally, the inpainted HSI was obtained by sparse reconstruction. The experimental results showed that, compared with existing algorithms, the proposed algorithm could effectively inpaint defective HSIs under different noise conditions, and required a shorter computation time than other dictionary-learning-based inpainting algorithms.

1. Introduction

In the operation of hyperspectral imaging systems, sensor malfunctions can create missing pixels, missing scan lines and other anomalies in the acquired hyperspectral image (HSI), resulting in incomplete observations of the target area. This can significantly affect subsequent processing and applications of the HSI. Therefore, it is necessary to restore the complete image information during HSI preprocessing. In the field of digital image processing, the process of restoring missing data in an image is called image inpainting [1]. Since the emergence of the image inpainting problem, several classes of algorithms have been widely studied and applied, such as interpolation methods [2,3], partial differential equation methods [4,5], total variation methods [6,7,8], texture synthesis [9], and the Huber-Markov method [10,11]. All of these algorithms were first proposed for two-dimensional images and were mainly applied to the restoration of small regions; they were not suited for large areas of missing pixels. An improved partial differential equation method was applied to hyperspectral imagery [12] to preserve edge sharpness, but it still suffered from excessive blurring. Hyperspectral images restored by the total variation method were prone to a staircase effect, which was not beneficial for the subsequent processing of the image.
As sparse representation theory has been widely applied in the field of image and signal processing, its powerful processing ability has attracted the attention of many researchers [13,14]. Sparse representation methods have also been introduced into image inpainting problems [15,16,17]. Their theoretical basis is compressed sensing: signals that are sparse or compressible in some transform domain can be reconstructed from a small amount of incomplete information.
To better solve the HSI inpainting problem, it was proposed that high-precision sparse reconstruction could be achieved by constructing an appropriate over-complete dictionary [18]. Many scholars have studied the construction of over-complete dictionaries for image inpainting. Among them, the most classic example is that of Elad and Aharon [19], who proposed the use of the K-SVD algorithm in image inpainting, with the over-complete dictionary obtained from a preliminary training process. In the K-SVD dictionary training process, the sparse coding in each iteration was obtained by the block coordinate relaxation (BCR) algorithm, and the dictionary atoms were updated by eigenvalue decomposition. To minimize the objective cost function under certain constraints, each iteration of the dictionary learning accessed all elements in the training set, making it a batch processing method. This made the process too computationally intensive to be applied to large-scale data sets, such as HSIs. In addition, the algorithm assumed that certain noise parameters, such as the variance, were known, and the image inpainting accuracy in practical applications was quite sensitive to errors in the estimated variance [17]. Zhou et al. [17] proposed a nonparametric Bayesian dictionary learning method and applied it to HSI inpainting problems. The algorithm treated dictionary learning as a factor analysis problem, in which the factor loadings corresponded to dictionary atoms, and adaptively exploited the potential spectral correlation between different spectral bands using beta-process factor analysis (BPFA). The dictionary set was adaptively updated by Gibbs sampling of the data to be processed and appeared as an approximation of the full posterior probability. The method used the spectral correlation of the HSIs to achieve better inpainting. However, BPFA must acquire the noise or residual variance as a priori knowledge, and each iteration step must access all the elements in the data set, so the process is computationally intensive. Based on BPFA, Shen et al. [20] proposed an adaptive spectrally weighted sparse Bayesian dictionary learning method to solve HSI inpainting problems. Through BPFA within a compressive sensing framework, the algorithm adaptively used the potential spectral correlation between different spectral bands. Because the algorithm was built on the BPFA method, it also has the drawbacks of requiring noise parameters as a priori knowledge and being computationally intensive.
To quickly acquire the training dictionary, an online dictionary learning algorithm was proposed previously [21]. The algorithm only used one element or a subset of the data set for each iteration instead of all the elements in the training set. It was an effective statistical approximation of the batch processing method. The algorithm successfully reduced the computational complexity of the dictionary learning by using an online scheme, but it was still sensitive to noise and outliers. Hao et al. [22] extended the dictionary learning algorithm to the complex domain to solve the noise reduction problem of interferometric synthetic aperture radar (InSAR) images.
For real images, high noise levels and the existence of outliers remain major challenges for current online dictionary learning algorithms. At present, existing dictionary-based inpainting algorithms typically use the $\ell_2$ norm as the data fitting term in the objective function. Thus, the dictionary update process is easily disrupted by noise and outliers, the optimization process is not robust enough to handle outliers, and it is often difficult to obtain satisfactory results when inpainting HSIs.
In view of the large volume of hyperspectral data and the presence of high noise levels and outliers, we adopted an online dictionary learning scheme in this work and introduced a robust function into the objective function of the dictionary update. Furthermore, we used the hyperspectral data of the image to be inpainted to construct an adaptive dictionary suited to the image inpainting problem. The introduction of the robust function enhances the robustness of the inpainting process against noise and outliers. In addition to the large data size, HSIs are also formed in a special way, resulting in three-dimensional cubic data. Because the dictionary learning algorithms currently in use were developed for 2D images, hyperspectral data are usually unfolded into pixel spectral matrices for processing. However, algorithms such as BPFA and K-SVD still use 2D processing strategies that select image blocks in the pixel spectral matrix as the training set for dictionary learning, and they do not make good use of the high correlation between the bands of hyperspectral data. For this reason, following [23], we used pixel spectral vectors instead of the traditional image blocks as the training data for dictionary learning in this work, so that the dictionary dimension equals the image spectral dimension. Because each dictionary atom can be regarded as a spectral curve constituting the HSI, the correlation between the spectral bands is better exploited, the sparse coding obtained with the dictionary is more consistent with the physical meaning of the pixels, and the inpainting results are correspondingly better.
Based on the aforementioned analysis, in this paper we propose an HSI inpainting algorithm based on online robust spectral dictionary learning, termed INORSDL. The image to be restored is processed with methods based on sparse-domain modeling to finally obtain a satisfactory inpainting result.
The main contributions of this paper are as follows:
  • We model HSI inpainting as a reconstruction problem of sparse signals under incomplete observation, using the theory of dictionary learning and sparse representation. To improve the efficiency of the dictionary learning, and in contrast to batch algorithms that load the whole training set, the spectral dictionary is trained progressively from the pixel spectral vectors waiting to be trained.
  • To enhance the robustness of the dictionary updating process against noise and outliers and to improve the inpainting of HSIs, we use the more robust $\ell_1$ loss as the data fitting term in the objective function when performing the dictionary learning.
The remainder of this paper is organized as follows. Section 1 briefly introduces the background and current status of the research topic. In Section 2, we present the mathematical inpainting model and the inpainting mechanism. In Section 3, the robust spectral dictionary learning algorithm and the robust sparse coding approach are explained. Section 4 formally outlines INORSDL, an inpainting approach based on online robust dictionary learning and sparse coding in the spectral domain. In Section 5, we present the experimental results of the proposed approach compared with state-of-the-art HSI inpainting algorithms. The final section concludes the paper.

2. HSI Inpainting Model and Mechanism

2.1. HSI Inpainting Model

The HSI inpainting problem can be modeled as a reconstruction problem of signals acquired under incomplete observation. Assuming that the noise is additive, the inpainting model can be expressed as follows:
$\mathbf{y} = \mathbf{M}\mathbf{x} + \mathbf{n}$, (1)
where $\mathbf{x} \in \mathbb{R}^{n_v}$ is the column-stacked vector form of the clean, defect-free HSI $\mathbf{X} \in \mathbb{R}^{L \times n_s}$, i.e., $\mathbf{x} = \mathrm{vec}(\mathbf{X})$. Therefore, $n_v = L \times n_s$, where $L$ is the number of spectral bands in the HSI and $n_s$ is the number of pixel spectral vectors. The matrix $\mathbf{M} \in \mathbb{R}^{n_m \times n_v}$ is a mask selecting the observable portion of $\mathbf{X}$; its entries are 0 or 1, each row contains exactly one 1, and $n_m \leq n_v$. Finally, $\mathbf{y} \in \mathbb{R}^{n_m}$ is the vector form of the observed incomplete HSI, and $\mathbf{n} \in \mathbb{R}^{n_m}$ is the additive noise.
In this paper, we assume that the mask matrix $\mathbf{M}$ is user-provided and that $\mathbf{y}$ is known. Thus, the image inpainting problem is converted into a reconstruction problem for $\mathbf{x}$.
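As a concrete illustration of Equation (1), the following minimal Python/NumPy sketch builds an incomplete, noisy observation from a clean cube. The array names, sizes, noise level, and random test data are illustrative assumptions and not part of the original work; the mask $\mathbf{M}$ is stored implicitly as an index set of observed entries rather than as an explicit $n_m \times n_v$ matrix.

import numpy as np

L, n_s = 80, 1000                       # bands and pixels (illustrative sizes)
X = np.random.rand(L, n_s)              # stand-in for a clean HSI, X in R^{L x n_s}
x = X.flatten(order="F")                # x = vec(X), column stacking, so n_v = L * n_s

# Mask: keep a random 90% of the entries; each kept index plays the role of one row of M
n_v = L * n_s
observed_idx = np.sort(np.random.choice(n_v, size=int(0.9 * n_v), replace=False))

sigma = 0.05                            # illustrative additive noise level
y = x[observed_idx] + sigma * np.random.randn(observed_idx.size)   # y = M x + n

Storing only the index set avoids ever forming the very large mask matrix explicitly, which is how the masking of Equation (1) is typically applied in practice.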

2.2. HSI Inpainting Mechanism

In recent years, sparse representations have become a popular research topic for signal recovery and reconstruction [24]. The principle is that because a real image or signal has low rank attributes, it can be assumed that a pure defect-free image or signal is a linear combination of a few atoms in the dictionary. Based on the above assumptions, image inpainting can be considered to be a sparse signal reconstruction task supported by a dictionary.
Because HSIs are sparse, a pure defect-free image can be expressed as follows:
$\mathbf{X} = \mathbf{D}\boldsymbol{\alpha}$, (2)
where $\mathbf{D} \in \mathbb{R}^{L \times k}$ is the spectral dictionary matrix, $k$ is the number of atoms in the dictionary, and $\boldsymbol{\alpha} \in \mathbb{R}^{k \times n_s}$ is the corresponding sparse coding matrix, in which only a few elements in each column are non-zero.
Substituting Equation (2) into Equation (1) yields the following:
$\mathbf{y} = \mathbf{M}\,\mathrm{vec}(\mathbf{D}\boldsymbol{\alpha}) + \mathbf{n} = \mathbf{M}(\mathbf{I} \otimes \mathbf{D})\,\mathrm{vec}(\boldsymbol{\alpha}) + \mathbf{n}$. (3)
Here, $\otimes$ denotes the Kronecker product. Therefore, the HSI inpainting problem can be expressed as follows:
$\min_{\mathrm{vec}(\boldsymbol{\alpha})} \frac{1}{2}\left\|\mathbf{M}(\mathbf{I} \otimes \mathbf{D})\,\mathrm{vec}(\boldsymbol{\alpha}) - \mathbf{y}\right\|_2^2 + \lambda\,\phi(\mathrm{vec}(\boldsymbol{\alpha}))$. (4)
We next address the problem pixel by pixel. Letting $\mathbf{M}_i$ be the submatrix of $\mathbf{M}$ acting on the $i$-th pixel spectral vector,
$\mathbf{y}_{o,i} = \mathbf{M}_i \mathbf{x}_i + \mathbf{n}_{o,i}$, (5)
where $\mathbf{y}_{o,i} \in \mathbb{R}^{n_i}$ is the observation vector of pixel spectral vector $i$, and $n_i$ is the number of observable components (spectral bands) of that pixel; therefore, $n_i \leq L$. Here, $\mathbf{x}_i$ is the $i$-th column of the pixel spectral matrix $\mathbf{X}$, with $i = 1, \dots, n_s$, and $\mathbf{n}_{o,i} \in \mathbb{R}^{n_i}$ is the corresponding noise component.
A pure defect-free pixel spectral vector can be expressed as follows:
$\mathbf{x}_i = \mathbf{D}\boldsymbol{\alpha}_i$, (6)
where $\boldsymbol{\alpha}_i$ is the $i$-th column of the sparse coding matrix $\boldsymbol{\alpha}$, and $i = 1, \dots, n_s$.
Substituting Equation (6) into Equation (5) yields the following:
$\mathbf{y}_{o,i} = \mathbf{M}_i\mathbf{D}\boldsymbol{\alpha}_i + \mathbf{n}_{o,i}$. (7)
Letting $\boldsymbol{\Psi}_i = \mathbf{M}_i\mathbf{D}$, Equation (7) can be expressed as follows:
$\mathbf{y}_{o,i} = \boldsymbol{\Psi}_i\boldsymbol{\alpha}_i + \mathbf{n}_{o,i}$. (8)
Therefore, the optimization problem (4) can be rewritten as an objective function in the form of the least absolute shrinkage and selection operator (LASSO):
$\min_{\boldsymbol{\alpha}_i} \frac{1}{2}\left\|\boldsymbol{\Psi}_i\boldsymbol{\alpha}_i - \mathbf{y}_{o,i}\right\|_2^2 + \lambda\left\|\boldsymbol{\alpha}_i\right\|_1$. (9)
In Equation (9), the squared term is the data fitting term, the $\ell_1$ norm term induces sparse coding, and the regularization parameter $\lambda$ weights the two terms in the objective function.
In real images, the HSI data often contain noise and outliers, and the squared data fitting term in Equation (9) is sensitive to noise and prone to large deviations. Therefore, we introduce a robust objective function to enhance the robustness of the image inpainting process against noise and outliers, i.e., we instead solve the following minimization problem:
$\min_{\boldsymbol{\alpha}_i} \left\|\boldsymbol{\Psi}_i\boldsymbol{\alpha}_i - \mathbf{y}_{o,i}\right\|_1 + \lambda\left\|\boldsymbol{\alpha}_i\right\|_1$. (10)
According to previous discussions [25,26], the robust error term $\left\|\boldsymbol{\Psi}_i\boldsymbol{\alpha}_i - \mathbf{y}_{o,i}\right\|_1$ makes the data fit less susceptible to the influence of noise or outliers.
The robust sparse coding $\hat{\boldsymbol{\alpha}}_i$ is obtained by solving the optimization problem given by Equation (10), and the unobserved parts of each pixel spectral vector are then reconstructed as follows:
$\hat{\mathbf{x}}_i = \mathbf{D}\hat{\boldsymbol{\alpha}}_i, \quad i = 1, \dots, n_s$. (11)
Therefore, the inpainted HSI is expressed as $\hat{\mathbf{X}} = [\hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2, \dots, \hat{\mathbf{x}}_{n_s}]$.
The process described above uses the sparse representation method to inpaint the hyperspectral remote sensing image. The most critical steps are the acquisition of the spectral dictionary $\mathbf{D}$ and the solution of the robust sparse coding $\hat{\boldsymbol{\alpha}}_i$.
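To make the per-pixel mechanism concrete, the following NumPy sketch reconstructs every pixel spectral vector from a given spectral dictionary. For brevity it solves the non-robust LASSO form of Equation (9) with a plain ISTA iteration; the paper's robust variant, Equation (10), is the subject of Section 3.2. The function names, the choice of ISTA, and the default parameters are illustrative assumptions, not the authors' implementation.

import numpy as np

def soft_threshold(v, t):
    # Soft-thresholding operator, the proximal map of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(Psi, y, lam, n_iter=200):
    # Minimize 0.5*||Psi a - y||_2^2 + lam*||a||_1  (Eq. (9)) with ISTA
    step = 1.0 / np.linalg.norm(Psi, 2) ** 2       # 1 / Lipschitz constant of the gradient
    a = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a - step * Psi.T @ (Psi @ a - y), step * lam)
    return a

def inpaint_pixels(Y, observed, D, lam=0.1):
    # Y: L x n_s defective HSI, observed: boolean L x n_s mask of valid entries,
    # D: L x k spectral dictionary. Returns the reconstructed HSI X_hat.
    L, n_s = Y.shape
    X_hat = np.empty_like(Y, dtype=float)
    for i in range(n_s):
        rows = observed[:, i]                      # M_i selects the observed bands of pixel i
        if not rows.any():                         # no observed bands: nothing to code
            X_hat[:, i] = 0.0
            continue
        Psi_i = D[rows, :]                         # Psi_i = M_i D
        alpha_i = lasso_ista(Psi_i, Y[rows, i], lam)   # sparse coding of pixel i
        X_hat[:, i] = D @ alpha_i                  # x_hat_i = D alpha_i  (Eq. (11))
    return X_hat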

3. Robust Spectral Dictionary Learning and Sparse Representation

3.1. Robust Spectral Dictionary Learning

The spectral dictionary $\mathbf{D}$ used in this work to solve for the sparse coding $\hat{\boldsymbol{\alpha}}_i$ and obtain the final inpainted HSI was acquired by adaptive training, i.e., by a learning method.
Given the large volume of HSI data, batch dictionary learning methods are computationally intensive and learn inefficiently. Therefore, we adopted an online learning method to train the dictionary, which uses only one element or a subset of the data set per iteration. Because the dictionary $\mathbf{D}$ was regarded during training as a combination of statistical parameters of the observed data, an update of $\mathbf{D}$ did not require the complete historical information; thus, it was not necessary to record every past iteration or to process all the elements during online processing. The online dictionary learning method is suitable for large-scale and dynamic data processing, which can greatly improve the computational efficiency and reduce the memory overhead.
In addition, to enhance the robustness against noise and outliers, a robust function as a data fitting term of the objective function was introduced for online updating in the dictionary learning process.
Due to the special structure of HSI data, it was necessary to adjust and improve the existing online dictionary learning algorithm. To make full use of the correlation between the spectral bands of the HSI, the pixel spectral vectors of the completely observed portion of the HSI were used instead of image blocks as the training data for the dictionary learning.
The training process of the dictionary aimed to find the coding basis of the information in the HSI to be inpainted. Thus, for a better inpainting effect, it was necessary to ensure that the coding of the information contained in each pixel spectral vector of the image had a high degree of sparseness. Therefore, the spectral dictionary obtained through learning was an over-complete dictionary, i.e., $k \gg L$.
For a pixel spectral vector with a complete observation in the HSI to be inpainted, the following holds:
$\tilde{\mathbf{y}}_{c,q} = \mathbf{M}_{c,q}\mathbf{D}\tilde{\boldsymbol{\alpha}}_q + \tilde{\mathbf{n}}_{c,q}$, (12)
where $\tilde{\mathbf{y}}_{c,q} \in \mathbb{R}^L$ is a completely observed pixel spectral vector $q$, $\tilde{\boldsymbol{\alpha}}_q$ is the corresponding sparse coding, $\tilde{\mathbf{n}}_{c,q} \in \mathbb{R}^L$ is the corresponding noise component, $q = 1, \dots, n_c$, and $n_c$ is the number of completely observed pixel spectral vectors in the image to be processed. Because $\tilde{\mathbf{y}}_{c,q}$ is observed completely, $\mathbf{M}_{c,q} = \mathbf{I}$ and Equation (12) simplifies to
$\tilde{\mathbf{y}}_{c,q} = \mathbf{D}\tilde{\boldsymbol{\alpha}}_q + \tilde{\mathbf{n}}_{c,q}$. (13)
For the set of completely observed pixel spectral vectors in a given HSI, the robust dictionary learning problem is expressed in a regularization framework:
$\min_{\mathbf{D}\in\mathcal{C},\,\tilde{\boldsymbol{\alpha}}_1,\dots,\tilde{\boldsymbol{\alpha}}_{n_c}} \sum_{q=1}^{n_c}\left(\left\|\tilde{\mathbf{y}}_{c,q} - \mathbf{D}\tilde{\boldsymbol{\alpha}}_q\right\|_1 + \lambda\left\|\tilde{\boldsymbol{\alpha}}_q\right\|_1\right)$. (14)
Equation (14) is an optimization problem whose objective function is constructed from an $\ell_1$ norm fitting error and a sparsity-inducing term, weighted by the regularization parameter $\lambda\ (>0)$; $n_c$ is the number of pixels in the training set. To prevent the dictionary atoms in $\mathbf{D}$ from tending toward infinity under the effect of the $\ell_1$ norm regularization term, we imposed the constraint $\mathbf{D} \in \mathcal{C}$, with $\mathcal{C} = \{\mathbf{D} \in \mathbb{R}^{L \times k} : \mathbf{d}_j^T\mathbf{d}_j \leq 1,\ j = 1, \dots, k\}$, which is a convex set of matrices.
For the joint optimization of the dictionary $\mathbf{D}$ and the sparse solution $\tilde{\boldsymbol{\alpha}} = [\tilde{\boldsymbol{\alpha}}_1, \dots, \tilde{\boldsymbol{\alpha}}_{n_c}]$ in Equation (14), a convenient strategy is to alternately fix one of $\mathbf{D}$ and $\tilde{\boldsymbol{\alpha}}$ and minimize over the other, i.e., to solve for and optimize $\mathbf{D}$ and $\tilde{\boldsymbol{\alpha}}$ alternately. Because Equation (14) contains non-smooth terms, its solution is somewhat challenging.
Due to the large size of HSI data, the cost of optimizing $\tilde{\boldsymbol{\alpha}} = [\tilde{\boldsymbol{\alpha}}_1, \dots, \tilde{\boldsymbol{\alpha}}_{n_c}]$ in full is very high. Therefore, we used an online scheme in which only one randomly selected subset of the pixel spectral vector set is used for training in each iteration. For each new element in the subset, we first compute the robust sparse coding and subsequently perform a dictionary update.
First, we solve the robust sparse coding problem for a given dictionary, i.e., for a given dictionary matrix $\mathbf{D}$, we optimize each element $\tilde{\boldsymbol{\alpha}}_q$ of $\tilde{\boldsymbol{\alpha}} = [\tilde{\boldsymbol{\alpha}}_1, \dots, \tilde{\boldsymbol{\alpha}}_{n_c}]$ as follows:
$\tilde{\boldsymbol{\alpha}}_q = \arg\min_{\tilde{\boldsymbol{\alpha}}_q}\left\|\mathbf{D}\tilde{\boldsymbol{\alpha}}_q - \tilde{\mathbf{y}}_{c,q}\right\|_1 + \lambda\left\|\tilde{\boldsymbol{\alpha}}_q\right\|_1, \quad q = 1, \dots, n_c$. (15)
Equation (15) is an $\ell_1$-norm metric, $\ell_1$-regularized convex optimization problem. We solved it using the equivalent $\ell_1$-approximation robust sparse coding scheme reported previously [27]; the specific details are discussed in the next subsection.
When the robust sparse coding is held fixed, the optimization of the dictionary matrix $\mathbf{D}$ is equivalent to minimizing the following problem:
$\min_{\mathbf{D}\in\mathcal{C}} \frac{1}{h}\left\|\tilde{\mathbf{y}}_c^t - \mathbf{D}\tilde{\boldsymbol{\alpha}}^t\right\|_1 + \lambda\left\|\tilde{\boldsymbol{\alpha}}^t\right\|_1$, (16)
where $t$ is the current iteration number, $h$ is the number of pixel spectral vectors used in each iteration, $\tilde{\mathbf{y}}_c^t$ is the current training set consisting of $h$ pixel spectral vectors randomly chosen from the completely observed pixel spectral vector set $\tilde{\mathbf{Y}}_c = [\tilde{\mathbf{y}}_{c,1}, \dots, \tilde{\mathbf{y}}_{c,n_c}]$, and the robust sparse coding matrix $\tilde{\boldsymbol{\alpha}}^t$ of the current training set is obtained by solving Equation (15). Equation (16) is a standard $\ell_1$ regression problem. Due to its lack of differentiability, it cannot be solved like an $\ell_2$ regression problem; therefore, we resorted to the iteratively re-weighted least squares (IRLS) method [28].
Each row $\mathbf{D}(j,:)$ can be estimated independently; thus, without loss of generality, the $j$-th optimization problem can be expressed as follows:
$\mathbf{D}(j,:) = \arg\min_{\mathbf{D}(j,:)} \frac{1}{h}\sum_{i=1}^{h}\left|\tilde{y}_{c,ij}^t - \mathbf{D}(j,:)\tilde{\boldsymbol{\alpha}}_i^t\right|$, (17)
where $\mathbf{D}(j,:) \in \mathbb{R}^{1 \times k}$ and $\tilde{y}_{c,ij}^t$ is the $j$-th element of $\tilde{\mathbf{y}}_{c,i}^t$. According to the IRLS, Equation (17) can be converted into the following two problems:
$\mathbf{D}(j,:) = \arg\min_{\mathbf{D}(j,:)} \frac{1}{h}\sum_{i=1}^{h}\omega_{ij}^t\left(\tilde{y}_{c,ij}^t - \mathbf{D}(j,:)\tilde{\boldsymbol{\alpha}}_i^t\right)^2$, (18)
$\omega_{ij}^t = \frac{1}{\sqrt{\left(\tilde{y}_{c,ij}^t - \mathbf{D}(j,:)\tilde{\boldsymbol{\alpha}}_i^t\right)^2 + \delta}}$, (19)
where $\delta$ is a small positive quantity (the machine precision was used in the experiments). For robust statistical properties, the weighted square in Equation (18) is a reasonable approximation of the $\ell_1$ norm. Each IRLS iteration alternates between the two minimizations in Equations (18) and (19), and a global optimum of the quadratic problem in Equation (18) can be obtained by taking the derivative and setting it to zero.
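As an illustration of this re-weighting, the following NumPy sketch estimates a single dictionary row from one mini-batch by alternating Equations (18) and (19); the function name, iteration count, and stopping rule are illustrative assumptions rather than the authors' implementation (the online accumulation of statistics is described next).

import numpy as np

def irls_row_update(y_j, A, n_iter=20, delta=np.finfo(float).eps):
    # Solve  min_d (1/h) * sum_i | y_j[i] - d @ A[:, i] |   (Eq. (17)) by IRLS.
    # y_j: length-h vector of the j-th band over the mini-batch pixels,
    # A:   k x h matrix whose columns are the sparse codes alpha_i^t.
    k, h = A.shape
    d = np.zeros(k)
    for _ in range(n_iter):
        r = y_j - d @ A                            # residuals of the current estimate
        w = 1.0 / np.sqrt(r ** 2 + delta)          # weights of Eq. (19)
        M = (A * w) @ A.T                          # weighted normal matrix of Eq. (18)
        C = (y_j * w) @ A.T
        d = np.linalg.solve(M + delta * np.eye(k), C)   # closed-form minimizer of Eq. (18)
    return d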
In the online processing method, each update uses a small batch of new data of a fixed size, and thus, it is necessary to carry along the statistical information of the historical data. Therefore, the information in the current iteration can be updated as follows:
$\mathbf{M}_j^t = \mathbf{M}_j^{t-1} + \sum_{i=1}^{h}\omega_{ij}^t\,\tilde{\boldsymbol{\alpha}}_i^t(\tilde{\boldsymbol{\alpha}}_i^t)^T$, (20)
$\mathbf{C}_j^t = \mathbf{C}_j^{t-1} + \sum_{i=1}^{h}\omega_{ij}^t\,\tilde{y}_{c,ij}^t(\tilde{\boldsymbol{\alpha}}_i^t)^T$, (21)
with the initialization
$\mathbf{M}_j^1 = \sum_{i=1}^{h}\omega_{ij}^1\,\tilde{\boldsymbol{\alpha}}_i^1(\tilde{\boldsymbol{\alpha}}_i^1)^T$, (22)
$\mathbf{C}_j^1 = \sum_{i=1}^{h}\omega_{ij}^1\,\tilde{y}_{c,ij}^1(\tilde{\boldsymbol{\alpha}}_i^1)^T$. (23)
Here, $\sum_{i=1}^{h}\omega_{ij}^t\,\tilde{\boldsymbol{\alpha}}_i^t(\tilde{\boldsymbol{\alpha}}_i^t)^T$ and $\sum_{i=1}^{h}\omega_{ij}^t\,\tilde{y}_{c,ij}^t(\tilde{\boldsymbol{\alpha}}_i^t)^T$ carry the information of the data observed in the current iteration.
Therefore, $\mathbf{D}(j,:)$ can be obtained by solving the following linear system:
$\mathbf{C}_j^t = \mathbf{D}(j,:)\,\mathbf{M}_j^t$. (24)
To solve Equation (24), a conjugate gradient method was used, with the value of $\mathbf{D}^{t-1}$ from the previous iteration as the starting value for the current iteration. Because the matrix $\mathbf{M}_j^t$ is usually diagonally dominant, a reasonable initialization allows the conjugate gradient update to converge quickly.
Through the above analysis, the specific steps of the online robust spectral dictionary learning algorithm (ORSDL) can be summarized. Algorithm 1 shows the pseudo code for the ORSDL.
Algorithm 1: Online robust spectral dictionary learning (ORSDL)
Input: completely observed pixel spectral vectors $\tilde{\mathbf{y}}_{c,q} \in \mathbb{R}^L$, $q = 1, \dots, n_c$;
   number of iterations $T \in \mathbb{N}$; regularization parameter $\lambda > 0$;
   number of pixel spectral vectors per iteration $h \in \mathbb{N}$;
   initial spectral dictionary $\mathbf{D}^0 \in \mathbb{R}^{L \times k}$
Output: spectral dictionary $\mathbf{D} \in \mathbb{R}^{L \times k}$
1   for $t = 1$ to $T$ do
2     randomly choose $\tilde{\mathbf{y}}_c^t = [\tilde{\mathbf{y}}_{c,i}^t,\ i = 1, \dots, h]$ from $\tilde{\mathbf{Y}}_c$
      /* robust sparse coding */
3     for $i = 1$ to $h$ do
4       $\tilde{\boldsymbol{\alpha}}_i^t = \arg\min_{\tilde{\boldsymbol{\alpha}}_i^t}\left\|\tilde{\mathbf{y}}_{c,i}^t - \mathbf{D}\tilde{\boldsymbol{\alpha}}_i^t\right\|_1 + \lambda\left\|\tilde{\boldsymbol{\alpha}}_i^t\right\|_1$
5     end for
      /* dictionary update */
6     repeat
7       for $j = 1$ to $L$ do
8         $\mathbf{M}_j^t = \mathbf{M}_j^{t-1} + \sum_{i=1}^{h}\omega_{ij}^t\,\tilde{\boldsymbol{\alpha}}_i^t(\tilde{\boldsymbol{\alpha}}_i^t)^T$
9         $\mathbf{C}_j^t = \mathbf{C}_j^{t-1} + \sum_{i=1}^{h}\omega_{ij}^t\,\tilde{y}_{c,ij}^t(\tilde{\boldsymbol{\alpha}}_i^t)^T$
10        solve the linear system $\mathbf{C}_j^t = \mathbf{D}(j,:)\,\mathbf{M}_j^t$
11        $\omega_{ij}^t = 1\big/\sqrt{(\tilde{y}_{c,ij}^t - \mathbf{D}(j,:)\tilde{\boldsymbol{\alpha}}_i^t)^2 + \delta}$
12      end for
13    until convergence
14  end for
15  return $\mathbf{D}$
Because the value of the initialization dictionary has a negligible effect on the learning results of the spectral dictionary, the first $k$ pixel spectral vectors of $\tilde{\mathbf{Y}}_c$ are usually chosen to initialize the spectral dictionary for convenience.
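A compact NumPy sketch of Algorithm 1 is given below. It follows the steps above but is only an illustration: the mini-batch selection, the fixed inner iteration count standing in for the convergence test, and the helper robust_sparse_code (a sketch of which appears in Section 3.2) are assumptions, not the authors' code.

import numpy as np

def orsdl(Y_c, k, lam=1.0, T=50, h=32, irls_iters=5, delta=np.finfo(float).eps):
    # ORSDL sketch. Y_c: L x n_c matrix of completely observed pixel spectral vectors
    # (assumes n_c >= k and h <= n_c). Returns an L x k spectral dictionary.
    L, n_c = Y_c.shape
    D = Y_c[:, :k].astype(float).copy()               # initialize with the first k spectra
    D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)   # keep D in the constraint set C
    M = np.zeros((L, k, k))                           # accumulated statistics M_j, one per row of D
    C = np.zeros((L, k))                              # accumulated statistics C_j
    rng = np.random.default_rng(0)
    for _ in range(T):
        batch = rng.choice(n_c, size=h, replace=False)
        Yb = Y_c[:, batch]                            # mini-batch of the current iteration
        # robust sparse coding of the mini-batch, Eq. (15)
        A = np.column_stack([robust_sparse_code(D, Yb[:, i], lam) for i in range(h)])
        # dictionary update: re-weighted row-wise updates, Eqs. (18)-(24)
        for _ in range(irls_iters):                   # "repeat ... until convergence"
            R = Yb - D @ A                            # residuals for every row and batch element
            W = 1.0 / np.sqrt(R ** 2 + delta)         # IRLS weights, Eq. (19)
            M_new = np.empty_like(M)
            C_new = np.empty_like(C)
            for j in range(L):
                M_new[j] = M[j] + (A * W[j]) @ A.T                                  # Eq. (20)
                C_new[j] = C[j] + (Yb[j] * W[j]) @ A.T                              # Eq. (21)
                D[j, :] = np.linalg.solve(M_new[j] + delta * np.eye(k), C_new[j])   # Eq. (24)
        M, C = M_new, C_new                           # carry the statistics to the next iteration
        D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)
    return D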

3.2. Robust Sparse Coding

After training the spectral dictionary $\mathbf{D}$ with the method proposed in Section 3.1, the quantity $\boldsymbol{\Psi}_i$ in the optimization problem of Equation (10) is known. Thus, solving for the corresponding robust sparse coding $\hat{\boldsymbol{\alpha}}_i$ is analogous to computing the sparse coding in the dictionary learning and is regarded as an $\ell_1$-norm measurement, $\ell_1$-norm regularized convex optimization problem, expressed as follows:
$\hat{\boldsymbol{\alpha}}_i = \arg\min_{\boldsymbol{\alpha}_i}\left\|\boldsymbol{\Psi}_i\boldsymbol{\alpha}_i - \mathbf{y}_{o,i}\right\|_1 + \lambda\left\|\boldsymbol{\alpha}_i\right\|_1$, (25)
where $\left\|\boldsymbol{\alpha}_i\right\|_1 = \sum_j\left|\alpha_{ij}\right|$ and $\alpha_{ij}$ denotes the $j$-th entry of $\boldsymbol{\alpha}_i$.
Equation (25) can be converted into an equivalent approximate robust sparse coding problem:
$\hat{\boldsymbol{\alpha}}_i = \arg\min_{\boldsymbol{\alpha}_i}\left\|\begin{bmatrix}\mathbf{y}_{o,i}\\ \mathbf{0}\end{bmatrix} - \begin{bmatrix}\boldsymbol{\Psi}_i\\ \lambda\mathbf{I}\end{bmatrix}\boldsymbol{\alpha}_i\right\|_1$. (26)
Because the set of linear equations
$\begin{bmatrix}\mathbf{y}_{o,i}\\ \mathbf{0}\end{bmatrix} = \begin{bmatrix}\boldsymbol{\Psi}_i\\ \lambda\mathbf{I}\end{bmatrix}\boldsymbol{\alpha}_i$ (27)
is overdetermined, Equation (26) satisfies the conditions outlined previously [29], and the existence of a globally optimal solution is therefore guaranteed. Finally, the corresponding robust sparse coding $\hat{\boldsymbol{\alpha}}_i$ is obtained by solving Equation (26).
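The following NumPy sketch illustrates one way to solve the stacked problem of Equation (26). The paper follows the scheme of [27]; here a generic iteratively re-weighted least squares solver stands in for illustration, and the function name and iteration count are assumptions.

import numpy as np

def robust_sparse_code(Psi, y, lam, n_iter=30, delta=np.finfo(float).eps):
    # Solve Eq. (26):  min_a || [y; 0] - [Psi; lam*I] a ||_1   with an IRLS stand-in solver.
    m, k = Psi.shape
    Phi = np.vstack([Psi, lam * np.eye(k)])        # stacked system of Eq. (27)
    b = np.concatenate([y, np.zeros(k)])
    a = np.zeros(k)
    for _ in range(n_iter):
        r = b - Phi @ a
        w = 1.0 / np.sqrt(r ** 2 + delta)          # re-weighting toward the l1 objective
        PhiW = Phi * w[:, None]                    # row-weighted Phi
        a = np.linalg.solve(Phi.T @ PhiW + delta * np.eye(k), PhiW.T @ b)
    return a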

4. HSI Inpainting Algorithm Outline

By combining the HSI inpainting model and mechanism analyzed in Section 2 with the online robust spectral dictionary learning approach and the robust sparse coding method studied in Section 3, we obtain the proposed HSI inpainting algorithm based on online robust spectral dictionary learning, INORSDL. Algorithm 2 shows the pseudo code of the proposed INORSDL approach.
Algorithm 2: INORSDL
Input: $\mathbf{Y} \in \mathbb{R}^{L \times n_s}$ (observed HSI data), $\mathbf{M}$ (inpainting mask matrix)
Output: $\hat{\mathbf{X}} \in \mathbb{R}^{L \times n_s}$ (inpainted HSI data)
1  begin
2    select the completely observed HSI pixel spectral vectors $\tilde{\mathbf{y}}_{c,q} \in \mathbb{R}^L$, $q = 1, \dots, n_c$
3    $\hat{\mathbf{D}} = \mathrm{ORSDL}(\tilde{\mathbf{y}}_{c,q},\ q = 1, \dots, n_c)$ (robust spectral dictionary training, Algorithm 1)
4    solve for the robust sparse coding matrix $\hat{\boldsymbol{\alpha}}$ (Section 3.2)
5    $\hat{\mathbf{X}} = \hat{\mathbf{D}}\hat{\boldsymbol{\alpha}}$
6  end
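For completeness, a NumPy sketch of the end-to-end pipeline of Algorithm 2 is shown below. It assumes the orsdl and robust_sparse_code helpers sketched in Section 3, represents the mask implicitly as a boolean array, and uses illustrative defaults for the dictionary size and $\lambda$.

import numpy as np

def inorsdl(Y, observed, k=256, lam=1.0):
    # Y: L x n_s observed HSI with arbitrary values at missing entries,
    # observed: boolean L x n_s mask (the matrix M in implicit form).
    L, n_s = Y.shape
    complete = observed.all(axis=0)                  # pixels observed in every band
    Y_c = Y[:, complete]                             # training set for the spectral dictionary
    D = orsdl(Y_c, k, lam)                           # Algorithm 1 (sketch in Section 3.1)
    X_hat = np.empty_like(Y, dtype=float)
    for i in range(n_s):
        rows = observed[:, i]
        alpha_i = robust_sparse_code(D[rows, :], Y[rows, i], lam)   # Section 3.2 sketch
        X_hat[:, i] = D @ alpha_i                    # reconstruction, X_hat = D alpha
    return X_hat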

5. Experimental Results and Analysis

To verify the performance of the proposed HSI inpainting algorithm, experiments were performed using simulated and real hyperspectral data. The experimental results were analyzed and compared with those of effective, widely used HSI inpainting algorithms. The performance of the algorithm was evaluated in terms of subjective effects and objective quality. The experiments were run in MATLAB R2018a on 64-bit Windows 10; the hardware platform was an Alienware R15 laptop with a 2.80 GHz Intel Core i7-7700HQ CPU and 16 GB of RAM.
To validate the effectiveness of the algorithm, we compared the performance of the proposed algorithm with that of the 3D-data-compatible discrete partial differential equation method (PDE) [30,31], unmixing-based denoising (UBD) [32], a denoising method based on local low-rank matrix recovery and global spatial-spectral total variation (LLRSSTV) [33], and BPFA [17].

5.1. Simulated Data Experiment

The Pavia Centre HSI was used to construct the simulated data (http://www.ehu.eus/ccwintco/index.php?Title=Hyperspectral_Remote_Sensing_Scenes). The Pavia Centre scene was acquired by the reflective optics system imaging spectrometer (ROSIS) sensor during a flight over Pavia, northern Italy, and is atmospherically corrected [34]. Each pixel in the scene has 102 spectral bands between 0.43 μm and 0.86 μm. For convenience of analysis, a spatial subset of 201 × 201 pixels was chosen here, as shown in Figure 1. To simulate a clean image, bands with high noise levels were removed, leaving a total of 80 spectral bands. This data set was treated as a clean image in this section, and each band was normalized before the noise and missing strips were added.
The effects of four types of noise were considered in the experiment: Gaussian independent and identically distributed (i.i.d.) noise, Gaussian non-i.i.d. noise, Laplacian noise, and Poisson noise. The Gaussian i.i.d. noise had a zero mean and a variance of $\sigma = 0.10$. The Gaussian non-i.i.d. noise obeyed the distribution $\mathbf{n}_i \sim \mathcal{N}(\mathbf{0}, \mathbf{D}_d^2)$, where $\mathbf{D}_d$ is a diagonal matrix whose diagonal elements are uniformly distributed in the range (0, 1). The Laplacian noise scale was set to 30. The Poisson noise obeyed the distribution $\mathbf{Y} \sim \mathcal{P}(\varphi\mathbf{X})$, where $\mathcal{P}(\mathbf{W})$ represents a matrix of independent Poisson random variables whose parameters are given by the corresponding elements of $\mathbf{W} = [w_{ij}]$. The signal-to-noise ratio, $\mathrm{SNR} := \varphi\left(\sum_{i,j} w_{ij}^2\right)\big/\left(\sum_{i,j} w_{ij}\right)$, was set to 15 dB through the parameter $\varphi$. The missing strips were set following a previous report [12], and the positions of the missing strips were known. In the dictionary learning process, the regularization parameter $\lambda$ of the proposed INORSDL was set to 1.
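Corrupted test data of this kind can be simulated along the following lines (NumPy sketch). The Laplacian scale, the Poisson gain, and the strip positions shown here are illustrative stand-ins and do not reproduce the exact settings described above.

import numpy as np

rng = np.random.default_rng(0)
L, H, W = 80, 201, 201
X = rng.random((L, H, W))                 # stand-in for the normalized clean Pavia Centre cube

noisy = {}
noisy["gauss_iid"] = X + 0.10 * rng.standard_normal(X.shape)             # zero mean, sigma = 0.10
band_sigma = rng.random(L)                                               # band-wise std, uniform in (0, 1)
noisy["gauss_niid"] = X + band_sigma[:, None, None] * rng.standard_normal(X.shape)
noisy["laplace"] = X + rng.laplace(scale=0.03, size=X.shape)             # illustrative scale parameter
phi = 200.0                                                              # illustrative Poisson gain phi
noisy["poisson"] = rng.poisson(phi * X) / phi

# Missing strips: drop a few scan lines in every band and record them in the mask
observed = np.ones((L, H, W), dtype=bool)
observed[:, 90:95, :] = False                                            # illustrative strip position
for img in noisy.values():
    img[~observed] = 0.0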
To quantitatively assess the effectiveness of the algorithm, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) indicators were used to evaluate and compare the image inpainting performance of each algorithm. The PSNR measures the degree of similarity between the inpainted HSI as a whole and the original image. SSIM is a quantitative evaluation method based on human vision; it better reflects the perceived visual quality of an image and emphasizes structural information, making it suitable for the spatial dimensions of HSIs.
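A minimal sketch of these two indicators, computed band-wise and averaged over the spectral dimension, is given below; the use of scikit-image for SSIM is an illustrative choice, not necessarily the implementation used in the experiments.

import numpy as np
from skimage.metrics import structural_similarity as ssim   # illustrative SSIM implementation

def mpsnr(X, X_hat, peak=1.0):
    # Mean PSNR over spectral bands for cubes shaped (L, H, W) with values in [0, peak]
    mse = np.mean((X - X_hat) ** 2, axis=(1, 2))
    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))

def mssim(X, X_hat, peak=1.0):
    # Mean SSIM over spectral bands
    return float(np.mean([ssim(X[b], X_hat[b], data_range=peak) for b in range(X.shape[0])]))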
Figure 2, Figure 3, Figure 4 and Figure 5 show the results of the various hyperspectral inpainting algorithms for Gaussian i.i.d. noise, Gaussian non-i.i.d. noise, Laplacian noise, and Poisson noise, respectively. For Poisson noise, the proposed algorithm first converted the Poisson noise into approximately additive Gaussian noise of nearly constant variance using a variance-stabilizing transformation [35] and subsequently processed it.
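One common choice of variance-stabilizing transformation consistent with reference [35] is the Anscombe transform; a minimal sketch is given below. The paper does not spell out the exact forward and inverse transforms used, so this pairing is an assumption.

import numpy as np

def anscombe(x):
    # Forward Anscombe transform: Poisson counts -> approximately unit-variance Gaussian
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def anscombe_inverse(y):
    # Simple algebraic inverse; ref. [35] derives a closed-form exact unbiased inverse,
    # which should be preferred for low counts.
    return (y / 2.0) ** 2 - 3.0 / 8.0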
The inpainting results of the different algorithms under the various noise conditions show that all of the algorithms could restore the missing strips in the image, except that the LLRSSTV algorithm could not restore them completely, which is evident in Figure 2. Because the PDE algorithm only inpaints missing pixels and does not perform simultaneous denoising, noise remained in its output images. In sharp contrast, the UBD, LLRSSTV, BPFA, and INORSDL algorithms removed a large portion of the noise while restoring the missing pixels. Because the UBD algorithm reconstructs missing or corrupted pixels from the results of spectral unmixing, its performance depends largely on the estimation accuracy of the endmembers and abundance coefficients, and spectral unmixing is still a challenging problem in HSI processing. The BPFA algorithm showed good inpainting performance under various noise conditions, but its ability to remove Gaussian non-i.i.d. noise was poor, which is evident in Figure 3. Figure 4 shows that, with the exception of the PDE algorithm, all of the inpainting algorithms could remove Laplacian noise to some extent, but the proposed INORSDL algorithm yielded better results. The INORSDL algorithm could restore the missing pixels under all of the noise conditions considered and could also eliminate most of the noise at the same time, directly providing a high-quality inpainted image. Figure 2, Figure 3, Figure 4 and Figure 5 show that the INORSDL algorithm qualitatively achieved the best inpainting results under the different noise conditions.
In addition, we compared the performance of each inpainting algorithm in terms of the mean peak signal-to-noise ratio (Mean-PSNR, MPSNR) and the mean structural similarity (Mean-SSIM, MSSIM), as shown in Table 1, where the highest value in each row is shown in bold. As shown in the table, the INORSDL algorithm yielded the best results under all four noise conditions, which quantitatively confirms its inpainting performance.
Table 1 also shows the time required for the different algorithms when applied to different noise conditions. Compared with the BPFA, another algorithm based on dictionary learning, the algorithm of this work was faster and more efficient.

5.2. Real Data Experiment

In this section, the algorithm was applied to the real HSI dataset Urban (http://www.agc.army.mil/). The Urban scene was acquired by the Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensor during a flight over Copperas Cove, Texas. The dataset has a spatial size of 307 × 307 pixels and includes 210 bands ranging from 0.4 to 2.5 μm, with a spectral resolution of 10 nm, as shown in Figure 6. Its spatial resolution is about 2 × 2 m. Due to the influence of water vapor and atmospheric effects, the image has strong noise in many bands, and these bands contain little useful information.
To verify the effectiveness of the algorithm, the proposed algorithm was compared with the other hyperspectral inpainting algorithms: PDE, UBD, LLRSSTV, and BPFA. In the dictionary learning process, the regularization parameter $\lambda$ of the proposed INORSDL was set to 1. Figure 7, Figure 8 and Figure 9 display the inpainting results of the various hyperspectral inpainting algorithms for the 105th, 144th, and 208th bands, respectively. It is evident in the figures that the proposed INORSDL algorithm exhibited better inpainting results than the other algorithms. Although the LLRSSTV algorithm removed noise while restoring damaged pixels, its inpainted images showed noticeable patch effects, and some local restorations were unsatisfactory.

6. Conclusions

In this paper, we proposed an HSI inpainting algorithm based on online robust spectral dictionary learning. A spectral dictionary adapted to image inpainting was obtained using a new algorithm, proposed in this paper, for constructing a dictionary directly from the hyperspectral data. The spectral dictionary was subsequently used in the sparse reconstruction of the image, thereby restoring missing or corrupted pixels. Compared to state-of-the-art HSI inpainting algorithms, the proposed algorithm possessed a better inpainting ability under additive Gaussian, Laplacian, and Poisson noise. Compared to other image inpainting algorithms based on dictionary learning, it was also faster. However, the algorithm still requires complex calculations, and thus future research should focus on its rapid implementation. In addition, the research object of this paper was limited to visible near-infrared (VNIR) and short-wave infrared (SWIR) HSIs. In the preprocessing of medium-wave infrared (MWIR) HSIs, dead-pixel inpainting is a critical problem, and the imaging principle of MWIR HSIs is very different from that of VNIR/SWIR HSIs. MWIR HSI inpainting will therefore be an important direction of our future research.

Author Contributions

X.S. conceived and designed the method; L.W. guided the students to complete the research; X.S. performed the simulation and experiment tests; L.W. helped in the simulation and experiment tests; and X.S. wrote the paper.

Funding

This research was supported by the National Natural Science Foundation of China under Grant No. 61801513.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bertalmio, M.; Sapiro, G.; Ballester, C. Image Inpainting. Siggraph 2005, 4, 417–424. [Google Scholar] [CrossRef]
  2. Kokaram, A.C.; Morris, R.D.; Fitzgerald, W.J.; Rayner, P.J.W. Interpolation of missing data in image sequences. IEEE Trans. Image Process. 1995, 4, 1509–1519. [Google Scholar] [CrossRef]
  3. Shih, T.K.; Chang, R.C.; Lu, L.C.; Ko, W.C.; Wang, C.C. Adaptive Digital Image Inpainting. In Proceedings of the International Conference on Advanced Information Networking and Applications, Fukuoka, Japan, 29–31 March 2004. [Google Scholar] [CrossRef]
  4. Grossauer, H. A Combined PDE and Texture Synthesis Approach to Inpainting. In Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004. [Google Scholar] [CrossRef]
  5. Bertalmio, M. Strong-continuation, contrast-invariant inpainting with a third-order optimal PDE. IEEE Trans. Image Process. 2006, 15, 1934–1938. [Google Scholar] [CrossRef]
  6. Ng, M.K.; Shen, H.; Chaudhuri, S.; Yau, A.C. Zoom-based super-resolution reconstruction approach using prior total variation. Opt. Eng. 2007, 46, 127003. [Google Scholar] [CrossRef]
  7. Ng, M.K.; Shen, H.; Lam, E.Y.; Zhang, L. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video. EURASIP J. Adv. Sig. Pr. 2007, 2007, 074585. [Google Scholar] [CrossRef] [Green Version]
  8. Chan, T.F.; Yip, A.M.; Park, F.E. Simultaneous total variation image inpainting and blind deconvolution. Int. J. Imag. Syst. Tech. 2010, 15, 92–102. [Google Scholar] [CrossRef]
  9. Wei, Y.; Sun, J.X.; Gang, Z.; Teng, S.; Wen, G.J. PDE Image Inpainting with Texture Synthesis based on Damaged Region Classification. In Proceedings of the International Conference on Advanced Computer Control, Shenyang, China, 27–29 March 2010. [Google Scholar] [CrossRef]
  10. Shen, H.; Zhang, L. A MAP-Based Algorithm for Destriping and Inpainting of Remotely Sensed Images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1492–1502. [Google Scholar] [CrossRef]
  11. Shen, H.; Liu, Y.; Ai, T.; Wang, Y.; Wu, B. Universal reconstruction method for radiometric quality improvement of remote sensing images. Int. J. Appl. Earth Obs. 2010, 12, 278–286. [Google Scholar] [CrossRef]
  12. Zhuang, L.; Bioucas-Dias, J.M. Fast Hyperspectral Image Denoising and Inpainting Based on Low-Rank and Sparse Representations. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2018, 11, 730–742. [Google Scholar] [CrossRef]
  13. Xu, X.; Shi, Z.; Pan, B. ℓ0-based sparse hyperspectral unmixing using spectral information and a multi-objectives formulation. ISPRS J. Photogrammetry Remote Sens. 2018, 141, 46–58. [Google Scholar] [CrossRef]
  14. Pan, B.; Shi, Z.; Xu, X.; Shi, T.; Zhang, N.; Zhu, X. CoinNet: Copy Initialization Network for Multispectral Imagery Semantic Segmentation. IEEE Geosci. Remote Sens. Lett. 2018, 16, 816–820. [Google Scholar] [CrossRef]
  15. Fadili, M.J.; Starck, J.L. EM Algorithm for Sparse Representation-based Image Inpainting. In Proceedings of the IEEE International Conference on Image Processing, Genova, Italy, 14 September 2005. [Google Scholar] [CrossRef]
  16. Shen, B.; Wei, H.; Zhang, Y.; Zhang, Y.J. Image inpainting via sparse representation. In Proceedings of the IEEE International Conference on Acoustics, Taipei, Taiwan, 19–24 April 2009. [Google Scholar] [CrossRef]
  17. Zhou, M.; Chen, H.; Paisley, J.; Ren, L.; Li, L.; Xing, Z.; Dunson, D.; Sapiro, G.; Carin, L. Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images. IEEE Trans. Image. Process. 2012, 21, 130–144. [Google Scholar] [CrossRef]
  18. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  19. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Proc. 2006, 54, 4311. [Google Scholar] [CrossRef]
  20. Shen, H.; Li, X.; Zhang, L.; Tao, D.; Zeng, C. Compressed Sensing-Based Inpainting of Aqua Moderate Resolution Imaging Spectroradiometer Band 6 Using Adaptive Spectrum-Weighted Sparse Bayesian Dictionary Learning. IEEE Trans. Geosci. Remote Sens. 2013, 52, 894–906. [Google Scholar] [CrossRef]
  21. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G. Online Learning for Matrix Factorization and Sparse Coding. J. Mach. Learn. Res. 2009, 11, 19–60. [Google Scholar] [CrossRef]
  22. Hao, H.; Bioucas-Dias, J.M.; Katkovnik, V. Interferometric Phase Image Estimation via Sparse Coding in the Complex Domain. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2587–2602. [Google Scholar] [CrossRef]
  23. Song, X.; Wu, L.; Hao, H.; Xu, W. Hyperspectral Image Denoising Based on Spectral Dictionary Learning and Sparse Coding. Electronics 2019, 8, 86. [Google Scholar] [CrossRef]
  24. Elad, M.; Mario, A.T.F.; Yi, M. On the Role of Sparse and Redundant Representations in Image Processing. Proc. IEEE 2010, 98, 972–982. [Google Scholar] [CrossRef]
  25. Andrew, W.; John, W.; Arvind, G.; Zihan, Z.; Hossein, M.; Yi, M. Toward a practical face recognition system: Robust alignment and illumination by sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 372. [Google Scholar] [CrossRef]
  26. Lu, C.; Shi, J.; Jia, J. Online Robust Dictionary Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013. [Google Scholar] [CrossRef]
  27. Zhao, C.; Wang, X.; Cham, W.K. Background Subtraction via Robust Dictionary Learning. EURASIP J. Image Vide. 2011, 2011, 1–12. [Google Scholar] [CrossRef] [Green Version]
  28. Bissantz, N.; Dümbgen, L.; Munk, A.; Stratmann, B. Convergence analysis of generalized iteratively reweighted least squares algorithms on convex function spaces. Tech. Rep. 2008, 19, 1828–1845. [Google Scholar] [CrossRef]
  29. Candes, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215. [Google Scholar] [CrossRef]
  30. D’Errico, J. Inpainting nan elements in 3-d. Available online: http://www.mathworks.com/matlabcentral/fileexchange/21214-inpaintingnan-elements-in-3-d.htm (accessed on 15 April 2019).
  31. Schneider, C.; Gürenci, J. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations; Springer: Berlin, Germany, 2002. [Google Scholar]
  32. Cerra, D.; Müller, R.; Reinartz, P. Unmixing-based Denoising for Destriping and Inpainting of Hyperspectral Images. In Proceedings of the Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014. [Google Scholar] [CrossRef]
  33. He, W.; Zhang, H.; Shen, H.; Zhang, L. Hyperspectral image denoising using local low-rank matrix recovery and global spatial–spectral total variation. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2018, 11, 713–729. [Google Scholar] [CrossRef]
  34. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in Hyperspectral Image and Signal Processing: A Comprehensive Overview of the State of the Art. IEEE Geosci. Remote Sens. Mag. 2017, 5, 37–78. [Google Scholar] [CrossRef] [Green Version]
  35. Markku, M.K.; Alessandro, F. A closed-form approximation of the exact unbiased inverse of the Anscombe variance-stabilizing transformation. IEEE Trans. Image Process. 2011, 9, 2697–2698. [Google Scholar] [CrossRef]
Figure 1. 3D-cube of Pavia Centre hyperspectral data set.
Figure 2. Inpainting results of band 60 of Pavia Centre dataset with Gaussian i.i.d. noise. (a) Clean image; (b) noisy image; (c) PDE; (d) UBD; (e) LLRSSTV; (f) BPFA; and (g) INORSDL.
Figure 3. Inpainting results of band 60 of Pavia Centre dataset with Gaussian non-i.i.d. noise. (a) Clean image; (b) noisy image; (c) PDE; (d) UBD; (e) LLRSSTV; (f) BPFA; and (g) INORSDL.
Figure 4. Inpainting results of band 60 of Pavia Centre dataset with Laplacian noise. (a) Clean image; (b) noisy image; (c) PDE; (d) UBD; (e) LLRSSTV; (f) BPFA; and (g) INORSDL.
Figure 5. Inpainting results of band 60 of Pavia Centre dataset with Poisson noise. (a) Clean image; (b) noisy image; (c) PDE; (d) UBD; (e) LLRSSTV; (f) BPFA; and (g) INORSDL.
Figure 6. 3D-cube of Urban hyperspectral data set.
Figure 7. Inpainting results of band 105 of Urban dataset. (a) Original image; (b) PDE; (c) UBD; (d) LLRSSTV; (e) BPFA; and (f) INORSDL.
Figure 8. Inpainting results of band 144 of Urban dataset. (a) Original image; (b) PDE; (c) UBD; (d) LLRSSTV; (e) BPFA; and (f) INORSDL.
Figure 9. Inpainting results of band 208 of Urban dataset. (a) Original image; (b) PDE; (c) UBD; (d) LLRSSTV; (e) BPFA; and (f) INORSDL.
Table 1. Performance comparison of different hyperspectral inpainting algorithms applied to Pavia Centre dataset. The highest value in each row is shown in bold.

Noise                      Index        Noisy Image   PDE       UBD       LLRSSTV   BPFA      INORSDL
Gaussian i.i.d. noise      MPSNR (dB)   19.7997       20.0392   34.4112   30.8946   35.6697   36.7168
                           MSSIM        0.4331        0.4435    0.9355    0.9120    0.9572    0.9687
                           Time (s)     -             2535971126903
Gaussian non-i.i.d. noise  MPSNR (dB)   28.1013       28.6221   37.5838   34.1243   32.8583   38.9195
                           MSSIM        0.7057        0.7206    0.9686    0.9580    0.8240    0.9895
                           Time (s)     -             26057451521621
Laplacian noise            MPSNR (dB)   33.1715       33.9045   37.9209   35.3268   38.2032   38.5285
                           MSSIM        0.9292        0.9457    0.9873    0.9728    0.9892    0.9914
                           Time (s)     -             25158615251124
Poisson noise              MPSNR (dB)   26.5466       26.9952   35.9059   33.8843   37.3994   40.6220
                           MSSIM        0.7476        0.7609    0.9647    0.9599    0.9669    0.9870
                           Time (s)     -             13628797791531
