Article

Tensor Completion Based on Triple Tubal Nuclear Norm

1 School of Physics and Electronic Electrical Engineering, Huaiyin Normal University, Huaian 223300, China
2 School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
3 Jiangsu Province Key Construction Laboratory of Modern Measurement Technology and Intelligent System, Huaian 223300, China
4 School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
5 Jiangsu Yaoshi Software Technology Co., Ltd., Nanjing 211103, China
6 Jiangsu Shuoshi Welding Technology Co., Ltd., Nanjing 211103, China
* Author to whom correspondence should be addressed.
Algorithms 2018, 11(7), 94; https://doi.org/10.3390/a11070094
Submission received: 21 May 2018 / Revised: 16 June 2018 / Accepted: 19 June 2018 / Published: 28 June 2018

Abstract

Many tasks in computer vision suffer from missing values in tensor data, i.e., multi-way data arrays. The recently proposed tensor tubal nuclear norm (TNN) has shown superiority in imputing missing values in 3D visual data such as color images and videos. However, TNN only exploits tube (often carrying temporal/channel information) redundancy in a circulant way while preserving the row and column (often carrying spatial information) relationship. In this paper, a new tensor norm named the triple tubal nuclear norm (TriTNN) is proposed to simultaneously exploit tube, row and column redundancy in a circulant way by using a weighted sum of three TNNs, so that more spatial-temporal information can be mined. Further, a TriTNN-based tensor completion model with an ADMM solver is developed. Experiments on color images, videos and LiDAR datasets show the superiority of the proposed TriTNN against state-of-the-art nuclear norm-based tensor norms.

1. Introduction

In recent decades, the rapid progress in multi-linear algebra [1] has provided a firm theoretical foundation for many applications in computer vision [2], data mining [3], machine learning [4], signal processing [5], and many other areas. Benefiting from its multi-way nature, the tensor is more powerful than the vector and the matrix at exploiting multi-way information in multi-modal data, such as color images [6], videos [7], hyper-spectral images [8], functional magnetic resonance imaging [9], traffic volume data [10], etc. In many computer vision tasks, the data, such as color images or videos, are moderately redundant and can therefore be described by a small number of latent factors [11]. The low-rank tensor provides a suitable model for such data [12]. The two most well-known low-rank tensor models are the low-CP-rank model [13], which tries to represent a tensor with the fewest rank-one tensors [14], and the low-Tucker-rank model [15], which seeks a tensor proxy that is simultaneously low-rank along each mode.
In many computer vision applications, like image or video inpainting, one has to tackle missing values in the observed data tensor caused by many circumstances [2,16], including failure of sensors, errors or loss in communication, occlusions or noise in the environment, etc. Without further priors, the missing entries cannot be recovered, since they could take arbitrary values. The most widely adopted prior is the low-rank prior, which assumes the underlying data tensor has low rank. Low-rank tensor completion [2,17] seeks a low-rank tensor to fit the underlying data tensor and has become a hot research topic due to its wide use [18]. Low-rank tensor recovery is often formulated as a rank minimization problem (RMP) [2]. However, the general rank minimization problem and most tensor problems are NP-hard [19,20]. To obtain polynomial-time algorithms, many different tensor rank surrogates have been proposed [2,7,17,21,22,23,24] to substitute the rank functions in the RMP. Surrogates of the tensor CP rank and Tucker rank have been broadly studied [7,17,23,25,26,27,28,29].
Recently, a novel low-rank tensor model called the low-tubal-rank model was proposed [22,30]. Its core idea is to model 3D data as a tensor with low tubal rank [31], which is defined through a new tensor singular value decomposition (t-SVD) [1,32]. It has been successfully used to model multi-way real-world data, such as color images [6], videos [33], seismic data [34], WiFi fingerprints [35], MRI data [22], traffic volume data [36], etc. As pointed out in [37], compared with other tensor models, the low-tubal-rank tensor model is superior at capturing the “spatial-shifting” correlation that is ubiquitous in real-world data arrays.
This paper focuses on low-tubal-rank models for tensor completion. The recently proposed tensor tubal nuclear norm (TNN) [30] based on the t-SVD has shown superiority in imputing missing values in 3D visual data, like color images and videos. Its power lies in exploiting tube (often carrying temporal/channel information) redundancy in a circulant way while preserving the row and column (often carrying spatial information) relationship. A simple and successful variant of TNN, dubbed the twist tubal nuclear norm (t-TNN) [16], instead exploits row redundancy in a circulant way while keeping the tube relationship. However, both of them exploit only one kind of redundancy in a circulant way. In this paper, a new tensor norm dubbed the triple tubal nuclear norm (TriTNN) is proposed to simultaneously exploit the row, column and tube redundancy while preserving the relative tube, row and column relationships. Based on the proposed TriTNN, a tensor completion model is studied and optimized by the alternating direction method of multipliers (ADMM) [38]. Experimental results on color images, videos and LiDAR datasets demonstrate that the proposed TriTNN performs better than other state-of-the-art nuclear norm-based tensor norms.
The paper is organized as follows. Some notations and preliminaries are presented in Section 2. The TriTNN is proposed following the introductions of the most related works in Section 3. The problem formulation and the proposed ADMM algorithm are shown in Section 4. Experimental results are reported in Section 5. We conclude this work in Section 6.

2. Notations and Preliminaries

In this section, the notations and the basic definitions are introduced.

2.1. Notations

Vectors are denoted by bold lower case letters, e.g., $\mathbf{v} \in \mathbb{R}^n$; matrices are denoted by bold upper case letters, e.g., $\mathbf{M} \in \mathbb{R}^{n_1 \times n_2}$; and tensors are denoted by calligraphic letters, e.g., $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$. Given a third-order tensor, a fiber is the 1D vector obtained by fixing all indices but one, and a slice is the 2D matrix obtained by fixing all indices but two. Given a 3D tensor $\mathcal{T}$, $\mathcal{T}_{ijk}$ denotes the entry with index $(i,j,k)$, and $\mathcal{T}^{(k)} := \mathcal{T}(:,:,k)$ denotes the $k$-th frontal slice. $\tilde{\mathcal{T}}$ denotes the tensor obtained by performing the fast Fourier transform along the tube fibers of $\mathcal{T}$. The notations $\mathrm{dft}_3(\cdot)$ and $\mathrm{idft}_3(\cdot)$ represent the discrete Fourier transform (DFT) and the inverse discrete Fourier transform (IDFT) along the tube fibers of 3D tensors.
Given a matrix $\mathbf{M} \in \mathbb{R}^{n_1 \times n_2}$, its nuclear norm is defined as $\|\mathbf{M}\|_* := \sum_{i=1}^{p} \sigma_i(\mathbf{M})$, where $p = \min\{n_1, n_2\}$ and $\sigma_1(\mathbf{M}) \geq \cdots \geq \sigma_p(\mathbf{M})$ are the singular values of $\mathbf{M}$ in non-ascending order. The inner product between two 3D tensors $\mathcal{T}_1, \mathcal{T}_2 \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is defined as $\langle \mathcal{T}_1, \mathcal{T}_2 \rangle := \sum_{ijk} \mathcal{T}_1(i,j,k)\,\mathcal{T}_2(i,j,k)$. The Frobenius norm of a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is defined as $\|\mathcal{T}\|_F := \sqrt{\sum_{ijk} \mathcal{T}_{ijk}^2}$. The $\ell_\infty$-norm of a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is defined as $\|\mathcal{T}\|_\infty := \max_{ijk} |\mathcal{T}_{ijk}|$.

2.2. Tensor Singular Value Decomposition

Firstly, five block-based operators, i.e., bvec, bvfold, bdiag, bdfold and bcirc [1], are introduced. Given a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the block vectorizing operator and its inverse are defined as follows:
$$\mathrm{bvec}(\mathcal{T}) := \begin{bmatrix} \mathcal{T}^{(1)} \\ \mathcal{T}^{(2)} \\ \vdots \\ \mathcal{T}^{(n_3)} \end{bmatrix}, \qquad \mathrm{bvfold}(\mathrm{bvec}(\mathcal{T})) = \mathcal{T},$$
the block diag matrix and its opposite operation:
$$\mathrm{bdiag}(\mathcal{T}) := \begin{bmatrix} \mathcal{T}^{(1)} & & \\ & \ddots & \\ & & \mathcal{T}^{(n_3)} \end{bmatrix}, \qquad \mathrm{bdfold}(\mathrm{bdiag}(\mathcal{T})) = \mathcal{T},$$
and the block circulant matrix as follows:
$$\mathrm{bcirc}(\mathcal{T}) := \begin{bmatrix} \mathcal{T}^{(1)} & \mathcal{T}^{(n_3)} & \cdots & \mathcal{T}^{(2)} \\ \mathcal{T}^{(2)} & \mathcal{T}^{(1)} & \cdots & \mathcal{T}^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{T}^{(n_3)} & \mathcal{T}^{(n_3-1)} & \cdots & \mathcal{T}^{(1)} \end{bmatrix}.$$
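To make the block circulant structure concrete, the following MATLAB sketch (written for this exposition and not taken from the authors' released code; the function name bcirc_op is ours) assembles bcirc(T) from the frontal slices of a 3D array:

% Build the (n1*n3) x (n2*n3) block circulant matrix of a 3D array T.
function B = bcirc_op(T)
    [n1, n2, n3] = size(T);
    B = zeros(n1 * n3, n2 * n3);
    for col = 1:n3
        for row = 1:n3
            k = mod(row - col, n3) + 1;   % index of the frontal slice placed at block (row, col)
            B((row-1)*n1+1:row*n1, (col-1)*n2+1:col*n2) = T(:, :, k);
        end
    end
end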
Based on the five operators defined above, we are able to give the definition of the tensor t-product.
Definition 1
 (t-product [1]). Let $\mathcal{T}_1 \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{T}_2 \in \mathbb{R}^{n_2 \times n_4 \times n_3}$. The t-product of $\mathcal{T}_1$ and $\mathcal{T}_2$ is a tensor $\mathcal{T}$ of size $n_1 \times n_4 \times n_3$:
$$\mathcal{T} = \mathcal{T}_1 * \mathcal{T}_2 := \mathrm{bvfold}\{\mathrm{bcirc}(\mathcal{T}_1)\,\mathrm{bvec}(\mathcal{T}_2)\}.$$
Viewing a 3D tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ as an $n_1 \times n_2$ matrix of tubes, the tensor t-product is analogous to matrix multiplication, with scalar multiplication replaced by the circular convolution between tubes, as follows:
$$\mathcal{T}(i,j,:) = \sum_{k=1}^{n_2} \mathcal{T}_1(i,k,:) \bullet \mathcal{T}_2(k,j,:),$$
where $\bullet$ denotes the circular convolution [1] between two tube vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^{n_3}$, defined as:
$$(\mathbf{x} \bullet \mathbf{y})_j = \sum_{k=1}^{n_3} x_k\, y_{1 + (j-k) \bmod n_3}.$$
Due to the relationship between the circular convolution and the DFT, the t-product in the original domain is equivalent to matrix multiplication of the frontal slices in the Fourier domain [1], i.e.,
$$\tilde{\mathcal{T}}^{(k)} = \tilde{\mathcal{T}}_1^{(k)}\, \tilde{\mathcal{T}}_2^{(k)}, \quad k = 1, \ldots, n_3.$$
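As a concrete illustration of Equation (3), the following MATLAB sketch (an assumed implementation for this exposition, not the authors' code) computes the t-product by slice-wise multiplication in the Fourier domain:

% t-product of T1 (n1 x n2 x n3) and T2 (n2 x n4 x n3) via the DFT along the tubes.
function T = tprod(T1, T2)
    [n1, ~, n3] = size(T1);
    n4  = size(T2, 2);
    T1f = fft(T1, [], 3);
    T2f = fft(T2, [], 3);
    Tf  = zeros(n1, n4, n3);
    for k = 1:n3
        Tf(:, :, k) = T1f(:, :, k) * T2f(:, :, k);   % matrix product of matching frontal slices
    end
    T = real(ifft(Tf, [], 3));                       % back to the original domain
end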
The tensor transpose, identity tensor, f-diagonal tensor and orthogonal tensor are further defined.
Definition 2
 (Tensor transpose [1]). Let $\mathcal{T}$ be a tensor of size $n_1 \times n_2 \times n_3$; then $\mathcal{T}^\top$ is the $n_2 \times n_1 \times n_3$ tensor obtained by transposing each of the frontal slices and then reversing the order of the transposed frontal slices 2 through $n_3$.
Definition 3
 (Identity tensor [1]). The identity tensor $\mathcal{I} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ is the tensor whose first frontal slice is the $n_1 \times n_1$ identity matrix and whose other frontal slices are all zero.
Definition 4
 (F-diagonal tensor [1]). A tensor is called f-diagonal if each frontal slice of the tensor is a diagonal matrix.
Definition 5
 (Orthogonal tensor [1]). A tensor $\mathcal{Q} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ is orthogonal if it satisfies the following relationship:
$$\mathcal{Q}^\top * \mathcal{Q} = \mathcal{Q} * \mathcal{Q}^\top = \mathcal{I}.$$
Based on the concepts defined above, the tensor singular value decomposition (t-SVD) and the tensor tubal rank are established as follows.
Definition 6
 (Tensor singular value decomposition and tensor tubal rank [31]). Any tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ can be decomposed as:
$$\mathcal{T} = \mathcal{U} * \mathcal{S} * \mathcal{V}^\top,$$
where $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal tensors and $\mathcal{S}$ is a rectangular f-diagonal tensor of size $n_1 \times n_2 \times n_3$.
The tensor tubal rank of $\mathcal{T}$ is defined as the number of non-zero tubes of $\mathcal{S}$ in Equation (4), i.e.,
$$r_{\mathrm{tubal}}(\mathcal{T}) := \sum_i \mathbf{1}\big(\mathcal{S}(i,i,:) \neq \mathbf{0}\big),$$
where 1 ( · ) is an indicator function whose value is one if the input condition is satisfied, and zero otherwise.
The t-SVD is illustrated in Figure 1. It can be computed efficiently by FFT and IFFT in the Fourier domain according to Equation (3). For more details, see [1].
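For concreteness, a minimal MATLAB sketch of this Fourier-domain computation of the t-SVD is given below (an illustrative implementation, not the authors' code):

% t-SVD of a 3D array T: SVD each frontal slice in the Fourier domain, then invert the FFT.
function [U, S, V] = tsvd(T)
    [n1, n2, n3] = size(T);
    Tf = fft(T, [], 3);
    Uf = zeros(n1, n1, n3); Sf = zeros(n1, n2, n3); Vf = zeros(n2, n2, n3);
    for k = 1:n3
        [Uf(:,:,k), Sf(:,:,k), Vf(:,:,k)] = svd(Tf(:, :, k));
    end
    U = real(ifft(Uf, [], 3));   % factors of the decomposition T = U * S * (tensor transpose of V)
    S = real(ifft(Sf, [], 3));
    V = real(ifft(Vf, [], 3));
end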

3. The Triple Tubal Nuclear Norm

In this section, we define the triple tubal nuclear norm. Before that, the most closely related norms, i.e., the tubal nuclear norm and the twist tubal nuclear norm, are introduced.

3.1. Tubal Nuclear Norm

Based on the preliminaries introduced in Section 2, the tubal nuclear norm is defined as follows:
Definition 7
 (Tubal nuclear norm [31]). The tubal nuclear norm (TNN) $\|\mathcal{T}\|_{\mathrm{TNN}}$ of a 3D tensor $\mathcal{T}$ is defined as the nuclear norm of the block diagonal matrix of $\tilde{\mathcal{T}}$ (the Fourier-domain version of $\mathcal{T}$), i.e.,
$$\|\mathcal{T}\|_{\mathrm{TNN}} := \|\mathrm{bdiag}(\tilde{\mathcal{T}})\|_* = \sum_{k=1}^{n_3} \|\tilde{\mathcal{T}}^{(k)}\|_*.$$
From Definition 7, the TNN of a tensor can be computed efficiently by first conducting an FFT along the tube direction and then summing the nuclear norms of the frontal slices. Given a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the computational cost of $\|\mathcal{T}\|_{\mathrm{TNN}}$ is $O\big(n_1 n_2 n_3 (n_3 + \log n_3)\big)$. Since a block circulant matrix can be block diagonalized through the Fourier transform [1], we obtain:
$$\|\mathcal{T}\|_{\mathrm{TNN}} = \|\mathrm{bdiag}(\tilde{\mathcal{T}})\|_* = \big\|(\mathbf{F}_{n_3} \otimes \mathbf{I}_{n_1})\,\mathrm{bcirc}(\mathcal{T})\,(\mathbf{F}_{n_3}^{-1} \otimes \mathbf{I}_{n_2})\big\|_* = \|\mathrm{bcirc}(\mathcal{T})\|_*,$$
where $\otimes$ denotes the Kronecker product [14], $\mathbf{F}_n$ is the $n \times n$ discrete Fourier transform matrix and $\mathbf{I}_n$ is the $n \times n$ identity matrix. Note that $(\mathbf{F}_{n_3} \otimes \mathbf{I}_{n_1})/\sqrt{n_3}$ is a unitary matrix.
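The following MATLAB sketch (illustrative, not the authors' code) evaluates the TNN of Definition 7 directly:

% TNN of a 3D array T: FFT along the tubes, then sum the nuclear norms of the frontal slices.
function val = tnn(T)
    Tf  = fft(T, [], 3);
    val = 0;
    for k = 1:size(T, 3)
        val = val + sum(svd(Tf(:, :, k)));   % nuclear norm of the k-th Fourier-domain slice
    end
end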
The tubal nuclear norm has been used as a convex relaxation of the tensor tubal rank for tensor completion, tensor robust principal component analysis (TRPCA) and outlier-robust tensor principal component analysis (OR-TPCA) [6,30,36,39]. In optimization over the TNN, one often needs to compute the proximal operator [40] of the TNN, defined as:
$$\mathcal{S}_\tau(\mathcal{T}_0) := \operatorname*{argmin}_{\mathcal{T}} \; \frac{1}{2}\|\mathcal{T}_0 - \mathcal{T}\|_F^2 + \tau\|\mathcal{T}\|_{\mathrm{TNN}}.$$
In [3], a closed-form expression of S τ ( · ) is given as follows:
Definition 8
 ([3]). For a 3D tensor $\mathcal{T}_0 \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ with reduced t-SVD $\mathcal{T}_0 = \mathcal{U} * \mathcal{S} * \mathcal{V}^\top$, where $\mathcal{U} \in \mathbb{R}^{n_1 \times r \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times r \times n_3}$ are orthogonal tensors and $\mathcal{S} \in \mathbb{R}^{r \times r \times n_3}$ is the f-diagonal tensor of singular tubes, the proximal operator $\mathcal{S}_\tau(\cdot)$ at $\mathcal{T}_0$ can be computed through the following equation:
$$\mathcal{S}_\tau(\mathcal{T}_0) := \mathcal{U} * \mathrm{idft}_3\big(\max(\mathrm{dft}_3(\mathcal{S}) - n_3\tau,\, 0)\big) * \mathcal{V}^\top.$$
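In practice, the operator in Definition 8 is usually realized slice-wise in the Fourier domain. The MATLAB sketch below (an assumed implementation consistent with the $n_3\tau$ threshold in Equation (8), not the authors' code) shrinks the singular values of each Fourier-domain frontal slice:

% Proximal operator of tau * TNN at T0 (singular tube thresholding).
function X = prox_tnn(T0, tau)
    n3 = size(T0, 3);
    Tf = fft(T0, [], 3);
    for k = 1:n3
        [U, S, V] = svd(Tf(:, :, k), 'econ');
        S = max(S - n3 * tau, 0);            % shrink singular values by n3*tau
        Tf(:, :, k) = U * S * V';
    end
    X = real(ifft(Tf, [], 3));
end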

3.2. Twist Tubal Nuclear Norm

The twist tubal nuclear norm [16] is related to a pair of tensor operations named column twist and column squeeze as follows.
Definition 9
 (Tensor column twist and column squeeze [16]; here, the twist and squeeze operations in [1,16] are renamed column twist and column squeeze, respectively, since we will also define a row twist and a row squeeze). Let $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$; then the column twist tensor $\mathcal{T}_1 = \mathrm{ColTwist}(\mathcal{T})$ is a tensor of size $n_1 \times n_3 \times n_2$ whose lateral slices satisfy $\mathcal{T}_1(:,k,:) = \mathcal{T}^{(k)}$. Correspondingly, the column squeeze tensor of $\mathcal{T}_1$, i.e., $\mathcal{T} = \mathrm{ColSqueeze}(\mathcal{T}_1)$, can be obtained by the reverse process, i.e., $\mathcal{T}^{(k)} = \mathrm{ColSqueeze}(\mathcal{T}_1(:,k,:))$. See Figure 2.
Then, we give the definition of the twist tubal nuclear norm as follows:
Definition 10
 (Twist tubal nuclear norm [16]). The twist tubal nuclear norm (t-TNN) based on the t-SVD framework is defined as follows:
$$\|\mathcal{T}\|_{\mathrm{t\text{-}TNN}} := \|\mathrm{ColTwist}(\mathcal{T})\|_{\mathrm{TNN}} = \|\mathrm{bcirc}(\mathrm{ColTwist}(\mathcal{T}))\|_*.$$
From the above definition, the t-TNN can be computed efficiently by first column-twisting the original tensor and then computing the TNN. Given a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the computational cost of $\|\mathcal{T}\|_{\mathrm{t\text{-}TNN}}$ is $O\big(n_1 n_2 n_3 (n_2 + \log n_2)\big)$. The t-TNN has attained significant improvements over the TNN in video inpainting [16]. Since the t-TNN is essentially a TNN, its proximal operator can be derived from the proximal operator of the TNN as follows:
$$\mathrm{ColSqueeze}\big(\mathcal{S}_\tau(\mathrm{ColTwist}(\mathcal{T}_0))\big) = \operatorname*{argmin}_{\mathcal{T}} \; \frac{1}{2}\|\mathcal{T}_0 - \mathcal{T}\|_F^2 + \tau\|\mathcal{T}\|_{\mathrm{t\text{-}TNN}}.$$

3.3. A Circular Interpretation of TNN and t-TNN

In this subsection, a circulant interpretation of the TNN and the t-TNN [16], which motivates the proposal of TriTNN, is given. For a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the circulant block matricization of $\mathcal{T}$ [41] is defined in the following equation:
$$\mathrm{circ}(\mathcal{T}) := \begin{bmatrix} \mathrm{circ}(\mathcal{T}(1,1,:)) & \mathrm{circ}(\mathcal{T}(1,2,:)) & \cdots & \mathrm{circ}(\mathcal{T}(1,n_2,:)) \\ \mathrm{circ}(\mathcal{T}(2,1,:)) & \mathrm{circ}(\mathcal{T}(2,2,:)) & \cdots & \mathrm{circ}(\mathcal{T}(2,n_2,:)) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{circ}(\mathcal{T}(n_1,1,:)) & \mathrm{circ}(\mathcal{T}(n_1,2,:)) & \cdots & \mathrm{circ}(\mathcal{T}(n_1,n_2,:)) \end{bmatrix} \in \mathbb{R}^{n_1 n_3 \times n_2 n_3},$$
where $\mathrm{circ}(\mathbf{x})$ denotes the $n_3 \times n_3$ circulant matrix generated by a tube $\mathbf{x} \in \mathbb{R}^{n_3}$ in the following way:
$$\mathrm{circ}(\mathbf{x}) := \begin{bmatrix} x_1 & x_{n_3} & \cdots & x_2 \\ x_2 & x_1 & \cdots & x_3 \\ \vdots & \vdots & \ddots & \vdots \\ x_{n_3} & x_{n_3-1} & \cdots & x_1 \end{bmatrix} \in \mathbb{R}^{n_3 \times n_3}.$$
There exist two so-called stride permutation matrices [42] $\mathbf{P}_1$ and $\mathbf{P}_2$, such that the following relationship between $\mathrm{circ}(\mathcal{T})$ and $\mathrm{bcirc}(\mathcal{T})$ holds:
$$\mathrm{circ}(\mathcal{T}) = \mathbf{P}_1\, \mathrm{bcirc}(\mathcal{T})\, \mathbf{P}_2.$$
Since the matrix nuclear norm is permutation invariant, it holds that [16]:
$$\|\mathrm{circ}(\mathcal{T})\|_* = \|\mathrm{bcirc}(\mathcal{T})\|_*.$$
As an example, Figure 3a,b intuitively shows the relationships between the original tensor, the column twist tensor, the block circulant matrix and the circulant block matricization of a tensor $\mathcal{T} \in \mathbb{R}^{3 \times 3 \times 3}$. As illustrated in Subplots (a) and (b) of Figure 3, from the circulant perspective, the TNN essentially exploits the tube redundancy in a circulant way while keeping the row and column relationship, and the t-TNN essentially exploits the row redundancy in a circulant way while preserving the tube and column relationship [16]. In computer vision applications, the rows and columns of a data tensor (like a color image or video) often carry spatial information, and the tubes often carry temporal or channel information. From the computational perspective, the FFT is applied along the tube direction to compute the TNN, while the t-TNN needs an FFT along the row direction.

3.4. The Proposed Row Twist Tubal Nuclear Norm and Triple Tubal Nuclear Norm

As discussed above, the TNN and the t-TNN need FFTs along the tube and the row direction, respectively. Note that, for real-world visual data like color images, the rows and columns carry homogeneous information and are better treated equally. A pair of operations analogous to the column twist and column squeeze, called the row twist and row squeeze, respectively, is defined first.
Definition 11
 (Tensor row twist and row squeeze). Let $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$; then the row twist tensor $\mathcal{T}_1 = \mathrm{RowTwist}(\mathcal{T})$ is a tensor of size $n_3 \times n_2 \times n_1$ whose horizontal slices satisfy $\mathcal{T}_1(k,:,:) = \mathcal{T}^{(k)}$. Correspondingly, the row squeeze tensor of $\mathcal{T}_1$, i.e., $\mathcal{T} = \mathrm{RowSqueeze}(\mathcal{T}_1)$, can be obtained by the reverse process, i.e., $\mathcal{T}^{(k)} = \mathrm{RowSqueeze}(\mathcal{T}_1(k,:,:))$. See Figure 4.
Definition 12
(Row twist tubal nuclear norm (rt-TNN)). The row twist tubal nuclear norm (rt-TNN) of a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is defined as the tubal nuclear norm of its row twist tensor, i.e.,
$$\|\mathcal{T}\|_{\mathrm{rt\text{-}TNN}} := \|\mathrm{RowTwist}(\mathcal{T})\|_{\mathrm{TNN}} = \|\mathrm{bcirc}(\mathrm{RowTwist}(\mathcal{T}))\|_*.$$
As illustrated in Subplot (c) of Figure 3, from the circulant perspective, the rt-TNN essentially exploits the column redundancy in a circulant way while keeping the row and tube relationship. From the computational perspective, the FFT is applied along the column direction to compute the rt-TNN. The proximal operator of the rt-TNN can be derived from the proximal operator of the TNN as follows:
$$\mathrm{RowSqueeze}\big(\mathcal{S}_\tau(\mathrm{RowTwist}(\mathcal{T}_0))\big) = \operatorname*{argmin}_{\mathcal{T}} \; \frac{1}{2}\|\mathcal{T}_0 - \mathcal{T}\|_F^2 + \tau\|\mathcal{T}\|_{\mathrm{rt\text{-}TNN}}.$$
It should be noted that each of TNN, t-TNN and rt-TNN only exploits one type of redundancy, i.e., the tube, row and column redundancies, in a circulant way. Real-world data may have more than one type of redundancy, and it is beneficial to exploit such a property. To simultaneously exploit the tube, row and column redundancy in a circulant way while keeping other relationships, we simply combine the TNN, t-TNN and rt-TNN to get the triple tubal nuclear norm.
Definition 13
 (Triple tubal nuclear norm). The triple tubal nuclear norm (TriTNN) of a tensor $\mathcal{T}$ is defined as a weighted sum of its tubal nuclear norm, its column twist tubal nuclear norm and its row twist tubal nuclear norm, i.e.,
$$\|\mathcal{T}\|_{\mathrm{TriTNN}} := \lambda_1\|\mathcal{T}\|_{\mathrm{TNN}} + \lambda_2\|\mathcal{T}\|_{\mathrm{t\text{-}TNN}} + \lambda_3\|\mathcal{T}\|_{\mathrm{rt\text{-}TNN}},$$
where λ 1 , λ 2 and λ 3 are positive weights satisfying:
$$\lambda_1 + \lambda_2 + \lambda_3 = 1.$$
From the above definition, the computation of $\|\mathcal{T}\|_{\mathrm{TriTNN}}$ can be divided into the computations of the TNN, t-TNN and rt-TNN, leading to the following computational complexity:
$$O\big(n_1 n_2 n_3 (n_1 + n_2 + n_3 + \log n_1 + \log n_2 + \log n_3)\big).$$
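Reusing the tnn() sketch from Section 3.1, the TriTNN value can be evaluated as below. The permute() calls are one way (an assumption on the axis ordering) to realize the column twist and row twist; since transposing a frontal slice does not change its nuclear norm, the resulting value is unaffected by this choice.

% TriTNN of a 3D array T with positive weights lambda = [lambda1 lambda2 lambda3].
function val = tritnn(T, lambda)
    val = lambda(1) * tnn(T) ...                     % TNN of the original tensor
        + lambda(2) * tnn(permute(T, [1 3 2])) ...   % t-TNN:  TNN of the column twist tensor
        + lambda(3) * tnn(permute(T, [3 2 1]));      % rt-TNN: TNN of the row twist tensor
end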
Due to the coupling of three tubal nuclear norms, it is very difficult to derive a closed-form expression of the proximal operator of TriTNN.

4. TriTNN-Based Tensor Completion

4.1. Problem Formulation

Let $\mathcal{L}^* \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ be the true but unknown tensor to be completed. Suppose only a small fraction of its entries are observed and the observations are corrupted by small dense noise. Let $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ denote the observed noisy tensor of $\mathcal{L}^*$. Then, we have the following observation model:
$$\mathcal{T} = (\mathcal{L}^* + \mathcal{E}) \odot \mathcal{O},$$
where $\mathcal{E} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is a noise tensor with element-wise i.i.d. Gaussian entries, $\odot$ denotes element-wise multiplication and $\mathcal{O} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ denotes the binary tensor whose entry $\mathcal{O}_{ijk} = 1$ if the $(i,j,k)$-th entry is observed and $\mathcal{O}_{ijk} = 0$ otherwise. The goal is to estimate $\mathcal{L}^*$ given the noisy observation $\mathcal{T}$ from observation Model (19).
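For illustration, a small MATLAB sketch of generating synthetic data from Model (19) (using the tprod() sketch from Section 2.2; the sizes, tubal rank and noise level below are arbitrary choices) is:

n1 = 50; n2 = 50; n3 = 30; r = 10;                   % dimensions and tubal rank (arbitrary)
Lstar = tprod(randn(n1, r, n3), randn(r, n2, n3));   % a low-tubal-rank ground truth
O     = rand(n1, n2, n3) < 0.3;                      % observe 30% of the entries at random
E     = 0.05 * randn(n1, n2, n3);                    % small dense Gaussian noise
T     = (Lstar + E) .* O;                            % observed noisy incomplete tensor, Model (19)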
We estimate $\mathcal{L}^*$ by simultaneously exploiting the tube, row and column redundancy in a circulant way through minimizing the proposed triple tubal nuclear norm. Specifically, we consider the following problem:
$$\min_{\mathcal{L}} \; \|\mathcal{L}\|_{\mathrm{TriTNN}} \quad \mathrm{s.t.} \quad \|\mathcal{O}\odot(\mathcal{T}-\mathcal{L})\|_F \leq \epsilon,$$
where the parameter $\epsilon > 0$ denotes the noise level. The motivation is to recover $\mathcal{L}^*$ by choosing the tensor with the smallest TriTNN among those whose observed entries lie within distance $\epsilon$ of the observations. It is well known that Problem (20), a convex minimization with a bounded norm constraint, is equivalent to the following unconstrained problem for a suitably chosen regularization parameter [23]:
$$\min_{\mathcal{L}} \; \frac{1}{2}\|\mathcal{O}\odot(\mathcal{T}-\mathcal{L})\|_F^2 + \tau\|\mathcal{L}\|_{\mathrm{TriTNN}},$$
where τ > 0 is the regularization parameter.

4.2. An ADMM Solver to Problem (21)

The alternating direction method of multipliers (ADMM) [38] has been extensively used to solve composite convex problems like Problem (21). We solve Problem (21) using ADMM in this subsection.
Considering the definition of TriTNN, we introduce auxiliary variables $\mathcal{U}, \mathcal{V}, \mathcal{W} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and obtain the following constrained problem:
$$\min_{\mathcal{L},\mathcal{U},\mathcal{V},\mathcal{W}} \; \frac{1}{2}\|\mathcal{O}\odot(\mathcal{T}-\mathcal{L})\|_F^2 + \tau\lambda_1\|\mathcal{U}\|_{\mathrm{TNN}} + \tau\lambda_2\|\mathcal{V}\|_{\mathrm{t\text{-}TNN}} + \tau\lambda_3\|\mathcal{W}\|_{\mathrm{rt\text{-}TNN}} \quad \mathrm{s.t.} \quad \mathcal{U} = \mathcal{L},\; \mathcal{V} = \mathcal{L},\; \mathcal{W} = \mathcal{L}.$$
First, the augmented Lagrangian of Problem (22) is:
$$\begin{aligned} L_\rho(\mathcal{L},\mathcal{U},\mathcal{V},\mathcal{W},\mathcal{Y}_1,\mathcal{Y}_2,\mathcal{Y}_3) = {} & \frac{1}{2}\|\mathcal{O}\odot(\mathcal{T}-\mathcal{L})\|_F^2 + \tau\lambda_1\|\mathcal{U}\|_{\mathrm{TNN}} + \tau\lambda_2\|\mathcal{V}\|_{\mathrm{t\text{-}TNN}} + \tau\lambda_3\|\mathcal{W}\|_{\mathrm{rt\text{-}TNN}} \\ & + \langle\mathcal{Y}_1, \mathcal{U}-\mathcal{L}\rangle + \frac{\rho}{2}\|\mathcal{U}-\mathcal{L}\|_F^2 + \langle\mathcal{Y}_2, \mathcal{V}-\mathcal{L}\rangle + \frac{\rho}{2}\|\mathcal{V}-\mathcal{L}\|_F^2 + \langle\mathcal{Y}_3, \mathcal{W}-\mathcal{L}\rangle + \frac{\rho}{2}\|\mathcal{W}-\mathcal{L}\|_F^2, \end{aligned}$$
where Y 1 , Y 2 and Y 3 are Lagrangian multipliers and ρ > 0 is the penalty parameter.
Using the framework of ADMM, we update each variable in turn while fixing the others at the $(k+1)$-th iteration, as follows.
Update $\mathcal{L}$. We update $\mathcal{L}$ with the other variables fixed:
$$\begin{aligned} \mathcal{L}^{k+1} & = \operatorname*{argmin}_{\mathcal{L}} \; L_\rho(\mathcal{L},\mathcal{U}^k,\mathcal{V}^k,\mathcal{W}^k,\mathcal{Y}_1^k,\mathcal{Y}_2^k,\mathcal{Y}_3^k) \\ & = \operatorname*{argmin}_{\mathcal{L}} \; \frac{1}{2}\|\mathcal{O}\odot(\mathcal{T}-\mathcal{L})\|_F^2 + \langle\mathcal{Y}_1^k, \mathcal{U}^k-\mathcal{L}\rangle + \frac{\rho}{2}\|\mathcal{U}^k-\mathcal{L}\|_F^2 + \langle\mathcal{Y}_2^k, \mathcal{V}^k-\mathcal{L}\rangle + \frac{\rho}{2}\|\mathcal{V}^k-\mathcal{L}\|_F^2 + \langle\mathcal{Y}_3^k, \mathcal{W}^k-\mathcal{L}\rangle + \frac{\rho}{2}\|\mathcal{W}^k-\mathcal{L}\|_F^2 \\ & = \big(\rho\,\mathcal{U}^k + \rho\,\mathcal{V}^k + \rho\,\mathcal{W}^k + \mathcal{Y}_1^k + \mathcal{Y}_2^k + \mathcal{Y}_3^k + \mathcal{O}\odot\mathcal{T}\big) \oslash \big(\mathcal{O} + 3\rho\,\mathcal{I}\big), \end{aligned}$$
where $\oslash$ denotes element-wise division and $\mathcal{I}$ here denotes the tensor of all ones.
Update $\mathcal{U}$, $\mathcal{V}$ and $\mathcal{W}$. Tensor $\mathcal{U}$ is updated as follows:
$$\begin{aligned} \mathcal{U}^{k+1} & = \operatorname*{argmin}_{\mathcal{U}} \; L_\rho(\mathcal{L}^{k+1},\mathcal{U},\mathcal{V}^k,\mathcal{W}^k,\mathcal{Y}_1^k,\mathcal{Y}_2^k,\mathcal{Y}_3^k) \\ & = \operatorname*{argmin}_{\mathcal{U}} \; \tau\lambda_1\|\mathcal{U}\|_{\mathrm{TNN}} + \langle\mathcal{Y}_1^k, \mathcal{U}-\mathcal{L}^{k+1}\rangle + \frac{\rho}{2}\|\mathcal{U}-\mathcal{L}^{k+1}\|_F^2 \\ & = \mathcal{S}_{\tau\lambda_1/\rho}\Big(\mathcal{L}^{k+1} - \frac{\mathcal{Y}_1^k}{\rho}\Big), \end{aligned}$$
where $\mathcal{S}_{(\cdot)}(\cdot)$ is the proximal operator of the TNN evaluated at the point $\mathcal{L}^{k+1} - \mathcal{Y}_1^k/\rho$ with parameter $\tau\lambda_1/\rho$ (see Equation (8)).
Tensors $\mathcal{V}$ and $\mathcal{W}$ are updated similarly to $\mathcal{U}$:
$$\mathcal{V}^{k+1} = \operatorname*{argmin}_{\mathcal{V}} \; L_\rho(\mathcal{L}^{k+1},\mathcal{U}^{k+1},\mathcal{V},\mathcal{W}^k,\mathcal{Y}_1^k,\mathcal{Y}_2^k,\mathcal{Y}_3^k) = \mathrm{ColSqueeze}\Big(\mathcal{S}_{\tau\lambda_2/\rho}\Big(\mathrm{ColTwist}\Big(\mathcal{L}^{k+1} - \frac{\mathcal{Y}_2^k}{\rho}\Big)\Big)\Big),$$
and:
$$\mathcal{W}^{k+1} = \operatorname*{argmin}_{\mathcal{W}} \; L_\rho(\mathcal{L}^{k+1},\mathcal{U}^{k+1},\mathcal{V}^{k+1},\mathcal{W},\mathcal{Y}_1^k,\mathcal{Y}_2^k,\mathcal{Y}_3^k) = \mathrm{RowSqueeze}\Big(\mathcal{S}_{\tau\lambda_3/\rho}\Big(\mathrm{RowTwist}\Big(\mathcal{L}^{k+1} - \frac{\mathcal{Y}_3^k}{\rho}\Big)\Big)\Big).$$
Update $\mathcal{Y}_1$, $\mathcal{Y}_2$ and $\mathcal{Y}_3$. Using dual ascent, we update $\mathcal{Y}_1$, $\mathcal{Y}_2$ and $\mathcal{Y}_3$ as follows:
$$\mathcal{Y}_1^{k+1} = \mathcal{Y}_1^k + \rho(\mathcal{U}^{k+1} - \mathcal{L}^{k+1}), \quad \mathcal{Y}_2^{k+1} = \mathcal{Y}_2^k + \rho(\mathcal{V}^{k+1} - \mathcal{L}^{k+1}), \quad \mathcal{Y}_3^{k+1} = \mathcal{Y}_3^k + \rho(\mathcal{W}^{k+1} - \mathcal{L}^{k+1}).$$
We summarize the algorithm in Algorithm 1 and analyze the computational complexity as follows.
Complexity analysis: The main computational cost in each iteration lies in the singular tube thresholding operator, which requires FFTs, IFFTs and SVDs. Therefore, the time complexity of each iteration is:
$$O\big(n_1 n_2 n_3 (n_1 + n_2 + n_3 + \log(n_1 n_2 n_3))\big).$$
Algorithm 1 Solving Problem (21) using ADMM.
Input: The observed tensor $\mathcal{T}$ and the parameters $\tau$, $\lambda_1$, $\lambda_2$, $\lambda_3$, $\rho$.
1: while not converged do
2:   Update $\mathcal{L}$ using Equation (24);
3:   Update $\mathcal{U}$, $\mathcal{V}$ and $\mathcal{W}$ using Equations (25), (26) and (27), respectively;
4:   Update $\mathcal{Y}_1$, $\mathcal{Y}_2$ and $\mathcal{Y}_3$ using Equation (28);
5:   Check the convergence conditions:
     $\|\mathcal{L}^{k+1}-\mathcal{L}^k\|_\infty \leq \delta$, $\|\mathcal{U}^{k+1}-\mathcal{U}^k\|_\infty \leq \delta$, $\|\mathcal{V}^{k+1}-\mathcal{V}^k\|_\infty \leq \delta$, $\|\mathcal{W}^{k+1}-\mathcal{W}^k\|_\infty \leq \delta$;
6: end while
Output: The estimated tensor $\hat{\mathcal{L}}$.
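For reference, a compact MATLAB sketch of Algorithm 1 is given below. It reuses the prox_tnn() sketch from Section 3.1 and the permute-based twists from Section 3.4, and uses a simplified stopping rule on $\mathcal{L}$ only; it is an illustrative implementation under these assumptions, not the authors' released code.

function L = tritnn_admm(T, O, tau, lambda, rho, delta, maxIter)
    L = T; U = T; V = T; W = T;                        % initialization
    Y1 = zeros(size(T)); Y2 = Y1; Y3 = Y1;             % Lagrangian multipliers
    for iter = 1:maxIter
        Lold = L;
        % L-update, Equation (24)
        L = (rho*(U + V + W) + Y1 + Y2 + Y3 + O .* T) ./ (O + 3*rho);
        % U, V, W updates, Equations (25)-(27): proximal steps of TNN, t-TNN and rt-TNN
        U = prox_tnn(L - Y1/rho, tau*lambda(1)/rho);
        V = ipermute(prox_tnn(permute(L - Y2/rho, [1 3 2]), tau*lambda(2)/rho), [1 3 2]);
        W = ipermute(prox_tnn(permute(L - Y3/rho, [3 2 1]), tau*lambda(3)/rho), [3 2 1]);
        % Dual updates, Equation (28)
        Y1 = Y1 + rho * (U - L);
        Y2 = Y2 + rho * (V - L);
        Y3 = Y3 + rho * (W - L);
        if max(abs(L(:) - Lold(:))) < delta, break; end
    end
end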

4.3. Convergence of Algorithm 1

As Problem (22) involves more than two blocks of variables, the convergence of Algorithm 1 cannot be obtained directly from existing results on the convergence of ADMM [38]. Thus, we prove its convergence in the following theorem.
Theorem 1
 (Convergence of Algorithm 1). For any ρ > 0 , if the unaugmented Lagrangian L 0 ( L , U , V , W , Y 1 , Y 2 , Y 3 ) has a saddle point, then the iterations ( L k , U k , V k , W k , Y 1 k , Y 2 k , Y 3 k ) in Algorithm 1 satisfy the residual convergence, objective convergence and dual variable convergence of Problem (21).
Proof. 
The key idea of the proof is to rewrite Problem (22) as a two-block ADMM problem. Noting that $\|\mathcal{V}\|_{\mathrm{t\text{-}TNN}} = \|\mathrm{ColTwist}(\mathcal{V})\|_{\mathrm{TNN}}$ and $\|\mathcal{W}\|_{\mathrm{rt\text{-}TNN}} = \|\mathrm{RowTwist}(\mathcal{W})\|_{\mathrm{TNN}}$, the auxiliary variables can equivalently be taken in twisted form, so that the constraints become $\mathcal{V} = \mathrm{ColTwist}(\mathcal{L})$ and $\mathcal{W} = \mathrm{RowTwist}(\mathcal{L})$. Since the RowTwist and ColTwist operations are linear, there exist two matrices $\mathbf{P}_1, \mathbf{P}_2 \in \mathbb{R}^{n_1 n_2 n_3 \times n_1 n_2 n_3}$ such that these constraints are equivalent to the vectorized expressions:
$$\mathrm{vec}(\mathcal{V}) = \mathbf{P}_1\,\mathrm{vec}(\mathcal{L}), \quad \text{and} \quad \mathrm{vec}(\mathcal{W}) = \mathbf{P}_2\,\mathrm{vec}(\mathcal{L}),$$
where vec ( · ) denotes the operation of tensor vectorization (see [14]).
For notational simplicity, let:
$$\mathbf{x} = \mathrm{vec}(\mathcal{L}), \quad \mathbf{y} = \begin{bmatrix} \mathrm{vec}(\mathcal{Y}_1) \\ \mathrm{vec}(\mathcal{Y}_2) \\ \mathrm{vec}(\mathcal{Y}_3) \end{bmatrix}, \quad \mathbf{z} = \begin{bmatrix} \mathrm{vec}(\mathcal{U}) \\ \mathrm{vec}(\mathcal{V}) \\ \mathrm{vec}(\mathcal{W}) \end{bmatrix}, \quad \mathbf{A} = \begin{bmatrix} \mathbf{I} \\ \mathbf{P}_1 \\ \mathbf{P}_2 \end{bmatrix},$$
and:
$$f(\mathbf{x}) = \frac{1}{2}\|\mathcal{O}\odot(\mathcal{L}-\mathcal{T})\|_F^2, \qquad g(\mathbf{z}) = \tau\big(\lambda_1\|\mathcal{U}\|_{\mathrm{TNN}} + \lambda_2\|\mathcal{V}\|_{\mathrm{TNN}} + \lambda_3\|\mathcal{W}\|_{\mathrm{TNN}}\big).$$
It is obvious that $f(\cdot)$ and $g(\cdot)$ are closed, proper and convex. Then, Problem (22) can be rewritten as follows:
$$\min_{\mathbf{x},\mathbf{z}} \; f(\mathbf{x}) + g(\mathbf{z}) \quad \mathrm{s.t.} \quad \mathbf{A}\mathbf{x} - \mathbf{z} = \mathbf{0}.$$
According to the convergence analysis in [38], we have:
$$\begin{aligned} &\text{objective convergence:} \quad \lim_{k\to\infty} f(\mathbf{x}^k) + g(\mathbf{z}^k) = f^* + g^*, \\ &\text{dual variable convergence:} \quad \lim_{k\to\infty} \mathbf{y}^k = \mathbf{y}^*, \\ &\text{constraint (residual) convergence:} \quad \lim_{k\to\infty} \mathbf{A}\mathbf{x}^k - \mathbf{z}^k = \mathbf{0}, \end{aligned}$$
where $f^* + g^*$ denotes the optimal objective value of the rewritten problem and $\mathbf{y}^*$ is a dual optimal point defined as:
$$\mathbf{y}^* = \begin{bmatrix} \mathrm{vec}(\mathcal{Y}_1^*) \\ \mathrm{vec}(\mathcal{Y}_2^*) \\ \mathrm{vec}(\mathcal{Y}_3^*) \end{bmatrix},$$
where $(\mathcal{Y}_1^*, \mathcal{Y}_2^*, \mathcal{Y}_3^*)$ is the dual component of a saddle point $(\mathcal{L}^*, \mathcal{U}^*, \mathcal{V}^*, \mathcal{W}^*, \mathcal{Y}_1^*, \mathcal{Y}_2^*, \mathcal{Y}_3^*)$ of the unaugmented Lagrangian $L_0(\mathcal{L},\mathcal{U},\mathcal{V},\mathcal{W},\mathcal{Y}_1,\mathcal{Y}_2,\mathcal{Y}_3)$. □

4.4. Differences from Prior Work

First, we show the difference between the proposed TriTNN model and the two most closely related models, TNN [30] and t-TNN [16]. Although all of them are based on the tubal nuclear norm, the main difference lies in that TNN and t-TNN only use information along one orientation, whereas TriTNN uses information along three orientations.
Now, we compare the proposed model with Tubal-Alt-Min [37], which is based on the tensor tubal rank and employs a tensor factorization strategy for tensor completion. The differences between TriTNN and Tubal-Alt-Min are: (a) TriTNN preserves the low-rank structure by summing three tubal nuclear norms, whereas Tubal-Alt-Min adopts low-rank tensor factorization to characterize the low-rank property of a tensor. In this sense, they are two different kinds of models for tensor completion. (Since Tubal-Alt-Min and the proposed TriTNN are quite different algorithms and the main goal of this paper is to improve upon TNN, we do not compare with Tubal-Alt-Min in the experiment section.) (b) Since TriTNN is based on the tubal nuclear norm, it is formulated as the convex optimization Problem (21). Benefiting from convexity, each local minimum of Problem (21) must be a global minimum. However, Tubal-Alt-Min is formulated as a non-convex optimization problem and may thus produce sub-optimal solutions.

5. Experiments

In this section, extensive experiments are conducted to explore the effectiveness of the proposed Algorithm 1. All the code is implemented in MATLAB, and all experiments are carried out on Windows 10 with an Intel Core(TM) 2.60-GHz CPU and 12 GB of RAM.
To explore the effectiveness of the proposed TriTNN-based model, we compare with the following nuclear norm-based tensor completion models:
  • The tensor nuclear norm-based model with an ADMM solver: high accuracy low-rank tensor completion (HaLRTC, denoted by SNN in this paper) (code available: http://www.cs.rochester.edu/u/jliu/publications.html) [2]. The tensor nuclear norm there is defined as the weighted sum of the nuclear norms of the unfolding matrices along each mode (thus, we denote this model by SNN):
    $$\|\mathcal{T}\|_{\mathrm{SNN}} := \alpha_1\|\mathbf{T}_{(1)}\|_* + \alpha_2\|\mathbf{T}_{(2)}\|_* + \alpha_3\|\mathbf{T}_{(3)}\|_*,$$
    where $\alpha_1, \alpha_2, \alpha_3$ are positive parameters and $\mathbf{T}_{(i)} \in \mathbb{R}^{n_i \times \prod_{j \neq i} n_j}$, $i = 1,2,3$, is the unfolding matrix of the tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ along the $i$-th mode [2].
  • The latent tensor nuclear norm-based model (LatentNN) (code available: https://github.com/ryotat/tensor) [21]. The latent tensor nuclear norm is defined as:
    $$\|\mathcal{T}\|_{\mathrm{latent}} := \inf_{\mathcal{X}+\mathcal{Y}+\mathcal{Z}=\mathcal{T}} \; \|\mathbf{X}_{(1)}\|_* + \|\mathbf{Y}_{(2)}\|_* + \|\mathbf{Z}_{(3)}\|_*,$$
    where X ( 1 ) , Y ( 2 ) and Z ( 3 ) are the first-mode, second-mode and third-mode unfoldings of latent tensors X , Y and Z , respectively.
  • The square nuclear norm-based model (SquareNN) (code available: https://sites.google.com/site/mucun1988/publi) [23]. The square nuclear norm of a tensor is defined as the nuclear norm of the most balanced unfolding of a tensor (see [7,23]).
  • The most related tubal nuclear norm-based model (TNN) (code available: https://github.com/jamiezeminzhang/) [30] and the twist tubal nuclear norm-based model (t-TNN) [16].
We conduct tensor completion experiments on color images, videos and a dataset for autonomous vehicles. For an estimated tensor $\hat{\mathcal{L}} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the recovery quality is evaluated by the peak signal-to-noise ratio (PSNR), computed as:
$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{n_1 n_2 n_3 \|\mathcal{L}^*\|_\infty^2}{\|\hat{\mathcal{L}} - \mathcal{L}^*\|_F^2}\right),$$
where L * is the underlying tensor. The higher the PSNR value is, the better the recovery performance will be.
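In MATLAB, this measure can be evaluated in one line (assuming, as in the formula above, that the peak value is the largest magnitude of the ground-truth tensor):

% PSNR between an estimate Lhat and the ground truth Lstar.
psnr_val = @(Lhat, Lstar) 10 * log10(numel(Lstar) * max(abs(Lstar(:)))^2 / norm(Lhat(:) - Lstar(:))^2);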

5.1. Color Image Inpainting

Color images, stored as row × column × channel arrays, are naturally expressed in 3D tensor form. Image inpainting aims at reconstructing a color image from a small fraction of its entries. In this experiment, twelve test images of size $256 \times 256 \times 3$ are used; see Figure 5. Given an image $\mathcal{M}$ of size $d_1 \times d_2 \times 3$, we randomly sample 30% of its pixels and add i.i.d. Gaussian noise with standard deviation $\sigma = 0.1\sigma_0$, where $\sigma_0 = \|\mathcal{M}\|_F/\sqrt{3 d_1 d_2}$ is the rescaled magnitude of $\mathcal{M}$.
The weight parameters $\alpha$ of SNN are chosen to satisfy $\alpha_1 : \alpha_2 : \alpha_3 = 1 : 1 : 0.01$, as suggested in [2]. Parameter $\tau$ is set to 8e3, and $\lambda_1, \lambda_2, \lambda_3$ in Problem (22) are chosen to satisfy $\lambda_1 : \lambda_2 : \lambda_3 = 1 : 0.01 : 0.01$. The parameters of the other algorithms are tuned to achieve good performance in most cases. We also employ the structural similarity index measure (SSIM) [43] to measure the quality of the inpainted color images; the higher the SSIM value, the better the inpainting performance. Given a color image, we run ten trials and report the averaged PSNR and SSIM values.
The inpainting results on five images are shown in Figure 6 for qualitative comparison. We can see that the proposed TriTNN-based model achieves better visual quality. For quantitative comparison, the PSNR and SSIM values of the compared algorithms on the twelve images are reported in Figure 7. It can be seen that the proposed TriTNN-based model outperforms the competitors in most cases.

5.2. Video Inpainting

The video inpainting task aims at imputing the missing pixels of a video. The performance comparison is carried out on five widely used YUV videos (available from https://sites.google.com/site/subudhibadri/fewhelpfuldownloads): salesman_qcif, silent_qcif, suzie_qcif, tempete_cif and waterfall_cif. Due to computational limitations, we use the first 30 frames of the Y component of each video. This results in three tensors of size 144 × 176 × 32 and two tensors of size 288 × 352 × 32. For each video, we uniformly sample 10% of the entries and conduct the video inpainting experiments.
The weight parameters $\alpha$ of SNN are chosen to satisfy $\alpha_1 : \alpha_2 : \alpha_3 = 1 : 1 : 1$, as suggested in [2]. For the proposed model, the parameters are set to $\tau = 2$ and $\rho = 5\mathrm{e}{-5}$, and $\lambda_1, \lambda_2, \lambda_3$ are chosen to satisfy $\lambda_1 : \lambda_2 : \lambda_3 = 1 : 1 : 1$. The parameters of the other algorithms are tuned to achieve good performance in most cases. Given a video, we run ten trials and report the averaged PSNR value.
The qualitative comparison is shown in Figure 8, and the PSNR values are reported in Table 1. It can be seen that the TriTNN-based model outperforms the others. The proposed TriTNN performs better than TNN and t-TNN because TriTNN exploits the row, column and tube redundancy simultaneously, whereas TNN and t-TNN each exploit only one type of redundancy. The superiority of TriTNN over SNN, LatentNN and SquareNN may be explained by the fact that the circulant block matricization encoded in TriTNN makes use of more information than directly unfolding along each mode.

5.3. A Dataset for Autonomous Driving

Environment perception for autonomous driving has attracted more and more attention in computer vision. In this subsection, experiments on a dataset collected for autonomous driving are performed.
The dataset (a collection of Frames No. 165–No. 244 in Scenario B; additional sensor data are available at http://www.mrt.kit.edu/z/publ/download/velodynetracking/dataset.html) contains 80 frames of gray images and LiDAR point cloud data acquired by a Velodyne HDL-64E LiDAR. The image sequence is resized to a tensor of size 128 × 256 × 80, and the LiDAR data are resampled, transformed and formatted into two tensors of size 64 × 436 × 80, representing the distance data and the intensity data, respectively.
Given a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ to complete, experiments are carried out under seven observation settings where the sampling ratio p varies from 0.1–0.7. The observed entries are further corrupted by i.i.d. Gaussian noise with standard deviation $\sigma = 0.2\sigma_0$, where $\sigma_0 = \|\mathcal{T}\|_F/\sqrt{n_1 n_2 n_3}$ is the normalized magnitude of $\mathcal{T}$. The weight parameters $\alpha$ of SNN are chosen to satisfy $\alpha_1 : \alpha_2 : \alpha_3 = 1 : 1 : 1$. For the proposed model, the parameters are set to $\tau = 2$ and $\rho = 5\mathrm{e}{-5}$, and $\lambda_1, \lambda_2, \lambda_3$ are chosen to satisfy $\lambda_1 : \lambda_2 : \lambda_3 = 1 : 1 : 1$. The parameters of the other algorithms are tuned manually to achieve good performance in most cases. Given a sampling ratio p, we repeat ten trials and report the averaged PSNR value.
The quantitative performance comparison in terms of PSNR is shown in Table 2, Table 3 and Table 4 for the gray image sequence, the distance data and the intensity data completion, respectively. The highest average PSNR, indicating the best recovery performance, is highlighted in bold. From Table 2, Table 3 and Table 4, we can see that the proposed TriTNN-based model yields better performance than the other five algorithms for noisy tensor completion.

6. Conclusions

This paper studied the problem of completing a 3D tensor from incomplete observations, which arises in a broad range of computer vision tasks like image and video inpainting. A new tensor norm named the triple tubal nuclear norm (TriTNN) was proposed to simultaneously exploit tube, row and column redundancy in a circulant way by using a weighted sum of three TNNs. A TriTNN-based tensor completion model was presented and solved by an ADMM-based algorithm with a convergence guarantee. Experimental results on color images, videos and LiDAR datasets show the superiority of the proposed TriTNN against state-of-the-art nuclear norm-based tensor norms.
The authors believe the proposed TriTNN can outperform many nuclear norm-based tensor completion models because more spatial-temporal information is exploited. However, generally speaking, it has the following two drawbacks:
  • Computational inefficiency: Compared to TNN and t-TNN, it is more time-consuming since it involves computing TNN, t-TNN and rt-TNN (see Equation (18)).
  • Sample inefficiency: Following the analysis of [23] and [44], to complete an incomplete tensor, TriTNN may need more observations than TNN and t-TNN (a detailed analysis is beyond the scope of this paper).
In future research, the authors are interested in efficient algorithms like [45] to tackle the computational inefficiency. To decrease the sample complexity of TriTNN, it would be helpful to follow the suggestions in [44] and design new atomic norms like [46]. To obtain better visual completion performance, the authors would like to consider adding smoothness regularization to the model, as in [47,48,49], and adopting different tensorization methods like [50]. It would also be worthwhile to study new tensor completion models using deep neural networks [51]. As potential extensions of TriTNN, the authors would like to explore the p-th order (p > 3) extension [52] and extensions to discrete transforms other than the DFT, like [53].

Author Contributions

Conceptualization, D.W. and A.W. Data curation, D.W. and X.F. Formal analysis, B.W. (Bo Wang). Methodology, D.W. and A.W. Resources, B.W. (Boyu Wang) and B.W. (Bo Wang). Software, X.F. and B.W. (Boyu Wang). Writing, original draft, D.W., A.W. and X.F. Writing, review and editing, B.W. (Boyu Wang) and B.W. (Bo Wang).

Acknowledgments

The authors are grateful to the anonymous referees for their insightful and constructive comments that led to an improved version of the paper. The authors would like to thank Fangfang Yang and Zhong Jin for their long-time support, and also wish to thank the audiences of White Paper Tao (WPT) for their attention. This work is partially supported by the National Natural Science Foundation of China under Grant Nos. 61702262, 61703209, 91420201 and 61472187.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kilmer, M.E.; Braman, K.; Hao, N.; Hoover, R.C. Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 2013, 34, 148–172. [Google Scholar] [CrossRef]
  2. Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 208–220. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, A.; Jin, Z. Near-optimal Noisy Low-tubal-rank Tensor Completion via Singular Tube Thresholding. In Proceedings of the IEEE International Conference on Data Mining Workshop (ICDMW), New Orleans, LA, USA, 18–21 November 2017; pp. 553–560. [Google Scholar]
  4. Signoretto, M.; Dinh, Q.T.; Lathauwer, L.D.; Suykens, J.A.K. Learning with tensors: a framework based on convex optimization and spectral regularization. Mach. Learn. 2014, 94, 303–351. [Google Scholar] [CrossRef]
  5. Cichocki, A.; Mandic, D.; De Lathauwer, L.; Zhou, G.; Zhao, Q.; Caiafa, C.; Phan, H.A. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. 2015, 32, 145–163. [Google Scholar] [CrossRef]
  6. Lu, C.; Feng, J.; Chen, Y.; Liu, W.; Lin, Z.; Yan, S. Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5249–5257. [Google Scholar]
  7. Wei, D.; Wang, A.; Wang, B.; Feng, X. Tensor Completion Using Spectral (k, p) -Support Norm. IEEE Access 2018, 6, 11559–11572. [Google Scholar] [CrossRef]
  8. Liu, Y.; Shang, F. An Efficient Matrix Factorization Method for Tensor Completion. IEEE Signal Process. Lett. 2013, 20, 307–310. [Google Scholar] [CrossRef]
  9. Song, X.; Lu, H. Multilinear Regression for Embedded Feature Selection with Application to fMRI Analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 2562–2568. [Google Scholar]
  10. Tan, H.; Feng, G.; Feng, J.; Wang, W.; Zhang, Y.J.; Li, F. A tensor-based method for missing traffic data completion. Transp. Res. Part C 2013, 28, 15–27. [Google Scholar] [CrossRef]
  11. Zhao, Q.; Zhang, L.; Cichocki, A. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1751–1763. [Google Scholar] [CrossRef] [PubMed]
  12. Cichocki, A.; Lee, N.; Oseledets, I.; Phan, A.H.; Zhao, Q.; Mandic, D.P. Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions. Found. Trends Mach. Learn. 2016, 9, 249–429. [Google Scholar] [CrossRef] [Green Version]
  13. Harshman, R.A. Foundations of the PARAFAC Procedure: Models and Conditions for an “Explanatory” Multi-Modal Factor Analysis; University of California: Los Angeles, CA, USA, 1970. [Google Scholar]
  14. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  15. Tucker, L.R. Some mathematical notes on three-mode factor analysis. Psychometrika 1966, 31, 279–311. [Google Scholar] [CrossRef] [PubMed]
  16. Hu, W.; Tao, D.; Zhang, W. The twist tensor nuclear norm for video completion. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2961–2973. [Google Scholar] [CrossRef] [PubMed]
  17. Yuan, M.; Zhang, C.H. Incoherent Tensor Norms and Their Applications in Higher Order Tensor Completion. IEEE Trans. Inf. Theory 2017, 63, 6753–6766. [Google Scholar] [CrossRef] [Green Version]
  18. Song, Q.; Ge, H.; Caverlee, J.; Hu, X. Tensor Completion Algorithms in Big Data Analytics. arXiv, 2017; arXiv:1711.10105. [Google Scholar]
  19. Candès, E.J.; Tao, T. The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 2010, 56, 2053–2080. [Google Scholar] [CrossRef]
  20. Hillar, C.J.; Lim, L. Most Tensor Problems Are NP-Hard. J. ACM 2009, 60, 45. [Google Scholar] [CrossRef]
  21. Tomioka, R.; Suzuki, T. Convex tensor decomposition via structured schatten norm regularization. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 1331–1339. [Google Scholar]
  22. Semerci, O.; Hao, N.; Kilmer, M.E.; Miller, E.L. Tensor-Based Formulation and Nuclear Norm Regularization for Multienergy Computed Tomography. IEEE Trans. Image Process. 2014, 23, 1678–1693. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Mu, C.; Huang, B.; Wright, J.; Goldfarb, D. Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 73–81. [Google Scholar]
  24. Zhao, Q.; Meng, D.; Kong, X.; Xie, Q.; Cao, W.; Wang, Y.; Xu, Z. A Novel Sparsity Measure for Tensor Recovery. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 271–279. [Google Scholar]
  25. Tomioka, R.; Hayashi, K.; Kashima, H. Estimation of low-rank tensors via convex optimization. arXiv, 2010; arXiv:1010.0789. [Google Scholar]
  26. Chretien, S.; Wei, T. Sensing tensors with Gaussian filters. IEEE Trans. Inf. Theory 2017, 63, 843–852. [Google Scholar] [CrossRef]
  27. Ghadermarzy, N.; Plan, Y.; Yılmaz, Ö. Near-optimal sample complexity for convex tensor completion. arXiv, 2017; arXiv:1711.04965. [Google Scholar]
  28. Ghadermarzy, N.; Plan, Y.; Yılmaz, Ö. Learning tensors from partial binary measurements. arXiv, 2018; arXiv:1804.00108. [Google Scholar]
  29. Liu, Y.; Shang, F.; Fan, W.; Cheng, J.; Cheng, H. Generalized Higher-Order Orthogonal Iteration for Tensor Decomposition and Completion. In Proceedings of the Advances in Neural Information Processing Systems, Palais des Congrès de Montréal, Montréal, Canada, 8–13 December 2014; pp. 1763–1771. [Google Scholar]
  30. Zhang, Z.; Ely, G.; Aeron, S.; Hao, N.; Kilmer, M. Novel methods for multilinear data completion and de-noising based on tensor-SVD. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3842–3849. [Google Scholar]
  31. Zhang, Z.; Aeron, S. Exact Tensor Completion Using t-SVD. IEEE Trans. Signal Process. 2017, 65, 1511–1526. [Google Scholar] [CrossRef]
  32. Sun, W.; Chen, Y.; Huang, L.; So, H.C. Tensor Completion via Generalized Tensor Tubal Rank Minimization using General Unfolding. IEEE Signal Process. Lett. 2018. [Google Scholar] [CrossRef]
  33. Zhou, P.; Lu, C.; Lin, Z.; Zhang, C. Tensor Factorization for Low-Rank Tensor Completion. IEEE Trans. Image Process. 2018, 27, 1152–1163. [Google Scholar] [CrossRef] [PubMed]
  34. Ely, G.T.; Aeron, S.; Hao, N.; Kilmer, M.E. 5D seismic data completion and denoising using a novel class of tensor decompositions. Geophysics 2015, 80, V83–V95. [Google Scholar] [CrossRef]
  35. Liu, X.; Aeron, S.; Aggarwal, V.; Wang, X.; Wu, M. Adaptive Sampling of RF Fingerprints for Fine-grained Indoor Localization. IEEE Trans. Mob. Comput. 2016, 15, 2411–2423. [Google Scholar] [CrossRef]
  36. Jiang, J.Q.; Ng, M.K. Exact Tensor Completion from Sparsely Corrupted Observations via Convex Optimization. arXiv, 2017; arXiv:1708.00601. [Google Scholar]
  37. Liu, X.Y.; Aeron, S.; Aggarwal, V.; Wang, X. Low-tubal-rank tensor completion using alternating minimization. arXiv, 2016; arXiv:1610.01690. [Google Scholar]
  38. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  39. Zhou, P.; Feng, J. Outlier-Robust Tensor PCA. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  40. Cai, J.F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Opt. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  41. Gleich, D.F.; Greif, C.; Varah, J.M. The power and Arnoldi methods in an algebra of circulants. Numer. Linear Algebra Appl. 2013, 20, 809–831. [Google Scholar] [CrossRef]
  42. Granata, J.; Conner, M.; Tolimieri, R. The tensor product: a mathematical programming language for FFTs and other fast DSP operations. IEEE Signal Process. Mag. 1992, 9, 40–48. [Google Scholar] [CrossRef]
  43. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  44. Oymak, S.; Jalali, A.; Fazel, M.; Eldar, Y.C.; Hassibi, B. Simultaneously structured models with application to sparse and low-rank matrices. IEEE Trans. Inf. Theory 2015, 61, 2886–2908. [Google Scholar] [CrossRef]
  45. Mu, C.; Zhang, Y.; Wright, J.; Goldfarb, D. Scalable Robust Matrix Recovery: Frank-Wolfe Meets Proximal Methods. SIAM J. Sci. Comput. 2016, 38, A3291–A3317. [Google Scholar] [CrossRef]
  46. Richard, E.; Obozinski, G.R.; Vert, J.P. Tight convex relaxations for sparse matrix factorization. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 3284–3292. [Google Scholar]
  47. Yokota, T.; Zhao, Q.; Cichocki, A. Smooth PARAFAC Decomposition for Tensor Completion. IEEE Trans. Signal Process. 2016, 64, 5423–5436. [Google Scholar] [CrossRef]
  48. Chen, Y.L.; Hsu, C.T.; Liao, H.Y.M. Simultaneous Tensor Decomposition and Completion Using Factor Priors. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 577–591. [Google Scholar] [CrossRef] [PubMed]
  49. Xutao Li, Y.Y.; Xu, X. Low-Rank Tensor Completion with Total Variation for Visual Data Inpainting. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 2210–2216. [Google Scholar]
  50. Yokota, T.; Erem, B.; Guler, S.; Warfield, S.K.; Hontani, H. Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space. arXiv, 2018; arXiv:1804.01736. [Google Scholar]
  51. Fan, J.; Cheng, J. Matrix completion by deep matrix factorization. Neural Netw. 2018, 98, 34–41. [Google Scholar] [CrossRef] [PubMed]
  52. Martin, C.D.; Shafer, R.; Larue, B. An Order-p Tensor Factorization with Applications in Imaging. SIAM J. Sci. Comput. 2013, 35, A474–A490. [Google Scholar] [CrossRef]
  53. Liu, X.Y.; Wang, X. Fourth-order tensors with multidimensional discrete transforms. arXiv, 2017; arXiv:1705.01576. [Google Scholar]
Figure 1. Illustration of tensor (t)-SVD.
Figure 2. The column twist and column squeeze operations.
Figure 3. An intuitive illustration of the relationships between the original tensor, the column twist tensor, the row twist tensor, the block circulant matrix and the circulant block matricization of a tensor $\mathcal{T} \in \mathbb{R}^{3 \times 3 \times 3}$. Subplots (a–c) show the operations on the column twist tensor, the original tensor and the row twist tensor, respectively. It can be seen that $\mathrm{circ}(\mathcal{T})$ exploits the tube redundancy in a circulant way while keeping the row and column relationship, $\mathrm{circ}(\mathrm{ColTwist}(\mathcal{T}))$ exploits the row redundancy in a circulant way while keeping the tube and column relationship, and $\mathrm{circ}(\mathrm{RowTwist}(\mathcal{T}))$ exploits the column redundancy in a circulant way while keeping the tube and row relationship.
Figure 4. The row twist and row squeeze operations.
Figure 5. Twelve color images used in the experiments.
Figure 6. Examples of color image inpainting. (a) is the observed noisy incomplete image (Obs.) with sampling ratio 0.3 and noise level $\sigma = 0.1\sigma_0$; (b–g) show the inpainting results of the nuclear norm-based models: SNN [2], the latent tensor nuclear norm-based model (LatentNN) [21], SquareNN [23], the tensor tubal nuclear norm (TNN) [31], t-TNN [16] and the proposed model, respectively. The corresponding PSNR and SSIM values are listed in (k). The highest PSNR and SSIM, indicating the best inpainting performance, are highlighted in bold. Best viewed in color in the PDF file with a 4× zoom-in. TriTNN, triple tubal nuclear norm.
Figure 7. Quantitative evaluation of algorithms on color images for Uniform-0.3: the image is sampled uniformly with ratio p = 0.3 and corrupted with noise level σ = 0.1 σ 0 . (a) PSNR values; (b) SSIM values.
Figure 8. Examples of YUV video inpainting. (a) is the first frame of each video; (b) is the observed incomplete frame (Obs.) with 90% missing entries; (c–h) show the inpainting results of the nuclear norm-based models: SNN [2], LatentNN [21], SquareNN [23], TNN [31], t-TNN [16] and the proposed TriTNN-based model, respectively. Best viewed in the PDF file with a 4× zoom-in.
Table 1. Quantitative evaluation of algorithms in PSNR values for YUV video inpainting: each video is sampled uniformly with ratio p = 0.1.

Video           SNN [2]   LatentNN [21]   SquareNN [23]   TNN [31]   t-TNN [16]   TriTNN
salesman_qcif   21.10     24.21           20.31           25.56      25.73        26.18
silent_qcif     24.07     23.95           22.98           27.93      28.03        28.47
suzie_qcif      25.79     24.63           24.48           27.92      29.47        29.70
tempete_cif     19.10     18.45           19.23           20.65      21.21        21.46
waterfall_cif   24.10     22.39           24.06           26.71      27.44        28.22
Table 2. Comparison of the PSNR values for image sequence completion.

Sampling Ratio   SNN [2]   LatentNN [21]   SquareNN [23]   TNN [31]   t-TNN [16]   TriTNN
p = 0.1          14.58     15.50           17.27           17.23      17.29        17.45
p = 0.2          17.79     17.27           18.16           18.32      18.59        18.74
p = 0.3          18.74     18.36           18.80           19.04      19.12        19.53
p = 0.4          19.29     19.02           19.21           19.50      19.44        20.02
p = 0.5          19.61     19.43           19.49           19.82      19.64        20.28
p = 0.6          19.80     19.70           19.69           20.06      19.76        22.01
p = 0.7          19.90     19.83           19.84           20.23      19.85        22.41
Table 3. Comparison of the PSNR values for HDL-64E distance data completion.

Sampling Ratio   SNN [2]   LatentNN [21]   SquareNN [23]   TNN [31]   t-TNN [16]   TriTNN
p = 0.1          17.80     16.90           17.64           19.03      18.67        18.91
p = 0.2          19.26     18.48           18.94           20.04      19.60        20.09
p = 0.3          20.33     19.71           19.94           20.93      20.41        21.04
p = 0.4          21.24     20.67           20.89           21.74      21.17        21.92
p = 0.5          22.05     21.55           21.75           22.47      21.88        22.72
p = 0.6          22.82     22.39           22.56           23.16      22.61        23.68
p = 0.7          23.54     23.21           23.34           23.79      23.35        24.51
Table 4. Comparison of the PSNR values for HDL-64E intensity data completion.

Sampling Ratio   SNN [2]   LatentNN [21]   SquareNN [23]   TNN [31]   t-TNN [16]   TriTNN
p = 0.1          17.30     17.63           17.57           18.31      18.62        18.36
p = 0.2          18.79     19.30           18.92           19.50      19.61        19.63
p = 0.3          19.85     20.29           19.93           20.38      20.37        20.45
p = 0.4          20.84     21.11           20.90           21.24      21.08        21.30
p = 0.5          21.72     21.86           21.76           22.05      21.78        22.18
p = 0.6          22.51     22.62           22.53           22.79      22.50        22.94
p = 0.7          23.39     23.40           23.39           23.59      23.27        23.80
