Article

Correntropy Based Matrix Completion

1 College of Mathematics and Information Science, Guangxi University, Nanning 530004, China
2 Department of Mathematics and Statistics, The State University of New York at Albany, Albany, NY 12222, USA
3 Department of Electrical Engineering, ESAT-STADIUS, KU Leuven, Kasteelpark Arenberg 10, Leuven B-3001, Belgium
* Author to whom correspondence should be addressed.
Entropy 2018, 20(3), 171; https://doi.org/10.3390/e20030171
Submission received: 24 December 2017 / Revised: 8 February 2018 / Accepted: 22 February 2018 / Published: 6 March 2018
(This article belongs to the Special Issue Entropy in Signal Analysis)

Abstract: This paper studies the matrix completion problem when the entries are contaminated by non-Gaussian noise or outliers. The proposed approach employs a nonconvex loss function induced by the maximum correntropy criterion. With the help of this loss function, we develop both a rank-constrained and a nuclear norm regularized model, which are resistant to non-Gaussian noise and outliers. However, the non-convexity also brings certain difficulties. To tackle them, we use simple iterative soft and hard thresholding strategies. We show that, when extended to the general affine rank minimization problem, the proposed algorithms enjoy certain recoverability results under proper conditions. Numerical experiments indicate the improved performance of our proposed approach.

1. Introduction

Arising from a variety of applications such as online recommendation systems [1,2], image inpainting [3,4] and video denoising [5], the matrix completion problem has drawn tremendous and continuous attention in recent years [6,7,8,9,10,11,12]. Matrix completion aims at recovering a low rank matrix from partial observations of its entries [7]. The problem can be mathematically formulated as:
\min_{X \in \mathbb{R}^{m \times n}} \operatorname{rank}(X) \quad \text{s.t.} \quad X_{ij} = B_{ij}, \ (i,j) \in \Omega, \tag{1}
where X, B ∈ ℝ^{m×n} and Ω is an index set. Due to the nonconvexity of the rank function rank(·), solving this minimization problem is NP-hard in general. To obtain a tractable convex relaxation, the nuclear norm heuristic was proposed [7]. Combined with the least squares loss, nuclear norm regularization was proposed to solve (1) when the observed entries are contaminated by Gaussian noise [13,14,15,16]. In real-world applications, datasets may be contaminated by non-Gaussian noise or sparse gross errors, which can appear in both explanatory and response variables. However, it is well understood that the least squares loss is not resistant to non-Gaussian noise or outliers.
To address this problem, several efforts have been made in the literature. Ref. [17] proposed a robust approach based on the least absolute deviation loss. Huber’s criterion was adopted in [18] to introduce robustness into matrix completion. Ref. [19] proposed to use an L_p (0 < p ≤ 1) loss to enhance robustness. However, as explained later, the approaches mentioned above are not robust to impulsive errors. In this study, we propose to use the correntropy-induced loss function in matrix completion problems when pursuing robustness.
Correntropy, which serves as a similarity measure between two random variables, was proposed in [20] within the information-theoretic learning framework developed in [21]. It has been shown that, in prediction problems, error correntropy is closely related to the error entropy [21]. Correntropy and the induced error criterion have been drawing a great deal of attention in the signal processing and machine learning communities. Given two scalar random variables U and V, the correntropy V_σ between U and V is defined as V_σ(U, V) = E[K_σ(U, V)], where K_σ is a Gaussian kernel given by K_σ(u, v) = exp(−(u − v)²/σ²), σ > 0 is a scale parameter and (u, v) is a realization of (U, V). It was noticed in [20] that the correntropy V_σ(U, V) induces a new metric between U and V.
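As a concrete illustration of this definition, the following minimal NumPy sketch estimates the correntropy between two samples by averaging the Gaussian kernel over paired realizations. The sample data, the random seed and the choice σ = 0.5 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def correntropy(u, v, sigma=0.5):
    """Empirical correntropy: the sample mean of the Gaussian kernel
    K_sigma(u_i, v_i) = exp(-(u_i - v_i)^2 / sigma^2) over paired samples."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return np.mean(np.exp(-(u - v) ** 2 / sigma ** 2))

# Illustrative data: v equals u plus small noise and one gross outlier.
rng = np.random.default_rng(0)
u = rng.normal(size=1000)
v = u + 0.05 * rng.normal(size=1000)
v[0] += 50.0                      # a single impulsive error
print(correntropy(u, v))          # stays close to 1: the outlier barely moves the average
```

Because each kernel value is bounded by one, a single impulsive error contributes at most 1/N to the estimate, which is precisely the robustness property exploited later.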
In this study, by employing the correntropy-induced loss, we propose a nonconvex relaxation approach to robust matrix completion. Specifically, we develop two models: one with a rank constraint and the other with a nuclear norm regularization term. To solve them, we propose simple but efficient algorithms. Experiments on synthetic as well as real data show that our methods are effective even for heavily contaminated datasets. We make the following contributions in this paper:
  • In Section 3, we propose a nonconvex relaxation strategy for the robust matrix completion problem, where the robustness comes from using a robust loss. Based on this loss, both a rank-constrained and a nuclear norm penalized model are proposed. We also extend the proposed models to the affine rank minimization problem, which includes matrix completion as a special case.
  • In Section 4, we propose simple but effective algorithms to solve the proposed models, which are based on gradient descent and employ the hard/soft shrinkage operators. By verifying the Lipschitz continuity of the gradient, the convergence of the algorithms can be proven. When extended to affine rank minimization problems, certain recoverability results are obtained under proper conditions. These results provide an algorithmic understanding of this loss function, which is in accordance with and extends our previous work [22].
This paper is organized as follows: In Section 2, we review some existing (robust) matrix completion approaches. In Section 3, we propose our nonconvex relaxation approach. Two algorithms are proposed in Section 4 to solve the proposed models. Theoretical results will be presented in Section 4.1. Experimental results are reported in Section 5. We end this paper in Section 6 with concluding remarks.

2. Related Work and Discussions

In matrix completion, solving the optimization problem in Model (1) is NP-hard, and a usual remedy is to consider the following nuclear norm convex relaxation:
\min_{X \in \mathbb{R}^{m \times n}} \|X\|_* \quad \text{s.t.} \quad X_{ij} = B_{ij}, \ (i,j) \in \Omega.
Theoretically, it has been demonstrated in [7,8] that, under proper assumptions, one can reconstruct the original matrix with overwhelming probability. Matrix completion with noisy entries has also been considered; see, e.g., [6,9]. In the noisy setting, the observed matrix becomes:
B_\Omega = X_\Omega + E,
where B_Ω denotes the projection of B onto Ω, and E refers to the noise. The following two models are frequently adopted to deal with the noisy case:
\min_{X \in \mathbb{R}^{m \times n}} \frac{1}{2}\|X_\Omega - B_\Omega\|_F^2 \quad \text{s.t.} \quad \operatorname{rank}(X) \le R,
and its convex relaxed and regularized heuristic:
\min_{X \in \mathbb{R}^{m \times n}} \frac{1}{2}\|X_\Omega - B_\Omega\|_F^2 + \lambda \|X\|_*,
where λ > 0 is a regularization parameter. Similar to the noiseless case, theoretical reconstruction results have also been derived under technical assumptions. Along this line, various approaches have been proposed [14,15,16,23,24]. Among others, Refs. [10,25] interpreted the matrix completion problem as a specific case of the trace regression problem endowed with an entry-wise least squares loss ‖·‖_F². In the above-mentioned settings, the noise term E is usually assumed to be Gaussian or sub-Gaussian to ensure good generalization ability, which certainly excludes heavy-tailed noise and/or outliers.

Existing Robust Matrix Completion Approaches

It has been well understood that the least squares estimator cannot deal with non-Gaussian noise or outliers. To alleviate this limitation, some efforts have been made.
In a seminal work, Ref. [17] proposed a robust matrix completion approach, in which the model takes the following form:
\min_{X, E \in \mathbb{R}^{m \times n}} \|E\|_1 + \lambda \|X\|_* \quad \text{s.t.} \quad X_\Omega + E = B_\Omega.
The above model can be further formulated as:
\min_{X \in \mathbb{R}^{m \times n}} \|X_\Omega - B_\Omega\|_1 + \lambda \|X\|_*,
where λ > 0 is a regularization parameter. The robustness of the model (4) results from using the least absolute deviation (LAD) loss. This model was later applied to the column-wise robust matrix completion problem in [26].
By further decomposing E into E = E 1 + E 2 , where E 1 refers to the noise and E 2 stands for the outliers, Ref. [18] proposed the following robust reconstruction model:
\min_{X, E_2 \in \mathbb{R}^{m \times n}} \|X_\Omega - B_\Omega - E_2\|_F^2 + \lambda \|X\|_* + \gamma \|E_2\|_1,
where λ, γ > 0 are regularization parameters. They further showed that the above estimator is equivalent to the one obtained by using Huber’s criterion to evaluate the data-fitting risk. We also note that [19] adopted an L_p (0 < p ≤ 1) loss to enhance robustness.

3. The Proposed Approach

3.1. Our Proposed Nonconvex Relaxation Approach

As stated previously, matrix completion models based on the least squares loss cannot perform well with non-Gaussian noise and/or outliers. Accordingly, robustness can be pursued by using a robust loss as mentioned earlier. Associated with a nuclear norm penalization term, the resulting models are essentially regularized M-estimators. However, note that the LAD loss and the L_p loss penalize small residuals heavily and hence cannot lead to accurate prediction of unobserved entries from the trace regression viewpoint. Moreover, robust statistics reminds us that models based on the three loss functions mentioned above cannot be robust to impulsive errors [27,28]. These limitations encourage us to employ more robust surrogate loss functions. In this paper, we present a nonconvex relaxation approach to the matrix completion problem with entries heavily contaminated by noise and/or outliers.
In our study, we propose the robust matrix completion model based on a robust and nonconvex loss, which is defined by:
\rho_\sigma(t) = \sigma^2\left(1 - \exp\left(-t^2/\sigma^2\right)\right),
with σ > 0 a scale parameter. To give an intuitive impression, plots of loss functions mentioned above are given in Figure 1. As mentioned above, this loss function is induced by the correntropy, which measures the similarity between two random variables [20,21] and has found many successful applications [29,30,31]. Recently, it was shown in [22] that regression with the correntropy-induced losses regresses towards the conditional mean function with a diverging scale parameter σ when the sample size goes to infinity. It was also shown in [32] that when the noise variable admits a unique global mode, regression with the correntropy-induced losses regresses towards the conditional mode. As argued in [22,32], learning with correntropy-induced losses can be resistant to non-Gaussian noise and outliers, while ensuring good prediction accuracy simultaneously with properly chosen σ .
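To make the robustness mechanism concrete alongside Figure 1, the short sketch below evaluates ρ_σ on a few residuals and contrasts it with the squared and absolute losses. It is only an illustration of the saturation property; the residual values and σ = 0.5 (the value used later in the experiments) are our own choices.

```python
import numpy as np

def rho_sigma(t, sigma=0.5):
    # Welsch / correntropy-induced loss: sigma^2 * (1 - exp(-t^2 / sigma^2))
    return sigma ** 2 * (1.0 - np.exp(-t ** 2 / sigma ** 2))

residuals = np.array([0.1, 0.5, 1.0, 10.0, 100.0])
print("rho_sigma :", rho_sigma(residuals))   # saturates at sigma^2 = 0.25 for large |t|
print("squared   :", 0.5 * residuals ** 2)   # grows quadratically
print("absolute  :", np.abs(residuals))      # grows linearly
```

The bounded penalty on large residuals is what limits the influence of gross outliers, while for small residuals ρ_σ behaves like the squared loss scaled by one.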
Associated with the ρ_σ loss, our rank-constrained robust matrix completion model is formulated as:
\min_{X \in \mathbb{R}^{m \times n}} \ell_\sigma(X) \quad \text{s.t.} \quad \operatorname{rank}(X) \le R, \tag{5}
where the data-fitting risk ℓ_σ(X) is given by:
\ell_\sigma(X) = \frac{1}{2}\sum_{(i,j)\in\Omega} \rho_\sigma\left(X_{ij} - B_{ij}\right) = \frac{\sigma^2}{2}\sum_{(i,j)\in\Omega}\left(1 - \exp\left(-\left(X_{ij} - B_{ij}\right)^2/\sigma^2\right)\right).
The nuclear norm heuristic model takes the following form:
\min_{X \in \mathbb{R}^{m \times n}} \ell_\sigma(X) + \lambda \|X\|_*, \tag{6}
where λ > 0 is a regularization parameter.
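The following minimal sketch evaluates the data-fitting risk ℓ_σ(X) and its gradient, which is supported only on the observed entries. It is an illustration of the formulas above, not the authors' implementation; the 0/1 mask encoding Ω and σ = 0.5 are assumptions.

```python
import numpy as np

def risk_and_grad(X, B, mask, sigma=0.5):
    """ell_sigma(X) = (sigma^2 / 2) * sum_{(i,j) in Omega} (1 - exp(-(X_ij - B_ij)^2 / sigma^2))
    and its gradient, which vanishes outside Omega. `mask` is a 0/1 array encoding Omega."""
    R = (X - B) * mask                       # residuals on observed entries only
    W = np.exp(-R ** 2 / sigma ** 2)         # correntropy weights in (0, 1]
    risk = 0.5 * sigma ** 2 * np.sum(mask * (1.0 - W))
    grad = mask * W * R                      # exp(-r^2/sigma^2) * r on Omega, 0 elsewhere
    return risk, grad
```

Large residuals receive weights exp(−r²/σ²) close to zero, so outlying entries contribute almost nothing to the gradient, which is the algorithmic source of robustness.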

3.2. Affine Rank Minimization Problem

In this part, we will show that our robust matrix completion approach can be extended to deal with the robust affine rank minimization problems.
It is known that the matrix completion problem (1) is a special case of the following affine rank minimization problem:
\min_{X \in \mathbb{R}^{m \times n}} \operatorname{rank}(X) \quad \text{s.t.} \quad A(X) = b, \tag{7}
where b ∈ ℝ^p is given, and A: ℝ^{m×n} → ℝ^p is a linear operator defined by:
A(\cdot) := \left(\langle A_1, \cdot\rangle, \langle A_2, \cdot\rangle, \ldots, \langle A_p, \cdot\rangle\right)^T,
where A_i ∈ ℝ^{m×n} for each i. Introduced and studied in [33], this problem has drawn much attention in recent years [14,15,16,23]. Note that (7) reduces to the matrix completion problem (1) if we set p = |Ω| (the cardinality of Ω) and let A_{(i−1)n+j} = e_i^{(m)} (e_j^{(n)})^T for each (i,j) ∈ Ω, where e_i^{(m)}, i = 1, …, m, and e_j^{(n)}, j = 1, …, n, are the canonical basis vectors of ℝ^m and ℝ^n, respectively.
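Under this choice of the A_i, applying A to X simply reads off the observed entries. The sketch below (a hypothetical helper, not from the paper) makes this concrete for a small index set.

```python
import numpy as np

def apply_A(X, omega):
    """A(X) = (<A_k, X>)_k with A_k = e_i e_j^T for (i, j) in Omega,
    i.e., the vector of observed entries of X listed in the order of omega."""
    return np.array([X[i, j] for (i, j) in omega])

X = np.arange(12.0).reshape(3, 4)
omega = [(0, 1), (2, 3), (1, 0)]
print(apply_A(X, omega))   # [ 1. 11.  4.]
```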
In fact, (5) and (6) can be naturally extended to handle noisy and outlier-contaminated instances of (7). Denote the risk as follows:
\tilde{\ell}_\sigma(X) = \frac{\sigma^2}{2}\sum_{i=1}^p\left(1 - \exp\left(-\left(\langle A_i, X\rangle - b_i\right)^2/\sigma^2\right)\right).
The rank constrained model can be formulated as:
\min_{X \in \mathbb{R}^{m \times n}} \tilde{\ell}_\sigma(X) \quad \text{s.t.} \quad \operatorname{rank}(X) \le R, \tag{8}
and the nuclear norm regularized heuristic takes the form:
\min_{X \in \mathbb{R}^{m \times n}} \tilde{\ell}_\sigma(X) + \lambda \|X\|_*. \tag{9}
In view of the computational considerations presented below, we focus on the more general optimization problems (8) and (9); the resulting algorithms can be directly applied to (5) and (6).

4. Algorithms and Analysis

We consider gradient descent-based algorithms to solve the proposed models. It is usually admitted that gradient descent is not very efficient. However, in our experiments, we find that it is still efficient and comparable with some state-of-the-art methods. On the other hand, we present recoverability and convergence rate results for gradient descent applied to the proposed models. Such results and analysis may help us better understand the models and this nonconvex loss function from an algorithmic perspective.
We first consider gradient descent with hard thresholding for solving (8). The derivation is standard. Denote S_R := {X ∈ ℝ^{m×n} | rank(X) ≤ R}. By the differentiability of ℓ̃_σ, when Y is sufficiently close to X, ℓ̃_σ can be approximated by:
\tilde{\ell}_\sigma(X) \approx \tilde{\ell}_\sigma(Y) + \left\langle \nabla\tilde{\ell}_\sigma(Y), X - Y\right\rangle + \frac{\alpha}{2}\|X - Y\|_F^2.
Here, α > 0 is a parameter, and ∇ℓ̃_σ(Y), the gradient of ℓ̃_σ at Y, is equal to:
\nabla\tilde{\ell}_\sigma(Y) = \sum_{i=1}^p \exp\left(-\left(\langle A_i, Y\rangle - b_i\right)^2/\sigma^2\right)\left(\langle A_i, Y\rangle - b_i\right) A_i.
Now, the iterates can be generated as follows:
X^{(k+1)} = \arg\min_{X \in S_R} \tilde{\ell}_\sigma(X^{(k)}) + \left\langle \nabla\tilde{\ell}_\sigma(X^{(k)}), X - X^{(k)}\right\rangle + \frac{\alpha}{2}\|X - X^{(k)}\|_F^2 = \arg\min_{X \in S_R} \|X - Y^{(k+1)}\|_F^2 \tag{11}
with:
Y^{(k+1)} = X^{(k)} - \alpha^{-1}\nabla\tilde{\ell}_\sigma(X^{(k)}). \tag{12}
We simply write (11) as X^{(k+1)} = P_{S_R}(Y^{(k+1)}), where P_{S_R} denotes the hard thresholding operator, i.e., the best rank-R approximation to Y^{(k+1)}. The algorithm is presented in Algorithm 1.
Algorithm 1 Gradient descent iterative hard thresholding for (8).
  • Input: linear operator A: ℝ^{m×n} → ℝ^p, initial guess X^{(0)} ∈ ℝ^{m×n}, prescribed rank R ≥ 1, σ > 0
  • Output: the recovered matrix X^{(k+1)}
  • while a certain stopping criterion is not satisfied do
  •  1: Choose a fixed step-size α^{−1} > 0.
  •  2: Compute the gradient descent step (12): Y^{(k+1)} = X^{(k)} − α^{−1} ∇ℓ̃_σ(X^{(k)}).
  •  3: Perform the hard thresholding operation to obtain X^{(k+1)} = P_{S_R}(Y^{(k+1)}), and set k := k + 1.
  • end while
The algorithm starts from an initial guess X^{(0)} and continues until some stopping criterion is satisfied, e.g., ‖X^{(k+1)} − X^{(k)}‖_F ≤ ϵ, where ϵ is a given positive number. Indeed, such a stopping criterion makes sense, as Proposition A3 shows that ‖X^{(k)} − X^{(k+1)}‖_F → 0. To ensure convergence, the step-size should satisfy α > L := ‖A‖_2², where ‖A‖_2 denotes the spectral norm of A. For matrix completion, the spectral norm is at most one, and thus we can set α > 1. In Appendix A, we show the Lipschitz continuity of ∇ℓ̃_σ(·), which is necessary for the convergence of the algorithm. The step-size can also be made self-adaptive by using a certain line-search rule; Algorithm 2 is the line-search version of Algorithm 1. A minimal numerical sketch of the fixed-step iteration is given after Algorithm 2.
Algorithm 2 Line-search version of Algorithm 1.
  • Input: linear operator A: ℝ^{m×n} → ℝ^p, initial guess X^{(0)} ∈ ℝ^{m×n}, prescribed rank R ≥ 1, σ > 0, α^{(0)} > 0, δ ∈ (0, 1), η > 1
  • Output: the recovered matrix X^{(k+1)}
  • while a certain stopping criterion is not satisfied do
  •  1: α^{(k+1)} := α^{(k)}
  •  repeat
  •   2: X^{(k+1)} = P_{S_R}(X^{(k)} − (1/α^{(k+1)}) ∇ℓ̃_σ(X^{(k)}))
  •   3: α^{(k+1)} := α^{(k+1)} · η
  •  until ℓ̃_σ(X^{(k+1)}) ≤ ℓ̃_σ(X^{(k)}) − (δ α^{(k+1)}/2) ‖X^{(k+1)} − X^{(k)}‖_F²
  •  4: α^{(k+1)} := α^{(k+1)}/η, and set k := k + 1.
  • end while
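For concreteness, here is a minimal NumPy sketch of the fixed-step iteration of Algorithm 1 specialized to the matrix completion case (so that the spectral norm is at most one and we may take α = 1.1 > 1). The helper names, tolerance and iteration cap are illustrative choices, not the authors' code.

```python
import numpy as np

def hard_threshold(Y, R):
    # P_{S_R}: best rank-R approximation of Y via a truncated SVD
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :R] * s[:R]) @ Vt[:R, :]

def rmc_iht(B, mask, R, sigma=0.5, alpha=1.1, tol=1e-3, max_iter=500):
    """Gradient descent with iterative hard thresholding for the
    rank-constrained correntropy model, matrix completion case."""
    X = np.zeros_like(B)
    for _ in range(max_iter):
        Res = (X - B) * mask
        grad = mask * np.exp(-Res ** 2 / sigma ** 2) * Res   # gradient of ell_sigma
        X_new = hard_threshold(X - grad / alpha, R)          # X^{k+1} = P_{S_R}(Y^{k+1})
        if np.linalg.norm(X_new - X) <= tol:                 # stopping criterion
            return X_new
        X = X_new
    return X
```

Algorithm 2 would replace the fixed α by the backtracking update of steps 1 to 4.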
Solving (9) is similar, with only the hard thresholding P_{S_R} replaced by the soft thresholding S_τ, which can be derived as follows. Let Y^{(k+1)} = U diag{σ_i}_{1≤i≤r} V^T be the SVD of Y^{(k+1)}. Then, S_{λ/α} is the matrix soft thresholding operator [13,16] defined as S_{λ/α}(Y^{(k+1)}) = U diag{max{σ_i − λ/α, 0}} V^T. Gradient descent-based soft thresholding is summarized in Algorithm 3, and a sketch of the soft thresholding operator is given after Algorithm 3.
Algorithm 3 Gradient descent iterative soft thresholding for (9).
  • Input: linear operator A: ℝ^{m×n} → ℝ^p, initial guess X^{(0)} ∈ ℝ^{m×n}, parameter λ > 0, σ > 0
  • Output: the recovered matrix X^{(k+1)}
  • while a certain stopping criterion is not satisfied do
  •  1: Choose a fixed step-size α^{−1} > 0, or choose it via the line-search rule.
  •  2: Compute Y^{(k+1)} = X^{(k)} − α^{−1} ∇ℓ̃_σ(X^{(k)}).
  •  3: Perform the soft thresholding operation to obtain X^{(k+1)} = S_{λ/α}(Y^{(k+1)}), and set k := k + 1.
  • end while
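A minimal sketch of the singular value soft thresholding operator S_τ used in step 3 of Algorithm 3; the threshold value passed in is whatever λ/α happens to be.

```python
import numpy as np

def soft_threshold(Y, tau):
    """S_tau(Y) = U diag(max(sigma_i - tau, 0)) V^T: shrink the singular
    values of Y by tau and discard those that fall below it."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt
```

Replacing hard_threshold(·, R) by soft_threshold(·, lam / alpha) in the rmc_iht sketch above yields the iteration of Algorithm 3.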

4.1. Convergence

With the Lipschitz continuity of ∇ℓ̃_σ presented in Appendix A, it is a standard routine to show the convergence of Algorithms 1 and 3: let {X^{(k)}} be a sequence generated by Algorithm 1 or 3; then every limit point of the sequence is a critical point of the corresponding problem. In fact, this can be strengthened to the statement that the entire sequence converges to a critical point, namely, one can prove that lim_{k→∞} X^{(k)} = X, where X is a critical point. This can be achieved by verifying the so-called Kurdyka–Łojasiewicz (KL) property [34] of the problems (8) and (9). As this is not the main concern of this paper, we omit the verification here.

4.2. Recoverability and Linear Convergence Rate

For affine rank minimization problems, convergence rate results have been obtained in the literature; see, e.g., [23,24]. However, all the existing results are obtained for algorithms that solve optimization problems incorporating the least squares loss. In this part, we are concerned with the recoverability and convergence rate of Algorithm 1. These results provide an understanding of this loss function from an algorithmic perspective, which is in accordance with and extends our previous work [22].
It is known that the convergence rate analysis requires the matrix RIP condition [33]. In our context, instead of using the matrix RIP, we adopt the concept of the matrix scalable restricted isometry property (SRIP) [24].
Definition 1
(SRIP [24]). For any X ∈ S_r, there exist constants ν_r, μ_r > 0 such that:
\nu_r \|X\|_F \le \|A(X)\|_F \le \mu_r \|X\|_F.
Due to the scalability of ν_r, μ_r with respect to the operator A, the SRIP is a generalization of the RIP [33], as commented in [24]. We point out that the results of Algorithm 1 for the affine rank minimization problem (8) rely on the SRIP condition. However, in the matrix completion problem (5), this condition cannot be met, since ν_r in this case is zero. Consequently, the results provided below cannot be applied directly to the matrix completion problem (5). However, similar results might be established for (5) if some refined RIP conditions are assumed to hold for the operator A in the matrix completion setting [23]. To obtain the convergence rate results, besides the SRIP condition, we also need to make some assumptions.
Assumption 1.
  • At the (k+1)-th iteration of Algorithm 1, the parameter σ_{k+1} in the loss function ℓ̃_σ is chosen as:
    \sigma_{k+1} = \max\left\{\frac{\|A(X^{(k)}) - b\|_F}{\sqrt{2(1-\beta)}}, \hat{\sigma}\right\},
    where β ∈ [0.988, 1) and σ̂ is a positive constant.
  • The spectral norm of A is upper bounded as ‖A‖_2² ≤ (6/5)ν_{2R}².
Based on Assumption 1, the following results for Algorithm 1 can be derived.
Theorem 1.
Assume that A(X) + ϵ = b, where X is the matrix to be recovered and rank(X) = R. Assume that Assumption 1 holds. Let {X^{(k)}} be generated by Algorithm 1 with the step-size α = ‖A‖_2². Then:
  • at iteration (k+1), Algorithm 1 recovers a matrix X^{(k+1)} satisfying:
    \|X^{(k+1)} - X\|_F \le q_1^{k+1}\|X^{(0)} - X\|_F + \frac{2}{1-q_1}\cdot\frac{\|\epsilon\|_F}{\|A\|_2},
    where q_1 ∈ (0.8165, 0.9082), depending on β.
  • If there is no noise or outliers, i.e., A(X) = b, then the algorithm converges linearly in the least squares and the ℓ̃_σ sense, respectively, i.e.,
    \|A(X^{(k+1)}) - b\|^2 \le q_2\|A(X^{(k)}) - b\|^2, \quad \text{and} \quad \tilde{\ell}_{\sigma_{k+1}}(X^{(k+1)}) \le q_3\,\tilde{\ell}_{\sigma_k}(X^{(k)}),
    where q_2 ∈ (0.8, 0.9898) and q_3 ∈ (0.2, 0.262), depending on the choice of β.
The proof of Theorem 1 relies on the following lemmas, which reveal certain properties of the loss function ℓ̃_σ.
Lemma 1.
For any σ > 0 and t ∈ ℝ, it holds that:
\frac{\sigma^2}{2}\left(1 - \exp\left(-\frac{t^2}{\sigma^2}\right)\right) \le \frac{t^2}{2}.
Proof. 
For any σ > 0, let f(t) := t²/2 − (σ²/2)(1 − exp(−t²/σ²)). Since f(t) is even, we only need to consider t ≥ 0. Note that f′(t) = t − t exp(−t²/σ²), which is nonnegative when t ≥ 0. Therefore, f(t) is a nondecreasing function on [0, +∞). On the other hand, f(0) = 0 and f′(0) = 0. Thus, the minimum of f(t) is f(0) = 0. As a result, f(t) ≥ 0. This completes the proof. ☐
Lemma 2.
Assume that β ∈ [0, 1) and 0 < δ ≤ 2(1 − β). Then, it holds that:
g(\delta) := 1 - \exp(-\delta) - \beta\delta \ge 0.
Proof. 
Since δ > 0, it is not hard to check that 1 − exp(−δ) ≥ δ − δ²/2. From the range of δ, it follows that δ − δ²/2 ≥ βδ. This completes the proof. ☐
Lemma 3.
Given a fixed t ∈ ℝ and σ > 0, the function h(σ) := σ²(1 − exp(−t²/σ²)) is nondecreasing with respect to σ.
Proof. 
It is not hard to check that h′(σ) is nonnegative for σ > 0. ☐
Proof of Theorem 1.
By the fact that X is rank-R and X^{(k+1)} is the best rank-R approximation to Y^{(k+1)}, we have:
\|X^{(k+1)} - X\|_F \le \|X^{(k+1)} - Y^{(k+1)}\|_F + \|Y^{(k+1)} - X\|_F \le 2\|Y^{(k+1)} - X\|_F = 2\left\|X^{(k)} - X - \tfrac{1}{\alpha}\nabla\tilde{\ell}_{\sigma_{k+1}}(X^{(k)})\right\|_F.
Since:
\operatorname{vec}\left(\nabla\tilde{\ell}_{\sigma_{k+1}}(X^{(k)})\right) = A^T\Lambda\left(A\operatorname{vec}(X^{(k)}) - b\right) = A^T\Lambda\left(A\operatorname{vec}(X^{(k)} - X) - \epsilon\right),
we know that:
\left\|X^{(k)} - X - \tfrac{1}{\alpha}\nabla\tilde{\ell}_{\sigma_{k+1}}(X^{(k)})\right\|_F = \left\|\operatorname{vec}(X^{(k)} - X) - \tfrac{1}{\alpha}A^T\Lambda\left(A\operatorname{vec}(X^{(k)} - X) - \epsilon\right)\right\|_F \le \left\|\operatorname{vec}(X^{(k)} - X) - \tfrac{1}{\alpha}A^T\Lambda A\operatorname{vec}(X^{(k)} - X)\right\|_F + \tfrac{1}{\alpha}\|A^T\Lambda\epsilon\|_F \le \left\|\operatorname{vec}(X^{(k)} - X) - \tfrac{1}{\alpha}A^T\Lambda A\operatorname{vec}(X^{(k)} - X)\right\|_F + \frac{\|\epsilon\|_F}{\|A\|_2},
where the last inequality follows from:
\|A^T\Lambda\epsilon\|_F \le \|A\|_2\|\Lambda\|_2\|\epsilon\|_F \le \|A\|_2\|\epsilon\|_F
and the choice of the step-size α. It remains to estimate ‖vec(X^{(k)} − X) − α^{−1}A^TΛA vec(X^{(k)} − X)‖_F. We first see that:
\left\|\operatorname{vec}(X^{(k)} - X) - \tfrac{1}{\alpha}A^T\Lambda A\operatorname{vec}(X^{(k)} - X)\right\|_F^2 = -\frac{2}{\alpha}\left\langle\operatorname{vec}(X^{(k)} - X), A^T\Lambda A\operatorname{vec}(X^{(k)} - X)\right\rangle + \frac{1}{\alpha^2}\left\|A^T\Lambda A\operatorname{vec}(X^{(k)} - X)\right\|_F^2 + \|X^{(k)} - X\|_F^2. \tag{13}
To verify our first assertion, it remains to bound the first two terms by means of ‖X^{(k)} − X‖_F². We consider the first term. Denoting y_i^k = ⟨A_i, X^{(k)} − X⟩, we know that:
\left\langle\operatorname{vec}(X^{(k)} - X), A^T\Lambda A\operatorname{vec}(X^{(k)} - X)\right\rangle = \left\langle A\operatorname{vec}(X^{(k)} - X), \Lambda A\operatorname{vec}(X^{(k)} - X)\right\rangle = \sum_{i=1}^p \exp\left(-\frac{\left(\langle A_i, X^{(k)}\rangle - b_i\right)^2}{\sigma_{k+1}^2}\right)\left(y_i^k\right)^2.
The choice of σ_{k+1} tells us that:
\exp\left(-\frac{\left(\langle A_i, X^{(k)}\rangle - b_i\right)^2}{\sigma_{k+1}^2}\right) \ge \exp\left(-2(1-\beta)\right),
and consequently:
-\frac{2}{\alpha}\left\langle\operatorname{vec}(X^{(k)} - X), A^T\Lambda A\operatorname{vec}(X^{(k)} - X)\right\rangle \le -\frac{2}{\alpha}\exp\left(-2(1-\beta)\right)\left\|A\operatorname{vec}(X^{(k)} - X)\right\|_F^2. \tag{14}
Then, by the fact that ‖Λ‖_2² ≤ 1 and the choice of the step-size α, we observe that the second term of (13) can be upper bounded by:
\frac{1}{\alpha^2}\left\|A^T\Lambda A\operatorname{vec}(X^{(k)} - X)\right\|_F^2 \le \frac{1}{\alpha}\left\|A\operatorname{vec}(X^{(k)} - X)\right\|_F^2. \tag{15}
Combining (14) and (15) and denoting γ = 1 − 2exp(−2(1 − β)), we come to the following conclusion:
\left\|\operatorname{vec}(X^{(k)} - X) - \tfrac{1}{\alpha}A^T\Lambda A\operatorname{vec}(X^{(k)} - X)\right\|_F^2 \le \|X^{(k)} - X\|_F^2 + \frac{\gamma}{\alpha}\left\|A\operatorname{vec}(X^{(k)} - X)\right\|_F^2 \le \|X^{(k)} - X\|_F^2 + \frac{\gamma\nu_{2R}^2}{\alpha}\|X^{(k)} - X\|_F^2,
where the last inequality follows from the SRIP condition and the fact that γ < 0 by the range of β. As a result, we get the following estimation:
\|X^{(k+1)} - X\|_F \le 2\left\|X^{(k)} - X - \tfrac{1}{\alpha}\nabla\tilde{\ell}_{\sigma_{k+1}}(X^{(k)})\right\|_F \le 2\sqrt{1 + \frac{\gamma\nu_{2R}^2}{\alpha}}\,\|X^{(k)} - X\|_F + \frac{2\|\epsilon\|_F}{\|A\|_2} \le 2\sqrt{1 + \frac{5\gamma}{6}}\,\|X^{(k)} - X\|_F + \frac{2\|\epsilon\|_F}{\|A\|_2}, \tag{16}
where the last inequality follows from the assumption α = ‖A‖_2² ≤ (6/5)ν_{2R}². Denote q_1 = 2√(1 + 5γ/6). The range of β tells us that q_1 ∈ (0.8165, 0.9082). Iterating (16), we obtain:
\|X^{(k+1)} - X\|_F \le q_1^{k+1}\|X^{(0)} - X\|_F + \frac{2}{1 - q_1}\cdot\frac{\|\epsilon\|_F}{\|A\|_2}.
Therefore, the first assertion concerning the recoverability is proven.
Suppose there is no noise or outliers, i.e., we have A(X) = b. In this case, it follows from (16) that:
\|X^{(k+1)} - X\|_F \le q_1\|X^{(k)} - X\|_F,
and then, the SRIP condition tells us that:
\|A(X^{(k+1)}) - b\|_F^2 \le \mu_{2R}^2\|X^{(k+1)} - X\|_F^2 \le \mu_{2R}^2 q_1^2\|X^{(k)} - X\|_F^2 \le \frac{\mu_{2R}^2}{\nu_{2R}^2}q_1^2\|A(X^{(k)}) - b\|_F^2 \le \frac{6}{5}q_1^2\|A(X^{(k)}) - b\|_F^2,
where the last inequality comes from the inequality chain μ_{2R}² ≤ ‖A‖_2² ≤ (6/5)ν_{2R}². Denote q_2 = 6q_1²/5. Then, q_2 ∈ (0.8, 0.9898). Therefore, the algorithm converges linearly to X in the least squares sense.
We now proceed to show the linear convergence in the ℓ̃_σ sense. From the inequality ‖X^{(k+1)} − Y^{(k+1)}‖_F² ≤ ‖X − Y^{(k+1)}‖_F², we obtain:
\frac{\alpha}{2}\|X^{(k+1)} - X^{(k)}\|_F^2 + \left\langle\nabla\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}), X^{(k+1)} - X^{(k)}\right\rangle \le \frac{\alpha}{2}\|X^{(k)} - X\|_F^2 + \left\langle\nabla\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}), X - X^{(k)}\right\rangle.
Combining this with Inequality (A1), we see that ℓ̃_{σ_{k+1}}(X^{(k+1)}) can be upper bounded by:
\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}) + \frac{\alpha}{2}\|X^{(k)} - X\|_F^2 + \left\langle\nabla\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}), X - X^{(k)}\right\rangle. \tag{17}
We need to upper bound ⟨∇ℓ̃_{σ_{k+1}}(X^{(k)}), X − X^{(k)}⟩ and (α/2)‖X^{(k)} − X‖_F² in terms of ℓ̃_{σ_{k+1}}(X^{(k)}). We first consider the second term. Under the SRIP condition, we have:
\|X^{(k)} - X\|_F^2 \le \frac{1}{\nu_{2R}^2}\|A(X^{(k)} - X)\|_F^2 = \frac{1}{\nu_{2R}^2}\|A(X^{(k)}) - b\|_F^2.
By setting δ = (y_i^k/σ_{k+1})², we get δ ≤ 2(1 − β). Lemma 2 tells us that:
\beta\frac{(y_i^k)^2}{\sigma_{k+1}^2} \le 1 - \exp\left(-\left(y_i^k/\sigma_{k+1}\right)^2\right).
Summing the above inequalities over i from 1 to p, we have:
\beta\|A(X^{(k)}) - b\|_F^2 \le \sigma_{k+1}^2\sum_{i=1}^p\left(1 - \exp\left(-\left(y_i^k/\sigma_{k+1}\right)^2\right)\right) = 2\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}).
Therefore, (α/2)‖X^{(k)} − X‖_F² can be bounded as follows:
\frac{\alpha}{2}\|X^{(k)} - X\|_F^2 \le \frac{\alpha}{2\nu_{2R}^2}\|A(X^{(k)} - X)\|^2 \le \frac{\alpha}{\beta\nu_{2R}^2}\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}). \tag{18}
We proceed to bound ⟨∇ℓ̃_{σ_{k+1}}(X^{(k)}), X − X^{(k)}⟩. It follows from (14) and Lemma 1 that:
\left\langle\nabla\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}), X - X^{(k)}\right\rangle \le -\exp\left(-2(1-\beta)\right)\|A(X^{(k)}) - b\|^2 \le -2\exp\left(-2(1-\beta)\right)\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}). \tag{19}
Combining (17)–(19) together, we get:
\tilde{\ell}_{\sigma_{k+1}}(X^{(k+1)}) \le \left(1 + \frac{\alpha}{\beta\nu_{2R}^2} - 2\exp\left(-2(1-\beta)\right)\right)\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}) \le \left(1 + \frac{6}{5\beta} - 2\exp\left(-2(1-\beta)\right)\right)\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}),
where the last inequality follows from α ≤ (6/5)ν_{2R}².
By Lemma 3, the function σ²(1 − exp(−t²/σ²)) is nondecreasing with respect to σ > 0. This, in connection with the fact that:
\sigma_{k+1} = \max\left\{\frac{\|A(X^{(k)}) - b\|_F}{\sqrt{2(1-\beta)}}, \hat{\sigma}\right\} \le \sigma_k = \max\left\{\frac{\|A(X^{(k-1)}) - b\|_F}{\sqrt{2(1-\beta)}}, \hat{\sigma}\right\},
yields ℓ̃_{σ_{k+1}}(X^{(k)}) ≤ ℓ̃_{σ_k}(X^{(k)}). Let q_3 = 1 + 6/(5β) − 2exp(−2(1 − β)); consequently, q_3 ∈ (0.2, 0.2620). We thus have:
\tilde{\ell}_{\sigma_{k+1}}(X^{(k+1)}) \le q_3\,\tilde{\ell}_{\sigma_{k+1}}(X^{(k)}) \le q_3\,\tilde{\ell}_{\sigma_k}(X^{(k)}).
The proof is now completed. ☐
The above results show that Algorithm 1 can recover X provided the magnitude of the noise is not too large. Moreover, they imply that the algorithm is safe when there is no noise.

5. Numerical Experiments

This section presents numerical experiments to illustrate the effectiveness of our methods. Empirical comparisons with other methods are implemented on synthetic and real data contaminated by outliers or non-Gaussian noise.
The following four algorithms are implemented. RMC-ℓ_σ-IHT and RMC-ℓ_σ-IST denote Algorithms 1 and 3 incorporated with the line-search rule, respectively. The approach proposed in [16] is denoted as MC-ℓ_2-IST, an iterative soft thresholding algorithm based on the least squares loss. The robust approach based on the LAD loss proposed in [17] is denoted by RMC-ℓ_1-ADM. Empirically, the value of σ in ℓ_σ is set to 0.5; the tuning parameter λ of RMC-ℓ_σ-IST and MC-ℓ_2-IST is set to λ = min{m,n}/(10 max{m,n}), while for RMC-ℓ_1-ADM, λ = 1/√(max{m,n}), as suggested in [17]. All the numerical computations are conducted on a desktop computer with an Intel i7-3770 CPU and 16 GB of RAM, running MATLAB R2013a. Some frequently used notations are introduced first in Table 1. A bold number in the tables of this section indicates the best result among the competitors.

5.1. Evaluation on Synthetic Data

The synthetic datasets are generated in the following way:
  • Generating a low rank matrix: We first generate an m × n matrix with i.i.d. Gaussian entries ∼ N(0, 1), where m = n = 1000. Then, a matrix M of rank ρ_r·m is obtained from the above matrix by rank truncation, where ρ_r varies from 0.04 to 0.4.
  • Adding outliers: We create a zero matrix E ∈ ℝ^{m×n} and uniformly at random sample ρ_o·m² entries, where ρ_o varies from 0 to 0.6. These entries are randomly drawn from the chi-square distribution with four degrees of freedom. Multiplied by 10, the matrix E is used as the sparse error matrix.
  • Missing entries: ρ_m·m² of the entries are randomly missing, with ρ_m varying over {0, 10%, 20%, 30%}. Finally, the observed matrix is denoted as B = P_Ω(M + E). A minimal sketch of this data-generating process is given after this list.
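Below is a minimal NumPy sketch of this data-generating process. The helper name, random seed and the use of Python are our own illustrative choices, not the authors' MATLAB code.

```python
import numpy as np

def make_synthetic(m=1000, n=1000, rho_r=0.1, rho_o=0.1, rho_m=0.2, seed=0):
    """Low-rank matrix + sparse chi-square(4) outliers (scaled by 10) + missing entries."""
    rng = np.random.default_rng(seed)
    # rank-truncated Gaussian matrix
    G = rng.normal(size=(m, n))
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    r = int(rho_r * m)
    M = (U[:, :r] * s[:r]) @ Vt[:r, :]
    # sparse outliers drawn from chi^2(4), multiplied by 10
    E = np.zeros((m, n))
    idx = rng.choice(m * n, size=int(rho_o * m * n), replace=False)
    E.flat[idx] = 10.0 * rng.chisquare(df=4, size=idx.size)
    # observation mask with a rho_m fraction of missing entries
    mask = rng.random((m, n)) >= rho_m
    B = (M + E) * mask          # B = P_Omega(M + E)
    return B, mask, M
```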
RMC-ℓ_σ-IHT (Algorithm 1), RMC-ℓ_σ-IST (Algorithm 3) and RMC-ℓ_1-ADM [17] are implemented on the matrix completion problem with the datasets generated above. For these three algorithms, the same all-zero initial guess X^{(0)} = 0 is used. The stopping criterion is ‖X^{(k+1)} − X^{(k)}‖_F ≤ 10^{−3}, or the number of iterations reaching 500. For each tuple (ρ_m, ρ_r, ρ_o), we repeat 10 runs. The algorithm is regarded as successful if the relative error of the result X̂ satisfies ‖X̂ − M‖_F/‖M‖_F ≤ 10^{−1}.
Experimental results of RMC-ℓ_σ-IHT (top), RMC-ℓ_σ-IST (middle) and RMC-ℓ_1-ADM (bottom) are reported in Figure 2 in terms of phase transition diagrams. In Figure 2, white zones denote perfect recovery in all the experiments, while black ones denote failure in all the experiments. In each diagram, the x-axis represents the ratio of rank, i.e., ρ_r = rank/m ∈ [0.04, 0.4], and the y-axis represents the level of outliers, i.e., ρ_o = #outliers/m² ∈ [0, 0.6]. The level of missing entries ρ_m varies from left to right in each row. As shown in Figure 2, our approach outperforms RMC-ℓ_1-ADM as ρ_o and ρ_r increase. We also observe that RMC-ℓ_σ-IHT performs better than RMC-ℓ_σ-IST when the level of outliers increases, while RMC-ℓ_σ-IST outperforms RMC-ℓ_σ-IHT when the ratio of missing entries increases.
Comparisons of the computational time and the relative error are also reported in Table 2. In this experiment, the level of missing entries is ρ_m ∈ {20%, 30%}, the ratio of rank is ρ_r = 0.1 and the level of outliers ρ_o varies over {0.1, 0.15, …, 0.6}. For each ρ_o, we randomly generate 20 instances and average the results. In the table, “time” denotes the CPU time in seconds, and “rel.err” represents the relative error introduced in the previous paragraph. The results also demonstrate the improved performance of our methods in most of the cases in terms of CPU time and relative error, especially for RMC-ℓ_σ-IHT.

5.2. Image Inpainting and Denoising

One typical application of matrix completion is the image inpainting problem [4]. The datasets and the experiments are set up as follows:
  • We first choose five gray images, named “Baboon”, “Camera Man”, “Lake”, “Lena” and “Pepper” (the size of each image is 512 × 512 ), each of which is stored in a matrix M.
  • The outlier matrix E is added to each M, where E is generated in the same way as in the previous experiment, and the level of outliers ρ_o varies over {0.3, 0.4, 0.5, 0.6, 0.7}.
  • The ratio of missing entries is set to 30%. RMC-ℓ_σ-IST, RMC-ℓ_1-ADM and MC-ℓ_2-IST are tested in this experiment. In addition, we also test the Cauchy loss-based model min_X ℓ_c(X) + λ‖X‖_*, which is denoted as RMC-ℓ_c-IST, where:
    \ell_c(X) := \frac{c^2}{2}\sum_{(i,j)\in\Omega}\ln\left(1 + \left(X_{ij} - B_{ij}\right)^2/c^2\right),
    where c > 0 is a parameter controlling the robustness. Empirically, we set c = 0.15. Other parameters are set the same as those of RMC-ℓ_σ-IST. The above model is also solved by soft thresholding, similarly to Algorithm 3. Note that the Cauchy loss has a similar shape to the Welsch loss and also enjoys the redescending property; such a loss function is also frequently used in the robust statistics literature. The initial guess is X^{(0)} = 0. The stopping criterion is ‖X^{(k+1)} − X^{(k)}‖_F ≤ 10^{−2}, or the number of iterations exceeding 500.
Detailed comparison results in terms of relative error and CPU time are listed in Table 3, from which one can see the efficiency of our method. Indeed, the experimental results show that our method terminates within 80 iterations. According to the relative error in Table 3, our method performs the best in almost all cases, followed by RMC-ℓ_c-IST. This is not surprising because the Cauchy loss-based model enjoys similar properties to the proposed model. We also observe that the RMC-ℓ_1-ADM algorithm cannot deal with situations where images are heavily contaminated by outliers. This illustrates the robustness of our method.
To better illustrate the robustness of our method empirically, we also show the images recovered by the compared methods in Figure 3. To save space, we only list the recovery results for the case ρ_o = 0.6 with 30% missing entries. In Figure 3, the first column shows the five original images, namely “Baboon”, “Camera Man”, “Lake”, “Lena” and “Pepper”. Images in the second column are the contaminated images with 60% outliers and 30% missing entries. The recovered results for each image are reported in the remaining columns, obtained respectively by RMC-ℓ_σ-IST, RMC-ℓ_1-ADM, MC-ℓ_2-IST and RMC-ℓ_c-IST. One can observe that the images recovered by our method retain most of the important information, followed by RMC-ℓ_c-IST.
Our next experiment is designed to show the effectiveness of our method in dealing with non-Gaussian noise. We assume that the entries of the noise matrix E are i.i.d. draws from Student’s t distribution with three degrees of freedom. We then scale E by a factor s_n and denote the corresponding noise matrix by E := s_n · E. The noise scale factor s_n varies in {0.01, 0.05, 0.1}, and ρ_m varies in {0.1, 0.3, 0.5}. The results are shown in Table 4, where the image “Building” is used. We list the recovered images in Figure 4 for the case s_n = 0.05. From the table and the recovered images, we can see that our method also performs well when the image is contaminated only by non-Gaussian noise.

5.3. Background Subtraction

Background subtraction, also known as foreground detection, is one of the major tasks in computer vision. It aims at detecting changes in image or video sequences from static cameras and finds applications in video surveillance, human motion analysis and human–machine interaction [35].
Given a sequence of images, one can cast them into a matrix B by vectorizing each image and then stacking row by row. In many cases, it is reasonable to assume that the background varies little. Consequently, the background forms a low rank matrix M, while the foreground activity is spatially localized and can be seen as the error matrix E. Correspondingly, the image sequence matrix B can be expressed as the sum of a low rank background matrix M and a sparse error matrix E, which represents the activity in the scene.
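The sketch below illustrates the stacking step described above. Each grayscale frame is flattened into a row of B; once a low-rank estimate X̂ of B is recovered (with rank R = 1 in the experiment below), each row of X̂ reshaped back to the frame size is a background frame, and the residual carries the foreground. The frame data and helper names are hypothetical.

```python
import numpy as np

def stack_frames(frames):
    """Stack a list of (h, w) grayscale frames row by row into the data matrix B,
    so that row k of B is the vectorized k-th frame."""
    return np.stack([f.ravel() for f in frames], axis=0)   # shape: (num_frames, h * w)

def frame_background(X_hat, k, shape):
    """Row k of a low-rank estimate X_hat of B, reshaped back to a frame:
    the background of frame k; frames[k] - frame_background(...) is the foreground."""
    return X_hat[k].reshape(shape)
```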
In practice, it is reasonable to assume that some entries of the image sequence are missing and the images are contaminated by noise or outliers. Therefore, the foreground object detection problem can be formulated as a robust matrix completion problem. Ref. [36] proposed to use the LAD-loss-based matrix completion approach to separate M and E. The data of this experiment were downloaded from http://perception.i2r.a-star.edu.sg/bkmodel/bkindex.html.
Our experiment in this scenario is implemented as follows:
  • We choose the sequence named “Restaurant” for our experiment, which consists of 3057 color images, each of size 160 × 120. From the sequence, we pick 100 consecutive images and convert them to gray images to form the original matrix B of size 100 × 19200, where each row is a vector converted from an image.
  • Two types of non-Gaussian noise are added to B. The first type of noise is drawn from the chi-square distribution with four degrees of freedom; the second type is drawn from Student’s t distribution with three degrees of freedom. The two types of noise are simultaneously rescaled by s_n ∈ {0.01, 0.02, 0.05}. Finally, 50% of the entries are missing at random.
  • RMC-ℓ_σ-IHT and RMC-ℓ_1-ADM are used to deal with this problem. We set R = 1 in RMC-ℓ_σ-IHT. The initial guess is the zero matrix. The stopping criterion is ‖X^{(k+1)} − X^{(k)}‖_F ≤ 10^{−2}, or the number of iterations exceeding 200.
The running time and relative error are reported in Table 5. From the table, we see that the proposed approach is faster and gives smaller relative errors. To give an intuitive impression, we show five frames from each image sequence in Figure 5. We observe that, when the image sequences are corrupted by noise (s_n = 0.05) and missing entries, both methods can successfully extract the background and foreground images. Our method appears to perform better, since the details of the background images are recovered well, whereas the LAD-based approach adds some details of the background to the foreground. It can also be observed that neither method recovers the missing entries in the foreground; more effective approaches may be needed to achieve this.

6. Concluding Remarks

The correntropy loss function has been studied in the literature [20,21] and has found many successful applications [29,30,31]. Learning with correntropy-induced losses can be resistant to non-Gaussian noise and outliers while ensuring good prediction accuracy with a properly chosen parameter σ. This paper addressed the robust matrix completion problem based on the correntropy loss. The proposed approach was shown to deal efficiently with non-Gaussian noise and sparse gross errors. The nonconvexity of the proposed approach is due to the use of the ℓ_σ loss. Based on this loss, we proposed two nonconvex optimization models and extended them to the more general robust affine rank minimization problem. Two gradient-based iterative schemes to solve the nonconvex optimization problems were presented, with convergence rate results obtained under proper assumptions. It would be interesting to investigate similar convergence and recoverability results for models based on other redescending-type loss functions. Numerical experiments verified the improved performance of our methods, where, empirically, the parameter σ of ℓ_σ is set to 0.5 and λ for the nuclear norm model (6) is set to λ = min{m,n}/(10 max{m,n}).

Acknowledgments

The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC AdG A-DATADRIVE-B (290923). This paper reflects only the authors’ views; the Union is not liable for any use that may be made of the contained information. Research Council KUL: GOA/10/09 MaNet, CoE PFV/10/002 (OPTEC), BIL12/11T; PhD/Postdoc grants. Flemish Government: FWO: PhD/Postdoc grants, projects: G.0377.12 (Structured systems), G.088114N (Tensor-based data similarity); IWT: PhD/Postdoc grants, projects: SBO POM (100031); iMinds Medical Information Technologies SBO 2014. Belgian Federal Science Policy Office: IUAP P7/19 (DYSCO, Dynamical systems, control and optimization, 2012–2017).

Author Contributions

Y.Y., Y.F., and J.A.K.S. proposed and discussed the idea; Y.Y. and Y.F. conceived and designed the experiments; Y.Y. performed the experiments; Y.Y. and Y.F. analyzed the data; J.A.K.S. contributed analysis tools; Y.Y. and Y.F. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Lipschitz Continuity of the Gradient of ℓ_σ and Some Propositions

The propositions given in this Appendix hold for both ℓ_σ and ℓ̃_σ. For simplicity, we only present the formulas for ℓ̃_σ. We first introduce some notation. Let vec(·) be the vectorization operator over any matrix space ℝ^{s×t}, with vec(B) ∈ ℝ^{st} and:
\operatorname{vec}(B)_{(i-1)t+j} = B_{ij}, \quad 1 \le i \le s, \ 1 \le j \le t, \ B \in \mathbb{R}^{s \times t}.
We further define the matrix A ∈ ℝ^{p×mn}, where:
A^T = \left[\operatorname{vec}(A_1), \operatorname{vec}(A_2), \ldots, \operatorname{vec}(A_p)\right].
Based on the above notation, the vectorized form of A(X) is written as:
\operatorname{vec}(A(X)) = A\operatorname{vec}(X),
and the gradient of ℓ̃_σ at X can be rewritten as:
\operatorname{vec}\left(\nabla\tilde{\ell}_\sigma(X)\right) = A^T\Lambda\left(A\operatorname{vec}(X) - b\right),
where Λ ∈ ℝ^{p×p} is a diagonal matrix with:
\Lambda_{ii} = \exp\left(-\left(\langle A_i, X\rangle - b_i\right)^2/\sigma^2\right), \quad 1 \le i \le p.
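These identities can be checked numerically. The following sketch builds A and Λ for a small random instance and verifies that A^T Λ (A vec(X) − b) coincides with the direct gradient Σ_i exp(−(⟨A_i, X⟩ − b_i)²/σ²)(⟨A_i, X⟩ − b_i) A_i; the dimensions, data and σ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, sigma = 3, 4, 5, 0.5
A_list = [rng.normal(size=(m, n)) for _ in range(p)]
A = np.stack([Ai.ravel() for Ai in A_list])           # p x (m*n), rows are vec(A_i)
X, b = rng.normal(size=(m, n)), rng.normal(size=p)

r = A @ X.ravel() - b                                 # residuals <A_i, X> - b_i
Lam = np.diag(np.exp(-r ** 2 / sigma ** 2))           # diagonal weight matrix Lambda
grad_vec = A.T @ Lam @ r                              # vectorized gradient

grad_direct = sum(np.exp(-ri ** 2 / sigma ** 2) * ri * Ai for ri, Ai in zip(r, A_list))
print(np.allclose(grad_vec.reshape(m, n), grad_direct))   # True
```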
Let ‖A‖_2 be the spectral norm of A. The following proposition shows that the gradient of ℓ̃_σ is Lipschitz continuous.
Proposition A1.
The gradient of ℓ̃_σ is Lipschitz continuous. That is, for any X, Y ∈ ℝ^{m×n}, it holds that:
\left\|\nabla\tilde{\ell}_\sigma(X) - \nabla\tilde{\ell}_\sigma(Y)\right\|_F \le \|A\|_2^2\,\|X - Y\|_F.
Proof. 
With the notation introduced above, we know that:
\left\|\nabla\tilde{\ell}_\sigma(X) - \nabla\tilde{\ell}_\sigma(Y)\right\|_F = \left\|A^T\Lambda_X\left(A\operatorname{vec}(X) - b\right) - A^T\Lambda_Y\left(A\operatorname{vec}(Y) - b\right)\right\|_F \le \|A\|_2\left\|\Lambda_X\left(A\operatorname{vec}(X) - b\right) - \Lambda_Y\left(A\operatorname{vec}(Y) - b\right)\right\|_F,
where Λ_X and Λ_Y are the diagonal matrices corresponding to ∇ℓ̃_σ(X) and ∇ℓ̃_σ(Y), respectively. It remains to show that:
\left\|\Lambda_X\left(A\operatorname{vec}(X) - b\right) - \Lambda_Y\left(A\operatorname{vec}(Y) - b\right)\right\|_F \le \|A\|_2\,\|X - Y\|_F.
By letting z_1 = A vec(X) − b and z_2 = A vec(Y) − b, we observe that:
\left\|\Lambda_X\left(A\operatorname{vec}(X) - b\right) - \Lambda_Y\left(A\operatorname{vec}(Y) - b\right)\right\|_F^2 = \sum_{i=1}^p\left(\exp\left(-z_{1,i}^2/\sigma^2\right)z_{1,i} - \exp\left(-z_{2,i}^2/\sigma^2\right)z_{2,i}\right)^2.
Combining this with the fact that for any t_1, t_2 ∈ ℝ and σ > 0,
\left|\exp\left(-t_1^2/\sigma^2\right)t_1 - \exp\left(-t_2^2/\sigma^2\right)t_2\right| \le |t_1 - t_2|,
we have:
\left\|\Lambda_X\left(A\operatorname{vec}(X) - b\right) - \Lambda_Y\left(A\operatorname{vec}(Y) - b\right)\right\|_F^2 \le \left\|A\operatorname{vec}(X) - A\operatorname{vec}(Y)\right\|_F^2 \le \|A\|_2^2\,\|X - Y\|_F^2.
As a result, ‖∇ℓ̃_σ(X) − ∇ℓ̃_σ(Y)‖_F ≤ ‖A‖_2² ‖X − Y‖_F. This completes the proof. ☐
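As a sanity check of Proposition A1, the following sketch evaluates the ratio ‖∇ℓ̃_σ(x) − ∇ℓ̃_σ(y)‖ / ‖x − y‖ for random pairs (in vectorized form) and compares it with ‖A‖_2²; the instance sizes and data are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, sigma = 4, 5, 8, 0.5
A = rng.normal(size=(p, m * n))                  # rows are vec(A_i)
b = rng.normal(size=p)
L = np.linalg.norm(A, 2) ** 2                    # candidate Lipschitz constant ||A||_2^2

def grad(x):
    # vectorized gradient A^T Lambda (A x - b)
    r = A @ x - b
    return A.T @ (np.exp(-r ** 2 / sigma ** 2) * r)

ratios = []
for _ in range(1000):
    x, y = rng.normal(size=m * n), rng.normal(size=m * n)
    ratios.append(np.linalg.norm(grad(x) - grad(y)) / np.linalg.norm(x - y))
print(max(ratios) <= L + 1e-12)                  # True: the bound of Proposition A1 holds
```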
The following conclusion is a consequence of Proposition A1.
Proposition A2.
For any X, Y ∈ ℝ^{m×n}, it holds that:
\tilde{\ell}_\sigma(X) \le \tilde{\ell}_\sigma(Y) + \left\langle\nabla\tilde{\ell}_\sigma(Y), X - Y\right\rangle + \frac{\|A\|_2^2}{2}\|X - Y\|_F^2. \tag{A1}
Proposition A3.
Let {X^{(k)}} be generated by Algorithm 1 or 3 with α > L = ‖A‖_2². Then, it holds that:
\|X^{(k)} - X^{(k+1)}\|_F \to 0.
Proof. 
We first consider {X^{(k)}} generated by Algorithm 1. From the fact that rank(X^{(k)}) ≤ R and X^{(k+1)} is the best rank-R approximation of Y^{(k+1)}, we know that:
\frac{\alpha}{2}\|X^{(k+1)} - X^{(k)}\|_F^2 + \left\langle\nabla\tilde{\ell}_\sigma(X^{(k)}), X^{(k+1)} - X^{(k)}\right\rangle = \frac{\alpha}{2}\left\|X^{(k+1)} - X^{(k)} + \tfrac{1}{\alpha}\nabla\tilde{\ell}_\sigma(X^{(k)})\right\|_F^2 - \frac{\alpha}{2}\left\|\tfrac{1}{\alpha}\nabla\tilde{\ell}_\sigma(X^{(k)})\right\|_F^2 \le 0.
This together with (A1) gives:
\tilde{\ell}_\sigma(X^{(k+1)}) \le \tilde{\ell}_\sigma(X^{(k)}) - \frac{\alpha - L}{2}\|X^{(k+1)} - X^{(k)}\|_F^2,
which implies that the sequence {ℓ̃_σ(X^{(k)})} is monotonically decreasing. Due to the lower boundedness of ℓ̃_σ, we see that lim_{k→∞} ‖X^{(k+1)} − X^{(k)}‖_F = 0.
When {X^{(k)}} is generated by Algorithm 3, after a simple computation, we have that X^{(k+1)} is the minimizer of:
\min_X \frac{1}{2}\|X - Y^{(k+1)}\|_F^2 + \frac{\lambda}{\alpha}\|X\|_*.
We thus have:
\frac{\alpha}{2}\|X^{(k+1)} - X^{(k)}\|_F^2 + \left\langle\nabla\tilde{\ell}_\sigma(X^{(k)}), X^{(k+1)} - X^{(k)}\right\rangle + \lambda\|X^{(k+1)}\|_* - \lambda\|X^{(k)}\|_* = \frac{\alpha}{2}\|X^{(k+1)} - Y^{(k+1)}\|_F^2 + \lambda\|X^{(k+1)}\|_* - \frac{\alpha}{2}\left\|\tfrac{1}{\alpha}\nabla\tilde{\ell}_\sigma(X^{(k)})\right\|_F^2 - \lambda\|X^{(k)}\|_* \le 0.
This in connection with Proposition A2 reveals:
\tilde{\ell}_\sigma(X^{(k+1)}) + \lambda\|X^{(k+1)}\|_* \le \tilde{\ell}_\sigma(X^{(k)}) + \lambda\|X^{(k)}\|_* - \frac{\alpha - L}{2}\|X^{(k+1)} - X^{(k)}\|_F^2.
Analogously, we have lim_{k→∞} ‖X^{(k+1)} − X^{(k)}‖_F = 0. This completes the proof. ☐

References

  1. Srebro, N.; Jaakkola, T. Weighted low-rank approximations. In Proceedings of the 20th International Conference on Machine Learning, Copenhagen, Denmark, 11–12 June 2003; Volume 3, pp. 720–727. [Google Scholar]
  2. The Netflix Prize Website. Available online: http://www.netflixprize.com (accessed on 2 March 2018).
  3. Komodakis, N. Image completion using global optimization. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; Volume 1, pp. 442–452. [Google Scholar]
  4. Bertalmio, M.; Sapiro, G.; Caselles, V.; Ballester, C. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 23–28 July 2000; pp. 417–424. [Google Scholar]
  5. Ji, H.; Liu, C.; Shen, Z.; Xu, Y. Robust video denoising using low rank matrix completion. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 1791–1798. [Google Scholar]
  6. Candès, E.J.; Plan, Y. Matrix completion with noise. Proc. IEEE 2010, 98, 925–936. [Google Scholar] [CrossRef]
  7. Candès, E.J.; Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009, 9, 717–772. [Google Scholar] [CrossRef]
  8. Gross, D. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inf. Theory 2011, 57, 1548–1566. [Google Scholar] [CrossRef]
  9. Keshavan, R.H.; Montanari, A.; Oh, S. Matrix completion from noisy entries. J. Mach. Learn. Res. 2010, 99, 2057–2078. [Google Scholar]
  10. Koltchinskii, V.; Lounici, K.; Tsybakov, A.B. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Stat. 2011, 39, 2302–2329. [Google Scholar] [CrossRef]
  11. Signoretto, M.; Van de Plas, R.; De Moor, B.; Suykens, J.A.K. Tensor versus matrix completion: A comparison with application to spectral data. IEEE Signal Process. Lett. 2011, 18, 403–406. [Google Scholar] [CrossRef]
  12. Hu, Y.; Zhang, D.; Ye, J.; Li, X.; He, X. Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization. IEEE Trans. Pattern Anal. 2013, 35, 2117–2130. [Google Scholar] [CrossRef] [PubMed]
  13. Cai, J.F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  14. Goldfarb, D.; Ma, S. Convergence of fixed-point continuation algorithms for matrix rank minimization. Found. Comput. Math. 2011, 11, 183–210. [Google Scholar] [CrossRef]
  15. Ji, S.; Ye, J. An accelerated gradient method for trace norm minimization. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 457–464. [Google Scholar]
  16. Ma, S.; Goldfarb, D.; Chen, L. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 2011, 128, 321–353. [Google Scholar] [CrossRef]
  17. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM (JACM) 2011, 58, 11. [Google Scholar] [CrossRef]
  18. Hastie, T. Matrix Completion and Large-Scale SVD Computations. Available online: http://www.stanford.edu/~hastie/TALKS/SVD_hastie.pdf (accessed on 21 February 2018).
  19. Nie, F.; Wang, H.; Cai, X.; Huang, H.; Ding, C. Robust matrix completion via joint Schatten p-norm and lp-norm minimization. In Proceedings of the 2012 IEEE 12th International Conference on Data Mining (ICDM), Brussels, Belgium, 10–13 December 2012; pp. 566–574. [Google Scholar]
  20. Liu, W.; Pokharel, P.P.; Príncipe, J.C. Correntropy: Properties and applications in non-Gaussian signal processing. IEEE Trans. Signal Process. 2007, 55, 5286–5298. [Google Scholar] [CrossRef]
  21. Príncipe, J.C. Information Theoretic Learning: Renyi’s Entropy and Kernel Perspectives; Springer Science & Business Media: Berlin, Germany, 2010. [Google Scholar]
  22. Feng, Y.; Huang, X.; Shi, L.; Yang, Y.; Suykens, J.A. Learning with the maximum correntropy criterion induced losses for regression. J. Mach. Learn. Res. 2015, 16, 993–1034. [Google Scholar]
  23. Jain, P.; Meka, R.; Dhillon, I.S. Guaranteed Rank Minimization via Singular Value Projection. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 6–11 December 2010; Volume 23, pp. 937–945. [Google Scholar]
  24. Beck, A.; Teboulle, M.A. A linearly convergent algorithm for solving a class of nonconvex/affine feasibility problems. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: Berlin, Germany, 2011; pp. 33–48. [Google Scholar]
  25. Rohde, A.; Tsybakov, A.B. Estimation of high-dimensional low-rank matrices. Ann. Stat. 2011, 39, 887–930. [Google Scholar] [CrossRef]
  26. Chen, Y.; Xu, H.; Caramanis, C.; Sanghavi, S. Robust Matrix Completion with Corrupted Columns. arXiv, 2011; arXiv:1102.2254. [Google Scholar]
  27. Huber, P.J. Robust Statistics; Springer: Berlin, Germany, 2011. [Google Scholar]
  28. Warmuth, M.K. From Relative Entropies to Bregman Divergences to the Design of Convex and Tempered Non-Convex Losses. Available online: http://classes.soe.ucsc.edu/cmps290c/Spring13/lect/9/holycow.pdf (accessed on 21 February 2018).
  29. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Príncipe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014, 21, 880–884. [Google Scholar]
  30. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Príncipe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387. [Google Scholar] [CrossRef]
  31. Chen, B.; Liu, X.; Zhao, H.; Príncipe, J.C. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77. [Google Scholar] [CrossRef]
  32. Feng, Y.; Fan, J.; Suykens, J. A Statistical Learning Approach to Modal Regression. arXiv, 2017; arXiv:1702.05960. [Google Scholar]
  33. Recht, B.; Fazel, M.; Parrilo, P.A. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 2010, 52, 471–501. [Google Scholar] [CrossRef]
  34. Bolte, J.; Sabach, S.; Teboulle, M. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 2014, 146, 459–494. [Google Scholar] [CrossRef]
  35. Li, L.; Huang, W.; Gu, I.Y.H.; Tian, Q. Statistical modeling of complex backgrounds for foreground object detection. IEEE Trans. Image Process. 2004, 13, 1459–1472. [Google Scholar] [CrossRef] [PubMed]
  36. Wright, J.; Ganesh, A.; Rao, S.; Peng, Y.; Ma, Y. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 7–8 December 2009; pp. 2080–2088. [Google Scholar]
Figure 1. Different losses: least squares, absolute deviation loss (LAD), Huber’s loss and ρ σ (Welsch loss).
Figure 2. Phase transition diagrams of RMC-ℓ_σ-IHT (Algorithm 1), RMC-ℓ_σ-IST (Algorithm 3) and RMC-ℓ_1-ADM [17]. The first row: RMC-ℓ_σ-IHT; the second row: RMC-ℓ_σ-IST; the last row: RMC-ℓ_1-ADM. x-axis: ρ_r ∈ [0.04, 0.4]; y-axis: ρ_o ∈ [0, 0.6]. From the first column to the last column, ρ_m varies from 0 to 30%.
Figure 3. Comparison of RMC-ℓ_σ-IST, RMC-ℓ_1-ADM, MC-ℓ_2-IST and RMC-ℓ_c-IST on different images with 60% outliers and 30% missing entries. (a) The original low rank images; (b) images with 30% missing entries and contaminated by 60% outliers; (c) images recovered by RMC-ℓ_σ-IST (Algorithm 3); (d) images recovered by RMC-ℓ_1-ADM [17]; (e) images recovered by MC-ℓ_2-IST [16]; (f) images recovered by RMC-ℓ_c-IST.
Figure 4. Recovery results of RMC-ℓ_σ-IST (third), RMC-ℓ_1-ADM (fourth) and MC-ℓ_2-IST (fifth) on the image “Building” contaminated by non-Gaussian noise with s_n = 0.05 and 30% missing entries.
Figure 5. Comparison between RMC-ℓ_σ-IHT (Algorithm 1) and RMC-ℓ_1-ADM [17] on the image sequence “Restaurant” with ρ_m = 50% and contaminated by two types of non-Gaussian noise with s_n = 0.05. (a) The original image sequence; (b) the image sequence with missing entries and contaminated by noise; (c) background extracted by RMC-ℓ_σ-IHT (Algorithm 1); (d) foreground extracted by RMC-ℓ_σ-IHT (Algorithm 1); (e) background extracted by RMC-ℓ_1-ADM [17]; (f) foreground extracted by RMC-ℓ_1-ADM [17].
Table 1. Notations used in the experiments.
| Notation | Description |
|---|---|
| ρ_r | the ratio of the rank to the dimensionality of a matrix |
| ρ_o | the ratio of outliers to the number of entries of a matrix |
| ρ_m | the level of missing entries |
| s_n | the scale factor of the noise |
Table 2. Comparison of RMC-ℓ_σ-IHT (Algorithm 1), RMC-ℓ_σ-IST (Algorithm 3) and RMC-ℓ_1-ADM [17] in terms of CPU time and relative error on synthetic data. ρ_m ∈ {0.2, 0.3}, ρ_r = 0.1. rel.err, relative error.
| ρ_m | ρ_o | RMC-ℓ_σ-IHT (Algorithm 1) time / rel.err | RMC-ℓ_σ-IST (Algorithm 3) time / rel.err | RMC-ℓ_1-ADM [17] time / rel.err |
|---|---|---|---|---|
| 0.2 | 0.1 | 15.43 / 3.80 × 10^−03 | 20.53 / 4.55 × 10^−02 | 19.24 / 2.58 × 10^−06 |
| 0.2 | 0.15 | 15.31 / 4.40 × 10^−03 | 21.26 / 4.96 × 10^−02 | 18.32 / 2.33 × 10^−06 |
| 0.2 | 0.2 | 16.93 / 5.40 × 10^−03 | 22.95 / 5.53 × 10^−02 | 48.97 / 2.82 × 10^−04 |
| 0.2 | 0.25 | 19.04 / 5.80 × 10^−03 | 26.41 / 6.23 × 10^−02 | 243.80 / 1.07 × 10^−01 |
| 0.2 | 0.3 | 27.10 / 7.00 × 10^−03 | 29.47 / 7.01 × 10^−02 | 137.99 / 3.16 × 10^−01 |
| 0.2 | 0.35 | 26.35 / 8.00 × 10^−03 | 36.03 / 8.10 × 10^−02 | 99.26 / 4.86 × 10^−01 |
| 0.2 | 0.4 | 23.91 / 1.03 × 10^−02 | 37.41 / 9.41 × 10^−02 | 79.85 / 6.38 × 10^−01 |
| 0.2 | 0.45 | 29.64 / 1.24 × 10^−02 | 45.68 / 1.10 × 10^−01 | 67.45 / 7.77 × 10^−01 |
| 0.2 | 0.5 | 40.41 / 1.69 × 10^−02 | 61.39 / 1.37 × 10^−01 | 60.08 / 9.52 × 10^−01 |
| 0.2 | 0.55 | 60.28 / 2.45 × 10^−02 | 103.87 / 1.80 × 10^−01 | 68.52 / 1.39 × 10^+00 |
| 0.2 | 0.6 | 102.19 / 3.69 × 10^−02 | 154.04 / 2.65 × 10^−01 | 144.37 / 2.86 × 10^+00 |
| 0.3 | 0.1 | 16.38 / 5.20 × 10^−03 | 24.14 / 5.66 × 10^−02 | 24.81 / 2.86 × 10^−06 |
| 0.3 | 0.15 | 20.14 / 5.00 × 10^−03 | 23.85 / 6.41 × 10^−02 | 110.67 / 8.30 × 10^−03 |
| 0.3 | 0.2 | 22.83 / 6.00 × 10^−03 | 25.92 / 7.00 × 10^−02 | 117.91 / 1.15 × 10^−01 |
| 0.3 | 0.25 | 20.71 / 7.00 × 10^−03 | 28.93 / 7.97 × 10^−02 | 118.10 / 3.08 × 10^−01 |
| 0.3 | 0.3 | 20.77 / 8.80 × 10^−03 | 32.99 / 9.21 × 10^−02 | 89.56 / 4.68 × 10^−01 |
| 0.3 | 0.35 | 21.28 / 8.20 × 10^−03 | 33.72 / 9.09 × 10^−02 | 88.73 / 4.66 × 10^−01 |
| 0.3 | 0.4 | 27.64 / 1.15 × 10^−02 | 41.53 / 1.05 × 10^−01 | 75.07 / 5.98 × 10^−01 |
| 0.3 | 0.45 | 32.38 / 1.40 × 10^−02 | 48.45 / 1.23 × 10^−01 | 71.14 / 7.13 × 10^−01 |
| 0.3 | 0.5 | 44.53 / 1.68 × 10^−02 | 84.67 / 1.50 × 10^−01 | 73.63 / 8.02 × 10^−01 |
| 0.3 | 0.55 | 62.23 / 2.26 × 10^−02 | 125.48 / 1.95 × 10^−01 | 78.34 / 8.84 × 10^−01 |
| 0.3 | 0.6 | 92.14 / 3.26 × 10^−02 | 241.35 / 2.78 × 10^−01 | 74.09 / 1.07 × 10^+00 |
Table 3. Experimental results of RMC-ℓ_σ-IST (Algorithm 3), RMC-ℓ_1-ADM [17], MC-ℓ_2-IST [16] and RMC-ℓ_c-IST on different images with ρ_r = 0.1, ρ_m = 0.3 and ρ_o varying from 0.3 to 0.7.
| ρ_o | Method | Baboon time / rel.err | Camera Man time / rel.err | Lake time / rel.err | Lena time / rel.err | Pepper time / rel.err |
|---|---|---|---|---|---|---|
| 0.3 | RMC-ℓ_σ-IST (Algorithm 3) | 3.17 / 1.46 × 10^−02 | 3.55 / 1.74 × 10^−02 | 3.79 / 1.61 × 10^−02 | 4.36 / 2.05 × 10^−02 | 3.80 / 1.10 × 10^−02 |
| 0.3 | RMC-ℓ_1-ADM [17] | 32.22 / 2.86 × 10^−02 | 35.87 / 4.36 × 10^−02 | 26.74 / 4.57 × 10^−02 | 20.67 / 3.98 × 10^−02 | 33.08 / 2.46 × 10^−02 |
| 0.3 | MC-ℓ_2-IST [16] | 68.33 / 4.35 × 10^+00 | 72.44 / 4.44 × 10^+00 | 68.39 / 4.14 × 10^+00 | 68.68 / 4.22 × 10^+00 | 68.38 / 3.07 × 10^+00 |
| 0.3 | RMC-ℓ_c-IST | 5.19 / 1.38 × 10^−02 | 5.60 / 1.83 × 10^−02 | 5.24 / 1.70 × 10^−02 | 4.73 / 2.46 × 10^−02 | 4.36 / 1.61 × 10^−02 |
| 0.4 | RMC-ℓ_σ-IST (Algorithm 3) | 3.76 / 1.73 × 10^−02 | 3.94 / 2.15 × 10^−02 | 4.69 / 1.96 × 10^−02 | 4.58 / 2.41 × 10^−02 | 4.91 / 1.42 × 10^−02 |
| 0.4 | RMC-ℓ_1-ADM [17] | 30.93 / 3.51 × 10^−02 | 36.76 / 5.16 × 10^−02 | 26.67 / 5.48 × 10^−02 | 22.41 / 4.76 × 10^−02 | 32.18 / 3.28 × 10^−02 |
| 0.4 | MC-ℓ_2-IST [16] | 68.51 / 5.07 × 10^+00 | 68.94 / 5.08 × 10^+00 | 68.09 / 4.74 × 10^+00 | 68.84 / 4.88 × 10^+00 | 68.68 / 3.54 × 10^+00 |
| 0.4 | RMC-ℓ_c-IST | 4.88 / 1.70 × 10^−02 | 5.73 / 2.37 × 10^−02 | 5.34 / 2.21 × 10^−02 | 5.39 / 2.89 × 10^−02 | 5.56 / 1.87 × 10^−02 |
| 0.5 | RMC-ℓ_σ-IST (Algorithm 3) | 4.01 / 2.13 × 10^−02 | 4.44 / 2.61 × 10^−02 | 5.29 / 2.40 × 10^−02 | 5.27 / 2.76 × 10^−02 | 6.77 / 1.63 × 10^−02 |
| 0.5 | RMC-ℓ_1-ADM [17] | 24.95 / 4.91 × 10^−02 | 27.69 / 6.57 × 10^−02 | 22.75 / 6.92 × 10^−02 | 20.74 / 6.71 × 10^−02 | 26.86 / 3.98 × 10^−02 |
| 0.5 | MC-ℓ_2-IST [16] | 68.30 / 5.56 × 10^+00 | 69.64 / 5.62 × 10^+00 | 68.71 / 5.37 × 10^+00 | 68.56 / 5.44 × 10^+00 | 68.71 / 3.91 × 10^+00 |
| 0.5 | RMC-ℓ_c-IST | 6.63 / 2.18 × 10^−02 | 6.94 / 2.95 × 10^−02 | 5.84 / 2.90 × 10^−02 | 6.10 / 3.32 × 10^−02 | 6.94 / 2.15 × 10^−02 |
| 0.6 | RMC-ℓ_σ-IST (Algorithm 3) | 4.98 / 2.65 × 10^−02 | 6.36 / 3.37 × 10^−02 | 7.96 / 3.11 × 10^−02 | 5.75 / 3.49 × 10^−02 | 9.52 / 2.20 × 10^−02 |
| 0.6 | RMC-ℓ_1-ADM [17] | 15.55 / 1.41 × 10^−01 | 15.21 / 1.61 × 10^−01 | 15.23 / 1.48 × 10^−01 | 15.56 / 1.38 × 10^−01 | 15.95 / 9.71 × 10^−02 |
| 0.6 | MC-ℓ_2-IST [16] | 68.22 / 6.06 × 10^+00 | 69.93 / 6.17 × 10^+00 | 68.73 / 5.77 × 10^+00 | 68.34 / 5.88 × 10^+00 | 68.51 / 4.23 × 10^+00 |
| 0.6 | RMC-ℓ_c-IST | 7.93 / 2.70 × 10^−02 | 6.08 / 4.51 × 10^−02 | 8.19 / 3.22 × 10^−02 | 7.87 / 3.81 × 10^−02 | 10.36 / 2.85 × 10^−02 |
| 0.7 | RMC-ℓ_σ-IST (Algorithm 3) | 8.74 / 3.59 × 10^−02 | 11.37 / 4.41 × 10^−02 | 11.75 / 4.21 × 10^−02 | 9.59 / 4.16 × 10^−02 | 19.95 / 2.69 × 10^−02 |
| 0.7 | RMC-ℓ_1-ADM [17] | 44.31 / 1.90 × 10^+00 | 44.63 / 1.96 × 10^+00 | 45.16 / 1.81 × 10^+00 | 43.49 / 1.85 × 10^+00 | 43.88 / 1.37 × 10^+00 |
| 0.7 | MC-ℓ_2-IST [16] | 68.54 / 6.52 × 10^+00 | 68.75 / 6.59 × 10^+00 | 69.06 / 6.18 × 10^+00 | 68.41 / 6.22 × 10^+00 | 68.62 / 4.52 × 10^+00 |
| 0.7 | RMC-ℓ_c-IST | 13.12 / 3.59 × 10^−02 | 23.03 / 5.03 × 10^−02 | 15.19 / 4.36 × 10^−02 | 22.95 / 4.68 × 10^−02 | 14.78 / 3.86 × 10^−02 |
Table 4. Experimental results on the image “Building”, contaminated by non-Gaussian noise with varying ρ_m and noise scale s_n.
| s_n | ρ_m | RMC-ℓ_σ-IST (Algorithm 3) time / rel.err | RMC-ℓ_1-ADM [17] time / rel.err | MC-ℓ_2-IST [16] time / rel.err |
|---|---|---|---|---|
| 0.01 | 0.1 | 0.91 / 6.70 × 10^−03 | 2.57 / 1.76 × 10^−02 | 0.59 / 6.70 × 10^−03 |
| 0.01 | 0.3 | 0.90 / 9.60 × 10^−03 | 2.40 / 2.32 × 10^−02 | 0.85 / 9.60 × 10^−03 |
| 0.01 | 0.5 | 1.05 / 1.44 × 10^−02 | 2.77 / 3.24 × 10^−02 | 1.29 / 1.44 × 10^−02 |
| 0.05 | 0.1 | 1.24 / 1.58 × 10^−02 | 1.17 / 2.16 × 10^−02 | 0.82 / 1.91 × 10^−02 |
| 0.05 | 0.3 | 1.11 / 2.03 × 10^−02 | 1.37 / 2.70 × 10^−02 | 1.64 / 3.63 × 10^−02 |
| 0.05 | 0.5 | 1.32 / 2.49 × 10^−02 | 2.22 / 3.61 × 10^−02 | 1.94 / 2.88 × 10^−02 |
| 0.1 | 0.1 | 2.34 / 3.31 × 10^−02 | 1.08 / 3.04 × 10^−02 | 1.35 / 5.72 × 10^−02 |
| 0.1 | 0.3 | 3.30 / 3.40 × 10^−02 | 1.44 / 3.78 × 10^−02 | 2.32 / 4.28 × 10^−02 |
| 0.1 | 0.5 | 3.70 / 4.66 × 10^−02 | 2.42 / 5.53 × 10^−02 | 3.98 / 1.55 × 10^−01 |
Table 5. Experimental results on “Restaurant” contaminated by non-Gaussian noise and 50% missing entries.
| s_n | Method | Time | rel.err |
|---|---|---|---|
| 0.01 | RMC-ℓ_σ-IHT (Algorithm 1) | 70.58 | 9.77 × 10^−02 |
| 0.01 | RMC-ℓ_1-ADM [17] | 229.88 | 1.14 × 10^−01 |
| 0.02 | RMC-ℓ_σ-IHT (Algorithm 1) | 58.51 | 9.78 × 10^−02 |
| 0.02 | RMC-ℓ_1-ADM [17] | 230.24 | 1.30 × 10^−01 |
| 0.05 | RMC-ℓ_σ-IHT (Algorithm 1) | 99.87 | 1.14 × 10^−01 |
| 0.05 | RMC-ℓ_1-ADM [17] | 221.60 | 2.37 × 10^−01 |
