Article

A Designed Thresholding Operator for Low-Rank Matrix Completion

1 School of Mathematics and Statistics, Yulin University, Yulin 719000, China
2 School of International Education, Yulin University, Yulin 719000, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(7), 1065; https://doi.org/10.3390/math12071065
Submission received: 16 February 2024 / Revised: 29 March 2024 / Accepted: 30 March 2024 / Published: 2 April 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

In this paper, we design a new thresholding operator, called the designed thresholding operator, for recovering low-rank matrices. By varying its parameter, the designed thresholding operator applies less bias to the larger singular values of a matrix than the classical soft thresholding operator does. Based on the designed thresholding operator, we propose an iterative thresholding algorithm for recovering low-rank matrices. Numerical experiments on some image inpainting problems show that the proposed algorithm performs effectively in recovering low-rank matrices.

1. Introduction

In recent years, the problem of recovering an unknown low-rank matrix from a limited number of its observed entries has been actively studied in many scientific applications, such as the Netflix problem [1], image processing [2], system identification [3], video denoising [4], signal processing [5], subspace learning [6], and so on. Mathematically, this problem can be modeled by the following low-rank matrix completion problem:
$$\min_{X\in\mathbb{R}^{m\times n}} \operatorname{rank}(X) \quad \text{s.t.} \quad X_{i,j} = M_{i,j},\ (i,j)\in\Omega, \tag{1}$$
where $\Omega \subseteq \{1, 2, \ldots, m\} \times \{1, 2, \ldots, n\}$ is the set of indices of the observed entries. Without loss of generality, throughout this paper we assume that $m \le n$. If we summarize the observed entries via $P_\Omega(M)$, where the projection $P_\Omega: \mathbb{R}^{m\times n} \to \mathbb{R}^{m\times n}$ is defined by
$$[P_\Omega(X)]_{i,j} = \begin{cases} X_{i,j}, & (i,j) \in \Omega; \\ 0, & \text{otherwise}, \end{cases}$$
the low-rank matrix completion problem (1) can be rewritten as
$$\min_{X\in\mathbb{R}^{m\times n}} \operatorname{rank}(X) \quad \text{s.t.} \quad P_\Omega(X) = P_\Omega(M). \tag{2}$$
Unfortunately, the problem (2) is NP-hard, and all known algorithms for solving it exactly require time doubly exponential in the dimension of the matrix, in both theory and practice [1,7]. To overcome this difficulty, many researchers (e.g., [1,3,5,8,9,10]) have suggested relaxing the rank function $\operatorname{rank}(X)$ to the nuclear norm $\|X\|_*$, which leads to the following nuclear norm minimization problem:
$$\min_{X\in\mathbb{R}^{m\times n}} \|X\|_* \quad \text{s.t.} \quad P_\Omega(X) = P_\Omega(M), \tag{3}$$
where $\|X\|_* = \sum_{i=1}^m \sigma_i(X)$ denotes the nuclear norm of the matrix $X \in \mathbb{R}^{m\times n}$, and $\sigma_i(X)$, $i = 1, \ldots, m$, are the singular values of $X$.
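For concreteness, the projection $P_\Omega$ simply keeps the observed entries and zeroes out the rest; below is a minimal NumPy sketch (the matrix size and sampling pattern are illustrative choices of ours, not taken from the paper):

```python
import numpy as np

def P_Omega(X, Omega):
    """Project X onto the observed index set: keep entries where Omega is True, zero elsewhere."""
    return np.where(Omega, X, 0.0)

# Illustrative example: a 4 x 5 matrix with 8 observed entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 5))
Omega = np.zeros((4, 5), dtype=bool)
Omega.ravel()[rng.choice(20, size=8, replace=False)] = True
print(P_Omega(M, Omega))
```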
The nuclear norm $\|X\|_*$, as a convex relaxation of the rank function $\operatorname{rank}(X)$, can be considered the best convex approximation of the rank function. In theory, Candès et al. [1,11] have proved that, if the observed entries are selected uniformly at random, the unknown low-rank matrix can be exactly recovered with high probability by solving the problem (3). As a convex optimization problem, the problem (3) can be solved by semidefinite programming solvers such as SeDuMi [12] and SDPT3 [13]. However, these solvers are unsuitable for large-scale problems due to their high computational cost and memory requirements. Different from the semidefinite programming solvers, the regularized version of the problem (3), i.e.,
$$\min_{X\in\mathbb{R}^{m\times n}} \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2 + \lambda\|X\|_*, \tag{4}$$
has been frequently studied in the literature for solving large-scale matrix completion problems, where $\lambda > 0$ is a regularization parameter representing the tradeoff between the error and the nuclear norm. Many algorithms have been proposed to solve the problem (4). The most popular ones include the singular value thresholding (SVT) algorithm [8], the accelerated proximal gradient (APG) algorithm [14], linearized augmented Lagrangian and alternating direction methods [15], and so on. These algorithms are all based on the soft thresholding operator [16] to recover the unknown low-rank matrices. Although the problem (4) possesses many algorithmic advantages, these iterative soft thresholding algorithms shrink all the singular values of a matrix by the same amount, and sometimes result in over-penalization, as the $\ell_1$-norm does in compressed sensing.
To reduce this bias, in this paper we design a new thresholding operator, called the designed thresholding operator, to recover low-rank matrices; it introduces less bias than the soft thresholding operator. Numerical experiments on some low-rank matrix completion problems show that the proposed thresholding operator performs efficiently in recovering low-rank matrices.
This paper is organized as follows. In Section 2, we give the definition of the designed thresholding operator and study some of its properties. In Section 3, an iterative thresholding algorithm based on the designed thresholding operator is developed to recover low-rank matrices. In Section 4, some numerical experiments are presented to verify the effectiveness of the proposed algorithm. Finally, we draw some conclusions in Section 5.

2. Designed Thresholding Operator

In this section, we first review the classical soft thresholding operator [16], and then design a new thresholding operator, the designed thresholding operator, to recover low-rank matrices.

2.1. Soft Thresholding Operator

For any fixed $\lambda > 0$ and $t \in \mathbb{R}$, the soft thresholding operator [16] is defined as
$$s_\lambda(t) = \max\{|t| - \lambda,\, 0\}\,\operatorname{sign}(t),$$
which can be explained by the fact that it is the proximity operator of the absolute value function, i.e.,
$$s_\lambda(t) := \arg\min_{x\in\mathbb{R}} \left\{ \frac{1}{2}(x - t)^2 + \lambda|x| \right\}.$$
The soft thresholding operator plays a significant role in solving the problem (4). However, it gives a biased estimate and sometimes results in over-penalization. The behavior of the soft thresholding operator is plotted in Figure 1, from which we can see that the bias $t - s_\lambda(t)$ equals $\lambda$ whenever $|t| \ge \lambda$.
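As a quick illustration (the paper's experiments are in MATLAB; this NumPy sketch is ours), the soft thresholding operator is essentially one line of code:

```python
import numpy as np

def soft_threshold(t, lam):
    """Soft thresholding s_lambda(t) = max(|t| - lambda, 0) * sign(t), applied elementwise."""
    return np.maximum(np.abs(t) - lam, 0.0) * np.sign(t)

t = np.linspace(-3, 3, 7)
print(soft_threshold(t, 0.5))  # every surviving value is shrunk toward zero by exactly 0.5
```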

2.2. Designed Thresholding Operator

To reduce this bias, in this subsection we design a new thresholding operator, called the designed thresholding operator, to recover low-rank matrices.
Definition 1. 
(Designed thresholding operator) For any fixed $\lambda > 0$, $c \ge 0$ and $t \in \mathbb{R}$, the designed thresholding operator is defined as
$$r_{c,\lambda}(t) = \max\left\{|t| - \frac{\lambda(c+\lambda)}{c+|t|},\, 0\right\}\operatorname{sign}(t).$$
According to Definition 1, we immediately obtain the following Property 1, which shows that the designed thresholding operator applies less bias to the larger coefficients than the soft thresholding operator.
Property 1. 
For any $c \in [0, +\infty)$, the bias $t - r_{c,\lambda}(t)$ approaches zero as the magnitude of $t$ increases.
Proof. 
Without loss of generality, we assume $t \ge 0$. Then we have $t - r_{c,\lambda}(t) \le \lambda$ for any $t \ge 0$, and $t - r_{c,\lambda}(t) = \frac{\lambda(c+\lambda)}{c+t}$ for any $t \ge \lambda$. It is easy to verify that $t - r_{c,\lambda}(t) = \frac{\lambda(c+\lambda)}{c+t} \to 0$ as $t \to +\infty$. The proof is thus complete.    □
In addition, we can also observe that the designed thresholding operator approaches the soft thresholding operator as $c \to +\infty$, i.e.,
$$\lim_{c\to+\infty} r_{c,\lambda}(t) = \max\{|t| - \lambda,\, 0\}\,\operatorname{sign}(t) = s_\lambda(t).$$
The behaviors of the designed thresholding operator for several values of $c \ge 0$ with $\lambda = 0.5$ are plotted in Figure 2. As the parameter $c$ decreases, the designed thresholding operator applies less bias to the larger coefficients.
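The following is a minimal sketch of the scalar operator together with a numerical check of Property 1; the values of $t$, $\lambda$ and $c$ are chosen for illustration only:

```python
import numpy as np

def designed_threshold(t, lam, c):
    """Designed thresholding r_{c,lam}(t) = max(|t| - lam*(c+lam)/(c+|t|), 0) * sign(t)."""
    return np.maximum(np.abs(t) - lam * (c + lam) / (c + np.abs(t)), 0.0) * np.sign(t)

lam, c = 0.5, 1.0
for t in [1.0, 5.0, 50.0]:
    # For |t| >= lam the soft operator's bias is the constant lam, whereas
    # the designed operator's bias lam*(c+lam)/(c+t) decays as t grows.
    print(f"t = {t:5.1f}: designed bias = {t - designed_threshold(t, lam, c):.4f}  (soft bias = {lam})")
```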
According to ([17], Theorem 1), we immediately obtain the following Lemma 1, which shows that the designed thresholding operator is in fact the proximal mapping of a non-convex penalty function.
Lemma 1. 
The designed thresholding operator $r_{c,\lambda}(t)$ is the proximal mapping of a penalty function $g$, i.e.,
$$r_{c,\lambda}(t) = \arg\min_{x\in\mathbb{R}} \left\{ \frac{1}{2}(x - t)^2 + \lambda g(x) \right\},$$
where $g$ is even, nondecreasing and continuous on $[0, +\infty)$, differentiable on $(0, +\infty)$, and nondifferentiable at 0 with $\partial g(0) = [-1, 1]$. Moreover, $g$ is concave on $[0, +\infty)$ and satisfies the triangle inequality.
Definition 2. 
(Matrix designed thresholding operator) Given a matrix $X \in \mathbb{R}^{m\times n}$, let $X = U[\operatorname{Diag}(\{\sigma_i(X)\}_{1\le i\le m}),\, \mathbf{0}_{m\times(n-m)}]V^\top$ be the singular value decomposition (SVD) of $X$, where $U$ is an $m\times m$ unitary matrix, $V$ is an $n\times n$ unitary matrix, $\sigma_i(X)$ is the $i$-th largest singular value of $X$, $\operatorname{Diag}(\{\sigma_i(X)\}_{1\le i\le m}) \in \mathbb{R}^{m\times m}$ is the diagonal matrix of the singular values of $X$ arranged in descending order $\sigma_1(X) \ge \sigma_2(X) \ge \cdots \ge \sigma_m(X) \ge 0$, and $\mathbf{0}_{m\times(n-m)}$ is an $m\times(n-m)$ zero matrix. For any fixed $\lambda > 0$ and $c \ge 0$, the matrix designed thresholding operator $R_{c,\lambda}(X)$ is defined as
$$R_{c,\lambda}(X) = U[\operatorname{Diag}(\{r_{c,\lambda}(\sigma_i(X))\}_{1\le i\le m}),\, \mathbf{0}_{m\times(n-m)}]V^\top,$$
where $r_{c,\lambda}(\cdot)$ is defined in Definition 1.
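Definition 2 translates directly into code: compute an SVD, threshold the singular values, and rebuild the matrix. Below is a sketch assuming NumPy's thin-SVD convention (the zero block $\mathbf{0}_{m\times(n-m)}$ is handled implicitly by `full_matrices=False`):

```python
import numpy as np

def matrix_designed_threshold(X, lam, c):
    """Matrix designed thresholding R_{c,lam}(X): apply r_{c,lam} to the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)      # s is in descending order
    s_new = np.maximum(s - lam * (c + lam) / (c + s), 0.0)  # singular values are >= 0, so no sign term
    return (U * s_new) @ Vt                                 # scale columns of U, then recombine
```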
For any matrix $X \in \mathbb{R}^{m\times n}$, define the function $G: \mathbb{R}^{m\times n} \to \mathbb{R}_+$ as
$$G(X) = \sum_{i=1}^m g(\sigma_i(X)),$$
where $\mathbb{R}_+ := [0, +\infty)$. Then we can get the following result.
Theorem 1. 
For any fixed $\lambda > 0$ and $Y \in \mathbb{R}^{m\times n}$, suppose that $\hat{X} \in \mathbb{R}^{m\times n}$ is the optimal solution of the problem
$$\min_{X\in\mathbb{R}^{m\times n}} \frac{1}{2}\|X - Y\|_F^2 + \lambda G(X). \tag{11}$$
Then $\hat{X}$ can be expressed as
$$\hat{X} = R_{c,\lambda}(Y).$$
Before we give the proof of Theorem 1, we need to prepare the following Lemma 2, which plays a key role in proving Theorem 1.
Lemma 2. 
(von Neumann's trace inequality) For any matrices $X, Y \in \mathbb{R}^{m\times n}$,
$$\operatorname{Tr}(X^\top Y) \le \sum_{i=1}^m \sigma_i(X)\,\sigma_i(Y),$$
where $\sigma_1(X) \ge \sigma_2(X) \ge \cdots \ge \sigma_m(X)$ and $\sigma_1(Y) \ge \sigma_2(Y) \ge \cdots \ge \sigma_m(Y)$ are the singular values of $X$ and $Y$, respectively. The equality holds if and only if there exist unitary matrices $U$ and $V$ such that
$$X = U[\operatorname{Diag}(\{\sigma_i(X)\}_{1\le i\le m}),\, \mathbf{0}_{m\times(n-m)}]V^\top$$
and
$$Y = U[\operatorname{Diag}(\{\sigma_i(Y)\}_{1\le i\le m}),\, \mathbf{0}_{m\times(n-m)}]V^\top$$
are the SVDs of $X$ and $Y$, simultaneously.
Now, we give the proof of Theorem 1.
Proof. 
(of Theorem 1) By Lemma 2, we have
$$\begin{aligned} \|X - Y\|_F^2 &= \operatorname{Tr}(X^\top X) - 2\operatorname{Tr}(X^\top Y) + \operatorname{Tr}(Y^\top Y) \\ &= \sum_{i=1}^m \sigma_i^2(X) - 2\operatorname{Tr}(X^\top Y) + \sum_{i=1}^m \sigma_i^2(Y) \\ &\ge \sum_{i=1}^m \sigma_i^2(X) - 2\sum_{i=1}^m \sigma_i(X)\sigma_i(Y) + \sum_{i=1}^m \sigma_i^2(Y) \\ &= \sum_{i=1}^m \big(\sigma_i(X) - \sigma_i(Y)\big)^2. \end{aligned}$$
Since the equality holds if and only if $X$ and $Y$ share the same left and right unitary matrices, we assume that
$$X = U[\operatorname{Diag}(\{\sigma_i(X)\}_{1\le i\le m}),\, \mathbf{0}_{m\times(n-m)}]V^\top$$
and
$$Y = U[\operatorname{Diag}(\{\sigma_i(Y)\}_{1\le i\le m}),\, \mathbf{0}_{m\times(n-m)}]V^\top$$
are the SVDs of $X$ and $Y$, simultaneously. Therefore, it holds that
$$\|X - Y\|_F^2 = \sum_{i=1}^m \big(\sigma_i(X) - \sigma_i(Y)\big)^2,$$
and the problem (11) reduces to
$$\min_{\sigma_1(X) \ge \sigma_2(X) \ge \cdots \ge \sigma_m(X) \ge 0}\ \sum_{i=1}^m \left\{ \frac{1}{2}\big(\sigma_i(X) - \sigma_i(Y)\big)^2 + \lambda g(\sigma_i(X)) \right\}. \tag{13}$$
The objective function in (13) is separable; hence, solving the problem (13) is equivalent to solving the following $m$ scalar problems, for each $i = 1, 2, \ldots, m$:
$$\min_{\sigma_i(X) \ge 0}\ \frac{1}{2}\big(\sigma_i(X) - \sigma_i(Y)\big)^2 + \lambda g(\sigma_i(X)). \tag{14}$$
Let $\hat{\sigma}_i(X)$ be the optimal solution of the problem (14). By Lemma 1, $\hat{\sigma}_i(X)$ can be expressed as
$$\hat{\sigma}_i(X) = r_{c,\lambda}(\sigma_i(Y)).$$
Since $r_{c,\lambda}$ is nondecreasing on $[0, +\infty)$, these solutions automatically satisfy the ordering constraint in (13).
Therefore, we can get the optimal solution $\hat{X}$ of the problem (11) as follows:
$$\hat{X} = U[\operatorname{Diag}(\{r_{c,\lambda}(\sigma_i(Y))\}_{1\le i\le m}),\, \mathbf{0}_{m\times(n-m)}]V^\top,$$
namely,
$$\hat{X} = R_{c,\lambda}(Y).$$
This completes the proof.    □

3. Iterative Matrix Designed Thresholding Algorithm

In this section, we present an iterative thresholding algorithm for recovering low-rank matrices using the proposed designed thresholding operator.
Consider the following minimization problem:
$$\min_{X\in\mathbb{R}^{m\times n}} \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2 + \lambda G(X). \tag{17}$$
We can verify that the optimal solution of the problem (17) can be analytically expressed by the designed thresholding operator.
For any fixed $\lambda > 0$, $\mu > 0$ and $Z \in \mathbb{R}^{m\times n}$, let
$$C_\lambda(X) := \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2 + \lambda G(X), \tag{18}$$
$$C_{\lambda,\mu}(X, Z) := \mu\left( C_\lambda(X) - \frac{1}{2}\|P_\Omega(X) - P_\Omega(Z)\|_F^2 \right) + \frac{1}{2}\|X - Z\|_F^2 \tag{19}$$
and
$$B_\mu(Z) := Z + \mu\big(P_\Omega(M) - P_\Omega(Z)\big).$$
The function $C_\lambda(X)$ defined in (18) is the objective function of the problem (17), and the function $C_{\lambda,\mu}(X, Z)$ defined in (19) is a surrogate function of $C_\lambda(X)$. Clearly, $C_{\lambda,\mu}(X, X) = \mu C_\lambda(X)$. In view of (19), we expect to minimize the problem (17) by iteratively minimizing the surrogate function $C_{\lambda,\mu}(X, Z)$.
Lemma 3. 
For any fixed $\lambda > 0$, $\mu > 0$ and $Z \in \mathbb{R}^{m\times n}$, if $X \in \mathbb{R}^{m\times n}$ is the minimizer of $C_{\lambda,\mu}(X, Z)$ over $\mathbb{R}^{m\times n}$, then $X$ satisfies
$$X = R_{c,\lambda\mu}(B_\mu(Z)).$$
Proof. 
By definition, the function $C_{\lambda,\mu}(X, Z)$ can be rewritten as
$$\begin{aligned} C_{\lambda,\mu}(X, Z) ={}& \frac{1}{2}\big\|X - \big(Z + \mu(P_\Omega(M) - P_\Omega(Z))\big)\big\|_F^2 + \lambda\mu\, G(X) \\ &+ \frac{\mu}{2}\|P_\Omega(M)\|_F^2 + \frac{1}{2}\|Z\|_F^2 - \frac{\mu}{2}\|P_\Omega(Z)\|_F^2 - \frac{1}{2}\big\|Z + \mu(P_\Omega(M) - P_\Omega(Z))\big\|_F^2 \\ ={}& \frac{1}{2}\|X - B_\mu(Z)\|_F^2 + \lambda\mu\, G(X) \\ &+ \frac{\mu}{2}\|P_\Omega(M)\|_F^2 + \frac{1}{2}\|Z\|_F^2 - \frac{\mu}{2}\|P_\Omega(Z)\|_F^2 - \frac{1}{2}\|B_\mu(Z)\|_F^2. \end{aligned}$$
This means that, for any fixed $\lambda > 0$, $\mu > 0$ and $Z \in \mathbb{R}^{m\times n}$, minimizing the function $C_{\lambda,\mu}(X, Z)$ over $X \in \mathbb{R}^{m\times n}$ is equivalent to solving the following minimization problem:
$$\min_{X\in\mathbb{R}^{m\times n}} \frac{1}{2}\|X - B_\mu(Z)\|_F^2 + \lambda\mu\, G(X).$$
By Theorem 1 (with $\lambda$ replaced by $\lambda\mu$), the minimizer $X$ of $C_{\lambda,\mu}(X, Z)$ over $\mathbb{R}^{m\times n}$ can be expressed as
$$X = R_{c,\lambda\mu}(B_\mu(Z)),$$
which completes the proof.    □
Lemma 4. 
For any fixed $\lambda > 0$ and $0 < \mu \le 1$, if $X^* \in \mathbb{R}^{m\times n}$ is the optimal solution of the problem (17), then $X^*$ also solves the minimization problem
$$\min_{X\in\mathbb{R}^{m\times n}} C_{\lambda,\mu}(X, X^*),$$
that is, for any $X \in \mathbb{R}^{m\times n}$,
$$C_{\lambda,\mu}(X^*, X^*) \le C_{\lambda,\mu}(X, X^*).$$
Proof. 
Since $0 < \mu \le 1$, we have
$$\|X - Z\|_F^2 - \mu\|P_\Omega(X) - P_\Omega(Z)\|_F^2 \ge 0$$
for any $X, Z \in \mathbb{R}^{m\times n}$. Therefore, taking $Z = X^*$ and using the optimality of $X^*$, we can get that
$$\begin{aligned} C_{\lambda,\mu}(X, X^*) &= \mu C_\lambda(X) - \frac{\mu}{2}\|P_\Omega(X) - P_\Omega(X^*)\|_F^2 + \frac{1}{2}\|X - X^*\|_F^2 \\ &\ge \mu C_\lambda(X) \\ &\ge \mu C_\lambda(X^*) = C_{\lambda,\mu}(X^*, X^*), \end{aligned}$$
which completes the proof.    □
By Lemmas 3 and 4, we can derive that the problem (17) admits a thresholding representation for its optimal solution.
Theorem 2. 
For any fixed $\lambda > 0$ and $0 < \mu \le 1$, if $X^* \in \mathbb{R}^{m\times n}$ is the optimal solution of the problem (17), then $X^*$ can be analytically expressed as
$$X^* = R_{c,\lambda\mu}(B_\mu(X^*)). \tag{24}$$
With the representation (24), an algorithm for solving the problem (17) can be naturally given by
$$X^{k+1} = R_{c,\lambda\mu}(B_\mu(X^k)), \tag{25}$$
where $B_\mu(X^k) = X^k + \mu\big(P_\Omega(M) - P_\Omega(X^k)\big)$. In this paper, we call the iteration (25) the iterative matrix designed thresholding (IMDT) algorithm, which is summarized in Algorithm 1.
Algorithm 1 Iterative matrix designed thresholding (IMDT) algorithm
  • Input: $\lambda > 0$, $\mu \in (0, 1]$, $c \ge 0$;
  • Initialize: $X^0 \in \mathbb{R}^{m\times n}$, $k = 0$;
  • while not converged, do
    • $B_\mu(X^k) = X^k + \mu(P_\Omega(M) - P_\Omega(X^k))$;
    • $X^{k+1} = R_{c,\lambda\mu}(B_\mu(X^k))$;
    • $k \leftarrow k + 1$;
  • end while
  • return: $X^{opt}$
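The paper's experiments are in MATLAB; the following self-contained NumPy sketch of Algorithm 1 is ours, and the default values of `mu`, `c`, the tolerance and the iteration cap are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def imdt(M, Omega, lam, mu=0.99, c=1.0, tol=1e-8, max_iter=5000):
    """Iterative matrix designed thresholding (Algorithm 1) for P_Omega(X) = P_Omega(M)."""
    PM = np.where(Omega, M, 0.0)                # P_Omega(M): observed entries, zeros elsewhere
    X = np.zeros_like(M)
    for _ in range(max_iter):
        B = X + mu * (PM - np.where(Omega, X, 0.0))       # B_mu(X^k)
        U, s, Vt = np.linalg.svd(B, full_matrices=False)  # singular values in descending order
        t = lam * mu                                      # threshold level in R_{c, lam*mu}
        s = np.maximum(s - t * (c + t) / (c + s), 0.0)    # r_{c, lam*mu}; assumes c > 0
        X_new = (U * s) @ Vt
        if np.linalg.norm(X_new - X) / max(np.linalg.norm(X), 1.0) <= tol:
            return X_new
        X = X_new
    return X
```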
Similar to ([18], Theorem 4.1), we can immediately derive the convergence of the IMDT algorithm.
Theorem 3. 
Let $\{X^k\}$ be the sequence generated by the IMDT algorithm with $0 < \mu < 1$. Then:
(1) The sequence $\{C_\lambda(X^k)\}$ is monotonically decreasing and converges to $C_\lambda(X^*)$, where $X^*$ is any accumulation point of $\{X^k\}$.
(2) The sequence $\{X^k\}$ is asymptotically regular, i.e., $\lim_{k\to\infty} \|X^{k+1} - X^k\|_F^2 = 0$.
(3) Any accumulation point of the sequence $\{X^k\}$ is a stationary point of the problem (17).
It is worth mentioning that the quality of the solution generated by the IMDT algorithm depends heavily on the choice of the regularization parameter $\lambda$, and selecting an optimal $\lambda$ is in general a hard problem for which no universal rule exists. However, when some prior information (e.g., the rank) about the optimal solution of the problem (17) is known, the regularization parameter can be set more reasonably. Following [18,19], we give a useful parameter-setting rule for selecting the regularization parameter $\lambda$ in the IMDT algorithm. The details are as follows.
Suppose that the optimal solution $X^*$ of the problem (17) has rank $r$, and that the singular values of the matrix $B_\mu(X^*)$ are arranged as
$$\sigma_1(B_\mu(X^*)) \ge \sigma_2(B_\mu(X^*)) \ge \cdots \ge \sigma_m(B_\mu(X^*)) \ge 0.$$
By Theorem 2, we have
$$\sigma_i(B_\mu(X^*)) > \lambda\mu, \quad i \in \{1, 2, \ldots, r\},$$
$$\sigma_i(B_\mu(X^*)) \le \lambda\mu, \quad i \in \{r+1, r+2, \ldots, m\},$$
which implies
$$\frac{\sigma_{r+1}(B_\mu(X^*))}{\mu} \le \lambda < \frac{\sigma_r(B_\mu(X^*))}{\mu}. \tag{26}$$
The estimation (26) can help to set the regularization parameter $\lambda$ for the IMDT algorithm, and a reliable selection is
$$\lambda = \frac{\sigma_{r+1}(B_\mu(X^*))}{\mu}.$$
In practice, we approximate the unknown optimal solution $X^*$ by $X^k$; that is, we take
$$\lambda = \lambda_k = \frac{\sigma_{r+1}(B_\mu(X^k))}{\mu} \tag{28}$$
in each iteration of the IMDT algorithm.
When the regularization parameter $\lambda$ is set using Equation (28), the IMDT algorithm adaptively selects $\lambda$ in each iteration. In this paper, we call the IMDT algorithm with the parameter-setting rule (28) the adaptive iterative matrix designed thresholding (AIMDT) algorithm, which is summarized in Algorithm 2.
Algorithm 2 Adaptive iterative matrix designed thresholding (AIMDT) algorithm
  • Input: rank estimate $r$, $\mu \in (0, 1)$, $c \ge 0$;
  • Initialize: $X^0 \in \mathbb{R}^{m\times n}$, $k = 0$;
  • while not converged, do
    • $B_\mu(X^k) = X^k + \mu(P_\Omega(M) - P_\Omega(X^k))$;
    • $\lambda_k = \sigma_{r+1}(B_\mu(X^k))/\mu$;
    • $\lambda = \lambda_k$;
    • $X^{k+1} = R_{c,\lambda\mu}(B_\mu(X^k))$;
    • $k \leftarrow k + 1$;
  • end while
  • return: $X^{opt}$
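Below is a sketch of Algorithm 2 under the same assumptions as the IMDT sketch above; the only change is that the threshold is recomputed from $\sigma_{r+1}(B_\mu(X^k))$ in every iteration, so the rank estimate `r` replaces `lam` as an input. The synthetic rank-5 test problem at the end is illustrative, not from the paper:

```python
import numpy as np

def aimdt(M, Omega, r, mu=0.99, c=1.0, tol=1e-8, max_iter=5000):
    """Adaptive IMDT (Algorithm 2): threshold re-set from sigma_{r+1}(B_mu(X^k)) each iteration."""
    PM = np.where(Omega, M, 0.0)
    X = np.zeros_like(M)
    for _ in range(max_iter):
        B = X + mu * (PM - np.where(Omega, X, 0.0))       # B_mu(X^k)
        U, s, Vt = np.linalg.svd(B, full_matrices=False)  # s sorted in descending order
        t = s[r]                                          # lambda_k * mu = sigma_{r+1}, 0-based index
        s = np.maximum(s - t * (c + t) / (c + s), 0.0)    # r_{c, lambda_k*mu}; assumes c > 0
        X_new = (U * s) @ Vt
        if np.linalg.norm(X_new - X) / max(np.linalg.norm(X), 1.0) <= tol:
            return X_new
        X = X_new
    return X

# Illustrative use on a synthetic rank-5 problem with roughly half the entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
Omega = rng.random(M.shape) < 0.5
X = aimdt(M, Omega, r=5)
print(np.linalg.norm(X - M) / np.linalg.norm(M))  # relative error of the recovery
```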

4. Numerical Experiments

In this section, we carry out numerical experiments to test the performance of the AIMDT algorithm on some grayscale image inpainting problems, and compare it with the classical SVT algorithm [8]. The freedom ratio is defined as $\mathrm{FR} = s/(r(m+n-r))$, where $s$ is the cardinality of the observation set $\Omega$ and $r$ is the rank of the matrix $X \in \mathbb{R}^{m\times n}$. For $\mathrm{FR} < 1$, it is impossible to recover the original low-rank matrix [9]. The stopping criterion of the algorithms is
$$\frac{\|X^{k+1} - X^k\|_F}{\max\{\|X^k\|_F,\, 1\}} \le 10^{-8}.$$
Given the original low-rank matrix $M \in \mathbb{R}^{m\times n}$, the accuracy of the solution $X^{opt}$ generated by the algorithms is measured by the relative error (RE), defined as
$$\mathrm{RE} = \frac{\|X^{opt} - M\|_F}{\|M\|_F}.$$
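Both quantities are one-liners in code; a small sketch (the function names are ours):

```python
import numpy as np

def freedom_ratio(Omega, r):
    """FR = s / (r*(m + n - r)), with s the number of observed entries."""
    m, n = Omega.shape
    return Omega.sum() / (r * (m + n - r))

def relative_error(X_opt, M):
    """RE = ||X_opt - M||_F / ||M||_F."""
    return np.linalg.norm(X_opt - M) / np.linalg.norm(M)
```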
In all numerical experiments, we set $\mu = 0.99$ in the AIMDT algorithm. All experiments are conducted in MATLAB on a personal computer.
In the numerical experiments, the algorithms are tested on three $512 \times 512$ grayscale images (Lena, Boat and Fingerprint). It is well known that a grayscale image can be represented by a matrix. In grayscale image inpainting, the grayscale values of some pixels of the image are missing, and we want to fill in these missing pixels. If the image is of low rank, or approximately of low rank, we can treat the image inpainting problem as the matrix completion problem (2). We first use the SVD to obtain approximated low-rank images of rank 50. The original images and their corresponding low-rank approximations are displayed in Figure 3, Figure 4 and Figure 5, respectively. Then, we mask 3.81% of the pixels of each of these three low-rank images; the masked images are displayed in Figure 6.
Table 1 reports the numerical results of the algorithms for the image inpainting problems. For the AIMDT algorithm, three different values of $c$ are considered, namely $c = 1, 5, 100$. From Table 1, we can make the following observations: (i) the AIMDT algorithm with $c = 1, 5, 100$ is superior to the SVT algorithm in RE; in particular, the REs generated by the AIMDT algorithm decrease as the parameter $c$ decreases; (ii) the AIMDT algorithm with $c = 1, 5, 100$ is more time-consuming than the SVT algorithm, and its computation time increases as the parameter $c$ decreases. In summary, the AIMDT algorithm recovers the low-rank grayscale images more accurately than the SVT algorithm, and the AIMDT algorithm with $c = 1$ performs best. We display the recovered low-rank Lena, Boat and Fingerprint images in Figure 7, Figure 8 and Figure 9, respectively. From these figures, we can see that the AIMDT algorithm with $c = 1$ has the best performance in recovering the low-rank grayscale images.

5. Conclusions

In this paper, based on the designed thresholding operator, which applies less bias to the larger singular values than the soft thresholding operator, an iterative matrix designed thresholding algorithm and its adaptive version are proposed to recover low-rank matrices. Numerical experiments on some image inpainting problems show that the adaptive iterative matrix designed thresholding algorithm performs efficiently in recovering low-rank matrices. In future work, we shall study acceleration techniques to improve the convergence rate of the proposed algorithms and explore other applications of the designed thresholding operator.

Author Contributions

Conceptualization, methodology, writing—original draft, writing—review and editing, funding acquisition, A.C.; writing—original draft, writing—review and editing, H.H.; methodology, funding acquisition, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Doctoral Research Project of Yulin University under Grant 21GK04 and the Science and Technology Program of Yulin City under Grant CXY-2022-91.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Candès, E.J.; Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009, 9, 717–772. [Google Scholar] [CrossRef]
  2. Cao, F.; Cai, M.; Tan, Y. Image interpolation via low-rank matrix completion and recovery. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1261–1270. [Google Scholar]
  3. Liu, Z.; Vandenberghe, L. Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 2009, 31, 1235–1256. [Google Scholar] [CrossRef]
  4. Chen, P.; Suter, D. Recovering the missing components in a large noisy low-rank matrix: Application to SFM. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1051–1063. [Google Scholar] [CrossRef] [PubMed]
  5. Xing, Z.; Zhou, J.; Ge, Z.; Huang, G.; Hu, M. Recovery of high order statistics of PSK signals based on low-rank matrix completion. IEEE Access 2023, 11, 12973–12986. [Google Scholar] [CrossRef]
  6. Tsakiris, M.C. Low-rank matrix completion theory via Plücker coordinates. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10084–10099. [Google Scholar] [CrossRef] [PubMed]
  7. Wang, K.; Chen, Z.; Ying, S.; Xu, X. Low-rank matrix completion via QR-based retraction on manifolds. Mathematics 2023, 11, 1155. [Google Scholar] [CrossRef]
  8. Cai, J.-F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  9. Ma, S.; Goldfarb, D.; Chen, L. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 2011, 128, 321–353. [Google Scholar] [CrossRef]
  10. Recht, B.; Fazel, M.; Parrilo, P.A. Guaranteed minimum-rank solution of linear matrix equations via nuclear norm minimization. SIAM Rev. 2010, 52, 471–501. [Google Scholar] [CrossRef]
  11. Candès, E.J.; Tao, T. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theory 2010, 56, 2053–2080. [Google Scholar] [CrossRef]
  12. Sturm, J.F. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 1999, 11, 625–653. [Google Scholar] [CrossRef]
  13. Tütüncü, R.H.; Toh, K.C.; Todd, M.J. Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. 2003, 95, 189–217. [Google Scholar] [CrossRef]
  14. Toh, K.-C.; Yun, S. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 2010, 6, 615–640. [Google Scholar]
  15. Yang, J.; Yuan, X. Linearized augmented lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 2013, 82, 301–329. [Google Scholar] [CrossRef]
  16. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
  17. Chartrand, R. Shrinkage mappings and their induced penalty functions. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014. [Google Scholar]
  18. Peng, D.; Xiu, N.; Yu, J. S1/2 regularization methods and fixed point algorithms for affine rank minimization problems. Comput. Optim. Appl. 2017, 67, 543–569. [Google Scholar] [CrossRef]
  19. Xu, Z.; Chang, X.; Xu, F.; Zhang, H. L1/2 regularization: A thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1013–1027. [Google Scholar] [PubMed]
Figure 1. Plot of the soft thresholding operator s λ ( t ) with λ = 0.5 .
Figure 2. Plot of the designed thresholding operator r c , λ ( t ) for some c with λ = 0.5 .
Figure 3. Original 512 × 512 Lena image and its approximated low-rank image with rank 50. (a) Original Lena image. (b) Low-rank Lena image with rank 50.
Figure 4. Original 512 × 512 Boat image and its approximated low-rank image with rank 50. (a) Original Boat image. (b) Low-rank Boat image with rank 50.
Figure 5. Original 512 × 512 Fingerprint image and its approximated low-rank image with rank 50. (a) Original Fingerprint image. (b) Low-rank Fingerprint image with rank 50.
Figure 6. Low-rank images with 3.81% pixels missing. (a) 3.81% pixels missing Lena image. (b) 3.81% pixels missing Boat image. (c) 3.81% pixels missing Fingerprint image.
Figure 7. Comparisons of algorithms for recovering the low-rank Lena image with 3.81% pixels missing. (a) AIMDT algorithm, c = 1 . (b) AIMDT algorithm, c = 5 . (c) AIMDT algorithm, c = 100 . (d) SVT algorithm.
Figure 8. Comparisons of algorithms for recovering the low-rank Boat image with 3.81% pixels missing. (a) AIMDT algorithm, c = 1 . (b) AIMDT algorithm, c = 5 . (c) AIMDT algorithm, c = 100 . (d) SVT algorithm.
Figure 9. Comparisons of algorithms for recovering the low-rank Fingerprint image with 3.81% pixels missing. (a) AIMDT algorithm, c = 1 . (b) AIMDT algorithm, c = 5 . (c) AIMDT algorithm, c = 100 . (d) SVT algorithm.
Table 1. Numerical results of algorithms for image inpainting problems.
Image (Name, Rank, FR)   | AIMDT, c = 1           | AIMDT, c = 5           | AIMDT, c = 100         | SVT Algorithm
                         | RE / Time              | RE / Time              | RE / Time              | RE / Time
(Lena, 50, 5.17)         | 2.11 × 10^−2 / 226.32  | 5.71 × 10^−2 / 76.36   | 6.67 × 10^−2 / 28.45   | 7.29 × 10^−2 / 21.54
(Boat, 50, 5.17)         | 3.28 × 10^−5 / 773.77  | 6.23 × 10^−2 / 107.11  | 8.18 × 10^−2 / 19.54   | 8.84 × 10^−2 / 15.30
(Fingerprint, 50, 5.17)  | 4.21 × 10^−6 / 94.00   | 1.34 × 10^−2 / 67.86   | 5.68 × 10^−2 / 10.03   | 6.29 × 10^−2 / 8.03