Article

Relaxed Variable Metric Primal-Dual Fixed-Point Algorithm with Applications

1 School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
2 Department of Mathematics, Nanchang University, Nanchang 330031, China
3 School of Science, Xi’an Polytechnic University, Xi’an 710048, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4372; https://doi.org/10.3390/math10224372
Submission received: 14 September 2022 / Revised: 16 November 2022 / Accepted: 17 November 2022 / Published: 20 November 2022
(This article belongs to the Special Issue Numerical Analysis and Optimization: Methods and Applications)

Abstract: In this paper, a relaxed variable metric primal-dual fixed-point algorithm is proposed for solving the convex optimization problem involving the sum of two convex functions, where one is differentiable with a Lipschitz continuous gradient and the other is composed with a bounded linear operator. Based on the preconditioned forward–backward splitting algorithm, the convergence of the proposed algorithm is proved. At the same time, we show that some existing algorithms are special cases of the proposed algorithm. Furthermore, ergodic and linear convergence rates of the proposed algorithm are established under relaxed parameters. Numerical experiments on image deblurring problems demonstrate that the proposed algorithm outperforms some existing algorithms in terms of the number of iterations.
MSC:
65K05; 68Q25; 68U10

1. Introduction

In this paper, we focus on the following convex optimization problem:
$$\min_{x\in H}\ f(x) + h(Lx), \qquad (1)$$
where $f : H \to \mathbb{R}$ is convex and differentiable with a $\frac{1}{\beta}$-Lipschitz continuous gradient $\nabla f$ for some $\beta > 0$, $h : G \to (-\infty, +\infty]$ is a proper lower semi-continuous convex function, $L : H \to G$ is a bounded linear operator, and $H$ and $G$ are real Hilbert spaces. This problem arises widely in signal and image processing [1,2], compressed sensing [3], and machine learning [4]. For instance, a classical model in image restoration and medical image reconstruction is:
$$\min_{x\in\mathbb{R}^n}\ \frac{1}{2}\|Ax - b\|^2 + \mu\|x\|_{TV}, \qquad (2)$$
where $A : \mathbb{R}^n \to \mathbb{R}^m$ is a blurring operator, $b \in \mathbb{R}^m$ is the observed image, $\mu > 0$ is the regularization parameter, and $\|x\|_{TV}$ is the total variation, which can be represented as the composition of a convex function with a discrete gradient operator.
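To make the model concrete, the following minimal sketch evaluates the objective in (2) for a 2D grayscale image, assuming periodic boundary conditions and an isotropic discretization of the TV seminorm; the names `grad2d`, `tv_objective`, and the `blur` callable are illustrative assumptions, not part of the paper.

```python
import numpy as np

def grad2d(x):
    """Discrete gradient operator L: forward differences along each axis,
    with periodic (wrap-around) boundary conditions."""
    dx = np.roll(x, -1, axis=0) - x
    dy = np.roll(x, -1, axis=1) - x
    return dx, dy

def tv_objective(x, blur, b, mu):
    """Evaluate 0.5 * ||A x - b||^2 + mu * ||x||_TV for a blurring map `blur`."""
    residual = blur(x) - b
    dx, dy = grad2d(x)
    tv = np.sum(np.sqrt(dx ** 2 + dy ** 2))   # isotropic TV
    return 0.5 * np.sum(residual ** 2) + mu * tv
```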
The corresponding dual problem of (1) is
$$\max_{v\in G}\ -f^*(-L^*v) - h^*(v), \qquad (3)$$
and the associated saddle-point problem of (1) and (3) is
$$\min_{x\in H}\max_{v\in G}\ \big\{\, K(v, x) = f(x) + \langle Lx, v\rangle - h^*(v) \,\big\}. \qquad (4)$$
We say that $(x^*, v^*)$ is a saddle point of (4) if and only if $x^*$ is a solution of (1) and $v^*$ is a solution of (3). The solution set of problem (4) is denoted by $\Omega$, which we always assume to be nonempty throughout this paper.
Many efficient algorithms have been proposed for solving problem (1) in recent decades. Most of them are based on the alternating direction method of multipliers (ADMM) [5,6,7] or the forward–backward splitting (FBS) algorithm [8]. In [9,10], the authors showed that ADMM is equivalent to the Douglas–Rachford splitting algorithm [8]. The proximal gradient algorithm (PGA, also known as FBS) [11] is an efficient method for solving (1) when $L = I$, and several accelerated versions of PGA have been studied [12,13,14]. The primal-dual hybrid gradient algorithm [15,16] was proposed to solve (1) without requiring smoothness of $f$. Combining the FBS algorithm [11] with the fixed-point algorithm based on the proximity operator (FP²O) [17], Argyriou et al. [18] proposed FBS_FP²O to solve (1). Note that the FBS_FP²O algorithm needs to solve a subproblem, so it involves inner and outer iterations. To avoid choosing the number of inner iterations, Chen et al. [19] first proposed the primal-dual fixed-point algorithm based on the proximity operator (PDFP²O) to solve (1). Compared with FBS_FP²O, PDFP²O performs only one inner iteration, and it reduces to the generalized iterative soft-thresholding algorithm [20] when $f = \|Ax - b\|^2$. PDFP²O has performed well on MRI reconstruction and TV-L1 wavelet inpainting [21,22]. Meanwhile, Combettes et al. [23] proposed a variable metric forward–backward splitting (VMFBS) algorithm to solve the saddle-point problem (4); by choosing a special variable metric, PDFP²O can be recovered from the VMFBS algorithm. Moreover, the proximal alternating predictor-corrector (PAPC) algorithm [24] was proposed to solve an equivalent minimization formulation of (1) and was proved to converge linearly [25]. To speed up PDFP²O, Chen et al. [26] proposed an adapted metric version of PDFP²O, termed PDFP²O_AM, whose key feature is that a symmetric positive definite matrix replaces the scalar stepsize of PDFP²O. In turn, Wen et al. [27] generalized the fixed stepsize of PDFP²O to a dynamic stepsize. Later, convergence of PDFP²O under a larger stepsize was proved [28]. Recently, Zhu and Zhang [29] introduced an inertial PDFP²O (IPDFP²O).
In Table 1, we summarize some variants of PDFP²O.
From Table 1, we note that the relaxation parameters of these algorithms belong to $(0, 1]$. It is well known that the convergence of an iterative algorithm can be accelerated by taking a relaxation parameter greater than 1. This motivates us to accelerate PDFP²O_AM with larger relaxation parameters. We reformulate PDFP²O_AM as an FBS algorithm and propose a primal-dual fixed-point algorithm based on the proximity operator with relaxed parameters and variable metrics (Rv_PDFP²O). Based on fixed-point theory, we prove the convergence of the proposed algorithm. At the same time, we point out that PDFP²O_AM [26], PDFP²O_DS [27], and PDFP²O [19] are particular cases of Rv_PDFP²O. Furthermore, convergence rates are established under the larger relaxation parameters, including ergodic and linear convergence. To verify the effectiveness and superiority of Rv_PDFP²O, we apply it to the image-restoration problem and compare it with other algorithms.
The rest of the paper is organized as follows. In Section 2, we recall some preliminaries and related work. In Section 3, we derive Rv_PDFP²O from the preconditioned FBS algorithm and provide convergence results. In Section 4, we report numerical results of Rv_PDFP²O on the image deblurring problem. Finally, we provide the conclusions.

2. Preliminaries and Related Work

In this section, we first provide some notations and definitions. Then, we briefly review some existing algorithms for solving (1).
Throughout this paper, $H$ denotes a real Hilbert space endowed with the scalar product $\langle\cdot,\cdot\rangle$ and associated norm $\|\cdot\|$. Let $S_{++}^H$ ($S_{+}^H$) denote the set of symmetric positive definite (positive semi-definite) operators on $H$. For $U \in S_{++}^H$, the $U$-weighted inner product is $\langle x, y\rangle_U = \langle x, Uy\rangle$, and the corresponding $U$-weighted norm is defined by $\|x\|_U = \sqrt{\langle x, x\rangle_U}$. Let $H_1$ and $H_2$ be real Hilbert spaces endowed with their scalar products. For $U_1 \in S_{++}^{H_1}$ and $U_2 \in S_{++}^{H_2}$, the $(U_1, U_2)$-weighted norm on $H_1\times H_2$ is $\|x\|_{U_1,U_2} = \sqrt{\|x_1\|_{U_1}^2 + \|x_2\|_{U_2}^2}$ for $x = (x_1, x_2)\in H_1\times H_2$. We denote by $\Gamma_0(H)$ the class of all proper, lower semi-continuous, convex functions from $H$ to $(-\infty, +\infty]$. Most of these definitions can be found in [8].
Let $A : H \to 2^H$ be a set-valued operator. The domain, graph, zeros, and inverse of $A$ are represented by $\mathrm{dom}\,A = \{x\in H : Ax \neq \varnothing\}$, $\mathrm{gra}\,A = \{(x, u)\in H\times H : u\in Ax\}$, $\mathrm{zer}\,A = \{x\in H : 0\in Ax\}$, and $A^{-1} : H \to 2^H : u \mapsto \{x\in H : u\in Ax\}$, and the resolvent of $A$ is
$$J_A = (I + A)^{-1}.$$
The operator $A : H \to 2^H$ is monotone if $\langle u - v, x - y\rangle \geq 0$ for all $(x, u)\in\mathrm{gra}\,A$, $(y, v)\in\mathrm{gra}\,A$. A monotone operator $A$ is maximally monotone if there is no monotone operator $B$ such that $\mathrm{gra}\,A \subset \mathrm{gra}\,B \neq \mathrm{gra}\,A$. Further, $A$ is $\delta$-strongly monotone if $\langle u - v, x - y\rangle \geq \delta\|x - y\|^2$ for some $\delta > 0$ and all $(x, u), (y, v)\in\mathrm{gra}\,A$. An operator $B : H \to H$ is $\beta$-cocoercive for some $\beta > 0$ if $\langle x - y, Bx - By\rangle \geq \beta\|Bx - By\|^2$ for all $x, y\in H$.
Let $D\subset H$ be nonempty and let $T : D \to H$. The fixed-point set of $T$ is denoted by $\mathrm{Fix}\,T$, i.e., $\mathrm{Fix}\,T = \{x\in D : Tx = x\}$. $T$ is $\alpha$-averaged for some $\alpha\in(0, 1)$ if
$$\|Tx - Ty\|^2 + \frac{1-\alpha}{\alpha}\|(I - T)x - (I - T)y\|^2 \leq \|x - y\|^2, \quad \forall x, y\in D.$$
Let $f\in\Gamma_0(H)$. The Fenchel conjugate of $f$ is
$$f^*(u) = \sup_{x\in H}\{\langle x, u\rangle - f(x)\},$$
and the subdifferential of $f$ is the maximally monotone operator
$$\partial f : x \mapsto \{u\in H : \langle y - x, u\rangle + f(x) \leq f(y),\ \forall y\in H\}.$$
Further, $\partial f(x) = \{\nabla f(x)\}$ when $f$ is differentiable.
Let $f\in\Gamma_0(H)$ and $U\in S_{++}^H$. The scaled proximity operator of $f$ with respect to the metric $U$ is
$$\mathrm{prox}_f^U(x) = \arg\min_{u\in H}\ \frac{1}{2}\|u - x\|_U^2 + f(u).$$
The scaled proximity operator coincides with the standard proximity operator when $U = I$.
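As a concrete instance, for $f = \|\cdot\|_1$ and the scalar metric $U = \frac{1}{\lambda}I$, the scaled proximity operator reduces to $\mathrm{prox}_{\lambda\|\cdot\|_1}$, i.e., componentwise soft-thresholding; this identity is reused in Section 3.2 to recover PDFP²O-type methods from the general scheme. A minimal sketch (the helper name is an assumption):

```python
import numpy as np

def prox_l1(x, lam):
    """prox_{lam * ||.||_1}(x): soft-thresholding applied componentwise."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.3, 0.1, 1.5])
print(prox_l1(x, 0.5))   # [-1.5 -0.   0.   1. ]
```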

Related Work

To solve (1), Argyriou et al. [18] considered the following forward–backward splitting algorithm:
$$x^{k+1} = \mathrm{prox}_{\gamma(h\circ L)}\big(x^k - \gamma\nabla f(x^k)\big), \qquad (6)$$
where $\gamma\in(0, 2\beta)$. By the definition of the proximity operator, (6) is equivalent to
$$x^{k+1} = \arg\min_x\ \frac{1}{2}\big\|x - (x^k - \gamma\nabla f(x^k))\big\|^2 + \gamma h(Lx). \qquad (7)$$
Then, Argyriou et al. [18] employed the FBS_FP²O algorithm to solve (7):
$$\begin{cases} v^{k+1} = \arg\min_v\ \frac{1}{2}\big\|L^*v - \frac{1}{\gamma}(x^k - \gamma\nabla f(x^k))\big\|^2 + \frac{1}{\gamma}h^*(v),\\[2pt] x^{k+1} = x^k - \gamma\nabla f(x^k) - \gamma L^*v^{k+1}. \end{cases} \qquad (8)$$
In (8), one needs to solve the subproblem in $v$ to obtain the update $v^{k+1}$. More precisely, this yields the following inner–outer iterative algorithm:
$$\begin{cases} v^{k+1,j} = \mathrm{prox}_{\frac{\lambda}{\gamma}h^*}\Big(v^{k,j} - \lambda L\big(L^*v^{k,j} - \tfrac{1}{\gamma}(x^k - \gamma\nabla f(x^k))\big)\Big),\\[2pt] x^{k+1} = x^k - \gamma\nabla f(x^k) - \gamma L^*v^{k+1,J}, \end{cases} \qquad (9)$$
where $\lambda\in\big(0, \frac{2}{\lambda_{\max}(LL^*)}\big)$, $j$ denotes the inner iteration index, and $J$ represents the maximum number of inner iterations. Here, $\lambda_{\max}(L)$ denotes the largest eigenvalue of $L$ when $L$ is a matrix.
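The following schematic sketch shows one outer iteration of (9) with its $J$ inner fixed-point steps; the callables `grad_f`, `L_op`, `Lt_op` (applying $L$ and $L^*$), and `prox_hs` (the proximity operator of $\frac{\lambda}{\gamma}h^*$) are illustrative assumptions.

```python
def fbs_fp2o_step(x, v, grad_f, L_op, Lt_op, prox_hs, gamma, lam, J):
    """One outer iteration of (9) with J inner fixed-point steps on v."""
    y = x - gamma * grad_f(x)              # forward (gradient) step
    for _ in range(J):                     # inner loop approximating (8)
        v = prox_hs(v - lam * L_op(Lt_op(v) - y / gamma))
    x_new = y - gamma * Lt_op(v)           # x-update of (9)
    return x_new, v
```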
Chen et al. [19] proposed PDFP²O as follows:
$$\begin{cases} \tilde v^{k+1} = (I - \mathrm{prox}_{\frac{\gamma}{\lambda}h})\big(L(x^k - \gamma\nabla f(x^k)) + (I - \lambda LL^*)v^k\big),\\[2pt] \tilde x^{k+1} = x^k - \gamma\nabla f(x^k) - \lambda L^*\tilde v^{k+1},\\[2pt] (v^{k+1}, x^{k+1}) = (1-\rho_k)(v^k, x^k) + \rho_k(\tilde v^{k+1}, \tilde x^{k+1}), \end{cases} \qquad (10)$$
where $\lambda\in\big(0, \frac{1}{\|L\|^2}\big]$, $\gamma\in(0, 2\beta)$, and $\rho_k\in(0, 1]$. Setting $\rho_k = 1$ and using the Moreau decomposition, we obtain from (10) that
$$\begin{cases} v^{k+1} = \frac{\gamma}{\lambda}\,\mathrm{prox}_{\frac{\lambda}{\gamma}h^*}\Big(\frac{\lambda}{\gamma}L(x^k - \gamma\nabla f(x^k)) + \frac{\lambda}{\gamma}(I - \lambda LL^*)v^k\Big),\\[2pt] x^{k+1} = x^k - \gamma\nabla f(x^k) - \lambda L^*v^{k+1}. \end{cases} \qquad (11)$$
Letting $\bar v^k = \frac{\lambda}{\gamma}v^k$, we have
$$\begin{cases} \bar v^{k+1} = \mathrm{prox}_{\frac{\lambda}{\gamma}h^*}\Big(\frac{\lambda}{\gamma}L(x^k - \gamma\nabla f(x^k)) + (I - \lambda LL^*)\bar v^k\Big),\\[2pt] x^{k+1} = x^k - \gamma\nabla f(x^k) - \gamma L^*\bar v^{k+1}. \end{cases} \qquad (12)$$
Compared with FBS_FP²O (9), PDFP²O (12) performs only one inner iteration to compute $v^{k+1}$ ($\bar v^{k+1}$). On the other hand, if we add the proximal term $\frac{1}{2}\|v - v^k\|_{\frac{1}{\lambda}I - LL^*}^2$ to the subproblem in $v$ in (8), i.e.,
$$v^{k+1} = \arg\min_v\ \frac{1}{2}\Big\|L^*v - \frac{1}{\gamma}(x^k - \gamma\nabla f(x^k))\Big\|^2 + \frac{1}{\gamma}h^*(v) + \frac{1}{2}\|v - v^k\|_{\frac{1}{\lambda}I - LL^*}^2,$$
then after a simple calculation we also recover PDFP²O (12).

3. Relaxed Variable Metric Primal-Dual Fixed-Point Algorithm Based on Proximity Operator

In this section, we propose Rv_PDFP²O for solving the minimization problem (1). The Rv_PDFP²O iteration reads
$$\begin{cases} \tilde v^{k+1} = \mathrm{prox}_{h^*}^{P_k}\big((I - P_k^{-1}LQ_k^{-1}L^*)v^k + P_k^{-1}L(x^k - Q_k^{-1}\nabla f(x^k))\big),\\[2pt] \tilde x^{k+1} = x^k - Q_k^{-1}\nabla f(x^k) - Q_k^{-1}L^*\tilde v^{k+1},\\[2pt] (v^{k+1}, x^{k+1}) = (1-\rho_k)(v^k, x^k) + \rho_k(\tilde v^{k+1}, \tilde x^{k+1}). \end{cases} \qquad (13)$$
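For readability, here is a minimal sketch of one iteration of (13); the callables `grad_f`, `L_op`, `Lt_op` (applying $L$ and $L^*$), `Pk_inv`, `Qk_inv` (applying $P_k^{-1}$ and $Q_k^{-1}$), and `prox_hs_P` (the scaled proximity operator of $h^*$ in the $P_k$-metric) are illustrative assumptions, and `rho` is the relaxation parameter $\rho_k$.

```python
def rv_pdfp2o_step(v, x, grad_f, L_op, Lt_op, prox_hs_P, Pk_inv, Qk_inv, rho):
    """One relaxed variable-metric primal-dual fixed-point step of (13)."""
    y = x - Qk_inv(grad_f(x))                     # forward step on x
    v_tilde = prox_hs_P(v - Pk_inv(L_op(Qk_inv(Lt_op(v)))) + Pk_inv(L_op(y)))
    x_tilde = y - Qk_inv(Lt_op(v_tilde))          # backward correction via L^*
    v_new = (1 - rho) * v + rho * v_tilde         # relaxation step
    x_new = (1 - rho) * x + rho * x_tilde
    return v_new, x_new
```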

3.1. Convergence Analysis

First, let us introduce the product space $K = G\times H$ and define the operators
$$A : K \to 2^K : (v, x) \mapsto (\partial h^*(v) - Lx) \times \{L^*v\}$$
and
$$B : K \to K : (v, x) \mapsto (0, \nabla f(x)).$$
Notice that $u^* = (v^*, x^*)\in\Omega$ if and only if $0\in Au^* + Bu^*$. Although $A$ is maximally monotone and $B$ is cocoercive on $K$, the forward–backward splitting algorithm is not directly applicable, since $(I + \tau A)^{-1}$, $\tau > 0$, does not admit a closed-form expression. To overcome this difficulty, we consider the following preconditioned forward–backward splitting algorithm:
$$u^{k+1} = (1-\rho_k)u^k + \rho_k J_{U_kA}(u^k - U_kBu^k), \qquad (14)$$
where $P_k\in S_{++}^G$, $Q_k\in S_{++}^H$, $\rho_k > 0$, and
$$U_k^{-1} = \begin{pmatrix} P_k - LQ_k^{-1}L^* & 0\\ 0 & Q_k \end{pmatrix}.$$
A simple calculation recovers (13) from (14). To analyze the theoretical convergence of Rv_PDFP²O, we make the following assumptions:
(A1): $\|P_k^{-1}\|_2 \in \big(0, \frac{1}{\lambda_{\max}(LQ_k^{-1}L^*)}\big)$ and $\|Q_k^{-1}\|_2 \in (0, 2\beta)$;
(A2): $\rho_k \in \big[\rho, \frac{4\beta - \|Q_k^{-1}\|_2}{2\beta} - \xi_k\big]$ for some $\rho, \xi_k > 0$;
(A3): $U_{k+1} - U_k \in S_{+}^K$ for all $k\in\mathbb{N}$;
(A4): $\bar\vartheta = \sup_{k\in\mathbb{N}}\|U_k\|_2 < +\infty$.
Under assumption (A1), we have $U_k^{-1}\in S_{++}^K$. We denote $\hat T^{(k)} = J_{U_kA}(I - U_kB)$.
Lemma 1.
Suppose that (A1) holds. Then, the following statements hold:
(1) $I - U_kB$ is $\frac{\|Q_k^{-1}\|_2}{2\beta}$-averaged under $\|\cdot\|_{U_k^{-1}}$;
(2) $\hat T^{(k)}$ is $\frac{2\beta}{4\beta - \|Q_k^{-1}\|_2}$-averaged under $\|\cdot\|_{U_k^{-1}}$.
Proof. 
(1) Let $u_1 = (v_1, x_1), u_2 = (v_2, x_2)\in K$; we have
$$\begin{aligned} \langle U_kBu_1 - U_kBu_2, u_1 - u_2\rangle_{U_k^{-1}} &= \langle\nabla f(x_1) - \nabla f(x_2), x_1 - x_2\rangle\\ &\geq \beta\|\nabla f(x_1) - \nabla f(x_2)\|^2\\ &\geq \frac{\beta}{\|Q_k^{-1}\|_2}\|Q_k^{-1}\nabla f(x_1) - Q_k^{-1}\nabla f(x_2)\|_{Q_k}^2\\ &= \frac{\beta}{\|Q_k^{-1}\|_2}\|U_kBu_1 - U_kBu_2\|_{U_k^{-1}}^2, \end{aligned}$$
which means that $U_kB$ is $\frac{\beta}{\|Q_k^{-1}\|_2}$-cocoercive under $\|\cdot\|_{U_k^{-1}}$. Hence, $I - U_kB$ is $\frac{\|Q_k^{-1}\|_2}{2\beta}$-averaged.
(2) Since $U_k\in S_{++}^K$, $U_kA$ is maximally monotone under $\|\cdot\|_{U_k^{-1}}$, and $J_{U_kA}$ is $\frac{1}{2}$-averaged. Hence, by the composition rule for averaged operators, $J_{U_kA}(I - U_kB)$ is $\frac{2\beta}{4\beta - \|Q_k^{-1}\|_2}$-averaged. □
Now, we are ready to present the main convergence theorem of Rv_PDFP 2 O (13).
Theorem 1.
Suppose that (A1)–(A4) hold. Let $\{u^k = (v^k, x^k)\}$ be generated by (13). Then, we have the following:
(1) For any $u^*\in\Omega$, $\{\|u^k - u^*\|_{U_k^{-1}}\}$ is monotonically decreasing and $\lim_{k\to+\infty}\|u^k - u^*\|_{U_k^{-1}}$ exists;
(2) $\lim_{k\to+\infty}\|u^k - \hat T^{(k)}(u^k)\| = 0$;
(3) $\{u^k\}$ converges weakly to a point in $\Omega$.
Proof. 
(1) Let $\alpha_k = \frac{2\beta}{4\beta - \|Q_k^{-1}\|_2}$. Notice that, by (A3), $\|u\|_{U_{k+1}^{-1}} \leq \|u\|_{U_k^{-1}}$ for all $u\in G\times H$. Then, we obtain
$$\begin{aligned} \|u^{k+1} - u^*\|_{U_{k+1}^{-1}}^2 &\leq \|u^{k+1} - u^*\|_{U_k^{-1}}^2\\ &= (1-\rho_k)\|u^k - u^*\|_{U_k^{-1}}^2 + \rho_k\|\hat T^{(k)}(u^k) - u^*\|_{U_k^{-1}}^2 - \rho_k(1-\rho_k)\|u^k - \hat T^{(k)}(u^k)\|_{U_k^{-1}}^2\\ &\leq \|u^k - u^*\|_{U_k^{-1}}^2 - \rho_k\Big(\frac{1}{\alpha_k} - \rho_k\Big)\|u^k - \hat T^{(k)}(u^k)\|_{U_k^{-1}}^2, \end{aligned} \qquad (16)$$
which implies that $\{\|u^k - u^*\|_{U_k^{-1}}\}$ is decreasing and $\lim_{k\to+\infty}\|u^k - u^*\|_{U_k^{-1}}$ exists.
(2) Summing (16) from $k = 0$ to $N-1$, we obtain
$$\sum_{k=0}^{N-1}\rho_k\Big(\frac{1}{\alpha_k} - \rho_k\Big)\|u^k - \hat T^{(k)}(u^k)\|_{U_k^{-1}}^2 \leq \|u^0 - u^*\|_{U_0^{-1}}^2. \qquad (17)$$
It follows from (17) and (A2) that
$$\lim_{k\to+\infty}\|u^k - \hat T^{(k)}(u^k)\| = 0. \qquad (18)$$
(3) Let $\{u^{k_j}\}\subset\{u^k\}$ be such that $u^{k_j}\rightharpoonup\hat u^*$. It follows from Lemma 2.3 in [30] that there is $U^{-1}\in S_{++}^K$ such that $U_k^{-1}\to U^{-1}$. Define $T = J_{UA}(I - UB)$; we have
$$\begin{aligned} \|u^{k_j} - T(u^{k_j})\| &\leq \|u^{k_j} - \hat T^{(k_j)}(u^{k_j})\| + \|\hat T^{(k_j)}(u^{k_j}) - T(u^{k_j})\|\\ &\leq \|u^{k_j} - \hat T^{(k_j)}(u^{k_j})\| + \frac{1}{\lambda_{\min}(U_{k_j}^{-1})}\big\|(U_{k_j}^{-1} - U^{-1})(u^{k_j} - T(u^{k_j}))\big\|\\ &\leq \|u^{k_j} - \hat T^{(k_j)}(u^{k_j})\| + \bar\vartheta\,\big\|(U_{k_j}^{-1} - U^{-1})(u^{k_j} - T(u^{k_j}))\big\|, \end{aligned} \qquad (19)$$
which implies that $\lim_{j\to\infty}\|u^{k_j} - T(u^{k_j})\| = 0$. The second inequality in (19) holds by Lemma 3.4 of [31]. It then follows from the demiclosedness of $I - T$ at $0$ that $\hat u^*\in\Omega$. By Opial's lemma, we conclude that $u^k\rightharpoonup\hat u^*$. This completes the proof. □

3.2. Connections to Existing Algorithms

In this subsection, we present a series of special cases of the proposed algorithm and point out connections to other existing algorithms.
(i) Let $P_k = \frac{1}{\lambda}I$ and $Q_k = Q$; then, (13) reduces to PDFP²O_AM [26]:
$$\begin{cases} \tilde v^{k+1} = \mathrm{prox}_{\lambda h^*}\big(\lambda L(x^k - Q^{-1}\nabla f(x^k)) + (I - \lambda LQ^{-1}L^*)v^k\big),\\[2pt] \tilde x^{k+1} = x^k - Q^{-1}\nabla f(x^k) - Q^{-1}L^*\tilde v^{k+1},\\[2pt] (v^{k+1}, x^{k+1}) = (1-\rho_k)(v^k, x^k) + \rho_k(\tilde v^{k+1}, \tilde x^{k+1}). \end{cases} \qquad (20)$$
(ii) Let $P_k = \frac{\gamma_k}{\lambda_k}I$ and $Q_k = \frac{1}{\gamma_k}I$; then, we obtain from (13) that
$$\begin{cases} \tilde v^{k+1} = \mathrm{prox}_{\frac{\lambda_k}{\gamma_k}h^*}\Big(\frac{\lambda_k}{\gamma_k}L(x^k - \gamma_k\nabla f(x^k)) + (I - \lambda_kLL^*)v^k\Big),\\[2pt] \tilde x^{k+1} = x^k - \gamma_k\nabla f(x^k) - \gamma_kL^*\tilde v^{k+1},\\[2pt] (v^{k+1}, x^{k+1}) = (1-\rho_k)(v^k, x^k) + \rho_k(\tilde v^{k+1}, \tilde x^{k+1}), \end{cases} \qquad (21)$$
which recovers PDFP²O_DS [27].
(iii) Let $P_k = \frac{\gamma}{\lambda}I$ and $Q_k = \frac{1}{\gamma}I$; then, (13) becomes
$$\begin{cases} \tilde v^{k+1} = \mathrm{prox}_{\frac{\lambda}{\gamma}h^*}\Big(\frac{\lambda}{\gamma}L(x^k - \gamma\nabla f(x^k)) + (I - \lambda LL^*)v^k\Big),\\[2pt] \tilde x^{k+1} = x^k - \gamma\nabla f(x^k) - \gamma L^*\tilde v^{k+1},\\[2pt] (v^{k+1}, x^{k+1}) = (1-\rho_k)(v^k, x^k) + \rho_k(\tilde v^{k+1}, \tilde x^{k+1}), \end{cases} \qquad (22)$$
which is the original PDFP²O (12).
Thus, Rv_PDFP²O (13) reduces to the three algorithms (20)–(22) for these particular choices of $P_k$ and $Q_k$; it is easily confirmed that Rv_PDFP²O (13) generalizes all of them, as the usage sketch below illustrates.
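As a usage illustration, the scalar choices in (iii) plugged into the hypothetical `rv_pdfp2o_step` sketch from Section 3 recover the original PDFP²O (22), using the identity $\mathrm{prox}_{h^*}^{(\gamma/\lambda)I} = \mathrm{prox}_{(\lambda/\gamma)h^*}$; here `prox_hs`, `grad_f`, `L_op`, `Lt_op`, and the iterates `v`, `x` remain illustrative assumptions.

```python
gamma, lam, rho = 1.9, 0.125, 1.0   # parameter values used in Section 4

v, x = rv_pdfp2o_step(
    v, x, grad_f, L_op, Lt_op,
    prox_hs_P=lambda z: prox_hs(z, lam / gamma),  # prox of (lam/gamma) * h^*
    Pk_inv=lambda z: (lam / gamma) * z,           # P_k^{-1} = (lam/gamma) I
    Qk_inv=lambda z: gamma * z,                   # Q_k^{-1} = gamma I
    rho=rho,
)
```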

3.3. Convergence Rates

In this subsection, we discuss convergence rates of (13).

3.3.1. $O(1/k)$ Ergodic Convergence Rate

First, we establish the ergodic convergence rate.
Lemma 2.
Suppose that (A1) holds. Let $\{\tilde u^{k+1} = (\tilde v^{k+1}, \tilde x^{k+1})\}$ be generated by (13). Then, for any $u = (v, x)\in K$, it holds that
$$K(v, \tilde x^{k+1}) - K(\tilde v^{k+1}, x) \leq \frac{1}{2}\Big(\|u^k - u\|_{U_k^{-1}}^2 - \|\tilde u^{k+1} - u\|_{U_k^{-1}}^2 - \|\tilde u^{k+1} - u^k\|_{U_k^{-1}}^2\Big) + \frac{1}{2\beta}\|\tilde x^{k+1} - x^k\|^2. \qquad (23)$$
Proof. 
It follows from the property of the proximity operator that
$$h^*(v) \geq h^*(\tilde v^{k+1}) + \big\langle L(x^k - Q_k^{-1}L^*v^k - Q_k^{-1}\nabla f(x^k)), v - \tilde v^{k+1}\big\rangle + \frac{1}{2}\Big(\|\tilde v^{k+1} - v^k\|_{P_k}^2 + \|\tilde v^{k+1} - v\|_{P_k}^2 - \|v^k - v\|_{P_k}^2\Big). \qquad (24)$$
By the differentiability of $f$, we have
$$\begin{aligned} f(x) &\geq f(\tilde x^{k+1}) + \langle\nabla f(x^k), x - \tilde x^{k+1}\rangle - \frac{1}{2\beta}\|\tilde x^{k+1} - x^k\|^2\\ &= f(\tilde x^{k+1}) + \frac{1}{2}\|\tilde x^{k+1} - x^k\|_{Q_k}^2 + \frac{1}{2}\|x - \tilde x^{k+1}\|_{Q_k}^2 - \frac{1}{2}\|x - x^k\|_{Q_k}^2 + \langle\tilde v^{k+1}, L(\tilde x^{k+1} - x)\rangle - \frac{1}{2\beta}\|\tilde x^{k+1} - x^k\|^2. \end{aligned} \qquad (25)$$
Adding (24) and (25) and rearranging yields (23). □
Theorem 2.
Suppose that (A1)–(A3) hold with $\xi_k = \frac{\|Q_k^{-1}\|_2}{2\beta}$. Let $\{\tilde u^{k+1} = (\tilde v^{k+1}, \tilde x^{k+1})\}$ be generated by (13). Then, for $u^* = (v^*, x^*)\in\Omega$, it holds that
$$K(v^*, \bar X^N) - K(\bar V^N, x^*) \leq \frac{1}{2\rho N}\|u^0 - u^*\|_{U_0^{-1}}^2, \qquad (26)$$
where $\bar X^N = \frac{1}{N}\sum_{k=0}^{N-1}\tilde x^{k+1}$ and $\bar V^N = \frac{1}{N}\sum_{k=0}^{N-1}\tilde v^{k+1}$.
Proof. 
Note that $\tilde u^{k+1} = u^k + \frac{1}{\rho_k}(u^{k+1} - u^k)$. Substituting this back into (23), we have
$$\begin{aligned} K(v^*, \tilde x^{k+1}) - K(\tilde v^{k+1}, x^*) &\leq \frac{1}{2\rho_k}\Big(\|u^k - u^*\|_{U_k^{-1}}^2 - \|u^{k+1} - u^*\|_{U_{k+1}^{-1}}^2\Big) - \frac{2-\rho_k}{2\rho_k^2}\|x^{k+1} - x^k\|_{Q_k - \frac{I}{\beta(2-\rho_k)}}^2\\ &\leq \frac{1}{2\rho}\Big(\|u^k - u^*\|_{U_k^{-1}}^2 - \|u^{k+1} - u^*\|_{U_{k+1}^{-1}}^2\Big). \end{aligned} \qquad (27)$$
Summing (27) from $k = 0, \dots, N-1$, we obtain
$$\sum_{k=0}^{N-1}\big(K(v^*, \tilde x^{k+1}) - K(\tilde v^{k+1}, x^*)\big) \leq \frac{1}{2\rho}\|u^0 - u^*\|_{U_0^{-1}}^2. \qquad (28)$$
The final estimate (26) follows directly from Jensen's inequality. □

3.3.2. Linear Convergence Rate

Next, we establish a linear convergence rate of (13) with $P_k = P$ and $Q_k = Q$; in this case, $\hat T^{(k)} = T$. For convenience, we give an equivalent componentwise formulation of $T$:
$$\begin{aligned} T_1(v, x) &= \mathrm{prox}_{h^*}^{P}\big((I - P^{-1}LQ^{-1}L^*)v + P^{-1}L(x - Q^{-1}\nabla f(x))\big),\\ T_2(v, x) &= x - Q^{-1}\nabla f(x) - Q^{-1}L^*T_1(v, x),\\ T(v, x) &= (T_1(v, x), T_2(v, x)). \end{aligned}$$
In addition, we make the following assumptions:
(A5): $\partial h^*$ is $\tau_h$-strongly monotone under $\|\cdot\|_{I - P^{-1}LQ^{-1}L^*}$, i.e., for all $(x_1, v_1), (x_2, v_2)\in\mathrm{gra}\,\partial h^*$,
$$\langle x_1 - x_2, v_1 - v_2\rangle \geq \tau_h\|x_1 - x_2\|_{I - P^{-1}LQ^{-1}L^*}^2;$$
(A6): $\nabla f$ is $\tau_f$-strongly monotone under the norm $\|\cdot\|_Q$, i.e., for all $x_1, x_2\in H$,
$$\langle x_1 - x_2, \nabla f(x_1) - \nabla f(x_2)\rangle \geq \tau_f\|x_1 - x_2\|_Q^2;$$
(A7): there exist $\theta_1, \theta_2\in(0, 1)$ such that $\|I - P^{-1}LQ^{-1}L^*\|_2 \leq \theta_1$ and $\|x_1 - Q^{-1}\nabla f(x_1) - x_2 + Q^{-1}\nabla f(x_2)\|_Q \leq \theta_2\|x_1 - x_2\|_Q$ for all $x_1, x_2\in H$.
Lemma 3.
Suppose that (A1), (A5), and (A6) hold. Then, for $u_1 = (v_1, x_1), u_2 = (v_2, x_2)\in K$,
$$\|T(u_1) - T(u_2)\|_{(P + 2\tau_hI)(I - P^{-1}LQ^{-1}L^*),\,Q}^2 \leq \theta\,\|u_1 - u_2\|_{(P + 2\tau_hI)(I - P^{-1}LQ^{-1}L^*),\,Q}^2,$$
where $\theta\in(0, 1)$.
Proof. 
Let $M = P - LQ^{-1}L^*$, and let $u_{h_1} = Mv_1 + L(x_1 - Q^{-1}\nabla f(x_1)) - PT_1(u_1) \in \partial h^*(T_1(u_1))$ and $u_{h_2} = Mv_2 + L(x_2 - Q^{-1}\nabla f(x_2)) - PT_1(u_2) \in \partial h^*(T_1(u_2))$. Then, we have
$$\begin{aligned} \|T(u_1) - T(u_2)\|_{M,\,Q}^2 &= \|u_1 - u_2\|_{M,\,Q}^2 - 2\langle\nabla f(x_1) - \nabla f(x_2),\ T_2(u_1) - T_2(u_2) - (x_1 - x_2)\rangle\\ &\quad - \|T(u_1) - T(u_2) - (u_1 - u_2)\|_{M,\,Q}^2 - 2\langle\nabla f(x_1) - \nabla f(x_2),\ x_1 - x_2\rangle\\ &\quad - 2\langle T_1(u_1) - T_1(u_2),\ u_{h_1} - u_{h_2}\rangle\\ &\leq \|u_1 - u_2\|_{M,\,Q}^2 - \Big(2 - \frac{\|Q^{-1}\|_2}{\beta}\Big)\langle\nabla f(x_1) - \nabla f(x_2),\ x_1 - x_2\rangle - 2\langle T_1(u_1) - T_1(u_2),\ u_{h_1} - u_{h_2}\rangle\\ &\leq \|u_1 - u_2\|_{M,\,\big(1 - (2 - \frac{\|Q^{-1}\|_2}{\beta})\tau_f\big)Q}^2 - 2\tau_h\|T_1(u_1) - T_1(u_2)\|_{I - P^{-1}LQ^{-1}L^*}^2, \end{aligned}$$
which concludes the proof with $\theta = \max\Big\{1 - \big(2 - \frac{\|Q^{-1}\|_2}{\beta}\big)\tau_f,\ \frac{1}{1 + 2\tau_h\lambda_{\min}(P^{-1})}\Big\}$. □
Lemma 4.
Suppose that (A1) and (A7) hold. Then, for $u_1 = (v_1, x_1), u_2 = (v_2, x_2)\in K$,
$$\|T(u_1) - T(u_2)\|_{P,Q}^2 \leq \theta\,\|u_1 - u_2\|_{P,Q}^2,$$
where $\theta\in(0, 1)$.
Proof. 
Define $M = P - LQ^{-1}L^*$. It follows from the firm nonexpansiveness of $\mathrm{prox}_{h^*}^P$ that
$$\begin{aligned} \|T(u_1) - T(u_2)\|_{P,Q}^2 &\leq \|x_1 - Q^{-1}\nabla f(x_1) - x_2 + Q^{-1}\nabla f(x_2)\|_Q^2 - \|T_1(u_1) - T_1(u_2)\|_M^2 + 2\langle T_1(u_1) - T_1(u_2),\ M(v_1 - v_2)\rangle\\ &= \|x_1 - Q^{-1}\nabla f(x_1) - x_2 + Q^{-1}\nabla f(x_2)\|_Q^2 + \|v_1 - v_2\|_M^2 - \|T_1(u_1) - T_1(u_2) - (v_1 - v_2)\|_M^2\\ &\leq \theta_2\|x_1 - x_2\|_Q^2 + \theta_1\|v_1 - v_2\|_P^2\\ &\leq \theta\,\|u_1 - u_2\|_{P,Q}^2, \end{aligned}$$
where $\theta = \max\{\theta_1, \theta_2\}\in(0, 1)$. □
Theorem 3.
Suppose that (A1) holds, and that either (A5)–(A6) hold or (A7) holds. Let $\{u^{k+1} = (v^{k+1}, x^{k+1})\}$ be generated by (13) with $\rho_k\in(0, \bar\rho)$, where $\bar\rho = \min\big\{\frac{2}{1+\sqrt\theta}, \frac{4\beta - \|Q^{-1}\|_2}{2\beta}\big\}$. Then, $\{u^{k+1}\}$ converges linearly to the unique point $u^*\in\Omega$, i.e.,
$$\|u^{k+1} - u^*\| \leq c\,\eta^{k+1},$$
where $c > 0$ and $\eta\in(0, 1)$.
Proof. 
Define $T_{\rho_k} = (1-\rho_k)I + \rho_kT$. Note that $u^{k+1} = T_{\rho_k}(u^k)$ and $\mathrm{Fix}\,T_{\rho_k} = \mathrm{Fix}\,T = \Omega$. By Lemma 3 (respectively, Lemma 4), $T$ is $\sqrt\theta$-contractive in the corresponding weighted norm, so $T_{\rho_k}$ is $\eta_k$-contractive with $\eta_k = |1 - \rho_k| + \rho_k\sqrt\theta$; the condition $\rho_k\in(0, \bar\rho)$ guarantees $\eta_k < 1$. Therefore, $\{u^{k+1}\}$ converges linearly to the unique fixed point of $T_{\rho_k}$. □

4. Numerical Experiments

In this section, we apply the proposed Rv_PDFP²O (13) to solve the $L_2$+TV deblurring problem (2) and compare it with ADMM [5], PDS [32], PDFP²O [19], and PDFP²O_AM [26]. All experiments are performed under Windows 7 in MATLAB (R2014a) on a laptop with an Intel Core 2 Quad 2.3 GHz CPU and 4 GB of memory.
The test images are the standard “Text” image with a size of $256\times 256$ and the “Barbara” and “Goldhill” images with a size of $512\times 512$, shown in Figure 1. We report numerical results for images blurred by an average kernel of size $a$ and corrupted by Gaussian noise with standard deviation $\eta$. To evaluate the ability of the algorithms to handle different degradation levels, we consider four settings of $(a, \eta)$: (1) $a = 3$, $\eta = 0.01$; (2) $a = 3$, $\eta = 0.05$; (3) $a = 7$, $\eta = 0.01$; and (4) $a = 7$, $\eta = 0.05$.
For the two common parameters $\gamma$ and $\lambda$ in PDS and PDFP²O, we set $\gamma = 1.9$ and $\lambda = 0.125$. Following [26], we choose $Q = A^TA + \zeta L^TL$ with $\zeta = 0.1$. In particular, $Q^{-1}$ can be applied efficiently by FFT under periodic boundary conditions. We tune the regularization parameter $\mu$ to achieve the maximum SNR; the selected values are listed in Table 2.
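Since $A$ and $L$ are convolution operators under periodic boundary conditions, $Q = A^TA + \zeta L^TL$ is diagonalized by the 2D DFT, so $Q^{-1}$ can be applied in $O(n\log n)$. A sketch, where the array `otf_A` (the DFT eigenvalues of the blur $A$, as produced by a psf2otf-style routine) is an illustrative assumption:

```python
import numpy as np

def make_Q_inv(otf_A, shape, zeta=0.1):
    # Eigenvalues of L^T L for periodic forward differences along each axis.
    wx = np.abs(np.exp(-2j * np.pi * np.fft.fftfreq(shape[0])) - 1.0) ** 2
    wy = np.abs(np.exp(-2j * np.pi * np.fft.fftfreq(shape[1])) - 1.0) ** 2
    eig_Q = np.abs(otf_A) ** 2 + zeta * (wx[:, None] + wy[None, :])

    def Q_inv(z):
        """Solve Q u = z in O(n log n) by diagonalizing Q in the DFT basis."""
        return np.real(np.fft.ifft2(np.fft.fft2(z) / eig_Q))

    return Q_inv
```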
The relative error of the iterates is used as the stopping criterion:
$$\frac{\|x^{k+1} - x^k\|_2}{\|x^k\|_2} < \varepsilon,$$
where $\varepsilon > 0$ is a prescribed tolerance. In the experiments, we choose $\varepsilon = 10^{-4}, 10^{-6}, 10^{-8}$. The quality of the restored images is evaluated by the signal-to-noise ratio (SNR), defined by
$$\mathrm{SNR} = 10\log_{10}\frac{\|x\|^2}{\|x_r - x\|^2},$$
where $x$ and $x_r$ denote the original and recovered images, respectively. The numerical results are listed in Table 3 and Table 4.
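For completeness, minimal helpers matching the stopping rule and the SNR definition above (the function names are assumptions):

```python
import numpy as np

def stopped(x_new, x_old, eps):
    """Relative change of the iterates, compared against the tolerance."""
    return np.linalg.norm(x_new - x_old) / np.linalg.norm(x_old) < eps

def snr_db(x, x_r):
    """SNR = 10 * log10(||x||^2 / ||x_r - x||^2) for original x, restored x_r."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x_r - x) ** 2))
```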
It can be seen from Table 3 and Table 4 that the proposed Rv_PDFP²O converges faster than the other algorithms in terms of the number of iterations. In addition, Figure 2, Figure 3 and Figure 4 show the images recovered with $\varepsilon = 10^{-8}$; the visual quality of the images obtained by the proposed algorithm is slightly better than that of the compared algorithms.

5. Conclusions

In this article, we proposed Rv_PDFP²O to solve the convex optimization problem (1). The proposed algorithm combines over-relaxed parameters with variable metrics. Under a proper preconditioned operator, we derived Rv_PDFP²O and established its convergence. By choosing different stepsizes, we showed that Rv_PDFP²O recovers some existing algorithms, including PDFP²O, PDFP²O_AM, and PDFP²O_DS, and we provided larger relaxation parameters for these algorithms. Furthermore, we studied the $O(1/k)$ ergodic convergence rate in terms of the partial primal-dual gap. Under stronger conditions on the objective functions and the stepsizes, we proved that the iterates converge linearly. We applied Rv_PDFP²O to the TV image-restoration problem (2); the numerical results show that Rv_PDFP²O performs better than some existing algorithms. Self-adaptive stepsizes and inertial terms are known to accelerate algorithms of this type, but these two strategies are not incorporated into Rv_PDFP²O here; we would like to derive a self-adaptive Rv_PDFP²O and an inertial Rv_PDFP²O in future work.

Author Contributions

Conceptualization, W.H. and H.L.; methodology, W.H. and H.L.; software, Y.T. and M.W.; validation, W.H., Y.T. and H.L.; formal analysis, W.H. and Y.T.; and writing, W.H., Y.T., M.W. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Science Foundation of China, grant numbers 12271117, 12061045, and 12001416, and the basic research joint-funding project of the university and Guangzhou city, grant number 202102010434.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chambolle, A.; Pock, T. An introduction to continuous optimization for imaging. Acta Numer. 2016, 25, 161–319.
2. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
3. Davenport, M.A.; Duarte, M.F.; Eldar, Y.C.; Kutyniok, G. Introduction to Compressed Sensing; Cambridge University Press: Cambridge, UK, 2012.
4. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2005, 67, 301–320.
5. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2010, 3, 1–122.
6. Chan, R.H.; Tao, M.; Yuan, X.M. Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers. SIAM J. Imaging Sci. 2011, 6, 680–697.
7. Han, D.R.; Yuan, X.M. A note on the alternating direction method of multipliers. J. Optim. Theory Appl. 2012, 155, 227–238.
8. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2017.
9. Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
10. Yan, M.; Yin, W.T. Self equivalence of the alternating direction method of multipliers. In Splitting Methods in Communication, Imaging, Science, and Engineering; Glowinski, R., Osher, S.J., Yin, W.T., Eds.; Springer: Cham, Switzerland, 2016; pp. 165–194.
11. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
12. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
13. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434.
14. Iutzeler, F.; Malick, J. On the proximal gradient algorithm with alternated inertia. J. Optim. Theory Appl. 2018, 176, 688–710.
15. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145.
16. Komodakis, N.; Pesquet, J.-C. Playing with duality: An overview of recent primal-dual approaches for solving large-scale optimization problems. IEEE Signal Process. Mag. 2015, 32, 31–54.
17. Micchelli, C.A.; Shen, L.X.; Xu, Y.S. Proximity algorithms for image models: Denoising. Inverse Probl. 2011, 27, 045009.
18. Argyriou, A.; Micchelli, C.A.; Pontil, M.; Shen, L.X.; Xu, Y.S. Efficient first order methods for linear composite regularizers. arXiv 2011, arXiv:1104.1436.
19. Chen, P.J.; Huang, J.G.; Zhang, X.Q. A primal-dual fixed-point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 2013, 29, 025011.
20. Loris, I.; Verhoeven, C. On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty. Inverse Probl. 2011, 27, 125007.
21. He, Z.; Zhu, Y.N.; Qiu, S.H.; Wang, T.; Zhang, C.C.; Sun, B.M.; Zhang, X.Q.; Feng, Y. Low-rank and framelet based sparsity decomposition for interventional MRI reconstruction. IEEE Trans. Biomed. Eng. 2022, 69, 2294–2304.
22. Ren, Z.M.; Zhang, Q.F.; Yuan, Y.X. A primal–dual fixed-point algorithm for TVL1 wavelet inpainting based on Moreau envelope. Mathematics 2022, 10, 2470.
23. Combettes, P.L.; Condat, L.; Pesquet, J.C.; Vũ, B.C. A forward–backward view of some primal-dual optimization methods in image recovery. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP 2014), Paris, France, 27–30 October 2014; pp. 4141–4145.
24. Drori, Y.; Sabach, S.; Teboulle, M. A simple algorithm for a class of nonsmooth convex-concave saddle-point problems. Oper. Res. Lett. 2015, 43, 209–214.
25. Luke, D.R.; Shefi, R. A globally linearly convergent method for pointwise quadratically supportable convex-concave saddle point problems. J. Math. Anal. Appl. 2018, 457, 1568–1590.
26. Chen, D.Q.; Zhou, Y.; Song, L.J. Fixed point algorithm based on adapted metric method for convex minimization problem with application to image deblurring. Adv. Comput. Math. 2016, 42, 1287–1310.
27. Wen, M.; Tang, Y.C.; Cui, A.G.; Peng, J.G. Efficient primal-dual fixed-point algorithms with dynamic stepsize for composite convex optimization problems. Multidimens. Syst. Signal Process. 2019, 30, 1531–1544.
28. Li, Z.; Yan, M. New convergence analysis of a primal-dual algorithm with large stepsizes. Adv. Comput. Math. 2019, 47, 1–20.
29. Zhu, Y.N.; Zhang, X.Q. Two modified schemes for the primal dual fixed point method. CSIAM Trans. Appl. Math. 2021, 2, 108–130.
30. Combettes, P.L.; Vũ, B.C. Variable metric quasi-Fejér monotonicity. Nonlinear Anal. Theory Methods Appl. 2013, 78, 17–31.
31. Cui, F.Y.; Tang, Y.C.; Zhu, C.X. Convergence analysis of a variable metric forward–backward splitting algorithm with applications. J. Inequal. Appl. 2019, 2019, 1–27.
32. Condat, L. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 2013, 158, 460–479.
Figure 1. The test images: (a) Text, (b) Barbara, and (c) Goldhill.
Figure 2. The “Text” images. Row 1: the blurry and noisy images; Row 2: the images restored by ADMM; Row 3: the images restored by PDS; Row 4: the images restored by PDFP²O; Row 5: the images restored by PDFP²O_AM; Row 6: the images restored by Rv_PDFP²O.
Figure 3. The “Barbara” images. Row 1: the blurry and noisy images; Row 2: the images restored by ADMM; Row 3: the images restored by PDS; Row 4: the images restored by PDFP²O; Row 5: the images restored by PDFP²O_AM; Row 6: the images restored by Rv_PDFP²O.
Figure 4. The “Goldhill” images. Row 1: the blurry and noisy images; Row 2: the images restored by ADMM; Row 3: the images restored by PDS; Row 4: the images restored by PDFP²O; Row 5: the images restored by PDFP²O_AM; Row 6: the images restored by Rv_PDFP²O.
Table 1. Listing of existing primal-dual fixed-point-type algorithms.

| Algorithm | $\rho_k$ | $\lambda$ | $\gamma$ | Variable Metric | Ergodic Rate | Linear Rate |
|---|---|---|---|---|---|---|
| PDFP²O [19] | $(0, 1]$ | $\big(0, \frac{1}{\lambda_{\max}(LL^*)}\big]$ | $(0, 2\beta)$ | | | |
| VMPD [23] | $(0, 1]$ | $\big(0, \frac{1}{\lambda_{\max}(LQ_k^{-1}L^*)}\big)$ | $(0, 2\beta)$ | ✓ | | |
| PAPC [24] | $1$ | $\big(0, \frac{1}{\lambda_{\max}(LL^*)}\big]$ | $(0, \beta]$ | | | ✓ |
| PDFP²O_AM [26] | $(0, 1]$ | $\big(0, \frac{1}{\lambda_{\max}(LQ^{-1}L^*)}\big)$ | $(0, 2\beta)$ | ✓ | | |
| PDFP²O_DS [27] | $(0, 1)$ | $\big(0, \frac{1}{\lambda_{\max}(LL^*)}\big)$ | $(0, 2\beta)$ | | | |
| IPDFP²O [29] | $1$ | $\big(0, \frac{1}{\lambda_{\max}(LL^*)}\big)$ | $(0, 2\beta)$ | | | |
Table 2. The best selection of $\mu$ for each noise level.

| Images | $a = 3$, $\eta = 0.01$ | $a = 3$, $\eta = 0.05$ | $a = 7$, $\eta = 0.01$ | $a = 7$, $\eta = 0.05$ |
|---|---|---|---|---|
| “Text” | 0.0013 | 0.0078 | 0.0003 | 0.0027 |
| “Barbara” | 0.0004 | 0.0101 | 0.0004 | 0.009 |
| “Goldhill” | 0.0011 | 0.0148 | 0.0006 | 0.0085 |
Table 3. Performance of the compared algorithms for $a = 3$ in terms of SNR (dB) and the number of iterations $k$ for the given tolerance values $\varepsilon$.

| $\eta$ | Image | Method | SNR (dB), $\varepsilon = 10^{-4}$ | $k$ | SNR (dB), $\varepsilon = 10^{-6}$ | $k$ | SNR (dB), $\varepsilon = 10^{-8}$ | $k$ |
|---|---|---|---|---|---|---|---|---|
| 0.01 | Text | ADMM | 26.6288 | 289 | 27.6605 | 813 | 27.6686 | 1537 |
| | | PDS | 26.4334 | 296 | 27.6402 | 933 | 27.6500 | 1693 |
| | | PDFP²O | 27.0279 | 182 | 27.6455 | 521 | 27.6500 | 915 |
| | | PDFP²O_AM | 27.1659 | 178 | 27.6650 | 456 | 27.6686 | 831 |
| | | Rv_PDFP²O | 27.1838 | 173 | 27.6615 | 441 | 27.6686 | 803 |
| | Barbara | ADMM | 21.8641 | 138 | 21.6752 | 1423 | 21.6704 | 5332 |
| | | PDS | 21.8186 | 152 | 21.6783 | 1499 | 21.6733 | 5338 |
| | | PDFP²O | 21.7928 | 118 | 21.6757 | 958 | 21.6733 | 3022 |
| | | PDFP²O_AM | 21.8115 | 109 | 21.6727 | 911 | 21.6704 | 3058 |
| | | Rv_PDFP²O | 21.8075 | 108 | 21.6726 | 887 | 21.6704 | 2962 |
| | Goldhill | ADMM | 26.6924 | 68 | 26.5822 | 513 | 26.5807 | 1554 |
| | | PDS | 26.6686 | 79 | 26.5769 | 510 | 26.5754 | 1471 |
| | | PDFP²O | 26.5980 | 95 | 26.5761 | 306 | 26.5754 | 829 |
| | | PDFP²O_AM | 26.6467 | 51 | 26.5814 | 314 | 26.5807 | 878 |
| | | Rv_PDFP²O | 26.6448 | 50 | 26.5814 | 305 | 26.5807 | 849 |
| 0.05 | Text | ADMM | 14.7157 | 150 | 14.7797 | 511 | 14.7801 | 1048 |
| | | PDS | 14.6905 | 161 | 14.7623 | 533 | 14.7626 | 1904 |
| | | PDFP²O | 14.7341 | 103 | 14.7627 | 384 | 14.7626 | 1892 |
| | | PDFP²O_AM | 14.7510 | 96 | 14.7800 | 308 | 14.7801 | 1185 |
| | | Rv_PDFP²O | 14.7520 | 93 | 14.7800 | 296 | 14.7801 | 1184 |
| | Barbara | ADMM | 18.3425 | 47 | 18.3337 | 231 | 18.3336 | 883 |
| | | PDS | 18.3496 | 74 | 18.3454 | 349 | 18.3454 | 1683 |
| | | PDFP²O | 18.3459 | 95 | 18.3453 | 342 | 18.3454 | 1681 |
| | | PDFP²O_AM | 18.3362 | 42 | 18.3336 | 226 | 18.3336 | 1154 |
| | | Rv_PDFP²O | 18.3359 | 42 | 18.3336 | 226 | 18.3336 | 1154 |
| | Goldhill | ADMM | 22.7421 | 47 | 22.7367 | 260 | 22.7367 | 1134 |
| | | PDS | 22.7033 | 84 | 22.7015 | 524 | 22.7014 | 2529 |
| | | PDFP²O | 22.7027 | 97 | 22.7015 | 508 | 22.7014 | 2527 |
| | | PDFP²O_AM | 22.7383 | 53 | 22.7367 | 328 | 22.7367 | 1571 |
| | | Rv_PDFP²O | 22.7382 | 53 | 22.7367 | 328 | 22.7367 | 1568 |
Table 4. Performance of the compared algorithms for $a = 7$ in terms of SNR (dB) and the number of iterations $k$ for the given tolerance values $\varepsilon$.

| $\eta$ | Image | Method | SNR (dB), $\varepsilon = 10^{-4}$ | $k$ | SNR (dB), $\varepsilon = 10^{-6}$ | $k$ | SNR (dB), $\varepsilon = 10^{-8}$ | $k$ |
|---|---|---|---|---|---|---|---|---|
| 0.01 | Text | ADMM | 12.8474 | 832 | 14.1337 | 6220 | 14.1548 | 19,540 |
| | | PDS | 12.4021 | 1114 | 14.1117 | 7138 | 14.1382 | 19,512 |
| | | PDFP²O | 13.1403 | 835 | 14.1252 | 4277 | 14.1383 | 10,939 |
| | | PDFP²O_AM | 13.4212 | 646 | 14.1446 | 3828 | 14.1549 | 11,248 |
| | | Rv_PDFP²O | 13.4436 | 634 | 14.1450 | 3718 | 14.1549 | 10,899 |
| | Barbara | ADMM | 18.5769 | 145 | 18.4559 | 2153 | 18.4523 | 9172 |
| | | PDS | 18.5597 | 192 | 18.4592 | 2549 | 18.4541 | 9851 |
| | | PDFP²O | 18.5490 | 161 | 18.4567 | 1647 | 18.4541 | 5623 |
| | | PDFP²O_AM | 18.5445 | 124 | 18.4541 | 1401 | 18.4523 | 5288 |
| | | Rv_PDFP²O | 18.5429 | 122 | 18.4541 | 1365 | 18.4523 | 5124 |
| | Goldhill | ADMM | 23.2362 | 116 | 23.0522 | 1475 | 23.0480 | 8688 |
| | | PDS | 23.1623 | 167 | 23.0451 | 1791 | 23.0395 | 8417 |
| | | PDFP²O | 23.1506 | 129 | 23.0424 | 1146 | 23.0395 | 5284 |
| | | PDFP²O_AM | 23.1775 | 92 | 23.0500 | 966 | 23.0480 | 5608 |
| | | Rv_PDFP²O | 23.1739 | 91 | 23.0499 | 943 | 23.0480 | 5463 |
| 0.05 | Text | ADMM | 7.0043 | 374 | 6.9970 | 2466 | 6.9972 | 6803 |
| | | PDS | 6.9372 | 535 | 6.9771 | 3074 | 6.9773 | 7579 |
| | | PDFP²O | 6.9601 | 373 | 6.9772 | 1859 | 6.9773 | 4154 |
| | | PDFP²O_AM | 6.9997 | 276 | 6.9971 | 1528 | 6.9972 | 3737 |
| | | Rv_PDFP²O | 6.9995 | 270 | 6.9971 | 1488 | 6.9972 | 3611 |
| | Barbara | ADMM | 17.1962 | 95 | 17.1656 | 764 | 17.1653 | 2939 |
| | | PDS | 17.2239 | 161 | 17.1937 | 900 | 17.1932 | 3370 |
| | | PDFP²O | 17.2073 | 121 | 17.1933 | 648 | 17.1932 | 3091 |
| | | PDFP²O_AM | 17.1783 | 80 | 17.1654 | 506 | 17.1653 | 1955 |
| | | Rv_PDFP²O | 17.1776 | 79 | 17.1654 | 501 | 17.1653 | 1947 |
| | Goldhill | ADMM | 20.4785 | 83 | 20.4481 | 612 | 20.4477 | 2441 |
| | | PDS | 20.4285 | 153 | 20.4010 | 762 | 20.4005 | 2684 |
| | | PDFP²O | 20.4114 | 116 | 20.4006 | 566 | 20.4005 | 2323 |
| | | PDFP²O_AM | 20.4595 | 72 | 20.4478 | 430 | 20.4477 | 1529 |
| | | Rv_PDFP²O | 20.4586 | 72 | 20.4478 | 426 | 20.4477 | 1505 |