Article

Iterative Methods for Computing the Resolvent of Composed Operators in Hilbert Spaces

Department of Mathematics, Nanchang University, Nanchang 330031, China
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(2), 131; https://doi.org/10.3390/math7020131
Submission received: 31 December 2018 / Revised: 22 January 2019 / Accepted: 28 January 2019 / Published: 1 February 2019
(This article belongs to the Special Issue Fixed Point Theory and Related Nonlinear Problems with Applications)

Abstract

The resolvent is a fundamental concept in studying various operator splitting algorithms. In this paper, we investigate the problem of computing the resolvent of compositions of maximally monotone operators with bounded linear operators. First, we discuss several explicit solutions of this resolvent operator by taking into account additional constraints on the linear operator. Second, we propose a fixed point approach for computing this resolvent operator in the general case. Based on the Krasnoselskii–Mann algorithm for finding fixed points of non-expansive operators, we prove the strong convergence of the sequence generated by the proposed algorithm. As a consequence, we obtain an effective iterative algorithm for computing the scaled proximity operator of a convex function composed with a linear operator, which has wide applications in image restoration and image reconstruction problems. Furthermore, we propose and study iterative algorithms for computing the resolvent operator of a finite sum of maximally monotone operators as well as the proximal operator of a finite sum of proper, lower semi-continuous convex functions.

1. Introduction

Let $H$ be a real Hilbert space whose inner product is denoted by $\langle\cdot,\cdot\rangle$ and whose norm is $\|\cdot\|$. Let $T: H \to 2^{H}$ be a maximally monotone operator with its domain and range denoted by $\mathrm{Dom}(T)$ and $R(T)$, and let $I$ be the identity operator. We consider the simplest monotone inclusion problem:

$$\text{find } x \in H \text{ such that } 0 \in Tx. \tag{1}$$

Many problems in variational inequalities, partial differential equations, and signal and image processing can be solved via the monotone inclusion problem (1); see, for example, [1,2,3,4]. It is well known that $x$ is a solution of (1) if and only if $x = J_{\lambda T}x$, for any $\lambda > 0$. Here and in what follows, the resolvent of $T$ with parameter $\lambda > 0$ is defined by $J_{\lambda T} = (I + \lambda T)^{-1}$, and the Yosida approximation of $T$ with index $\lambda$ is denoted by ${}^{\lambda}T = \frac{1}{\lambda}(I - J_{\lambda T})$. The proximal point algorithm (PPA) is the most popular method to solve (1).
Let $\varphi: H \to (-\infty, +\infty]$ be a proper, lower semi-continuous convex function. By Fermat's rule, the following convex minimization problem

$$\min_{x\in H} \varphi(x) \tag{2}$$

is equivalent to the monotone inclusion problem (1) with $T = \partial\varphi$, where $\partial\varphi$ is the classical subdifferential of $\varphi$. For any $\lambda > 0$ and $x^0\in H$, the PPA for solving the convex minimization problem (2) is defined as

$$x^{k+1} = \mathrm{prox}_{\lambda\varphi}(x^k), \quad k \ge 0. \tag{3}$$
In fact, the resolvent of $\partial\varphi$ coincides with the proximity operator $\mathrm{prox}_{\varphi}$, which was first introduced by Moreau [5]. More precisely, we have

$$J_{\partial\varphi}(x) = \mathrm{prox}_{\varphi}(x) = \arg\min_{u}\, \tfrac{1}{2}\|u - x\|^2 + \varphi(u). \tag{4}$$
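For concreteness, when $\varphi = \tau\|\cdot\|_1$ the minimization above separates coordinatewise and the proximity operator reduces to the well-known soft-thresholding map; a minimal numerical sketch (the function name is our own):

```python
import numpy as np

def prox_l1(x, tau):
    """Proximity operator of tau*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([3.0, -0.5, 1.2])
print(prox_l1(x, 1.0))  # components: 2.0, -0.0, 0.2
```

Each component moves toward zero by at most $\tau$ and is clipped at zero, which is why the proximity operator of the $\ell_1$ norm promotes sparsity.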
The proximity operators play an important role in studying various operator splitting algorithms for solving convex optimization problems; see, for example, [6,7,8]. In particular, Combettes et al. [9] proposed a forward-backward splitting algorithm for solving the dual of the proximity operator of a sum of composed convex functions. Adly et al. [10] provided an explicit decomposition of the proximity operator of the sum of two closed convex functions. Since the resolvent operator is a natural generalization of the proximity operator, Bauschke and Combettes [11] extended the Dykstra algorithm [12] for computing the projection onto the intersection of two closed convex sets to compute the resolvent of the sum of two maximally monotone operators. Combettes [13] generalized the Douglas–Rachford splitting and a Dykstra-like algorithm to compute the resolvent of a sum of maximally monotone operators. Very recently, Artacho and Campoy [14] generalized the averaged alternating modified reflection algorithm [15] to compute the resolvent of the sum of two maximally monotone operators. On the other hand, consider the resolvent of the composed operator $A^*TA$, where $H_1$ and $H_2$ are two Hilbert spaces, $A: H_1\to H_2$ is a continuous linear operator with adjoint $A^*$, and $T: H_2\to 2^{H_2}$ is a maximally monotone operator. Fukushima [16] proved that, if $AA^*$ is invertible, then $A^*TA$ is maximally monotone. Moreover, Fukushima [16] showed that
$$J_{\lambda A^*TA}(x) = x - \lambda A^*(T^{-1} + \lambda AA^*)^{-1}(Ax), \tag{5}$$

and

$$J_{\lambda A^*TA}(x) = x - \lambda A^*(AA^*)^{-1}(Ax - z), \tag{6}$$

where $z$ is the unique solution of $0 \in \frac{1}{\lambda}(AA^*)^{-1}(z - Ax) + Tz$. The difference between (5) and (6) is that the former requires evaluating $T^{-1}$, while the latter has to calculate $(AA^*)^{-1}$. In Proposition 23.25 of [17], Bauschke and Combettes proved that, if $AA^* = \mu I$ for some $\mu > 0$, then

$$J_{\lambda A^*TA}(x) = x - \lambda A^*\bigl({}^{\lambda\mu}T\bigr)(Ax). \tag{7}$$
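This identity can be checked numerically. The sketch below is our own illustration: it takes $T = \partial\|\cdot\|_1$, so that $J_{\lambda\mu T}$ is soft-thresholding, and $A = cQ$ with $Q$ orthogonal, so that $AA^* = c^2I$; the point produced by the formula is then verified against the subgradient optimality condition characterizing $J_{\lambda A^*TA}(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, c = 0.7, 2.0
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthogonal Q
A = c * Q                                         # A @ A.T = c^2 * I, so mu = c^2
mu = c ** 2

def soft(v, t):   # resolvent of t*d||.||_1, i.e. soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def yosida(v, t): # Yosida approximation (I - J_{tT})/t for T = d||.||_1
    return (v - soft(v, t)) / t

x = rng.standard_normal(4)
p = x - lam * A.T @ yosida(A @ x, lam * mu)       # the closed-form resolvent

# optimality: g := A(x - p)/(lam*mu) must be a subgradient of ||.||_1 at A @ p
g = A @ (x - p) / (lam * mu)
Ap = A @ p
ok = all(abs(gi - np.sign(ai)) < 1e-8 if abs(ai) > 1e-10 else abs(gi) <= 1 + 1e-8
         for gi, ai in zip(g, Ap))
print(ok)  # True
```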
Since the computation of the resolvent of composed operators in (5)–(7) is restricted by these conditions, Moudafi [18] developed a fixed point approach for computing the resolvent of the composed operator $A^*TA$ without requiring $(AA^*)^{-1}$ or $T^{-1}$. The basic assumption is that the operator $A^*TA$ is maximally monotone. This is true if $0 \in \mathrm{ri}(R(A) - \mathrm{Dom}(T))$ [19,20], where $\mathrm{ri}$ stands for the relative interior of a set, or if $\mathrm{cone}(R(A) - \mathrm{Dom}(T)) = \overline{\mathrm{span}}(R(A) - \mathrm{Dom}(T))$ [17], where $\mathrm{cone}$ denotes the conical hull of a set and $\overline{\mathrm{span}}$ stands for the closed span of a set. The most general condition for the maximal monotonicity of the composition $A^*TA$ can be found in [21].
Let $T = \partial\varphi$; then, the resolvent of $A^*TA$ is equivalent to evaluating the proximity operator $\mathrm{prox}_{\varphi\circ A}$. More precisely, we have

$$J_{A^*TA}(x) = \mathrm{prox}_{\varphi\circ A}(x) = \arg\min_{u}\, \tfrac{1}{2}\|u - x\|^2 + \varphi(Au). \tag{8}$$
This convex optimization problem (8) is a general extension of the famous Rudin–Osher–Fatemi (ROF) image denoising model [22]. It is worth mentioning that Micchelli et al. [23] proposed a fixed point algorithm to solve the proximity operator (8). The work of Moudafi [18] is a generalization of Micchelli et al. [23].
In recent years, Newton-type methods have been combined with the forward-backward splitting (FBS) algorithm to accelerate the original FBS algorithm; see, for example, [24,25,26]. Argyriou et al. [27] considered the following convex optimization problem:

$$\min_{u\in\mathbb{R}^n}\, \tfrac{1}{2}u^{\top}Uu - b^{\top}u + \varphi(Au), \tag{9}$$

where $b\in\mathbb{R}^n$, $A: \mathbb{R}^n\to\mathbb{R}^m$ is a linear transformation, $U\in\mathbb{R}^{n\times n}$ is a symmetric positive definite matrix, and $\varphi: \mathbb{R}^m\to(-\infty, +\infty]$ is a proper, lower semi-continuous convex function. They proved that the minimizer of (9) can be found via a fixed point equation. In particular, when $U = I$, the convex optimization problem (9) is equivalent to the problem of computing the proximity operator of $\varphi\circ A$ in (8). In [28], Chen et al. proposed an accelerated primal-dual fixed point (PDFP2O) algorithm based on an adapted metric for minimizing the sum of two convex functions, one of which is differentiable while the other is composed with a linear transformation. This algorithm can be viewed as a combination of the original PDFP2O [29] with a quasi-Newton method. The convex optimization problem (9) can be viewed as the proximity operator of $\varphi\circ A$ relative to the metric induced by $U$. Recall that the proximity operator of a proper, lower semi-continuous convex function $f: \mathbb{R}^n\to(-\infty, +\infty]$ relative to the metric induced by $U$ is defined by

$$\mathrm{prox}_f^U: \mathbb{R}^n\to\mathbb{R}^n : x\mapsto \arg\min_{u\in\mathbb{R}^n}\, \tfrac{1}{2}\|u - x\|_U^2 + f(u), \tag{10}$$

which was introduced in [30]; see also [31]. Thus, (9) is equivalent to

$$\min_{u\in\mathbb{R}^n}\, \tfrac{1}{2}\|u - x\|_U^2 + \varphi(Au), \tag{11}$$
where $x\in\mathbb{R}^n$ and $A$, $U$ and $\varphi$ are the same as in (9). When $A = I$, (11) becomes the scaled proximal operator (10). By the first-order optimality condition, the convex optimization problem (11) is equivalent to the following monotone inclusion problem:

$$0 \in u - x + U^{-1}A^*\partial\varphi(Au), \tag{12}$$

which means that $u = (I + U^{-1}A^*\partial\varphi A)^{-1}(x)$. It is worth mentioning that the scaled proximity operator (10) was extensively used in [32,33] for deriving effective iterative algorithms to solve structured convex optimization problems. However, these works did not consider the general scaled proximity operator of $\varphi\circ A$ or the related resolvent operator problem.
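As a quick illustration of the scaled proximity operator (10) (our own toy example): for $f(u) = \frac{1}{2}\|u\|^2$, the first-order condition $U(u - x) + u = 0$ gives the closed form $\mathrm{prox}_f^U(x) = (U + I)^{-1}Ux$, an instance of the resolvent identity $J_{U^{-1}T} = (U + T)^{-1}U$ with $T = \partial f = I$:

```python
import numpy as np

# Scaled prox of f(u) = 0.5*||u||^2 under the metric induced by U:
#   minimize 0.5*(u-x)^T U (u-x) + 0.5*u^T u  =>  u = (U + I)^{-1} U x
U = np.diag([2.0, 4.0])
x = np.array([3.0, 5.0])
u = np.linalg.solve(U + np.eye(2), U @ x)
print(u)  # [2. 4.]
```

The heavier a coordinate is weighted in $U$, the less that coordinate is pulled toward the minimizer of $f$; this is exactly the role of the adapted metric.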
For this purpose, in this paper, we investigate the resolvent of the composed operator $U^{-1}A^*TA$, where $A: H_1\to H_2$ is a continuous linear operator, $T: H_2\to 2^{H_2}$ is a maximally monotone operator, and $U$ is a self-adjoint strongly positive definite operator. In particular, when $T = \partial\varphi$, the resolvent of the composed operator $U^{-1}A^*TA$ is equivalent to the following scaled proximity operator:

$$J_{U^{-1}A^*TA}(x) = \mathrm{prox}^U_{\varphi\circ A}(x) = \arg\min_{u}\, \tfrac{1}{2}\|u - x\|_U^2 + \varphi(Au). \tag{13}$$
The convex minimization problem (13) is an extension of the convex minimization problem (8). In this paper, we always assume that $A^*TA$ is maximally monotone under suitable qualification conditions. According to Minty's theorem, if $A^*TA$ is maximally monotone, then $R(I + \lambda A^*TA) = H_1$ for any $\lambda > 0$; thus, the resolvent $J_{\lambda A^*TA}(x)$ is single-valued for any $x\in H_1$. To study the resolvent of the composed operator $U^{-1}A^*TA$, we divide our work into two parts. First, we present an explicit solution of the resolvent of the composed operator under additional conditions on $A$ and $U$. Second, we develop a fixed point algorithm to compute the resolvent in the general case. As an application, we discuss the resolvent of a finite sum of maximally monotone operators. Furthermore, we employ the obtained results to compute scaled proximity operators of a convex function composed with a linear operator and of a finite sum of proper, lower semi-continuous convex functions, respectively.
The rest of the paper is organized as follows. In Section 2, we review some background on monotone operator theory. In Section 3, we first investigate explicit solutions of the resolvent of the composed operator $U^{-1}A^*TA$; second, we propose a fixed point approach for computing the resolvent of $U^{-1}A^*TA$; finally, we employ the proposed fixed point algorithm to compute the resolvent of the sum of a finite number of maximally monotone operators relative to the metric induced by $U$. In Section 4, we apply the obtained results to compute scaled proximity operators of a convex function composed with a linear operator and of a finite sum of proper, lower semi-continuous convex functions, respectively. We give some conclusions and future work in the last section.

2. Preliminaries

In this section, we review some definitions and lemmas in monotone operator theory and convex analysis, which are used throughout the paper. Most of them can be found in [17].
Let $H$, $H_1$ and $H_2$ be real Hilbert spaces with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\| := \sqrt{\langle\cdot,\cdot\rangle}$. We write $x^k \rightharpoonup x$ when $\{x^k\}$ converges weakly to $x$, and $x^k\to x$ when $\{x^k\}$ converges strongly to $x$. $I$ denotes the identity operator. Let $A: H_1\to H_2$ be a continuous linear operator with adjoint $A^*: H_2\to H_1$, so that $\langle Ax, y\rangle = \langle x, A^*y\rangle$ for any $x\in H_1$ and $y\in H_2$.
Let $T: H\to 2^{H}$ be a set-valued operator. We denote its domain by $\mathrm{Dom}(T) = \{x\in H : Tx\neq\varnothing\}$, its range by $R(T) = \{y\in H : \exists\, x\in H,\ y\in Tx\}$, its graph by $\mathrm{gra}(T) = \{(x, y)\in H\times H : y\in Tx\}$, and its set of zeros by $\mathrm{zer}(T) = \{x\in H : 0\in Tx\}$. We say that $T$ is monotone if $\langle x - y, u - v\rangle \ge 0$ for all $(x, u), (y, v)\in\mathrm{gra}(T)$. $T$ is said to be maximally monotone if its graph is not properly contained in the graph of any other monotone operator. Letting $\lambda > 0$, the resolvent of $\lambda T$ is defined by
$$J_{\lambda T} = (I + \lambda T)^{-1}, \tag{14}$$

and the Yosida approximation of $T$ with index $\lambda$ is

$${}^{\lambda}T = \frac{1}{\lambda}(I - J_{\lambda T}). \tag{15}$$

The resolvent and Yosida approximation of $\lambda T$ have the following relationship:

$${}^{\lambda}T x \in T(J_{\lambda T}x), \quad x\in H. \tag{16}$$
We follow the notation of [31]. Let $\mathcal{B}(H_1, H_2)$ be the space of bounded linear operators from $H_1$ to $H_2$, and $\mathcal{B}(H) = \mathcal{B}(H, H)$. We set $\mathcal{S}(H) = \{U\in\mathcal{B}(H) \mid U = U^*\}$, where $U^*$ denotes the adjoint of $U$. On $\mathcal{S}(H)$, the Loewner partial ordering is defined by

$$(\forall\, U, V\in\mathcal{S}(H))\quad U \succcurlyeq V \iff (\forall\, x\in H)\ \langle x, Ux\rangle \ge \langle x, Vx\rangle. \tag{17}$$

Let $\alpha > 0$. We set

$$\mathcal{P}_\alpha(H) = \{U\in\mathcal{S}(H) \mid U \succcurlyeq \alpha I\}. \tag{18}$$
In the finite-dimensional case, the square root of $U$ can be described as follows. Let $P$ be an orthogonal matrix with inverse $P^{-1}$ such that $PUP^{-1} = \Lambda$, where $\Lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_n)$ and $\lambda_1,\dots,\lambda_n$ are the eigenvalues of $U$. In addition, let $\Lambda_1 = \mathrm{diag}(\sqrt{\lambda_1},\dots,\sqrt{\lambda_n})$. Then, $\sqrt{U} = P^{-1}\Lambda_1 P$ satisfies $\sqrt{U}\sqrt{U} = U$, so $\sqrt{U}$ is the square root of $U\in\mathcal{P}_\alpha(H)$. For every $U\in\mathcal{P}_\alpha(H)$, we define a scalar product and a norm by

$$(\forall\, x\in H)(\forall\, y\in H)\quad \langle x, y\rangle_U = \langle x, Uy\rangle \quad\text{and}\quad \|x\|_U = \sqrt{\langle x, Ux\rangle}. \tag{19}$$
Let $T: H\to H$ be a single-valued operator. We say that $T$ is non-expansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y\in H$. $T$ is firmly non-expansive if $\|Tx - Ty\|^2 + \|(I - T)x - (I - T)y\|^2 \le \|x - y\|^2$ for all $x, y\in H$. $T$ is $\beta$-cocoercive for some $\beta\in(0, +\infty)$ if $\langle x - y, Tx - Ty\rangle \ge \beta\|Tx - Ty\|^2$ for every $x, y\in H$. $T$ is $\alpha$-averaged for $\alpha\in(0, 1)$ if there exists a non-expansive operator $R: H\to H$ such that $T = (1 - \alpha)I + \alpha R$, or equivalently, $\|Tx - Ty\|^2 \le \|x - y\|^2 - \frac{1 - \alpha}{\alpha}\|(I - T)x - (I - T)y\|^2$ for every $x, y\in H$.
We collect several useful lemmas.
Lemma 1.
([17]) Let $T: H\to 2^{H}$ be a maximally monotone operator and $\lambda > 0$. Then, the following hold:
  • (i) $J_{\lambda T}: H\to H$ and $I - J_{\lambda T}: H\to H$ are firmly non-expansive and maximally monotone;
  • (ii) the Yosida approximation ${}^{\lambda}T$ is $\lambda$-cocoercive and maximally monotone.
Lemma 2.
([17]) Let $C$ be a nonempty subset of $H$ and let $T: C\to H$. We have
  • (i) $T$ is non-expansive if and only if $I - T$ is $\frac{1}{2}$-cocoercive;
  • (ii) $T$ is firmly non-expansive if and only if $I - T$ is firmly non-expansive;
  • (iii) $T$ is $\frac{1}{2\nu}$-averaged if and only if $I - T$ is $\nu$-cocoercive, where $\nu > \frac{1}{2}$.
Lemma 3.
([34,35]) Let $S$ be a nonempty subset of $H$, let $T_1: S\to H$ be $\alpha_1$-averaged and let $T_2: S\to H$ be $\alpha_2$-averaged. Then, $T_1T_2$ is $\frac{\alpha_1 + \alpha_2 - 2\alpha_1\alpha_2}{1 - \alpha_1\alpha_2}$-averaged.
Lemma 4.
([31]) Let $T: H\to 2^{H}$ be a maximally monotone operator, let $\alpha > 0$ and let $U\in\mathcal{P}_\alpha(H)$. Equip $H$ with the scalar product $\langle x, y\rangle_U = \langle x, Uy\rangle$, for any $x, y\in H$. Then, the following hold:
  • (i) $U^{-1}T$ is maximally monotone;
  • (ii) $J_{U^{-1}T}$ is firmly non-expansive;
  • (iii) $J_{U^{-1}T} = (U + T)^{-1}U$.
The Krasnoselskii–Mann algorithm is a popular iterative algorithm for finding fixed points of non-expansive operators. Its convergence is summarized in the following theorem.
Theorem 1.
([17]) (Krasnoselskii–Mann algorithm) Let $C$ be a nonempty closed convex subset of $H$, let $T: C\to C$ be a non-expansive operator such that $\mathrm{Fix}(T)\neq\varnothing$, where $\mathrm{Fix}(T)$ denotes the fixed point set of $T$. Let $\lambda_k\in(0, 1)$ be such that $\sum_{k=0}^{\infty}\lambda_k(1 - \lambda_k) = +\infty$. For any $x^0\in C$, define

$$x^{k+1} = (1 - \lambda_k)x^k + \lambda_k Tx^k, \quad k \ge 0. \tag{20}$$
Then, the following hold:
  • (i) $\{x^k\}$ is Fejér monotone with respect to $\mathrm{Fix}(T)$, i.e., $\|x^{k+1} - p\| \le \|x^k - p\|$, for any $p\in\mathrm{Fix}(T)$;
  • (ii) $x^k - Tx^k$ converges strongly to $0$;
  • (iii) $\{x^k\}$ converges weakly to a fixed point in $\mathrm{Fix}(T)$.
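As a small illustration (our own example, not from the paper), take $T$ to be the rotation by $\pi/2$ in $\mathbb{R}^2$: it is non-expansive with $\mathrm{Fix}(T) = \{0\}$, and while the unrelaxed iteration $x^{k+1} = Tx^k$ merely circles the origin, the Krasnoselskii–Mann iteration with $\lambda_k \equiv \frac{1}{2}$ converges:

```python
import numpy as np

theta = np.pi / 2
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation: non-expansive, Fix = {0}

x = np.array([1.0, 0.0])
for _ in range(200):
    x = 0.5 * x + 0.5 * (T @ x)   # Krasnoselskii-Mann step with lambda_k = 1/2

print(np.linalg.norm(x))  # essentially 0
```

Each relaxed step multiplies the iterate by $(I + T)/2$, whose eigenvalues have modulus $1/\sqrt{2}$, so the norm shrinks geometrically; averaging is what makes the method work for merely non-expansive maps.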

3. Computing Method for the Resolvent of Composed Operators

In this section, we consider the problem of computing the resolvent of the composed operator in (13). First, we derive explicit solutions that extend and generalize the corresponding results of Fukushima [16] and of Bauschke and Combettes [17], respectively. Second, we develop a fixed point approach for computing the resolvent of $U^{-1}A^*TA$ and propose a simple and efficient iterative algorithm to approximate the fixed point; the convergence of this algorithm is established in general Hilbert spaces. Finally, we apply the fixed point method to compute the resolvent of the sum of a finite family of maximally monotone operators.

3.1. Analytic Approach to the Resolvent Operator

The following proposition is a direct generalization of Proposition 23.25 of [17].
Proposition 1.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H_1)$. Let $T: H_2\to 2^{H_2}$ be a maximally monotone operator, and let $A: H_1\to H_2$ be a continuous linear operator with adjoint $A^*$. Suppose that $AU^{-1}A^*$ is invertible, and let $\lambda > 0$. Then, the following hold:
  • (i) We have
    $$J_{\lambda U^{-1}A^*TA}(x) = x - \lambda U^{-1}A^*u \quad\text{with}\quad u = (T^{-1} + \lambda AU^{-1}A^*)^{-1}(Ax). \tag{21}$$
  • (ii) Suppose that $AU^{-1}A^* = \nu I$, for some $\nu > 0$. Then,
    $$J_{\lambda U^{-1}A^*TA}(x) = x - \lambda U^{-1}A^*\bigl({}^{\lambda\nu}T\bigr)(Ax). \tag{22}$$
Proof. 
By Lemma 4, we know that $\lambda U^{-1}A^*TA$ is maximally monotone whenever $\lambda A^*TA$ is maximally monotone. Thus, $J_{\lambda U^{-1}A^*TA}(x)$ is single-valued, for any $x\in H_1$.
(i) Let $x\in H_1$. Define $S := (T^{-1} + \lambda AU^{-1}A^*)^{-1}$. Since $\mathrm{Dom}(S) = H_2$, the point $u = S(Ax) = (T^{-1} + \lambda AU^{-1}A^*)^{-1}(Ax)$ is well defined. Set $p = x - \lambda U^{-1}A^*u$. Then, $Ax\in T^{-1}u + \lambda AU^{-1}A^*u$, and hence $Ap = A(x - \lambda U^{-1}A^*u)\in T^{-1}u$. This leads to $u\in T(Ap)$. Therefore,

$$x - p = \lambda U^{-1}A^*u \in \lambda U^{-1}A^*T(Ap),$$

which means that

$$p = (I + \lambda U^{-1}A^*TA)^{-1}(x) = J_{\lambda U^{-1}A^*TA}(x).$$

(ii) Substituting $AU^{-1}A^* = \nu I$ into the expression for $u$ in (21), we find that $u = (T^{-1} + \lambda\nu I)^{-1}(Ax)$. Then, we have

$$Ax \in (T^{-1} + \lambda\nu I)u \iff u \in T(Ax - \lambda\nu u) \iff Ax \in Ax - \lambda\nu u + \lambda\nu T(Ax - \lambda\nu u) \iff Ax - \lambda\nu u = (I + \lambda\nu T)^{-1}(Ax) \iff u = \frac{1}{\lambda\nu}\bigl(I - (I + \lambda\nu T)^{-1}\bigr)(Ax),$$

which is equivalent to

$$u = {}^{\lambda\nu}T(Ax).$$

Then,

$$J_{\lambda U^{-1}A^*TA}(x) = x - \lambda U^{-1}A^*\bigl({}^{\lambda\nu}T\bigr)(Ax). \qquad\square$$
In Formula (21), $T^{-1}$ needs to be calculated, which is sometimes difficult to evaluate. Inspired by the method introduced by Fukushima [16], we provide an alternative way to compute the resolvent of the composed operator, which avoids computing the inverse of the operator $T$.
Proposition 2.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H_1)$. Let $T: H_2\to 2^{H_2}$ be a maximally monotone operator, and let $A: H_1\to H_2$ be a continuous linear operator with adjoint $A^*$. Suppose that $AU^{-1}A^*$ is invertible. Then, the resolvent of $\lambda U^{-1}A^*TA$ is

$$J_{\lambda U^{-1}A^*TA}(x) = x - \lambda U^{-1}A^*u \quad\text{with}\quad u = \frac{1}{\lambda}(AU^{-1}A^*)^{-1}(Ax - z), \tag{23}$$

where $z$ is the unique solution of $0 \in \frac{1}{\lambda}(AU^{-1}A^*)^{-1}(z - Ax) + Tz$.
Proof. 
Let $\omega = x - \lambda U^{-1}A^*u$. By (21), we have

$$u = (T^{-1} + \lambda AU^{-1}A^*)^{-1}(Ax) \iff Ax \in (T^{-1} + \lambda AU^{-1}A^*)u \iff A(x - \lambda U^{-1}A^*u) \in T^{-1}u \iff A\omega \in T^{-1}u \iff u \in T(A\omega). \tag{24}$$

Let $z = Ax - \lambda AU^{-1}A^*u$; then,

$$z = A\omega. \tag{25}$$

By the definition of $z$ and of $\omega$, we obtain

$$z = Ax - \lambda AU^{-1}A^*u \iff u = \frac{1}{\lambda}(AU^{-1}A^*)^{-1}(Ax - z). \tag{26}$$

It follows from (24) and (25) that $u\in Tz$. Taking into account that $u\in Tz$ and (26), we get

$$0 = u - u \in \frac{1}{\lambda}(AU^{-1}A^*)^{-1}(Ax - z) - Tz,$$

that is, $0 \in \frac{1}{\lambda}(AU^{-1}A^*)^{-1}(z - Ax) + Tz$. Finally, we come to the conclusion that

$$J_{\lambda U^{-1}A^*TA}(x) = x - \lambda U^{-1}A^*u \quad\text{with}\quad u = \frac{1}{\lambda}(AU^{-1}A^*)^{-1}(Ax - z). \qquad\square$$

3.2. Fixed-Point Approach to the Resolvent Operator

In Propositions 1 and 2, the resolvent of the composed operator $U^{-1}A^*TA$ is computed either through $T^{-1}$ or by requiring $AU^{-1}A^*$ to satisfy additional conditions. Without these conditions, the resolvent is still difficult to evaluate in practice. To overcome this difficulty, in this subsection, we propose a fixed point algorithm to compute the resolvent of $U^{-1}A^*TA$ that discards the conditions on $T^{-1}$ and $AU^{-1}A^*$.
Lemma 5.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H_1)$. Let $T: H_2\to 2^{H_2}$ be a maximally monotone operator, and let $A: H_1\to H_2$ be a continuous linear operator. Let $x\in H_1$. Then, the following hold:

$$J_{U^{-1}A^*TA}(x) \in x - U^{-1}A^*T\bigl(A(J_{U^{-1}A^*TA}(x))\bigr), \tag{27}$$

and

$$y \in U^{-1}A^*TA(x) \implies x = J_{U^{-1}A^*TA}(x + y). \tag{28}$$
Proof. 
(1) Let $x\in H_1$; then, we have

$$J_{U^{-1}A^*TA}(x) = (I + U^{-1}A^*TA)^{-1}x \iff x \in J_{U^{-1}A^*TA}(x) + U^{-1}A^*TA\bigl(J_{U^{-1}A^*TA}(x)\bigr) \iff J_{U^{-1}A^*TA}(x) \in x - U^{-1}A^*T\bigl(A(J_{U^{-1}A^*TA}(x))\bigr).$$

(2) Let $x\in H_1$; then, we have

$$y \in U^{-1}A^*TA(x) \iff x + y \in x + U^{-1}A^*TA(x) \iff x = J_{U^{-1}A^*TA}(x + y). \qquad\square$$
In the next lemma, we provide a fixed point characterization of the resolvent of the composed operator $U^{-1}A^*TA$. To achieve this goal, we define two operators $F: H_2\to H_2$ and $Q: H_2\to H_2$. Let $x\in H_1$ and $\lambda > 0$. For $y\in H_2$, define

$$Fy := Ax + (I - \lambda AU^{-1}A^*)y, \tag{29}$$

and

$$Qy := (I - J_{\frac{1}{\lambda}T})Fy. \tag{30}$$
Lemma 6.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H_1)$. Let $T: H_2\to 2^{H_2}$ be a maximally monotone operator, and let $A: H_1\to H_2$ be a continuous linear operator with adjoint $A^*$. Let $\lambda > 0$. Then, we have

$$J_{U^{-1}A^*TA}(x) = x - \lambda U^{-1}A^*y \tag{31}$$

if and only if $y$ is a fixed point of $Q$.
Proof. 
" " Let J U 1 A * T A ( x ) = x λ U 1 A * y . By (27), we have y 1 λ T ( A ( J U 1 A * T A x ) ) . Then, y 1 λ T ( A ( x λ U 1 A * y ) ) .
From (28), we obtain
y 1 λ T ( A ( x λ U 1 A * y ) ) A x λ A U 1 A * y = J 1 λ T ( A x + ( I λ A U 1 A * ) y ) ,
thus
A x λ A U 1 A * y = J 1 λ T ( A x + ( I λ A U 1 A * ) y ) .
By (29), (30) and (32), we get
Q ( y ) = ( I J 1 λ T ) F y = ( I J 1 λ T ) ( A x + ( I λ A U 1 A * ) y ) = A x + y λ A U 1 A * y J 1 λ T ( A x + ( I λ A U 1 A * ) y ) = A x + y λ A U 1 A * y A x + λ A U 1 A * y = y ,
which implies that y is a fixed-point of Q .
Let y be a fixed-point of Q . Then, we can get (32) via the above.
According to (32), we have
A x λ A U 1 A * y = ( I + 1 λ T ) 1 ( A x + ( I λ A U 1 A * ) y ) A x + y λ A U 1 A * y ( I + 1 λ T ) ( A x λ A U 1 A * y ) y 1 λ T ( A x λ A U 1 A * y ) λ y T ( A x λ A U 1 A * y ) ,
that is,
λ y T ( A x λ A U 1 A * y ) .
Hence, (33) at both ends and at the same time multiplied by U 1 A * , we obtain
λ U 1 A * y U 1 A * T ( A x λ A U 1 A * y ) ,
and finally
x λ U 1 A * y x U 1 A * T ( A x λ A U 1 A * y ) .
By comparing (35) to (27), it is easy to find that J λ U 1 A * T A ( x ) = x λ U 1 A * y . The proof is completed. □
Lemma 7.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H_1)$. Let $T: H_2\to 2^{H_2}$ be a maximally monotone operator, and let $A: H_1\to H_2$ be a continuous linear operator with adjoint $A^*$. Let $\lambda > 0$ and define the operator $W: H_2\to H_2$, $y\mapsto -A(x - \lambda U^{-1}A^*y)$; then, the following hold:
  • (i) The operator $W$ is $\frac{\alpha}{\lambda\|A\|^2}$-cocoercive;
  • (ii) For any $\lambda\in\bigl(0, \frac{2\alpha}{\|A\|^2}\bigr)$, $F = I - W$ is $\frac{\lambda\|A\|^2}{2\alpha}$-averaged; furthermore, the operator $Q = (I - J_{\frac{1}{\lambda}T})F$ is $\frac{2\alpha}{4\alpha - \lambda\|A\|^2}$-averaged.
Proof. 
(i) Let $y_1, y_2\in H_2$ and write $p_i = x - \lambda U^{-1}A^*y_i$, $i = 1, 2$. We have

$$\langle y_1 - y_2, W(y_1) - W(y_2)\rangle = \langle y_1 - y_2, -Ap_1 + Ap_2\rangle = \langle A^*y_1 - A^*y_2, -p_1 + p_2\rangle = \frac{1}{\lambda}\langle -\lambda A^*y_1 + \lambda A^*y_2, p_1 - p_2\rangle.$$

In virtue of $U = U^*$ and $UU^{-1} = I$, together with $Up_i = Ux - \lambda A^*y_i$, we have

$$\frac{1}{\lambda}\langle -\lambda A^*y_1 + \lambda A^*y_2, p_1 - p_2\rangle = \frac{1}{\lambda}\langle -\lambda U^{-1}A^*y_1 + \lambda U^{-1}A^*y_2, Up_1 - Up_2\rangle = \frac{1}{\lambda}\langle p_1 - p_2, U(p_1 - p_2)\rangle.$$

Because $U\in\mathcal{P}_\alpha(H_1)$, we have $\langle x, Ux\rangle \ge \alpha\|x\|^2$ for any $x\in H_1$, and we obtain

$$\frac{1}{\lambda}\langle p_1 - p_2, U(p_1 - p_2)\rangle \ge \frac{\alpha}{\lambda}\|p_1 - p_2\|^2 \ge \frac{\alpha}{\lambda\|A\|^2}\|Ap_1 - Ap_2\|^2 = \frac{\alpha}{\lambda\|A\|^2}\|W(y_1) - W(y_2)\|^2.$$

Thus, the operator $W$ is $\frac{\alpha}{\lambda\|A\|^2}$-cocoercive.
(ii) Because $W = I - F$ is $\frac{\alpha}{\lambda\|A\|^2}$-cocoercive, by Lemma 2 (iii), for any $\lambda\in\bigl(0, \frac{2\alpha}{\|A\|^2}\bigr)$, the operator $F$ is $\frac{\lambda\|A\|^2}{2\alpha}$-averaged.
On the other hand, because $I - J_{\frac{1}{\lambda}T}$ is firmly non-expansive, it is also $\frac{1}{2}$-averaged. Let $\alpha_1 = \frac{1}{2}$ and $\alpha_2 = \frac{\lambda\|A\|^2}{2\alpha}$. By Lemma 3, $(I - J_{\frac{1}{\lambda}T})F$ is $\frac{\alpha_1 + \alpha_2 - 2\alpha_1\alpha_2}{1 - \alpha_1\alpha_2}$-averaged, and a direct computation gives $\frac{\alpha_1 + \alpha_2 - 2\alpha_1\alpha_2}{1 - \alpha_1\alpha_2} = \frac{2\alpha}{4\alpha - \lambda\|A\|^2}$. We also have

$$\lambda\in\Bigl(0, \frac{2\alpha}{\|A\|^2}\Bigr) \implies \frac{2\alpha}{4\alpha - \lambda\|A\|^2}\in\Bigl(\frac{1}{2}, 1\Bigr).$$

Then, $Q$ is $\frac{2\alpha}{4\alpha - \lambda\|A\|^2}$-averaged. This completes the proof. □
Lemma 6 tells us that the resolvent of the composed operator $U^{-1}A^*TA$ can be computed via a fixed point of the operator $Q$. Furthermore, Lemma 7 shows that $Q$ is an averaged operator. Therefore, we can define an iterative algorithm to approximate the fixed point of $Q$. For any $y^0\in H_2$, let the sequences $\{u^k\}$ and $\{y^k\}$ be defined by

$$\begin{cases} u^k = x - \lambda U^{-1}A^*y^k,\\[2pt] y^{k+1} = (1 - \alpha_k)y^k + \alpha_k Qy^k, \end{cases} \tag{36}$$

where $\alpha_k\in\bigl(0, \frac{4\alpha - \lambda\|A\|^2}{2\alpha}\bigr)$ and $\lambda\in\bigl(0, \frac{2\alpha}{\|A\|^2}\bigr)$.
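In finite dimensions, iteration (36) is straightforward to implement. The sketch below is our own illustration, with $T = \partial\|\cdot\|_1$ so that $J_{\frac{1}{\lambda}T}$ is soft-thresholding at level $1/\lambda$, and with an arbitrary random $A$ and diagonal $U$; the limit $u^k \approx \mathrm{prox}^U_{\|\cdot\|_1\circ A}(x)$ is spot-checked against random perturbations of the objective $\frac{1}{2}\|u - x\|_U^2 + \|Au\|_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
A = rng.standard_normal((m, n))
U = np.diag([1.5, 2.0, 1.0])               # U in P_alpha(H) with alpha = 1
Uinv = np.linalg.inv(U)
alpha = 1.0
lam = alpha / np.linalg.norm(A, 2) ** 2    # lambda in (0, 2*alpha/||A||^2)
ak = 1.0                                   # alpha_k in (0, (4a - lam*||A||^2)/(2a))

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = rng.standard_normal(n)
y = np.zeros(m)
for _ in range(20000):
    Fy = A @ x + y - lam * (A @ (Uinv @ (A.T @ y)))  # F y = Ax + (I - lam*A*Uinv*A^T) y
    Qy = Fy - soft(Fy, 1.0 / lam)                    # Q y = (I - J_{(1/lam)T}) F y
    y = (1 - ak) * y + ak * Qy
u = x - lam * Uinv @ (A.T @ y)

# u should minimize 0.5*||v - x||_U^2 + ||A v||_1; compare with random perturbations
obj = lambda v: 0.5 * (v - x) @ U @ (v - x) + np.abs(A @ v).sum()
good = all(obj(u) <= obj(u + d) + 1e-6
           for d in 0.05 * rng.standard_normal((100, n)))
print(good)  # True
```

Note that no invertibility of $AU^{-1}A^*$ and no $T^{-1}$ are needed; each iteration uses only matrix–vector products with $A$, $A^*$, $U^{-1}$ and one soft-thresholding.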
Now, we are ready to prove the convergence of the iterative scheme (36).
Theorem 2.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H_1)$. Let $T: H_2\to 2^{H_2}$ be a maximally monotone operator, and let $A: H_1\to H_2$ be a continuous linear operator with adjoint $A^*$. Let the sequences $\{y^k\}$ and $\{u^k\}$ be generated by (36). Assume that $\sum_{k=0}^{+\infty}\alpha_k\bigl(1 - \frac{2\alpha}{4\alpha - \lambda\|A\|^2}\alpha_k\bigr) = +\infty$. Then, the following hold:
  • (i) $\{y^k\}$ converges weakly to a fixed point of $Q$;
  • (ii) Furthermore, if $\inf_k\alpha_k > 0$, then $\{u^k\}$ converges strongly to the resolvent $J_{U^{-1}A^*TA}(x)$.
Proof. 
(i) By Lemma 7, $Q$ is $\frac{2\alpha}{4\alpha - \lambda\|A\|^2}$-averaged. Then, there exists a non-expansive operator $C$ such that

$$Q = (1 - \beta)I + \beta C, \tag{37}$$

where $\beta = \frac{2\alpha}{4\alpha - \lambda\|A\|^2}$. Therefore, the iterative sequence $\{y^{k+1}\}$ in (36) can be rewritten as

$$y^{k+1} = (1 - \alpha_k)y^k + \alpha_k Qy^k = (1 - \alpha_k)y^k + \alpha_k\bigl((1 - \beta)y^k + \beta Cy^k\bigr) = (1 - \alpha_k\beta)y^k + \alpha_k\beta Cy^k.$$

The condition on $\{\alpha_k\}$ implies that $\alpha_k\beta\in(0, 1)$ and $\sum_{k=0}^{+\infty}\alpha_k\beta(1 - \alpha_k\beta) = +\infty$. It follows from Lemma 6 that $\mathrm{Fix}(Q)\neq\varnothing$, and we observe that $\mathrm{Fix}(Q) = \mathrm{Fix}(C)$; then, $\mathrm{Fix}(C)\neq\varnothing$.
According to Theorem 1, we can conclude that (a) $\lim_{k\to+\infty}\|y^k - y\|$ exists, for any $y\in\mathrm{Fix}(Q) = \mathrm{Fix}(C)$; (b) $\lim_{k\to+\infty}\|y^k - Cy^k\| = 0$ and $\lim_{k\to+\infty}\|y^k - Qy^k\| = 0$; (c) $\{y^k\}$ converges weakly to a fixed point of $C$, which is also a fixed point of $Q$.
(ii) Let $y\in\mathrm{Fix}(Q)$. Using that $F$ is $\frac{\lambda\|A\|^2}{2\alpha}$-averaged and $I - J_{\frac{1}{\lambda}T}$ is non-expansive, we have

$$\|Qy^k - y\|^2 = \bigl\|(I - J_{\frac{1}{\lambda}T})\bigl(Ax + (I - \lambda AU^{-1}A^*)y^k\bigr) - (I - J_{\frac{1}{\lambda}T})\bigl(Ax + (I - \lambda AU^{-1}A^*)y\bigr)\bigr\|^2 \le \bigl\|\bigl(Ax + (I - \lambda AU^{-1}A^*)y^k\bigr) - \bigl(Ax + (I - \lambda AU^{-1}A^*)y\bigr)\bigr\|^2 \le \|y^k - y\|^2 - \Bigl(\frac{2\alpha}{\lambda\|A\|^2} - 1\Bigr)\bigl\|A(x - \lambda U^{-1}A^*y^k) - A(x - \lambda U^{-1}A^*y)\bigr\|^2. \tag{38}$$

For $y^{k+1}$ and $y$, we have

$$\|y^{k+1} - y\|^2 = \|(1 - \alpha_k)(y^k - y) + \alpha_k(Qy^k - y)\|^2 = (1 - \alpha_k)\|y^k - y\|^2 + \alpha_k\|Qy^k - y\|^2 - \alpha_k(1 - \alpha_k)\|Qy^k - y^k\|^2. \tag{39}$$

Combining (38) with (39), we obtain

$$\|y^{k+1} - y\|^2 \le (1 - \alpha_k)\|y^k - y\|^2 + \alpha_k\Bigl(\|y^k - y\|^2 - \Bigl(\tfrac{2\alpha}{\lambda\|A\|^2} - 1\Bigr)\|A(x - \lambda U^{-1}A^*y^k) - A(x - \lambda U^{-1}A^*y)\|^2\Bigr) - \alpha_k(1 - \alpha_k)\|Qy^k - y^k\|^2 = \|y^k - y\|^2 - \Bigl(\tfrac{2\alpha}{\lambda\|A\|^2} - 1\Bigr)\alpha_k\|A(x - \lambda U^{-1}A^*y^k) - A(x - \lambda U^{-1}A^*y)\|^2 - \alpha_k(1 - \alpha_k)\|Qy^k - y^k\|^2. \tag{40}$$

Hence, we arrive at

$$\Bigl(\frac{2\alpha}{\lambda\|A\|^2} - 1\Bigr)\alpha_k\|A(x - \lambda U^{-1}A^*y^k) - A(x - \lambda U^{-1}A^*y)\|^2 \le \|y^k - y\|^2 - \|y^{k+1} - y\|^2 - \alpha_k(1 - \alpha_k)\|Qy^k - y^k\|^2. \tag{41}$$

We notice that $\lim_{k\to+\infty}\|y^k - y\|$ exists and $\lim_{k\to+\infty}\|y^k - Qy^k\| = 0$. Letting $k\to+\infty$, the right-hand side of inequality (41) tends to zero. Together with the conditions $\inf_k\alpha_k > 0$ and $\frac{2\alpha}{\lambda\|A\|^2} - 1 > 0$, we obtain

$$\lim_{k\to+\infty}\|A(x - \lambda U^{-1}A^*y^k) - A(x - \lambda U^{-1}A^*y)\| = 0. \tag{42}$$

By virtue of Lemma 6, $J_{U^{-1}A^*TA}(x) = x - \lambda U^{-1}A^*y$ for $y\in\mathrm{Fix}(Q)$. Then, we have

$$\|u^k - J_{U^{-1}A^*TA}x\|_U^2 = \|x - \lambda U^{-1}A^*y^k - (x - \lambda U^{-1}A^*y)\|_U^2 = \lambda\bigl\langle A^*y - A^*y^k, (x - \lambda U^{-1}A^*y^k) - (x - \lambda U^{-1}A^*y)\bigr\rangle = \lambda\bigl\langle y - y^k, A(x - \lambda U^{-1}A^*y^k) - A(x - \lambda U^{-1}A^*y)\bigr\rangle \le \lambda\|y^k - y\|\,\bigl\|A(x - \lambda U^{-1}A^*y^k) - A(x - \lambda U^{-1}A^*y)\bigr\|.$$

Taking into account the fact that $\lim_{k\to+\infty}\|y^k - y\|$ exists and (42), we obtain from the above inequality that

$$\lim_{k\to+\infty}\|u^k - J_{U^{-1}A^*TA}x\|_U = 0.$$

Since the two norms $\|\cdot\|$ and $\|\cdot\|_U$ are equivalent, we have $\lim_{k\to+\infty}\|u^k - J_{U^{-1}A^*TA}x\| = 0$. Hence, $\{u^k\}$ converges strongly to the resolvent $J_{U^{-1}A^*TA}(x)$. This completes the proof. □
Remark 1.
Let $U = I$ in (36); then, it reduces to the iterative algorithm introduced in Moudafi [18]. Therefore, the corresponding result of Moudafi [18] is a special case of ours. At the same time, the proposed iterative algorithm (36) allows a larger range of relaxation parameters than [18].

3.3. Resolvent of a Sum of m Maximally Monotone Operators with U

In this subsection, we apply the fixed-point approach proposed in Section 3.2 to compute the resolvent of the sum of a finite number of maximally monotone operators.
Problem 1.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H)$. Let $m\ge 2$ be an integer. For any $i\in\{1,\dots,m\}$, let $T_i: H\to 2^{H}$ be a maximally monotone operator. Letting $x\in H$, the problem is to evaluate the resolvent operator of the form

$$y := J_{U^{-1}\sum_{i=1}^m T_i}\,x. \tag{43}$$
To solve the resolvent operator (43), we reformulate it as a special case of the resolvent operator (13), which was studied in the previous section. More precisely, we obtain the following convergence theorem.
Theorem 3.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H)$. Let $m\ge 2$. For any $i\in\{1,\dots,m\}$, let $T_i: H\to 2^{H}$ be a maximally monotone operator. Let $x\in H$ and let $y_i^0\in H$, $i = 1,\dots,m$. Let the sequences $\{u^k\}$ and $\{y_i^k\}_{i=1}^m$ be generated by the following:

$$\begin{cases} u^k = x - \lambda U^{-1}\bigl(\sum_{i=1}^m y_i^k\bigr),\\[2pt] y_i^{k+1} = (1 - \alpha_k)y_i^k + \alpha_k\bigl(I - J_{\frac{1}{\lambda}T_i}\bigr)\bigl(x + y_i^k - \lambda U^{-1}\sum_{j=1}^m y_j^k\bigr), \quad i = 1,\dots,m, \end{cases} \tag{44}$$

where $\alpha_k\in\bigl(0, \frac{4\alpha - \lambda m}{2\alpha}\bigr)$ and $\lambda\in\bigl(0, \frac{2\alpha}{m}\bigr)$ satisfy the following conditions:
  • (a) $\sum_{k=0}^{+\infty}\alpha_k\bigl(\frac{4\alpha - \lambda m}{2\alpha} - \alpha_k\bigr) = +\infty$;
  • (b) $\inf_k\alpha_k > 0$.
Then, the sequence $\{u^k\}$ converges strongly to the resolvent operator (43).
Proof. 
Let $\mathbf{H} = H\times H\times\cdots\times H$ ($m$ copies). The inner product of $\mathbf{H}$ is defined by

$$\langle\mathbf{x}, \mathbf{y}\rangle_{\mathbf{H}} = \sum_{i=1}^m\langle x_i, y_i\rangle, \quad \mathbf{x} = (x_1,\dots,x_m)\in\mathbf{H},\ \mathbf{y} = (y_1,\dots,y_m)\in\mathbf{H}.$$

The associated norm is

$$\|\mathbf{x}\|_{\mathbf{H}} = \sqrt{\sum_{i=1}^m\|x_i\|^2}, \quad \mathbf{x} = (x_1,\dots,x_m)\in\mathbf{H}.$$

Let us introduce the operators

$$A: H\to\mathbf{H}: x\mapsto (x,\dots,x),$$

and

$$\mathbf{T}: \mathbf{H}\to 2^{\mathbf{H}}: (x_1,\dots,x_m)\mapsto (T_1x_1,\dots,T_mx_m).$$

Therefore, $\mathbf{T}$ is a maximally monotone operator, and $A$ is a bounded linear operator with $\|A\| = \sqrt{m}$.
Let $\mathbf{y}\in\mathbf{H}$ and $x\in H$; by the definition of $A$, we have

$$\langle\mathbf{y}, Ax\rangle_{\mathbf{H}} = \sum_{i=1}^m\langle y_i, x\rangle = \Bigl\langle\sum_{i=1}^m y_i, x\Bigr\rangle = \langle A^*\mathbf{y}, x\rangle.$$

Hence, for every $\mathbf{y}\in\mathbf{H}$, we have

$$A^*\mathbf{y} = \sum_{i=1}^m y_i.$$

In addition, letting $x\in H$, we have

$$U^{-1}A^*\mathbf{T}A(x) = U^{-1}A^*(T_1x,\dots,T_mx) = U^{-1}\sum_{i=1}^m T_ix.$$

Let $\mathbf{y}^k = (y_1^k,\dots,y_m^k)\in\mathbf{H}$. Then, the iterative scheme (44) can be rewritten as

$$\begin{cases} u^k = x - \lambda U^{-1}A^*\mathbf{y}^k,\\[2pt] \mathbf{y}^{k+1} = (1 - \alpha_k)\mathbf{y}^k + \alpha_k\bigl(I - J_{\frac{1}{\lambda}\mathbf{T}}\bigr)\bigl(\mathbf{y}^k + A(x - \lambda U^{-1}A^*\mathbf{y}^k)\bigr). \end{cases}$$

According to Theorem 2 (ii), we can conclude that the sequence $\{u^k\}$ converges strongly to the resolvent operator (43). □

4. Applications

In this section, we apply the results obtained in the last section to the problem of computing proximity operators of convex functions.
The first problem is a generalization of the proximity operator of a convex function composed with a linear transformation. Before we state our main problems, let us introduce some notation. A function $f: H\to(-\infty, +\infty]$ is proper if $\mathrm{dom}f = \{x\in H \mid f(x) < +\infty\}\neq\varnothing$. We denote by $\Gamma_0(H)$ the class of proper, lower semi-continuous convex functions from $H$ to $(-\infty, +\infty]$.
Problem 2.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H)$. Let $A\in\mathcal{B}(H, G)$. Let $\varphi\in\Gamma_0(G)$ and let $x\in H$. We consider the following scaled proximity operator:

$$\mathrm{prox}^U_{\varphi\circ A}(x) = \arg\min_{u\in H}\, \tfrac{1}{2}\|u - x\|_U^2 + \varphi(Au). \tag{48}$$
Theorem 4.
For Problem 2, let $y^0\in G$, and set

$$\begin{cases} u^k = x - \lambda U^{-1}A^*y^k,\\[2pt] y^{k+1} = (1 - \alpha_k)y^k + \alpha_k\bigl(I - \mathrm{prox}_{\frac{1}{\lambda}\varphi}\bigr)\bigl(y^k + A(x - \lambda U^{-1}A^*y^k)\bigr), \end{cases} \tag{49}$$

where $\alpha_k\in\bigl(0, \frac{4\alpha - \lambda\|A\|^2}{2\alpha}\bigr)$ and $\lambda\in\bigl(0, \frac{2\alpha}{\|A\|^2}\bigr)$ are such that $\sum_{k=0}^{+\infty}\alpha_k\bigl(1 - \frac{2\alpha}{4\alpha - \lambda\|A\|^2}\alpha_k\bigr) = +\infty$ and $\inf_k\alpha_k > 0$. Then, the sequence $\{u^k\}$ converges strongly to the proximity operator $\mathrm{prox}^U_{\varphi\circ A}(x)$.
Proof. 
The proximity operator $\mathrm{prox}^U_{\varphi\circ A}(x)$ is equal to the resolvent operator $J_{U^{-1}A^*\partial\varphi A}(x)$. Applying Theorem 2 with $T = \partial\varphi$, we conclude that the sequence $\{u^k\}$ generated by (49) converges strongly to the proximity operator $\mathrm{prox}^U_{\varphi\circ A}(x)$. □
Next, we consider the problem of computing scaled proximity operators of a finite sum of convex functions.
Problem 3.
Let $\alpha > 0$ and $U\in\mathcal{P}_\alpha(H)$. Let $m\ge 2$ be an integer. For any $i\in\{1,\dots,m\}$, let $f_i\in\Gamma_0(H)$ and let $x\in H$. We consider the problem of computing the scaled proximity operator:

$$\mathrm{prox}^U_{\sum_{i=1}^m f_i}(x) = \arg\min_{u\in H}\, \tfrac{1}{2}\|u - x\|_U^2 + \sum_{i=1}^m f_i(u). \tag{50}$$
Theorem 5.
For Problem 3, let $y_i^0\in H$, $i = 1,\dots,m$, and set

$$\begin{cases} u^k = x - \lambda U^{-1}\bigl(\sum_{i=1}^m y_i^k\bigr),\\[2pt] y_i^{k+1} = (1 - \alpha_k)y_i^k + \alpha_k\bigl(I - \mathrm{prox}_{\frac{1}{\lambda}f_i}\bigr)\bigl(x + y_i^k - \lambda U^{-1}\sum_{j=1}^m y_j^k\bigr), \quad i = 1,\dots,m, \end{cases} \tag{51}$$

where $\alpha_k\in\bigl(0, \frac{4\alpha - \lambda m}{2\alpha}\bigr)$ and $\lambda\in\bigl(0, \frac{2\alpha}{m}\bigr)$ are such that $\sum_{k=0}^{+\infty}\alpha_k\bigl(\frac{4\alpha - \lambda m}{2\alpha} - \alpha_k\bigr) = +\infty$ and $\inf_k\alpha_k > 0$. Then, the sequence $\{u^k\}$ converges strongly to the proximity operator $\mathrm{prox}^U_{\sum_{i=1}^m f_i}(x)$.
Proof. 
Let $T_i = \partial f_i$, $i = 1, \ldots, m$. The proximity operator (50) is equivalent to the resolvent operator (43), that is,
$$\mathrm{prox}^U_{\sum_{i=1}^m f_i}(x) = J_{U^{-1} \sum_{i=1}^m T_i}(x).$$
Since $J_{\frac{1}{\lambda} T_i} = \mathrm{prox}_{\frac{1}{\lambda} f_i}$ for every $i = 1, \ldots, m$, Theorem 3 implies that the sequence $\{u_k\}$ generated by (51) converges strongly to the proximity operator $\mathrm{prox}^U_{\sum_{i=1}^m f_i}(x)$. □
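As a concrete check of iteration (51), consider $U = I$, $m = 2$, and the indicator functions $f_1 = \iota_{[0,2]}$ and $f_2 = \iota_{[1,3]}$: here $\mathrm{prox}_{\frac{1}{\lambda} f_i}$ is simply the projection onto the corresponding interval (scaling an indicator does not change its proximity operator), and the exact answer is the projection onto $[1, 2]$. The sketch below is illustrative only; the step sizes are our own choice, picked to satisfy the conditions of Theorem 5 for this example.

```python
import numpy as np

def proj_interval(v, lo, hi):
    """Projection onto [lo, hi] = prox of (1/lam)*indicator for any lam > 0."""
    return np.clip(v, lo, hi)

def prox_sum(x, prox_list, lam=0.5, alpha_k=0.9, iters=1000):
    """Iteration (51) with U = I: computes prox of f_1 + ... + f_m at x,
    given the individual operators prox_{(1/lam) f_i}."""
    ys = [np.zeros_like(x) for _ in prox_list]   # one dual variable per f_i
    for _ in range(iters):
        u = x - lam * sum(ys)                    # u_k = x - lam * sum_i y_i^k
        # all updates use the current u_k and the current y_i^k
        ys = [(1 - alpha_k) * y + alpha_k * ((y + u) - prox(y + u))
              for y, prox in zip(ys, prox_list)]
    return x - lam * sum(ys)

# f_1, f_2 are indicators of [0, 2] and [1, 3]; their sum is the
# indicator of [1, 2], so the limit is the projection onto [1, 2].
x = np.array([5.0, -1.0, 1.5])
u = prox_sum(x, [lambda v: proj_interval(v, 0.0, 2.0),
                 lambda v: proj_interval(v, 1.0, 3.0)])
# u approaches the projection of x onto [1, 2]: [2.0, 1.0, 1.5].
```

Any finite family of functions with computable proximity operators can be plugged into `prox_list` in the same way.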

5. Conclusions

Inspired and motivated by the work of Moudafi [18], in this paper we studied the resolvent of the composed operator $U^{-1} A^* T A$. Under some additional conditions, we obtained explicit formulas for this resolvent, which generalize and extend the classical results of Fukushima [16] and Bauschke and Combettes [17]. We also presented a fixed-point approach for computing the resolvent of composed operators: by virtue of the Krasnoselskii–Mann algorithm for finding fixed points of non-expansive operators, we proved the strong convergence of the proposed fixed-point iterative algorithm. As applications, we employed the proposed algorithm to compute the scaled proximity operator of a convex function composed with a linear operator (48) and the proximity operator of a finite sum of proper, lower semi-continuous convex functions (50).
We observed that the considered resolvent of composed operators (13) is closely related to Newton-type methods for non-smooth sparse optimization problems. However, how to choose the symmetric positive definite matrix in finite-dimensional spaces so that the proposed algorithm can be implemented more efficiently remains unclear; we will address this in future work.

Author Contributions

Conceptualization, Y.Y. and Y.T.; writing–original draft preparation, Y.Y.; writing–review and editing, Y.T.; supervision, C.Z.

Funding

This research was funded by the National Natural Science Foundation of China (11661056, 11771198, 11771347, 91730306, 41390454, 11401293), the China Postdoctoral Science Foundation (2015M571989) and the Jiangxi Province Postdoctoral Science Foundation (2015KY51).

Acknowledgments

We would like to thank the associate editor and the two reviewers for their helpful comments to improve the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
2. Rockafellar, R.T. Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1976, 1, 97–116.
3. Tossings, P. The perturbed proximal point algorithm and some of its applications. Appl. Math. Optim. 1994, 29, 125–159.
4. Spingarn, J.E. Applications of the method of partial inverses to convex programming: Decomposition. Math. Program. 1985, 32, 199–223.
5. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899.
6. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
7. Combettes, P.L.; Pesquet, J.-C. A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 2007, 1, 564–574.
8. Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 123–231.
9. Combettes, P.L.; Dũng, D.; Vũ, B.C. Proximity for sums of composite functions. J. Math. Anal. Appl. 2011, 380, 680–688.
10. Adly, S.; Bourdin, L.; Caubet, F. On a decomposition formula for the proximal operator of the sum of two convex functions. arXiv 2018, arXiv:1707.08509v2.
11. Bauschke, H.H.; Combettes, P.L. A Dykstra-like algorithm for two monotone operators. Pac. J. Optim. 2008, 4, 383–391.
12. Dykstra, R.L. An algorithm for restricted least squares regression. J. Am. Stat. Assoc. 1983, 78, 837–842.
13. Combettes, P.L. Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 2009, 16, 727–748.
14. Aragón Artacho, F.J.; Campoy, R. Computing the resolvent of the sum of maximally monotone operators with the averaged alternating modified reflections algorithm. arXiv 2018, arXiv:1805.09720.
15. Aragón Artacho, F.J.; Campoy, R. A new projection method for finding the closest point in the intersection of convex sets. Comput. Optim. Appl. 2018, 69, 99–132.
16. Fukushima, M. The primal Douglas–Rachford splitting algorithm for a class of monotone mappings with application to the traffic equilibrium problem. Math. Program. 1996, 72, 1–15.
17. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: London, UK, 2017.
18. Moudafi, A. Computing the resolvent of composite operators. Cubo 2014, 16, 87–96.
19. Robinson, S.M. Composition duality and maximal monotonicity. Math. Program. 1999, 85, 1–13.
20. Pennanen, T. Dualization of generalized equations of maximal monotone type. SIAM J. Optim. 2000, 10, 809–835.
21. Bot, R.I.; Grad, S.M.; Wanka, G. Maximal monotonicity for the precomposition with a linear operator. SIAM J. Optim. 2007, 17, 1239–1252.
22. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D 1992, 60, 259–268.
23. Micchelli, C.A.; Shen, L.; Xu, Y. Proximity algorithms for image models: Denoising. Inverse Probl. 2011, 27, 045009.
24. Lee, J.D.; Sun, Y.; Saunders, M.A. Proximal Newton-type methods for minimizing composite functions. SIAM J. Optim. 2014, 24, 1420–1443.
25. Hager, W.; Ngo, C.; Yashtini, M.; Zhao, H.C. An alternating direction approximate Newton algorithm for ill-conditioned inverse problems with application to parallel MRI. J. Oper. Res. Soc. China 2015, 3, 139–162.
26. Li, X.D.; Sun, D.F.; Toh, K.C. A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems. SIAM J. Optim. 2018, 28, 433–458.
27. Argyriou, A.; Micchelli, C.A.; Pontil, M.; Shen, L.X.; Xu, Y.S. Efficient first order methods for linear composite regularizers. arXiv 2011, arXiv:1104.1436.
28. Chen, D.Q.; Zhou, Y.; Song, L.J. Fixed point algorithm based on adapted metric method for convex minimization problem with application to image deblurring. Adv. Comput. Math. 2016, 42, 1287–1310.
29. Chen, P.J.; Huang, J.G.; Zhang, X.Q. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 2013, 29, 025011.
30. Hiriart-Urruty, J.-B.; Lemaréchal, C. Convex Analysis and Minimization Algorithms; Springer: New York, NY, USA, 1993.
31. Combettes, P.L.; Vũ, B.C. Variable metric forward-backward splitting with applications to monotone inclusions in duality. Optimization 2014, 63, 1289–1318.
32. Zhang, X.; Burger, M.; Osher, S. A unified primal-dual framework based on Bregman iteration. J. Sci. Comput. 2011, 46, 20–46.
33. Bitterlich, S.; Bot, R.I.; Csetnek, E.R.; Wanka, G. The proximal alternating minimization algorithm for two-block separable convex optimization problems with linear constraints. J. Optim. Theory Appl. 2018.
34. Ogura, N.; Yamada, I. Non-strictly convex minimization over the fixed point set of the asymptotically shrinking non-expansive mapping. Numer. Funct. Anal. Optim. 2002, 23, 113–137.
35. Combettes, P.L.; Yamada, I. Compositions and convex combinations of averaged non-expansive operators. J. Math. Anal. Appl. 2015, 425, 55–70.
