Abstract
The aim of this paper is to study zero points of the sum of two maximally monotone mappings and fixed points of a non-expansive mapping. Two splitting projection algorithms are introduced and investigated for treating the zero-point and fixed-point problems. Possible computational errors are taken into account. Two convergence theorems are obtained, and applications in Hilbert spaces are also considered.
Keywords:
extragradient method; maximally monotone operator; proximal point algorithm; variational inequality; zero point
MSC:
47H05; 47H09; 49J40
1. Introduction—Preliminaries
Let H be an infinite-dimensional real Hilbert space. Its inner product is denoted by ⟨·, ·⟩. The induced norm is denoted by ‖x‖ = ⟨x, x⟩^{1/2} for x ∈ H. Let C be a convex and closed set in H and let A: C → H be a single-valued mapping. We recall the following definitions.
A is said to be a monotone mapping iff ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ C.
A is said to be a strongly monotone mapping iff there exists a positive real constant L such that ⟨Ax − Ay, x − y⟩ ≥ L‖x − y‖² for all x, y ∈ C.
A is said to be an inverse-strongly monotone mapping iff there exists a positive real constant L such that ⟨Ax − Ay, x − y⟩ ≥ L‖Ax − Ay‖² for all x, y ∈ C.
A is said to be L-Lipschitz continuous iff there exists a positive real constant L such that ‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ C.
A is said to be sequentially weakly continuous iff, for any vector sequence {xₙ} in C, weak convergence of {xₙ} to x implies weak convergence of {Axₙ} to Ax.
Consider the following monotone variational inequality, associated with mapping A and set C, which consists of finding an x* ∈ C such that
⟨Ax*, y − x*⟩ ≥ 0 for all y ∈ C. (1)
From now on, we use VI(C, A) to denote the set of solutions of (1). Recently, the spotlight has been shed on projection-based iterative methods, which are efficient for approximating solutions of variational inequality (1). With the aid of the resolvent mapping P_C(I − rA), where P_C is the metric projection from H onto C, r is some positive real number, and I denotes the identity mapping on H, one knows that x is a solution to inequality (1) iff x is a fixed point of P_C(I − rA). When dealing with the resolvent mapping, one is required to compute metric projections at every iteration. In the case that C is a linear variety, a closed ball, or a polytope, the computation of P_C is not hard to implement. If C is a bounded set, then the existence of solutions of the variational inequality is guaranteed by Browder [1]. If A is monotone and L-Lipschitz continuous, Korpelevich [2] introduced the following so-called extragradient method:
yₙ = P_C(xₙ − rAxₙ), xₙ₊₁ = P_C(xₙ − rAyₙ),
where C is assumed to be a convex and closed set in a finite-dimensional Euclidean space and r is a positive real number in (0, 1/L). It was proved that the sequence {xₙ} converges to a point in VI(C, A); for more material, see [2] and the cited references therein. We remark here that the extragradient method is an Ishikawa-like iterative method, which is efficient for solving fixed-point problems of pseudocontractive mappings whose complementary mappings are monotone, that is, A is monotone if and only if I − A is pseudocontractive.
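As a concrete illustration of the extragradient method described above, here is a minimal NumPy sketch on a toy problem; the rotation operator, the unit-ball constraint set, and the step size below are illustrative assumptions, not data taken from the paper.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Metric projection P_C onto the closed Euclidean ball of the given radius.
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

def extragradient(A, proj, x0, r, iters=500):
    # Korpelevich's method: a predictor step followed by a corrector step,
    # each requiring one metric projection onto C.
    x = x0
    for _ in range(iters):
        y = proj(x - r * A(x))   # predictor
        x = proj(x - r * A(y))   # corrector
    return x

# A rotation operator is monotone (skew-symmetric) and 1-Lipschitz,
# so step sizes r in (0, 1) are admissible; the unique solution is 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
sol = extragradient(lambda x: M @ x, project_ball, np.array([0.5, 0.5]), r=0.5)
```

For this operator the plain projected-gradient iteration xₙ₊₁ = P_C(xₙ − rAxₙ) fails to converge (it spirals outward), which is precisely why the extra predictor step matters.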
Next, we turn our attention to set-valued mappings. Let B: H ⇒ H be a set-valued mapping. We use Graph(B) to denote the graph of mapping B and B⁻¹(0) to denote the zero set of mapping B. One says that B is monotone iff ⟨x − y, u − v⟩ ≥ 0 for all (x, u), (y, v) ∈ Graph(B). One says that B is maximal iff there exists no proper monotone extension of the graph of B on H × H, that is, the graph of B is not a proper subset of the graph of any other monotone operator. For a maximally monotone operator B, one can define the single-valued resolvent mapping J_r = (I + rB)⁻¹: H → Dom(B), where Dom(B) stands for the domain of B, I stands for the identity mapping, and r is a positive real number. In the case in which B is the subdifferential of a proper, lower semicontinuous, and convex function, its resolvent mapping is called the proximity mapping. One knows B⁻¹(0) = Fix(J_r), where Fix(J_r) stands for the fixed-point set of J_r, and J_r is firmly non-expansive, that is,
‖J_r x − J_r y‖² ≤ ⟨J_r x − J_r y, x − y⟩ for all x, y ∈ H.
The class of maximally monotone mappings is under the spotlight of researchers working in the fields of optimization and functional analysis. Let f: H → (−∞, +∞] be a proper, convex, and closed (i.e., lower semicontinuous) function. One known example of a maximally monotone mapping is ∂f, the subdifferential of f. It is defined as follows:
∂f(x) = {u ∈ H : f(y) ≥ f(x) + ⟨u, y − x⟩ for all y ∈ H}.
Rockafellar [3] asserted that ∂f is a maximally monotone operator. One can verify that 0 ∈ ∂f(x) iff f(x) = min_{y ∈ H} f(y). Next, we give one more example of a maximally monotone mapping: B = M + N_C, where M is a continuous single-valued maximally monotone mapping and N_C is the mapping of the normal cone:
N_C(x) = {u ∈ H : ⟨u, y − x⟩ ≤ 0 for all y ∈ C}
for x ∈ C, and N_C(x) is empty otherwise. Then, 0 ∈ B(x*) iff x* is a solution to the following monotone variational inequality: ⟨Mx*, y − x*⟩ ≥ 0 for all y ∈ C.
One of the fundamental and efficient solution methods for investigating the inclusion problem 0 ∈ T(x), where T is a maximally monotone mapping, is the well-known proximal point algorithm (PPA), which was studied by Martinet [4,5] and Rockafellar [6,7]. The PPA has been extensively studied [8,9,10,11] and is known to yield, as special cases, decomposition methods such as the method of partial inverses [12], the forward-backward (FB) splitting method, and the ADMM [13,14]. The following forward-backward splitting method
xₙ₊₁ = (I + rB)⁻¹(xₙ − rAxₙ),
where r > 0, was proposed by Lions and Mercier [15] and Passty [16] for the inclusion problem 0 ∈ (A + B)(x), where A and B are two maximally monotone mappings. Furthermore, if B = N_C, then this method reduces to the gradient-projection iterative method [17]. Recently, a number of researchers working in the field of monotone operators have studied splitting algorithms; see [18,19,20,21,22] and the references therein.
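To make the forward-backward scheme concrete, the following sketch (an illustration under assumed data, not the paper's algorithm) applies it to A = ∇(½‖x − b‖²) and B = ∂(λ‖·‖₁), whose resolvent is the componentwise soft-thresholding map:

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent (proximity mapping) of the l1 subdifferential, i.e. the
    # mapping (I + t*d||.||_1)^{-1}: componentwise shrinkage toward 0.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(grad_f, resolvent, x0, r, iters=100):
    # Forward (explicit) step on the single-valued part A, then a
    # backward (resolvent) step on the set-valued part B.
    x = x0
    for _ in range(iters):
        x = resolvent(x - r * grad_f(x), r)
    return x

# Illustrative data: minimize 0.5*||x - b||^2 + lam*||x||_1.
b = np.array([3.0, 0.2, -1.5])
lam = 0.5
x_star = forward_backward(lambda x: x - b,
                          lambda v, r: soft_threshold(v, lam * r),
                          np.zeros(3), r=1.0)
# x_star equals soft_threshold(b, lam) = [2.5, 0.0, -1.0]
```

With B = ∂(λ‖·‖₁) replaced by the normal-cone mapping N_C, the resolvent becomes the metric projection P_C and the same loop is the gradient-projection method mentioned above.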
Let S: C → C be a single-valued mapping. In this paper, we use Fix(S) to stand for the fixed-point set of mapping S. Recall that S is said to be non-expansive if ‖Sx − Sy‖ ≤ ‖x − y‖ for all x, y ∈ C. If C is a bounded set, then the set of fixed points of mapping S is non-empty; see [23]. In the real world, a number of problems and modelings have reformulations that require finding fixed points of non-expansive mappings (zeros of monotone mappings). One knows that Mann-like iterations are only weakly convergent for non-expansive mappings. Recently, a number of researchers have concentrated on various Mann-like iterations so that strong convergence theorems can be obtained without additional compactness restrictions on the mappings; see [24,25,26,27].
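The Mann-like iteration mentioned above can be sketched as follows; the rotation mapping and the weight are illustrative assumptions (a rotation is an isometry, hence non-expansive, with unique fixed point 0):

```python
import numpy as np

def mann(S, x0, alpha=0.5, iters=200):
    # Krasnoselskii-Mann iteration: averaged steps x_{n+1} = (1-a)x_n + a*Sx_n.
    # Plain Picard iteration x_{n+1} = Sx_n need not converge for a
    # non-expansive S; the averaging is what produces convergence.
    x = x0
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * S(x)
    return x

# Rotation by 90 degrees: non-expansive with fixed point 0. Picard
# iteration would cycle forever; the averaged iteration converges to 0.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
x_fix = mann(lambda x: R @ x, np.array([1.0, 2.0]))
```

In infinite dimensions this convergence is only weak in general, which is the motivation, stated above, for modified Mann-like iterations with strong convergence.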
Most real mathematical modelings involve more than one constraint. For such modelings, one seeks solutions to a problem that are simultaneously solutions to two or more problems (that is, the desired solutions lie in the solution sets of other problems); see [28,29,30,31,32,33] and the references therein.
In this paper, we, based on Tseng’s ideas, are concerned with the problem of finding a common solution of fixed-point problems of a non-expansive mapping and zero-point problems of a sum of two monotone operators based on two splitting algorithms, which take into account possible computational errors. Convergence theorems of the algorithms are obtained. Applications of the algorithms are also discussed.
In order to prove the main results of this paper, the following tools are essential.
An infinite-dimensional space X is said to satisfy Opial's condition [34] if, for any sequence {xₙ} ⊂ X with xₙ ⇀ x, the following inequality holds:
lim infₙ ‖xₙ − x‖ < lim infₙ ‖xₙ − y‖
for y ∈ X with y ≠ x. It is well known that the above inequality is equivalent to
lim supₙ ‖xₙ − x‖ < lim supₙ ‖xₙ − y‖
for y ∈ X with y ≠ x. It is well known that Hilbert spaces and ℓᵖ spaces, where 1 < p < ∞, satisfy Opial's condition.
The following lemma is trivial, so the proof is omitted.
Lemma 1.
Let {aₙ} be a real positive sequence with aₙ₊₁ ≤ aₙ + bₙ for all n ≥ n₀, where {bₙ} is a real positive sequence with Σₙ bₙ < ∞ and n₀ is some nonnegative integer. Then, the limit limₙ aₙ exists.
Lemma 2.
Reference [35]. Let H be a Hilbert space and let {tₙ} be a real sequence with the restriction 0 < a ≤ tₙ ≤ b < 1 for all n ≥ 1. Let {xₙ} and {yₙ} be two vector sequences in H with lim supₙ ‖xₙ‖ ≤ r, lim supₙ ‖yₙ‖ ≤ r, and limₙ ‖tₙxₙ + (1 − tₙ)yₙ‖ = r, where r is some positive real number. Then, limₙ ‖xₙ − yₙ‖ = 0.
Lemma 3.
Reference [34]. Let C be a convex and closed set in an infinite-dimensional Hilbert space H and let S be a non-expansive mapping with a non-empty fixed-point set on set C. Let {xₙ} be a vector sequence in C. If {xₙ} converges weakly to x and limₙ ‖xₙ − Sxₙ‖ = 0, then x ∈ Fix(S).
2. Main Results
Theorem 1.
Let C be a convex and closed set in a Hilbert space H. Let S be a non-expansive self-mapping on C, whose fixed-point set Fix(S) is non-empty. Let A be a monotone mapping that is both L-Lipschitz continuous and sequentially weakly continuous. Let B be a maximally monotone mapping on H. Assume that Dom(B) lies in C and (A + B)⁻¹(0) ∩ Fix(S) is not empty. Let {αₙ} be a real number sequence in for some , and let {rₙ} be a real number sequence in for some . Let {xₙ} be a vector sequence defined and generated in the following iterative process:
where {eₙ} is an error sequence in H with Σₙ ‖eₙ‖ < ∞. Then, {xₙ} converges weakly to a point in (A + B)⁻¹(0) ∩ Fix(S).
Proof.
Set . Fixing , we find that
Using the Lipschitz continuity of mapping A, one finds that
It follows that
In light of Lemma 1, one finds that the following limit exists; in particular, the vector sequence is bounded. By using (2), one gets that
Thanks to the condition on , and , one obtains
Notice the fact that vector sequence is bounded. There is a vector sequence , which is a subsequence of original sequence converging to weakly. In light of (3), we find that the subsequence of also converges to weakly.
Now, one is in a position to claim that lies in . Notice that
Suppose . By using the monotonicity of mapping B, one reaches
Taking into account the fact that A is sequentially weakly continuous, one arrives at This guarantees , that is, .
On the other hand, we have that . Indeed, set . It follows from (2) that This shows that . It follows from Lemma 2 that
Since A is L-Lipschitz continuous, we find that
Next, one claims that the vector sequence converges to weakly. If not, one finds that there exists some subsequence of , and this subsequence converges to weakly, and Similarly, one has . From the fact that the limit exists, , one may suppose that , where d is a nonnegative number. By using Opial's inequality, one arrives at
One reaches a contradiction. Hence, . □
The following result is not hard to derive from Theorem 1.
Corollary 1.
Let C be a convex and closed set in a Hilbert space Let be a monotone and both L-Lipschitz continuous and sequentially weakly continuous mapping. Let B be a maximally monotone mapping on H. Suppose that and is not empty. Let be a real number sequence in for some and let be a real number sequence in for some . Let be a vector sequence defined and generated in the following process:
where is an error sequence in H such that Then, converges to a point weakly.
Next, one is ready to present the other convergence theorem.
Theorem 2.
Let C be a convex and closed set in a Hilbert space Let S be a non-expansive self mapping on C, whose fixed-point set is non-empty. Let be a monotone and both L-Lipschitz continuous and sequentially weakly continuous mapping. Let B be a maximally monotone mapping on H. Assume that lies in C and is not empty. Let be a real number sequence in for some and let be a real number sequence in for some . Let be a vector sequence defined and generated in the following iterative process:
where is an error sequence with the restriction Then, converges strongly to .
Proof.
First, we show that the set is closed and convex. It is clear that is closed. We only show the convexity of . From the assumption, we see that is convex. We suppose that is a convex set for some . Next, one claims that is also a convex set. Since
is equivalent to
we easily find that is a convex set. This claims that set is convex and closed. Next, we show that . From the assumption, we see that . Suppose that for some . Next, we show that for the same m. Set . For any , we find that
Thanks to the fact that A is a Lipschitz continuous mapping, one asserts that
It follows that
This implies that . This proves that . Since and , which is a subset of , we find that
This implies that For any we find from that in particular,
This claims that vector sequence is bounded and limit exists. Note that
Letting one obtains . In light of , we see that
It follows that . This proves that Using the restrictions imposed on the sequence , and , we also find that By using the fact that is a bounded sequence, there exists a sequence , which is a subsequence of , converging to weakly. One also obtains that the sequence also converges to weakly. Note that Next, we suppose is a point in . The monotonicity of B yields that Since A is a sequentially weakly continuous mapping, we obtain that These yield that . Hence, one obtains .
One is now in a position to claim that . Since which in turn implies that Since A is Lipschitz continuous, one has
This proves that In light of Lemma 3, one finds Put . Since and , we find that . Note that
It follows that
from which one gets From the arbitrariness of , one has □
The following results can be derived immediately from Theorem 2.
Corollary 2.
Let C be a convex and closed set in a Hilbert space Let be a monotone and both L-Lipschitz continuous and sequentially weakly continuous mapping. Let B be a maximally monotone mapping on H. Assume that lies in C and is not empty. Let be a real number sequence in for some and let be a real number sequence in for some . Let be a vector sequence defined and generated in the following iterative process:
where and is an error sequence in H such that Then, converges to strongly.
Corollary 3.
Let C be a convex and closed set in a Hilbert space Let S be a non-expansive self mapping on C, whose fixed-point set is non-empty. Let be a real number sequence in for some . Let be a vector sequence defined and generated in the following iterative process:
Then, converges strongly to .
3. Applications
This section gives some results on solutions of variational inequalities, minimizers of convex functions, and solutions of equilibrium problems.
Let H be a real Hilbert space and let C be a convex and closed set in H. Let i_C: H → (−∞, +∞] be the indicator function of C, defined by
i_C(x) = 0 if x ∈ C, and i_C(x) = +∞ otherwise.
One knows that the indicator function i_C is proper, convex, and lower semicontinuous, and that its subdifferential ∂i_C is maximally monotone. Define the resolvent mapping of the subdifferential operator by J_r = (I + r∂i_C)⁻¹. Letting x = J_r y, one finds
where N_C is the normal-cone mapping defined above; that is, J_r = P_C, the metric projection from H onto C. If B = ∂i_C in Theorem 1 and Theorem 2, then the following results can be derived immediately.
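The identity J_r = P_C used here can be checked numerically; the ball-shaped set C below is an illustrative choice:

```python
import numpy as np

def resolvent_indicator_ball(y, radius=1.0, r=1.0):
    # Resolvent (I + r*di_C)^{-1} of the subdifferential of the indicator
    # function of C = {x : ||x|| <= radius}. It reduces to the metric
    # projection P_C and is independent of the parameter r.
    n = np.linalg.norm(y)
    return y if n <= radius else radius * y / n

p = resolvent_indicator_ball(np.array([3.0, 4.0]))   # projects onto the unit ball
```

This is the mechanism by which Theorems 1 and 2, stated for a general maximally monotone B, specialize to projection methods for variational inequalities.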
Theorem 3.
Let C be a convex and closed set in a Hilbert space Let S be a non-expansive self mapping on C, whose fixed-point set is non-empty. Let be a monotone and both L-Lipschitz continuous and sequentially weakly continuous mapping. Assume that is not empty. Let be a real number sequence in for some and let be a real number sequence in for some . Let be a vector sequence defined and generated in the following iterative process:
where is an error sequence in H such that Then, converges to a point weakly.
Theorem 4.
Let C be a convex and closed set in a Hilbert space Let S be a non-expansive self mapping on C, whose fixed-point set is non-empty. Let be a monotone and both L-Lipschitz continuous and sequentially weakly continuous mapping. Assume that is not empty. Let be a real number sequence in for some and let be a real number sequence in for some . Let be a vector sequence defined and generated in the following iterative process:
where is an error sequence in H such that Then, converges to strongly.
Next, we consider minimizers of a proper convex and lower semicontinuous function.
Let f: H → (−∞, +∞] be a proper, lower semicontinuous, and convex function. One can define the subdifferential mapping ∂f by ∂f(x) = {u ∈ H : f(y) ≥ f(x) + ⟨u, y − x⟩ for all y ∈ H}. Rockafellar [3] proved that subdifferential mappings are maximally monotone and that 0 ∈ ∂f(x) if and only if f(x) = min_{y ∈ H} f(y).
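Since 0 ∈ ∂f(x) characterizes minimizers, iterating the proximity mapping (the proximal point algorithm) drives the iterates toward a minimizer. A scalar sketch with the illustrative choice f = |·|, whose proximity mapping is soft-thresholding:

```python
def prox_abs(x, r):
    # Proximity mapping (I + r*d|.|)^{-1} of f = |.|: scalar soft-thresholding.
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

def proximal_point(prox, x0, r, iters=50):
    # Proximal point algorithm: x_{n+1} = J_r(x_n) = prox(x_n, r).
    x = x0
    for _ in range(iters):
        x = prox(x, r)
    return x

x_min = proximal_point(prox_abs, 10.0, r=0.5)   # the minimizer of |x| is 0
```

Each proximal step here moves the iterate a fixed distance r toward 0 and then stays there, illustrating that fixed points of J_r are exactly the zeros of ∂f.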
Theorem 5.
Let C be a convex and closed set in a Hilbert space Let be a proper convex lower semicontinuous function such that is not empty. Let be a real number sequence in for some and let be a real number sequence in for some . Let be a vector sequence defined and generated in the following iterative process:
where is an error sequence in H such that Then, converges to a point weakly.
Proof.
From the assumption that f is proper, convex, and lower semicontinuous, one obtains that the subdifferential ∂f is maximally monotone. Setting and , one sees that
is equivalent to
It follows that
□
By using Theorem 1, we draw the desired conclusion immediately.
Theorem 6.
Let C be a convex and closed set in a Hilbert space H. Let be a proper convex lower semicontinuous function such that is not empty. Let be a real number sequence in for some and let be a real number sequence in for some . Let be a vector sequence defined and generated in the following iterative process:
where is an error sequence in H such that Then, converges to strongly.
Proof.
From the assumption that f is proper, convex, and lower semicontinuous, one sees that the subdifferential ∂f is maximally monotone. Setting and , one sees that
is equivalent to
It follows that
□
By using Theorem 2, we draw the desired conclusion immediately.
Finally, we consider an equilibrium problem, which is also known as Ky Fan inequality [36], in the sense of Blum and Oettli [37].
We employ ℝ to denote the set of real numbers. Let F be a bifunction mapping C × C to ℝ. The equilibrium problem consists of finding x̄ ∈ C such that
F(x̄, y) ≥ 0 for all y ∈ C. (5)
Hereafter, EP(F) denotes the solution set of problem (5).
In order to study solutions of equilibrium problem (5), the following routine restrictions on F are needed:
- (R1) for each x ∈ C, y ↦ F(x, y) is convex and lower semi-continuous;
- (R2) for each x ∈ C, F(x, x) = 0;
- (R3) for each x, y ∈ C, F(x, y) + F(y, x) ≤ 0;
- (R4) for each x, y, z ∈ C, lim sup_{t↓0} F(tz + (1 − t)x, y) ≤ F(x, y).
The following lemma is on a resolvent mapping associated with F, introduced in [38].
Lemma 4.
Let F: C × C → ℝ be a bifunction with restrictions (R1)–(R4). Let r > 0 and x ∈ H. Then, there exists a vector z ∈ C such that F(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C. Define a mapping T_r: H → C by
T_r(x) = {z ∈ C : F(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C}
for each r > 0 and each x ∈ H. Then, (1) Fix(T_r) = EP(F) is convex and closed; (2) T_r is single-valued and firmly non-expansive.
Lemma 5
Reference [39]. Let F be a bifunction with restrictions (R1)–(R4), and let A_F be a mapping on H defined by
A_F(x) = {z ∈ H : F(x, y) ≥ ⟨y − x, z⟩ for all y ∈ C} if x ∈ C, and A_F(x) = ∅ if x ∉ C.
Then, A_F is a maximally monotone mapping such that EP(F) = A_F⁻¹(0), Dom(A_F) ⊂ C, and T_r = (I + rA_F)⁻¹, where T_r is defined as in (6).
Thanks to Lemmas 4 and 5, one finds from Theorem 1 and Theorem 2 the following results on equilibrium problem (5) immediately.
Theorem 7.
Let C be a convex and closed set in a Hilbert space Let S be a non-expansive self mapping on C, whose fixed-point set is non-empty. Let be a bifunction with restrictions (R1)–(R4). Assume that is not empty. Let be a real number sequence in for some and let be a real number sequence such that , where c is some positive real number. Let be a vector sequence defined and generated in the following iterative process: where is defined by (7). Then, converges to a point weakly.
Theorem 8.
Let C be a convex and closed set in a Hilbert space Let S be a non-expansive self mapping on C, whose fixed-point set is non-empty. Let be a bifunction with restrictions (R1)–(R4). Assume that is not empty. Let be a real number sequence in for some and let be a real number sequence such that , where c is some positive real number. Let be a vector sequence defined and generated in the following iterative process:
where is defined by (7). Then, converges to strongly.
Author Contributions
These authors contributed equally to this work.
Funding
This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia under grant no. KEP-2-130-39. The authors, therefore, acknowledge with thanks DSR for technical and financial support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Browder, F.E. Fixed-point theorems for noncompact mappings in Hilbert space. Proc. Natl. Acad. Sci. USA 1965, 53, 1272–1276. [Google Scholar] [CrossRef]
- Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Èkonomika i Matematicheskie Metody 1976, 12, 747–756. [Google Scholar]
- Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
- Martinet, B. Regularisation d’inequations variationnelles par approximations successives. Rev. Franc. Inform. Rech. Oper. 1970, 4, 154–159. [Google Scholar]
- Martinet, B. Determination approchée d’un point fixe d’une application pseudo-contractante. C. R. Acad. Sci. Paris Ser. A–B 1972, 274, 163–165. [Google Scholar]
- Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef]
- Rockafellar, R.T. Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1976, 1, 97–116. [Google Scholar] [CrossRef]
- Bin Dehaish, B.A.; Qin, X.; Latif, A.; Bakodah, H.O. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336. [Google Scholar]
- Ansari, Q.H.; Babu, F.; Yao, J.C. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 25. [Google Scholar] [CrossRef]
- Cho, S.Y.; Li, W.; Kang, S.M. Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013, 2013, 199. [Google Scholar] [CrossRef]
- Bin Dehaish, B.A.; Latif, A.; Bakodah, H.O.; Qin, X. A regularization projection algorithm for various problems with nonlinear mappings in Hilbert spaces. J. Inequal. Appl. 2015, 2015, 51. [Google Scholar] [CrossRef]
- Spingarn, J.E. Applications of the method of partial inverses to convex programming decomposition. Math. Program. 1985, 32, 199–223. [Google Scholar] [CrossRef]
- Eckstein, J.; Bertsekas, D.P. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318. [Google Scholar] [CrossRef]
- Eckstein, J.; Ferris, M.C. Operator-splitting methods for monotone affine variational inequalities, with a parallel application to optimal control. Informs J. Comput. 1998, 10, 218–235. [Google Scholar] [CrossRef]
- Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
- Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390. [Google Scholar] [CrossRef]
- Sibony, M. Méthodes itératives pour les équations et inéquations aux dérivées partielles non linéaires de type monotone. Calcolo 1970, 7, 65–183. [Google Scholar] [CrossRef]
- Cho, S.Y. Strong convergence analysis of a hybrid algorithm for nonlinear operators in a Banach space. J. Appl. Anal. Comput. 2018, 8, 19–31. [Google Scholar]
- Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
- Cho, S.Y.; Kang, S.M. Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32, 1607–1618. [Google Scholar] [CrossRef]
- Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 105–118. [Google Scholar] [CrossRef]
- Chang, S.S.; Wen, C.F.; Yao, J.C. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196. [Google Scholar] [CrossRef]
- Browder, F.E. Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. USA 1965, 54, 1041–1044. [Google Scholar] [CrossRef] [PubMed]
- Cho, S.Y. Generalized mixed equilibrium and fixed point problems in a Banach space. J. Nonlinear Sci. Appl. 2016, 9, 1083–1092. [Google Scholar] [CrossRef]
- Cho, S.Y.; Kang, S.M. Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24, 224–228. [Google Scholar] [CrossRef]
- Takahashi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195. [Google Scholar]
- Takahashi, W.; Wen, C.F.; Yao, J.C. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419. [Google Scholar] [CrossRef]
- Liu, L.; Qin, X.; Agarwal, R.P. Iterative methods for fixed points and zero points of nonlinear mappings with applications. Optimization 2019. [Google Scholar] [CrossRef]
- Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302. [Google Scholar] [CrossRef]
- Zhao, J.; Jia, Y.; Zhang, H. General alternative regularization methods for split equality common fixed-point problem. Optimization 2018, 67, 619–635. [Google Scholar] [CrossRef]
- He, S.; Wu, T.; Gibali, A.; Dong, Q. Totally relaxed, self-adaptive algorithm for solving variational inequalities over the intersection of sub-level sets. Optimization 2018, 67, 1487–1504. [Google Scholar] [CrossRef]
- Husain, S.; Singh, N. A hybrid iterative algorithm for a split mixed equilibrium problem and a hierarchical fixed point problem. Appl. Set-Valued Anal. Optim. 2019, 1, 149–169. [Google Scholar]
- Qin, X.; Yao, J.C. A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 2019, 20, 1497–1506. [Google Scholar]
- Opial, Z. Weak convergence of the sequence of successive approximations for non-expansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef]
- Schu, J. Weak and strong convergence of fixed points of asymptotically non-expansive mappings. Bull. Austral. Math. Soc. 1991, 43, 153–159. [Google Scholar] [CrossRef]
- Fan, K. A minimax inequality and applications. In Inequality III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972; pp. 103–113. [Google Scholar]
- Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
- Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136. [Google Scholar]
- Takahashi, S.; Takahashi, W.; Toyoda, M. Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147, 27–41. [Google Scholar] [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).