Article

A Self-Adaptive Shrinking Projection Method with an Inertial Technique for Split Common Null Point Problems in Banach Spaces

by
Chibueze Christian Okeke
1,
Lateef Olakunle Jolaoso
2,* and
Regina Nwokoye
3
1
Department of Mathematics, Faculty of Science and Engineering, University of Eswatini, Private Bag 4, Kwaluseni M201, Eswatini
2
Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Pretoria 0204, South Africa
3
Department of Mathematics, Faculty of Physical Sciences, University of Nigeria Nsukka, Nsukka 410001, Nigeria
*
Author to whom correspondence should be addressed.
Axioms 2020, 9(4), 140; https://doi.org/10.3390/axioms9040140
Submission received: 27 October 2020 / Revised: 19 November 2020 / Accepted: 20 November 2020 / Published: 2 December 2020
(This article belongs to the Special Issue Nonlinear Analysis and Optimization with Applications)

Abstract:
In this paper, we present a new self-adaptive inertial projection method for solving split common null point problems in p-uniformly convex and uniformly smooth Banach spaces. The algorithm is designed so that its convergence does not require a prior estimate of the norm of the bounded linear operator, and a strong convergence result is proved for the sequence generated by our algorithm under mild conditions. Moreover, we give some applications of our result to split convex minimization and split equilibrium problems in real Banach spaces. This result improves and extends several related results in the literature.

1. Introduction

Let $H_1$ and $H_2$ be real Hilbert spaces and let $C$ and $Q$ be nonempty, closed, and convex subsets of $H_1$ and $H_2$, respectively. We consider the Split Common Null Point Problem (SCNPP), introduced by Byrne et al. [1], as follows:
Find $z \in H_1$ such that $z \in A^{-1}(0) \cap T^{-1}(B^{-1}(0))$,  (1)
where $A : H_1 \to 2^{H_1}$ and $B : H_2 \to 2^{H_2}$ are maximal monotone operators and $T : H_1 \to H_2$ is a bounded linear operator. The solution set of SCNPP (1) is denoted by $\Omega$. The SCNPP contains several important optimization problems, such as the split feasibility problem, split equilibrium problem, split variational inequalities, split convex minimization problem, split common fixed point problems, etc., as special cases (see, e.g., [1,2,3,4,5]). Due to their importance, several researchers have studied these problems and proposed various iterative methods for finding their solutions (see, e.g., [1,4,5,6,7,8,9]). In particular, Byrne et al. [1] introduced the following iterative scheme for solving the SCNPP in real Hilbert spaces:
$x_0 \in H_1$, $\lambda > 0$, $x_{n+1} = J_\lambda^A(x_n + \lambda T^*(J_\lambda^B - I)Tx_n)$, $n \ge 0$,  (2)
where $J_\lambda^A x = (I + \lambda A)^{-1}x$ for all $x \in H_1$. They also proved that the sequence $\{x_n\}$ generated by (2) converges weakly to a solution of the SCNPP provided the step size $\lambda$ satisfies
$\lambda \in \left(0, \frac{2}{L}\right)$,  (3)
where $L$ is the spectral radius of the operator $T^*T$. Furthermore, Kazmi and Rizvi [10] proposed the following viscosity method, which converges strongly to a solution of (1):
$x_0 \in H_1$, $\lambda > 0$, $u_n = J_\lambda^A(x_n + \lambda T^*(J_\lambda^B - I)Tx_n)$, $x_{n+1} = \alpha_nf(x_n) + (1 - \alpha_n)Su_n$, $n \ge 0$,  (4)
where $\{\alpha_n\} \subset (0,1)$ satisfies certain conditions and $S : H_1 \to H_1$ is a nonexpansive mapping. It is important to emphasize that the convergence of (4) is achieved with the aid of condition (3). Other similar results can be found, for instance, in [11,12] (and the references therein). However, it is well known that the norm of a bounded linear operator is very difficult to compute (or even estimate) (see [13,14,15]). Hence, it becomes necessary to find iterative methods whose step size selection does not require a prior estimate of the norm of the bounded linear operator. Recently, some authors have provided breakthrough results in this direction in the framework of real Hilbert spaces (see, e.g., [13,14,15]).
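The fixed-step scheme (2) can be sketched numerically. In the toy instance below (our own illustration, not data from the paper), the maximal monotone operators are subdifferentials of indicator functions of boxes, so the resolvents reduce to metric projections, and the step size is taken inside (0, 2/L):

```python
import numpy as np

# Illustrative instance of scheme (2) in R^n: the maximal monotone operators
# A and B are taken to be subdifferentials of indicator functions of boxes,
# so the resolvents J_lambda^A and J_lambda^B reduce to metric projections.
# These concrete choices are ours, not from the paper.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 2))           # bounded linear operator
proj_C = lambda x: np.clip(x, 0.0, 1.0)   # J_lambda^A = projection onto C
proj_Q = lambda y: np.clip(y, 0.0, 2.0)   # J_lambda^B = projection onto Q

L = np.linalg.eigvalsh(T.T @ T).max()     # spectral radius of T*T
lam = 1.0 / L                             # step size inside (0, 2/L)

def residual(x):                          # proximity to the condition Tx in Q
    return np.linalg.norm(T @ x - proj_Q(T @ x))

x = proj_C(rng.standard_normal(2))        # start from a point of C
r0 = residual(x)
for _ in range(500):
    x = proj_C(x + lam * T.T @ (proj_Q(T @ x) - T @ x))

# the final iterate lies in C exactly, and the proximity to Q has not grown
print(np.allclose(x, proj_C(x)), residual(x) <= r0 + 1e-12)
```

With a step size at most 1/L the proximity function is nonincreasing along the iterates, which is what the final check confirms.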
On the other hand, Takahashi [8,16] extended the study of SCNPP (1) to uniformly convex and smooth Banach spaces as follows. Let $E_1$ and $E_2$ be uniformly convex and uniformly smooth real Banach spaces with duals $E_1^*$ and $E_2^*$, respectively, and let $T : E_1 \to E_2$ be a bounded linear operator. Let $A : E_1 \to 2^{E_1^*}$ and $B : E_2 \to 2^{E_2^*}$ be maximal monotone operators such that $A^{-1}(0) \neq \emptyset$ and $B^{-1}(0) \neq \emptyset$, and let $Q_\mu$ be the metric resolvent operator with respect to $B$ and parameter $\mu > 0$. Takahashi and Takahashi [17] introduced the following shrinking projection method for solving the SCNPP in uniformly convex and smooth Banach spaces:
$x_1 \in C$, $\mu_1 > 0$,
$z_n = x_n - \lambda_n J_{E_1}^{-1}T^*J_{E_2}(Tx_n - Q_{\mu_n}Tx_n)$,
$C_{n+1} = \{z \in C_n : \langle z_n - z, J_{E_1}(x_n - z_n)\rangle \ge 0\}$,
$x_{n+1} = P_{C_{n+1}}x_1$, for all $n \in \mathbb{N}$,
where $J_{E_i}$ is the normalized duality mapping with respect to $E_i$ for $i = 1, 2$ (defined in the next section). They proved a strong convergence result under the condition that the step sizes satisfy
$0 < a \le \lambda_n\|T\|^2 \le b < 1$ and $0 < c \le \mu_n$ for all $n \in \mathbb{N}$.
Furthermore, Suantai et al. [18] introduced a new iterative scheme for solving the SCNPP in a real Hilbert space $H$ and a real Banach space $E$ as follows:
$x_1 \in H$,
$y_n = J_{\lambda_n}^A(x_n + \lambda_nT^*J_E(Q_{\mu_n} - I)Tx_n)$,
$x_{n+1} = \alpha_nf(x_n) + \beta_nx_n + \gamma_ny_n$, $n \ge 1$,
where $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\} \subset (0,1)$ with $\alpha_n + \beta_n + \gamma_n = 1$ and $f : H \to H$ is a contraction mapping. They also proved a strong convergence result under the condition that the step size satisfies
$0 < \lambda_n\|T\|^2 < 2$.
Recently, Takahashi [19] introduced a new hybrid method with generalized resolvent operators for solving the SCNPP in real Banach spaces as follows:
$z_n = J^{-1}(J_Ex_n - r_nT^*(J_FTx_n - J_FQ_{\mu_n}Tx_n))$,
$y_n = J_{\lambda_n}z_n$,
$C_n = \{z \in E : 2\langle x_n - z, J_Ex_n - J_Ez_n\rangle \ge r_n\varphi_F(Tx_n, Q_{\mu_n}Tx_n)\}$,
$D_n = \{z \in E : \langle y_n - z, J_Ez_n - J_Ey_n\rangle \ge 0\}$,
$Q_n = \{z \in E : \langle x_n - z, J_Ex_1 - J_Ex_n\rangle \ge 0\}$,
$x_{n+1} = \Pi_{C_n \cap D_n \cap Q_n}x_1$, for all $n \in \mathbb{N}$.  (7)
He also proved that the sequence generated by Algorithm (7) converges strongly to a solution of the SCNPP provided the step sizes satisfy
$0 < a \le r_n \le \frac{1}{\|T\|^2}$ and $0 < b \le \lambda_n, \mu_n$ for all $n \in \mathbb{N}$.
It is evident that the above methods and other similar ones (see, e.g., [6,9,20]) require prior knowledge of the operator norm, which is very difficult to find. Thus, the following natural question arises.
Problem 1.
Can we provide a new iterative method for solving the SCNPP in real Banach spaces such that the step size does not require a prior estimate of the norm of the bounded linear operator?
Let us also mention the inertial extrapolation process, which is regarded as a means of speeding up the convergence of iterative methods. This technique was first introduced by Polyak [21] as the heavy-ball method for a second-order time dynamical system and has recently been employed by many authors (see, e.g., [22,23,24,25,26,27]). Moreover, Dong et al. [27] introduced a modified inertial hybrid algorithm for approximating fixed points of non-expansive mappings in real Hilbert spaces as follows:
$x_0, x_1 \in C$,
$w_n = x_n + \theta_n(x_n - x_{n-1})$,
$z_n = (1 - \beta_n)w_n + \beta_nTw_n$,
$C_n = \{x \in C : \|z_n - x\|^2 \le \|x_n - x\|^2\}$,
$Q_n = \{x \in C : \langle x_n - x, x_0 - x_n\rangle \ge 0\}$,
$x_{n+1} = P_{C_n \cap Q_n}x_0$,
where $\{\theta_n\} \subset [a_1, a_2]$, $a_1 \in (-\infty, 0]$, $a_2 \in [0, +\infty)$, and $\{\beta_n\} \subset (0,1)$ are suitable parameters.
More recently, Cholamjiak et al. [28] introduced an inertial forward-backward algorithm for finding the zeros of the sum of two monotone operators in Hilbert spaces as follows:
$x_0, x_1 \in H$, $r_n > 0$,
$y_n = x_n + \theta_n(x_n - x_{n-1})$,
$z_n = \alpha_ny_n + (1 - \alpha_n)Ty_n$,
$v_n = \beta_nz_n + (1 - \beta_n)J_{r_n}^B(I - r_nA)z_n$,
$C_{n+1} = \{v \in C_n : \|v_n - v\|^2 \le \|x_n - v\|^2 + K_n\}$,
$x_{n+1} = P_{C_{n+1}}x_1$, $n \ge 1$,  (9)
where $K_n = 2\theta_n^2\|x_n - x_{n-1}\|^2 + 2\theta_n\langle x_n - z_n, x_{n-1} - x_n\rangle$, $J_{r_n}^B = (I + r_nB)^{-1}$, $\{\theta_n\} \subset [0, \theta]$ for some $\theta \in [0, 1)$, and $\{\alpha_n\}, \{\beta_n\}$ are sequences in $[0, 1]$. The authors proved that the sequence $\{x_n\}$ generated by (9) converges strongly to a point $x \in (A + B)^{-1}(0)$ under some mild conditions.
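The effect of the inertial term $\theta_n(x_n - x_{n-1})$ can be seen on a simple quadratic minimization problem. The sketch below is only an illustration with the classical heavy-ball parameter choices, not an implementation of any scheme from the paper:

```python
import numpy as np

# Polyak's heavy-ball (inertial) iteration vs. plain gradient descent on a
# simple quadratic f(x) = 1/2 x^T Q x.  Parameter choices follow the
# classical heavy-ball tuning for quadratics; this only illustrates why the
# inertial term theta*(x_n - x_{n-1}) speeds up convergence.
Q = np.diag([1.0, 10.0])
L, mu = 10.0, 1.0                      # largest/smallest eigenvalues of Q
x0 = np.array([1.0, 1.0])

# plain gradient descent, step 1/L
x = x0.copy()
for _ in range(100):
    x = x - (1.0 / L) * (Q @ x)
err_gd = np.linalg.norm(x)

# heavy-ball: x_{n+1} = x_n - eta*grad f(x_n) + theta*(x_n - x_{n-1})
eta = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
theta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
x, x_prev = x0.copy(), x0.copy()
for _ in range(100):
    x, x_prev = x - eta * (Q @ x) + theta * (x - x_prev), x
err_hb = np.linalg.norm(x)

print(err_hb < err_gd)  # True: inertial iterate is far closer to the minimizer 0
```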
Motivated by the above results, in this paper we provide an affirmative answer to Problem 1. We introduce a new inertial shrinking projection method for solving the SCNPP in p-uniformly convex and uniformly smooth real Banach spaces. The algorithm is designed so that its step size is determined by a self-adaptive technique, and its convergence does not require prior knowledge of the norm of the bounded linear operator. We also prove a strong convergence result and provide some applications of our main theorem to other nonlinear optimization problems. This result improves and extends the results in [6,8,9,11,12,16,19,20] and many other recent results in the literature.

2. Preliminaries

Let $E$ be a real Banach space with dual $E^*$ and norm $\|\cdot\|$. We denote the duality pairing between $f \in E$ and $g^* \in E^*$ by $\langle f, g^*\rangle$. The weak and strong convergence of $\{x_n\} \subset E$ to $a \in E$ are denoted by $x_n \rightharpoonup a$ and $x_n \to a$, respectively; ∀ abbreviates “for all” and ⇔ “if and only if”. The function $\delta_E : [0, 2] \to [0, 1]$ defined by
$\delta_E(\alpha) = \inf\left\{1 - \left\|\frac{f + g}{2}\right\| : \|f\| = 1 = \|g\|, \|f - g\| \ge \alpha\right\}$
is called the modulus of convexity of $E$. The Banach space $E$ is said to be uniformly convex if $\delta_E(\alpha) > 0$ for all $\alpha \in (0, 2]$. If there exists a constant $C_p > 0$ such that $\delta_E(\alpha) \ge C_p\alpha^p$ for any $\alpha \in (0, 2]$, then we say $E$ is $p$-uniformly convex. In addition, the function $\rho_E : [0, \infty) \to [0, +\infty)$ defined by
$\rho_E(\beta) = \sup\left\{\frac{\|f + \beta g\| + \|f - \beta g\|}{2} - 1 : \|f\| = \|g\| = 1\right\}$
is called the modulus of smoothness of $E$. The Banach space $E$ is said to be uniformly smooth if $\lim_{\beta \to 0^+}\frac{\rho_E(\beta)}{\beta} = 0$. If there exists a constant $D_q > 0$ such that $\rho_E(\beta) \le D_q\beta^q$ for any $\beta > 0$, then $E$ is called a $q$-uniformly smooth Banach space. Let $1 < q \le 2 \le p$ with $\frac{1}{p} + \frac{1}{q} = 1$. We remark that a Banach space $E$ is $p$-uniformly convex if and only if its dual $E^*$ is $q$-uniformly smooth. Examples of uniformly smooth Banach spaces include the Hilbert spaces, the $L_p$ (or $\ell_p$) spaces, $1 < p < \infty$, and the Sobolev spaces $W_m^p$, $1 < p < \infty$ (see [29]). Moreover, the Hilbert spaces are uniformly smooth, while
$L_p$ (or $\ell_p$) or $W_m^p$ is $p$-uniformly smooth if $1 < p \le 2$ and $2$-uniformly smooth if $p \ge 2$.
Let $\varphi : \mathbb{R}^+ \to \mathbb{R}^+$ be a continuous, strictly increasing function. Then $\varphi$ is called a gauge function if
$\varphi(0) = 0$ and $\lim_{t \to +\infty}\varphi(t) = +\infty$.
The duality mapping with respect to $\varphi$, i.e., $J_\varphi : E \to E^*$, is defined by
$J_\varphi(x) = \{j \in E^* : \langle x, j\rangle = \|x\|\|j\|, \|j\| = \varphi(\|x\|)\}$, $x \in E$.
When $\varphi(t) = t$, we call $J_\varphi = J$ the normalized duality mapping. In addition, if $\varphi(t) = t^{p-1}$, where $p > 1$, then $J_\varphi = J_p$ is called the generalized duality mapping, defined by
$J_p(u) = \{f \in E^* : \langle u, f\rangle = \|u\|\|f\|, \|f\| = \|u\|^{p-1}\}$, $u \in E$.
In the sequel, $C$ is a nonempty, closed, and convex subset of $E$ and $F(T) = \{x \in C : Tx = x\}$ is the set of fixed points of $T : C \to C$.
Definition 1.
Ref. [30] Let $E$ be a Banach space, $J_\varphi : E \to E^*$ a duality mapping with gauge function $\varphi$, and $C$ a nonempty subset of $E$. A mapping $T : C \to E$ is said to be
(i) 
φ-firmly non-expansive if
$\langle Tu - Tv, J_\varphi(Tu) - J_\varphi(Tv)\rangle \le \langle Tu - Tv, J_\varphi(u) - J_\varphi(v)\rangle$
for all $u, v \in C$;
(ii) 
φ-firmly quasi-non-expansive if $F(T) \neq \emptyset$ and
$\langle Tu - z, J_\varphi(u) - J_\varphi(Tu)\rangle \ge 0$
for all $u \in C$ and $z \in F(T)$.
Definition 2.
Given a Gâteaux differentiable and convex function $f : E \to \mathbb{R}$, the function
$\Delta_f(u, v) := f(v) - f(u) - \langle f'(u), v - u\rangle$, for all $u, v \in E$,
is called the Bregman distance of $u$ to $v$ with respect to the function $f$.
Moreover, since $J_E^p$ is the derivative of the function $f_p(u) = \frac{1}{p}\|u\|^p$, the Bregman distance with respect to $f_p$ becomes
$\Delta_p(u, v) = \frac{1}{q}\|u\|^p - \langle J_E^pu, v\rangle + \frac{1}{p}\|v\|^p = \frac{1}{p}(\|v\|^p - \|u\|^p) + \langle J_E^pu, u - v\rangle = \frac{1}{q}(\|u\|^p - \|v\|^p) - \langle J_E^pu - J_E^pv, v\rangle$.
Remark 1.
It follows from the definition of $\Delta_p$ that
$\Delta_p(u, v) = \Delta_p(u, z) + \Delta_p(z, v) + \langle z - v, J_E^pu - J_E^pz\rangle$, for all $u, v, z \in E$,
and
$\Delta_p(u, v) + \Delta_p(v, u) = \langle u - v, J_E^pu - J_E^pv\rangle$, for all $u, v \in E$.
Although the Bregman distance is not symmetric, it has the following relationship with the norm distance:
$\alpha\|u - v\|^p \le \Delta_p(u, v) \le \langle u - v, J_E^pu - J_E^pv\rangle$, for all $u, v \in E$, $\alpha > 0$.  (13)
This indicates, in particular, that the Bregman distance is non-negative.
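The identities of Remark 1 can be checked numerically in the Hilbert-space case $E = \mathbb{R}^n$ with $p = q = 2$, where $J_E^p$ is the identity and $\Delta_p(u, v) = \frac{1}{2}\|u - v\|^2$; the data below are arbitrary:

```python
import numpy as np

# Numerical sanity check of the Bregman-distance identities above in the
# Hilbert-space case E = R^n with p = q = 2, where J_E^p is the identity and
# Delta_p(u, v) = 1/2||u||^2 - <u, v> + 1/2||v||^2 = 1/2||u - v||^2.
rng = np.random.default_rng(1)
u, v, z = rng.standard_normal((3, 4))

def delta(a, b):
    return 0.5 * a @ a - a @ b + 0.5 * b @ b

# three-point identity: Delta(u,v) = Delta(u,z) + Delta(z,v) + <z - v, u - z>
lhs = delta(u, v)
rhs = delta(u, z) + delta(z, v) + (z - v) @ (u - z)
print(np.isclose(lhs, rhs))            # True

# symmetrized identity: Delta(u,v) + Delta(v,u) = <u - v, Ju - Jv>
print(np.isclose(delta(u, v) + delta(v, u), (u - v) @ (u - v)))  # True
```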
Definition 3.
The Bregman projection mapping $\Pi_C : E \to C$ is defined by
$\Pi_Cu = \arg\min_{v \in C}\Delta_p(u, v)$, for all $u \in E$.  (14)
The Bregman projection can also be characterized by the following inequality:
$\langle J_E^pu - J_E^p\Pi_Cu, z - \Pi_Cu\rangle \le 0$, for all $z \in C$.  (15)
This is equivalent to
$\Delta_p(\Pi_Cu, z) \le \Delta_p(u, z) - \Delta_p(u, \Pi_Cu)$, for all $z \in C$.  (16)
Lemma 1.
Ref. [31] Let $E$ be a $q$-uniformly smooth Banach space with $q$-uniform smoothness constant $c_q > 0$. For any $u, v \in E$, the following inequality holds:
$\|u - v\|^q \le \|u\|^q - q\langle v, J_E^qu\rangle + c_q\|v\|^q$.
Definition 4.
A mapping $T : C \to C$ is said to be closed (or to have a closed graph) if, whenever a sequence $\{x_n\} \subset C$ converges strongly to a point $x \in C$ and $Tx_n \to y$, then $Tx = y$.
Lemma 2.
Ref. [29] The generalized duality mapping has the following properties:
(I) 
$J_E^p(x)$ is nonempty, bounded, closed, and convex for any $x \in E$.
(II) 
If $E$ is a reflexive Banach space, then $J_E^p$ is a mapping from $E$ onto $E^*$.
(III) 
If $E$ is a smooth Banach space, then $J_E^p$ is single-valued.
(IV) 
If $E$ is a uniformly smooth Banach space, then $J_E^p$ is norm-to-norm uniformly continuous on each bounded subset of $E$.
Lemma 3.
Ref. [32] For any $\{x_n\} \subset E$ and $\{t_n\} \subset (0, 1)$ with $\sum_{n=1}^Nt_n = 1$, the following inequality holds:
$\Delta_p\left(J_{E^*}^q\left(\sum_{n=1}^Nt_nJ_E^p(x_n)\right), x\right) \le \sum_{n=1}^Nt_n\Delta_p(x_n, x)$, for all $x \in E$.
We now define some important operators which play a key role in our convergence analysis.
Definition 5.
Let $A : E \to 2^{E^*}$ be a multi-valued mapping. We define the effective domain of $A$ by $D(A) = \{x \in E : Ax \neq \emptyset\}$ and the range of $A$ by $R(A) = \bigcup_{x \in D(A)}Ax$. The operator $A$ is said to be monotone if $\langle x - y, u^* - v^*\rangle \ge 0$ for all $x, y \in D(A)$, $u^* \in Ax$, and $v^* \in Ay$. When the graph of $A$ is not properly contained in the graph of any other monotone operator, we say that $A$ is maximal monotone.
Let $E$ be a smooth, strictly convex, and reflexive Banach space and let $A : E \to 2^{E^*}$ be a maximal monotone operator. The metric resolvent operator with respect to $A$ is defined by $Q_r^\varphi(u) = (I + rJ_\varphi^{-1}A)^{-1}(u)$. It is easy to see that
$0 \in J_\varphi(Q_r^\varphi(u) - u) + rAQ_r^\varphi(u)$,
and $F(Q_r^\varphi) = A^{-1}0$ for all $r > 0$ (see, e.g., [20]). Moreover, by the monotonicity of $A$, we can show that
$\langle Q_r^\varphi(u) - Q_r^\varphi(v), J_\varphi(u - Q_r^\varphi(u)) - J_\varphi(v - Q_r^\varphi(v))\rangle \ge 0$
for all $u, v \in E$. In addition, if $A^{-1}0 \neq \emptyset$, then
$\langle Q_r^\varphi(u) - z, J_\varphi(u - Q_r^\varphi(u))\rangle \ge 0$
for all $u \in E$ and $z \in A^{-1}0$. In the case $\varphi(t) = t^{p-1}$ with $p \in (1, +\infty)$, we denote $Q_r^\varphi$ by $Q_r = (I + rJ_p^{-1}A)^{-1}$ (see, e.g., [33]).
Proposition 1.
Ref. [30] Let $A : E \to 2^{E^*}$ be an operator satisfying the following range condition:
$D(A) \subset C \subset J_\varphi^{-1}R(J_\varphi + \lambda A)$, for all $\lambda > 0$.
Define the φ-resolvent operator $R_\lambda^\varphi : C \to 2^E$ associated with the operator $A$ by
$R_\lambda^\varphi(x) = \{z \in C : J_\varphi(x) \in (J_\varphi + \lambda A)z\}$, $x \in C$.
Then, for any $u \in C$ and $\lambda > 0$, we see that
$0 \in Au \Leftrightarrow J_\varphi(u) \in (J_\varphi + \lambda A)u \Leftrightarrow u \in (J_\varphi + \lambda A)^{-1}J_\varphi(u) \Leftrightarrow u \in F(R_\lambda^\varphi)$.
Proposition 2.
Ref. [30] Let $C$ be a nonempty, closed, and convex subset of a reflexive, strictly convex Banach space $E$ and let $J_\varphi : E \to E^*$ be the duality mapping with gauge $\varphi$. Let $A : E \to 2^{E^*}$ be a monotone operator satisfying the condition $D(A) \subset C \subset J_\varphi^{-1}R(J_\varphi + \lambda A)$, where $\lambda > 0$. Let $R_\lambda^\varphi$ be a resolvent operator of $A$; then,
(a) 
$R_\lambda^\varphi$ is a φ-firmly non-expansive mapping from $C$ into $C$;
(b) 
$F(R_\lambda^\varphi) = A^{-1}0$.
Let $E$ be a uniformly convex and smooth Banach space and let $A$ be a monotone operator of $E$ into $2^{E^*}$. From Browder [34], we know that $A$ is maximal if and only if, for any $r > 0$,
$R(J_\varphi + rA) = E^*$.
Remark 2.
(i) 
The smoothness and strict convexity of $E$ ensure that $R_\lambda^\varphi$ is single-valued. In addition, the range condition ensures that $R_\lambda^\varphi$ is a single-valued operator from $C$ into $D(A)$. In other words,
$R_\lambda^\varphi(x) = (J_\varphi + \lambda A)^{-1}J_\varphi(x)$, for all $x \in C$.
(ii) 
When $A$ is maximal monotone, the range condition holds for $C = \overline{D(A)}$.
In the sequel, we denote $R_\lambda^\varphi$ by $R_\lambda = (J_p + \lambda A)^{-1}J_p$ for convenience.
Let $E$ and $F$ be real Banach spaces and let $T : E \to F$ be a bounded linear operator. The dual (adjoint) operator of $T$, denoted by $T^*$, is the bounded linear operator $T^* : F^* \to E^*$ defined by
$\langle T^*\bar{y}, x\rangle := \langle\bar{y}, Tx\rangle$, for all $x \in E$, $\bar{y} \in F^*$,
and the equalities $\|T^*\| = \|T\|$ and $N(T^*) = R(T)^\perp$ are valid, where $R(T)^\perp := \{x^* \in F^* : \langle x^*, u\rangle = 0, \text{ for all } u \in R(T)\}$ (see [35,36] for more details on bounded linear operators and their duals).
Lemma 4.
Ref. [9] Let $E$ and $F$ be uniformly convex and smooth Banach spaces and let $T : E \to F$ be a bounded linear operator with adjoint operator $T^*$. Let $R_\lambda$ be the resolvent operator associated with a maximal monotone operator $A$ on $E$ and let $Q_r$ be the metric resolvent associated with a maximal monotone operator $B$ on $F$. Assume that $A^{-1}0 \cap T^{-1}(B^{-1}0) \neq \emptyset$. Let $\lambda, \mu, r > 0$ and $z \in E$. Then, the following are equivalent:
(a) 
$z = R_\lambda(J_{E^*}^q(J_E^p(z) - \mu T^*J_F^p(Tz - Q_rTz)))$; and
(b) 
$z \in A^{-1}0 \cap T^{-1}(B^{-1}0)$.

3. Main Results

In this section, we present our algorithm and its convergence analysis. In the sequel, we assume that the following assumptions hold:
(i)
E 1 and E 2 are two p-uniformly convex and uniformly smooth real Banach spaces.
(ii)
$T : E_1 \to E_2$ is a bounded linear operator with $T \neq 0$ and adjoint $T^* : E_2^* \to E_1^*$.
(iii)
$A : E_1 \to 2^{E_1^*}$ and $B : E_2 \to 2^{E_2^*}$ are maximal monotone operators.
(iv)
R λ is the resolvent operator associated with A and Q r is the metric resolvent operator associated with B.
In addition, we denote by $J_{E_1}^p$ and $J_{E_2}^p$ the duality mappings of $E_1$ and $E_2$, respectively, while $J_{E_1^*}^q$ is the duality mapping of $E_1^*$. It is worth mentioning that, when $E_1^*$ and $E_2^*$ are $q$-uniformly smooth and uniformly convex Banach spaces, $J_{E_1}^p = (J_{E_1^*}^q)^{-1}$, where $1 < q \le 2 \le p < +\infty$ with $\frac{1}{p} + \frac{1}{q} = 1$.
Algorithm SASPM:
Given initial values $x_0, x_1 \in C_1 = E_1$, the sequence $\{x_n\}$ is generated by the following iterative algorithm:
$w_n = J_{E_1^*}^q(J_{E_1}^px_n + \theta_nJ_{E_1}^p(x_n - x_{n-1}))$,
$z_n = J_{E_1^*}^q\left(J_{E_1}^p(w_n) - \rho_n\frac{f^{p-1}(w_n)}{\|T^*J_{E_2}^p(Tw_n - Q_{r_n}Tw_n)\|^p}T^*J_{E_2}^p(Tw_n - Q_{r_n}Tw_n)\right)$,
$y_n = J_{E_1^*}^q(\alpha_nJ_{E_1}^pz_n + (1 - \alpha_n)J_{E_1}^pR_{\lambda_n}z_n)$,
$C_{n+1} = \{u \in C_n : \Delta_p(y_n, u) \le \Delta_p(z_n, u) \le \Delta_p(w_n, u)\}$,
$x_{n+1} = \Pi_{C_{n+1}}x_0$,  (20)
where $\{r_n\}, \{\lambda_n\} \subset (0, \infty)$, $\Pi_{C_{n+1}}$ is the Bregman projection of $E_1$ onto $C_{n+1}$, the sequences of real numbers $\{\alpha_n\} \subset [a, b] \subset (0, 1)$ and $\{\theta_n\} \subset [c, d] \subset (-\infty, +\infty)$, $f(w_n) := \frac{1}{p}\|(I - Q_{r_n})Tw_n\|^p$, and $\{\rho_n\} \subset (0, +\infty)$ satisfies
$\liminf_{n \to +\infty}\rho_n\left(p - \frac{c_q\rho_n^{q-1}}{q}\right) > 0$.
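As a quick, purely illustrative sanity check on this condition: in the Hilbert-space setting $p = q = 2$ with smoothness constant $c_q = 1$, the quantity $\rho(p - c_q\rho^{q-1}/q)$ reduces to $\rho(2 - \rho/2)$, which is positive exactly for $\rho \in (0, 4)$ — the step-size range used in the Hilbert-space corollary of Section 3:

```python
# The step-size condition in the Hilbert-space case: with p = q = 2 and
# smoothness constant c_q = 1, rho*(p - c_q*rho^{q-1}/q) becomes
# rho*(2 - rho/2), which is positive exactly on the interval (0, 4).
p = q = 2.0
c_q = 1.0
phi = lambda rho: rho * (p - c_q * rho ** (q - 1) / q)

print(phi(2.0) > 0, phi(4.0) == 0.0, phi(4.5) < 0)   # True True True
```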
To prove the convergence of Algorithm SASPM, we first establish some useful results.
Lemma 5.
Let $E_1$ be a $p$-uniformly convex and uniformly smooth real Banach space and $C_1 = E_1$. Then, for any sequences $\{y_n\}$, $\{z_n\}$, and $\{w_n\}$ in $E_1$, the set
$C_{n+1} = \{u \in C_n : \Delta_p(y_n, u) \le \Delta_p(z_n, u) \le \Delta_p(w_n, u)\}$
is closed and convex for each $n \ge 1$.
Proof. 
First, since $C_1 = E_1$, $C_1$ is closed and convex. Assume now that $C_n$ is closed and convex. For each $u \in C_n$, by the definition of the function $\Delta_p$, we have
$\Delta_p(y_n, u) \le \Delta_p(z_n, u)$ if and only if $\langle J_{E_1}^pz_n - J_{E_1}^py_n, u\rangle \le \frac{1}{q}(\|z_n\|^p - \|y_n\|^p)$,
and
$\Delta_p(z_n, u) \le \Delta_p(w_n, u)$ if and only if $\langle J_{E_1}^pw_n - J_{E_1}^pz_n, u\rangle \le \frac{1}{q}(\|w_n\|^p - \|z_n\|^p)$.
Both conditions describe half-spaces, so $C_{n+1}$ is closed and convex. The proof is complete. □
Lemma 6.
Let $E_1$, $E_2$, $T$, $T^*$, $A$, $B$, and $J_{E_1}^p$, $J_{E_2}^p$, $J_{E_1^*}^q$ be as above such that Conditions (i)–(iv) are satisfied. If $\Upsilon = \{z : z \in A^{-1}0 \cap T^{-1}(B^{-1}0)\} \neq \emptyset$, then $\Upsilon \subset C_n$ for any $n \ge 1$.
Proof. 
If $\Upsilon = \emptyset$, the inclusion $\Upsilon \subset C_n$ is trivial. Otherwise, for any $z \in \Upsilon$, according to Lemma 3 and using the fact that the resolvent $R_{\lambda_n}$ is non-expansive, we easily obtain
$\Delta_p(y_n, z) = \Delta_p(J_{E_1^*}^q(\alpha_nJ_{E_1}^pz_n + (1 - \alpha_n)J_{E_1}^pR_{\lambda_n}z_n), z) \le \alpha_n\Delta_p(z_n, z) + (1 - \alpha_n)\Delta_p(R_{\lambda_n}z_n, z) \le \Delta_p(z_n, z)$.  (21)
From (20), let $u_n = J_{E_1}^p(w_n) - \rho_n\frac{f^{p-1}(w_n)}{\|g(w_n)\|^p}g(w_n)$ for all $n \ge 1$, where $g(w_n) = T^*J_{E_2}^p(Tw_n - Q_{r_n}Tw_n)$. We see from Lemma 1 that
$\|u_n\|_{E_1^*}^q = \left\|J_{E_1}^p(w_n) - \rho_n\frac{f^{p-1}(w_n)}{\|g(w_n)\|^p}g(w_n)\right\|_{E_1^*}^q \le \|w_n\|^p - q\rho_n\frac{f^{p-1}(w_n)}{\|g(w_n)\|^p}\langle w_n, g(w_n)\rangle + c_q\rho_n^q\frac{f^{(p-1)q}(w_n)}{\|g(w_n)\|^{pq}}\|g(w_n)\|^q = \|w_n\|^p - q\rho_n\frac{f^{p-1}(w_n)}{\|g(w_n)\|^p}\langle w_n, g(w_n)\rangle + c_q\rho_n^q\frac{f^p(w_n)}{\|g(w_n)\|^p}$.  (22)
Then, by (16) and (22), we get
$\Delta_p(z_n, z) = \Delta_p(J_{E_1^*}^q(u_n), z) = \frac{1}{q}\|u_n\|_{E_1^*}^q - \langle u_n, z\rangle + \frac{1}{p}\|z\|^p$
$\le \frac{1}{q}\|w_n\|^p - \rho_n\frac{f^{p-1}(w_n)}{\|g(w_n)\|^p}\langle w_n, g(w_n)\rangle + \frac{c_q\rho_n^q}{q}\frac{f^p(w_n)}{\|g(w_n)\|^p} - \langle J_{E_1}^p(w_n), z\rangle + \rho_n\frac{f^{p-1}(w_n)}{\|g(w_n)\|^p}\langle z, g(w_n)\rangle + \frac{1}{p}\|z\|^p$
$= \Delta_p(w_n, z) + \frac{c_q\rho_n^q}{q}\frac{f^p(w_n)}{\|g(w_n)\|^p} + \rho_n\frac{f^{p-1}(w_n)}{\|g(w_n)\|^p}\langle z - w_n, g(w_n)\rangle$.  (23)
On the other hand, observe that
$\langle g(w_n), z - w_n\rangle = \langle T^*J_{E_2}^p((I - Q_{r_n})Tw_n), z - w_n\rangle = \langle J_{E_2}^p((I - Q_{r_n})Tw_n), Tz - Tw_n\rangle$
$= \langle J_{E_2}^p((I - Q_{r_n})Tw_n), Q_{r_n}Tw_n - Tw_n\rangle + \langle J_{E_2}^p((I - Q_{r_n})Tw_n), Tz - Q_{r_n}Tw_n\rangle$
$\le -\|(I - Q_{r_n})Tw_n\|^p = -pf(w_n)$.  (24)
By using (23) and (24), we get
$\Delta_p(z_n, z) \le \Delta_p(w_n, z) + \left(\frac{c_q\rho_n^q}{q} - \rho_np\right)\frac{f^p(w_n)}{\|g(w_n)\|^p}$,  (25)
which implies by our assumption that
$\Delta_p(z_n, z) \le \Delta_p(w_n, z)$.  (26)
From (21) and (26), we have that $z \in C_{n+1}$; that is, $\Upsilon \subset C_n$ for all $n \ge 1$. □
Theorem 1.
Let $E_1$, $E_2$, $T$, $T^*$, $A$, $B$, and $J_{E_1}^p$, $J_{E_2}^p$, $J_{E_1^*}^q$ be as above such that Conditions (i)–(iv) are satisfied. If $\Upsilon = \{z : z \in A^{-1}0 \cap T^{-1}(B^{-1}0)\} \neq \emptyset$, then the sequence generated by Algorithm (20) converges strongly to the point $z = \Pi_\Upsilon x_0 \in \Upsilon$.
Proof. 
By Lemmas 5 and 6, we know that $\Pi_{C_{n+1}}x_0$ is well defined and $\Upsilon \subset C_n$. According to Algorithm (20), we have $x_n = \Pi_{C_n}x_0$ and $x_{n+1} = \Pi_{C_{n+1}}x_0$ for each $n \ge 1$. Using $\Upsilon \subset C_n$ and (16), we have
$\Delta_p(x_0, x_n) = \Delta_p(x_0, \Pi_{C_n}x_0) \le \Delta_p(x_0, z)$, for all $z \in \Upsilon$, $n \ge 1$.  (27)
This implies that $\{\Delta_p(x_0, x_n)\}$ is bounded. Using (16) again, we also have
$\Delta_p(x_n, x_{n+1}) = \Delta_p(\Pi_{C_n}x_0, x_{n+1}) \le \Delta_p(x_0, x_{n+1}) - \Delta_p(x_0, \Pi_{C_n}x_0) = \Delta_p(x_0, x_{n+1}) - \Delta_p(x_0, x_n)$.  (28)
It follows that $\{\Delta_p(x_0, x_n)\}$ is nondecreasing. Hence, the limit $\lim_{n\to+\infty}\Delta_p(x_0, x_n)$ exists, and
$\lim_{n\to+\infty}\Delta_p(x_n, x_{n+1}) = 0$.  (29)
It follows from (13) that
$\lim_{n\to+\infty}\|x_{n+1} - x_n\| = 0$.  (30)
For any positive integers $m, n$ with $m \ge n$, we have $x_m = \Pi_{C_m}x_0 \in C_n$. Using (16), we obtain
$\Delta_p(x_n, x_m) = \Delta_p(\Pi_{C_n}x_0, x_m) \le \Delta_p(x_0, x_m) - \Delta_p(x_0, \Pi_{C_n}x_0) = \Delta_p(x_0, x_m) - \Delta_p(x_0, x_n)$.  (31)
Since the limit $\lim_{n\to+\infty}\Delta_p(x_0, x_n)$ exists, it follows from (31) that $\Delta_p(x_n, x_m) \to 0$ and hence $\|x_n - x_m\| \to 0$ as $n, m \to +\infty$. Therefore, $\{x_n\}$ is a Cauchy sequence, and there exists a point $x^* \in E_1$ such that $x_n \to x^*$.
From Algorithm (20), Definition 2, and Lemma 1, we have
$\Delta_p(w_n, z) = \frac{1}{q}\|J_{E_1}^px_n + \theta_nJ_{E_1}^p(x_n - x_{n-1})\|_{E_1^*}^q + \frac{1}{p}\|z\|^p - \langle J_{E_1}^px_n + \theta_nJ_{E_1}^p(x_n - x_{n-1}), z\rangle$
$\le \frac{1}{q}\|x_n\|^p + \frac{1}{p}\|z\|^p - \langle J_{E_1}^px_n, z\rangle - \theta_n\langle J_{E_1}^p(x_n - x_{n-1}), z\rangle + \theta_n\langle J_{E_1}^p(x_n - x_{n-1}), x_n\rangle + \frac{c_q\theta_n^q}{q}\|x_n - x_{n-1}\|^p$
$= \Delta_p(x_n, z) + \theta_n\langle J_{E_1}^p(x_n - x_{n-1}), x_n - z\rangle + \frac{c_q\theta_n^q}{q}\|x_n - x_{n-1}\|^p$.  (32)
By virtue of Remark 1 and the definition of $w_n$, we know
$\Delta_p(w_n, z) = \Delta_p(w_n, x_n) + \Delta_p(x_n, z) + \langle x_n - z, J_{E_1}^pw_n - J_{E_1}^px_n\rangle = \Delta_p(w_n, x_n) + \Delta_p(x_n, z) + \theta_n\langle x_n - z, J_{E_1}^p(x_n - x_{n-1})\rangle$.  (33)
By (32) and (33), we get $\Delta_p(w_n, x_n) \le \frac{c_q\theta_n^q}{q}\|x_n - x_{n-1}\|^p$. Then, using (13) and (30) and the boundedness of the sequence $\{\theta_n\}$, we obtain
$\lim_{n\to+\infty}\|w_n - x_n\| = 0$.  (34)
Using a similar method, we can get
$\Delta_p(w_n, x_{n+1}) = \Delta_p(w_n, x_n) + \Delta_p(x_n, x_{n+1}) + \langle x_n - x_{n+1}, J_{E_1}^pw_n - J_{E_1}^px_n\rangle$.
Letting $n \to +\infty$, we have
$\lim_{n\to+\infty}\|w_n - x_{n+1}\| = 0$.
Since $x_{n+1} = \Pi_{C_{n+1}}x_0 \in C_{n+1} \subset C_n$, we have
$\Delta_p(y_n, x_{n+1}) \le \Delta_p(z_n, x_{n+1}) \le \Delta_p(w_n, x_{n+1})$.
According to (35), we obtain
$\lim_{n\to+\infty}\Delta_p(y_n, x_{n+1}) = 0$ and $\lim_{n\to+\infty}\Delta_p(z_n, x_{n+1}) = 0$,
which implies that $\lim_{n\to+\infty}\|y_n - x_{n+1}\| = 0$ and $\lim_{n\to+\infty}\|z_n - x_{n+1}\| = 0$. Hence,
$\|x_n - z_n\| \le \|x_{n+1} - x_n\| + \|x_{n+1} - z_n\| \to 0$, as $n \to +\infty$,
and
$\|y_n - z_n\| \le \|x_{n+1} - y_n\| + \|x_{n+1} - z_n\| \to 0$, as $n \to +\infty$.
We also get from (34) and (37) that
$\|w_n - z_n\| \le \|w_n - x_n\| + \|x_n - z_n\| \to 0$, as $n \to +\infty$.
Since $E_1$ is $p$-uniformly convex and uniformly smooth, $J_{E_1}^p$ is norm-to-norm uniformly continuous on bounded subsets of $E_1$, and we obtain
$\|J_{E_1}^p(w_n) - J_{E_1}^p(z_n)\| \to 0$, as $n \to +\infty$.
Thus, it follows from Algorithm (20) and the fact that $\{\alpha_n\} \subset [a, b] \subset (0, 1)$ that
$\lim_{n\to+\infty}\|J_{E_1}^pR_{\lambda_n}z_n - J_{E_1}^pz_n\| = \lim_{n\to+\infty}\frac{1}{1 - \alpha_n}\|J_{E_1}^py_n - J_{E_1}^pz_n\| = 0$,
which also implies that $\lim_{n\to+\infty}\|R_{\lambda_n}z_n - z_n\| = 0$. From (25), with $z \in \Upsilon$, we get
$\Delta_p(z_n, z) \le \Delta_p(w_n, z) + \rho_n\left(\frac{c_q\rho_n^{q-1}}{q} - p\right)\frac{f^p(w_n)}{\|g(w_n)\|^p} = \Delta_p(w_n, z) - \rho_n\left(p - \frac{c_q\rho_n^{q-1}}{q}\right)\frac{f^p(w_n)}{\|g(w_n)\|^p}$.
This implies that
$\rho_n\left(p - \frac{c_q\rho_n^{q-1}}{q}\right)\frac{f^p(w_n)}{\|g(w_n)\|^p} \le \Delta_p(w_n, z) - \Delta_p(z_n, z) = \frac{1}{q}\|w_n\|^p - \frac{1}{q}\|z_n\|^p - \langle J_{E_1}^pw_n - J_{E_1}^pz_n, z\rangle = \Delta_p(w_n, z_n) + \langle J_{E_1}^pw_n - J_{E_1}^pz_n, z_n - z\rangle \le (\|w_n - z_n\| + \|z_n - z\|)\|J_{E_1}^pw_n - J_{E_1}^pz_n\|$.
Letting $n \to +\infty$, the right-hand side of the last inequality tends to 0. This implies that
$\rho_n\left(p - \frac{c_q\rho_n^{q-1}}{q}\right)\frac{f^p(w_n)}{\|g(w_n)\|^p} \to 0$, as $n \to +\infty$.
Since $\liminf_{n\to+\infty}\rho_n\left(p - \frac{c_q\rho_n^{q-1}}{q}\right) > 0$, we get
$\frac{f^p(w_n)}{\|g(w_n)\|^p} \to 0$, as $n \to +\infty$,
and hence
$\frac{f(w_n)}{\|g(w_n)\|} \to 0$, as $n \to +\infty$.
Furthermore, since $\{g(w_n)\}$ is bounded, we obtain from (42) that
$0 \le f(w_n) = \|g(w_n)\|\frac{f(w_n)}{\|g(w_n)\|} \le M_1\frac{f(w_n)}{\|g(w_n)\|} \to 0$, as $n \to +\infty$,
for some M 1 > 0 . Therefore,
$\lim_{n\to+\infty}f(w_n) = 0$.
Hence,
$\lim_{n\to+\infty}\|(I - Q_{r_n})Tw_n\| = 0$.
In addition,
$\|T^*J_{E_2}^p((I - Q_{r_n})Tw_n)\| \le \|T\|\|(I - Q_{r_n})Tw_n\|^{p-1} \to 0$, as $n \to +\infty$.
Since $\|x_n - w_n\| \to 0$ as $n \to +\infty$ and $\{x_n\}$ is bounded, there exist subsequences $\{x_{n_j}\}$ of $\{x_n\}$ and $\{w_{n_j}\}$ of $\{w_n\}$ such that $x_{n_j} \rightharpoonup w \in E_1$ and $w_{n_j} \rightharpoonup w$. From $\|Tw_n - Q_{r_n}Tw_n\| \to 0$ and the boundedness and linearity of $T$, we have $Tw_{n_j} \rightharpoonup Tw$ and $Q_{r_{n_j}}Tw_{n_j} \rightharpoonup Tw$. Since $Q_{r_n}$ is the metric resolvent of $B$ for $r_n > 0$, we have
$\frac{1}{r_n}J_{E_2}^p(Tw_n - Q_{r_n}Tw_n) \in BQ_{r_n}Tw_n$
for all $n \in \mathbb{N}$; thus, we obtain
$0 \le \left\langle v - Q_{r_{n_j}}Tw_{n_j}, v^* - \frac{J_{E_2}^p(Tw_{n_j} - Q_{r_{n_j}}Tw_{n_j})}{r_{n_j}}\right\rangle$
for all $(v, v^*) \in B$. It follows that
$0 \le \langle v - Tw, v^*\rangle$
for all $(v, v^*) \in B$. Since $B$ is maximal monotone, $Tw \in B^{-1}0$ and hence $w \in T^{-1}(B^{-1}0)$.
Let $b_n = R_{\lambda_n}z_n$ and $k_n = Tw_n - Q_{r_n}Tw_n$ for all $n \in \mathbb{N}$. Then
$b_n = R_{\lambda_n}(J_{E_1^*}^q(J_{E_1}^p(w_n) - \lambda_nT^*J_{E_2}^p(k_n)))$
$\Leftrightarrow J_{E_1}^p(w_n) - \lambda_nT^*J_{E_2}^p(k_n) \in (J_{E_1}^p + \lambda_nA)b_n$
$\Leftrightarrow \frac{J_{E_1}^p(w_n) - J_{E_1}^p(b_n)}{\lambda_n} - T^*J_{E_2}^p(k_n) \in Ab_n$.
Note that
$\|J_{E_1}^p(w_n) - J_{E_1}^p(b_n)\| = \|J_{E_1}^p(w_n) - J_{E_1}^p(R_{\lambda_n}z_n)\| \le \|J_{E_1}^p(w_n) - J_{E_1}^p(z_n)\| + \|J_{E_1}^p(z_n) - J_{E_1}^p(R_{\lambda_n}z_n)\| \to 0$, as $n \to +\infty$.
By the monotonicity of $A$, it follows that
$0 \le \left\langle v - b_n, v^* - \frac{J_{E_1}^p(w_n) - J_{E_1}^p(b_n)}{\lambda_n} + T^*J_{E_2}^p(k_n)\right\rangle$
for all $(v, v^*) \in A$. Then, along a subsequence $\{n_i\}$,
$0 \le \left\langle v - b_{n_i}, v^* - \frac{J_{E_1}^p(w_{n_i}) - J_{E_1}^p(b_{n_i})}{\lambda_{n_i}} + T^*J_{E_2}^p(k_{n_i})\right\rangle$.
Since $b_{n_i} \rightharpoonup w$, using (40) and (43), it follows that $0 \le \langle v - w, v^*\rangle$ for all $(v, v^*) \in A$, and hence $w \in A^{-1}0$. This shows that $w \in A^{-1}0 \cap T^{-1}(B^{-1}0)$. Then, from (28) and (20), we have
$\langle J_{E_1}^px_0 - J_{E_1}^px_n, p - x_n\rangle \le 0$, for all $p \in \Upsilon$.  (44)
Letting $n \to +\infty$ in (44), we obtain
$\langle J_{E_1}^px_0 - J_{E_1}^px^*, p - x^*\rangle \le 0$, for all $p \in \Upsilon$.
Again, from (15), we have $x^* = \Pi_\Upsilon x_0$. Consequently, the sequence $\{x_n\}$ generated by Algorithm (20) converges strongly to $x^* = \Pi_\Upsilon x_0 \in \Upsilon$. The proof is complete. □
As a corollary of Theorem 1, when $E_1$ and $E_2$ reduce to Hilbert spaces, the function $\Delta_p$ becomes $\frac{1}{2}\|x - y\|^2$ and the Bregman projection $\Pi_C$ coincides with the metric projection $P_C$. Then, we obtain the following result.
Theorem 2.
Let $H_1$ and $H_2$ be Hilbert spaces, $A : H_1 \to 2^{H_1}$ and $B : H_2 \to 2^{H_2}$ be maximal monotone operators, $T : H_1 \to H_2$ be a bounded linear operator with $T \neq 0$, and $T^* : H_2 \to H_1$ be the adjoint of $T$. Let $R_\lambda$ be the resolvent operator associated with the maximal monotone operator $A$ on $H_1$ and $Q_r$ be the metric resolvent associated with the maximal monotone operator $B$ on $H_2$. Suppose that $\Upsilon = A^{-1}0 \cap T^{-1}(B^{-1}0) \neq \emptyset$. For fixed $x_0 \in H_1$, let $\{x_n\}_{n=0}^{+\infty}$ be iteratively generated by $x_1 \in H_1$ and
$w_n = x_n + \theta_n(x_n - x_{n-1})$,
$z_n = w_n - \rho_n\frac{f(w_n)}{\|T^*(I - Q_{r_n})Tw_n\|^2}T^*(I - Q_{r_n})Tw_n$,
$y_n = \alpha_nz_n + (1 - \alpha_n)R_{\lambda_n}z_n$,
$C_{n+1} = \{u \in C_n : \|y_n - u\| \le \|z_n - u\| \le \|w_n - u\|\}$,
$x_{n+1} = P_{C_{n+1}}x_0$,  (46)
where $P_{C_{n+1}}$ is the metric projection of $H_1$ onto $C_{n+1}$, the sequences of real numbers $\{\alpha_n\} \subset [a, b] \subset (0, 1)$ and $\{\theta_n\} \subset [c, d] \subset (-\infty, +\infty)$, $f(w_n) := \frac{1}{2}\|(I - Q_{r_n})Tw_n\|^2$, and $\{\rho_n\} \subset (0, 4)$. Then, the sequence $\{x_n\}$ generated by (46) converges strongly to the point $z_0 = P_\Upsilon x_0 \in \Upsilon$.
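A minimal sketch of the self-adaptive step in the Hilbert-space scheme, assuming the concrete choices below (a box constraint for the metric resolvent and a random matrix for $T$, neither from the paper): the step length is built from the current iterate only, so no estimate of $\|T\|$ is needed.

```python
import numpy as np

# One self-adaptive step of the Hilbert-space scheme: the step length
# rho * f(w) / ||T*(I - Q_r)Tw||^2 uses only the current iterate.  Q_r is
# specialized to a projection onto a box (the metric resolvent of a
# normal-cone operator); T and the box are illustrative assumptions.
rng = np.random.default_rng(2)
T = rng.standard_normal((3, 2))
Q_r = lambda y: np.clip(y, 0.0, 1.0)

def self_adaptive_step(w, rho=1.0):
    residual = T @ w - Q_r(T @ w)                 # (I - Q_r) T w
    f = 0.5 * np.linalg.norm(residual) ** 2       # f(w) = 1/2 ||(I-Q_r)Tw||^2
    g = T.T @ residual                            # T*(I - Q_r) T w
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:                              # w already solves the subproblem
        return w
    return w - rho * f / gnorm ** 2 * g

# x* = 0 solves this split problem (T 0 = 0 lies in the box), and for
# rho in (0, 4) each step does not move away from the solution set.
w = rng.standard_normal(2)
z = self_adaptive_step(w)
print(np.linalg.norm(z) <= np.linalg.norm(w) + 1e-12)  # True
```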

4. Applications

In this section, we provide some applications of our result to solving other nonlinear optimization problems.

4.1. Application to Minimization Problem

First, we consider an application of our result to the convex minimization problem in a real Banach space $E$. Let $\vartheta : E \to (-\infty, +\infty]$ be a proper, convex, and lower semicontinuous function. The convex minimization problem is to find $x \in E$ such that
$\vartheta(x) \le \vartheta(y)$, for all $y \in E$.
The set of minimizers of $\vartheta$ is denoted by $\mathrm{Argmin}\,\vartheta$. The subdifferential $\partial\vartheta$ of $\vartheta$ is defined by
$\partial\vartheta(u) = \{w \in E^* : \vartheta(u) + \langle v - u, w\rangle \le \vartheta(v), \text{ for all } v \in E\}$,
for all $u \in E$. From Rockafellar [37], it is known that $\partial\vartheta$ is a maximal monotone operator. Let $C$ be a nonempty, closed, and convex subset of $E$ and let $i_C$ be the indicator function of $C$, i.e.,
$i_C(u) = 0$ if $u \in C$, and $i_C(u) = +\infty$ if $u \notin C$.
Then, $i_C$ is a proper, convex, and lower semicontinuous function on $E$. Thus, the subdifferential $\partial i_C$ of $i_C$ is a maximal monotone operator, and we can define the resolvent $R_\lambda$ of $\partial i_C$ for $\lambda > 0$, i.e.,
$R_\lambda u = (J_p + \lambda\partial i_C)^{-1}J_pu$
for all $u \in E$ and $p \in (1, +\infty)$. For any $x \in E$ and $u \in C$, we have
$u = R_\lambda x \Leftrightarrow J_px \in J_pu + \lambda\partial i_Cu \Leftrightarrow \frac{1}{\lambda}(J_px - J_pu) \in \partial i_Cu$
$\Leftrightarrow i_C(y) \ge \left\langle y - u, \frac{1}{\lambda}(J_px - J_pu)\right\rangle + i_C(u)$, for all $y \in E$,
$\Leftrightarrow 0 \ge \left\langle y - u, \frac{1}{\lambda}(J_px - J_pu)\right\rangle$, for all $y \in C$,
$\Leftrightarrow \langle y - u, J_px - J_pu\rangle \le 0$, for all $y \in C$,
$\Leftrightarrow u = \Pi_Cx$.
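The final equivalence, $u = \Pi_C x$ exactly when $\langle y - u, J_px - J_pu\rangle \le 0$ for all $y \in C$, can be verified numerically in the Hilbert case ($p = 2$, $J_p = I$) with a box $C$, where the projection is a componentwise clip. This is an illustrative check, not part of the proof:

```python
import numpy as np

# Check of the chain above in the Hilbert case (p = 2, J_p = I): the
# resolvent of the subdifferential of the indicator of C is the metric
# projection, characterized by <y - u, x - u> <= 0 for all y in C.
# C is taken to be a box so the projection is a componentwise clip.
rng = np.random.default_rng(3)
proj_C = lambda x: np.clip(x, -1.0, 1.0)

x = rng.standard_normal(5) * 3.0
u = proj_C(x)                                 # candidate for R_lambda x
ys = rng.uniform(-1.0, 1.0, size=(1000, 5))   # sample points of C
print(np.all((ys - u) @ (x - u) <= 1e-12))    # True
```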
Let $E_1$ and $E_2$ be real Banach spaces and let $\vartheta : E_1 \to (-\infty, +\infty]$ and $\xi : E_2 \to (-\infty, +\infty]$ be proper, lower semicontinuous, and convex functions such that $\mathrm{Argmin}\,\vartheta \neq \emptyset$ and $\mathrm{Argmin}\,\xi \neq \emptyset$. Consider the Split Proximal Feasibility Problem (SPFP): find $x \in E_1$ such that
$x \in \mathrm{Argmin}\,\vartheta$ and $Tx \in \mathrm{Argmin}\,\xi$,  (47)
where $\mathrm{Argmin}\,\vartheta := \{\bar{x} \in E_1 : \vartheta(\bar{x}) \le \vartheta(x), \text{ for all } x \in E_1\}$ and $\mathrm{Argmin}\,\xi := \{\bar{y} \in E_2 : \xi(\bar{y}) \le \xi(y), \text{ for all } y \in E_2\}$. We denote the solution set of (47) by $\Omega$. The SPFP is a generalization of the split feasibility problem and has been studied extensively by many authors in real Hilbert spaces (see, e.g., [38,39,40,41,42]).
By setting $A = \partial\vartheta$ and $B = \partial\xi$, we obtain a strong convergence result for solving (47) in real Banach spaces.
Theorem 3.
Let E_1 be a p-uniformly convex and uniformly smooth Banach space and E_2 be a uniformly convex and smooth Banach space. Let ϑ and ξ be proper, lower semicontinuous, and convex functions of E_1 into (−∞, +∞] and of E_2 into (−∞, +∞] such that (∂ϑ)^{−1} 0 ≠ ∅ and (∂ξ)^{−1} 0 ≠ ∅, respectively. Let T : E_1 → E_2 be a bounded linear operator such that T ≠ 0 and let T* be the adjoint operator of T. Suppose that Ω ≠ ∅. For fixed x_0 ∈ E_1, let {x_n}_{n=0}^∞ be iteratively generated by x_1 ∈ E_1 and
w_n = J^q_{E_1^*}[ J^p_{E_1} x_n + θ_n J^p_{E_1}(x_n − x_{n−1}) ],
v_n = argmin_{y ∈ E_2} { ξ(y) + (1/(2μ_n)) ‖y‖² − (1/μ_n) ⟨y, J^p_{E_2} T w_n⟩ },
z_n = J^q_{E_1^*}[ J^p_{E_1}(w_n) − ρ_n f^{p−1}(w_n) T* J^p_{E_2}(T w_n − v_n) ],
u_n = argmin_{x ∈ E_1} { ϑ(x) + (1/(2σ_n)) ‖x‖² − (1/σ_n) ⟨x, J^p_{E_1} z_n⟩ },
y_n = J^q_{E_1^*}[ α_n J^p_{E_1} z_n + (1 − α_n) J^p_{E_1} u_n ],
C_{n+1} = { u ∈ C_n : Δ_p(y_n, u) ≤ Δ_p(z_n, u) ≤ Δ_p(w_n, u) },
x_{n+1} = Π_{C_{n+1}} x_0,
where {σ_n}, {μ_n} ⊂ (0, +∞), Π_{C_{n+1}} is the Bregman projection of E_1 onto C_{n+1}, the sequences of real numbers {α_n} ⊂ [a, b] ⊂ (0, 1) and {θ_n} ⊂ [c, d] ⊂ (−∞, +∞), f(w_n) := (1/p) ‖T w_n − v_n‖^p, and {ρ_n} ⊂ (0, +∞) satisfies
lim inf_{n→+∞} ρ_n ( p − (C_q ρ_n^{q−1}) / q ) > 0,
where C_q is the uniform smoothness coefficient of E_1. Then, x_n → z_0 ∈ (∂ϑ)^{−1} 0 ∩ T^{−1}((∂ξ)^{−1} 0), where z_0 := Π_{(∂ϑ)^{−1} 0 ∩ T^{−1}((∂ξ)^{−1} 0)} x_0.
Proof. 
We know from [43] that
v_n = argmin_{y ∈ E_2} { ξ(y) + (1/(2μ_n)) ‖y‖² − (1/μ_n) ⟨y, J^p_{E_2} T w_n⟩ }
is equivalent to
0 ∈ ∂ξ(v_n) + (1/μ_n) J^p_{E_2} v_n − (1/μ_n) J^p_{E_2} T w_n.
From this, we have J^p_{E_2} T w_n ∈ J^p_{E_2} v_n + μ_n ∂ξ(v_n), i.e., v_n = Q_{μ_n} T w_n. Similarly, we have that
u_n = argmin_{x ∈ E_1} { ϑ(x) + (1/(2σ_n)) ‖x‖² − (1/σ_n) ⟨x, J^p_{E_1} z_n⟩ }
is equivalent to u_n = R_{σ_n} z_n. The conclusion then follows from Theorem 1. □
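To make the scheme of Theorem 3 concrete, the following sketch specializes it to Hilbert spaces (p = q = 2, the duality maps become identities, and Δ_2 is the squared distance) with ϑ and ξ taken as indicator functions of boxes, so both proximal steps reduce to projections and the SPFP becomes a split feasibility problem. For simplicity the outer shrinking projection x_{n+1} = Π_{C_{n+1}} x_0 is replaced by the direct update x_{n+1} = y_n, and the step size is chosen self-adaptively from the residual, so no estimate of ‖T‖ is used. All numerical choices (T, C, Q, θ, α, γ) are our own illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy split proximal feasibility instance in Hilbert spaces:
# find x in C with T x in Q, where C = [0,1]^2, Q = [2,3] x [0,1],
# and T = diag(2, 1); any x = (1, t) with t in [0,1] is a solution.
T = np.diag([2.0, 1.0])
P_C = lambda x: np.clip(x, 0.0, 1.0)                 # prox of i_C
P_Q = lambda y: np.clip(y, [2.0, 0.0], [3.0, 1.0])   # prox of i_Q

def inertial_step(x, x_prev, theta=0.3, alpha=0.5, gamma=1.0):
    w = x + theta * (x - x_prev)          # inertial extrapolation w_n
    v = P_Q(T @ w)                        # proximal step in E_2 (v_n)
    r = T @ w - v                         # residual T w_n - v_n
    grad = T.T @ r                        # gradient of f(w) = 0.5*||T w - v||^2
    g2 = grad @ grad
    rho = gamma * 0.5 * (r @ r) / g2 if g2 > 1e-12 else 0.0  # self-adaptive step
    z = w - rho * grad                    # z_n: no knowledge of ||T|| needed
    u = P_C(z)                            # proximal step in E_1 (u_n)
    return alpha * z + (1 - alpha) * u    # averaged update y_n

x_prev, x = np.array([0.0, 0.0]), np.array([0.2, 0.5])
for _ in range(200):
    x, x_prev = inertial_step(x, x_prev), x
# x is now (numerically) feasible: x lies in C and T x lies in Q
```

On this toy instance the iterates converge to a point of the solution set {(1, t) : t ∈ [0, 1]}; the self-adaptive ρ_n plays the role of the step-size condition in Theorem 3.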

4.2. Application to Equilibrium Problem

Let C be a nonempty, closed, and convex subset of a Banach space E and let G : C × C → R be a bifunction. For solving the equilibrium problem, we assume that G satisfies the following conditions:
(A1)
G(x, x) = 0 for all x ∈ C.
(A2)
G is monotone, i.e., G(x, y) + G(y, x) ≤ 0 for any x, y ∈ C.
(A3)
G is upper-hemicontinuous, i.e., for each x, y, z ∈ C,
lim sup_{t→0+} G(t z + (1 − t) x, y) ≤ G(x, y).
(A4)
G(x, ·) is convex and lower semicontinuous for each x ∈ C.
The equilibrium problem is to find x* ∈ C such that
G(x*, y) ≥ 0 for all y ∈ C.
The set of solutions of this problem is denoted by EP(G).
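For intuition, in the Hilbert-space case (g(x) = ½‖x‖²) the resolvent S^g_G of Lemma 7 below can sometimes be written in closed form. The sketch below does this for the bifunction G(z, y) = ⟨Az, y − z⟩ with A = diag(a), a ≥ 0, over a box, an illustrative choice of ours (this G satisfies (A1)–(A4), and its equilibrium problem is a variational inequality); the defining relation G(z, y) + ⟨y − z, z − x⟩ ≥ 0 for all y ∈ C then reduces coordinate-wise to a clipped scaling:

```python
import numpy as np

def resolvent_EP(x, a, lo=0.0, hi=1.0):
    """Resolvent S_G of G(z, y) = <A z, y - z>, A = diag(a) with a >= 0,
    over the box C = [lo, hi]^n, with g(x) = 0.5*||x||^2 (Hilbert case).
    S_G(x) is the unique z in C satisfying
        G(z, y) + <y - z, z - x> >= 0  for all y in C,
    which coordinate-wise reduces to z_i = clip(x_i / (1 + a_i), lo, hi)."""
    return np.clip(np.asarray(x) / (1.0 + np.asarray(a)), lo, hi)

x = np.array([0.9, 2.4, -0.5])
a = np.array([0.0, 2.0, 1.0])
z = resolvent_EP(x, a)   # (0.9, 0.8, 0.0): single-valued and lands in C
```

The single-valuedness and nonexpansive behavior visible here are exactly properties (ii) and (iii) of Lemma 7 in this special case.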
Lemma 7.
[44] Let g : E → (−∞, +∞] be a super coercive Legendre function, let G be a bifunction of C × C into R satisfying Conditions (A1)–(A4), and let x ∈ E. Define a mapping S^g_G : E → C as follows:
S^g_G(x) = { z ∈ C : G(z, y) + ⟨y − z, ∇g(z) − ∇g(x)⟩ ≥ 0 for all y ∈ C }.
Then,
(i) 
dom S^g_G = E.
(ii) 
S^g_G is single-valued.
(iii) 
S^g_G is a Bregman firmly nonexpansive operator.
(iv) 
The set of fixed points of S^g_G is the solution set of the corresponding equilibrium problem, i.e., F(S^g_G) = EP(G).
(v) 
EP(G) is closed and convex.
(vi) 
For all x ∈ E and for all u ∈ F(S^g_G), we have
D_g(u, S^g_G(x)) + D_g(S^g_G(x), x) ≤ D_g(u, x).
Proposition 3.
[45] Let g : E → (−∞, +∞] be a super coercive, Legendre, Fréchet differentiable, and totally convex function. Let C be a closed and convex subset of E and assume that the bifunction G : C × C → R satisfies Conditions (A1)–(A4). Let A_G be the set-valued mapping of E into 2^{E*} defined by
A_G(x) = { z ∈ E* : G(x, y) ≥ ⟨y − x, z⟩ for all y ∈ C } if x ∈ C, and A_G(x) = ∅ if x ∈ E \ C.
Then, A_G is a maximal monotone operator, EP(G) = A_G^{−1}(0), and S^g_G = R^g_{A_G}.
Let E_1 and E_2 be real Banach spaces and let C and Q be nonempty, closed, and convex subsets of E_1 and E_2, respectively. Let G_1 : C × C → R and G_2 : Q × Q → R be bifunctions satisfying Conditions (A1)–(A4) and let T : E_1 → E_2 be a bounded linear operator. We consider the Split Equilibrium Problem (SEP): Find x ∈ C such that
x ∈ EP(G_1) and T x ∈ EP(G_2). (49)
The SEP was introduced by Moudafi [46] and has been studied by many authors in Hilbert and Banach spaces (see, e.g., [47,48,49,50]). We denote the set of solutions of (49) by SEP(G_1, G_2).
Setting A = A_{G_1} and B = A_{G_2} in Algorithm (20) and applying Lemma 7 and Proposition 3, we obtain a strong convergence result for solving the SEP in real Banach spaces.
Theorem 4.
Let E_1 be a p-uniformly convex and uniformly smooth Banach space, E_2 be a uniformly smooth Banach space, and C and Q be nonempty, closed, and convex subsets of E_1 and E_2, respectively. Let G_1 : C × C → R and G_2 : Q × Q → R be bifunctions satisfying Conditions (A1)–(A4), and let g : E_1 → R and h : E_2 → R be super coercive Legendre functions which are bounded, uniformly Fréchet differentiable, and totally convex on bounded subsets of E_1 and E_2, respectively. Let T : E_1 → E_2 be a bounded linear operator with T ≠ 0 and T* : E_2^* → E_1^* be the adjoint of T. Suppose that SEP(G_1, G_2) ≠ ∅. For fixed x_0 ∈ E_1, let {x_n}_{n=0}^∞ be iteratively generated by x_1 ∈ E_1 and
w_n = J^q_{E_1^*}[ J^p_{E_1} x_n + θ_n J^p_{E_1}(x_n − x_{n−1}) ],
z_n = J^q_{E_1^*}[ J^p_{E_1}(w_n) − ρ_n f^{p−1}(w_n) T* J^p_{E_2}(T w_n − S^h_{H_n} T w_n) ],
y_n = J^q_{E_1^*}[ α_n J^p_{E_1} z_n + (1 − α_n) J^p_{E_1} S^g_{G_n} z_n ],
C_{n+1} = { u ∈ C_n : Δ_p(y_n, u) ≤ Δ_p(z_n, u) ≤ Δ_p(w_n, u) },
x_{n+1} = Π_{C_{n+1}} x_0,
where {H_n}, {G_n} ⊂ (0, +∞) and S^h_{H_n}, S^g_{G_n} denote the resolvents of G_2 and G_1 with parameters H_n and G_n, respectively, f(w_n) = (1/p) ‖(I − S^h_{H_n}) T w_n‖^p, Π_{C_{n+1}} is the Bregman projection of E_1 onto C_{n+1}, the sequences of real numbers {α_n} ⊂ [a, b] ⊂ (0, 1) and {θ_n} ⊂ [c, d] ⊂ (−∞, +∞), and {ρ_n} ⊂ (0, +∞) satisfies
lim inf_{n→+∞} ρ_n ( p − (C_q ρ_n^{q−1}) / q ) > 0,
where C_q is the uniform smoothness coefficient of E_1. Then, x_n → z_0 = Π_{SEP(G_1, G_2)} x_0.
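Both algorithms build the shrinking sets C_{n+1} from Bregman-distance inequalities. In the Hilbert-space case (p = 2, Δ_2(x, u) = ‖x − u‖²), each inequality Δ_2(y_n, u) ≤ Δ_2(w_n, u) defines a half-space, so Π_{C_{n+1}} x_0 is a projection onto an intersection of half-spaces, which can be computed, for example, by Dykstra's algorithm. The following sketch (function names and toy data are our own, and Dykstra's method is one of several valid choices here) shows the half-space reduction and the projection:

```python
import numpy as np

def halfspace(w, y):
    """In a Hilbert space, {u : ||y - u||^2 <= ||w - u||^2} is the
    half-space {u : <w - y, u> <= (||w||^2 - ||y||^2)/2}."""
    return w - y, 0.5 * (w @ w - y @ y)

def proj_halfspace(x, n, b):
    """Euclidean projection of x onto {u : <n, u> <= b}."""
    viol = n @ x - b
    return x if viol <= 0 else x - (viol / (n @ n)) * n

def dykstra(x0, halfspaces, iters=100):
    """Dykstra's algorithm: projection of x0 onto an intersection of
    half-spaces -- a sketch of computing Pi_{C_{n+1}} x_0."""
    x = x0.copy()
    incs = [np.zeros_like(x0) for _ in halfspaces]
    for _ in range(iters):
        for i, (n, b) in enumerate(halfspaces):
            y = proj_halfspace(x + incs[i], n, b)
            incs[i] = x + incs[i] - y   # per-set correction term
            x = y
    return x
```

With w_n, z_n, y_n from one iteration, C_{n+1} is represented by accumulating the corresponding half-space pairs and projecting x_0 onto their intersection.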

5. Conclusions

In this paper, we introduce a new inertial shrinking projection method for solving the split common null point problem in p-uniformly convex and uniformly smooth real Banach spaces. The algorithm is designed such that its step size does not require prior knowledge of the norm of the bounded linear operator. A strong convergence result is also proved under some mild conditions. We further provide some applications of our result to other nonlinear optimization problems. We highlight our contributions in this paper as follows:
  • A significant improvement in this paper is the self-adaptive technique introduced for selecting the step size, which yields a strong convergence result without prior knowledge of the norm of the bounded linear operator. This improves the results in [6,8,9,11,12,16,19,20] and other important results in this direction.
  • The result in this paper extends the results in [4,5,10,11] and several other results on solving split common null point problem from real Hilbert spaces to real Banach spaces.
  • The strong convergence result in this paper is more desirable in optimization theory (see, e.g., [51]).

Author Contributions

Conceptualization, C.C.O.; methodology, C.C.O. and L.O.J.; software, C.C.O. and L.O.J.; validation, C.C.O., L.O.J. and R.N.; formal analysis, C.C.O. and L.O.J.; writing–original draft preparation, C.C.O. and L.O.J.; writing–review and editing, C.C.O., L.O.J. and R.N.; supervision, L.O.J.; project administration, C.C.O.; funding acquisition, L.O.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Mathematical Research Fund at the Sefako Makgatho Health Sciences University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. The split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
  2. Bruck, R.E.; Reich, S. Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3, 459–470.
  3. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  4. Censor, Y.; Segal, A. The split common fixed-point problem for directed operators. J. Convex Anal. 2009, 16, 587–600.
  5. Moudafi, A. The split common fixed point problem for demicontractive mappings. Inverse Probl. 2010, 26, 055007.
  6. Alofi, A.S.; Alsulami, S.M.; Takahashi, W. Strongly convergent iterative method for the split common null point problem in Banach spaces. J. Nonlinear Convex Anal. 2016, 17, 311–324.
  7. Alsulami, S.M.; Takahashi, W. The split common null point problem for maximal monotone mappings in Hilbert spaces and applications in Banach spaces. J. Nonlinear Convex Anal. 2014, 15, 793–808.
  8. Takahashi, W. The split common null point problem in Banach spaces. Arch. Math. 2015, 104, 357–365.
  9. Suantai, S.; Shehu, Y.; Cholamjiak, P. Nonlinear iterative methods for solving the split common null point problem in Banach spaces. Optim. Methods Softw. 2019, 34, 853–874.
  10. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124.
  11. Jailoka, P.; Suantai, S. Split null point problems and fixed point problems for demicontractive multivalued mappings. Mediterr. J. Math. 2018, 15, 204.
  12. Sitthithakerngkiet, K.; Deepho, J.; Kumam, P. Convergence analysis of a general iterative algorithm for finding a common solution of split variational inclusion and optimization problems. Numer. Algorithms 2018, 79, 801–824.
  13. Dong, Q.L.; He, S.; Zhao, J. Solving the split equality problem without prior knowledge of operator norms. Optimization 2015, 64, 1887–1906.
  14. Zhao, J. Solving split equality fixed-point problem of quasi-nonexpansive mappings without prior knowledge of operators norms. Optimization 2015, 64, 2619–2630.
  15. Zhao, J.; Zhang, H. Solving split common fixed-point problem of firmly quasi-nonexpansive mappings without prior knowledge of operators norms. Abstr. Appl. Anal. 2014, 2014, 389689.
  16. Takahashi, W. The split common null point problem in two Banach spaces. J. Nonlinear Convex Anal. 2015, 16, 2343–2350.
  17. Takahashi, S.; Takahashi, W. The split common null point problem and the shrinking projection method in Banach spaces. Optimization 2016, 65, 281–287.
  18. Suantai, S.; Srisap, K.; Naprang, N.; Mamat, M.; Yundon, V.; Cholamjiak, P. Convergence theorems for finding split common null point problem in Banach spaces. Appl. Gen. Topol. 2017, 18, 345–360.
  19. Takahashi, W. The split common null point problem for generalized resolvents in two Banach spaces. Numer. Algorithms 2017, 75, 1065–1078.
  20. Takahashi, W. The split feasibility problem in Banach spaces. J. Nonlinear Convex Anal. 2014, 15, 1349–1355.
  21. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  22. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  23. Attouch, H.; Peypouquet, J.; Redont, P. A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Optim. 2014, 24, 232–256.
  24. Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
  25. Bot, R.I.; Csetnek, E.R. An inertial alternating direction method of multipliers. Minimax Theory Appl. 2016, 1, 29–49.
  26. Bot, R.I.; Csetnek, E.R. An inertial forward–backward–forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 2016, 71, 519–540.
  27. Dong, Q.L.; Yuan, H.B.; Cho, Y.J.; Rassias, T.M. Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 2018, 12, 87–102.
  28. Cholamjiak, W.; Pholasa, N.; Suantai, S. A modified inertial shrinking projection method for solving inclusion problems and quasi-nonexpansive multivalued mappings. Comput. Appl. Math. 2018, 34, 5750–5774.
  29. Cioranescu, I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems; Kluwer: Dordrecht, The Netherlands, 1990; Volume 62.
  30. Kuo, L.-W.; Sahu, D.R. Bregman distance and strong convergence of proximal-type algorithms. Abstr. Appl. Anal. 2013, 2013, 590519.
  31. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
  32. Shehu, Y.; Ogbuisi, F.; Iyiola, O. Convergence analysis of an iterative algorithm for fixed point problems and split feasibility problems in certain Banach spaces. Optimization 2016, 65, 299–323.
  33. Aoyama, K.; Kohsaka, F.; Takahashi, W. Three generalizations of firmly nonexpansive mappings: Their relations and continuity properties. J. Nonlinear Convex Anal. 2009, 10, 131–147.
  34. Browder, F.E. Nonlinear maximal monotone operators in Banach space. Math. Ann. 1968, 175, 89–113.
  35. Dunford, N.; Schwartz, J.T. Linear Operators I; Wiley Interscience: New York, NY, USA, 1958.
  36. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
  37. Rockafellar, R.T. On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149, 75–88.
  38. Moudafi, A.; Thakur, B.S. Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 2014, 8, 2099–2110.
  39. Shehu, Y.; Cai, G.; Iyiola, O.S. Iterative approximation of solutions for proximal split feasibility problems. Fixed Point Theory Appl. 2015, 2015, 123.
  40. Shehu, Y.; Ogbuisi, F.U. Convergence analysis for proximal split feasibility problems and fixed point problems. J. Appl. Math. Comput. 2015, 48, 221–239.
  41. Pant, R.; Okeke, C.C.; Izuchukwu, C. Modified viscosity implicit rules for proximal split feasibility problem. J. Appl. Math. Comput. 2020.
  42. Yao, Y.; Yao, Z.; Abdou, A.; Cho, Y. Self-adaptive algorithms for proximal split feasibility problems and strong convergence analysis. Fixed Point Theory Appl. 2015, 2015, 205.
  43. Barbu, V. Nonlinear Semigroups and Differential Equations in Banach Spaces; Editura Academiei: Bucharest, Romania; Springer: Berlin/Heidelberg, Germany, 1976.
  44. Reich, S.; Sabach, S. Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 2010, 31, 22–44.
  45. Sabach, S. Products of finitely many resolvents of maximal monotone mappings in reflexive Banach spaces. SIAM J. Optim. 2011, 21, 1289–1308.
  46. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
  47. Kazmi, K.R.; Rizvi, S.H. Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 2013, 21, 44–51.
  48. Jolaoso, L.O.; Oyewole, O.K.; Okeke, C.C.; Mewomo, O.T. A unified algorithm for solving split generalized mixed equilibrium problem and fixed point of nonspreading mapping in Hilbert space. Demonstr. Math. 2018, 51, 211–232.
  49. Jolaoso, L.O.; Karahan, I. A general alternative regularization method with line search technique for solving split equilibrium and fixed point problems in Hilbert space. Comput. Appl. Math. 2020, 30.
  50. Okeke, C.C.; Jolaoso, L.O.; Isiogugu, F.O.; Mewomo, O.T. Solving split equality equilibrium and fixed point problems in Banach spaces without prior knowledge of operator norm. J. Nonlinear Convex Anal. 2019, 20, 661–683.
  51. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
