Article

Multi-Step Inertial Hybrid and Shrinking Tseng’s Algorithm with Meir–Keeler Contractions for Variational Inclusion Problems

College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(13), 1548; https://doi.org/10.3390/math9131548
Submission received: 31 May 2021 / Revised: 28 June 2021 / Accepted: 29 June 2021 / Published: 1 July 2021
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract: In this paper, we propose two new iterative algorithms with Meir–Keeler contractions that are based on Tseng’s method, the multi-step inertial method, the hybrid projection method, and the shrinking projection method to solve a monotone variational inclusion problem in Hilbert spaces. The strong convergence of the proposed iterative algorithms is proven. Using our results, we can solve convex minimization problems.

1. Introduction

1.1. Variational Inclusion Problem

In a real Hilbert space $H$ with inner product $\langle \cdot , \cdot \rangle$ and induced norm $\| \cdot \|$, we assume that $G : H \to 2^H$ is a set-valued mapping while $F : H \to H$ is a single-valued mapping.
We consider the following variational inclusion problem: find an element $x^* \in H$ such that
$$0 \in F x^* + G x^* . \tag{1}$$
This problem has been studied by many scholars [1,2,3,4,5,6,7,8,9].
A classical algorithm for solving problem (1) is the forward–backward splitting algorithm put forward by Passty [2] and by Lions and Mercier [3]. In 2000, Tseng [4] proposed a modified forward–backward splitting algorithm (Algorithm 1) for finding null points of maximal monotone mappings. This algorithm is weakly convergent under some conditions.
Algorithm 1: Modified forward–backward splitting algorithm.
$$y_n = ( I + \gamma_n G )^{-1} ( x_n - \gamma_n F x_n ) , \qquad x_{n + 1} = y_n - \gamma_n ( F y_n - F x_n ) .$$
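For intuition, here is a minimal numerical sketch (ours, not code from the paper) of Algorithm 1 in $\mathbb{R}^d$ for the special case where $G$ is the normal cone of a closed convex set, so that the resolvent $( I + \gamma_n G )^{-1}$ reduces to a metric projection; the mapping, the constraint set, and the fixed step size below are assumptions made for the example:

```python
import numpy as np

def tseng(F, resolvent, x0, gamma, iters=2000):
    """Tseng's modified forward-backward iteration (Algorithm 1) with a
    fixed step size gamma, assumed smaller than 1/L for the Lipschitz
    constant L of the monotone mapping F."""
    x = x0
    for _ in range(iters):
        y = resolvent(x - gamma * F(x))      # forward-backward step
        x = y - gamma * (F(y) - F(x))        # Tseng's correction step
    return x

# Example: F affine and monotone (A + A^T is positive semidefinite),
# G the normal cone of the box [0, 1]^2, whose resolvent is np.clip.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b
resolvent = lambda x: np.clip(x, 0.0, 1.0)

print(tseng(F, resolvent, x0=np.zeros(2), gamma=0.1))  # -> approx. [0., 0.5]
```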
In 2015, the inertial forward–backward algorithm (Algorithm 2) was proposed by Lorenz and Pock [5]. We note that this algorithm is also only weakly convergent.
Algorithm 2: Inertial forward–backward algorithm.
$$y_n = x_n + \alpha_n ( x_n - x_{n - 1} ) , \qquad x_{n + 1} = ( I + \gamma_n G )^{-1} ( y_n - \gamma_n F y_n ) .$$
In 2020, Tan et al. [6] introduced the inertial hybrid projection algorithm (Algorithm 3) and inertial shrinking projection method (Algorithm 4) by combining the two algorithms (Algorithms 1 and 2) with two classes of hybrid projection methods to solve the variational inclusion problem in Hilbert spaces, as follows:
Algorithm 3: Inertial hybrid projection algorithm.
$$\begin{aligned}
w_n &= x_n + \alpha_n ( x_n - x_{n - 1} ) , \\
y_n &= ( I + \gamma_n G )^{-1} ( I - \gamma_n F ) w_n , \\
z_n &= y_n - \gamma_n ( F y_n - F w_n ) , \\
C_n &= \Big\{ u \in H : \| z_n - u \| ^2 \leq \| w_n - u \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| w_n - y_n \| ^2 \Big\} , \\
Q_n &= \{ u \in H : \langle x_n - u , x_n - x_0 \rangle \leq 0 \} , \\
x_{n + 1} &= P_{C_n \cap Q_n} x_0 .
\end{aligned}$$
Algorithm 4: Inertial shrinking projection algorithm.
$$\begin{aligned}
w_n &= x_n + \alpha_n ( x_n - x_{n - 1} ) , \\
y_n &= ( I + \gamma_n G )^{-1} ( I - \gamma_n F ) w_n , \\
z_n &= y_n - \gamma_n ( F y_n - F w_n ) , \\
C_{n + 1} &= \Big\{ u \in C_n : \| z_n - u \| ^2 \leq \| w_n - u \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| w_n - y_n \| ^2 \Big\} , \\
x_{n + 1} &= P_{C_{n + 1}} x_0 .
\end{aligned}$$
They proved that these two algorithms converge strongly under certain conditions.

1.2. Fixed Point Problem

Assume that $D$ is a nonempty closed convex subset of $H$ and that $T : D \to D$ is a mapping. Let us recall that the fixed point problem is to find a point $\bar{x} \in D$ such that $T \bar{x} = \bar{x}$. We denote the set of fixed points of $T$ by $\operatorname{Fix} ( T )$.
In the field of fixed point problems, many fruitful achievements were introduced by scholars [10,11,12,13,14,15,16,17,18,19,20,21,22]. One of the classic algorithms is the Krasnosel’skiǐ–Mann algorithm [10,11], which is defined as follows:
$$x_{n + 1} = ( 1 - \lambda_n ) x_n + \lambda_n T x_n .$$
Under certain conditions, the sequence $\{ x_n \}$ converges weakly to a fixed point of $T$. In 2019, Dong et al. [20] presented a multi-step inertial Krasnosel’skiǐ–Mann algorithm, which is defined as Algorithm 5.
Algorithm 5: Multi-step inertial Krasnosel’skiǐ–Mann algorithm.
$$\begin{aligned}
y_n &= x_n + \sum_{k \in S_n} a_{k , n} ( x_{n - k} - x_{n - k - 1} ) , \\
z_n &= x_n + \sum_{k \in S_n} b_{k , n} ( x_{n - k} - x_{n - k - 1} ) , \\
x_{n + 1} &= ( 1 - \lambda_n ) y_n + \lambda_n T z_n .
\end{aligned}$$
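To make the bookkeeping concrete, here is a small sketch (ours, under assumed choices: $S_n = \{ 0 , 1 \}$, constant coefficients, and a toy firmly nonexpansive affine map):

```python
import numpy as np

def multi_step_inertial_km(T, x0, x1, a, b, lam=0.5, iters=500):
    """Multi-step inertial Krasnosel'skii-Mann iteration (Algorithm 5)
    with S_n = {0, 1} and constant coefficients a = (a0, a1), b = (b0, b1)."""
    hist = [x0, x1]  # hist[-1] = x_n, hist[-2] = x_{n-1}, ...
    for _ in range(iters):
        x = hist[-1]
        d0 = hist[-1] - hist[-2]                           # x_n - x_{n-1}
        d1 = hist[-2] - hist[-3] if len(hist) >= 3 else 0 * d0
        y = x + a[0] * d0 + a[1] * d1                      # inertial extrapolations
        z = x + b[0] * d0 + b[1] * d1
        hist.append((1 - lam) * y + lam * T(z))
    return hist[-1]

# Toy nonexpansive map: T(x) = 0.5 x + c has the unique fixed point 2c.
c = np.array([1.0, -0.5])
T = lambda x: 0.5 * x + c
print(multi_step_inertial_km(T, np.zeros(2), np.ones(2),
                             a=(0.1, 0.05), b=(0.1, 0.05)))  # -> approx. [2., -1.]
```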
Under suitable conditions, the sequence $\{ x_n \}$ converges weakly to a point in $\operatorname{Fix} ( T )$. In addition, Yao et al. [21] proposed a projected fixed point algorithm in real Hilbert spaces in 2017, which is defined as Algorithm 6. The sequence $\{ x_n \}$ converges strongly to the unique fixed point of $P_{\operatorname{Fix} ( T )} f$ under some conditions.
Algorithm 6: Projected fixed point algorithm.
$$\begin{aligned}
y_n &= ( 1 - \lambda_n ) x_n + \lambda_n T x_n , \\
C_{n + 1} &= \{ u \in C_n : \| ( 1 - \alpha_n ) x_n + \alpha_n T y_n - u \| \leq \| x_n - u \| \} , \\
x_{n + 1} &= P_{C_{n + 1}} f ( x_n ) .
\end{aligned}$$
Motivated by the results of [6,20,21], we construct two new algorithms to solve variational inclusion problems and obtain two strong convergence theorems. By using our results, we can solve convex minimization problems in Hilbert spaces as applications.

2. Preliminaries

Now, we present some necessary definitions and lemmas in the following for our convergence analysis.
Definition 1
([23,24,25,26,27]). Let $S : H \to H$ be a nonlinear mapping.
(i) $S$ is nonexpansive if
$$\| S x - S y \| \leq \| x - y \| , \quad \forall x , y \in H .$$
(ii) $S$ is firmly nonexpansive if
$$\langle S x - S y , x - y \rangle \geq \| S x - S y \| ^2 , \quad \forall x , y \in H .$$
It is obvious that a firmly nonexpansive mapping is nonexpansive.
(iii) $S$ is contractive if
$$\| S x - S y \| \leq \rho \| x - y \| , \quad \forall x , y \in H ,$$
where $\rho \in [ 0 , 1 )$ is a real number.
(iv) $S$ is Meir–Keeler contractive if, for any $\epsilon > 0$, there exists $\delta > 0$ such that
$$\| x - y \| < \epsilon + \delta \quad \text{implies} \quad \| S x - S y \| < \epsilon , \quad \forall x , y \in H .$$
It is obvious that a contractive mapping is Meir–Keeler contractive.
(v) $S$ is $L$-Lipschitz continuous ($L > 0$) if
$$\| S x - S y \| \leq L \| x - y \| , \quad \forall x , y \in H .$$
(vi) $S$ is monotone if
$$\langle S x - S y , x - y \rangle \geq 0 , \quad \forall x , y \in H .$$
Lemma 1
([23,28,29,30]). A Meir–Keeler contractive mapping has a unique fixed point on a complete metric space.
Lemma 2
([31]). Let $D$ be a convex subset of a Banach space $E$ and $S$ be a Meir–Keeler contractive mapping on $D$. Then, for each $\epsilon > 0$, there exists $\rho \in ( 0 , 1 )$ such that
$$\| x - y \| \geq \epsilon \quad \text{implies} \quad \| S x - S y \| \leq \rho \| x - y \| , \quad \forall x , y \in D .$$
Recall the metric projection operator $P_D$, defined as follows:
$$P_D x = \arg\min_{y \in D} \| x - y \| , \quad x \in H .$$
Lemma 3
([32,33]). Given $x \in H$ and $q \in D$, we have
(i) $q = P_D x$ if and only if
$$\langle x - q , q - y \rangle \geq 0 , \quad \forall y \in D ;$$
(ii) $P_D$ is firmly nonexpansive, i.e.,
$$\langle P_D u - P_D v , u - v \rangle \geq \| P_D u - P_D v \| ^2 , \quad \forall u , v \in H ;$$
(iii)
$$\| x - P_D x \| ^2 \leq \| x - y \| ^2 - \| y - P_D x \| ^2 , \quad \forall y \in D .$$
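The half-spaces appearing later (the sets $C_n$ and $Q_n$ of the algorithms) make these projections explicitly computable. As a minimal sketch (the function and variable names are ours), the projection onto a single half-space $\{ u : \langle a , u \rangle \leq b \}$ in $\mathbb{R}^d$ is:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Metric projection of x onto the half-space {u : <a, u> <= b}.

    If x already satisfies the constraint, it is its own projection;
    otherwise, move along the normal a by the normalized violation.
    """
    violation = a @ x - b
    if violation <= 0:
        return x
    return x - (violation / (a @ a)) * a

x = np.array([2.0, 3.0])
a = np.array([1.0, 1.0])
print(project_halfspace(x, a, b=1.0))  # -> [0. 1.]
```

Projecting onto the intersection of two half-spaces, as required by the hybrid algorithms below, can then be handled by a small quadratic program (or by an explicit two-half-space formula).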
Definition 2
([34]). Let $A : H \to 2^H$ be a set-valued mapping. $\operatorname{Dom} ( A ) = \{ x \in H : A x \neq \emptyset \}$ is the effective domain of $A$. The graph of $A$ is denoted by $\operatorname{Gra} ( A )$, i.e., $\operatorname{Gra} ( A ) = \{ ( x , u ) \in H \times H : u \in A x \}$. A set-valued mapping $A : H \to 2^H$ is called monotone if
$$\langle x - y , u - v \rangle \geq 0 , \quad \forall ( x , u ) , ( y , v ) \in \operatorname{Gra} ( A ) .$$
A monotone set-valued mapping $A$ is called maximal monotone if, for each $( x , u ) \in H \times H$, $( x , u ) \in \operatorname{Gra} ( A )$ if and only if
$$\langle x - y , u - v \rangle \geq 0 , \quad \forall ( y , v ) \in \operatorname{Gra} ( A ) .$$
For a maximal monotone set-valued mapping $A : H \to 2^H$ and $r > 0$, we can define a mapping as
$$J_r = ( I + r A )^{-1} .$$
It is worth noticing that $J_r$ is single-valued and firmly nonexpansive. The mapping $J_r$ is called the resolvent of $A$ for $r$.
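As a concrete illustration (our example, not from the paper), take $H = \mathbb{R}$ and $A = \partial | \cdot |$, the subdifferential of the absolute value function, which is maximal monotone. Solving $x \in y + r \, \partial | y |$ for $y$ gives the soft-thresholding formula
$$J_r x = ( I + r \, \partial | \cdot | )^{-1} x = \begin{cases} x - r , & x > r , \\ 0 , & | x | \leq r , \\ x + r , & x < - r , \end{cases}$$
which is easily checked to be single-valued and firmly nonexpansive, as stated above.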
Lemma 4
([35]). Let $A$ be a maximal monotone mapping on $H$ into $2^H$ and $B : H \to H$ be a mapping. Then, for any $r > 0$, $( A + B )^{-1} ( 0 ) = \operatorname{Fix} ( J_r ( I - r B ) )$, where $J_r$ is the resolvent of $A$ for $r$.
Lemma 5
([35,36]). Let $A : H \to 2^H$ be a maximal monotone mapping. For $r , s > 0$,
$$\| J_r x - J_s x \| \leq \frac{| r - s |}{r} \| x - J_r x \| , \quad \forall x \in H ,$$
where $J_r$ is the resolvent of $A$ for $r$ and $J_s$ is the resolvent of $A$ for $s$.
Let $\{ x_n \} \subset H$ be a sequence. We use $x_n \to x$ and $x_n \rightharpoonup x$ to indicate that $\{ x_n \}$ converges strongly and weakly to $x$, respectively.
Definition 3
([21,37]). Let $D_n \subset H$ be nonempty closed convex subsets, $n = 1 , 2 , \ldots$. We define s-$\operatorname{Li}_n D_n$ and w-$\operatorname{Ls}_n D_n$ as follows:
$$\text{s-}\operatorname{Li}_n D_n = \{ x \in H : \exists \, x_n \in D_n \ \text{such that} \ x_n \to x \} ,$$
$$\text{w-}\operatorname{Ls}_n D_n = \{ x \in H : \exists \, \{ D_{n_k} \} \subset \{ D_n \} \ \text{and} \ x_{n_k} \in D_{n_k} \ \text{such that} \ x_{n_k} \rightharpoonup x \} .$$
If there exists a set $D_0 \subset H$ such that $D_0 = \text{s-}\operatorname{Li}_n D_n = \text{w-}\operatorname{Ls}_n D_n$, we say that $\{ D_n \}$ converges to $D_0$ in the sense of Mosco, denoted by M-$\lim_{n \to \infty} D_n = D_0$. It is easy to verify that, if $\{ D_n \}$ is non-increasing with respect to inclusion, then $\{ D_n \}$ converges to $\bigcap_{n = 1}^{\infty} D_n$ in the sense of Mosco.
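For example (a simple illustration of the preceding remark), in $H = \mathbb{R}$ the sets $D_n = [ 0 , 1 + 1 / n ]$ are nonempty, closed, convex, and non-increasing with respect to inclusion, and M-$\lim_{n \to \infty} D_n = \bigcap_{n = 1}^{\infty} D_n = [ 0 , 1 ]$.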
Lemma 6
([21,38]). Let $D_n \subset H$ be nonempty closed convex subsets, $n = 1 , 2 , \ldots$. If $D_0 = $ M-$\lim_{n \to \infty} D_n$ exists and is nonempty, then, for every $x \in H$, $P_{D_n} x \to P_{D_0} x$.

3. Algorithms

In this section, we present two algorithms to find the solutions to variational inclusion problems in Hilbert spaces.
The following conditions are assumed to be true.
(A1) $F : H \to H$ is $L$-Lipschitz continuous ($L > 0$) and monotone.
(A2) $G : H \to 2^H$ is maximal monotone.
(A3) $f : H \to H$ is a Meir–Keeler contraction.
(A4) $\Omega = ( F + G )^{-1} ( 0 ) \neq \emptyset$.
We need the following lemma.
Lemma 7
([6]). The sequence $\{ \gamma_n \}$ generated by the algorithm is non-increasing and
$$\lim_{n \to \infty} \gamma_n = \gamma \geq \min \Big\{ \gamma_1 , \frac{\mu}{L} \Big\} .$$

4. Main Results

In this section, we analyze the strong convergence of Algorithms 7 and 8.
Algorithm 7: Multi-step inertial hybrid Tseng’s algorithm.
  • Initialization: Choose $x_0 , x_1 \in H$, $\gamma_1 > 0$, and $\mu \in ( 0 , 1 )$ arbitrarily. For each $i = 1 , 2 , \ldots , s$ (where $s$ is a chosen positive integer), choose a bounded sequence $\{ \alpha_{i , n} \} \subset \mathbb{R}$. Let $\{ \varepsilon_n \}$ be a nonnegative number sequence with $\lim_{n \to \infty} \varepsilon_n = 0$.
  • Iterative step: Compute $x_{n + 1}$ via
$$\begin{aligned}
w_n &= x_n + \sum_{i = 1}^{\min \{ s , n \}} \alpha_{i , n} ( x_{n - i + 1} - x_{n - i} ) , \\
y_n &= J_{\gamma_n} ( I - \gamma_n F ) w_n , \\
z_n &= y_n - \gamma_n ( F y_n - F w_n ) , \\
C_n &= \Big\{ u \in H : \| z_n - u \| ^2 \leq \| w_n - u \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| y_n - w_n \| ^2 + \varepsilon_n \Big\} , \\
Q_n &= \begin{cases} H , & \text{if } n = 1 , \\ \{ u \in Q_{n - 1} : \langle x_n - f ( x_{n - 1} ) , x_n - u \rangle \leq 0 \} , & \text{if } n \geq 2 , \end{cases} \\
x_{n + 1} &= P_{C_n \cap Q_n} f ( x_n ) ,
\end{aligned}$$
    where
$$\gamma_{n + 1} = \begin{cases} \min \Big\{ \gamma_n , \dfrac{\mu \| w_n - y_n \|}{\| F w_n - F y_n \|} \Big\} , & \text{if } F w_n \neq F y_n , \\ \gamma_n , & \text{otherwise} . \end{cases}$$
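A minimal sketch of this adaptive step-size rule (the function and argument names are ours) makes clear that it never requires the Lipschitz constant $L$ of $F$ in advance:

```python
import numpy as np

def next_gamma(gamma_n, mu, w_n, y_n, Fw_n, Fy_n):
    """Step-size update of Algorithms 7 and 8: keep gamma non-increasing
    and, whenever F(w_n) != F(y_n), cap it at
    mu * ||w_n - y_n|| / ||F(w_n) - F(y_n)||."""
    denom = np.linalg.norm(Fw_n - Fy_n)
    if denom > 0.0:
        return min(gamma_n, mu * np.linalg.norm(w_n - y_n) / denom)
    return gamma_n
```

By Lemma 7, the sequence produced by this rule is non-increasing and converges to some $\gamma \geq \min \{ \gamma_1 , \mu / L \}$.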
Algorithm 8: Multi-step inertial shrinking Tseng’s algorithm.
  • Initialization: Choose $x_0 , x_1 \in H$, $\gamma_1 > 0$, and $\mu \in ( 0 , 1 )$ arbitrarily. Let $C_1 = H$. For each $i = 1 , 2 , \ldots , s$ (where $s$ is a chosen positive integer), choose a bounded sequence $\{ \alpha_{i , n} \} \subset \mathbb{R}$. Let $\{ \varepsilon_n \}$ be a nonnegative number sequence with $\lim_{n \to \infty} \varepsilon_n = 0$.
  • Iterative step: Compute $x_{n + 1}$ via
$$\begin{aligned}
w_n &= x_n + \sum_{i = 1}^{\min \{ s , n \}} \alpha_{i , n} ( x_{n - i + 1} - x_{n - i} ) , \\
y_n &= J_{\gamma_n} ( I - \gamma_n F ) w_n , \\
z_n &= y_n - \gamma_n ( F y_n - F w_n ) , \\
C_{n + 1} &= \Big\{ u \in C_n : \| z_n - u \| ^2 \leq \| w_n - u \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| y_n - w_n \| ^2 + \varepsilon_n \Big\} , \\
x_{n + 1} &= P_{C_{n + 1}} f ( x_n ) ,
\end{aligned}$$
    where the computation of $\gamma_{n + 1}$ is the same as in Algorithm 7.
Theorem 1.
Assume that the conditions (A1)–(A4) are satisfied. Then, the sequence $\{ x_n \}$ generated by Algorithm 7 converges strongly to $x^* \in \Omega$, where $x^*$ is the unique fixed point of $P_{\Omega} f$.
Proof. 
The proof is divided into four steps.
Step 1. We prove that, for each $n \in \mathbb{N}$, $C_n$ and $Q_n$ are closed and convex.
Obviously, for each $n \in \mathbb{N}$, $C_n$ is a half-space, so $C_n$ is closed and convex.
For $n = 1$, $Q_1 = H$ is closed and convex. Suppose that $x_k$ is given and that $Q_k$ is closed and convex for some $k \in \mathbb{N}$. It is clear that $\{ u \in H : \langle x_{k + 1} - f ( x_k ) , x_{k + 1} - u \rangle \leq 0 \}$ is a half-space, so it is closed and convex. Hence, $Q_{k + 1}$ is closed and convex. By induction, $Q_n$ is closed and convex for each $n \in \mathbb{N}$.
Step 2. We prove that $\Omega \subset C_n \cap Q_n$ for each $n \in \mathbb{N}$.
Let $p \in \Omega$. We see that
$$\begin{aligned}
\| z_n - p \| ^2 &= \| ( y_n - p ) - \gamma_n ( F y_n - F w_n ) \| ^2 \\
&= \| y_n - p \| ^2 + \gamma_n ^2 \| F y_n - F w_n \| ^2 - 2 \gamma_n \langle y_n - p , F y_n - F w_n \rangle \\
&= \| w_n - p \| ^2 + \| y_n - w_n \| ^2 + 2 \langle w_n - p , y_n - w_n \rangle + \gamma_n ^2 \| F y_n - F w_n \| ^2 - 2 \gamma_n \langle y_n - p , F y_n - F w_n \rangle \\
&\leq \| w_n - p \| ^2 - \| y_n - w_n \| ^2 + \gamma_n ^2 \| F y_n - F w_n \| ^2 - 2 \langle y_n - p , w_n - y_n + \gamma_n ( F p - F w_n ) \rangle \\
&\leq \| w_n - p \| ^2 - \| y_n - w_n \| ^2 + \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \| y_n - w_n \| ^2 - 2 \langle y_n - p , w_n - y_n + \gamma_n ( F p - F w_n ) \rangle \\
&= \| w_n - p \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| y_n - w_n \| ^2 - 2 \langle y_n - p , w_n - y_n + \gamma_n ( F p - F w_n ) \rangle .
\end{aligned} \tag{2}$$
Here, the first inequality uses the monotonicity of $F$, and the second uses the definition of $\gamma_{n + 1}$.
Since $y_n = J_{\gamma_n} ( I - \gamma_n F ) w_n = ( I + \gamma_n G )^{-1} ( I - \gamma_n F ) w_n$, we have
$$( I - \gamma_n F ) w_n \in ( I + \gamma_n G ) y_n .$$
Hence,
$$\frac{1}{\gamma_n} ( w_n - \gamma_n F w_n - y_n ) \in G y_n . \tag{3}$$
On the other hand, since $p \in \Omega = ( F + G )^{-1} ( 0 )$, we have
$$0 \in ( F + G ) p .$$
Hence,
$$- F p \in G p . \tag{4}$$
From the maximal monotonicity of $G$, we deduce
$$\Big\langle \frac{1}{\gamma_n} ( w_n - \gamma_n F w_n - y_n ) + F p , \ y_n - p \Big\rangle \geq 0 ,$$
which means
$$\langle w_n - y_n + \gamma_n ( F p - F w_n ) , \ y_n - p \rangle \geq 0 . \tag{5}$$
Substituting (5) into (2), we conclude
$$\| z_n - p \| ^2 \leq \| w_n - p \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| y_n - w_n \| ^2 \leq \| w_n - p \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| y_n - w_n \| ^2 + \varepsilon_n . \tag{6}$$
This means that $p \in C_n$. Hence, $\Omega \subset C_n$ for each $n \in \mathbb{N}$.
For $n = 1$, $Q_1 = H$, which yields $\Omega \subset C_1 \cap Q_1$.
Assume that $x_k$ is given and that $\Omega \subset C_k \cap Q_k$ for some $k \in \mathbb{N}$. From Lemma 3, we obtain
$$\langle y - x_{k + 1} , f ( x_k ) - x_{k + 1} \rangle \leq 0 , \quad \forall y \in C_k \cap Q_k . \tag{7}$$
Since $\Omega \subset C_k \cap Q_k$, we have
$$\langle y - x_{k + 1} , f ( x_k ) - x_{k + 1} \rangle \leq 0 , \quad \forall y \in \Omega . \tag{8}$$
From the expression of $Q_n$, we obtain $\Omega \subset Q_{k + 1}$. Hence, $\Omega \subset C_{k + 1} \cap Q_{k + 1}$.
Therefore, by induction, $\Omega \subset C_n \cap Q_n$ for all $n \in \mathbb{N}$.
Step 3. We prove that $\{ x_n \}$ converges strongly to $z$, where $z$ is the unique fixed point of $P_{\bigcap_{n = 1}^{\infty} Q_n} f$.
From the expression of $Q_n$, we know that $\Omega \subset \bigcap_{n = 1}^{\infty} Q_n = $ M-$\lim_{n \to \infty} Q_n$. Set $v_n = P_{Q_n} f ( z )$. It follows from Lemma 6 that
$$v_n \to P_{\bigcap_{n = 1}^{\infty} Q_n} f ( z ) = z .$$
Suppose the contrary, i.e., $\limsup_{n \to \infty} \| x_n - z \| > 0$. One can choose a real number $\epsilon > 0$ such that $\limsup_{n \to \infty} \| x_n - z \| > \epsilon$, and then a real number $\delta_1 > 0$ such that $\limsup_{n \to \infty} \| x_n - z \| > \epsilon + \delta_1$. Since $f$ is a Meir–Keeler contraction, there exists $\delta_2 > 0$ such that $\| x - y \| < \epsilon + \delta_2$ implies $\| f ( x ) - f ( y ) \| < \epsilon$ for all $x , y \in H$. Taking $\delta = \min \{ \delta_1 , \delta_2 \}$, we have
$$\limsup_{n \to \infty} \| x_n - z \| > \epsilon + \delta \tag{9}$$
and
$$\| x - y \| < \epsilon + \delta \quad \text{implies} \quad \| f ( x ) - f ( y ) \| < \epsilon , \quad \forall x , y \in H . \tag{10}$$
Since $v_n \to z$, there exists $n_0 \in \mathbb{N}$ such that
$$\| v_n - z \| < \delta , \quad \forall n \geq n_0 . \tag{11}$$
We now consider the following two cases.
Case 1. There exists $n_1 \geq n_0$ such that $\| x_{n_1} - z \| < \epsilon + \delta$.
From the expression of $Q_n$ and Lemma 3, we see that $x_{n + 1} = P_{Q_{n + 1}} f ( x_n )$. Thus, from (10) and (11), we conclude
$$\| x_{n_1 + 1} - z \| \leq \| x_{n_1 + 1} - v_{n_1 + 1} \| + \| v_{n_1 + 1} - z \| < \| P_{Q_{n_1 + 1}} f ( x_{n_1} ) - P_{Q_{n_1 + 1}} f ( z ) \| + \delta \leq \| f ( x_{n_1} ) - f ( z ) \| + \delta < \epsilon + \delta .$$
By induction, we obtain
$$\| x_{n_1 + m} - z \| < \epsilon + \delta , \quad \forall m \in \mathbb{N} ,$$
which implies that
$$\limsup_{n \to \infty} \| x_n - z \| \leq \epsilon + \delta .$$
This contradicts (9).
Case 2. $\| x_n - z \| \geq \epsilon + \delta$ for all $n \geq n_0$.
From Lemma 2, there exists $\rho \in ( 0 , 1 )$ such that
$$\| f ( x_n ) - f ( z ) \| \leq \rho \| x_n - z \| .$$
Thus, for $n \geq n_0$, we obtain
$$\begin{aligned}
\| x_{n + 1} - z \| &\leq \| x_{n + 1} - v_{n + 1} \| + \| v_{n + 1} - z \| \\
&< \| P_{Q_{n + 1}} f ( x_n ) - P_{Q_{n + 1}} f ( z ) \| + \delta \\
&\leq \| f ( x_n ) - f ( z ) \| + \delta \\
&\leq \rho \| x_n - z \| + \delta \\
&< \rho ^2 \| x_{n - 1} - z \| + ( 1 + \rho ) \delta \\
&< \cdots < \rho ^{n - n_0 + 1} \| x_{n_0} - z \| + ( 1 + \rho + \cdots + \rho ^{n - n_0} ) \delta \\
&< \| x_{n_0} - z \| + \frac{\delta}{1 - \rho} ,
\end{aligned}$$
which means that $\limsup_{n \to \infty} \| x_n - z \|$ is a finite number. Therefore,
$$\begin{aligned}
\limsup_{n \to \infty} \| x_n - z \| &= \limsup_{n \to \infty} \| x_{n + 1} - z \| \leq \limsup_{n \to \infty} \| x_{n + 1} - v_{n + 1} \| + \lim_{n \to \infty} \| v_{n + 1} - z \| \\
&= \limsup_{n \to \infty} \| P_{Q_{n + 1}} f ( x_n ) - P_{Q_{n + 1}} f ( z ) \| \leq \limsup_{n \to \infty} \| f ( x_n ) - f ( z ) \| \\
&\leq \rho \limsup_{n \to \infty} \| x_n - z \| < \limsup_{n \to \infty} \| x_n - z \| .
\end{aligned}$$
This is a contradiction.
Hence, we obtain that $\{ x_n \}$ converges strongly to $z$.
Step 4. We prove that $\{ x_n \}$ converges strongly to $x^*$.
From Step 3, it is sufficient to prove that $z = x^*$. Since $x_n \to z$, we have
$$\| x_{n + 1} - x_n \| \to 0 , \quad n \to \infty . \tag{12}$$
From the computation of $w_n$, we deduce
$$\| x_n - w_n \| = \Big\| x_n - x_n - \sum_{i = 1}^{s} \alpha_{i , n} ( x_{n - i + 1} - x_{n - i} ) \Big\| = \Big\| \sum_{i = 1}^{s} \alpha_{i , n} ( x_{n - i + 1} - x_{n - i} ) \Big\| \leq \sum_{i = 1}^{s} | \alpha_{i , n} | \, \| x_{n - i + 1} - x_{n - i} \| \tag{13}$$
for $n \geq s$. From (12), (13), and the boundedness of $\{ \alpha_{i , n} \}$, we have
$$\| x_n - w_n \| \to 0 , \quad n \to \infty . \tag{14}$$
Hence, $w_n \to z$. Therefore, $\{ w_n \}$ is bounded, and so are $\{ F w_n \}$, $\{ ( I - \gamma F ) w_n \}$, and $\{ J_{\gamma} ( I - \gamma F ) w_n \}$, where $\gamma$ appears in Lemma 7. Combining (12) and (14), we conclude
$$\| x_{n + 1} - w_n \| \to 0 , \quad n \to \infty . \tag{15}$$
Since $x_{n + 1} = P_{C_n \cap Q_n} f ( x_n )$, we know that $x_{n + 1} \in C_n$. Hence,
$$\| z_n - x_{n + 1} \| ^2 \leq \| w_n - x_{n + 1} \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| y_n - w_n \| ^2 + \varepsilon_n . \tag{16}$$
By Lemma 7, we know that $\gamma_n \to \gamma > 0$. Therefore, $\gamma_n / \gamma_{n + 1} \to 1$. Combining this with $\mu \in ( 0 , 1 )$, we obtain
$$\lim_{n \to \infty} \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) = 1 - \mu ^2 \in ( 0 , 1 ) . \tag{17}$$
Combining (15)–(17) and the conditions on $\{ \varepsilon_n \}$, we deduce
$$\| x_{n + 1} - z_n \| \to 0 , \quad n \to \infty ,$$
and hence
$$\| y_n - w_n \| \to 0 , \quad n \to \infty ,$$
i.e.,
$$\| J_{\gamma_n} ( I - \gamma_n F ) w_n - w_n \| \to 0 , \quad n \to \infty . \tag{18}$$
Since $J_{\gamma_n}$ is nonexpansive, we conclude
$$\| J_{\gamma_n} ( I - \gamma_n F ) w_n - J_{\gamma_n} ( I - \gamma F ) w_n \| \leq \| ( I - \gamma_n F ) w_n - ( I - \gamma F ) w_n \| = | \gamma_n - \gamma | \, \| F w_n \| .$$
Hence,
$$\| J_{\gamma_n} ( I - \gamma_n F ) w_n - J_{\gamma_n} ( I - \gamma F ) w_n \| \to 0 , \quad n \to \infty . \tag{19}$$
From Lemma 5, we have
$$\| J_{\gamma_n} ( I - \gamma F ) w_n - J_{\gamma} ( I - \gamma F ) w_n \| \leq \frac{| \gamma_n - \gamma |}{\gamma} \| ( I - \gamma F ) w_n - J_{\gamma} ( I - \gamma F ) w_n \| .$$
Hence,
$$\| J_{\gamma_n} ( I - \gamma F ) w_n - J_{\gamma} ( I - \gamma F ) w_n \| \to 0 , \quad n \to \infty . \tag{20}$$
Combining (18)–(20), we obtain
$$\| w_n - J_{\gamma} ( I - \gamma F ) w_n \| \to 0 , \quad n \to \infty . \tag{21}$$
By $w_n \to z$ and the continuity of $J_{\gamma} ( I - \gamma F )$, we conclude that $z = J_{\gamma} ( I - \gamma F ) z$. It follows from Lemma 4 that $z \in \Omega$. Since $\Omega \subset Q_{n + 1}$, we see that
$$\langle x_{n + 1} - f ( x_n ) , x_{n + 1} - y \rangle \leq 0 , \quad \forall y \in \Omega . \tag{22}$$
Taking the limit in (22), we obtain
$$\langle z - f ( z ) , z - y \rangle \leq 0 , \quad \forall y \in \Omega . \tag{23}$$
It follows from Lemma 3 that $z = P_{\Omega} f ( z )$. Since $P_{\Omega} f$ has the unique fixed point $x^*$, we obtain $z = x^*$.    □
Theorem 2.
Assume that the conditions (A1)–(A4) are satisfied. Let the sequence $\{ x_n \}$ be generated by Algorithm 8. Then, $\{ x_n \}$ converges strongly to $x^* \in \Omega$, where $x^*$ is the unique fixed point of $P_{\Omega} f$.
Proof. 
It is obvious by induction that $C_n$ is a closed convex subset of $H$ for each $n \in \mathbb{N}$. Using the same proof as in (2)–(6), we obtain that $\Omega \subset C_n$ for each $n \in \mathbb{N}$. Denote the unique fixed point of $P_{\bigcap_{n = 1}^{\infty} C_n} f$ by $z$. From the expression of $C_n$, we know that $\Omega \subset \bigcap_{n = 1}^{\infty} C_n = $ M-$\lim_{n \to \infty} C_n$. Set $v_n = P_{C_n} f ( z )$. It follows from Lemma 6 that
$$v_n \to P_{\bigcap_{n = 1}^{\infty} C_n} f ( z ) = z .$$
Using a proof similar to that of Theorem 1, we obtain $x_n \to x^*$.    □
Remark 1.
If $s = 1$, $\varepsilon_n \equiv 0$, $x_0 = x_1$, and $f \equiv x_1$ (the constant mapping with value $x_1$), then Algorithm 8 reduces to Algorithm 4.

5. Applications

In this section, we introduce some applications for solving nonsmooth composite convex minimization problems in Hilbert spaces.
Denote by $\Gamma_0 ( H )$ the set
$$\Gamma_0 ( H ) = \{ f : H \to ( - \infty , + \infty ] : f \ \text{is proper, convex, and lower semicontinuous} \} .$$
Consider the following problem:
$$\min_{x \in H} \ \big( g ( x ) + h ( x ) \big) , \tag{24}$$
where $g , h \in \Gamma_0 ( H )$ satisfy the following conditions:
  • $g$ is Gâteaux differentiable, and its gradient $\nabla g$ is Lipschitz continuous; $h$ may not be Gâteaux differentiable.
  • $\Psi = \arg\min_{x \in H} ( g ( x ) + h ( x ) ) \neq \emptyset$.
We need the following definitions and lemmas.
Definition 4
([34]). Let $h \in \Gamma_0 ( H )$. The proximal operator of $h$ of order $\lambda > 0$ is defined by
$$\operatorname{prox}_{\lambda h} ( x ) := \arg\min_{y \in H} \Big\{ \frac{1}{2 \lambda} \| y - x \| ^2 + h ( y ) \Big\} , \quad x \in H .$$
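As a standard example (our illustration, with $H = \mathbb{R}^d$), the proximal operator of $h = \| \cdot \|_1$ of order $\lambda$ is the soft-thresholding operator, which can be checked numerically against the definition:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of h = ||.||_1 of order lam, i.e.,
    argmin_y { ||y - x||^2 / (2 * lam) + ||y||_1 }, given by
    componentwise soft thresholding at level lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

print(prox_l1(np.array([1.5, -0.3, 0.8]), lam=0.5))  # -> [ 1.  -0.   0.3]
```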
Lemma 8
([34]). Let $h \in \Gamma_0 ( H )$. Then, $\partial h$ is maximal monotone and $( I + \lambda \partial h )^{-1} = \operatorname{prox}_{\lambda h}$.
Lemma 9
([34]). Let $g , h \in \Gamma_0 ( H )$. Then, $\hat{x} \in \Psi$ if and only if $0 \in \partial ( g + h ) ( \hat{x} )$.
Next, we apply our main results to solve problem (24).
Theorem 3.
Assume that the condition (A3) is satisfied. Let the sequence $\{ x_n \}$ be generated by Algorithm 9. Then, $\{ x_n \}$ converges strongly to $x^* \in \Psi$, where $x^*$ is the unique fixed point of $P_{\Psi} f$.
Proof. 
$\nabla g$ is monotone because $g$ is convex. Let $F = \nabla g$ and $G = \partial h$ in Theorem 1. We can obtain the desired result using Lemmas 8 and 9.    □
Theorem 4.
Assume that the condition (A3) is satisfied. Let the sequence $\{ x_n \}$ be generated by Algorithm 10. Then, $\{ x_n \}$ converges strongly to $x^* \in \Psi$, where $x^*$ is the unique fixed point of $P_{\Psi} f$.
Proof. 
$\nabla g$ is monotone because $g$ is convex. Let $F = \nabla g$ and $G = \partial h$ in Theorem 2. We can obtain the desired result by Lemmas 8 and 9.    □
Algorithm 9:
  • Initialization: Choose $x_0 , x_1 \in H$, $\gamma_1 > 0$, and $\mu \in ( 0 , 1 )$ arbitrarily. For each $i = 1 , 2 , \ldots , s$ (where $s$ is a chosen positive integer), choose a bounded sequence $\{ \alpha_{i , n} \} \subset \mathbb{R}$. Let $\{ \varepsilon_n \}$ be a nonnegative number sequence with $\lim_{n \to \infty} \varepsilon_n = 0$.
  • Iterative step: Compute $x_{n + 1}$ via
$$\begin{aligned}
w_n &= x_n + \sum_{i = 1}^{\min \{ s , n \}} \alpha_{i , n} ( x_{n - i + 1} - x_{n - i} ) , \\
y_n &= \operatorname{prox}_{\gamma_n h} ( I - \gamma_n \nabla g ) w_n , \\
z_n &= y_n - \gamma_n ( \nabla g ( y_n ) - \nabla g ( w_n ) ) , \\
C_n &= \Big\{ u \in H : \| z_n - u \| ^2 \leq \| w_n - u \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| y_n - w_n \| ^2 + \varepsilon_n \Big\} , \\
Q_n &= \begin{cases} H , & \text{if } n = 1 , \\ \{ u \in Q_{n - 1} : \langle x_n - f ( x_{n - 1} ) , x_n - u \rangle \leq 0 \} , & \text{if } n \geq 2 , \end{cases} \\
x_{n + 1} &= P_{C_n \cap Q_n} f ( x_n ) ,
\end{aligned}$$
    where
$$\gamma_{n + 1} = \begin{cases} \min \Big\{ \gamma_n , \dfrac{\mu \| w_n - y_n \|}{\| \nabla g ( w_n ) - \nabla g ( y_n ) \|} \Big\} , & \text{if } \nabla g ( w_n ) \neq \nabla g ( y_n ) , \\ \gamma_n , & \text{otherwise} . \end{cases}$$
Algorithm 10:
  • Initialization: Choose $x_0 , x_1 \in H$, $\gamma_1 > 0$, and $\mu \in ( 0 , 1 )$ arbitrarily. Let $C_1 = H$. For each $i = 1 , 2 , \ldots , s$ (where $s$ is a chosen positive integer), choose a bounded sequence $\{ \alpha_{i , n} \} \subset \mathbb{R}$. Let $\{ \varepsilon_n \}$ be a nonnegative number sequence with $\lim_{n \to \infty} \varepsilon_n = 0$.
  • Iterative step: Compute $x_{n + 1}$ via
$$\begin{aligned}
w_n &= x_n + \sum_{i = 1}^{\min \{ s , n \}} \alpha_{i , n} ( x_{n - i + 1} - x_{n - i} ) , \\
y_n &= \operatorname{prox}_{\gamma_n h} ( I - \gamma_n \nabla g ) w_n , \\
z_n &= y_n - \gamma_n ( \nabla g ( y_n ) - \nabla g ( w_n ) ) , \\
C_{n + 1} &= \Big\{ u \in C_n : \| z_n - u \| ^2 \leq \| w_n - u \| ^2 - \Big( 1 - \frac{\mu ^2 \gamma_n ^2}{\gamma_{n + 1} ^2} \Big) \| y_n - w_n \| ^2 + \varepsilon_n \Big\} , \\
x_{n + 1} &= P_{C_{n + 1}} f ( x_n ) ,
\end{aligned}$$
    where the computation of $\gamma_{n + 1}$ is the same as in Algorithm 9.
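As a concrete instance (our construction, not an experiment from the paper), take $H = \mathbb{R}^d$, $g ( x ) = \frac{1}{2} \| A x - b \| ^2$ (so $\nabla g ( x ) = A^{\top} ( A x - b )$ is Lipschitz continuous) and $h = \lambda \| \cdot \|_1$. The forward–backward core shared by Algorithms 9 and 10 then reads as follows; the projection onto $C_n$ (and $Q_n$) and the $\gamma_n$ update are omitted, since they follow the sketches given earlier:

```python
import numpy as np

def fb_core_step(x_hist, alphas, gamma, grad_g, prox_h):
    """One w_n / y_n / z_n sweep of Algorithms 9 and 10.
    x_hist holds the iterates x_{n-s}, ..., x_n (most recent last)."""
    x_n = x_hist[-1]
    # multi-step inertial extrapolation:
    # w_n = x_n + sum_i alpha_i * (x_{n-i+1} - x_{n-i})
    w = x_n + sum(a * (x_hist[-i] - x_hist[-i - 1])
                  for i, a in enumerate(alphas, start=1))
    y = prox_h(w - gamma * grad_g(w), gamma)     # forward-backward step
    z = y - gamma * (grad_g(y) - grad_g(w))      # Tseng-type correction
    return w, y, z

# Assumed data: g(x) = 0.5 * ||Ax - b||^2 and h = lam * ||.||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
grad_g = lambda x: A.T @ (A @ x - b)
prox_h = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - lam * g, 0.0)

x_hist = [np.zeros(5), np.zeros(5), np.zeros(5)]  # history for s = 2
w, y, z = fb_core_step(x_hist, alphas=(0.1, 0.05), gamma=0.05,
                       grad_g=grad_g, prox_h=prox_h)
```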

6. Conclusions

As is well known, variational inclusion problems have long been a topic discussed by a large number of scholars. They not only play an increasingly important role in modern mathematics but are also widely used in many other fields, such as mechanics, optimization theory, and nonlinear programming. Tan et al. combined Tseng’s algorithm with the hybrid projection algorithm to obtain a new strongly convergent algorithm. Our work in this paper builds on the work of Tan et al. and combines it with the multi-step inertial method and the Krasnosel’skiǐ–Mann algorithm to solve variational inclusion problems in a real Hilbert space; new strong convergence theorems are then obtained. Using our results, we can solve related problems in a Hilbert space. Our results extend and improve many recent results of other authors [1,2,3,4,5,6,20,21]. For example, our Algorithm 8 extends and improves Algorithm 4 in [6] in the following ways:
(i)
One-step inertia is generalized to multi-step inertia.
(ii)
A relaxation term $\varepsilon_n$ is added to the definition of $C_{n + 1}$.
(iii)
The anchor value $x_0$ is replaced with $f ( x_n )$ in the last step of each iteration, where $f$ is a Meir–Keeler contraction. This greatly expands the application scope of the iterative algorithm.

Author Contributions

Conceptualization, M.Y. and B.J.; Data curation, B.J.; Formal analysis, Y.W. and M.Y.; Funding acquisition, M.Y.; Methodology, Y.W. and B.J.; Project administration, Y.W.; Resources, B.J.; Supervision, Y.W.; Writing—original draft, M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the National Natural Science Foundation of China (grant no. 11671365).

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
  2. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 1979, 72, 383–390.
  3. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  4. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  5. Lorenz, D.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
  6. Tan, B.; Zhou, Z.; Qin, X. Accelerated projection-based forward-backward splitting algorithms for monotone inclusion problems. J. Appl. Anal. Comput. 2020, 10, 2184–2197.
  7. Dadashi, V.; Postolache, M. Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2020, 9, 89–99.
  8. Bauschke, H.H.; Combettes, P.L.; Reich, S. The asymptotic behavior of the composition of two resolvents. Nonlinear Anal. 2005, 60, 283–301.
  9. Zhao, X.P.; Yao, J.C.; Yao, Y. A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull. Ser. A 2020, 82, 43–52.
  10. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  11. Krasnosel’skiǐ, M.A. Two remarks on the method of successive approximations. Usp. Mat. Nauk 1955, 10, 123–127.
  12. Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961.
  13. Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276.
  14. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  15. Nakajo, K.; Takahashi, W. Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279, 372–379.
  16. Takahashi, W.; Takeuchi, Y.; Kubota, R. Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2008, 341, 276–286.
  17. Marino, G.; Xu, H.K. A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318, 43–52.
  18. Tian, M. A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. 2010, 73, 689–694.
  19. Thakur, B.S.; Thakur, D.; Postolache, M. A new iterative scheme for numerical reckoning fixed points of Suzuki’s generalized nonexpansive mappings. Appl. Math. Comput. 2016, 275, 147–155.
  20. Dong, Q.L.; Huang, J.Z.; Li, X.H.; Cho, Y.J.; Rassias, T.M. MiKM: Multi-step inertial Krasnosel’skiǐ-Mann algorithm and its applications. J. Glob. Optim. 2019, 73, 801–824.
  21. Yao, Y.; Shahzad, N.; Liou, Y.C.; Zhu, L.J. A projected fixed point algorithm with Meir-Keeler contraction for pseudocontractive mappings. J. Nonlinear Sci. Appl. 2017, 10, 483–491.
  22. Yao, Y.; Postolache, M.; Yao, J.C. Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. UPB Sci. Bull. Ser. A 2020, 82, 3–12.
  23. Meir, A.; Keeler, E. A theorem on contraction mappings. J. Math. Anal. Appl. 1969, 28, 326–329.
  24. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
  25. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
  26. Yao, Y.; Liou, Y.C.; Yao, J.C. Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 2017, 10, 843–854.
  27. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–133.
  28. Reich, S. Fixed points of contractive functions. Boll. Un. Mat. Ital. 1972, 5, 26–42.
  29. Karapınar, E.; Samet, B.; Zhang, D. Meir-Keeler type contractions on JS-metric spaces and related fixed point theorems. J. Fixed Point Theory Appl. 2018, 20, 60.
  30. Li, C.Y.; Karapınar, E.; Chen, C.M. A discussion on random Meir-Keeler contractions. Mathematics 2020, 8, 245.
  31. Suzuki, T. Moudafi’s viscosity approximations with Meir-Keeler contractions. J. Math. Anal. Appl. 2007, 325, 342–352.
  32. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
  33. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302.
  34. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin, Germany, 2011.
  35. Lin, L.J.; Takahashi, W. A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 2012, 16, 429–453.
  36. Takahashi, W.; Xu, H.K.; Yao, J.C. Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 2015, 23, 205–221.
  37. Mosco, U. Convergence of convex sets and of solutions of variational inequalities. Adv. Math. 1969, 3, 510–585.
  38. Tsukada, M. Convergence of best approximations in a smooth Banach space. J. Approx. Theory 1984, 40, 301–309.