Article

A New Viscosity Implicit Approximation Method for Solving Variational Inequalities over the Common Fixed Points of Nonexpansive Mappings in Symmetric Hilbert Space

School of Mathematical Sciences, Mudanjiang Normal University, Mudanjiang 157000, China
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(5), 1098; https://doi.org/10.3390/sym15051098
Submission received: 23 April 2023 / Revised: 5 May 2023 / Accepted: 11 May 2023 / Published: 17 May 2023

Abstract

In this paper, based on the viscosity approximation method and the hybrid steepest-descent iterative method, a new implicit iterative algorithm is presented for finding the common fixed-point set of a finite family of nonexpansive mappings in a reflexive Hilbert space, which is called a symmetric space. We prove that the sequence generated by this new implicit rule converges strongly to the unique solution of a class of variational inequalities under appropriate conditions on the parameters. Moreover, we also study applications to the broader family of strictly pseudo-contractive mappings and to generalized equilibrium problems, which cover several variational inequality problems, optimization problems, and fixed-point problems. Finally, numerical results are provided to illustrate the stability and effectiveness of the algorithm and to compare it with some existing iterative algorithms.

1. Introduction

The problem of variational inequality originally appeared in the study of mathematical equations. Hartman and Stampacchia [1] proposed and established the initial theory of variational inequalities in 1964. Since then, scholars have carried out extensive research on variational inequalities covering a wide range of disciplines, including optimization, optimal control, mechanics, and finance (see, e.g., [2,3,4,5]). In the theory of variational inequalities, an important and interesting problem is the determination of approximate solutions of variational inequalities by designing feasible and effective iterative algorithms. In combination with general iterative methods, many scholars have constructed compound iterative schemes. Below, we summarize their main conclusions.
Let H be a real symmetric Hilbert space endowed with the inner product $\langle\cdot,\cdot\rangle$ and the induced norm $\|\cdot\|$, and let C be a nonempty closed and convex subset of H. Recall that a mapping $T: C \to C$ is nonexpansive if
$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C.$
We write $Fix(T) = \{x \in C : Tx = x\}$ for the set of all fixed points of T. Additionally, a mapping $f: C \to C$ is a contraction on H if there exists $\alpha \in (0, 1)$ such that
$\|f(x) - f(y)\| \le \alpha\|x - y\|, \quad \forall x, y \in C.$
In 2003, Xu [6] created the iterative scheme by means of the following recurrence relation:
$x_{n+1} = \alpha_n u + (I - \alpha_n A)Tx_n, \quad n \ge 0,$
where $\alpha_n \in (0, 1)$, $u \in C$, and A is a strongly positive bounded linear operator. He not only proved the strong convergence of $\{x_n\}$ to a fixed point of T, but also showed that the limit of the sequence $\{x_n\}$ is the unique solution of the following minimization problem
$\min_{x \in C} \frac{1}{2}\langle Ax, x\rangle - \langle x, u\rangle.$
For a nonexpansive mapping T, Moudafi [7] established the viscosity approximation method in the following way. The sequence $\{x_n\}$ is generated by
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)Tx_n, \quad n \ge 0,$
where $\{\alpha_n\}$ is a sequence in (0, 1) and f is a contraction on H. It was proven that the sequence $\{x_n\}$ constructed by Equation (4) converges strongly to the unique solution of the following variational inequality
$\langle (I - f)x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T).$
In 2006, combining the iterative methods of Equations (3) and (4), Marino and Xu [8] created the viscosity iterative method below:
$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \alpha_n A)Tx_n, \quad n \ge 0,$
where f is a contraction and A is a strongly positive bounded linear operator. Under appropriate conditions, they proved that the limit of the sequence $\{x_n\}$ constructed by Equation (6) is the unique solution of the following variational inequality
$\langle (A - \gamma f)x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$
which is also the optimal solution of the minimization problem
$\min_{x \in Fix(T)} \frac{1}{2}\langle Ax, x\rangle - h(x),$
where h is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$).
In 2001, Yamada [9] created the hybrid steepest-descent iterative method:
$x_{n+1} = Tx_n - \mu\lambda_n F(Tx_n), \quad n \ge 0,$
where F is a $\kappa$-Lipschitz continuous and $\eta$-strongly monotone operator and $0 < \mu < \frac{2\eta}{\kappa^2}$. It was shown that the limit of the sequence $\{x_n\}$ constructed by Equation (7) is the unique solution of the following variational inequality
$\langle F(x^*), x - x^* \rangle \ge 0, \quad \forall x \in Fix(T).$
In 2010, building on the work of previous scholars, Tian [10] proposed a generalized viscosity iterative algorithm:
$x_{n+1} = \alpha_n \gamma V(x_n) + (I - \mu\alpha_n F)Tx_n, \quad n \ge 0,$
where F is a Lipschitz continuous and strongly monotone operator and V is a Lipschitz continuous operator. It was proven that the limit of the sequence $\{x_n\}$ produced by Equation (8) is the unique solution of the following variational inequality
$\langle (\mu F - \gamma V)x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T).$
In 2013, Zhou and Wang [11] created a new iterative scheme:
$x_{n+1} = (I - \lambda_n\mu F)T_N^n \cdots T_1^n x_n.$
They showed that the sequence $\{x_n\}$ generated by Equation (9) converges faster and, at the same time, solves the following variational inequality:
$\langle F(x^*), x - x^* \rangle \ge 0, \quad \forall x \in \bigcap_{i=1}^{N} Fix(T_i),$
where $T_i^n = (1 - \beta_n^i)I + \beta_n^i T_i$ for $i = 1, 2, \ldots, N$, $T_0^k = I - \lambda_k\mu F$, and F is a Lipschitz continuous and strongly monotone operator.
In 2014, combining the iterative methods of Equations (8) and (9), Zhang and Yang [12] explored the following explicit iterative algorithm based on the viscosity method:
$x_{n+1} = \alpha_n \gamma V(x_n) + (I - \mu\alpha_n F)T_N^n T_{N-1}^n \cdots T_1^n x_n, \quad n \ge 0,$
where V is Lipschitz continuous, $T_i^n = (1 - \beta_n^i)I + \beta_n^i T_i$ for $i = 1, 2, \ldots, N$, and $\beta_n^i \in (0, 1)$. They proved that the limit solves the following variational inequality:
$\langle (\mu F - \gamma V)x^*, x - x^* \rangle \ge 0, \quad \forall x \in \bigcap_{i=1}^{N} Fix(T_i).$
The implicit midpoint rule has played an important role in solving ordinary differential equations, and it has been carried over to the study of fixed-point problems of nonexpansive mappings (see the detailed references [13,14,15,16,17]). As a consequence, this method has recently attracted the interest of a number of scholars and is being studied more and more. In 2015, using the viscosity approximation method, Xu et al. [18] built the iterative sequence of the implicit midpoint rule for nonexpansive mappings:
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)T\left(\frac{x_n + x_{n+1}}{2}\right), \quad n \ge 0,$
where $\{\alpha_n\}$ is a sequence in (0, 1) and f is a contraction on H. It was proven that the limit of the sequence $\{x_n\}$ produced by Equation (12) is the unique solution of the variational inequality in Equation (5).
In the same year, the generalized viscosity implicit rule was established by Ke and Ma [19]:
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)T(s_n x_n + (1 - s_n)x_{n+1}), \quad n \ge 0,$
where $\{\alpha_n\}$ and $\{s_n\}$ are real sequences in (0, 1). They verified that the limit of the sequence $\{x_n\}$ produced by Equation (13) is the unique solution of the variational inequality in Equation (5).
In 2017, He et al. [20] presented the following new implicit iterative method combined with the viscosity approximation method:
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)T^n(s_n x_n + (1 - s_n)x_{n+1}), \quad n \ge 0.$
It was proven that the limit of the sequence $\{x_n\}$ produced by Equation (14) is the unique solution of the variational inequality in Equation (5).
Recently, Cai et al. [21] studied a modified viscosity implicit rule for nonexpansive mappings:
$x_{n+1} = P_C[\alpha_n f(x_n) + (I - \mu\alpha_n F)T(s_n x_n + (1 - s_n)x_{n+1})], \quad n \ge 0,$
where $\{\alpha_n\}$ and $\{s_n\}$ are two sequences in (0, 1], and F is a Lipschitz continuous and strongly monotone operator.
They proved that the limit of the sequence $\{x_n\}$ is the unique solution of the variational inequality
$\langle (\mu F - f)x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T).$
After studying the results of the above scholars, we see that viscosity approximation methods can be used to solve fixed-point problems; that is to say, they provide an efficient way of selecting a particular fixed point of a given nonexpansive self-mapping.
As a matter of fact, in Hilbert space, a variational inequality over a closed convex subset is equivalent to a fixed-point equation involving the metric projection onto that closed convex set, that is, the feasible set.
Therefore, solving the variational inequality depends on the projection mapping. However, when the projection mapping has no closed form, it is not easy to compute. In this case, by taking the common fixed-point set of a finite family of nonexpansive mappings as the new feasible set, the hybrid steepest-descent method was created, which overcomes the difficulty of evaluating projection operators caused by the complexity of feasible sets.
Inspired by the above methods and ideas, and combining the viscosity approximation technique with the hybrid steepest-descent iterative method for nonexpansive mappings, we study a new generalized viscosity implicit iterative scheme in Hilbert space.
Let $\{T_i\}_{i=1}^{N}$ be a finite family of nonexpansive mappings, let $x_0 \in C$ be arbitrary, let V be an $\alpha$-Lipschitzian operator on H with coefficient $\alpha \in (0, 1)$, and let F be a $\zeta$-Lipschitz continuous and $\kappa$-strongly monotone operator on H with constants $\zeta > 0$ and $\kappa > 0$; then, define the sequence $\{x_n\}$ by
$x_{n+1} = \alpha_n \gamma V(x_n) + (I - \mu\alpha_n F)T_N^n T_{N-1}^n \cdots T_1^n(t_n x_n + (1 - t_n)x_{n+1}), \quad n \ge 0.$
Under appropriate conditions, we prove that the sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality in Equation (11).
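As an aside for readers who wish to experiment, the following Python sketch (ours, not part of the algorithm description above) shows how the scheme can be evaluated under assumed data: since the rule is implicit in $x_{n+1}$, each outer step solves the fixed-point equation for $x_{n+1}$ by a simple inner iteration, which converges because the right-hand side is a contraction in $x_{n+1}$ for the stated parameter ranges. The concrete choices of $T_i$, V, F, and the parameters mirror Example 1 of Section 5.

```python
import numpy as np

def implicit_step(x_n, n, T_ops, V, F, gamma, mu, alpha, t, omega, inner_iters=50):
    """One step of x_{n+1} = a_n*gamma*V(x_n) + (I - mu*a_n*F) T_N^n...T_1^n(t_n x_n + (1-t_n) x_{n+1})."""
    a_n, t_n, w_n = alpha(n), t(n), omega(n)

    def composite(z):
        # z -> T_N^n ... T_1^n z with T_i^n = (1 - w_n^i) I + w_n^i T_i, applying T_1^n first
        for w, T in zip(w_n, T_ops):
            z = (1 - w) * z + w * T(z)
        return z

    def Phi(y):
        # right-hand side of the implicit rule viewed as a function of x_{n+1}
        z = composite(t_n * x_n + (1 - t_n) * y)
        return a_n * gamma * V(x_n) + z - mu * a_n * F(z)

    y = x_n.copy()
    for _ in range(inner_iters):     # inner fixed-point loop for the implicit equation
        y = Phi(y)
    return y

# Hypothetical data mirroring Example 1: T_i x = x / i, V(x) = x / 4, F(x) = x on R^3.
T_ops = [lambda x, i=i: x / i for i in (1, 2)]
x = np.array([1.0, -2.0, 3.0])
for n in range(1, 30):
    x = implicit_step(x, n, T_ops,
                      V=lambda v: v / 4, F=lambda v: v,
                      gamma=0.25, mu=1.0,
                      alpha=lambda k: 1 / (2 * k), t=lambda k: 1 / 3,
                      omega=lambda k: [0.5, 0.5])
print(x)   # close to the zero vector, the common fixed point of T_1 and T_2
```

In practice, the inner loop can be replaced by any fixed-point solver; the sketch is only meant to show how the implicit step is resolved numerically.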
The remainder of this paper is organized as follows. In Section 2, some useful definitions and lemmas are recalled for use in the main results. In Section 3, under suitable conditions, the strong convergence of the iterative sequence is proved. In Section 4, the new iterative algorithm is applied to the broader family of $\xi$-strictly pseudo-contractive mappings and to generalized equilibrium problems. In Section 5, with the purpose of supporting the main results and discussing the convergence, two numerical examples are provided. In the final section, the main work of this article is summarized.

2. Preliminaries

To prove our main results, in this section we recall some helpful definitions and lemmas. When $\{x_n\}$ is a sequence in the real Hilbert space H, we use $x_n \to x$ and $x_n \rightharpoonup x$, respectively, to denote that $\{x_n\}$ converges strongly to x and that $\{x_n\}$ converges weakly to x.
A mapping $P_C$ is called the metric projection from H onto C when C is a nonempty closed and convex subset of H. For any $u \in H$, there exists a unique nearest point $P_C(u) \in C$:
$\|u - P_C(u)\| \le \|u - v\|, \quad \forall v \in C.$
As a matter of fact, $P_C$ is nonexpansive. Moreover, the following inequality holds:
$\langle u - v, P_C(u) - P_C(v) \rangle \ge \|P_C(u) - P_C(v)\|^2, \quad \forall u, v \in H.$
For $u \in H$ and $z \in C$, $P_C$ satisfies
$z = P_C(u) \iff \langle u - z, v - z \rangle \le 0, \quad \forall v \in C.$
Furthermore,
$\|P_C(u) - P_C(v)\| \le \|u - v\|, \quad \forall u, v \in H.$
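As an illustration of these projection properties (ours, not part of the paper), the sketch below assumes C is the closed unit ball of $\mathbb{R}^3$, for which $P_C$ has the explicit form $P_C(u) = u / \max(1, \|u\|)$, and numerically checks the variational characterization and the nonexpansiveness of $P_C$.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_unit_ball(u):
    """Metric projection onto C = {x in R^3 : ||x|| <= 1}."""
    return u / max(1.0, np.linalg.norm(u))

# Characterization: <u - P_C(u), v - P_C(u)> <= 0 for every v in C.
u = rng.normal(size=3) * 3
z = proj_unit_ball(u)
for _ in range(1000):
    v = rng.normal(size=3)
    v = v / np.linalg.norm(v) * rng.uniform(0, 1)     # a random point of C
    assert np.dot(u - z, v - z) <= 1e-12

# Nonexpansiveness: ||P_C(a) - P_C(b)|| <= ||a - b||.
for _ in range(1000):
    a, b = rng.normal(size=3) * 2, rng.normal(size=3) * 2
    assert np.linalg.norm(proj_unit_ball(a) - proj_unit_ball(b)) <= np.linalg.norm(a - b) + 1e-12
print("projection checks passed")
```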
Definition 1.
An operator $G: C \to C$ is said to be
(1) 
a strongly positive bounded linear operator with coefficient $\rho$ if there exists a constant $\rho > 0$ such that
$\langle Gu, u \rangle \ge \rho\|u\|^2, \quad \forall u \in C;$
(2) 
$\kappa$-strongly monotone if there exists a positive constant $\kappa$ such that
$\langle Gu - Gv, u - v \rangle \ge \kappa\|u - v\|^2, \quad \forall u, v \in C;$
(3) 
$\zeta$-Lipschitzian if there exists a positive constant $\zeta$ such that
$\|Gu - Gv\| \le \zeta\|u - v\|, \quad \forall u, v \in C;$
(4) 
$\theta$-inverse strongly monotone (for short, $\theta$-ism) if there exists $\theta > 0$ such that
$\langle Gu - Gv, u - v \rangle \ge \theta\|Gu - Gv\|^2, \quad \forall u, v \in C;$
(5) 
firmly nonexpansive if
$\langle Gu - Gv, u - v \rangle \ge \|Gu - Gv\|^2, \quad \forall u, v \in C.$
Remark 1. (1) It is not hard to see that a strongly positive bounded linear operator G is $\|G\|$-Lipschitzian and $\rho$-strongly monotone.
(2) The projection $P_C$ is an example of a firmly nonexpansive mapping, and every firmly nonexpansive mapping is $\frac{1}{2}$-averaged.
Definition 2
([22]). A mapping $T: H \to H$ is called averaged if there exists a constant $\lambda \in (0, 1)$ such that
$T = (1 - \lambda)I + \lambda S,$
where $I: H \to H$ is the identity mapping and $S: H \to H$ is a nonexpansive mapping. More precisely, T is said to be $\lambda$-averaged; in this case, T is also nonexpansive and $Fix(S) = Fix(T)$.
Lemma 1
([23]). The composition of finitely many averaged mappings is averaged. If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then
$\bigcap_{i=1}^{N} Fix(T_i) = Fix(T_N T_{N-1} \cdots T_1).$
In particular, when N = 2, $Fix(T_1) \cap Fix(T_2) = Fix(T_1 T_2) = Fix(T_2 T_1)$.
Lemma 2
([24]). Let H be a real Hilbert space, C be a nonempty closed and convex subset of H, and $T: C \to C$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$. If $\{x_n\} \subset C$ and $u, v \in C$ with $x_n \rightharpoonup u$ and $(I - T)x_n \to v$, then $(I - T)u = v$. In particular, if $v = 0$, then $u \in Fix(T)$.
Lemma 3
([6]). Assume that $\{\alpha_n\}$ is a sequence of nonnegative real numbers satisfying the condition
$\alpha_{n+1} \le (1 - \beta_n)\alpha_n + \varphi_n, \quad n \ge 0,$
where $\{\beta_n\}$ is a sequence in (0, 1) and $\{\varphi_n\}$ is a sequence in $\mathbb{R}$ such that
(i) 
$\sum_{n=1}^{\infty} \beta_n = \infty$;
(ii) 
$\limsup_{n \to \infty} \varphi_n/\beta_n \le 0$ or $\sum_{n=1}^{\infty} |\varphi_n| < \infty$.
Then, $\lim_{n \to \infty} \alpha_n = 0$.
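As a quick numerical illustration of Lemma 3 (ours; the particular sequences are assumptions chosen for simplicity), take $\beta_n = \frac{1}{n+1}$, so that $\sum \beta_n = \infty$, and $\varphi_n = \frac{\beta_n}{n+1}$, so that $\varphi_n/\beta_n \to 0$; the recursion then drives $\alpha_n$ to zero.

```python
# Illustration of Lemma 3 with beta_n = 1/(n+1) and phi_n = beta_n/(n+1).
a = 1.0
for n in range(200000):
    beta = 1.0 / (n + 1)
    phi = beta / (n + 1)
    a = (1 - beta) * a + phi      # alpha_{n+1} = (1 - beta_n) alpha_n + phi_n
print(a)                          # of order 1e-4, consistent with lim alpha_n = 0
```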
Lemma 4.
Let $F: H \to H$ be a $\zeta$-Lipschitz continuous and $\kappa$-strongly monotone operator, let $\{T_i\}_{i=1}^{N}: C \to C$ be N nonexpansive mappings, and let $T_i^n = (1 - \omega_n^i)I + \omega_n^i T_i$ for $i = 1, 2, \ldots, N$ with $\omega_n^i \in (0, 1]$. For a number $\lambda \in (0, 1]$ and a fixed $\mu \in (0, \frac{2\kappa}{\zeta^2})$, define the mapping $T_N^{\lambda} \cdots T_1^{\lambda}: C \to H$ by
$T_N^{\lambda} \cdots T_1^{\lambda} u := T_N^n \cdots T_1^n u - \lambda\mu F(T_N^n \cdots T_1^n u), \quad u \in C.$
Then, $T_N^{\lambda} \cdots T_1^{\lambda}$ is a contraction satisfying the inequality
$\|T_N^{\lambda} \cdots T_1^{\lambda} u - T_N^{\lambda} \cdots T_1^{\lambda} v\| \le (1 - \lambda\tau)\|u - v\|, \quad \forall u, v \in C,$
where $\tau = 1 - \sqrt{1 - \mu(2\kappa - \mu\zeta^2)} \in (0, 1]$.
This lemma plays a significant role in the main results section.
Proof. 
Set $G := \mu F - I$. Using the $\zeta$-Lipschitz continuity and $\kappa$-strong monotonicity of F, we obtain the following.
(i)
$\|T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v\| = \|(1 - \omega_n^N)(T_{N-1}^n \cdots T_1^n u) + \omega_n^N T_N(T_{N-1}^n \cdots T_1^n u) - (1 - \omega_n^N)(T_{N-1}^n \cdots T_1^n v) - \omega_n^N T_N(T_{N-1}^n \cdots T_1^n v)\|$
$= \|(1 - \omega_n^N)(T_{N-1}^n \cdots T_1^n u - T_{N-1}^n \cdots T_1^n v) + \omega_n^N(T_N T_{N-1}^n \cdots T_1^n u - T_N T_{N-1}^n \cdots T_1^n v)\|$
$\le (1 - \omega_n^N)\|T_{N-1}^n \cdots T_1^n u - T_{N-1}^n \cdots T_1^n v\| + \omega_n^N\|T_N T_{N-1}^n \cdots T_1^n u - T_N T_{N-1}^n \cdots T_1^n v\|$
$\le \|T_{N-1}^n \cdots T_1^n u - T_{N-1}^n \cdots T_1^n v\| \le \cdots \le \|u - v\|.$
(ii)
$\|G(T_N^n \cdots T_1^n u) - G(T_N^n \cdots T_1^n v)\|^2 = \|(\mu F - I)(T_N^n \cdots T_1^n u) - (\mu F - I)(T_N^n \cdots T_1^n v)\|^2$
$= \|\mu[F(T_N^n \cdots T_1^n u) - F(T_N^n \cdots T_1^n v)] - (T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v)\|^2$
$= \mu^2\|F(T_N^n \cdots T_1^n u) - F(T_N^n \cdots T_1^n v)\|^2 + \|T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v\|^2 - 2\mu\langle F(T_N^n \cdots T_1^n u) - F(T_N^n \cdots T_1^n v), T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v\rangle$
$\le \mu^2\zeta^2\|T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v\|^2 + \|T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v\|^2 - 2\mu\kappa\|T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v\|^2$
$= [1 - \mu(2\kappa - \mu\zeta^2)]\|T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v\|^2.$
(iii)
$\|T_N^{\lambda} \cdots T_1^{\lambda} u - T_N^{\lambda} \cdots T_1^{\lambda} v\| = \|T_N^n \cdots T_1^n u - \lambda\mu F(T_N^n \cdots T_1^n u) - T_N^n \cdots T_1^n v + \lambda\mu F(T_N^n \cdots T_1^n v)\|$
$= \|(1 - \lambda)(T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v) - \lambda[(\mu F - I)(T_N^n \cdots T_1^n u) - (\mu F - I)(T_N^n \cdots T_1^n v)]\|$
$= \|(1 - \lambda)(T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v) - \lambda[G(T_N^n \cdots T_1^n u) - G(T_N^n \cdots T_1^n v)]\|$
$\le (1 - \lambda)\|T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v\| + \lambda\sqrt{1 - \mu(2\kappa - \mu\zeta^2)}\,\|T_N^n \cdots T_1^n u - T_N^n \cdots T_1^n v\|$
$\le (1 - \lambda)\|u - v\| + \lambda\sqrt{1 - \mu(2\kappa - \mu\zeta^2)}\,\|u - v\|$
$= \left[1 - \lambda + \lambda\sqrt{1 - \mu(2\kappa - \mu\zeta^2)}\right]\|u - v\| = (1 - \lambda\tau)\|u - v\|,$
where $\tau = 1 - \sqrt{1 - \mu(2\kappa - \mu\zeta^2)}$. □
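The contraction bound of Lemma 4 can also be checked numerically. The sketch below (ours, with assumed data) uses $F(x) = x$ (so $\zeta = \kappa = 1$), $\mu = 1$, hence $\tau = 1 - \sqrt{1 - \mu(2\kappa - \mu\zeta^2)} = 1$, and the nonexpansive maps $T_i x = x/i$ used later in Example 1.

```python
import numpy as np

rng = np.random.default_rng(1)

mu, kappa, zeta = 1.0, 1.0, 1.0                           # F(x) = x is 1-Lipschitz, 1-strongly monotone
tau = 1 - np.sqrt(1 - mu * (2 * kappa - mu * zeta**2))    # equals 1 here

def T_comp(z, omegas=(0.5, 0.5)):
    """z -> T_2^n T_1^n z with T_i x = x / i and T_i^n = (1 - w) I + w T_i."""
    for i, w in enumerate(omegas, start=1):
        z = (1 - w) * z + w * (z / i)
    return z

def T_lambda(z, lam):
    y = T_comp(z)
    return y - lam * mu * y        # = T_2^n T_1^n z - lam * mu * F(T_2^n T_1^n z)

for _ in range(1000):
    u, v = rng.normal(size=3), rng.normal(size=3)
    lam = rng.uniform(0, 1)
    lhs = np.linalg.norm(T_lambda(u, lam) - T_lambda(v, lam))
    rhs = (1 - lam * tau) * np.linalg.norm(u - v)
    assert lhs <= rhs + 1e-12      # the (1 - lambda*tau) contraction bound of Lemma 4
print("Lemma 4 bound verified on random samples")
```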
Lemma 5
([25]). Let H be a real Hilbert space; then, for all u , v H ,
$\|u + v\|^2 \le \|u\|^2 + 2\langle v, u + v\rangle.$

3. Main Results

Theorem 1.
Let C be a nonempty closed and convex subset of the real Hilbert space H, let $\{T_i\}_{i=1}^{N}: C \to C$ be N nonexpansive mappings such that $C = \bigcap_{i=1}^{N} Fix(T_i)$, let $V: C \to C$ be an $\alpha$-Lipschitzian operator on H with coefficient $\alpha \in (0, 1)$, and let $F: C \to H$ be a $\zeta$-Lipschitz continuous and $\kappa$-strongly monotone operator on H with constants $\zeta > 0$ and $\kappa > 0$. Let the sequence $\{x_n\}$ be generated by
$x_{n+1} = \alpha_n \gamma V(x_n) + (I - \mu\alpha_n F)T_N^n T_{N-1}^n \cdots T_1^n(t_n x_n + (1 - t_n)x_{n+1}), \quad n \ge 0,$
where $\{\alpha_n\}$ and $\{t_n\}$ both belong to (0, 1], $0 < \gamma < \tau/\alpha$, $0 < \mu < \frac{2\kappa}{\zeta^2}$, $\tau = 1 - \sqrt{1 - \mu(2\kappa - \mu\zeta^2)} \in (0, 1]$, and $T_i^n = (1 - \omega_n^i)I + \omega_n^i T_i$ for $i = 1, 2, \ldots, N$ with $\omega_n^i \in (0, 1]$, satisfying the following conditions:
(i) 
$\lim_{n \to \infty} \alpha_n = 0$;
(ii) 
$\sum_{n=0}^{\infty} \alpha_n = \infty$ and $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$;
(iii) 
$0 < b \le t_n \le t_{n+1} < 1, \quad \forall n \ge 0$.
Then, the sequence $\{x_n\}$ converges strongly to a common fixed point $x^*$ of the finite family of nonexpansive mappings, which is the unique solution of the following variational inequality
$\langle(\mu F - \gamma V)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \bigcap_{i=1}^{N} Fix(T_i).$
Equivalently, $P_C(I - \mu F + \gamma V)x^* = x^*$ holds.
Proof. 
For simplicity of computation, we prove Theorem 1 in five steps for the case N = 2; the proof extends easily to the general case.
Step 1. We prove that the sequence $\{x_n\}$ is bounded.
Suppose $q \in C$. We obtain
$\|x_{n+1} - q\| = \|\alpha_n \gamma V(x_n) + (I - \mu\alpha_n F)T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1}) - q\|$
$= \|\alpha_n[\gamma V(x_n) - \mu F(q)] + (I - \mu\alpha_n F)T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1}) - (I - \mu\alpha_n F)T_2^n T_1^n q\|$
$\le \alpha_n\|\gamma V(x_n) - \mu F(q)\| + (1 - \tau\alpha_n)\|t_n x_n + (1 - t_n)x_{n+1} - q\|$
$\le \alpha_n\|\gamma V(x_n) - \gamma V(q)\| + \alpha_n\|\gamma V(q) - \mu F(q)\| + (1 - \tau\alpha_n)t_n\|x_n - q\| + (1 - \tau\alpha_n)(1 - t_n)\|x_{n+1} - q\|$
$\le \gamma\alpha\alpha_n\|x_n - q\| + \alpha_n\|\gamma V(q) - \mu F(q)\| + (1 - \tau\alpha_n)t_n\|x_n - q\| + (1 - \tau\alpha_n)(1 - t_n)\|x_{n+1} - q\|$
$= (t_n - \tau\alpha_n t_n + \gamma\alpha\alpha_n)\|x_n - q\| + (1 - \tau\alpha_n)(1 - t_n)\|x_{n+1} - q\| + \alpha_n\|\gamma V(q) - \mu F(q)\|.$
It follows that
$(1 - (1 - \tau\alpha_n)(1 - t_n))\|x_{n+1} - q\| \le (t_n - \tau\alpha_n t_n + \gamma\alpha\alpha_n)\|x_n - q\| + \alpha_n\|\gamma V(q) - \mu F(q)\|,$
which implies that
$\|x_{n+1} - q\| \le \frac{t_n - \tau\alpha_n t_n + \gamma\alpha\alpha_n}{t_n + \tau\alpha_n(1 - t_n)}\|x_n - q\| + \frac{\alpha_n\|\gamma V(q) - \mu F(q)\|}{t_n + \tau\alpha_n(1 - t_n)}$
$= \left[1 - \frac{\alpha_n(\tau - \gamma\alpha)}{t_n + \tau\alpha_n(1 - t_n)}\right]\|x_n - q\| + \frac{\alpha_n(\tau - \gamma\alpha)}{t_n + \tau\alpha_n(1 - t_n)}\cdot\frac{1}{\tau - \gamma\alpha}\|\gamma V(q) - \mu F(q)\|$
$\le \max\left\{\|x_n - q\|, \frac{1}{\tau - \gamma\alpha}\|\gamma V(q) - \mu F(q)\|\right\}.$
Making use of induction, we obtain
$\|x_n - q\| \le \max\left\{\|x_0 - q\|, \frac{1}{\tau - \gamma\alpha}\|\gamma V(q) - \mu F(q)\|\right\}, \quad \forall n \ge 0.$
Hence, $\{x_n\}$ is bounded.
Therefore, $\{V(x_n)\}$ and $\{T_2^n T_1^n[t_n x_n + (1 - t_n)x_{n+1}]\}$ are bounded as well.
Step 2. We show that $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$.
Using Equation (17), we compute
$\|x_{n+2} - x_{n+1}\| = \|\alpha_{n+1}\gamma V(x_{n+1}) + (I - \mu\alpha_{n+1}F)T_2^{n+1}T_1^{n+1}(t_{n+1}x_{n+1} + (1 - t_{n+1})x_{n+2}) - [\alpha_n\gamma V(x_n) + (I - \mu\alpha_n F)T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1})]\|$
$\le \alpha_{n+1}\gamma\|V(x_{n+1}) - V(x_n)\| + (1 - \tau\alpha_{n+1})\|t_{n+1}x_{n+1} + (1 - t_{n+1})x_{n+2} - t_n x_n - (1 - t_n)x_{n+1}\| + |\alpha_{n+1} - \alpha_n|\,\|\gamma V(x_n) - \mu F T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1})\|$
$\le \gamma\alpha\alpha_{n+1}\|x_{n+1} - x_n\| + (1 - \tau\alpha_{n+1})(1 - t_{n+1})\|x_{n+2} - x_{n+1}\| + (1 - \tau\alpha_{n+1})t_n\|x_{n+1} - x_n\| + |\alpha_{n+1} - \alpha_n|M$
$= (t_n - \tau\alpha_{n+1}t_n + \gamma\alpha\alpha_{n+1})\|x_{n+1} - x_n\| + (1 - \tau\alpha_{n+1})(1 - t_{n+1})\|x_{n+2} - x_{n+1}\| + |\alpha_{n+1} - \alpha_n|M,$
where $M = \sup_{n \ge 0}\|\gamma V(x_n) - \mu F T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1})\|$.
It follows that
$(1 - (1 - \tau\alpha_{n+1})(1 - t_{n+1}))\|x_{n+2} - x_{n+1}\| \le (t_n - \tau\alpha_{n+1}t_n + \gamma\alpha\alpha_{n+1})\|x_{n+1} - x_n\| + |\alpha_{n+1} - \alpha_n|M,$
that is,
$\|x_{n+2} - x_{n+1}\| \le \frac{t_n - \tau\alpha_{n+1}t_n + \gamma\alpha\alpha_{n+1}}{1 - (1 - \tau\alpha_{n+1})(1 - t_{n+1})}\|x_{n+1} - x_n\| + \frac{|\alpha_{n+1} - \alpha_n|M}{1 - (1 - \tau\alpha_{n+1})(1 - t_{n+1})}$
$= \left[1 - \frac{\alpha_{n+1}(\tau - \gamma\alpha) + (1 - \tau\alpha_{n+1})(t_{n+1} - t_n)}{1 - (1 - \tau\alpha_{n+1})(1 - t_{n+1})}\right]\|x_{n+1} - x_n\| + \frac{|\alpha_{n+1} - \alpha_n|M}{1 - (1 - \tau\alpha_{n+1})(1 - t_{n+1})}.$
Note that $\{\alpha_n\}$ and $\{t_n\}$ both belong to (0, 1], and from condition (iii) we have
$0 < b \le t_n \le t_{n+1} \le 1 - (1 - \tau\alpha_{n+1})(1 - t_{n+1}) < 1.$
This implies
$\frac{\alpha_{n+1}(\tau - \gamma\alpha) + (1 - \tau\alpha_{n+1})(t_{n+1} - t_n)}{1 - (1 - \tau\alpha_{n+1})(1 - t_{n+1})} \ge \alpha_{n+1}(\tau - \gamma\alpha).$
Thus,
$\|x_{n+2} - x_{n+1}\| \le [1 - \alpha_{n+1}(\tau - \gamma\alpha)]\|x_{n+1} - x_n\| + \frac{M}{b}|\alpha_{n+1} - \alpha_n|.$
By virtue of condition (ii) and Lemma 3, we conclude that $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$.
Step 3. We prove that $\lim_{n \to \infty}\|x_n - T_2^n T_1^n x_n\| = 0$.
Indeed, we have
$\|x_n - T_2^n T_1^n x_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1})\| + \|T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1}) - T_2^n T_1^n x_n\|$
$\le \|x_n - x_{n+1}\| + \alpha_n\|\gamma V(x_n) - \mu F T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1})\| + \|t_n x_n + (1 - t_n)x_{n+1} - x_n\|$
$\le \|x_n - x_{n+1}\| + \alpha_n M + (1 - t_n)\|x_{n+1} - x_n\| = (2 - t_n)\|x_{n+1} - x_n\| + \alpha_n M \le 2\|x_{n+1} - x_n\| + \alpha_n M.$
Combining Step 2 and condition (i), we obtain $\lim_{n \to \infty}\|x_n - T_2^n T_1^n x_n\| = 0$.
Step 4. We claim that $\limsup_{n \to \infty}\langle(\mu F - \gamma V)x^*, x^* - x_n\rangle \le 0$, where $x^*$ is the unique solution of the variational inequality in Equation (11).
First of all, we show that $P_C(\gamma V + I - \mu F)$ is a contraction. For all $u, v \in C$, using Lemma 4, we obtain
$\|P_C(\gamma V + I - \mu F)(u) - P_C(\gamma V + I - \mu F)(v)\| \le \|\gamma V(u) - \gamma V(v)\| + \|(I - \mu F)(u) - (I - \mu F)(v)\| \le \alpha\gamma\|u - v\| + (1 - \tau)\|u - v\| = [1 - (\tau - \alpha\gamma)]\|u - v\|,$
which shows that $P_C(\gamma V + I - \mu F)$ is a contractive mapping. By the contraction mapping principle, there exists a unique fixed point $x^* \in C$, that is, $x^* = P_C(\gamma V + I - \mu F)x^*$. Since $\{x_n\}$ is bounded, by the supremum and infimum principle there exists a subsequence of $\{x_n\}$ along which the upper limit above is attained; moreover, every bounded sequence in a reflexive space has a weakly convergent subsequence. For convenience, we take a weakly convergent subsequence that attains this upper limit, say $x_{n_k} \rightharpoonup \hat{x} \in C$ as $k \to \infty$, so that
$\limsup_{n \to \infty}\langle(\mu F - \gamma V)x^*, x^* - x_n\rangle = \lim_{k \to \infty}\langle(\mu F - \gamma V)x^*, x^* - x_{n_k}\rangle.$
Since $\{\omega_n^i\}$ is bounded for i = 1, 2, we may suppose that $\omega_{n_k}^i \to \omega^i$ as $k \to \infty$, where $0 < \omega^i < 1$. Define $\widetilde{T}_i = (1 - \omega^i)I + \omega^i T_i$ for i = 1, 2. Then, $Fix(\widetilde{T}_i) = Fix(T_i)$. Notice that
$\|T_i^{n_k}x - \widetilde{T}_i x\| = \|[(1 - \omega_{n_k}^i)x + \omega_{n_k}^i T_i x] - [(1 - \omega^i)x + \omega^i T_i x]\| \le |\omega_{n_k}^i - \omega^i|(\|x\| + \|T_i x\|) \to 0.$
Hence, we have
$\lim_{k \to \infty}\sup_{x \in E}\|T_i^{n_k}x - \widetilde{T}_i x\| = 0,$
where E is an arbitrary bounded subset of H.
On account of the fact that $Fix(\widetilde{T}_1) \cap Fix(\widetilde{T}_2) = Fix(T_1) \cap Fix(T_2) = C$ and $\widetilde{T}_i$ is $\omega^i$-averaged for i = 1, 2, Lemma 1 yields $Fix(\widetilde{T}_2\widetilde{T}_1) = Fix(\widetilde{T}_2) \cap Fix(\widetilde{T}_1) = C$. Consider
$\|x_{n_k} - \widetilde{T}_2\widetilde{T}_1 x_{n_k}\| \le \|x_{n_k} - T_2^{n_k}T_1^{n_k}x_{n_k}\| + \|T_2^{n_k}T_1^{n_k}x_{n_k} - \widetilde{T}_2 T_1^{n_k}x_{n_k}\| + \|\widetilde{T}_2 T_1^{n_k}x_{n_k} - \widetilde{T}_2\widetilde{T}_1 x_{n_k}\|$
$\le \|x_{n_k} - T_2^{n_k}T_1^{n_k}x_{n_k}\| + \|T_2^{n_k}(T_1^{n_k}x_{n_k}) - \widetilde{T}_2(T_1^{n_k}x_{n_k})\| + \|T_1^{n_k}x_{n_k} - \widetilde{T}_1 x_{n_k}\|$
$\le \|x_{n_k} - T_2^{n_k}T_1^{n_k}x_{n_k}\| + \sup_{x \in E_1}\|T_2^{n_k}x - \widetilde{T}_2 x\| + \sup_{x \in E_2}\|T_1^{n_k}x - \widetilde{T}_1 x\|,$
where $E_1$ and $E_2$ are bounded subsets containing $\{T_1^{n_k}x_{n_k}\}$ and $\{x_{n_k}\}$, respectively. From Step 3 and Equation (19), we obtain that $\lim_{k \to \infty}\|x_{n_k} - \widetilde{T}_2\widetilde{T}_1 x_{n_k}\| = 0$. From Lemma 2, we obtain that $\hat{x} \in Fix(\widetilde{T}_2\widetilde{T}_1) = Fix(\widetilde{T}_2) \cap Fix(\widetilde{T}_1) = Fix(T_2) \cap Fix(T_1) = C$.
Then, it follows from Equation (16) that
$\limsup_{n \to \infty}\langle(\mu F - \gamma V)x^*, x^* - x_n\rangle = \lim_{k \to \infty}\langle(\mu F - \gamma V)x^*, x^* - x_{n_k}\rangle = \langle(\mu F - \gamma V)x^*, x^* - \hat{x}\rangle \le 0.$
Step 5. We show that $x_n \to x^*$ as $n \to \infty$, where $x^* = P_C(\gamma V + I - \mu F)x^*$.
It follows from Equation (17) and Lemma 5 that
$\|x_{n+1} - x^*\|^2 = \|\alpha_n\gamma V(x_n) + (I - \mu\alpha_n F)T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1}) - x^*\|^2$
$= \|(I - \mu\alpha_n F)T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1}) - (I - \mu\alpha_n F)T_2^n T_1^n x^* + \alpha_n[\gamma V(x_n) - \mu F(x^*)]\|^2$
$\le \|(I - \mu\alpha_n F)T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1}) - (I - \mu\alpha_n F)T_2^n T_1^n x^*\|^2 + 2\alpha_n\langle\gamma V(x_n) - \mu F(x^*), x_{n+1} - x^*\rangle$
$\le (1 - \tau\alpha_n)^2\|t_n x_n + (1 - t_n)x_{n+1} - x^*\|^2 + 2\gamma\alpha_n\langle V(x_n) - V(x^*), x_{n+1} - x^*\rangle + 2\alpha_n\langle\gamma V(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle$
$\le (1 - \tau\alpha_n)^2 t_n^2\|x_n - x^*\|^2 + (1 - \tau\alpha_n)^2(1 - t_n)^2\|x_{n+1} - x^*\|^2 + 2t_n(1 - t_n)(1 - \tau\alpha_n)^2\|x_n - x^*\|\,\|x_{n+1} - x^*\| + 2\gamma\alpha\alpha_n\|x_n - x^*\|\,\|x_{n+1} - x^*\| + 2\alpha_n\langle\gamma V(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle$
$= (1 - \tau\alpha_n)^2 t_n^2\|x_n - x^*\|^2 + (1 - \tau\alpha_n)^2(1 - t_n)^2\|x_{n+1} - x^*\|^2 + 2(t_n(1 - t_n)(1 - \tau\alpha_n)^2 + \gamma\alpha\alpha_n)\|x_n - x^*\|\,\|x_{n+1} - x^*\| + 2\alpha_n\langle\gamma V(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle$
$\le (1 - \tau\alpha_n)^2 t_n^2\|x_n - x^*\|^2 + (1 - \tau\alpha_n)^2(1 - t_n)^2\|x_{n+1} - x^*\|^2 + (t_n(1 - t_n)(1 - \tau\alpha_n)^2 + \gamma\alpha\alpha_n)[\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2] + 2\alpha_n\langle\gamma V(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle$
$= [(1 - \tau\alpha_n)^2 t_n^2 + t_n(1 - t_n)(1 - \tau\alpha_n)^2 + \gamma\alpha\alpha_n]\|x_n - x^*\|^2 + [(1 - \tau\alpha_n)^2(1 - t_n)^2 + t_n(1 - t_n)(1 - \tau\alpha_n)^2 + \gamma\alpha\alpha_n]\|x_{n+1} - x^*\|^2 + L_n,$
where
$L_n := 2\alpha_n\langle\gamma V(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle.$
It follows that
$[1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n]\|x_{n+1} - x^*\|^2 \le [(1 - \tau\alpha_n)^2 t_n^2 + t_n(1 - t_n)(1 - \tau\alpha_n)^2 + \gamma\alpha\alpha_n]\|x_n - x^*\|^2 + L_n.$
This indicates that
$\|x_{n+1} - x^*\|^2 \le \frac{(1 - \tau\alpha_n)^2 t_n^2 + t_n(1 - t_n)(1 - \tau\alpha_n)^2 + \gamma\alpha\alpha_n}{1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n}\|x_n - x^*\|^2 + \frac{L_n}{1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n}.$
Let
$\psi_n := \frac{1}{\alpha_n}\left\{1 - \frac{(1 - \tau\alpha_n)^2 t_n^2 + t_n(1 - t_n)(1 - \tau\alpha_n)^2 + \gamma\alpha\alpha_n}{1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n}\right\} = \frac{1}{\alpha_n}\cdot\frac{1 - (1 - \tau\alpha_n)^2 - 2\gamma\alpha\alpha_n}{1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n} = \frac{1}{\alpha_n}\cdot\frac{2\tau\alpha_n - \tau^2\alpha_n^2 - 2\gamma\alpha\alpha_n}{1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n} = \frac{2(\tau - \gamma\alpha) - \tau^2\alpha_n}{1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n}.$
Because $0 < \gamma\alpha < \tau$ and $\{t_n\}$ satisfies $0 < b \le t_n \le t_{n+1} < 1$ for all $n \ge 0$, if $\lim_{n \to \infty} t_n$ exists, we may suppose that $\lim_{n \to \infty} t_n = t^* > 0$.
Then,
$\lim_{n \to \infty}\psi_n = \frac{2(\tau - \gamma\alpha)}{2t^* - (t^*)^2} > 0.$
Let $\rho$ satisfy $0 < \rho < \frac{2(\tau - \gamma\alpha)}{2t^* - (t^*)^2}$; then, there exists a sufficiently large integer $n_0$ such that $\psi_n > \rho$ for all $n \ge n_0$. Therefore, we obtain
$\frac{(1 - \tau\alpha_n)^2 t_n^2 + t_n(1 - t_n)(1 - \tau\alpha_n)^2 + \gamma\alpha\alpha_n}{1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n} < 1 - \rho\alpha_n, \quad \forall n \ge n_0.$
From Equation (21), we obtain
$\|x_{n+1} - x^*\|^2 \le (1 - \rho\alpha_n)\|x_n - x^*\|^2 + \frac{L_n}{1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n}.$
By Equation (20), we have
$\limsup_{n \to \infty}\frac{L_n}{\rho\alpha_n[1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n]} = \limsup_{n \to \infty}\frac{2\langle\gamma V(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle}{\rho[1 - (1 - \tau\alpha_n)^2(1 - t_n)^2 - t_n(1 - t_n)(1 - \tau\alpha_n)^2 - \gamma\alpha\alpha_n]} \le 0.$
From Equations (22) and (23) and Lemma 3, we obtain
$\lim_{n \to \infty}\|x_n - x^*\| = 0.$
The proof is completed. □
The following theorems follow easily from Theorem 1.
Theorem 2.
Let C be a nonempty closed and convex subset of the real Hilbert space H, let $\{T_i\}_{i=1}^{N}: C \to C$ be N nonexpansive mappings such that $C = \bigcap_{i=1}^{N} Fix(T_i) \ne \emptyset$, let $V: C \to C$ be an $\alpha$-Lipschitzian operator on H with coefficient $\alpha \in (0, 1)$, and let $F: C \to H$ be a $\zeta$-Lipschitz continuous and $\kappa$-strongly monotone operator on H with constants $\zeta > 0$ and $\kappa > 0$. Let the sequence $\{x_n\}$ be generated by
$x_{n+1} = \alpha_n \gamma V(x_n) + (I - \mu\alpha_n F)T_N^n T_{N-1}^n \cdots T_1^n\left(\frac{x_n + x_{n+1}}{2}\right), \quad n \ge 0,$
where $T_i^n = (1 - \omega_n^i)I + \omega_n^i T_i$ for $i = 1, 2, \ldots, N$ and $\{\alpha_n\}, \gamma, \mu, \tau, \omega_n^i \in (0, 1]$ satisfy the same conditions as in Theorem 1.
Then, the sequence $\{x_n\}$ converges strongly to $x^*$, which also solves the following variational inequality:
$\langle(\mu F - \gamma V)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \bigcap_{i=1}^{N} Fix(T_i).$
Theorem 3.
Let C be a nonempty closed and convex subset of the real Hilbert space H, let $\{T_i\}_{i=1}^{N}: C \to C$ be N nonexpansive mappings such that $C = \bigcap_{i=1}^{N} Fix(T_i) \ne \emptyset$, and let $V: C \to C$ be an $\alpha$-Lipschitzian operator on H with coefficient $\alpha \in (0, 1)$. Let A be a strongly positive bounded linear operator with constant $\kappa > 0$ such that $0 < \gamma < \tau/\alpha$ and $0 < \mu < \frac{2\kappa}{\|A\|^2}$, where $\tau = 1 - \sqrt{1 - \mu(2\kappa - \mu\|A\|^2)}$. Let the sequence $\{x_n\}$ be generated by
$x_{n+1} = \alpha_n \gamma V(x_n) + (I - \mu\alpha_n A)T_N^n T_{N-1}^n \cdots T_1^n(s_n x_n + (1 - s_n)x_{n+1}), \quad n \ge 0,$
where $T_i^n = (1 - \omega_n^i)I + \omega_n^i T_i$ for $i = 1, 2, \ldots, N$ and $\{\alpha_n\}, \{s_n\}, \gamma, \mu, \tau, \omega_n^i \in (0, 1]$ satisfy the same conditions as in Theorem 1.
Then, the sequence $\{x_n\}$ converges strongly to $x^*$, which also solves the following variational inequality:
$\langle(\mu A - \gamma V)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \bigcap_{i=1}^{N} Fix(T_i).$

4. Application

In this section, the iterative algorithm of Equation (17) is applied to solve some important problems.

4.1. Strict Pseudo-Contractive Mappings

A mapping $G: C \to H$ is said to be a $\xi$-strict pseudo-contraction if there exists a constant $\xi \in [0, 1)$ such that
$\|Gx - Gy\|^2 \le \|x - y\|^2 + \xi\|(I - G)x - (I - G)y\|^2, \quad \forall x, y \in C.$
Lemma 6
([26]). Let $G: C \to H$ be a $\xi$-strictly pseudo-contractive mapping and define $S: C \to H$ by $Sx = \delta x + (1 - \delta)Gx$ for $x \in C$. Then, for $\delta \in [\xi, 1)$, S is a nonexpansive mapping such that $Fix(S) = Fix(G)$.
Theorem 4.
Let H be a real Hilbert space, let $\{G_i\}_{i=1}^{N}$ be N $\xi_i$-strictly pseudo-contractive mappings of H, and let $C = \bigcap_{i=1}^{N} Fix(G_i)$. Let $V: C \to C$ be an $\alpha$-Lipschitzian operator with coefficient $\alpha \in (0, 1)$ and $F: C \to H$ be a $\zeta$-Lipschitz continuous and $\kappa$-strongly monotone operator with constants $\zeta > 0$ and $\kappa > 0$ such that $0 < \mu < \frac{2\kappa}{\zeta^2}$, $0 < \gamma < \tau/\alpha$, and $\tau = 1 - \sqrt{1 - \mu(2\kappa - \mu\zeta^2)} \in (0, 1]$. For an arbitrarily given $x_0 \in C$, define $\{x_n\}$ as follows:
$\hat{S}_i x = \delta_i x + (1 - \delta_i)G_i x, \quad i = 1, \ldots, N,$
$x_{n+1} = \alpha_n \gamma V(x_n) + (I - \mu\alpha_n F)\hat{S}_N\hat{S}_{N-1}\cdots\hat{S}_1(t_n x_n + (1 - t_n)x_{n+1}), \quad n \ge 0,$
where $\delta_i \in [\xi_i, 1)$ and $\{\alpha_n\}$ and $\{t_n\}$ both belong to (0, 1], satisfying the following conditions:
(i) 
$\lim_{n \to \infty} \alpha_n = 0$;
(ii) 
$\sum_{n=0}^{\infty} \alpha_n = \infty$ and $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$;
(iii) 
$0 < b \le t_n \le t_{n+1} < 1, \quad \forall n \ge 0$.
Then, the sequence $\{x_n\}$ converges strongly to a common fixed point $x^*$ of the finite family of mappings, which also solves the following variational inequality:
$\langle(\mu F - \gamma V)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \bigcap_{i=1}^{N} Fix(G_i).$
Proof. 
Let $\{G_i\}_{i=1}^{N}$ be a family of $\xi_i$-strict pseudo-contractions of H. We define $\hat{S}_i: C \to H$ by $\hat{S}_i x = \delta_i x + (1 - \delta_i)G_i x$ for $x \in C$, $0 \le \xi_i \le \delta_i < 1$, and $i = 1, \ldots, N$. By virtue of Lemma 6, $\{\hat{S}_i\}_{i=1}^{N}$ is a family of nonexpansive mappings and $Fix(\hat{S}_i) = Fix(G_i)$. Therefore, the desired result follows directly from Theorem 1. □
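To make the reduction concrete, the following sketch (ours, with hypothetical data) takes $G(x) = -2x$ on $\mathbb{R}^2$, which is a $\frac{1}{3}$-strict pseudo-contraction with $Fix(G) = \{0\}$ but is not nonexpansive (its Lipschitz constant is 2), forms $\hat{S}x = \delta x + (1 - \delta)Gx$ with $\delta = \frac{1}{2} \in [\frac{1}{3}, 1)$ as in Lemma 6, and runs the scheme of Theorem 4 with N = 1 and the same V, F, and parameters as in Example 1.

```python
import numpy as np

def G(x):
    return -2.0 * x                         # hypothetical 1/3-strict pseudo-contraction, Fix(G) = {0}

delta = 0.5                                 # any delta in [xi, 1) with xi = 1/3 works
def S_hat(x):                               # Lemma 6: S_hat is nonexpansive and Fix(S_hat) = Fix(G)
    return delta * x + (1 - delta) * G(x)

V = lambda x: x / 4                         # 1/4-Lipschitzian
F = lambda x: x                             # 1-Lipschitz continuous, 1-strongly monotone
gamma, mu = 0.25, 1.0

x = np.array([4.0, -7.0])
for n in range(1, 60):
    a_n, t_n = 1 / (2 * n), 1 / 3
    y = x.copy()
    for _ in range(100):                    # inner fixed-point loop solving the implicit step
        z = S_hat(t_n * x + (1 - t_n) * y)
        y = a_n * gamma * V(x) + z - mu * a_n * F(z)
    x = y
print(np.linalg.norm(x))                    # tends to 0: the iterates approach Fix(G) = {0}
```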

4.2. Generalized Equilibrium Problems

Let $\psi_i$ be a bifunction from $C_i \times C_i$ into $\mathbb{R}$, where $\{C_i\}_{i=1}^{N}$ are nonempty closed convex subsets of H for $i = 1, \ldots, N$, and $\mathbb{R}$ is the set of real numbers. In [27], Postolache et al. considered the following generalized equilibrium problem: find $(\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_N)$ satisfying
$\psi_1(\hat{x}_1, x_1) + \langle D_1\hat{x}_2, x_1 - \hat{x}_1\rangle + \frac{1}{\delta_1}\langle\hat{x}_1 - \hat{x}_2, x_1 - \hat{x}_1\rangle \ge 0, \quad \forall x_1 \in C_1,$
$\psi_2(\hat{x}_2, x_2) + \langle D_2\hat{x}_3, x_2 - \hat{x}_2\rangle + \frac{1}{\delta_2}\langle\hat{x}_2 - \hat{x}_3, x_2 - \hat{x}_2\rangle \ge 0, \quad \forall x_2 \in C_2,$
$\vdots$
$\psi_N(\hat{x}_N, x_N) + \langle D_N\hat{x}_1, x_N - \hat{x}_N\rangle + \frac{1}{\delta_N}\langle\hat{x}_N - \hat{x}_1, x_N - \hat{x}_N\rangle \ge 0, \quad \forall x_N \in C_N,$
where $D_i: H \to H$ is a nonlinear mapping and $\delta_i > 0$ for $i = 1, 2, \ldots, N$. We denote its solution set by $EP(\psi)$. It is well known that generalized equilibrium problems cover a number of variational inequality problems, optimization problems, and fixed-point problems.
To solve the problem in Equation (28), we assume that the bifunction $\psi$ satisfies the following conditions:
(A)
$\psi(u, u) = 0$ for all $u \in C$;
(B)
$\psi$ is monotone, i.e., $\psi(u, v) + \psi(v, u) \le 0$ for all $u, v \in C$;
(C)
$\psi$ is upper hemicontinuous, i.e., for each $u, v, w \in C$,
$\limsup_{t \to 0^+}\psi(tu + (1 - t)v, w) \le \psi(v, w);$
(D)
$\psi(u, \cdot)$ is convex and weakly lower semicontinuous for each $u \in C$.
Lemma 7
([28]). Let $g: H \to H$ be an $\alpha$-ism operator on H. Then, $I - 2\alpha g$ is nonexpansive.
Lemma 8
([29]). Let $\psi: C \times C \to \mathbb{R}$ be a bifunction satisfying conditions (A)–(D). Then, for $\delta > 0$ and $u \in H$, there exists $z \in C$ such that
$\psi(z, v) + \frac{1}{\delta}\langle v - z, z - u\rangle \ge 0, \quad \forall v \in C.$
Lemma 9
([29]). Let $\psi: C \times C \to \mathbb{R}$ satisfy (A)–(D). Define a mapping $T_\delta^\psi: H \to C$ for $\delta > 0$ and $u \in H$ as follows:
$T_\delta^\psi(u) = \left\{z \in C : \psi(z, v) + \frac{1}{\delta}\langle v - z, z - u\rangle \ge 0, \ \forall v \in C\right\}.$
For all $u \in H$, the following conditions hold:
(a) 
$T_\delta^\psi$ is single valued;
(b) 
$T_\delta^\psi$ is firmly nonexpansive, i.e., $\|T_\delta^\psi u - T_\delta^\psi v\|^2 \le \langle T_\delta^\psi u - T_\delta^\psi v, u - v\rangle$.
This implies that $\|T_\delta^\psi u - T_\delta^\psi v\| \le \|u - v\|$, namely, $T_\delta^\psi$ is a nonexpansive mapping;
(c) 
$Fix(T_\delta^\psi) = EP(\psi)$;
(d) 
$EP(\psi)$ is a closed and convex set.
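For intuition, the following worked instance (ours, not taken from [29]) may help: let $C = H = \mathbb{R}^m$ and $\psi(z, v) = \langle Bz, v - z\rangle$ with B a positive semidefinite matrix, so that conditions (A)–(D) hold. The defining inequality of $T_\delta^\psi(u)$ then reads
$\langle Bz + \tfrac{1}{\delta}(z - u), v - z\rangle \ge 0, \quad \forall v \in \mathbb{R}^m,$
which forces $Bz + \frac{1}{\delta}(z - u) = 0$, i.e., $T_\delta^\psi(u) = (I + \delta B)^{-1}u$. This is the classical resolvent of B, and one can check directly that it is single valued and firmly nonexpansive, in agreement with Lemma 9.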
Lemma 10
([27]). Let $\{C_i\}_{i=1}^{N}$ be nonempty closed convex subsets of the Hilbert space H, let $\psi_i: C_i \times C_i \to \mathbb{R}$ be bifunctions satisfying conditions (A)–(D) for $i = 1, 2, \ldots, N$, and let $D_i: C \to C$ be nonlinear mappings.
Then, for $\hat{x}_i \in C_i$, $i = 1, 2, \ldots, N$, $(\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_N) \in C_1 \times C_2 \times \cdots \times C_N$ is a solution of Equation (28) if and only if $\hat{x}_i$ is a fixed point of the following mapping:
$T = T_{\delta_1}^{\psi_1}(I - \delta_1 D_1)T_{\delta_2}^{\psi_2}(I - \delta_2 D_2)\cdots T_{\delta_N}^{\psi_N}(I - \delta_N D_N).$
Theorem 5.
Let $\{C_i\}_{i=1}^{N}$ be nonempty closed convex subsets of H, let $\psi_i: C_i \times C_i \to \mathbb{R}$ be bifunctions satisfying conditions (A)–(D) for $i = 1, 2, \ldots, N$, and let $D_i: C \to C$ be $\sigma_i$-ism self-mappings. Let $V: C \to C$ be an $\alpha$-Lipschitzian operator with coefficient $\alpha \in (0, 1)$ and let $F: C \to H$ be a $\zeta$-Lipschitz continuous and $\kappa$-strongly monotone operator with constants $\zeta > 0$ and $\kappa > 0$. Assume that $\Omega = Fix(T) \ne \emptyset$, where T is given in Lemma 10. Let the sequence $\{x_n\}$ be generated by
$\hat{T}_i = T_{\delta_i}^{\psi_i}(I - \delta_i D_i), \quad i = 1, \ldots, N,$
$x_{n+1} = \alpha_n \gamma V(x_n) + (I - \mu\alpha_n F)\hat{T}_N\cdots\hat{T}_1(t_n x_n + (1 - t_n)x_{n+1}), \quad n \ge 0,$
where $\{\alpha_n\}$ and $\{t_n\}$ both belong to (0, 1], $0 < \mu < \frac{2\kappa}{\zeta^2}$, $\tau = 1 - \sqrt{1 - \mu(2\kappa - \mu\zeta^2)} \in (0, 1]$, $0 < \gamma < \tau/\alpha$, and $\delta_i \in (0, 2\sigma_i)$. The following conditions are satisfied:
(i) 
$\lim_{n \to \infty} \alpha_n = 0$;
(ii) 
$\sum_{n=0}^{\infty} \alpha_n = \infty$ and $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$;
(iii) 
$0 < b \le t_n \le t_{n+1} < 1, \quad \forall n \ge 0$.
Then, the sequence $\{x_n\}$ converges strongly to a point of $\Omega$.
Proof .
We need to show that $T = T_{\delta_1}^{\psi_1}(I - \delta_1 D_1)T_{\delta_2}^{\psi_2}(I - \delta_2 D_2)\cdots T_{\delta_N}^{\psi_N}(I - \delta_N D_N)$ is an averaged mapping. Observe that $I - \delta_i D_i$ can be written as $I - \delta_i D_i = (1 - \frac{\delta_i}{2\sigma_i})I + \frac{\delta_i}{2\sigma_i}(I - 2\sigma_i D_i)$, where $\frac{\delta_i}{2\sigma_i} \in (0, 1)$. From Lemma 7, we obtain that $I - 2\sigma_i D_i$ is nonexpansive. Therefore, by Definition 2, $I - \delta_i D_i$ is averaged for $\delta_i \in (0, 2\sigma_i)$, $i = 1, 2, \ldots, N$. Next, applying Lemma 9, we obtain that $T_{\delta_i}^{\psi_i}$ is firmly nonexpansive, that is, $T_{\delta_i}^{\psi_i}$ is $\frac{1}{2}$-averaged for $i = 1, 2, \ldots, N$. Then, Lemma 1 implies that T is averaged on H. As a consequence, T can be written as a convex combination of the identity mapping and a nonexpansive mapping, that is, $T = (1 - \lambda)I + \lambda T'$ for some $\lambda \in (0, 1)$, where $T'$ is a nonexpansive mapping and $Fix(T) = Fix(T')$. Therefore, the expected result follows by applying Theorem 1. □
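As a concrete and deliberately simple illustration of this construction (ours, not part of the paper): take $\psi_i \equiv 0$, for which the set defining $T_{\delta_i}^{\psi_i}(u)$ reduces to $\{z \in C_i : \langle v - z, z - u\rangle \ge 0, \ \forall v \in C_i\} = \{P_{C_i}(u)\}$, so each resolvent is just a metric projection; take $D_i = I$, which is 1-inverse strongly monotone, with $\delta_i = \frac{1}{2} \in (0, 2\sigma_i)$; and let the $C_i$ be boxes in $\mathbb{R}^2$ containing the origin, whose projections are coordinatewise clips. The remaining data mirror Example 1.

```python
import numpy as np

# Assumed data: psi_i = 0 so T_delta^psi = P_{C_i}; C_i are boxes containing the origin;
# D_i = I is 1-ism and delta_i = 0.5 lies in (0, 2*sigma_i).
boxes = [(np.array([-1.0, -2.0]), np.array([2.0, 1.0])),
         (np.array([-3.0, -1.0]), np.array([1.0, 3.0]))]
delta = 0.5

def T_hat(x, lo, hi):
    # T_hat_i = P_{C_i}(I - delta_i D_i) with D_i = I and a box projection
    return np.clip((1 - delta) * x, lo, hi)

def T_comp(x):
    for lo, hi in boxes:                    # apply T_hat_1 first, then T_hat_2, ...
        x = T_hat(x, lo, hi)
    return x

V = lambda x: x / 4                         # 1/4-Lipschitzian
F = lambda x: x                             # 1-Lipschitz, 1-strongly monotone
gamma, mu = 0.25, 1.0

x = np.array([5.0, -4.0])
for n in range(1, 50):
    a_n, t_n = 1 / (2 * n), 1 / 3
    y = x.copy()
    for _ in range(100):                    # inner loop for the implicit step
        z = T_comp(t_n * x + (1 - t_n) * y)
        y = a_n * gamma * V(x) + z - mu * a_n * F(z)
    x = y
print(x)                                    # approaches the origin, a point of Omega = Fix(T)
```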

5. Numerical Example

In this section, a first numerical example is provided to illustrate the convergence of the proposed sequence. Then, a second numerical example is presented to compare the convergence rate with those of several existing implicit iterative sequences.
Example 1.
We equip $\mathbb{R}^3$ with the inner product $\langle\cdot,\cdot\rangle: \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}$ given by
$\langle x, y\rangle = x \cdot y = x_1 y_1 + x_2 y_2 + x_3 y_3$
and the usual norm $\|\cdot\|: \mathbb{R}^3 \to \mathbb{R}$ given by
$\|x\| = (x_1^2 + x_2^2 + x_3^2)^{1/2}, \quad x = (x_1, x_2, x_3) \in \mathbb{R}^3.$
Let $\alpha_n = \frac{1}{2n}$, $t_n = \frac{1}{3}$, $\omega_n^i = \frac{1}{2}$, and $T_i x = \frac{x}{i}$ for $i = 1, 2, \ldots, N$. Assume $V(x) = \frac{1}{4}x$ and $F(x) = x$. Hence, V is $\frac{1}{4}$-Lipschitzian, F is 1-Lipschitz continuous and 1-strongly monotone, and $T_i^n = \frac{1}{2}I + \frac{1}{2}T_i$. Let $\gamma = \frac{1}{4}$ and $\mu = 1$. For convenience of calculation, we consider the case N = 2 in Theorem 1. Then, the sequence $\{x_n\}$ generated by Equation (17) satisfies
$x_{n+1} = \alpha_n\gamma V(x_n) + (I - \mu\alpha_n F)T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1}) = \frac{1}{32n}x_n + \left(1 - \frac{1}{2n}\right)\left(\frac{1}{4}x_n + \frac{1}{2}x_{n+1}\right),$
and solving for $x_{n+1}$ gives
$x_{n+1} = \frac{8n - 3}{16n + 8}x_n.$
Choosing an initial point $x_1$ in Equation (30), the numerical results are shown in Figure 1 and Figure 2.
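As a sketch (ours, with an arbitrary assumed initial point), the closed-form recursion in Equation (30) can be evaluated directly:

```python
import numpy as np

x = np.array([1.0, 2.0, -3.0])           # arbitrary initial point x_1 in R^3
for n in range(1, 21):
    x = (8 * n - 3) / (16 * n + 8) * x   # x_{n+1} = (8n - 3) / (16n + 8) * x_n
print(np.linalg.norm(x))                 # the norms decrease toward 0, as in Figures 1 and 2
```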
Remark 2.
As can be seen from the results in Figure 1 and Figure 2, the iterative sequence generated by Equation (17) is strongly convergent.
Example 2.
Let all the assumptions of Example 1 be satisfied except $\omega_n^i = \frac{3}{4}$ and $\mu = \frac{3}{2}$. Hence, $T_i^n = \frac{1}{4}I + \frac{3}{4}T_i$. In order to make the numerical results more pronounced, we consider the case N = 3.
Firstly, the sequence $\{x_n\}$ generated by Equation (17) can be simplified as
$x_{n+1} = \alpha_n\gamma V(x_n) + (I - \mu\alpha_n F)T_3^n T_2^n T_1^n(t_n x_n + (1 - t_n)x_{n+1}) = \frac{1}{32n}x_n + \left(1 - \frac{3}{4n}\right)\left(\frac{10}{96}x_n + \frac{10}{48}x_{n+1}\right) = \frac{40n - 18}{304n + 60}x_n.$
Secondly, when i = 1 and $T_1 x = Tx = x$, the sequence $\{x_n\}$ generated by Equation (15) can be simplified as
$x_{n+1} = P_C[\alpha_n f(x_n) + (I - \mu\alpha_n F)T(t_n x_n + (1 - t_n)x_{n+1})] = \frac{1}{8n}x_n + \left(1 - \frac{3}{4n}\right)\left(\frac{1}{3}x_n + \frac{2}{3}x_{n+1}\right) = \frac{8n - 3}{8n + 12}x_n.$
Thirdly, the sequence $\{x_n\}$ generated by Equation (13) can be simplified as
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)T(t_n x_n + (1 - t_n)x_{n+1}) = \frac{1}{8n}x_n + \left(1 - \frac{1}{2n}\right)\left(\frac{1}{3}x_n + \frac{2}{3}x_{n+1}\right) = \frac{8n - 1}{8n + 8}x_n.$
Lastly, the sequence $\{x_n\}$ generated by Equation (12) can be simplified as
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)T\left(\frac{x_n + x_{n+1}}{2}\right) = \frac{1}{8n}x_n + \left(1 - \frac{1}{2n}\right)\left(\frac{1}{2}x_n + \frac{1}{2}x_{n+1}\right) = \frac{4n - 1}{4n + 2}x_n.$
Numerical comparison of Algorithms (12), (13), (15), and (17).
Table 1 and Figure 3 report the iterates for the initial value $x_1 = 80$ up to n = 20; the sequences $\{x_n\}$ produced by Algorithms (12), (13), (15), and (17) all converge to 0, and an effective comparison can be clearly seen.
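For reference, the following short script (ours) reproduces Table 1 from the four closed-form recursions derived above:

```python
# Reproduce the comparison of Algorithms (12), (13), (15), and (17) for x_1 = 80.
rules = {
    "(12)": lambda n, x: (4 * n - 1) / (4 * n + 2) * x,
    "(13)": lambda n, x: (8 * n - 1) / (8 * n + 8) * x,
    "(15)": lambda n, x: (8 * n - 3) / (8 * n + 12) * x,
    "(17)": lambda n, x: (40 * n - 18) / (304 * n + 60) * x,
}
x = {name: 80.0 for name in rules}
print("n    " + "  ".join(f"{name:>9}" for name in rules))
for n in range(1, 21):
    print(f"{n:<4d} " + "  ".join(f"{x[name]:9.4f}" for name in rules))
    for name, rule in rules.items():
        x[name] = rule(n, x[name])       # x_{n+1} from x_n
```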
Remark 3.
Table 1 and Figure 3 show that iterative Algorithm (17) enjoys a faster convergence rate than Algorithms (12), (13), and (15). Table 1 shows that Algorithm (17) not only converges faster, but also reaches zero (to four decimal places) earlier, whereas the iterates of Algorithms (12), (13), and (15) have not approached zero even by the twentieth term.

6. Conclusions

In this paper, we combined the viscosity approximation method and the hybrid steepest-descent iterative method into an implicit iterative algorithm, which has been proven to converge strongly to the unique solution of the variational inequality. As for its applications, we extended the main results to the case in which the common fixed-point set of a finite family of strictly pseudo-contractive self-mappings serves as the feasible set, and we associated the fixed-point set of the nonexpansive mapping with the solution set of the generalized equilibrium problem, which covers a number of variational inequality problems, optimization problems, and fixed-point problems. In the numerical examples section, our Algorithm (17) required fewer iterations and had a faster rate of convergence than the existing Algorithms (12), (13), and (15).

Author Contributions

Conceptualization, L.S., H.X. and Y.M.; methodology, L.S., H.X. and Y.M.; software, L.S., H.X. and Y.M.; validation, L.S., H.X. and Y.M.; formal analysis, L.S., H.X. and Y.M.; writing—original draft preparation, L.S., H.X. and Y.M.; writing—review and editing, L.S., H.X. and Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are very thankful to the referees for their valuable and helpful comments. This work was completed with the support of the Basic Scientific Research Foundation of Heilongjiang Educational Committee (No. 1353MSYQN017).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hartman, P.; Stampacchia, G. On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115, 271–310.
  2. Cen, J.; Haddad, T.; Nguyen, V.T. Simultaneous distributed-boundary optimal control problems driven by nonlinear complementarity systems. J. Glob. Optim. 2022, 84, 783–805.
  3. Treanţă, S.; Guo, Y. The study of certain optimization problems via variational inequalities. Res. Math. Sci. 2023, 10, 7.
  4. Benaceur, A.; Ern, A.; Ehrlacher, V. A reduced basis method for parametrized variational inequalities applied to contact mechanics. Int. J. Numer. Methods Eng. 2020, 121, 1170–1197.
  5. Wu, T.; Sun, Y. Existence and Uniqueness of Generalized Solutions of Variational Inequalities with Fourth-Order Parabolic Operators in Finance. Symmetry 2022, 14, 1773.
  6. Xu, H.K. An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116, 659–678.
  7. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  8. Marino, G.; Xu, H.K. A general iterative method for non-expansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318, 43–52.
  9. Yamada, I. The hybrid steepest descent method for the variational inequality problem over the intersection of fixed-point sets of non-expansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Butnariu, D., Censor, Y., Reich, S., Eds.; North-Holland: Amsterdam, The Netherlands, 2001; Volume 8, pp. 473–504.
  10. Tian, M. A general iterative method based on the hybrid steepest descent scheme for non-expansive mappings in Hilbert spaces. In Proceedings of the 2010 International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 10–12 December 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1–4.
  11. Zhou, H.; Wang, P. A simpler explicit iterative algorithm for a class of variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 161, 716–727.
  12. Zhang, C.; Yang, C. A new explicit iterative algorithm for solving a class of variational inequalities over the common fixed points set of a finite family of non-expansive mappings. Fixed Point Theory Appl. 2014, 2014, 60.
  13. Bader, G.; Deuflhard, P. A semi-implicit mid-point rule for stiff systems of ordinary differential equations. Numer. Math. 1983, 41, 373–398.
  14. Deuflhard, P. Recent progress in extrapolation methods for ordinary differential equations. SIAM Rev. 1985, 27, 505–535.
  15. Auzinger, W.; Frank, R. Asymptotic error expansions for stiff equations: An analysis for the implicit midpoint and trapezoidal rules in the strongly stiff case. Numer. Math. 1989, 56, 469–499.
  16. Schneider, C. Analysis of the linearly implicit mid-point rule for differential-algebraic equations. Electron. Trans. Numer. Anal. 1993, 1, 1–10.
  17. Somali, S. Implicit midpoint rule to the nonlinear degenerate boundary value problems. Int. J. Comput. Math. 2002, 79, 327–332.
  18. Xu, H.K.; Alghamdi, M.A.; Shahzad, N. The viscosity technique for the implicit midpoint rule of non-expansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 41.
  19. Ke, Y.; Ma, C. The generalized viscosity implicit rules of non-expansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 190.
  20. He, S.; Mao, Y.; Zhou, Z. The generalized viscosity implicit rules of asymptotically non-expansive mappings in Hilbert spaces. Appl. Math. Sci. 2017, 11, 549–560.
  21. Cai, G.; Shehu, Y.; Iyiola, O.S. Modified viscosity implicit rules for non-expansive mappings in Hilbert spaces. J. Fixed Point Theory Appl. 2017, 19, 549–560.
  22. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
  23. Lopez, G.; Martin, V.; Xu, H.K. Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems; Medical Physics Publishing: Madison, WI, USA, 2012; pp. 243–279.
  24. Goebel, K.; Kirk, W. Topics in Metric Fixed-Point Theory; Cambridge University Press: Cambridge, UK, 1990.
  25. Yao, Y.; Maruster, S. Strong convergence of an iterative algorithm for variational inequalities in Banach spaces. Math. Comput. Model. 2011, 54, 325–329.
  26. Zhou, H. Convergence theorems of fixed points for k-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2008, 69, 456–462.
  27. Postolache, M.; Nandal, A.; Chugh, R. Strong convergence of a new generalized viscosity implicit rule and some applications in Hilbert space. Mathematics 2019, 7, 773.
  28. Nandal, A.; Chugh, R.; Postolache, M. Iteration process for fixed point problems and zeros of maximal monotone operators. Symmetry 2019, 11, 655.
  29. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136.
Figure 1. Convergence in two dimensions.
Figure 2. Convergence in three dimensions.
Figure 3. Convergence in three dimensions.
Table 1. Convergence numerical comparison between Algorithms (12), (13), (15), and (17).

n     x_n → 0 for (12)   x_n → 0 for (13)   x_n → 0 for (15)   x_n → 0 for (17)
1          80.0000            80.0000            80.0000            80.0000
2          40.0000            35.0000            20.0000             4.8352
3          28.0000            21.8750             9.2857             0.4488
4          22.0000            15.7227             5.4167             0.0471
5          18.3333            12.1851             3.5701             0.0052
6          15.8333             9.9004             2.5402             0.0006
7          14.0064             8.3092             1.9052             0.0001
8          12.6058             7.1407             1.4849             0.0000
9          11.4935             6.2482             1.1918             0.0000
⋮
16          7.3614             3.2591             0.4069             0.0000
17          7.0268             3.0434             0.3633             0.0000
18          6.7257             2.8532             0.3265             0.0000
19          6.4530             2.6843             0.2951             0.0000
20          6.2048             2.5333             0.2681             0.0000

