Article

An Alternated Inertial Projection Algorithm for Multi-Valued Variational Inequality and Fixed Point Problems

1 College of Mathematics and Statistics, Sichuan University of Science and Engineering, Zigong 643000, China
2 South Sichuan Center for Applied Mathematics, Zigong 643000, China
3 Artificial Intelligence Key Laboratory of Sichuan Province, Zigong 643000, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(8), 1850; https://doi.org/10.3390/math11081850
Submission received: 14 March 2023 / Revised: 6 April 2023 / Accepted: 11 April 2023 / Published: 13 April 2023
(This article belongs to the Special Issue Advances on Nonlinear Functional Analysis)

Abstract: In this paper, we propose an alternated inertial projection algorithm for solving the multi-valued variational inequality problem and the fixed point problem of a demi-contractive mapping. On the one hand, the algorithm only requires that the mapping be pseudo-monotone. On the other hand, it is combined with the alternated inertial technique to accelerate the convergence speed. Global convergence of the algorithm is obtained under mild conditions. Preliminary numerical results show that the convergence speed of our algorithm is faster than that of some existing algorithms.

1. Introduction and Preliminaries

Let $C$ be a nonempty closed convex subset of $\mathbb{R}^n$ and let $F : \mathbb{R}^n \to 2^{\mathbb{R}^n}$ be a set-valued mapping with nonempty compact and convex values, where $\mathbb{R}^n$ is the $n$-dimensional Euclidean space. We consider the following multi-valued variational inequality problem (MVIP): find a vector $x \in C$ and a vector $t \in F(x)$ such that
$$\langle t, y - x \rangle \ge 0, \quad \text{for all } y \in C.$$
We denote by $S$ the solution set of problem (MVIP) and by $S_D$ the solution set of the dual multi-valued variational inequality: find a vector $x \in C$ that satisfies
$$\langle u, y - x \rangle \ge 0, \quad \text{for all } y \in C \text{ and all } u \in F(y).$$
When $F$ is continuous, we have $S_D \subseteq S$.
When the mapping $F$ is single-valued, problem (MVIP) reduces to problem (VIP), i.e., find a vector $x \in C$ that satisfies
$$\langle F(x), y - x \rangle \ge 0, \quad \text{for all } y \in C.$$
It is not difficult to see from the above definitions that the multi-valued variational inequality generalizes the classical variational inequality; it is therefore of great significance to study it.
In recent years, the multi-valued variational inequality has attracted extensive attention from scholars. In 2018, Ye [1] proposed an algorithm for solving the multi-valued variational inequality problem; this algorithm requires that the mapping $F$ be pseudo-monotone and that $S \ne \emptyset$. Chen [2] proposed an inertial Popov extragradient projection algorithm for solving the multi-valued variational inequality problem that needs only one value of the mapping $F$ per iteration, which reduces the computational cost; however, it requires $F$ to be pseudo-monotone and Lipschitz continuous. He [3] proposed a new algorithm for the variational inequality problem without monotonicity. However, the next iterate $x_{k+1}$ is generated by projecting a vector onto the intersection of the feasible set $C$ and $k+1$ half-spaces; hence, the computational cost of computing $x_{k+1}$ increases as $k$ increases.
Next, we introduce the fixed point problem (FPP) [4].
Let $T : \mathbb{R}^n \to \mathbb{R}^n$ be a mapping and let $Fix(T)$ be the set of fixed points of $T$, that is,
$$Fix(T) = \{x \in \mathbb{R}^n : T(x) = x\}.$$
With the development of variational inequality algorithms, common solutions of variational inequality and fixed point problems have been widely studied; see, for example, [5,6,7,8,9,10,11,12,13,14,15]. The reason for studying this class of problems is that some constraints in mathematical models can be expressed as variational inequality or fixed point problems, especially in practical applications such as signal processing, network resource allocation, and image restoration. The multi-valued variational inequality problem passes from single-valued to multi-valued mappings, which complicates the structure of the variational inequality. Consequently, research on common solutions of multi-valued variational inequality and fixed point problems is not as well developed as that for the classical variational inequality.
In 2015, Tu [16] introduced an algorithm for solving the multi-valued variational inequality problem and the fixed point problem of a nonexpansive mapping; it requires that the mapping $F$ be pseudo-monotone. In addition, as is well known, the class of nonexpansive mappings is less general than the classes of quasi-nonexpansive, strictly pseudo-contractive, and demi-contractive mappings. It is therefore very interesting to study common solutions of the variational inequality problem and fixed point problems for these wider classes of mappings. In 2018, Zhang [17] proposed an algorithm for solving the multi-valued variational inequality problem and the fixed point problem of strictly pseudo-contractive mappings. Compared with the algorithm in [16], this algorithm does not require any monotonicity of the mapping $F$, and it solves the fixed point problem for the larger class of strictly pseudo-contractive mappings, so it applies to more common-solution problems. However, similar to the algorithm in [3], it must compute, in each iteration, a projection onto the intersection of $C$ and $k+1$ half-spaces; the computational cost therefore grows as the number of iterations increases, which seriously affects the convergence speed of the algorithm.
In order to improve the convergence speed of algorithms, scholars have done extensive research [18,19,20,21]. The inertial technique can accelerate the convergence of an algorithm. However, for the general inertial technique, the Fejér monotonicity of $\{\|x_n - x^*\|\}$ ($x^* \in S$) is lost, which makes the inertial technique ineffective in speeding up the algorithm in some cases. In order to overcome this shortcoming, Mu [22] proposed an alternated inertial method in 2015, which, to some extent, ensures the Fejér monotonicity of $\{\|x_{2n} - x^*\|\}$ ($x^* \in S$); see [23,24,25].
Inspired by the above work, this paper proposes an improved alternated inertial projection algorithm for finding a common solution of the multi-valued variational inequality and fixed point problems. Our algorithm is combined with the alternated inertial technique, which ensures the Fejér monotonicity of $\{\|x_{2n} - x^*\|\}$ ($x^* \in S$) and thus improves the convergence rate of the algorithm. In addition, under some mild conditions, we prove that the sequence generated by the algorithm is globally convergent.
The paper is structured as follows: Section 1 collects some definitions and lemmas; Section 2 introduces our algorithm and proves its global convergence; Section 3 presents two numerical experiments that demonstrate the effectiveness of the algorithm; Section 4 gives the conclusion.
For the completeness of the next section, we fix some notation and give the following definitions.
The inner product in $\mathbb{R}^n$ is denoted by $\langle \cdot , \cdot \rangle$ and the corresponding norm by $\| \cdot \|$. The weak convergence of $x_n$ to $x$ is denoted by $x_n \rightharpoonup x$ (in $\mathbb{R}^n$ this coincides with strong convergence).
Definition 1 ([26]).
Let $F : \mathbb{R}^n \to 2^{\mathbb{R}^n}$ be a multi-valued mapping and let $C \subseteq \mathbb{R}^n$ be a nonempty closed convex set. We say that
(i) $F$ is outer-semicontinuous if and only if the graph of $F$ is closed.
(ii) $F$ is inner-semicontinuous at $x \in C$ if and only if, for any $y \in F(x)$ and any sequence $(x_k)_{k \in \mathbb{N}}$ with $x_k \to x$, there exists a sequence $(y_k)_{k \in \mathbb{N}}$ with $y_k \in F(x_k)$ for all $k \in \mathbb{N}$ such that $y_k \to y$.
(iii) $F$ is continuous if and only if it is both outer-semicontinuous and inner-semicontinuous.
Definition 2.
Let $C \subseteq \mathbb{R}^n$ be a nonempty closed convex set. A mapping $F : \mathbb{R}^n \to 2^{\mathbb{R}^n}$ is called
(i) pseudo-monotone on $C$ if and only if, for all $x, y \in C$ and $t_x \in F(x)$,
$$\langle t_x, y - x \rangle \ge 0 \ \text{implies} \ \langle t_y, y - x \rangle \ge 0 \ \text{for all} \ t_y \in F(y).$$
A mapping $T : \mathbb{R}^n \to \mathbb{R}^n$ is called
(ii) nonexpansive if and only if
$$\|T(x) - T(y)\| \le \|x - y\|, \quad \text{for all } x, y \in \mathbb{R}^n;$$
(iii) $\rho$-demi-contractive with $0 \le \rho < 1$ if and only if
$$\|T(x) - z\|^2 \le \|x - z\|^2 + \rho \|x - T(x)\|^2, \quad \text{for all } z \in Fix(T), \ x \in \mathbb{R}^n,$$
or, equivalently,
$$\langle Tx - x, x - z \rangle \le \frac{\rho - 1}{2} \|x - Tx\|^2, \quad \text{for all } z \in Fix(T), \ x \in \mathbb{R}^n;$$
(iv) $\rho$-strictly pseudo-contractive if and only if there exists $\rho \in [0, 1)$ satisfying
$$\|T(x) - T(y)\|^2 \le \|x - y\|^2 + \rho \|(I - T)x - (I - T)y\|^2, \quad \text{for all } x, y \in \mathbb{R}^n,$$
or, equivalently,
$$\langle Tx - Ty, x - y \rangle \le \|x - y\|^2 - \frac{1 - \rho}{2} \|(I - T)x - (I - T)y\|^2, \quad \text{for all } x, y \in \mathbb{R}^n.$$
Remark 1.
It is easy to see from the above definitions that: if $T$ is nonexpansive, then $T$ is $\rho$-strictly pseudo-contractive, and if $T$ is $\rho$-strictly pseudo-contractive, then $T$ is $\rho$-demi-contractive. The converses are not necessarily true.
Example 1.
Let the mapping $T : [-2, 1] \to [-2, 1]$ be defined by
$$T x = -x^2 - x.$$
One can check that $T$ is a demi-contractive mapping, but neither nonexpansive nor strictly pseudo-contractive; hence the class of demi-contractive mappings is strictly larger than the classes of nonexpansive and strictly pseudo-contractive mappings.
Definition 3 ([27]).
Assume that the mapping $T$ satisfies $Fix(T) \ne \emptyset$. Then $T$ is said to be demiclosed at zero if, for any sequence $\{x_n\}$ in $\mathbb{R}^n$, the following implication holds:
$$x_n \rightharpoonup x \ \text{and} \ (I - T)x_n \to 0 \ \text{imply} \ x \in Fix(T).$$
Lemma 1 ([28]).
For a given multi-valued variational inequality problem (MVIP(F,C)), let its residual function be $r(x, \mu, \xi) = x - P_C(x - \mu \xi)$, $\xi \in F(x)$. Then
$x$ is a solution to (MVIP(F,C)) if and only if $r(x, \mu, \xi) = 0$, for all $\mu > 0$.
Lemma 2 ([1]).
Let $C \subseteq \mathbb{R}^n$ be a nonempty closed convex set. Then we have
(i) $\langle P_C(x) - x, y - P_C(x) \rangle \ge 0$, for all $y \in C$ and all $x \in \mathbb{R}^n$.
(ii) $\|P_C(x) - P_C(y)\| \le \|x - y\|$, for all $x, y \in \mathbb{R}^n$.
(iii) $\|P_C(x) - z\|^2 \le \|x - z\|^2 - \|P_C(x) - x\|^2$, for all $x \in \mathbb{R}^n$ and all $z \in C$.
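As a quick numerical sanity check of Lemma 2 (illustrative only, not part of the paper), one can test the three inequalities for the Euclidean projection onto the closed unit ball, a simple closed convex set; the helper name proj_ball is ours:

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Euclidean projection onto the closed ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

rng = np.random.default_rng(0)
eps = 1e-10  # tolerance for floating-point round-off
for _ in range(1000):
    x, w = 3 * rng.normal(size=3), 3 * rng.normal(size=3)
    y = proj_ball(3 * rng.normal(size=3))  # y and z lie in C (the unit ball)
    z = proj_ball(3 * rng.normal(size=3))
    px, pw = proj_ball(x), proj_ball(w)
    assert np.dot(px - x, y - px) >= -eps                          # (i)
    assert np.linalg.norm(px - pw) <= np.linalg.norm(x - w) + eps  # (ii)
    assert (np.linalg.norm(px - z) ** 2
            <= np.linalg.norm(x - z) ** 2
               - np.linalg.norm(px - x) ** 2 + eps)                # (iii)
```

All three properties hold for any exact Euclidean projection; the ball is chosen only because its projection has a closed form.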
Lemma 3 ([29]).
The following equalities are true:
(i) $2\langle x, y \rangle = \|x\|^2 + \|y\|^2 - \|x - y\|^2$, for all $x, y \in \mathbb{R}^n$.
(ii) $2\langle x - y, x - z \rangle = \|x - y\|^2 + \|x - z\|^2 - \|y - z\|^2$, for all $x, y, z \in \mathbb{R}^n$.
(iii) For $x, y \in \mathbb{R}^n$ and $\alpha, \beta \in [0, 1]$ with $\alpha + \beta = 1$,
$$\|\alpha x + \beta y\|^2 = \alpha \|x\|^2 + \beta \|y\|^2 - \alpha \beta \|x - y\|^2.$$
(iv) For all $x, y \in \mathbb{R}^n$, we have
$$\|(1 + \theta)x - \theta y\|^2 = (1 + \theta)\|x\|^2 - \theta \|y\|^2 + \theta(1 + \theta)\|x - y\|^2.$$
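Since the statements of Lemma 3 are exact algebraic identities in the inner product, they can be confirmed numerically at arbitrary vectors; the following sketch is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)
alpha = 0.3; beta = 1 - alpha; theta = 0.25
n2 = lambda v: np.dot(v, v)  # squared Euclidean norm

# (i)  2<x, y> = |x|^2 + |y|^2 - |x - y|^2
assert np.isclose(2 * np.dot(x, y), n2(x) + n2(y) - n2(x - y))
# (ii) 2<x - y, x - z> = |x - y|^2 + |x - z|^2 - |y - z|^2
assert np.isclose(2 * np.dot(x - y, x - z), n2(x - y) + n2(x - z) - n2(y - z))
# (iii) |a x + b y|^2 = a|x|^2 + b|y|^2 - a b |x - y|^2  with a + b = 1
assert np.isclose(n2(alpha * x + beta * y),
                  alpha * n2(x) + beta * n2(y) - alpha * beta * n2(x - y))
# (iv) |(1+t)x - t y|^2 = (1+t)|x|^2 - t|y|^2 + t(1+t)|x - y|^2
assert np.isclose(n2((1 + theta) * x - theta * y),
                  (1 + theta) * n2(x) - theta * n2(y) + theta * (1 + theta) * n2(x - y))
```

Identity (iv) is the one used repeatedly in Section 2 to handle the inertial term $\omega_{2n+1} = (1+\theta)x_{2n+1} - \theta x_{2n}$.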
Lemma 4 ([1]).
Let $x \in \mathbb{R}^n$ and $\xi \in F(x)$. Then the following statements are true:
(i) $\|r(x, \alpha, \xi)\|$ is nondecreasing in $\alpha > 0$;
(ii) $\frac{\|r(x, \alpha, \xi)\|}{\alpha}$ is nonincreasing in $\alpha > 0$.
Lemma 5 ([30]).
Let $C$ be a closed convex subset of $\mathbb{R}^n$, let $h$ be a real-valued function on $\mathbb{R}^n$, and let $K = \{x \in C : h(x) \le 0\}$. If $K$ is nonempty and $h$ is Lipschitz continuous on $C$ with modulus $\theta > 0$, then
$$\mathrm{dist}(x, K) \ge \theta^{-1} \max\{h(x), 0\}, \quad \text{for all } x \in C.$$

2. Main Results

In this section, we introduce an alternated inertial projection algorithm for finding a common solution of the multi-valued variational inequality and fixed point problems. The algorithm only requires the following mild assumptions:
Assumption 1. 
$S \cap Fix(T) \ne \emptyset$.
Assumption 2. 
The multi-valued mapping $F$ is continuous on $\mathbb{R}^n$ with nonempty compact convex values, and is pseudo-monotone on $C$. The mapping $T : \mathbb{R}^n \to \mathbb{R}^n$ is $\rho$-demi-contractive and demiclosed at zero, with $\rho \in \left[0, 1 - \frac{\sqrt{2}}{2}\right)$.
Assumption 3. 
$\theta \in \left[0, \frac{2(1-\rho)^2 - 1}{2(1-\rho)^2 + 1}\right)$ and $\frac{1+\theta}{2(1-\rho)} < \beta_n < \sqrt{\frac{1-\theta^2}{2}}$.
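Reading the bounds of Assumption 3 as $\theta \in [0, (2(1-\rho)^2 - 1)/(2(1-\rho)^2 + 1))$ and $(1+\theta)/(2(1-\rho)) < \beta_n < \sqrt{(1-\theta^2)/2}$ (our reconstruction of the extraction-damaged formulas), one can verify numerically that the admissible window for $\beta_n$ is nonempty whenever $\rho \in [0, 1 - \sqrt{2}/2)$ and $\theta$ lies below its bound; the function names below are ours:

```python
import math

def parameter_window(rho, theta):
    """Admissible (lo, hi) range for beta_n under the reconstructed Assumption 3."""
    lo = (1 + theta) / (2 * (1 - rho))
    hi = math.sqrt((1 - theta ** 2) / 2)
    return lo, hi

def theta_bound(rho):
    """Upper bound on theta guaranteeing the window above is nonempty."""
    q = 2 * (1 - rho) ** 2
    return (q - 1) / (q + 1)

# For rho in [0, 1 - sqrt(2)/2) and theta strictly below the bound,
# the window (lo, hi) is nonempty.
for rho in (0.0, 0.1, 0.25):
    tb = theta_bound(rho)
    for theta in (0.0, 0.5 * tb, 0.9 * tb):
        lo, hi = parameter_window(rho, theta)
        assert lo < hi, (rho, theta)
```

At $\theta$ equal to its bound the window closes ($lo = hi$), which is consistent with the strict inequality required in Assumption 3.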
Remark 2. 
In Algorithm 1, if $\theta = 0$ and the mapping $T$ is the identity mapping (i.e., $T(x) = x$), then Algorithm 1 reduces to He's algorithm proposed in [3].
Lemma 6 ([1]).
For any $\xi_n \in F(\omega_n)$, the step size is well defined; that is, there exists a nonnegative integer $m$ satisfying the inequality
$$l^m \langle \xi_n - t_n^{(m)}, \omega_n - y_n^{(m)} \rangle \le \sigma \|\omega_n - y_n^{(m)}\|^2.$$
Lemma 7.
Let $\bar{x} \in S$ and let $\{x_n\}$ be the sequence generated by Algorithm 1. Then
$$h_n(\bar{x}) \le 0 \quad \text{and} \quad h_n(\omega_n) \ge (1 - \sigma) \|r(\omega_n, \alpha_n, \xi_n)\|^2.$$
Algorithm 1. Choose parameters $\sigma, l \in (0, 1)$ and initial points $x_0, x_1 \in C$.
Step 1. Set
$$\omega_n = \begin{cases} x_n, & n \text{ even}, \\ x_n + \theta(x_n - x_{n-1}), & n \text{ odd}. \end{cases}$$
Take $\xi_n \in F(\omega_n)$ arbitrarily. If $\omega_n - P_C(\omega_n - \xi_n) = 0$ and $T(\omega_n) = \omega_n$, then stop; otherwise, go to Step 2.
Step 2. Let $m_n$ be the smallest nonnegative integer $m$ such that
$$l^m \langle \xi_n - t_n^{(m)}, \omega_n - y_n^{(m)} \rangle \le \sigma \|\omega_n - y_n^{(m)}\|^2,$$
where $y_n^{(m)} = P_C(\omega_n - l^m \xi_n)$ and $t_n^{(m)} = P_{F(y_n^{(m)})}(\xi_n)$. Set $t_n = t_n^{(m_n)}$, $y_n = y_n^{(m_n)}$, and $\alpha_n = l^{m_n}$.
Step 3. Compute $d_n = P_{H_n}(\omega_n)$, where
$$h_n(v) = \langle \omega_n - y_n - \alpha_n(\xi_n - t_n), v - y_n \rangle, \quad H_n = \{v : h_n(v) \le 0\}.$$
Step 4. Compute $x_{n+1} = (1 - \beta_n) d_n + \beta_n T d_n$. Set $n = n + 1$ and go to Step 1.
Proof. 
By the definition of $h_n$ and of the step size, the following holds:
$$h_n(\omega_n) = \langle \omega_n - y_n - \alpha_n(\xi_n - t_n), \omega_n - y_n \rangle = \|\omega_n - y_n\|^2 - \alpha_n \langle \xi_n - t_n, \omega_n - y_n \rangle \ge \|\omega_n - y_n\|^2 - \sigma \|\omega_n - y_n\|^2 = (1 - \sigma)\|\omega_n - y_n\|^2 = (1 - \sigma)\|r(\omega_n, \alpha_n, \xi_n)\|^2.$$
In addition, according to the pseudo-monotonicity of $F$, $\bar{x} \in S$, and $t_n \in F(y_n)$, we can obtain
$$\langle t_n, \bar{x} - y_n \rangle \le 0. \quad (5)$$
From (5) and Lemma 2(i), since $y_n = P_C(\omega_n - \alpha_n \xi_n)$ and $\bar{x} \in C$, we have
$$h_n(\bar{x}) = \langle \omega_n - y_n - \alpha_n(\xi_n - t_n), \bar{x} - y_n \rangle = \langle \omega_n - y_n - \alpha_n \xi_n, \bar{x} - y_n \rangle + \alpha_n \langle t_n, \bar{x} - y_n \rangle \le \langle \omega_n - y_n - \alpha_n \xi_n, \bar{x} - y_n \rangle \le 0.$$
The proof is completed. □
Remark 3.
For all $\bar{x} \in S$, we have $h_n(\bar{x}) \le 0$; hence $\bar{x} \in H_n$ for all $n \in \mathbb{N}$.
Lemma 8.
Let $\{x_n\}$ be the sequence generated by Algorithm 1 and let $p \in S \cap Fix(T)$. Then
$$\|x_{n+1} - p\|^2 \le \|d_n - p\|^2 - \beta_n(1 - \rho - \beta_n)\|d_n - T d_n\|^2.$$
Proof. 
From the definition of $x_{n+1}$ and Definition 2(iii), we get
$$\|x_{n+1} - p\|^2 = \|\beta_n T d_n + (1 - \beta_n) d_n - p\|^2 = \|\beta_n (T d_n - p) + (1 - \beta_n)(d_n - p)\|^2$$
$$= \beta_n^2 \|T d_n - p\|^2 + (1 - \beta_n)^2 \|d_n - p\|^2 + 2\beta_n(1 - \beta_n)\langle T d_n - p, d_n - p \rangle$$
$$= \beta_n^2 \|T d_n - p\|^2 + 2\beta_n(1 - \beta_n)\langle T d_n - d_n, d_n - p \rangle + 2\beta_n(1 - \beta_n)\|d_n - p\|^2 + (1 - \beta_n)^2 \|d_n - p\|^2$$
$$\le (1 - \beta_n)^2 \|d_n - p\|^2 + \beta_n^2 \|d_n - p\|^2 + \beta_n^2 \rho \|T d_n - d_n\|^2 + \beta_n(1 - \beta_n)(\rho - 1)\|T d_n - d_n\|^2 + 2\beta_n(1 - \beta_n)\|d_n - p\|^2$$
$$= \|d_n - p\|^2 - \beta_n(1 - \rho - \beta_n)\|T d_n - d_n\|^2.$$
The proof is completed.   □
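Lemma 8 can be spot-checked numerically with a simple demi-contractive map (our illustrative choice, not taken from the paper): $T(x) = -2x$ on $\mathbb{R}$ is $(1/3)$-demi-contractive with $Fix(T) = \{0\}$, since $|Tx|^2 = 4x^2 = x^2 + \frac{1}{3}|x - Tx|^2$. For this particular $T$ the inequality of Lemma 8 in fact holds with equality:

```python
import numpy as np

T = lambda x: -2.0 * x      # (1/3)-demi-contractive on R with Fix(T) = {0}
rho, p = 1.0 / 3.0, 0.0

rng = np.random.default_rng(2)
for _ in range(1000):
    d = rng.uniform(-5.0, 5.0)
    beta = rng.uniform(0.01, 1.0 - rho - 0.01)   # beta_n in (0, 1 - rho)
    x_next = (1.0 - beta) * d + beta * T(d)      # Step 4 of Algorithm 1
    lhs = (x_next - p) ** 2
    rhs = (d - p) ** 2 - beta * (1.0 - rho - beta) * (T(d) - d) ** 2
    assert lhs <= rhs + 1e-9                     # Lemma 8 (equality for this T)
```

The check also illustrates why $\beta_n < 1 - \rho$ matters: for $\beta_n$ outside $(0, 1 - \rho)$ the damping coefficient $\beta_n(1 - \rho - \beta_n)$ becomes nonpositive and the bound carries no information.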
Lemma 9. 
The sequence $\{x_{2n}\}$ generated by Algorithm 1 is bounded.
Proof. 
By Lemmas 2, 3, and 8 and the definition of $d_n$, for all $p \in S \cap Fix(T)$ we have
$$\|x_{2n+2} - p\|^2 \le \|d_{2n+1} - p\|^2 - \beta_{2n+1}(1 - \rho - \beta_{2n+1})\|T d_{2n+1} - d_{2n+1}\|^2$$
$$= \|P_{H_{2n+1}}(\omega_{2n+1}) - p\|^2 - \beta_{2n+1}(1 - \rho - \beta_{2n+1})\|T d_{2n+1} - d_{2n+1}\|^2$$
$$\le \|\omega_{2n+1} - p\|^2 - \|\omega_{2n+1} - d_{2n+1}\|^2 - \beta_{2n+1}(1 - \rho - \beta_{2n+1})\|T d_{2n+1} - d_{2n+1}\|^2$$
$$= \|(1 + \theta)(x_{2n+1} - p) - \theta(x_{2n} - p)\|^2 - \|\omega_{2n+1} - d_{2n+1}\|^2 - \beta_{2n+1}(1 - \rho - \beta_{2n+1})\|T d_{2n+1} - d_{2n+1}\|^2$$
$$= (1 + \theta)\|x_{2n+1} - p\|^2 - \theta\|x_{2n} - p\|^2 + \theta(1 + \theta)\|x_{2n+1} - x_{2n}\|^2 - \|\omega_{2n+1} - d_{2n+1}\|^2 - \beta_{2n+1}(1 - \rho - \beta_{2n+1})\|T d_{2n+1} - d_{2n+1}\|^2,$$
where the last equality uses Lemma 3(iv) and $\omega_{2n+1} = (1 + \theta)x_{2n+1} - \theta x_{2n}$.
Again, by Lemma 8, Lemma 2(iii), and the facts $\omega_{2n} = x_{2n}$ and $d_{2n} = P_{H_{2n}}(x_{2n})$, we obtain
$$\|x_{2n+2} - p\|^2 \le (1 + \theta)\left[\|d_{2n} - p\|^2 - \beta_{2n}(1 - \rho - \beta_{2n})\|T d_{2n} - d_{2n}\|^2\right] - \theta\|x_{2n} - p\|^2 + \theta(1 + \theta)\|x_{2n+1} - x_{2n}\|^2 - \|\omega_{2n+1} - d_{2n+1}\|^2 - \beta_{2n+1}(1 - \rho - \beta_{2n+1})\|T d_{2n+1} - d_{2n+1}\|^2$$
$$\le (1 + \theta)\left[\|x_{2n} - p\|^2 - \|x_{2n} - d_{2n}\|^2\right] - (1 + \theta)\beta_{2n}(1 - \rho - \beta_{2n})\|T d_{2n} - d_{2n}\|^2 - \theta\|x_{2n} - p\|^2 + \theta(1 + \theta)\|x_{2n+1} - x_{2n}\|^2 - \|\omega_{2n+1} - d_{2n+1}\|^2 - \beta_{2n+1}(1 - \rho - \beta_{2n+1})\|T d_{2n+1} - d_{2n+1}\|^2$$
$$= \|x_{2n} - p\|^2 - (1 + \theta)\|x_{2n} - d_{2n}\|^2 + \theta(1 + \theta)\|x_{2n+1} - x_{2n}\|^2 - (1 + \theta)\beta_{2n}(1 - \rho - \beta_{2n})\|T d_{2n} - d_{2n}\|^2 - \|\omega_{2n+1} - d_{2n+1}\|^2 - \beta_{2n+1}(1 - \rho - \beta_{2n+1})\|T d_{2n+1} - d_{2n+1}\|^2. \quad (7)$$
By the definition of $x_{2n+1}$ and Young's inequality, we have
$$(1 + \theta)\|x_{2n} - d_{2n}\|^2 = (1 + \theta)\|x_{2n} - x_{2n+1} + \beta_{2n}(T d_{2n} - d_{2n})\|^2$$
$$= (1 + \theta)\|x_{2n} - x_{2n+1}\|^2 + (1 + \theta)\beta_{2n}^2\|T d_{2n} - d_{2n}\|^2 - 2(1 + \theta)\beta_{2n}\langle d_{2n} - T d_{2n}, x_{2n} - x_{2n+1} \rangle$$
$$\ge (1 + \theta)\|x_{2n} - x_{2n+1}\|^2 + (1 + \theta)\beta_{2n}^2\|T d_{2n} - d_{2n}\|^2 - 2(1 + \theta)\beta_{2n}\|d_{2n} - T d_{2n}\|\,\|x_{2n} - x_{2n+1}\|$$
$$\ge (1 + \theta)\|x_{2n} - x_{2n+1}\|^2 + (1 + \theta)\beta_{2n}^2\|T d_{2n} - d_{2n}\|^2 - \frac{(1 + \theta)^2}{2}\|d_{2n} - T d_{2n}\|^2 - 2\beta_{2n}^2\|x_{2n} - x_{2n+1}\|^2. \quad (8)$$
Substituting (8) into (7), we obtain
$$\|x_{2n+2} - p\|^2 \le \|x_{2n} - p\|^2 - (1 - \theta^2 - 2\beta_{2n}^2)\|x_{2n} - x_{2n+1}\|^2 - \|\omega_{2n+1} - d_{2n+1}\|^2 - (1 + \theta)\left[\beta_{2n} - \rho\beta_{2n} - \frac{1 + \theta}{2}\right]\|d_{2n} - T d_{2n}\|^2 - \beta_{2n+1}(1 - \rho - \beta_{2n+1})\|T d_{2n+1} - d_{2n+1}\|^2. \quad (9)$$
According to Assumption 3, all the coefficients on the right-hand side of (9) except that of $\|x_{2n} - p\|^2$ are nonpositive; hence the sequence $\{\|x_{2n} - p\|\}$ is decreasing and therefore convergent, and in particular $\{x_{2n}\}$ is bounded. □
Lemma 10. 
Let $\{x_{2n}\}$ be the sequence generated by Algorithm 1 and let $x^*$ be any cluster point of $\{x_{2n}\}$. Then $x^* \in Fix(T)$.
Proof. 
Since $x^*$ is a cluster point of $\{x_{2n}\}$, there exists a subsequence $\{x_{2n_k}\} \subseteq \{x_{2n}\}$ satisfying
$$x_{2n_k} \to x^*.$$
From (9) and the convergence of $\{\|x_{2n} - p\|\}$, we can deduce that
$$x_{2n} - x_{2n+1} \to 0, \quad d_{2n} - T d_{2n} \to 0, \quad (10)$$
$$d_{2n+1} - T d_{2n+1} \to 0, \quad \omega_{2n+1} - d_{2n+1} \to 0, \quad \text{as } n \to +\infty. \quad (11)$$
By the definition of $x_{2n+1}$, we have
$$\|x_{2n} - d_{2n}\| = \|x_{2n} - x_{2n+1} + \beta_{2n}(T d_{2n} - d_{2n})\| \le \|x_{2n} - x_{2n+1}\| + \beta_{2n}\|d_{2n} - T d_{2n}\|,$$
and therefore
$$x_{2n} - d_{2n} \to 0. \quad (12)$$
Since $d_{2n} = P_{H_{2n}}(x_{2n})$, this implies
$$\lim_{n \to +\infty} \mathrm{dist}(x_{2n}, H_{2n}) = 0. \quad (13)$$
Next, it suffices to prove that $x^* \in Fix(T)$. By (12) and $x_{2n_k} \to x^*$, we get
$$d_{2n_k} \to x^*. \quad (14)$$
Since $T$ is demiclosed at zero, Definition 3, (10), and (14) imply
$$x^* \in Fix(T).$$
The proof is completed.   □
Lemma 11. 
Assume that Assumptions 1–3 hold, let $\{x_{2n}\}$ be the sequence generated by Algorithm 1, and let $x^*$ be any cluster point of $\{x_{2n}\}$. Then $x^* \in S$.
Proof. 
Since $\{x_{2n}\}$ is bounded and $F$ and $r(\cdot)$ are continuous, we conclude that $\{y_{2n}\}$, $\{\xi_{2n}\}$, and $\{t_{2n}\}$ are bounded; therefore $\{\omega_{2n} - y_{2n} - \alpha_{2n}(\xi_{2n} - t_{2n})\}$ is bounded, and there exists a constant $M > 0$ satisfying
$$\|\omega_{2n} - y_{2n} - \alpha_{2n}(\xi_{2n} - t_{2n})\| \le M.$$
By the definition of $h_{2n}$, for all $x, y \in \mathbb{R}^n$ we have
$$|h_{2n}(x) - h_{2n}(y)| = |\langle \omega_{2n} - y_{2n} - \alpha_{2n}(\xi_{2n} - t_{2n}), x - y \rangle| \le \|\omega_{2n} - y_{2n} - \alpha_{2n}(\xi_{2n} - t_{2n})\| \|x - y\| \le M\|x - y\|,$$
so $h_{2n}(\cdot)$ is $M$-Lipschitz continuous. From Lemmas 5 and 7 (recall that $\omega_{2n} = x_{2n}$), we have
$$\mathrm{dist}(x_{2n}, H_{2n}) \ge M^{-1} h_{2n}(x_{2n}) \ge M^{-1}(1 - \sigma)\|r(x_{2n}, \alpha_{2n}, \xi_{2n})\|^2.$$
Letting $n \to +\infty$ on both sides of this inequality, (13) implies
$$\lim_{n \to +\infty} \|r(x_{2n}, \alpha_{2n}, \xi_{2n})\| = 0. \quad (15)$$
Since $\{x_{2n}\}$ and $\{\xi_{2n}\}$ are bounded, let $x^*$ and $\bar{\xi}$ be cluster points of $\{x_{2n}\}$ and $\{\xi_{2n}\}$; then there exist subsequences $\{x_{2n_k}\} \subseteq \{x_{2n}\}$ and $\{\xi_{2n_k}\} \subseteq \{\xi_{2n}\}$ such that
$$x_{2n_k} \to x^*, \quad \xi_{2n_k} \to \bar{\xi}.$$
Since $F$ is outer-semicontinuous, its graph is closed by Definition 1. Together with $x_{2n_k} \to x^*$, $\xi_{2n_k} \to \bar{\xi}$, and $\xi_{2n_k} \in F(x_{2n_k})$, this yields
$$\bar{\xi} \in F(x^*).$$
Next, we prove the claim in two cases.
Case 1: $\limsup_{k \to +\infty} \alpha_{2n_k} > b > 0$. Then there exists a subsequence $\{\alpha_{2n_{k_j}}\} \subseteq \{\alpha_{2n_k}\}$ such that $\lim_{j \to +\infty} \alpha_{2n_{k_j}} > b$; hence there exists $N > 0$ such that $\alpha_{2n_{k_j}} > b$ for all $j \ge N$.
Then, by Lemma 4, we have
$$0 \le \|r(x_{2n_{k_j}}, 1, \xi_{2n_{k_j}})\| \le \frac{\|r(x_{2n_{k_j}}, \min\{\alpha_{2n_{k_j}}, 1\}, \xi_{2n_{k_j}})\|}{\min\{\alpha_{2n_{k_j}}, 1\}} \le \frac{\|r(x_{2n_{k_j}}, \alpha_{2n_{k_j}}, \xi_{2n_{k_j}})\|}{\min\{b, 1\}}.$$
Taking $j \to +\infty$ and using (15), we have
$$\|r(x^*, 1, \bar{\xi})\| = 0.$$
Case 2: $\limsup_{k \to +\infty} \alpha_{2n_k} = 0$; then
$$\lim_{k \to +\infty} \alpha_{2n_k} = 0.$$
Together, $\lim_{k \to +\infty} \|r(x_{2n_k}, \alpha_{2n_k}, \xi_{2n_k})\| = 0$ and $\lim_{k \to +\infty} \alpha_{2n_k} = 0$ imply
$$\|x_{2n_k} - y_{2n_k}^{(m_{2n_k}-1)}\| \le \|x_{2n_k} - y_{2n_k}\| + \|y_{2n_k} - y_{2n_k}^{(m_{2n_k}-1)}\|$$
$$= \|r(x_{2n_k}, \alpha_{2n_k}, \xi_{2n_k})\| + \|P_C(x_{2n_k} - \alpha_{2n_k}\xi_{2n_k}) - P_C(x_{2n_k} - l^{-1}\alpha_{2n_k}\xi_{2n_k})\|$$
$$\le \|r(x_{2n_k}, \alpha_{2n_k}, \xi_{2n_k})\| + \|x_{2n_k} - \alpha_{2n_k}\xi_{2n_k} - (x_{2n_k} - l^{-1}\alpha_{2n_k}\xi_{2n_k})\|$$
$$= \|r(x_{2n_k}, \alpha_{2n_k}, \xi_{2n_k})\| + \alpha_{2n_k}(l^{-1} - 1)\|\xi_{2n_k}\| \to 0, \quad \text{as } k \to +\infty.$$
Combining $x_{2n_k} - y_{2n_k}^{(m_{2n_k}-1)} \to 0$ with $x_{2n_k} \to x^*$, we have
$$y_{2n_k}^{(m_{2n_k}-1)} \to x^*,$$
and, together with $\bar{\xi} \in F(x^*)$ and the inner-semicontinuity of $F$, we deduce that there exists a sequence $v_{2n_k}^{(m_{2n_k}-1)} \in F(y_{2n_k}^{(m_{2n_k}-1)})$ such that
$$v_{2n_k}^{(m_{2n_k}-1)} \to \bar{\xi}.$$
Further, since $t_{2n_k}^{(m_{2n_k}-1)} = P_{F(y_{2n_k}^{(m_{2n_k}-1)})}(\xi_{2n_k})$, we can obtain
$$0 \le \|t_{2n_k}^{(m_{2n_k}-1)} - \xi_{2n_k}\| \le \|v_{2n_k}^{(m_{2n_k}-1)} - \xi_{2n_k}\| \le \|v_{2n_k}^{(m_{2n_k}-1)} - \bar{\xi}\| + \|\bar{\xi} - \xi_{2n_k}\|,$$
which implies
$$t_{2n_k}^{(m_{2n_k}-1)} - \xi_{2n_k} \to 0.$$
In addition, by the definition of the step size $\alpha_{2n_k}$ (the line-search condition fails at $m_{2n_k} - 1$), we have
$$\alpha_{2n_k} l^{-1} \langle \xi_{2n_k} - t_{2n_k}^{(m_{2n_k}-1)}, x_{2n_k} - y_{2n_k}^{(m_{2n_k}-1)} \rangle > \sigma \|x_{2n_k} - y_{2n_k}^{(m_{2n_k}-1)}\|^2,$$
and there exists a positive integer $N$ such that $\alpha_{2n_k} l^{-1} \le 1$ for all $k \ge N$; Lemma 4 then implies
$$\|\xi_{2n_k} - t_{2n_k}^{(m_{2n_k}-1)}\| > \sigma \frac{\|r(x_{2n_k}, \alpha_{2n_k} l^{-1}, \xi_{2n_k})\|}{\alpha_{2n_k} l^{-1}} \ge \sigma \frac{\|r(x_{2n_k}, 1, \xi_{2n_k})\|}{1}, \quad \text{for all } k \ge N.$$
Letting $k \to +\infty$ on both sides of this inequality and combining with $t_{2n_k}^{(m_{2n_k}-1)} - \xi_{2n_k} \to 0$, we get
$$\lim_{k \to +\infty} \|r(x_{2n_k}, 1, \xi_{2n_k})\| = 0,$$
which implies
$$\|r(x^*, 1, \bar{\xi})\| = 0.$$
In both cases, Lemma 1 yields $x^* \in S$. □
Theorem 1. 
Let $\{x_n\}$ be the sequence generated by Algorithm 1. Then $\{x_n\}$ converges globally to a point of $S \cap Fix(T)$.
Proof. 
Let $x^*$ be any cluster point of the sequence $\{x_{2n}\}$; then Lemmas 10 and 11 imply
$$x^* \in S \cap Fix(T).$$
According to Lemma 9 (applied with $p = x^*$), the sequence $\{\|x_{2n} - x^*\|\}$ is convergent. Further, since $x^*$ is a cluster point of $\{x_{2n}\}$, the whole sequence $\{x_{2n}\}$ converges to $x^*$, i.e.,
$$x_{2n} \to x^*. \quad (16)$$
In addition, by (10) and (16), we get
$$\|x_{2n+1} - x^*\| \le \|x_{2n+1} - x_{2n}\| + \|x_{2n} - x^*\| \to 0, \quad \text{as } n \to +\infty.$$
Therefore, $\{x_{2n+1}\}$ also converges to $x^*$; hence $\{x_n\}$ converges globally to a point $x^* \in S \cap Fix(T)$. □
Remark 4. 
Our algorithm has the following advantages over the existing algorithms:
(i) Compared with [1,31], we extend the multi-valued variational inequality problem to the problem of finding a common solution of the multi-valued variational inequality and fixed point problems.
(ii) Compared with [3], our algorithm does not require Lipschitz continuity of the mapping and obtains convergence results under mere continuity.
(iii) Compared with [16,17]: on the one hand, our algorithm solves the multi-valued variational inequality problem together with the fixed point problem of a demi-contractive mapping, and the class of demi-contractive mappings is more general than the classes of strictly pseudo-contractive and nonexpansive mappings; on the other hand, we use the alternated inertial technique to accelerate the convergence speed of the algorithm.

3. Numerical Examples

In this section, several examples are given to demonstrate the effectiveness of Algorithm 1. We compare Algorithm 1 with Algorithm 1 of [1] (Ye+2018 for short) and Algorithm 3.1 of [31] (Wang+2020 for short); the results show that Algorithm 1 outperforms the other algorithms in terms of convergence speed and number of iterations. The tolerance $e$ means that the procedure stops when $\|r(x, \xi)\| \le e$.
In our experiments, the parameters of each algorithm are as follows:
for our Algorithm 1 and Algorithm 1 of [1], $\sigma = 0.9$, $l = 0.5$; for Algorithm 3.1 of [31], $r = 0.55$, $\sigma = 0.2$, $\rho = 3$.
Example 2. 
Let $C = \{x \in \mathbb{R}_+^n : \sum_{i=1}^n x_i = 1\}$ with $n = 4$, which is a nonempty closed convex set, and let $F : C \to 2^{\mathbb{R}^n}$ be given by
$$F(x) = \{(t;\ t + 2x_2;\ t + 3x_3;\ t + 4x_4) : t \in [0, 1]\},$$
and let the mappings $T_1 : \mathbb{R}^n \to \mathbb{R}^n$, $T_1(x) = (x_1; x_4; x_3; x_2)$, and $T_2 = I$ ($I$ is the identity mapping). It is obvious that the mapping $F$ is pseudo-monotone and $T_1$ is a demi-contractive mapping, and $x^* = (1; 0; 0; 0) \in S \cap Fix(T)$. In our Algorithm 1 we take $\theta = 0.1$, $\beta = 0.6$. Numerical results for Example 2 are shown in Tables 1 and 2 and Figure 1.
When $T = T_1$, we take $x_0 = (2/5; 1/5; 1/5; 1/5)$ and $x_1 = (1/2; 1/3; 1/6; 0)$ as initial values.
When $T = T_2 = I$, we have $S = S \cap Fix(T)$; we let $e = 10^{-3}$ and take the following different initial values for the numerical experiment:
(i) $x_0 = (2/5; 1/5; 1/5; 1/5)$, $x_1 = (1/2; 1/3; 1/6; 0)$;
(ii) $x_0 = (1/4; 1/4; 1/4; 1/4)$, $x_1 = (1/2; 1/3; 1/6; 0)$.
Example 3. 
Let the feasible set $C$ be $[-1, 1] \times [-1, 1]$ and let the mapping $F : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by
$$F(x_1, x_2) = (x_1 + x_2 + \sin(x_1);\ -x_1 + x_2 + \sin(x_2)).$$
One checks that $F$ satisfies Assumptions 1–3. We take $x_0 = (1; 1)$, $x_1 = (1; 1)$ as initial values for Algorithm 1, and $x_0 = (1; 1)$ as the initial value for Algorithm 1 of [1] and Algorithm 3.1 of [31]. Numerical results for Example 3 are shown in Table 3.
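For a single-valued $F$, the projection $P_{F(y)}(\xi)$ in Step 2 reduces to $F(y)$, so Algorithm 1 admits a compact implementation. The following Python sketch runs it on Example 3 with $T$ the identity; the sign of the first term in the second component of $F$ and the concrete initial point are our assumptions (the published symbols appear damaged by extraction), and the other parameter values follow this section:

```python
import numpy as np

def F(x):
    # Example 3 mapping (assumed signs)
    return np.array([x[0] + x[1] + np.sin(x[0]), -x[0] + x[1] + np.sin(x[1])])

def proj_C(x):
    # Projection onto C = [-1, 1] x [-1, 1]
    return np.clip(x, -1.0, 1.0)

def algorithm1(x0, x1, T=lambda v: v, theta=0.1, beta=0.6,
               sigma=0.9, l=0.5, tol=1e-6, max_iter=2000):
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, max_iter + 1):
        # Step 1: inertial extrapolation on odd iterations only
        w = x + theta * (x - x_prev) if n % 2 == 1 else x
        xi = F(w)
        if np.linalg.norm(w - proj_C(w - xi)) < tol:   # stopping test (T = I here)
            return w, n
        # Step 2: Armijo-type line search for alpha_n = l**m
        m = 0
        while m < 60:
            a = l ** m
            y = proj_C(w - a * xi)
            t = F(y)   # single-valued F, so P_{F(y)}(xi) = F(y)
            if a * np.dot(xi - t, w - y) <= sigma * np.dot(w - y, w - y):
                break
            m += 1
        # Step 3: d_n = projection of w onto H_n = {v : <g, v - y_n> <= 0}
        g = w - y - a * (xi - t)
        gg = np.dot(g, g)
        d = w - max(np.dot(g, w - y), 0.0) / gg * g if gg > 0 else w
        # Step 4: convex combination with T(d_n)
        x_prev, x = x, (1 - beta) * d + beta * T(d)
    return x, max_iter

sol, iters = algorithm1([1.0, 1.0], [1.0, 1.0])
# sol should approach (0, 0), the unique solution of this VI under the assumed signs
```

Under the assumed signs the symmetric part of the Jacobian of $F$ is positive definite on $C$, so the problem has the unique solution $(0, 0)$ and the iterates are expected to approach it, in line with the iteration counts reported in Table 3.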

4. Conclusions

Algorithm 1 proposed in this paper is designed to solve the multi-valued variational inequality problem together with the fixed point problem of demi-contractive mappings. It is worth noting that we incorporate the alternated inertial technique to accelerate the convergence rate of the algorithm and, to some extent, to ensure the Fejér monotonicity of $\{\|x_{2n} - x^*\|\}$. Unlike the algorithm of [31], our algorithm also studies the fixed point problem of demi-contractive mappings, a class more general than strictly pseudo-contractive mappings. In addition, the experiments show the effectiveness of Algorithm 1.

Author Contributions

Conceptualisation of the article and methodology, H.Z.; formal analysis, investigation, and writing—original draft preparation, H.Z. and Y.S.; software and validation, H.Z. and X.L.; writing—review and editing, H.Z., J.H. and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (Grant No. 11872043), Natural Science Foundation of Sichuan Province (Grant No. 2023NSFSC1299), Fund Project of Sichuan University of Science and Engineering in hit-haunting for talents (Grant No. 2022RC04), 2021 Innovation and Entrepreneurship Training Program for College Students of Sichuan University of Science and Engineering (Grant No. cx2021150), 2022 Graduate Innovation Project of Sichuan University of Science and Engineering (Grant No. Y2022190).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors wish to express their sincere thanks to the reviewers for their valuable comments and suggestions, which have made this article more readable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ye, M. An improved projection method for solving generalized variational inequality problems. Optimization 2018, 67, 1523–1533.
  2. Chen, Y.; Ye, M. An inertial Popov extragradient projection algorithm for solving multi-valued variational inequality problems. Optimization 2022.
  3. He, X.; Huang, N.; Li, X. Modified projection methods for solving multi-valued variational inequality without monotonicity. Netw. Spat. Econ. 2019.
  4. Debnath, P.; Konwar, N.; Radenović, S. (Eds.) Metric Fixed Point Theory: Applications in Science, Engineering and Behavioural Sciences; Springer: Singapore, 2021.
  5. Ye, M. An infeasible projection type algorithm for nonmonotone variational inequalities. Numer. Algorithms 2022, 89, 1723–1742.
  6. Ceng, L.C.; Yao, J.C. Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwanese J. Math. 2006, 10, 1293–1303.
  7. Zhao, T.Y.; Wang, D.Q.; Ceng, L.C. Quasi-inertial Tseng's extragradient algorithms for pseudomonotone variational inequalities and fixed point problems of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 2020, 42, 69–90.
  8. Shehu, Y.; Iyiola, O.S.; Reich, S. A modified inertial subgradient extragradient method for solving variational inequalities. Optim. Eng. 2021.
  9. Gibali, A.; Shehu, Y. An efficient iterative method for finding common fixed point and variational inequalities in Hilbert spaces. Optimization 2019, 68, 13–32.
  10. Yao, Y.; Postolache, M. Iterative methods for pseudomonotone variational inequalities and fixed point problems. J. Optim. Theory Appl. 2012, 155, 273–287.
  11. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient algorithms for variational inequality problems and fixed point problems. Optimization 2018, 67, 83–102.
  12. Thong, D.V.; Hieu, D.V. Some extragradient-viscosity algorithms for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 82, 761–789.
  13. Fang, C.J.; Wang, Y.; Yang, S.K. Two algorithms for solving single-valued variational inequalities and fixed point problems. J. Fixed Point Theory Appl. 2016, 18, 27–43.
  14. Godwin, E.C.; Alakoya, T.O.; Mewomo, O.T. Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. 2022.
  15. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Modified inertial subgradient extragradient method with self adaptive stepsize for solving monotone variational inequality and fixed point problems. Optimization 2021, 70, 545–574.
  16. Tu, K.; Xia, F.Q.; Yao, J.C. An iterative algorithm for solving generalized variational inequality problems and fixed point problems. Appl. Anal. 2016, 95, 209–225.
  17. Zhang, L.; Fang, C.; Chen, S. A projection-type method for solving multi-valued variational inequalities and fixed point problems. Optimization 2017, 66, 2329–2344.
  18. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  19. Alakoya, T.O.; Jolaoso, L.O.; Taiwo, A.; Mewomo, O.T. Inertial algorithm with self-adaptive step size for split common null point and common fixed point problems for multivalued mappings in Banach spaces. Optimization 2022, 71, 3041–3075.
  20. Godwin, E.C.; Izuchukwu, C.; Mewomo, O.T. Image restorations using a modified relaxed inertial technique for generalized split feasibility problems. Math. Methods Appl. Sci. 2023, 46, 5521–5544.
  21. Godwin, E.C.; Izuchukwu, C.; Mewomo, O.T. An inertial extrapolation method for solving generalized split feasibility problems in real Hilbert spaces. Boll. Unione Mat. Ital. 2021, 14, 379–401.
  22. Mu, Z.; Peng, Y. A note on the inertial proximal point method. Stat. Optim. Inf. Comput. 2015, 3, 241–248.
  23. Iutzeler, F.; Hendrickx, J.M. A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optim. Methods Softw. 2019, 34, 383–405.
  24. Iutzeler, F.; Malick, J. On the proximal gradient algorithm with alternated inertia. J. Optim. Theory Appl. 2018, 176, 688–710.
  25. Shehu, Y.; Iyiola, O.S. Projection methods with alternating inertial steps for variational inequalities: Weak and linear convergence. Appl. Numer. Math. 2020, 157, 315–337.
  26. Burachik, R.S.; Millan, R.D. A projection algorithm for non-monotone variational inequalities. Set-Valued Var. Anal. 2020, 28, 149–166.
  27. Linh, H.M.; Reich, S.; Thong, D.V.; Lan, N.P. Analysis of two variants of an inertial projection algorithm for finding the minimum-norm solutions of variational inequality and fixed point problems. Numer. Algorithms 2022, 89, 1695–1721.
  28. Belguidoum, O.; Grar, H. An improved projection algorithm for variational inequality problem with multivalued mapping. Numer. Algebra Control Optim. 2023, 13, 210–223.
  29. Jolaoso, L.O.; Shehu, Y.; Yao, J.C. Inertial extragradient type method for mixed variational inequalities without monotonicity. Math. Comput. Simul. 2022, 192, 353–369.
  30. Ye, M.; He, Y. A double projection method for solving variational inequalities without monotonicity. Comput. Optim. Appl. 2015, 60, 141–150.
  31. Wang, Z.; Chen, Z.; Xiao, Y.; Zhang, C. A new projection-type method for solving multi-valued mixed variational inequalities without monotonicity. Appl. Anal. 2020, 99, 1453–1466.
Figure 1. Comparison of iteration times of Algorithm 1, Algorithm 1 [1] and Algorithm 3.1 [31] at different initial points under T = T 2 in Example 2.
Table 1. The numerical result under T = T 1 in Example 2.
Algorithm 1
e         iter    CPU time    x*
10^-1     7       2.1875      (0.8854; 0.0465; 0.0215; 0.0463)
10^-2     17      2.9218      (0.9891; 0.0052; 0.0004; 0.0051)
10^-3     26      3.4218      (0.9989; 0.0005; 7.6653 × 10^-6; 0.0005)
10^-3.5   30      3.7500      (0.9994; 0.0002; 9.3523 × 10^-5; 0.0002)
Table 2. Comparison of numerical results of Algorithm 1, Algorithm 1 [1] and Algorithm 3.1 [31] at different initial points under T = T 2 in Example 2.
                   Algorithm 1    Algorithm 1 [1]    Algorithm 3.1 [31]
Case I    iter     15             23                 29
          CPU      2.7340         3.1093             3.3656
Case II   iter     16             21                 28
          CPU      2.7968         3.0781             3.9218
Table 3. Comparison of numerical results of Algorithm 1, Algorithm 1 [1], Algorithm 3.1 [31] in Example 3.
e                  Algorithm 1    Algorithm 1 [1]    Algorithm 3.1 [31]
10^-1     iter     5              6                  9
          CPU      1.5625         1.7031             1.8750
10^-2     iter     8              9                  15
          CPU      1.7812         1.8281             2.0000
10^-3     iter     12             12                 20
          CPU      1.8750         2.3125             2.2343
10^-4     iter     15             16                 33
          CPU      1.9375         2.4218             2.4843
10^-5     iter     18             19                 52
          CPU      2.1406         2.4687             3.0468
10^-6     iter     21             22                 -
          CPU      2.2343         2.5156             -
10^-7     iter     24             25                 -
          CPU      2.4062         2.5468             -
