1. Introduction
Throughout this paper, let $\mathbb{N}$ and $\mathbb{R}$ denote the sets of positive integers and real numbers, respectively. Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\| \cdot \|$, and let $C$ be a nonempty, closed, and convex subset of $H$. Let $A: C \to H$ be a nonlinear operator. The variational inequality problem is to find some $x^* \in C$ such that
$$\langle A x^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in C. \tag{1}$$
The solution set of the variational inequality (1) is denoted by $VI(C, A)$. The variational inequality problem has attracted a great deal of attention and is an important branch of nonlinear analysis; its applications reach into many fields, such as the engineering sciences and medical image processing.
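As a standard illustration (added here, not part of the original text), when $A$ is the gradient of a convex differentiable function, problem (1) is exactly the first-order optimality condition for constrained minimization:

```latex
% Worked special case (illustrative): A = \nabla g with g convex and differentiable,
% C closed and convex. Then x^* solves (1) iff x^* minimizes g over C:
\[
  \langle \nabla g(x^*),\, x - x^* \rangle \ \ge\ 0 \quad \forall x \in C
  \quad\Longleftrightarrow\quad
  x^* \in \arg\min_{x \in C} g(x).
\]
% Hence VI(C, A) generalizes constrained convex minimization.
```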
Through a standard transformation of (1), the variational inequality problem is seen to be equivalent to a fixed point problem. In other words, it can be converted into finding a point $x^* \in C$ such that
$$x^* = P_C(x^* - \lambda A x^*), \tag{2}$$
where $P_C$ is the metric projection of $H$ onto $C$ and $\lambda$ is a positive real constant. The corresponding iteration is
$$x_{n+1} = P_C(x_n - \lambda A x_n). \tag{3}$$
This method is an example of the so-called gradient projection method. It is well known that if $A$ is $\eta$-strongly monotone and $L$-Lipschitz continuous, the variational inequality has a unique solution and the sequence generated by (3) converges strongly to this unique solution when $\lambda \in (0, 2\eta/L^2)$. If $A$ is $k$-inverse strongly monotone, $\lambda \in (0, 2k)$, and the solution set is nonempty, then the sequence converges weakly to a point of $VI(C, A)$.
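As an illustration only (the operator, the constraint set, and the step size below are our own assumptions, not taken from the paper), the following minimal Python sketch runs the gradient projection iteration (3) in the case where $C$ is the closed unit ball, so that $P_C$ has a closed form.

```python
import numpy as np

# Illustrative data (assumed): A(x) = Mx + q is strongly monotone and Lipschitz
# continuous because M is symmetric positive definite.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
A = lambda x: M @ x + q

def project_unit_ball(x):
    """Metric projection P_C onto the closed unit ball C = {x : ||x|| <= 1}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def gradient_projection(x0, lam=0.1, iters=200):
    """Iteration (3): x_{n+1} = P_C(x_n - lam * A(x_n))."""
    x = x0
    for _ in range(iters):
        x = project_unit_ball(x - lam * A(x))
    return x

print(gradient_projection(np.zeros(2)))  # approximate solution of VI(C, A)
```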
In 1976, Korpelevich [1] proposed an algorithm known as the extragradient method [1,2]:
$$\begin{cases} y_n = P_C(x_n - \lambda A x_n), \\ x_{n+1} = P_C(x_n - \lambda A y_n), \end{cases} \tag{4}$$
for every $n \in \mathbb{N}$, where $\lambda \in (0, 1/L)$ and $A$ is Lipschitz continuous and monotone. Compared with Equation (3), Equation (4) avoids the hypothesis of strong monotonicity of the operator $A$. If $VI(C, A) \neq \emptyset$, the sequence $\{x_n\}$ generated by (4) converges weakly to an element of $VI(C, A)$. However, although the extragradient method weakens the conditions on the operator, two projections onto $C$ must be computed in each iteration.
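For comparison, here is a minimal sketch of the extragradient iteration (4), reusing the illustrative operator `A` and projection `project_unit_ball` from the sketch above (both of which were our own assumptions); note the two projections onto $C$ in each iteration.

```python
def extragradient(x0, lam=0.1, iters=200):
    """Iteration (4): y_n = P_C(x_n - lam*A(x_n)); x_{n+1} = P_C(x_n - lam*A(y_n))."""
    x = x0
    for _ in range(iters):
        y = project_unit_ball(x - lam * A(x))   # first projection onto C
        x = project_unit_ball(x - lam * A(y))   # second projection onto C
    return x
```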
Moreover, the extragradient method is practical only when $P_C$ has a closed form, in other words, when $P_C$ admits an explicit expression. In some cases, however, $P_C$ is not easy to compute, which limits the method. When $C$ is a closed ball or a half-space, $P_C$ has an analytical expression, while for a general closed convex set $P_C$ often does not.
To overcome this difficulty, many authors have studied the method and improved it in various ways. To our knowledge, there are three main kinds of methods, all of which modify the second projection in Equation (4); in all three, the operator $A$ is Lipschitz continuous and monotone. The first is the subgradient extragradient method proposed by Censor et al. [3] in 2011, which generates the iterates $\{x_n\}$ by the process
$$\begin{cases} y_n = P_C(x_n - \lambda A x_n), \\ T_n = \{ x \in H : \langle x_n - \lambda A x_n - y_n, x - y_n \rangle \le 0 \}, \\ x_{n+1} = P_{T_n}(x_n - \lambda A y_n), \end{cases} \tag{5}$$
where $\lambda \in (0, 1/L)$ and $A$ is Lipschitz continuous and monotone. The key step of the subgradient extragradient method replaces the second projection onto $C$ of the extragradient method by a projection onto a specially constructed half-space $T_n$, which clearly reduces the computational difficulty. The second is Tseng's extragradient method, studied by Duong Viet Thong and Dang Van Hieu [4] in 2017:
$$\begin{cases} y_n = P_C(x_n - \lambda_n A x_n), \\ x_{n+1} = y_n - \lambda_n (A y_n - A x_n), \end{cases} \tag{6}$$
where the step sizes $\lambda_n$ are updated self-adaptively. In particular, this algorithm does not require knowledge of the Lipschitz constant. The third is the projection and contraction method studied by Q. L. Dong et al. [5] in 2017:
$$\begin{cases} y_n = P_C(x_n - \lambda A x_n), \\ d(x_n, y_n) = (x_n - y_n) - \lambda (A x_n - A y_n), \\ x_{n+1} = x_n - \gamma \eta_n d(x_n, y_n), \end{cases} \tag{7}$$
for each $n \in \mathbb{N}$, where $\lambda \in (0, 1/L)$, $\gamma \in (0, 2)$, and $\eta_n = \langle x_n - y_n, d(x_n, y_n) \rangle / \| d(x_n, y_n) \|^2$. As a result, the sequences generated by Equations (5)–(7) all converge weakly to a solution of the variational inequality.
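The computational point behind the subgradient extragradient scheme (5) is that the projection onto the half-space $T_n$ always has a closed form, unlike the projection onto a general $C$. The following sketch is our own illustration of one iteration; the operator `A`, the projection `project_C`, and the step size `lam` are passed in as parameters (for instance, the illustrative choices used earlier).

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-form projection onto the half-space {z : <a, z> <= b} (a != 0)."""
    slack = a @ x - b
    return x if slack <= 0 else x - (slack / (a @ a)) * a

def subgradient_extragradient_step(x, A, project_C, lam):
    """One step of (5): y = P_C(x - lam*A(x)); then project x - lam*A(y) onto the
    half-space T = {z : <x - lam*A(x) - y, z - y> <= 0} instead of onto C."""
    y = project_C(x - lam * A(x))
    a = (x - lam * A(x)) - y          # normal vector of the half-space T
    if np.allclose(a, 0.0):           # x - lam*A(x) already lies in C, so T = H
        return x - lam * A(y)
    b = a @ y                         # T = {z : <a, z> <= b}
    return project_halfspace(x - lam * A(y), a, b)
```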
Comparing the above three methods, we find that the variational inequality problem can still be solved after relaxing the conditions of the algorithm. However, computing projections remains essential in these methods. Is there, then, a way to solve the variational inequality problem that avoids computing projections altogether?
As is well known, Yamada [6] introduced the so-called hybrid steepest descent method in 2001:
$$x_{n+1} = T x_n - \mu \lambda_n F(T x_n), \tag{8}$$
which is essentially an algorithmic solution to the variational inequality problem. It does not require calculating $P_C$, but it does require a closed-form expression for a nonexpansive mapping $T$ whose fixed point set is $C$. Thus, if $T$ is a nonexpansive mapping with $Fix(T) \neq \emptyset$ and $F$ is $k$-Lipschitz continuous and $\eta$-strongly monotone, the sequence $\{x_n\}$ generated by (8) converges strongly to the unique solution $x^*$ of $VI(Fix(T), F)$.
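Since (8) needs no projection onto $C$, only evaluations of $T$ and $F$, it is easy to sketch. In the following illustration (all choices are our own assumptions, not the paper's data), $T$ is taken to be the projection onto the closed unit ball, so its fixed point set is that ball, $F$ is an affine strongly monotone operator, and the diminishing step sizes are $\lambda_n = 1/n$.

```python
import numpy as np

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
F = lambda x: M @ x + q                  # strongly monotone, Lipschitz (assumed)

def T(x):
    """A nonexpansive mapping whose fixed point set is the closed unit ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def hybrid_steepest_descent(x0, mu=0.2, iters=500):
    """Iteration (8): x_{n+1} = T(x_n) - mu * lambda_n * F(T(x_n))."""
    x = x0
    for n in range(1, iters + 1):
        lam_n = 1.0 / n                  # diminishing, non-summable step sizes
        tx = T(x)
        x = tx - mu * lam_n * F(tx)
    return x

print(hybrid_steepest_descent(np.zeros(2)))
```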
Inspired by this idea, in 2014, Zhou and Wang [7] proposed a new iterative algorithm (9), based on Yamada's hybrid steepest descent method and the Mann iterative method, where the parameter sequences satisfy appropriate conditions. In (9), the mappings $T_i$ are nonexpansive mappings of $H$ into itself with a nonempty common fixed point set, and $F$ is an $\eta$-strongly monotone, $L$-Lipschitz continuous mapping. The sequence $\{x_n\}$ generated by (9) then converges strongly to the unique point $x^*$ of $VI\big(\bigcap_i Fix(T_i), F\big)$. In particular, when the family reduces to a single nonexpansive mapping $T$, (9) can be rewritten as (10), and the sequence $\{x_n\}$ generated by (10) also converges strongly to the unique solution $x^*$ of $VI(Fix(T), F)$.
The advantage of Equations (4)–(7) is that the conditions required of the algorithms are weakened, while the advantage of Yamada's algorithm is that it avoids the computation of projections. Can we combine the ideas behind these several methods to design a new algorithm? This naturally raises the question: if we weaken the conditions of Equation (10), will we obtain a different result? This is the main issue we explore in this paper.
In this paper, motivated and inspired by the above results, we introduce a new iteration algorithm (stated precisely in Section 3). In this algorithm we weaken the conditions on the operators: the strong monotonicity of $F$ is replaced by inverse strong monotonicity. We then prove the weak convergence of our algorithm. It is worth emphasizing that the advantage of our algorithm is that it requires no projection, while the conditions on the operator are also suitably weakened.
Finally, the outline of this paper is as follows. In Section 2, we list some useful basic definitions and lemmas which will be used throughout the paper. In Section 3, we prove the weak convergence theorem for our main algorithm. In Section 4, using this result, we obtain new weak convergence theorems for the equilibrium problem, the split feasibility problem, and related problems. Finally, we give a concrete example and numerical results to verify our conclusions.
2. Preliminaries
In what follows, $H$ denotes a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$, and $C$ denotes a nonempty, closed, and convex subset of $H$. We write $x_n \to x$ to denote that the sequence $\{x_n\}$ converges strongly to a point $x$, i.e., $\| x_n - x \| \to 0$, and $x_n \rightharpoonup x$ to denote that $\{x_n\}$ converges weakly to $x$, i.e., $\langle x_n, y \rangle \to \langle x, y \rangle$ for every $y \in H$. If there exists a subsequence of $\{x_n\}$ converging weakly to a point $z$, then $z$ is called a weak cluster point of $\{x_n\}$. We use $\omega_w(x_n)$ to denote the set of all weak cluster points of $\{x_n\}$.
Definition 1. ([8]) A mapping $T: C \to C$ is called nonexpansive if $\| Tx - Ty \| \le \| x - y \|$ for all $x, y \in C$. The set of fixed points of $T$ is the set $Fix(T) = \{ x \in C : Tx = x \}$. As is well known, if $T$ is nonexpansive and $Fix(T) \neq \emptyset$, then $Fix(T)$ is closed and convex.
Definition 2. A mapping $F: H \to H$ is called
(i) $L$-Lipschitz continuous, where $L > 0$, iff $\| Fx - Fy \| \le L \| x - y \|$ for all $x, y \in H$;
(ii) monotone iff $\langle Fx - Fy, x - y \rangle \ge 0$ for all $x, y \in H$;
(iii) strongly monotone iff $\langle Fx - Fy, x - y \rangle \ge \eta \| x - y \|^2$ for all $x, y \in H$, where $\eta > 0$; in this case, $F$ is said to be $\eta$-strongly monotone;
(iv) inverse strongly monotone iff $\langle Fx - Fy, x - y \rangle \ge k \| Fx - Fy \|^2$ for all $x, y \in H$, where $k > 0$; in this case, $F$ is said to be $k$-inverse strongly monotone.
As is well known, if $F$ is $k$-inverse strongly monotone, then $F$ is also $\frac{1}{k}$-Lipschitz continuous.
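For completeness, this implication follows directly from the Cauchy–Schwarz inequality; the short computation below is added for the reader's convenience.

```latex
% k-inverse strong monotonicity plus Cauchy-Schwarz gives (1/k)-Lipschitz continuity:
\[
  k\,\| Fx - Fy \|^2 \ \le\ \langle Fx - Fy,\; x - y \rangle
                     \ \le\ \| Fx - Fy \|\,\| x - y \|
  \quad\Longrightarrow\quad
  \| Fx - Fy \| \ \le\ \tfrac{1}{k}\,\| x - y \|.
\]
```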
Definition 3. ([9]) A mapping $T: H \to H$ is said to be an averaged mapping if and only if it can be written as a convex combination of the identity operator $I$ and a nonexpansive mapping, that is,
$$T = (1 - \alpha) I + \alpha S,$$
where $\alpha \in (0, 1)$ and $S: H \to H$ is a nonexpansive mapping. More precisely, we also say that $T$ is $\alpha$-averaged.
Lemma 1. Let $H$ be a real Hilbert space. Then the following relationships hold:
$\| x + y \|^2 = \| x \|^2 + 2 \langle x, y \rangle + \| y \|^2$ for all $x, y \in H$;
$\| x + y \|^2 \le \| x \|^2 + 2 \langle y, x + y \rangle$ for all $x, y \in H$;
$\| \lambda x + (1 - \lambda) y \|^2 = \lambda \| x \|^2 + (1 - \lambda) \| y \|^2 - \lambda (1 - \lambda) \| x - y \|^2$ for all $x, y \in H$ and $\lambda \in [0, 1]$.
Lemma 2. ([9]) Let $H$ be a real Hilbert space and let $C$ be a nonempty bounded closed convex subset of $H$. If $T$ is a nonexpansive mapping of $C$ into $C$, then $Fix(T) \neq \emptyset$.
Lemma 3. ([9]) Let $H$ be a real Hilbert space. Then:
$T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-inverse strongly monotone;
if $T$ is $\nu$-inverse strongly monotone, then for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-inverse strongly monotone;
$T$ is averaged if and only if the complement $I - T$ is $\nu$-inverse strongly monotone for some $\nu > \frac{1}{2}$; indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-inverse strongly monotone.
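The last equivalence in Lemma 3 can be checked by combining the first two items; the short verification below is added for the reader's convenience.

```latex
% If T = (1-\alpha) I + \alpha S with S nonexpansive, then I - T = \alpha (I - S).
% By the first item, I - S is (1/2)-inverse strongly monotone; by the second item,
% scaling by \alpha yields
\[
  I - T = \alpha\,(I - S)
  \quad\text{is}\quad
  \tfrac{1}{2\alpha}\text{-inverse strongly monotone}.
\]
```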
Lemma 4. (Demiclosedness Principle [10]) Let $C$ be a closed and convex subset of a real Hilbert space $H$, and let $T: C \to C$ be a nonexpansive mapping with $Fix(T) \neq \emptyset$. If the sequence $\{x_n\} \subset C$ converges weakly to $x$ and $\{(I - T) x_n\}$ converges strongly to $y$, then $(I - T) x = y$. In particular, if $x_n \rightharpoonup x$ and $(I - T) x_n \to 0$, then $(I - T) x = 0$; in other words, $x \in Fix(T)$.
Lemma 5. ([9]) Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$, and let $\{x_n\}$ be a sequence in $H$ such that the following two properties hold:
$\lim_{n \to \infty} \| x_n - x \|$ exists for each $x \in C$;
$\omega_w(x_n) \subset C$.
Then the sequence $\{x_n\}$ converges weakly to a point in $C$.
Lemma 6. ([11]) Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$ and let $\{x_n\}$ be a sequence in $H$. Suppose that
$$\| x_{n+1} - x \| \le \| x_n - x \| \quad \text{for all } x \in C \text{ and every } n \in \mathbb{N}.$$
Then the sequence $\{P_C x_n\}$ converges strongly to a point in $C$.
4. Application
In this section, we will illustrate the practical value of our algorithm and give some applications, which are useful in nonlinear analysis and optimization.
In the following, we mainly discuss the equilibrium problem and the split feasibility problem by applying the idea of Theorem 1 to obtain weak convergence theorems in a real Hilbert space.
First, let us recall the equilibrium problem.
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $f: C \times C \to \mathbb{R}$ be a bifunction. Then, we consider the equilibrium problem (see [12,13,14,15]), which is to find $x^* \in C$ such that
$$f(x^*, y) \ge 0 \quad \text{for all } y \in C.$$
We denote the set of all such $x^*$ by $EP(f)$, i.e.,
$$EP(f) = \{ x^* \in C : f(x^*, y) \ge 0 \text{ for all } y \in C \}.$$
Assume that the bifunction $f$ satisfies the following conditions:
(A1) $f(x, x) = 0$ for all $x \in C$;
(A2) $f(x, y) + f(y, x) \le 0$ for all $x, y \in C$, i.e., $f$ is monotone;
(A3) $\limsup_{t \downarrow 0} f(t z + (1 - t) x, y) \le f(x, y)$ for all $x, y, z \in C$;
(A4) for each $x \in C$, $y \mapsto f(x, y)$ is convex and lower semicontinuous.
If $f$ satisfies the above conditions (A1)–(A4), then for $r > 0$ and $x \in H$ there exists $z \in C$ such that [16]
$$f(z, y) + \frac{1}{r} \langle y - z, z - x \rangle \ge 0 \quad \text{for all } y \in C.$$
Lemma 7. ([15]) Assume that $f: C \times C \to \mathbb{R}$ satisfies the conditions (A1)–(A4). For $r > 0$ and $x \in H$, define the resolvent $T_r$ of $f$ by
$$T_r(x) = \Big\{ z \in C : f(z, y) + \frac{1}{r} \langle y - z, z - x \rangle \ge 0 \ \text{for all } y \in C \Big\}.$$
Then the following hold:
$T_r$ is single-valued;
$T_r$ is a firmly nonexpansive mapping, i.e., $\| T_r x - T_r y \|^2 \le \langle T_r x - T_r y, x - y \rangle$ for all $x, y \in H$;
$Fix(T_r) = EP(f)$, and $EP(f)$ is closed and convex.
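As a standard illustrative special case (added here, not part of the original text), the bifunction induced by a monotone operator satisfies these conditions under mild continuity assumptions, and the corresponding equilibrium problem reduces to the variational inequality (1).

```latex
% Illustrative special case: for a monotone operator B : C -> H, take
%   f(x, y) = <Bx, y - x>.
% Then (A1), (A2), and (A4) hold directly, (A3) holds under mild continuity
% assumptions on B, and
\[
  x^* \in EP(f)
  \iff
  \langle B x^*,\, y - x^* \rangle \ge 0 \quad \forall y \in C
  \iff
  x^* \in VI(C, B),
\]
% so equilibrium problems contain variational inequalities as a special case.
```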
From Lemma 7, we know that under certain conditions, solving the equilibrium problem can be transformed into solving a fixed point problem. Combined with the idea of Theorem 1, we obtain the following result.
Theorem 2. Let $H$ be a real Hilbert space and let $C$ be a nonempty closed convex subset of $H$. Let $f: C \times C \to \mathbb{R}$ be a bifunction satisfying the conditions (A1)–(A4), and let $F: H \to H$ be $k$-inverse strongly monotone. Assume the solution set is nonempty. Let the sequences $\{x_n\}$ and $\{y_n\}$ be generated by the algorithm of Theorem 1 with $T$ replaced by the resolvent $T_r$, where the parameter sequences satisfy the conditions of Theorem 1 and $r$ is a positive real number. Then the sequence $\{x_n\}$ converges weakly to a point $x^*$, and $x^*$ is a solution of $VI(EP(f), F)$.
Proof. Set $T = T_r$; combining Theorem 1 and Lemma 7, the result is proven. □
Next, we turn to the split feasibility problem.
In 1994, the split feasibility problem was introduced by Censor and Elfving [17]. The split feasibility problem is as follows:
Find $x^*$ such that $x^* \in C$ and $A x^* \in Q$,
(see [17,18,19,20,21,22,23]) where $C$ and $Q$ are nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and $A: H_1 \to H_2$ is a bounded linear operator. We usually abbreviate the split feasibility problem as SFP.
In 2002, the so-called CQ algorithm was first introduced by Byrne [21]. Define the sequence $\{x_n\}$ as follows:
$$x_{n+1} = P_C\big( x_n - \gamma A^* (I - P_Q) A x_n \big),$$
where $\gamma \in (0, 2/\|A\|^2)$, and $P_C$ and $P_Q$ are the metric projections onto $C$ and $Q$, respectively.
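The CQ iteration can be read as a gradient projection step for minimizing the convex function $x \mapsto \frac{1}{2}\|(I - P_Q)Ax\|^2$ over $C$, whose gradient is $A^*(I - P_Q)Ax$. As a purely illustrative sketch (the operator, the sets, and the starting point below are our own assumptions, not taken from the paper), the following Python code runs the CQ algorithm when $C$ and $Q$ are closed unit balls, so that both projections have closed forms.

```python
import numpy as np

# Illustrative bounded linear operator A : H1 -> H2 (assumed data).
A = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, -1.0]])

def project_ball(x, radius=1.0):
    """Metric projection onto the closed ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def cq_algorithm(x0, gamma=None, iters=500):
    """CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)."""
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # gamma in (0, 2 / ||A||^2)
    x = x0
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - project_ball(Ax))      # A^*(I - P_Q) A x
        x = project_ball(x - gamma * grad)        # projection onto C
    return x

print(cq_algorithm(np.array([2.0, -3.0])))
```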
From this iteration, we can see that the CQ algorithm needs to calculate two projections in each step. So, can we use the idea of the Yamada iteration to improve the algorithm? We consider the case in which $C$ is the fixed point set of a nonexpansive mapping $T$, and we arrive at the following conclusion.
Before solving this problem, we give a lemma.
Lemma 8. Let $H_1$ and $H_2$ be real Hilbert spaces, let $A: H_1 \to H_2$ be a bounded linear operator and $A^*$ be the adjoint of $A$, let $C$ be a nonempty closed convex subset of $H_1$, and let $G$ be a firmly nonexpansive mapping of $H_2$ into $H_2$. Then $A^*(I - G)A$ is a $\frac{1}{\|A\|^2}$-inverse strongly monotone operator, i.e.,
$$\langle A^*(I - G)A x - A^*(I - G)A y,\; x - y \rangle \ \ge\ \frac{1}{\|A\|^2}\, \| A^*(I - G)A x - A^*(I - G)A y \|^2.$$
Proof. Since $G$ is a firmly nonexpansive mapping, $I - G$ is also firmly nonexpansive, so for all $x, y \in H_1$,
$$\begin{aligned} \langle A^*(I - G)A x - A^*(I - G)A y,\; x - y \rangle &= \langle (I - G)A x - (I - G)A y,\; A x - A y \rangle \\ &\ge \| (I - G)A x - (I - G)A y \|^2 \\ &\ge \frac{1}{\|A\|^2} \| A^*(I - G)A x - A^*(I - G)A y \|^2. \end{aligned}$$
So $A^*(I - G)A$ is $\frac{1}{\|A\|^2}$-inverse strongly monotone. □
Below we present the related theorem for the split feasibility problem.
Theorem 3. Let $H_1$ and $H_2$ be real Hilbert spaces, let $T: H_1 \to H_1$ be a nonexpansive mapping with $Fix(T) = C$, and let $A: H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. Assume that the solution set of the split feasibility problem is nonempty. Let the sequences $\{x_n\}$ and $\{y_n\}$ be generated by the algorithm of Theorem 1 with $F = A^*(I - P_Q)A$, where the parameter sequences satisfy the conditions of Theorem 1. Then the sequence $\{x_n\}$ converges weakly to a point $x^*$ which is a solution of the SFP.
Proof. Notice that $P_Q$ is firmly nonexpansive; hence, according to Lemma 8, $A^*(I - P_Q)A$ is $\frac{1}{\|A\|^2}$-inverse strongly monotone.
Putting $F = A^*(I - P_Q)A$ in Theorem 1, the conclusion is obtained. □