1. Introduction
Let H be a real Hilbert space equipped with the inner product $\langle \cdot,\cdot \rangle$ and the induced norm $\|\cdot\|$, and let C be a nonempty closed convex subset of H. A mapping T of C into itself is called nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$. We use $\mathrm{Fix}(T)$ to denote the set of fixed points of T, i.e., $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$. Additionally, $f : C \to C$ is a contraction if $\|f(x) - f(y)\| \le \kappa\|x - y\|$ for all $x, y \in C$ and some constant $\kappa \in [0, 1)$. In this case, f is said to be a $\kappa$-contraction.
In 2008, Peng and Yao [1] considered the following generalized mixed equilibrium problem (GMEP), which involves finding $x \in C$ such that
$$\Theta(x, y) + \varphi(y) - \varphi(x) + \langle Ax, y - x \rangle \ge 0 \quad \text{for all } y \in C, \qquad (1)$$
where $A : C \to H$ is a nonlinear mapping, $\varphi : C \to \mathbb{R}$ is a function and $\Theta$ is a bifunction of $C \times C$. The solution set of (1) is denoted by $\mathrm{GMEP}(\Theta, \varphi, A)$.
If $\varphi \equiv 0$ and $A = 0$, then problem (1) reduces to the following equilibrium problem (EP), which aims to find a point $x \in C$ satisfying the following property:
$$\Theta(x, y) \ge 0 \quad \text{for all } y \in C. \qquad (2)$$
We use $\mathrm{EP}(\Theta)$ to denote the set of solutions of EP (2), that is, $\mathrm{EP}(\Theta) = \{x \in C : \Theta(x, y) \ge 0 \ \text{for all } y \in C\}$. The EP (2) includes, as special cases, numerous problems in physics, optimization and economics. Some authors (e.g., [2,3,4,5,6,7,8,9,10,11,12,13,14,15]) have proposed useful methods for solving the EP (2). Set $\Theta(x, y) = \langle Dx, y - x \rangle$ for all $x, y \in C$, where $D : C \to H$ is a nonlinear mapping. Then, $x^* \in \mathrm{EP}(\Theta)$ if and only if
$$\langle Dx^*, y - x^* \rangle \ge 0 \quad \text{for all } y \in C, \qquad (3)$$
that is, $x^*$ is a solution of the variational inequality. Problem (3) is well known as the classical variational inequality. The set of solutions of (3) is denoted by $\mathrm{VI}(C, D)$.
Let A be a bounded linear operator on H. A is said to be $\bar{\gamma}$-strongly positive if there exists a constant $\bar{\gamma} > 0$ such that $\langle Ax, x \rangle \ge \bar{\gamma}\|x\|^2$ for all $x \in H$.
In 1967, Halpern [16] considered the following explicit iterative process:
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)Tx_n, \quad n \ge 0,$$
where u is a given point and T is nonexpansive. He proved the strong convergence of $\{x_n\}$ to a fixed point of T provided that $\alpha_n = n^{-\theta}$ with $\theta \in (0, 1)$. In 2003, Xu [17] introduced the following iterative process:
$$x_{n+1} = \alpha_n b + (I - \alpha_n A)Tx_n, \quad n \ge 0,$$
where $\{\alpha_n\}$ is a sequence in (0, 1). He proved that the above sequence $\{x_n\}$ converges strongly to the unique solution of the minimization problem with $C = \mathrm{Fix}(T)$:
$$\min_{x \in C}\ \frac{1}{2}\langle Ax, x \rangle - \langle x, b \rangle,$$
where A is a strongly positive bounded linear operator on H.
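To make such schemes concrete, the following Python sketch runs a Halpern-type iteration of the standard form $x_{n+1} = \alpha_n u + (1 - \alpha_n)Tx_n$ (the form usually attributed to [16]); the particular nonexpansive map T (a plane rotation), the anchor u and the step sizes below are our own toy choices, not taken from this paper.

```python
import numpy as np

# Halpern-type iteration x_{n+1} = a_n * u + (1 - a_n) * T(x_n).
# Toy (hypothetical) choices: T is a plane rotation (nonexpansive, Fix(T) = {0}),
# u is an arbitrary anchor point, and a_n = 1/(n+1) satisfies a_n -> 0, sum a_n = inf.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x            # an isometry of R^2, hence nonexpansive

u = np.array([1.0, 2.0])       # anchor point
x = np.array([5.0, -3.0])      # initial guess
for n in range(2000):
    a = 1.0 / (n + 1)
    x = a * u + (1 - a) * T(x)

print(x)   # slowly approaches the unique fixed point (0, 0) of T
```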
In 2006, Marino and Xu [18] considered the following viscosity iterative method:
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \alpha_n A)Tx_n, \quad n \ge 0,$$
where f is a contraction on H. They proved that the above sequence $\{x_n\}$ converges strongly to the unique solution $x^* \in \mathrm{Fix}(T)$ of the variational inequality
$$\langle (A - \gamma f)x^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in \mathrm{Fix}(T).$$
In 2001, Yamada et al. [19] considered the following hybrid steepest-descent iterative method:
$$x_{n+1} = Tx_n - \mu\lambda_n F(Tx_n), \quad n \ge 0,$$
where F is a $\kappa$-Lipschitzian continuous and $\eta$-strongly monotone operator with $\kappa > 0$, $\eta > 0$ and $0 < \mu < 2\eta/\kappa^2$. Under some suitable conditions on $\{\lambda_n\}$, the above sequence $\{x_n\}$ converges strongly to the unique solution $x^* \in \mathrm{Fix}(T)$ of the variational inequality
$$\langle Fx^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in \mathrm{Fix}(T).$$
In 2010, Tian [20] considered the following general viscosity-type iterative method:
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu\alpha_n F)Tx_n, \quad n \ge 0.$$
Under certain appropriate conditions, the above sequence $\{x_n\}$ converges strongly to a fixed point $x^*$ of T, which solves the variational inequality
$$\langle (\gamma f - \mu F)x^*, x - x^* \rangle \le 0 \quad \text{for all } x \in \mathrm{Fix}(T).$$
In 2014, Zhang and Yang [21] proposed an explicit iterative algorithm, based on the viscosity method, for finding a solution of a class of variational inequalities over the set of common fixed points of a finite family of nonexpansive mappings. In their scheme, V is a Lipschitzian mapping and the control coefficients form a real sequence in (0, 1). They proved that the generated sequence $\{x_n\}$ converges strongly to the unique solution $x^*$ of the associated variational inequality.
In 2016, Jeong [22] introduced a new iterative method, based on the hybrid viscosity approximation method and the hybrid steepest-descent method, and proved that the generated sequence $\{x_n\}$ converges strongly to the unique solution $x^*$ of the associated variational inequality.
On the other hand, in 2008, Ceng et al. [23] considered the following problem of finding $(x^*, y^*) \in C \times C$ satisfying
$$\begin{cases} \langle \lambda Ay^* + x^* - y^*,\, x - x^* \rangle \ge 0 & \text{for all } x \in C,\\ \langle \mu Bx^* + y^* - x^*,\, x - y^* \rangle \ge 0 & \text{for all } x \in C, \end{cases} \qquad (4)$$
which is called a general system of variational inequalities (GSVI), where $A, B : C \to H$ are two nonlinear mappings, and $\lambda > 0$ and $\mu > 0$ are two fixed constants. Precisely, they introduced the following iterative algorithm:
$$\begin{cases} y_n = P_C(x_n - \mu Bx_n),\\ x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n S P_C(y_n - \lambda Ay_n), \quad n \ge 0, \end{cases}$$
where $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\gamma_n\}$ are real sequences, S is a nonexpansive mapping on C, and $P_C$ is the metric projection of H onto C, and a strong convergence theorem was obtained.
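As an illustration of how such a system can be treated computationally, the sketch below iterates the fixed-point mapping associated with (4) (of the form recalled in Lemma 12 below). The mappings A and B, the set C and the constants $\lambda, \mu$ are our own toy choices, made so that the inverse strong monotonicity assumptions hold; they are not taken from the paper.

```python
import numpy as np

# Toy sketch of the general system of variational inequalities (4):
# A(x) = x - q and B(x) = x - p are 1-ism, C is the box [0,1]^2, lambda = mu = 0.5.
P_C = lambda x: np.clip(x, 0.0, 1.0)          # metric projection onto the box
p, q = np.array([2.0, -1.0]), np.array([0.3, 0.8])
lam = mu = 0.5
A = lambda x: x - q
B = lambda x: x - p

def G(x):
    # G(x) = P_C[ P_C(x - mu*B(x)) - lam * A(P_C(x - mu*B(x))) ]  (cf. Lemma 12)
    y = P_C(x - mu * B(x))
    return P_C(y - lam * A(y))

x = np.array([0.9, 0.1])
for _ in range(200):
    x = 0.5 * x + 0.5 * G(x)                  # Krasnoselskii-Mann averaging of the nonexpansive G
print(x, G(x))                                # x is (approximately) a fixed point of G,
                                              # i.e., the first component of a GSVI solution
```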
The implicit midpoint rule is one of the powerful numerical methods for solving ordinary differential equations; see [24,25,26] and the references therein. Consequently, implicit midpoint rules for solving fixed point problems of nonexpansive mappings have been studied by many authors; see [27,28,29,30,31]. In 2015, Xu et al. [31] applied the viscosity technique to the implicit midpoint rule for nonexpansive mappings and proposed the following viscosity implicit midpoint rule:
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)T\!\left(\frac{x_n + x_{n+1}}{2}\right), \quad n \ge 0,$$
where $\{\alpha_n\}$ is a real sequence in (0, 1). They proved that the sequence $\{x_n\}$ converges strongly to a fixed point of T, which is also the unique solution of a certain variational inequality. Additionally, Ke and Ma [29] studied the following generalized viscosity implicit rules:
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)T(s_n x_n + (1 - s_n)x_{n+1}), \quad n \ge 0, \qquad (5)$$
where $\{\alpha_n\}$ and $\{s_n\}$ are real sequences in (0, 1). They showed that the sequence $\{x_n\}$ converges strongly to a fixed point of T, which is also the unique solution of a certain variational inequality.
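A minimal numerical sketch of the rule (5), as reconstructed above, is given below for toy scalar mappings of our own choosing (they are not those used later in Section 4): $T(x) = x/2$, which is nonexpansive with $\mathrm{Fix}(T) = \{0\}$, and the $\tfrac14$-contraction $f(x) = x/4$. Because T is linear, the implicit step can be solved for $x_{n+1}$ in closed form.

```python
# Generalized viscosity implicit rule (5):
#   x_{n+1} = a_n f(x_n) + (1 - a_n) T(s_n x_n + (1 - s_n) x_{n+1}),
# with the toy choices T(x) = x/2 and f(x) = x/4 (ours, for illustration only).
x = 10.0
for n in range(1, 51):
    a, s = 1.0 / (n + 1), 0.5
    # x_next = a*x/4 + (1-a)*(s*x + (1-s)*x_next)/2  =>  solve this linear equation for x_next
    x = (a * x / 4 + (1 - a) * s * x / 2) / (1 - (1 - a) * (1 - s) / 2)
    if n % 10 == 0:
        print(n, x)   # the iterates tend to 0, the unique fixed point of T
```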
Recently, Cai et al. [32] introduced modified viscosity implicit rules in which F is a Lipschitzian and strongly monotone mapping, the control coefficients are suitable real sequences and $P_C$ is the metric projection of H onto C. Under some suitable assumptions imposed on the parameters, they obtained strong convergence theorems.
Motivated by the above results, we propose a new composite iterative scheme for finding a common element of the set of solutions of a general system of variational inequalities, the set of solutions of a generalized mixed equilibrium problem and the set of common fixed points of a finite family of nonexpansive mappings in Hilbert spaces. We then prove a strong convergence theorem. Finally, we provide two numerical examples to support our main result.
2. Preliminaries
Let H be a real Hilbert space. We use ⇀ and → to denote weak and strong convergence in H, respectively. The following identity holds:
$$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$$
for all $x, y \in H$ and $\lambda \in [0, 1]$.
Let C be a nonempty closed convex subset of H. Then, for any $x \in H$, there exists a unique nearest point in C, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\| \quad \text{for all } y \in C.$$
$P_C$ is called the metric projection of H onto C. It is known that $P_C$ is nonexpansive and satisfies
$$\langle x - P_C x,\, y - P_C x \rangle \le 0 \quad \text{for all } x \in H,\ y \in C.$$
Furthermore, for $x \in H$ and $y \in C$, we have
$$\|x - P_C x\|^2 + \|y - P_C x\|^2 \le \|x - y\|^2.$$
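The characterization $\langle x - P_C x,\, y - P_C x \rangle \le 0$ is easy to check numerically; the sketch below does so for the box $C = [0, 1]^3$ (a choice of ours) with randomly sampled points.

```python
import numpy as np

# Numerical check of the projection characterization <x - P_C x, y - P_C x> <= 0
# for the box C = [0,1]^3 (our own illustrative choice of C).
rng = np.random.default_rng(0)
P_C = lambda v: np.clip(v, 0.0, 1.0)

x = rng.normal(size=3) * 5          # an arbitrary point of H = R^3
px = P_C(x)
worst = max(np.dot(x - px, y - px) for y in rng.uniform(0.0, 1.0, size=(1000, 3)))
print(worst <= 1e-12)               # True: the inner product is never positive
```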
Lemma 1. Let H be a real Hilbert space. Then, for all $x, y \in H$,
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle.$$
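For completeness, this inequality (in the usual form reconstructed above) follows at once from expanding the square:
$$\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2 = \|x\|^2 + 2\langle y, x + y \rangle - \|y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle.$$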
Definition 2 ([32]). A mapping $T : H \to H$ is called firmly nonexpansive if, for any $x, y \in H$,
$$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle.$$
Definition 3 ([32]). A mapping $A : C \to H$ is called α-strongly monotone if, for any $x, y \in C$,
$$\langle Ax - Ay, x - y \rangle \ge \alpha\|x - y\|^2.$$
Definition 4 ([33]). A mapping $T : H \to H$ is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping; that is,
$$T = (1 - \alpha)I + \alpha S,$$
where $\alpha \in (0, 1)$ and $S : H \to H$ is nonexpansive. More precisely, we say that T is α-averaged. Clearly, a firmly nonexpansive mapping is a $\tfrac{1}{2}$-averaged mapping.
Proposition 5 ([34]). The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is α-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$. If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^{N}\mathrm{Fix}(T_i) = \mathrm{Fix}(T_1 \cdots T_N).$$
In particular, if $N = 2$, we have $\mathrm{Fix}(T_1) \cap \mathrm{Fix}(T_2) = \mathrm{Fix}(T_1 T_2)$.
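For instance, since a firmly nonexpansive mapping is $\tfrac12$-averaged, the composition rule stated above (in the form we have reconstructed it) gives, for two firmly nonexpansive mappings $T_1$ and $T_2$,
$$\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2 = \tfrac12 + \tfrac12 - \tfrac14 = \tfrac34,$$
so the composite $T_1 T_2$ is $\tfrac34$-averaged.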
Lemma 6 ([35]). Let C be a nonempty closed convex subset of H and $\Theta : C \times C \to \mathbb{R}$ be a bifunction satisfying the following conditions:
- (A1) $\Theta(x, x) = 0$ for all $x \in C$;
- (A2) Θ is monotone, i.e., $\Theta(x, y) + \Theta(y, x) \le 0$ for all $x, y \in C$;
- (A3) for each $y \in C$, $x \mapsto \Theta(x, y)$ is weakly upper semicontinuous;
- (A4) for each $x \in C$, $y \mapsto \Theta(x, y)$ is convex and lower semicontinuous.
Suppose that $\varphi : C \to \mathbb{R}$ is convex and lower semicontinuous and that at least one of the following conditions holds:
- (B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that, for any $z \in C \setminus D_x$,
$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r}\langle y_x - z, z - x \rangle < 0;$$
- (B2) C is a bounded set.
For $r > 0$ and $x \in H$, define a mapping $T_r^{(\Theta, \varphi)} : H \to C$ as follows:
$$T_r^{(\Theta, \varphi)}(x) = \left\{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0 \ \text{for all } y \in C \right\}$$
for all $x \in H$. Then, the following hold:
- (i) $T_r^{(\Theta, \varphi)}(x) \ne \emptyset$ for each $x \in H$ and $T_r^{(\Theta, \varphi)}$ is single-valued;
- (ii) $T_r^{(\Theta, \varphi)}$ is firmly nonexpansive;
- (iii) $\mathrm{Fix}(T_r^{(\Theta, \varphi)}) = \Omega$, where Ω denotes the solution set of the associated mixed equilibrium problem;
- (iv) Ω is closed and convex.
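To make the resolvent $T_r^{(\Theta, \varphi)}$ concrete, consider the toy data (our own illustrative choices, not those used in the paper) $\Theta(z, y) = y^2 - z^2$, $\varphi \equiv 0$ and $C = \mathbb{R}$. The defining inequality factors as $(y - z)\,[\,y + z + (z - x)/r\,] \ge 0$ for all y, which forces $z = x/(2r + 1)$. The sketch below checks this closed form numerically, together with the firm nonexpansivity of the resulting map.

```python
import numpy as np

# Toy resolvent of a mixed equilibrium problem with Theta(z, y) = y^2 - z^2,
# phi = 0 and C = R (our own choices).  Claimed closed form: T_r(x) = x / (2r + 1).
r = 0.5
T = lambda x: x / (2 * r + 1)

x = 3.0
z = T(x)
ys = np.linspace(-100, 100, 100001)
vals = (ys**2 - z**2) + (1.0 / r) * (ys - z) * (z - x)
print(vals.min() >= -1e-9)          # True: the defining inequality holds for all sampled y

rng = np.random.default_rng(1)
a, b = rng.normal(size=100), rng.normal(size=100)
# firm nonexpansivity: ||T a - T b||^2 <= <T a - T b, a - b>, checked pointwise
print(np.all((T(a) - T(b))**2 <= (T(a) - T(b)) * (a - b) + 1e-12))
```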
Lemma 7 ([36]). Let C, H, Θ and $T_r^{(\Theta, \varphi)}$ be as in Lemma 6. Then, the following inequality holds for all $s, t > 0$ and $x \in H$:
$$\|T_s^{(\Theta, \varphi)}x - T_t^{(\Theta, \varphi)}x\|^2 \le \frac{s - t}{s}\,\langle T_s^{(\Theta, \varphi)}x - T_t^{(\Theta, \varphi)}x,\; T_s^{(\Theta, \varphi)}x - x \rangle.$$
Definition 8 ([32]). A nonlinear operator A with domain $D(A) \subseteq H$ and range $R(A) \subseteq H$ is said to be α-inverse strongly monotone (for short, α-ism) if there exists $\alpha > 0$ such that
$$\langle Ax - Ay, x - y \rangle \ge \alpha\|Ax - Ay\|^2 \quad \text{for all } x, y \in D(A).$$
Lemma 9 ([37]). Let C be a closed convex subset of H and $S : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(S) \ne \emptyset$. If $\{x_n\}$ is a sequence in C such that $x_n \rightharpoonup x$ and $(I - S)x_n \to 0$, then $x = Sx$.
Lemma 10 ([38]). Let $F : H \to H$ be an L-Lipschitzian and η-strongly monotone mapping. Let $0 < \mu < 2\eta/L^2$ and $0 < t < 1$. Define
$$T^{t}x := Sx - t\mu F(Sx), \quad x \in H,$$
where $S : H \to H$ is a nonexpansive mapping. Then, the mapping $T^{t}$ is a contraction from H into H; that is, $\|T^{t}x - T^{t}y\| \le (1 - t\tau)\|x - y\|$ for all $x, y \in H$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu L^2)}$.
Lemma 11 ([39]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n + \epsilon_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in (0, 1), $\{\epsilon_n\}$ a sequence of nonnegative real numbers and $\{\delta_n\}$ a sequence in $\mathbb{R}$ such that $\sum_{n=0}^{\infty}\gamma_n = \infty$, $\limsup_{n\to\infty}\delta_n \le 0$ and $\sum_{n=0}^{\infty}\epsilon_n < \infty$. Then, $\lim_{n\to\infty} a_n = 0$.
Lemma 12 ([23]). For a given $(x^*, y^*) \in C \times C$, $(x^*, y^*)$ is a solution of problem (4) if and only if $x^*$ is a fixed point of the mapping $G : C \to C$ defined by
$$G(x) = P_C[P_C(x - \mu Bx) - \lambda A P_C(x - \mu Bx)], \quad x \in C,$$
where $y^* = P_C(x^* - \mu Bx^*)$.
Lemma 13 ([30]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space X and $\{\beta_n\}$ be a sequence in [0, 1] with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that $x_{n+1} = (1 - \beta_n)z_n + \beta_n x_n$ for all integers $n \ge 0$ and $\limsup_{n\to\infty}(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|) \le 0$. Then, $\lim_{n\to\infty}\|z_n - x_n\| = 0$.
3. Main Result
Theorem 14. Let C be a closed convex subset of H; Θ be a bifunction satisfying the conditions of Lemma 6; φ be a lower semicontinuous and convex function satisfying restriction (B1) or (B2) of Lemma 6; A, B and D be α-ism, β-ism and ω-ism mappings, respectively; $\{T_i\}$ be an infinite family of nonexpansive self-mappings on H; F be an L-Lipschitzian and ν-strongly monotone mapping; and V be a κ-Lipschitzian mapping. Let $0 < \mu < 2\nu/L^2$ and $0 < \gamma\kappa < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\nu - \mu L^2)}$. Set and assume . Suppose that , , and are real sequences satisfying the following conditions:
- ()
, , and ;
- ()
and ;
- ()
and ;
- ()
for some and .
Given , let $\{x_n\}$ be a sequence generated by the scheme (7), where for and for some , and . Suppose that for . Then, the sequence $\{x_n\}$ converges strongly to , where , which solves the variational inequality (VI) (8).
To prove Theorem 14, we first establish some lemmas.
Lemma 15. Let be an L-Lipschitzian and ν-strongly monotone mapping with . Then, is nonexpansive.
Lemma 16. Let $B : C \to H$ be an α-ism mapping and $0 < \lambda \le 2\alpha$. Then, $I - \lambda B$ is nonexpansive.
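A short verification of Lemma 16 (in the standard form reconstructed above): for $x, y \in C$,
$$\|(I - \lambda B)x - (I - \lambda B)y\|^2 = \|x - y\|^2 - 2\lambda\langle x - y,\, Bx - By \rangle + \lambda^2\|Bx - By\|^2 \le \|x - y\|^2 - \lambda(2\alpha - \lambda)\|Bx - By\|^2 \le \|x - y\|^2,$$
since B is α-ism and $0 < \lambda \le 2\alpha$.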
Proof of Theorem 14. We break the proof into several steps.
Step 1. The sequences $\{x_n\}$ and $\{y_n\}$ are bounded. Suppose that and . Therefore, from (7), we obtain ; since A is α-ism, , and , we derive from (7) and Lemma 16 that . Hence, . Therefore, by (11), we have , and, from (12), we have . Hence, by (10), we have . In a similar way, we have . Substituting (14) into (15), we obtain . By using (7) and the conditions on the parameters, we may assume, without loss of generality, that . Then, from (7), (9) and Lemma 10, we have . By induction, we have for all n. Hence, $\{x_n\}$ is bounded, which implies that the other sequences appearing in (7) are all bounded as well.
Step 2. The sequence $\{x_n\}$ is asymptotically regular, that is, $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$. To see this, we set to derive that . Since, for , the relevant sequences are bounded, we have . Thus, by (18)–(20), we have . Observe that, by Lemma 16, we have . Therefore, from (7), we have , which implies that , where the constant involved is sufficiently large. Additionally, from Lemma 7, we have . Consequently, it follows from (17), (19), (21)–(23) and the conditions on the parameters that . Hence, by Lemma 13, we have , and therefore .
Step 3. We prove that
- (3a) ;
- (3b) ;
- (3c) .
From (24), we obtain . Let and . Therefore, from (7) and (16), we obtain . Additionally, from (12), (13) and (26), we have . On the other hand, by (7) and (6), we have ; therefore, by Lemma 16, we have . Again, by (7), we obtain , which implies . It follows from (29) and (30) that . Therefore, from Lemmas 1 and 10, we obtain . From , (24), (25) and (27), we obtain . By (32) and , we have . The firm nonexpansivity of implies that . From (24), (25) and (28), we obtain . Then, from , we have . By (7), we have . Hence, . Therefore, , and so . Since , it follows from (25), and that .
Step 4. We establish the variational inequality (34), where is the unique fixed point of the contraction ; namely, . Alternatively, is the unique solution of the variational inequality (35). To prove (34), take a subsequence of weakly convergent to a point and such that . By virtue of VI (35), it suffices to show that . To see that , we use ; the demiclosedness principle for nonexpansive mappings then ensures that . Since is bounded for , we can assume that as , where for . Define for ; therefore, for . Note that . Hence, for , where E is an arbitrary bounded subset of H. Since and is averaged for , by Proposition 5, we have . From , where is a bounded subset containing and is a bounded subset containing , together with (33) and (36), we obtain . Therefore, from Lemma 9, we have , and hence . Next, we show that . Since , it follows from the definition of and the monotonicity of Θ that . From (A2), it follows that . Replacing n by , we have . Now, set with . Then, from (37), we have . From (3b), we have . Moreover, by the monotonicity of A, the lower semicontinuity of , and , we obtain as . From (A1), (A4), the convexity of φ and (38), we have . Hence, . It remains to show that . We know that . From Lemma 9, we have . Therefore, , and the proof of Step 4 is complete.
Step 5. Strong convergence: the sequence $\{x_n\}$ converges in norm to the point satisfying (34). From Lemma 1 and (7), we have . We can rewrite the last relation as (39), where and . It is now immediately clear that and . This enables us to apply Lemma 11 to relation (39) to arrive at , that is, the convergence holds in norm. □
Corollary 17. Let all the assumptions of Theorem 14 hold except that, for all n, , , and , and (instead of ). Then, the sequence defined accordingly, where the initial guess is arbitrary, converges strongly to , which solves the variational inequality (8).
Corollary 18. Let all the assumptions of Theorem 14 hold except that, for all n, and , and (instead of ). Then, the sequence defined accordingly, where the initial guess is arbitrary, converges strongly to , which solves the variational inequality (8).
4. Numerical Test
In this section, we first give a numerical example that satisfies all the assumptions of Theorem 14 in order to illustrate the convergence of the sequence generated by the iterative process defined by (7). Next, we give another numerical example for (7) to compare its behavior with the iterative method (5) of Ke and Ma [29].
Example 19. Let and , and define , and . Then, A is -ism and, from Lemma 6, is single-valued for all . Now, we deduce a formula for . For any and , we have . Set . Then, is a quadratic function of y with coefficients , and . Therefore, its discriminant is . Since for all , this is true if and only if . That is, . Therefore, , which yields . Therefore, from Lemma 6, we have . Let , , and for . Suppose , , and . Hence, B is -ism, D is -ism, F is -Lipschitzian and -ism, and V is -Lipschitzian. Let , and . Hence, . Then, from Theorem 14, the sequence , generated iteratively by (40), converges strongly to , where .
Now, we compare the effectiveness of our algorithm with algorithm (5) by means of a numerical example. In fact, Ke and Ma [29] proved the following strong convergence theorem.
Theorem 20. Let C be a closed convex subset of H, T be a nonexpansive self-mapping on C with $\mathrm{Fix}(T) \ne \emptyset$, and f be a κ-contraction on C for some $\kappa \in [0, 1)$. Pick any $x_0 \in C$ and let $\{x_n\}$ be a sequence generated by (5), where $\{\alpha_n\}$ and $\{s_n\}$ are real sequences satisfying the following conditions:
- ( ) , , and ;
- ( ) .
Then, the sequence $\{x_n\}$ converges strongly to a fixed point $q$ of T, which solves the variational inequality
$$\langle (I - f)q,\, x - q \rangle \ge 0 \quad \text{for all } x \in \mathrm{Fix}(T).$$
Example 21. Let all the assumptions of Example 19 hold except for the mappings, which are now taken so that for all and . First, suppose that the sequence $\{x_n\}$ is generated by (7). Then, the scheme (7) can be simplified as (41). Therefore, the sequence $\{x_n\}$ converges strongly to 0 by Theorem 14. Now, let the sequence $\{x_n\}$ be generated by (5). Then, the scheme (5) can be simplified as (42). Therefore, the sequence $\{x_n\}$ converges strongly to 0 by Theorem 20.
Next, a numerical comparison of algorithms (41) and (42) is provided. According to Table 1 and Figure 1, we see that, although the initial points are different ( and ), in both cases the sequence defined by (40) converges to 0, where and . Table 2 and Figure 2 indicate that the sequences generated by (41) and (42) converge to 0, where and . The greater efficiency of algorithm (41) in comparison with algorithm (42) is clearly apparent in this figure.
Remark 22. Table 2 and Figure 2 show that the convergence rate of the iterative algorithm (7) is faster than that of the iterative algorithm (5) of Ke and Ma. In fact, regarding Table 2 and Figure 2, we observe that algorithm (41) approaches 0 from the third term onwards, whereas algorithm (42) does not approach 0 even by the fiftieth term.