1. Introduction
Let $H$ be a real Hilbert space with norm $\|\cdot\|$ and inner product $\langle \cdot, \cdot \rangle$, respectively. We recall the variational inclusion problem (VIP): find $x^* \in H$ such that
$$0 \in A x^* + B x^*, \qquad (1)$$
where $A: H \to 2^H$ is a set-valued operator and $B: H \to H$ is a single-valued operator. We denote the solution set of (1) by $\Omega$. The variational inclusion problem is a crucial extension of the variational inequality problem. Many nonlinear problems, such as saddle point, minimization, and split feasibility problems, can be transformed into variational inclusion problems, which in turn find applications in signal processing, neural networks, medical image reconstruction, machine learning, data mining, etc.; see [1,2,3,4,5,6,7].
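As a standard illustration of the minimization case (stated here for orientation, not taken from this paper), let $f$ be convex and differentiable and $g$ be proper, convex, and lower semicontinuous on $H$; then
$$x^* \in \operatorname{argmin}_{x \in H} \{ f(x) + g(x) \} \iff 0 \in \nabla f(x^*) + \partial g(x^*),$$
which is exactly (1) with the single-valued operator $B = \nabla f$ and the set-valued operator $A = \partial g$.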
As is well known, (1) can be converted into the fixed point equation
$$x^* = J_\lambda^A (x^* - \lambda B x^*)$$
for some $\lambda > 0$, where $J_\lambda^A = (I + \lambda A)^{-1}$ is the resolvent operator of $A$. The famous forward–backward splitting method (FBSM) was proposed by Lions and Mercier [8] in 1979:
$$x_{n+1} = J_\lambda^A (x_n - \lambda B x_n),$$
where $A$ and $B$ are maximally monotone and $\xi$-inverse strongly monotone, respectively, and $\lambda \in (0, 2\xi)$. Note that Lipschitz continuity of an operator is a weaker property than inverse strong monotonicity, so (FBSM) has a shortcoming: its convergence requires a rather strong hypothesis on $B$. In order to overcome this difficulty, Tseng [9] constructed a modified forward–backward splitting algorithm (TFBSM) in 2000:
$$y_n = J_\lambda^A (x_n - \lambda B x_n), \qquad x_{n+1} = y_n - \lambda (B y_n - B x_n),$$
where $B$ is monotone and Lipschitz continuous.
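To make the two schemes concrete, here is a minimal Python sketch that runs (FBSM) and (TFBSM) side by side on a toy composite problem; the specific choices $A = \partial \|\cdot\|_1$ (whose resolvent is soft-thresholding) and the affine monotone operator $B$ are our own illustrative assumptions, not taken from this paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Resolvent J_t^A of A = subdifferential of t*||.||_1 (the prox operator).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
E = M.T @ M + np.eye(10)            # symmetric positive definite
b = rng.standard_normal(10)
B = lambda x: E @ x - b             # single-valued, monotone, Lipschitz
lam = 0.9 / np.linalg.norm(E, 2)    # stepsize below 1/L

x_fbsm = np.zeros(10)
x_tseng = np.zeros(10)
for n in range(200):
    # FBSM: forward (explicit) step on B, backward (resolvent) step on A.
    x_fbsm = soft_threshold(x_fbsm - lam * B(x_fbsm), lam)
    # TFBSM: the same forward-backward step plus Tseng's correction term.
    y = soft_threshold(x_tseng - lam * B(x_tseng), lam)
    x_tseng = y - lam * (B(y) - B(x_tseng))
print(x_fbsm, x_tseng)
```

For this symmetric positive definite choice of $B$, both stepsize restrictions ($\lambda < 2\xi$ for FBSM and $\lambda < 1/L$ for TFBSM) are satisfied by the value above.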
On the other hand, a famous method for solving variational inequalities is the projection and contraction method, which was first introduced by He [10] for the variational inequality problem in Euclidean space. Inspired by this, the following proximal contraction method (PCM) was proposed by Zhang and Wang [11] in 2018:
$$y_n = J_{\lambda_n}^A (x_n - \lambda_n B x_n), \qquad d_n = x_n - y_n - \lambda_n (B x_n - B y_n), \qquad x_{n+1} = x_n - \gamma \eta_n d_n,$$
where $\gamma \in (0, 2)$, $\eta_n = \langle x_n - y_n, d_n \rangle / \|d_n\|^2$, and the sequence of variable stepsizes $\{\lambda_n\}$ satisfies some conditions. Notice that both (TFBSM) and (PCM) yield only weak convergence in real Hilbert spaces, and weakly convergent results are, in general, less desirable than strongly convergent ones. In order to obtain strong convergence, Hieu et al. [12] gave an algorithm named the regularization proximal contraction method (RPCM) for solving (1) in 2021, which applies the (PCM) steps to the Tikhonov-regularized operator $B + \alpha_n I$ in place of $B$:
$$y_n = J_{\lambda_n}^A \big( x_n - \lambda_n (B x_n + \alpha_n x_n) \big), \qquad d_n = x_n - y_n - \lambda_n \big( B x_n - B y_n + \alpha_n (x_n - y_n) \big), \qquad x_{n+1} = x_n - \gamma \eta_n d_n,$$
where $\gamma \in (0, 2)$, $\eta_n = \langle x_n - y_n, d_n \rangle / \|d_n\|^2$, and the sequences $\{\lambda_n\}$ and $\{\alpha_n\}$ satisfy some appropriate conditions. Before this, some scholars had successfully applied this regularization technique to the variational inequality problem. Very recently, Song and Bazighifan [13] introduced an inertial regularized method for solving the variational inequality and null point problem.
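As an illustration of the (PCM) update, the following Python sketch implements a single iteration; the zero-direction safeguard and the function signature are our own assumptions.

```python
import numpy as np

def pcm_step(x, lam, gamma, resolvent, B):
    """One proximal contraction (PCM) step as displayed above."""
    y = resolvent(x - lam * B(x), lam)        # forward-backward point y_n
    d = x - y - lam * (B(x) - B(y))           # correction direction d_n
    dd = float(np.dot(d, d))
    if dd == 0.0:                             # d_n = 0 means y_n already solves (1)
        return y
    eta = float(np.dot(x - y, d)) / dd        # contraction steplength eta_n
    return x - gamma * eta * d                # relaxation factor gamma in (0, 2)
```

Here `resolvent(z, lam)` evaluates $J_{\lambda}^{A}(z)$, for instance the soft-thresholding operator from the previous sketch.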
In recent years, there has been growing interest in inertial methods, which are regarded as effective tools for accelerating convergence. The inertial method is favored by many scholars because of its simple structure and ease of implementation, and it has been widely promoted and studied in depth. In 2003, Moudafi and Oliny [14] combined (FBSM) with the inertial technique to construct a new algorithm:
$$x_{n+1} = J_\lambda^A \big( x_n + \theta_n (x_n - x_{n-1}) - \lambda B x_n \big),$$
where $\{\theta_n\}$ is a positive real sequence. Furthermore, some scholars have proposed multi-step inertial methods. In 2021, Wang et al. [15] proposed a multi-step inertial hybrid method to solve problem (1); a sketch of the generic multi-step inertial extrapolation is given below.
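For concreteness, the extrapolation step that multi-step inertial methods prepend to a base update can be sketched as follows; the indexing convention and coefficient handling here are our assumptions, and [15] should be consulted for the exact rule.

```python
import numpy as np

def inertial_extrapolation(history, thetas):
    """Multi-step inertial point: w_n = x_n + sum_i theta_i (x_{n-i+1} - x_{n-i}).

    history -- list [x_n, x_{n-1}, ..., x_{n-N}] of the last N+1 iterates
    thetas  -- inertial coefficients (theta_1, ..., theta_N)
    """
    w = history[0].copy()
    for i, theta in enumerate(thetas):
        w += theta * (history[i] - history[i + 1])
    return w
```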
Inspired by [12,13,15], we consider the variational inclusion and null point problem: find $x^* \in H$ such that
$$0 \in A x^* + B x^* \quad \text{and} \quad G x^* = 0, \qquad (2)$$
where $G$ and $F$ are nonlinear operators; $F$ enters through the regularization introduced in Section 3. We propose two modified regularized multi-step inertial methods to solve the above problem: a modified forward–backward splitting algorithm and a proximal contraction algorithm. Using regularization techniques, the new algorithms converge strongly under mild conditions. Some numerical examples are given to show that our algorithms are efficient.
This article is organized as follows: in Section 2, we introduce the notation, fundamental definitions, and results used in the later proofs. In Section 3, we present the new algorithms and discuss their convergence. In Section 4, we report some numerical experiments that support our theoretical results.
3. Main Results
We mainly introduce our new algorithms and analyze their convergence in this section. Let $H$ be a real Hilbert space. The following assumptions will be needed throughout the paper:
- (A1) $A: H \to 2^H$ is maximally monotone.
- (A2) $B: H \to H$ is monotone and $L$-Lipschitz continuous.
- (A3) $F: H \to H$ is $\beta$-strongly monotone (for some $\beta > 0$) and $k$-Lipschitz continuous.
- (A4) $G: H \to H$ is $\xi$-inverse strongly monotone.
- (A5) $\Gamma := \{ x \in \Omega : G x = 0 \} \neq \emptyset$, where $\Omega$ is the solution set of (1).
To solve (2), we construct an auxiliary problem: for each $\alpha > 0$, find $x \in H$ such that
$$0 \in A x + B x + G x + \alpha F x. \qquad (3)$$
The solution of problem (3) is denoted by $x_\alpha$.
Lemma 4. Under the assumptions (A1)–(A4), for each $\alpha > 0$, the problem (3) has a unique solution $x_\alpha$.
Proof. By the properties of $A$, $B$, $G$, and $F$ in the hypotheses, we can conclude that $B + G + \alpha F$ is strongly monotone. It is well known that inclusions governed by the sum of a maximally monotone operator and a strongly monotone Lipschitz operator have unique solutions (see [20]). Therefore, the problem (3) has a unique solution $x_\alpha$. □
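For completeness, here is the one-line estimate behind the strong monotonicity claim, using only (A2)–(A4) (with $\beta$ the strong monotonicity modulus of $F$): for all $x, y \in H$,
$$\langle (B + G + \alpha F) x - (B + G + \alpha F) y,\, x - y \rangle \ge \xi \|G x - G y\|^2 + \alpha \beta \|x - y\|^2 \ge \alpha \beta \|x - y\|^2,$$
since $\langle B x - B y, x - y \rangle \ge 0$ by monotonicity. Hence $B + G + \alpha F$ is $\alpha\beta$-strongly monotone for every $\alpha > 0$.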
Lemma 5. The net $\{x_\alpha\}$ is bounded.
Proof. For each $z \in \Gamma$ and $\alpha > 0$, we have $0 \in A x_\alpha + B x_\alpha + G x_\alpha + \alpha F x_\alpha$, $-B z \in A z$, and $G z = 0$. Thus,
$$-(B x_\alpha + G x_\alpha + \alpha F x_\alpha) \in A x_\alpha$$
and
$$\langle B x_\alpha - B z + G x_\alpha - G z + \alpha F x_\alpha,\, x_\alpha - z \rangle \le 0.$$
Using the monotonicity of $B$ and $G$, we derive
$$\langle F x_\alpha,\, x_\alpha - z \rangle \le 0. \qquad (4)$$
By (4) and the $\beta$-strong monotonicity of $F$, it follows that
$$\beta \|x_\alpha - z\|^2 \le \langle F x_\alpha - F z,\, x_\alpha - z \rangle \le \langle F z,\, z - x_\alpha \rangle. \qquad (5)$$
Consequently, by (5) and the Cauchy–Schwarz inequality, we find $\beta \|x_\alpha - z\|^2 \le \|F z\| \, \|x_\alpha - z\|$, and then $\|x_\alpha - z\| \le \|F z\| / \beta$. So the net $\{x_\alpha\}$ is bounded. □
Lemma 6. For all $\alpha, \alpha' > 0$, there exists $M > 0$ such that $\|x_\alpha - x_{\alpha'}\| \le M \, |\alpha - \alpha'| / \alpha$.
Proof. By assumption, $x_\alpha$ and $x_{\alpha'}$ are solutions of problem (3); let us suppose that $\alpha' < \alpha$. Then,
and
which implies
and
By Lemma 1, we know that
or, equivalently,
The properties of $G$ and $F$ and the Cauchy–Schwarz inequality imply that
which is equivalent to
The Lipschitz continuity of the mappings $F$ and $G$ implies that they are bounded on bounded sets. Combining this with Lagrange's mean value theorem, we deduce that
and this, together with (6), implies
where
. Indeed, since $F$ and $G$ are Lipschitz continuous, the nets
and
are bounded. If $\alpha < \alpha'$, we obtain the same results. □
Lemma 7. As $\alpha \to 0^+$, the net $\{x_\alpha\}$ converges strongly to the unique solution $x^\dagger$ of problem (2).
Proof. According to the conclusion of Lemma 5, there exists a subsequence $\{x_{\alpha_k}\}$ of the net $\{x_\alpha\}$ such that $\alpha_k \to 0$ and $x_{\alpha_k} \rightharpoonup \bar{x}$ as $k \to \infty$. From RVI, we have that
. Let us take a point $(u, v)$ in the graph of $A$, that is, $v \in A u$. Thus, by the assumption (A1), we derive
Replacing
with
, we deduce from the monotonicity of $B$ that
It follows that the sequence
is bounded, by the boundedness of the sequence
and the Lipschitz continuity of $F$. Letting $k \to \infty$ in relation (8), we infer that
For every
,
and
. By (3), we obtain
and, due to the definition of
, we know that
. By the monotonicity of $F$,
which leads to
By the property of $G$, noting (10) and
, we obtain
which yields
For any $\lambda > 0$, the resolvent $J_\lambda^A$ is clearly nonexpansive. Owing to Lemma 3, we obtain that
, which, together with (9), implies
Noting (5), we obtain
for all
. Letting $k \to \infty$, we have
By the Minty lemma [21], we get
Due to the uniqueness of the solution $x^\dagger$ to problem (2), we have $\bar{x} = x^\dagger$. Since $\bar{x}$ is an arbitrary weak cluster point of the net $\{x_\alpha\}$, the whole net converges weakly to $x^\dagger$. After that, applying (5) with $z = x^\dagger$, we get
Taking the limit in (11) as $\alpha \to 0^+$, we obtain
Thus, $x_\alpha \to x^\dagger$. □
Remark 1. can be chosen as , where .
Lemma 8. Under the condition (A2), the stepsize sequence $\{\lambda_n\}$ generated by Algorithm 1 or Algorithm 2 is convergent. To be more precise, we have $\lim_{n \to \infty} \lambda_n = \lambda$ for some $\lambda \ge \min\{\lambda_1, \mu/L\}$.
Algorithm 1: Modified multi-step inertial forward–backward splitting method with regularization
Initialization: Let $x_0, x_1 \in H$ be arbitrary, $\lambda_1 > 0$, $\mu \in (0, 1)$, and set $n = 1$. Choose a sequence $\{\alpha_n\}$ such that
and
. Choose a sequence
satisfying:
For a given positive integer $N$, choose a sequence
satisfying
Iterative steps: Calculate $x_{n+1}$ as follows:
Step 1. Compute
where
for some
with
Set $n := n + 1$ and go to Step 1.
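Since the displayed formulas of Algorithm 1 are not fully reproduced here, the following Python sketch only conveys the overall structure described above: a multi-step inertial extrapolation followed by a Tseng-type forward–backward step applied to the regularized operator $B + G + \alpha_n F$ from (3). Every concrete formula in the sketch is an assumption for illustration, not the authors' exact scheme.

```python
import numpy as np

def algorithm1_sketch(resolvent, B, G, F, x_init, alphas, thetas,
                      lam=0.5, n_iter=100):
    """Illustrative multi-step inertial regularized forward-backward iteration.

    resolvent(z, lam) evaluates J_lam^A(z); alphas are regularization
    parameters alpha_n -> 0; thetas are the N inertial coefficients.
    """
    N = len(thetas)
    history = [np.asarray(x_init, dtype=float)] * (N + 1)  # x_n, ..., x_{n-N}
    for n in range(n_iter):
        T = lambda u, a=alphas[n]: B(u) + G(u) + a * F(u)  # operator of (3)
        # Multi-step inertial extrapolation point w_n.
        w = history[0].copy()
        for i, th in enumerate(thetas):
            w += th * (history[i] - history[i + 1])
        # Tseng-type forward-backward step on the regularized operator.
        y = resolvent(w - lam * T(w), lam)
        x_new = y - lam * (T(y) - T(w))
        history = [x_new] + history[:-1]
    return history[0]
```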
Algorithm 2: Modified multi-step inertial proximal contraction method with regularization
Proof. Since
in the case of
, we have
By induction, we deduce that the sequence $\{\lambda_n\}$ has the lower bound $\min\{\lambda_1, \mu/L\}$. From the computation of $\lambda_{n+1}$, we can get
that is,
Let
denote
for all
. We also know that
, and then
Because
, obviously
Moreover, since
, we infer
and then
Since $\{\lambda_n\}$ has the lower bound $\min\{\lambda_1, \mu/L\}$, we know that
. So we have
and, furthermore,
Therefore, $\{\lambda_n\}$ is convergent. □
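The update rule for $\lambda_{n+1}$ is not reproduced above; the sketch below shows the standard adaptive rule consistent with the lower bound $\min\{\lambda_1, \mu/L\}$ appearing in the proof (whether Algorithms 1 and 2 use exactly this rule is our assumption). The summable increments $p_n$ allow the stepsize to occasionally grow, which is why $\{\lambda_n\}$ converges rather than being monotone.

```python
import numpy as np

def next_stepsize(lam_n, mu, x, y, Bx, By, p_n):
    # Assumed adaptive rule: no knowledge of the Lipschitz constant L is
    # needed, yet lam_n >= min(lam_1, mu/L) since ||Bx - By|| <= L ||x - y||.
    denom = np.linalg.norm(Bx - By)
    if denom > 0:
        return min(mu * np.linalg.norm(x - y) / denom, lam_n + p_n)
    return lam_n + p_n
```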
Theorem 1. If the conditions (A1)–(A5) hold, $x^\dagger$ is the unique solution of problem (2), and the sequence $\{x_n\}$ is generated by Algorithm 1, then $\{x_n\}$ converges strongly to $x^\dagger$.
Proof. Setting
,
Since
is the solution of (3), we get
and, since the resolvent is firmly nonexpansive,
which implies
By the monotonicity of $B$ and $G$, we find
Combining (13) and (14), we derive
or, equivalently,
Combining (12) and (15), we get
, which, together with the fact that
, implies
Let
and
be three positive numbers such that
By virtue of Lemma 8,
and
, there exists
such that
Because $F$ is strongly monotone,
In view of (18)–(20), we get
which implies
By Lemma 6, for all
, we have
where $M$ appears in Lemma 6. Substituting (23) into (22), for all
, we deduce
Observe that, for all
,
where
. The condition on
implies that
. Substituting (25) into (24), for all
,
where
,
is positive, and
. By the constraints on
and
, we know that
,
, and
. We deduce from Lemma 2 that $x_n \to x^\dagger$ as $n \to \infty$. □
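The last step relies on a standard strong-convergence lemma; presumably this is the content of Lemma 2 cited above, which we record here in its common form (an assumption on our part, following Xu's well-known result): if $\{a_n\}$ is a nonnegative real sequence with
$$a_{n+1} \le (1 - t_n) a_n + t_n b_n, \qquad t_n \in (0, 1), \qquad \sum_{n=1}^{\infty} t_n = \infty, \qquad \limsup_{n \to \infty} b_n \le 0,$$
then $a_n \to 0$ as $n \to \infty$.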
Theorem 2. If the conditions (A1)–(A5) hold, $x^\dagger$ is the unique solution of problem (2), and the sequence $\{x_n\}$ is generated by Algorithm 2, then $\{x_n\}$ converges strongly to $x^\dagger$.
Proof. By Lemma 8 we have
, so for all
there exist
and
such that
. We can also obtain
, and then
is bounded. We will use the letter $V$ to denote
; obviously
.
In the remainder of the proof, we assume that
. Setting
, we then have
For any
,
is equivalent to
. Since
combining (26) and (27), if
, then
and hence
. Then observe that
By the definition of
,
which, together with (28), implies
By the definition of
,
and
Substituting (30) and (31) into (29), we infer
Then, by the properties of $B$ and $G$, we infer that
Using the same method as in the proof of Theorem 1, we get
where
. Since
, we may assume
. Hence
The rest of the proof is the same as that of Theorem 1. □
4. Numerical Experiments
Three examples are given to show the performance of our algorithms. When the inertial coefficients are equal to zero, we write MFBMR and MPCMR for Algorithms 1 and 2, respectively. We denote Algorithm 1 with $N = 1, 2, 3$ by MIFBMR, 2-MMIFBMR, and 3-MMIFBMR, respectively; similarly, we denote Algorithm 2 with $N = 1, 2, 3$ by MIPCMR, 2-MMIPCMR, and 3-MMIPCMR, respectively. All programs are written in Matlab 9.0 and run on a desktop PC with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz 1.19 GHz and 16.0 GB RAM.
Example 1. Suppose
. Let
be a mapping defined as
and
as
It is obvious that $A$ is maximally monotone. We can prove that $B$ is monotone and Lipschitz continuous. By calculation, we know that $G$ is
-inverse strongly monotone. Let
.
Choose
,
and
for MIFBMR, 2-MMIFBMR, 3-MMIFBMR, MIPCMR, 2-MMIPCMR, and 3-MMIPCMR. Choose
,
,
,
, and
for each algorithm. Choose
,
for MPCMR, MIPCMR, 2-MMIPCMR, and 3-MMIPCMR. It is obvious that
and that
is the unique solution of problem (2). The numerical results for this example are presented in Figure 1 and Figure 2.

Example 2. Let
. Let
. Let
be defined by
where $J$ is an upper triangular matrix whose nonzero elements are all 1, in
. Let
be a mapping defined as
where
Here $C$ is a matrix, $S$ is a skew-symmetric matrix, and $D$ is a diagonal matrix whose diagonal entries are positive; they all lie in
. Therefore $E$ is positive definite. Obviously, $B$ is monotone and Lipschitz continuous. Define
as
where $Q$ is a nonzero matrix in
. By calculation, we know that $G$ is
-inverse strongly monotone. Choose
,
and
for MIFBMR, 2-MMIFBMR, 3-MMIFBMR, MIPCMR, 2-MMIPCMR, and 3-MMIPCMR. Choose
,
,
,
, and
for each algorithm. Choose
,
for MPCMR, MIPCMR, 2-MMIPCMR, and 3-MMIPCMR. All the diagonal elements of $D$ are arbitrary in
, and the elements of $C$, $S$, and $Q$ are generated randomly in
,
, and
, respectively. It is obvious that
, and hence the solution of (2) is unique. The numerical results are presented in Figure 3 and Figure 4.

Example 3. Let
. Let
be a mapping defined as
, let
be a mapping defined as
and let
be a mapping defined as
We can verify that $B$ is monotone and
-Lipschitz continuous, and that $F$ is 1-strongly monotone and
-Lipschitz continuous. By calculation, we know that $G$ is 2-inverse strongly monotone. Choose
,
and
for MIFBMR, 2-MMIFBMR, 3-MMIFBMR, MIPCMR, 2-MMIPCMR, and 3-MMIPCMR. Choose
,
,
,
, and
for each algorithm. Choose
,
for MPCMR, MIPCMR, 2-MMIPCMR, and 3-MMIPCMR. It is obvious that
and that
is the unique solution of problem (2). The numerical results are presented in Figure 5, Figure 6, Figure 7 and Figure 8.

Remark 2. In Algorithms 1 and 2, the values of $L$, $k$, and $\xi$ do not need to be known.
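For reproducibility, the following sketch builds a test operator of the type described in Example 2; the particular combination $E = C C^{\mathsf{T}} + S + D$ and the sampling ranges are our assumptions (the classical Harker–Pang-type construction), since the exact formulas are not reproduced above.

```python
import numpy as np

def make_example2_operator(m, seed=0):
    """Random monotone, Lipschitz operator B(x) = E x with
    E = C C^T + S + D (assumed Harker-Pang-type construction)."""
    rng = np.random.default_rng(seed)
    C = rng.uniform(-2, 2, (m, m))
    S0 = rng.uniform(-2, 2, (m, m))
    S = S0 - S0.T                       # skew-symmetric part
    D = np.diag(rng.uniform(1, 2, m))   # positive diagonal entries
    E = C @ C.T + S + D                 # x^T E x > 0 for x != 0
    return lambda x: E @ x, E

B, E = make_example2_operator(10)
print(np.linalg.norm(B(np.ones(10))))
```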
5. Conclusions
We have introduced two improved regularized algorithms with multi-step inertia for solving the variational inclusion and null point problem in Hilbert spaces, and we obtain strong convergence without the inverse strong monotonicity assumption. Another advantage of our algorithms is that the stepsizes do not require the Lipschitz constant of the operator. In addition, the values of $k$, $L$, and $\xi$ are not needed in the calculation process, and the choice of the required parameter sequences, which may seem harsh, is actually practicable. Finally, the feasibility and effectiveness of our algorithms can be seen in the figures of the numerical experiments. A remaining question is how to obtain strong convergence under weaker conditions; we will study this issue in the future.