Article

New Self-Adaptive Inertial-like Proximal Point Methods for the Split Common Null Point Problem

1 College of Mathematics, Sichuan University, Chengdu 600014, China
2 Department of Mathematics, ORT Braude College of Engineering, Karmiel 2161002, Israel
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(12), 2316; https://doi.org/10.3390/sym13122316
Submission received: 24 October 2021 / Revised: 16 November 2021 / Accepted: 24 November 2021 / Published: 3 December 2021
(This article belongs to the Special Issue Symmetry in Nonlinear Analysis and Fixed Point Theory)

Abstract:
Symmetry plays an important role in solving practical problems of applied science, especially in algorithm innovation. In this paper, we propose self-adaptive inertial-like proximal point algorithms for solving the split common null point problem, which use a new inertial structure that avoids the usual convergence condition of general inertial methods and avoids computing the norm of the difference between $x_n$ and $x_{n-1}$ before choosing the inertial parameter. In addition, the selection of the step sizes in the inertial-like proximal point algorithms does not require prior knowledge of operator norms. Numerical experiments are presented to illustrate the performance of the algorithms. The proposed algorithms offer guidance for the further development of applied science that seeks to exploit symmetry in the context of technological innovation.

1. Introduction

We are concerned with the following split common null point problem (SCNPP):
$$\text{find } x^* \in H_1 \text{ that solves } 0 \in A(x^*)\tag{1}$$
$$\text{and } y^* = Tx^* \in H_2 \text{ that solves } 0 \in B(y^*),\tag{2}$$
where $H_1$ and $H_2$ are Hilbert spaces, $A: H_1 \to 2^{H_1}$ and $B: H_2 \to 2^{H_2}$ are set-valued mappings, and $T: H_1 \to H_2$ is a nonzero bounded linear operator.
The SCNPP (1) and (2), which covers the convex feasibility problem (CFP) (Censor and Elfving [1]), variational inequalities (VIs) (Moudafi [2]), and many constrained optimization problems as special cases, has attracted considerable attention both theoretically and practically (see Byrne [3], Moudafi and Thakur [4]).
The main idea for solving the SCNPP comes from symmetry, that is, invariance; therefore, fixed point theory plays a key role here. We recall the resolvent operator $J_r^A = (I + rA)^{-1}$, $r > 0$, which plays an essential role in the approximation theory for zero points of maximal monotone operators, as well as in solving (1) and (2), and which satisfies the following key facts.
Fact 1: 
The resolvent is not only always single-valued but also firmly nonexpansive:
$$\langle J_r^A x - J_r^A y, x - y\rangle \ge \|J_r^A x - J_r^A y\|^2.$$
Fact 2: 
Using the resolvent operator, problem (1) and (2) can be written as a fixed point problem:
$$x^* = J_\lambda^A(I - \gamma T^*(I - J_\lambda^B)T)x^*, \quad \lambda > 0, \ \gamma > 0.\tag{3}$$
Fact 2 transforms problem (1) and (2) into a fixed-point problem, and the study of the latter reflects invariance under a transformation, which is the essence of symmetry. Based on Fact 2, Byrne et al. [5] proposed the following forward–backward algorithm:
$$x_{n+1} = J_\lambda^A(x_n - \gamma T^*(I - J_\lambda^B)Tx_n)\tag{4}$$
and obtained weak convergence, where $T^*$ is the adjoint of T and the stepsize $\gamma \in (0, \frac{2}{L})$ with $L = \|T^*T\|$.
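For concreteness, iteration (4) only requires the two resolvents and one application of T and $T^*$ per step. The following is a minimal Python sketch in $\mathbb{R}^n$ under illustrative assumptions: the toy operators $A(x) = x$ and $B(y) = y$ (both maximal monotone, with explicit resolvents $J_\lambda u = u/(1+\lambda)$) and a randomly generated T. It is only a sketch of scheme (4), not the authors' code.

```python
import numpy as np

# Sketch of the forward-backward iteration (4) with A(x) = x and B(y) = y,
# so that J_lambda^A u = u / (1 + lambda), and likewise for B.
rng = np.random.default_rng(0)
T = rng.normal(size=(10, 30))
lam = 1.0
L = np.linalg.norm(T.T @ T, 2)         # L = ||T* T|| (spectral norm)
gamma = 1.0 / L                        # step size in (0, 2/L)

x = rng.normal(size=30)
for _ in range(500):
    Tx = T @ x
    residual = Tx - Tx / (1.0 + lam)   # (I - J_lambda^B) T x_n
    x = (x - gamma * T.T @ residual) / (1.0 + lam)   # J_lambda^A(forward step)
print(np.linalg.norm(x))               # tends to 0, the unique solution here
```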
At the same time, the inertial method, originating from the heavy ball with friction system, has attracted increasing attention thanks to its convergence properties in the field of continuous optimization. Therefore, many scholars have combined the forward–backward method (4) with the inertial technique to study the SCNPP and have proposed various iterative schemes. For related works, one can consult Alvarez and Attouch [6], Alvarez [7], Attouch et al. [8,9,10], Akgül [11], Hasan et al. [12], Khdhr et al. [13], Ochs et al. [14,15], Dang et al. [16], Soleymani and Akgül [17], Suantai et al. [18,19], Dong et al. [20], Sitthithakerngkiet et al. [21], Kazmi and Rizvi [22], Promluang and Kumam [23], Eslamian et al. [24], and the references therein.
Although these algorithms improve the numerical solution of the split common null point problem, they share two drawbacks. One is that the step size depends on the norm of the linear operator T, which entails a high computational cost, because the norm of the linear operator must be estimated before selecting the step size. The other is that the following condition is required:
$$\sum_{n=1}^{\infty}\alpha_n\|x_n - x_{n-1}\|^2 < \infty,\tag{5}$$
which means that one not only needs to calculate the norm of the difference between $x_n$ and $x_{n-1}$ in advance at each step but also to check whether $\alpha_n$ satisfies (5).
So it is natural to ask the following questions:
Question 1.1 
Can we construct an iterate for the SCNPP whose step size does not depend on the norm of the linear operator T?
Question 1.2 
Can condition (5) be removed from the inertial method while still ensuring the convergence of the sequence? Namely, can we construct a new inertial algorithm to solve the SCNPP (1) and (2) without prior computation of the norm of the difference between $x_n$ and $x_{n-1}$?
The purpose of this paper is to present a new self-adaptive inertial-like technique to give an affirmative answer to the above questions. Importantly, the innovative algorithms provide an idea of how to use symmetry to solve real-world problems in applied science.

2. Preliminaries

Let $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ be the inner product and the induced norm in a Hilbert space H, respectively. For a sequence $\{x_n\}$ in H, denote by $x_n \to x$ and $x_n \rightharpoonup x$ the strong and weak convergence of $\{x_n\}$ to x, respectively. Moreover, the symbol $\omega_w(x_n)$ represents the weak limit set of $\{x_n\}$; that is,
$$\omega_w(x_n) := \{x \in H : x_{n_j} \rightharpoonup x \text{ for some subsequence } \{x_{n_j}\} \text{ of } \{x_n\}\}.\tag{6}$$
The identity below is useful:
$$\|\alpha x + \beta y + \gamma z\|^2 = \alpha\|x\|^2 + \beta\|y\|^2 + \gamma\|z\|^2 - \alpha\beta\|x - y\|^2 - \beta\gamma\|y - z\|^2 - \gamma\alpha\|x - z\|^2\tag{7}$$
for all $x, y, z \in H$ and $\alpha + \beta + \gamma = 1$.
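For completeness, identity (7) can be verified by direct expansion; using $\alpha + \beta + \gamma = 1$, the coefficient bookkeeping is essentially the whole proof:

```latex
\|\alpha x + \beta y + \gamma z\|^2
  = \alpha^2\|x\|^2 + \beta^2\|y\|^2 + \gamma^2\|z\|^2
    + 2\alpha\beta\langle x, y\rangle + 2\beta\gamma\langle y, z\rangle
    + 2\gamma\alpha\langle z, x\rangle,
\qquad
\alpha - \alpha\beta - \gamma\alpha = \alpha(1 - \beta - \gamma) = \alpha^2,
```

so expanding the right-hand side of (7) reproduces exactly the coefficient $\alpha^2$ of $\|x\|^2$, and the coefficients of $\|y\|^2$, $\|z\|^2$ and the cross terms agree by symmetry.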
Definition 1.
A multivalued mapping $A: H \to 2^H$ with domain $D(A) = \{x \in H : Ax \ne \emptyset\}$ is monotone if
$$\langle x - y, x^* - y^*\rangle \ge 0$$
for all $x, y \in D(A)$, $x^* \in A(x)$, and $y^* \in A(y)$. A monotone operator A is said to be maximal if its graph is not properly contained in the graph of any other monotone operator.
Definition 2.
Let H be a real Hilbert space and let $h: H \to H$ be a mapping.
(i)
h is called Lipschitz with constant $\kappa > 0$ if $\|h(x) - h(y)\| \le \kappa\|x - y\|$ for all $x, y \in H$.
(ii)
h is called nonexpansive if $\|h(x) - h(y)\| \le \|x - y\|$ for all $x, y \in H$.
From Fact 1, we can conclude that $J_r^A$ is a nonexpansive operator whenever A is a maximal monotone mapping. Moreover, due to the work of Aoyama et al. [25], we have the following property:
$$\langle J_r^A x - y, x - J_r^A x\rangle \ge 0, \quad \forall y \in A^{-1}(0),\tag{8}$$
where $A^{-1}(0) = \{z \in H : 0 \in Az\}$.
Definition 3.
Let C be a nonempty closed convex subset of H. We use $P_C$ to denote the projection from H onto C; namely,
$$P_C x = \arg\min\{\|x - y\| : y \in C\}, \quad x \in H.$$
The following significant characterization of the projection $P_C$ should be recalled: given $x \in H$ and $z \in C$,
$$P_C x = z \iff \langle x - z, y - z\rangle \le 0, \quad \forall y \in C.$$
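As a quick numerical illustration of this characterization (a sketch, assuming C is the unit Euclidean ball, for which $P_C x = x/\max(1, \|x\|)$ in closed form):

```python
import numpy as np

# Verify <x - P_C x, y - P_C x> <= 0 for many random y in the unit ball C.
rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=5)                   # a point, typically outside C
z = x / max(1.0, np.linalg.norm(x))            # z = P_C x
for _ in range(1000):
    y = rng.normal(size=5)
    y /= max(1.0, np.linalg.norm(y))           # a point of C
    assert np.dot(x - z, y - z) <= 1e-12       # the variational inequality
```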
Lemma 1.
(Xu [26], Maingé [27]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n + c_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in (0,1) and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that
(1)
$\sum_{n=1}^{\infty}\gamma_n = \infty$;
(2)
$\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=1}^{\infty}\gamma_n|\delta_n| < \infty$;
(3)
$\sum_{n=1}^{\infty}c_n < \infty$.
Then $\lim_{n\to\infty}a_n = 0$.
Lemma 2.
(See, e.g., Opial [28]). Let H be a real Hilbert space and $\{x_n\}$ a bounded sequence in H. Assume there exists a nonempty subset $S \subset H$ satisfying the properties:
(i)
$\lim_{n\to\infty}\|x_n - z\|$ exists for every $z \in S$;
(ii)
$\omega_w(x_n) \subset S$.
Then, there exists $\bar{x} \in S$ such that $\{x_n\}$ converges weakly to $\bar{x}$.
Lemma 3.
(Maingé [29]). Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{n_j}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_j} < \Gamma_{n_j+1}$ for all $j \ge 0$. Also consider the sequence of integers $\{\sigma(n)\}_{n \ge n_0}$ defined by
$$\sigma(n) = \max\{k \le n : \Gamma_k \le \Gamma_{k+1}\}.$$
Then, $\{\sigma(n)\}_{n \ge n_0}$ is a nondecreasing sequence verifying $\lim_{n\to\infty}\sigma(n) = \infty$ and, for all $n \ge n_0$,
$$\max\{\Gamma_{\sigma(n)}, \Gamma_n\} \le \Gamma_{\sigma(n)+1}.$$

3. Main Results

3.1. Variant of Discretization

Inspired by the discretization of the second-order dynamical system $\frac{d^2x}{dt^2} + \lambda(t)\frac{dx}{dt} + Ax \ni 0$, we consider the following iterative sequence:
$$0 \in x_{n+1} - x_{n-1} - \theta_n(x_n - x_{n-1}) + \gamma_n A(x_{n+1}),\tag{9}$$
where $x_0, x_1$ are two arbitrary initial points and $\gamma_n$ is a nonnegative real number. This recursion can be rewritten as
$$x_{n+1} = J_{\gamma_n}^A(x_{n-1} + \theta_n(x_n - x_{n-1})),\tag{10}$$
which shows that a sequence $\{x_n\}$ satisfying (10) always exists for any choice of the sequences $\{\gamma_n\}$ and $\{\theta_n\}$, provided that $\gamma_n > 0$.
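A scalar sanity check of recursion (10) (a sketch, assuming the simplest maximal monotone operator $A(x) = x$, whose resolvent is $J_{\gamma}^A(u) = u/(1+\gamma)$):

```python
# The iterates of (10) approach 0, the unique zero of A(x) = x.
x_prev, x = 5.0, 3.0            # arbitrary initial points x_0, x_1
gamma, theta = 1.0, 0.5         # illustrative parameter choices
for n in range(50):
    x_prev, x = x, (x_prev + theta * (x - x_prev)) / (1.0 + gamma)
print(x)                        # ~ 0
```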
To distinguish it from Alvarez and Attouch's inertial proximal algorithm [6], we call this the inertial-like proximal point algorithm. Combining the inertial-like proximal point algorithm with the forward–backward method, we propose the following self-adaptive inertial-like proximal algorithms.

3.2. Some Assumptions

Assumption 1.
Throughout the rest of this paper, we assume that $H_1$ and $H_2$ are Hilbert spaces. We study the split common null point problem (SCNPP) (1) and (2), where $A: H_1 \to 2^{H_1}$ and $B: H_2 \to 2^{H_2}$ are set-valued maximal monotone mappings, $T: H_1 \to H_2$ is a bounded linear operator, and $T^*$ denotes the adjoint of T.
Assumption 2.
The functions are defined as
$$f(x) = \frac{1}{2}\|(I - J_r^A)x\|^2, \qquad F(x) = (I - J_r^A)x, \quad r > 0,$$
and
$$g(x) = \frac{1}{2}\|(I - J_\mu^B)Tx\|^2, \qquad G(x) = T^*(I - J_\mu^B)Tx, \quad \mu > 0.$$
Assumption 3.
Denote by Ω the solution set of the SCNPP (1) and (2); namely,
$$\Omega = \{x^* \in H_1 : 0 \in A(x^*) \text{ and } 0 \in B(Tx^*)\},$$
and we always assume $\Omega \ne \emptyset$.

3.3. Inertial-like Proximal Point Algorithms

Algorithm 1 Self-adaptive inertial-like algorithm
  • Initialization: Choose a sequence $\{\theta_n\} \subset [0,1]$ satisfying one of the three cases: (I) $\theta_n \in (0,1)$ such that $\liminf_{n\to\infty}\theta_n(1-\theta_n) > 0$; (II) $\theta_n \equiv 0$; or (III) $\theta_n \equiv 1$. Select arbitrary initial points $x_0, x_1$.
  • Iterative Step: After constructing the nth iterate $x_n$, compute
$$y_n = x_{n-1} + \theta_n(x_n - x_{n-1}),\tag{11}$$
and define the (n+1)th iterate by
$$x_{n+1} = J_r^A(I - \tau_n T^*(I - J_\mu^B)T)y_n,\tag{12}$$
where $\tau_n$ is defined as
$$\tau_n = \begin{cases}\dfrac{g(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2}, & \text{if } \|F(y_n)\|^2 + \|G(y_n)\|^2 \ne 0,\\ 0, & \text{otherwise}.\end{cases}$$
Remark 1.
It is not hard to see that if $\|F(y_n)\|^2 + \|G(y_n)\|^2 = 0$ for some $n \ge 0$, then $x_n$ is a solution of the SCNPP (1) and (2) and the iteration process terminates after finitely many steps. In addition, if $\theta_n \equiv 1$ and the step size $\tau_n$ depends on the norm of the linear operator T, Algorithm 1 recovers the method of Byrne et al. [5].
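To make the steps of Algorithm 1 concrete, here is a minimal Python sketch in $\mathbb{R}^n$. It assumes the illustrative operators $A(x) = x$ and $B(y) = y$ (so both resolvents are explicit and $\Omega = \{0\}$); for other problems, only the two resolvents need to be replaced.

```python
import numpy as np

def algorithm1(T, x0, x1, r=1.0, mu=1.0, theta=0.5, tol=1e-10, max_iter=10_000):
    """Sketch of Algorithm 1 with the toy operators A(x) = x and B(y) = y."""
    J_A = lambda x: x / (1.0 + r)          # resolvent of A
    J_B = lambda y: y / (1.0 + mu)         # resolvent of B
    x_prev, x = x0, x1
    for n in range(max_iter):
        y = x_prev + theta * (x - x_prev)               # inertial-like point (11)
        w = T @ y - J_B(T @ y)                          # (I - J_mu^B) T y_n
        g = 0.5 * np.dot(w, w)                          # g(y_n)
        F = y - J_A(y)                                  # F(y_n) = (I - J_r^A) y_n
        G = T.T @ w                                     # G(y_n) = T*(I - J_mu^B) T y_n
        denom = np.dot(F, F) + np.dot(G, G)
        tau = g / denom if denom > 0 else 0.0           # self-adaptive step size
        x_next = J_A(y - tau * G)                       # update (12)
        if np.linalg.norm(x_next - x) < tol:            # stopping rule (cf. Remark 2)
            return x_next, n
        x_prev, x = x, x_next
    return x, max_iter

T = np.random.default_rng(0).normal(size=(20, 50))
sol, iters = algorithm1(T, np.ones(50), 2.0 * np.ones(50))
print(iters, np.linalg.norm(sol))   # here the unique common null point is 0
```

Note that no norm of T is estimated anywhere: $\tau_n$ is computed from $g$, $F$, and $G$ only, which is precisely the self-adaptive feature of the method.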
Remark 2.
In the subsequent convergence analysis, we always assume that the two algorithms generate infinite sequences, namely, that they do not terminate after finitely many iterations. In addition, in the simulation experiments, we provide a stopping criterion to end the iteration in practice.

3.4. Convergence Analysis of Algorithms

Theorem 1.
If Assumptions 1–3 are satisfied, then the sequence $\{x_n\}$ generated by Algorithm 1 converges weakly to a point $z \in \Omega$.
Proof. 
We distinguish the following three situations: (I) $\theta_n \in (0,1)$ with $\liminf_{n\to\infty}\theta_n(1-\theta_n) > 0$; (II) $\theta_n \equiv 0$; and (III) $\theta_n \equiv 1$.
(I).
First, we consider the case $\theta_n \in (0,1)$ with $\liminf_{n\to\infty}\theta_n(1-\theta_n) > 0$.
Take any $z \in \Omega$. Then $z = J_r^A z$, $Tz = J_\mu^B Tz$, and $J_r^A(I - \tau_n T^*(I - J_\mu^B)T)z = z$ by Fact 2. It follows from (11) and (7) that
$$\|y_n - z\|^2 = \|x_{n-1} + \theta_n(x_n - x_{n-1}) - z\|^2 = \theta_n\|x_n - z\|^2 + (1-\theta_n)\|x_{n-1} - z\|^2 - \theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2.\tag{13}$$
Since $J_r^A$ is nonexpansive, it follows from (12) that
$$\begin{aligned}\|x_{n+1} - z\|^2 &= \|J_r^A(I - \tau_n T^*(I - J_\mu^B)T)y_n - z\|^2 \le \|(I - \tau_n T^*(I - J_\mu^B)T)y_n - z\|^2\\ &= \|y_n - z\|^2 - 2\tau_n\langle y_n - z, T^*(I - J_\mu^B)Ty_n\rangle + \tau_n^2\|T^*(I - J_\mu^B)Ty_n\|^2\\ &= \|y_n - z\|^2 - 2\tau_n\langle Ty_n - Tz, (I - J_\mu^B)Ty_n\rangle + \tau_n^2\|G(y_n)\|^2.\end{aligned}\tag{14}$$
It follows from property (8) of the resolvent operator that
$$\langle J_\mu^B Ty_n - Tz, (I - J_\mu^B)Ty_n\rangle \ge 0, \quad Tz \in B^{-1}(0),$$
and then, from the definition of $g(x)$, we have
$$\begin{aligned}\langle Ty_n - Tz, (I - J_\mu^B)Ty_n\rangle &= \langle Ty_n - J_\mu^B Ty_n, (I - J_\mu^B)Ty_n\rangle + \langle J_\mu^B Ty_n - Tz, (I - J_\mu^B)Ty_n\rangle\\ &= \|J_\mu^B Ty_n - Ty_n\|^2 + \langle J_\mu^B Ty_n - Tz, (I - J_\mu^B)Ty_n\rangle \ge 2g(y_n).\end{aligned}$$
Noticing the definition of $\tau_n$, we obtain
$$\begin{aligned}\|x_{n+1} - z\|^2 &\le \|y_n - z\|^2 - 4\tau_n g(y_n) + \tau_n^2\|G(y_n)\|^2\\ &= \theta_n\|x_n - z\|^2 + (1-\theta_n)\|x_{n-1} - z\|^2 - \theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 - 4\tau_n g(y_n) + \tau_n^2\|G(y_n)\|^2\\ &\le \theta_n\|x_n - z\|^2 + (1-\theta_n)\|x_{n-1} - z\|^2 - \theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 - \frac{3g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2},\end{aligned}\tag{15}$$
which means that
$$\|x_{n+1} - z\|^2 \le \theta_n\|x_n - z\|^2 + (1-\theta_n)\|x_{n-1} - z\|^2 \le \max\{\|x_n - z\|^2, \|x_{n-1} - z\|^2\};$$
hence the sequence $\{\|x_n - z\|\}$ is bounded, and so in turn is $\{y_n\}$.
Suppose that the sequence $\{\|x_n - z\|\}$ is not decreasing at infinity. Then, by Lemma 3, there exists a nondecreasing sequence of integers $\{\sigma(n)\}$, $n \ge N_1$ (for some $N_1$ large enough), such that $\sigma(n) \to \infty$ as $n \to \infty$ and
$$\|x_{\sigma(n)} - z\| \le \|x_{\sigma(n)+1} - z\|$$
for each $n \ge N_1$.
Notice that (15) holds for each $\sigma(n)$; thus, from (15) with n replaced by $\sigma(n)$, we have
$$\begin{aligned}\|x_{\sigma(n)+1} - z\|^2 &\le \theta_{\sigma(n)}\|x_{\sigma(n)} - z\|^2 + (1-\theta_{\sigma(n)})\|x_{\sigma(n)-1} - z\|^2 - \theta_{\sigma(n)}(1-\theta_{\sigma(n)})\|x_{\sigma(n)} - x_{\sigma(n)-1}\|^2 - \frac{3g^2(y_{\sigma(n)})}{\|F(y_{\sigma(n)})\|^2 + \|G(y_{\sigma(n)})\|^2}\\ &\le \theta_{\sigma(n)}\|x_{\sigma(n)} - z\|^2 + (1-\theta_{\sigma(n)})\|x_{\sigma(n)-1} - z\|^2,\end{aligned}$$
which means that
$$\|x_{\sigma(n)+1} - z\|^2 - \|x_{\sigma(n)} - z\|^2 \le (1-\theta_{\sigma(n)})(\|x_{\sigma(n)-1} - z\|^2 - \|x_{\sigma(n)} - z\|^2);$$
in view of the relation $\|x_{\sigma(n)} - z\| \le \|x_{\sigma(n)+1} - z\|$ for each $n \ge N_1$, the above inequality leads to a contradiction.
Therefore, there exists an integer $N_0 \ge 0$ such that $\|x_{n+1} - z\| \le \|x_n - z\|$ for all $n \ge N_0$. Hence the limit of the sequence $\{\|x_n - z\|^2\}$ exists, denoted by $l = \lim_{n\to\infty}\|x_n - z\|^2$, and so
$$\lim_{n\to\infty}(\|x_n - z\|^2 - \|x_{n+1} - z\|^2) = 0.$$
In addition, we have
$$\sum_{n=0}^{\infty}(\|x_{n+1} - z\|^2 - \|x_n - z\|^2) = \lim_{n\to\infty}(\|x_{n+1} - z\|^2 - \|x_0 - z\|^2) < \infty.$$
It turns out from (15) that
$$\begin{aligned}\theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 + \frac{3g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2} &\le \theta_n\|x_n - z\|^2 + (1-\theta_n)\|x_{n-1} - z\|^2 - \|x_{n+1} - z\|^2\\ &= \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + (1-\theta_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2),\end{aligned}$$
and so
$$\lim_{n\to\infty}\theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 = 0, \qquad \frac{g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2} \to 0$$
as $n \to \infty$. Furthermore, we can conclude that $g(y_n) \to 0$, since F and G are Lipschitz continuous (see Censor et al. [30]), and so $\tau_n \to 0$. Therefore, we have
$$\|(I - J_\mu^B)Ty_n\|^2 \to 0.\tag{16}$$
Now, it remains to show that
$$\omega_w(x_n) \subset \Omega.$$
Since the sequence $\{x_n\}$ is bounded, let $\bar{x} \in \omega_w(x_n)$ and let $\{x_{n_k}\}$ be a subsequence of $\{x_n\}$ weakly converging to $\bar{x}$. It suffices to verify that $\bar{x} \in A^{-1}(0)$ and $T\bar{x} \in B^{-1}(0)$.
From $\lim_{n\to\infty}\theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 = 0$ and the assumption $\liminf_{n\to\infty}\theta_n(1-\theta_n) > 0$, we obtain $\lim_{n\to\infty}\|x_n - x_{n-1}\|^2 = 0$, which implies that
$$\|y_n - x_n\| = (1-\theta_n)\|x_n - x_{n-1}\| \to 0.$$
Therefore, there exists a subsequence $\{y_{n_k}\}$ of $\{y_n\}$ that converges weakly to $\bar{x}$. It follows from the weak lower semicontinuity of $\|(I - J_\mu^B)T\cdot\|$ and (16) that
$$\|(I - J_\mu^B)T\bar{x}\|^2 \le \liminf_{k\to\infty}\|(I - J_\mu^B)Ty_{n_k}\|^2 = 0,$$
which means that $T\bar{x} \in B^{-1}(0)$.
On the other hand, according to (11) and (12), we have
$$\begin{aligned}\|x_{n+1} - y_n\|^2 &= \|(x_{n+1} - z) - (y_n - z)\|^2 = \|y_n - z\|^2 - \|x_{n+1} - z\|^2 + 2\langle x_{n+1} - y_n, x_{n+1} - z\rangle\\ &= \theta_n\|x_n - z\|^2 + (1-\theta_n)\|x_{n-1} - z\|^2 - \theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 - \|x_{n+1} - z\|^2 + 2\langle x_{n+1} - y_n, x_{n+1} - z\rangle\\ &= \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + (1-\theta_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2) - \theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 + 2\langle x_{n+1} - y_n, x_{n+1} - z\rangle.\end{aligned}\tag{17}$$
Using again property (8), we have
$$\langle J_r^A z_n - z_n, J_r^A z_n - z\rangle \le 0, \quad \forall z \in A^{-1}(0).$$
Taking $z_n = (I - \tau_n T^*(I - J_\mu^B)T)y_n$ in the above inequality, so that $x_{n+1} = J_r^A z_n$, we have
$$\langle x_{n+1} - (I - \tau_n T^*(I - J_\mu^B)T)y_n, x_{n+1} - z\rangle \le 0, \quad \forall z \in A^{-1}(0),$$
which yields
$$\langle x_{n+1} - y_n, x_{n+1} - z\rangle \le \tau_n\langle T^*(J_\mu^B - I)Ty_n, x_{n+1} - z\rangle \le \tau_n\|T^*(J_\mu^B - I)Ty_n\| \cdot \|x_{n+1} - z\| \to 0.\tag{18}$$
Thus, it follows from (17) and (18) that
$$\|x_{n+1} - y_n\| \to 0.$$
Since the iteration (12) can be rewritten as
$$y_n - x_{n+1} - \tau_n T^*(I - J_\mu^B)Ty_n \in rAx_{n+1},$$
we have
$$\frac{1}{r}\big(y_n - x_{n+1} - \tau_n T^*(I - J_\mu^B)Ty_n\big) \in Ax_{n+1}.\tag{19}$$
In addition, it turns out from $\tau_n \to 0$ that
$$\|y_n - x_{n+1} - \tau_n T^*(I - J_\mu^B)Ty_n\| \le \|y_n - x_{n+1}\| + \tau_n\|T^*(I - J_\mu^B)Ty_n\| = \|y_n - x_{n+1}\| + \tau_n\|G(y_n)\| \to 0.$$
Note that the graph of the maximal monotone operator A is weakly–strongly closed; by passing to the limit in (19), we obtain $0 \in A\bar{x}$, namely, $\bar{x} \in A^{-1}(0)$. Consequently, $\bar{x} \in \Omega$.
Since the choice of $\bar{x}$ was arbitrary, we conclude that $\omega_w(x_n) \subset \Omega$. Hence, the result follows from Lemma 2.
(II).
Secondly, we consider the case $\theta_n \equiv 0$; then $y_n = x_{n-1}$. Similarly to the proof of (15), we have
$$\|x_{n+1} - z\|^2 \le \|x_{n-1} - z\|^2 - \frac{3g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2},\tag{20}$$
and then
$$\frac{3g^2(x_{n-1})}{\|F(x_{n-1})\|^2 + \|G(x_{n-1})\|^2} \le \|x_{n-1} - z\|^2 - \|x_{n+1} - z\|^2.\tag{21}$$
Suppose that the sequence $\{\|x_n - z\|\}$ is not decreasing at infinity. Then, by Lemma 3, there exists a nondecreasing sequence of integers $\{\sigma(n)\}$, $n \ge N_1$ (for some $N_1$ large enough), such that $\sigma(n) \to \infty$ as $n \to \infty$ and
$$\|x_{\sigma(n)} - z\| \le \|x_{\sigma(n)+1} - z\|$$
for each $n \ge N_1$.
Notice that (20) holds for each $\sigma(n)$; thus, from (20) with n replaced by $\sigma(n)$, we have
$$\|x_{\sigma(n)+1} - z\|^2 \le \|x_{\sigma(n)-1} - z\|^2 - \frac{3g^2(y_{\sigma(n)})}{\|F(y_{\sigma(n)})\|^2 + \|G(y_{\sigma(n)})\|^2} \le \|x_{\sigma(n)-1} - z\|^2,$$
which means that
$$\|x_{\sigma(n)+1} - z\|^2 - \|x_{\sigma(n)} - z\|^2 \le \|x_{\sigma(n)-1} - z\|^2 - \|x_{\sigma(n)} - z\|^2;$$
in view of the relation $\|x_{\sigma(n)} - z\| \le \|x_{\sigma(n)+1} - z\|$ for each $n \ge N_1$, the above inequality leads to a contradiction.
So there exists an integer $N_0 \ge 0$ such that $\|x_{n+1} - z\| \le \|x_n - z\|$ for all $n \ge N_0$. Hence the limit of the sequence $\{\|x_n - z\|^2\}$ exists, denoted by $l = \lim_{n\to\infty}\|x_n - z\|^2$, and so
$$\lim_{n\to\infty}(\|x_n - z\|^2 - \|x_{n+1} - z\|^2) = 0, \qquad \sum_{n=1}^{\infty}(\|x_n - z\|^2 - \|x_{n+1} - z\|^2) < \infty.$$
Now, it remains to show that
$$\omega_w(x_n) \subset \Omega.$$
Since the sequence $\{x_n\}$ is bounded, let $\bar{x} \in \omega_w(x_n)$ and let $\{x_{n_k}\}$ be a subsequence of $\{x_n\}$ weakly converging to $\bar{x}$. It suffices to verify that $\bar{x} \in A^{-1}(0)$ and $T\bar{x} \in B^{-1}(0)$.
Next, we show that $x_n - x_{n-1} \to 0$. Indeed, it follows from the relation between the norm and the inner product that
$$\begin{aligned}\|x_n - x_{n-1}\|^2 &= \|x_n - z + z - x_{n-1}\|^2 = \|x_n - z\|^2 + \|z - x_{n-1}\|^2 + 2\langle x_n - z, z - x_{n-1}\rangle\\ &= \|x_n - z\|^2 + \|z - x_{n-1}\|^2 + 2\langle x_n - z, z - x_n\rangle + 2\langle x_n - z, x_n - x_{n-1}\rangle\\ &\le \|x_{n-1} - z\|^2 - \|x_n - z\|^2 + 2\|x_n - z\| \cdot \|x_n - x_{n-1}\|\\ &\le \|x_{n-1} - z\|^2 - \|x_n - z\|^2 + 2(M + m)\|x_n - x_{n-1}\|,\end{aligned}$$
where M is a constant such that $M \ge \|x_n - z\|$ for all n and $m > 0$ is a given constant. This means that
$$\|x_n - x_{n-1}\|^2 - 2(M + m)\|x_n - x_{n-1}\| \le \|x_{n-1} - z\|^2 - \|x_n - z\|^2,$$
and then
$$\sum_{n=0}^{\infty}\big[\|x_n - x_{n-1}\| - 2(M + m)\big]\|x_n - x_{n-1}\| \le \sum_{n=0}^{\infty}(\|x_{n-1} - z\|^2 - \|x_n - z\|^2) < \infty,$$
which implies $\big[\|x_n - x_{n-1}\| - 2(M + m)\big]\|x_n - x_{n-1}\| \to 0$ as $n \to \infty$.
Since $\|x_n - x_{n-1}\| \le \|x_n - z\| + \|x_{n-1} - z\| \le 2M$, the factor in brackets satisfies $\|x_n - x_{n-1}\| - 2(M + m) \le -2m < 0$; hence $\|x_n - x_{n-1}\| \to 0$ and then
$$\|x_{n+1} - x_{n-1}\| \to 0.$$
It follows from (21) that $g^2(x_{n-1}) \to 0$ and then
$$\|(I - J_\mu^B)Tx_{n-1}\|^2 \to 0.$$
By the weak lower semicontinuity of $\|(I - J_\mu^B)T\cdot\|$, we have
$$\|(I - J_\mu^B)T\bar{x}\|^2 \le \liminf_{k\to\infty}\|(I - J_\mu^B)Tx_{n_k-1}\|^2 = 0,$$
which means that $T\bar{x} \in B^{-1}(0)$.
Notice again that the iteration (12) can be rewritten as
$$x_{n-1} - x_{n+1} - \tau_n T^*(I - J_\mu^B)Tx_{n-1} \in rAx_{n+1};$$
therefore, we have
$$\frac{1}{r}\big(x_{n-1} - x_{n+1} - \tau_n T^*(I - J_\mu^B)Tx_{n-1}\big) \in Ax_{n+1}.\tag{22}$$
In addition, it turns out from $\tau_n \to 0$ that
$$\|x_{n-1} - x_{n+1} - \tau_n T^*(I - J_\mu^B)Tx_{n-1}\| \le \|x_{n-1} - x_{n+1}\| + \tau_n\|G(x_{n-1})\| \to 0.$$
Note that the graph of the maximal monotone operator A is weakly–strongly closed; by passing to the limit in (22), we obtain $0 \in A\bar{x}$, namely, $\bar{x} \in A^{-1}(0)$. Consequently, $\bar{x} \in \Omega$.
Since the choice of $\bar{x}$ was arbitrary, we conclude that $\omega_w(x_n) \subset \Omega$. Hence, the result follows from Lemma 2.
(III).
Finally, we consider the case $\theta_n \equiv 1$. Indeed, we only need to replace $x_{n-1}$ with $x_n$ in the proof of (II), and then the desired result is obtained. □
Next, we prove the strong convergence of Algorithm 2.
Algorithm 2 Update of the self-adaptive inertial-like algorithm
  • Initialization: Choose a sequence $\{\theta_n\} \subset [0,1]$ satisfying one of the three cases: (I) $\theta_n \in (0,1)$ such that $\liminf_{n\to\infty}\theta_n(1-\theta_n) > 0$; (II) $\theta_n \equiv 0$; or (III) $\theta_n \equiv 1$. Choose $\{\alpha_n\}$ and $\{\gamma_n\}$ in (0,1) such that
$$\lim_{n\to\infty}\gamma_n = 0, \qquad \sum_{n=0}^{\infty}\gamma_n = \infty, \qquad \liminf_{n\to\infty}(1 - \alpha_n - \gamma_n)\alpha_n > 0.$$
  • Select arbitrary initial points $x_0, x_1$.
  • Iterative Step: Given the iterate $x_n$, compute
$$y_n = x_{n-1} + \theta_n(x_n - x_{n-1}),$$
and define the (n+1)th iterate by
$$x_{n+1} = (1 - \alpha_n - \gamma_n)y_n + \alpha_n J_r^A(I - \tau_n T^*(I - J_\mu^B)T)y_n,$$
where
$$\tau_n = \begin{cases}\dfrac{g(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2}, & \text{if } \|F(y_n)\|^2 + \|G(y_n)\|^2 \ne 0,\\ 0, & \text{otherwise}.\end{cases}$$
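For comparison with the sketch given after Algorithm 1, the following minimal Python sketch shows how Algorithm 2's update differs: the Halpern-type anchor at 0 enters through the coefficient $\gamma_n$ (the anchor term $\gamma_n \cdot 0$ simply drops out of the code). Again, $A(x) = x$ and $B(y) = y$ are illustrative assumptions, and $\alpha_n = 0.5$, $\gamma_n = 1/(n+2)$ are sample choices satisfying the initialization conditions.

```python
import numpy as np

def algorithm2(T, x0, x1, r=1.0, mu=1.0, theta=0.5, tol=1e-10, max_iter=10_000):
    """Sketch of Algorithm 2 with the toy operators A(x) = x and B(y) = y."""
    J_A = lambda x: x / (1.0 + r)
    J_B = lambda y: y / (1.0 + mu)
    x_prev, x = x0, x1
    for n in range(max_iter):
        alpha, gamma = 0.5, 1.0 / (n + 2)    # lim gamma = 0, sum gamma = inf
        y = x_prev + theta * (x - x_prev)
        w = T @ y - J_B(T @ y)
        g = 0.5 * np.dot(w, w)
        F, G = y - J_A(y), T.T @ w
        denom = np.dot(F, F) + np.dot(G, G)
        tau = g / denom if denom > 0 else 0.0
        z = J_A(y - tau * G)
        x_next = (1.0 - alpha - gamma) * y + alpha * z   # anchor gamma*0 omitted
        if np.linalg.norm(x_next - x) < tol:
            return x_next, n
        x_prev, x = x, x_next
    return x, max_iter

T = np.random.default_rng(0).normal(size=(20, 50))
sol, iters = algorithm2(T, np.ones(50), 2.0 * np.ones(50))
print(iters, np.linalg.norm(sol))   # converges in norm to P_Omega(0) = 0
```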
Theorem 2.
If Assumptions 1–3 are satisfied, then the sequence $\{x_n\}$ generated by Algorithm 2 converges in norm to $z = P_\Omega(0)$ (i.e., the minimum-norm element of the solution set Ω).
Proof. 
As with the weak convergence, we consider the following three situations: (I) $\theta_n \in (0,1)$ with $\liminf_{n\to\infty}\theta_n(1-\theta_n) > 0$; (II) $\theta_n \equiv 0$; and (III) $\theta_n \equiv 1$.
(I).
We first consider the strong convergence in the situation $\theta_n \in (0,1)$ with $\liminf_{n\to\infty}\theta_n(1-\theta_n) > 0$.
Let us begin by showing the boundedness of the sequence $\{x_n\}$. To see this, denote $z_n = J_r^A(I - \tau_n T^*(I - J_\mu^B)T)y_n$ and take $z := P_\Omega(0)$; then, in a similar way to the proof of (13)–(15) of Theorem 1,
$$\|y_n - z\| \le \theta_n\|x_n - z\| + (1-\theta_n)\|x_{n-1} - z\| \le \max\{\|x_n - z\|, \|x_{n-1} - z\|\},\tag{23}$$
$$\|z_n - z\|^2 \le \|y_n - z\|^2 - \frac{3g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2};\tag{24}$$
hence, one can see $\|z_n - z\| \le \|y_n - z\|$.
It turns out from (23) that
$$\begin{aligned}\|x_{n+1} - z\| &= \|(1-\alpha_n-\gamma_n)(y_n - z) + \alpha_n(z_n - z) + \gamma_n(-z)\|\\ &\le (1-\alpha_n-\gamma_n)\|y_n - z\| + \alpha_n\|z_n - z\| + \gamma_n\|z\|\\ &\le (1-\gamma_n)\big(\theta_n\|x_n - z\| + (1-\theta_n)\|x_{n-1} - z\|\big) + \gamma_n\|z\|\\ &\le \max\{\|x_n - z\|, \|x_{n-1} - z\|, \|z\|\} \le \cdots \le \max\{\|x_0 - z\|, \|x_1 - z\|, \|z\|\},\end{aligned}$$
which implies that the sequence $\{x_n\}$ is bounded, and so are the sequences $\{y_n\}$ and $\{z_n\}$.
Applying identity (7), we deduce that
$$\begin{aligned}\|x_{n+1} - z\|^2 &= \|(1-\alpha_n-\gamma_n)(y_n - z) + \alpha_n(z_n - z) + \gamma_n(-z)\|^2\\ &\le (1-\alpha_n-\gamma_n)\|y_n - z\|^2 + \alpha_n\|z_n - z\|^2 + \gamma_n\|z\|^2 - (1-\alpha_n-\gamma_n)\alpha_n\|z_n - y_n\|^2.\end{aligned}\tag{25}$$
Substituting (13) and (24) into (25), after some manipulations we obtain
$$\begin{aligned}\|x_{n+1} - z\|^2 &\le (1-\alpha_n-\gamma_n)\|y_n - z\|^2 + \alpha_n\Big(\|y_n - z\|^2 - \frac{3g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2}\Big) + \gamma_n\|z\|^2 - (1-\alpha_n-\gamma_n)\alpha_n\|z_n - y_n\|^2\\ &= (1-\gamma_n)\|y_n - z\|^2 + \gamma_n\|z\|^2 - (1-\alpha_n-\gamma_n)\alpha_n\|z_n - y_n\|^2 - \frac{3\alpha_n g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2}\\ &\le \theta_n\|x_n - z\|^2 + (1-\theta_n)\|x_{n-1} - z\|^2 + \gamma_n\|z\|^2 - (1-\alpha_n-\gamma_n)\alpha_n\|z_n - y_n\|^2\\ &\quad - (1-\gamma_n)\theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 - \frac{3\alpha_n g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2}.\end{aligned}\tag{26}$$
Next we distinguish two cases.
Case 1. The sequence $\{\|x_n - z\|\}$ is nonincreasing at infinity, in the sense that there exists $n_0 \ge 0$ such that $\|x_{n+1} - z\| \le \|x_n - z\|$ for each $n \ge n_0$. This implies in particular that $\lim_{n\to\infty}\|x_n - z\|$ exists and thus
$$\lim_{n\to\infty}(\|x_n - z\|^2 - \|x_{n-1} - z\|^2) = 0, \qquad \sum_{n=1}^{\infty}(\|x_{n-1} - z\|^2 - \|x_n - z\|^2) < \infty.\tag{27}$$
For all $n > n_0$, it follows from (26) that
$$\begin{aligned}&(1-\alpha_n-\gamma_n)\alpha_n\|z_n - y_n\|^2 + (1-\gamma_n)\theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 + \frac{3\alpha_n g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2}\\ &\quad\le \theta_n\|x_n - z\|^2 - \|x_{n+1} - z\|^2 + (1-\theta_n)\|x_{n-1} - z\|^2 + \gamma_n\|z\|^2\\ &\quad= \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + (1-\theta_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2) + \gamma_n\|z\|^2.\end{aligned}$$
Now, due to the assumptions on $\alpha_n$, $\gamma_n$, and $\theta_n$ and the boundedness of $\{x_n\}$ and $\{y_n\}$, we have
$$\lim_{n\to\infty}\|z_n - y_n\| = 0;\tag{28}$$
$$\lim_{n\to\infty}(1-\gamma_n)\theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2 = 0;$$
$$\lim_{n\to\infty}\frac{3\alpha_n g^2(y_n)}{\|F(y_n)\|^2 + \|G(y_n)\|^2} = 0.\tag{29}$$
It turns out from (29) that $g(y_n) \to 0$, since F and G are Lipschitz continuous, and so $\lim_{n\to\infty}\tau_n = 0$; and from the second limit above, together with $\liminf_{n\to\infty}\theta_n(1-\theta_n) > 0$, that $\lim_{n\to\infty}\|x_n - x_{n-1}\| = 0$, which in turn implies from (11) that
$$\lim_{n\to\infty}\|y_n - x_n\| \le \lim_{n\to\infty}(\|y_n - x_{n-1}\| + \|x_{n-1} - x_n\|) = \lim_{n\to\infty}(1 + \theta_n)\|x_{n-1} - x_n\| = 0.$$
Observing $\|x_{n+1} - y_n\| \le \alpha_n\|z_n - y_n\| + \gamma_n\|y_n\| \to 0$, we obtain
$$\|x_{n+1} - x_n\| \le \|x_{n+1} - y_n\| + \|y_n - x_n\| \to 0.$$
This proves the asymptotic regularity of $\{x_n\}$.
By repeating the relevant part of the proof of Theorem 1, we obtain $\omega_w(x_n) \subset \Omega$.
We are now in a position to prove the strong convergence of $\{x_n\}$. Rewriting $x_{n+1} = (1-\gamma_n)v_n + \gamma_n\alpha_n(z_n - y_n)$, where $v_n = (1-\alpha_n)y_n + \alpha_n z_n$, and making use of the inequality $\|u + v\|^2 \le \|u\|^2 + 2\langle v, u + v\rangle$, which holds for all u, v in a Hilbert space, we obtain
$$\|x_{n+1} - z\|^2 = \|(1-\gamma_n)(v_n - z) + \gamma_n(\alpha_n(z_n - y_n) - z)\|^2 \le (1-\gamma_n)^2\|v_n - z\|^2 + 2\gamma_n\langle\alpha_n(z_n - y_n) - z, x_{n+1} - z\rangle.$$
It follows from (7) that
$$\|v_n - z\|^2 = (1-\alpha_n)\|y_n - z\|^2 + \alpha_n\|z_n - z\|^2 - \alpha_n(1-\alpha_n)\|z_n - y_n\|^2,$$
and then
$$\|x_{n+1} - z\|^2 \le (1-\gamma_n)^2\big((1-\alpha_n)\|y_n - z\|^2 + \alpha_n\|z_n - z\|^2 - \alpha_n(1-\alpha_n)\|z_n - y_n\|^2\big) + 2\gamma_n\langle\alpha_n(z_n - y_n) - z, x_{n+1} - z\rangle.$$
It turns out from (24) that $\|z_n - z\|^2 \le \|y_n - z\|^2$; hence, we obtain
$$\|x_{n+1} - z\|^2 \le (1-\gamma_n)\|y_n - z\|^2 - \alpha_n(1-\alpha_n)(1-\gamma_n)^2\|z_n - y_n\|^2 + 2\gamma_n\langle\alpha_n(z_n - y_n) - z, x_{n+1} - z\rangle.$$
Substituting (13) into the above inequality, we have
$$\begin{aligned}\|x_{n+1} - z\|^2 &\le (1-\gamma_n)\big(\theta_n\|x_n - z\|^2 + (1-\theta_n)\|x_{n-1} - z\|^2 - \theta_n(1-\theta_n)\|x_n - x_{n-1}\|^2\big)\\ &\quad - \alpha_n(1-\alpha_n)(1-\gamma_n)^2\|z_n - y_n\|^2 + 2\gamma_n\langle\alpha_n(z_n - y_n) - z, x_{n+1} - z\rangle,\end{aligned}\tag{30}$$
which means that
$$\begin{aligned}\|x_{n+1} - z\|^2 &\le (1-\gamma_n)\|x_n - z\|^2 + (1-\gamma_n)(1-\theta_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2)\\ &\quad - \theta_n(1-\theta_n)(1-\gamma_n)\|x_n - x_{n-1}\|^2 - \alpha_n(1-\alpha_n)(1-\gamma_n)^2\|z_n - y_n\|^2 + 2\gamma_n\langle\alpha_n(z_n - y_n) - z, x_{n+1} - z\rangle\\ &\le (1-\gamma_n)\|x_n - z\|^2 + (1-\gamma_n)(1-\theta_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2) + 2\gamma_n\langle\alpha_n(z_n - y_n) - z, x_{n+1} - z\rangle.\end{aligned}$$
For simplicity, we write this as
$$a_{n+1} \le (1-\gamma_n)a_n + \gamma_n\delta_n + c_n,\tag{31}$$
where $a_n = \|x_n - z\|^2$, $\delta_n = 2\alpha_n\langle z_n - y_n, x_{n+1} - z\rangle + 2\langle -z, x_{n+1} - z\rangle$, and $c_n = (1-\gamma_n)(1-\theta_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2)$.
Since $\omega_w(x_n) \subset \Omega$ and $z = P_\Omega(0)$, which implies $\langle -z, q - z\rangle \le 0$ for all $q \in \Omega$, we deduce that
$$\limsup_{n\to\infty}\langle -z, x_{n+1} - z\rangle = \max_{q\in\omega_w(x_n)}\langle -z, q - z\rangle \le 0.\tag{32}$$
Combining (28) and (32) implies that
$$\limsup_{n\to\infty}\delta_n = \limsup_{n\to\infty}\big\{2\alpha_n\langle z_n - y_n, x_{n+1} - z\rangle + 2\langle -z, x_{n+1} - z\rangle\big\} = \limsup_{n\to\infty}2\langle -z, x_{n+1} - z\rangle \le 0.$$
In addition, by the assumptions on $\theta_n$ and $\gamma_n$, we have
$$\sum_{n=1}^{\infty}c_n = \sum_{n=1}^{\infty}(1-\gamma_n)(1-\theta_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2) < \infty.$$
These enable us to apply Lemma 1 to (31) and obtain $a_n \to 0$; namely, $x_n \to z$ in norm, and the proof of Case 1 is complete.
Case 2. The sequence $\{\|x_n - z\|\}$ is not nonincreasing at infinity, in the sense that there exists a sequence $\{\sigma(n)\}$ of positive integers such that $\sigma(n) \to \infty$ (as $n \to \infty$) with the properties
$$\|x_{\sigma(n)} - z\| < \|x_{\sigma(n)+1} - z\|, \qquad \max\{\|x_{\sigma(n)} - z\|, \|x_n - z\|\} \le \|x_{\sigma(n)+1} - z\|.$$
Notice the boundedness of the sequence $\{\|x_n - z\|\}$, which implies that the limit of the sequence $\{\|x_{\sigma(n)} - z\|\}$ exists; hence, we conclude that
$$\lim_{n\to\infty}(\|x_{\sigma(n)+1} - z\|^2 - \|x_{\sigma(n)} - z\|^2) = 0.$$
Observe that (26) holds for all $\sigma(n)$; thus, replacing n with $\sigma(n)$ in (26) and transposing, we obtain
$$\begin{aligned}&(1-\alpha_{\sigma(n)}-\gamma_{\sigma(n)})\alpha_{\sigma(n)}\|z_{\sigma(n)} - y_{\sigma(n)}\|^2 + (1-\gamma_{\sigma(n)})\theta_{\sigma(n)}(1-\theta_{\sigma(n)})\|x_{\sigma(n)} - x_{\sigma(n)-1}\|^2 + \frac{3\alpha_{\sigma(n)}g^2(y_{\sigma(n)})}{\|F(y_{\sigma(n)})\|^2 + \|G(y_{\sigma(n)})\|^2}\\ &\quad\le \theta_{\sigma(n)}\|x_{\sigma(n)} - z\|^2 + (1-\theta_{\sigma(n)})\|x_{\sigma(n)-1} - z\|^2 - \|x_{\sigma(n)+1} - z\|^2 + \gamma_{\sigma(n)}\|z\|^2\\ &\quad= (1-\theta_{\sigma(n)})(\|x_{\sigma(n)-1} - z\|^2 - \|x_{\sigma(n)} - z\|^2) + \|x_{\sigma(n)} - z\|^2 - \|x_{\sigma(n)+1} - z\|^2 + \gamma_{\sigma(n)}\|z\|^2.\end{aligned}$$
Now, taking the limit as $n \to \infty$ yields
$$\lim_{n\to\infty}\|z_{\sigma(n)} - y_{\sigma(n)}\| = 0;\tag{33}$$
$$\lim_{n\to\infty}g(y_{\sigma(n)}) = 0;\tag{34}$$
$$\lim_{n\to\infty}\|x_{\sigma(n)} - x_{\sigma(n)-1}\|^2 = 0.\tag{35}$$
Note that we still have $\|x_{\sigma(n)+1} - x_{\sigma(n)}\| \to 0$ and that relations (33)–(35) are sufficient to guarantee that $\omega_w(x_{\sigma(n)}) \subset \Omega$.
Next, we prove $x_{\sigma(n)} \to z$.
As a matter of fact, observe that (30) holds for each $\sigma(n)$. Replacing n with $\sigma(n)$ in (30) and using the relation $\|x_{\sigma(n)} - z\|^2 < \|x_{\sigma(n)+1} - z\|^2$, we obtain
$$\begin{aligned}\|x_{\sigma(n)+1} - z\|^2 &\le (1-\gamma_{\sigma(n)})\big(\theta_{\sigma(n)}\|x_{\sigma(n)} - z\|^2 + (1-\theta_{\sigma(n)})\|x_{\sigma(n)-1} - z\|^2 - \theta_{\sigma(n)}(1-\theta_{\sigma(n)})\|x_{\sigma(n)} - x_{\sigma(n)-1}\|^2\big)\\ &\quad - \alpha_{\sigma(n)}(1-\alpha_{\sigma(n)})(1-\gamma_{\sigma(n)})^2\|z_{\sigma(n)} - y_{\sigma(n)}\|^2 + 2\gamma_{\sigma(n)}\langle\alpha_{\sigma(n)}(z_{\sigma(n)} - y_{\sigma(n)}) - z, x_{\sigma(n)+1} - z\rangle\\ &\le (1-\gamma_{\sigma(n)})\|x_{\sigma(n)} - z\|^2 + 2\gamma_{\sigma(n)}\langle\alpha_{\sigma(n)}(z_{\sigma(n)} - y_{\sigma(n)}) - z, x_{\sigma(n)+1} - z\rangle;\end{aligned}$$
therefore, we have
$$\gamma_{\sigma(n)}\|x_{\sigma(n)} - z\|^2 \le \|x_{\sigma(n)} - z\|^2 - \|x_{\sigma(n)+1} - z\|^2 + 2\gamma_{\sigma(n)}\langle\alpha_{\sigma(n)}(z_{\sigma(n)} - y_{\sigma(n)}) - z, x_{\sigma(n)+1} - z\rangle.$$
Noticing again the relation $\|x_{\sigma(n)} - z\|^2 < \|x_{\sigma(n)+1} - z\|^2$, we obtain
$$\|x_{\sigma(n)} - z\|^2 \le 2\langle\alpha_{\sigma(n)}(z_{\sigma(n)} - y_{\sigma(n)}) - z, x_{\sigma(n)+1} - z\rangle \le M\|z_{\sigma(n)} - y_{\sigma(n)}\| + 2\langle -z, x_{\sigma(n)+1} - z\rangle,\tag{36}$$
where M is a constant such that $M \ge 2\|x_n - z\|$ for all n.
Now, since $\|x_{\sigma(n)+1} - x_{\sigma(n)}\| \to 0$, $z = P_\Omega(0)$, and $\omega_w(x_{\sigma(n)}) \subset \Omega$, we have
$$\limsup_{n\to\infty}\langle -z, x_{\sigma(n)+1} - z\rangle = \limsup_{n\to\infty}\langle -z, x_{\sigma(n)} - z\rangle = \max_{q\in\omega_w(x_{\sigma(n)})}\langle -z, q - z\rangle \le 0.$$
Consequently, relation (36) and $\|z_{\sigma(n)} - y_{\sigma(n)}\| \to 0$ assure that $x_{\sigma(n)} \to z$, and it follows from Lemma 3 that
$$\|x_n - z\| \le \|x_{\sigma(n)+1} - z\| \le \|x_{\sigma(n)+1} - x_{\sigma(n)}\| + \|x_{\sigma(n)} - z\| \to 0.$$
Namely, $x_n \to z$ in norm, and the proof of Case 2 is complete.
(II).
Now, we consider the case $\theta_n \equiv 0$. In this case, $y_n = x_{n-1}$ and $x_{n+1} = (1-\alpha_n-\gamma_n)x_{n-1} + \alpha_n J_r^A(I - \tau_n T^*(I - J_\mu^B)T)x_{n-1}$. Denote $z_{n-1} = J_r^A(I - \tau_n T^*(I - J_\mu^B)T)x_{n-1}$; similarly to the proof of (24)–(26), we obtain that the sequence $\{x_n\}$ is bounded and
$$\|x_{n+1} - z\|^2 \le (1-\gamma_n)\|x_{n-1} - z\|^2 + \gamma_n\|z\|^2 - (1-\alpha_n-\gamma_n)\alpha_n\|z_{n-1} - x_{n-1}\|^2 - \frac{3\alpha_n g^2(x_{n-1})}{\|F(x_{n-1})\|^2 + \|G(x_{n-1})\|^2},\tag{37}$$
which implies that
$$\begin{aligned}&(1-\alpha_n-\gamma_n)\alpha_n\|z_{n-1} - x_{n-1}\|^2 + \frac{3\alpha_n g^2(x_{n-1})}{\|F(x_{n-1})\|^2 + \|G(x_{n-1})\|^2}\\ &\quad\le (1-\gamma_n)\|x_{n-1} - z\|^2 + \gamma_n\|z\|^2 - \|x_{n+1} - z\|^2\\ &\quad= (\|x_{n-1} - z\|^2 - \|x_{n+1} - z\|^2) + \gamma_n(\|z\|^2 - \|x_{n-1} - z\|^2).\end{aligned}$$
Next, we distinguish two cases.
Case 1. There exists $n_0 \ge 0$ such that $\|x_{n+1} - z\| \le \|x_n - z\|$ for each $n > n_0$, which implies that $\lim_{n\to\infty}\|x_n - z\|$ exists, and thus
$$\lim_{n\to\infty}(\|x_n - z\|^2 - \|x_{n-1} - z\|^2) = 0, \qquad \sum_{n=1}^{\infty}(\|x_{n-1} - z\|^2 - \|x_n - z\|^2) < \infty.$$
Since $\gamma_n \to 0$ and $\liminf_{n\to\infty}(1-\alpha_n-\gamma_n)\alpha_n > 0$, it follows from (37) that
$$\|z_{n-1} - x_{n-1}\|^2 \to 0; \qquad \frac{3\alpha_n g^2(x_{n-1})}{\|F(x_{n-1})\|^2 + \|G(x_{n-1})\|^2} \to 0,$$
which means that $g(x_{n-1}) \to 0$, since F and G are Lipschitz continuous, and so $\tau_n \to 0$. Similarly to the proof that $x_n - x_{n-1} \to 0$ in the weak convergence Theorem 1, we still have the asymptotic regularity of $\{x_n\}$ and $\omega_w(x_n) \subset \Omega$.
Similarly to the proof of (30), we have
$$\begin{aligned}\|x_{n+1} - z\|^2 &\le (1-\gamma_n)\|x_{n-1} - z\|^2 - \alpha_n(1-\alpha_n)(1-\gamma_n)^2\|z_{n-1} - x_{n-1}\|^2 + 2\gamma_n\langle\alpha_n(z_{n-1} - x_{n-1}) - z, x_{n+1} - z\rangle\\ &= (1-\gamma_n)\|x_n - z\|^2 + (1-\gamma_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2) - \alpha_n(1-\alpha_n)(1-\gamma_n)^2\|z_{n-1} - x_{n-1}\|^2\\ &\quad + 2\gamma_n\langle\alpha_n(z_{n-1} - x_{n-1}) - z, x_{n+1} - z\rangle\\ &\le (1-\gamma_n)\|x_n - z\|^2 + (1-\gamma_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2) + 2\gamma_n\langle\alpha_n(z_{n-1} - x_{n-1}) - z, x_{n+1} - z\rangle\\ &= (1-\gamma_n)a_n + \gamma_n\delta_n + c_n,\end{aligned}$$
where $a_n = \|x_n - z\|^2$, $\delta_n = 2\langle\alpha_n(z_{n-1} - x_{n-1}) - z, x_{n+1} - z\rangle$, and $c_n = (1-\gamma_n)(\|x_{n-1} - z\|^2 - \|x_n - z\|^2)$.
Obviously, $\gamma_n$, $\delta_n$, and $c_n$ satisfy the conditions in Lemma 1, so we can conclude that $x_n \to z$.
Case 2. The sequence $\{\|x_n - z\|\}$ is not nonincreasing at infinity, in the sense that there exists a sequence $\{\sigma(n)\}$ of positive integers such that $\sigma(n) \to \infty$ (as $n \to \infty$) with the properties
$$\|x_{\sigma(n)} - z\| < \|x_{\sigma(n)+1} - z\|, \qquad \max\{\|x_{\sigma(n)} - z\|, \|x_n - z\|\} \le \|x_{\sigma(n)+1} - z\|.$$
Since the sequence $\{\|x_n - z\|\}$ is bounded, the limit of the sequence $\{\|x_{\sigma(n)} - z\|\}$ exists; hence, we conclude that
$$\lim_{n\to\infty}(\|x_{\sigma(n)+1} - z\|^2 - \|x_{\sigma(n)} - z\|^2) = 0.$$
Notice that (37) holds for all $\sigma(n)$; thus, replacing n with $\sigma(n)$ in (37) and using the relation $\|x_{\sigma(n)} - z\| < \|x_{\sigma(n)+1} - z\|$, we have
$$\begin{aligned}&(1-\alpha_{\sigma(n)}-\gamma_{\sigma(n)})\alpha_{\sigma(n)}\|z_{\sigma(n)-1} - x_{\sigma(n)-1}\|^2 + \frac{3\alpha_{\sigma(n)}g^2(x_{\sigma(n)-1})}{\|F(x_{\sigma(n)-1})\|^2 + \|G(x_{\sigma(n)-1})\|^2}\\ &\quad\le (\|x_{\sigma(n)-1} - z\|^2 - \|x_{\sigma(n)+1} - z\|^2) + \gamma_{\sigma(n)}(\|z\|^2 - \|x_{\sigma(n)-1} - z\|^2)\\ &\quad= (\|x_{\sigma(n)-1} - z\|^2 - \|x_{\sigma(n)} - z\|^2) + (\|x_{\sigma(n)} - z\|^2 - \|x_{\sigma(n)+1} - z\|^2) + \gamma_{\sigma(n)}(\|z\|^2 - \|x_{\sigma(n)-1} - z\|^2)\\ &\quad\le \gamma_{\sigma(n)}(\|z\|^2 - \|x_{\sigma(n)-1} - z\|^2).\end{aligned}$$
Since $\gamma_{\sigma(n)} \to 0$ and $\liminf_{n\to\infty}(1-\alpha_{\sigma(n)}-\gamma_{\sigma(n)})\alpha_{\sigma(n)} > 0$, we obtain
$$\|z_{\sigma(n)-1} - x_{\sigma(n)-1}\|^2 \to 0; \qquad \frac{3\alpha_{\sigma(n)}g^2(x_{\sigma(n)-1})}{\|F(x_{\sigma(n)-1})\|^2 + \|G(x_{\sigma(n)-1})\|^2} \to 0.$$
Similarly, we still have the asymptotic regularity of $\{x_{\sigma(n)}\}$ and $\omega_w(x_{\sigma(n)}) \subset \Omega$.
In addition, similarly to the inequality above (31), we obtain
$$\|x_{\sigma(n)+1} - z\|^2 \le (1-\gamma_{\sigma(n)})\|x_{\sigma(n)} - z\|^2 + (1-\gamma_{\sigma(n)})(\|x_{\sigma(n)-1} - z\|^2 - \|x_{\sigma(n)} - z\|^2) + 2\gamma_{\sigma(n)}\langle\alpha_{\sigma(n)}(z_{\sigma(n)-1} - x_{\sigma(n)-1}) - z, x_{\sigma(n)+1} - z\rangle,$$
which means that
$$\gamma_{\sigma(n)}\|x_{\sigma(n)} - z\|^2 \le \|x_{\sigma(n)} - z\|^2 - \|x_{\sigma(n)+1} - z\|^2 + (1-\gamma_{\sigma(n)})(\|x_{\sigma(n)-1} - z\|^2 - \|x_{\sigma(n)} - z\|^2) + 2\gamma_{\sigma(n)}\langle\alpha_{\sigma(n)}(z_{\sigma(n)-1} - x_{\sigma(n)-1}) - z, x_{\sigma(n)+1} - z\rangle;$$
noticing again the relation $\|x_{\sigma(n)} - z\|^2 \le \|x_{\sigma(n)+1} - z\|^2$ for all $\sigma(n)$, we have
$$\|x_{\sigma(n)} - z\|^2 \le 2\langle\alpha_{\sigma(n)}(z_{\sigma(n)-1} - x_{\sigma(n)-1}) - z, x_{\sigma(n)+1} - z\rangle \le M\|z_{\sigma(n)-1} - x_{\sigma(n)-1}\| + 2\langle -z, x_{\sigma(n)+1} - z\rangle,\tag{38}$$
where M is a constant such that $M \ge 2\|x_n - z\|$ for all n.
Again, since $\|x_{\sigma(n)+1} - x_{\sigma(n)}\| \to 0$, $z = P_\Omega(0)$, and $\omega_w(x_{\sigma(n)}) \subset \Omega$, we have
$$\limsup_{n\to\infty}\langle -z, x_{\sigma(n)+1} - z\rangle = \limsup_{n\to\infty}\langle -z, x_{\sigma(n)} - z\rangle = \max_{q\in\omega_w(x_{\sigma(n)})}\langle -z, q - z\rangle \le 0.$$
Consequently, inequality (38) and $\|z_{\sigma(n)-1} - x_{\sigma(n)-1}\| \to 0$ assure that $x_{\sigma(n)} \to z$, and it follows from Lemma 3 that
$$\|x_n - z\| \le \|x_{\sigma(n)+1} - z\| \le \|x_{\sigma(n)+1} - x_{\sigma(n)}\| + \|x_{\sigma(n)} - z\| \to 0.$$
Namely, $x_n \to z$ in norm, and the proof of the second situation (II) is complete.
(III).
Finally, we consider the case $\theta_n \equiv 1$. Indeed, we only need to replace $x_{n-1}$ with $x_n$ in the proof of (II), and then the desired result is obtained. □

4. Numerical Examples and Experiments

Example 1.
Let $H_1 = H_2 = L^2[0,1]$. Define the mappings A, B, and T by $Tx(t) := x(t)$, $Ax(t) := \frac{x(t)}{2}$, and $B(x)(t) := \frac{2x(t)}{3}$ for all $x(t) \in L^2[0,1]$. Then it can be shown that A and B are monotone operators and that the adjoint $T^*$ of T is $T^*x(t) := x(t)$. For simplicity, we choose $\alpha_n = \frac{n-1}{n+1}$ and $\gamma_n = \frac{1}{n+1}$ in Algorithm 2 for all $n \ge 1$. We consider different choices of the initial functions $x_0(t), x_1(t)$ and $\theta_n = 0.5 + \frac{1}{n+1}; 0; 1$. In addition, $\|x_{n+1} - x_n\| < 10^{-10}$ is used as the stopping criterion.
Case I:
$x_0(t) = t$, $x_1(t) = 2t$;
Case II:
$x_0(t) = e^t$, $x_1(t) = 2\sin(5t)$;
Case III:
$x_0(t) = e^t$, $x_1(t) = 2t$.
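The setting of Example 1 is easy to reproduce approximately by discretizing $L^2[0,1]$ on a uniform grid; the following Python sketch (an illustration running Algorithm 1 for simplicity, not the authors' implementation) uses the closed-form resolvents $J_r^A x = x/(1 + r/2)$ and $J_\mu^B x = x/(1 + 2\mu/3)$ implied by $Ax = x/2$ and $Bx = 2x/3$.

```python
import numpy as np

# Discretized Example 1: L^2[0,1] ~ R^N on a uniform grid, with T = I.
N, r, mu = 200, 1.0, 1.0
t = np.linspace(0.0, 1.0, N)
J_A = lambda x: x / (1.0 + r / 2.0)          # resolvent of A(x) = x/2
J_B = lambda x: x / (1.0 + 2.0 * mu / 3.0)   # resolvent of B(x) = 2x/3

for x0, x1, name in [(t, 2 * t, "Case I"),
                     (np.exp(t), 2 * np.sin(5 * t), "Case II"),
                     (np.exp(t), 2 * t, "Case III")]:
    x_prev, x = x0.copy(), x1.copy()
    for n in range(1, 100_000):
        theta = 0.5 + 1.0 / (n + 1)          # one of the three tested choices
        y = x_prev + theta * (x - x_prev)
        w = y - J_B(y)                       # (I - J_mu^B) T y with T = I
        g = 0.5 * np.dot(w, w)
        F, G = y - J_A(y), w                 # here G(y) = T*(I - J_B)T y = w
        denom = np.dot(F, F) + np.dot(G, G)
        tau = g / denom if denom > 0 else 0.0
        x_next = J_A(y - tau * G)            # Algorithm 1 update
        if np.linalg.norm(x_next - x) < 1e-10:   # stopping criterion
            break
        x_prev, x = x, x_next
    print(name, "stopped after", n, "iterations")
```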
It is clear that our algorithm is fast, efficient, stable, and simple to implement. All the numerical results under the different initial functions are presented in Figure 1, Figure 2 and Figure 3, and the number of iterations and the CPU run time remain almost consistent across cases, as shown in Table 1.
Example 2.
We consider an example from the realm of compressed sensing. More specifically, we try to recover the K-sparse original signal $x_0$ from the observed signal b.
Here, a matrix $T \in \mathbb{R}^{m\times n}$, $m \ll n$, is involved, created from the standard Gaussian distribution. The observed signal is $b = Tx + \epsilon$, where $\epsilon$ is noise. For more details on signal recovery, one can consult Nguyen and Shin [31].
Conveniently, solving the above sparse signal recovery problem is usually equivalent to solving the following LASSO problem (see Gibali et al. [32] and Moudafi et al. [33]):
$$\min_{x\in\mathbb{R}^n}\|Tx - b\|^2 \quad \text{s.t.}\quad \|x\|_1 \le t,$$
where t is a given positive constant. If we define
$$A(x) = \begin{cases}\{u : \langle y - x, u\rangle \le 0, \ \forall \|y\|_1 \le t\}, & \text{if } \|x\|_1 \le t,\\ \emptyset, & \text{otherwise},\end{cases}\qquad B(x) = \begin{cases}\mathbb{R}^m, & \text{if } x = b,\\ \emptyset, & \text{otherwise},\end{cases}$$
then one can see that the LASSO problem coincides with the problem of finding $x \in \mathbb{R}^n$ such that
$$0 \in A(x) \quad\text{and}\quad 0 \in B(Tx).$$
During the experiments, $T \in \mathbb{R}^{m\times n}$ is generated randomly with $m = 2^{15}, 2^7$ and $n = 2^{16}, 2^8$, and $x_0 \in \mathbb{R}^n$ consists of K spikes ($K = 100, 50$) of amplitude $\pm 1$ distributed randomly throughout the region. In addition, the signal-to-noise ratio is chosen as SNR = 40, $\alpha_n = 0.5 + 1/(10n+2)$ in both algorithms, and $\gamma_n = 1/n$ in Algorithm 2. The recovery simulation results are illustrated in Figure 4.
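The key computational ingredients here are the two resolvents: for the normal cone A above, $J_r^A = P_C$ (the Euclidean projection onto the $\ell_1$-ball $C = \{x : \|x\|_1 \le t\}$) for any $r > 0$, while $J_\mu^B(y) \equiv b$, so $(I - J_\mu^B)Ty = Ty - b$. The following Python sketch runs Algorithm 1 on such an instance under illustrative assumptions (smaller sizes than the paper's, noiseless data, and a sort-based $\ell_1$-ball projection in the style of Duchi et al.); it is not the authors' code.

```python
import numpy as np

def project_l1_ball(v, t):
    """Euclidean projection onto {x : ||x||_1 <= t} via sorting."""
    if np.abs(v).sum() <= t:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - t)[0][-1]
    lam = (css[rho] - t) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(1)
m, n, K = 128, 512, 20                         # illustrative sizes
T = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.choice([-1.0, 1.0], K)
b = T @ x_true                                  # noiseless for simplicity
t = np.abs(x_true).sum()                        # radius of the l1 ball

x_prev, x = np.zeros(n), rng.normal(size=n)
for it in range(5000):
    theta = 0.5                                 # illustrative inertial parameter
    y = x_prev + theta * (x - x_prev)
    resid = T @ y - b                           # (I - J_mu^B) T y
    g = 0.5 * np.dot(resid, resid)
    F = y - project_l1_ball(y, t)               # (I - J_r^A) y
    G = T.T @ resid                             # T*(I - J_mu^B) T y
    denom = np.dot(F, F) + np.dot(G, G)
    tau = g / denom if denom > 0 else 0.0
    x_prev, x = x, project_l1_ball(y - tau * G, t)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```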
Moreover, we also compare our algorithms with those of Sitthithakerngkiet et al. [21], Kazmi and Rizvi [22], and Byrne et al. [5], which have no inertial term, and with Tang [34], which uses a general inertial method.
For simplicity, in Algorithm 3.1 of Sitthithakerngkiet et al. [21], the nonexpansive mappings $S_n$ are defined as $S_n = I$, with $D = I$, $\xi = 0.5$, and $u = 0.1$, and the parameters are $\alpha_n = \frac{1}{n+1}$ and $\beta_n = 0.5 - \frac{1}{10n+2}$. For Algorithm 3.1 in Sitthithakerngkiet et al. [21], Kazmi and Rizvi [22], and Algorithm 3.1 in Byrne et al. [5], the step size is $\gamma = \frac{1}{L}$, where $L = \|T^*T\|$. For Algorithms 3.1 and 3.2 in Tang [34], the step size is self-adaptive, with $\alpha_n = 0.5 + 1/(10n+2)$ and $\gamma_n = 1/n$. The experimental results are illustrated in Figure 5 and Table 2.
From Table 2, we can see that our Algorithms 1 and 2 seem to have some competitive advantages.
Compared with general inertial methods, the main advantage of our Algorithms 1 and 2, as mentioned in the previous sections, is that they impose no constraint on the norm of the difference between $x_n$ and $x_{n-1}$ in advance and no extra assumption on the inertial parameter $\theta_n$, which makes them natural, attractive, and user friendly.
Moreover, when we tested Algorithm 3.1 of Sitthithakerngkiet et al. [21], Algorithm 3.1 of Byrne et al. [5], and Kazmi and Rizvi [22], we found that the convergence rate depends strongly on the step size $\gamma$, which in turn depends on the norm of the linear operator T; hence, another advantage of our Algorithms 1 and 2 is the self-adaptive step size.

5. Conclusions

We proposed two new self-adaptive inertial-like proximal point algorithms (Algorithms 1 and 2) for the split common null point problem (SCNPP). Under more general conditions, weak and strong convergence to a solution of the SCNPP were obtained. The new inertial-like proximal point algorithms are novel in the following ways:
(1)
Unlike the usual inertial technique, the proposed algorithms remain convergent even without the condition
$$\sum_{n=1}^{\infty}\theta_n\|x_n - x_{n-1}\|^2 < \infty.$$
They do not need the values of $\|x_n - x_{n-1}\|$ to be calculated in advance when choosing the coefficients $\theta_n$, which means that the algorithms are easy to use.
(2)
The inertial factors $\theta_n$ can be chosen in [0,1]; in particular, $\theta_n$ may equal 1, which opens a wider path for parameter selection.
(3)
The step sizes of our inertial proximal algorithms are self-adaptive and independent of the cocoercivity coefficients, which means that they do not use any prior knowledge of the operator norms.
In addition, two numerical examples with comparison results have been presented to show the efficiency and reliability of the listed algorithms.

Author Contributions

Formal analysis, Y.T. and A.G.; Writing—original draft, Y.T. and Y.Z.; Writing—review & editing, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This article was funded by the Natural Science Foundation of Chongqing (CSTC2021JCYJ-msxmX0177), the Science and Technology Research Project of Chongqing Municipal Education Commission (KJQN 201900804), and the Research Project of Chongqing Technology and Business University (KFJJ1952007).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable; the study did not report any data.

Acknowledgments

The authors express their deep gratitude to the referee for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  2. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
  3. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
  4. Moudafi, A.; Thakur, B.S. Solving proximal split feasibility problems without prior knowledge of matrix norms. Optim. Lett. 2013, 8.
  5. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
  6. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  7. Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert spaces. SIAM J. Optim. 2004, 14, 773–782.
  8. Attouch, H.; Chbani, Z. Fast inertial dynamics and FISTA algorithms in convex optimization, perturbation aspects. arXiv 2016, arXiv:1507.01367.
  9. Attouch, H.; Chbani, Z.; Peypouquet, J.; Redont, P. Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity. Math. Program. 2018, 168, 123–175.
  10. Attouch, H.; Peypouquet, J.; Redont, P. A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Optim. 2014, 24, 232–256.
  11. Akgül, A. A novel method for a fractional derivative with non-local and non-singular kernel. Chaos Solitons Fractals 2018, 114, 478–482.
  12. Hasan, P.; Sulaiman, N.A.; Soleymani, F.; Akgül, A. The existence and uniqueness of solution for linear system of mixed Volterra-Fredholm integral equations in Banach space. AIMS Math. 2020, 5, 226–235.
  13. Khdhr, F.W.; Soleymani, F.; Saeed, R.K.; Akgül, A. An optimized Steffensen-type iterative method with memory associated with annuity calculation. Eur. Phys. J. Plus 2019, 134, 146.
  14. Ochs, P.; Chen, Y.; Brox, T.; Pock, T. iPiano: Inertial proximal algorithm for non-convex optimization. SIAM J. Imaging Sci. 2014, 7, 1388–1419.
  15. Ochs, P.; Brox, T.; Pock, T. iPiasco: Inertial proximal algorithm for strongly convex optimization. J. Math. Imaging Vis. 2015, 53, 171–181.
  16. Dang, Y.; Sun, J.; Xu, H.K. Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394.
  17. Soleymani, F.; Akgül, A. European option valuation under the Bates PIDE in finance: A numerical implementation of the Gaussian scheme. Discret. Contin. Dyn. Syst.-S 2018, 13, 889–909.
  18. Suantai, S.; Srisap, K.; Naprang, N.; Mamat, M.; Yundon, V.; Cholamjiak, P. Convergence theorems for finding the split common null point in Banach spaces. Appl. Gen. Topol. 2017, 18, 3345–3360.
  19. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2017, 14, 1595.
  20. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
  21. Sitthithakerngkiet, K.; Deepho, J.; Martinez-Moreno, J.; Kumam, P. Convergence analysis of a general iterative algorithm for finding a common solution of split variational inclusion and optimization problems. Numer. Algorithms 2018, 79, 801–824.
  22. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124.
  23. Promluang, K.; Kumam, P. Viscosity approximation method for split common null point problems between Banach spaces and Hilbert spaces. J. Inform. Math. Sci. 2017, 9, 27–44.
  24. Eslamian, M.; Zamani, G.; Raeisi, M. Split common null point and common fixed point problems between Banach spaces and Hilbert spaces. Mediterr. J. Math. 2017, 14, 119.
  25. Aoyama, K.; Kohsaka, F.; Takahashi, W. Three generalizations of firmly nonexpansive mappings: Their relations and continuity properties. J. Nonlinear Convex Anal. 2009, 10, 131–147.
  26. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  27. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
  28. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
  29. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  30. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
  31. Nguyen, T.L.N.; Shin, Y. Deterministic sensing matrices in compressive sensing: A survey. Sci. World J. 2013, 2013, 192795.
  32. Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830.
  33. Moudafi, A.; Gibali, A. l1-l2 regularization of split feasibility problems. Numer. Algorithms 2018, 78, 739–757.
  34. Tang, Y. New inertial algorithm for solving split common null point problem in Banach spaces. J. Inequal. Appl. 2019, 2019, 17.
Figure 1. Three initial cases for $\theta_n = 0.5 + 1/(n+1)$.
Figure 2. Three initial cases for $\theta_n = 1$.
Figure 3. Three initial cases for $\theta_n = 0$.
Figure 4. Numerical results for $m = 2^{15}$, $n = 2^{16}$, and K = 100.
Figure 5. Numerical results for $m = 2^7$, $n = 2^9$, and K = 50.
Table 1. Time and iterations of Algorithms 1 and 2 in Example 1.

θ_n                     Algorithm      Case I        Case II       Case III
                                       (sec.)/(n)    (sec.)/(n)    (sec.)/(n)
θ_n = 0                 Algorithm 1    4.26/16       4.75/18       4.78/18
                        Algorithm 2    4.80/18       9.35/20       1.67/22
θ_n = 0.5 + 1/(n+1)     Algorithm 1    4.19/12       4.96/12       4.27/12
                        Algorithm 2    4.23/16       16.37/18      10.69/18
θ_n = 1                 Algorithm 1    2.56/10       2.63/10       2.60/10
                        Algorithm 2    3.17/12       3.25/12       3.25/12
Table 2. Comparison of Algorithms 1 and 2 with Algorithm 3.1 in Sitthithakerngkiet et al. [21], Algorithm 3.1 in Byrne et al. [5], the method of Kazmi and Rizvi [22], and Algorithms 3.1 and 3.2 in Tang [34].

DOL       Method                             Iter (n)   CPU Time (s)
10^{-4}   Algorithm 1                        3          0.019
          Algorithm 2                        65         0.19
          Algorithm 3.1, Tang [34]           3          0.14
          Algorithm 3.2, Tang [34]           35         2.26
          Sitthithakerngkiet et al. [21]     78         0.12
          Byrne et al. [5]                   2          0.01
          Kazmi and Rizvi [22]               48         0.08
10^{-5}   Algorithm 1                        3          0.017
          Algorithm 2                        102        0.24
          Algorithm 3.1, Tang [34]           8          2.37
          Algorithm 3.2, Tang [34]           76         2.78
          Sitthithakerngkiet et al. [21]     1272       3.03
          Byrne et al. [5]                   3          0.013
          Kazmi and Rizvi [22]               503        0.74