Article

The AA-Viscosity Algorithm for Fixed-Point, Generalized Equilibrium and Variational Inclusion Problems

by
Muhammad Waseem Asghar
1,
Mujahid Abbas
1,2 and
Behzad Djafari Rouhani
3,*
1
Department of Mathematics, Government College University, Katchery Road, Lahore 54000, Pakistan
2
Department of Medical Research, China Medical University, Taichung 404, Taiwan
3
Department of Mathematical Sciences, University of Texas at El Paso, El Paso, TX 79968, USA
*
Author to whom correspondence should be addressed.
Axioms 2024, 13(1), 38; https://doi.org/10.3390/axioms13010038
Submission received: 5 December 2023 / Revised: 1 January 2024 / Accepted: 2 January 2024 / Published: 5 January 2024

Abstract:
The aim of this paper is to propose an inertial-type AA-viscosity algorithm for approximating the common solutions of the split variational inclusion problem, the generalized equilibrium problem and the common fixed-point problem of nonexpansive mappings. The strong convergence of an iterative sequence obtained through the proposed method is proved under some mild assumptions. Consequently, approximations of the solution of the split feasibility problem, the relaxed split feasibility problem, the split common null point problem and the split minimization problem are given. The applicability of our proposed algorithm has been illustrated with the help of a numerical example. Our iterative method was then compared graphically with different comparable methods in the existing literature.

1. Introduction and Preliminaries

Differential equations, game theory, control theory, the variational inequality problem, the equilibrium problem, the fixed-point problem, the optimization problem and the split feasibility problem are some well-known examples of nonlinear problems to which nonlinear operator theory is applicable. Over the past few decades, the development of efficient, flexible, less expensive and manageable approximation methods that are easy to test and debug for approximating the solutions of nonlinear operator equations and inclusions has become an active area of research. As a continuation, we propose an efficient and flexible iterative algorithm for approximating the common solution of some generalized nonlinear problems.
Throughout this paper, the symbols ℝ, ℝ⁺ and ℕ will denote the set of all real numbers, the set of all positive real numbers and the set of all natural numbers, respectively.
Let H be a real Hilbert space, C a nonempty closed convex subset of H and T a self-mapping on C. The set {a̲* ∈ C : a̲* = T a̲*} of all fixed points of T is denoted by F(T). A mapping T is called Lipschitzian if there exists a constant L > 0 such that ‖T a̲ − T b̲‖ ≤ L ‖a̲ − b̲‖ holds for all a̲, b̲ ∈ C. If L is restricted to the interval (0, 1), then T is called a contraction; if the inequality holds with L = 1, then T is called nonexpansive.
The study of nonexpansive mappings is significant mainly because of three reasons: (1) The existence of fixed points of such mappings relies on the geometric properties of the underlying Banach spaces/Hilbert spaces instead of compactness properties. (2) These mappings are used as the transition operators for certain initial value problems of differential inclusions involving accretive or dissipative operators. (3) Different problems appearing in areas like compressed sensing, economics, convex optimization theory, variational inequality problems, monotone inclusions, convex feasibility, image restoration and other applied sciences give rise to operator equations which involve nonexpansive mappings (see [1,2]). Another reason for studying nonexpansive mappings involves complex analysis, holomorphic mappings and the Hilbert ball (see, for example, [3,4]).
Let us recall that a multi-valued mapping T : H → 2^H is said to be monotone if ⟨a̲ − b̲, p̲ − q̲⟩ ≥ 0 for all a̲, b̲ ∈ H, p̲ ∈ T a̲ and q̲ ∈ T b̲.
A monotone mapping T is said to be maximal if the graph of T is not properly contained in the graph of any other monotone mapping.
An operator T : H → H is called t-inverse strongly monotone if for all a̲, b̲ ∈ H, we have ⟨T a̲ − T b̲, a̲ − b̲⟩ ≥ t ‖T a̲ − T b̲‖² for some t > 0.
If we set t = 1 in the above inequality, then T is called inverse strongly monotone.
Let λ > 0 be a given parameter and I the identity operator on H. If we set
J(λ; T) = J_λ^T = (I + λT)^{−1},
then J_λ^T is called the resolvent of the mapping T. Note that J_λ^T : R(I + λT) → D(T).
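For intuition, the following minimal sketch (ours, not from the paper) computes the resolvent of a linear monotone operator T(x) = Mx with M positive semidefinite; in this case J_λ^T reduces to a single linear solve.

```python
import numpy as np

# Sketch (illustrative): for the linear monotone operator T(x) = M x with
# M positive semidefinite, the resolvent J_{lambda T}(x) = (I + lambda M)^{-1} x
# is computed by one linear solve.
def resolvent_linear(M, lam, x):
    n = M.shape[0]
    return np.linalg.solve(np.eye(n) + lam * M, x)

M = np.diag([7.0, 5.0, 2.0])          # an illustrative PSD diagonal operator
x = np.array([1.0, -2.0, 3.0])
y = resolvent_linear(M, 0.5, x)
# y is the unique point with x in y + 0.5 * M y, i.e. y = J_{0.5 T}(x)
```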
It is known that for each a̲ ∈ H, there is a unique element P_C a̲ ∈ C such that
‖a̲ − P_C a̲‖ = inf{‖a̲ − q̲‖ : q̲ ∈ C}.
A mapping P C from H onto C is called a metric projection of H onto C .
Recall that for any a̲ ∈ H,
P_C a̲ = q̲ if and only if ⟨a̲ − q̲, q̲ − c̲⟩ ≥ 0 for all c̲ ∈ C.
More information on metric projections can be found in Section 3 in [3]; also, we refer the reader to [5].
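To illustrate this characterization, here is a small sketch (ours; the halfspace C matches the set used later in Example 1) that computes P_C in closed form and checks the inequality above at a sample point.

```python
import numpy as np

# Sketch (illustrative): metric projection onto the halfspace
# C = {c : <p, c> <= q}. If <p, a> <= q the point is already in C;
# otherwise move along p just far enough to reach the boundary.
def project_halfspace(p, q, a):
    slack = p @ a - q
    if slack <= 0.0:
        return a.copy()
    return a - (slack / (p @ p)) * p

p, q = np.array([8.0, 3.0, 1.0]), 1.0     # the set used in Example 1 below
a = np.array([2.0, 1.0, 0.0])
Pa = project_halfspace(p, q, a)
# Characterization check: <a - Pa, Pa - c> >= 0 for points c in C
c = np.zeros(3)                            # c in C since <p, c> = 0 <= 1
assert (a - Pa) @ (Pa - c) >= -1e-12
```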
Throughout this manuscript, we denote the strong and weak convergence of a sequence {a̲_n} to a point a̲* by a̲_n → a̲* and a̲_n ⇀ a̲*, respectively. The set of all weak subsequential limits of {a̲_n} is denoted by χ(a̲_n); that is, if a̲ ∈ H is such that a̲ ∈ χ(a̲_n), then there exists some subsequence {a̲_{n_i}} of the sequence {a̲_n} which converges weakly to a̲.
Definition 1.
A mapping ϕ : C → H is said to be firmly nonexpansive if for all a̲, b̲ ∈ C, we have
⟨ϕ a̲ − ϕ b̲, a̲ − b̲⟩ ≥ ‖ϕ a̲ − ϕ b̲‖².
Note that P_C : H → C is a well-known example of a firmly nonexpansive mapping. More information on firmly nonexpansive mappings can be found in Section 11 of [3].
Moreover, ϕ is called hemicontinuous on C if it is continuous along each line segment in C .
Lemma 1
([6]). Let C be a nonempty closed convex subset of a Hilbert space H and T : C → H be nonexpansive. Then, I − T is demiclosed on C; that is, any sequence {a̲_n} in C with a̲_n ⇀ a̲ and (I − T) a̲_n → c̲ gives that (I − T) a̲ = c̲.
Definition 2.
A mapping ϕ : H → ℝ ∪ {+∞} is weakly lower semicontinuous at a̲ ∈ H if for any sequence {a̲_n} in H with a̲_n ⇀ a̲, we have
ϕ(a̲) ≤ lim inf_{n→∞} ϕ(a̲_n).
Lemma 2
([7]). For any a̲, c̲ ∈ H and β ∈ ℝ, the following results hold:
(i) ‖a̲ + c̲‖² ≤ ‖a̲‖² + 2⟨a̲ + c̲, c̲⟩;
(ii) ‖a̲ + c̲‖² = ‖a̲‖² + 2⟨a̲, c̲⟩ + ‖c̲‖²;
(iii) ‖a̲ − c̲‖² = ‖a̲‖² − 2⟨a̲, c̲⟩ + ‖c̲‖²;
(iv) ‖β a̲ + (1 − β) c̲‖² = β‖a̲‖² + (1 − β)‖c̲‖² − β(1 − β)‖a̲ − c̲‖².
Lemma 3
([8]). Let a̲, b̲, c̲ ∈ H and α, β, γ ∈ [0, 1] with α + β + γ = 1; then, the following holds:
‖α a̲ + β b̲ + γ c̲‖² = α‖a̲‖² + β‖b̲‖² + γ‖c̲‖² − αβ‖a̲ − b̲‖² − αγ‖a̲ − c̲‖² − βγ‖b̲ − c̲‖².
Lemma 4
([9]). Suppose that {a̲_n}, {c̲_n} are sequences of positive real numbers with ∑_{n=0}^{∞} c̲_n < ∞, {b̲_n} ⊂ ℝ and {σ_n} ⊂ (0, 1) such that the following holds:
a̲_{n+1} ≤ (1 − σ_n) a̲_n + b̲_n + c̲_n, for all n ≥ 0.
(i) If b̲_n ≤ η σ_n for some η ≥ 0, then {a̲_n} is a bounded sequence.
(ii) If ∑_{n=0}^{∞} σ_n = ∞ and lim sup_{n→∞} b̲_n/σ_n ≤ 0, then we have lim_{n→∞} a̲_n = 0.
Lemma 5
([10]). Suppose that {ā_n} ⊂ ℝ⁺ and {σ_n} ⊂ (0, 1) with ∑_{n=0}^{∞} σ_n = ∞, and {b̄_n} ⊂ ℝ is such that
ā_{n+1} ≤ (1 − σ_n) ā_n + σ_n b̄_n, for all n ∈ ℕ.
If lim sup_{i→∞} b̄_{n_i} ≤ 0 for every subsequence {ā_{n_i}} of {ā_n} with lim inf_{i→∞} (ā_{n_i+1} − ā_{n_i}) ≥ 0, then lim_{n→∞} ā_n = 0.

1.1. Some Nonlinear Problems

Throughout this paper, we suppose that H, H₁, H₂ are real Hilbert spaces, C and Q are nonempty closed and convex subsets of H₁ and H₂, respectively, and A : H₁ → H₂ is a bounded linear operator with A* as its adjoint operator.
Let T : H → H. The fixed-point problem (FPP) can be formulated as:
find a̲* ∈ H such that T a̲* = a̲*.
For two multivalued mappings S and T, if a̲* ∈ S a̲* ∩ T a̲*, then we say that a̲* is a common fixed point of S and T.
Let F : C × C → ℝ be a bifunction. An equilibrium problem (EP) involving F and the set C is defined as follows:
find a̲* ∈ C such that F(a̲*, b̲) ≥ 0 for all b̲ ∈ C.
Let T : C → H. The variational inequality problem (VIP) associated with T and C is given as follows:
find a̲* ∈ C such that ⟨b̲ − a̲*, T a̲*⟩ ≥ 0 for all b̲ ∈ C.
Suppose that T : C → H and F : C × C → ℝ are two mappings. The generalized equilibrium problem, GEP(F, T), of F and T is defined as follows:
find a̲* ∈ C such that F(a̲*, b̲) + ⟨b̲ − a̲*, T a̲*⟩ ≥ 0 for all b̲ ∈ C. (2)
Note that if T is the zero operator in (2), then the GEP(F, T) reduces to the EP; if F is the zero bifunction in (2), then the GEP(F, T) becomes the VIP. The solution set of the GEP(F, T) (2) is denoted by S(GEP(F, T)).
The G E P ( F , T ) unifies different problems such as the VIP, EP, complementarity problem, optimization problem, FPP and Nash equilibrium problem in noncooperative games (for instance, see [11,12,13,14]).
The split inverse problem (SIP) has gained a lot of attention from many researchers recently. The first version of the SIP was the split feasibility problem (SFP), which was proposed by Censor and Elfving in 1994 [15].
The SFP associated with a bounded linear operator A : H₁ → H₂ is defined as follows:
find a point a̲* ∈ C such that A a̲* ∈ Q.
That is, the SFP is the problem of finding a point of a closed convex subset whose image under a given bounded linear operator lies in another closed convex subset. This problem has found several applications in real-world settings such as image recognition, signal processing, intensity-modulated radiation therapy and many others. For more results in this direction, we refer to [16,17,18,19,20,21].
For any operator A : H₁ → H₂:
(a) The direct problem is to determine b̲ = A(a̲) for a given a̲ ∈ C (that is, from the cause to the consequence).
(b) The inverse problem is to determine a point a̲ ∈ C such that b̲ = A(a̲) for a given b̲ ∈ Q (that is, from the consequence to the cause).
The split inverse problem (SIP) is defined as follows: find a point
a̲* ∈ H₁ which solves IP₁,
such that
A a̲* ∈ H₂ solves IP₂,
where IP₁ is the inverse problem formulated in H₁ and IP₂ is another inverse problem formulated in H₂.
Moudafi [22] proposed a new version of the SIP called the split monotone variational inclusion problem (SMVIP).
Suppose that ϕ₁ : H₁ → H₁ and ϕ₂ : H₂ → H₂ are inverse strongly monotone mappings, T₁ : H₁ → 2^{H₁} and T₂ : H₂ → 2^{H₂} are multivalued maximal monotone mappings and A : H₁ → H₂ is a bounded linear operator. The SMVIP is defined as follows: find a point
a̲* ∈ H₁ such that 0 ∈ ϕ₁ a̲* + T₁ a̲*,
and
b̲* = A a̲* ∈ H₂ such that 0 ∈ ϕ₂ A a̲* + T₂ A a̲*.
If ϕ 1 = ϕ 2 = 0 , then the SMVIP reduces to the following split variational inclusion problem (SVIP), which is defined as follows: Find a point
a ̲ * H 1 such that 0 T 1 a ̲ * ,
and
A a ̲ * H 2 such that 0 T 2 A a ̲ * .
Moreover, Moudafi showed that the SFP is a special case of the SVIP. Many inverse problems arising in real-world applications can be modeled as an SVIP (for details, see [16,19]). We shall denote the solution set of the variational inclusion problem on H₁ by SVIP(H₁) and the solution set of the variational inclusion problem on H₂ by SVIP(H₂). The solution set of the SVIP is denoted by
Γ = {a̲* ∈ H₁ : a̲* ∈ SVIP(H₁) and A a̲* ∈ SVIP(H₂)}.
Remark 1.
According to [23,24], the following hold:
  • The mapping T is maximal monotone if and only if the resolvent operator J_λ^T is a single-valued mapping.
  • J_λ^T a̲* = a̲* if and only if a̲* ∈ T^{−1}(0).
  • The split variational inclusion problem given in (3) is equivalent to the following: find a̲* ∈ H₁ with
    J_λ^{T₁} a̲* = a̲* such that A a̲* ∈ H₂ and A a̲* = J_λ^{T₂} A a̲*.

1.2. Some Notable Iterative Algorithms

The problem of approximating fixed points of nonexpansive mappings with the help of different iterative processes has been studied extensively (see [9,13,25,26,27,28,29,30]).
There have been several iterative methods proposed in the literature for the solution of nonlinear problems. For instance, in 2022, Abbas et al. [31] proposed an iterative method known as the AA (Abbas–Asghar)-iteration.
The sequence { a ̲ n } generated by the AA-iteration is defined as follows in Algorithm 1:
Algorithm 1: AA-iterative algorithm proposed in [31].
Initialization: Let { η n } , { δ n } and { σ n } be sequences of real numbers in ( 0 , 1 ) .
Choose any a̲₁ ∈ C;
For n ≥ 1, calculate a̲_{n+1} as follows:
d̲_n = (1 − η_n) a̲_n + η_n T a̲_n,
c̲_n = T((1 − δ_n) d̲_n + δ_n T d̲_n),
b̲_n = T((1 − σ_n) T d̲_n + σ_n T c̲_n),
a̲_{n+1} = T b̲_n.
It was shown that the AA-iteration method has a faster rate of convergence than other well-known iteration methods existing in the literature [31]. Note that the AA-iteration method has been successfully applied for obtaining the solutions of operator equations involving nonexpansive-type mappings; for instance, see [32,33,34].
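A minimal sketch of the AA-iteration (ours; the map T and the constant parameter choices are illustrative and not those of [31]):

```python
import numpy as np

# Sketch of Algorithm 1 for a nonexpansive map T with constant parameters.
def aa_iteration(T, a, eta, delta, sigma, n_iters=50):
    for _ in range(n_iters):
        d = (1 - eta) * a + eta * T(a)
        c = T((1 - delta) * d + delta * T(d))
        b = T((1 - sigma) * T(d) + sigma * T(c))
        a = T(b)
    return a

T = lambda x: x / 2.0                    # nonexpansive (in fact a contraction)
a0 = np.array([1.0, -1.0, 2.0])
print(aa_iteration(T, a0, eta=0.5, delta=0.5, sigma=0.5))  # -> near 0 = F(T)
```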
Byrne et al. [35] proposed an iterative algorithm to solve the SVIP involving maximal monotone operators T 1 and T 2 which is as follows in Algorithm 2:
Algorithm 2: Proximal algorithm proposed in [35].
Initialization: Let {α_n} be a sequence of real numbers in (0, 1), λ > 0 and ω ∈ (0, 2/L), where L = ‖A*A‖.
Choose any a̲₁ ∈ H₁;
For n ≥ 1, calculate a̲_{n+1} as follows:
a̲_{n+1} = α_n a̲_n + (1 − α_n) J_λ^{T₁}(a̲_n + ω A*(J_λ^{T₂} − I) A a̲_n),
where J_λ^{T₁} and J_λ^{T₂} are the resolvent operators of T₁ and T₂, respectively, lim_{n→∞} α_n = 0 and ∑_{n=1}^{∞} α_n = ∞.
The problem of finding a common solution of several nonlinear problems has gained a lot of attention from many authors. For example, Wangkeeree et al. [36] proposed the following iterative algorithm to obtain the common solution of the FPP and the SVIP for nonexpansive mappings. The proposed iterative method is given in Algorithm 3.
Algorithm 3: General iterative algorithm proposed in [36].
Initialization: Let {α_n} be a sequence of real numbers in (0, 1), λ > 0 and ω ∈ (0, 2/L), where L is the spectral radius of the operator A*A.
Choose any a̲₁ ∈ H₁;
For n ≥ 1, calculate a̲_{n+1} as follows:
b̲_n = J_λ^{T₁}(a̲_n + ω A*(J_λ^{T₂} − I) A a̲_n),
a̲_{n+1} = α_n η ϕ a̲_n + (I − α_n B) S b̲_n,
where ϕ : H₁ → H₁ is a contraction with contraction constant c, S : H₁ → H₁ is a nonexpansive mapping, B : H₁ → H₁ is a bounded linear operator with constant θ and η > 0 with η < θ/c, and T₁ : H₁ → 2^{H₁} and T₂ : H₂ → 2^{H₂} are multivalued maximal monotone operators.
It was shown that under some appropriate conditions, the sequence defined in Algorithm 3 converges strongly to a common solution of the FPP and the SVIP.
The step size plays an important role in both the computational cost and the rate of convergence of an algorithm. Indeed, the selection of an appropriate step size can help in approximating the solution in fewer steps, and hence the step size may affect the rate of convergence of any iterative algorithm. Note that the step sizes described in Algorithms 2 and 3 depend upon the operator norm, and hence these algorithms are not easily implementable, as the computation of the operator norm in each step makes the task difficult.
Later on, Tang [24] modified Algorithm 2 with a self-adaptive step size for approximating the solution of the SVIP. The proposed method is described in Algorithm 4.
Algorithm 4: Iterative algorithm proposed by Tang in [24].
Initialization: Let {ρ_n} be a sequence such that ρ_n ∈ (0, 4) with inf_n ρ_n(4 − ρ_n) > 0.
Choose any a̲₁ ∈ H₁;
For n 1 , calculate a ̲ n + 1 as follows:
Compute
ω_n = ρ_n g(a̲_n)/(‖G(a̲_n)‖² + ‖H(a̲_n)‖²),
then compute
a̲_{n+1} = α_n a̲_n + (1 − α_n) J_λ^{T₁}(a̲_n − ω_n A*(I − J_λ^{T₂}) A a̲_n),
where g(a̲) = ½‖(I − J_λ^{T₂}) A a̲‖², G(a̲) = A*(I − J_λ^{T₂}) A a̲, H(a̲) = (I − J_λ^{T₁}) a̲ and {α_n} is a sequence satisfying the conditions given in Algorithm 2.
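The following sketch (ours) shows how such a self-adaptive step size is evaluated; J1 and J2 stand for the resolvents of T₁ and T₂, and A is a matrix representing the bounded linear operator. All inputs are illustrative placeholders.

```python
import numpy as np

# Sketch of the self-adaptive step size of Algorithm 4 (also reused in
# Algorithm 5): no operator norm of A is ever needed.
def step_size(a, A, J1, J2, rho):
    r = A @ a - J2(A @ a)              # (I - J_{lambda T2}) A a
    G = A.T @ r                        # G(a) = A*(I - J_{lambda T2}) A a
    H = a - J1(a)                      # H(a) = (I - J_{lambda T1}) a
    g = 0.5 * (r @ r)                  # g(a) = (1/2) ||(I - J_{lambda T2}) A a||^2
    denom = G @ G + H @ H
    return 0.0 if denom == 0.0 else rho * g / denom
```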
Under some suitable conditions, a strong convergence result was proven for Algorithm 4. Moreover, many researchers have worked on inertial-type algorithms, in which each iteration is defined using the previous two iterations. Many authors have proposed some efficient algorithms combining the inertial process with self-adaptive step size methods for approximating the solutions of certain nonlinear problems; for more details we refer to ([30,37,38,39,40]). Moreover, Rouhani et al. proposed different iterative algorithms to find the common solution of some important nonlinear problems in Hilbert and Banach spaces; for details, see [41,42,43,44].
Recently, Alakoya and Mewomo [45] proposed an inertial-type viscosity algorithm hybridized with the S-iteration [46] to approximate the common solution of certain nonlinear problems. They used a suitable step size in the proposed algorithm to approximate the solution without prior knowledge of the operator norm. A natural question arises: is it possible to develop a method which converges at a faster rate and approximates the solutions of more general nonlinear problems?
Using the step size given in [45], we propose an efficient inertial viscosity algorithm hybridized with the AA-iteration for approximating the common solution of more general nonlinear problems. Indeed, finding common solutions to nonlinear problems, as opposed to solving them separately, is crucial because it offers a unified perspective on the interconnected variables. This approach provides a more comprehensive understanding of the system's behavior, ensuring consistency and enabling more robust modeling and analysis in complex scenarios. Using suitable control parameters, we prove a strong convergence result for approximating the common solution of a split variational inclusion problem, the GEP(F, T) and the common FPP. These problems are important in different fields, like network resources, signal processing, image processing and many others (for more details, we refer to [26,28]).

2. Convergence Analysis

Now, we present the following assumptions for the proposed algorithm.
Assumption 1.
Let F : C × C → ℝ and T : C → H. For solving the GEP(F, T), we impose the following conditions on F and T:
(A1) F(a̲, a̲) = 0, for all a̲ ∈ C.
(A2) F(a̲, b̲) + F(b̲, a̲) + ⟨T a̲, b̲ − a̲⟩ + ⟨T b̲, a̲ − b̲⟩ ≤ 0, for all a̲, b̲ ∈ C.
(A3) lim_{α→0} F(α a̲ + (1 − α) b̲, c̲) ≤ F(b̲, c̲), for all a̲, b̲, c̲ ∈ C.
(A4) For each a̲ ∈ C, b̲ ↦ F(a̲, b̲) + ⟨T a̲, b̲ − a̲⟩ is convex and lower semicontinuous.
Definition 3.
For some r > 0, define the mapping T_r^F : H → 2^C as follows:
T_r^F a̲ = {c̲ ∈ C : F(c̲, b̲) + ⟨T c̲, b̲ − c̲⟩ + (1/r)⟨b̲ − c̲, c̲ − a̲⟩ ≥ 0 for all b̲ ∈ C}.
Lemma 6.
Under the conditions (A1)–(A4), we have the following:
(1) T_r^F is single-valued and firmly nonexpansive.
(2) F(T_r^F) = S(GEP(F, T)).
(3) F(T_r^F) is closed and convex.
Proof. 
(1). For given a̲, a̲* ∈ H, let c̲ ∈ T_r^F a̲ and c̲* ∈ T_r^F a̲*. Then, we have F(c̲, c̲*) + ⟨T c̲, c̲* − c̲⟩ ≥ −(1/r)⟨c̲* − c̲, c̲ − a̲⟩ and F(c̲*, c̲) + ⟨T c̲*, c̲ − c̲*⟩ ≥ −(1/r)⟨c̲ − c̲*, c̲* − a̲*⟩. It follows from (A2) that (1/r)⟨c̲* − c̲, (a̲ − a̲*) − (c̲ − c̲*)⟩ ≤ F(c̲, c̲*) + F(c̲*, c̲) + ⟨T c̲, c̲* − c̲⟩ + ⟨T c̲*, c̲ − c̲*⟩ ≤ 0. Hence, we get
⟨c̲* − c̲, a̲* − a̲⟩ ≥ ‖c̲ − c̲*‖².
That is, T r F is firmly nonexpansive. Furthermore, for a ̲ = a ̲ * , we get c ̲ = c ̲ * , which implies that T r F is single-valued.
(2). Now, we show that F(T_r^F) = S(GEP(F, T)). If a̲ ∈ H, then a̲ ∈ F(T_r^F) ⇔ T_r^F a̲ = a̲ ⇔ F(a̲, b̲) + ⟨T a̲, b̲ − a̲⟩ + (1/r)⟨b̲ − a̲, a̲ − a̲⟩ ≥ 0 for all b̲ ∈ C ⇔ F(a̲, b̲) + ⟨T a̲, b̲ − a̲⟩ ≥ 0 for all b̲ ∈ C ⇔ a̲ ∈ S(GEP(F, T)).
(3). Since T_r^F is firmly nonexpansive, it is nonexpansive, and the set of fixed points of a nonexpansive map is closed and convex. □

Proposed Algorithm

Here, we discuss our proposed algorithm. Initially, we describe some notations as follows.
Suppose that B₁ : H₁ → 2^{H₁} and B₂ : H₂ → 2^{H₂} are maximal monotone mappings, F : C × C → ℝ satisfies Assumption 1, A : H₁ → H₂ is a bounded linear operator and the adjoint operator of A is denoted by A*. Let S, T : H₁ → H₁ be nonexpansive mappings and ϕ : H₁ → H₁ be a contraction mapping with contraction constant c.
We define the following mappings:
g(a̲) = ½‖(I − J_{λ₂}^{B₂}) A a̲‖², h(a̲) = ½‖(I − J_{λ₁}^{B₁}) a̲‖², G(a̲) = A*(I − J_{λ₂}^{B₂}) A a̲, H(a̲) = (I − J_{λ₁}^{B₁}) a̲.
Note that g and h are weakly lower semi-continuous, convex and differentiable [47]. Furthermore, G and H are Lipschitz continuous [24].
We now present our proposed method, which is given in Algorithm 5; its flowchart diagram can be seen in Figure 1.
Algorithm 5: Proposed inertial-type AA-viscosity algorithm for variational inclusion problems, G E P ( F , T ) and common FPP.
Step 0. Suppose that a̲₀, a̲₁ ∈ H₁ and κ is a non-negative real number. Set n = 1.
Step 1. Given the (n−1)th and nth iterates, choose κ_n such that 0 ≤ κ_n ≤ κ̂_n, with κ̂_n given as
κ̂_n = min{κ, θ_n/‖a̲_n − a̲_{n−1}‖} if a̲_n ≠ a̲_{n−1}, and κ̂_n = κ otherwise. (7)
Step 2. Compute
h̲_n = a̲_n + κ_n(a̲_n − a̲_{n−1}).
Step 3. Find g̲_n ∈ C such that
F(g̲_n, p̲*) + ⟨T g̲_n, p̲* − g̲_n⟩ + (1/r_n)⟨p̲* − g̲_n, g̲_n − h̲_n⟩ ≥ 0 for all p̲* ∈ C.
Step 4. Compute
f̲_n = η_n h̲_n + (1 − η_n) g̲_n.
Step 5. Compute
e̲_n = J_{λ₁}^{B₁}(I − ω_n A*(I − J_{λ₂}^{B₂}) A) f̲_n,
where
ω_n = ρ_n g(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) if ‖G(f̲_n)‖² + ‖H(f̲_n)‖² ≠ 0, and ω_n = 0 otherwise.
Step 6. Evaluate
d̲_n = S((1 − σ_n) f̲_n + σ_n S e̲_n).
Step 7. Compute
c̲_n = S((1 − δ_n) S e̲_n + δ_n S d̲_n).
Step 8. Set
b̲_n = S c̲_n.
Step 9. Find
a̲_{n+1} = α_n ϕ a̲_n + β_n S e̲_n + γ_n T b̲_n.
Update: set n = n + 1 and return to Step 1.
The control parameters are given and satisfy the following conditions:
(i) {α_n} is a sequence in (0, 1) with ∑_{n=0}^{∞} α_n = ∞ and lim_{n→∞} α_n = 0.
(ii) {η_n}, {σ_n}, {β_n}, {γ_n} are sequences in (0, 1), all lying in [a, b] with a, b ∈ (0, 1), and satisfying α_n + β_n + γ_n = 1.
(iii) κ > 0 is fixed and {θ_n} is a sequence of positive real numbers such that lim_{n→∞} θ_n/α_n = 0.
(iv) 0 < a ≤ ρ_n ≤ b < 4, and {r_n} is a sequence of positive real numbers such that lim inf_{n→∞} r_n > 0 and λ_i > 0, i = 1, 2.
Remark 2.
Note that by conditions (i) and (iii), one can easily verify from (7) that
lim_{n→∞} (κ_n/α_n)‖a̲_n − a̲_{n−1}‖ = 0.
In addition, ϕ : H₁ → H₁ and P_Ω : H₁ → Ω are given.
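To make the flow of Steps 1–9 concrete, here is a compact sketch of one pass of Algorithm 5 (ours, not from the paper; every problem operator is passed in as a callable, and the per-iteration parameters are passed as scalars for readability). Concrete instances of the operators appear in the numerical example of Section 4.

```python
import numpy as np

# Skeleton of one pass through Steps 1-9 of Algorithm 5. J1, J2 are the
# resolvents of B1, B2; TrF is the resolvent of the generalized equilibrium
# problem; S, T are nonexpansive maps and phi is a contraction.
def algorithm5_step(a_prev, a, A, J1, J2, TrF, S, T, phi,
                    kappa, theta, alpha, eta, sigma, delta, beta, gamma, rho):
    diff = np.linalg.norm(a - a_prev)
    kap = min(kappa, theta / diff) if diff > 0 else kappa   # Step 1
    h = a + kap * (a - a_prev)                              # Step 2
    g = TrF(h)                                              # Step 3
    f = eta * h + (1 - eta) * g                             # Step 4
    r = A @ f - J2(A @ f)
    G = A.T @ r
    H = f - J1(f)
    denom = G @ G + H @ H
    w = rho * (0.5 * (r @ r)) / denom if denom > 0 else 0.0
    e = J1(f - w * G)                                       # Step 5
    d = S((1 - sigma) * f + sigma * S(e))                   # Step 6
    c = S((1 - delta) * S(e) + delta * S(d))                # Step 7
    b = S(c)                                                # Step 8
    return alpha * phi(a) + beta * S(e) + gamma * T(b)      # Step 9
```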
Suppose that Ω = F(T) ∩ F(S) ∩ Γ ∩ S(GEP(F, T)) ≠ ∅. The strong convergence result for the proposed algorithm is given as follows.
Theorem 1.
Suppose that A, S, T and ϕ are the mappings described above. If {a̲_n} is the sequence generated by Algorithm 5 and the conditions (A1)–(A4) and (i)–(iv) hold, then the sequence {a̲_n} converges strongly to the fixed point of P_Ω ∘ ϕ.
We divide our proof into the following lemmas.
Lemma 7.
If { a ̲ n } is a sequence generated by Algorithm 5, then { a ̲ n } is bounded.
Proof. 
Note that g̲_n = T_{r_n}^F h̲_n. Since P_Ω ∘ ϕ is a contraction, the Banach contraction principle guarantees that there exists a p̲* ∈ H₁ such that P_Ω ∘ ϕ(p̲*) = p̲* and p̲* ∈ Ω. This gives S p̲* = p̲*, T_{r_n}^F p̲* = p̲*, J_{λ₁}^{B₁} p̲* = p̲* and J_{λ₂}^{B₂} A p̲* = A p̲*. As T_{r_n}^F is nonexpansive for each n, then
‖g̲_n − p̲*‖ = ‖T_{r_n}^F h̲_n − p̲*‖ ≤ ‖h̲_n − p̲*‖. (9)
Now,
‖h̲_n − p̲*‖ = ‖a̲_n + κ_n(a̲_n − a̲_{n−1}) − p̲*‖ ≤ ‖a̲_n − p̲*‖ + κ_n‖a̲_n − a̲_{n−1}‖ = ‖a̲_n − p̲*‖ + α_n (κ_n/α_n)‖a̲_n − a̲_{n−1}‖.
By Remark 2, lim_{n→∞} (κ_n/α_n)‖a̲_n − a̲_{n−1}‖ = 0. It then follows that there exists a constant K₁ > 0 such that
(κ_n/α_n)‖a̲_n − a̲_{n−1}‖ ≤ K₁, for all n ≥ 1. (10)
So, by (10), we obtain
‖h̲_n − p̲*‖ ≤ ‖a̲_n − p̲*‖ + α_n K₁. (11)
Also,
‖f̲_n − p̲*‖ = ‖η_n h̲_n + (1 − η_n) g̲_n − p̲*‖ ≤ η_n‖h̲_n − p̲*‖ + (1 − η_n)‖g̲_n − p̲*‖ ≤ η_n‖h̲_n − p̲*‖ + (1 − η_n)‖h̲_n − p̲*‖ = ‖h̲_n − p̲*‖. (12)
Now, using the definition of G(a̲) and the firm nonexpansiveness of I − J_{λ₂}^{B₂} (together with (I − J_{λ₂}^{B₂}) A p̲* = 0), we get
⟨G f̲_n, f̲_n − p̲*⟩ = ⟨A*(I − J_{λ₂}^{B₂}) A f̲_n, f̲_n − p̲*⟩ = ⟨(I − J_{λ₂}^{B₂}) A f̲_n, A f̲_n − A p̲*⟩ = ⟨(I − J_{λ₂}^{B₂}) A f̲_n − (I − J_{λ₂}^{B₂}) A p̲*, A f̲_n − A p̲*⟩ ≥ ‖(I − J_{λ₂}^{B₂}) A f̲_n − (I − J_{λ₂}^{B₂}) A p̲*‖² = ‖(I − J_{λ₂}^{B₂}) A f̲_n‖² = 2 g(f̲_n). (13)
Now, by Lemma 2, applying (13) together with the nonexpansiveness of J_{λ₁}^{B₁}, we have
‖e̲_n − p̲*‖² = ‖J_{λ₁}^{B₁}(I − ω_n A*(I − J_{λ₂}^{B₂}) A) f̲_n − p̲*‖² ≤ ‖f̲_n − ω_n A*(I − J_{λ₂}^{B₂}) A f̲_n − p̲*‖² = ‖(f̲_n − p̲*) − ω_n G(f̲_n)‖² = ‖f̲_n − p̲*‖² + ω_n²‖G(f̲_n)‖² − 2 ω_n⟨G(f̲_n), f̲_n − p̲*⟩,
and putting in the value of ω_n, we have
‖e̲_n − p̲*‖² ≤ ‖f̲_n − p̲*‖² + ρ_n² g²(f̲_n)‖G(f̲_n)‖²/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²)² − 4 ρ_n g²(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) ≤ ‖f̲_n − p̲*‖² − (4 − ρ_n) ρ_n g²(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²). (14)
By using the assumption on ρ_n, we obtain
‖e̲_n − p̲*‖ ≤ ‖f̲_n − p̲*‖. (15)
Now, by using (15), we get
‖d̲_n − p̲*‖ = ‖S((1 − σ_n) f̲_n + σ_n S e̲_n) − p̲*‖ ≤ ‖(1 − σ_n) f̲_n + σ_n S e̲_n − p̲*‖ ≤ (1 − σ_n)‖f̲_n − p̲*‖ + σ_n‖S e̲_n − p̲*‖ ≤ (1 − σ_n)‖f̲_n − p̲*‖ + σ_n‖e̲_n − p̲*‖ ≤ (1 − σ_n)‖f̲_n − p̲*‖ + σ_n‖f̲_n − p̲*‖ = ‖f̲_n − p̲*‖. (16)
Now, by (15) and (16) and since S is nonexpansive, we have
‖c̲_n − p̲*‖ = ‖S((1 − δ_n) S e̲_n + δ_n S d̲_n) − p̲*‖ ≤ ‖(1 − δ_n) S e̲_n + δ_n S d̲_n − p̲*‖ ≤ (1 − δ_n)‖S e̲_n − p̲*‖ + δ_n‖S d̲_n − p̲*‖ ≤ (1 − δ_n)‖e̲_n − p̲*‖ + δ_n‖d̲_n − p̲*‖ ≤ (1 − δ_n)‖f̲_n − p̲*‖ + δ_n‖f̲_n − p̲*‖ = ‖f̲_n − p̲*‖. (17)
Now, by (11) and (12),
‖b̲_n − p̲*‖ = ‖S c̲_n − p̲*‖ ≤ ‖c̲_n − p̲*‖ ≤ ‖f̲_n − p̲*‖ ≤ ‖h̲_n − p̲*‖ ≤ ‖a̲_n − p̲*‖ + α_n K₁.
Hence,
‖b̲_n − p̲*‖ ≤ ‖a̲_n − p̲*‖ + α_n K₁. (18)
Now, by using condition (ii) and (18), we have
‖a̲_{n+1} − p̲*‖ = ‖α_n ϕ a̲_n + β_n S e̲_n + γ_n T b̲_n − p̲*‖ = ‖α_n(ϕ a̲_n − ϕ p̲*) + α_n(ϕ p̲* − p̲*) + β_n(S e̲_n − p̲*) + γ_n(T b̲_n − p̲*)‖ ≤ α_n‖ϕ a̲_n − ϕ p̲*‖ + α_n‖ϕ p̲* − p̲*‖ + β_n‖S e̲_n − p̲*‖ + γ_n‖T b̲_n − p̲*‖ ≤ α_n c‖a̲_n − p̲*‖ + α_n‖ϕ p̲* − p̲*‖ + β_n‖e̲_n − p̲*‖ + γ_n‖b̲_n − p̲*‖ ≤ α_n c‖a̲_n − p̲*‖ + α_n‖ϕ p̲* − p̲*‖ + β_n(‖a̲_n − p̲*‖ + α_n K₁) + γ_n(‖a̲_n − p̲*‖ + α_n K₁) = (α_n c + β_n + γ_n)‖a̲_n − p̲*‖ + α_n‖ϕ p̲* − p̲*‖ + (1 − α_n) α_n K₁ = (1 − α_n(1 − c))‖a̲_n − p̲*‖ + α_n(1 − c)[‖ϕ p̲* − p̲*‖/(1 − c) + (1 − α_n) K₁/(1 − c)] ≤ (1 − α_n(1 − c))‖a̲_n − p̲*‖ + 2 α_n(1 − c) K*,
where K* = sup_{n∈ℕ} max{‖ϕ p̲* − p̲*‖/(1 − c), (1 − α_n) K₁/(1 − c)}. If we put ā_n = ‖a̲_n − p̲*‖, b̄_n = 2 α_n(1 − c) K*, c̄_n = 0 and σ_n = α_n(1 − c), then by applying Lemma 4 together with the assumptions on the control parameters, we get that {‖a̲_n − p̲*‖} is bounded, which implies that {a̲_n} is bounded. Moreover, {d̲_n}, {e̲_n}, {g̲_n}, {f̲_n}, {h̲_n}, {b̲_n} and {c̲_n} are bounded. □
Lemma 8.
Let {a̲_n} be the sequence defined in Algorithm 5, let p̲* ∈ Ω and suppose that the conditions given in Theorem 1 hold. Then, we have the following inequality:
‖a̲_{n+1} − p̲*‖² ≤ (1 − 2α_n(1 − c)/(1 − α_n c))‖a̲_n − p̲*‖² + (2α_n(1 − c)/(1 − α_n c)){α_n K₃/(2(1 − c)) + (3K₂(1 − α_n)²/(2(1 − c)))(κ_n/α_n)‖a̲_n − a̲_{n−1}‖ + (1/(1 − c))⟨ϕ p̲* − p̲*, a̲_{n+1} − p̲*⟩} − (γ_n(1 − α_n)/(1 − α_n c)){(4 − ρ_n)ρ_n σ_n g²(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) + δ_n[η_n(1 − η_n)‖h̲_n − g̲_n‖² + σ_n(1 − σ_n)‖f̲_n − S e̲_n‖²] + δ_n(1 − δ_n)‖e̲_n − d̲_n‖²}.
Proof. 
Let p̲* ∈ Ω. Then, using Lemma 2 together with (9) and (14), we get
‖h̲_n − p̲*‖² = ‖a̲_n + κ_n(a̲_n − a̲_{n−1}) − p̲*‖² = ‖a̲_n − p̲*‖² + κ_n²‖a̲_n − a̲_{n−1}‖² + 2κ_n⟨a̲_n − p̲*, a̲_n − a̲_{n−1}⟩ ≤ ‖a̲_n − p̲*‖² + κ_n²‖a̲_n − a̲_{n−1}‖² + 2κ_n‖a̲_n − a̲_{n−1}‖‖a̲_n − p̲*‖ = ‖a̲_n − p̲*‖² + κ_n‖a̲_n − a̲_{n−1}‖(κ_n‖a̲_n − a̲_{n−1}‖ + 2‖a̲_n − p̲*‖) ≤ ‖a̲_n − p̲*‖² + 3K₂κ_n‖a̲_n − a̲_{n−1}‖ = ‖a̲_n − p̲*‖² + 3K₂α_n(κ_n/α_n)‖a̲_n − a̲_{n−1}‖,
where K₂ := sup_{n∈ℕ} max{‖a̲_n − p̲*‖, κ_n‖a̲_n − a̲_{n−1}‖} > 0. Now,
‖f̲_n − p̲*‖² = ‖η_n h̲_n + (1 − η_n) g̲_n − p̲*‖² = ‖η_n(h̲_n − p̲*) + (1 − η_n)(g̲_n − p̲*)‖² = η_n‖h̲_n − p̲*‖² + (1 − η_n)‖g̲_n − p̲*‖² − η_n(1 − η_n)‖h̲_n − g̲_n‖² ≤ η_n‖h̲_n − p̲*‖² + (1 − η_n)‖h̲_n − p̲*‖² − η_n(1 − η_n)‖h̲_n − g̲_n‖² = ‖h̲_n − p̲*‖² − η_n(1 − η_n)‖h̲_n − g̲_n‖².
Now,
‖d̲_n − p̲*‖² = ‖S((1 − σ_n) f̲_n + σ_n S e̲_n) − p̲*‖² ≤ ‖(1 − σ_n)(f̲_n − p̲*) + σ_n(S e̲_n − p̲*)‖² = (1 − σ_n)‖f̲_n − p̲*‖² + σ_n‖S e̲_n − p̲*‖² − σ_n(1 − σ_n)‖f̲_n − S e̲_n‖² ≤ (1 − σ_n)‖f̲_n − p̲*‖² + σ_n‖e̲_n − p̲*‖² − σ_n(1 − σ_n)‖f̲_n − S e̲_n‖² ≤ ‖f̲_n − p̲*‖² − (4 − ρ_n)ρ_n σ_n g²(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) − σ_n(1 − σ_n)‖f̲_n − S e̲_n‖² ≤ ‖h̲_n − p̲*‖² − η_n(1 − η_n)‖h̲_n − g̲_n‖² − (4 − ρ_n)ρ_n σ_n g²(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) − σ_n(1 − σ_n)‖f̲_n − S e̲_n‖².
Now, for brevity, set
Θ_n := (4 − ρ_n)ρ_n σ_n g²(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) + δ_n[η_n(1 − η_n)‖h̲_n − g̲_n‖² + σ_n(1 − σ_n)‖f̲_n − S e̲_n‖²] + δ_n(1 − δ_n)‖e̲_n − d̲_n‖².
Then,
‖c̲_n − p̲*‖² = ‖S((1 − δ_n) S e̲_n + δ_n S d̲_n) − p̲*‖² ≤ (1 − δ_n)‖S e̲_n − p̲*‖² + δ_n‖S d̲_n − p̲*‖² − δ_n(1 − δ_n)‖S e̲_n − S d̲_n‖² ≤ (1 − δ_n)‖e̲_n − p̲*‖² + δ_n‖d̲_n − p̲*‖² − δ_n(1 − δ_n)‖e̲_n − d̲_n‖² ≤ ‖h̲_n − p̲*‖² − Θ_n.
Now, using (iv), we get
‖b̲_n − p̲*‖² = ‖S c̲_n − p̲*‖² ≤ ‖c̲_n − p̲*‖².
Hence, by the Cauchy–Schwarz inequality, we get
‖a̲_{n+1} − p̲*‖² = ‖α_n ϕ a̲_n + β_n S e̲_n + γ_n T b̲_n − p̲*‖² = ‖[β_n(S e̲_n − p̲*) + γ_n(T b̲_n − p̲*)] + α_n(ϕ a̲_n − p̲*)‖² ≤ ‖β_n(S e̲_n − p̲*) + γ_n(T b̲_n − p̲*)‖² + 2α_n⟨ϕ a̲_n − p̲*, a̲_{n+1} − p̲*⟩ = β_n²‖S e̲_n − p̲*‖² + γ_n²‖T b̲_n − p̲*‖² + 2β_nγ_n⟨S e̲_n − p̲*, T b̲_n − p̲*⟩ + 2α_n⟨ϕ a̲_n − p̲*, a̲_{n+1} − p̲*⟩ ≤ β_n²‖S e̲_n − p̲*‖² + γ_n²‖T b̲_n − p̲*‖² + β_nγ_n(‖S e̲_n − p̲*‖² + ‖T b̲_n − p̲*‖²) + 2α_n⟨ϕ a̲_n − p̲*, a̲_{n+1} − p̲*⟩ = β_n(β_n + γ_n)‖S e̲_n − p̲*‖² + γ_n(β_n + γ_n)‖T b̲_n − p̲*‖² + 2α_n⟨ϕ a̲_n − p̲*, a̲_{n+1} − p̲*⟩ ≤ β_n(1 − α_n)‖e̲_n − p̲*‖² + γ_n(1 − α_n)‖b̲_n − p̲*‖² + 2α_n⟨ϕ a̲_n − ϕ p̲*, a̲_{n+1} − p̲*⟩ + 2α_n⟨ϕ p̲* − p̲*, a̲_{n+1} − p̲*⟩.
By (12), (15), (22) and (23), since ϕ is a contraction, and by the Cauchy–Schwarz inequality, we have
‖a̲_{n+1} − p̲*‖² ≤ β_n(1 − α_n)‖h̲_n − p̲*‖² + γ_n(1 − α_n)(‖h̲_n − p̲*‖² − Θ_n) + 2α_n c‖a̲_n − p̲*‖‖a̲_{n+1} − p̲*‖ + 2α_n⟨ϕ p̲* − p̲*, a̲_{n+1} − p̲*⟩ ≤ (1 − α_n)²‖h̲_n − p̲*‖² − γ_n(1 − α_n)Θ_n + α_n c(‖a̲_n − p̲*‖² + ‖a̲_{n+1} − p̲*‖²) + 2α_n⟨ϕ p̲* − p̲*, a̲_{n+1} − p̲*⟩ ≤ (1 − α_n)²(‖a̲_n − p̲*‖² + 3K₂α_n(κ_n/α_n)‖a̲_n − a̲_{n−1}‖) − γ_n(1 − α_n)Θ_n + α_n c(‖a̲_n − p̲*‖² + ‖a̲_{n+1} − p̲*‖²) + 2α_n⟨ϕ p̲* − p̲*, a̲_{n+1} − p̲*⟩ = ((1 − α_n)² + α_n c)‖a̲_n − p̲*‖² + α_n c‖a̲_{n+1} − p̲*‖² + 3K₂(1 − α_n)²α_n(κ_n/α_n)‖a̲_n − a̲_{n−1}‖ − γ_n(1 − α_n)Θ_n + 2α_n⟨ϕ p̲* − p̲*, a̲_{n+1} − p̲*⟩.
Hence, we get
‖a̲_{n+1} − p̲*‖² ≤ ((1 − 2α_n + α_n² + α_n c)/(1 − α_n c))‖a̲_n − p̲*‖² + (1/(1 − α_n c)){3K₂(1 − α_n)²α_n(κ_n/α_n)‖a̲_n − a̲_{n−1}‖ + 2α_n⟨ϕ p̲* − p̲*, a̲_{n+1} − p̲*⟩} − (γ_n(1 − α_n)/(1 − α_n c))Θ_n ≤ (1 − 2α_n(1 − c)/(1 − α_n c))‖a̲_n − p̲*‖² + (2α_n(1 − c)/(1 − α_n c)){α_n K₃/(2(1 − c)) + (3K₂(1 − α_n)²/(2(1 − c)))(κ_n/α_n)‖a̲_n − a̲_{n−1}‖ + (1/(1 − c))⟨ϕ p̲* − p̲*, a̲_{n+1} − p̲*⟩} − (γ_n(1 − α_n)/(1 − α_n c))Θ_n,
where K₃ = sup{‖a̲_n − p̲*‖² : n ∈ ℕ}. □
Lemma 9.
Let {a̲_n} be the sequence defined in Algorithm 5, let p̲* ∈ Ω and suppose that the conditions given in Theorem 1 hold. Then, we have the following inequality:
‖a̲_{n+1} − p̲*‖² ≤ (1 − α_n)‖a̲_n − p̲*‖² + α_n‖ϕ a̲_n − p̲*‖² + 3K₂(1 − α_n)α_n(κ_n/α_n)‖a̲_n − a̲_{n−1}‖ − β_n‖e̲_n − f̲_n‖² + 2β_nK₄‖A*(I − J_{λ₂}^{B₂}) A f̲_n‖ − β_nγ_n‖S e̲_n − T b̲_n‖². (26)
Proof. 
Let p̲* ∈ Ω. By (14), we have
‖f̲_n − ω_n A*(I − J_{λ₂}^{B₂}) A f̲_n − p̲*‖² ≤ ‖f̲_n − p̲*‖².
Applying Lemma 2 and the firm nonexpansiveness of J_{λ₁}^{B₁}, we have
‖e̲_n − p̲*‖² = ‖J_{λ₁}^{B₁}(f̲_n − ω_n A*(I − J_{λ₂}^{B₂}) A f̲_n) − p̲*‖² ≤ ⟨e̲_n − p̲*, f̲_n − ω_n A*(I − J_{λ₂}^{B₂}) A f̲_n − p̲*⟩ = ½(‖e̲_n − p̲*‖² + ‖f̲_n − ω_n A*(I − J_{λ₂}^{B₂}) A f̲_n − p̲*‖² − ‖e̲_n − f̲_n + ω_n A*(I − J_{λ₂}^{B₂}) A f̲_n‖²) ≤ ½(‖e̲_n − p̲*‖² + ‖f̲_n − p̲*‖² − ‖e̲_n − f̲_n‖² − ω_n²‖A*(I − J_{λ₂}^{B₂}) A f̲_n‖² + 2ω_n⟨f̲_n − e̲_n, A*(I − J_{λ₂}^{B₂}) A f̲_n⟩) ≤ ½(‖e̲_n − p̲*‖² + ‖f̲_n − p̲*‖² − ‖e̲_n − f̲_n‖² + 2ω_n‖f̲_n − e̲_n‖‖A*(I − J_{λ₂}^{B₂}) A f̲_n‖).
Hence, we have
‖e̲_n − p̲*‖² ≤ ‖f̲_n − p̲*‖² − ‖e̲_n − f̲_n‖² + 2ω_n‖f̲_n − e̲_n‖‖A*(I − J_{λ₂}^{B₂}) A f̲_n‖ ≤ ‖h̲_n − p̲*‖² − ‖e̲_n − f̲_n‖² + 2K₄‖A*(I − J_{λ₂}^{B₂}) A f̲_n‖,
where K₄ = sup_{n∈ℕ}{ω_n‖f̲_n − e̲_n‖}. Next, by Lemma 3 and (16), (17) and (24), we get
‖a̲_{n+1} − p̲*‖² = ‖α_n(ϕ a̲_n − p̲*) + β_n(S e̲_n − p̲*) + γ_n(T b̲_n − p̲*)‖² ≤ α_n‖ϕ a̲_n − p̲*‖² + β_n‖S e̲_n − p̲*‖² + γ_n‖T b̲_n − p̲*‖² − β_nγ_n‖S e̲_n − T b̲_n‖² ≤ α_n‖ϕ a̲_n − p̲*‖² + β_n‖e̲_n − p̲*‖² + γ_n‖b̲_n − p̲*‖² − β_nγ_n‖S e̲_n − T b̲_n‖² ≤ α_n‖ϕ a̲_n − p̲*‖² + β_n(‖h̲_n − p̲*‖² − ‖e̲_n − f̲_n‖² + 2K₄‖A*(I − J_{λ₂}^{B₂}) A f̲_n‖) + γ_n‖h̲_n − p̲*‖² − β_nγ_n‖S e̲_n − T b̲_n‖² = α_n‖ϕ a̲_n − p̲*‖² + (1 − α_n)‖h̲_n − p̲*‖² − β_n‖e̲_n − f̲_n‖² + 2β_nK₄‖A*(I − J_{λ₂}^{B₂}) A f̲_n‖ − β_nγ_n‖S e̲_n − T b̲_n‖² ≤ (1 − α_n)‖a̲_n − p̲*‖² + α_n‖ϕ a̲_n − p̲*‖² + 3K₂(1 − α_n)α_n(κ_n/α_n)‖a̲_n − a̲_{n−1}‖ − β_n‖e̲_n − f̲_n‖² + 2β_nK₄‖A*(I − J_{λ₂}^{B₂}) A f̲_n‖ − β_nγ_n‖S e̲_n − T b̲_n‖². □
Lemma 10.
Under the assumptions of Theorem 1, the sequence {a̲_n} defined by Algorithm 5 converges strongly to a̲* ∈ Ω, where a̲* = P_Ω ∘ ϕ(a̲*).
Proof. 
Let a̲* = P_Ω ∘ ϕ(a̲*). By Lemma 8, we have
‖a̲_{n+1} − a̲*‖² ≤ (1 − 2α_n(1 − c)/(1 − α_n c))‖a̲_n − a̲*‖² + (2α_n(1 − c)/(1 − α_n c)){α_n K₃/(2(1 − c)) + (3K₂(1 − α_n)²/(2(1 − c)))(κ_n/α_n)‖a̲_n − a̲_{n−1}‖ + (1/(1 − c))⟨ϕ a̲* − a̲*, a̲_{n+1} − a̲*⟩} − (γ_nδ_n(1 − α_n)η_n(1 − η_n)/(1 − α_n c))‖h̲_n − g̲_n‖². (25)
We now show that {‖a̲_n − a̲*‖} converges to zero as n → ∞. Set ā_n = ‖a̲_n − a̲*‖² and b̄_n = ⟨ϕ a̲* − a̲*, a̲_{n+1} − a̲*⟩ in Lemma 5. We now show that
lim sup_{k→∞} ⟨ϕ a̲* − a̲*, a̲_{n_k+1} − a̲*⟩ ≤ 0
for every subsequence {‖a̲_{n_k} − a̲*‖} of {‖a̲_n − a̲*‖} satisfying
lim inf_{k→∞} (‖a̲_{n_k+1} − a̲*‖ − ‖a̲_{n_k} − a̲*‖) ≥ 0. (27)
Suppose that {‖a̲_{n_k} − a̲*‖} is such a subsequence.
By Lemma 8, we have
(δ_{n_k}γ_{n_k}(1 − α_{n_k})η_{n_k}(1 − η_{n_k})/(1 − α_{n_k}c))‖h̲_{n_k} − g̲_{n_k}‖² ≤ (1 − 2α_{n_k}(1 − c)/(1 − α_{n_k}c))‖a̲_{n_k} − p̲*‖² − ‖a̲_{n_k+1} − p̲*‖² + (2α_{n_k}(1 − c)/(1 − α_{n_k}c)){α_{n_k}K₃/(2(1 − c)) + (3K₂(1 − α_{n_k})²/(2(1 − c)))(κ_{n_k}/α_{n_k})‖a̲_{n_k} − a̲_{n_k−1}‖ + (1/(1 − c))⟨ϕ p̲* − p̲*, a̲_{n_k+1} − p̲*⟩}.
By (27) and lim_{k→∞} α_{n_k} = 0, we obtain that
(δ_{n_k}γ_{n_k}(1 − α_{n_k})/(1 − α_{n_k}c))η_{n_k}(1 − η_{n_k})‖h̲_{n_k} − g̲_{n_k}‖² → 0.
This implies that
‖h̲_{n_k} − g̲_{n_k}‖ → 0 as k → ∞. (28)
Similarly, we have
(γ_{n_k}(1 − α_{n_k})σ_{n_k}(1 − σ_{n_k})/(1 − α_{n_k}c))‖f̲_{n_k} − S e̲_{n_k}‖² ≤ (1 − 2α_{n_k}(1 − c)/(1 − α_{n_k}c))‖a̲_{n_k} − p̲*‖² − ‖a̲_{n_k+1} − p̲*‖² + (2α_{n_k}(1 − c)/(1 − α_{n_k}c)){α_{n_k}K₃/(2(1 − c)) + (3K₂(1 − α_{n_k})²/(2(1 − c)))(κ_{n_k}/α_{n_k})‖a̲_{n_k} − a̲_{n_k−1}‖ + (1/(1 − c))⟨ϕ p̲* − p̲*, a̲_{n_k+1} − p̲*⟩}.
Following arguments similar to those given above, we have
‖f̲_{n_k} − S e̲_{n_k}‖ → 0 as k → ∞. (29)
Similarly, from Lemma 8, we obtain
‖e̲_{n_k} − d̲_{n_k}‖ → 0 as k → ∞, (30)
and
(4 − ρ_{n_k})ρ_{n_k}σ_{n_k} g²(f̲_{n_k})/(‖G(f̲_{n_k})‖² + ‖H(f̲_{n_k})‖²) → 0 as k → ∞. (31)
As G and H are Lipschitz continuous, by the assumption on ρ_{n_k}, we have
g²(f̲_{n_k}) → 0 as k → ∞,
and
lim_{k→∞} g(f̲_{n_k}) = lim_{k→∞} ½‖(I − J_{λ₂}^{B₂}) A f̲_{n_k}‖² = 0. (32)
Thus,
‖(I − J_{λ₂}^{B₂}) A f̲_{n_k}‖ → 0 as k → ∞. (33)
So,
‖A*(I − J_{λ₂}^{B₂}) A f̲_{n_k}‖ ≤ ‖A*‖‖(I − J_{λ₂}^{B₂}) A f̲_{n_k}‖ = ‖A‖‖(I − J_{λ₂}^{B₂}) A f̲_{n_k}‖ → 0 as k → ∞.
Moreover, Lemma 9 gives
β_{n_k}‖e̲_{n_k} − f̲_{n_k}‖² ≤ (1 − α_{n_k})‖a̲_{n_k} − p̲*‖² − ‖a̲_{n_k+1} − p̲*‖² + α_{n_k}‖ϕ a̲_{n_k} − p̲*‖² + 3K₂(1 − α_{n_k})α_{n_k}(κ_{n_k}/α_{n_k})‖a̲_{n_k} − a̲_{n_k−1}‖ + 2K₄β_{n_k}‖A*(I − J_{λ₂}^{B₂}) A f̲_{n_k}‖.
By (26) and (33) with Remark 2 and using lim_{k→∞} α_{n_k} = 0, we get
‖e̲_{n_k} − f̲_{n_k}‖ → 0 as k → ∞. (34)
Similarly, by Lemma 9, we have
‖S e̲_{n_k} − T b̲_{n_k}‖ → 0 as k → ∞. (35)
By Remark 2, we get
‖h̲_{n_k} − a̲_{n_k}‖ = κ_{n_k}‖a̲_{n_k} − a̲_{n_k−1}‖ → 0 as k → ∞. (36)
Applying (28) and (36), we obtain that
‖a̲_{n_k} − g̲_{n_k}‖ → 0 and ‖f̲_{n_k} − a̲_{n_k}‖ → 0 as k → ∞. (37)
Similarly, by applying (29), (34), (35) and (37), we have
‖a̲_{n_k} − e̲_{n_k}‖ → 0, ‖a̲_{n_k} − S e̲_{n_k}‖ → 0 and ‖a̲_{n_k} − T b̲_{n_k}‖ → 0 as k → ∞. (38)
Furthermore, by (37) and (38), we get
‖b̲_{n_k} − a̲_{n_k}‖ → 0, ‖e̲_{n_k} − S e̲_{n_k}‖ → 0 and ‖c̲_{n_k} − T b̲_{n_k}‖ → 0 as k → ∞. (39)
Using (38) with lim_{k→∞} α_{n_k} = 0, we get
‖a̲_{n_k+1} − a̲_{n_k}‖ ≤ α_{n_k}‖ϕ a̲_{n_k} − a̲_{n_k}‖ + β_{n_k}‖S e̲_{n_k} − a̲_{n_k}‖ + γ_{n_k}‖T b̲_{n_k} − a̲_{n_k}‖ → 0 as k → ∞. (40)
Now, we show that χ(a̲_n) ⊂ Ω. First, note that χ(a̲_n) ⊂ S(GEP(F, T)). Indeed, {a̲_n} is bounded, so χ(a̲_n) ≠ ∅. Let a̲ ∈ χ(a̲_n) be an arbitrary element; then there is a subsequence {a̲_{n_k}} of {a̲_n} such that a̲_{n_k} ⇀ a̲ as k → ∞. By (37), it follows that g̲_{n_k} ⇀ a̲ as k → ∞. By the definition of T_{r_{n_k}}^F h̲_{n_k}, we get
F(g̲_{n_k}, j̲) + ⟨T g̲_{n_k}, j̲ − g̲_{n_k}⟩ + (1/r_{n_k})⟨j̲ − g̲_{n_k}, g̲_{n_k} − h̲_{n_k}⟩ ≥ 0 for all j̲ ∈ C.
By the monotonicity of F, we have
(1/r_{n_k})⟨j̲ − g̲_{n_k}, g̲_{n_k} − h̲_{n_k}⟩ ≥ F(j̲, g̲_{n_k}) + ⟨T g̲_{n_k}, j̲ − g̲_{n_k}⟩ for all j̲ ∈ C.
By (28), lim inf_{k→∞} r_{n_k} > 0 and the condition (A4), we have
⟨T g̲_{n_k}, j̲ − g̲_{n_k}⟩ + F(j̲, g̲_{n_k}) ≤ 0.
Hence,
⟨T a̲, j̲ − a̲⟩ + F(j̲, a̲) ≤ 0. (41)
Let j̲_α = α j̲ + (1 − α) a̲ for j̲ ∈ C and α ∈ (0, 1]. This implies that j̲_α ∈ C. Now, by (41) and applying the conditions (A1)–(A4), we have
⟨T a̲, a̲ − j̲_α⟩ + F(j̲_α, a̲) ≤ 0.
Thus, we have
0 = ⟨T j̲_α, j̲_α − j̲_α⟩ + F(j̲_α, j̲_α) ≤ α⟨T j̲_α, j̲ − j̲_α⟩ + (1 − α)⟨T a̲, a̲ − j̲_α⟩ + α F(j̲_α, j̲) + (1 − α) F(j̲_α, a̲) ≤ α[⟨T j̲_α, j̲ − j̲_α⟩ + F(j̲_α, j̲)].
So, we obtain that
⟨T j̲_α, j̲ − j̲_α⟩ + F(j̲_α, j̲) ≥ 0, for all j̲ ∈ C.
Taking α → 0 and using condition (A3), we have
⟨T a̲, j̲ − a̲⟩ + F(a̲, j̲) ≥ 0, for all j̲ ∈ C.
This implies that a̲ ∈ S(GEP(F, T)). Further, we show that a̲ ∈ Γ. By the weak lower semicontinuity of g, it follows from (31) that
0 ≤ g(a̲) ≤ lim inf_{k→∞} g(f̲_{n_k}) = 0,
which implies that
g(a̲) = ½‖(I − J_{λ₂}^{B₂}) A a̲‖² = 0.
By Remark 1, we get
A a̲ ∈ B₂^{−1}(0), that is, 0 ∈ B₂(A a̲). (42)
The relation e̲_{n_k} = J_{λ₁}^{B₁}(f̲_{n_k} − ω_{n_k} A*(I − J_{λ₂}^{B₂}) A f̲_{n_k}) can be written as f̲_{n_k} − ω_{n_k} A*(I − J_{λ₂}^{B₂}) A f̲_{n_k} ∈ e̲_{n_k} + λ₁B₁(e̲_{n_k}), or
(f̲_{n_k} − e̲_{n_k}) − ω_{n_k} A*(I − J_{λ₂}^{B₂}) A f̲_{n_k} ∈ λ₁B₁(e̲_{n_k}). (43)
Taking the limit as k → ∞ in (43), applying (33), (34) and (38) and using the fact that the graph of a maximal monotone mapping is weakly–strongly closed, we get 0 ∈ B₁(a̲). Combining this with (42), we have a̲ ∈ Γ. Next, we show that a̲ ∈ F(S) ∩ F(T). By (38) and (39), we get e̲_{n_k} ⇀ a̲ and c̲_{n_k} ⇀ a̲ as k → ∞. Since S and T are nonexpansive and hence demiclosed (Lemma 1), (39) gives a̲ ∈ F(S) ∩ F(T). Hence, χ(a̲_n) ⊂ Ω. Moreover, by (38) and (39), it follows that χ(a̲_n) = χ(c̲_n). Since {a̲_{n_k}} is bounded, there exists a subsequence {a̲_{n_{k_i}}} of {a̲_{n_k}} such that a̲_{n_{k_i}} ⇀ a̲ and
lim_{i→∞} ⟨ϕ a̲* − a̲*, a̲_{n_{k_i}} − a̲*⟩ = lim sup_{k→∞} ⟨ϕ a̲* − a̲*, a̲_{n_k} − a̲*⟩ = lim sup_{k→∞} ⟨ϕ a̲* − a̲*, e̲_{n_k} − a̲*⟩.
As a̲* = P_Ω ∘ ϕ(a̲*), we have
lim sup_{k→∞} ⟨ϕ a̲* − a̲*, a̲_{n_k} − a̲*⟩ = lim_{i→∞} ⟨ϕ a̲* − a̲*, a̲_{n_{k_i}} − a̲*⟩ = ⟨ϕ a̲* − a̲*, a̲ − a̲*⟩ ≤ 0. (44)
Now, by (40) and (44), we get
lim sup_{k→∞} ⟨ϕ a̲* − a̲*, a̲_{n_k+1} − a̲*⟩ = lim sup_{k→∞} ⟨ϕ a̲* − a̲*, a̲_{n_k} − a̲*⟩ ≤ 0. (45)
Applying Lemma 5 to (25) and using (45) together with lim_{n→∞} (κ_n/α_n)‖a̲_n − a̲_{n−1}‖ = 0 and lim_{n→∞} α_n = 0, we conclude that lim_{n→∞} ‖a̲_n − a̲*‖² = 0, and hence lim_{n→∞} ‖a̲_n − a̲*‖ = 0. □

3. Applications

In the following sections, we use our proposed iterative scheme to approximate the solution of some well-known nonlinear problems.

3.1. Split Feasibility Problem

Suppose that A, H₁, H₂, C and Q are given as in the previous section. The SFP is defined as follows:
find a point a̲₀ ∈ C such that A a̲₀ ∈ Q. (46)
This problem was introduced by Censor and Elfving in 1994 [15] and is used to model problems arising in different fields such as image diagnosis and restoration, computed tomography and radiation therapy treatment planning. The set of solutions of the SFP (46) is denoted by Γ_SFP. Suppose that C is a nonempty closed and convex subset of a Hilbert space H and δ_C is its indicator function, which is defined as follows:
δ_C(a̲) = 0 if a̲ ∈ C, and δ_C(a̲) = ∞ otherwise.
Define the normal cone N_C g̲₀ at g̲₀ ∈ C as follows:
N_C g̲₀ = {c̲ ∈ H : ⟨c̲, f̲ − g̲₀⟩ ≤ 0 for all f̲ ∈ C}.
As δ_C is a proper, lower semicontinuous and convex function on H, the subdifferential ∂δ_C of δ_C is a maximal monotone operator. Note that the resolvent J_r^{∂δ_C} of ∂δ_C is given by
J_r^{∂δ_C}(a̲) = (I + r ∂δ_C)^{−1} a̲, a̲ ∈ H.
Furthermore, for each a̲ ∈ C, we have
∂δ_C(a̲) = {c̲ ∈ H : δ_C(a̲) + ⟨c̲, g̲₀ − a̲⟩ ≤ δ_C(g̲₀) for all g̲₀ ∈ H} = {c̲ ∈ H : ⟨c̲, g̲₀ − a̲⟩ ≤ 0 for all g̲₀ ∈ C} = N_C a̲.
For all r > 0, we have
g̲₀ = J_r^{∂δ_C}(a̲) ⇔ a̲ ∈ g̲₀ + r ∂δ_C g̲₀ ⇔ a̲ − g̲₀ ∈ r ∂δ_C g̲₀ ⇔ ⟨a̲ − g̲₀, c̲ − g̲₀⟩ ≤ 0 for all c̲ ∈ C ⇔ g̲₀ = P_C a̲.
As an application of Theorem 1, we obtain the approximation of the common solution of the SFP, the G E P ( F , T ) , and the common FPP involving nonexpansive mappings. We now present Algorithm 6 given below which serves this purpose.
Algorithm 6: Proposed algorithm for SFP, G E P ( F , T ) and common FPP.
Step 0. Let a̲₀, a̲₁ ∈ H₁ and κ be any non-negative real number. Set n = 1.
Step 1. Given the (n−1)th and nth iterates, choose κ_n such that 0 ≤ κ_n ≤ κ̂_n, with κ̂_n given as
κ̂_n = min{κ, θ_n/‖a̲_n − a̲_{n−1}‖} if a̲_n ≠ a̲_{n−1}, and κ̂_n = κ otherwise.
Step 2. Compute
h̲_n = a̲_n + κ_n(a̲_n − a̲_{n−1}).
Step 3. Find g̲_n ∈ C such that
F(g̲_n, p̲*) + ⟨T g̲_n, p̲* − g̲_n⟩ + (1/r_n)⟨p̲* − g̲_n, g̲_n − h̲_n⟩ ≥ 0.
Step 4. Compute
f̲_n = η_n h̲_n + (1 − η_n) g̲_n.
Step 5. Compute
e̲_n = P_C(I − ω_n A*(I − P_Q) A) f̲_n,
where
ω_n = ρ_n g(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) if ‖G(f̲_n)‖² + ‖H(f̲_n)‖² ≠ 0, and ω_n = 0 otherwise.
Step 6. Evaluate
d̲_n = S((1 − σ_n) f̲_n + σ_n S e̲_n).
Step 7. Compute
c̲_n = S((1 − δ_n) S e̲_n + δ_n S d̲_n).
Step 8. Set
b̲_n = S c̲_n.
Step 9. Find
a̲_{n+1} = α_n ϕ a̲_n + β_n S e̲_n + γ_n T b̲_n,
where
g(a̲) = ½‖(I − P_Q) A a̲‖², h(a̲) = ½‖(I − P_C) a̲‖², G(a̲) = A*(I − P_Q) A a̲, H(a̲) = (I − P_C) a̲.
Update: set n = n + 1 and return to Step 1.
We now present the following result.
Theorem 2.
Suppose that S and T are nonexpansive self-mappings on H₁ and ϕ : H₁ → H₁ is a contraction with contraction constant c. If Ω = F(S) ∩ F(T) ∩ Γ_SFP ∩ S(GEP(F, T)) ≠ ∅ and the conditions (A1)–(A4) and (i)–(iv) hold, then the sequence {a̲_n} defined by Algorithm 6 converges strongly to a̲* ∈ Ω, where a̲* = P_Ω ∘ ϕ(a̲*).
Proof. 
The proof follows from Theorem 1. □

3.2. Relaxed Split Feasibility Problem

The relaxed split feasibility problem (RSFP) is a special case of the SFP, which is defined as follows.
Let J : H₁ → ℝ and K : H₂ → ℝ be convex and lower semicontinuous functions with bounded subdifferentials on bounded domains. Take the sets C and Q as follows:
C = {g̲₀ ∈ H₁ : J(g̲₀) ≤ 0} and Q = {f̲₀ ∈ H₂ : K(f̲₀) ≤ 0}.
The solution set of the RSFP is denoted by Γ_RSFP. We now present an algorithm (Algorithm 7) to approximate the common solution of the RSFP, the GEP(F, T) and the common FPP.
Algorithm 7: Proposed algorithm for RSFP, G E P ( F , T ) and common FPP.
Step 0. Let a̲₀, a̲₁ ∈ H₁ and κ be any non-negative real number. Set n = 1.
Step 1. Given the (n−1)th and nth iterates, choose κ_n such that 0 ≤ κ_n ≤ κ̂_n, with κ̂_n given as
κ̂_n = min{κ, θ_n/‖a̲_n − a̲_{n−1}‖} if a̲_n ≠ a̲_{n−1}, and κ̂_n = κ otherwise.
Step 2. Compute
h̲_n = a̲_n + κ_n(a̲_n − a̲_{n−1}).
Step 3. Find g̲_n ∈ C such that
F(g̲_n, p̲*) + ⟨T g̲_n, p̲* − g̲_n⟩ + (1/r_n)⟨p̲* − g̲_n, g̲_n − h̲_n⟩ ≥ 0.
Step 4. Compute
f̲_n = η_n h̲_n + (1 − η_n) g̲_n.
Step 5. Compute
e̲_n = P_{C_n}(I − ω_n A*(I − P_{Q_n}) A) f̲_n,
where
ω_n = ρ_n g(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) if ‖G(f̲_n)‖² + ‖H(f̲_n)‖² ≠ 0, and ω_n = 0 otherwise,
and
C_n = {v ∈ H₁ : J(f̲_n) + ⟨a_n, v − f̲_n⟩ ≤ 0}, where a_n ∈ ∂J(f̲_n),
Q_n = {h̲ ∈ H₂ : K(A f̲_n) + ⟨b_n, h̲ − A f̲_n⟩ ≤ 0}, where b_n ∈ ∂K(A f̲_n).
Step 6. Evaluate
d̲_n = S((1 − σ_n) f̲_n + σ_n S e̲_n).
Step 7. Compute
c̲_n = S((1 − δ_n) S e̲_n + δ_n S d̲_n).
Step 8. Set
b̲_n = S c̲_n.
Step 9. Find
a̲_{n+1} = α_n ϕ a̲_n + β_n S e̲_n + γ_n T b̲_n,
where
g(a̲) = ½‖(I − P_{Q_n}) A a̲‖², h(a̲) = ½‖(I − P_{C_n}) a̲‖², G(a̲) = A*(I − P_{Q_n}) A a̲, H(a̲) = (I − P_{C_n}) a̲.
Update: set n = n + 1 and return to Step 1.
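Since C_n and Q_n in Step 5 are halfspaces built from subgradients, the projections P_{C_n} and P_{Q_n} are available in closed form, and no projection onto the original level sets is ever needed. A minimal sketch (ours, with hypothetical argument names):

```python
import numpy as np

# Sketch (illustrative): projecting v onto the relaxed set
# C_n = {u : J(f_n) + <a_n, u - f_n> <= 0}, where a_n is a subgradient
# of J at f_n; this is an ordinary halfspace projection.
def project_relaxed(f_n, J_val, subgrad, v):
    slack = J_val + subgrad @ (v - f_n)
    if slack <= 0.0:
        return v.copy()
    return v - (slack / (subgrad @ subgrad)) * subgrad
```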
Now, using Theorem 2, we have the following result which approximates the common solution of the RSFP, the G E P ( F , T ) and the common FPP involving nonexpansive mappings.
Theorem 3.
Suppose that S and T are nonexpansive self-mappings on H₁ and ϕ : H₁ → H₁ is a contraction mapping with contraction constant c. If Ω = F(S) ∩ F(T) ∩ Γ_RSFP ∩ S(GEP(F, T)) ≠ ∅ and the conditions (A1)–(A4) and (i)–(iv) hold, then the sequence {a̲_n} defined by Algorithm 7 converges strongly to a̲* ∈ Ω, where a̲* = P_Ω ∘ ϕ(a̲*).
Proof. 
The proof follows from Theorem 1. □

3.3. Split Common Null Point Problem

The split common null point problem (SCNPP) for multi-valued maximal monotone mappings was introduced by Byrne et al. [35]. They also proposed iterative algorithms to solve this problem. The SCNPP includes the convex feasibility problem (CFP) ([15]), the VIP ([22]) and many constrained optimization problems as special cases; for more details about its practicability, we refer to [16,48]).
For multivalued mappings S : H₁ → 2^{H₁} and T : H₂ → 2^{H₂}, the SCNPP is formulated as:
find a̲* ∈ H₁ such that 0 ∈ S(a̲*) and 0 ∈ T(A a̲*). (50)
We denote the solution set of the SCNPP (50) by Γ_SCNPP. It is well known that for any λ > 0, J_λ^T is single-valued and nonexpansive if and only if T is maximal monotone. Let T : H → 2^H be a maximal monotone mapping; then the resolvent operator J_λ^T = (I + λT)^{−1} : H → H is a single-valued map associated with T, where λ > 0. Moreover, the resolvent operator J_λ^T is firmly nonexpansive, and 0 ∈ T(a̲) if and only if a̲ ∈ F(J_λ^T). Moreover, Lemma 7.1 on page 392 of [49] shows that this fact is equivalent to the classical Kirszbraun–Valentine extension theorem. Now, we propose Algorithm 8 to approximate the common solution of the GEP(F, T), the variational inclusion problem and the SCNPP.
Algorithm 8: Proposed algorithm for variational inclusion problem, G E P ( F , T ) and SCNPP.
Step 0. Let a̲₀, a̲₁ ∈ H₁ and κ be any non-negative real number. Set n = 1.
Step 1. Given the (n−1)th and nth iterates, choose κ_n such that 0 ≤ κ_n ≤ κ̂_n, with κ̂_n given as
κ̂_n = min{κ, θ_n/‖a̲_n − a̲_{n−1}‖} if a̲_n ≠ a̲_{n−1}, and κ̂_n = κ otherwise.
Step 2. Compute
h̲_n = a̲_n + κ_n(a̲_n − a̲_{n−1}).
Step 3. Find g̲_n ∈ C such that
F(g̲_n, p̲*) + ⟨J_λ^T g̲_n, p̲* − g̲_n⟩ + (1/r_n)⟨p̲* − g̲_n, g̲_n − h̲_n⟩ ≥ 0.
Step 4. Compute
f̲_n = η_n h̲_n + (1 − η_n) g̲_n.
Step 5. Compute
e̲_n = J_{λ₁}^{B₁}(I − ω_n A*(I − J_{λ₂}^{B₂}) A) f̲_n,
where
ω_n = ρ_n g(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) if ‖G(f̲_n)‖² + ‖H(f̲_n)‖² ≠ 0, and ω_n = 0 otherwise.
Step 6. Evaluate
d̲_n = J_λ^S((1 − σ_n) f̲_n + σ_n J_λ^S e̲_n).
Step 7. Compute
c̲_n = J_λ^S((1 − δ_n) J_λ^S e̲_n + δ_n J_λ^S d̲_n).
Step 8. Set
b̲_n = J_λ^S c̲_n.
Step 9. Find
a̲_{n+1} = α_n ϕ a̲_n + β_n J_λ^S e̲_n + γ_n J_λ^T b̲_n,
where
g(a̲) = ½‖(I − J_{λ₂}^{B₂}) A a̲‖², h(a̲) = ½‖(I − J_{λ₁}^{B₁}) a̲‖², G(a̲) = A*(I − J_{λ₂}^{B₂}) A a̲, H(a̲) = (I − J_{λ₁}^{B₁}) a̲.
Update: set n = n + 1 and return to Step 1.
We now present the following result.
Theorem 4.
Suppose that S and T are maximal monotone multivalued mappings on H₁ and ϕ : H₁ → H₁ is a contraction mapping with contraction constant c. If Ω = F(S) ∩ F(T) ∩ Γ_SCNPP ∩ S(GEP(F, T)) ≠ ∅ and the conditions (A1)–(A4) and (i)–(iv) hold, then the sequence {a̲_n} defined by Algorithm 8 converges strongly to a̲* ∈ Ω, where a̲* = P_Ω ∘ ϕ(a̲*).
Proof. 
As the resolvent operators J_λ^S and J_λ^T are firmly nonexpansive and hence nonexpansive, the proof follows from Theorem 1. □

3.4. Split Minimization Problem

Let us recall the definition of a proximal operator.
Let H be a Hilbert space, λ > 0 and ϕ : H → ℝ ∪ {∞} be a convex, proper and lower semicontinuous function. The proximal operator of the mapping ϕ is defined as follows:
prox_{λ,ϕ}(a̲) = arg min_{q̲∈H} [ϕ(q̲) + (1/2λ)‖a̲ − q̲‖²], a̲ ∈ H.
It is known that
prox_{λ,ϕ}(a̲) = (I + λ ∂ϕ)^{−1}(a̲) = J_λ^{∂ϕ}(a̲),
where ∂ϕ denotes the subdifferential of ϕ, which is given as:
∂ϕ(a̲) = {q̲ ∈ H : ϕ(a̲) − ϕ(b̲) ≤ ⟨q̲, a̲ − b̲⟩ for all b̲ ∈ H}, for each a̲ ∈ H.
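As a concrete illustration (ours, not from the paper), for ϕ(x) = μ‖x‖₁ the proximal operator reduces to coordinatewise soft-thresholding:

```python
import numpy as np

# Sketch (illustrative): for phi(x) = mu * ||x||_1, a convex, proper,
# lower semicontinuous function, prox_{lambda, phi} soft-thresholds
# each coordinate at lambda * mu.
def prox_l1(a, lam, mu):
    t = lam * mu
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

a = np.array([1.5, -0.2, 0.7])
print(prox_l1(a, lam=1.0, mu=0.5))   # -> [ 1.   -0.    0.2]
```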
The split minimization problem (SMP) introduced by Moudafi and Thakur [48] has been successfully applied in Fourier regularization, multi-resolution and sparse regularization, alternating projection signal synthesis problems and hard-constrained inconsistent feasibility (see [50]).
Suppose that ϕ₁ : H₁ → ℝ ∪ {∞} and ϕ₂ : H₂ → ℝ ∪ {∞} are convex, proper and lower semicontinuous functions. The split minimization problem (SMP) is defined as follows: find a point
a̲* ∈ H₁ such that a̲* ∈ arg min_{a̲∈H₁} ϕ₁(a̲) and b̲* = A a̲* ∈ arg min_{b̲∈H₂} ϕ₂(b̲). (53)
The solution set of the SMP (53) is denoted by Γ_SMP.
Note that ∂ϕ is a maximal monotone operator and its resolvent J_λ^{∂ϕ} = prox_{λ,ϕ} is firmly nonexpansive. Set B₁ = ∂ϕ₁ and B₂ = ∂ϕ₂ in Theorem 1 and use Algorithm 9 given below to approximate the common solution of the SMP, the GEP(F, T) and the common FPP.
Algorithm 9: Proposed algorithm for SMP, the G E P ( F , T ) and common FPP.
Step 0. Suppose that a̲₀, a̲₁ ∈ H₁ and κ is any non-negative real number. Set n = 1.
Step 1. Given the (n−1)th and nth iterates, choose κ_n such that 0 ≤ κ_n ≤ κ̂_n, with κ̂_n given as
κ̂_n = min{κ, θ_n/‖a̲_n − a̲_{n−1}‖} if a̲_n ≠ a̲_{n−1}, and κ̂_n = κ otherwise.
Step 2. Compute
h̲_n = a̲_n + κ_n(a̲_n − a̲_{n−1}).
Step 3. Find g̲_n ∈ C such that
F(g̲_n, p̲*) + ⟨T g̲_n, p̲* − g̲_n⟩ + (1/r_n)⟨p̲* − g̲_n, g̲_n − h̲_n⟩ ≥ 0.
Step 4. Compute
f̲_n = η_n h̲_n + (1 − η_n) g̲_n.
Step 5. Compute
e̲_n = prox_{λ₁,ϕ₁}(I − ω_n A*(I − prox_{λ₂,ϕ₂}) A) f̲_n,
where
ω_n = ρ_n g(f̲_n)/(‖G(f̲_n)‖² + ‖H(f̲_n)‖²) if ‖G(f̲_n)‖² + ‖H(f̲_n)‖² ≠ 0, and ω_n = 0 otherwise.
Step 6. Evaluate
d̲_n = S((1 − σ_n) f̲_n + σ_n S e̲_n).
Step 7. Compute
c̲_n = S((1 − δ_n) S e̲_n + δ_n S d̲_n).
Step 8. Set
b̲_n = S c̲_n.
Step 9. Find
a̲_{n+1} = α_n ϕ a̲_n + β_n S e̲_n + γ_n T b̲_n,
where
g(a̲) = ½‖(I − prox_{λ₂,ϕ₂}) A a̲‖², h(a̲) = ½‖(I − prox_{λ₁,ϕ₁}) a̲‖², G(a̲) = A*(I − prox_{λ₂,ϕ₂}) A a̲, H(a̲) = (I − prox_{λ₁,ϕ₁}) a̲.
Update: set n = n + 1 and return to Step 1.
Finally, we present the following result.
Theorem 5.
Suppose that S and T are nonexpansive self-mappings on H₁, ϕ : H₁ → H₁ is a contraction with contraction constant c, and ϕ₁ : H₁ → ℝ ∪ {∞} and ϕ₂ : H₂ → ℝ ∪ {∞} are convex, proper and lower semicontinuous functions. If Ω = F(S) ∩ F(T) ∩ Γ_SMP ∩ S(GEP(F, T)) ≠ ∅ and the conditions (A1)–(A4) and (i)–(iv) hold, then the sequence {a̲_n} generated by Algorithm 9 converges strongly to a̲* ∈ Ω, where a̲* = P_Ω ∘ ϕ(a̲*).
Proof. 
The proof follows from Theorem 1. □

4. Numerical Experiment

In this section, a significant numerical aspect, namely the rate of convergence of the proposed algorithm, is studied. We used MATLAB version R2018a for all of the numerical calculations. The effectiveness of Algorithm 5 is shown via comparison with Algorithms 2–4, 10 and 11. We have implemented our results with different initial guesses and parameters to compare our method with the existing approaches.
Example 1.
Let H₁ = H₂ = ℝ³ and C = {a̲ ∈ ℝ³ : ⟨p̲, a̲⟩ ≤ q}. Take η_n = n/(n + 4), ρ_n = 3 − 1/2^{n−1}, σ_n = n²/(n² + 3), θ_n = 1/(4n + 1), r_n = n/(n + 3), λ₁ = λ = λ₂ = 0.5, κ = 0.8, γ_n = (n + 2)/(2n + 5) = β_n and α_n = 1/(2n + 7). Furthermore, take ϕ(a̲) = a̲/5, S(a̲) = a̲/2 and T(a̲) = a̲/3. Set ω = 0.0001 in Algorithms 2, 3 and 10, and also set η = 0.5 and B = T in Algorithm 3. Note that all the conditions of Theorem 1 are satisfied. The operators A, B₁ and B₂ are given as follows:
A = (6 3 1; 8 7 5; 3 6 2), B₁ = (7 0 0; 5 5 0; 0 0 2), B₂ = (8 0 0; 0 7 0; 0 0 3).
If we take any r > 0, then T_r^F(a̲) = ((q − ⟨p̲, a̲⟩)/‖p̲‖²) p̲ + a̲. In this computation, we take p̲ = (8, 3, 1) and q = 1 and choose initial guesses randomly as described in Figures 2–5, with the stopping criterion ‖a̲_{n+1} − a̲_n‖ < 10⁻³. We display the error graphs versus the number of iterations for each scenario. Table 1 and Figures 2–5 show the numerical results.
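Under the setting of Example 1, the following sketch (ours) wires these operators into the algorithm5_step skeleton from Section 2. It treats B₁ and B₂ as the linear monotone operators given by the matrices above (so their resolvents are linear solves), uses the closed form of T_r^F just stated, and sets δ_n = 0.5, which Example 1 leaves unspecified — all of these are our assumptions.

```python
import numpy as np

A  = np.array([[6., 3., 1.], [8., 7., 5.], [3., 6., 2.]])
B1 = np.array([[7., 0., 0.], [5., 5., 0.], [0., 0., 2.]])
B2 = np.diag([8., 7., 3.])
p, q = np.array([8., 3., 1.]), 1.0
lam, I3 = 0.5, np.eye(3)

J1  = lambda x: np.linalg.solve(I3 + lam * B1, x)   # J_{lambda B1} (assumed linear)
J2  = lambda y: np.linalg.solve(I3 + lam * B2, y)   # J_{lambda B2} (assumed linear)
TrF = lambda a: a + ((q - p @ a) / (p @ p)) * p     # closed form stated above
S   = lambda x: x / 2.0
T   = lambda x: x / 3.0
phi = lambda x: x / 5.0                              # contraction, c = 1/5

rng = np.random.default_rng(0)
a_prev, a = rng.random(3), rng.random(3)
for n in range(1, 1000):
    a_next = algorithm5_step(
        a_prev, a, A, J1, J2, TrF, S, T, phi,
        kappa=0.8, theta=1.0 / (4 * n + 1), alpha=1.0 / (2 * n + 7),
        eta=n / (n + 4.0), sigma=n ** 2 / (n ** 2 + 3.0),
        delta=0.5,                                   # assumption: not given in Example 1
        beta=(n + 2) / (2 * n + 5.0), gamma=(n + 2) / (2 * n + 5.0),
        rho=3.0 - 0.5 ** (n - 1))
    if np.linalg.norm(a_next - a) < 1e-3:            # stopping criterion
        break
    a_prev, a = a, a_next
```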
Some important comparison algorithms are given as follows:
Algorithm 10: Comparison algorithm proposed in [27].
Initialization: Let $\{\alpha_n\}$ be a sequence of real numbers in $(0, 1)$, $\lambda > 0$ and $\omega \in \left(0, \frac{1}{L}\right)$, where $L$ is the spectral radius of the operator $A^*A$.
Choose any $\underline{a}_1 \in H_1$;
For $n \ge 1$, calculate $\underline{a}_{n+1}$ as follows:
$$\underline{b}_n = J_\lambda^{T_1}\left( \underline{a}_n + \omega A^* (J_\lambda^{T_2} - I) A \underline{a}_n \right),$$
$$\underline{a}_{n+1} = \alpha_n \phi(\underline{a}_n) + (1 - \alpha_n) S \underline{b}_n.$$
Here, $\phi: H_1 \to H_1$ is a contraction, $S: H_1 \to H_1$ is a nonexpansive mapping, and $T_1: H_1 \to 2^{H_1}$ and $T_2: H_2 \to 2^{H_2}$ are multivalued maximal monotone operators; a sketch of one iteration of this method on the data of Example 1 is given below.
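For the data of Example 1, where $T_1 = B_1$ and $T_2 = B_2$ are linear monotone operators, the resolvents have the closed form $J_\lambda^{B}(x) = (I + \lambda B)^{-1}x$, so one iteration of Algorithm 10 can be sketched as follows (a hedged illustration; the function names are ours):

```python
import numpy as np

def resolvent(B, lam, x):
    # J_lambda^B = (I + lam * B)^{-1} x, valid here because B is a matrix
    # whose symmetric part is positive semidefinite (hence B is monotone).
    return np.linalg.solve(np.eye(B.shape[0]) + lam * B, x)

def algorithm10_step(a, A, B1, B2, lam, omega, alpha, phi, S):
    # b_n = J_lam^{B1}(a_n + omega * A^T (J_lam^{B2} - I) A a_n)
    b = resolvent(B1, lam, a + omega * (A.T @ (resolvent(B2, lam, A @ a) - A @ a)))
    # a_{n+1} = alpha_n * phi(a_n) + (1 - alpha_n) * S(b_n)
    return alpha * phi(a) + (1 - alpha) * S(b)
```

Furthermore, under the assumptions of Algorithm 5, Algorithm 11 is given as follows: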
Algorithm 11: Viscosity S-algorithm proposed in [45].
Step 0. Let $\underline{a}_0, \underline{a}_1 \in H_1$ and let $\kappa$ be any non-negative real number. Set $n = 1$.
Step 1. Given the $(n-1)$th and $n$th iterates, choose $\kappa_n$ such that $0 \le \kappa_n \le \hat{\kappa}_n$, with $\hat{\kappa}_n$ given as
$$\hat{\kappa}_n = \begin{cases} \min\left\{ \kappa, \dfrac{\theta_n}{\|\underline{a}_n - \underline{a}_{n-1}\|} \right\}, & \text{if } \underline{a}_n \ne \underline{a}_{n-1}, \\ \kappa, & \text{otherwise}. \end{cases}$$
Step 2. Compute
$$\underline{h}_n = \underline{a}_n + \kappa_n (\underline{a}_n - \underline{a}_{n-1}).$$
Step 3. Find $\underline{g}_n \in C$ such that, for all $\underline{p}^* \in C$,
$$F(\underline{g}_n, \underline{p}^*) + \frac{1}{r_n} \langle \underline{p}^* - \underline{g}_n, \underline{g}_n - \underline{h}_n \rangle \ge 0.$$
Step 4. Compute
$$\underline{d}_n = \eta_n \underline{h}_n + (1 - \eta_n) \underline{g}_n.$$
Step 5. Compute
$$\underline{c}_n = J_{\lambda_1}^{B_1}\left( I - \omega_n A^* (I - J_{\lambda_2}^{B_2}) A \right) \underline{d}_n,$$
where
$$\omega_n = \begin{cases} \dfrac{\rho_n \, g(\underline{d}_n)}{\|G(\underline{d}_n)\|^2 + \|H(\underline{d}_n)\|^2}, & \text{if } \|G(\underline{d}_n)\|^2 + \|H(\underline{d}_n)\|^2 \ne 0, \\ 0, & \text{otherwise}. \end{cases}$$
Step 6. Compute
$$\underline{b}_n = (1 - \sigma_n) \underline{d}_n + \sigma_n S \underline{c}_n.$$
Step 7. Compute
$$\underline{a}_{n+1} = \alpha_n \phi(\underline{a}_n) + \beta_n S \underline{c}_n + \gamma_n T \underline{b}_n.$$
Update: set $n = n + 1$ and return to Step 1.
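Algorithms 9 and 11 share the inertial extrapolation of Steps 1–2. A minimal Python helper, taking the largest admissible value $\kappa_n = \hat{\kappa}_n$ (our choice; the algorithms allow any $0 \le \kappa_n \le \hat{\kappa}_n$), might read:

```python
import numpy as np

def inertial_step(a_n, a_prev, kappa, theta_n):
    # Steps 1-2: kappa_hat_n = min{kappa, theta_n / ||a_n - a_{n-1}||} if
    # a_n != a_{n-1}, and kappa otherwise; then
    # h_n = a_n + kappa_n * (a_n - a_{n-1}).
    diff = np.linalg.norm(a_n - a_prev)
    kappa_hat = min(kappa, theta_n / diff) if diff > 0 else kappa
    return a_n + kappa_hat * (a_n - a_prev)
```

Choosing $\kappa_n$ at its upper bound maximizes the inertial effect while keeping $\kappa_n \|\underline{a}_n - \underline{a}_{n-1}\| \le \theta_n$, the kind of control on the inertial term that is typically required in the convergence analysis.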

5. Conclusions

The problem of approximating a common solution of the split variational inclusion problem, the $GEP(F,T)$ and the common FPP in the framework of Hilbert spaces was studied in this paper. Our contributions are the following: (1) We developed a new iterative scheme for approximating the common solution of certain well-known nonlinear problems. (2) We proved the strong convergence of the proposed algorithm. (3) We approximated the solution of the generalized equilibrium problem, so that Theorem 4.1 in [45] becomes a special case of Theorem 1. (4) With the help of Figures 2–5 and Table 1, we showed that, in terms of the rate of convergence, our scheme is more effective than the iterative methods given in Algorithm 10 [27] and Algorithm 11 [45] and the algorithms given in [24,35,36]. (5) As applications of our main result, approximations of the solutions of several nonlinear problems were obtained.

Author Contributions

M.W.A., M.A. and B.D.R. contributed to the study conception, design and computations. M.W.A. wrote the first draft of the manuscript and M.A. and B.D.R. commented, read and approved the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors are grateful to the reviewers for their useful remarks, which helped us improve the presentation of this manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef]
  2. Chen, P.; Huang, J.; Zhang, X. A primal–dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 2013, 29, 025011. [Google Scholar] [CrossRef]
  3. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Non-Expansive Mappings; Marcel Dekker: New York, NY, USA; Basel, Switzerland, 1984. [Google Scholar]
  4. Reich, S.; Shoiykhet, D. Nonlinear Semigroups, Fixed Points, and Geometry of Domains in Banach Spaces; Imperial College Press: London, UK, 2005. [Google Scholar]
  5. Deutsch, F. Best Approximation in Inner Product Spaces; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  6. Zhao, J.; Liang, Y.; Liu, Y.; Cho, Y.J. Split equilibrium, variational inequality and fixed point problems for multi-valued mappings in Hilbert spaces. Appl. Math. Comput. 2018, 17, 271–283. [Google Scholar]
  7. Chuang, C.S. Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 2013, 1–20. [Google Scholar] [CrossRef]
  8. Osilike, M.O.; Igbokwe, D.I. Weak and strong convergence theorems for fixed points of pseudocontractions and solutions of monotone type operator equations. Comput. Math. Appl. 2000, 40, 559–567. [Google Scholar] [CrossRef]
  9. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479. [Google Scholar] [CrossRef]
  10. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2012, 75, 742–750. [Google Scholar] [CrossRef]
  11. Barbagallo, A. Existence and regularity of solutions to nonlinear degenerate evolutionary variational inequalities with applications to dynamic network equilibrium problems. Appl. Math. Comput. 2009, 208, 1–13. [Google Scholar] [CrossRef]
  12. Dafermos, S.; Nagurney, A. A network formulation of market equilibrium problems and variational inequalities. Oper. Res. Lett. 1984, 3, 247–250. [Google Scholar] [CrossRef]
  13. He, R. Coincidence theorem and existence theorems of solutions for a system of Ky Fan type minimax inequalities in FC-spaces. Adv. Fixed Point Theory 2012, 2, 47–57. [Google Scholar]
  14. Qin, X.; Cho, S.Y.; Kang, S.M. Strong convergence of shrinking projection methods for quasi-ϕ-nonexpansive mappings and equilibrium problems. J. Comput. Appl. Math. 2010, 234, 750–760. [Google Scholar] [CrossRef]
  15. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  16. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441. [Google Scholar] [CrossRef]
  17. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103. [Google Scholar] [CrossRef]
  18. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353. [Google Scholar] [CrossRef]
  19. Combettes, P. The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 1996, 95, 155–270. [Google Scholar]
  20. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018. [Google Scholar] [CrossRef]
  21. Yang, Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20, 1261. [Google Scholar] [CrossRef]
  22. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283. [Google Scholar] [CrossRef]
  23. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  24. Tang, Y. Convergence analysis of a new iterative algorithm for solving split variational inclusion problems. J. Ind. Manag. Optim. 2020, 16, 945. [Google Scholar] [CrossRef]
  25. Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef]
  26. Iiduka, H. Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 2012, 236, 1733–1742. [Google Scholar] [CrossRef]
  27. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124. [Google Scholar] [CrossRef]
  28. Luo, C.; Ji, H.; Li, Y. Utility-based multi-service bandwidth allocation in the 4G heterogeneous wireless access networks. In Proceedings of the IEEE Wireless Communications and Networking Conference, Budapest, Hungary, 5–8 April 2009; pp. 1–5. [Google Scholar]
  29. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef]
  30. Izuchukwu, C.; Ogwo, G.; Mewomo, O. An inertial method for solving generalized split feasibility problems over the solution set of monotone variational inclusions. Optimization 2022, 71, 583–611. [Google Scholar] [CrossRef]
  31. Abbas, M.; Asghar, M.W.; De la Sen, M. Approximation of the solution of delay fractional differential equation using AA-iterative scheme. Mathematics 2022, 10, 273. [Google Scholar] [CrossRef]
  32. Asghar, M.W.; Abbas, M.; Eyni, D.C.; Omaba, M.E. Iterative approximation of fixed points of generalized αm-nonexpansive mappings in modular spaces. AIMS Math. 2023, 8, 26922–26944. [Google Scholar] [CrossRef]
  33. Beg, I.; Abbas, M.; Asghar, M.W. Convergence of AA-Iterative Algorithm for Generalized α-Nonexpansive Mappings with an Application. Mathematics 2022, 10, 4375. [Google Scholar] [CrossRef]
  34. Suanoom, C.; Gebrie, A.G.; Grace, T. The Convergence of AA-Iterative Algorithm for Generalized AK-α-Nonexpansive Mappings in Banach Spaces. Sci. Technol. Asia 2023, 10, 82–90. [Google Scholar]
  35. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. The split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775. [Google Scholar]
  36. Wangkeeree, R.; Rattanaseeha, K.; Wangkeeree, R. The general iterative methods for split variational inclusion problem and fixed point problem in Hilbert spaces. J. Comput. Anal. Appl. 2018, 25, 19. [Google Scholar]
  37. Alakoya, T.O.; Owolabi, A.; Mewomo, O.T. An inertial algorithm with a self-adaptive step size for a split equilibrium problem and a fixed point problem of an infinite family of strict pseudo-contractions. J. Nonlinear Var. Anal. 2021, 5, 803–829. [Google Scholar]
  38. Asghar, M.W.; Abbas, M. A self-adaptive viscosity algorithm for split fixed point problems and variational inequality problems in Banach spaces. J. Nonlinear Convex Anal. 2023, 24, 341–361. [Google Scholar]
  39. Shehu, Y.; Iyiola, O.S.; Ogbuisi, F.U. Iterative method with inertial terms for nonexpansive mappings: Applications to compressed sensing. Numer. Algorithms 2020, 83, 1321–1347. [Google Scholar] [CrossRef]
  40. Shehu, Y.; Iyiola, O.S.; Thong, D.V.; Cam Van, T.C. An inertial subgradient extragradient algorithm extended to pseudomonotone equilibrium problems. Math. Meth. Oper. Res. 2021, 93, 213–242. [Google Scholar] [CrossRef]
  41. Rouhani, B.D.; Farid, M.; Kazmi, K.R. Common solution to generalized mixed equilibrium problem and fixed point problem for a nonexpansive semigroup in Hilbert space. J. Korean Math. Soc. 2016, 53, 89–114. [Google Scholar] [CrossRef]
  42. Rouhani, B.D.; Farid, M.; Kazmi, K.R. Common solutions to some systems of variational inequalities and fixed point problems. Fixed Point Theory 2017, 18, 167–190. [Google Scholar] [CrossRef]
  43. Rouhani, B.D.; Mohebbi, V. Extragradient methods for quasi-equilibrium problems in Banach spaces. J. Aust. Math. Soc. 2022, 112, 90–114. [Google Scholar] [CrossRef]
  44. Rouhani, B.D.; Farid, M.; Kazmi, K.R.; Moradi, S.; Ali, R.; Khan, S.A. Solving the split equality hierarchical fixed point problem. Fixed Point Theory 2022, 23, 351–370. [Google Scholar] [CrossRef]
  45. Alakoya, T.O.; Mewomo, O.T. Viscosity S-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems. Comput. Appl. Math. 2022, 41, 1–31. [Google Scholar] [CrossRef]
  46. Agarwal, R.; O'Regan, D.; Sahu, D. Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2007, 8, 61. [Google Scholar]
  47. Aubin, J.P. Optima and Equilibria: An Introduction to Nonlinear Analysis; Springer Science+Business Media: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  48. Moudafi, A.; Thakur, B.S. Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 2014, 8, 2099–2110. [Google Scholar] [CrossRef]
  49. Reich, S. Extension problems for accretive sets in Banach spaces. J. Funct. Anal. 1977, 26, 378–395. [Google Scholar] [CrossRef]
  50. Abbas, M.; AlShahrani, M.; Ansari, Q.H.; Iyiola, O.S.; Shehu, Y. Iterative methods for solving proximal split minimization problems. Numer. Algorithms 2018, 78, 193–215. [Google Scholar] [CrossRef]
Figure 1. Flowchart diagram of the proposed algorithm.
Figure 2. Using $\underline{a}_0 = (6, 2.1, 3.5)$, $\underline{a}_1 = (4, 1.5, 3)$ as initial guesses.
Figure 3. Using $\underline{a}_0 = (6, 2.1, 3.5)$, $\underline{a}_1 = (4, 1.5, 3)$ as initial guesses.
Figure 4. Using $\underline{a}_0 = (3.6, 2.7, 4.5)$, $\underline{a}_1 = (5, 0.5, 1)$ as initial guesses.
Figure 5. Using $\underline{a}_0 = (25, 12, 34.6)$, $\underline{a}_1 = (15, 8.5, 21)$ as initial guesses.
Table 1. Number of iterations corresponding to each algorithm.

Algorithm       Case 1   Case 2   Case 3   Case 4
Algorithm 5        5        5        6        7
Algorithm 11       8        8        7        8
Algorithm 10       6        6        7        8
Algorithm 4       30       30       30       30
Algorithm 3       18       18       18       19
Algorithm 2       11       11       11       14
Note: We obtained the numerical findings shown in Table 1 and Figures 2–5 by choosing various initial approximations and plotting the errors against the number of iterations in the provided example. We also compared the other algorithms with our Algorithm 5. Based on our observations, we conclude that the choice of initial points and parameters does not significantly influence the effectiveness of our iterative method in terms of its rate of convergence. The table and figures demonstrate that the iteration count of our proposed method remains nearly constant across the different cases.