Article

Approximating Common Fixed Points of Nonexpansive Mappings on Hadamard Manifolds with Applications

by Konrawut Khammahawong 1, Parin Chaipunya 2 and Kamonrat Sombut 1,*
1 Applied Mathematics for Science and Engineering Research Unit (AMSERU), Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Pathum Thani 12110, Thailand
2 NCAO Research Center, Fixed Point Theory and Applications Research Group, Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha Uthit Rd., Bang Mod, Thung Khru, Bangkok 10140, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(21), 4080; https://doi.org/10.3390/math10214080
Submission received: 22 September 2022 / Revised: 12 October 2022 / Accepted: 26 October 2022 / Published: 2 November 2022
(This article belongs to the Special Issue New Trends in Nonlinear Analysis)

Abstract:
The point of this research is to present a new iterative procedure for approximating common fixed points of nonexpansive mappings in Hadamard manifolds. The convergence theorem of the proposed method is discussed under certain conditions. For the sake of clarity, we provide some numerical examples to support our results. Furthermore, we apply the suggested approach to solve inclusion problems and convex feasibility problems.

1. Introduction

Let $C$ be a nonempty, closed, and geodesic convex subset of a Hadamard manifold $M$. Assume that $g : C \to C$ is a nonexpansive mapping, which means that $d(g(x), g(y)) \le d(x, y)$ for every $x, y \in C$, where $d(\cdot,\cdot)$ stands for the Riemannian distance function. In this research, we consider the following fixed point problem:
find $x^* \in C$ such that $g(x^*) = x^*$. (1)
$F(g) := \{ x \in C : g(x) = x \}$ denotes the set of fixed points of the mapping $g$. Many problems, such as convex feasibility problems, convex optimization problems, monotone variational inequalities, and image restoration, can be cast as fixed point problems for nonexpansive mappings, which leads to a wide range of applications (see [1,2,3,4] and the references therein).
To estimate fixed points, there is a significant amount of literature on fixed point iteration approaches. Most of the research has focused on the case when g is a self-mapping defined on a convex subset C of a normed linear space, and here are some of the most famous variations:
1.
The Picard algorithm [5] is defined by
$x_{n+1} = g(x_n), \quad n \in \mathbb{N}$. (2)
2.
The Mann algorithm [6] is defined by
$x_{n+1} = a_n x_n + (1 - a_n) g(x_n), \quad n \in \mathbb{N}$, (3)
where $\{a_n\}$ is a sequence in $(0,1)$.
3.
The Ishikawa algorithm [7] is defined by
$x_{n+1} = a_n x_n + (1 - a_n) g\left[ b_n x_n + (1 - b_n) g(x_n) \right], \quad n \in \mathbb{N}$, (4)
where $\{a_n\}$ and $\{b_n\}$ are sequences in $(0,1)$. One can see that the Ishikawa algorithm is a two-step Mann algorithm (a small code sketch of these iterations follows this list).
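To make these update rules concrete, the following MATLAB sketch runs the Ishikawa iteration (4) for a simple nonexpansive map on $\mathbb{R}^2$; replacing the inner step by $y = x$ reproduces the Mann iteration (3). The map $g$, the constant parameters, and the tolerance are illustrative choices of ours and are not taken from the cited works.

```matlab
% Illustrative sketch of the Ishikawa iteration (4) on R^2.
% g swaps the coordinates; it is nonexpansive and F(g) = {x : x(1) = x(2)}.
g = @(x) [x(2); x(1)];

x = [4; -1];          % arbitrary starting point
a = 0.5;  b = 0.5;    % constant parameters in (0,1), chosen for illustration
for n = 1:200
    y = b*x + (1 - b)*g(x);        % inner (Mann-type) step; y = x gives the Mann iteration
    x = a*x + (1 - a)*g(y);        % outer step
    if norm(x - g(x)) < 1e-10, break; end
end
disp(x')   % approaches a fixed point with equal coordinates (here approximately [1.5 1.5])
```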
Assuming $g$ is a contraction mapping, the sequence $\{x_n\}$ generated by Picard's iteration (2) converges strongly to the unique fixed point. The Picard method may fail to converge when $g$ is merely nonexpansive. On the other hand, the sequence $\{x_n\}$ produced by Mann's iteration (3) converges weakly to a fixed point of the nonexpansive mapping $g$. Subsequently, numerous scholars have investigated the Mann and Ishikawa algorithms for approximating fixed points of nonexpansive mappings; see, for instance, [8,9,10,11,12,13,14,15].
Both Sahu et al. [9] and Thakur et al. [10] published their findings in 2016, proposing the same iterative approach for approximating fixed points of nonexpansive mappings in uniformly convex Banach spaces:
$z_n = (1 - c_n) x_n + c_n g(x_n), \quad y_n = (1 - b_n) z_n + b_n g(z_n), \quad x_{n+1} = (1 - a_n) g(z_n) + a_n g(y_n), \quad n \in \mathbb{N}$, (5)
where $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ are sequences in $(0,1)$. The authors of [9,10] showed that the iterative scheme (5) converges to fixed points of contractive mappings faster than various existing iterative schemes. Recently, the iterative method (5) has attracted considerable attention and has been extensively studied in many directions; see, e.g., [12,16,17,18]. In particular, Padcharoen and Sukprasert [12] improved and extended the iterative scheme (5) to find common fixed points of two nonexpansive mappings $g, h : C \to C$, as follows:
$z_n = (1 - c_n) x_n + c_n g(x_n), \quad y_n = (1 - b_n) z_n + b_n h(z_n), \quad x_{n+1} = (1 - a_n) g(z_n) + a_n h(y_n), \quad n \in \mathbb{N}$, (6)
where $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ are sequences in $(0,1)$. The authors of [12] proved that the sequence $\{x_n\}$ generated by (6) converges weakly to a common fixed point. Moreover, they applied the proposed iteration to find common solutions of accretive operators, convex constrained least squares problems, convex minimization problems, and signal processing problems in Banach spaces.
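As a point of comparison for the manifold scheme developed later, here is a minimal MATLAB sketch of the linear scheme (6) for two simple nonexpansive maps on $\mathbb{R}^2$; the maps and parameter values are illustrative and are not taken from [12].

```matlab
% Illustrative run of scheme (6) in R^2 with two nonexpansive maps
% whose only common fixed point is the origin.
g = @(x) [x(2); x(1)];                 % coordinate swap: F(g) = {x : x(1) = x(2)}
h = @(x) x/2;                          % halving map:     F(h) = {0}
a = 0.5;  b = 0.5;  c = 0.5;
x = [3; -2];
for n = 1:200
    z = (1 - c)*x + c*g(x);
    y = (1 - b)*z + b*h(z);
    x = (1 - a)*g(z) + a*h(y);
    if norm(x - g(x)) + norm(x - h(x)) < 1e-10, break; end
end
disp(x')    % approaches the common fixed point (0, 0)
```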
In recent years, many scholars have turned their attention to nonlinear problems in nonlinear settings such as Hadamard manifolds and CAT(0) spaces [19,20,21,22,23,24,25,26]. Extending nonlinear problems from linear spaces to Riemannian manifolds provides a range of advantages. For instance, non-convex problems can be transformed into convex problems by introducing an appropriate Riemannian metric, and constrained optimization problems can be treated in the same manner as unconstrained ones. In addition, Hadamard manifolds are suitable frameworks for developing effective methods for realistic problems [27,28,29,30].
In 2010, Li et al. [11] investigated the fixed point problem (1) in the context of Hadamard manifolds and extended the Mann and Halpern methods from Euclidean spaces to Hadamard manifolds. More recently, S-iterative techniques for approximating common fixed points of two nonexpansive mappings in the setting of Hadamard manifolds were presented by Sahu et al. [13].
Algorithm 1 (S-iterative algorithm rank 2). 
Let $g, h : C \to C$ be nonexpansive mappings and let $\{x_n\}$ be defined by
$y_n = \exp_{x_n} \left( (1 - b_n) \exp_{x_n}^{-1} g(x_n) \right), \quad x_{n+1} = \exp_{h(x_n)} \left( (1 - a_n) \exp_{h(x_n)}^{-1} g(y_n) \right), \quad n \in \mathbb{N}$, (7)
where $\{a_n\}$, $\{b_n\}$ are sequences in $(0,1)$ and $\exp$ is the exponential mapping.
Algorithm 2 (S-iterative algorithm rank 3). 
Let $g, h : C \to C$ be nonexpansive mappings and let $\{x_n\}$ be defined by
$z_n = \exp_{x_n} \left( (1 - c_n) \exp_{x_n}^{-1} g(x_n) \right), \quad y_n = \exp_{h(x_n)} \left( (1 - b_n) \exp_{h(x_n)}^{-1} g(z_n) \right), \quad x_{n+1} = \exp_{h(x_n)} \left( (1 - a_n) \exp_{h(x_n)}^{-1} g(y_n) \right), \quad n \in \mathbb{N}$, (8)
where $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ are sequences in $(0,1)$.
It was shown in [13] that any sequence produced by these two algorithms converges to a common fixed point of the nonexpansive maps. In addition, the authors provided numerical examples to support their assertions.
The goal of this paper is to provide a formal introduction to the iterative process (6) in terms of exponential mappings on Hadamard manifolds, building on the foundation laid in [11,12]. Under certain conditions, we establish the convergence theorem of our proposed method for common fixed points of two nonexpansive mappings. We provide some numerical examples to indicate how successful the proposed method is, and we compare it with some of the other methods that are already in use. In addition to this, we illustrate how the proposed method can be utilized to solve inclusion and convex feasibility problems in Hadamard manifolds.
The remainder of this paper is organized as follows: In Section 2, we present basic concepts and fundamental results in Riemannian geometry. In Section 3, we propose an iterative algorithm for finding common fixed points of nonexpansive mappings in Hadamard manifolds. We establish the convergence results of the proposed algorithm. In Section 4, we present some numerical experiments to demonstrate applications of the results in the present paper. Section 5 consists of applications to inclusion problems and convex feasibility problems in Hadamard manifolds. Finally, Section 6 provides a concise overview of the paper.

2. Preliminaries

In this section, we review the terminology, concepts, properties, and results from Riemannian geometry that will be used in the rest of the paper. These can be found in a variety of introductory textbooks on Riemannian geometry, for example [31,32,33].
Let $M$ be a finite-dimensional connected manifold. The tangent bundle of $M$ is $TM = \bigcup_{p \in M} T_p M$, where $T_p M$ is the tangent space of $M$ at $p$. The manifold $M$ becomes a Riemannian manifold once it is endowed with a Riemannian metric $\langle \cdot, \cdot \rangle_p$, whose corresponding norm is denoted by $\| \cdot \|_p$; when no confusion arises, the subscript $p$ is omitted. The length of a piecewise smooth curve $\varphi : [a, b] \to M$ is $L(\varphi) = \int_a^b \| \varphi'(t) \| \, dt$, where $\varphi'(t)$ is the tangent vector at $\varphi(t)$ in the tangent space $T_{\varphi(t)} M$. The Riemannian distance $d(p, q)$ is the infimum of the lengths of all such curves joining two points $p$ and $q$.
Let $\nabla$ be the Levi–Civita connection associated with the Riemannian manifold $M$. A smooth vector field $V$ along a smooth curve $\varphi$ is said to be parallel if $\nabla_{\varphi'} V = 0$. If $\varphi'$ itself is parallel along $\varphi$, we call $\varphi$ a geodesic, and in this case $\| \varphi' \|$ is constant. The geodesic $\varphi$ is said to be normalized when $\| \varphi' \| = 1$. If the length of a geodesic joining two points $p$ and $q$ in $M$ equals $d(p, q)$, then we refer to that geodesic as a minimizing geodesic.
If the geodesics of a Riemannian manifold are defined for every value of $t \in \mathbb{R}$, then the manifold is said to be complete. The Hopf–Rinow theorem states that if $M$ is complete, then any two points in $M$ can be joined by a minimizing geodesic. Moreover, since $(M, d)$ is then a complete metric space, every closed bounded subset is compact.
Let $M$ be a complete Riemannian manifold and $p \in M$. The exponential map $\exp_p : T_p M \to M$ is given by $\exp_p u = \varphi_u(1, p)$, where $\varphi(\cdot) = \varphi_u(\cdot, p)$ is the geodesic starting at the point $p$ with velocity $u$ (i.e., $\varphi_u(0, p) = p$ and $\varphi_u'(0, p) = u$). Then, for each real number $t$, we have $\exp_p (t u) = \varphi_u(t, p)$ and $\exp_p 0 = \varphi_u(0, p) = p$. The exponential map is differentiable on $T_p M$ for each $p \in M$, and on the manifolds considered below it has an inverse $\exp_p^{-1} : M \to T_p M$; moreover, $d(p, q) = \| \exp_p^{-1} q \|$ for all $p, q \in M$.
A complete, simply connected Riemannian manifold of non-positive sectional curvature is called a Hadamard manifold. The remainder of the article proceeds on the assumption that $M$ is a finite-dimensional Hadamard manifold. The exponential mapping $\exp_p : T_p M \to M$ is a diffeomorphism for every $p \in M$, and any two points $p, q \in M$ are joined by a unique minimizing normalized geodesic ([31] Theorem 4.1).
A geodesic triangle ( p 1 , p 2 , p 3 ) of a Riemannian manifold M is a set consisting of three points p 1 , p 2 and p 3 , and three minimizing geodesics joining these points.
Proposition 1 
([31]). Let $(p_1, p_2, p_3)$ be a geodesic triangle. Then
$d^2(p_1, p_2) + d^2(p_2, p_3) - 2 \langle \exp_{p_2}^{-1} p_1, \exp_{p_2}^{-1} p_3 \rangle \le d^2(p_3, p_1)$,
and
$d^2(p_1, p_2) \le \langle \exp_{p_1}^{-1} p_3, \exp_{p_1}^{-1} p_2 \rangle + \langle \exp_{p_2}^{-1} p_3, \exp_{p_2}^{-1} p_1 \rangle$.
Moreover, if $\theta$ is the angle at $p_1$, then we have
$\langle \exp_{p_1}^{-1} p_2, \exp_{p_1}^{-1} p_3 \rangle = d(p_2, p_1) \, d(p_1, p_3) \cos \theta$.
Readers can refer to [34] for further information on the relation between geodesic triangles on Riemannian manifolds and triangles on R 2 .
Lemma 1 
([34]). Let $(p_1, p_2, p_3)$ be a geodesic triangle in $M$. Then, there exists a comparison triangle $(\bar{p}_1, \bar{p}_2, \bar{p}_3) \subset \mathbb{R}^2$ for $(p_1, p_2, p_3)$ such that $d(p_i, p_{i+1}) = \| \bar{p}_i - \bar{p}_{i+1} \|$, with the indices taken modulo 3; it is unique up to an isometry of $\mathbb{R}^2$.
The triangle $(\bar{p}_1, \bar{p}_2, \bar{p}_3)$ in Lemma 1 is said to be a comparison triangle for $(p_1, p_2, p_3)$. The points $\bar{p}_1, \bar{p}_2, \bar{p}_3$ are called comparison points of $p_1, p_2, p_3$, respectively.
Lemma 2. 
Let $(p_1, p_2, p_3)$ be a geodesic triangle in $M$ and $(\bar{p}_1, \bar{p}_2, \bar{p}_3)$ be its comparison triangle.
(i)
Let $\theta_1, \theta_2, \theta_3$ (respectively, $\bar{\theta}_1, \bar{\theta}_2, \bar{\theta}_3$) be the angles of $(p_1, p_2, p_3)$ (respectively, $(\bar{p}_1, \bar{p}_2, \bar{p}_3)$) at the vertices $p_1, p_2, p_3$ (respectively, $\bar{p}_1, \bar{p}_2, \bar{p}_3$). Then
$\theta_1 \le \bar{\theta}_1$, $\theta_2 \le \bar{\theta}_2$ and $\theta_3 \le \bar{\theta}_3$.
(ii)
Let $q$ be a point on the geodesic joining $p_1$ to $p_2$ and $\bar{q}$ its comparison point in the segment $[\bar{p}_1, \bar{p}_2]$. If $d(p_1, q) = \| \bar{p}_1 - \bar{q} \|$ and $d(p_2, q) = \| \bar{p}_2 - \bar{q} \|$, then $d(p_3, q) \le \| \bar{p}_3 - \bar{q} \|$.
Several convex analysis concepts and results have been extended in the setting of manifolds. We present some of these that will be used throughout the rest of the paper.
Let $M$ be a Hadamard manifold. A set $C \subseteq M$ is called geodesic convex if for any two points $p$ and $q$ in $C$, the geodesic joining $p$ to $q$ is contained in $C$; that is, if $\varphi : [a, b] \to M$ is a geodesic such that $p = \varphi(a)$ and $q = \varphi(b)$, then $\varphi(t a + (1 - t) b) \in C$ for all $t \in [0, 1]$. A real-valued function $f : M \to \mathbb{R}$ is called geodesic convex if for any geodesic $\varphi : [a, b] \to M$ the composition $f \circ \varphi : [a, b] \to \mathbb{R}$ is convex; that is,
$(f \circ \varphi)(t a + (1 - t) b) \le t (f \circ \varphi)(a) + (1 - t)(f \circ \varphi)(b)$ for all $t \in [0, 1]$.
Proposition 2 
([31]). Let $d : M \times M \to \mathbb{R}$ be the distance function. Then $d(\cdot, \cdot)$ is a geodesic convex function with respect to the product Riemannian metric; that is, the following inequality holds for any pair of geodesics $\varphi_1 : [0, 1] \to M$ and $\varphi_2 : [0, 1] \to M$:
$d(\varphi_1(t), \varphi_2(t)) \le (1 - t) \, d(\varphi_1(0), \varphi_2(0)) + t \, d(\varphi_1(1), \varphi_2(1))$, for all $t \in [0, 1]$.
In particular, for each $q \in M$, the function $d(\cdot, q) : M \to \mathbb{R}$ is geodesic convex.
Let us conclude this section with the following results, which are essential to proving our convergence theorem.
Definition 1 
([20]). Assume that $\{p_n\}$ is a sequence in $M$ and that $C$ is a nonempty subset of $M$. If $d(p_{n+1}, q) \le d(p_n, q)$ for all $q \in C$ and $n \in \mathbb{N}$, then $\{p_n\}$ is said to be Fejér monotone with respect to $C$.
Lemma 3 
([20]). Let $C$ be a nonempty subset of $M$ and let $\{p_n\} \subset M$ be Fejér monotone with respect to $C$. Then, the following hold:
(i)
For every q C , { d ( p n , q ) } converges;
(ii)
{ p n } is bounded;
(iii)
If every cluster point of $\{p_n\}$ belongs to $C$, then $\{p_n\}$ converges to a point in $C$.

3. Main Results

Unless otherwise stated, throughout the rest of this article $C$ denotes a nonempty, closed, and geodesic convex subset of a Hadamard manifold $M$. Given two mappings $g$ and $h$ from $C$ to $C$, we assume that they have at least one common fixed point. The set of common fixed points of $g$ and $h$ is denoted by $\Gamma(g, h) := F(g) \cap F(h)$. An iterative method for finding common fixed points of two nonexpansive mappings $g$ and $h$ is described below.
Algorithm 3. 
Let $g, h : C \to C$ be mappings and let $x_0 \in C$ be an initial point. Given $x_n \in C$, calculate $x_{n+1}$ by
$z_n = \exp_{x_n} \left( c_n \exp_{x_n}^{-1} g(x_n) \right)$, (11)
$y_n = \exp_{z_n} \left( b_n \exp_{z_n}^{-1} h(z_n) \right)$, (12)
$x_{n+1} = \exp_{g(z_n)} \left( a_n \exp_{g(z_n)}^{-1} h(y_n) \right), \quad n \in \mathbb{N}$, (13)
where $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ are real sequences in $(0, 1)$.
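Before turning to the convergence analysis, we note that Algorithm 3 is easy to implement once the exponential map and its inverse are available. The following MATLAB sketch expresses one run of the scheme against abstract manifold operations; the handles expmap(x,v) for $\exp_x(v)$, logmap(x,y) for $\exp_x^{-1} y$ and dist(x,y) for $d(x,y)$ are placeholders to be supplied for the manifold at hand, and the function name and stopping rule are our own choices rather than part of the paper.

```matlab
% Minimal sketch of Algorithm 3 (save as algorithm3.m). The manifold enters only
% through the user-supplied handles:
%   expmap(x, v) = exp_x(v),  logmap(x, y) = exp_x^{-1}(y),  dist(x, y) = d(x, y).
function x = algorithm3(g, h, expmap, logmap, dist, x0, a, b, c, maxit, tol)
    x = x0;
    for n = 1:maxit
        z  = expmap(x,  c(n) * logmap(x,  g(x)));    % step (11)
        y  = expmap(z,  b(n) * logmap(z,  h(z)));    % step (12)
        gz = g(z);
        x  = expmap(gz, a(n) * logmap(gz, h(y)));    % step (13)
        if dist(x, g(x)) + dist(x, h(x)) < tol       % stop near a common fixed point
            return;
        end
    end
end
```

On $M = \mathbb{R}^m$, with expmap(x,v) = x + v and logmap(x,y) = y − x, this reduces to the linear scheme (6).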
Next, we show that, under suitable conditions on the parameters, the sequence generated by Algorithm 3 converges to a common fixed point of $g$ and $h$.
Theorem 1. 
Let $C$ be a nonempty, closed and geodesic convex subset of a Hadamard manifold $M$, and let $g, h : C \to C$ be nonexpansive mappings such that $\Gamma(g, h) \neq \emptyset$. Suppose that $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ are real sequences satisfying $0 < k_1 \le a_n \le \hat{k}_1 < 1$, $0 < k_2 \le b_n \le \hat{k}_2 < 1$ and $0 < k_3 \le c_n \le \hat{k}_3 < 1$ for all $n \in \mathbb{N}$. Let $x_0 \in C$ and let $\{x_n\}$ be defined by Algorithm 3. Then $\{x_n\}$ converges to a common fixed point of $g$ and $h$.
Proof. 
Because $(M, d)$ is a complete metric space and $\Gamma(g, h) \subset M$, by Lemma 3 it is sufficient to show that $\{x_n\}$ is Fejér monotone with respect to $\Gamma(g, h)$ and that every cluster point of $\{x_n\}$ belongs to $\Gamma(g, h)$.
Fix $n \in \mathbb{N}$ and let $\eta \in \Gamma(g, h)$. Let $\varphi_1 : [0, 1] \to M$ be the geodesic joining $g(z_n)$ to $h(y_n)$, $\varphi_2 : [0, 1] \to M$ the geodesic joining $z_n$ to $h(z_n)$, and $\varphi_3 : [0, 1] \to M$ the geodesic joining $x_n$ to $g(x_n)$. Hence, (11), (12) and (13) can be written as $z_n = \varphi_3(c_n)$, $y_n = \varphi_2(b_n)$ and $x_{n+1} = \varphi_1(a_n)$, respectively. By the geodesic convexity of the Riemannian distance (Proposition 2) and the nonexpansiveness of $g$ and $h$, we obtain
$d(z_n, \eta) = d(\varphi_3(c_n), \eta) \le (1 - c_n) d(x_n, \eta) + c_n d(g(x_n), \eta) \le (1 - c_n) d(x_n, \eta) + c_n d(x_n, \eta) = d(x_n, \eta)$, (14)
$d(y_n, \eta) = d(\varphi_2(b_n), \eta) \le (1 - b_n) d(z_n, \eta) + b_n d(h(z_n), \eta) \le (1 - b_n) d(z_n, \eta) + b_n d(z_n, \eta) = d(z_n, \eta) \le d(x_n, \eta)$, (15)
and
$d(x_{n+1}, \eta) = d(\varphi_1(a_n), \eta) \le (1 - a_n) d(g(z_n), \eta) + a_n d(h(y_n), \eta) \le (1 - a_n) d(z_n, \eta) + a_n d(y_n, \eta) \le (1 - a_n) d(x_n, \eta) + a_n d(x_n, \eta) = d(x_n, \eta)$.
As a result, $\{x_n\}$ is Fejér monotone with respect to $\Gamma(g, h)$. By Lemma 3 (ii), $\{x_n\}$ is bounded; therefore, there is a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ that converges to a cluster point $p$. Next, we prove that
$\lim_{n \to +\infty} d(x_n, h(x_n)) = 0$.
Fix $n \in \mathbb{N}$ and $\eta \in \Gamma(g, h)$. Let $(g(z_n), h(y_n), \eta) \subseteq M$ be the geodesic triangle with vertices $g(z_n)$, $h(y_n)$ and $\eta$, and let $(\overline{g(z_n)}, \overline{h(y_n)}, \bar{\eta}) \subset \mathbb{R}^2$ be the corresponding comparison triangle. Then, $d(g(z_n), \eta) = \| \overline{g(z_n)} - \bar{\eta} \|$, $d(h(y_n), \eta) = \| \overline{h(y_n)} - \bar{\eta} \|$ and $d(g(z_n), h(y_n)) = \| \overline{g(z_n)} - \overline{h(y_n)} \|$. Let $\bar{x}_{n+1} = (1 - a_n) \overline{g(z_n)} + a_n \overline{h(y_n)}$ be the comparison point of $x_{n+1}$. Using (ii) of Lemma 2 together with (14) and (15),
$d^2(x_{n+1}, \eta) \le \| \bar{x}_{n+1} - \bar{\eta} \|^2 = \| (1 - a_n) \overline{g(z_n)} + a_n \overline{h(y_n)} - \bar{\eta} \|^2 = (1 - a_n) \| \overline{g(z_n)} - \bar{\eta} \|^2 + a_n \| \overline{h(y_n)} - \bar{\eta} \|^2 - a_n (1 - a_n) \| \overline{g(z_n)} - \overline{h(y_n)} \|^2 = (1 - a_n) d^2(g(z_n), \eta) + a_n d^2(h(y_n), \eta) - a_n (1 - a_n) d^2(g(z_n), h(y_n)) \le (1 - a_n) d^2(z_n, \eta) + a_n d^2(y_n, \eta) - a_n (1 - a_n) d^2(g(z_n), h(y_n)) \le d^2(x_n, \eta) - a_n (1 - a_n) d^2(g(z_n), h(y_n)), \quad \forall n \in \mathbb{N}$, (17)
which implies that
$a_n (1 - a_n) d^2(g(z_n), h(y_n)) \le d^2(x_n, \eta) - d^2(x_{n+1}, \eta), \quad \forall n \in \mathbb{N}$. (18)
Since $0 < k_1 \le a_n \le \hat{k}_1 < 1$, we have $k_1 (1 - \hat{k}_1) \le a_n (1 - a_n)$ for all $n \in \mathbb{N}$. Summing (18) from $j = 0$ to $j = n$, we obtain
$k_1 (1 - \hat{k}_1) \sum_{j=0}^{n} d^2(g(z_j), h(y_j)) \le d^2(x_0, \eta) - d^2(x_{n+1}, \eta), \quad \forall n \in \mathbb{N}$.
Letting $n \to +\infty$, we have
$k_1 (1 - \hat{k}_1) \sum_{j=0}^{+\infty} d^2(g(z_j), h(y_j)) \le d^2(x_0, \eta) < +\infty$.
Hence,
$\lim_{n \to +\infty} d(g(z_n), h(y_n)) = 0$. (19)
Now, let $(z_n, h(z_n), \eta) \subseteq M$ be the geodesic triangle with vertices $z_n$, $h(z_n)$ and $\eta$, and let $(\bar{z}_n, \overline{h(z_n)}, \bar{\eta}) \subset \mathbb{R}^2$ be the corresponding comparison triangle. Then, we obtain
$d(z_n, \eta) = \| \bar{z}_n - \bar{\eta} \|$, $d(h(z_n), \eta) = \| \overline{h(z_n)} - \bar{\eta} \|$ and $d(z_n, h(z_n)) = \| \bar{z}_n - \overline{h(z_n)} \|$.
Let $\bar{y}_n = (1 - b_n) \bar{z}_n + b_n \overline{h(z_n)}$ be the comparison point of $y_n$. Using (ii) of Lemma 2 and (14), we get
$d^2(y_n, \eta) \le \| \bar{y}_n - \bar{\eta} \|^2 = (1 - b_n) \| \bar{z}_n - \bar{\eta} \|^2 + b_n \| \overline{h(z_n)} - \bar{\eta} \|^2 - b_n (1 - b_n) \| \bar{z}_n - \overline{h(z_n)} \|^2 = (1 - b_n) d^2(z_n, \eta) + b_n d^2(h(z_n), \eta) - b_n (1 - b_n) d^2(z_n, h(z_n)) \le d^2(z_n, \eta) - b_n (1 - b_n) d^2(z_n, h(z_n)) \le d^2(x_n, \eta) - b_n (1 - b_n) d^2(z_n, h(z_n)), \quad \forall n \in \mathbb{N}$. (20)
Substituting (20) into (17) and using (14), we get
$d^2(x_{n+1}, \eta) \le (1 - a_n) d^2(z_n, \eta) + a_n \left[ d^2(x_n, \eta) - b_n (1 - b_n) d^2(z_n, h(z_n)) \right] - a_n (1 - a_n) d^2(g(z_n), h(y_n)) \le d^2(x_n, \eta) - a_n b_n (1 - b_n) d^2(z_n, h(z_n)) - a_n (1 - a_n) d^2(g(z_n), h(y_n))$. (21)
Note that $k_1 k_2 (1 - \hat{k}_2) \le a_n b_n (1 - b_n)$. Summing (21) from $j = 0$ to $j = n$, we obtain
$k_1 k_2 (1 - \hat{k}_2) \sum_{j=0}^{n} d^2(z_j, h(z_j)) \le d^2(x_0, \eta) - d^2(x_{n+1}, \eta), \quad \forall n \in \mathbb{N}$.
Letting $n \to +\infty$ yields
$k_1 k_2 (1 - \hat{k}_2) \sum_{j=0}^{+\infty} d^2(z_j, h(z_j)) \le d^2(x_0, \eta) < +\infty$.
Therefore,
$\lim_{n \to +\infty} d(z_n, h(z_n)) = 0$. (22)
Moreover, let $(x_n, g(x_n), \eta) \subseteq M$ be the geodesic triangle with vertices $x_n$, $g(x_n)$ and $\eta$, and let $(\bar{x}_n, \overline{g(x_n)}, \bar{\eta}) \subset \mathbb{R}^2$ be the corresponding comparison triangle. Then, we get
$d(x_n, \eta) = \| \bar{x}_n - \bar{\eta} \|$, $d(g(x_n), \eta) = \| \overline{g(x_n)} - \bar{\eta} \|$ and $d(x_n, g(x_n)) = \| \bar{x}_n - \overline{g(x_n)} \|$.
Let $\bar{z}_n = (1 - c_n) \bar{x}_n + c_n \overline{g(x_n)}$ be the comparison point of $z_n$. Using (ii) of Lemma 2, we obtain
$d^2(z_n, \eta) \le \| \bar{z}_n - \bar{\eta} \|^2 = (1 - c_n) \| \bar{x}_n - \bar{\eta} \|^2 + c_n \| \overline{g(x_n)} - \bar{\eta} \|^2 - c_n (1 - c_n) \| \bar{x}_n - \overline{g(x_n)} \|^2 = (1 - c_n) d^2(x_n, \eta) + c_n d^2(g(x_n), \eta) - c_n (1 - c_n) d^2(x_n, g(x_n)) \le (1 - c_n) d^2(x_n, \eta) + c_n d^2(x_n, \eta) - c_n (1 - c_n) d^2(x_n, g(x_n)) = d^2(x_n, \eta) - c_n (1 - c_n) d^2(x_n, g(x_n)), \quad \forall n \in \mathbb{N}$. (23)
Combining (23) and (17) and using (15) yields
$d^2(x_{n+1}, \eta) \le (1 - a_n) \left[ d^2(x_n, \eta) - c_n (1 - c_n) d^2(x_n, g(x_n)) \right] + a_n d^2(y_n, \eta) - a_n (1 - a_n) d^2(g(z_n), h(y_n)) \le d^2(x_n, \eta) - (1 - a_n) c_n (1 - c_n) d^2(x_n, g(x_n)) - a_n (1 - a_n) d^2(g(z_n), h(y_n))$. (24)
Note that $(1 - \hat{k}_1) k_3 (1 - \hat{k}_3) \le (1 - a_n) c_n (1 - c_n)$. Summing (24) from $j = 0$ to $j = n$, we get
$(1 - \hat{k}_1) k_3 (1 - \hat{k}_3) \sum_{j=0}^{n} d^2(x_j, g(x_j)) \le d^2(x_0, \eta) - d^2(x_{n+1}, \eta), \quad \forall n \in \mathbb{N}$.
Letting $n \to +\infty$, we have
$(1 - \hat{k}_1) k_3 (1 - \hat{k}_3) \sum_{j=0}^{+\infty} d^2(x_j, g(x_j)) \le d^2(x_0, \eta) < +\infty$.
This indicates that
$\lim_{n \to +\infty} d(x_n, g(x_n)) = 0$. (25)
By the geodesic convexity of the Riemannian distance,
$d(z_n, x_n) \le (1 - c_n) d(x_n, x_n) + c_n d(g(x_n), x_n) = c_n d(g(x_n), x_n) \le d(g(x_n), x_n)$.
Letting $n \to +\infty$ in the last inequality and applying (25), we get
$\lim_{n \to +\infty} d(x_n, z_n) = 0$. (26)
Because $g$ is nonexpansive, we have
$d(z_n, g(z_n)) \le d(z_n, x_n) + d(x_n, g(x_n)) + d(g(x_n), g(z_n)) \le d(x_n, z_n) + d(x_n, g(x_n)) + d(x_n, z_n)$.
In view of (25) and (26), we deduce that
$\lim_{n \to +\infty} d(z_n, g(z_n)) = 0$. (27)
Recalling from (12) that $y_n = \exp_{z_n} ( b_n \exp_{z_n}^{-1} h(z_n) )$, we get
$d(y_n, z_n) = b_n d(z_n, h(z_n)) \le d(z_n, h(z_n))$.
Letting $n \to +\infty$ in the above inequality and using (22), we obtain
$\lim_{n \to +\infty} d(y_n, z_n) = 0$. (28)
Now,
$d(x_n, h(x_n)) \le d(x_n, z_n) + d(z_n, g(z_n)) + d(g(z_n), h(y_n)) + d(h(y_n), h(z_n)) + d(h(z_n), h(x_n)) \le d(x_n, z_n) + d(z_n, g(z_n)) + d(g(z_n), h(y_n)) + d(y_n, z_n) + d(x_n, z_n)$. (29)
Letting $n \to +\infty$ in (29) and combining (19), (26), (27) and (28), we obtain
$\lim_{n \to +\infty} d(x_n, h(x_n)) \le 2 \lim_{n \to +\infty} d(x_n, z_n) + \lim_{n \to +\infty} d(z_n, g(z_n)) + \lim_{n \to +\infty} d(g(z_n), h(y_n)) + \lim_{n \to +\infty} d(y_n, z_n) = 0$.
Next, consider
$d(p, g(p)) \le d(p, x_{n_i}) + d(x_{n_i}, g(x_{n_i})) + d(g(x_{n_i}), g(p)) \le 2 d(p, x_{n_i}) + d(x_{n_i}, g(x_{n_i}))$,
and
$d(p, h(p)) \le d(p, x_{n_i}) + d(x_{n_i}, h(x_{n_i})) + d(h(x_{n_i}), h(p)) \le 2 d(p, x_{n_i}) + d(x_{n_i}, h(x_{n_i}))$.
Letting $i \to +\infty$, we obtain $d(p, g(p)) = 0$ and $d(p, h(p)) = 0$, which proves that $p \in \Gamma(g, h)$. According to Lemma 3 (iii), the sequence $\{x_n\}$ generated by Algorithm 3 converges to a common fixed point of $g$ and $h$. This completes the proof.    □

4. Numerical Examples

We provide two numerical examples on Hadamard manifolds in order to illustrate the performance of Algorithm 3 and to evaluate its efficacy in comparison to other existing algorithms. All programs were coded in Matlab R2016b, and the computations were done on a personal computer with an Intel(R) Core(TM) i7 @1.80 GHz, together with 8 GB 1600 MHz DDR3.
Example 1. 
Let $M = (\mathbb{R}^3, \langle \cdot, \cdot \rangle)$ be the Hadamard manifold with Riemannian metric $\langle u, v \rangle = u^{T} W(x) v$ for $u, v \in T_x M$ and $x = (x_1, x_2, x_3) \in M$, where $W(x)$ is the $3 \times 3$ matrix defined by
$W(x) := \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 + 4 x_2^2 & -2 x_2 \\ 0 & -2 x_2 & 1 \end{pmatrix}, \quad \forall x \in M$.
The Riemannian distance between any $x$ and $y$ in $M$ is given by
$d^2(x, y) = \sum_{i=1}^{2} (x_i - y_i)^2 + (x_2^2 - x_3 - y_2^2 + y_3)^2$.
See [28] for further details. The geodesic joining the points $\varphi(0) = x$ and $\varphi(1) = y$ is given by
$\varphi(t) := (\varphi_1(t), \varphi_2(t), \varphi_3(t))$, $\quad t \in [0, 1]$,
where $\varphi_i(t) = x_i + t (y_i - x_i)$, $i = 1, 2$, and
$\varphi_3(t) = x_3 + t \left( (y_3 - x_3) - (y_2 - x_2)^2 \right) + t^2 (y_2 - x_2)^2$.
Therefore, $\exp_x(t v) = \varphi(t)$, where $\varphi : \mathbb{R} \to M$ is the unique geodesic starting from $\varphi(0) = x$ with $v = \varphi'(0) \in T_x M$. The inverse exponential mapping is given by
$\exp_x^{-1} y = \left( y_1 - x_1, \; y_2 - x_2, \; y_3 - x_3 - (y_2 - x_2)^2 \right)$.
In the same vein as [35] (Example 5.1), we define two nonexpansive mappings $g, h : M \to M$ by
$g(x) = (-x_1, -x_2, x_3)$, $\quad \forall x \in M$,
and
$h(x) = \left( \dfrac{x_1}{2}, \; \dfrac{x_2}{3}, \; \dfrac{x_3}{2} + \dfrac{x_2^2}{2} \right)$, $\quad \forall x \in M$.
Then, $F(g) = \{ (x_1, x_2, x_3) \in M : x_1 = x_2 = 0 \}$ and $F(h) = \{ (0, 0, 0) \}$, which imply that $\Gamma(g, h) = \{ (0, 0, 0) \}$. Let $x^* = (0, 0, 0)$ and take the initial point $x_0 = (0, 0, 1)$. We use $D_n = d(x_n, x^*) < 10^{-6}$ as the stopping criterion. The five different choices of the control parameters $a_n$, $b_n$ and $c_n$ are as follows:
Case I: $a_n = b_n = c_n = 0.1$; Case II: $a_n = b_n = c_n = 0.3$; Case III: $a_n = b_n = c_n = 0.5$; Case IV: $a_n = b_n = c_n = 0.7$; Case V: $a_n = b_n = c_n = 0.9$.
The numerical behavior of Algorithm 3 using different choices of control parameters is reported in Table 1 and Figure 1.
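For readers who wish to reproduce this experiment, the following MATLAB sketch encodes the manifold operations and the mappings of Example 1 (as reconstructed above) and feeds them to the routine algorithm3 sketched after Algorithm 3; it is an illustration of ours, not the authors' original program.

```matlab
% Sketch for Example 1: manifold operations on M = (R^3, <.,.>_W) and the maps g, h.
logmap = @(x, y) [y(1)-x(1); y(2)-x(2); y(3)-x(3)-(y(2)-x(2))^2];
expmap = @(x, v) [x(1)+v(1); x(2)+v(2); x(3)+v(3)+v(2)^2];
dist   = @(x, y) sqrt((x(1)-y(1))^2 + (x(2)-y(2))^2 + (x(2)^2 - x(3) - y(2)^2 + y(3))^2);

g = @(x) [-x(1); -x(2); x(3)];                 % F(g) = {x_1 = x_2 = 0}
h = @(x) [x(1)/2; x(2)/3; x(3)/2 + x(2)^2/2];  % F(h) = {(0,0,0)}

x0 = [0; 0; 1];
N  = 500;
a  = 0.5*ones(1, N);  b = a;  c = a;           % Case III parameters
x  = algorithm3(g, h, expmap, logmap, dist, x0, a, b, c, N, 1e-6);
disp(x')                                       % close to the common fixed point (0,0,0)
```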
Remark 1. 
(i) 
Table 1 shows that Algorithm 3 with different choices of control parameters is efficient and simple to implement. Most importantly, Algorithm 3 converges quickly when the control parameters satisfy $0.5 \le a_n, b_n, c_n < 1$.
(ii) 
As can be seen in Figure 1, our proposed Algorithm 3 with the parameters $a_n = b_n = c_n = 0.9$ is clearly faster than the other choices.
Example 2. 
Let $M := \mathbb{H}^3 = \{ x = (x_1, x_2, x_3, x_4) \in \mathbb{R}^4 : \langle x, x \rangle = -1, \; x_4 > 0 \}$ be the 3-dimensional hyperbolic space endowed with the Lorentz metric $\langle \cdot, \cdot \rangle$ of $\mathbb{R}^4$ defined by
$\langle x, y \rangle = x_1 y_1 + x_2 y_2 + x_3 y_3 - x_4 y_4$, $\quad x = (x_1, x_2, x_3, x_4), \; y = (y_1, y_2, y_3, y_4) \in \mathbb{H}^3$.
For more details, see [36,37]. Then, $\mathbb{H}^3$ is a Hadamard manifold with sectional curvature $-1$. The normalized geodesic $\varphi : \mathbb{R} \to \mathbb{H}^3$ starting from $x \in \mathbb{H}^3$ is given by
$\varphi(t) = (\cosh t) x + (\sinh t) v$, $\quad t \in \mathbb{R}$,
where $v \in T_x \mathbb{H}^3$ is a unit vector. In light of this, we deduce that $\exp_x(t v) = (\cosh t) x + (\sinh t) v$. The inverse exponential map is given by
$\exp_x^{-1} y = \operatorname{arccosh}(- \langle x, y \rangle) \dfrac{y + \langle x, y \rangle x}{\sqrt{\langle x, y \rangle^2 - 1}}$, $\quad \forall x, y \in \mathbb{H}^3$.
The Riemannian distance $d : \mathbb{H}^3 \times \mathbb{H}^3 \to \mathbb{R}$ is given by $d(x, y) = \operatorname{arccosh}(- \langle x, y \rangle)$.
Let $g, h : \mathbb{H}^3 \to \mathbb{H}^3$ be the nonexpansive mappings respectively defined by
$g(x_1, x_2, x_3, x_4) = (-x_1, -x_2, -x_3, x_4)$,
and
$h(x_1, x_2, x_3, x_4) = (-x_1, x_2, x_3, x_4)$
for all $x = (x_1, x_2, x_3, x_4) \in \mathbb{H}^3$. Then, $F(g) = \{ (0, 0, 0, 1) \}$ and $F(h) = \{ (x_1, x_2, x_3, x_4) \in \mathbb{H}^3 : x_1 = 0, \; x_2^2 + x_3^2 = x_4^2 - 1 \}$. We can see that $\Gamma(g, h) = \{ (0, 0, 0, 1) \}$.
In order to show the effectiveness of our Algorithm 3, we compare it with two other algorithms: the S-iteration algorithm of rank 2 (7) and the S-iteration algorithm of rank 3 (8). In Algorithm 3, we set $a_n = b_n = c_n = 0.5 + \frac{1}{n+2}$. In the S-iteration algorithms of rank 2 and rank 3, we take $a_n = b_n = c_n = 0.5 - \frac{1}{n+2}$. Let $x^* = (0, 0, 0, 1)$ and take the random initial point
$x_0 = (0.69445440978475, \; 1.01382609280137, \; 0.99360871330745, \; 1.87012527625153)$.
We use $D_n = d(x_n, x^*) < 10^{-6}$ as the stopping criterion. The computational results of Algorithm 3, the S-iteration algorithm of rank 2 and the S-iteration algorithm of rank 3 are reported in Table 2, and the behavior of $D_n$ is shown in Figure 2. It is seen that our proposed method converges faster than the other iterative methods.
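A corresponding MATLAB sketch for this hyperbolic-space example, again relying on the routine algorithm3 given after Algorithm 3, is shown below; the small numerical safeguards (the max(...) guards) are ours.

```matlab
% Sketch for Example 2: H^3 realized in R^4 with the Lorentz inner product.
lor    = @(x, y) x(1)*y(1) + x(2)*y(2) + x(3)*y(3) - x(4)*y(4);
nrm    = @(v) sqrt(max(lor(v, v), 0));                  % tangent vectors satisfy lor(v,v) >= 0
dist   = @(x, y) acosh(max(-lor(x, y), 1));
expmap = @(x, v) cosh(nrm(v))*x + sinh(nrm(v))*(v / max(nrm(v), eps));
logmap = @(x, y) dist(x, y) * (y + lor(x, y)*x) / sqrt(max(lor(x, y)^2 - 1, eps));

g = @(x) [-x(1); -x(2); -x(3); x(4)];   % F(g) = {(0,0,0,1)}
h = @(x) [-x(1);  x(2);  x(3); x(4)];   % F(h) = {x in H^3 : x_1 = 0}

x0 = [0.69445440978475; 1.01382609280137; 0.99360871330745; 1.87012527625153];
a  = @(n) 0.5 + 1/(n + 2);  b = a;  c = a;              % parameters used for Algorithm 3
x  = algorithm3(g, h, expmap, logmap, dist, x0, a, b, c, 100, 1e-6);
disp(dist(x, [0; 0; 0; 1]))                             % distance to x* = (0,0,0,1)
```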

5. Applications

5.1. Inclusion Problems

Let $\Psi(M)$ be the set of all multivalued vector fields $A : M \to 2^{TM}$ such that $A(x) \subseteq T_x M$ for all $x \in M$, and denote by $D(A)$ the domain of $A$, defined by $D(A) = \{ x \in M : A(x) \neq \emptyset \}$. Suppose that $A \in \Psi(M)$ is a multivalued vector field. A point $x^* \in M$ is called a singularity of $A$ if $0 \in A(x^*)$. The set of all singularities of $A$ is denoted by $A^{-1}(0) = \{ x \in M : 0 \in A(x) \}$.
Next, we recall the concept of monotonicity for multivalued vector fields on Hadamard manifolds.
Definition 2 
([38]).A vector field A Ψ ( M ) is said to be
(i)
monotone if for any $x, y \in D(A)$,
$\langle u, \exp_x^{-1} y \rangle \le \langle v, -\exp_y^{-1} x \rangle$, $\quad \forall u \in A(x), \; \forall v \in A(y)$;
and
(ii)
maximal monotone if it is monotone and, for any $x \in M$ and $u \in T_x M$, the condition
$\langle u, \exp_x^{-1} y \rangle \le \langle v, -\exp_y^{-1} x \rangle$ for all $y \in D(A)$ and $v \in A(y)$ implies that $u \in A(x)$.
Li et al. [21] provided a definition for the resolvent of a multivalued vector field as well as a firmly nonexpansive mapping on Hadamard manifolds.
Definition 3 
([21]). Given $\lambda > 0$ and a multivalued vector field $A \in \Psi(M)$, the resolvent of $A$ of order $\lambda$ is the multivalued map $J_{\lambda}^{A} : M \to 2^{M}$ defined by
$J_{\lambda}^{A}(x) := \{ y \in M : \exp_y^{-1} x \in \lambda A(y) \}$, $\quad \forall x \in M$.
Remark 2 
([21]). Assume that $\lambda > 0$. By the definition of the resolvent, the range of $J_{\lambda}^{A}$ is contained in the domain of $A$ and $F(J_{\lambda}^{A}) = A^{-1}(0)$.
Definition 4 
([21]).A mapping T : M M is said to be firmly nonexpansive if for any two points x , y M , the function γ : [ 0 , 1 ] [ 0 , + ) defined by
γ ( t ) : = d ( exp x t exp x 1 T x , exp y t exp y 1 T y ) , t [ 0 , 1 ] ,
is nonincreasing.
From the definition, it is straightforward to see that every firmly nonexpansive mapping is nonexpansive. Moreover, monotonicity and firm nonexpansiveness are closely related, as the following result shows.
Theorem 2 
([21]). Let $A \in \Psi(M)$ be a vector field. Then $A$ is monotone if and only if $J_{\lambda}^{A}$ is single-valued and firmly nonexpansive for all $\lambda > 0$.
Let $g : M \to \mathbb{R} \cup \{ +\infty \}$ be a geodesic convex function. It is known that the subdifferential $\partial g(x)$ of $g$ at $x$ is closed and geodesic convex [33], where
$\partial g(x) = \{ u \in T_x M : g(y) \ge g(x) + \langle u, \exp_x^{-1} y \rangle, \; \forall y \in M \}$. (31)
The following lemma states that the subdifferential $\partial g$ is a maximal monotone vector field.
Lemma 4 
([39]). Let $g : M \to \mathbb{R} \cup \{ +\infty \}$ be a proper, lower semicontinuous, geodesic convex function with $D(g) = M$. Then, the subdifferential $\partial g$ of $g$ is a maximal monotone vector field.
Clearly,
$x \in \min_M g \iff 0 \in \partial g(x)$,
where $\min_M g = \{ x \in M : g(x) \le g(y), \; \forall y \in M \}$ stands for the set of minimizers of $g$.
Here, we consider the following inclusion problem in the setting of Hadamard manifolds: find $x^* \in M$ such that
$x^* \in A^{-1}(0) \cap B^{-1}(0)$, (32)
where $A, B \in \Psi(M)$. We denote by $S$ the solution set of the inclusion problem (32). In light of Remark 2, the problem of finding singularities of $A$ is transformed into the problem of finding fixed points of the mapping $J_{\lambda}^{A}$.
Next, we apply Algorithm 3 to find a common singularity of two multivalued monotone vector fields.
Theorem 3. 
Let $A, B \in \Psi(M)$ be multivalued monotone vector fields such that $S \neq \emptyset$. Assume that $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ are real sequences satisfying $0 < k_1 \le a_n \le \hat{k}_1 < 1$, $0 < k_2 \le b_n \le \hat{k}_2 < 1$ and $0 < k_3 \le c_n \le \hat{k}_3 < 1$ for all $n \in \mathbb{N}$. Let $x_0 \in M$ and let $\{x_n\}$ be defined by
$z_n = \exp_{x_n} \left( c_n \exp_{x_n}^{-1} J_{\lambda}^{A}(x_n) \right), \quad y_n = \exp_{z_n} \left( b_n \exp_{z_n}^{-1} J_{\mu}^{B}(z_n) \right), \quad x_{n+1} = \exp_{J_{\lambda}^{A}(z_n)} \left( a_n \exp_{J_{\lambda}^{A}(z_n)}^{-1} J_{\mu}^{B}(y_n) \right), \quad n \in \mathbb{N}$, (33)
where $\lambda, \mu > 0$. Then, $\{x_n\}$ converges to an element of $A^{-1}(0) \cap B^{-1}(0)$.
Proof. 
Set $g = J_{\lambda}^{A}$ and $h = J_{\mu}^{B}$. From Theorem 2, $g$ and $h$ are single-valued and firmly nonexpansive mappings; hence, they are nonexpansive mappings with $F(g) = A^{-1}(0)$ and $F(h) = B^{-1}(0)$. From the hypothesis, $F(g) \cap F(h) = A^{-1}(0) \cap B^{-1}(0) \neq \emptyset$. Therefore, according to Theorem 1, we get the desired result. □
Now, we discuss a numerical experiment which supports Theorem 3.
Example 3. 
Let $M := \mathbb{R}_{++}^m = \{ x \in \mathbb{R}^m : x_i > 0, \; i = 1, \ldots, m \}$ and $\mathbb{R}_{+}^m = \{ x \in \mathbb{R}^m : x_i \ge 0, \; i = 1, \ldots, m \}$. As in [28], let $(\mathbb{R}_{++}^m, \langle \cdot, \cdot \rangle)$ be the Riemannian manifold with the Riemannian metric $\langle u, v \rangle := u^{T} W(x) v$ for $x \in \mathbb{R}_{++}^m$ and $u, v \in T_x \mathbb{R}_{++}^m$, where $W(x)$ is the diagonal matrix $W(x) = \operatorname{diag}(x_1^{-2}, x_2^{-2}, \ldots, x_m^{-2})$ and $T_x \mathbb{R}_{++}^m$ denotes the tangent space at $x \in \mathbb{R}_{++}^m$. The Riemannian distance $d : \mathbb{R}_{++}^m \times \mathbb{R}_{++}^m \to \mathbb{R}_{+}$ is given by
$d(x, y) := \sqrt{ \sum_{i=1}^{m} \ln^2 \dfrac{x_i}{y_i} }$, $\quad \forall x, y \in \mathbb{R}_{++}^m$.
Thus, $(\mathbb{R}_{++}^m, \langle \cdot, \cdot \rangle)$ is a Hadamard manifold. The exponential map on $\mathbb{R}_{++}^m$ is given by
$\exp_x (t v) = \left( x_1 e^{(v_1/x_1) t}, \; x_2 e^{(v_2/x_2) t}, \; \ldots, \; x_m e^{(v_m/x_m) t} \right)$,
for $x \in \mathbb{R}_{++}^m$ and $v \in T_x \mathbb{R}_{++}^m$. The inverse of the exponential map is given by
$\exp_x^{-1} y = \left( x_1 \ln \dfrac{y_1}{x_1}, \; x_2 \ln \dfrac{y_2}{x_2}, \; \ldots, \; x_m \ln \dfrac{y_m}{x_m} \right)$, $\quad \forall x, y \in \mathbb{R}_{++}^m$.
Let $g : \mathbb{R}_{++}^m \to \mathbb{R}$ be a mapping defined by
$g(x) := \sum_{i=1}^{m} g_i(x_i)$, $\quad g_i(x_i) := \alpha_i \ln \left( x_i^{\omega_i} + \beta_i \right) - \rho_i \ln(x_i)$, $\quad i = 1, \ldots, m$,
where $\alpha_i, \beta_i, \rho_i, \omega_i \in \mathbb{R}_{++}$ satisfy $\rho_i < \alpha_i \omega_i$ and $\omega_i \ge 2$ for $i = 1, \ldots, m$. The minimizer of $g$ is $x^* = (x_1^*, x_2^*, \ldots, x_m^*)$, where $x_i^* = \left( \beta_i \rho_i / (\alpha_i \omega_i - \rho_i) \right)^{1/\omega_i}$ for $i = 1, \ldots, m$. Ferreira et al. [19] showed that $g$ is a geodesic convex function on $(\mathbb{R}_{++}^m, \langle \cdot, \cdot \rangle)$. Taking $\alpha_i = \beta_i = \rho_i = 1$ and $\omega_i = 2$ for $i = 1, \ldots, m$, we obtain $\mathbf{1} = (1, \ldots, 1)$ as the minimizer of the mapping $g$. Let $h : \mathbb{R}_{++}^m \to \mathbb{R}$ be a mapping defined by
$h(x) := d(\mathbf{1}, x)$, $\quad \forall x \in \mathbb{R}_{++}^m$.
It is easy to see that $h$ is a geodesic convex function and $\mathbf{1}$ is its minimizer.
From (31), we have
$\partial g(x) = \left\{ u \in T_x \mathbb{R}_{++}^m : g(y) \ge g(x) + \sum_{i=1}^{m} \dfrac{u_i}{x_i} \ln \dfrac{y_i}{x_i}, \; \forall y \in \mathbb{R}_{++}^m \right\}$,
and
$\partial h(x) = \left\{ u \in T_x \mathbb{R}_{++}^m : d(\mathbf{1}, y) \ge d(\mathbf{1}, x) + \sum_{i=1}^{m} \dfrac{u_i}{x_i} \ln \dfrac{y_i}{x_i}, \; \forall y \in \mathbb{R}_{++}^m \right\}$.
The subdifferentials $\partial g$ and $\partial h$ are maximal monotone vector fields, as shown by Lemma 4. Thus, we take the multivalued monotone vector fields $A$ and $B$ to be $\partial g$ and $\partial h$, respectively. In addition, we have
$J_{\lambda}^{\partial g}(x) = \arg\min_{y \in \mathbb{R}_{++}^m} \left\{ g(y) + \dfrac{1}{2 \lambda} d^2(y, x) \right\}$, $\quad \lambda > 0$,
and
$J_{\mu}^{\partial h}(x) = \arg\min_{y \in \mathbb{R}_{++}^m} \left\{ d(\mathbf{1}, y) + \dfrac{1}{2 \mu} d^2(y, x) \right\}$, $\quad \mu > 0$.
Since the minimizer of both $g$ and $h$ is $\mathbf{1}$, we can observe that $S := A^{-1}(0) \cap B^{-1}(0) = \{ \mathbf{1} \}$.
Choose $a_n = b_n = c_n = 0.5$, $\lambda = \mu = 1$, and let $x^* = \mathbf{1}$. We consider several values of the dimension $m$. We use $D_n = d(x_n, x^*)^2 < 10^{-6}$ as the stopping criterion. The initial values $x_0$ are randomly generated in MATLAB. The numerical results we obtained are reported in Table 3 and Figure 3.
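Since $(\mathbb{R}_{++}^m, \langle \cdot, \cdot \rangle)$ is isometric to Euclidean $\mathbb{R}^m$ through the componentwise logarithm, both resolvents in (33) can be evaluated in log-coordinates: $J_{\mu}^{\partial h}$ becomes the closed-form proximal operator of the Euclidean norm, and $J_{\lambda}^{\partial g}$ separates into one-dimensional minimizations. The MATLAB sketch below uses this observation (with fminsearch for the one-dimensional subproblems); it is our own way of running the experiment, not necessarily the authors' implementation.

```matlab
% Sketch for Example 3 with alpha_i = beta_i = rho_i = 1, omega_i = 2 and lambda = mu = 1,
% working in log-coordinates u = log(x), where d(x, y) = norm(log(x) - log(y)).
lam = 1;  mu = 1;
opts = optimset('TolX', 1e-10, 'TolFun', 1e-10);
gi  = @(u) log(exp(2*u) + 1) - u;                       % g_i written in log-coordinates
JgA = @(x) exp(arrayfun(@(w) ...                        % resolvent of dg: componentwise prox
        fminsearch(@(u) gi(u) + (u - w)^2/(2*lam), w, opts), log(x)));
JhB = @(x) exp(max(1 - mu/max(norm(log(x)), eps), 0) * log(x));   % prox of ||.|| in log-coords

expmap = @(x, v) x .* exp(v ./ x);
logmap = @(x, y) x .* log(y ./ x);
dist   = @(x, y) norm(log(x) - log(y));

m  = 10;  x0 = 1 + 9*rand(m, 1);                        % random start in R^m_{++}
a  = 0.5*ones(1, 200);  b = a;  c = a;
x  = algorithm3(JgA, JhB, expmap, logmap, dist, x0, a, b, c, 200, 1e-3);
disp(dist(x, ones(m, 1))^2)                             % the quantity D_n reported in Table 3
```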
Remark 3. 
(i) 
Numerical experiments, as shown in Table 3, demonstrate that the proposed Algorithm (33) converges to a common singularity of the maximal monotone multivalued vector fields for the different dimensional sizes tested. Our method is efficient and simple to implement for solving the inclusion problem (32). Furthermore, the number of iterations required by Algorithm (33) is only mildly affected by the choice of dimension; even large jumps in the dimension change the iteration count only slightly.
(ii) 
The functions $g$ and $h$ in Example 3 are nonconvex in the Euclidean sense. Therefore, the iterative method of [12] cannot be applied to solve problem (32).

5.2. Convex Feasibility Problems

Suppose that $C$ is a nonempty, closed and geodesic convex subset of $M$. The projection operator $P_C : M \to C$ is defined by $P_C(x) := \{ z \in C : d(x, z) \le d(x, y), \; \forall y \in C \}$ for all $x \in M$. It is known that the projection operator $P_C$ is single-valued and firmly nonexpansive [21].
Let $C_1$ and $C_2$ be nonempty, closed and geodesic convex subsets of $M$ such that $C_1 \cap C_2 \neq \emptyset$. The projections $P_{C_1} : M \to C_1$ and $P_{C_2} : M \to C_2$ are nonexpansive mappings with $F(P_{C_1}) = C_1$ and $F(P_{C_2}) = C_2$; therefore, $\Gamma(P_{C_1}, P_{C_2}) = C_1 \cap C_2$. This means that finding an element of $\Gamma(P_{C_1}, P_{C_2})$ is equivalent to finding a point in the intersection of two nonempty, closed and geodesic convex subsets of a Hadamard manifold.
Next, we apply Algorithm 3 to find a point in the intersection of two nonempty, closed, and geodesic convex subsets of Hadamard manifolds.
Theorem 4. 
Let $C_1$ and $C_2$ be nonempty, closed, and geodesic convex subsets of $M$ such that $C_1 \cap C_2 \neq \emptyset$. Assume that $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ are real sequences satisfying $0 < k_1 \le a_n \le \hat{k}_1 < 1$, $0 < k_2 \le b_n \le \hat{k}_2 < 1$ and $0 < k_3 \le c_n \le \hat{k}_3 < 1$ for all $n \in \mathbb{N}$. Let $x_0 \in M$ and let $\{x_n\}$ be defined by
$z_n = \exp_{x_n} \left( c_n \exp_{x_n}^{-1} P_{C_1}(x_n) \right), \quad y_n = \exp_{z_n} \left( b_n \exp_{z_n}^{-1} P_{C_2}(z_n) \right), \quad x_{n+1} = \exp_{P_{C_1}(z_n)} \left( a_n \exp_{P_{C_1}(z_n)}^{-1} P_{C_2}(y_n) \right), \quad n \in \mathbb{N}$. (34)
Then, $\{x_n\}$ converges to an element of $C_1 \cap C_2$.
Proof. 
Set $g = P_{C_1}$ and $h = P_{C_2}$. Then, $g$ and $h$ are nonexpansive mappings with $F(g) = C_1$ and $F(h) = C_2$. From the hypothesis, $F(g) \cap F(h) = C_1 \cap C_2 \neq \emptyset$. Therefore, according to Theorem 1, we get the desired result. □
Now, we discuss a numerical experiment which supports Theorem 4.
Example 4. 
Let $M := \mathbb{H} = \{ (t_1, t_2) \in \mathbb{R}^2 : t_2 > 0 \}$ be the Poincaré upper half-plane endowed with the Riemannian metric defined by
$g_{11} = g_{22} := \dfrac{1}{t_2^2}$, $\quad g_{12} := 0$, $\quad \forall (t_1, t_2) \in \mathbb{H}$. (35)
The sectional curvature of $\mathbb{H}$ is equal to $-1$, and the geodesics of the Poincaré plane are the semilines $\varphi_a : t_1 = a$, $t_2 > 0$ and the semicircles $\varphi_{b,r} : (t_1 - b)^2 + t_2^2 = r^2$, $t_2 > 0$; they admit the following natural parameterizations:
$\varphi_a : t_1 = a, \; t_2 = e^{s}, \; s \in (-\infty, +\infty)$; $\qquad \varphi_{b,r} : t_1 = b + r \tanh s, \; t_2 = \dfrac{r}{\cosh s}, \; s \in (-\infty, +\infty)$;
see, e.g., [33]. Furthermore, consider two points $y = (t_1^y, t_2^y)$ and $z = (t_1^z, t_2^z)$ in $\mathbb{H}$. Then, the Riemannian distance between $y$ and $z$ is given by
$d_{\mathbb{H}}(y, z) = \left| \ln \dfrac{t_2^z}{t_2^y} \right|$ if $t_1^y = t_1^z$, $\qquad d_{\mathbb{H}}(y, z) = \left| \ln \left( \dfrac{t_1^y - b + r}{t_1^z - b + r} \cdot \dfrac{t_2^z}{t_2^y} \right) \right|$ if $t_1^y \neq t_1^z$,
where
$b = \dfrac{(t_1^y)^2 + (t_2^y)^2 - \left( (t_1^z)^2 + (t_2^z)^2 \right)}{2 (t_1^y - t_1^z)}$ and $r = \sqrt{(t_1^y - b)^2 + (t_2^y)^2}$.
To obtain the expression of $\exp_y^{-1} z$, we consider the geodesic curve $\varphi$ joining $y$ to $z$ defined by
$\varphi(s) := (\varphi_1(s), \varphi_2(s))$, $\quad s \in [0, 1]$,
where $\varphi_1(s)$ and $\varphi_2(s)$ are respectively defined by
$\varphi_1(s) := t_1^y$ if $t_1^y = t_1^z$, and $\varphi_1(s) := b - r \tanh \left( (1 - s) \operatorname{arctanh} \dfrac{b - t_1^y}{r} + s \operatorname{arctanh} \dfrac{b - t_1^z}{r} \right)$ if $t_1^y \neq t_1^z$,
and
$\varphi_2(s) := e^{(1 - s) \ln t_2^y + s \ln t_2^z}$ if $t_1^y = t_1^z$, and $\varphi_2(s) := \dfrac{r}{\cosh \left( (1 - s) \operatorname{arctanh} \dfrac{b - t_1^y}{r} + s \operatorname{arctanh} \dfrac{b - t_1^z}{r} \right)}$ if $t_1^y \neq t_1^z$.
By the Riemannian metric endowed on $\mathbb{H}$ (cf. (35)), one checks that
$\varphi'(0) = \left( \dfrac{d \varphi_1(s)}{ds}, \dfrac{d \varphi_2(s)}{ds} \right) \Big|_{s = 0}$;
see [32] (p. 7). Therefore, by elementary calculus, we get
$\exp_y^{-1} z = \varphi'(0) = \left( 0, \; t_2^y \ln \dfrac{t_2^z}{t_2^y} \right)$ if $t_1^y = t_1^z$, and $\exp_y^{-1} z = \dfrac{t_2^y \left( \operatorname{arctanh} \dfrac{b - t_1^y}{r} - \operatorname{arctanh} \dfrac{b - t_1^z}{r} \right)}{r} \, (t_2^y, \; b - t_1^y)$ if $t_1^y \neq t_1^z$.
Following [40] (Example 5.1), let $C_1$ and $C_2$ be the closed geodesic convex subsets of $\mathbb{H}$ defined by
$C_1 := \{ (t_1, t_2) \in \mathbb{H} : t_2 \ge 1 \}$,
and
$C_2 := \{ (t_1, t_2) \in \mathbb{H} : t_1^2 + t_2^2 \le 1 \}$.
From [33] (p. 301), $C_1$ and $C_2$ are geodesic convex because $C_1$ is a sublevel set of the geodesic convex function $f : \mathbb{H} \to \mathbb{R}$ defined by
$f(y) = \dfrac{1}{t_2}$, $\quad y = (t_1, t_2) \in \mathbb{H}$,
and the curve $\{ (t_1, t_2) \in \mathbb{H} : t_1^2 + t_2^2 = 1 \}$ bounding $C_2$ is a geodesic of $\mathbb{H}$, respectively. Moreover, $\Gamma(P_{C_1}, P_{C_2}) = \{ (0, 1) \}$,
$P_{C_1}(x) = (t_1, 1)$, $\quad \forall x = (t_1, t_2) \notin C_1$,
and
$P_{C_2}(x) = \left( \dfrac{2 t_1}{t_1^2 + t_2^2 + 1}, \; \sqrt{1 - \left( \dfrac{2 t_1}{t_1^2 + t_2^2 + 1} \right)^2} \right)$, $\quad \forall x = (t_1, t_2) \notin C_2$.
Choose $a_n = b_n = c_n = 0.7 + \frac{1}{n+3}$ and the initial point $x_0 = (1, 1)$ with $x^* = (0, 1)$. The computational results of Algorithm (34) are presented in Table 4 and Figure 4.
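To illustrate how scheme (34) runs on the Poincaré plane, the MATLAB sketch below implements the two projections above together with a helper geo(p, q, t) that returns the point a fraction t of the way along the geodesic from p to q (so that each step of (34) is a single call to geo); the helper names and the stopping choices are ours.

```matlab
% Sketch for Example 4 on the Poincare upper half-plane (save as example4_sketch.m).
function example4_sketch
    PC1 = @(p) [p(1); max(p(2), 1)];               % projection onto C1 = {t2 >= 1}
    a = @(n) 0.7 + 1/(n + 3);  b = a;  c = a;
    x = [1; 1];  xstar = [0; 1];
    for n = 1:1000
        z = geo(x, PC1(x), c(n));                  % step (11) specialized to (34)
        y = geo(z, PC2(z), b(n));                  % step (12)
        x = geo(PC1(z), PC2(y), a(n));             % step (13)
    end
    disp(dH(x, xstar))                             % hyperbolic distance to x* = (0,1)
end

function q = PC2(p)
% projection onto C2 = {t1^2 + t2^2 <= 1}
    if p(1)^2 + p(2)^2 <= 1, q = p; return; end
    u = 2*p(1) / (p(1)^2 + p(2)^2 + 1);
    q = [u; sqrt(1 - u^2)];
end

function p = geo(y, z, t)
% point at fraction t along the geodesic from y to z
    if abs(y(1) - z(1)) < 1e-14
        p = [y(1); exp((1 - t)*log(y(2)) + t*log(z(2)))];
    else
        bb = (y(1)^2 + y(2)^2 - z(1)^2 - z(2)^2) / (2*(y(1) - z(1)));
        r  = sqrt((y(1) - bb)^2 + y(2)^2);
        s  = (1 - t)*atanh((y(1) - bb)/r) + t*atanh((z(1) - bb)/r);
        p  = [bb + r*tanh(s); r/cosh(s)];
    end
end

function d = dH(y, z)
% hyperbolic distance on the upper half-plane
    d = acosh(1 + ((y(1)-z(1))^2 + (y(2)-z(2))^2) / (2*y(2)*z(2)));
end
```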

6. Conclusions

The problem of finding common fixed points of two nonexpansive mappings in the setting of Hadamard manifolds is the subject of this research. To solve this problem, a new three-step method is suggested. The proposed method has been shown to converge under certain assumptions, and its effectiveness is demonstrated by numerical examples.

Author Contributions

Conceptualization, K.K., P.C. and K.S.; methodology, K.K. and P.C.; software, K.K. and P.C.; validation, K.K., P.C. and K.S.; formal analysis, K.K., P.C. and K.S.; investigation, K.K., P.C. and K.S.; writing—original draft preparation, K.K., P.C. and K.S. writing—review and editing, K.K., P.C. and K.S.; visualization, K.K., P.C. and K.S.; supervision and funding acquisition, P.C. and K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research project is supported by The Science, Research and Innovation Promotion Funding (TSRI) (Grant no. FRB650070/0168). This research block grant was managed under Rajamangala University of Technology Thanyaburi (FRB65E0632M.1).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The first author would like to thank Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT). The second author was supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal Year 2023. Moreover, the last author acknowledges the financial support provided by the Science, Research, and Innovation Promotion Funding (TSRI) (Grant no. FRB650070/0168). This research block grant was managed under Rajamangala University of Technology Thanyaburi (FRB65E0632M.1).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC; Springer: Cham, Switzerland, 2017; p. xix+619, With a foreword by Hédy Attouch. [Google Scholar] [CrossRef]
  2. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef] [Green Version]
  3. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J. Generalized Halpern-type forward-backward splitting methods for convex minimization problems with application to image restoration problems. Optimization 2020, 69, 1557–1581. [Google Scholar] [CrossRef]
  4. Padcharoen, A.; Kumam, P.; Cho, Y.J. Split common fixed point problems for demicontractive operators. Numer. Algorithms 2019, 82, 297–320. [Google Scholar] [CrossRef]
  5. Picard, E. Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. Journal de Mathématiques Pures et Appliquées 1890, 6, 145–210. [Google Scholar]
  6. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  7. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
  8. Agarwal, R.P.; O’Regan, D.; Sahu, D.R. Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2007, 8, 61–79. [Google Scholar]
  9. Sahu, V.K.; Pathak, H.K.; Tiwari, R. Convergence theorems for new iteration scheme and comparison results. Aligarh Bull. Math. 2016, 35, 18–42. [Google Scholar]
  10. Thakur, B.S.; Thakur, D.; Postolache, M. A new iteration scheme for approximating fixed points of nonexpansive mappings. Filomat 2016, 30, 2711–2720. [Google Scholar] [CrossRef] [Green Version]
  11. Li, C.; López, G.; Martín-Márquez, V. Iterative algorithms for nonexpansive mappings on Hadamard manifolds. Taiwan. J. Math. 2010, 14, 541–559. [Google Scholar] [CrossRef]
  12. Padcharoen, A.; Sukprasert, P. Nonlinear Operators as Concerns Convex Programming and Applied to Signal Processing. Mathematics 2019, 7, 866. [Google Scholar] [CrossRef] [Green Version]
  13. Sahu, D.R.; Babu, F.; Sharma, S. The S-iterative techniques on Hadamard manifolds and applications. J. Appl. Numer. Optim. 2020, 2, 353–371. [Google Scholar] [CrossRef]
  14. Debnath, P.; Konwar, N.; Radenović, S. (Eds.) Metric Fixed Point Theory. Applications in Science, Engineering and Behavioural Sciences; Forum for Interdisciplinary Mathematics (FFIM); Springer: Singapore, 2021. [Google Scholar] [CrossRef]
  15. Todorčević, V. Harmonic Quasiconformal Mappings and Hyperbolic Type Metrics; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  16. Khuri, S.A.; Louhichi, I. A novel Ishikawa-Green’s fixed point scheme for the solution of BVPs. Appl. Math. Lett. 2018, 82, 50–57. [Google Scholar] [CrossRef]
  17. Sintunavarat, W.; Pitea, A. On a new iteration scheme for numerical reckoning fixed points of Berinde mappings with convergence analysis. J. Nonlinear Sci. Appl. 2016, 9, 2553–2562. [Google Scholar] [CrossRef] [Green Version]
  18. Ali, J.; Ali, F.; Kumar, P. Approximation of fixed points for Suzuki’s generalized non-expansive mappings. Mathematics 2019, 7, 522. [Google Scholar] [CrossRef] [Green Version]
  19. Ferreira, O.P.; Louzeiro, M.S.; Prudente, L.F. Gradient method for optimization on Riemannian manifolds with lower bounded curvature. SIAM J. Optim. 2019, 29, 2517–2541. [Google Scholar] [CrossRef] [Green Version]
  20. Ferreira, O.P.; Oliveira, P.R. Proximal point algorithm on Riemannian manifolds. Optimization 2002, 51, 257–270. [Google Scholar] [CrossRef]
  21. Li, C.; López, G.; Martín-Márquez, V.; Wang, J.H. Resolvents of set-valued monotone vector fields in Hadamard manifolds. Set-Valued Var. Anal. 2011, 19, 361–383. [Google Scholar] [CrossRef]
  22. Németh, S.Z. Variational inequalities on Hadamard manifolds. Nonlinear Anal. 2003, 52, 1491–1498. [Google Scholar] [CrossRef]
  23. Salisu, S.; Kumam, P.; Sriwongsa, S.; Abubakar, J. On minimization and fixed point problems in Hadamard spaces. Comput. Appl. Math. 2022, 41, 22. [Google Scholar] [CrossRef]
  24. Kumam, P.; Chaipunya, P. Equilibrium problems and proximal algorithms in Hadamard spaces. J. Nonlinear Anal. Optim. 2017, 8, 155–172. [Google Scholar]
  25. Kirk, W.; Shahzad, N. Fixed Point Theory in Distance Spaces; Springer: Cham, Switzerland, 2014; p. xii+173. [Google Scholar] [CrossRef]
  26. Salisu, S.; Minjibir, M.S.; Kumam, P.; Sriwongsa, S. Convergence theorems for fixed points in CAT_p(0) spaces. J. Appl. Math. Comput. 2022, 1–20. [Google Scholar] [CrossRef]
  27. Adler, R.L.; Dedieu, J.P.; Margulies, J.Y.; Martens, M.; Shub, M. Newton’s method on Riemannian manifolds and a geometric model for the human spine. IMA J. Numer. Anal. 2002, 22, 359–390. [Google Scholar] [CrossRef]
  28. Da Cruz Neto, J.X.; Ferreira, O.P.; Pérez, L.R.L.; Németh, S.Z. Convex- and monotone-transformable mathematical programming problems and a proximal-like point method. J. Glob. Optim. 2006, 35, 53–69. [Google Scholar] [CrossRef] [Green Version]
  29. Grohs, P.; Hosseini, S. Nonsmooth trust region algorithms for locally Lipschitz functions on Riemannian manifolds. IMA J. Numer. Anal. 2016, 36, 1167–1192. [Google Scholar] [CrossRef] [Green Version]
  30. Kristály, A. Nash-type equilibria on Riemannian manifolds: A variational approach. J. Math. Pures Appl. 2014, 101, 660–688. [Google Scholar] [CrossRef]
  31. Sakai, T. Riemannian Geometry. In Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 1996; Volume 149, p. xiv+358, Translated from the 1992 Japanese original by the author. [Google Scholar]
  32. do Carmo, M.P.A. Riemannian geometry. In Mathematics: Theory & Applications; Birkhauser Boston, Inc.: Boston, MA, USA, 1992; p. xiv+300, Translated from the second Portuguese edition by Francis Flaherty. [Google Scholar] [CrossRef]
  33. Udrişte, C. Convex Functions and Optimization Methods on Riemannians Manifolds. In Mathematics and Its Applications; Kluwer Academic Publishers Group: Dordrecht, the Netherlands, 1994; Volume 297, p. xviii+348. [Google Scholar] [CrossRef]
  34. Bridson, M.R.; Haefliger, A. Metric spaces of non-positive curvature. In Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]; Springer: Berlin/Heidelberg, Germany, 1999; Volume 319, p. xxii+643. [Google Scholar] [CrossRef]
  35. Al-Homidan, S.; Ansari, Q.H.; Babu, F.; Yao, J.C. Viscosity method with a ϕ-contraction mapping for hierarchical variational inequalities on Hadamard manifolds. Fixed Point Theory 2020, 21, 561–584. [Google Scholar] [CrossRef]
  36. Ferreira, O.P.; Pérez, L.R.L.; Németh, S.Z. Singularities of monotone vector fields and an extragradient-type algorithm. J. Glob. Optim. 2005, 31, 133–151. [Google Scholar] [CrossRef]
  37. Tang, G.J.; Huang, N.J. Korpelevich’s method for variational inequality problems on Hadamard manifolds. J. Glob. Optim. 2012, 54, 493–509. [Google Scholar] [CrossRef]
  38. da Cruz Neto, J.X.; Ferreira, O.P.; Lucambio Pérez, L.R. Monotone point-to-set vector fields. Balkan J. Geom. Appl. 2000, 5, 69–79, Dedicated to Professor Constantin Udrişte. [Google Scholar]
  39. Li, C.; López, G.; Martín-Márquez, V. Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 2009, 79, 663–683. [Google Scholar] [CrossRef]
  40. Wang, X.; Li, C.; Yao, J.C. Projection algorithms for convex feasibility problems on Hadamard manifolds. J. Nonlinear Convex Anal. 2016, 17, 483–497. [Google Scholar]
Figure 1. Numerical behavior of $\{D_n\}$ for Example 1.
Figure 2. Numerical behavior of $\{D_n\}$ for Example 2.
Figure 3. Numerical behavior of $\{D_n\}$ for Example 3.
Figure 4. Distance to the solution $x^* = (0, 1)$ at each iteration, where the initial point is $x_0 = (1, 1)$.
Table 1. Numerical experiments of Example 1.
Case    Iteration    Time (s)
I       258          0.0291
II      74           0.0149
III     38           0.0115
IV      23           0.1060
V       16           0.0052
Table 2. Numerical experiments of Example 2.
Algorithm                          Iteration    Time (s)
Algorithm 3                        7            0.0064
S-iteration algorithm of rank 2    22           0.0188
S-iteration algorithm of rank 3    10           0.0198
Table 3. Numerical experiments of Example 3 (columns correspond to the dimension m).
D_n     m = 1              m = 10             m = 50             m = 100
D_1     0.975719           8.397434           40.647008          86.538043
D_2     0.036959           0.834326           8.445068           21.070651
D_3     0.001303           0.029538           0.794156           3.402976
D_4     4.582065 × 10^-5   0.001043           0.027961           0.129793
D_5     1.641608 × 10^-6   3.782427 × 10^-5   0.000991           0.004588
D_6     2.488131 × 10^-7   2.415600 × 10^-6   4.070337 × 10^-5   0.000176
D_7     -                  6.194778 × 10^-7   8.331605 × 10^-6   1.623930 × 10^-5
D_8     -                  -                  2.111748 × 10^-6   4.140798 × 10^-6
D_9     -                  -                  5.425004 × 10^-7   1.076243 × 10^-6
D_10    -                  -                  -                  2.742307 × 10^-7
Table 4. The numerical results of Algorithm (34).
Iteration    x_n                       d(x_n, x*)
0            (1, 1)                    0.962424
1            (0.666667, 0.745356)      0.804719
2            (0.558497, 0.829507)      0.630646
3            (0.493526, 0.869731)      0.540711
4            (0.448527, 0.893769)      0.482855
5            (0.414798, 0.909914)      0.441392
6            (0.388202, 0.921574)      0.409681
7            (0.366477, 0.930427)      0.384347
8            (0.348263, 0.937397)      0.363465
9            (0.332683, 0.943039)      0.345842
10           (0.319143, 0.947707)      0.330693
50           (0.159591, 0.987183)      0.160967
100          (0.115512, 0.993306)      0.116031
500          (0.052958, 0.998597)      0.053007
1000         (0.037602, 0.999293)      0.037619
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
