Article

An Iterative Approach to Common Fixed Points of G-Nonexpansive Mappings with Applications in Solving the Heat Equation

by Raweerote Suparatulatorn 1,2,3, Payakorn Saksuriya 1,2,4, Teeranush Suebcharoen 1,2,5 and Khuanchanok Chaichana 1,2,5,*
1 Advanced Research Center for Computational Simulation, Chiang Mai University, Chiang Mai 50200, Thailand
2 Centre of Excellence in Mathematics, MHESI, Bangkok 10400, Thailand
3 Office of Research Administration, Chiang Mai University, Chiang Mai 50200, Thailand
4 International College of Digital Innovation, Chiang Mai University, Chiang Mai 50200, Thailand
5 Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
* Author to whom correspondence should be addressed.
Axioms 2024, 13(11), 729; https://doi.org/10.3390/axioms13110729
Submission received: 1 October 2024 / Revised: 17 October 2024 / Accepted: 18 October 2024 / Published: 22 October 2024
(This article belongs to the Special Issue Fixed Point Theory and Its Related Topics IV)

Abstract:
This study presents an iterative method for approximating common fixed points of a finite set of G-nonexpansive mappings within a real Hilbert space with a directed graph. We establish definitions for left and right coordinate convexity and demonstrate both weak and strong convergence results based on reasonable assumptions. Furthermore, our algorithm’s effectiveness in solving the heat equation is highlighted, contributing to energy optimization and sustainable development.
MSC:
65Y05; 68W10; 47H05; 47J25; 49M37

1. Introduction

Recent global challenges, particularly in urbanization and climate change, have intensified the urban heat island (UHI) effect. This phenomenon leads to significantly higher temperatures in cities compared to rural areas due to factors such as extensive concrete surfaces and reduced vegetation. UHI exacerbates energy consumption, increases pollution, and negatively impacts public health, especially during heatwaves. The heat equation offers a solution by modeling temperature dynamics to address these issues, see [1]. In smart cities, it helps mitigate the UHI effect by optimizing energy use in buildings and improving infrastructure resilience. In agriculture, the heat equation is used for precision farming, improving irrigation efficiency, and managing soil temperature to enhance crop yields and water conservation, see [2,3]. These applications, grounded in heat transfer modeling, are vital for climate resilience and sustainable development in the face of increasing environmental pressures. Algorithms used to solve the heat equation often rely on concepts from fixed point theorems. These theorems are central to the process, as they provide a framework, making them a fundamental tool in the development of algorithms for solving the heat equation, as seen in [4,5,6].
The study of fixed points is a rich field with applications not only in differential equations but also across various branches of mathematics, including geometry and algebra. In geometry, fixed point theorems have been instrumental in solving problems related to the behavior of transformations, such as automorphisms. For instance, Gamboa and Gromadzki [7] explored the fixed points of automorphisms on bordered Klein surfaces, further emphasizing the role of automorphisms in complex surface theory. In algebra, Cooper [8] focused on the fixed points of automorphisms in free groups, demonstrating that these fixed point sets are finitely generated. These applications highlight the importance of fixed point theorems in understanding transformations and algebraic structures across diverse mathematical fields.
The concept of a fixed point theorem for metric spaces endowed with graphs was first introduced by Jachymski [9] in 2008. Later, Aleomraninejad et al. [10] defined G-contractive and G-nonexpansive mappings within metric spaces that included directed graphs, providing convergence results for these types of mappings. Subsequently, numerous researchers have proposed various iterations for G-nonexpansive mappings in Hilbert spaces that involve a directed graph G = ( V ( G ) , E ( G ) ) , see [11,12,13,14], for instance. The development of iterative processes that achieve faster convergence remains a significant challenge. Suparatulatorn et al. [11] presented the following lemma, which we will utilize in our findings:
Lemma 1 
([11]). Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. Suppose that $V(G) = C$ and that $F$ is a $G$-nonexpansive self-mapping on $C$. Given that $\{u_n\}$ is a sequence in $C$ satisfying $u_n \rightharpoonup u$ (weak convergence) and $u_n - Fu_n \to v$ (strong convergence), where $u \in C$ and $v \in H$, if there is a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ satisfying $(u_{n_k}, u) \in E(G)$ for all $k \in \mathbb{N}$, then $(I - F)u = v$.
Moreover, Karahan and Ozdemir [15] introduced an S * iteration for nonexpansive mappings T in a Banach space, demonstrating that their iteration method outperforms the Picard, Mann, and S iterations in terms of speed. The S * iteration is as follows:
$$\begin{aligned} x_{n+1} &= (1 - \alpha_n) T x_n + \alpha_n T y_n, \\ y_n &= (1 - \beta_n) T x_n + \beta_n T z_n, \\ z_n &= (1 - \gamma_n) x_n + \gamma_n T x_n, \end{aligned}$$
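As a numerical illustration of the S* scheme, the short sketch below runs the iteration with constant parameters; the mapping $T$ and the parameter values are our own illustrative choices (any contraction makes the convergence visible), not taken from [15].

```python
def s_star(T, x1, alpha, beta, gamma, n_iters):
    """S*-type iteration for a single mapping T with constant parameters."""
    x = x1
    for _ in range(n_iters):
        z = (1 - gamma) * x + gamma * T(x)      # z_n = (1 - gamma_n) x_n + gamma_n T x_n
        y = (1 - beta) * T(x) + beta * T(z)     # y_n = (1 - beta_n) T x_n + beta_n T z_n
        x = (1 - alpha) * T(x) + alpha * T(y)   # x_{n+1} = (1 - alpha_n) T x_n + alpha_n T y_n
    return x

# Illustrative mapping: a contraction on R with unique fixed point 2.
T = lambda x: 0.5 * x + 1.0
x_approx = s_star(T, x1=10.0, alpha=0.5, beta=0.5, gamma=0.5, n_iters=50)
```

Starting from $x_1 = 10$, the iterates approach the unique fixed point $x = 2$ of $T$.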
where { α n } , { β n } , and  { γ n } are real sequences in ( 0 , 1 ) . Recently, Yambangwai and Thianwan [16] introduced a parallel inertial SP-iteration monotone hybrid algorithm (PISPMHA) for which a weak convergence theorem has been established in Hilbert spaces H endowed with graphs. Given initial points x 0 , x 1 H and i = 1 , 2 , , N , PISPMHA for G-nonexpansive mapping T i is presented as follows:
$$\begin{aligned} w_n &= x_n + \theta_n (x_n - x_{n-1}), \\ z_n^i &= (1 - \gamma_n^i) w_n + \gamma_n^i T_i w_n, \\ y_n^i &= (1 - \beta_n^i) z_n^i + \beta_n^i T_i z_n^i, \\ h_n^i &= (1 - \alpha_n^i) y_n^i + \alpha_n^i T_i y_n^i, \\ x_{n+1} &= h_n^{i_n}, \quad i_n \in \arg\max \{ \|h_n^i - w_n\| : i = 1, 2, \ldots, N \}, \end{aligned}$$
where $\{\theta_n\} \subset [0, \theta]$ for each $\theta \in (0, 1]$ and $\{\alpha_n^i\}, \{\beta_n^i\}, \{\gamma_n^i\} \subset [0, 1]$. Inspired by prior research, our study introduces an algorithm for approximating a common fixed point of a finite family of G-nonexpansive mappings in a real Hilbert space with a directed graph G.
This paper is organized as follows: Section 2 outlines the definitions of left and right coordinate convexity and their properties while also presenting the weak and strong convergence results of the proposed algorithm based on reasonable assumptions. The final section applies our algorithm to solve linear systems in order to obtain the numerical solutions of the heat equation.

2. Main Results

Let $H$ be a real Hilbert space with a directed graph $G = (H, E(G))$, where $E(G) \subseteq H \times H$. Assume that for each $i = 1, 2, \ldots, N$, the mapping $F_i : H \to H$ is $G$-nonexpansive, and that the set of common fixed points
$$F := \bigcap_{i=1}^{N} \mathrm{Fix}(F_i)$$
is nonempty.
We then introduce new definitions and their corresponding properties, followed by a review of key lemmas from the previous study.
Definition 1. 
Let $X$ be a vector space and $E$ be a nonempty subset of $X \times X$. The set $E$ is said to be left coordinate convex if, for all $(u, p), (v, p) \in E$ and all $\lambda \in [0, 1]$,
$$\lambda (u, p) + (1 - \lambda)(v, p) \in E.$$
Definition 2. 
Let $X$ be a vector space and $E$ be a nonempty subset of $X \times X$. The set $E$ is said to be right coordinate convex if, for all $(p, u), (p, v) \in E$ and all $\lambda \in [0, 1]$,
$$\lambda (p, u) + (1 - \lambda)(p, v) \in E.$$
Moreover, E is both left and right coordinate convex if and only if E is coordinate convex, as defined by Van Dung and Trung Hieu in [17].
Example 1. 
Suppose $E = [0, 1] \times ([0, 1] \cup [2, 3])$. Let $(u, p), (v, p) \in E$ and $\lambda \in [0, 1]$. From the convexity of $[0, 1]$ and the fact that $u, v \in [0, 1]$, we obtain $\lambda u + (1 - \lambda)v \in [0, 1]$, which implies that $(\lambda u + (1 - \lambda)v, p) \in E$. Thus,
$$\lambda (u, p) + (1 - \lambda)(v, p) \in E,$$
so we can conclude that $E$ is left coordinate convex. On the other hand, if we let $u = 1$, $v = 2$, and $\lambda = 0.5$, then for any $p \in [0, 1]$ we have $(p, u), (p, v) \in E$, but
$$\lambda (p, u) + (1 - \lambda)(p, v) = (p, 1.5) \notin E.$$
Therefore, $E$ is not right coordinate convex, and thus $E$ is not coordinate convex. Likewise, if we set $E = ([0, 1] \cup [2, 3]) \times [0, 1]$, then $E$ is right coordinate convex, while it is not left coordinate convex.
From Example 1, we observe that if the set $E$ is the product of a convex set and a non-convex set, then $E$ is either left or right coordinate convex, but not both. This observation allows us to derive the following theorem:
Theorem 1. 
Let E 1 and E 2 be nonempty subsets of a vector space, and let E = E 1 × E 2 . We obtain the following statements:
(i) 
If E 1 is convex, then E is left coordinate convex.
(ii) 
If E 2 is convex, then E is right coordinate convex.
(iii) 
If E 1 and E 2 are convex, then E is coordinate convex.
Proof. 
Assume that $E_1$ is convex. Let $(u, p), (v, p) \in E$ and $\lambda \in [0, 1]$. Then $u, v \in E_1$ and $p \in E_2$. By assumption, we have $\lambda u + (1 - \lambda)v \in E_1$, so
$$\lambda (u, p) + (1 - \lambda)(v, p) = (\lambda u + (1 - \lambda)v, p) \in E.$$
Therefore, E is left coordinate convex, which means that the statement ( i ) holds. We can verify statement ( i i ) in a similar way. Additionally, statement ( i i i ) follows from combining statements ( i ) and ( i i ) . □
Lemma 2 
([18], Lemma 1). Let $\{x_n\}$ and $\{y_n\}$ be nonnegative sequences of real numbers satisfying $x_{n+1} \le x_n + y_n$ and $\sum_{n=1}^{\infty} y_n < \infty$. Then, the sequence $\{x_n\}$ converges.
Lemma 3 
([19], Opial). Let C be a nonempty subset of a Hilbert space H , and let { x n } be a sequence in H . Suppose that
(i) 
the sequence $\{\|x_n - u\|\}$ converges for all $u \in C$,
(ii) 
all weak sequential cluster points of { x n } belong to C.
Then, { x n } converges weakly to some point in C.
We now present our iterative method, as described below (Algorithm 1).
Algorithm 1 Inertial S * parallel algorithm
1:
Initialization: Choose x 0 , x 1 H , and let n : = 1 .
2:
Iterative Steps: Construct a sequence { x n } as the following:
   
Step 1. Compute
$$p_n = x_n + \alpha_n (x_n - x_{n-1}),$$
where { α n } is a real sequence.
   
Step 2. Compute
$$\begin{aligned} q_n^i &= (1 - \rho_n^i) p_n + \rho_n^i F_i p_n, \\ r_n^i &= (1 - \tau_n^i) F_i p_n + \tau_n^i F_i q_n^i, \\ s_n^i &= (1 - \lambda_n^i) F_i p_n + \lambda_n^i F_i r_n^i, \end{aligned}$$
where $\{\rho_n^i\}, \{\tau_n^i\}, \{\lambda_n^i\} \subset [0, 1]$ for all $i = 1, 2, \ldots, N$.
   
Step 3. Define
$$x_{n+1} = s_n^{i_n}, \quad \text{where } i_n \in \arg\max \{ \|s_n^i - p_n\| : i = 1, 2, \ldots, N \}.$$
Repeat all steps by replacing n with n + 1 .
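For concreteness, Algorithm 1 admits a direct translation into code. The sketch below works in $\mathbb{R}^d$ (standing in for $H$) with constant parameters $\rho_n^i = \tau_n^i = \lambda_n^i = 0.5$ and $\alpha_n = 1/n^2$; the two affine mappings at the end, with common fixed point $(1, 1)$, are our own illustrative choices and not taken from the paper.

```python
import numpy as np

def inertial_s_star_parallel(mappings, x0, x1, alpha, rho, tau, lam, n_iters):
    """Inertial S* parallel iteration (Algorithm 1) for mappings F_1, ..., F_N."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iters + 1):
        p = x + alpha(n) * (x - x_prev)          # Step 1: inertial point
        best_s, best_dev = None, -1.0
        for F in mappings:                       # Step 2: S*-type updates for each F_i
            q = (1 - rho) * p + rho * F(p)
            r = (1 - tau) * F(p) + tau * F(q)
            s = (1 - lam) * F(p) + lam * F(r)
            dev = np.linalg.norm(s - p)
            if dev > best_dev:                   # Step 3: keep s_n^{i_n} with largest deviation
                best_s, best_dev = s, dev
        x_prev, x = x, best_s
    return x

# Two affine nonexpansive maps on R^2 with common fixed point (1, 1).
c = np.array([1.0, 1.0])
F1 = lambda v: c + 0.5 * (v - c)
F2 = lambda v: c + 0.25 * (v - c)[::-1]          # coordinate swap preserves the norm
x_out = inertial_s_star_parallel([F1, F2], [5.0, -3.0], [4.0, -2.0],
                                 alpha=lambda n: 1.0 / n**2,
                                 rho=0.5, tau=0.5, lam=0.5, n_iters=60)
```

With the summable inertial sequence $\alpha_n = 1/n^2$, the iterates approach the common fixed point $(1, 1)$.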
We will provide proofs for the lemmas that support our findings, assuming that { x n } is a sequence generated by Algorithm 1.
Lemma 4. 
Let $H$ be a real Hilbert space with a directed graph $G = (H, E(G))$. Suppose that $\{x_n\}$ is a sequence generated by Algorithm 1, satisfying the following conditions:
(i) $\sum_{n=1}^{\infty} |\alpha_n| \, \|x_n - x_{n-1}\| < \infty$,
(ii) $E(G)$ is left coordinate convex, and $(p_n, x^*) \in E(G)$.
Then, the sequence $\{x_n\}$ is bounded, and $\lim_{n \to \infty} \|x_n - x^*\|$ exists for all $x^* \in F$.
Proof. 
Since $F$ is nonempty, we can let $x^* \in F$. Based on supposition (ii) and the fact that $F_i$ preserves edges, we have $(F_i p_n, x^*) = (F_i p_n, F_i x^*) \in E(G)$ for every $i = 1, 2, \ldots, N$. The left coordinate convexity of $E(G)$ ensures that $(q_n^i, x^*) = (1 - \rho_n^i)(p_n, x^*) + \rho_n^i (F_i p_n, x^*) \in E(G)$. Similarly, we can conclude that $(F_i q_n^i, x^*), (r_n^i, x^*), (F_i r_n^i, x^*), (s_n^i, x^*) \in E(G)$ for all $i = 1, 2, \ldots, N$. Furthermore, we derive the following results from the fact that $F_i$ is $G$-nonexpansive:
$$\begin{aligned} \|q_n^i - x^*\| &= \|(1 - \rho_n^i) p_n + \rho_n^i F_i p_n - x^*\| = \|(1 - \rho_n^i)(p_n - x^*) + \rho_n^i (F_i p_n - x^*)\| \\ &\le (1 - \rho_n^i)\|p_n - x^*\| + \rho_n^i \|F_i p_n - x^*\| \\ &\le (1 - \rho_n^i)\|p_n - x^*\| + \rho_n^i \|p_n - x^*\| = \|p_n - x^*\|, \end{aligned}$$
$$\begin{aligned} \|r_n^i - x^*\| &= \|(1 - \tau_n^i)(F_i p_n - x^*) + \tau_n^i (F_i q_n^i - x^*)\| \\ &\le (1 - \tau_n^i)\|F_i p_n - x^*\| + \tau_n^i \|F_i q_n^i - x^*\| \\ &\le (1 - \tau_n^i)\|p_n - x^*\| + \tau_n^i \|q_n^i - x^*\| \\ &\le (1 - \tau_n^i)\|p_n - x^*\| + \tau_n^i \|p_n - x^*\| = \|p_n - x^*\|, \end{aligned}$$
$$\begin{aligned} \|s_n^i - x^*\| &= \|(1 - \lambda_n^i)(F_i p_n - x^*) + \lambda_n^i (F_i r_n^i - x^*)\| \\ &\le (1 - \lambda_n^i)\|F_i p_n - x^*\| + \lambda_n^i \|F_i r_n^i - x^*\| \\ &\le (1 - \lambda_n^i)\|p_n - x^*\| + \lambda_n^i \|r_n^i - x^*\| \\ &\le \|p_n - x^*\| = \|x_n + \alpha_n (x_n - x_{n-1}) - x^*\| \\ &\le \|x_n - x^*\| + |\alpha_n| \, \|x_n - x_{n-1}\|, \end{aligned}$$
for all $i = 1, 2, \ldots, N$. We can infer from the definition of $x_{n+1}$ that
$$\|x_{n+1} - x^*\| \le \|x_n - x^*\| + |\alpha_n| \, \|x_n - x_{n-1}\|.$$
From supposition (i) and Lemma 2, it follows that $\lim_{n \to \infty} \|x_n - x^*\|$ exists, implying that the sequence $\{x_n\}$ is bounded. □
Lemma 5. 
Let $H$ be a real Hilbert space with a directed graph $G = (H, E(G))$. Suppose that $\{x_n\}$ is a sequence generated by Algorithm 1 satisfying the following conditions:
(i) $\sum_{n=1}^{\infty} |\alpha_n| \, \|x_n - x_{n-1}\| < \infty$,
(ii) $E(G)$ is right coordinate convex, and $(x^*, p_n) \in E(G)$.
Then, the sequence $\{x_n\}$ is bounded, and $\lim_{n \to \infty} \|x_n - x^*\|$ exists for all $x^* \in F$.
Proof. 
Since $F$ is nonempty, we can let $x^* \in F$. Based on supposition (ii) and the fact that $F_i$ preserves edges, we have $(x^*, F_i p_n) \in E(G)$ for every $i = 1, 2, \ldots, N$. The right coordinate convexity of $E(G)$ ensures that $(x^*, q_n^i) \in E(G)$. Similarly, we can conclude that $(x^*, F_i q_n^i), (x^*, r_n^i), (x^*, F_i r_n^i), (x^*, s_n^i) \in E(G)$ for all $i = 1, 2, \ldots, N$. Using the same argument as in Lemma 4, we can determine that the sequence $\{x_n\}$ is bounded and $\lim_{n \to \infty} \|x_n - x^*\|$ exists. □
Some helpful equalities and inequalities are presented below. For $x, y \in H$ and any $\gamma \in \mathbb{R}$,
$$\begin{aligned} \|x + y\|^2 &= \|x\|^2 + 2\langle x, y \rangle + \|y\|^2, \\ \|x + y\|^2 &\le \|x\|^2 + 2\langle y, x + y \rangle, \\ \|\gamma x + (1 - \gamma) y\|^2 &= \gamma \|x\|^2 + (1 - \gamma)\|y\|^2 - \gamma (1 - \gamma)\|x - y\|^2. \end{aligned}$$
Lemma 6. 
For all $i = 1, 2, \ldots, N$, let $H$ be a real Hilbert space with a directed graph $G = (H, E(G))$. Suppose that $\{x_n\}$ is a sequence generated by Algorithm 1, satisfying the following conditions:
(i) $\sum_{n=1}^{\infty} |\alpha_n| \, \|x_n - x_{n-1}\| < \infty$,
(ii) $E(G)$ is left coordinate convex, and $(p_n, x^*) \in E(G)$ for all $x^* \in F$,
(iii) $0 < \liminf_{n \to \infty} \rho_n^i \le \limsup_{n \to \infty} \rho_n^i < 1$,
(iv) $\liminf_{n \to \infty} \tau_n^i > 0$ and $\liminf_{n \to \infty} \lambda_n^i > 0$,
(v) $G$ is transitive, and $(x^*, p_n) \in E(G)$ for all $x^* \in F$.
Then, $\lim_{n \to \infty} \|p_n - F_i p_n\| = 0$ for all $i = 1, 2, \ldots, N$.
Proof. 
Since $F$ is nonempty, we can let $x^* \in F$. According to Lemma 4, $\lim_{n \to \infty} \|x_n - x^*\|$ exists and the sequence $\{x_n\}$ is bounded. Consequently, the sequence $\{p_n\}$ is also bounded, and for all $i = 1, 2, \ldots, N$ we obtain the following:
$$\begin{aligned} \|q_n^i - x^*\|^2 &= \|(1 - \rho_n^i)(p_n - x^*) + \rho_n^i (F_i p_n - x^*)\|^2 \\ &= (1 - \rho_n^i)\|p_n - x^*\|^2 + \rho_n^i \|F_i p_n - x^*\|^2 - (1 - \rho_n^i)\rho_n^i \|p_n - F_i p_n\|^2 \\ &\le \|p_n - x^*\|^2 - (1 - \rho_n^i)\rho_n^i \|p_n - F_i p_n\|^2, \end{aligned}$$
$$\begin{aligned} \|r_n^i - x^*\|^2 &= \|(1 - \tau_n^i)(F_i p_n - x^*) + \tau_n^i (F_i q_n^i - x^*)\|^2 \\ &= (1 - \tau_n^i)\|F_i p_n - x^*\|^2 + \tau_n^i \|F_i q_n^i - x^*\|^2 - (1 - \tau_n^i)\tau_n^i \|F_i p_n - F_i q_n^i\|^2 \\ &\le (1 - \tau_n^i)\|p_n - x^*\|^2 + \tau_n^i \|q_n^i - x^*\|^2 \\ &\le \|p_n - x^*\|^2 - \tau_n^i \rho_n^i (1 - \rho_n^i)\|p_n - F_i p_n\|^2, \end{aligned}$$
$$\begin{aligned} \|s_n^i - x^*\|^2 &= \|(1 - \lambda_n^i)(F_i p_n - x^*) + \lambda_n^i (F_i r_n^i - x^*)\|^2 \\ &\le (1 - \lambda_n^i)\|p_n - x^*\|^2 + \lambda_n^i \|r_n^i - x^*\|^2 \\ &\le \|p_n - x^*\|^2 - \lambda_n^i \tau_n^i \rho_n^i (1 - \rho_n^i)\|p_n - F_i p_n\|^2 \\ &\le \|x_n - x^*\|^2 + 2|\alpha_n| \langle x_n - x_{n-1}, p_n - x^* \rangle - \lambda_n^i \tau_n^i \rho_n^i (1 - \rho_n^i)\|p_n - F_i p_n\|^2. \end{aligned}$$
For some $K > 0$, the following inequality can be obtained by rearranging the terms:
$$\lambda_n^i \tau_n^i \rho_n^i (1 - \rho_n^i)\|p_n - F_i p_n\|^2 \le \|x_n - x^*\|^2 - \|s_n^i - x^*\|^2 + 2K|\alpha_n| \, \|x_n - x_{n-1}\|.$$
Thus, there exists an $i_n \in \{1, 2, \ldots, N\}$ such that
$$\lambda_n^{i_n} \tau_n^{i_n} \rho_n^{i_n} (1 - \rho_n^{i_n})\|p_n - F_{i_n} p_n\|^2 \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + 2K|\alpha_n| \, \|x_n - x_{n-1}\|.$$
As $\lim_{n \to \infty} \|x_n - x^*\|$ exists, combining this with suppositions (i), (iii), and (iv), we obtain
$$\lim_{n \to \infty} \|p_n - F_{i_n} p_n\| = 0.$$
By referencing the proof of Lemma 4, we can see that $(q_n^i, x^*), (r_n^i, x^*) \in E(G)$. Combining this with (v) leads us to conclude that $(q_n^i, p_n), (r_n^i, p_n) \in E(G)$. From the definitions of $q_n^i$ and $r_n^i$ in Algorithm 1, we also obtain
$$\begin{aligned} \|r_n^{i_n} - F_{i_n} p_n\| &\le \|r_n^{i_n} - F_{i_n} q_n^{i_n}\| + \|F_{i_n} q_n^{i_n} - F_{i_n} p_n\| = (2 - \tau_n^{i_n})\|F_{i_n} q_n^{i_n} - F_{i_n} p_n\| \\ &\le (2 - \tau_n^{i_n})\|q_n^{i_n} - p_n\| = (2 - \tau_n^{i_n})\rho_n^{i_n}\|p_n - F_{i_n} p_n\| \to 0 \quad \text{as } n \to \infty, \end{aligned}$$
which implies that
$$\begin{aligned} \|x_{n+1} - p_n\| &= \|(1 - \lambda_n^{i_n})(F_{i_n} p_n - p_n) + \lambda_n^{i_n}(F_{i_n} r_n^{i_n} - p_n)\| \\ &\le (1 - \lambda_n^{i_n})\|F_{i_n} p_n - p_n\| + \lambda_n^{i_n}\|F_{i_n} r_n^{i_n} - F_{i_n} p_n\| + \lambda_n^{i_n}\|F_{i_n} p_n - p_n\| \\ &\le \|F_{i_n} p_n - p_n\| + \lambda_n^{i_n}\|r_n^{i_n} - p_n\| \\ &\le (1 + \lambda_n^{i_n})\|F_{i_n} p_n - p_n\| + \lambda_n^{i_n}\|r_n^{i_n} - F_{i_n} p_n\| \to 0 \quad \text{as } n \to \infty. \end{aligned}$$
From the definition of $x_{n+1}$, this yields
$$\lim_{n \to \infty} \|s_n^i - p_n\| = 0$$
for all $i = 1, 2, \ldots, N$. The estimate for $\|s_n^i - x^*\|^2$ above allows us to determine that, for some $P > 0$,
$$\lambda_n^i \tau_n^i \rho_n^i (1 - \rho_n^i)\|p_n - F_i p_n\|^2 \le \|p_n - x^*\|^2 - \|s_n^i - x^*\|^2 \le P\left(\|p_n - x^*\| - \|s_n^i - x^*\|\right) \le P\|p_n - s_n^i\|.$$
Thus, combining the last two displays, we have
$$\lim_{n \to \infty} \|p_n - F_i p_n\| = 0$$
for all $i = 1, 2, \ldots, N$. □
Lemma 7. 
For all $i = 1, 2, \ldots, N$, let $H$ be a real Hilbert space with a directed graph $G = (H, E(G))$. Suppose that $\{x_n\}$ is a sequence generated by Algorithm 1, satisfying the following conditions:
(i) $\sum_{n=1}^{\infty} |\alpha_n| \, \|x_n - x_{n-1}\| < \infty$,
(ii) $E(G)$ is right coordinate convex, and $(x^*, p_n) \in E(G)$ for all $x^* \in F$,
(iii) $0 < \liminf_{n \to \infty} \rho_n^i \le \limsup_{n \to \infty} \rho_n^i < 1$,
(iv) $\liminf_{n \to \infty} \tau_n^i > 0$ and $\liminf_{n \to \infty} \lambda_n^i > 0$,
(v) $G$ is transitive, and $(p_n, x^*) \in E(G)$ for all $x^* \in F$.
Then, $\lim_{n \to \infty} \|p_n - F_i p_n\| = 0$ for all $i = 1, 2, \ldots, N$.
Proof. 
By applying supposition (ii) in the same way as in Lemma 6, there exists an $i_n \in \{1, 2, \ldots, N\}$ such that
$$\lambda_n^{i_n} \tau_n^{i_n} \rho_n^{i_n} (1 - \rho_n^{i_n})\|p_n - F_{i_n} p_n\|^2 \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + 2K|\alpha_n| \, \|x_n - x_{n-1}\|.$$
According to Lemma 5 and suppositions (i), (iii), and (iv), we have
$$\lim_{n \to \infty} \|p_n - F_{i_n} p_n\| = 0.$$
By referencing the proof of Lemma 5, we can see that $(x^*, q_n^i), (x^*, r_n^i) \in E(G)$. Combining this with (v) leads us to conclude that $(p_n, q_n^i), (p_n, r_n^i) \in E(G)$. Following the same steps as in Lemma 6, we get
$$\lim_{n \to \infty} \|p_n - F_i p_n\| = 0$$
for all $i = 1, 2, \ldots, N$. □
Lemma 8. 
For all $i = 1, 2, \ldots, N$, let $H$ be a real Hilbert space with a directed graph $G = (H, E(G))$. Suppose that $\{x_n\}$ is a sequence generated by Algorithm 1, satisfying the following conditions:
(i) $\sum_{n=1}^{\infty} |\alpha_n| \, \|x_n - x_{n-1}\| < \infty$,
(ii) $E(G)$ is left coordinate convex, and $(p_n, x^*) \in E(G)$ for all $x^* \in F$,
(iii) $0 < \liminf_{n \to \infty} \rho_n^i \le \limsup_{n \to \infty} \rho_n^i < 1$,
(iv) $\liminf_{n \to \infty} \tau_n^i > 0$ and $\liminf_{n \to \infty} \lambda_n^i > 0$,
(v) $(q_n^i, p_n), (r_n^i, p_n) \in E(G)$ for all $n \in \mathbb{N}$.
Then, $\lim_{n \to \infty} \|p_n - F_i p_n\| = 0$ for all $i = 1, 2, \ldots, N$.
Lemma 9. 
For all $i = 1, 2, \ldots, N$, let $H$ be a real Hilbert space with a directed graph $G = (H, E(G))$. Suppose that $\{x_n\}$ is a sequence generated by Algorithm 1, satisfying the following conditions:
(i) $\sum_{n=1}^{\infty} |\alpha_n| \, \|x_n - x_{n-1}\| < \infty$,
(ii) $E(G)$ is right coordinate convex, and $(x^*, p_n) \in E(G)$ for all $x^* \in F$,
(iii) $0 < \liminf_{n \to \infty} \rho_n^i \le \limsup_{n \to \infty} \rho_n^i < 1$,
(iv) $\liminf_{n \to \infty} \tau_n^i > 0$ and $\liminf_{n \to \infty} \lambda_n^i > 0$,
(v) $(p_n, q_n^i), (p_n, r_n^i) \in E(G)$ for all $n \in \mathbb{N}$.
Then, $\lim_{n \to \infty} \|p_n - F_i p_n\| = 0$ for all $i = 1, 2, \ldots, N$.
Next, we outline several weak convergence theorems pertaining to Algorithm 1.
Theorem 2. 
Suppose that all conditions in Lemma 6 hold, together with condition A:
if there is a subsequence $\{p_{n_k}\}$ of $\{p_n\}$ with $p_{n_k} \rightharpoonup u$ for some $u \in H$, then $(p_{n_k}, u) \in E(G)$.
Then, the sequence $\{x_n\}$ converges weakly to an element in $F$.
Proof. 
By applying Lemmas 4 and 6, it follows that $\lim_{n \to \infty} \|x_n - x^*\|$ exists for every $x^* \in F$, and $\lim_{n \to \infty} \|p_n - F_i p_n\| = 0$ for each $i = 1, 2, \ldots, N$. We now show that all weak sequential cluster points of the sequence $\{x_n\}$ belong to $F$. Let $u$ be a weak sequential cluster point. This means that there exists a subsequence $\{x_{n_k}\}$ such that $x_{n_k} \rightharpoonup u$. From supposition (i), we know that $\|p_n - x_n\| = \|x_n + \alpha_n (x_n - x_{n-1}) - x_n\| = |\alpha_n| \, \|x_n - x_{n-1}\| \to 0$ as $n \to \infty$. Thus, $p_{n_k} \rightharpoonup u$. By condition A, we obtain that $(p_{n_k}, u) \in E(G)$. Therefore, using Lemma 1, we conclude that $u \in F$. Consequently, by Lemma 3, the sequence $\{x_n\}$ converges weakly to an element in $F$. □
By applying Lemmas 7–9 along with the reasoning used in the proof of Theorem 2, we can derive the following theorems:
Theorem 3. 
Suppose all conditions in Lemma 7 and condition A hold. Then, the sequence { x n } converges weakly to an element in F .
Theorem 4. 
Suppose all conditions in Lemma 8 and condition A hold. Then, the sequence { x n } converges weakly to an element in F .
Theorem 5. 
Suppose all conditions in Lemma 9 and condition A hold. Then, the sequence $\{x_n\}$ converges weakly to an element in $F$.
To reinforce our main theorems, we present the following example:
Example 2. 
Let $E(G) = [0, \infty) \times [0, \infty) \subseteq \mathbb{R} \times \mathbb{R}$. Define a mapping $F_i : \mathbb{R} \to \mathbb{R}$ as follows:
$$F_i x = \begin{cases} \pi + \sin(x) & \text{if } x \in [0, \infty) \text{ and } i = 1, \\ \dfrac{\pi + x}{2} & \text{if } x \in [0, \infty) \text{ and } i = 2, \\ (i + 1)\sin(x) & \text{otherwise}, \end{cases}$$
for all $x \in \mathbb{R}$ and $i = 1, 2$. Thus, $F_i$ is $G$-nonexpansive, and the set of common fixed points is $F = \{\pi\}$. If we take $x = -\frac{\pi}{2}$ and $y = -\frac{3\pi}{2}$, then for $i = 1, 2$ we have
$$|F_i x - F_i y| = 2(i + 1) > \pi = |x - y|,$$
which implies that $F_i$ is not nonexpansive. Set the initial values $x_0 = x_1 = 100$ and the parameters $\alpha_n = 0$, $\rho_n^i = \tau_n^i = \lambda_n^i = 0.5$ for all $n \in \mathbb{N}$ and $i = 1, 2$. It is evident that conditions (i)–(v) of Lemma 6 hold. Now, suppose there exists a subsequence $\{p_{n_k}\}$ of $\{p_n\}$ such that $p_{n_k} \rightharpoonup u$ for some $u \in \mathbb{R}$. From this setting, it follows that $p_{n_k}, u \in [0, \infty)$. Therefore, $(p_{n_k}, u) \in E(G)$, and condition A is satisfied. According to Theorem 2, the sequence $\{x_n\}$ converges to $\pi \in F$.
For a family of nonexpansive mappings on a real Hilbert space, we also obtain the following weak convergence theorem:
Theorem 6. 
Let $\{F_i : i = 1, 2, \ldots, N\}$ be a family of nonexpansive mappings on a real Hilbert space $H$ such that $F \neq \emptyset$, and let $\{x_n\}$ be a sequence generated by Algorithm 1. Suppose that:
(i) $\sum_{n=1}^{\infty} |\alpha_n| \, \|x_n - x_{n-1}\| < \infty$,
(ii) $0 < \liminf_{n \to \infty} \rho_n^i \le \limsup_{n \to \infty} \rho_n^i < 1$,
(iii) $\liminf_{n \to \infty} \tau_n^i > 0$ and $\liminf_{n \to \infty} \lambda_n^i > 0$.
Then, the sequence $\{x_n\}$ converges weakly to an element in $F$.
Before presenting the strong convergence theorems, we first recall condition ( S K ) , as introduced in [14].
Definition 3 
([14]). Let $C$ be a nonempty subset of a metric space $(X, d)$. For each $i = 1, 2, \ldots, N$, suppose that $F_i$ is a self-mapping on $C$. Then, the set $\{F_i : i = 1, 2, \ldots, N\}$ is said to satisfy condition (SK) if there is a non-decreasing function $\varphi : [0, \infty) \to [0, \infty)$ with $\varphi(0) = 0$ and $\varphi(r) > 0$ for $r > 0$ such that for each $c \in C$,
$$\varphi(d(c, F)) \le \max_{1 \le i \le N} d(c, F_i c),$$
where $F := \bigcap_{i=1}^{N} \mathrm{Fix}(F_i)$ and $d(c, F) = \inf_{v \in F} d(c, v)$.
Theorem 7. 
Let $x^* \in F$. Suppose all conditions in Lemma 6 hold, along with
$$\{(x_n, x^*), (x^*, x_n)\} \cap E(G) \neq \emptyset,$$
and that $\{F_i : i = 1, 2, \ldots, N\}$ satisfies condition (SK), where $F$ is closed. Then, the sequence $\{x_n\}$ converges strongly to an element in $F$.
Proof. 
Based on conditions (ii) and (v) in Lemma 6, it follows that either $(x_n, p_n) \in E(G)$ or $(p_n, x_n) \in E(G)$. Thus, we conclude that
$$\begin{aligned} \|F_i x_n - x_n\| &\le \|F_i x_n - F_i p_n\| + \|F_i p_n - p_n\| + \|p_n - x_n\| \\ &\le \|F_i p_n - p_n\| + 2\|p_n - x_n\| = \|F_i p_n - p_n\| + 2|\alpha_n| \, \|x_n - x_{n-1}\| \to 0 \quad \text{as } n \to \infty. \end{aligned}$$
In accordance with Lemma 4, $\lim_{n \to \infty} \|x_n - x^*\|$ exists for every $x^* \in F$, which implies that $\lim_{n \to \infty} d(x_n, F)$ exists. According to condition (SK), there is a non-decreasing function $\varphi : [0, \infty) \to [0, \infty)$ such that $\varphi(0) = 0$, $\varphi(r) > 0$ for all $r > 0$, and
$$\varphi(d(x_n, F)) \le \max_{1 \le l \le N} \|F_l x_n - x_n\|.$$
From the limit above, we obtain $\lim_{n \to \infty} \varphi(d(x_n, F)) = 0$. Utilizing the property of $\varphi$, we get $\lim_{n \to \infty} d(x_n, F) = 0$. As a result, we can identify a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ and a corresponding sequence $\{m_j\}$ in $F$ such that $\|x_{n_j} - m_j\| \le 2^{-j}$. We set $n_{j+1} = n_j + k$ for some $k \in \mathbb{N}$. From the proof of Lemma 4, we recall that $\|x_{n+1} - x^*\| \le \|x_n - x^*\| + |\alpha_n| \, \|x_n - x_{n-1}\|$. Thus, we can conclude that
$$\begin{aligned} \|x_{n_{j+1}} - m_j\| &= \|x_{n_j + k} - m_j\| \le \|x_{n_j + k - 1} - m_j\| + |\alpha_{n_j + k - 1}| \, \|x_{n_j + k - 1} - x_{n_j + k - 2}\| \\ &\le \|x_{n_j} - m_j\| + |\alpha_{n_j}| \, \|x_{n_j} - x_{n_j - 1}\| + \cdots + |\alpha_{n_j + k - 1}| \, \|x_{n_j + k - 1} - x_{n_j + k - 2}\| \\ &\le 2^{-j} + |\alpha_{n_j}| \, \|x_{n_j} - x_{n_j - 1}\| + \cdots + |\alpha_{n_j + k - 1}| \, \|x_{n_j + k - 1} - x_{n_j + k - 2}\|. \end{aligned}$$
Additionally,
$$\|m_{j+1} - m_j\| \le \frac{3}{2^{j+1}} + |\alpha_{n_j}| \, \|x_{n_j} - x_{n_j - 1}\| + \cdots + |\alpha_{n_j + k - 1}| \, \|x_{n_j + k - 1} - x_{n_j + k - 2}\|.$$
Condition (i) in Lemma 6 indicates that the right-hand side of this inequality converges to zero as $j \to \infty$. This implies that the sequence $\{m_j\}$ is a Cauchy sequence in $F$. Since $F$ is closed, there exists an $\hat{m} \in F$ such that $\lim_{j \to \infty} m_j = \hat{m}$. Furthermore, noting that $\|x_{n_j} - m_j\| \le 2^{-j}$, we obtain $\lim_{j \to \infty} \|x_{n_j} - \hat{m}\| = 0$. Moreover, since $\lim_{n \to \infty} \|x_n - \hat{m}\|$ exists, it follows that $\lim_{n \to \infty} \|x_n - \hat{m}\| = 0$. Therefore, the sequence $\{x_n\}$ converges strongly to $\hat{m} \in F$. □
Theorem 8. 
Let x * F . Suppose all conditions in Lemma 7 hold, along with
$$\{(x_n, x^*), (x^*, x_n)\} \cap E(G) \neq \emptyset,$$
and that { F i : i = 1 , 2 , , N } satisfies condition ( S K ) , where F is closed. Then, the sequence { x n } converges strongly to an element in F .
Theorem 9. 
Let x * F . Suppose all conditions in Lemma 8 hold, along with
$$\{(x_n, p_n), (p_n, x_n)\} \cap E(G) \neq \emptyset,$$
and that { F i : i = 1 , 2 , , N } satisfies condition ( S K ) , where F is closed. Then, the sequence { x n } converges strongly to an element in F .
Theorem 10. 
Let x * F . Suppose all conditions in Lemma 9 hold, along with
$$\{(x_n, p_n), (p_n, x_n)\} \cap E(G) \neq \emptyset,$$
and that { F i : i = 1 , 2 , , N } satisfies condition ( S K ) , where F is closed. Then, the sequence { x n } converges strongly to an element in F .
Finally, we establish the following strong convergence theorem for a family of nonexpansive mappings on a real Hilbert space.
Theorem 11. 
Let { F i : i = 1 , 2 , , N } be a family of nonexpansive mappings on a real Hilbert space H such that F , and let { x n } be a sequence generated by Algorithm 1. Suppose all conditions in Theorem 6 hold, and that { F i : i = 1 , 2 , , N } satisfies condition ( S K ) . Then, the sequence { x n } converges strongly to an element in F .

3. Application in Numerical Method

Efficient and accurate numerical methods for solving equations are an important tool in science and engineering. The heat equation is a fundamental partial differential equation (PDE) that describes the distribution and flow of heat in a material or system over time. To evaluate the efficacy of the proposed algorithm, we apply it to find the numerical solution of the heat equation using the Crank–Nicolson scheme [20], described as follows:
$$\begin{cases} u_t(x, t) = \kappa \, u_{xx}(x, t) + f(x, t), & x \in [0, l], \; t > 0, \\ u(x, 0) = u_0(x), & x \in (0, l), \\ u(0, t) = \phi_0(t), & t > 0, \\ u(l, t) = \phi_l(t), & t > 0, \end{cases}$$
where
  • u ( x , t ) is the temperature distribution function over space x and time t,
  • κ is the thermal diffusivity constant,
  • f ( x , t ) , u 0 ( x ) , ϕ 0 ( t ) , ϕ l ( t ) are sufficiently smooth functions.
Due to the complexity of most PDEs, exact solutions are often unattainable. Consequently, numerical methods are employed to approximate these solutions. In this context, the Crank–Nicolson scheme is utilized to approximate the solution to the heat equation. The method discretizes the space variable $x$ with a step size $\Delta x$ and the time variable $t$ with a step size $\Delta t$. Let $x_i$, $i = 1, 2, \ldots, N$, denote the spatial grid points and $t_n = n \Delta t$, $n = 1, 2, \ldots, T$, the time levels, where $N$ and $T$ are the total numbers of discretization points in space and time, respectively. We write $u_i^n$ for the approximate solution $u(x_i, t_n)$. For each interior point $i$ and time step $n$, the Crank–Nicolson scheme can be written as
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{\kappa}{2} \left( \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{(\Delta x)^2} + \frac{u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}}{(\Delta x)^2} \right) + f_i^{n + \frac{1}{2}},$$
where
$$f_i^{n + \frac{1}{2}} = f\!\left( x_i, \left( n + \tfrac{1}{2} \right) \Delta t \right).$$
The initial and boundary conditions are discretized as follows:
$$u_i^0 = u_0(x_i), \quad i = 1, 2, \ldots, N,$$
$$u_1^n = \phi_0(t_n), \quad n = 1, 2, \ldots, T,$$
$$u_N^n = \phi_l(t_n), \quad n = 1, 2, \ldots, T.$$
By rearranging terms, the scheme can be written in tridiagonal matrix form as follows:
$$A \mathbf{u}^{n+1} = B \mathbf{u}^n + F^{n + \frac{1}{2}},$$
where
$$A = \begin{pmatrix} 1 + 2r & -r & & & \\ -r & 1 + 2r & -r & & \\ & \ddots & \ddots & \ddots & \\ & & -r & 1 + 2r & -r \\ & & & -r & 1 + 2r \end{pmatrix}, \qquad B = \begin{pmatrix} 1 - 2r & r & & & \\ r & 1 - 2r & r & & \\ & \ddots & \ddots & \ddots & \\ & & r & 1 - 2r & r \\ & & & r & 1 - 2r \end{pmatrix},$$
$$F^{n + \frac{1}{2}} = \begin{pmatrix} r \, \phi_0(t_{n + \frac{1}{2}}) + \Delta t \, f_2^{n + \frac{1}{2}} \\ \Delta t \, f_3^{n + \frac{1}{2}} \\ \vdots \\ \Delta t \, f_{N-2}^{n + \frac{1}{2}} \\ r \, \phi_l(t_{n + \frac{1}{2}}) + \Delta t \, f_{N-1}^{n + \frac{1}{2}} \end{pmatrix}, \qquad \mathbf{u}^n = \begin{pmatrix} u_2^n \\ u_3^n \\ \vdots \\ u_{N-2}^n \\ u_{N-1}^n \end{pmatrix}, \qquad r = \frac{\kappa \Delta t}{2 (\Delta x)^2}.$$
Therefore, to find the solution at the next time step, $\mathbf{u}^{n+1}$, the linear system needs to be solved. In this case, due to the advantages in speed of convergence, iterative methods are often used to solve the linear systems, particularly for large and sparse matrices. The general form of the linear system can be written as
A u = b ,
where $A$ is an $N \times N$ matrix, $u$ is an $N \times 1$ unknown vector, and $b$ is an $N \times 1$ constant vector. In particular, for each time step, our algorithm can alternatively be applied to solve the linear system with $u = \mathbf{u}^{n+1}$ and $b = B \mathbf{u}^n + F^{n + \frac{1}{2}}$.
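As a concrete illustration of the discretization above, the following NumPy sketch assembles $A$ and $B$ and advances the Crank–Nicolson iteration; the test problem ($\kappa = 1$, $f = 0$, homogeneous boundary data, $u_0(x) = \sin(\pi x)$, exact solution $e^{-\pi^2 t}\sin(\pi x)$) is our own choice for validation.

```python
import numpy as np

def crank_nicolson_matrices(N, dx, dt, kappa):
    """Assemble tridiagonal A and B for the N - 2 interior unknowns."""
    r = kappa * dt / (2 * dx**2)
    m = N - 2
    A = (np.diag((1 + 2*r) * np.ones(m))
         + np.diag(-r * np.ones(m - 1), 1) + np.diag(-r * np.ones(m - 1), -1))
    B = (np.diag((1 - 2*r) * np.ones(m))
         + np.diag(r * np.ones(m - 1), 1) + np.diag(r * np.ones(m - 1), -1))
    return A, B

# Test problem: u_t = u_xx on [0, 1], u(x, 0) = sin(pi x), zero boundary data.
N, T, kappa = 51, 100, 1.0
x = np.linspace(0.0, 1.0, N)
dx, dt = x[1] - x[0], 1e-3
A, B = crank_nicolson_matrices(N, dx, dt, kappa)

u = np.sin(np.pi * x[1:-1])        # interior values at t = 0
for n in range(T):
    # f = 0 and phi_0 = phi_l = 0, so F^{n+1/2} = 0 for this test problem
    u = np.linalg.solve(A, B @ u)

exact = np.exp(-np.pi**2 * T * dt) * np.sin(np.pi * x[1:-1])
err = np.max(np.abs(u - exact))    # second-order accurate in both dx and dt
```

The maximum error against the exact solution at $t = 0.1$ is on the order of the discretization error, confirming the assembly.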
Solving linear systems of the form $Au = b$ is a fundamental step for obtaining the solution in various scientific and engineering domains. Direct methods, such as Gaussian elimination and LU decomposition, are well known and efficient, but they can be computationally costly and impractical for large linear systems. In practice, the linear system must be solved multiple times when addressing a problem. For this reason, iterative methods provide an efficient alternative by performing iterative calculations that converge to the solution. Specifically, such an iterative method is a form of fixed point method: the problem is transformed into $F(u) = u$, where $F$ is a mapping that satisfies the fixed point properties. There are different ways to define a fixed point mapping for solving a linear system, as follows:
  • Gauss–Seidel: $F_{\mathrm{GS}}(u) = \left( I - (D - L)^{-1} A \right) u + (D - L)^{-1} b$,
  • Jacobi: $F_{\mathrm{WJ}}(u) = (I - \omega D^{-1} A) u + \omega D^{-1} b$,
  • Successive over-relaxation: $F_{\mathrm{SOR}}(u) = \left( I - \omega (D - \omega L)^{-1} A \right) u + \omega (D - \omega L)^{-1} b$,
    where $\omega$ is a weight parameter, $D$ is the diagonal part of the matrix $A$, and $L$ is the strictly lower triangular part of $A$ (with $A = D - L - U$). Gauss–Seidel is the special case of successive over-relaxation with $\omega = 1$.
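A minimal sketch of these splitting-based mappings, under the convention $A = D - L - U$ with $L$ strictly lower triangular, verifies that any solution of $Au = b$ is a fixed point of each one; the small diagonally dominant test system is an arbitrary illustrative choice.

```python
import numpy as np

def weighted_jacobi(A, b, u, omega):
    """F_WJ(u) = (I - omega D^{-1} A) u + omega D^{-1} b."""
    Dinv = np.diag(1.0 / np.diag(A))
    return u + omega * Dinv @ (b - A @ u)

def sor_map(A, b, u, omega):
    """F_SOR(u) = (I - omega (D - omega L)^{-1} A) u + omega (D - omega L)^{-1} b."""
    M = np.diag(np.diag(A)) + omega * np.tril(A, -1)   # M = D - omega L, since L = -tril(A, -1)
    return u + omega * np.linalg.solve(M, b - A @ u)

def gauss_seidel(A, b, u):
    """Gauss-Seidel is SOR with omega = 1."""
    return sor_map(A, b, u, omega=1.0)

# Arbitrary diagonally dominant test system (illustrative choice).
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
u_exact = np.linalg.solve(A, b)

u_gs = np.zeros(3)
for _ in range(60):                 # iterating the mapping converges to the solution
    u_gs = gauss_seidel(A, b, u_gs)
```

Each mapping leaves `u_exact` unchanged, and repeated application of the Gauss–Seidel map drives an arbitrary starting vector to the solution.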
From PISPMHA, we can solve the linear system by integrating the fixed point approaches derived from the proposed algorithm. Algorithm 2 solves the linear system using the proposed fixed point iteration, starting with two initial vectors and the set of fixed point mappings F. Note that we can choose I to be any nonempty subset of {WJ, GS, SOR}; consequently, there are seven possible subsets. The generated sequence is defined as follows. Note that the parameters α, ρ, τ, and λ (lines 6, 7, 8, and 9) can be any sequences that satisfy the conditions.
Algorithm 2 Inertial S* parallel algorithm
1:  Input: matrix A, vector b, initial guesses u^(0), u^(1), tolerance ϵ, and the set of fixed point mappings F
2:  Output: approximate solution u
3:  Initialize k ← 0
4:  repeat
5:      k ← k + 1
6:      Set α^(k) ← min{ 1 / (k² ‖u^(k) − u^(k−1)‖₂²), 0.1 } if u^(k) ≠ u^(k−1); otherwise α^(k) ← 0.15
7:      Set ρ_i^(k) ← 0.5 for i ∈ I
8:      Set τ_i^(k) ← 0.5 for i ∈ I
9:      Set λ_i^(k) ← 0.5 for i ∈ I
10:     Compute p^(k) = u^(k) + α^(k) (u^(k) − u^(k−1))
11:     Compute q_i^(k) = (1 − ρ_i^(k)) p^(k) + ρ_i^(k) F_i(p^(k)) for i ∈ I
12:     Compute r_i^(k) = (1 − τ_i^(k)) F_i(p^(k)) + τ_i^(k) F_i(q_i^(k)) for i ∈ I
13:     Compute s_i^(k) = (1 − λ_i^(k)) F_i(p^(k)) + λ_i^(k) F_i(r_i^(k)) for i ∈ I
14:     Set u^(k+1) = s_{i_k}^(k), where i_k = arg max_{i ∈ I} ‖s_i^(k) − p^(k)‖
15: until ‖u^(k+1) − u^(k)‖ < ϵ
16: return u^(k+1)
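A minimal sketch of Algorithm 2 follows. It is written in Python/NumPy for illustration (the paper's implementation is in Julia), and the small test system, the two-map set I = { WJ, GS }, and the helper names are our own:

```python
import numpy as np

def inertial_s_star(A, b, u0, u1, maps, eps=1e-10, max_iter=10_000):
    """Sketch of Algorithm 2: inertial S* parallel iteration over a set of
    fixed-point maps F_i for A u = b, with rho = tau = lam = 0.5."""
    F = list(maps.values())
    u_prev, u = u0.copy(), u1.copy()
    for k in range(1, max_iter + 1):
        diff = np.linalg.norm(u - u_prev)
        # line 6: inertial parameter
        alpha = min(1.0 / (k**2 * diff**2), 0.1) if diff > 0 else 0.15
        p = u + alpha * (u - u_prev)            # line 10
        candidates = []
        for Fi in F:                            # parallel over i in I
            q = 0.5 * p + 0.5 * Fi(p)           # line 11, rho = 0.5
            r = 0.5 * Fi(p) + 0.5 * Fi(q)       # line 12, tau = 0.5
            s = 0.5 * Fi(p) + 0.5 * Fi(r)       # line 13, lam = 0.5
            candidates.append(s)
        # line 14: keep the candidate farthest from p
        u_next = max(candidates, key=lambda c: np.linalg.norm(c - p))
        if np.linalg.norm(u_next - u) < eps:    # line 15: stopping criterion
            return u_next, k
        u_prev, u = u, u_next
    return u, max_iter

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
D = np.diag(np.diag(A))
maps = {
    "WJ": lambda u: u - np.linalg.solve(D, A @ u - b),        # Jacobi, omega = 1
    "GS": lambda u: u - np.linalg.solve(np.tril(A), A @ u - b),
}
u, iters = inertial_s_star(A, b, np.zeros(2), 0.1 * np.ones(2), maps)
```

Since every F_i has the solution of A u = b as its fixed point, each candidate s_i^(k), and hence the selected iterate, converges to that solution.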
To better assess performance, we compare our algorithm with two algorithms from the literature, referred to as "Pd" and "Drs", which correspond to [12] and [16], respectively. Note that the sequences within these algorithms have been adjusted and differ from those originally proposed in the references. In addition, to ease comparison, the notation of the references has been changed to match that of our Algorithms 3 and 4.
Algorithm 3 Pd
1:  Input: matrix A, vector b, initial guesses u^(0), u^(1), tolerance ϵ, and the set of fixed point mappings F
2:  Output: approximate solution u
3:  Initialize k ← 0
4:  repeat
5:      k ← k + 1
6:      Set α^(k) ← min{ 1 / (k² ‖u^(k) − u^(k−1)‖₂²), 0.1 } if u^(k) ≠ u^(k−1); otherwise α^(k) ← 0.15
7:      Set ρ_i^(k) ← 0.5 for i ∈ I
8:      Set τ_i^(k) ← 0.5 for i ∈ I
9:      Set λ_i^(k) ← 0.5 for i ∈ I
10:     Compute p^(k) = u^(k) + α^(k) (u^(k) − u^(k−1))
11:     Compute q_i^(k) = (1 − ρ_i^(k)) p^(k) + ρ_i^(k) F_i(p^(k)) for i ∈ I
12:     Compute s_i^(k) = (1 − λ_i^(k)) p^(k) + λ_i^(k) F_i(q_i^(k)) for i ∈ I
13:     Set u^(k+1) = s_{i_k}^(k), where i_k = arg max_{i ∈ I} ‖s_i^(k) − p^(k)‖
14: until ‖u^(k+1) − u^(k)‖ < ϵ
15: return u^(k+1)
Algorithm 4 Drs
1:  Input: matrix A, vector b, initial guesses u^(0), u^(1), tolerance ϵ, and the set of fixed point mappings F
2:  Output: approximate solution u
3:  Initialize k ← 0
4:  repeat
5:      k ← k + 1
6:      Set α^(k) ← min{ 1 / (k² ‖u^(k) − u^(k−1)‖₂²), 0.1 } if u^(k) ≠ u^(k−1); otherwise α^(k) ← 0.15
7:      Set ρ_i^(k) ← 0.5 for i ∈ I
8:      Set τ_i^(k) ← 0.5 for i ∈ I
9:      Set λ_i^(k) ← 0.5 for i ∈ I
10:     Compute p^(k) = u^(k) + α^(k) (u^(k) − u^(k−1))
11:     Compute q_i^(k) = (1 − ρ_i^(k)) p^(k) + ρ_i^(k) F_i(p^(k)) for i ∈ I
12:     Compute r_i^(k) = (1 − τ_i^(k)) q_i^(k) + τ_i^(k) F_i(q_i^(k)) for i ∈ I
13:     Compute s_i^(k) = (1 − λ_i^(k)) r_i^(k) + λ_i^(k) F_i(r_i^(k)) for i ∈ I
14:     Set u^(k+1) = s_{i_k}^(k), where i_k = arg max_{i ∈ I} ‖s_i^(k) − p^(k)‖
15: until ‖u^(k+1) − u^(k)‖ < ϵ
16: return u^(k+1)
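The three algorithms differ only in their inner updates (lines 11–13). As a sketch (in Python for illustration; running a single Jacobi map with a fixed inertial weight is a simplification of the listings, not the paper's setup), the distinguishing steps can be isolated and compared side by side:

```python
import numpy as np

# Inner updates (lines 11-13) that distinguish Algorithms 2-4; each takes the
# extrapolated point p and one fixed-point map Fi and returns the candidate s.
# rho = tau = lam = 0.5 as in the listings.
def step_ours(p, Fi):                  # Algorithm 2 (Inertial S*)
    q = 0.5 * p + 0.5 * Fi(p)
    r = 0.5 * Fi(p) + 0.5 * Fi(q)
    return 0.5 * Fi(p) + 0.5 * Fi(r)

def step_pd(p, Fi):                    # Algorithm 3 (Pd)
    q = 0.5 * p + 0.5 * Fi(p)
    return 0.5 * p + 0.5 * Fi(q)

def step_drs(p, Fi):                   # Algorithm 4 (Drs)
    q = 0.5 * p + 0.5 * Fi(p)
    r = 0.5 * q + 0.5 * Fi(q)
    return 0.5 * r + 0.5 * Fi(r)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
F = lambda u: u - (A @ u - b) / d      # Jacobi fixed-point map (single map, |I| = 1)

def run(step, u0, eps=1e-10, max_iter=10_000):
    u_prev, u = u0.copy(), u0.copy()
    for k in range(1, max_iter + 1):
        p = u + 0.1 * (u - u_prev)     # fixed inertial weight for simplicity
        u_next = step(p, F)
        if np.linalg.norm(u_next - u) < eps:
            return u_next, k
        u_prev, u = u, u_next
    return u, max_iter

results = {name: run(s, np.zeros(2))
           for name, s in [("ours", step_ours), ("pd", step_pd), ("drs", step_drs)]}
```

All three updates share the same fixed point, so all converge to the solution of A u = b; they differ only in how strongly each sweep contracts toward it.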
As a test problem, we consider the heat equation on the interval [0, 1],

    u_t(x, t) = α u_xx(x, t) + 0.4 κ (4π² − 1) e^(−4αt) cos(4πx),   x ∈ [0, 1], t > 0,
    u(x, 0) = 0.1 cos(4πx),   x ∈ [0, 1],
    u(0, t) = 0.1 e^(−4αt),   t > 0,
    u(1, t) = 0.1 e^(−4αt),   t > 0,

which has the exact solution u(x, t) = 0.1 e^(−4αt) cos(4πx). We approximate the solution at time t = 0.1 using a time step of Δt = 0.01. To ensure a fair comparison, the sequences and parameter settings are identical for every algorithm. All algorithms in this study are implemented in the Julia programming language [21,22] and executed on an Apple M2 processor. The remaining parameters are set as κ = 25, Δt = Δx², and ϵ = 10^(−10).
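To make the setup concrete, a time discretization reduces each step to one tridiagonal linear system A u^(n+1) = b, which any of the fixed point maps above can solve. The sketch below is ours, not the paper's Julia code: the backward-Euler scheme, the diffusivity value α = 0.04, and the choice κ = α (which makes the stated exact solution satisfy the PDE) are all our assumptions:

```python
import numpy as np

a = 0.04                          # diffusivity alpha; illustrative value (not stated in the text)
dx = 0.1
dt = dx**2                        # time step, as in the text
nx = int(round(1 / dx)) + 1
x = np.linspace(0.0, 1.0, nx)

def exact(x, t):
    return 0.1 * np.exp(-4 * a * t) * np.cos(4 * np.pi * x)

def source(x, t):
    # forcing chosen so that `exact` solves the PDE (here with kappa = alpha)
    return 0.4 * a * (4 * np.pi**2 - 1) * np.exp(-4 * a * t) * np.cos(4 * np.pi * x)

# Backward Euler on interior nodes: (I - dt*a*D2) u^{n+1} = u^n + dt*f(x, t^{n+1})
n = nx - 2
r = a * dt / dx**2
A = (1 + 2 * r) * np.eye(n) - r * (np.eye(n, k=1) + np.eye(n, k=-1))

def jacobi_solve(A, b, u0, eps=1e-10, max_iter=10_000):
    """Solve A u = b with the Jacobi fixed-point map F(u) = u - D^{-1}(A u - b)."""
    d = np.diag(A)
    u = u0.copy()
    for _ in range(max_iter):
        u_new = u - (A @ u - b) / d
        if np.linalg.norm(u_new - u) < eps:
            return u_new
        u = u_new
    return u

u = exact(x, 0.0)                 # initial condition
t = 0.0
while t < 0.1 - 1e-12:
    t += dt
    b = u[1:-1] + dt * source(x[1:-1], t)
    b[0] += r * exact(0.0, t)     # fold Dirichlet boundary values into b
    b[-1] += r * exact(1.0, t)
    u[1:-1] = jacobi_solve(A, b, u[1:-1])
    u[0], u[-1] = exact(0.0, t), exact(1.0, t)

err = np.max(np.abs(u - exact(x, t)))   # max error at t = 0.1 on this coarse grid
```

With Δx = 0.1 the grid resolves cos(4πx) with only five points per wavelength, so a modest discretization error at t = 0.1 is expected even though the inner linear solves are converged to ϵ.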
Figure 1 compares the exact solution with the numerical solutions. In its labels, "Our" refers to our proposed algorithm, and "Alg" followed by the name of an iterative method (WJ, GS, or SOR) indicates the fixed point iteration used within the main algorithm for solving the linear system at each time step. Every method produces nearly the same solution, confirming that our algorithm solves the linear system effectively.
Figure 2 displays a bar plot of the number of iterations for the proposed and existing algorithms. The results clearly show that our algorithm requires fewer iterations than the existing algorithms, highlighting the efficiency of our approach. Moreover, the different fixed point iterations perform comparably within each algorithm, suggesting that a lower-complexity fixed point iteration can achieve the same result, which reduces overall computational time. Finally, SOR used alone has the highest average number of iterations, and it remains the weakest choice even in combination: when SOR is paired with another iteration, the iteration count merely drops to match that of the other iteration, indicating that SOR contributes little in this context.

4. Conclusions

This study introduces a novel iterative method for approximating common fixed points of G-nonexpansive mappings in real Hilbert spaces structured by directed graphs. The introduction of left and right coordinate convexity concepts proved crucial in establishing both weak and strong convergence under reasonable assumptions. The proposed algorithm demonstrated its effectiveness in addressing real-world problems, notably in solving the heat equation through linear systems, highlighting its potential application in energy optimization. The results contribute to advancements in computational methods that support sustainable development by offering improved tools for energy-efficient solutions.

Author Contributions

Conceptualization, R.S.; methodology, R.S.; software, P.S. and T.S.; formal analysis, K.C.; writing—original draft preparation, K.C.; writing—review and editing, P.S. and T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by (1) Fundamental Fund 2025, Chiang Mai University, Chiang Mai, Thailand; (2) Chiang Mai University, Chiang Mai, Thailand; and (3) Centre of Excellence in Mathematics, MHESI, Bangkok 10400, Thailand.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dudorova, N.V.; Belan, B.D. The Energy Model of Urban Heat Island. Atmosphere 2022, 13, 457. [Google Scholar] [CrossRef]
  2. Allahem, A.; Boulaaras, S.; Zennir, K.; Haiour, M. A new mathematical model of heat equations and its application on the agriculture soil. Eur. J. Pure Appl. Math. 2018, 11, 110–137. [Google Scholar] [CrossRef]
  3. Ruby, T.; Sutrisno, A. Mathematical modeling of heat transper in agricultural drying machine room (box dryer). J. Phys. Conf. Ser. 2021, 1751, 012029. [Google Scholar] [CrossRef]
  4. Emmanuel, E.C.; Chinelo, A. On the Numerical Fixed Point Iterative Methods of Solution for the Boundary Value Problems of Elliptic Partial Differential Equation Types. Asian J. Math. Sci. 2018, 2. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3812257 (accessed on 29 September 2024).
  5. Atangana, A.; Araz, S.I. An Accurate Iterative Method for Ordinary Differential Equations with Classical and Caputo-Fabrizio Derivatives; Hindustan Aeronautics Limited: Bangalore, India, 2023. [Google Scholar]
  6. Young, D. Iterative methods for solving partial difference equations of elliptic type. Trans. Am. Math. Soc. 1954, 76, 92–111. [Google Scholar] [CrossRef]
  7. Gamboa, J.M.; Gromadzki, G. On the set of fixed points of automorphisms of bordered Klein surfaces. Rev. Mat. Iberoam. 2012, 28, 113–126. [Google Scholar] [CrossRef]
  8. Cooper, D. Automorphisms of free groups have finitely generated fixed point sets. J. Algebra 1987, 111, 453–456. [Google Scholar] [CrossRef]
  9. Jachymski, J. The contraction principle for mappings on a metric space with a graph. Proc. Am. Math. Soc. 2008, 136, 1359–1373. [Google Scholar] [CrossRef]
  10. Aleomraninejad, S.M.A.; Rezapour, S.; Shahzad, N. Some fixed point results on a metric space with a graph. Topol. Its Appl. 2012, 159, 659–663. [Google Scholar] [CrossRef]
  11. Suparatulatorn, R.; Suantai, S.; Cholamjiak, W. Hybrid methods for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with graphs. AKCE Int. J. Graphs Comb. 2017, 14, 101–111. [Google Scholar] [CrossRef]
  12. Charoensawan, P.; Yambangwai, D.; Cholamjiak, W.; Suparatulatorn, R. An inertial parallel algorithm for a finite family of G-nonexpansive mappings with application to the diffusion problem. Adv. Differ. Equ. 2021, 2021, 1–13. [Google Scholar] [CrossRef]
  13. Jun-On, N.; Suparatulatorn, R.; Gamal, M.; Cholamjiak, W. An inertial parallel algorithm for a finite family of G-nonexpansive mappings applied to signal recovery. AIMS Math 2022, 7, 1775–1790. [Google Scholar] [CrossRef]
  14. Khemphet, A.; Suparatulatorn, R.; Varnakovida, P.; Charoensawan, P. A Modified Parallel Algorithm for a Common Fixed-Point Problem with Application to Signal Recovery. Symmetry 2023, 15, 1464. [Google Scholar] [CrossRef]
  15. Karahan, I.; Ozdemir, M. A general iterative method for approximation of fixed points and their applications. Adv. Fixed Point Theory 2013, 3, 510–526. [Google Scholar]
  16. Yambangwai, D.; Thianwan, T. A parallel inertial SP-iteration monotone hybrid algorithm for a finite family of G-nonexpansive mappings and its application in linear system, differential, and signal recovery problems. Carpathian J. Math. 2024, 40, 535–557. [Google Scholar] [CrossRef]
  17. Van Dung, N.; Trung Hieu, N. Convergence of a new three-step iteration process to common fixed points of three G-nonexpansive mappings in Banach spaces with directed graphs. RACSAM 2020, 114, 140. [Google Scholar] [CrossRef]
  18. Auslender, A.; Teboulle, M.; Ben-Tiba, S. A Logarithmic-Quadratic Proximal Method for Variational Inequalities. Comput. Optim. Appl. 1999, 12, 31–40. [Google Scholar] [CrossRef]
  19. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; CMS Books in Mathematics; Springer: New York, NY, USA, 2011. [Google Scholar]
  20. Yambangwai, D.; Cholamjiak, W.; Thianwan, T.; Dutta, H. On a new weight tri-diagonal iterative method and its applications. Soft Comput. 2021, 25, 725–740. [Google Scholar] [CrossRef]
  21. Bezanson, J.; Karpinski, S.; Shah, V.B.; Edelman, A. Julia: A fast dynamic language for technical computing. arXiv 2012, arXiv:1209.5145. [Google Scholar]
  22. Bezanson, J.; Edelman, A.; Karpinski, S.; Shah, V.B. Julia: A fresh approach to numerical computing. SIAM Rev. 2017, 59, 65–98. [Google Scholar] [CrossRef]
Figure 1. The exact and approximate solutions for the heat equation at time t = 0.1 and Δ x = 0.1 .
Figure 2. The comparison of the average number of iterations between the literature and our proposed algorithm.