Article

Neural Network Method for Solving Time Fractional Diffusion Equations

1 School of Science, Qingdao University of Technology, Qingdao 266520, China
2 School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(6), 338; https://doi.org/10.3390/fractalfract9060338
Submission received: 20 March 2025 / Revised: 23 April 2025 / Accepted: 12 May 2025 / Published: 23 May 2025

Abstract

In this paper, we propose a neural network method for solving time-fractional diffusion equations with Dirichlet boundary conditions by combining machine learning techniques with the Method of Lines. We first use the Method of Lines to discretize the equation in the space domain while keeping the time domain continuous, and represent the solution of the diffusion equation by a neural network. We then use Gauss–Jacobi quadrature to approximate the fractional derivative in the time domain, thereby obtaining the loss function for the neural network. We use TensorFlow to carry out the gradient descent process that trains the network. We conducted numerical tests in the 1D and 2D cases and compared the results with the exact solutions. The numerical tests show that this method is effective and easy to apply to many time-fractional diffusion problems.

1. Introduction

The fractional derivative generalizes classical differentiation to non-integer orders, extending the concept to continuous and complex systems. Introduced by Leibniz in the 17th century, it gained formal structure with Liouville, Riemann, and later Caputo’s definitions, ensuring compatibility with initial conditions in differential equations. Its importance lies in modeling memory-dependent and nonlocal phenomena, where traditional derivatives fail. Applications include viscoelasticity (describing material stress–strain relationships), anomalous diffusion (particle transport in heterogeneous media), control theory (fractional PID controllers), and bioengineering (tumor growth modeling). The Caputo and Riemann–Liouville formulations are the most common, balancing physical interpretability and mathematical rigor. We refer to [1] for a brief review of the evolution of fractional calculus and [2,3,4,5] for the applications of fractional derivatives in many related fields.
In recent years, researchers have developed many techniques to solve time-fractional partial differential equations (TFPDEs). Among these, the finite difference method, the finite element method, and their variants are widely used [6,7,8,9,10,11]. Other important techniques include the spectral method [12], homotopy methods [13,14,15,16], and spline techniques [17]. The Method of Lines (MOL) [18] can also be applied to the numerical solution of TFPDEs. Typically, MOL is used as a semi-analytical method for solving integer-order diffusion equations: the equation is discretized in the spatial domain while the time domain is left continuous, transforming the diffusion equation into a system of ordinary differential equations (ODEs) to which standard ODE library routines can be applied. When MOL is applied to time-fractional evolution equations, however, the equation is converted into a system of fractional ODEs with initial conditions, which is not as straightforward to solve as an integer-order system. Generally, the solution can be expressed in terms of the Mittag–Leffler matrix function [19], which can be approximated using rational functions, Padé approximation, or the scaling-and-squaring method; these techniques, however, are generally inefficient for computing the Mittag–Leffler matrix function. Saeed Kazem recently proposed a semi-analytical method for solving time-fractional diffusion equations using MOL and the Mittag–Leffler matrix by leveraging matrix eigenvalues and eigenvectors [11,20].
Apart from MOL and finite difference methods, other techniques have been proposed for solving TFPDEs. Feng Gao and Chunmei Chi introduced a neural network method for solving specific classes of fractional partial differential equations (FPDEs) [21,22]. They constructed BP neural networks to represent the solution of an FPDE and obtained the numerical solution by pre-training the network on selected points of the equation. However, their method has one limitation: it applies only to certain classes of FPDEs and cannot be used to solve time-fractional diffusion equations. In this paper, we propose a method that combines neural networks with the Method of Lines for the numerical solution of time-fractional diffusion equations. The main challenges for machine learning methods are constructing an effective neural network architecture and defining an appropriate loss function. To address the latter, we employ Gauss–Jacobi quadrature to construct the loss function for the proposed neural network. This paper is organized as follows: Section 2 reviews the preliminaries of fractional calculus and Gauss–Jacobi quadrature. Section 3 presents the proposed numerical method. Section 4 provides numerical tests for 1D and 2D examples and comparisons with the exact solutions.

2. Preliminaries and Definitions

In calculus, the Caputo derivative is defined as follows:
$$
D_a^{\alpha} f(x) =
\begin{cases}
\dfrac{1}{\Gamma(n-\alpha)} \displaystyle\int_a^x (x-t)^{\,n-\alpha-1} f^{(n)}(t)\, dt, & \alpha \in (n-1, n), \\[6pt]
f^{(n)}(x), & \alpha = n,
\end{cases}
$$
where $n \in \mathbb{N}$ and $\Gamma(\cdot)$ is Euler's Gamma function.
For $u(x,t)$, $x \in \mathbb{R}^d$, $t \in \mathbb{R}$, the Caputo derivative with respect to $t$ is defined as follows:
$$
\frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} =
\begin{cases}
\dfrac{1}{\Gamma(n-\alpha)} \displaystyle\int_a^t (t-\tau)^{\,n-\alpha-1}\, \frac{\partial^n u(x,\tau)}{\partial \tau^n}\, d\tau, & \alpha \in (n-1, n), \\[6pt]
\dfrac{\partial^n u(x,t)}{\partial t^n}, & \alpha = n,
\end{cases}
$$
where $n \in \mathbb{N}$.
In this paper, we consider the following Caputo time-fractional diffusion equations with Dirichlet boundary conditions.
The equations for 1D time-fractional diffusion are as follows:
$$\frac{\partial^{\alpha} U}{\partial t^{\alpha}} = a\, \frac{\partial^2 U}{\partial x^2}, \tag{1}$$
where $t \ge 0$, $0 < \alpha < 1$, $0 \le x \le L$, $a$ is a positive constant, and the initial and boundary conditions are as follows:
$$U(0,x) = f(x), \qquad U(t,0) = u_0(t), \qquad U(t,L) = u_L(t).$$
The 2D time-fractional diffusion equations are as follows:
$$\frac{\partial^{\alpha} U}{\partial t^{\alpha}} = a \left( \frac{\partial^2 U}{\partial x^2} + \frac{\partial^2 U}{\partial y^2} \right), \tag{2}$$
where $a$ is a positive constant, $t > 0$, $0 < \alpha < 1$, $(x,y) \in \Omega = [a,b] \times [c,d] \subset \mathbb{R}^2$, and the initial and boundary conditions are as follows:
$$U(x,y,0) = f(x,y), \qquad U(x,y,t) = g(x,y,t), \quad (x,y) \in \partial\Omega.$$
The 3D time-fractional diffusion equations are as follows:
$$\frac{\partial^{\alpha} U}{\partial t^{\alpha}} = a \left( \frac{\partial^2 U}{\partial x^2} + \frac{\partial^2 U}{\partial y^2} + \frac{\partial^2 U}{\partial z^2} \right), \tag{3}$$
where $a$ is a positive constant, $t > 0$, $0 < \alpha < 1$, $(x,y,z) \in \Omega = [a,b] \times [c,d] \times [e,f] \subset \mathbb{R}^3$, and the initial and boundary conditions are as follows:
$$U(x,y,z,0) = f(x,y,z), \qquad U(x,y,z,t) = g(x,y,z,t), \quad (x,y,z) \in \partial\Omega.$$
The analytic solutions of some fractional ordinary differential equations (FODEs) and systems of FODEs can be expressed in terms of Mittag–Leffler functions [23], which are defined as follows:
$$E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}, \quad \alpha > 0.$$
The analytic solutions of linear fractional differential equations are often expressed in terms of Mittag–Leffler functions because of the following property:
$$D_*^{\alpha} E_{\alpha}\!\left(\lambda x^{\alpha}\right) = \lambda\, E_{\alpha}\!\left(\lambda x^{\alpha}\right).$$
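The series above can be evaluated directly by truncation. A minimal sketch (the function name and truncation length K are our own choices, not from the paper); for $\alpha = 1$ the series collapses to the exponential, which gives a simple check:

```python
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, alpha, K=100):
    """Truncated series E_alpha(z) = sum_{k=0}^{K-1} z^k / Gamma(alpha*k + 1)."""
    k = np.arange(K)
    return float(np.sum(z**k / gamma(alpha * k + 1.0)))

# E_1(z) = e^z, so the truncated sum should reproduce the exponential.
print(abs(mittag_leffler(0.5, 1.0) - np.exp(0.5)) < 1e-12)
```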
Many types of fractional ordinary differential equations can be solved using the Method of Lines. The Method of Lines typically constructs numerical methods for partial differential equations by first discretizing the spatial derivatives only, leaving the time variable continuous. This yields a system of ordinary differential equations to which numerical methods for initial value problems can be applied. In this work, however, by discretizing the spatial derivatives only and leaving the time variable continuous, we can regard the solution to the time-fractional evolution equation as a pre-trained neural network. In particular, the neural network for problem (1) takes one variable $t$ as input and $n+1$ variables $u_0(t), u_1(t), u_2(t), \ldots, u_n(t)$ as output (see Figure 1). Therefore, finding the numerical solution to problem (1) is equivalent to pre-training this neural network.
For the construction and pre-training of the neural networks related to Problem 1, we outline the following fundamental steps.
Gauss–Jacobi quadrature
To calculate the integral
$$\int_{-1}^{1} (1-z)^{-\alpha}\, u_t\!\left(\frac{t(z+1)}{2}, x\right) dz, \quad 0 < \alpha < 1,$$
which is involved in the construction of the loss functions for the neural network, we use the following Gauss–Jacobi quadrature [24],
$$\int_{-1}^{1} w(z)\, g(z)\, dz \approx \sum_{i=1}^{n} w_i^*\, g(z_i), \tag{4}$$
where w ( z ) is the weight function
$$w(z) = (1-z)^{-\alpha}, \quad 0 < \alpha < 1,$$
and the error term for (4) is
$$E_n = \frac{\Gamma^2(n-\alpha+1)\, \Gamma(n+1)}{(2n-\alpha+1)\, \Gamma^2(2n-\alpha+1)} \cdot \frac{2^{\,2n-\alpha+1}\, n!}{(2n)!}\, g^{(2n)}(\xi).$$
The nodes $z_i$ are given by the roots of the Jacobi polynomial $P_n^{(\alpha,0)}(x)$, and the weights $w_i^*$ take the following form [20]:
$$w_i^* = \frac{\Gamma(n+\alpha+1)\, \Gamma(n+1)}{\Gamma(n+\alpha+1)} \cdot \frac{2^{\,2n+\alpha+1}\, n!}{\left(1 - z_i^2\right) \left[V_n'(z_i)\right]^2},$$
where $V_n(x) = P_n^{(\alpha,0)}(x)\, 2^n\, n!\, (-1)^n$ and $\frac{d}{dx} P_n^{(\alpha,0)}(x) = \frac{1}{2}(n+\alpha+1)\, P_{n-1}^{(\alpha+1,1)}(x)$. Since no Gauss–Jacobi quadrature integration table is readily available, we calculate the integration nodes and weights using the MATLAB R2025b function JacobiP with parameters $n$, $\alpha$, $x$, where $\alpha$ is the fractional order. The calculations show that the coefficient of the error term can be reduced to machine accuracy when $n = 10$. Table 1 provides the numerical integration nodes and weights for $n = 10$ and fractional order $\alpha = 0.5$.
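As an alternative to the MATLAB routine, the nodes and weights can be generated with SciPy's Gauss–Jacobi rule; this is a sketch under the assumption that the weight $(1-z)^{-\alpha}$ corresponds to Jacobi parameters $(-\alpha, 0)$:

```python
import numpy as np
from scipy.special import roots_jacobi

def gauss_jacobi_rule(m, alpha):
    """Nodes and weights for int_{-1}^{1} (1-z)^(-alpha) g(z) dz."""
    # SciPy's Jacobi weight is (1-z)^a (1+z)^b; here a = -alpha, b = 0.
    return roots_jacobi(m, -alpha, 0.0)

z, w = gauss_jacobi_rule(10, 0.5)
# The weights integrate g = 1 exactly: sum(w) = 2^(1-alpha)/(1-alpha).
print(np.isclose(w.sum(), 2**(1 - 0.5) / (1 - 0.5)))
```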
Construction of the loss functions
One of the main challenges in solving FODEs and FPDEs using neural networks is constructing the loss functions for these networks [25,26,27]. Because the fractional derivatives of neural networks, which are typically composite functions, are complex, it is difficult to obtain analytical or explicit loss functions. To address this issue, we propose approximating the fractional derivative in the time domain as follows. Let
$$I = D_0^{\alpha} u(t,x) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{u_t(s,x)}{(t-s)^{\alpha}}\, ds, \quad 0 < \alpha < 1.$$
Substituting $z = \frac{2s}{t} - 1$ into the above equation, we obtain the following:
$$
\begin{aligned}
D_0^{\alpha} u(t,x) &= \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{u_t(s,x)}{(t-s)^{\alpha}}\, ds \\
&= \frac{1}{\Gamma(1-\alpha)} \int_{-1}^{1} (1-z)^{-\alpha} \left(\frac{t}{2}\right)^{1-\alpha} u_t\!\left(\frac{t(z+1)}{2}, x\right) dz \\
&= \frac{1}{\Gamma(1-\alpha)} \left(\frac{t}{2}\right)^{1-\alpha} \int_{-1}^{1} (1-z)^{-\alpha}\, u_t\!\left(\frac{t(z+1)}{2}, x\right) dz.
\end{aligned}
$$
By using Gauss–Jacobi quadrature (4), we have the following:
$$D_0^{\alpha} u(t,x) \approx \frac{1}{\Gamma(1-\alpha)} \left(\frac{t}{2}\right)^{1-\alpha} \sum_{i=1}^{m} w_i^*\, g(z_i), \tag{5}$$
where
$$g(z) = u_t\!\left(\frac{t(z+1)}{2}, x\right),$$
and $w_i^*$, $i = 1, 2, \ldots, m$, are the weights of the Gauss–Jacobi quadrature. Once the neural network is constructed, $u(t, x_i)$ represents the output of the network. The explicit form of $u_t(t,x)$ can be obtained by differentiating the neural network. Consequently, $D_0^{\alpha} u(t,x)$ can be approximated, and the loss function for the network is constructed as described in the previous step.
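Putting formula (5) into code: the sketch below (our own illustration, not the paper's implementation) approximates $D_0^{\alpha} u$ for $u(t) = t^2$, whose exact Caputo derivative is $\frac{\Gamma(3)}{\Gamma(3-\alpha)} t^{2-\alpha}$; the quadrature is exact here because the mapped integrand is a polynomial:

```python
import numpy as np
from scipy.special import roots_jacobi, gamma

def caputo_quadrature(ut, t, alpha, m=10):
    """D_0^alpha u(t) ~ (t/2)^(1-alpha)/Gamma(1-alpha) * sum_i w_i ut(t(z_i+1)/2)."""
    z, w = roots_jacobi(m, -alpha, 0.0)   # rule for weight (1 - z)^(-alpha)
    s = t * (z + 1.0) / 2.0               # nodes mapped from (-1, 1) onto (0, t)
    return (t / 2.0)**(1.0 - alpha) / gamma(1.0 - alpha) * np.dot(w, ut(s))

alpha, t = 0.5, 1.0
approx = caputo_quadrature(lambda s: 2.0 * s, t, alpha)  # u(t) = t^2 => u'(t) = 2t
exact = gamma(3.0) / gamma(3.0 - alpha) * t**(2.0 - alpha)
print(abs(approx - exact) < 1e-12)
```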

3. Neural Network for Solving Time Fractional Diffusion Equations

3.1. Neural Network Structure and Parameter

We use the following neural networks (see Figure 1 and Figure 2) to approximate the solutions $u(t,x)$ of problem (1) and problem (2). Following the methodology of the Method of Lines, we discretize the space domain into $n$ subintervals, e.g.,
$$0 = x_0 < x_1 < x_2 < \cdots < x_n = L, \quad x_i = ih, \quad h = L/n.$$
Then the solution of (1) can be represented as the vector as follows:
$$U(t) = \left[u(t,x_0), u(t,x_1), \ldots, u(t,x_n)\right]^T.$$
We denote this neural network as $N(t, W)$, where $W$ is the parameter vector, $t$ is the input, and
$$U(t) = \left[u(t,x_0), u(t,x_1), \ldots, u(t,x_n)\right]^T$$
is the output. The solution to Problem (1) can be viewed as the pre-trained neural network. We assume that this neural network has three layers: an input layer, a hidden layer, and an output layer, as shown in Figure 1. The weights of the first layer are $w_i$, $i = 1, 2, \ldots, n$. That means
$$y_i = w_i\, t, \quad i = 1, 2, \ldots, n.$$
The activation function for the first hidden layer is the sigmoid function
$$g(z) = \frac{1}{1 + e^{-z}},$$
and its derivative is given by
$$g'(z) = g(z)\left(1 - g(z)\right).$$
The output of the hidden layer is as follows:
$$h_i = g(w_i t + b_i) = \frac{1}{1 + e^{-(w_i t + b_i)}}, \quad i = 1, 2, \ldots, n,$$
where $b_i$ are the biases. The weights of the second layer are $v_{ij}$, $i, j = 1, 2, \ldots, n$. The final output of the network is as follows:
$$U(t) = \left[u(t,x_0), u(t,x_1), \ldots, u(t,x_n)\right]^T,$$
where
$$u(t, x_k) = \sum_{i=1}^{n} \frac{v_{ki}}{1 + e^{-(w_i t + b_i)}}, \quad k = 1, 2, \ldots, n.$$
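The forward pass of this one-hidden-layer network, and the analytic time derivative used in the next subsection, can be sketched in NumPy (the weights here are random placeholders, not trained values); the derivative is checked against central finite differences:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def network(t, W, b, V):
    """U(t) = V @ sigmoid(W*t + b): one scalar input, n outputs u(t, x_k)."""
    return V @ sigmoid(W * t + b)

def network_dt(t, W, b, V):
    """Analytic time derivative, using sigma'(z) = sigma(z) * (1 - sigma(z))."""
    h = sigmoid(W * t + b)
    return V @ (W * h * (1.0 - h))

rng = np.random.default_rng(0)
n = 8
W, b, V = rng.normal(size=n), rng.normal(size=n), rng.normal(size=(n, n))
t, eps = 0.3, 1e-6
fd = (network(t + eps, W, b, V) - network(t - eps, W, b, V)) / (2 * eps)
print(np.allclose(network_dt(t, W, b, V), fd, atol=1e-6))
```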

3.2. Fractional Differentiation of the Neural Network and the Construction of the Loss Function

From the results in Section 3.1, we obtain the following:
$$\frac{\partial u(t,x_k)}{\partial t} = \sum_{i=1}^{n} \frac{v_{ki}\, w_i}{1 + e^{-(w_i t + b_i)}} \left(1 - \frac{1}{1 + e^{-(w_i t + b_i)}}\right), \quad k = 1, 2, \ldots, n.
$$
Then, we substitute $\frac{\partial u(t,x_k)}{\partial t}$ into (5) to replace $u_t\!\left(\frac{t(z+1)}{2}, x\right)$ in the fractional derivative
$$D_0^{\alpha} u(t,x_k) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{\partial u(s,x_k)/\partial s}{(t-s)^{\alpha}}\, ds.$$
By using Gauss–Jacobi quadrature we obtain the following:
$$D_0^{\alpha} u_k(t) \approx \frac{1}{\Gamma(1-\alpha)} \sum_{i=1}^{m} w_i^*\, g_k\!\left(\frac{t(p_i+1)}{2}\right), \quad k = 1, 2, \ldots, n,$$
where $p_i$ are the roots of the Jacobi polynomial, $w_i^*$, $i = 1, 2, \ldots, m$, are the quadrature weights, and
$$g_k\!\left(\frac{t(z+1)}{2}\right) = \left(\frac{t}{2}\right)^{1-\alpha} u_t\!\left(\frac{t(z+1)}{2}, x_k\right), \quad k = 0, 1, 2, \ldots, n.$$
Then, we construct the following loss function for $u(t,x_k)$, $k = 0, 1, 2, \ldots, n$:
$$\frac{1}{2} \left\| \frac{1}{\Gamma(1-\alpha)} \sum_{i=1}^{m} w_i^*\, g_k\!\left(\frac{t(p_i+1)}{2}\right) - \frac{\partial^2 u(t,x_k)}{\partial x^2} \right\|_{t \in [0,1]}^2 + \frac{1}{2} \left(u(0,x_k) - f(x_k)\right)^2.$$
We use the central difference to replace
$$\frac{\partial^2 u(t,x_k)}{\partial x^2} \approx \frac{u(t,x_{k+1}) - 2u(t,x_k) + u(t,x_{k-1})}{h^2}$$
in the following equation:
$$\frac{\partial^{\alpha} u}{\partial t^{\alpha}} = a\, \frac{\partial^2 u}{\partial x^2},$$
to obtain the loss function for the entire network as follows:
$$
\begin{aligned}
L(u(t,X,\theta)) = {} & \frac{1}{2} \sum_{k=1}^{n-1} \left\| \frac{1}{\Gamma(1-\alpha)} \sum_{i=1}^{m} w_i^*\, g_k\!\left(\frac{t(p_i+1)}{2}\right) - a\, \frac{u(t,x_{k+1}) - 2u(t,x_k) + u(t,x_{k-1})}{h^2} \right\|_{t \in [0,1]}^2 \\
& + \frac{1}{2} \sum_{k=0}^{n} \left(u(0,x_k) - f(x_k)\right)^2 \\
& + \frac{1}{2} \left\| u(t,0) - u_0(t) \right\|_{t \in [0,1]}^2 + \frac{1}{2} \left\| u(t,L) - u_L(t) \right\|_{t \in [0,1]}^2,
\end{aligned}
$$
where θ is the parameter vector to be pre-trained.
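The central-difference replacement used inside the loss can be checked in isolation; a sketch on $u = \sin x$, whose exact second derivative is $-\sin x$ (grid size 200 is our own choice):

```python
import numpy as np

# u_xx(x_k) ~ (u_{k+1} - 2 u_k + u_{k-1}) / h^2 on a uniform grid.
L, n = 2 * np.pi, 200
x = np.linspace(0.0, L, n + 1)
h = L / n
u = np.sin(x)
u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2    # interior points k = 1..n-1
err = np.max(np.abs(u_xx - (-np.sin(x[1:-1]))))   # compare with exact u'' = -sin x
print(err < 1e-3)
```

The error is $O(h^2)$, so halving $h$ should reduce it roughly fourfold.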

3.3. Construction of the Loss Function for 2D and 3D Problem

In general, we consider the following $d$-dimensional ($d = 2, 3$) fractional diffusion equation:
$$
\begin{cases}
D_t^{\alpha} u(t,X) = a\, \nabla^2 u(t,X), & t \in [0,T],\; X \in \Omega \subset \mathbb{R}^d, \\
u(t,X) = g(t), & X \in \partial\Omega, \\
u(0,X) = u_0(X), & X \in \Omega.
\end{cases}
$$
We can construct the neural network N ( t , X , θ ) where θ is the parameter vector of the neural network to approximate the solution, and the loss function can be constructed as follows:
$$L(u(t,X,\theta)) = \frac{1}{2} \left\| D_t^{\alpha} u - a\, \nabla^2 u \right\|_{\Omega \times [0,T]}^2 + \frac{1}{2} \left\| u(t) - g(t) \right\|_{\partial\Omega \times [0,T]}^2 + \frac{1}{2} \left\| u(0,X) - u_0(X) \right\|_{t=0,\; X \in \Omega}^2.$$
The fractional derivatives in the above formula can be approximated in the same way as in the 1D case. Typically, the following gradient descent process is used to minimize the loss function and thereby pre-train the neural network:
$$\theta_{n+1} = \theta_n - \eta \cdot \nabla_{\theta} L(\theta_n),$$
where $\eta$ is the learning rate. In general, calculating the gradient of the loss function $L(u(t,X,\theta))$ is an arduous task. To overcome this difficulty, we utilize TensorFlow 2.8.0. TensorFlow is a second-generation machine learning system developed by Google based on DistBelief. It uses tensors to represent N-dimensional arrays and data-flow graphs to describe how tensors flow from one end of the graph to the other, passing complex data structures through artificial neural networks for analysis and processing. Since TensorFlow supports automatic differentiation, it greatly facilitates the pre-training of neural networks.
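As a framework-free illustration of the update rule above (the paper relies on TensorFlow's automatic differentiation; here the gradient is hand-coded for a simple quadratic stand-in loss with a hypothetical minimizer):

```python
import numpy as np

target = np.array([1.0, -2.0, 0.5])   # hypothetical minimizer of the loss

def grad_L(theta):
    # Gradient of the stand-in loss L(theta) = ||theta - target||^2.
    return 2.0 * (theta - target)

theta = np.zeros(3)
eta = 0.1                             # learning rate
for _ in range(300):
    theta = theta - eta * grad_L(theta)   # theta_{n+1} = theta_n - eta * grad L
print(np.allclose(theta, target))
```

Each step contracts the error by a constant factor, so the iterates converge geometrically to the minimizer.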
The loss function relies on numerical integration, and since there is no existing Gauss–Jacobi quadrature integration table, we need to calculate the integral nodes and weights first.
Figure 2. Neural network for 2D problem.

4. Numerical Tests

Problem 1.
Consider the following homogeneous problem.
$$t^{\alpha}\, \frac{\partial^{\alpha} u(t,x)}{\partial t^{\alpha}} = -\frac{\Gamma(3)}{\Gamma(3-\alpha)}\, \frac{\partial^2 u(t,x)}{\partial x^2}, \qquad u(t,0) = 0, \quad u(t,2\pi) = 0, \quad u(0,x) = 0,$$
where $0 \le t \le 1$ and $0 \le x \le 2\pi$.
The analytic solution to this problem is $u(t,x) = t^2 \sin x$. Using the Method of Lines approach, the solution can be represented as the following vector:
$$U(t) = \left[u_0(t), u_1(t), \ldots, u_n(t)\right]^T,$$
where each u i ( t ) corresponds to the solution at a discrete spatial point x i .
We employed a neural network $u(t,\theta)$ as illustrated in Figure 1, in which $t$ is the input, $U(t) = (u_0(t), u_1(t), \ldots, u_n(t))^T$ with $n = 20$ is the output, and $\theta$ is the parameter vector to be trained. In our method, we use 30 points selected from $0 \le t \le 1$ to train the neural network, minimizing the loss function with the gradient descent algorithm. TensorFlow was used as the software tool for coding the numerical test. The numerical solution ($\alpha = 0.6$) and the analytic solution are presented in Figure 3 and Figure 4, respectively. Figure 5 shows the loss function, and Figure 6 shows the squared errors of the numerical solution with respect to $t \in [0,1]$.
Figure 3. Numerical solution for Problem 1.
Figure 4. Analytic solution for Problem 1.
From Figure 3 and Figure 4, we observe that the well-trained neural network provides a good approximation to the analytic solution.
Figure 5. Loss function for u 5 ( t ) in Problem 1.
Figure 6. Square error with respect to t in Problem 1.
Problem 2.
Consider the following problem:
$$\frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} = \frac{\partial^2 u}{\partial x^2},$$
where
$$u(t,0) = 0, \qquad u(t,2) = 0, \qquad u(0,x) = \sin(\pi x),$$
and where $0 \le t \le 1$, $0 \le x \le 2$. The analytic solution to this problem is as follows:
$$u(x,t) = E_{\alpha}\!\left(-\pi^2 t^{\alpha}\right) \sin(\pi x),$$
where $E_{\alpha}(z)$ is the Mittag–Leffler function. By the methodology of the Method of Lines, the solution can be represented as the following vector:
$$U = \left[u_0(t), u_1(t), \ldots, u_n(t)\right]^T.$$
We use the neural network $u(t,\theta)$ in Figure 1, which takes $t$ as input and $(u_0(t), u_1(t), \ldots, u_{21}(t))$ as output, to solve this problem. The numerical solution ($\alpha = 0.8$) presented in Figure 7 is a good approximation to the analytic solution presented in Figure 8. Figure 9 shows the loss function for $u_3(t)$. In particular, we compare the numerical solution $u_7(t)$ with the analytic solution $u(t,x_7)$ in Figure 10, which indicates that the approximation is good.
Problem 3.
We consider the following problem:
$$\frac{\partial^{\alpha} u(x,y,t)}{\partial t^{\alpha}} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2},$$
where
$$u(x,y,0) = \sin(\pi x) \sin(\pi y), \qquad u(0,y,t) = u(2,y,t) = u(x,0,t) = u(x,2,t) = 0.$$
The analytic solution to this problem is as follows:
$$u(x,y,t) = E_{\alpha}\!\left(-2\pi^2 t^{\alpha}\right) \sin(\pi x) \sin(\pi y), \qquad 0 \le t \le 1, \quad 0 \le x \le 2, \quad 0 \le y \le 2.$$
Using the Method of Lines, the solution can be represented as the following matrix:
$$
\begin{pmatrix}
u_{00}(t) & u_{01}(t) & \cdots & u_{0n}(t) \\
u_{10}(t) & u_{11}(t) & \cdots & u_{1n}(t) \\
\vdots & \vdots & \ddots & \vdots \\
u_{n0}(t) & u_{n1}(t) & \cdots & u_{nn}(t)
\end{pmatrix}.
$$
We use the neural network illustrated in Figure 2, with one hidden layer, which takes $t$ as input and outputs the $21 \times 21$ matrix
$$
\begin{pmatrix}
u_{0,0}(t) & u_{0,1}(t) & \cdots & u_{0,21}(t) \\
u_{1,0}(t) & u_{1,1}(t) & \cdots & u_{1,21}(t) \\
\vdots & \vdots & \ddots & \vdots \\
u_{21,0}(t) & u_{21,1}(t) & \cdots & u_{21,21}(t)
\end{pmatrix}.
$$
We show the approximation error at $t = 0.3$ and $\alpha = 0.5$ in Figure 11, and at $t = 0.7$ and $\alpha = 0.9$ in Figure 12.
Figure 11. Numerical error for Problem 3 when t = 0.3.
Figure 12. Numerical error for Problem 3 when t = 0.7.

5. Conclusions

We have proposed a neural network method for solving time-fractional diffusion equations with initial and boundary conditions by combining machine learning techniques with the Method of Lines. The solution $u(t,X)$ is represented by the network $u(t,\theta)$. To address the difficulty of differentiating the fractional operator within the neural network, we employ Gauss–Jacobi quadrature to approximate the fractional derivative in the time domain, which is used to construct the loss function. TensorFlow is utilized to implement the gradient descent algorithm for training the neural network. Numerical experiments demonstrate that this method is straightforward to implement and effective for 1D, 2D, and 3D time-fractional diffusion problems.

Author Contributions

F.G., methodology; C.C., software; C.C. and F.G., formal analysis; F.G., resources; F.G., writing—original draft; C.C., numerical tests. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Shandong Provincial Natural Science Foundation, grant number ZR2019BA037.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shiri, B.; Baleanu, D. All linear fractional derivatives with power functions’ convolution kernel and interpolation properties. Chaos Solitons Fractals 2023, 170, 113399. [Google Scholar] [CrossRef]
  2. Shiri, B.; Kong, H.; Wu, G.C.; Luo, C. Adaptive Learning Neural Network Method for Solving Time–Fractional Diffusion Equations. Neural Comput. 2022, 4, 971–990. [Google Scholar] [CrossRef]
  3. Ma, C.H.; Shiri, B.; Wu, G.; Baleanu, D. New fractional signal smoothing equations with short memory and variable order. Optik 2020, 218, 164507. [Google Scholar] [CrossRef]
  4. Wu, Z.; Zhang, X.; Wang, J.; Zeng, X. Applications of Fractional Differentiation Matrices in Solving Caputo Fractional Differential Equations. Fractal Fract. 2023, 7, 374. [Google Scholar] [CrossRef]
  5. Kuzenov, V.V.; Ryzhkov, S.V.; Varaksin, A.Y. Development of a Method for Solving Elliptic Differential Equations Based on a Nonlinear Compact-Polynomial Scheme. J. Comput. Appl. Math. 2024, 451, 116098. [Google Scholar] [CrossRef]
  6. Wang, Y.; Cai, M. Finite Difference Schemes for Time-Space Fractional Diffusion Equations in One- and Two-Dimensions. Commun. Appl. Math. Comput. 2023, 5, 1674–1696. [Google Scholar] [CrossRef]
  7. Zhou, Z.; Gong, W. Finite element approximation of optimal control problems governed by time fractional diffusion equation. Comput. Math. Appl. 2016, 71, 301–318. [Google Scholar] [CrossRef]
  8. Zhang, Y.N.; Sun, Z.Z.; Liao, H.L. Finite difference methods for the time fractional diffusion equation on non-uniform meshes. J. Comput. Phys. 2014, 265, 195–210. [Google Scholar] [CrossRef]
  9. Lin, Y.; Xu, C. Finite difference/spectral approximations for the time-fractional diffusion equation. J. Comput. Phys. 2007, 225, 1533–1552. [Google Scholar] [CrossRef]
  10. Wang, Z.; Li, M. Super convergence analysis of anisotropic finite element method for the time fractional substantial diffusion equation with smooth and non-smooth solutions. Math. Methods Appl. Sci. 2023, 46, 5545–5560. [Google Scholar] [CrossRef]
  11. Kazem, S.; Dehghan, M. Semi-analytical solution for time-fractional diffusion equation based on finite difference method of lines (MOL). Eng. Comput. 2019, 35, 229–241. [Google Scholar] [CrossRef]
  12. Zhu, H.; Xu, C. A Highly Efficient Numerical Method for the Time-Fractional Diffusion Equation on Unbounded Domains. J. Sci. Comput. 2024, 99, 1.1–1.34. [Google Scholar] [CrossRef]
  13. Dehghan, M.; Manafian, J.; Saadatmandi, A. Solving nonlinear fractional partial differential equations using the homotopy analysis method. Numer. Methods Part. Differ. Equ. 2010, 26, 448–479. [Google Scholar] [CrossRef]
  14. Khan, Y.; Wu, Q.; Faraz, N.; Yildirim, A.; Madani, M. A new fractional analytical approach via a modified Riemann–Liouville derivative. Appl. Math. Lett. 2012, 25, 1340–1346. [Google Scholar] [CrossRef]
  15. Khan, Y.; Sayev, K.; Fardi, M.; Ghasemi, M. A novel computing multi-parametric homotopy approach for system of linear and nonlinear Fredholm integral equations. Appl. Math. Comput. 2014, 249, 229–236. [Google Scholar] [CrossRef]
  16. Jleli, M.; Kumar, S.; Kumar, R.; Samet, B. Analytical approach for time fractional wave equations in the sense of Yang-Abdel-Aty-Cattani via the homotopy perturbation transform method. Alex. Eng. J. 2020, 59, 2859–2863. [Google Scholar] [CrossRef]
  17. Singh, S.; Singh, S.; Aggarwal, A. A new spline technique for the time fractional diffusion-wave equation. MethodsX 2023, 10, 102007. [Google Scholar] [CrossRef]
  18. Heath, M.T. Scientific Computing: An Introductory Survey, 2nd ed.; McGraw-Hill: New York, NY, USA, 2002. [Google Scholar]
  19. Garrappa, R.; Popolizio, M. Computing the Matrix Mittag-Leffler Function with Applications to Fractional Calculus. J. Sci. Comput. 2018, 77, 129–153. [Google Scholar] [CrossRef]
  20. Abbasb, Y.S.; Kazem, S.; Alhuthali, M.S.; Alsulami, H.H. Application of the operational matrix of fractional-order Legendre functions for solving the time-fractional convection-diffusion equation. Appl. Math. Comput. 2015, 266, 31–40. [Google Scholar] [CrossRef]
  21. Gao, F.; Chi, C. Solving FDE by trigonometric neural network and its applications in simulating fractional HIV model and fractional Schrodinger equation. Math. Methods Appl. Sci. 2025, 46, 3132–3142. [Google Scholar] [CrossRef]
  22. Gao, F.; Dong, Y.; Chi, C. Solving Fractional Differential Equations by Using Triangle Neural Network. J. Funct. Spaces 2021, 2021, 5589905. [Google Scholar] [CrossRef]
  23. Capelas de Oliveira, E. Mittag-Leffler Functions. In Solved Exercises in Fractional Calculus; Studies in Systems, Decision and Control; Springer: Cham, Switzerland, 2019; Volume 240. [Google Scholar]
  24. Pang, G.; Chen, W.; Sze, K.Y. Gauss-Jacobi-type quadrature rules for fractional directional integrals. Comput. Math. Appl. 2013, 66, 597–607. [Google Scholar] [CrossRef]
  25. Hale, N.; Townsend, A. Fast and accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights. Siam J. Sci. Comput. 2013, 35, A652–A674. [Google Scholar] [CrossRef]
  26. Fang, X.; Qiao, L.; Zhang, F.; Sun, F. Explore deep network for a class of fractional partial differential equations. Chaos Solitons Fractals 2023, 172, 113528. [Google Scholar] [CrossRef]
  27. Pakdaman, M.; Ahmadian, A.; Effati, S.; Salahshour, S.; Baleanu, D. Solving differential equations of fractional order using an optimization technique based on training artificial neural network. Appl. Math. Comput. 2017, 293, 81–95. [Google Scholar] [CrossRef]
Figure 1. Neural network for a 1D problem.
Figure 7. Numerical solution for Problem 2.
Figure 8. Analytic solution for Problem 2.
Figure 9. Loss function for Problem 2.
Figure 10. Comparison between numerical and exact solution when x = 1.8 for Problem 2.
Table 1. Gauss–Jacobi quadrature nodes and weights when n = 10.

z_i   −0.972608829   −0.858483753   −0.664343304   −0.408234585   −0.114022629
w_i    0.049819936    0.114838185    0.177263321    0.235542195    0.288301916
z_i    0.190871430    0.478029813    0.720687519    0.896227212    0.988287383
w_i    0.334304620    0.372471717    0.401908490    0.421924921    0.432051824