Article

Galerkin Method for a Backward Problem of Time-Space Fractional Symmetric Diffusion Equation

School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2023, 15(5), 1057; https://doi.org/10.3390/sym15051057
Submission received: 31 March 2023 / Revised: 4 May 2023 / Accepted: 7 May 2023 / Published: 10 May 2023

Abstract
We investigate a backward problem of the time-space fractional symmetric diffusion equation with a source term, in which the negative Laplace operator −Δ contained in the main equation belongs to the category of uniformly symmetric elliptic operators. The problem is ill-posed because the solution does not depend continuously on the measured data. In this paper, the existence and uniqueness of the solution and the conditional stability for the inverse problem are given and proven. Based on the least squares technique, we construct a Galerkin regularization method to overcome the ill-posedness of the considered problem. Under a priori and a posteriori selection rules for the regularization parameter, Hölder-type convergence results of optimal order for the proposed method are derived. Meanwhile, we verify the regularizing effect of our method by carrying out some numerical experiments in which the initial value function is either a smooth or a non-smooth function. Numerical results show that the method works well in dealing with the backward problem of the time-space fractional parabolic equation.

1. Introduction

In the fields of thermodynamics and biology, classical integer-order parabolic equations are used to describe normal diffusion phenomena. However, with the development of technology and driven by the in-depth study of some practical scientific problems, researchers have found that more and more anomalous diffusion phenomena appear in the real world, such as diffusion in media with memory and genetic characteristics. For this reason, with the help of fractional calculus theories and methods, mathematicians and physicists have derived various time or space fractional parabolic equations. Here, the time fractional derivative in a fractional parabolic equation can be used to describe particle sticking and trapping phenomena, and the space fractional derivative usually describes long particle jumps. In the past few decades, the direct problems of fractional parabolic equations have attracted more and more attention, and fractional calculus has been used in many scientific fields, such as bioengineering, pharmacokinetics, tissue dynamics, economics, and epidemiology [1,2,3,4]. In recent years, in view of the requirement of dealing with some practical problems, more and more people are focusing on the inverse problems of fractional parabolic equations, which mainly include the inverse source problem [5], the parameter identification problem [6], the sideways problem [7], the backward problem in time (inverse initial value problem or final value problem) [8], etc.
In the present paper, we consider the backward problem of the time-space fractional parabolic equation:
$$
\begin{cases}
\dfrac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} + (-\Delta)^{\frac{\beta}{2}} u(x,t) = f(x,t), & x \in \Omega,\; 0 < t < T, \\
u(x,t) = 0, & x \in \partial\Omega,\; 0 < t < T, \\
u(x,T) = g(x), & x \in \Omega,
\end{cases}
$$
where Ω := (−1, 1), T > 0 is a fixed final time, and α ∈ (0, 1) and β ∈ (1, 2) are the fractional orders of the time and space derivatives, respectively. The fractional derivative ∂^α u/∂t^α denotes the Caputo fractional derivative of u(x, t) with respect to the time variable, which is defined as [9]:
$$
\frac{\partial^{\alpha} u}{\partial t^{\alpha}} =
\begin{cases}
\dfrac{1}{\Gamma(1-\alpha)} \displaystyle\int_{0}^{t} \frac{\partial u(x,s)}{\partial s}\, \frac{ds}{(t-s)^{\alpha}}, & 0 < \alpha < 1, \\[2mm]
\dfrac{\partial u(x,t)}{\partial t}, & \alpha = 1,
\end{cases}
$$
and Γ(·) is the Gamma function. The fractional Laplacian operator (−Δ)^{β/2} of order β (1 < β < 2) is defined by using the spectral decomposition of the Laplace operator; the definition is summarized in Section 2, see also [10]. g(x) ∈ L²(Ω) is the exact final value data, and f(x, t) ∈ L^∞(0, T; L²(Ω)) denotes the source term. Let δ > 0 be the bound of the measurement error, and denote by g^δ(x) and f^δ(x, t) the noisy final value and source term data, respectively, which satisfy:
$$
\|g^{\delta}(x) - g(x)\|_{L^{2}(\Omega)} \le \delta,
$$
$$
\|f^{\delta}(x,t) - f(x,t)\|_{L^{\infty}(0,T;L^{2}(\Omega))} \le \delta.
$$
Our task is to determine the initial value u(x, 0) from the noisy data g^δ(x) and f^δ(x, t).
Note that when α = 1, β = 2, the governing equation of (1) is the classical diffusion equation. This equation is usually used to describe the normal diffusion phenomenon, in which the diffusion flux follows Fick’s law; in physics and biology, the classical diffusion equation is also used to describe the density distribution of particles in Brownian motion. When 0 < α < 1, β = 2, the governing equation of (1) is called the time-fractional diffusion equation, which can be used to describe some anomalous diffusion phenomena, since many anomalous diffusions do not follow Fick’s law but rather the time-fractional Fick’s law; in physics and biology, the time-fractional diffusion equation is also used to describe the density distribution of particles in fractional Brownian motion. When α = 1, 1 < β < 2, the governing equation of (1) is called the space-fractional diffusion equation, which often describes the diffusion law of particles driven by Lévy flights. When 0 < α < 1, 1 < β < 2, as is the case in this paper, the governing equation of (1) can be used to describe the diffusion law of particles that simultaneously follow the Lévy and fractional Brownian processes; see [11,12].
We know that the backward problem of the time-fractional parabolic equation ( 0 < α < 1 , β = 2 ) has been studied by many researchers, and some meaningful regularization methods have been presented and used to overcome the ill-posedness—for instance, the quasi-reversible method [13,14], total variation method [15], iterative fractional Tikhonov method [16], projection method [17], truncation method [18], simplified Tikhonov method [19], and fractional Tikhonov method [20]. For the backward problem of the space-fractional parabolic equation ( α = 1 , 1 < β < 2 ) , there also have been some research works in which some regularization methods have been developed, such as the convolution and spectral methods [21], iteration method [22], simplified Tikhonov method [23], the logarithmic, negative exponential and fractional Tikhonov methods [24,25,26], optimal method [27], modified kernel method [28], and generalized Tikhonov method [29]. For the backward problem of the time-space fractional parabolic equation, some works have recently been published, in which some interesting and efficient regularization methods have been proposed—for instance, the fundamental kernel-based method [30], variable total variation method [31], modified iterated Lavrentiev method [32], fractional Tikhonov method [33,34], quasi-boundary value method [35], modified Tikhonov method [36], and modified quasi-boundary value method [37].
Following the above existing works, this paper continues to consider the backward problem (1). There are many examples of anomalous diffusion in physics, such as the diffusion of pollen or haze. In the diffusion process, we only know the solute concentration at a certain position at the current moment. However, we often hope to know the solute concentration at the initial time. This problem can be transformed into solving the backward problem of the time-space fractional diffusion Equation (1), which is an ill-posed problem. Based on the ill-posedness analysis, the results of the existence and uniqueness of the solution and the conditional stability are given and proven. Then, we construct a Galerkin regularization method based on the least squares technique to recover the stability of the solution and apply it to solve the backward problem of the time-space fractional diffusion equation. Meanwhile, with the help of the conditional stability result, the Hölder-type a priori and a posteriori convergence results for the regularization method are derived. Finally, we solve a forward problem to construct the exact data, and compute the exact and regularized solutions through their respective analytic expressions. The simulation performance of the regularization method is also verified by carrying out some numerical experiments.
The Galerkin method is an excellent numerical method, because it employs an approximation space composed of discontinuous piecewise functions. These discontinuities occur at the boundaries of the partitioning elements, while the functions are continuous inside the elements. Therefore, this method is often used to treat the difficulties faced in the finite element method; see [38]. Recently, various Galerkin methods have been developed and used in several fields of inverse problems. For instance, Ref. [39] presented and used a local discontinuous Galerkin method to identify the space-dependent source term in a time-fractional diffusion equation. In [40], the authors used the local discontinuous Galerkin method to study a time-fractional diffusion inverse problem subject to an extra measurement. Ref. [41] constructed a spectral Galerkin method to solve a Cauchy problem of the Helmholtz equation. For other works related to this type of method, please see [42,43], etc. The method in this paper is a discrete regularization method. Its basic idea is as follows: under the action of an orthogonal projection, an ill-posed operator equation in an infinite dimensional space is transformed into a linear system in a finite dimensional subspace, and the regularization solution of the original inverse problem is constructed with the help of the least squares technique. Here, the dimension of the finite dimensional subspace plays the role of the regularization parameter. From the numerical algorithm perspective, this method has low computational complexity and is easy to implement. In addition, the Galerkin method has a high-order convergence property in the sense of weak norms, and it has many applications in specific problems, such as boundary element methods for solving boundary value problems. For the general theory of discrete regularization methods, we refer to [44,45].
The rest of this article is arranged as follows. In Section 2, we present some preparatory knowledge. Section 3 presents and proves the results of existence and uniqueness, as well as the conditional stability for the inverse problem. Section 4 describes the construction procedure of the regularization method. Section 5 derives a priori and a posteriori convergence results of the regularization method. In Section 6, we verify the simulation effect of our method by performing some numerical experiments. Some conclusions and discussions are presented in Section 7.

2. Preliminary Knowledge

We first need a few properties of the eigenvalues of the negative Laplace operator −Δ; please see [9,46], etc.
Proposition 1. 
1. Denote the eigenvalues of −Δ as λ̄_k (k = 1, 2, ⋯), and suppose that 0 < λ̄_1 ≤ λ̄_2 ≤ ⋯ ≤ λ̄_k ≤ ⋯, with λ̄_k → ∞ as k → ∞. 2. Let {w_k(x)}_{k=1}^∞ be the corresponding eigenfunctions; thus, (λ̄_k, w_k(x)) satisfy the boundary value problem:
$$
\begin{cases}
-\Delta w_{k}(x) = \bar{\lambda}_{k}\, w_{k}(x), & x \in \Omega, \\
w_{k}(x) = 0, & x \in \partial\Omega.
\end{cases}
$$
Let:
$$
H_{0}^{\beta}(\Omega) := \Big\{ u = \sum_{k=1}^{\infty} a_{k} w_{k} :\; \|u\|_{H_{0}^{\beta}(\Omega)}^{2} = \sum_{k=1}^{\infty} a_{k}^{2}\, \bar{\lambda}_{k}^{\beta} < +\infty \Big\};
$$
then, for u ∈ H_0^β(Ω), we define the operator (−Δ)^{β/2} by:
$$
(-\Delta)^{\frac{\beta}{2}} u = \sum_{k=1}^{\infty} a_{k}\, \bar{\lambda}_{k}^{\frac{\beta}{2}}\, w_{k}(x),
$$
which maps H_0^β(Ω) into L²(Ω), with the norm equivalence:
$$
\|u\|_{H_{0}^{\beta}(\Omega)} = \big\|(-\Delta)^{\frac{\beta}{2}} u\big\|_{L^{2}(\Omega)}.
$$
In the following, we introduce some further preliminaries, which are useful for proving the related theoretical results.
Definition 1 
([9]). The two-parameter Mittag–Leffler function is defined as:
$$
E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)}, \quad z \in \mathbb{C},
$$
where α > 0 and β ∈ ℝ are arbitrary constants.
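For readers who want to experiment numerically, the following is a minimal Python sketch of E_{α,β}(z) obtained directly from Definition 1 by truncating the power series; the function name and truncation level are our own choices, not part of the paper. The plain series is only reliable for small-to-moderate |z| (and converges very slowly for small α), so for the large negative arguments appearing later, such as E_{α,1}(−(kπ/2)^β T^α), a dedicated algorithm such as the MATLAB routine of Podlubny [51] used in Section 6, or an asymptotic/integral scheme, should be preferred.

```python
import math

def mittag_leffler(z, alpha, beta=1.0, n_terms=100):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z), computed by
    truncating the power series of Definition 1.  Only adequate for
    small-to-moderate |z|; not a substitute for the robust routine of [51]."""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(n_terms))

# sanity checks against known special cases
print(mittag_leffler(1.0, 1.0), math.exp(1.0))        # E_{1,1}(z) = e^z
print(mittag_leffler(-2.0, 2.0, 2.0),                 # E_{2,2}(-x^2) = sin(x)/x
      math.sin(math.sqrt(2.0)) / math.sqrt(2.0))
```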
Lemma 1 
([47]). For λ > 0, α > 0, and any positive integer m ∈ ℕ⁺, we have:
$$
\frac{d^{m}}{dt^{m}} E_{\alpha,1}\big(-\lambda t^{\alpha}\big) = -\lambda\, t^{\alpha-m}\, E_{\alpha,\alpha-m+1}\big(-\lambda t^{\alpha}\big), \quad t > 0.
$$
Lemma 2 
([48]). For 0 < α < 1 and η > 0, we have 0 < E_{α,1}(−η) < 1. Moreover, E_{α,1}(−η) is completely monotonic, that is:
$$
(-1)^{n} \frac{d^{n}}{d\eta^{n}} E_{\alpha,1}(-\eta) \ge 0, \quad n = 0, 1, 2, \ldots
$$
Lemma 3 
([13]). Assume that 0 < α₀ < α₁ < 1; then, there exist constants C₁, C₂ > 0, depending only on α₀, α₁, such that for all α ∈ [α₀, α₁]:
$$
\frac{C_{1}\,\Gamma(1-\alpha)^{-1}}{1+x} \le E_{\alpha,1}(-x) \le \frac{C_{2}\,\Gamma(1-\alpha)^{-1}}{1+x}, \quad \text{for all } x \ge 0.
$$
Lemma 4. 
For λ̄_k = (kπ/2)² and any positive integer k, there exist positive constants C₃ and C₄ such that:
$$
\frac{C_{3}\, 2^{\beta}}{k^{\beta}\pi^{\beta}} \le E_{\alpha,1}\big(-(k\pi/2)^{\beta}\, T^{\alpha}\big) \le \frac{C_{4}\, 2^{\beta}}{k^{\beta}\pi^{\beta}},
$$
where C₃ = C₁ Γ(1−α)^{−1} / ((2/π)^β + T^α) and C₄ = C₂ Γ(1−α)^{−1} / T^α.
Proof. 
It can be proven by applying Lemma 3 and a method similar to that in [35]. □
Lemma 5. 
Let f(x, t) ∈ L^∞(0, T; L²(Ω)); then, the following estimate holds:
$$
\sum_{k=1}^{\infty} \left( \int_{0}^{t} (t-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-(k\pi/2)^{\beta} (t-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau \right)^{2} \le C_{5}\, \|f\|_{L^{\infty}(0,T;L^{2}(\Omega))}^{2},
$$
where C₅ = ∑_{k=1}^∞ (2^β/(k^β π^β))².
Proof. 
By using Lemma 1 with m = 1, λ = (kπ/2)^β, Lemma 2, and a method similar to that in [16], this lemma can be proven. □

3. Existence and Uniqueness of the Solution and Conditional Stability

3.1. Existence and Uniqueness of the Solution

In order to give a mathematical formulation of the inverse problem, we first consider the forward problem:
$$
\begin{cases}
\dfrac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} + (-\Delta)^{\frac{\beta}{2}} u(x,t) = f(x,t), & x \in \Omega,\; 0 < t < T, \\
u(x,t) = 0, & x \in \partial\Omega,\; 0 < t < T, \\
u(x,0) = h(x), & x \in \Omega.
\end{cases}
$$
By separation of variables and the Laplace transform of the Mittag–Leffler function, the formal solution of the forward problem (12) can be expressed as:
$$
u(x,t) = \sum_{k=1}^{\infty} \left[ h_{k}\, E_{\alpha,1}\big(-\bar{\lambda}_{k}^{\frac{\beta}{2}} t^{\alpha}\big) + \int_{0}^{t} (t-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-\bar{\lambda}_{k}^{\frac{\beta}{2}} (t-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau \right] w_{k}(x),
$$
where h_k = ⟨h(x), w_k(x)⟩, f_k(t) = ⟨f(x, t), w_k(x)⟩, and ⟨·, ·⟩ denotes the inner product in L²(Ω). Additionally, λ̄_k = k²π²/4, with w_k(x) = sin(kπx/2) when k is even and w_k(x) = cos(kπx/2) when k is odd. It is easy to check that the eigenfunctions {w_k(x)}_{k=1}^∞ form an orthonormal basis of L²(Ω).
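To make the series representation (13) concrete, here is a small Python sketch (ours, not from the paper) that assembles the final data g(x) = u(x, T) from the eigenpairs λ̄_k = k²π²/4 and w_k above, for an illustrative smooth initial value and the source f(x, t) = t sin(x). The helper ml_neg is a rough hybrid evaluator of E_{a,b}(−x) (truncated power series for small x, leading asymptotic terms for large x); it is adequate only for the moderate α used here, and the paper itself relies on the algorithm of [51]. The midpoint rule is used for the weakly singular time convolution.

```python
import numpy as np
from math import gamma, pi

def rgamma(z):
    # 1/Gamma(z), set to 0 at the poles z = 0, -1, -2, ...
    return 0.0 if (z <= 0 and abs(z - round(z)) < 1e-12) else 1.0 / gamma(z)

def ml_neg(x, a, b=1.0):
    # rough E_{a,b}(-x), x >= 0: truncated power series for small x,
    # leading asymptotic terms for large x (use the routine of [51] for real work)
    if x < 10.0:
        return sum((-x) ** k * rgamma(a * k + b) for k in range(200))
    return sum(-((-x) ** (-k)) * rgamma(b - a * k) for k in range(1, 5))

alpha, beta, T = 0.7, 1.7, 1.0        # moderate alpha keeps ml_neg usable
K, M = 20, 400                        # retained eigenmodes, time-quadrature points
x = np.linspace(-1.0, 1.0, 401)
dx = x[1] - x[0]

def w(k, xx):                         # orthonormal eigenfunctions on (-1, 1)
    return np.sin(k * pi * xx / 2) if k % 2 == 0 else np.cos(k * pi * xx / 2)

h = lambda xx: (1.0 - xx ** 2) * np.sin(pi * xx)   # illustrative smooth initial value
inner = lambda u, v: float(np.dot(u, v)) * dx      # crude L2(-1,1) inner product

tau = (np.arange(M) + 0.5) * T / M    # midpoints avoid the (T-tau)^(alpha-1) singularity
dtau = T / M

g = np.zeros_like(x)                  # final data g(x) = u(x,T) from the series (13)
for k in range(1, K + 1):
    lam = (k * pi / 2) ** beta        # = \bar{lambda}_k^{beta/2}
    hk = inner(h(x), w(k, x))         # <h, w_k>
    ck = inner(np.sin(x), w(k, x))    # spatial coefficient of f(x,t) = t*sin(x)
    conv = sum((T - s) ** (alpha - 1) * ml_neg(lam * (T - s) ** alpha, alpha, alpha)
               * (s * ck) * dtau for s in tau)
    g += (hk * ml_neg(lam * T ** alpha, alpha) + conv) * w(k, x)
print(g[::100])                       # a few values of the synthetic final data
```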
Letting u(x, T) = g(x) and substituting it into Expression (13), we obtain:
$$
g(x) = \sum_{k=1}^{\infty} \left[ h_{k}\, E_{\alpha,1}\big(-\bar{\lambda}_{k}^{\frac{\beta}{2}} T^{\alpha}\big) + \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-\bar{\lambda}_{k}^{\frac{\beta}{2}} (T-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau \right] w_{k}(x).
$$
According to the Fourier series expansion g(x) = ∑_{k=1}^∞ g_k w_k(x), we have:
$$
g_{k} = h_{k}\, E_{\alpha,1}\big(-\bar{\lambda}_{k}^{\frac{\beta}{2}} T^{\alpha}\big) + \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-\bar{\lambda}_{k}^{\frac{\beta}{2}} (T-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau.
$$
Hence,
$$
h_{k} = \frac{g_{k} - \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-\bar{\lambda}_{k}^{\frac{\beta}{2}} (T-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau}{E_{\alpha,1}\big(-\bar{\lambda}_{k}^{\frac{\beta}{2}} T^{\alpha}\big)}.
$$
Substituting (16) into (13), we can obtain the formal solution of the backward problem (1) as:
$$
u(x,t) = \sum_{k=1}^{\infty} \frac{E_{\alpha,1}\big(-(k\pi/2)^{\beta} t^{\alpha}\big)}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} \left[ g_{k} - \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-(k\pi/2)^{\beta} (T-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau \right] w_{k}(x) + \sum_{k=1}^{\infty} \left[ \int_{0}^{t} (t-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-(k\pi/2)^{\beta} (t-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau \right] w_{k}(x).
$$
At t = 0, the formal solution can be written as:
$$
u(x,0) = \sum_{k=1}^{\infty} \frac{g_{k} - \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-(k\pi/2)^{\beta} (T-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x).
$$
In fact, we can state and prove the existence, uniqueness, and stability of the solution to the backward problem (1) as follows.
Theorem 1. 
Let g(x) ∈ L²(Ω) and f(x, t) ∈ L^∞(0, T; L²(Ω)); then, problem (1) has a unique solution (17) if and only if ∑_{k=1}^∞ |⟨u(x, 0), w_k(x)⟩|² < +∞. Meanwhile, we have the following stability estimate:
$$
\|u(\cdot,t)\|_{L^{2}(\Omega)}^{2} \le 3 \left( \frac{C_{2}}{C_{1}} \cdot \frac{(2/\pi)^{\beta} + T^{\alpha}}{t^{\alpha}} \right)^{2} \|g\|_{L^{2}(\Omega)}^{2} + 6\, C_{5} \left( \frac{C_{2}}{C_{1}} \cdot \frac{(2/\pi)^{\beta} + T^{\alpha}}{t^{\alpha}} \right)^{2} \|f\|_{L^{\infty}(0,T;L^{2}(\Omega))}^{2}.
$$
Proof. 
Theorem 1 can be proven by applying an argument similar to that in [16] together with Lemma 3. □
Remark 1. 
From Result (19), it is easy to see that Problem (1) is well-posed for all t ∈ (0, T). This is different from the backward problem of the standard diffusion equation (α = 1, β = 2). The explanation of this phenomenon is that the current state depends on the past state, which is called the hereditary or memory property of the fractional derivative (see the descriptions in [16,19,49], etc.). However, the state at t = 0 is an exception: the solution at t = 0 does not depend continuously on the given data, but if it satisfies a certain a priori condition, one can establish the conditional stability of the solution at t = 0.

3.2. Conditional Stability

In this subsection, we derive the conditional stability estimate for the backward problem (1) at t = 0. Assume that there exists a constant E > 0 such that the solution of the backward problem satisfies the a priori condition:
$$
\|u(\cdot,0)\|_{H^{p}(\Omega)} \le E,
$$
where ‖u(·, 0)‖_{H^p(Ω)} is the norm in the Sobolev space H^p(Ω), defined by:
$$
\|u(\cdot,0)\|_{H^{p}(\Omega)} = \left( \sum_{k=1}^{\infty} \big(1+k^{2}\big)^{p}\, |\langle u(\cdot,0), w_{k} \rangle|^{2} \right)^{1/2}, \quad p > 0.
$$
Theorem 2. 
Assume that the solution u(·, 0) given by (18) satisfies the a priori condition (20), g(x) ∈ L²(Ω), and f(x, t) ∈ L^∞(0, T; L²(Ω)); then, we have:
$$
\|u(\cdot,0)\| \le C_{6}\, E^{\frac{\beta}{p+\beta}} \left( \|g\|_{L^{2}(\Omega)}^{2} + C_{5}\, \|f\|_{L^{\infty}(0,T;L^{2}(\Omega))}^{2} \right)^{\frac{p}{2(p+\beta)}},
$$
where C₆ = (2π^β/(C₃ 2^β))^{p/(p+β)} is a constant.
Proof. 
We denote ξ_k = g_k − ∫_0^T (T−τ)^{α−1} E_{α,α}(−(kπ/2)^β (T−τ)^α) f_k(τ) dτ and ξ(x) = ∑_{k=1}^∞ ξ_k w_k(x). Note that β/(p+β) + p/(p+β) = 1, with (β+p)/β, (p+β)/p > 1. By using the Hölder inequality, we obtain:
$$
\begin{aligned}
\|u(\cdot,0)\|^{2} &= \sum_{k=1}^{\infty} |\langle u(\cdot,0), w_{k}(x) \rangle|^{2} = \sum_{k=1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{2}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} = \sum_{k=1}^{\infty} \frac{1}{E_{\alpha,1}^{2}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, \xi_{k}^{\frac{2\beta}{p+\beta}}\, \xi_{k}^{\frac{2p}{p+\beta}} \\
&\le \left( \sum_{k=1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{\frac{2(\beta+p)}{\beta}}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} \right)^{\frac{\beta}{\beta+p}} \left( \sum_{k=1}^{\infty} \xi_{k}^{2} \right)^{\frac{p}{\beta+p}} := I_{1}^{\frac{\beta}{\beta+p}}\, I_{2}^{\frac{p}{\beta+p}}.
\end{aligned}
$$
By using the Parseval equality and Lemma 5, we can derive that:
$$
I_{2} = \sum_{k=1}^{\infty} \xi_{k}^{2} = \|\xi\|^{2} = \sum_{k=1}^{\infty} \left( g_{k} - \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-(k\pi/2)^{\beta}(T-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau \right)^{2} \le \sum_{k=1}^{\infty} \left( |g_{k}| + \Big| \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-(k\pi/2)^{\beta}(T-\tau)^{\alpha}\big)\, f_{k}(\tau)\, d\tau \Big| \right)^{2} \le 2 \left( \|g\|_{L^{2}(\Omega)}^{2} + C_{5}\, \|f\|_{L^{\infty}(0,T;L^{2}(\Omega))}^{2} \right).
$$
On the other hand, from Lemma 4 and the a priori condition (20), we have:
$$
\begin{aligned}
I_{1} &= \sum_{k=1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{\frac{2(\beta+p)}{\beta}}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} = \sum_{k=1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{2}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} \cdot \frac{1}{E_{\alpha,1}^{\frac{2p}{\beta}}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} \\
&\le \sum_{k=1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{2}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} \cdot \frac{(1+k^{2})^{p}}{(1+k^{2})^{p}} \left( \frac{k^{\beta}\pi^{\beta}}{C_{3}\, 2^{\beta}} \right)^{\frac{2p}{\beta}} \le E^{2} \sup_{k \ge 1} \frac{1}{(1+k^{2})^{p}} \left( \frac{k^{\beta}\pi^{\beta}}{C_{3}\, 2^{\beta}} \right)^{\frac{2p}{\beta}} \\
&\le E^{2} \sup_{k \ge 1} \frac{k^{2p}}{(1+k^{2})^{p}} \left( \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}} \right)^{\frac{2p}{\beta}} \le E^{2} \left( \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}} \right)^{\frac{2p}{\beta}}.
\end{aligned}
$$
Finally, we obtain:
$$
\|u(\cdot,0)\| \le \left( \frac{2\pi^{\beta}}{C_{3}\, 2^{\beta}} \right)^{\frac{p}{p+\beta}} E^{\frac{\beta}{p+\beta}} \left( \|g\|_{L^{2}(\Omega)}^{2} + C_{5}\, \|f\|_{L^{\infty}(0,T;L^{2}(\Omega))}^{2} \right)^{\frac{p}{2(p+\beta)}}. \qquad \square
$$

4. Regularization Method

According to the analysis and the conditional stability result in Section 3, we have found that Problem (1) is only ill-posed at the initial time t = 0. In this section, we first describe its ill-posedness in the form of an operator equation, and then construct a Galerkin regularization method to overcome the ill-posedness of Problem (1) at the initial time.

4.1. The Operator Equation and Ill-Posedness Analysis

Let us denote:
$$
\xi_{k}^{\delta} = g_{k}^{\delta} - \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-(k\pi/2)^{\beta} (T-\tau)^{\alpha}\big)\, f_{k}^{\delta}(\tau)\, d\tau,
$$
where g_k^δ = ⟨g^δ(x), w_k(x)⟩ and f_k^δ(τ) = ⟨f^δ(x, τ), w_k(x)⟩; thus, ξ^δ(x) = ∑_{k=1}^∞ ξ_k^δ w_k(x).
Based on (18), we define a linear operator K : L²(Ω) → L²(Ω) as follows:
$$
K u(x,0) = \sum_{k=1}^{\infty} E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)\, \langle u(x,0), w_{k}(x) \rangle\, w_{k}(x) = \int_{\Omega} A(x,\zeta)\, u(\zeta,0)\, d\zeta = \xi(x),
$$
where A(x, ζ) = ∑_{k=1}^∞ E_{α,1}(−(kπ/2)^β T^α) w_k(x) w_k(ζ). Note that A(x, ζ) = A(ζ, x); thus, K is a self-adjoint operator. Additionally, by adopting a procedure similar to that in [46], it can be proven that K is a compact operator, so (24) is a Fredholm integral equation of the first kind. This means that the problem of inverting the initial value u(x, 0) is ill-posed. In the following, we construct a Galerkin regularization method based on the least squares technique to recover the stability of the solution (i.e., to recover the continuous dependence of the solution on the measured data).
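As a quick numerical illustration of this ill-posedness (our own sketch, not part of the paper), the singular values θ_k = E_{α,1}(−(kπ/2)^β T^α) of K decay to zero like k^{−β} (cf. Lemma 4), so dividing noisy coefficients by θ_k, as exact inversion requires, amplifies high-frequency noise. The crude hybrid evaluator below is only indicative; the routine of [51] should be used for serious computations.

```python
from math import gamma, pi

def rgamma(z):
    # 1/Gamma(z), set to 0 at the poles z = 0, -1, -2, ...
    return 0.0 if (z <= 0 and abs(z - round(z)) < 1e-12) else 1.0 / gamma(z)

def ml_neg(x, a, b=1.0):
    # rough E_{a,b}(-x): truncated series (small x) / leading asymptotics (large x)
    if x < 10.0:
        return sum((-x) ** k * rgamma(a * k + b) for k in range(200))
    return sum(-((-x) ** (-k)) * rgamma(b - a * k) for k in range(1, 5))

alpha, beta, T = 0.7, 1.7, 1.0
for k in (1, 2, 5, 10, 20, 40):
    theta_k = ml_neg((k * pi / 2) ** beta * T ** alpha, alpha)   # singular value of K
    print(f"k = {k:3d}   theta_k ~ {theta_k:.3e}   1/theta_k ~ {1.0/theta_k:.3e}")
```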

4.2. Galerkin Regularization Method

Let K : L²(Ω) → L²(Ω) be the linear self-adjoint compact operator defined in (24), let U_N ⊂ L²(Ω) and Y_N ⊂ L²(Ω) be finite dimensional subspaces of dimension N, and let P_N : L²(Ω) → Y_N be a projection operator. We know that the singular values of K are θ_k = E_{α,1}(−(kπ/2)^β T^α), and the corresponding eigenvectors are w_k(x) for k = 1, 2, …. Assume that ⋃_{N∈ℕ⁺} U_N is dense in L²(Ω) and that P_N K|_{U_N} : U_N → Y_N is one-to-one. According to the references [44,45], in the original projection method, a regularized solution u_N(x, 0) with the parameter N satisfies the projection equation:
$$
P_{N} K u_{N}(x,0) = P_{N}\, \xi(x), \quad u_{N} \in U_{N} \subset L^{2}(\Omega),
$$
and the solution of Equation (25) can be represented as follows:
$$
u_{N}(x,0) = R_{N}\, \xi(x),
$$
where R_N := (P_N K|_{U_N})^{−1} P_N : L²(Ω) → U_N ⊂ L²(Ω) is defined as the regularization operator, and N plays the role of the regularization parameter.
Since the projection operator P_N is an orthogonal projection, the above method is called a Galerkin method. The projection Equation (25) can be formulated as the following Galerkin equation:
$$
\langle K u_{N}, Z \rangle = \langle \xi, Z \rangle, \quad \forall\, Z \in Y_{N} \subset L^{2}(\Omega).
$$
In particular, when choosing Y_N = K U_N in (27), one can establish a Galerkin regularization method based on the least squares technique, and the solution u_N ∈ U_N is characterized by:
$$
\langle K u_{N}, K z_{N} \rangle = \langle \xi, K z_{N} \rangle, \quad \forall\, z_{N} \in U_{N} \subset L^{2}(\Omega);
$$
since U_N is finite-dimensional and K : L²(Ω) → L²(Ω) is one-to-one, Equation (28) has a unique solution u_N ∈ U_N.
Let û_1, …, û_N and ŷ_1, …, ŷ_N be basis functions of the subspaces U_N and Y_N, respectively, and take û_k = w_k and ŷ_i = K w_i, i, k = 1, …, N; then,
$$
P_{N}\, \xi = \sum_{i=1}^{N} \gamma_{i}\, K w_{i}, \qquad P_{N} K w_{k} = \sum_{i=1}^{N} A_{ik}\, K w_{i}, \quad k = 1, \ldots, N.
$$
Setting:
$$
u_{N}(x,0) = \sum_{k=1}^{N} q_{k}\, w_{k}(x),
$$
then u_N(x, 0) is the solution of the projection Equation (25) if and only if q = (q_1, q_2, ⋯, q_N) satisfies the finite dimensional linear system:
$$
\sum_{k=1}^{N} A_{ik}\, q_{k} = \gamma_{i}, \quad i = 1, 2, \cdots, N.
$$
Taking Z = K w_i in the Galerkin Equation (27), the coefficients and right-hand side terms in (31) can be written as:
$$
A_{ik} = \langle K w_{k}, K w_{i} \rangle, \qquad \gamma_{i} = \langle \xi, K w_{i} \rangle, \quad i, k = 1, \ldots, N,
$$
i.e., Equation (31) can be expressed as:
$$
\sum_{k=1}^{N} q_{k}\, \langle K w_{k}, K w_{i} \rangle = \langle \xi, K w_{i} \rangle, \quad i = 1, 2, 3, \ldots, N.
$$
Since {w_k} is an orthonormal system, we have:
$$
\sum_{k=1}^{N} q_{k}\, \langle K w_{k}, K w_{i} \rangle = \sum_{k=1}^{N} q_{k}\, E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)\, E_{\alpha,1}\big(-(i\pi/2)^{\beta} T^{\alpha}\big)\, \langle w_{k}, w_{i} \rangle = E_{\alpha,1}^{2}\big(-(i\pi/2)^{\beta} T^{\alpha}\big)\, q_{i} = E_{\alpha,1}\big(-(i\pi/2)^{\beta} T^{\alpha}\big)\, \langle \xi, w_{i} \rangle,
$$
and therefore,
$$
q_{k} = \frac{\langle \xi, w_{k} \rangle}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}, \quad k = 1, \ldots, N.
$$
Substituting q_k into (30), we obtain the regularized solution with exact data ξ(x) as:
$$
u_{N}(x,0) = R_{N}\, \xi(x) = \sum_{k=1}^{N} \frac{\langle \xi(x), w_{k}(x) \rangle}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) = \sum_{k=1}^{N} \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x),
$$
where N is the regularization parameter. Ultimately, we define the regularization solution u_N^δ ∈ U_N with the measured data ξ^δ(x) as:
$$
u_{N}^{\delta}(x,0) = R_{N}\, \xi^{\delta}(x) = \sum_{k=1}^{N} \frac{\langle \xi^{\delta}(x), w_{k}(x) \rangle}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) = \sum_{k=1}^{N} \frac{\xi_{k}^{\delta}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x).
$$
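The following Python sketch (ours; names and parameters are illustrative) implements the regularized solution (36): it divides the first N noisy coefficients ξ_k^δ by E_{α,1}(−(kπ/2)^β T^α) and truncates the rest. For brevity the ξ_k^δ are passed in as an array; in practice they are obtained from g^δ and f^δ as in (23), and a robust Mittag–Leffler routine such as [51] should replace the crude ml_neg helper.

```python
import numpy as np
from math import gamma, pi

def rgamma(z):
    # 1/Gamma(z), set to 0 at the poles z = 0, -1, -2, ...
    return 0.0 if (z <= 0 and abs(z - round(z)) < 1e-12) else 1.0 / gamma(z)

def ml_neg(x, a, b=1.0):
    # rough E_{a,b}(-x): truncated series (small x) / leading asymptotics (large x)
    if x < 10.0:
        return sum((-x) ** k * rgamma(a * k + b) for k in range(200))
    return sum(-((-x) ** (-k)) * rgamma(b - a * k) for k in range(1, 5))

def w(k, x):
    # orthonormal eigenfunctions of -Laplace on (-1, 1) with Dirichlet conditions
    return np.sin(k * pi * x / 2) if k % 2 == 0 else np.cos(k * pi * x / 2)

def galerkin_solution(xi_delta, N, x, alpha, beta, T):
    """Regularized initial value u_N^delta(x, 0) of (36) from the noisy
    coefficients xi_delta[k-1] = <xi^delta, w_k>, truncated at level N."""
    u = np.zeros_like(x)
    for k in range(1, N + 1):
        theta_k = ml_neg((k * pi / 2) ** beta * T ** alpha, alpha)   # singular value
        u += xi_delta[k - 1] / theta_k * w(k, x)
    return u

# toy usage with made-up coefficients, only to show the call
x = np.linspace(-1.0, 1.0, 201)
xi_delta = np.array([2.0 ** (-k) for k in range(1, 31)])   # hypothetical data coefficients
u_reg = galerkin_solution(xi_delta, N=8, x=x, alpha=0.7, beta=1.7, T=1.0)
print(u_reg[:5])
```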

5. Convergence Estimates for a Priori and a Posteriori Rules

In this section, we choose the regularization parameter N by a priori and a posteriori rules, and derive the convergence estimates for the regularization method.

5.1. A Priori Convergence Estimate

In order to derive the convergence estimate, a uniform boundedness condition on the operator sequence R_N K is required.
Lemma 6 
([44]). Assume that ⋃_{N∈ℕ⁺} U_N is dense in L²(Ω) and that P_N K|_{U_N} : U_N → Y_N is one-to-one. The solution u_N(x, 0) = R_N ξ(x) ∈ U_N in (26) converges to u(x, 0) for every K u(x, 0) = ξ(x) if and only if there exists c > 0 such that:
$$
\|R_{N} K\| \le c, \quad \text{for all } N \in \mathbb{N}^{+}.
$$
If (37) is satisfied, then the following error estimate holds:
$$
\|u_{N}(x,0) - u(x,0)\| \le (1+c)\, \min_{z_{N} \in U_{N}} \|z_{N} - u(x,0)\|.
$$
Theorem 3. 
Let K : L²(Ω) → L²(Ω) be the operator defined in (24), let R_N : L²(Ω) → U_N ⊂ L²(Ω) given by (26) be the regularization operator, and assume that ⋃_{N∈ℕ⁺} U_N is dense in L²(Ω); then, R_N K is uniformly bounded, and we have:
$$
\|R_{N} K\| \le 2 + \frac{C_{4}}{C_{3}}, \quad \text{for all } N \in \mathbb{N}^{+}.
$$
Proof. 
First, from (24) and (26), we have u_N(x, 0) = R_N ξ(x) = R_N K u(x, 0) ∈ U_N; then, the following identities hold:
$$
\langle K u_{N}(x,0), K z_{N} \rangle = \langle K R_{N}\, \xi(x), K z_{N} \rangle = \langle K (P_{N} K|_{U_{N}})^{-1} P_{N}\, \xi(x), K z_{N} \rangle = \langle \xi(x), K z_{N} \rangle = \langle K u(x,0), K z_{N} \rangle, \quad \forall\, z_{N} \in U_{N}.
$$
It follows that:
$$
\|K u_{N}(x,0) - K z_{N}\|^{2} = \langle K(u_{N}(x,0) - z_{N}), K(u_{N}(x,0) - z_{N}) \rangle = \langle K(u(x,0) - z_{N}), K(u_{N}(x,0) - z_{N}) \rangle \le \|K u(x,0) - K z_{N}\|\, \|K u_{N}(x,0) - K z_{N}\|.
$$
Hence:
$$
\|K u_{N}(x,0) - K z_{N}\| \le \|K u(x,0) - K z_{N}\|.
$$
Notice that
$$
\|u_{N}(x,0) - z_{N}\| = \frac{\|u_{N}(x,0) - z_{N}\|}{\|K(u_{N}(x,0) - z_{N})\|}\, \|K(u_{N}(x,0) - z_{N})\|.
$$
Denote η_N = (u_N(x, 0) − z_N)/‖K(u_N(x, 0) − z_N)‖; thus, η_N ∈ U_N and ‖K η_N‖ = 1. We define σ_N as follows:
$$
\sigma_{N} = \max \{ \|\eta_{N}\| : \eta_{N} \in U_{N},\ \|K \eta_{N}\| = 1 \};
$$
then, by (40), (41), and the monotonicity of the Mittag–Leffler function,
$$
\|u_{N}(x,0) - z_{N}\| \le \sigma_{N}\, \|K(u_{N}(x,0) - z_{N})\| = \frac{\|K(u_{N}(x,0) - z_{N})\|}{E_{\alpha,1}\big(-(N\pi/2)^{\beta} T^{\alpha}\big)} \le \frac{\|K(u(x,0) - z_{N})\|}{E_{\alpha,1}\big(-(N\pi/2)^{\beta} T^{\alpha}\big)}.
$$
For every z_N ∈ U_N, owing to the triangle inequality and (42):
$$
\|u_{N}(x,0)\| \le \|u_{N}(x,0) - z_{N}\| + \|z_{N} - u(x,0)\| + \|u(x,0)\| \le \|u(x,0)\| + \|z_{N} - u(x,0)\| + \frac{1}{E_{\alpha,1}\big(-(N\pi/2)^{\beta} T^{\alpha}\big)}\, \|K(u(x,0) - z_{N})\|.
$$
Note that R_N K is also a projection operator from L²(Ω) onto U_N; then, using the projection theorem, it can be obtained that:
$$
\min_{z_{N} \in U_{N}} \|z_{N} - u(x,0)\| = \|u(x,0) - R_{N} K u(x,0)\| = \Big\| \sum_{k=N+1}^{\infty} \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) \Big\| \le \Big\| \sum_{k=1}^{\infty} \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) \Big\| = \|u(x,0)\|.
$$
In addition, according to Lemma 4 and (43), we have:
$$
\begin{aligned}
\min_{z_{N} \in U_{N}} \|K(u(x,0) - z_{N})\| &= \min_{z_{N} \in U_{N}} \Big\| \sum_{k=1}^{\infty} \langle u(x,0) - z_{N}, w_{k}(x) \rangle\, E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)\, w_{k}(x) \Big\| \\
&\le \min_{z_{N} \in U_{N}} \Big\| \sum_{k=1}^{N} \langle u(x,0) - z_{N}, w_{k}(x) \rangle\, E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)\, w_{k}(x) \Big\| + \min_{z_{N} \in U_{N}} \Big\| \sum_{k=N+1}^{\infty} \langle u(x,0) - z_{N}, w_{k}(x) \rangle\, E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)\, w_{k}(x) \Big\| \\
&\le 0 + \sup_{k \ge N+1} E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)\, \min_{z_{N} \in U_{N}} \|u(x,0) - z_{N}\| \le \sup_{k \ge N+1} \frac{C_{4}\, 2^{\beta}}{k^{\beta}\pi^{\beta}}\, \|u(x,0)\|.
\end{aligned}
$$
Combining (42), (43), and (44), note that:
$$
\min_{z_{N} \in U_{N}} \left\{ \|z_{N} - u(x,0)\| + \frac{\|K(u(x,0) - z_{N})\|}{E_{\alpha,1}\big(-(N\pi/2)^{\beta} T^{\alpha}\big)} \right\} \le \left( 1 + \frac{1}{E_{\alpha,1}\big(-(N\pi/2)^{\beta} T^{\alpha}\big)}\, \sup_{k \ge N+1} \frac{C_{4}\, 2^{\beta}}{k^{\beta}\pi^{\beta}} \right) \|u(x,0)\| \le \left( 1 + \frac{N^{\beta}\pi^{\beta}}{C_{3}\, 2^{\beta}} \cdot \frac{C_{4}\, 2^{\beta}}{(N+1)^{\beta}\pi^{\beta}} \right) \|u(x,0)\| \le \left( 1 + \frac{C_{4}}{C_{3}} \right) \|u(x,0)\|.
$$
From Theorem 3.10 in [44], we know that the Galerkin method based on the least squares technique is convergent and ‖R_N‖ ≤ σ_N. Furthermore, we can obtain that:
$$
\|u_{N}(x,0)\| \le \|u_{N}(x,0) - z_{N}\| + \|z_{N} - u(x,0)\| + \|u(x,0)\| \le \|u(x,0)\| + \min_{z_{N} \in U_{N}} \left\{ \|z_{N} - u(x,0)\| + \frac{\|K(u(x,0) - z_{N})\|}{E_{\alpha,1}\big(-(N\pi/2)^{\beta} T^{\alpha}\big)} \right\}.
$$
Finally, from (45) and (46), we have:
$$
\|R_{N} K u(x,0)\| = \|u_{N}(x,0)\| \le \Big( 2 + \frac{C_{4}}{C_{3}} \Big) \|u(x,0)\|;
$$
then, R_N K is uniformly bounded, and the result (39) can be established. □
Theorem 4. 
(A priori convergence estimate.) Let u(x, 0) given by (18) be the exact solution at t = 0 of problem (1) with the exact data g(x) and source term f(x, t), and let u_N^δ(x, 0) be the Galerkin regularization solution defined by (36) with the measured data g^δ(x) and f^δ(x, t), which satisfy (4) and (5). Suppose that the a priori condition (20) is satisfied. If the regularization parameter N is chosen as:
$$
N = \left[ \Big( \frac{E}{\delta} \Big)^{\frac{1}{p+\beta}} \right],
$$
then we have the convergence estimate:
$$
\|u_{N}^{\delta}(x,0) - u(x,0)\| \le C_{pri}\, E^{\frac{\beta}{p+\beta}}\, \delta^{\frac{p}{p+\beta}}, \quad \text{as } \delta \to 0,
$$
where C_{pri} = (π^β/(C₃ 2^β)) √(2(1+C₅)) + 3 + C₄/C₃, and [·] denotes the largest integer not exceeding a real number.
Proof. 
From the triangle inequality, (35), and (36), we have:
$$
\|u_{N}^{\delta}(x,0) - u(x,0)\| \le \|u_{N}^{\delta}(x,0) - u_{N}(x,0)\| + \|u_{N}(x,0) - u(x,0)\| = \|R_{N}\, \xi^{\delta}(x) - R_{N}\, \xi(x)\| + \|R_{N} K u(x,0) - u(x,0)\| \le \|R_{N}\|\, \|\xi^{\delta}(x) - \xi(x)\| + \|R_{N} K u(x,0) - u(x,0)\|.
$$
On the one hand, we first estimate ‖R_N‖. From (28):
$$
\|K u_{N}(x,0)\|^{2} = \langle K u_{N}(x,0), K u_{N}(x,0) \rangle = \langle \xi(x), K u_{N}(x,0) \rangle \le \|\xi(x)\|\, \|K u_{N}(x,0)\|;
$$
hence:
$$
\|K u_{N}(x,0)\| \le \|\xi(x)\|.
$$
Note that u_N(x, 0) = [u_N(x, 0)/‖K u_N(x, 0)‖] ‖K u_N(x, 0)‖. Denote φ_N = u_N(x, 0)/‖K u_N(x, 0)‖; then, φ_N ∈ U_N and ‖K φ_N‖ = 1. We define ψ_N = max{‖φ_N‖ : φ_N ∈ U_N, ‖K φ_N‖ = 1}, and it is not difficult to see that σ_N = ψ_N, so we have:
$$
\|R_{N}\, \xi(x)\| = \|u_{N}(x,0)\| = \frac{\|u_{N}(x,0)\|}{\|K u_{N}(x,0)\|}\, \|K u_{N}(x,0)\| \le \psi_{N}\, \|K u_{N}(x,0)\| = \sigma_{N}\, \|K u_{N}(x,0)\| \le \frac{1}{E_{\alpha,1}\big(-(N\pi/2)^{\beta} T^{\alpha}\big)}\, \|\xi(x)\|,
$$
which yields:
$$
\|R_{N}\| \le \frac{1}{E_{\alpha,1}\big(-(N\pi/2)^{\beta} T^{\alpha}\big)}.
$$
On the other hand, from Lemma 5 and the inequality (|a| + |b|)² ≤ 2(|a|² + |b|²), we can obtain:
$$
\begin{aligned}
\|\xi^{\delta}(x) - \xi(x)\|^{2} &= \sum_{k=1}^{\infty} \left( g_{k}^{\delta} - g_{k} - \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-(k\pi/2)^{\beta} (T-\tau)^{\alpha}\big) \big( f_{k}^{\delta}(\tau) - f_{k}(\tau) \big)\, d\tau \right)^{2} \\
&\le \sum_{k=1}^{\infty} 2 \left( |g_{k}^{\delta} - g_{k}|^{2} + \Big| \int_{0}^{T} (T-\tau)^{\alpha-1} E_{\alpha,\alpha}\big(-(k\pi/2)^{\beta} (T-\tau)^{\alpha}\big) \big( f_{k}^{\delta}(\tau) - f_{k}(\tau) \big)\, d\tau \Big|^{2} \right) \\
&\le 2 \left( \|g^{\delta} - g\|_{L^{2}(\Omega)}^{2} + C_{5}\, \|f^{\delta} - f\|_{L^{\infty}(0,T;L^{2}(\Omega))}^{2} \right).
\end{aligned}
$$
Note that for any real number a, it holds that [a] ≤ a < [a] + 1. Meanwhile, combining (49), (50), (4), (5), and Lemma 4, we can derive that:
$$
\|u_{N}^{\delta}(x,0) - u_{N}(x,0)\| \le \|R_{N}\|\, \|\xi^{\delta}(x) - \xi(x)\| \le \frac{\sqrt{2(1+C_{5})}\, \delta}{E_{\alpha,1}\big(-(N\pi/2)^{\beta} T^{\alpha}\big)} \le \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}}\, \sqrt{2(1+C_{5})}\, N^{\beta}\, \delta.
$$
In addition, we need to estimate the second term on the right of (49). According to Lemma 6, Theorem 3, (50), and the a priori condition (20), we have:
$$
\begin{aligned}
\|R_{N} K u(x,0) - u(x,0)\| &\le \Big( 3 + \frac{C_{4}}{C_{3}} \Big) \min_{z_{N} \in U_{N}} \|u(x,0) - z_{N}\| = \Big( 3 + \frac{C_{4}}{C_{3}} \Big) \|u(x,0) - R_{N} K u(x,0)\| = \Big( 3 + \frac{C_{4}}{C_{3}} \Big) \Big\| \sum_{k=N+1}^{\infty} \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) \Big\| \\
&\le \Big( 3 + \frac{C_{4}}{C_{3}} \Big) \sup_{k \ge N+1} (1+k^{2})^{-\frac{p}{2}} \Big\| \sum_{k=N+1}^{\infty} (1+k^{2})^{\frac{p}{2}}\, \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) \Big\| \\
&\le \Big( 3 + \frac{C_{4}}{C_{3}} \Big) \sup_{k \ge N+1} (1+k^{2})^{-\frac{p}{2}} \Big\| \sum_{k=1}^{\infty} (1+k^{2})^{\frac{p}{2}}\, \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) \Big\| \\
&\le \Big( 3 + \frac{C_{4}}{C_{3}} \Big) \big(1+(N+1)^{2}\big)^{-\frac{p}{2}}\, \|u(x,0)\|_{H^{p}} \le \Big( 3 + \frac{C_{4}}{C_{3}} \Big) (N+1)^{-p}\, E.
\end{aligned}
$$
In the end, from (49), (52), (53), and (47), it can be obtained that:
$$
\|u_{N}^{\delta}(x,0) - u(x,0)\| \le \|u_{N}^{\delta}(x,0) - u_{N}(x,0)\| + \|u_{N}(x,0) - u(x,0)\| \le \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}}\, \sqrt{2(1+C_{5})}\, N^{\beta}\, \delta + \Big( 3 + \frac{C_{4}}{C_{3}} \Big) (N+1)^{-p}\, E \le \left( \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}}\, \sqrt{2(1+C_{5})} + 3 + \frac{C_{4}}{C_{3}} \right) E^{\frac{\beta}{p+\beta}}\, \delta^{\frac{p}{p+\beta}};
$$
then, the convergence result in (48) can be established. □
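As a small illustration (ours, not from the paper), the a priori rule (47) reduces to one line of code once E, δ, p, and β are fixed; the numbers below are only placeholders chosen in the spirit of Example 1.

```python
import math

def n_apriori(E, delta, p, beta):
    # a priori choice N = [(E / delta)^(1 / (p + beta))] from Theorem 4
    return int(math.floor((E / delta) ** (1.0 / (p + beta))))

# placeholder values: E = 9, p = 2, beta = 1.7
for delta in (1e-2, 1e-3, 1e-4):
    print(delta, n_apriori(9.0, delta, 2.0, 1.7))
```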

5.2. A Posteriori Convergence Estimate

In this section, based on the Morozov discrepancy principle [50], we select the regularization parameter N by an a posteriori stopping rule. Here, N does not depend on the a priori bound E of the exact solution; rather, it depends on the noise level δ and the measured data g^δ(x) and f^δ(x, t), which is more reasonable and has important theoretical and practical significance.
In the Morozov discrepancy principle, the regularization parameter is usually chosen by the stopping rule ‖ξ^δ − K u_N^δ(x, 0)‖ ≤ τδ, where τ > 1 is a positive constant. However, from (36) we note that ξ^δ − K u_N^δ(x, 0) = ξ^δ − K R_N ξ^δ. Therefore, we choose the regularization parameter as the first integer N such that:
$$
\|\xi^{\delta} - K R_{N}\, \xi^{\delta}\| \le \tau\delta < \|\xi^{\delta} - K R_{N-1}\, \xi^{\delta}\|, \quad N > 1,
$$
where τ > √(2(1+C₅)) is a positive constant.
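Because K R_N ξ^δ = ∑_{k≤N} ξ_k^δ w_k, the discrepancy ‖ξ^δ − K R_N ξ^δ‖ is simply the ℓ² norm of the tail coefficients {ξ_k^δ}_{k>N}, so rule (54) can be implemented without evaluating the Mittag–Leffler function at all. The sketch below is our own illustration with a made-up coefficient sequence and noise level.

```python
import numpy as np

def n_aposteriori(xi_delta, tau, delta):
    """Smallest N with ||xi^delta - K R_N xi^delta|| <= tau * delta, i.e. the
    first truncation level whose coefficient tail falls below the threshold."""
    xi_delta = np.asarray(xi_delta, dtype=float)
    total = np.sum(xi_delta ** 2)
    # tail_sq[N] = sum of xi_k^2 over k > N, for N = 0, 1, ..., len(xi_delta)
    tail_sq = np.concatenate(([total], total - np.cumsum(xi_delta ** 2)))
    tail_norm = np.sqrt(np.maximum(tail_sq, 0.0))
    for N in range(1, len(xi_delta) + 1):
        if tail_norm[N] <= tau * delta:
            return N
    return len(xi_delta)   # threshold never reached within the available modes

# toy usage: hypothetical noisy coefficients and noise level
xi_delta = [2.0 ** (-k) for k in range(1, 31)]
print(n_aposteriori(xi_delta, tau=1.6, delta=1e-3))
```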
Lemma 7. 
Assume that the a priori bound condition (20) is valid; then, for the regularized solution (36) combined with the a posteriori selection rule (54), the regularization parameter N satisfies:
$$
N \le \left( \frac{C_{4}\, 2^{\beta}}{\big(\tau - \sqrt{2(1+C_{5})}\big)\, \pi^{\beta}} \right)^{\frac{1}{p+\beta}} \Big( \frac{E}{\delta} \Big)^{\frac{1}{p+\beta}}.
$$
Proof. 
From (35), the a priori condition (20), and Lemma 4, we have:
$$
\begin{aligned}
\|\xi - K R_{N-1}\, \xi\|^{2} &= \Big\| \sum_{k=1}^{\infty} \xi_{k} w_{k} - \sum_{k=1}^{N-1} \xi_{k} w_{k} \Big\|^{2} = \Big\| \sum_{k=N}^{\infty} \xi_{k} w_{k} \Big\|^{2} = \Big\| \sum_{k=N}^{\infty} \frac{(1+k^{2})^{\frac{p}{2}}}{(1+k^{2})^{\frac{p}{2}}}\, E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)\, \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k} \Big\|^{2} \\
&\le \sup_{k \ge N} \left( \frac{1}{(1+k^{2})^{p}} \Big( \frac{C_{4}\, 2^{\beta}}{k^{\beta}\pi^{\beta}} \Big)^{2} \right) E^{2} \le \frac{C_{4}^{2}\, 2^{2\beta}}{\pi^{2\beta}}\, \frac{E^{2}}{N^{2p+2\beta}}.
\end{aligned}
$$
In addition, from (35) and the a posteriori stopping rule (54), we have:
$$
\begin{aligned}
\|\xi - K R_{N-1}\, \xi\| &= \|(\xi^{\delta} - K R_{N-1}\, \xi^{\delta}) - (\xi^{\delta} - K R_{N-1}\, \xi^{\delta}) + (\xi - K R_{N-1}\, \xi)\| = \|(\xi^{\delta} - K R_{N-1}\, \xi^{\delta}) - (I - K R_{N-1})(\xi^{\delta} - \xi)\| \\
&\ge \|(I - K R_{N-1})\, \xi^{\delta}\| - \|(I - K R_{N-1})(\xi^{\delta} - \xi)\| \ge \|(I - K R_{N-1})\, \xi^{\delta}\| - \|\xi^{\delta} - \xi\| \ge \tau\delta - \sqrt{2(1+C_{5})}\, \delta.
\end{aligned}
$$
Combining the above two estimates, we have (τ − √(2(1+C₅))) δ ≤ (C₄ 2^β/π^β) E N^{−(p+β)}, and by some elementary calculations we can derive the result in (55). □
Theorem 5. 
(A posteriori convergence estimate.) Let u(x, 0) given by (18) be the exact solution at t = 0 of Problem (1) with the exact data g(x) and source term f(x, t), and let u_N^δ(x, 0) be the Galerkin regularization solution defined by (36) with the measured data g^δ(x) and f^δ(x, t), which satisfy (4) and (5). Suppose that the a priori condition (20) is satisfied. If the regularization parameter N is selected by the a posteriori rule (54), we have the convergence estimate:
$$
\|u_{N}^{\delta}(x,0) - u(x,0)\| \le C_{post}\, E^{\frac{\beta}{p+\beta}}\, \delta^{\frac{p}{p+\beta}}, \quad \text{as } \delta \to 0,
$$
where
$$
C_{post} = \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}}\, \sqrt{2(1+C_{5})} \left( \frac{C_{4}\, 2^{\beta}}{\big(\tau - \sqrt{2(1+C_{5})}\big)\, \pi^{\beta}} \right)^{\frac{\beta}{p+\beta}} + \left( \frac{\big(\tau + \sqrt{2(1+C_{5})}\big)\, \pi^{\beta}}{C_{3}\, 2^{\beta}} \right)^{\frac{p}{p+\beta}}.
$$
Proof. 
Using the triangle inequality, (35), and (36), we have:
$$
\|u_{N}^{\delta}(x,0) - u(x,0)\| \le \|u_{N}^{\delta}(x,0) - u_{N}(x,0)\| + \|u_{N}(x,0) - u(x,0)\| \le \|R_{N}\|\, \|\xi^{\delta}(x) - \xi(x)\| + \|R_{N} K u(x,0) - u(x,0)\|.
$$
Meanwhile, from (52), we already know that:
$$
\|u_{N}^{\delta}(x,0) - u_{N}(x,0)\| \le \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}}\, \sqrt{2(1+C_{5})}\, N^{\beta}\, \delta.
$$
Below, we estimate ‖u_N(x, 0) − u(x, 0)‖. Using the Hölder inequality, we have:
$$
\begin{aligned}
\|u_{N}(x,0) - u(x,0)\|^{2} &= \Big\| \sum_{k=1}^{N} \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) - \sum_{k=1}^{\infty} \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) \Big\|^{2} = \Big\| \sum_{k=N+1}^{\infty} \frac{\xi_{k}}{E_{\alpha,1}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, w_{k}(x) \Big\|^{2} \\
&= \sum_{k=N+1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{2}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} = \sum_{k=N+1}^{\infty} \frac{1}{E_{\alpha,1}^{2}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)}\, \xi_{k}^{\frac{2\beta}{p+\beta}}\, \xi_{k}^{\frac{2p}{p+\beta}} \\
&\le \left( \sum_{k=N+1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{\frac{2(p+\beta)}{\beta}}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} \right)^{\frac{\beta}{p+\beta}} \left( \sum_{k=N+1}^{\infty} \xi_{k}^{2} \right)^{\frac{p}{p+\beta}} := I_{3}^{\frac{\beta}{p+\beta}}\, I_{4}^{\frac{p}{p+\beta}}.
\end{aligned}
$$
By the Parseval equality, we can obtain that:
$$
I_{4}^{\frac{1}{2}} = \left( \sum_{k=N+1}^{\infty} \xi_{k}^{2} \right)^{\frac{1}{2}} = \|(I - K R_{N})\, \xi\| \le \|(I - K R_{N})\, \xi^{\delta}\| + \|(I - K R_{N})(\xi^{\delta} - \xi)\| \le \tau\delta + \sqrt{2(1+C_{5})}\, \delta.
$$
On the other hand, from Lemma 4 and the a priori condition (20), we have:
$$
\begin{aligned}
I_{3} &= \sum_{k=N+1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{\frac{2(\beta+p)}{\beta}}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} = \sum_{k=N+1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{2}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} \cdot \frac{1}{E_{\alpha,1}^{\frac{2p}{\beta}}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} \\
&\le \sum_{k=N+1}^{\infty} \frac{\xi_{k}^{2}}{E_{\alpha,1}^{2}\big(-(k\pi/2)^{\beta} T^{\alpha}\big)} \cdot \frac{(1+k^{2})^{p}}{(1+k^{2})^{p}} \Big( \frac{k^{\beta}\pi^{\beta}}{C_{3}\, 2^{\beta}} \Big)^{\frac{2p}{\beta}} \le \sup_{k \ge N+1} \frac{1}{(1+k^{2})^{p}} \Big( \frac{k^{\beta}\pi^{\beta}}{C_{3}\, 2^{\beta}} \Big)^{\frac{2p}{\beta}} E^{2} \le \Big( \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}} \Big)^{\frac{2p}{\beta}} E^{2}.
\end{aligned}
$$
Combining the above estimates, we obtain:
$$
\|u_{N}(x,0) - u(x,0)\| \le I_{3}^{\frac{\beta}{2(p+\beta)}}\, I_{4}^{\frac{p}{2(p+\beta)}} \le \left( \Big( \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}} \Big)^{\frac{2p}{\beta}} E^{2} \right)^{\frac{\beta}{2(p+\beta)}} \Big( \big(\tau + \sqrt{2(1+C_{5})}\big)\, \delta \Big)^{\frac{p}{p+\beta}} = \left( \frac{\big(\tau + \sqrt{2(1+C_{5})}\big)\, \pi^{\beta}}{C_{3}\, 2^{\beta}} \right)^{\frac{p}{p+\beta}} E^{\frac{\beta}{p+\beta}}\, \delta^{\frac{p}{p+\beta}}.
$$
Finally, from (57), (58), (55), and (59), it can be derived that:
$$
\begin{aligned}
\|u_{N}^{\delta}(x,0) - u(x,0)\| &\le \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}}\, \sqrt{2(1+C_{5})}\, N^{\beta}\, \delta + \left( \frac{\big(\tau + \sqrt{2(1+C_{5})}\big)\, \pi^{\beta}}{C_{3}\, 2^{\beta}} \right)^{\frac{p}{p+\beta}} E^{\frac{\beta}{p+\beta}}\, \delta^{\frac{p}{p+\beta}} \\
&\le \left[ \frac{\pi^{\beta}}{C_{3}\, 2^{\beta}}\, \sqrt{2(1+C_{5})} \left( \frac{C_{4}\, 2^{\beta}}{\big(\tau - \sqrt{2(1+C_{5})}\big)\, \pi^{\beta}} \right)^{\frac{\beta}{p+\beta}} + \left( \frac{\big(\tau + \sqrt{2(1+C_{5})}\big)\, \pi^{\beta}}{C_{3}\, 2^{\beta}} \right)^{\frac{p}{p+\beta}} \right] E^{\frac{\beta}{p+\beta}}\, \delta^{\frac{p}{p+\beta}},
\end{aligned}
$$
and then the convergence result in (56) can be established. □

6. Numerical Implementations

In this section, the stability and feasibility of the regularization method are verified by carrying out some numerical experiments. Since the analytic solution of Problem (1) is generally difficult to express explicitly, we consider the forward problem (12) with the initial value h(x) and source term f(x, t) to construct the exact final data g(x), where the function h(x) is chosen as a smooth and a non-smooth function, respectively. On this basis, the measured data are generated by random perturbations as follows:
$$
g^{\delta} = g + \varepsilon_{1}\, g \cdot \big( 2\,\mathrm{rand}(\mathrm{size}(g)) - 1 \big), \qquad f^{\delta} = f + \varepsilon_{2}\, f \cdot \big( 2\,\mathrm{rand}(\mathrm{size}(f)) - 1 \big),
$$
where ε₁ denotes the noise level of g, ε₂ denotes the noise level of f, and the corresponding measurement error bound is calculated by δ = ε₁‖g‖ + ε₂‖f‖. Meanwhile, we calculate the relative error between the exact and Galerkin regularization solutions by:
$$
\epsilon(u) = \frac{\|u(x,0) - u_{N}^{\delta}(x,0)\|}{\|u(x,0)\|}.
$$
Note that for the a priori case, we always require the a priori bound E of the exact solution during the computational procedure. We know that, when h(x) belongs to H^p(Ω) (p ≥ 0), one can compute ‖u(x, 0)‖_{H^p} = ‖h(x)‖_{H^p}. Moreover, the condition τ > √(2(1+C₅)) on τ is easy to satisfy. We then take T = 1, τ = 1.6, and ε = ε₁ = ε₂. In order to compute the Mittag–Leffler function, we adopt the algorithm in [51]. For the a priori case, the regularization parameter N is chosen by (47), and the regularization parameter in the a posteriori case is chosen by the stopping rule (54). In Examples 1 and 2, we compare the computation results of the relative errors using the Galerkin method and the quasi-boundary value method (QBV method) mentioned in [35]. Meanwhile, for the a priori case, the regularization parameter μ of the QBV method is chosen by μ = (δ/E)^{1/2}; for the a posteriori case, we adopt the stopping rule presented in this paper.
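The random perturbation (61) and the relative error (62) translate directly into a few lines of NumPy; the following is our own illustrative sketch (the paper's experiments use MATLAB, where rand(size(g)) plays the role of the random draw below).

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(v, eps):
    # componentwise uniform perturbation as in (61): v_delta = v + eps * v .* (2*rand - 1)
    return v + eps * v * (2.0 * rng.random(v.shape) - 1.0)

def relative_error(u_exact, u_reg):
    # relative l2 error (62) between the exact and regularized initial values
    return np.linalg.norm(u_exact - u_reg) / np.linalg.norm(u_exact)

# toy usage on an arbitrary grid function
x = np.linspace(-1.0, 1.0, 201)
g = np.sin(np.pi * x)
eps1 = 1e-3
g_delta = add_noise(g, eps1)
delta_g = eps1 * np.linalg.norm(g)     # contribution of g to delta = eps1*||g|| + eps2*||f||
print(relative_error(g, g_delta), delta_g)
```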
Example 1. 
In the forward problem (12), we chose:
$$
h(x) = x(1-x)\, e^{x} \sin(\pi x), \qquad f(x,t) = t \sin(x).
$$
It is clear that h(x) is a smooth function belonging to H^p(Ω) ⊂ L²(Ω). Next, we selected p = 2 to perform the numerical experiment. Since ‖u(x, 0)‖_{H²} = ‖h(x)‖_{H²} = 8.35 < 9, we took E = 9. In Figure 1, we show the comparison results between the exact and regularized solutions for various noise levels ε = 0.01, 0.005, 0.001, 0.0001 in the case of α = 0.2, β = 1.7. For ε = 0.0001, p = 2, α = 0.2 and 0.3, and β = 1.7 and 1.9, the numerical results for the exact and regularization solutions are shown in Figure 2. For p = 2 and ε = 0.0001, the computed relative errors of the Galerkin method and the QBV method are presented in Table 1. From Figure 1, it can be concluded that as ε decreases, the computational accuracy improves. Figure 2 and Table 1 show that our method is stable and feasible in dealing with the case in which the solution of the backward problem is a smooth function; they also show that our method is slightly better than the QBV method in handling a smooth function. In order to describe the inversion process of the backward problem on the entire time scale, in Figure 3 we show the comparison results between the exact and regularized solutions at x = 0.6 in the case of α = 0.2, β = 1.7. From Result (19), it is easy to see that Problem (1) is well-posed for all t ∈ (0, T). Thus, at t = 0, we use Expression (36) to calculate the regularization solution, and the exact solution is calculated by Expression (18); for t ∈ (0, T), the exact and regularization solutions are both computed by Expression (17). Finally, we compare the relative errors under different step sizes (grids), which are shown in Figure 4. Figure 4 shows that, as the step size increases, the relative error first decreases and then increases.
Example 2. 
In the forward problem (12), we take:
$$
h(x) =
\begin{cases}
4x^{2} + 4, & x \in [-1, 0], \\
-4x + 4, & x \in [0, \tfrac{1}{2}], \\
e^{x-\frac{1}{2}} + 1, & x \in [\tfrac{1}{2}, 1],
\end{cases}
\qquad f(x,t) = t \sin(x).
$$
Note that h(x) is a continuous function, but it only belongs to H¹(Ω) ⊂ L²(Ω), and ‖u(x, 0)‖_{H¹} = ‖h(x)‖_{H¹} ≤ 7; thus, we chose the a priori bound E = 7. For ε = 0.0001, p = 1, α = 0.2 and 0.3, and β = 1.7 and 1.9, the computation results for the exact and regularization solutions are shown in Figure 5. For p = 1 and ε = 0.0001, the computed relative errors of the Galerkin method and the QBV method are presented in Table 2. Figure 5 and Table 2 show that our method is also effective in solving the case in which the solution of the backward problem only belongs to the space H¹(Ω); meanwhile, Table 2 shows that our method is slightly better than the QBV method in dealing with an inverse problem whose solution belongs to H¹(Ω).
Example 3. 
In the forward problem (12), h(x) ∈ L²(Ω) is taken as a non-smooth function:
$$
h(x) =
\begin{cases}
0, & x \in [-1, -\tfrac{1}{2}], \\
4, & x \in [-\tfrac{1}{2}, 0], \\
2, & x \in [0, \tfrac{1}{2}], \\
0, & x \in [\tfrac{1}{2}, 1],
\end{cases}
\qquad f(x,t) = t \sin(x).
$$
Since ‖u(x, 0)‖_{L²} = ‖h(x)‖_{L²} = √10 < 4, we took E = 4 and selected p = 2 to perform the numerical experiment. For ε = 0.01, p = 2, α = 0.2 and 0.3, and β = 1.7 and 1.9, the computation results for the exact and regularization solutions are shown in Figure 6. For E = 4, p = 2, and ε = 0.01, the computed relative errors and the a priori and a posteriori regularization parameters are presented in Table 3. Figure 6 and Table 3 show that our method is also effective in solving an inverse problem whose solution is a non-smooth function.

7. Conclusions and Further Discussions

This article studies a backward problem of the time-space fractional parabolic equation. This problem is ill-posed in the sense that its solution is unstable. We prove the existence and uniqueness of a solution and the conditional stability for the inverse problem. Based on the ill-posedness analysis, a Galerkin regularization method is constructed to recover the stability of the solution, and a priori and a posteriori convergence results are derived. Additionally, by carrying out some numerical experiments, we verified the simulation effect of our method. Numerical results show that the Galerkin method is stable and feasible in solving the considered problem.
This paper mainly investigates the Galerkin regularization theory and numerical algorithm for the case of continuous data satisfying (4) and (5). We point out that this method can also handle the case of discrete data, but then one first needs to solve the discrete system (31) with noisy data γ^δ, and then compute the regularization solution according to (30). Meanwhile, a priori and a posteriori convergence results need to be derived. In fact, we have derived an a priori convergence result similar to that of the continuous-data case. However, for the a posteriori convergence result, at present we have not found an appropriate selection rule for the regularization parameter. In future works, we will continue to consider and overcome this difficulty.

Author Contributions

Investigation, writing—original draft preparation, H.Z.; investigation, writing—review and editing, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

The work described in this paper was supported by the NSF of Ningxia (2022AAC03234), the NSF of China (11761004), the Construction Project of First-Class Disciplines in Ningxia Higher Education (NXYLXK2017B09), and the Postgraduate Innovation Project of North Minzu University (YCX22094).

Data Availability Statement

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Acknowledgments

The authors would like to thank the reviewers for their constructive comments and valuable suggestions that improved the quality of our paper.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

1. Magin, E.L. Fractional calculus models of complex dynamics in biological tissues. Comput. Math. Appl. 2009, 59, 1586–1593.
2. Sopasakis, P.; Sarimveis, H.; Macheras, P.; Dokoumetzidis, A. Fractional calculus in pharmacokinetics. J. Pharmacokinet. Pharmacodyn. 2018, 45, 107–125.
3. Tenreiro Machado, J.A.; Mata, M.E.; Lopes, A.M. Fractional dynamics and pseudo-phase space of country economic processes. Mathematics 2020, 8, 81.
4. Ionescu, C.; Lopes, A.; Copot, D.; Machado, J.T.; Bates, J.H. The role of fractional calculus in modeling biological phenomena: A review. Commun. Nonlinear Sci. Numer. Simul. 2017, 51, 141–159.
5. Liu, S.S.; Sun, F.Q.; Feng, L.X. Regularization of inverse source problem for fractional diffusion equation with Riemann–Liouville derivative. Comput. Appl. Math. 2021, 40, 112.
6. Sun, L.L.; Wei, T. Identification of the zeroth-order coefficient in a time fractional diffusion equation. Appl. Numer. Math. 2017, 111, 160–180.
7. Tran, B.N.; Nguyen, H.T.; Mokhtar, K. Regularization of a sideways problem for a time-fractional diffusion equation with nonlinear source. J. Inverse Ill-Posed Probl. 2020, 28, 1–29.
8. Zhao, J.; Liu, S.; Liu, T. An inverse problem for space-fractional backward diffusion problem. Math. Methods Appl. Sci. 2014, 37, 1147–1158.
9. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999.
10. Tatar, S.; Tnaztepe, R.; Ulusoy, S. Determination of an unknown source term in a space-time fractional diffusion equation. J. Fract. Calc. Appl. 2015, 6, 83–90.
11. Dipierro, S.; Lippi, E.P.; Valdinoci, E. (Non) local logistic equations with Neumann conditions. arXiv 2021, arXiv:2101.02315.
12. Cassani, D.; Vilasi, L.; Wang, Y.J. Local versus nonlocal elliptic equations: short-long range field interactions. Adv. Nonlinear Anal. 2021, 10, 895–921.
13. Liu, J.J.; Yamamoto, M. A backward problem for the time-fractional diffusion equation. Appl. Anal. 2010, 89, 1769–1788.
14. Yang, F.; Ren, Y.P.; Li, X.X. The quasi-reversibility method for a final value problem of the time-fractional diffusion equation with inhomogeneous source. Math. Methods Appl. Sci. 2018, 41, 1774–1795.
15. Wang, L.Y.; Liu, J.J. Total variation regularization for a backward time-fractional diffusion problem. Inverse Probl. 2013, 29, 1–22.
16. Yang, S.P.; Xiong, X.T.; Nie, Y. Iterated fractional Tikhonov regularization method for solving the spherically symmetric backward time-fractional diffusion equation. Appl. Numer. Math. 2021, 160, 217–241.
17. Ren, C.; Xu, X.; Lu, S. Regularization by projection for a backward problem of the time-fractional diffusion equation. J. Inverse Ill-Posed Probl. 2014, 22, 121–139.
18. Wang, L.Y.; Liu, J.J. Data regularization for a backward time-fractional diffusion problem. Comput. Math. Appl. 2012, 64, 3613–3626.
19. Wang, J.G.; Wei, T.; Zhou, Y.B. Optimal error bound and simplified Tikhonov regularization method for a backward problem for the time-fractional diffusion equation. J. Comput. Appl. Math. 2015, 279, 277–292.
20. Dinh, L.L.; Kim, V.H.; Duy, B.H.; Reza, S. On backward problem for fractional spherically symmetric diffusion equation with observation data of nonlocal type. Adv. Differ. Equ. 2021, 2021, 445.
21. Zheng, G.H.; Wei, T. Two regularization methods for solving a Riesz–Feller space-fractional backward diffusion problem. Inverse Probl. 2010, 26, 1–23.
22. Cheng, H.; Fu, C.L.; Zheng, G.H.; Gao, J. A regularization for a Riesz–Feller space-fractional backward diffusion problem. Inverse Probl. Sci. Eng. 2014, 22, 860–872.
23. Yang, F.; Li, X.X.; Li, D.G.; Wang, L. The simplified Tikhonov regularization method for solving a Riesz–Feller space-fractional backward diffusion problem. Math. Comput. Sci. 2017, 11, 91–110.
24. Zheng, G.H.; Zhang, Q.G. Recovering the initial distribution for space-fractional diffusion equation by a logarithmic regularization method. Appl. Math. Lett. 2016, 61, 143–148.
25. Zheng, G.H.; Zhang, Q.G. Determining the initial distribution in space-fractional diffusion by a negative exponential regularization method. Inverse Probl. Sci. Eng. 2017, 25, 965–977.
26. Zheng, G.H.; Zhang, Q.G. Solving the backward problem for space-fractional diffusion equation by a fractional Tikhonov regularization method. Math. Comput. Simul. 2018, 148, 37–47.
27. Zhang, Z.Q.; Wei, T. An optimal regularization method for space-fractional backward diffusion problem. Math. Comput. Simul. 2013, 92, 14–27.
28. Liu, S.S.; Feng, L.X. Optimal error bound and modified kernel method for a space-fractional backward diffusion problem. Adv. Differ. Equ. 2018, 2018, 268.
29. Zhang, H.W.; Zhang, X.J. Solving the Riesz–Feller space-fractional backward diffusion problem by a generalized Tikhonov method. Adv. Differ. Equ. 2020, 2020, 376–384.
30. Dou, F.F.; Hon, Y.C. Fundamental kernel-based method for backward space-time fractional diffusion problem. Comput. Math. Appl. 2016, 71, 356–367.
31. Jia, J.X.; Peng, J.G.; Gao, J.H.; Li, Y. Backward problem for a time-space fractional diffusion equation. Inverse Probl. Imaging 2018, 12, 773–799.
32. Trong, D.D.; Hai, D.N.D.; Dien, N.M. On a time-space fractional backward diffusion problem with inexact orders. Comput. Math. Appl. 2019, 78, 1572–1593.
33. Wang, J.L.; Xiong, X.T.; Cao, X.X. Fractional Tikhonov regularization method for a time-fractional backward heat equation with a fractional Laplacian. J. Partial. Differ. Equ. 2018, 4, 333–342.
34. Djennadi, S.; Shawagfeh, N.; Abu Arqub, O. A fractional Tikhonov regularization method for an inverse backward and source problems in the time-space fractional diffusion equations. Chaos Solitons Fractals 2021, 150, 111127.
35. Yang, F.; Zhang, Y.; Liu, X.; Li, X. The quasi-boundary value method for identifying the initial value of the space-time fractional diffusion equation. Acta Math. Sci. 2020, 40, 641–658.
36. Feng, X.L.; Zhao, M.X.; Qian, Z. A Tikhonov regularization method for solving a backward time-space fractional diffusion problem. J. Comput. Appl. Math. 2022, 411, 114236.
37. Trong, D.D.; Hai, D.N. Backward problem for time-space fractional diffusion equations in Hilbert scales. Comput. Math. Appl. 2021, 93, 253–264.
38. Asadzadeh, M.; Beilina, L. A posteriori error analysis in a globally convergent numerical method for a hyperbolic coefficient inverse. Inverse Probl. 2010, 26, 115–127.
39. Yeganeh, S.; Mokhtari, R.; Hesthaven, J.S. Space-dependent source determination in a time-fractional diffusion equation using a local discontinuous Galerkin method. BIT Numer. Math. 2017, 57, 685–707.
40. Qasemi, S.; Rostamy, D.; Abdollahi, N. The time-fractional diffusion inverse problem subject to an extra measurement by a local discontinuous Galerkin method. BIT Numer. Math. 2019, 59, 183–212.
41. Xiong, X.T.; Zhao, X.; Wang, J. Spectral Galerkin method and its application to a Cauchy problem of Helmholtz equation. Numer. Algorithms 2013, 63, 691–711.
42. Kien, B.T.; Qin, X.; Wen, C.F.; Yao, J.C. The Galerkin method and regularization for variational inequalities in reflexive Banach spaces. J. Optim. Theory Appl. 2021, 189, 578–596.
43. Zhao, Q.L. Regularization of Three Inverse Boundary Value Problems. Master's Thesis, Lanzhou University, Lanzhou, China, 2008. (In Chinese)
44. Kirsch, A. An Introduction to the Mathematical Theory of Inverse Problems; Springer: New York, NY, USA, 2011.
45. Liu, J.J. Regularization Method for Ill-Posed Problem and Its Application; Science Press: Beijing, China, 2005. (In Chinese)
46. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and applications of fractional differential equations. North-Holl. Math. Stud. 2006, 204, 2453–2461.
47. Sakamoto, K.; Yamamoto, M. Initial value/boundary value problems for fractional diffusion-wave equations and applications to some inverse problems. J. Math. Anal. Appl. 2011, 382, 426–447.
48. Pollard, H. The completely monotonic character of the Mittag–Leffler function Eα(−x). Bull. Am. Math. Soc. 1948, 54, 1115–1116.
49. Mishura, Y.S. Stochastic Calculus for Fractional Brownian Motion and Related Processes; Springer: Berlin, Germany, 2008.
50. Morozov, V.A.; Nashed, Z.; Aries, A.B. Methods for Solving Incorrectly Posed Problems; Springer: New York, NY, USA, 1984.
51. Podlubny, I.; Kaccenak, M. Mittag–Leffler Function. The MATLAB Routine. 2006. Available online: http://www.mathworks.com/matlabcentral/fileexchange (accessed on 26 February 2023).
Figure 1. Example 1: p = 2, E = 9, α = 0.2, β = 1.7; the exact and regularization solutions for various ε.
Figure 2. Example 1: p = 2, E = 9, ε = 0.0001; the exact and regularization solutions for various α, β.
Figure 3. Example 1: α = 0.2, β = 1.7; the exact and regularization solutions for various t at x = 0.6.
Figure 4. Example 1: α = 0.2, β = 1.7; the relative error for various step sizes h.
Figure 5. Example 2: p = 1, E = 7, ε = 0.0001; the exact and regularization solutions for various α, β.
Figure 6. Example 3: p = 2, E = 4, ε = 0.01; the exact and regularization solutions for various α, β.
Table 1. Example 1: p = 2, ε = 0.0001; the computation results for the relative errors by the Galerkin method and the QBV method.

                         α=0.2, β=1.7   α=0.2, β=1.9   α=0.3, β=1.7   α=0.3, β=1.9
ϵ_pri,Galerkin(u)        0.007435       0.009797       0.007511       0.009825
ϵ_post,Galerkin(u)       0.007348       0.009356       0.007431       0.009825
ϵ_pri,QBV(u)             0.0633         0.0855         0.0688         0.0926
ϵ_post,QBV(u)            0.0216         0.0317         0.0234         0.0341

Table 2. Example 2: p = 1, ε = 0.0001; the computation results for the relative errors by the Galerkin method and the QBV method.

                         α=0.2, β=1.7   α=0.2, β=1.9   α=0.3, β=1.7   α=0.3, β=1.9
ϵ_pri,Galerkin(u)        0.135513       0.136483       0.135539       0.136502
ϵ_post,Galerkin(u)       0.135468       0.135671       0.135407       0.136065
ϵ_pri,QBV(u)             0.151263       0.165846       0.155099       0.168608
ϵ_post,QBV(u)            0.135912       0.149952       0.136676       0.151806

Table 3. Example 3: p = 2, E = 4, ε = 0.01; the computation results for the relative error and the regularization parameter.

                         α=0.2, β=1.7   α=0.2, β=1.9   α=0.3, β=1.7   α=0.3, β=1.9
N_pri                    12             10             12             10
N_post                   13             11             13             11
ϵ_pri(u)                 0.279469       0.344757       0.279498       0.344775
ϵ_post(u)                0.279469       0.344757       0.279498       0.344775