Article

Two Regularization Methods for Identifying the Spatial Source Term Problem for a Space-Time Fractional Diffusion-Wave Equation

School of Science, Lanzhou University of Technology, Lanzhou 730050, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(2), 231; https://doi.org/10.3390/math12020231
Submission received: 28 November 2023 / Revised: 31 December 2023 / Accepted: 8 January 2024 / Published: 10 January 2024

Abstract

In this paper, we delve into the challenge of identifying an unknown source in a space-time fractional diffusion-wave equation. Through an analysis of the exact solution, it becomes evident that the problem is ill-posed. To address this, we employ both the Tikhonov regularization method and the Quasi-boundary regularization method, aiming to restore the stability of the solution. By adhering to both a priori and a posteriori regularization parameter choice rules, we derive error estimates that quantify the discrepancies between the regularization solutions and the exact solution. Finally, we present numerical examples to illustrate the effectiveness and stability of the proposed methods.

1. Introduction

In the past decade, fractional differential equations have received increasing attention from researchers. On the one hand, this is due to the development of fractional calculus theory itself, and on the other hand, it is due to the widespread application of fractional differential equations in various disciplines. Anomalous diffusion is almost ubiquitous in nature [1,2]. Many researchers have found that fractional differential equations can provide more suitable mathematical models to describe anomalous diffusion and the transport dynamics of complex systems [3,4].
In physics, the fractional diffusion equation describes diffusion in media with fractal geometry [5]. The fractional wave equation controls the propagation of mechanical diffusion waves in viscoelastic media that exhibit a power-law creep [6]. The fractional diffusion-wave equation can be used to describe hyperdiffusion phenomenon. The space-time fractional diffusion-wave equation is an extension of the classical diffusion wave equation, which is used to simulate practical diffusion and wave phenomena in fluid flow, oil strata and other situations.
In many practical problems, the initial values, diffusion coefficients, source terms, or part of the boundary values are unknown and must be recovered from measurement data; this constitutes an inverse problem. The identification of an unknown source in an inverse problem is widely used in engineering science. In environmental protection engineering, source term identification can accurately estimate the location and intensity of pollution sources [7]. In an environment of increasing energy shortage, source term identification also helps in the search for new energy sources. A variety of regularization methods have been proposed by scholars to deal with inverse problems, including, for example, the Tikhonov regularization method [8,9], the Fourier regularization method [10,11], the Quasi-boundary value regularization method [12,13], the Quasi-reversibility regularization method [14,15], the truncation regularization method [16,17], the Landweber iterative regularization method [18,19], etc. This paper will use the Tikhonov regularization method and the Quasi-boundary regularization method to deal with the problem of source term identification.
Some research has been conducted on the direct problem of the space-time fractional diffusion-wave equation. Luchko [20] utilized the technique of the Mellin integral transform to derive the fundamental solution of the linear multidimensional space-time fractional diffusion-wave equation in the form of a Mellin–Barnes integral. Dehghan et al. [21] applied a high-order numerical scheme to solve the space-time tempered fractional diffusion-wave equation, and proved the unconditional stability and convergence of the method. Garg et al. [22] made use of a matrix method to solve space-time fractional diffusion-wave equations with three space variables, and the solutions of classical, time-fractional, space-fractional, and space-time fractional wave equations were given. Bhrawy et al. [23] solved the two-sided space and time Caputo space-time fractional diffusion-wave equation with various types of nonhomogeneous boundary conditions with an accurate spectral tau method.
Until now, the research on the inverse problem of the fractional diffusion equation has become relatively mature. Huang et al. [24] solved the Cauchy problem of the space-time fractional diffusion equation, and an explicit expression of the Green function was given. Tatar et al. [25] proposed a discrete numerical method based on the minimization problem, the steepest descent method and the least squares method, and the numerical method was used to solve the nonlocal inverse source problem of the one-dimensional space-time fractional diffusion equation. Wang et al. [26] solved the inverse initial value problem for a time fractional diffusion equation with the Tikhonov regularization method. However, at present, the research on the inverse problem of the time fractional diffusion-wave equation is relatively limited [27,28]. The research on the inverse problem of the diffusion-wave equation with both time and space fractional derivatives is scarce and therefore worth investigating. In this paper, we consider the inverse source problem of the space-time fractional diffusion-wave equation, and derive the a priori and a posteriori error estimates between the exact solution and the regularization solutions.
We consider the following space-time fractional diffusion-wave equation [29]:
$$\begin{cases} D_t^\alpha u(x,t)+(-\Delta)^\beta u(x,t)+v\,D_t^\alpha(-\Delta)^\gamma u(x,t)=f(x), & x\in\Omega,\ t\in(0,T),\\ u(x,t)=0, & x\in\partial\Omega,\ t\in[0,T],\\ u(x,0)=\varphi(x), & x\in\Omega,\\ u_t(x,0)=\psi(x), & x\in\Omega,\\ u(x,T)=g(x), & x\in\Omega, \end{cases}$$
where $\Omega$ is a bounded domain in $\mathbb{R}^d$ ($d=1,2,3$) with sufficiently smooth boundary $\partial\Omega$, $T>0$ is a final time, $v>0$ and $0<\gamma<\beta\le 1$. $\varphi(x)$ and $\psi(x)$ are initial data in $L^2(\Omega)$, and $D_t^\alpha$ is the Caputo fractional derivative of order $\alpha$ ($1<\alpha<2$), defined by [30]
$$D_t^\alpha u=\frac{1}{\Gamma(2-\alpha)}\int_0^t\frac{\partial^2 u(x,s)}{\partial s^2}\,\frac{ds}{(t-s)^{\alpha-1}},\qquad 1<\alpha<2,$$
where $\Gamma(\cdot)$ denotes the Gamma function. The operator $(-\Delta)^s$ is the fractional Laplacian, defined by [31]
$$(-\Delta)^s u(x,t)=C_{d,s}\int_{\mathbb{R}^d}\frac{u(x,t)-u(z,t)}{|x-z|^{d+2s}}\,dz,\qquad s\in(0,1),$$
where $C_{d,s}=\dfrac{4^s\,\Gamma(d/2+s)}{\pi^{d/2}\,|\Gamma(-s)|}$.
In Equation (1), the source term f ( x ) is unknown. The main goal of this inverse problem is to identify the spatial source term f ( x ) through the final data u ( x , T ) = g ( x ) . In practice, g ( x ) is obtained through measurement. We assume that the exact data g ( x ) and the measurement data g δ ( x ) satisfy
$$\|g(\cdot)-g^\delta(\cdot)\|\le\delta.$$
A small perturbation of the measurement data can lead to a significant change in the source term $f(x)$, which renders the solution obtained by standard methods meaningless. The reason for this phenomenon is the ill-posedness of the inverse problem, and regularization methods are needed to solve it.
This paper is divided into seven sections. In Section 2, we provide some preliminary results. The solution of Equation (1), the ill-posedness and the conditional stability of the source term identification problem are derived in Section 3. In Sections 4 and 5, we apply the Tikhonov regularization method and the Quasi-boundary regularization method, respectively, to restore the stability of the solution, and provide convergence error estimates for the source term under the a priori and a posteriori regularization parameter selection rules. Several numerical examples are given in Section 6 to demonstrate the effectiveness and stability of both methods. In the final section, we provide a brief conclusion.

2. Preliminaries

Throughout this paper, we use the following Definitions and Lemmas.
Definition 1.
Let $\lambda_n$ and $\chi_n$ be the Dirichlet eigenvalues and eigenfunctions of the Laplacian operator $-\Delta$ on the domain $\Omega$:
$$\begin{cases}-\Delta\chi_n=\lambda_n\chi_n, & \text{in }\Omega,\\ \chi_n=0, & \text{on }\partial\Omega,\end{cases}$$
where $0<\lambda_1\le\lambda_2\le\cdots\le\lambda_n\le\cdots$, $\lim_{n\to\infty}\lambda_n=+\infty$, and $\chi_n(x)\in H^2(\Omega)\cap H_0^1(\Omega)$; then $\{\chi_n(x)\}_{n=1}^\infty$ can be normalized as an orthonormal basis of $L^2(\Omega)$. The fractional Laplacian $(-\Delta)^s$ admits an analogous spectral problem on the bounded domain $\Omega$:
$$\begin{cases}(-\Delta)^s\chi_n=\lambda_n^s\chi_n, & \text{in }\Omega,\\ \chi_n=0, & \text{on }\partial\Omega,\end{cases}$$
where the positive eigenvalues satisfy
$$0<\lambda_1\le\lambda_2\le\cdots\le\lambda_n\le\cdots,\qquad \lim_{n\to\infty}\lambda_n=+\infty,$$
and the corresponding set of real eigenfunctions $\{\chi_n(x)\}_{n=1}^\infty$ is orthogonal and complete in $L^2(\Omega)$.
Definition 2.
For any $p>0$ and $0<r<1$, we define the space
$$D\big(((-\Delta)^r)^p\big)=\left\{\phi\in L^2(\Omega)\ \Big|\ \sum_{n=1}^\infty(\lambda_n^r)^{2p}\,|(\phi,\chi_n)|^2<\infty\right\},$$
where $(\cdot,\cdot)$ is the inner product in $L^2(\Omega)$; $D\big(((-\Delta)^r)^p\big)$ is a Hilbert space with the norm
$$\|\phi\|_{D(((-\Delta)^r)^p)}:=\left(\sum_{n=1}^\infty\lambda_n^{2pr}\,|(\phi,\chi_n)|^2\right)^{\frac12}.$$
Definition 3
([30]). The Mittag–Leffler function is defined by
$$E_{\alpha,\beta}(z)=\sum_{k=0}^\infty\frac{z^k}{\Gamma(\alpha k+\beta)},\qquad z\in\mathbb{C},$$
where $\alpha>0$ and $\beta\in\mathbb{R}$ are arbitrary constants.
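For later numerical illustration it is useful to note that this series can be summed directly for moderate arguments. The following Python sketch (not part of the original paper) evaluates a truncated version of the series; the truncation length and the fallback on Gamma overflow are ad hoc choices, and for large negative arguments an asymptotic expansion such as Lemma 3 below should be used instead.

```python
import math

def mittag_leffler(z, alpha, beta, tol=1e-16, max_terms=200):
    """Truncated series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).

    Reliable only for moderate |z|: for large negative arguments the partial
    sums suffer heavy cancellation, so an asymptotic expansion is preferable.
    """
    total = 0.0
    for k in range(max_terms):
        try:
            term = z**k / math.gamma(alpha * k + beta)
        except OverflowError:
            break  # Gamma has overflowed; for moderate |z| the remaining terms are negligible
        total += term
        if abs(term) < tol * max(1.0, abs(total)):
            break
    return total

# Sanity checks: E_{1,1}(z) = exp(z) and E_{2,1}(-z^2) = cos(z).
print(mittag_leffler(1.0, 1.0, 1.0), math.exp(1.0))
print(mittag_leffler(-1.0, 2.0, 1.0), math.cos(1.0))
```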
Lemma 1
([30]). For $a>0$ and $\operatorname{Re}(p)>|a|^{\frac1\alpha}$, we have
$$\int_0^\infty e^{-pt}\,t^{\alpha k+\beta-1}E_{\alpha,\beta}^{(k)}(\pm a t^\alpha)\,dt=\frac{k!\,p^{\alpha-\beta}}{(p^\alpha\mp a)^{k+1}},$$
where $E_{\alpha,\beta}^{(k)}(y):=\frac{d^k}{dy^k}E_{\alpha,\beta}(y)$.
Lemma 1 states that the Laplace transform of $t^{\alpha k+\beta-1}E_{\alpha,\beta}^{(k)}(\pm a t^\alpha)$ is $\frac{k!\,p^{\alpha-\beta}}{(p^\alpha\mp a)^{k+1}}$.
Lemma 2
([30]). Let $0<\alpha<2$ and $\beta\in\mathbb{R}$ be arbitrary, and suppose that $\mu$ satisfies $\frac{\pi\alpha}{2}<\mu<\min\{\pi,\pi\alpha\}$. Then there exists a constant $C_1>0$ such that
$$|E_{\alpha,\beta}(z)|\le\frac{C_1}{1+|z|},\qquad \mu\le|\arg(z)|\le\pi.$$
Lemma 3
([32]). For $1<\alpha<2$, $\beta\in\mathbb{R}$ and $\eta>0$, we have
$$E_{\alpha,\beta}(-\eta)=\frac{1}{\Gamma(\beta-\alpha)\,\eta}+O\!\left(\frac{1}{\eta^2}\right),\qquad \eta\to\infty.$$
Lemma 4.
For $1<\alpha<2$ and any $\lambda_n$ satisfying $\lambda_n\ge\lambda_1>0$, there exists a positive constant $C_1$ such that
$$\frac{1}{\lambda_n T^\alpha}\le E_{\alpha,\alpha+1}(-\lambda_n T^\alpha)\le\frac{C_1}{\lambda_n T^\alpha}.$$
Proof. 
By Lemma 2, we know
$$|E_{\alpha,\alpha+1}(-\lambda_n T^\alpha)|\le\frac{C_1}{1+\lambda_n T^\alpha}\le\frac{C_1}{\lambda_n T^\alpha}.$$
Applying Lemma 3, we have
$$E_{\alpha,\alpha+1}(-\lambda_n T^\alpha)\ge\frac{1}{\Gamma(1)\,\lambda_n T^\alpha}=\frac{1}{\lambda_n T^\alpha}.$$
The proof is completed. □
Lemma 5.
For constants $p>0$, $\mu>0$ and $s\ge\lambda_1^\beta>0$, the following inequality holds:
$$F_1(s)=\frac{\mu\,s^{2-\frac p2}}{1+\mu s^2}\le\begin{cases}C_2\,\mu^{\frac p4}, & 0<p<4,\\ C_3\,\mu, & p\ge4,\end{cases}$$
where $C_2=\left(\frac{4-p}{p}\right)^{-\frac p4}$ and $C_3=\lambda_1^{(2-\frac p2)\beta}$.
Proof. 
When $0<p<4$, the maximizer $s_0$ of $F_1$ satisfies $F_1'(s_0)=0$, which gives
$$s_0=\left(\frac{4-p}{p\,\mu}\right)^{\frac12}.$$
Therefore,
$$F_1(s)\le F_1(s_0)=\frac{\mu\,s_0^{\,2-\frac p2}}{1+\mu s_0^{\,2}}\le\frac{\mu\,s_0^{\,2-\frac p2}}{\mu s_0^{\,2}}=s_0^{-\frac p2}=\left(\frac{4-p}{p}\right)^{-\frac p4}\mu^{\frac p4}.$$
When $p\ge4$,
$$F_1(s)\le\mu\,s^{2-\frac p2}\le\lambda_1^{(2-\frac p2)\beta}\mu.$$
Lemma 6.
For constants $p>0$, $\mu>0$ and $s\ge\lambda_1^\beta>0$, the following inequality holds:
$$F_2(s)=\frac{\mu\,C_1\,s^{1-\frac p2}}{1+\mu s^2}\le\begin{cases}C_4\,\mu^{\frac{p+2}4}, & 0<p<2,\\ C_5\,\mu, & p\ge2,\end{cases}$$
where $C_4=C_1\left(\frac{2+p}{2-p}\right)^{\frac{p+2}4}$ and $C_5=C_1\lambda_1^{(1-\frac p2)\beta}$.
Proof. 
When $0<p<2$, the maximizer $s_0$ of $F_2$ satisfies $F_2'(s_0)=0$, which gives
$$s_0=\left(\frac{2-p}{(p+2)\,\mu}\right)^{\frac12}.$$
Therefore,
$$F_2(s)\le F_2(s_0)\le\frac{\mu\,C_1\,s_0^{\,1-\frac p2}}{\mu s_0^{\,2}}=C_1\,s_0^{-1-\frac p2}=C_1\left(\frac{2+p}{2-p}\right)^{\frac{p+2}4}\mu^{\frac{p+2}4}.$$
When $p\ge2$,
$$F_2(s)=\frac{\mu\,C_1\,s^{1-\frac p2}}{1+\mu s^2}\le\mu\,C_1\,s^{1-\frac p2}\le C_1\lambda_1^{(1-\frac p2)\beta}\mu.$$
Lemma 7.
For constants $p>0$, $\eta>0$ and $s\ge\lambda_1^\beta>0$, the following inequality holds:
$$F_3(s)=\frac{\eta\,s^{1-\frac p2}}{\eta s+1}\le\begin{cases}C_6\,\eta^{\frac p2}, & 0<p<2,\\ C_7\,\eta, & p\ge2,\end{cases}$$
where $C_6=\left(\frac{2-p}{p}\right)^{-\frac p2}$ and $C_7=\lambda_1^{(1-\frac p2)\beta}$.
Proof. 
When $0<p<2$, the maximizer $s_0$ of $F_3$ satisfies $F_3'(s_0)=0$, which gives
$$s_0=\frac{2-p}{p\,\eta}.$$
Therefore,
$$F_3(s)\le F_3(s_0)=\frac{\eta\,s_0^{\,1-\frac p2}}{\eta s_0+1}\le\frac{\eta\,s_0^{\,1-\frac p2}}{\eta s_0}=s_0^{-\frac p2}=\left(\frac{2-p}{p}\right)^{-\frac p2}\eta^{\frac p2}.$$
When $p\ge2$,
$$F_3(s)=\frac{\eta\,s^{1-\frac p2}}{\eta s+1}\le\eta\,s^{1-\frac p2}\le\lambda_1^{(1-\frac p2)\beta}\eta.$$
Lemma 8.
For constants $p>0$, $\eta>0$ and $s\ge\lambda_1^\beta>0$, the following inequality holds:
$$F_4(s)=\frac{\eta\,(C_1)^{\frac12}\,s^{\frac{2-p}4}}{\eta s+1}\le\begin{cases}C_8\,\eta^{\frac{p+2}4}, & 0<p<2,\\ C_9\,\eta, & p\ge2,\end{cases}$$
where $C_8=(C_1)^{\frac12}\left(\frac{2+p}{2-p}\right)^{\frac{p+2}4}$ and $C_9=(C_1)^{\frac12}\lambda_1^{\frac{(2-p)\beta}4}$.
Proof. 
When $0<p<2$, the maximizer $s_0$ of $F_4$ satisfies $F_4'(s_0)=0$, which gives
$$s_0=\frac{2-p}{(p+2)\,\eta}.$$
Therefore,
$$F_4(s)\le F_4(s_0)=\frac{\eta\,(C_1)^{\frac12}\,s_0^{\,\frac{2-p}4}}{\eta s_0+1}\le\frac{\eta\,(C_1)^{\frac12}\,s_0^{\,\frac{2-p}4}}{\eta s_0}=(C_1)^{\frac12}\left(\frac{2+p}{2-p}\right)^{\frac{p+2}4}\eta^{\frac{p+2}4}.$$
When $p\ge2$,
$$F_4(s)=\frac{\eta\,(C_1)^{\frac12}\,s^{\frac{2-p}4}}{\eta s+1}\le\eta\,(C_1)^{\frac12}\,s^{\frac{2-p}4}\le(C_1)^{\frac12}\lambda_1^{\frac{(2-p)\beta}4}\eta.$$
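The four elementary bounds in Lemmas 5–8 drive all the error estimates below. As a quick numerical sanity check (not part of the paper), the following Python sketch evaluates $F_1,\dots,F_4$ on a grid $s\ge\lambda_1^\beta$ and compares the observed maxima with the stated constants; the particular values of $p$, $\mu$, $\eta$, $\lambda_1$, $\beta$ and $C_1$ are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative parameters (arbitrary choices, 0 < p < 2 so all four cases apply).
p, mu, eta, lam1, beta, C1 = 1.0, 1e-4, 1e-3, 1.0, 1.0, 2.0

s = np.linspace(lam1**beta, 1e4, 400_000)        # grid of s values with s >= lambda_1^beta

F1 = mu * s**(2 - p/2) / (1 + mu * s**2)
F2 = mu * C1 * s**(1 - p/2) / (1 + mu * s**2)
F3 = eta * s**(1 - p/2) / (eta * s + 1)
F4 = eta * np.sqrt(C1) * s**((2 - p)/4) / (eta * s + 1)

C2 = ((4 - p) / p)**(-p/4)
C4 = C1 * ((2 + p) / (2 - p))**((p + 2)/4)
C6 = ((2 - p) / p)**(-p/2)
C8 = np.sqrt(C1) * ((2 + p) / (2 - p))**((p + 2)/4)

print(F1.max(), "<=", C2 * mu**(p/4))            # Lemma 5
print(F2.max(), "<=", C4 * mu**((p + 2)/4))      # Lemma 6
print(F3.max(), "<=", C6 * eta**(p/2))           # Lemma 7
print(F4.max(), "<=", C8 * eta**((p + 2)/4))     # Lemma 8
```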

3. Ill-Posed Analysis and Conditional Stability

In this section, we provide some results that are very useful for our main conclusions. Using separation of variables and the Laplace transform (Lemma 1), we obtain the solution of (1) as follows:
$$u(x,t)=\sum_{n=1}^\infty\left(\frac{t^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta t^\alpha}{1+v\lambda_n^\gamma}\right)f_n+E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta t^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n+t\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta t^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n\right)\chi_n(x),$$
where $f_n=(f(x),\chi_n(x))$, $\varphi_n=(\varphi(x),\chi_n(x))$ and $\psi_n=(\psi(x),\chi_n(x))$ are Fourier coefficients.
Using g ( x ) = u ( x , T ) , we can get
$$g(x)=\sum_{n=1}^\infty\left(\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)f_n+E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n+T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n\right)\chi_n(x),$$
and
$$g_n=\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)f_n+E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n+T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n,$$
where $g_n=(g(x),\chi_n(x))$ is a Fourier coefficient.
Then,
$$f_n=\frac{g_n-E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n-T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n}{\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)}.$$
Thus, the exact solution of the problem can be expressed as
$$f(x)=\sum_{n=1}^\infty\frac{g_n-E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n-T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n}{\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)}\,\chi_n(x).$$
Let us denote
$$h_n=g_n-E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n-T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n,$$
then we have
$$f(x)=\sum_{n=1}^\infty\frac{h_n}{\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)}\,\chi_n(x).$$
Now, we put the definition of f n into (17). It is easy to see that Equation (1) becomes the following operator equation
K f ( x ) = h ( x ) ,
i.e.,
$$\sum_{n=1}^\infty\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)f_n\,\chi_n(x)=\sum_{n=1}^\infty h_n\,\chi_n(x),$$
where $K:f\mapsto h$ is a linear operator and $h_n=(h(x),\chi_n(x))$ is a Fourier coefficient. Letting $K^*$ be the adjoint of $K$, since $\{\chi_n\}_{n=1}^\infty$ are orthonormal in $L^2(\Omega)$, it is easy to verify
$$K^*K\chi_n(x)=\left(\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\right)^2\chi_n(x).$$
Hence, the singular values of $K$ are
$$\sigma_n=\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right),$$
and the source term can be written as
$$f(x)=\sum_{n=1}^\infty\frac{h_n}{\sigma_n}\,\chi_n(x).$$
From Lemma 4, we know
$$\frac{1}{\lambda_n^\beta}\le\sigma_n\le\frac{C_1}{\lambda_n^\beta}.$$
Thus,
$$\frac{1}{\sigma_n}\ge\lambda_n^\beta.$$
Since $\lim_{n\to+\infty}\lambda_n=+\infty$, we have $\frac{1}{\sigma_n}\to+\infty$, so a small perturbation of $h(x)$ can cause an arbitrarily large change in the source term $f(x)$. Hence the source identification problem for Equation (1) is ill-posed.
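To make this instability concrete, the following Python sketch (illustrative only, not from the paper) mimics the naive inversion $f_n=h_n/\sigma_n$ in a synthetic spectral setting: it uses $\sigma_n=\lambda_n^{-\beta}$ with $\lambda_n=n^2$ (the one-dimensional eigenvalues of Section 6) as a stand-in for the exact singular values, consistent with the two-sided bound above, and shows how a perturbation of size $\delta$ in the coefficients $h_n$ is amplified by a factor of order $\lambda_n^\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, delta, N = 1.0, 1e-3, 50

n = np.arange(1, N + 1)
lam = n.astype(float)**2            # lambda_n = n^2 (Dirichlet eigenvalues on (0, pi))
sigma = 1.0 / lam**beta             # stand-in for sigma_n, consistent with 1/lam^beta <= sigma_n <= C1/lam^beta

f_true = np.exp(-0.5 * n)           # a smooth source in coefficient space
h = sigma * f_true                  # exact data coefficients h_n = sigma_n * f_n
h_noisy = h + delta * rng.standard_normal(N)

f_naive = h_noisy / sigma           # naive inversion: the noise is amplified by 1/sigma_n ~ lambda_n^beta
print("noise level in h      :", delta)
print("resulting error in f  :", np.linalg.norm(f_naive - f_true))
print("amplification at n=N  :", lam[-1]**beta)
```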
Next, we give an a priori bound for the exact solution f ( x ) ,
$$\|f(\cdot)\|_{D\left(((-\Delta)^\beta)^{\frac p2}\right)}=\left(\sum_{n=1}^\infty\lambda_n^{\beta p}\,|(f,\chi_n)|^2\right)^{\frac12}\le E,$$
where p > 0 , E > 0 are constants.
Theorem 1.
Suppose that $f(x)$ satisfies the a priori bound condition (22). Then we have
$$\|f(\cdot)\|\le E^{\frac{2}{p+2}}\,\|h(\cdot)\|^{\frac{p}{p+2}}.$$
Proof. 
By the Hölder inequality and the bound $\frac1{\sigma_n}\le\lambda_n^\beta$, we have
$$\|f(\cdot)\|^2=\sum_{n=1}^\infty\frac{h_n^2}{\sigma_n^2}=\sum_{n=1}^\infty\left(\frac{h_n^2}{\sigma_n^{p+2}}\right)^{\frac2{p+2}}\left(h_n^2\right)^{\frac p{p+2}}\le\left(\sum_{n=1}^\infty\frac{h_n^2}{\sigma_n^{p+2}}\right)^{\frac2{p+2}}\left(\sum_{n=1}^\infty h_n^2\right)^{\frac p{p+2}}=\left(\sum_{n=1}^\infty\frac{f_n^2}{\sigma_n^{p}}\right)^{\frac2{p+2}}\|h(\cdot)\|^{\frac{2p}{p+2}}\le\left(\sum_{n=1}^\infty\lambda_n^{\beta p}f_n^2\right)^{\frac2{p+2}}\|h(\cdot)\|^{\frac{2p}{p+2}}\le E^{\frac4{p+2}}\,\|h(\cdot)\|^{\frac{2p}{p+2}}.$$
Thus,
$$\|f(\cdot)\|\le E^{\frac2{p+2}}\,\|h(\cdot)\|^{\frac p{p+2}}.$$

4. Tikhonov Regularization Method and Convergence Error Estimation

In this section, we use the Tikhonov regularization method to overcome the ill-posedness of problem (1) and give the Tikhonov regularization solution. The error estimates of the Tikhonov regularization method under the a priori and a posteriori parameter selection rules are then provided.
Consider the following operator equation:
K f = h .
In a finite-dimensional space, when the overdetermined linear algebraic equation $Kf=h$ is solved approximately, the standard approach is to seek the least-squares solution, that is, to minimize the continuous functional $\|Kf-h\|$ over the finite-dimensional space $X$; this minimization problem always has a solution.
However, if $K:X\to Y$ is a compact linear operator with non-closed range between Hilbert spaces $X$ and $Y$, then the generalized solution of (24) is given by $f=K^{-1}h$, where $K^{-1}$ is the Moore–Penrose (generalized) inverse of the operator $K$. In practical applications, $h$ may be contaminated by external noise and is available only as noisy data $h^\delta$ with $\|h^\delta-h\|\le\delta$, where $\delta$ is the noise level. Since the range of $K$ is not closed, the generalized inverse $K^{-1}$ is not continuous, which means that $K^{-1}h^\delta$ does not converge to $f=K^{-1}h$ as $h^\delta$ tends to $h$ with $\delta\to0$. In this case, the minimization problem is ill-posed.
Therefore, a penalty term must be added to the objective function K f h so that the problem of finding the minimal element of the new objective function is well-posed (from the perspective of optimization theory). Alternatively, the equation that the minimal element satisfies is a second kind of equation (from the perspective of integral equation theory). This is the basic idea of the Tikhonov regularization method to solve ill-posed problems.
The Tikhonov regularization method takes the minimizer $f_\mu$ of the regularization functional
$$J_\mu(f)=\|Kf-h\|^2+\mu\|f\|^2,\qquad f\in X,\ \mu>0,$$
as the regularized approximate solution of $Kf=h$. The element $f_\mu$ is uniquely determined by
$$\mu f_\mu+K^*Kf_\mu=K^*h,$$
where $\mu>0$ is the regularization parameter and $K^*$ is the adjoint operator of $K$.
Using the singular values of the linear operator $K$, we obtain the Tikhonov regularization solution of the spatial source term for exact data,
$$f_\mu(x)=\sum_{n=1}^\infty\frac{\sigma_n}{\sigma_n^2+\mu}\,h_n\,\chi_n(x),$$
where $h_n=(h(x),\chi_n(x))$ and $\sigma_n$ is the singular value defined in Section 3. For the noisy data $g^\delta$, the corresponding coefficients are
$$h_n^\delta=g_n^\delta-E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n-T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n,$$
and thus
$$\|h^\delta(\cdot)-h(\cdot)\|\le\delta.$$
The Tikhonov regularization solution of the spatial source term for noisy data is then
$$f_\mu^\delta(x)=\sum_{n=1}^\infty\frac{\sigma_n}{\sigma_n^2+\mu}\,h_n^\delta\,\chi_n(x),$$
where $h_n^\delta=(h^\delta(x),\chi_n(x))$ is a Fourier coefficient.
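In this spectral form, the Tikhonov method simply replaces the unstable factor $1/\sigma_n$ by the bounded filter $\sigma_n/(\sigma_n^2+\mu)$. The following Python sketch (illustrative only) assembles $f_\mu^\delta$ from given coefficients $h_n^\delta$ and singular values $\sigma_n$; the eigenfunctions $\chi_n(x)=\sqrt{2/\pi}\sin(nx)$ are those of the one-dimensional example in Section 6, and the synthetic values of $\sigma_n$ and $h_n^\delta$ are assumptions carried over from the earlier sketch, not quantities computed from the paper's data.

```python
import numpy as np

def tikhonov_source(h_delta, sigma, mu, x):
    """Spectral Tikhonov reconstruction f_mu^delta(x) on Omega = (0, pi)."""
    n = np.arange(1, len(h_delta) + 1)
    chi = np.sqrt(2.0 / np.pi) * np.sin(np.outer(x, n))   # chi_n(x) = sqrt(2/pi) sin(n x)
    filt = sigma / (sigma**2 + mu)                        # Tikhonov filter, bounded by 1/(2 sqrt(mu))
    return chi @ (filt * h_delta)

# Synthetic coefficients as in the previous sketch.
n_modes = 50
lam = np.arange(1, n_modes + 1, dtype=float)**2
sigma = 1.0 / lam
rng = np.random.default_rng(1)
h_delta = sigma * np.exp(-0.5 * np.arange(1, n_modes + 1)) + 1e-3 * rng.standard_normal(n_modes)

x = np.linspace(0.0, np.pi, 201)
f_mu_delta = tikhonov_source(h_delta, sigma, mu=1e-3, x=x)
print(f_mu_delta.shape)   # reconstructed source sampled on the grid x
```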

4.1. The Priori Convergence Error Estimate

The convergence error estimate for the above Tikhonov regularization solution can be obtained under the a priori regularization parameter choice rule.
Theorem 2.
Let $f(x)$ be given by (20) and $f_\mu^\delta(x)$ be given by (29). Suppose that $f(x)$ satisfies the a priori bound condition (22). Then we have:
(1)
If $0<p<4$ and the regularization parameter is chosen as $\mu=\left(\frac\delta E\right)^{\frac4{p+2}}$, then
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\Big(\tfrac12+C_2\Big)\,\delta^{\frac p{p+2}}E^{\frac2{p+2}};$$
(2)
If $p\ge4$ and the regularization parameter is chosen as $\mu=\left(\frac\delta E\right)^{\frac23}$, then
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\Big(\tfrac12+C_3\Big)\,\delta^{\frac23}E^{\frac13},$$
where $C_2=\left(\frac{4-p}{p}\right)^{-\frac p4}$ and $C_3=\lambda_1^{(2-\frac p2)\beta}$.
Proof. 
By the triangle inequality, we have
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\|f_\mu^\delta(\cdot)-f_\mu(\cdot)\|+\|f_\mu(\cdot)-f(\cdot)\|.$$
Firstly, we give an estimate for the first term. From (27) and (29), we have
$$\|f_\mu^\delta(\cdot)-f_\mu(\cdot)\|^2=\left\|\sum_{n=1}^\infty\frac{\sigma_n}{\sigma_n^2+\mu}\,(h_n^\delta-h_n)\,\chi_n(x)\right\|^2=\sum_{n=1}^\infty\left(\frac{\sigma_n}{\sigma_n^2+\mu}\right)^2(h_n^\delta-h_n)^2\le\sup_{n\ge1}\left(\frac{\sigma_n}{\sigma_n^2+\mu}\right)^2\delta^2\le\left(\frac{1}{2\sqrt\mu}\right)^2\delta^2.$$
Thus,
$$\|f_\mu^\delta(\cdot)-f_\mu(\cdot)\|\le\frac{\delta}{2\sqrt\mu}.$$
Then, we estimate the second term by (20), (22) and (27), and Lemmas 4 and 5,
$$\|f_\mu(\cdot)-f(\cdot)\|^2=\left\|\sum_{n=1}^\infty\left(\frac{\sigma_n}{\sigma_n^2+\mu}-\frac1{\sigma_n}\right)h_n\,\chi_n(x)\right\|^2=\sum_{n=1}^\infty\left(\frac{\mu}{\sigma_n^2+\mu}\right)^2f_n^2=\sum_{n=1}^\infty\frac{1}{\lambda_n^{\beta p}}\left(\frac{\mu}{\sigma_n^2+\mu}\right)^2\lambda_n^{\beta p}f_n^2\le\sup_{n\ge1}\left(\frac{\mu\,\lambda_n^{-\frac{\beta p}2}}{\sigma_n^2+\mu}\right)^2E^2\le\sup_{n\ge1}\left(\frac{\mu\,\lambda_n^{-\frac{\beta p}2}}{\lambda_n^{-2\beta}+\mu}\right)^2E^2=\sup_{n\ge1}\big(A_1(n)\big)^2E^2,$$
where $A_1(n)=\dfrac{\mu\,\lambda_n^{(2-\frac p2)\beta}}{1+\mu\,\lambda_n^{2\beta}}$.
Letting s = λ n β , from Lemma 5, we obtain
$$A_1(n)=\frac{\mu\,\lambda_n^{(2-\frac p2)\beta}}{1+\mu\,\lambda_n^{2\beta}}=\frac{\mu\,s^{2-\frac p2}}{1+\mu s^2}\le\begin{cases}C_2\,\mu^{\frac p4}, & 0<p<4,\\ C_3\,\mu, & p\ge4,\end{cases}$$
where $C_2=\left(\frac{4-p}{p}\right)^{-\frac p4}$ and $C_3=\lambda_1^{(2-\frac p2)\beta}$.
Thus,
$$\|f_\mu(\cdot)-f(\cdot)\|\le\begin{cases}C_2\,\mu^{\frac p4}E, & 0<p<4,\\ C_3\,\mu E, & p\ge4.\end{cases}$$
Combining (33) and (35), we choose the regularization parameter μ by
$$\mu=\begin{cases}\left(\frac\delta E\right)^{\frac4{p+2}}, & 0<p<4,\\[4pt] \left(\frac\delta E\right)^{\frac23}, & p\ge4.\end{cases}$$
We have
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\begin{cases}\left(\frac12+C_2\right)\delta^{\frac p{p+2}}E^{\frac2{p+2}}, & 0<p<4,\\[4pt] \left(\frac12+C_3\right)\delta^{\frac23}E^{\frac13}, & p\ge4.\end{cases}$$
The proof is completed. □

4.2. The Posteriori Convergence Error Estimate

The a priori parameter choice is based on an a priori bound of the exact solution, which is generally difficult to obtain in practice. We therefore consider a posteriori regularization parameter choice rule, namely the Morozov discrepancy principle. Based on the conditional stability estimate in Theorem 1, we can obtain the a posteriori convergence error estimate of the source term.
The Morozov discrepancy principle here is to find μ such that
$$\|Kf_\mu^\delta(\cdot)-h^\delta(\cdot)\|=\tau\delta,$$
where $\tau>1$ is a constant. By the following lemma, (38) has a unique solution provided $\|h^\delta\|>\tau\delta$.
Lemma 9.
Let $\rho(\mu)=\|Kf_\mu^\delta(\cdot)-h^\delta(\cdot)\|$. The following results hold:
(a)
$\rho(\mu)$ is a continuous function;
(b)
$\lim_{\mu\to0}\rho(\mu)=0$;
(c)
$\lim_{\mu\to\infty}\rho(\mu)=\|h^\delta(\cdot)\|$;
(d)
$\rho(\mu)$ is a strictly increasing function over $(0,\infty)$.
Proof. 
The Lemma can be easily proven with the expression
$$\rho(\mu)=\left(\sum_{n=1}^\infty\left(\frac{\mu}{\sigma_n^2+\mu}\right)^2(h_n^\delta)^2\right)^{\frac12}.$$
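Since $\rho$ is continuous and strictly increasing with $\rho(0^+)=0$ and $\rho(\infty)=\|h^\delta\|$, the scalar equation $\rho(\mu)=\tau\delta$ can be solved by simple bisection. The sketch below (illustrative only, in the same synthetic spectral setting as the previous examples) performs a logarithmic bisection; in an actual computation $h_n^\delta$, $\sigma_n$ and $\delta$ come from the discretized problem.

```python
import numpy as np

def rho(mu, h_delta, sigma):
    """Discrepancy rho(mu) = || K f_mu^delta - h^delta || in spectral form."""
    return np.sqrt(np.sum((mu / (sigma**2 + mu))**2 * h_delta**2))

def morozov_mu(h_delta, sigma, tau, delta, lo=1e-16, hi=1e6, iters=200):
    """Bisection (on a log scale) for rho(mu) = tau*delta; assumes ||h^delta|| > tau*delta."""
    target = tau * delta
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if rho(mid, h_delta, sigma) < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Synthetic data as before.
n_modes = 50
sigma = 1.0 / np.arange(1, n_modes + 1, dtype=float)**2
rng = np.random.default_rng(2)
noise = 1e-3 * rng.standard_normal(n_modes)
h_delta = sigma * np.exp(-0.5 * np.arange(1, n_modes + 1)) + noise
delta = np.linalg.norm(noise)

print("mu from the discrepancy principle:", morozov_mu(h_delta, sigma, tau=1.1, delta=delta))
```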
Theorem 3.
Let $f(x)$ be given by (20) and $f_\mu^\delta(x)$ be given by (29). Suppose that $f(x)$ satisfies the a priori bound condition (22) and that the regularization parameter $\mu>0$ is chosen by the Morozov discrepancy principle (38). Then we have:
(1)
If $0<p<2$, then
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\left((\tau+1)^{\frac p{p+2}}+\tfrac12\Big(\tfrac{C_4}{\tau-1}\Big)^{\frac2{p+2}}\right)\delta^{\frac p{p+2}}E^{\frac2{p+2}};$$
(2)
If $p\ge2$, then
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\left((\tau+1)^{\frac p{p+2}}+\tfrac12\Big(\tfrac{C_5}{\tau-1}\Big)^{\frac12}\right)\delta^{\frac12}E^{\frac12},$$
where $C_4=C_1\left(\frac{2+p}{2-p}\right)^{\frac{p+2}4}$ and $C_5=C_1\lambda_1^{(1-\frac p2)\beta}$ are the constants of Lemma 6.
Proof. 
By the triangle inequality, we have
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\|f_\mu^\delta(\cdot)-f_\mu(\cdot)\|+\|f_\mu(\cdot)-f(\cdot)\|.$$
We first bound the regularization parameter determined by (38). Using (38), the bound $\|h^\delta-h\|\le\delta$ and the a priori bound (22),
$$\tau\delta=\left\|\sum_{n=1}^\infty\frac{\mu}{\sigma_n^2+\mu}\,h_n^\delta\,\chi_n(x)\right\|\le\left\|\sum_{n=1}^\infty\frac{\mu}{\sigma_n^2+\mu}\,(h_n^\delta-h_n)\,\chi_n(x)\right\|+\left\|\sum_{n=1}^\infty\frac{\mu}{\sigma_n^2+\mu}\,h_n\,\chi_n(x)\right\|\le\delta+\left\|\sum_{n=1}^\infty\frac{\mu\,\sigma_n}{\sigma_n^2+\mu}\,\lambda_n^{-\frac{\beta p}2}\,\lambda_n^{\frac{\beta p}2}f_n\,\chi_n(x)\right\|\le\delta+\sup_{n\ge1}\left(\frac{\mu\,\sigma_n\,\lambda_n^{-\frac{\beta p}2}}{\sigma_n^2+\mu}\right)E\le\delta+\sup_{n\ge1}\left(\frac{\mu\,C_1\,\lambda_n^{-\beta}\,\lambda_n^{-\frac{\beta p}2}}{\lambda_n^{-2\beta}+\mu}\right)E=\delta+\sup_{n\ge1}\big(A_2(n)\big)E,$$
where $A_2(n)=\dfrac{\mu\,C_1\,\lambda_n^{(1-\frac p2)\beta}}{1+\mu\,\lambda_n^{2\beta}}$. Letting $s=\lambda_n^\beta$, from Lemma 6 we obtain
$$A_2(n)=\frac{\mu\,C_1\,s^{1-\frac p2}}{1+\mu s^2}\le\begin{cases}C_4\,\mu^{\frac{p+2}4}, & 0<p<2,\\ C_5\,\mu, & p\ge2,\end{cases}$$
where $C_4=C_1\left(\frac{2+p}{2-p}\right)^{\frac{p+2}4}$ and $C_5=C_1\lambda_1^{(1-\frac p2)\beta}$. Thus,
$$(\tau-1)\,\delta\le\begin{cases}C_4\,\mu^{\frac{p+2}4}E, & 0<p<2,\\ C_5\,\mu E, & p\ge2.\end{cases}$$
Then we obtain
$$\frac1\mu\le\begin{cases}\left(\frac{C_4}{\tau-1}\right)^{\frac4{p+2}}\left(\frac E\delta\right)^{\frac4{p+2}}, & 0<p<2,\\[4pt] \frac{C_5}{\tau-1}\cdot\frac E\delta, & p\ge2.\end{cases}$$
Substituting (44) into (33), we have
$$\|f_\mu^\delta(\cdot)-f_\mu(\cdot)\|\le\frac{\delta}{2\sqrt\mu}\le\begin{cases}\frac12\left(\frac{C_4}{\tau-1}\right)^{\frac2{p+2}}\delta^{\frac p{p+2}}E^{\frac2{p+2}}, & 0<p<2,\\[4pt] \frac12\left(\frac{C_5}{\tau-1}\right)^{\frac12}\delta^{\frac12}E^{\frac12}, & p\ge2.\end{cases}$$
Then, we give an estimate for the second term of (41). From (20) and (27), we know that
$$\|K(f_\mu(\cdot)-f(\cdot))\|=\left\|\sum_{n=1}^\infty\frac{\mu}{\sigma_n^2+\mu}\,h_n\,\chi_n(x)\right\|\le\left\|\sum_{n=1}^\infty\frac{\mu}{\sigma_n^2+\mu}\,(h_n-h_n^\delta)\,\chi_n(x)\right\|+\left\|\sum_{n=1}^\infty\frac{\mu}{\sigma_n^2+\mu}\,h_n^\delta\,\chi_n(x)\right\|\le\delta+\tau\delta=(1+\tau)\,\delta.$$
Applying the a priori bound condition (22), we obtain
$$\|f_\mu(\cdot)-f(\cdot)\|_{D\left(((-\Delta)^\beta)^{\frac p2}\right)}=\left(\sum_{n=1}^\infty\left(\frac{h_n}{\sigma_n}\cdot\frac{\mu}{\sigma_n^2+\mu}\right)^2\lambda_n^{\beta p}\right)^{\frac12}\le\left(\sum_{n=1}^\infty(f,\chi_n)^2\,\lambda_n^{\beta p}\right)^{\frac12}\le E.$$
By Theorem 1, we deduce that
$$\|f_\mu(\cdot)-f(\cdot)\|\le E^{\frac2{p+2}}\,\|K(f_\mu(\cdot)-f(\cdot))\|^{\frac p{p+2}}\le(\tau+1)^{\frac p{p+2}}\,\delta^{\frac p{p+2}}E^{\frac2{p+2}},\qquad\text{for each }p>0.$$
Combining (45) and (47), we have
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\begin{cases}\left((\tau+1)^{\frac p{p+2}}+\frac12\left(\frac{C_4}{\tau-1}\right)^{\frac2{p+2}}\right)\delta^{\frac p{p+2}}E^{\frac2{p+2}}, & 0<p<2,\\[4pt] \left((\tau+1)^{\frac p{p+2}}+\frac12\left(\frac{C_5}{\tau-1}\right)^{\frac12}\right)\delta^{\frac12}E^{\frac12}, & p\ge2.\end{cases}$$

5. Quasi-Boundary Regularization Method and Convergence Error Estimation

In this section, we use the Quasi-boundary regularization method to solve problem (1) and give the Quasi-boundary regularization solution. The error estimates of the Quasi-boundary regularization method under the a priori and a posteriori parameter selection rules are then provided.
The Quasi-boundary regularization method replaces the local boundary condition or terminal condition of the original problem with nonlocal conditions. Then, we solve the new problem to obtain a Quasi-boundary regularization solution.
Let u η δ ( x , t ) be the Quasi-boundary regularization solution of the following problem
$$\begin{cases}D_t^\alpha u_\eta^\delta(x,t)+(-\Delta)^\beta u_\eta^\delta(x,t)+v\,D_t^\alpha(-\Delta)^\gamma u_\eta^\delta(x,t)=f_\eta^\delta(x), & x\in\Omega,\ t\in(0,T),\\ u_\eta^\delta(x,t)=0, & x\in\partial\Omega,\ t\in[0,T],\\ u_\eta^\delta(x,0)=\varphi(x), & x\in\Omega,\\ u_{\eta,t}^\delta(x,0)=\psi(x), & x\in\Omega,\\ u_\eta^\delta(x,T)+\eta f_\eta^\delta(x)=g^\delta(x), & x\in\Omega,\end{cases}$$
where η > 0 is the regularization parameter. Using the separated variable method, the Laplace transformation and the inverse transformation, we have
$$u_\eta^\delta(x,t)=\sum_{n=1}^\infty\left(\frac{t^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta t^\alpha}{1+v\lambda_n^\gamma}\right)(f_\eta^\delta)_n+E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta t^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n+t\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta t^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n\right)\chi_n(x).$$
According to $u_\eta^\delta(x,T)+\eta f_\eta^\delta(x)=g^\delta(x)$, we obtain $u_\eta^\delta(x,T)=\sum_{n=1}^\infty\big(g_n^\delta-\eta(f_\eta^\delta)_n\big)\chi_n(x)$.
On the other hand, according to formula (50) when t = T , we obtain
$$u_\eta^\delta(x,T)=\sum_{n=1}^\infty\left(\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)(f_\eta^\delta)_n+E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n+T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n\right)\chi_n(x).$$
Thus,
$$\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)(f_\eta^\delta)_n+E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n+T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n=g_n^\delta-\eta(f_\eta^\delta)_n.$$
We can obtain
$$(f_\eta^\delta)_n=\frac{g_n^\delta-E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n-T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n}{\frac{T^\alpha}{1+v\lambda_n^\gamma}E_{\alpha,\alpha+1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)+\eta}=\frac{h_n^\delta}{\sigma_n+\eta},$$
where, as before, the noisy data coefficients are
$$h_n^\delta=g_n^\delta-E_{\alpha,1}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\varphi_n-T\,E_{\alpha,2}\!\left(-\frac{\lambda_n^\beta T^\alpha}{1+v\lambda_n^\gamma}\right)\psi_n,$$
and thus
$$\|h^\delta(\cdot)-h(\cdot)\|\le\delta.$$
We have the Quasi-boundary regularization solution of the spatial source term with error
$$f_\eta^\delta(x)=\sum_{n=1}^\infty\frac{h_n^\delta}{\sigma_n+\eta}\,\chi_n(x),$$
where h n δ = ( h δ ( x ) , χ n ( x ) ) . Thus, we obtain the regularization solution of the spatial source term without error
$$f_\eta(x)=\sum_{n=1}^\infty\frac{h_n}{\sigma_n+\eta}\,\chi_n(x),$$
where h n = ( h ( x ) , χ n ( x ) ) .
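In the same spectral form, the Quasi-boundary method replaces the Tikhonov filter $\sigma_n/(\sigma_n^2+\mu)$ by $1/(\sigma_n+\eta)$. Below is a minimal Python sketch under the same synthetic assumptions as the earlier examples (the values of $\sigma_n$ and $h_n^\delta$ are illustrative stand-ins, not quantities computed from the paper's data).

```python
import numpy as np

def quasi_boundary_source(h_delta, sigma, eta, x):
    """Quasi-boundary reconstruction f_eta^delta(x): coefficients h_n^delta / (sigma_n + eta)."""
    n = np.arange(1, len(h_delta) + 1)
    chi = np.sqrt(2.0 / np.pi) * np.sin(np.outer(x, n))   # chi_n(x) = sqrt(2/pi) sin(n x)
    return chi @ (h_delta / (sigma + eta))

# Synthetic coefficients as in the Tikhonov sketch; eta plays the role of the
# regularization parameter in u_eta^delta(x,T) + eta*f_eta^delta(x) = g^delta(x).
n_modes = 50
sigma = 1.0 / np.arange(1, n_modes + 1, dtype=float)**2
rng = np.random.default_rng(1)
h_delta = sigma * np.exp(-0.5 * np.arange(1, n_modes + 1)) + 1e-3 * rng.standard_normal(n_modes)

x = np.linspace(0.0, np.pi, 201)
f_eta_delta = quasi_boundary_source(h_delta, sigma, eta=1e-2, x=x)
print(f_eta_delta.shape)
```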

5.1. The Priori Convergence Error Estimate

The convergence error estimate for the above Quasi-boundary regularization solution can be obtained under the a priori regularization parameter choice rule.
Theorem 4.
Let $f(x)$ be given by (20) and $f_\eta^\delta(x)$ be given by (54). Suppose that $f(x)$ satisfies the a priori bound condition (22). Then we have:
(1)
If $0<p<2$ and the regularization parameter is chosen as $\eta=\left(\frac\delta E\right)^{\frac2{p+2}}$, then
$$\|f_\eta^\delta(\cdot)-f(\cdot)\|\le(1+C_6)\,\delta^{\frac p{p+2}}E^{\frac2{p+2}};$$
(2)
If $p\ge2$ and the regularization parameter is chosen as $\eta=\left(\frac\delta E\right)^{\frac12}$, then
$$\|f_\eta^\delta(\cdot)-f(\cdot)\|\le(1+C_7)\,\delta^{\frac12}E^{\frac12},$$
where $C_6=\left(\frac{2-p}{p}\right)^{-\frac p2}$ and $C_7=\lambda_1^{(1-\frac p2)\beta}$.
Proof. 
By the triangle inequality, we obtain
$$\|f_\eta^\delta(\cdot)-f(\cdot)\|\le\|f_\eta^\delta(\cdot)-f_\eta(\cdot)\|+\|f_\eta(\cdot)-f(\cdot)\|.$$
We first give an estimate of the first term of (58). Through (53)–(55), we have
$$\|f_\eta^\delta(\cdot)-f_\eta(\cdot)\|^2=\left\|\sum_{n=1}^\infty\frac{h_n^\delta-h_n}{\sigma_n+\eta}\,\chi_n(x)\right\|^2=\sum_{n=1}^\infty\left(\frac{h_n^\delta-h_n}{\sigma_n+\eta}\right)^2\le\left(\frac\delta\eta\right)^2.$$
Thus,
$$\|f_\eta^\delta(\cdot)-f_\eta(\cdot)\|\le\frac\delta\eta.$$
Now, we estimate the second term of (58). Using (20), (22) and (55), and Lemma 4, we can deduce
$$\|f_\eta(\cdot)-f(\cdot)\|^2=\left\|\sum_{n=1}^\infty\left(\frac{h_n}{\sigma_n+\eta}-\frac{h_n}{\sigma_n}\right)\chi_n(x)\right\|^2=\left\|\sum_{n=1}^\infty\frac{h_n}{\sigma_n}\cdot\frac{\eta}{\sigma_n+\eta}\,\chi_n(x)\right\|^2=\sum_{n=1}^\infty f_n^2\,\lambda_n^{\beta p}\cdot\frac1{\lambda_n^{\beta p}}\left(\frac{\eta}{\sigma_n+\eta}\right)^2\le\sup_{n\ge1}\left(\frac{\eta\,\lambda_n^{-\frac{\beta p}2}}{\sigma_n+\eta}\right)^2E^2\le\sup_{n\ge1}\left(\frac{\eta\,\lambda_n^{(1-\frac p2)\beta}}{1+\eta\,\lambda_n^{\beta}}\right)^2E^2=\sup_{n\ge1}\big(B_1(n)\big)^2E^2,$$
where $B_1(n)=\dfrac{\eta\,\lambda_n^{(1-\frac p2)\beta}}{1+\eta\,\lambda_n^{\beta}}$. Letting $s=\lambda_n^\beta$, from Lemma 7 we obtain
$$B_1(n)=\frac{\eta\,s^{1-\frac p2}}{\eta s+1}\le\begin{cases}C_6\,\eta^{\frac p2}, & 0<p<2,\\ C_7\,\eta, & p\ge2,\end{cases}$$
where $C_6=\left(\frac{2-p}{p}\right)^{-\frac p2}$ and $C_7=\lambda_1^{(1-\frac p2)\beta}$.
Therefore, we obtain
$$\|f_\eta(\cdot)-f(\cdot)\|\le\begin{cases}C_6\,E\,\eta^{\frac p2}, & 0<p<2,\\ C_7\,E\,\eta, & p\ge2.\end{cases}$$
Combining (59) and (61), we choose the regularization parameter η by
$$\eta=\begin{cases}\left(\frac\delta E\right)^{\frac2{p+2}}, & 0<p<2,\\[4pt] \left(\frac\delta E\right)^{\frac12}, & p\ge2.\end{cases}$$
We have
$$\|f_\eta^\delta(\cdot)-f(\cdot)\|\le\begin{cases}(1+C_6)\,\delta^{\frac p{p+2}}E^{\frac2{p+2}}, & 0<p<2,\\ (1+C_7)\,\delta^{\frac12}E^{\frac12}, & p\ge2.\end{cases}$$
The proof of Theorem 4 is completed. □

5.2. The Posteriori Convergence Error Estimate

In this section, we consider a posterior regularization parameter choice rule called the Morozov discrepancy principle. We can obtain the posteriori convergence error estimate of the source term under the selection rules for the posteriori regularization parameters.
The Morozov discrepancy principle here is to find η such that
$$\left\|\eta\,(K+\eta)^{-1}\big(Kf_\eta^\delta(\cdot)-h^\delta(\cdot)\big)\right\|=\tau\delta,$$
where $\tau>1$ is a constant. By the following lemma, (64) has a unique solution provided $\|h^\delta\|>\tau\delta$.
Lemma 10.
Let $\rho(\eta):=\left\|\eta\,(K+\eta)^{-1}\big(Kf_\eta^\delta(\cdot)-h^\delta(\cdot)\big)\right\|$. The following results hold:
(a)
$\rho(\eta)$ is a continuous function;
(b)
$\lim_{\eta\to0}\rho(\eta)=0$;
(c)
$\lim_{\eta\to\infty}\rho(\eta)=\|h^\delta(\cdot)\|$;
(d)
$\rho(\eta)$ is a strictly increasing function over $(0,+\infty)$.
Proof. 
The Lemma can be easily proven with the expression
$$\rho(\eta)=\left(\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^4(h_n^\delta)^2\right)^{\frac12}.$$
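The same logarithmic bisection used for (38) applies here; only the residual changes, since the quasi-boundary filter enters $\rho(\eta)$ through the fourth power of $\eta/(\sigma_n+\eta)$. A brief sketch follows, in the same synthetic setting and with the same caveats as before.

```python
import numpy as np

def rho_qb(eta, h_delta, sigma):
    """rho(eta) = || eta (K + eta)^{-1} (K f_eta^delta - h^delta) || in spectral form."""
    return np.sqrt(np.sum((eta / (sigma + eta))**4 * h_delta**2))

def morozov_eta(h_delta, sigma, tau, delta, lo=1e-16, hi=1e8, iters=200):
    """Bisection (on a log scale) for rho_qb(eta) = tau*delta; assumes ||h^delta|| > tau*delta."""
    target = tau * delta
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if rho_qb(mid, h_delta, sigma) < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Reusing the synthetic data from the earlier sketches:
sigma = 1.0 / np.arange(1, 51, dtype=float)**2
rng = np.random.default_rng(2)
noise = 1e-3 * rng.standard_normal(50)
h_delta = sigma * np.exp(-0.5 * np.arange(1, 51)) + noise
print("eta from the discrepancy principle:", morozov_eta(h_delta, sigma, tau=1.1, delta=np.linalg.norm(noise)))
```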
Theorem 5.
Let $f(x)$ be given by (20) and $f_\eta^\delta(x)$ be given by (54). Suppose that $f(x)$ satisfies the a priori bound condition (22) and that the regularization parameter $\eta>0$ is chosen by the Morozov discrepancy principle (64). Then we have:
(1)
If $0<p<2$, we have the error estimate
$$\|f_\eta^\delta(\cdot)-f(\cdot)\|\le\left((\tau+1)^{\frac p{p+2}}+\Big(\tfrac{(C_8)^2}{\tau-1}\Big)^{\frac2{p+2}}\right)\delta^{\frac p{p+2}}E^{\frac2{p+2}};$$
(2)
If $p\ge2$, we have the error estimate
$$\|f_\eta^\delta(\cdot)-f(\cdot)\|\le\left((\tau+1)^{\frac p{p+2}}+\Big(\tfrac{(C_9)^2}{\tau-1}\Big)^{\frac12}\right)\delta^{\frac12}E^{\frac12},$$
where $C_8=(C_1)^{\frac12}\left(\frac{2+p}{2-p}\right)^{\frac{p+2}4}$ and $C_9=(C_1)^{\frac12}\lambda_1^{\frac{(2-p)\beta}4}$.
Proof. 
By the triangle inequality, we have
$$\|f_\eta^\delta(\cdot)-f(\cdot)\|\le\|f_\eta^\delta(\cdot)-f_\eta(\cdot)\|+\|f_\eta(\cdot)-f(\cdot)\|.$$
We first give an estimate of the first term of ( 67 ) . Using ( 53 ) and ( 64 ) , we obtain
$$\tau\delta=\left\|\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^2h_n^\delta\,\chi_n(x)\right\|\le\left\|\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^2(h_n^\delta-h_n)\,\chi_n(x)\right\|+\left\|\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^2h_n\,\chi_n(x)\right\|\le\delta+J.$$
Applying the prior bound condition ( 22 ) , we have
$$J=\left\|\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^2h_n\,\chi_n(x)\right\|=\left\|\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^2\sigma_n\,\lambda_n^{-\frac{\beta p}2}\,f_n\,\lambda_n^{\frac{\beta p}2}\,\chi_n(x)\right\|\le\sup_{n\ge1}\left(\frac{\eta\,\sigma_n^{\frac12}\,\lambda_n^{-\frac{\beta p}4}}{\sigma_n+\eta}\right)^2E\le\sup_{n\ge1}\left(\frac{\eta\,(C_1)^{\frac12}\,\lambda_n^{-\frac\beta2-\frac{\beta p}4}}{\lambda_n^{-\beta}+\eta}\right)^2E=\sup_{n\ge1}\big(B_2(n)\big)^2E,$$
where $B_2(n)=\dfrac{\eta\,(C_1)^{\frac12}\,\lambda_n^{\frac{(2-p)\beta}4}}{\eta\,\lambda_n^\beta+1}$. Letting $s=\lambda_n^\beta$, from Lemma 8 we obtain
$$B_2(n)=\frac{\eta\,(C_1)^{\frac12}\,s^{\frac{2-p}4}}{\eta s+1}\le\begin{cases}C_8\,\eta^{\frac{p+2}4}, & 0<p<2,\\ C_9\,\eta, & p\ge2,\end{cases}$$
where $C_8=(C_1)^{\frac12}\left(\frac{2+p}{2-p}\right)^{\frac{p+2}4}$ and $C_9=(C_1)^{\frac12}\lambda_1^{\frac{(2-p)\beta}4}$.
Thus,
$$(\tau-1)\,\delta\le\begin{cases}(C_8)^2\,E\,\eta^{\frac{p+2}2}, & 0<p<2,\\ (C_9)^2\,E\,\eta^2, & p\ge2.\end{cases}$$
Therefore,
$$\frac1\eta\le\begin{cases}\left(\frac{(C_8)^2}{\tau-1}\right)^{\frac2{p+2}}\left(\frac E\delta\right)^{\frac2{p+2}}, & 0<p<2,\\[4pt] \left(\frac{(C_9)^2}{\tau-1}\right)^{\frac12}\left(\frac E\delta\right)^{\frac12}, & p\ge2.\end{cases}$$
Substituting ( 70 ) into ( 59 ) , we have
$$\|f_\eta^\delta(\cdot)-f_\eta(\cdot)\|\le\frac\delta\eta\le\begin{cases}\left(\frac{(C_8)^2}{\tau-1}\right)^{\frac2{p+2}}E^{\frac2{p+2}}\,\delta^{\frac p{p+2}}, & 0<p<2,\\[4pt] \left(\frac{(C_9)^2}{\tau-1}\right)^{\frac12}E^{\frac12}\,\delta^{\frac12}, & p\ge2.\end{cases}$$
Now, we estimate the second term of (67). Using (20), (22), (55) and (64),
$$\|f_\eta(\cdot)-f(\cdot)\|^2=\sum_{n=1}^\infty\left(\frac{\eta\,f_n}{\sigma_n+\eta}\right)^2=\sum_{n=1}^\infty\left[\left(\frac{\eta\,\sigma_n}{\sigma_n+\eta}\right)^{p+2}\left(\frac{\eta}{\sigma_n+\eta}\right)^{2-p}\frac{f_n^2}{\sigma_n^{p}}\right]^{\frac p{p+2}}\left[\left(\frac{\eta}{\sigma_n+\eta}\right)^{2-p}\frac{f_n^2}{\sigma_n^{p}}\right]^{\frac2{p+2}}$$
$$\le\left(\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^4h_n^2\right)^{\frac p{p+2}}\left(\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^{2-p}\frac{f_n^2}{\sigma_n^{p}}\right)^{\frac2{p+2}}\le\left\|\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^2h_n\,\chi_n(x)\right\|^{\frac{2p}{p+2}}\left(\sum_{n=1}^\infty\lambda_n^{\beta p}f_n^2\right)^{\frac2{p+2}}$$
$$\le\left(\left\|\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^2(h_n-h_n^\delta)\,\chi_n(x)\right\|+\left\|\sum_{n=1}^\infty\left(\frac{\eta}{\sigma_n+\eta}\right)^2h_n^\delta\,\chi_n(x)\right\|\right)^{\frac{2p}{p+2}}E^{\frac4{p+2}}\le(\tau+1)^{\frac{2p}{p+2}}\,\delta^{\frac{2p}{p+2}}\,E^{\frac4{p+2}},$$
where the first inequality is the Hölder inequality, the second uses $\big(\tfrac{\eta}{\sigma_n+\eta}\big)^{2-p}\le1$ and $\sigma_n^{-p}\le\lambda_n^{\beta p}$, and the last step uses the triangle inequality, $\|h^\delta-h\|\le\delta$, the a priori bound (22) and the discrepancy principle (64).
Therefore, we deduce that
$$\|f_\eta(\cdot)-f(\cdot)\|\le(\tau+1)^{\frac p{p+2}}\,\delta^{\frac p{p+2}}\,E^{\frac2{p+2}}.$$
Combining ( 71 ) and ( 72 ) , we have
$$\|f_\eta^\delta(\cdot)-f(\cdot)\|\le\begin{cases}\left((\tau+1)^{\frac p{p+2}}+\left(\frac{(C_8)^2}{\tau-1}\right)^{\frac2{p+2}}\right)\delta^{\frac p{p+2}}E^{\frac2{p+2}}, & 0<p<2,\\[4pt] \left((\tau+1)^{\frac p{p+2}}+\left(\frac{(C_9)^2}{\tau-1}\right)^{\frac12}\right)\delta^{\frac12}E^{\frac12}, & p\ge2.\end{cases}$$

6. Numerical Implementation and Numerical Examples

In this section, we present some numerical results to show the effectiveness of our proposed schemes.
We suppose γ = 0.5 , β = 1 , v = 1 , Ω = ( 0 , π ) , T = 1 . Consider the following problem
$$\begin{cases}D_t^\alpha u(x,t)+(-\Delta)^\beta u(x,t)+v\,D_t^\alpha(-\Delta)^\gamma u(x,t)=f(x), & x\in(0,\pi),\ t\in(0,1],\ 1<\alpha<2,\\ u(0,t)=u(\pi,t)=0, & t\in[0,1],\\ u(x,0)=\varphi(x), & x\in(0,\pi),\\ u_t(x,0)=\psi(x), & x\in(0,\pi),\\ u(x,1)=g(x), & x\in(0,\pi).\end{cases}$$
We solve the above equation by a finite difference method to obtain the "exact" data $g(x)$. The time and space steps are $\Delta t=T/N$ and $\Delta x=\pi/M$, so that $t_n=n\Delta t$ ($n=0,1,2,\ldots,N$), $x_i=i\Delta x$ ($i=0,1,2,\ldots,M$), and the approximate value of the unknown function is denoted by $u_i^n\approx u(x_i,t_n)$. By direct calculation, the eigenvalues and eigenfunctions are $\lambda_n=n^2$ and $\chi_n(x)=\sqrt{\frac2\pi}\,\sin(nx)$ for $n=1,2,\ldots$.
The discrete form of the time fractional derivative is as follows [33]:
$$D_t^\alpha u(x_i,t_n)\approx\frac{(\Delta t)^{1-\alpha}}{\Gamma(3-\alpha)}\left[\frac{b_0}{\Delta t}\big(u_i^n-u_i^{n-1}\big)-\sum_{j=1}^{n-1}\frac{b_{n-j-1}-b_{n-j}}{\Delta t}\big(u_i^j-u_i^{j-1}\big)-b_{n-1}\,\psi(x_i)\right],$$
where $i=1,2,3,\ldots,M-1$, $n=1,2,3,\ldots,N$ and $b_j=(j+1)^{2-\alpha}-j^{2-\alpha}$. In one dimension the fractional Laplace operator is taken as $(-\Delta)^s=-\frac{\partial^{2s}}{\partial x^{2s}}$. We approximate the space derivatives by
$$(-\Delta)u(x_i,t_n)=-\frac{\partial^2u}{\partial x^2}\Big|_{(x_i,t_n)}\approx-\frac{u_{i+1}^n-2u_i^n+u_{i-1}^n}{(\Delta x)^2},$$
$$v\,D_t^\alpha(-\Delta)^{0.5}u(x_i,t_n)\approx-\frac{v}{\Delta x}\big(D_t^\alpha u_{i+1}^n-D_t^\alpha u_i^n\big).$$
We set $U^n=(u_1^n,u_2^n,\ldots,u_{M-1}^n)^T$, $\varphi=(\varphi_1,\varphi_2,\ldots,\varphi_{M-1})^T$, $\psi=(\psi_1,\psi_2,\ldots,\psi_{M-1})^T$ and $f=(f_1,f_2,\ldots,f_{M-1})^T$. This gives the following iteration format:
$$AU^1=D\big(-\varphi-\Delta t\,\psi\big)+f,$$
$$AU^n=D\left((b_1-2b_0)U^{n-1}+\sum_{j=2}^{n-1}\big(b_{j-2}+b_j-2b_{j-1}\big)U^{n-j}+(b_{n-2}-b_{n-1})\varphi-\Delta t\,b_{n-1}\psi\right)+f,\qquad n=2,3,\ldots,N,$$
where
$$A_{(M-1)\times(M-1)}=\begin{pmatrix}a_2 & a_3 & & \\ a_1 & a_2 & a_3 & \\ & \ddots & \ddots & \ddots\\ & & a_1 & a_2\end{pmatrix},\qquad a_1=-\frac{1}{(\Delta x)^2},\quad a_2=\Big(1+\frac{v}{\Delta x}\Big)\frac{(\Delta t)^{-\alpha}}{\Gamma(3-\alpha)}+\frac{2}{(\Delta x)^2},\quad a_3=-\left(\frac{v}{\Delta x}\,\frac{(\Delta t)^{-\alpha}}{\Gamma(3-\alpha)}+\frac{1}{(\Delta x)^2}\right),$$
$$D_{(M-1)\times(M-1)}=\begin{pmatrix}d_1 & d_2 & & \\ & d_1 & d_2 & \\ & & \ddots & \ddots\\ & & & d_1\end{pmatrix},\qquad d_1=-\Big(1+\frac{v}{\Delta x}\Big)\frac{(\Delta t)^{-\alpha}}{\Gamma(3-\alpha)},\quad d_2=\frac{v}{\Delta x}\,\frac{(\Delta t)^{-\alpha}}{\Gamma(3-\alpha)}.$$
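The iteration above can be transcribed directly into code. The following Python sketch (not from the paper, whose computations were done in MATLAB R2018a) assembles $A$ and $D$ with the coefficients as reconstructed here and marches the scheme to $t=T$; it is a plain dense implementation intended only to illustrate the time stepping, with no claim to match the authors' implementation detail for detail.

```python
import math
import numpy as np

def forward_solve(f, phi, psi, alpha, v=1.0, T=1.0, M=100, N=100):
    """March A U^n = D(...) + f to t = T and return the interior grid and g = U^N.

    f, phi, psi : callables on (0, pi) giving f(x), u(x,0) and u_t(x,0);
    the scheme corresponds to beta = 1, gamma = 0.5 as in this section.
    """
    dx, dt = np.pi / M, T / N
    x = np.linspace(0.0, np.pi, M + 1)[1:-1]               # interior nodes x_1, ..., x_{M-1}
    r = dt**(-alpha) / math.gamma(3.0 - alpha)              # (Delta t)^{-alpha} / Gamma(3 - alpha)
    j = np.arange(0, N)
    b = (j + 1)**(2 - alpha) - j**(2 - alpha)               # b_j = (j+1)^{2-alpha} - j^{2-alpha}

    a1 = -1.0 / dx**2
    a2 = (1.0 + v / dx) * r + 2.0 / dx**2
    a3 = -(v / dx * r + 1.0 / dx**2)
    d1 = -(1.0 + v / dx) * r
    d2 = v / dx * r

    m = M - 1
    A = np.diag(np.full(m, a2)) + np.diag(np.full(m - 1, a3), 1) + np.diag(np.full(m - 1, a1), -1)
    D = np.diag(np.full(m, d1)) + np.diag(np.full(m - 1, d2), 1)

    fv, ph, ps = f(x), phi(x), psi(x)
    U = [ph]                                                # U^0 = phi at the interior nodes
    U.append(np.linalg.solve(A, D @ (-ph - dt * ps) + fv))  # A U^1 = D(-phi - dt*psi) + f
    for n in range(2, N + 1):
        rhs = (b[1] - 2 * b[0]) * U[n - 1]
        for jj in range(2, n):                              # history terms U^{n-j}, j = 2, ..., n-1
            rhs = rhs + (b[jj - 2] + b[jj] - 2 * b[jj - 1]) * U[n - jj]
        rhs = rhs + (b[n - 2] - b[n - 1]) * ph - dt * b[n - 1] * ps
        U.append(np.linalg.solve(A, D @ rhs + fv))
    return x, U[-1]

# Example: the smooth source of Example 1 with the initial data used in this section.
x, g = forward_solve(lambda x: x * np.sin(x), lambda x: x, lambda x: x, alpha=1.5)
print(g[:5])
```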
Solving this discrete scheme with MATLAB (version R2018a) and setting $g=U^N$, the final data $g$ can be obtained.
Next, we solve the inverse problem of (74). We generate the data with error by adding random perturbation, i.e.,
$$g^\delta(x)=g(x)+\varepsilon\cdot g(x)\cdot\big(2\,\mathrm{rand}(\mathrm{size}(g))-1\big),$$
where the function $\mathrm{rand}(\cdot)$ produces uniformly distributed random numbers in $[0,1]$ (so that $2\,\mathrm{rand}-1$ is uniformly distributed in $[-1,1]$), and $\varepsilon$ represents the relative error level. The error level is given by
$$\delta=\|g^\delta-g\|=\sqrt{\frac{1}{M+1}\sum_{i=1}^{M+1}\big(g_i-g_i^\delta\big)^2}.$$
To verify the accuracy of the numerical results, the relative errors are given by the following equations:
$$e_\mu=\frac{\|f-f_\mu^\delta\|}{\|f\|},\qquad e_\eta=\frac{\|f-f_\eta^\delta\|}{\|f\|}.$$
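For completeness, the noisy-data generation and the error measures above translate into code as follows (a minimal sketch; g, f and the regularized solutions are assumed to be vectors of grid values).

```python
import numpy as np

def add_noise(g, eps, seed=0):
    """g^delta = g + eps * g .* (2*rand(size(g)) - 1), as in the text."""
    rng = np.random.default_rng(seed)
    return g + eps * g * (2.0 * rng.random(g.shape) - 1.0)

def noise_level(g, g_delta):
    """delta = sqrt( (1/(M+1)) * sum_i (g_i - g_i^delta)^2 )."""
    return np.sqrt(np.mean((g - g_delta)**2))

def relative_error(f_exact, f_reg):
    """Relative errors e_mu, e_eta between the exact and regularized source."""
    return np.linalg.norm(f_exact - f_reg) / np.linalg.norm(f_exact)

# Example on synthetic grid values:
g = np.sin(np.linspace(0.0, np.pi, 101))
g_delta = add_noise(g, eps=0.001)
print(noise_level(g, g_delta), relative_error(g, g_delta))
```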
In practice, it is difficult to obtain the a priori bound $E$, and the a priori regularization parameter choice depends on it. Therefore, we only provide numerical results under the a posteriori regularization parameter selection rules. We take $\tau=1.1$ in (38) and in (64), the initial values $\varphi(x)=x$ and $\psi(x)=x$, $M=100$, $N=100$, and the noise levels $\varepsilon=0.005,\ 0.001,\ 0.0001$.
Here are three numerical examples.
Example 1.
Consider the smooth function
$$f(x)=x\sin(x),\qquad x\in[0,\pi].$$
Example 2.
Consider the piecewise smooth function
$$f(x)=\begin{cases}2x, & 0\le x\le\dfrac\pi2,\\[4pt] 2\pi-2x, & \dfrac\pi2<x\le\pi.\end{cases}$$
Example 3.
Consider the piecewise function
$$f(x)=\begin{cases}0, & 0\le x\le\dfrac\pi3,\\[4pt] 1, & \dfrac\pi3<x\le\dfrac{2\pi}3,\\[4pt] -1, & \dfrac{2\pi}3<x\le\pi.\end{cases}$$
Figure 1 shows the exact f ( x ) and its Tikhonov regularization approximation solution f μ δ ( x ) for Example 1 in the case of α = 1.3 , 1.5 , 1.7 . Figure 2 shows the exact f ( x ) and its Quasi-boundary regularization approximation solution f η δ ( x ) for Example 1 in the case of α = 1.3 , 1.5 , 1.7 . From Table 1, we find that the smaller α and ε are, the smaller the relative error between the exact solution and the regularization solution is. Figure 3 shows the exact f ( x ) and its Tikhonov regularization approximation solution f μ δ ( x ) for Example 2 in the case of α = 1.3 , 1.5 , 1.7 . Figure 4 shows the exact f ( x ) and its Quasi-boundary regularization approximation solution f η δ ( x ) for Example 2 in the case of α = 1.3 , 1.5 , 1.7 . Table 2 shows the same behavior for Example 2. Figure 5 shows the exact f ( x ) and its Tikhonov regularization approximation solution f μ δ ( x ) for Example 3 in the case of α = 1.3 , 1.5 , 1.7 . Figure 6 shows the exact f ( x ) and its Quasi-boundary regularization approximation solution f η δ ( x ) for Example 3 in the case of α = 1.3 , 1.5 , 1.7 . From Table 3, we again find that the smaller α and ε are, the smaller the relative error between the exact solution and the regularization solution is.
Through the above examples containing different types of functions, we find that the smaller the α and ε , the better the approximation between the exact solution and the regularization solution. Both the Tikhonov regularization method and the Quasi-boundary regularization method are effective. Moreover, it can be seen that the fitting effect of the Tikhonov regularization method is better than that of the Quasi-boundary regularization method.

7. Conclusions

This paper focuses on solving an inverse problem related to identifying the spatial source term in a space-time fractional diffusion-wave equation. This problem is known to be ill-posed, as even a small perturbation in the final data can lead to a significant change in the source term. The Tikhonov regularization method and the Quasi-boundary regularization method are employed to address this issue. We apply an a priori bound assumption together with a priori and a posteriori regularization parameter selection rules to derive convergence error estimates for the source term in each case. Finally, we assess the validity, stability, and advantages of both the Tikhonov regularization method and the Quasi-boundary regularization method through numerical experiments involving three types of functions: smooth, piecewise smooth, and non-smooth.

Author Contributions

The main idea of this paper was proposed by C.Z. and F.Y., X.L. reviewed the manuscript. All authors prepared the manuscript initially and performed all the steps of the proofs in this research. All authors have read and agreed to the published version of the manuscript.

Funding

The project is supported by the National Natural Science Foundation of China (No. 11961044).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the editor and the referees for their valuable comments and suggestions that improve the quality of our paper.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Hanneken, J.W.; Narahari Achar, B.N.; Vaught, D.M.; Harrington, K.L. A random walk simulation of fractional diffusion. J. Mol. Liq. 2004, 114, 153–157. [Google Scholar] [CrossRef]
  2. Molz, F.J.; Fix, G.J.; Lu, S. A physical interpretation for the fractional derivative in Levy diffusion. Appl. Math. Lett. 2002, 15, 907–911. [Google Scholar] [CrossRef]
  3. Gorenflo, R.; Mainardi, F.; Moretti, D.; Pagnini, G.; Paradisi, P. Discrete random walk models for space-time fractional diffusion. Chem. Phys. 2002, 284, 521–541. [Google Scholar] [CrossRef]
  4. Gorenflo, R.; Vivoli, A. Fully discrete random walks for space-time fractional diffusion equations. Signal Process. 2003, 83, 2411–2420. [Google Scholar] [CrossRef]
  5. Nigmatullin, R.R. The realization of the generalized transfer equation in a medium with fractal geometry. Phys. Status Solidi B 1986, 133, 425–430. [Google Scholar] [CrossRef]
  6. Mainardi, F. Fractional diffusive waves in viscoelastic solids. Nonlinear Waves Solids 1995, 137, 1. [Google Scholar]
  7. Li, G.S. Determining magnitude of groundwater pollution sources by data compatibility analysis. Inverse Probl. Sci. Eng. 2006, 14, 287–300. [Google Scholar] [CrossRef]
  8. Yang, F.; Zhang, P.; Li, X.X.; Ma, X.Y. Tikhonov regularization method for identifying the space-dependent source for time-fractional diffusion equation on a columnar symmetric domain. Adv. Differ. Equ. 2020, 2020, 1–16. [Google Scholar] [CrossRef]
  9. Tuan, N.H.; Long, L.D.; Tatar, S. Tikhonov regularization method for a backward problem for the inhomogeneous time-fractional diffusion equation. Appl. Anal. 2018, 97, 842–863. [Google Scholar] [CrossRef]
  10. Xiong, X.T.; Fu, C.L.; Li, H.F. Fourier regularization method of a sideways heat equation for determining surface heat flux. Math. Anal. Appl. 2006, 317, 331–348. [Google Scholar] [CrossRef]
  11. Li, X.X.; Lei, J.L.; Yang, F. An a posteriori fourier regularization method for identifying the unknown source of the space-fractional diffusion equation. J. Inequal. Appl. 2014, 2014, 1–13. [Google Scholar] [CrossRef]
  12. Yang, F.; Zhang, Y.; Liu, X.; Huang, C.Y. The quasi-boundary value method for identifying the initial value of the space-time fractional diffusion equation. Acta Math. Sci. 2020, 40, 641–658. [Google Scholar] [CrossRef]
  13. Feng, X.L.; Eldén, L. Solving a cauchy problem for a 3d elliptic pde with variable coefficients by a quasi-boundary-value method. Inverse Probl. 2014, 30, 015005. [Google Scholar] [CrossRef]
  14. Zhang, H.W.; Qin, H.H.; Wei, T. A quasi-reversibility regularization method for the cauchy problem of the helmholtz equation. Int. J. Comput. Math. 2011, 88, 839–850. [Google Scholar] [CrossRef]
  15. Bourgeois, L. Convergence rates for the quasi-reversibility method to solve the Cauchy problem for Laplace’s equation. Inverse Probl. 2006, 22, 413. [Google Scholar] [CrossRef]
  16. Zhang, Z.Q.; Wei, T. Identifying an unknown source in time-fractional diffusion equation by a truncation method. Appl. Math. Comput. 2013, 219, 5972–5983. [Google Scholar] [CrossRef]
  17. Zhang, Y.X.; Fu, C.L.; Ma, Y.J. An a posteriori parameter choice rule for the truncation regularization method for solving backward parabolic problems. J. Comput. Appl. Math. 2014, 255, 150–160. [Google Scholar] [CrossRef]
  18. Al-Mahdawi, H.K.I.; Alkattan, H.; Abotaleb, M.; Kadi, A.; El-Kenawy, E.S.M. Updating the landweber iteration method for solving inverse problems. Mathematics 2022, 10, 2798. [Google Scholar] [CrossRef]
  19. Mittal, G.; Giri, A.K. Iteratively regularized Landweber iteration method: Convergence analysis via Hölder stability. Appl. Math. Comput. 2021, 392, 125744. [Google Scholar] [CrossRef]
  20. Luchko, Y. Subordination principles for the multi-dimensional space-time-fractional diffusion-wave equation. arXiv 2018, arXiv:1802.04752. [Google Scholar] [CrossRef]
  21. Dehghan, M.; Abbaszadeh, M.; Deng, W. Fourth-order numerical method for the space-time tempered fractional diffusion-wave equation. Appl. Math. Lett. 2017, 73, 120–127. [Google Scholar] [CrossRef]
  22. Garg, M.; Manohar, P. Matrix method for numerical solution of space-time fractional diffusion-wave equations with three space variables. Afr. Mat. 2014, 25, 161–181. [Google Scholar] [CrossRef]
  23. Bhrawy, A.H.; Zaky, M.A.; Van Gorder, R.A. A space-time Legendre spectral tau method for the two-sided space-time Caputo fractional diffusion-wave equation. Numer. Algorithms 2016, 71, 151–180. [Google Scholar] [CrossRef]
  24. Huang, F.; Liu, F. The space-time fractional diffusion equation with Caputo derivatives. J. Appl. Math. Comput. 2005, 19, 179–190. [Google Scholar] [CrossRef]
  25. Tatar, S.; Tinaztepe, R.; Ulusoy, S. Determination of an unknown source term in a space-time fractional diffusion equation. J. Fract. Calc. Appl. 2015, 6, 83–90. [Google Scholar]
  26. Wang, J.G.; Wei, T.; Zhou, Y.B. Tikhonov regularization method for a back-ward problem for the time-fractional diffusion equation. Appl. Math. Model. 2013, 37, 8518–8532. [Google Scholar] [CrossRef]
  27. Liao, K.F.; Li, Y.S.; Wei, T. The identification of the time-dependent source term in time-fractional diffusion-wave equations. East Asian J. Appl. Math. 2019, 9, 330–354. [Google Scholar] [CrossRef]
  28. Yan, X.B.; Wei, T. Determine a space-dependent source term in a time fractional diffusion-wave equation. Acta Appl. Math. 2020, 165, 163–181. [Google Scholar] [CrossRef]
  29. Phuong, N.D.; Long, L.D.; Nguyen, A.T.; Baleanu, D. Regularization of the Inverse Problem for Time Fractional Pseudo-parabolic Equation with Non-local in Time Conditions. Acta Math. Sin. 2022, 38, 2199–2219. [Google Scholar] [CrossRef]
  30. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives Fractional Differential Equations, to Methods of Their Solution an Some of Their Applications; Academic Press Inc.: San Diego, CA, USA, 1999. [Google Scholar]
  31. Chen, W.; Li, C. Maximum principles for the fractional p-Laplacian and symmetry of solutions. Adv. Math. 2018, 335, 735–758. [Google Scholar] [CrossRef]
  32. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  33. Sun, Z. The Method of Order Reduction and Its Application to the Numerical Solutions of Partial Differential Equations; Science Press: Beijing, China, 2009. [Google Scholar]
Figure 1. The comparison of the exact solution f ( x ) and its Tikhonov regularization approximation solution f μ δ ( x ) of Example 1 with α = 1.3 , 1.5 , 1.7 for ε = 0.005 , 0.001 , 0.0001 . (a) Comparison between exact solution and regularization solution when α = 1.3 . (b) Comparison between exact solution and regularization solution when α = 1.5 . (c) Comparison between exact solution and regularization solution when α = 1.7 .
Figure 2. The comparison of the exact solution f ( x ) and its Quasi-boundary regularization approximation solution f η δ ( x ) of Example 1 with α = 1.3 , 1.5 , 1.7 for ε = 0.005 , 0.001 , 0.0001 . (a) Comparison between exact solution and regularization solution when α = 1.3 . (b) Comparison between exact solution and regularization solution when α = 1.5 . (c) Comparison between exact solution and regularization solution when α = 1.7 .
Figure 3. The comparison of the exact solution f ( x ) and its Tikhonov regularization approximation solution f μ δ ( x ) of Example 2 with α = 1.3 , 1.5 , 1.7 for ε = 0.005 , 0.001 , 0.0001 . (a) Comparison between exact solution and regularization solution when α = 1.3 . (b) Comparison between exact solution and regularization solution when α = 1.5 . (c) Comparison between exact solution and regularization solution when α = 1.7 .
Figure 4. The comparison of the exact solution f ( x ) and its Quasi-boundary regularization approximation solution f η δ ( x ) of Example 2 with α = 1.3 , 1.5 , 1.7 for ε = 0.005 , 0.001 , 0.0001 . (a) Comparison between exact solution and regularization solution when α = 1.3 . (b) Comparison between exact solution and regularization solution when α = 1.5 . (c) Comparison between exact solution and regularization solution when α = 1.7 .
Figure 5. The comparison of the exact solution f ( x ) and its Tikhonov regularization approximation solution f μ δ ( x ) of Example 3 with α = 1.3 , 1.5 , 1.7 for ε = 0.005 , 0.001 , 0.0001 . (a) Comparison between exact solution and regularization solution when α = 1.3 . (b) Comparison between exact solution and regularization solution when α = 1.5 . (c) Comparison between exact solution and regularization solution when α = 1.7 .
Figure 6. The comparison of the exact solution f ( x ) and its Quasi-boundary regularization approximation solution f η δ ( x ) of Example 3 with α = 1.3 , 1.5 , 1.7 for ε = 0.005 , 0.001 , 0.0001 . (a) Comparison between exact solution and regularization solution when α = 1.3 . (b) Comparison between exact solution and regularization solution when α = 1.5 . (c) Comparison between exact solution and regularization solution when α = 1.7 .
Table 1. The relative errors between the exact solution and the regularization solutions of Example 1 with different α.

  ε                          0.005     0.001     0.0001
  α = 1.3   Tikhonov         0.0299    0.0058    0.0007
            Quasi-boundary   0.3749    0.1361    0.0300
  α = 1.5   Tikhonov         0.0319    0.0072    0.0007
            Quasi-boundary   0.4090    0.1724    0.0464
  α = 1.7   Tikhonov         0.0346    0.0063    0.0007
            Quasi-boundary   0.5099    0.2302    0.0722
Table 2. The relative errors between the exact solution and the regularization solutions of Example 2 with different α.

  ε                          0.005     0.001     0.0001
  α = 1.3   Tikhonov         0.0285    0.0070    0.0007
            Quasi-boundary   0.6174    0.2137    0.0555
  α = 1.5   Tikhonov         0.0319    0.0056    0.0008
            Quasi-boundary   0.6796    0.2846    0.0882
  α = 1.7   Tikhonov         0.0352    0.0063    0.0006
            Quasi-boundary   0.8650    0.4228    0.1263
Table 3. The relative errors between the exact solution and the regularization solutions of Example 3 with different α.

  ε                          0.005     0.001     0.0001
  α = 1.3   Tikhonov         0.0160    0.0039    0.0003
            Quasi-boundary   0.4545    0.2183    0.0725
  α = 1.5   Tikhonov         0.0189    0.0038    0.0003
            Quasi-boundary   0.5941    0.3183    0.1178
  α = 1.7   Tikhonov         0.0222    0.0030    0.0004
            Quasi-boundary   0.7906    0.4252    0.1761
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
