1. Introduction
Let
be a convex polyhedral domain with its boundary being
, and
is a fixed time. In this paper, we consider the following homogeneous Rayleigh–Stokes problem for a generalized second-grade fluid with a fractional derivative model
where
is a fixed constant and
is the Riemann–Liouville fractional derivative of the order
defined by
in which
is the gamma function.
In problem (
1), if
is known, we can use standard methods to solve the direct problem. The inverse problem in this paper is to reconstruct the initial value
according to the additional data
. However, in practical problems,
can only be obtained by measurement, and the measured data will inevitably be disturbed by noise. Therefore, suppose the exact data function
and the measured data function
satisfy
where
denotes
norm and
is a noise level.
In recent years, many researchers have demonstrated that fractional models describe natural phenomena more accurately and systematically than their classic integer-order counterparts with ordinary time derivatives [
1,
2,
3,
4,
5,
6,
7]. Fractional calculus has also been employed to describe many complex biological systems. Although these studies provided better results than the classic integer-order models, satisfactory precision may not be achieved over the whole time duration because of the singularity in the definition of traditional fractional derivatives, which makes such operators impractical for describing nonlocal dynamics. The Rayleigh–Stokes problem has attracted extensive attention in recent years due to its importance in physics. Fractional derivatives are of great value in capturing the viscoelastic behavior of flows; see [
8,
9]. Problem (
1) plays an important role in describing the behavior of some non-Newtonian fluids, such as polymer solutions and melts [
10]. Regarding the direct problem of Rayleigh–Stokes, the reader is referred to [
11,
12,
13,
14,
15,
16,
17,
18,
19].
However, there are few results on the Rayleigh–Stokes inverse problem, and only a handful of regularization methods have been used to study it. For identifying the initial value, Nguyen et al. [
20] investigated a backward problem for the Rayleigh–Stokes equation by using the filter regularization method under Gaussian random noise, with the aim of determining the initial status of some physical fields, such as the temperature for slow diffusion, from present measurement data. Furthermore, based on a priori assumptions, the expected error between the exact solution and the regularization solution under the
and
norm was established, but the authors did not give an a posteriori regularization parameter choice rule. Compared with an a priori choice, an a posteriori regularization parameter choice rule, which depends only on the measurable data, may be more useful in practice. Moreover, the authors did not carry out numerical experiments to illustrate their method.
In this paper, we mainly use the Landweber iterative regularization method to solve the initial value problem of problem (
1). We prove that the initial value problem is ill-posed, which means that
does not depend on the data
continuously. Based on an a priori bound condition, the priori convergent error estimate under an a priori regularization parameter choice rule is given, and the
posteriori convergent error estimate under an a
posteriori regularization parameter choice rule is obtained. Some numerical examples show the effectiveness of this method.
At present, there are many effective regularization methods for the study of inverse problems, such as the truncation method [
21,
22], Tikhonov regularization method [
23], quasi-boundary value method [
24,
25,
26], quasi-reversibility regularization method [
27,
28], mollification regularization method [
29], Fourier regularization method [
30,
31,
32], and Landweber iterative regularization method [
33,
34,
35], which does not exhibit the saturation phenomenon. These regularization methods have been successfully applied to various inverse problems of mathematical and physical equations, and many research results have been obtained. In this paper, we use the Landweber iterative regularization method; since it does not saturate, it is order optimal for any
to solve this inverse problem.
The manuscript is organized as follows: The results of the ill-posed analysis and the conditional stability for identifying the initial value problem (
1) are given in
Section 2. In
Section 3, the Landweber iterative regularization method is used to solve the inverse problem, and the a priori and a posteriori convergence error estimates are obtained. In
Section 4, some numerical examples are given to demonstrate the effectiveness of the Landweber regularization method. A brief conclusion is presented in
Section 5.
2. Ill-Posed Analysis and Conditional Stability Results for Problem (1)
In this section, we give the results of the ill-posedness analysis and conditional stability for the identification of the initial value of Problem (
1). Let
and
be the Dirichlet eigenvalues and eigenfunctions of
on the domain
, respectively, and satisfy
where
, and
is an orthonormal basis
.
Define
where
is the inner product in
, and
is a Hilbert space with the norm
According to Theorem 2.1 in the paper of Bazhlekova, Jin, Lazarov, and Zhou [
36], for any
, there exists a unique solution
and the solution for (
1) is given by
where
denotes the Fourier coefficients.
In Formula (
7), the function
satisfies
where
Letting
in (
7), we have
and further, we have
where
denotes the Fourier coefficients.
From (
9), we obtain the exact solution
In order to analyze the ill-posedness of the inverse problem and establish conditional stability, the following lemmas will be used throughout the paper.
Lemma 1 ([
36]).
The functions , have the following properties:
- (a)
, , ;
- (b)
are completely monotone for ;
- (c)
, ;
where the constant does not depend on n and t.
Lemma 2 ([
20]).
Assuming that , the following estimate holds for all , where .

According to Lemma 1, we know
Due to
, we obtain
. Accordingly, from Formula (
10), it can be seen that a small perturbation of
will cause a great change in
; i.e., this is an ill-posed problem.
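This instability can be illustrated with a small numerical experiment. The sketch below is written in Python/NumPy (the paper's own experiments use Matlab), and the decaying multipliers and coefficients are purely hypothetical stand-ins for the decaying factors appearing in (10); it shows how naive, unregularized inversion amplifies a tiny data perturbation:

```python
import numpy as np

# Hypothetical spectral picture: data coefficients g_n = sigma_n * f_n with
# rapidly decaying sigma_n, so naive inversion f_n = g_n / sigma_n
# amplifies any data noise by the factor 1 / sigma_n.
rng = np.random.default_rng(1)
n = 30
sigma = 1.0 / (1.0 + np.arange(n)) ** 3      # assumed rapidly decaying multipliers
f_true = 1.0 / (1.0 + np.arange(n))          # hypothetical exact coefficients
g = sigma * f_true                           # exact data coefficients
noise = 1e-6 * rng.standard_normal(n)        # tiny measurement perturbation
f_naive = (g + noise) / sigma                # naive, unregularized inversion

data_err = np.linalg.norm(noise)             # small change in the data
sol_err = np.linalg.norm(f_naive - f_true)   # much larger change in the solution
```

The amplification ratio `sol_err / data_err` grows with the decay of `sigma`, which is exactly why a regularization method is needed.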
Next, we will give the conditional stability result of this inverse problem.
Theorem 1. When satisfies an a priori bound condition , we have , where is a positive constant.

Proof. Due to the Hölder inequality and (
10), we have
According to Lemma 2 and (
11), we obtain
From (
13) and (
14), we have
where
.
This completes the proof of Theorem 1. □
In the next section, we mainly introduce the Landweber iterative regularization method to solve this inverse problem. Furthermore, we obtain an a priori and an a posteriori convergence error estimate by using the triangle inequality and the Hölder inequality [
25,
26].
3. Landweber Iterative Regularization Method and Convergence Analysis
In this section, we propose the Landweber iterative regularization method to solve the ill-posed problem (
1). Under an a priori regularization parameter choice rule and an a posteriori regularization parameter choice rule, we obtain convergence error estimates between the exact solution and the regularization solution. The inverse problem of identifying the initial value
can be converted to solving the following integral equation:
where the kernel function
From the form of the kernel function , it can be seen that is a self-adjoint operator. Further, we have the following theorem:
Theorem 2. If , the integral operator in (15) is compact from to , and its singular values are .

Proof. From Lemma 1 and (
8), we have
From (
16), we know if
,
. Furthermore, due to
, we can determine that the integral operator
is compact from
to
. Let
be the adjoint of
. Since
is the orthonormal basis in space
, it is easy to verify
Hence, the singular values of the compact operator are . The proof of Theorem 2 is complete. □
Next, we give the Landweber iterative regularization solution of
. Here,
denotes the Landweber iterative regularization solution of the initial value problem (
1). We use the operator equation
to replace the equation
and obtain the following iterative format:
where
I is the identity operator;
m is the iterative step number, also known as the regularization parameter; and
a is called the relaxation factor and satisfies
. Note that the operator
is defined as
By simple calculation, we have
Since
is a self-adjoint operator, applying the singular values of the operator
and Formula (
17), we obtain the Landweber iterative regularization solution of the inverse problem (
1) as follows:
where
.
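The iterative format above can be sketched as follows. This is a Python/NumPy illustration (the paper's experiments use Matlab), and the self-adjoint toy operator with rapidly decaying singular values is an assumption made only to mimic the compact operator of Theorem 2:

```python
import numpy as np

def landweber(K, g, a, m, f0=None):
    """Landweber iteration f_{k+1} = f_k + a * K^T (g - K f_k), where the
    iteration number m plays the role of the regularization parameter."""
    f = np.zeros(K.shape[1]) if f0 is None else f0.copy()
    for _ in range(m):
        f = f + a * K.T @ (g - K @ f)
    return f

# Hypothetical toy problem: self-adjoint operator with rapidly decaying
# singular values, mimicking the compact operator K of Theorem 2.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / (1.0 + np.arange(n)) ** 2          # decaying singular values
K = U @ np.diag(s) @ U.T                     # self-adjoint operator
f_true = U @ (1.0 / (1.0 + np.arange(n)))    # toy exact initial value
g = K @ f_true                               # exact data

a = 0.9 / s.max() ** 2                       # relaxation factor, 0 < a < 1/||K||^2
f_rec = landweber(K, g, a, m=500)            # regularized reconstruction
```

For noise-free data, each spectral component of the error shrinks monotonically with m, so larger m gives a better reconstruction; with noisy data, m must be stopped early, which is exactly the role of the parameter choice rules below.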
In the following, we give two convergence error estimates, obtained by using an a priori choice rule and an a posteriori choice rule for the regularization parameter.
3.1. The Convergent Error Estimate with an a Priori Parameter Choice Rule
Theorem 3. Let given by (18) be the Landweber iterative regularization solution of the exact solution (10). Suppose the a priori condition (11) and the noise assumption (3) hold. Choosing the regularization parameter , where we have the following convergence error estimate: where denotes the largest integer less than or equal to b and is a positive constant.
Proof. According to the triangle inequality, we have
From (
3), we have
where
.
Since
is a singular value of the operator
and
, we obtain
. By the Bernoulli inequality, we have
then we obtain
From (
11), we have
where
.
According to Lemma 2, we have
Let
,
. Supposing that
satisfies
, we have
then we have
Combining (
19), (
21)–(
23), we obtain
where
.
The proof of Theorem 3 is complete. □
3.2. The Convergent Error Estimate with an a Posteriori Parameter Choice Rule
In this section, we consider an a posteriori regularization parameter choice rule in the Morozov discrepancy principle [
37] and obtain a convergence error estimate under this choice rule. We assume that
is a given fixed constant, and we stop the algorithm at the first occurrence of
with
where
is a constant.
Lemma 3. Let ; then, we have the following conclusions:
- (a)
is a continuous function;
- (b)
- (c)
- (d)
is a strictly decreasing function for any
Proof. The proof of Lemma 3 can be obtained by Formula (
24), so it is omitted here. □
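The discrepancy-based stopping rule (24) is straightforward to implement on top of the Landweber iteration. The following is a minimal Python/NumPy sketch (the paper's code is in Matlab), with a hypothetical self-adjoint toy operator standing in for the operator of Theorem 2; the iteration stops at the first m whose discrepancy falls below the threshold:

```python
import numpy as np

def landweber_morozov(K, g_delta, a, delta, tau=1.1, m_max=10000):
    """Landweber iteration stopped by Morozov's discrepancy principle:
    return the first iterate f_m with ||K f_m - g_delta|| <= tau * delta."""
    f = np.zeros(K.shape[1])
    for m in range(m_max):
        r = g_delta - K @ f                      # current discrepancy
        if np.linalg.norm(r) <= tau * delta:
            return f, m                          # stopping rule satisfied
        f = f + a * K.T @ r                      # one Landweber step
    return f, m_max

# Hypothetical toy problem: self-adjoint operator with decaying singular values.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / (1.0 + np.arange(n)) ** 2
K = U @ np.diag(s) @ U.T
f_true = U @ (1.0 / (1.0 + np.arange(n)))
g = K @ f_true

delta = 1e-2
noise = rng.standard_normal(n)
g_delta = g + delta * noise / np.linalg.norm(noise)   # ||g_delta - g|| = delta

a = 0.9 / s.max() ** 2                                # 0 < a < 1/||K||^2
f_rec, m_stop = landweber_morozov(K, g_delta, a, delta)
```

Note that the stopping index m_stop depends only on the measured data and the noise level, which is the practical advantage of the a posteriori rule emphasized in the Introduction.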
Remark 1. According to Lemma 3, inequality (24) admits a unique solution.

Lemma 4. Suppose the a priori condition (11) and the noise assumption (3) hold. For fixed , if we choose the regularization parameter by using Morozov's discrepancy principle (24), then the regularization parameter satisfies

Proof. First, we know that
. From (
3), we have
Further, from (
11) we can obtain
where
.
According to Lemmas 1 and 2, we have
Let
,
. Supposing that
satisfies
, we obtain
then we have
The proof of Lemma 4 is complete. □
Theorem 4. Let given by (18) be the Landweber iterative regularization solution of the exact solution (10). Suppose the a priori condition (11) and the noise assumption (3) hold. If the regularization parameter is chosen by Morozov's discrepancy principle with stopping rule (24), then we have the following error estimate: where is a positive constant.
Proof. Using the triangle inequality, we have
Applying Lemma 4 and (
3), we obtain
In addition, combining (
3) and (
24), we have
Hence, from (
11) we can obtain
Further, according to Theorem 1 and (
30), we have
Combining (
28), (
29) and (
31), we obtain
where
.
The proof of Theorem 4 is complete. □
4. Numerical Implementation
In this section, we use Matlab software to give several numerical examples to verify the effectiveness of the Landweber iterative regularization method. The following numerical simulation consists of two parts. First, we use the known function
to obtain the additional data value
, which is a forward problem. Then, we use the additional data value
to solve an inverse problem and obtain the regularization solution; i.e., we use the Landweber iterative algorithm to obtain the regularization solution
. Letting
,
, we consider a one-dimensional forward problem
where
is known. Here, we use the finite difference method to discretize the above problem (
32). In addition, two discrete schemes are introduced, i.e., the backward difference (BD) scheme and the implicit numerical optimization scheme (INAS) provided by reference [
38]. Both schemes are unconditionally stable.
Define
where
is the step size of temporal direction and
is the step size of spatial direction. Then, the approximate value of
u at each grid point is recorded as
. First, we give the BD iterative scheme. In the first step, the Riemann–Liouville operator (
2) is discretized by the Grünwald–Letnikov formula [
39]
where
is the integer part of
and
are the coefficients of the generating function
. In formula (
34), if
and
, it is simply called the Grünwald–Letnikov formula [
39]. In this case, the coefficients
can be calculated by means of the recursive formulae
If
, the expression (
34) is denoted by
evaluated at the grid points
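The recursion for the Grünwald–Letnikov coefficients is cheap to implement. The sketch below uses Python/NumPy (the paper's experiments use Matlab) and relies only on the standard recursion for the coefficients of the generating function (1 − z)^γ; as a sanity check, it approximates the Riemann–Liouville derivative of f(t) = t, for which the exact value t^{1−γ}/Γ(2 − γ) is known:

```python
import math
import numpy as np

def gl_coefficients(gamma, N):
    """Coefficients w_j of the generating function (1 - z)**gamma via the
    standard recursion w_0 = 1, w_j = (1 - (gamma + 1)/j) * w_{j-1}."""
    w = np.empty(N + 1)
    w[0] = 1.0
    for j in range(1, N + 1):
        w[j] = (1.0 - (gamma + 1.0) / j) * w[j - 1]
    return w

# Sanity check: approximate the Riemann-Liouville derivative of f(t) = t
# (for which f(0) = 0, so the GL and Riemann-Liouville derivatives coincide).
gamma, h, t = 0.5, 1e-3, 1.0
N = int(round(t / h))
w = gl_coefficients(gamma, N)
j = np.arange(N + 1)
approx = h ** (-gamma) * np.sum(w * (t - j * h))       # GL quadrature at t
exact = t ** (1.0 - gamma) / math.gamma(2.0 - gamma)   # t^{1-gamma}/Gamma(2-gamma)
```

The approximation is first-order accurate in the step size h for sufficiently smooth functions vanishing at t = 0.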
In the second step, we use backward difference formula to discretize the differential operators
and
By using Formulas (
36)–(
38), we obtain the BD iterative form of Problem (
32) expressed as
where
,
and
.
Secondly, according to the literature [
38], we obtain the INAS iteration form of Problem (
32) as
where
,
and
.
Using the above two discrete schemes, we write Matlab programs and run them to obtain the data function g. Next, we solve the inverse problem.
In practical application, the data
g are obtained by measurement and contain a certain amount of error. Therefore, in the numerical simulation, we add a random disturbance to the data
g. The noisy data
are generated by adding random disturbances, i.e.,
where the function
generates a column of normally distributed random numbers with mean 0 and variance 1, and
represents the relative error level. The absolute error level
is expressed as
Finally, the regularization solution
is obtained by the following formula:
where
satisfies
.
To see the accuracy of numerical solutions, we compute the relative root mean square errors by
where
n is the total number of test points.
The a priori regularization parameter choice is based on the smoothness of the exact solution, which is difficult to verify in practical problems. The following examples are therefore based on the a posteriori regularization parameter choice rule (
24) to verify the effectiveness of the Landweber iterative regularization method. By simple calculation, we obtain
and
for
in Formula (
4). In the numerical calculation of Problem (
32), we choose
,
,
.
Let us take
,
in Formulas (
11) and (
24). Choosing
,
, we give the following three examples:
Example 1. Consider a smooth function .
Example 2. Consider a piecewise smooth function

Example 3. Consider a non-smooth function

Figure 1,
Figure 2 and
Figure 3 show a comparison of the exact solution
and its approximate solution
between BD and INAS in the iterative form of Example 1 for the relative error levels
with various values of
.
Table 1 shows a comparison of the relative root mean square errors
between the exact solution and the regularization solution of Example 1 for various values of
and
.
Table 2 shows a comparison of the number of iterations
for the exact solution and the regularization solution of Example 1 for various values of
and
.
Figure 4 shows a comparison between the exact solution and the regularization solution of the two iterative methods for
under
.
Figure 5,
Figure 6 and
Figure 7 show a comparison of the exact solution
and its approximate solution
between BD and INAS in the iterative form of Example 2 for the relative error levels
with various values of
.
Table 3 shows a comparison of the relative root mean square errors
for the exact solution and the regularization solution of Example 2 for various values of
and
.
Table 4 shows a comparison of the number of iterations
for the exact solution and the regularization solution of Example 2 for various values of
and
.
Figure 8 shows a comparison between the exact solution and the regularization solution of the two iterative methods for
under
.
Figure 9 and
Figure 10 show a comparison of the exact solution
and its approximate solution
between BD and INAS in the iterative form of Example 3 for the relative error levels
with various values of
.
Table 5 shows a comparison of the relative root mean square errors
of the exact solution and the regularization solution of Example 3 for various values of
and
.
Table 6 shows a comparison of the number of iterations
for the exact solution and the regularization solution of Example 3 for various values of
and
.
Figure 11 shows a comparison between the exact solution and the regularization solution of the two iterative methods for
under
.