In this section, we illustrate the method proposed in this work using synthetic examples, for which the exact potential is known. The measurements with and without noise are then calculated by solving the FP for the classical and fractional Cauchy problems. To generate the measurement with error, we add to the exact measurement a Gaussian error using the corresponding MATLAB function; that is, we define the measurement with error as the exact measurement plus a perturbation given by a vector of random numbers of length m (the number of measurement nodes) with a normal distribution. The corresponding numerical solutions are denoted accordingly in what follows.
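A minimal MATLAB sketch of this construction is the following; it assumes that the intrinsic function used is randn and that the Gaussian vector is scaled to a prescribed relative error level (the number of nodes, the exact measurement, and the scaling are placeholders, not necessarily those used to produce the tables).

% Sketch of generating a measurement with error (assumed noise model).
m     = 200;                                 % number of measurement nodes (illustrative value)
theta = linspace(0, 2*pi, m)';
V     = cos(theta);                          % placeholder for the exact measurement
delta = 0.01;                                % prescribed relative error level (1%)
eta   = randn(m, 1);                         % Gaussian random vector (normal distribution)
Vd    = V + delta*norm(V)*eta/norm(eta);     % measurement with error
relative_error = norm(Vd - V)/norm(V);       % equals delta by construction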
In the tables of this section, we report the relative error between the exact source and the recovered source, as well as the relative error between the exact measurement V and the measurement with error; both are computed with respect to the norm of the corresponding space.
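For reference, writing φ for the exact source, φ_rec for the recovered source, and V^δ for the measurement with error (placeholder symbols; the notation used in the tables may differ), these relative errors have the standard form
\[
e_{r}\% = \frac{\lVert \varphi - \varphi_{\mathrm{rec}} \rVert}{\lVert \varphi \rVert}\times 100, \qquad
\eta\% = \frac{\lVert V - V^{\delta} \rVert}{\lVert V \rVert}\times 100,
\]
where \(\lVert\cdot\rVert\) denotes the norm of the corresponding space.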
4.1. Solution to the IP Related to the Classical Cauchy Problem
In the following two examples, we consider a circular annular region whose inner and outer boundaries are two circumferences of the corresponding radii (see Figure 1).
Example 1. We take an ‘exact potential’ that has a simple expression in polar coordinates. In this case, the solution to the forward problem, that is, the solution to the auxiliary problem (3), is given by an explicit Fourier series. Then, the ‘exact solution’ V and the ‘measurement with error’ are generated with the first N terms of the Fourier series (22) and (23), respectively. In this case, we take several values of N, among them 25 and 30 terms. For smooth functions in the Cauchy data, these values of N are obtained by combining numerical tests with the following ideas: the solution is approximated by a truncation, choosing N such that we can guarantee that the truncation error stays below a prescribed tolerance. From the Parseval equality and (22), where we found that the Fourier coefficients of V decay at least at the stated rate, we infer a bound on this truncation error; from it, we can choose N so that the required inequality holds. With this, we control the error coming from the truncation of the series expansion. Finally, the measurement error is simulated by adding a random error to each Fourier coefficient, chosen so that the relative error of the measurement with error does not exceed the prescribed level.
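The following MATLAB sketch illustrates this coefficient-wise perturbation: each Fourier coefficient is multiplied by a random factor whose relative deviation is bounded by the error level, which, by Parseval's equality, bounds the relative error of the resulting measurement. The coefficient values and the bound are placeholders, not those of Example 1.

% Sketch: perturb the first N Fourier coefficients by a bounded random relative error.
N     = 25;
delta = 0.01;                            % prescribed relative error level
a     = 1./(1:N).^2;                     % placeholder coefficients with a polynomial decay
xi    = 2*rand(1, N) - 1;                % random numbers in [-1, 1]
ad    = a.*(1 + delta*xi);               % each coefficient perturbed by at most delta (relatively)
relative_error = norm(ad - a)/norm(a);   % bounded by delta, by Parseval's equality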
For other functions, such as the absolute value and the jump function, we have to choose other values of N. This is shown in Examples 3 and 4, which are included in Section 4.2.6.
Thus, we consider synthetic examples; that is, we examine the fundamental elements of the problems studied, such as how real data are generated and the error inherent in them. In this way, we attempt to emulate the characteristics of real-world problems so that our proposal remains closely aligned with solving them.
Therefore, the measurement with error is given by the series (23), whose coefficients are the Fourier coefficients of the data with error. The regularized solution to the inverse problem is given by the series (24) truncated to N terms, while the solution without regularization to the IP is given by (29), with coefficients given by (30).
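Although the explicit expressions (24), (29), and (30) are not reproduced here, their general structure is the usual one for Tikhonov regularization of a problem diagonalized by the Fourier basis: with placeholder notation, if σ_k is the factor relating the k-th Fourier coefficient of the potential to the k-th Fourier coefficient of the measurement, and b_k^δ are the coefficients of the data with error, then
\[
c_k^{\lambda} = \frac{\sigma_k\, b_k^{\delta}}{\sigma_k^{2} + \lambda}\quad\text{(with regularization)}, \qquad
c_k = \frac{b_k^{\delta}}{\sigma_k}\quad\text{(without regularization)},
\]
so the ill-posedness enters through the growth of \(1/\sigma_k\), which the regularization parameter \(\lambda\) damps.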
Remark 1. In all tables associated with the classical case, where indicated, the solution is the solution without regularization given by (29), where the coefficients are given by (30).
Table 1 shows the numerical results for data with and without error, applying TRM to solve the IP of the classical Cauchy problem (
2). In this case, we observe that the solutions with regularization
have a percentage of relative errors around
, equal to the percentage of error included in the data with error
for
. The regularization parameter was chosen as
for
, 25, and 30. Also, we can see that the
decreases when the error
tends to zero, while the
increases for each value of
N. In particular, the
increases faster when
, for
,
, and
. In this case, the regularization parameter
depends on
.
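Table 1 thus reports results for a regularization parameter chosen in terms of the data error. As an illustration of one standard rule with this property (not necessarily the one used here), Morozov's discrepancy principle selects the largest parameter for which the residual does not exceed the noise level; a generic MATLAB sketch on a toy ill-conditioned system follows.

% Illustration: choosing the Tikhonov parameter by the discrepancy principle (toy problem).
n      = 50;
A      = hilb(n);                            % toy ill-conditioned matrix, not the operator of (2)
x_true = linspace(0, 1, n)';
b      = A*x_true;
delta  = 1e-3;
eta    = randn(n, 1);
bd     = b + delta*norm(b)*eta/norm(eta);    % data with relative error delta
lambda = 0;
for lam = logspace(-14, 0, 200)              % scan candidate parameters, small to large
    x_lam = (A'*A + lam*eye(n)) \ (A'*bd);   % Tikhonov-regularized solution
    if norm(A*x_lam - bd) <= delta*norm(b)   % residual still at or below the noise level
        lambda = lam;                        % keep the largest admissible parameter
    end
end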
Figure 2a,b show the graphs of the exact measurement
V and with error
, the graphs of the exact potential
and its approximations
(with regularization) and
(without regularization) taking
and
, corresponding to Example 1, for
(see
Table 1). In
Figure 2b, we can see the ill-posedness of the inverse problem if we do not apply regularization, where
and
.
Example 2. We consider a second ‘exact potential’. Similar to the first example, the ‘exact measurement’ V and the ‘measurement with error’ are generated with the first N terms of the Fourier series (22) and (23), respectively. In this case, the Fourier coefficients are obtained numerically using an intrinsic function of MATLAB, and we take several values of N, among them 25 and 30 terms. Table 2 shows the numerical results for data with and without error, applying TRM to solve the IP of the classical Cauchy problem (
2). Analogous to Example 1, we can observe that the solutions with regularization
have a percentage of relative errors around
, equal to the percentage of error included in the data with error
for
. Also, we can see that the
decreases when the error
tends to zero, while the
increases for each value of
N. In particular, the
increases when
for each
,
, and
. As in the previous example, the regularization parameter
depends on
, and we take
for each value of
, 25, and 30.
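The intrinsic function used in Example 2 to compute the Fourier coefficients is not named above; one natural choice is MATLAB's integral, as in the following sketch, where the potential is a placeholder rather than the exact one of Example 2.

% Sketch: numerical Fourier coefficients of a 2*pi-periodic function.
N   = 25;
phi = @(t) exp(cos(t));                                   % placeholder for the exact potential
a0  = integral(phi, 0, 2*pi)/(2*pi);                      % mean value
a   = zeros(1, N);
b   = zeros(1, N);
for k = 1:N
    a(k) = integral(@(t) phi(t).*cos(k*t), 0, 2*pi)/pi;   % cosine coefficients
    b(k) = integral(@(t) phi(t).*sin(k*t), 0, 2*pi)/pi;   % sine coefficients
end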
Figure 3a,b show the graphs of the exact measurement
V and with error
, the graphs of the exact potential
and its approximations
(with regularization) and
(without regularization) taking
and
, corresponding to Example 2, for
(see
Table 2). In
Figure 3b, we can see the ill-posedness of the inverse problem if we do not apply regularization. In this case,
and
.
4.2. Solution to the IP Related to the Fractional Cauchy Problem
In this section, we look into the performance of the TRM in solving the IP of the fractional Cauchy problem (12) in a circular annular region whose boundaries are two circumferences of the corresponding radii (see Figure 1). In this case, we consider as ‘exact potentials’ the two functions from the previous subsection.
Similar to the previous subsection, the ‘exact solution’ V and the ‘measurement with error’ are obtained by truncating the series (16) and (23) up to N terms, respectively; furthermore, the Fourier coefficients (given by (17)) are obtained numerically using an intrinsic function of MATLAB.
In this case, we take several values of N, among them 20, 25, and 30 terms. Therefore, the measurement with error is given by the series (23) truncated to N terms, the regularized solution to the IP is given by the series (21) truncated to N terms, and the solution without regularization to the IP is given by (29), where the coefficients are now given by (31).
Remark 2. In all tables from the fractional case, where indicated, the solution is the solution without regularization given by (29), where the coefficients are given by (31).
4.2.1. Case 1: and , When Tends to Zero
In this section, we consider the case when
,
, and for different values of
close to zero.
Table 3 and
Table 4 show the relative errors of the approximations
and
, when
tends to zero, for the two exact functions
considered in
Section 4.1. In both cases, we observe that the
of the solutions with regularization
is less than the
for each value of
and
N given in these tables. Additionally, the
and
are of the same order, i.e., the solutions without regularization
are close to regularized solutions
for
and
. In both cases, the measurements with errors
do not have much impact on recovered solution
, and they are close to
. We observe from the relative errors that regularized approximations
are better than those without regularization. In this case, the regularization parameter
depends on
,
N,
m, and
.
Considering
,
, and
, we show the graphs for the following potentials
(
Figure 4) and
(
Figure 5) for
where the following is true:
- (a)
The exact measurement V and the measurement with error .
- (b)
The exact potential and its approximations (with regularization) and (without regularization) taking and .
In
Figure 4b, the
is less than
for
(see
Table 3). In
Figure 5b, the
is less than
for
(see
Table 4).
4.2.2. Case 2: , for and
Table 5 and
Table 6 show the relative errors of the approximations
and
when
for the two exact functions
considered in
Section 4.1.
In
Table 5, we observe that
for each value of
N,
, and
m given in the mentioned table. Also, the
and
are of the same order, i.e., the solutions without regularization
are close to regularized solutions
, for
,
, 25, 30,
, and
. We can see similar results in
Table 6 for
,
, 25, 30,
,
,
, and 3; however, the regularized approximations
are better than the solutions without regularization. Furthermore,
and these increase suddenly, starting at
and
(see
Table 5 and
Table 6) for the functions
and
for
, respectively. As in the previous case, the regularization parameter
changes depending on
,
N,
m, and
.
We show the graphs for the following functions:
- •
(
Figure 6 with parameters
,
,
,
, and
; and
Figure 7 with parameters
,
,
,
, and
).
- •
(
Figure 8 with parameters
,
,
,
, and
).
These figures show the following:
- (a)
The exact measurement V and the measurement with error .
- (b)
The exact potential and its approximations (with regularization) and (without regularization).
In both cases, as mentioned in the previous paragraph, the errors increase suddenly, starting at
for the first function and
for the second one, as can be seen in
Figure 7b and
Figure 8b, where we can see the ill-posedness of the IP if we do not apply regularization. For example, for the second function, the
is much greater than
for
,
, and
(see
Table 6). However, in this same example, for
, 10, 11, and 12, the
increases around 90%. Nevertheless,
is bigger than
. In this case, we could use the regularized solution as the initial point of an iterative method to recover a better solution to the IP.
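The iterative method is not specified; as one possibility, a Landweber-type iteration started from the regularized solution could be used, as in the following generic MATLAB sketch (a toy linear model, not the operator of (12)).

% Illustration: Landweber iteration initialized with the Tikhonov solution (toy problem).
n     = 50;
A     = hilb(n);                              % toy ill-conditioned matrix
bd    = A*linspace(0, 1, n)' + 1e-4*randn(n, 1);
lam   = 1e-6;
x_reg = (A'*A + lam*eye(n)) \ (A'*bd);        % regularized solution used as the initial point
omega = 1/norm(A)^2;                          % step size within the convergence range
x     = x_reg;
for k = 1:100                                 % early stopping plays the role of regularization
    x = x + omega*(A'*(bd - A*x));            % Landweber update
end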
4.2.3. Case 3: , When Is Close to or m and
This section considers the case when
and
is close to
or
m.
Table 7 and
Table 8 show the relative errors of the approximations
and
when
for the same two exact functions
considered in
Section 4.1.
In
Table 7, we observe that the
are less than the
for each value of
N,
, and
m given in this table. We can see that
and
are of the same order for
, 2, i.e., the solutions without regularization
are close to regularized solutions
. However, the regularized approximations
are better than the solutions without regularization for
, 2. Nonetheless,
increases more than
starting at
. Furthermore, we can observe similar results in
Table 8, where the
are less than the
for
, 2, 3, 4, with
,
, and the different values of
are close to
m or
given in this table. For the values of
, 6, 7, and 8, the
are around the percentage of the
. For the other values of
, 10, 11, and 12, given in
Table 8, the corresponding
increases around 90%, but no more than
, i.e., the TRM does not provide a good approximate solution to the IP. In this case, we could use the regularized solution
as an initial point of an iterative method to recover a better solution to the IP. Furthermore, the relative errors of the recovered solutions
without applying regularization increase suddenly, starting at
and
(see
Table 7 and
Table 8) for the functions
and
for
, respectively. As in the previous cases, the regularization parameter
changes depending on
,
N,
m, and
.
We show the graphs for the following functions:
- •
(
Figure 9 with parameters
,
,
,
, and
).
- •
(
Figure 10 with parameters
,
,
,
, and
).
These figures show the following:
- (a)
The exact measurement V and the measurement with error .
- (b)
The exact potential and its approximations (with regularization) and (without regularization).
In both cases, as mentioned in the previous paragraph, the errors increase starting at
for the first function and starting at
for the second one, as can be seen in
Figure 9b and
Figure 10b for
, where we can see the ill-posedness of the IP if we do not apply regularization. For example, for the first function, the
is greater than
for
,
, and
(see
Table 7). For the second one, the
is greater than
for
,
, and
(see
Table 8). In this latter function, the approximate solution
is far from the exact solution
. In this case, we could apply an iterative method to obtain a better solution, taking
as an initial point.
4.2.4. Case 4: , for ,…,12 and
In this section, we consider the case when
for
, 3,…,12 and
.
Table 9 and
Table 10 show the relative errors of the approximations
and
when
, with the same two exact functions
considered in
Section 4.1.
In
Table 9, we observe that the
from solutions with regularization
are less than the
. For some values of
N,
, and
m given in this same table, we can see that
and
are of the same order, i.e., the solutions without regularization
are close to regularized solutions
; however, the regularized solutions
are better than the solutions without regularization. The
increases faster than the
starting at
. Furthermore, we can observe similar results in
Table 10, where the
are of the same order as
for
, 3, 4, with
,
, except for
. Nevertheless, the
increases faster than the
starting at
. For the values of
, 9, 10, 11, and 12, the
increases between 40% and 90%, but no more than
. In this case, the TRM does not provide a good approximate solution to the IP. However, as mentioned before, we could use the regularized solution
as an initial point of an iterative method to recover a better solution to the IP. Also, the relative errors of the recovered solutions
without applying regularization increase suddenly, starting at
and
(see
Table 9 and
Table 10) for the functions
and
for
, respectively. Here also, as in the previous cases, the regularization parameter
changes depending on
,
N,
m, and
.
Figure 11 and
Figure 12 show the graphs of the exact measurement
V and with error
with
, the graphs of the exact potential
and its approximations
(with regularization) and
(without regularization), corresponding to the functions
and
for
, respectively. In both cases, as mentioned in the previous paragraph, the errors increase suddenly, starting at
for the first function and starting at
for the second one, as can be seen in
Figure 11b and
Figure 12b, where we can see the ill-posedness of the IP if we do not apply regularization. For example, for the first function, the relative error
is greater than
for
,
, and
(see
Table 9). For the second one, the
is greater than
for
,
, and
(see
Table 10). In this case, we could use the regularized solution
as an initial point of an iterative method to recover a better solution to the IP.
4.2.5. Case 5: , When Is Close to n or , Where , for and
In this section, we consider the case when
, when
is close to
n or
, where
, for
and
.
Table 11 and
Table 12 show the relative errors of the approximations
and
when
for the same two exact functions
considered in
Section 4.1.
In
Table 11, we observe that
. For
, we can see that
and
are of the same order when
is close to 1 or 0 (taking
), i.e., the solutions without regularization
are close to regularized solutions
. However, the regularized approximations
are better than the solutions without regularization. The
increases faster than the
starting at
, as shown in
Figure 13b for
,
, and
, where
and
. These approximations,
and
, are recovered from measurements with error
, shown in
Figure 13a. Also, we can observe similar results in
Table 12, where
and
are of the same order for
, 3, 4, and when
is next to
n or
(taking
, 1, and 3, respectively), for
; nevertheless, the
increases between 17% and 38%, but no more than the
for
, 6, 7, and 8. For the values of
, 10, 11, and 12, the
increases around 90%, but no more than
. In this case, we could use the regularized solution
as an initial point of an iterative method to recover a better solution to the IP. Nevertheless, the relative errors of the recovered solutions
without applying regularization increase suddenly, starting at
and
(see
Table 11 and
Table 12) for the functions
and
for
, respectively. Here, the regularization parameters
also change depending on
,
N,
m, and
.
Figure 13,
Figure 14,
Figure 15 and
Figure 16 show the graphs of the exact measurement
V and with error
with
, the graphs of the exact potential
and its approximations
(with regularization) and
(without regularization), corresponding to the functions
and
for
, respectively. In both cases, as mentioned in the previous paragraph, the errors increase suddenly, starting at
for the first function and starting at
for the second one, as can be seen in
Figure 14b,
Figure 15b, and
Figure 16b, where we can see the ill-posedness of the IP if we do not apply regularization for
, 8, and
, respectively. For example, for the approximations
and
shown in
Figure 14b of the first function, the
is much greater than
for
,
, and
(see
Table 11). For the approximations
and
shown in
Figure 15b of the second one, the
is much greater than
, for
,
, and
(see
Table 12). Lastly, for the approximations
and
shown in
Figure 16b of the second one, the
is greater than
for
,
, and
(see
Table 12). In these last two examples, when the approximate solutions
are not close to the exact solution
, we could use the regularized solution
as an initial point of an iterative method to recover a better solution to the IP.
4.2.6. Case 6: , for ,…,13 and
In this section, we consider the case when
, with
for
.
Table 13 and
Table 14 show the relative errors of the approximations
and
when
for the same two exact functions
considered in
Section 4.1.
In
Table 13, we observe that
. For
, we can see that
and
are of the same order when
, i.e., the solutions without regularization
are close to regularized solutions
. However, the regularized approximates
are better than the solutions without regularization. Additionally, the relative errors of the solutions without regularization
increase faster, starting at
. Furthermore, we can observe similar results in
Table 14. In this case, the relative errors from solutions with regularization
are less than the
for
, and increase between 29% and 46% for
, when
. Nevertheless, the corresponding relative errors of the solutions with regularization increase between 88% and 96% for
, but no more than the corresponding
. The relative errors of the recovered solutions
without regularization increase suddenly, starting at
. For example, for
and
, the
. For
, the
increases between 91% and 99%, but no more than the
, for
, as well as for
and
with
. Moreover, as in the previous cases, the regularization parameters
change depending on the data with error
, the values
N,
m, and
Analogous results, which are not included in this work, can be obtained for other parameter values, similar to those presented here.
In the following two examples, we have considered non-smooth functions.
Example 3. We consider as the ‘exact potential’ an absolute value function, written in polar coordinates. Similar to the first example, the ‘exact measurement’ V and the ‘measurement with error’ are generated with the first N terms of the Fourier series (22) and (23), respectively, and in this case the Fourier coefficients are obtained numerically using an intrinsic function of MATLAB. Table 15 shows the numerical results for data without error, applying TRM to solve the IP of the classical Cauchy problem (
2), where
, 25, and 30. In this table, we can observe results similar to those of the previous examples, where the regularized solutions
are better.
Table 16 shows the numerical results for data without error, applying TRM to solve the IP of the fractional Cauchy problem (
12) for different values of
,
m, and
N.
Figure 17a,b show the graphs of the exact potential
and its approximations
(with regularization) and
(without regularization), corresponding to the absolute value function of Example 3
, for different values of
,
m, and
N, respectively.
Example 4. We consider as the ‘exact potential’ a jump function given in polar coordinates by a piecewise definition. Similar to the first example, the ‘exact measurement’ V and the ‘measurement with error’ are generated with the first N terms of the Fourier series (22) and (23), respectively, and in this case the Fourier coefficients are obtained numerically using an intrinsic function of MATLAB. Table 17 shows the numerical results for data without error, applying TRM to solve the IP of the classical Cauchy problem (2), where several values of N are used, among them 25 and 30.
, 25, and 30. In this table, we can observe similar results to the previous examples where the regularized solutions
are better.
Table 18 shows the numerical results for data without error, applying TRM to solve the IP of the fractional Cauchy problem (
12) for different values of
,
m, and
N.
Figure 18a,b show the graphs of the exact potential
and its approximations
(with regularization) and
(without regularization), corresponding to the jump function of Example 4
, for different values of
,
m, and
N, respectively.
Examples 3 and 4 show numerical results analogous to those of the previous examples for both the classical and fractional cases. However, when the exact potential is a piecewise constant function, we truncate the series of the approximate solutions to find a better approximation of the solution to the fractional Cauchy problem, since the Fourier series of a discontinuous function converges slowly and produces Gibbs-type oscillations near the jumps.
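To illustrate why a piecewise constant potential calls for this additional truncation, the following MATLAB sketch plots partial Fourier sums of a generic 2π-periodic jump function (a placeholder for the potential of Example 4) and exhibits the Gibbs-type oscillations near the discontinuities.

% Sketch: partial Fourier sums of a jump function and their Gibbs oscillations.
theta = linspace(-pi, pi, 1000);
f     = double(theta >= 0);                    % jump function: 0 on (-pi,0), 1 on [0,pi)
figure; hold on;
for N = [10 30]
    S = 0.5*ones(size(theta));                 % mean value of f
    for k = 1:2:N                              % only odd sine modes are nonzero
        S = S + (2/(pi*k))*sin(k*theta);
    end
    plot(theta, S);                            % oscillations persist near theta = 0 and +/-pi
end
plot(theta, f, 'k');
hold off;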