1. Introduction
The problem of recovering fractional powers of operators was developed by G.G. Magaril-Ilyaev, K.Y. Osipenko and E.O. Sivkova in [1,2,3]. In those papers the Laplace operator was considered, and the main tool was the classical Fourier transform. We extend these results to the more general singular Laplace–Bessel differential operator, relying on the Fourier–Bessel transform. According to these authors, the formulation of the problem of finding the optimal recovery error and the optimal method on a class of elements ideologically goes back to Kolmogorov's work on the widths (cross-sections) of functional classes [4]. The formulation of the optimal recovery problem itself (in a much simpler setting than the one considered here) belongs to Smolyak [5].
There are well-known situations in which special properties of the Laplace operator and the Fourier transform carry over to elliptic singular differential operators containing the Bessel operator. In such cases, the Fourier–Bessel transform is the natural research tool. The theory of singular differential operators containing the Bessel operator, as well as of the function spaces generated by such operators and of the Fourier–Bessel transform, took its final shape in the works of I.A. Kipriyanov and his disciples (see [6,7,8,9,10,11,12]). In this paper, we transfer the technique and results of [3] to the case of the Fourier–Bessel transform and the singular B-elliptic operator. Our results rest on –estimates for the difference between the Fourier–Bessel transform and its approximation on a convex subset with an error. The method based on such –estimates was constructed in [13].
2. Necessary Definitions
We consider a part of the Euclidean space
where
.
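For the reader's orientation, we recall the standard setting of Kipriyanov's theory in a hedged, illustrative notation (the exact symbols and the number of singular variables used in this paper may differ):

```latex
% Illustrative: the part of Euclidean space with n singular variables
% (those in which functions are required to be even), 1 <= n <= N.
\mathbb{R}^{N}_{+} = \bigl\{ x = (x_1,\dots,x_N) \in \mathbb{R}^{N} :
    x_1 > 0,\ \dots,\ x_n > 0 \bigr\}.
```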
Let be a domain adjacent to the hyperplanes . Suppose the boundary of is the union of two parts: one in and one in the hyperplanes . We assume that ; nevertheless, we treat as a domain.
Let be a subset of such that all points of lie at a distance of at most from . In this case we call a symmetrically interior (s-interior) sub-domain of the set .
Let be the set obtained from by symmetry with respect to , .
Let us denote by the set of all functions satisfying the properties listed below. These properties are needed for the correct definition of the Laplace–Bessel differential operator and the Fourier–Bessel transform on such functions.
1. Any function , together with all of its partial derivatives of order at most ℓ, is continuous in .
2. The even continuation of any function with respect to still belongs to the class .
In addition, .
Functions admitting an even continuation with respect to the corresponding variables that preserves smoothness are said to be even with respect to these variables (see [6]).
Let us denote by the set of all functions that vanish outside some s-interior sub-domain of . Let where , . We will sometimes write instead of when this causes no confusion.
Let us denote by the closure of with respect to the norm
If = , we write without the symbol . Let be the closure of with respect to the norm
Let be the set of all functions f such that for every s-interior sub-domain of the domain .
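As a hedged reminder (illustrative notation, not necessarily that of the paper), the weighted norms of this theory are usually taken with respect to the power-weight measure attached to the singular variables:

```latex
% Weighted L_p-norm with weight x^\gamma = x_1^{\gamma_1}\cdots x_n^{\gamma_n},
% \gamma_i > 0 (illustrative):
\|f\|_{L_p^{\gamma}} =
  \left( \int_{\mathbb{R}^{N}_{+}} |f(x)|^{p}\, x^{\gamma}\, dx \right)^{1/p}.
```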
Let us introduce the following spaces.
Let be the set of all restrictions of functions in the space that are even with respect to to the set (test functions), with the topology induced by the topology in , .
Let be the linear space of all functions (also test functions) that, together with all their derivatives of every order, tend to zero faster than any power of as . The topology in is introduced in the same way as in the space (see [14,15,16,17,18,19]).
We define the space of distributions as the dual space of endowed with the weak topology. The notation denotes the action of a distribution g on a test function .
We do not distinguish between a function and the regular functional acting by the formula
If a functional in is not regular, we call it singular.
Let us introduce the direct and inverse
mixed Fourier–Bessel transforms in
by the formulas
where
is the Euler gamma function,
is the Bessel function of the first kind,
.
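In the standard form found in the Kipriyanov school (a hedged reconstruction; the normalization constants and index conventions here are illustrative), the mixed transform applies a Bessel kernel in the singular variables and the ordinary Fourier kernel in the remaining ones:

```latex
% Illustrative form of the mixed Fourier--Bessel transform, with
% j_\nu(t) = 2^{\nu}\Gamma(\nu+1)\, t^{-\nu} J_\nu(t) the normalized Bessel function,
% x = (x', x''), x' = (x_1,\dots,x_n) the singular variables:
F_B[f](\xi) = \int_{\mathbb{R}^{N}_{+}} f(x)\,
  \prod_{i=1}^{n} j_{\frac{\gamma_i - 1}{2}}(x_i \xi_i)\,
  e^{-i\, x'' \cdot \xi''}\; x^{\gamma}\, dx .
```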
Theorem 1 (The Parseval–Plancherel theorem for the Fourier–Bessel transform [7]). The formula (the Parseval–Plancherel formula) holds.
We define the Fourier–Bessel transform of a distribution g by the formula
where .
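As a quick numerical sanity check of the transform machinery behind Theorem 1 (a sketch, not part of the paper's argument): in the one-dimensional radial case the order-zero Hankel transform, a close relative of the Fourier–Bessel transform, maps the Gaussian onto itself, i.e. the known identity ∫₀^∞ e^{-x²/2} J₀(xξ) x dx = e^{-ξ²/2} holds. The function names below are our own illustrative choices.

```python
import math

def j0(z, m=200):
    # Bessel function J_0 via its integral representation
    # J_0(z) = (1/pi) * int_0^pi cos(z sin t) dt, midpoint rule with m nodes.
    h = math.pi / m
    return sum(math.cos(z * math.sin((k + 0.5) * h)) for k in range(m)) * h / math.pi

def hankel0_gaussian(xi, upper=12.0, m=2000):
    # Order-zero Hankel transform of exp(-x^2/2):
    # int_0^inf exp(-x^2/2) J_0(x*xi) x dx, truncated at `upper`
    # (the tail beyond 12 is of order exp(-72), i.e. negligible).
    h = upper / m
    total = 0.0
    for k in range(m):
        x = (k + 0.5) * h
        total += math.exp(-x * x / 2) * j0(x * xi) * x
    return total * h

# The Gaussian is self-reciprocal under the order-zero Hankel transform.
for xi in (0.5, 1.0, 2.0):
    print(f"xi={xi}: numeric={hankel0_gaussian(xi):.6f}, "
          f"exact={math.exp(-xi * xi / 2):.6f}")
```

The two printed columns agree to several decimal places, which is the one-dimensional prototype of the Parseval–Plancherel behavior used throughout the paper.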
The B-elliptic Laplace–Bessel operator is defined by the formula (see [6])
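In the standard form of this theory (a hedged reconstruction in illustrative notation), the operator acts as the Bessel operator in each singular variable and as the ordinary second derivative in the remaining ones:

```latex
% Illustrative form of the Laplace--Bessel operator:
% Bessel operators B_i act in the singular variables x_1,\dots,x_n.
\Delta_B = \sum_{i=1}^{n} B_i \;+\; \sum_{j=n+1}^{N} \frac{\partial^2}{\partial x_j^2},
\qquad
B_i = \frac{\partial^2}{\partial x_i^2} + \frac{\gamma_i}{x_i}\,\frac{\partial}{\partial x_i}.
```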
We will use the following notation:
3. Problem Statement
We define the fractional power of the operator by the equality (see [20])
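In the classical Fourier setting, and presumably in the Fourier–Bessel setting intended here (a hedged reconstruction; the exponent convention is illustrative), the fractional power is defined as a multiplier on the transform side:

```latex
% Illustrative: fractional power of -\Delta_B as a Fourier--Bessel multiplier.
F_B\bigl[(-\Delta_B)^{\alpha} f\bigr](\xi) = |\xi|^{2\alpha}\, F_B[f](\xi),
\qquad \alpha > 0 .
```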
where . Let us introduce the function spaces
Let us consider a non-empty bounded measurable subset G of . Let . Suppose that a function is known approximately; namely, we know only a function such that
Based on this information, we aim to recover the functions f and in the best possible way. We regard the function as an approximation of the restriction with error in the –metric.
As in the papers [2,3], we call any measurable mapping a recovery method. We define its error as
where
The value
is called the optimal recovery error (the infimum is taken over all methods ).
The mappings at which this infimum is attained are called optimal recovery methods.
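Schematically, and in hedged notation (the exact norms and classes are as defined above; f|_G denotes the restriction of f to G), the quantities just introduced take the form standard in optimal recovery theory:

```latex
% Error of a recovery method m and the optimal recovery error (schematic):
e(m) = \sup_{\substack{f,\; y :\ \|y - f|_G\| \le \delta}}
       \bigl\| (-\Delta_B)^{\alpha} f - m(y) \bigr\| ,
\qquad
E = \inf_{m}\, e(m).
```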
4. Lower Estimate of the Optimal Recovery Error Value
The proofs of both lemmas in this section follow the scheme suggested in [13,21], with the difference that the initial error estimate is given here in the –metric, whereas in those papers it is given in the –metric.
Let us consider an auxiliary extremal problem.
Lemma 1. The optimal recovery error is not lower than the value of extremal problem .
Proof. Let be an admissible function of extremal problem ; in other words, satisfies the constraints of this problem. Then the function is also admissible. Let be any fixed method and let be the image of the zero element of the space under the mapping . Then
Passing to the supremum over all admissible functions of the extremal problem on the left-hand side of this inequality and over all mappings (methods) on the right-hand side completes the proof. □
Let us consider another extremal problem.
Lemma 2. The squared value of extremal problem is equal to the value of extremal problem .
Proof. The assertion of the lemma follows from the Parseval–Plancherel theorem for the Fourier–Bessel transform. □
In what follows, we assume that is convex.
Proof. Assume that . In this case we can apply the finite-dimensional separation theorem (see, for instance, [22]) and separate the origin from the convex set and, as a consequence, from . This means that there exists a vector such that
For arbitrarily small , we introduce the ball . If , then and, taking into account that , we get
Thus, .
Let us introduce a function such that
Obviously, the function has bounded support and, therefore, and . Thus, . Moreover, the function belongs to . Therefore, , because . It is also easy to show that the function satisfies the other conditions of Problem . Let . Then . Taking this fact into account, we get
Since is arbitrarily small, the value of the objective functional in the extremal problem, and therefore in the original recovery problem, can be made arbitrarily large. The proof is complete. □
Lemma 3. Let . Then the lower estimate holds. Proof. The intersection of the boundary of the semi-ball and the boundary of is non-empty. Assume that belongs to this intersection. Then . Hence, the point can be separated from the convex set , i.e., there is a vector such that and .
Consider a sequence of points and a sequence of balls . None of these balls intersects . Indeed, if , then , that is, . Let
Let us consider a sequence of functions whose Fourier–Bessel transforms have the form
where
From the last two conditions we get . Note that, if , then
This means that the functions are admissible in problem .
Let us associate "the Lagrange function" with problem , where ,
For every -admissible function, we have
In particular, the value of extremal problem is not greater than
For , we have
whence we get
On the other hand, we showed earlier that for
From the last two inequalities, we have
Since
, we obtain
Let us return to the integral
This sequence tends to 1 as . Hence, since
we have
Based on Formulas (12) and (13), we get
This means that the value is the value of problem , and the square root of (14) is the value of problem .
The integral is a special case of an integral calculated in [23] (p. 613, Formula (4.635.1)). In our case, we get
Now we can calculate the value of problem by substituting (15) into (14):
From here, using Lemmas 1 and 2, we obtain the lower estimate (10) of the optimal recovery error. The lemma is proved. □
Lemma 4. Let . Then the lower estimate holds. Proof. Consider a function whose Fourier–Bessel transform has the form
Obviously, .
Let us again associate "the Lagrange function" with problem , where ,
Obviously, . Moreover, taking into account (15), we get
This means that the function is admissible in problem . Given these facts, for any -admissible function, we have
This means that is the solution of problem . The value of this problem is equal to
From here, using Lemmas 1 and 2, we obtain the lower estimate (17) of the optimal recovery error. The lemma is proved. □
5. Upper Error Estimation and Optimal Recovery Method
We consider one more auxiliary extremal problem.
The optimality of the method from the statement of the theorem means that the value of extremal problem E is equal to .
We will look for the optimal method among linear mappings that act in Fourier images according to the rule
with a function a vanishing outside the ball . Let m be a mapping of this kind. Then
Let , , when , and , when . By the Cauchy–Bunyakovsky inequality, we obtain for the integrand in the second integral:
For any , the minimum value of the expression is attained at the point
This minimum value is equal to 1. Substituting (21) into the first term of (20) and taking into account the first constraint of problem E, we get
For the second term of (20), we have
Adding (22) and (23), we obtain the following upper estimate
The upper estimate obtained for the squared error of the constructed method does not exceed the lower estimate of the squared optimal recovery error established above. This means that the constructed method is optimal. Thus, we have proved the following result.
Theorem 3. If and , then