1. Introduction and Main Result
First, we recall the definitions of N-widths. Let X be a normed linear space with norm $\|\cdot\|_X$, and let W be a bounded subset of X. Denote by $\mathcal{F}_N$ the family of linear subspaces of X of dimension at most N.
Definition 1 ([1]). The Kolmogorov N-width of W in X is defined as follows:
$$d_N(W, X) = \inf_{F_N} \sup_{x \in W} \inf_{y \in F_N} \|x - y\|_X,$$
where the leftmost infimum is taken over all N-dimensional linear subspaces $F_N$ of X.
Definition 2 ([1]). The linear N-width of W in X is defined as follows:
$$\lambda_N(W, X) = \inf_{T_N} \sup_{x \in W} \|x - T_N x\|_X,$$
where $T_N$ runs over all linear operators from X to X of rank at most N.
Definition 3 ([1]). If $f_1, \dots, f_N$ are N linearly independent continuous linear functionals on X, then $F^N = \{x \in X : f_i(x) = 0,\ i = 1, \dots, N\}$ is a closed subspace of X with co-dimension N. The Gel’fand N-width of W in X is defined as follows:
$$d^N(W, X) = \inf_{F^N} \sup_{x \in W \cap F^N} \|x\|_X,$$
where $F^N$ runs over all subspaces of X with co-dimension at most N.
Definition 4 ([2,3,4]). Let W be a bounded subset of X, equipped with a probability measure μ. For any $\delta \in (0, 1)$, the probabilistic Kolmogorov $(N,\delta)$-width and probabilistic linear $(N,\delta)$-width of W in X are defined as follows:
$$d_{N,\delta}(W, \mu, X) = \inf_{G_\delta} d_N(W \setminus G_\delta, X), \qquad \lambda_{N,\delta}(W, \mu, X) = \inf_{G_\delta} \lambda_N(W \setminus G_\delta, X),$$
where $G_\delta$ runs over all Borel subsets of W satisfying the condition that $\mu(G_\delta) \le \delta$. In order to define the probabilistic Gel’fand
$(N,\delta)$-width, we need to introduce some related concepts. Let Y be a Hilbert space equipped with a probability measure ν, and let A be a closed subspace of Y. The orthogonal complement of A is denoted by $A^{\perp}$. Then, every $y \in Y$ admits a decomposition
$$y = y_1 + y_2, \qquad y_1 \in A,\ y_2 \in A^{\perp}.$$
This decomposition is unique.
If we introduce the orthogonal projection operator P from Y onto A, then the element $y_1$ will be denoted by $Py$. For any Borel set B in A, let $\nu_A(B) = \nu(P^{-1}(B))$. Then, $\nu_A$ is a probability measure on A. After the above preparation, we can start to define the probabilistic Gel’fand $(N,\delta)$-width.
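The induced measure $\nu_A(B) = \nu(P^{-1}(B))$ can be illustrated with a toy discrete analogue (a sketch with hypothetical data, not taken from the paper): project a probability measure supported on four points of the plane onto the first coordinate axis and sum the masses of the preimages.

```python
from collections import defaultdict

# A toy discrete probability measure nu on four points of R^2
# (hypothetical data, for illustration only).
nu = {
    (0.0, 1.0): 0.25,
    (0.0, -1.0): 0.25,
    (2.0, 0.5): 0.3,
    (2.0, -0.5): 0.2,
}

def project(point):
    """Orthogonal projection P onto the subspace A = x-axis."""
    return point[0]

# Pushforward (induced) measure nu_A(B) = nu(P^{-1} B) on A.
nu_A = defaultdict(float)
for point, mass in nu.items():
    nu_A[project(point)] += mass

# nu_A is again a probability measure on A.
assert abs(sum(nu_A.values()) - 1.0) < 1e-12
print(dict(nu_A))  # {0.0: 0.5, 2.0: 0.5}
```

The same bookkeeping, with integrals in place of sums, is exactly what the projection construction above performs on the Hilbert space Y.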
Definition 5 ([2,3,4]). Let H be a Hilbert space and let X be as in Definition 1. Let H be equipped with a probability measure μ. For any $\delta \in (0, 1)$, the probabilistic Gel’fand $(N,\delta)$-width is defined as follows:
$$d^{N,\delta}(H, \mu, X) = \inf_{G_\delta} \inf_{F^N} \sup_{x \in (H \setminus G_\delta) \cap F^N} \|x\|_X,$$
where $F^N$ runs over all subspaces of X with co-dimension at most N, and $G_\delta$ runs over all subsets of H with measure at most δ which satisfy the following condition: for any closed subspace F in H,
Remark 1. In this article, we give the definition of the Gel’fand N-width in the probabilistic setting. Comparing the Kolmogorov N-width, the linear N-width, and the Gel’fand N-width in the probabilistic setting, we find that for the first two widths the infimum is taken over all $G_\delta$ with measure at most δ, whereas Tan et al. [5] take the infimum only over those $G_\delta$ satisfying the above condition. From [5], we know that this condition is added in order to make sure that $(H \setminus G_\delta) \cap F^N$ has enough elements. In addition, one could define the probabilistic Gel’fand $(N,\delta)$-widths without this restriction as follows:
$$\inf_{G_\delta} d^N(W \setminus G_\delta, X),$$
where $G_\delta$ runs over all subsets of W with measure at most δ. We obtained the following result in paper [6]:
where H is a Hilbert space that can be imbedded continuously into X. Therefore, in the current situation, it is difficult to study this unrestricted quantity, and we have to add this condition (7) in Definition 5. Widths theory is an important part of the approximation theory of functions. Different N-widths correspond to different approximation methods. For example, the Kolmogorov N-width gives the optimal degree of approximation of the “worst” elements of the approximated set by N-dimensional subspaces, while the linear N-width reflects the optimal error of approximating the “worst” elements by linear operators of rank at most N. In the worst-case setting, widths are thus determined by the “worst” elements of the approximated sets, so the errors they reflect cannot represent the quality of approximation over the whole set.
If we define widths in the probabilistic or average setting, this information is refined. In the probabilistic setting, the error is the worst-case error over the set after removing a subset of measure at most δ, on which an algorithm is allowed to “fail”. Therefore, the probabilistic setting, compared to the worst-case setting, allows one to give a better analysis of the approximation of function classes.
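As a finite-dimensional sanity check of Definition 1 (an illustrative sketch with made-up semi-axes, not from the paper): in a Hilbert space, the Kolmogorov N-width of an ellipsoid is known to equal its (N+1)-st largest semi-axis. For a diagonal ellipsoid the infimum is attained on coordinate subspaces, so restricting the infimum to coordinate subspaces already recovers the width.

```python
from itertools import combinations

# Semi-axes of the ellipsoid W = {(a_1 t_1, ..., a_m t_m) : sum t_i^2 <= 1}
# (hypothetical values, for illustration only).
a = [3.0, 2.0, 1.0, 0.5]
m = len(a)

def width_over_coordinate_subspaces(N):
    """Infimum over N-dimensional coordinate subspaces of the supremum,
    over the ellipsoid, of the Euclidean distance to the subspace.
    For the subspace spanned by {e_i : i in S} this supremum equals
    max_{i not in S} |a_i|."""
    best = float("inf")
    for S in combinations(range(m), N):
        err = max((abs(a[i]) for i in range(m) if i not in S), default=0.0)
        best = min(best, err)
    return best

# The minimum keeps the N largest semi-axes, so the value equals the
# (N+1)-st largest semi-axis.
for N in range(m):
    assert width_over_coordinate_subspaces(N) == sorted(a, reverse=True)[N]
```

The brute-force search confirms the classical formula $d_N = a_{N+1}$ (semi-axes sorted decreasingly) in this toy case.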
Probabilistic and average widths have attracted much attention in recent years. We refer to the literature [
7,
8,
9] for a survey. The usual widths may be found in the books [
1,
10]. V. E. Maiorov studied the probabilistic and average Kolmogorov and linear widths of one-dimensional Sobolev space in the
-norm,
[
2,
3,
4]. Subsequently, some scholars studied the probabilistic and average widths of
in the
. They also studied the probabilistic and average linear widths of Sobolev space. These results were fully analogous to those of Maiorov, Fang, and Ye [
2,
11,
12]. Chen and Fang studied the probabilistic and average widths of multivariate Sobolev space with mixed derivative in the
-norm,
[
13,
14]. After 2010, Wang studied the probabilistic and average widths of weighted Sobolev spaces [
15,
16]. Tan et al. studied the Gel’fand
N-width in the probabilistic setting of one-dimensional Sobolev space in the
-norm,
[
5]. Dai and Wang studied the probabilistic and average linear
n-widths of diagonal matrices [
17]. Zhai and Hu estimated the sharp bound of the probabilistic and average linear widths of Sobolev spaces with Jacobi weight [
9]. Vasil’eva studied the Kolmogorov widths of intersections and of weighted Sobolev classes [
18,
19].
Next, we define two equivalence relations.
Assume that c and $c_i$, $i = 1, 2, \dots$, are positive constants depending only on the parameters r, q, and d. For two positive functions $a(y)$ and $b(y)$ satisfying $a(y) \le c\, b(y)$ for all y from the common domain of a and b, we write $a(y) \ll b(y)$. If $a(y) \ll b(y)$ and $b(y) \ll a(y)$, then we write $a(y) \asymp b(y)$.
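The relations just defined can be checked numerically for concrete functions. The following sketch (with hypothetical functions a and b, for illustration only) verifies $a \asymp b$ by exhibiting explicit constants.

```python
# Hypothetical functions for illustration: a(n) = 3n^2 + n, b(n) = n^2.
def a(n):
    return 3 * n ** 2 + n

def b(n):
    return n ** 2

# a(n) <= 4 * b(n) and b(n) <= 1 * a(n) for all n >= 1,
# so a << b and b << a, i.e. a and b are weakly equivalent (a ≍ b),
# with constants c = 4 and c = 1 respectively.
assert all(a(n) <= 4 * b(n) for n in range(1, 1000))
assert all(b(n) <= a(n) for n in range(1, 1000))
```

In the asymptotic estimates below, only the existence of such constants matters, not their values.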
In this article, we will continue the research of [
5] and estimate the exact bounds of the probabilistic Gel’fand widths of multivariate Sobolev spaces with mixed derivative in the
-norm,
. However, our results cannot be derived directly from the results of [
5]. This shows that the results for multivariate Sobolev spaces are not a routine generalization of those for univariate Sobolev spaces; the methods and techniques differ. We determine the asymptotic order of the probabilistic Gel’fand widths of the multivariate Sobolev space with mixed derivative
in the space
,
, where
.
Theorem 1. Let , , , , . Then,
Theorem 2. Let , , , , . Then,
2. Discretization
Consider Hilbert space
, where
, consisting of all
-periodic functions
with the Fourier series
Define the inner product by
Denote by
,
, the classical
q-integral Lebesgue space of
-periodic functions with the usual norm
. Let
,
,
and
. We denote that
,
, and we define that
,
,
. In summary, we can define the
rth order derivative of
x for an arbitrary vector
in the sense of Weyl by
where
,
. It is well known that if
, then the space
can be imbedded continuously into
.
Sobolev space
is a Hilbert space given by
The inner product of
is
, and the norm in
is
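A small numerical sketch of such a mixed-smoothness norm may be helpful. It assumes the weight $\rho_r(k) = \prod_{j=1}^{d} \max(1, |k_j|)^{r}$ on the Fourier coefficients; this is an assumption for illustration, and the paper's exact normalization may differ.

```python
# Sketch of a mixed-smoothness Sobolev norm, ASSUMING the weight
# rho_r(k) = prod_j max(1, |k_j|)^r; the exact normalization may differ.
def rho(k, r):
    """Mixed-derivative weight of the frequency vector k."""
    w = 1.0
    for kj in k:
        w *= max(1, abs(kj)) ** r
    return w

def sobolev_norm_sq(coeffs, r):
    """coeffs: dict mapping frequency vectors k to Fourier coefficients c_k.
    Returns sum_k rho_r(k)^2 |c_k|^2."""
    return sum(rho(k, r) ** 2 * abs(c) ** 2 for k, c in coeffs.items())

# Example: a trigonometric polynomial with c_{(2,1)} = 1 and c_{(0,3)} = 0.5,
# smoothness r = 2.  rho((2,1)) = 4 and rho((0,3)) = 9, so
# norm^2 = 16 * 1 + 81 * 0.25 = 36.25.
print(sobolev_norm_sq({(2, 1): 1.0, (0, 3): 0.5}, r=2))  # 36.25
```

Note how the weight depends on the product of the coordinate magnitudes rather than on $|k|$, which is the hallmark of mixed (dominating mixed derivative) smoothness.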
Now, we equip with Gaussian measure , whose mean is zero and whose correlation operator has eigenfunctions and eigenvalues , , i.e., , , where .
Let
be any orthogonal system of functions in
,
,
, and
B be an arbitrary Borel subset of
. Then, the Gaussian measure
on the cylindrical subsets in the space
,
is given by
A result about the average error bounds in Banach spaces equipped with a Gaussian measure can be found in an article by Yongsheng Sun [
20].
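As a one-dimensional illustration of such Gaussian-measure computations (a sketch using only the standard normal distribution, not a formula from the paper): the standard Gaussian measure of an interval $[-t, t]$ on the real line is $\operatorname{erf}(t/\sqrt{2})$, so exceptional sets of measure δ correspond to tail intervals.

```python
import math

def gauss_measure_interval(t):
    """Standard Gaussian measure nu_1([-t, t]) on the real line."""
    return math.erf(t / math.sqrt(2.0))

# The whole line has measure 1 (in the limit), and the familiar
# two-sided 95% point is approximately t = 1.96.
assert abs(gauss_measure_interval(50.0) - 1.0) < 1e-12
assert abs(gauss_measure_interval(1.96) - 0.95) < 1e-3
```

Removing the two tails $|x| > t$ is the one-dimensional analogue of discarding a set $G_\delta$ of measure at most δ in the probabilistic widths above.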
Tan et al. [
5] studied the probabilistic Gel’fand width of univariate Sobolev space
. They proved three lemmas, as follows:
Lemma 1 ([5]). Let and . Then, Here, the upper bounds hold if . Lemma 2 ([5]). Let H be a Hilbert space and X be a linear space with norm . If H can be imbedded continuously into X, and μ is the Gaussian measure in H, then Lemma 3 ([5]). Let , , , . Then, Remark 2. In the one-dimensional case, Theorems 1 and 2 coincide with Lemma 3.
Chen and Fang [
14] studied the probabilistic linear
N-width of multivariate Sobolev space with mixed derivative. They proved the following:
Lemma 4 ([14]). Let , , , , . Then,
Now, we use the discretization method to prove Theorems 1 and 2. This method transforms problems on function spaces into problems on finite-dimensional spaces; therefore, discretization reduces the calculation of the sharp bounds of the probabilistic $(N,\delta)$-widths.
First, we recall some definitions and introduce some notations. We also need some results on the standard Gaussian measure in finite-dimensional spaces. Let $\ell_q^m$ be the m-dimensional normed space of vectors $x = (x_1, \dots, x_m) \in \mathbb{R}^m$, with norm
$$\|x\|_{\ell_q^m} = \Big(\sum_{i=1}^{m} |x_i|^q\Big)^{1/q},\ 1 \le q < \infty, \qquad \|x\|_{\ell_\infty^m} = \max_{1 \le i \le m} |x_i|.$$
Consider in $\mathbb{R}^m$ the standard Gaussian measure, which is defined as
$$\nu_m(G) = (2\pi)^{-m/2} \int_G \exp\Big(-\frac{|x|^2}{2}\Big)\, dx,$$
where G is any Borel subset of $\mathbb{R}^m$ and $|x|$ is the Euclidean norm of x. Evidently, $\nu_m(\mathbb{R}^m) = 1$. In order to establish the discretization theorem, we need to split the Fourier series of functions into the sum of dyadic blocks. Then, we introduce some lemmas and notations. For
, let
be the “block” of
, where
Assume
to be the “block” of the Fourier series for
: that is,
After introducing these necessary concepts, we have
Lemma 5 ([21]). Let S be a subset of , , , . Then,where , , is the cardinality of the set S, and Lemma 6 ([2,3]). For , there is a positive , such thatFor and any , there exists a positive constant that depends only on q, such that Given
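The dyadic (“block”) splitting just described can be sketched in code. The sketch below assumes blocks of the form $\{k \in \mathbb{Z}^d : 2^{s_j - 1} \le |k_j| < 2^{s_j}\}$ (with $s_j = 0$ meaning $k_j = 0$); this indexing convention is an assumption for illustration and may differ from the paper's.

```python
from collections import Counter

# Sketch of the dyadic-block decomposition, ASSUMING blocks
# rho(s) = {k in Z^d : 2^(s_j - 1) <= |k_j| < 2^(s_j)},
# with s_j = 0 meaning k_j = 0.
def block_index(k):
    """Return the dyadic block s containing the frequency vector k."""
    return tuple(0 if kj == 0 else kj.bit_length() for kj in (abs(x) for x in k))

# Every frequency in the square [-8, 8]^2 lies in exactly one block,
# so the block sizes sum to the total number of frequencies.
freqs = [(k1, k2) for k1 in range(-8, 9) for k2 in range(-8, 9)]
sizes = Counter(block_index(k) for k in freqs)
assert sum(sizes.values()) == 17 * 17

# The block s = (2, 0) consists of k with 2 <= |k_1| < 4 and k_2 = 0,
# i.e. k_1 in {-3, -2, 2, 3}:
assert sizes[(2, 0)] == 4
```

The disjointness of the blocks is what allows the Fourier series of a function to be split into the sum of its dyadic blocks, which is the starting point of the discretization.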
, we define
where
,
. Note that
,
for
.
Let . We obtain . And we define .
The following theorem yields the upper bounds in Theorems 1 and 2:
Theorem 3. Let , , , , , . Let the sequence of natural numbers be such that , , and let the sequence of positive numbers be such that . Then,
where . Proof. From Lemma 1, there is a positive constant
, such that
Let
, where
By
[
5], we can establish that
From Lemma 6, we have
Let
be a subspace of
with co-dimension at most
. Then,
Now, we consider the polynomials in the space
:
where
.
Obviously, these polynomials are orthogonal. And, for any
, we have
For
and
, we consider a mapping,
From the result of Chen and Fang [
13],
is a linear isomorphism from the space
to
, and
That is, there is a positive constant
, such that
Let
. From the results of Chen and Fang [
13], there is a positive constant
, such that
.
Now, we consider the subset
Then,
Let
. Then,
. Let
, and
, where the sum is the direct sum. Therefore,
F is a subspace of
with co-dimension as follows:
From the definition of the probabilistic Gel’fand
-width, we obtain
which completes the proof of Theorem 3. □
Remark 3. In order to prove Theorems 1 and 2, we use the discretization method [3]. The discretization method is a classical method in widths theory. To estimate the sharp bounds of probabilistic Kolmogorov $(N,\delta)$-widths and probabilistic linear $(N,\delta)$-widths, scholars such as Maiorov, Fang, Ye, and Chen [2,3,4,11,12,13,14] also used the discretization method. Therefore, discretization is an effective method of calculating probabilistic $(N,\delta)$-widths. However, our discretization theorem differs from those mentioned above. The difference between the proof of Theorem 3 and other discretization theorems stems from the difference between the definitions of the Kolmogorov, linear, and Gel’fand N-widths in the probabilistic setting. We need to construct a new set in order to bound the probabilistic Gel’fand $(N,\delta)$-widths of multivariate Sobolev spaces from above. This set is related to the norm of . Therefore, we need to estimate the upper bound of ; that is the innovation of our discretization theorem.
4. Summary
In this article, we have obtained the sharp bounds of the probabilistic Gel’fand $(N,\delta)$-widths of the multivariate Sobolev space with mixed derivative. These results are relevant to approximation theory and widths theory, and in particular to the analysis of information-based algorithms.
In paper [
23], we obtained the sharp bounds of probabilistic Kolmogorov and linear
-widths, and
p-average Kolmogorov and linear
N-widths of multivariate Sobolev spaces
with common smoothness in the
norm. Due to the difference in norms, the discretization method used in that calculation differs from the one in this article. Therefore, one can further study the sharp bounds of the probabilistic Gel’fand
-width and
p-average Gel’fand
N-widths of
in
norm and
norm. These results have subsequently been obtained.