Article

Pointwise Wavelet Estimations for a Regression Model in Local Hölder Space

School of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
Axioms 2022, 11(9), 466; https://doi.org/10.3390/axioms11090466
Submission received: 26 July 2022 / Revised: 2 September 2022 / Accepted: 6 September 2022 / Published: 10 September 2022
(This article belongs to the Special Issue Approximation Theory and Related Applications)

Abstract

This paper considers the estimation of an unknown function in a regression model with both multiplicative and additive noise. A linear wavelet estimator is first constructed via a wavelet projection operator, and its convergence rate under the pointwise error is studied in local Hölder space. A nonlinear wavelet estimator is then provided by the hard thresholding method in order to obtain an adaptive estimator; its convergence rate matches that of the linear estimator up to a logarithmic term. Finally, it is pointed out that the convergence rates of both wavelet estimators are consistent with the optimal convergence rate of pointwise nonparametric estimation.

1. Introduction

The classical regression model plays an important role in many practical applications. It is defined by $Y_i = f(X_i) + \varepsilon_i$, $i \in \{1, \ldots, n\}$, and the aim is to estimate the unknown regression function $f(x)$ from the observed data $(X_1, Y_1), \ldots, (X_n, Y_n)$. For this classical regression model, many important and interesting results have been obtained by Hart [1], Kerkyacharian and Picard [2], Chesneau [3], Reiß [4], Yuan and Zhou [5], and Wang and Politis [6].
Recently, Chesneau et al. [7] studied the following regression model
$$Y_i = f(X_i)\,U_i + V_i, \quad i \in \{1, \ldots, n\}, \qquad (1)$$
where $(X_1, Y_1), \ldots, (X_n, Y_n)$ are independent and identically distributed random vectors, $f$ is an unknown function defined on $\Delta \subseteq \mathbb{R}$, $U_1, \ldots, U_n$ are identically distributed random variables, and $X_1, \ldots, X_n$ and $V_1, \ldots, V_n$ are each identically distributed random variables. Moreover, $X_i$ and $U_i$ are independent, and $U_i$ and $V_i$ are independent, for any $i \in \{1, \ldots, n\}$. The aim of this model is to estimate the unknown function $r(x)$ ($r := f^2$) from the observed data $(X_1, Y_1), \ldots, (X_n, Y_n)$.
Model (1) reduces to the classical regression model when $U_i \equiv 1$; in other words, (1) can be viewed as an extension of the classical regression problem. In addition, model (1) becomes the classical heteroscedastic regression model when $V_i$ is a function of $X_i$ ($V_i = g(X_i)$). In that case, the function $r(x)$ ($r := f^2$) is called the variance function of the heteroscedastic regression model, which plays a crucial role in financial and economic fields (Cai and Wang [8], Alharbi and Patil [9]). Furthermore, the regression model (1) is also widely used in global positioning systems (Huang et al. [10]), image processing (Kravchenko et al. [11], Cui [12]), and so on.
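To make the setting concrete, the following minimal Python sketch simulates data from model (1) under assumptions A2, A3 and A6 introduced in Section 2; the particular choices of $f$ and $g$ are illustrative only and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def f(x):
    # Illustrative smooth, bounded signal on [0, 1] (not from the paper).
    return 1.0 + 0.5 * np.sin(2 * np.pi * x)

def g(x):
    # Illustrative known, bounded additive-noise function (assumption A6).
    return 0.3 * (1.0 + x)

X = rng.uniform(0.0, 1.0, n)    # A2: X_i ~ U([0, 1])
U = rng.standard_normal(n)      # A3: U_i ~ N(0, 1), independent of X_i
V = g(X)                        # A6: V_i = g(X_i)
Y = f(X) * U + V                # model (1): Y_i = f(X_i) U_i + V_i

# The estimation target is r = f^2; it enters through second moments:
# E[Y^2 | X = x] = r(x) + g(x)^2, since E[U] = 0 and E[U^2] = 1.
```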
For this regression model, Chesneau et al. [7] proposed two wavelet estimators and discussed their convergence rates under the mean integrated squared error over Besov spaces. However, that study focuses only on the global error of the wavelet estimators; pointwise risk estimation for this model has been lacking. In this paper, two new wavelet estimators are constructed, and their convergence rates under the pointwise error in local Hölder space are studied. More importantly, both wavelet estimators attain the optimal pointwise convergence rate.

2. Assumptions, Local Hölder Space and Wavelet

In this paper, we will consider model (1) with $\Delta = [0, 1]$. Additional technical assumptions are formulated below.
  • A1: $Y_i$ is bounded for any $i \in \{1, \ldots, n\}$.
  • A2: $X_1 \sim U([0, 1])$ (the uniform distribution on $[0, 1]$).
  • A3: $U_1 \sim N(0, 1)$.
  • A4: $V_1$ has a finite moment of order 2.
  • A5: $X_i$ and $V_i$ are independent for any $i \in \{1, \ldots, n\}$.
  • A6: $V_i = g(X_i)$, where $g : [0, 1] \to \mathbb{R}$ is known and bounded.
Among the above assumptions, A5 and A6 are mutually exclusive: under A6, $V_i$ is a deterministic function of $X_i$, whereas A5 requires $V_i$ and $X_i$ to be independent. Hence, we define the following two sets, H1 and H2, of the above assumptions:
H1 := {A1, A2, A3, A4, A5},
H2 := {A1, A2, A3, A4, A6}.
Note that the difference between H1 and H2 is the relationship between $V_i$ and $X_i$. Since the assumptions are separated into the two sets H1 and H2, the estimators of the function $r(x)$ must be constructed under each condition set separately.
This paper will consider nonparametric pointwise estimation in local Hölder space, a concept we now introduce. Recall the classical Hölder condition defining $H^{\delta}(\mathbb{R})$ ($0 < \delta \le 1$):
$$|f(y) - f(x)| \le C\,|y - x|^{\delta}, \quad x, y \in \mathbb{R}.$$
Let $\Omega_{x_0}$ be a neighborhood of $x_0 \in \mathbb{R}$, and let the function space $H^{\delta}(\Omega_{x_0})$ ($0 < \delta \le 1$) be defined as
$$H^{\delta}(\Omega_{x_0}) = \big\{ f : |f(y) - f(x)| \le C\,|y - x|^{\delta},\ x, y \in \Omega_{x_0} \big\},$$
where $C > 0$ is a fixed constant. Clearly, any $f \in H^{\delta}(\mathbb{R})$ also belongs to $H^{\delta}(\Omega_{x_0})$; however, the converse does not hold. For instance, $f(x) = x^2$ satisfies the condition with $\delta = 1$ on any bounded neighborhood $\Omega_{x_0}$, but no single constant $C$ works on all of $\mathbb{R}$.
For $s = N + \delta > 0$ with $\delta \in (0, 1]$ and $N \in \mathbb{N}$ (the set of nonnegative integers), we define the local Hölder space as
$$H^{s}(\Omega_{x_0}) = \big\{ f : f^{(N)} \in H^{\delta}(\Omega_{x_0}) \big\}.$$
Furthermore, it follows from the definition of $H^{s}(\Omega_{x_0})$ that $H^{s}(\Omega_{x_0}) \subseteq L^2(\mathbb{R})$.
In order to construct wavelet estimators in later sections, we introduce some basic theories of wavelets.
Definition 1.
A multiresolution analysis (MRA) is a sequence of closed subspaces $\{V_j\}_{j \in \mathbb{Z}}$ of the square-integrable function space $L^2(\mathbb{R})$ satisfying the following properties:
(i) $V_j \subseteq V_{j+1}$;
(ii) $\overline{\bigcup_{j \in \mathbb{Z}} V_j} = L^2(\mathbb{R})$ (the space $\bigcup_{j \in \mathbb{Z}} V_j$ is dense in $L^2(\mathbb{R})$);
(iii) $f(2\,\cdot) \in V_{j+1}$ if and only if $f(\cdot) \in V_j$ for each $j \in \mathbb{Z}$;
(iv) there exists $\phi \in L^2(\mathbb{R})$ (a scaling function) such that $\{\phi(\cdot - k),\ k \in \mathbb{Z}\}$ forms an orthonormal basis of $V_0 = \overline{\mathrm{span}}\{\phi(\cdot - k)\}$.
Let $\phi$ be a scaling function and $\psi$ a wavelet function such that
$$\{\phi_{j_*,k},\ \psi_{j,k},\ j \ge j_*,\ k \in \mathbb{Z}\}$$
constitutes an orthonormal basis of $L^2(\mathbb{R})$, where $j_*$ is a positive integer, $\phi_{j_*,k}(x) = 2^{j_*/2}\,\phi(2^{j_*}x - k)$ and $\psi_{j,k}(x) = 2^{j/2}\,\psi(2^{j}x - k)$. In this paper, we choose the Daubechies wavelets. Then any $h(x) \in H^{s}(\Omega_{x_0})$ has the expansion
$$h(x) = \sum_{k \in \mathbb{Z}} \alpha_{j_*,k}\,\phi_{j_*,k}(x) + \sum_{j \ge j_*} \sum_{k \in \mathbb{Z}} \beta_{j,k}\,\psi_{j,k}(x),$$
where $\alpha_{j,k} = \langle h, \phi_{j,k} \rangle$ and $\beta_{j,k} = \langle h, \psi_{j,k} \rangle$. Further details can be found in Meyer [13] and Daubechies [14].
Let $P_j$ be the orthogonal projection operator from $L^2(\mathbb{R})$ onto the space $V_j$ with the orthonormal basis $\{\phi_{j,k}(\cdot) = 2^{j/2}\,\phi(2^{j}\cdot - k),\ k \in \mathbb{Z}\}$. Then for $h(x) \in H^{s}(\Omega_{x_0})$ and $\alpha_{j,k} = \langle h, \phi_{j,k} \rangle$,
$$P_j h(x) = \sum_{k \in \mathbb{Z}} \alpha_{j,k}\,\phi_{j,k}(x).$$
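As a numerical sanity check of the projection $P_j$, the sketch below tabulates the Daubechies scaling function with the PyWavelets package and approximates the coefficients $\alpha_{j,k} = \langle h, \phi_{j,k} \rangle$ by a Riemann sum; the test function $h$, the grid resolution and the level $j$ are illustrative assumptions.

```python
import numpy as np
import pywt

# Tabulate the db4 scaling function phi on its support (for orthogonal
# wavelets, pywt.wavefun returns phi, psi and their common grid).
wav = pywt.Wavelet('db4')
phi_tab, psi_tab, grid = wav.wavefun(level=10)

def phi_jk(t, j, k):
    # phi_{j,k}(t) = 2^{j/2} phi(2^j t - k), zero outside the tabulated support.
    return 2 ** (j / 2) * np.interp(2 ** j * np.asarray(t) - k, grid, phi_tab,
                                    left=0.0, right=0.0)

# Riemann-sum approximation of alpha_{j,k} = <h, phi_{j,k}> for a test function.
h = lambda t: np.exp(-t ** 2)
t = np.linspace(-5.0, 5.0, 20001)
dt = t[1] - t[0]
j = 3
ks = range(int(2 ** j * t.min()) - 7, int(2 ** j * t.max()) + 1)
alphas = {k: np.sum(h(t) * phi_jk(t, j, k)) * dt for k in ks}

# P_j h(x0) = sum_k alpha_{j,k} phi_{j,k}(x0); at moderate j it should be
# close to h(x0) for a smooth h.
x0 = 0.3
print(sum(a * phi_jk(x0, j, k) for k, a in alphas.items()), h(x0))
```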
We now state an important lemma that will be used in the later discussions. Hereafter, we adopt the following notation: $A \lesssim B$ denotes $A \le cB$ for some constant $c > 0$; $A \gtrsim B$ means $B \lesssim A$; and $A \sim B$ stands for both $A \lesssim B$ and $B \lesssim A$.
Lemma 1
(Liu and Wu [15]). If $f \in H^{s}(\Omega_{x_0})$, $s > 0$ with $s = N + \delta$ ($0 < \delta \le 1$), then for $x \in \Omega_{x_0}$ and $j_* \in \mathbb{N}$,
(i) $\sup_{f \in H^{s}(\Omega_{x_0})} \big|\sum_{k \in \mathbb{Z}} \beta_{j,k}\,\psi_{j,k}(x)\big| \lesssim 2^{-js}$;
(ii) $f(x) = \sum_{k \in \mathbb{Z}} \alpha_{j_*,k}\,\phi_{j_*,k}(x) + \sum_{j \ge j_*} \sum_{k \in \mathbb{Z}} \beta_{j,k}\,\psi_{j,k}(x)$;
(iii) $\sup_{f \in H^{s}(\Omega_{x_0})} \big|f(x) - P_{j_*}f(x)\big| \lesssim 2^{-j_{*}s}$.

3. Linear Wavelet Estimator

In this section, a linear wavelet estimator is constructed by the wavelet method, and its pointwise convergence rate is studied in local Hölder space. We define the linear wavelet estimator as
$$\hat{r}_n^{\,lin}(x) = \sum_{k \in \Lambda_{j_*}} \hat{\alpha}_{j_*,k}\,\phi_{j_*,k}(x), \qquad (2)$$
where $\Lambda_{j_*} = \{k \in \mathbb{Z} : \mathrm{supp}(\phi_{j_*,k}) \cap [0,1] \neq \emptyset\}$,
$$\hat{\alpha}_{j_*,k} = \frac{1}{n}\sum_{i=1}^{n} Y_i^2\,\phi_{j_*,k}(X_i) - v_{j_*,k}, \qquad (3)$$
$$v_{j_*,k} = \begin{cases} E[V_1^2]\;2^{-j_*/2}, & \text{under A5}, \\[2pt] \displaystyle\int_0^1 g^2(x)\,\phi_{j_*,k}(x)\,dx, & \text{under A6}. \end{cases}$$
According to the definition of $v_{j_*,k}$, the structure of the linear wavelet estimator clearly depends on which of the mutually exclusive conditions A5 and A6 holds. Some lemmas needed in this section, together with their proofs, are given below.
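A minimal implementation sketch of the linear estimator (2) under the assumption set H2 (so that $v_{j_*,k}$ is computed from the known $g$ by quadrature) is given below; the function name, the quadrature grid and the boundary handling are our own illustrative choices, not prescriptions from the paper.

```python
import numpy as np
import pywt

def linear_wavelet_estimator(X, Y, g, j_star, x0, wavelet='db4'):
    """Sketch of the linear estimator (2) under H2:
    r_hat(x0) = sum_k alpha_hat_{j*,k} phi_{j*,k}(x0), with
    alpha_hat_{j*,k} = mean(Y_i^2 phi_{j*,k}(X_i)) - v_{j*,k} and
    v_{j*,k} = int_0^1 g(x)^2 phi_{j*,k}(x) dx (assumption A6).
    Boundary effects on [0, 1] are ignored in this illustration."""
    wav = pywt.Wavelet(wavelet)
    phi_tab, _, grid = wav.wavefun(level=10)

    def phi_jk(t, k):
        return 2 ** (j_star / 2) * np.interp(
            2 ** j_star * np.asarray(t) - k, grid, phi_tab, left=0.0, right=0.0)

    xx = np.linspace(0.0, 1.0, 4001)     # quadrature grid for v_{j*,k}
    dx = xx[1] - xx[0]

    r_hat = 0.0
    # k such that supp(phi_{j*,k}) can meet [0, 1] (db4 support is [0, 7]).
    for k in range(-7, 2 ** j_star + 1):
        v_jk = np.sum(g(xx) ** 2 * phi_jk(xx, k)) * dx
        alpha_hat = np.mean(Y ** 2 * phi_jk(X, k)) - v_jk
        r_hat += alpha_hat * phi_jk(x0, k)
    return r_hat

# Usage with the simulated data above, taking 2^{j*} ~ n^{1/(2s+1)} with s = 1:
# j_star = round(np.log2(len(X)) / 3)
# print(linear_wavelet_estimator(X, Y, g, j_star, 0.5))
```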
Lemma 2.
For model (1), if H1 or H2 holds, then
$$E[\hat{\alpha}_{j_*,k}] = \alpha_{j_*,k}. \qquad (5)$$
Proof. 
According to the definition of $\hat{\alpha}_{j_*,k}$ and the identity $Y_1^2 = r(X_1)U_1^2 + 2f(X_1)U_1V_1 + V_1^2$,
$$E[\hat{\alpha}_{j_*,k}] = E\Big[\frac{1}{n}\sum_{i=1}^{n} Y_i^2\,\phi_{j_*,k}(X_i)\Big] - v_{j_*,k} = E\big[Y_1^2\,\phi_{j_*,k}(X_1)\big] - v_{j_*,k} = E\big[r(X_1)U_1^2\,\phi_{j_*,k}(X_1)\big] + 2E\big[f(X_1)U_1V_1\,\phi_{j_*,k}(X_1)\big] + E\big[V_1^2\,\phi_{j_*,k}(X_1)\big] - v_{j_*,k}.$$
Since $U_i$ is independent of $X_i$ and of $V_i$,
$$E\big[f(X_1)U_1V_1\,\phi_{j_*,k}(X_1)\big] = E[U_1]\,E\big[f(X_1)V_1\,\phi_{j_*,k}(X_1)\big].$$
In addition, condition A3 implies that $E[U_1] = 0$. Then one gets
$$E\big[f(X_1)U_1V_1\,\phi_{j_*,k}(X_1)\big] = 0.$$
It follows from A5, A2 and A4 that
$$E\big[V_1^2\,\phi_{j_*,k}(X_1)\big] = E[V_1^2]\,E\big[\phi_{j_*,k}(X_1)\big] = E[V_1^2]\int_0^1 \phi_{j_*,k}(x)\,dx = E[V_1^2]\;2^{-j_*/2} = v_{j_*,k}.$$
On the other hand, under condition A6 we obtain
$$E\big[V_1^2\,\phi_{j_*,k}(X_1)\big] = \int_0^1 g^2(x)\,\phi_{j_*,k}(x)\,dx = v_{j_*,k}.$$
Finally, according to assumptions A3 and A2,
$$E[\hat{\alpha}_{j_*,k}] = E[U_1^2]\,E\big[r(X_1)\,\phi_{j_*,k}(X_1)\big] = \int_0^1 r(x)\,\phi_{j_*,k}(x)\,dx = \alpha_{j_*,k}. \qquad \square$$
In order to estimate $E\big|\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big|^p$, we need the following Rosenthal inequality.
Rosenthal's inequality. Let $X_1, \ldots, X_n$ be independent random variables such that $E[X_i] = 0$ and $|X_i| \le M$ ($i = 1, 2, \ldots, n$). Then
(i) $E\big|\sum_{i=1}^{n} X_i\big|^p \lesssim M^{p-2}\sum_{i=1}^{n} E[X_i^2] + \big(\sum_{i=1}^{n} E[X_i^2]\big)^{p/2}$, for $p > 2$;
(ii) $E\big|\sum_{i=1}^{n} X_i\big|^p \lesssim \big(\sum_{i=1}^{n} E[X_i^2]\big)^{p/2}$, for $0 < p \le 2$.
Lemma 3.
Let $\hat{\alpha}_{j_*,k}$ be defined by (3). If H1 or H2 holds and $2^{j_*} \le n$, then for $1 \le p < \infty$,
$$E\big|\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big|^p \lesssim n^{-p/2}.$$
Proof. 
By (5) and the definition of $\hat{\alpha}_{j_*,k}$,
$$\big|\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big| = \Big|\frac{1}{n}\sum_{i=1}^{n} Y_i^2\,\phi_{j_*,k}(X_i) - v_{j_*,k} - E\Big[\frac{1}{n}\sum_{i=1}^{n} Y_i^2\,\phi_{j_*,k}(X_i) - v_{j_*,k}\Big]\Big| = \Big|\frac{1}{n}\sum_{i=1}^{n}\big(Y_i^2\,\phi_{j_*,k}(X_i) - E\big[Y_i^2\,\phi_{j_*,k}(X_i)\big]\big)\Big| = \Big|\frac{1}{n}\sum_{i=1}^{n} Z_i\Big| \qquad (7)$$
with $Z_i := Y_i^2\,\phi_{j_*,k}(X_i) - E\big[Y_i^2\,\phi_{j_*,k}(X_i)\big]$. It is clear that $E[Z_i] = 0$. Using the definition of $Z_i$ and A1, there exists a constant $c > 0$ such that
$$|Z_i| = \big|Y_i^2\,\phi_{j_*,k}(X_i) - E\big[Y_i^2\,\phi_{j_*,k}(X_i)\big]\big| \le \big|Y_i^2\,\phi_{j_*,k}(X_i)\big| + \big|E\big[Y_i^2\,\phi_{j_*,k}(X_i)\big]\big| \le c\,2^{j_*/2} \lesssim 2^{j_*/2}.$$
When $p > 2$, according to Rosenthal's inequality,
$$E\Big|\sum_{i=1}^{n} Z_i\Big|^p \lesssim M^{p-2}\sum_{i=1}^{n} E[Z_i^2] + \Big(\sum_{i=1}^{n} E[Z_i^2]\Big)^{p/2} \lesssim \big(2^{j_*/2}\big)^{p-2}\sum_{i=1}^{n} E[Z_i^2] + \Big(\sum_{i=1}^{n} E[Z_i^2]\Big)^{p/2}.$$
Note that $E[Z_i^2] = \mathrm{Var}\big[Y_i^2\,\phi_{j_*,k}(X_i)\big] \le E\big[Y_i^4\,\phi_{j_*,k}^2(X_i)\big]$. Furthermore, it follows from A1 and the properties of $\phi_{j_*,k}$ that
$$E[Z_i^2] \le E\big[Y_i^4\,\phi_{j_*,k}^2(X_i)\big] \lesssim 1. \qquad (8)$$
Then it can easily be seen that
$$\Big(\sum_{i=1}^{n} E[Z_i^2]\Big)^{p/2} \lesssim n^{p/2}. \qquad (9)$$
By (8) and (9), we obtain
$$E\Big|\sum_{i=1}^{n} Z_i\Big|^p \lesssim \big(2^{j_*/2}\big)^{p-2}\,n + n^{p/2}. \qquad (10)$$
When $1 \le p < 2$, Rosenthal's inequality (ii) gives
$$E\Big|\sum_{i=1}^{n} Z_i\Big|^p \lesssim \Big(\sum_{i=1}^{n} E[Z_i^2]\Big)^{p/2}.$$
Hence,
$$E\Big|\sum_{i=1}^{n} Z_i\Big|^p \lesssim n^{p/2}. \qquad (11)$$
It follows from (7), (10) and (11) that
$$E\big|\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big|^p = E\Big|\frac{1}{n}\sum_{i=1}^{n} Z_i\Big|^p = \frac{1}{n^p}\,E\Big|\sum_{i=1}^{n} Z_i\Big|^p.$$
Hence,
$$E\big|\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big|^p \lesssim \begin{cases} \dfrac{1}{n^p}\Big[\big(2^{j_*/2}\big)^{p-2}\,n + n^{p/2}\Big], & p \ge 2, \\[4pt] n^{-p/2}, & 1 \le p < 2. \end{cases}$$
This together with $2^{j_*} \le n$ implies that
$$E\big[\big|\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big|^p\big] \lesssim n^{-p/2}. \qquad \square$$
We can now establish the convergence rate of the linear wavelet estimator.
Theorem 1.
Let $r \in H^{s}(\Omega_{x_0})$ with $s > 0$. Then for each $1 \le p < \infty$, the linear wavelet estimator $\hat{r}_n^{\,lin}(x)$ defined in (2) with $2^{j_*} \sim n^{1/(2s+1)}$ satisfies
$$\sup_{r \in H^{s}(\Omega_{x_0})} \Big(E\big|\hat{r}_n^{\,lin}(x_0) - r(x_0)\big|^p\Big)^{1/p} \lesssim n^{-s/(2s+1)}.$$
Remark 1.
Note that $n^{-s/(2s+1)}$ is the optimal pointwise convergence rate for nonparametric functional estimation (Brown and Low [16]). The above result therefore shows that the linear wavelet estimator attains the optimal convergence rate.
Proof. 
The triangle inequality gives
$$\Big(E\big|\hat{r}_n^{\,lin}(x_0) - r(x_0)\big|^p\Big)^{1/p} \le \Big(E\big|\hat{r}_n^{\,lin}(x_0) - P_{j_*}r(x_0)\big|^p\Big)^{1/p} + \big|P_{j_*}r(x_0) - r(x_0)\big|. \qquad (13)$$
  • The bias term $\big|P_{j_*}r(x_0) - r(x_0)\big|$. According to Lemma 1,
$$\big|P_{j_*}r(x_0) - r(x_0)\big| \lesssim 2^{-j_{*}s}. \qquad (14)$$
  • The stochastic term $\big(E\big|\hat{r}_n^{\,lin}(x_0) - P_{j_*}r(x_0)\big|^p\big)^{1/p}$. Note that
$$E\big|\hat{r}_n^{\,lin}(x_0) - P_{j_*}r(x_0)\big|^p = E\Big|\sum_{k \in \Lambda_{j_*}}\big(\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big)\phi_{j_*,k}(x_0)\Big|^p \le E\Big|\sum_{k \in \Lambda_{j_*}}\big|\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big|\,\big|\phi_{j_*,k}(x_0)\big|^{1/p}\,\big|\phi_{j_*,k}(x_0)\big|^{1/p'}\Big|^p$$
with $1/p + 1/p' = 1$. According to the Hölder inequality, Lemma 3 and $\sum_{k \in \Lambda_{j_*}} |\phi_{j_*,k}(x_0)| \lesssim 2^{j_*/2}$, the above inequality reduces to
$$E\big|\hat{r}_n^{\,lin}(x_0) - P_{j_*}r(x_0)\big|^p \le E\Big[\sum_{k \in \Lambda_{j_*}}\big|\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big|^p\,\big|\phi_{j_*,k}(x_0)\big|\Big]\Big(\sum_{k \in \Lambda_{j_*}}\big|\phi_{j_*,k}(x_0)\big|\Big)^{p/p'} \lesssim \sum_{k \in \Lambda_{j_*}} E\big|\hat{\alpha}_{j_*,k} - \alpha_{j_*,k}\big|^p\,\big|\phi_{j_*,k}(x_0)\big|\;2^{\frac{j_*p}{2p'}} \lesssim n^{-p/2}\,2^{\frac{j_*}{2}\left(1 + \frac{p}{p'}\right)} = \Big(\frac{2^{j_*}}{n}\Big)^{p/2}. \qquad (15)$$
Combining (13), (14) and (15), one has
$$\Big(E\big|\hat{r}_n^{\,lin}(x_0) - r(x_0)\big|^p\Big)^{1/p} \lesssim 2^{-j_{*}s} + \Big(\frac{2^{j_*}}{n}\Big)^{1/2}.$$
Furthermore, with the given choice $2^{j_*} \sim n^{1/(2s+1)}$,
$$\sup_{r \in H^{s}(\Omega_{x_0})} \Big(E\big|\hat{r}_n^{\,lin}(x_0) - r(x_0)\big|^p\Big)^{1/p} \lesssim n^{-s/(2s+1)}. \qquad \square$$

4. Nonlinear Wavelet Estimator

According to the definition of the linear wavelet estimator, its scale parameter $j_*$ depends on the smoothness parameter $s$ of the unknown function $r(x)$, so the linear estimator is not adaptive. In this section, we address this problem by constructing a nonlinear wavelet estimator via the hard thresholding method. We define the nonlinear wavelet estimator as
$$\hat{r}_n^{\,non}(x) = \sum_{k \in \Lambda_{j_*}} \hat{\alpha}_{j_*,k}\,\phi_{j_*,k}(x) + \sum_{j=j_*}^{j_1}\sum_{k \in \Lambda_j} \hat{\beta}_{j,k}\,I_{\{|\hat{\beta}_{j,k}| \ge \kappa t_n\}}\,\psi_{j,k}(x), \quad x \in [0, 1], \qquad (16)$$
where $\hat{\alpha}_{j_*,k}$ is defined by (3),
$$\hat{\beta}_{j,k} := \frac{1}{n}\sum_{i=1}^{n} Y_i^2\,\psi_{j,k}(X_i) - w_{j,k}, \qquad (17)$$
$$w_{j,k} := \begin{cases} 0, & \text{under A5}, \\[2pt] \displaystyle\int_0^1 g^2(x)\,\psi_{j,k}(x)\,dx, & \text{under A6}, \end{cases}$$
$t_n = \sqrt{\ln n / n}$, and $I_G$ denotes the indicator function of an event $G$. The positive integers $j_*$ and $j_1$ and the constant $\kappa$ will be specified in Theorem 2.
Remark 2.
Compared with the structure of $\hat{\beta}_{j,k}$ in Chesneau et al. [7], the definition of $\hat{\beta}_{j,k}$ in this paper does not need a thresholding algorithm inside the coefficient estimator itself. In other words, this paper reduces the complexity of the nonlinear wavelet estimator.
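The following sketch implements the detail (thresholded) part of the nonlinear estimator (16) under H2; the constant kappa and all implementation details are illustrative assumptions rather than tuned values from the paper.

```python
import numpy as np
import pywt

def hard_threshold_detail(X, Y, g, j_star, j1, x0, kappa=2.0, wavelet='db4'):
    """Detail part of (16): sum over j_star <= j <= j1 of the empirical
    coefficients beta_hat_{j,k} = mean(Y_i^2 psi_{j,k}(X_i)) - w_{j,k},
    kept only when |beta_hat_{j,k}| >= kappa * t_n, t_n = sqrt(ln n / n).
    Under A5 one would take w_{j,k} = 0 instead of the quadrature below."""
    n = len(X)
    t_n = np.sqrt(np.log(n) / n)
    wav = pywt.Wavelet(wavelet)
    _, psi_tab, grid = wav.wavefun(level=10)

    def psi_jk(t, j, k):
        return 2 ** (j / 2) * np.interp(
            2 ** j * np.asarray(t) - k, grid, psi_tab, left=0.0, right=0.0)

    xx = np.linspace(0.0, 1.0, 4001)
    dx = xx[1] - xx[0]

    detail = 0.0
    for j in range(j_star, j1 + 1):
        for k in range(-7, 2 ** j + 1):
            w_jk = np.sum(g(xx) ** 2 * psi_jk(xx, j, k)) * dx  # A6 case
            beta_hat = np.mean(Y ** 2 * psi_jk(X, j, k)) - w_jk
            if abs(beta_hat) >= kappa * t_n:                   # hard threshold
                detail += beta_hat * psi_jk(x0, j, k)
    return detail

# r_hat_non(x0) = linear part at level j_star + hard_threshold_detail(...),
# with 2^{j1} ~ n / ln n as in Theorem 2.
```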
Lemma 4.
For model (1), if H1 or H2 holds, then
$$E[\hat{\beta}_{j,k}] = \beta_{j,k}.$$
Lemma 5.
Let $\hat{\beta}_{j,k}$ be defined by (17). If H1 or H2 holds and $2^{j} \le n$, then for $1 \le p < \infty$,
$$E\big|\hat{\beta}_{j,k} - \beta_{j,k}\big|^p \lesssim n^{-p/2}.$$
The proofs of Lemmas 4 and 5 are similar to those of Lemmas 2 and 3, respectively, and are therefore omitted. For nonlinear wavelet estimation, Bernstein's inequality plays a crucial role.
Bernstein's inequality. Let $X_1, \ldots, X_n$ be independent random variables such that $E[X_i] = 0$, $|X_i| \le M$ and $E[X_i^2] = \sigma^2$. Then for each $v > 0$,
$$P\Big(\Big|\frac{1}{n}\sum_{i=1}^{n} X_i\Big| \ge v\Big) \le 2\exp\Big(-\frac{n v^2}{2\big(\sigma^2 + v M / 3\big)}\Big).$$
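A quick Monte Carlo sanity check of Bernstein's inequality can be run in a few lines; the Rademacher variables (so $M = \sigma^2 = 1$) and the values of $n$ and $v$ are arbitrary illustrative choices.

```python
import numpy as np

# Empirical tail of |mean of n centered, bounded variables| versus the bound
# 2 exp(-n v^2 / (2 (sigma^2 + v M / 3))) with M = sigma = 1 (Rademacher).
rng = np.random.default_rng(1)
n, reps, v = 200, 20_000, 0.2

means = rng.choice([-1.0, 1.0], size=(reps, n)).mean(axis=1)
empirical = np.mean(np.abs(means) >= v)
bound = 2 * np.exp(-n * v ** 2 / (2 * (1.0 + v / 3)))

print(f"P(|mean| >= {v}) ~= {empirical:.4f} <= Bernstein bound {bound:.4f}")
```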
Lemma 6.
Let $\hat{\beta}_{j,k}$ be defined by (17), $t_n = \sqrt{\ln n / n}$ and $2^{j} \le n / \ln n$. If H1 or H2 holds, then for each $w > 0$, there exists a constant $\kappa > 1$ such that
$$P\big(\big|\hat{\beta}_{j,k} - \beta_{j,k}\big| \ge \kappa t_n\big) \lesssim 2^{-wj}.$$
Proof. 
According to the definition of $\hat{\beta}_{j,k}$,
$$\big|\hat{\beta}_{j,k} - \beta_{j,k}\big| = \Big|\frac{1}{n}\sum_{i=1}^{n} Y_i^2\,\psi_{j,k}(X_i) - w_{j,k} - E\Big[\frac{1}{n}\sum_{i=1}^{n} Y_i^2\,\psi_{j,k}(X_i) - w_{j,k}\Big]\Big| = \Big|\frac{1}{n}\sum_{i=1}^{n}\big(Y_i^2\,\psi_{j,k}(X_i) - E\big[Y_i^2\,\psi_{j,k}(X_i)\big]\big)\Big| = \Big|\frac{1}{n}\sum_{i=1}^{n} D_i\Big|$$
with $D_i := Y_i^2\,\psi_{j,k}(X_i) - E\big[Y_i^2\,\psi_{j,k}(X_i)\big]$. Clearly, $E[D_i] = 0$. Furthermore, by A1 and the properties of $\psi_{j,k}$, $E[D_i^2] = \mathrm{Var}[D_i] \le E\big[Y_i^4\,\psi_{j,k}^2(X_i)\big] \lesssim 1$ and $|D_i| \lesssim 2^{j/2}$. Note that
$$\Big\{\big|\hat{\beta}_{j,k} - \beta_{j,k}\big| \ge \kappa t_n\Big\} = \Big\{\Big|\frac{1}{n}\sum_{i=1}^{n} D_i\Big| \ge \kappa t_n\Big\}.$$
Hence,
$$P\big(\big|\hat{\beta}_{j,k} - \beta_{j,k}\big| \ge \kappa t_n\big) = P\Big(\Big|\frac{1}{n}\sum_{i=1}^{n} D_i\Big| \ge \kappa t_n\Big).$$
Using Bernstein's inequality, $t_n = \sqrt{\ln n / n}$ and $2^{j} \le n / \ln n$ (so that $t_n\,2^{j/2} \le 1$),
$$P\Big(\Big|\frac{1}{n}\sum_{i=1}^{n} D_i\Big| \ge \kappa t_n\Big) \lesssim \exp\Big(-\frac{n(\kappa t_n)^2}{2\big(1 + \kappa t_n 2^{j/2}/3\big)}\Big) \lesssim \exp\Big(-\frac{\kappa^2 \ln n}{2(1 + \kappa/3)}\Big).$$
Then, since $\exp\big(-\frac{\kappa^2 \ln n}{2(1+\kappa/3)}\big) = n^{-\kappa^2/(2(1+\kappa/3))}$ and $2^{j} \le n$, one can choose $\kappa > 1$ large enough that
$$P\big(\big|\hat{\beta}_{j,k} - \beta_{j,k}\big| \ge \kappa t_n\big) \le P\Big(\Big|\frac{1}{n}\sum_{i=1}^{n} D_i\Big| \ge \kappa t_n\Big) \lesssim 2^{-wj}. \qquad \square$$
Theorem 2.
Let $r \in H^{s}(\Omega_{x_0})$ with $s > 0$. Then for each $1 \le p < \infty$, the nonlinear wavelet estimator $\hat{r}_n^{\,non}(x)$ defined in (16) with $2^{j_*} \sim n^{1/(2m+1)}$ ($s < m$) and $2^{j_1} \sim n/\ln n$ satisfies
$$\sup_{r \in H^{s}(\Omega_{x_0})} \Big(E\big[\big|\hat{r}_n^{\,non}(x_0) - r(x_0)\big|^p\big]\Big)^{1/p} \lesssim (\ln n)^{1-\frac{1}{p}}\Big(\frac{\ln n}{n}\Big)^{s/(2s+1)}. \qquad (19)$$
Remark 3.
Compared with the linear wavelet estimator, the nonlinear wavelet estimator does not depend on the smoothness parameter of $r(x)$; hence, it is adaptive. More importantly, the nonlinear estimator still achieves the optimal convergence rate up to a $\ln n$ factor.
Proof. 
By the definitions of $\hat{r}_n^{\,lin}(x)$ and $\hat{r}_n^{\,non}(x)$, one has
$$\hat{r}_n^{\,non}(x_0) - r(x_0) = \big[\hat{r}_n^{\,lin}(x_0) - P_{j_*}r(x_0)\big] - \big[r(x_0) - P_{j_1+1}r(x_0)\big] + \sum_{j=j_*}^{j_1}\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k}\,I_{\{|\hat{\beta}_{j,k}| \ge \kappa t_n\}} - \beta_{j,k}\big)\psi_{j,k}(x_0).$$
Hence,
$$\Big(E\big[\big|\hat{r}_n^{\,non}(x_0) - r(x_0)\big|^p\big]\Big)^{1/p} \lesssim T_1 + T_2 + Q,$$
where
$$T_1 = \Big(E\big|\hat{r}_n^{\,lin}(x_0) - P_{j_*}r(x_0)\big|^p\Big)^{1/p}, \quad T_2 = \big|P_{j_1+1}r(x_0) - r(x_0)\big|, \quad Q = \Big(E\Big|\sum_{j=j_*}^{j_1}\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k}\,I_{\{|\hat{\beta}_{j,k}| \ge \kappa t_n\}} - \beta_{j,k}\big)\psi_{j,k}(x_0)\Big|^p\Big)^{1/p}.$$
  • For $T_1$: it follows from (15) and $2^{j_*} \sim n^{1/(2m+1)}$ ($s < m$) that
$$T_1 = \Big(E\big|\hat{r}_n^{\,lin}(x_0) - P_{j_*}r(x_0)\big|^p\Big)^{1/p} \lesssim \Big(\frac{2^{j_*}}{n}\Big)^{1/2} \lesssim n^{-m/(2m+1)} < n^{-s/(2s+1)}. \qquad (20)$$
  • For $T_2$: using Lemma 1 and $2^{j_1} \sim n/\ln n$, one gets
$$T_2 = \big|P_{j_1+1}r(x_0) - r(x_0)\big| \lesssim 2^{-j_1 s} \lesssim \Big(\frac{\ln n}{n}\Big)^{s} < \Big(\frac{\ln n}{n}\Big)^{s/(2s+1)}. \qquad (21)$$
Then (19) will be proven if we can show
$$Q \lesssim (\ln n)^{1-\frac{1}{p}}\Big(\frac{\ln n}{n}\Big)^{s/(2s+1)}.$$
According to the Hölder inequality,
$$Q \le (j_1 - j_* + 1)^{1-\frac{1}{p}}\Big(\sum_{j=j_*}^{j_1} E\Big|\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k}\,I_{\{|\hat{\beta}_{j,k}| \ge \kappa t_n\}} - \beta_{j,k}\big)\psi_{j,k}(x_0)\Big|^p\Big)^{1/p}.$$
It is obvious that
$$\big|\hat{\beta}_{j,k}\,I_{\{|\hat{\beta}_{j,k}| \ge \kappa t_n\}} - \beta_{j,k}\big| \le \big|\hat{\beta}_{j,k} - \beta_{j,k}\big|\Big(I_{\{|\hat{\beta}_{j,k}| \ge \kappa t_n,\;|\beta_{j,k}| < \frac{\kappa t_n}{2}\}} + I_{\{|\hat{\beta}_{j,k}| \ge \kappa t_n,\;|\beta_{j,k}| \ge \frac{\kappa t_n}{2}\}}\Big) + \big|\beta_{j,k}\big|\Big(I_{\{|\hat{\beta}_{j,k}| < \kappa t_n,\;|\beta_{j,k}| > 2\kappa t_n\}} + I_{\{|\hat{\beta}_{j,k}| < \kappa t_n,\;|\beta_{j,k}| \le 2\kappa t_n\}}\Big).$$
Moreover,
$$\Big\{|\hat{\beta}_{j,k}| \ge \kappa t_n,\ |\beta_{j,k}| < \tfrac{\kappa t_n}{2}\Big\} \subseteq \Big\{\big|\hat{\beta}_{j,k} - \beta_{j,k}\big| > \tfrac{\kappa t_n}{2}\Big\}, \qquad \Big\{|\hat{\beta}_{j,k}| < \kappa t_n,\ |\beta_{j,k}| > 2\kappa t_n\Big\} \subseteq \Big\{\big|\hat{\beta}_{j,k} - \beta_{j,k}\big| > \tfrac{\kappa t_n}{2}\Big\},$$
and on the latter set,
$$\big|\hat{\beta}_{j,k} - \beta_{j,k}\big| \ge \big|\beta_{j,k}\big| - \big|\hat{\beta}_{j,k}\big| \ge \frac{\kappa t_n}{2}.$$
Hence, one can obtain that
$$Q \lesssim (j_1 - j_* + 1)^{1-\frac{1}{p}}\,(Q_1 + Q_2 + Q_3),$$
where
$$Q_1 = \Big(\sum_{j=j_*}^{j_1} E\Big|\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k} - \beta_{j,k}\big)\,I_{\{|\hat{\beta}_{j,k} - \beta_{j,k}| > \frac{\kappa t_n}{2}\}}\,\psi_{j,k}(x_0)\Big|^p\Big)^{1/p},$$
$$Q_2 = \Big(\sum_{j=j_*}^{j_1} E\Big|\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k} - \beta_{j,k}\big)\,I_{\{|\beta_{j,k}| \ge \frac{\kappa t_n}{2}\}}\,\psi_{j,k}(x_0)\Big|^p\Big)^{1/p},$$
$$Q_3 = \sum_{j=j_*}^{j_1}\Big|\sum_{k \in \Lambda_j} \beta_{j,k}\,I_{\{|\beta_{j,k}| \le 2\kappa t_n\}}\,\psi_{j,k}(x_0)\Big|.$$
  • For $Q_1$: by the Hölder inequality ($1/p + 1/p' = 1$) and $\sum_{k} |\psi_{j,k}(x_0)| \lesssim 2^{j/2}$,
$$E\Big|\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k} - \beta_{j,k}\big)\,I_{\{|\hat{\beta}_{j,k} - \beta_{j,k}| > \frac{\kappa t_n}{2}\}}\,\psi_{j,k}(x_0)\Big|^p \le E\Big[\sum_{k \in \Lambda_j}\big|\hat{\beta}_{j,k} - \beta_{j,k}\big|^p\,I_{\{|\hat{\beta}_{j,k} - \beta_{j,k}| > \frac{\kappa t_n}{2}\}}\,\big|\psi_{j,k}(x_0)\big|\Big]\Big(\sum_{k}\big|\psi_{j,k}(x_0)\big|\Big)^{p/p'} \lesssim E\Big[\sum_{k \in \Lambda_j}\big|\hat{\beta}_{j,k} - \beta_{j,k}\big|^p\,I_{\{|\hat{\beta}_{j,k} - \beta_{j,k}| > \frac{\kappa t_n}{2}\}}\,\big|\psi_{j,k}(x_0)\big|\Big]\,2^{\frac{jp}{2p'}}. \qquad (22)$$
Furthermore, using the Cauchy–Schwarz inequality and Lemmas 5 and 6, one has
$$E\Big[\big|\hat{\beta}_{j,k} - \beta_{j,k}\big|^p\,I_{\{|\hat{\beta}_{j,k} - \beta_{j,k}| > \frac{\kappa t_n}{2}\}}\Big] \le \Big(E\big|\hat{\beta}_{j,k} - \beta_{j,k}\big|^{2p}\Big)^{1/2}\Big(E\Big[I_{\{|\hat{\beta}_{j,k} - \beta_{j,k}| > \frac{\kappa t_n}{2}\}}\Big]\Big)^{1/2} \lesssim n^{-p/2}\,2^{-\frac{wj}{2}}.$$
This with (22) yields
$$E\Big|\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k} - \beta_{j,k}\big)\,I_{\{|\hat{\beta}_{j,k} - \beta_{j,k}| > \frac{\kappa t_n}{2}\}}\,\psi_{j,k}(x_0)\Big|^p \lesssim 2^{\frac{jp}{2}}\,E\Big[\big|\hat{\beta}_{j,k} - \beta_{j,k}\big|^p\,I_{\{|\hat{\beta}_{j,k} - \beta_{j,k}| > \frac{\kappa t_n}{2}\}}\Big] \lesssim n^{-p/2}\,2^{-\frac{wj}{2}}\,2^{\frac{jp}{2}}.$$
Hence, choosing $\kappa$ large enough that $w > p$ in Lemma 6,
$$Q_1 \lesssim \Big(\sum_{j=j_*}^{j_1} 2^{\frac{jp}{2}}\,n^{-p/2}\,2^{-\frac{wj}{2}}\Big)^{1/p} = n^{-1/2}\Big(\sum_{j=j_*}^{j_1} 2^{\frac{j(p-w)}{2}}\Big)^{1/p} \lesssim n^{-1/2}\,2^{\frac{j_*(p-w)}{2p}} \le \Big(\frac{2^{j_*}}{n}\Big)^{1/2}.$$
This with the choice $2^{j_*} \sim n^{1/(2m+1)}$ ($s < m$) shows that
$$Q_1 \lesssim n^{-m/(2m+1)} \le n^{-s/(2s+1)}. \qquad (25)$$
  • For $Q_2$: let us first define $2^{j'} \sim \big(\frac{n}{\ln n}\big)^{1/(2s+1)}$. Clearly, $2^{j_*} \sim n^{1/(2m+1)} \le 2^{j'} \sim \big(\frac{n}{\ln n}\big)^{1/(2s+1)} \le 2^{j_1} \sim \frac{n}{\ln n}$. Note that
$$Q_2 \le \Big(\sum_{j=j_*}^{j'} E\Big|\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k} - \beta_{j,k}\big)\psi_{j,k}(x_0)\Big|^p\Big)^{1/p} + \Big(\sum_{j=j'+1}^{j_1} E\Big|\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k} - \beta_{j,k}\big)\frac{|\beta_{j,k}|}{t_n}\,\psi_{j,k}(x_0)\Big|^p\Big)^{1/p},$$
where the second term uses the bound $I_{\{|\beta_{j,k}| \ge \frac{\kappa t_n}{2}\}} \lesssim |\beta_{j,k}|/t_n$.
Similar to the argument for (15), one gets
$$\Big(\sum_{j=j_*}^{j'} E\Big|\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k} - \beta_{j,k}\big)\psi_{j,k}(x_0)\Big|^p\Big)^{1/p} \lesssim \Big(\sum_{j=j_*}^{j'} n^{-p/2}\,2^{\frac{jp}{2}}\Big)^{1/p} \lesssim \Big(\frac{2^{j'}}{n}\Big)^{1/2}. \qquad (26)$$
On the other hand, by the Hölder inequality ($1/p + 1/p' = 1$), Lemma 5 and $\sum_{k \in \Lambda_j} |\beta_{j,k}\,\psi_{j,k}(x_0)| \lesssim 2^{-js}$ (Lemma 1(i)),
$$E\Big|\sum_{k \in \Lambda_j}\big(\hat{\beta}_{j,k} - \beta_{j,k}\big)\frac{|\beta_{j,k}|}{t_n}\,\psi_{j,k}(x_0)\Big|^p \le E\Big[\sum_{k \in \Lambda_j}\big|\hat{\beta}_{j,k} - \beta_{j,k}\big|^p\,\frac{|\beta_{j,k}|}{t_n}\,\big|\psi_{j,k}(x_0)\big|\Big]\Big(\sum_{k \in \Lambda_j}\frac{|\beta_{j,k}|}{t_n}\,\big|\psi_{j,k}(x_0)\big|\Big)^{p/p'} \lesssim n^{-p/2}\,t_n^{-p}\,2^{-jps} = (\ln n)^{-\frac{p}{2}}\,2^{-jps}.$$
Hence,
$$\Big(\sum_{j=j'+1}^{j_1} (\ln n)^{-\frac{p}{2}}\,2^{-jps}\Big)^{1/p} \lesssim (\ln n)^{-\frac{1}{2}}\,2^{-j's}. \qquad (27)$$
Combining (26), (27) and $2^{j'} \sim \big(\frac{n}{\ln n}\big)^{1/(2s+1)}$, one gets
$$Q_2 \lesssim \Big(\frac{2^{j'}}{n}\Big)^{1/2} + (\ln n)^{-\frac{1}{2}}\,2^{-j's} \lesssim \Big(\frac{\ln n}{n}\Big)^{s/(2s+1)}. \qquad (28)$$
  • For $Q_3$: note that
$$Q_3 = \Big(\sum_{j=j_*}^{j'} + \sum_{j=j'+1}^{j_1}\Big)\Big|\sum_{k \in \Lambda_j} \beta_{j,k}\,I_{\{|\beta_{j,k}| \le 2\kappa t_n\}}\,\psi_{j,k}(x_0)\Big| =: Q_{31} + Q_{32}.$$
It is easy to show that
$$Q_{31} = \sum_{j=j_*}^{j'}\Big|\sum_{k \in \Lambda_j} \beta_{j,k}\,I_{\{|\beta_{j,k}| \le 2\kappa t_n\}}\,\psi_{j,k}(x_0)\Big| \le \sum_{j=j_*}^{j'}\sum_{k \in \Lambda_j} 2\kappa t_n\,\big|\psi_{j,k}(x_0)\big| \lesssim \sum_{j=j_*}^{j'} 2^{\frac{j}{2}}\,t_n \lesssim 2^{\frac{j'}{2}}\sqrt{\frac{\ln n}{n}}. \qquad (29)$$
In addition, by Lemma 1(i),
$$Q_{32} = \sum_{j=j'+1}^{j_1}\Big|\sum_{k \in \Lambda_j} \beta_{j,k}\,I_{\{|\beta_{j,k}| \le 2\kappa t_n\}}\,\psi_{j,k}(x_0)\Big| \le \sum_{j=j'+1}^{j_1}\sum_{k \in \Lambda_j}\big|\beta_{j,k}\,\psi_{j,k}(x_0)\big| \lesssim \sum_{j=j'+1}^{j_1} 2^{-js} \lesssim 2^{-j's}. \qquad (30)$$
Then, according to (29), (30) and $2^{j'} \sim \big(\frac{n}{\ln n}\big)^{1/(2s+1)}$, one can obtain
$$Q_3 \lesssim 2^{\frac{j'}{2}}\sqrt{\frac{\ln n}{n}} + 2^{-j's} \lesssim \Big(\frac{\ln n}{n}\Big)^{s/(2s+1)}.$$
Furthermore, together with (25) and (28) and $j_1 - j_* + 1 \lesssim \ln n$, this yields
$$Q \lesssim (\ln n)^{1-\frac{1}{p}}\Big[n^{-s/(2s+1)} + \Big(\frac{\ln n}{n}\Big)^{s/(2s+1)} + \Big(\frac{\ln n}{n}\Big)^{s/(2s+1)}\Big] \lesssim (\ln n)^{1-\frac{1}{p}}\Big(\frac{\ln n}{n}\Big)^{s/(2s+1)}. \qquad (32)$$
Finally, it follows from (20), (21) and (32) that
$$\sup_{r \in H^{s}(\Omega_{x_0})} \Big(E\big|\hat{r}_n^{\,non}(x_0) - r(x_0)\big|^p\Big)^{1/p} \lesssim (\ln n)^{1-\frac{1}{p}}\Big(\frac{\ln n}{n}\Big)^{s/(2s+1)},$$
which completes the proof of Theorem 2. □

5. Conclusions

This paper studies the pointwise estimation of an unknown function in a regression model with both multiplicative and additive noise. Linear and nonlinear wavelet estimators are constructed under two different sets of assumptions, and the estimators take different forms under the different conditions. The convergence rates of the two wavelet estimators under the pointwise risk are established in Theorems 1 and 2. It should be pointed out that both the linear and the nonlinear wavelet estimators attain the optimal pointwise convergence rate of nonparametric estimation, the nonlinear one up to a logarithmic factor; more importantly, the nonlinear wavelet estimator is adaptive. In other words, the asymptotic theoretical performance of the estimators is clear. Numerical experiments, however, remain a difficult problem requiring further investigation and new techniques; we leave this for future work.

Author Contributions

Writing—original draft, J.K. and Q.H.; Writing—review and editing, H.G. All authors have read and agreed to the published version of the manuscript.

Funding

Junke Kou is supported by the National Natural Science Foundation of China (12001133) and Guangxi Natural Science Foundation (2019GXNSFFA245012). Huijun Guo is supported by the National Natural Science Foundation of China (12001132), and Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hart, J.D. Kernel regression estimation with time series errors. J. R. Stat. Soc. Ser. B 1991, 53, 173–187. [Google Scholar]
  2. Kerkyacharian, G.; Picard, D. Regression in random design and warped wavelets. Bernoulli 2004, 10, 1053–1105. [Google Scholar] [CrossRef]
  3. Chesneau, C. Regression with random design: A minimax study. Stat. Probab. Lett. 2007, 77, 40–53. [Google Scholar] [CrossRef]
  4. Reiß, M. Asymptotic equivalence for nonparametric regression with multivariate and random design. Ann. Stat. 2008, 36, 1957–1982. [Google Scholar] [CrossRef]
  5. Yuan, M.; Zhou, D.X. Minimax optimal rates of estimation in high dimensional additive models. Ann. Stat. 2016, 44, 2564–2593. [Google Scholar] [CrossRef]
  6. Wang, L.; Politis, D.N. Asymptotic validity of bootstrap confidence intervals in nonparametric regression without an additive model. Electron. J. Stat. 2021, 15, 392–426. [Google Scholar] [CrossRef]
  7. Chesneau, C.; Kolei, S.E.; Kou, J.K.; Navarro, F. Nonparametric estimation in a regression model with additive and multiplicative noise. J. Comput. Appl. Math. 2020, 380, 112971. [Google Scholar] [CrossRef]
  8. Cai, T.T.; Wang, L. Adaptive variance function estimation in heteroscedastic nonparametric regression. Ann. Stat. 2008, 36, 2025–2054. [Google Scholar] [CrossRef]
  9. Alharbi, Y.F.; Patil, P.N. Error variance function estimation in nonparametric regression models. Commun. Stat. Simul. Comput. 2018, 47, 1479–1491. [Google Scholar] [CrossRef]
  10. Huang, P.; Pi, Y.; Progri, I. GPS signal detection under multiplicative and additive noise. J. Navig. 2013, 66, 479–500. [Google Scholar] [CrossRef]
  11. Kravchenko, V.F.; Ponomaryov, V.I.; Pustovoit, V.I.; Palacios-Enriquez, A. 3D Filtering of images corrupted by additive-multiplicative noise. Dokl. Math. 2020, 102, 414–417. [Google Scholar] [CrossRef]
  12. Cui, G. Application of addition and multiplication noise model parameter estimation in INSAR image Processing. Math. Probl. Eng. 2022, 2022, 3164513. [Google Scholar] [CrossRef]
  13. Meyer, Y. Wavelets and Operators; Cambridge University Press: Cambridge, UK, 1992. [Google Scholar]
  14. Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
  15. Liu, Y.M.; Wu, C. Point-wise estimation for anisotropic densities. J. Multivar. Anal. 2019, 171, 112–125. [Google Scholar] [CrossRef]
  16. Brown, L.D.; Low, M.G. A constrained risk inequality with applications to nonparametric functional estimation. Ann. Stat. 1996, 24, 2524–2535. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
