Article

Quantile-Wavelet Nonparametric Estimates for Time-Varying Coefficient Models

School of Statistics and Data Science, Nanjing Audit University, Nanjing 211085, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2321; https://doi.org/10.3390/math10132321
Submission received: 22 May 2022 / Revised: 22 June 2022 / Accepted: 29 June 2022 / Published: 2 July 2022
(This article belongs to the Special Issue Nonparametric Statistical Methods and Their Applications)

Abstract:
The paper considers quantile-wavelet estimation of time-varying coefficients by embedding a wavelet kernel into quantile regression. Our methodology is quite general in the sense that we do not require the unknown time-varying coefficients to be smooth curves of a common degree or the errors to be independently distributed. Quantile-wavelet estimation is robust to outliers and heavy-tailed data. The model is a dynamic time-varying model for nonlinear time series. A strong Bahadur order $O\big((2^m/n)^{3/4}(\log n)^{1/2}\big)$ for the estimation is obtained under mild conditions. As applications, the rate of uniform strong convergence and the asymptotic normality are derived.

1. Introduction

Trending time-varying time series models have received considerable attention during the last three decades because they capture changes in the dynamic structure of a series, and they have been widely applied in economics and finance [1,2]. When the time span of interest covers different economic periods, the parameters of the corresponding statistical models should be allowed to change with time; see [3,4,5,6,7], among others. Varying coefficient models are flexible models for describing the dynamic structure of data and have been extensively studied in the mean regression setting; see, for example, [8,9,10,11,12] and the references therein. However, important features of the joint distribution of the response and the covariates cannot be well captured by mean regression.
In this paper, we consider time-varying coefficient quantile regression models: for a prespecified quantile level $\tau$, the conditional $\tau$th quantile of $Y_i$ given $X_i$ is
$$Q_\tau(Y_i \mid X_i) = \inf\left\{s: F_{Y_i|X_i}(s \mid X_i) > \tau\right\} = X_i^T\beta_\tau(t_i), \quad i=1,\ldots,n, \tag{1}$$
where $t_i = i/n$, the $Y_i$ are responses, $\beta_\tau(\cdot) = (\beta_{1,\tau}(\cdot),\ldots,\beta_{p,\tau}(\cdot))^T$ is a $p$-dimensional vector of unspecified functions defined on $[0,1]$, and $X_i = (X_{i1},\ldots,X_{ip})^T$ are $p$-dimensional explanatory variables. We can write Equation (1) in the form
$$Y_i = X_i^T\beta_\tau(t_i) + \epsilon_{\tau,i}, \quad i=1,\ldots,n, \tag{2}$$
where the errors $\epsilon_{\tau,i}$ satisfy $Q_\tau(\epsilon_{\tau,i} \mid X_i) = 0$ almost surely. Quantile regression has also emerged as an essential statistical tool in many scientific fields; see [13]. To simplify notation, we omit the subscript $\tau$ from $\beta_\tau(\cdot)$ and $\epsilon_{\tau,i}$ in what follows.
To estimate the varying coefficients in the quantile regression model in Equation (2), references [7,14,15] used the kernel method, and references [16,17] considered regression splines. To our knowledge, wavelets have not been considered in quantile regression with varying coefficients. Wavelet techniques can detect and represent localized features, and can also create localized features on synthesis, while being efficient in terms of computational speed and storage; they have received much attention from mathematicians, engineers and statisticians. Wavelets have been introduced into nonparametric regression by, for example, references [18,19,20,21,22]. For wavelet smoothing applied to nonparametric models, reference [18] is a key work that introduces wavelet versions of some classical kernel and orthogonal series estimators and studies their asymptotic properties. Reference [23] provided the asymptotic bias and variance of a wavelet estimator of a regression function under a stochastic mixing process. Reference [24] considered a wavelet estimator of the mean regression function with strong mixing errors and investigated its asymptotic rates of convergence by thresholding the empirical wavelet coefficients. References [25,26] showed Berry–Esseen-type bounds for wavelet estimators in semiparametric models under dependent errors. For varying coefficient models, references [27,28] provided wavelet estimators and studied their convergence rates and asymptotic normality under i.i.d. errors, and reference [29] discussed the asymptotics of wavelet estimators based on censored dependent data. However, all of the above wavelet estimators are based on mean regression via the least-squares method.
In this paper, we propose a quantile-wavelet method for the time-varying coefficient models in Equation (2) with an $\alpha$-mixing error process. The proposed methodology is quite general in the sense that we do not require the coefficients $\beta(\cdot)$ to be smooth curves of a common degree; it does not suffer from the curse of dimensionality; it is robust to outliers or heavy-tailed data; and it yields a dynamic model for nonlinear time series. The Bahadur representation, rate of convergence and asymptotic normality of the quantile-wavelet estimators will be established. Bahadur representation theory seeks to approximate a statistical estimator by a sum of random variables with a higher-order remainder. It has been a major issue in statistical theory since Bahadur's pioneering work on quantiles [30]; see [31], among others. Recently, reference [32] investigated the Bahadur representation of sample quantiles under $\varphi$-mixing sequences; reference [33] gave an M-estimation for time-varying coefficient models with $\alpha$-mixing errors and established a Bahadur representation in probability. In this paper, we establish a Bahadur representation with probability 1 (almost surely) for quantile-wavelet estimates in models (1) and (2).
Throughout, we assume that $\{X_i, \epsilon_i\}$ is a stationary $\alpha$-mixing sequence. Recall that a sequence $\{\zeta_k, k \ge 1\}$ is said to be $\alpha$-mixing (or strong mixing) if the mixing coefficients
$$\alpha(m) = \sup_{k\ge 1}\left\{|P(AB) - P(A)P(B)|: A \in \mathcal{F}_1^k,\; B \in \mathcal{F}_{k+m}^\infty\right\}$$
converge to zero as $m \to \infty$, where $\mathcal{F}_l^k$ denotes the $\sigma$-field generated by $\{\zeta_i, l \le i \le k\}$. We refer to the monographs [34,35] for properties and further mixing conditions.
The rest of this article is organized as follows. In Section 2, we present the wavelet kernel and the quantile-wavelet estimation for model (2). The Bahadur representation of the quantile-wavelet estimators and its applications are given in Section 3. Technical proofs are provided in Section 4. Some simulation studies are conducted in Section 5.

2. Quantile-Wavelet Estimation

In this paper, the time-varying coefficient function is retrieved by a wavelet-based reproducing kernel Hilbert space (RKHS). An RKHS is a Hilbert space in which all point evaluations are bounded linear functionals. Let $\mathcal{H}$ be a Hilbert space of functions on some domain $I$. For each $t \in I$, there exists an element $k_t \in \mathcal{H}$ such that
$$f(t) = \langle k_t, f\rangle, \quad \forall f \in \mathcal{H},$$
where $\langle\cdot,\cdot\rangle$ is the inner product in $\mathcal{H}$. Set $\langle k_t, k_s\rangle = K(t,s)$, which is called the reproducing kernel. Writing $k_t = K(t,\cdot)$, we then have $\langle K(t,\cdot), K(s,\cdot)\rangle = K(t,s)$.
A multiresolution analysis is a sequence of closed subspaces $\{V_m, m \in \mathbb{Z}\}$ of $L^2(\mathbb{R})$ that lie in a containment hierarchy,
$$\cdots \subset V_{-2} \subset V_{-1} \subset V_0 \subset V_1 \subset V_2 \subset \cdots, \tag{3}$$
where $L^2(\mathbb{R})$ is the collection of square-integrable functions on the real line. The hierarchy in Equation (3) is constructed such that (i) the $V$-spaces are self-similar, $f(2^m x) \in V_m$ iff $f(x) \in V_0$, and (ii) there exists a scaling function $\phi \in V_0$ whose integer translates span $V_0 = \{f \in L^2(\mathbb{R}): f(x) = \sum_{k\in\mathbb{Z}} c_k\phi(x-k)\}$ and for which the set $\{\phi(\cdot - k), k \in \mathbb{Z}\}$ is an orthonormal basis. Wavelet analysis requires a description of two related and suitably chosen orthonormal basic functions: the scaling function $\phi$ and the wavelet $\psi$. A wavelet system is generated by dilation and translation of $\phi$ and $\psi$ through
$$\phi_{m,k}(t) = 2^{m/2}\phi(2^m t - k), \quad \psi_{m,k}(t) = 2^{m/2}\psi(2^m t - k), \quad m, k \in \mathbb{Z}.$$
Therefore, $\{\phi_{0,k}, k\in\mathbb{Z}\}$ and $\{\phi_{m,k}, k\in\mathbb{Z}\}$ are orthonormal bases of $V_0$ and $V_m$, respectively. From the Moore–Aronszajn theorem [36], it follows that
$$K(t,s) = \sum_k \phi(t-k)\phi(s-k)$$
is a reproducing kernel of $V_0$. By the self-similarity of the multiresolution subspaces,
$$K_m(t,s) = 2^m K(2^m t, 2^m s)$$
is a reproducing kernel of $V_m$, and the projection of $g$ onto the space $V_m$ is then given by
$$P_{V_m}g(t) = \int K_m(t,s)g(s)\,ds.$$
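For instance, for the Haar scaling function $\phi = \mathbf{1}_{[0,1)}$ (the simplest case, and the wavelet we use in Section 5), the kernels take a fully explicit form:
$$K(t,s) = \sum_{k\in\mathbb{Z}}\mathbf{1}_{[k,k+1)}(t)\,\mathbf{1}_{[k,k+1)}(s) = \mathbf{1}\{\lfloor t\rfloor = \lfloor s\rfloor\}, \qquad K_m(t,s) = 2^m\,\mathbf{1}\{\lfloor 2^m t\rfloor = \lfloor 2^m s\rfloor\},$$
so that $P_{V_m}g(t) = 2^m\int_{I_m(t)}g(s)\,ds$, the local average of $g$ over the dyadic cell $I_m(t) = [\lfloor 2^m t\rfloor 2^{-m}, (\lfloor 2^m t\rfloor + 1)2^{-m})$ containing $t$.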
This motivates us to define a quantile-wavelet estimator of $\beta(t)$ by
$$\hat\beta(t) = \operatorname*{argmin}_{b}\sum_{i=1}^n \rho_\tau\left(y_i - X_i^T b\right)\int_{A_i}K_m(t,s)\,ds, \tag{4}$$
where $\rho_\tau(u) = u(\tau - I\{u<0\})$, $u \in \mathbb{R}$, is the quantile loss ("check") function, $I_B$ is the indicator function of a set $B$, and the $A_i$ are intervals that partition $[0,1]$ such that $t_i \in A_i$. One way of defining the intervals $A_i = [s_{i-1}, s_i)$ is to take $s_0 = 0$, $s_n = 1$, and $s_i = (t_i + t_{i+1})/2$ for $i = 1,\ldots,n-1$.
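To make Equation (4) concrete, the following is a minimal sketch (ours, not from the original development) for a scalar covariate with the Haar scaling function, under which the weights $w_i = \int_{A_i}K_m(t,s)\,ds$ have the closed form derived above. The helper names haar_Km_weights and qr_wavelet_scalar are illustrative. Since the weighted check loss is piecewise linear and convex in $b$, a minimizer is attained at one of the breakpoints $y_i/x_i$, which the sketch simply scans.

    import numpy as np

    def haar_Km_weights(t, grid, m):
        # Haar case: K_m(t, s) = 2^m * 1{floor(2^m t) = floor(2^m s)}, so
        # w_i = 2^m * |A_i ∩ (dyadic cell of t)| with A_i = [s_{i-1}, s_i).
        n = len(grid)
        lo = np.floor(2.0**m * t) / 2.0**m   # left endpoint of the dyadic cell containing t
        hi = lo + 2.0**(-m)
        s = np.empty(n + 1)                  # s_0 = 0, s_n = 1, s_i = (t_i + t_{i+1})/2
        s[0], s[-1] = 0.0, 1.0
        s[1:-1] = 0.5 * (grid[:-1] + grid[1:])
        return 2.0**m * np.maximum(0.0, np.minimum(s[1:], hi) - np.maximum(s[:-1], lo))

    def qr_wavelet_scalar(y, x, t_eval, tau, m):
        # beta_hat(t) = argmin_b sum_i w_i * rho_tau(y_i - x_i * b), scalar x_i != 0.
        n = len(y)
        w = haar_Km_weights(t_eval, np.arange(1, n + 1) / n, m)
        candidates = y / x                   # breakpoints of the piecewise-linear objective

        def check_loss(b):
            u = y - x * b
            return np.sum(w * u * (tau - (u < 0)))   # rho_tau(u) = u * (tau - 1{u < 0})

        return candidates[np.argmin([check_loss(b) for b in candidates])]

For vector-valued $X_i$, the same weighted check-loss problem is a weighted linear quantile regression and can be solved by linear programming.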
Note that many other nonparametric methods could be used here, including spline and kernel approaches. However, they might not be rich enough to characterize the local properties of the time-varying coefficient function. In the following section, we present the asymptotic properties of the quantile-wavelet estimator in Equation (4).

3. Bahadur Representation and Its Applications

Let $H^\nu$ be the collection of all functions on $[0,1]$ of order $\nu > 0$ in the Sobolev sense, which is a very general space. The degree of smoothness of the true coefficient functions determines how well the functions can be approximated. Functions belonging to $H^\nu$ for $1/2 < \nu < 3/2$ need not be continuously differentiable. It is worth stressing that the wavelet approach allows us to obtain rates under much weaker assumptions than second-order differentiability. Denote $\mathbb{X} = (X_1^T,\ldots,X_n^T)$; $\|\cdot\|$ is the $L_2$ norm, and $C$ denotes a positive constant whose value is unimportant and may change from line to line in the proofs.
Our main results will be established under the following assumptions.
(A1.) (i) $\{\epsilon_i, X_i\}$ is a stationary $\alpha$-mixing sequence with $\alpha(i) = O(i^{-\kappa_0})$, where $\kappa_0 > \frac{9(1+d)\delta + 6}{2(1-d)\delta - 4}$, $\delta > \frac{2}{1-d}$, and $d$ is defined in (A7)(i); (ii) the errors $\epsilon_i$ satisfy $Q_\tau(\epsilon_i \mid X_i) = 0$ almost surely and have a continuous, positive conditional density $f_{\epsilon|X}$ in a neighborhood of 0 given the $X_i$; moreover, $E\|X_1\|^{2\delta} < \infty$ and $E\|\epsilon_1 X_1\|^{2\delta} < \infty$; (iii) $\Psi_x = E(X_1X_1^T)$ and $\Omega_x = E\left(f_{\epsilon|X}(0)X_1X_1^T\right)$ are non-singular matrices.
(A2.) $\beta_j$ belongs to the Sobolev space $H^\nu([0,1])$ of order $\nu > 1/2$.
(A3.) $\beta_j$ satisfies a Lipschitz condition of order $\gamma > 0$.
(A4.) $\phi$ has compact support, belongs to the Schwarz space of order $l > \nu$, and satisfies a Lipschitz condition of order $l$. Furthermore, $|\hat\phi(\xi) - 1| = O(\xi)$ as $\xi \to 0$, where $\hat\phi$ is the Fourier transform of $\phi$.
(A5.) $\max_i|t_i - t_{i-1}| = O(n^{-1})$.
(A6.) We also assume that, for some Lipschitz function $\kappa(\cdot)$,
$$\rho(n) = \max_i\left|s_i - s_{i-1} - \frac{\kappa(s_i)}{n}\right| = o(n^{-1}).$$
(A7.) The tuning parameter $m$ satisfies (i) $2^m = O(n^d)$ with $0 < d < 1$; (ii) let $\nu^* = \min(3/2, \nu, \gamma + 1/2) - \epsilon_1$, where $\epsilon_1 = 0$ for $\nu \ne 3/2$ and $\epsilon_1 > 0$ for $\nu = 3/2$. Assume that $n2^{-2m\nu^*} \to 0$.
Remark 1. 
These conditions are mild. Condition (A1) contains the standard requirements on the moments and on the mixing coefficients of an α-mixing time series. Conditions (A2)–(A6) are mild regularity conditions for wavelet smoothing, which have been adopted by [18]. In condition (A7), m acts as a tuning parameter, much as the bandwidth does for standard kernel smoothers; (A7)(i) is used for the Bahadur representation and the rate of convergence, and (A7)(ii), combined with (A7)(i), is used for the asymptotic normality of the quantile-wavelet estimator.
Our results are as follows.
Theorem 1. 
(Bahadur representation) Suppose that (A1)–(A5) and (A7)(i) hold. Then
$$\hat\beta(t) - \beta(t) = \Omega_x^{-1}Z_n(t) + R_n(m;\gamma,\nu), \quad a.s.,$$
with
$$Z_n(t) = \sum_{i=1}^n\varphi_\tau\left(\epsilon_i + X_i^T[\beta(t_i) - \beta(t)]\right)X_i\int_{A_i}K_m(t,s)\,ds,$$
where $\varphi_\tau(u) = \tau - I\{u<0\}$, and
$$R_n(m;\gamma,\nu) = O\left(\left(\frac{2^m}{n}\right)^{3/4}(\log n)^{1/2}\right), \quad a.s.$$
Remark 2. 
Theorem 1 presents the strong Bahadur representation of the quantile-wavelet estimator for a time-varying coefficient model. Here, $R_n(m;\gamma,\nu) = O\big((2^m/n)^{3/4}(\log n)^{1/2}\big)$, a.s., which is comparable with the Bahadur order $O\big((\log\log n/(nh))^{3/4}\big)$, a.s., of [37], where the bandwidth $h \to 0$. Reference [37] is based on kernel local polynomial M-estimation and requires the function $\beta$ to be twice differentiable. We do not need such strong smoothness conditions: the function $\beta$ need not be differentiable when $\beta \in H^\nu$, $1/2 < \nu < 3/2$.
The Bahadur representation of the quantile-wavelet estimator of Theorem 1 can be applied to obtain the following two results.
Corollary 1. 
(Rate of uniform strong convergence) Assume that (A1)–(A5) and (A7)(i) hold. Then
$$\sup_{t\in[0,1]}\left\|\hat\beta(t) - \beta(t)\right\| = O\left(\sqrt{\frac{2^m\log n}{n}} + n^{-\gamma} + \eta_m\right), \quad a.s.$$
Remark 3. 
Corollary 1 provides the rate of uniform strong convergence of the quantile-wavelet estimator $\hat\beta$ for model (2). We consider the rate in the case $1/2 < \nu < 3/2$, for which $\eta_m$ gives a slower rate of convergence than in the case $\nu \ge 3/2$. If we take $2^m = O(n^{\gamma})$ with $1/3 \le \gamma < 1$, then $\sup_{t\in[0,1]}\|\hat\beta(t) - \beta(t)\| = O\left(n^{-(1-\gamma)/2}(\log n)^{1/2}\right)$, a.s. If we further take $\gamma = 1/3$, then one obtains
$$\sup_{t\in[0,1]}\|\hat\beta(t) - \beta(t)\| = O\left(n^{-1/3}\sqrt{\log n}\right), \quad a.s.,$$
which is comparable with the optimal convergence rate of nonparametric estimation in nonparametric models. The result is better than those of [27,28] based on local linear estimators for varying-coefficient models. In addition, we do not require the unknown coefficients $\beta$ to be smooth curves of a common degree.
Corollary 2. 
(Asymptotic normality) Suppose that (A1)–(A7) hold. Then
$$\sqrt{\frac{n}{2^m}}\left(\hat\beta_d(t) - \beta(t)\right) \xrightarrow{D} N\left(0,\; \tau(1-\tau)\kappa(t)\omega_0^2\,\Omega_x^{-1}\Psi_x\Omega_x^{-1}\right),$$
where $\hat\beta_d(t) = \hat\beta(t^{(m)})$ with $t^{(m)} = [2^m t]/2^m$, and $\omega_0^2 = \int_{\mathbb{R}}K_0^2(0,u)\,du = \sum_{k\in\mathbb{Z}}\phi^2(k)$.
Remark 4. 
To obtain an asymptotic expansion of the variance and the asymptotic normality, we need to consider an approximation to $\hat\beta(t)$ based on its values at dyadic points of order $m$, as reference [18] has done. The estimator $\hat\beta_d(t)$ is the piecewise-constant approximation of $\hat\beta(t)$ at resolution $2^m$, which avoids instability of the variance of $\hat\beta(t)$. From the proof of Corollary 2, one can see that the main term of the variance of $\hat\beta(t)$ is $\tau(1-\tau)\kappa(t)\omega^2(t_m)2^m n^{-1}\Omega_x^{-1}\Psi_x\Omega_x^{-1}$ with $t_m = 2^m t - [2^m t]$ and $\omega^2(t_m) = \int_0^1 K_0^2(t_m,s)\,ds$. When $t$ is dyadic and $m$ is sufficiently large, $t_m = 0$ and the variance of $\hat\beta(t)$ is asymptotically stable; see [18] for details. For example, for the Haar scaling function, $\omega_0^2 = \sum_{k\in\mathbb{Z}}\phi^2(k) = \phi^2(0) = 1$.

4. Lemmas and Proofs

Lemma 1 
([18,38]). Suppose that (A4) holds. We have
(i) $|K_0(t,s)| \le c_k/(1+|t-s|)^k$ and $|K_m(t,s)| \le 2^m c_k/(1+2^m|t-s|)^k$, where $k$ is a positive integer and $c_k$ is a constant depending on $k$ only;
(ii) $\sup_{0\le t,s\le 1}|K_m(t,s)| = O(2^m)$;
(iii) $\sup_{0\le t\le 1}\int_0^1|K_m(t,s)|\,ds \le c$, where $c$ is a positive constant;
(iv) $\int_0^1 K_m(t,s)\,ds \to 1$ uniformly in $t \in [0,1]$, as $m \to \infty$.
Lemma 2 
([18]). Suppose that (A4)–(A5) hold and $h(\cdot)$ satisfies (A2)–(A3). Then
$$\sup_{0\le t\le 1}\left|h(t) - \sum_{i=1}^n h(t_i)\int_{A_i}K_m(t,s)\,ds\right| = O(n^{-\gamma}) + O(\eta_m),$$
where
$$\eta_m = \begin{cases} (1/2^m)^{\nu - 1/2} & \text{if } 1/2 < \nu < 3/2,\\ \sqrt{m}/2^m & \text{if } \nu = 3/2,\\ 1/2^m & \text{if } \nu > 3/2. \end{cases}$$
Lemma 3 
([39]). Let $\{\lambda_n(\theta), \theta\in\Theta\}$ be a sequence of random convex functions defined on a convex, open subset $\Theta$ of $\mathbb{R}^d$. Suppose $\lambda(\cdot)$ is a real-valued function on $\Theta$ for which $\lambda_n(\theta) \to \lambda(\theta)$ with probability 1, for each fixed $\theta \in \Theta$. Then for each compact subset $K$ of $\Theta$, with probability 1,
$$\sup_{\theta\in K}|\lambda_n(\theta) - \lambda(\theta)| \to 0.$$
Lemma 4. 
Let $\{(X_i, e_i), 1\le i\le n\}$ be a stationary sequence satisfying the mixing condition $\alpha(\ell) = O(\ell^{-\kappa_0})$ for some $\kappa_0 > \frac{9(1+d)\delta+6}{2(1-d)\delta-4}$, $\delta > \frac{2}{1-d}$, and let $2^m = O(n^d)$ with $0 < d < 1$. Further, assume that $E|e_1X_1|^\delta < \infty$ and that Conditions (A4) and (A5) hold. Then
$$\sup_{t\in[0,1]}\left|\sum_{i=1}^n\left\{e_iX_i - E(e_iX_i)\right\}\int_{A_i}K_m(t,s)\,ds\right| = O\left(\sqrt{\frac{2^m\log n}{n}}\right), \quad a.s.$$
Remark 5. 
In Lemma 4, we assume that $\{X_i, 1\le i\le n\}$ is a sequence of one-dimensional random variables. In fact, for $X_i \in \mathbb{R}^p$ with fixed $p$, the same result as in Lemma 4 also holds.
Proof. 
The proof is similar to that of Lemma A.4 in [29], but with some differences. We suppose that $E(e_iX_i) = 0$; if $E(e_iX_i) \ne 0$, the method of proof is the same. Let $Q_m(t) = \sum_{i=1}^n e_iX_i\int_{A_i}K_m(t,s)\,ds$. Partition the interval $[0,1]$ into $N = (n2^{3m})^{1/2}$ subintervals $I_j$ of equal length, and let $t_j$ be the center of $I_j$. Notice that
$$|Q_m(t) - Q_m(t')| \le C2^{2m}|t - t'|\,\frac{1}{n}\sum_{i=1}^n|e_iX_i| \le C2^{2m}|t - t'|\,E|e_1X_1|, \quad a.s.$$
One obtains
$$\sup_{t\in[0,1]}|Q_m(t)| \le \max_{1\le j\le N}\sup_{t\in I_j}|Q_m(t) - Q_m(t_j)| + \max_{1\le j\le N}|Q_m(t_j)| \le \max_{1\le j\le N}|Q_m(t_j)| + C\sqrt{2^m/n}.$$
Let $Q_m^B(t) = \sum_{i=1}^n e_iX_iI(|e_iX_i|\le B_n)\int_{A_i}K_m(t,s)\,ds$, and take $B_n = n^{\delta^{-1}+\epsilon}$ for some $\epsilon > 0$. Note that $\sum_i P(|e_iX_i| > B_i) \le \sum_i B_i^{-\delta}E|e_1X_1|^\delta < \infty$. By the Borel–Cantelli lemma, $|e_iX_i| \le B_i$, a.s., for all sufficiently large $i$. Hence,
$$|e_iX_i| \le B_n, \quad a.s., \quad \text{for all } i \le n, \tag{7}$$
for all sufficiently large $n$. In addition,
$$\sup_{t\in[0,1]}\left|E\left[Q_m(t) - Q_m^B(t)\right]\right| = \sup_{t\in[0,1]}\left|\sum_{i=1}^n E\left(e_iX_iI[|e_iX_i| > B_n]\right)\int_{A_i}K_m(t,s)\,ds\right| \le CB_n^{1-\delta}E|e_1X_1|^\delta \le CB_n^{1-\delta}. \tag{8}$$
From Equations (7) and (8), respectively, we have
$$\sup_{t\in[0,1]}\left|Q_m(t) - Q_m^B(t) - E\left[Q_m(t) - Q_m^B(t)\right]\right| = O(B_n^{1-\delta}) = o(n^{-1/2}), \quad a.s.$$
Further, we have
$$\sup_{t\in[0,1]}|Q_m(t)| \le \max_{1\le j\le N}\left|Q_m^B(t_j) - E(Q_m^B(t_j))\right| + C\sqrt{2^m/n}. \tag{10}$$
Let $\tilde X_i = n\left\{e_iX_iI(|e_iX_i|\le B_n)\int_{A_i}K_m(t,s)\,ds - E\left[e_iX_iI(|e_iX_i|\le B_n)\int_{A_i}K_m(t,s)\,ds\right]\right\}$, and note that $|\tilde X_i| \le C2^mB_n$. Applying Theorem 2.18(ii) in Fan and Yao (2003) [35] and taking $h_n = (M2^m\log n/n)^{1/2}$, for any $\eta > 0$, sufficiently large $M$ and each $q \in [1, n/2]$, we have
$$\sum_{n=1}^\infty P\left(\max_{1\le j\le N}\left|Q_m^B(t_j) - E(Q_m^B(t_j))\right| > \eta h_n\right) \le C\sum_{n=1}^\infty N\left\{\exp\left(-\frac{h_n^2 q}{v^2(q)}\right) + \left(1 + \frac{2^mB_n}{h_n}\right)^{1/2}q\,\alpha(k)\right\},$$
where $k = [n/(2q)]$, $v^2(q) = 2\sigma^2(q)/k^2 + C2^mB_nh_n$, and
$$\sigma^2(q) = \max_{0\le j\le 2q-1}\mathrm{Var}\left(\tilde X_{jk+1} + \cdots + \tilde X_{(j+1)k+1}\right).$$
Taking $k = [(B_nh_n)^{-1}]$, we obtain $\sigma^2(q) = O(2^mk)$. Recall that $2^m = O(n^d)$ with $0 < d < 1$. We have
$$\begin{aligned}
\sum_{n=1}^\infty P\left(\max_{1\le j\le N}\left|Q_m^B(t_j) - E(Q_m^B(t_j))\right| > \eta h_n\right)
&\le C\sum_{n=1}^\infty N\left\{\exp\left(-\frac{Cnh_n^2}{2^m}\right) + C2^{m/2}nB_n^{\kappa_0+1.5}h_n^{\kappa_0+0.5}\right\}\\
&= C\sum_{n=1}^\infty n^{-CM+2} + C\sum_{n=1}^\infty n^{\frac{5}{4} + (\delta^{-1}+\epsilon)(\kappa_0+1.5) - \frac{\kappa_0}{2}}(2^m)^{\frac{9+2\kappa_0}{4}}(\log n)^{\frac{\kappa_0+0.5}{2}}\\
&\le C\sum_{n=1}^\infty n^{-CM+2} + C\sum_{n=1}^\infty n^{\frac{5}{4} + \delta^{-1}(\kappa_0+1.5) - \frac{\kappa_0}{2}}(2^m)^{\frac{9+2\kappa_0}{4}}\,n^{2\epsilon(\kappa_0+1.5)}\\
&\le C\sum_{n=1}^\infty n^{-CM+2} + C\sum_{n=1}^\infty n^{\left(\frac{1}{\delta}-\frac{1}{2}+\frac{d}{2}\right)\kappa_0 + \frac{5}{4} + \frac{9d}{4} + \frac{3}{2\delta}}\,n^{2\epsilon(\kappa_0+1.5)} < \infty,
\end{aligned}$$
since $\left(\frac{1}{\delta}-\frac{1}{2}+\frac{d}{2}\right)\kappa_0 + \frac{5}{4} + \frac{9d}{4} + \frac{3}{2\delta} + 2\epsilon(\kappa_0+1.5) < -1$ when $\kappa_0 > \frac{9(1+d)\delta+6}{2(1-d)\delta-4}$ and $\epsilon$ is small enough, with $\delta > 2/(1-d)$. By the Borel–Cantelli lemma and Equation (10), we prove Lemma 4. □
Lemma 5. 
Under the conditions of Theorem 1, we have
$$\sup_{t\in[0,1]}\left\|\sum_{i=1}^n\left[\varphi_\tau(\epsilon_i^*) - \varphi_\tau(\epsilon_i)\right]X_i\int_{A_i}K_m(t,s)\,ds\right\| = O\left(n^{-\gamma} + \eta_m + \sqrt{\frac{2^m\log n}{n}}\right), \quad a.s.,$$
where $\epsilon_i^* = \epsilon_i + X_i^T[\beta(t_i) - \beta(t)]$.
Proof. 
Let $e_i = I\left(\epsilon_i \le X_i^T(\beta(t_i) - \beta(t))\right) - I(\epsilon_i \le 0)$. By Lemmas 2 and 4, we have
$$\begin{aligned}
&\sup_{t\in[0,1]}\left\|\sum_{i=1}^n\left[\varphi_\tau(\epsilon_i^*) - \varphi_\tau(\epsilon_i)\right]X_i\int_{A_i}K_m(t,s)\,ds\right\| = \sup_{t\in[0,1]}\left\|\sum_{i=1}^n e_iX_i\int_{A_i}K_m(t,s)\,ds\right\|\\
&\quad\le \sup_{t\in[0,1]}\left\|\sum_{i=1}^n E(e_iX_i)\int_{A_i}K_m(t,s)\,ds\right\| + \sup_{t\in[0,1]}\left\|\sum_{i=1}^n\left[e_iX_i - E(e_iX_i)\right]\int_{A_i}K_m(t,s)\,ds\right\|\\
&\quad\le \sup_{t\in[0,1]}\left\|\sum_{i=1}^n E\left\{\left[F_{\epsilon|X}\left(X_i^T[\beta(t_i) - \beta(t)]\right) - F_{\epsilon|X}(0)\right]X_i\right\}\int_{A_i}K_m(t,s)\,ds\right\| + O\left(\sqrt{\frac{2^m\log n}{n}}\right)\\
&\quad= \sup_{t\in[0,1]}\left\|\sum_{i=1}^n E\left\{f_{\epsilon|X}(0)X_iX_i^T\right\}\left[\beta(t_i) - \beta(t)\right]\int_{A_i}K_m(t,s)\,ds\right\| + O\left(\sqrt{\frac{2^m\log n}{n}}\right)\\
&\quad= \sup_{t\in[0,1]}\left\|\Omega_x\left[\beta(t) - \sum_{i=1}^n\beta(t_i)\int_{A_i}K_m(t,s)\,ds\right]\right\| + O\left(\sqrt{\frac{2^m\log n}{n}}\right)\\
&\quad= O\left(n^{-\gamma} + \eta_m + \sqrt{\frac{2^m\log n}{n}}\right), \quad a.s.
\end{aligned}$$
This completes the proof of Lemma 5. □
Lemma 6. 
Under the conditions of Theorem 1, for fixed $\theta$, we have
$$\sup_{t\in[0,1]}\left|E(R_n(\theta,t)) - \frac{1}{2}\theta^T\Omega_x\theta\right| = O\left(\sqrt{\frac{2^m\log n}{n}}\right) \tag{12}$$
and
$$\sup_{t\in[0,1]}\left|R_n(\theta,t) - E(R_n(\theta,t))\right| = O\left(\sqrt{\frac{2^m\log n}{n}}\right), \quad a.s., \tag{13}$$
where $R_n(\theta,t)$ is defined in (18) in the proof of Theorem 1.
Proof. 
For Equation (12), by Condition (A1)(iii) and Lemma 4, one obtains
$$\begin{aligned}
E(R_n(\theta,t)\mid\mathbb{X}) &= \frac{n}{2^m}\sum_{i=1}^n\int_0^{v_i}\left[F_{\epsilon|X}\left(s - X_i^T[\beta(t_i) - \beta(t)]\right) - F_{\epsilon|X}\left(-X_i^T[\beta(t_i) - \beta(t)]\right)\right]ds\int_{A_i}K_m(t,s)\,ds\\
&= \frac{n}{2^m}\sum_{i=1}^n\left[\int_0^{v_i}f_{\epsilon|X}(0)\,s\,ds\right]\int_{A_i}K_m(t,s)\,ds\,(1+o(1))\\
&= \frac{n}{2^m}\cdot\frac{1}{2}\theta^T\left[\frac{2^m}{n}\sum_{i=1}^n f_{\epsilon|X}(0)X_iX_i^T\int_{A_i}K_m(t,s)\,ds\right]\theta\,(1+o(1))\\
&= \frac{1}{2}\theta^T\Omega_x\theta + o(\|\theta\|^2) + O\left(\sqrt{\frac{2^m\log n}{n}}\right), \quad a.s.
\end{aligned}$$
Note that $E(R_n(\theta,t)) = E[E(R_n(\theta,t)\mid\mathbb{X})]$. Thus, Equation (12) holds.
Now, let us prove Equation (13). Notice that, for any $k > 0$,
$$\left|I(\epsilon_i^* \le s) - I(\epsilon_i^* \le 0)\right|^k = I(d_1 \le \epsilon_i \le d_2), \tag{15}$$
where $d_1 = \min(-c_1, s - c_1)$ and $d_2 = \max(-c_1, s - c_1)$ with $c_1 = X_i^T[\beta(t_i) - \beta(t)]$. Let $\tilde v_i = \frac{n}{2^m}\int_0^{v_i}\left\{I(\epsilon_i^* \le s) - I(\epsilon_i^* \le 0)\right\}ds$. To prove Equation (13), by the proof of Lemma 4 we only need to show that $E|\tilde v_i|^\delta < \infty$. By Equation (15) and Jensen's inequality, one obtains
$$\begin{aligned}
E|\tilde v_i|^\delta = E\left[E\left(|\tilde v_i|^\delta\mid\mathbb{X}\right)\right] &\le E\left\{\left[\frac{n}{2^m}\int_0^{v_i}E\left(\left|I(\epsilon_i^* \le s) - I(\epsilon_i^* \le 0)\right|\,\Big|\,\mathbb{X}\right)ds\right]^\delta\right\}\\
&= E\left\{\left[\frac{n}{2^m}\int_0^{v_i}\left|F_{\epsilon|X}(d_2) - F_{\epsilon|X}(d_1)\right|ds\right]^\delta\right\}\\
&= E\left\{\left[\frac{n}{2^m}\int_0^{v_i}f_{\epsilon|X}(0)\,s\,ds\right]^\delta\right\}(1+o(1)) = E\left|X_1^T\theta\right|^{2\delta}(1+o(1)) < \infty.
\end{aligned}$$
By Lemma 4, we obtain Equation (13). This completes the proof of Lemma 6. □
In the sequel, we give the proofs of the main results.
Proof of Theorem 1. 
Recall that $\hat\beta(t) = \hat b$, where $\hat b$ minimizes
$$\sum_{i=1}^n\rho_\tau\left(y_i - X_i^Tb\right)\int_{A_i}K_m(t,s)\,ds.$$
Let $\hat\theta = \sqrt{n/2^m}\,(\hat b - \beta(t))$ and $\epsilon_i^* = \epsilon_i + X_i^T[\beta(t_i) - \beta(t)]$. The behavior of $\hat\theta$ follows from consideration of the objective function
$$G_n(\theta;t) = \frac{n}{2^m}\sum_{i=1}^n\left[\rho_\tau\left(\epsilon_i^* - \sqrt{2^m/n}\,X_i^T\theta\right) - \rho_\tau(\epsilon_i^*)\right]\int_{A_i}K_m(t,s)\,ds.$$
The function $G_n(\theta;t)$ is obviously convex and is minimized at $\hat\theta$. It suffices to show that $G_n(\theta;t)$ converges to its expectation, since it then follows from Lemma 3 that the convergence is also uniform on any compact set $K$ of $\theta$-values. Using the identity of [40],
$$\rho_\tau(u - v) - \rho_\tau(u) = -v\varphi_\tau(u) + \int_0^v\left\{I(u \le s) - I(u \le 0)\right\}ds,$$
where $\varphi_\tau(u) = \tau - I\{u < 0\}$, we may write
$$G_n(\theta;t) = -\theta^TW_n(t) + E(R_n(\theta,t)) + \left\{R_n(\theta,t) - E[R_n(\theta,t)]\right\}, \tag{16}$$
where
$$W_n(t) = \sqrt{n/2^m}\sum_{i=1}^n\varphi_\tau(\epsilon_i^*)X_i\int_{A_i}K_m(t,s)\,ds, \tag{17}$$
$$R_n(\theta,t) = \frac{n}{2^m}\sum_{i=1}^n\int_0^{v_i}\left\{I(\epsilon_i^* \le s) - I(\epsilon_i^* \le 0)\right\}ds\int_{A_i}K_m(t,s)\,ds, \tag{18}$$
with $v_i = \sqrt{2^m/n}\,X_i^T\theta$.
To obtain the strong Bahadur representation of the quantile-wavelet estimator, we first need to show the uniform approximation of $G_n(\theta;t)$ over $t \in [0,1]$, with probability 1, for fixed $\theta$, by the three terms in Equation (16). By Lemma 6, we have
$$G_n(\theta;t) = -\theta^TW_n(t) + \frac{1}{2}\theta^T\Omega_x\theta + O\left(\sqrt{\frac{2^m\log n}{n}}\right), \quad a.s.,$$
uniformly in $t \in [0,1]$. Let $a_n^2 = \sqrt{2^m/n}\,\log n$. We have
$$\sup_{t\in[0,1]}a_n^{-2}\left|G_n(\theta;t) + \theta^TW_n(t) - \frac{1}{2}\theta^T\Omega_x\theta\right| = O\left(\frac{1}{\sqrt{\log n}}\right) = o(1), \quad a.s.$$
Second, the strong Bahadur representation requires a single compact subset K that works, with probability 1, simultaneously for all θ. We prove this by applying a convexity lemma (see Lemma 3) that is stronger than the one in [41]; however, the arguments are essentially the same as in [41].
Let $\bar\theta = \Omega_x^{-1}W_n(t)$. It is easy to see that $W_n(t)$ has a bounded second moment and hence is stochastically bounded. Since the convex function $\lambda_n(\theta) = G_n(\theta;t) + \theta^TW_n(t)$ converges with probability 1 to the convex function $\lambda(\theta) = \frac{1}{2}\theta^T\Omega_x\theta$, it follows from the convexity Lemma 3 that, for any compact subset $K \subset \mathbb{R}^p$,
$$\sup_{\theta\in K}a_n^{-2}\left|G_n(\theta;t) + \theta^TW_n(t) - \frac{1}{2}\theta^T\Omega_x\theta\right| = o(1), \quad a.s.$$
The argument will be complete if we can show, for each $\varepsilon > 0$, that with probability 1,
$$\|\hat\theta - \bar\theta\| < a_n\varepsilon.$$
Because $\bar\theta$ is almost surely convergent by Lemmas 4 and 5, it is bounded with probability 1. The compact subset $K$ can be chosen to contain $B(n)$, a closed ball with center $\bar\theta$ and radius $a_n\varepsilon$, thereby implying that
$$\Delta_n \equiv \sup_{\theta\in B(n)}a_n^{-2}\left|G_n(\theta;t) + \theta^TW_n(t) - \frac{1}{2}\theta^T\Omega_x\theta\right| = o(1), \quad a.s. \tag{21}$$
Now, we consider the behavior of $G_n(\theta;t)$ outside $B(n)$. Consider $\theta = \bar\theta + a_n\varrho\upsilon$ with $\varrho > \varepsilon$ and $\upsilon$ a unit vector, and define $\theta^*$ as the boundary point of $B(n)$ that lies on the line segment from $\bar\theta$ to $\theta$, that is, $\theta^* = \bar\theta + a_n\varepsilon\upsilon$. Convexity of $G_n(\theta;t)$ and the definition of $\Delta_n$ imply
$$\begin{aligned}
\frac{\varepsilon}{\varrho}G_n(\theta;t) + \left(1 - \frac{\varepsilon}{\varrho}\right)G_n(\bar\theta;t) &\ge G_n(\theta^*;t) \ge -(\theta^*)^T\Omega_x\bar\theta + \frac{1}{2}(\theta^*)^T\Omega_x\theta^* - a_n^2\Delta_n\\
&= \frac{1}{2}a_n^2\,\upsilon^T\Omega_x\upsilon\,\varepsilon^2 - \frac{1}{2}\bar\theta^T\Omega_x\bar\theta - a_n^2\Delta_n\\
&\ge G_n(\bar\theta;t) + a_n^2\left(\frac{1}{2}\upsilon^T\Omega_x\upsilon\,\varepsilon^2 - 2\Delta_n\right).
\end{aligned}$$
The last expression does not depend on $\theta$. It follows that
$$\inf_{\|\theta - \bar\theta\| > a_n\varepsilon}G_n(\theta;t) \ge G_n(\bar\theta;t) + a_n^2\,\frac{\varrho}{\varepsilon}\left[\frac{1}{2}\upsilon^T\Omega_x\upsilon\,\varepsilon^2 - 2\Delta_n\right].$$
As $\Omega_x$ is positive definite, it follows from (21) that, with probability 1, $2\Delta_n < \frac{1}{2}\upsilon^T\Omega_x\upsilon\,\varepsilon^2$ for $n$ large enough. This implies that, for any $\varepsilon > 0$ and all sufficiently large $n$, the minimum of $G_n(\theta;t)$ must be achieved within $B(n)$, i.e., $\|\hat\theta - \bar\theta\| < a_n\varepsilon$; that is,
$$\sqrt{\frac{n}{2^m}}\left(\hat\beta(t) - \beta(t)\right) = \Omega_x^{-1}W_n(t) + O(a_n), \quad a.s.$$
Since $\sqrt{2^m/n}\,W_n(t) = Z_n(t)$ and $\sqrt{2^m/n}\,a_n = (2^m/n)^{3/4}(\log n)^{1/2}$, one obtains
$$R_n(m;\gamma,\nu) = O\left(\left(\frac{2^m}{n}\right)^{3/4}(\log n)^{1/2}\right), \quad a.s.$$
We complete the proof of Theorem 1. □
Proof of Corollary 1. 
From Theorem 1 and Lemmas 4 and 5, it is easy to obtain the result of Corollary 1. □
Proof of Corollary 2. 
From Theorem 1 and Lemma 5, we only need to verify the asymptotic normality of
$$U_n(t) = \sqrt{n/2^m}\sum_{i=1}^n\varphi_\tau(\epsilon_i)X_i\int_{A_i}K_m(t,s)\,ds.$$
First, we compute its variance–covariance matrix. Let $V_i = \sqrt{n/2^m}\,\varphi_\tau(\epsilon_i)X_i\int_{A_i}K_m(t,s)\,ds$; then $E(V_i) = 0$. Let $S_1 = \{(i,j): 1 \le j - i \le d_n,\; 1 \le i < j \le n\}$ and $S_2 = \{(i,j): 1 \le i < j \le n\}\setminus S_1$, with $d_n$ specified later. We have
$$\mathrm{Var}(U_n(t)) = \mathrm{Var}\left(\sum_{i=1}^nV_i\right) = \sum_{i=1}^nE(V_iV_i^T) + 2\sum_{1\le i<j\le n}\mathrm{Cov}(V_i, V_j) = \sum_{i=1}^nE(V_iV_i^T) + 2\sum_{S_1}\mathrm{Cov}(V_i, V_j) + 2\sum_{S_2}\mathrm{Cov}(V_i, V_j) \equiv I_1 + I_2 + I_3. \tag{22}$$
For $I_1$, by Theorem 3.3 and Lemma 6.1 in [18], one obtains
$$\begin{aligned}
I_1 &= \sum_{i=1}^nE\left(E(V_iV_i^T\mid X_i)\right) = \frac{n}{2^m}\tau(1-\tau)E(X_1X_1^T)\sum_{i=1}^n\left(\int_{A_i}K_m(t,s)\,ds\right)^2\\
&= \tau(1-\tau)E(X_1X_1^T)\left[\frac{n}{2^m}\sum_{i=1}^n\left(\int_{A_i}K_m(t,s)\,ds\right)^2 - 2^{-m}\int_0^1K_m^2(t,s)\kappa(s)\,ds\right] + \tau(1-\tau)E(X_1X_1^T)\,2^{-m}\int_0^1K_m^2(t,s)\kappa(s)\,ds\\
&= \tau(1-\tau)\kappa(t)E(X_1X_1^T)\int_0^1K_0^2(0,s)\,ds + O\left(n\rho(n) + 2^m/n\right). \tag{23}
\end{aligned}$$
For $I_2$: since $n2^{-m} \to \infty$, take $d_n$ such that $d_n/(n2^{-m}) \to 0$. Let $(I_2)_{k,l}$ be the $(k,l)$th component of $I_2$. By Lemma 1, we have
$$\begin{aligned}
|(I_2)_{k,l}| &\le 2\sum_{S_1}\left\{E|V_{ik}V_{jl}| + E|V_{ik}|\,E|V_{jl}|\right\}\\
&\le C\frac{n}{2^m}\sum_{S_1}\left\{E|X_{ik}X_{jl}|\int_{A_i}K_m(t,s)\,ds\int_{A_j}K_m(t,s)\,ds + E^2\|X_1\|\left(\int_{A_i}K_m(t,s)\,ds\right)^2\right\}\\
&= \frac{n}{2^m}\,O\left((2^m/n)^2\right)d_n = O\left(\frac{2^m}{n}\,d_n\right) = o(1). \tag{24}
\end{aligned}$$
For $I_3$, let $(I_3)_{k,l}$ be the $(k,l)$th component of $I_3$. Noting that $\kappa_0 > (2+\delta)/\delta$, by Proposition 2.5(i) in [35], we have
$$\begin{aligned}
|(I_3)_{k,l}| &\le C\sum_{S_2}\left[E|V_{ik}|^{2+\delta}\right]^{2/(2+\delta)}\alpha^{\delta/(2+\delta)}(j - i)\\
&\le C\frac{n}{2^m}\left(\frac{2^m}{n}\right)^2\left[E|X_{1k}|^{2+\delta}\right]^{2/(2+\delta)}\sum_{j=d_n}^\infty\alpha^{\delta/(2+\delta)}(j)\\
&= O\left(\frac{2^m}{n}\,d_n^{-\kappa_0\delta/(2+\delta)+1}\right) = O\left(\frac{2^m}{n}\,d_n\cdot d_n^{-\kappa_0\delta/(2+\delta)}\right) = o(1). \tag{25}
\end{aligned}$$
From Equations (22)–(25), we obtain
$$\mathrm{Var}(U_n(t)) = \tau(1-\tau)\kappa(t)E(X_1X_1^T)\int_0^1K_0^2(0,s)\,ds + o(1).$$
Similar to the proof of Theorem 2 in [42], by using the small-block and large-block technique and the Cramér–Wold device, one can show that
$$U_n(t) \xrightarrow{D} N\left(0,\; \tau(1-\tau)\kappa(t)\omega_0^2\Psi_x\right).$$
This, in conjunction with Theorem 1 and the Slutsky theorem, proves Corollary 2. □

5. Simulation Study

To explore the numerical performance of quantile-wavelet estimation, we compare our estimator with a local linear kernel estimator [43] and a spline estimator [44] under quantile regression. We call these the QR-wavelet, QR-local-linear and QR-splines methods, respectively. In the simulation study, our goal is to show that the QR-wavelet is robust to heavy-tailed data and more adaptive to a nonsmooth nonparametric function than the local linear and spline methodologies. The data are of the form
$$Y_i = X_i\beta(i/n) + \epsilon_{\tau,i}, \quad i = 1,\ldots,n,$$
where the $X_i$ are random design points generated independently from the normal distribution $N(2, 0.1^2)$, and $\epsilon_{\tau,i} = \varepsilon_i - F^{-1}(\tau)$ with $F$ the distribution function of $\varepsilon_i$. Here, $F^{-1}(\tau)$ is subtracted from $\varepsilon_i$ to make the $\tau$th quantile of $\epsilon_{\tau,i}$ zero for identifiability.
We set $n = 200$, 300 and 500; $\tau = 0.10$, 0.25, 0.50, 0.75 and 0.90; and $\varepsilon_i$ is drawn either from the normal distribution $N(0, \sigma^2)$ with $\sigma = 0.1$ or from the $t$ distribution with $d = 5$ degrees of freedom (denoted $t(d)$). We consider two particular curves $\beta(t)$, $t \in [0,1]$, as follows:
Case 1. Pwpn (piecewise polynomial function): $\beta(t) = 4t^2(3 - 4t)\mathbf{1}_{[0,0.5]}(t) + \left[\frac{4}{3}t(4t^2 - 10t + 7) - 1.5\right]\mathbf{1}_{(0.5,0.75]}(t) + \frac{16}{3}t(t-1)^2\mathbf{1}_{(0.75,1]}(t)$. The function is generally smooth except for a jump at $t = 0.5$.
Case 2. Blocks: $\beta(t) = \frac{0.6}{9.2}\{4\,\mathrm{ssgn}(t - 0.1) - 5\,\mathrm{ssgn}(t - 0.13) + 3\,\mathrm{ssgn}(t - 0.15) - 4\,\mathrm{ssgn}(t - 0.23) + 5\,\mathrm{ssgn}(t - 0.25) - 4.2\,\mathrm{ssgn}(t - 0.4) + 2.1\,\mathrm{ssgn}(t - 0.44) + 4.3\,\mathrm{ssgn}(t - 0.65) - 3.1\,\mathrm{ssgn}(t - 0.76) + 2.1\,\mathrm{ssgn}(t - 0.78) - 4.2\,\mathrm{ssgn}(t - 0.81)\} + 0.2$, where $\mathrm{ssgn}(t) = [1 + \mathrm{sgn}(t)]/2$ with $\mathrm{sgn}(t) = \mathbf{1}_{(0,\infty)}(t) - \mathbf{1}_{(-\infty,0)}(t)$. It is a step function with many jumps, which pose difficulties for the local linear and spline smoothing methods.
Pwpn and Blocks are shown in Figure 1 and Figure 2, respectively. In the study, we use the Haar wavelet, which is the simplest of the wavelets. For a given sample size, we take $2^m = 0.6n^{2/3}$ in the QR-wavelet; in the QR-local-linear, the bandwidth $h$ is chosen by a “leave-one-out” cross-validation procedure and we use the Gaussian kernel $K(t) = \exp(-t^2/2)/\sqrt{2\pi}$; in the QR-splines, we use piecewise polynomials of degree three (cubic splines) with $0.5n^{1/2}\log n$ knots. The performance of the estimators is evaluated via the mean squared error (MSE) over 200 replications, defined by
$$\mathrm{MSE} = \frac{1}{M}\sum_{k=1}^M\left(\hat\beta_\tau(t_k) - \beta(t_k)\right)^2,$$
where $\{t_k, k = 1,\ldots,M\}$ is a sequence of regular grid points. The MSE results for $\hat\beta_\tau(\cdot)$ are listed in Table 1 for Case I (Pwpn) and Table 2 for Case II (Blocks). The true functions and their estimated curves for the two cases are depicted in Figure 1 and Figure 2 for $n = 300$ and $\tau = 0.25$, 0.5 and 0.75.
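For reference, the following is a minimal sketch of this data-generating process and the MSE computation for Case 1, reusing the hypothetical qr_wavelet_scalar helper sketched in Section 2; the function names and the rounding of $2^m$ to an integer power of two are our choices, not the paper's.

    import numpy as np
    from scipy.stats import norm, t as t_dist

    def beta_pwpn(t):
        # Case 1 (Pwpn): piecewise polynomial, smooth except for a jump at t = 0.5.
        t = np.asarray(t, dtype=float)
        return np.where(t <= 0.5, 4 * t**2 * (3 - 4 * t),
               np.where(t <= 0.75, (4 / 3) * t * (4 * t**2 - 10 * t + 7) - 1.5,
                        (16 / 3) * t * (t - 1)**2))

    def simulate(n, tau, rng, t5_errors=False):
        # Y_i = X_i * beta(i/n) + eps_{tau,i}, with eps_{tau,i} = e_i - F^{-1}(tau)
        # so that the tau-th conditional quantile of the error is zero.
        grid = np.arange(1, n + 1) / n
        x = rng.normal(2.0, 0.1, size=n)
        if t5_errors:
            eps = rng.standard_t(5, size=n) - t_dist.ppf(tau, df=5)
        else:
            eps = rng.normal(0.0, 0.1, size=n) - norm.ppf(tau, scale=0.1)
        return x, x * beta_pwpn(grid) + eps

    rng = np.random.default_rng(0)
    n, tau = 300, 0.5
    m = int(round(np.log2(0.6 * n**(2 / 3))))   # resolution level: 2^m ≈ 0.6 n^{2/3}
    x, y = simulate(n, tau, rng, t5_errors=True)
    t_grid = np.linspace(0.01, 0.99, 99)        # regular grid {t_k} for the MSE
    beta_hat = np.array([qr_wavelet_scalar(y, x, t0, tau, m) for t0 in t_grid])
    print("MSE:", np.mean((beta_hat - beta_pwpn(t_grid))**2))

Averaging this MSE over independent replications reproduces the kind of entries reported in Table 1.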
From Table 1 and Table 2, we can make the following observations: (i) The MSEs of each time-varying coefficient $\beta(\cdot)$ obtained by the wavelet, local linear and spline techniques decrease as the sample size $n$ increases, and the accuracy of the QR-wavelet is clearly higher than that of the QR-local-linear and QR-splines. (ii) All QR methods work well when the random error comes from the $t$ distribution with five degrees of freedom; that is, QR is robust to heavy-tailed data. Based on MSE alone, for Pwpn, which is generally smooth except for one jump, all three methods perform almost equally well, but for Blocks, with its many jumps, the QR-wavelet performs better than the QR-local-linear and QR-splines. (iii) All three methods perform well at different quantile levels, including the high and low quantile levels ($\tau = 0.9$ (high), $\tau = 0.1$ (low)). We also conducted some experiments with the above settings using least-squares estimation and found that those estimators exhibit systematic bias, especially at high and low quantile levels. The QR-wavelet generally performs better than the QR-local-linear and QR-splines at these extreme quantile levels when the sample size is large (e.g., $n = 500$) and there are multiple jump points (e.g., Blocks). From Figure 1 and Figure 2, the estimates based on the QR-local-linear and QR-splines are inferior to those of the QR-wavelet. For example, in Pwpn, the QR-local-linear and QR-splines cannot characterize the shape of the function on the interval $(0.4, 0.6)$; in Blocks, they cannot locate the jump points, while the QR-wavelet detects these jump points and represents the localized features of Pwpn and Blocks as a whole. Compared with the local linear and spline methods, the wavelet technique has great advantages in characterizing the local features of the underlying curves. Therefore, the QR-wavelet overwhelmingly outperforms the QR-local-linear and QR-splines for the discontinuous/irregular functional coefficients in our time-varying coefficient models.

Author Contributions

Conceptualization, X.Z.; methodology, X.Z.; validation, G.Y. and Y.X.; investigation, X.Z.; writing—original draft preparation, X.Z.; writing—review and editing, X.Z.; supervision, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Chinese National Social Science Fund (Grant No. 19BTJ034), the National Natural Science Foundation of China (Grant No. 12171242, 11971235) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (KYCX22_2210, KYCX21_1940).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cochrane, J.H. Asset Pricing; Princeton University Press: Princeton, NJ, USA, 2001.
2. Tsay, R. Analysis of Financial Time Series; Wiley: New York, NY, USA, 2002.
3. Robinson, P.M. Nonparametric Estimation of Time-Varying Parameters. In Statistical Analysis and Forecasting of Economic Structural Change; Hackl, P., Ed.; Springer: Berlin, Germany, 1989; pp. 164–253.
4. Orbe, S.; Ferreira, E.; Rodríguez-Póo, J. Nonparametric estimation of time varying parameters under shape restrictions. J. Econom. 2005, 126, 53–77.
5. Cai, Z. Trending time-varying coefficient time series models with serially correlated errors. J. Econom. 2007, 136, 163–188.
6. Li, D.; Chen, J.; Lin, Z. Statistical inference in partially time-varying coefficient models. J. Stat. Plan. Inference 2011, 141, 995–1013.
7. Wu, W.; Zhou, Z. Nonparametric inference for time-varying coefficient quantile regression. J. Bus. Econ. Stat. 2017, 35, 98–109.
8. Hastie, T.; Tibshirani, R. Varying-coefficient models. J. R. Stat. Soc. Ser. B 1993, 55, 757–796.
9. Fan, J.; Zhang, W. Statistical estimation in varying coefficient models. Ann. Stat. 1999, 27, 1491–1518.
10. Hoover, D.R.; Rice, J.A.; Wu, C.O.; Yang, L.P. Nonparametric smoothing estimates of time-varying coefficient models with longitudinal data. Biometrika 1998, 85, 809–822.
11. Huang, J.Z.; Wu, C.O.; Zhou, L. Polynomial spline estimation and inference for varying coefficient models with longitudinal data. Stat. Sin. 2004, 14, 763–788.
12. Zhou, Z.; Wu, W.B. Simultaneous inference of linear models with time varying coefficients. J. R. Stat. Soc. Ser. B 2010, 72, 513–531.
13. Koenker, R. Quantile Regression; Cambridge University Press: Cambridge, UK, 2005.
14. Honda, T. Quantile regression in varying coefficient models. J. Stat. Plan. Inference 2004, 121, 113–125.
15. Cai, Z.; Xu, X. Nonparametric quantile estimations for dynamic smooth coefficient models. J. Am. Stat. Assoc. 2009, 104, 371–383.
16. Kim, M.O. Quantile regression with varying coefficients. Ann. Stat. 2007, 35, 92–108.
17. Andriyana, Y.; Gijbels, I. Quantile regression in heteroscedastic varying coefficient models. AStA Adv. Stat. Anal. 2017, 101, 151–176.
18. Antoniadis, A.; Gregoire, G.; McKeague, I.W. Wavelet methods for curve estimation. J. Am. Stat. Assoc. 1994, 89, 1340–1353.
19. Donoho, D.L.; Johnstone, I.M. Adapting to unknown smoothness via wavelet shrinkage. J. Am. Stat. Assoc. 1995, 90, 1200–1224.
20. Hall, P.; Patil, P. Formulae for mean integrated squared error of non-linear wavelet-based density estimators. Ann. Stat. 1995, 23, 905–928.
21. Härdle, W.; Kerkyacharian, G.; Picard, D.; Tsybakov, A. Wavelets, Approximation and Statistical Applications; Springer: New York, NY, USA, 1998.
22. Vidakovic, B. Statistical Modeling by Wavelets; Wiley: New York, NY, USA, 1999.
23. Doosti, H.; Afshari, M.; Niroumand, H.A. Wavelets for nonparametric stochastic regression with mixing stochastic process. Commun. Stat. Theory Methods 2008, 37, 373–385.
24. Li, L.; Xiao, Y. Wavelet-based estimation of regression function with strong mixing errors under fixed design. Commun. Stat. Theory Methods 2017, 46, 4824–4842.
25. Zhou, X.C.; Lin, J.G. Asymptotic properties of wavelet estimators in semiparametric regression models under dependent errors. J. Multivar. Anal. 2013, 122, 251–270.
26. Ding, L.; Chen, P.; Li, Y. Berry–Esseen bound of wavelet estimator in heteroscedastic regression model with random errors. Int. J. Comput. Math. 2019, 96, 821–852.
27. Zhou, X.; You, J. Wavelet estimation in varying-coefficient partially linear regression models. Stat. Probab. Lett. 2004, 68, 91–104.
28. Lu, Y.; Li, Z. Wavelet estimation in varying-coefficient models. Chin. J. Appl. Probab. 2009, 25, 409–420.
29. Zhou, X.C.; Xu, Y.Z.; Lin, J.G. Wavelet estimation in varying coefficient models for censored dependent data. Stat. Probab. Lett. 2017, 122, 179–189.
30. Bahadur, R.R. A note on quantiles in large samples. Ann. Math. Stat. 1966, 37, 577–581.
31. He, X.M.; Shao, Q.M. A general Bahadur representation of M-estimators and its application to linear regression with nonstochastic designs. Ann. Stat. 1996, 24, 2608–2630.
32. Yang, W.; Hu, S.; Wang, X. The Bahadur representation for sample quantiles under dependent sequence. Acta Math. Appl. Sin. Engl. Ser. 2019, 35, 521–531.
33. Zhou, X.; Zhu, F. Wavelet-M-estimation for time-varying coefficient time series models. Discret. Dyn. Nat. Soc. 2020, 2020, 1025452.
34. Doukhan, P. Mixing: Properties and Examples; Lecture Notes in Statistics; Springer: Berlin, Germany, 1994; Volume 85.
35. Fan, J.; Yao, Q. Nonlinear Time Series: Nonparametric and Parametric Methods; Springer: New York, NY, USA, 2003.
36. Aronszajn, N. Theory of reproducing kernels. Trans. Am. Math. Soc. 1950, 68, 337–404.
37. Hong, S.Y. Bahadur representation and its applications for local polynomial estimates in nonparametric M-regression. J. Nonparametr. Stat. 2003, 15, 237–251.
38. Walter, G.G. Wavelets and Other Orthogonal Systems with Applications; CRC Press: Boca Raton, FL, USA, 1994.
39. Kong, E.; Xia, Y. A single-index quantile regression model and its estimation. Econom. Theory 2012, 28, 730–768.
40. Knight, K. Limiting distributions of L1 regression estimators under general conditions. Ann. Stat. 1998, 26, 755–770.
41. Pollard, D. Asymptotics for least absolute deviation regression estimators. Econom. Theory 1991, 7, 186–199.
42. Cai, Z.; Fan, J.; Yao, Q. Functional-coefficient regression models for nonlinear time series. J. Am. Stat. Assoc. 2000, 95, 941–956.
43. Wand, M.P.; Jones, M.C. Kernel Smoothing; Chapman and Hall: London, UK, 1995.
44. Chambers, J.M.; Hastie, T.J. Statistical Models in S; Wadsworth & Brooks/Cole: Pacific Grove, CA, USA, 1992.
Figure 1. The true time-varying coefficient function Pwpn and its QR-wavelet, QR-local-linear and QR-splines estimates for $n = 300$ and $\tau = 0.25$, 0.50 and 0.75, with errors $N(0, 0.1^2)$ and $t(5)$.
Figure 2. The true time-varying coefficient function Blocks and its QR-wavelet, QR-local-linear and QR-splines estimates for $n = 300$ and $\tau = 0.25$, 0.50 and 0.75, with errors $N(0, 0.1^2)$ and $t(5)$.
Table 1. MSEs of wavelet, local linear and spline smoothing in Case I: Pwpn.

                 QR-wavelet            QR-local-linear       QR-splines
  n     τ      N(0,0.1²)   t(5)      N(0,0.1²)   t(5)      N(0,0.1²)   t(5)
  200   0.10   0.00073    0.02273    0.00644    0.00479    0.01032    0.00824
        0.25   0.00065    0.00125    0.00383    0.00299    0.00859    0.00584
        0.50   0.00062    0.00085    0.00213    0.00293    0.00530    0.00529
        0.75   0.00065    0.00104    0.00293    0.00262    0.00860    0.00532
        0.90   0.00074    0.02208    0.00474    0.00345    0.01011    0.00660
  300   0.10   0.00042    0.00966    0.00605    0.00287    0.01049    0.00557
        0.25   0.00040    0.00157    0.00355    0.00271    0.00868    0.00539
        0.50   0.00037    0.00055    0.00197    0.00250    0.00529    0.00524
        0.75   0.00039    0.00141    0.00279    0.00244    0.00867    0.00534
        0.90   0.00041    0.01121    0.00455    0.00254    0.01017    0.00561
  500   0.10   0.00011    0.00319    0.00516    0.00262    0.00514    0.00319
        0.25   0.00010    0.00041    0.00311    0.00227    0.00358    0.00309
        0.50   0.00010    0.00026    0.00176    0.00213    0.00303    0.00300
        0.75   0.00010    0.00045    0.00255    0.00221    0.00354    0.00306
        0.90   0.00011    0.00354    0.00411    0.00252    0.00532    0.00339
Table 2. MSEs of wavelet, local linear and spline smoothing in Case II: Blocks.

                 QR-wavelet            QR-local-linear       QR-splines
  n     τ      N(0,0.1²)   t(5)      N(0,0.1²)   t(5)      N(0,0.1²)   t(5)
  200   0.10   0.00573    0.01904    0.01114    0.00714    0.01730    0.00897
        0.25   0.00503    0.00368    0.00661    0.00593    0.00921    0.00778
        0.50   0.00378    0.00347    0.00599    0.00587    0.00803    0.00765
        0.75   0.00418    0.00384    0.00721    0.00613    0.00894    0.00773
        0.90   0.00574    0.02662    0.00865    0.00822    0.01150    0.01086
  300   0.10   0.00624    0.01358    0.01059    0.00622    0.01751    0.00777
        0.25   0.00611    0.00553    0.00647    0.00556    0.00925    0.00764
        0.50   0.00484    0.00448    0.00584    0.00567    0.00800    0.00761
        0.75   0.00674    0.00557    0.00704    0.00570    0.00897    0.00775
        0.90   0.00682    0.01421    0.00852    0.00592    0.01139    0.00794
  500   0.10   0.00562    0.00426    0.00958    0.00562    0.01541    0.00694
        0.25   0.00385    0.00368    0.00634    0.00528    0.00751    0.00671
        0.50   0.00310    0.00285    0.00557    0.00528    0.00706    0.00669
        0.75   0.00357    0.00325    0.00679    0.00550    0.00757    0.00681
        0.90   0.00408    0.00481    0.00813    0.00545    0.01000    0.00684
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
