Article

Efficient Estimation for the Derivative of Nonparametric Function by Optimally Combining Quantile Information

School of Mathematics and Big Data, Dezhou University, Dezhou 253023, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(12), 2387; https://doi.org/10.3390/sym13122387
Submission received: 11 November 2021 / Revised: 30 November 2021 / Accepted: 4 December 2021 / Published: 10 December 2021
(This article belongs to the Special Issue Probability, Statistics and Applied Mathematics)

Abstract

In this article, we focus on efficient estimators of the derivative of the nonparametric function in the nonparametric quantile regression model. We develop two ways of combining quantile regression information to derive the estimators. One is the weighted composite quantile regression estimator based on the quantile weighted loss function; the other is the weighted quantile average estimator based on the weighted average of quantile regression estimators at single quantiles. Furthermore, by minimizing the asymptotic variance, the optimal weight vector is computed, and consequently, the optimal estimator is obtained. In addition, we conduct simulations to evaluate the performance of our proposed estimators under different symmetric error distributions. The simulation studies illustrate that both estimators work better than the local linear least squares estimator for all the symmetric errors considered except the normal error, and that the weighted quantile average estimator performs better than the weighted composite quantile regression estimator in most situations.

1. Introduction

Consider the following general nonparametric regression model:
Y = m(T) + σ(T)ε,
where Y ∈ ℝ is the response variable, and T is a scalar covariate independent of the random error ε. m(t) = E(Y | T = t) is an unknown smooth function, and σ(T) is a non-negative function representing the conditional standard deviation, with E(ε) = 0 and var(ε) = 1. As is well known, many estimation methods, including the kernel regression method, the spline smoothing method, the orthogonal series approximation method and the local polynomial method, have been investigated for estimating the nonparametric regression function m(t); relevant works include, but are not limited to, [1,2,3,4,5]. However, in many situations we need to estimate m′(t), that is, the derivative of m(t). Therefore, in this article, we intend to derive efficient estimators of m′(t).
Since the seminal work of Koenker and Bassett [6], quantile regression has been studied in depth and applied in econometrics, biomedicine and other fields; we refer to Koenker [7] for a comprehensive treatment. Based on quantile regression, Zou and Yuan [8] proposed composite quantile regression (abbreviated as CQR) for linear models, which combines strength across multiple quantile regression models by assuming that the regression coefficients are the same across quantile levels. The advantage of the CQR method is that it can significantly improve the relative efficiency of the relevant estimators, and it is usually used in regression models with non-normal error distributions. Kai et al. [9] introduced the local CQR for the general nonparametric regression model. As a kind of nonlinear smoother, the local CQR method does not require a finite error variance and hence can work well even when the error distribution has infinite variance. Meanwhile, the local CQR method can significantly improve the estimation efficiency of local linear least squares in some cases. Kai et al. [10] applied the local CQR method to a semiparametric varying-coefficient partially linear model, and the results showed that the local CQR method outperformed both the least squares and the single quantile regression. Jiang et al. [11] applied the two-step CQR method to the single index model and established its efficiency. Ning and Tang [12] considered estimation and testing issues for the CQR method with missing covariates. Recently, many researchers have applied the CQR method to various other models under different data settings; see, for example, [13,14,15,16,17,18,19].
To make full use of quantile information, one may consider combining information over different quantiles through the criterion function of the estimation procedure, or combining information based on estimators at different quantiles. In this article, we argue that, although a composite quantile regression estimator based on an aggregated criterion function may outperform the ordinary least squares estimator for some non-normal distributions, simple averaging usually does not make full use of all the quantile information. Information at different quantiles is correlated, and improperly using multiple quantile information may even reduce efficiency. Roughly speaking, simple averaging delivers good estimators when the error distribution is close to normal; in fact, for nonparametric regression, a simple averaging-based composite quantile regression estimator is asymptotically equivalent to the local least squares estimator. However, the main purpose of combining quantile information is to improve efficiency when the error distribution is not normal and the ordinary least squares method does not work well. It is therefore important to combine quantile information appropriately to achieve efficiency. In this paper, we mainly study the optimal combination of quantile regression information for estimating the derivative of the nonparametric function. As stated above, we propose and develop two ways of combining quantile information. One is the weighted local CQR estimator based on weighted quantile loss functions, and the other is the weighted quantile average estimator based on a weighted average of quantile regression estimators at single quantiles. Our proposed estimators inherit many of the advantages of quantile regression. Both the theoretical results and the simulation studies illustrate that the weighted local CQR estimator and the weighted quantile average estimator work better than the common local linear least squares estimator for all the symmetric errors considered except the normal error, and that the weighted quantile average estimator performs better than the weighted composite quantile regression estimator in most situations.
The rest of the paper is organized as follows. The weighted local CQR estimator and the weighted quantile average estimator are proposed in Section 2, together with the main theoretical results, including asymptotic normality and the optimal weights. In Section 3, the asymptotic relative efficiency of the weighted local CQR estimator and the weighted quantile average estimator is compared. The feasibility of our proposed methods is verified by random simulations in Section 4. The technical proofs of the theoretical results are presented in Section 5. Conclusions are given in Section 6.

2. Methodology

Firstly, we give some conditions and notations which are required in our subsequent discussions. Let f(·) and F(·) be the density function and cumulative distribution function of the error, respectively. Denote by f_T(·) the marginal density function of the covariate T. Let 𝒯 be the σ-field generated by { T_1, T_2, …, T_n }. Choose the kernel K(·) to be a symmetric density function and define:
μ_j = ∫ u^j K(u) du and ν_j = ∫ u^j K²(u) du, j = 0, 1, 2,
c_k = F^{−1}(τ_k) and τ_{kk′} = min(τ_k, τ_{k′}) − τ_k τ_{k′}.
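For concreteness, these kernel constants can be evaluated numerically for the Gaussian kernel, which is the kernel used in the simulations of Section 4. The following small sketch (ours, purely illustrative; the function names mu and nu are not from the paper) checks the values of μ_j and ν_j:

```python
# Numerical check of the kernel moments mu_j and nu_j for the Gaussian kernel.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

K = norm.pdf  # Gaussian kernel K(u)

def mu(j):
    """mu_j = integral of u^j * K(u) du."""
    return quad(lambda u: u ** j * K(u), -np.inf, np.inf)[0]

def nu(j):
    """nu_j = integral of u^j * K(u)^2 du."""
    return quad(lambda u: u ** j * K(u) ** 2, -np.inf, np.inf)[0]

print(mu(2), mu(4))   # 1.0 and 3.0 for the Gaussian kernel
print(nu(0), nu(2))   # 1/(2*sqrt(pi)) ≈ 0.2821 and 1/(4*sqrt(pi)) ≈ 0.1410
```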
The following Conditions (C1)–(C4) are needed for Theorems 1–3:
(C1)
m(·) has a continuous third derivative in a neighborhood of t_0;
(C2)
f T ( · ) , the marginal density function of T, is differentiable and positive in the neighborhood of t 0 ;
(C3)
The conditional variance σ 2 ( · ) is continuous in the neighborhood of t 0 ;
(C4)
f ( · ) , the density function of the error ε , is always positive on its support.

2.1. Weighted Composite Quantile Regression Estimation

Let Q(τ) be the τ-th quantile of ε and Q_{Y|T}(τ|t) be the conditional τ-th quantile of Y given T = t. Then, the nonparametric regression model (1) has the following quantile regression representation:
Q_{Y|T}(τ|t) = m(t) + σ(t) Q(τ).
Suppose that (t_i, Y_i), i = 1, …, n, are independent and identically distributed random samples from model (2). We consider estimating m′(·) at t_0 over { τ_k, k = 1, 2, …, q } jointly based on the following weighted local quadratic CQR loss function:
( â_1, …, â_q, b̂_1, b̂_2 ) = argmin_{a_1,…,a_q,b_1,b_2} Σ_{k=1}^q ω_k Σ_{i=1}^n ρ_{τ_k}{ Y_i − a_k − b_1(t_i − t_0) − (b_2/2)(t_i − t_0)² } K( (t_i − t_0)/h ),
where ρ_{τ_k}(z) = τ_k z − z I(z < 0), k = 1, …, q, are the quantile loss functions at the q quantile positions τ_k = k/(q + 1), and ω_k ≥ 0, k = 1, …, q, with Σ_{k=1}^q ω_k = 1 are weights. Hereinafter, we use ω = (ω_1, …, ω_q)^T to denote the weight vector in different scenarios whenever no confusion arises. Then, the weighted local quadratic composite quantile regression (WCQR) estimator of m′(t_0) is given by:
m̂′_WCQR(t_0) = b̂_1.
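To illustrate how the weighted local quadratic CQR fit above can be computed in practice, the following is a minimal sketch in Python. It is not the authors' implementation: the function names (check_loss, local_wcqr_derivative) are ours, and a generic Nelder–Mead search is used on the non-smooth convex objective purely for simplicity; the problem can equally be written as a linear program, since the objective is piecewise linear in (a_1, …, a_q, b_1, b_2).

```python
# Minimal sketch of the weighted local quadratic CQR estimate of m'(t0).
import numpy as np
from scipy.optimize import minimize

def check_loss(z, tau):
    """Quantile (check) loss rho_tau(z) = z * (tau - I(z < 0))."""
    return z * (tau - (z < 0))

def local_wcqr_derivative(t, y, t0, h, taus, omega):
    """Return b1_hat, the WCQR estimate of the derivative m'(t0)."""
    q = len(taus)
    k_w = np.exp(-0.5 * ((t - t0) / h) ** 2)           # Gaussian kernel weights
    def loss(theta):
        a, (b1, b2) = theta[:q], theta[q:]
        fit = b1 * (t - t0) + 0.5 * b2 * (t - t0) ** 2
        return sum(omega[k] * np.sum(k_w * check_loss(y - a[k] - fit, taus[k]))
                   for k in range(q))
    theta0 = np.concatenate([np.full(q, np.median(y)), [0.0, 0.0]])
    res = minimize(loss, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-8})
    return res.x[q]                                     # b1_hat estimates m'(t0)
```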
In the subsequent Theorem 1, we present the asymptotic bias, variance and asymptotic normality of m̂′_WCQR(t_0); the proofs can be found in Section 5.
Theorem 1.
Suppose that t_0 is an interior point of the support of f_T(·). Under the regularity Conditions (C1)–(C4), if h → 0 and nh³ → ∞, then the asymptotic conditional bias and variance of m̂′_WCQR(t_0) are given, respectively, by:
bias{ m̂′_WCQR(t_0) | 𝒯 } = (1/6) m‴(t_0) (μ_4/μ_2) h² + o_p(h²),
var{ m̂′_WCQR(t_0) | 𝒯 } = [1/(nh³)] [ν_2 σ²(t_0)/(μ_2² f_T(t_0))] R_ω(q) + o_p(1/(nh³)),
where
R_ω(q) = [ Σ_{k=1}^q Σ_{k′=1}^q ω_k ω_{k′} τ_{kk′} ] / { Σ_{k=1}^q ω_k f(c_k) }².
Furthermore, we have the following asymptotic normal result:
√(nh³) { m̂′_WCQR(t_0) − m′(t_0) − (1/6) m‴(t_0) (μ_4/μ_2) h² } →_D N( 0, [ν_2 σ²(t_0)/(μ_2² f_T(t_0))] R_ω(q) ),
where →_D stands for convergence in distribution.
Remark 1.
If we use equal weights ω_k = 1/q, k = 1, …, q, over all quantiles, then the asymptotic variance of the unweighted local quadratic CQR estimator is given by [ν_2 σ²(t_0)/(μ_2² f_T(t_0))] R(q), where:
R(q) = [ Σ_{k=1}^q Σ_{k′=1}^q τ_{kk′} ] / { Σ_{k=1}^q f(c_k) }².
However, the importance of the information at different quantiles is often different, and the information at different quantiles is correlated, depending on the error distribution. Thus, it is essential to combine the information across quantiles optimally.
From Theorem 1, the asymptotic variance of m̂′_WCQR(t_0) depends on the weight vector ω through R_ω(q); thus, a natural way to select the optimal weight vector is to minimize R_ω(q) in (7). It is easy to see that the optimal weight vector ω* is the solution of the following optimization problem:
ω* = argmin_ω R_ω(q) = argmin_ω [ Σ_{k=1}^q Σ_{k′=1}^q ω_k ω_{k′} τ_{kk′} ] / { Σ_{k=1}^q ω_k f(c_k) }²,
subject to ω_k ≥ 0, Σ_{k=1}^q ω_k = 1 and Σ_{k=1}^q ω_k c_k f(c_k) = 0.
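A hedged numerical sketch of this constrained minimization is given below. For illustration only, the error distribution (and hence f and F^{−1}) is taken to be standard normal; in practice, f and c_k would be replaced by estimates based on residuals. The solver choice (SLSQP) and variable names are ours.

```python
# Sketch: choose the WCQR weights by minimizing R_omega(q) subject to
# sum(omega) = 1, sum(omega_k * c_k * f(c_k)) = 0 and omega_k >= 0.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

q = 9
taus = np.arange(1, q + 1) / (q + 1)
c = norm.ppf(taus)                                     # c_k = F^{-1}(tau_k), assumed N(0,1)
fc = norm.pdf(c)                                       # f(c_k)
tau_mat = np.minimum.outer(taus, taus) - np.outer(taus, taus)   # tau_{kk'}

def R_omega(w):
    return w @ tau_mat @ w / (w @ fc) ** 2

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "eq", "fun": lambda w: w @ (c * fc)}]
res = minimize(R_omega, np.full(q, 1.0 / q), method="SLSQP",
               bounds=[(0.0, 1.0)] * q, constraints=cons)
omega_star = res.x
print(R_omega(np.full(q, 1.0 / q)), R_omega(omega_star))  # equal weights vs optimal
```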

2.2. Weighted Quantile Average Estimation

As described in Section 2.1, the WCQR estimator combines the information at different quantiles by means of the criterion function. In the following, we consider an alternative approach which combines information based on the estimators at different quantiles. As stated in Section 1, we focus on the estimation of m′(t). For a fixed τ_k, 0 < τ_k < 1, consider the local quadratic nonparametric quantile regression:
( â, b̂_1, b̂_2 ) = argmin_{a,b_1,b_2} Σ_{i=1}^n ρ_{τ_k}{ Y_i − a − b_1(t_i − t_0) − (b_2/2)(t_i − t_0)² } K( (t_i − t_0)/h ).
The solution for b_1 in the above optimization, denoted as m̂′(τ_k, t_0), provides an estimator of m′(t_0). In the following, we construct the weighted quantile average nonparametric estimator of m′(t_0) based on the weighted average of m̂′(τ_k, t_0) over τ_k = k/(q + 1), k = 1, 2, …, q:
m̂′_WQAE(t_0) = Σ_{k=1}^q ω_k m̂′(τ_k, t_0),
where ω = (ω_1, ω_2, …, ω_q)^T satisfies the following conditions:
Σ_{k=1}^q ω_k = 1,
Σ_{k=1}^q ω_k F^{−1}(τ_k) = 0.
The introduction of ω with condition (12) eliminates the bias term caused by an asymmetric random error. Meanwhile, a weight vector ω satisfying conditions (11) and (12) guarantees the consistency and asymptotic unbiasedness of m̂′_WQAE(t_0), which can also be seen from the proof of its asymptotic properties in Section 5. Among the weight vectors satisfying conditions (11) and (12), we can select the optimal one by optimization; the details are discussed below.
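The following minimal sketch illustrates the construction: fit a separate local quadratic quantile regression at each τ_k, take the slope estimates m̂′(τ_k, t_0), and average them with the weights ω. As before, the function names and the use of Nelder–Mead are our own simplifications rather than the authors' implementation.

```python
# Minimal sketch of the weighted quantile average estimator (WQAE) of m'(t0).
import numpy as np
from scipy.optimize import minimize

def local_quantile_slope(t, y, t0, h, tau):
    """Slope b1 of the local quadratic tau-quantile fit, i.e. m'_hat(tau, t0)."""
    k_w = np.exp(-0.5 * ((t - t0) / h) ** 2)           # Gaussian kernel weights
    def loss(theta):
        a, b1, b2 = theta
        z = y - a - b1 * (t - t0) - 0.5 * b2 * (t - t0) ** 2
        return np.sum(k_w * z * (tau - (z < 0)))       # check loss
    res = minimize(loss, [np.quantile(y, tau), 0.0, 0.0], method="Nelder-Mead")
    return res.x[1]

def wqae_derivative(t, y, t0, h, taus, omega):
    """Weighted average of the single-quantile slope estimates."""
    slopes = np.array([local_quantile_slope(t, y, t0, h, tau) for tau in taus])
    return omega @ slopes
```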
In the subsequent Theorem 2, we present the asymptotic bias, variance and asymptotic normality of m̂′_WQAE(t_0); the proofs can be found in Section 5.
Theorem 2.
Suppose that t_0 is an interior point of the support of f_T(·), and the weight vector ω satisfies conditions (11) and (12). Under the regularity Conditions (C1)–(C4), if h → 0 and nh³ → ∞, then the asymptotic conditional bias and variance of the weighted quantile average estimator m̂′_WQAE(t_0) are given, respectively, by:
bias{ m̂′_WQAE(t_0) | 𝒯 } = (1/6) m‴(t_0) (μ_4/μ_2) h² + o_p(h²),
var{ m̂′_WQAE(t_0) | 𝒯 } = [1/(nh³)] [ν_2 σ²(t_0)/(μ_2² f_T(t_0))] J_ω(q) + o_p(1/(nh³)),
where J_ω(q) = ω^T H ω and H is the q × q matrix with (k, k′)-th element τ_{kk′}/{ f(c_k) f(c_{k′}) }, that is,
J_ω(q) = Σ_{k=1}^q Σ_{k′=1}^q ω_k ω_{k′} τ_{kk′} / { f(c_k) f(c_{k′}) }.
Furthermore, conditioning on 𝒯, we have the following asymptotic normal distribution:
√(nh³) { m̂′_WQAE(t_0) − m′(t_0) − (1/6) m‴(t_0) (μ_4/μ_2) h² } →_D N( 0, [ν_2 σ²(t_0)/(μ_2² f_T(t_0))] J_ω(q) ).
Remark 2.
If we simply use equal weights ω_k = 1/q, k = 1, …, q, over all quantiles, then the resulting unweighted quantile average estimator of m′(t_0) has the asymptotic normality in Theorem 2 with J_ω(q) replaced by:
J(q) = (1/q²) Σ_{k=1}^q Σ_{k′=1}^q τ_{kk′} / { f(c_k) f(c_{k′}) }.
Remark 3.
From Theorem 2, the asymptotic variance of m̂′_WQAE(t_0) depends on the weight vector through J_ω(q); thus, a natural way to select the optimal weight vector is to minimize J_ω(q) in (14).
The following theorem gives the optimal weight vector and the optimal weighted quantile average estimator of m′(t_0).
Theorem 3.
Suppose that Conditions (C1)–(C4) hold. Then the optimal weight vector minimizing J_ω(q) is:
ω* = [ (c^T H^{−1} c) H^{−1} 1 − (1^T H^{−1} c) H^{−1} c ] / [ (c^T H^{−1} c)(1^T H^{−1} 1) − (1^T H^{−1} c)² ],
where c is the q-dimensional column vector with k-th element c_k = F^{−1}(τ_k) and 1 is the q-dimensional column vector with all elements equal to 1. Furthermore, the conditional variance of the optimal weighted quantile average estimator of m′(t_0), denoted as m̂′*_WQAE(t_0), is given by:
var{ m̂′*_WQAE(t_0) | 𝒯 } = [1/(nh³)] [ν_2 σ²(t_0)/(μ_2² f_T(t_0))] J_{ω*}(q) + o_p(1/(nh³)),
where:
J_{ω*}(q) = Σ_{k=1}^q Σ_{k′=1}^q ω*_k ω*_{k′} τ_{kk′} / { f(c_k) f(c_{k′}) }.
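As an illustration, the closed-form optimal weight of Theorem 3 can be computed directly once the error distribution is specified. The sketch below assumes a standard normal error purely for concreteness (in practice the density and quantiles would be estimated); the function name is ours.

```python
# Sketch: the optimal WQAE weights of Theorem 3 for a given error distribution.
import numpy as np
from scipy.stats import norm

def wqae_optimal_weights(taus, ppf=norm.ppf, pdf=norm.pdf):
    c = ppf(taus)                                      # c_k = F^{-1}(tau_k)
    fc = pdf(c)                                        # f(c_k)
    tau_mat = np.minimum.outer(taus, taus) - np.outer(taus, taus)
    H = tau_mat / np.outer(fc, fc)                     # H_{kk'} = tau_{kk'}/(f(c_k) f(c_k'))
    ones = np.ones_like(c)
    Hc, H1 = np.linalg.solve(H, c), np.linalg.solve(H, ones)
    cHc, oHc, oHo = c @ Hc, ones @ Hc, ones @ H1
    w = (cHc * H1 - oHc * Hc) / (cHc * oHo - oHc ** 2)  # Theorem 3 formula
    return w, w @ H @ w                                 # omega*, J_{omega*}(q)

taus = np.arange(1, 10) / 10.0                          # q = 9 equally spaced quantiles
w_star, J_star = wqae_optimal_weights(taus)
print(w_star.sum(), w_star @ norm.ppf(taus))            # constraints: ~1 and ~0
```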
Comparing the weighted local quadratic CQR estimator, the weighted quantile average estimator and the local quadratic least squares estimator of m′(t_0), we see that they share the same leading bias term (1/6) m‴(t_0) (μ_4/μ_2) h², whereas their asymptotic variances differ.

3. Comparison of Asymptotic Efficiency

The WQAE differs from the WCQR estimator in several aspects. While the WCQR estimator is based on the aggregation of several quantile loss functions, the WQAE is based on a weighted average of separate estimators from different quantiles. As a result, computing the WQAE only involves q separate low-dimensional minimization problems, whereas the WCQR requires solving one larger minimization problem. In addition, to ensure a proper loss function, the weights ω_k, k = 1, …, q, in R_ω(q) are restricted to be non-negative; by contrast, the weights ω_k, k = 1, …, q, in J_ω(q) can be negative. Obviously, it is computationally appealing to impose fewer constraints on the weights.
From Theorem 1, the mean squared error (MSE) of the WCQR estimator m̂′_WCQR(t_0) is given by:
MSE{ m̂′_WCQR(t_0) } = { (1/6) m‴(t_0) (μ_4/μ_2) }² h⁴ + [1/(nh³)] [ν_2 σ²(t_0)/(μ_2² f_T(t_0))] R_ω(q) + o_p( h⁴ + 1/(nh³) ).
Thus, the optimal variable bandwidth minimizing MSE{ m̂′_WCQR(t_0) } is:
h^opt_WCQR = { R_ω(q) }^{1/7} [ 27 ν_2 σ²(t_0) / ( f_T(t_0) { m‴(t_0) μ_4 }² ) ]^{1/7} n^{−1/7}.
From Theorem 2, the MSE of the WQAE m̂′_WQAE(t_0) is given by:
MSE{ m̂′_WQAE(t_0) } = { (1/6) m‴(t_0) (μ_4/μ_2) }² h⁴ + [1/(nh³)] [ν_2 σ²(t_0)/(μ_2² f_T(t_0))] J_ω(q) + o_p( h⁴ + 1/(nh³) ).
Thus, the optimal variable bandwidth minimizing MSE{ m̂′_WQAE(t_0) } is:
h^opt_WQAE = { J_ω(q) }^{1/7} [ 27 ν_2 σ²(t_0) / ( f_T(t_0) { m‴(t_0) μ_4 }² ) ]^{1/7} n^{−1/7}.
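Plugging each optimal bandwidth back into the corresponding MSE shows that both optimal MSEs decay at the rate n^{−4/7}, with constants driven by R_ω(q) and J_ω(q), respectively. A short derivation sketch (ours, following directly from the two displays above) is:

```latex
\mathrm{MSE}\{\hat m'_{WCQR}(t_0); h^{opt}_{WCQR}\} \propto n^{-4/7}\{R_\omega(q)\}^{4/7},
\qquad
\mathrm{MSE}\{\hat m'_{WQAE}(t_0); h^{opt}_{WQAE}\} \propto n^{-4/7}\{J_\omega(q)\}^{4/7},
\qquad\text{so}\qquad
\frac{\mathrm{MSE}\{\hat m'_{WCQR}(t_0); h^{opt}_{WCQR}\}}{\mathrm{MSE}\{\hat m'_{WQAE}(t_0); h^{opt}_{WQAE}\}}
=\left\{\frac{R_\omega(q)}{J_\omega(q)}\right\}^{4/7}.
```

Hence the two estimators, each evaluated at its own optimal bandwidth, can be compared simply through the ratio R_ω(q)/J_ω(q).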

4. Simulation Studies

In this Section, we implement simulation studies to compare the finite sample performance of our WQAE with that of the WCQR and the local linear least squares estimator. In all examples, the kernel function is chosen to be the Gaussian kernel, and the number of replications is set to 400. Similar to Kai et al. [9], we use the short-cut strategy to select the bandwidth in our simulation studies.
An important quantity to evaluate the performance of different estimators is the average squared error (ASE), which can be represented by:
ASE(ĝ) = (1/n_grid) Σ_{k=1}^{n_grid} { ĝ(u_k) − g(u_k) }²,
with g being m′(·) in the simulation, where { u_k, k = 1, 2, …, n_grid } are the grid points at which the estimator ĝ(·) of g is evaluated. We set n_grid = 200 and let the grid points be evenly distributed over the interval on which m′(·) is estimated. Furthermore, we can also compare two estimators ĝ_1 and ĝ_2 of m′(·) via the ratio of the average squared errors (RASE), defined by:
RASE(ĝ_1, ĝ_2) = ASE(ĝ_2) / ASE(ĝ_1).
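A small sketch of these criteria (ours; the argument names are illustrative) is:

```python
# Sketch of the ASE and RASE criteria used in the simulation studies.
import numpy as np

def ase(g_hat, g_true, grid):
    """Average squared error of an estimate of m' over the grid points."""
    return np.mean((g_hat(grid) - g_true(grid)) ** 2)

def rase(g_hat1, g_hat2, g_true, grid):
    """RASE(g1, g2) = ASE(g2) / ASE(g1); values above 1 favour g_hat1."""
    return ase(g_hat2, g_true, grid) / ase(g_hat1, g_true, grid)

grid = np.linspace(-1.5, 1.5, 200)   # n_grid = 200 evenly spaced evaluation points
```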
In the simulations below, we compare the finite sample behaviors of these three estimators under different conditions, including symmetric and asymmetric errors as well as homoscedastic and heteroscedastic models.

4.1. Homoscedastic Model

We consider the following homoscedastic model:
Model 1: Y = sin(2T) + 2 exp(−16T²) + 0.5ε,
where the covariate T follows N(0, 1), and the nonparametric regression function is m(t) = sin(2t) + 2 exp(−16t²). In our simulation, we generate 400 random samples, each containing n = 200 observations. Our aim is to estimate the derivative of m(t) over [−1.5, 1.5]. It is noted that the Cauchy distribution used below refers to the Cauchy distribution truncated to [−10, 10]. We investigate the finite sample behaviors of the WCQR, WQAE and LSE (the local linear least squares estimator) according to the RASE and ASE under some symmetric and asymmetric errors in the next two examples; a data-generating sketch is given below.
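The following is one replication from Model 1 (ours, for illustration; the error can be swapped for any of the distributions considered, and the truncated Cauchy is drawn by simple rejection):

```python
# One replication from Model 1: Y = sin(2T) + 2*exp(-16*T^2) + 0.5*eps.
import numpy as np

rng = np.random.default_rng(0)

def m1(t):
    return np.sin(2 * t) + 2 * np.exp(-16 * t ** 2)

def m1_deriv(t):
    """True derivative m'(t), used when computing ASE/RASE."""
    return 2 * np.cos(2 * t) - 64 * t * np.exp(-16 * t ** 2)

def truncated_cauchy(size, bound=10.0):
    """Standard Cauchy draws truncated to [-bound, bound] by rejection."""
    out = rng.standard_cauchy(size)
    while np.any(np.abs(out) > bound):
        bad = np.abs(out) > bound
        out[bad] = rng.standard_cauchy(bad.sum())
    return out

n = 200
T = rng.standard_normal(n)                   # covariate T ~ N(0, 1)
eps = rng.standard_normal(n)                 # or rng.laplace(...), truncated_cauchy(n), ...
Y = m1(T) + 0.5 * eps
```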
Example 1.
In this example, we consider Model 1 under different symmetric error distributions. The means and standard deviations of the RASEs over 400 simulations are reported in Table 1. The following can be seen from Table 1:
(1) 
Under the standard normal error N ( 0 , 1 ) , the LSE estimator is the best one among these three estimators, and WQAE performs nearly as well as WCQR.
(2) 
Under the non-normal symmetric errors, WQAE is consistently superior to both the WCQR and LSE methods, and LSE performs the worst; the corresponding RASEs are clearly larger than 1.
Example 2.
In this example, we consider Model 1 under different asymmetric error distributions. The means and standard deviations of the ASEs over 400 simulations are presented in Table 2. It is noted that F(4, 6) denotes the centralized F(4, 6) distribution. The following can be seen from Table 2:
(1) 
WCQR breaks down because its bias is non-vanishing under asymmetric errors; therefore, WCQR performs the worst in the case of asymmetric errors.
(2) 
For all the asymmetric errors considered, WQAE performs consistently better than local linear least squares.

4.2. Heteroscedastic Model

In this subsection, we mainly consider the following heteroscedastic model:
Model 2: Y = T sin(2πT) + σ(T)ε,
where the covariate T follows N(0, 1), σ(t) = (2 + cos(2πt))/10, and the nonparametric regression function is m(t) = t sin(2πt). Our aim is to estimate the derivative of m(t) on [−1.5, 1.5]. In our simulation, we generate 400 random samples, each having n = 200 observations. Similar to the arguments for the homoscedastic model, we evaluate the behaviors of the three estimators under symmetric and asymmetric errors in the following Examples 3 and 4.
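Analogously, one replication from the heteroscedastic Model 2 can be generated as follows (again an illustrative sketch of ours):

```python
# One replication from Model 2: Y = T*sin(2*pi*T) + sigma(T)*eps.
import numpy as np

rng = np.random.default_rng(1)

def m2(t):
    return t * np.sin(2 * np.pi * t)

def m2_deriv(t):
    """True derivative m'(t), used when computing ASE/RASE."""
    return np.sin(2 * np.pi * t) + 2 * np.pi * t * np.cos(2 * np.pi * t)

def sigma_fn(t):
    """Heteroscedastic standard deviation sigma(t) = (2 + cos(2*pi*t))/10."""
    return (2 + np.cos(2 * np.pi * t)) / 10

n = 200
T = rng.standard_normal(n)                   # covariate T ~ N(0, 1)
eps = rng.standard_normal(n)                 # replace with an asymmetric error for Example 4
Y = m2(T) + sigma_fn(T) * eps
```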
Example 3.
We consider Model 2 under different symmetric error distributions in this example. The means and standard deviations of the RASEs over 400 simulations are summarized in Table 3. We have the following findings:
(1) 
Under the standard normal error N(0, 1), LSE outperforms both WQAE and WCQR. Even for the normal error, however, the RASEs are only slightly smaller than 1.
(2) 
For all the symmetric errors considered except the normal error, WQAE performs significantly better than LSE, and the RASEs relative to LSE are clearly greater than 1. Furthermore, WQAE performs better than WCQR in most cases.
Example 4.
In this example, we consider Model 2 under different asymmetric error distributions. The means and standard deviations of the ASEs over 400 simulations are summarized in Table 4. Similar to Example 2, the simulation results show the following:
(1) 
WCQR breaks down because of its non-vanishing bias in the case of asymmetric errors. Therefore, WCQR performs the worst among the three estimators.
(2) 
For all the asymmetric errors considered, WQAE performs consistently better than local linear least squares.

5. Proofs

Before embarking on the proofs of Theorems 1–3, we first give and prove Lemma 1. Suppose that the kernel function K has the finite support [−M, M]. The following notations are needed to present our theoretical results:
Denote
u_k = √(nh) { a_k − m(t_0) − σ(t_0) c_k }, k = 1, …, q,
and
v_j = h^j √(nh) { j! b_j − m^{(j)}(t_0) } / j!, j = 1, 2.
Set x_i = (t_i − t_0)/h, K_i = K(x_i) and Δ_{i,k} = u_k/√(nh) + v_1 x_i/√(nh) + v_2 x_i²/√(nh). Write d_{i,k} = c_k{ σ(t_i) − σ(t_0) } + r_{i,2}, where r_{i,2} = m(t_i) − m(t_0) − m′(t_0)(t_i − t_0) − (m″(t_0)/2)(t_i − t_0)². Define η*_{i,k} = I{ ε_i ≤ c_k − d_{i,k}/σ(t_i) } − τ_k, and let W*_n = ( w*_11, …, w*_1q, w*_21, w*_22 )^T, with
w*_1k = (ω_k/√(nh)) Σ_{i=1}^n K_i η*_{i,k}, k = 1, …, q; w*_2j = (1/√(nh)) Σ_{k=1}^q ω_k Σ_{i=1}^n K_i x_i^j η*_{i,k}, j = 1, 2.
Furthermore, let S_11 be the q × q diagonal matrix with diagonal elements { ω_k f(c_k), k = 1, …, q }; let S_12 be the q × 2 matrix with (k, j)-th element ω_k f(c_k) μ_j, k = 1, …, q and j = 1, 2; S_21 = S_12^T; and let S_22 be the 2 × 2 diagonal matrix with diagonal elements μ_2 Σ_{k=1}^q ω_k f(c_k) and μ_4 Σ_{k=1}^q ω_k f(c_k). In addition, let S_n,11 be the q × q diagonal matrix with diagonal elements { ω_k f(c_k) Σ_{i=1}^n K_i/(nhσ(t_i)), k = 1, …, q }; let S_n,12 be the q × 2 matrix with (k, j)-th element ω_k f(c_k) Σ_{i=1}^n K_i x_i^j/(nhσ(t_i)), k = 1, …, q and j = 1, 2; S_n,21 = S_n,12^T; and let S_n,22 be the 2 × 2 diagonal matrix with diagonal elements Σ_{k=1}^q ω_k f(c_k) Σ_{i=1}^n K_i x_i²/(nhσ(t_i)) and Σ_{k=1}^q ω_k f(c_k) Σ_{i=1}^n K_i x_i⁴/(nhσ(t_i)). Similarly, let Σ_11 be the q × q matrix with (k, k′)-th element ν_0 ω_k ω_{k′} τ_{kk′}, k, k′ = 1, …, q; let Σ_12 be the q × 2 matrix with (k, j)-th element ν_j ω_k Σ_{k′=1}^q ω_{k′} τ_{kk′}, k = 1, …, q and j = 1, 2; Σ_21 = Σ_12^T; and let Σ_22 be the 2 × 2 matrix with (j, j′)-th element ν_{j+j′} Σ_{k,k′=1}^q ω_k ω_{k′} τ_{kk′} for j, j′ = 1, 2.
Define:
S = [ S_11, S_12; S_21, S_22 ], S_n = [ S_n,11, S_n,12; S_n,21, S_n,22 ],
Σ = [ Σ_11, Σ_12; Σ_21, Σ_22 ].
Partition S^{−1} Σ S^{−1} into four submatrices as follows:
S^{−1} Σ S^{−1} = [ (S^{−1} Σ S^{−1})_11, (S^{−1} Σ S^{−1})_12; (S^{−1} Σ S^{−1})_21, (S^{−1} Σ S^{−1})_22 ],
where (·)_11 stands for the top left-hand q × q submatrix and (·)_22 stands for the bottom right-hand 2 × 2 submatrix.
Lemma 1.
Denote by θ̂_n = ( û_1, …, û_q, v̂_1, v̂_2 )^T the minimizer of the weighted local quadratic CQR loss. Under regularity Conditions (C1)–(C4), we have:
θ̂_n + [σ(t_0)/f_T(t_0)] S^{−1} E[ W*_n | 𝒯 ] →_D MVN_{q+2}( 0, [σ²(t_0)/f_T(t_0)] S^{−1} Σ S^{−1} ).
Proof of Lemma 1. 
The proof is similar to that of Theorem 5 of Kai et al. [9]. We divide the whole procedure into three steps:
Step 1 
Minimizing the weighted local quadratic CQR loss is equivalent to minimizing L n ( θ ) , defined as:
L_n(θ) = Σ_{k=1}^q ω_k u_k (1/√(nh)) Σ_{i=1}^n K_i η*_{i,k} + Σ_{j=1}^2 v_j Σ_{k=1}^q ω_k (1/√(nh)) Σ_{i=1}^n K_i x_i^j η*_{i,k} + Σ_{k=1}^q ω_k B_{n,k}(θ)
with respect to θ = ( u_1, …, u_q, v_1, v_2 )^T, where
B_{n,k}(θ) = Σ_{i=1}^n K_i ∫_0^{Δ_{i,k}} [ I{ ε_i ≤ c_k − d_{i,k}/σ(t_i) + z/σ(t_i) } − I{ ε_i ≤ c_k − d_{i,k}/σ(t_i) } ] dz.
For further details, see the proof of Lemma 2 in Kai et al. [9].
Step 2 
Under regularity conditions (C1)–(C4), we have:
L_n(θ) = (1/2) θ^T S_n θ + (W*_n)^T θ + o_p(1).
For further details, see the proof of Lemma 2 of Kai et al. [9].
Step 3 
It is easy to obtain:
(1/(nh)) Σ_{i=1}^n K_i x_i^j →_P f_T(t_0) μ_j, j = 0, 1, 2,
where →_P stands for convergence in probability. Thus, we have:
S_n →_P [f_T(t_0)/σ(t_0)] S = [f_T(t_0)/σ(t_0)] [ S_11, S_12; S_21, S_22 ].
Together with the results of Step 2, we have:
L_n(θ) = (1/2) [f_T(t_0)/σ(t_0)] θ^T S θ + (W*_n)^T θ + o_p(1).
Note that the convex function L_n(θ) − (W*_n)^T θ converges in probability to the convex function [f_T(t_0)/(2σ(t_0))] θ^T S θ. Then, it can be deduced from the convexity lemma of Pollard [20] that the quadratic approximation to L_n(θ) holds uniformly for θ in any compact set, which leads to:
θ̂_n = −[σ(t_0)/f_T(t_0)] S^{−1} W*_n + o_p(1).
Finally, similar to the procedures of Theorem 5 in Kai et al. [9], we have:
{ W*_n − E[ W*_n | 𝒯 ] } | 𝒯 →_D MVN_{q+2}( 0, f_T(t_0) Σ ).
This completes the proof.
 □
Proof of Remark 1. 
If we use equal weights ω_k = 1/q, k = 1, …, q, over all quantiles, then we have:
R_ω(q) = [ Σ_{k=1}^q Σ_{k′=1}^q (1/q)(1/q) τ_{kk′} ] / { Σ_{k=1}^q (1/q) f(c_k) }² = [ (1/q²) Σ_{k=1}^q Σ_{k′=1}^q τ_{kk′} ] / [ (1/q²) { Σ_{k=1}^q f(c_k) }² ] = [ Σ_{k=1}^q Σ_{k′=1}^q τ_{kk′} ] / { Σ_{k=1}^q f(c_k) }² ≡ R(q).
Therefore, the asymptotic variance of the unweighted local quadratic CQR estimator is given by [ν_2 σ²(t_0)/(μ_2² f_T(t_0))] R(q). □
Proof of Remark 2. 
If we use equal weights ω_k = 1/q, k = 1, …, q, over all quantiles, then we have:
J_ω(q) = Σ_{k=1}^q Σ_{k′=1}^q (1/q)(1/q) τ_{kk′}/{ f(c_k) f(c_{k′}) } = (1/q²) Σ_{k=1}^q Σ_{k′=1}^q τ_{kk′}/{ f(c_k) f(c_{k′}) } ≡ J(q),
and then the resulting unweighted quantile average estimator of m′(t_0) has the asymptotic normality in Theorem 2 with J_ω(q) replaced by:
J(q) = (1/q²) Σ_{k=1}^q Σ_{k′=1}^q τ_{kk′}/{ f(c_k) f(c_{k′}) }.
 □
Proof of Theorem 1. 
We apply Lemma 1 to obtain the asymptotic normality of m̂′_WCQR(t_0). It is easy to obtain:
S_12 = ( 0_{q×1}, μ_2 { ω_k f(c_k) }_{q×1} ),
and
S_22 = diag{ μ_2 Σ_{k=1}^q ω_k f(c_k), μ_4 Σ_{k=1}^q ω_k f(c_k) }.
Since S_11 = diag{ ω_1 f(c_1), …, ω_q f(c_q) },
(S^{−1})_22 = ( S_22 − S_21 S_11^{−1} S_12 )^{−1} = diag[ { μ_2 Σ_{k=1}^q ω_k f(c_k) }^{−1}, { (μ_4 − μ_2²) Σ_{k=1}^q ω_k f(c_k) }^{−1} ].
Note that (S^{−1})_21 = −(S^{−1})_22 S_21 S_11^{−1}. Thus, we have:
(S^{−1})_21 = ( 0_{q×1}, −[ μ_2 / { (μ_4 − μ_2²) Σ_{k=1}^q ω_k f(c_k) } ] 1_{q×1} )^T,
where 0_{q×1} and 1_{q×1} denote the q-dimensional column vectors with all entries 0 and 1, respectively, and W*_1n = ( w*_11, …, w*_1q )^T, W*_2n = ( w*_21, w*_22 )^T. Denote e = (1, 0)^T. By Lemma 1, we can obtain:
bias{ m̂′_WCQR(t_0) | 𝒯 } = −[σ(t_0)/(h f_T(t_0))] (1/√(nh)) e^T { (S^{−1})_21 E[ W*_1n | 𝒯 ] + (S^{−1})_22 E[ W*_2n | 𝒯 ] } = −[σ(t_0)/(h f_T(t_0))] [ 1/( μ_2 Σ_{k=1}^q ω_k f(c_k) ) ] (1/√(nh)) E[ w*_21 | 𝒯 ].
Note that:
E[ w*_2j | 𝒯 ] = (1/√(nh)) Σ_{i=1}^n K_i x_i^j Σ_{k=1}^q ω_k [ F{ c_k − d_{i,k}/σ(t_i) } − F(c_k) ].
Similarly, using the constraint Σ_{k=1}^q ω_k c_k f(c_k) = 0, we have:
Σ_{k=1}^q ω_k [ F{ c_k − d_{i,k}/σ(t_i) } − F(c_k) ] = −Σ_{k=1}^q ω_k f(c_k) [ r_{i,2}/σ(t_i) ] { 1 + o_p(1) }.
Therefore:
bias{ m̂′_WCQR(t_0) | 𝒯 } = [ 1/(nh²) ] [ σ(t_0)/( μ_2 f_T(t_0) ) ] Σ_{i=1}^n K_i x_i [ r_{i,2}/σ(t_i) ] { 1 + o_p(1) }.
It is easy to obtain:
(1/(nh)) Σ_{i=1}^n K_i x_i [ r_{i,2}/σ(t_i) ] = [ f_T(t_0) m‴(t_0)/( 6 σ(t_0) ) ] μ_4 h³ { 1 + o_p(1) }.
Thus, we obtain:
bias{ m̂′_WCQR(t_0) | 𝒯 } = (1/6) m‴(t_0) (μ_4/μ_2) h² + o_p(h²).
Furthermore, the conditional variance of m̂′_WCQR(t_0) is:
var{ m̂′_WCQR(t_0) | 𝒯 } = [1/(nh³)] [ σ²(t_0)/f_T(t_0) ] e^T ( S^{−1} Σ S^{−1} )_22 e + o_p( 1/(nh³) ) = [1/(nh³)] [ ν_2 σ²(t_0)/( μ_2² f_T(t_0) ) ] R_ω(q) + o_p( 1/(nh³) ),
which completes the proof. □
Proof of Theorem 2. 
It is easy to see that:
E{ m̂′(τ_k, t_0) | 𝒯 } = m′(t_0) − [ 1/(h√(nh)) ] [ σ(t_0)/( f_T(t_0) f(c_k) μ_2 ) ] E[ w*_{21,k} | 𝒯 ] = m′(t_0) − [ 1/(nh²) ] [ σ(t_0)/( f_T(t_0) f(c_k) μ_2 ) ] Σ_{i=1}^n K_i x_i [ F{ c_k − d_{i,k}/σ(t_i) } − F(c_k) ] = m′(t_0) + [ 1/(nh²) ] [ σ(t_0)/( f_T(t_0) μ_2 ) ] Σ_{i=1}^n K_i x_i [ d_{i,k}/σ(t_i) ] { 1 + o_p(1) } = m′(t_0) + [ 1/(nh²) ] [ σ(t_0)/( f_T(t_0) μ_2 ) ] Σ_{i=1}^n K_i x_i [ c_k{ σ(t_i) − σ(t_0) } + r_{i,2} ]/σ(t_i) { 1 + o_p(1) },
where w*_{21,k} = (1/√(nh)) Σ_{i=1}^n K_i x_i η*_{i,k} denotes the statistic w*_21 computed from the single-quantile fit at τ_k.
Then, we have:
bias{ m̂′_WQAE(t_0) | 𝒯 } = E{ m̂′_WQAE(t_0) | 𝒯 } − m′(t_0) = Σ_{k=1}^q ω_k E{ m̂′(τ_k, t_0) | 𝒯 } − m′(t_0) = [ 1/(nh²) ] [ σ(t_0)/( f_T(t_0) μ_2 ) ] Σ_{i=1}^n K_i x_i Σ_{k=1}^q ω_k [ c_k{ σ(t_i) − σ(t_0) } + r_{i,2} ]/σ(t_i) { 1 + o_p(1) }.
Using the condition Σ_{k=1}^q ω_k c_k = 0, we obtain:
bias{ m̂′_WQAE(t_0) | 𝒯 } = [ 1/(nh²) ] [ σ(t_0)/( f_T(t_0) μ_2 ) ] Σ_{i=1}^n K_i x_i Σ_{k=1}^q ω_k [ r_{i,2}/σ(t_i) ] { 1 + o_p(1) } = [ 1/(nh²) ] [ σ(t_0)/( f_T(t_0) μ_2 ) ] Σ_{i=1}^n K_i x_i [ r_{i,2}/σ(t_i) ] { 1 + o_p(1) }.
By using the following fact:
(1/(nh)) Σ_{i=1}^n K_i x_i [ r_{i,2}/σ(t_i) ] = [ f_T(t_0) m‴(t_0)/( 6σ(t_0) ) ] μ_4 h³ { 1 + o_p(1) },
we obtain:
bias{ m̂′_WQAE(t_0) | 𝒯 } = (1/6) m‴(t_0) (μ_4/μ_2) h² + o_p(h²).
For the conditional variance, we have:
var{ m̂′_WQAE(t_0) | 𝒯 } = var{ Σ_{k=1}^q ω_k m̂′(τ_k, t_0) | 𝒯 } = var{ [ 1/(h√(nh)) ] [ σ(t_0)/( f_T(t_0) μ_2 ) ] Σ_{k=1}^q [ ω_k/f(c_k) ] w*_{21,k} | 𝒯 } = [ 1/(nh³) ] [ σ²(t_0)/( f_T²(t_0) μ_2² ) ] var{ Σ_{k=1}^q [ ω_k/f(c_k) ] w*_{21,k} | 𝒯 }.
Using arguments similar to those in Kai et al. [9], we have:
var{ Σ_{k=1}^q [ ω_k/f(c_k) ] ( w*_{21,k} − w_{21,k} ) | 𝒯 } = o_p(1),
where w_{21,k} = (1/√(nh)) Σ_{i=1}^n K_i x_i η_{i,k} and η_{i,k} = I( ε_i ≤ c_k ) − τ_k. Thus, we have:
var{ m̂′_WQAE(t_0) | 𝒯 } = [ 1/(nh³) ] [ σ²(t_0)/( f_T²(t_0) μ_2² ) ] var{ Σ_{k=1}^q [ ω_k/f(c_k) ] w_{21,k} | 𝒯 } = [ 1/(n²h⁴) ] [ σ²(t_0)/( f_T²(t_0) μ_2² ) ] var{ Σ_{i=1}^n K_i x_i Σ_{k=1}^q [ ω_k/f(c_k) ] η_{i,k} | 𝒯 }.
Denote ξ = Σ_{i=1}^n K_i x_i Q_i, where Q_i = Σ_{k=1}^q [ ω_k/f(c_k) ] η_{i,k}. Note that cov( η_{i,k}, η_{i,k′} ) = τ_{kk′} and cov( η_{i,k}, η_{j,k′} ) = 0 if i ≠ j. It is easy to obtain E(Q_i²) = J_ω(q), where
J_ω(q) = Σ_{k=1}^q Σ_{k′=1}^q ω_k ω_{k′} τ_{kk′} / { f(c_k) f(c_{k′}) }.
It is easy to obtain that E( K_i² x_i² ) = h f_T(t_0) ν_2 { 1 + o(1) }. Therefore:
var( ξ | 𝒯 ) = E(Q_1²) Σ_{i=1}^n K_i² x_i² = n h f_T(t_0) ν_2 J_ω(q) { 1 + o_p(1) }.
Combined with the result (13), we have:
var{ m̂′_WQAE(t_0) | 𝒯 } = [ 1/(nh³) ] [ ν_2 σ²(t_0)/( μ_2² f_T(t_0) ) ] J_ω(q) + o_p( 1/(nh³) ),
which completes the proof. □
Proof of Theorem 3. 
The optimal weight vector ω* can be obtained by solving the following optimization problem:
min_ω J_ω(q) = min_ω Σ_{k=1}^q Σ_{k′=1}^q ω_k ω_{k′} τ_{kk′}/{ f(c_k) f(c_{k′}) },
where ω satisfies Σ_{k=1}^q ω_k = 1 and Σ_{k=1}^q ω_k c_k = 0. With the help of the Lagrange multiplier method, we can obtain the optimal weight vector given in Theorem 3, and substituting it into J_ω(q) yields the corresponding minimum conditional variance of m̂′*_WQAE(t_0). □
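For completeness, a brief sketch of the Lagrange multiplier calculation referred to above (assuming H is positive definite) is as follows:

```latex
\mathcal{L}(\omega,\lambda_1,\lambda_2)
  = \omega^{T} H \omega - \lambda_1(\mathbf{1}^{T}\omega - 1) - \lambda_2\, c^{T}\omega,
\qquad
\frac{\partial \mathcal{L}}{\partial \omega}
  = 2H\omega - \lambda_1 \mathbf{1} - \lambda_2 c = 0
\;\Longrightarrow\;
\omega = \tfrac{1}{2}\,H^{-1}\bigl(\lambda_1 \mathbf{1} + \lambda_2 c\bigr).
```

Substituting this expression into the constraints 1^T ω = 1 and c^T ω = 0 gives a 2 × 2 linear system for (λ_1, λ_2); solving it and plugging back yields the formula for ω* stated in Theorem 3.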

6. Conclusions

In this article, we mainly investigated efficient estimators of the derivative of the nonparametric function in the nonparametric quantile regression model (1). We developed two ways of combining quantile regression information to derive estimators of m′(t). One is the weighted composite quantile regression estimator based on the weighted quantile loss functions, and the other is the weighted quantile average estimator based on a weighted average of quantile regression estimators at single quantiles. Furthermore, by minimizing the asymptotic variance, the optimal weight vector is computed, and consequently, the optimal estimator is obtained. Moreover, we conducted simulations to compare the performance of our proposed estimators with the local linear least squares estimator under different symmetric error distributions. The simulation studies illustrate that both estimators work better than the local linear least squares estimator for all the symmetric errors considered except the normal error, and that the weighted quantile average estimator performs better than the weighted composite quantile regression estimator in most situations.

Author Contributions

Conceptualization, methodology and formal analysis, X.Z. and X.G.; validation and data analysis, X.Y.; visualization and supervision, Y.Z.; investigation and resources, Y.S.; writing—original draft preparation, X.Z. and X.Y.; writing—review and editing, X.G.; project administration, X.Z., X.G. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Zhou’s work was supported by the Ministry of Education Humanities and Social Sciences Research Youth Foundation (Grant No. 19YJC910011), the Natural Science Foundation of Shandong Province (Grant No. ZR2020MA021) and the Project of Shandong Province Higher Educational Science and Technology Program (Grant No. J18KB099). Yin’s research is supported by the Research and Development Project of Dezhou City in China. Shen’s research is supported by the Natural Science Foundation of Shandong Province (Grant No. ZR2018LA003) and the Open Research Fund Program of Data Recovery Key Laboratory of Sichuan Province (Grant No. DRN19020).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Some data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest. The funders X.Z., X.Y. and Y.S. have roles in the design of the study, in the analyses and interpretation of data, in the writing of the manuscript, and in the decision to publish the results.

References

1. Dou, X.; Shirahata, S. Comparisons of B-spline procedures with kernel procedures in estimating regression functions and their derivatives. J. Jpn. Soc. Comput. Stat. 2009, 22, 57–77.
2. Ruppert, D. Nonparametric Regression and Spline Smoothing. J. Am. Stat. Assoc. 2001, 96, 1522–1523.
3. Fan, J.; Gasser, T.; Gijbels, I.; Brockmann, M.; Engel, J. Local Polynomial Regression: Optimal Kernels and Asymptotic Minimax Efficiency. Ann. Inst. Stat. Math. 1997, 49, 79–99.
4. Zhang, X.; King, M.L.; Shang, H.L. Bayesian Bandwidth Selection for a Nonparametric Regression Model with Mixed Types of Regressors. Econometrics 2016, 4, 24.
5. Souza-Rodrigues, E.A. Nonparametric Regression with Common Shocks. Econometrics 2016, 4, 36.
6. Koenker, R.; Bassett, G. Regression quantiles. Econometrica 1978, 46, 33–50.
7. Koenker, R. Quantile Regression; Cambridge University Press: Cambridge, UK, 2005.
8. Zou, H.; Yuan, M. Composite quantile regression and the oracle model selection theory. Ann. Stat. 2008, 36, 1108–1126.
9. Kai, B.; Li, R.; Zou, H. Local composite quantile regression smoothing: An efficient and safe alternative to local polynomial regression. J. R. Stat. Soc. Ser. B 2010, 72, 49–69.
10. Kai, B.; Li, R.; Zou, H. New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models. Ann. Stat. 2011, 39, 305–332.
11. Jiang, R.; Zhou, Z.; Qian, W. Single index composite quantile regression. J. Korean Stat. Soc. 2011, 3, 323–332.
12. Ning, Z.; Tang, L. Estimation and test procedures for composite quantile regression with covariates missing at random. Stat. Probab. Lett. 2014, 95, 15–25.
13. Jiang, R. Composite quantile regression for linear errors-in-variables models. Hacet. J. Math. Stat. 2015, 44, 707–713.
14. Jiang, R.; Qian, W.; Zhou, Z. Single-index composite quantile regression with heteroscedasticity and general error distributions. Stat. Pap. 2016, 57, 185–203.
15. Zhang, R.; Lv, Y.; Zhao, W.; Liu, J. Composite quantile regression and variable selection in single-index coefficient model. J. Stat. Plan. Inference 2016, 176, 1–21.
16. Zhao, W.; Lian, H.; Song, X. Composite quantile regression for correlated data. Comput. Stat. Data Anal. 2016, 109, 15–33.
17. Luo, S.; Zhang, C.; Wang, M. Composite Quantile Regression for Varying Coefficient Models with Response Data Missing at Random. Symmetry 2019, 11, 1065.
18. Sun, J.; Gai, Y.; Lin, L. Weighted local linear composite quantile estimation for the case of general error distributions. J. Stat. Plan. Inference 2013, 143, 1049–1063.
19. Zhao, Z.; Xiao, Z. Efficient Regressions via Optimally Combining Quantile Information. Econom. Theory 2014, 30, 1272–1314.
20. Pollard, D. Asymptotics for least absolute deviation regression estimators. Econom. Theory 1991, 7, 186–199.
Table 1. The means and standard deviations for RASEs in Example 1 (standard deviations in parentheses).

ε          RASE(m̂′_WQAE, m̂′_WCQR)                 RASE(m̂′_WQAE, m̂′_LSE)
           q = 5     q = 9     q = 19              q = 5     q = 9     q = 19
N(0,1)     0.9823    0.9745    0.9379              0.9217    0.9391    0.9264
           (0.0784)  (0.1028)  (0.1235)            (0.1332)  (0.1213)  (0.1229)
Laplace    1.0696    1.1052    1.1143              1.3076    1.2919    1.2298
           (0.1765)  (0.2354)  (0.2689)            (0.3757)  (0.3723)  (0.3326)
t(3)       1.0235    1.0536    1.1137              1.6441    1.5678    1.4705
           (0.1329)  (0.2092)  (0.2901)            (0.8123)  (0.8002)  (0.6790)
U[−1,1]    1.2243    1.4221    1.4765              1.1268    1.4373    1.6057
           (0.2029)  (0.3048)  (0.3832)            (0.2035)  (0.3414)  (0.4565)
Cauchy     1.3066    1.5016    1.5831              2.2993    2.2939    2.1652
           (0.3960)  (0.5601)  (0.6746)            (0.9127)  (0.9850)  (0.9245)
Table 2. The means and standard deviations for ASEs in Example 2 (standard deviations in parentheses).

Distribution of ε   ASE(m̂′_WQAE)                     ASE(m̂′_WCQR)                     ASE(m̂′_LSE)
                    q = 5     q = 9     q = 19        q = 5     q = 9     q = 19
N(0,1)              0.0297    0.0302    0.0339        0.1302    0.0883    0.0565       0.0426
                    (0.0185)  (0.0180)  (0.0220)      (0.0275)  (0.0227)  (0.0181)     (0.0264)
Laplace             0.0223    0.0226    0.0289        0.0871    0.0614    0.0422       0.0377
                    (0.0121)  (0.0132)  (0.0333)      (0.0200)  (0.0268)  (0.0299)     (0.0719)
t(3)                0.0492    0.0480    0.0483        0.1191    0.0888    0.0665       0.0515
                    (0.0243)  (0.0243)  (0.0245)      (0.0348)  (0.0286)  (0.0248)     (0.0231)
U[−1,1]             0.0528    0.0525    0.0543        0.0960    0.0786    0.0647       0.0570
                    (0.0257)  (0.0251)  (0.0252)      (0.0343)  (0.0303)  (0.0269)     (0.0267)
Cauchy              0.0194    0.0205    0.0213        0.0450    0.0366    0.0300       0.0223
                    (0.0172)  (0.0209)  (0.0231)      (0.0153)  (0.0146)  (0.0155)     (0.0192)
Table 3. The means and standard deviations for RASEs in Example 3 (standard deviations in parentheses).

ε          RASE(m̂′_WQAE, m̂′_WCQR)                 RASE(m̂′_WQAE, m̂′_LSE)
           q = 5     q = 9     q = 19              q = 5     q = 9     q = 19
N(0,1)     0.9841    0.9749    0.9532              0.9328    0.9476    0.9357
           (0.0518)  (0.0642)  (0.0767)            (0.1211)  (0.1115)  (0.1109)
Laplace    0.9963    1.0152    1.0130              1.0658    1.0636    1.0417
t(3)       1.0368    1.0657    1.0688              1.3067    1.2584    1.1891
           (0.1886)  (0.2442)  (0.2638)            (0.7175)  (0.5569)  (0.4523)
U[−1,1]    1.0859    1.0830    0.9387              1.1836    1.2602    1.1277
           (0.1335)  (0.2008)  (0.2842)            (0.2804)  (0.3641)  (0.4413)
Cauchy     1.1273    1.1784    1.1683              1.5532    1.5329    1.4596
           (0.2488)  (0.3176)  (0.3371)            (0.4678)  (0.4672)  (0.4234)
Table 4. The means and standard deviations for ASEs in Example 4 (standard deviations in parentheses).

ε          ASE(m̂′_WQAE)                     ASE(m̂′_WCQR)                     ASE(m̂′_LSE)
           q = 5     q = 9     q = 19        q = 5     q = 9     q = 19
N(0,1)     0.0249    0.0287    0.0348        0.0711    0.0644    0.0553       0.0752
           (0.0383)  (0.0514)  (0.0562)      (0.0779)  (0.1101)  (0.1245)     (0.2549)
Laplace    0.0312    0.0323    0.0451        0.0831    0.0678    0.0556       0.0724
           (0.1021)  (0.0902)  (0.1179)      (0.0824)  (0.0723)  (0.0638)     (0.1357)
t(3)       0.0369    0.0365    0.0376        0.0783    0.0602    0.0476       0.0434
           (0.0198)  (0.0183)  (0.0192)      (0.0190)  (0.0169)  (0.0158)     (0.0221)
U[−1,1]    0.0321    0.0302    0.0329        0.0567    0.0465    0.0392       0.0387
           (0.0169)  (0.0173)  (0.0171)      (0.0194)  (0.0177)  (0.0166)     (0.0214)
Cauchy     0.0348    0.0375    0.0414        0.1061    0.0833    0.0657       0.0569
           (0.0219)  (0.0316)  (0.0455)      (0.0277)  (0.0274)  (0.0301)     (0.0526)

