Next Article in Journal
Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design
Next Article in Special Issue
Genetic Algorithm for Feature Selection Applied to Financial Time Series Monotonicity Prediction: Experimental Cases in Cryptocurrencies and Brazilian Assets
Previous Article in Journal
Non-Additive Entropy Formulas: Motivation and Derivations
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

The Financial Risk Measurement EVaR Based on DTARCH Models

1
Department of Mathematics and Statistics, York University, Toronto, ON M3J 1P3, Canada
2
Key Laboratory of Advanced Theory and Application in Statistics and Data Science, MOE, and Academy of Statistics and Interdisciplinary Sciences and School of Statistics, East China Normal University, Shanghai 200062, China
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(8), 1204; https://doi.org/10.3390/e25081204
Submission received: 15 June 2023 / Revised: 1 August 2023 / Accepted: 9 August 2023 / Published: 13 August 2023
(This article belongs to the Special Issue Advanced Statistical Applications in Financial Econometrics)

Abstract

:
The value at risk based on expectile (EVaR) is a very useful method to measure financial risk, especially in measuring extreme financial risk. The double-threshold autoregressive conditional heteroscedastic (DTARCH) model is a valuable tool in assessing the volatility of a financial asset’s return. A significant characteristic of DTARCH models is that their conditional mean and conditional variance functions are both piecewise linear, involving double thresholds. This paper proposes the weighted composite expectile regression (WCER) estimation of the DTARCH model based on expectile regression theory. Therefore, we can use EVaR to predict extreme financial risk, especially when the conditional mean and the conditional variance of asset returns are nonlinear. Unlike the existing papers on DTARCH models, we do not assume that the threshold and delay parameters are known. Using simulation studies, it has been demonstrated that the proposed WCER estimation exhibits adequate and promising performance in finite samples. Finally, the proposed approach is used to analyze the daily Hang Seng Index (HSI) and the Standard & Poor’s 500 Index (SPI).

1. Introduction

Scientifically and accurately measuring financial risk is the core part of the financial risk management process. Developing efficient statistical methods of financial risk measurement is essential for effectively controlling financial risk. We aim to develop financial risk models that account for extreme events, thereby enhancing the accuracy and efficacy of risk assessments in the field of finance. Due to the increasing complexity, time-variation and randomness of financial markets, nonlinear time series models are used to provide a more reasonable description of the markets’ behaviors or phenomena, the double-threshold autoregressive conditional heteroscedastic (DTARCH) model is one of nonlinear time series models which are designed for this purpose (see [1] for details). A significant characteristic of DTARCH models is that their conditional mean and conditional variance functions both are piecewise linear involving double thresholds. Our investigation will focus on developing an expectile-based value at risk (EVaR) model with a DTARCH structure.
DTARCH models are very useful and flexible in analyzing asymmetric financial time series, making them a subject of considerable attention in recent statistical and econometric papers. Ref. [1] investigated the model identification, estimation and diagnostic checking techniques based on the maximum likelihood principle under the normal assumption of the conditional distribution of the observed data. Ref. [2] investigated robust modeling techniques without a specific form of the conditional distribution, focusing on the L 1 estimation of DTARCH models and deriving limiting distributions for the proposed estimators. Ref. [3] further studied the parameter estimation of DTARCH models using the weighted composite quantile regression procedure, which includes quantile regression as a special case while significantly improving efficiency and inheriting robustness. Ref. [4] investigated DTARCH models with restrictions on parameters and proposed both unrestricted and restricted weighted composite quantile regression estimation for the model parameters, which can be utilized to construct the likelihood ratio-type test statistic. However, these papers are all based on the known threshold and delay parameters of DTARCH models.
The risk measure EVaR proposed by [5] is based on expectile regression theory. Ref. [6] proposed the concept of expectile. The expectile is a one-to-one mapping relationship with the quantile and has similar properties as the quantile. So, the expectile can be regarded as an estimation of quantile; see [7,8,9,10] for details. Expectile has gained popularity in recent years as a subject of interest. Ref. [11] discovered that similar to quantiles, time-varying expectiles can be estimated using a state space signal extraction algorithm. Ref. [12] proposed a new model based on expectile regression–geoadditive expectile regression model. Ref. [13] proposed regularized expectile regression with smoothly clipped absolute deviation (SCAD) penalty for analyzing heteroscedasticity in high dimensions when the error has finite moments. Ref. [14] considered penalized linear expectile regression using SCAD penalty function. Ref. [15] proposed aggregated expectile regression by exponential weighting. Ref. [16] derived joint weighted Gaussian approximations of the tail empirical expectile and quantile processes. Ref. [17]focused on the semi-parametric estimation of multivariate expectiles for extreme levels of risk. Ref. [18] proposed expectHill estimators, which are used as the basis for estimating tail expectiles and expected shortfall. Ref. [19] built a general theory for the estimation of extreme conditional expectiles in heteroscedastic regression models with heavy-tailed noise. Ref. [20] developed a weighted expectile regression approach for estimating the conditional expectile when covariates are missing at random. Ref. [21] studied the problem of the nonparametric estimation of the expectile regression model for strong mixing functional time series. Ref. [22] considered model averaging for expectile regressions. Ref. [23] exploited the fact that the expectiles of a distribution F are in fact the quantiles of another distribution E explicitly linked to F, in order to construct nonparametric kernel estimators of extreme conditional expectiles. Ref. [24] dealt with the problem of the nonparametric estimation of the functional expectile regression, and so on.
Since EVaR is derived from expectile theory and utilizes a squared loss function as its loss function, it exhibits higher sensitivity to extreme values and is mathematically easier to handle. In addition, EVaR is a weighted average of the lower risk (expected shortfall, i.e., ES) and upper risk in conditions. Currently, several research papers on EVaR have been published, exploring various aspects of its application and properties. For example, Ref. [11] proved that EVaR is a consistent risk measure when the confidence level p is less than 0.5 . Ref. [25] studied risk measurement EVaR under a variable coefficient model. Ref. [26] proposed a weighted composite expectile regression estimation for autoregressive models. Ref. [27] discussed the financial meaning of EVaR, compared them with VaR and ES, and studied their asymptotic behavior. Ref. [28] considered a new class of conditional dynamic expectile models with partially varying coefficients in assessing the tail risk of asset returns for S&P 500 Index. Ref. [29] proposed a class of semiparametric composite expectile models with varying coefficients. Ref. [30] proposed a semi-parametric model with varying-coefficients to analyze the EVaR under the assumption of α -mixing. Ref. [31] forecasted the expectile-based risk measures by using the expected-based procedures. Ref. [32] provided a basis for inference on extreme expectiles and expectile-based marginal expected shortfall in a general β -mixing context that encompasses ARMA and GARCH models with heavy-tailed innovations. Ref. [33] developed a single-index approach for modeling the expectile-based value at risk. Ref. [34] studied the estimation of extremal conditional expectile based on quantile regression and expectile regression models. Considering the advantages of EVaR, we will propose the estimation of the DTARCH model based on expectile regression theory. Unlike the existing papers on the DTARCH model, we do not assume that the threshold and delay parameters of DTARCH models are known.
The rest of the paper is organized as follows. Section 2 investigates the estimation problem of DTARCH models based on expectile regression theory. We propose WCER estimation of DTARCH models in Section 2.3, and the proposed expectile regression estimation in Section 2.2 is a special case of WCER estimation, while the least squares estimation in Section 2.1 is a special case of the expectile regression estimation. In Section 2.4, we show that the asymptotic efficiency of WCER estimators calculated using weights obtained through data-driven methods is the same as those of WCER estimators calculated using known weights. We compare the least squares estimation, quantile regression estimation, expectile regression estimation and weighted composite expectile regression estimation of DTARCH models based on the maximum likelihood estimation in Section 3. The proposed methodology is also applied to analyze the daily Hang Seng Index (HSI) and the Standard & Poor’s 500 Composite Index (SPI) in Section 4. We summarize our work in Section 5. Also, for readers interested in the theoretical basis of our results, the proofs of our theoretical results are provided in Appendix A. In addition, some of our simulation results are given in Appendix B.

2. Estimation of the DTARCH Model

Ref. [1] proposed the DTARCH model based on the autoregressive conditional heteroskedasticity model (ARCH) model (see [35]) and the threshold model (see [36]). The DTARCH model can handle situations where both conditional mean and conditional variance specifications are piecewise linear based on previous information. Given a time series y t , t = 1 , 2 , , n , let F t be the σ -field generated from the realized value y t , y t 1 , at time t. Assume that y t is generated by
y t = x t , j α ( j ) + ϵ t , if r j 1 < y t d r j ,
where j = 1 , 2 , , m ; the delay parameter d is a positive integer; the threshold parameters r j satisfy = r 0 < r 1 < < r m = ; x t , j = 1 , y t 1 , , y t p j is a ( p j + 1 ) × 1 vector of lagged variables; and α ( j ) = α 0 ( j ) , α 1 ( j ) , , α p j ( j ) is a ( p j + 1 ) × 1 parameter vector. The stochastic error satisfies ϵ t = h t ( γ ) u t with
h t ( γ ) = j = 1 m I t , j γ 0 ( j ) + γ 1 ( j ) ϵ t 1 2 + + γ q j ( j ) ϵ t q j 2 ,
where I t , j = I r j 1 < y t d r j , and γ = vec γ ( 1 ) , , γ ( m ) with γ ( j ) = γ 0 ( j ) , γ 1 ( j ) , , γ q j ( j ) is a ( q j + 1 ) × 1 parameter vector, j = 1 , , m . Because (2) is an ARCH process, the innovations u t are independently and identically distributed random variables with E ( u t ) = 0 , Var ( u t ) = 1 , the parameters γ 0 ( j ) > 0 , γ i ( j ) 0 ( i = 1 , , q j ) and i = 1 q j γ i ( j ) < 1 . This is the DTARCH model proposed by [1]. A significant characteristic of DTARCH models is that their conditional mean and conditional variance functions both are piecewise linear involving double thresholds.
We have made a slight modification to the DTARCH model under consideration. As reported by [3,4], the stochastic error satisfies ϵ t = h t ( β ) u t with
h t ( β ) = j = 1 m I t , j β 0 ( j ) + β 1 ( j ) | ϵ t 1 | + + β q j ( j ) | ϵ t q j | j = 1 m I t , j z t , j β ( j ) ,
where z t , j = 1 , | ϵ t 1 | , , | ϵ t q j | , β = vec β ( 1 ) , , β ( m ) with β ( j ) = β 0 ( j ) , β 1 ( j ) , , β q j ( j ) ( j = 1 , , m ), and β 0 ( j ) > 0 , β i ( j ) 0 ( i = 1 , , q j ) . The innovations u t are independently and identically distributed random variables with an unknown distribution F ( u ) and a density function f ( u ) .
Let α = vec α ( 1 ) , , α ( m ) , x t = vec I t , 1 x t , 1 , , I t , m x t , m , z t = vec I t , 1 z t , 1 , , I t , m z t , m , and denote j = 1 m p j + 1 = p and j = 1 m q j + 1 = q . Then Equations (1) and (3) can be written as
y t = x t α + ϵ t ,
and ϵ t = h t ( β ) u t with
h t ( β ) = z t β ,
respectively. As in [1,3,4], we denote the model defined by (4) and (5) by
DTARCH ( p 1 , , p m ; q 1 , , q m ) ,
where p 1 , , p m represent the autoregressive model (AR) orders in the m regimes and q 1 , , q m denote the ARCH orders in the m regimes. We use the DTARCH model with a conditional scale, rather than a conditional variance, because modeling the conditional scale is very important. Previous studies emphasized that such a scale provides a more natural dispersion concept than the variance and offers substantial advantages in terms of robustness. The advantage of such an approach with conditional scale instead of conditional variance can be found in [37,38,39,40,41] and so on.

2.1. Least Squares Estimators of the DTARCH Model

Most of the research papers on DTARCH models are based on the condition that the threshold parameters { r 0 , r 1 , , r m } and delay parameter d are known. But in real data analysis, we know that this condition is hard to meet. In the literature on threshold models, there are also a few studies that are based on scenarios where the threshold or delay parameters are unknown. For example, [42] proposed the least squares (LS) estimators for a threshold AR(1) model with an unknown threshold and proved that LS estimators of the threshold parameters were strongly consistent. Ref. [43] proposed the conditional least squares (CLS) estimators for the threshold autoregressive model with unknown threshold and delay parameters and proved that CLS estimators of the threshold parameters were convergent in distribution. In this paper, we propose the parameter estimation methods for the DTARCH model based on expectile regression theory, which includes the expectile regression estimation and the weighted composite expectile regression estimation of the DTARCH model. Note that the expectile regression estimation can be seen as a special case of the weighted composite expectile regression estimation when the expectile takes on a certain value (see Section 2.3 of this paper for details), while the least squares estimation can be seen as a special value of the expectile regression estimation (see Section 2.2 of this paper for details). Under some conditions, we can show that the proposed estimators of the threshold and delay parameters are consistent.
Using the least squares estimation method, we can obtain the least squares estimation of the DTARCH model DTARCH ( p 1 , , p m ; q 1 , , q m ) , α ^ 0 L S , β ^ 0 L S . Denote the threshold parameters ( r 0 , r 1 , , r m ) = r , the least squares estimator of r by r ^ 0 L S , and the least squares estimator of the delay parameter d by d ^ 0 L S . However, these estimators are biased. Obviously, the distribution of | ϵ t | is skewed and the log-transformation is an intuitive mechanism that can make the distribution less skewed; see [44] for details. Thus, in light of [3,4], we introduce a modified form of the model DTARCH ( p 1 , , p m ; q 1 , , q m ) . Let
log | ϵ t ( α ) | = log h t ( α , β ) + e t ,
where e t = log | u t | , and
h t ( α , β ) = j = 1 m I t , j β 0 ( j ) + β 1 ( j ) | ϵ t 1 ( α ) | + + β q j ( j ) | ϵ t q j ( α ) | .
h t ( α , β ) is equivalent to h t ( β ) , as we can see that h t ( β ) is also related to α . Therefore, we rename h t ( β ) as h t ( α , β ) . Apply the least squares method again, we can obtain LS estimators of DTARCH ( p 1 , , p m ; q 1 , , q m ) denoted by α ^ L S , β ^ L S , r ^ L S and d ^ L S , respectively. We study the properties of the least squares estimators under the following conditions.
For j = 1 , 2 , , m , suppose that x t , j are all Markov chains. Their l-step transition probability is denoted by P l ( x j , A j ) , where x j R p and A j are Borel sets. Later on, we will need the following set of regularity conditions.
(C1)
{ x t , j } admits a unique invariant measure π j ( · ) such that ∃ K j , ρ j < 1 , x j R p , n j N , P n j ( x j , · ) π j ( · ) K j ( 1 + | x j | ) ρ j n j , where · and | · | denote the total variation norm and the Euclidean norm, respectively.
(C2)
E | y t | 2 + δ < + for δ > 0 , and { y t } is strictly stationary and ergodic.
(C3)
{ y t } has the first derivative, and the stationary distribution of the derivative admits a density positive everywhere.
(C4)
Error e t has the cumulative distribution function G ( · ) with density g ( · ) being positive and having a continuous second derivative. Furthermore, E ( e t 4 ) < + .
Let α * , β * , d * and r * be the true values of α , β , d and r , respectively. Then, we can obtain the following theorems and corollary.
Theorem 1.
Suppose that the conditions (C2) and (C3) hold. Then, the estimators α ^ L S β ^ L S are strongly consistent, that is,
α ^ L S β ^ L S a . s . α * β * .
Under some additional conditions, we have the following corollary.
Corollary 1.
Suppose that the conditions (C1), (C2) and (C4) hold. Then it follows from Theorem 1 that the estimator d ^ L S is strongly consistent, that is,
d ^ L S a . s . d * .
Theorem 2.
Suppose that the conditions (C1), (C2) and (C4) hold. Then, the estimator r ^ L S converges to r * in distribution, that is,
r ^ L S D r * .
According to Corollary 1 and Theorem 2, both the threshold and delay parameters converge to their true values. Therefore, after obtaining estimated values for threshold and delay parameters, estimating the remaining parameters of the DTARCH model will yield convergence properties that are equivalent to those obtained by estimating the parameters using the known threshold and delay parameters. In order to simplify the theoretical analysis, without loss of generality, we assume that the threshold and delay parameters are known throughout the remainder of this paper.

2.2. Expectile Regression Estimators of the DTARCH Model

The definition of expectile regression proposed by [6] states that the τ -th expectile of a random variable u can be obtained minimizing the following check function,
Q τ ( u ) = ( 1 τ ) u 2 , u 0 , τ u 2 , u > 0 ,
and the derivative 1 2 Q ˙ τ ( u ) = ( 1 τ ) u , u 0 , τ u , u > 0 , satisfies
E Q ˙ τ ( u ) = 0 ,
i.e.,
( 1 τ ) 0 u f ( u ) d u + τ 0 u f ( u ) d u = 0 ,
where f ( · ) is the density function of u. Therefore, the τ -th expectile of u is 0.
Let ϵ t ( α ) = y t x t α = h t ( α , β ) u t and the τ -th expectile of u t be μ ( τ ) . Based on Theorem 1 in [6], the τ -th conditional expectile of ϵ t given F t 1 is
μ τ ( ϵ t | F t 1 ) = h t ( α , β ) μ ( τ ) .
The τ -th expectile regression (ER) estimator of α and β can be obtained by minimizing
t = s + 1 n Q τ ϵ t ( α ) h t ( α , β ) b τ ,
over b τ , α and β , where 0 < τ < 1 , s = max p 1 , , p m , q 1 , , q m and b τ is the τ -th expectile of u t . Let the resulting estimators from (7) be b ^ τ E R , α ^ 0 E R , β ^ 0 E R . Not surprisingly, these estimators are also biased. To correct the bias, we should still perform expectile regression estimation on the DTARCH model (6) that has undergone a logarithmic transformation.
From model (6), the τ -th expectile of log ϵ t ( α ) given F t 1 is
log ϵ t ( α ) F t 1 = log h t ( α , β ) + c τ ,
where c τ is the τ -th expectile of e t . Applying the expectile regression scheme, we can obtain the expectile regression estimators of c τ , α and β by minimizing
t = s + 1 n Q τ log ( | ϵ t ( α ) | ) log { h t ( α , β ) } c τ ,
over c τ , α and β . Obviously, when τ = 0.5 , the expectile regression estimators are the least square estimators in Section 2.1.
Let the resulting estimators of c τ , α , β be c ^ τ E R , α ^ E R , β ^ E R . To derive the asymptotic property of the proposed estimator, we introduce some notations and conditions. Let c τ * be the τ -th expectile of e t , ϵ t = ϵ t ( α * ) , h t = h t α * , β * , r t = x t ϵ t + f t h t with
f t = j = 1 m I t , j 0 , sgn ( ϵ t 1 ) x t 1 , , sgn ( ϵ t q j ) x t q j β ( j ) ,
μ = μ r , μ z with μ r = E r t and μ z = E z t / h t , Ω = Ω r Ω r z Ω r z Ω z with Ω r = E r t r t , Ω r z = E z t h t r t and Ω z = E z t z t h t 2 and Π = Π r Π r z Π r z Π z with Π r = Var r t , Π r z = Cov r t , z t h t and Π z = Var z t h t . We assume that
(C5)
Covariance matrix Π is positive definite.
Then we have the following asymptotic results for c ^ τ E R , α ^ E R , β ^ E R .
Theorem 3.
Suppose that the conditions (C2), (C4) and (C5) hold. Then, c ^ τ E R , α ^ E R , β ^ E R converge to c τ * , ( α * ) , ( β * ) in distribution, that is
n c ^ τ E R c τ * , α ^ E R α * , β ^ E R β * D N 0 , Ψ ,
where Ψ is a block matrix with blocks
Ψ 11 = ζ 1 + μ Π 1 μ , Ψ 12 = ζ μ Π 1 , Ψ 21 = ζ Π 1 μ , Ψ 22 = ζ Π 1 ,
with ζ = 1 4 ( 1 2 τ ) G ( c τ * ) + τ 2 E Q ˙ τ 2 e t c τ * .
By Theorem 3, it follows that
n c ^ τ E R c τ * D N 0 , Ψ 11 ,
n α ^ E R α * β ^ E R β * D N 0 , Ψ 22 .

2.3. Weight Composite Expectile Regression Estimators of the DTARCH Model

Refs. [45,46,47] considered the composite quantile regression (CQR) estimation, which is obtained by incorporating the information of multiple quantiles into the objective function. This estimation method incorporates more comprehensive model information. Subsequently, ref. [29] introduced an estimation called composite expectile regression (CER) and established large sample properties of the resulting CER estimator. However, both CQR and CER estimations assign equal weights to different quantiles and expectiles, respectively. Intuitively, using different weights for different quantile regression (QR) and expectile regression (ER) models might lead to improved efficiency. Hence, ref. [48,49] proposed the weighted composite quantile regression (WCQR) estimation method. The standard deviation (SD) of the WCQR estimator is smaller than the SD of the CQR estimator and QR estimator as discussed in [4]. Furthermore, ref. [26] proposed a weighted composite expectile regression (WCER) estimation for AR models and established its large sample’properties.
From model (6), the τ k -th expectile of log ϵ t ( α ) given F t 1 is
log ϵ t ( α ) F t 1 = log h t ( α , β ) + c τ k ,
where c τ k is the τ k -th expectile of e t . Applying the WCER scheme, we can jointly estimate the AR and ARCH parameters by minimizing
k = 1 K ω k t = s + 1 n Q τ k log ( | ϵ t ( α ) | ) log { h t ( α , β ) } c τ k ,
over c τ k , α and β , where ω = ω 1 , , ω K is a vector of weights such that ω = 1 , with · denoting the Euclidean norm. Without loss of generality, we assume that 0 < τ 1 < < τ K < 1 . If ω i = 1 / K , the estimation obtained from Equation (9) is CER estimation. Obviously, the weight ω k is the contribution rate of the τ k -th expectile. Since Q τ k ( log ( | ϵ t ( α ) | ) log h t ( α , β ) c τ k ) may not have a positive correlation, it is possible for the weight component ω to be negative. Therefore, the WCER estimation is not a simple extension of the CER estimation. Due to the limited space, we will not discuss the CER estimation in detail.
Let the resulting estimator of c τ 1 , , c τ K , α , β be c ^ τ 1 , , c ^ τ K , α ^ , β ^ and c τ k * be the true value of the τ k -th expectile of e t . Under certain conditions, we have the following asymptotic results for c ^ τ 1 , , c ^ τ K , α ^ , β ^ .
Theorem 4.
Suppose that the conditions (C2), (C4) and (C5) hold. Then c ^ τ 1 , , c ^ τ K , α ^ , β ^ converge to c τ 1 * , , c τ K * , ( α * ) , ( β * ) in distribution, that is
n c ^ τ 1 c τ 1 * , , c ^ τ K c τ K * , α ^ α * , β ^ β * D N 0 , Σ ,
where Σ is a block matrix with blocks
Σ 11 = ξ + σ 2 ( ω ) μ Π 1 μ 1 1 , Σ 12 = σ 2 ( ω ) 1 μ Π 1 , Σ 21 = σ 2 ( ω ) 1 Π 1 μ , Σ 22 = σ 2 ( ω ) Π 1 ,
with 1 = 1 , , 1 , ξ is a K × K matrix with the ( j , k ) th element being
ξ j k = E Q ˙ τ j e t c τ j * Q ˙ τ k e t c τ k * 4 ( 1 2 τ j ) G ( c τ j * ) + τ j ( 1 2 τ k ) G ( c τ k * ) + τ k ,
σ 2 ( ω ) = ω Λ ω k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k 2 , Λ is also a K × K matrix with the ( j , k ) th element being
Λ ( j , k ) = 1 4 E Q ˙ τ j e t c τ j * Q ˙ τ k e t c τ k * = τ j τ k τ j I ( e t c τ k * ) τ k I ( e t c τ j * ) + I ( e t c τ j * c τ k * ) × e t 2 ( c τ j * + c τ k * ) e t + c τ j * c τ k * d G ( e t ) .

2.4. Selection of Optimal Weight

By Theorem 4, we have
n α ^ α * β ^ β * D N 0 , Σ 22 ,
where Σ 22 = σ 2 ( ω ) Π 1 . Because Π does not contain weight ω , to obtain the optimal weight, we only need to minimize σ 2 ( ω ) under the condition of ω = 1 , which yields
ω opt = g Λ 2 g 1 / 2 Λ 1 g ,
where
g = ( 1 2 τ 1 ) G ( c τ 1 * ) + τ 1 , , ( 1 2 τ K ) G ( c τ K * ) + τ K .
The cumulative distribution function G ( · ) with density g ( · ) of error { e t } can be obtained by the kernel smooth estimation. Then, the nonparametric estimator of ω opt is given by
ω ^ = ω ^ 1 , , ω ^ K = g ^ Λ ^ 2 g ^ 1 / 2 Λ ^ 1 g ^ ,
so that estimator of α β denoted by α ^ 0 β ^ 0 can be obtained by the following formula,
min c τ k , α , β k = 1 K ω ^ k t = s + 1 n Q τ k log ( | ϵ t ( α ) | ) log { h t ( α , β ) } c τ k .
Then, under certain conditions, we have the following asymptotic results for α ^ 0 β ^ 0 .
Corollary 2.
Suppose that the conditions (C2), (C4) and (C5) hold. Then α ^ 0 β ^ 0 converge to α * β * in distribution, that is
n α ^ 0 α * β ^ 0 β * D N 0 , g Λ 1 g 1 Π 1 .
Note that σ 2 ( ω opt ) = g Λ 1 g 1 . When ω opt is known, the asymptotic covariance of α ^ 0 β ^ 0 is the same as the covariance variance of α ^ β ^ . In other words, the asymptotic efficiency of WCER estimators calculated using weights obtained through data-driven optimal weighting is the same as those of WCER estimators calculated using known weights.

3. Comparison of Estimation Methods

In this section, we compare the least squares estimation, quantile regression estimation, expectile regression estimation and weighted composite expectile regression estimation of DTARCH models by using the maximum likelihood estimation (MLE) of DTARCH models as the benchmark. We consider the following DTARCH ( 2 , 2 ; 2 , 2 ) model:
y t = α 1 ( 1 ) y t 1 + α 2 ( 1 ) y t 2 + ϵ t , if y t 1 0 , α 1 ( 2 ) y t 1 + α 2 ( 2 ) y t 2 + ϵ t , if y t 1 > 0 ,
where α 1 ( 1 ) , α 2 ( 1 ) = 0.25 , 0.30 , α 1 ( 2 ) , α 2 ( 2 ) = 0.45 , 0.20 and ϵ t = h t u t , with
h t = β 0 ( 1 ) + β 1 ( 1 ) ϵ t 1 + β 2 ( 1 ) ϵ t 2 , if y t 1 0 , β 0 ( 2 ) + β 1 ( 2 ) ϵ t 1 + β 2 ( 2 ) ϵ t 2 , if y t 1 > 0 ,
where β 0 ( 1 ) , β 1 ( 1 ) , β 2 ( 1 ) = 0.03 , 0.30 , 0.50 , β 0 ( 2 ) , β 1 ( 2 ) , β 2 ( 2 ) = 0.06 , 0.45 , 0.35 .
We consider three types of innovation variables, which are distributed as N ( 0 , 1 ) , t ( 6 ) and χ 2 ( 4 ) . They are centralized and normalized so that the medians of the absolute innovations are 1, i.e., u t is normalized to satisfy Median ( | u t | ) = 1 . The sample size n is chosen are 100, 300, 800, 1500 and 2500. All the simulation results are based on 500 Monte Carlo replications. Seven equally spaced expectiles in ( 0 , 1 ) are chosen for each simulation setting when we apply the WCER estimation process. For QR and ER estimation, we take τ = 0.25 and 0.75 , respectively. In each simulation, the root mean squared error (RMSE) for different estimators are calculated, and they are reported in Table 1, Table 2 and Table 3. In addition, the parameter estimators obtained from different estimation methods of the DTARCH model are listed in Table A1, Table A2 and Table A3 in Appendix B.
As expected, the oracle MLE performs the best, while the WCER estimators outperform both the QR estimators and the ER estimators. As can be seen from Table 1 and Table A1, the WCER estimators slightly underperform LS estimators only when the residual error follows a normal distribution. From Table 2, Table 3, Table A2 and Table A3, we can see that the WCER estimators greatly outperform the LS estimators in terms of RMSE when the error follows a heavy-tailed or asymmetric distribution. In studies that apply time series models to study financial data, it is more realistic to assume that the error follows a non-normal and heavy-tailed distribution. For example, [50] considered the heavy-tailed nature and extreme volatility of asset returns, and demonstrated these statistical characteristics using financial data. Ref. [51] introduced a new heavy-tailed distribution to characterize errors in the ARCH/GARCH model and applied it to financial data. Furthermore, [52] assumed that there are two different types of heavy-tail distributions for GARCH model errors: the student’s t-distribution and the normal reciprocal inverse Gaussian distribution. They compared the application of these distributions to South Korea’s daily stock market returns.
Therefore, it is possible to obtain a WCER estimator with favorable statistical properties similar to those of the MLE, even when the distribution of the error is unknown. Moreover, the RMSEs of all estimation methods decrease with the increase of sample size n, indicating that all estimators are consistent.
We also make an empirical analysis of the DTARCH model with delay parameter d 1 . Similarly, based on the maximum likelihood estimation, we compare the QR estimation, ER estimation and WCER estimation of this DTARCH model. We present the simulation results in the supplementary materials.

4. Real Data Analysis

In this section, we use the proposed method to analyze the Hang Seng Index (HSI) and the Standard & Poor’s 500 Index (SPI) daily from 7 February 2013, to 6 February 2023. The formula for calculating returns series y t uses the daily returns of the exponential market, which are represented by the first-order difference of the logarithm of the closing prices of the index on adjacent days,
y t = log ( x t ) log ( x t 1 ) ,
where x t represents the closing price of the HSI or SPI on day t. The sample size for SPI is n = 2516 , and the sample size for HSI is n = 2453 . We are interested in the asymmetry of the conditional mean and conditional variance of the stock market.
First, we need to identify the values of m, the delay parameter d and the threshold parameters r i . Applying the same method as in [3], we obtain the values of m, d and r i as d = 1 , m = 2 and ( r 0 , r 1 , r 2 ) = ( , 0 , + ) . This aligns with stock market observations and supports our goal of examining the asymmetry in conditional mean and variance. Similar to [3,4], we employ the generalized Akaike information criterion (GAIC) and the generalized Bayesian information criterion (GBIC) methods to determine the orders of DTARCH models before fitting the models to HSI and SPI. We find out that both GAIC and GBIC reach their minimum values for HSI and SPI with p = 2 and q = 4 . The minimum GAIC and GBIC values for HSI are 0.3144 and 0.3876 , respectively. The minimum GAIC and GBIC values for SPI are 0.2975 and 0.3463 , respectively. This designates a DTARCH ( 2 , 2 ; 4 , 4 ) model for the return series. Thus, the following DTARCH model is taken into account for the return series y t ,
y t = α 1 ( 1 ) y t 1 + α 2 ( 1 ) y t 2 + ϵ t , if y t 1 0 , α 1 ( 2 ) y t 1 + α 2 ( 2 ) y t 2 + ϵ t , if y t 1 > 0 ,
and ϵ t = h t u t , with
h t = β 0 ( 1 ) + β 1 ( 1 ) ϵ t 1 + β 2 ( 1 ) ϵ t 2 + β 3 ( 1 ) ϵ t 3 + β 4 ( 1 ) ϵ t 4 , if y t 1 0 , β 0 ( 2 ) + β 1 ( 2 ) ϵ t 1 + β 2 ( 2 ) ϵ t 2 + β 3 ( 2 ) ϵ t 3 + β 4 ( 2 ) ϵ t 4 , if y t 1 > 0 .
For comparison, we calculate the proposed WCER estimate, ER estimate, QR estimate and LS estimate, respectively. Seven equally spaced expectiles in ( 0 , 1 ) are chosen when we apply the WCER estimation process. For QR and ER estimation, we take τ = 0.25 and 0.75 respectively.
These estimates and their standard errors are listed in Table 4 and Table 5, which show some interesting results. First, the estimated α ^ 1 ( 1 ) , α ^ 2 ( 1 ) , α ^ 1 ( 2 ) , α ^ 2 ( 2 ) are all negative, which suggest that if current and past returns are negative, the forecasted mean return will be positive; if they are positive, the forecasted mean return will be negative. Second, all the estimated coefficients β ^ 0 ( 1 ) , β ^ 1 ( 1 ) , β ^ 2 ( 1 ) , β ^ 3 ( 1 ) , β ^ 4 ( 1 ) , β ^ 0 ( 2 ) and β ^ 1 ( 2 ) , β ^ 2 ( 2 ) , β ^ 3 ( 2 ) , β ^ 4 ( 2 ) are positive, which aligns with the expectations as the DTARCH models assume non-negative volatility coefficients. Third, the values of β ^ 1 ( 1 ) , β ^ 2 ( 1 ) , β ^ 3 ( 1 ) , β ^ 4 ( 1 ) are significantly different from those of β ^ 1 ( 2 ) , β ^ 2 ( 2 ) , β ^ 3 ( 2 ) , β ^ 4 ( 2 ) . These show that the volatility of HSI and SPI exhibit obvious asymmetry. Fourth, the absolute values of the parameter estimates of HSI are almost greater than those of SPI. This shows that the market volatility of HSI is higher than that of SPI. Indeed, over the past 20 years, the volatility of the S&P 500 Index has been lower than that of the Hang Seng Index, leading some to believe that the former’s returns would be lower than the latter. However, since the S&P 500 Index has experienced smaller market drops in the past compared to the Hang Seng Index, its absolute returns have been higher over the past 20 years and both its short-term and long-term risk-adjusted returns are higher as well.
To evaluate the predictive performance of the models we built, we split the dataset into two parts: a larger part for model building and a smaller part for model validation. For example, we split the SPI dataset with sample size n = 2516 into two subsets: sample 1 (the sample from 7 February 2013 to 21 December 2022) of size n 1 = 2486 and sample 2 (the sample from 22 December 2022 to 6 February 2023) of n 2 = 30 . We build a DARCH model using the sample 1, and then apply the model to predict the dataset from 22 December 2022 to 6 February 2023. We compare the predicted values with the sample 2 to evaluate the performance of different estimation methods. The chosen evaluation metric is Median Absolute Percentage Error (MAPE), calculated as the median absolute difference between predicted values y ^ i and observed values y i ( i = 1 , 2 , , n 2 ). This approach is similar to that used in [29]. We perform the same procedure on the SPI and HSI datasets with sample 2 of different sizes, specifically n 2 = 10 , 20 and 30. The results obtained are shown in Table 6 and Table 7, respectively.
Based on the MAPE from Table 6 and Table 7, it can be see that the WCER estimation consistently produces lower MAPE values compared to the other methods. Therefore, we conclude that the WCER estimation method outperforms other methods.

5. Concluding Remarks

In this paper, we develop an estimation method for DTARCH models based on the expectile theory. We propose the WCER estimators for DTARCH models and derive the large sample properties of the proposed estimators. Unlike the existing papers on the study of DTARCH models, we do not need to know the threshold and delay parameters. We conduct a simulation study to test the proposed theory and find that our WCER estimator outperforms the LS estimator in terms of RMSE, particularly when the errors follow a heavy-tailed or asymmetric distribution. The simulation results are consistent with our theoretical results. Even if the common distribution of errors is unknown, we can still obtain a WCER estimator with good statistical properties like the MLE. Furthermore, we apply the proposed WCER estimation method to estimate the parameters of DTARCH models using daily returns data for HSI and SPI.
It is noted that the proposed WCER estimation method is more effective for DTARCH models when the errors follow a non-normal heavy-tail distribution. This finding is consistent with real data examples, which adds to the practical significance of our study. Therefore, our future work will focus on further practical data analysis using the proposed methods. In addition, considering the high dimensionality of many real datasets, one of our next key steps is to come up with estimates based on expectile regression theory in such a scenario. This is one of the important research tasks that we will undertake.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/e25081204/s1, Table S1: RMSE comparison of various estimation methods, ϵ t N ( 0 , 1 ) ; Table S2: RMSE comparison of various estimation methods, ϵ t t ( 6 ) ; Table S3: RMSE comparison of various estimation methods, ϵ t χ 2 ( 4 ) ; Table S4: Parameter estimate of various estimation methods, ϵ t N ( 0 , 1 ) ; Table S5: Parameter estimate of various estimation methods, ϵ t t ( 6 ) ; Table S6: Parameter estimate of various estimation methods, ϵ t χ 2 ( 4 ) .

Author Contributions

Conceptualization and methodology, X.L., Y.W. and Y.Z.; software and writing—original draft preparation, X.L. and Z.T.; writing—review and editing, X.L., Y.W. and Y.Z.; supervision, Y.W. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Natural Science and Engineering Research Council of Canada (RGPIN-2017-05720, RGPIN-2023-05655), the State Key Program of National Natural Science Foundation of China (71931004) and National Natural Science Foundation of China (92046005).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MLEMaximum Likelihood Estimation
LSLeast Squares
ERExpectile Regression
QRQuantile Regression
CERComposite Expectile Regression
WCERWeighted Composite Expectile Regression
EVaRExpectile-based Value at Risk
DTARCHDouble-threshold Autoregressive Conditional Heteroscedastic
RMSERoot Mean Squared Error

Appendix A

This appendix contains proofs of the various theorems and corollaries used herein. The proofs of Theorem 1, Corollary 1 and Theorem 2 are similar to those of [42,43], which will not be listed in detail here. Theorem 3 is a special case when k = 1 and ω = 1 in Theorem 4, and hence its proof is omitted. Corollary 2 can be obtained in the same way as Theorem 4. Thus, we only give the detailed proof of Theorem 4, in which the following lemmas are needed.
Lemma A1.
Suppose that the conditions (C2), (C4) and (C5) hold. Then,
n ( v , δ , u ) = k = 1 K ω k t = s + 1 n Q τ k S t k * Δ t k Q τ k S t k * ,
which is defined in (A11), can be written as
n ( v , δ , u ) = L n θ + θ G θ + o p ( 1 ) ,
  where θ = v , δ , u ; L n = L n 1 , L n 2 , L n 3 , with
L n 1 = ω 1 q n , 1 , , ω K q n , K , q n , k = 1 n t = s + 1 n Q ˙ τ k e t c τ k * ( k = 1 , , K ) , L n 2 = 1 n k = 1 K ω k t = s + 1 n Q ˙ τ k e t c τ k * r t , L n 3 = 1 n k = 1 K ω k t = s + 1 n Q ˙ τ k e t c τ k * z t h t ;
G = G 11 G 12 G 21 G 22 is a block matrix with each block being
G 11 = diag ω 1 ( 1 2 τ 1 ) G ( c τ 1 * ) + τ 1 , , ω K ( 1 2 τ K ) G ( c τ K * ) + τ K ,
G 12 = G 11 1 μ , G 21 = 1 G 11 μ , G 22 = 1 G 11 1 Ω .
Proof of Lemma A1.
To facilitate the proof, we denote by
H t = H t 11 H t 12 H t 21 H t 22 ,
where H t = 2 log h t ( α , β ) γ γ γ * is a 2 × 2 block matrix with γ = α β , γ * = α * β * , and
H = E H t = H 11 H 12 H 21 H 22 .
Note that for arbitrary positive number a, we have
P n 1 / 2 x t δ ϵ t < a 1 .
Then, it follows from Taylor’s expansion for the natural logarithm log | ϵ t ( α ) | that
log | ϵ t ( α ) | log | ϵ t ( α * ) | = x t δ n ϵ t 1 2 n δ x t x t δ ϵ t 2 + o p n 1 .
And by Taylor’s expansions for the natural logarithm log h t ( α , β ) , we have
log h t ( α , β ) log h t ( α * , β * ) = f t δ n h t + z t u n h t 1 2 n δ H t 11 δ 1 2 n u H t 22 u 1 n δ H t 12 u + o p n 1 ,
where
f t = j = 1 m I t , j 0 , sgn ( ϵ t 1 ) x t 1 , , sgn ( ϵ t q j ) x t q j β ( j ) .
By substituting Equations (A1) and (A2) into Δ t k , where the definition of Δ t k is given in the proof of Theorem 4, we can obtain that
Δ t k = v k n + z t u n h t + r t δ n + 1 2 n δ x t x t δ ϵ t 2 1 2 n δ H t 11 δ 1 2 n u H t 22 u 1 n δ H t 12 u + o p n 1 ,
where r t = x t ϵ t + f t h t .
Note that n ( v , δ , u ) is convex in ( v , δ , u ) . Showing that n ( v , δ , u ) converges pointwise to its conditional expectation is enough, and the convergence is uniformly valid on any compact set of ( v , δ , u ) , which is because of the convexity lemma in [53].
n ( v , δ , u ) = k = 1 K ω k t = s + 1 n E Q τ k S t k * Δ t k Q τ k S t k * + Q ˙ τ k S t k * Δ t k x t , z t k = 1 K ω k t = s + 1 n Q ˙ τ k S t k * Δ t k + R n , k ( v , δ , u ) ,
where
Q τ k S t k * Δ t k Q τ k S t k * + Q ˙ τ k S t k * Δ t k = ( 1 2 τ k ) S t k * Δ t k 2 I S t k * Δ t k I S t k * 0 + ( 1 τ k ) Δ t k 2 I S t k * 0 + τ k Δ t k 2 I S t k * > 0 ,
and
E I S t k * < Δ t k I S t k * < 0 x t , z t = g c τ k * Δ t k + 1 2 g ˙ c τ k * Δ t k 2 + o Δ t k 2 ,
E e t I S t k * < Δ t k I S t k * < 0 x t , z t = c τ k * g c τ k * Δ t k + 1 2 g c τ k * + c τ k * g ˙ c τ k * Δ t k 2 + o Δ t k 2 ,
E e t 2 I S t k * < Δ t k I S t k * < 0 x t , z t = c τ k * 2 g c τ k * Δ t k + 1 2 2 c τ k * g c τ k * + c τ k * 2 g ˙ c τ k * Δ t k 2 + o Δ t k 2 .
From the above equations, we can obtain that
k = 1 K ω k t = s + 1 n E Q τ k S t k * Δ t k Q τ k S t k * + Q ˙ τ k S t k * Δ t k x t , z t = k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k v k , δ , u × 1 n t = s + 1 n 1 1 n t = s + 1 n r t 1 n t = s + 1 n z t h t 1 n t = s + 1 n r t 1 n t = s + 1 n r t r t 1 n t = s + 1 n r t z t h t 1 n t = s + 1 n z t h t 1 n t = s + 1 n z t r t h t 1 n t = s + 1 n z t z t h t 2 v k δ u + O p 1 n .
By the Chebyshev’s weak law of large numbers, we obtain that
1 n t = s + 1 n 1 1 n t = s + 1 n r t 1 n t = s + 1 n z t h t 1 n t = s + 1 n r t 1 n t = s + 1 n r t r t 1 n t = s + 1 n r t z t h t 1 n t = s + 1 n z t h t 1 n t = s + 1 n z t r t h t 1 n t = s + 1 n z t z t h t 2 p 1 μ r μ z μ r Ω r Ω r z μ z Ω r z Ω z Γ .
By substituting the above equations into Equation (A4), we can obtain that
n ( v , δ , u ) = k = 1 K ω k v k 1 n t = s + 1 n Q ˙ τ k e t c τ k * k = 1 K ω k 1 n t = s + 1 n Q ˙ τ k e t c τ k * r t δ k = 1 K ω k 1 n t = s + 1 n Q ˙ τ k e t c τ k * z t h t u + k = 1 K ω k δ 1 n t = s + 1 n Q ˙ τ k e t c τ k * H t 12 u + k = 1 K ω k u 1 2 n t = s + 1 n Q ˙ τ k e t c τ k * H t 22 u + k = 1 K ω k δ 1 2 n t = s + 1 n Q ˙ τ k e t c τ k * H t 11 x t x t ϵ t 2 δ + k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k θ k Γ θ k + o p ( 1 ) + R n , k ( v , δ , u ) ,
where θ k = v k , δ , u . Since E Q ˙ τ k e t c τ k * = 0 , by the Chebyshev’s weak law of large numbers, we obtain that
1 n t = s + 1 n Q ˙ τ k e t c τ k * H t 12 p 0 ,
1 n t = s + 1 n Q ˙ τ k e t c τ k * H t 22 p 0 ,
1 n t = s + 1 n Q ˙ τ k e t c τ k * H t 11 x t x t ϵ t 2 p 0 .
Subsequently, we obtain that
n ( v , δ , u ) = k = 1 K ω k 1 n t = s + 1 n Q ˙ τ k e t c τ k * v k k = 1 K ω k 1 n t = s + 1 n Q ˙ τ k e t c τ k * r t δ k = 1 K ω k 1 n t = s + 1 n Q ˙ τ k e t c τ k * z t h t u + k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k θ k Γ θ k + o p ( 1 ) + R n , k ( v , δ , u ) .
According to Lemma A2, we can obtain that
R n , k ( v , δ , u ) = o p n 1 = o p ( 1 ) ,
which combining with (A7) and performing some straightforward calculations, we arrive at the conclusion of Lemma A1. □
Lemma A2.
Suppose that the conditions (C2), (C4) and (C5) hold. Then,
R n , k ( v , δ , u ) = o p n 1 = o p ( 1 ) ,
which is defined in (A4).
Proof of Lemma A2.
We can manipulate Equation (A4) to obtain the following form
E n ( v , δ , u ) = E E n ( v , δ , u ) x , z E k = 1 K ω k t = s + 1 n Q ˙ τ k S t k * Δ t k E k = 1 K ω k t = s + 1 n E Q ˙ τ k S t k * Δ t k x t , z t + E R n , k ( v , δ , u ) ,
which yields that
E R n , k ( v , δ , u ) = 0 .
According to Equations (A6) and (A11), we obtain that
R n , k ( v , δ , u ) = k = 1 K ω k t = s + 1 n Q τ k S t k * Δ t k Q τ k S t k * + Q ˙ τ k S t k * Δ t k k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k θ k Γ θ k + o p ( 1 ) .
By Lemma 2 in [9], we obtain that
Var R n , k ( v , δ , u ) = n k = 1 K ω k Var Q τ k S t k * Δ t k Q τ k S t k * + Q ˙ τ k S t k * Δ t k n k = 1 K ω k E Q τ k S t k * Δ t k Q τ k S t k * + Q ˙ τ k S t k * Δ t k 2 n k = 1 K ω k E 4 Δ t k 2 2 = O n 1 ,
i.e.,
Var R n , k ( v , δ , u ) = o n 1 .
From (A8) and (A10), it follows that
R n , k ( v , δ , u ) = o p n 1 = o p ( 1 ) .
This completes the proof of Lemma A2. □
Lemma A3.
Suppose that the conditions (C2), (C4) and (C5) hold. Then,
L n D N 0 , Σ 0 ,
where L n is defined in Lemma A1, and Σ 0 = Σ 11 0 Σ 12 0 Σ 21 0 Σ 22 0 is a block matrix with blocks Σ 11 0 is a K × K matrix with the ( j , k ) th element being
Σ 11 0 ( j , k ) = ω j ω k E Q ˙ τ j e t c τ j * Q ˙ τ k e t c τ k * ;
Σ 12 0 = Σ 11 0 1 μ ; Σ 21 0 = 1 Σ 11 0 μ ; Σ 22 0 = 1 Σ 11 0 1 Ω .
Proof of Lemma A3.
Note that
L n = L n 1 L n 2 L n 3 = 1 n t = s + 1 n Q ˙ τ 1 e t c τ 1 * ω 1 1 n t = s + 1 n Q ˙ τ K e t c τ K * ω K 1 n k = 1 K ω k t = s + 1 n Q ˙ τ k e t c τ k * r t 1 n k = 1 K ω k t = s + 1 n Q ˙ τ k e t c τ k * z t h t .
By the Cramer–Wald device and the Central Limit Theorem, we obtain that
n 1 n L n μ L D N 0 ( K + p + q ) × 1 , Σ 0 .
We now calculate μ L and Σ 0 , respectively. It is easy to show that
μ L = 0 ( K + p + q ) × 1 .
Let
Σ 0 = Σ 11 0 Σ 12 0 Σ 21 0 Σ 22 0 ,
where
Σ 11 0 = E Q ˙ τ 1 e t c τ 1 * ω 1 Q ˙ τ K e t c τ K * ω K Q ˙ τ 1 e t c τ 1 * ω 1 Q ˙ τ K e t c τ K * ω K ,
which is a K × K matrix with the ( j , k ) th element being
Σ 11 ( j , k ) = ω j ω k E Q ˙ τ j e t c τ j * Q ˙ τ k e t c τ k * = 4 ω j ω k τ j τ k τ j I ( e t c τ k * ) τ k I ( e t c τ j * ) + I ( e t c τ j * c τ k * ) × e t 2 ( c τ j * + c τ k * ) e t + c τ j * c τ k * d G ( e t ) .
We can obtain the expressions of Σ 12 , Σ 21 and Σ 22 similarly. Thus,
L n D N 0 , Σ 0 ,
which completes the proof of Lemma A3. □
Proof of Theorem 4.
Let n ( α α * ) = δ , n ( β β * ) = u , n ( c τ k c τ k * ) = v k , v = v 1 , , v K and
S t k = log | ϵ t ( α ) | log h t ( α , β ) c τ k = log | ϵ t ( α * + n 1 / 2 δ ) | log h t ( α * + n 1 / 2 δ , β * + n 1 / 2 u ) c τ k + n 1 / 2 v k .
Define S t k * = log | ϵ t ( α * ) | log h t ( α * , β * ) c τ k * = e t c τ k * , Δ t k = S t k * S t k , and
n ( v , δ , u ) = k = 1 K ω k t = s + 1 n Q τ k S t k Q τ k e t c τ k * = k = 1 K ω k t = s + 1 n Q τ k S t k * Δ t k Q τ k S t k * .
Then minimizing the objective in (9) is equivalent to minimizing n ( v , δ , u ) . By Lemma A1, we obtain that
n ( v , δ , u ) = L n θ + 1 2 θ 2 G θ + o p ( 1 ) ,
where θ = v , δ , u ; L n = L n 1 , L n 2 , L n 3 , with
L n 1 = ω 1 q n , 1 , , ω K q n , K , q n , k = 1 n t = s + 1 n Q ˙ τ k e t c τ k * ( k = 1 , , K ) , L n 2 = 1 n k = 1 K ω k t = s + 1 n Q ˙ τ k e t c τ k * r t , L n 3 = 1 n k = 1 K ω k t = s + 1 n Q ˙ τ k e t c τ k * z t h t ;
G = G 11 G 12 G 21 G 22 is a block matrix, with each block being
G 11 = diag ω 1 ( 1 2 τ 1 ) G ( c τ 1 * ) + τ 1 , , ω K ( 1 2 τ K ) G ( c τ K * ) + τ K ,
G 12 = G 11 1 μ , G 21 = 1 G 11 μ , G 22 = 1 G 11 1 Ω .
We further perform transformation calculations on matrix G, resulting in
G = G 11 G 11 1 μ 1 G 11 μ 1 G 11 1 Ω = G 11 1 / 2 0 1 G 11 1 / 2 μ I I 0 0 1 G 11 1 Π G 11 1 / 2 G 11 1 / 2 1 μ 0 I = G 11 1 / 2 0 1 G 11 1 / 2 μ I I 0 0 1 G 11 1 Π 1 / 2 I 0 0 1 G 11 1 Π 1 / 2 G 11 1 / 2 G 11 1 / 2 1 μ 0 I = G 11 1 / 2 0 1 G 11 1 / 2 μ I I 0 0 1 G 11 1 Π 1 / 2 G 11 1 / 2 0 1 G 11 1 / 2 μ I I 0 0 1 G 11 1 Π 1 / 2 ,
that is to say, G is a positive matrix. Furthermore, according to Lemma A3, we obtain that
L n D N 0 , Σ 0 ,
where Σ 0 = Σ 11 0 Σ 12 0 Σ 21 0 Σ 22 0 is a block matrix, and each block is, respectively: Σ 11 0 is a K × K matrix with the ( j , k ) th element being
Σ 11 0 ( j , k ) = ω j ω k E Q ˙ τ j e t c τ j * Q ˙ τ k e t c τ k * ;
Σ 12 0 = Σ 11 0 1 μ ; Σ 21 0 = 1 Σ 11 0 μ ; Σ 22 0 = 1 Σ 11 0 1 Ω .
Therefore, by Theorem 2 in [54], we can obtain that
θ ^ D N 0 , 1 4 G 1 Σ 0 G 1 .
Note θ ^ = n c ^ τ 1 c τ 1 * , , c ^ τ K c τ K * , α ^ α * , β ^ β * . Now, we obtain that
n c ^ τ 1 c τ 1 * , , c ^ τ K c τ K * , α ^ α * , β ^ β * D N 0 , 1 4 G 1 Σ 0 G 1 ,
next, we need to compute 1 4 G 1 Σ 0 G 1 .
Denote G 1 = G 11 G 12 G 21 G 22 , by applying the inverse operation rules of block matrices, we can obtain that
G 22.1 = G 22 G 21 G 11 1 G 12 = 1 G 11 1 Ω 1 G 11 μ G 11 1 G 11 1 μ = 1 G 11 1 Ω μ μ .
Because Ω r = E r t r t , Π r = Var r t , μ r = E r t , then Π r = Ω r μ r μ r ; because Ω z = E z t z t h t 2 , Π z = Var z t h t , μ z = E z t h t , then Π z = Ω z μ z μ z . Because Ω r z = E z t h t r t , Π r z = Cov r t , z t h t , then Π r z = Cov z t h t , r t = Ω r z μ z μ r ; so, we obtain that
Ω μ μ = Π .
Substituting the above formula into G 22.1 yields
G 22.1 = 1 G 11 1 Π = k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k Π .
So, we can obtain the expressions for G 11 , G 12 , G 21 and G 22 as follows:
G 11 = G 11 1 + G 11 1 G 12 G 22.1 1 G 21 G 11 1 = G 11 1 + G 11 1 G 11 1 μ 1 G 11 1 Π 1 1 G 11 μ G 11 1 = G 11 1 + 1 G 11 1 1 1 μ Π 1 1 μ = G 11 1 + 1 G 11 1 1 μ Π 1 μ 1 1 = diag 1 ω 1 ( 1 2 τ 1 ) G ( c τ 1 * ) + τ 1 , , 1 ω K ( 1 2 τ K ) G ( c τ K * ) + τ K + μ Π 1 μ k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k 1 1 ;
G 12 = G 11 1 G 12 G 22.1 1 = G 11 1 G 11 1 μ 1 G 11 1 Π 1 = 1 G 11 1 1 1 μ Π 1 = 1 k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k 1 μ Π 1 ;
G 21 = G 22.1 1 G 21 G 11 1 = 1 G 11 1 1 1 Π 1 μ = 1 k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k 1 Π 1 μ ;
and
G 22 = G 22.1 1 = 1 G 11 1 Π 1 = 1 k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k Π 1 .
Denote
Σ = Σ 11 Σ 12 Σ 21 Σ 22 = 1 4 G 1 Σ 0 G 1 = 1 4 G 11 G 12 G 21 G 22 Σ 11 0 Σ 12 0 Σ 21 0 Σ 22 0 G 11 G 12 G 21 G 22 ,
next, we have to calculate Σ 11 , Σ 12 , Σ 21 and Σ 22 , respectively. First of all,
Σ 11 = 1 4 G 11 Σ 11 0 G 11 + G 12 Σ 21 0 G 11 + G 11 Σ 12 0 G 21 + G 12 Σ 22 0 G 21 ,
where
G 11 Σ 11 0 G 11 = G 11 1 + 1 G 11 1 1 μ Π 1 μ 1 1 Σ 11 0 G 11 1 + 1 G 11 1 1 μ Π 1 μ 1 1 = G 11 1 Σ 11 0 G 11 1 + G 11 1 Σ 11 0 1 G 11 1 1 μ Π 1 μ 1 1 + 1 G 11 1 1 μ Π 1 μ 1 1 Σ 11 0 G 11 1 + 1 G 11 1 1 μ Π 1 μ 1 1 Σ 11 0 1 G 11 1 1 μ Π 1 μ 1 1 = G 11 1 Σ 11 0 G 11 1 + μ Π 1 μ 1 G 11 1 G 11 1 Σ 11 0 1 1 + 1 1 Σ 11 0 G 11 1 + μ Π 1 μ 1 G 11 1 2 1 Σ 11 0 1 1 1 ;
G 12 Σ 21 0 G 11 = 1 G 11 1 1 1 μ Π 1 1 Σ 11 0 μ G 11 1 + 1 G 11 1 1 μ Π 1 μ 1 1 = μ Π 1 μ 1 G 11 1 1 1 Σ 11 0 G 11 1 μ Π 1 μ 1 G 11 1 2 1 Σ 11 0 1 1 1 ;
G 11 Σ 12 0 G 21 = G 12 Σ 21 0 G 11 = μ Π 1 μ 1 G 11 1 G 11 1 Σ 11 0 1 1 μ Π 1 μ 1 G 11 1 2 1 Σ 11 0 1 1 1 ;
G 12 Σ 22 0 G 21 = 1 G 11 1 1 1 μ Π 1 1 Σ 11 0 1 Ω 1 G 11 1 1 1 Π 1 μ = 1 G 11 1 2 1 Σ 11 0 1 μ Π 1 Ω Π 1 μ 1 1 = 1 G 11 1 2 1 Σ 11 0 1 μ Π 1 Π + μ μ Π 1 μ 1 1 = 1 G 11 1 2 1 Σ 11 0 1 μ Π 1 μ z + μ Π 1 μ 2 1 1 .
So, by substituting the results of (A18)–(A21) into the expression for Σ 11 in (A17), we can obtain the value of Σ 11 as
Σ 11 = 1 4 G 11 Σ 11 0 G 11 + G 12 Σ 21 0 G 11 + G 11 Σ 12 0 G 21 + G 12 Σ 22 0 G 21 = 1 4 G 11 1 Σ 11 0 G 11 1 + 1 4 1 G 11 1 2 1 Σ 11 0 1 μ Π 1 μ 1 1 = ξ + σ 2 ( ω ) μ Π 1 μ 1 1 ,
where ξ is a K × K matrix with the ( j , k ) th element being
ξ j k = E Q ˙ τ j e t c τ j * Q ˙ τ k e t c τ k * 4 ( 1 2 τ j ) G ( c τ j * ) + τ j ( 1 2 τ k ) G ( c τ k * ) + τ k ,
σ 2 ( ω ) = ω Λ ω k = 1 K ω k ( 1 2 τ k ) G ( c τ k * ) + τ k 2 ,
with Λ is a K × K matrix with the ( j , k ) th element being Λ ( j , k ) ,
Λ ( j , k ) = 1 4 E Q ˙ τ j e t c τ j * Q ˙ τ k e t c τ k * = τ j τ k τ j I ( e t c τ k * ) τ k I ( e t c τ j * ) + I ( e t c τ j * c τ k * ) e t 2 ( c τ j * + c τ k * ) e t + c τ j * c τ k * d G ( e t ) .
Secondly, we have
Σ 12 = 1 4 G 11 Σ 11 0 G 12 + G 12 Σ 21 0 G 12 + G 11 Σ 12 0 G 22 + G 12 Σ 22 0 G 22 ,
where
G 11 Σ 11 0 G 12 = G 11 1 + 1 G 11 1 1 μ Π 1 μ 1 1 Σ 11 0 1 G 11 1 1 1 μ Π 1 = 1 G 11 1 1 G 11 1 Σ 11 0 1 μ Π 1 1 G 11 1 2 μ Π 1 μ 1 Σ 11 0 1 1 μ Π 1 ;
G 12 Σ 21 0 G 12 = 1 G 11 1 1 1 μ Π 1 1 Σ 11 0 μ 1 G 11 1 1 1 μ Π 1 = 1 G 11 1 2 1 Σ 11 0 1 μ Π 1 μ 1 μ Π 1 ;
G 11 Σ 12 0 G 22 = G 11 1 + 1 G 11 1 1 μ Π 1 μ 1 1 Σ 11 0 1 μ 1 G 11 1 Π z 1 = 1 G 11 1 1 G 11 1 Σ 11 0 1 μ Π 1 + 1 G 11 1 2 μ Π 1 μ 1 Σ 11 0 1 1 μ Π 1 ;
G 12 Σ 22 0 G 22 = 1 G 11 1 1 1 μ Π 1 1 Σ 11 0 1 Ω 1 G 11 1 Π 1 = 1 G 11 1 2 1 Σ 11 0 1 1 μ Π 1 Ω Π 1 = 1 G 11 1 2 1 Σ 11 0 1 1 μ Π 1 Π + μ μ Π 1 = 1 G 11 1 2 1 Σ 11 0 1 1 μ Π 1 1 G 11 1 2 1 Σ 11 0 1 μ Π 1 μ 1 μ Π 1 .
So, by substituting the results of (A24)–(A27) into the expression for Σ 12 in (A23), we can obtain the value of Σ 12 as
Σ 12 = 1 4 G 11 Σ 11 0 G 12 + G 12 Σ 21 0 G 12 + G 11 Σ 12 0 G 22 + G 12 Σ 22 0 G 22 = 1 4 1 G 11 1 2 1 Σ 11 0 1 1 μ Π 1 = σ 2 ( ω ) 1 μ Π 1 .
Thirdly, we have
Σ 21 = Σ 12 = σ 2 ( ω ) 1 Π 1 μ .
Fourthly, we have
Σ 22 = 1 4 G 21 Σ 11 0 G 12 + G 22 Σ 21 0 G 12 + G 21 Σ 12 0 G 22 + G 22 Σ 22 0 G 22 ,
where
G 21 Σ 11 0 G 12 = 1 G 11 1 1 1 Π 1 μ Σ 11 0 1 G 11 1 1 1 μ Π 1 = 1 G 11 1 2 1 Σ 11 0 1 Π 1 μ μ Π 1 ;
G 22 Σ 21 0 G 12 = 1 G 11 1 Π 1 1 Σ 11 0 μ 1 G 11 1 1 1 μ Π 1 = 1 G 11 1 2 1 Σ 11 0 1 Π 1 μ μ Π 1 ;
G 21 Σ 12 0 G 22 = G 22 Σ 21 0 G 12 = 1 G 11 1 2 1 Σ 11 0 1 Π 1 μ μ Π 1 ;
G 22 Σ 22 0 G 22 = 1 G 11 1 Π 1 1 Σ 11 0 1 Ω 1 G 11 1 Π 1 = 1 G 11 1 2 1 Σ 11 0 1 Π 1 Ω Π 1 = 1 G 11 1 2 1 Σ 11 0 1 Π 1 Π + μ μ Π 1 = 1 G 11 1 2 1 Σ 11 0 1 Π 1 + Π 1 μ μ Π 1 .
So, by substituting the results of (A31)–(A34) into the expression for Σ 22 in (A30), we can obtain the value of Σ 22 as
Σ 22 = 1 4 G 21 Σ 11 0 G 12 + G 22 Σ 21 0 G 12 + G 21 Σ 12 0 G 22 + G 22 Σ 22 0 G 22 = 1 4 1 G 11 1 2 1 Σ 11 0 1 Π 1 = σ 2 ( ω ) Π 1 .
Therefore, we obtain that
n c ^ τ 1 c τ 1 * , , c ^ τ K c τ K * , α ^ α * , β ^ β * D N 0 , Σ ,
where Σ = Σ 11 Σ 12 Σ 21 Σ 22 . We have now concluded the proof of Theorem 4. □

Appendix B. Partial Results of Simulation Studies

Table A1. Parameter estimate of various estimation methods, ϵ t N ( 0 , 1 ) .
Table A1. Parameter estimate of various estimation methods, ϵ t N ( 0 , 1 ) .
n = 100
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2357  0.2902  0.4264  0.1868  0.0349  0.2864  0.4907  0.0638  0.4346  0.3369
LS  0.2342  0.2878  0.4228  0.1854  0.0354  0.2852  0.4881  0.0641  0.4333  0.3357
QR0.25  0.2141  0.1569  0.4113  0.1019  0.0441  0.2249  0.4344  0.0789  0.3899  0.2959
QR0.75  0.2148  0.1752  0.4134  0.1089  0.0439  0.2256  0.4356  0.0792  0.3901  0.2981
ER0.25  0.2099  0.2123  0.4134  0.1099  0.0430  0.2257  0.4312  0.0794  0.3914  0.2953
ER0.75  0.2084  0.2101  0.4129  0.1098  0.0441  0.2241  0.4346  0.0791  0.3913  0.2953
WCER  0.2159  0.2257  0.4265  0.1439  0.0354  0.2459  0.4753  0.0671  0.4056  0.3149
n = 300
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2417  0.2913  0.4396  0.1907  0.0327  0.2903  0.4936  0.0623  0.4403  0.3421
LS  0.2404  0.2902  0.4390  0.1902  0.0331  0.2902  0.4925  0.0631  0.4391  0.3404
QR0.25  0.2209  0.2387  0.4242  0.1499  0.0401  0.2512  0.4548  0.0717  0.4024  0.3080
QR0.75  0.2213  0.2389  0.4244  0.1508  0.0398  0.2508  0.4539  0.0729  0.4011  0.3079
ER0.25  0.2203  0.2399  0.4233  0.1508  0.0399  0.2489  0.4553  0.0727  0.4013  0.3086
ER0.75  0.2202  0.2380  0.4239  0.1511  0.0392  0.2513  0.4546  0.0741  0.4006  0.3089
WCER  0.2295  0.2604  0.4376  0.1701  0.0339  0.2824  0.4869  0.0642  0.4248  0.3302
n = 800
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2449  0.2928  0.4446  0.1925  0.0316  0.2944  0.4958  0.0617  0.4434  0.3449
LS  0.2441  0.2919  0.4443  0.1919  0.0318  0.2941  0.4951  0.0620  0.4430  0.3441
QR0.25  0.2311  0.2578  0.4313  0.1641  0.0372  0.2588  0.4661  0.0687  0.4099  0.3191
QR0.75  0.2306  0.2584  0.4321  0.1644  0.0371  0.2580  0.4656  0.0689  0.4101  0.3195
ER0.25  0.2301  0.2579  0.4311  0.1658  0.0376  0.2594  0.4659  0.0692  0.4108  0.3188
ER0.75  0.2294  0.2581  0.4314  0.1654  0.0374  0.2587  0.4648  0.0696  0.4106  0.3191
WCER  0.2409  0.2802  0.4418  0.1814  0.0322  0.2901  0.4918  0.0624  0.4337  0.3379
n = 1500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2467  0.2955  0.4462  0.1954  0.0306  0.2977  0.4971  0.0605  0.4465  0.3473
LS  0.2462  0.2944  0.4460  0.1951  0.0308  0.2972  0.4968  0.0606  0.4460  0.3469
QR0.25  0.2361  0.2732  0.4435  0.1862  0.0336  0.2704  0.4761  0.0651  0.4241  0.3269
QR0.75  0.2368  0.2738  0.4437  0.1858  0.0339  0.2711  0.4758  0.0651  0.4239  0.3263
ER0.25  0.2355  0.2721  0.4435  0.1864  0.0338  0.2701  0.4759  0.0649  0.4231  0.3277
ER0.75  0.2364  0.2718  0.4432  0.1871  0.0341  0.2705  0.4755  0.0652  0.4229  0.3266
WCER  0.2446  0.2903  0.4459  0.1945  0.0307  0.2924  0.4964  0.0607  0.4451  0.3467
n = 2500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2485  0.2978  0.4483  0.1976  0.0302  0.2988  0.4985  0.0602  0.4483  0.3487
LS  0.2483  0.2972  0.4479  0.1970  0.0304  0.2983  0.4980  0.06004  0.4478  0.3484
QR0.25  0.2424  0.2858  0.4463  0.1924  0.0316  0.2858  0.4877  0.0625  0.4362  0.3373
QR0.75  0.2439  0.2856  0.4465  0.1926  0.0319  0.2851  0.4875  0.0626  0.4364  0.3371
ER0.25  0.2428  0.2853  0.4465  0.1925  0.0318  0.2853  0.4872  0.0624  0.4361  0.3378
ER0.75  0.2437  0.2850  0.4462  0.1922  0.0320  0.2850  0.4874  0.0625  0.4363  0.3376
WCER  0.2474  0.2960  0.4477  0.1965  0.0305  0.2980  0.4979  0.0605  0.4475  0.3482
Table A2. Parameter estimates of various estimation methods, ϵ_t ∼ t(6).
n = 100
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2362  0.2762  0.4302  0.1822  0.0346  0.2869  0.4903  0.0638  0.4337  0.3394
LS  0.2135  0.2057  0.4181  0.2707  0.0409  0.2341  0.4424  0.0765  0.4004  0.3038
QR0.25  0.2119  0.2044  0.4189  0.2756  0.0412  0.3689  0.4418  0.0759  0.3988  0.3030
QR0.75  0.2121  0.2055  0.4180  0.1248  0.0410  0.3692  0.4411  0.0761  0.3980  0.3025
ER0.25  0.2122  0.2039  0.4187  0.1250  0.0416  0.2299  0.4420  0.0767  0.3996  0.3036
ER0.75  0.2118  0.2049  0.4176  0.1255  0.0422  0.2296  0.4416  0.0762  0.3988  0.3039
WCER  0.2179  0.2095  0.4272  0.1436  0.0365  0.2501  0.4851  0.0678  0.4106  0.3177
n = 300
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2424  0.2894  0.4436  0.1913  0.0329  0.2918  0.4948  0.0621  0.4406  0.3426
LS  0.2228  0.2446  0.4287  0.2421  0.0376  0.2563  0.4645  0.0705  0.4084  0.3156
QR0.25  0.2221  0.2433  0.4259  0.2478  0.0379  0.3451  0.4633  0.0709  0.4045  0.3151
QR0.75  0.2218  0.2430  0.4256  0.1524  0.0381  0.3453  0.4639  0.0707  0.4043  0.3159
ER0.25  0.2225  0.2424  0.4264  0.1539  0.0381  0.2543  0.4624  0.0712  0.4044  0.3142
ER0.75  0.2223  0.2428  0.4255  0.1534  0.0387  0.2551  0.4626  0.0716  0.4039  0.3151
WCER  0.2321  0.2646  0.4384  0.1708  0.0337  0.2821  0.4899  0.0641  0.4259  0.3313
n = 800
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2454  0.2922  0.4458  0.1936  0.0309  0.2947  0.4956  0.0611  0.4444  0.3458
LS  0.2319  0.2639  0.4325  0.2268  0.0343  0.2699  0.4720  0.0656  0.4203  0.3262
QR0.25  0.2307  0.2625  0.4323  0.2288  0.0370  0.3311  0.4702  0.0669  0.4189  0.3260
QR0.75  0.2305  0.2614  0.4324  0.1721  0.0369  0.3320  0.4705  0.0671  0.4184  0.3256
ER0.25  0.2309  0.2628  0.4331  0.1709  0.0366  0.2683  0.4699  0.0678  0.4194  0.3261
ER0.75  0.2300  0.2619  0.4329  0.1695  0.0368  0.2669  0.4687  0.0681  0.4183  0.3249
WCER  0.2412  0.2801  0.4423  0.1824  0.0314  0.2908  0.4913  0.0618  0.4346  0.3395
n = 1500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2477  0.2949  0.4473  0.1955  0.0304  0.2969  0.4972  0.0607  0.4459  0.3466
LS  0.2397  0.2804  0.4398  0.2168  0.0323  0.2808  0.4813  0.0647  0.4309  0.3353
QR0.25  0.2389  0.2799  0.4384  0.2176  0.0335  0.3203  0.4811  0.0653  0.4301  0.3349
QR0.75  0.2383  0.2787  0.4379  0.1821  0.0337  0.3209  0.4808  0.0655  0.4297  0.3341
ER0.25  0.2385  0.2792  0.4388  0.1823  0.0333  0.2792  0.4803  0.0651  0.4295  0.3348
ER0.75  0.2379  0.2783  0.4382  0.1832  0.0338  0.2790  0.4809  0.0649  0.4284  0.3340
WCER  0.2449  0.2898  0.4469  0.1933  0.0308  0.2932  0.4954  0.0610  0.4407  0.3419
n = 2500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2488  0.2977  0.4485  0.1976  0.0302  0.2983  0.4986  0.0604  0.4479  0.3482
LS  0.2438  0.2891  0.4441  0.2069  0.0315  0.2887  0.4898  0.0625  0.4401  0.3413
QR0.25  0.2440  0.2888  0.4439  0.2073  0.0317  0.3117  0.4891  0.0627  0.4398  0.3408
QR0.75  0.2437  0.2887  0.4436  0.1921  0.0319  0.3115  0.4889  0.0629  0.4396  0.3410
ER0.25  0.2436  0.2889  0.4437  0.1922  0.0316  0.2881  0.4893  0.0633  0.4395  0.3411
ER0.75  0.2437  0.2883  0.4434  0.1928  0.0321  0.2877  0.4887  0.0631  0.4397  0.3409
WCER  0.2475  0.2941  0.4481  0.1966  0.0304  0.2964  0.4976  0.0607  0.4452  0.3468
Table A3. Parameter estimates of various estimation methods, ϵ_t ∼ χ^2(4).
n = 100
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2377  0.2772  0.4352  0.1869  0.0337  0.2883  0.4892  0.0651  0.4377  0.3397
LS  0.2126  0.2009  0.4018  0.2833  0.0411  0.2244  0.5609  0.0767  0.3953  0.3002
QR0.25  0.2151  0.2048  0.4927  0.2839  0.0417  0.2239  0.5601  0.0769  0.3962  0.4001
QR0.75  0.2148  0.2037  0.4944  0.1149  0.0421  0.2222  0.4396  0.0762  0.3956  0.4007
ER0.25  0.2161  0.2057  0.4084  0.1179  0.0409  0.2255  0.4406  0.0757  0.3974  0.3006
ER0.75  0.2153  0.2061  0.4069  0.1128  0.0417  0.2218  0.4401  0.0772  0.3947  0.2995
WCER  0.2239  0.2123  0.4289  0.1518  0.0365  0.2519  0.4757  0.0683  0.4181  0.3188
n = 300
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2414  0.2891  0.4439  0.1922  0.0318  0.2922  0.4951  0.0621  0.4419  0.3437
LS  0.2235  0.2456  0.4268  0.2344  0.0381  0.2655  0.5304  0.0701  0.4160  0.3227
QR0.25  0.2242  0.2469  0.4722  0.2342  0.0379  0.2667  0.5307  0.0699  0.4169  0.3771
QR0.75  0.2240  0.2459  0.4731  0.1651  0.0381  0.2661  0.4694  0.0698  0.4163  0.3773
ER0.25  0.2248  0.2477  0.4281  0.1664  0.0373  0.2676  0.4699  0.0698  0.4174  0.3240
ER0.75  0.2222  0.2445  0.4263  0.1649  0.0385  0.2642  0.4691  0.0712  0.4147  0.3224
WCER  0.2328  0.2667  0.4380  0.1728  0.0335  0.2839  0.4892  0.0638  0.4294  0.3329
n = 800
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2456  0.2944  0.4463  0.1949  0.0310  0.2956  0.4965  0.0612  0.4446  0.3459
LS  0.2314  0.2613  0.4336  0.2289  0.0363  0.2705  0.5227  0.0684  0.4225  0.3286
QR0.25  0.2321  0.2629  0.4661  0.2281  0.0361  0.2709  0.5220  0.0681  0.4227  0.3717
QR0.75  0.2317  0.2623  0.4667  0.1713  0.0365  0.2703  0.4775  0.0686  0.4221  0.3718
ER0.25  0.2326  0.2633  0.4342  0.1723  0.0354  0.2714  0.4783  0.0675  0.4232  0.3295
ER0.75  0.2309  0.2608  0.4331  0.1705  0.0367  0.2699  0.4768  0.0689  0.4218  0.3282
WCER  0.2416  0.2801  0.4426  0.1874  0.0320  0.2909  0.4917  0.0618  0.4355  0.3389
n = 1500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2479  0.2958  0.4476  0.1965  0.0306  0.2972  0.4971  0.0605  0.4468  0.3467
LS  0.2357  0.2743  0.4379  0.2131  0.0344  0.2782  0.5192  0.0653  0.4305  0.3317
QR0.25  0.2365  0.2747  0.4620  0.2129  0.0340  0.2784  0.5191  0.0650  0.4308  0.3678
QR0.75  0.2361  0.2741  0.4627  0.1867  0.0345  0.2781  0.4805  0.0654  0.4306  0.3680
ER0.25  0.2366  0.2752  0.4384  0.1873  0.0337  0.2786  0.4811  0.0648  0.4313  0.3331
ER0.75  0.2351  0.2739  0.4373  0.1862  0.0347  0.2777  0.4803  0.0656  0.4299  0.3311
WCER  0.2442  0.2899  0.4469  0.1932  0.0308  0.2930  0.4955  0.0608  0.4401  0.3433
n = 2500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  0.2489  0.2978  0.4486  0.1982  0.0304  0.2985  0.4985  0.0603  0.4483  0.3482
LS  0.2418  0.2870  0.4433  0.2073  0.0322  0.2886  0.5101  0.0625  0.4401  0.3412
QR0.25  0.2424  0.2872  0.4569  0.2076  0.0323  0.2884  0.5103  0.0627  0.4404  0.3590
QR0.75  0.2416  0.2875  0.4571  0.1925  0.0324  0.2881  0.4895  0.0624  0.4402  0.3594
ER0.25  0.2423  0.2878  0.4432  0.1928  0.0323  0.2885  0.4896  0.0624  0.4406  0.3408
ER0.75  0.2415  0.2876  0.4430  0.1930  0.0322  0.2887  0.4893  0.0622  0.4404  0.3412
WCER  0.2472  0.2951  0.4480  0.1965  0.0306  0.2967  0.4979  0.0605  0.4461  0.3473

References

1. Li, C.W.; Li, W.K. On a double-threshold autoregressive heteroscedastic time series model. J. Appl. Econom. 1996, 11, 253–274.
2. Van Hui, Y.; Jiang, J. Robust modelling of DTARCH models. Econom. J. 2005, 8, 143–158.
3. Jiang, J.; Jiang, X.; Song, X. Weighted composite quantile regression estimation of DTARCH models. Econom. J. 2014, 17, 1–23.
4. Liu, X.; Song, X.; Zhou, Y. Likelihood ratio-type tests in weighted composite quantile regression of DTARCH models. Sci. China Math. 2019, 62, 2571–2590.
5. Kuan, C.M.; Yeh, J.H.; Hsu, Y.C. Assessing value at risk with CARE, the conditional autoregressive expectile models. J. Econom. 2009, 150, 261–270.
6. Newey, W.; Powell, J. Asymmetric least squares estimation and testing. Econometrica 1987, 55, 819–847.
7. Efron, B. Regression percentiles using asymmetric squared error loss. Stat. Sin. 1991, 1, 93–125.
8. Jones, M. Expectiles and m-quantiles are quantiles. Stat. Probab. Lett. 1994, 20, 149–153.
9. Yao, Q.; Tong, H. Asymmetric least squares regression estimation: A nonparametric approach. J. Nonparametric Stat. 1996, 6, 273–292.
10. Eberl, A.; Klar, B. Expectile-based measures of skewness. Scand. J. Stat. 2022, 49, 373–399.
11. De Rossi, G.; Harvey, A. Quantiles, expectiles and splines. J. Econom. 2009, 152, 179–185.
12. Sobotka, F.; Kneib, T. Geoadditive expectile regression. Comput. Stat. Data Anal. 2012, 56, 755–767.
13. Zhao, J.; Chen, Y.; Zhang, Y. Expectile regression for analyzing heteroscedasticity in high dimension. Stat. Probab. Lett. 2018, 137, 304–311.
14. Zhao, J.; Zhang, Y. Variable selection in expectile regression. Commun. Stat.-Theory Methods 2018, 47, 1731–1746.
15. Gu, Y.; Zou, H. Aggregated expectile regression by exponential weighting. Stat. Sin. 2019, 29, 671–692.
16. Daouia, A.; Girard, S.; Stupfler, G. Tail expectile process and risk assessment. Bernoulli 2020, 26, 531–556.
17. Beck, N.; Di Bernardino, E.; Mailhot, M. Semi-parametric estimation of multivariate extreme expectiles. J. Multivar. Anal. 2021, 184, 104758.
18. Daouia, A.; Girard, S.; Stupfler, G. ExpectHill estimation, extreme risk and heavy tails. J. Econom. 2021, 221, 97–117.
19. Girard, S.; Stupfler, G.; Usseglio-Carleve, A. Extreme conditional expectile estimation in heavy-tailed heteroscedastic regression models. Ann. Stat. 2021, 49, 3358–3382.
20. Pan, Y.; Liu, Z.; Song, G. Weighted expectile regression with covariates missing at random. Commun. Stat.-Simul. Comput. 2021, 1–20, just-accepted.
21. Almanjahie, I.M.; Bouzebda, S.; Kaid, Z.; Laksaci, A. Nonparametric estimation of expectile regression in functional dependent data. J. Nonparametric Stat. 2022, 34, 250–281.
22. Bai, Y.; Jiang, R.; Zhang, M. Optimal model averaging estimator for expectile regressions. J. Stat. Plan. Inference 2022, 217, 204–223.
23. Girard, S.; Stupfler, G.; Usseglio-Carleve, A. Nonparametric extreme conditional expectile estimation. Scand. J. Stat. 2022, 49, 78–115.
24. Litimein, O.; Laksaci, A.; Mechab, B.; Bouzebda, S. Local linear estimate of the functional expectile regression. Stat. Probab. Lett. 2023, 192, 109682.
25. Xie, S.; Zhou, Y.; Wan, A.T.K. A varying-coefficient expectile model for estimating value at risk. J. Bus. Econ. Stat. 2014, 32, 576–592.
26. Liu, X.; Zhou, Y. Weighted composite expectile regression estimate of autoregressive models with application. Syst. Eng.-Theory Pract. 2016, 36, 1089–1098.
27. Bellini, F.; Di Bernardino, E. Risk management with expectiles. Eur. J. Financ. 2017, 23, 487–506.
28. Cai, Z.; Fang, Y.; Tian, D. Assessing tail risk using expectile regressions with partially varying coefficients. J. Manag. Sci. Eng. 2018, 3, 183–213.
29. Liu, X.; Zhou, Y. The semiparametric varying-coefficient composite expectile regression model in risk measurement and its application. Syst. Eng.-Theory Pract. 2020, 40, 2176–2192.
30. Wang, Z.; Zhou, Y.; Zeng, F. Semiparametric varying-coefficient expectile model for estimating value at risk on dependent samples. Sci. China Math. 2021, 51, 1377.
31. Syuhada, K.; Hakim, A.; Nur’aini, R. The expected-based value-at-risk and expected shortfall using quantile and expectile with application to electricity market data. Commun. Stat.-Simul. Comput. 2021, 1–18, just-accepted.
32. Davison, A.C.; Padoan, S.A.; Stupfler, G. Tail risk inference via expectiles in heavy-tailed time series. J. Bus. Econ. Stat. 2022, 1–34, just-accepted.
33. Jiang, R.; Hu, X.; Yu, K. Single-index expectile models for estimating conditional value at risk and expected shortfall. J. Financ. Econom. 2022, 20, 345–366.
34. Xu, W.; Hou, Y.; Li, D. Prediction of extremal expectile based on regression models with heteroscedastic extremes. J. Bus. Econ. Stat. 2022, 40, 522–536.
35. Engle, R. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 1982, 50, 987–1008.
36. Tong, H. On a threshold model. In Pattern Recognition and Signal Processing; Sijthoff & Noordhoff: Amsterdam, The Netherlands, 1978; pp. 575–586.
37. Bickel, P.; Lehmann, E. Descriptive statistics for nonparametric models. III. Dispersion. Ann. Stat. 1976, 4, 1139–1158.
38. Bickel, P. Tests for heteroscedasticity, nonlinearity. Ann. Stat. 1978, 6, 266–291.
39. Carroll, R.; Ruppert, D. Transformations and Weighting in Regression; Chapman and Hall: New York, NY, USA, 1988.
40. Koenker, R.; Zhao, Q. Conditional quantile estimation and inference for ARCH models. Econom. Theory 1996, 12, 793–813.
41. Jiang, J.; Zhao, Q.; Hui, Y. Robust modelling of ARCH models. J. Forecast. 2001, 20, 111–133.
42. Petruccelli, J.D. On the consistency of least squares estimators for a threshold AR(1) model. J. Time Ser. Anal. 1986, 7, 269–278.
43. Chan, K.S. Consistency and limiting distribution of the least squares estimator of a threshold autoregressive model. Ann. Stat. 1993, 21, 520–533.
44. Liu, X. The Statistical Analysis of Financial Risk Measurement Time Series Models and Their Application. Ph.D. Thesis, Shanghai University of Finance and Economics, Shanghai, China, 2014.
45. Kai, B.; Li, R.; Zou, H. Local composite quantile regression smoothing: An efficient and safe alternative to local polynomial regression. J. R. Stat. Soc. Ser. B (Methodological) 2010, 72, 49–69.
46. Kai, B.; Li, R.; Zou, H. New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models. Ann. Stat. 2011, 39, 305–332.
47. Zou, H.; Yuan, M. Composite quantile regression and the oracle model selection theory. Ann. Stat. 2008, 36, 1108–1126.
48. Jiang, X.; Jiang, J.; Song, X. Oracle model selection for nonlinear models based on weighted composite quantile regression. Stat. Sin. 2012, 22, 1479–1506.
49. Zhao, Z.; Xiao, Z. Efficient regressions via optimally combining quantile information. Econom. Theory 2014, 30, 1272–1314.
50. Cont, R. Empirical properties of asset returns: Stylized facts and statistical issues. Quant. Financ. 2001, 1, 223–236.
51. Politis, D.N. A heavy-tailed distribution for ARCH residuals with application to volatility prediction. Ann. Econ. Financ. 2004, 5, 283–298.
52. Hong, Y.; Lee, J.C.; Ding, G. Heavy-Tailed Distributions, GARCH Model and the Stock Market Returns in South Korea. SSRN Working Paper 3014472, 2017. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract-id=3014472 (accessed on 14 June 2023).
53. Pollard, D. Asymptotics for least absolute deviation regression estimators. Econom. Theory 1991, 7, 186–199.
54. Sherman, R. The limiting distribution of the maximum rank correlation estimator. Econometrica 1993, 61, 123–137.
Table 1. RMSE comparison of various estimation methods, ϵ_t ∼ N(0, 1).
n = 100
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  13.14  11.43  10.14  11.07  1.63  17.38  14.11  1.82  15.03  10.08
LS  14.83  12.72  11.63  11.97  1.93  19.97  15.37  2.31  16.61  11.27
QR0.25  21.16  23.47  20.09  23.43  3.09  28.97  27.62  3.99  25.38  24.13
QR0.75  22.37  25.64  21.47  23.31  3.31  29.93  25.89  3.68  27.15  24.47
ER0.25  22.51  24.18  21.21  22.19  3.14  29.91  26.71  4.01  26.93  24.51
ER0.75  22.79  24.65  21.35  24.09  3.19  30.07  26.96  3.99  27.24  25.48
WCER  18.75  19.78  16.71  18.45  1.98  20.09  15.61  2.41  17.52  11.98
n = 300
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  9.21  9.47  8.96  9.11  0.92  10.29  9.41  1.01  9.34  6.49
LS  10.11  10.38  9.11  9.32  0.99  11.10  9.54  1.14  9.49  6.75
QR0.25  17.98  21.42  15.98  18.76  2.19  23.89  18.91  2.59  21.99  18.49
QR0.75  18.99  21.39  16.09  18.94  2.29  23.94  18.99  2.63  22.03  18.53
ER0.25  18.91  21.01  16.67  18.92  2.22  23.71  18.85  2.67  22.31  18.56
ER0.75  19.12  21.05  16.71  19.01  2.32  23.79  19.01  2.68  22.45  18.67
WCER  13.42  13.32  9.58  9.97  1.01  11.43  9.88  1.12  10.12  7.18
n = 800
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  6.81  7.01  5.72  6.52  0.49  8.19  7.45  0.71  4.21  3.68
LS  7.45  7.21  6.14  7.11  0.54  8.92  7.94  0.79  4.35  3.81
QR0.25  14.96  16.57  12.68  14.38  1.79  17.09  14.48  1.89  16.71  13.09
QR0.75  14.99  16.66  12.74  14.46  1.88  17.04  14.49  1.94  16.79  13.11
ER0.25  15.02  16.65  12.65  14.31  1.86  16.92  14.46  1.92  16.62  13.16
ER0.75  15.36  16.78  12.87  14.41  1.96  16.98  14.52  2.02  16.87  13.18
WCER  7.65  8.05  6.86  7.56  0.60  9.21  8.19  0.86  4.56  3.99
n = 1500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  4.78  5.46  3.27  3.49  0.28  6.79  3.94  0.42  2.93  2.07
LS  4.84  5.79  3.41  3.62  0.36  6.90  4.04  0.47  3.11  2.18
QR0.25  9.92  10.14  8.11  9.52  1.21  10.09  10.21  1.01  11.34  8.92
QR0.75  9.99  10.20  8.23  9.51  1.28  10.07  10.28  0.99  11.26  8.89
ER0.25  9.95  10.13  8.25  9.43  1.17  11.13  10.12  1.04  11.26  8.86
ER0.75  10.10  10.24  8.37  9.44  1.25  11.09  10.25  1.02  11.66  8.92
WCER  5.03  5.98  3.68  3.84  0.41  7.12  4.23  0.56  3.24  2.37
n = 2500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  2.21  3.13  1.56  1.70  0.11  3.99  1.98  0.20  1.15  0.98
LS  2.30  3.47  1.72  1.89  0.18  4.14  2.15  0.28  1.30  1.13
QR0.25  4.65  5.84  4.01  4.44  0.51  7.10  5.49  0.59  5.79  4.24
QR0.75  4.68  5.89  4.04  4.50  0.54  7.08  5.51  0.57  5.76  4.29
ER0.25  4.64  5.86  4.05  4.46  0.50  7.12  5.42  0.60  5.75  4.22
ER0.75  4.70  5.92  4.08  4.42  0.55  7.15  5.44  0.58  5.77  4.27
WCER  2.42  3.56  1.84  2.01  0.22  4.23  2.26  0.36  1.39  1.21
Notes: All RMSEs are multiplied by 10^2.
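For reference, each entry of Tables 1–3 is a root mean squared error of the replicated estimates of one parameter around its true value, reported after scaling by 10^2. A minimal sketch of this computation under the standard RMSE definition follows; the replication estimates and the true value below are hypothetical placeholders, and the paper's exact replication count is not assumed.

```python
import numpy as np

def scaled_rmse(estimates, true_value, scale=100.0):
    """RMSE of replicated estimates around the true value, scaled by 10^2 as in the tables."""
    estimates = np.asarray(estimates, dtype=float)
    return scale * np.sqrt(np.mean((estimates - true_value) ** 2))

# Hypothetical estimates of one coefficient over a handful of replications.
estimates = [0.236, 0.242, 0.229, 0.251, 0.240]
print(round(scaled_rmse(estimates, true_value=0.25), 2))
```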
Table 2. RMSE comparison of various estimation methods, ϵ_t ∼ t(6).
n = 100
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  13.72  11.69  10.98  11.06  1.76  17.89  14.61  1.94  14.43  10.61
LS  20.54  23.98  19.91  21.98  2.75  27.84  23.32  3.49  23.25  19.94
QR0.25  20.08  23.81  20.01  21.99  2.81  28.11  24.06  3.52  23.99  19.98
QR0.75  20.01  23.86  20.14  22.12  2.90  28.20  24.01  3.60  24.12  21.11
ER0.25  19.56  23.89  20.13  22.20  2.78  28.16  24.16  3.47  24.00  20.22
ER0.75  20.75  23.45  20.16  22.67  2.83  28.56  24.70  3.55  24.05  20.53
WCER  19.12  21.62  16.59  19.65  1.89  25.71  17.84  2.56  19.08  12.19
n = 300
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  8.72  8.89  8.36  8.61  0.94  9.74  8.94  0.96  8.87  5.05
LS  16.98  19.39  14.44  15.92  1.98  19.09  15.63  2.11  18.42  13.99
QR0.25  17.62  20.01  14.98  16.92  2.11  20.38  16.11  2.26  19.99  15.13
QR0.75  17.69  19.92  14.87  16.57  2.22  20.97  16.04  2.31  20.08  15.24
ER0.25  17.59  20.02  15.07  16.91  2.07  20.46  16.01  2.37  20.13  15.72
ER0.75  18.01  20.43  15.16  16.13  2.12  20.71  16.07  2.41  20.34  15.83
WCER  12.56  12.48  9.06  9.52  1.05  10.67  9.37  1.12  9.62  6.26
n = 800
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  6.13  6.16  5.33  6.02  0.39  7.36  6.89  0.63  3.83  3.16
LS  12.30  13.87  10.39  11.99  1.34  13.82  12.03  1.51  11.87  10.01
QR0.25  12.41  13.90  10.98  12.14  1.45  14.72  12.58  1.72  12.61  10.99
QR0.75  12.47  13.99  11.13  12.31  1.49  14.70  12.64  1.86  12.59  11.07
ER0.25  12.43  13.98  11.17  12.51  1.47  14.62  12.63  1.68  12.56  11.09
ER0.75  12.69  13.78  11.23  12.63  1.58  14.67  12.72  1.90  12.49  11.03
WCER  7.22  8.11  5.64  7.31  0.57  8.16  7.87  0.78  4.53  3.92
n = 1500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  4.27  5.13  3.02  4.01  0.23  5.65  3.38  0.40  1.98  1.95
LS  8.80  9.96  7.60  7.93  0.99  10.54  9.33  0.94  8.98  7.56
QR0.25  9.33  10.43  7.87  8.13  1.01  10.98  10.11  1.05  9.38  7.91
QR0.75  9.30  10.58  7.82  8.17  1.02  10.93  10.07  1.08  9.45  7.87
ER0.25  9.40  10.70  7.98  8.09  1.03  11.18  10.06  1.03  9.97  7.85
ER0.75  9.45  10.78  7.78  8.11  1.01  10.99  10.19  1.07  10.01  7.86
WCER  5.11  5.89  3.67  4.27  0.39  6.74  4.19  0.59  2.92  2.59
n = 2500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  2.32  3.07  1.44  1.78  0.12  3.96  1.92  0.18  1.09  0.96
LS  4.63  5.81  3.96  4.39  0.44  7.01  5.39  0.56  5.68  4.15
QR0.25  4.72  5.87  4.04  4.41  0.47  7.16  5.42  0.59  5.73  4.23
QR0.75  4.68  5.89  4.09  4.45  0.49  7.19  5.45  0.57  5.75  4.27
ER0.25  4.76  5.92  4.06  4.40  0.46  7.21  5.41  0.58  5.71  4.21
ER0.75  4.78  5.90  4.08  4.47  0.47  7.20  5.44  0.60  5.72  4.25
WCER  2.49  3.29  1.56  1.89  0.21  4.15  2.17  0.28  1.27  1.15
Notes: All RMSEs are multiplied by 10^2.
Table 3. RMSE comparison of various estimation methods, ϵ_t ∼ χ^2(4).
n = 100
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  11.32  11.06  10.09  9.85  1.59  17.53  13.27  1.82  13.58  11.03
LS  20.56  23.45  19.82  20.89  3.17  27.96  24.38  3.95  22.50  21.94
QR0.25  20.11  23.24  19.59  20.52  3.03  27.02  24.11  3.91  22.43  21.87
QR0.75  20.52  23.37  19.81  20.78  3.11  27.21  24.10  3.98  22.52  21.99
ER0.25  19.48  23.13  19.41  20.19  2.99  26.87  24.07  3.80  22.37  20.20
ER0.75  20.78  23.58  19.98  21.04  3.24  28.01  24.89  3.99  22.63  22.06
WCER  18.34  19.64  15.57  15.99  1.97  20.16  14.13  2.52  16.59  13.09
n = 300
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  8.24  8.36  7.69  8.13  0.92  10.33  8.85  0.95  9.48  7.41
LS  17.83  19.45  16.05  16.53  2.02  19.32  17.45  2.30  19.63  16.71
QR0.25  17.49  19.10  16.07  16.31  2.01  19.15  17.31  2.22  19.47  16.56
QR0.75  17.58  19.21  16.02  16.43  2.07  19.20  17.41  2.28  19.58  16.61
ER0.25  17.35  18.98  15.88  16.13  1.92  18.98  17.24  2.15  19.38  16.40
ER0.75  17.98  19.48  16.13  16.75  2.11  19.60  17.58  2.38  19.75  16.82
WCER  12.24  12.71  10.17  10.09  1.09  14.04  10.89  1.17  10.29  10.57
n = 800
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  6.11  6.24  5.32  5.49  0.48  7.23  6.67  0.71  4.81  3.71
LS  13.66  14.52  11.65  13.03  1.77  15.15  13.21  1.81  15.12  12.01
QR0.25  13.28  14.47  11.21  12.78  1.68  15.08  13.12  1.80  14.98  11.99
QR0.75  13.42  14.58  11.35  12.92  1.79  15.19  13.23  1.87  15.07  12.03
ER0.25  12.96  14.35  10.99  12.63  1.59  14.98  12.97  1.74  14.87  11.75
ER0.75  13.72  14.63  11.74  13.18  1.79  15.26  13.26  1.86  15.24  12.09
WCER  7.06  7.88  6.39  7.17  0.64  8.92  7.51  0.82  5.12  4.51
n = 1500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  3.75  3.52  3.11  3.68  0.29  5.64  3.17  0.47  2.91  1.87
LS  9.97  9.33  8.77  8.65  1.12  11.14  10.23  1.28  10.09  9.26
QR0.25  9.25  9.28  8.54  8.33  1.11  11.10  10.11  1.21  10.07  9.20
QR0.75  9.43  9.37  8.76  8.59  1.17  11.19  10.21  1.30  10.11  9.27
ER0.25  8.99  9.08  7.94  8.17  1.06  10.19  9.96  1.19  9.97  9.14
ER0.75  10.02  9.49  8.89  8.78  1.23  11.19  10.31  1.31  10.14  9.33
WCER  4.98  4.69  3.75  4.08  0.37  6.91  4.51  0.61  3.28  2.71
n = 2500
Estimate  α1(1)  α2(1)  α1(2)  α2(2)  β0(1)  β1(1)  β2(1)  β0(2)  β1(2)  β2(2)
MLE  2.27  2.69  1.56  1.84  0.11  3.82  1.89  0.17  1.03  0.94
LS  4.68  5.78  4.01  4.42  0.47  7.08  5.42  0.55  5.72  4.21
QR0.25  4.77  5.89  4.12  4.50  0.49  7.19  5.49  0.57  5.83  4.30
QR0.75  4.74  5.92  4.14  4.48  0.48  7.23  5.51  0.59  5.85  4.28
ER0.25  4.76  5.98  4.15  4.44  0.51  7.25  5.55  0.56  5.81  4.27
ER0.75  4.78  5.96  4.13  4.47  0.54  7.21  5.50  0.60  5.87  4.26
WCER  2.41  2.98  1.76  1.93  0.22  3.99  2.05  0.27  1.19  1.08
Notes: All RMSEs are multiplied by 10^2.
Table 4. Estimates of parameters for HSI.
Estimates  LS  QR0.25  QR0.75  ER0.25  ER0.75  WCER
α1(1)  −0.1199 (0.18)  −0.1212 (0.23)  −0.1124 (0.19)  −0.1293 (0.20)  −0.1237 (0.19)  −0.1279 (0.08)
α2(1)  −0.0909 (0.21)  −0.0921 (0.20)  −0.0914 (0.19)  −0.0937 (0.23)  −0.0955 (0.22)  −0.0972 (0.08)
α1(2)  −0.0424 (0.23)  −0.0492 (0.25)  −0.0433 (0.21)  −0.0481 (0.23)  −0.0475 (0.23)  −0.0487 (0.07)
α2(2)  −0.0360 (0.15)  −0.0345 (0.15)  −0.0379 (0.14)  −0.0388 (0.17)  −0.0375 (0.16)  −0.0372 (0.04)
β0(1)  0.0059 (0.07)  0.0059 (0.08)  0.0062 (0.08)  0.0061 (0.07)  0.0058 (0.09)  0.0061 (0.05)
β1(1)  0.1798 (0.85)  0.1759 (0.79)  0.1694 (0.88)  0.1707 (0.84)  0.1763 (0.80)  0.1748 (0.47)
β2(1)  0.2901 (0.50)  0.2892 (0.48)  0.2859 (0.41)  0.2866 (0.44)  0.2873 (0.43)  0.2817 (0.22)
β3(1)  0.1837 (1.11)  0.1861 (1.14)  0.1790 (1.09)  0.1811 (1.17)  0.1843 (1.01)  0.1893 (0.89)
β4(1)  0.1298 (0.60)  0.1305 (0.63)  0.1321 (0.58)  0.1313 (0.55)  0.1355 (0.61)  0.1349 (0.21)
β0(2)  0.0077 (0.05)  0.0079 (0.06)  0.0074 (0.05)  0.0076 (0.07)  0.0078 (0.05)  0.0077 (0.04)
β1(2)  0.2877 (0.60)  0.2841 (0.62)  0.2763 (0.58)  0.2788 (0.64)  0.2859 (0.57)  0.2856 (0.34)
β2(2)  0.2134 (0.99)  0.2209 (0.89)  0.2098 (0.82)  0.2177 (0.86)  0.2106 (0.84)  0.2081 (0.68)
β3(2)  0.1298 (1.54)  0.1343 (1.62)  0.1266 (1.49)  0.1331 (1.57)  0.1276 (1.44)  0.1315 (1.19)
β4(2)  0.1923 (1.21)  0.1904 (1.30)  0.1898 (1.26)  0.1867 (1.19)  0.1854 (1.12)  0.1834 (0.93)
Note: Estimated SEs are given in parentheses, and all SEs are multiplied by 10^2.
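One standard route to standard errors such as those reported in Tables 4 and 5 is the plug-in use of an asymptotic normality result like Theorem 4: if Σ̂ estimates the limiting covariance matrix, the standard error of the j-th estimated coefficient is (Σ̂_{jj}/n)^{1/2}, which also yields a Wald-type confidence interval. Whether the paper obtains its SEs this way or by resampling is not restated here, so the sketch below is purely illustrative and all numbers in it are hypothetical placeholders.

```python
import math

# Hypothetical placeholders: an estimated coefficient, the corresponding diagonal
# entry of an estimated asymptotic covariance matrix, and the sample size.
alpha_hat = -0.128
sigma2_jj = 0.0065
n = 1000

se = math.sqrt(sigma2_jj / n)                                  # plug-in standard error
lower, upper = alpha_hat - 1.96 * se, alpha_hat + 1.96 * se    # 95% Wald-type interval
print(round(se, 4), (round(lower, 4), round(upper, 4)))
```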
Table 5. Estimates of parameters for SPI.
Estimates  LS  QR0.25  QR0.75  ER0.25  ER0.75  WCER
α1(1)  −0.0840 (0.18)  −0.0919 (0.21)  −0.0850 (0.22)  −0.0976 (0.17)  −0.0938 (0.17)  −0.0963 (0.09)
α2(1)  −0.0750 (0.23)  −0.0736 (0.18)  −0.0709 (0.18)  −0.0769 (0.25)  −0.0801 (0.24)  −0.0798 (0.07)
α1(2)  −0.0385 (0.24)  −0.0395 (0.21)  −0.0392 (0.19)  −0.0403 (0.22)  −0.0359 (0.25)  −0.0361 (0.07)
α2(2)  −0.0257 (0.13)  −0.0214 (0.17)  −0.0290 (0.14)  −0.0276 (0.15)  −0.0265 (0.14)  −0.0282 (0.05)
β0(1)  0.0037 (0.09)  0.0038 (0.08)  0.0038 (0.08)  0.0039 (0.09)  0.0039 (0.09)  0.0039 (0.04)
β1(1)  0.1635 (0.69)  0.1634 (0.76)  0.1692 (0.72)  0.1609 (0.68)  0.1628 (0.73)  0.1596 (0.31)
β2(1)  0.2697 (0.54)  0.2614 (0.52)  0.2527 (0.49)  0.2496 (0.51)  0.2469 (0.55)  0.2445 (0.29)
β3(1)  0.1965 (1.24)  0.1912 (1.19)  0.1958 (1.31)  0.1892 (1.22)  0.1881 (1.25)  0.1786 (0.98)
β4(1)  0.1011 (0.41)  0.1280 (0.32)  0.1222 (0.37)  0.1173 (0.43)  0.1123 (0.35)  0.1094 (0.14)
β0(2)  0.0045 (0.10)  0.0044 (0.12)  0.0044 (0.11)  0.0046 (0.11)  0.0046 (0.11)  0.0045 (0.05)
β1(2)  0.2389 (0.68)  0.2347 (0.73)  0.2372 (0.70)  0.2452 (0.76)  0.2410 (0.81)  0.2433 (0.43)
β2(2)  0.1754 (0.78)  0.1731 (0.72)  0.1801 (0.75)  0.1780 (0.69)  0.1799 (0.77)  0.1823 (0.45)
β3(2)  0.1125 (1.43)  0.1083 (1.58)  0.1136 (1.61)  0.1101 (1.51)  0.1098 (1.47)  0.1037 (1.01)
β4(2)  0.1593 (1.09)  0.1436 (1.12)  0.1449 (1.19)  0.1498 (1.02)  0.1473 (1.11)  0.1505 (0.82)
Note: Estimated SEs are given in parentheses, and all SEs are multiplied by 10^2.
Table 6. Comparing the fitted values and predicted values of HSI using MAPE.
Estimates  LS  QR0.25  QR0.75  ER0.25  ER0.75  WCER
n2 = 10  0.0178  0.0181  0.0185  0.0174  0.0187  0.0120
n2 = 20  0.0223  0.0234  0.0239  0.0219  0.0239  0.0157
n2 = 30  0.0252  0.0268  0.0271  0.0248  0.0273  0.0186
Table 7. Comparing the fitted values and predicted values of SPI using MAPE.
Estimates  LS  QR0.25  QR0.75  ER0.25  ER0.75  WCER
n2 = 10  0.0171  0.0176  0.0174  0.0176  0.0179  0.0105
n2 = 20  0.0214  0.0223  0.0219  0.0222  0.0229  0.0144
n2 = 30  0.0249  0.0256  0.0252  0.0258  0.0261  0.0179
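Tables 6 and 7 compare fitted and predicted values through the mean absolute percentage error (MAPE) over the last n2 observations. A minimal sketch under the usual definition, mean(|actual − predicted|/|actual|), is given below; the series are hypothetical placeholders rather than the HSI or SPI data, and the paper's exact variant of the formula is not assumed.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error of predicted against actual values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)))

# Hypothetical placeholder series for the last n2 = 10 observations.
rng = np.random.default_rng(1)
actual = 1.0 + 0.05 * rng.standard_normal(10)
predicted = actual * (1.0 + 0.02 * rng.standard_normal(10))
print(round(mape(actual, predicted), 4))
```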