Article

Estimation of Random Coefficient Autoregressive Model with Error in Covariates

1 School of Mathematics, Jilin University, Changchun 130012, China
2 School of Mathematics and Statistics, Changchun University of Technology, Changchun 130012, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(5), 303; https://doi.org/10.3390/axioms13050303
Submission received: 29 March 2024 / Revised: 24 April 2024 / Accepted: 30 April 2024 / Published: 2 May 2024
(This article belongs to the Special Issue Time Series Analysis: Research on Data Modeling Methods)

Abstract

Measurement error is common in many statistical problems and has received considerable attention in various regression contexts. In this study, we consider the random coefficient autoregressive model with measurement error possibly present in covariates. The least squares and weighted least squares methods are used to estimate the model parameters, and the consistency and asymptotic normality of the two kinds of estimators are proved. Furthermore, we propose an empirical likelihood method based on weighted score equations to construct confidence regions for the parameters. The simulation results show that the weighted least squares estimators are superior to the least squares estimators and that the confidence regions have good finite-sample behavior. Finally, the model is applied to a real data example.

1. Introduction

Many financial time series, such as stock returns, exhibit changes in volatility over time. The random coefficient autoregressive (RCA) model can capture this feature of financial time series. The RCA model has received considerable attention in the literature. Hwang and Basawa [1] derived the least squares and weighted least squares estimators. Hill and Peng [2] proposed empirical likelihood methods based on weighted score equations. Regis et al. [3] presented a structured overview of the literature on autoregressive models with random coefficients.
In many practical problems, other factors may also influence a time series. For example, the per capita consumption of residents is affected by income, the growth rate of national GDP is affected by the unemployment rate, and interest rates may be related to the behavior of investors. Anderson and Rubin [4] and Koopmans [5] considered the autoregressive model with covariates. Yang and Wang [6] considered Bayesian inference for this model. Peng et al. [7] considered Bayesian variable selection methods for the quantile autoregressive model with covariates. In this paper, we generalize the autoregressive model with covariates to an RCA model with covariates of the following form:
$$ Y_t = (\alpha + \alpha_t)^\top Y_{(t-1)} + \beta^\top X_t + \varepsilon_t, \tag{1} $$
where $Y_{(t-1)} = (Y_{t-1}, \ldots, Y_{t-p})^\top$; $\alpha = (\alpha_1, \ldots, \alpha_p)^\top$ and $\beta = (\beta_1, \ldots, \beta_q)^\top$ are the unknown parameters; $X_t = (X_{1t}, \ldots, X_{qt})^\top$ is an independent and identically distributed covariate vector; $\{\varepsilon_t, t \ge 1\}$ is a stochastic process with mean zero and variance $\sigma^2$; and $\alpha_t$ is a random coefficient vector with mean zero and covariance matrix $\Sigma$. Model (1) plays a significant role in economic modeling. It not only describes the influences of internal and external factors on economic laws but can also capture volatility in financial and economic time series.
We notice that the autoregressive model with covariates considers only the case in which the covariates are observed accurately. However, measurement error is ubiquitous in many statistical problems and has received considerable attention in various regression contexts (e.g., Fuller [8], Carroll et al. [9], Ding et al. [10] and Sinha and Ma [11]). In this article, instead of observing $X_t$ in model (1), we observe
$$ W_t = X_t + U_t, \tag{2} $$
where $U_t$ is the measurement error with mean zero and covariance matrix $\Sigma_{uu}$.
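To make the data-generating mechanism concrete, the following sketch simulates the RCAR-X-EV process defined by (1) and (2). It is an illustrative implementation under our own naming (the function `simulate_rcarx_ev` and the burn-in length are assumptions, not from the paper), with Gaussian covariates, Gaussian random coefficients and Gaussian measurement error:

```python
import numpy as np

def simulate_rcarx_ev(n, alpha, beta, Sigma_a, sigma2, Sigma_uu, rng=None):
    """Simulate Y_t = (alpha + a_t)' Y_(t-1) + beta' X_t + eps_t,  W_t = X_t + U_t.

    alpha: (p,) mean AR coefficients; beta: (q,) covariate coefficients;
    Sigma_a: (p, p) covariance of the random coefficient a_t; sigma2:
    innovation variance; Sigma_uu: (q, q) measurement-error covariance."""
    rng = np.random.default_rng(rng)
    p, q = len(alpha), len(beta)
    burn = 200                                     # discard the transient
    Y = np.zeros(n + burn)
    X = rng.standard_normal((n + burn, q))         # i.i.d. covariates
    U = rng.multivariate_normal(np.zeros(q), Sigma_uu, size=n + burn)
    for t in range(p, n + burn):
        a_t = rng.multivariate_normal(np.zeros(p), Sigma_a)
        lag = Y[t - p:t][::-1]                     # (Y_{t-1}, ..., Y_{t-p})
        Y[t] = (alpha + a_t) @ lag + beta @ X[t] + np.sqrt(sigma2) * rng.standard_normal()
    # only W_t = X_t + U_t is observed in place of X_t
    return Y[burn:], X[burn:], X[burn:] + U[burn:]
```

In the error-in-covariates setting, an estimator may only use the returned `Y` and `W`; the latent `X` is returned here purely so that simulations can measure the effect of the measurement error.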
As an essential nonparametric technology of statistical inference, the empirical likelihood (EL) method was first proposed by Owen [12] and further studied by Owen [13] among others. Qin and Lawless [14] linked the empirical likelihood and general estimating equations. The empirical likelihood method has been extended to many different fields, including linear and nonlinear time series, as a powerful nonparametric method for constructing confidence regions and hypothesis tests. Yang et al. [15] considered the Bayesian empirical likelihood inference and order shrinkage for a class of sparse autoregressive models. Liu and Zhao [16] summarized the new development of empirical likelihood.
In this paper, we combine the RCA model with measurement error and explore RCA models with measurement error in the covariates, which we denote RCAR-X-EV models. To the best of our knowledge, no other studies of RCAR-X-EV models have been published; this article aims to make a contribution in this direction. In our research, the corrected least squares estimates and the corrected weighted least squares estimates of the model parameters are provided. To construct confidence regions, the empirical likelihood method based on the corrected weighted score equations is exploited.
The paper is organized as follows: The corrected least squares and the corrected weighted least squares methods are considered in Section 2. In Section 3, we propose the empirical likelihood ratio statistic for parameters and give the asymptotic distribution of the empirical likelihood ratio. In Section 4, finite-sample simulation results are reported. In Section 5, we apply the proposed method to analyze a real data set. Some concluding remarks and outlook for future research are given in Section 6. All proofs are presented in Appendix A.

2. The Model and Estimation Methods

In this section, we present the definition of the RCA model with errors in covariates, and develop least squares and weighted least squares estimation methods.

2.1. The RCA Model with Errors in Covariates

The RCA model with errors in covariates is described by
$$ Y_t = (\alpha + \alpha_t)^\top Y_{(t-1)} + \beta^\top X_t + \varepsilon_t, \qquad W_t = X_t + U_t, \tag{3} $$
where { α t } , { U t } and { ε t } are independent identically distributed sequences independent of each other.
Remark 1.
For the conditional mean and variance of model (3), we have
$$ E(Y_t \mid \mathcal{F}_{t-1}) = \big( Y_{(t-1)}^\top, W_t^\top \big)\,\theta, \tag{4} $$
and
$$ \operatorname{Var}(Y_t \mid \mathcal{F}_{t-1}) = R_t\,\gamma + \beta^\top \Sigma_{uu}\,\beta, \tag{5} $$
where $\theta = (\alpha^\top, \beta^\top)^\top$, $\gamma = (\sigma^2, \operatorname{vech}(\Sigma)^\top)^\top$, $R_t = \big( 1,\; Y_{(t-1)}^\top ( Y_{(t-1)}^\top \otimes I_p ) K_p \big)$, $K_p$ is the duplication matrix, and $\mathcal{F}_{t-1}$ is the $\sigma$-field of events generated by $(Y_{(0)}, Y_1, \ldots, Y_{t-1})$.
Model (3) can be rewritten as
$$ Y_{(t)} = (A + B_t)\, Y_{(t-1)} + C X_t + E_t, \tag{6} $$
where $E_t = (\varepsilon_t, 0, \ldots, 0)^\top$ is a $(p \times 1)$ vector and
$$ A = \begin{pmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_{p-1} & \alpha_p \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}, \qquad B_t = \begin{pmatrix} \alpha_t^\top \\ 0_{(p-1) \times p} \end{pmatrix}_{p \times p}, \qquad C = \begin{pmatrix} \beta^\top \\ 0_{(p-1) \times q} \end{pmatrix}_{p \times q}. $$
In order to derive the limit distributions of the estimators, we consider the following assumptions:
(C.1) All the eigenvalues of the matrix $E(B_t \otimes B_t) + (A \otimes A)$ are less than unity in modulus;
(C.2) All the eigenvalues of $E\big( (A + B_t)^{\otimes 2m} \big)$ are less than unity in modulus;
(C.3) $E|\varepsilon_t|^{2m} < \infty$ for some $m > 0$;
(C.4) $E\|X_t\|^{2m} < \infty$ for some $m > 0$;
(C.5) $E\|U_t\|^4 < \infty$;
(C.6) There is no non-zero vector $c = (c_1^\top, c_2^\top)^\top$ such that $c_1^\top Y_{(t-1)} + c_2^\top X_t = 0$ almost everywhere for all $t$;
(C.7) There is no non-zero vector $c$ such that $c^\top R_t^\top = 0$ almost everywhere for all $t$.
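Condition (C.1) is easy to check numerically for a hypothesized model. The sketch below (the helper `stationarity_check` is ours, not from the paper) builds $E(B_t \otimes B_t) + (A \otimes A)$ explicitly, using the fact that $B_t$ has only one nonzero row, and verifies that the spectral radius is below one for Model A of Section 4:

```python
import numpy as np

def stationarity_check(A, Sigma_a):
    """Spectral radius of E(B_t kron B_t) + (A kron A) for condition (C.1).

    A: p x p companion matrix; Sigma_a: covariance of the random
    coefficient a_t, which forms the first row of B_t."""
    p = A.shape[0]
    EBB = np.zeros((p * p, p * p))
    # (B_t kron B_t)[i*p+k, j*p+l] = B[i, j] * B[k, l] is nonzero only when
    # i = k = 0, so E(B_t kron B_t) has a single nonzero row: Cov(a_j, a_l).
    for j in range(p):
        for l in range(p):
            EBB[0, j * p + l] = Sigma_a[j, l]
    return np.max(np.abs(np.linalg.eigvals(EBB + np.kron(A, A))))

# Model A of Section 4: alpha = (0.2, 0.3)', vech(Sigma) = (0.2, 0.1, 0.2)'
A = np.array([[0.2, 0.3], [1.0, 0.0]])
Sigma_a = np.array([[0.2, 0.1], [0.1, 0.2]])
rho = stationarity_check(A, Sigma_a)   # below one, so (C.1) holds
```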
Remark 2.
The Kronecker product in (C.2) is over $2m$ factors. (C.1) ensures that $\{Y_t\}$ is strictly stationary and ergodic (see Corollary 2.2.1 and Theorem 2.7 in Nicholls and Quinn [17]). (C.2)−(C.4) are sufficient conditions for $E|Y_t|^{2m} < \infty$ (see Theorem 5 in Feigin and Tweedie [18]). (C.5)−(C.7) are required for the asymptotic distribution theory.

2.2. The Corrected Least Squares Procedure

Let us suppose that $(Y_{(0)}, \{W_1, Y_1\}, \ldots, \{W_n, Y_n\})$ is generated from model (3). A two-stage least squares (LS) procedure is used to estimate $\theta$ and $\gamma$. In the first stage, because $X_t$ is measured with error, it is well known that ignoring the measurement error and replacing $X_t$ with $W_t$ yields an inconsistent estimator of $\theta$. Thus, the corrected least squares objective function for the parameter $\theta$ is defined as follows:
$$ S(\theta) = \sum_{t=1}^{n} \Big\{ \big( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top W_t \big)^2 - \beta^\top \Sigma_{uu}\,\beta \Big\}. $$
We assume that $\Sigma_{uu}$ is known. When $\Sigma_{uu}$ is unknown, it can be estimated, as in Section 2.4, from repeated measurements of $W_t$, as mentioned by Liang et al. [19]. The corrected LS estimator of the parameter $\theta$ is obtained as follows:
$$ \hat{\theta} = \Big( \sum_{t=1}^{n} J_t \Big)^{-1} \begin{pmatrix} \sum_{t=1}^{n} Y_{(t-1)}\,Y_t \\ \sum_{t=1}^{n} W_t\,Y_t \end{pmatrix}, \tag{7} $$
where
$$ J_t = \begin{pmatrix} Y_{(t-1)} Y_{(t-1)}^\top & Y_{(t-1)} W_t^\top \\ W_t Y_{(t-1)}^\top & W_t W_t^\top - \Sigma_{uu} \end{pmatrix}. $$
In the second stage, using the residuals $\hat u_t = Y_t - \hat\alpha^\top Y_{(t-1)} - \hat\beta^\top W_t$ and the conditional variance in Equation (5), the LS estimator $\hat\gamma$ of $\gamma$ can be obtained. Next, the asymptotic distributions of $\hat\theta$ and $\hat\gamma$ are provided.
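A direct implementation of the corrected LS estimator (7) only requires accumulating the bias-corrected normal equations. The following sketch is ours (the function name `corrected_ls` is an assumption; $\Sigma_{uu}$ is taken as known, as in this subsection):

```python
import numpy as np

def corrected_ls(Y, W, p, Sigma_uu):
    """Corrected LS estimate of theta = (alpha', beta')' as in (7).

    Y: (n,) observed series; W: (n, q) error-prone covariates;
    p: autoregressive order; Sigma_uu: (q, q) measurement-error covariance."""
    n, q = W.shape
    J_sum = np.zeros((p + q, p + q))
    b_sum = np.zeros(p + q)
    for t in range(p, n):
        lag = Y[t - p:t][::-1]              # Y_(t-1) = (Y_{t-1}, ..., Y_{t-p})'
        z = np.concatenate([lag, W[t]])
        J = np.outer(z, z)
        J[p:, p:] -= Sigma_uu               # correction: E(W_t W_t' | X_t) = X_t X_t' + Sigma_uu
        J_sum += J
        b_sum += z * Y[t]
    return np.linalg.solve(J_sum, b_sum)
```

Subtracting $\Sigma_{uu}$ from the $W_t W_t^\top$ block is exactly what removes the attenuation bias of the naive estimator that treats $W_t$ as if it were $X_t$.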
Theorem 1.
Let us suppose that conditions (C.1)−(C.5) hold and $E|Y_1|^4 < \infty$; then, the limit distribution of the corrected LS estimator $\hat\theta$ given by (7) is
$$ \sqrt{n}\,\big( \hat{\theta} - \theta \big) \xrightarrow{d} N\big( 0,\; V_1^{-1}\,\Gamma_1\,(V_1^\top)^{-1} \big), \quad n \to \infty, \tag{8} $$
where $V_1 = E(J_t)$, $\Gamma_1 = E(m_t m_t^\top)$ and
$$ m_t = \begin{pmatrix} Y_{(t-1)}\,( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top W_t ) \\ W_t\,( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top W_t ) + \Sigma_{uu}\,\beta \end{pmatrix}. $$
Theorem 2.
Let us suppose that conditions (C.1)−(C.5) hold and $E|Y_1|^8 < \infty$; then, the LS estimator $\hat\gamma$ is strongly consistent and asymptotically normal:
$$ \sqrt{n}\,\big( \hat{\gamma} - \gamma \big) \xrightarrow{d} N\big( 0,\; T_1^{-1}\,\Omega_1\,(T_1^\top)^{-1} \big), \quad n \to \infty, \tag{9} $$
where $T_1 = E(R_t^\top R_t)$ and $\Omega_1 = E\big[ R_t^\top R_t\,( u_t^2 - \beta^\top \Sigma_{uu}\,\beta - R_t\,\gamma )^2 \big]$.
The related proofs of Theorem 1 and Theorem 2 are given in Appendix A.

2.3. The Corrected Weighted Least Squares Procedure

Since $\operatorname{Var}(Y_t \mid \mathcal{F}_{t-1})$ depends on $Y_{(t-1)}$, in order to improve efficiency, we consider the corrected weighted least squares (WLS) estimators. Motivated by the WLS estimator for the RCA(1) model in Horváth and Trapani [20], the WLS estimators of $\theta$ and $\gamma$ are defined as
$$ \tilde{\theta} = \Big( \sum_{t=1}^{n} \frac{J_t}{1 + Y_{(t-1)}^\top Y_{(t-1)}} \Big)^{-1} \begin{pmatrix} \displaystyle \sum_{t=1}^{n} \frac{Y_{(t-1)}\,Y_t}{1 + Y_{(t-1)}^\top Y_{(t-1)}} \\[6pt] \displaystyle \sum_{t=1}^{n} \frac{W_t\,Y_t}{1 + Y_{(t-1)}^\top Y_{(t-1)}} \end{pmatrix}, \tag{10} $$
and
$$ \tilde{\gamma} = \arg\min_{\gamma} \sum_{t=1}^{n} \left( \frac{\tilde u_t^2 - \tilde\beta^\top \Sigma_{uu}\,\tilde\beta - R_t\,\gamma}{1 + Y_{(t-1)}^\top Y_{(t-1)}} \right)^2, \tag{11} $$
where $\tilde u_t = Y_t - \tilde\alpha^\top Y_{(t-1)} - \tilde\beta^\top W_t$. The following theorems give the limit distributions of $\tilde\theta$ and $\tilde\gamma$.
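The WLS estimator (10) differs from (7) only through the weights $1/(1 + Y_{(t-1)}^\top Y_{(t-1)})$. A sketch under our own naming (`corrected_wls` is an assumption), reusing the same bias correction:

```python
import numpy as np

def corrected_wls(Y, W, p, Sigma_uu):
    """Corrected WLS estimate of theta as in (10): each term of the
    corrected normal equations is weighted by 1 / (1 + Y_(t-1)' Y_(t-1))."""
    n, q = W.shape
    J_sum = np.zeros((p + q, p + q))
    b_sum = np.zeros(p + q)
    for t in range(p, n):
        lag = Y[t - p:t][::-1]
        wt = 1.0 / (1.0 + lag @ lag)        # downweights periods with large past values
        z = np.concatenate([lag, W[t]])
        J = np.outer(z, z)
        J[p:, p:] -= Sigma_uu
        J_sum += wt * J
        b_sum += wt * z * Y[t]
    return np.linalg.solve(J_sum, b_sum)
```

The weighting tempers the conditional heteroscedasticity induced by the random coefficient, which is why the simulations in Section 4 find WLS superior to LS for larger samples.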
Theorem 3.
Let us suppose that conditions (C.1)−(C.5) hold and $E|Y_1|^2 < \infty$; then, as $n \to \infty$, the WLS estimator $\tilde\theta$ is strongly consistent and
$$ \sqrt{n}\,\big( \tilde{\theta} - \theta \big) \xrightarrow{d} N\big( 0,\; V_2^{-1}\,\Gamma_2\,(V_2^\top)^{-1} \big), \tag{12} $$
where $V_2 = E\Big[ \dfrac{J_t}{1 + Y_{(t-1)}^\top Y_{(t-1)}} \Big]$ and $\Gamma_2 = E\Big[ \dfrac{m_t m_t^\top}{\big( 1 + Y_{(t-1)}^\top Y_{(t-1)} \big)^2} \Big]$.
Theorem 4.
Let us suppose that conditions (C.1)−(C.5) hold and $E|Y_1|^4 < \infty$; then, as $n \to \infty$,
$$ \sqrt{n}\,\big( \tilde{\gamma} - \gamma \big) \xrightarrow{d} N\big( 0,\; T_2^{-1}\,\Omega_2\,T_2^{-1} \big), \tag{13} $$
where $T_2 = E\Big[ \dfrac{R_t^\top R_t}{\big( 1 + Y_{(t-1)}^\top Y_{(t-1)} \big)^2} \Big]$ and $\Omega_2 = E\Big[ \dfrac{R_t^\top R_t}{\big( 1 + Y_{(t-1)}^\top Y_{(t-1)} \big)^4}\,\big( u_t^2 - \beta^\top \Sigma_{uu}\,\beta - R_t\,\gamma \big)^2 \Big]$.
The proofs of Theorems 3 and 4 are similar to those of Theorems 1 and 2, respectively; hence, the detailed proofs are omitted here.
Remark 3.
Conditions (C.6) and (C.7) ensure that V i and T i ( i = 1 , 2 ) are invertible.

2.4. Estimated Error Variance

Although, in some cases, the measurement error covariance matrix has been established by independent experiments, in others, it is unknown and must be estimated. The usual method of doing so is partial replication, whereby we observe $W_{tj} = X_t + U_{tj}$, $j = 1, \ldots, m_t$. For notational convenience, we consider here only the case where $m_t \le 2$ and assume that a fraction $\delta$ of the data has such replicates. Let $\bar W_t$ be the sample mean of the replicates. Then, a consistent, unbiased moment estimate for $\Sigma_{uu}$ is
$$ \hat{\Sigma}_{uu} = \frac{ \sum_{t=1}^{n} \sum_{j=1}^{m_t} ( W_{tj} - \bar{W}_t )( W_{tj} - \bar{W}_t )^\top }{ \sum_{t=1}^{n} ( m_t - 1 ) }. \tag{14} $$
Because $\operatorname{Var}(Y_t \mid \mathcal{F}_{t-1}) = R_t\,\gamma + (1 - \delta/2)\,\beta^\top \Sigma_{uu}\,\beta$, the estimator is specified as
$$ \hat{\theta} = \arg\min_{\theta} \sum_{t=1}^{n} \Big\{ \big( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top \bar{W}_t \big)^2 - (1 - \delta/2)\,\beta^\top \hat{\Sigma}_{uu}\,\beta \Big\}. \tag{15} $$
By using the techniques in Appendix A, one can show that the limit distribution of (15) is $N\big( 0,\; V_1^{-1}\,\Gamma_3\,(V_1^\top)^{-1} \big)$, with $\Gamma_3 = E(\hat m_t \hat m_t^\top)$ and
$$ \hat{m}_t = \begin{pmatrix} Y_{(t-1)}\,( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top \bar{W}_t ) \\ \bar{W}_t\,( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top \bar{W}_t ) + (1 - \delta/2)\,\Sigma_{uu}\,\beta \end{pmatrix}. $$
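The replication estimator (14) is a pooled within-replicate covariance. A sketch under our own naming, where each element of `W_reps` holds the $m_t$ replicate rows for one time point:

```python
import numpy as np

def estimate_sigma_uu(W_reps):
    """Moment estimator (14) of Sigma_uu from partially replicated data.

    W_reps: list of (m_t, q) arrays; each array holds the m_t replicate
    measurements W_t1, ..., W_tm of the same X_t."""
    num, den = 0.0, 0
    for W_t in W_reps:
        m_t = W_t.shape[0]
        if m_t < 2:
            continue                       # unreplicated points contribute nothing
        D = W_t - W_t.mean(axis=0)
        num = num + D.T @ D                # sum_j (W_tj - W_bar_t)(W_tj - W_bar_t)'
        den += m_t - 1
    return num / den
```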

3. The Empirical Likelihood Ratio and the Confidence Region

The empirical likelihood method is very useful for establishing the confidence regions of the parameters of interest. In this section, we propose empirical likelihood methods based on weighted score equations to construct confidence regions for θ and γ .
Based on the corrected weighted least squares estimator in (10), we apply the empirical likelihood method to the weighted least squares score function
$$ H_t(\theta) = \begin{pmatrix} \dfrac{ Y_{(t-1)}\,( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top W_t ) }{ 1 + Y_{(t-1)}^\top Y_{(t-1)} } \\[8pt] \dfrac{ W_t\,( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top W_t ) + \Sigma_{uu}\,\beta }{ 1 + Y_{(t-1)}^\top Y_{(t-1)} } \end{pmatrix}. $$
We define the empirical likelihood function for θ as
$$ L(\theta) = \max\Big\{ \sum_{t=1}^{n} \log( n p_t ) \;\Big|\; p_t \ge 0,\ \sum_{t=1}^{n} p_t = 1,\ \sum_{t=1}^{n} p_t H_t(\theta) = 0 \Big\}. $$
By applying the Lagrange multiplier technique, we obtain the well-known log-empirical likelihood representation
$$ l(\theta) = -2 L(\theta) = 2 \sum_{t=1}^{n} \log\big( 1 + \lambda^\top H_t(\theta) \big), $$
where Lagrange multiplier λ = λ ( θ ) satisfies
$$ \frac{1}{n} \sum_{t=1}^{n} \frac{ H_t(\theta) }{ 1 + \lambda^\top H_t(\theta) } = 0. $$
The following theorem shows that Wilks’ theorem holds for the proposed empirical likelihood method.
Theorem 5.
Let us suppose that conditions (C.1)−(C.5) hold and $E|Y_1|^2 < \infty$. If $\theta_0$ is the true value of the parameter, we have
$$ l(\theta_0) \xrightarrow{d} \chi^2(p+q), \quad \text{as } n \to \infty, $$
where $\chi^2(p+q)$ is a chi-squared distribution with $(p+q)$ degrees of freedom.
By Theorem 5, the $100(1-\alpha)\%$ confidence region for $\theta$ is $C(\theta) = \big\{ \theta : l(\theta) \le \chi^2_{\alpha}(p+q) \big\}$.
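Computing $l(\theta)$ in practice amounts to solving the Lagrange multiplier equation for $\lambda$ and evaluating $2\sum_t \log(1 + \lambda^\top H_t(\theta))$. The generic helper below is our own sketch (not the authors' code); it applies to any matrix of estimating-function values, such as the $H_t(\theta)$ above, and finds $\lambda$ by Newton ascent of the concave dual objective:

```python
import numpy as np

def el_log_ratio(H, iters=50):
    """Empirical log-likelihood ratio l = 2 * sum_t log(1 + lam' H_t) for an
    (n x d) matrix H whose rows are estimating-function values H_t.

    lam solves (1/n) * sum_t H_t / (1 + lam' H_t) = 0; since the dual
    objective sum_t log(1 + lam' H_t) is concave in lam, Newton ascent
    with feasibility backtracking converges quickly."""
    n, d = H.shape
    lam = np.zeros(d)
    for _ in range(iters):
        denom = 1.0 + H @ lam
        grad = (H / denom[:, None]).sum(axis=0)
        if np.linalg.norm(grad) < 1e-9:
            break
        hess = -(H.T / denom ** 2) @ H       # negative definite
        step = np.linalg.solve(hess, -grad)  # Newton direction
        s = 1.0
        while np.any(1.0 + H @ (lam + s * step) <= 1e-10):
            s *= 0.5                         # keep all implied weights positive
        lam = lam + s * step
    return 2.0 * np.sum(np.log(1.0 + H @ lam))
```

A confidence region then collects the parameter values whose score matrix yields a value of `el_log_ratio` below the appropriate chi-squared quantile, as in Theorems 5 and 6.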
Based on the weighted least squares estimator in (11), we define the empirical likelihood function of γ as
$$ L(\gamma) = \max\Big\{ \sum_{t=1}^{n} \log( n p_t ) \;\Big|\; p_t \ge 0,\ \sum_{t=1}^{n} p_t = 1,\ \sum_{t=1}^{n} p_t H_t(\gamma) = 0 \Big\}, $$
where $H_t(\gamma) = R_t^\top\, \dfrac{ \tilde u_t^2 - \tilde\beta^\top \Sigma_{uu}\,\tilde\beta - R_t\,\gamma }{ \big( 1 + Y_{(t-1)}^\top Y_{(t-1)} \big)^2 }$.
Similarly, by the Lagrange multiplier method, we obtain
$$ l(\gamma) = -2 L(\gamma) = 2 \sum_{t=1}^{n} \log\big( 1 + \lambda^\top H_t(\gamma) \big), $$
and Lagrange multiplier λ is determined by
$$ \frac{1}{n} \sum_{t=1}^{n} \frac{ H_t(\gamma) }{ 1 + \lambda^\top H_t(\gamma) } = 0. $$
Theorem 6.
Let us suppose that conditions (C.1)−(C.5) hold and $E|Y_1|^4 < \infty$. If $\gamma_0$ is the true value of the parameter, we have
$$ l(\gamma_0) \xrightarrow{d} \chi^2\Big( 1 + \frac{p(p+1)}{2} \Big), \quad \text{as } n \to \infty, $$
where $\chi^2\big( 1 + \frac{p(p+1)}{2} \big)$ is a chi-squared distribution with $1 + \frac{p(p+1)}{2}$ degrees of freedom.
Together with Theorems 3 and 4, Theorems 5 and 6 can be proved by the same method as was used in Zhao and Wang [21]. On the basis of the above theorem, the $100(1-\alpha)\%$ confidence region for $\gamma$ is $C(\gamma) = \big\{ \gamma : l(\gamma) \le \chi^2_{\alpha}\big( 1 + \frac{p(p+1)}{2} \big) \big\}$.

4. Simulation Study

In this section, we carry out a simulation study to show the finite-sample performance of the proposed procedures in the case $p = q = 2$. For simplicity, we take $\Sigma_{11} = \Sigma_{22}$, covariate $X_t \sim N(0, I_2)$ and measurement error $U_t \sim N(0, 0.01 I_2)$, with $I_2$ denoting the $2 \times 2$ identity matrix. The error process $\{\varepsilon_t\}$ is generated from the distributions $N(0,1)$, $N(0,1.5)$, $t(6)$ and $t(10)$. We consider the following two models:
  • Model A: θ = ( 0.2 , 0.3 , 0.2 , 0.4 ) , vech ( Σ ) = ( 0.2 , 0.1 , 0.2 ) .
  • Model B: θ = ( 0.2 , 0.3 , 0.4 , 0.6 ) , vech ( Σ ) = ( 0.2 , 0.1 , 0.2 ) .
Note that assumption (C.1) is fulfilled, because the matrix $E(B_t \otimes B_t) + (A \otimes A)$ has eigenvalues 0.8, −0.5, −0.3 and 0.2, which are all less than one in modulus. All the calculations are performed using 1000 replications.
When $\Sigma_{uu}$ is unknown, we can estimate it by (14). In order to verify that the convergence rate of $\hat\Sigma_{uu}$ does not perturb the asymptotic behavior of the estimator, we compare the least squares estimates in Model A with $n = 100$ through simulation. The results, reported in Table 1, include the mean, relative bias (RB), standard deviation (SD) and mean square errors (MSE). It can be seen that the asymptotic behavior of the estimator changes only slightly. In what follows, we therefore focus on the case where $\Sigma_{uu}$ is known.

4.1. Point Estimation of Parameters

To compare the least squares estimator with the weighted least squares estimator, we conduct a simulation study under sample sizes of n = 100 , 200 , 500 for Models A and B with ε t N ( 0 , 1 ) . Table 2 reports RB, SD and MSE. For Model A, the Q-Q plots of least squares and weighted least squares estimators are drawn with ε t N ( 0 , 1.5 ) , as illustrated in Figure 1 and Figure 2.
It can be seen from Table 2 that (i) as the sample size $n$ increases, the relative bias, standard deviation and mean square errors decrease, and both the least squares and weighted least squares estimators approach the true parameter values; and (ii) for large sample sizes, weighted least squares is superior to least squares. When the sample size is $n = 500$, the Q-Q plots in Figure 1 and Figure 2 show that, empirically, the estimators of parameters $\theta$ and $\gamma$ are asymptotically normal. Comparing the Q-Q plots, the weighted least squares estimators also converge to a normal distribution better than the least squares estimators. When the error term $\varepsilon_t$ comes from the distributions $N(0,1)$, $t(6)$ and $t(10)$, the results are similar and the conclusions are the same, where $t(m)$ $(m = 6, 10)$ denotes a Student's t distribution with $m$ degrees of freedom; this choice provides some evidence on estimator performance with heavy-tailed data.

4.2. Confidence Regions of Parameters

We compare the proposed empirical likelihood methods with the normal approximation method based on weighted least squares. The coverage probability of the confidence regions is computed based on 1000 replications with nominal level 1 α = 0.95 .
The normal approximation method based on Theorem 3 needs an estimate of the asymptotic variance. Consistent estimates of $\Gamma_2$ and $V_2$ are given by $\frac{1}{n} \sum_{t=1}^{n} \frac{m_t m_t^\top}{( 1 + Y_{(t-1)}^\top Y_{(t-1)} )^2}$ and $\frac{1}{n} \sum_{t=1}^{n} \frac{J_t}{1 + Y_{(t-1)}^\top Y_{(t-1)}}$, respectively. The error distributions are taken to be the normal and t distributions, and the sample sizes are chosen to be 100, 200 and 500. The numerical results of the simulations are presented in Table 3. It can be seen from Table 3 that the coverage levels of both methods approach the nominal level as the sample size $n$ increases. We also find that the empirical likelihood method generally outperforms the weighted least squares method. The performance of the empirical likelihood method is less satisfactory with $\varepsilon_t \sim t(6)$; in this case, the error term is heavy-tailed and $\sigma^2$ is appreciably larger than one. However, when the sample size $n$ grows or the degrees of freedom $m$ increase, the empirical likelihood method improves markedly. In Figure 3, we plot the confidence regions of parameter $\theta = (0.2, 0.3, 0.2, 0.4)^\top$ with $n = 200$, i.e., the average areas of the confidence regions based on empirical likelihood and weighted least squares. The empirical likelihood confidence region is more accurate than that of weighted least squares.
The normal approximation method based on Theorem 4 also needs an estimate of the asymptotic variance. Consistent estimates of $\Omega_2$ and $T_2$ are $\frac{1}{n} \sum_{t=1}^{n} \frac{R_t^\top R_t}{( 1 + Y_{(t-1)}^\top Y_{(t-1)} )^4}\,( \tilde u_t^2 - \tilde\beta^\top \Sigma_{uu}\,\tilde\beta - R_t\,\tilde\gamma )^2$ and $\frac{1}{n} \sum_{t=1}^{n} \frac{R_t^\top R_t}{( 1 + Y_{(t-1)}^\top Y_{(t-1)} )^2}$, respectively. The numerical results of the simulations are presented in Table 4. In Figure 4, we plot the confidence regions of parameter $\gamma = (1, 0.2, 0.1, 0.2)^\top$ with $n = 500$, i.e., the average areas of the confidence regions based on empirical likelihood and weighted least squares. From Table 4 and Figure 4, we find that empirical likelihood consistently gives slightly higher coverage levels and smaller regions than weighted least squares. Our simulation studies indicate that empirical likelihood outperforms weighted least squares.

5. Real Data Example

We now illustrate the model and the estimation methods by considering the Shanghai Composite Index data, which are available from the website https://data.eastmoney.com/cjsj/cpi.html (accessed on 20 March 2024). The analyzed period starts on 1 January 2008 and ends on 31 March 2023, yielding 182 observations. We use the first 150 observations to develop the model and use the remaining 32 observations as the out-of-sample set to assess the performance of the forecasting. Due to the obvious non-stationary behavior of the original series, we consider the first difference series, i.e.,
$$ Y_t = \log(P_t) - \log(P_{t-1}) - \frac{1}{T} \sum_{s=2}^{T} \big( \log(P_s) - \log(P_{s-1}) \big), $$
where $P_t$ is the closing price on day $t$. The observed time series $\{P_t\}$ and the log-returns $\{Y_t\}$ are shown in Figure 5.
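The transformation of the raw closing prices into the demeaned log-return series $Y_t$ can be sketched as follows (the helper name is ours):

```python
import numpy as np

def demeaned_log_returns(P):
    """Demeaned log-returns Y_t = log(P_t) - log(P_{t-1}) - mean of all diffs,
    matching the transformation applied to the closing prices P_t."""
    r = np.diff(np.log(np.asarray(P, dtype=float)))
    return r - r.mean()
```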
The Dickey–Fuller test shows a significant result (p-value $< 0.01$), so $Y_t$ can be treated as stationary. As shown in Figure 5, the differenced series $\{Y_t\}$ exhibits evident variability, and an autoregressive model with random coefficients might provide a better explanation of the data. The ACF plot suggests an autoregressive model with only the sixth lag. We consider two covariates, the Consumer Price Index (CPI) and the Corporate Goods Price Index (CGPI), and denote the logarithmic differences of CPI and CGPI by $X_1$ and $X_2$, respectively.
Specifically, we fit the following three models for the differential series:
  • Model 1 (AR-X): $Y_t = \alpha Y_{t-6} + X_t^\top \beta + \varepsilon_t$;
  • Model 2 (RCAR-X): $Y_t = (\alpha + \alpha_t) Y_{t-6} + X_t^\top \beta + \varepsilon_t$;
  • Model 3 (RCAR-X-EV): $Y_t = (\alpha + \alpha_t) Y_{t-6} + X_t^\top \beta + \varepsilon_t$, $W_t = X_t + U_t$.
For Models 1 and 2, the CLS and WLS estimates are provided, respectively. In Model 3, the estimate
$$ \hat{\Sigma}_{uu} = \begin{pmatrix} 0.0001 & 0.0005 \\ 0.0005 & 0.0398 \end{pmatrix} $$
is obtained by (14), and the corrected WLS estimate is provided. The fitting results are summarized in Table 5. From Table 5, one can see that the corrected estimator achieves the smallest one-step forecasting mean square error (MSE) among the three proposed models.
In order to check the adequacy of the RCAR-X-EV model, we consider the Pearson residuals, which are defined by
$$ e_t = \frac{ Y_t - E( Y_t \mid \mathcal{F}_{t-1} ) }{ \sqrt{ \operatorname{Var}( Y_t \mid \mathcal{F}_{t-1} ) } }. \tag{19} $$
When the fitted model is correct, $\{e_t\}$ is a white noise sequence. In practice, $\hat e_t$ is calculated by plugging the parameter estimates into the conditional expectation and conditional variance in (19). In Figure 6, we draw the time series plot, ACF and PACF of the Pearson residuals under the fitted RCAR-X-EV model to give a visual check of the fit. As shown in Figure 6, the residual series is stationary, and the ACF and PACF plots show no significant correlation, indicating that $\{\hat e_t\}$ behaves like a white noise sequence. This suggests that the RCAR-X-EV model is correctly specified.
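The Pearson residuals combine the conditional mean (4) and conditional variance (5) evaluated at the parameter estimates. A sketch under our own naming, written for a general set of AR lags so that the single sixth lag of Model 3 is the special case `lags=[6]`:

```python
import numpy as np

def pearson_residuals(Y, W, lags, alpha, beta, Sigma_a, sigma2, Sigma_uu):
    """Pearson residuals e_t for a fitted RCAR-X-EV model.

    The conditional mean uses the observed W_t in place of X_t, as in (4),
    and the conditional variance adds the beta' Sigma_uu beta term from (5).
    `lags` lists the AR lags used, e.g. lags=[6] for Model 3 of Section 5."""
    n = len(Y)
    start = max(lags)
    e = np.empty(n - start)
    for t in range(start, n):
        lag = np.array([Y[t - k] for k in lags])
        mean = alpha @ lag + beta @ W[t]
        var = sigma2 + lag @ Sigma_a @ lag + beta @ Sigma_uu @ beta
        e[t - start] = (Y[t] - mean) / np.sqrt(var)
    return e
```

ACF and PACF plots of the returned series then provide the white-noise check described above.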

6. Discussion

This article introduces a random coefficient autoregressive model with measurement error in covariates, which can capture the various characteristics of financial time series better than the RCA models. The corrected least squares estimators and the corrected weighted least squares estimators of the model parameters and their asymptotic behaviors are obtained. We compare the two proposed estimators by simulation studies. These comparisons suggest that the corrected weighted least squares estimators are often quite efficient relative to the corrected least squares estimators. Moreover, the confidence regions of the parameters are constructed based on the empirical likelihood method. A real data example reveals that the proposed model is competitive in modeling the financial time series.
The RCAR-X-EV model defined by (3) considers measurement error only in the covariates. In applications, the time series itself is also often subject to measurement error, so it is natural to allow both $Y_t$ and $X_t$ to be measured with error. Following up on this work, we focus on two potential extensions. Firstly, generalizing the autoregressive model with covariates to this broader measurement error setting is straightforward, but procedures for valid large-sample inference about the model parameters in the presence of measurement error deserve further investigation; we would like to explore restricted ML-type estimators and second-order bias corrections (e.g., Cheang and Reinsel [22]) in this setting. Secondly, a varying coefficient time series model with covariates observed with measurement error could be considered; future studies may benefit from an estimating equations-based approach (see, e.g., Pei [23]). The inference problems for these extended models are beyond the scope of this paper, and we will address them in a future study.

Author Contributions

Conceptualization, X.Z. and J.C.; methodology, J.C.; software, J.C.; validation, X.Z. and J.C.; formal analysis, X.Z.; investigation, X.Z.; resources, X.Z.; data curation, J.C.; writing—original draft preparation, J.C.; writing—review and editing, Q.L.; visualization, X.Z.; supervision, Q.L.; project administration, J.C.; funding acquisition, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (No. 12271231, 12001229 and 11901053) and Natural Science Foundation of Jilin Province (No. 20240101291JC).

Data Availability Statement

Publicly available data sets were analyzed in this study. These data can be found at https://data.eastmoney.com/cjsj/cpi.html (accessed on 20 March 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Theorem 1.
Note that
$$ \sqrt{n}\,\big( \hat{\theta} - \theta \big) = \Big( \frac{1}{n} \sum_{t=1}^{n} J_t \Big)^{-1} \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \begin{pmatrix} Y_{(t-1)}\,( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top W_t ) \\ W_t\,( Y_t - \alpha^\top Y_{(t-1)} - \beta^\top W_t ) + \Sigma_{uu}\,\beta \end{pmatrix}. \tag{A1} $$
By the ergodic theorem, we obtain $\frac{1}{n} \sum_{t=1}^{n} J_t \xrightarrow{a.s.} V_1$ and $\hat\theta \xrightarrow{a.s.} \theta$ as $n \to \infty$. We denote $S_n(\theta) = \sum_{t=1}^{n} m_t(\theta)$, where $m_t(\theta)$ is the vector defined in Theorem 1, and we obtain
$$ E\big( S_n(\theta) \mid \mathcal{F}_{n-1} \big) = S_{n-1}(\theta) + E\big( m_n(\theta) \mid \mathcal{F}_{n-1} \big) = S_{n-1}(\theta), $$
so $\{ S_n(\theta), \mathcal{F}_n, n \ge 1 \}$ is a martingale. The result in the theorem will be proved if we show that
$$ \frac{1}{\sqrt{n}}\, S_n(\theta) \xrightarrow{d} N( 0, \Gamma_1 ), \quad n \to \infty. $$
Since $E\|X_t\|^2 < \infty$ and $E|Y_t|^4 < \infty$, each element of the vector $m_t$ is a stationary ergodic zero-mean martingale difference, with $\operatorname{Var}(m_t) = \Gamma_1$.
Let $Q_{tj} = \frac{1}{\sqrt{n}}\, m_{t,j}$, where $m_{t,j}$ is the $j$th component of $m_t$. In order to apply the martingale central limit theorem (see Hall and Heyde [24]), we need to show that
(i) $\max_{1 \le t \le n} |Q_{tj}| \xrightarrow{p} 0$;
(ii) $E\big( \max_{1 \le t \le n} Q_{tj}^2 \big) = O(1)$;
(iii) $\sum_{t=1}^{n} Q_{tj}^2 \xrightarrow{p} E( m_{t,j}^2 )$.
(i) Note that $m_{t,i} = Y_{t-i}\,u_t$ for $i = 1, \ldots, p$, and $m_{t,p+j} = W_{tj}\,u_t + (\Sigma_{uu}\beta)_j$ for $j = 1, \ldots, q$. By the Markov inequality, for any $\delta > 0$ and some $r > 0$,
$$ P\Big\{ \max_{1 \le t \le n} |Y_{t-i} u_t| \ge \sqrt{n}\,\delta \Big\} \le \sum_{t=1}^{n} P\big\{ |Y_{t-i} u_t| \ge \sqrt{n}\,\delta \big\} \le \frac{1}{n^{1+r/2}\,\delta^{2+r}} \sum_{t=1}^{n} E|Y_{t-i} u_t|^{2+r} \to 0, \quad i = 1, \ldots, p, $$
$$ P\Big\{ \max_{1 \le t \le n} |W_{tj} u_t| \ge \sqrt{n}\,\delta \Big\} \le \sum_{t=1}^{n} P\big\{ |W_{tj} u_t| \ge \sqrt{n}\,\delta \big\} \le \frac{1}{n^{1+r/2}\,\delta^{2+r}} \sum_{t=1}^{n} E|W_{tj} u_t|^{2+r} \to 0, \quad j = 1, \ldots, q. $$
So $\max_{1 \le t \le n} |m_{t,j}| = o_p(\sqrt{n})$ and $\max_{1 \le t \le n} |Q_{tj}| \le \frac{1}{\sqrt{n}} \max_{1 \le t \le n} |m_{t,j}| = o_p(1)$.
(ii) $E\big( \max_{1 \le t \le n} Q_{tj}^2 \big) \le \frac{1}{n} \sum_{t=1}^{n} E( m_{t,j}^2 ) = O(1)$.
(iii) By the ergodic theorem, $\sum_{t=1}^{n} Q_{tj}^2 \xrightarrow{p} E( m_{t,j}^2 )$.
Thus, by applying the martingale central limit theorem and the Cramér–Wold device, as $n \to \infty$, we obtain $\frac{1}{\sqrt{n}}\, S_n(\theta) \xrightarrow{d} N( 0, \Gamma_1 )$. □
Proof of Theorem 2.
Let $Z_t^2(\theta) = u_t^2 - \beta^\top \Sigma_{uu}\,\beta$. Assuming that $\theta$ is known, the conditional least squares estimate $\breve\gamma$ of $\gamma$ is given by a solution of the equation
$$ \sum_{t=1}^{n} R_t^\top \big( Z_t^2(\theta) - R_t\,\gamma \big) = 0. $$
Next, we replace $Z_t^2(\theta)$ with $Z_t^2(\hat\theta)$, where $\hat\theta$ is a strongly consistent estimator of $\theta$. In view of (A1), we have
$$ \sqrt{n}\,( \hat\gamma - \gamma ) = \sqrt{n}\,( \hat\gamma - \breve\gamma ) + \sqrt{n}\,( \breve\gamma - \gamma ) = \Big( \frac{1}{n} \sum_{t=1}^{n} R_t^\top R_t \Big)^{-1} \Big[ \frac{1}{\sqrt{n}} \sum_{t=1}^{n} R_t^\top \big( Z_t^2(\hat\theta) - Z_t^2(\theta) \big) + \frac{1}{\sqrt{n}} \sum_{t=1}^{n} R_t^\top \big( Z_t^2(\theta) - R_t\,\gamma \big) \Big]. $$
By the ergodic theorem,
$$ \frac{1}{n} \sum_{t=1}^{n} R_t^\top R_t \xrightarrow{a.s.} E( R_t^\top R_t ) = T_1. $$
A direct calculation shows that $\big\{ \sum_{t=1}^{n} R_t^\top ( Z_t^2(\theta) - R_t\,\gamma ), \mathcal{F}_n, n \ge 1 \big\}$ is a martingale. Under (C.1)−(C.5), as $E|Y_1|^8 < \infty$, similarly to Theorem 1,
$$ \frac{1}{\sqrt{n}} \sum_{t=1}^{n} R_t^\top \big( Z_t^2(\theta) - R_t\,\gamma \big) \xrightarrow{d} N( 0, \Omega_1 ), $$
where $\Omega_1 = E\big[ R_t^\top R_t\,( u_t^2 - \beta^\top \Sigma_{uu}\,\beta - R_t\,\gamma )^2 \big]$. We need to show that
$$ \frac{1}{\sqrt{n}} \sum_{t=1}^{n} R_t^\top \big( Z_t^2(\hat\theta) - Z_t^2(\theta) \big) = o_p(1). $$
Note that
$$ \frac{1}{\sqrt{n}} \sum_{t=1}^{n} R_t^\top \big( Z_t^2(\hat\theta) - Z_t^2(\theta) \big) = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} R_t^\top \big( \hat u_t^2 - u_t^2 + \beta^\top \Sigma_{uu}\,\beta - \hat\beta^\top \Sigma_{uu}\,\hat\beta \big), $$
where
$$ \hat u_t^2 - u_t^2 = \Big[ ( \theta - \hat\theta )^\top \begin{pmatrix} Y_{(t-1)} \\ W_t \end{pmatrix} \Big]^2 + 2\,u_t\,\big( Y_{(t-1)}^\top, W_t^\top \big)\,( \theta - \hat\theta ). $$
Since $\sqrt{n}\,( \theta - \hat\theta )$ is bounded in probability and $\hat\theta \xrightarrow{a.s.} \theta$, we have
$$ \frac{1}{\sqrt{n}} \sum_{t=1}^{n} 2\,u_t\,R_t^\top \big( Y_{(t-1)}^\top, W_t^\top \big)\,( \theta - \hat\theta ) \xrightarrow{p} 0, $$
$$ \frac{1}{\sqrt{n}} \sum_{t=1}^{n} R_t^\top \Big[ ( \theta - \hat\theta )^\top \begin{pmatrix} Y_{(t-1)} \\ W_t \end{pmatrix} \Big]^2 \xrightarrow{p} 0, $$
and
$$ \beta^\top \Sigma_{uu}\,\beta - \hat\beta^\top \Sigma_{uu}\,\hat\beta = ( \beta - \hat\beta )^\top \Sigma_{uu}\,( \beta + \hat\beta ) \xrightarrow{p} 0. $$
Thus, $\hat\gamma$ is a strongly consistent estimate of $\gamma$ and, as $n \to \infty$,
$$ \sqrt{n}\,( \hat\gamma - \gamma ) \xrightarrow{d} N\big( 0,\; T_1^{-1}\,\Omega_1\,T_1^{-1} \big). \qquad \square $$

References

  1. Hwang, S.Y.; Basawa, I.V. Parameter estimation for generalized random coefficient autoregressive processes. J. Stat. Plan. Inference 1998, 68, 323–337. [Google Scholar] [CrossRef]
  2. Hill, J.; Peng, L. Unified interval estimation for random coefficient autoregressive models. J. Time Ser. Anal. 2014, 35, 282–297. [Google Scholar] [CrossRef]
  3. Regis, M.; Serra, P.; Heuvel, E. Random autoregressive models: A structured overview. Econom. Rev. 2022, 41, 207–230. [Google Scholar] [CrossRef]
  4. Anderson, T.W.; Rubin, H. The asymptotic properties of estimates of the parameters of a single equation in a complete system of stochastic equations. Ann. Math. Stat. 1950, 21, 570–582. [Google Scholar] [CrossRef]
  5. Koopmans, T. Statistical Inference in Dynamic Economic Models; Wiley: New York, NY, USA, 1950. [Google Scholar]
  6. Yang, K.; Wang, D.H. Bayesian estimation for first-order autoregressive model with explanatory variables. Commun. Stat.-Theory Methods 2017, 46, 11214–11227. [Google Scholar] [CrossRef]
  7. Peng, B.; Yang, K.; Dong, X.G. Variable selection for quantile autoregressive model: Bayesian methods versus classical methods. J. Appl. Stat. 2023, 51, 1098–1130. [Google Scholar] [CrossRef] [PubMed]
  8. Fuller, W.A. Measurement Error Models; John Wiley and Sons, Inc.: New York, NY, USA, 1987. [Google Scholar]
  9. Carroll, R.J.; Ruppert, D.; Stefanski, L. Measurement Error in Nonlinear Models; Chapman and Hall/CRC: London, UK, 1995. [Google Scholar]
  10. Ding, H.; Zhang, R.; Zhu, H. New estimation for heteroscedastic single-index measurement error models. J. Nonparametric Stat. 2022, 34, 95–112. [Google Scholar] [CrossRef]
  11. Sinha, S.; Ma, Y. Semiparametric analysis of linear transformation models with covariate measurement errors. Biometrics 2014, 70, 21–32. [Google Scholar] [CrossRef] [PubMed]
  12. Owen, A. Empirical likelihood ratio confidence intervals for a single functional. Biometrika 1988, 75, 237–249. [Google Scholar] [CrossRef]
  13. Owen, A. Empirical likelihood ratio confidence regions. Ann. Stat. 1990, 18, 90–120. [Google Scholar] [CrossRef]
  14. Qin, J.; Lawless, J. Empirical likelihood and general estimating equations. Ann. Stat. 1994, 22, 300–325. [Google Scholar] [CrossRef]
  15. Yang, K.; Ding, X.; Yuan, X.H. Bayesian empirical likelihood inference and order shrinkage for autoregressive models. Stat. Pap. 2022, 63, 97–121. [Google Scholar] [CrossRef]
  16. Liu, P.P.; Zhao, Y.C. A review of recent advances in empirical likelihood. Wiley Interdiscip. Rev.-Comput. Stat. 2022, 15, e1599. [Google Scholar] [CrossRef]
  17. Nicholls, D.F.; Quinn, B.G. Random Coefficient Autoregressive Models: An Introduction; Springer: New York, NY, USA, 1982. [Google Scholar]
  18. Feigin, P.D.; Tweedie, R.L. Random coefficient autoregressive processes: A Markov chain analysis of stationarity and finiteness of moments. J. Time Ser. Anal. 1985, 6, 1–14. [Google Scholar] [CrossRef]
  19. Liang, H.; Härdle, W.; Carroll, R.J. Estimation in a semiparametric partially linear errors-in-variables model. Ann. Stat. 1999, 27, 1519–1535. [Google Scholar] [CrossRef]
  20. Horváth, L.; Trapani, L. Testing for randomness in a random coefficient autoregression model. J. Econom. 2019, 209, 338–352. [Google Scholar] [CrossRef]
  21. Zhao, Z.W.; Wang, D.H. Statistical inference for generalized random coefficient autoregressive model. Math. Comput. Model. 2012, 52, 152–166. [Google Scholar] [CrossRef]
  22. Cheang, W.K.; Reinsel, G.C. Bias reduction of autoregressive estimates in time series regression model through restricted maximum likelihood. J. Am. Stat. Assoc. 2000, 95, 1173–1184. [Google Scholar] [CrossRef]
  23. Pei, G. Estimation of functional-coefficient autoregressive models with measurement error. J. Multivar. Anal. 2022, 192, 105077. [Google Scholar]
  24. Hall, P.; Heyde, C.C. Martingale Limit Theory and Its Application; Academic Press: New York, NY, USA, 1980. [Google Scholar]
Figure 1. Q-Q plots from LS estimation for θ and γ in Model A with ε t N ( 0 , 1.5 ) ( n = 500 ) .
Figure 2. Q-Q plots from WLS estimation for θ and γ in Model A with ε t N ( 0 , 1.5 ) ( n = 500 ) .
Figure 3. Confidence regions of θ in Model A, based on EL (dotted) and WLS (solid) under ε t t ( 6 ) ( n = 200 ) .
Figure 4. Confidence regions of γ in Model A, based on EL (dotted) and WLS (solid) under ε t N ( 0 , 1 ) ( n = 500 ) .
Figure 5. Time series and ACF plots for the original and differenced data.
Figure 6. Diagnostic checking plots in fitting the RCAR-X-EV model with the Shanghai Composite Index data.
Table 1. The least squares simulation results for Model A with n = 100.

ε_t          Para.      Σ_uu known                                Σ_uu unknown
                        Mean      RB        SD       MSE          Mean      RB        SD       MSE
N(0, 1)      α_1        0.1871   −0.0640   0.1209   0.0147        0.1803   −0.0985   0.1253   0.0161
             α_2        0.2579   −0.1400   0.1181   0.0157        0.2702   −0.0993   0.1242   0.0163
             β_1       −0.2028    0.0135   0.1572   0.0247       −0.2040    0.0200   0.1557   0.0242
             β_2        0.4011    0.0028   0.1605   0.0257        0.4115    0.0290   0.1572   0.0248
             σ²         1.1667    0.1667   0.6308   0.4254        1.1207    0.1207   0.7228   0.5365
             Σ_11       0.1587   −0.2065   0.2016   0.0423        0.1566   −0.2170   0.1964   0.0404
             Σ_12       0.0854   −0.1460   0.1259   0.0160        0.0879   −0.1210   0.1488   0.0222
             Σ_22       0.1483   −0.2585   0.1586   0.0278        0.1675   −0.1620   0.1828   0.0344
N(0, 1.5)    α_1        0.1728   −0.1360   0.1177   0.0145        0.1732   −0.1340   0.1184   0.0147
             α_2        0.2669   −0.1103   0.1172   0.0148        0.2622   −0.1260   0.1183   0.0154
             β_1       −0.1948   −0.0260   0.1800   0.0324       −0.2063    0.0315   0.1904   0.0362
             β_2        0.4017    0.0043   0.1879   0.0352        0.4009    0.0023   0.1786   0.0318
             σ²         1.7238    0.1492   1.0813   1.2183        1.6653    0.1102   1.1464   1.3404
             Σ_11       0.1550   −0.2250   0.1914   0.0386        0.1702   −0.1490   0.2066   0.0435
             Σ_12       0.0847   −0.1520   0.1444   0.0210        0.0899   −0.1010   0.1481   0.0220
             Σ_22       0.1559   −0.2205   0.1724   0.0316        0.1489   −0.2555   0.1704   0.0316
t(6)         α_1        0.1749   −0.1255   0.1258   0.0164        0.1750   −0.1250   0.1205   0.0151
             α_2        0.2589   −0.1370   0.1179   0.0155        0.2628   −0.1240   0.1209   0.0160
             β_1       −0.2059    0.0295   0.1930   0.0372       −0.2159    0.0795   0.1902   0.0364
             β_2        0.3937   −0.0158   0.1906   0.0363        0.3971   −0.0073   0.1886   0.0355
             σ²         1.6838    0.1225   0.9930   1.0190        1.7219    0.1479   0.9557   0.9617
             Σ_11       0.1727   −0.1365   0.2019   0.0414        0.1670   −0.1650   0.2066   0.0437
             Σ_12       0.0828   −0.1720   0.1477   0.0221        0.0735   −0.2650   0.1585   0.0258
             Σ_22       0.1510   −0.2450   0.1702   0.0313        0.1612   −0.1940   0.1736   0.0316
t(10)        α_1        0.1719   −0.1405   0.1218   0.0156        0.1789   −0.1050   0.1201   0.0148
             α_2        0.2663   −0.1123   0.1167   0.0147        0.2582   −0.1390   0.1166   0.0153
             β_1       −0.1988   −0.0055   0.1742   0.0303       −0.2008    0.0040   0.1737   0.0301
             β_2        0.3946   −0.0133   0.1714   0.0293        0.4068    0.0170   0.1720   0.0296
             σ²         1.3948    0.1158   0.7364   0.5627        1.4419    0.1535   0.7231   0.5593
             Σ_11       0.1617   −0.1910   0.1921   0.0383        0.1634   −0.1825   0.1874   0.0364
             Σ_12       0.0790   −0.2097   0.1322   0.0179        0.0888   −0.1117   0.1428   0.0205
             Σ_22       0.1595   −0.2021   0.1640   0.0285        0.1613   −0.1931   0.1704   0.0305
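The columns Mean, RB, SD, and MSE in Tables 1 and 2 are standard Monte Carlo summaries of the replicated estimates. A minimal sketch of how they are computed (the function and variable names are ours, for illustration only):

```python
import numpy as np

def summarize(estimates, true_value):
    """Monte Carlo summary of a scalar parameter estimate.

    Returns (Mean, RB, SD, MSE), matching the table columns:
      Mean = average estimate over replications,
      RB   = (Mean - true) / true (relative bias),
      SD   = empirical standard deviation,
      MSE  = mean squared error around the true value.
    """
    est = np.asarray(estimates, dtype=float)
    mean = est.mean()
    rb = (mean - true_value) / true_value
    sd = est.std(ddof=1)
    mse = np.mean((est - true_value) ** 2)
    return mean, rb, sd, mse
```

Note that RB divides by the true value with its sign, so a mean of −0.2028 for β_1 = −0.2 yields a positive relative bias, as reported in Table 1.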
Table 2. Simulation results of Models A and B for ε_t ∼ N(0, 1).

Model   n     Para.      LS                                        WLS
                         Mean      RB        SD       MSE          Mean      RB        SD       MSE
A       100   α_1        0.1871   −0.0640   0.1209   0.0147        0.2067    0.0335   0.1143   0.0130
              α_2        0.2579   −0.1400   0.1181   0.0157        0.2840   −0.0533   0.1107   0.0125
              β_1       −0.2028    0.0135   0.1572   0.0247       −0.2044    0.0220   0.1457   0.0212
              β_2        0.4011    0.0028   0.1605   0.0257        0.4088    0.0220   0.1471   0.0217
              σ²         1.1667    0.1667   0.6308   0.4254        0.9765   −0.0235   0.3104   0.0968
              Σ_11       0.1587   −0.2065   0.2016   0.0423        0.1856   −0.0720   0.1587   0.0253
              Σ_12       0.0854   −0.1460   0.1259   0.0160        0.0937   −0.0630   0.1093   0.0119
              Σ_22       0.1483   −0.2585   0.1586   0.0278        0.1911   −0.0445   0.1509   0.0228
        200   α_1        0.1800   −0.1000   0.0883   0.0081        0.1951   −0.0245   0.0781   0.0061
              α_2        0.2742   −0.0860   0.0884   0.0084        0.2946   −0.0180   0.0805   0.0065
              β_1       −0.1982   −0.0090   0.1109   0.0123       −0.2010    0.0050   0.1002   0.0100
              β_2        0.4000   −0.0000   0.1045   0.0109        0.4001   −0.0003   0.0943   0.0088
              σ²         1.1769    0.1769   0.5509   0.3345        0.9735   −0.0265   0.2191   0.0486
              Σ_11       0.1679   −0.1605   0.1615   0.0270        0.2005    0.0029   0.1159   0.0134
              Σ_12       0.0872   −0.1278   0.1122   0.0127        0.1028    0.0284   0.0750   0.0056
              Σ_22       0.1591   −0.2045   0.1327   0.0192        0.1985   −0.0077   0.1096   0.0120
        500   α_1        0.1916   −0.0421   0.0677   0.0046        0.1965   −0.0175   0.0490   0.0024
              α_2        0.2871   −0.0427   0.0648   0.0043        0.2997   −0.0010   0.0508   0.0025
              β_1       −0.1967   −0.0165   0.0721   0.0052       −0.1994   −0.0030   0.0624   0.0038
              β_2        0.3966   −0.0085   0.0690   0.0047        0.3991   −0.0023   0.0652   0.0042
              σ²         1.1258    0.1258   0.5348   0.3015        0.9878   −0.0122   0.1412   0.0200
              Σ_11       0.1826   −0.0870   0.1499   0.0227        0.2043    0.0215   0.0698   0.0048
              Σ_12       0.0873   −0.1270   0.1047   0.0111        0.1005    0.0053   0.0458   0.0020
              Σ_22       0.1791   −0.1043   0.1291   0.0170        0.1984   −0.0080   0.0713   0.0050
B       100   α_1        0.1748   −0.1260   0.1177   0.0144        0.1942   −0.0290   0.1108   0.0123
              α_2        0.2619   −0.1267   0.1150   0.0146        0.2867   −0.0443   0.1098   0.0122
              β_1        0.4000    0.0000   0.1633   0.0266        0.3973   −0.0068   0.1431   0.0204
              β_2       −0.5999   −0.0002   0.1649   0.0271       −0.6128    0.0213   0.1509   0.0229
              σ²         1.1936    0.1936   0.8545   0.7670        0.9526   −0.0474   0.3312   0.1118
              Σ_11       0.1487   −0.2565   0.1876   0.0378        0.2024    0.0120   0.1408   0.0198
              Σ_12       0.0830   −0.1698   0.1458   0.0215        0.0923   −0.0774   0.0981   0.0096
              Σ_22       0.1641   −0.1795   0.1829   0.0347        0.1962   −0.0190   0.1438   0.0206
        200   α_1        0.1827   −0.0865   0.0887   0.0081        0.1987   −0.0065   0.0747   0.0055
              α_2        0.2782   −0.0727   0.0859   0.0078        0.2956   −0.0147   0.0755   0.0057
              β_1        0.4040    0.0100   0.1188   0.0141        0.4043    0.0108   0.1035   0.0107
              β_2       −0.6012    0.0020   0.1165   0.0135       −0.5998   −0.0003   0.0969   0.0093
              σ²         1.2056    0.2056   0.6565   0.4729        0.9842   −0.0158   0.2411   0.0583
              Σ_11       0.1702   −0.1490   0.1786   0.0327        0.1975   −0.0125   0.1046   0.0109
              Σ_12       0.0816   −0.1840   0.1239   0.0156        0.0948   −0.0515   0.0692   0.0048
              Σ_22       0.1613   −0.1930   0.1336   0.0193        0.1926   −0.0370   0.1010   0.0102
        500   α_1        0.1940   −0.0300   0.0613   0.0037        0.2004    0.0020   0.0467   0.0021
              α_2        0.2858   −0.0473   0.0610   0.0039        0.2998   −0.0007   0.0472   0.0022
              β_1        0.3975   −0.0063   0.0852   0.0072        0.3981   −0.0048   0.0651   0.0042
              β_2       −0.5973   −0.0045   0.0774   0.0060       −0.6004    0.0007   0.0659   0.0043
              σ²         1.1874    0.1874   0.6513   0.4589        0.9858   −0.0142   0.1466   0.0216
              Σ_11       0.1756   −0.1220   0.1617   0.0267        0.2026    0.0130   0.0644   0.0041
              Σ_12       0.0875   −0.1250   0.1147   0.0133        0.1006    0.0060   0.0422   0.0017
              Σ_22       0.1732   −0.1340   0.1268   0.0167        0.1975   −0.0125   0.0633   0.0040
Table 3. Coverage probabilities of confidence regions for θ.

ε_t          θ                          vech(Σ)             n = 100          n = 200          n = 500
                                                            EL      WLS      EL      WLS      EL      WLS
N(0, 1)      (0.2, 0.3, 0.4, −0.6)’     (0, 0, 0)’          0.884   0.894    0.910   0.910    0.930   0.932
                                        (0.1, 0.05, 0.1)’   0.898   0.888    0.936   0.932    0.948   0.946
                                        (0.2, 0.1, 0.2)’    0.902   0.882    0.926   0.920    0.956   0.956
                                        (0.3, 0.15, 0.3)’   0.900   0.890    0.932   0.930    0.958   0.952
             (0.2, 0.3, −0.2, 0.4)’     (0, 0, 0)’          0.908   0.906    0.944   0.938    0.946   0.946
                                        (0.1, 0.05, 0.1)’   0.890   0.888    0.942   0.940    0.950   0.944
                                        (0.2, 0.1, 0.2)’    0.918   0.920    0.934   0.924    0.942   0.938
                                        (0.3, 0.15, 0.3)’   0.894   0.890    0.928   0.926    0.958   0.956
N(0, 1.5)    (0.2, 0.3, 0.4, −0.6)’     (0, 0, 0)’          0.912   0.898    0.918   0.912    0.936   0.928
                                        (0.1, 0.05, 0.1)’   0.868   0.872    0.922   0.916    0.926   0.922
                                        (0.2, 0.1, 0.2)’    0.912   0.886    0.926   0.926    0.942   0.944
                                        (0.3, 0.15, 0.3)’   0.906   0.890    0.944   0.942    0.944   0.940
             (0.2, 0.3, −0.2, 0.4)’     (0, 0, 0)’          0.880   0.874    0.922   0.918    0.928   0.926
                                        (0.1, 0.05, 0.1)’   0.910   0.914    0.930   0.940    0.946   0.940
                                        (0.2, 0.1, 0.2)’    0.888   0.886    0.942   0.940    0.954   0.954
                                        (0.3, 0.15, 0.3)’   0.882   0.878    0.918   0.910    0.934   0.938
t(6)         (0.2, 0.3, 0.4, −0.6)’     (0, 0, 0)’          0.884   0.910    0.918   0.934    0.928   0.934
                                        (0.1, 0.05, 0.1)’   0.884   0.908    0.914   0.930    0.924   0.926
                                        (0.2, 0.1, 0.2)’    0.896   0.908    0.918   0.914    0.930   0.934
                                        (0.3, 0.15, 0.3)’   0.862   0.874    0.898   0.900    0.940   0.948
             (0.2, 0.3, −0.2, 0.4)’     (0, 0, 0)’          0.888   0.918    0.904   0.920    0.936   0.938
                                        (0.1, 0.05, 0.1)’   0.888   0.916    0.918   0.930    0.942   0.948
                                        (0.2, 0.1, 0.2)’    0.876   0.900    0.912   0.918    0.936   0.944
                                        (0.3, 0.15, 0.3)’   0.898   0.920    0.916   0.930    0.940   0.938
t(10)        (0.2, 0.3, 0.4, −0.6)’     (0, 0, 0)’          0.876   0.892    0.902   0.918    0.950   0.948
                                        (0.1, 0.05, 0.1)’   0.884   0.888    0.924   0.926    0.940   0.930
                                        (0.2, 0.1, 0.2)’    0.894   0.908    0.932   0.934    0.938   0.936
                                        (0.3, 0.15, 0.3)’   0.898   0.882    0.922   0.916    0.932   0.930
             (0.2, 0.3, −0.2, 0.4)’     (0, 0, 0)’          0.896   0.904    0.934   0.940    0.954   0.950
                                        (0.1, 0.05, 0.1)’   0.878   0.888    0.914   0.926    0.928   0.930
                                        (0.2, 0.1, 0.2)’    0.892   0.894    0.928   0.936    0.940   0.938
                                        (0.3, 0.15, 0.3)’   0.906   0.904    0.924   0.920    0.952   0.954
Table 4. Coverage probabilities of confidence regions for γ.

ε_t          θ                          vech(Σ)             n = 200          n = 500          n = 1000         n = 2000
                                                            EL      WLS      EL      WLS      EL      WLS      EL      WLS
N(0, 1)      (0.2, 0.3, 0.4, −0.6)’     (0.1, 0.05, 0.1)’   0.858   0.844    0.920   0.898    0.946   0.926    0.942   0.934
                                        (0.2, 0.1, 0.2)’    0.872   0.832    0.956   0.938    0.956   0.924    0.952   0.944
                                        (0.3, 0.15, 0.3)’   0.872   0.816    0.926   0.910    0.946   0.932    0.956   0.940
N(0, 1.5)                               (0.1, 0.05, 0.1)’   0.852   0.836    0.920   0.908    0.932   0.922    0.928   0.910
                                        (0.2, 0.1, 0.2)’    0.868   0.826    0.940   0.910    0.942   0.924    0.934   0.922
                                        (0.3, 0.15, 0.3)’   0.872   0.808    0.924   0.888    0.950   0.920    0.942   0.936
t(6)                                    (0.1, 0.05, 0.1)’   0.774   0.714    0.868   0.800    0.886   0.862    0.920   0.892
                                        (0.2, 0.1, 0.2)’    0.806   0.756    0.876   0.820    0.888   0.876    0.944   0.926
                                        (0.3, 0.15, 0.3)’   0.798   0.740    0.876   0.842    0.906   0.876    0.934   0.904
t(10)                                   (0.1, 0.05, 0.1)’   0.826   0.772    0.904   0.856    0.920   0.916    0.926   0.914
                                        (0.2, 0.1, 0.2)’    0.868   0.834    0.900   0.878    0.928   0.906    0.934   0.914
                                        (0.3, 0.15, 0.3)’   0.866   0.802    0.890   0.854    0.916   0.888    0.938   0.918
N(0, 1)      (0.2, 0.3, −0.2, 0.4)’     (0.1, 0.05, 0.1)’   0.890   0.840    0.940   0.930    0.946   0.930    0.944   0.954
                                        (0.2, 0.1, 0.2)’    0.892   0.836    0.936   0.910    0.940   0.918    0.952   0.942
                                        (0.3, 0.15, 0.3)’   0.876   0.818    0.918   0.898    0.940   0.916    0.956   0.938
N(0, 1.5)                               (0.1, 0.05, 0.1)’   0.874   0.850    0.928   0.886    0.934   0.932    0.946   0.946
                                        (0.2, 0.1, 0.2)’    0.886   0.854    0.924   0.910    0.932   0.914    0.940   0.928
                                        (0.3, 0.15, 0.3)’   0.892   0.846    0.918   0.904    0.920   0.904    0.960   0.958
t(6)                                    (0.1, 0.05, 0.1)’   0.778   0.708    0.854   0.822    0.904   0.886    0.936   0.910
                                        (0.2, 0.1, 0.2)’    0.810   0.748    0.862   0.830    0.898   0.868    0.928   0.922
                                        (0.3, 0.15, 0.3)’   0.822   0.766    0.846   0.808    0.900   0.860    0.920   0.904
t(10)                                   (0.1, 0.05, 0.1)’   0.840   0.794    0.886   0.882    0.922   0.908    0.948   0.924
                                        (0.2, 0.1, 0.2)’    0.868   0.830    0.906   0.872    0.910   0.886    0.944   0.922
                                        (0.3, 0.15, 0.3)’   0.854   0.816    0.910   0.894    0.922   0.908    0.940   0.918
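The entries of Tables 3 and 4 are empirical coverage probabilities: the fraction of simulated confidence regions that contain the true parameter. A one-dimensional sketch using interval endpoints (the regions for θ and γ in the paper are multivariate, so this is purely illustrative, and the function name is ours):

```python
import numpy as np

def empirical_coverage(lower, upper, true_value):
    """Fraction of replicated confidence intervals [lower_i, upper_i]
    that contain the true parameter value."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    covered = (lower <= true_value) & (true_value <= upper)
    return float(np.mean(covered))
```

For a nominal 95% region, well-calibrated methods should report values near 0.950, which is the pattern both EL and WLS approach as n grows.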
Table 5. Fitting results and the one-step forecasting MSEs for Models 1–3.

Model        Estimates                                       MSE
AR-X         α̂ = 0.1873, β̂_1 = 0.9696, β̂_2 = 0.3009        0.9445
RCAR-X       α̂ = 0.2420, β̂_1 = 0.5759, β̂_2 = 0.2148        0.8951
RCAR-X-EV    α̂ = 0.2341, β̂_1 = 0.9754, β̂_2 = 0.4032        0.7074
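The one-step forecasting MSE used to compare the fitted models can be sketched as follows, assuming a conditional-mean forecast of the form ŷ_t = α̂ y_{t−1} + β̂′z_t; the function and the forecast form are our illustration of the criterion, not the authors' exact procedure:

```python
import numpy as np

def one_step_forecast_mse(alpha, beta, y, Z):
    """One-step-ahead forecasts yhat_t = alpha * y_{t-1} + beta' z_t
    for t = 1, ..., n-1, and the resulting forecast MSE.

    y : series of length n; Z : (n, p) covariate matrix aligned with y.
    """
    y = np.asarray(y, dtype=float)
    Z = np.atleast_2d(np.asarray(Z, dtype=float))
    beta = np.asarray(beta, dtype=float)
    y_hat = alpha * y[:-1] + Z[1:] @ beta   # forecast of y_t from time t-1
    mse = float(np.mean((y[1:] - y_hat) ** 2))
    return y_hat, mse
```

On a series generated exactly by y_t = α y_{t−1} + β′z_t the forecast error is zero, which serves as a sanity check of the implementation.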
Zhang, X.; Chen, J.; Li, Q. Estimation of Random Coefficient Autoregressive Model with Error in Covariates. Axioms 2024, 13, 303. https://doi.org/10.3390/axioms13050303