Peer-Review Record

Forecasting High-Dimensional Covariance Matrices Using High-Dimensional Principal Component Analysis

by Hideto Shigemoto 1 and Takayuki Morimoto 2,*
Axioms 2022, 11(12), 692; https://doi.org/10.3390/axioms11120692
Submission received: 21 October 2022 / Revised: 15 November 2022 / Accepted: 29 November 2022 / Published: 3 December 2022
(This article belongs to the Special Issue Advances in Mathematical Methods in Economics)

Round 1

Reviewer 1 Report

General comment

In this paper, the estimation of the covariance matrix for high-dimensional data is considered. When the dimension is higher than the number of observations, this problem may lead to inconsistent estimates. The SPOET method is used in this paper. The simulation results reported show good performance for financial data analysis based on several autoregressive models.

The covariance matrix is expressed via its eigendecomposition as the sum of a factor part and a noise component part. This paper is well organized and readable.
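For reference, the low-rank-plus-sparse split the reviewer describes can be written in generic notation (not necessarily the manuscript's exact symbols) as

\[
\Sigma \;=\; \sum_{j=1}^{r} \lambda_j \, \gamma_j \gamma_j^{\top} \;+\; \Sigma_u ,
\]

where the leading eigenvalues $\lambda_j$ and eigenvectors $\gamma_j$ form the factor (low-rank) part and $\Sigma_u$ collects the sparse residual (noise) covariances.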

Specific comments

In relation (2), it is assumed that X_t and Z_t are independent, but this is not mentioned in the text

The notation should be homogeneous. The letters Sigma and S are both used for the covariance matrices. An explanation in the text would be useful if both are necessary

A subscript should be used in relations (3) and (4) to specify which variable's covariance matrix is of concern

In relation (11), why is 22 used in the denominator?

In relations (18) and (19), please give more explanation of the covariance matrices used, since both the Sigma and S letters appear. What is the meaning of the subscript t in these relations?

Author Response

The authors thank the editor and reviewers for their valuable time, constructive comments, and insight on the paper “Forecasting high-dimensional covariance matrices using high-dimensional principal component analysis.” The comments are immensely helpful in improving the work and the quality of the manuscript. We have tried our best to respond to the comments. We express our sincere gratitude for your in-depth reviews, which have helped improve the quality of the paper significantly.

 

1. In relation (2), it is assumed that Xt and Zt are independent, but this is not mentioned in the text.

 

Response: We added a sentence on page 6, line 130: “Also, Xt and Zt are independent.”

 

2. The notation should be homogeneous. The letters Σ and S are both used for the covariance matrices. An explanation in the text would be useful if both are necessary.

 

Response: Since S denotes the forecasted covariance matrix and Σ without a hat or tilde denotes the integrated covariance matrix, we added clarifications of this distinction in lines 230, 248-250, and 316.

 

3. A subscript should be used in relations (3) and (4) to specify which variable's covariance matrix is of concern.

 

Response: We added a sentence on page 8, lines 163-164, and also indicated which covariance matrix is concerned by adding a subscript in Eqs. (3) and (4).

 

4. In relation (11), why is 22 used in the denominator?

 

Response: The HAR model forecasts the next-term realized volatility using the daily, weekly, and monthly realized volatilities. To obtain the monthly measure, we take the sample mean of the daily estimated eigenvalues over the month. Here, since we set 5 and 22 trading days in a week and a month, respectively, we use 22 in the denominator.
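Written out in generic notation (the indexing in the manuscript's Eqs. (10)-(11) may differ), the weekly and monthly quantities are the sample means of the daily eigenvalue series $\lambda_t$ over 5 and 22 trading days:

\[
\bar{\lambda}^{(w)}_t = \frac{1}{5}\sum_{i=0}^{4} \lambda_{t-i},
\qquad
\bar{\lambda}^{(m)}_t = \frac{1}{22}\sum_{i=0}^{21} \lambda_{t-i},
\]

which is why 22 appears in the denominator of relation (11).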

 

5. In relations (18) and (19), please give more explanation for the covariance matrices used since both Σ and S letters are used. What is the meaning of the subscript t in these relations?

 

Response: We added sentences about Ŝ_t, Σ_t, and t on page 16, lines 313 and 316-318.

Reviewer 2 Report

see the attached file

Comments for author File: Comments.pdf

Author Response

The authors thank the editor and reviewers for their valuable time, constructive comments, and insight on the paper “Forecasting high-dimensional covariance matrices using high-dimensional principal component analysis.” The comments are immensely helpful in improving the work and the quality of the manuscript. We have tried our best to respond to the comments. We express our sincere gratitude for your in-depth reviews, which have helped improve the quality of the paper significantly.

 

1. In section 2, please provide the necessary background for each model/method, and simply summarize some pros and cons for each model from the existing literature. Do not just list the models/methods.

 

Response: We added the necessary background for each method in lines 139-145 (pages 6 and 7) and 161-162 (page 8).

 

2. In section 3, please provide the necessary explanation of each model formulation and the choice of parameters; for example, page 10, line 199, why is setting a = 0.94 a good choice?

 

Response: We added some explanation. On page 10, lines 207-209, we added an explanation of the EWMA model. We omit the explanation of the AR and VAR models, as these are well known. We also added a few sentences about the HAR model on page 11, lines 221-222 and 226.
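To make the EWMA recursion concrete, a minimal Python sketch is given below; the function name, the initialization with the sample covariance, and the array shapes are illustrative assumptions, not taken from the manuscript.

```python
import numpy as np

def ewma_covariance(returns, a=0.94):
    """RiskMetrics-style EWMA covariance forecast.

    Recursion: S_t = a * S_{t-1} + (1 - a) * r_{t-1} r_{t-1}'.
    `returns` is a (T, d) array of daily return vectors; a = 0.94 is the
    RiskMetrics value for daily data discussed in the responses below.
    """
    S = np.cov(returns, rowvar=False)       # illustrative initialization
    for r_t in returns:
        r = r_t[:, None]                    # column vector r_t
        S = a * S + (1.0 - a) * (r @ r.T)   # exponentially weighted update
    return S                                # one-step-ahead forecast
```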

 

3. In section 4, please add a clear motivation or explanation of the simulation design, such as why the design (Equations 15-17 on page 13) with the specified parameter values for A, B, C, a, and b is chosen.

 

Response: We wrote a clear motivation for the simulation and added an explanation of the simulation design in lines 242-246 (pages 12 and 13). Also, since we follow the settings of Andersen et al. (2012) and Bollerslev et al. (2018), we explained the settings of this simulation method, e.g., “we set A = 0.75 . . . , respectively, following Andersen et al. (2012) and Bollerslev et al. (2018),” on page 13 under Eq. (17).

 

4. In section 5, three performance criteria are proposed: equations 18-19 (section 5.1) and equation (20) (section 5.2). What is the relationship among them?

 

Response: Eqs. (18) and (19) are loss functions that are known to be robust to noisy covariance proxies (Laurent et al., 2013). Eq. (20) is not a loss function but the variance of a portfolio based on a forecasted covariance matrix. We added explanations of Eqs. (18) and (19) in lines 313-315 (page 16), section 5.2.1.
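As a rough illustration of the distinction drawn here, the sketch below implements one loss from the robust class of Laurent et al. (2013) (the Euclidean/Frobenius loss) and the ex-post variance of a global minimum-variance portfolio built from a forecast; whether these match the paper's Eqs. (18)-(20) exactly is an assumption on our part.

```python
import numpy as np

def frobenius_loss(proxy, forecast):
    """Euclidean (Frobenius) loss between a covariance proxy (e.g. a realized
    covariance matrix) and a forecast; it belongs to the class of losses shown
    to be robust to noisy proxies by Laurent et al. (2013)."""
    diff = proxy - forecast
    return float(np.sum(diff * diff))

def gmv_portfolio_variance(forecast, realized):
    """Ex-post variance of the global minimum-variance portfolio whose weights
    are computed from the forecasted covariance matrix -- an illustrative
    reading of the portfolio-variance criterion, not necessarily Eq. (20)."""
    ones = np.ones(forecast.shape[0])
    inv = np.linalg.inv(forecast)
    w = inv @ ones / (ones @ inv @ ones)    # GMV weights from the forecast
    return float(w @ realized @ w)          # variance under the realized matrix
```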

 

5. In section 6, it is necessary to discuss the limitations of the simulation study and the conclusions from the empirical analysis.

 

Response: We mentioned the limitation of our research in section 6: “This study applied SPOET discussed under the i.i.d. setting to the continuous Itô semi-martingale setting for simulation study and empirical analysis. Thus, theoretical results are needed in the future.”

 

Response to specific comments:

 

Page 7, line 4 from the bottom, “As τ becomes larger, the degree of sparsity of the residual covariance increases, and finally the matrix becomes a diagonal matrix.” An explanation or a reference is needed for this statement.

 

Response: This statement is from Dai et al. (2019), and we added the corresponding reference to the sentence in lines 155-157 (pages 7 and 8).
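To illustrate the role of the sparsity parameter τ referred to in this comment, the following generic hard-thresholding sketch sets small off-diagonal residual covariances to zero; as τ grows, more entries vanish and the matrix approaches a diagonal matrix. The paper's actual thresholding rule (following Dai et al., 2019) may differ in form.

```python
import numpy as np

def threshold_residual_cov(sigma_u, tau):
    """Element-wise hard thresholding of a residual covariance matrix.

    Off-diagonal entries with absolute value below `tau` are set to zero,
    so a larger `tau` means a sparser matrix and, in the limit, a diagonal one.
    """
    S = sigma_u.copy()
    mask = np.abs(S) < tau            # entries below the threshold
    np.fill_diagonal(mask, False)     # keep the variances on the diagonal
    S[mask] = 0.0
    return S
```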

 

Page 8, under line 164, why is the sample covariance matrix defined without subtracting the sample mean from the y-data?

 

Response: First, this equation does not show the sample covariance matrix; it shows the realized covariance matrix proposed by Barndorff-Nielsen and Shephard (2004). The realized covariance matrix is defined as the summation of the outer products of the intraday returns observed within a day. Therefore, the realized covariance matrix is the ex-post observed covariance matrix for that day. The reason why the covariance matrix is defined without subtracting the sample mean from the y-data has been explained by Barndorff-Nielsen and Shephard (2004); we cite the relevant passage: “The realized covariation matrix is different from the empirical covariance matrix of high-frequency returns. This quantity does not make sense in high-frequency finance, for it will converge in probability to the matrix of zeros as M → ∞. Realized covariation is roughly M times the empirical covariance of returns, the difference being that realized covariance ignores the (1/M) y_i y’_i term as it is stochastically of smaller order than ∑_{j=1}^{M} y_{j,i} y’_{j,i}.”
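A minimal sketch of the estimator described in the quoted passage (the function name and array shapes are ours) is:

```python
import numpy as np

def realized_covariance(intraday_returns):
    """Realized covariance matrix in the sense of Barndorff-Nielsen and
    Shephard (2004): the sum of outer products of the M intraday return
    vectors observed within one day, with no mean subtraction.

    `intraday_returns` is an (M, d) array of intraday return vectors.
    """
    return sum(np.outer(y, y) for y in intraday_returns)
```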

 

Page 9, line 185, is $\tilde{\lambda}_{j} = \tilde{\lambda}_{j}^{t}$?

 

Response: Yes, it is. We then corrected it to $\tilde{\lambda}_j^t = \max\{\tilde{\lambda}_j^t − \bar{c}d/M, 0\}$ to make it easier to understand, in line 193 (page 10).
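The corrected shrinkage rule quoted above amounts to the following one-line operation (variable names are illustrative):

```python
import numpy as np

def shrink_eigenvalues(lam, c_bar, d, M):
    """Eigenvalue correction lambda_j <- max(lambda_j - c_bar * d / M, 0),
    as in the corrected expression in line 193: `lam` holds the leading
    sample eigenvalues, d is the dimension, M the number of intraday
    observations, and c_bar a constant."""
    return np.maximum(np.asarray(lam, dtype=float) - c_bar * d / M, 0.0)
```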

 

Why does equation (6) need the condition of sparsity of the residual covariance matrix?

 

Response: The sparsity of the residual covariance matrix, specifically the low-rank-plus-sparse structure, is needed to obtain consistency of the high-dimensional covariance matrix estimator. A literature review of factor models with the low-rank-plus-sparse structure is given in the discussion of Fan et al. (2013), in the section “Discussion on the paper by Fan, Liao, and Mincheva.” We added a discussion of the usefulness of sparsity in lines 139-145 (pages 6 and 7).

 

Page 10, line 199, why can you set a = 0.94?

 

Response: As mentioned in section 3.1, this model, the EWMA model, was proposed by J.P. Morgan under the name RiskMetrics. In the original paper, a = 0.94 was used for daily returns, and much of the literature using RiskMetrics sets a = 0.94 following the original paper. Therefore, we use the same value as previous research.

 

Page 10, section 3.1 omits too many details, leaving much confusion on setting a = 0.94.

 

Response: Regarding the setting a = 0.94, please see our response above.

 

Page 11, line 211, “forecast” should be “forecasting”.

 

Response: We corrected it in lines 220-221 (page 11).

 

Page 11, equations (10) and (11) are the average weekly eigenvalue and the average monthly eigenvalue, respectively. They are not real eigenvalues. Why should the average eigenvalues be used?

 

Response: The HAR and Vector-HAR models forecast the next-term realized (co)variance using daily, weekly, and monthly realized measures. To estimate the parameters of these models, we have to compute the weekly and monthly realized measures as sample means over trading days. Most researchers set 5 and 22 trading days in a week and a month, respectively. In this paper, we apply these models to forecasting eigenvalues. Here, since the eigenvalues are obtained from the daily realized covariance matrix, we treat them as daily time-series data. Therefore, to estimate the HAR model, we compute the weekly and monthly eigenvalues using the sample mean; a generic sketch of this construction is given below.
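The sketch below shows a generic univariate HAR construction of this kind applied to one daily eigenvalue series, using 5- and 22-day sample means as regressors; it is illustrative only and not the manuscript's exact (Vector-)HAR specification.

```python
import numpy as np

def har_forecast(series, h_week=5, h_month=22):
    """One-step HAR forecast for a daily series (e.g. one eigenvalue's time series).

    Regressors are the daily value and its 5-day and 22-day sample means,
    following the convention of 5 and 22 trading days per week and month.
    Assumes the series is long enough for OLS (well over 22 observations).
    """
    y = np.asarray(series, dtype=float)
    T = len(y)
    rows, targets = [], []
    for t in range(h_month - 1, T - 1):
        daily = y[t]
        weekly = y[t - h_week + 1 : t + 1].mean()     # 5-day sample mean
        monthly = y[t - h_month + 1 : t + 1].mean()   # 22-day sample mean
        rows.append([1.0, daily, weekly, monthly])
        targets.append(y[t + 1])                      # next-day value
    X, z = np.array(rows), np.array(targets)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)      # OLS coefficients
    last = [1.0, y[-1], y[-h_week:].mean(), y[-h_month:].mean()]
    return float(np.dot(beta, last))                  # forecast for the next day
```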

 

Page 11, based on the following two equations, can one always get positive estimators for λ and σ? Please provide some explanation or reference.

 

Response: For λ, we cannot guarantee positivity. However, the estimates do not take negative values in our empirical analysis. For σ, positivity is guaranteed because we forecast σ = [log λ1, . . . , log λr]′ and recover the original-scale forecasted values via e^σ.
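Schematically (the indices here are ours, not the manuscript's), the log transform and its inverse guarantee positivity of the forecasted eigenvalues:

\[
\hat{\sigma}_t = \big(\log \hat{\lambda}_{1,t}, \ldots, \log \hat{\lambda}_{r,t}\big)^{\top},
\qquad
\hat{\lambda}_{j,t+1} = \exp\!\big(\hat{\sigma}_{j,t+1}\big) > 0 .
\]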

 

Page 12, how is equation (14) derived?

 

Response: Equation (14) follows from the integrated covariance matrix of the price process in Eq. (2). Following this equation, we estimate and forecast each part separately, namely the factor loadings (eigenvectors), the factors (eigenvalues), and the sparse residual covariance matrix, and combining them yields Eq. (14).
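Schematically (the hats, subscripts, and ordering of terms here are illustrative rather than the manuscript's exact notation), the forecast combines the three forecasted components as

\[
\widehat{\Sigma}_{t+1} \;=\; \widehat{\Gamma}_{t+1}\,
\mathrm{diag}\big(\hat{\lambda}_{1,t+1},\ldots,\hat{\lambda}_{r,t+1}\big)\,
\widehat{\Gamma}_{t+1}^{\top} \;+\; \widehat{\Sigma}_{u,t+1},
\]

where $\widehat{\Gamma}_{t+1}$ stacks the forecasted eigenvectors (factor loadings), the $\hat{\lambda}_{j,t+1}$ are the forecasted eigenvalues (factors), and $\widehat{\Sigma}_{u,t+1}$ is the forecasted sparse residual covariance matrix.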

 

Page 12, typo in the following statement: it should be “when applied to estimating . . . ”.

 

Response: We corrected it in line 239 (page 12).

 

Page 13, please check the grammar of the sentence: . . . The all eigenvalues . . . .

 

Response: We corrected it in lines 262-263 (page 13).

 

Page 25, Table 1 and Figure 1 actually cannot tell which method is the best. Some comments are needed.

 

Response: We added an explanation of Table 1 on page 13, lines 262-263, and of Figure 1 on page 14, lines 269-272.

 

Pages 27-28, it may be better to add graphs to compare the forecasting losses in Tables 3-4, in addition to the numbers.

 

Response: Although we tried to add graphs, we could not. The reason is that the aim of these tables is to confirm that our proposed methods are selected by the MCS and that the improvement is supported by the DM test. Therefore, the individual loss values for each model are not important.

 

Reference

1. Barndorff-Nielsen and Shephard (2004). Econometric analysis of realized covariation: High frequency based covariance, regression, and correlation in financial economics. Econometrica, 72, 885–925.

2. Dai, Lu, and Xiu. (2019). Knowing factors or factor loadings, or neither? Evaluating estimators of large covariance matrices with noisy and asynchronous data. Journal of Econometrics, 208, 43–79.

3. Fan, Liao, and Mincheva (2013). Large covariance estimation by thresholding principal orthogonal components. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 75, 603–680.

4. Laurent, Rombouts, and Violante (2013). On loss functions and ranking forecasting performances of multivariate volatility models. Journal of Econometrics, 173, 1–10.
