Article

Enhancing Portfolio Allocation: A Random Matrix Theory Perspective

Department of Economics, University of Insubria, 21100 Varese, Italy
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(9), 1389; https://doi.org/10.3390/math12091389
Submission received: 10 April 2024 / Revised: 25 April 2024 / Accepted: 29 April 2024 / Published: 1 May 2024

Abstract:
This paper explores the application of Random Matrix Theory (RMT) as a methodological enhancement for portfolio selection within financial markets. Traditional approaches to portfolio optimization often rely on historical estimates of correlation matrices, which are particularly susceptible to instabilities. To address this challenge, we combine a data preprocessing technique based on the Hilbert transformation of returns with RMT to refine the accuracy and robustness of correlation matrix estimation. By comparing empirical correlations with those generated through RMT, we reveal non-random properties and uncover underlying relationships within financial data. We then utilize this methodology to construct the correlation network dependence structure used in portfolio optimization. The empirical analysis presented in this paper validates the effectiveness of RMT in enhancing portfolio diversification and risk management strategies. This research contributes by offering investors and portfolio managers methodological insights to construct portfolios that are more stable, robust, and diversified. At the same time, it advances our comprehension of the intricate statistical principles underlying multivariate financial data.

1. Introduction

Markowitz’s seminal paper [1] remains a cornerstone in practical portfolio management, despite facing challenges in out-of-sample analyses due to estimation errors in expected returns and the covariance matrix, as noted in [2,3,4,5,6]. To address these limitations, researchers have investigated the use of robust estimators aimed at providing more accurate and reliable measures of moments and co-moments. Robust estimators, designed to handle outliers and errors, enhance the robustness of portfolio optimization, leading to improved stability and performance, see [7,8,9].
In recent years, there has been a notable shift towards a novel approach known as the market graph for understanding the relationship between asset returns, departing from traditional covariance-based methods. This methodology represents assets as nodes and their relationships as edges in a graph, capturing the broader structure of the financial market. Pioneering work by Mantegna [10] led to the construction of asset graphs using stock price correlations, revealing hierarchical organization within stock markets. Subsequent research conducted in [11] investigates stock network topology by examining return relationships, with the objective of revealing significant patterns inherent in the correlation matrix. Additionally, ref. [12] introduced constraints based on asset connections in the correlation graph to encourage the inclusion of less correlated assets for diversification, employing assortativity from complex network analysis to guide the optimization process. Recent research explores innovative portfolio optimization techniques using clustering information from asset networks [13,14,15]. These approaches replace the classical Markowitz model’s correlation matrix with a correlation-based clustering matrix. In [15], a network-based approach is utilized to model the structure of the financial market and the interdependencies among asset returns. This is achieved through the construction of a market graph, which captures asset connections using various edge weights. An objective function is introduced, considering both individual security volatility and network connections, to enhance the understanding of market dynamics. In [13], the authors proposed a unified mixed-integer linear programming (MILP) framework integrating clustering and portfolio optimization. Empirical results demonstrate the promising performance of the network-based portfolio selection approach, outperforming classical approaches relying solely on pairwise correlation between assets’ returns. Building upon the research outlined in [14,15], we introduce a novel framework that integrates Random Matrix Theory (RMT) into portfolio allocation procedures. RMT is a mathematical approach introduced in [16] which is used to analyze correlations in finance and, in particular, to improve portfolio management [17,18,19,20]. Specifically, we employ a combination of the Hilbert transformation and RMT to estimate the correlation matrix. This methodological approach draws inspiration from a dimensionality reduction technique introduced in [21] for data analysis, which was subsequently extended to economic time series and quantitative finance [22,23,24]. By employing this technique, we effectively separate noise from significant correlations, thereby enhancing the accuracy and robustness of the portfolio allocation process. On the one hand, the Hilbert transformation is a data preprocessing technique which we use to add complexity to the time series of returns (i.e., to extend them to the complex plane), thus improving the effectiveness in detecting lead/lag correlations in time-series data, as shown in [23,25,26]. On the other hand, RMT offers a robust denoising technique based on the Marčenko–Pastur theorem, which describes the eigenvalue distribution of large random covariance matrices [27,28,29]. As a result, we obtain a denoised version of the covariance matrix that captures the true underlying correlation structure more accurately and facilitates the computation of clustering coefficients to assess the interconnectedness of assets.
The proposed method is then tested on four equity portfolios, employing a buy-and-hold rolling window strategy with various lengths for the in-sample and out-of-sample periods. The out-of-sample performances are then compared across different methods of estimating the dependence structure between assets, including sample estimates, shrinkage estimates, clustering coefficients, and RMT. Notably, employing the RMT method consistently results in a superior out-of-sample performance compared to traditional methods across all portfolios and rolling window strategies. Furthermore, the analysis of transaction costs, measured by turnover, reveals that the RMT approach consistently leads to a lower portfolio turnover than the other methods. This reduction in turnover translates into lower transaction costs and underscores the practical applicability of the RMT approach in portfolio management.
The paper is organized as follows: Section 2 introduces the basic notions of the Hilbert transformation and RMT-based techniques. Section 3 offers a brief overview of existing portfolio selection models. Section 4 details the in-sample and out-of-sample protocols and analyzes the empirical outcomes. Finally, Section 5 discusses this paper’s key findings.

2. Random Matrix Theory

In the context of financial mathematics, it has been established that extensive empirical correlation matrices exhibit significant noise to the point where, with the exception of their principal eigenvalues and associated eigenvectors, they can be essentially treated as stochastic or random entities. Therefore, it is customary to denoise an empirical correlation matrix before its utilization. In this section, we will elucidate a denoising technique rooted in the principles of RMT where the fundamental result is the Marčenko–Pastur theorem, which describes the eigenvalue distribution of large random covariance matrices.
In the following, we denote by M(ℂ^{n×T}) the space of complex-valued matrices with dimensions n × T. Let X ∈ M(ℂ^{n×T}) be an observation matrix whose columns are the single observations of the n time series. Let Σ ∈ M(ℂ^{n×n}) be the empirical covariance matrix associated with X, i.e., Σ = (1/T) X X* (X* is the adjoint matrix of X). Since Σ is Hermitian, its eigenvalues are real, and we can define the empirical spectral distribution function as F(λ) = n^{-1} ∑_{j=1}^{n} 1_{{λ_j ≤ λ}}, where λ_j, for j = 1, …, n, are the eigenvalues of Σ. The distribution of the eigenvalues of a large random covariance matrix is in fact universal, meaning that it follows a law independent of the distribution of the underlying observation matrix, as stated in the following Marčenko–Pastur theorem [27,28,29].
Theorem 1 
(Marčenko–Pastur, 1967 [27]). Suppose that the entries of the observation matrix X are independent and identically distributed random variables with mean 0 and variance σ². Suppose that n/T → q ∈ (0, +∞) as n, T → +∞. Then, the empirical spectral distribution F̂ of the sample covariance matrix Σ̂ converges almost surely, in distribution, to a nonrandom distribution, known as the Marčenko–Pastur law and denoted by F_q, whose probability distribution is:
\rho_q(\lambda)\, d\lambda = \max\!\left( 1 - \frac{1}{q},\, 0 \right) \delta_0 + \frac{\sqrt{(\lambda_{+} - \lambda)(\lambda - \lambda_{-})}}{2 \pi q \lambda \sigma^2}\, \mathbf{1}_{\{\lambda_{-} \le \lambda \le \lambda_{+}\}}\, d\lambda,
where the first term represents a point mass with weight 1 − 1/q at the origin if q > 1, and λ_± = σ²(1 ± √q)² coincide with the lower and the upper edges of the eigenvalue spectrum.
Let us note that, under proper assumptions, the Marčenko–Pastur theorem remains valid for observations drawn from more general distributions, like fat-tailed distributions [18,30].
Utilizing the boundaries of the Marčenko–Pastur distribution within the eigenvalue spectrum facilitates distinguishing between information and noise, enabling the filtration of the sample correlation matrix. Consequently, it is expected that this filtered correlation matrix will provide more stable correlations compared to the standard sample correlation matrix.
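As an illustration, the following short Python sketch (our own illustrative code, not part of the original analysis) computes the Marčenko–Pastur edges λ₋, λ₊ and the continuous part of the density in Theorem 1, under the assumptions of unit variance and aspect ratio q = n/T; comparing the empirical eigenvalues of a sample correlation matrix with these bounds is exactly the diagnostic described above.

import numpy as np

def mp_bounds(n, T, sigma2=1.0):
    # Lower and upper edges lambda_-, lambda_+ of the Marchenko-Pastur spectrum,
    # with aspect ratio q = n/T (assumed convention).
    q = n / T
    lam_minus = sigma2 * (1.0 - np.sqrt(q)) ** 2
    lam_plus = sigma2 * (1.0 + np.sqrt(q)) ** 2
    return lam_minus, lam_plus

def mp_density(lam, n, T, sigma2=1.0):
    # Continuous part of the Marchenko-Pastur density on [lambda_-, lambda_+].
    q = n / T
    lam_minus, lam_plus = mp_bounds(n, T, sigma2)
    lam = np.asarray(lam, dtype=float)
    rho = np.zeros_like(lam)
    inside = (lam > lam_minus) & (lam < lam_plus)
    rho[inside] = np.sqrt((lam_plus - lam[inside]) * (lam[inside] - lam_minus)) / (
        2.0 * np.pi * q * lam[inside] * sigma2)
    return rho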

2.1. Filtering Covariance by RMT

One can compare the theoretical density of the eigenvalues generated by the Marčenko–Pastur theorem (in the following referred to as the benchmark or null model) with the corresponding empirical density. In this way, we can identify the number of empirical eigenvalues, possessing some known and easily interpretable characteristics, that significantly deviate from the null model. Several methods for filtering the correlation matrix based on RMT have been proposed in the literature. Essentially, these approaches involve a suitable modification of the eigenvalues within the spectrum of the sample correlation matrix while simultaneously preserving its trace, as explained below.
The upper edge λ₊ of the Marčenko–Pastur density serves as a threshold for distinguishing the noisy component of Σ. Eigenvalues of Σ within the interval [λ₋, λ₊] adhere to the random correlation matrix hypothesis and represent noise-associated eigenvalues, as do those below λ₋. The rationale behind excluding the smallest deviating eigenvalues from these filters lies in the fact that, unlike large eigenvalues, which are well separated from the Marčenko–Pastur (MP) upper bound, the same rationale does not always hold true for the smallest deviating eigenvalues. Typically, small eigenvalues can be situated beyond the lower edge of the spectrum, a pattern consistent with the finite dimension of the observed matrix (n × T). Furthermore, as emphasized in [17,31], there exists clear evidence of the non-randomness and temporal stability of the eigenvectors corresponding to eigenvalues larger than λ₊. Such characteristics have not been consistently confirmed for the eigenvectors associated with eigenvalues smaller than λ₋. Therefore, eigenvalues higher than λ₊ can be regarded as "meaningful". This leads to the following denoising method for the estimated covariance matrix Σ̂: all the eigenvalues of Σ̂ lower than or equal to λ₊ are replaced by a constant value, either equal to the average value of the "noisy" eigenvalues (as in [17]) or equal to zero (as in [19]); both methods have the advantage of preserving the trace of Σ̂ while eliminating the non-meaningful eigenvalues. All the eigenvalues of Σ̂ strictly greater than λ₊ are left unchanged.
In this paper, the eigenvalues of Σ̂ that fall in the range predicted by the Marčenko–Pastur distribution are set to zero. Then, to build the filtered covariance matrix Σ_F, the following steps are undertaken: suppose there exist k relevant eigenvalues (i.e., higher than λ₊), denoted by {λ_j}, j = n − k + 1, …, n, and consider the diagonal matrix of filtered eigenvalues Λ_F = diag(0, …, 0, λ_{n−k+1}, …, λ_n). At this point, we plug Λ_F back into the eigendecomposition with the original eigenvectors, here denoted by V, thus obtaining:
\Sigma_F = V \Lambda_F V^{-1}.
Finally, in order to preserve the trace of the original matrix and prevent system distortion, we adjust the main diagonal elements of the filtered matrix Σ_F by replacing them with the sample variances of the portfolio components σ̂²_{ii}, i = 1, …, n.
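A minimal sketch of this filtering step is reported below (illustrative code, not the authors' implementation); it assumes a real or complex Hermitian correlation matrix R estimated from T observations of n series, sets the eigenvalues below λ₊ to zero, and restores the original diagonal as described above.

import numpy as np

def rmt_filter_correlation(R, T):
    # R: (n x n) Hermitian sample correlation matrix; T: length of the estimation window.
    n = R.shape[0]
    q = n / T                                     # aspect ratio (assumed convention q = n/T)
    lam_plus = (1.0 + np.sqrt(q)) ** 2            # MP upper edge with sigma^2 = 1 (correlations)
    eigval, eigvec = np.linalg.eigh(R)            # real eigenvalues, orthonormal eigenvectors
    eigval_filtered = np.where(eigval > lam_plus, eigval, 0.0)   # keep only "meaningful" modes
    R_filtered = (eigvec * eigval_filtered) @ eigvec.conj().T    # V Lambda_F V^{-1}
    np.fill_diagonal(R_filtered, np.diag(R).real)                # restore the original diagonal
    return R_filtered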
Remark 1. 
When applying this method to portfolio selection, we will work with the correlation matrix. In this case, the Marčenko–Pastur law remains valid, with the only difference being that σ² = 1.

2.2. Data Preprocessing: Hilbert Transformation

Data preprocessing is a crucial step in the data analysis process, involving techniques designed to transform raw data into a clean dataset suitable for further analysis. One powerful method used particularly in signal processing is the Hilbert transform. This mathematical transformation is mainly applied in climatology, signal processing, finance, and economics [23,25,26] and it helps to derive the analytic signal from a real-valued signal, which is crucial for applications like amplitude modulation and phase shifting. This technique is especially well suited for detecting correlations with lead/lags, making it highly effective for time series exhibiting shifts in their co-movements.
Let us consider the panel of asset returns (i.e., the portfolio) over time as x(t), and let us indicate with x̃(t) the complex series derived by Hilbert transforming the original time series, x̃(t) = x(t) + i H[x(t)], so that the real part of each component, Re(x̃(t)) = x(t), coincides with the original time series. The Hilbert transformation is a linear transformation defined in [32] as the following integral:
\mathcal{H}[x](t) = \frac{1}{\pi}\, P \int_{-\infty}^{+\infty} \frac{x(\tau)}{\tau - t}\, d\tau,
where P indicates the Cauchy principal value. The sequence x̃(t) with added complexity can then be written as:
\tilde{x}(t) = \frac{i}{\pi} \int_{-\infty}^{+\infty} \frac{x(\tau)}{\tau - t}\, d\tau.
As an example of the effect of the Hilbert transformation on the correlation matrix, let us consider two periodic signals defined as:
x = \sin\!\left( \frac{\pi t}{4} \right) \qquad (4)
y = \cos\!\left( \frac{\pi t}{4} \right) \qquad (5)
The correlation matrix between the two time series is:
C = \begin{pmatrix} 1 & 5.86 \cdot 10^{-4} \\ 5.86 \cdot 10^{-4} & 1 \end{pmatrix} \qquad (6)
from which it is clear how negligible the correlation between the two time series is. Meanwhile, after Hilbert transformations of the time series, the two sequences with added complexity have the following complex correlation matrix:
\tilde{C} = \begin{pmatrix} 1 + i\,0 & 5.86 \cdot 10^{-4} + i\,0.99 \\ 5.86 \cdot 10^{-4} - i\,0.99 & 1 - i\,0 \end{pmatrix} = \begin{pmatrix} 1 & 0.99\, e^{i\,1.57} \\ 0.99\, e^{-i\,1.57} & 1 \end{pmatrix} \qquad (7)
We note from (7) that the magnitude of the correlation is close to one, as expected if one considers the lags to be just a lack of synchronicity and not a lack of co-movement. The different behavior of the two correlation matrices is also evident if one plots the two time series in the real plane and in the Gauss (complex) plane, see Figure 1.
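The effect described above can be reproduced with a few lines of Python (an illustrative sketch, using SciPy's analytic-signal implementation of the Hilbert transform): the real correlation between the two signals is negligible, while the complexified series display a correlation of magnitude close to one and a phase close to ±π/2 ≈ ±1.57.

import numpy as np
from scipy.signal import hilbert

t = np.arange(1024)
x = np.sin(np.pi * t / 4)
y = np.cos(np.pi * t / 4)
print(np.corrcoef(x, y)[0, 1])                 # ~0: the lag hides the co-movement

xt, yt = hilbert(x), hilbert(y)                # analytic signals x + iH[x] and y + iH[y]
z = np.vstack([xt - xt.mean(), yt - yt.mean()])
C = (z @ z.conj().T) / z.shape[1]              # complex covariance matrix
C = C / np.sqrt(np.outer(np.diag(C).real, np.diag(C).real))   # normalize to a correlation matrix
print(np.abs(C[0, 1]), np.angle(C[0, 1]))      # magnitude ~1, phase ~ +/- pi/2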
If we use the data sequence with added complexity, then the Marčenko–Pastur law has to be modified by replacing q with 2q, since the imaginary part of a data sequence with added complexity is not independent of its real part, being related to it by the Hilbert transformation [23].
Utilizing the Hilbert transformation, the covariance (cross-correlation) matrix is represented in a space where the eigenvectors’ components are spread across the complex plane. This facilitates identifying lead–lag relationships between components through angular disparities. The components of the complex covariance matrix can be denoted as follows:
\sigma_{kj} = \operatorname{Re}(\sigma_{kj}) + i\, \operatorname{Im}(\sigma_{kj}) = |\sigma_{kj}|\, e^{i \phi_{kj}}, \qquad k, j = 1, \ldots, n,
where the absolute value of each element of the complex covariance matrix gives the strength of the correlation; meanwhile, φ_{kj}, k, j = 1, …, n, indicates the correlation in phase space. The leading or lagging behavior of each component is determined by φ_{kj}, which measures to what extent the time series k leads the time series j.
We can break down the covariance (correlation) matrix as:
\Sigma = V \Lambda V^{-1} = V \left( \Lambda_{\text{noise}} + \Lambda_{F} \right) V^{-1} = \Sigma_{\text{noise}} + \Sigma_{F},
where Λ_noise is the diagonal matrix of insignificant eigenvalues, and Σ_F and Σ_noise denote the principal and noisy components of the correlation matrix, respectively.
In the context of portfolio allocation, the Hilbert transformation technique proves valuable for identifying lead/lag correlations in the performance of different assets over time. This allows investors to uncover temporal patterns and optimize portfolio strategies by understanding the timing of movements in asset values. Essentially, it helps to discern how changes in one asset’s value relate to shifts in another asset, leading to more informed and dynamic portfolio decision making.

3. Portfolio Selection Models

In this section, we explore the nuances of asset allocation problems. We begin by outlining the classical portfolio model, inspired by Markowitz’s foundational work, which employs the variance/covariance matrix to analyze the interdependencies among asset returns, typically estimated through a sample approach. Following this, we describe the approach rooted in network theory (see [15]), offering a novel perspective on portfolio optimization. As also observed in recent studies [22,23] across various fields of the economic literature, this methodological approach presents a modern and sophisticated way to address challenges in traditional portfolio selection processes. It aims to unveil an optimal solution by using the interconnected relationships among different assets. Finally, we explore the application of RMT in the portfolio selection model, focusing particularly on the minimum variance portfolio.

3.1. Traditional Global Minimum Variance Portfolio

The classical global minimum variance (GMV) approach is designed to optimize portfolio allocation by determining the fractions w_i of a given capital to be invested in each asset i from a predetermined basket of assets. The objective is to minimize the portfolio risk, identified with its variance. In this context, n represents the number of available assets, X_i, i = 1, …, n, denotes the random variable of daily returns of the i-th asset, and Σ is the sample covariance matrix. The GMV strategy is formulated as follows:
\begin{array}{ll} \min_{w} & w^\top \Sigma\, w \\ \text{s.t.} & w^\top e = 1 \\ & 0 \le w_i \le 1, \quad i = 1, \ldots, n, \end{array} \qquad (8)
where e is a vector of ones of length n and w = [w_i]_{i=1,…,n} is the vector of the fractions invested in each asset. The first constraint is the budget constraint and requires that the whole capital be invested. The conditions 0 ≤ w_i ≤ 1, i = 1, …, n, preclude the possibility of short selling (as is well known, a closed-form solution of the GMV problem exists if short selling is allowed). In [1], the dependence structure between assets is assessed through the Pearson correlation coefficient between each pair of assets.
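For completeness, the long-only GMV problem (8) can be solved numerically with a generic constrained optimizer; the short sketch below (illustrative code, not the authors' solver) uses SciPy's SLSQP routine and accepts any positive semidefinite covariance estimate Σ.

import numpy as np
from scipy.optimize import minimize

def gmv_weights(Sigma):
    # Long-only global minimum variance weights for a covariance estimate Sigma.
    n = Sigma.shape[0]
    w0 = np.full(n, 1.0 / n)                                    # start from equal weights
    budget = {'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}   # budget constraint w'e = 1
    bounds = [(0.0, 1.0)] * n                                   # no short selling
    res = minimize(lambda w: w @ Sigma @ w, w0,
                   method='SLSQP', bounds=bounds, constraints=[budget])
    return res.x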
The process of estimating covariance matrices from samples is widely recognized for its susceptibility to noise and its high estimation error. This introduces uncertainty and instability when attempting to estimate the true covariance structure, which frequently results in a suboptimal performance of portfolios when applied out of sample. Portfolios based on unreliable covariance structures may exhibit increased volatility, high risk, and compromised returns. Therefore, ensuring precise and stable covariance estimates is crucial for enhancing the robustness and effectiveness of portfolio optimization models in real-world applications.

3.2. Asset Allocations through Network-Based Clustering Coefficients

To address the challenges of estimating the dependence structure and to improve the reliability of covariance matrix estimation, network theory has recently been applied in portfolio allocation models, and we follow this route here. This approach involves representing financial assets through an undirected graph G = (V, E), where V is the set of nodes representing assets and E is the set of edges denoting dependencies or connections between assets. The weights of the edges are given by the Pearson correlation coefficients ρ_{ij}, i, j = 1, …, n.
At this point, problem (8) is transformed into the following:
\begin{array}{ll} \min_{w} & w^\top H\, w \\ \text{s.t.} & e^\top w = 1 \\ & 0 \le w_i \le 1, \quad i = 1, \ldots, n, \end{array} \qquad (9)
where H is a matrix obtained as described below. First, we create a binary undirected adjacency matrix A, associated with the filtered (complex) correlation matrix obtained through the techniques explained in Section 2. (Binarization is often used to simplify the analysis of networks by reducing them to binary representations; after binarizing the entire adjacency matrix, each entry is either 0 or 1, representing the absence or presence of an edge, respectively, based on the chosen threshold.) The binarization is carried out with the following threshold criterion: for every i, j = 1, …, n,
A_{ij} = \begin{cases} 1 & \text{if } \rho^{F}_{ij} \ge \rho_{ij}, \\ 0 & \text{otherwise}, \end{cases} \qquad (10)
where ρ^F_{ij} are the elements of the filtered correlation matrix.
Second, we calculate for each node i the clustering coefficient c i as proposed in [33] for binary and weighted graphs.
Third, we construct the interconnectedness matrix C, whose elements are:
C_{ij} = \begin{cases} c_i\, c_j & \text{if } i \neq j, \\ 1 & \text{otherwise}. \end{cases} \qquad (11)
Finally, we build H = Δ C Δ, the dependency matrix in (9), where Δ = diag(s_i) is the diagonal matrix with entries s_i = σ_i / ∑_{i=1}^{n} σ_i², as in [15]. This gives the new objective function to be minimized in order to obtain the optimal allocation.
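The construction of H can be summarized in the following Python sketch (illustrative code; the binary clustering coefficient of Watts and Strogatz is used here in place of the weighted variant of [33], and the scaling s_i follows the normalization assumed above).

import numpy as np

def dependency_matrix(rho_filtered, rho_sample, sigma):
    # rho_filtered, rho_sample: (n x n) filtered and sample correlation matrices (real-valued);
    # sigma: vector of sample standard deviations of the n assets.
    n = rho_filtered.shape[0]
    A = (rho_filtered >= rho_sample).astype(float)   # binarization rule of Eq. (10)
    np.fill_diagonal(A, 0.0)

    # Clustering coefficient of each node: triangles over connected triples.
    deg = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2.0
    c = np.zeros(n)
    mask = deg > 1
    c[mask] = 2.0 * triangles[mask] / (deg[mask] * (deg[mask] - 1.0))

    C = np.outer(c, c)                               # interconnectedness matrix, Eq. (11)
    np.fill_diagonal(C, 1.0)
    s = sigma / np.sum(sigma ** 2)                   # entries s_i (assumed normalization)
    Delta = np.diag(s)
    return Delta @ C @ Delta                         # H = Delta C Delta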

4. Empirical Protocol and Performance Analysis

In this section, we delineate the empirical protocol employed in this paper and subsequently apply it to assess the effectiveness of the proposed approach through empirical applications. To test the robustness and mitigate data mining bias, we analyze four equity portfolios, as will be explained in Section 4.3.
In portfolio allocation models, in-sample and out-of-sample analyses are crucial components for assessing the performance and robustness of the strategies. The analysis of portfolio strategies centers around key criteria, including portfolio diversification, transaction costs, and risk–return performance measures. This examination is conducted using a rolling window methodology characterized by an in-sample period of length l and an out-of-sample period of length m.
This methodology begins by computing optimal weights during the initial in-sample window spanning from time t = 1 to t = l . These optimal weights remain unchanged during the subsequent out-of-sample period, extending from t = l + 1 to t = l + m . Then, the returns and performance metrics for this out-of-sample period are calculated. This process iterates by advancing both the in-sample and out-of-sample periods by m steps. With each iteration, the weights are recalculated and the portfolio performance is evaluated until the end of the dataset.
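A schematic implementation of this protocol is sketched below (illustrative code; estimate_weights is a placeholder for any of the allocation rules compared in this paper, and portfolio weights are simply held fixed within each out-of-sample window).

import numpy as np

def rolling_backtest(returns, l, m, estimate_weights):
    # returns: (T x n) array of asset returns; l, m: in-sample and out-of-sample lengths.
    T = returns.shape[0]
    oos_returns, weight_path = [], []
    start = 0
    while start + l + m <= T:
        in_sample = returns[start:start + l]
        w = estimate_weights(in_sample)              # optimal weights on the in-sample window
        out_sample = returns[start + l:start + l + m]
        oos_returns.append(out_sample @ w)           # weights kept fixed out of sample
        weight_path.append(w)
        start += m                                   # shift both windows forward by m steps
    return np.concatenate(oos_returns), np.array(weight_path)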

4.1. Diversification and Transaction Costs: In-Sample Analysis

We assess the performance of the obtained portfolios by analyzing diversification and transaction costs, both of which play a crucial role in effective portfolio management. Diversification represents a fundamental risk management tool that seeks to optimize returns while minimizing exposure to undue risks associated with concentrated holdings. This is essential to mitigate risk and enhance the stability of an investment portfolio. By holding a diversified set of assets with potentially uncorrelated returns, investors aim to achieve a balance between risk and return. In this paper, as a measure of diversification, we use the modified Herfindahl index, defined as:
\mathrm{HI} = \frac{w^\top w - \frac{1}{n}}{1 - \frac{1}{n}},
where w = [w_i], i = 1, …, n, represents the vector of optimal weights. The index ranges from 0 to 1: a value of 0 is reached in the case of the equally weighted (EW) portfolio (deemed the most diversified one), while a value of 1 indicates a portfolio concentrated in only one asset.
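In code, the index reads as follows (illustrative sketch).

import numpy as np

def modified_herfindahl(w):
    # 0 for the equally weighted portfolio, 1 for a portfolio concentrated in one asset.
    w = np.asarray(w)
    n = len(w)
    return (w @ w - 1.0 / n) / (1.0 - 1.0 / n)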
The second index used in this analysis is related to transaction costs which have a direct impact on portfolio performances: higher transaction costs lead to lower returns. Understanding and minimizing transaction costs is an integral component of effective portfolio management, ensuring that investment decisions are cost-efficient and align with the investor’s goals. By addressing these aspects, investors can enhance the overall competitiveness and sustainability of their investment strategies. In this paper, as a proxy for transaction costs, we use the portfolio turnover, denoted as ϕ and computed as:
\phi = \sum_{i=1}^{n} \left| w_i^{+} - w_i^{-} \right|,
where w_i^− and w_i^+ are the optimal portfolio weights of the i-th asset, respectively, before and after rebalancing (based on the optimization strategy). A higher turnover often comes with increased transaction costs. Monitoring turnover is crucial for understanding the level of trading activity and its impact on the portfolio’s performance. In particular, a lower turnover percentage indicates that the portfolio has experienced relatively minimal changes in asset weights, which may be indicative of a more stably managed portfolio.
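The turnover proxy can likewise be computed directly from the weights before and after each rebalancing date (illustrative sketch).

import numpy as np

def turnover(w_before, w_after):
    # Sum of absolute changes in the optimal weights at one rebalancing date.
    return np.sum(np.abs(np.asarray(w_after) - np.asarray(w_before)))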

4.2. Performance Measurements: Out-of-Sample Analysis

Analyzing risk-adjusted performance measures for different allocation methods is essential, as it evaluates the portfolio’s success in generating returns relative to the level of risk assumed. By examining metrics like the Sharpe ratio and the omega ratio, investors gain valuable insights into how effectively a portfolio balances risk and return. This analysis assists investors in selecting the approach that delivers favorable returns while aligning with their risk tolerance.
Here, we briefly recap the definitions of the risk-adjusted performance measures used in the empirical part of this paper to compare the different models under analysis. The Sharpe ratio ( SR ) is defined as:
\mathrm{SR} = \frac{E\left[ r_p - r_f \right]}{\sqrt{\operatorname{Var}\left( r_p - r_f \right)}},
where r_p indicates the out-of-sample portfolio returns (obtained using the rolling window methodology) and r_f denotes the risk-free rate (in our empirical analysis, we adopt a constant risk-free rate, set to zero, following the approach used in [9]). This ratio indicates the mean excess return per unit of overall risk. The portfolio with the highest SR is typically considered the best, especially in the case of positive portfolio excess returns with respect to the risk-free rate.
The omega ratio  ( OR ) , introduced in [34], is defined as:
\mathrm{OR} = \frac{E\left[ (r_p - \epsilon)^{+} \right]}{E\left[ (\epsilon - r_p)^{+} \right]},
where ϵ is a specified threshold. Returns above the threshold are considered gains by investors, while those below are considered as losses. An OR greater than one suggests that the portfolio provides more expected gains than expected losses. The choice of threshold ϵ can vary, and in our empirical analysis, it is set to 0.
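Both measures are straightforward to compute from the out-of-sample return series; the sketch below (illustrative code) uses a zero risk-free rate and a zero threshold ε, as in our empirical analysis.

import numpy as np

def sharpe_ratio(r_p, r_f=0.0):
    # Annualization is omitted here; returns are assumed to be at a single frequency.
    excess = np.asarray(r_p) - r_f
    return excess.mean() / excess.std(ddof=1)

def omega_ratio(r_p, eps=0.0):
    r_p = np.asarray(r_p)
    gains = np.clip(r_p - eps, 0.0, None).mean()     # E[(r_p - eps)^+]
    losses = np.clip(eps - r_p, 0.0, None).mean()    # E[(eps - r_p)^+]
    return gains / losses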

4.3. Data Description and Empirical Results

In this empirical analysis, we examine four equity indexes, sourcing our data from Bloomberg. The selected indexes are as follows: (i) the Nasdaq 100 index; (ii) the energy sector of the S&P500 index, which focuses only on energy companies; (iii) the Dow Jones Industrial Average index; and (iv) the benchmark stock market index of the Bolsa de Madrid. We construct each portfolio by including only components listed in the specified index from 2 January 2014 to 19 January 2024. Detailed information regarding the Bloomberg tickers of the selected assets in each portfolio can be found in Appendix A. The dataset for each portfolio is composed of 2622 daily observations spanning from 2 January 2014 to 19 January 2024.
As detailed in Section 3, we address optimization problems (8) and (9), both of which account for the dependence structure in constructing the optimal portfolio. In problem (8), this structure is captured by the covariance matrix Σ, which is estimated using both the sample and the shrinkage approaches. Conversely, in (9), the dependence structure is represented by the matrix H, leveraging the clustering coefficient method as outlined in [15], along with the approach proposed in Section 2 (utilizing the Hilbert transformation and RMT).
To evaluate the performances of the approaches considered in this paper, we employ four distinct buy-and-hold rolling window methodologies: the first three are 6, 12 or 24 months in-sample and 1 month out-of-sample; the last one is 24 months in-sample and 2 months out-of-sample.
After obtaining the out-of-sample returns for each approach under analysis, we then proceed to compute their respective out-of-sample performances. Figure 2 illustrates the out-of-sample performances of the four portfolios under analysis, specifically for the rolling window methodology of 6 months in-sample and 1 month out-of-sample. (Please note that, due to space limitations, we have omitted the out-of-sample performances for the other rolling window methodologies. Nevertheless, this information is readily available upon request to the corresponding author.)
Observing Figure 2, it becomes clear that across all portfolios under analysis, and in the case of the rolling window methodology with 6 months in-sample and 1 month out-of-sample, the CC-RMT strategy exhibits a superior out-of-sample performance compared to the “Sample”, “Shrink”, and “CC” strategies. This finding holds true for the other rolling window methodologies as well. However, relying solely on the out-of-sample performance as a singular index may not provide a comprehensive evaluation of the various investment strategies under analysis. This limitation arises because the out-of-sample performance prioritizes returns while overlooking risk. Therefore, a more comprehensive assessment requires the consideration of both return and risk metrics to understand the inherent risk–return trade-off in each strategy. To this end, we compute two risk-adjusted performance measures: the Sharpe ratio and the omega ratio. Table 1 displays key out-of-sample statistics, including the first four moments, the SR, and the OR, for the portfolios under analysis across the four rolling window methodologies considered. While the first four moments may not clearly indicate which strategy performs better due to potential inconsistencies in mean, skewness, standard deviation, and kurtosis, a different picture emerges when examining risk-adjusted performance measures like the SR and OR. In particular, the CC-RMT strategy consistently exhibits a superior risk-adjusted performance across all portfolios and rolling windows considered.
In contrast to more traditional methods, the CC-RMT strategy seems to effectively capture the complex interdependencies and structural patterns within the financial markets, providing a more accurate representation of the market dynamics. Moreover, the two network approaches (CC and CC-RMT) yield differing outcomes. Although both employ the clustering coefficient to measure the dependence structure, they differ in the threshold criteria used in the binarization process of the correlation adjacency matrix. Unlike our approach, the authors in [15] do not employ the Hilbert transformation of time series or the RMT filtering method to identify only the significant correlations. Instead, they consider different thresholds on correlation levels to generate multiple binary adjacency matrices and compute the corresponding clustering coefficients of the nodes. Finally, they compute the average clustering coefficient for each node to derive the interconnectedness matrix as outlined in Equation (11).
Considering transaction costs is crucial in practical portfolio management, as excessive turnover can erode returns. As mentioned earlier, this study evaluates transaction costs by using turnover as a proxy. Figure 3 illustrates the turnover values in each rebalancing period for the four portfolios under examination, employing a rolling window of 6 months in-sample and 1 month out-of-sample. The figure clearly indicates that CC-RMT consistently yields a lower portfolio turnover. To provide a comprehensive overview of portfolio turnover, Table 2 presents the average turnover for the four portfolios across all considered rolling windows. These results consistently highlight that CC-RMT tends to produce portfolios with a lower turnover, leading to reduced transaction costs.
We have additionally computed the modified Herfindahl index as a proxy for assessing portfolio diversification. Due to space limitations, the average modified Herfindahl indexes for the analyzed portfolios and rolling window methodologies are presented on the right-hand side of Table 2. Notably, the findings reveal that the CC-RMT approach consistently produces a lower modified Herfindahl index, indicating a more diversified portfolio. This observation highlights the contribution of the CC-RMT model to enhancing diversification within the portfolio, a crucial aspect of effective risk management.

5. Discussions

This study introduces a novel method for estimating the dependence structure, a critical aspect in portfolio selection. In particular, we combine the Hilbert transformation technique and Random Matrix Theory. The Hilbert transformation is used as a data preprocessing technique which identifies lead/lag correlations in asset performance, enabling investors to optimize strategies based on timing as explained in Section 2.2. Random Matrix Theory is used as a filtering technique to differentiate between noise and meaningful information in the covariance matrix, as explained in Section 2.1. The proposed approach aims to obtain a denoised version of the dependence structure, capturing the genuine connections between assets more precisely. By integrating these two techniques, this study enables a more accurate and resilient estimation of the portfolio allocation process.
The empirical analysis demonstrates that the resulting CC-RMT model not only reduces transaction costs, as evidenced by a lower turnover, and enhances portfolio diversification, as shown by a lower modified Herfindahl index, but also excels in risk-adjusted performance measures. Across different portfolios and rolling windows, the CC-RMT strategy consistently exhibits a superior risk-adjusted performance compared to other approaches, including the sample, shrinkage, and clustering coefficient strategies (as discussed in [15]). The CC-RMT model provides investors with a clearer understanding of market dynamics, offering a well-informed framework for making investment decisions. This paper deals only with a single-objective function, as we consider the global minimum variance portfolio. A possible extension is to apply network theory to the estimation of other portfolio moments and co-moments. This would lead to the investigation of multi-objective portfolio optimization techniques as in [35,36].
This research significantly contributes to portfolio management by providing insights into the underlying structures and dynamics of investments. It assists investors in making smarter and more effective investment decisions by providing valuable insights and guiding them towards strategies that are better adjusted for risk.

Author Contributions

Conceptualization, F.V., A.H. and E.M.; methodology, F.V., A.H. and E.M.; software, F.V., A.H. and E.M.; validation, F.V., A.H. and E.M.; formal analysis, F.V., A.H. and E.M.; investigation, F.V., A.H. and E.M.; resources, F.V., A.H. and E.M.; data curation, F.V., A.H. and E.M.; writing—original draft preparation, F.V., A.H. and E.M.; writing—review and editing, F.V., A.H. and E.M.; visualization, F.V., A.H. and E.M.; supervision, F.V., A.H. and E.M.; project administration, F.V., A.H. and E.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets presented in this article are not readily available. Requests to access the datasets should be directed to Bloomberg Professional Services at https://www.bloomberg.com/professional, accessed on 10 April 2024.

Acknowledgments

F.V., A.H., and E.M. acknowledge that they are members of GNAMPA-INdAM.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RMT  Random Matrix Theory
GMV  Global Minimum Variance
HI  Herfindahl Index
OR  Omega Ratio
SR  Sharpe Ratio
CC  Clustering Coefficient Approach
CC-RMT  Clustering Coefficient via the Random Matrix Theory Approach

Appendix A

Table A1. This table reports the selected components of the four portfolios under analysis.
IBEX-P | DJI-P | SP5ENRS-P | NDX-P
IBE SM Equity | UNH UN Equity | OXY UN Equity | AMZN UW Equity
SAN SM Equity | MSFT UQ Equity | OKE UN Equity | CPRT UW Equity
BBVA SM Equity | GS UN Equity | CVX UN Equity | IDXX UW Equity
TEF SM Equity | HD UN Equity | COP UN Equity | CSGP UW Equity
REP SM Equity | AMGN UQ Equity | XOM UN Equity | CSCO UW Equity
ACS SM Equity | MCD UN Equity | PXD UN Equity | INTC UW Equity
RED SM Equity | CAT UN Equity | VLO UN Equity | MSFT UW Equity
ELE SM Equity | BA UN Equity | SLB UN Equity | NVDA UW Equity
BKT SM Equity | TRV UN Equity | HES UN Equity | CTSH UW Equity
ANA SM Equity | AAPL UQ Equity | MRO UN Equity | BKNG UW Equity
NTGY SM Equity | AXP UN Equity | WMB UN Equity | ADBE UW Equity
MAP SM Equity | JPM UN Equity | CTRA UN Equity | ODFL UW Equity
IDR SM Equity | IBM UN Equity | EOG UN Equity | AMGN UW Equity
ACX SM Equity | WMT UN Equity | EQT UN Equity | AAPL UW Equity
SCYR SM Equity | JNJ UN Equity | HAL UN Equity | ADSK UW Equity
COL SM Equity | PG UN Equity |  | CTAS UW Equity
MEL SM Equity | MRK UN Equity |  | CMCSA UW Equity
 | MMM UN Equity |  | KLAC UW Equity
 | NKE UN Equity |  | PCAR UW Equity
 | DIS UN Equity |  | COST UW Equity
 | KO UN Equity |  | REGN UW Equity
 | CSCO UQ Equity |  | AMAT UW Equity
 | INTC UQ Equity |  | SNPS UW Equity
 | VZ UN Equity |  | EA UW Equity
 |  |  | FAST UW Equity
 |  |  | ANSS UW Equity
 |  |  | GILD UW Equity
 |  |  | BIIB UW Equity
 |  |  | LRCX UW Equity
 |  |  | TTWO UW Equity
 |  |  | VRTX UW Equity
 |  |  | PAYX UW Equity
 |  |  | QCOM UW Equity
 |  |  | ROST UW Equity
 |  |  | SBUX UW Equity
 |  |  | INTU UW Equity
 |  |  | MCHP UW Equity
 |  |  | MNST UW Equity
 |  |  | ORLY UW Equity
 |  |  | ASML UW Equity
 |  |  | SIRI UW Equity
 |  |  | DLTR UW Equity

References

  1. Markowitz, H. Portfolio selection. J. Financ. 1952, 7, 77–91. [Google Scholar]
  2. Chopra, V.K.; Ziemba, W.T. The effect of errors in means, variances, and covariances on optimal portfolio choice. In Handbook of the Fundamentals of Financial Decision Making: Part I; World Scientific: Singapore, 2013; pp. 365–373. [Google Scholar]
  3. Jobson, J.D.; Korkie, B. Estimation for Markowitz efficient portfolios. J. Am. Stat. Assoc. 1980, 75, 544–554. [Google Scholar] [CrossRef]
  4. Merton, R.C. On estimating the expected return on the market: An exploratory investigation. J. Financ. Econ. 1980, 8, 323–361. [Google Scholar] [CrossRef]
  5. Chung, M.; Lee, Y.; Kim, J.H.; Kim, W.C.; Fabozzi, F.J. The effects of errors in means, variances, and correlations on the mean-variance framework. Quant. Financ. 2022, 22, 1893–1903. [Google Scholar] [CrossRef]
  6. Kolm, P.N.; Tütüncü, R.; Fabozzi, F.J. 60 years of portfolio optimization: Practical challenges and current trends. Eur. J. Oper. Res. 2014, 234, 356–371. [Google Scholar] [CrossRef]
  7. Ledoit, O.; Wolf, M. Robust performance hypothesis testing with the Sharpe ratio. J. Empir. Financ. 2008, 15, 850–859. [Google Scholar] [CrossRef]
  8. Martellini, L.; Ziemann, V. Improved estimates of higher-order comoments and implications for portfolio selection. Rev. Financ. Stud. 2009, 23, 1467–1502. [Google Scholar] [CrossRef]
  9. Hitaj, A.; Zambruno, G. Are Smart Beta strategies suitable for hedge fund portfolios? Rev. Financ. Econ. 2016, 29, 37–51. [Google Scholar] [CrossRef]
  10. Mantegna, R.N. Hierarchical structure in financial markets. Eur. Phys. J.-Condens. Matter Complex Syst. 1999, 11, 193–197. [Google Scholar] [CrossRef]
  11. Onnela, J.P.; Kaski, K.; Kertész, J. Clustering and information in correlation based financial networks. Eur. Phys. J. B 2004, 38, 353–362. [Google Scholar] [CrossRef]
  12. Ricca, F.; Scozzari, A. Portfolio optimization through a network approach: Network assortative mixing and portfolio diversification. Eur. J. Oper. Res. 2024, 312, 700–717. [Google Scholar] [CrossRef]
  13. Puerto, J.; Rodríguez-Madrena, M.; Scozzari, A. Clustering and portfolio selection problems: A unified framework. Comput. Oper. Res. 2020, 117, 104891. [Google Scholar] [CrossRef]
  14. Clemente, G.P.; Grassi, R.; Hitaj, A. Smart network based portfolios. Ann. Oper. Res. 2022, 316, 1519–1541. [Google Scholar] [CrossRef] [PubMed]
  15. Clemente, G.P.; Grassi, R.; Hitaj, A. Asset allocation: New evidence through network approaches. Ann. Oper. Res. 2021, 299, 61–80. [Google Scholar] [CrossRef]
  16. Wigner, E.P. On a class of analytic functions from the quantum theory of collisions. In The Collected Works of Eugene Paul Wigner: Part A: The Scientific Papers; Springer: Berlin/Heidelberg, Germany, 1993; pp. 409–440. [Google Scholar]
  17. Laloux, L.; Cizeau, P.; Potters, M.; Bouchaud, J.P. Random matrix theory and financial correlations. Int. J. Theor. Appl. Financ. 2000, 3, 391–397. [Google Scholar] [CrossRef]
  18. Bouchaud, J.P.; Potters, M. Financial applications of random matrix theory: A short review. In The Oxford Handbook of Random Matrix Theory; Oxford University Press: Oxford, UK, 2015. [Google Scholar]
  19. Plerou, V.; Gopikrishnan, P.; Rosenow, B.; Amaral, L.A.N.; Guhr, T.; Stanley, H.E. Random matrix approach to cross correlations in financial data. Phys. Rev. E 2002, 65, 066126. [Google Scholar] [CrossRef]
  20. Pafka, S.; Kondor, I. Estimated correlation matrices and portfolio optimization. Phys. A Stat. Mech. Its Appl. 2004, 343, 623–634. [Google Scholar] [CrossRef]
  21. Horel, J.D. Complex principal component analysis: Theory and examples. J. Appl. Meteorol. Climatol. 1984, 23, 1660–1673. [Google Scholar] [CrossRef]
  22. Guerini, M.; Vanni, F.; Napoletano, M. E pluribus, quaedam. Gross domestic product out of a dashboard of indicators. Ital. Econ. J. 2024, 1–16. [Google Scholar] [CrossRef]
  23. Aoyama, H. Macro-Econophysics: New Studies on Economic Networks and Synchronization; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2017. [Google Scholar]
  24. Wilinski, M.; Ikeda, Y.; Aoyama, H. Complex correlation approach for high frequency financial data. J. Stat. Mech. Theory Exp. 2018, 2018, 023405. [Google Scholar] [CrossRef]
  25. Granger, C.W.J.; Hatanaka, M. Spectral Analysis of Economic Time Series (PSME-1); Princeton University Press: Princeton, NJ, USA, 2015. [Google Scholar]
  26. Rasmusson, E.M.; Arkin, P.A.; Chen, W.Y.; Jalickee, J.B. Biennial variations in surface temperature over the United States as revealed by singular decomposition. Mon. Weather. Rev. 1981, 109, 587–598. [Google Scholar] [CrossRef]
  27. Pastur, L.; Martchenko, V. The distribution of eigenvalues in certain sets of random matrices. Math. USSR-Sb. 1967, 1, 457–483. [Google Scholar]
  28. Paul, D.; Aue, A. Random matrix theory in statistics: A review. J. Stat. Plan. Inference 2014, 150, 1–29. [Google Scholar] [CrossRef]
  29. Bai, Z.; Silverstein, J.W. Spectral Analysis of Large Dimensional Random Matrices; Springer: Berlin/Heidelberg, Germany, 2010; Volume 20. [Google Scholar]
  30. Biroli, G.; Bouchaud, J.P.; Potters, M. On the top eigenvalue of heavy-tailed random matrices. Europhys. Lett. 2007, 78, 10001. [Google Scholar] [CrossRef]
  31. Laloux, L.; Cizeau, P.; Bouchaud, J.P.; Potters, M. Noise dressing of financial correlation matrices. Phys. Rev. Lett. 1999, 83, 1467. [Google Scholar] [CrossRef]
  32. Poularikas, A.D.; Grigoryan, A.M. Transforms and Applications Handbook; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  33. Watts, D.J.; Strogatz, S.H. Collective dynamics of ‘small-world’ networks. Nature 1998, 393, 440. [Google Scholar] [CrossRef] [PubMed]
  34. Keating, C.; Shadwick, W.F. A Universal Performance Measure; Technical Report; The Finance Development Center: London, UK, 2002. [Google Scholar]
  35. Leung, M.F.; Wang, J. Minimax and biobjective portfolio selection based on collaborative neurodynamic optimization. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2825–2836. [Google Scholar] [CrossRef]
  36. Leung, M.F.; Wang, J. Cardinality-constrained portfolio selection based on collaborative neurodynamic optimization. Neural Netw. 2022, 145, 68–79. [Google Scholar] [CrossRef]
Figure 1. Example of two periodic time series shifted by π/2. (a) Original time series and (b) the two time series with added complexity via Hilbert transformation in a complex plane. The blue color indicates (4), the red color indicates (5).
Figure 2. Out-of-sample performance using the rolling window methodology with 6 months in-sample and 1 month out-of-sample.
Figure 3. Portfolio turnovers using the rolling window methodology with 6 months in-sample and 1 month out-of-sample.
Table 1. Out-of-sample statistics for the various portfolios and the four strategies under analysis are reported for each rolling window methodology. The mean, standard deviation, and Sharpe ratio are provided on an annual basis. The best strategy for each computed statistic is emphasized in bold. The cells corresponding to negative Sharpe ratios are left blank.
RW 6 Months In-Sample, 1 Month Out-of-Sample

Index | Strategy | μ | σ | Skewness | Kurtosis | SR | OR
NDX | Sample | 0.150 | 0.158 | −0.388 | 12.404 | 0.953 | 1.193
NDX | Shrinkage | 0.156 | 0.158 | −0.421 | 13.869 | 0.989 | 1.203
NDX | CC | 0.170 | 0.168 | −0.432 | 15.126 | 1.014 | 1.212
NDX | CC-RMT | 0.173 | 0.169 | −0.549 | 19.508 | 1.027 | 1.217
S5ENRS | Sample | −0.020 | 0.278 | −2.044 | 54.406 |  | 0.985
S5ENRS | Shrinkage | −0.023 | 0.278 | −2.027 | 53.787 |  | 0.983
S5ENRS | CC | −0.028 | 0.286 | −2.182 | 58.677 |  | 0.980
S5ENRS | CC-RMT | −0.006 | 0.278 | −1.004 | 26.446 |  | 0.996
DJI | Sample | 0.069 | 0.135 | −0.097 | 14.955 | 0.511 | 1.103
DJI | Shrinkage | 0.061 | 0.135 | −0.109 | 15.777 | 0.452 | 1.090
DJI | CC | 0.036 | 0.140 | −0.010 | 15.197 | 0.260 | 1.051
DJI | CC-RMT | 0.075 | 0.146 | −0.435 | 25.338 | 0.514 | 1.108
IBEX | Sample | 0.011 | 0.171 | −2.202 | 30.704 | 0.065 | 1.012
IBEX | Shrinkage | 0.014 | 0.170 | −2.233 | 30.384 | 0.085 | 1.016
IBEX | CC | 0.015 | 0.178 | −2.176 | 30.847 | 0.082 | 1.016
IBEX | CC-RMT | 0.021 | 0.174 | −1.697 | 24.944 | 0.118 | 1.022

RW 12 Months In-Sample, 1 Month Out-of-Sample

Index | Strategy | μ | σ | Skewness | Kurtosis | SR | OR
NDX | Sample | 0.138 | 0.162 | −0.462 | 15.083 | 0.852 | 1.173
NDX | Shrinkage | 0.137 | 0.162 | −0.443 | 15.292 | 0.846 | 1.173
NDX | CC | 0.138 | 0.174 | −0.435 | 18.384 | 0.798 | 1.165
NDX | CC-RMT | 0.149 | 0.175 | −0.550 | 21.857 | 0.854 | 1.179
S5ENRS | Sample | 0.003 | 0.273 | −1.605 | 40.578 | 0.009 | 1.002
S5ENRS | Shrinkage | 0.000 | 0.274 | −1.623 | 40.714 | 0.001 | 1.000
S5ENRS | CC | 0.000 | 0.277 | −1.562 | 38.124 | 0.001 | 1.000
S5ENRS | CC-RMT | 0.031 | 0.284 | −0.853 | 24.782 | 0.110 | 1.021
DJI | Sample | 0.056 | 0.140 | −0.348 | 17.637 | 0.403 | 1.081
DJI | Shrinkage | 0.055 | 0.139 | −0.249 | 16.768 | 0.393 | 1.079
DJI | CC | 0.036 | 0.142 | −0.234 | 14.925 | 0.256 | 1.050
DJI | CC-RMT | 0.052 | 0.150 | −0.467 | 26.307 | 0.349 | 1.073
IBEX | Sample | 0.009 | 0.168 | −1.926 | 27.381 | 0.056 | 1.010
IBEX | Shrinkage | 0.010 | 0.167 | −1.966 | 26.984 | 0.059 | 1.011
IBEX | CC | 0.011 | 0.176 | −1.703 | 24.097 | 0.060 | 1.011
IBEX | CC-RMT | 0.017 | 0.179 | −1.499 | 24.244 | 0.097 | 1.018

RW 24 Months In-Sample, 1 Month Out-of-Sample

Index | Strategy | μ | σ | Skewness | Kurtosis | SR | OR
NDX | Sample | 0.144 | 0.168 | −0.515 | 18.621 | 0.857 | 1.178
NDX | Shrinkage | 0.143 | 0.168 | −0.544 | 19.674 | 0.851 | 1.178
NDX | CC | 0.152 | 0.181 | −0.428 | 22.929 | 0.840 | 1.179
NDX | CC-RMT | 0.156 | 0.179 | −0.557 | 21.381 | 0.868 | 1.184
S5ENRS | Sample | 0.038 | 0.274 | −0.903 | 24.810 | 0.139 | 1.028
S5ENRS | Shrinkage | 0.037 | 0.275 | −0.926 | 25.209 | 0.134 | 1.027
S5ENRS | CC | 0.032 | 0.284 | −1.059 | 26.698 | 0.111 | 1.022
S5ENRS | CC-RMT | 0.071 | 0.294 | −0.814 | 24.310 | 0.241 | 1.048
DJI | Sample | 0.061 | 0.142 | −0.465 | 20.465 | 0.429 | 1.089
DJI | Shrinkage | 0.056 | 0.142 | −0.463 | 20.226 | 0.392 | 1.081
DJI | CC | 0.031 | 0.145 | −0.487 | 17.511 | 0.216 | 1.043
DJI | CC-RMT | 0.071 | 0.153 | −0.457 | 28.351 | 0.467 | 1.101
IBEX | Sample | −0.013 | 0.167 | −1.924 | 27.775 |  | 0.986
IBEX | Shrinkage | −0.014 | 0.166 | −1.982 | 27.653 |  | 0.985
IBEX | CC | 0.006 | 0.175 | −1.676 | 23.275 | 0.032 | 1.006
IBEX | CC-RMT | 0.037 | 0.182 | −1.552 | 25.414 | 0.203 | 1.039

RW 24 Months In-Sample, 2 Months Out-of-Sample

Index | Strategy | μ | σ | Skewness | Kurtosis | SR | OR
NDX | Sample | 0.148 | 0.170 | −0.425 | 18.565 | 0.871 | 1.183
NDX | Shrinkage | 0.148 | 0.171 | −0.448 | 19.505 | 0.862 | 1.181
NDX | CC | 0.159 | 0.183 | −0.327 | 22.713 | 0.867 | 1.187
NDX | CC-RMT | 0.153 | 0.175 | −0.465 | 21.145 | 0.873 | 1.177
S5ENRS | Sample | 0.055 | 0.278 | −0.844 | 23.960 | 0.198 | 1.040
S5ENRS | Shrinkage | 0.054 | 0.279 | −0.868 | 24.361 | 0.194 | 1.039
S5ENRS | CC | 0.048 | 0.289 | −0.994 | 25.798 | 0.166 | 1.034
S5ENRS | CC-RMT | 0.080 | 0.299 | −0.780 | 23.468 | 0.267 | 1.053
DJI | Sample | 0.057 | 0.144 | −0.453 | 20.131 | 0.392 | 1.082
DJI | Shrinkage | 0.051 | 0.144 | −0.436 | 19.930 | 0.356 | 1.074
DJI | CC | 0.026 | 0.147 | −0.470 | 17.269 | 0.176 | 1.035
DJI | CC-RMT | 0.076 | 0.155 | −0.394 | 27.803 | 0.489 | 1.107
IBEX | Sample | −0.016 | 0.168 | −1.882 | 27.501 |  | 0.982
IBEX | Shrinkage | −0.015 | 0.169 | −1.876 | 26.742 |  | 0.983
IBEX | CC | 0.005 | 0.177 | −1.624 | 22.751 | 0.026 | 1.005
IBEX | CC-RMT | 0.038 | 0.182 | −1.548 | 25.313 | 0.211 | 1.041
Table 2. Average portfolio turnover and average modified Herfindahl index for the four portfolios under analysis across different rolling window methodologies. The strategies with a lower average turnover and a lower average modified Herfindahl index are highlighted in bold.
RW 6 Months In-Sample and 1 Month Out-of-Sample

 | Average Turnover |  |  |  | Average Modified Herfindahl |  |  | 
Strategy | NDX | S5ENRS | DJI | IBEX | NDX | S5ENRS | DJI | IBEX
Sample | 0.453 | 0.268 | 0.407 | 0.384 | 0.128 | 0.358 | 0.157 | 0.226
Shrinkage | 0.413 | 0.255 | 0.347 | 0.344 | 0.138 | 0.372 | 0.156 | 0.207
CC | 0.473 | 0.282 | 0.369 | 0.391 | 0.221 | 0.519 | 0.256 | 0.320
CC-RMT | 0.446 | 0.197 | 0.289 | 0.161 | 0.102 | 0.115 | 0.067 | 0.042

RW 12 Months In-Sample and 1 Month Out-of-Sample

 | Average Turnover |  |  |  | Average Modified Herfindahl |  |  | 
Strategy | NDX | S5ENRS | DJI | IBEX | NDX | S5ENRS | DJI | IBEX
Sample | 0.266 | 0.140 | 0.226 | 0.220 | 0.120 | 0.366 | 0.148 | 0.192
Shrinkage | 0.247 | 0.138 | 0.197 | 0.197 | 0.131 | 0.374 | 0.151 | 0.180
CC | 0.285 | 0.160 | 0.219 | 0.215 | 0.221 | 0.532 | 0.242 | 0.277
CC-RMT | 0.287 | 0.105 | 0.220 | 0.112 | 0.102 | 0.117 | 0.070 | 0.035

RW 24 Months In-Sample and 1 Month Out-of-Sample

 | Average Turnover |  |  |  | Average Modified Herfindahl |  |  | 
Strategy | NDX | S5ENRS | DJI | IBEX | NDX | S5ENRS | DJI | IBEX
Sample | 0.135 | 0.084 | 0.120 | 0.119 | 0.115 | 0.364 | 0.139 | 0.174
Shrinkage | 0.130 | 0.084 | 0.109 | 0.111 | 0.123 | 0.370 | 0.142 | 0.164
CC | 0.168 | 0.095 | 0.120 | 0.122 | 0.220 | 0.536 | 0.222 | 0.261
CC-RMT | 0.190 | 0.065 | 0.128 | 0.026 | 0.102 | 0.114 | 0.070 | 0.012

RW 24 Months In-Sample and 2 Months Out-of-Sample

 | Average Turnover |  |  |  | Average Modified Herfindahl |  |  | 
Strategy | NDX | S5ENRS | DJI | IBEX | NDX | S5ENRS | DJI | IBEX
Sample | 0.207 | 0.146 | 0.186 | 0.194 | 0.115 | 0.359 | 0.140 | 0.175
Shrinkage | 0.202 | 0.147 | 0.170 | 0.186 | 0.124 | 0.366 | 0.143 | 0.166
CC | 0.258 | 0.157 | 0.189 | 0.191 | 0.222 | 0.533 | 0.222 | 0.262
CC-RMT | 0.266 | 0.105 | 0.183 | 0.045 | 0.102 | 0.109 | 0.073 | 0.012
