1. Introduction
Cryptocurrency is of particular importance for diversification purposes, because the stock market is currently witnessing an unprecedented level of dominance by giant corporations, particularly those in the technology sector. These corporations, often referred to as “the Magnificent Seven” (
Phillips 2024), including Apple, Microsoft, Alphabet, Amazon, Nvidia, Tesla, and Meta, along with other significant players like Berkshire Hathaway and Eli Lilly, have increasingly become the driving force behind the stock market’s performance. However, this concentration of power among a few key companies has raised concerns among analysts, who warn of potential risks associated with such dominance. The top 10 mega-cap companies in the S&P 500 currently represent nearly 35% of the index’s total market capitalization (
Shalett 2023). The current level of concentration, which has not been seen since the speculative period of the New Era
1 in June 2000 (
Rekenthaler 2020), exposes investors to significant risk, especially if interest rates remain high and stock prices fall. Thus, while the current dominance of giant corporations in the stock market is not without precedent, its scale and potential implications deserve careful consideration and investigation.
Feng et al. (
2018) assert that cryptocurrencies exhibit left-tail independence and cross-tail independence with four selected stock indices, suggesting their potential to serve as significant diversifiers for the stock market, akin to gold. For
Corbet et al. (
2020), when investigating the period during the COVID-19 pandemic, digital assets not only provide diversification benefits for investors but also act as a safe haven similar to that of precious metals during historic crises. However, according to
Gambarelli et al. (
2023), the addition of a single cryptocurrency to a stock portfolio does not effectively hedge against market downturns and may increase the risk of short-term joint losses. Additionally,
Orlando (
2024) raised concerns about the speculative nature and susceptibility to manipulation of cryptocurrencies.
Entropy, derived from thermodynamics and information theory, is used in finance and economics to analyze uncertainty and dynamics in systems (
Zhou et al. 2013) or scarcity (
Kovalev 2016). In finance, entropy has been embraced as it quantifies disorder, randomness, and unpredictability, thereby assisting in risk assessment, portfolio optimization, and policy formulation. For instance, abnormal losses during the 2008 financial crisis, which were attributed to unexpected events (
Jorion 2009;
Orlando et al. 2021), can underscore the contribution of entropy as a measure of the unexpected (
Taleb 2007;
Orlando and Zimatore 2020;
Bufalo et al. 2023). Along these lines, Orlando and Lampart (2023) challenge conventional views by showing that decreasing prices align with increasing entropy, questioning the idea that diminished entropy in market crises suggests determinism. Moreover, their findings propose that bear markets tend to display higher entropy, signaling a greater probability of unexpected extreme events.
Entropy can also be viewed as a measure of distance, and coherent entropy-based risk metrics are essential tools in minimum-risk portfolio selection, offering a robust framework for assessing and managing investment risk (
Yin 2019). For instance, Philippatos and Wilson employed the mean-entropy model in their seminal contribution (
Philippatos and Wilson 1972;
Philippatos and Gressis 1975) utilizing entropy models and measures to represent an uncertain environment and derive optimal economic decisions. Entropy metrics exploit the principles of information theory to quantify uncertainty and complexity in financial markets, providing valuable insights into the risk–return trade-offs of different investment strategies (
Pichler and Schlotter 2020). By maintaining coherence and flexibility, entropy-based risk measures enable investors to construct portfolios that balance risk and return efficiently, effectively accounting for tail risks and extreme events (
Ardakani 2023). Relative entropy, pioneered by
Kullback and Leibler (
1951), has gained considerable attention in portfolio selection. An emerging advancement in risk evaluation, EVaR, shows a dual representation, emphasizing its connection to relative entropy (
Ahmadi et al. 2022). EVaR boasts several advantages: it is coherent (
Artzner et al. 1999), strongly monotone, and convex, serving as an upper bound to CVaR, which, in turn, is an upper bound to value at risk (VaR). Additionally, a generalization of EVaR has emerged in the form of RLVaR (
Cajas 2023), a coherent risk measure rooted in Kaniadakis entropy (
Kaniadakis 2006), which upper-bounds EVaR.
In this study, we carry out an extensive empirical analysis, assessing various minimum-risk models using real-world datasets, which consist of stock indexes from both developed and emerging markets, as well as Bitcoin time series. The objective of our work is threefold: to determine whether entropy-based criteria outperform other models, to study how the considered models behave in developed markets compared to emerging ones, and to analyze the effect of the introduction of cryptocurrency on portfolio performance and diversification. Here, we highlight that during the observed period, Bitcoin demonstrates an unfavorable risk profile. This disadvantage plays a crucial role in portfolio optimization, diversification, and selection.
This article is structured as follows.
Section 2 delves into the methodology and materials, detailing various definitions of entropy and explaining two portfolio entropy measures: the EVaR model and the RLVaR model. Following that, performance and risk measures are listed, and the dataset is presented. Finally, details of the simulations conducted are provided together with the portfolio optimization techniques adopted and the out-of-sample and sector analyses.
Section 3 is divided into two parts: the first presents the results of the optimal portfolios in terms of risk and performance depending on the risk aversion, while the second provides the allocation analysis. In both cases, the inclusion of the crypto security is also considered. Finally,
Section 4 concludes.
2. Methods and Materials
This section is dedicated to methods and materials. Concerning methods,
Section 2.1 presents theoretical definitions of entropy, while
Section 2.2 illustrates EVaR and RLVaR. Furthermore,
Section 2.3 outlines how the different types of entropy are incorporated into the portfolio selection framework. Finally,
Section 2.4 addresses various performance measures, including maximum drawdown, ulcer index, Sharpe ratio, Omega ratio, etc. Regarding materials,
Section 2.5 describes the datasets, including mature markets (S&P500 and Euro Stoxx 50), emerging markets (Istanbul Exchange and Bovespa), and crypto assets (Bitcoin).
Section 2.6 outlines the setup, including the hardware and software used, the optimization process, and the out-of-sample and sector analyses.
2.1. Definitions of Entropy
In this section, we explore some fundamental aspects of entropy, providing theoretical definitions as background knowledge.
2.1.1. Shannon Entropy
Shannon’s seminal work introduced entropy in information theory as a measure of the expected uncertainty of a random variable across possible outcomes (
Shannon 1948). This form of entropy is widely recognized as Shannon entropy, highlighting Shannon’s deep impact on the field. Additionally, equivalences between Shannon entropy and Kolmogorov complexity have been identified (
Leung-Yan-Cheong and Cover 1978). The concept of Shannon entropy has diverse applications across various fields including information theory (
Gray 2011), cryptography (
Koç and Özdemir 2023), machine learning (
Aljalal et al. 2022), data mining (
Holzinger et al. 2014), entanglement (
Calixto et al. 2021), etc.
Let us first provide definitions in the discrete case.
Definition 1 (Shannon entropy).
The entropy of a random variable X is
$H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_b p(x),$
where b denotes the base of the logarithm, $p(x)$ denotes the probability mass function of X, and $\mathcal{X}$ identifies the support of X. Typical choices of bases are 2, e, and 10, depending on the use. From now on, throughout the paper, we will use e as a base, hence $H(X) = -\sum_{x \in \mathcal{X}} p(x) \ln p(x)$. From the definition above, it is possible to derive some useful properties of the entropy. Consider a discrete random variable X with T states of nature, i.e., $\mathcal{X} = \{x_1, \dots, x_T\}$. For the sake of notation, let us denote $p_j = p(x_j)$, $j = 1, \dots, T$.
Continuity: the entropy is continuous with respect to the probabilities $p_j$.
Non-negativity: since $0 \le p_j \le 1$, $H(X) \ge 0$.
Minimum: following from the previous property, $H(X) = 0$ when $p_j = 1$ for a given j, and $p_i = 0$ for $i \ne j$.
Maximum: $H(X)$ is maximal when all the scenarios are equally likely, i.e., $p_j = 1/T$ for all j. Thus, in general, $H(X) \le \ln T$.
Symmetry: the order of the outcomes does not affect the value of $H(X)$.
Concavity: the entropy is concave in the probability mass function. Therefore,
$H(\lambda p + (1 - \lambda) q) \ge \lambda H(p) + (1 - \lambda) H(q), \quad \lambda \in [0, 1],$
where p and q denote two probability distributions of X.
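These properties can be checked numerically. Below is a minimal Python sketch (the paper's experiments are carried out in MATLAB; this standalone snippet is only illustrative):

```python
import math

def shannon_entropy(p, base=math.e):
    """Shannon entropy H(X) = -sum_j p_j * log_b(p_j); terms with p_j = 0 contribute 0."""
    return -sum(pj * math.log(pj, base) for pj in p if pj > 0)

# Maximum: the uniform distribution over T states attains H = log(T)
T = 4
uniform = [1.0 / T] * T
assert abs(shannon_entropy(uniform) - math.log(T)) < 1e-12

# Minimum: a degenerate distribution has zero entropy
assert shannon_entropy([1.0, 0.0, 0.0, 0.0]) == 0.0

# Symmetry: reordering the outcomes leaves H unchanged
p = [0.5, 0.3, 0.2]
assert abs(shannon_entropy(p) - shannon_entropy(list(reversed(p)))) < 1e-12
```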
Let Y be another random variable. We can define the relationship between two random variables X and Y by means of concepts such as joint entropy and conditional entropy.
2.1.2. Joint Entropy
Definition 2 (Joint entropy).
The joint entropy of two discrete random variables X and Y is
$H(X, Y) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \ln p(x, y),$
where $p(x, y)$ is the joint probability mass function of X and Y. The joint entropy measures the uncertainty of the two random variables X and Y considered together. However, there might be some instances in which a random variable, say X, is known. In this instance, the conditional entropy quantifies the entropy of a random variable Y (uncertain) while the other random variable X is known.
2.1.3. Conditional Entropy
Definition 3 (Conditional entropy).
The conditional entropy of Y given X is
$H(Y \mid X) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \ln p(y \mid x),$
where $p(y \mid x)$ is the conditional probability mass function of Y given $X = x$. From Bayes' theorem, the conditional entropy can be rewritten as
$H(Y \mid X) = H(X, Y) - H(X).$
From Definitions 2 and 3, it is possible to derive the following additional properties of entropy.
Subadditivity: $H(X, Y) \le H(X) + H(Y)$. The equality is obtained when X and Y are independent.
Comparison with individual entropies: $H(Y \mid X) \le H(Y)$.
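The chain rule $H(Y \mid X) = H(X, Y) - H(X)$ and the subadditivity property can be verified on a small joint distribution. A Python sketch follows (the 2x2 joint pmf below is a made-up example):

```python
import math

def H(p):
    """Shannon entropy of a distribution given as an iterable of probabilities."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Joint pmf p(x, y) on a 2x2 support
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
px = {x: sum(v for (a, b), v in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (a, b), v in joint.items() if b == y) for y in (0, 1)}

H_XY = H(joint.values())
H_X, H_Y = H(px.values()), H(py.values())
H_Y_given_X = H_XY - H_X  # chain rule: H(Y|X) = H(X,Y) - H(X)

assert H_XY <= H_X + H_Y + 1e-12   # subadditivity
assert H_Y_given_X <= H_Y + 1e-12  # conditioning cannot increase entropy
```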
Shannon entropy measures the uncertainty in a random variable X distributed according to a distribution p. However, sometimes it might be useful to assume X is distributed according to a distribution q, even though the actual distribution is p. Consequently, Kullback and Leibler (1951) introduced the relative entropy, also known as Kullback–Leibler divergence.
2.1.4. Relative Entropy
Definition 4 (Relative entropy, Kullback–Leibler divergence).
Given two probability distributions p and q over a discrete random variable X, the relative entropy of p with respect to q is
$D_{KL}(p \| q) = \sum_{x \in \mathcal{X}} p(x) \ln \frac{p(x)}{q(x)}.$
The relative entropy measures the inefficiency of adopting the distribution q when the actual distribution is p. Note that even though it computes how distant one distribution is from another, from a mathematical perspective it is not a distance. Indeed, the Kullback–Leibler divergence is not symmetric, i.e., in general, $D_{KL}(p \| q) \ne D_{KL}(q \| p)$, and it does not satisfy the triangle inequality.
2.1.5. Cross-Entropy
Definition 4 may be formulated using Shannon’s entropy and cross-entropy. First, let us define the latter.
Definition 5 (Cross-entropy).
Given two probability distributions p and q over a discrete random variable X, the cross-entropy of p relative to q is
$H(p, q) = -\sum_{x \in \mathcal{X}} p(x) \ln q(x).$
Therefore, the relative entropy can be restated as follows:
$D_{KL}(p \| q) = H(p, q) - H(p).$
Below, we report its main properties.
Non-negativity: due to Gibbs' inequality, the cross-entropy of p and q is always greater than or equal to the Shannon entropy of p, which translates into $D_{KL}(p \| q) \ge 0$. The equality is obtained when the two distributions are identical.
Joint convexity: $D_{KL}(p \| q)$ is jointly convex in p and q.
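The identity $D_{KL}(p \| q) = H(p, q) - H(p)$, Gibbs' inequality, and the asymmetry of the divergence can be illustrated with a short Python sketch (the two distributions are arbitrary examples):

```python
import math

def H(p):
    """Shannon entropy of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = -sum_x p(x) ln q(x)."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) = sum_x p(x) ln(p(x)/q(x))."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]

# D(p||q) = H(p, q) - H(p), and it is non-negative (Gibbs' inequality)
assert abs(kl(p, q) - (cross_entropy(p, q) - H(p))) < 1e-12
assert kl(p, q) >= 0
# Not symmetric: D(p||q) != D(q||p) in general
assert abs(kl(p, q) - kl(q, p)) > 1e-6
```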
Later,
Kaniadakis (
2002,
2006) proposed a generalization of Shannon’s entropy, known as
k-entropy or relativistic entropy.
Definition 6 (
k-entropy, relativistic entropy).
The k-entropy of a discrete random variable X is
$S_k(X) = -\sum_{x \in \mathcal{X}} p(x) \ln_k p(x),$
where $\ln_k(x) = \frac{x^{k} - x^{-k}}{2k}$ is the k-logarithm function, and $k \in (-1, 1) \setminus \{0\}$. The
k-logarithm is one of the
k-deformed functions included in (
Kaniadakis 2001), to which we refer for a more in-depth analysis of the properties of such a function. For the scope of this work, we only report the following properties:
Asymptotic behavior with respect to k: $\lim_{k \to 0} \ln_k(x) = \ln(x)$.
Strict increasing monotonicity: $\frac{d}{dx} \ln_k(x) > 0$ for $x > 0$.
Concavity: $\frac{d^2}{dx^2} \ln_k(x) < 0$ for $x > 0$.
Therefore, it is easy to notice that Shannon's entropy can be considered as a special case of k-entropy, i.e., $\lim_{k \to 0} S_k(X) = H(X)$.
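The limiting behavior of the k-logarithm, which makes Shannon entropy a special case of k-entropy, can be checked numerically. A Python sketch (tolerances chosen loosely):

```python
import math

def ln_k(x, k):
    """Kaniadakis k-logarithm: ln_k(x) = (x**k - x**(-k)) / (2*k), for 0 < |k| < 1."""
    return (x**k - x**(-k)) / (2.0 * k)

# As k -> 0, ln_k approaches the natural logarithm
for x in (0.5, 2.0, 10.0):
    assert abs(ln_k(x, 1e-6) - math.log(x)) < 1e-6

# Strictly increasing on (0, inf); ln_k(1) = 0 as for the ordinary logarithm
assert ln_k(1.0, 0.3) == 0.0
assert ln_k(1.0, 0.3) < ln_k(2.0, 0.3) < ln_k(3.0, 0.3)
```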
2.2. Risk Measures
Let $\rho: \mathcal{X} \to \mathbb{R}$ be a risk measure, i.e., a function that assigns a risk score to the random variable X over the feasible set $\mathcal{X}$. Below, we show some important properties of a risk measure $\rho$, for every $X, Y \in \mathcal{X}$:
- (R1)
Translation invariance: $\rho(X + c) = \rho(X) + c$ for every $c \in \mathbb{R}$.
- (R2)
Positive homogeneity: $\rho(\lambda X) = \lambda \rho(X)$ for every $\lambda \ge 0$.
- (R3)
Subadditivity: $\rho(X + Y) \le \rho(X) + \rho(Y)$.
- (R4)
Monotonicity: if $X \le Y$, then $\rho(X) \le \rho(Y)$.
Note that if a risk measure $\rho$ is both positively homogeneous and subadditive, then it is convex on $\mathcal{X}$. If $\rho$ satisfies all the properties listed above, then it is called coherent (see Artzner et al. (1999); Acerbi (2003)). Some of the most famous risk measures are not coherent: for example, variance (but not the standard deviation), which does not possess any of the aforementioned properties, and mean absolute deviation, which is neither monotone nor translation invariant. Another popular risk measure is the value at risk (VaR), where the VaR at a confidence level $1 - \alpha$ is
$\mathrm{VaR}_{1-\alpha}(X) = \inf\{t \in \mathbb{R} : P(X \le t) \ge 1 - \alpha\}.$
Such a measure does not satisfy the subadditivity property. Thus, in order to overcome this drawback, Rockafellar and Uryasev (2000) proposed the CVaR, also known as expected shortfall (ES) (see Acerbi and Tasche (2002)), defined as the average of the worst $100\alpha\%$ values of X, i.e., mathematically,
$\mathrm{CVaR}_{1-\alpha}(X) = \frac{1}{\alpha} \int_{0}^{\alpha} \mathrm{VaR}_{1-t}(X)\, dt.$
The CVaR satisfies all the properties from (R1) to (R4), and it is, therefore, coherent. Two other coherent risk measures are the EVaR, recently proposed by
Ahmadi-Javid (
2012);
Ahmadi-Javid and Fallah-Tafti (
2019), and the RLVaR, suggested by
Cajas (
2023). The next subsections will be dedicated to the description of these two models.
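Before turning to the two models, the VaR–CVaR ordering discussed above can be illustrated on simulated data. A minimal Python sketch using empirical (historical) estimators on a synthetic normal sample (a didactic approximation, not the paper's implementation):

```python
import math
import random

def var(losses, alpha=0.05):
    """Empirical VaR at confidence level 1 - alpha: the (1 - alpha)-quantile of losses."""
    s = sorted(losses)
    return s[math.ceil((1 - alpha) * len(s)) - 1]

def cvar(losses, alpha=0.05):
    """Empirical CVaR (expected shortfall): mean of the worst alpha-fraction of losses."""
    s = sorted(losses)
    k = max(1, math.floor(alpha * len(s)))
    return sum(s[-k:]) / k

random.seed(42)
losses = [random.gauss(0.0, 1.0) for _ in range(10_000)]
# CVaR is an upper bound to VaR at the same confidence level
assert var(losses) <= cvar(losses) <= max(losses)
```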
2.2.1. Entropic Value at Risk (EVaR)
The EVaR is a relatively new risk measure introduced by Ahmadi-Javid. Such a measure is derived from the Chernoff inequality (see Chernoff (1952)), which for a given random variable X reads
$P(X \ge a) \le e^{-za} M_X(z), \quad \forall z > 0,$
where $M_X(z) = E[e^{zX}]$ is the moment-generating function of X, and a is a constant. Solving for a the equation $e^{-za} M_X(z) = \alpha$, with $\alpha \in (0, 1]$, we have
$a_X(\alpha, z) = \frac{1}{z} \ln\!\left(\frac{M_X(z)}{\alpha}\right).$
Therefore, for each $z > 0$, $P(X \ge a_X(\alpha, z)) \le \alpha$, since $e^{-z a_X(\alpha, z)} M_X(z) = \alpha$. Formally, the authors define the EVaR as
$\mathrm{EVaR}_{1-\alpha}(X) = \inf_{z > 0} \left\{\frac{1}{z} \ln\!\left(\frac{M_X(z)}{\alpha}\right)\right\},$
which represents the smallest upper bound of VaR derived from the Chernoff inequality. As mentioned before, the EVaR satisfies all properties from (R1) to (R4), and it is, therefore, coherent. The dual representation (or robust representation) of the EVaR better shows its connection with entropy. Mathematically, we have
$\mathrm{EVaR}_{1-\alpha}(X) = \sup_{Q \in \mathcal{U}} E_Q[X],$
where $\mathcal{U} = \{Q \ll P : D_{KL}(Q \| P) \le -\ln \alpha\}$. For the proof, we refer to Ahmadi-Javid (2012). For a given value of $\alpha$, the EVaR is an upper bound not only to the VaR but to the CVaR as well. Additionally, when $\alpha = 1$, $\mathrm{EVaR}_{0}(X) = E[X]$, while $\lim_{\alpha \to 0} \mathrm{EVaR}_{1-\alpha}(X) = \mathrm{ess\,sup}(X)$, i.e., as the confidence level approaches 1, the EVaR tends to the maximal loss.
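The Chernoff-bound definition of EVaR lends itself to a direct numerical check of the chain VaR ≤ CVaR ≤ EVaR. Below is a Python sketch that estimates EVaR with the empirical moment-generating function and a coarse grid search over z (an illustrative approximation, not the cone-programming formulation used later in the paper):

```python
import math
import random

def evar(losses, alpha=0.05, z_grid=None):
    """EVaR_{1-alpha}(X) = inf_{z>0} (1/z) ln(M_X(z)/alpha), estimated with the
    empirical moment-generating function and a coarse grid over z (a sketch)."""
    if z_grid is None:
        z_grid = [0.05 * i for i in range(1, 200)]  # z in (0, 10)
    T = len(losses)
    def objective(z):
        m = sum(math.exp(z * x) for x in losses) / T  # empirical M_X(z)
        return (math.log(m) - math.log(alpha)) / z
    return min(objective(z) for z in z_grid)

random.seed(7)
losses = [random.gauss(0.0, 1.0) for _ in range(5_000)]
s = sorted(losses)
v = s[math.ceil(0.95 * len(s)) - 1]   # empirical VaR at 95%
k = int(0.05 * len(s))
c = sum(s[-k:]) / k                   # empirical CVaR at 95%
e = evar(losses)
# For the empirical distribution, EVaR upper-bounds CVaR, which upper-bounds VaR
assert v <= c <= e
```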
2.2.2. Relativistic Value at Risk
Definition 7 (Divergence functions).
A convex function $\varphi: \mathbb{R} \to \mathbb{R} \cup \{+\infty\}$ such that $\varphi(1) = 0$ is called a divergence function.
Definition 8 (φ-divergence).
Given two probability distributions p and q over a discrete random variable X, the φ-divergence of p with respect to q is
$D_\varphi(p \| q) = \sum_{x \in \mathcal{X}} q(x)\, \varphi\!\left(\frac{p(x)}{q(x)}\right).$
Based on these two definitions,
Dommel and Pichler (
2020) define a new class of risk measures called
-divergence risk measures.
Definition 9 (
-divergence risk measures).
Given a divergence function φ with convex conjugate ψ, the φ-divergence risk measure is defined as follows:
$R_{\varphi,\beta}(X) = \inf_{t > 0,\, \mu \in \mathbb{R}} \left\{ \mu + t\beta + t\, E\!\left[\psi\!\left(\frac{X - \mu}{t}\right)\right] \right\},$
where $\beta \ge 0$ is a risk aversion coefficient. Remark 1 (Coherence of
-divergence risk measures).
The functional $R_{\varphi,\beta}$ satisfies properties (R1)–(R4), and it is, therefore, coherent (see Dommel and Pichler (2020) for the proof). The class of φ-divergence risk measures always attains values between the expected value and the maximal loss, i.e.,
$E[X] \le R_{\varphi,\beta}(X) \le \mathrm{ess\,sup}(X).$
Dommel and Pichler (2020) also show the dual representation of such risk measures, which is the following:
$R_{\varphi,\beta}(X) = \sup\left\{ E_Q[X] : D_\varphi(Q \| P) \le \beta \right\},$
where the supremum is taken over probability measures Q absolutely continuous with respect to P. The relativistic value at risk is a special case of φ-divergence risk measures in which the divergence function is built on the Kaniadakis k-logarithm, and k denotes the deformation parameter of the k-logarithm function, where, in this instance, $k \in (0, 1)$. Note that, because of the asymptotic behavior of the k-logarithm function with respect to k, $\lim_{k \to 0} \mathrm{RLVaR}^{k}_{1-\alpha}(X) = \mathrm{EVaR}_{1-\alpha}(X)$; on the other hand, $\lim_{k \to 1} \mathrm{RLVaR}^{k}_{1-\alpha}(X) = \mathrm{ess\,sup}(X)$.
Finally, merging the results of Section 2.2.1 and Section 2.2.2, for a given level $\alpha$, the following inequalities are satisfied:
$\mathrm{VaR}_{1-\alpha}(X) \le \mathrm{CVaR}_{1-\alpha}(X) \le \mathrm{EVaR}_{1-\alpha}(X) \le \mathrm{RLVaR}^{k}_{1-\alpha}(X) \le \mathrm{ess\,sup}(X).$
2.3. Entropy in Portfolio Selection
In this section, we describe how the various types of entropy discussed in Section 2.1 are integrated into the portfolio selection framework. Let $p_{it}$ and $r_{it}$ denote, respectively, the realized price and the realized (linear) daily return of asset i at time t, with $i = 1, \dots, n$ (where n denotes the number of assets), and $t = 1, \dots, T$ (where T denotes the length of the time series of the returns). As often assumed in portfolio optimization (see, for example, Roman et al. (2013) and Carleo et al. (2017)), we assume equally likely scenarios, i.e., each realization has an equally likely probability of occurrence of $1/T$. Furthermore, we indicate with $w = (w_1, \dots, w_n)$ the portfolio weights vector, and with $R_t(w) = \sum_{i=1}^{n} w_i r_{it}$ the portfolio return at time t. Finally, we denote by $\mu_i$ the expected return of asset i. Here, we report the portfolio selection models based on the measures addressed in the previous section.
2.3.1. The Entropy Value-at-Risk (EVaR) Model
The first model to optimize the portfolio EVaR was that by
Ahmadi-Javid and Fallah-Tafti (
2019). The authors developed a convex program whose variables and constraints are independent of the sample size
T. In this work, we use the reformulation by
Cajas (
2021), which is a convex programming problem that has the advantage of being efficiently solvable by several software packages. Thus, the EVaR minimization problem can be formulated as follows:
$\min_{w,\, z,\, t,\, u} \; t + z \ln\!\left(\frac{1}{\alpha T}\right) \quad \text{s.t.} \quad \sum_{j=1}^{T} u_j \le z, \;\; \left(-R_j(w) - t,\; z,\; u_j\right) \in K_{\exp} \;\; \forall j, \;\; \sum_{i=1}^{n} w_i = 1, \;\; w \ge 0,$
where $K_{\exp} = \mathrm{cl}\{(x, y, z) \in \mathbb{R}^3 : y > 0,\; y\, e^{x/y} \le z\}$ is the exponential cone, as defined in Chares (2009).
2.3.2. The Relativistic Value-at-Risk (RLVaR) Model
Here we show the relativistic value-at-risk model proposed by
Cajas (
2023). The duality theorem of
Chares (
2009) allows the author to express the problem in its primal formulation, which is the following minimization problem:
where the constraints involve the three-dimensional power cone $\mathcal{P}_3^{\alpha} = \{(x, y, z) \in \mathbb{R}^3 : x^{\alpha} y^{1-\alpha} \ge |z|,\; x, y \ge 0\}$ (see Chares (2009)).
2.4. Performance Measures
The out-of-sample performance of portfolios described above is evaluated using several performance measures typically adopted in the literature (see, e.g.,
Bruni et al. (
2017);
Cesarone et al. (
2022) and references therein). Let $R^{out}_t$, with $t = 1, \dots, T^{out}$, be the out-of-sample return of the portfolio.
Mean is the daily average portfolio return, i.e., $\mu^{out} = \frac{1}{T^{out}} \sum_{t=1}^{T^{out}} R^{out}_t$. Evidently, higher values are associated with higher performances. The annualized mean is calculated as $252\, \mu^{out}$, where 252 is the number of trading days in a year.
Standard deviation, computed as $\sigma^{out} = \sqrt{\frac{1}{T^{out} - 1} \sum_{t=1}^{T^{out}} (R^{out}_t - \mu^{out})^2}$. Since it measures the portfolio risk, lower values are preferable.
Maximum drawdown. Denoting the drawdowns by
$dd_t = \frac{\max_{1 \le s \le t} W_s - W_t}{\max_{1 \le s \le t} W_s}, \quad t = 1, \dots, T^{out},$
where $W_t$ is the portfolio wealth at time t, MaxDD is defined as $\max_{1 \le t \le T^{out}} dd_t$. Lower values are preferable.
Ulcer index, computed as $UI = \sqrt{\frac{1}{T^{out}} \sum_{t=1}^{T^{out}} dd_t^2}$. This evaluates the depth and the duration of drawdowns in prices over the out-of-sample period. Lower ulcer values are associated with better portfolio performances.
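The drawdown sequence, MaxDD, and the ulcer index can be computed as follows; a Python sketch on a made-up wealth series:

```python
import math

def drawdowns(wealth):
    """dd_t = (peak_t - W_t) / peak_t, with peak_t the running maximum of wealth."""
    peak, dds = float("-inf"), []
    for w in wealth:
        peak = max(peak, w)
        dds.append((peak - w) / peak)
    return dds

def max_drawdown(wealth):
    return max(drawdowns(wealth))

def ulcer_index(wealth):
    """Root mean square of the drawdowns, penalizing deep and long drawdowns."""
    dds = drawdowns(wealth)
    return math.sqrt(sum(d * d for d in dds) / len(dds))

wealth = [100, 110, 99, 121, 88, 110]
assert abs(max_drawdown(wealth) - (121 - 88) / 121) < 1e-12
```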
The Sharpe ratio measures the gain per unit of risk and is defined as
$SR = \frac{\mu^{out} - r_f}{\sigma^{out}},$
where we set the risk-free rate $r_f = 0$ for simplicity since it does not influence the ranking among the models analyzed (Bruni et al. 2017). The larger this value, the better the performance.
The Sortino ratio is similar to the Sharpe ratio but uses another risk measure, i.e., the target downside deviation, $TDD = \sqrt{\frac{1}{T^{out}} \sum_{t=1}^{T^{out}} \left(\min\{0,\, R^{out}_t - \tau\}\right)^2}$. Therefore, the Sortino ratio is defined as
$\mathrm{Sortino} = \frac{\mu^{out} - \tau}{TDD},$
where $\tau$ is the target return, set here to $\tau = r_f = 0$. Similar to the Sharpe ratio, the higher it is, the better the portfolio performance.
The Omega ratio is defined as
$\Omega(\tau) = \frac{\int_{\tau}^{+\infty} (1 - F(x))\, dx}{\int_{-\infty}^{\tau} F(x)\, dx},$
where F is the cumulative distribution function of the out-of-sample portfolio return and $\tau$ is a given threshold. In a nutshell, Omega is the ratio between the sum of positive deviations of $R^{out}_t$ from $\tau$ and the sum of its negative deviations. Higher values of the Omega ratio are always preferred.
The Rachev ratio measures the upside potential, comparing the right and left tails. Mathematically, it is computed as
$\mathrm{Rachev} = \frac{\mathrm{CVaR}_{1-\beta}(-R^{out})}{\mathrm{CVaR}_{1-\alpha}(R^{out})},$
where $\alpha$ and $\beta$ denote the chosen left- and right-tail levels.
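A few of the ratios above (Sharpe, Sortino, and Omega, with $r_f = \tau = 0$ as in the text) in a minimal Python sketch on an arbitrary return series:

```python
import math

def sharpe(returns, rf=0.0):
    mu = sum(returns) / len(returns)
    sd = math.sqrt(sum((r - mu) ** 2 for r in returns) / (len(returns) - 1))
    return (mu - rf) / sd

def sortino(returns, target=0.0):
    mu = sum(returns) / len(returns)
    # Target downside deviation: only returns below the target contribute
    tdd = math.sqrt(sum(min(0.0, r - target) ** 2 for r in returns) / len(returns))
    return (mu - target) / tdd

def omega(returns, threshold=0.0):
    gains = sum(max(0.0, r - threshold) for r in returns)
    losses = sum(max(0.0, threshold - r) for r in returns)
    return gains / losses

rets = [0.02, -0.01, 0.03, -0.02, 0.01, 0.015]
assert sharpe(rets) > 0 and sortino(rets) > 0 and omega(rets) > 1
```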
VaR, with reference to Equation (7); the confidence level was set to 95%.
The Herfindahl index is a synthetic index used to measure risk concentration, defined as follows:
$H = \sum_{i=1}^{n} (RC_i)^2,$
where $RC_i$ is the risk contribution of the i-th asset to the total portfolio risk; in this case, the risk contribution of the i-th asset is calculated through the variance. It ranges from $1/n$ (when the risk is evenly spread over all the assets) to 1 (when the risk is concentrated in only one asset). Therefore, lower values are preferable since they are associated with more diversified portfolios. We compute the average of the Herfindahl index over the number of rebalances Q, i.e., $\bar{H} = \frac{1}{Q} \sum_{q=1}^{Q} H_q$.
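A minimal Python sketch of the Herfindahl index of (normalized) risk contributions, verifying the $1/n$ and 1 bounds stated above:

```python
def herfindahl(risk_contributions):
    """H = sum_i rc_i**2, with rc_i the risk contribution of asset i normalized to sum to 1."""
    total = sum(risk_contributions)
    shares = [rc / total for rc in risk_contributions]
    return sum(s * s for s in shares)

n = 5
assert abs(herfindahl([1.0] * n) - 1.0 / n) < 1e-12      # evenly spread risk -> 1/n
assert herfindahl([1.0, 0.0, 0.0, 0.0, 0.0]) == 1.0      # fully concentrated -> 1
```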
Turnover, i.e.,
$\mathrm{Turn} = \frac{1}{Q} \sum_{q=1}^{Q} \sum_{k=1}^{n} \left| w^{+}_{k,q} - w^{-}_{k,q} \right|,$
where $w^{+}_{k,q}$ is the portfolio weight of asset k after rebalancing, and $w^{-}_{k,q}$ is the portfolio weight before rebalancing at time q. Lower turnover values are preferable, as they imply lower rebalancing costs. We point out that this definition of portfolio turnover is a proxy of the effective one, since it evaluates only the amount of trading generated by the models at each rebalance, without considering the trades due to changes in asset prices between one rebalance and the next. In the results section, we report the annual turnover.
Transaction cost is the average expense incurred by investors when rebalancing their portfolio. It is calculated as the product of the turnover (Turn) and the sum of the average half spread (s) and the average one-way commission rate (c):
$TC = \mathrm{Turn} \cdot (s + c).$
We set s and c as in Jones (2002).
Net performance is calculated as the annualized mean return net of transaction costs.
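Turnover and the resulting transaction cost can be sketched as follows in Python; the half spread and commission values used below are illustrative placeholders, not the values the paper takes from Jones (2002):

```python
def turnover_one_rebalance(w_before, w_after):
    """Sum of absolute weight changes at a single rebalance date."""
    return sum(abs(a - b) for a, b in zip(w_after, w_before))

def avg_turnover(weight_pairs):
    """Average turnover over Q rebalances; each pair is (weights before, weights after)."""
    return sum(turnover_one_rebalance(b, a) for b, a in weight_pairs) / len(weight_pairs)

def transaction_cost(turn, half_spread, commission):
    """TC = turnover * (average half spread + one-way commission rate)."""
    return turn * (half_spread + commission)

# Two rebalances: one actual trade, one no-trade (illustrative numbers)
pairs = [([0.5, 0.5], [0.6, 0.4]), ([0.6, 0.4], [0.6, 0.4])]
assert abs(avg_turnover(pairs) - 0.1) < 1e-12  # (0.2 + 0.0) / 2
assert abs(transaction_cost(0.1, 0.0005, 0.001) - 0.1 * 0.0015) < 1e-12
```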
2.5. Datasets
In this section, we provide details about the four real-world equity datasets summarized in Table 1. These datasets comprise daily prices adjusted for dividends and stock splits, obtained from Refinitiv. The aim is to provide an overview of four distinct stock markets, encompassing both developed and developing economies. In the USA and Europe, there exists a significant level of interconnectedness. However, in the USA, the industrial base is relatively smaller compared to Europe (although this may change in the near future due to energy chaos; see
Caddle (
2023);
Ulrich (
2023)). Conversely, the financial sector and advanced industries such as information technology and biopharmaceuticals are more developed in the USA. Regarding developing markets, Turkey and Brazil serve as contrasting examples. Turkey, being relatively less rich in natural resources, relies heavily on its industrial sector and strategic geographical position between the East and the West.
The time series from the aforementioned datasets have been paired with that of Bitcoin, with data obtained from the same time interval from
CoinMarketCap (
As the crypto market operates continuously, Saturdays and Sundays were excluded to align with the trading days of the stock markets under consideration.
Here, we elaborate on the empirical analysis conducted on four real-world datasets along with one cryptocurrency asset. Specifically, building upon the methods and materials outlined above,
Section 2.6 illustrates the methodology employed for the experimental setup and the portfolio strategies analyzed.
2.6. Simulations and Set-Up
In this section, we begin by outlining the setup and simulations for optimized entropy models alongside classical ones like mean-variance, mean-CVaR, and mean-MinMax, in addition to the equally weighted portfolio. Then, we proceed with describing the sensitivity analysis across various expected target return levels for each model. Next, we detail the methodology for out-of-sample analysis, followed by an explanation of our sector analysis approach.
2.6.1. Portfolio Optimization and Selection
The optimization problems
13 and
14 were compared with a selection of classical models available in the literature: mean-variance (see
Markowitz (
1952)), mean-CVaR (using the formula linearized by
Rockafellar and Uryasev (
2020)), and mean-MinMax (as problem 1 from Young (1998)). The confidence level $(1 - \alpha)$ was set at 95% ($\alpha = 0.05$). In order to choose the value of k of the RLVaR, we performed a sensitivity analysis with respect to this parameter. Specifically, we simulated a univariate normal distribution with 30,000 observations, and we computed the EVaR (with the same confidence level) and the MaxLoss. Then, taking 99 equally spaced values of k between 0.01 and 0.99, we computed the RLVaR with the same confidence level as the EVaR for each k. Finally, we selected the value of that parameter such that the RLVaR was halfway between the EVaR and the MaxLoss, which, rounded, gives $k = 0.3$ (see Figure 1).
For each model, we considered three different levels of expected target return, providing different portfolio options based on the level of risk aversion. The results are displayed in
Section 3.
2.6.2. Hardware and Software
All procedures were implemented in MATLAB R2023b, with MOSEK support, and were run on a computer with an Intel(R) Xeon(R) CPU E5-2623 v4 @ 2.60 GHz processor and 64 GB of RAM.
2.6.3. Out-of-Sample Analysis
For the out-of-sample analysis, we employed a rolling time window approach for evaluation. Specifically, we enabled portfolio composition rebalancing within the holding period at consistent intervals. In our investigation, we designated a 2-year period (500 observations) for the sample window and 1 month (20 observations) for both the rebalancing interval and the holding period
2.
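The rolling-window scheme (500 in-sample observations, with a 20-day rebalancing interval and holding period) can be sketched as an index generator in Python; a simplified illustration of the procedure described above:

```python
def rolling_windows(T, in_sample=500, step=20):
    """Yield (train_start, train_end, test_end) index triples for a rolling
    calibration window of `in_sample` observations and a holding period of `step`."""
    start = 0
    while start + in_sample + step <= T:
        yield (start, start + in_sample, start + in_sample + step)
        start += step

wins = list(rolling_windows(T=560))
assert wins[0] == (0, 500, 520)
assert wins[1] == (20, 520, 540)
assert wins[-1][2] <= 560
```

At each step, the portfolio is optimized on the in-sample slice and its out-of-sample returns are recorded over the subsequent holding period before the window rolls forward.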
2.6.4. Sector Analysis
To perform a sector analysis of the investments selected through each risk measure under the expected return constraint, we obtained the sector composition of each index from Refinitiv (see Table 2). Note that the weights reported in the table reflect the financial market valuation of each sector and do not necessarily represent the actual weights in the economy.
As can be seen, there is a different sector composition in each market; the Brazilian market appears to be the most diversified.
4. Conclusions
At the forefront of concern is the prevailing dominance of a handful of major corporations within the stock market landscape, prompting an in-depth exploration of the potential risks associated with such concentration. Relative entropy, a divergence measure between probability distributions, has recently found broad application in portfolio selection because it aids in understanding diversification strategies. An intriguing and newly proposed risk measure, entropic value at risk (EVaR), possesses a dual representation that underlines its connection to relative entropy. EVaR boasts several advantages: it is coherent, strongly monotone, and convex, and serves as an upper bound to conditional value at risk (CVaR), which, in turn, is an upper bound to value at risk (VaR). Additionally, a generalization of EVaR has emerged in the form of relativistic value at risk (RLVaR), a coherent risk measure rooted in Kaniadakis entropy, which upper-bounds EVaR.
The objective of our work is threefold: firstly, to assess whether entropy-based criteria outperform other models; secondly, to investigate the behavior of the considered models in developed markets compared to emerging ones; and thirdly, to analyze the impact of cryptocurrency introduction on portfolio performance and diversification. To this end, through extensive empirical analysis, we evaluated several minimum-risk models using real-world datasets comprising stock indexes such as the S&P 500 for the USA, Euro Stoxx 50 for Europe, BIST 100 for Turkey, and Bovespa for Brazil, along with Bitcoin. The results indicate that entropy measures help identify optimal portfolios, particularly when risk levels are elevated and portfolios become more concentrated (
Shalett 2023). This becomes particularly crucial when returns are low and/or turnover is high, leading to negative net performances. Bitcoin, due to its unfavorable risk profile, is chosen for diversification and performance enhancement only in the case of the BIST 100. In other cases, either the optimal weight is zero (USA market) or is small. This validates the findings of
Gambarelli et al. (
2023), indicating that incorporating a single cryptocurrency into a stock portfolio fails to offer adequate hedging during market downturns and may escalate the risk of short-term joint losses. Finally, we confirm the extreme concentration of stock markets, where a few leading stocks dominate all others (
Rekenthaler 2020;
Phillips 2024).
A limitation of this analysis stems from the specific selection of stock markets (e.g., developed versus developing, large-cap versus small-cap stock indices, etc.), the sample period of the time series under consideration, and the frequency of the data. For instance,
Orlando and Bufalo (
2021) have shown a close connection between the sampling process and the distribution of returns. On the same line,
Chiang and Doong (
2001), in their analysis of the relationship between stock returns and time-varying volatility using a threshold autoregressive GARCH(1,1)-in-mean specification, find that while the null hypothesis of no asymmetric effect on conditional volatility is rejected for daily data, it cannot be rejected for monthly data. Another example of varying results based on data is related to the random walk hypothesis (RWH). While
Fama (
1970) found no evidence of patterns in stock prices when testing the RWH,
Hong (
1978),
Cooper (
1983), and
Laurence (
1986) provided support for it. Additionally, within the same stock market, the Athens Stock Exchange (ASE),
Dockery et al. (
2001) found overwhelming support for the acceptance of the random walk hypothesis using monthly data, indicating weak-form efficiency in the ASE. This finding contradicts previous studies on the ASE by
Niarchos and Georgakopoulos (
1986), who analyzed the influence of annual corporate profit reports on the ASE.
Panagiotidis (
2005), when analyzing three ASE indices, the FTSE/ASE20, the FTSE/ASE Mid 40, and the FTSE/ASE Small Cap, found strong evidence against the random walk hypothesis. Moreover, they discovered that the lower capitalization fraction of the market is more ’efficient’, as past volatility does not assist in predicting future returns. That leads to varying results based on the consideration of large- and small-capitalization stock indices. In fact, as demonstrated by
Hung et al. (
2009), the weak-form efficient market hypothesis (EMH) for both large- and small-capitalization stock indices of TOPIX and FTSE reveals support for the EMH in large-cap stocks but rejection in small-cap ones. Similar results can be found in
Varamini and Kalash (
2008) for mutual funds, in
Al-Khazali et al. (
2016) for Islamic stock indices, in
Petajisto (
2017) for exchange-traded funds (ETFs), etc.
Future research could focus on analyzing the performance of entropy-based criteria in portfolio selection across a wider range of assets, including commodities, small caps, and ETFs. Additionally, investigating risk–return performances based on market momentum, such as during periods of decline and growth, would be valuable.