#### *2.10. Regret-Aversion*

Egozcue et al. (2015) examine the optimal output of a competitive firm under price uncertainty. Instead of assuming a risk-averse firm, the authors assume that the firm is regret-averse. They find that optimal output under uncertainty is lower than under certainty, and prove that optimal output could increase or decrease as the regret factor varies.

Guo et al. (2015) investigate regret-averse firms' production and hedging behavior. They show that the separation theorem holds under regret aversion by proving that the optimal production level is independent of the degree of regret aversion. On the other hand, the authors find that the full-hedging theorem does not always hold under regret aversion, as regret-averse firms take hedged positions that differ from those of risk-averse firms in some situations. With greater regret aversion, regret-averse firms hold smaller optimal hedging positions in an unbiased futures market. Furthermore, contrary to conventional expectations, they show that banning firms from forward trading can affect their production level in either direction.

#### *2.11. Covariances and Copulas*

Chebyshev's integral inequality, also known as the covariance inequality, plays an important role in economics, finance, marketing, management, and decision-making in a wide range of cognate disciplines. Egozcue et al. (2009) derive some covariance inequalities for monotonic and non-monotonic functions. The results can be useful in many applications across these disciplines where optimal decision-making is desired.
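The monotone case can be illustrated numerically. The sketch below is a minimal Monte Carlo check of the classical inequality, namely that Cov(f(X), g(X)) ≥ 0 whenever f and g are monotonic in the same direction; it does not reproduce the sharper bounds of Egozcue et al. (2009), and the distribution and transformations are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)  # X ~ N(0, 1)

# Two nondecreasing transformations of the same X.
f, g = np.tanh(x), x**3

# Chebyshev's integral (covariance) inequality: Cov(f(X), g(X)) >= 0
# whenever f and g are both monotone in the same direction.
cov_same = np.cov(f, g)[0, 1]

# A non-monotonic transformation breaks the guarantee: for symmetric X,
# Cov(X, X**2) = 0 despite the strong dependence between X and X**2.
cov_nonmono = np.cov(x, x**2)[0, 1]

print(cov_same > 0)  # True: the covariance is positive
print(cov_nonmono)   # close to zero up to Monte Carlo error
```

The non-monotonic case is exactly why the sign of a covariance of two transformations is a genuine problem: monotonicity buys the sign for free, and without it one needs results of the kind surveyed here.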

Egozcue et al. (2010) sharpen the upper bound of a Grüss-type covariance inequality by incorporating a notion of quadrant dependence between random variables, and also using the idea of constraining the means of the random variables.

Egozcue et al. (2011b) show that Grüss-type probabilistic inequalities for covariances can be considerably sharpened when the underlying random variables are quadrant dependent in expectation (QDE). The established covariance bounds not only sharpen the classical Grüss inequality, but also improve upon Grüss-type bounds under the assumption of quadrant dependency (QD), which is stronger than QDE. The authors illustrate the general results with examples based on specially devised bivariate distributions that are QDE but not QD. Such results play important roles in decision-making under uncertainty, and particularly in areas such as economics, finance, marketing, management, insurance and cognate disciplines in which optimal decision-making is required.

A number of problems in economics, finance and insurance rely on determining the signs of the covariances of two transformations of a random variable. The classical Chebyshev's inequality offers a powerful tool for solving the problem, but assumes that the transformations are monotonic, which is not always the case in applications.

For this reason, Egozcue et al. (2011c) establish new results for determining the covariance signs and provide further insights into the area. Unlike many previous contributions, their method of analysis, which is probabilistic in nature, does not rely on the classical Hoeffding representation of the covariances or on any of its numerous extensions and generalizations.

Egozcue et al. (2012b) establish the smallest upper bound for the *p*th absolute central moment over the class of all random variables with values in a compact interval. Numerical values of the bound are calculated for the first ten integer values of *p*, and its asymptotic behaviour is derived as *p* tends to infinity. In addition, the authors establish an analogous bound for the class of all symmetric random variables with values in a compact interval. Such results play important roles in a number of areas, including actuarial science, economics, finance, marketing, management, operations research, and reliability.
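As a numerical illustration, the sketch below searches over two-point distributions on the unit interval, which are natural candidates for the extremal case (an assumption of this sketch, not a reproduction of the authors' derivation); for *p* = 2 it recovers the classical variance bound (b − a)²/4 = 1/4.

```python
import numpy as np

def two_point_moment(q, p):
    # X = 1 with probability q and X = 0 with probability 1 - q,
    # so EX = q and E|X - EX|**p = (1 - q) * q**p + q * (1 - q)**p.
    return (1 - q) * q**p + q * (1 - q)**p

# Grid search over q for the first few integer values of p.
qs = np.linspace(0.0, 1.0, 100_001)
bounds = {p: two_point_moment(qs, p).max() for p in (1, 2, 3)}
print(bounds)  # maxima at q = 1/2: 0.5, 0.25, 0.125 for p = 1, 2, 3
```

On the unit interval the search returns 1/4 for *p* = 2, matching the familiar variance bound; the exact extremal values for general *p* are the subject of the paper itself.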

It is well known that quadrant dependent (QD) random variables are also quadrant dependent in expectation (QDE). The recent literature has offered examples that establish rigorously the fact that there are QDE random variables that are not QD. The examples are based on convex combinations of specially chosen positive and negative QD copulas. Egozcue et al. (2013) establish general results that determine when convex combinations of arbitrary QD copulas yield negative or positive QD/QDE copulas. In addition to being an interesting mathematical exercise, the established results are helpful from a practical perspective when modelling insurance and financial portfolios.

#### **3. Statistical and Econometric Models**

Another suggestion is to develop statistical and econometric models in areas related to management information, decision sciences, economics, finance, and cognate disciplines. After developing mathematical models, one might consider developing related statistical and econometric models. We have developed several econometric papers related to management information, decision sciences, economics, and finance, among other areas.

#### *3.1. Portfolio Optimization*

We have developed some novel theoretical results on portfolio optimization. When the dimension of the data is large, the classical MV portfolio optimization model developed by Markowitz (1952a) has been found to suffer from serious estimation problems: substituting the sample mean and covariance matrix into the MV optimization procedure results in an optimal return estimate that departs seriously from its theoretical counterpart. Moreover, the corresponding portfolio allocation estimates deviate from their theoretical counterparts when the number of assets is large. We call this return estimate the "plug-in" return, and its corresponding estimate of the asset allocation the "plug-in allocation."

Bai et al. (2009a) prove that this phenomenon is normal and call it "over-prediction". In order to circumvent over-prediction, the authors use a new method by incorporating the idea of the bootstrap into the theory of a large dimensional random matrix. They develop new bootstrap-corrected estimates for the optimal return and its asset allocation, and prove that these bootstrap-corrected estimates can analytically correct over-prediction and drastically reduce the error. The authors also show that the bootstrap-corrected estimate of return and its corresponding allocation estimate are proportionally consistent with their counterpart parameters.
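The over-prediction phenomenon and the flavour of the bootstrap correction can be reproduced in a few lines. The sketch below is a simplified illustration with invented parameters (identity covariance, equal means, a unit risk budget), not the estimator of Bai et al. (2009a): it computes the plug-in optimal return, shows that it overshoots the theoretical value when the number of assets is large relative to the sample size, and subtracts a crude bootstrap estimate of the overshoot.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n = 50, 150               # many assets relative to the sample size
mu = np.full(p, 0.05)        # true mean vector (invented for illustration)
sigma = np.eye(p)            # true covariance matrix

# Theoretical optimal return for risk budget sigma0:
# sigma0 * sqrt(mu' Sigma^{-1} mu).
sigma0 = 1.0
true_ret = sigma0 * np.sqrt(mu @ np.linalg.solve(sigma, mu))

X = rng.multivariate_normal(mu, sigma, size=n)
mu_hat = X.mean(axis=0)
sigma_hat = np.cov(X, rowvar=False)

# "Plug-in" return: the same formula with sample estimates substituted in.
plug_in = sigma0 * np.sqrt(mu_hat @ np.linalg.solve(sigma_hat, mu_hat))

# Bootstrap correction: the bootstrapped plug-in overshoots the plug-in
# roughly as the plug-in overshoots the truth, so subtract the estimated
# overshoot (a crude sketch of the idea, not the authors' estimator).
boot = []
for _ in range(50):
    Xb = X[rng.integers(0, n, size=n)]
    mb, sb = Xb.mean(axis=0), np.cov(Xb, rowvar=False)
    boot.append(sigma0 * np.sqrt(mb @ np.linalg.solve(sb, mb)))
corrected = plug_in - (np.mean(boot) - plug_in)

print(true_ret, plug_in, corrected)  # plug_in overshoots true_ret
```

With p/n = 1/3, the plug-in return overshoots the theoretical value substantially, which is the "over-prediction" the theory formalizes; the bootstrap step pulls the estimate back toward the truth.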

Bai et al. (2009a) propose a bootstrap-corrected estimator to correct the overestimation, but there is no closed form for their estimator. Thus, it has to be obtained by a bootstrap approach, which makes the estimation technique difficult for practitioners to adopt in practice. In order to circumvent this limitation, Leung et al. (2012) develop a new estimator of the optimal portfolio return based on an unbiased estimator of the inverse covariance matrix and its related terms, and derive explicit formulae for the estimator of the optimal portfolio return.

Bai et al. (2016a) improve on the estimation by using the spectral distribution of the sample covariance matrix. They develop the limiting behavior of the quadratic form with the spectrally corrected sample covariance matrix, and explain its superior performance relative to the sample covariance matrix as the dimension and the sample size increase to infinity proportionally. Moreover, the authors derive the limiting behavior of the expected return and risk of the spectrally corrected MV portfolio, and illustrate the superior properties of the spectrally corrected MV portfolio in practice.

In simulations, they compare the spectrally corrected estimates with the traditional and bootstrap-corrected estimates, and show that the performance of the spectrally corrected estimates is superior in terms of both portfolio return and portfolio risk. They also compare the performance of the proposed estimation method with different optimal portfolio estimates using real S&P 500 data.

We note that portfolio optimization can be used for big data as well as for finite samples that might not be classified as big data. Bai et al. (2009a, 2009b), Leung et al. (2012), and Bai et al. (2016a) have already noted that their theory holds when the number of observations tends to infinity. Academics and practitioners can use portfolio optimization in their analyses for big data and for finite samples. The literature using portfolio optimization in theoretical and empirical analyses includes Abid et al. (2009, 2013, 2014) and Hoang et al. (2015a, 2015b), among others.

#### *3.2. Testing Investors' Behavioral Models*

Lam et al. (2010, 2012) develop a Bayesian model of excess volatility, short-term underreaction and long-term overreaction. Guo et al. (2017b) extend their model to excess volatility, short-term underreaction and long-term overreaction during financial crises. Fabozzi et al. (2013) develop three tests of the magnitude effect of short-term underreaction and long-term overreaction.

We note that the investor behavioral models developed by Lam et al. (2010, 2012) and Guo et al. (2017b) can be used for big data as well as in finite samples. Fabozzi et al. (2013) have already developed three tests and use S&P data to test for the magnitude effect of short-term underreaction and long-term overreaction. Academics and practitioners can apply the tests developed in Fabozzi et al. (2013) to broader data sets, such as international markets and over time, leading to dynamic panel data models, so that the tests can be used for big data as well as for finite samples.

Wong et al. (2018) conduct a questionnaire survey to examine whether the theory developed by Lam et al. (2010, 2012) and Guo et al. (2017b) holds empirically by studying the conservative and representative heuristics of Hong Kong small investors who adopt momentum and/or contrarian trading strategies. It is worth noting that academics and practitioners could conduct a questionnaire survey for big data as well as finite samples associated with this topic.

#### *3.3. Stochastic Dominance*

Ng et al. (2017) develop tests for stochastic dominance by translating the inference problem of stochastic dominance into parameter restrictions in quantile regressions. The test statistics are variants of the one-sided Kolmogorov–Smirnov statistic, with the standard Brownian bridge as the limiting distribution. A procedure to obtain the critical values of the proposed test statistics is provided. Simulation results show superior size and power compared with alternative procedures. They apply the method to the NASDAQ 100 and S&P 500 indexes to investigate dominance relationships before and after major turning points. The empirical results show no arbitrage opportunities between the bear and bull markets.
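For intuition, the sketch below computes the classical one-sided Kolmogorov–Smirnov ingredient mentioned above, the supremum of the difference between two empirical CDFs, on simulated data; it is a plain unscaled statistic with invented inputs, not the quantile-regression procedure of Ng et al. (2017).

```python
import numpy as np

def one_sided_ks(x, y):
    # sup_t [F_x(t) - F_y(t)]: large values are evidence against the
    # hypothesis that x first-order stochastically dominates y.
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(Fx - Fy)

rng = np.random.default_rng(1)
hi = rng.normal(0.5, 1.0, 2000)   # sample shifted upward
lo = rng.normal(0.0, 1.0, 2000)

print(one_sided_ks(hi, lo))  # near zero: consistent with hi dominating lo
print(one_sided_ks(lo, hi))  # clearly positive: lo does not dominate hi
```

In practice the statistic is rescaled and compared with critical values from the limiting Brownian bridge (or a bootstrap), which is precisely where the Ng et al. (2017) machinery comes in.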

Bai et al. (2015) derive the limiting process of stochastic dominance statistics for risk averters as well as for risk seekers, both when the underlying processes are dependent and when they are independent. They take account of the dependency of the partitions and propose a bootstrap method to determine the critical points. In addition, they illustrate the applicability of the stochastic dominance statistics for both risk averters and risk seekers by analyzing the dominance relationship between the Chinese and US stock markets for the full sample period, as well as for the sub-periods before and after crises, including the internet bubble, the sub-prime crisis, and the global financial crisis.

The empirical findings could be used to draw inferences on the preferences of risk averters and risk seekers in investing in the Chinese and US stock markets. The results also enable an examination of whether there are arbitrage opportunities in these markets, whether these markets are efficient, and whether investors are rational.

Bai et al. (2011a) develop new statistics for both prospect stochastic dominance (PSD) and Markowitz stochastic dominance (MSD) of the first three orders. These statistics provide tools to examine the preferences of investors with S-shaped utility functions in prospect theory and of investors with reverse S-shaped (RS-shaped) utility functions. They also derive the limiting distributions of the test statistics as stochastic processes, propose a bootstrap method to decide the critical points of the tests, and prove the consistency of the bootstrap tests. The authors also illustrate the applicability of their proposed statistics by examining the preferences of investors with the corresponding S-shaped and RS-shaped utility functions vis-a-vis returns on iShares, and vis-a-vis returns on traditional stocks and Internet stocks, before and after the internet bubble.

Academics and practitioners can apply stochastic dominance tests in many different areas for big data, and for finite samples that might not be characterized as big data. The literature applying stochastic dominance tests includes Fong et al. (2005, 2008), Gasbarro et al. (2007), Lean et al. (2007, 2010, 2012, 2015), Qiao et al. (2010, 2012, 2013), Chan et al. (2012), Qiao and Wong (2015), and Hoang et al. (2015a, 2015b), among others.

#### *3.4. Risk Measures*

Leung and Wong (2008) apply the technique of the repeated measures design to develop a multiple Sharpe ratio test statistic for testing the hypothesis of the equality of multiple Sharpe ratios. They also establish the asymptotic distribution of the statistic and its properties. To demonstrate the superiority of the proposed statistic over the traditional pairwise Sharpe ratio test, they illustrate their approach by testing the equality of the Sharpe ratios for eighteen iShares.

The pairwise Sharpe ratio test suggests that the performances of all 18 iShares are indistinguishable, whereas the multiple Sharpe ratio test rejects the equality of the Sharpe ratios in each year as well as for the entire sample. These empirical results imply that the 18 iShares perform differently in each year, as well as over the entire sample, with some iShares outperforming others in the market.
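As a simple point of comparison, the sketch below tests the equality of two Sharpe ratios with a paired bootstrap on simulated returns; this is a generic alternative, not the repeated-measures statistic of Leung and Wong (2008), and the return parameters are invented for illustration.

```python
import numpy as np

def sharpe(r):
    # Sharpe ratio: mean excess return per unit of standard deviation.
    return r.mean() / r.std(ddof=1)

rng = np.random.default_rng(7)
n = 2000
a = rng.normal(0.25, 1.0, n)  # fund with the higher true Sharpe ratio
b = rng.normal(0.00, 1.0, n)

# Paired bootstrap for the difference of the two Sharpe ratios.
diffs = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    diffs.append(sharpe(a[idx]) - sharpe(b[idx]))
lo_ci, hi_ci = np.percentile(diffs, [2.5, 97.5])

# Equality is rejected at the 5% level when the interval excludes zero.
print(sharpe(a) - sharpe(b), (lo_ci, hi_ci))
```

Comparing more than two funds this way requires many pairwise intervals, which is exactly the multiplicity problem the repeated-measures design of Leung and Wong (2008) is built to handle.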

Recent results in optimal stopping theory have shown that a 'bang-bang' (buy or sell immediately) style of trading strategy is, in some sense, optimal, provided that the asset price dynamics follow certain familiar stochastic processes. Wong et al. (2012) construct a reward-to-variability ratio (specifically, the mixed Sharpe ratio) that is sufficient for implementing the strategy.

The use of the novel ratio for optimal portfolio selection is discussed, and evidence for it varying over time is established. The performances of the 'bang-bang' and 'buy-and-hold' trading strategies are compared, and the former is found to be significantly more profitable.

Bai et al. (2011c) develop the mean–variance-ratio statistic to test the equality of mean–variance ratios, and prove that the proposed statistic is uniformly most powerful unbiased. In addition, they illustrate the applicability of the proposed test by comparing the performances of stock indices.

Thereafter, Bai et al. (2012) propose and develop mean–variance-ratio (MVR) statistics for comparing the performance of prospects after the effect of the background risk has been mitigated. They investigate the performance of the statistics in large and small samples and show that, in the non-asymptotic framework, the MVR statistic produces a uniformly most powerful unbiased (UMPU) test.

The authors discuss the applicability of the MVR test in the case of large samples, and illustrate its superiority in the case of small samples by analyzing the Korea and Singapore stock returns after the impact of the US stock returns (which is viewed as the background risk) has been deducted. They find, in particular, that when samples are small, the MVR statistic can detect differences in asset performance while the Sharpe ratio, which is the mean-standard-deviation-ratio statistic, may not be able to do so.
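The definitional difference is easy to see in code: the Sharpe ratio divides the mean by the standard deviation, whereas the MVR divides it by the variance. The numbers below are invented; the sketch only computes the two ratios and is not the UMPU test itself.

```python
import numpy as np

def sharpe(r):
    # Mean / standard deviation: the Sharpe ratio.
    return r.mean() / r.std(ddof=1)

def mvr(r):
    # Mean / variance: the mean-variance ratio.
    return r.mean() / r.var(ddof=1)

# Two small samples of monthly returns (invented numbers).
a = np.array([0.03, 0.01, 0.02, 0.00, 0.02, 0.01])     # low volatility
b = np.array([0.15, -0.10, 0.12, -0.08, 0.10, -0.07])  # high volatility

print(sharpe(a), sharpe(b))  # reward per unit of standard deviation
print(mvr(a), mvr(b))        # reward per unit of variance
```

Because the variance is the square of the standard deviation, the MVR penalizes volatility more heavily; the statistical point of Bai et al. (2012) is that, unlike the Sharpe ratio test, the MVR test remains exact and uniformly most powerful unbiased in small samples like these.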

Academics and practitioners can apply different risk measures estimators and test statistics in many different areas in the presence of big data, and large finite samples that might not be classified as big data. The literature in applying different risk measures estimators and test statistics includes Gasbarro et al. (2007), Lean et al. (2007, 2010, 2012, 2015), Chan et al. (2012), Qiao et al. (2012, 2013), Bai et al. (2013), Qiao and Wong (2015), Hoang et al. (2015a, 2015b), among many others.

#### *3.5. Economic and Financial Indicators*

We have developed financial indicators and have applied some economic indicators to examine several important economic issues. For example, Wong et al. (2001) develop a new financial indicator to test the performance of stock market forecasts by using E/P ratios and bond yields. They also develop two test statistics to use with the indicator and illustrate the tests empirically in several stock markets. They show that the forecasts generated from the indicator would enable investors to escape most of the crashes and catch most of the bull runs. The trading signals provided by the indicator can also generate profits that are significantly superior to the buy-and-hold strategy.

Exploring the characteristics associated with the formation of the bubbles that occurred in the Hong Kong stock market in 1997 and 2007 and the 2000 dot-com bubble in the Nasdaq, McAleer et al. (2016) establish trading rules that not only produce returns significantly greater than the buy-and-hold strategy, but also produce greater wealth compared with technical analysis (TA) strategies without trading rules. They conclude that the bubble detection signals help investors generate greater wealth by applying appropriate long and short Moving Average strategies.
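For readers unfamiliar with such strategies, the sketch below implements a generic moving average crossover, going long when the short-window average is above the long-window average, with invented prices and window lengths; it is not the bubble detection rule of McAleer et al. (2016).

```python
import numpy as np

def ma_signal(prices, short=3, long=5):
    # Long (1) when the short moving average exceeds the long one,
    # short/out (-1) otherwise. Windows are aligned on their end dates.
    def ma(w):
        return np.convolve(prices, np.ones(w) / w, mode="valid")
    s, l = ma(short)[long - short:], ma(long)
    return np.where(s > l, 1, -1)

# Invented price path: a run-up followed by a decline.
prices = np.array([100, 102, 104, 103, 105, 107, 106, 104, 101, 99.0])
print(ma_signal(prices))  # [ 1  1  1  1 -1 -1]
```

The published rules condition these long/short positions on a separate bubble detection signal, which is what distinguishes them from plain TA strategies.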

Chong et al. (2017) develop a new market sentiment index for the Hong Kong stock market, one of the largest stock markets in the world, using the turnover ratio, short-selling volume, money flow, the Hong Kong Interbank Offered Rate (HIBOR), the returns of the US and Japanese markets, and the Shanghai and Shenzhen Composite indices.

Thereafter, they incorporate the threshold regression model with the sentiment index as a threshold variable to capture the state of the Hong Kong stock market. The authors find that a practical trading rule that sells (buys) the Hang Seng Index (HSI) or the S&P/HKEx LargeCap Index when the sentiment index is above (below) the upper (lower) threshold value can beat the buy-and-hold strategy.

Sethi et al. (2018) examine the sectoral impact of disinflationary monetary policy by calculating sacrifice ratios for several OECD (Organisation for Economic Co-operation and Development) and non-OECD countries. Sacrifice ratios calculated through the episode method reveal that disinflationary monetary policy has a differential impact across the three sectors in both OECD and non-OECD countries. Of the three sectors, the industry and service sectors show significant output losses due to tight monetary policy in both OECD and non-OECD countries.

The agricultural sector shows a differential impact of disinflation policy: a negative sacrifice ratio in OECD countries, indicating that output growth is insignificantly affected by tight monetary policy, whereas in non-OECD countries it yields positive sacrifice ratios, suggesting that the output loss is significant. Furthermore, it is observed that sacrifice ratios calculated from aggregate data differ from those calculated using sectoral data.
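The episode-method calculation itself is a one-liner. The sketch below uses invented numbers, an annual output-gap path over a disinflation episode and a fall in trend inflation, rather than the Sethi et al. (2018) data.

```python
import numpy as np

# Output gap (% deviation from trend) in each year of a hypothetical
# disinflation episode, and the fall in trend inflation over the episode.
output_gap = np.array([-0.5, -1.2, -0.8, -0.3])
infl_start, infl_end = 8.0, 4.0   # trend inflation (%), start and end

# Sacrifice ratio: cumulative output loss per percentage point of
# disinflation achieved over the episode.
sacrifice_ratio = -output_gap.sum() / (infl_start - infl_end)
print(sacrifice_ratio)  # 0.7: 0.7% of output lost per point of disinflation
```

A negative ratio, as reported for OECD agriculture, arises when output is above trend during the episode, so that the numerator changes sign.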

Financial and economic indicators can be used for big data and for large data sets that might not be classified as big data. For example, Wong et al. (2001) use their indicator to test the markets of the USA, UK, Japan, Germany, and Singapore, which is not especially big data. Academics and practitioners could use it to test stock markets for a large number of international markets using dynamic panel data models, and they can use it to test not only stock markets but also other financial products, including big data sets.

Similarly, Sethi et al. (2018) apply the sacrifice ratios to examine the sectoral impact of disinflationary monetary policy for several OECD and non-OECD countries. This is not especially associated with big data. However, academics and practitioners can apply the sacrifice ratios to examine the sectoral impact of disinflationary monetary policy for a large number of countries worldwide, which would be classified as a large data set.
