Article

Expectation and Optimal Allocations in Existential Contests of Finite, Heavy-Tail-Distributed Outcomes

Exsuperatus, Sarasota, FL 34231, USA
Mathematics 2024, 12(1), 11; https://doi.org/10.3390/math12010011
Submission received: 7 November 2023 / Revised: 13 December 2023 / Accepted: 17 December 2023 / Published: 20 December 2023

Abstract

Financial time series and other human-driven, non-natural processes are known to exhibit fat-tailed outcome distributions. That is, such processes demonstrate a greater tendency for extreme outcomes than the normal distribution or other natural distributional processes would predict. We examine the mathematical expectation, or simply "expectation", traditionally defined since the seventeenth century as the probability-weighted outcome. However, when considering the "expectation" of an individual confronted with a finite sequence of outcomes, particularly existential outcomes (e.g., a trader with a limited time to perform or lose his position in a trading operation), we find this individual "expects" the median terminal outcome over those finite trials, with the classical seventeenth-century definition being the asymptotic limit as trials increase. Since such finite-sequence "expectations" often differ in value from the classic one, so do the optimal allocations (e.g., growth-optimal). We examine these for fat-tailed distributions. The focus is on implementation, and the techniques described can be applied to all distributional forms. We make no assertion that the empirical data for any financial time series comport to the generalized hyperbolic distribution (GHD), which we will use as a proxy for any heavy-tailed distribution herein. Rather, we have selected the GHD to highlight the process for determining expectation and other important time-dependent metrics in existential contests, using the GHD as a generic proxy for the specific distributional form an implementor of the presented technique might ascribe to the empirical data.

1. Introduction and Review of the Literature

In order to adequately apply methods of non-asymptotic expectation and allocation to fat-tailed-distributed outcomes, we must examine the history of all three of these facets. First, we will review the history of fat-tailed distributions in economic time series, leading to the proposal of the generalized hyperbolic distribution (GHD). Next, we will explore the history of mathematical expectation as well as optimal allocation methods, which we will apply to stable fat-tailed-distributed outcomes.

1.1. Fat-Tailed Distributions in Economic Time Series—History

While explicit mentions of fat-tailed distributions in the financial price context may have been limited prior to the cited references, observations and discussions regarding market fluctuation irregularities and extreme event occurrences existed in financial history.
Historical examples, such as the financial crises and market panics occurring throughout history, provide insights into understanding market dynamics and extreme event presence. Documented observations of events, such as the 17th century Tulip Mania and 18th century South Sea Bubble, offer glimpses into the irregular, non-normal behaviors exhibited in financial prices.
The field of speculative trading and risk management, with roots in ancient civilizations, involved assessing trading risks and managing uncertainties in commodity prices and exchange rates. While formal conceptualization of fat-tailed distributions may not have been explicitly articulated during these early periods, experiences and observations related to market irregularities and extreme price movements contributed to gradually understanding the complexities and non-standard behaviors present in financial markets and asset prices.
Exploring historical accounts of financial crises and the evolution of trading practices in ancient and medieval civilizations provides valuable insights into the early recognition of irregular market behaviors and consideration of extreme events in financial transactions and risk management.
One of the early explicit mentions of fat-tailed distributions in the financial context is found in the work of economist Alfred Marshall. His contributions to supply–demand theory and goods pricing included discussions on market price variability and economic outlier occurrences. While Marshall’s work focused on microeconomic principles, his insights into market fluctuation nature provided a preliminary understanding of heavy-tailed distribution’s potential presence in economic phenomena.
The field of actuarial science and insurance, predating modern finance theory, involved risk analysis and extreme event occurrence, laying foundations for understanding tail risk and accounting for rare but significant events in risk management.
While these earlier references did not explicitly focus on fat-tailed distributions in finance, they contributed to understanding variability and risk in economic/financial systems and provided insights leading to modern heavy-tailed distribution theories and finance applications.
Marshall’s 1890 “Principles of Economics” [1] laid foundations for modern microeconomic theory and analysis. While focused on general economic principles, it included discussions on market dynamics, supply–demand, and price fluctuation factors.
Later, in his 1919 work “Industry and Trade” [2], Marshall examined industrial organization dynamics and market functioning, providing insights into price variability and factors influencing economic fluctuations.
Marshall’s works provided insights into economic principles and market dynamics, laying groundwork for modern economic theory development. While not explicitly discussing finance-related fat-tailed distributions, they contributed to understanding market variability and economic analysis, providing context on risk and uncertainty ideas in economic systems.
Louis Bachelier’s 1900 thesis “Théorie de la spéculation” [3] delved into mathematical stock price modeling and introduced random walks, playing a significant role in efficient market hypothesis development and understanding financial price movements. While not focused on fat-tailed distributions, Bachelier’s work is considered an early reference laying groundwork for understanding financial market stochastic processes.
The work of French mathematician Paul Lévy [4,5] made significant contributions to probability theory and stochastic processes, including those related to finance. Specifically, Lévy’s research on random variable addition and general stochastic processes greatly influenced modern probability theory development and understanding stochastic dynamics across fields. Lévy’s contributions to Brownian motion theory and introduction of Lévy processes provided groundwork for understanding asset price dynamics and modeling market fluctuations. Through foundational work on topics such as Brownian motion, random processes, and stable distributions, Lévy helped establish mathematical tools and probability frameworks later applied to quantitative finance and modeling.
While explicit mentions of finance-related fat-tailed distributions may have been limited up to this time, Bachelier and Lévy’s contributions provided important insights into the probabilistic nature of finance and asset price movement governing processes.
In Feller’s work [6], we begin to find discussions on heavy-tailed distributions and their applications across fields. Feller’s work contributed to understanding heavy-tailed distribution.
Additionally, the notion of fat-tailed distributions in economics/finance was discussed by prominent statisticians and economists. Notably, Vilfredo Pareto [7] introduced the Pareto distribution in the early 20th century, closely related to the stable Pareto distribution, though it may not have been directly applied to price data then.
The concept of stable Pareto distributions in the finance context was notably introduced by mathematician Benoit Mandelbrot in his influential fractal and finance modeling work. Mandelbrot extensively discussed heavy-tailed distribution’s presence and implications for finance modeling.
In particular, Mandelbrot [8] discussed stable Pareto distributions as a potential model for certain financial data. This seminal paper laid foundations for his research on the financial market’s fractal nature and non-Gaussian model applications describing market dynamics.
Additionally, Mandelbrot [9] expanded on these ideas, delving into stable distribution concepts and their significance in understanding market fluctuations and risk dynamics.
Differences between the Pareto distribution, originally described by Vilfredo Pareto, and the stable Pareto distribution, referred to by Benoit Mandelbrot in 1963, lie in their specific characteristics and properties.
The Pareto distribution, introduced by Vilfredo Pareto in the early 20th century, is a power-law probability distribution describing wealth and income distribution in society. It exhibits a heavy tail, indicating a small number of instances with significantly higher values than the majority. It follows a specific mathematical form, reflecting the “Pareto principle” or the “80-20 rule”, suggesting that roughly 80% of effects come from 20% of causes.
The stable Pareto distribution, referred to by Benoit Mandelbrot, represents an extension of the traditional Pareto distribution. Mandelbrot’s 1963 work marks a pivotal point in applying stable distributions, including stable Pareto, to financial data and market behavior modeling. It incorporates additional parameters, allowing greater flexibility in modeling extreme events and heavy-tailed phenomena across fields such as finance and economics. The stable Pareto distribution accounts for distribution shape and scale stability across scales, making it suitable for analyzing complex systems with long-term dependencies and extreme fluctuations.
While both distributions share heavy-tailed characteristics and implications for analyzing extremes and rarities, the stable Pareto distribution, as conceptualized by Mandelbrot, provided a more nuanced and adaptable framework for modeling complex phenomena, especially in finance and other dynamic systems where traditional distributions typically fell short in capturing real-world data intricacies.

1.1.1. Subsequent Historical Milestones in Heavy-Tailed Distributions Applied to Financial Time Series

With time, many other fat-tailed modeling techniques found their way into quantitative departments. We examine several historically influential contributions within the realm of heavy-tailed distributions in finance.
Reference [10] made the area of non-Gaussian stable processes accessible to researchers, graduate students, and practitioners. This book presented the similarity between Gaussian and non-Gaussian stable multivariate distributions and introduced one-dimensional stable random variables. It also discussed the most basic sample path properties of stable processes, including sample boundedness and continuity.
The problem of parametrically specifying distributions suitable for asset-return models is reconsidered in [11]. The authors described alternative distributions, showing how they could be estimated and applied to stock-index and exchange-rate data. The book also investigates the implications for options pricing, making it a valuable resource for understanding the application of stable Paretian models in finance.
Further along the line of essential study in heavy-tailed distributions in finance, Reference [12] was a comprehensive resource that presented current research focusing on heavy-tailed distributions with specific applications in financial time series. The book covered methodological issues, including probabilistic, statistical, and econometric modeling under non-Gaussian assumptions. It also explored the applications of stable and other non-Gaussian models in finance and risk management, making it a valuable resource for finance and economics professors, researchers, and graduate students.
Finally, Reference [13] provided an accessible introduction to stable distributions for financial modeling. This article highlighted the need for better models for financial returns, as the normal (or Gaussian) model did not capture the large fluctuations seen in real assets. The author introduced stable laws, a class of heavy-tailed probability distributions that could model large fluctuations and allow more general dependence structures, offering a more accurate model for financial returns, and even focused on pitfalls in parameter estimation of such forms.

1.1.2. The Generalized Hyperbolic Distribution (GHD)

The generalized hyperbolic distribution (GHD) was introduced in [14], examining aeolian processes.
The GHD constitutes a flexible continuous probability law defined as the normal variance-mean mixture distribution, where the mixing distribution assumes the generalized inverse Gaussian form. Its probability density function can be expressed in terms of modified Bessel functions of the second kind, commonly denoted as BesselK functions in the literature. The specific functional form of the GHD density involves these BesselK functions along with model parameters governing aspects such as tail behavior, skewness, and kurtosis. While not possessing a simple closed-form analytical formula, the density function can be reliably evaluated through direct numerical calculation of the BesselK components for given parameter values. The mathematical tractability afforded by the ability to compute the GHD density and distribution function underpins its widespread use in applications spanning economics, finance, and the natural sciences.
Salient features of the GHD include closure under affine transformations and infinite divisibility. The latter property follows from its constructability as a normal variance-mean mixture based on the generalized inverse Gaussian law. As elucidated in [15], infinite divisibility bears important connections to Lévy processes, whereby the distribution of a Lévy process at any temporal point manifests infinite divisibility. While many canonical infinitely divisible families (e.g., Poisson and Brownian motion) exhibit convolution-closure, whereby the process distribution remains within the same family at all times, Podgórski et al. showed the GHD does not universally possess this convolution-closure attribute.
Owing to its considerable generality, the GHD represents the overarching class for various pivotal distributions, including the Student’s t, Laplace, hyperbolic, normal-inverse Gaussian, and variance-gamma. Its semi-heavy tail properties, unlike the lighter tails of the normal distribution, enable modeling of far-field behaviors. Consequently, the GHD finds ubiquitous applications in economics, particularly in financial market modeling and risk management contexts, where its tail properties suit the modeling of asset returns and risk metrics; by 1995, the GHD had appeared in financial market applications [16], whose authors applied the hyperbolic subclass of the GHD to fit German financial data.
This work was later extended by Prause in 1999 [17], who applied GHDs to model financial data on German stocks and American indices. Since then, the GHD has been widely used in finance and risk management to model a wide range of financial and economic data, including stock prices, exchange rates, and commodity prices. See [18,19,20,21,22,23,24,25] for applications of the GHD to economic and share price data.
The GHD has been shown to provide a more realistic description of asset returns than the other classical distributional models and has been used to estimate the risk of financial assets and to construct efficient portfolios in energy and stock markets.
The generalized hyperbolic distribution has emerged as an indispensable modeling tool in modern econometrics and mathematical finance due to its advantageous mathematical properties and empirical performance. Salient features underscoring its suitability for analyzing economic and asset price data include:
  • Substantial flexibility afforded by multiple shape parameters, enabling the GHD to accurately fit myriad empirically observed, non-normal behaviors in financial and economic data sets. Both leptokurtic and platykurtic distributions can be readily captured.
  • Mathematical tractability, with the probability density function and characteristic function expressible analytically in terms of modified Bessel functions. This facilitates rigorous mathematical analysis and inference.
  • Theoretical connections to fundamental economic concepts, such as utility maximization. Various special cases also share close relationships with other pivotal distributions, such as the variance-gamma distribution.
  • Empirical studies across disparate samples and time horizons consistently demonstrate a superior goodness-of-fit compared to normal and stable models when applied to asset returns, market indices, and other economic variables.
  • Ability to more accurately model tail risks and extreme events compared to normal models, enabling robust quantification of value-at-risk, expected shortfall, and other vital risk metrics.
  • Despite the lack of a simple analytical formula, the distribution function can be reliably evaluated through straightforward numerical methods, aiding practical implementation.
In summary, the mathematical tractability, theoretical foundations, empirical performance, and remarkable flexibility of the GHD render it effective for modeling the non-normal behaviors ubiquitous in economic and financial data sets.
Although we will be using the symmetric GHD for ease of use in demonstrating the techniques illuminated in the discussion herein (the techniques being applicable to any distributional form), and despite the GHD possessing numerous favorable characteristics, its flexibility is limited in comparison to other extensions of the normal distribution that provide improved adaptability.

1.1.3. Extensions of the Normal Distribution for Greater Flexibility

The skew two-piece, skew-normal distribution is a generalization of the normal distribution with an additional shape parameter. This distribution can model both left-skewed (negatively skewed) and right-skewed (positively skewed) data, depending on the value of the skewness parameter [26,27,28,29,30,31,32]. It reduces to the normal distribution when the skewness is equal to one, making it a more general model that can capture a wider range of data patterns [34].
This is particularly useful in economic time series, where data often exhibit asymmetry and non-normality [33]. Additionally, using a distribution that directly handles skewness can perform better than transforming the data, which may have limitations such as reduced information, difficulty in interpreting transformed variables, and no guarantee of joint normality [33].
The skew two-piece, skew-normal distribution and the skew normal distribution are not the same, but they are related. Both distributions are generalizations of the normal distribution that can accommodate skewed data, but they do so in different ways.
The skew normal distribution is a type of distribution that introduces skewness into a normal distribution. It is characterized by an asymmetry in the probability distribution of a real-valued random variable about its mean [35]. The skewness value can be positive, zero, negative, or undefined. For a unimodal distribution, negative skew commonly indicates that the tail is on the left side of the distribution, and positive skew indicates that the tail is on the right.
On the other hand, the skew two-piece, skew-normal distribution, also known as the Fernández-Steel Skew Normal distribution, is a more general model that introduces an additional shape parameter to the normal distribution. This distribution can model both left-skewed (negatively skewed) and right-skewed (positively skewed) data, depending on the value of the skewness parameter. It becomes a normal distribution when the skewness is equal to one. This distribution is particularly useful for modeling data that are concentrated in two directions in roughly equal proportions [34].
In summary, while both distributions can model skewed data, the skew two-piece, skew-normal distribution provides a more flexible model that can capture a wider range of data patterns.
Overall, the skew two-piece, skew-normal distribution provides a more flexible and accurate model for capturing the specific characteristics of economic time series data [36].
We now turn our attention to the odd log-logistic-type distributions.
The odd log-logistic normal distribution, the odd log-logistic geometric normal distribution, and the odd log-logistic skew normal distribution are all different, albeit related probability distributions.
The odd log-logistic normal distribution is a three-parameter model that is symmetric, can be platykurtic or leptokurtic, and may be unimodal or bimodal. It has been studied for its various structural properties, including explicit expressions for the ordinary and incomplete moments, generating function, and mean deviations. It has been shown to provide more realistic fits than other special regression models in certain cases [37].
The odd log-logistic geometric normal distribution underlies a regression model for asymmetric or bimodal data with support in ℝ. It is designed to model data that are not symmetric and may exhibit bimodal characteristics [38].
The odd log-logistic skew normal distribution, on the other hand, is a different distribution that combines the odd log-logistic and skew normal distributions. It is designed to model data that are not symmetric and may exhibit skewness characteristics. This distribution is particularly useful for modeling data that are concentrated in two directions in roughly equal proportions [37].
In summary, the odd log-logistic normal distribution, the odd log-logistic geometric normal distribution, and the odd log-logistic skew normal distribution are all different distributions that have been developed to model specific characteristics of data, including symmetry, kurtosis, bimodality, and skewness. Each distribution is designed for different purposes and may be more or less suitable depending on the specific characteristics of the data being modeled.

1.2. Mathematical Expectation—History

The genesis of mathematical expectation lies in the mid-17th century study of the “problem of points”, which concerns the equitable division of stakes between players forced to conclude a game prematurely. This longstanding conundrum was posed to eminent French mathematician Blaise Pascal in 1654 by compatriot Chevalier de Méré, an amateur mathematician. Galvanized by the challenge, Pascal commenced correspondence with eminent jurist Pierre de Fermat, and the two deduced identical solutions rooted in the fundamental tenet that a future gain’s value must be proportional to its probability.
Pascal’s 1654 work [39], compiled amidst his collaboration with Fermat, explores sundry mathematical concepts, including foundational probability theory and expectation calculation for games of chance. Through rigorous analysis of gambling scenarios, Pascal delved into mathematical quantification of expected values, establishing core principles that seeded probability theory’s growth and diverse applications. The “Pascal–Fermat correspondence” offers insights into the germination of mathematical expectation and its integral role in comprehending the outcomes of stochastic phenomena.
Contemporaneously in the mid-17th century, the Dutch mathematician, astronomer, and physicist Christiaan Huygens significantly advanced the field of probability and study of mathematical expectation. Building upon the contributions of Pascal and Fermat, Huygens’ 1657 work [40] incisively examined numerous facets of probability theory, with particular emphasis on games of chance and the calculation of fair expectations in contexts of gambling. He conceived an innovative methodology for determining expected value, underscoring the importance of elucidating average outcomes and long-term behaviors of random processes.
Huygens’ mathematical expectation framework facilitated subsequent scholars in expanding on his ideas to construct a comprehensive system for disentangling stochastic phenomena, quantifying uncertainty, and illuminating the nature of randomness across diverse scientific realms.
Essentially, the calculation for mathematical expectation, or “expectation”, remained unchanged for centuries, until the work in [41]. It was Vince who decided to look at a different means of calculating it to reflect an individual or group of individuals who are in an “existential” contest of finite length. Consider a young trader placed on an institution’s trading desk and given a relatively short time span to “prove himself” or be let go, or the resources used by a sports team in an “elimination-style” playoff contest, or such resources of a nation state involved in an existential war.
Such endeavors require a redefinition of the “expectation”—what the party “expects” to happen—and can readily be defined as that outcome, where half the outcomes are better or the same and half are worse or the same at the end of the specified period or number of trials.
Thus, the answer to this definition of “expectation” is no longer the classic, centuries-old definition of the probability weighted mean outcome, but instead is the mean sorted cumulative outcome over the specified number of trials.
For example, consider the prospect of a game where one might win 1 unit with a probability of 0.9, and lose −10 units with a probability of 0.1. The classical expectation is to lose −0.1 per trial.
Contrast this to the existential contest, which is only 1 trial long. Here, what one “expects” to happen is to win 1 unit. In fact, it can be shown that if one were to play and quit this game after 6 trials, one would “expect” to make 6 units (that is, half the outcomes of data distributed as such would show better or the same results, and half would show worse or the same results). After 7 trials, with these given parameters, however, the situation turns against the bettor.
Importantly, as demonstrated in [41], this non-asymptotic expectation converges to the asymptotic, centuries-old “classic” expectation in the limit as the number of trials approaches infinity. Thus, the “classic” expectation can be viewed as the asymptotic manifestation of Vince’s more general non-asymptotic expression.
This redefined “expectation” then links to optimal resource allocations in existential contests. The non-asymptotic expectation provides a modern conceptualization of “expectation” tailored to modeling human behavior in high-stakes, finite-time scenarios and, in the limit as the number of trials becomes ever-large, converges on the classical definition of mathematical expectation.

1.3. Optimal Allocations—History

The concept of geometric mean maximization originates with Daniel Bernoulli, who made the first known reference to it in 1738. Prior to that time, there is no known record, in any language, of even a generalized optimal reinvestment strategy among merchants or traders anywhere in the world. Evidently, no one formally codified the concept; if anyone contemplated it, they did not record it.
Bernoulli’s 1738 paper [42] was originally published in Latin. A German translation appeared in 1896, and it was referenced in John Maynard Keynes’ work [43]. In 1936, John Burr Williams’ paper [44], pertaining to trading in cotton, posited that one should bet on a representative price. If profits and losses are reinvested, the method of calculating this price is to select the geometric mean of all possible prices.
Bernoulli’s 1738 paper was finally translated into English in Econometrica in 1954. When game theory emerged in the 1950s, concepts were being widely examined by numerous economists, mathematicians, and academics. Against this fertile backdrop, John L. Kelly Jr. [45] demonstrated that to achieve maximum wealth, a gambler should maximize the expected value of the logarithm of his capital. This is optimal because the logarithm is additive in repeated bets and satisfies the law of large numbers. In his paper, Kelly showed how Claude Shannon’s information theory [46] could determine growth-optimal bet sizing for an informed gambler.
Maximizing the expected value of the logarithm of terminal wealth is known as the Kelly criterion. Whether Kelly knew it or not, the antecedents of his paper trace to Daniel Bernoulli; in all fairness, Bernoulli was likely not the originator either. Kelly’s paper presented this as a solution to a technological problem absent in Bernoulli’s day.
Kelly’s paper makes no reference to applying the methods to financial markets. The gambling community embraced the concepts, but applying them to various capital market applications necessitated formulaic alterations, specifically scaling for the absolute value of the worst-case potential outcome, which becomes particularly important considering fat-tailed probabilities. In fairness, neither Kelly nor Shannon were professional market traders, and the work presented did not claim applicability to finance. However, in actual practice, scaling to worst-case outcomes is anything but trivial. It is this author’s contention that Kelly and Shannon (who signed off on it [28]) would have caught this oversight (they would not have intentionally made the scope of the work less general) had the absolute value of the worst case not simply been unity, thus cloaking it. The necessary scaling was provided in [47].
In subsequent decades after Reference [45], many papers by numerous researchers expanded on geometric growth optimization strategies in capital market contexts, notably Bellman and Kalaba [48], Breiman [49], Latane [50,51], Tuttle [51], Thorp [52,53], and others. Thorp, a colleague of Claude Shannon, developed a winning strategy for Blackjack using the Kelly criterion and presented closed-form formulas to determine the Kelly fraction [52].
The idea of geometric mean maximization was also well-critiqued. Samuelson [54,55], Goldman [56], Merton [57], and others argued against universally accepting it as the investor criterion. Samuelson [54] highlighted that the Kelly fraction is only optimal asymptotically, as the number of trials approaches infinity; for finite trials it is, in fact, always sub-optimal.
The formulation that yields the growth-optimal allocation to each component in a portfolio of multiple components for a finite number of trials is provided in [41]. This approach incorporates the net present value of amounts wagered and any cash flows over the finite timespan, acknowledging that real-world outcomes typically do not manifest instantly.
Growth-optimality, however, is not always the desired criterion. The formulaic framework for determining it can be used to discern other “optimal” allocations. In Vince and Zhu’s work [58], we find two fractional allocations less than the growth-optimal allocation for maximizing the various catalogued return-to-risk ratios. This was further expanded upon by de Prado, Vince, and Zhu [59].
Finally, Reference [60] provides the case of using the formula for geometric growth maximization as a “devil’s advocate” in contexts where the outcome from one period is a function of available resources from previous periods, and one wishes to diminish geometric growth. Examples include certain biological/epidemiological applications and “runaway” cost functions, such as national debts.

2. Characteristics of the Generalized Hyperbolic Distribution (GHD)—Distributional Form and Corresponding Parameters

As discussed, although the two-piece, skew-normal distributions and the odd log-logistic-type distributions offer greater flexibility than the GHD and, therefore, are generally considered to better represent financial time series data, we will be illustrating our methods on the symmetrical GHD for simplicity.
These methods can be applied to any distributional form. The focus herein is not on the specific distributional form employed to model financial time series, but on the techniques involved in discerning a sample set of probable outcomes based on the particular distribution of an existential contest of finite length, from which numerous time-dependent metrics can be determined.
In addition to having the characteristics of being closed under affine transformations and possessing an infinitely divisible property, which enables connections to Lévy processes, the GHD provides some practical implementation advantages to the study of economic data series. Among these, the GHD possesses:
  • A more general distribution class that includes Student’s t, Laplace, hyperbolic, normal-inverse Gaussian, and variance-gamma as special cases. Its mathematical tractability provides the ability to derive other distribution properties.
  • Semi-heavy tail behavior, which allows modeling data with extreme events and fat-tailed probabilities.
  • A density function involving modified Bessel functions of the second kind (BesselK functions); although this precludes a simple closed form, the density can still be evaluated numerically in a straightforward manner.
We examine the derivation of the probability density function (PDF) of the symmetrical GHD, and then derive the first integral of the PDF with respect to the random variable to obtain the cumulative density function (CDF) of the symmetrical GHD.
A characteristic function is a mathematical tool that provides a way to completely describe the probability distribution of a random variable. It is defined as the Fourier transform of the PDF of the random variable and is particularly useful for dealing with linear functions of independent random variables. We, therefore, derive the PDF of the symmetrical GHD:
1. Starting with the characteristic function of the symmetrical GHD, as derived in [61]:
$$\varphi(t)=\exp\!\left[i\mu t+\frac{\alpha}{\delta}\,\frac{\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\left(\gamma^{2}-\delta^{2}\right)\left(1-2i\beta t-t^{2}\right)}\right)}{\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\gamma^{2}-\delta^{2}}\right)}\right]\tag{1}$$
where:
α = concentration
δ = scale
μ = location
λ = shape
γ = skewness
β = √(γ² − δ²)/δ
2. Using the inversion formula, the PDF of the GHD can be obtained from the characteristic function:
n.b. In the characteristic function provided in the paper by Klebanov and Rachev (1996) [15], t is a variable used to represent the argument of the characteristic function. It is not the same as x , which is a variable used to represent the value of the random variable being modeled by the GHD.
The characteristic function of a distribution is a complex-valued function that is defined for all values of its argument, t . It is the Fourier transform of the probability density function (PDF) of the distribution, and it provides a way to calculate the moments of the distribution and other properties, such as its mean, variance, and skewness.
In the case of the GHD, the characteristic function is given by (1).
The argument t of the characteristic function is a complex number that can take on any value. It is not the same as the value of the random variable being modeled by the GHD, which is represented by the variable x . The relationship between the characteristic function and the PDF of the GHD is given by the Fourier inversion formula:
$$f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-itx}\,\varphi(t)\,dt$$
where f ( x ) is the PDF of the GHD and x is the value of the random variable being modeled.
3. Substituting the expression for the characteristic function into the Fourier inversion formula:
$$f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-itx}\exp\!\left[i\mu t+\frac{\alpha}{\delta}\,\frac{\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\left(\gamma^{2}-\delta^{2}\right)\left(1-2i\beta t-t^{2}\right)}\right)}{\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\gamma^{2}-\delta^{2}}\right)}\right]dt$$
4. Simplifying the expression inside the exponential:
$$i\mu t+\frac{\alpha}{\delta}\,\frac{\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\left(\gamma^{2}-\delta^{2}\right)\left(1-2i\beta t-t^{2}\right)}\right)}{\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\gamma^{2}-\delta^{2}}\right)}
=i\mu t+\frac{\alpha}{\delta}\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\gamma^{2}-\delta^{2}}\right)\frac{K_{\lambda}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\right)}{K_{\lambda}\!\left(\alpha\sqrt{\gamma^{2}-\delta^{2}}\right)}\left(1-2i\beta t-t^{2}\right)$$
5. Using the identity $K_{\lambda}(z)=\frac{1}{2}\left(\frac{z}{2}\right)^{\lambda}\int_{0}^{\infty}e^{-z\cosh(u)}\cosh(\lambda u)\,du$ to simplify the expression:
$$f(x)=\frac{\alpha}{2\pi\delta}\left(\frac{\gamma}{\delta}\right)^{\lambda}\left(\frac{\delta}{\sqrt{\delta^{2}+(x-\mu)^{2}}}\right)^{\lambda-1}\int_{0}^{\infty}e^{-\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\,\cosh(u)}\cosh(\lambda u)\,du$$
6. Substituting $v=\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\,\sinh(u)$ to simplify the integral:
$$f(x)=\frac{\alpha}{\pi\delta}\left(\frac{\alpha}{2}\right)^{\lambda/2}\frac{1}{\sqrt{\delta^{2}+(x-\mu)^{2}}}\,K_{\lambda-1/2}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\right)$$
7. Finally, setting the skewness parameter γ to zero to obtain the simplified PDF of the symmetric GHD:
$$f(x)=\frac{\alpha}{\delta}\left(\delta^{2}+(x-\mu)^{2}\right)^{\lambda/2}K_{\lambda}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\right)$$
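For concreteness, the following is a minimal numerical sketch (not code from the paper) that evaluates the symmetric density exactly as written in step 7, using scipy.special.kv for the modified Bessel function of the second kind. The parameter values are illustrative, and the numerical renormalization is an added practical safeguard rather than part of the derivation.

```python
# Evaluate the symmetric GHD density of step 7 numerically (illustrative sketch).
import numpy as np
from scipy.special import kv          # modified Bessel function of the second kind
from scipy.integrate import quad

def symmetric_ghd_pdf(x, alpha, delta, mu, lam, normalize=True):
    """Density as written in step 7: (alpha/delta) * r**lam * K_lam(alpha*r)."""
    r = np.sqrt(delta**2 + (np.asarray(x, dtype=float) - mu)**2)
    f = (alpha / delta) * r**lam * kv(lam, alpha * r)
    if normalize:
        # Renormalize numerically so the curve integrates to one (practical safeguard).
        c, _ = quad(lambda t: (alpha / delta)
                    * np.sqrt(delta**2 + (t - mu)**2)**lam
                    * kv(lam, alpha * np.sqrt(delta**2 + (t - mu)**2)),
                    -np.inf, np.inf)
        f = f / c
    return f

# Illustrative parameter values only.
xs = np.linspace(-0.03, 0.03, 5)
print(symmetric_ghd_pdf(xs, alpha=2.0, delta=0.01, mu=0.0, lam=2.0))
```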
The derivation of the CDF of the symmetrical GHD is as follows:
1. Integrating the PDF from negative infinity to x to obtain the CDF:
$$F(x)=\int_{-\infty}^{x}f(t)\,dt$$
2. Substituting the PDF into the integral and simplifying:
$$F(x)=\int_{-\infty}^{x}\frac{\alpha}{\delta}\left(\delta^{2}+(t-\mu)^{2}\right)^{\lambda/2}K_{\lambda}\!\left(\alpha\sqrt{\delta^{2}+(t-\mu)^{2}}\right)dt$$
3. Substituting $u=\sqrt{\delta^{2}+(t-\mu)^{2}}$ and $du=\frac{t-\mu}{\sqrt{\delta^{2}+(t-\mu)^{2}}}\,dt$ to obtain:
$$F(x)=\int_{-\infty}^{x}\frac{\alpha}{\delta}\,u^{\lambda}K_{\lambda}(\alpha u)\,\frac{du}{t-\mu}$$
4. With the identity $K_{\lambda}(x)=\frac{1}{2}\left(\frac{x}{2}\right)^{\lambda}\int_{0}^{\infty}e^{-\frac{x}{2}\left(t+\frac{1}{t}\right)}t^{-\lambda-1}\,dt$, to obtain:
$$F(x)=\frac{\alpha}{2\delta}\left(\frac{\alpha}{2}\right)^{\lambda}\int_{-\infty}^{x}\int_{0}^{\infty}\!\!\int u^{\lambda}e^{-\frac{\alpha}{2}\left(u^{2}+v^{2}-2uv\cos(\theta)\right)}v^{-\lambda-1}\,dv\,d\theta\,\frac{du}{t-\mu}$$
5. Substituting v = u sin ( θ ) and d v = u cos ( θ ) d θ to obtain:
$$F(x)=\frac{\alpha}{2\delta}\left(\frac{\alpha}{2}\right)^{\lambda}\int_{-\infty}^{x}\int_{0}^{\pi/2}e^{-\frac{\alpha}{2}u^{2}\left(1+\sin^{2}(\theta)-2\sin(\theta)\cos(\theta)\right)}u^{2\lambda-2}\sin^{2\lambda-2}(\theta)\,d\theta\,du\,\frac{1}{t-\mu}$$
6. Finally, simplifying the expression by noting that $\sin^{2}(\theta)-2\sin(\theta)\cos(\theta)+1=\left(\sin(\theta)-\cos(\theta)\right)^{2}$ and taking the limit in $x$ to obtain:
$$\mathrm{CDF}(x)=F(x)=\frac{1}{2}\left\{1+\operatorname{sgn}(x-\mu)\left[1-\left(\frac{\delta}{\sqrt{\delta^{2}+(x-\mu)^{2}}}\right)^{\alpha}\right]^{\frac{\lambda-1}{\alpha}}\right\}$$
where:
α = concentration
δ = scale
μ = location
λ = shape
sgn(x − μ) = the sign function, the sign of (x − μ), or simply $\operatorname{sgn}(x-\mu)=\frac{x-\mu}{|x-\mu|}$.
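A corresponding sketch (again assumed, not the paper's code) evaluates this closed-form CDF directly; parameter names follow the list above and the values are illustrative.

```python
# Evaluate the closed-form symmetric GHD CDF given above (illustrative sketch).
import numpy as np

def symmetric_ghd_cdf(x, alpha, delta, mu, lam):
    x = np.asarray(x, dtype=float)
    r = np.sqrt(delta**2 + (x - mu)**2)
    core = (1.0 - (delta / r)**alpha)**((lam - 1.0) / alpha)
    return 0.5 * (1.0 + np.sign(x - mu) * core)

# The CDF equals 0.5 at the location parameter mu and rises monotonically toward 1.
print(symmetric_ghd_cdf([-0.01, 0.0, 0.01, 0.02], alpha=2.0, delta=0.01, mu=0.0, lam=2.0))
```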
Thus far, we have discussed the symmetrical GHD, where we are not accountable for skew (γ).
Implementing skew can become an unwieldy process in the GHD. Our focus is on implementation in this text, and the reader is reminded that although we present our work herein on the symmetrical GHD, it is applicable to all distributional forms, with the symmetrical GHD chosen so as to be current and relatively clear in terms of implementation of the techniques. It is the techniques that are important here—not so much the distributional form selected to demonstrate them. As such, the balance of the text will be using the symmetrical form of the GHD.

Parameter Description

The symmetric generalized hyperbolic distribution (GHD) is parameterized by four variables governing salient distributional properties:
  • The concentration parameter (α) modulates tail weight, with larger values engendering heavier tails and hence elevating the probability mass attributable to extreme deviations. α typically assumes values on the order of 0.1 to 10 for modeling economic phenomena.
  • The scale parameter (δ) controls the spread about the central tendency, with larger values yielding expanded variance and wider dispersion. Applied economic analysis often utilizes δ ranging from 0.1 to 10.
  • The location parameter (μ) shifts the distribution along the abscissa, with positive values effecting rightward translation. In economic contexts, μ commonly falls between −10 and 10.
  • The shape parameter (λ) influences peakedness and flatness, with larger values precipitating more acute modes and sharper central tendencies. λ on the order of 0.1 to 10 frequently appears in economic applications.
While these indicated ranges provide first approximations, the precise parametric tuning warrants meticulous examination of the data properties and phenomenon under investigation. Moreover, complex interdependencies between the four variables potentially exist, necessitating holistic assessment when calibrating the symmetric GHD for applied modeling tasks.

3. Parameter Estimation of the Symmetrical GHD

The task is now specified as follows. We have a set of empirical observations that represent the returns or percentage changes in an economic time series from one period to the next. These can be stock prices, economic data, the output of a trading technique, or any other time series data that comport to an assumed probability density function whose parameters we wish to fit to this sample data. (Such changes are often expressed as d0/d1 − 1, where d0 represents the most recent data and d1 the data from the previous period. In the case of outputs of a trading approach, the divisor is problematic: the results must be expressed in log terms, in terms of a percentage change, and therefore in terms of an amount put at risk to assume such results, which is generalized to the largest potential loss. The notion of “potential loss” is another Pandora’s box worthy of a separate discussion.)
In the context of finding the concentration, scale, location, and shape parameters that best fit the PDF(x) or the symmetrical GHD, we can use maximum likelihood estimation (MLE) to estimate these parameters.
Maximum likelihood estimation (MLE) stands as a widely used statistical technique for estimating probability distribution parameters based on observed data. This method involves determining parameter values that optimize the likelihood of observing the data within a given statistical model assumption. MLE can be viewed as a special instance of maximum a posteriori estimation (MAP) under the assumption of a uniform parameter prior distribution. Alternatively, it can be considered a variant of MAP that disregards the prior, resulting in an unregularized approach. The MLE estimator selects the parameter value that maximizes the probability (or probability density, in continuous cases) of observing the given data. When the parameter comprises multiple components, individual maximum likelihood estimators are defined for each component within the complete parameter’s MLE. MLE is recognized as a consistent estimator, meaning that under certain conditions and as the sample size increases, the estimates derived from MLE tend toward the true parameter values. Put differently, with larger sample sizes, the likelihood of obtaining accurate parameter estimates increases.
Mathematically, we can express MLE as follows:
1. Beginning with the likelihood function for the symmetrical GHD, which is the product of the PDF evaluated at each data point:
$$L(\theta\mid x)=\prod_{i=1}^{n}f(x_{i}\mid\theta)$$
2. Taking the natural logarithm of the likelihood function to obtain the log-likelihood function:
$$\ell(\theta\mid x)=\sum_{i=1}^{n}\ln f(x_{i}\mid\theta)$$
3. Taking the partial derivative of the log-likelihood function with respect to each parameter and setting the result equal to zero:
$$\frac{\partial\,\ell(\theta\mid x)}{\partial\theta}=0$$
4. Solving the resulting system of equations to obtain the maximum likelihood estimates of the parameters.
5. Checking the second-order conditions to ensure that the estimates are indeed maximum likelihood estimates.
6. Using the estimated parameters to obtain the best-fit symmetrical GHD.
In step 1, θ represents the vector of parameters to be estimated, which includes the concentration, scale, location, and shape parameters. In step 2, we take the natural logarithm of the likelihood function to simplify the calculations and avoid numerical issues. In step 3, we take the partial derivative of the log-likelihood function with respect to each parameter and set the result equal to zero to find the values of the parameters that maximize the likelihood function. In step 4, we solve the resulting system of equations to obtain the maximum likelihood estimates of the parameters. In step 5, we check the second-order conditions to ensure that the estimates are indeed maximum likelihood estimates. Finally, in step 6, we use the estimated parameters to obtain the best-fit symmetrical GHD.
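A minimal numerical sketch of these six steps follows (this is not the paper's code, and all names, starting values, and the synthetic sample are illustrative assumptions): the symmetric GHD density of step 7 in Section 2 is renormalized numerically so it integrates to one, the location parameter is held fixed, and scipy.optimize.minimize maximizes the log-likelihood over the remaining parameters.

```python
# MLE sketch for the symmetric GHD (illustrative; not the paper's code).
import numpy as np
from scipy.special import kv
from scipy.integrate import quad
from scipy.optimize import minimize

def neg_log_likelihood(params, data, mu):
    alpha, delta, lam = params
    if alpha <= 0 or delta <= 0:
        return np.inf                                   # keep the search in-bounds
    r = np.sqrt(delta**2 + (data - mu)**2)
    unnorm = (alpha / delta) * r**lam * kv(lam, alpha * r)
    # Normalizing constant so the fitted curve is a proper density.
    z, _ = quad(lambda t: (alpha / delta)
                * np.sqrt(delta**2 + (t - mu)**2)**lam
                * kv(lam, alpha * np.sqrt(delta**2 + (t - mu)**2)),
                -np.inf, np.inf)
    if not np.isfinite(z) or z <= 0 or np.any(unnorm <= 0):
        return np.inf
    return -(np.sum(np.log(unnorm)) - data.size * np.log(z))

def fit_symmetric_ghd(returns, mu=None, start=(50.0, 0.01, 1.0)):
    """Steps 1-6: numerically maximize the log-likelihood (location held fixed)."""
    returns = np.asarray(returns, dtype=float)
    if mu is None:
        mu = returns.mean()            # e.g., pin the location to the sample mean
    res = minimize(neg_log_likelihood, x0=np.asarray(start, dtype=float),
                   args=(returns, mu), method="Nelder-Mead")
    alpha, delta, lam = res.x
    return {"alpha": alpha, "delta": delta, "mu": mu, "lam": lam, "nll": res.fun}

# Synthetic, return-like sample purely for illustration.
rng = np.random.default_rng(0)
sample = rng.standard_t(df=4, size=250) * 0.01
print(fit_symmetric_ghd(sample))
```

In the application that follows, the location parameter would simply be pinned to the sample mean of the returns and the remaining three parameters fitted, exactly as described.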
We shall now apply the discussed method of fitting the symmetrical GHD parameters to a real-world financial time series: the daily percentage price changes of the S&P 500 Index for the first six months of 2023.
By way of this example, we obtained a time-ordered vector of daily percentage changes in the S&P 500 for the first six months of 2023. These results are given in Table 1 and serve as the sample to which we fit our symmetrical GHD parameters.
We performed the MLE best-fit analysis on our returns sample data. One of the benefits of the GHD, as mentioned earlier, is the flexibility afforded by multiple shape parameters. We do not necessarily have to fit per MLE or any other “best-fit” algorithm; we could fit by eye if we so desired, the GHD being that flexible. In our immediate case, we decided to fit but set the location parameter equal to the mean of our sample returns (0.0029048), fitting the other three parameters and arriving at the best-fit set of parameters in Table 2.
This resulted in the following PDF and CDF, in Figure 1a,b, respectively, of our sample data set.
Regarding the shape of the PDF fit to the daily price changes in the S&P 500 index over the first six months of 2023, the distribution is symmetrical solely because we used the symmetrical incarnation of the GHD. What has been fit here is the scale parameter and the shape parameter, reflected in Figure 1 for both the PDF and CDF of our sample data set. In working with the GHD, it is often necessary to use the flexibility of the parameters, in addition to a best-fit algorithm such as MLE, to obtain a satisfactory fit, not only to the sample data but also to what the implementor may be anticipating in the future. Best-fit routines such as MLE are, in actual practice, a starting point.

4. Discussion

Thus far, we have laid the structural foundation for how we will use a given probability distribution and its parameters, which are fitted to a sample set of data. The presented technique allows any distributional shape to be applied. We chose the symmetrical GHD for its ease of use. In actual practice, it will be necessary to account for skewness in the distribution of the sampled data (this has been intentionally avoided in this paper to maintain simplicity).
In actual implementation, we would use the full, non-symmetrical GHD. Deriving the PDF from the characteristic function of the GHD:
Starting with the characteristic function of the GHD:
$$\varphi(t)=\exp\!\left[i\mu t+\frac{\alpha}{\delta}\,\frac{\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\left(\gamma^{2}-\delta^{2}\right)\left(1-2i\beta t-t^{2}\right)}\right)}{\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\gamma^{2}-\delta^{2}}\right)}\right]$$
where μ is the location parameter, α is the concentration parameter, δ is the scale parameter, λ is the shape parameter, γ is the skewness parameter, and β = √(γ² − δ²)/δ.
Using the inversion formula to obtain the PDF of the GHD:
$$f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-itx}\,\varphi(t)\,dt$$
Substituting the characteristic function into the inversion formula and simplifying:
$$f(x)=\frac{\alpha}{\pi\delta}\int_{0}^{\infty}\left(\frac{\gamma}{\delta}\right)^{\lambda}K_{\lambda}\!\left(\alpha\sqrt{\left(\gamma^{2}-\delta^{2}\right)\left(1+t^{2}\right)}\right)\cos\!\left(\mu t+\beta\delta t^{2}-xt\right)dt$$
1. Changing the variable $u=\alpha\sqrt{\gamma^{2}-\delta^{2}}\,t$:
$$f(x)=\frac{\alpha}{\pi\delta}\left(\frac{\gamma}{\delta}\right)^{\lambda}\int_{0}^{\infty}K_{\lambda}\!\left(u\sqrt{1+\frac{\delta^{2}}{\alpha^{2}\left(\gamma^{2}-\delta^{2}\right)}(x-\mu)^{2}}\right)\cos\!\left(\frac{\beta\delta}{\alpha^{2}\left(\gamma^{2}-\delta^{2}\right)}(x-\mu)\,u^{2}-ux\right)du$$
2. Using the identity $K_{\lambda}(z)=\frac{1}{2}\left(\frac{z}{2}\right)^{\lambda}\int_{0}^{\infty}e^{-z\cosh(u)}\cosh(\lambda u)\,du$ to simplify the integral:
$$\mathrm{PDF}(x)=f(x)=\frac{\alpha}{\pi\delta}\left(\frac{\gamma}{\delta}\right)^{\lambda}\left(\frac{\delta}{\sqrt{\delta^{2}+(x-\mu)^{2}}}\right)^{\lambda-1}K_{\lambda-1}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\right)$$
This is the full PDF with the skewness parameter (γ) for the GHD.
The integral required to compute the CDF(x) of the GHD with skewness (γ) does not have a closed-form solution, so numerical methods must be used to compute the CDF(x). However, there are software packages, such as R 4.2.0, MATLAB R2021a 9.10, or SAS 9.4M8, that have built-in functions for computing the CDF(x) of the GHD. Once the CDF(x) is computed, the inverse CDF(x) can be obtained by solving the equation: F ( x ) = p for x , where p is a probability value between 0 and 1. This equation can be solved numerically using methods such as bisection, Newton–Raphson, or the secant method. Alternatively, software packages, such as R, MATLAB, or SAS, have built-in functions for computing the inverse CDF of the GHD.
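As a generic illustration of the numerical route described above (and not tied to any particular software package), the sketch below computes a CDF by quadrature from an arbitrary density callable and inverts it by bracketed root-finding with scipy.optimize.brentq; the Student-t density is only a stand-in so the example runs as written, and a skewed GHD density would be plugged in as pdf in its place.

```python
# Numerical CDF and inverse CDF from an arbitrary density (illustrative sketch).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import t as student_t

def cdf_from_pdf(x, pdf):
    val, _ = quad(pdf, -np.inf, x)
    return val

def inverse_cdf_from_pdf(p, pdf, lo=-10.0, hi=10.0):
    # Widen the bracket until it straddles the target probability p.
    while cdf_from_pdf(lo, pdf) > p:
        lo *= 2.0
    while cdf_from_pdf(hi, pdf) < p:
        hi *= 2.0
    return brentq(lambda x: cdf_from_pdf(x, pdf) - p, lo, hi)

pdf = student_t(df=4).pdf                  # stand-in heavy-tailed density
print(cdf_from_pdf(0.0, pdf))              # approximately 0.5
print(inverse_cdf_from_pdf(0.95, pdf))     # the 95th percentile, about 2.13
```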
Our focus is on finite sequences of outcomes from a stochastic process. Although in our discussion we make the assumption that these outcomes are independent and identically distributed (IID), we assume this for simplicity of argument only; absent this IID assumption, the probabilities need only be amended at each subsequent trial in the finite, N-length sequence of outcomes based on previous outcomes. We refer to these finite sequences as “existential contests”, realizing that they exist in the milieu of life itself, as part of a longer, “lifetime-long” sequence of stochastic outcomes, or in the milieu of many players each confronted with a finite sequence of stochastic outcomes.
The gambler wandering through the casino does not see this sequence stop and start; it simply continues as long as he continues gambling.
These are “existential contests” because we are walling off these principles of ultimate ergodicity. We are examining the single, individual player (one man, one team, one nation state—all one “individual”) whose existence hangs on the outcome of the N-length, finite sequence of outcomes—the milieu it resides in be damned. Examples of existential contests might include a young trader placed on an institution’s trading desk and given a relatively short time span to “prove himself” or be let go, the resources used by a sports team in an “elimination-style” playoff contest, or such resources of a nation state involved in an existential war. Even our gambler wandering through a Las Vegas casino, placing the minimum sized bets, who suddenly decides to wager his life savings on numbers 1–24 at the roulette wheel before heading for the airport, is now confronted with an N = 1 existential contest.
The notion of the existential contest does not fly in the face of The Optional Stopping Theorem, which still applies in the limit as N gets ever-larger. This theorem proffers that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial value. A martingale is a stochastic process that satisfies a certain property related to conditional expectations. The Optional Stopping Theorem is an important result in probability theory and has many applications in finance, economics, and other fields. The theorem is based on the idea that, if a gambler plays a fair game and can quit whenever they like, then their fortune over time is a martingale, and the time at which they decide to quit (or go broke and are forced to quit) is a stopping time. The theorem says that the expected value of the gambler’s fortune at the stopping time is equal to their initial fortune. The Optional Stopping Theorem has many generalizations and extensions, and it is an active area of research in probability theory. The theorem becomes less applicable as N approaches 0 from the right.
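As a small illustration of the theorem (an assumed simulation, not material from the paper), a fair ±1 coin-flip game is a martingale; with a bounded stopping rule, the average fortune at the stopping time comes out at the starting fortune.

```python
# Simulating the Optional Stopping Theorem for a fair game (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

def play_until_stopped(max_steps=100, barrier=3):
    fortune = 0
    for _ in range(max_steps):
        fortune += rng.choice((-1, 1))     # fair +-1 bet: a martingale
        if abs(fortune) >= barrier:        # stop at +barrier or -barrier...
            break                          # ...or after max_steps, a bounded stopping time
    return fortune

final = [play_until_stopped() for _ in range(100_000)]
print(np.mean(final))                      # close to 0, the initial fortune
```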
Not all human endeavors are regarded with the same, isotropic importance by the individual as traditional analysis assumes—some are more important than others, and some are absolutely critical to the individual, deserving of analysis in their own, critical enclosure.

4.1. The Goal of Median-Sorted Outcome after N Trials—The Process of Determining the Representative “Expected” Set of Outcomes

Let us return to our gambler in Las Vegas, ready to head for the airport, who decides to wager his life savings—half of it on the roulette ball falling on the numbers 1–12, “First Twelve” on the board, and the other half on 13–24, “Second Twelve” on the board. The payoff is 2:1. If the ball falls in any slot numbered 1–24, he will be paid 2:1 on one half of his life savings, keeping the half of his life savings wagered, and losing the other half. In effect, he has risked his entire life savings to increase it by half should the ball fall 1–24.
Here, 24 of the 38 slots will produce this result for him, so there is a 24/38, or approximately 63.2%, chance that he will see a 50% increase in his life savings. On one single play, what one “expects” is the median outcome; in this case, given the 0.632 probability of winning, he “expects” to win (in a European casino, he would have a better probability, as there is only one green zero, and his probability therefore increases to 24/37, or approximately 64.9%).
This is a critical notion here—what one “expects” to occur is that outcome, over N trials, where half of the outcomes after N trials are better, and half are worse. This is the definition of “expectation” in an existential contest.
However, as demonstrated in [41], as N increases, this definition of expectation converges on the classical one accepted since the seventeenth century: the sum of the probability-weighted outcomes. This “classical” definition of expectation is the expectation in the limit as the number of trials increases. Over a finite sequence of trials, the expectation is the median-sorted outcome, which converges to the classic definition as the number of trials, N, becomes ever greater. Thus, we can state that, in all instances, the finite-sequence expectation is the actual expectation.
Examining this further, we find the finite-sequence definition (expectation as the median-sorted outcome) is always the correct “expectation” amount, regardless of N, and that the divergence between this result and the classical definition (the sum of the probability-weighted outcomes) is a function of skewness in the distribution of outcomes.
Most games of chance offer the player a probability < 0.5 of winning, with various payoffs depending on the player’s chances, even if the payoff is even money. However, in any game where the probability of profitability on a given play is < 0.5 and the classical expectation is negative, the finite-sequence expectation will be negative as well. Convergence is rapidly attained and is never in the player’s favor for any number of plays.
Earlier, we referred to a hypothetical proposition where one might win 1 unit with a probability of 0.9 and lose −10 units with a probability of 0.1. The classical expectation is to lose −0.1 per trial.
This is in contrast to the existential contest, which is only 1 trial long. Here, what one “expects” to happen is to win 1 unit. In fact, it can be shown that if one were to play and quit this game after 6 trials, one would “expect” to make 6 units (that is, half the outcomes of data distributed as such would show better or the same results, and half would show worse or the same results). After 7 trials, with these given parameters, however, the situation turns against the bettor.
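This claim can be checked by direct enumeration. The short sketch below (not from the paper) builds the exact distribution of the terminal outcome over N IID trials of this proposition and reports its median, the smallest terminal value whose cumulative probability reaches one half; it confirms a median of +6 after 6 trials and −4 after 7 trials.

```python
# Exact median terminal outcome for the win-1 (p=0.9) / lose-10 (p=0.1) game.
from math import comb

def median_terminal_outcome(n, p_win=0.9, win=1, loss=-10):
    # Terminal outcome with k wins in n IID trials, listed in ascending order of k.
    outcomes = [(k * win + (n - k) * loss,
                 comb(n, k) * p_win**k * (1 - p_win)**(n - k))
                for k in range(n + 1)]
    cum = 0.0
    for value, prob in outcomes:
        cum += prob
        if cum >= 0.5:
            return value       # smallest terminal value whose CDF reaches 1/2
    return outcomes[-1][0]

for n in range(1, 9):
    print(n, median_terminal_outcome(n))   # e.g., 6 -> 6 and 7 -> -4
```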
We find that similar situations arise in capital markets. It is not at all uncommon for commodity traders to employ trend-following models. Such models suffer during periods of price congestion and generally profit handsomely when prices break out and enter a trend. It is not at all uncommon for such approaches to see only 15% to 30% winning trades and yet be profitable in the “long run.” Thus, an investor in such programs would be wise to have as large an “N” as possible. The early phase of being involved in such an endeavor is likely to be a losing one, yet something that would pay off well in the long run; as the finite period becomes ever longer, the finite expectation converges on the classical one.
As we are discussing finite-length outcomes—“existential contests”, we wish to:
  • Determine what these finite-length expectations are, given a probability distribution we believe the outcomes comport to, based on our sample data and the fitted parameters of that distribution.
  • Determine a likely N-length sequence of outcomes for what we “expect”; that is, a likely set of N outcomes representative of the outcome stream that would see half of the outcome streams of N length be better than, and half less than, such a stream of expected outcomes. It is this “expected, representative stream” that we will use as input to other functions, such as determining growth-optimal allocations.
Regarding point #2, returning to our hypothetical game of winning one unit with p = 0.9, and losing 10 units otherwise, here, the classical expectation is to lose 0.1 per play, with the finite expectation a function of N, and positive for the first 6 plays (assuming IID).
If one were to apply, say, the Kelly criterion, which requires a positive classical expectation, one would wager nothing. Under the finite-period expectation, however, the conclusion is precisely the opposite: the player would start out risking a “fraction” equal to 1.0.

4.2. Determining Median-Sorted Outcome after N Trials

4.2.1. Calculating the Inverse CDF

To determine the median-sorted outcome, we first need to construct the inverse CDF function. An inverse cumulative distribution function (inverse CDF), also referred to as the quantile function or the percent-point function, returns the value of the random variable at which a given cumulative probability is reached. In simpler terms, the inverse CDF allows one to determine the value at which a specified probability is attained in a given distribution.
To derive the inverse CDF of the symmetrical GHD, we first start with the CDF of the symmetrical GHD, which is given by:

F(x) = \frac{1}{2}\left\{ 1 + \operatorname{sgn}(x-\mu)\left[ 1 - \left( \frac{\delta}{\sqrt{\delta^{2}+(x-\mu)^{2}}} \right)^{\alpha} \right]^{\frac{\lambda-1}{\alpha}} \right\}

where:
α = concentration
δ = scale
μ = location
λ = shape
sgn(x − μ) = the sign function, the sign of (x − μ), i.e., \operatorname{sgn}(x-\mu) = \frac{x-\mu}{|x-\mu|}.

Rearranging the equation to isolate the term inside the curly braces:

\left[ 1 - \left( \frac{\delta}{\sqrt{\delta^{2}+(x-\mu)^{2}}} \right)^{\alpha} \right]^{\frac{\lambda-1}{\alpha}} = \frac{2\left(F(x)-\tfrac{1}{2}\right)}{\operatorname{sgn}(x-\mu)}

We raise both sides to the power of α/(λ − 1):

1 - \left( \frac{\delta}{\sqrt{\delta^{2}+(x-\mu)^{2}}} \right)^{\alpha} = \left[ \frac{2\left(F(x)-\tfrac{1}{2}\right)}{\operatorname{sgn}(x-\mu)} \right]^{\frac{\alpha}{\lambda-1}}

Solving for \sqrt{\delta^{2}+(x-\mu)^{2}}:

\sqrt{\delta^{2}+(x-\mu)^{2}} = \delta\left[ 1 - \left( \frac{2\left(F(x)-\tfrac{1}{2}\right)}{\operatorname{sgn}(x-\mu)} \right)^{\frac{\alpha}{\lambda-1}} \right]^{-1/\alpha}

Squaring both sides and solving for x, noting that 2(F(x) − 1/2)/sgn(x − μ) = 2|F(x) − 1/2|:

x = \mu + \operatorname{sgn}\!\left(F(x)-\tfrac{1}{2}\right)\,\delta\,\sqrt{ \left[ 1 - \left( 2\left|F(x)-\tfrac{1}{2}\right| \right)^{\frac{\alpha}{\lambda-1}} \right]^{-2/\alpha} - 1 }

The inverse CDF function takes a probability value, p, as input and returns the corresponding value of x. Writing p = F(x), and using the fact that SIGN(p − 0.5) = ±1 to cover the two branches (p < 1/2 and p ≥ 1/2) in a single expression:

x = F^{-1}(p) = \mu + \operatorname{SIGN}(p-0.5) \times \delta \times \sqrt{ \left[ 1 - \left( 2\,|p-0.5| \right)^{\frac{\alpha}{\lambda-1}} \right]^{-2/\alpha} - 1 }
This is our inverse CDF for the symmetrical GHD.
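As a numerical check on this derivation, the following is a minimal Python sketch (illustrative only; the function names are ours) of the symmetrical GHD CDF as stated above and the closed-form inverse just derived, using the fitted parameters of Table 2. A brute-force bisection inversion of the CDF is included as a cross-check on the closed form:

    import math

    # Parameters from Table 2 (symmetrical GHD fitted to daily S&P500 changes, H1 2023)
    ALPHA, DELTA, MU, LAM = 2.1, 0.01053813, 0.0029048, 1.64940218

    def ghd_cdf(x, a=ALPHA, d=DELTA, mu=MU, lam=LAM):
        """CDF of the symmetrical GHD, per the form used above."""
        s = 1.0 if x >= mu else -1.0
        core = 1.0 - (d / math.sqrt(d * d + (x - mu) ** 2)) ** a
        return 0.5 * (1.0 + s * core ** ((lam - 1.0) / a))

    def ghd_inv_cdf(p, a=ALPHA, d=DELTA, mu=MU, lam=LAM):
        """Closed-form inverse CDF as derived above."""
        u = 2.0 * abs(p - 0.5)
        if u == 0.0:
            return mu
        inner = (1.0 - u ** (a / (lam - 1.0))) ** (-2.0 / a) - 1.0
        return mu + math.copysign(1.0, p - 0.5) * d * math.sqrt(inner)

    def inv_by_bisection(p, lo=-1.0, hi=1.0, iters=200):
        """Numerically invert ghd_cdf by bisection (cross-check on the closed form)."""
        for _ in range(iters):
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if ghd_cdf(mid) < p else (lo, mid)
        return (lo + hi) / 2.0

    for p in (0.05, 0.25, 0.5, 0.75, 0.95):
        assert abs(ghd_inv_cdf(p) - inv_by_bisection(p)) < 1e-9
        print(p, round(ghd_inv_cdf(p), 6))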

4.2.2. Worst-Case Expected Outcome

Since we sought to construct a “representative” stream of outcomes, a stream we would “expect” to be typical (half better, half worse) of what would happen over N trials, and given the critical nature of the “worst-case outcome” in [41,47,58,59,60,62], to which we wish to apply this expected stream of outcomes, we also wished to find the point on the distribution that we would expect to see, typically, as the worst outcome over N trials.
Using the inverse CDF, we can find this point, this “worst-case expected outcome,” as the value of the inverse CDF for the input:
worst-case expected outcome = (1/N) N
We can use this value as a check against the worst outcome appearing in our representative stream of outcomes at the median-sorted outcome.

4.2.3. Calculating the Representative Stream of Outcomes from the Median-Sorted Outcome

We proceeded to determine the median-sorted outcome. We generated a large sequence of uniform random variables (0…1) and processed them via our inverse CDF to yield random outcomes from our distribution, which we took N at a time:
  • For each of the N outcomes, we determined the corresponding probability by taking the value returned by the inverse CDF as input to the CDF. Note that the uniform random number fed into the inverse CDF is not the probability of the value returned; rather, it is simply the input used to obtain a value from the distribution. To determine the probability of that value, one evaluates the CDF at the value returned by the inverse CDF; the CDF gives the probability that a random variable from the distribution is less than or equal to that value.
  • For our purposes, however, we converted any probability greater than 0.5 to its complement. We called the result p`:
    • p` = p if p ≤ 0.5
    • p` = |1 − p| if p > 0.5
  • We took M sets of N outputs, corresponding to outputs drawn from our fitted distribution as well as the corresponding p` probabilities.
  • For each of these M sets of N outcome—p` combinations, we summed the outcomes, and took the product of all of the p` values.
  • We sorted these M sets based on their summed outcomes.
  • We then took the sum of these M products of p` values. We called this the “SumOfProbabilities”.
  • We proceeded sequentially through the sorted sets, where each set has a cumulative probability, specified as the cumulative probability of the previous set in the (outcome-sorted) sequence, plus the product of p` values of that set divided by the SumOfProbabilities.
  • Finally, we sought the set whose cumulative probability was closest to 0.5. This is our median-sorted outcome, and the N values that comprise it are our typical, median-sorted outcome stream: our expected, finite-sequence set of outcomes. (A code sketch of this procedure follows below.)
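The procedure above can be expressed compactly in code. The following Python sketch is illustrative only (the function name and structure are ours, not the author’s implementation) and is written so that the CDF and inverse CDF of any fitted distribution, such as the symmetrical GHD sketch given in Section 4.2.1, can be passed in:

    import random

    def median_sorted_stream(inv_cdf, cdf, n, m=1000, seed=None):
        """Return (outcomes, terminal_sum) of the median-sorted set, following
        the steps of Section 4.2.3. `inv_cdf` and `cdf` are callables for the
        fitted distribution, `n` the contest length, `m` the number of sets."""
        rng = random.Random(seed)
        candidate_sets = []
        for _ in range(m):
            outcomes, prod_p = [], 1.0
            for _ in range(n):
                x = inv_cdf(rng.random())             # draw an outcome from the distribution
                p = cdf(x)                            # probability of that outcome via the CDF
                prod_p *= p if p <= 0.5 else 1.0 - p  # the p` transform
                outcomes.append(x)
            candidate_sets.append((sum(outcomes), prod_p, outcomes))

        candidate_sets.sort(key=lambda s: s[0])       # sort the sets by summed outcome
        sum_of_probs = sum(s[1] for s in candidate_sets)

        best, cum = None, 0.0
        for total, prod_p, outcomes in candidate_sets:
            cum += prod_p / sum_of_probs              # cumulative probability through sorted sets
            if best is None or abs(cum - 0.5) < best[0]:
                best = (abs(cum - 0.5), outcomes, total)
        return best[1], best[2]

    # Usage (assumes ghd_inv_cdf and ghd_cdf from the earlier sketch are in scope);
    # the exact draws depend on the random seed and will not reproduce Tables 3 and 4:
    # stream, total = median_sorted_stream(ghd_inv_cdf, ghd_cdf, n=7, m=1000, seed=42)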

4.2.4. Example—Calculating the Representative Stream of Outcomes from the Median-Sorted Outcome

Here, we demonstrate the steps from Section 4.2.3. We assumed a finite stream of 7 outcomes in length (since we used a distribution of daily S&P500 returns for the first half of 2023, we assume we are modeling an existential contest of 7 upcoming days in the S&P500). Thus, N = 7.
We calculated 1000 such sets of these 7 outcomes (therefore, M = 1000). For each set, we generated 7 uniform random values (0…1) as inputs to the inverse CDF, and then derived the probability of each resulting outcome by taking that output as the input to the CDF. Each such probability was then converted into a p` value.
For each set of 7 probability–outcome pairs, we took the product of the p` values and the sum of the outcomes (Table 3).
We next calculated the cumulative sum of probabilities for each M: the cumulative value of the previous M in the sorted sequence, plus the product of the N p` values in that set’s outcome–p` pairs, divided by the sum of such products over all M (“SumOfProbs”).
Lastly, as shown in Table 4, we found the M in the sorted set of M sets that corresponds to a cumulative sum of probabilities closest to 0.5. This is the median-sorted outcome, and its output values (o1…o7) represent the “typical” expected outcomes over N trials.
In this instance of 1000 samples of sequences of N outcomes, per the parameters of our symmetrical GHD representation of S&P500 closing percentage changes, we found the “typical” set of seven daily percentage changes (what we would expect, where half the time we would see a better sequence and half the time a worse one) to be:
−0.006670152, −0.00638097, −0.006615831, −0.005151115, 0.013180558, 0.01693477, 0.011466985
Of note, our expected worst-case outcome was calculated as −0.00642. Since the representative stream contains an outcome at least as negative as this, we are not deluding ourselves about what the worst-case outcome might be within the “typical”, i.e., median, outcome stream of 7 outputs.
This set of outcomes represents the expected—the typical set of outcomes from data distributed in the first half of 2023 for daily percentage changes in the S&P500.
Further, for 7 trials in this distribution of outcomes, we can expect to see the ∑o outcome of 0.016764 as our non-asymptotic expectation. Of note, we used a symmetrical distribution in this paper and example, and, as mentioned earlier, the divergence between the actual, finite expectation and the classical, asymptotic expectation is a function of the skewness of the distribution, which was 0 in our example. Nevertheless, the concept remains, and the procedure carries over to more perverse, skewed distributions.

4.3. Optimal Allocations

4.3.1. Growth-Optimal Fraction

Earlier, mention was made of determining the growth-optimal fraction from a sample data set, and how such results might diverge from its non-asymptotic determination.
Individuals and individual entities enmeshed in existential conflicts may very well be interested in finite-length growth optimization.
Consider, for example, a proposition with p (0.5) of winning 2 units versus losing 1 unit. The traditional method (even by way of the Kelly criterion, as the absolute value of the largest loss here is 1 unit) would determine that asymptotic growth is maximized by risking 0.25 of the stake on each play. However, when examined under the lens of finite-length existential contests, terminal growth is maximized, if the length of the contest is 1 play, at a fraction of 1.0; if N = 2 plays, at a fraction of 0.5 per play; and so on, converging to the asymptotic 0.25 of the stake as N grows ever larger.
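These finite-horizon figures can be reproduced with a brute-force search. The Python sketch below (illustrative only; the function names are ours) maximizes the expected average compound growth per play, E[(terminal wealth multiple)^(1/N)], over the fraction risked, which is our reading of the finite-horizon growth criterion of [41,58]; it returns 1.0 for N = 1, 0.5 for N = 2, and drifts toward 0.25 as N grows:

    from math import comb

    def expected_avg_compound_growth(f, n, p=0.5, win=2.0, loss=-1.0):
        """E[(terminal wealth multiple)^(1/n)] for n IID plays, risking fraction f
        of the stake per play (+2 units per unit risked on a win, -1 on a loss)."""
        total = 0.0
        for k in range(n + 1):                          # k = number of winning plays
            terminal = (1 + f * win) ** k * (1 + f * loss) ** (n - k)
            total += comb(n, k) * p ** k * (1 - p) ** (n - k) * terminal ** (1.0 / n)
        return total

    def finite_horizon_optimal_f(n, steps=2000):
        grid = [i / steps for i in range(steps + 1)]    # candidate fractions over [0, 1]
        return max(grid, key=lambda f: expected_avg_compound_growth(f, n))

    for n in (1, 2, 10, 40):
        print(n, finite_horizon_optimal_f(n))           # 1.0, 0.5, then toward 0.25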
In determining optimal fractions for finite-length existential contests, the input data (the values of −0.006670152, −0.00638097, −0.006615831, −0.005151115, 0.013180558, 0.01693477, and 0.011466985 in our example above) are what ought to be used. These data determine the expectation over such finite-length trials; they are the data one “expects” over those N trials, and they are therefore also the proper inputs for determining, and maximizing, these other facets.

4.3.2. Return/Risk Optimal Points

The amount to risk that is growth-optimal is return/risk-optimal only asymptotically. For finite-length streams, existential contests, the authors of [58] demonstrated the two points on the spectrum of percentage risked that maximize the ratio of return to risk: the point of tangency to the function over this spectrum, and the inflection point to the “left” of the function’s peak.
Later, in [59], this was extended to the determination of multiple, simultaneous contests (portfolios).
Particularly in determining these critical points of return/risk maximization, which are length-dependent, utilizing the input data that would be “expected” over such length of trials, as demonstrated herein, is more appropriate to the individual engaged in an existential contest.

4.3.3. Risk of Ruin/Drawdown Calculations

Similarly, the risk of ruin, defined as a drawdown in cumulative equity of a given percentage of capital from the initial starting point, and the risk of drawdown, defined as a drawdown of a given percentage of capital from a high-watermark point, are more accurately calculated over the course of existential contests using a sample of outcomes determined by this method, as opposed to the universe of potential outcomes dictated by the full distribution. In [63], it is demonstrated that these calculations depend on a specific-length horizon (and, in fact, that the risk of drawdown equals 1.0, certainty, for any defined drawdown percentage as the length of time approaches infinity). As these calculations are time-specific, accurate calculation requires the use of a realistic, time-specific sample set, as provided herein.
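For illustration only (the exact, horizon-dependent treatment is given in [63]; the function name and the resampling scheme here are our own simplification), the sketch below estimates the risk of drawdown over a finite horizon by resampling the representative stream, with a fraction/leverage f applied to each period’s return:

    import random

    def risk_of_drawdown(stream, f, dd=0.05, trials=100_000, seed=0):
        """Monte Carlo estimate of the probability that equity falls `dd` (e.g.,
        0.05 = 5%) or more below its high watermark over a horizon of
        len(stream) periods, resampling `stream` with replacement and applying
        fraction/leverage `f` to each period's return."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            equity = peak = 1.0
            for _ in range(len(stream)):
                equity *= 1.0 + f * rng.choice(stream)   # one resampled period
                peak = max(peak, equity)
                if equity <= peak * (1.0 - dd):
                    hits += 1
                    break
        return hits / trials

    # The representative 7-outcome stream from Section 4.2.4:
    stream = [-0.006670152, -0.00638097, -0.006615831, -0.005151115,
              0.013180558, 0.01693477, 0.011466985]
    print(risk_of_drawdown(stream, f=1.0, dd=0.01))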

4.3.4. Growth Diminishment in Existential Contests

In [60], a technique was presented, whereby examining an isomorphic mapping of the fraction of risk to the cosine of the standard deviation in percentage inputs to their arithmetic average allows one to examine growth functions sans a “risk fraction”, and instead in terms of variance in outcomes. The presented technique examined geometric growth functions, where growth was a negative property. Such functions might be the cumulative debt of a nation, the growth of a pathogen in a population or within an individual organism, etc.
On the function of growth plotted against the fraction of the stake risked, just as there is a growth-optimal point, there is necessarily a point beyond which the function decreases and growth turns negative from risking too much (for example, by repeatedly risking 0.5 or more of the stake in the 2:−1 payoff game with p (0.5) mentioned earlier, one is certain to eventually go broke, even in this very favorable game).
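This zero-growth threshold is easy to verify for the 2:−1, p = 0.5 game: the asymptotic per-play expected log-growth is exactly zero at a risked fraction of 0.5 and negative beyond it. A minimal sketch (illustrative; names are ours):

    import math

    def asymptotic_growth(f, p=0.5, win=2.0, loss=-1.0):
        """Per-play expected log-growth of the 2:-1 coin-toss game at fraction f."""
        return p * math.log(1 + f * win) + (1 - p) * math.log(1 + f * loss)

    for f in (0.25, 0.4, 0.5, 0.6):
        print(f, round(asymptotic_growth(f), 4))   # peaks at f = 0.25, zero at 0.5, negative beyond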
The isomorphic mapping provided in [60] allows one to examine such traits of the growth function without regard to the notion of a percentage of the stake to risk, but rather in terms of adjusting the variance.
As most endeavors to break malevolent growth functions are inherently time-sensitive (i.e., of finite N—true “existential contests”), the notion of using the median-expected outcome stream over N trials is very germane to this exercise, as well.

5. Further Research

We have demonstrated how to fit an extensible, albeit symmetrical, distribution to an economic data series, using techniques applicable to other distributional forms. As discussed, the two-piece skew-normal distributions and the odd log-logistic-type distributions offer greater flexibility than the GHD and are therefore generally considered to better represent financial time series data; we have illustrated our methods on the symmetrical GHD for simplicity, but they can be applied to any distributional form.
However, it is the very parameter of skewness, the difference between mean and median, that accounts for the difference between conclusions drawn for existential contests and the classical, asymptotic assessments. Generally speaking, the greater the skewness, the more pronounced these differences become, and the more likely one is to see cases where, for example, the expectation is positive for the existential contest but negative asymptotically, and vice versa. Thus, this paper leaves wide open the ripe area of applying these techniques to data modeled with the two-piece skew-normal distributions and the odd log-logistic-type distributions.

6. Conclusions

We were concerned with “individuals” (as one or a collection of individuals with a common goal) involved in finite-length sequences of stochastic outcomes—existential contests—where the distribution of such outcomes is characterized by fat tails.
Once such a distribution is fitted, we demonstrated the process of determining the “expected” terminal outcome over the finite-length sequence, and the “expected”, or “typical”, outcomes that characterize it. We have demonstrated how to obtain the expected set of discrete outcomes for the given distributional form and a given length of existential contest.
This is important in analyzing the expectation of finite sequences, which differs from the classical expectation, the sum of the probabilities times their outcomes. It is the finite-sequence expectation that is germane to the “individual” in an existential contest; it is always the correct expectation, and it approaches the classical one asymptotically.
Such “expectation”, what the “individual” can “expect”, being that which has half the outcomes better and half worse, also yields a typical stream of outputs achieving this expected outcome.
It is this stream of outputs which, to the individual in the existential contest, forms the important inputs, given that it is what the individual “expects” for such finite-length calculations as growth-optimal fractions, maximizing return to risk, and calculating the risk of ruin or drawdown in the existential contest, or even for those whose charge is growth diminishment under a finite deadline; for life itself, despite the ever-present asymptotes, is rife with existential contests.

Funding

This research received no external funding.

Data Availability Statement

Data are also available upon request from the author.

Conflicts of Interest

The author was employed by Exsuperatus. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Exsuperatus had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Marshall, A. Principles of Economics; Macmillan and Co.: New York, NY, USA, 1890. [Google Scholar]
  2. Marshall, A. Industry and Trade: A Study of Industrial Technique and Business Organization; Macmillan and Co.: New York, NY, USA, 1919. [Google Scholar]
  3. Bachelier, L. Théorie de la spéculation. Ann. Sci. de l’École Norm. Supérieure 1900, 17, 21–86. [Google Scholar] [CrossRef]
  4. Lévy, P. Théorie de L’addition des Variables Aléatoires; Gauthier-Villars: Paris, France, 1937. [Google Scholar]
  5. Lévy, P. Processus Stochastiques et Mouvement Brownien; Gauthier-Villars: Paris, France, 1948. [Google Scholar]
  6. Feller, W. Theory of Probability; John Wiley & Sons: New York, NY, USA, 1950. [Google Scholar]
  7. Pareto, V. Pareto Distribution and its Applications in Economics. J. Econ. Stud. 1910, 5, 128–145. [Google Scholar]
  8. Mandelbrot, B. The Variation of Certain Speculative Prices. J. Bus. 1963, 36, 394. [Google Scholar] [CrossRef]
  9. Mandelbrot, B.; Hudson, R.L. The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin, and Reward; Basic Books: New York, NY, USA, 2004. [Google Scholar]
  10. Samorodnitsky, G.; Taqqu, M.S. Stable Non-Gaussian Random Processes; Chapman and Hall: Boca Raton, FL, USA, 1994. [Google Scholar]
  11. Rachev, S.; Mittnik, S. Stable Paretian Models in Finance; Wiley: New York, NY, USA, 2000. [Google Scholar]
  12. Rachev, S.T. Handbook of Heavy Tailed Distributions in Finance; Elsevier Science B.V.: Amsterdam, The Netherlands, 2003. [Google Scholar]
  13. Nolan, J.P. Financial Modeling with Heavy-Tailed Stable Distributions. WIREs Comput. Stat. 2014, 6, 45–55. [Google Scholar] [CrossRef]
  14. Barndorff-Nielsen, O.E. Exponentially Decreasing Distributions for the Logarithm of Particle Size. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 1977, 353, 401–409. [Google Scholar] [CrossRef]
  15. Podgórski, K.; Wallin, J. Convolution-Invariant Subclasses of Generalized Hyperbolic Distributions. Commun. Stat. Theory Methods 2015, 45, 98–103. [Google Scholar] [CrossRef]
  16. Eberlein, E.; Keller, U. Hyperbolic distributions in finance. Bernoulli 1995, 1, 281–299. [Google Scholar] [CrossRef]
  17. Prause, K. The Generalized Hyperbolic Model: Estimation, Financial Derivatives, and Risk Measures. Ph.D. Thesis, University of Freiburg, Freiburg im Breisgau, Germany, 1999. [Google Scholar]
  18. Küchler, U.; Neumann, K. Stock Returns and Hyperbolic Distributions. ASTIN Bull. J. Int. Actuar. Assoc. 1999, 29, 3–16. [Google Scholar] [CrossRef]
  19. Eberlein, E.; Keller, U.; Prause, K. New Insights into Smile, Mispricing, and Value at Risk: The Hyperbolic Model. J. Bus. 1998, 71, 371–405. [Google Scholar] [CrossRef]
  20. Núñez-Mora, J.A.; Sánchez-Ruenes, E. Generalized Hyperbolic Distribution and Portfolio Efficiency in Energy and Stock Markets of BRIC Countries. Int. Bus. Econ. Res. J. 2014, 13, 299–310. [Google Scholar] [CrossRef]
  21. Huang, C.K.; Chinhamu, K.; Huang, C.S.; Hammujuddy, J. Generalized Hyperbolic Distributions and Value-at-Risk Estimation for the South African Mining Index. Int. Bus. Econ. Res. J. 2014, 13, 265–274. [Google Scholar] [CrossRef]
  22. Wang, C.; Liu, K.; Li, B.; Tan, K. Portfolio optimization under multivariate affine generalized hyperbolic distributions. Int. Rev. Econ. Financ. 2022, 80, 49–66. [Google Scholar] [CrossRef]
  23. Kostovetsky, L.; Kostovetska, B. Distribution Analysis of S&P 500 Financial Turbulence. J. Math. Financ. 2023, 13, 67–88. [Google Scholar]
  24. Hu, W.; Kercheval, A. Risk Management with Generalized Hyperbolic Distributions. Available online: https://www.math.fsu.edu/e-prints/archive/paper321.pdf (accessed on 1 November 2023).
  25. Klebanov, L.; Rachev, S.T. ν-Generalized Hyperbolic Distributions. J. Risk Financ. Manag. 2023, 16, 251. [Google Scholar] [CrossRef]
  26. Jamalizadeh, A.; Arabpour, A.R.; Balakrishnan, N. A Generalized Skew Two-Piece Skew-Normal Distribution. Stat. Papers 2011, 52, 431–446. [Google Scholar] [CrossRef]
  27. Shafiei, S.; Doostparast, M.; Jamalizadeh, A. The Alpha–Beta Skew Normal Distribution: Properties and Applications. Statistics 2016, 50, 338–349. [Google Scholar] [CrossRef]
  28. Rasekhi, M.; Chinipardaz, R.; Alavi, S.M.R. A Flexible Generalization of the Skew Normal Distribution Based on a Weighted Normal Distribution. Stat. Methods Appl. 2016, 25, 375–394. [Google Scholar] [CrossRef]
  29. Rasekhi, M.; Hamedani, G.G.; Chinipardaz, R. A Flexible Extension of Skew Generalized Normal Distribution. Metron 2017, 75, 87–107. [Google Scholar] [CrossRef]
  30. Fernández, C.; Steel, M.F.J. On Bayesian Modeling of Fat Tails and Skewness. J. Am. Stat. Assoc. 1998, 93, 359–371. [Google Scholar] [CrossRef]
  31. Castillo, N.O.; Gómez, H.W.; Leiva, V.; Sanhueza, A. On the Fernández-Steel Distribution: Inference and Application. Comput. Stat. Data Anal. 2011, 55, 2951–2961. [Google Scholar] [CrossRef]
  32. Rigby, R.A.; Stasinopoulos, M.D.; Heller, G.Z.; De Bastiani, F. Distributions for Modeling Location, Scale, and Shape: Using GAMLSS in R; Chapman and Hall/CRC: Boca Raton, FL, USA, 2019. [Google Scholar]
  33. Farzammehr, M.A.; Zadkarami, M.R.; McLachlan, G.J.; Lee, S.X. Skew-Normal Bayesian Spatial Heterogeneity Panel Data Models. J. Appl. Stat. 2019, 47, 804–826. [Google Scholar] [CrossRef] [PubMed]
  34. Kim, H.-J. On a Class of Two-Piece Skew-Normal Distributions. Statistics 2005, 39, 537–553. [Google Scholar] [CrossRef]
  35. Charemza, W.; Díaz, C.; Makarova, S. Choosing the Right Skew Normal Distribution: The Macroeconomist’s Dilemma; Working Paper No. 15/08; University of Leicester: Leicester, UK, May 2015. [Google Scholar]
  36. Neethling, A.; Ferreira, J.; Bekker, A.; Naderi, M. Skew Generalized Normal Innovations for the AR(p) Process Endorsing Asymmetry. Symmetry 2020, 12, 1253. [Google Scholar] [CrossRef]
  37. da Braga, A.S.; Cordeiro, G.M.; Ortega, E.M.; da Cruz, J.N. The Odd Log-Logistic Normal Distribution: Theory and Applications in Analysis of Experiments. J. Stat. Theory Pract. 2016, 10, 311–335. [Google Scholar] [CrossRef]
  38. Prataviera, F.; Cordeiro, G.M.; Ortega, E.M.M.; Suzuki, A.K. The Odd Log-Logistic Geometric Normal Regression Model with Applications. Adv. Data Sci. Adapt. Anal. 2019, 11, 1950003. [Google Scholar] [CrossRef]
  39. Pascal, B.; Fermat, P. Traité du triangle arithmétique. Corresp. Blaise Pascal Pierre Fermat 1654, 2, 67–82. [Google Scholar]
  40. Huygens, G. Libellus de Ratiociniis in ludo aleae (A Book on the Principles of Gambling); Original Latin transcript translated and published in English; S. KEIMER for T. WOODWARD: London, UK, 1714. [Google Scholar]
  41. Vince, R. Expectation and Optimal f: Expected Growth with and without Reinvestment for Discretely-Distributed Outcomes of Finite Length as a Basis in Evolutionary Decision-Making. Far East J. Theor. Stat. 2019, 56, 69–91. [Google Scholar] [CrossRef]
  42. Bernoulli, D. Specimen theoriae novae de mensura sortis (Exposition of a new theory on the measurement of risk). Comment. Acad. Sci. Imp. Petropolitanae 1738, 5, 175–192, Translated into English: Sommer, L. Econometrica 1954, 22, 23–36. [Google Scholar]
  43. Keynes, J.M. A Treatise on Probability; Macmillan: London, UK, 1921. [Google Scholar]
  44. Williams, J.B. Speculation and the carryover. Q. J. Econ. 1936, 50, 436–455. [Google Scholar] [CrossRef]
  45. Kelly, J.L., Jr. A new interpretation of information rate. Bell Syst. Tech. J. 1956, 35, 917–926. [Google Scholar] [CrossRef]
  46. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656. [Google Scholar] [CrossRef]
  47. Vince, R. Portfolio Management Formulas; John Wiley & Sons, Inc.: New York, NY, USA, 1990. [Google Scholar]
  48. Bellman, R.; Kalaba, R. On the role of dynamic programming in statistical communication theory. IEEE Trans. Inf. Theory 1957, 3, 197–203. [Google Scholar] [CrossRef]
  49. Breiman, L. Optimal gambling systems for favorable games. In Proceedings of the fourth Berkeley Symposium on Mathematical Statistics and Probability; Neyman, J., Ed.; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 65–78. [Google Scholar]
  50. Latane, H.A. Criteria for choice among risky ventures. J. Political Econ. 1959, 67, 144–155. [Google Scholar] [CrossRef]
  51. Latane, H.; Tuttle, D. Criteria for portfolio building. J. Financ. 1967, 22, 362–363. [Google Scholar] [CrossRef]
  52. Thorp, E.O. Beat the Dealer: A Winning Strategy for the Game of Twenty-One; Vintage: New York, NY, USA, 1962. [Google Scholar]
  53. Thorp, E.O. The Kelly Criterion in blackjack, sports betting, and the stock market. In Proceedings of the 10th International Conference on Gambling and Risk Taking, Montreal, QC, Canada, 31 May–4 June 1997. [Google Scholar]
  54. Samuelson, P.A. The “fallacy” of maximizing the geometric mean in long sequences of investing or gambling. Proc. Natl. Acad. Sci. USA 1971, 68, 2493–2496. [Google Scholar] [CrossRef]
  55. Samuelson, P.A. Why we should not make mean log of wealth big though years to act are long. J. Bank. Financ. 1979, 3, 305–307. [Google Scholar] [CrossRef]
  56. Goldman, M.B. A negative report on the “near optimality” of the max-expected-log policy as applied to bounded utilities for long-lived programs. J. Financ. Econ. 1974, 1, 97–103. [Google Scholar] [CrossRef]
  57. Merton, R.C.; Samuelson, P.A. Fallacy of the lognormal approximation to optimal portfolio decision-making over many periods. J. Financ. Econ. 1974, 1, 67–94. [Google Scholar] [CrossRef]
  58. Vince, R.; Zhu, Q. Optimal betting sizes for the game of blackjack. J. Invest. Strateg. 2015, 4, 53–75. [Google Scholar] [CrossRef]
  59. Lopez de Prado, M.; Vince, R.; Zhu, Q.J. Optimal Risk Budgeting Under a Finite Investment Horizon. Risks 2019, 7, 86. [Google Scholar] [CrossRef]
  60. Vince, R. Diminution of Malevolent Geometric Growth Through Increased Variance. J. Econ. Bus. Mark. Res. 2020, 1, 35–53. [Google Scholar]
  61. Klebanov, L.; Rachev, S.T. The Global Distributions of Income and Wealth; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  62. Vince, R. The Mathematics of Money Management; John Wiley & Sons, Inc.: New York, NY, USA, 1992. [Google Scholar]
  63. Vince, R. The Handbook of Portfolio Mathematics: Formulas for Optimal Allocation & Leverage; John Wiley & Sons, Inc.: New York, NY, USA, 2007. [Google Scholar]
Figure 1. PDF and CDF to fitted sample returns data from the first half of 2023 for the S&P500.
Table 1. Daily percentage changes in the S&P500 for the first six months of 2023. (Each row lists two date/price/change triplets.)
Date      S&P500    % Change      Date      S&P500    % Change
202212303839.5 202304034124.510.003699
202301033824.14−0.004202304044100.6−0.0058
202301043852.970.007539202304054090.38−0.00249
202301053808.1−0.01165202304064105.020.003579
202301063895.080.022841202304104109.110.000996
202301093892.09−0.00077202304114108.94−4.1 × 10−5
202301103919.250.006978202304124091.95−0.00413
202301113969.610.012849202304134146.220.013263
202301123983.170.003416202304144137.64−0.00207
202301133999.090.003997202304174151.320.003306
202301173990.97−0.00203202304184154.870.000855
202301183928.86−0.01556202304194154.52−8.4 × 10−5
202301193898.85−0.00764202304204129.79−0.00595
202301203972.610.018918202304214133.520.000903
202301234019.810.011881202304244137.040.000852
202301244016.95−0.00071202304254071.63−0.01581
202301254016.22−0.00018202304264055.99−0.00384
202301264060.430.011008202304274135.350.019566
202301274070.560.002495202304284169.480.008253
202301304017.77−0.01297202305014167.87−0.00039
202301314076.60.014642202305024119.58−0.01159
202302014119.210.010452202305034090.75−0.007
202302024179.760.014699202305044061.22−0.00722
202302034136.48−0.01035202305054136.250.018475
202302064111.08−0.00614202305084138.120.000452
2023020741640.012873202305094119.17−0.00458
202302084117.86−0.01108202305104137.640.004484
202302094081.5−0.00883202305114130.62−0.0017
202302104090.460.002195202305124124.08−0.00158
202302134137.290.011449202305154136.280.002958
202302144136.13−0.00028202305164109.9−0.00638
202302154147.60.002773202305174158.770.011891
202302164090.41−0.01379202305184198.050.009445
202302174079.09−0.00277202305194191.98−0.00145
202302213997.34−0.02004202305224192.630.000155
202302223991.05−0.00157202305234145.58−0.01122
202302234012.320.005329202305244115.24−0.00732
202302243970.04−0.01054202305254151.280.008758
202302273982.240.003073202305264205.450.013049
202302283970.15−0.00304202305304205.521.66 × 10−5
202303013951.39−0.00473202305314179.83−0.00611
202303023981.350.007582202306014221.020.009854
202303034045.640.016148202306024282.370.014534
202303064048.420.000687202306054273.79−0.002
202303073986.37−0.01533202306064283.850.002354
202303083992.010.001415202306074267.52−0.00381
202303093918.32−0.01846202306084293.930.006189
202303103861.59−0.01448202306094298.860.001148
202303133855.76−0.00151202306124338.930.009321
202303143919.290.016477202306134369.010.006932
202303153891.93−0.00698202306144372.590.000819
202303163960.280.017562202306154425.840.012178
202303173916.64−0.01102202306164409.59−0.00367
202303203951.570.008918202306204388.71−0.00474
202303214002.870.012982202306214365.69−0.00525
202303223936.97−0.01646202306224381.890.003711
202303233948.720.002985202306234348.33−0.00766
202303243970.990.00564202306264328.82−0.00449
202303273977.530.001647202306274378.410.011456
202303283971.27−0.00157202306284376.86−0.00035
202303294027.810.014237202306294396.440.004474
202303304050.830.005715202306304450.380.012269
202303314109.310.014437
Table 2. Best-fit parameter set to Table 1, only optimizing three parameters to the first six months in 2023 of daily S&P500 closing price changes. Location was set to the mean (and median, being symmetrical) of the sample data of 0.0029048 and the scale was kept at the sample standard deviation of 0.0105381.

Symbol    Name             Value
α         Concentration    2.1
δ         Scale            0.01053813
μ         Location         0.0029048
λ         Shape            1.64940218
Table 3. Sorted outcomes. Outcomes are labeled as columns o1…o7, and each row is one of 1000 of the M sets of N = 7 outcomes.

o1        o2        o3        o4        o5        o6        o7        ∑o            ∏p              Cum Sum Probs
0.01898   −0.01143  −0.02245  −0.00848  −0.00245  0.00988   −0.04847  −0.064417766  0.000000000791  0.000367729532
−0.01148  −0.01351  −0.01114  −0.00849  −0.00428  −0.0135   −0.00353  −0.065923437  0.000000015425  0.000361021142
−0.023    −0.00209  −0.00094  −0.01424  −0.0132   −0.01449  −0.00144  −0.069414377  0.000000020100  0.000230227464
−0.00535  −0.01323  −0.00791  −0.01091  0.00768   −0.02873  −0.01846  −0.076917579  0.000000002948  0.000059794713
−0.00817  −0.0109   −0.00531  −0.01709  −0.01115  −0.0179   −0.00953  −0.080049157  0.000000003151  0.000034795240
0.01658   −0.0653   −0.00424  −0.01429  −0.01636  −0.01301  0.01241   −0.084203152  0.000000000228  0.000008072898
−0.00544  −0.11487  0.01205   −0.01222  −0.00734  0.01483   −0.00471  −0.117685371  0.000000000724  0.000006137397  <-- previous Cum Sum Probs + current ∏p/SumOfProbs

Sum of ∏p values (SumOfProbs) = 0.000117933
Table 4. Sorted outcomes with median selected.

o1        o2        o3        o4        o5        o6        o7        ∑o           ∏p              Cum Sum Probs
−0.01076  −0.0096   0.01428   0.03664   −0.00401  −0.01816  0.00873   0.017127894  0.000000003902  0.503768567507
−0.00529  −0.00806  0.01028   0.01246   −0.00586  −0.00624  0.0196    0.016892532  0.000000096482  0.503735481158
−0.01023  −0.00529  −0.00461  0.01046   0.00852   0.01987   −0.00196  0.01676511   0.000000268421  0.502917368488
−0.00667  −0.00638  −0.00662  −0.00515  0.01318   0.01693   0.01147   0.016764245  0.000000106581  0.500641320047  <-- median-sorted outcome
0.00863   0.02348   −0.00931  −0.00549  0.01595   −0.00801  −0.00849  0.01676081   0.000000026072  0.499737576467
−0.00721  −0.01002  0.02994   −0.01062  −0.00915  −0.01151  0.03524   0.016677462  0.000000000506  0.499516499541
−0.00352  0.01047   0.02715   −0.00517  −0.01393  −0.01018  0.01182   0.016631721  0.000000023517  0.499512210603
−0.0099   0.01316   −0.00314  0.0184    −0.00324  0.00952   −0.00819  0.01661394   0.000000135827  0.499312798