**1. Introduction**

A large class of problems in random geometry is concerned with the collocation of points in high-dimensional space. Applications range from optimization of financial portfolios [1], binary classifications of data strings [2] and optimal strategies in game theory [3] to the existence of non-negative solutions to systems of linear equations [4,5], the emergence of cooperation in competitive ecosystems [6,7], and linear programming with random parameters [8]. It is frequently relevant to consider the case where both the number of points *T* and the dimension of space *N* tend to infinity. This limit is often characterized by abrupt qualitative changes reminiscent of phase transitions when an external parameter or the ratio *T*/*N* varies and crosses a critical value. At the same time, this high-dimensional case is amenable to methods from the statistical mechanics of disordered systems, offering additional insight.

Some results obtained in different disciplines are closely related to each other without the connection always being appreciated. In the present paper, we discuss some particular cases. We will show that the boundedness of the expected maximal loss, as well as the possibility of zero variance of a random financial portfolio is closely related to the existence of a linear separable binary coloring of random points called a dichotomy. Moreover, we point out the connection with the existence of non-negative solutions to systems of linear equations and with mixed strategies in zero-sum games. On a more technical level and for the above-mentioned limit of large instances in high-dimensional spaces, we also make contact between replica calculations performed for different problems in different fields.

In addition to uncovering the common random geometrical background of seemingly very different problems, our comparative analysis sheds light on each of them from various angles and points to ramifications in their respective fields.

**Citation:** Prüser, A.; Kondor, I.; Engel, A. Aspects of a Phase Transition in High-Dimensional Random Geometry. *Entropy* **2021**, *23*, 805. https://doi.org/10.3390/e23070805

Academic Editors: Ryszard Kutner, H. Eugene Stanley and Christophe Schinckus

Received: 10 May 2021 Accepted: 17 June 2021 Published: 24 June 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### **2. Dichotomies of Random Points**

Consider an *N*-dimensional Euclidean space with a fixed coordinate system. Choose *T* points in this space and color them either black or white. The coloring is called a dichotomy if a hyperplane through the origin of the coordinate system exists that separates black points from white ones, see Figure 1.

To avoid special arrangements like all points falling on one line, the points are required to be in what is called general position: the position vectors of any subset of *N* points should be linearly independent. Under this rather mild prerequisite, the number *C*(*T*, *N*) of dichotomies of *T* points in *N* dimensions depends only on *T* and *N* and not on the particular location of the points. This remarkable result was proven in several works, among them a classical paper by Cover [2]. Establishing a recursion relation for *C*(*T*, *N*), the explicit result was derived:

$$C(T, N) = 2\sum\_{i=0}^{N-1} \binom{T-1}{i}.\tag{1}$$

If the coordinates of the points are chosen at random from a continuous distribution, the points are in general position with probability one. Since there are in total 2<sup>*T*</sup> different binary colorings of these points and only *C*(*T*, *N*) of them are dichotomies, the probability that *T* random points in *N* dimensions with random coloring form a dichotomy is given by the cumulative binomial distribution:

$$P\_{\mathrm{d}}(T, N) = \frac{C(T, N)}{2^{T}} = \frac{1}{2^{T-1}} \sum\_{i=0}^{N-1} \binom{T-1}{i}. \tag{2}$$

Hence, *P*<sub>d</sub>(*T*, *N*) = 1 for *T* ≤ *N*, *P*<sub>d</sub>(*T*, *N*) = 1/2 for *T* = 2*N*, and *P*<sub>d</sub>(*T*, *N*) → 0 for *T* → ∞. The transition from *P* ≈ 1 at *T* = *N* to *P* ≈ 0 at large *T* becomes sharper with increasing *N*. This is clearly seen when considering the case of constant ratio

$$\alpha := \frac{T}{N} \tag{3}$$

between the number of points and the dimension of space for different values of *N*, which shows an abrupt transition at *α*<sub>c</sub> = 2 for *N* → ∞, cf. Figure 2.
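Since (2) involves only binomial coefficients, it can be evaluated exactly with rational arithmetic. The following minimal sketch (our illustration, not part of the original paper) checks the limiting values quoted above:

```python
from fractions import Fraction
from math import comb

def P_d(T: int, N: int) -> Fraction:
    """Exact probability (2) that T randomly colored random points
    in general position in N dimensions form a dichotomy."""
    return Fraction(sum(comb(T - 1, i) for i in range(N)), 2 ** (T - 1))

# P_d = 1 for T <= N, P_d = 1/2 at T = 2N, and P_d -> 0 for large T:
print(P_d(4, 5))             # T <= N
print(P_d(10, 5))            # T = 2N
print(float(P_d(100, 5)))    # T >> N
```

Evaluating the exact expression for growing *N* at fixed *α* = *T*/*N* also reproduces the sharpening of the transition shown in Figure 2.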


For later convenience, it is useful to reformulate the condition for a certain coloring to be a dichotomy in different ways. Let us denote the position vector of point *t*, *t* = 1, ... , *T*, by ***ξ***<sup>*t*</sup> ∈ R<sup>*N*</sup> and its coloring by the binary variable *ζ*<sup>*t*</sup> = ±1. If a separating hyperplane exists, it has a normal vector **w** ∈ R<sup>*N*</sup> that fulfills

$$\zeta^t = \text{sign}(\mathbf{w} \cdot \boldsymbol{\xi}^t), \qquad t = 1, \ldots, T,\tag{4}$$

where we define sign(*x*) = 1 for *x* ≥ 0 and sign(*x*) = −1 otherwise. With the abbreviation

$$\mathbf{r}^t := \zeta^t \boldsymbol{\xi}^t,\tag{5}$$

Equation (4) translates into **w** · **r**<sup>*t*</sup> ≥ 0 for all *t* = 1, ... , *T*, which, for points in general position, is equivalent to the somewhat stronger condition

$$\mathbf{w} \cdot \mathbf{r}^t > 0, \qquad t = 1, \ldots, T. \tag{6}$$

A certain coloring *ζ*<sup>*t*</sup> of points ***ξ***<sup>*t*</sup> is hence a dichotomy if a vector **w** exists such that (6) is fulfilled, that is, if its scalar product with all vectors **r**<sup>*t*</sup> is positive. This is quite intuitive, since by going from the vectors ***ξ***<sup>*t*</sup> to **r**<sup>*t*</sup> according to (5), we replace all points colored black by their white-colored mirror images (or vice versa). If we started out with a dichotomy, after the transformation, all points will lie on the same side of the separating hyperplane. The meaning of Equation (6) is clear: for *T* random points in *N* dimensions with coordinates chosen independently from a symmetric distribution, there exists with probability *P*<sub>d</sub>(*T*, *N*) a hyperplane such that all these points lie on the same side of the hyperplane. This formulation will be crucial in Section 3 to relate dichotomies to bounded cones characterizing financial portfolios.
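In two dimensions, condition (6) can be checked directly: the vectors **r**<sup>*t*</sup> fit strictly inside some open half-plane exactly when their angular arrangement leaves a gap larger than π. A small Monte Carlo sketch along these lines (our own illustration, with arbitrary parameter choices) reproduces Cover's probability (2) for *N* = 2:

```python
import math
import random

def is_dichotomy_2d(points, colors):
    """A coloring is a dichotomy iff the vectors r^t = zeta^t * xi^t all lie
    strictly inside some open half-plane through the origin; in 2D this is
    the case iff the largest angular gap between them exceeds pi."""
    angles = sorted(math.atan2(z * y, z * x) % (2 * math.pi)
                    for (x, y), z in zip(points, colors))
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - angles[-1] + angles[0])  # wrap-around gap
    return max(gaps) > math.pi

rng = random.Random(42)
T, trials = 4, 4000
hits = sum(
    is_dichotomy_2d([(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(T)],
                    [rng.choice((-1, 1)) for _ in range(T)])
    for _ in range(trials))
# Cover's formula gives P_d(4, 2) = (1 + 3) / 2^3 = 1/2
print(hits / trials)
```

The estimate is insensitive to replacing the Gaussian coordinates by any other continuous symmetric distribution, in line with the general-position argument above.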

**Figure 2.** Probability *P*<sub>d</sub>(*T*, *N*) that *T* randomly colored points in general position in *N*-dimensional space form a dichotomy as a function of the ratio *α* between *T* and *N* for different values of *N*. The transition between the limiting values *P* = 1 at *α* = 1 and *P* = 0 at large *α* becomes increasingly sharp when *N* grows.

Singling out one particular point *s* = 1, ... , *T*, this in turn implies that there is, for any choice of *s*, a vector **w** with

$$\mathbf{w} \cdot \mathbf{r}^t > 0, \quad t = 1, \ldots, T, \; t \neq s, \qquad \text{and} \qquad \mathbf{w} \cdot (-\mathbf{r}^s) < 0. \tag{7}$$

Consider now all vectors **r**¯ of the form

$$\bar{\mathbf{r}} = \sum\_{t \neq s} c^t \mathbf{r}^t, \qquad \text{with} \qquad c^t \ge 0, \quad t = 1, \ldots, T, \; t \neq s, \tag{8}$$

that is, all vectors that may be written as a linear combination of the **r**<sup>*t*</sup> with *t* ≠ *s* and all expansion coefficients *c*<sup>*t*</sup> being non-negative. The set of these vectors **r**¯ is called the *non-negative cone* of the **r**<sup>*t*</sup>, *t* ≠ *s*. Equation (7) then means that −**r**<sup>*s*</sup> cannot be an element of this non-negative cone. This is clear since the hyperplane perpendicular to **w** separates −**r**<sup>*s*</sup> from this very cone, an observation that is known as Farkas' lemma [9]. Therefore, if a set of vectors **r**<sup>*t*</sup> forms a dichotomy, no mirror image −**r**<sup>*s*</sup> of any of them may be written as a linear combination of the remaining ones with non-negative expansion coefficients:

$$\sum\_{t \neq s} c^t \mathbf{r}^t \neq -\mathbf{r}^s, \qquad \forall c^t \ge 0. \tag{9}$$

Finally, adding **r***s* to both sides of (9), we find

$$\sum\_{t} c^{t} \mathbf{r}^{t} \neq \mathbf{o}, \qquad \text{with} \qquad c^{t} \ge 0, \ t = 1, \ldots, T, \quad \text{and} \quad \sum\_{t} c^{t} > 0,\tag{10}$$

where **o** denotes the null vector in *N* dimensions. Given *T* points **r***t* in *N* dimensions forming a dichotomy, it is therefore impossible to find a nontrivial linear combination of these vectors with non-negative coefficients that equals the null vector.

This corollary to Cover's result is also easy to understand intuitively. Assume there were coefficients *c*<sup>*t*</sup> ≥ 0, not all zero at the same time, that realize

$$\sum\_{t} c^{t} \mathbf{r}^{t} = \mathbf{o}.\tag{11}$$

If the points **r***t* form a dichotomy, then according to (6), there is a vector **w** that makes a positive scalar product with all of them. Multiplying (11) with this vector, we immediately arrive at a contradiction, since the l.h.s. of this equation is positive and the r.h.s. is zero.

Note that the converse of (10) is also true: if the points do not form a dichotomy, a decomposition of the null vector of the type (11) can always be found. This is related to the fact that the non-negative cone of the corresponding position vectors is then the complete R<sup>*N*</sup>. For if there were a vector **b** ∈ R<sup>*N*</sup> not lying in this cone, then by Farkas' lemma there would be a hyperplane separating the cone from **b**. However, the very existence of this hyperplane would qualify the points **r**<sup>*t*</sup> as a dichotomy, in contradiction to what was assumed.

In the limit *N* → ∞, *T* → ∞ with *α* = *T*/*N* kept constant, the problem of random dichotomies can be investigated within statistical mechanics. To make this connection explicit, we first note that no inequality in (6) is altered if **w** is multiplied by a positive constant. To decide whether an appropriate vector **w** fulfilling (6) may be found or not, it is hence sufficient to study vectors of a given length. It is convenient to choose this length as √*N*, requiring

$$\sum\_{i=1}^{N} w\_i^2 = N.\tag{12}$$

Next, we introduce for each realization of the random vectors **r***t* an energy function

$$E(\mathbf{w}) := \sum\_{t=1}^{T} \Theta \left( -\sum\_{i} w\_{i} r\_{i}^{t} \right), \tag{13}$$

where Θ(*x*) = 1 if *x* > 0 and Θ(*x*) = 0 otherwise denotes the Heaviside step function. This energy is nothing but the number of points violating (6) for a given vector **w**. Our central quantity of interest is the entropy of the ground state of the system, that is, the logarithm of the fraction of points on the sphere defined by (12) that realize zero energy:

$$S(\kappa, \alpha) := \lim\_{N \to \infty} \frac{1}{N} \ln \frac{\int \prod\_{i=1}^{N} dw\_i \, \delta(\sum\_i w\_i^2 - N) \prod\_{t=1}^{\alpha N} \Theta\left(\sum\_i w\_i r\_i^t - \kappa\right)}{\int \prod\_{i=1}^{N} dw\_i \, \delta(\sum\_i w\_i^2 - N)}. \tag{14}$$

Here, *δ*(*x*) denotes the Dirac *δ*-function, and we have introduced the stability parameter *κ* ≥ 0 to additionally sharpen the inequalities (6).

The main problem in the explicit determination of *S*(*κ*, *α*) is its dependence on the many random parameters *r*<sub>*i*</sub><sup>*t*</sup>. Luckily, for large values of *N*, deviations of *S* from its typical value *S*<sub>typ</sub> become extremely rare and, moreover, this typical value is given by the average over the realizations of the *r*<sub>*i*</sub><sup>*t*</sup>:

$$S\_{\rm typ}(\kappa, \alpha) = \langle\langle S(\kappa, \alpha) \rangle\rangle. \tag{15}$$

The calculation of this average was first performed in a classical paper [10], with the result:

$$S\_{\rm typ}(\kappa, \alpha) = \mathop{\rm extr}\_{q}\left[\frac{1}{2}\ln(1 - q) + \frac{q}{2(1 - q)} + \alpha \int Dt \, \ln H\left(\frac{\kappa - \sqrt{q}\,t}{\sqrt{1 - q}}\right)\right],\tag{16}$$

where the extremum is over the auxiliary quantity *q*, and we have used the shorthand notations

$$Dt := \frac{dt}{\sqrt{2\pi}} e^{-\frac{t^2}{2}} \qquad \text{and} \qquad H(x) := \int\_{x}^{\infty} Dt. \tag{17}$$

More details of the calculation may be found in the original reference, and in chapter 6 of [11]. Appendix A contains some intermediate steps for a closely related analysis. 

Studying the limit *q* → 1 of (16) reveals

$$S\_{\rm typ}(\kappa, \alpha) \begin{cases} > -\infty & \text{if} \quad \alpha < \alpha\_c(\kappa) \\ \to -\infty & \text{if} \quad \alpha > \alpha\_c(\kappa) \end{cases} \tag{18}$$

corresponding to a sharp transition from solvability to non-solvability at a critical value *α*<sub>c</sub>(*κ*). For *κ* = 0, one finds *α*<sub>c</sub> = 2, in agreement with (2), cf. Figure 2.
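The critical line can be made explicit: the *q* → 1 limit of (16) yields the classical capacity formula *α*<sub>c</sub>(*κ*) = [∫<sub>−*κ*</sub><sup>∞</sup> *Dt* (*t* + *κ*)²]<sup>−1</sup>, and the Gaussian integral has a closed form in terms of the error function. A short numerical sketch (our own, using only the standard library):

```python
import math

def H(x):
    """Gaussian tail H(x) = int_x^infty Dt, cf. (17)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def alpha_c(kappa):
    """Critical load alpha_c(kappa) = 1 / int_{-kappa}^infty Dt (t + kappa)^2,
    obtained from the q -> 1 limit of (16); the integral evaluates to
    (1 + kappa^2) H(-kappa) + kappa * phi(kappa)."""
    phi = math.exp(-kappa ** 2 / 2) / math.sqrt(2 * math.pi)
    return 1.0 / ((1 + kappa ** 2) * H(-kappa) + kappa * phi)

print(alpha_c(0.0))   # 2.0, recovering Cover's transition point
print(alpha_c(1.0))   # a larger stability kappa lowers the critical load
```

As expected, *α*<sub>c</sub> decreases monotonically with *κ*: sharpening the inequalities (6) makes it harder to accommodate the same number of points.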

Note that Cover's result (2) holds for all values of *T* and *N*, whereas the statistical mechanics analysis is restricted to the thermodynamic limit *N* → ∞. On the other hand, the latter can deal with all values of the stability parameter *κ*, whereas no generalization of Cover's approach to the case *κ* > 0 is known.

#### **3. Phase Transitions in Portfolio Optimization under the Variance and the Maximal Loss Risk Measure**

#### *3.1. Risk Measures*

The purpose of this subsection is to indicate the financial context in which the geometric problem discussed in this paper appears. A portfolio is the weighted sum of financial assets. The weights represent the parts of the total wealth invested in the various assets. Some of the weights are allowed to be negative (short positions), but the weights sum to 1; this is called the budget constraint. Investment carries risk, and higher returns usually carry higher risk. Portfolio optimization seeks a trade-off between risk and return by the appropriate choice of the portfolio weights. Markowitz was the first to formulate the portfolio choice as a risk-reward problem [12]. Reward is normally regarded as the expected return on the portfolio. Assuming return fluctuations to be Gaussian-distributed random variables, portfolio variance offered itself as the natural risk measure. This setup made the optimization of portfolios a quadratic programming problem, which, especially in the case of large institutional portfolios, posed a serious numerical difficulty in its time. Another criticism of variance as a risk measure was that it is symmetric in gains and losses, whereas investors are believed not to be afraid of big gains, only of big losses. This consideration led to the introduction of downside risk measures, starting already with the semivariance [13]. Later it was recognized that the Gaussian assumption was not realistic, and alternative risk measures were sought to grasp the risk of rare but large events, and also to allow risk to be aggregated across the ever-increasing and increasingly heterogeneous institutional portfolios. Around the end of the 1980s, Value at Risk (VaR) was introduced by JP Morgan [14], and it subsequently spread widely across the industry through their RiskMetrics methodology [15].
VaR is a high quantile, a downside risk measure (note that in the literature, the profit and loss axis is often reflected, so that losses are assigned a positive sign. It is under this convention that VaR is a high quantile, rather than a low one). It soon came under academic criticism for its insensitivity to the details of the distribution beyond the quantile, and for its lack of sub-additivity. Expected Shortfall (ES), the average loss above the VaR quantile, appeared around the turn of the century [16]. An axiomatic approach to risk measures was proposed by Artzner et al. [17] who introduced a set of postulates which any coherent risk measure was required to satisfy. ES turned out to be coherent [18,19] and was strongly advocated by academics. After a long debate, international regulation embraced it as the official risk measure in 2016 [20].

The various risk measures discussed all involved averages. Since the distributions of financial data are not known, the relative price movements of assets are observed at a number *T* of time points, and the true averages are replaced by empirical averages from these data. This works well if *T* is sufficiently large; however, in addition to all the aforementioned problems, a general difficulty of portfolio optimization lies in the fact that the dimension *N* of institutional portfolios (the number of different assets) is large, but the number *T* of observed data per asset is never large enough, due to lack of stationarity of the time series and the natural limits (transaction costs, technical difficulties of rebalancing) on the sampling frequency. Therefore, portfolio optimization in large dimensions suffers from a high degree of estimation error, which renders the exercise more or less illusory (see e.g., [21]). Estimation of returns is even more error-prone than the risk part, so several authors disregard the return completely, and seek the minimum risk portfolio (e.g., [22–24]). We follow the same approach here.

In the two subsections that follow, we also assume that the returns are independent, symmetrically distributed random variables. This is, of course, not meant to be a realistic market model, but it allows us to make an explicit connection between the optimization of the portfolio variance under a constraint excluding short positions and the geometric problem of dichotomies discussed in Section 2. This is all the more noteworthy because analytic results are notoriously scarce for portfolio optimization with no short positions. We note that similar simplifying assumptions (Gaussian fluctuations, independence) were built into the original JP Morgan methodology, which was industry standard in its time, and influences the thinking of practitioners even today.

#### *3.2. Vanishing of the Estimated Variance*

We consider a portfolio of *N* assets with weights *w*<sub>*i*</sub>, *i* = 1, ... , *N*. The observations *r*<sub>*i*</sub><sup>*t*</sup> of the corresponding returns at various times *t* = 1, ... , *T* are assumed to be independent, symmetrically distributed random variables. Correspondingly, the average value of the portfolio is zero. Its variance is given by

$$\sigma\_p^2 = \frac{1}{T} \sum\_{t} \left( \sum\_{i} w\_{i} r\_{i}^t \right)^2 = \sum\_{i,j} w\_{i} w\_{j} \, \frac{1}{T} \sum\_{t} r\_{i}^t r\_{j}^t =: \sum\_{i,j} w\_{i} w\_{j} C\_{ij},\tag{19}$$

where *C*<sub>*ij*</sub> denotes the empirical covariance matrix of the observations. Note that the variance of a portfolio optimized in a given sample depends on the sample, so it is itself a random variable.

The variance of a portfolio obviously vanishes if the returns are fixed quantities that do not fluctuate. This subsection is not about such a trivial case. We shall see, however, that the variance optimized *under a no-short constraint* can vanish with a certain probability if the dimension *N* is larger than the number of observations *T*.

The rank of the covariance matrix is the smaller of *N* and *T*, and for *N* ≤ *T* the estimated variance is positive with probability one. Thus, the optimization of variance can always be carried out as long as the number of observations *T* is larger than the dimension *N*, albeit with an increasingly larger error as *T*/*N* decreases. For large *N* and *T* and fixed *α* = *T*/*N*, the estimation error increases as *α*/(*α* − 1) with decreasing *α* and diverges at *α* ↓ 1 [25,26]. The divergence of the estimation error can be regarded as a phase transition. Below the critical value *α*<sub>d</sub> := 1, the optimization of variance becomes impossible. Of course, in practice, one never has such an optimization task without some additional constraints. Note that because of the possibility of short-selling (negative portfolio weights), the budget constraint (a hyperplane) in itself is not sufficient to forbid the appearance of large positive and negative positions, which then destabilize the optimization. In contrast, any constraint that makes the allowed weights finite can act as a regularizer. The usual regularizers are constraints on the norm of the portfolio vector. It was shown in [27,28] how liquidity considerations naturally lead to regularization. Ridge regression (a constraint on the ℓ<sub>2</sub> norm of the portfolio vector) prevents the covariance matrix from developing zero eigenvalues, and, especially in its nonlinear form [29], results in very satisfactory out-of-sample performance.
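The rank statement above is easy to confirm numerically; the following sketch (our own illustration, with arbitrary sizes) builds the sample covariance matrix of (19) from *T* < *N* observations:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 6, 4                        # more assets than observations
R = rng.standard_normal((T, N))    # R[t, i] = return observation r_i^t

C = (R.T @ R) / T                  # sample covariance matrix, cf. (19)
print(np.linalg.matrix_rank(C))    # min(N, T) = 4: C is singular for T < N
```

The *N* − *T* zero eigenvalues of *C* are exactly the zero modes referred to below: any weight vector in their span has vanishing in-sample variance.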

An alternative is the ℓ<sub>1</sub> regularizer, of which the exclusion of short positions is a special case. Together with the budget constraint, it prevents large sample fluctuations of the weights. Let us then impose the no-short ban, as it is indeed imposed in practice on a number of special portfolios (e.g., on pension funds), or, in episodes of crisis, on the whole industry. The ban on short-selling extends the region where the variance can be optimized, but below *α* = 1 the optimization acquires a probabilistic character in that the regularized variance vanishes with a certain probability, and the optimization can only be carried out when it is positive. (Otherwise, there is a continuum of solutions, namely any combination of the eigenvectors belonging to zero eigenvalues, which makes the optimized variance zero.)

Interestingly, the probability of the variance vanishing is related to the problem of random dichotomies in the following way. For the portfolio variance (19) to become zero, we need to have

$$\sum\_{i} w\_{i} r\_{i}^{t} = 0 \tag{20}$$

for all *t*. If we interchange the roles of *t* and *i*, we see that, according to (11), this is possible as long as the *N* points in R<sup>*T*</sup> with position vectors **r**<sub>*i*</sub> := {*r*<sub>*i*</sub><sup>*t*</sup>} do not form a dichotomy. Hence, the probability for zero variance follows from (2) as

$$P\_{0}(T, N) = 1 - P\_{\rm d}(N, T) = 1 - \frac{1}{2^{N-1}} \sum\_{i=0}^{T-1} \binom{N-1}{i} = \frac{1}{2^{N-1}} \sum\_{i=T}^{N-1} \binom{N-1}{i}. \tag{21}$$

Therefore, the probability of the variance vanishing is almost 1 for small *α*, decreases to the value 1/2 at *α* = 1/2, decreases further to 0 as *α* increases to 1, and remains identically zero for *α* > 1 [30,31]. This is similar but also somewhat complementary to the curve shown in Figure 2. Equation (21) for the vanishing of the variance was first written up in [30,31] on the basis of analogy with the minimax problem to be considered below, and it was also verified by extended numerical simulations. The above link to the Cover problem is a new result, and it is rewarding to see how a geometric proof establishes a bridge between the two problems.
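Equation (21) is again amenable to exact rational evaluation. A small sketch (our own, with `P_zero` as a label of our choosing) confirms the values quoted above, with the roles of *N* and *T* interchanged relative to (2):

```python
from fractions import Fraction
from math import comb

def P_zero(T: int, N: int) -> Fraction:
    """Probability (21) that the no-short-constrained variance vanishes:
    1 - P_d(N, T), i.e. the N return vectors in R^T form no dichotomy."""
    return 1 - Fraction(sum(comb(N - 1, i) for i in range(T)), 2 ** (N - 1))

print(P_zero(3, 6))          # alpha = 1/2: probability 1/2
print(P_zero(6, 6))          # alpha = 1: probability identically 0
print(float(P_zero(2, 40)))  # small alpha: probability close to 1
```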

In [30,31], an intriguing analogy with, for example, the condensed phase of an ideal Bose gas was pointed out. The analogous features are the vanishing of the chemical potential in the Bose gas, resp. the vanishing of the Lagrange multiplier enforcing the budget constraint in the portfolio problem; the onset of Bose condensation, resp. the appearance of zero weights ("condensation" of the solutions on the coordinate planes) due to the no-short constraint; the divergence of the transverse susceptibility, and the emergence of zero modes in both models.

#### *3.3. The Maximal Loss*

The introduction of the Maximal Loss (ML) or minimax risk measure by Young [32] in 1998 was motivated by numerical expediency. In contrast to the variance whose optimization demands a quadratic program, ML is constructed such that it can be optimized by linear programming, which could be performed very efficiently even on large datasets already at the end of the last century. Maximal Loss combines the worst outcomes of each asset and seeks the best combination of them. This may seem to be an over-pessimistic risk measure, but there are occasions when considering the worst outcomes is justifiable (think of an insurance portfolio in the time of climate change), and, as will be seen, the present regulatory market risk measure is not very far from ML.

Omitting the portfolio's return again and focusing on the risk part, the maximal loss of a portfolio is given by

$$\text{ML} := \min\_{\mathbf{w}} \max\_{1 \le t \le T} \left( -\sum\_{i} w\_{i} r\_{i}^{t} \right) \tag{22}$$

with the constraint

$$\sum\_{i} w\_{i} = N.\tag{23}$$

We are interested in the probability *P*<sub>ML</sub>(*T*, *N*) that this minimax problem is infeasible, that is, that ML diverges to −∞. To this end, we first eliminate the constraint (23) by putting

$$w\_N = N - \sum\_{i=1}^{N-1} w\_i. \tag{24}$$

This results in

$$\text{ML} := \min\_{\tilde{\mathbf{w}}} \max\_{1 \le t \le T} \left( -\sum\_{i=1}^{N-1} w\_i (r\_i^t - r\_N^t) - N r\_N^t \right) =: \min\_{\tilde{\mathbf{w}}} \max\_{1 \le t \le T} \left( -\sum\_{i=1}^{N-1} w\_i \tilde{r}\_i^t - N r\_N^t \right) \tag{25}$$

with **w**˜ := {*w*<sub>1</sub>, ... , *w*<sub>*N*−1</sub>} ∈ R<sup>*N*−1</sup> and **r**˜<sup>*t*</sup> := {*r*<sub>1</sub><sup>*t*</sup> − *r*<sub>*N*</sub><sup>*t*</sup>, ... , *r*<sub>*N*−1</sub><sup>*t*</sup> − *r*<sub>*N*</sub><sup>*t*</sup>} ∈ R<sup>*N*−1</sup>. For ML to stay finite for all choices of **w**˜, the *T* random hyperplanes with normal vectors **r**˜<sup>*t*</sup> have to form a bounded cone. If the points **r**˜<sup>*t*</sup> form a dichotomy, then according to (6), there is a vector **W** ∈ R<sup>*N*−1</sup> with **W** · **r**˜<sup>*t*</sup> > 0 for all *t*. Since there is no constraint on the norm of **w**˜, the maximal loss (25) can become arbitrarily negative for **w**˜ = *λ***W** and *λ* → ∞. The cone then is not bounded. We therefore find

$$P\_{\rm ML}(T, N) = P\_{\rm d}(T, N - 1) = \frac{1}{2^{T - 1}} \sum\_{i = 0}^{N - 2} \binom{T - 1}{i} \tag{26}$$

for the probability that ML cannot be optimized.

In the limit *N*, *T* → ∞ with *α* = *T*/*N* kept finite, (25) displays the same abrupt change as the problem of dichotomies, a phase transition at *α*<sub>c</sub> = 2. Note that this is larger than the critical point *α*<sub>d</sub> = 1 of the unregularized variance, which is quite natural, since ML uses only the extremal values in the data set. The probability for the feasibility of ML was first written up without proof in [1], where a comparative study of the noise sensitivity of four risk measures, including ML, was performed. There are two important remarks we can make at this point. First, the geometric consideration above does not require any assumption about the data-generating process; as long as the returns are independent, they can be drawn from any symmetric distribution without changing the value of the critical point. This is a special case of the universality of critical points discovered by Donoho and Tanner [33].

The second remark is that the problem of bounded cones is closely related to that of bounded polytopes [34]. The difference is just the additional dimension of the ML itself. If the random hyperplanes perpendicular to the vectors **r**˜*t* form a bounded cone for ML according to (25), then they will trace out a bounded polytope on hyperplanes perpendicular to the ML axis at sufficiently high values of ML. In fact, after the replacement *N* − 1 → *N* Equation (26) coincides with the result in Theorem 4 of [34] for the probability of *T* random hyperplanes forming a bounded polytope in *N* dimensions (there is a typo in Theorem 4 in [34]; the summation has to start at *i* = 0). The close relationship between the ML problem and the bounded polytope problem, on the one hand, and the Cover problem on the other hand, was apparently not clarified before.

If we spell out the financial meaning of the above result, we are led to interesting ramifications. To gain an intuition, let us consider just two assets, *N* = 2. If asset 1 produces a return sometimes above, sometimes below that of asset 2, then the minimax problem will have a finite solution. If, however, asset 1 dominates asset 2 (i.e., yields a return which is at least as large, and, at least at one time point, larger, than the return on asset 2 in a given sample), then, with unlimited short positions allowed, the investor will be induced to take an arbitrarily large long position in asset 1 and go correspondingly short in asset 2. This means that the solution of the minimax problem will run away to infinity, and the risk of ML will be equal to minus infinity [1]. The generalization to *N* assets is immediate: if among the assets there is one that dominates the rest, or there is a combination of assets that dominates some of the rest, the solution will run away to infinity, and ML will take the value of −∞. This scenario corresponds to an arbitrage, and the investor gains an arbitrarily large profit without risk [35]. Of course, if such a dominance is realized in one given sample, it may disappear in the next time interval, or the dominance relations can rearrange to display another mirage of an arbitrage.

Clearly, the ML risk measure is unstable against these fluctuations. In practice, such a brutal instability can never be observed, because there are always some constraints on the short positions, or groups of assets corresponding to branches of industries, geographic regions, and so forth. These constraints will prevent instabilities from taking place, and the solution cannot run away to infinity, but will go as far as allowed by the constraints and then stick to the boundary of the allowed region. Note, however, that in such a case, the solution will be determined more by the constraints (and ultimately by the risk manager imposing the constraints) rather than by the structure of the market. In addition, in the next period, a different configuration can be realized, so the solution will jump around on the boundary defined by the constraints.

We may illustrate the role of short positions for the instability of ML further by investigating the case of portfolio weights *w*<sub>*i*</sub> that have to be larger than a threshold *γ* ≤ 0. For *γ* → −∞, there are no restrictions on short positions, whereas *γ* = 0 corresponds to a complete ban on them. For *N*, *T* → ∞ with fixed *α* = *T*/*N*, the problem may be solved within the framework of statistical mechanics. The minimax problem for ML is equivalent to the following problem in linear programming: minimize the threshold variable *κ* under the constraints (23), *w*<sub>*i*</sub> ≥ *γ*, and

$$-\sum\_{i} w\_{i} r\_{i}^{t} \leq \kappa \quad \forall t = 1, \ldots, T. \tag{27}$$
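The linear program just described is straightforward to set up. The sketch below (our own toy example with hypothetical return data for *N* = 2 and *T* = 3, using `scipy.optimize.linprog`) illustrates the dominance instability: with unrestricted weights the program is unbounded, while the no-short ban *γ* = 0 yields a finite optimum:

```python
import numpy as np
from scipy.optimize import linprog

N = 2
# Asset 1 dominates asset 2 in this sample: r_1^t > r_2^t at every t.
R = np.array([[0.1, 0.0],
              [0.2, 0.1],
              [0.3, 0.2]])          # R[t, i] = return r_i^t
T = R.shape[0]

# Variables x = (w_1, ..., w_N, kappa); minimize kappa subject to
# -sum_i w_i r_i^t <= kappa  (27)  and  sum_i w_i = N  (23).
c = np.r_[np.zeros(N), 1.0]
A_ub = np.c_[-R, -np.ones(T)]
b_ub = np.zeros(T)
A_eq = np.r_[np.ones(N), 0.0].reshape(1, -1)
b_eq = [N]

results = []
for gamma in (None, 0.0):            # unrestricted shorts vs. no-short ban
    bounds = [(gamma, None)] * N + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    results.append(res)
    print(gamma, res.status, res.fun)
```

For this sample, the unconstrained program is reported as unbounded (the run-away solution), while with *γ* = 0 the optimizer puts all weight on the dominating asset and returns the finite minimax value ML = −0.2.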

Similarly to (14), the central quantity of interest is

$$\Omega(\kappa, \gamma, \alpha) = \frac{\int\_{\gamma}^{\infty} \prod\_{i=1}^{N} dw\_i \, \delta(\sum\_i w\_i - N) \prod\_{t=1}^{\alpha N} \Theta\left(\sum\_i w\_i r\_i^t + \kappa\right)}{\int\_{\gamma}^{\infty} \prod\_{i=1}^{N} dw\_i \, \delta(\sum\_i w\_i - N)},\tag{28}$$

giving the fractional volume of points on the simplex defined by (23) that fulfill all constraints (27). For given *α* and *γ*, we decrease *κ* down to the point *κ*<sub>c</sub>, where the typical value of this fractional volume vanishes. The ML is then given by *κ*<sub>c</sub>(*α*, *γ*).

Some details of the corresponding calculations are given in Appendix A. In Figure 3, we show some results. As discussed above, the divergence of ML for *α* < 2 is indeed formally eliminated for all *γ* > −∞, and the functions ML(*α*; *γ*) smoothly interpolate between the cases *γ* = 0 and *γ* → −∞. However, the situation is now even more dangerous, since the unreliability of ML as a risk measure for small *α* remains without being signalled by a divergence.

**Figure 3.** *Left*: The Maximal Loss ML = *κ*<sub>c</sub> as a function of *α*. The analytical results (solid line) are compared to simulation results (circles) with *N* = 200 averaged over 100 samples. The symbol size corresponds to the statistical error. *Right*: Same as left, with a greatly extended ML axis.

The recognition of the instability of ML as a dominance problem has proved very fruitful and led to a series of generalizations. First, it was realized [1] that the instability of the expected shortfall, of which ML is an extreme special case, has a very similar geometric origin. (The current regulatory ES is the expected loss above a 97.5% quantile, whereas ML corresponds to 100%.) Both ES and ML are so-called coherent risk measures [17], and it was proved [35] that the root of this instability lies in the coherence axioms themselves, so every coherent risk measure suffers from a similar instability. Furthermore, it was proved [35] that the existence of a dominant/dominated pair of assets in the portfolio is a necessary and sufficient condition for the instability of ML, whereas it is only sufficient for other coherent risk measures. It follows that in terms of the variable *α* used in this paper (which is the reciprocal of the aspect ratio *N*/*T* used in some earlier works, such as [35–37]), the critical point of ML is a lower bound for the critical points of other coherent measures. Indeed, the critical line of ES was found to lie above the ML critical value of *αc* = 2 [36]. Value at Risk is not a coherent measure and can violate convexity, so it is not amenable to a similar study of its critical point. However, parametric VaR (that is, the quantile where the underlying distribution is given and only its expectation value and variance are determined from empirical data) *is* convex, and it was shown to possess a critical line that runs above that of ES [37]. The investigation of the semi-variance yielded similar results [37]. It seems, then, that the geometrical analysis of ML provides important information for a variety of risk measures, including some of the most widely used measures in the industry (VaR and ES), and also other downside risk measures.

#### **4. Related Problems**

In this section, we list a few problems from different fields of mathematics and physics that are linked to the random coloring of points in high-dimensional space and point out their connection with the questions discussed above.

#### *4.1. Binary Classifications with a Perceptron*

Feed-forward networks of formal neurons perform binary classifications of input data [38]. The simplest conceivable network of this type, the perceptron, consists of just an input layer of *N* units *ξi* and a single output bit *ζ* = ±1 [39]. Each input *ξi* is directly connected to the output by a real-valued coupling *wi*. The output is computed as the sign of the weighted sum of the inputs,

$$\zeta = \operatorname{sign}\left(\sum\_{i=1}^{N} w\_i \, \xi\_i\right). \tag{29}$$

Consider now a family of random inputs {*ξ<sup>t</sup><sub>i</sub>*}, *t* = 1, ... , *T* and ask for the probability *P*<sub>p</sub>(*T*, *N*) that the perceptron is able to implement a randomly chosen binary classification {*ζ<sup>t</sup>*} of these inputs. Interpreting the vectors **ξ**<sup>*t*</sup> := {*ξ<sup>t</sup><sub>i</sub>*} as position vectors of *T* points in *N* dimensions and the required classifications *ζ<sup>t</sup>* as a black/white coloring, we hence need to know the probability that this particular coloring is a dichotomy. Indeed, if a hyperplane exists that separates black points from white ones, its normal vector **w** gives a suitable choice for the perceptron weights that gets all classifications right. Therefore, we have

$$P\_{\rm p}(T, N) = P\_{\rm d}(T, N) = \frac{1}{2^{T-1}} \sum\_{i=0}^{N-1} \binom{T-1}{i} \,. \tag{30}$$

In the thermodynamic limit *N*, *T* → ∞, this problem, together with a variety of modifications, can be analyzed using methods from the statistical mechanics of disordered systems along the lines of Equations (14)–(16); see [11].
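Equation (30) is straightforward to evaluate numerically and already exhibits the sharp threshold at *α* = *T*/*N* = 2 for moderate *N*. A minimal sketch (the function name is ours):

```python
from math import comb

def p_dichotomy(T: int, N: int) -> float:
    """Probability (30) that a random two-coloring of T points in
    general position in N dimensions is linearly separable (Cover)."""
    if T <= N:
        return 1.0  # with at most N points, every coloring is separable
    return sum(comb(T - 1, i) for i in range(N)) / 2 ** (T - 1)
```

By the symmetry of the binomial coefficients, `p_dichotomy(2 * N, N)` equals exactly 1/2 for every *N*; away from *α* = 2 the probability tends to 1 (*α* < 2) or 0 (*α* > 2) as *N* grows, which is the phase transition discussed above.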

#### *4.2. Zero-Sum Games with Random Pay-Off Matrices*

In game theory, two or more players choose among different strategies at their disposal and receive a pay-off (that may be negative) depending on the choices of all participating players. A particularly simple situation is given by a zero-sum game between two players, where one player's profit is the other player's loss. If the first player may choose among *N* strategies and the second among *T*, the setup is defined by an *N* × *T* pay-off matrix *r<sup>t</sup><sub>i</sub>*, giving the reward for the first player if he plays strategy *i* and his opponent strategy *t*. Barring rare situations in which it is advantageous for one or both players to always choose one and the same strategy, it is known from the classical work of Morgenstern and von Neumann [40] that the best the players can do is to choose at random with different probabilities among their available strategies. The set of these probabilities *pi* and *qt*, respectively, is called a mixed strategy.

For large numbers of available strategies, it is sensible to investigate typical properties of such mixed strategies for random pay-off matrices. This can be done in a rather similar way to the calculation of ML presented in Appendix A of the present paper [3]. One interesting result is that an extensive fraction of the probabilities *pi* and *qt* forming the respective optimal mixed strategies has to be identically zero: for both players, there are strategies they should never touch.
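For finite instances, the optimal mixed strategy of, say, the first player follows from the standard linear-programming formulation of a zero-sum game. The sketch below (our own illustration; all names are ours) typically shows the extensive fraction of never-played strategies mentioned above:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, T = 40, 60
r = rng.standard_normal((N, T))   # pay-off r[i, t] to the first player

# Maximize the game value v over probabilities p_i:
# variables (p_1, ..., p_N, v); linprog minimizes, so the objective is -v.
c = np.concatenate([np.zeros(N), [-1.0]])
A_ub = np.hstack([-r.T, np.ones((T, 1))])   # v <= sum_i p_i r[i, t] for all t
b_ub = np.zeros(T)
A_eq = np.concatenate([np.ones(N), [0.0]]).reshape(1, -1)
b_eq = [1.0]                                # probabilities sum to one
bounds = [(0, None)] * N + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, value = res.x[:N], res.x[-1]
unused = np.mean(p < 1e-8)        # fraction of strategies never played
```

For random pay-offs of this size, `unused` comes out strictly positive in agreement with the statistical-mechanics result cited above.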

#### *4.3. Non-Negative Solutions to Large Systems of Linear Equations*

Consider a random *N* × *T* matrix *r<sup>t</sup><sub>i</sub>* and a random vector **b** ∈ R<sup>*N*</sup>. When will the system of linear equations

$$\sum\_{t} r\_i^t \, x^t = b\_i \,, \qquad i = 1, \ldots, N \tag{31}$$

typically have a solution with all *x<sup>t</sup>* being non-negative? This question is related to the optimization of financial portfolios under a ban on short-selling as discussed above, and also occurs when investigating the stability of chemical or ecological systems [6,41]. Here, the *x<sup>t</sup>* denote concentrations of chemical or biological species and hence have to be non-negative. Similarly to the optimal mixed strategies considered in the previous subsection, the solution typically has a number of entries *x<sup>t</sup>* that are strictly zero (species that died out), the remaining ones being positive (surviving species). Again, for *T* = *αN* and *N* → ∞, a sharp transition at a critical value *αc* separates situations with typically no non-negative solution from those in which such a solution can typically be found [4].
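For a given sample, the existence of a non-negative solution is a linear-programming feasibility problem and can be checked directly. A minimal sketch (the parameter choices are illustrative, not those of [4]); here **b** is nonrandom, corresponding to the limit *γ* → ∞ discussed below:

```python
import numpy as np
from scipy.optimize import linprog

def has_nonneg_solution(r, b):
    """Feasibility of sum_t r[i, t] x^t = b[i] with all x^t >= 0,
    posed as a linear program with zero objective."""
    T = r.shape[1]
    res = linprog(np.zeros(T), A_eq=r, b_eq=b, bounds=[(0, None)] * T)
    return res.status == 0

rng = np.random.default_rng(2)
N, alpha = 40, 4.0
T = int(alpha * N)
r = 1.0 + 0.1 * rng.standard_normal((N, T))   # R = 1, sigma_r = 0.1
b = np.ones(N)                                # nonrandom b
feasible = has_nonneg_solution(r, b)          # alpha = 4 > alpha_c = 2
```

Repeating this over many samples while scanning *α* locates the transition numerically; for *T* < *N*, the system is generically unsolvable even without the non-negativity requirement.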

To make contact with the cases discussed before, it is useful to map the problem to a dual one by again using Farkas' lemma. Let us denote by

$$\overline{\mathbf{r}} = \sum\_{t} c^{t}\, \mathbf{r}^{t}, \qquad c^{t} \geq 0, \quad t = 1, \ldots, T \tag{32}$$

the vectors in the non-negative cone of the column vectors **r**<sup>*t*</sup> of the matrix *r<sup>t</sup><sub>i</sub>*. It is clear that (31) has a non-negative solution **x** if **b** belongs to this cone, and that no such solution exists if **b** lies outside the cone. In the latter case, however, there must be a hyperplane separating **b** from the cone. Denoting the normal of this hyperplane by **w**, we hence have the following duality: either the system (31) has a non-negative solution **x**, or there exists a vector **w** with

$$\mathbf{w} \cdot \mathbf{r}^t \ge 0 \quad t = 1, \dots, T \qquad \text{and} \qquad \mathbf{w} \cdot \mathbf{b} < 0. \tag{33}$$

If the *r<sup>t</sup><sub>i</sub>* are drawn independently from a distribution with finite first and second cumulants *R* and *σ*<sub>*r*</sub><sup>2</sup>, respectively, and the components *bi* are independent random numbers with average *B* and variance *σ*<sub>*b*</sub><sup>2</sup>/*N*, the dual problem (33) may be analyzed along the lines of (14)–(16). The result for the typical entropy of solution vectors **w** reads [4]

$$S\_{\rm typ}(\gamma, \alpha) = \mathop{\rm extr}\_{q, \kappa} \left[ \frac{1}{2} \ln(1 - q) + \frac{q}{2(1 - q)} - \frac{\kappa^2 \gamma}{2(1 - q)} + \alpha \int Dt \, \ln H \left( \frac{\kappa - \sqrt{q}\,t}{\sqrt{1 - q}} \right) \right], \tag{34}$$

where the parameter

$$\gamma := \left(\frac{B \, \sigma\_r}{R \, \sigma\_b}\right)^2 \tag{35}$$

characterizes the distributions of *r<sup>t</sup><sub>i</sub>* and *bi*. The main difference from (16) is the additional extremum over *κ*, regularized by the penalty term proportional to *κ*<sup>2</sup>. Considering the limit *q* → 1 in (34), it is possible to determine the critical value *αc*(*γ*) bounding the region where typically no solution **w** may be found. For nonrandom **b**, that is, *σb* → 0 implying *γ* → ∞, we recover Cover's result *αc* = 2.

The problem is closely related to a phase transition found recently in MacArthur's resource competition model [4,6,7], in which a community of purely competing species builds up a collectively cooperative phase above a critical threshold of biodiversity.
