**7. Conclusions**

The risk quadrangle theory of Rockafellar and Uryasev (2013) and the decomposition theorem of Rockafellar et al. (2008) provide a framework for building a regression with relevant deviations. The solution of a regression problem is split into two steps: (1) minimization of the deviation from the corresponding quadrangle, and (2) determination of the intercept by using the statistic from this quadrangle. For CVaR regression, Rockafellar et al. (2014) reduced the optimization problem at Step 1 to a high-dimensional linear programming problem. We suggested two sets of parameters for the mixed-quantile quadrangle and investigated its relationship with the CVaR quadrangle. Set 1 of parameters corresponds to the CVaR regression in Rockafellar et al. (2014), whereas Set 2 is a new set of parameters.

For Set 1 of parameters, the minimization of the error from the CVaR quadrangle was reduced to the minimization of the Rockafellar error from the mixed-quantile quadrangle. For both sets of parameters, the minimization of the deviation in the CVaR quadrangle is equivalent to the minimization of the deviation in the mixed-quantile quadrangle.

We presented optimization problem statements for CVaR regression using the CVaR and mixed-quantile quadrangles. The linear regression problem for estimating CVaR was efficiently implemented in Portfolio Safeguard (2018) with convex and linear programming. We conducted a case study on the return-based style classification of a mutual fund with CVaR regression, regressing the fund return on several indices as explanatory factors. Numerical results validating the theoretical statements are posted on the web (see Case Study (2016)).

**Supplementary Materials:** Data and codes used in the case study can be downloaded: 1. Case Study (2016): Estimation of CVaR through Explanatory Factors with CVaR (Superquantile) Regression. http://www.ise.ufl.edu/uryasev/research/testproblems/financial\_engineering/on-implementation-of-cvar-regression/. 2. Case Study (2014): Style Classification with Quantile Regression. http://www.ise.ufl.edu/uryasev/research/testproblems/financial\_engineering/style-classification-with-quantile-regression/.

**Author Contributions:** Conceptualization, S.U.; Formal analysis, V.K.; Investigation, A.G.; Methodology, S.U.; Software, V.K.; Supervision, S.U.; Writing—original draft, A.G.

**Funding:** Research of Stan Uryasev was partially funded by the AFOSR grant FA9550-18-1-0391 on Massively Parallel Approaches for Buffered Probability Optimization and Applications.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Appendix A CVaR Regression with Rockafellar Error: Convex and Linear Programming**

The value of the Rockafellar error with a given set of parameters $\lambda_k$, $\alpha_k$ ($\alpha_k \in (0,1)$, $k = 1,\dots,r$, $\sum_{k=1}^r \lambda_k = 1$) for a random variable $X$ is the minimum, with respect to a set of variables $B_1,\dots,B_r$, of a mixture of Koenker–Bassett error functions with one linear constraint on these variables:

$$\text{Rockafellar\_Error}(X)_{\lambda_1,\alpha_1,\dots,\lambda_r,\alpha_r} = \min_{B_1,\dots,B_r}\left\{ \sum_{k=1}^r \lambda_k \mathcal{E}_{\alpha_k}(X - B_k) \;\middle|\; \sum_{k=1}^r \lambda_k B_k = 0 \right\}$$

where $\mathcal{E}_{\alpha_k}(X - B_k) = E\left[\frac{\alpha_k}{1-\alpha_k}[X - B_k]^+ + [X - B_k]^-\right]$ is the normalized Koenker–Bassett error.

By using the regret from the mixed-quantile quadrangle, we express the Rockafellar error as follows:

$$\text{Rockafellar\_Error}(X)_{\lambda_1,\alpha_1,\dots,\lambda_r,\alpha_r} = \min_{B_1,\dots,B_r}\left\{ \sum_{k=1}^r \frac{\lambda_k}{1-\alpha_k} E[X - B_k]^+ \;\middle|\; \sum_{k=1}^r \lambda_k B_k = 0 \right\} - E[X].$$
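For an equally weighted sample, this minimization is a small convex problem in $B_1,\dots,B_r$. The following is a minimal illustrative sketch, not part of the paper's codes; the function name `rockafellar_error`, the use of CVXPY, and the example data are our own assumptions:

```python
# Illustrative sketch (not from the paper): Rockafellar error of an equally
# weighted sample X, computed by minimizing the mixed-quantile regret over
# B_1,...,B_r subject to the constraint sum_k lambda_k * B_k = 0.
import numpy as np
import cvxpy as cp

def rockafellar_error(X, alphas, lams):
    r = len(alphas)
    B = cp.Variable(r)
    # mixed-quantile regret: sum_k lambda_k/(1-alpha_k) * E[X - B_k]^+
    regret = sum(lams[k] / (1 - alphas[k]) * cp.sum(cp.pos(X - B[k])) / len(X)
                 for k in range(r))
    problem = cp.Problem(cp.Minimize(regret - np.mean(X)), [lams @ B == 0])
    problem.solve()
    return problem.value

# example with two quantile levels and equal weights (hypothetical data)
X = np.random.default_rng(0).normal(size=1000)
print(rockafellar_error(X, alphas=np.array([0.75, 0.9]), lams=np.array([0.5, 0.5])))
```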

For the linear regression problem, the random variable $X$ is defined by the differences between observed values $V_i$ and linear functions $C_0 + \mathbf{C}^T\mathbf{Y}_i$, where $\mathbf{Y}_i$ is a vector of explanatory factors, $i = 1, 2, \dots, \nu$. Vectors $\mathbf{C}$ and $\mathbf{Y}$ have $m$ components, $\mathbf{C} = (C_1, \dots, C_m)$, $\mathbf{Y} = (Y_1, \dots, Y_m)$, and $C_0$ is a scalar. The residuals $X_i = V_i - C_0 - \mathbf{C}^T\mathbf{Y}_i$ are the values (scenarios) of the atoms of the random variable $X$; we assume that all atoms have equal probabilities. The estimation of $V$ with factors $\mathbf{Y}$ is performed by minimizing the error with respect to the variables $\mathbf{C}$, $C_0$. In the following, we use Set 1 of parameters. Let us denote:

$$E[V] = \frac{1}{\nu}\sum_{i=1}^{\nu} V_i, \qquad E[\mathbf{Y}] = \frac{1}{\nu}\sum_{i=1}^{\nu} \mathbf{Y}_i.$$

*Appendix A.1 Convex Programming Formulation for CVaR Regression*

Minimize the Rockafellar error:

$$\min_{B_1,\dots,B_r,\,C_0,C_1,\dots,C_m} \left\{ \sum_{k=1}^r \frac{\lambda_k}{(1-\alpha_k)\nu} \sum_{i=1}^{\nu} \left[ V_i - C_0 - \mathbf{C}^T \mathbf{Y}_i - B_k \right]^+ - E[V] + C_0 + \mathbf{C}^T E[\mathbf{Y}] \right\} \tag{A1}$$

subject to the constraint:

$$\sum_{k=1}^{r} \lambda_k B_k = 0.\tag{A2}$$

This optimization problem has a convex objective and one linear constraint.
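As an illustration, problem (A1)–(A2) can be stated directly in a convex modeling language. The sketch below uses CVXPY and is only a minimal example under our own naming assumptions (`cvar_regression_convex`, arrays `V`, `Y`, `alphas`, `lams`); it is not the Portfolio Safeguard implementation used in the paper:

```python
# Minimal sketch (an assumption, not the authors' code) of the convex CVaR
# regression problem (A1)-(A2); V has shape (nu,), Y has shape (nu, m),
# alphas and lams hold the quadrangle parameters alpha_k and lambda_k.
import numpy as np
import cvxpy as cp

def cvar_regression_convex(V, Y, alphas, lams):
    nu, m = Y.shape
    r = len(alphas)
    B = cp.Variable(r)       # shifts B_1,...,B_r
    C0 = cp.Variable()       # regression intercept
    C = cp.Variable(m)       # regression coefficients

    resid = V - C0 - Y @ C   # residuals X_i = V_i - C_0 - C^T Y_i
    # Rockafellar error expressed through the mixed-quantile regret, objective (A1)
    error = sum(lams[k] / ((1 - alphas[k]) * nu) * cp.sum(cp.pos(resid - B[k]))
                for k in range(r))
    objective = error - np.mean(V) + C0 + Y.mean(axis=0) @ C

    problem = cp.Problem(cp.Minimize(objective), [lams @ B == 0])  # constraint (A2)
    problem.solve()
    return C0.value, C.value
```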

*Appendix A.2 Linear Programming Formulation for CVaR Regression*

Equations (A1) and (A2) are reduced to linear programming with additional variables and constraints:

$$\min_{\substack{A_{11},\dots,A_{r\nu},\\ B_1,\dots,B_r,\,C_0,C_1,\dots,C_m}} \left\{ \sum_{k=1}^r \frac{\lambda_k}{(1-\alpha_k)\nu} \sum_{i=1}^{\nu} A_{ki} - E[V] + C_0 + \mathbf{C}^T E[\mathbf{Y}] \right\} \tag{A3}$$

subject to constraints:

$$\sum_{k=1}^{r} \lambda_k B_k = 0 \tag{A4}$$

$$A_{ki} \ge V_i - C_0 - \mathbf{C}^T \mathbf{Y}_i - B_k, \quad k = 1,\dots,r, \; i = 1,\dots,\nu \tag{A5}$$

$$A_{ki} \ge 0, \quad k = 1,\dots,r, \; i = 1,\dots,\nu \tag{A6}$$

The linear function $C_0^* + \mathbf{C}^{*T}\mathbf{Y}$ estimates $CVaR_\alpha(V)$ as a function of the explanatory factors $\mathbf{Y}$, where $C_0^*$ and $\mathbf{C}^*$ are the optimal values of the variables in Equations (A1) and (A2) or (A3)–(A6).
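The LP (A3)–(A6) can also be assembled explicitly in matrix form. The following sketch is our own assumption (not the paper's PSG, MATLAB, or R code); it builds the problem with `scipy.optimize.linprog`, stacking the variables as $[A_{11},\dots,A_{r\nu},\,B_1,\dots,B_r,\,C_0,\,C_1,\dots,C_m]$:

```python
# Sketch (an assumption, not the authors' code) of the LP (A3)-(A6) with
# scipy.optimize.linprog; variable order: [A_11..A_r_nu, B_1..B_r, C_0, C_1..C_m].
import numpy as np
from scipy.optimize import linprog

def cvar_regression_lp(V, Y, alphas, lams):
    nu, m = Y.shape
    r = len(alphas)
    nA = r * nu                       # number of auxiliary variables A_ki
    n = nA + r + 1 + m                # total number of LP variables

    # objective (A3): sum_k lam_k/((1-alpha_k)nu) sum_i A_ki + C_0 + C^T E[Y]
    # (the constant -E[V] does not affect the minimizer and is omitted)
    c = np.zeros(n)
    for k in range(r):
        c[k * nu:(k + 1) * nu] = lams[k] / ((1 - alphas[k]) * nu)
    c[nA + r] = 1.0                   # C_0
    c[nA + r + 1:] = Y.mean(axis=0)   # C^T E[Y]

    # inequality constraints (A5): V_i - C_0 - C^T Y_i - B_k - A_ki <= 0
    A_ub = np.zeros((r * nu, n))
    b_ub = np.zeros(r * nu)
    for k in range(r):
        for i in range(nu):
            row = k * nu + i
            A_ub[row, k * nu + i] = -1.0        # -A_ki
            A_ub[row, nA + k] = -1.0            # -B_k
            A_ub[row, nA + r] = -1.0            # -C_0
            A_ub[row, nA + r + 1:] = -Y[i]      # -C^T Y_i
            b_ub[row] = -V[i]

    # equality constraint (A4): sum_k lam_k B_k = 0
    A_eq = np.zeros((1, n))
    A_eq[0, nA:nA + r] = lams
    b_eq = np.array([0.0])

    # bounds (A6): A_ki >= 0; B_k, C_0, C_j are free
    bounds = [(0, None)] * nA + [(None, None)] * (r + 1 + m)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    C0 = res.x[nA + r]
    C = res.x[nA + r + 1:]
    return C0, C
```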

### **Appendix B Codes Implementing Regression Optimization Problems**

This appendix contains codes implementing Optimization Problems 1–4 described in Section 5. The codes and solution results are posted at the internet link Case Study (2016). The codes are written in the Portfolio Safeguard (PSG) Text, MATLAB, and R environments. Below are the codes in the Text environment.
