1. Introduction
Non-Markov models and integral equations have been widely used in financial mathematics to model memory effects and path-dependent dynamics (see [1] for previous studies or [2] for recent advances and results). These properties make them suitable for describing rough volatility, volatility clustering, and heavy tails.
The main objective of this contribution is to assess financial risk using historical information viewed as a multidimensional continuous process. It should be noted that the popularity of this topic is motivated by many considerations. However, the nonlinearity and long-memory properties of the Volterra model are the main motivations for adopting this model in the analysis of financial time series. The first substantial results on this topic were developed in [3], whose authors extended stochastic volatility models with long-memory kernels for financial applications. In [4], Volterra-type equations were applied to analyze volatility surfaces. The rough Heston model studied in [5] improves derivative pricing by using Volterra equations to capture the effect of rough volatility. Furthermore, in [6], the importance of non-Markov process models in financial markets was discussed. This line of work extends the classical framework towards precise derivative pricing, volatility surface calibration, and risk management. Thus, assessing financial risk using historical information is a challenge for bankers and investors. From a statistical point of view, two risk measures dominate this field: value at risk (VaR) and expected shortfall (ES). However, it is well known that these two measures are not relevant in some situations (see [7,8] for further discussion). In particular, the lack of coherence of VaR and the non-elicitability of ES are intrinsic drawbacks of these standard measures.
Alternatively, in this contribution, we examine financial risk using the expectile function. The literature on the expectile model in mathematical statistics is still limited compared to that on VaR or ES. Financial risk was studied using parametric methods in [9], whose authors analyzed the asymptotic behavior of multivariate expectiles under the Fréchet model and proposed extreme multivariate expectile estimators in the cases of asymptotic independence and co-monotonicity. It was also shown in [10] that the expectile is a robust alternative to VaR and ES, as it efficiently incorporates auxiliary information into risk estimation. The dynamic expectile, which is based on expectile regression, has been considered in many applied fields such as econometrics, finance, and actuarial science (see, for example, [11] as a pioneering work or [12] for recent advances and references).
However, from a theoretical point of view, conditional expectiles are treated in the same way as classical regression. For example, in [13], conditional expectations were generalized to conditional expectiles by minimizing an asymmetric quadratic loss function, and the main properties of the resulting function were given. In [14], estimators of regression quantiles were defined via an asymmetric absolute loss, and the same authors, in [15], introduced a new approach to regression quantiles based on estimating the coefficients of regression equations. Furthermore, in [16], the authors used expectiles to propose methods to estimate VaR and ES (or conditional VaR (CVaR)). Additional details and results on regression can be found in references such as [17,18,19,20]. Furthermore, in [20], the authors extended expectile-based risk measures to multivariate parameters by presenting different approaches to construct such measures, and they discussed the consistency properties of multivariate expectiles. The same authors studied the asymptotic behavior of multivariate expectiles for the Fréchet model and proposed estimators for extreme multivariate expectiles under the assumptions of asymptotic independence and co-monotonicity.
It must be said that although this work focuses on the finite-dimensional case, this study can be extended to the framework of the analysis of the effect of a functional covariate on a scalar response variable. Recall that the first results on this subject were presented in [21], where the almost complete convergence and the asymptotic normality of the estimator obtained by the kernel method (KM) were proven for independent data. These results were then generalized to the case of dependent data by the same authors (see [22]). In addition, in [23], an alternative estimation method was proposed, where the dynamic expectile was estimated using a functional local linear approach; these authors also proved the Borel–Cantelli convergence of the constructed functional estimator. Functional data analysis (FDA), as highlighted in this last article, is essential for many application areas. For fundamental tools in this field, one can refer, for example, to the monographs [24,25,26].
As explained above, the novelty of this contribution lies in the assessment of financial risk using a chaotic Gaussian Volterra model. This study thus offers several main advantages. First, by exploiting expectile regression, we provide an efficient risk model that combines the advantages of VaR and ES. A key advantage of this approach is its sensitivity to extreme values, allowing for more prudent and reactive risk management. Second, financial data are modeled as continuous Gaussian Volterra processes, which are particularly well suited for financial time series. These processes are powerful tools to capture memory effects and path-dependent dynamics in financial markets (see [4]). Specifically, the process kernel introduces nonlocal interactions and long-term dependence, making it very effective for describing stylized features of financial time series, such as rough volatility, volatility clustering, and heavy tails. The Volterra model also improves applications in derivatives pricing, volatility surface calibration, and risk management. Moreover, in nonparametric estimation, the Volterra model provides a flexible framework to reconstruct financial dynamics without imposing strict parametric assumptions.
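As a reminder of the object used throughout (written here in its standard form; the specific kernels are those of Section 4.1), a Gaussian Volterra process driven by a standard Brownian motion $W$ reads

$$X_t = \int_0^t K(t,s)\, dW_s, \qquad t \in [0,T],$$

where the kernel $K$ encodes the memory and path dependence of the dynamics: a singular kernel yields rough, non-Markovian paths, while a smooth exponential kernel recovers Markovian behavior.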
All these features are explored and evaluated using simulated and real data examples. Of course, the practical use of this model requires a solid mathematical basis that allows one to justify its adoption. To this end, we have established almost complete consistency of the constructed estimator and specified its convergence rate, as well as the conditions necessary for ideal implementation.
The remainder of this paper is organized as follows. Section 2 sets out the general framework of this contribution. The results obtained are presented and discussed in Section 3. Section 4 is devoted to the empirical analysis, covering applications on both simulated and real data. In Section 5, we provide a general conclusion and outline some research perspectives for further investigations in this area. Finally, proofs of the auxiliary results are provided in Section 6.
4. Financial Time Series Analysis
4.1. Simulation Study
The main objective of this part is to examine the ease of application of the estimator using simulated data. Recall that the appeal of this estimator lies in the easy choice of its parameters. In this context of kernel smoothing, the choice of the bandwidth parameter is a crucial issue. From a theoretical point of view, there are three common rules in nonparametric functional data analysis to answer this question. Examples include the symmetric least squares cross-validation rule in [28], the bootstrap approach in [29], and the Bayesian approach in [30]. In this simulation part, we introduce a new algorithm based on the asymmetric least squares loss function. Thus, we conduct a simulation study to examine the applicability of our procedure. In fact, the new rule is based on the following leave-one-out criterion:

$$\widehat{h} = \arg\min_{h \in H} \; \sum_{i=1}^{n} \eta_p\!\left(Y_i - \widehat{\xi}_p^{(-i)}(X_i)\right), \qquad (12)$$

where $\eta_p(t) = |p - \mathbf{1}_{\{t<0\}}|\, t^2$ is the scoring function of the expectile, $\widehat{\xi}_p^{(-i)}$ denotes the estimator computed without the $i$-th observation, and $H$ is a given subset of positive real numbers.
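A minimal R sketch of this selection rule is given below. It assumes a precomputed matrix D of pairwise functional distances and a response vector y; the quadratic-kernel weights and the fixed-point solver for the weighted expectile are simplified stand-ins for the exact construction.

```r
## Sketch of rule (12): leave-one-out CV with the expectile scoring function.
eta_p <- function(t, p) abs(p - (t < 0)) * t^2      # expectile scoring function

## Kernel-weighted expectile: fixed-point iteration on the first-order
## condition of the asymmetric quadratic loss.
kernel_expectile <- function(y, w, p, tol = 1e-8, maxit = 100) {
  theta <- weighted.mean(y, w)
  for (it in seq_len(maxit)) {
    a         <- abs(p - (y < theta)) * w
    theta_new <- sum(a * y) / sum(a)
    if (abs(theta_new - theta) < tol) break
    theta <- theta_new
  }
  theta
}

## Leave-one-out cross-validation over a bandwidth grid H.
cv_bandwidth <- function(D, y, p, H) {
  n  <- length(y)
  cv <- sapply(H, function(h) {
    loss <- 0
    for (i in seq_len(n)) {
      w <- pmax(1 - (D[i, -i] / h)^2, 0)            # quadratic kernel weights
      if (sum(w) == 0) w <- rep(1, n - 1)           # fallback: global mean
      theta <- kernel_expectile(y[-i], w, p)
      loss  <- loss + eta_p(y[i] - theta, p)
    }
    loss
  })
  H[which.min(cv)]
}
```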
We consider the functional input variables using the sampling rule (1). The sampling rule is constructed using the Volterra process. We generate the Volterra process using the routine st.int from the R package Sim.DiffProc. In order to cover different cases, we simulate the process with two different kernels: a shifted fractional kernel and a Volterra exponential kernel. It is clear that the choice of the kernel has a significant impact on the covariance of the observations. Thus, different choices of kernels allow one to cover many degrees of dependence. The shape of the different curves is illustrated in Figure 1.
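For reproducibility, the sketch below simulates discretized Gaussian Volterra paths directly from a Riemann sum of the stochastic integral; the kernel parameters (Hurst-type exponent H, shift delta, rate lambda) are illustrative assumptions rather than the values used for Table 1, and the direct discretization avoids depending on the st.int interface.

```r
## Simulating X_t = int_0^t K(t,s) dW_s by a Riemann-sum discretization.
set.seed(123)
n    <- 500                          # grid points
Tmax <- 1                            # time horizon
dt   <- Tmax / n
tt   <- seq(dt, Tmax, by = dt)

K_frac <- function(t, s, H = 0.3, delta = 0.01) (t - s + delta)^(H - 0.5)  # shifted fractional
K_exp  <- function(t, s, lambda = 2) exp(-lambda * (t - s))                # exponential

simulate_volterra <- function(K) {
  dW <- rnorm(n, sd = sqrt(dt))      # Brownian increments
  X  <- numeric(n)
  for (i in seq_len(n)) {
    s    <- tt[seq_len(i)]
    X[i] <- sum(K(tt[i], s) * dW[seq_len(i)])
  }
  X
}

X_rough  <- simulate_volterra(K_frac)  # rough, long-memory path
X_markov <- simulate_volterra(K_exp)   # Markovian-type path
matplot(tt, cbind(X_rough, X_markov), type = "l", lty = 1,
        xlab = "t", ylab = expression(X[t]))
```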
Next, we consider the output variable $Y$. Furthermore, we check the behavior of the estimator using the quadratic kernel on $(0, 1)$ and the PCA metric associated with the third eigenvalue. For the selection of the smoothing parameter, we optimize rule (12) via two approaches: (i) the local k-Nearest Neighbors approach, in which $H$ is the set of bandwidths for which the ball centered at the localization point contains exactly $k$ neighbors, with $k$ ranging over a fixed grid; and (ii) the global approach, in which the subset $H$ is the set of quantiles of the vector of distances between the observations and the localization point, for a given grid of quantile orders. A short construction of these two grids is sketched below.
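The two bandwidth grids can be built as follows, reusing the distance matrix D from the earlier sketch; the neighbor counts and quantile orders shown are illustrative choices, not the paper's exact values.

```r
## Candidate bandwidth grids for rule (12); ix is the index of the
## localization point.
d_x      <- sort(D[ix, -ix])                       # distances to the localization point
H_knn    <- d_x[c(5, 10, 15, 20, 25)]              # local: h = distance to the k-th neighbor
H_global <- quantile(D[lower.tri(D)],              # global: quantiles of all distances
                     probs = seq(0.05, 0.50, by = 0.05))
```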
Finally, the performance of the financial risk model is verified using a backtesting criterion. Furthermore, the comparison results are summarized in Table 1, where we report the values of this criterion for different sample sizes N, for the two Volterra kernels, and for three values of the expectile order p.
The obtained results confirm the good behavior of the estimator. It appears that rule (12) is well adapted to compute the estimator. Note that the behavior of the estimator is affected by the nature of the correlation of the functional data through the kernel. In particular, it is well documented that the shifted fractional kernel generates a non-Markovian semi-martingale process and that the Volterra exponential kernel yields a process satisfying the Markov property. Finally, we can see that the empirical results corroborate the theoretical results. More precisely, the performance of the estimator improves with the sample size N and the degree of dependence.
4.2. Real Data Application
One of the most challenging questions in financial risk management is how to find appropriate decision rules to trade off the return and the loss of a financial asset. Since the expectile can be characterized through a ratio of expected loss to expected gain, expectile regression seems to be the right candidate to answer this question.
Note that the main feature of our approach is to analyze financial time series data using nonparametric methods recently considered in big data analysis. In particular, functional expectiles allow us to integrate the functional aspect of financial time series. The novelty here is to consider multifunctional data that allow us to adjust the risk of several assets. For this aim, we consider three assets and take, as input, the exchange rates (against the USD) of three major currencies, namely the euro (EUR), the pound sterling (GBP), and the Swiss franc (CHF). The data are available at https://stooq.com/db/h/ (accessed on 8 December 2024).
We consider the data of one week (18–22 November 2024) with a time step of 5 min. The data for the three currencies are presented in Figure 2.
Often, investors are interested in the log-return, defined by $R_t = \log(S_t) - \log(S_{t-1})$, where $S_t$ denotes the exchange rate at time $t$, and they want to predict the extreme return at a time $T$ given the process $(R_t)_{t < T}$. The log-return series is clearly centered and exhibits high volatility. Before processing this financial time series as functional data, we start by reproducing the Volterra process associated with the log-returns, and we generate the functional sample using the sampling approach discussed in Section 2.2. We cut the series into 118 curves, each representing one hour.
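This preprocessing step can be sketched as follows, assuming a hypothetical vector price holding the 5-min quotes of one currency for the week:

```r
## Log-returns and a cut into hourly functional curves (12 five-minute
## observations per hour); `price` is a hypothetical input vector.
r           <- diff(log(price))
n_per_curve <- 12                                   # 12 x 5 min = 1 hour
n_curves    <- floor(length(r) / n_per_curve)
curves      <- matrix(r[seq_len(n_curves * n_per_curve)],
                      nrow = n_curves, byrow = TRUE)
```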
We aim to predict the maximum variation among the three currencies, and we compare the efficiency of the presented approach with the standard risk measure, VaR, when the latter is estimated empirically.
In order to obtain a fair comparison between the two risk measures, we carry out our empirical analysis using the same criteria for selecting the fundamental parameters. In particular, for the functional expectile regression, we consider the quadratic kernel on $(0, 1)$ and the PCA metric based on the principal component analysis reduction method (see [28]), and we choose the best smoothing parameter via the local k-Nearest Neighbors approach. To examine the effectiveness of the two approaches, we split the observations into two parts: (i) a training sample (90 observations) and (ii) a test sample (28 observations). This treatment is then repeated 60 times.
These calculations allow us to present, in Figure 3, the percentage of exceedances for the two models, the expectile-based measure and the empirical VaR, at each replication. This percentage quantifies the violation cases, in which the observed process exceeds the risk level predicted by each model.
It appears clearly that the expectile-based model performs better at detecting financial risk than its competitor. This statement is confirmed by its frequency of exceedances, which is very close to the threshold p; in other words, the empirical proportion of test observations exceeding the predicted risk level matches the nominal level. As expected, this superiority is justified by the fact that the expectile estimator is constructed from asymmetrically weighted least squares errors. This increases its sensitivity to outliers and simultaneously controls the gain and the loss, so the model detects violations when they occur. Moreover, even though the alternative model also performs well, its insensitivity to outliers reduces its reactivity to short-term risk.
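The backtest underlying Figure 3 reduces to the computation sketched below; y_test and the two prediction vectors are hypothetical names for the held-out returns and the fitted risk levels of each model.

```r
## Empirical exceedance frequency versus the nominal level p; a
## well-calibrated risk model gives a rate close to p.
violation_rate <- function(y_test, risk_hat) mean(y_test > risk_hat)

## Hypothetical usage on one of the 60 splits:
## violation_rate(y_test, pred_expectile)   # expectile-based model
## violation_rate(y_test, pred_var)         # empirical VaR
```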
5. Conclusions
Motivated by the need to develop an algorithm to manage financial risk instantly, we have proposed, in this contribution, new statistical algorithms adapted to high-frequency data observed on a fine discretization grid (e.g., every minute). This new method constitutes an alternative to classical approaches based on the multivariate GARCH model, and it is more informative than the classical model. Indeed, on the one hand, the multivariate GARCH model requires specific assumptions on the distribution of the data that are generally not satisfied. On the other hand, the multivariate GARCH model does not allow one to exploit the maximum amount of information in high-frequency data; it is, in reality, only usable for low frequencies. With recent technological developments, the digitalization of financial risk management has become inevitable, and standard models must be modernized. Our empirical analysis confirmed that the multifunctional expectile model is more efficient than the empirical VaR. Besides its consistency and elicitability properties, the multifunctional expectile model is highly sensitive to outliers, which allows it to adapt to the volatility of the underlying financial data.
The theoretical part of the paper provides solid mathematical support for the use of this expectile model as a financial risk measure. The asymptotic results in this part are obtained under standard conditions of functional data analysis. They cover the main elements of nonparametric functional statistics, including the data, the model, and the estimation techniques. As in the vector case, the curse of dimensionality related to the regressors' components has a negative impact on the model accuracy. Thus, reducing this negative effect is a natural perspective of this contribution. In this context, combining our approach with the single-index model or the partially linear model seems a promising way to reduce this limitation.
Moreover, the present contribution opens other important avenues for the future. For example, it would be very interesting to compare parametric estimation with nonparametric functional techniques. Note that one of the important characteristics of expectile regression is that it can be estimated using the same ideas as quantile regression. It would also be interesting to extend this study to other frameworks for financial time series, such as the spatial case or the multifunctional ergodic case.
Introducing another estimation approach to take into account missing observations or incomplete data is also an interesting avenue for the future. Building an alternative estimator of quantile regression and of mathematical expectation from the parametric or nonparametric estimator of the expectile model is another good topic to explore. Of course, this list is not exhaustive, and there are many other topics that can be developed.