Article

Deep Learning Artificial Neural Network for Pricing Multi-Asset European Options

1
School of Economics and Management, Xiangnan University, Chenzhou 423000, China
2
School of Mathematics and Information Science, Xiangnan University, Chenzhou 423000, China
*
Authors to whom correspondence should be addressed.
Mathematics 2025, 13(4), 617; https://doi.org/10.3390/math13040617
Submission received: 5 January 2025 / Revised: 3 February 2025 / Accepted: 10 February 2025 / Published: 13 February 2025
(This article belongs to the Special Issue Advances in Partial Differential Equations: Methods and Applications)

Abstract: This paper studies a p-layers deep learning artificial neural network (DLANN) for pricing European multi-asset options. Firstly, a p-layers DLANN is constructed with undetermined weights and biases. Secondly, discrete data built from the terminal values of the partial differential equation (PDE) and from points satisfying the PDE of the multi-asset option are fed into the p-layers DLANN. Thirdly, using the least-squares error as the objective function, the weights and biases of the DLANN are trained. In order to optimize the objective function, the partial derivatives with respect to the weights and biases of the DLANN are carefully derived. Moreover, to improve computational efficiency, a time-segment DLANN is proposed. Numerical examples are presented to confirm the accuracy, efficiency, and stability of the proposed p-layers DLANN. Computational examples show that the DLANN's relative error is less than 0.5% for different numbers of assets $d = 1, 2, 3, 4$. In the future, the p-layers DLANN can be extended to American options, Asian options, lookback options, and so on.

1. Introduction

In the last 30 years, different methods and techniques for multi-asset option pricing have been developed. For option pricing on one to three assets, the radial basis function (RBF) method uses radial basis functions to approximate European and American options [1,2,3,4,5,6,7,8,9,10]. For the low-dimensional case, semi-analytical and analytical solutions of European and American options can be obtained [11,12]. The finite difference (FD) method for solving an option's partial differential equation (PDE) was derived under the Black–Scholes model and the Heston model [13,14,15,16,17]. A Monte Carlo simulation for SABA options of calorimeter systems and multi-asset options was proposed [18,19,20]. Laplace transform and Mellin transform methods for some classical options have been presented [15,21,22,23,24]. The willow tree (WT) method under Lévy processes was established [25,26].
In general, the RBF, FD, and WT methods handle only lower-dimensional cases ($d \le 2$) easily. Analytical or semi-analytical methods apply only to some simple or classical options. Although Monte Carlo simulation is suitable for any high-dimensional problem, it is computationally inefficient, consuming significant CPU time and storage space.
In the past decade, several scholars have applied artificial neural networks (ANNs) to solve option values. Anderson and Ulrich used deep neural networks to price American options [27]. Caverhill et al. proposed an ANN method for option pricing and hedging [28]. Based on the residual ANN, Gan and Liu discussed option pricing [29]. Kapllani and Teng provided a deep learning artificial neural network to solve nonlinear backward stochastic differential equations (BSDEs) [30]. For estimating option prices, Lee and Son used artificial neural networks to predict arbitrage-free American options [31]. Mary and Salchenberger proposed an ANN model [32] to estimate option prices. Nikolas and Lorenz discussed a deep learning method to interpolate between BSDEs and PINNs [33]. Balkin et al. considered a machine learning algorithm to solve stochastic delay differential equations [34]. Teng et al. discussed combining artificial neural networks with classical volatility prediction models for option pricing [35,36]. Umeorah, Mashele, and Agbaeze provided a barrier option pricing method with an ANN [37]. Wang proposed an artificial neural network prediction model for stock index options [38]. None of these approaches, however, works directly from the option's PDE.
Unlike general data-fitting problems, applying artificial neural networks to option pricing raises several difficulties. We know that these options satisfy the Black–Scholes PDE, so a proper discretization of the PDE must be carried out before a network can be applied. Using an artificial neural network to approximate the PDE of an option is the main difficulty and innovation of this paper.
In this paper, a p-layers deep learning artificial neural network (DLANN) is considered for multi-dimensional option pricing. After a p-layers DLANN is established, the PDE of a multi-asset option is expressed in the ANN. By minimizing the objective functions of the DLANN, all parameters (including the weights and biases) are trained. The p-layers DLANN is relatively easy to implement, shows the capability for multi-asset option pricing, and is much faster than other schemes (such as Monte Carlo simulation). It is noted that this paper is an improved version of Zhou and Wu [39].
This paper is arranged as follows. In Section 2, we introduce the PDE of a multi-asset model. In Section 3, a p-layers deep learning ANN (p-DLANN) computational frame is proposed; the derivatives of the p-DLANN's weights and biases are listed, and the corresponding update formulas are given. Section 4 lists some examples to confirm the efficiency and accuracy of the proposed algorithms. In Section 5, some conclusions and future work are proposed. Finally, Appendix A collects the derivative formulas for the weights and biases and some additional computational results.

2. Model of Multi-Asset Options

Let $S = (S_1(t), S_2(t), \ldots, S_d(t))^\top$, with $(\cdot)^\top$ denoting transposition, be the $d$ asset prices at time $t$, and let $\ln S_I(t)$, $I = 1, 2, \ldots, d$, be modeled by Brownian motion together with a drift term under a no-arbitrage assumption, i.e., the $S_I(t)$ are governed by the stochastic differential equations (SDEs)
$$d\ln S_I(t) = \frac{dS_I(t)}{S_I(t)} = (r - q_I)\,dt + \sigma_I\,dW_t^{(I)}, \qquad I = 1, 2, \ldots, d. \tag{1}$$
In SDEs (1), $r$ represents the risk-free interest rate, $q_I$ the dividend yield, and $\sigma_I$ the volatility of asset $I$. $W_t^{(I)}$, $I = 1, 2, \ldots, d$, are $d$ standard Brownian motions, and it is assumed that
$$\mathbb{E}\big[dW_t^{(I)}\big] = 0, \quad \mathrm{Var}\big[dW_t^{(I)}\big] = dt, \quad I = 1, \ldots, d, \qquad \mathrm{Cov}\big[dW_t^{(I)}, dW_t^{(J)}\big] = \rho_{IJ}\,dt, \quad I, J = 1, \ldots, d, \tag{2}$$
where $\rho_{IJ}$ are the correlation coefficients between $dW_t^{(I)}$ and $dW_t^{(J)}$. Then, the PDE of the $d$-asset option value $V(t, S)$ emerges as
$$\frac{\partial V(t,S)}{\partial t} + \frac{1}{2}\sum_{I=1}^{d}\sum_{J=1}^{d}\rho_{IJ}\sigma_I\sigma_J S_I S_J \frac{\partial^2 V(t,S)}{\partial S_I \partial S_J} + \sum_{I=1}^{d}(r - q_I) S_I \frac{\partial V(t,S)}{\partial S_I} - r V(t,S) = 0, \qquad t \in (0,T), \; S \in \Omega_S, \tag{3}$$
where $S = (S_1(t), S_2(t), \ldots, S_d(t))^\top \in \Omega_S = (0, +\infty)^d$.
Let the remaining time to maturity be $\tau = T - t$ and $x_I(t) = \ln S_I(t)$ for $I = 1, 2, \ldots, d$; then, the $d$-asset option value $v(\tau, x) := V(T - \tau, S)$ satisfies
$$\frac{\partial v(\tau,x)}{\partial \tau} - \mathcal{A}\, v(\tau,x) = 0, \qquad \tau \in (0,T), \; x \in \Omega, \tag{4}$$
where $x = (x_1(t), x_2(t), \ldots, x_d(t))^\top \in \Omega = (-\infty, +\infty)^d$, and the linear differential operator $\mathcal{A}$ is defined as
$$\mathcal{A} := \frac{1}{2}\sum_{I=1}^{d}\sum_{J=1}^{d} a_{IJ}\,\frac{\partial^2}{\partial x_I \partial x_J} + \sum_{I=1}^{d} b_I\,\frac{\partial}{\partial x_I} - r, \tag{5}$$
with coefficients $a_{IJ} := \rho_{IJ}\sigma_I\sigma_J$ and $b_I := r - q_I - \frac{1}{2}a_{II}$.
For a European option, v ( τ , x ) has the initial values (also called terminal conditions or payoff functions, for time τ = 0 )
$$v(0, x) = \Pi(x). \tag{6}$$
System (4)–(6) can be written as
$$\begin{cases} \dfrac{\partial v(\tau,x)}{\partial \tau} - \mathcal{A}\, v(\tau,x) = 0, & \tau \in (0,T], \; x \in \Omega, \\[4pt] v(0, x) = \Pi(x), & x \in \Omega. \end{cases} \tag{7}$$
For put options and geometric mean payoff functions Π ( x ) , the initial values of v are taken as
$$v(0, x) = \Pi(x) = \max\Big(0,\; K - e^{\sum_{I=1}^{d}\alpha_I x_I}\Big), \qquad 0 \le \alpha_I \le 1, \quad \sum_{I=1}^{d}\alpha_I = 1, \quad x \in \Omega, \tag{8}$$
with strike price $K$. In the general case, System (7) has no analytical solution, and a numerical method is needed to determine the option value $v(\tau, x)$ in the domain $(0, T] \times \Omega$; for example, Monte Carlo simulation, the RBF method, the finite difference method, or an artificial neural network method.
With the geometric mean payoff function Π ( x ) defined by (8), the put option governed by System (7) has an analytical solution (see Expression (42)). However, for a put option with an arithmetic mean payoff function
$$\Pi(x) = \max\Big(0,\; K - \sum_{I=1}^{d}\alpha_I e^{x_I}\Big), \qquad 0 \le \alpha_I \le 1, \quad \sum_{I=1}^{d}\alpha_I = 1, \quad x \in \Omega, \tag{9}$$
the option value $v(\tau, x)$ has no analytical solution, so a numerical scheme is a feasible way to price a multi-asset option governed by (7) and (9). In this paper, we propose a p-layers DLANN to price the multi-asset option values.

3. Computational Frame of p-DLANN

3.1. Structure of p-DLANN

A deep learning artificial neural network (DLANN) is defined as shown in Figure 1. In Figure 1, the p-layers ANN includes an input layer, $p - 2$ hidden layers, and an output layer. For example, the neural units in five layers are $n_1 = d + 1$, $n_2 = 4$, $n_3 = 5$, $n_4 = 4$, and $n_5 = 1$. We will see in Section 4 that this structure is powerful enough for European option pricing.
The evolution of the p-layers DLANN is represented as matrix expressions,
$$O^{(\ell)} = \omega^{(\ell)} U^{(\ell-1)} + \theta^{(\ell)}, \qquad U^{(\ell)} = f_\ell\big(O^{(\ell)}\big), \qquad \ell = 2, 3, \ldots, p, \tag{10}$$
where $O^{(\ell)}$ is the input to the $\ell$th layer, $U^{(\ell)}$ is the output of layer $\ell$, and $f_\ell$ is the transfer (or activation) function. In component form,
$$O_j^{(\ell)} = \sum_{i=1}^{n_{\ell-1}} \omega_{ji}^{(\ell)} U_i^{(\ell-1)} + \theta_j^{(\ell)}, \qquad U_j^{(\ell)} = f_\ell\big(O_j^{(\ell)}\big), \qquad j = 1, 2, \ldots, n_\ell, \quad \ell = 2, 3, \ldots, p, \tag{11}$$
where $U^{(1)}$ is the $(d+1)$-dimensional input column vector $(x_1, x_2, \ldots, x_d, \tau)^\top$, and $f_\ell(\cdot)$, $\ell = 2, 3, \ldots, p$, are the activation functions, i.e.,
$$f_\ell(x) = \frac{1}{1 + e^{-x}} \quad (\ell = 2, \ldots, p-1, \text{ sigmoid function}), \qquad f_p(x) = x \quad (\text{linear function}). \tag{12}$$
The derivatives $f'_\ell(O)$ and $f''_\ell(O)$ are
$$f'_\ell(O) = \begin{cases} 1, & \ell = p, \\ U(1-U), & \ell < p, \end{cases} \qquad f''_\ell(O) = \begin{cases} 0, & \ell = p, \\ U(1-U)(1-2U), & \ell < p, \end{cases} \tag{13}$$
which can be found in Appendix A.2. Other activation functions $f_\ell(x)$ could be used, but this paper only uses the functions in the above form.
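The closed-form identities in (13) can be checked numerically. Below is a minimal sketch in Python (NumPy assumed; the function name `sigmoid` is ours, not from the paper) comparing $f'(O) = U(1-U)$ and $f''(O) = U(1-U)(1-2U)$ against central finite differences:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Check f'(O) = U(1-U) and f''(O) = U(1-U)(1-2U), with U = f(O),
# against central finite differences on a grid of O values.
O = np.linspace(-4.0, 4.0, 17)
U = sigmoid(O)
h = 1e-5
fd1 = (sigmoid(O + h) - sigmoid(O - h)) / (2 * h)          # numerical f'
fd2 = (sigmoid(O + h) - 2 * U + sigmoid(O - h)) / h ** 2   # numerical f''
assert np.allclose(fd1, U * (1 - U), atol=1e-8)
assert np.allclose(fd2, U * (1 - U) * (1 - 2 * U), atol=1e-4)
```

These identities are what make the recursions (29) and (31) cheap: both derivatives are expressed through the already computed outputs $U$.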
The weights and biases in the ANN are defined as
$$\omega^{(\ell)} = \begin{pmatrix} \omega_{11}^{(\ell)} & \omega_{12}^{(\ell)} & \cdots & \omega_{1 n_{\ell-1}}^{(\ell)} \\ \omega_{21}^{(\ell)} & \omega_{22}^{(\ell)} & \cdots & \omega_{2 n_{\ell-1}}^{(\ell)} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_{n_\ell 1}^{(\ell)} & \omega_{n_\ell 2}^{(\ell)} & \cdots & \omega_{n_\ell n_{\ell-1}}^{(\ell)} \end{pmatrix}, \qquad \theta^{(\ell)} = \begin{pmatrix} \theta_1^{(\ell)} \\ \theta_2^{(\ell)} \\ \vdots \\ \theta_{n_\ell}^{(\ell)} \end{pmatrix}, \tag{14}$$
for $\ell = 2, 3, \ldots, p$. We call (10) or (11) a p-layers deep learning artificial neural network (denoted p-DLANN). Using the p-DLANN network poses the following challenges:
(1)
It is relatively hard to compute the partial derivatives $\Delta\omega_{IJ}^{(\ell)}$ and $\Delta\theta_I^{(\ell)}$ for $\ell = 2, \ldots, p$, $I = 1, 2, \ldots, n_\ell$, and $J = 1, 2, \ldots, n_{\ell-1}$. We must update the parameters $\omega_{IJ}^{(\ell)} = \omega_{IJ}^{(\ell)} - \eta\,\Delta\omega_{IJ}^{(\ell)}$ and $\theta_I^{(\ell)} = \theta_I^{(\ell)} - \eta\,\Delta\theta_I^{(\ell)}$ with an appropriate learning rate $\eta$.
(2)
For a deep learning neural network p-DLANN ($p > 3$), it is somewhat complicated to compute the partial derivatives with respect to $\omega_{IJ}^{(\ell)}$ and $\theta_I^{(\ell)}$ for $\ell = 2, 3, \ldots, p$, $I = 1, 2, \ldots, n_\ell$, and $J = 1, 2, \ldots, n_{\ell-1}$.
(3)
In option pricing, to use the deep learning network p-DLANN to solve the PDE, we must first discretize the PDE.
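The forward recursion (10) can be sketched in a few lines. The following is a minimal Python illustration (NumPy assumed), not the authors' Matlab implementation; the names `forward` and `sigmoid` are ours, and the layer sizes are the $(n_1, n_2, n_3, n_4) = (2, 3, 3, 1)$ case for $d = 1$:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, weights, biases):
    """Forward pass of a p-layers DLANN, Eq. (10).

    X       : input vector (x_1, ..., x_d, tau), length d + 1
    weights : list of matrices omega^(l), l = 2, ..., p
    biases  : list of vectors theta^(l), l = 2, ..., p
    Hidden layers use the sigmoid; the output layer is linear (Eq. (12)).
    """
    U = np.asarray(X, dtype=float)           # U^(1) = X
    p = len(weights) + 1
    for l, (W, b) in enumerate(zip(weights, biases), start=2):
        O = W @ U + b                        # O^(l) = omega^(l) U^(l-1) + theta^(l)
        U = O if l == p else sigmoid(O)      # U^(l) = f_l(O^(l))
    return U

# Example: d = 1 asset, layer sizes (2, 3, 3, 1), random parameters in (0, 1)
rng = np.random.default_rng(0)
sizes = [2, 3, 3, 1]
ws = [rng.uniform(0, 1, (sizes[i + 1], sizes[i])) for i in range(3)]
bs = [rng.uniform(0, 1, sizes[i + 1]) for i in range(3)]
v = forward([0.5, 0.4], ws, bs)              # network value U_1^(p)(X)
```

The output `v` is the length-$n_p$ vector $U^{(p)}(X)$; its first component plays the role of the option value $v(\tau, x)$ once the parameters are trained.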
The output of the network is denoted by
$$U^{(\ell)}(X) = \Big(U_1^{(\ell)}(X),\, U_2^{(\ell)}(X),\, \ldots,\, U_{n_\ell}^{(\ell)}(X)\Big)^\top, \qquad \ell = p, p-1, \ldots, 2, \tag{15}$$
with input data $U^{(1)} = X = (x_1, x_2, \ldots, x_d, \tau)^\top$. We denote by $v(\tau, x) := U_1^{(p)}(X) = \mathrm{net}(\omega, \theta, X)$ the network output for input $X$ under parameters $(\omega, \theta)$. In addition, we write $U^{(\ell,k)} = U^{(\ell)}\big(X^{(k)}\big)$, $\ell = 1, 2, \ldots, p$, for the $N$ space–time input data
$$X^{(k)} = \big(x_1^{(k)}, x_2^{(k)}, \ldots, x_d^{(k)}, \tau^{(k)}\big)^\top, \qquad k = 1, 2, \ldots, N. \tag{16}$$
The purpose of our p-DLANN structure is to determine the optimal weights ω and bias θ . For this purpose, we use an iteration algorithm, i.e., we modify the values of the parameters ω and θ until some objective function M S E is less than the pre-specified error ε .
For convenience, we list some of the symbols used in this paper in Table 1.
In the following paragraphs, we define the objective function of the DLANN, the mean square error (MSE),
$$MSE = MSE^{(payoff)} + MSE^{(PDE)}, \tag{17}$$
where $MSE^{(payoff)}$ is the error at the points $X^{(k)} = \big(x_1^{(k)}, \ldots, x_d^{(k)}, \tau_1\big)$, with $k = 1, 2, \ldots, N'$ and $\tau_1 = 0$, and $MSE^{(PDE)}$ is the error of PDE (7) at the points $X^{(k)} = \big(x_1^{(k)}, \ldots, x_d^{(k)}, \tau^{(k)}\big)$, with $\tau^{(k)}$ being one of the values
$$\tau^{(k)} \in \{\Delta\tau, \ldots, \tau_j, \ldots, \tau_M = T\}. \tag{18}$$

3.2. Update of Weights and Bias

Firstly, we consider $MSE^{(payoff)}$ at $\tau_1 = 0$.
For the initial conditions, we have $U_1^{(p)}\big(X^{(k)}\big) = v\big(0, x^{(k)}\big) = \Pi\big(x^{(k)}\big) = y^{(k)}$, $k = 1, 2, \ldots, N'$. Then, the $MSE^{(payoff)}$ of the p-DLANN at $\tau_1 = 0$ is defined by
$$MSE^{(payoff)} = \frac{1}{2N'}\sum_{k=1}^{N'}\Big(U_1^{(p,k)} - y^{(k)}\Big)^2 := \frac{1}{2N'}\sum_{k=1}^{N'} SE_k^{(payoff)}. \tag{19}$$
The partial derivatives of $SE_k^{(payoff)}$ with respect to the weights $\omega_{IJ}^{(\ell)}$ are
$$\frac{\partial SE_k^{(payoff)}}{\partial \omega_{IJ}^{(\ell)}} = \frac{\partial SE_k^{(payoff)}}{\partial U_1^{(p,k)}}\,\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(\ell)}} = 2\Big(U_1^{(p,k)} - y^{(k)}\Big)\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(\ell)}}, \tag{20}$$
for $I = 1, 2, \ldots, n_\ell$, $J = 1, 2, \ldots, n_{\ell-1}$, and $\ell = 2, 3, \ldots, p$.
In detail, we have
$$\begin{aligned}
\frac{\partial U_1^{(p,k)}}{\partial \omega_{1J}^{(p)}} &= U_J^{(p-1,k)}, \\
\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(p-1)}} &= \omega_{1I}^{(p)}\, g\big(U_I^{(p-1,k)}\big)\, U_J^{(p-2,k)}, \\
\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(p-2)}} &= g\big(U_I^{(p-2,k)}\big)\, U_J^{(p-3,k)} \sum_{j=1}^{n_{p-1}} \omega_{1j}^{(p)}\, \omega_{jI}^{(p-1)}\, g\big(U_j^{(p-1,k)}\big), \\
\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(p-3)}} &= g\big(U_I^{(p-3,k)}\big)\, U_J^{(p-4,k)} \sum_{j_1=1}^{n_{p-1}} \omega_{1 j_1}^{(p)}\, g\big(U_{j_1}^{(p-1,k)}\big) \sum_{j_2=1}^{n_{p-2}} \omega_{j_1 j_2}^{(p-1)}\, g\big(U_{j_2}^{(p-2,k)}\big)\, \omega_{j_2 I}^{(p-2)}, \\
&\;\;\vdots \\
\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(p-\ell)}} &= g\big(U_I^{(p-\ell,k)}\big)\, U_J^{(p-(\ell+1),k)} \sum_{j_1=1}^{n_{p-1}} \omega_{1 j_1}^{(p)}\, g\big(U_{j_1}^{(p-1,k)}\big) \sum_{j_2=1}^{n_{p-2}} \omega_{j_1 j_2}^{(p-1)}\, g\big(U_{j_2}^{(p-2,k)}\big) \cdots \sum_{j_{\ell-1}=1}^{n_{p-(\ell-1)}} \omega_{j_{\ell-2} j_{\ell-1}}^{(p-(\ell-2))}\, g\big(U_{j_{\ell-1}}^{(p-(\ell-1),k)}\big)\, \omega_{j_{\ell-1} I}^{(p-(\ell-1))},
\end{aligned} \tag{21}$$
for $I = 1, 2, \ldots, n_{p-\ell}$, $J = 1, 2, \ldots, n_{p-\ell-1}$, and $\ell = 0, 1, \ldots, p-2$, with the function
$$g(U) = f'_\ell(O) = U(1-U), \qquad \ell = 2, 3, \ldots, p-1. \tag{22}$$
The deduction of (21) can be found in Appendix A.1.1.
The partial derivatives of $SE_k^{(payoff)}$ with respect to $\theta_I^{(\ell)}$ are
$$\frac{\partial SE_k^{(payoff)}}{\partial \theta_I^{(\ell)}} = \frac{\partial SE_k^{(payoff)}}{\partial U_1^{(p,k)}}\,\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(\ell)}} = 2\Big(U_1^{(p,k)} - y^{(k)}\Big)\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(\ell)}}, \tag{23}$$
for $I = 1, 2, \ldots, n_\ell$ and $\ell = 2, 3, \ldots, p$. In detail, we have
$$\begin{aligned}
\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(p)}} &= 1, \quad I = 1, \ldots, n_p, \\
\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(p-1)}} &= \omega_{1I}^{(p)}\, g\big(U_I^{(p-1,k)}\big), \quad I = 1, \ldots, n_{p-1}, \\
\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(p-2)}} &= g\big(U_I^{(p-2,k)}\big)\sum_{j=1}^{n_{p-1}}\omega_{1j}^{(p)}\, g\big(U_j^{(p-1,k)}\big)\, \omega_{jI}^{(p-1)}, \quad I = 1, \ldots, n_{p-2}, \\
\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(p-3)}} &= g\big(U_I^{(p-3,k)}\big)\sum_{j_1=1}^{n_{p-1}}\omega_{1 j_1}^{(p)}\, g\big(U_{j_1}^{(p-1,k)}\big)\sum_{j_2=1}^{n_{p-2}}\omega_{j_1 j_2}^{(p-1)}\, g\big(U_{j_2}^{(p-2,k)}\big)\, \omega_{j_2 I}^{(p-2)}, \quad I = 1, \ldots, n_{p-3}, \\
&\;\;\vdots \\
\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(p-\ell)}} &= g\big(U_I^{(p-\ell,k)}\big)\sum_{j_1=1}^{n_{p-1}}\omega_{1 j_1}^{(p)}\, g\big(U_{j_1}^{(p-1,k)}\big)\sum_{j_2=1}^{n_{p-2}}\omega_{j_1 j_2}^{(p-1)}\, g\big(U_{j_2}^{(p-2,k)}\big)\cdots\sum_{j_{\ell-1}=1}^{n_{p-(\ell-1)}}\omega_{j_{\ell-2} j_{\ell-1}}^{(p-(\ell-2))}\, g\big(U_{j_{\ell-1}}^{(p-(\ell-1),k)}\big)\, \omega_{j_{\ell-1} I}^{(p-(\ell-1))}, \\
&\quad I = 1, \ldots, n_{p-\ell}, \qquad \ell = 0, 1, \ldots, p-2.
\end{aligned} \tag{24}$$
The deduction of (24) can be seen in Appendix A.1.2. Observing (21) and (24), we obtain a relationship between $\partial U_1^{(p,k)}/\partial \omega_{IJ}^{(\ell)}$ and $\partial U_1^{(p,k)}/\partial \theta_I^{(\ell)}$:
$$\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(\ell)}} = U_J^{(\ell-1,k)}\,\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(\ell)}}, \tag{25}$$
for $\ell = 2, 3, \ldots, p$, $I = 1, 2, \ldots, n_\ell$, and $J = 1, 2, \ldots, n_{\ell-1}$. This relationship simplifies the computation and saves CPU time.
Let the increments $\Delta\omega_{\ell,I,J}^{(payoff)}$ and $\Delta\theta_{\ell,I}^{(payoff)}$ be as follows:
$$\Delta\theta_{\ell,I}^{(payoff)} = \frac{1}{2N'}\sum_{k=1}^{N'}\frac{\partial SE_k^{(payoff)}}{\partial \theta_I^{(\ell)}}, \qquad \Delta\omega_{\ell,I,J}^{(payoff)} = \frac{1}{2N'}\sum_{k=1}^{N'}\frac{\partial SE_k^{(payoff)}}{\partial \omega_{IJ}^{(\ell)}} = \frac{1}{2N'}\sum_{k=1}^{N'} U_J^{(\ell-1,k)}\,\frac{\partial SE_k^{(payoff)}}{\partial \theta_I^{(\ell)}}, \tag{26}$$
with the partial derivatives defined in (21) and (24).
Secondly, we discuss $MSE^{(PDE)}$ at the discrete points $X^{(k)}$ for $k = N'+1, \ldots, N$. The discrete error $MSE^{(PDE)}$ of the PDE at the points $X^{(k)}$ is defined as
$$MSE^{(PDE)} := \frac{1}{2(N-N')}\sum_{k=N'+1}^{N} PDE_{p,k}^2, \tag{27}$$
where
$$PDE_{p,k} := \Big[\frac{\partial U(\tau,x)}{\partial \tau} - \mathcal{A}\,U(\tau,x)\Big]_{X = X^{(k)}} = D_\tau U_1^{(p,k)} - \frac{1}{2}\sum_{I=1}^{d}\sum_{J=1}^{d} a_{IJ}\, D_{IJ} U_1^{(p,k)} - \sum_{I=1}^{d} b_I\, D_I U_1^{(p,k)} + r\, U_1^{(p,k)}, \tag{28}$$
for $k = N'+1, \ldots, N$. Here, the differential operators are $D_\tau := \partial/\partial\tau$, $D_I := \partial/\partial x_I$, and $D_{IJ} := \partial^2/\partial x_I \partial x_J$.
The partial derivatives $\partial U_\kappa^{(\ell,k)}/\partial x_I$ are computed recursively as
$$\frac{\partial U_\kappa^{(\ell,k)}}{\partial x_I} := D_I U_\kappa^{(\ell,k)} = \frac{\partial U_\kappa^{(\ell,k)}}{\partial O_\kappa^{(\ell,k)}}\sum_{q=1}^{n_{\ell-1}}\frac{\partial O_\kappa^{(\ell,k)}}{\partial U_q^{(\ell-1,k)}}\,\frac{\partial U_q^{(\ell-1,k)}}{\partial x_I} = f'_\ell(O_\kappa)\sum_{q=1}^{n_{\ell-1}}\omega_{\kappa q}^{(\ell)}\, D_I U_q^{(\ell-1,k)}, \tag{29}$$
for
$$I = 1, \ldots, d+1; \quad \ell = p, p-1, \ldots, 2; \quad \kappa = 1, \ldots, n_\ell; \quad k = N'+1, N'+2, \ldots, N. \tag{30}$$
By careful analysis, we obtain the recursive formulas for $\partial^2 U_\kappa^{(\ell,k)}/\partial x_I \partial x_J$ as follows:
$$\frac{\partial^2 U_\kappa^{(\ell,k)}}{\partial x_I \partial x_J} = D_{IJ} U_\kappa^{(\ell,k)} = f'_\ell\big(O_\kappa^{(\ell,k)}\big)\sum_{q=1}^{n_{\ell-1}}\omega_{\kappa q}^{(\ell)}\, D_{IJ} U_q^{(\ell-1,k)} + f''_\ell\big(O_\kappa^{(\ell,k)}\big)\Big(\sum_{q=1}^{n_{\ell-1}}\omega_{\kappa q}^{(\ell)}\, D_I U_q^{(\ell-1,k)}\Big)\Big(\sum_{q=1}^{n_{\ell-1}}\omega_{\kappa q}^{(\ell)}\, D_J U_q^{(\ell-1,k)}\Big), \tag{31}$$
for $\ell = p, p-1, \ldots, 2$; $\kappa = 1, \ldots, n_\ell$; $I, J = 1, \ldots, d$; and $k = N'+1, \ldots, N$, with $f'_\ell(O)$ and $f''_\ell(O)$ defined in (13). In Equation (31), the derivatives $D_I U_q^{(\ell-1,k)}$ and $D_J U_q^{(\ell-1,k)}$ are computed as in (29).
The terminal values in (29) and (31) are set as
$$D_I U_\kappa^{(1,k)} = \begin{cases} 1, & I = \kappa, \\ 0, & I \ne \kappa, \end{cases} \quad I = 1, \ldots, d+1, \qquad D_{IJ} U_\kappa^{(1,k)} = 0, \quad I, J = 1, \ldots, d. \tag{32}$$
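The recursion (29) with the seeds (32) propagates the input derivatives layer by layer alongside the forward pass. A compact Python sketch (NumPy assumed; names are ours; the second-derivative recursion (31) is omitted for brevity), verifiable against finite differences:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_with_input_grads(X, weights, biases):
    """Propagate U^(l) and D_I U^(l) through the layers, Eqs. (29) and (32).

    Returns the output vector U^(p) and the matrix D with
    D[kappa, I] = D_I U_kappa^(p), the Jacobian w.r.t. (x_1, ..., x_d, tau).
    Seed (32): D_I U_kappa^(1) = 1 if I = kappa, else 0, i.e. the identity.
    """
    U = np.asarray(X, dtype=float)
    D = np.eye(len(U))                     # seed: D_I U_kappa^(1) = delta_{I kappa}
    p = len(weights) + 1
    for l, (W, b) in enumerate(zip(weights, biases), start=2):
        O = W @ U + b
        DO = W @ D                         # sum_q omega_{kappa q} D_I U_q^(l-1)
        if l == p:                         # linear output layer: f' = 1
            U, D = O, DO
        else:                              # sigmoid layer: f'(O) = U(1 - U)
            U = sigmoid(O)
            D = (U * (1.0 - U))[:, None] * DO
    return U, D
```

Note that the Jacobian comes out of the same sweep that evaluates the network, so no extra backward pass is needed for the $D_I U_1^{(p,k)}$ terms in (28).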
We define the partial derivatives
$$\frac{\partial MSE^{(PDE)}}{\partial \omega_{IJ}^{(\ell)}} = \frac{1}{2(N-N')}\sum_{k=N'+1}^{N}\frac{\partial\, PDE_{p,k}^2}{\partial \omega_{IJ}^{(\ell)}}, \tag{33}$$
for $\ell = 2, \ldots, p$, $I = 1, 2, \ldots, n_\ell$, and $J = 1, 2, \ldots, n_{\ell-1}$. Similarly, we define
$$\frac{\partial MSE^{(PDE)}}{\partial \theta_J^{(\ell)}} = \frac{1}{2(N-N')}\sum_{k=N'+1}^{N}\frac{\partial\, PDE_{p,k}^2}{\partial \theta_J^{(\ell)}}, \tag{34}$$
for $\ell = p, p-1, \ldots, 2$ and $J = 1, 2, \ldots, n_\ell$.
The partial derivatives $\frac{1}{2}\,\partial PDE_{p,k}^2/\partial\omega_{IJ}^{(\ell)}$ in (33) are found as
$$\frac{1}{2}\frac{\partial\, PDE_{p,k}^2}{\partial \omega_{IJ}^{(\ell)}} = PDE_{p,k}\,\frac{\partial\, PDE_{p,k}}{\partial \omega_{IJ}^{(\ell)}} = PDE_{p,k}\Bigg[D_\tau\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(\ell)}} - \frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d} a_{ij}\, D_{ij}\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(\ell)}} - \sum_{i=1}^{d} b_i\, D_i\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(\ell)}} + r\,\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(\ell)}}\Bigg], \tag{35}$$
for $\ell = p, p-1, \ldots, 2$, $I = 1, 2, \ldots, n_\ell$, and $J = 1, 2, \ldots, n_{\ell-1}$. The partial derivatives $\frac{1}{2}\,\partial PDE_{p,k}^2/\partial\theta_J^{(\ell)}$ in (34) are found as
$$\frac{1}{2}\frac{\partial\, PDE_{p,k}^2}{\partial \theta_J^{(\ell)}} = PDE_{p,k}\,\frac{\partial\, PDE_{p,k}}{\partial \theta_J^{(\ell)}} = PDE_{p,k}\Bigg[D_\tau\frac{\partial U_1^{(p,k)}}{\partial \theta_J^{(\ell)}} - \frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d} a_{ij}\, D_{ij}\frac{\partial U_1^{(p,k)}}{\partial \theta_J^{(\ell)}} - \sum_{i=1}^{d} b_i\, D_i\frac{\partial U_1^{(p,k)}}{\partial \theta_J^{(\ell)}} + r\,\frac{\partial U_1^{(p,k)}}{\partial \theta_J^{(\ell)}}\Bigg], \tag{36}$$
for $J = 1, 2, \ldots, n_\ell$ and $\ell = p, p-1, \ldots, 2$. The operators $D_\tau$, $D_i$, and $D_{ij}$ are computed according to (29)–(32).
We define the increments $\Delta\omega_{\ell,I,J}^{(PDE)}$ and $\Delta\theta_{\ell,I}^{(PDE)}$ as follows:
$$\Delta\omega_{\ell,I,J}^{(PDE)} = \frac{1}{2(N-N')}\sum_{k=N'+1}^{N}\frac{\partial\, PDE_{p,k}^2}{\partial \omega_{IJ}^{(\ell)}}, \qquad \Delta\theta_{\ell,I}^{(PDE)} = \frac{1}{2(N-N')}\sum_{k=N'+1}^{N}\frac{\partial\, PDE_{p,k}^2}{\partial \theta_I^{(\ell)}}, \tag{37}$$
with the derivatives $\partial PDE_{p,k}^2/\partial\omega_{IJ}^{(\ell)}$ and $\partial PDE_{p,k}^2/\partial\theta_J^{(\ell)}$ as in (35) and (36).
Then, we update the network’s weights ω and bias θ as follows:
$$\begin{aligned}
\omega_{IJ}^{(\ell)} &= \omega_{IJ}^{(\ell)} - \eta\Big(\Delta\omega_{\ell,I,J}^{(payoff)} + \Delta\omega_{\ell,I,J}^{(PDE)}\Big)\big/2, \qquad \ell = p, \ldots, 2; \; I = 1, \ldots, n_\ell; \; J = 1, \ldots, n_{\ell-1}, \\
\theta_I^{(\ell)} &= \theta_I^{(\ell)} - \eta\Big(\Delta\theta_{\ell,I}^{(payoff)} + \Delta\theta_{\ell,I}^{(PDE)}\Big)\big/2, \qquad \ell = p, \ldots, 2; \; I = 1, \ldots, n_\ell,
\end{aligned} \tag{38}$$
where the learning rate $\eta$ is a constant, and $\Delta\omega$ and $\Delta\theta$ are defined by (26) and (37).
Finally, we define the total error as in (17) and
$$MSE^{(payoff)} = \frac{1}{2N'}\sum_{k=1}^{N'}\Big(U_1^{(p,k)} - y^{(k)}\Big)^2 \;(\text{from the initial constraints}), \qquad MSE^{(PDE)} = \frac{1}{2(N-N')}\sum_{k=N'+1}^{N} PDE_{p,k}^2 \;(\text{from the discrete PDE}), \tag{39}$$
with $PDE_{p,k}$ defined by (28). To minimize the objective function $MSE$, we seek
$$(\omega^*, \theta^*) = \operatorname*{argmin}\, MSE = \operatorname*{argmin}\Big(MSE^{(payoff)} + MSE^{(PDE)}\Big). \tag{40}$$
We obtain the trained network $\mathrm{net}\big(\omega^*, \theta^*, \{U_i^{(1,k)}\}_{i=1,\ldots,d+1}^{k=1,\ldots,N}\big)$ with input data $\{U_i^{(1,k)}\}_{i=1,\ldots,d+1}^{k=1,\ldots,N}$ and optimal parameters $(\omega^*, \theta^*)$ by iterating update (38). Finally, we can compute
$$v(\tau, x) = U_1^{(p)}(X) = \mathrm{net}(\omega^*, \theta^*, X), \tag{41}$$
for any input data X = ( x , τ ) = ( x 1 , x 2 , , x d , τ ) .
Algorithm 1 gives the detailed procedure of the p-DLANN for option pricing.
Algorithm 1: p-layers DLANN for multi-asset option pricing in time region [ 0 , T ] .
Step.1
Generate $N$ samples of stock prices $x$ and times $\tau$, i.e., $X^{(k)} = \big(x_1^{(k)}, x_2^{(k)}, \ldots, x_d^{(k)}, \tau^{(k)}\big)$ for $k = 1, 2, \ldots, N$. Assume the first $N' < N$ data have time $\tau^{(k)} = 0$, $k = 1, 2, \ldots, N'$.
Step.2
Set the layer sizes $(n_1, n_2, \ldots, n_p)$ of the p-DLANN.
Step.3
Randomly generate initial network weights $\omega^{(\ell)}$ and biases $\theta^{(\ell)}$ for $\ell = 2, 3, \ldots, p$.
Step.4
Set learning rate η , iteration number L, and control error ε .
  • FOR iteration i = 1 : L
Step.5
Input the first $N'$ samples into the network; then, compute the MSE at $\tau = 0$,
$$ERR^{(payoff)} = \frac{1}{2N'}\sum_{k=1}^{N'}\Big(U_1^{(p,k)} - y^{(k)}\Big)^2.$$
Step.6
Input the remaining $N - N'$ samples into the network; then, compute the PDE MSE at $\tau = \tau_1, \tau_2, \ldots, \tau_M$,
$$ERR^{(PDE)} = \frac{1}{2(N-N')}\sum_{k=N'+1}^{N} ERR_{p,k}^2,$$
with $ERR_{p,k}$ being defined as $PDE_{p,k}$ in (28).
Step.7
According to (26) and (37), compute the adjustments $\Delta\omega^{(payoff)}$, $\Delta\theta^{(payoff)}$, $\Delta\omega^{(PDE)}$, and $\Delta\theta^{(PDE)}$.
Step.8
According to (38), update the network's weights $\omega^{(\ell)}$ and biases $\theta^{(\ell)}$ for $\ell = 2, 3, \ldots, p$ with learning rate $\eta$.
Step.9
If the $MSE$ (see Expressions (17) and (39)) is less than the preset value $\varepsilon$, break the iteration.
END FOR
Step.10
Output the network $\mathrm{net}\big(\{\omega^{(\ell)}, \theta^{(\ell)}\}_{\ell=2}^{p}, \{U^{(1,k)}\}_{k=1}^{N}\big)$ with optimal weights $\{\omega^{(\ell)}\}_{\ell=2}^{p}$, biases $\{\theta^{(\ell)}\}_{\ell=2}^{p}$, and training data $\{U^{(1,k)}\}_{k=1}^{N}$; the trained network is thus obtained.
Step.11
For any input data $X = (x_1, x_2, \ldots, x_d, \tau)$, the output p-DLANN value is
$$v(\tau, x) := U_1^{(p)}(X) = \mathrm{net}\big(\{\omega^{(\ell)}, \theta^{(\ell)}\}_{\ell=2}^{p}, X\big).$$
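The iteration of Algorithm 1 can be sketched in Python. The sketch below (NumPy assumed; all names are ours) is deliberately simplified: it trains only the $MSE^{(payoff)}$ term on a tiny network, uses central finite differences in place of the closed-form gradients (21) and (24), and includes the learning-rate halving described in Section 4.1; it is an illustration of the descent loop, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def net(X, params):
    """Value U_1^(p)(X) of a small p-DLANN for one input vector X."""
    U = np.asarray(X, dtype=float)
    for i, (W, b) in enumerate(params):
        O = W @ U + b
        U = O if i == len(params) - 1 else sigmoid(O)
    return U[0]

def mse_payoff(xs, ys, params):
    """MSE^(payoff) of Eq. (19) over the tau = 0 sample points."""
    return 0.5 * np.mean([(net(x, params) - y) ** 2 for x, y in zip(xs, ys)])

def train_payoff(xs, ys, params, eta=0.05, iters=300, eps=1e-6):
    """Descent loop of Algorithm 1 restricted to the payoff term."""
    cur = mse_payoff(xs, ys, params)
    for _ in range(iters):
        grads = []
        for W, b in params:                  # numerical gradient of the MSE
            gW, gb = np.zeros_like(W), np.zeros_like(b)
            for idx in np.ndindex(*W.shape):
                W[idx] += eps; up = mse_payoff(xs, ys, params)
                W[idx] -= 2 * eps; dn = mse_payoff(xs, ys, params)
                W[idx] += eps
                gW[idx] = (up - dn) / (2 * eps)
            for j in range(b.size):
                b[j] += eps; up = mse_payoff(xs, ys, params)
                b[j] -= 2 * eps; dn = mse_payoff(xs, ys, params)
                b[j] += eps
                gb[j] = (up - dn) / (2 * eps)
            grads.append((gW, gb))
        for (W, b), (gW, gb) in zip(params, grads):   # update step, Eq. (38)
            W -= eta * gW
            b -= eta * gb
        new = mse_payoff(xs, ys, params)
        if new > cur:                        # objective rose: halve eta
            eta *= 0.5
        cur = new
    return params
```

In the full algorithm, the $MSE^{(PDE)}$ term of (27) is added to the objective and the analytic derivatives (21), (24), and (35)–(36) replace the finite differences.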

3.3. Time-Segment p-DLANN

Algorithm 2 gives a time-segment DLANN, labeled p-TSDLANN. In this algorithm, we first use the p-DLANN to solve the option in the time region $[\tau_0, \tau_1]$ with initial values $v(\tau_0, x)$. Then, we use the p-DLANN to compute the option in the time interval $[\tau_1, \tau_2]$ with initial values $v(\tau_1, x)$. Proceeding in this way, the option is solved by the p-DLANN on each time interval $[\tau_i, \tau_{i+1}]$, $i = 0, 1, \ldots, M-1$, with initial values $v(\tau_i, x)$. The experiments described in Section 4.5 show that the p-TSDLANN can improve the calculation accuracy and reduce the CPU cost.
Algorithm 2: p-TSDLANN in [ 0 , T ] .
Step.1
Split the time region $[0, T]$ into sub-intervals $[\tau_i, \tau_{i+1}]$ with $i = 0, 1, \ldots, M-1$.
Step.2
Let $v(\tau_0, x) = \Pi(x)$ (i.e., the payoff function).
FOR $i = 0 : M - 1$
Step.3
Use the p-DLANN to solve the option in the time region $[\tau_i, \tau_{i+1}]$ with initial condition $v(\tau_i, x)$.
Step.4
Then, we obtain the solution v ( τ i + 1 , x ) at time τ i + 1 .
ENDFOR
Step.5
Output the p-TSDLANN solution $v(\tau_M, x)$ at time $\tau_M = T$.

4. Numerical Examples

4.1. Parameters Setting

In the experiments, we used Matlab R2016a on a computer (12th Gen Intel(R) Core(TM) i7-12700H, 2.30 GHz, RAM 16.0 G, HUAWEI) to implement the numerical cases. We set the parameters of a multi-asset option as follows. The dimension of the PDE was set as $d = 1, 2, 3, 4$. The correlation coefficients were set as $\rho_{IJ} = 0.1$ ($I \ne J$) and $\rho_{II} = 1$ for $I, J = 1, 2, \ldots, d$. The dividend yields were set as $q_I = 0.1\%$, and the risk-free interest rate was set as $r = 0.5\%$. The volatilities were set as $\sigma_I = 0.1$ for $I = 1, 2, \ldots, d$. The strike prices were set as $K = 8, 10$. The asset ratios were set as $\alpha_I = 1/d$ for $I = 1, 2, \ldots, d$. The maturity date was taken as $T = 0.4$.
The discrete log stock prices were taken as $x_I = \ln(0.01) + \frac{I}{5}\big(\ln(3K) - \ln(0.01)\big)$ with $I = 1, 2, \ldots, 5$. The time to maturity $T$ was discretized as $\tau_j = j\Delta\tau$ with $j = 0, 1, \ldots, M$ and $M = T/\Delta\tau$; $M = 4$ for $d = 1, 2$, and $M = 2$ for $d = 3, 4$. The input data for network training were expressed as $X = \big[x_1^{(k)}, x_2^{(k)}, \ldots, x_d^{(k)}, \tau^{(k)}\big]$ with each $x_I^{(k)}$ being one of the $x_I$ and each $\tau^{(k)}$ being one of the $\tau_j$. So, the total number of training data was $5^d M$. The input data $X = \big[x_1^{(k)}, x_2^{(k)}, \ldots, x_d^{(k)}, 0\big]$, $k = 1, 2, \ldots, N'$, were taken as the initial data with the network output being the payoff functions $\Pi(X)$ (see (8) and (9)). The remaining data $X = \big[x_1^{(k)}, x_2^{(k)}, \ldots, x_d^{(k)}, \tau^{(k)}\big]$, $k = N'+1, \ldots, N$, with $\tau^{(k)} \ne 0$ were taken as the input data corresponding to the discrete PDE.
For $d = 1, 2$, the layer sizes were taken as $n_1 = d + 1$, $n_2 = 3$, $n_3 = 3$, and $n_4 = 1$, and the training number was taken as $L = 3000$ in the p-DLANN structure. For $d = 3, 4$, the layer sizes were taken as $n_1 = d + 1$, $n_2 = 5$, $n_3 = 5$, and $n_4 = 1$, and the training number was taken as $L = 5000$. When the MSE began increasing, we ended the p-DLANN training. In Section 4.2 and Section 4.3, we give some computational results with $p = 4$. In Section 4.4, we list some computational results with $p \ge 5$. In Section 4.5, we list some computational results of the p-TSDLANN.
Throughout the training process, we used the following techniques:
(1)
The learning rate $\eta$ was initially set to $\eta = 0.5$. Whenever the objective function did not decrease, we halved it, $\eta = 0.5\eta$, repeatedly throughout the training process.
(2)
The initial parameter values $\omega_{IJ}^{(\ell)}$ and $\theta_I^{(\ell)}$ were drawn randomly from $(0, 1)$.
(3)
To speed up the training process, at each iteration we used only part of the training data to update the weights $\omega^{(\ell)}$ and biases $\theta^{(\ell)}$. Simply, at the $i$th iteration, we used the data indexed by $\mathrm{mod}(i, N) : 3 : N$ as the input data of the p-DLANN.
(4)
If the MSE of the simulated solutions was not satisfactory, we randomly reset the initial values of the weights $\omega_{IJ}^{(\ell)}$ and biases $\theta_I^{(\ell)}$. This technique may prevent the optimization process from getting stuck in a local minimum instead of reaching good global performance.
(5)
The simulated result was $v(\tau, x) = U_1^{(p)}(X) = \mathrm{net}\big(\{\omega^{(\ell)}, \theta^{(\ell)}\}_{\ell=2,\ldots,p}, X\big)$ for any input data $X$ (see Expression (41)) contained within the envelope of $\{X^{(k)}, k = 1, 2, \ldots, N\}$, with trained parameters $\omega^{(\ell)}$ and $\theta^{(\ell)}$.
To compare the results, we list the option values computed by Monte Carlo simulation. The Monte Carlo algorithm for a multi-asset option is described in Algorithm 3. For the geometric mean payoff function, we modify the simulated payoff at Step 3 to $P^{(k)} = \max\big(0,\; K - e^{\sum_{i=1}^{d}\alpha_i x_{M,i}^{(k)}}\big)$.
Algorithm 3: Monte Carlo algorithm for multi-asset option pricing in [ 0 , T ] .
Step.1
Generate $N$ samples of log stock prices $x_{j,I}^{(k)}$ for assets $I = 1, 2, \ldots, d$, price paths $k = 1, 2, \ldots, N$, and discrete times $\tau_j = j\Delta\tau$, $j = 1, 2, \ldots, M$, with $\Delta\tau = T/M$.
Step.2
Ensure the random increments of $x_{j,I}^{(k)}$ have covariance $\mathrm{cov} = \Delta\tau\,\rho_{IJ}\sigma_I\sigma_J$ at each time $\tau_j$ and each path $k$. We can use the Matlab R2016a command `mvnrnd(zeros(1,d), cov, M)`.
Step.3
Compute the payoff $P^{(k)} = \max\big(0,\; K - \sum_{I=1}^{d}\alpha_I e^{x_{M,I}^{(k)}}\big)$ for each path $k = 1, 2, \ldots, N$ at time $T = \tau_M$.
Step.4
Compute the simulated option $V(T, x; K, \rho, \sigma) = \dfrac{e^{-rT}}{N}\displaystyle\sum_{k=1}^{N} P^{(k)}$.
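The steps of Algorithm 3 can be sketched in Python rather than Matlab. A minimal sketch (NumPy assumed; the function name and argument layout are ours): log prices are stepped with the usual Itô drift $r - q_I - \sigma_I^2/2$, consistent with the coefficient $b_I$ of (5), and correlated increments are drawn with covariance $\Delta\tau\,\rho_{IJ}\sigma_I\sigma_J$:

```python
import numpy as np

def mc_basket_put(K, T, r, q, sigma, rho, alpha, S0, M=4, N=200_000, seed=0):
    """Monte Carlo price of the arithmetic-mean basket put, Algorithm 3.

    q, sigma, alpha, S0 are length-d arrays; rho is the d x d correlation
    matrix.  Each exact log-normal step uses drift (r - q - sigma^2/2) dt
    and increments with covariance dt * rho_IJ * sigma_I * sigma_J.
    """
    d = len(S0)
    rng = np.random.default_rng(seed)
    dt = T / M
    cov = dt * rho * np.outer(sigma, sigma)
    x = np.tile(np.log(S0), (N, 1))                  # x_{0,I} = ln S_I(0)
    drift = (r - q - 0.5 * sigma ** 2) * dt
    for _ in range(M):                               # Steps 1-2: correlated paths
        x += drift + rng.multivariate_normal(np.zeros(d), cov, size=N)
    payoff = np.maximum(0.0, K - np.exp(x) @ alpha)  # Step 3: basket put payoff
    return np.exp(-r * T) * payoff.mean()            # Step 4: discounted mean
```

For $d = 1$ the basket put reduces to a plain European put, so the estimate can be checked against the Black–Scholes closed form.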

4.2. Numerical Results with Geometric Mean Payoff Function

With the geometric mean payoff function $\Pi(x)$ defined by (8), a put option governed by System (7) has an analytical solution (see [40]),
$$v(\tau, x) = K e^{-r\tau}\,\Phi(-\bar d_2) - e^{\sum_{I=1}^{d}\alpha_I x_I - \bar q\,\tau}\,\Phi(-\bar d_1), \tag{42}$$
where $\Phi(\cdot)$ is the cumulative standard normal distribution function, and
$$\bar d_1 = \frac{1}{\bar\sigma\sqrt{\tau}}\Big[\sum_{I=1}^{d}\alpha_I x_I - \ln K + \big(r - \bar q + \bar\sigma^2/2\big)\tau\Big], \qquad \bar d_2 = \bar d_1 - \bar\sigma\sqrt{\tau}, \tag{43}$$
with
$$\bar\sigma^2 = \sum_{I,J=1}^{d} a_{IJ}\,\alpha_I\alpha_J, \qquad \bar q = \sum_{I=1}^{d}\alpha_I\big(q_I + a_{II}/2\big) - \bar\sigma^2/2. \tag{44}$$
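Formulas (42)–(44) are straightforward to code. A sketch in Python (NumPy assumed; the function name is ours); note that for $d = 1$, $\bar\sigma = \sigma_1$ and $\bar q = q_1$, so the formula reduces to the Black–Scholes put:

```python
import numpy as np
from math import erf, exp, log, sqrt

def geometric_basket_put(x, tau, K, r, q, sigma, rho, alpha):
    """Analytical geometric-mean basket put, Eqs. (42)-(44).

    x is the vector of log asset prices; q, sigma, alpha are length-d
    arrays; rho is the correlation matrix; a_IJ = rho_IJ sigma_I sigma_J.
    """
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    a = rho * np.outer(sigma, sigma)
    sbar2 = float(alpha @ a @ alpha)                  # effective variance, Eq. (44)
    sbar = sqrt(sbar2)
    qbar = float(alpha @ (q + np.diag(a) / 2.0)) - sbar2 / 2.0
    s = float(alpha @ x)                              # log geometric mean
    d1 = (s - log(K) + (r - qbar + sbar2 / 2.0) * tau) / (sbar * sqrt(tau))  # Eq. (43)
    d2 = d1 - sbar * sqrt(tau)
    return K * exp(-r * tau) * Phi(-d2) - exp(s - qbar * tau) * Phi(-d1)     # Eq. (42)
```

This closed form serves as the reference solution against which the DLANN and Monte Carlo results of this subsection are compared.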
We list some numerical examples to illustrate the errors among the p-DLANN ($p = 4$) numerical solutions, the Monte Carlo solutions, and the analytical option values.
Table 2, Table 3, Table 4 and Table 5 list the 4-DLANN solutions (labeled 'DLANN'), Monte Carlo solutions (labeled 'MC'), and the analytical solutions (labeled 'Anal.') with $d = 1, 2, 3, 4$, time to expiry $\tau = 0.4$, and different strike prices $K = 8, 10$. In these tables, the fifth column and the sixth column are the absolute errors (labeled 'ERR') and the relative errors (labeled 'RE'), respectively. Figure 2, Figure 3, Figure 4 and Figure 5 plot the analytical solutions, Monte Carlo solutions, and 4-DLANN solutions at different asset points with the following parameters: $T = 0.4$, $K = 8, 10$, and $d = 1, 2, 3, 4$.
From Table 2, Table 3, Table 4 and Table 5, we see that the absolute errors are less than 0.05 and the relative errors are about 0.5% between the 4-DLANN solutions and the analytical solutions, which illustrates that our 4-DLANN scheme is efficient and accurate. Moreover, the calculations are quite stable. From Figure 2, Figure 3, Figure 4 and Figure 5, we see that the 4-DLANN solutions are very close to the Monte Carlo solutions and the analytical solutions.
Figure 6 and Figure 7 record the error evolution with respect to the training number $L$ for different values of $d$. We can see from these figures that the errors decrease quickly as the iteration number $L$ increases. Figure 8 records the learning-rate path; the learning rate $\eta$ fluctuates between 0 and 0.1.

4.3. Numerical Results with Arithmetic Mean Payoff Function

In this subsection, we list some results for put options with the arithmetic mean payoff function
$$\Pi(x) = \max\Big(0,\; K - \sum_{i=1}^{d}\alpha_i e^{x_i}\Big).$$
This type of option has no analytical solution, so we used Monte Carlo simulation (see Algorithm 3) for comparison.
Table 6, Table 7, Table 8 and Table 9 list the results of the 4-DLANN solutions (labeled 'DLANN') and the Monte Carlo simulation (labeled 'MC'). From these tables, we see that the absolute errors (labeled 'ERR') are less than 0.02 and the relative errors (labeled 'RE') are less than 0.5%, which illustrates that our DLANN algorithm is effective and accurate.
Figure 9a, Figure 10a, Figure 11a and Figure 12a plot the solutions obtained by the 4-DLANN method and the Monte Carlo simulation with $T = 0.4$, $d = 1, 2, 3, 4$, and $K = 8$. Figure 9b, Figure 10b, Figure 11b and Figure 12b plot the errors with respect to the number of iterations $L$. From these figures, we see that the errors decrease quickly as the iteration number $L$ increases.

4.4. Numerical Results with p ≥ 5

We set the DLANN parameters as $p = 5$ and $n_1 = d + 1$, $n_2 = n_3 = n_4 = 3$, $n_5 = 1$ for the arithmetic payoff functions. Table 10 lists some DLANN solutions and Monte Carlo solutions. We see that the absolute errors and relative errors are less than 0.03 and 0.3%, respectively. Figure 13a plots the DLANN solutions, Monte Carlo solutions, and analytical solutions for the geometric payoff functions. Figure 13b plots the errors with respect to the iteration number $L$. From the table and figure, we see that the errors between the DLANN and MC are less than 0.3%, which illustrates that our 5-DLANN is more efficient than the networks with $p \le 4$.
Figure 14 plots the 6-DLANN solutions and the error evolution with respect to the iteration number $L$ under the parameters $n_1 = d + 1$, $n_2 = n_3 = n_4 = 3$, $n_5 = 2$, $n_6 = 1$ for arithmetic payoff functions. Again, these numerical results confirm that our p-DLANN remains accurate and efficient for $p \ge 5$.
Compared with $p = 3, 4$, the computational results of the DLANN with $p = 5, 6$ are more accurate, which illustrates that our p-DLANN converges with respect to the number of layers $p$.

4.5. Numerical Results for p-TSDLANN

Using p-TSDLANN, as shown in Algorithm 2, we computed the options with geometric payoff functions. Figure 15 plots the p-TSDLANN solutions with p = 4 , M = 2 , T = 0.4 , compared with the analytical solutions and Monte Carlo solutions, from which we see that the p-TSDLANN solutions are very close to the analytical solutions. The experiments show that, at the same accuracy, the CPU time of this algorithm is about half that of the p-DLANN.
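The time-segment idea can be sketched as a short driver loop. This is only a structural sketch of Algorithm 2 under stated assumptions: `train_segment` is a hypothetical stand-in for the p-DLANN training routine on one segment, and the wiring (each trained segment supplying the starting data of the next) is our reading of the time-segment construction, not the paper's code.

```python
def train_tsdlann(train_segment, M, T):
    """Split [0, T] into M segments and train one p-DLANN per segment,
    feeding each trained network forward as the initial data of the
    next segment (previous=None means: use the payoff itself)."""
    nets, previous = [], None
    for m in range(M):
        tau0, tau1 = m * T / M, (m + 1) * T / M
        net = train_segment(tau0, tau1, previous)
        nets.append(net)
        previous = net
    return nets

# trivial stand-in trainer that just records its segment endpoints
nets = train_tsdlann(lambda t0, t1, prev: (t0, t1), M=2, T=0.4)
```

Because each sub-network only has to fit the solution over a short time interval, fewer training iterations per segment are needed, which is consistent with the roughly halved CPU time reported above.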

5. Conclusions

This paper introduces a p-layer deep learning ANN to price European multi-asset options based on the discrete PDE. By setting the layer widths ( n_1 , n_2 , … , n_p ) of the DLANN structure, the multi-asset options are simulated correctly and efficiently. The key point is obtaining the discrete formula of the PDE established by the option valuation and then computing the gradients of the p-DLANN with respect to the net weights ω^(ℓ) and net biases θ^(ℓ). By setting the learning rate η, the p-DLANN is trained well, and the optimal parameters ω^(ℓ) and θ^(ℓ) are obtained. Lastly, the option prices are simulated by feeding the asset log-prices x and the remaining time τ into the trained p-DLANN.
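The training loop summarized above amounts to gradient descent on the least-squares objective. A minimal sketch of that update rule, assuming a generic parameter vector and a user-supplied gradient function rather than the paper's exact routine:

```python
import numpy as np

def train(params, grad_mse, eta=0.1, iters=1000):
    """Plain gradient descent: params <- params - eta * dMSE/dparams.
    `grad_mse` stands in for the DLANN gradient formulas derived in
    Appendix A; eta is the learning rate."""
    for _ in range(iters):
        params = params - eta * grad_mse(params)
    return params

# toy usage: minimize (x - 3)^2, whose gradient is 2 * (x - 3)
opt = train(np.array([0.0]), lambda x: 2.0 * (x - 3.0))
```

In the paper's setting, `params` collects all ω^(ℓ) and θ^(ℓ), and the gradient combines the payoff-fitting and PDE-residual terms of the objective.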
Numerical examples are provided for geometric mean payoff functions and arithmetic mean payoff functions. These examples show that the p-layer DLANN has a powerful capability for multi-asset European option pricing. The relative errors of the p-DLANN option prices are less than 0.5 % , and this accuracy is comparable to that of Monte Carlo simulation. Moreover, we propose a so-called p-TSDLANN, which improves the accuracy and saves CPU time.
In the future, we will prove the convergence of the p-DLANN under certain conditions; the challenge is analyzing the errors between the true option values and those computed by the p-DLANN. Moreover, we will consider applying the p-DLANN to more complex derivatives, such as multi-asset American options, multi-asset Asian options, and multi-asset barrier options.

Author Contributions

Methodology, Z.Z.; Software, H.W. and Y.W.; Formal analysis, C.K.; Resources, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 12171409) and the Key Project of Hunan Provincial Department of Education (Grant No. 21A0533).

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Deducing the Derivatives for the Weights $\omega_{IJ}^{(\ell)}$ and Biases $\theta_I^{(\ell)}$

Appendix A.1.1. Deducing the Derivatives for the Weights $\omega_{IJ}^{(\ell)}$

The partial derivatives $\partial U_1^{(p,k)} / \partial \omega_{1J}^{(p)}$ are (note that the output layer is linear, so $f'\big(O_1^{(p,k)}\big) = 1$)
\[
\frac{\partial U_1^{(p,k)}}{\partial \omega_{1J}^{(p)}}
= f'\big(O_1^{(p,k)}\big)\frac{\partial O_1^{(p,k)}}{\partial \omega_{1J}^{(p)}}
= f'\big(O_1^{(p,k)}\big)\frac{\partial}{\partial \omega_{1J}^{(p)}}\sum_{j=1}^{n_{p-1}} \omega_{1j}^{(p)} U_j^{(p-1,k)}
= U_J^{(p-1,k)},
\]
for $J = 1, 2, \ldots, n_{p-1}$. The partial derivatives $\partial U_1^{(p,k)} / \partial \omega_{IJ}^{(p-1)}$ are
\[
\begin{aligned}
\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(p-1)}}
&= f'\big(O_1^{(p,k)}\big)\frac{\partial}{\partial \omega_{IJ}^{(p-1)}}\sum_{j=1}^{n_{p-1}} \omega_{1j}^{(p)} U_j^{(p-1,k)}
= f'\big(O_1^{(p,k)}\big)\frac{\partial}{\partial \omega_{IJ}^{(p-1)}}\sum_{j=1}^{n_{p-1}} \omega_{1j}^{(p)}\, f\Big(\sum_{i=1}^{n_{p-2}} \omega_{ji}^{(p-1)} U_i^{(p-2,k)} + \theta_j^{(p-1)}\Big) \\
&= \omega_{1I}^{(p)}\, g\big(U_I^{(p-1,k)}\big)\, U_J^{(p-2,k)},
\end{aligned}
\]
for $I = 1, 2, \ldots, n_{p-1}$ and $J = 1, 2, \ldots, n_{p-2}$. Similarly, we have
\[
\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(p-2)}}
= g\big(U_I^{(p-2,k)}\big)\, U_J^{(p-3,k)} \sum_{j=1}^{n_{p-1}} \omega_{1j}^{(p)}\, \omega_{jI}^{(p-1)}\, g\big(U_j^{(p-1,k)}\big),
\]
for $I = 1, 2, \ldots, n_{p-2}$ and $J = 1, 2, \ldots, n_{p-3}$, and
\[
\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(p-3)}}
= g\big(U_I^{(p-3,k)}\big)\, U_J^{(p-4,k)} \sum_{j_1=1}^{n_{p-1}} \omega_{1j_1}^{(p)}\, g\big(U_{j_1}^{(p-1,k)}\big) \sum_{j_2=1}^{n_{p-2}} \omega_{j_1 j_2}^{(p-1)}\, g\big(U_{j_2}^{(p-2,k)}\big)\, \omega_{j_2 I}^{(p-2)},
\]
for $I = 1, 2, \ldots, n_{p-3}$ and $J = 1, 2, \ldots, n_{p-4}$. In general,
\[
\frac{\partial U_1^{(p,k)}}{\partial \omega_{IJ}^{(p-\ell)}}
= g\big(U_I^{(p-\ell,k)}\big)\, U_J^{(p-(\ell+1),k)} \sum_{j_1=1}^{n_{p-1}} \omega_{1j_1}^{(p)}\, g\big(U_{j_1}^{(p-1,k)}\big) \sum_{j_2=1}^{n_{p-2}} \omega_{j_1 j_2}^{(p-1)}\, g\big(U_{j_2}^{(p-2,k)}\big) \cdots \sum_{j_{\ell-1}=1}^{n_{p-(\ell-1)}} \omega_{j_{\ell-2}\, j_{\ell-1}}^{(p-(\ell-2))}\, g\big(U_{j_{\ell-1}}^{(p-(\ell-1),k)}\big)\, \omega_{j_{\ell-1} I}^{(p-(\ell-1))},
\]
for $I = 1, 2, \ldots, n_{p-\ell}$, $J = 1, 2, \ldots, n_{p-\ell-1}$, and $\ell = 0, 1, \ldots, p-2$, with the function
\[
g(U) = U(1 - U).
\]

Appendix A.1.2. Deducing the Derivatives for the Biases $\theta_I^{(\ell)}$

The derivatives $\partial U_1^{(p,k)} / \partial \theta_1^{(p)}$ are
\[
\frac{\partial U_1^{(p,k)}}{\partial \theta_1^{(p)}}
= f'\big(O_1^{(p,k)}\big)\frac{\partial O_1^{(p,k)}}{\partial \theta_1^{(p)}}
= f'\big(O_1^{(p,k)}\big)\frac{\partial}{\partial \theta_1^{(p)}}\Big(\sum_{i=1}^{n_{p-1}} \omega_{1i}^{(p)} U_i^{(p-1,k)} + \theta_1^{(p)}\Big) = 1,
\]
for $I = 1, 2, \ldots, n_p$. The derivatives $\partial U_1^{(p,k)} / \partial \theta_I^{(p-1)}$ are
\[
\begin{aligned}
\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(p-1)}}
&= f'\big(O_1^{(p,k)}\big)\frac{\partial}{\partial \theta_I^{(p-1)}}\sum_{j=1}^{n_{p-1}} \omega_{1j}^{(p)} U_j^{(p-1,k)}
= f'\big(O_1^{(p,k)}\big)\frac{\partial}{\partial \theta_I^{(p-1)}}\sum_{j=1}^{n_{p-1}} \omega_{1j}^{(p)}\, f\Big(\sum_{i=1}^{n_{p-2}} \omega_{ji}^{(p-1)} U_i^{(p-2,k)} + \theta_j^{(p-1)}\Big) \\
&= \omega_{1I}^{(p)}\, g\big(U_I^{(p-1,k)}\big),
\end{aligned}
\]
for $I = 1, 2, \ldots, n_{p-1}$. Similarly, we have
\[
\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(p-2)}}
= \sum_{j=1}^{n_{p-1}} \omega_{1j}^{(p)}\, g\big(U_j^{(p-1,k)}\big)\, \omega_{jI}^{(p-1)}\, g\big(U_I^{(p-2,k)}\big),
\]
for $I = 1, 2, \ldots, n_{p-2}$. The derivatives $\partial U_1^{(p,k)} / \partial \theta_I^{(p-3)}$ are
\[
\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(p-3)}}
= \sum_{j_1=1}^{n_{p-1}} \omega_{1j_1}^{(p)}\, g\big(U_{j_1}^{(p-1,k)}\big) \sum_{j_2=1}^{n_{p-2}} \omega_{j_1 j_2}^{(p-1)}\, g\big(U_{j_2}^{(p-2,k)}\big)\, \omega_{j_2 I}^{(p-2)}\, g\big(U_I^{(p-3,k)}\big),
\]
for $I = 1, 2, \ldots, n_{p-3}$. The derivatives $\partial U_1^{(p,k)} / \partial \theta_I^{(p-\ell)}$ are
\[
\frac{\partial U_1^{(p,k)}}{\partial \theta_I^{(p-\ell)}}
= \sum_{j_1=1}^{n_{p-1}} \omega_{1j_1}^{(p)}\, g\big(U_{j_1}^{(p-1,k)}\big) \sum_{j_2=1}^{n_{p-2}} \omega_{j_1 j_2}^{(p-1)}\, g\big(U_{j_2}^{(p-2,k)}\big) \cdots \sum_{j_{\ell-1}=1}^{n_{p-(\ell-1)}} \omega_{j_{\ell-2}\, j_{\ell-1}}^{(p-(\ell-2))}\, g\big(U_{j_{\ell-1}}^{(p-(\ell-1),k)}\big)\, \omega_{j_{\ell-1} I}^{(p-(\ell-1))}\, g\big(U_I^{(p-\ell,k)}\big),
\]
for $I = 1, 2, \ldots, n_{p-\ell}$ and $\ell = 0, 1, \ldots, p-2$.
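The nested sums above are the backpropagation recursion written out explicitly. As a sanity check, the following sketch (an illustration, not the paper's code) evaluates them as a backward pass for a small network with sigmoid hidden layers and a linear scalar output, then verifies the gradients against central finite differences; the layer sizes are illustrative.

```python
import numpy as np

def sigmoid(o):
    return 1.0 / (1.0 + np.exp(-o))

def forward(weights, biases, x):
    """Sigmoid hidden layers and a linear scalar output layer, matching
    f'(O_1^(p)) = 1 in the formulas above.  Returns all layer outputs."""
    us = [x]
    for W, b in zip(weights[:-1], biases[:-1]):
        us.append(sigmoid(W @ us[-1] + b))
    us.append(weights[-1] @ us[-1] + biases[-1])
    return us

def grads(weights, us):
    """Backward recursion: delta^(p) = 1, then
    delta^(l) = (W^(l+1)^T delta^(l+1)) * g(U^(l)), g(U) = U(1 - U).
    Bias gradients are the deltas; weight gradients are delta x U^(l-1)."""
    L = len(weights)
    dtheta = [None] * L
    dtheta[L - 1] = np.ones(1)
    for i in range(L - 2, -1, -1):
        dtheta[i] = (weights[i + 1].T @ dtheta[i + 1]) * us[i + 1] * (1 - us[i + 1])
    domega = [np.outer(dtheta[i], us[i]) for i in range(L)]
    return domega, dtheta

# finite-difference check on a small 4-layer net
rng = np.random.default_rng(0)
sizes = [3, 4, 3, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
x = rng.standard_normal(sizes[0])

us = forward(weights, biases, x)
domega, dtheta = grads(weights, us)

eps = 1e-6
for i in range(len(biases)):
    for j in range(len(biases[i])):
        biases[i][j] += eps
        up = forward(weights, biases, x)[-1][0]
        biases[i][j] -= 2 * eps
        dn = forward(weights, biases, x)[-1][0]
        biases[i][j] += eps
        assert abs((up - dn) / (2 * eps) - dtheta[i][j]) < 1e-6

weights[0][0, 0] += eps
up = forward(weights, biases, x)[-1][0]
weights[0][0, 0] -= 2 * eps
dn = forward(weights, biases, x)[-1][0]
weights[0][0, 0] += eps
assert abs((up - dn) / (2 * eps) - domega[0][0, 0]) < 1e-6
```

Note how each weight gradient equals the corresponding bias gradient times the previous layer's output, exactly the extra factor $U_J$ that distinguishes the formulas of Appendix A.1.1 from those of Appendix A.1.2.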

Appendix A.2. Deducing $g'(U)$ and $g''(U)$

The function $g(U)$ is
\[
g(U) = \frac{1}{1 + e^{-U}}.
\]
So, the derivative $g'(U)$ is
\[
g'(U) = \frac{e^{-U}}{(1 + e^{-U})^{2}} = g(U)\big(1 - g(U)\big),
\]
and the second derivative $g''(U)$ is
\[
g''(U) = g'(U)\big(1 - g(U)\big) - g(U)\, g'(U) = g(U)\big(1 - g(U)\big)\big(1 - 2g(U)\big).
\]

Figure 1. Graphical depiction of the artificial neural network with an input layer, three hidden layers, and an output layer.
Figure 2. 4-DLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with T = 0.4 and d = 1 . (a) K = 8 and (b) K = 10 .
Figure 3. 4-DLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with T = 0.4 and d = 2 . (a) K = 8 and (b) K = 10 .
Figure 4. 4-DLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with T = 0.4 and d = 3 . (a) K = 8 and (b) K = 10 .
Figure 5. 4-DLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with T = 0.4 and d = 4 . (a) K = 8 and (b) K = 10 .
Figure 6. 4-DLANN MSE vs. training number L with T = 0.4 and d = 1 . (a) K = 8 and (b) K = 10 .
Figure 7. 4-DLANN MSE vs. the iteration number L with T = 0.4 and K = 8 . (a) d = 2 and (b) d = 3 .
Figure 8. 4-DLANN learning rate path with T = 0.4 . (a) d = 2 and K = 8 ; (b) d = 3 and K = 10 .
Figure 9. (a) 4-DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: T = 0.4 , d = 1 , and K = 8 .
Figure 10. (a) 4-DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: T = 0.4 , d = 2 , and K = 8 .
Figure 11. (a) 4-DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: T = 0.4 , d = 3 , and K = 8 .
Figure 12. (a) 4-DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: T = 0.4 , d = 4 , and K = 8 .
Figure 13. (a) 5-DLANN computational options vs. Monte Carlo solutions for geometric mean payoff function. (b) Iteration error process. Parameters: T = 0.4 , d = 2 , and K = 8 .
Figure 14. (a) 6-DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: T = 0.4 , d = 2 , and K = 8 .
Figure 15. 4-TSDLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with T = 0.4 , d = 2 . (a) K = 8 and (b) K = 10 .
Table 1. Some symbols for the p-DLANN.
Name: Meaning
N: the number of data points from the initial conditions
N: the number of input data for the training of the DLANN
NN: the number of input data for the discrete PDE
net({ω^(ℓ), θ^(ℓ)}_{ℓ=2}^{p}): the p-DLANN with weights {ω^(ℓ)}_{ℓ=2}^{p} and biases {θ^(ℓ)}_{ℓ=2}^{p}
v(τ, x) = U_1^(p)(X) = net({ω^(ℓ), θ^(ℓ)}_{ℓ=2}^{p}, X): the p-DLANN output for input data X = (x_1, x_2, …, x_d, τ)
η: learning rate of the DLANN
O_k^(ℓ): input data at the ℓ-th layer (ℓ = 2, 3, …, p; k = 1, 2, …, n_ℓ)
U_k^(ℓ): output data at the ℓ-th layer (ℓ = 1, 2, …, p; k = 1, 2, …, n_ℓ)
MSE(payoff): error for the payoff functions or initial values
MSE(PDE): error for the discrete PDE
Δθ(payoff, k), Δθ(PDE, k): partial derivatives with respect to θ for the k-th input datum
Δω(payoff, k), Δω(PDE, k): partial derivatives with respect to ω for the k-th input datum
D: partial derivative operators for x, with D = D_τ, D = D_I, and D = D_{IJ}
Table 2. Computational results of 4-DLANN for geometric mean payoff function with d = 1 .
Input [τ, x^T]  DLANN  MC  Anal.  ERR  RE
K = 8 , MSE = 2.8 × 10⁻³
( 0.0000 , 4.6052 ) 9.9806 9.9900 9.9900 0.0094 0.0942 %
( 0.0000 , 2.2504 ) 9.8630 9.8946 9.8946 0.0316 0.3206 %
( 0.0000 , + 0.1045 ) 8.9068 8.8899 8.8899 0.0169 0.1900 %
( 0.1000 , 4.6052 ) 9.9807 9.9850 9.9850 0.0043 0.0432 %
( 0.1000 , 2.2504 ) 9.8635 9.8895 9.8896 0.0261 0.2649 %
( 0.1000 , + 0.1045 ) 8.9100 8.8847 8.8849 0.0251 0.2816 %
( 0.2000 , 4.6052 ) 9.9808 9.9800 9.9800 0.0008 0.0077 %
( 0.2000 , 2.2504 ) 9.8640 9.8855 9.8846 0.0206 0.2093 %
( 0.2000 , + 0.1045 ) 8.9131 8.8650 8.8799 0.0332 0.3729 %
( 0.3000 , 4.6052 ) 9.9809 9.9750 9.9750 0.0058 0.0586 %
( 0.3000 , 2.2504 ) 9.8645 9.8797 9.8796 0.0152 0.1538 %
( 0.3000 , + 0.1045 ) 8.9163 8.8703 8.8749 0.0414 0.4638 %
( 0.4000 , 4.6052 ) 9.9809 9.9700 9.9700 0.0109 0.1094 %
( 0.4000 , 2.2504 ) 9.8649 9.8741 9.8747 0.0097 0.0984 %
K = 10 , MSE = 4.2 × 10⁻³
( 0.0000 , 4.6052 ) 9.9951 9.9900 9.9900 0.0051 0.0507 %
( 0.0000 , 2.4982 ) 9.8974 9.9178 9.9178 0.0203 0.2054 %
( 0.0000 , 0.3913 ) 9.3661 9.3238 9.3238 0.0423 0.4513 %
( 0.1000 , 3.3410 ) 9.9511 9.9598 9.9596 0.0085 0.0856 %
( 0.1000 , 1.2341 ) 9.6875 9.7039 9.7039 0.0164 0.1697 %
( 0.2000 , 4.1838 ) 9.9814 9.9748 9.9748 0.0066 0.0661 %
( 0.2000 , 2.0768 ) 9.8382 9.8645 9.8647 0.0265 0.2694 %
( 0.2000 , + 0.0301 ) 9.0149 8.9619 8.9595 0.0554 0.6147 %
( 0.3000 , 2.9196 ) 9.9172 9.9311 9.9311 0.0139 0.1400 %
( 0.3000 , 0.8127 ) 9.5242 9.5458 9.5413 0.0171 0.1799 %
( 0.4000 , 3.7624 ) 9.9613 9.9568 9.9568 0.0045 0.0456 %
( 0.4000 , 1.6555 ) 9.7512 9.7889 9.7890 0.0378 0.3879 %
CPU time (s)  10.22  13.81
Table 3. Computational results of 4-DLANN for geometric mean payoff function with d = 2 .
Input [τ, x^T]  DLANN  MC  Anal.  ERR  RE
K = 8 , MSE = 8.1 × 10⁻⁴
( 0.0000 , 4.6052 , 4.6052 ) 7.9905 7.9900 7.9900 0.0005 0.0061 %
( 0.0000 , 4.6052 , 0.2812 ) 7.8933 7.9131 7.9131 0.0198 0.2504 %
( 0.0000 , 2.8756 , 4.6052 ) 7.9683 7.9763 7.9763 0.0080 0.0998 %
( 0.0000 , + 3.1781 , 2.0108 ) 6.2250 6.2074 6.2074 0.0176 0.2827 %
( 0.1000 , 4.6052 , 3.7404 ) 7.9814 7.9806 7.9806 0.0008 0.0098 %
( 0.1000 , 4.6052 , + 0.5836 ) 7.8439 7.8619 7.8621 0.0182 0.2325 %
( 0.1000 , + 1.4484 , + 0.5836 ) 5.2375 5.2367 5.2337 0.0037 0.0714 %
( 0.1000 , + 2.3133 , 2.8756 ) 7.2508 7.2391 7.2411 0.0097 0.1345 %
( 0.1000 , + 3.1781 , 1.1460 ) 5.2345 5.2308 5.2337 0.0008 0.0150 %
( 0.2000 , 4.6052 , 2.8756 ) 7.9689 7.9683 7.9683 0.0007 0.0085 %
( 0.2000 , 4.6052 , + 1.4484 ) 7.7710 7.7852 7.7857 0.0147 0.1889 %
( 0.2000 , + 0.5836 , 2.8756 ) 7.6664 7.6732 7.6741 0.0076 0.0997 %
( 0.2000 , + 0.5836 , + 1.4484 ) 5.2379 5.2307 5.2297 0.0081 0.1550 %
( 0.2000 , + 3.1781 , 4.6052 ) 7.4947 7.5007 7.5021 0.0074 0.0982 %
( 0.3000 , 4.6052 , 2.0108 ) 7.9518 7.9512 7.9514 0.0003 0.0042 %
( 0.3000 , 4.6052 , + 2.3133 ) 7.6617 7.6723 7.6701 0.0084 0.1100 %
( 0.3000 , 2.8756 , 2.0108 ) 7.8938 7.9013 7.9011 0.0073 0.0925 %
( 0.3000 , + 3.1781 , 3.7404 ) 7.2460 7.2350 7.2331 0.0129 0.1787 %
( 0.4000 , 4.6052 , 1.1460 ) 7.9277 7.9277 7.9276 0.0001 0.0009 %
( 0.4000 , 4.6052 , + 3.1781 ) 7.4947 7.4941 7.4941 0.0006 0.0079 %
( 0.4000 , + 2.3133 , 0.2812 ) 5.2380 5.2085 5.2218 0.0162 0.3095 %
K = 10 , MSE = 5.2 × 10⁻³
( 0.0000 , 4.6052 , 4.6052 ) 9.9784 9.9900 9.9900 0.0116 0.1160 %
( 0.0000 , 4.6052 , 0.1572 ) 9.8788 9.9076 9.9076 0.0287 0.2909 %
( 0.0000 , 2.8260 , 0.1572 ) 9.7535 9.7750 9.7750 0.0215 0.2203 %
( 0.0000 , + 3.4012 , 1.9364 ) 7.9466 7.9199 7.9199 0.0267 0.3358 %
( 0.1000 , 4.6052 , 3.7156 ) 9.9700 9.9794 9.9794 0.0094 0.0939 %
( 0.1000 , 4.6052 , + 0.7324 ) 9.8237 9.8508 9.8508 0.0271 0.2756 %
( 0.1000 , 0.1572 , 3.7156 ) 9.8287 9.8510 9.8508 0.0221 0.2249 %
( 0.1000 , + 2.5116 , 2.8260 ) 9.1492 9.1415 9.1405 0.0088 0.0958 %
( 0.1000 , + 3.4012 , 1.0468 ) 6.7420 6.7567 6.7497 0.0077 0.1142 %
( 0.2000 , 4.6052 , 2.8260 ) 9.9579 9.9656 9.9657 0.0078 0.0781 %
( 0.2000 , 4.6052 , + 1.6220 ) 9.7403 9.7648 9.7650 0.0247 0.2536 %
( 0.2000 , 2.8260 , + 1.6220 ) 9.4460 9.4383 9.4423 0.0037 0.0390 %
( 0.2000 , + 3.4012 , 4.6052 ) 9.4116 9.4441 9.4423 0.0307 0.3260 %
( 0.3000 , 4.6052 , 1.9364 ) 9.9401 9.9469 9.9470 0.0069 0.0693 %
( 0.3000 , 4.6052 , + 2.5116 ) 9.6122 9.6364 9.6339 0.0217 0.2262 %
( 0.3000 , 2.8260 , + 2.5116 ) 9.1486 9.1309 9.1305 0.0181 0.1978 %
( 0.3000 , + 3.4012 , 3.7156 ) 9.1238 9.1284 9.1305 0.0066 0.0726 %
( 0.4000 , 4.6052 , 1.0468 ) 9.9141 9.9201 9.9208 0.0067 0.0676 %
( 0.4000 , 4.6052 , + 3.4012 ) 9.4129 9.4310 9.4323 0.0194 0.2061 %
( 0.4000 , 2.8260 , + 3.4012 ) 8.6722 8.6495 8.6468 0.0254 0.2926 %
( 0.4000 , + 3.4012 , 2.8260 ) 8.6691 8.6447 8.6468 0.0223 0.2570 %
CPU time (s)  23.56  215.94
Table 4. Computational results of 4-DLANN for geometric mean payoff function with d = 3 .
Input [τ, x^T]  DLANN  MC  Anal.  ERR  RE
K = 8 , MSE = 3.4 × 10⁻³
( 0.0000 , 4.6052 , 0.7136 , 0.7136 ) 7.8454 7.8661 7.8661 0.0207 0.2636 %
( 0.0000 , 4.6052 , + 3.1781 , + 3.1781 ) 6.1972 6.2074 6.2074 0.0103 0.1660 %
( 0.0000 , + 1.2322 , + 1.2322 , 4.6052 ) 7.5317 7.5101 7.5101 0.0216 0.2871 %
( 0.0000 , + 1.2322 , + 3.1781 , 2.6594 ) 6.2203 6.2074 6.2074 0.0129 0.2067 %
( 0.0000 , + 3.1781 , 4.6052 , + 1.2322 ) 7.0634 7.0629 7.0629 0.0005 0.0066 %
( 0.4000 , 4.6052 , 2.6594 , + 1.2322 ) 7.8511 7.8487 7.8501 0.0009 0.0119 %
( 0.4000 , 4.6052 , + 1.2322 , 0.7136 ) 7.7326 7.7287 7.7279 0.0047 0.0611 %
( 0.4000 , + 1.2322 , 2.6594 , 4.6052 ) 7.8368 7.8491 7.8501 0.0134 0.1704 %
( 0.4000 , + 3.1781 , 4.6052 , 0.7136 ) 7.4755 7.4907 7.4941 0.0187 0.2495 %
( 0.4000 , + 3.1781 , 2.6594 , 4.6052 ) 7.7304 7.7296 7.7279 0.0025 0.0326 %
K = 10 , MSE = 4.5 × 10⁻³
( 0.0000 , 4.6052 , 4.6052 , 4.6052 ) 10.0092 9.9900 9.9900 0.0192 0.1922 %
( 0.0000 , 4.6052 , 4.6052 , + 1.3996 ) 9.9342 9.9260 9.9260 0.0082 0.0830 %
( 0.0000 , 4.6052 , 2.6036 , 2.6036 ) 9.9750 9.9620 9.9620 0.0129 0.1297 %
( 0.0000 , 2.6036 , + 3.4012 , 4.6052 ) 9.7121 9.7189 9.7189 0.0068 0.0704 %
( 0.0000 , 0.6020 , 4.6052 , 2.6036 ) 9.9206 9.9260 9.9260 0.0054 0.0545 %
( 0.0000 , 0.6020 , 4.6052 , + 3.4012 ) 9.4498 9.4523 9.4523 0.0025 0.0264 %
( 0.0000 , 0.6020 , 2.6036 , 0.6020 ) 9.7112 9.7189 9.7189 0.0077 0.0795 %
( 0.0000 , 0.6020 , 0.6020 , 4.6052 ) 9.8555 9.8558 9.8558 0.0003 0.0026 %
( 0.4000 , 4.6052 , 4.6052 , 0.6020 ) 9.9696 9.9422 9.9420 0.0276 0.2767 %
( 0.4000 , 4.6052 , + 3.4012 , 4.6052 ) 9.8301 9.8356 9.8358 0.0057 0.0583 %
( 0.4000 , 2.6036 , 4.6052 , 2.6036 ) 9.9659 9.9420 9.9420 0.0239 0.2398 %
( 0.4000 , 0.6020 , + 1.3996 , 4.6052 ) 9.6965 9.6996 9.6990 0.0024 0.0253 %
( 0.4000 , 0.6020 , + 1.3996 , + 1.3996 ) 7.9212 7.8987 7.8999 0.0212 0.2679 %
( 0.4000 , 0.6020 , + 3.4012 , 2.6036 ) 8.9108 8.9150 8.9126 0.0019 0.0211 %
( 0.4000 , + 1.3996 , 2.6036 , 4.6052 ) 9.8155 9.8360 9.8358 0.0203 0.2072 %
( 0.4000 , + 1.3996 , 2.6036 , + 1.3996 ) 8.9323 8.9149 8.9126 0.0196 0.2198 %
CPU time (s)  59.32  756.43
Table 5. Computational results of 4-DLANN for geometric mean payoff function with d = 4 .
Input [τ, x^T]  DLANN  MC  Anal.  ERR  RE
K = 8 , MSE = 1.3 × 10⁻³
( 0.0000 , 4.6052 , 0.7136 , + 1.2322 , + 3.1781 ) 7.1952 7.2032 7.2032 0.0079 0.1100 %
( 0.0000 , 2.6594 , + 3.1781 , + 1.2322 , 4.6052 ) 7.5207 7.5101 7.5101 0.0106 0.1413 %
( 0.0000 , 2.6594 , + 3.1781 , + 1.2322 , + 1.2322 ) 5.8997 5.8919 5.8919 0.0079 0.1332 %
( 0.0000 , 0.7136 , 4.6052 , 0.7136 , + 3.1781 ) 7.5114 7.5101 7.5101 0.0013 0.0170 %
( 0.0000 , 0.7136 , + 3.1781 , 4.6052 , 2.6594 ) 7.6839 7.6988 7.6988 0.0149 0.1936 %
( 0.0000 , 0.7136 , + 3.1781 , 0.7136 , 4.6052 ) 7.5220 7.5101 7.5101 0.0119 0.1583 %
( 0.0000 , + 3.1781 , + 1.2322 , + 3.1781 , 4.6052 ) 5.8990 5.8919 5.8919 0.0071 0.1211 %
( 0.0000 , + 3.1781 , + 3.1781 , 4.6052 , + 1.2322 ) 5.9058 5.8919 5.8919 0.0139 0.2360 %
( 0.4000 , 4.6052 , 4.6052 , 4.6052 , 0.7136 ) 7.9365 7.9575 7.9576 0.0211 0.2659 %
( 0.4000 , 4.6052 , + 3.1781 , 0.7136 , + 1.2322 ) 7.2039 7.1909 7.1872 0.0167 0.2318 %
( 0.4000 , 4.6052 , + 3.1781 , + 1.2322 , 2.6594 ) 7.4989 7.4912 7.4941 0.0047 0.0633 %
( 0.4000 , 2.6594 , 4.6052 , 4.6052 , 4.6052 ) 7.9476 7.9677 7.9678 0.0201 0.2534 %
( 0.4000 , 2.6594 , 4.6052 , 4.6052 , + 1.2322 ) 7.8934 7.9142 7.9140 0.0206 0.2611 %
( 0.4000 , 2.6594 , 4.6052 , 2.6594 , 2.6594 ) 7.9192 7.9408 7.9410 0.0218 0.2748 %
( 0.4000 , 2.6594 , 2.6594 , 0.7136 , 2.6594 ) 7.8536 7.8704 7.8702 0.0166 0.2115 %
( 0.4000 , 0.7136 , + 3.1781 , 4.6052 , 4.6052 ) 7.7842 7.7980 7.7988 0.0146 0.1878 %
( 0.4000 , 0.7136 , + 3.1781 , 2.6594 , + 3.1781 ) 5.8932 5.8835 5.8759 0.0173 0.2934 %
K = 10 , MSE = 1.7 × 10⁻³
( 0.0000 , 4.6052 , 0.6020 , + 1.3996 , 2.6036 ) 9.7780 9.7987 9.7987 0.0206 0.2110 %
( 0.0000 , 4.6052 , 0.6020 , + 1.3996 , + 3.4012 ) 9.1105 9.0966 9.0966 0.0139 0.1527 %
( 0.0000 , 4.6052 , 0.6020 , + 3.4012 , 0.6020 ) 9.4399 9.4523 9.4523 0.0124 0.1309 %
( 0.0000 , 4.6052 , + 1.3996 , + 3.4012 , + 3.4012 ) 7.5597 7.5423 7.5423 0.0173 0.2294 %
( 0.0000 , 0.6020 , + 3.4012 , 4.6052 , + 3.4012 ) 8.5273 8.5100 8.5100 0.0173 0.2032 %
( 0.0000 , 0.6020 , + 3.4012 , 2.6036 , 0.6020 ) 9.1171 9.0966 9.0966 0.0205 0.2245 %
( 0.0000 , 0.6020 , + 3.4012 , 0.6020 , 4.6052 ) 9.4508 9.4523 9.4523 0.0014 0.0153 %
( 0.0000 , 0.6020 , + 3.4012 , 0.6020 , + 1.3996 ) 7.5467 7.5423 7.5423 0.0043 0.0576 %
( 0.0000 , 0.6020 , + 3.4012 , + 1.3996 , 2.6036 ) 8.5291 8.5100 8.5100 0.0192 0.2246 %
( 0.0000 , + 1.3996 , 4.6052 , 4.6052 , + 3.4012 ) 9.6476 9.6679 9.6679 0.0203 0.2104 %
( 0.0000 , + 1.3996 , 4.6052 , 2.6036 , 0.6020 ) 9.7702 9.7987 9.7987 0.0285 0.2913 %
( 0.4000 , 4.6052 , 4.6052 , + 3.4012 , 4.6052 ) 9.8837 9.9058 9.9060 0.0223 0.2255 %
( 0.4000 , 4.6052 , + 1.3996 , 4.6052 , 0.6020 ) 9.8425 9.8575 9.8580 0.0155 0.1574 %
( 0.4000 , 4.6052 , + 1.3996 , 2.6036 , 4.6052 ) 9.8833 9.9056 9.9060 0.0227 0.2298 %
( 0.4000 , 4.6052 , + 1.3996 , 2.6036 , + 1.3996 ) 9.6594 9.6462 9.6479 0.0114 0.1183 %
( 0.4000 , 4.6052 , + 1.3996 , 0.6020 , 2.6036 ) 9.7728 9.7783 9.7787 0.0059 0.0603 %
( 0.4000 , 4.6052 , + 1.3996 , 0.6020 , + 3.4012 ) 9.0961 9.0802 9.0766 0.0195 0.2144 %
( 0.4000 , 4.6052 , + 1.3996 , + 1.3996 , 0.6020 ) 9.4280 9.4278 9.4323 0.0043 0.0461 %
( 0.4000 , 2.6036 , 4.6052 , + 3.4012 , 2.6036 ) 9.7746 9.7795 9.7787 0.0041 0.0415 %
( 0.4000 , 2.6036 , 4.6052 , + 3.4012 , + 3.4012 ) 9.0987 9.0765 9.0766 0.0221 0.2424 %
( 0.4000 , 2.6036 , 2.6036 , 2.6036 , + 1.3996 ) 9.7748 9.7790 9.7787 0.0039 0.0402 %
( 0.4000 , 2.6036 , 2.6036 , 0.6020 , 2.6036 ) 9.8432 9.8574 9.8580 0.0148 0.1501 %
CPU time (s)  90.85  2810.64
Table 6. Computational results of 4-DLANN for arithmetic mean payoff function with d = 1 and K = 8 .
Input [τ, x^T]  DLANN  MC  ERR  RE
K = 8 , MSE = 3.14 × 10⁻⁴
( 0.0000 , 0.3956 ) 7.3409 7.3267 0.0142 0.1937 %
( 0.0000 , + 1.7346 ) 2.3362 2.3334 0.0028 0.1186 %
( 0.1333 , 0.9281 ) 7.5767 7.5991 0.0225 0.2959 %
( 0.1333 , + 1.2020 ) 4.6539 4.6660 0.0121 0.2596 %
( 0.2667 , 0.9281 ) 7.5783 7.5937 0.0154 0.2027 %
( 0.2667 , + 1.2020 ) 4.6694 4.6586 0.0108 0.2314 %
( 0.4000 , 0.9281 ) 7.5799 7.5882 0.0083 0.1095 %
CPU time (s)  9.76  11.23
Table 7. Computational results of 4-DLANN for arithmetic mean payoff function with d = 2 and K = 8 .
Input [τ, x^T]  DLANN  MC  ERR  RE
K = 8 , MSE = 1.8 × 10⁻³
( 0.0000 , 2.5257 , 1.0612 ) 7.7768 7.7870 0.0102 0.1308 %
( 0.0000 , 2.5257 , + 0.4032 ) 7.2094 7.2117 0.0022 0.0309 %
( 0.0000 , 1.0612 , 2.5257 ) 7.7794 7.7870 0.0076 0.0977 %
( 0.0000 , 1.0612 , 1.0612 ) 7.6579 7.6540 0.0040 0.0519 %
( 0.0000 , 1.0612 , + 0.4032 ) 7.0997 7.0787 0.0211 0.2979 %
( 0.1333 , 2.5257 , 1.0612 ) 7.7753 7.7816 0.0063 0.0811 %
( 0.1333 , 2.5257 , + 0.4032 ) 7.2038 7.2061 0.0023 0.0323 %
( 0.1333 , 2.5257 , + 1.8677 ) 4.7199 4.7170 0.0029 0.0621 %
( 0.1333 , 1.0612 , 2.5257 ) 7.7778 7.7816 0.0037 0.0481 %
( 0.1333 , 1.0612 , 1.0612 ) 7.6537 7.6484 0.0053 0.0687 %
( 0.1333 , 1.0612 , + 0.4032 ) 7.0905 7.0728 0.0177 0.2497 %
( 0.2667 , 2.5257 , 1.0612 ) 7.7737 7.7762 0.0025 0.0322 %
( 0.2667 , 2.5257 , + 0.4032 ) 7.1979 7.2004 0.0024 0.0340 %
( 0.2667 , 2.5257 , + 1.8677 ) 4.6962 4.7101 0.0139 0.2946 %
( 0.2667 , 1.0612 , 2.5257 ) 7.7762 7.7761 0.0001 0.0009 %
( 0.2667 , 1.0612 , 1.0612 ) 7.6493 7.6430 0.0063 0.0829 %
( 0.2667 , 1.0612 , + 0.4032 ) 7.0811 7.0669 0.0143 0.2019 %
( 0.4000 , 2.5257 , 1.0612 ) 7.7720 7.7706 0.0013 0.0171 %
( 0.4000 , 2.5257 , + 0.4032 ) 7.1919 7.1944 0.0025 0.0347 %
( 0.4000 , 1.0612 , 2.5257 ) 7.7745 7.7707 0.0037 0.0479 %
( 0.4000 , 1.0612 , 1.0612 ) 7.6448 7.6373 0.0075 0.0979 %
( 0.4000 , 1.0612 , + 0.4032 ) 7.0717 7.0612 0.0105 0.1485 %
( 0.4000 , + 0.4032 , + 0.4032 ) 6.4668 6.4845 0.0177 0.2726 %
CPU time (s)23.12210.23
Table 8. Computational results of 4-DLANN for arithmetic mean payoff function with d = 3 and K = 8.

Input [τ, x^T] | DLANN | MC | ERR | RE
K = 8, MSE = 1.8 × 10^-3
(0.0000, -4.6052, -4.6052, -4.6052) | 7.9709 | 7.9900 | 0.0191 | 0.2396%
(0.0000, -4.6052, -2.6594, -2.6594) | 7.9468 | 7.9500 | 0.0032 | 0.0398%
(0.0000, -2.6594, -2.6594, -4.6052) | 7.9462 | 7.9500 | 0.0038 | 0.0474%
(0.0000, -2.6594, -0.7136, -2.6594) | 7.7916 | 7.7900 | 0.0015 | 0.0194%
(0.0000, -0.7136, -4.6052, +1.2322) | 6.6995 | 6.6904 | 0.0091 | 0.1358%
(0.0000, +1.2322, -4.6052, -0.7136) | 6.7042 | 6.6904 | 0.0138 | 0.2064%
(0.4000, -4.6052, -4.6052, -2.6594) | 7.9676 | 7.9540 | 0.0137 | 0.1717%
(0.4000, -2.6594, -4.6052, -4.6052) | 7.9674 | 7.9540 | 0.0134 | 0.1683%
(0.4000, -2.6594, -0.7136, -0.7136) | 7.6221 | 7.6336 | 0.0115 | 0.1503%
(0.4000, -0.7136, +1.2322, -0.7136) | 6.5199 | 6.5127 | 0.0072 | 0.1107%
(0.4000, +1.2322, -0.7136, -4.6052) | 6.6613 | 6.6692 | 0.0079 | 0.1185%
CPU time (s) | 56.92 | 876.12
Table 9. Computational results of 4-DLANN for arithmetic mean payoff function with d = 4 and K = 8.

Input [τ, x^T] | DLANN | MC | ERR | RE
K = 8, MSE = 1.34 × 10^-3
(0.0000, -2.0108, -2.0108, -4.6052, -2.0108) | 7.9174 | 7.8971 | 0.0203 | 0.2575%
(0.0000, -2.0108, -2.0108, -2.0108, -4.6052) | 7.9171 | 7.8971 | 0.0200 | 0.2538%
(0.0000, +0.5836, -2.0108, -4.6052, -2.0108) | 7.4634 | 7.4824 | 0.0190 | 0.2544%
(0.0000, +0.5836, +0.5836, -4.6052, -2.0108) | 7.0509 | 7.0677 | 0.0168 | 0.2378%
(0.0000, +3.1781, -4.6052, -4.6052, -2.0108) | 1.9613 | 1.9615 | 0.0003 | 0.0140%
(0.0000, +3.1781, -4.6052, -2.0108, -4.6052) | 1.9618 | 1.9615 | 0.0002 | 0.0120%
(0.0000, +3.1781, -2.0108, +0.5836, -4.6052) | 1.5162 | 1.5159 | 0.0003 | 0.0209%
(0.4000, -4.6052, -4.6052, +3.1781, +0.5836) | 1.5247 | 1.5228 | 0.0019 | 0.1264%
(0.4000, -2.0108, -4.6052, -4.6052, -4.6052) | 7.9270 | 7.9430 | 0.0159 | 0.2006%
(0.4000, -2.0108, -2.0108, -4.6052, -4.6052) | 7.8887 | 7.9119 | 0.0232 | 0.2933%
(0.4000, -2.0108, -2.0108, -4.6052, +0.5836) | 7.4825 | 7.4655 | 0.0170 | 0.2279%
(0.4000, -2.0108, -2.0108, +0.5836, -2.0108) | 7.4497 | 7.4345 | 0.0152 | 0.2041%
(0.4000, -2.0108, +0.5836, +0.5836, -2.0108) | 7.0087 | 7.0192 | 0.0105 | 0.1501%
(0.4000, +0.5836, -4.6052, -4.6052, -2.0108) | 7.5100 | 7.4965 | 0.0135 | 0.1805%
(0.4000, +0.5836, -4.6052, -4.6052, +0.5836) | 7.0834 | 7.0813 | 0.0021 | 0.0303%
(0.4000, +0.5836, -4.6052, -2.0108, -4.6052) | 7.5000 | 7.4967 | 0.0032 | 0.0433%
(0.4000, +0.5836, -4.6052, +0.5836, -2.0108) | 7.0670 | 7.0502 | 0.0168 | 0.2383%
(0.4000, +0.5836, +0.5836, -2.0108, -4.6052) | 7.0661 | 7.0502 | 0.0159 | 0.2262%
(0.4000, +3.1781, -4.6052, +0.5836, +0.5836) | 1.0769 | 1.0765 | 0.0004 | 0.0383%
CPU time (s) | 96.65 | 2913.78
Table 10. Computational results of 5-DLANN for arithmetic mean payoff function with d = 2 and K = 8.

Input [τ, x^T] | DLANN | MC | ERR | RE
K = 8, MSE = 1.8 × 10^-3
(0.0000, -4.6052, -1.4919) | 7.8603 | 7.8825 | 0.0223 | 0.2825%
(0.0000, -3.0485, -1.4919) | 7.8644 | 7.8638 | 0.0006 | 0.0076%
(0.0000, -3.0485, +1.6214) | 5.4566 | 5.4462 | 0.0104 | 0.1905%
(0.0000, -1.4919, -4.6052) | 7.8746 | 7.8825 | 0.0079 | 0.1007%
(0.0000, -1.4919, -3.0485) | 7.8779 | 7.8638 | 0.0141 | 0.1790%
(0.0000, +0.0648, -3.0485) | 7.4619 | 7.4428 | 0.0191 | 0.2563%
(0.4000, -4.6052, -1.4919) | 7.8610 | 7.8664 | 0.0054 | 0.0691%
(0.4000, -4.6052, +1.6214) | 5.4366 | 5.4441 | 0.0075 | 0.1374%
(0.4000, -3.0485, -1.4919) | 7.8649 | 7.8476 | 0.0173 | 0.2207%
(0.4000, -3.0485, +0.0648) | 7.4096 | 7.4258 | 0.0162 | 0.2187%
(0.4000, -1.4919, -4.6052) | 7.8722 | 7.8663 | 0.0058 | 0.0739%
(0.4000, +0.0648, -3.0485) | 7.4187 | 7.4259 | 0.0072 | 0.0976%
CPU time (s) | 29.23 | 276.45
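The MC column in the tables above is a Monte Carlo benchmark price. The paper's exact model parameters (risk-free rate, volatilities, correlations, path count) are not shown in this excerpt, so the sketch below uses placeholder values (r = 0.05, σ = 0.2, independent assets) and assumes a risk-neutral geometric Brownian motion in the log-prices; the payoff is taken as an arithmetic-mean put, max(K − (1/d)Σ S_i(T), 0), which is consistent with the reported values (e.g., deeply in-the-money rows approach K e^{−rτ}).

```python
import numpy as np

def mc_arith_mean_put(x0, K, tau, r, sigma, n_paths=100_000, seed=0):
    """Monte Carlo price of a European put on the arithmetic mean of d assets.

    x0    : array of log-prices ln S_i(0), matching the [tau, x^T] inputs in the tables
    K     : strike
    tau   : time to maturity
    r     : risk-free rate (placeholder assumption; not given in this excerpt)
    sigma : array of volatilities (assets taken independent here for simplicity)
    """
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    z = rng.standard_normal((n_paths, x0.size))
    # Terminal log-prices under risk-neutral GBM dynamics
    xT = x0 + (r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * z
    # Discounted average of the arithmetic-mean put payoff
    payoff = np.maximum(K - np.exp(xT).mean(axis=1), 0.0)
    return np.exp(-r * tau) * payoff.mean()
```

With correlated assets one would replace the independent draws by `z @ L.T`, where `L` is a Cholesky factor of the correlation matrix; the cost of such plain Monte Carlo grows with the path count needed per evaluation point, which matches the much larger MC CPU times reported in the tables as d increases.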

Zhou, Z.; Wu, H.; Li, Y.; Kang, C.; Wu, Y. Deep Learning Artificial Neural Network for Pricing Multi-Asset European Options. Mathematics 2025, 13, 617. https://doi.org/10.3390/math13040617