Article

Singularly Perturbed Forward-Backward Stochastic Differential Equations: Application to the Optimal Control of Bilinear Systems

1 Laboratory of Statistics and Random Modeling, University of Abou Bekr Belkaid, Tlemcen 13000, Algeria
2 Institute of Mathematics, Brandenburgische Technische Universität Cottbus-Senftenberg, 03046 Cottbus, Germany
* Author to whom correspondence should be addressed.
Computation 2018, 6(3), 41; https://doi.org/10.3390/computation6030041
Submission received: 12 February 2018 / Revised: 22 June 2018 / Accepted: 27 June 2018 / Published: 28 June 2018
(This article belongs to the Special Issue Computation in Molecular Modeling)

Abstract

We study linear-quadratic stochastic optimal control problems with bilinear state dependence where the underlying stochastic differential equation (SDE) has multiscale features. We show that, in the same way in which the underlying dynamics can be well approximated by a reduced-order dynamics in the scale separation limit (using classical homogenization results), the associated optimal expected cost converges to an effective optimal cost in the scale separation limit. This entails that we can approximate the stochastic optimal control for the whole system by a reduced-order stochastic optimal control, which is easier to compute because of the lower dimensionality of the problem. The approach uses an equivalent formulation of the Hamilton-Jacobi-Bellman (HJB) equation, in terms of forward-backward SDEs (FBSDEs). We exploit the efficient solvability of FBSDEs via a least squares Monte Carlo algorithm and show its applicability by a suitable numerical example.

1. Introduction

Stochastic optimal control is one of the important fields in mathematics which has attracted the attention of both pure and applied mathematicians [1,2]. Stochastic control problems also appear in a variety of applications, such as statistics [3,4], financial mathematics [5,6], molecular dynamics [7,8] or materials science [9,10], to mention just a few. For some applications in science and engineering, such as molecular dynamics [8,11], the high dimensionality of the state space is an important challenge when solving optimal control problems. Another issue when solving optimal control problems by discretising the corresponding dynamic programming equations in space and time is the presence of multiscale effects, which come into play when the state space dynamics exhibits slow and fast motions.
Here we consider such systems that have slow and fast scales and that are possibly high-dimensional. Several techniques have been developed to reduce the spatial dimension of control systems (see e.g., [12,13] and the references therein), but these techniques treat the control as a possibly time-dependent parameter (“open loop control”) and do not take into account that the control may be a feedback control that depends on the state variables (“closed loop control”). Needless to say, homogenisation techniques for stochastic control systems have been extensively studied by applied analysts using a variety of different mathematical tools, including viscosity solutions of the Hamilton-Jacobi-Bellman equation [14,15], backward stochastic differential equations [16,17], or occupation measures [18,19]. However, the convergence analysis of multiscale stochastic control systems is quite involved and non-constructive, in that the limiting equations of motion are not given in explicit or closed form, which makes these results of limited practical use; see [20,21] for notable exceptions, dealing mainly with the case when the dynamics is linear.
In general, the elimination of variables and the solution of control problems do not commute, so one of the key questions in control engineering is under which conditions it is possible to eliminate variables before solving an optimal control problem. We call this the model reduction problem. In this paper, we identify a class of stochastic feedback control problems with bilinear state dependence that admit the elimination of variables (i.e., model reduction) before solving the control problem. These systems turn out to be relevant in the control of high-dimensional transport PDEs, such as Fokker-Planck equations or the evolution equations of open quantum systems [22,23]. The possibility of applying model reduction before solving the corresponding optimal control problem means that it is possible to treat the control in the original equation simply as a parameter. This is in accordance with the general model reduction strategy in control engineering, which is motivated by the fact that solving a dimension-reduced control problem, rather than the original one, is numerically much less demanding; see e.g., [12,13] and the references therein. We will show that this strategy, under certain assumptions, yields a good approximation of the high-dimensional optimal control, which implies that the reduced-order optimal control can be used to control the full system dynamics almost optimally.
Our approach is based on a Donsker-Varadhan type duality principle between a linear Feynman-Kac PDE and the semi-linear dynamic programming PDE associated with a stochastic control problem [24]. Here we exploit the fact that the dynamic programming PDE can be recast as an uncoupled forward-backward stochastic differential equation (see e.g., [25,26]) that can be treated by model reduction techniques, such as averaging or homogenization. The relation between semilinear PDEs of Hamilton-Jacobi-Bellman type and forward-backward stochastic differential equations (FBSDE) is a classical subject that was first studied by Pardoux and Peng [27] and has since received a lot of attention from various sides, e.g., [28,29,30,31,32,33]. The solution theory for FBSDEs has its roots in the work of Antonelli [34] and has been extended in various directions since then; see e.g., [35,36,37,38]. From a theoretical point of view, this paper goes beyond our previous works [24,39] in that we prove strong convergence of the value function and the control without relying on compactness or periodicity assumptions for the fast variables, even though we focus on bilinear systems only, which is the weakest form of nonlinearity. (Many nonlinear systems, however, can be represented as bilinear systems by a so-called Carleman linearisation.) It also goes beyond the classical works [20,21] that treat systems that are either fully linear or linear in the fast variables. We stress that we are mainly aiming at the model reduction problem, but we discuss, alongside the theoretical results, some ideas to discretise the corresponding FBSDE [40,41,42,43,44], since one of the main motivations for doing model reduction is to reduce the numerical complexity of solving optimal control problems.

1.1. Set-Up and Problem Statement

We briefly discuss the technical set-up of the control problem considered in this paper: a linear-quadratic (LQ) stochastic control problem of the following form: minimise the expected cost
$$ J(u; t, x) = \mathbb{E}\left[\int_t^\tau \big(q_0(X_s^u) + |u_s|^2\big)\,ds + q_1(X_\tau^u)\,\Big|\, X_t^u = x\right] \tag{1} $$
over all admissible controls u ∈ U and subject to
$$ dX_s^u = \big(a(X_s^u) + b(X_s^u)u_s\big)\,ds + \sigma(X_s^u)\,dW_s, \qquad 0 \le t \le s \le \tau. \tag{2} $$
Here τ < ∞ is a bounded stopping time (specified below), and the set of admissible controls U is chosen such that (2) has a unique strong solution. The denomination linear-quadratic for (1)–(2) is due to the specific dependence of the system on the control variable u. The state vector x ∈ R^n is assumed to be high-dimensional, which is why we seek a low-dimensional approximation of (1)–(2).
Specifically, we consider the case that q_0 and q_1 are quadratic in x, a is linear and σ is constant, and the control term is an affine function of x, i.e.,
$$ b(x)\,u = (Nx + B)\,u. $$
In this case, the system is called bilinear (including linear systems as a special case), and the aim is to replace (2) by a lower dimensional bilinear system
$$ d\bar{X}_s^v = \bar{A}\bar{X}_s^v\,ds + \big(\bar{N}\bar{X}_s^v + \bar{B}\big)v_s\,ds + \bar{C}\,dW_s, \qquad 0 \le t \le s \le \tau, $$
with states x̄ ∈ R^{n_s}, n_s ≪ n, and an associated reduced cost functional
$$ \bar{J}(v; \bar{x}, t) = \mathbb{E}\left[\int_t^\tau \big(\bar{q}_0(\bar{X}_s^v) + |v_s|^2\big)\,ds + \bar{q}_1(\bar{X}_\tau^v)\,\Big|\, \bar{X}_t^v = \bar{x}\right], $$
that is solved instead of (1)–(2). Letting v* denote the minimiser of J̄, we require that v* is a good approximation of the minimiser u* of the original problem, where “good approximation” is understood in the sense that
$$ J(v^*; \cdot, \cdot) \approx J(u^*; \cdot, \cdot). $$
It is a priori not clear how the symbol “≈” in the last equation must be interpreted, e.g., pointwise for all initial data (x, t) ∈ R^n × [0, T) for some T < ∞, or uniformly on all compact subsets of R^n × [0, T).
One situation in which the above approximation property holds is when u* ≈ v* uniformly in t and the cost is continuous in the control; however, this requirement turns out to be too strong in general and overly restrictive. We will discuss alternative criteria in the course of this paper.

1.2. Outline

The paper is organised as follows: In Section 2 we introduce the bilinear stochastic control problem studied in this paper and derive the corresponding forward-backward stochastic differential equation (FBSDE). Section 3 contains the main result, a convergence result for the value function of a singularly perturbed control problem with bilinear state dependence, based on a FBSDE formulation. In Section 4 we present a numerical example to illustrate the theoretical findings and discuss the numerical discretisation of the FBSDE. The article concludes in Section 5 with a short summary and a discussion of future work. The proof of the main result and some technical lemmas are recorded in the Appendix A.

2. Singularly Perturbed Bilinear Control Systems

We now specify the system dynamics (2) and the corresponding cost functional (1). Let (x_1, x_2) ∈ R^{n_s} × R^{n_f} with n_s + n_f = n denote a decomposition of the state vector x ∈ R^n into relevant (slow) and irrelevant (fast) components. Further let W = (W_t)_{t≥0} denote an R^m-valued Brownian motion on a probability space (Ω, F, P) endowed with the filtration (F_t)_{t≥0} generated by W. For any initial condition x ∈ R^n and any A-valued admissible control u ∈ U, with A ⊆ R, we consider the following system of Itô stochastic differential equations
$$ dX_s^\epsilon = AX_s^\epsilon\,ds + \big(NX_s^\epsilon + B\big)u_s\,ds + C\,dW_s, \qquad X_t^\epsilon = x, \tag{3} $$
that depends on a parameter ε > 0 via the coefficients
$$ A = A^\epsilon \in \mathbb{R}^{n\times n}, \quad N = N^\epsilon \in \mathbb{R}^{n\times n}, \quad B = B^\epsilon \in \mathbb{R}^{n}, \quad C = C^\epsilon \in \mathbb{R}^{n\times m}, $$
where for brevity we drop the dependence of the process on the control u, i.e., X_s^ε = X_s^{u,ε}. The stiffness matrix A in (3) is assumed to be of the form
$$ A = \begin{pmatrix} A_{11} & \epsilon^{-1/2}A_{12} \\ \epsilon^{-1/2}A_{21} & \epsilon^{-1}A_{22} \end{pmatrix} \in \mathbb{R}^{(n_s+n_f)\times(n_s+n_f)}, $$
with n = n_s + n_f. Control and noise coefficients are given by
$$ N = \begin{pmatrix} N_{11} & N_{12} \\ \epsilon^{-1/2}N_{21} & \epsilon^{-1/2}N_{22} \end{pmatrix} \in \mathbb{R}^{(n_s+n_f)\times(n_s+n_f)} $$
and
$$ B = \begin{pmatrix} B_1 \\ \epsilon^{-1/2}B_2 \end{pmatrix} \in \mathbb{R}^{(n_s+n_f)\times 1}, \qquad C = \begin{pmatrix} C_1 \\ \epsilon^{-1/2}C_2 \end{pmatrix} \in \mathbb{R}^{(n_s+n_f)\times m}, $$
where Nx + B ∈ range(C) for all x ∈ R^n; often we will consider either the case m = 1 with C_i = ρB_i, ρ > 0, or m = n, with C a multiple of the identity when ε = 1. All block matrices A_ij, N_ij, B_i and C_j are assumed to be of order 1 and independent of ε.
The above ϵ -scaling of the coefficients is natural for a system with n s slow and n f fast degrees of freedom and arises, for example, as a result of a balancing transformation applied to a large-scale system of equations; see e.g., [22,45]. A special case of (3) is the linear system
$$ dX_s^\epsilon = \big(AX_s^\epsilon + Bu_s\big)\,ds + C\,dW_s. \tag{7} $$
Our goal is to control the stochastic dynamics (3) (or (7) as a special variant) so that a given cost criterion is optimised. Specifically, given two symmetric positive semidefinite matrices Q_0, Q_1 ∈ R^{n_s×n_s}, we consider the quadratic cost functional
$$ J(u; t, x) = \mathbb{E}\left[\frac{1}{2}\int_t^\tau \Big((X_{1,s}^\epsilon)^\top Q_0 X_{1,s}^\epsilon + |u_s|^2\Big)\,ds + \frac{1}{2}(X_{1,\tau}^\epsilon)^\top Q_1 X_{1,\tau}^\epsilon\right], \tag{8} $$
that we seek to minimise subject to the dynamics (3). Here the expectation is understood as an expectation over all realisations of (X_s^ε)_{s∈[t,τ]} starting at X_t^ε = x; as a consequence, J is a function of the initial data (t, x). The stopping time is defined as the minimum of some time T < ∞ and the first exit time from a domain D = D_s × R^{n_f} ⊂ R^{n_s} × R^{n_f}, where D_s is an open and bounded set with smooth boundary. Specifically, we set τ = min{τ_D, T}, with
$$ \tau_D = \inf\{r \ge t : X_r^\epsilon \notin D\}. $$
In other words, τ is the stopping time that is defined by the event that either r = T or X r ϵ leaves the set D = D s × R n f , whichever comes first. Please note that the cost function does not explicitly depend on the fast variables x 2 . We define the corresponding value function by
$$ V^\epsilon(t, x) = \inf_{u \in U} J(u; t, x). \tag{9} $$
Remark 1.
As a consequence of the boundedness of D_s ⊂ R^{n_s}, we may assume that all coefficients in our control problem are bounded or Lipschitz continuous, which makes some of the proofs in the paper more transparent.
We further note that all of the following considerations trivially carry over to the case N = 0 and a multi-dimensional control variable, i.e., u ∈ R^k and B ∈ R^{n×k}.

From Stochastic Control to Forward-Backward Stochastic Differential Equations

We suppose that the matrix pair (A, C) satisfies the Kalman rank condition
$$ \operatorname{rank}\big(C \mid AC \mid A^2C \mid \cdots \mid A^{n-1}C\big) = n. \tag{10} $$
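Condition (10) is straightforward to check numerically. A minimal sketch, assuming numpy; the function name and the 2×2 toy pair below are our own illustration, not the system studied in this paper:

```python
import numpy as np

def kalman_rank(A, C):
    """Rank of the controllability matrix (C | AC | A^2 C | ... | A^{n-1} C)."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# hypothetical toy pair: a damped oscillator driven through its second component;
# the pair is controllable, so the rank equals n = 2
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
C = np.array([[0.0], [1.0]])
print(kalman_rank(A, C))  # 2
```

The same check, applied to the block matrices of Section 2, can be used to verify Condition LQ for a concrete model before running any solver.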
A necessary (and in this case sufficient) condition for optimality of our control problem is that the value function (9) solves a semilinear parabolic partial differential equation of Hamilton-Jacobi-Bellman type (a.k.a. dynamic programming equation) [46]:
$$ -\frac{\partial V^\epsilon}{\partial t} = L^\epsilon V^\epsilon + f\big(x, V^\epsilon, C^\top\nabla V^\epsilon\big), \qquad V^\epsilon\big|_{E^+} = q_1, \tag{11} $$
where
$$ q_1(x) = \frac{1}{2}x_1^\top Q_1 x_1 $$
and E^+ is the terminal set of the augmented process (s, X_s^ε), precisely E^+ = ([0, T) × ∂D) ∪ ({T} × D). Here L^ε is the infinitesimal generator of the control-free process,
$$ L^\epsilon = \frac{1}{2}CC^\top : \nabla^2 + (Ax)\cdot\nabla, $$
and the nonlinearity f is given by
$$ f(x, y, z) = \frac{1}{2}x_1^\top Q_0 x_1 - \frac{1}{2}\left|(Nx + B)^\top\big(C^\top\big)^\dagger z\right|^2. $$
Please note that f is independent of y. Here (C^⊤)^† denotes the Moore-Penrose pseudoinverse, which is unambiguously defined since z = C^⊤∇V^ε and Nx + B ∈ range(C); noting that CC^† is the orthogonal projection onto range(C), this implies that
$$ \left|(Nx + B)^\top\nabla V^\epsilon\right|^2 = \left|(Nx + B)^\top\big(C^\top\big)^\dagger z\right|^2. $$
The specific semilinear form of the equation is a consequence of the control problem being linear-quadratic; it entails that the dynamic programming Equation (11) admits a representation in form of an uncoupled forward-backward stochastic differential equation (FBSDE). To appreciate this point, consider the control-free process X_s^ε = X_s^{ε,u=0} with infinitesimal generator L^ε and define an adapted process Y_s^ε = Y_s^{ε,x,t} by
$$ Y_s^\epsilon = V^\epsilon(s, X_s^\epsilon). \tag{14} $$
(We abuse notation and denote both the controlled and the uncontrolled process by X_s^ε.) Then, by definition, Y_t^ε = V^ε(t, x). Moreover, by Itô’s formula and the dynamic programming Equation (11), the pair (X_s^ε, Y_s^ε)_{s∈[t,τ]} can be shown to solve the system of equations
$$ \begin{aligned} dX_s^\epsilon &= AX_s^\epsilon\,ds + C\,dW_s, & X_t^\epsilon &= x,\\ dY_s^\epsilon &= -f(X_s^\epsilon, Y_s^\epsilon, Z_s^\epsilon)\,ds + Z_s^\epsilon\cdot dW_s, & Y_\tau^\epsilon &= q_1(X_\tau^\epsilon), \end{aligned} \tag{15} $$
with Z_s^ε = C^⊤∇V^ε(s, X_s^ε) being the control variable. Here, the second equation is only meaningful if interpreted as a backward equation, since only in this case is Z_s^ε uniquely defined. To see this, let f = 0 and q_1(x) = x, and note that the ansatz (14) implies that Y_s^ε is adapted to the filtration generated by the forward process X_s^ε. If the second equation were just a time-reversed SDE, then (Y_s^ε, Z_s^ε) ≡ (X_τ^ε, 0) would be the unique solution to the SDE dY_s^ε = Z_s^ε dW_s with terminal condition Y_τ^ε = X_τ^ε. However, such a solution would not be adapted, because Y_s^ε for s < τ would depend on the future value X_τ^ε of the forward process.
Remark 2.
Equation (15) is called an uncoupled FBSDE because the forward equation for X_s^ε is independent of Y_s^ε and Z_s^ε. The fact that the FBSDE is uncoupled furnishes a well-known duality relation between the value function of an LQ optimal control problem and the cumulant-generating function of the cost [47,48]; specifically, in the case that N = 0, B = C and the pair (A, B) is completely controllable, it holds that
$$ V^\epsilon(t, x) = -\log\mathbb{E}\left[\exp\left(-\int_t^\tau q_0(X_s^\epsilon)\,ds - q_1(X_\tau^\epsilon)\right)\right], $$
with
$$ q_0(x) = \frac{1}{2}x_1^\top Q_0 x_1. $$
Here the expectation on the right-hand side is taken over all realisations of the control-free process X_s^ε = X_s^{ε,u=0}, starting at X_t^ε = x. By the Feynman-Kac theorem, the function ψ^ε = exp(−V^ε) solves the linear parabolic boundary value problem
$$ \left(\frac{\partial}{\partial t} + L^\epsilon\right)\psi^\epsilon = q_0(x)\,\psi^\epsilon, \qquad \psi^\epsilon\big|_{E^+} = \exp(-q_1), $$
which is equivalent to the corresponding dynamic programming Equation (11).

3. Model Reduction

The idea is to exploit the fact that (15) is uncoupled, which allows us to derive an FBSDE for the slow variables X̄_s = X_{1,s}^ε only, by standard singular perturbation methods. The reduced FBSDE as ε → 0 will then be of the form
$$ \begin{aligned} d\bar{X}_s &= \bar{A}\bar{X}_s\,ds + \bar{C}\,dW_s, & \bar{X}_t &= x_1,\\ d\bar{Y}_s &= -\bar{f}(\bar{X}_s, \bar{Y}_s, \bar{Z}_s)\,ds + \bar{Z}_s\cdot dW_s, & \bar{Y}_\tau &= \bar{q}_1(\bar{X}_\tau), \end{aligned} \tag{18} $$
where the limiting form of the backward SDE follows from the corresponding properties of the forward SDE. Specifically, assuming that the solution of the associated SDE
$$ d\xi_s = A_{22}\xi_s\,ds + C_2\,dW_s, \tag{19} $$
which governs the fast dynamics as ε → 0, is ergodic with unique Gaussian invariant measure π = N(0, Σ), where Σ = Σ^⊤ > 0 is the unique solution to the Lyapunov equation
$$ A_{22}\Sigma + \Sigma A_{22}^\top = -C_2C_2^\top, \tag{20} $$
we obtain that, asymptotically as ε → 0,
$$ X_{2,s}^\epsilon \approx \xi_{s/\epsilon}, \qquad s > 0. $$
As a consequence, the limiting SDE governing the evolution of the slow process X_{1,s}^ε (in other words, the forward part of (18)) has the coefficients
$$ \bar{A} = A_{11} - A_{12}A_{22}^{-1}A_{21}, \qquad \bar{C} = C_1 - A_{12}A_{22}^{-1}C_2, $$
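Numerically, Σ and the reduced coefficients amount to one Lyapunov solve and a few matrix products. A sketch using scipy; the function name and the scalar toy blocks are our own illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def reduced_coefficients(A11, A12, A21, A22, C1, C2):
    """Invariant covariance of the fast OU process and the homogenised drift/noise."""
    # Sigma solves the Lyapunov equation A22 Sigma + Sigma A22^T = -C2 C2^T, cf. (20)
    Sigma = solve_continuous_lyapunov(A22, -C2 @ C2.T)
    A22inv = np.linalg.inv(A22)
    A_bar = A11 - A12 @ A22inv @ A21
    C_bar = C1 - A12 @ A22inv @ C2
    return A_bar, C_bar, Sigma

# hypothetical scalar blocks for illustration only
A11, A12, A21, A22 = np.zeros((1, 1)), np.eye(1), -np.eye(1), -np.eye(1)
C1, C2 = np.zeros((1, 1)), np.eye(1)
A_bar, C_bar, Sigma = reduced_coefficients(A11, A12, A21, A22, C1, C2)
print(A_bar, C_bar, Sigma)  # [[-1.]] [[1.]] [[0.5]]
```

Note that A22 must be Hurwitz (Assumption 2 below) for the Lyapunov equation to have a positive definite solution.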
which follows from standard homogenisation arguments [49]; a formal derivation is given in the Appendix A. By a similar reasoning we find that the driver of the limiting backward SDE reads
$$ \bar{f}(x_1, y, z_1) = \int_{\mathbb{R}^{n_f}} f\big((x_1, x_2), y, (z_1, 0)\big)\,\pi(dx_2), $$
specifically,
$$ \bar{f}(x_1, y, z_1) = \frac{1}{2}x_1^\top\bar{Q}_0 x_1 - \frac{1}{2}\left|\big(x_1^\top\bar{N} + \bar{B}^\top\big)z_1\right|^2 + K_0, $$
with a constant K_0 and
$$ \bar{Q}_0 = Q_0, \qquad \bar{N} = C_1^\dagger N_{11}, \qquad \bar{B} = C_1^\dagger B_1 + N_{12}\Sigma^{1/2}. $$
The limiting backward SDE is equipped with a terminal condition q̄_1 that equals q_1, namely,
$$ \bar{q}_1(x_1) = \frac{1}{2}x_1^\top Q_1 x_1. $$

3.1. Interpretation as an Optimal Control Problem

It is possible to interpret the reduced FBSDE again as the probabilistic version of a dynamic programming equation. To this end, note that (10) implies that the matrix pair (Ā, C̄) satisfies the Kalman rank condition [50]
$$ \operatorname{rank}\big(\bar{C} \mid \bar{A}\bar{C} \mid \bar{A}^2\bar{C} \mid \cdots \mid \bar{A}^{n_s-1}\bar{C}\big) = n_s. $$
As a consequence, the semilinear partial differential equation
$$ -\frac{\partial V}{\partial t} = \bar{L}V + \bar{f}\big(x_1, V, \bar{C}^\top\nabla V\big), \qquad V\big|_{E_s^+} = \bar{q}_1, \tag{27} $$
with E_s^+ = ([0, T) × ∂D_s) ∪ ({T} × D_s) and
$$ \bar{L} = \frac{1}{2}\bar{C}\bar{C}^\top : \nabla^2 + (\bar{A}x_1)\cdot\nabla \tag{28} $$
has a classical solution V ∈ C^{1,2}([0, T) × D_s) ∩ C^{0,1}(E_s^+). Letting Ȳ_s := V(s, X̄_s), 0 ≤ t ≤ s ≤ τ, with initial data X̄_t = x_1 and Z̄_s = C̄^⊤∇V(s, X̄_s), the limiting FBSDE (18) can be readily seen to be equivalent to (27). The latter is the dynamic programming equation of the following LQ optimal control problem: minimise the cost functional
$$ \bar{J}(v; t, x_1) = \mathbb{E}\left[\frac{1}{2}\int_t^\tau\big(\bar{X}_s^\top\bar{Q}_0\bar{X}_s + |v_s|^2\big)\,ds + \frac{1}{2}\bar{X}_\tau^\top\bar{Q}_1\bar{X}_\tau\right], \tag{29} $$
subject to
$$ d\bar{X}_s = \bar{A}\bar{X}_s\,ds + \big(\bar{M}\bar{X}_s + \bar{D}\big)v_s\,ds + \bar{C}\,dw_s, \qquad \bar{X}_t = x_1, \tag{30} $$
where (w_s)_{s≥0} denotes standard Brownian motion in R^{n_s} and we have introduced the new control coefficients M̄ = C̄N̄ and D̄ = C̄B̄.

3.2. Convergence of the Control Value

Before we state our main result and discuss its implications for the model reduction of linear and bilinear systems, we recall the basic assumptions that we impose on the system dynamics. Specifically, we say that the dynamics (3) and the corresponding cost functional (8) satisfy Condition LQ if the following holds:
1. (A, C) is controllable, and the range of b(x) = Nx + B is a subspace of range(C).
2. The matrix A_22 is Hurwitz (i.e., its spectrum lies entirely in the open left complex half-plane), and the matrix pair (A_22, C_2) is controllable.
3. The driver of the FBSDE (15) is continuous and quadratically growing in Z.
4. The terminal condition in (15) is bounded; for simplicity we set Q_1 = 0 in (8).
Assumption 2 implies that the fast subsystem (19) has a unique Gaussian invariant measure π = N(0, Σ) with full topological support, i.e., we have Σ = Σ^⊤ > 0. According to ([51], Prop. 3.1) and [33], existence and uniqueness for (15) are guaranteed by Assumptions 3 and 4 together with the controllability of (A, C) and the range condition of Assumption 1, which imply that the transition probability densities of the (controlled or uncontrolled) forward process X_s^ε are smooth and strictly positive. As a consequence of the complete controllability of the original system, the reduced system (30) is completely controllable too, which guarantees existence and uniqueness of a classical solution of the limiting dynamic programming Equation (27); see, e.g., [52].
Uniform convergence of the value function V^ε → V is now entailed by the strong convergence of the solution to the corresponding FBSDE, as expressed by the following theorem.
Theorem 1.
Let the assumptions of Condition LQ hold. Further, let V^ε be the classical solution of the dynamic programming Equation (11) and V the solution of (27). Then
$$ V^\epsilon \to V $$
uniformly on all compact subsets of [0, T] × D.
The proof of the Theorem is given in Appendix A.2. For the reader’s convenience, we present a formal derivation of the limit equation in the next subsection.

3.3. Formal Derivation of the Limiting FBSDE

Our derivation of the limit FBSDE follows standard homogenisation arguments (see [49,53,54]), taking advantage of the fact that the FBSDE is uncoupled. To this end we consider the following linear evolution equation
$$ \left(\frac{\partial}{\partial t} - L^\epsilon\right)\phi^\epsilon = 0, \qquad \phi^\epsilon(x_1, x_2, 0) = g(x_1), \tag{31} $$
for a function φ^ε: D̄_s × R^{n_f} × [0, T] → R, where
$$ L^\epsilon = \frac{1}{\epsilon}L_0 + \frac{1}{\sqrt{\epsilon}}L_1 + L_2 $$
is the generator associated with the control-free forward process X_s^ε in (15), with
$$ \begin{aligned} L_0 &= \frac{1}{2}C_2C_2^\top : \nabla^2_{x_2} + (A_{22}x_2)\cdot\nabla_{x_2},\\ L_1 &= \frac{1}{2}C_1C_2^\top : \nabla_{x_2}\nabla_{x_1} + \frac{1}{2}C_2C_1^\top : \nabla_{x_1}\nabla_{x_2} + (A_{12}x_2)\cdot\nabla_{x_1} + (A_{21}x_1)\cdot\nabla_{x_2},\\ L_2 &= \frac{1}{2}C_1C_1^\top : \nabla^2_{x_1} + (A_{11}x_1)\cdot\nabla_{x_1}. \end{aligned} $$
We follow the standard procedure of [49] and consider the perturbative expansion
$$ \phi^\epsilon = \phi_0 + \sqrt{\epsilon}\,\phi_1 + \epsilon\,\phi_2 + \ldots $$
that we insert into the Kolmogorov Equation (31). Equating different powers of ε, we find a hierarchy of equations, the first three of which read
$$ L_0\phi_0 = 0, \qquad L_0\phi_1 = -L_1\phi_0, \qquad L_0\phi_2 = \frac{\partial\phi_0}{\partial t} - L_1\phi_1 - L_2\phi_0. \tag{36} $$
Assumption 2 of Condition LQ implies that L_0 has a one-dimensional nullspace that is spanned by functions that are constant in x_2, and thus the first of the three equations implies that φ_0 is independent of x_2. Hence the second equation (the cell problem) reads
$$ L_0\phi_1 = -(A_{12}x_2)\cdot\nabla\phi_0(x_1, t). $$
This equation has a solution by the Fredholm alternative, since the right-hand side averages to zero under the invariant measure π of the fast dynamics generated by the operator L_0; in other words, the right-hand side of the linear equation is orthogonal to the nullspace of L_0^* spanned by the density of π. Here L_0^* is the formal L² adjoint of the operator L_0, defined on a suitable dense subspace of L². The form of the equation suggests the general ansatz
$$ \phi_1 = \psi(x_2)\cdot\nabla\phi_0(x_1, t) + R(x_1, t), $$
where the function R plays no role in what follows, so we set it equal to zero. Since L_0ψ = −A_{12}x_2, the function ψ must be of the form ψ = Qx_2 with a matrix Q ∈ R^{n_s×n_f}. Hence
$$ Q = -A_{12}A_{22}^{-1}. $$
Now, solvability of the last of the three equations in (36) again requires that the right-hand side averages to zero under π, i.e.,
$$ \int_{\mathbb{R}^{n_f}}\left(\frac{\partial\phi}{\partial t} + L_1\big(A_{12}A_{22}^{-1}x_2\cdot\nabla\phi\big) - L_2\phi\right)\pi(dx_2) = 0, \tag{38} $$
which formally yields the limiting equation for φ = φ_0(x_1, t). Since π is a Gaussian measure with mean 0 and covariance Σ given by (20), the integral (38) can be explicitly computed:
$$ \left(\frac{\partial}{\partial t} - \bar{L}\right)\phi = 0, \qquad \phi(x_1, 0) = g(x_1), \tag{39} $$
where L̄ is given by (28) and the initial condition φ(·, 0) = g is a consequence of the fact that the initial condition in (31) is independent of ε. By the controllability of the pair (Ā, C̄), the limiting Equation (39) has a unique classical solution, and uniform convergence φ^ε → φ is guaranteed by standard results, e.g., ([49], Thm. 20.1).
Since the backward part of (15) is uniformly bounded in ε, the final form of the homogenised FBSDE (18) is found by averaging the driver over x_2; the unique solution of the corresponding backward SDE satisfies Z_{2,s} = 0, since the averaged backward process is independent of x_2.

4. Numerical Studies

In this section, we present numerical results for linear and bilinear control systems and discuss the numerical discretisation of uncoupled FBSDE associated with LQ stochastic control problems. We begin with the latter.

4.1. Numerical FBSDE Discretisation

The fact that (15) and (18) are decoupled entails that they can be discretised by an explicit time-stepping algorithm. Here we utilise a variant of the least-squares Monte Carlo algorithm proposed in [41]; see also [55]. The convergence of numerical schemes for FBSDE with quadratic nonlinearities in the driver has been analysed in [56].
The least-squares Monte Carlo scheme is based on the Euler discretisation of (15):
$$ \begin{aligned} \hat{X}_{n+1} &= \hat{X}_n + \Delta t\,A\hat{X}_n + \sqrt{\Delta t}\,C\,\xi_{n+1},\\ \hat{Y}_{n+1} &= \hat{Y}_n - \Delta t\,f(\hat{X}_n, \hat{Y}_n, \hat{Z}_n) + \sqrt{\Delta t}\,\hat{Z}_n\cdot\xi_{n+1}, \end{aligned} \tag{40} $$
where (X̂_n, Ŷ_n) denotes the numerical discretisation of the joint process (X_s^ε, Y_s^ε), with the convention that X_s^ε = X_{τ_D}^ε for s ∈ (τ_D, T] when τ_D < T, and (ξ_k)_{k≥1} is an i.i.d. sequence of normalised Gaussian random variables. Now let
$$ \mathcal{F}_n = \sigma\big(\hat{W}_k : 0 \le k \le n\big) $$
be the σ-algebra generated by the discrete Brownian motion Ŵ_n := √Δt Σ_{i≤n} ξ_i. By definition, the joint process (X_s^ε, Y_s^ε) is adapted to the filtration generated by (W_r)_{0≤r≤s}; therefore
$$ \hat{Y}_n = \mathbb{E}\big[\hat{Y}_n \mid \mathcal{F}_n\big] = \mathbb{E}\big[\hat{Y}_{n+1} + \Delta t\,f(\hat{X}_n, \hat{Y}_n, \hat{Z}_n) \mid \mathcal{F}_n\big], \tag{41} $$
where we have used that Ẑ_n is independent of ξ_{n+1}. To compute Ŷ_n from Ŷ_{n+1}, we use the identification of Z_s^ε with C^⊤∇V^ε(s, X_s^ε) and replace Ẑ_n in (41) by
$$ \hat{Z}_n = C^\top\nabla V^\epsilon(t_n, \hat{X}_n), $$
and the parametric ansatz (44) for V^ε makes the overall scheme explicit in X̂_n and Ŷ_n.
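The forward half of (40) is a plain Euler-Maruyama recursion that can be vectorised over the Monte Carlo ensemble. A minimal numpy sketch; the function and argument names are our own:

```python
import numpy as np

def simulate_forward(A, C, x0, dt, n_steps, M, rng):
    """Euler discretisation of the control-free forward SDE dX = A X ds + C dW
    (first line of (40)), vectorised over an ensemble of M realisations."""
    n, m = C.shape
    X = np.empty((n_steps + 1, M, n))
    X[0] = x0
    for k in range(n_steps):
        xi = rng.standard_normal((M, m))  # normalised Gaussian increments xi_{k+1}
        X[k + 1] = X[k] + dt * X[k] @ A.T + np.sqrt(dt) * xi @ C.T
    return X

# hypothetical one-dimensional OU example
rng = np.random.default_rng(0)
X = simulate_forward(np.array([[-1.0]]), np.array([[0.5]]),
                     x0=1.0, dt=0.01, n_steps=100, M=400, rng=rng)
```

Stopping at the exit time τ_D (freezing each path once it leaves D, as described above) would be added on top of this loop.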

4.2. Least-Squares Solution of the Backward SDE

To evaluate the conditional expectation Ŷ_n = E[·|F_n], we recall that a conditional expectation can be characterised as the solution to the following quadratic minimisation problem:
$$ \mathbb{E}\big[S \mid \mathcal{F}_n\big] = \operatorname*{argmin}_{Y \in L^2,\ \mathcal{F}_n\text{-measurable}} \mathbb{E}\big[|Y - S|^2\big]. $$
Given M independent realisations X̂_n^{(i)}, i = 1,…,M, of the forward process X̂_n, this suggests the approximation scheme
$$ \hat{Y}_n = \operatorname*{argmin}_{Y = Y(\hat{X}_n)} \frac{1}{M}\sum_{i=1}^M \left|Y - \hat{Y}_{n+1}^{(i)} - \Delta t\,f\big(\hat{X}_n^{(i)}, \hat{Y}_{n+1}^{(i)}, C^\top\nabla\hat{Y}_{n+1}^{(i)}\big)\right|^2, \tag{43} $$
where Ŷ^{(i)} is defined by Ŷ^{(i)} = Y(X̂^{(i)}), with terminal values
$$ \hat{Y}_N^{(i)} = q_1\big(\hat{X}_N^{(i)}\big), \qquad \tau = N\Delta t. $$
(Please note that N = N_D is random.) For simplicity, we assume in what follows that the terminal value is zero, i.e., we set q_1 = 0. (Recall that the existence and uniqueness result from [33] requires q_1 to be bounded.) To represent Ŷ_n as a function Y(X̂_n), we use the ansatz
$$ Y(\hat{X}_n) = \sum_{k=1}^K \alpha_k(n)\,\varphi_k(\hat{X}_n), \tag{44} $$
with coefficients α_1(·),…,α_K(·) ∈ R and suitable basis functions φ_1,…,φ_K: R^n → R (e.g., Gaussians). Please note that the coefficients α_k are the unknowns in the least-squares problem (43) and are therefore independent of the realisation. The least-squares problem that has to be solved in the n-th step of the backward iteration is of the form
$$ \hat{\alpha}(n) = \operatorname*{argmin}_{\alpha \in \mathbb{R}^K} \|A_n\alpha - b_n\|^2, \tag{45} $$
with coefficients
$$ A_n = \Big(\varphi_k\big(\hat{X}_n^{(i)}\big)\Big)_{i=1,\ldots,M;\ k=1,\ldots,K} \tag{46} $$
and data
$$ b_n = \Big(\hat{Y}_{n+1}^{(i)} + \Delta t\,f\big(\hat{X}_n^{(i)}, \hat{Y}_{n+1}^{(i)}, C^\top\nabla\hat{Y}_{n+1}^{(i)}\big)\Big)_{i=1,\ldots,M}. $$
Assuming that the coefficient matrix A_n ∈ R^{M×K}, K ≤ M, defined by (46) has maximal rank K, the solution to the least-squares problem (45) is given by
$$ \hat{\alpha}(n) = \big(A_n^\top A_n\big)^{-1}A_n^\top b_n. \tag{48} $$
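One backward step of (45)–(48) is an ordinary linear least-squares problem. A sketch in numpy; the function and variable names are ours, and we use np.linalg.lstsq, which for full-rank A_n is algebraically equivalent to the normal Equations (48) but numerically more robust:

```python
import numpy as np

def lsmc_backward_step(A_n, Y_next, f_vals, dt):
    """One backward step of the least-squares Monte Carlo iteration.

    A_n    : (M, K) basis matrix with entries phi_k(X_n^(i)), cf. (46)
    Y_next : (M,) realisations of Y_{n+1}
    f_vals : (M,) driver evaluations for each realisation
    """
    b_n = Y_next + dt * f_vals           # regression targets, cf. (41)
    # least-squares coefficients alpha(n); equals (A_n^T A_n)^{-1} A_n^T b_n
    # when A_n has full rank K
    alpha, *_ = np.linalg.lstsq(A_n, b_n, rcond=None)
    return alpha, A_n @ alpha            # coefficients and fitted Y_n at the samples

# with a single constant basis function, the fit is just the sample mean of b_n
A_n = np.ones((4, 1))
alpha, Y_n = lsmc_backward_step(A_n, np.array([1.0, 2.0, 3.0, 4.0]),
                                np.zeros(4), dt=0.01)
print(alpha)  # [2.5]
```

Iterating this step from n = N − 1 down to n = 0, with Ẑ obtained from the gradient of the fitted ansatz, yields the full backward sweep.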
The scheme thus defined is strongly convergent of order 1/2 as Δt → 0 and M, K → ∞, as analysed in [41]. Controlling the approximation quality for finite values of Δt, M, K, however, requires a careful adjustment of the simulation parameters and appropriate basis functions, especially with regard to the condition number of the matrix A_n.

4.3. Numerical Example

To illustrate the theoretical findings of Theorem 1, we consider a linear system of the form (7) where the matrices A, B and C are given by
$$ A = \begin{pmatrix} 0 & \epsilon^{-1/2}I_{n\times n} \\ -\epsilon^{-1/2}I_{n\times n} & -\gamma\epsilon^{-1}I_{n\times n} \end{pmatrix} \in \mathbb{R}^{2n\times 2n} $$
and
$$ B = C = \begin{pmatrix} 0 \\ \sigma\epsilon^{-1/2}I_{n\times n} \end{pmatrix} \in \mathbb{R}^{2n\times n}. $$
This is an instance of a controlled Langevin equation with friction and noise coefficients γ, σ > 0, which are assumed to fulfil the fluctuation-dissipation relation
$$ 2\gamma = \sigma^2. $$
In the example we let γ = 1/2 and σ = 1. The quadratic cost functional (8) is determined by the running cost via Q_0 = I_{n×n} ∈ R^{n×n}, and we apply no terminal cost, i.e., Q_1 = 0.
The associated effective equations are given by (29)–(30), where
$$ \bar{A} = -\gamma^{-1}I_{n\times n}, \quad \bar{D} = \bar{C} = \sigma\gamma^{-1}I_{n\times n}, \quad \bar{M} = 0, \quad \bar{Q}_0 = I_{n\times n}, \quad \bar{Q}_1 = 0 \in \mathbb{R}^{n\times n}. $$
We apply the previously described FBSDE scheme (40), (44)–(48), which was shown to yield good results in [57], to both the full and the reduced system; we choose n = 3, i.e., the full system is six-dimensional. To this end we choose the basis functions
$$ \phi_{k,n} \equiv \phi^{\mu_k,\delta}(x) = \exp\left(-\frac{|\mu_k - x|^2}{2\delta}\right), $$
where δ = 0.1 is fixed but μ_k = μ_k(n) changes in each timestep such that the basis follows the forward process. For this, we simulate K additional forward trajectories X^{(k)}, k = 1,…,K, and set μ_k(n) = X_n^{(k)}.
We choose the numerical parameters as follows. The number of basis functions is K = 9 for the reduced system and K_ε = 40 for the full system. We choose these values because the maximal observed rank of the coefficient matrices A_n defined in (46) is 9 for the reduced system, and the matrices should not be rank deficient. For the full system we could have used a greater value of K, but we want to keep the computational effort reasonable. Further, we choose Δt = 5 × 10⁻⁵, the final time T = 0.5 and the number of realisations M = 400.
We run the whole algorithm five times and compute the distance between the value functions of the full and reduced systems,
$$ E(\epsilon) := \big|V^\epsilon(0, x) - V(0, x)\big|, $$
for which convergence of order 1/2 was established in the proof of Theorem 1. Indeed, we observe convergence of order 1/2 in our numerical example, as can be seen in Figure 1, where we depict the mean and standard deviation of E(ε).

4.4. Discussion

We shall now discuss the implications of the above simple example when it comes to more complicated dynamical systems. As a general remark, the results show that it is possible to apply model reduction before solving the corresponding optimal control problem, where the control variable in the original equation can simply be treated as a parameter. This is in accordance with the general model reduction strategy in control engineering; see e.g., [12,13] and the references therein. Our results not only guarantee convergence of the value function via convergence of Y^ε, but they also imply strong convergence of the optimal control, by the convergence of the control process Z^ε in L². (See the Appendix A for details.) This means that in the case of a system with time scale separation our result is of practical value, since we can resort to the reduced system for finding the optimal control, which can then be applied to the full system dynamics.
We stress that our results carry over to fully nonlinear stochastic control problems which have a similar LQ structure [24]. Clearly, for realistic (i.e., high-dimensional or nonlinear) systems the identification of a small parameter ϵ remains challenging, and one has to resort to, e.g., semi-empirical approaches, such as [58,59].
If the dynamics is linear, as is the case here, small parameters may be identified using system theoretic arguments based on balancing transformations (see, e.g., [22,45]). These approaches require that the dynamics is either linear or bilinear in the state variables, but the aforementioned duality for the quasi-linear dynamic programming equation can be used here as well in order to change the drift of the forward SDE from some nonlinear vector field b to a linear vector field, say, b 0 = A x . Assuming that the noise coefficient C is square and invertible and ignoring ϵ and the boundary condition for the moment, it is easy to see that the dynamic programming PDE (11) can be recast as
∂V^ϵ/∂t + L̃ V^ϵ + f̃(x, V^ϵ, C^⊤ ∇_x V^ϵ) = 0.
Here
L̃ = (1/2) C C^⊤ : ∇² + b(x) · ∇
is the generator of a forward SDE with nonlinear drift b, and
f̃(x, y, z) = f(x, y, z) + C^{−1}(Ax − b(x)) · z
is the driver of the corresponding backward SDE. Even though the change of drift is somewhat arbitrary, it shows that by changing the driver in the backward SDE it is possible to reduce the control problem to one with linear drift that falls within the category that is considered in this paper, at the expense of having a possibly non-quadratic cost functional.
Remark 3.
Changing the drift may be advantageous in connection with the numerical FBSDE solver. In the martingale basis approach of Bender and Steiner [41], the authors suggested using basis functions that are defined as conditional expectations of certain linearly independent candidate functions of the forward process, which makes the basis functions martingales. Computing the martingale basis, however, comes with a large computational overhead, which is why the authors consider only cases in which the conditional expectations can be computed analytically. Changing the drift of the forward SDE may thus be used to simplify the forward dynamics so that its distribution becomes analytically tractable.
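As a toy illustration of this remark (our own example, not the basis used in [41]): for a one-dimensional Ornstein-Uhlenbeck forward process dX = −θX dt + σ dW, the conditional expectation of the candidate function φ(x) = x is available in closed form, so the resulting basis function is a martingale along the forward process:

```python
import numpy as np

def martingale_basis_linear(theta, T):
    """For dX = -theta X dt + sigma dW one has E[X_T | X_t = x] = x exp(-theta (T - t)),
    so p(t, X_t) with p as below is a martingale (independently of sigma)."""
    def p(t, x):
        return x * np.exp(-theta * (T - t))
    return p
```

The martingale (tower) property can be checked directly: since E[X_{t+h} | X_t = x] = x e^{−θh} and p is linear in x, we get E[p(t + h, X_{t+h}) | X_t = x] = p(t, x).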

5. Conclusions and Outlook

We have given a proof of concept that model reduction methods for singularly perturbed bilinear control systems can be applied to the dynamics before solving the corresponding optimal control problem. The key idea was to exploit the equivalence between the semi-linear dynamic programming PDE corresponding to our stochastic optimal control problem and a singularly perturbed forward-backward SDE which is decoupled. Using this equivalence, we could derive a reduced-order FBSDE, which was then interpreted as the representation of a reduced-order stochastic control problem. We have proved uniform convergence of the corresponding value function and, as an auxiliary result, obtained a strong convergence result for the optimal control. As we have argued, the latter implies that the optimal control computed from the reduced system can be used to control the original dynamics.
We have presented a numerical example to illustrate our findings and discussed the numerical discretisation of uncoupled FBSDEs, based on the computation of conditional expectations. For the latter, the choice of the basis functions played an essential role, and how to cleverly choose the ansatz functions, possibly exploiting that the forward SDE has an explicit solution (see, e.g., [41]), is an important aspect that future research ought to address, especially with regard to high-dimensional problems.
Another class of important problems not considered in this article are slow-fast systems with vanishing noise. The natural question here is how the limit equation depends on the order in which noise and time scale parameters go to zero. This question has important consequences for the associated deterministic control problem and its regularisation by noise. We leave this topic for future work.

Author Contributions

All authors have contributed equally to this work.

Funding

This research has been partially funded by Deutsche Forschungsgemeinschaft (DFG) through the Grant CRC 1114 “Scaling Cascades in Complex Systems”, Project A05 “Probing scales in equilibrated systems by optimal nonequilibrium forcing”. Omar Kebiri acknowledges funding from the EU-METALIC II Programme.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs and Technical Lemmas

The idea of the proof of Theorem 1 closely follows the work [60], with the main differences being (a) that we consider slow-fast systems exhibiting three time scales, in particular the slow equation contains singular O(ϵ^{−1/2}) terms, and (b) that the coefficients of the fast dynamics are not periodic, with the fast process being asymptotically Gaussian as ϵ → 0; in particular the n_f-dimensional fast process lives on the unbounded domain ℝ^{n_f}.

Appendix A.1. Poisson Equation Lemma

Theorem 1 rests on the following lemma, which is similar to a result in [61].
Lemma A1.
Suppose that the assumptions of Condition LQ on page 7 hold and define h: [0, T] × ℝ^{n_s} × ℝ^{n_f} → ℝ to be a function of class C_b^{1,2,2}. Further assume that h is centred with respect to the invariant measure π of the fast process. Then for every t ∈ [0, T] and initial conditions (X_{1,u}^ϵ, X_{2,u}^ϵ) = (x_1, x_2) ∈ ℝ^{n_s} × ℝ^{n_f}, 0 ≤ u < t, we have
lim_{ϵ→0} E| ∫_u^v h(s, X_{1,s}^ϵ, X_{2,s}^ϵ) ds |² = 0,  0 ≤ u < v ≤ t.
Proof. 
We remind the reader of the definitions (33)–(35) of the differential operators L_0, L_1 and L_2, and consider the Poisson equation
L_0 ψ = −h
on the domain ℝ^{n_f}. (The variables x_1 ∈ ℝ^{n_s} and t ∈ [0, T] are considered as parameters.) Since h is centred with respect to π, Equation (A2) has a solution by the Fredholm alternative. By Assumption 2, L_0 is a hypoelliptic operator in x_2, and thus, by ([62], Thm. 2), the Poisson Equation (A2) has a unique solution that is smooth and bounded. Applying Itô's formula to ψ and introducing the shorthand δψ(u, v) = ψ(v, X_{1,v}^ϵ, X_{2,v}^ϵ) − ψ(u, x_1, x_2) yields
δψ(u, v) = ∫_u^v (∂_t ψ + L_2 ψ)(s, X_{1,s}^ϵ, X_{2,s}^ϵ) ds + (1/√ϵ) ∫_u^v (L_1 ψ)(s, X_{1,s}^ϵ, X_{2,s}^ϵ) ds + (1/ϵ) ∫_u^v (L_0 ψ)(s, X_{1,s}^ϵ, X_{2,s}^ϵ) ds + M_1(u, v) + (1/√ϵ) M_2(u, v),
where M_1 and M_2 are square-integrable martingales with respect to the natural filtration generated by the Brownian motion W_s:
M_1(u, v) = ∫_u^v (C_1^⊤ ∇_{x_1} ψ)(s, X_{1,s}^ϵ, X_{2,s}^ϵ) · dW_s,  M_2(u, v) = ∫_u^v (C_2^⊤ ∇_{x_2} ψ)(s, X_{1,s}^ϵ, X_{2,s}^ϵ) · dW_s.
By the properties of the solution to (A2) the first three integrals on the right hand side are uniformly bounded in u and v, and thus
∫_u^v h(s, X_{1,s}^ϵ, X_{2,s}^ϵ) ds = −ϵ δψ(u, v) + ϵ ∫_u^v (∂_t ψ + L_2 ψ)(s, X_{1,s}^ϵ, X_{2,s}^ϵ) ds + √ϵ ∫_u^v (L_1 ψ)(s, X_{1,s}^ϵ, X_{2,s}^ϵ) ds + ϵ M_1(u, v) + √ϵ M_2(u, v).
By the Itô isometry and the boundedness of the derivatives ∇_{x_1} ψ and ∇_{x_2} ψ, the martingale terms can be bounded by
E|M_i(u, v)|² ≤ C_i (v − u),  0 < C_i < ∞.
Hence
E| ∫_u^v h(s, X_{1,s}^ϵ, X_{2,s}^ϵ) ds |² ≤ C ϵ,
with a generic constant 0 < C < ∞ that is independent of u, v and ϵ. ☐
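The ϵ-scaling in (A5) is easy to probe numerically for a simple fast process. The sketch below (our own toy example, not part of the article) estimates E[(∫_0^T X_s ds)²] for the one-dimensional fast Ornstein-Uhlenbeck process dX = −(X/ϵ) dt + √(2/ϵ) dW with the centred observable h(x) = x, for which the second moment behaves like 2ϵT for T ≫ ϵ:

```python
import numpy as np

def integral_variance(eps, T=1.0, n_paths=2000, seed=0):
    """Monte Carlo estimate of E[(int_0^T X_s ds)^2] for the fast OU process
    dX = -(X/eps) dt + sqrt(2/eps) dW, started in its stationary law N(0, 1).
    The exact AR(1) transition is used, so only eps has to be resolved."""
    rng = np.random.default_rng(seed)
    dt = eps / 10.0
    n_steps = int(round(T / dt))
    rho = np.exp(-dt / eps)                      # exact one-step OU correlation
    x = rng.standard_normal(n_paths)             # stationary initial condition
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += x * dt                       # left-point Riemann sum
        x = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
    return np.mean(integral ** 2)
```

Decreasing ϵ by a factor of 10 should decrease the estimate by roughly the same factor, consistent with the O(ϵ) bound of Lemma A1.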

Appendix A.2. Convergence of the Value Function

Lemma A2.
Suppose that Condition LQ from page 7 holds. Then
|V^ϵ(t, x) − V(t, x_1)| ≤ C √ϵ,
with x = (x_1, x_2) ∈ D = D_s × ℝ^{n_f}, where V^ϵ is the solution of the original dynamic programming Equation (11) and V is the solution of the limiting dynamic programming Equation (27). The constant C depends on x and t, but is finite on every compact subset of D × [0, T].
Proof. 
The idea of the proof is to apply Itô's formula to |y_s^ϵ|², where y_s^ϵ = Y_s^ϵ − V(s, X_{1,s}^ϵ) satisfies the backward SDE
dy_s^ϵ = −G^ϵ(s, X_{1,s}^ϵ, X_{2,s}^ϵ, y_s^ϵ, z_s^ϵ) ds + z_s^ϵ · dW_s,
where
z_s^ϵ = Z_s^ϵ − (C̄^⊤ ∇V(s, X_{1,s}^ϵ), 0),   (∇V = ∇_{x_1} V)
and
G^ϵ(t, x_1, x_2, y, z) = G_1(t, x_1, x_2, y, z) + G_2^ϵ(t, x_1, x_2, y, z),
with
G_1 = f(t, x, y + V(t, x_1), z + (C̄^⊤ ∇V(t, x_1), 0)) − f̄(t, x_1, V(t, x_1), C̄^⊤ ∇V(t, x_1)),
G_2^ϵ = ((A_{11} − Ā) x_1 + (1/√ϵ) A_{12} x_2) · ∇V(t, x_1) + (1/2)(C_1 C_1^⊤ − C̄ C̄^⊤) : ∇² V(t, x_1).
We set X_s^ϵ = X_{τ_D}^ϵ for s ∈ (τ_D, T] when τ_D < T. Then, by construction, G_1(t, x, 0, 0), x = (x_1, x_2) ∈ D_s × ℝ^{n_f}, is centred with respect to π and bounded (since the running cost is independent of x_2); therefore Lemma A1 implies that
sup_{t∈[0,T]} E| ∫_t^T G_1(s, X_{1,s}^ϵ, X_{2,s}^ϵ, 0, 0) ds |² ≤ C_1 ϵ.
The second contribution to the driver can be recast as G_2^ϵ = (L^ϵ − L̄)V, with L^ϵ and L̄ as given by (12) and (28), and thus, as ϵ → 0,
sup_{t∈[0,T]} E| ∫_t^T G_2^ϵ(s, X_{1,s}^ϵ, X_{2,s}^ϵ, 0, 0) ds |² ≤ C_2 ϵ
by the functional central limit theorem for diffusions with Lipschitz coefficients [53]; cf. also Section 3.3. As a consequence of (A6) and (A7), we have G^ϵ → 0 in L², which, since E[|y_T^ϵ|²] ≤ C_3 ϵ, implies strong convergence of the solution of the corresponding backward SDE in L².
Specifically, since ∇V is bounded on D̄_s, Itô's formula applied to |y_s^ϵ|² yields, after an application of Gronwall's Lemma:
E[ sup_{t≤s≤T} |y_s^ϵ|² + ∫_t^T |z_s^ϵ|² ds ] ≤ D E| ∫_t^T G^ϵ(s, X_{1,s}^ϵ, X_{2,s}^ϵ, 0, 0) ds |² + D E[|y_T^ϵ|²],
where the Lipschitz constant D is independent of ϵ and finite on every compact subset D̄_s ⊂ ℝ^{n_s} by the boundedness of ∇V (since V is a classical solution and D_s is bounded). Hence E[|y_s^ϵ|²] ≤ C_3 ϵ uniformly for s ∈ [t, T], and by setting s = t, we obtain
|y_t^ϵ| = |V^ϵ(t, x) − V(t, x_1)| ≤ C √ϵ
for a constant C ∈ (0, ∞).  ☐
This proves Theorem 1.

References

1. Fleming, W.H.; Soner, H.M. Controlled Markov Processes and Viscosity Solutions, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
2. Stengel, R.F. Optimal Control and Estimation; Dover Books on Advanced Mathematics; Dover Publications: New York, NY, USA, 1994. [Google Scholar]
  3. Dupuis, P.; Spiliopoulos, K.; Wang, H. Importance sampling for multiscale diffusions. Multiscale Model. Simul. 2012, 10, 1–27. [Google Scholar] [CrossRef]
4. Dupuis, P.; Wang, H. Importance sampling, large deviations, and differential games. Stoch. Stoch. Rep. 2004, 76, 481–508. [Google Scholar] [CrossRef]
  5. Davis, M.H.; Norman, A.R. Portfolio selection with transaction costs. Math. Oper. Res. 1990, 15, 676–713. [Google Scholar] [CrossRef]
  6. Pham, H. Continuous-Time Stochastic Control and Optimization with Financial Applications; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  7. Hartmann, C.; Schütte, C. Efficient rare event simulation by optimal nonequilibrium forcing. J. Stat. Mech. Theor. Exp. 2012, 2012, 11004. [Google Scholar] [CrossRef]
  8. Schütte, C.; Winkelmann, S.; Hartmann, C. Optimal control of molecular dynamics using markov state models. Math. Program. Ser. B 2012, 134, 259–282. [Google Scholar] [CrossRef]
  9. Asplund, E.; Klüner, T. Optimal control of open quantum systems applied to the photochemistry of surfaces. Phys. Rev. Lett. 2011, 106, 140404. [Google Scholar] [CrossRef] [PubMed]
  10. Steinbrecher, A. Optimal Control of Robot Guided Laser Material Treatment. In Progress in Industrial Mathematics at ECMI 2008; Fitt, A.D., Norbury, J., Ockendon, H., Wilson, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 501–511. [Google Scholar]
  11. Zhang, W.; Wang, H.; Hartmann, C.; Weber, M.; Schütte, C. Applications of the cross-entropy method to importance sampling and optimal control of diffusions. SIAM J. Sci. Comput. 2014, 36, A2654–A2672. [Google Scholar] [CrossRef]
  12. Antoulas, A.C. Approximation of Large-Scale Dynamical Systems; SIAM: Philadelphia, PA, USA, 2005. [Google Scholar]
  13. Baur, U.; Benner, P.; Feng, L. Model order reduction for linear and nonlinear systems: A system-theoretic perspective. Arch. Comput. Meth. Eng. 2014, 21, 331–358. [Google Scholar] [CrossRef]
  14. Bensoussan, A.; Blankenship, G. Singular perturbations in stochastic control. In Singular Perturbations and Asymptotic Analysis in Control Systems; Lecture Notes in Control and Information Sciences; Kokotovic, P.V., Bensoussan, A., Blankenship, G.L., Eds.; Springer: Berlin/Heidelberg, Germany, 1987; Volume 90, pp. 171–260. [Google Scholar]
  15. Evans, L.C. The perturbed test function method for viscosity solutions of nonlinear PDE. Proc. R. Soc. Edinb. A 1989, 111, 359–375. [Google Scholar] [CrossRef]
  16. Buckdahn, R.; Hu, Y. Probabilistic approach to homogenizations of systems of quasilinear parabolic PDEs with periodic structures. Nonlinear Anal. 1998, 32, 609–619. [Google Scholar] [CrossRef]
  17. Ichihara, N. A stochastic representation for fully nonlinear PDEs and its application to homogenization. J. Math. Sci. Univ. Tokyo 2005, 12, 467–492. [Google Scholar]
  18. Kushner, H.J. Weak Convergence Methods and Singularly Perturbed Stochastic Control and Filtering Problems; Birkhäuser: Boston, MA, USA, 1990. [Google Scholar]
  19. Kurtz, T.; Stockbridge, R.H. Stationary solutions and forward equations for controlled and singular martingale problems. Electron. J. Probab. 2001, 6, 5. [Google Scholar] [CrossRef]
  20. Kabanov, Y.; Pergamenshchikov, S. Two-Scale Stochastic Systems: Asymptotic Analysis and Control; Springer: Berlin/Heidelberg, Germany; Paris, France, 2003. [Google Scholar]
  21. Kokotovic, P.V. Applications of singular perturbation techniques to control problems. SIAM Rev. 1984, 26, 501–550. [Google Scholar] [CrossRef]
  22. Hartmann, C.; Schäfer-Bung, B.; Zueva, A. Balanced averaging of bilinear systems with applications to stochastic control. SIAM J. Control Optim. 2013, 51, 2356–2378. [Google Scholar] [CrossRef]
  23. Pardalos, P.M.; Yatsenko, V.A. Optimization and Control of Bilinear Systems: Theory, Algorithms, and Applications; Springer: New York, NY, USA, 2010. [Google Scholar]
  24. Hartmann, C.; Latorre, J.; Pavliotis, G.A.; Zhang, W. Optimal control of multiscale systems using reduced-order models. J. Comput. Dyn. 2014, 1, 279–306. [Google Scholar] [CrossRef]
  25. Peng, S. Backward Stochastic Differential Equations and Applications to Optimal Control. Appl. Math. Optim. 1993, 27, 125–144. [Google Scholar] [CrossRef]
  26. Touzi, N. Optimal Stochastic Control, Stochastic Target Problem, and Backward Differential Equation; Springer: Berlin, Germany, 2013. [Google Scholar]
  27. Pardoux, E.; Peng, S. Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14, 55–61. [Google Scholar] [CrossRef]
  28. Bahlali, K.; Kebiri, O.; Khelfallah, N.; Moussaoui, H. One dimensional BSDEs with logarithmic growth application to PDEs. Stochastics 2017, 89, 1061–1081. [Google Scholar] [CrossRef]
  29. Duffie, D.; Epstein, L.G. Stochastic differential utility. Econometrica 1992, 60, 353–394. [Google Scholar] [CrossRef]
  30. El Karoui, N.; Peng, S.; Quenez, M.C. Backward stochastic differential equations in finance. Math. Financ. 1997, 7, 1–71. [Google Scholar] [CrossRef]
  31. Hu, Y.; Imkeller, P.; Müller, M. Utility maximization in incomplete markets. Ann. Appl. Probab. 2005, 15, 1691–1712. [Google Scholar] [CrossRef] [Green Version]
32. Hu, Y.; Peng, S. A stability theorem of backward stochastic differential equations and its application. C. R. Acad. Sci. Paris Sér. I Math. 1997, 324, 1059–1064. [Google Scholar] [CrossRef]
  33. Kobylanski, M. Backward stochastic differential equations and partial differential equations with quadratic growth. Ann. Probab. 2000, 28, 558–602. [Google Scholar] [CrossRef]
  34. Antonelli, F. Backward-forward stochastic differential equations. Ann. Appl. Probab. 1993, 3, 777–793. [Google Scholar] [CrossRef]
35. Bahlali, K.; Gherbal, B.; Mezerdi, B. Existence of optimal controls for systems driven by FBSDEs. Syst. Control Lett. 2011, 60, 344–349. [Google Scholar] [CrossRef]
  36. Bahlali, K.; Kebiri, O.; Mtiraoui, A. Existence of an optimal Control for a system driven by a degenerate coupled Forward-Backward Stochastic Differential Equations. Comptes Rendus Math. 2017, 355, 84–89. [Google Scholar] [CrossRef]
  37. Ma, J.; Protter, P.; Yong, J. Solving Forward-Backward Stochastic Differential Equations Explicitly—A Four Step Scheme. Probab. Theory Relat. Fields 1994, 98, 339–359. [Google Scholar] [CrossRef]
  38. Zhen, W. Forward-backward stochastic differential equations, linear quadratic stochastic optimal control and nonzero sum differential games. J. Syst. Sci. Complex. 2005, 18, 179–192. [Google Scholar]
  39. Hartmann, C.; Schütte, C.; Weber, M.; Zhang, W. Importance sampling in path space for diffusion processes with slow-fast variables. Probab. Theory Relat. Fields 2017, 170, 177–228. [Google Scholar] [CrossRef] [Green Version]
  40. Bally, V. Approximation scheme for solutions of BSDE. In Backward Stochastic Differential Equations; El Karoui, N., Mazliak, L., Eds.; Addison Wesley Longman: Boston, MA, USA, 1997; pp. 177–191. [Google Scholar]
  41. Bender, C.; Steiner, J. Least-Squares Monte Carlo for BSDEs. In Numerical Methods in Finance; Springer: Berlin, Germany, 2012; pp. 257–289. [Google Scholar]
  42. Bouchard, B.; Elie, R.; Touzi, N. Discrete-time approximation of BSDEs and probabilistic schemes for fully nonlinear PDEs. Comput. Appl. Math. 2009, 8, 91–124. [Google Scholar]
  43. Chevance, D. Numerical methods for backward stochastic differential equations. In Numerical Methods in Finance; Publications of the Newton Institute, Cambridge University Press: Cambridge, UK, 1997; pp. 232–244. [Google Scholar]
  44. Hyndman, C.B.; Ngou, P.O. A Convolution Method for Numerical Solution of Backward Stochastic Differential Equations. Methodol. Comput. Appl. Probab. 2017, 19, 1–29. [Google Scholar] [CrossRef]
  45. Hartmann, C. Balanced model reduction of partially-observed Langevin equations: An averaging principle. Math. Comput. Model. Dyn. Syst. 2011, 17, 463–490. [Google Scholar] [CrossRef]
  46. Fleming, W.H. Optimal investment models with minimum consumption criteria. Aust. Econ. Pap. 2005, 44, 307–321. [Google Scholar] [CrossRef]
  47. Budhiraja, A.; Dupuis, P. A variational representation for positive functionals of infinite dimensional Brownian motion. Probab. Math. Stat. 2000, 20, 39–61. [Google Scholar]
  48. Dai Pra, P.; Meneghini, L.; Runggaldier, J.W. Connections between stochastic control and dynamic games. Math. Control Signal Syst. 1996, 9, 303–326. [Google Scholar] [CrossRef]
  49. Pavliotis, G.A.; Stuart, A.M. Multiscale Methods: Averaging and Homogenization; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  50. Anderson, B.D.O.; Liu, Y. Controller reduction: Concepts and approaches. IEEE Trans. Autom. Control 1989, 34, 802–812. [Google Scholar] [CrossRef]
  51. Bensoussan, A.; Boccardo, L.; Murat, F. Homogenization of elliptic equations with principal part not in divergence form and hamiltonian with quadratic growth. Commun. Pure Appl. Math. 1986, 39, 769–805. [Google Scholar] [CrossRef]
  52. Pardoux, E.; Peng, S. Backward stochastic differential equations and quasilinear parabolic partial differential equations. In Stochastic Partial Differential Equations and Their Applications; Lecture Notes in Control and Information Sciences 176; Rozovskii, B.L., Sowers, R.B., Eds.; Springer: Berlin, Germany, 1992. [Google Scholar]
  53. Freidlin, M.; Wentzell, A. Random Perturbations of Dynamical Systems; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  54. Khasminskii, R. Principle of averaging for parabolic and elliptic differential equations and for Markov processes with small diffusion. Theory Probab. Appl. 1963, 8, 1–21. [Google Scholar] [CrossRef]
55. Gobet, E.; Turkedjiev, P. Adaptive importance sampling in least-squares Monte Carlo algorithms for backward stochastic differential equations. Stoch. Proc. Appl. 2017, 127, 1171–1203. [Google Scholar] [CrossRef]
  56. Turkedjiev, P. Numerical Methods for Backward Stochastic Differential Equations of Quadratic and Locally Lipschitz Type. Ph.D. Thesis, Humboldt-Universität zu Berlin, Berlin, Germany, 2013. [Google Scholar]
  57. Kebiri, O.; Neureither, L.; Hartmann, C. Adaptive importance sampling with forward-backward stochastic differential equations. arXiv, 2018; arXiv:1802.04981. [Google Scholar]
  58. Franzke, C.; Majda, A.J.; Vanden-Eijnden, E. Low-order stochastic mode reduction for a realistic barotropic model climate. J. Atmos. Sci. 2005, 62, 1722–1745. [Google Scholar] [CrossRef]
  59. Lall, S.; Marsden, J.; Glavaški, S. A subspace approach to balanced truncation for model reduction of nonlinear control systems. Int. J. Robust. Nonlinear Control 2002, 12, 519–535. [Google Scholar] [CrossRef] [Green Version]
60. Briand, P.; Hu, Y. Probabilistic approach to singular perturbations of semilinear and quasilinear parabolic PDEs. Nonlinear Anal. 1999, 35, 815–831. [Google Scholar] [CrossRef]
61. Bensoussan, A.; Lions, J.L.; Papanicolaou, G. Asymptotic Analysis for Periodic Structures; North-Holland: Amsterdam, The Netherlands, 1978. [Google Scholar]
62. Pardoux, E.; Veretennikov, A.Y. On the Poisson equation and diffusion approximation 3. Ann. Probab. 2005, 33, 1111–1133. [Google Scholar] [CrossRef]
Figure 1. Plot of the mean of E(ϵ) ± its standard deviation σ(E(ϵ)); for comparison, √ϵ is plotted against ϵ on a doubly logarithmic scale: we observe convergence of order 1/2 as predicted by the theory.
