Article

A Stochastic Maximum Principle for Markov Chains of Mean-Field Type

by Salah Eddine Choutri 1,* and Hamidou Tembine 2
1 Department of Mathematics, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden
2 Learning and Game Theory Laboratory, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi, UAE
* Author to whom correspondence should be addressed.
Games 2018, 9(4), 84; https://doi.org/10.3390/g9040084
Submission received: 4 September 2018 / Revised: 16 October 2018 / Accepted: 17 October 2018 / Published: 21 October 2018
(This article belongs to the Special Issue Mean-Field-Type Game Theory)

Abstract:
We derive sufficient and necessary optimality conditions in terms of a stochastic maximum principle (SMP) for controls associated with cost functionals of mean-field type, under dynamics driven by a class of Markov chains of mean-field type which are pure jump processes obtained as solutions of a well-posed martingale problem. As an illustration, we apply the result to generic examples of control problems as well as some applications.

1. Introduction

The goal of this paper is to find sufficient and necessary optimality conditions, in terms of a stochastic maximum principle (SMP), for a set of admissible controls $\bar u$ which minimize payoff functionals of the form
$$ J(u) := \mathbb{E}^u\!\left[\int_0^T f\big(t, x_\cdot, \mathbb{E}^u[\kappa_f(x(t))], u(t)\big)\,dt + h\big(x(T), \mathbb{E}^u[\kappa_h(x(T))]\big)\right], $$
w.r.t. admissible controls $u$, for some given functions $f, h, \kappa_f$, and $\kappa_h$, under dynamics driven by a pure jump process $x$ with state space $I = \{0,1,2,3,\ldots\}$ whose jump intensity under the probability measure $P^u$ is of the form
$$ \lambda^u_{ij}(t) := \lambda_{ij}\big(t, x_\cdot, \mathbb{E}^u[\kappa(x(t))], u(t)\big), \qquad i,j \in I, $$
for some given functions $\lambda$ and $\kappa$, as long as the intensities are predictable. Due to the dependence of the intensities on the mean of (a function of) $x(t)$ under $P^u$, the process $x$ is commonly called a nonlinear Markov chain, or Markov chain of mean-field type, although it does not satisfy the standard Markov property, as explained in the seminal paper by McKean [1] for diffusion processes. The dependence of the intensities on the whole path $x_\cdot$ over the time interval $[0,T]$ makes the jump process cover a large class of real-world applications. For instance, in queuing theory it is desirable that the intensities are functions of $\int_0^t x(s)\,ds$, $\sup_{0\le s\le t} x(s)$, or $\inf_{0\le s\le t} x(s)$.
The Markov chain of mean-field type is obtained as the limit of a system of weakly interacting Markov chains $(x^{l,N},\ l=1,\ldots,N)$ as the size $N$ becomes large. That is,
$$ \lambda_{ij}\big(t, x_\cdot, \mathbb{E}[\kappa(x(t))]\big) = \lim_{N\to\infty} \lambda_{ij}\Big(t, x^{l,N}_\cdot, \frac{1}{N}\sum_{l=1}^N \kappa\big(x^{l,N}(t)\big)\Big), \qquad l \in \{1,\ldots,N\},\ i,j \in I. $$
Such a weak interaction is usually called a mean-field interaction. It occurs when the jump intensities of the Markov chains depend on their empirical mean. When the system's size grows to infinity, the sequence of $N$-indexed empirical means, which describes the states of the system, converges to the expectation $\mathbb{E}[\kappa(x(t))]$ of $\kappa(x(t))$, which evolves according to a McKean–Vlasov equation (or nonlinear Fokker–Planck equation). A more general situation is when the jump intensities of the obtained nonlinear Markov chain depend on the marginal law $P\circ x^{-1}(t)$ of $x(t)$. To keep the content of the paper as simple as possible, we do not treat this situation.
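To make this limit concrete, here is a minimal simulation sketch (our own illustration, not part of the paper): $N$ weakly interacting two-state chains on $\{0,1\}$ whose $1\to 0$ intensity depends on the empirical mean, compared against the limiting McKean–Vlasov ODE for $\mathbb{E}[x(t)]$. The rates `alpha`, `beta` and all numerical values are illustrative assumptions.

```python
import random

def simulate_empirical_mean(n_chains=1000, T=1.0, dt=0.001, alpha=1.0, beta=0.5, seed=0):
    """Euler-discretized simulation of n_chains weakly interacting two-state
    chains on {0, 1}: each chain jumps 0 -> 1 at rate alpha and 1 -> 0 at rate
    beta * (empirical mean), so chains interact only through the empirical mean."""
    rng = random.Random(seed)
    states = [0] * n_chains
    for _ in range(int(round(T / dt))):
        m = sum(states) / n_chains          # empirical mean coupling
        for k in range(n_chains):
            if states[k] == 0:
                if rng.random() < alpha * dt:
                    states[k] = 1
            elif rng.random() < beta * m * dt:
                states[k] = 0
    return sum(states) / n_chains

def mckean_vlasov_mean(T=1.0, dt=0.001, alpha=1.0, beta=0.5):
    """Limiting McKean-Vlasov (nonlinear Fokker-Planck) ODE for p(t) = E[x(t)]:
    dp/dt = alpha * (1 - p) - beta * p^2, the death rate beta * E[x] being felt
    by the fraction p of mass in state 1."""
    p = 0.0
    for _ in range(int(round(T / dt))):
        p += dt * (alpha * (1.0 - p) - beta * p * p)
    return p
```

For moderate $N$ the empirical mean at time $T$ is already close to the ODE value, in line with the law-of-large-numbers results cited below.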
Markov chains of mean-field type have been used as models in many different fields, such as chemistry, physics, biology, economics, epidemics, etc. (e.g., [2,3,4,5,6]). Existence and uniqueness results with bounded and unbounded jump intensities were proven in [7,8], respectively. We refer to [9] for existence and uniqueness of solutions to McKean–Vlasov equations with unbounded jump intensities, and to [10,11] for results related to the law of large numbers for unbounded jump mean-field models, and large deviations for corresponding empirical processes.
The present work is a continuation of [12], where the authors proved the existence and uniqueness of this class of processes in terms of a martingale problem, and derived sufficient conditions (cf. Theorem 4.6 in [12]) for the existence of an optimal control which minimizes J ( u ) for a rather general class of (unbounded) jump intensities. Since the suggested conditions are rather difficult to apply in concrete situations (see Remark 4.7 and Example 4.8 in [12]), we aim in this paper to investigate whether the SMP can yield optimality conditions that are tractable and easy to verify.
While in the usual strong-type control problems the dynamics are given in terms of a process X u which solves a stochastic differential equation (SDE) on a given probability space ( Ω , F , Q ) , the dynamics in our formulation are given in terms of a family of probability measures ( P u , u U ) , where x is the coordinate process (i.e., it does not change with the control u). This type of formulation is usually called a weak-type formulation for control problems.
The main idea in the martingale and dynamic programming approaches to optimal control problems for jump processes (without mean-field coupling) suggested in previous work, including the early papers [13,14,15,16] on the subject (the list of references is far from exhaustive), is to use the Radon–Nikodym density process $L^u$ of $P^u$ w.r.t. some reference probability measure $P$ as dynamics and recast the control problem as a standard one. In this paper, we apply the same idea and recast the control problem as a mean-field-type control problem to which an SMP can be applied. By a Girsanov-type result for pure jump processes, the density process $L^u$ is a martingale and solves a linear SDE driven by some accompanying $P$-martingale $M$. The adjoint process associated with the SMP solves a (Markov chain) backward stochastic differential equation (BSDE) driven by the $P$-martingale $M$, whose existence and uniqueness can be derived using the results by Cohen and Elliott [17,18]. For some linear and quadratic cost functionals, we explicitly solve these BSDEs and derive a closed form of the optimal control.
In Section 2, we briefly recall the basic stochastic calculus for pure jump processes that we use in the sequel. In Section 3, we derive sufficient and necessary optimality conditions for the control problem. As already mentioned, the optimality conditions are derived in terms of a mean-field stochastic maximum principle, where the adjoint equation is a Markov chain BSDE. In Section 4, we illustrate the results using two examples of optimal control problems that involve two-state chains and linear-quadratic cost functionals. We also consider the optimal control of a mean-field version of the Schlögl model for chemical reactions. We consider linear and quadratic cost functionals in all examples for the sake of simplicity, and also because, in these cases, we obtain the optimal controls in closed form.
The obtained results can easily be extended to pure jump processes taking values in more general state spaces, such as $I = \mathbb{Z}^d$, $d \ge 1$.

2. Preliminaries

Let $I := \{0,1,2,\ldots\}$ be equipped with its discrete topology and $\sigma$-field, and let $\Omega := D([0,T], I)$ be the space of functions from $[0,T]$ to $I$ that are right-continuous with left limits at each $t\in[0,T)$ and left-continuous at time $T$. We endow $\Omega$ with the Skorohod metric $d_0$, so that $(\Omega, d_0)$ is a complete separable metric (i.e., Polish) space. Given $t\in[0,T]$ and $\omega\in\Omega$, put $x(t,\omega) := \omega(t)$ and denote by $\mathcal{F}^0_t := \sigma(x(s),\ s\le t)$, $0\le t\le T$, the filtration generated by $x$. Denote by $\mathcal{F}$ the Borel $\sigma$-field over $\Omega$. It is well known that $\mathcal{F}$ coincides with $\sigma(x(s),\ 0\le s\le T)$.
To $x$ we associate the indicator process $I_i(t) = \mathbf{1}_{\{x(t)=i\}}$, whose value is 1 if the chain is in state $i$ at time $t$ and 0 otherwise, and the counting processes $N_{ij}(t)$, $i\neq j$, independent of $x(0)$, such that
$$ N_{ij}(t) = \#\big\{\tau \in (0,t] :\ x(\tau^-) = i,\ x(\tau) = j\big\}, \qquad N_{ij}(0) = 0, $$
which counts the number of jumps from state $i$ into state $j$ during the time interval $(0,t]$. Obviously, since $x$ is right-continuous with left limits, both $I_i$ and $N_{ij}$ are right-continuous with left limits. Moreover, by the relationship
$$ x(t) = \sum_i i\, I_i(t), \qquad I_i(t) = I_i(0) + \sum_{j:\, j\neq i} \big(N_{ji}(t) - N_{ij}(t)\big), \tag{1} $$
the state process, the indicator processes, and the counting processes carry the same information, which is represented by the natural filtration $\mathbb{F}^0 := (\mathcal{F}^0_t,\ 0\le t\le T)$ of $x$. Note that (1) is equivalent to the following useful representation:
$$ x(t) = x(0) + \sum_{i,j:\, i\neq j} (j-i)\, N_{ij}(t). $$
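This representation is easy to check pathwise. The following sketch (a three-state chain with an illustrative Q-matrix of our own choosing, not from the paper) simulates the chain, records the counting processes $N_{ij}$, and verifies $x(T) = x(0) + \sum_{i\neq j}(j-i)\,N_{ij}(T)$:

```python
import random

def simulate_chain_with_counts(T=5.0, seed=1):
    """Simulate a chain on I = {0, 1, 2} for an illustrative Q-matrix G and
    record the counting processes N[i][j]; the identity
    x(T) = x(0) + sum_{i != j} (j - i) * N[i][j] then holds pathwise."""
    G = [[-2.0, 1.5, 0.5],
         [1.0, -1.5, 0.5],
         [0.5, 0.5, -1.0]]          # hypothetical Q-matrix, rows sum to 0
    rng = random.Random(seed)
    x0 = x = 0
    t = 0.0
    N = [[0] * 3 for _ in range(3)]
    while True:
        rate = -G[x][x]
        t += rng.expovariate(rate)  # exponential holding time in state x
        if t > T:
            break
        u, acc = rng.random() * rate, 0.0
        for j in range(3):          # next state j != x w.p. G[x][j] / rate
            if j != x:
                acc += G[x][j]
                if u <= acc:
                    break
        N[x][j] += 1                # one more jump from x to j
        x = j
    return x0, x, N
```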
Let $G = (g_{ij},\ i,j\in I)$, where the $g_{ij}$ are constant entries, be a $Q$-matrix:
$$ g_{ij} > 0,\ i\neq j, \qquad \sum_{j:\, j\neq i} g_{ij} < +\infty, \qquad g_{ii} = -\sum_{j:\, j\neq i} g_{ij}. \tag{3} $$
By Theorem 4.7.3 in [19], or Theorem 20.6 in [20] (for the finite state space), given the $Q$-matrix $G$ and a probability measure $\xi$ over $I$, there exists a unique probability measure $P$ on $(\Omega,\mathcal{F})$ under which the coordinate process $x$ is a time-homogeneous Markov chain with intensity matrix $G$ and starting distribution $\xi$ (i.e., such that $P\circ x^{-1}(0) = \xi$). Equivalently, $P$ solves the martingale problem for $G$ with initial probability distribution $\xi$, meaning that for every function $f$ on $I$, the process defined by
$$ M^f_t := f(x(t)) - f(x(0)) - \int_{(0,t]} (Gf)(x(s))\,ds $$
is a local martingale relative to $(\Omega, \mathcal{F}, \mathbb{F}^0)$, where
$$ Gf(i) := \sum_j g_{ij}\, f(j) = \sum_{j:\, j\neq i} g_{ij}\big(f(j) - f(i)\big), \qquad i \in I, $$
and
$$ Gf(x(s)) = \sum_{i,j:\, j\neq i} I_i(s)\, g_{ij}\big(f(j) - f(i)\big). $$
By Lemma 21.13 in [20], the compensated processes associated with the counting processes $N_{ij}$, defined by
$$ M_{ij}(t) = N_{ij}(t) - \int_{(0,t]} I_i(s)\, g_{ij}\,ds, \qquad M_{ij}(0) = 0, \tag{6} $$
are zero-mean, square-integrable, and mutually orthogonal $P$-martingales whose predictable quadratic variations are
$$ \langle M_{ij}\rangle_t = \int_{(0,t]} I_i(s)\, g_{ij}\,ds. $$
Moreover, at jump times $t$, we have
$$ \Delta M_{ij}(t) = \Delta N_{ij}(t) = I_i(t^-)\, I_j(t). $$
Thus, the optional variation of $M$,
$$ [M](t) = \sum_{0<s\le t} |\Delta M(s)|^2 = \sum_{0<s\le t}\ \sum_{i,j:\, j\neq i} |\Delta M_{ij}(s)|^2, $$
is
$$ [M](t) = \sum_{0<s\le t}\ \sum_{i,j:\, j\neq i} I_i(s^-)\, I_j(s). $$
We call $M := \{M_{ij},\ i\neq j\}$ the accompanying martingale of the counting process $N := \{N_{ij},\ i\neq j\}$, or of the Markov chain $x$.
Denote by $\mathbb{F} := (\mathcal{F}_t)_{0\le t\le T}$ the completion of $\mathbb{F}^0 = (\mathcal{F}^0_t)_{t\le T}$ with the $P$-null sets of $\Omega$. Hereafter, a process from $[0,T]\times\Omega$ into a measurable space is said to be predictable (resp. progressively measurable) if it is predictable w.r.t. the predictable $\sigma$-field on $[0,T]\times\Omega$ (resp. progressively measurable w.r.t. $\mathbb{F}$).
For a real-valued matrix $m(t) := (m_{ij}(t),\ i,j\in I)$ indexed by $I\times I$, we let
$$ \|m\|^2_g(t) := \sum_{i,j:\, i\neq j} |m_{ij}(t)|^2\, g_{ij}\, \mathbf{1}_{\{x(t^-)=i\}} < \infty. $$
Consider the local martingale
$$ W(t) = \int_0^t Z(s)\,dM(s) := \sum_{i,j:\, i\neq j} \int_0^t Z_{ij}(s)\,dM_{ij}(s). $$
Then, the optional variation of the local martingale $W$ is
$$ [W](t) = \sum_{0<s\le t} |Z(s)\,\Delta M(s)|^2 = \sum_{0<s\le t}\ \sum_{i,j:\, i\neq j} |Z_{ij}(s)\,\Delta M_{ij}(s)|^2, $$
and its compensator is
$$ \langle W\rangle_t = \int_{(0,t]} \|Z(s)\|^2_g\,ds. $$
Provided that
$$ \mathbb{E}\int_{(0,T]} \|Z(s)\|^2_g\,ds < \infty, $$
$W$ is a square-integrable martingale and its optional variation satisfies
$$ \mathbb{E}\,[W](t) = \mathbb{E}\sum_{0<s\le t} |Z(s)\,\Delta M(s)|^2 = \mathbb{E}\int_{(0,t]} \|Z(s)\|^2_g\,ds. $$
Moreover, the following Doob inequality holds:
$$ \mathbb{E}\Big[\sup_{0\le t\le T}\Big|\int_0^t Z(s)\,dM(s)\Big|^2\Big] \le 4\,\mathbb{E}\int_{(0,T]} \|Z(s)\|^2_g\,ds. $$
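The isometry $\mathbb{E}[W](t) = \mathbb{E}\int_{(0,t]}\|Z(s)\|^2_g\,ds$, and hence $\mathbb{E}[W(T)^2] = \mathbb{E}\int_{(0,T]}\|Z(s)\|^2_g\,ds$, can be checked by simulation. The sketch below (our own toy setup, not from the paper: a two-state chain with unit reference rates $g_{01}=g_{10}=1$ and constant integrands `z01`, `z10`) estimates both sides from the same sample paths:

```python
import random

def mc_isometry(n_paths=20000, T=1.0, seed=2):
    """Monte Carlo check of E[W(T)^2] = E int ||Z(s)||_g^2 ds for the
    stochastic integral W = int Z dM against the accompanying martingale of a
    two-state chain with g01 = g10 = 1 and constant integrands (toy values)."""
    z = {0: 1.0, 1: 2.0}                  # z[x] plays the role of Z_{x,1-x}
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(n_paths):
        x, t, W, comp = 0, 0.0, 0.0, 0.0
        while True:
            hold = rng.expovariate(1.0)   # both states are left at rate 1
            dt = min(hold, T - t)
            W -= z[x] * dt                # compensator part of int Z dM
            comp += z[x] * z[x] * dt      # int ||Z||_g^2 ds accumulates z^2
            t += dt
            if t >= T:
                break
            W += z[x]                     # jump of the integral: size Z_{x,1-x}
            x = 1 - x
        lhs += W * W
        rhs += comp
    return lhs / n_paths, rhs / n_paths
```

The two Monte Carlo averages agree up to sampling error, as the isometry predicts.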

3. A Stochastic Maximum Principle

We consider controls with values in some subset $U$ of $\mathbb{R}^d$ and let $\mathcal{U}$ be the set of $\mathbb{F}$-progressively measurable processes $u = (u(t),\ 0\le t\le T)$ with values in $U\subset\mathbb{R}^d$. $\mathcal{U}$ is the set of admissible controls. For $u\in\mathcal{U}$, let $P^u$ be the probability measure on $(\Omega,\mathcal{F})$ under which the coordinate process $x$ is a jump process with intensities
$$ \lambda^u_{ij}(t) := \lambda_{ij}\big(t, x_\cdot, \mathbb{E}^u[\kappa(x(t))], u(t)\big), \qquad i,j\in I,\ 0\le t\le T, $$
where, for each $i,j\in I$,
$$ \lambda_{ij}: [0,T]\times\Omega\times\mathbb{R}\times U \longrightarrow \mathbb{R}, \qquad \kappa: I \longrightarrow \mathbb{R}. $$
The cost functional associated to $P^u$ is of the form
$$ J(u) := \mathbb{E}^u\!\left[\int_0^T f\big(t, x_\cdot, \mathbb{E}^u[\kappa_f(x(t))], u(t)\big)\,dt + h\big(x(T), \mathbb{E}^u[\kappa_h(x(T))]\big)\right], $$
where
$$ f: [0,T]\times\Omega\times\mathbb{R}\times U \longrightarrow \mathbb{R}, \qquad h: I\times\mathbb{R} \longrightarrow \mathbb{R}, \qquad \kappa_f: I \longrightarrow \mathbb{R}, \qquad \kappa_h: I \longrightarrow \mathbb{R}. $$
In this section, we propose to characterize minimizers $\bar u$ of $J$, that is, $\bar u\in\mathcal{U}$ satisfying
$$ J(\bar u) = \min_{u\in\mathcal{U}} J(u), \tag{19} $$
in terms of a stochastic maximum principle (SMP). We first state and prove the sufficient optimality conditions. Then, we state the necessary optimality conditions.
Let $P$ be the probability measure on $(\Omega,\mathcal{F})$ under which $x$ is a time-homogeneous Markov chain such that $P\circ x^{-1}(0) = \xi$ and with $Q$-matrix $(g_{ij})_{i\neq j}$ satisfying (3). Then, by a Girsanov-type result for pure jump processes (e.g., [20,21]), it holds that
$$ dP^u := L^u(T)\,dP, $$
where, for $0\le t\le T$,
$$ L^u(t) := \prod_{i,j:\, i\neq j} \exp\left(\int_{(0,t]} \ln\frac{\lambda^u_{ij}(s)}{g_{ij}}\,dN_{ij}(s) - \int_0^t \big(\lambda^u_{ij}(s) - g_{ij}\big)\, I_i(s)\,ds\right), $$
which satisfies
$$ L^u(t) = 1 + \int_{(0,t]} L^u(s^-) \sum_{i,j:\, i\neq j} I_i(s^-)\, \ell^u_{ij}(s)\,dM_{ij}(s), \tag{22} $$
where $\ell^u_{ij}(s) := \ell_{ij}\big(s, x_\cdot, \mathbb{E}^u[\kappa(x(s))], u(s)\big)$ is given by the formula
$$ \ell^u_{ij}(s) = \begin{cases} \lambda^u_{ij}(s)/g_{ij} - 1 & \text{if } i\neq j,\\ 0 & \text{if } i = j, \end{cases} $$
and $(M_{ij})_{i\neq j}$ is the $P$-martingale given in (6). Moreover, the accompanying martingale $M^u = (M^u_{ij})_{i\neq j}$ satisfies
$$ M^u_{ij}(t) = M_{ij}(t) - \int_{(0,t]} \ell^u_{ij}(s)\, I_i(s)\, g_{ij}\,ds. $$
Note that
$$ J(u) = \mathbb{E}\!\left[L^u(T)\int_0^T f\big(t, x_\cdot, \mathbb{E}^u[\kappa_f(x(t))], u(t)\big)\,dt + L^u(T)\, h\big(x(T), \mathbb{E}^u[\kappa_h(x(T))]\big)\right]. $$
Integrating by parts and taking expectations, we obtain
$$ J(u) := \mathbb{E}\!\left[\int_0^T L^u(t)\, f\big(t, x_\cdot, \mathbb{E}[L^u(t)\kappa_f(x(t))], u(t)\big)\,dt + L^u(T)\, h\big(x(T), \mathbb{E}[L^u(T)\kappa_h(x(T))]\big)\right]. \tag{25} $$
We recast our problem of controlling a Markov chain through its intensity matrix to a standard control problem which aims at minimizing the cost functional (25) under the dynamics given by the density process L u which satisfies (22), to which the mean-field stochastic maximum principle in [22] can be applied. The corresponding optimal dynamics are given by the probability measure P ¯ on ( Ω , F ) defined by
$$ d\bar P = L^{\bar u}(T)\,dP, $$
where $L^{\bar u}$ is the associated density process. $(L^{\bar u}, \bar u)$ is called an optimal pair associated with (19).
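The recast cost functional (25) can be sanity-checked numerically: for a two-state chain and a constant control, $J(u)$ computed by simulating $x$ directly under $P^u$ must agree with the weighted average of the cost under the reference measure $P$, the weight being the Girsanov density $L^u(T)$. The sketch below is our own illustration with toy data (running cost $f = u^2/2$, terminal cost $h = x$, up-rate $\alpha$, reference rates equal to 1); none of these choices come from the paper:

```python
import math
import random

def J_direct(c, n_paths=20000, T=1.0, alpha=1.0, seed=5):
    """Estimate J(u) for the constant control u(t) = c by simulating the
    two-state chain on {0,1} directly under P^u (up-rate alpha from state 0,
    down-rate c from state 1), with running cost c^2/2 and terminal cost x(T)."""
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(n_paths):
        x, t = 0, 0.0
        while True:
            t += rng.expovariate(alpha if x == 0 else c)
            if t >= T:
                break
            x = 1 - x
        tot += 0.5 * c * c * T + x
    return tot / n_paths

def J_weighted(c, n_paths=20000, T=1.0, alpha=1.0, seed=6):
    """Same quantity in the weak formulation: simulate x under the reference
    measure P (unit rates) and weight the cost by the Girsanov density L^u(T)."""
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(n_paths):
        x, t, logL = 0, 0.0, 0.0
        while True:
            hold = rng.expovariate(1.0)          # g01 = g10 = 1 under P
            dt = min(hold, T - t)
            lam = alpha if x == 0 else c         # target intensity out of x
            logL -= (lam - 1.0) * dt             # -int (lambda - g) I_i ds
            t += dt
            if t >= T:
                break
            logL += math.log(lam)                # ln(lambda / g) at each jump
            x = 1 - x
        tot += math.exp(logL) * (0.5 * c * c * T + x)
    return tot / n_paths
```

The two estimators agree up to Monte Carlo error, which is exactly the content of the change-of-measure recast.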
For $w\in\{y,\bar y,u\}$, $\psi_w$ denotes the partial derivative of the function $\psi(y,\bar y,u)$ w.r.t. $w$. For $\alpha\in\{\ell, f\}$, we set
$$ \alpha(t) := \alpha\big(t, x_\cdot, \mathbb{E}[L^u(t)\kappa_\alpha(x(t))], u(t)\big), \qquad \bar\alpha(t) := \alpha\big(t, x_\cdot, \mathbb{E}[L^{\bar u}(t)\kappa_\alpha(x(t))], \bar u(t)\big), $$
and we define
$$ h(T) := h\big(x(T), \mathbb{E}[L^u(T)\kappa_h(x(T))]\big), \qquad \bar h(T) := h\big(x(T), \mathbb{E}[L^{\bar u}(T)\kappa_h(x(T))]\big). $$
To the admissible pair of processes $(L^{\bar u}, \bar u)$, we associate the solution $(p,q)$ (if it exists) of the following linear BSDE of mean-field type, known as the first-order adjoint equation:
$$ \begin{cases} dp(t) = -\Big(\langle\bar\ell(t), q(t)\rangle_g - \bar f(t) + \kappa(x(t))\,\mathbb{E}\big[L^{\bar u}(t)\,\langle\bar\ell_{\bar y}(t), q(t)\rangle_g\big] - \kappa_f(x(t))\,\mathbb{E}\big[L^{\bar u}(t)\,\bar f_{\bar y}(t)\big]\Big)\,dt + q(t)\,dM(t),\\[4pt] p(T) = -\bar h(T) - \kappa_h(x(T))\,\mathbb{E}\big[L^{\bar u}(T)\,\bar h_{\bar y}(T)\big]. \end{cases} \tag{27} $$
In the next proposition we give sufficient conditions on $f, h, \ell, \kappa, \kappa_f$, and $\kappa_h$ that guarantee the existence of a unique solution to the BSDE (27).
Proposition 1.
Assume that
(A1) 
For each $i,j\in I$, $i\neq j$,
(a) 
$(\bar y, u)\mapsto \lambda_{ij}(\cdot,\cdot,\bar y,u)$ is differentiable,
(b) 
there exists a positive constant $C_\lambda$ s.t. $P$-a.s., for all $(t,\bar y,u)\in[0,T]\times\mathbb{R}\times U$,
$$ \big|\lambda_{ij}(t,\omega,\bar y,u)\big| + \big|\partial_{\bar y}\lambda_{ij}(t,\omega,\bar y,u)\big| \le C_\lambda. $$
(A2) 
The functions f , h , κ f , and κ h are bounded. f and h are differentiable in y ¯ with bounded derivatives.
Then, the BSDE (27) admits a solution $(p,q)$ consisting of an adapted process $p$ which is right-continuous with left limits and a predictable process $q$, which satisfies
$$ \mathbb{E}\Big[\sup_{t\in[0,T]} |p(t)|^2 + \int_{(0,T]} \|q(s)\|^2_g\,ds\Big] < +\infty. \tag{28} $$
This solution is unique up to indistinguishability for $p$ and up to equality $dP\times g_{ij} I_i(s)\,ds$-almost everywhere for $q$.
Remark 1.
(i) 
Assumptions (A1) and (3) imply that there exists a positive constant $C$ s.t., for all $t\in[0,T]$,
$$ \|\ell^u(t)\|_g + \|\ell^u_{\bar y}(t)\|_g \le C \quad P\text{-a.s.} $$
(ii) 
By Theorem T11 (Chapter VII) in [21], the uniform boundedness of $(\lambda_{ij})_{i\neq j}$ implies that, for each $u\in\mathcal{U}$,
$$ \tilde C := \sup_{0\le t\le T} \mathbb{E}\big[(L^u(t))^2\big] < \infty. $$
Proof. 
Assumptions (A1) and (A2) make the driver of the BSDE (27) Lipschitz continuous in $q$. The proof is similar to that of Theorem 3.1 for the Brownian-motion-driven mean-field BSDE derived in [23], considering the norm
$$ \|(p,q)\|^2_\beta := \mathbb{E}\int_0^T e^{\beta t}\big(|p(t)|^2 + \|q(t)\|^2_g\big)\,dt, $$
where $\beta > 0$, along with the Itô–Stieltjes formula for purely discontinuous semimartingales. For the sake of completeness, we give a proof in Appendix A. □
Remark 2.
(i) 
The boundedness imposed on $f$ and $h$ and on their derivatives is strong and can be considerably weakened using standard truncation techniques.
(ii) 
If $\ell_{\bar y} = 0$ (i.e., the intensity does not contain any mean-field coupling), the BSDE (27) becomes standard. Thanks to Theorem 3.10 in [12], it is solvable by imposing only conditions similar to (H1)–(H3) therein.
(iii) 
If $\ell_{\bar y} \neq 0$ (i.e., the intensity is of mean-field type), we do not know whether we can relax the imposed boundedness of $\ell$, $\kappa$, and $\ell_{\bar y}$, because without this condition the standard comparison theorem for Markov chain BSDEs simply does not apply to such drivers in general.
Let $(L^{\bar u}, \bar u)$ be an admissible pair and $(p,q)$ be the associated first-order adjoint process, solution of (27).
For $v\in U$, we introduce the Hamiltonian associated to our control problem:
$$ H(t,v) := L^{\bar u}(t)\Big(\big\langle \ell\big(t, x_\cdot, \mathbb{E}[L^{\bar u}(t)\kappa(x(t))], v\big),\, q(t)\big\rangle_g - f\big(t, x_\cdot, \mathbb{E}[L^{\bar u}(t)\kappa_f(x(t))], v\big)\Big). $$
Next, we state the SMP sufficient and necessary optimality conditions, but we only prove the sufficient optimality case; the proof of the necessary optimality conditions is tedious and more involved, but by now "standard", and can be derived following the same steps as in [22,24,25].
In the next two theorems, we assume that (A1) and (A2) of Proposition 1 hold.
Theorem 1 (Sufficient optimality conditions).
Let $(L^{\bar u}, \bar u)$ be an admissible pair and $(p,q)$ be the associated first-order adjoint process which satisfies (27) and (28). Assume:
(A4) 
The set of controls $U$ is a convex body of $\mathbb{R}^d$ (i.e., $U$ is convex and has a nonempty interior), and the functions $\ell$ and $f$ are differentiable in $u$.
(A5) 
The functions $(y,\bar y,u)\mapsto y\,\ell(\cdot,\cdot,\bar y,u)$ and $(y,\bar y,u)\mapsto y\,f(\cdot,\cdot,\bar y,u)$ are concave in $(y,\bar y,u)$ for a.e. $t\in[0,T]$, $P$-almost surely.
(A6) 
The function $(y,\bar y)\mapsto y\,h(\cdot,\bar y)$ is convex.
If the admissible control $\bar u$ satisfies
$$ H(t,\bar u(t)) = \max_{v\in U} H(t,v), \qquad \text{a.e. } t\in[0,T],\ P\text{-a.s.}, \tag{32} $$
then the pair $(L^{\bar u}, \bar u)$ is optimal.
Proof. 
We want to show that if the pair $(L^{\bar u}, \bar u)$ satisfies (32), then
$$ J(u) - J(\bar u) = \mathbb{E}\!\left[\int_0^T \big(L^u(t)\, f(t) - L^{\bar u}(t)\,\bar f(t)\big)\,dt + L^u(T)\, h(T) - L^{\bar u}(T)\,\bar h(T)\right] \ge 0. $$
Since $(y,\bar y)\mapsto y\,h(\cdot,\bar y)$ is convex, we have
$$ \mathbb{E}\big[L^u(T) h(T) - L^{\bar u}(T)\bar h(T)\big] \ge \mathbb{E}\Big[\big(\bar h(T) + \kappa_h(x(T))\,\mathbb{E}\big[L^{\bar u}(T)\bar h_{\bar y}(T)\big]\big)\big(L^u(T) - L^{\bar u}(T)\big)\Big] = -\mathbb{E}\big[p(T)\big(L^u(T) - L^{\bar u}(T)\big)\big]. $$
Integrating by parts and using (27), we obtain
$$ \begin{aligned} \mathbb{E}\big[p(T)\big(L^u(T) - L^{\bar u}(T)\big)\big] &= \mathbb{E}\int_0^T \Big[\big(L^u(t^-) - L^{\bar u}(t^-)\big)\,dp(t) + p(t^-)\,d\big(L^u - L^{\bar u}\big)(t) + d\big[L^u - L^{\bar u},\, p\big](t)\Big]\\ &= \mathbb{E}\int_0^T \Big[-\Big(\langle\bar\ell(t), q(t)\rangle_g - \bar f(t) + \kappa(x(t))\,\mathbb{E}\big[L^{\bar u}(t)\langle\bar\ell_{\bar y}(t), q(t)\rangle_g\big] - \kappa_f(x(t))\,\mathbb{E}\big[L^{\bar u}(t)\bar f_{\bar y}(t)\big]\Big)\big(L^u(t) - L^{\bar u}(t)\big)\\ &\qquad\qquad + \big\langle L^u(t)\ell(t) - L^{\bar u}(t)\bar\ell(t),\, q(t)\big\rangle_g\Big]\,dt. \end{aligned} $$
We introduce the following "Hamiltonian" function:
$$ H(t, x_\cdot, y, \bar y, u, z) := \big\langle y\,\ell(t, x_\cdot, \bar y, u),\, z\big\rangle_g - y\, f(t, x_\cdot, \bar y, u). $$
Furthermore, for $u$ and $\bar u$ in $\mathcal{U}$, we set
$$ \begin{aligned} H(t) &:= \big\langle L^u(t)\,\ell\big(t, x_\cdot, \mathbb{E}[L^u(t)\kappa(x(t))], u(t)\big),\, q(t)\big\rangle_g - L^u(t)\, f\big(t, x_\cdot, \mathbb{E}[L^u(t)\kappa_f(x(t))], u(t)\big),\\ \bar H(t) &:= \big\langle L^{\bar u}(t)\,\ell\big(t, x_\cdot, \mathbb{E}[L^{\bar u}(t)\kappa(x(t))], \bar u(t)\big),\, q(t)\big\rangle_g - L^{\bar u}(t)\, f\big(t, x_\cdot, \mathbb{E}[L^{\bar u}(t)\kappa_f(x(t))], \bar u(t)\big). \end{aligned} $$
Since $(y,\bar y,u)\mapsto y\,\ell(\cdot,\bar y,u)$ and $(y,\bar y,u)\mapsto y\,f(\cdot,\bar y,u)$ are concave, we have
$$ H(t) - \bar H(t) \le \bar H_y(t)\big(L^u(t) - L^{\bar u}(t)\big) + \bar H_{\bar y}(t)\,\mathbb{E}\big[\kappa(x(t))\big(L^u(t) - L^{\bar u}(t)\big)\big] + \bar H_u(t)\cdot\big(u(t) - \bar u(t)\big). $$
Since, by (32), $\bar H_u(t) = H_u(t, \bar u(t)) = 0$ for a.e. $t\in[0,T]$, we obtain
$$ \mathbb{E}\big[H(t) - \bar H(t)\big] \le \mathbb{E}\Big[\Big(\langle\bar\ell(t), q(t)\rangle_g - \bar f(t) + \kappa(x(t))\,\mathbb{E}\big[L^{\bar u}(t)\langle\bar\ell_{\bar y}(t), q(t)\rangle_g\big] - \kappa_f(x(t))\,\mathbb{E}\big[L^{\bar u}(t)\bar f_{\bar y}(t)\big]\Big)\big(L^u(t) - L^{\bar u}(t)\big)\Big] $$
for a.e. $t\in[0,T]$.
Therefore,
$$ \mathbb{E}\big[L^u(T) h(T) - L^{\bar u}(T)\bar h(T)\big] \ge \mathbb{E}\int_0^T \Big(H(t) - \bar H(t) - \big\langle L^u(t)\ell(t) - L^{\bar u}(t)\bar\ell(t),\, q(t)\big\rangle_g\Big)\,dt. $$
Hence,
$$ J(u) - J(\bar u) \ge \mathbb{E}\int_0^T \Big(H(t) - \bar H(t) + L^u(t) f(t) - L^{\bar u}(t)\bar f(t) - \big\langle L^u(t)\ell(t) - L^{\bar u}(t)\bar\ell(t),\, q(t)\big\rangle_g\Big)\,dt = 0. $$
 □
Theorem 2
(Necessary optimality conditions (Verification Theorem)). If $(L^{\bar u}, \bar u)$ is an optimal pair of the control problem (19) and there is a unique pair of $\mathbb{F}$-adapted processes $(p,q)$ associated to $(L^{\bar u}, \bar u)$ which satisfies (27) and (28), then
$$ H(t, \bar u(t)) = \max_{v\in U} H(t,v), \qquad \text{a.e. } t\in[0,T],\ P\text{-a.s.} $$
Remark 3.
Unfortunately, the sufficient optimality conditions can rarely be satisfied in practice, because the convexity conditions imposed on the involved coefficients do not always hold, even for the simplest examples: assume $\ell$ and $f$ are without mean-field coupling and linear in the control $u$. Then, neither $(y,u)\mapsto y\,\ell(\cdot,u)$ nor $(y,u)\mapsto y\,f(\cdot,u)$ is concave in $(y,u)$. However, the verification theorem, in terms of necessary optimality conditions, holds for a fairly general class of functions with sufficient smoothness. Hence, if we can solve the associated BSDEs, the necessary optimality conditions can be useful.

4. Numerical Examples

In this section we first solve the adjoint equation associated with an optimal control problem for a standard two-state Markov chain; we then extend the problem to a two-state Markov chain of mean-field type. As mentioned in Remark 3, whether the sufficient or the necessary conditions apply depends on the smoothness of the involved functions. Not all the functions involved in the next examples satisfy the convexity conditions imposed in Theorem 1.
Example 1.
Optimal Control of a Standard Two-State Markov Chain.
We study the optimal control of a simple Markov chain $x$ whose state space is $\{a,b\}$, where $0\le a < b$ are integers, and whose jump intensity matrix is
$$ \lambda^u(t) = \begin{pmatrix} -\alpha & \alpha\\ u(t) & -u(t) \end{pmatrix}, $$
where $\alpha$ is a given positive constant intensity and $u$ is the control process, assumed to be nonnegative, bounded, and predictable. Let $P$ be the probability measure under which the chain $x$ has intensity matrix
$$ G = \begin{pmatrix} -g_{ab} & g_{ab}\\ g_{ba} & -g_{ba} \end{pmatrix}, \qquad g_{ab}, g_{ba} > 0. $$
Further, let $L^u(t) = \frac{dP^u}{dP}\big|_{\mathcal{F}_t}$ be the density process given by (22), where $\ell$ is defined by
$$ \ell^u_{ij}(t) = \begin{cases} \lambda^u_{ij}(t)/g_{ij} - 1 & \text{if } i\neq j,\\ 0 & \text{if } i = j. \end{cases} $$
The control problem we want to solve consists of finding the optimal control $\bar u$ that minimizes the linear-quadratic cost functional
$$ J(u) = \mathbb{E}^u\!\left[\frac{1}{2}\int_0^T u^2(t)\,dt + h(x(T))\right], \qquad h(b) \ge h(a). $$
Given a control $v\in\mathcal{U}$, consider the Hamiltonian
$$ H\big(t, L^{\bar u}(t), q(t), v\big) := L^{\bar u}(t)\Big(\langle \ell^v(t), q(t)\rangle_g - \frac{1}{2}v^2\Big) =: H(t,v), $$
where
$$ \langle \ell^v(t), q(t)\rangle_g = q_{ab}(t)\,(\alpha - g_{ab})\, I_a(t) + q_{ba}(t)\,(v - g_{ba})\, I_b(t). $$
By the first-order optimality conditions, an optimal control $\bar u$ is a solution of the equation $\partial H(t,v)/\partial v = 0$, which implies
$$ 0 = \frac{\partial}{\partial v}\langle \ell^v(t), q(t)\rangle_g - v = q_{ba}(t)\, I_b(t) - v. $$
The optimal control is thus
$$ \bar u(t) = q_{ba}(t)\, I_b(t), \tag{36} $$
where, for each $t$, $q_{ba}(t) \ge 0$, since $\bar u(t) \ge 0$.
It remains to identify $q_{ba}(t)$. Consider the associated adjoint equation given by
$$ \begin{cases} dp(t) = -\Big(\langle \ell^{\bar u}(t), q(t)\rangle_g - \frac{1}{2}\bar u^2(t)\Big)\,dt + q_{ab}(t)\,dM_{ab}(t) + q_{ba}(t)\,dM_{ba}(t), \qquad 0\le t < T,\\[4pt] p(T) = -h(x(T)). \end{cases} $$
In view of (36), the driver reads
$$ \langle \ell^{\bar u}(t), q(t)\rangle_g - \frac{1}{2}\bar u^2(t) = q_{ab}(t)\,(\alpha - g_{ab})\, I_a(t) + q_{ba}(t)\Big(\frac{1}{2}q_{ba}(t) - g_{ba}\Big)\, I_b(t). $$
The adjoint equation becomes
$$ dp(t) = q_{ab}(t)\Big[dM_{ab}(t) - (\alpha - g_{ab})\, I_a(t)\,dt\Big] + q_{ba}(t)\Big[dM_{ba}(t) - \Big(\frac{1}{2}q_{ba}(t) - g_{ba}\Big)\, I_b(t)\,dt\Big]. $$
Now, considering the probability measure $\tilde P$ under which $x$ is a Markov chain whose jump intensity matrix is
$$ \tilde G(t) = \begin{pmatrix} -\alpha & \alpha\\ \tfrac{1}{2}q_{ba}(t) & -\tfrac{1}{2}q_{ba}(t) \end{pmatrix}, $$
the processes defined by
$$ d\tilde M_{ab}(t) = dM_{ab}(t) - (\alpha - g_{ab})\, I_a(t)\,dt, \qquad d\tilde M_{ba}(t) = dM_{ba}(t) - \Big(\tfrac{1}{2}q_{ba}(t) - g_{ba}\Big)\, I_b(t)\,dt, $$
are $\tilde P$-martingales having the same jumps as the martingales $M_{ij}$:
$$ \Delta\tilde M_{ab}(t) = \Delta M_{ab}(t) = I_a(t^-)\, I_b(t), \qquad \Delta\tilde M_{ba}(t) = \Delta M_{ba}(t) = I_b(t^-)\, I_a(t), $$
and
$$ dp(t) = q_{ab}(t)\,d\tilde M_{ab}(t) + q_{ba}(t)\,d\tilde M_{ba}(t). \tag{39} $$
This yields
$$ \Delta p(t) = q_{ab}(t)\, I_a(t^-)\, I_b(t) + q_{ba}(t)\, I_b(t^-)\, I_a(t). \tag{41} $$
Integrating (39) and then taking conditional expectations yields
$$ p(t) = -\tilde{\mathbb{E}}\big[h(x(T))\,\big|\,\mathcal{F}_t\big]. $$
Therefore,
$$ \Delta p(t) = -\Delta\tilde{\mathbb{E}}\big[h(x(T))\,\big|\,\mathcal{F}_t\big]. $$
Under the probability measure $\tilde P$,
$$ h(x(T)) = h(x(t)) + \int_t^T \Big[\alpha\big(h(b) - h(a)\big)\, I_a(s) + \frac{1}{2}q_{ba}(s)\big(h(a) - h(b)\big)\, I_b(s)\Big]\,ds + \int_t^T \big(h(b) - h(a)\big)\,d\tilde M_{ab}(s) + \int_t^T \big(h(a) - h(b)\big)\,d\tilde M_{ba}(s). $$
Taking conditional expectations, we obtain
$$ \tilde{\mathbb{E}}\big[h(x(T))\,\big|\,\mathcal{F}_t\big] = h(x(t)) + \int_t^T \tilde{\mathbb{E}}\Big[\alpha\big(h(b) - h(a)\big)\, I_a(s) + \frac{1}{2}q_{ba}(s)\big(h(a) - h(b)\big)\, I_b(s)\,\Big|\,\mathcal{F}_t\Big]\,ds, $$
and
$$ \Delta\tilde{\mathbb{E}}\big[h(x(T))\,\big|\,\mathcal{F}_t\big] = \Delta h(x(t)) = \big(h(b) - h(a)\big)\, I_a(t^-)\, I_b(t) + \big(h(a) - h(b)\big)\, I_b(t^-)\, I_a(t), $$
which, in view of (41), implies that
$$ q_{ab}(t) = h(a) - h(b), \qquad q_{ba}(t) = h(b) - h(a). $$
Therefore,
$$ \bar u(t) = \big(h(b) - h(a)\big)\, I_b(t) = h(b)\, I_b(t) - h(a) + h(a)\, I_a(t) = h(x(t)) - h(a), $$
which yields the following explicit form of the optimal control:
$$ \bar u(t) = h(x(t)) - h(a). $$
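As a quick numerical check (our own sketch with toy data $a=0$, $b=1$, $h(x)=x$, $\alpha=1$, $T=1$, none of which come from the paper), one can simulate the chain under the feedback control $\bar u(t) = h(x(t)) - h(a)$ and compare the Monte Carlo cost with that of a few constant controls; the feedback control should come out cheapest:

```python
import random

def cost_feedback(n_paths=20000, T=1.0, alpha=1.0, seed=4):
    """Monte Carlo cost of the closed-form feedback u(t) = h(x(t)) - h(a) with
    a = 0, b = 1, h(x) = x: the control equals 1 in state 1 and 0 in state 0,
    so the chain goes up at rate alpha, down at rate 1, and the running cost
    u^2 / 2 accrues only while in state 1."""
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(n_paths):
        x, t, run = 0, 0.0, 0.0
        while True:
            hold = rng.expovariate(alpha if x == 0 else 1.0)
            dt = min(hold, T - t)
            if x == 1:
                run += 0.5 * dt      # (1/2) u^2 with u = 1 in state 1
            t += dt
            if t >= T:
                break
            x = 1 - x
        tot += run + x               # running cost + terminal cost h(x(T)) = x(T)
    return tot / n_paths

def cost_constant(c, n_paths=20000, T=1.0, alpha=1.0, seed=4):
    """Cost of the constant control u(t) = c (down-rate c from state 1); the
    running cost c^2 / 2 is paid at all times, whatever the state."""
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(n_paths):
        x, t = 0, 0.0
        while True:
            t += rng.expovariate(alpha if x == 0 else c)
            if t >= T:
                break
            x = 1 - x
        tot += 0.5 * c * c * T + x
    return tot / n_paths
```

This is only a consistency illustration under the stated toy data, not a proof of optimality; the feedback cost beats the constant-control costs by a clear margin.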
In the next two examples we highlight the effect of the mean-field coupling in both the jump intensity and the cost functional on the optimal control.
Example 2.
Mean-Field Optimal Control of a Two-State Markov Chain.
We consider the same chain as in the first example, but with the following mean-field-type jump intensities, for $t\in[0,T]$:
$$ \lambda^u(t) = \begin{pmatrix} -\alpha & \alpha\\ u(t) + \mathbb{E}^u[x(t)] & -u(t) - \mathbb{E}^u[x(t)] \end{pmatrix}, \qquad \alpha > 0, \quad u(t) + \mathbb{E}^u[x(t)] \ge 0, $$
and we want to minimize the cost functional
$$ J(u) = \mathbb{E}^u\!\left[\frac{1}{2}\int_0^T u^2(t)\,dt\right] + \mathrm{Var}^u(x(T)), $$
where $\mathrm{Var}^u(x(T))$ denotes the variance of $x(T)$ under the probability $P^u$, defined by
$$ \mathrm{Var}^u(x(T)) := \mathbb{E}^u\Big[\big(x(T) - \mathbb{E}^u[x(T)]\big)^2\Big]. $$
Given a control $v\in\mathcal{U}$, consider the Hamiltonian
$$ H(t,v) := L^{\bar u}(t)\Big(\langle \ell^v(t), q(t)\rangle_g - \frac{1}{2}v^2\Big), $$
where
$$ \langle \ell^v(t), q(t)\rangle_g = q_{ab}(t)\,(\alpha - g_{ab})\, I_a(t) + q_{ba}(t)\big(v + \mathbb{E}^{\bar u}[x(t)] - g_{ba}\big)\, I_b(t). $$
Performing calculations similar to those in Example 1, we find that the optimal control is given by
$$ \bar u(t) = q_{ba}(t)\, I_b(t). \tag{44} $$
We now identify $q_{ba}$. The associated adjoint equation is given by
$$ \begin{cases} dp(t) = -\Big(\langle \ell^{\bar u}(t), q(t)\rangle_g - \frac{1}{2}\bar u^2(t) + x(t)\,\mathbb{E}^{\bar u}\big[\bar H_{\bar y}(t)\big]\Big)\,dt + q_{ab}(t)\,dM_{ab}(t) + q_{ba}(t)\,dM_{ba}(t),\\[4pt] p(T) = -\big(x(T) - \mathbb{E}^{\bar u}[x(T)]\big)^2. \end{cases} $$
In view of (44), the driver reads
$$ \langle \ell^{\bar u}(t), q(t)\rangle_g - \frac{1}{2}\bar u^2(t) + x(t)\,\mathbb{E}^{\bar u}\big[\bar H_{\bar y}(t)\big] = q_{ba}(t)\Big(\frac{1}{2}q_{ba}(t) + \mathbb{E}^{\bar u}[x(t)] - g_{ba}\Big)\, I_b(t) + q_{ab}(t)\,(\alpha - g_{ab})\, I_a(t) + x(t)\,\mathbb{E}^{\bar u}\big[q_{ba}(t)\, I_b(t)\big]. $$
The adjoint equation becomes
$$ dp(t) = q_{ab}(t)\Big[dM_{ab}(t) - (\alpha - g_{ab})\, I_a(t)\,dt\Big] - x(t)\,\mathbb{E}^{\bar u}\big[q_{ba}(t)\, I_b(t)\big]\,dt + q_{ba}(t)\Big[dM_{ba}(t) - \Big(\frac{1}{2}q_{ba}(t) + \mathbb{E}^{\bar u}[x(t)] - g_{ba}\Big)\, I_b(t)\,dt\Big]. $$
Consider the probability measure $\tilde P$ under which $x$ is a Markov chain whose jump intensity matrix is
$$ \tilde G(t) = \begin{pmatrix} -\alpha & \alpha\\ \tfrac{1}{2}q_{ba}(t) + \mathbb{E}^{\bar u}[x(t)] & -\tfrac{1}{2}q_{ba}(t) - \mathbb{E}^{\bar u}[x(t)] \end{pmatrix}, \qquad \frac{1}{2}q_{ba}(t) + \mathbb{E}^{\bar u}[x(t)] \ge 0. $$
This change of measure yields the $\tilde P$-martingales
$$ d\tilde M_{ab}(t) = dM_{ab}(t) - (\alpha - g_{ab})\, I_a(t)\,dt, \qquad d\tilde M_{ba}(t) = dM_{ba}(t) - \Big(\frac{1}{2}q_{ba}(t) + \mathbb{E}^{\bar u}[x(t)] - g_{ba}\Big)\, I_b(t)\,dt, $$
and
$$ dp(t) = -x(t)\,\mathbb{E}^{\bar u}\big[q_{ba}(t)\, I_b(t)\big]\,dt + q_{ab}(t)\,d\tilde M_{ab}(t) + q_{ba}(t)\,d\tilde M_{ba}(t). \tag{46} $$
This yields
$$ \Delta p(t) = q_{ab}(t)\, I_a(t^-)\, I_b(t) + q_{ba}(t)\, I_b(t^-)\, I_a(t). \tag{47} $$
Integrating (46) and then taking conditional expectations yields
$$ p(t) = -\tilde{\mathbb{E}}\Big[\big(x(T) - \mathbb{E}^{\bar u}[x(T)]\big)^2\,\Big|\,\mathcal{F}_t\Big] + \tilde{\mathbb{E}}\Big[\int_t^T x(s)\,\mathbb{E}^{\bar u}\big[q_{ba}(s)\, I_b(s)\big]\,ds\,\Big|\,\mathcal{F}_t\Big]. $$
Therefore,
$$ \Delta p(t) = -\Delta\tilde{\mathbb{E}}\Big[\big(x(T) - \mathbb{E}^{\bar u}[x(T)]\big)^2\,\Big|\,\mathcal{F}_t\Big]. \tag{48} $$
Next, we compute the right-hand side of (48), and then identify $q_{ba}$ by matching.
Set $\bar\mu(t) := \mathbb{E}^{\bar u}[x(t)]$ and $\phi(t, x(t)) := \big(x(t) - \bar\mu(t)\big)^2$. Under $\tilde P$, Dynkin's formula yields
$$ \phi(T, x(T)) = \phi(t, x(t)) + \int_t^T \Big(\frac{\partial\phi}{\partial s} + \tilde G\phi\Big)(s, x(s))\,ds + \tilde M^\phi_T - \tilde M^\phi_t. $$
Taking conditional expectations yields
$$ \tilde{\mathbb{E}}\big[\phi(T, x(T))\,\big|\,\mathcal{F}_t\big] = \phi(t, x(t)) + \int_t^T \tilde{\mathbb{E}}\Big[\Big(\frac{\partial\phi}{\partial s} + \tilde G\phi\Big)(s, x(s))\,\Big|\,\mathcal{F}_t\Big]\,ds, $$
and
$$ \Delta\tilde{\mathbb{E}}\big[\phi(T, x(T))\,\big|\,\mathcal{F}_t\big] = \Delta\phi(t, x(t)), $$
where
$$ \begin{aligned} \Delta\phi(t, x(t)) &= \sum_{i,j:\, i\neq j} \big(\phi(t,j) - \phi(t,i)\big)\, I_i(t^-)\, I_j(t) \qquad (i,j\in\{a,b\})\\ &= \Big[(b^2 - a^2) - 2\bar\mu(t)(b-a)\Big]\, I_a(t^-)\, I_b(t) + \Big[(a^2 - b^2) - 2\bar\mu(t)(a-b)\Big]\, I_b(t^-)\, I_a(t). \end{aligned} $$
Therefore,
$$ \Delta p(t) = -\Delta\tilde{\mathbb{E}}\Big[\big(x(T) - \bar\mu(T)\big)^2\,\Big|\,\mathcal{F}_t\Big] = \Big[(a^2 - b^2) + 2\bar\mu(t)(b-a)\Big]\, I_a(t^-)\, I_b(t) + \Big[(b^2 - a^2) + 2\bar\mu(t)(a-b)\Big]\, I_b(t^-)\, I_a(t). \tag{49} $$
Matching (47) with (49) yields
$$ q_{ab}(t) = (a^2 - b^2) + 2\bar\mu(t)(b-a), \qquad q_{ba}(t) = (b^2 - a^2) + 2\bar\mu(t)(a-b). $$
Hence,
$$ \bar u(t) = \Big[(b^2 - a^2) + 2\bar\mu(t)(a-b)\Big]\, I_b(t). $$
Noting that $a \le \bar\mu(t) \le b$, to guarantee that both $\lambda^{\bar u}(t)$ and $\tilde G(t)$ above are indeed intensity matrices, it suffices to impose that
$$ 0 \le \bar\mu(t) \le \frac{1}{2}(a+b). \tag{50} $$
We further characterize the optimal control $\bar u(t)$ by finding $\bar\mu(t)$ satisfying (50). Indeed, under $P^{\bar u}$, $x$ has the representation
$$ x(t) = x(0) + \int_0^t \Big[\alpha(b-a)\, I_a(s) + \bar\mu(s)(a-b)\, I_b(s) + \bar u(s)(a-b)\, I_b(s)\Big]\,ds + \int_0^t (b-a)\,dM^{\bar u}_{ab}(s) + \int_0^t (a-b)\,dM^{\bar u}_{ba}(s). $$
Taking expectations under $P^{\bar u}$ yields
$$ \bar\mu(t) = \bar\mu(0) + \mathbb{E}^{\bar u}\!\left[\int_0^t \Big(\alpha(b-a)\, I_a(s) + \bar\mu(s)(a-b)\, I_b(s) + (a-b)\,\bar u(s)\, I_b(s)\Big)\,ds\right]. \tag{51} $$
In particular, the mapping $t\mapsto\bar\mu(t)$ is absolutely continuous. Using the facts that $(a-b)\, I_b(t) = a - x(t)$ and $(b-a)\, I_a(t) = b - x(t)$, Equation (51) becomes
$$ \begin{aligned} \bar\mu(t) &= \bar\mu(0) + \int_0^t \Big[\alpha\big(b - \bar\mu(s)\big) + \bar\mu(s)\big(a - \bar\mu(s)\big)\Big]\,ds + \int_0^t \mathbb{E}^{\bar u}\big[(a-b)\,\bar u(s)\, I_b(s)\big]\,ds\\ &= \bar\mu(0) + \int_0^t \Big[\alpha\big(b - \bar\mu(s)\big) + \bar\mu(s)\big(a - \bar\mu(s)\big)\Big]\,ds + \int_0^t \mathbb{E}^{\bar u}\Big[(a-b)\, I_b(s)\Big((b^2 - a^2) + 2\bar\mu(s)(a-b)\Big)\Big]\,ds\\ &= \bar\mu(0) + \int_0^t \Big[\alpha\big(b - \bar\mu(s)\big) + \bar\mu(s)\big(a - \bar\mu(s)\big)\Big]\,ds + \int_0^t \Big[(b^2 - a^2)\big(a - \bar\mu(s)\big) + 2(a-b)\,\bar\mu(s)\big(a - \bar\mu(s)\big)\Big]\,ds\\ &= \bar\mu(0) + \int_0^t \Big[\alpha b + a(b^2 - a^2) + \big(2(b-a) - 1\big)\bar\mu^2(s) + \big(3a^2 + a(1-2b) - b^2 - \alpha\big)\bar\mu(s)\Big]\,ds, \end{aligned} $$
with
$$ A := 2(b-a) - 1, \qquad B := 3a^2 + a(1-2b) - b^2 - \alpha, \qquad C := \alpha b + a(b^2 - a^2). $$
Thus, in view of (50), $\bar\mu$ should satisfy the following constrained Riccati equation:
$$ \dot{\bar\mu}(t) = A\,\bar\mu^2(t) + B\,\bar\mu(t) + C, \qquad \bar\mu(0) = m_0, \qquad 0 \le \bar\mu(t) \le \frac{1}{2}(a+b), \tag{52} $$
where $m_0$ is a given initial value. As is well known, without the imposed constraint on $\bar\mu$, the Riccati equation admits an explicit solution that may explode in finite time unless the coefficients $a, b, \alpha$ and $m_0$ stay within certain ranges. With the imposed constraint on $\bar\mu$, these ranges may become tighter. Below, we illustrate this with a few cases. As shown in Table 1, Table 2, Table 3, Table 4 and Table 5 below, for low values of $\alpha$, the ODE (52) can be solved for any time horizon. How low the intensity must be depends mainly on the size of $b$ and of $b-a$. The larger $b$ is, the wider the range of $\alpha$ for which the ODE is solvable. In particular, when $a = 0$ and $b = 1$, (52) is solvable for any time when $\alpha = 0.1, 0.2$. For larger values of $\alpha$, the ODE violates the constraint proportionally faster.
The results also show that the initial condition affects the time horizon $T$: starting from values reasonably close to $\tfrac{a+b}{2}$, the ODE (52) is solvable only on noticeably shorter time horizons than when starting from values reasonably close to zero.
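As a quick numerical check on the tabulated solvability times, the constrained ODE (52) can be integrated with a simple forward Euler scheme, recording the first time the constraint $0 \le \bar\mu(t) \le \tfrac{1}{2}(a+b)$ fails. The sketch below is illustrative only: the function name, step size, and default horizon are our own choices, and the coefficients $A, B, C$ are taken from the definitions in the text.

```python
# Sketch: forward-Euler integration of the constrained Riccati ODE (52).
# Returns the (approximate) first time the constraint 0 <= mu <= (a+b)/2
# is violated, or None if it holds on the whole horizon [0, T].
# Step size, horizon, and function name are illustrative choices.

def blow_up_time(a, b, alpha, m0, T=10.0, dt=1e-4):
    A = 2 * (b - a) - 1
    B = 3 * a**2 + a * (1 - 2 * b) - b**2
    C = alpha * b + a * (b**2 - a**2)
    mu, t = m0, 0.0
    while t < T:
        mu += dt * (A * mu**2 + B * mu + C)   # Euler step of (52)
        t += dt
        if not (0.0 <= mu <= 0.5 * (a + b)):
            return t                          # constraint violated here
    return None                               # solvable on all of [0, T]

print(blow_up_time(0, 1, 0.1, 0.0))   # None: constraint holds, as in Table 1
print(blow_up_time(0, 1, 0.5, 0.0))   # ~1.571, matching Table 1
print(blow_up_time(1, 2, 0.5, 0.25))  # ~1.429, matching Table 2
```

For $a=0$, $b=1$, $\alpha=0.5$, $m_0=0$ the ODE reduces to $\dot{\bar\mu} = (\bar\mu - \tfrac12)^2 + \tfrac14$, whose solution reaches the bound $\tfrac12$ at $t = \pi/2 \approx 1.571$, in agreement with Table 1.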
Example 3.
Mean-Field Schlögl Model.
We now solve a control problem associated with a mean-field version of the Schlögl model (cf. [3,9,10,26]), where the intensities are of the form
$$\lambda^u_{ij}(t, x, u(t)) := \begin{cases} \alpha_{ij} & \text{if } j \neq i-1, \\ u(t) + \beta\,\mathbb{E}^u[x(t)] & \text{if } j = i-1, \end{cases}$$
for some predictable and positive control process $u$, where $\beta > 0$ and $(\alpha_{ij})_{i\neq j}$ is a deterministic $Q$-matrix for which there exists $N_0 \ge 1$ such that $\alpha_{ij} = 0$ for $|j-i| \ge N_0$ and $\alpha_{ij} > 0$ for $|j-i| < N_0$.
We consider the following mean-field type cost functional:
$$J(u) = \mathbb{E}^u\Big[\int_0^T \tfrac{1}{2}u^2(t)\,dt + x(T)\Big].$$
Given a control v > 0 , the associated Hamiltonian reads
$$H(t,v) := L^{\bar u}(t)\Big[\sum_{i,j:\, j\neq i,\, j\neq i-1} I_i(t)\big(\alpha_{ij} - g_{ij}\big)\,q_{ij}(t) + \sum_i I_i(t)\big(v + \beta\,\mathbb{E}^{\bar u}[x(t)] - g_{i\,i-1}\big)\,q_{i\,i-1}(t) - \frac{v^2}{2}\Big].$$
The first-order optimality conditions yield
$$\bar u(t) = \sum_i I_i(t)\,q_{i\,i-1}(t).$$
Next, we write the associated adjoint equation and identify $q_{i\,i-1}$:
$$\begin{aligned}
dp(t) = {}& -\sum_i q_{i\,i-1}(t)\Big[I_i(t)\big(\tfrac{1}{2}q_{i\,i-1}(t) + \beta\,\mathbb{E}^{\bar u}[x(t)] - g_{i\,i-1}\big)\,dt - dM_{i\,i-1}(t)\Big] \\
& - \sum_{i,j:\, j\neq i,\, j\neq i-1} q_{ij}(t)\Big[I_i(t)\big(\alpha_{ij} - g_{ij}\big)\,dt - dM_{ij}(t)\Big] - \beta\,x(t)\,\mathbb{E}^{\bar u}\Big[\sum_i q_{i\,i-1}(t)\,I_i(t)\Big]\,dt, \\
p(T) = {}& -x(T).
\end{aligned}$$
Consider the probability measure $\tilde P$, under which $x$ is a pure jump process whose jump intensity matrix is
$$\tilde G_{ij}(t) = \begin{cases} \alpha_{ij} & \text{if } j \neq i-1, \\ \tfrac{1}{2}q_{ij}(t) + \beta\,\mathbb{E}^{\bar u}[x(t)] & \text{if } j = i-1. \end{cases}$$
The adjoint equation becomes
$$dp(t) = -\beta\,x(t)\,\mathbb{E}^{\bar u}\Big[\sum_i q_{i\,i-1}(t)\,I_i(t)\Big]\,dt + \sum_i q_{i\,i-1}(t)\,d\tilde M_{i\,i-1}(t) + \sum_{i,j:\, j\neq i,\, j\neq i-1} q_{ij}(t)\,d\tilde M_{ij}(t), \qquad p(T) = -x(T),$$
where $\tilde M_{ij}$, $i \neq j$, are mutually orthogonal $\tilde P$-martingales.
Thus
$$p(t) = \tilde{\mathbb{E}}\big[-x(T)\,\big|\,\mathcal F_t\big] + \beta\int_t^T \tilde{\mathbb{E}}\Big[x(s)\,\mathbb{E}^{\bar u}\Big[\sum_i q_{i\,i-1}(s)\,I_i(s)\Big]\,\Big|\,\mathcal F_t\Big]\,ds,$$
and
$$\Delta p(t) = \Delta\tilde{\mathbb{E}}\big[-x(T)\,\big|\,\mathcal F_t\big].$$
Following the same steps leading to (42), from (57) we obtain $q_{ij}(t) = i - j$; thus, $q_{i\,i-1}(t) = 1$ for $i = 1, 2, \ldots$
Therefore,
$$\bar u(t) = \sum_{i\ge 1} I_i(t) = 1 - I_0(t).$$
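To see the feedback $\bar u(t) = 1 - I_0(t)$ in action, one can run a crude particle approximation of the controlled chain. Everything below is an illustrative assumption rather than the paper's setup: births occur at a constant rate `nu` from every state (a minimal stand-in for the $\alpha_{ij}$), deaths at rate $u + \beta\,\mathbb{E}^{\bar u}[x(t)]$ with $u = 1$ on states $i \ge 1$, the empirical mean over particles replaces $\mathbb{E}^{\bar u}[x(t)]$, and jump intensities are time-stepped rather than simulated exactly.

```python
import random

# Toy N-particle approximation of the controlled mean-field Schlogl chain.
# Assumptions (not from the paper): constant birth rate nu from every state,
# death rate u + beta * (empirical mean) with the optimal feedback u = 1 on
# states i >= 1 and u = 0 at i = 0. Euler time-stepping of the intensities.

def simulate(n_particles=2000, beta=0.5, nu=1.0, horizon=5.0, dt=0.01, seed=1):
    random.seed(seed)
    x = [0] * n_particles              # all particles start in state 0
    t = 0.0
    while t < horizon:
        mean = sum(x) / n_particles    # empirical proxy for E^u[x(t)]
        for k in range(n_particles):
            i = x[k]
            if random.random() < nu * dt:          # birth: i -> i + 1
                x[k] = i + 1
            elif i >= 1:
                u = 1.0                            # optimal control on i >= 1
                if random.random() < (u + beta * mean) * dt:
                    x[k] = i - 1                   # death: i -> i - 1
        t += dt
    return sum(x) / n_particles        # empirical mean state at the horizon

print(simulate())
```

With these (assumed) rates, a heuristic balance of births against the mean-field death rate suggests the empirical mean settles near $\sqrt{2/\beta\,}\cdot$-type values of order one; the simulation is only meant to show that the closed-loop chain is stable under the bang-type control.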
Final remark. Our results apply to further real-world Markov-chain mean-field control problems, such as the mean-field model proposed in [27] for malware propagation over computer networks, where the nodes/devices interact through network-based opportunistic meetings (e.g., via the internet, email, or USB keys). Markov chains of mean-field type can be used to model the state of the devices, which can be either Susceptible/Honest (H), Infected (I), or Dormant (D). Since the jump intensities may depend nonlinearly on the mean-field coupling, closed-form expressions are harder to obtain even for linear-quadratic cost functionals.

Author Contributions

The authors contributed equally to this work.

Funding

This work was supported by the Swedish Research Council via grant 2016-04086.

Acknowledgments

The authors thank Boualem Djehiche and Alexander Aurell for their insightful comments and remarks.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Proof of Proposition 1.
Below, $C_\phi$ denotes the bound of $\phi \in \{\lambda, \kappa, h, f\}$.
Uniqueness.
Assume $(p, q)$ and $(\bar p, \bar q)$ are two solutions of (27) which satisfy (28). Set $y(t) := p(t) - \bar p(t)$, $z(t) := q(t) - \bar q(t)$, and
$$\gamma(s) := \langle \bar\lambda(s), z(s)\rangle_g + \kappa(x(s))\,\mathbb{E}\big[L^{\bar u}(s)\,\langle \bar\lambda_{\bar y}(s), z(s)\rangle_g\big];$$
we have
$$y(t) = \int_t^T \gamma(s)\,ds - \int_t^T z(s)\,dM(s),$$
where
$$\mathbb{E}\Big[\sum_{t<s\le T} |z(s)\,\Delta M(s)|^2\Big] = \mathbb{E}\Big[\int_t^T \|z(s)\|_g^2\,ds\Big].$$
We can apply the Itô–Stieltjes chain rule to obtain
$$|y(t)|^2 = 2\int_t^T y(s)\gamma(s)\,ds - 2\int_t^T y(s)z(s)\,dM(s) - \sum_{t<s\le T}|\Delta p(s) - \Delta\bar p(s)|^2.$$
Since $\Delta p(s) = q(s)\,\Delta M(s)$ and $\Delta\bar p(s) = \bar q(s)\,\Delta M(s)$, we have
$$|y(t)|^2 = 2\int_t^T y(s)\gamma(s)\,ds - 2\int_t^T y(s)z(s)\,dM(s) - \sum_{t<s\le T}|z(s)\,\Delta M(s)|^2.$$
Now, since $(y, z)$ satisfies (28), by Doob's inequality the stochastic integral $\int_0^t y(s)z(s)\,dM(s)$ is in fact a uniformly integrable martingale. Therefore,
$$\mathbb{E}[|y(t)|^2] = 2\int_t^T \mathbb{E}[y(s)\gamma(s)]\,ds - \mathbb{E}\Big[\sum_{t<s\le T}|z(s)\,\Delta M(s)|^2\Big] = 2\int_t^T \mathbb{E}[y(s)\gamma(s)]\,ds - \int_t^T \mathbb{E}\big[\|z(s)\|_g^2\big]\,ds \quad \text{(by (A1))}.$$
We now use Young's inequality,
$$2ab \le \epsilon a^2 + \frac{b^2}{\epsilon},$$
to get
$$2\,y(s)\gamma(s) \le \epsilon\,|y(s)|^2 + \frac{\gamma^2(s)}{\epsilon}.$$
We have
$$\begin{aligned}
\gamma^2(s) &\le 2\Big(\big|\langle \bar\lambda(s), z(s)\rangle_g\big|^2 + |\kappa(x(s))|^2\big(\mathbb{E}\big[L^{\bar u}(s)\,\langle \bar\lambda_{\bar y}(s), z(s)\rangle_g\big]\big)^2\Big) \\
&\le 2\Big(\|\bar\lambda(s)\|_g^2\,\|z(s)\|_g^2 + C_\kappa\big(\mathbb{E}\big[L^{\bar u}(s)\,\|\bar\lambda_{\bar y}(s)\|_g\,\|z(s)\|_g\big]\big)^2\Big) \\
&\le 2C\Big(\|z(s)\|_g^2 + C_\kappa\,\mathbb{E}\big[(L^{\bar u}(s))^2\big]\,\mathbb{E}\big[\|z(s)\|_g^2\big]\Big) \quad \text{(by (29) and Cauchy--Schwarz)} \\
&\le \big(2C + 2C_\kappa C\tilde C\big)\Big(\|z(s)\|_g^2 + \mathbb{E}\big[\|z(s)\|_g^2\big]\Big) \quad \text{(by (30))}.
\end{aligned}$$
By setting $\hat C := 2(C + C_\kappa C\tilde C)$, we have
$$\gamma^2(s) \le \hat C\big(\|z(s)\|_g^2 + \mathbb{E}[\|z(s)\|_g^2]\big).$$
Therefore,
$$\mathbb{E}[|y(t)|^2] \le \mathbb{E}\Big[\int_t^T \big(\epsilon\,|y(s)|^2 + (2\hat C\epsilon^{-1} - 1)\,\|z(s)\|_g^2\big)\,ds\Big];$$
choosing $\epsilon$ such that $2\hat C\epsilon^{-1} - 1 = -\tfrac{1}{2}$, i.e., $\epsilon = 4\hat C$, we get
$$\mathbb{E}[|y(t)|^2] + \tfrac{1}{2}\,\mathbb{E}\Big[\int_t^T \|z(s)\|_g^2\,ds\Big] \le \epsilon\,\mathbb{E}\Big[\int_t^T |y(s)|^2\,ds\Big],$$
and by Gronwall's inequality we obtain $y = 0$, $P$-a.s. Moreover,
$$\mathbb{E}\Big[\int_t^T \|z(s)\|_g^2\,ds\Big] \le 2\epsilon\,\mathbb{E}\Big[\int_t^T |y(s)|^2\,ds\Big] = 0,$$
implying that $z = 0$, $dP \otimes g_{ij}\,dt$-a.e.
Existence.
We use Picard iteration as follows: set $\xi := \bar h(T) + \kappa_h(x(T))\,\mathbb{E}\big[L^{\bar u}(T)\,\bar h_{\bar y}(T)\big]$ and
$$F(q(t), \tilde q(t)) := \langle \bar\lambda(t), q(t)\rangle_g + \bar f(t) + \kappa(x(t))\,\mathbb{E}\big[L^{\bar u}(t)\,\langle \bar\lambda_{\bar y}(t), \tilde q(t)\rangle_g\big] + \kappa_f(x(t))\,\mathbb{E}\big[L^{\bar u}(t)\,\bar f_{\bar y}(t)\big].$$
By assumption (A2), $|\xi| \le C$, $P$-a.s. Moreover, set $(p^0(t), q^0(t)) = (0, 0)$ and consider the BSDE
$$p^{n+1}(t) = \xi + \int_t^T F(q^{n+1}(s), q^n(s))\,ds - \int_t^T q^{n+1}(s)\,dM(s).$$
Given $q^n$, $(p^{n+1}, q^{n+1})$ solves a standard linear BSDE whose existence and uniqueness are guaranteed (thanks to (A1) and (A2)) by Theorem 5.1 in [17]. We have
$$p^{n+1}(t) - p^n(t) = \int_t^T \big[F(q^{n+1}(s), q^n(s)) - F(q^n(s), q^{n-1}(s))\big]\,ds - \int_t^T \big[q^{n+1}(s) - q^n(s)\big]\,dM(s).$$
In particular,
$$\mathbb{E}\Big[\sup_{0\le t\le T}|p^n(t)|^2 + \int_0^T \|q^n(s)\|_g^2\,ds\Big] < \infty.$$
Applying an estimate similar to (A2), we obtain
$$\big|F(q^{n+1}(t), q^n(t)) - F(q^n(t), q^{n-1}(t))\big|^2 \le \hat C\,\|q^{n+1}(t) - q^n(t)\|_g^2 + \hat C\,\mathbb{E}\big[\|q^n(t) - q^{n-1}(t)\|_g^2\big].$$
Set $y^n(t) := p^n(t) - p^{n-1}(t)$, $z^n(t) := q^n(t) - q^{n-1}(t)$ and apply the Itô–Stieltjes chain rule to $e^{\beta t}|y^n(t)|^2$ to get
$$\mathbb{E}\big[e^{\beta t}|y^{n+1}(t)|^2\big] + \beta\,\mathbb{E}\Big[\int_t^T e^{\beta s}|y^{n+1}(s)|^2\,ds\Big] + \mathbb{E}\Big[\int_t^T e^{\beta s}\|z^{n+1}(s)\|_g^2\,ds\Big] \le 2\,\mathbb{E}\Big[\int_t^T e^{\beta s}\,|y^{n+1}(s)|\,\big|F(q^{n+1}(s), q^n(s)) - F(q^n(s), q^{n-1}(s))\big|\,ds\Big].$$
Applying Young’s inequality, we obtain
$$\mathbb{E}\big[e^{\beta t}|y^{n+1}(t)|^2\big] + \beta\,\mathbb{E}\Big[\int_t^T e^{\beta s}|y^{n+1}(s)|^2\,ds\Big] + \mathbb{E}\Big[\int_t^T e^{\beta s}\|z^{n+1}(s)\|_g^2\,ds\Big] \le \epsilon\,\mathbb{E}\Big[\int_t^T e^{\beta s}|y^{n+1}(s)|^2\,ds\Big] + \frac{\hat C}{\epsilon}\,\mathbb{E}\Big[\int_t^T e^{\beta s}\|z^n(s)\|_g^2\,ds\Big] + \frac{\hat C}{\epsilon}\,\mathbb{E}\Big[\int_t^T e^{\beta s}\|z^{n+1}(s)\|_g^2\,ds\Big],$$
or
$$\mathbb{E}\big[e^{\beta t}|y^{n+1}(t)|^2\big] + (\beta - \epsilon)\,\mathbb{E}\Big[\int_t^T e^{\beta s}|y^{n+1}(s)|^2\,ds\Big] + \Big(1 - \frac{\hat C}{\epsilon}\Big)\,\mathbb{E}\Big[\int_t^T e^{\beta s}\|z^{n+1}(s)\|_g^2\,ds\Big] \le \frac{\hat C}{\epsilon}\,\mathbb{E}\Big[\int_t^T e^{\beta s}\|z^n(s)\|_g^2\,ds\Big].$$
Choosing $\epsilon = 3\hat C$ and $\beta = 3\hat C + 1$, we get
$$\mathbb{E}\Big[\int_0^T e^{\beta s}\big(|y^{n+1}(s)|^2 + \|z^{n+1}(s)\|_g^2\big)\,ds\Big] \le \tfrac{1}{2}\,\mathbb{E}\Big[\int_0^T e^{\beta s}\|z^n(s)\|_g^2\,ds\Big] \le \tfrac{1}{2}\,\mathbb{E}\Big[\int_0^T e^{\beta s}\big(|y^n(s)|^2 + \|z^n(s)\|_g^2\big)\,ds\Big].$$
This shows that $(p^n, q^n)$ is a Cauchy sequence w.r.t. the norm $\|(y,z)\|_\beta := \big(\mathbb{E}\big[\int_0^T e^{\beta s}(|y(s)|^2 + \|z(s)\|_g^2)\,ds\big]\big)^{1/2}$. Letting $(p, q) = \lim_n (p^n, q^n)$ in (A3), $(p, q)$ solves (27) as a fixed point and thus satisfies (28) (cf. [23]). □
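The contraction mechanism behind the last estimate can be illustrated on a deterministic toy fixed point: iterate $y^{n+1}(t) = \xi + c\int_t^T y^n(s)\,ds$ on a grid and measure successive differences in the weighted norm $\|y\|_\beta^2 = \int_0^T e^{\beta s}|y(s)|^2\,ds$. All constants, the grid, and the function name below are illustrative choices; the point is only that for $\beta$ large the ratio of successive differences stays well below one, which is the same $\beta$-weighting trick used in the proof.

```python
import math

# Deterministic toy version of the Picard scheme in the existence proof:
# y_{n+1}(t) = xi + c * int_t^T y_n(s) ds, discretized on a grid. For beta
# large, successive differences contract in the beta-weighted norm,
# mirroring the estimate (A3). All constants are illustrative.

def picard_ratios(c=2.0, xi=1.0, T=1.0, beta=20.0, n_steps=2000, n_iter=8):
    h = T / n_steps
    w = [math.exp(beta * k * h) for k in range(n_steps + 1)]  # weights e^{beta s}

    def norm(y):  # discrete (int_0^T e^{beta s} |y(s)|^2 ds)^(1/2)
        return math.sqrt(sum(wk * yk * yk * h for wk, yk in zip(w, y)))

    y_prev = [0.0] * (n_steps + 1)
    diffs = []
    for _ in range(n_iter):
        y = [0.0] * (n_steps + 1)
        y[-1] = xi
        for k in range(n_steps - 1, -1, -1):      # backward integration
            y[k] = y[k + 1] + h * c * y_prev[k + 1]
        diffs.append([a - b for a, b in zip(y, y_prev)])
        y_prev = y
    # ratios ||y_{n+1} - y_n||_beta / ||y_n - y_{n-1}||_beta
    return [norm(d2) / norm(d1) for d1, d2 in zip(diffs, diffs[1:])]

print(picard_ratios())
```

Although the unweighted map has Lipschitz constant $cT = 2$ here (not a contraction), the $\beta$-weighted ratios settle near $2c/\beta = 0.2$, so the iteration converges geometrically, exactly as in the proof once $\beta$ is chosen large enough.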

References

  1. McKean, H.P. A class of Markov processes associated with nonlinear parabolic equations. Proc. Natl. Acad. Sci. USA 1966, 56, 1907–1911. [Google Scholar] [CrossRef] [PubMed]
  2. Schlögl, F. Chemical reaction models for non-equilibrium phase transitions. Z. Phys. 1972, 253, 147–161. [Google Scholar] [CrossRef]
  3. Nicolis, G.; Prigogine, I. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations; Wiley: New York, NY, USA, 1977. [Google Scholar]
  4. Léonard, C. Some Epidemic Systems are Long Range Interacting Particle Systems. In Stochastic Processes in Epidemic Systems; Gabriel, J.P., Ed.; Lecture Notes in Biomathematics; Springer: Berlin/Heidelberg, Germany, 1990; Volume 86. [Google Scholar]
  5. Djehiche, B.; Kaj, I. The rate function for some measure-valued jump processes. Ann. Probab. 1995, 23, 1414–1438. [Google Scholar] [CrossRef]
  6. Djehiche, B.; Schied, A. Large deviations for hierarchical systems of interacting jump processes. J. Theor. Probab. 1998, 11, 1–24. [Google Scholar] [CrossRef]
  7. Oelschläger, K. A martingale approach to the law of large numbers for weakly interacting stochastic processes. Ann. Probab. 1984, 12, 458–479. [Google Scholar] [CrossRef]
  8. Léonard, C. Large deviations for long range interacting particle systems with jumps. Ann. l’IHP Probab. Stat. 1995, 31, 289–323. [Google Scholar]
  9. Feng, S.; Zheng, X. Solutions of a class of nonlinear master equations. Stoch. Proc. Appl. 1992, 43, 65–84. [Google Scholar] [CrossRef]
  10. Dawson, D.; Zheng, X. Law of large numbers and central limit theorem for unbounded jump mean-field models. Adv. Appl. Math. 1991, 12, 293–326. [Google Scholar] [CrossRef]
  11. Feng, S. Large deviations for empirical process of mean-field interacting particle system with unbounded jumps. Ann. Probab. 1994, 22, 2122–2151. [Google Scholar] [CrossRef]
  12. Choutri, S.E.; Djehiche, B.; Tembine, H. Optimal control and zero-sum games for Markov chains of mean-field type. arXiv, 2016; arXiv:1606.04244. [Google Scholar]
  13. Boel, R.; Varaiya, P. Optimal control of jump processes. SIAM J. Control Optim. 1977, 15, 92–119. [Google Scholar] [CrossRef]
  14. Bismut, J.M. Control of jump processes and applications. Bull. Soc. Math. Fr. 1978, 106, 25–60. [Google Scholar] [CrossRef]
  15. Davis, M.; Elliott, R. Optimal control of a jump process. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 1977, 40, 183–202. [Google Scholar] [CrossRef]
  16. Wan, C.; Davis, M. Existence of optimal controls for stochastic jump processes. SIAM J. Control Optim. 1979, 17, 511–524. [Google Scholar] [CrossRef]
  17. Cohen, S.; Elliott, R. Existence, Uniqueness and Comparisons for BSDEs in General Spaces. Ann. Probab. 2012, 40, 2264–2297. [Google Scholar] [CrossRef]
  18. Cohen, S.; Elliott, R. Stochastic Calculus and Applications, 2nd ed.; Birkhäuser: New York, NY, USA, 2015. [Google Scholar]
  19. Ethier, S.N.; Kurtz, T.G. Markov Processes: Characterization and Convergence; John Wiley & Sons: Hoboken, NJ, USA, 2009; Volume 282. [Google Scholar]
  20. Rogers, L.C.G.; Williams, D. Diffusions, Markov Processes and Martingales-Volume 2: Itô Calculus; Cambridge University Press: Cambridge, UK, 2000; Volume 2. [Google Scholar]
  21. Brémaud, P. Point Processes and Queues: Martingale Dynamics; Springer: New York, NY, USA; Heidelberg/Berlin, Germany, 1981; Volume 50. [Google Scholar]
  22. Buckdahn, R.; Djehiche, B.; Li, J. A general stochastic maximum principle for SDEs of mean-field type. Appl. Math. Optim. 2011, 64, 197–216. [Google Scholar] [CrossRef]
  23. Buckdahn, R.; Li, J.; Peng, S. Mean-field backward stochastic differential equations and related partial differential equations. Stoch. Proc. Appl. 2009, 119, 3133–3154. [Google Scholar] [CrossRef]
  24. Shen, Y.; Siu, T.K. The maximum principle for a jump-diffusion mean-field model and its application to the mean–variance problem. Nonlinear Anal. Theory Methods Appl. 2013, 86, 58–73. [Google Scholar] [CrossRef]
  25. Tang, S.; Li, X. Necessary conditions for optimal control of stochastic systems with random jumps. SIAM J. Control Optim. 1994, 32, 1447–1475. [Google Scholar] [CrossRef]
  26. Chen, M.F. From Markov Chains to Non-Equilibrium Particle Systems; World Scientific: Singapore, 2004. [Google Scholar]
  27. Djehiche, B.; Tcheukam, A.; Tembine, H. Mean-field-type games in engineering. arXiv, 2016; arXiv:1605.03281. [Google Scholar]
Table 1. Solvability of the ODE (52) with the states a = 0 and b = 1. (An entry "." indicates that the ODE is solvable for any time horizon; cf. the discussion above. The table also corrects one garbled entry at α = 0.6.)

| a | b | α | T (m₀ = 0) | T (m₀ = 0.25) |
|---|---|------|-------|-------|
| 0 | 1 | 0.1 | . | . |
| 0 | 1 | 0.2 | . | . |
| 0 | 1 | 0.3 | 5.145 | 3.762 |
| 0 | 1 | 0.4 | 2.355 | 1.481 |
| 0 | 1 | 0.5 | 1.571 | 0.928 |
| 0 | 1 | 0.6 | 1.186 | 0.676 |
| 0 | 1 | 0.7 | 0.955 | 0.532 |
| 0 | 1 | 0.8 | 0.800 | 0.439 |
| 0 | 1 | 0.9 | 0.689 | 0.373 |
| 0 | 1 | 1 | 0.605 | 0.325 |
| 0 | 1 | 5 | 0.104 | 0.053 |
| 0 | 1 | 10 | 0.051 | 0.026 |
Table 2. Solvability of the ODE (52) with the states a = 1 and b = 2.

| a | b | α | T (m₀ = 0.25) | T (m₀ = 1) |
|---|---|------|-------|-------|
| 1 | 2 | 0.1 | . | . |
| 1 | 2 | 0.2 | . | . |
| 1 | 2 | 0.3 | . | . |
| 1 | 2 | 0.4 | 2.644 | 2.153 |
| 1 | 2 | 0.5 | 1.429 | 1.001 |
| 1 | 2 | 0.6 | 1.073 | 0.692 |
| 1 | 2 | 0.7 | 0.878 | 0.535 |
| 1 | 2 | 0.8 | 0.750 | 0.438 |
| 1 | 2 | 0.9 | 0.659 | 0.371 |
| 1 | 2 | 1 | 0.589 | 0.322 |
| 1 | 2 | 5 | 0.121 | 0.053 |
| 1 | 2 | 10 | 0.062 | 0.026 |
Table 3. Solvability of the ODE (52) with the states a = 2 and b = 3.

| a | b | α | T (m₀ = 0.25) | T (m₀ = 2) |
|---|---|------|-------|-------|
| 2 | 3 | 0.1 | . | . |
| 2 | 3 | 0.2 | . | . |
| 2 | 3 | 0.3 | . | . |
| 2 | 3 | 0.4 | . | . |
| 2 | 3 | 0.5 | 1.206 | 0.761 |
| 2 | 3 | 0.6 | 0.899 | 0.494 |
| 2 | 3 | 0.7 | 0.746 | 0.373 |
| 2 | 3 | 0.8 | 0.648 | 0.302 |
| 2 | 3 | 0.9 | 0.578 | 0.254 |
| 2 | 3 | 1 | 0.524 | 0.220 |
| 2 | 3 | 5 | 0.131 | 0.035 |
| 2 | 3 | 10 | 0.070 | 0.018 |
Table 4. Solvability of the ODE (52) with the states a = 0 and b = 2.

| a | b | α | T (m₀ = 0) | T (m₀ = 0.75) |
|---|---|------|-------|-------|
| 0 | 2 | 0.1 | . | . |
| 0 | 2 | 0.2 | . | . |
| 0 | 2 | 0.3 | . | . |
| 0 | 2 | 0.4 | . | . |
| 0 | 2 | 0.5 | . | . |
| 0 | 2 | 0.6 | . | . |
| 0 | 2 | 0.7 | 5.593 | 1.433 |
| 0 | 2 | 0.8 | 2.227 | 0.636 |
| 0 | 2 | 0.9 | 1.470 | 0.418 |
| 0 | 2 | 1 | 1.111 | 0.312 |
| 0 | 2 | 5 | 0.112 | 0.029 |
| 0 | 2 | 10 | 0.053 | 0.014 |
Table 5. Solvability of the ODE (52) with the states a = 0 and b = 3.

| a | b | α | T (m₀ = 0) | T (m₀ = 1) |
|---|---|------|-------|-------|
| 0 | 3 | 0.1 | . | . |
| 0 | 3 | 0.2 | . | . |
| 0 | 3 | 0.3 | . | . |
| 0 | 3 | 0.4 | . | . |
| 0 | 3 | 0.5 | . | . |
| 0 | 3 | 0.6 | . | . |
| 0 | 3 | 0.7 | . | . |
| 0 | 3 | 0.8 | . | . |
| 0 | 3 | 0.9 | . | . |
| 0 | 3 | 1 | . | . |
| 0 | 3 | 5 | 0.126 | 0.043 |
| 0 | 3 | 10 | 0.056 | 0.019 |

