Article

A Markov Chain Model for Contagion

Angelos Dassios 1,* and Hongbiao Zhao 2,*
1 Department of Statistics, London School of Economics, Houghton Street, London WC2A 2AE, UK
2 Department of Finance, School of Economics & Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, Fujian 361005, China
* Authors to whom correspondence should be addressed.
Risks 2014, 2(4), 434-455; https://doi.org/10.3390/risks2040434
Submission received: 22 September 2014 / Revised: 27 October 2014 / Accepted: 29 October 2014 / Published: 5 November 2014

Abstract

We introduce a bivariate Markov chain counting process with contagion for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, for this process are derived. Some special cases with explicit results and numerical examples and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

1. Introduction

The self-exciting point process introduced by Hawkes [1] and Hawkes [2], later named the Hawkes process, has nowadays become a viable mathematical tool for modelling contagion risk and the clustering arrival of events in finance, insurance and economics; see Errais et al. [3], Embrechts et al. [4], Chavez-Demoulin and McGill [5], Bacry et al. [6] and Aït-Sahalia et al. [7]. More recently, Dassios and Zhao [8] introduced a more general self-exciting point process, named the dynamic contagion process (DCP), by extending the Hawkes process and the Cox process with exponentially decaying shot-noise intensity; the intensity process includes two types of random jumps, the self-excited and externally-excited jumps, which can be used to model the dynamics of the contagion impact from both the endogenous and exogenous factors of the underlying system in a single consistent framework.
In this paper, we introduce a new bivariate point process, named the discretised dynamic contagion process (DDCP), for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. This process in fact generalises the zero-reversion dynamic contagion process (ZDCP), an important special case of the DCP with zero-reversion intensity (see Definition A.1). The DDCP is a piecewise deterministic Markov process, and key distributional properties, such as the moments and probability generating functions, are derived. We also find interesting explicit results for some special cases. By comparing their infinitesimal generators and distribution functions, we obtain the transformation formulas between the DDCP and the ZDCP and find that the two processes are analogous and share some key distributional properties.
This new point process provides a general Markov chain framework. It has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics, with both internal contagion risk and external common risk. Dassios and Zhao [9] studied the ruin problem for a special case of this model, a simple risk model with delayed claims: claims arrive following a Poisson process, and each claim is settled after an exponentially distributed delay. Our paper extends this risk model to involve multiple arrivals and delayed settlements of claims with contagion.
The paper is organised as follows. Section 2 describes our model framework and gives a mathematical definition of the associated risk process. In Section 3, we derive the main results: distributional properties of the process, such as the moments and the probability generating functions. Some special cases with explicit results and numerical examples are also discussed in Section 4. The comparison analysis and transformation formulas between DDCP and ZDCP are presented in Appendix A.

2. Model Framework

For an insurance company at any time $t > 0$, suppose $N_t$ is the number of cumulative settled claims within the time interval $[0, t]$ and $M_t$ is the number of cumulative unsettled claims within the same time interval $[0, t]$. We assume that the claims arrive in clusters, so multiple claims may arrive simultaneously at the same time point. The clusters follow a Poisson process of constant rate ρ, and each cluster contains a random number $K^P$ of claims with probability function $\{p_k\}$. Each claim is then settled after an exponential delay of constant rate δ. We further assume that, at each settlement time, only one claim can be settled. In practice, this settlement is partial, as a random number $K^Q$ of new claims, with probability function $\{q_k\}$, are revealed and will need to be settled in the future. From a practical point of view, the assumption that only one claim can be settled at a time appears restrictive, but this can be addressed by adjusting the rate of settlement and the distribution of the new claims revealed. The assumption is common in the literature; see Yuen et al. [10] and the references therein.
The joint stochastic process $(M_t, N_t)_{t \geq 0}$ is a bivariate continuous-time Markov chain point process on the state space $\mathbb{N}_0 \times \mathbb{N}_0$: from state $(m, n)$ it jumps to $(m+k, n)$ at rate $\rho\, p_k$ (a cluster of $k$ claims arrives) and to $(m+k-1, n+1)$ at rate $\delta m\, q_k$ (one claim is settled and $k$ new claims are revealed); i.e., the joint increment distribution of this process is specified by:
$$
\begin{aligned}
\mathbb{P}\left[\left.M_{t+\Delta t}-M_t=k,\; N_{t+\Delta t}-N_t=0 \,\right|\, \mathcal{F}_t\right] &= \rho\, p_k\, \Delta t + o(\Delta t), \qquad k=1,2,\ldots,\\
\mathbb{P}\left[\left.M_{t+\Delta t}-M_t=k-1,\; N_{t+\Delta t}-N_t=1 \,\right|\, \mathcal{F}_t\right] &= \delta M_t\, q_k\, \Delta t + o(\Delta t), \qquad k=0,1,\ldots,\\
\mathbb{P}\left[\left.M_{t+\Delta t}-M_t=0,\; N_{t+\Delta t}-N_t=0 \,\right|\, \mathcal{F}_t\right] &= 1-\left[\rho\,(1-p_0)+\delta M_t\right]\Delta t + o(\Delta t),\\
\mathbb{P}\left[\left.\text{others} \,\right|\, \mathcal{F}_t\right] &= o(\Delta t)
\end{aligned}
$$
where:
  • δ , ρ > 0 are constants;
  • $\Delta t$ is a sufficiently small time interval with $o(\Delta t)/\Delta t \to 0$ as $\Delta t \to 0$;
  • $K^P$ and $K^Q$ follow probability distributions on $\mathbb{N}_0$ given by:
    $$p_k := \mathbb{P}\left[K^P = k\right], \qquad q_k := \mathbb{P}\left[K^Q = k\right], \qquad k = 0, 1, \ldots$$
  • $\mathcal{F}_t$ is the filtration generated by the joint process $(N_s, M_s)_{0 \leq s \leq t}$.
$K^P$ and $K^Q$ are the two types of batch jumps in the point process $M_t$: $K^P$ jumps arrive independently of $N_t$, whereas $K^Q$ jumps occur simultaneously with the jumps of $N_t$. The first moments and probability generating functions of $K^P$ and $K^Q$ are denoted respectively by:
$$\mu_1^P := \sum_{k=0}^{\infty} k\, p_k, \quad \mu_1^Q := \sum_{k=0}^{\infty} k\, q_k; \qquad \hat{p}(\theta) := \sum_{k=0}^{\infty} \theta^k p_k, \quad \hat{q}(\theta) := \sum_{k=0}^{\infty} \theta^k q_k$$
We can find that, by transformation, $(N_t, M_t)_{t \geq 0}$ is a generalisation of a special case of the dynamic contagion process [8]; hence, we name this process the discretised dynamic contagion process. To understand this new process intuitively, a sample path of $(N_t, M_t)_{t \geq 0}$ is provided in Figure 1.
The process $(N_t, M_t)_{t \geq 0}$ could be a useful risk model for modelling interim payments (claims) in insurance, such as cases of IBNR (incurred, but not reported) and IBNS (incurred, but not settled). This general framework can also be considered a generalisation of the simpler risk model with delayed settlement used by Dassios and Zhao [9], where the arrival of claims follows a Poisson process of rate ρ and each claim is settled with an exponential delay of rate δ; however, there is no cluster arrival of claims nor any new claim revealed. For further literature on delayed claims in insurance, see, for instance, Yuen et al. [10].
Figure 1. Point process $N_t$ vs. point process $M_t$.
Note that the point process $\{M_t\}_{t \geq 0}$ is non-negative: if $M_t = 0$, there is no joint jump, and $M_t$ cannot be brought downward; if $M_t = 1, 2, \ldots$, a downward movement of at most one step is possible. The discrete, piecewise-constant, non-negative process $\{\delta M_t\}_{t \geq 0}$ can in fact be considered the intensity process of the point process $N_t$ (proven later by Equation (4)).
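To make the transition mechanism above concrete, the following is a minimal simulation sketch, not part of the original paper, written in Python with NumPy; the function names, the parameter values and the choice of distributions for $K^P$ and $K^Q$ in the usage example are ours. One sample path of $(M_t, N_t)$ is generated by racing the two exponential clocks implied by the transition rates: cluster arrivals at rate ρ and settlements at rate $\delta M_t$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddcp(T, rho, delta, sample_KP, sample_KQ, M0=0):
    """Simulate one path of the discretised dynamic contagion process (M_t, N_t)
    on [0, T]: clusters of K^P claims arrive at rate rho; while M_t = m > 0,
    settlements occur at rate delta * m, each settling one claim and
    revealing K^Q new ones."""
    t, M, N = 0.0, M0, 0
    times, Ms, Ns = [0.0], [M0], [0]
    while True:
        settle_rate = delta * M
        t_cluster = rng.exponential(1.0 / rho)
        t_settle = rng.exponential(1.0 / settle_rate) if settle_rate > 0 else np.inf
        dt = min(t_cluster, t_settle)
        if t + dt > T:
            break
        t += dt
        if t_cluster <= t_settle:
            M += sample_KP(rng)           # a cluster of K^P new unsettled claims arrives
        else:
            N += 1                        # one claim is settled ...
            M += sample_KQ(rng) - 1       # ... and K^Q new claims are revealed
        times.append(t); Ms.append(M); Ns.append(N)
    return np.array(times), np.array(Ms), np.array(Ns)

# usage example: K^P = 1 with probability one, K^Q ~ Bernoulli(q) (the "q1 = q" case)
rho, delta, q = 1.0, 1.0, 0.5
_, M_path, N_path = simulate_ddcp(
    T=10.0, rho=rho, delta=delta,
    sample_KP=lambda g: 1,
    sample_KQ=lambda g: int(g.random() < q),
)
print(M_path[-1], N_path[-1])   # terminal numbers of unsettled and settled claims
```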

3. Distributional Properties

The infinitesimal generator of a discretised dynamic contagion process $(M_t, N_t, t)$ acting on a function $f(m,n,t) \in \Omega(\mathcal{A})$ is given by:
$$\mathcal{A} f(m,n,t) = \frac{\partial f}{\partial t} + \rho\left[\sum_{k=0}^{\infty} f(m+k,n,t)\, p_k - f(m,n,t)\right] + \delta m \left[\sum_{k=0}^{\infty} f(m+k-1,n+1,t)\, q_k - f(m,n,t)\right]$$
where $\Omega(\mathcal{A})$ is the domain of the generator $\mathcal{A}$, such that $f(m,n,t)$ is differentiable with respect to $t$ and, for all $m$, $n$ and $t$,
$$\left|\sum_{k=0}^{\infty} f(m+k,n,t)\, p_k - f(m,n,t)\right| < \infty, \qquad \left|\sum_{k=0}^{\infty} f(m+k-1,n+1,t)\, q_k - f(m,n,t)\right| < \infty$$
Following the methods adopted by Dassios and Embrechts [11] and later by Dassios and Jang [12] and Dassios and Zhao [8], we will use this generator Equation (1) with the aid of some properly selected martingales to find key distributional properties of ( N t , M t ) as below.

3.1. Moments of M t and N t

We derive the first moments of M t and N t by solving systems of ODEs and also discuss the stationarity condition for the process M t .
Theorem 3.1. The expectation of M t conditional on M 0 is given by:
$$\mathbb{E}[M_t \mid M_0] = \begin{cases} \dfrac{\mu_1^P \rho}{\kappa} + \left(M_0 - \dfrac{\mu_1^P \rho}{\kappa}\right) e^{-\kappa t}, & \kappa \neq 0,\\[2mm] M_0 + \mu_1^P \rho\, t, & \kappa = 0, \end{cases}$$
where $\kappa = \delta\left(1 - \mu_1^Q\right)$.
Proof. Set $f(m,n,t) = m$ and plug into generator Equation (1); we have:
$$\mathcal{A} m = \rho\, \mu_1^P + \delta m \left(\mu_1^Q - 1\right) = -\kappa m + \mu_1^P \rho$$
Since $M_t - M_0 - \int_0^t \mathcal{A} M_s\, \mathrm{d}s$ is a martingale, then,
$$\mathbb{E}[M_t \mid M_0] = M_0 + \mathbb{E}\left[\left.\int_0^t \mathcal{A} M_s\, \mathrm{d}s \,\right|\, M_0\right] = M_0 - \kappa \int_0^t \mathbb{E}[M_s \mid M_0]\, \mathrm{d}s + \mu_1^P \rho\, t$$
and we can derive the expectation via the ODE:
$$\frac{\mathrm{d}u(t)}{\mathrm{d}t} = -\kappa\, u(t) + \mu_1^P \rho$$
where $u(t) = \mathbb{E}[M_t \mid M_0]$, with the initial condition $u(0) = M_0$.   □
Remark 3.2. The stationarity condition of process M t is:
$$\mu_1^Q < 1$$
Corollary 3.3. The expectation of N t conditional on M 0 is given by:
$$\mathbb{E}[N_t \mid M_0] = \begin{cases} \dfrac{\delta}{\kappa}\left[\mu_1^P \rho\, t + \left(M_0 - \dfrac{\mu_1^P \rho}{\kappa}\right)\left(1 - e^{-\kappa t}\right)\right], & \kappa \neq 0,\\[2mm] \delta\left(M_0\, t + \tfrac{1}{2}\mu_1^P \rho\, t^2\right), & \kappa = 0. \end{cases}$$
Proof. Set $f(m,n,t) = n$ and plug into generator Equation (1); we have $\mathcal{A} n = \delta m$. Since $N_t - N_0 - \int_0^t \mathcal{A} N_s\, \mathrm{d}s$ is a martingale, then,
$$\mathbb{E}[N_t \mid M_0] = N_0 + \mathbb{E}\left[\left.\int_0^t \mathcal{A} N_s\, \mathrm{d}s \,\right|\, M_0\right] = \delta \int_0^t \mathbb{E}[M_s \mid M_0]\, \mathrm{d}s$$
where $\mathbb{E}[M_t \mid M_0]$ is given by Equation (2).   □
Higher moments of M t and N t can also be obtained similarly by this ODE method, and we omit them here.
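As a quick numerical sanity check, not part of the original paper, the closed-form conditional means of Theorem 3.1 and Corollary 3.3 can be compared with a direct numerical solution of the underlying ODE system $u'(t) = -\kappa u(t) + \mu_1^P\rho$, $v'(t) = \delta u(t)$. The Python/SciPy sketch below uses illustrative parameter values of our choosing.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: mu1P, mu1Q are the means of K^P and K^Q.
rho, delta, mu1P, mu1Q, M0 = 1.0, 1.0, 1.0, 0.5, 3.0
kappa = delta * (1.0 - mu1Q)

def EM(t):   # Theorem 3.1 (kappa != 0 branch)
    return mu1P * rho / kappa + (M0 - mu1P * rho / kappa) * np.exp(-kappa * t)

def EN(t):   # Corollary 3.3 (kappa != 0 branch)
    return (delta / kappa) * (mu1P * rho * t
                              + (M0 - mu1P * rho / kappa) * (1.0 - np.exp(-kappa * t)))

# ODE system for u = E[M_t | M_0] and v = E[N_t | M_0]
sol = solve_ivp(lambda t, y: [-kappa * y[0] + mu1P * rho, delta * y[0]],
                (0.0, 5.0), [M0, 0.0], dense_output=True, rtol=1e-10, atol=1e-12)
ts = np.linspace(0.0, 5.0, 6)
print(np.max(np.abs(sol.sol(ts)[0] - EM(ts))))   # ~ 0: matches Theorem 3.1
print(np.max(np.abs(sol.sol(ts)[1] - EN(ts))))   # ~ 0: matches Corollary 3.3
```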

3.2. Joint Probability Generating Function of ( M T , N T )

Theorem 3.4. For constants $0 \leq \theta, \varphi \leq 1$ and time $0 \leq t \leq T$, we have the joint probability generating function of $(M_T, N_T)$,
$$\mathbb{E}\left[\left.\theta^{N_T - N_t}\, \varphi^{M_T} \,\right|\, \mathcal{F}_t\right] = e^{-(c(T)-c(t))}\, [A(t)]^{M_t}$$
where $A(t)$ is determined by the non-linear ODE:
$$A'(t) + \delta\theta\, \hat{q}(A(t)) - \delta A(t) = 0$$
with boundary condition $A(T) = \varphi$; and $c(t)$ is determined by:
$$c(t) = \rho \int_0^t \left[1 - \hat{p}(A(s))\right] \mathrm{d}s$$
Proof. Assume the exponential affine form:
$$f(m,n,t) = [A(t)]^m\, \theta^n\, e^{c(t)}$$
and set $\mathcal{A} f(m,n,t) = 0$ in generator Equation (1); then, we have:
$$A'(t) + \delta\theta\, \hat{q}(A(t)) - \delta A(t) = 0, \qquad c'(t) = \rho\left[1 - \hat{p}(A(t))\right]$$
Since $[A(t)]^{M_t}\, \theta^{N_t}\, e^{c(t)}$ is a martingale, we have:
$$\mathbb{E}\left[\left.[A(T)]^{M_T}\, \theta^{N_T}\, e^{c(T)} \,\right|\, \mathcal{F}_t\right] = [A(t)]^{M_t}\, \theta^{N_t}\, e^{c(t)}$$
with boundary condition $A(T) = \varphi$.   □
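Theorem 3.4 reduces the joint pgf to a terminal-value ODE, which can be evaluated numerically when no closed form is available. The following Python/SciPy sketch, not part of the original paper, does this for an illustrative choice $\hat{p}(u) = u$ (i.e. $p_1 = 1$) and $\hat{q}(u) = 1 - q + qu$ (i.e. $q_1 = q$); the parameter values are ours. As a consistency check, the pgf equals one at $\theta = \varphi = 1$, and its derivative in φ there recovers $\mathbb{E}[M_T \mid M_0]$ from Theorem 3.1.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, delta, q, M0, T = 1.0, 1.0, 0.5, 3, 2.0
p_hat = lambda u: u                    # p1 = 1
q_hat = lambda u: 1.0 - q + q * u      # q1 = q

def joint_pgf(theta, phi):
    """E[theta^{N_T} phi^{M_T} | M_0] with N_0 = 0: solve the terminal-value ODE
    A'(t) = delta*A - delta*theta*q_hat(A), A(T) = phi, in reversed time
    tau = T - t, accumulating c(T) = rho * int_0^T [1 - p_hat(A)] dt alongside."""
    def rhs(tau, y):
        A, c = y
        return [delta * theta * q_hat(A) - delta * A, rho * (1.0 - p_hat(A))]
    sol = solve_ivp(rhs, (0.0, T), [phi, 0.0], rtol=1e-10, atol=1e-12)
    A0, cT = sol.y[0, -1], sol.y[1, -1]
    return np.exp(-cT) * A0 ** M0

kappa = delta * (1.0 - q)
EM_T = rho / kappa + (M0 - rho / kappa) * np.exp(-kappa * T)   # Theorem 3.1 with mu1P = 1
eps = 1e-6
print(joint_pgf(1.0, 1.0))                                              # ~ 1
print((joint_pgf(1.0, 1.0 + eps) - joint_pgf(1.0, 1.0 - eps)) / (2 * eps), EM_T)
```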

3.3. Probability Generating Function of M T

Theorem 3.5. If $\mu_1^Q < 1$, the probability generating function of $M_T$ conditional on $M_0$ is given by:
$$\mathbb{E}\left[\varphi^{M_T} \mid M_0\right] = \exp\left(-\int_{\varphi}^{Q_{\varphi,1}^{-1}(T)} \frac{\rho\left[1-\hat{p}(u)\right]}{\delta \hat{q}(u) - \delta u}\, \mathrm{d}u\right) \times \left[Q_{\varphi,1}^{-1}(T)\right]^{M_0}$$
where:
$$Q_{\varphi,1}(L) := \int_{\varphi}^{L} \frac{\mathrm{d}u}{\delta \hat{q}(u) - \delta u}$$
Proof. Set $t = 0$, $\theta = 1$ and assume $N_0 = 0$ in Theorem 3.4; we have:
$$\mathbb{E}\left[\varphi^{M_T} \mid M_0\right] = e^{-c(T)}\, [A(0)]^{M_0}$$
where $A(0)$ is uniquely determined by the non-linear ODE:
$$A'(t) + \delta\, \hat{q}(A(t)) - \delta A(t) = 0$$
with boundary condition A ( T ) = φ . Under the condition μ 1 Q < 1 , it can be solved by the following steps:
  • Set A ( t ) = L ( T - t ) and τ = T - t ; this is equivalent to the initial value problem:
    d L ( τ ) d τ = δ q ^ ( L ( τ ) ) - δ L ( τ ) = : f 1 ( L )
    with initial condition L ( 0 ) = φ ; we define the right-hand side as the function f 1 ( L ) .
  • Since $\mu_1^Q < 1$, we have:
    $$\frac{\mathrm{d} f_1(L)}{\mathrm{d} L} = \delta\left[\sum_{k=0}^{\infty} k\, L^{k-1} q_k - 1\right] \leq \delta\left[\sum_{k=0}^{\infty} k\, q_k - 1\right] = \delta\left(\mu_1^Q - 1\right) < 0, \qquad 0 < L \leq 1;$$
    then, $f_1(L) > 0$ for $0 < L < 1$, since $f_1(1) = 0$.
  • Rewrite as:
    $$\frac{\mathrm{d}L}{\delta \hat{q}(L) - \delta L} = \mathrm{d}\tau$$
    By integrating both sides from time zero to τ with initial condition $L(0) = \varphi > 0$, we have:
    $$\int_{\varphi}^{L} \frac{\mathrm{d}u}{\delta \hat{q}(u) - \delta u} = \tau$$
    where $0 < L < 1$. We define the function on the left-hand side as:
    $$Q_{\varphi,1}(L) := \int_{\varphi}^{L} \frac{\mathrm{d}u}{\delta \hat{q}(u) - \delta u}$$
    then $Q_{\varphi,1}(L) = \tau$. Obviously, $L \to \varphi$ when $\tau \to 0$; by the limit comparison test,
    $$\lim_{u \uparrow 1} \frac{1/(1-u)}{1/\left(\delta \hat{q}(u) - \delta u\right)} = \delta \lim_{u \uparrow 1} \frac{\hat{q}(u) - u}{1-u} = \delta \lim_{u \uparrow 1} \frac{\hat{q}'(u) - 1}{-1} = \delta\left(1 - \mu_1^Q\right) > 0$$
    and we know that $\int_{\varphi}^{1} \frac{1}{1-u}\, \mathrm{d}u = \infty$; then,
    $$\int_{\varphi}^{1} \frac{\mathrm{d}u}{\delta \hat{q}(u) - \delta u} = \infty$$
    Hence, $L \to 1$ when $\tau \to \infty$; the integrand is positive on the domain $u \in [\varphi, 1)$, and $Q_{\varphi,1}(L)$ is a strictly increasing function; therefore, $Q_{\varphi,1}(\cdot): [\varphi, 1) \to [0, \infty)$ is a well-defined (monotone) function, and its inverse function $Q_{\varphi,1}^{-1}(\cdot): [0, \infty) \to [\varphi, 1)$ exists.
  • The unique solution is found by:
    $$L(\tau) = Q_{\varphi,1}^{-1}(\tau), \quad \text{or} \quad A(t) = Q_{\varphi,1}^{-1}(T-t)$$
  • $A(0)$ is obtained,
    $$A(0) = L(T) = Q_{\varphi,1}^{-1}(T)$$
Then, $c(T)$ is determined by:
$$c(T) = \rho \int_0^T \left[1 - \hat{p}\left(Q_{\varphi,1}^{-1}(\tau)\right)\right] \mathrm{d}\tau$$
By the change of variable $Q_{\varphi,1}^{-1}(\tau) = u$, we have $\tau = Q_{\varphi,1}(u)$, and:
$$\int_0^T \left[1 - \hat{p}\left(Q_{\varphi,1}^{-1}(\tau)\right)\right] \mathrm{d}\tau = \int_{Q_{\varphi,1}^{-1}(0)}^{Q_{\varphi,1}^{-1}(T)} \left[1 - \hat{p}(u)\right] \frac{\partial \tau}{\partial u}\, \mathrm{d}u = \int_{\varphi}^{Q_{\varphi,1}^{-1}(T)} \frac{1 - \hat{p}(u)}{\delta \hat{q}(u) - \delta u}\, \mathrm{d}u$$
   □
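The construction in the proof is directly computable: $Q_{\varphi,1}$ can be evaluated by numerical quadrature, inverted by root finding on $[\varphi, 1)$, and the outer integral then gives the pgf of $M_T$. The Python/SciPy sketch below, not part of the original paper, does this for the illustrative case $\hat{p}(u) = u$, $\hat{q}(u) = 1-q+qu$, where $Q_{\varphi,1}^{-1}(T) = 1-(1-\varphi)e^{-\kappa T}$ and the outer integral equals $(\rho/\kappa)\left(Q_{\varphi,1}^{-1}(T)-\varphi\right)$ in closed form, so the two routes can be compared; the parameter values are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rho, delta, q, M0, T, phi = 1.0, 1.0, 0.5, 3, 2.0, 0.4
p_hat = lambda u: u
q_hat = lambda u: 1.0 - q + q * u
denom = lambda u: delta * q_hat(u) - delta * u        # here = kappa * (1 - u)

def Q(L):                                             # Q_{phi,1}(L), phi <= L < 1
    return quad(lambda u: 1.0 / denom(u), phi, L, limit=200)[0]

def Q_inv(tau):                                       # invert Q on [phi, 1)
    return brentq(lambda L: Q(L) - tau, phi, 1.0 - 1e-9)

L_T = Q_inv(T)                                        # Q^{-1}_{phi,1}(T)
outer = quad(lambda u: rho * (1.0 - p_hat(u)) / denom(u), phi, L_T)[0]
pgf_numeric = np.exp(-outer) * L_T ** M0

kappa = delta * (1.0 - q)                             # closed form for this special case
L_exact = 1.0 - (1.0 - phi) * np.exp(-kappa * T)
pgf_exact = np.exp(-(rho / kappa) * (L_exact - phi)) * L_exact ** M0
print(pgf_numeric, pgf_exact)                         # the two values agree
```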
Theorem 3.6. If μ 1 Q < 1 , the probability generating function of the asymptotic distribution of M T is given by:
$$\lim_{T \to \infty} \mathbb{E}\left[\varphi^{M_T} \mid M_0\right] = \exp\left(-\int_{\varphi}^{1} \frac{\rho\left[1-\hat{p}(u)\right]}{\delta \hat{q}(u) - \delta u}\, \mathrm{d}u\right)$$
and this is also the probability generating function of the stationary distribution of process { M t } t 0 .
Proof. Since lim T Q φ , 1 - 1 ( T ) = 1 , and by Theorem 3.5, we have the probability generating function of the asymptotic distribution of M T immediately.
To further prove the stationarity, by Proposition 9.2 of Ethier and Kurtz [13], we need to prove that, for any function $f \in \Omega(\mathcal{A})$, we have:
$$\sum_{m=0}^{\infty} \mathcal{A} f(m)\, \pi_m = 0$$
where $\mathcal{A} f(m)$ is the infinitesimal generator of the discretised dynamic contagion process acting on $f(m)$, i.e.,
$$\mathcal{A} f(m) = \rho\left[\sum_{k=0}^{\infty} f(m+k)\, p_k - f(m)\right] + \delta m\left[\sum_{k=0}^{\infty} f(m+k-1)\, q_k - f(m)\right]$$
and $\{\pi_m\}_{m = 0, 1, 2, \ldots}$ are the candidate stationary probabilities, with probability generating function given by Equation (8). Now, we try to solve Equation (9).
For the first term of Equation (9), we have:
$$\sum_{m=0}^{\infty} \rho\left[\sum_{k=0}^{\infty} f(m+k)\, p_k\right] \pi_m = \rho \sum_{m=0}^{\infty} \pi_m \sum_{k=0}^{\infty} f(m+k)\, p_k \overset{(j=m+k)}{=} \rho \sum_{j=0}^{\infty} f(j) \sum_{k=0}^{j} \pi_{j-k}\, p_k = \rho \sum_{m=0}^{\infty} f(m) \sum_{k=0}^{m} \pi_{m-k}\, p_k$$
For the second term of Equation (9), we have:
$$\begin{aligned}
\sum_{m=0}^{\infty} \delta m \left[\sum_{k=0}^{\infty} f(m+k-1)\, q_k\right] \pi_m &= \delta \sum_{m=0}^{\infty} m\, \pi_m \sum_{k=0}^{\infty} f(m+k-1)\, q_k = \delta \sum_{m=-1}^{\infty} (m+1)\, \pi_{m+1} \sum_{k=0}^{\infty} f(m+k)\, q_k \\
&= \delta \sum_{m=0}^{\infty} (m+1)\, \pi_{m+1} \sum_{k=0}^{\infty} f(m+k)\, q_k \overset{(j=m+k)}{=} \delta \sum_{j=0}^{\infty} f(j) \sum_{k=0}^{j} (j-k+1)\, \pi_{j-k+1}\, q_k \\
&= \delta \sum_{m=0}^{\infty} f(m) \sum_{k=0}^{m} (m-k+1)\, \pi_{m-k+1}\, q_k
\end{aligned}$$
Therefore,
$$\sum_{m=0}^{\infty} \mathcal{A} f(m)\, \pi_m = \sum_{m=0}^{\infty} f(m)\left\{\rho\left[\sum_{k=0}^{m} \pi_{m-k}\, p_k - \pi_m\right] + \delta\left[\sum_{k=0}^{m} (m-k+1)\, \pi_{m-k+1}\, q_k - m\, \pi_m\right]\right\} = 0$$
for any function $f(m) \in \Omega(\mathcal{A})$; then, we have the recursive equation:
$$\rho\left[\sum_{k=0}^{m} \pi_{m-k}\, p_k - \pi_m\right] + \delta\left[\sum_{k=0}^{m} (m-k+1)\, \pi_{m-k+1}\, q_k - m\, \pi_m\right] = 0$$
and:
$$\sum_{m=0}^{\infty} \varphi^m \times \left\{\rho\left[\sum_{k=0}^{m} \pi_{m-k}\, p_k - \pi_m\right] + \delta\left[\sum_{k=0}^{m} (m-k+1)\, \pi_{m-k+1}\, q_k - m\, \pi_m\right]\right\} = 0$$
Introduce the transform (probability generating function) of $\{\pi_m\}$:
$$\hat{\pi}(\varphi) := \mathcal{L}\{\pi_m\} = \sum_{m=0}^{\infty} \pi_m\, \varphi^m$$
since:
$$\sum_{m=0}^{\infty} \varphi^m \sum_{k=0}^{m} \pi_{m-k}\, p_k = \sum_{k=0}^{\infty}\left[\sum_{m=k}^{\infty} \varphi^{m-k}\, \pi_{m-k}\right] \varphi^k\, p_k = \hat{\pi}(\varphi)\, \hat{p}(\varphi)$$
$$\begin{aligned}
\sum_{m=0}^{\infty} \varphi^m \sum_{k=0}^{m} (m-k+1)\, \pi_{m-k+1}\, q_k &\overset{(j=m-k+1)}{=} \sum_{m=0}^{\infty} \varphi^m \sum_{j=1}^{m+1} j\, \pi_j\, q_{m+1-j} = \frac{1}{\varphi} \sum_{m=0}^{\infty} \sum_{j=1}^{m+1} \left(j\, \pi_j\, \varphi^j\right)\left(q_{m+1-j}\, \varphi^{m+1-j}\right) \\
&\overset{(i=m+1)}{=} \frac{1}{\varphi} \sum_{j=1}^{\infty} j\, \pi_j\, \varphi^j \sum_{i=j}^{\infty} q_{i-j}\, \varphi^{i-j} = \hat{q}(\varphi) \sum_{j=0}^{\infty} j\, \pi_j\, \varphi^{j-1} = \hat{q}(\varphi)\, \hat{\pi}'(\varphi)
\end{aligned}$$
and:
$$\sum_{m=0}^{\infty} \varphi^m\, m\, \pi_m = \varphi \sum_{m=0}^{\infty} m\, \pi_m\, \varphi^{m-1} = \varphi\, \hat{\pi}'(\varphi)$$
we have the ODE for $\hat{\pi}(\varphi)$,
$$\rho\left[\hat{\pi}(\varphi)\, \hat{p}(\varphi) - \hat{\pi}(\varphi)\right] + \delta\left[\hat{q}(\varphi)\, \hat{\pi}'(\varphi) - \varphi\, \hat{\pi}'(\varphi)\right] = 0$$
then,
$$\hat{\pi}(\varphi) = \hat{\pi}(1) \exp\left(-\int_{\varphi}^{1} \frac{\rho\left[1-\hat{p}(u)\right]}{\delta \hat{q}(u) - \delta u}\, \mathrm{d}u\right)$$
with the boundary condition $\hat{\pi}(1) = 1$; hence, we have the unique solution:
$$\hat{\pi}(\varphi) = \exp\left(-\int_{\varphi}^{1} \frac{\rho\left[1-\hat{p}(u)\right]}{\delta \hat{q}(u) - \delta u}\, \mathrm{d}u\right)$$
which is exactly given by Equation (8).
Since the distribution is the unique solution to Equation (9), we have the stationarity for the process { M t } t 0 .   □
Remark 3.7. If $\mu_1^Q < 1$ and $M_0$ follows the stationary distribution $\{\pi_m\}$, then so does $M_T$, since by Theorem 3.6 and Theorem 3.5, we have:
$$\begin{aligned}
\mathbb{E}\left[\varphi^{M_T}\right] &= \mathbb{E}\left[\mathbb{E}\left[\varphi^{M_T} \mid M_0\right]\right] = \exp\left(-\int_{\varphi}^{Q_{\varphi,1}^{-1}(T)} \frac{\rho\left[1-\hat{p}(u)\right]}{\delta \hat{q}(u) - \delta u}\, \mathrm{d}u\right) \times \mathbb{E}\left[\left(Q_{\varphi,1}^{-1}(T)\right)^{M_0}\right] \\
&= \exp\left(-\int_{\varphi}^{Q_{\varphi,1}^{-1}(T)} \frac{\rho\left[1-\hat{p}(u)\right]}{\delta \hat{q}(u) - \delta u}\, \mathrm{d}u\right) \times \exp\left(-\int_{Q_{\varphi,1}^{-1}(T)}^{1} \frac{\rho\left[1-\hat{p}(u)\right]}{\delta \hat{q}(u) - \delta u}\, \mathrm{d}u\right) = \hat{\pi}(\varphi)
\end{aligned}$$
which also reflects the stationarity of process { M t } t 0 .

3.4. Probability Generating Function of N T

Theorem 3.8. Suppose $\mu_1^Q < 1$ and $N_0 = 0$; the probability generating function of $N_T$ conditional on $M_0$ is given by:
$$\mathbb{E}\left[\theta^{N_T} \mid M_0\right] = \exp\left(-\int_{Q_{0,\theta}^{-1}(T)}^{1} \frac{\rho\left[1-\hat{p}(u)\right]}{\delta u - \delta\theta\, \hat{q}(u)}\, \mathrm{d}u\right) \times \left[Q_{0,\theta}^{-1}(T)\right]^{M_0}$$
where:
$$Q_{0,\theta}(L) := \int_{L}^{1} \frac{\mathrm{d}u}{\delta u - \delta\theta\, \hat{q}(u)}, \qquad 0 \leq \theta < 1$$
Proof. By setting $t = 0$, $\varphi = 1$ and assuming $N_0 = 0$ in Theorem 3.4, we have:
$$\mathbb{E}\left[\theta^{N_T} \mid M_0\right] = e^{-c(T)}\, [A(0)]^{M_0}$$
where $A(0)$ is uniquely determined by the non-linear ODE:
$$A'(t) + \delta\theta\, \hat{q}(A(t)) - \delta A(t) = 0$$
with boundary condition A ( T ) = 1 . It can be solved, under the condition μ 1 Q < 1 , by the following steps:
  • Set $A(t) = L(T-t)$ and $\tau = T-t$,
    $$\frac{\mathrm{d}L(\tau)}{\mathrm{d}\tau} = \delta\theta\, \hat{q}(L(\tau)) - \delta L(\tau) =: f_2(L), \qquad 0 \leq \theta < 1$$
    with initial condition $L(0) = 1$; we denote the right-hand side by $f_2(L)$.
  • There is only one positive singular point in the interval $[0, 1]$, denoted by:
    $$0 \leq \varphi^* \leq 1$$
    obtained by solving the equation $f_2(L) = 0$. This is because, for the case $0 < \theta < 1$, the equation $f_2(L) = 0$ is equivalent to:
    $$\hat{q}(u) = \frac{1}{\theta}\, u, \qquad 0 < \theta < 1;$$
    note that $\hat{q}(\cdot)$ is a convex function, so it is clear that there is only one positive solution within $[0,1]$ to this equation; in particular, when $\theta = 0$, $\varphi^* = 0$. Then, we have $f_2(L) < 0$ for $\varphi^* < L \leq 1$.
  • Rewrite Equation (13) as:
    $$\frac{\mathrm{d}L}{\delta L - \delta\theta\, \hat{q}(L)} = -\mathrm{d}\tau$$
    and integrate,
    $$\int_{L}^{1} \frac{\mathrm{d}u}{\delta u - \delta\theta\, \hat{q}(u)} = \tau$$
    where $\varphi^* < L \leq 1$; we define the function on the left-hand side as:
    $$Q_{0,\theta}(L) := \int_{L}^{1} \frac{\mathrm{d}u}{\delta u - \delta\theta\, \hat{q}(u)}$$
    then $Q_{0,\theta}(L) = \tau$, with $L \to 1$ when $\tau \to 0$ and $L \to \varphi^*$ when $\tau \to \infty$; the integrand is positive on the domain $u \in (\varphi^*, 1]$, so $Q_{0,\theta}(L)$ is a strictly decreasing function of $L$. Therefore, $Q_{0,\theta}(\cdot): (\varphi^*, 1] \to [0, \infty)$ is a well-defined function, and its inverse function $Q_{0,\theta}^{-1}(\cdot): [0, \infty) \to (\varphi^*, 1]$ exists.
  • The unique solution is found by $L(\tau) = Q_{0,\theta}^{-1}(\tau)$, or $A(t) = Q_{0,\theta}^{-1}(T-t)$.
  • $A(0)$ is obtained,
    $$A(0) = L(T) = Q_{0,\theta}^{-1}(T)$$
Then, $c(T)$ is determined by:
$$c(T) = \rho \int_0^T \left[1 - \hat{p}\left(Q_{0,\theta}^{-1}(\tau)\right)\right] \mathrm{d}\tau$$
where, by the change of variable,
$$\int_0^T \left[1 - \hat{p}\left(Q_{0,\theta}^{-1}(\tau)\right)\right] \mathrm{d}\tau = \int_{Q_{0,\theta}^{-1}(T)}^{1} \frac{1 - \hat{p}(u)}{\delta u - \delta\theta\, \hat{q}(u)}\, \mathrm{d}u$$
   □
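The same numerical strategy applies to Theorem 3.8: compute $Q_{0,\theta}$ by quadrature, invert it by root finding on $(\varphi^*, 1]$, and evaluate the outer integral. The Python/SciPy sketch below, not part of the original paper, cross-checks this route against a direct backward solution of the ODE of Theorem 3.4 with $\varphi = 1$, again for the illustrative case $\hat{p}(u) = u$, $\hat{q}(u) = 1-q+qu$ with parameter values of our choosing.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.optimize import brentq

rho, delta, q, M0, T, theta = 1.0, 1.0, 0.5, 2, 1.0, 0.5
p_hat = lambda u: u
q_hat = lambda u: 1.0 - q + q * u
f2 = lambda u: delta * u - delta * theta * q_hat(u)       # denominator of Q_{0,theta}

Q = lambda L: quad(lambda u: 1.0 / f2(u), L, 1.0, limit=200)[0]   # Q_{0,theta}(L)
phi_star = theta * (1.0 - q) / (1.0 - theta * q)                  # singular point, f2 = 0
L_T = brentq(lambda L: Q(L) - T, phi_star + 1e-6, 1.0)            # Q^{-1}_{0,theta}(T)
outer = quad(lambda u: rho * (1.0 - p_hat(u)) / f2(u), L_T, 1.0)[0]
pgf_via_Q = np.exp(-outer) * L_T ** M0

# direct route: A'(t) = delta*A - delta*theta*q_hat(A), A(T) = 1, solved in tau = T - t
rhs = lambda tau, y: [delta * theta * q_hat(y[0]) - delta * y[0],
                      rho * (1.0 - p_hat(y[0]))]
sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], rtol=1e-10, atol=1e-12)
pgf_via_ode = np.exp(-sol.y[1, -1]) * sol.y[0, -1] ** M0
print(pgf_via_Q, pgf_via_ode)                                     # the two values agree
```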

4. Special Cases

In this section, we focus on three important special cases where more explicit results for the distributional properties of the numbers of settled and unsettled claims N t , M t t 0 can be derived, and the associated numerical examples are also provided.

4.1. Case p 1 = 1

Case p 1 = 1 is defined as the special case of a discretised dynamic contagion process when:
$$p_1 = 1, \quad \{p_k\}_{k \neq 1} = 0; \qquad q_0 = 1, \quad \{q_k\}_{k \neq 0} = 0$$
This simple case could be applied, for instance, to model the delaying arrival of claims in the ruin problem for an insurance company; see more details in Dassios and Zhao [9].
Theorem 4.1. For any time $t_2 > t_1 \geq 0$, if $M_{t_1} \sim \mathrm{Poisson}(\upsilon)$, $\upsilon \geq 0$, then,
$$M_{t_2} \sim \mathrm{Poisson}\left(\upsilon\, e^{-\delta(t_2-t_1)} + \rho\, \frac{1-e^{-\delta(t_2-t_1)}}{\delta}\right), \qquad N_{t_2} - N_{t_1} \sim \mathrm{Poisson}\left(\upsilon\left(1-e^{-\delta(t_2-t_1)}\right) + \rho\left[(t_2-t_1) - \frac{1-e^{-\delta(t_2-t_1)}}{\delta}\right]\right)$$
and they are independent.
Proof. By setting $T = t_2$, $t = t_1$ in Theorem 3.4, we have:
$$\mathbb{E}\left[\left.\varphi^{M_{t_2}}\, \theta^{N_{t_2}-N_{t_1}} \,\right|\, M_{t_1}\right] = [A(t_1)]^{M_{t_1}}\, e^{-(c(t_2)-c(t_1))}$$
where $A(t)$ and $c(t)$ can be solved explicitly as:
$$A(t) = (\varphi-\theta)\, e^{-\delta(t_2-t)} + \theta, \qquad c(t_2) - c(t_1) = \rho\left[(1-\theta)(t_2-t_1) - (\varphi-\theta)\, \frac{1-e^{-\delta(t_2-t_1)}}{\delta}\right]$$
The joint probability generating function of $M_{t_2}$ and $N_{t_2} - N_{t_1}$ is given by:
$$\begin{aligned}
\mathbb{E}\left[\varphi^{M_{t_2}}\, \theta^{N_{t_2}-N_{t_1}}\right] &= \mathbb{E}\left[\mathbb{E}\left[\left.\varphi^{M_{t_2}}\, \theta^{N_{t_2}-N_{t_1}} \,\right|\, M_{t_1}\right]\right] = \mathbb{E}\left[[A(t_1)]^{M_{t_1}}\right] e^{-(c(t_2)-c(t_1))} = e^{-\upsilon\left(1-A(t_1)\right)}\, e^{-(c(t_2)-c(t_1))} \\
&= \exp\left(-\upsilon\left[(1-\theta) - (\varphi-\theta)\, e^{-\delta(t_2-t_1)}\right] - \rho\left[(1-\theta)(t_2-t_1) - (\varphi-\theta)\, \frac{1-e^{-\delta(t_2-t_1)}}{\delta}\right]\right)
\end{aligned}$$
Setting $\theta = 1$ and $\varphi = 1$, respectively, we obtain the Poisson marginal distributions of $M_{t_2}$ and $N_{t_2}-N_{t_1}$, since:
$$\mathbb{E}\left[\varphi^{M_{t_2}}\right] = \exp\left(-(1-\varphi)\left[\upsilon\, e^{-\delta(t_2-t_1)} + \rho\, \frac{1-e^{-\delta(t_2-t_1)}}{\delta}\right]\right),$$
$$\mathbb{E}\left[\theta^{N_{t_2}-N_{t_1}}\right] = \exp\left(-(1-\theta)\left[\upsilon\left(1-e^{-\delta(t_2-t_1)}\right) + \rho\left((t_2-t_1) - \frac{1-e^{-\delta(t_2-t_1)}}{\delta}\right)\right]\right)$$
Obviously, they are also independent, as:
$$\mathbb{E}\left[\varphi^{M_{t_2}} \times \theta^{N_{t_2}-N_{t_1}}\right] = \mathbb{E}\left[\varphi^{M_{t_2}}\right] \times \mathbb{E}\left[\theta^{N_{t_2}-N_{t_1}}\right]$$
   □
Corollary 4.2. If $M_0 \sim \mathrm{Poisson}(\zeta)$, $\zeta \geq 0$, then:
$$M_t \sim \mathrm{Poisson}\left(\zeta\, e^{-\delta t} + \rho\, \frac{1-e^{-\delta t}}{\delta}\right), \qquad N_t \sim \mathrm{Poisson}\left(\zeta\left(1-e^{-\delta t}\right) + \rho\left(t - \frac{1-e^{-\delta t}}{\delta}\right)\right)$$
and they are independent.
Proof. Set t 1 = 0 , t 2 = t > 0 and υ = ζ in Theorem 4.1; the results follow immediately.   □
Corollary 4.3. If $M_0 \sim \mathrm{Poisson}(\zeta)$, then $N_t$ is a non-homogeneous Poisson process of rate $\rho + (\zeta\delta - \rho)\, e^{-\delta t}$.
Proof. For any time $t_2 > t_1 \geq 0$, by Corollary 4.2, we have:
$$M_{t_1} \sim \mathrm{Poisson}\left(\zeta\, e^{-\delta t_1} + \rho\, \frac{1-e^{-\delta t_1}}{\delta}\right)$$
By Theorem 4.1, set $\upsilon = \zeta\, e^{-\delta t_1} + \rho\, \frac{1-e^{-\delta t_1}}{\delta}$ in Equation (16); then,
$$\mathbb{E}\left[\theta^{N_{t_2}-N_{t_1}}\right] = \exp\left(-(1-\theta)\int_{t_1}^{t_2} \left[\zeta\delta\, e^{-\delta s} + \rho\left(1-e^{-\delta s}\right)\right] \mathrm{d}s\right)$$
hence, the increments of $N_t$ follow a Poisson distribution,
$$N_{t_2} - N_{t_1} \sim \mathrm{Poisson}\left(\int_{t_1}^{t_2} \left[\zeta\delta\, e^{-\delta s} + \rho\left(1-e^{-\delta s}\right)\right] \mathrm{d}s\right)$$
Based on Theorem 4.1 and Corollary 4.2, we observe that $M_{t_2}$ and $N_{t_2} - N_{t_1}$ are both Poisson distributed and independent. Because of the Markov property, all of the future increments after time $t_2$ depend only on $M_{t_2}$, so they are independent of $N_{t_2} - N_{t_1}$ as well; i.e., for any random variable $X \in \sigma\left(N_s - N_{t_2}:\, s \geq t_2\right)$, we have:
$$\mathbb{E}\left[X\, \theta^{N_{t_2}-N_{t_1}}\right] = \mathbb{E}\left[\mathbb{E}\left[\left.X\, \theta^{N_{t_2}-N_{t_1}} \,\right|\, M_{t_2}\right]\right] = \mathbb{E}\left[\mathbb{E}\left[X \mid M_{t_2}\right] \times \mathbb{E}\left[\left.\theta^{N_{t_2}-N_{t_1}} \,\right|\, M_{t_2}\right]\right] = \mathbb{E}\left[\mathbb{E}\left[X \mid M_{t_2}\right]\right] \times \mathbb{E}\left[\mathbb{E}\left[\left.\theta^{N_{t_2}-N_{t_1}} \,\right|\, M_{t_2}\right]\right] = \mathbb{E}\left[X\right]\, \mathbb{E}\left[\theta^{N_{t_2}-N_{t_1}}\right]$$
where the third equality uses the independence of $N_{t_2}-N_{t_1}$ and $M_{t_2}$. The increments of the point process $N_t$ follow a Poisson distribution and are independent; therefore, $N_t$ is a non-homogeneous Poisson process of rate $\zeta\delta\, e^{-\delta t} + \rho\left(1-e^{-\delta t}\right)$.   □
In particular, if and only if $\zeta = \rho/\delta$, $N_t$ is a homogeneous Poisson process of rate ρ. Corollary 4.3, in fact, recovers the result obtained earlier by Mirasol [14], i.e., a delayed (or displaced) Poisson process is still a (non-homogeneous) Poisson process; see also Newell [15] and Lawrance and Lewis [16].
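As a small Monte Carlo illustration, not part of the original paper, the Poisson and independence claims of Corollary 4.2 can be checked by simulating this special case directly (single claims arrive at rate ρ, each outstanding claim settles at rate δ, nothing is revealed at settlement). The Python/NumPy sketch below uses illustrative parameter values; sample means should match the two Poisson parameters and the sample correlation of $(M_t, N_t)$ should be close to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, delta, zeta, t, n_paths = 1.0, 1.0, 2.0, 3.0, 50_000

def one_path():
    M, N, s = rng.poisson(zeta), 0, 0.0
    while True:
        rate = rho + delta * M                 # total event rate of the chain
        s += rng.exponential(1.0 / rate)
        if s > t:
            return M, N
        if rng.random() < rho / rate:
            M += 1                             # a new claim arrives
        else:
            M -= 1; N += 1                     # one outstanding claim is settled

samples = np.array([one_path() for _ in range(n_paths)])
lam_M = zeta * np.exp(-delta * t) + rho * (1 - np.exp(-delta * t)) / delta
lam_N = zeta * (1 - np.exp(-delta * t)) + rho * (t - (1 - np.exp(-delta * t)) / delta)
print(samples.mean(axis=0), (lam_M, lam_N))    # sample means vs. Poisson parameters
print(np.corrcoef(samples.T)[0, 1])            # ~ 0, consistent with independence
```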

4.2. Case q 1 = q

Case q 1 = q is defined as the special case of a discretised dynamic contagion process when:
$$p_1 = 1, \quad \{p_k\}_{k \neq 1} = 0; \qquad q_0 = 1-q, \quad q_1 = q, \quad \{q_k\}_{k = 2, 3, \ldots} = 0; \qquad 0 \leq q < 1$$
Corollary 4.4. The stationary distribution of M t is a Poisson distribution specified by:
$$\{M_t\}_{t \geq 0} \sim \mathrm{Poisson}\left(\frac{\rho}{\delta(1-q)}\right)$$
Proof. The stationarity condition holds, as $\mu_1^Q = q < 1$; then, by Theorem 3.6, we have:
$$\hat{\pi}(\varphi) = e^{-\frac{\rho}{\delta(1-q)}(1-\varphi)}$$
which is the probability generating function of a Poisson distribution with parameter $\frac{\rho}{\delta(1-q)}$.   □
Corollary 4.5. The probability generating function of $N_T$ is given by:
$$\mathbb{E}\left[\theta^{N_T} \mid M_0\right] = \exp\left(-\frac{\rho}{\delta}\, \frac{1-\theta}{1-\theta q}\left[\delta T - \frac{1-e^{-(1-\theta q)\delta T}}{1-\theta q}\right]\right) \left[\frac{\theta(1-q) + (1-\theta)\, e^{-(1-\theta q)\delta T}}{1-\theta q}\right]^{M_0}$$
If $M_0 \sim \mathrm{Poisson}\left(\frac{\rho}{\delta(1-q)}\right)$, then,
$$\mathbb{E}\left[\theta^{N_T}\right] = \exp\left(-\rho T\left[1 - \frac{1-q}{1-\theta q}\,\theta\right]\right) \exp\left(-\frac{\rho}{\delta}\,\frac{q}{1-q}\left[1 - \frac{1-q}{1-\theta q}\,\theta\right]^2\left(1 - e^{-(1-\theta q)\delta T}\right)\right)$$
Proof. The stationarity condition holds, as $\mu_1^Q = q < 1$. By Theorem 3.8, the results follow, since:
$$Q_{0,\theta}^{-1}(T) = \frac{\theta(1-q) + (1-\theta)\, e^{-(1-\theta q)\delta T}}{1-\theta q}, \qquad 0 \leq \theta < 1$$
   □
Note that the first term of $\mathbb{E}\left[\theta^{N_T}\right]$ in Equation (19) is the probability generating function of a compound Poisson random variable $N_1$ with counting random variable $N^{\circ}_T \sim \mathrm{Poisson}(\rho T)$ and underlying random variable $X_1 \sim \mathrm{Geometric}(1-q)$, where:
$$\mathbb{P}\{X_1 = j\} = q^{j-1}(1-q), \quad j = 1, 2, \ldots; \qquad \mathbb{E}\left[\theta^{X_1}\right] = \frac{1-q}{1-\theta q}\,\theta$$
The second term is the probability generating function of a proper random variable $\tilde{O}$. Hence, $N_T = N_1 + \tilde{O}$, and $N_T$ is stochastically larger than $N_1$, i.e., $N_T \succeq N_1$.
Given the probability generating function of N T in Corollary 4.5, the probability distribution of the number of the cumulative settled claims at time T can be obtained explicitly by the basic property:
$$\mathbb{P}\{N_T = n \mid M_0\} = \frac{1}{n!}\,\frac{\partial^n}{\partial \theta^n}\, \mathbb{E}\left[\theta^{N_T} \mid M_0\right]\bigg|_{\theta=0}$$
Numerical examples with the specified parameters ( ρ , δ , q ) = ( 1 , 1 , 0 . 5 ) are provided in Table 1.
Table 1. Numerical examples for case $q_1 = q$ based on Corollary 4.5: the probability distribution of the number of cumulative settled claims at time T with parameters $(\rho, \delta, q) = (1, 1, 0.5)$.

        P{N_T = n | M_0 = 0} (%)            P{N_T = n | M_0 = 5} (%)
  n     T = 1     T = 2     T = 5           T = 1     T = 2     T = 5
  0     69.2201   32.1314    1.8193          0.4664    0.0015    0.0000
  1     21.8777   27.7829    4.5175          3.3169    0.0319    0.0000
  2      6.6404   18.9572    7.3929         10.3724    0.2956    0.0000
  3      1.7365   10.9172    9.7336         18.9468    1.5260    0.0001
  4      0.4113    5.6055   11.1254         22.8201    4.8543    0.0047
  5      0.0906    2.6432   11.4776         19.6109   10.1416    0.0852
  6      0.0188    1.1655   10.9392         12.8517   14.9512    0.3778
  7      0.0037    0.4865    9.7807          6.8021   17.0146    1.0198
  8      0.0007    0.1939    8.2921          3.0379   15.9371    2.0730
  9      0.0001    0.0742    6.7191          1.1825   12.8316    3.4803
  10     0.0000    0.0275    5.2352          0.4109    9.1512    5.0745
  Sum  100.0000   99.9850   87.0325         99.8186   86.7366   12.1154
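The entries of Table 1 can be reproduced by Taylor-expanding the explicit pgf of Corollary 4.5 in θ around zero. The following SymPy sketch, not part of the original paper, does this for the column $M_0 = 0$, $T = 1$ (the other columns follow by changing $M_0$ and $T$).

```python
import sympy as sp

theta = sp.symbols('theta')
rho, delta, T, M0 = sp.Integer(1), sp.Integer(1), sp.Integer(1), 0
q = sp.Rational(1, 2)

a = (1 - theta * q) * delta * T
pgf = sp.exp(-(rho / delta) * (1 - theta) / (1 - theta * q)
             * (delta * T - (1 - sp.exp(-a)) / (1 - theta * q))) \
      * ((theta * (1 - q) + (1 - theta) * sp.exp(-a)) / (1 - theta * q)) ** M0

# P{N_T = n | M_0} is the coefficient of theta^n in the series expansion of the pgf
probs = sp.series(pgf, theta, 0, 11).removeO()
for n in range(11):
    print(n, float(probs.coeff(theta, n)) * 100)   # percentages; cf. Table 1, T = 1, M_0 = 0
```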

4.3. Case q 0 = 1

Case q 0 = 1 is defined as the special case of a discretised dynamic contagion process when:
$$q_0 = 1, \quad \{q_k\}_{k \neq 0} = 0$$
Indeed, this is a special case which corresponds to a Cox process with shot-noise intensity via transformation, as given by Appendix A.
Corollary 4.6. If $\{p_k\}_{k=0,1,2,\ldots} \sim \mathrm{Geometric}(p)$, $0 < p < 1$, then the stationary distribution of $M_t$ is given by:
$$\{M_t\}_{t \geq 0} \sim \mathrm{NegBin}\left(\frac{\rho}{\delta},\, 1-p\right)$$
Proof. If $\{p_k\}_{k=0,1,2,\ldots} \sim \mathrm{Geometric}(p)$, then,
$$\hat{p}(u) = \frac{p}{1-(1-p)u}$$
The stationarity condition holds, as $\mu_1^Q = 0 < 1$; then, by Theorem 3.6, we have:
$$\hat{\pi}(\varphi) = \left[\frac{p}{1-(1-p)\varphi}\right]^{\frac{\rho}{\delta}}$$
which is the probability generating function of a negative binomial distribution with parameters $\left(\frac{\rho}{\delta},\, 1-p\right)$.   □
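As a numerical check, not part of the original paper, the stationary pgf of Theorem 3.6 can be evaluated by quadrature for this case ($\hat{q} \equiv 1$ and geometric $\{p_k\}$) and compared with the negative binomial pgf $\left[p/(1-(1-p)\varphi)\right]^{\rho/\delta}$; the Python/SciPy sketch below uses illustrative parameter values.

```python
import numpy as np
from scipy.integrate import quad

rho, delta, p, phi = 1.0, 0.5, 0.5, 0.3
p_hat = lambda u: p / (1.0 - (1.0 - p) * u)
q_hat = lambda u: 1.0                                   # q0 = 1
integrand = lambda u: rho * (1.0 - p_hat(u)) / (delta * q_hat(u) - delta * u)

pgf_numeric = np.exp(-quad(integrand, phi, 1.0)[0])     # Theorem 3.6
pgf_negbin = (p / (1.0 - (1.0 - p) * phi)) ** (rho / delta)
print(pgf_numeric, pgf_negbin)                          # the two values agree
```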
Corollary 4.7. If $\{p_k\}_{k=0,1,2,\ldots} \sim \mathrm{Geometric}(p)$, then,
$$\mathbb{E}\left[\theta^{N_T} \mid M_0\right] = e^{-\rho T\left[1-\hat{p}(\theta)\right]} \left[\frac{p_T}{1-(1-p_T)\theta}\right]^{-\frac{\rho}{\delta}\hat{p}(\theta)} \left[(1-\theta)\, e^{-\delta T} + \theta\right]^{M_0}$$
where $\hat{p}(u)$ is specified by Equation (21) and
$$p_T := \frac{p}{1-(1-p)\, e^{-\delta T}}$$
If $M_0 \sim \mathrm{NegBin}\left(\frac{\rho}{\delta},\, 1-p\right)$, then,
$$\mathbb{E}\left[\theta^{N_T}\right] = e^{-\rho T\left[1-\hat{p}(\theta)\right]} \left[\frac{p_T}{1-(1-p_T)\theta}\right]^{\frac{\rho}{\delta}\left[1-\hat{p}(\theta)\right]}$$
Proof. By Theorem 3.8, the results follow, since:
$$Q_{0,\theta}^{-1}(T) = (1-\theta)\, e^{-\delta T} + \theta, \qquad 0 \leq \theta < 1$$
   □
Note that the first term of $\mathbb{E}\left[\theta^{N_T}\right]$ in Equation (22) is the probability generating function of a compound Poisson random variable $N_2$ with counting random variable $N^{\circ}_T \sim \mathrm{Poisson}(\rho T)$ and underlying random variable $X_2 \sim \mathrm{Geometric}(p)$, where:
$$\mathbb{P}\{X_2 = j\} = (1-p)^j\, p, \quad j = 0, 1, 2, \ldots; \qquad \mathbb{E}\left[\theta^{X_2}\right] = \frac{p}{1-(1-p)\theta}$$
The second term of Equation (22) is the probability generating function of a proper random variable $\tilde{O}$. Hence, we have $N_T = N_2 + \tilde{O}$, and $N_T$ is stochastically larger than $N_2$, i.e., $N_T \succeq N_2$.
Given the probability generating function of N T in Corollary 4.7, the probability distribution of the number of the cumulative settled claims at time T can be obtained explicitly by expansion, and numerical examples with the specified parameters ( ρ , δ , p ) = ( 1 , 0 . 5 , 0 . 5 ) are provided in Table 2.
Table 2. Numerical examples for case $q_0 = 1$ based on Corollary 4.7: the probability distribution of the number of cumulative settled claims at time T with parameters $(\rho, \delta, p) = (1, 0.5, 0.5)$.

        P{N_T = n | M_0 = 0} (%)            P{N_T = n | M_0 = 5} (%)
  n     T = 1     T = 2     T = 5           T = 1     T = 2     T = 5
  0     84.5182   60.0424   15.7432          6.9377    0.4046    0.0001
  1     11.2858   21.4735   17.2706         23.4295    3.6204    0.0033
  2      3.0271   10.0735   16.3053         32.4498   13.2556    0.0770
  3      0.8397    4.6412   13.8000         23.7139   25.4106    0.9043
  4      0.2360    2.1013   10.8740          9.9613   27.2604    5.5660
  5      0.0668    0.9376    8.1400          2.6542   16.8608   16.2074
  6      0.0189    0.4134    5.8596          0.6155    7.1872   16.7767
  7      0.0054    0.1805    4.0889          0.1710    3.3097   15.2520
  8      0.0015    0.0781    2.7816          0.0481    1.4993   12.6134
  9      0.0004    0.0336    1.8522          0.0136    0.6694    9.7828
  10     0.0001    0.0143    1.2111          0.0039    0.2953    7.2381
  Sum  100.0000   99.9895   97.9265         99.9985   99.7733   84.4212

Acknowledgments

Hongbiao Zhao acknowledges the financial support from the National Natural Science Foundation of China (#71401147) and the research funds provided by Xiamen University.

Author Contributions

The two authors contributed equally to all aspects of this work.

A. Comparison with the Zero-Reversion Dynamic Contagion Process

A.1. Zero-Reversion Dynamic Contagion Process

This section demonstrates an alternative representation of the dynamic contagion process [8] with zero-reversion intensity (as defined by Definition A.1). We find later, in Theorem A.6, that this process is a special case of the discretised dynamic contagion process when both $K^P$ and $K^Q$ follow mixed Poisson distributions.
Definition A.1 (Zero-reversion dynamic contagion process). The zero-reversion dynamic contagion process is a point process $N_t^* \equiv \left\{T_k^{(2)}\right\}_{k \geq 1}$ with non-negative $\mathcal{F}_t$-stochastic intensity process:
$$\lambda_t = \lambda_0\, e^{-\delta t} + \sum_{0 \leq T_i^{(1)} < t} Y_i^{(1)}\, e^{-\delta\left(t-T_i^{(1)}\right)} + \sum_{0 \leq T_k^{(2)} < t} Y_k^{(2)}\, e^{-\delta\left(t-T_k^{(2)}\right)}$$
where:
  • $\{\mathcal{F}_t\}_{t \geq 0}$ is a history of the process $N_t^*$, with respect to which $\{\lambda_t\}_{t \geq 0}$ is adapted;
  • $\lambda_0 > 0$ is a constant, the initial value of $\lambda_t$ at time $t = 0$;
  • $\delta > 0$ is the constant rate of exponential decay;
  • $\left\{Y_i^{(1)}\right\}_{i=1,2,\ldots}$ is a sequence of i.i.d. positive (externally-excited) jumps with distribution function $H(y)$, $y > 0$, at the corresponding random times $\left\{T_i^{(1)}\right\}_{i=1,2,\ldots}$ following a Poisson process of rate $\rho > 0$;
  • $\left\{Y_k^{(2)}\right\}_{k=1,2,\ldots}$ is a sequence of i.i.d. positive (self-excited) jumps with distribution function $G(y)$, $y > 0$, at the corresponding random times $\left\{T_k^{(2)}\right\}_{k=1,2,\ldots}$;
  • the sequences $\left\{Y_i^{(1)}\right\}_{i=1,2,\ldots}$, $\left\{T_i^{(1)}\right\}_{i=1,2,\ldots}$ and $\left\{Y_k^{(2)}\right\}_{k=1,2,\ldots}$ are assumed to be independent of each other.
The first moments and Laplace transforms of the two types of jumps $Y_i^{(1)}$ and $Y_k^{(2)}$ are denoted respectively by:
$$\mu_1^H := \int_0^{\infty} y\, \mathrm{d}H(y), \quad \mu_1^G := \int_0^{\infty} y\, \mathrm{d}G(y); \qquad \hat{h}(u) := \int_0^{\infty} e^{-uy}\, \mathrm{d}H(y), \quad \hat{g}(u) := \int_0^{\infty} e^{-uy}\, \mathrm{d}G(y)$$
The generator of a zero-reversion dynamic contagion process $(\lambda_t, N_t^*, t)$ acting on a function $f(\lambda, n, t)$ is given by:
$$\mathcal{A} f(\lambda,n,t) = \frac{\partial f}{\partial t} - \delta\lambda\, \frac{\partial f}{\partial \lambda} + \rho\left[\int_0^{\infty} f(\lambda+y,n,t)\, \mathrm{d}H(y) - f(\lambda,n,t)\right] + \lambda\left[\int_0^{\infty} f(\lambda+y,n+1,t)\, \mathrm{d}G(y) - f(\lambda,n,t)\right]$$
Key distributional properties, which will be used later, are listed as below; see the proofs in Dassios and Zhao [8].
Proposition A.2. The stationarity condition of intensity process λ t is δ > μ 1 G .
Theorem A.3. If $\delta > \mu_1^G$, the Laplace transform of $\lambda_T$ conditional on $\lambda_0$ for a fixed time $T$ is given by:
$$\mathbb{E}\left[e^{-v\lambda_T} \mid \lambda_0\right] = \exp\left(-\int_{G_{v,1}^{-1}(T)}^{v} \frac{\rho\left[1-\hat{h}(u)\right]}{\delta u + \hat{g}(u) - 1}\, \mathrm{d}u\right) \times e^{-G_{v,1}^{-1}(T)\,\lambda_0}$$
where:
$$G_{v,1}(L) := \int_{L}^{v} \frac{\mathrm{d}u}{\delta u + \hat{g}(u) - 1}$$
Theorem A.4. If δ > μ 1 G , the Laplace transform of the asymptotic distribution of λ t is given by:
$$\hat{\Pi}(v) := \lim_{t\to\infty} \mathbb{E}\left[e^{-v\lambda_t} \mid \lambda_0\right] = \exp\left(-\int_0^{v} \frac{\rho\left[1-\hat{h}(u)\right]}{\delta u + \hat{g}(u) - 1}\, \mathrm{d}u\right)$$
and Equation (27) is also the Laplace transform of the stationary distribution of process { λ t } t 0 .
Dassios and Zhao [17] further apply the counting process N t * to model the arrival of insurance claims for the ruin problem via efficient Monte Carlo simulation. One of the advantages of the model using the discretised dynamic contagion process in this paper is that we would be able to investigate the properties of the unsettled number of claims (i.e., M t ) itself explicitly, whereas this quantity is not explicit in Dassios and Zhao [17].

A.2. Transformations between Two Processes

We explore the analogy between N t * , λ t and N t , M t via distributional transformations.
Lemma A.5. If:
$$\hat{p}(u) = \hat{h}\left(\frac{1-u}{\delta}\right), \qquad \hat{q}(u) = \hat{g}\left(\frac{1-u}{\delta}\right)$$
then the joint Laplace transform and probability generating function of $(N_T^*, \lambda_T)$ is given by:
$$\mathbb{E}\left[\left.\theta^{\left(N_T^* - N_t^*\right)}\, e^{-v\lambda_T} \,\right|\, \mathcal{F}_t\right] = e^{-(D(T)-D(t))}\, e^{-B(t)\lambda_t}, \qquad v \geq 0$$
where:
$$B(t) = \frac{1-A(t)}{\delta}, \qquad D(t) = c(t)$$
with boundary condition $B(T) = v$, and $A(t)$, $c(t)$ are given by Equation (5).
Proof. Similar to the proof of Theorem 3.4, for a zero-reversion dynamic contagion process $(N_t^*, \lambda_t)$, assume the form $f(\lambda,n,t) = e^{-B(t)\lambda}\, \theta^n\, e^{D(t)}$, and set $\mathcal{A} f(\lambda,n,t) = 0$ in generator Equation (25); we obtain the martingale $e^{-B(t)\lambda_t}\, \theta^{N_t^*}\, e^{D(t)}$, where:
$$B'(t) = \theta\, \hat{g}(B(t)) + \delta B(t) - 1, \qquad D'(t) = \rho\left[1 - \hat{h}(B(t))\right]$$
With boundary condition $B(T) = v$, and by the martingale property, we have Equation (29).
On the other hand, for the joint probability generating function of a discretised dynamic contagion process $(M_t, N_t)$ as given by Theorem 3.4, we have:
$$A'(t) = -\delta\theta\, \hat{q}(A(t)) + \delta A(t), \qquad c'(t) = \rho\left[1 - \hat{p}(A(t))\right]$$
The analogy between $(N_t^*, \lambda_t)$ and $(N_t, M_t)$ is linked by Equations (31) and (32): without solving the equations explicitly, if we set Equation (28) and Equation (30), then Equations (31) and (32) are equivalent.   □
We can prove in Theorem A.6 that, via distributional transformations, the increments of N t and N t * have the same distribution, and the finite-dimensional distributions of N t and N t * are the same.
Theorem A.6. If $M_0 \sim \mathrm{Poisson}\left(\frac{\lambda_0}{\delta}\right)$ and:
$$K^P \sim \text{Mixed Poisson}\left(\frac{Y}{\delta}\right)\bigg|_{Y \sim H}, \qquad K^Q \sim \text{Mixed Poisson}\left(\frac{Y}{\delta}\right)\bigg|_{Y \sim G}$$
i.e.,
$$p_k = \int_0^{\infty} \frac{e^{-\frac{y}{\delta}}}{k!}\left(\frac{y}{\delta}\right)^k \mathrm{d}H(y), \qquad q_k = \int_0^{\infty} \frac{e^{-\frac{y}{\delta}}}{k!}\left(\frac{y}{\delta}\right)^k \mathrm{d}G(y)$$
then,
$$\mathbb{E}\left[\theta^{N_T^*} \mid \lambda_0\right] = \mathbb{E}\left[\theta^{N_T}\right]$$
Proof. If $M_0 \sim \mathrm{Poisson}\left(\frac{\lambda_0}{\delta}\right)$, then $\mathbb{E}\left[\psi^{M_0}\right] = e^{-\frac{1-\psi}{\delta}\lambda_0}$. The condition Equation (28) is equivalent to Equation (33), since:
$$\sum_{k=0}^{\infty} u^k p_k = \hat{p}(u) = \hat{h}\left(\frac{1-u}{\delta}\right) = \mathbb{E}\left[e^{-\frac{Y}{\delta}(1-u)}\right] = \mathbb{E}\left[\mathbb{E}\left[u^{K^P} \,\Big|\, K^P \sim \mathrm{Poisson}\left(\tfrac{Y}{\delta}\right)\right]\right] = \sum_{k=0}^{\infty} u^k\, \mathbb{E}\left[\mathbb{P}\left\{K^P = k \,\Big|\, K^P \sim \mathrm{Poisson}\left(\tfrac{Y}{\delta}\right)\right\}\right] = \sum_{k=0}^{\infty} u^k\, \mathbb{P}\left\{K^P = k\right\}$$
and similarly for $K^Q$.
Set $t = 0$, $\varphi = 1$ in Equation (5) and $t = 0$, $v = 0$ in Equation (29); then, by Equation (30) of Lemma A.5, we have:
$$\mathbb{E}\left[\theta^{N_T^*} \mid \lambda_0\right] = e^{-(D(T)-D(0))}\, e^{-B(0)\lambda_0} = e^{-(c(T)-c(0))}\, e^{-B(0)\lambda_0}, \qquad \mathbb{E}\left[\theta^{N_T}\right] = \mathbb{E}\left[\mathbb{E}\left[\theta^{N_T} \mid M_0\right]\right] = e^{-(c(T)-c(0))}\, e^{-\frac{1-A(0)}{\delta}\lambda_0} = e^{-(c(T)-c(0))}\, e^{-B(0)\lambda_0}$$
   □
Corollary A.7. If $H \sim \mathrm{Exp}(\alpha)$ and $G \sim \mathrm{Exp}(\beta)$, $\alpha, \beta > 0$, then the transformations are given by:
$$\{p_k\}_{k=0,1,2,\ldots} \sim \mathrm{Geometric}(p), \quad p := \frac{\delta\alpha}{\delta\alpha+1}; \qquad \{q_k\}_{k=0,1,2,\ldots} \sim \mathrm{Geometric}(\acute{q}), \quad \acute{q} := \frac{\delta\beta}{\delta\beta+1}$$
Proof. If $H \sim \mathrm{Exp}(\alpha)$, then, by Equation (33) or Equation (28), we have:
$$\hat{p}(u) = \hat{h}\left(\frac{1-u}{\delta}\right) = \frac{\alpha}{\alpha + \frac{1-u}{\delta}} = \frac{p}{1-(1-p)u}$$
and similarly for $G \sim \mathrm{Exp}(\beta)$.   □
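The mixing identity behind Corollary A.7 is easy to verify numerically: compounding a Poisson($Y/\delta$) with $Y \sim \mathrm{Exp}(\alpha)$ produces the geometric weights $p\,(1-p)^k$ with $p = \delta\alpha/(\delta\alpha+1)$. The Python/SciPy sketch below, not part of the original paper, checks the first few $p_k$ by quadrature with illustrative parameter values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import factorial

alpha, delta = 1.5, 2.0
p = delta * alpha / (delta * alpha + 1.0)

def p_k(k):
    """p_k of Equation (33) with H ~ Exp(alpha): mix the Poisson(y/delta) mass at k
    over the exponential density alpha * exp(-alpha * y)."""
    integrand = lambda y: (np.exp(-y / delta) * (y / delta) ** k / factorial(k)
                           * alpha * np.exp(-alpha * y))
    return quad(integrand, 0.0, np.inf)[0]

for k in range(5):
    print(k, p_k(k), p * (1.0 - p) ** k)   # the two columns agree
```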
Remark A.8. The stationarity condition of the process $M_t$ given by Equation (3) can alternatively be derived, via the transformation of Theorem A.6, from the stationarity condition $\delta > \mu_1^G$ for the process $\lambda_t$ in Proposition A.2, i.e.,
$$\mu_1^Q = \mathbb{E}\left[K^Q\right] = \mathbb{E}\left[\left.\frac{Y}{\delta}\, \right|\, Y \sim G\right] = \frac{\mu_1^G}{\delta} < 1$$
In particular, if $K^Q \sim \mathrm{Geometric}(\acute{q})$, then the stationarity condition is $\acute{q} > \frac{1}{2}$.
Corollary A.9. If $M_0 \sim \mathrm{Poisson}\left(\frac{\lambda_0}{\delta}\right)$ and:
$$K^P \sim \text{Mixed Poisson}\left(\frac{Y}{\delta}\right)\bigg|_{Y \sim H}, \qquad K^Q \sim \text{Mixed Poisson}\left(\frac{Y}{\delta}\right)\bigg|_{Y \sim G}$$
then,
$$\mathbb{E}\left[\varphi^{M_T}\right] = \mathbb{E}\left[e^{-v\lambda_T} \mid \lambda_0\right]$$
Proof. By transformation Equation (30) of Lemma A.5, we have:
$$v = \frac{1-\varphi}{\delta}, \qquad G_{v,1}^{-1}(T) = \frac{1-Q_{\varphi,1}^{-1}(T)}{\delta}, \qquad A(0) = 1 - \delta B(0)$$
and:
$$G_{v,1}^{-1}(T) = B(0), \qquad Q_{\varphi,1}^{-1}(T) = A(0)$$
Then, by comparing Theorem 3.5 and Theorem A.3, we have:
$$\begin{aligned}
\mathbb{E}\left[\varphi^{M_T}\right] &= \mathbb{E}\left[\mathbb{E}\left[\varphi^{M_T} \mid M_0\right]\right] = \exp\left(-\int_{\varphi}^{Q_{\varphi,1}^{-1}(T)} \frac{\rho\left[1-\hat{p}(u)\right]}{\delta\hat{q}(u)-\delta u}\, \mathrm{d}u\right) \mathbb{E}\left[\left(Q_{\varphi,1}^{-1}(T)\right)^{M_0}\right] \\
&= \exp\left(-\int_{\varphi}^{Q_{\varphi,1}^{-1}(T)} \frac{\rho\left[1-\hat{h}\left(\frac{1-u}{\delta}\right)\right]}{\delta\hat{g}\left(\frac{1-u}{\delta}\right)-\delta u}\, \mathrm{d}u\right) e^{-\frac{1-Q_{\varphi,1}^{-1}(T)}{\delta}\lambda_0} \\
&\overset{\left(s=\frac{1-u}{\delta}\right)}{=} \exp\left(-\int_{\frac{1-Q_{\varphi,1}^{-1}(T)}{\delta}}^{\frac{1-\varphi}{\delta}} \frac{\rho\left[1-\hat{h}(s)\right]}{\delta s+\hat{g}(s)-1}\, \mathrm{d}s\right) e^{-\frac{1-Q_{\varphi,1}^{-1}(T)}{\delta}\lambda_0} = \mathbb{E}\left[e^{-v\lambda_T} \mid \lambda_0\right]
\end{aligned}$$
   □
Corollary A.10. If $K^P \sim \text{Mixed Poisson}\left(\frac{Y}{\delta}\right)\big|_{Y \sim H}$ and $K^Q \sim \text{Mixed Poisson}\left(\frac{Y}{\delta}\right)\big|_{Y \sim G}$, then,
$$\hat{\pi}(\varphi) = \hat{\Pi}(v)$$
Proof. By Theorem 3.6 and Theorem A.4, we have:
$$\hat{\Pi}(v) = \exp\left(-\int_0^{v} \frac{\rho\left[1-\hat{p}(1-\delta u)\right]}{\delta u + \hat{q}(1-\delta u) - 1}\, \mathrm{d}u\right) \overset{(s=1-\delta u)}{=} \hat{\pi}(1-\delta v) = \hat{\pi}(\varphi)$$
   □

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. A.G. Hawkes. “Point spectra of some mutually exciting point processes.” J. R. Stat. Soc. Ser. B 33 (1971): 438–443. [Google Scholar]
  2. A.G. Hawkes. “Spectra of some self-exciting and mutually exciting point processes.” Biometrika 58 (1971): 83–90. [Google Scholar] [CrossRef]
  3. E. Errais, K. Giesecke, and L.R. Goldberg. “Affine point processes and portfolio credit risk.” SIAM J. Financ. Math. 1 (2010): 642–665. [Google Scholar] [CrossRef]
  4. P. Embrechts, T. Liniger, and L. Lin. “Multivariate Hawkes processes: An application to financial data.” J. Appl. Probab. 48A (2011): 367–378. [Google Scholar] [CrossRef]
  5. V. Chavez-Demoulin, and J. McGill. “High-frequency financial data modeling using Hawkes processes.” J. Bank. Financ. 36 (2012): 3415–3426. [Google Scholar] [CrossRef]
  6. E. Bacry, S. Delattre, M. Hoffmann, and J.F. Muzy. “Modelling microstructure noise with mutually exciting point processes.” Quant. Financ. 13 (2013): 65–77. [Google Scholar] [CrossRef]
  7. Y. Aït-Sahalia, J. Cacho-Diaz, and R.J. Laeven. “Modeling financial contagion using mutually exciting jump processes.” J. Financ. Econ., 2014. [Google Scholar] [CrossRef]
  8. A. Dassios, and H. Zhao. “A dynamic contagion process.” Adv. Appl. Probab. 43 (2011): 814–846. [Google Scholar] [CrossRef]
  9. A. Dassios, and H. Zhao. “A risk model with delayed claims.” J. Appl. Probab. 50 (2013): 686–702. [Google Scholar] [CrossRef]
  10. K.C. Yuen, J. Guo, and K.W. Ng. “On ultimate ruin in a delayed-claims risk model.” J. Appl. Probab. 42 (2005): 163–174. [Google Scholar] [CrossRef]
  11. A. Dassios, and P. Embrechts. “Martingales and insurance risk.” Stoch. Models 5 (1989): 181–217. [Google Scholar] [CrossRef]
  12. A. Dassios, and J. Jang. “Pricing of catastrophe reinsurance and derivatives using the Cox process with shot noise intensity.” Financ. Stoch. 7 (2003): 73–95. [Google Scholar] [CrossRef] [Green Version]
  13. S.N. Ethier, and T.G. Kurtz. Markov Processes: Characterization and Convergence. Hoboken, NJ, USA: Wiley, 1986. [Google Scholar]
  14. N.M. Mirasol. “The output of an M/G/∞ queuing system is poisson.” Oper. Res. 11 (1963): 282–284. [Google Scholar] [CrossRef]
  15. G. Newell. “The M/G/∞ Queue.” SIAM J. Appl. Math. 14 (1966): 86–88. [Google Scholar] [CrossRef]
  16. A. Lawrance, and P.A. Lewis. “Properties of the bivariate delayed Poisson process.” J. Appl. Probab. 12 (1975): 257–268. [Google Scholar] [CrossRef]
  17. A. Dassios, and H. Zhao. “Ruin by dynamic contagion claims.” Insur. Math. Econ. 51 (2012): 93–106. [Google Scholar] [CrossRef]
