
Mixed Periodic-Classical Barrier Strategies for Lévy Risk Processes

by José-Luis Pérez 1 and Kazutoshi Yamazaki 2,*
1 Department of Probability and Statistics, Centro de Investigación en Matemáticas A.C., Calle Jalisco s/n, C.P. 36240, Guanajuato, Mexico
2 Department of Mathematics, Faculty of Engineering Science, Kansai University, 3-3-35 Yamate-cho, Suita-shi, Osaka 564-8680, Japan
* Author to whom correspondence should be addressed.
Risks 2018, 6(2), 33; https://doi.org/10.3390/risks6020033
Submission received: 25 February 2018 / Revised: 25 March 2018 / Accepted: 26 March 2018 / Published: 5 April 2018

Abstract: Given a spectrally-negative Lévy process and independent Poisson observation times, we consider a periodic barrier strategy that pushes the process down to a certain level whenever the observed value is above it. We also consider the versions with additional classical reflection above and/or below. Using excursion theory and the theory of scale functions, various fluctuation identities are computed in terms of the scale functions. Applications to de Finetti’s dividend problems are also discussed.

1. Introduction

In actuarial risk theory, the surplus of an insurance company is typically modeled by a compound Poisson process with a positive drift and negative jumps (the Cramér–Lundberg model) or, more generally, by a spectrally-negative Lévy process. Thanks to recent developments in the fluctuation theory of Lévy processes, there now exists a variety of tools available for computing quantities that are useful in insurance mathematics.
By the existing fluctuation theory, it is relatively easy to deal with (classical) reflected Lévy processes that can be written as the differences between the underlying and running supremum/infimum processes.
The known results on these processes can be conveniently and efficiently applied in modeling the surplus of a dividend-paying company: under a barrier strategy, the resulting controlled surplus process becomes the process reflected from above. Avram et al. (2007) obtained the expected net present value (NPV) of dividends paid until ruin; a sufficient condition for the optimality of a barrier strategy was given by Loeffen (2008). Similarly, capital injections are modeled by reflection from below. In the bail-out case, where ruin must be avoided, Avram et al. (2007) obtained the expected NPV of dividends and capital injections under a double barrier strategy. They also showed that it is optimal to reflect the process at zero and at some upper boundary, the resulting surplus process being a doubly-reflected Lévy process.
These seminal works give concise expressions for various fluctuation identities in terms of the scale function. In general, conciseness is still maintained when the underlying spectrally one-sided Lévy process is replaced with its reflected process. This is typically done by using the derivative or the integral of the scale function depending on whether the reflection barrier is higher or lower. For the results on a variant called refracted Lévy processes, see, e.g., Kyprianou (2010), Kyprianou et al. (2014).
In this paper, we consider a different version of reflection, which we call Parisian reflection. Motivated by the fact that, in reality, dividend/capital injection decisions can only be made at certain intervals, several recent papers consider periodic barrier strategies that reflect the process only at discrete observation times. In particular, Avram et al. (2018) consider, for a general spectrally-negative Lévy process, the case where capital injections can be made at the jump times of an independent Poisson process (the reflection barrier is lower). The present paper considers the case where dividends are paid at these Poisson observation times (the reflection barrier is upper). Other related papers in the compound Poisson case include Albrecher et al. (2011) and Avanzi et al. (2013): the former obtains several identities when the solvency is also observed periodically, whereas the latter studies the case where observation intervals are Erlang-distributed.
This work is also motivated by its applications in de Finetti’s dividend problems under Poisson observation times. In the dual (spectrally positive) model, Avanzi et al. (2014) solved the case where the jump size is hyper-exponentially distributed; Pérez and Yamazaki (2017) generalized the results to a general spectrally-positive Lévy case and also solved the bail-out version using the results in Avram et al. (2018). An extension with a combination of periodic and continuous dividend payments (with different transaction costs) was recently solved by Avanzi et al. (2016) when the underlying process is a Brownian motion with a drift. In these papers, optimal strategies are of a periodic barrier-type. On the other hand, this paper provides tools to study the spectrally-negative case. Recently, our results have been used to show the optimality of periodic barrier strategies in (Noba et al. 2017, 2018); see Remarks 2 and 8.
In this paper, we study the following four processes that are constructed from a given spectrally-negative Lévy process X and the jump times of an independent Poisson process with rate r > 0 :
  • The process with Parisian reflection from above X r : The process X r is constructed by modifying X so that it is pushed down to zero at the Poisson observation times at which it is above zero. Note that the barrier level zero can be changed to any real value by the spatial homogeneity of X. This process models the controlled surplus process under a periodic barrier dividend strategy.
  • The process with Parisian and classical reflection from above X ˜ r b : Suppose Y ¯ b is the reflected process of X with the classical upper barrier b > 0 . The process X ˜ r b is constructed in the same way as X r in (1) with the underlying process X replaced with Y ¯ b . This process models the controlled surplus process under a combination of the classical and periodic barrier dividend strategies. This is a generalization of the Brownian motion case as studied in Avanzi et al. (2016).
  • The process with Parisian reflection from above and classical reflection from below Y r a : Suppose Y ̲ a is the reflected process of X with the classical lower barrier a < 0 . The process Y r a is constructed in the same way as X r as in (1) with the underlying process X replaced with Y ̲ a . By shifting the process (by a ), it models the surplus under a periodic barrier dividend strategy with classical capital injections (so that it does not go below zero).
  • The process with Parisian and classical reflection from above and classical reflection from below Y ˜ r a , b : Suppose Y a , b is the doubly-reflected process of X with a classical lower barrier a < 0 and a classical upper barrier b > 0 . The process Y ˜ r a , b is constructed in the same way as X r in (1) with the underlying process X replaced with Y a , b . By shifting the process (by a ), it models the controlled surplus process under a combination of the classical and periodic barrier dividend strategies as in (2) with additional classical capital injections.
For these four processes, we compute various fluctuation identities that include:
(a)
the expected NPV of dividends (both corresponding to Parisian and classical reflections) with the horizon given by the first exit time from an interval and those with the infinite horizon,
(b)
the expected NPV of capital injections with the horizon given by the first exit time from an interval and those with the infinite horizon,
(c)
the two-sided (one-sided) exit identities.
In order to compute these for the four processes defined above, we first obtain the identities for the process (1) killed upon exiting [ a , b ] . Using the observation that the paths of the processes (2)–(4) are identical to those of (1) before the first exit time from [ a , b ] , the results for (2)–(4) can be obtained as corollaries, via the strong Markov property and the existing known identities for classical reflected processes.
The identities for (1) are obtained separately for the case that X has paths of bounded variation and for the case that it has paths of unbounded variation. The former is done by a relatively well-known technique via the strong Markov property combined with the existing known identities for the spectrally-negative Lévy process. The case of unbounded variation is done via excursion theory (in particular excursions away from zero as in Pardo et al. (2018)). Thanks to the simplifying formulae obtained in Avram et al. (2018) and Loeffen et al. (2014), concise expressions can be achieved.
The rest of the paper is organized as follows. In Section 2, we review the spectrally-negative Lévy process and construct more formally the four processes described above. In addition, scale functions and some existing fluctuation identities are briefly reviewed. In Section 3, we state the main results for the process (1) and, then, in Section 4, those for the processes (2)–(4). In Section 5 and Section 6, we give proofs for the main results for (1) for the case of bounded variation and unbounded variation, respectively.
Throughout the paper, for any function f of two variables, let f′(·, ·) denote the partial derivative with respect to the first argument.

2. Spectrally-Negative Lévy Processes with Parisian Reflection above

Let $X = (X(t); t \geq 0)$ be a Lévy process defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. For $x \in \mathbb{R}$, we denote by $\mathbb{P}_x$ the law of X when it starts at x and write for convenience $\mathbb{P}$ in place of $\mathbb{P}_0$. Accordingly, we shall write $\mathbb{E}_x$ and $\mathbb{E}$ for the associated expectation operators. In this paper, we shall assume throughout that X is spectrally negative, meaning here that it has no positive jumps and that it is not the negative of a subordinator. It is a well-known fact that its Laplace exponent $\psi: [0, \infty) \to \mathbb{R}$, i.e.,
$$ \mathbb{E}\big[ e^{\theta X(t)} \big] =: e^{\psi(\theta) t}, \qquad t, \theta \geq 0, $$
is given by the Lévy–Khintchine formula:
$$ \psi(\theta) := \gamma \theta + \frac{\sigma^2}{2} \theta^2 + \int_{(-\infty, 0)} \big( e^{\theta x} - 1 - \theta x \mathbf{1}_{\{x > -1\}} \big) \, \Pi(\mathrm{d}x), \qquad \theta \geq 0, $$
where $\gamma \in \mathbb{R}$, $\sigma \geq 0$ and $\Pi$ is a measure on $(-\infty, 0)$ called the Lévy measure of X that satisfies:
$$ \int_{(-\infty, 0)} (1 \wedge x^2) \, \Pi(\mathrm{d}x) < \infty. $$
It is well known that X has paths of bounded variation if and only if $\sigma = 0$ and $\int_{(-1,0)} |x| \, \Pi(\mathrm{d}x) < \infty$; in this case, X can be written as:
$$ X(t) = c t - S(t), \qquad t \geq 0, $$
where:
$$ c := \gamma - \int_{(-1,0)} x \, \Pi(\mathrm{d}x), $$
and $(S(t); t \geq 0)$ is a driftless subordinator. Note that necessarily $c > 0$, since we have ruled out the case that X has monotone paths; its Laplace exponent is given by:
$$ \psi(\theta) = c \theta + \int_{(-\infty, 0)} \big( e^{\theta x} - 1 \big) \, \Pi(\mathrm{d}x), \qquad \theta \geq 0. $$
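To make the Laplace exponent concrete, the following minimal sketch (an illustration that is not part of the paper; all parameters are hypothetical) evaluates ψ for the Cramér–Lundberg model with premium rate c, claim arrival rate λ and exponentially distributed claims, for which Π(dx) = λμe^{μx}dx on (−∞, 0) and hence ψ(θ) = cθ − λθ/(μ + θ).

```python
# Illustration (not part of the paper): the Cramer-Lundberg model with premium
# rate c, claim arrival rate lam and Exp(mu) claim sizes has Levy measure
# Pi(dx) = lam * mu * exp(mu * x) dx on (-inf, 0), which gives
#   psi(theta) = c*theta - lam*theta/(mu + theta).
# The parameters below are hypothetical.
c, lam, mu = 1.5, 1.0, 1.0

def psi(theta):
    """Laplace exponent of the bounded-variation example above."""
    return c * theta - lam * theta / (mu + theta)

# psi(0) = 0, and psi'(0+) = c - lam/mu > 0 here (the net profit condition).
print(psi(0.0), psi(1.0), c - lam / mu)
```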
Let us define the running infimum and supremum processes:
$$ \underline{X}(t) := \inf_{0 \leq t' \leq t} X(t') \quad \text{and} \quad \overline{X}(t) := \sup_{0 \leq t' \leq t} X(t'), \qquad t \geq 0. $$
Then, the processes reflected from above at b and below at a are given, respectively, by:
$$ \overline{Y}_b(t) := X(t) - L_b(t) \quad \text{and} \quad \underline{Y}_a(t) := X(t) + R_a(t), \qquad t \geq 0, $$ (2)
where:
$$ L_b(t) := \big( \overline{X}(t) - b \big) \vee 0 \quad \text{and} \quad R_a(t) := \big( a - \underline{X}(t) \big) \vee 0, \qquad t \geq 0, $$ (3)
are the cumulative amounts of reflections that push the processes downward and upward, respectively.

2.1. Lévy Processes with Parisian Reflection above

Let $\mathcal{T}_r = \{ T(i); i \geq 1 \}$ be an increasing sequence of jump times of an independent Poisson process with rate $r > 0$. We construct the Lévy process with Parisian reflection above $X_r = (X_r(t); t \geq 0)$ as follows: the process is only observed at times $\mathcal{T}_r$ and is pushed down to zero if and only if it is above zero.
More specifically, we have:
$$ X_r(t) = X(t), \qquad 0 \leq t < T_0^+(1), $$
where:
$$ T_0^+(1) := \inf\{ T(i) : X(T(i)) > 0 \}; $$ (4)
here and throughout, let $\inf \varnothing = \infty$. The process then jumps downward by $X(T_0^+(1))$ so that $X_r(T_0^+(1)) = 0$. For $T_0^+(1) \leq t < T_0^+(2) := \inf\{ T(i) > T_0^+(1) : X_r(T(i)) > 0 \}$, we have $X_r(t) = X(t) - X(T_0^+(1))$, and $X_r(T_0^+(2)) = 0$. The process can be constructed by repeating this procedure.
Suppose $L_r(t)$ is the cumulative amount of (Parisian) reflection until time $t \geq 0$. Then, we have:
$$ X_r(t) = X(t) - L_r(t), \qquad t \geq 0, $$ (5)
with
$$ L_r(t) := \sum_{T_0^+(i) \leq t} X_r(T_0^+(i)-), \qquad t \geq 0, $$
where $(T_0^+(n); n \geq 1)$ can be constructed inductively by (4) and:
$$ T_0^+(n+1) := \inf\{ T(i) > T_0^+(n) : X_r(T(i)) > 0 \}, \qquad n \geq 1. $$
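The construction above is easy to mimic on a simulated path. The following sketch (illustration only, not taken from the paper) builds X_r and the cumulative Parisian reflection L_r from a discretized path of the Cramér–Lundberg example of the previous sketch and Poisson observation times with rate r; the grid and all parameters are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal simulation sketch: X is discretized, observations arrive at rate r,
# and X_r is obtained by pushing the path down to zero at every observation
# time at which it is positive.
T, dt, r = 50.0, 1e-3, 1.0
c, lam, mu, x0 = 1.5, 1.0, 1.0, 0.5
n = int(T / dt)
t = np.arange(n) * dt

# Approximate spectrally-negative path: positive drift minus compound Poisson jumps.
jumps = rng.poisson(lam * dt, n) * rng.exponential(1.0 / mu, n)
X = x0 + c * t - np.cumsum(jumps)

# Poisson observation times with rate r on [0, T).
obs = np.cumsum(rng.exponential(1.0 / r, size=int(3 * r * T) + 10))
obs = obs[obs < T]

X_r, L_r = X.copy(), 0.0
for s in obs:
    i = int(s / dt)                  # grid index of the observation time
    d = X_r[i]
    if d > 0.0:                      # observed above zero: Parisian reflection
        L_r += d                     # cumulative amount pushed down, L_r(t)
        X_r[i:] -= d                 # the path is shifted down from this time on
print("cumulative Parisian reflection L_r(T) =", L_r)
```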

2.2. Lévy Processes with Parisian and Classical Reflection above

Fix $b > 0$. Consider an extension of the above with additional classical reflection from above at $b > 0$, which we denote by $\tilde{X}_r^b$. More specifically, we have:
$$ \tilde{X}_r^b(t) = \overline{Y}_b(t), \qquad 0 \leq t < \tilde{T}_0^+(1), $$
where $\tilde{T}_0^+(1) := \inf\{ T(i) : \overline{Y}_b(T(i)) > 0 \}$. The process then jumps downward by $\overline{Y}_b(\tilde{T}_0^+(1))$ so that $\tilde{X}_r^b(\tilde{T}_0^+(1)) = 0$. For $\tilde{T}_0^+(1) \leq t < \tilde{T}_0^+(2) := \inf\{ T(i) > \tilde{T}_0^+(1) : \tilde{X}_r^b(T(i)) > 0 \}$, it is the reflected process of $X(t) - X(\tilde{T}_0^+(1))$ (with classical reflection above at b as in (2)) and $\tilde{X}_r^b(\tilde{T}_0^+(2)) = 0$. The process can be constructed by repeating this procedure.
Suppose $\tilde{L}_{r,P}^b(t)$ and $\tilde{L}_{r,S}^b(t)$ are the cumulative amounts of Parisian reflection (with upper barrier zero) and classical reflection (with upper barrier b) until time $t \geq 0$. Then, we have:
$$ \tilde{X}_r^b(t) = X(t) - \tilde{L}_{r,P}^b(t) - \tilde{L}_{r,S}^b(t), \qquad t \geq 0. $$

2.3. Lévy Processes with Parisian Reflection above and Classical Reflection below

Fix $a < 0$. The process $Y_r^a$ with additional (classical) reflection below can be defined analogously. We have:
$$ Y_r^a(t) = \underline{Y}_a(t), \qquad 0 \leq t < \hat{T}_0^+(1), $$
where $\hat{T}_0^+(1) := \inf\{ T(i) : \underline{Y}_a(T(i)) > 0 \}$. The process then jumps downward by $\underline{Y}_a(\hat{T}_0^+(1))$ so that $Y_r^a(\hat{T}_0^+(1)) = 0$. For $\hat{T}_0^+(1) \leq t < \hat{T}_0^+(2) := \inf\{ T(i) > \hat{T}_0^+(1) : Y_r^a(T(i)) > 0 \}$, $Y_r^a(t)$ is the reflected process of $X(t) - X(\hat{T}_0^+(1))$ (with the classical reflection below at a as in (2)), and $Y_r^a(\hat{T}_0^+(2)) = 0$. The process can be constructed by repeating this procedure. It is clear that it admits a decomposition:
$$ Y_r^a(t) = X(t) - L_r^a(t) + R_r^a(t), \qquad t \geq 0, $$
where $L_r^a(t)$ and $R_r^a(t)$ are, respectively, the cumulative amounts of Parisian reflection (with upper barrier zero) and classical reflection (with lower barrier a) until time t.

2.4. Lévy Processes with Parisian and Classical Reflection above and Classical Reflection below

Fix $a < 0 < b$. Consider a version of $Y_r^a$ with additional classical reflection from above at $b > 0$. More specifically, we have:
$$ \tilde{Y}_r^{a,b}(t) = Y^{a,b}(t), \qquad 0 \leq t < \check{T}_0^+(1), $$
where $Y^{a,b}$ is the classical doubly-reflected process of X with lower barrier a and upper barrier b (see Pistorius (2003)) and:
$$ \check{T}_0^+(1) := \inf\{ T(i) : Y^{a,b}(T(i)) > 0 \}. $$
The process then jumps downward by $Y^{a,b}(\check{T}_0^+(1))$ so that $\tilde{Y}_r^{a,b}(\check{T}_0^+(1)) = 0$. For $\check{T}_0^+(1) \leq t < \check{T}_0^+(2) := \inf\{ T(i) > \check{T}_0^+(1) : \tilde{Y}_r^{a,b}(T(i)) > 0 \}$, it is the doubly-reflected process of $X(t) - X(\check{T}_0^+(1))$ (with classical reflections at a and b), and $\tilde{Y}_r^{a,b}(\check{T}_0^+(2)) = 0$. The process can be constructed by repeating this procedure.
Suppose $\tilde{L}_{r,P}^{a,b}(t)$ and $\tilde{L}_{r,S}^{a,b}(t)$ are the cumulative amounts of Parisian reflection (with upper barrier zero) and classical reflection (with upper barrier b) until time $t \geq 0$, and $\tilde{R}_r^{a,b}(t)$ is that of the classical reflection (with lower barrier a). Then, we have:
$$ \tilde{Y}_r^{a,b}(t) = X(t) - \tilde{L}_{r,P}^{a,b}(t) - \tilde{L}_{r,S}^{a,b}(t) + \tilde{R}_r^{a,b}(t), \qquad t \geq 0. $$

2.5. Review of Scale Functions

Fix $q \geq 0$. We use $W^{(q)}$ for the scale function of the spectrally-negative Lévy process X. This is the mapping from $\mathbb{R}$ to $[0, \infty)$ that takes the value zero on the negative half-line, while on the positive half-line it is a strictly increasing function defined by its Laplace transform:
$$ \int_0^\infty e^{-\theta x} W^{(q)}(x) \, \mathrm{d}x = \frac{1}{\psi(\theta) - q}, \qquad \theta > \Phi(q), $$
where ψ is as defined in (1) and:
$$ \Phi(q) := \sup\{ \lambda \geq 0 : \psi(\lambda) = q \}. $$
We also define, for $x \in \mathbb{R}$,
$$ \overline{W}^{(q)}(x) := \int_0^x W^{(q)}(y) \, \mathrm{d}y, \qquad \overline{\overline{W}}^{(q)}(x) := \int_0^x \int_0^z W^{(q)}(w) \, \mathrm{d}w \, \mathrm{d}z, $$
$$ Z^{(q)}(x) := 1 + q \overline{W}^{(q)}(x), \qquad \overline{Z}^{(q)}(x) := \int_0^x Z^{(q)}(z) \, \mathrm{d}z = x + q \overline{\overline{W}}^{(q)}(x). $$
Noting that $W^{(q)}(x) = 0$ for $-\infty < x < 0$, we have:
$$ \overline{W}^{(q)}(x) = 0, \quad \overline{\overline{W}}^{(q)}(x) = 0, \quad Z^{(q)}(x) = 1, \quad \text{and} \quad \overline{Z}^{(q)}(x) = x, \qquad x \leq 0. $$
Define also:
$$ Z^{(q)}(x, \theta) := e^{\theta x} \Big( 1 + (q - \psi(\theta)) \int_0^x e^{-\theta z} W^{(q)}(z) \, \mathrm{d}z \Big), \qquad x \in \mathbb{R}, \; \theta \geq 0, $$
and its partial derivative with respect to the first argument:
$$ Z^{(q)\prime}(x, \theta) = \theta Z^{(q)}(x, \theta) + (q - \psi(\theta)) W^{(q)}(x), \qquad x \in \mathbb{R}, \; \theta \geq 0. $$ (7)
In particular, for $x \in \mathbb{R}$, $Z^{(q)}(x, 0) = Z^{(q)}(x)$ and, for $r > 0$,
$$ Z^{(q)}(x, \Phi(q+r)) = e^{\Phi(q+r) x} \Big( 1 - r \int_0^x e^{-\Phi(q+r) z} W^{(q)}(z) \, \mathrm{d}z \Big), \qquad Z^{(q+r)}(x, \Phi(q)) = e^{\Phi(q) x} \Big( 1 + r \int_0^x e^{-\Phi(q) z} W^{(q+r)}(z) \, \mathrm{d}z \Big). $$
Remark 1.
1. If X has paths of unbounded variation or the Lévy measure is atomless, it is known that $W^{(q)}$ is $C^1(\mathbb{R} \setminus \{0\})$; see, e.g., (Chan et al. 2011, Theorem 3). In particular, if $\sigma > 0$, then $W^{(q)}$ is $C^2(\mathbb{R} \setminus \{0\})$; see, e.g., (Chan et al. 2011, Theorem 1).
2. Regarding the asymptotic behavior near zero, as in Lemmas 3.1 and 3.2 of Kuznetsov et al. (2013),
$$ W^{(q)}(0) = \begin{cases} 0 & \text{if X has paths of unbounded variation}, \\ 1/c & \text{if X has paths of bounded variation}, \end{cases} \qquad W^{(q)\prime}(0+) := \lim_{x \downarrow 0} W^{(q)\prime}(x) = \begin{cases} 2/\sigma^2 & \text{if } \sigma > 0, \\ \infty & \text{if } \sigma = 0 \text{ and } \Pi(-\infty, 0) = \infty, \\ \dfrac{q + \Pi(-\infty, 0)}{c^2} & \text{if } \sigma = 0 \text{ and } \Pi(-\infty, 0) < \infty. \end{cases} $$ (8)
On the other hand, as in Lemma 3.3 of Kuznetsov et al. (2013),
$$ e^{-\Phi(q) x} W^{(q)}(x) \to \psi'(\Phi(q))^{-1}, \qquad \text{as } x \to \infty, $$ (9)
where, in the case $\psi'(0+) = 0$, the right-hand side, when $q = 0$, is understood to be infinity.
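For the exponential-claim Cramér–Lundberg example used in the earlier sketches, ψ(θ) = q has exactly two real roots and W^{(q)} takes the familiar two-exponential form on [0, ∞), a standard fact for processes with rational Laplace exponent; this specific example is not part of the paper. The sketch below (hypothetical parameters) evaluates W^{(q)} and Z^{(q)} in closed form and checks W^{(q)}(0) = 1/c from Remark 1 together with the defining Laplace transform of W^{(q)}.

```python
import numpy as np

# Sketch (illustration only): for psi(theta) = c*theta - lam*theta/(mu + theta),
# psi(theta) = q has two real roots, and
#   W^{(q)}(x) = sum_i exp(theta_i x) / psi'(theta_i),   x >= 0.
c, lam, mu, q = 1.5, 1.0, 1.0, 0.05

def psi(theta):
    return c * theta - lam * theta / (mu + theta)

def psi_prime(theta):
    return c - lam * mu / (mu + theta) ** 2

h = c * mu - lam - q            # psi(theta) = q  <=>  c*theta^2 + h*theta - q*mu = 0
root_pos = (-h + np.sqrt(h * h + 4 * c * q * mu)) / (2 * c)   # Phi(q) > 0
root_neg = (-h - np.sqrt(h * h + 4 * c * q * mu)) / (2 * c)   # negative root

def W_q(x):
    x = np.asarray(x, dtype=float)
    val = np.exp(root_pos * x) / psi_prime(root_pos) + np.exp(root_neg * x) / psi_prime(root_neg)
    return np.where(x < 0.0, 0.0, val)

def Z_q(x):
    """Z^{(q)}(x) = 1 + q * int_0^x W^{(q)}(y) dy, in closed form for this example."""
    x = np.asarray(x, dtype=float)
    Wbar = ((np.exp(root_pos * x) - 1.0) / (root_pos * psi_prime(root_pos))
            + (np.exp(root_neg * x) - 1.0) / (root_neg * psi_prime(root_neg)))
    return np.where(x < 0.0, 1.0, 1.0 + q * Wbar)

# Consistency checks: W^{(q)}(0) = 1/c (bounded variation case of Remark 1) and
# int_0^inf e^{-s x} W^{(q)}(x) dx = 1/(psi(s) - q) at some s > Phi(q), via a
# crude Riemann sum.
print(float(W_q(0.0)), 1.0 / c)
s = root_pos + 1.0
xs = np.linspace(0.0, 200.0, 400_001)
print(float(np.sum(np.exp(-s * xs) * W_q(xs)) * (xs[1] - xs[0])), 1.0 / (psi(s) - q))
```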
Below, we list the fluctuation identities that will be used later in the paper.

2.6. Fluctuation Identities for X

Let:
τ a : = inf t 0 : X ( t ) < a and τ b + : = inf t 0 : X ( t ) > b , a , b R .
Then, for b > a and x b ,
E x e q τ b + ; τ b + < τ a = W ( q ) ( x a ) W ( q ) ( b a ) , E x e q τ a θ [ a X ( τ a ) ] ; τ b + > τ a = Z ( q ) ( x a , θ ) Z ( q ) ( b a , θ ) W ( q ) ( x a ) W ( q ) ( b a ) , θ 0 .
By taking b in the latter, as in (Albrecher et al. 2016, (7)) (see also the identity (3.19) in Avram et al. (2007)),
E x e q τ a θ [ a X ( τ a ) ] ; τ a < = Z ( q ) ( x a , θ ) W ( q ) ( x a ) ψ ( θ ) q θ Φ ( q ) ,
where, for the case θ = Φ ( q ) , it is understood as the limiting case. In addition, it is known that a spectrally-negative Lévy process creeps downwards if and only if σ > 0 ; by Theorem 2.6 (ii) of Kuznetsov et al. (2013),
E x e q τ a ; X ( τ a ) = a , τ a < = σ 2 2 W ( q ) ( x a ) Φ ( q ) W ( q ) ( x a ) , x > a ,
where we recall that W ( q ) is differentiable when σ > 0 as in Remark 1 (1). By this, the strong Markov property and (10), we have for a < b and x b ,
E x ( e q τ a ; X ( τ a ) = a , τ a < τ b + )    = E x ( e q τ a ; X ( τ a ) = a , τ a < ) E x ( e q τ b + ; τ b + < τ a ) E b ( e q τ a ; X ( τ a ) = a , τ a < )    = C b a ( q ) ( x a )
where:
C β ( q ) ( y ) : = σ 2 2 W ( q ) ( y ) W ( q ) ( y ) W ( q ) ( β ) W ( q ) ( β ) , y R { 0 } , β > 0 .

2.7. Fluctuation Identities for Y ¯ b ( t )

Fix $a < b$. Define the first downcrossing time of $\overline{Y}_b$ of (2):
$$ \tilde{\tau}_{a,b}^- := \inf\{ t > 0 : \overline{Y}_b(t) < a \}. $$ (13)
The Laplace transform of $\tilde{\tau}_{a,b}^-$ is given, as in Proposition 2 (ii) of Pistorius (2004), by:
$$ \mathbb{E}_x \big( e^{-q \tilde{\tau}_{a,b}^-} \big) = Z^{(q)}(x-a) - q W^{(q)}(b-a) \frac{W^{(q)}(x-a)}{W^{(q)\prime}((b-a)+)}, \qquad q \geq 0, \; x \leq b. $$ (14)
As in Proposition 1 of Avram et al. (2007), the discounted cumulative amount of reflection from above as in (3) is:
$$ \mathbb{E}_x \Big( \int_{[0, \tilde{\tau}_{a,b}^-]} e^{-q t} \, \mathrm{d}L_b(t) \Big) = \frac{W^{(q)}(x-a)}{W^{(q)\prime}((b-a)+)}, \qquad q \geq 0, \; x \leq b. $$ (15)

2.8. Fluctuation Identities for Y ̲ a ( t )

Fix $a < b$. Define the first upcrossing time of $\underline{Y}_a$ of (2):
$$ \eta_{a,b}^+ := \inf\{ t > 0 : \underline{Y}_a(t) > b \}. $$ (16)
First, as on page 228 of Kyprianou (2006), its Laplace transform is concisely given by:
$$ \mathbb{E}_x \big( e^{-q \eta_{a,b}^+} \big) = \frac{Z^{(q)}(x-a)}{Z^{(q)}(b-a)}, \qquad q \geq 0, \; x \leq b. $$ (17)
Second, as in the proof of Theorem 1 of Avram et al. (2007), the discounted cumulative amount of reflection from below as in (3) is, given $\psi'(0+) > -\infty$,
$$ \mathbb{E}_x \Big( \int_{[0, \eta_{a,b}^+]} e^{-q t} \, \mathrm{d}R_a(t) \Big) = -l^{(q)}(x-a) + \frac{Z^{(q)}(x-a)}{Z^{(q)}(b-a)} \, l^{(q)}(b-a), \qquad q \geq 0, \; x \leq b, $$ (18)
where:
$$ l^{(q)}(x) := \overline{Z}^{(q)}(x) - \psi'(0+) \overline{W}^{(q)}(x), \qquad q \geq 0, \; x \in \mathbb{R}. $$

2.9. Some More Notations

For the rest of the paper, we fix r > 0 and use e r for the first observation time, or an independent exponential random variable with parameter r.
Let, for q 0 and x R ,
Z ˜ ( q , r ) ( x , θ ) : = r Z ( q ) ( x , θ ) + ( q ψ ( θ ) ) Z ( q ) ( x , Φ ( q + r ) ) Φ ( q + r ) θ , θ 0 , Z ˜ ( q , r ) ( x ) : = Z ˜ ( q , r ) ( x , 0 ) = r Z ( q ) ( x ) + q Z ( q ) ( x , Φ ( q + r ) ) Φ ( q + r ) ,
where the case θ = Φ ( q + r ) is understood as the limiting case.
We define, for any measurable function $f: \mathbb{R} \to \mathbb{R}$,
$$ M_a^{(q,r)} f(x) := f(x-a) + r \int_0^x W^{(q+r)}(x-y) f(y-a) \, \mathrm{d}y, \qquad x \in \mathbb{R}, \; a < 0. $$
In particular, we let, for $a < 0$, $q \geq 0$ and $x \in \mathbb{R}$,
$$ W_a^{(q,r)}(x) := M_a^{(q,r)} W^{(q)}(x), \qquad \overline{W}_a^{(q,r)}(x) := M_a^{(q,r)} \overline{W}^{(q)}(x), \qquad Z_a^{(q,r)}(x, \theta) := M_a^{(q,r)} Z^{(q)}(\cdot, \theta)(x), \quad \theta \geq 0, \qquad \overline{Z}_a^{(q,r)}(x) := M_a^{(q,r)} \overline{Z}^{(q)}(x), $$
with $Z_a^{(q,r)}(\cdot) := Z_a^{(q,r)}(\cdot, 0)$.
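Once W^{(q)} and W^{(q+r)} are available, the operator M_a^{(q,r)} can be evaluated by straightforward quadrature. The sketch below (illustration only, again for the exponential-claim example with hypothetical parameters) computes W_a^{(q,r)} = M_a^{(q,r)} W^{(q)} and checks that W_a^{(q,r)}(0) = W^{(q)}(−a), since the integral term vanishes at x = 0.

```python
import numpy as np

# Quadrature sketch for M_a^{(q,r)} f(x) = f(x-a) + r * int_0^x W^{(q+r)}(x-y) f(y-a) dy.
c, lam, mu = 1.5, 1.0, 1.0
q, r, a = 0.05, 1.0, -1.0

def scale_function(qv):
    """Two-exponential qv-scale function of the example (rational psi)."""
    h = c * mu - lam - qv
    rts = np.array([(-h + np.sqrt(h * h + 4 * c * qv * mu)) / (2 * c),
                    (-h - np.sqrt(h * h + 4 * c * qv * mu)) / (2 * c)])
    wts = 1.0 / (c - lam * mu / (mu + rts) ** 2)      # 1 / psi'(root)
    return lambda x: np.where(np.asarray(x) < 0, 0.0,
                              sum(w * np.exp(t * np.asarray(x, dtype=float)) for t, w in zip(rts, wts)))

W_q, W_qr = scale_function(q), scale_function(q + r)

def M_a(f, x, n=4001):
    """Trapezoid-rule evaluation of M_a^{(q,r)} f at a single point x."""
    if x <= 0:
        return float(f(x - a))
    y, dy = np.linspace(0.0, x, n, retstep=True)
    v = W_qr(x - y) * f(y - a)
    return float(f(x - a)) + r * float((v.sum() - 0.5 * (v[0] + v[-1])) * dy)

print(M_a(W_q, 0.0), float(W_q(-a)))    # both equal W^{(q)}(-a)
print(M_a(W_q, 1.5))                    # a generic value of W_a^{(q,r)}(x)
```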
Thanks to these functionals, the following expectations admit concise expressions. By Lemma 2.1 in Loeffen et al. (2014) and Theorem 6.1 in Avram et al. (2018), for all $q \geq 0$, $a < 0 < b$ and $x \leq b$,
$$ \mathbb{E}_x \big( e^{-(q+r) \tau_0^-} W^{(q)}(X(\tau_0^-) - a) ; \tau_0^- < \tau_b^+ \big) = W_a^{(q,r)}(x) - \frac{W^{(q+r)}(x)}{W^{(q+r)}(b)} W_a^{(q,r)}(b), \qquad (21) $$
$$ \mathbb{E}_x \big( e^{-(q+r) \tilde{\tau}_{0,b}^-} W^{(q)}(\overline{Y}_b(\tilde{\tau}_{0,b}^-) - a) \big) = W_a^{(q,r)}(x) - \frac{W^{(q+r)}(x)}{W^{(q+r)\prime}(b+)} (W_a^{(q,r)})'(b+). \qquad (22) $$
In addition, we give a slight generalization of Lemma 2.1 of Loeffen et al. (2014) and Theorem 6.1 in Avram et al. (2018). The proofs are given in Appendix A.1.
Lemma 1.
For $q \geq 0$, $\theta \geq 0$, $a < 0 < b$, and $x \leq b$,
$$ \mathbb{E}_x \big( e^{-(q+r) \tau_0^-} Z^{(q)}(X(\tau_0^-) - a, \theta) ; \tau_0^- < \tau_b^+ \big) = Z_a^{(q,r)}(x, \theta) - \frac{W^{(q+r)}(x)}{W^{(q+r)}(b)} Z_a^{(q,r)}(b, \theta), \qquad (23) $$
$$ \mathbb{E}_x \big( e^{-(q+r) \tilde{\tau}_{0,b}^-} Z^{(q)}(\overline{Y}_b(\tilde{\tau}_{0,b}^-) - a, \theta) \big) = Z_a^{(q,r)}(x, \theta) - \frac{W^{(q+r)}(x)}{W^{(q+r)\prime}(b+)} (Z_a^{(q,r)})'(b, \theta). \qquad (24) $$

3. Main Results for X r

In this section, we obtain the fluctuation identities for the process $X_r$ as constructed in Section 2.1. The main theorems are obtained for the case killed upon exiting an interval [a, b] for a < 0 < b. As their corollaries, we also obtain the limiting cases as a → −∞ and b → ∞. The proofs of the theorems are given in Section 5 and Section 6 for the bounded and unbounded variation cases, respectively. The proofs of the corollaries are given in Appendix B.
Define the first down-/up-crossing times for $X_r$:
$$ \tau_a^-(r) := \inf\{ t > 0 : X_r(t) < a \} \quad \text{and} \quad \tau_b^+(r) := \inf\{ t > 0 : X_r(t) > b \}, \qquad a, b \in \mathbb{R}. $$
Define also, for $q \geq 0$, $a < 0$ and $x \in \mathbb{R}$,
$$ I_a^{(q,r)}(x) := \frac{W_a^{(q,r)}(x)}{W^{(q)}(-a)} - r \overline{W}^{(q+r)}(x), \qquad J_a^{(q,r)}(x, \theta) := Z_a^{(q,r)}(x, \theta) - r Z^{(q)}(-a, \theta) \overline{W}^{(q+r)}(x), \qquad J_a^{(q,r)}(x) := J_a^{(q,r)}(x, 0) = Z_a^{(q,r)}(x) - r Z^{(q)}(-a) \overline{W}^{(q+r)}(x). $$ (25)
Note in particular that:
$$ I_a^{(q,r)}(0) = 1 \quad \text{and} \quad J_a^{(q,r)}(0, \theta) = Z^{(q)}(-a, \theta), $$ (26)
and that:
$$ J_a^{(0,r)}(x) = 1 \quad \text{and} \quad (J_a^{(0,r)})'(x) = 0, \qquad x \in \mathbb{R}. $$ (27)
We shall first obtain the expected NPV of dividends (see the decomposition (5)) killed upon exiting [ a , b ] .
Theorem 1 
(Periodic control of dividends). For $q \geq 0$, $a < 0 < b$ and $x \leq b$, we have:
$$ f(x, a, b) := \mathbb{E}_x \Big( \int_0^{\tau_b^+(r) \wedge \tau_a^-(r)} e^{-q t} \, \mathrm{d}L_r(t) \Big) = r \Big( \overline{\overline{W}}^{(q+r)}(b) \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} - \overline{\overline{W}}^{(q+r)}(x) \Big). $$
By taking a → −∞ and b → ∞ in Theorem 1, we have the following.
Corollary 1.
(i) For q 0 , b > 0 and x b , we have:
E x 0 τ b + ( r ) e q t d L r ( t ) = r W ¯ ¯ ( q + r ) ( b ) I ( q , r ) ( x ) I ( q , r ) ( b ) W ¯ ¯ ( q + r ) ( x ) ,
where:
I ( q , r ) ( x ) : = lim a I a ( q , r ) ( x ) = Z ( q + r ) ( x , Φ ( q ) ) r W ¯ ( q + r ) ( x ) , q 0 , x R .
(ii) For q 0 , a < 0 and x R , we have:
E x 0 τ a ( r ) e q t d L r ( t ) = r I a ( q , r ) ( x ) Φ ( q + r ) W ( q ) ( a ) Z ( q ) ( a , Φ ( q + r ) ) W ¯ ¯ ( q + r ) ( x ) ,
where, by (7),
Z ( q ) ( x , Φ ( q + r ) ) = Φ ( q + r ) Z ( q ) ( x , Φ ( q + r ) ) r W ( q ) ( x ) , x R .
(iii) Suppose q > 0 or q = 0 with ψ ( 0 + ) < 0 . Then, for x R ,
E x 0 e q t d L r ( t ) = Φ ( q + r ) Φ ( q ) Φ ( q + r ) Φ ( q ) I ( q , r ) ( x ) r W ¯ ¯ ( q + r ) ( x ) .
Otherwise, it is infinity for x R .
Remark 2.
Recently, in Noba et al. (2018), Corollary 1 (ii) was used to show the optimality of a periodic barrier strategy in de Finetti’s dividend problem under the assumption that the Lévy measure has a completely monotone density. Thanks to the semi-analytic expression in terms of the scale function, the selection of a candidate optimal barrier, as well as the verification of optimality are conducted efficiently, without focusing on a particular class of Lévy processes.
We shall now study the two-sided exit identities. The main results are given in Theorems 2 and 3, and their corollaries are obtained by taking limits. We first obtain the Laplace transform of the upcrossing time τ b + ( r ) on the event { τ a ( r ) > τ b + ( r ) } .
Theorem 2 (Upcrossing time).
For $q \geq 0$, $a < 0 < b$, and $x \leq b$, we have:
$$ g(x, a, b) := \mathbb{E}_x \big( e^{-q \tau_b^+(r)} ; \tau_a^-(r) > \tau_b^+(r) \big) = \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}. $$
The following technical remark will be helpful for understanding the Laplace transform of the upcrossing time τ b + ( r ) obtained, as a corollary to Theorem 2, below.
Remark 3.
Fix $b > 0$ and $x < b$. By Lemma 3 below, the quantity
$$ I_a^{(q,r)}(x) - \frac{W^{(q+r)}(x)}{W^{(q+r)}(b)} I_a^{(q,r)}(b) $$
remains bounded in r. Because $I_a^{(q,r)}(b) \to \infty$ as $r \to \infty$ and, by (9),
$$ \lim_{r \to \infty} \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} = \lim_{r \to \infty} \frac{W^{(q+r)}(x)}{W^{(q+r)}(b)} = 0. $$
Hence, we see that $g(x, a, b)$ vanishes in the limit as $r \to \infty$.
By taking a → −∞ in Theorem 2, we have the following.
Corollary 2.
(i) For q 0 , b > 0 and x b , we have E x ( e q τ b + ( r ) ) = I ( q , r ) ( x ) / I ( q , r ) ( b ) where I ( q , r ) is given as in (28). (ii) In particular, when ψ ( 0 + ) 0 , then τ b + ( r ) < P x -a.s. for any x R .
For $\theta \geq 0$, $q \geq 0$, $a < 0$ and $x \in \mathbb{R}$, let:
$$ \hat{J}_a^{(q,r)}(x, \theta) := Z_a^{(q,r)}(x, \theta) - \frac{Z^{(q)}(-a, \theta)}{W^{(q)}(-a)} W_a^{(q,r)}(x) = M_a^{(q,r)} \Big( Z^{(q)}(\cdot, \theta) - \frac{Z^{(q)}(-a, \theta)}{W^{(q)}(-a)} W^{(q)} \Big)(x), $$
which satisfies:
$$ \hat{J}_a^{(q,r)}(x, \theta) = J_a^{(q,r)}(x, \theta) - Z^{(q)}(-a, \theta) I_a^{(q,r)}(x), $$ (29)
and, by (26),
$$ \hat{J}_a^{(q,r)}(0, \theta) = 0. $$ (30)
Using these, we express the Laplace transform of the downcrossing time τ a ( r ) on the event { τ a ( r ) < τ b + ( r ) } .
Theorem 3 (Downcrossing time and overshoot).
For $q \geq 0$, $a < 0 < b$, $\theta \geq 0$, and $x \leq b$, we have:
$$ h(x, a, b, \theta) := \mathbb{E}_x \big( e^{-q \tau_a^-(r) - \theta [a - X_r(\tau_a^-(r))]} ; \tau_a^-(r) < \tau_b^+(r) \big) = \hat{J}_a^{(q,r)}(x, \theta) - \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} \hat{J}_a^{(q,r)}(b, \theta) = J_a^{(q,r)}(x, \theta) - \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} J_a^{(q,r)}(b, \theta). $$ (31)
By taking b → ∞ in Theorem 3, we obtain the following.
Corollary 3.
(i) For q 0 , a < 0 , θ 0 and x R ,
E x e q τ a ( r ) θ [ a X r ( τ a ( r ) ) ] = J a ( q , r ) ( x , θ ) I a ( q , r ) ( x ) Z ˜ ( q , r ) ( a , θ ) r Z ( q ) ( a , θ ) Φ ( q + r ) W ( q ) ( a ) Φ ( q + r ) Z ( q ) ( a , Φ ( q + r ) ) ,
where in particular:
E x e q τ a ( r ) = J a ( q , r ) ( x ) q I a ( q , r ) ( x ) Z ( q ) ( a , Φ ( q + r ) ) W ( q ) ( a ) Z ( q ) ( a , Φ ( q + r ) ) .
(ii) For $a < 0$ and $x \in \mathbb{R}$, $\tau_a^-(r) < \infty$, $\mathbb{P}_x$-a.s.
By taking θ → ∞ in Theorem 3 and Corollary 3, we have the following identities related to the event that the process goes continuously below a level.
Corollary 4 (Creeping).
(i) For q 0 , a < 0 < b and x b , we have:
w ( x , a , b ) : = E x e q τ a ( r ) ; X ( τ a ( r ) ) = a , τ a ( r ) < τ b + ( r ) = C a ( q , r ) ( x ) I a ( q , r ) ( x ) I a ( q , r ) ( b ) C a ( q , r ) ( b )
where (recall that W ( q ) is differentiable when σ > 0 as in Remark 1 (1)):
C a ( q , r ) ( y ) : = σ 2 2 M a ( q , r ) W ( q ) ( y ) r W ¯ ( q + r ) ( y ) W ( q ) ( a ) , y R .
(ii) For q 0 , a < 0 and x R , we have:
E x e q τ a ( r ) ; X ( τ a ( r ) ) = a = C a ( q , r ) ( x ) I a ( q , r ) ( x ) W ( q ) ( a ) σ 2 2 Φ ( q + r ) r W ( q ) ( a ) Z ( q ) ( a , Φ ( q + r ) ) .
In Theorem 3, by taking the derivative with respect to θ and letting θ ↓ 0, we obtain the following. This will later be used to compute the identities for capital injection in Proposition 5.
Corollary 5.
Suppose ψ ( 0 + ) > . For q 0 , a < 0 < b and x b ; we have:
j ( x , a , b ) : = E x e q τ a ( r ) [ a X r ( τ a ( r ) ) ] ; τ a ( r ) < τ b + ( r ) = I a ( q , r ) ( x ) I a ( q , r ) ( b ) K a ( q , r ) ( b ) K a ( q , r ) ( x )
with
K a ( q , r ) ( y ) : = l a ( q , r ) ( y ) r l ( q ) ( a ) W ¯ ( q + r ) ( y ) , y R ,
where l a ( q , r ) ( y ) : = M a ( q , r ) l ( q ) ( y ) , y R .
By taking b → ∞ in Corollary 5, we have the following.
Corollary 6.
Suppose ψ ( 0 + ) > . For q 0 , a < 0 and x R , we have:
E x e q τ a ( r ) [ a X r ( τ a ( r ) ) ] = I a ( q , r ) ( x ) W ( q ) ( a ) Z ( q ) ( a , Φ ( q + r ) ) Z ˜ ( q , r ) ( a ) ψ ( 0 + ) Z ( q ) ( a , Φ ( q + r ) ) K a ( q , r ) ( x ) .
The following remark states that as the rate r of the Poisson process associated with the Parisian reflection goes to zero, we recover classical fluctuation identities.
Remark 4.
Note that, for q 0 , a < 0 and x R ,
lim r 0 I a ( q , r ) ( x ) = W ( q ) ( x a ) W ( q ) ( a ) a n d lim r 0 J a ( q , r ) ( x , θ ) = Z ( q ) ( x a , θ ) .
Hence, as r 0 , we have the following.
1. 
By Theorem 1, f ( x , a , b ) vanishes in the limit.
2. 
By Theorems 2 and 3, g ( x , a , b ) and h ( x , a , b , θ ) converge to the right-hand sides of (10).
3. 
By Corollary 4 (i), w ( x , a , b ) converges to the right-hand side of (12).
The convergence for the limiting cases a = −∞ and/or b = ∞ holds in the same way.

4. Main Results for the Cases with Additional Classical Reflections

In this section, we shall extend the results in Section 3 and obtain similar identities for the processes X ˜ r b , Y r a and Y ˜ r a , b as defined in Section 2.2, Section 2.3 and Section 2.4, respectively. Again, the proofs for the corollaries are deferred to the Appendix B.

4.1. Results for X ˜ r b

We shall first study the process $\tilde{X}_r^b$ as constructed in Section 2.2. Let:
$$ \tilde{\tau}_{a,b}^-(r) := \inf\{ t > 0 : \tilde{X}_r^b(t) < a \}, \qquad a < 0 < b, $$
and let $(I_a^{(q,r)})'(x+)$ be the right-hand derivative of (25) with respect to x, given by:
$$ (I_a^{(q,r)})'(x+) := \frac{(W_a^{(q,r)})'(x+)}{W^{(q)}(-a)} - r W^{(q+r)}(x), \qquad q \geq 0, \; a < 0, \; x \in \mathbb{R}. $$
Recall the classical reflected process Y ¯ b and τ ˜ 0 , b as in (13). We shall first compute the following.
Lemma 2.
For q 0 and a < 0 < b ,
E b e q e r ; e r < τ ˜ 0 , b + E b e ( q + r ) τ ˜ 0 , b W ( q ) ( Y ¯ b ( τ ˜ 0 , b ) a ) W ( q ) ( a ) = I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( I a ( q , r ) ) ( b + ) .
Proof. 
We first note that, by (14),
E b e q e r ; e r < τ ˜ 0 , b = r r + q E b 1 e ( q + r ) τ ˜ 0 , b = r ( W ( q + r ) ( b ) ) 2 W ( q + r ) ( b + ) W ¯ ( q + r ) ( b ) .
By summing this and (22), the result follows. ☐
In order to obtain the results for X ˜ r b , we shall use the following observation and the strong Markov property.
Remark 5.
(i) For 0 t < τ ˜ 0 , b e r , X ˜ r b ( t ) = Y ¯ b ( t ) and L ˜ r , P b ( t ) = 0 . (ii) For 0 t τ 0 + , X ˜ r b ( t ) = X ( t ) and L ˜ r , P b ( t ) = L ˜ r , S b ( t ) = 0 . (iii) For 0 t τ b + ( r ) , X ˜ r b ( t ) = X r ( t ) .
We shall first compute the expected NPV of the periodic part of dividends using Lemma 2 and Remark 5. It attains a concise expression in terms of the function I a ( q , r ) and its derivative.
Proposition 1 (Periodic part of dividends).
For q 0 , a < 0 < b and x b , we have:
f ˜ P ( x , a , b ) : = E x 0 τ ˜ a , b ( r ) e q t d L ˜ r , P b ( t ) = r W ¯ ( q + r ) ( b ) I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) W ¯ ¯ ( q + r ) ( x ) .
Proof. 
By Remark 5 (i) and the strong Markov property, we can write:
f ˜ P ( b , a , b ) = E b e q τ ˜ 0 , b f ˜ P ( Y ¯ b ( τ ˜ 0 , b ) , a , b ) ; τ ˜ 0 , b < e r + E b e q e r [ Y ¯ b ( e r ) + f ˜ P ( 0 , a , b ) ] ; e r < τ ˜ 0 , b .
For x 0 , by Remark 5 (ii) and the strong Markov property, f ˜ P ( x , a , b ) = E x ( e q τ 0 + ; τ 0 + < τ a ) f ˜ P ( 0 , a , b ) . This together with (10) gives:
E b e q τ ˜ 0 , b f ˜ P ( Y ¯ b ( τ ˜ 0 , b ) , a , b ) ; τ ˜ 0 , b < e r = f ˜ P ( 0 , a , b ) W ( q ) ( a ) E b e q τ ˜ 0 , b W ( q ) ( Y ¯ b ( τ ˜ 0 , b ) a ) ; τ ˜ 0 , b < e r .
On the other hand, by the resolvent given in Theorem 1 (ii) of Pistorius (2004),
E b e q e r Y ¯ b ( e r ) ; e r < τ ˜ 0 , b = r E b 0 τ ˜ 0 , b e ( q + r ) s Y ¯ b ( s ) d s    = r 0 b ( b y ) W ( q + r ) ( b ) W ( q + r ) ( y ) W ( q + r ) ( b + ) W ( q + r ) ( y ) d y + b r W ( q + r ) ( b ) W ( q + r ) ( 0 ) W ( q + r ) ( b + )    = r W ( q + r ) ( b ) W ( q + r ) ( b + ) W ¯ ( q + r ) ( b ) W ¯ ¯ ( q + r ) ( b ) .
Substituting (34) and (35) in (33) and applying Lemma 2,
f ˜ P ( b , a , b ) = I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( I a ( q , r ) ) ( b + ) f ˜ P ( 0 , a , b ) + r W ( q + r ) ( b ) W ( q + r ) ( b + ) W ¯ ( q + r ) ( b ) W ¯ ¯ ( q + r ) ( b ) .
Now, by Remark 5 (iii), the strong Markov property and Theorems 1 and 2, for all x b ,
f ˜ P ( x , a , b ) = f ( x , a , b ) + g ( x , a , b ) f ˜ P ( b , a , b ) = r W ¯ ¯ ( q + r ) ( x ) + I a ( q , r ) ( x ) I a ( q , r ) ( b ) r W ¯ ¯ ( q + r ) ( b ) + f ˜ P ( b , a , b ) = r W ¯ ¯ ( q + r ) ( x ) + I a ( q , r ) ( x ) I a ( q , r ) ( b ) [ I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( I a ( q , r ) ) ( b + ) f ˜ P ( 0 , a , b ) + r W ( q + r ) ( b ) W ( q + r ) ( b + ) W ¯ ( q + r ) ( b ) ] .
Setting x = 0 and solving for f ˜ P ( 0 , a , b ) (using (26)), we have f ˜ P ( 0 , a , b ) = r W ¯ ( q + r ) ( b ) / ( I a ( q , r ) ) ( b + ) . Substituting this back in (36), the proof is complete. ☐
By taking a → −∞ in Proposition 1, we have the following.
Corollary 7.
(i) For q > 0 or q = 0 with ψ ( 0 + ) < 0 , we have, for b > 0 and x b ,
E x 0 e q t d L ˜ r , P b ( t ) = r W ¯ ( q + r ) ( b ) I ( q , r ) ( x ) ( I ( q , r ) ) ( b ) W ¯ ¯ ( q + r ) ( x ) ,
where ( I ( q , r ) ) is the derivative of I ( q , r ) of (28) given by:
( I ( q , r ) ) ( x ) = Z ( q + r ) ( x , Φ ( q ) ) r W ( q + r ) ( x ) = Φ ( q ) Z ( q + r ) ( x , Φ ( q ) ) , q 0 , x R .
(ii) If q = 0 with ψ ( 0 + ) 0 , it becomes infinity.
Now, consider the singular part of dividends. We see that the related identities can again be written in terms of I a ( q , r ) and its derivative.
Proposition 2 (Singular part of dividends).
For q 0 , a < 0 < b and x b , we have:
f ˜ S ( x , a , b ) : = E x [ 0 , τ ˜ a , b ( r ) ] e q t d L ˜ r , S b ( t ) = I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) .
Proof. 
By Remark 5 (i) and the strong Markov property,
f ˜ S ( b , a , b ) = E b [ 0 , τ ˜ 0 , b e r ] e q t d L b ( t )                                                 + E b e q e r ; e r < τ ˜ 0 , b f ˜ S ( 0 , a , b ) + E b e q τ ˜ 0 , b f ˜ S ( Y ¯ b ( τ ˜ 0 , b ) , a , b ) ; τ ˜ 0 , b < e r .
By (15) and the computation similar to (34) (thanks to Remark 5 (ii)),
f ˜ S ( b , a , b ) = W ( q + r ) ( b ) W ( q + r ) ( b + ) + f ˜ S ( 0 , a , b ) I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( I a ( q , r ) ) ( b + ) .
For x b , because Remark 5 (iii) and the strong Markov property give f ˜ S ( x , a , b ) = g ( x , a , b ) f ˜ S ( b , a , b ) , Theorem 2 and (37) give:
f ˜ S ( x , a , b ) = I a ( q , r ) ( x ) I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) + f ˜ S ( 0 , a , b ) I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( I a ( q , r ) ) ( b + ) .
Setting x = 0 and solving for f ˜ S ( 0 , a , b ) (using (26)), we have f ˜ S ( 0 , a , b ) = [ ( I a ( q , r ) ) ( b + ) ] 1 . Substituting this in (38), we have the result. ☐
By taking a → −∞ in Proposition 2, we have the following.
Corollary 8.
Fix b > 0 and x b . (i) For q > 0 or q = 0 with ψ ( 0 + ) < 0 , we have E x ( 0 e q t d L ˜ r , S b ( t ) ) = I ( q , r ) ( x ) / ( I ( q , r ) ) ( b + ) . (ii) If q = 0 with ψ ( 0 + ) 0 , it becomes infinity.
Finally, we obtain the (joint) identities related to τ ˜ a , b ( r ) and the position of the process at this stopping time. We first compute their Laplace transform.
Proposition 3 (Downcrossing time and overshoot).
Fix a < 0 < b and x b . (i) For q 0 and θ 0 ,
h ˜ ( x , a , b , θ ) : = E x e q τ ˜ a , b ( r ) θ [ a X ˜ r ( τ ˜ a , b ( r ) ) ] = J a ( q , r ) ( x , θ ) ( J a ( q , r ) ) ( b , θ ) I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) .
(ii) We have τ ˜ a , b ( r ) < , P x -a.s.
Proof. 
(i) By Remark 5 (i) and the strong Markov property, we can write:
h ˜ ( b , a , b , θ ) = E b e q τ ˜ 0 , b h ˜ ( Y ¯ b ( τ ˜ 0 , b ) , a , b , θ ) ; τ ˜ 0 , b < e r + E b e q e r ; e r < τ ˜ 0 , b h ˜ ( 0 , a , b , θ ) .
For x 0 , by Remark 5 (ii), the strong Markov property and (10),
h ˜ ( x , a , b , θ ) = E x e q τ a θ [ a X ( τ a ) ] ; τ 0 + > τ a + E x e q τ 0 + ; τ 0 + < τ a h ˜ ( 0 , a , b , θ ) = Z ( q ) ( x a , θ ) Z ( q ) ( a , θ ) W ( q ) ( x a ) W ( q ) ( a ) + h ˜ ( 0 , a , b , θ ) W ( q ) ( x a ) W ( q ) ( a ) ,
and hence, together with (22) and Lemmas 1 and 2,
h ˜ ( b , a , b , θ ) = E b e ( q + r ) τ ˜ 0 , b Z ( q ) ( Y ¯ b ( τ ˜ 0 , b ) a , θ ) Z ( q ) ( a , θ ) W ( q ) ( Y ¯ b ( τ ˜ 0 , b ) a ) W ( q ) ( a ) + h ˜ ( 0 , a , b , θ ) E b e q e r ; e r < τ ˜ 0 , b + 1 W ( q ) ( a ) E b e ( q + r ) τ ˜ 0 , b W ( q ) ( Y ¯ b ( τ ˜ 0 , b ) a ) = Z a ( q , r ) ( b , θ ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( Z a ( q , r ) ) ( b , θ ) Z ( q ) ( a , θ ) W ( q ) ( a ) W a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( W a ( q , r ) ) ( b ) + h ˜ ( 0 , a , b , θ ) I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( I a ( q , r ) ) ( b + ) = J ^ a ( q , r ) ( b , θ ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( J ^ a ( q , r ) ) ( b , θ ) + h ˜ ( 0 , a , b , θ ) I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( I a ( q , r ) ) ( b + ) .
On the other hand, by Remark 5 (iii), the strong Markov property and Theorems 2 and 3, we have that, for all x b ,
h ˜ ( x , a , b , θ ) = h ( x , a , b , θ ) + g ( x , a , b ) h ˜ ( b , a , b , θ )   = J ^ a ( q , r ) ( x , θ ) I a ( q , r ) ( x ) I a ( q , r ) ( b ) J ^ a ( q , r ) ( b , θ ) + I a ( q , r ) ( x ) I a ( q , r ) ( b ) [ J ^ a ( q , r ) ( b , θ ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( J ^ a ( q , r ) ) ( b , θ )   + h ˜ ( 0 , a , b , θ ) I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( I a ( q , r ) ) ( b + ) ]   = J ^ a ( q , r ) ( x , θ ) + I a ( q , r ) ( x ) I a ( q , r ) ( b ) [ W ( q + r ) ( b ) W ( q + r ) ( b + ) ( J ^ a ( q , r ) ) ( b , θ )                                                          + h ˜ ( 0 , a , b , θ ) I a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( b + ) ( I a ( q , r ) ) ( b + ) ] .
Setting x = 0 and solving for h ˜ ( 0 , a , b , θ ) (via (26) and (30)), h ˜ ( 0 , a , b , θ ) = ( J ^ a ( q , r ) ) ( b , θ ) / ( I a ( q , r ) ) ( b + ) . Substituting this back in (40), we have:
h ˜ ( x , a , b , θ ) = J ^ a ( q , r ) ( x , θ ) ( J ^ a ( q , r ) ) ( b , θ ) I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) .
Using (29), it equals the right-hand side of (39).
(ii) In view of (i), it is immediate by (27) by setting q = θ = 0 . ☐
Similar to Corollary 5, we obtain the following by Proposition 3.
Corollary 9.
Suppose ψ ( 0 + ) > . For q 0 , a < 0 < b and x b ; we have that:
j ˜ ( x , a , b ) : = E x e q τ ˜ a , b ( r ) [ a X ˜ r b ( τ ˜ a , b ( r ) ) ] = I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) ( K a ( q , r ) ) ( b ) K a ( q , r ) ( x ) .
Similar to Remark 4, in the following result, we see how we can recover classical fluctuation identities by taking the rate r, related to the Parisian reflection, to zero.
Remark 6.
Recall (32). As r 0 , we have the following.
1. 
By Proposition 1, f ˜ P ( x , a , b ) vanishes in the limit.
2. 
By Proposition 2, f ˜ S ( x , a , b ) converges to the right-hand side of (15).
3. 
By Proposition 3, h ˜ ( x , a , b , θ ) converges to:
E x e q τ ˜ a , b θ [ a Y ¯ b ( τ ˜ a , b ) ] = Z ( q ) ( x a , θ ) Z ( q ) ( b a , θ ) W ( q ) ( x a ) W ( q ) ( ( b a ) + ) ,
which is given in Theorem 1 of Avram et al. (2004).
4. 
By Corollary 9, j ˜ ( x , a , b ) converges to:
E x e q τ ˜ a , b [ a Y ¯ b ( τ ˜ a , b ) ] = W ( q ) ( x a ) W ( q ) ( ( b a ) + ) l ( q ) ( b a ) l ( q ) ( x a ) ,
which is given in (3.16) of Avram et al. (2007).
The convergence for the limiting case a = −∞ holds in the same way.

4.2. Results for Y r a

We shall now study the process Y r a as defined in Section 2.3. We let:
η a , b + ( r ) : = inf { t > 0 : Y r a ( t ) > b } , a < 0 < b .
Remark 7.
Recall the classical reflected process Y ̲ a = X + R a and η a , 0 + as in (16). (i) For 0 t η a , 0 + , we have Y r a ( t ) = Y ̲ a ( t ) and R r a ( t ) = R a ( t ) . (ii) For 0 t < τ a ( r ) , we have Y r a ( t ) = X r ( t ) .
Using this remark, we obtain the following identity related to Parisian reflection (periodic dividends).
Proposition 4 (Periodic part of dividends).
For q 0 , a < 0 < b and x b ,
f ^ ( x , a , b ) : = E x 0 η a , b + ( r ) e q t d L r a ( t ) = r W ¯ ¯ ( q + r ) ( b ) J a ( q , r ) ( x ) J a ( q , r ) ( b ) W ¯ ¯ ( q + r ) ( x ) .
Proof. 
By an application of Remark 7 (i), (17) and the strong Markov property,
f ^ ( a , a , b ) = E a ( e q η a , 0 + ) f ^ ( 0 , a , b ) = f ^ ( 0 , a , b ) / Z ( q ) ( a ) .
By this, Remark 7 (ii) and the strong Markov property, together with Theorems 1 and 3, we have for x b :
f ^ ( x , a , b ) = f ( x , a , b ) + h ( x , a , b , 0 ) f ^ ( a , a , b ) = r W ¯ ¯ ( q + r ) ( b ) I a ( q , r ) ( x ) I a ( q , r ) ( b ) W ¯ ¯ ( q + r ) ( x ) + J a ( q , r ) ( x ) J a ( q , r ) ( b ) I a ( q , r ) ( x ) I a ( q , r ) ( b ) f ^ ( 0 , a , b ) Z ( q ) ( a ) .
Setting x = 0 and solving for f ^ ( 0 , a , b ) (using (26)), we get f ^ ( 0 , a , b ) = r W ¯ ¯ ( q + r ) ( b ) Z ( q ) ( a ) / J a ( q , r ) ( b ) . Substituting this in (41), we have the claim. ☐
By taking b → ∞ in Proposition 4, we have the following.
Corollary 10.
Fix a < 0 and x R . (i) For q > 0 , we have:
E x 0 e q t d L r a ( t ) = r 1 q Φ ( q + r ) J a ( q , r ) ( x ) Z ( q ) ( a , Φ ( q + r ) ) W ¯ ¯ ( q + r ) ( x ) .
(ii) For q = 0 , it becomes infinity.
For q 0 and a < 0 , let:
H a ( q , r ) ( y ) : = l a ( q , r ) ( y ) l ( q ) ( a ) Z ( q ) ( a ) Z a ( q , r ) ( y ) = K a ( q , r ) ( y ) J a ( q , r ) ( y ) Z ( q ) ( a ) l ( q ) ( a ) , y R .
In particular,
H a ( q , r ) ( 0 ) = 0 .
For the identities related to classical reflection below (capital injections), we will write them in terms of the functions H a ( q , r ) and J a ( q , r ) .
Proposition 5 (Capital injections).
For q 0 , a < 0 < b and x b ,
j ^ ( x , a , b ) : = E x [ 0 , η a , b + ( r ) ] e q t d R r a ( t ) = H a ( q , r ) ( b ) J a ( q , r ) ( x ) J a ( q , r ) ( b ) H a ( q , r ) ( x ) .
Proof. 
First, by Remark 7 (i), (17), (18) and an application of the strong Markov property,
j ^ ( a , a , b ) = E a [ 0 , η a , 0 + ] e q t d R a ( t ) + E a ( e q η a , 0 + ) j ^ ( 0 , a , b ) = l ( q ) ( a ) + j ^ ( 0 , a , b ) Z ( q ) ( a ) .
This, together with Remark 7 (ii), Corollary 5, Theorem 3 and the strong Markov property, gives, for x b ,
j ^ ( x , a , b ) = j ( x , a , b ) + h ( x , a , b , 0 ) j ^ ( a , a , b ) = I a ( q , r ) ( x ) I a ( q , r ) ( b ) K a ( q , r ) ( b ) K a ( q , r ) ( x ) + J a ( q , r ) ( x ) I a ( q , r ) ( x ) I a ( q , r ) ( b ) J a ( q , r ) ( b ) l ( q ) ( a ) + j ^ ( 0 , a , b ) Z ( q ) ( a ) = I a ( q , r ) ( x ) I a ( q , r ) ( b ) K a ( q , r ) ( b ) J a ( q , r ) ( b ) Z ( q ) ( a ) l ( q ) ( a ) + j ^ ( 0 , a , b ) K a ( q , r ) ( x ) + J a ( q , r ) ( x ) Z ( q ) ( a ) l ( q ) ( a ) + j ^ ( 0 , a , b ) = I a ( q , r ) ( x ) I a ( q , r ) ( b ) H a ( q , r ) ( b ) J a ( q , r ) ( b ) Z ( q ) ( a ) j ^ ( 0 , a , b ) H a ( q , r ) ( x ) + J a ( q , r ) ( x ) Z ( q ) ( a ) j ^ ( 0 , a , b ) .
Setting x = 0 and solving for j ^ ( 0 , a , b ) (using (26) and (42)), j ^ ( 0 , a , b ) = H a ( q , r ) ( b ) Z ( q ) ( a ) / J a ( q , r ) ( b ) .
Substituting this back in (44), we have the claim. ☐
By taking b → ∞ in Proposition 5, we have the following.
Corollary 11.
For q > 0 , a < 0 , and x R , we have
E x [ 0 , ) e q t d R r a ( t ) = r Z ( q ) ( a ) q Φ ( q + r ) Z ( q ) ( a , Φ ( q + r ) ) + 1 Φ ( q + r ) Z a ( q , r ) ( x ) r Z ( q ) ( a ) W ¯ ( q + r ) ( x ) + r Z ¯ ( q ) ( a ) W ¯ ( q + r ) ( x ) Z ¯ a ( q , r ) ( x ) + ψ ( 0 + ) q .
Remark 8.
Recently, in Noba et al. (2017), Corollaries 10 and 11 were used to show the optimality of a mixed periodic-classical barrier strategy in de Finetti’s dividend problem with periodic dividends and classical capital injections. The candidate optimal barrier is chosen so that the slope at the barrier becomes one. The optimality is shown to hold for a general spectrally-negative Lévy process by the observation that the slope of the candidate value function is proportional to the Laplace transform of the stopping time given in Corollary 3.
Finally, we compute the Laplace transform of the upcrossing time η a , b + ( r ) .
Proposition 6 (Upcrossing time).
Fix a < 0 < b . (i) For q > 0 and x b , we have:
g ^ ( x , a , b ) : = E x e q η a , b + ( r ) = J a ( q , r ) ( x ) J a ( q , r ) ( b ) .
(ii) For all x R , we have η a , b + ( r ) < , P x -a.s.
Proof. 
(i) By Remark 7 (i) and the strong Markov property, together with (17),
g ^ ( a , a , b ) = E a ( e q η a , 0 + ) g ^ ( 0 , a , b ) = g ^ ( 0 , a , b ) / Z ( q ) ( a ) .
By this, Remark 7 (ii) and the strong Markov property, together with Theorems 2 and 3,
g ^ ( x , a , b ) = I a ( q , r ) ( x ) I a ( q , r ) ( b ) + J a ( q , r ) ( x ) J a ( q , r ) ( b ) I a ( q , r ) ( x ) I a ( q , r ) ( b ) g ^ ( 0 , a , b ) Z ( q ) ( a ) .
Setting x = 0 and by (26), we obtain g ^ ( 0 , a , b ) = Z ( q ) ( a ) / J a ( q , r ) ( b ) . Substituting this in (45), we have the claim. (ii) This is immediate by setting q = 0 in (i) by (27). ☐
In the next remark, we recover classical fluctuation identities found in Avram et al. (2007) and Kyprianou (2006), by taking the rate associated with the Parisian reflection to zero.
Remark 9.
Recall (32). As r 0 , we have the following.
1. 
By Proposition 4, f ^ ( x , a , b ) vanishes in the limit.
2. 
By Proposition 5, j ^ ( x , a , b ) converges to the right-hand side of (18).
3. 
By Proposition 6, g ^ ( x , a , b ) converges to the right-hand side of (17).
The convergence for the limiting case b = ∞ holds in the same way.

4.3. Results for Y ˜ r a , b

We conclude this section with the identities for the process Y ˜ r a , b as constructed in Section 2.4. We use the derivative of J a ( q , r ) as in (25):
( J a ( q , r ) ) ( y ) = ( Z a ( q , r ) ) ( y ) r Z ( q ) ( a ) W ( q + r ) ( y ) , q 0 , a < 0 , y R .
We shall use the following observation and the strong Markov property.
Remark 10.
(i) For 0 t η a , 0 + , we have Y ˜ r a , b ( t ) = Y ̲ a ( t ) , L ˜ r , P a , b ( t ) = L ˜ r , S a , b ( t ) = 0 and R ˜ r a , b ( t ) = R a ( t ) .
(ii) For all 0 t < τ ˜ a , b ( r ) , we have Y ˜ r a , b ( t ) = X ˜ r b ( t ) , L ˜ r , P a , b ( t ) = L ˜ r , P b ( t ) and L ˜ r , S a , b ( t ) = L ˜ r , S b ( t ) .
In view of this remark, we obtain the following identities related to the three types of reflections L ˜ r , P a , b , L ˜ r , S a , b and R ˜ r a , b in Propositions 7–9, respectively.
Proposition 7 (Periodic part of dividends).
Fix a < 0 < b and x b . (i) For q > 0 , we have:
f ˇ P ( x , a , b ) : = E x 0 e q t d L ˜ r , P a , b ( t ) = r W ¯ ( q + r ) ( b ) J a ( q , r ) ( x ) ( J a ( q , r ) ) ( b ) W ¯ ¯ ( q + r ) ( x ) .
(ii) If q = 0 , it becomes infinity.
Proof. 
By Remark 10 (i), (17) and the strong Markov property, f ˇ P ( a , a , b ) = E a ( e q η a , 0 + ) f ˇ P ( 0 , a , b ) = f ˇ P ( 0 , a , b ) / Z ( q ) ( a ) . By this, Remark 10 (ii) and the strong Markov property, together with Propositions 1 and 3, for x b ,
f ˇ P ( x , a , b ) = f ˜ P ( x , a , b ) + h ˜ ( x , a , b , 0 ) f ˇ P ( a , a , b ) = r W ¯ ( q + r ) ( b ) I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) W ¯ ¯ ( q + r ) ( x ) + J a ( q , r ) ( x ) ( J a ( q , r ) ) ( b ) I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) f ˇ P ( 0 , a , b ) Z ( q ) ( a ) .
Now taking x = 0 and by (26), we get f ˇ P ( 0 , a , b ) = r W ¯ ( q + r ) ( b ) Z ( q ) ( a ) / ( J a ( q , r ) ) ( b ) . Substituting this in (46), we have the claim. ☐
Proposition 8 (Singular part of dividends).
Fix a < 0 < b and x b . (i) For any q > 0 , we have that:
f ˇ S ( x , a , b ) : = E x [ 0 , ) e q t d L ˜ r , S a , b ( t ) = J a ( q , r ) ( x ) ( J a ( q , r ) ) ( b ) .
(ii) If q = 0 , it becomes infinity.
Proof. 
(i) By Remark 10 (i), (17) and the strong Markov property, f ˇ S ( a , a , b ) = E a ( e q η 0 , a + ) f ˇ S ( 0 , a , b ) = f ˇ S ( 0 , a , b ) / Z ( q ) ( a ) . By this, Remark 10 (ii) and the strong Markov property, together with Propositions 2 and 3, for x b ,
f ˇ S ( x , a , b ) = f ˜ S ( x , a , b ) + h ˜ ( x , a , b , 0 ) f ˇ S ( a , a , b ) = I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) + J a ( q , r ) ( x ) ( J a ( q , r ) ) ( b ) I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) f ˇ S ( 0 , a , b ) Z ( q ) ( a ) .
Now, taking x = 0 and solving for f ˇ S ( 0 , a , b ) (using (26)), we get f ˇ S ( 0 , a , b ) = Z ( q ) ( a ) / ( J a ( q , r ) ) ( b ) . Substituting this in (47), we have the claim.
(ii) It is immediate by (27) upon taking q 0 in (i). ☐
Proposition 9 (Capital injections).
Fix a < 0 < b and x b . (i) For any q > 0 , we have:
j ˇ ( x , a , b ) : = E x [ 0 , ) e q t d R ˜ r a , b ( t ) = H a ( q , r ) ( x ) + J a ( q , r ) ( x ) ( J a ( q , r ) ) ( b ) ( H a ( q , r ) ) ( b ) .
(ii) When q = 0 , it becomes infinity.
Proof. 
(i) First, by Remark 10 (i), by modifying (43), j ˇ ( a , a , b ) = [ l ( q ) ( a ) + j ˇ ( 0 , a , b ) ] / Z ( q ) ( a ) . In view of this, by Remark 10 (ii), Corollary 9 and the strong Markov property, we obtain a modification of (44): for x b ,
j ˇ ( x , a , b ) = j ˜ ( x , a , b ) + h ˜ ( x , a , b , 0 ) j ˇ ( a , a , b )   = I a ( q , r ) ( x ) ( I a ( q , r ) ) ( b + ) ( H a ( q , r ) ) ( b ) ( J a ( q , r ) ) ( b ) Z ( q ) ( a ) j ˇ ( 0 , a , b ) H a ( q , r ) ( x ) + J a ( q , r ) ( x ) Z ( q ) ( a ) j ˇ ( 0 , a , b ) .
Setting x = 0 and solving for j ˇ ( 0 , a , b ) (using (26) and (42)), j ˇ ( 0 , a , b ) = ( H a ( q , r ) ) ( b ) Z ( q ) ( a ) / ( J a ( q , r ) ) ( b ) . Substituting this back in (48), we have the claim
(ii) It is immediate by (27) upon taking q 0 in (i). ☐
Remark 11.
The results given in Propositions 7–9 can potentially be used to prove the optimality of a hybrid continuous and periodic dividend payment strategy, for an extension of Pérez and Yamazaki (forthcoming) with additional capital injections in the spectrally-negative model.
Finally, by taking the rate of the Parisian reflection r to zero, we recover the results obtained in Avram et al. (2007).
Remark 12.
Recall (32). As r 0 , we have the following.
1. 
By Proposition 7, f ˇ P ( x , a , b ) vanishes in the limit.
2. 
By Proposition 8, f ˇ S ( x , a , b ) converges to Identity (4.3) in Theorem 1 of Avram et al. (2007).
3. 
By Proposition 9, j ˇ ( x , a , b ) converges to Identity (4.4) in Theorem 1 of Avram et al. (2007).

5. Proofs of Theorems for the Bounded Variation Case

In this section, we shall show Theorems 1–3 for the case that X has bounded variation. We shall use the following remark and lemma throughout the proofs.
Remark 13.
For 0 t < e r τ 0 , we have X r ( t ) = X ( t ) and L r ( t ) = 0 .
Lemma 3.
For q 0 , a < 0 < b and x b ,
E x e q e r ; e r < τ 0 τ b + + E x e ( q + r ) τ 0 W ( q ) ( X ( τ 0 ) a ) W ( q ) ( a ) ; τ 0 < τ b + = I a ( q , r ) ( x ) W ( q + r ) ( x ) W ( q + r ) ( b ) I a ( q , r ) ( b ) .
Proof. 
As obtained in (4.30) of Avram et al. (2018) and by (10), for all x b ,
E x e q e r ; e r < τ b + τ 0 = r r + q E x 1 e ( q + r ) ( τ b + τ 0 ) = r W ( q + r ) ( x ) W ( q + r ) ( b ) W ¯ ( q + r ) ( b ) W ¯ ( q + r ) ( x ) .
By summing this and (21), the result follows. ☐

5.1. Proof of Theorem 1

We shall first show the following.
Lemma 4.
For b > 0 and x b , we have:
E x e q e r X ( e r ) ; e r < τ b + τ 0 = r W ¯ ¯ ( q + r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( x ) W ¯ ¯ ( q + r ) ( x ) .
Proof. 
First we note that integration by parts gives for any x 0 , 0 x y W ( q + r ) ( x y ) d y = W ¯ ¯ ( q + r ) ( x ) . This implies, using the resolvent given in Theorem 8.7 of Kyprianou (2006), the following:
E x e q e r X ( e r ) ; e r < τ b + τ 0 = r 0 b y W ( q + r ) ( x ) W ( q + r ) ( b y ) W ( q + r ) ( b ) W ( q + r ) ( x y ) d y = r W ¯ ¯ ( q + r ) ( b ) W ( q + r ) ( b ) W ( q + r ) ( x ) W ¯ ¯ ( q + r ) ( x ) .
 ☐
For x 0 , by an application of the strong Markov property and (10),
f ( x , a , b ) = E x e q τ 0 + ; τ 0 + < τ a f ( 0 , a , b ) = W ( q ) ( x a ) W ( q ) ( a ) f ( 0 , a , b ) .
Using this and the strong Markov property, for x b ,
f ( x , a , b ) = E x e q e r X ( e r ) ; e r < τ 0 τ b + + E x e ( q + r ) τ 0 W ( q ) ( X ( τ 0 ) a ) ; τ 0 < τ b + f ( 0 , a , b ) W ( q ) ( a ) + E x e q e r ; e r < τ 0 τ b + f ( 0 , a , b ) .
By applying Lemmas 3 and 4 and (21) in (51), we obtain for all x b ,
f ( x , a , b ) = r W ¯ ¯ ( q + r ) ( x ) + I a ( q , r ) ( x ) f ( 0 , a , b ) + W ( q + r ) ( x ) W ( q + r ) ( b ) r W ¯ ¯ ( q + r ) ( b ) I a ( q , r ) ( b ) f ( 0 , a , b ) .
Setting x = 0 and solving for f ( 0 , a , b ) (using (26) and the fact that W ( q + r ) ( 0 ) > 0 for the case of bounded variation as in (8)), we have f ( 0 , a , b ) = r W ¯ ¯ ( q + r ) ( b ) / I a ( q , r ) ( b ) . Substituting this back in (52), we have the claim.

5.2. Proof of Theorem 2

For x 0 , similarly to (50), we obtain g ( x , a , b ) = g ( 0 , a , b ) W ( q ) ( x a ) / W ( q ) ( a ) . Now, for x b , again by the strong Markov property, Lemma 3 and (21),
g ( x , a , b ) = E x e q e r ; e r < τ b + τ 0 g ( 0 , a , b ) + E x e q τ 0 g ( X ( τ 0 ) , a , b ) ; τ 0 < e r τ b + + E x ( e q τ b + ; τ b + < τ 0 e r ) = g ( 0 , a , b ) I a ( q , r ) ( x ) W ( q + r ) ( x ) W ( q + r ) ( b ) I a ( q , r ) ( b ) + W ( q + r ) ( x ) W ( q + r ) ( b ) .
Setting x = 0 and using (26), g ( 0 , a , b ) = ( I a ( q , r ) ( b ) ) 1 . Substituting this in (53), we have the result.

5.3. Proof of Theorem 3

(i) For x 0 , by using (10),
h ( x , a , b , θ ) = E x ( e q τ 0 + ; τ 0 + < τ a ) h ( 0 , a , b , θ ) + E x e q τ a θ [ a X ( τ a ) ] ; τ 0 + > τ a = W ( q ) ( x a ) W ( q ) ( a ) [ h ( 0 , a , b , θ ) Z ( q ) ( a , θ ) ] + Z ( q ) ( x a , θ ) .
Using this and the strong Markov property, for all x b ,
h ( x , a , b , θ ) = E x e ( q + r ) τ 0 h ( X ( τ 0 ) , a , b , θ ) ; τ 0 < τ b + + E x e q e r ; e r < τ 0 τ b + h ( 0 , a , b , θ )
where:
E x e ( q + r ) τ 0 h ( X ( τ 0 ) , a , b , θ ) ; τ 0 < τ b + = E x e ( q + r ) τ 0 W ( q ) ( X ( τ 0 ) a ) W ( q ) ( a ) [ h ( 0 , a , b , θ ) Z ( q ) ( a , θ ) ] + Z ( q ) ( X ( τ 0 ) a , θ ) ; τ 0 < τ b + .
Hence, by Lemma 1, (49) and (21),
h ( x , a , b , θ ) = h ( 0 , a , b , θ ) Z ( q ) ( a , θ ) W ( q ) ( a ) W a ( q , r ) ( x ) W ( q + r ) ( x ) W ( q + r ) ( b ) W a ( q , r ) ( b ) + Z a ( q , r ) ( x , θ ) W ( q + r ) ( x ) W ( q + r ) ( b ) Z a ( q , r ) ( b , θ ) + r W ( q + r ) ( x ) W ( q + r ) ( b ) W ¯ ( q + r ) ( b ) W ¯ ( q + r ) ( x ) h ( 0 , a , b , θ ) = h ( 0 , a , b , θ ) I a ( q , r ) ( x ) W ( q + r ) ( x ) W ( q + r ) ( b ) I a ( q , r ) ( b ) + J ^ a ( q , r ) ( x , θ ) W ( q + r ) ( x ) W ( q + r ) ( b ) J ^ a ( q , r ) ( b , θ ) .
Setting x = 0 and using (26) and (30), we have h ( 0 , a , b , θ ) = J ^ a ( q , r ) ( b , θ ) / I a ( q , r ) ( b ) . Substituting this in (54), we obtain the first identity (in terms of J ^ a ( q , r ) ) in (31). The last equality in (31) holds by (29).

6. Proofs of Theorems for the Unbounded Variation Case

In this section, we shall show Theorems 1–3 for the case X has paths of unbounded variation. The proof is via excursion theory. We in particular use the recent results obtained in Pardo et al. (2018) and the simplifying formula given in Avram et al. (2018). We refer the reader to Chapter IV in Bertoin (1996) and to Pardo et al. (2018) for a detailed introduction and definitions regarding excursions away from zero for the case of spectrally-negative Lévy processes.
Fix $b > 0$ and $q > 0$. Let us consider the event:
\[
E_B := \{\tau_0^- > e_r\} \cup \{\zeta > \tau_b^+\} \cup \{\zeta > \tau_a^-\},
\]
where $e_r$ is an independent exponential clock with rate $r$ and $\zeta$ is the length of the excursion, from the point it leaves zero until it returns back to zero. Due to the fact that X is spectrally negative, once an excursion gets below zero, it stays below zero until it ends at $\zeta$. That is, $E_B$ is the event on which (1) the exponential clock $e_r$ that starts once the excursion becomes positive rings before the excursion downcrosses zero, (2) the excursion exceeds the level $b > 0$, or (3) it goes below $a < 0$.
Now, let us denote by T E B the first time an excursion in the event E B occurs and also denote by:
l T E B : = sup { t < T E B : X ( t ) = 0 } ,
the left extremum of the first excursion in $E_B$. On the event $\{l_{T_{E_B}} < \infty\}$, we have:
\[
T_{E_B} = l_{T_{E_B}} + T_{E_B} \circ \Theta_{l_{T_{E_B}}},
\]
where we denote by $\Theta_t$ the shift operator at time $t \ge 0$.
Let $(e_t; t \ge 0)$ be the point process of excursions away from zero and $V := \inf\{t > 0 : e_t \in E_B\}$. By, for instance, Proposition 0.2 in Bertoin (1996), $(e_t, t < V)$ is independent of $(V, e_V)$. The former is a Poisson point process with characteristic measure $n(\cdot \cap E_B^c)$, and $V$ follows an exponential distribution with parameter $n(E_B)$. Moreover, we have that $l_{T_{E_B}} = \sum_{s < V} \zeta(e_s)$, where $\zeta(e_s)$ denotes the lifetime of the excursion $e_s$. Therefore, the exponential formula for Poisson point processes (see, for instance, Section 0.5 in Bertoin (1996) or Proposition 1.12 in Chapter XII of Revuz and Yor (1999)) and the independence between $(e_t, t < V)$ and $(V, e_V)$ imply:
\[
\mathbb{E}\big[e^{-q l_{T_{E_B}}}\big] = \mathbb{E}\Big[\exp\Big(-q\sum_{s<V}\zeta(e_s)\Big)\Big] = n(E_B)\int_0^\infty e^{-s[n(E_B) + n(1 - e^{-q\zeta};\, E_B^c)]}\,\mathrm{d}s = \frac{n(E_B)}{n(E_B) + n\big(e_q < \zeta,\, E_B^c\big)} = \frac{n(E_B)}{n(E_1) + n(E_2) + n(E_3)}, \tag{55}
\]
where $e_q$ is an exponential random variable with parameter $q$ that is independent of $e_r$ and $X$, and:
\[
E_1 := \{e_q < \zeta\} \cup \{\tau_b^+ < \zeta\}, \qquad E_2 := \{e_q > \zeta,\ \tau_a^- < \zeta < \tau_b^+\}, \qquad E_3 := \{e_q > \zeta,\ \tau_0^- > e_r,\ \tau_a^- \wedge \tau_b^+ > \zeta\}.
\]
To see how the last equality of (55) holds, we have:
\[
n(E_B) + n\big(e_q < \zeta,\, E_B^c\big) = n(e_q < \zeta) + n(e_q > \zeta,\, E_B) = n(E_1) - n(e_q > \zeta,\, \tau_b^+ < \zeta) + n(e_q > \zeta,\, E_B) = n(E_1) + n(E_2) + n(E_3).
\]
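The exponential-formula step in (55) can be illustrated with a small simulation. The following is a minimal Monte Carlo sketch in Python for an abstract marked Poisson point process standing in for the excursion point process: "type-B" points (playing the role of $E_B$) arrive at rate lamB, the remaining points arrive at rate lamC and carry i.i.d. lifetimes; the rates, the unit-exponential lifetime law and the function names are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
q, lamB, lamC = 0.7, 0.4, 1.3   # illustrative parameters only

def discounted_sum_before_first_B():
    """Return exp(-q * sum of lifetimes of non-B points arriving before the first B point).
    The number of such points is geometric with success probability lamB/(lamB+lamC)."""
    n_before = rng.geometric(lamB / (lamB + lamC)) - 1
    lifetimes = rng.exponential(1.0, size=n_before)   # i.i.d. unit-exponential lifetimes
    return np.exp(-q * lifetimes.sum())

mc = np.mean([discounted_sum_before_first_B() for _ in range(200_000)])
# Analogue of (55): n(E_B) / [ n(E_B) + n(1 - e^{-q*zeta}; E_B^c) ], here
# lamB / (lamB + lamC * E[1 - e^{-q*zeta}]) with E[1 - e^{-q*zeta}] = q/(q+1).
closed_form = lamB / (lamB + lamC * q / (q + 1.0))
print(mc, closed_form)   # the two values should be close
```

The design mirrors the argument in the text: everything observed before the first type-B point is independent of that point, so the discounted quantity accumulated before it has the stated closed form.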
Now, by Lemma 5.1 (i) and (ii) in Avram et al. (2018), we have:
(i)
n ( E 1 ) = e Φ ( q ) b / W ( q ) ( b ) ,
(ii)
n ( E 2 ) = 1 W ( q ) ( b ) e Φ ( q ) b W ( q ) ( b a ) W ( q ) ( a ) .
On the other hand, we have the following; the proof is deferred to Appendix A.2.
Lemma 5.
For b , q > 0 , we have:
n E 3 = 1 W ( q ) ( b ) W ( q ) ( b a ) W ( q ) ( a ) 1 W ( q + r ) ( b ) W a ( q , r ) ( b ) W ( q ) ( a ) .
Hence, $n(E_1) + n(E_2) + n(E_3) = W_a^{(q,r)}(b)/[W^{(q)}(-a)\, W^{(q+r)}(b)]$. This, together with (55), gives:
\[
\frac{\mathbb{E}\big(e^{-q l_{T_{E_B}}}\big)}{n(E_B)} = \frac{W^{(q+r)}(b)\, W^{(q)}(-a)}{W_a^{(q,r)}(b)}. \tag{57}
\]
We now show the following lemma using the connections between n and the excursion measure of the process reflected at its infimum n ̲ , as obtained in Pardo et al. (2018).
Lemma 6.
Fix $b, q > 0$. (i) We have $n\big(e^{-q e_r};\, e_r < \tau_b^+ \wedge \tau_0^-\big) = r\,\overline{W}^{(q+r)}(b)/W^{(q+r)}(b)$.
(ii) We have $n\big(e^{-q e_r} X(e_r);\, e_r < \tau_b^+ \wedge \tau_0^-\big) = r\,\overline{\overline{W}}^{(q+r)}(b)/W^{(q+r)}(b)$.
Proof. 
By a small modification of Theorem 3 (ii) in Pardo et al. (2018), using Proposition 1 in Chaumont and Doney (2005), and by (49),
\[
n\big(e^{-q e_r};\, e_r < \tau_b^+ \wedge \tau_0^-\big) = \frac{r}{r+q}\, n\big(1 - e^{-(q+r)(\tau_b^+ \wedge \tau_0^-)}\big) = \frac{r}{r+q}\, \underline{n}\big(1 - e^{-(q+r)(\tau_b^+ \wedge \tau_0^-)}\big) = \frac{r}{r+q}\lim_{x \downarrow 0}\frac{1}{W(x)}\,\mathbb{E}_x\big[1 - e^{-(q+r)(\tau_b^+ \wedge \tau_0^-)}\big] = \frac{r\,\overline{W}^{(q+r)}(b)}{W^{(q+r)}(b)},
\]
where we use in the last equality that, as in the proof of Lemma 5.1 of Avram et al. (2018),
\[
0 \le \frac{W^{(q+r)}(x) - W(x)}{W(x)} \xrightarrow[x \downarrow 0]{} 0. \tag{58}
\]
Similarly, using Lemma 4 and (58), we have:
\[
n\big(e^{-q e_r} X(e_r);\, e_r < \tau_b^+ \wedge \tau_0^-\big) = \underline{n}\big(e^{-q e_r} X(e_r);\, e_r < \tau_b^+ \wedge \tau_0^-\big) = \lim_{x \downarrow 0}\frac{1}{W(x)}\,\mathbb{E}_x\big[e^{-q e_r} X(e_r);\, e_r < \tau_b^+ \wedge \tau_0^-\big] = \frac{r\,\overline{\overline{W}}^{(q+r)}(b)}{W^{(q+r)}(b)}.
\]
 ☐
We are now ready to show the theorems. We shall show them for the case $q > 0$; the case $q = 0$ holds by monotone convergence. For the rest of this section, let $\tilde{T}_0 := l_{T_{E_B}} + \tau_0^- \circ \Theta_{l_{T_{E_B}}}$.

6.1. Proof of Theorem 1

By the definition of $l_{T_{E_B}}$, on the event $\{\tilde{T}_0 < (l_{T_{E_B}} + e_r) \wedge \tau_b^+\}$, the excursion goes below $a$, and hence, there is no contribution to $L_r$. Therefore, by the strong Markov property,
\[
f(0,a,b) = f_0(0,a,b) + g_0(0,a,b)\, f(0,a,b), \tag{59}
\]
where
\[
f_0(0,a,b) := \mathbb{E}\big[e^{-q(l_{T_{E_B}} + e_r)}\, X(l_{T_{E_B}} + e_r);\ l_{T_{E_B}} + e_r < \tilde{T}_0 \wedge \tau_b^+\big], \qquad g_0(0,a,b) := \mathbb{E}\big[e^{-q(l_{T_{E_B}} + e_r)};\ l_{T_{E_B}} + e_r < \tilde{T}_0 \wedge \tau_b^+\big].
\]
By the Master’s formula in excursion theory (see, for instance, excursions straddling a terminal time in Chapter XII of Revuz and Yor (1999)), Lemma 6 and (57), and because $\{e_r < \tau_0^- \wedge \tau_b^+\} \subset E_B$,
\[
f_0(0,a,b) = \frac{\mathbb{E}\big(e^{-q l_{T_{E_B}}}\big)}{n(E_B)}\, n\big(e^{-q e_r} X(e_r);\, e_r < \tau_0^- \wedge \tau_b^+,\, E_B\big) = \frac{r\, W^{(q)}(-a)}{W_a^{(q,r)}(b)}\,\overline{\overline{W}}^{(q+r)}(b), \qquad g_0(0,a,b) = \frac{\mathbb{E}\big(e^{-q l_{T_{E_B}}}\big)}{n(E_B)}\, n\big(e^{-q e_r};\, e_r < \tau_0^- \wedge \tau_b^+,\, E_B\big) = \frac{r\, W^{(q)}(-a)}{W_a^{(q,r)}(b)}\,\overline{W}^{(q+r)}(b). \tag{60}
\]
Substituting these in (59), we obtain f ( 0 , a , b ) = r W ¯ ¯ ( q + r ) ( b ) / I a ( q , r ) ( b ) . Substituting this in (52) (which also holds for the unbounded variation case), we complete the proof.

6.2. Proof of Theorem 2

Similarly to (59),
\[
g(0,a,b) = \mathbb{E}\big[e^{-q\tau_b^+};\ \tau_b^+ < \tilde{T}_0 \wedge (l_{T_{E_B}} + e_r)\big] + g_0(0,a,b)\, g(0,a,b). \tag{61}
\]
By the Master’s formula, Lemma 5.1 (iv) in Avram et al. (2018) and (57), and because $\{\tau_b^+ < \tau_0^-\} \subset E_B$,
\[
\mathbb{E}\big[e^{-q\tau_b^+};\ \tau_b^+ < \tilde{T}_0 \wedge (l_{T_{E_B}} + e_r)\big] = \frac{\mathbb{E}\big(e^{-q l_{T_{E_B}}}\big)}{n(E_B)}\, n\big(e^{-(q+r)\tau_b^+};\, \tau_b^+ < \tau_0^-,\, E_B\big) = \frac{W^{(q)}(-a)}{W_a^{(q,r)}(b)}.
\]
Substituting this and (60) in (61), we have $g(0,a,b) = \big(I_a^{(q,r)}(b)\big)^{-1}$. Substituting this in (53) (which also holds for the unbounded variation case), we complete the proof.

6.3. Proof of Theorems 3

We shall first show the following using Theorem 5.1 in Avram et al. (2018); the proof is given in Appendix A.3.
Lemma 7.
For $q > 0$, $a < 0 < b$ and $0 \le \theta < \Phi(q)$,
n e ( q + r ) τ 0 Z ( q ) ( X ( τ 0 ) a , θ ) Z ( q ) ( a , θ ) W ( q ) ( X ( τ 0 ) a ) W ( q ) ( a ) ; τ 0 < τ b + = J ^ a ( q , r ) ( b , θ ) W ( q + r ) ( b ) .
Using Lemma 7, we shall now give the proof of the theorem for $0 \le \theta < \Phi(q)$; the case $\theta \ge \Phi(q)$ holds by analytic continuation. Using the strong Markov property, we have that:
\[
h(0,a,b,\theta) = g_0(0,a,b)\, h(0,a,b,\theta) + \mathbb{E}\big[e^{-q[\tilde{T}_0 + \tau_a^- \circ \Theta_{\tilde{T}_0}] - \theta[a - X(\tilde{T}_0 + \tau_a^- \circ \Theta_{\tilde{T}_0})]};\ \tilde{T}_0 < (l_{T_{E_B}} + e_r) \wedge \tau_b^+\big]. \tag{62}
\]
By the Master’s formula and (57),
E e q [ T ˜ 0 + τ a Θ T ˜ 0 ] θ [ a X ( T ˜ 0 + τ a Θ T ˜ 0 ) ] ; T ˜ 0 < ( l T E B + e r ) τ b + = E ( e q l T E B ) n ( E B ) n e q τ a θ [ a X ( τ a ) ] ; τ 0 < e r τ b + , E B = W ( q + r ) ( b ) W ( q ) ( a ) W a ( q , r ) ( b ) n e q τ a θ [ a X ( τ a ) ] ; τ 0 < e r τ b + , τ a < τ 0 + τ 0 + Θ τ 0 .
Here, by the strong Markov property, (10) and Lemma 7,
n e q τ a θ [ a X ( τ a ) ] ; τ 0 < e r τ b + , τ a < τ 0 + τ 0 + Θ τ 0   = n e q τ 0 E X ( τ 0 ) e q τ a θ [ a X ( τ a ) ] ; τ a < τ 0 + ; τ 0 < e r τ b +   = n e ( q + r ) τ 0 E X ( τ 0 ) e q τ a θ [ a X ( τ a ) ] ; τ a < τ 0 + ; τ 0 < τ b + = J ^ a ( q , r ) ( b , θ ) W ( q + r ) ( b ) .
Substituting these and (60) in (62), we obtain that h ( 0 , a , b , θ ) = J ^ a ( q , r ) ( b , θ ) / I a ( q , r ) ( b ) . Using this expression in (54) (which also holds for the unbounded variation case), we complete the proof.

Acknowledgments

J.L.P. is supported by CONACYT, Project No. 241195. K.Y. is supported by MEXT KAKENHI Grant No. 26800092.

Author Contributions

José-Luis Pérez and Kazutoshi Yamazaki wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs Regarding Simplifying Formulae

As in Loeffen et al. (2014), for any $\alpha \ge 0$, let $V_0(\alpha)$ be the set of measurable functions $v_\alpha : \mathbb{R} \to \mathbb{R}$ satisfying:
\[
\mathbb{E}_x\big[e^{-\alpha\tau_0^-}\, v_\alpha(X(\tau_0^-));\, \tau_0^- < \tau_b^+\big] = v_\alpha(x) - \frac{W^{(\alpha)}(x)}{W^{(\alpha)}(b)}\, v_\alpha(b), \qquad x \le b.
\]
We shall further define $\widetilde{V}_0(\alpha)$ to be the set of positive measurable functions $v_\alpha(x)$ that satisfy Condition (i) or (ii) in Lemma 2.1 of Loeffen et al. (2014), stated as follows:
(i)
For the case where X has paths of bounded variation, $v_\alpha \in V_0(\alpha)$ and there exists a sufficiently large $\lambda$ such that:
\[
\int_0^\infty e^{-\lambda z}\, v_\alpha(z)\, \mathrm{d}z < \infty.
\]
(ii)
For the case where X has paths of unbounded variation, there exists a sequence of functions $v_{\alpha,n}$ that converge to $v_\alpha$ uniformly on compact sets, where $v_{\alpha,n}$ belongs to the class $\widetilde{V}_0(\alpha)$ for the process $X_n$; here, $(X_n; n \ge 1)$ is a sequence of spectrally-negative Lévy processes of bounded variation that converge to X almost surely, uniformly on compact time intervals (which can be chosen as in, for example, page 210 of Bertoin (1996)).
Fix any $a < 0$. By Lemma 2.2 of Loeffen et al. (2014) and spatial homogeneity,
\[
y \mapsto W^{(\alpha)}(y-a) \in \widetilde{V}_0(\alpha) \quad \text{and} \quad y \mapsto Z^{(\alpha)}(y-a) \in \widetilde{V}_0(\alpha). \tag{A1}
\]
Lemma 2.1 of Loeffen et al. (2014) shows that, for all $\alpha, \beta \ge 0$, $v_\alpha \in \widetilde{V}_0(\alpha)$ and $x \le b$,
\[
\mathbb{E}_x\big[e^{-\beta\tau_0^-}\, v_\alpha(X(\tau_0^-));\, \tau_0^- < \tau_b^+\big] = v_\alpha(x) - (\alpha-\beta)\int_0^x W^{(\beta)}(x-y)\, v_\alpha(y)\, \mathrm{d}y - \frac{W^{(\beta)}(x)}{W^{(\beta)}(b)}\Big( v_\alpha(b) - (\alpha-\beta)\int_0^b W^{(\beta)}(b-y)\, v_\alpha(y)\, \mathrm{d}y \Big). \tag{A2}
\]
Similar results under the excursion measure have been obtained in Theorem 5.1 of Avram et al. (2018) (see (A8) below).
In the following proofs, we need measure-changed versions of these theorems. For $\theta \ge 0$, let $\mathbb{P}^{\theta}$ be the measure under the Esscher transform:
\[
\frac{\mathrm{d}\mathbb{P}^{\theta}}{\mathrm{d}\mathbb{P}}\bigg|_{\mathcal{F}_t} = \exp\big(\theta X(t) - \psi(\theta) t\big), \qquad t \ge 0, \tag{A3}
\]
and $W_\theta$ and $Z_\theta$ be the corresponding scale functions. It is well known that:
\[
W_\theta^{(q - \psi(\theta))}(y) = e^{-\theta y}\, W^{(q)}(y), \qquad y \in \mathbb{R},\ q \ge 0. \tag{A4}
\]
Hence, we have:
\[
Z_\theta^{(q - \psi(\theta))}(y) = e^{-\theta y}\, Z^{(q)}(y, \theta), \qquad y \in \mathbb{R},\ \theta \ge 0. \tag{A5}
\]
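To see (A4)–(A5) in action, here is a minimal numerical sketch, assuming X is a Brownian motion with drift (so that the scale function is available in closed form); the parameter values and the helper function W are illustrative assumptions and not part of the paper.

```python
import numpy as np

# Assumption: X(t) = mu*t + sigma*B(t), with Laplace exponent psi(theta) = mu*theta + 0.5*sigma^2*theta^2.
mu, sigma = 1.0, 1.5

def psi(theta):
    return mu * theta + 0.5 * sigma**2 * theta**2

def W(q, x, drift=mu):
    """q-scale function of a Brownian motion with the given drift and volatility sigma:
    W^{(q)}(x) = (e^{Phi(q) x} - e^{rho(q) x}) / sqrt(drift^2 + 2*q*sigma^2), x >= 0."""
    disc = np.sqrt(drift**2 + 2.0 * q * sigma**2)
    phi, rho = (-drift + disc) / sigma**2, (-drift - disc) / sigma**2
    return (np.exp(phi * x) - np.exp(rho * x)) / disc

q, theta, y = 0.05, 0.3, 2.0
drift_theta = mu + sigma**2 * theta      # under the Esscher measure the drift becomes psi'(theta)
lhs = W(q - psi(theta), y, drift_theta)  # W_theta^{(q - psi(theta))}(y)
rhs = np.exp(-theta * y) * W(q, y)       # e^{-theta*y} * W^{(q)}(y), as in (A4)
print(lhs, rhs)                          # the two values agree up to floating-point error
```

The same change of drift can be used to check (A5) once $Z^{(q)}(\cdot,\theta)$ is evaluated by numerical integration.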

Appendix A.1. Proof of Lemma 1

Proof of (23): Fix a < 0 . Using the measure-changed version of (A2): if v q ψ ( θ ) V ˜ 0 ( q ψ ( θ ) ) under P θ , then for x b ,
e θ ( x a ) E x e ( q + r ) τ 0 + θ ( X ( τ 0 ) a ) v q ψ ( θ ) ( X ( τ 0 ) ) ; τ 0 < τ b + = E x θ e ( q + r ψ ( θ ) ) τ 0 v q ψ ( θ ) ( X ( τ 0 ) ) ; τ 0 < τ b + = v q ψ ( θ ) ( x ) + r 0 x W θ ( q + r ψ ( θ ) ) ( x y ) v q ψ ( θ ) ( y ) d y W θ ( q + r ψ ( θ ) ) ( x ) W θ ( q + r ψ ( θ ) ) ( b ) v q ψ ( θ ) ( b ) + r 0 b W θ ( q + r ψ ( θ ) ) ( b y ) v q ψ ( θ ) ( y ) d y .
Hence, because y Z θ ( q ψ ( θ ) ) ( y a ) V ˜ 0 ( q ψ ( θ ) ) under P θ as in (A1), with M a , θ ( q , r ) the measure-changed version of (20) such that:
M a , θ ( q , r ) f ( x ) : = f ( x a ) + r 0 x W θ ( q + r ψ ( θ ) ) ( x y ) f ( y a ) d y , x R ,
the left-hand side of (23) equals:
E x e ( q + r ) τ 0 + θ ( X ( τ 0 ) a ) Z θ ( q ψ ( θ ) ) ( X ( τ 0 ) a ) ; τ 0 < τ b + = e θ ( x a ) M a , θ ( q , r ) Z θ ( q ψ ( θ ) ) ( x ) W θ ( q + r ψ ( θ ) ) ( x ) W θ ( q + r ψ ( θ ) ) ( b ) M a , θ ( q , r ) Z θ ( q ψ ( θ ) ) ( b ) = Z a ( q , r ) ( x , θ ) W ( q + r ) ( x ) W ( q + r ) ( b ) Z a ( q , r ) ( b , θ ) ,
where the last equality holds because, for all y R , by (A5),
M a , θ ( q , r ) Z θ ( q ψ ( θ ) ) ( y ) = e θ ( y a ) Z ( q ) ( y a , θ ) + r 0 y e θ ( y z ) W ( q + r ) ( y z ) e θ ( z a ) Z ( q ) ( z a , θ ) d z = e θ ( y a ) Z ( q ) ( y a , θ ) + r 0 y W ( q + r ) ( y z ) Z ( q ) ( z a , θ ) d z = e θ ( y a ) Z a ( q , r ) ( y , θ ) .
Proof of (24): We first generalize the results for Theorem 6.1 of Avram et al. (2018). The result (ii) is then immediate by setting β = q + r and α = q ψ ( θ ) and observing that y Z θ ( q ψ ( θ ) ) ( y a ) V ˜ 0 ( q ψ ( θ ) ) under P θ , and by (A5),
E x e ( q + r ) τ ˜ 0 , b Z ( q ) ( Y ¯ b ( τ ˜ 0 , b ) a , θ ) = e θ a E x e ( q + r ) τ ˜ 0 , b + θ Y ¯ b ( τ ˜ 0 , b ) Z θ ( q ψ ( θ ) ) ( Y ¯ b ( τ ˜ 0 , b ) a ) .
Theorem A1.
Fix α , β 0 , θ 0 and b > 0 . Suppose v α : R [ 0 , ) and belongs to V ˜ 0 ( α ) under P θ . Assume also that v α is right-hand differentiable at b and sup 0 y b ( , 1 ] v α ( y + u ) e θ u Π ( d u ) < . In addition, for the case of unbounded variation, in (ii) for the definition of V ˜ 0 ( α ) above, v α , n ( b + ) n v α ( b + ) . Then, for q 0 and x b ,
E x e β τ ˜ 0 , b + θ Y ¯ b ( τ ˜ 0 , b ) v α ( Y ¯ b ( τ ˜ 0 , b ) ) = W ( β ) ( x ) W ( β ) ( b + ) e θ b ( v α ( b + ) θ v α ( b ) ) + ( α β + ψ ( θ ) ) 0 b e θ y W ( β ) ( b y ) v α ( y ) d y + e θ b W ( β ) ( 0 ) v α ( b ) + e θ x v α ( x ) ( α β + ψ ( θ ) ) 0 x W ( β ) ( x y ) e θ y v α ( y ) d y .
Proof. 
We consider the case of bounded variation. It can be extended to the unbounded variation case by approximation as in the proof of Theorem 6.1 of Avram et al. (2018). We also focus on the case 0 x b ; the case x < 0 is immediate.
Using the resolvent given in Theorem 1 (ii) of Pistorius (2004) and the compensation formula, we have:
E x e β τ ˜ 0 , b + θ Y ¯ b ( τ ˜ 0 , b ) v α ( Y ¯ b ( τ ˜ 0 , b ) ) = 0 ( , y ) e θ ( y + u ) v α ( y + u ) Π ( d u ) W ( β ) ( b y ) W ( β ) ( b + ) W ( β ) ( x ) W ( β ) ( x y ) d y + W ( β ) ( x ) W ( β ) ( 0 ) W ( β ) ( b + ) ( , b ) e θ ( b + u ) v α ( b + u ) Π ( d u ) .
By (19) of Loeffen et al. (2014), (A4) and because v α belongs to V ˜ 0 ( α ) under P θ by assumption, we have:
e θ b 0 b W ( β ) ( b y ) ( , y ) v α ( y + u ) e θ ( y + u ) Π ( d u ) d y = 0 b W θ ( β ψ ( θ ) ) ( b y ) ( , y ) v α ( y + u ) e θ u Π ( d u ) d y = c v α ( 0 ) W θ ( β ψ ( θ ) ) ( b ) v α ( b ) + ( α β + ψ ( θ ) ) 0 b W θ ( β ψ ( θ ) ) ( b y ) v α ( y ) d y ,
where we recall c = W θ ( α ) ( 0 ) as in (8). Therefore, by (A4),
0 b W ( β ) ( b y ) ( , y ) v α ( y + u ) e θ ( y + u ) Π ( d u ) d y = c v α ( 0 ) W ( β ) ( b ) e θ b v α ( b ) + ( α β + ψ ( θ ) ) 0 b e θ y W ( β ) ( b y ) v α ( y ) d y .
Taking the right-hand derivative with respect to b,
0 b W ( β ) ( b y ) ( , y ) v α ( y + u ) e θ ( y + u ) Π ( d u ) d y + W ( β ) ( 0 ) ( , b ) v α ( b + u ) e θ ( b + u ) Π ( d u ) = + + b 0 b W ( β ) ( b y ) ( , y ) v α ( y + u ) e θ ( y + u ) Π ( d u ) d y = c v α ( 0 ) W ( β ) ( b + ) e θ b ( v α ( b ) + θ v α ( b ) )                                                                + ( α β + ψ ( θ ) ) 0 b e θ y W ( β ) ( b y ) v α ( y ) d y + e θ b W ( β ) ( 0 ) v α ( b ) ;
to see how the derivative can be interchanged over the integral in the first equality, see the proof of Theorem 6.1 of Avram et al. (2018). Hence, substituting this in (A7) and after simplification, we have the claim. ☐

Appendix A.2. Proof of Lemma 5

Using the fact that $\{e_q > \tau_0^- > e_r\} \cup \{e_q \wedge e_r > \tau_0^-\} = \{e_q > \tau_0^-\}$ and that $e_q \wedge e_r$ is exponentially distributed with parameter $q + r$,
n E 3 = n e q τ 0 e ( q + r ) τ 0 E X ( τ 0 ) e q τ 0 + ; τ 0 + < τ a ; τ 0 < τ b + .
Here, by a small modification of Theorem 3 (ii) in Pardo et al. (2018) and using Proposition 1 in Chaumont and Doney (2005) and (A2), the right-hand side equals:
n ̲ e q τ 0 e ( q + r ) τ 0 E X ( τ 0 ) e q τ 0 + ; τ 0 + < τ a ; τ 0 < τ b + = lim x 0 W ( x ) 1 E x e q τ 0 e ( q + r ) τ 0 E X ( τ 0 ) e q τ 0 + ; τ 0 + < τ a ; τ 0 < τ b + = lim x 0 1 W ( x ) W ( q ) ( a )              × r 0 x W ( q + r ) ( x y ) W ( q ) ( y a ) d y W ( q ) ( x ) W ( q ) ( b a ) W ( q ) ( b ) W ( q + r ) ( x ) W a ( q , r ) ( b ) W ( q + r ) ( b ) ,
which equals the right-hand side of (56) by (58), as desired.

Appendix A.3. Proof of Lemma 7

Let $n_\theta$ be the excursion measure under the Esscher transform (A3). By Theorem 5.1 and Remark 5.1 in Avram et al. (2018), if $v_{q-\psi(\theta)} \in \widetilde{V}_0(q-\psi(\theta))$, $v_{q-\psi(\theta)}(0) = 0$ and it is differentiable at zero, then:
n θ e ( q + r ψ ( θ ) ) τ 0 v q ψ ( θ ) ( X ( τ 0 ) ) ; τ 0 < τ b + = v q ψ ( θ ) ( b ) + r 0 b W θ ( q + r ψ ( θ ) ) ( b y ) v q ψ ( θ ) ( y ) d y W θ ( q + r ψ ( θ ) ) ( b ) .
By (A5) and because y Z θ ( q ψ ( θ ) ) ( y a ) Z ( q ) ( a , θ ) W θ ( q ψ ( θ ) ) ( y a ) / W ( q ) ( a ) V ˜ 0 ( q ψ ( θ ) ) under P θ (by (A1) and because V ˜ 0 ( q ψ ( θ ) ) is a linear space),
n e ( q + r ) τ 0 Z ( q ) ( X ( τ 0 ) a , θ ) Z ( q ) ( a , θ ) W ( q ) ( X ( τ 0 ) a ) W ( q ) ( a ) ; τ 0 < τ b + = e θ a n θ e ( q + r ψ ( θ ) ) τ 0 Z θ ( q ψ ( θ ) ) ( X ( τ 0 ) a ) Z ( q ) ( a , θ ) W θ ( q ψ ( θ ) ) ( X ( τ 0 ) a ) W ( q ) ( a ) ; τ 0 < τ b + = e θ a W θ ( q + r ψ ( θ ) ) ( b ) M a , θ ( q , r ) Z θ ( q ψ ( θ ) ) ( b ) Z ( q ) ( a , θ ) W ( q ) ( a ) M a , θ ( q , r ) W θ ( q ψ ( θ ) ) ( b ) ,
which simplifies to J ^ a ( q , r ) ( b , θ ) / W ( q + r ) ( b ) by (A6) and because:
M a , θ ( q , r ) W θ ( q ψ ( θ ) ) ( b ) = e θ ( b a ) W ( q ) ( b a ) + r 0 b e θ ( b z ) W ( q + r ) ( b z ) e θ ( z a ) W ( q ) ( z a ) d z = e θ ( b a ) W ( q ) ( b a ) + r 0 b W ( q + r ) ( b z ) W ( q ) ( z a ) d z = e θ ( b a ) W a ( q , r ) ( b ) .

Appendix B. Proofs of Corollaries

Before we provide the proofs of the corollaries, we first state the following convergence results, which will be used throughout this Appendix. By (9), it is immediate that, for $q \ge 0$,
\[
\lim_{b\to\infty}\frac{W^{(q+r)\prime}(b+)}{W^{(q+r)}(b)} = \lim_{b\to\infty}\frac{W^{(q+r)}(b)}{\overline{W}^{(q+r)}(b)} = \Phi(q+r) \quad \text{and} \quad \lim_{b\to\infty}\frac{W^{(q+r)}(b)}{\overline{\overline{W}}^{(q+r)}(b)} = \Phi^2(q+r). \tag{A9}
\]
Furthermore, note that we can write, by (7) of Loeffen et al. (2014) and (3.4) of Pérez and Yamazaki (2018),
\[
W_a^{(q,r)}(x) = W^{(q+r)}(x-a) - r\int_0^{-a} W^{(q+r)}(x-u-a)\, W^{(q)}(u)\, \mathrm{d}u, \qquad \overline{Z}_a^{(q,r)}(x) = \overline{Z}^{(q+r)}(x-a) - r\int_0^{-a} W^{(q+r)}(x-u-a)\, \overline{Z}^{(q)}(u)\, \mathrm{d}u. \tag{A10}
\]
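Both the limits in (A9) and the first representation in (A10) are easy to check numerically. The sketch below does so for a Brownian motion with drift (all parameter values are illustrative assumptions); $W_a^{(q,r)}$ is computed both from the convolution expression appearing in the proof of Lemma A1 (i) below and from (A10).

```python
import numpy as np
from scipy.integrate import quad

# Assumption: X is a Brownian motion with drift mu and volatility sigma (illustrative values).
mu, sigma, q, r, a = 1.0, 1.5, 0.05, 0.4, -2.0

def Phi(p):                      # right inverse of the Laplace exponent psi
    return (-mu + np.sqrt(mu**2 + 2.0 * p * sigma**2)) / sigma**2

def W(p, x):                     # closed-form p-scale function of the Brownian motion
    if x < 0:
        return 0.0
    disc = np.sqrt(mu**2 + 2.0 * p * sigma**2)
    return (np.exp(Phi(p) * x) - np.exp((-mu - disc) / sigma**2 * x)) / disc

def Wbar(p, x):                  # \overline{W}^{(p)}(x)
    return quad(lambda y: W(p, y), 0.0, x)[0]

def Wbarbar(p, x):               # \overline{\overline{W}}^{(p)}(x)
    return quad(lambda y: Wbar(p, y), 0.0, x)[0]

# (A9): for large b the ratios should be close to Phi(q+r), Phi(q+r) and Phi(q+r)^2.
b, h = 25.0, 1e-6
print((W(q + r, b + h) - W(q + r, b)) / h / W(q + r, b), Phi(q + r))
print(W(q + r, b) / Wbar(q + r, b), Phi(q + r))
print(W(q + r, b) / Wbarbar(q + r, b), Phi(q + r) ** 2)

# First identity in (A10): the two representations of W_a^{(q,r)}(x) should agree.
x = 3.0
lhs = W(q, x - a) + r * quad(lambda y: W(q + r, x - y) * W(q, y - a), 0.0, x)[0]
rhs = W(q + r, x - a) - r * quad(lambda u: W(q + r, x - u - a) * W(q, u), 0.0, -a)[0]
print(lhs, rhs)
```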
Lemma A1.
Fix q 0 . (i) For x R , we have lim a [ W a ( q , r ) ( x ) / W ( q ) ( a ) ] = Z ( q + r ) ( x , Φ ( q ) ) .
(ii) For a < 0 , we have lim b [ W a ( q , r ) ( b ) / W ( q + r ) ( b ) ] = Z ( q ) ( a , Φ ( q + r ) ) .
(iii) For a < 0 and 0 θ < Φ ( q ) , we have lim b [ Z a ( q , r ) ( b , θ ) / W ( q + r ) ( b ) ] = Z ˜ ( q , r ) ( a , θ ) .
(iv) For a < 0 , we have:
lim b W ¯ a ( q , r ) ( b ) W ( q + r ) ( b ) = Z ˜ ( q , r ) ( a ) q r q Φ ( q + r ) ,
where it is understood for the case q = 0 that it goes to infinity.
(v) For a < 0 , we have
lim b Z ¯ a ( q , r ) ( b ) W ( q + r ) ( b ) = r Φ ( q + r ) Z ¯ ( q ) ( a ) + Z ˜ ( q , r ) ( a ) Φ ( q + r ) .
Proof. 
(i) By (9), we have:
lim a W a ( q , r ) ( x ) W ( q ) ( a ) = lim a W ( q ) ( x a ) + r 0 x W ( q + r ) ( x y ) W ( q ) ( y a ) d y W ( q ) ( a ) = e Φ ( q ) x + r 0 x e Φ ( q ) y W ( q + r ) ( x y ) d y = Z ( q + r ) ( x , Φ ( q ) ) .
(ii) By (9) and (A10), we have:
lim b W a ( q , r ) ( b ) W ( q + r ) ( b ) = e Φ ( q + r ) a 1 r 0 a e Φ ( q + r ) y W ( q ) ( y ) d y = Z ( q ) ( a , Φ ( q + r ) ) .
(iii) We have
lim b Z a ( q , r ) ( b , θ ) W ( q + r ) ( b ) = lim b Z ( q ) ( b a , θ ) + r 0 b W ( q + r ) ( b y ) Z ( q ) ( y a , θ ) d y W ( q + r ) ( b ) .
Because θ < Φ ( q ) < Φ ( q + r ) and by (9), we have lim b Z ( q ) ( b a , θ ) / W ( q + r ) ( b ) = 0 . On the other hand, by (9),
lim b 0 b W ( q + r ) ( b y ) Z ( q ) ( y a , θ ) d y W ( q + r ) ( b ) = 0 e Φ ( q + r ) y Z ( q ) ( y a , θ ) d y                                    = 0 e Φ ( q + r ) y + θ ( y a ) d y + ( q ψ ( θ ) ) 0 e Φ ( q + r ) y + θ ( y a ) 0 y a e θ z W ( q ) ( z ) d z d y .
For the first term we have 0 e Φ ( q + r ) y e θ ( y a ) d y = e θ a / ( Φ ( q + r ) θ ) . For the second term, using Fubini’s theorem,
0 e Φ ( q + r ) y + θ ( y a ) 0 y a e θ z W ( q ) ( z ) d z d y = e a θ 0 e [ Φ ( q + r ) θ ] y 0 y a e θ z W ( q ) ( z ) d z d y = e a θ 0 a 0 e [ Φ ( q + r ) θ ] y e θ z W ( q ) ( z ) d y d z + a z + a e [ Φ ( q + r ) θ ] y e θ z W ( q ) ( z ) d y d z = e a θ 0 a 1 Φ ( q + r ) θ e θ z W ( q ) ( z ) d z + a e [ Φ ( q + r ) θ ] ( z + a ) Φ ( q + r ) θ e θ z W ( q ) ( z ) d z = e a θ Φ ( q + r ) θ 0 a e θ z W ( q ) ( z ) d z + e Φ ( q + r ) a Φ ( q + r ) θ 1 r 0 a e Φ ( q + r ) z W ( q ) ( z ) d z .
Hence putting the pieces together we obtain that for θ < Φ ( q )
lim b Z a ( q , r ) ( b , θ ) W ( q + r ) ( b ) = r Φ ( q + r ) θ Z ( q ) ( a , θ ) + q ψ ( θ ) Φ ( q + r ) θ Z ( q ) ( a , Φ ( q + r ) ) = Z ˜ ( q , r ) ( a , θ ) .
(iv) Because we can write W ¯ a ( q , r ) ( b ) = [ Z a ( q , r ) ( b ) 1 r W ¯ ( q + r ) ( b ) ] / q , the result holds by (iii) and (A9).
(v) By (A9) and (A10),
lim b Z ¯ a ( q , r ) ( b ) W ( q + r ) ( b ) = lim b Z ¯ ( q + r ) ( b a ) r 0 a W ( q + r ) ( b u a ) Z ¯ ( q ) ( u ) d u W ( q + r ) ( b ) = e Φ ( q + r ) a q + r Φ 2 ( q + r ) r 0 a e Φ ( q + r ) u Z ¯ ( q ) ( u ) d u .
Here, applying integration by parts twice,
0 a e Φ ( q + r ) u Z ¯ ( q ) ( u ) d u = e Φ ( q + r ) a Z ¯ ( q ) ( a ) Φ ( q + r ) + 1 Φ ( q + r ) 0 a e Φ ( q + r ) u Z ( q ) ( u ) d u                       = e Φ ( q + r ) a Z ¯ ( q ) ( a ) Φ ( q + r ) + 1 Φ 2 ( q + r ) 1 e Φ ( q + r ) a Z ( q ) ( a ) + q 0 a e Φ ( q + r ) u W ( q ) ( u ) d u .
Hence, the right-hand side of (A11) equals:
r Z ¯ ( q ) ( a ) Φ ( q + r ) + 1 Φ 2 ( q + r ) q e Φ ( q + r ) a + r Z ( q ) ( a ) q r e Φ ( q + r ) a 0 a e Φ ( q + r ) u W ( q ) ( u ) d u   = r Z ¯ ( q ) ( a ) Φ ( q + r ) + Z ˜ ( q , r ) ( a ) Φ ( q + r ) .
 ☐
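As a numerical illustration of Lemma A1 (i), the sketch below evaluates $W_a^{(q,r)}(x)/W^{(q)}(-a)$ for increasingly negative $a$ in the Brownian example used above and compares it with $Z^{(q+r)}(x,\Phi(q)) = e^{\Phi(q)x} + r\int_0^x e^{\Phi(q)y} W^{(q+r)}(x-y)\,\mathrm{d}y$; all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

mu, sigma, q, r, x = 1.0, 1.5, 0.05, 0.4, 2.0   # illustrative parameters

def Phi(p):
    return (-mu + np.sqrt(mu**2 + 2.0 * p * sigma**2)) / sigma**2

def W(p, y):   # closed-form scale function of a Brownian motion with drift mu
    if y < 0:
        return 0.0
    disc = np.sqrt(mu**2 + 2.0 * p * sigma**2)
    return (np.exp(Phi(p) * y) - np.exp((-mu - disc) / sigma**2 * y)) / disc

def W_a(aa, y):   # W_a^{(q,r)}(y) = W^{(q)}(y-a) + r * int_0^y W^{(q+r)}(y-z) W^{(q)}(z-a) dz
    return W(q, y - aa) + r * quad(lambda z: W(q + r, y - z) * W(q, z - aa), 0.0, y)[0]

limit = np.exp(Phi(q) * x) + r * quad(lambda y: np.exp(Phi(q) * y) * W(q + r, x - y), 0.0, x)[0]
for a in [-2.0, -5.0, -10.0, -20.0]:
    print(a, W_a(a, x) / W(q, -a), limit)   # the ratio approaches the limit as a -> -infinity
```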
Lemma A2.
Fix q 0 and x R .
(i) We have lim a I a ( q , r ) ( x ) = I ( q , r ) ( x ) .
(ii) We have lim a ( I a ( q , r ) ) ( x ) = ( I ( q , r ) ) ( x ) , where it is understood for the case q = 0 that it goes to infinity.
Proof. 
(i) It is immediate by Lemma A1 (i). (ii) The proof follows because, by (9),
( W a ( q , r ) ) ( x + ) W ( q ) ( a ) = W ( q ) ( ( x a ) + ) W ( q ) ( a ) + r W ( q + r ) ( 0 ) W ( q ) ( x a ) W ( q ) ( a ) + r 0 x W ( q + r ) ( x y ) W ( q ) ( y a ) W ( q ) ( a ) d y a Φ ( q ) e Φ ( q ) x + r W ( q + r ) ( 0 ) e Φ ( q ) x + r 0 x e Φ ( q ) y W ( q + r ) ( x y ) d y ,
which equals Z ( q + r ) ( x , Φ ( q ) ) = Φ ( q ) Z ( q + r ) ( x , Φ ( q ) ) + r W ( q + r ) ( x ) by integration by parts. ☐
Lemma A3.
Fix q 0 and a < 0 . (i) We have:
lim b I a ( q , r ) ( b ) W ( q + r ) ( b ) = Z ( q ) ( a , Φ ( q + r ) ) W ( q ) ( a ) r Φ ( q + r ) = Z ( q ) ( a , Φ ( q + r ) ) W ( q ) ( a ) Φ ( q + r ) .
(ii) For 0 θ < Φ ( q ) ,
lim b J a ( q , r ) ( b , θ ) W ( q + r ) ( b ) = Z ˜ ( q , r ) ( a , θ ) r Z ( q ) ( a , θ ) Φ ( q + r ) ,
where in particular:
lim b J a ( q , r ) ( b ) W ( q + r ) ( b ) = q Z ( q ) ( a , Φ ( q + r ) ) Φ ( q + r ) .
(iii) We have:
lim b K a ( q , r ) ( b ) W ( q + r ) ( b ) = 1 Φ ( q + r ) Z ˜ ( q , r ) ( a ) ψ ( 0 + ) Z ( q ) ( a , Φ ( q + r ) ) .
Proof. 
(i) By Lemma A1 (ii) and (A9),
I a ( q , r ) ( b ) W ( q + r ) ( b ) = W a ( q , r ) ( b ) W ( q + r ) ( b ) W ( q ) ( a ) r W ¯ ( q + r ) ( b ) W ( q + r ) ( b ) b Z ( q ) ( a , Φ ( q + r ) ) W ( q ) ( a ) r Φ ( q + r ) .
(ii) By Lemma A1 (iii) and (A9), we have:
J a ( q , r ) ( b , θ ) W ( q + r ) ( b ) = Z a ( q , r ) ( b , θ ) W ( q + r ) ( b ) r Z ( q ) ( a , θ ) W ¯ ( q + r ) ( b ) W ( q + r ) ( b ) b Z ˜ ( q , r ) ( a , θ ) r Z ( q ) ( a , θ ) Φ ( q + r ) .
The case θ = 0 holds by (19).
(iii) By Lemma A1 (ii) and (v) and (A9),
lim b K a ( q , r ) ( b ) W ( q + r ) ( b ) = r l ( q ) ( a ) Φ ( q + r ) + lim b Z ¯ a ( q , r ) ( b ) W ( q + r ) ( b ) ψ ( 0 + ) lim b W ¯ a ( q , r ) ( b ) W ( q + r ) ( b ) = 1 Φ ( q + r ) r ψ ( 0 + ) W ¯ ( q ) ( a ) + Z ˜ ( q , r ) ( a ) ψ ( 0 + ) Φ ( q + r ) Z ˜ ( q , r ) ( a ) r q = 1 Φ ( q + r ) r ψ ( 0 + ) Z ( q ) ( a ) q + Z ˜ ( q , r ) ( a ) ψ ( 0 + ) Φ ( q + r ) Z ˜ ( q , r ) ( a ) q = 1 Φ ( q + r ) Z ˜ ( q , r ) ( a ) ψ ( 0 + ) Z ( q ) ( a , Φ ( q + r ) ) .
 ☐

Appendix B.1. Proof of Corollary 1

(i) In view of Theorem 1, it is immediate upon taking $a \to -\infty$, by monotone convergence and Lemma A2 (i). The convergence (28) is confirmed in Lemma A2 (i).
(ii) Similarly, it suffices to take $b \to \infty$. In addition, by Lemma A3 (i) and (A9),
lim b W ¯ ¯ ( q + r ) ( b ) I a ( q , r ) ( b ) = lim b W ¯ ¯ ( q + r ) ( b ) W ( q + r ) ( b ) lim b W ( q + r ) ( b ) I a ( q , r ) ( b ) = 1 Φ ( q + r ) W ( q ) ( a ) Z ( q ) ( a , Φ ( q + r ) ) .
(iii) We shall show the claim for the case $q > 0$; the case $q = 0$ holds by monotone convergence. By monotone convergence, it suffices to take $b \to \infty$ in (i). By (A9), this boils down to computing:
lim b W ¯ ¯ ( q + r ) ( b ) I ( q , r ) ( b ) = 1 Φ 2 ( q + r ) lim b W ( q + r ) ( b ) I ( q , r ) ( b ) .
In addition, by (9),
I ( q , r ) ( b ) W ( q + r ) ( b ) = e Φ ( q ) b + r 0 b e Φ ( q ) z W ( q + r ) ( b z ) d z r W ¯ ( q + r ) ( b ) W ( q + r ) ( b )                                        b r 0 e Φ ( q ) z e Φ ( q + r ) z d z 1 Φ ( q + r ) = r Φ ( q ) ( Φ ( q + r ) Φ ( q ) ) Φ ( q + r ) .

Appendix B.2. Proof of Corollary 2

(i) In view of Theorem 2, it is immediate upon taking $a \to -\infty$, by monotone convergence and Lemma A2 (i). (ii) It is immediate by setting $q = 0$ and $\Phi(q) = 0$ in (i) and noticing that, in this case, $I^{(0,r)}(x) = 1$ uniformly in $x$.

Appendix B.3. Proof of Corollary 3

(i) We shall show the claim for the case $0 \le \theta < \Phi(q)$; the case $\theta \ge \Phi(q)$ holds by analytic continuation. In view of Theorem 3, by monotone convergence, it suffices to take $b \to \infty$. By Lemma A3 (i) and (ii), we have the claim.
(ii) By taking θ = 0 and q = 0 in (i), we obtain the claim in view of (27).

Appendix B.4. Proof of Corollary 4

(i) By (10) and (11), and monotone convergence,
lim θ Z ( q ) ( x a , θ ) Z ( q ) ( a , θ ) W ( q ) ( a ) W ( q ) ( x a ) = lim θ E x a e q τ 0 + θ X ( τ 0 ) ; τ 0 < W ( q ) ( x a ) W ( q ) ( a ) lim θ E a e q τ 0 + θ X ( τ 0 ) ; τ 0 < = E x a e q τ 0 ; X ( τ 0 ) = 0 , τ 0 < W ( q ) ( x a ) W ( q ) ( a ) E a e q τ 0 ; X ( τ 0 ) = 0 , τ 0 < = σ 2 2 W ( q ) ( x a ) Φ ( q ) W ( q ) ( x a ) W ( q ) ( x a ) W ( q ) ( a ) W ( q ) ( a ) Φ ( q ) W ( q ) ( a ) = C a ( q ) ( x a ) .
This implies that:
lim θ J ^ a ( q , r ) ( x , θ ) = lim θ M a ( q , r ) Z ( q ) ( x , θ ) Z ( q ) ( a , θ ) W ( q ) ( a ) W ( q ) ( x )                                = C a ( q ) ( x a ) + r 0 x W ( q + r ) ( x y ) C a ( q ) ( y a ) d y = M a ( q , r ) C a ( q ) ( x ) .
Here, the limit can go into the integral because, by (A12), sup 0 y x | Z ( q ) ( y a , θ ) Z ( q ) ( a , θ ) W ( q ) ( y a ) / W ( q ) ( a ) | 1 + W ( q ) ( x a ) / W ( q ) ( a ) uniformly in θ 0 .
Hence, taking $\theta \to \infty$ in Theorem 3, we have:
w ( x , a , b ) = M a ( q , r ) C a ( q ) ( x ) I a ( q , r ) ( x ) I a ( q , r ) ( b ) M a ( q , r ) C a ( q ) ( b ) .
Because:
M a ( q , r ) C a ( q ) ( x ) = C a ( q , r ) ( x ) σ 2 2 I a ( q , r ) ( x ) W ( q ) ( a ) ,
we have the claim.
(ii) By (A9) and (9), we have:
lim b M a ( q , r ) W ( q ) ( b ) W ( q + r ) ( b ) = lim b 1 W ( q + r ) ( b ) W ( q ) ( b a ) + r 0 b W ( q + r ) ( b y ) W ( q ) ( y a ) d y = r 0 e Φ ( q + r ) y W ( q ) ( y a ) d y ,
where integration by parts gives:
0 e Φ ( q + r ) y W ( q ) ( y a ) d y = e Φ ( q + r ) a a e Φ ( q + r ) z W ( q ) ( z ) d z = W ( q ) ( a ) + 1 r Φ ( q + r ) Z ( q ) ( a , Φ ( q + r ) ) = 1 r Z ( q ) ( a , Φ ( q + r ) ) .
This together with (A9) shows:
lim b C a ( q , r ) ( b ) W ( q + r ) ( b ) = σ 2 2 Z ( q ) ( a , Φ ( q + r ) ) r W ( q ) ( a ) Φ ( q + r ) .
Now, the proof is complete because, by Lemma A3 (i),
C a ( q , r ) ( b ) I a ( q , r ) ( b ) b σ 2 2 Z ( q ) ( a , Φ ( q + r ) ) r W ( q ) ( a ) Φ ( q + r ) W ( q ) ( a ) Φ ( q + r ) Z ( q ) ( a , Φ ( q + r ) ) = W ( q ) ( a ) σ 2 2 Φ ( q + r ) r W ( q ) ( a ) Z ( q ) ( a , Φ ( q + r ) ) .

Appendix B.5. Proof of Corollary 5

For θ > 0 and x R ,
Z ( q ) ( x , θ ) θ = x Z ( q ) ( x , θ ) e θ x ψ ( θ ) 0 x e θ z W ( q ) ( z ) d z + ( q ψ ( θ ) ) 0 x e θ z z W ( q ) ( z ) d z .
Because integration by parts gives 0 x y W ( q ) ( y ) d y = x W ¯ ( q ) ( x ) W ¯ ¯ ( q ) ( x ) ,
lim θ 0 Z ( q ) ( x , θ ) θ = x 1 + q W ¯ ( q ) ( x ) ψ ( 0 + ) W ¯ ( q ) ( x ) q 0 x z W ( q ) ( z ) d z = l ( q ) ( x ) .
Hence,
lim θ 0 Z a ( q , r ) ( x , θ ) θ = lim θ 0 Z ( q ) ( x a , θ ) θ + r 0 x W ( q + r ) ( x y ) lim θ 0 Z ( q ) ( y a , θ ) θ d y = l a ( q , r ) ( x ) .
Hence, K a ( q , r ) ( x ) = lim θ 0 ( J a ( q , r ) ( x , θ ) / θ ) , and the result holds by Theorem 3.
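The integration-by-parts identity $\int_0^x y\, W^{(q)}(y)\,\mathrm{d}y = x\overline{W}^{(q)}(x) - \overline{\overline{W}}^{(q)}(x)$ used in the proof above is also easy to confirm numerically; here is a minimal sketch for the Brownian scale function (illustrative parameters only).

```python
import numpy as np
from scipy.integrate import quad

mu, sigma, q, x = 1.0, 1.5, 0.05, 3.0   # illustrative parameters

def W(p, y):   # closed-form scale function of a Brownian motion with drift mu
    disc = np.sqrt(mu**2 + 2.0 * p * sigma**2)
    return (np.exp((-mu + disc) / sigma**2 * y) - np.exp((-mu - disc) / sigma**2 * y)) / disc

Wbar = lambda y: quad(lambda z: W(q, z), 0.0, y)[0]    # \overline{W}^{(q)}(y)
Wbarbar = quad(Wbar, 0.0, x)[0]                        # \overline{\overline{W}}^{(q)}(x)
lhs = quad(lambda y: y * W(q, y), 0.0, x)[0]
rhs = x * Wbar(x) - Wbarbar
print(lhs, rhs)   # the two values agree up to quadrature error
```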

Appendix B.6. Proof of Corollary 6

In view of Corollary 5, by monotone convergence, it suffices to take $b \to \infty$. Now, the result holds by Lemma A3 (i) and (iii).

Appendix B.7. Proof of Corollary 7

For the case $q > 0$, in view of Proposition 1, it is immediate upon taking $a \to -\infty$, by monotone convergence and Lemma A2 (i) and (ii). The case $q = 0$ holds by monotone convergence upon taking $q \downarrow 0$.

Appendix B.8. Proof of Corollary 8

For the case $q > 0$, in view of Proposition 2, it is immediate upon taking $a \to -\infty$, by monotone convergence and Lemma A2 (i) and (ii). The case $q = 0$ holds by monotone convergence upon taking $q \downarrow 0$.

Appendix B.9. Proof of Corollary 9

We shall take $\lim_{\theta \downarrow 0} \tilde{h}(x,a,b,\theta)/\theta$ in Proposition 3. By (A13), it can be confirmed that:
lim θ 0 θ Z ( q ) ( x , θ ) = lim θ 0 x θ Z ( q ) ( x , θ ) = Z ( q ) ( x ) ψ ( 0 + ) W ( q ) ( x ) = x lim θ 0 θ Z ( q ) ( x , θ ) .
Hence, ( K a ( q , r ) ) ( x ) = lim θ 0 ( ( J a ( q , r ) ) ( x , θ ) / θ ) and by modifying the proof of Corollary 5, we have the result.

Appendix B.10. Proof of Corollary 10

For the case $q > 0$, in view of Proposition 4, by monotone convergence, it is immediate by Lemma A3 (ii) and (A9). The case $q = 0$ holds by monotone convergence upon taking $q \downarrow 0$.

Appendix B.11. Proof of Corollary 11

In view of Proposition 5, by monotone convergence, it suffices to take $b \to \infty$. Using Lemma A3 (ii) and (iii), we have that:
lim b K a ( q , r ) ( b ) J a ( q , r ) ( b ) = Z ˜ ( q , r ) ( a ) ψ ( 0 + ) Z ( q ) ( a , Φ ( q + r ) ) q Z ( q ) ( a , Φ ( q + r ) ) .
Hence,
lim b H a ( q , r ) ( b ) J a ( q , r ) ( b ) = lim b K a ( q , r ) ( b ) J a ( q , r ) ( b ) l ( q ) ( a ) Z ( q ) ( a ) = Z ˜ ( q , r ) ( a ) ψ ( 0 + ) Z ( q ) ( a , Φ ( q + r ) ) q Z ( q ) ( a , Φ ( q + r ) ) l ( q ) ( a ) Z ( q ) ( a ) = 1 q Z ˜ ( q , r ) ( a ) Z ( q ) ( a , Φ ( q + r ) ) q Z ¯ ( q ) ( a ) + ψ ( 0 + ) Z ( q ) ( a ) .
Hence, putting the pieces together, we have:
E x [ 0 , ) e q t d R r a ( t ) = 1 q Z ˜ ( q , r ) ( a ) Z ( q ) ( a , Φ ( q + r ) ) q Z ¯ ( q ) ( a ) + ψ ( 0 + ) Z ( q ) ( a ) J a ( q , r ) ( x ) H a ( q , r ) ( x ) ,
which equals:
1 q r Z ( q ) ( a ) + q Z ( q ) ( a , Φ ( q + r ) ) Φ ( q + r ) Z ( q ) ( a , Φ ( q + r ) ) q Z ¯ ( q ) ( a ) + ψ ( 0 + ) Z ( q ) ( a ) Z a ( q , r ) ( x ) r Z ( q ) ( a ) W ¯ ( q + r ) ( x )   l a ( q , r ) ( x ) l ( q ) ( a ) Z ( q ) ( a ) Z a ( q , r ) ( x )   = r Z ( q ) ( a ) q Φ ( q + r ) Z ( q ) ( a , Φ ( q + r ) ) + 1 Φ ( q + r ) Z a ( q , r ) ( x ) r Z ( q ) ( a ) W ¯ ( q + r ) ( x )   + Z ¯ ( q ) ( a ) + ψ ( 0 + ) q r W ¯ ( q + r ) ( x ) l a ( q , r ) ( x ) + l ( q ) ( a ) Z ¯ ( q ) ( a ) ψ ( 0 + ) / q Z ( q ) ( a ) Z a ( q , r ) ( x ) .
Here, we have:
l ( q ) ( a ) Z ¯ ( q ) ( a ) ψ ( 0 + ) / q Z ( q ) ( a ) = ψ ( 0 + ) q
and:
l a ( q , r ) ( x ) = Z ¯ a ( q , r ) ( x ) ψ ( 0 + ) q Z a ( q , r ) ( x ) + ψ ( 0 + ) q + r ψ ( 0 + ) q W ¯ ( q + r ) ( x ) .
Substituting these, we have:
E x [ 0 , ) e q t d R r a ( t )   = r Z ( q ) ( a ) q Φ ( q + r ) Z ( q ) ( a , Φ ( q + r ) ) + 1 Φ ( q + r ) Z a ( q , r ) ( x ) r Z ( q ) ( a ) W ¯ ( q + r ) ( x )   + Z ¯ ( q ) ( a ) + ψ ( 0 + ) q r W ¯ ( q + r ) ( x )   Z ¯ a ( q , r ) ( x ) ψ ( 0 + ) q Z a ( q , r ) ( x ) + ψ ( 0 + ) q + r ψ ( 0 + ) q W ¯ ( q + r ) ( x ) ψ ( 0 + ) q Z a ( q , r ) ( x )   = r Z ( q ) ( a ) q Φ ( q + r ) Z ( q ) ( a , Φ ( q + r ) ) + 1 Φ ( q + r ) Z a ( q , r ) ( x ) r Z ( q ) ( a ) W ¯ ( q + r ) ( x )   + r Z ¯ ( q ) ( a ) W ¯ ( q + r ) ( x ) Z ¯ a ( q , r ) ( x ) + ψ ( 0 + ) q .

References

1. Albrecher, Hansjörg, Eric C.K. Cheung, and Stefan Thonhauser. 2011. Randomized observation periods for the compound Poisson risk model: Dividends. ASTIN Bulletin 41: 645–72.
2. Albrecher, Hansjörg, Jevgenijs Ivanovs, and Xiaowen Zhou. 2016. Exit identities for Lévy processes observed at Poisson arrival times. Bernoulli 22: 1364–82.
3. Avanzi, Benjamin, Eric C.K. Cheung, Bernard Wong, and Jae-Kyung Woo. 2013. On a periodic dividend barrier strategy in the dual model with continuous monitoring of solvency. Insurance: Mathematics and Economics 52: 98–113.
4. Avanzi, Benjamin, Vincent Tu, and Bernard Wong. 2014. On optimal periodic dividend strategies in the dual model with diffusion. Insurance: Mathematics and Economics 55: 210–24.
5. Avanzi, Benjamin, Vincent Tu, and Bernard Wong. 2016. On the interface between optimal periodic and continuous dividend strategies in the presence of transaction costs. ASTIN Bulletin 46: 709–46.
6. Avram, Florin, Andreas E. Kyprianou, and Martijn R. Pistorius. 2004. Exit problems for spectrally negative Lévy processes and applications to (Canadized) Russian options. Annals of Applied Probability 14: 215–38.
7. Avram, Florin, Zbigniew Palmowski, and Martijn R. Pistorius. 2007. On the optimal dividend problem for a spectrally negative Lévy process. Annals of Applied Probability 17: 156–80.
8. Avram, Florin, José-Luis Pérez, and Kazutoshi Yamazaki. 2018. Spectrally negative Lévy processes with Parisian reflection below and classical reflection above. Stochastic Processes and their Applications 128: 255–90.
9. Bertoin, Jean. 1996. Lévy Processes. Cambridge: Cambridge University Press.
10. Chan, Terence, Andreas E. Kyprianou, and Mladen Savov. 2011. Smoothness of scale functions for spectrally-negative Lévy processes. Probability Theory and Related Fields 150: 691–708.
11. Chaumont, Loïc, and Ronald Doney. 2005. On Lévy processes conditioned to stay positive. Electronic Journal of Probability 10: 948–61.
12. Kuznetsov, Alexey, Andreas E. Kyprianou, and Victor Rivero. 2013. The theory of scale functions for spectrally negative Lévy processes. In Lévy Matters II, Springer Lecture Notes in Mathematics. Berlin: Springer.
13. Kyprianou, Andreas E. 2006. Introductory Lectures on Fluctuations of Lévy Processes with Applications. Berlin: Springer.
14. Kyprianou, Andreas E., and Ronnie L. Loeffen. 2010. Refracted Lévy processes. Annales de l'Institut Henri Poincaré 46: 24–44.
15. Kyprianou, Andreas E., Juan Carlos Pardo, and José Luis Pérez. 2014. Occupation times of refracted Lévy processes. Journal of Theoretical Probability 27: 1292–315.
16. Loeffen, Ronnie. 2008. On optimality of the barrier strategy in de Finetti's dividend problem for spectrally negative Lévy processes. The Annals of Applied Probability 18: 1669–80.
17. Loeffen, Ronnie, Jean-François Renaud, and Xiaowen Zhou. 2014. Occupation times of intervals until first passage times for spectrally negative Lévy processes with applications. Stochastic Processes and their Applications 124: 1408–35.
18. Noba, Kei, José-Luis Pérez, Kazutoshi Yamazaki, and Kouji Yano. 2017. On optimal periodic dividend and capital injection strategies for spectrally negative Lévy models. arXiv arXiv:1801.00088.
19. Noba, Kei, José-Luis Pérez, Kazutoshi Yamazaki, and Kouji Yano. 2018. On optimal periodic dividend strategies for Lévy risk processes. Insurance: Mathematics and Economics 80: 29–44.
20. Pardo, Juan Carlos, José-Luis Pérez, and Víctor Manuel Rivero. 2018. The excursion measure away from zero for spectrally negative Lévy processes. Annales de l'Institut Henri Poincaré 54: 75–99.
21. Pérez, José-Luis, and Kazutoshi Yamazaki. 2017. On the optimality of periodic barrier strategies for a spectrally positive Lévy process. Insurance: Mathematics and Economics 77: 1–13.
22. Pérez, José-Luis, and Kazutoshi Yamazaki. 2018. On the refracted-reflected spectrally-negative Lévy processes. Stochastic Processes and their Applications 128: 306–31.
23. Pérez, José-Luis, and Kazutoshi Yamazaki. Forthcoming. Optimality of hybrid continuous and periodic barrier strategies in the dual model. Applied Mathematics & Optimization.
24. Pistorius, Martijn R. 2003. On doubly reflected completely asymmetric Lévy processes. Stochastic Processes and their Applications 107: 131–43.
25. Pistorius, Martijn R. 2004. On exit and ergodicity of the spectrally one-sided Lévy process reflected at its infimum. Journal of Theoretical Probability 17: 183–220.
26. Revuz, Daniel, and Marc Yor. 1999. Continuous Martingales and Brownian Motion. Berlin and Heidelberg: Springer, vol. 293.
