
Rationality Parameter for Exercising American Put

by Kamille Sofie Tågholt Gad and Jesper Lund Pedersen
Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark
* Authors to whom correspondence should be addressed.
Risks 2015, 3(2), 103-111; https://doi.org/10.3390/risks3020103
Submission received: 23 February 2015 / Revised: 20 April 2015 / Accepted: 12 May 2015 / Published: 20 May 2015

Abstract
In this paper, irrational exercise behavior of the buyer of an American put is characterized by a single parameter. We model irrational exercise rules as the first jump time of a point process with stochastic intensity. Through the rationality parameter, we parameterize a family of stochastic intensities that depend on the value of the put itself. We present a probabilistic proof that the value of the American put under the irrational exercise rule converges to the arbitrage-free price as the rationality parameter tends to infinity. Another application of this result is the penalty method for approximating the price of an American put.

1. Introduction

The buyer of an American put can exercise at any time of his choice within the term of the contract. The arbitrage-free value of the American put is formulated as an optimal stopping problem (see Karatzas and Shreve [1]), where the optimal stopping time is an optimal exercise rule for the buyer of the American put. Empirical studies (see, e.g., Diz and Finucane [2] and Poteshman and Serbin [3]) show that a large number of exercises are irrational. The irrational exercises may have various reasons: the buyer may not have the correct input for the model, he may not monitor his position sufficiently, or he may hold the American put as part of a hedge in which it is not optimal to apply the optimal exercise rule. Irrational exercise rules will tend to cause overvaluation of the American put.
In the present paper, we develop a methodology that takes irrational exercise behavior into consideration. In line with the game-theoretic approach to irrational decision making (see, e.g., Chen et al. [4]), we characterize the rationality of the buyer of the American put by a parameter such that the exercise behavior converges to the optimal exercise rule (i.e., full rationality) as the rationality parameter approaches infinity. We use an intensity-based model for the valuation of American puts in which the exercise rule is modeled as the first jump time of a point process with stochastic intensity. We let the exercise intensity depend, through a rationality parameter, on how profitable it is to exercise. This profitability we measure as the difference between the pay-off and the value of the American put if it is not exercised yet. The parameter measures how strongly the exercise intensity is affected by the profitability, and for that reason we call it a rationality parameter. The main contribution of the present paper is a probabilistic proof of the following convergence result: under mild restrictions, the value of the American put in the intensity-based model converges to the arbitrage-free value as the rationality parameter converges to infinity. The proof decomposes the value of the American put in the intensity-based model into the arbitrage-free price and losses coming from, respectively, exercising when it is not optimal and failing to exercise when it is optimal.
An intensity-based approach has been used for the valuation of executive stock options by, e.g., Jennergren and Naslund [5] and by Carr and Linetsky [6]. In the latter paper the exercise intensity depends on time and the underlying stock. Dai et al. [7] model the mortgagor's prepayment in mortgage loans and the issuer's call in the American warrant as an event risk where the intensity of prepayment or calling depends on the value; this may be viewed as the first example in Section 3. Moreover, as also pointed out by Dai et al. [7], the valuation equation (see Equation (3) below) may be viewed as the penalty method (see Forsyth and Vetzal [8]) for approximating the value of the American put.
The paper is structured as follows. In Section 2, we introduce the rationality parameter for exercising the American put and show the convergence result: the value of the American put under the irrational exercise rule converges to the arbitrage-free value as the rationality parameter converges to infinity. In Section 3, we derive valuation equations for the American put under the exercise strategies considered in Section 2.

2. Rationality Parameter for Exercising

We assume a Black-Scholes market in which the underlying stock price satisfies the stochastic differential equation (under the risk-neutral probability measure)
$$dS_u = r S_u\,du + \sigma S_u\,dW_u$$
for $u \ge t$ with $S_t = s$ under $P_{t,s}$. Here $r$ is a constant interest rate, $\sigma > 0$ is a constant volatility, and $W$ is a Brownian motion.
Consider an American put with strike price $K$ and maturity date $T$, written on the stock; thus the pay-off process is $(K - S_t)^+$. The arbitrage-free value $P^A$ of the American put is given as an optimal stopping problem
$$P^A(t,s) = \sup_{t \le \tau \le T} E_{t,s}\big[e^{-r(\tau - t)} (K - S_\tau)^+\big] = E_{t,s}\big[e^{-r(\tau^* - t)} (K - S_{\tau^*})^+\big]$$
where the supremum is taken over all exercise rules (stopping times) with values in $[t,T]$. Furthermore, there is an optimal exercise rule $\tau^*$ for which the supremum is attained. It is determined by an optimal stopping boundary $b(\cdot)$ through
$$\tau^* = \inf\{\, t \le u \le T : S_u \le b(u) \,\}$$
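Since the rest of the paper measures exercise strategies against $P^A$, it is worth noting that the optimal stopping problem above is straightforward to approximate numerically. The sketch below is ours, not the paper's, and all parameter values are illustrative; it uses a standard Cox-Ross-Rubinstein binomial tree with backward induction, taking the maximum of continuation and exercise value at every node.

```python
import math

def american_put_binomial(s, K, r, sigma, T, t=0.0, n=500):
    """Approximate the arbitrage-free American put value P^A(t, s)
    with a Cox-Ross-Rubinstein binomial tree (backward induction)."""
    dt = (T - t) / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal pay-offs (K - S_T)^+ on the tree
    values = [max(K - s * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (q * values[j + 1] + (1 - q) * values[j])
            exercise = max(K - s * u**j * d**(i - j), 0.0)
            values[j] = max(cont, exercise)  # optimal stopping: max of the two
    return values[0]
```

For the benchmark case $s = K = 100$, $r = 0.05$, $\sigma = 0.2$, $T = 1$ this gives a value of roughly 6.1, noticeably above the corresponding European put.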
We define an irrational exercise rule $\tau$ as the minimum of the first jump time of a point process with stochastic intensity $(\mu_u)_{t \le u \le T}$ (see Brémaud [9]) and the terminal time $T$. The value of the American put exercised at time $\tau \in [t,T]$ is given by
$$P(t,s;\tau) = E_{t,s}\big[e^{-r(\tau - t)} (K - S_\tau)^+\big]$$
We introduce a single, strictly positive parameter $\theta$ that measures how rational the behavior of the holder of the American put is. This is done in the following way: we let $\theta$ index a family of intensity functions $f_\theta$, and thus a family of exercise strategies $\tau^\theta$. We want the corresponding value of the American put to converge to the arbitrage-free price as $\theta$ converges to infinity. This leads to the following definition.
Definition 1. Let $(\tau^\theta)$ be a family of irrational exercise rules indexed by $\theta > 0$ and denote the corresponding values by
$$P^\theta(t,s) = P(t,s;\tau^\theta)$$
We say $\theta$ is a rationality parameter for exercising an American put if
$$P^\theta(t,s) \to P^A(t,s) \quad \text{as } \theta \to \infty$$
We want to model that the holder's decision to exercise the option at any given time is affected by how profitable exercising is. The relation between the profitability and the stochastic exercise intensity is given by
$$\mu_u = f\big((K - S_u)^+ - P(u, S_u; \tau)\big)$$
where $f : [-K, K] \to [0, \infty)$ is an intensity function. Thus, the profitability is measured as the difference between the pay-off and the value of the American put if the holder does not exercise immediately.
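To make the intensity mechanism concrete, the following sketch (ours, and deliberately simplified) simulates such an exercise rule on a time grid. The true profitability involves $P(u, S_u; \tau)$ itself, which is a fixed point; as a loudly labeled shortcut we substitute the Black-Scholes European put value as a stand-in continuation value, so the example only illustrates the point-process mechanics, not the paper's exact valuation.

```python
import math
import random

def european_put(s, K, r, sigma, tau):
    """Black-Scholes European put; used here ONLY as a stand-in proxy
    for the continuation value P(u, S_u; tau), which in the paper is
    itself defined through the exercise rule (a fixed point)."""
    if tau <= 0:
        return max(K - s, 0.0)
    d1 = (math.log(s / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return K * math.exp(-r * tau) * N(-d2) - s * N(-d1)

def irrational_exercise_time(f, s0, K, r, sigma, T, dt=1e-3, seed=0):
    """First jump of a point process whose intensity is
    f(pay-off minus continuation-value proxy), capped at T.
    Euler grid: a jump occurs in [u, u+dt) w.p. 1 - exp(-mu_u * dt)."""
    rng = random.Random(seed)
    s, u = s0, 0.0
    while u < T:
        profit = max(K - s, 0.0) - european_put(s, K, r, sigma, T - u)
        mu = f(profit)
        if rng.random() < 1.0 - math.exp(-mu * dt):
            return u  # exercise at the first jump time
        z = rng.gauss(0.0, 1.0)  # risk-neutral GBM step for the stock
        s *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        u += dt
    return T  # no jump before maturity
```

With $f_\theta(x) = \theta\,1(x \ge 0)$, a large $\theta$ forces exercise almost immediately once the pay-off exceeds the continuation value, while $f \equiv 0$ never exercises before $T$.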
Theorem 2 below is the main result of this paper. It gives sufficient conditions for an index of a family of intensity functions to be a rationality parameter. The proof consists of a probabilistic analysis of the exercise strategies. The key idea is to define events that categorize how profitable an exercise strategy turns out to be upon exercise. Given some tolerance $\varepsilon_1 > 0$, we use the following definitions:
$$\{\tau\ \text{good}\} = \{(K - S_\tau)^+ - P(\tau, S_\tau; \tau) \ge 0\}$$
$$\{\tau\ \text{ok}\} = \{(K - S_\tau)^+ - P(\tau, S_\tau; \tau) \in [-\varepsilon_1, 0)\}$$
$$\{\tau\ \text{bad}\} = \{(K - S_\tau)^+ - P(\tau, S_\tau; \tau) < -\varepsilon_1\}$$
Theorem 2. Let $(f_\theta)_{\theta > 0}$ be a family of positive, deterministic intensity functions and for each $\theta > 0$ let a stochastic intensity process be given by
$$\mu_u^\theta = f_\theta\big((K - S_u)^+ - P^\theta(u, S_u)\big)$$
where $P^\theta(t,s) = P(t,s;\tau^\theta)$ and $\tau^\theta$ is the exercise strategy of the American put given as the first jump time of a point process with intensity $\mu^\theta$.
Let $\nu_\theta(x) = 1(x < 0)\,\sup_{y \le x} f_\theta(y) + 1(x \ge 0)\,\inf_{y \ge x} f_\theta(y)$ and suppose that:
  • $\nu_\theta(0+) \to \infty$ as $\theta \to \infty$.
  • There exists a function $\varepsilon : (0,\infty) \to (0,\infty)$ such that $\nu_\theta(-\varepsilon(\theta)) \to 0$ and $\varepsilon(\theta)\,\nu_\theta(0-) \to 0$ as $\theta \to \infty$.
Then $\theta$ is a rationality parameter; that is, for every $(t,s) \in [0,T] \times \mathbb{R}_+$ we have $P^\theta(t,s) \to P^A(t,s)$ as $\theta \to \infty$.
Remark 1. If we include the natural restriction that $f_\theta$ is increasing, then $f_\theta = \nu_\theta$.
Proof of Theorem 2. I. Let $(\tau_i^\theta)_{i \ge 1}$ be the sequence of jump times of the point process with stochastic intensity process $\mu^\theta$. Note that $\tau_1^\theta = \tau^\theta$. Let $\hat\tau^\theta$ be the minimum of $T$ and the first jump time after the rational exercise rule $\tau^*$, that is,
$$\hat\tau^\theta = \sum_{i=1}^{\infty} \tau_i^\theta\, 1(\tau_{i-1}^\theta < \tau^* \le \tau_i^\theta)$$
with $\tau_0^\theta = t$ under $P_{t,s}$. The value of the put exercised at this jump time is
$$P(t,s;\hat\tau^\theta) = E_{t,s}\big[e^{-r(\hat\tau^\theta - t)} (K - S_{\hat\tau^\theta})^+\big] = \sum_{i=1}^{\infty} E_{t,s}\big[e^{-r(\tau_i^\theta - t)} (K - S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta < \tau^* \le \tau_i^\theta)\big] \tag{1}$$
The strategy $\hat\tau^\theta$ corresponds to the strategy $\tau^\theta$ with the possibility of exercising too early removed. By studying $\hat\tau^\theta$ we may separate the loss from exercising too early from the loss from exercising too late.
We first study the loss from exercising too early. The overall idea of this part is the following. The starting point is the representation of $P(t,s;\hat\tau^\theta)$ given in Equation (1). From Equation (1) it follows that the strategy $\hat\tau^\theta$ corresponds to the strategy $\tau^\theta$, except that each time there is a jump in the point process before $\tau^*$, the holder of the put regrets and does not exercise. At each time of regret the holder loses some value if the exercise time was good, gains at most $\varepsilon_1$ if the exercise time was ok, and gains more than $\varepsilon_1$ if the exercise time was bad. As the exercise intensity at times which are ok or bad is sufficiently restricted, the expected value gained from exercising at a time which is ok may be made arbitrarily small by using a small $\varepsilon_1$. In all circumstances, one cannot gain more than $K$ from exercising, and given an arbitrary $\varepsilon_1$ the intensity for exercising at bad times can be made uniformly arbitrarily small by choosing a large $\theta$. Then the gain from regretting the exercises when $\tau$ is bad can be made arbitrarily small.
II. We verify the following inequality: for given $\varepsilon_1 > 0$ and $n \in \mathbb{N}$,
$$\begin{aligned}
P^\theta(t,s) &\ge E_{t,s}\big[e^{-r(\tau_n^\theta-t)}\, P^\theta(\tau_n^\theta, S_{\tau_n^\theta})\, 1(\tau_n^\theta < \tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\big] \\
&\quad + \sum_{i=1}^{n} E_{t,s}\big[e^{-r(\tau_i^\theta-t)} (K - S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta < \tau^* \le \tau_i^\theta)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,i-1}\big)\big] \\
&\quad - \varepsilon_1 \sum_{i=1}^{n} P_{t,s}\big(\tau_i^\theta < \tau^*,\ (\tau_j^\theta\ \text{good or ok})_{j=1,\dots,i-1},\ \tau_i^\theta\ \text{ok}\big)
\end{aligned} \tag{2}$$
We show this by induction. For $n = 1$ we have
$$\begin{aligned}
P^\theta(t,s) &= E_{t,s}\big[e^{-r(\tau_1^\theta-t)}(K-S_{\tau_1^\theta})^+\, 1(\tau_1^\theta<\tau^*)\big] + E_{t,s}\big[e^{-r(\tau_1^\theta-t)}(K-S_{\tau_1^\theta})^+\, 1(\tau^*\le\tau_1^\theta)\big] \\
&\ge E_{t,s}\big[e^{-r(\tau_1^\theta-t)}(K-S_{\tau_1^\theta})^+\, 1(\tau_1^\theta<\tau^*)\, 1(\tau_1^\theta\ \text{good})\big] + E_{t,s}\big[e^{-r(\tau_1^\theta-t)}(K-S_{\tau_1^\theta})^+\, 1(\tau_1^\theta<\tau^*)\, 1(\tau_1^\theta\ \text{ok})\big] \\
&\quad + E_{t,s}\big[e^{-r(\tau_1^\theta-t)}(K-S_{\tau_1^\theta})^+\, 1(\tau^*\le\tau_1^\theta)\big] \\
&\ge E_{t,s}\big[e^{-r(\tau_1^\theta-t)}\, P^\theta(\tau_1^\theta,S_{\tau_1^\theta})\, 1(\tau_1^\theta<\tau^*)\, 1(\tau_1^\theta\ \text{good})\big] + E_{t,s}\big[e^{-r(\tau_1^\theta-t)}\big(P^\theta(\tau_1^\theta,S_{\tau_1^\theta})-\varepsilon_1\big)\, 1(\tau_1^\theta<\tau^*)\, 1(\tau_1^\theta\ \text{ok})\big] \\
&\quad + E_{t,s}\big[e^{-r(\tau_1^\theta-t)}(K-S_{\tau_1^\theta})^+\, 1(\tau^*\le\tau_1^\theta)\big] \\
&= E_{t,s}\big[e^{-r(\tau_1^\theta-t)}\, P^\theta(\tau_1^\theta,S_{\tau_1^\theta})\, 1(\tau_1^\theta<\tau^*)\, 1(\tau_1^\theta\ \text{good or ok})\big] - \varepsilon_1\, E_{t,s}\big[e^{-r(\tau_1^\theta-t)}\, 1(\tau_1^\theta<\tau^*)\, 1(\tau_1^\theta\ \text{ok})\big] \\
&\quad + E_{t,s}\big[e^{-r(\tau_1^\theta-t)}(K-S_{\tau_1^\theta})^+\, 1(\tau^*\le\tau_1^\theta)\big] \\
&\ge E_{t,s}\big[e^{-r(\tau_1^\theta-t)}\, P^\theta(\tau_1^\theta,S_{\tau_1^\theta})\, 1(\tau_1^\theta<\tau^*)\, 1(\tau_1^\theta\ \text{good or ok})\big] - \varepsilon_1\, P_{t,s}(\tau_1^\theta<\tau^*,\ \tau_1^\theta\ \text{ok}) \\
&\quad + E_{t,s}\big[e^{-r(\tau_1^\theta-t)}(K-S_{\tau_1^\theta})^+\, 1(\tau^*\le\tau_1^\theta)\big]
\end{aligned}$$
We assume that the inequality is true for $n$. Then we have
$$\begin{aligned}
&E_{t,s}\big[e^{-r(\tau_n^\theta-t)}\, P^\theta(\tau_n^\theta,S_{\tau_n^\theta})\, 1(\tau_n^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\big] \\
&= E_{t,s}\big[e^{-r(\tau_n^\theta-t)}\, E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-\tau_n^\theta)}(K-S_{\tau_{n+1}^\theta})^+ \,\big|\, \tau_n^\theta, S_{\tau_n^\theta}\big]\, 1(\tau_n^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\big] \\
&= E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}(K-S_{\tau_{n+1}^\theta})^+\, 1(\tau_n^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\big] \\
&\ge E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}(K-S_{\tau_{n+1}^\theta})^+\, 1(\tau_{n+1}^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\, 1(\tau_{n+1}^\theta\ \text{good})\big] \\
&\quad + E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}(K-S_{\tau_{n+1}^\theta})^+\, 1(\tau_{n+1}^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\, 1(\tau_{n+1}^\theta\ \text{ok})\big] \\
&\quad + E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}(K-S_{\tau_{n+1}^\theta})^+\, 1(\tau_n^\theta<\tau^*\le\tau_{n+1}^\theta)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\big] \\
&\ge E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}\, P^\theta(\tau_{n+1}^\theta,S_{\tau_{n+1}^\theta})\, 1(\tau_{n+1}^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\, 1(\tau_{n+1}^\theta\ \text{good})\big] \\
&\quad + E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}\big(P^\theta(\tau_{n+1}^\theta,S_{\tau_{n+1}^\theta})-\varepsilon_1\big)\, 1(\tau_{n+1}^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\, 1(\tau_{n+1}^\theta\ \text{ok})\big] \\
&\quad + E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}(K-S_{\tau_{n+1}^\theta})^+\, 1(\tau_n^\theta<\tau^*\le\tau_{n+1}^\theta)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\big] \\
&= E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}\, P^\theta(\tau_{n+1}^\theta,S_{\tau_{n+1}^\theta})\, 1(\tau_{n+1}^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n+1}\big)\big] \\
&\quad - \varepsilon_1\, E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}\, 1(\tau_{n+1}^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\, 1(\tau_{n+1}^\theta\ \text{ok})\big] \\
&\quad + E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}(K-S_{\tau_{n+1}^\theta})^+\, 1(\tau_n^\theta<\tau^*\le\tau_{n+1}^\theta)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\big]
\end{aligned}$$
Thus, using the induction assumption we find
$$\begin{aligned}
P^\theta(t,s) &\ge E_{t,s}\big[e^{-r(\tau_n^\theta-t)}\, P^\theta(\tau_n^\theta,S_{\tau_n^\theta})\, 1(\tau_n^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n}\big)\big] \\
&\quad + \sum_{i=1}^{n} E_{t,s}\big[e^{-r(\tau_i^\theta-t)}(K-S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,i-1}\big)\big] \\
&\quad - \varepsilon_1 \sum_{i=1}^{n} P_{t,s}\big(\tau_i^\theta<\tau^*,\ (\tau_j^\theta\ \text{good or ok})_{j=1,\dots,i-1},\ \tau_i^\theta\ \text{ok}\big) \\
&\ge E_{t,s}\big[e^{-r(\tau_{n+1}^\theta-t)}\, P^\theta(\tau_{n+1}^\theta,S_{\tau_{n+1}^\theta})\, 1(\tau_{n+1}^\theta<\tau^*)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,n+1}\big)\big] \\
&\quad + \sum_{i=1}^{n+1} E_{t,s}\big[e^{-r(\tau_i^\theta-t)}(K-S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,i-1}\big)\big] \\
&\quad - \varepsilon_1 \sum_{i=1}^{n+1} P_{t,s}\big(\tau_i^\theta<\tau^*,\ (\tau_j^\theta\ \text{good or ok})_{j=1,\dots,i-1},\ \tau_i^\theta\ \text{ok}\big)
\end{aligned}$$
Hence we have shown the inequality in Equation (2).
III. Now we investigate the terms in Equation (2). We begin with the second term:
$$\begin{aligned}
&\sum_{i=1}^{n} E_{t,s}\big[e^{-r(\tau_i^\theta-t)}(K-S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta)\, 1\big((\tau_j^\theta\ \text{good or ok})_{j=1,\dots,i-1}\big)\big] \\
&= \sum_{i=1}^{n} E_{t,s}\big[e^{-r(\tau_i^\theta-t)}(K-S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta)\big] \\
&\quad - \sum_{i=1}^{n} E_{t,s}\big[e^{-r(\tau_i^\theta-t)}(K-S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta)\, 1(\exists j\in\{1,\dots,i-1\}:\tau_j^\theta\ \text{bad})\big] \\
&\ge \sum_{i=1}^{n} E_{t,s}\big[e^{-r(\tau_i^\theta-t)}(K-S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta)\big] - K \sum_{i=1}^{n} P_{t,s}\big(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta,\ \exists j\in\{1,\dots,i-1\}:\tau_j^\theta\ \text{bad}\big) \\
&\ge \sum_{i=1}^{n} E_{t,s}\big[e^{-r(\tau_i^\theta-t)}(K-S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta)\big] - K\, P_{t,s}\big(\exists i\in\mathbb{N}: \tau_i^\theta<\tau^*,\ \tau_i^\theta\ \text{bad}\big) \\
&\ge \sum_{i=1}^{n} E_{t,s}\big[e^{-r(\tau_i^\theta-t)}(K-S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta)\big] - K\big(1 - e^{-(T-t)\,\nu_\theta(-\varepsilon_1)}\big)
\end{aligned}$$
Note that, given $\varepsilon_1 > 0$, the last term can be made arbitrarily small by choosing $\theta$ large. This means that for large $\theta$ there is only a small probability that the exercise time of the option with price $P(t,s;\hat\tau^\theta)$ involves regrets of bad jump times.
Next we investigate the third term in Equation (2):
$$\sum_{i=1}^{n} P_{t,s}\big(\tau_i^\theta<\tau^*,\ (\tau_j^\theta\ \text{good or ok})_{j=1,\dots,i-1},\ \tau_i^\theta\ \text{ok}\big) \le E_{t,s}\,\#\{\, i\in\mathbb{N}: \tau_i^\theta<\tau^*,\ \tau_i^\theta\ \text{ok} \,\} \le (T-t)\,\nu_\theta(0-)$$
The latter inequality follows from the fact that ok jump times occur with intensity at most $\nu_\theta(0-)$ in the time until $T$. This shows that the expected number of regrets of ok stopping times for the exercise time of the option with price $P(t,s;\hat\tau^\theta)$ is uniformly bounded with respect to $\varepsilon_1$. Therefore the contribution from here can be made arbitrarily small by making $\varepsilon_1$ small, as $\varepsilon_1$ is an upper bound for the contribution from every regret of an ok stopping time. Combining, we get:
$$P^\theta(t,s) \ge \sum_{i=1}^{n} E_{t,s}\big[e^{-r(\tau_i^\theta-t)}(K-S_{\tau_i^\theta})^+\, 1(\tau_{i-1}^\theta<\tau^*\le\tau_i^\theta)\big] - K\big(1-e^{-(T-t)\,\nu_\theta(-\varepsilon_1)}\big) - \varepsilon_1(T-t)\,\nu_\theta(0-)$$
As this holds for all $n \in \mathbb{N}$, we find
$$P^\theta(t,s) - P(t,s;\hat\tau^\theta) \ge -K\big(1-e^{-(T-t)\,\nu_\theta(-\varepsilon_1)}\big) - \varepsilon_1(T-t)\,\nu_\theta(0-)$$
IV. We now investigate the losses from exercising too late. Let
$$\sigma_{\varepsilon_2} = \inf\big\{\, u \ge \tau^* : \big|(K-S_u)^+ e^{-r(u-t)} - (K-S_{\tau^*})^+ e^{-r(\tau^*-t)}\big| \ge \varepsilon_2 \,\big\}$$
and let
$$C_{\varepsilon_2} = \big\{\, u \in [\tau^*, \sigma_{\varepsilon_2}] : (K-S_u)^+ - P^A(u,S_u) = 0 \,\big\} = \big\{\, u \in [\tau^*, \sigma_{\varepsilon_2}] : S_u \le b(u) \,\big\}$$
Let $L$ denote the Lebesgue measure. As the optimal exercise boundary $u \mapsto b(u)$ is increasing and $S$ is a geometric Brownian motion, $L(C_{\varepsilon_2}) > 0$ almost surely for every $\varepsilon_2 > 0$. Hence for every $\varepsilon_2, \varepsilon_3 > 0$ there exists a $\delta > 0$ such that $P_{t,s}(L(C_{\varepsilon_2}) > \delta) > 1 - \varepsilon_3$. Now we get
$$\begin{aligned}
P^A(t,s) - P(t,s;\hat\tau^\theta) &= E_{t,s}\big[e^{-r(\tau^*-t)}(K-S_{\tau^*})^+ - e^{-r(\hat\tau^\theta-t)}(K-S_{\hat\tau^\theta})^+\big] \\
&= E_{t,s}\big[\big(e^{-r(\tau^*-t)}(K-S_{\tau^*})^+ - e^{-r(\hat\tau^\theta-t)}(K-S_{\hat\tau^\theta})^+\big)\, 1(\hat\tau^\theta \le \sigma_{\varepsilon_2})\big] \\
&\quad + E_{t,s}\big[\big(e^{-r(\tau^*-t)}(K-S_{\tau^*})^+ - e^{-r(\hat\tau^\theta-t)}(K-S_{\hat\tau^\theta})^+\big)\, 1(\hat\tau^\theta > \sigma_{\varepsilon_2})\big] \\
&\le \varepsilon_2 + K\, P_{t,s}(\hat\tau^\theta > \sigma_{\varepsilon_2}) \\
&= \varepsilon_2 + K\big(P_{t,s}(\hat\tau^\theta > \sigma_{\varepsilon_2},\ L(C_{\varepsilon_2}) > \delta) + P_{t,s}(\hat\tau^\theta > \sigma_{\varepsilon_2},\ L(C_{\varepsilon_2}) \le \delta)\big) \\
&\le \varepsilon_2 + K\big(P_{t,s}(\hat\tau^\theta > \sigma_{\varepsilon_2} \mid L(C_{\varepsilon_2}) > \delta) + P_{t,s}(L(C_{\varepsilon_2}) \le \delta)\big) \\
&\le \varepsilon_2 + K\big(e^{-\delta\,\nu_\theta(0+)} + \varepsilon_3\big)
\end{aligned}$$
Thus
$$P^A(t,s) - P^\theta(t,s) \le \varepsilon_2 + K\big(e^{-\delta\,\nu_\theta(0+)} + \varepsilon_3\big) + K\big(1-e^{-(T-t)\,\nu_\theta(-\varepsilon_1)}\big) + \varepsilon_1(T-t)\,\nu_\theta(0-)$$
Choose $\varepsilon_1 : \mathbb{R}_+ \to \mathbb{R}_+$ such that $\nu_\theta(-\varepsilon_1(\theta)) \to 0$ and $\varepsilon_1(\theta)\,\nu_\theta(0-) \to 0$ as $\theta \to \infty$. Then we find that $P^\theta(t,s) \to P^A(t,s)$ as $\theta \to \infty$. □

3. Valuation Equations

In this section, we use the set-up of the previous section to obtain valuation equations for the American put under an irrational exercise rule.
Consider an irrational exercise rule $\tau$ given as the first jump time of a point process with stochastic intensity $\mu_u = \lambda(u, S_u)$ for some positive, deterministic, measurable function $\lambda$. As in the previous section, the value of the American put is given by the risk-neutral expectation
$$P(t,s) = e^{-r(T-t)}\, E_{t,s}\big[(K-S_T)^+\, 1(\tau \ge T)\big] + E_{t,s}\big[e^{-r(\tau-t)}(K-S_\tau)^+\, 1(\tau < T)\big]$$
Let $f : [-K,K] \to [0,\infty)$ be an intensity function; the intensity is then given by
$$\lambda(t,s) = f\big((K-s)^+ - P(t,s)\big)$$
By [10], Proposition 3.1, the expectation can be rewritten as
$$P(t,s) = e^{-r(T-t)}\, E_{t,s}\Big[\exp\Big(-\int_t^T \lambda(u,S_u)\,du\Big)(K-S_T)^+\Big] + \int_t^T e^{-r(u-t)}\, E_{t,s}\Big[\lambda(u,S_u)\exp\Big(-\int_t^u \lambda(v,S_v)\,dv\Big)(K-S_u)^+\Big]\,du$$
By the Feynman-Kac theorem (see [1]), the value $P$ of the American put solves the partial differential equation
$$P_t(t,s) + r s P_s(t,s) + \tfrac{1}{2}\sigma^2 s^2 P_{ss}(t,s) = r P(t,s) - f\big((K-s)^+ - P(t,s)\big)\big((K-s)^+ - P(t,s)\big) \tag{3}$$
with $P(T,s) = (K-s)^+$, whenever this (nonlinear) partial differential equation has a unique solution. Note that if this partial differential equation has a unique solution, then we use this solution to define $\lambda(t,s) = f\big((K-s)^+ - P(t,s)\big)$, and thus $P$ is the value of the American put exercised by a strategy which is the minimum of $T$ and the first jump time of a point process with intensity $\mu_u = \lambda(u,S_u) = f\big((K-S_u)^+ - P(u,S_u)\big)$. Thus, the existence and uniqueness of the solution to the nonlinear PDE in Equation (3) ensures that the strategies in Section 2 are well defined.
Finally, we suggest two simple specifications of the function $f_\theta$ given in Theorem 2.
In the first example the function is specified as
$$f_\theta(x) = \begin{cases} \theta & \text{for } x \ge 0 \\ 0 & \text{for } x < 0 \end{cases}$$
This function is related to the penalty method found in the recent computational finance literature (see, e.g., [8]), which approximates the arbitrage-free price of the American put by the semi-linear PDE given in Equation (3). With this function it is certain that the buyer does not exercise when it is not profitable; the only irrational behavior he may exhibit is exercising too late.
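With this choice the penalty term in Equation (3) becomes $\theta\big((K-s)^+ - P\big)^+$, which can be discretized directly. The sketch below is ours, not the paper's: the grid sizes, the value of $\theta$, and the explicit time stepping are illustrative choices, and an implicit scheme as in Forsyth and Vetzal [8] would be preferred in practice.

```python
import numpy as np

def penalty_american_put(K=100.0, r=0.05, sigma=0.2, T=1.0,
                         theta=1e4, s_max=300.0, ns=300, nt=50000):
    """Explicit finite-difference sketch of the penalty PDE (Equation (3))
    with f_theta(x) = theta * 1(x >= 0), i.e. penalty term theta*(payoff - P)^+.
    Steps backward in time from the terminal condition P(T, s) = (K - s)^+."""
    s = np.linspace(0.0, s_max, ns + 1)
    ds = s[1] - s[0]
    dt = T / nt                       # small enough for explicit stability here
    payoff = np.maximum(K - s, 0.0)
    P = payoff.copy()                 # terminal condition
    for _ in range(nt):
        Pss = (P[2:] - 2.0 * P[1:-1] + P[:-2]) / ds**2
        Ps = (P[2:] - P[:-2]) / (2.0 * ds)
        rhs = (0.5 * sigma**2 * s[1:-1]**2 * Pss
               + r * s[1:-1] * Ps
               - r * P[1:-1]
               + theta * np.maximum(payoff[1:-1] - P[1:-1], 0.0))
        P[1:-1] += dt * rhs           # one backward time step
        P[0] = K                      # s = 0: immediate exercise is worth K
        P[-1] = 0.0                   # far field: put is worthless
    return s, P
```

The penalty solution approximates $P^A$ from below, with a violation of the early-exercise constraint of order $1/\theta$; at $s = K = 100$ the value lands near the binomial benchmark of roughly 6.1.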
In the second example the function is specified by
$$f_\theta(x) = \theta e^{\theta^2 x}$$
This intensity function allows the buyer to exercise both too early and too late. Moreover, the buyer is affected not just by whether exercising is profitable, but also by how profitable it is.
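The conditions of Theorem 2 can be checked directly for this family. Since $f_\theta$ is increasing, Remark 1 gives $\nu_\theta = f_\theta$, so $\nu_\theta(0+) = \theta \to \infty$; with the choice $\varepsilon(\theta) = \theta^{-3/2}$ (our choice for illustration, the paper leaves $\varepsilon$ unspecified) we get $\nu_\theta(-\varepsilon(\theta)) = \theta e^{-\sqrt{\theta}} \to 0$ and $\varepsilon(\theta)\,\nu_\theta(0-) = \theta^{-1/2} \to 0$. A small numerical check:

```python
import math

# For the increasing intensity f_theta(y) = theta * exp(theta**2 * y),
# Remark 1 gives nu_theta = f_theta. With eps(theta) = theta**(-1.5)
# (an assumed choice) the three quantities of Theorem 2 behave as:
#   nu_theta(0+)              = theta                     -> infinity
#   nu_theta(-eps(theta))     = theta * exp(-sqrt(theta)) -> 0
#   eps(theta) * nu_theta(0-) = theta**(-1/2)             -> 0
def f(theta, x):
    return theta * math.exp(theta**2 * x)

checks = []
for theta in [10.0, 100.0, 1000.0]:
    eps = theta ** -1.5
    checks.append((f(theta, 0.0),        # nu_theta(0+)
                   f(theta, -eps),       # nu_theta(-eps(theta))
                   eps * f(theta, 0.0))) # eps(theta) * nu_theta(0-)
```

Across $\theta = 10, 100, 1000$ the first column grows without bound while the other two decrease toward zero, as the theorem requires.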

Author Contributions

Both authors contributed to all aspects of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. I. Karatzas, and S.E. Shreve. Methods of Mathematical Finance. Berlin/Heidelberg, Germany: Springer, 1998.
  2. F. Diz, and T. Finucane. “The rationality of early exercise decisions: Evidence from the S&P 100 index options market.” Rev. Financ. Stud. 6 (1993): 765–798.
  3. A.M. Poteshman, and V. Serbin. “Clearly irrational financial market behavior: Evidence from the early exercise of exchange traded stock options.” J. Financ. 58 (2003): 37–70.
  4. H.-C. Chen, J.W. Friedman, and J.F. Thisse. “Boundedly rational Nash equilibrium: A probabilistic choice approach.” Games Econ. Behav. 18 (1997): 32–54.
  5. L. Jennergren, and B. Naslund. “A comment on the valuation of executive stock options and the FASB proposal.” Account Rev. 68 (1993): 179–183.
  6. P. Carr, and T. Linetsky. “The valuation of executive stock options in an intensity-based framework.” Eur. Financ. Rev. 4 (2000): 211–230.
  7. M. Dai, Y.K. Kwok, and H. You. “Intensity-based framework and penalty formulation of optimal stopping problems.” J. Econ. Dyn. Control 31 (2007): 3860–3880.
  8. P.A. Forsyth, and K.R. Vetzal. “Quadratic convergence for valuing American options using a penalty method.” SIAM J. Sci. Comput. 23 (2002): 2096–2123.
  9. P. Brémaud. Point Processes and Queues. Berlin/Heidelberg, Germany: Springer, 1981.
  10. D. Lando. “On Cox processes and credit risky securities.” Rev. Deriv. Res. 2 (1998): 99–120.
