**1. Introduction**

The theory of information-based asset pricing proposed by Brody et al. (2007, 2008a, 2008b) and Macrina (2006) is concerned with the determination of the price processes of financial assets from first principles. In particular, the market filtration is constructed explicitly, rather than simply assumed, as it is in traditional approaches. The simplest version of the model is as follows. We fix a probability space (Ω, F, P). An asset delivers a single random cash flow *HT* at some specified time *T* > 0, where time 0 denotes the present. The cash flow is a function of a random variable *XT*, which we can think of as a "market factor" that is in some sense revealed at time *T*. In the general situation there will be many factors and many cash flows, but for the present we assume that there is a single factor *XT* : Ω → R such that the sole cash flow at time *T* is given by *HT* = *h*(*XT*) for some Borel function *h* : R → R+. For simplicity we assume that interest rates are constant and that P is the risk-neutral measure. We require that *HT* should be integrable. Under these assumptions, the value of the asset at time 0 is

$$S\_0 = \mathrm{e}^{-rT}\, \mathbb{E}\left[ h(X\_T) \right], \tag{1}$$

where E denotes expectation under P and *r* is the short rate. Since the single "dividend" is paid at time *T*, the value of the asset at any time *t* ≥ 0 is of the form

$$S\_t = \mathbb{1}\_{\{t < T\}}\, \mathrm{e}^{-r(T-t)}\, \mathbb{E}\left[h(X\_T) \mid \mathcal{F}\_t\right], \tag{2}$$

where {F*t*}*t*≥0 is the market filtration. The task now is to model the filtration, and this will be done explicitly.

In traditional financial modelling, the filtration is usually taken to be fixed in advance. For example, in the widely-applied Brownian-motion-driven model for financial markets, the filtration is generated by an *n*-dimensional Brownian motion. A detailed account of the Brownian framework can be found, for example, in Karatzas and Shreve (1998). In the information-based approach, however, we do not assume the filtration to be given *a priori*. Instead, the filtration is constructed in a way that specifically takes into account the structures of the information flows associated with the cash flows of the various assets under consideration.

In the case of a single asset generating a single cash flow, the idea is that the filtration should contain partial or "noisy" information about the market factor *XT*, and hence the impending cash flow, in such a way that *XT* is F*T*-measurable. This can be achieved by allowing {F*t*} to be generated by a so-called information process {*ξt*}*t*≥0 with the property that for each *t* such that *t* ≥ *T* the random variable *ξt* is *σ*{*XT*}-measurable. Then by constructing specific examples of càdlàg processes having this property, we are able to formulate a variety of specific models. The resulting models are finely tuned to the structures of the assets that they represent, and therefore offer scope for a useful approach to financial risk management. In previous work on information-based asset pricing, where precise definitions can be found that expand upon the ideas summarized above, such models have been constructed using Brownian bridge information processes (Brody et al. (2007, 2008a, 2009, 2010, 2011), Filipović et al. (2012), Hughston and Macrina (2012), Macrina (2006), Mengütürk (2013), Rutkowski and Yu (2007)), gamma bridge information processes (Brody et al. (2008b)), Lévy random bridge information processes (Hoyle (2010), Hoyle et al. (2011, 2015, 2020), Mengütürk (2018)) and Markov bridge information processes (Macrina (2019)). In what follows we present a new model for the market filtration, based on the variance-gamma process. The idea is to create a two-parameter family of information processes associated with the random market factor *XT*. One of the parameters is the information flow-rate *σ*. The other is an intrinsic parameter *m* associated with the variance-gamma process. In the limit as *m* tends to infinity, the variance-gamma information process reduces to the type of Brownian bridge information process considered by Brody et al. (2007, 2008a) and Macrina (2006).

The plan of the paper is as follows. In Section 2 we recall properties of the gamma process, introducing the so-called scale parameter *κ* > 0 and shape parameter *m* > 0. A standard gamma subordinator is defined to be a gamma process with *κ* = 1/*m*. The mean at time *t* of a standard gamma subordinator is *t*. In Theorem 1 we prove that an increase in the shape parameter *m* results in a transfer of weight from the Lévy measure of any interval [*c*, *d*] in the space of jump sizes to the Lévy measure of any interval [*a*, *b*] such that *b* − *a* = *d* − *c* and *c* > *a*. Thus, roughly speaking, an increase in *m* results in an increase in the rate at which small jumps occur relative to the rate at which large jumps occur. This result concerning the interpretation of the shape parameter for a standard gamma subordinator is new as far as we are aware.

In Section 3 we recall properties of the variance-gamma process and the gamma bridge, and in Definition 1 we introduce a new type of process, which we call a normalized variance-gamma bridge. This process plays an important role in the material that follows. In Lemmas 1 and 2 we work out various properties of the normalized variance-gamma bridge. Then in Theorem 2 we show that the normalized variance-gamma bridge and the associated gamma bridge are jointly Markov, a property that turns out to be crucial in our pricing theory. In Section 4, at Definition 2, we introduce the so-called

variance-gamma information process. The information process carries noisy information about the value of a market factor *XT* that will be revealed to the market at time *T*, where the noise is represented by the normalized variance-gamma bridge. In Equation (58) we present a formula that relates the values of the information process at different times, and by use of that we establish in Theorem 3 that the information process and the associated gamma bridge are jointly Markov.

In Section 5, we consider a market where the filtration is generated by a variance gamma information process along with the associated gamma bridge. In Lemma 3 we work out a version of the Bayes formula in the form that we need for asset pricing in the present context. Then in Theorem 4 we present a general formula for the price process of a financial asset that at time *T* pays a single dividend given by a function *h*(*XT*) of the market factor. In particular, the *a priori* distribution of the market factor can be quite arbitrary, specified by a measure *FXT* (d*x*) on R, the only requirement being that *h*(*XT*) should be integrable. In Section 6 we present a number of examples, based on various choices of the payoff function and the distribution for the market factor, the results being summarized in Propositions 1–4. We conclude with comments on calibration, derivatives, and how one determines the trajectory of the information process from market prices.

#### **2. Gamma Subordinators**

We begin with some remarks about the gamma process. Let us as usual write R+ for the non-negative real numbers. Let *κ* and *m* be strictly positive constants. A continuous random variable *G* : Ω → R+ on a probability space (Ω, F, P) will be said to have a gamma distribution with scale parameter *κ* and shape parameter *m* if

$$\mathbb{P}\left[G \in \mathrm{d}x\right] = \mathbb{1}\_{\{x > 0\}} \frac{1}{\Gamma[m]}\, \kappa^{-m}\, x^{m-1}\, \mathrm{e}^{-x/\kappa}\, \mathrm{d}x\,, \tag{3}$$

where

$$
\Gamma[a] = \int\_0^\infty x^{a-1}\, \mathrm{e}^{-x}\, \mathrm{d}x \tag{4}
$$

denotes the standard gamma function for *a* > 0, and we recall the relation Γ[*a* + 1] = *a* Γ[*a*]. A calculation shows that E[*G*] = *κm* and Var[*G*] = *κ*²*m*. There exists a two-parameter family of gamma processes of the form Γ : Ω × R+ → R+ on (Ω, F, P). By a gamma process with scale *κ* and shape *m* we mean a Lévy process {Γ*t*}*t*≥0 such that for each *t* > 0 the random variable Γ*t* is gamma distributed with

$$\mathbb{P}\left[\Gamma\_t \in \mathrm{d}x\right] = \mathbb{1}\_{\{x > 0\}} \frac{1}{\Gamma[mt]}\, \kappa^{-mt}\, x^{mt-1}\, \mathrm{e}^{-x/\kappa}\, \mathrm{d}x\,. \tag{5}$$

If we write (*a*)0 = 1 and (*a*)*k* = *a*(*a* + 1)(*a* + 2)···(*a* + *k* − 1) for the so-called Pochhammer symbol, we find that E[Γ*t*ⁿ] = *κ*ⁿ(*mt*)*n*. It follows that E[Γ*t*] = *μt* and Var[Γ*t*] = *ν*²*t*, where *μ* = *κm* and *ν*² = *κ*²*m*, or equivalently *m* = *μ*²/*ν*² and *κ* = *ν*²/*μ*.
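As a quick sanity check, the moment formula E[Γ*t*ⁿ] = *κ*ⁿ(*mt*)*n* can be compared directly against the moments implied by the gamma density (5). The minimal Python sketch below (the parameter values are illustrative choices, not taken from the paper) evaluates both expressions and confirms that the first two moments recover E[Γ*t*] = *μt* and Var[Γ*t*] = *ν*²*t*.

```python
import math

def pochhammer(a: float, n: int) -> float:
    """Rising factorial (a)_n = a (a + 1) ... (a + n - 1), with (a)_0 = 1."""
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def gamma_process_moment(n: int, m: float, kappa: float, t: float) -> float:
    """n-th moment of Gamma_t from the density (5):
    E[Gamma_t^n] = kappa^n * Gamma(mt + n) / Gamma(mt)."""
    return kappa ** n * math.gamma(m * t + n) / math.gamma(m * t)

m, kappa, t = 2.5, 0.8, 1.7   # illustrative parameter values
for n in range(1, 5):
    assert math.isclose(gamma_process_moment(n, m, kappa, t),
                        kappa ** n * pochhammer(m * t, n), rel_tol=1e-12)

# First two moments give E[Gamma_t] = kappa*m*t and Var[Gamma_t] = kappa^2*m*t
mean = gamma_process_moment(1, m, kappa, t)
var = gamma_process_moment(2, m, kappa, t) - mean ** 2
assert math.isclose(mean, kappa * m * t, rel_tol=1e-12)
assert math.isclose(var, kappa ** 2 * m * t, rel_tol=1e-9)
```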

The Lévy exponent for such a process is given for *α* < 1/*κ* by

$$\psi\_{\Gamma}(\alpha) = \frac{1}{t} \log \mathbb{E}\left[\exp(\alpha \Gamma\_t)\right] = -m \log\left(1 - \kappa \alpha\right), \tag{6}$$

and for the corresponding Lévy measure we have

$$\nu\_{\Gamma}(\mathrm{d}x) = \mathbb{1}\_{\{x > 0\}}\, m\, \frac{1}{x}\, \mathrm{e}^{-x/\kappa}\, \mathrm{d}x\,. \tag{7}$$

One can then check that the Lévy-Khinchine relation

$$\psi\_{\Gamma}(\alpha) = \int\_{\mathbb{R}} \left( \mathrm{e}^{\alpha x} - 1 - \mathbb{1}\_{\{|x| < 1\}}\, \alpha x \right) \nu\_{\Gamma}(\mathrm{d}x) + p\,\alpha \tag{8}$$

holds for an appropriate choice of *p* (Kyprianou 2014, Lemma 1.7).

By a standard gamma subordinator we mean a gamma process {*γt*}*t*≥0 for which *κ* = 1/*m*. This implies that E[*γt*] = *t* and Var[*γt*] = *m*⁻¹*t*. The standard gamma subordinators thus constitute a one-parameter family of processes labelled by *m*. An interpretation of the parameter *m* is given by the following:
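The defining property of a standard gamma subordinator is easy to check by simulation. The following minimal Python sketch (sample size, seed, and parameter values are arbitrary illustrative choices) draws *γT* from the Gamma(*mT*, 1/*m*) law implied by (5) with *κ* = 1/*m*, and confirms that the sample mean and variance are close to *T* and *T*/*m*.

```python
import random
import statistics

random.seed(0)
m, T = 4.0, 2.0   # illustrative parameter values

# gamma_T ~ Gamma(shape = m*T, scale = 1/m), so E[gamma_T] = T, Var[gamma_T] = T/m
samples = [random.gammavariate(m * T, 1.0 / m) for _ in range(100_000)]
mean = statistics.fmean(samples)
var = statistics.fmean((x - mean) ** 2 for x in samples)

assert abs(mean - T) < 0.05       # sample mean close to T
assert abs(var - T / m) < 0.05    # sample variance close to T/m
```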

**Theorem 1.** *Let* {*γt*}*t*≥0 *be a standard gamma subordinator with parameter m. Let νm*[*a*, *b*] *be the Lévy measure of the interval* [*a*, *b*] *for* 0 < *a* < *b. Then for any interval* [*c*, *d*] *such that c* > *a and d* − *c* = *b* − *a, the ratio*

$$R\_m(a, b; c, d) = \frac{\nu\_m[a, b]}{\nu\_m[c, d]} \tag{9}$$

*is strictly greater than one and strictly increasing as a function of m.*

**Proof.** By the definition of a standard gamma subordinator we have

$$\nu\_m[a,b] = \int\_a^b m\, \frac{1}{x}\, \mathrm{e}^{-mx}\, \mathrm{d}x\,. \tag{10}$$

Let *δ* = *c* − *a* > 0 and note that the integrand in the right hand side of (10) is a decreasing function of the variable of integration. This allows one to conclude that

$$\nu\_m[a+\delta,\, b+\delta] = \int\_{a+\delta}^{b+\delta} m\, \frac{1}{x}\, \mathrm{e}^{-mx}\, \mathrm{d}x < \int\_a^b m\, \frac{1}{x}\, \mathrm{e}^{-mx}\, \mathrm{d}x, \tag{11}$$

from which it follows that 0 < *νm*[*c*, *d*] < *νm*[*a*, *b*] and hence *Rm*(*a*, *b*; *c*, *d*) > 1. To show that *Rm*(*a*, *b*; *c*, *d*) is strictly increasing as a function of *m* we observe that

$$\nu\_m[a,b] = m \int\_a^\infty \frac{1}{x}\, \mathrm{e}^{-mx}\, \mathrm{d}x - m \int\_b^\infty \frac{1}{x}\, \mathrm{e}^{-mx}\, \mathrm{d}x = m \left( E\_1[ma] - E\_1[mb] \right), \tag{12}$$

where the so-called exponential integral function *E*1(*z*) is defined for *z* > 0 by

$$E\_1(z) = \int\_z^{\infty} \frac{\mathrm{e}^{-x}}{x}\, \mathrm{d}x\,. \tag{13}$$

See Abramowitz and Stegun (1972), Section 5.1.1, for properties of the exponential integral. Next, we compute the derivative of *Rm*(*<sup>a</sup>*, *b* ; *c*, *d*), which gives

$$\frac{\partial}{\partial m}R\_m(a,b;c,d) = \frac{1}{m\left(E\_1[mc] - E\_1[md]\right)}\, \mathrm{e}^{-mc}\left(1 - \mathrm{e}^{-m\Delta}\right) \left(R\_m(a,b;c,d) - \mathrm{e}^{m(c-a)}\right), \tag{14}$$

where

$$
\Delta = d - c = b - a \,. \tag{15}
$$

We note that

$$\frac{1}{m\left(E\_1[mc] - E\_1[md]\right)}\, \mathrm{e}^{-mc}\left(1 - \mathrm{e}^{-m\Delta}\right) > 0\,, \tag{16}$$

which shows that the sign of the derivative in (14) is strictly positive if and only if

$$R\_m(a,b;c,d) > \mathrm{e}^{m(c-a)}. \tag{17}$$

But clearly

$$\int\_{0}^{\Delta m} \frac{\mathbf{e}^{-u}}{u + a \, m} \, \mathrm{d}u > \int\_{0}^{\Delta m} \frac{\mathbf{e}^{-u}}{u + c \, m} \, \mathrm{d}u \tag{18}$$

for *c* > *a*, which after a change of integration variables and use of (15) implies

$$\mathrm{e}^{ma} \int\_{am}^{bm} \frac{\mathrm{e}^{-x}}{x}\, \mathrm{d}x > \mathrm{e}^{mc} \int\_{cm}^{dm} \frac{\mathrm{e}^{-x}}{x}\, \mathrm{d}x, \tag{19}$$

which is equivalent to (17), and that completes the proof.

We see therefore that the effect of an increase in the value of *m* is to transfer weight from the Lévy measure of any jump-size interval [*c*, *d*] ⊂ R<sup>+</sup> to any possibly-overlapping smaller-jump-size interval [*a*, *b*] ⊂ R<sup>+</sup> of the same length. The Lévy measure of such an interval is the rate of arrival of jumps for which the jump size lies in that interval.
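Theorem 1 is also easy to probe numerically. The sketch below (the interval endpoints and the grid of *m* values are illustrative choices) evaluates *νm*[*a*, *b*] by applying Simpson's rule to the integral in (10), and checks that the ratio *Rm* exceeds one and grows with *m*.

```python
import math

def levy_measure(m: float, a: float, b: float, n: int = 2000) -> float:
    """nu_m[a, b] = integral over [a, b] of m e^{-mx}/x dx, via composite
    Simpson's rule with 2n subintervals, cf. Equation (10)."""
    h = (b - a) / (2 * n)
    xs = [a + i * h for i in range(2 * n + 1)]
    f = [m * math.exp(-m * x) / x for x in xs]
    return h / 3 * (f[0] + f[-1] + 4 * sum(f[1:-1:2]) + 2 * sum(f[2:-1:2]))

def ratio(m, a, b, c, d):
    """R_m(a, b; c, d) = nu_m[a, b] / nu_m[c, d], cf. Equation (9)."""
    return levy_measure(m, a, b) / levy_measure(m, c, d)

a, b = 0.1, 0.3   # interval of smaller jump sizes
c, d = 0.5, 0.7   # interval of larger jump sizes, same length, c > a
rs = [ratio(m, a, b, c, d) for m in (0.5, 1.0, 2.0, 4.0, 8.0)]

assert all(r > 1 for r in rs)                  # R_m(a, b; c, d) > 1
assert all(p < q for p, q in zip(rs, rs[1:]))  # strictly increasing in m
```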

#### **3. Normalized Variance-Gamma Bridge**

Let us fix a standard Brownian motion {*Wt*}*t*≥0 on (Ω, F, P) and an independent standard gamma subordinator {*γt*}*t*≥0 with parameter *m*. By a standard variance-gamma process with parameter *m* we mean a time-changed Brownian motion {*Vt*}*t*≥0 of the form

$$V\_t = \mathcal{W}\_{\gamma\_t} \,. \tag{20}$$

It is straightforward to check that {*Vt*} is itself a Lévy process, with Lévy exponent

$$
\psi\_V(\alpha) = -m \log \left( 1 - \frac{\alpha^2}{2m} \right) \,. \tag{21}
$$

Properties of the variance-gamma process, and financial models based on it, have been investigated extensively in Madan (1990), Madan and Milne (1991), Madan et al. (1998), Carr et al. (2002) and many other works.
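The construction (20) is straightforward to simulate: conditionally on *γt*, the value *Vt* is Gaussian with mean zero and variance *γt*, so that Var[*Vt*] = E[*γt*] = *t*. A minimal Monte Carlo sketch (parameter values, seed, and sample size are illustrative):

```python
import math
import random
import statistics

random.seed(1)
m, t, n = 3.0, 1.5, 100_000   # illustrative parameter values

# V_t = W_{gamma_t}: draw the gamma time change, then a conditionally
# Gaussian value with variance gamma_t.
samples = []
for _ in range(n):
    g = random.gammavariate(m * t, 1.0 / m)     # standard gamma subordinator at t
    samples.append(math.sqrt(g) * random.gauss(0.0, 1.0))

mean = statistics.fmean(samples)
var = statistics.fmean((v - mean) ** 2 for v in samples)

assert abs(mean) < 0.03        # E[V_t] = 0
assert abs(var - t) < 0.06     # Var[V_t] = E[gamma_t] = t
```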

The other object we require going forward is the gamma bridge (Brody et al. (2008b), Emery and Yor (2004), Yor (2007)). Let {*γt*} be a standard gamma subordinator with parameter *m*. For fixed *T* > 0 the process {*γtT*}*t*≥0 defined by

$$
\gamma\_{tT} = \frac{\gamma\_t}{\gamma\_T} \tag{22}
$$

for 0 ≤ *t* ≤ *T* and *γtT* = 1 for *t* > *T* will be called a standard gamma bridge, with parameter *m*, over the interval [0, *T*]. One can check that for 0 < *t* < *T* the random variable *γtT* has a beta distribution (Brody et al. 2008b, pp. 6–9). In particular, one finds that its density is given by

$$\mathbb{P}\left[\gamma\_{tT} \in \mathrm{d}y\right] = \mathbb{1}\_{\{0 < y < 1\}} \frac{y^{mt-1}(1-y)^{m(T-t)-1}}{B[mt, m(T-t)]} \,\mathrm{d}y\,\,\,\tag{23}$$

where

$$B[a,b] = \frac{\Gamma[a]\,\Gamma[b]}{\Gamma[a+b]}.\tag{24}$$

It follows then by use of the integral formula

$$B[a,b] = \int\_0^1 y^{a-1}(1-y)^{b-1} dy\tag{25}$$

that for all *n* ∈ N we have

$$\mathbb{E}\left[\gamma\_{tT}^{n}\right] = \frac{B\left[mt + n, m(T - t)\right]}{B\left[mt, m(T - t)\right]},\tag{26}$$

and hence

$$\mathbb{E}\left[\gamma\_{tT}^{n}\right] = \frac{(mt)\_{n}}{(mT)\_{n}}.\tag{27}$$

Accordingly, one has

$$\mathbb{E}\left[\gamma\_{tT}\right] = t/T, \qquad \mathbb{E}\left[\gamma\_{tT}^2\right] = \frac{t(mt+1)}{T(mT+1)}, \tag{28}$$

and therefore

$$\text{Var}[\gamma\_{tT}] = \frac{t(T-t)}{T^2(1+mT)}.\tag{29}$$

One observes, in particular, that the expectation of *γtT* does not depend on *m*, whereas the variance of *γtT* decreases as *m* increases.
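The moment identities (27)–(29) can be confirmed with a few lines of exact arithmetic. The sketch below (parameter values are illustrative) evaluates (27) via the Pochhammer symbol and checks it against (28) and (29).

```python
def pochhammer(a: float, n: int) -> float:
    """Rising factorial (a)_n = a (a + 1) ... (a + n - 1)."""
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def bridge_moment(n: int, m: float, t: float, T: float) -> float:
    """E[gamma_tT^n] = (mt)_n / (mT)_n, as in Equation (27)."""
    return pochhammer(m * t, n) / pochhammer(m * T, n)

m, t, T = 2.0, 0.7, 1.0   # illustrative parameter values
mean = bridge_moment(1, m, t, T)
second = bridge_moment(2, m, t, T)
var = second - mean ** 2

assert abs(mean - t / T) < 1e-12                                   # Equation (28)
assert abs(second - t * (m * t + 1) / (T * (m * T + 1))) < 1e-12   # Equation (28)
assert abs(var - t * (T - t) / (T ** 2 * (1 + m * T))) < 1e-12     # Equation (29)
```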

**Definition 1.** *For fixed T* > 0*, the process* {Γ*tT*}*t*≥0 *defined by*

$$
\Gamma\_{tT} = \gamma\_T^{-\frac{1}{2}} \left( \mathcal{W}\_{\gamma\_t} - \gamma\_{tT}\, \mathcal{W}\_{\gamma\_T} \right) \tag{30}
$$

*for* 0 ≤ *t* ≤ *T and* Γ*tT* = 0 *for t* > *T will be called a normalized variance-gamma bridge.*

We proceed to work out various properties of this process. We observe that Γ*tT* is conditionally Gaussian, from which it follows that E[Γ*tT* | *γt*, *γT*] = 0 and E[Γ²*tT* | *γt*, *γT*] = *γtT*(1 − *γtT*). Therefore E[Γ*tT*] = 0 and E[Γ²*tT*] = E[*γtT*] − E[*γ*²*tT*], and thus by use of (28) we have

$$\text{Var}\left[\Gamma\_{tT}\right] = \frac{mt\left(T-t\right)}{T\left(1+mT\right)}.\tag{31}$$

Now, recall (Yor (2007), Emery and Yor (2004)) that the gamma process and the associated gamma bridge have the following fundamental independence property. Define

$$\mathcal{G}\_t^\* = \sigma\left\{\gamma\_s/\gamma\_t,\; s \in [0, t] \right\}, \quad \mathcal{G}\_t^+ = \sigma\left\{\gamma\_u,\; u \in [t, \infty) \right\}. \tag{32}$$

Then for every *t* ≥ 0 it holds that G∗*t* and G+*t* are independent. In particular, *γst* and *γu* are independent for 0 ≤ *s* ≤ *t* ≤ *u* and *t* > 0. It also holds that *γst* and *γuv* are independent for 0 ≤ *s* ≤ *t* ≤ *u* ≤ *v* and *t* > 0. Furthermore, we have:

**Lemma 1.** *If* 0 ≤ *s* ≤ *t* ≤ *u and t* > 0 *then* Γ*st and γu are independent.*

**Proof.** We recall that if a random variable *X* is normally distributed with mean *μ* and variance *ν*2 then

$$\mathbb{P}\left[X < x\right] = N\left(\frac{x - \mu}{\nu}\right),\tag{33}$$

where *N* : R → (0, 1) is defined by

$$N(x) = \frac{1}{\sqrt{2\pi}} \int\_{-\infty}^{x} \exp\left(-\tfrac{1}{2}y^2\right) \mathrm{d}y\,. \tag{34}$$

Since Γ*tT* is conditionally Gaussian, by use of the tower property we find that

$$\begin{split} F\_{\Gamma\_{st},\, \gamma\_u}(x,y) &= \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{st}\le x\}}\mathbb{1}\_{\{\gamma\_u\le y\}}\right] \\ &= \mathbb{E}\left[\mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{st}\le x\}}\mathbb{1}\_{\{\gamma\_u\le y\}} \,\middle|\, \gamma\_s, \gamma\_t, \gamma\_u\right]\right] \\ &= \mathbb{E}\left[\mathbb{1}\_{\{\gamma\_u\le y\}}\, \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{st}\le x\}} \,\middle|\, \gamma\_s, \gamma\_t, \gamma\_u\right]\right] \\ &= \mathbb{E}\left[\mathbb{1}\_{\{\gamma\_u\le y\}}\, N\!\left(x \left(\gamma\_{st}\left(1-\gamma\_{st}\right)\right)^{-\frac{1}{2}}\right)\right] \\ &= \mathbb{E}\left[\mathbb{1}\_{\{\gamma\_u\le y\}}\right] \mathbb{E}\left[N\!\left(x \left(\gamma\_{st}\left(1-\gamma\_{st}\right)\right)^{-\frac{1}{2}}\right)\right], \end{split} \tag{35}$$

where the last line follows from the independence of *γst* and *γu*.

By a straightforward extension of the argument we deduce that if 0 ≤ *s* ≤ *t* ≤ *u* ≤ *v* and *t* > 0 then Γ*st* and *γuv* are independent. Further, we have:

**Lemma 2.** *If* 0 ≤ *s* ≤ *t* ≤ *u* ≤ *v and t* > 0 *then* Γ*st and* Γ*uv are independent.*

**Proof.** We recall that the Brownian bridge {*βtT*}0≤*t*≤*T* defined by

$$
\beta\_{tT} = \mathcal{W}\_t - \frac{t}{T}\, \mathcal{W}\_T \tag{36}
$$

for 0 ≤ *t* ≤ *T* and *βtT* = 0 for *t* > *T* is Gaussian with E[*βtT*] = 0, Var[*βtT*] = *t*(*T* − *t*)/*T*, and Cov[*βsT*, *βtT*] = *s*(*T* − *t*)/*T* for 0 ≤ *s* ≤ *t* ≤ *T*. Using the tower property we find that

$$\begin{split} F\_{\Gamma\_{st},\, \Gamma\_{uv}}(x,y) &= \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{st}\le x\}}\mathbb{1}\_{\{\Gamma\_{uv}\le y\}}\right] \\ &= \mathbb{E}\left[\mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{st}\le x\}}\mathbb{1}\_{\{\Gamma\_{uv}\le y\}} \,\middle|\, \gamma\_s, \gamma\_t, \gamma\_u, \gamma\_v\right]\right] \\ &= \mathbb{E}\left[\mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{st}\le x\}} \,\middle|\, \gamma\_s, \gamma\_t, \gamma\_u, \gamma\_v\right] \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{uv}\le y\}} \,\middle|\, \gamma\_s, \gamma\_t, \gamma\_u, \gamma\_v\right]\right] \\ &= \mathbb{E}\left[N\!\left(x \left(\left(1-\gamma\_{st}\right)\gamma\_{st}\right)^{-\frac{1}{2}}\right)\right] \mathbb{E}\left[N\!\left(y \left(\left(1-\gamma\_{uv}\right)\gamma\_{uv}\right)^{-\frac{1}{2}}\right)\right], \end{split} \tag{37}$$

where in the final step we use (30) along with properties of the Brownian bridge.

A straightforward calculation shows that if 0 ≤ *s* ≤ *t* ≤ *u* and *t* > 0 then

$$
\Gamma\_{su} = (\gamma\_{tu})^{\frac{1}{2}} \Gamma\_{st} + \gamma\_{st} \Gamma\_{tu} \,. \tag{38}
$$
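The decomposition (38) is an algebraic identity in the underlying Brownian and gamma values, so it holds path by path. The following Python sketch (parameter values and seed are arbitrary illustrative choices) builds one joint realization and verifies the identity to machine precision, writing Γ*ab* = *γb*⁻¹ᐟ²(*Wγa* − *γab Wγb*) for the normalized bridge over [*a*, *b*] in line with (30):

```python
import math
import random

random.seed(2)
m, s, t, u = 2.0, 0.4, 1.0, 1.8   # illustrative dates with s < t < u

# One joint sample of the gamma subordinator at s < t < u, and of an
# independent Brownian motion evaluated at the time-changed dates.
g_s = random.gammavariate(m * s, 1 / m)
g_t = g_s + random.gammavariate(m * (t - s), 1 / m)
g_u = g_t + random.gammavariate(m * (u - t), 1 / m)
W_s = math.sqrt(g_s) * random.gauss(0, 1)
W_t = W_s + math.sqrt(g_t - g_s) * random.gauss(0, 1)
W_u = W_t + math.sqrt(g_u - g_t) * random.gauss(0, 1)

def nvg_bridge(W_a, W_b, g_a, g_b):
    """Gamma_{ab} = g_b^(-1/2) * (W_{gamma_a} - (g_a/g_b) W_{gamma_b})."""
    return (W_a - (g_a / g_b) * W_b) / math.sqrt(g_b)

lhs = nvg_bridge(W_s, W_u, g_s, g_u)
rhs = math.sqrt(g_t / g_u) * nvg_bridge(W_s, W_t, g_s, g_t) \
      + (g_s / g_t) * nvg_bridge(W_t, W_u, g_t, g_u)
assert abs(lhs - rhs) < 1e-12   # Equation (38) holds pathwise
```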

With this result at hand we obtain the following:

**Theorem 2.** *The processes* {<sup>Γ</sup>*tT*}0≤*t*≤*T and* {*<sup>γ</sup>tT*}0≤*t*≤*T are jointly Markov.*

**Proof.** To establish the Markov property it suffices to show that for any bounded measurable function *φ* : R × R → R, any *n* ∈ N, and any 0 ≤ *tn* ≤ *tn*−1 ≤ ... ≤ *t*1 ≤ *t* ≤ *T*, we have

$$\mathbb{E}\left[\phi(\Gamma\_{tT},\gamma\_{tT}) \mid \Gamma\_{t\_1T}, \gamma\_{t\_1T},\, \Gamma\_{t\_2T}, \gamma\_{t\_2T},\, \dots,\, \Gamma\_{t\_nT}, \gamma\_{t\_nT}\right] = \mathbb{E}\left[\phi(\Gamma\_{tT},\gamma\_{tT}) \mid \Gamma\_{t\_1T}, \gamma\_{t\_1T}\right]. \tag{39}$$

We present the proof for *n* = 2. Thus we need to show that

$$\mathbb{E}\left[\phi(\Gamma\_{tT},\gamma\_{tT}) \mid \Gamma\_{t\_1T}, \gamma\_{t\_1T}, \Gamma\_{t\_2T}, \gamma\_{t\_2T}\right] = \mathbb{E}\left[\phi(\Gamma\_{tT},\gamma\_{tT}) \mid \Gamma\_{t\_1T}, \gamma\_{t\_1T}\right]. \tag{40}$$

As a consequence of (38) we have

$$\mathbb{E}\left[\phi(\Gamma\_{tT},\gamma\_{tT}) \mid \Gamma\_{t\_1T}, \gamma\_{t\_1T}, \Gamma\_{t\_2T}, \gamma\_{t\_2T}\right] = \mathbb{E}\left[\phi(\Gamma\_{tT},\gamma\_{tT}) \mid \Gamma\_{t\_1T}, \gamma\_{t\_1T}, \Gamma\_{t\_2t\_1}, \gamma\_{t\_2t\_1}\right]. \tag{41}$$

Therefore, it suffices to show that

$$\mathbb{E}\left[\phi(\Gamma\_{tT},\gamma\_{tT}) \mid \Gamma\_{t\_1T}, \gamma\_{t\_1T}, \Gamma\_{t\_2t\_1}, \gamma\_{t\_2t\_1}\right] = \mathbb{E}\left[\phi(\Gamma\_{tT},\gamma\_{tT}) \mid \Gamma\_{t\_1T}, \gamma\_{t\_1T}\right]. \tag{42}$$

Let us write

$$f\_{\Gamma\_{tT},\, \gamma\_{tT},\, \Gamma\_{t\_1T},\, \gamma\_{t\_1T},\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(x,y,a,b,c,d) \tag{43}$$

for the joint density of Γ*tT*, *γtT*, <sup>Γ</sup>*t*1*T*, *γt*1*T*, <sup>Γ</sup>*t*2*t*1 , *γt*2*t*1 . Then for the conditional density of Γ*tT* and *γtT* given <sup>Γ</sup>*t*1*<sup>T</sup>* = *a*, *γt*1*<sup>T</sup>* = *b*, <sup>Γ</sup>*t*2*t*1 = *c*, *γt*2*t*1 = *d* we have

$$g\_{\Gamma\_{tT},\, \gamma\_{tT}}(x,y,a,b,c,d) = \frac{f\_{\Gamma\_{tT},\, \gamma\_{tT},\, \Gamma\_{t\_1T},\, \gamma\_{t\_1T},\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(x,y,a,b,c,d)}{f\_{\Gamma\_{t\_1T},\, \gamma\_{t\_1T},\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(a,b,c,d)}. \tag{44}$$

Thus,

$$\mathbb{E}\left[\phi(\Gamma\_{tT},\gamma\_{tT}) \mid \Gamma\_{t\_1T}, \gamma\_{t\_1T}, \Gamma\_{t\_2t\_1}, \gamma\_{t\_2t\_1}\right] = \int\_{\mathbb{R}}\int\_{\mathbb{R}} \phi(x,y)\, g\_{\Gamma\_{tT},\, \gamma\_{tT}}(x,y,\Gamma\_{t\_1T},\gamma\_{t\_1T},\Gamma\_{t\_2t\_1},\gamma\_{t\_2t\_1})\, \mathrm{d}x\, \mathrm{d}y\,. \tag{45}$$

Similarly,

$$\begin{split} & \mathbb{E} \left[ \boldsymbol{\phi}(\boldsymbol{\Gamma}\_{t\boldsymbol{T}\prime} \boldsymbol{\gamma}\_{t\prime\prime}) \mid \boldsymbol{\Gamma}\_{t\_{1}\boldsymbol{T}\prime} \boldsymbol{\gamma}\_{t\_{1}\boldsymbol{T}} \right] \\ &= \int\_{\mathbb{R}} \int\_{\mathbb{R}} \boldsymbol{\phi}(\boldsymbol{x}, \boldsymbol{y}) \, \boldsymbol{g}\_{\boldsymbol{\Gamma}\_{t\prime\prime}\prime t\prime}(\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{\Gamma}\_{t\_{1}\boldsymbol{T}\prime} \boldsymbol{\gamma}\_{t\_{1}\boldsymbol{T}}) \, \mathrm{d}\boldsymbol{x} \, \mathrm{d}\boldsymbol{y} . \end{split} \tag{46}$$

where for the conditional density of Γ*tT* and *γtT* given <sup>Γ</sup>*t*1*<sup>T</sup>* = *a*, *γt*1*<sup>T</sup>* = *b* we have

$$g\_{\Gamma\_{tT},\, \gamma\_{tT}}(x,y,a,b) = \frac{f\_{\Gamma\_{tT},\, \gamma\_{tT},\, \Gamma\_{t\_1T},\, \gamma\_{t\_1T}}(x,y,a,b)}{f\_{\Gamma\_{t\_1T},\, \gamma\_{t\_1T}}(a,b)}. \tag{47}$$

Note that the conditional probability densities that we introduce in formulae such as those above are "regular" conditional densities (Williams 1991, p. 91). We shall show that

$$g\_{\Gamma\_{tT},\, \gamma\_{tT}}(x,y,\Gamma\_{t\_1T},\gamma\_{t\_1T},\Gamma\_{t\_2t\_1},\gamma\_{t\_2t\_1}) = g\_{\Gamma\_{tT},\, \gamma\_{tT}}(x,y,\Gamma\_{t\_1T},\gamma\_{t\_1T})\,. \tag{48}$$

Writing

$$F\_{\Gamma\_{tT},\, \gamma\_{tT},\, \Gamma\_{t\_1T},\, \gamma\_{t\_1T},\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(x,y,a,b,c,d) = \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{tT}\le x\}}\mathbb{1}\_{\{\gamma\_{tT}\le y\}}\, \mathbb{1}\_{\{\Gamma\_{t\_1T}\le a\}}\mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}}\, \mathbb{1}\_{\{\Gamma\_{t\_2t\_1}\le c\}}\mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}}\right] \tag{49}$$

for the joint distribution function, we see that

$$\begin{split} &F\_{\Gamma\_{tT},\, \gamma\_{tT},\, \Gamma\_{t\_1T},\, \gamma\_{t\_1T},\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(x,y,a,b,c,d) \\ &= \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{tT}\le x\}}\mathbb{1}\_{\{\gamma\_{tT}\le y\}}\, \mathbb{1}\_{\{\Gamma\_{t\_1T}\le a\}}\mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}}\, \mathbb{1}\_{\{\Gamma\_{t\_2t\_1}\le c\}}\mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}}\right] \\ &= \mathbb{E}\left[\mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{tT}\le x\}}\mathbb{1}\_{\{\gamma\_{tT}\le y\}}\, \mathbb{1}\_{\{\Gamma\_{t\_1T}\le a\}}\mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}}\, \mathbb{1}\_{\{\Gamma\_{t\_2t\_1}\le c\}}\mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}} \,\middle|\, \gamma\_{t\_2}, \gamma\_{t\_1}, \gamma\_t, \gamma\_T\right]\right] \\ &= \mathbb{E}\left[\mathbb{1}\_{\{\gamma\_{tT}\le y\}}\, \mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}}\, \mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}}\, \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{tT}\le x\}}\, \mathbb{1}\_{\{\Gamma\_{t\_1T}\le a\}}\, \mathbb{1}\_{\{\Gamma\_{t\_2t\_1}\le c\}} \,\middle|\, \gamma\_{t\_2}, \gamma\_{t\_1}, \gamma\_t, \gamma\_T\right]\right] \\ &= \mathbb{E}\left[\mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{tT}\le x\}}\, \mathbb{1}\_{\{\gamma\_{tT}\le y\}}\, \mathbb{1}\_{\{\Gamma\_{t\_1T}\le a\}}\, \mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}} \,\middle|\, \gamma\_{t\_1}, \gamma\_t, \gamma\_T\right] N\!\left(\frac{c}{\sqrt{\left(1-\gamma\_{t\_2t\_1}\right)\gamma\_{t\_2t\_1}}}\right)\mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}}\right], \end{split} \tag{50}$$

where the last step follows as a consequence of Lemma 2. Thus we have

$$\begin{split} &F\_{\Gamma\_{tT},\, \gamma\_{tT},\, \Gamma\_{t\_1T},\, \gamma\_{t\_1T},\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(x,y,a,b,c,d) \\ &= \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{tT}\le x\}}\, \mathbb{1}\_{\{\gamma\_{tT}\le y\}}\, \mathbb{1}\_{\{\Gamma\_{t\_1T}\le a\}}\, \mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}}\, N\!\left(\frac{c}{\sqrt{\left(1-\gamma\_{t\_2t\_1}\right)\gamma\_{t\_2t\_1}}}\right)\mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}}\right] \\ &= \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{tT}\le x\}}\, \mathbb{1}\_{\{\gamma\_{tT}\le y\}}\, \mathbb{1}\_{\{\Gamma\_{t\_1T}\le a\}}\, \mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}}\right] \mathbb{E}\left[N\!\left(\frac{c}{\sqrt{\left(1-\gamma\_{t\_2t\_1}\right)\gamma\_{t\_2t\_1}}}\right)\mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}}\right] \\ &= F\_{\Gamma\_{tT},\, \gamma\_{tT},\, \Gamma\_{t\_1T},\, \gamma\_{t\_1T}}(x,y,a,b) \times F\_{\Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(c,d), \end{split} \tag{51}$$

where the next to last step follows by virtue of the fact that Γ*st* and *γuv* are independent for 0 ≤ *s* ≤ *t* ≤ *u* ≤ *v* and *t* > 0. Similarly,

$$\begin{split} &F\_{\Gamma\_{t\_1T},\, \gamma\_{t\_1T},\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(a,b,c,d) \\ &= \mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{t\_1T}\le a\}}\, \mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}}\, \mathbb{1}\_{\{\Gamma\_{t\_2t\_1}\le c\}}\, \mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}}\right] \\ &= \mathbb{E}\left[\mathbb{E}\left[\mathbb{1}\_{\{\Gamma\_{t\_1T}\le a\}}\, \mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}}\, \mathbb{1}\_{\{\Gamma\_{t\_2t\_1}\le c\}}\, \mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}} \,\middle|\, \gamma\_{t\_2}, \gamma\_{t\_1}, \gamma\_T\right]\right] \end{split} \tag{52}$$

and hence

$$\begin{split} &= \mathbb{E}\left[N\!\left(\frac{a}{\sqrt{\left(1-\gamma\_{t\_1T}\right)\gamma\_{t\_1T}}}\right)\mathbb{1}\_{\{\gamma\_{t\_1T}\le b\}}\right] \mathbb{E}\left[N\!\left(\frac{c}{\sqrt{\left(1-\gamma\_{t\_2t\_1}\right)\gamma\_{t\_2t\_1}}}\right)\mathbb{1}\_{\{\gamma\_{t\_2t\_1}\le d\}}\right] \\ &= F\_{\Gamma\_{t\_1T},\, \gamma\_{t\_1T}}(a,b) \times F\_{\Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(c,d), \end{split} \tag{53}$$

Thus we deduce that

$$f\_{\Gamma\_{tT},\, \gamma\_{tT},\, \Gamma\_{t\_1T},\, \gamma\_{t\_1T},\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(x,y,a,b,c,d) \tag{54}$$

$$= f\_{\Gamma\_{tT},\, \gamma\_{tT},\, \Gamma\_{t\_1T},\, \gamma\_{t\_1T}}(x,y,a,b) \times f\_{\Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(c,d)\,, \tag{55}$$

and

$$f\_{\Gamma\_{t\_1T},\, \gamma\_{t\_1T},\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(a,b,c,d) = f\_{\Gamma\_{t\_1T},\, \gamma\_{t\_1T}}(a,b) \times f\_{\Gamma\_{t\_2t\_1},\, \gamma\_{t\_2t\_1}}(c,d)\,, \tag{56}$$

and the theorem follows.

#### **4. Variance Gamma Information**

Fix *T* > 0 and let {Γ*tT*} be a normalized variance-gamma bridge, as defined by (30). Let {*γtT*} be the associated gamma bridge defined by (22). Let *XT* be a random variable and assume that *XT*, {*γt*}*t*≥0 and {*Wt*}*t*≥0 are independent. We are led to the following:

**Definition 2.** *By a variance-gamma information process carrying the market factor XT we mean a process* {*ξt*}*t*≥0 *that takes the form*

$$\xi\_t = \Gamma\_{tT} + \sigma \,\gamma\_{tT} \, X\_T \tag{57}$$

*for* 0 ≤ *t* ≤ *T and ξt* = *σXT for t* > *T, where σ is a positive constant.*

The market filtration is assumed to be the standard augmented filtration generated jointly by {*ξt*} and {*γtT*}. A calculation shows that if 0 ≤ *s* ≤ *t* ≤ *T* and *t* > 0 then

$$
\xi\_s = \Gamma\_{st} \left( \gamma\_{tT} \right)^{\frac{1}{2}} + \xi\_t\, \gamma\_{st}\,. \tag{58}
$$
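Relation (58) is a pathwise identity, obtained by combining (38) with *γsT* = *γst γtT*. A minimal sketch (the parameter values, seed, and the realization of *XT* are illustrative assumptions) verifies it on one joint sample:

```python
import math
import random

random.seed(3)
m, sigma, s, t, T = 2.0, 1.5, 0.3, 0.8, 1.0   # illustrative parameter values
X_T = 0.7   # a fixed realization of the market factor (illustrative value)

# One joint realization of the gamma subordinator and the time-changed
# Brownian motion at the dates s < t < T.
g_s = random.gammavariate(m * s, 1 / m)
g_t = g_s + random.gammavariate(m * (t - s), 1 / m)
g_T = g_t + random.gammavariate(m * (T - t), 1 / m)
W_s = math.sqrt(g_s) * random.gauss(0, 1)
W_t = W_s + math.sqrt(g_t - g_s) * random.gauss(0, 1)
W_T = W_t + math.sqrt(g_T - g_t) * random.gauss(0, 1)

def nvg_bridge_to_T(W_a, g_a):
    """Gamma_{aT} = g_T^(-1/2) (W_{gamma_a} - (g_a/g_T) W_{gamma_T}), cf. (30)."""
    return (W_a - (g_a / g_T) * W_T) / math.sqrt(g_T)

def xi(W_a, g_a):
    """Information process xi_a = Gamma_{aT} + sigma * gamma_{aT} * X_T, cf. (57)."""
    return nvg_bridge_to_T(W_a, g_a) + sigma * (g_a / g_T) * X_T

# Gamma_{st}, the normalized bridge over [s, t], used on the right of (58)
Gamma_st = (W_s - (g_s / g_t) * W_t) / math.sqrt(g_t)

lhs = xi(W_s, g_s)
rhs = Gamma_st * math.sqrt(g_t / g_T) + xi(W_t, g_t) * (g_s / g_t)
assert abs(lhs - rhs) < 1e-12   # Equation (58) holds pathwise
```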

We are thus led to the following result required for the valuation of assets.

**Theorem 3.** *The processes* {*ξt*}0≤*t*≤*T and* {*<sup>γ</sup>tT*}0≤*t*≤*T are jointly Markov.*

**Proof.** It suffices to show that for any bounded measurable function *φ* : R × R → R, any *n* ∈ N, and any 0 ≤ *tn* ≤ ··· ≤ *t*1 ≤ *t* ≤ *T*, we have

$$\mathbb{E}\left[\phi(\xi\_t, \gamma\_{tT}) \mid \xi\_{t\_1}, \xi\_{t\_2}, \dots, \xi\_{t\_n}, \gamma\_{t\_1T}, \gamma\_{t\_2T}, \dots, \gamma\_{t\_nT}\right] = \mathbb{E}\left[\phi(\xi\_t, \gamma\_{tT}) \mid \xi\_{t\_1}, \gamma\_{t\_1T}\right]. \tag{59}$$

We present the proof for *n* = 2. Thus, we propose to show that

$$\mathbb{E}\left[\phi(\xi\_t, \gamma\_{tT}) \mid \xi\_{t\_1}, \xi\_{t\_2}, \gamma\_{t\_1T}, \gamma\_{t\_2T}\right] = \mathbb{E}\left[\phi(\xi\_t, \gamma\_{tT}) \mid \xi\_{t\_1}, \gamma\_{t\_1T}\right]. \tag{60}$$

By (58), we have

$$\begin{split} &\mathbb{E}\left[\phi(\xi\_t, \gamma\_{tT}) \mid \xi\_{t\_1}, \xi\_{t\_2}, \gamma\_{t\_1T}, \gamma\_{t\_2T}\right] \\ &= \mathbb{E}\left[\phi(\xi\_t, \gamma\_{tT}) \mid \xi\_{t\_1}, \xi\_{t\_2}, \gamma\_{t\_1T}, \gamma\_{t\_2t\_1}\right] \\ &= \mathbb{E}\left[\phi(\xi\_t, \gamma\_{tT}) \mid \xi\_{t\_1}, \Gamma\_{t\_2t\_1}, \gamma\_{t\_1T}, \gamma\_{t\_2t\_1}\right] \\ &= \mathbb{E}\left[\phi\left(\Gamma\_{tT} + \sigma\, \gamma\_{tT} X\_T,\, \gamma\_{tT}\right) \,\middle|\, \Gamma\_{t\_1T} + \sigma\, \gamma\_{t\_1T} X\_T,\, \Gamma\_{t\_2t\_1},\, \gamma\_{t\_1T},\, \gamma\_{t\_2t\_1}\right]. \end{split} \tag{61}$$

Finally, we invoke Lemma 2 and Theorem 2 to conclude that

$$\begin{split} \mathbb{E}\left[\phi(\xi\_t, \gamma\_{tT}) \mid \xi\_{t\_1}, \xi\_{t\_2}, \gamma\_{t\_1 T}, \gamma\_{t\_2 T}\right] &= \mathbb{E}\left[\phi(\Gamma\_{tT} + \gamma\_{tT}\,\sigma X\_T,\ \gamma\_{tT}) \,\big|\, \Gamma\_{t\_1 T} + \gamma\_{t\_1 T}\,\sigma X\_T,\ \gamma\_{t\_1 T}\right] \\ &= \mathbb{E}\left[\phi(\xi\_t, \gamma\_{tT}) \mid \xi\_{t\_1}, \gamma\_{t\_1 T}\right]. \end{split} \tag{62}$$

The generalization to *n* > 2 is straightforward.

## **5. Information-Based Pricing**

Now we are in a position to consider the valuation of a financial asset in the setting just discussed. One recalls that P is understood to be the risk-neutral measure and that the interest rate is constant. The payoff of the asset at time *T* is taken to be an integrable random variable of the form *h*(*XT*) for some Borel function *h*, where *XT* is the information revealed at *T*. The filtration is generated jointly by the variance-gamma information process {*ξt*} and the associated gamma bridge {*γtT*}. The value of the asset at time *t* ∈ [0, *T*) is then given by the general expression (2), which on account of Theorem 3 reduces in the present context to

$$S\_t = \mathbf{e}^{-r\left(T-t\right)}\, \mathbb{E}\left[h(X\_T) \mid \xi\_t,\, \gamma\_{tT}\right],\tag{63}$$

and our goal is to work out this expectation explicitly.

Let us write *FXT* for the *a priori* distribution function of *XT*. Thus *FXT* : *x* ∈ R → *FXT* (*x*) ∈ [0, 1] and we have

$$F\_{X\_T}(x) = \mathbb{P}\left(X\_T \le x\right). \tag{64}$$

Occasionally, it will be typographically convenient to write $F\_{X\_T}^{(x)}$ in place of *FXT*(*x*), and similarly for other distribution functions. To proceed, we require the following:

**Lemma 3.** *Let X be a random variable with distribution* {*FX*(*x*)}*x*∈R *and let Y be a continuous random variable with distribution* {*FY*(*y*)}*y*∈R *and density* {*fY*(*y*)}*y*∈R*. Then for all y* ∈ R *for which fY*(*y*) > 0 *we have*

$$F\_{X|Y=y}^{(x)} = \frac{\int\_{u \in (-\infty, x]} f\_{Y|X=u}^{(y)}\, \mathrm{d}F\_X^{(u)}}{\int\_{u \in (-\infty, \infty)} f\_{Y|X=u}^{(y)}\, \mathrm{d}F\_X^{(u)}},\tag{65}$$

*where* $F\_{X|Y=y}^{(x)}$ *denotes the conditional distribution* P (*X* ≤ *x* | *Y* = *y*)*, and where*

$$f\_{Y|X=u}^{(y)} = \frac{\mathrm{d}}{\mathrm{d}y}\, \mathbb{P}\left(Y \le y \mid X=u\right). \tag{66}$$

**Proof.** For *any* two random variables *X* and *Y* it holds that

$$\begin{split} \mathbb{P}\left(X \le x, \,\, Y \le y\right) &= \mathbb{E}\left[\mathbb{1}\_{\{X \le x\}} \, \mathbb{1}\_{\{Y \le y\}}\right] \\ &= \mathbb{E}\left[\mathbb{E}\left[\mathbb{1}\_{\{X \le x\}} \, \big|\, Y\right] \mathbb{1}\_{\{Y \le y\}}\right] \\ &= \mathbb{E}\left[F\_{X|Y}^{(x)} \, \mathbb{1}\_{\{Y \le y\}}\right]. \end{split} \tag{67}$$

Here we have used the fact that for each *x* ∈ R there exists a Borel measurable function *Px* : *y* ∈ R → *Px*(*y*) ∈ [0, 1] such that $\mathbb{E}\left[\mathbb{1}\_{\{X \le x\}} \mid Y\right] = P\_x(Y)$. Then for *y* ∈ R we define

$$F\_{X|Y=y}^{(x)} = P\_x(y)\,. \tag{68}$$

Hence

$$\mathbb{P}\left(X \le x,\ Y \le y\right) = \int\_{v \in (-\infty, y]} F\_{X|Y=v}^{(x)}\, \mathrm{d}F\_Y^{(v)}\,. \tag{69}$$

By symmetry, we have

$$\mathbb{P}\left(X \le x,\ Y \le y\right) = \int\_{u \in (-\infty, x]} F^{(y)}\_{Y|X=u}\, \mathrm{d}F^{(u)}\_{X}\,,\tag{70}$$

from which it follows that we have the relation

$$\int\_{u \in (-\infty, x]} F\_{Y|X=u}^{(y)}\, \mathrm{d}F\_X^{(u)} = \int\_{v \in (-\infty, y]} F\_{X|Y=v}^{(x)}\, \mathrm{d}F\_Y^{(v)}\,. \tag{71}$$

Moving ahead, let us consider the measure *FX*|*Y*=*y*(d*x*) on (R, B) defined for each *y* ∈ R by setting

$$F\_{X|Y=y}(A) = \mathbb{E}\left[\mathbb{1}\_{\{X \in A\}} \mid Y = y\right] \tag{72}$$

for any *A* ∈ B. Then *FX*|*Y*=*y*(d*x*) is absolutely continuous with respect to *FX*(d*x*). Indeed, suppose that *FX*(*B*) = 0 for some *B* ∈ B. Now, $F\_{X|Y=y}(B) = \mathbb{E}[\mathbb{1}\_{\{X \in B\}} \mid Y = y]$. But if $\mathbb{E}[\mathbb{1}\_{\{X \in B\}}] = 0$, then $\mathbb{E}\big[\mathbb{E}[\mathbb{1}\_{\{X \in B\}} \mid Y]\big] = 0$, and hence $\mathbb{E}[\mathbb{1}\_{\{X \in B\}} \mid Y] = 0$ almost surely, since a conditional expectation of a nonnegative random variable with vanishing mean must itself vanish; therefore $\mathbb{E}[\mathbb{1}\_{\{X \in B\}} \mid Y = y] = 0$. Thus *FX*|*Y*=*y*(*B*) vanishes for any *B* ∈ B for which *FX*(*B*) vanishes. It follows by the Radon-Nikodym theorem that for each *y* ∈ R there exists a density {*gy*(*x*)}*x*∈R such that

$$F\_{X|Y=y}^{(x)} = \int\_{u \in \left(-\infty, x\right]} g\_y(u) \, \mathrm{d}F\_X^{(u)} \,. \tag{73}$$

Note that {*gy*(*x*)} is determined uniquely apart from its values on *FX*-null sets. Inserting (73) into (71) we obtain

$$\int\_{u \in \left( -\infty, x \right]} F\_{Y|X=u}^{(y)} \, \mathrm{d}F\_X^{(u)} = \int\_{v \in \left( -\infty, y \right]} \int\_{u \in \left( -\infty, x \right]} g\_v(u) \, \mathrm{d}F\_X^{(u)} \, \mathrm{d}F\_Y^{(v)} \, , \tag{74}$$

and thus by Fubini's theorem we have

$$\int\_{u \in (-\infty, x]} F\_{Y|X=u}^{(y)}\, \mathrm{d}F\_X^{(u)} = \int\_{u \in (-\infty, x]} \int\_{v \in (-\infty, y]} g\_v(u)\, \mathrm{d}F\_Y^{(v)}\, \mathrm{d}F\_X^{(u)}.\tag{75}$$

It follows then that $\{F\_{Y|X=x}^{(y)}\}\_{x \in \mathbb{R}}$ is determined uniquely apart from its values on *FX*-null sets, and we have

$$F\_{Y|X=x}^{(y)} = \int\_{v \in (-\infty, y]} g\_v(x)\, \mathrm{d}F\_Y^{(v)}.\tag{76}$$

This relation holds quite generally and is symmetrical between *X* and *Y*. Indeed, we have not so far assumed that *Y* is a continuous random variable. If *Y* is, in fact, a continuous random variable, then its distribution function is absolutely continuous and admits a density $\{f\_Y^{(y)}\}\_{y \in \mathbb{R}}$. In that case, (76) can be written in the form

$$F\_{Y|X=x}^{(y)} = \int\_{v \in (-\infty, y]} g\_v(x)\, f\_Y^{(v)}\, \mathrm{d}v\,, \tag{77}$$

from which it follows that for each value of *x* the conditional distribution function $\{F\_{Y|X=x}^{(y)}\}\_{y \in \mathbb{R}}$ is absolutely continuous and admits a density $\{f\_{Y|X=x}^{(y)}\}\_{y \in \mathbb{R}}$ such that

$$f\_{Y|X=x}^{(y)} = g\_y(x)\, f\_Y^{(y)}\,. \tag{78}$$

The desired result (65) then follows from (73) and (78) if we observe that

$$f\_Y^{(y)} = \int\_{u \in (-\infty, \infty)} f\_{Y|X=u}^{(y)}\, \mathrm{d}F\_X^{(u)}\,,\tag{79}$$

and that concludes the proof.
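Since the argument above is somewhat abstract, a small numerical check may be helpful. The following sketch (the distributional choices are purely illustrative, not from the text) takes *X* Bernoulli and *Y* conditionally normal, evaluates the ratio in (65), which in this case reduces to a sum over the atoms of *FX*, and compares it with a binned Monte Carlo estimate:

```python
import math
import numpy as np

# Illustrative setup: X is Bernoulli with P(X = 1) = 0.6, and the
# conditional law of Y given X = u is Normal(u, 1).
p = {0.0: 0.4, 1.0: 0.6}

def npdf(z):
    # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def cond_cdf(x, y):
    # P(X <= x | Y = y) computed from Equation (65): the integrals over
    # dF_X become sums over the atoms of the Bernoulli law.
    num = sum(npdf(y - u) * pu for u, pu in p.items() if u <= x)
    den = sum(npdf(y - u) * pu for u, pu in p.items())
    return num / den

# Monte Carlo comparison: estimate P(X = 0 | Y near 0.3) by binning pairs.
rng = np.random.default_rng(0)
X = rng.choice([0.0, 1.0], size=2_000_000, p=[0.4, 0.6])
Y = X + rng.standard_normal(X.size)
mc = float(np.mean(X[np.abs(Y - 0.3) < 0.02] == 0.0))
```

With the seed fixed, the Bayes-formula value and the binned Monte Carlo estimate agree to well within the sampling tolerance.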

Armed with Lemma 3, we are in a position to work out the conditional expectation that leads to the asset price, and we obtain the following:

**Theorem 4.** *The variance-gamma information-based price of a financial asset with payoff h*(*XT*) *at time T is given for t* < *T by*

$$S\_t = \mathbf{e}^{-r\left(T-t\right)} \int\_{x \in \mathbb{R}} h(x)\, \frac{\mathbf{e}^{\left(\sigma\, \xi\_t\, x - \frac{1}{2}\sigma^2 x^2 \gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}}}{\int\_{y \in \mathbb{R}} \mathbf{e}^{\left(\sigma\, \xi\_t\, y - \frac{1}{2}\sigma^2 y^2 \gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}} \mathrm{d}F\_{X\_T}^{(y)}}\, \mathrm{d}F\_{X\_T}^{(x)}.\tag{80}$$

**Proof.** To calculate the conditional expectation of *h*(*XT*), we observe that

$$\mathbb{E}\left[h(X\_T) \mid \xi\_t, \gamma\_{tT}\right] = \mathbb{E}\left[\,\mathbb{E}\left[h(X\_T) \mid \xi\_t, \gamma\_{tT}, \gamma\_T\right] \mid \xi\_t, \gamma\_{tT}\right],\tag{81}$$

by the tower property, where the inner expectation takes the form

$$\mathbb{E}\left[h(X\_T) \mid \xi\_t = \xi,\ \gamma\_{tT} = b,\ \gamma\_T = g\right] = \int\_{x \in \mathbb{R}} h(x)\, \mathrm{d}F^{(x)}\_{X\_T \mid \xi\_t = \xi,\, \gamma\_{tT} = b,\, \gamma\_T = g}.\tag{82}$$

Here by Lemma 3 the conditional distribution function is

$$\begin{split} F^{(x)}\_{X\_T \mid \xi\_t = \xi,\, \gamma\_{tT} = b,\, \gamma\_T = g} &= \frac{\int\_{u \in (-\infty, x]} f^{(\xi)}\_{\xi\_t \mid X\_T = u,\, \gamma\_{tT} = b,\, \gamma\_T = g}\, \mathrm{d}F^{(u)}\_{X\_T \mid \gamma\_{tT} = b,\, \gamma\_T = g}}{\int\_{u \in \mathbb{R}} f^{(\xi)}\_{\xi\_t \mid X\_T = u,\, \gamma\_{tT} = b,\, \gamma\_T = g}\, \mathrm{d}F^{(u)}\_{X\_T \mid \gamma\_{tT} = b,\, \gamma\_T = g}} \\ &= \frac{\int\_{u \in (-\infty, x]} f^{(\xi)}\_{\xi\_t \mid X\_T = u,\, \gamma\_{tT} = b,\, \gamma\_T = g}\, \mathrm{d}F^{(u)}\_{X\_T}}{\int\_{u \in \mathbb{R}} f^{(\xi)}\_{\xi\_t \mid X\_T = u,\, \gamma\_{tT} = b,\, \gamma\_T = g}\, \mathrm{d}F^{(u)}\_{X\_T}} \\ &= \frac{\int\_{u \in (-\infty, x]} \mathbf{e}^{\left(\sigma \xi u - \frac{1}{2}\sigma^2 u^2 b\right)(1 - b)^{-1}}\, \mathrm{d}F^{(u)}\_{X\_T}}{\int\_{\mathbb{R}} \mathbf{e}^{\left(\sigma \xi u - \frac{1}{2}\sigma^2 u^2 b\right)(1 - b)^{-1}}\, \mathrm{d}F^{(u)}\_{X\_T}}\,. \end{split} \tag{83}$$

Therefore, the inner expectation in Equation (81) is given by

$$\mathbb{E}\left[h(X\_T) \mid \xi\_t, \gamma\_{tT}, \gamma\_T\right] = \int\_{x \in \mathbb{R}} h(x)\, \frac{\mathbf{e}^{\left(\sigma\, \xi\_t\, x - \frac{1}{2}\sigma^2 x^2 \gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}}}{\int\_{y \in \mathbb{R}} \mathbf{e}^{\left(\sigma\, \xi\_t\, y - \frac{1}{2}\sigma^2 y^2 \gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}}\, \mathrm{d}F^{(y)}\_{X\_T}}\, \mathrm{d}F^{(x)}\_{X\_T}\,. \tag{84}$$

But the right hand side of (84) depends only on *ξt* and *γtT*. It follows immediately that

$$\mathbb{E}\left[h(X\_T) \mid \xi\_t, \gamma\_{tT}\right] = \int\_{x \in \mathbb{R}} h(x)\, \frac{\mathbf{e}^{\left(\sigma\, \xi\_t\, x - \frac{1}{2}\sigma^2 x^2 \gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}}}{\int\_{y \in \mathbb{R}} \mathbf{e}^{\left(\sigma\, \xi\_t\, y - \frac{1}{2}\sigma^2 y^2 \gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}}\, \mathrm{d}F^{(y)}\_{X\_T}}\, \mathrm{d}F^{(x)}\_{X\_T}\,,\tag{85}$$

which translates into Equation (80), and that concludes the proof.
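When the a priori law of *XT* is discrete, the integrals in the pricing formula reduce to sums, and the theorem is straightforward to evaluate numerically. The following sketch (the function name and argument conventions are ours, not the paper's) implements the formula in that case:

```python
import numpy as np

# A sketch of the pricing formula of Theorem 4 for a market factor X_T
# taking finitely many values x_vals with probabilities p_vals, so that
# the integrals over dF_{X_T} become sums.
def vg_info_price(xi_t, gamma_tT, t, T, r, sigma, x_vals, p_vals, h):
    x = np.asarray(x_vals, dtype=float)
    p = np.asarray(p_vals, dtype=float)
    expo = (sigma * xi_t * x - 0.5 * sigma**2 * x**2 * gamma_tT) / (1.0 - gamma_tT)
    expo -= expo.max()                      # stabilize the exponentials
    w = p * np.exp(expo)                    # unnormalized posterior weights
    return np.exp(-r * (T - t)) * float(np.dot(h(x), w) / w.sum())
```

For a binary factor with *h*(*x*) = *x* this reproduces the credit-risky bond formula of Example 1 below.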

## **6. Examples**

Going forward, we present some examples of variance-gamma information pricing for specific choices of (a) the payoff function *h* : R → R<sup>+</sup> and (b) the distribution of the market factor *XT*. In the figures, we display sample paths for the information processes and the corresponding prices. These paths are generated as follows. First, we simulate outcomes for the market factor *XT*. Second, we simulate paths for the gamma process {*γt*}*t*≥0 over the interval [0, *T*] and an independent Brownian motion {*Wt*}*t*≥0. Third, we evaluate the variance-gamma process {*Wγt*}*t*≥0 over the interval [0, *T*] by subordinating the Brownian motion with the gamma process, and we evaluate the resulting gamma bridge {*γtT*}0≤*t*≤*T*. Fourth, we use these ingredients to construct sample paths of the information processes, where these processes are given as in Definition 2. Finally, we evaluate the pricing formula in Equation (80) for each of the simulated paths and for each time step.
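As a hedged illustration of the recipe above, the following sketch generates one set of paths on a time grid, assuming the gamma subordinator has Gamma(*m*Δ*t*, 1/*m*) increments, so that it has unit mean rate, with *m* playing the role of the shape parameter quoted in the figure captions; the grid size and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m, sigma, T, n = 100.0, 2.0, 1.0, 500
dt = T / n
X_T = 1.0                                   # simulated outcome of the market factor

dgam = rng.gamma(shape=m * dt, scale=1.0 / m, size=n)
gam = np.concatenate([[0.0], np.cumsum(dgam)])        # gamma process on the grid
dW = rng.standard_normal(n) * np.sqrt(dgam)
W_gam = np.concatenate([[0.0], np.cumsum(dW)])        # subordinated (variance-gamma) path
gb = gam / gam[-1]                                    # gamma bridge gamma_{tT}
Gamma_tT = (W_gam - gb * W_gam[-1]) / np.sqrt(gam[-1])  # normalized variance-gamma bridge
xi = Gamma_tT + sigma * gb * X_T                      # information process
```

By construction the bridge terms vanish at *t* = *T*, so the terminal value of the information process reveals *σXT* exactly.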

**Example 1: Credit risky bond.** We begin with the simplest case, that of a unit-principal credit-risky bond without recovery. We set *h*(*x*) = *x*, with P(*XT* = 0) = *p*0 and P(*XT* = 1) = *p*1, where *p*0 + *p*1 = 1. Thus, we have

$$F\_{X\_T}(x) = p\_0\, \delta\_0(x) + p\_1\, \delta\_1(x)\,, \tag{86}$$

where

$$\delta\_a(x) = \int\_{y \in (-\infty, x]} \delta\_a\left(\mathrm{d}y\right), \tag{87}$$

and *δa*(d*x*) denotes the Dirac measure concentrated at the point *a*, and we are led to the following:

**Proposition 1.** *The variance-gamma information-based price of a unit-principal credit-risky discount bond with no recovery is given by*

$$S\_t = \mathbf{e}^{-r\left(T-t\right)}\, \frac{p\_1\, \mathbf{e}^{\left(\sigma\, \xi\_t - \frac{1}{2}\sigma^2 \gamma\_{tT}\right)\left(1 - \gamma\_{tT}\right)^{-1}}}{p\_0 + p\_1\, \mathbf{e}^{\left(\sigma\, \xi\_t - \frac{1}{2}\sigma^2 \gamma\_{tT}\right)\left(1 - \gamma\_{tT}\right)^{-1}}}.\tag{88}$$

Now let *ω* ∈ Ω denote the outcome of chance. By use of Equation (57) one can check rather directly that if *XT*(*ω*) = 1, then lim*t*→*T* *St* = 1, whereas if *XT*(*ω*) = 0, then lim*t*→*T* *St* = 0. More explicitly, we find that

$$S\_t\Big|\_{X\_T(\omega)=0} = \mathbf{e}^{-r(T-t)}\, \frac{p\_1 \exp\left[\sigma\left(\gamma\_T^{-1/2} \left(W\_{\gamma\_t} - \gamma\_{tT} W\_{\gamma\_T}\right) - \frac{1}{2}\sigma\gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}\right]}{p\_0 + p\_1 \exp\left[\sigma\left(\gamma\_T^{-1/2} \left(W\_{\gamma\_t} - \gamma\_{tT} W\_{\gamma\_T}\right) - \frac{1}{2}\sigma\gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}\right]},\tag{89}$$

whereas

$$S\_t\Big|\_{X\_T(\omega)=1} = \mathbf{e}^{-r(T-t)}\, \frac{p\_1 \exp\left[\sigma\left(\gamma\_T^{-1/2} \left(W\_{\gamma\_t} - \gamma\_{tT} W\_{\gamma\_T}\right) + \frac{1}{2}\sigma\gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}\right]}{p\_0 + p\_1 \exp\left[\sigma\left(\gamma\_T^{-1/2} \left(W\_{\gamma\_t} - \gamma\_{tT} W\_{\gamma\_T}\right) + \frac{1}{2}\sigma\gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}\right]},\tag{90}$$

and the claimed limiting behaviour of the asset price follows by inspection. In Figures 1 and 2 we plot sample paths for the information processes and price processes of credit-risky bonds for various values of the information flow-rate parameter. One observes that for *σ* = 1 the information processes diverge, thus distinguishing those bonds that default from those that do not, only towards the end of the relevant time frame; whereas for higher values of *σ* the divergence occurs progressively earlier, and one sees a corresponding effect in the price processes. Thus, when the information flow rate is higher, the final outcome of the bond payment is anticipated earlier, and with greater certainty. Similar conclusions hold for the interpretation of Figures 3 and 4.
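For reference, the bond price (88) is easy to evaluate at any time along a simulated path. The sketch below (the function name is ours) accepts the observed information-process value and gamma-bridge value at that time:

```python
import math

# Proposition 1, Equation (88): price of the unit-principal credit-risky
# bond with no recovery, given xi = xi_t and gb = gamma_{tT}.
def bond_price(xi, gb, p0, p1, sigma, r, T, t):
    expo = (sigma * xi - 0.5 * sigma**2 * gb) / (1.0 - gb)
    w1 = p1 * math.exp(expo)
    return math.exp(-r * (T - t)) * w1 / (p0 + w1)
```

As the exponent grows large near *T* on a no-default path, the price tends to the principal, in line with the limiting behaviour noted above.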

**Figure 1.** Credit-risky bonds with no recovery. The panels on the left show simulations of trajectories of the variance gamma information process, and the panels on the right show simulations of the corresponding price trajectories. Prices are quoted as percentages of the principal, and the interest rate is taken to be zero. From top to bottom, we show trajectories having *σ* = 1, 2, respectively. We take *p*0 = 0.4 for the probability of default and *p*1 = 0.6 for the probability of no default. The value of *m* is 100 in all cases. Fifteen simulated trajectories are shown in each panel.

**Figure 2.** Credit-risky bonds with no recovery. From top to bottom we show trajectories having *σ* = 3, 4, respectively. The other parameters are the same as in Figure 1.

**Figure 3.** Log-normal payoff. The panels on the left show simulations of the trajectories of the information process, whereas the panels on the right show simulations of the corresponding price process trajectories. From the top to bottom, we show trajectories having *σ* = 1, 2, respectively. The value for *m* is 100. We take *μ* = 0, *ν* = 1, and show 15 simulated trajectories in each panel.

**Figure 4.** Log-normal payoff. From the top row to the bottom, we show trajectories having *σ* = 3, 4, respectively. The other parameters are the same as those in Figure 3.

**Example 2: Random recovery.** As a somewhat more sophisticated version of the previous example, we consider the case of a defaultable bond with random recovery. We shall work out the case where *h*(*x*) = *x* and the market factor *XT* takes the value *c* with probability *p*1 and *XT* is uniformly distributed over the interval [*a*, *b*] with probability *p*0, where 0 ≤ *a* < *b* ≤ *c*. Thus, for the probability measure of *XT* we have

$$F\_{X\_T}(\mathrm{d}x) = \frac{p\_0}{b-a}\, \mathbb{1}\_{\{a \le x < b\}}\, \mathrm{d}x + p\_1\, \delta\_c(\mathrm{d}x)\,, \tag{91}$$

and for the distribution function we obtain

$$F\_{X\_T}(x) = p\_0\, \frac{x-a}{b-a}\, \mathbb{1}\_{\{a \le x < b\}} + p\_0\, \mathbb{1}\_{\{x \ge b\}} + p\_1\, \mathbb{1}\_{\{x \ge c\}}\,. \tag{92}$$

The bond price at time *t* is then obtained by working out the expression

$$S\_t = \mathbf{e}^{-r\left(T-t\right)}\, \frac{\dfrac{p\_0}{b-a} \displaystyle\int\_a^b x\, \mathbf{e}^{\left(\sigma \xi\_t x - \frac{1}{2}\sigma^2 x^2 \gamma\_{tT}\right)\left(1-\gamma\_{tT}\right)^{-1}}\, \mathrm{d}x + p\_1\, c\, \mathbf{e}^{\left(\sigma \xi\_t c - \frac{1}{2}\sigma^2 c^2 \gamma\_{tT}\right)\left(1-\gamma\_{tT}\right)^{-1}}}{\dfrac{p\_0}{b-a} \displaystyle\int\_a^b \mathbf{e}^{\left(\sigma \xi\_t x - \frac{1}{2}\sigma^2 x^2 \gamma\_{tT}\right)\left(1-\gamma\_{tT}\right)^{-1}}\, \mathrm{d}x + p\_1\, \mathbf{e}^{\left(\sigma \xi\_t c - \frac{1}{2}\sigma^2 c^2 \gamma\_{tT}\right)\left(1-\gamma\_{tT}\right)^{-1}}},\tag{93}$$

and it should be evident that one can obtain a closed-form solution. To work this out in detail, it will be convenient to have an expression for the incomplete first moment of a normally-distributed random variable with mean *μ* and variance *ν*2. Thus we set

$$N\_1(x, \mu, \nu) = \frac{1}{\sqrt{2\pi\nu^2}} \int\_{-\infty}^{x} y\, \exp\left(-\frac{1}{2} \frac{(y-\mu)^2}{\nu^2}\right) \mathrm{d}y\,, \tag{94}$$

and for convenience we set

$$N\_0(x, \mu, \nu) = \frac{1}{\sqrt{2\pi\nu^2}} \int\_{-\infty}^{x} \exp\left(-\frac{1}{2}\frac{(y-\mu)^2}{\nu^2}\right) \mathrm{d}y\,. \tag{95}$$

Then we have

$$N\_1(\mathbf{x}, \boldsymbol{\mu}, \nu) = \mu \, N\left(\frac{\mathbf{x} - \boldsymbol{\mu}}{\nu}\right) - \frac{\nu}{\sqrt{2\pi}} \exp\left(-\frac{1}{2} \frac{(\mathbf{x} - \boldsymbol{\mu})^2}{\nu^2}\right),\tag{96}$$

and of course

$$N\_0(\mathbf{x}, \mu, \nu) = N\left(\frac{\mathbf{x} - \mu}{\nu}\right),\tag{97}$$

where *N*(·) is defined by (34). We also set

$$f(\mathbf{x}, \boldsymbol{\mu}, \nu) = \frac{1}{\sqrt{2\pi\nu^2}} \exp\left(-\frac{1}{2}\frac{(\mathbf{x} - \boldsymbol{\mu})^2}{\nu^2}\right). \tag{98}$$

Finally, we obtain the following:

**Proposition 2.** *The variance-gamma information-based price of a defaultable discount bond with a uniformly-distributed fraction of the principal paid on recovery is given by*

$$S\_t = \mathbf{e}^{-r\left(T-t\right)}\, \frac{\dfrac{p\_0}{b-a}\left(N\_1(b, \mu, \nu) - N\_1(a, \mu, \nu)\right) + p\_1\, c\, f(c, \mu, \nu)}{\dfrac{p\_0}{b-a}\left(N\_0(b, \mu, \nu) - N\_0(a, \mu, \nu)\right) + p\_1\, f(c, \mu, \nu)},\tag{99}$$

*where*

$$
\mu = \frac{1}{\sigma} \frac{\xi\_t}{\gamma\_{tT}}\,, \quad \nu = \frac{1}{\sigma} \sqrt{\frac{1 - \gamma\_{tT}}{\gamma\_{tT}}}\,.\tag{100}
$$
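A hedged numerical rendering of Proposition 2 follows, with *N*0, *N*1, and *f* as in (96)–(98) and *μ*, *ν* as in (100); we weight the uniform branch by the density *p*0/(*b* − *a*), so that the a priori law has total mass one (the function names are ours):

```python
import math

def N(z):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def N0(x, mu, nu):
    # Equation (97)
    return N((x - mu) / nu)

def N1(x, mu, nu):
    # Equation (96): incomplete first moment of a Normal(mu, nu^2) variable
    z = (x - mu) / nu
    return mu * N(z) - nu * math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def f(x, mu, nu):
    # Equation (98): normal density
    return math.exp(-0.5 * ((x - mu) / nu) ** 2) / math.sqrt(2.0 * math.pi * nu**2)

def recovery_bond_price(xi_t, gb, t, T, r, sigma, a, b, c, p0, p1):
    # Proposition 2, with mu and nu as in Equation (100); gb = gamma_{tT}.
    mu = xi_t / (sigma * gb)
    nu = math.sqrt((1.0 - gb) / gb) / sigma
    u0 = p0 / (b - a)
    num = u0 * (N1(b, mu, nu) - N1(a, mu, nu)) + p1 * c * f(c, mu, nu)
    den = u0 * (N0(b, mu, nu) - N0(a, mu, nu)) + p1 * f(c, mu, nu)
    return math.exp(-r * (T - t)) * num / den
```

As a check, setting *p*0 = 0 collapses the price to the discounted value of *c*, and in general the price lies between the discounted recovery and principal values.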

**Example 3: Lognormal payoff.** Next we consider the case when the payoff of an asset at time *T* is log-normally distributed. This will hold if *h*(*x*) = e*x* and *XT* ∼ Normal(*μ*, *ν*<sup>2</sup>). It will be convenient to look at the slightly more general payoff obtained by setting *h*(*x*) = e*qx* with *q* ∈ R. If we recall the identity

$$\frac{1}{\sqrt{2\pi}}\int\_{-\infty}^{\infty} \exp\left(-\frac{1}{2}Ax^2 + Bx\right) \mathrm{d}x = \frac{1}{\sqrt{A}} \exp\left(\frac{1}{2}\frac{B^2}{A}\right),\tag{101}$$

which holds for *A* > 0 and *B* ∈ R, a calculation gives

$$\begin{split} I\_t(q) :=& \int\_{-\infty}^{\infty} \mathbf{e}^{q x}\, \frac{1}{\sqrt{2\pi}\,\nu}\, \exp\left[-\frac{1}{2} \frac{(x-\mu)^2}{\nu^2} + \frac{1}{1-\gamma\_{tT}} \left(\sigma\, \xi\_t\, x - \frac{1}{2}\sigma^2\, x^2\, \gamma\_{tT}\right)\right] \mathrm{d}x \\ =& \frac{1}{\nu\sqrt{A\_t}}\, \exp\left(\frac{1}{2}\, \frac{B\_t^2}{A\_t} - C\right), \end{split} \tag{102}$$

where

$$A\_t = \frac{1 - \gamma\_{tT} + \nu^2\sigma^2\gamma\_{tT}}{\nu^2(1 - \gamma\_{tT})}\,, \quad B\_t = q + \frac{\mu}{\nu^2} + \frac{\sigma\, \xi\_t}{1 - \gamma\_{tT}}\,, \quad C = \frac{1}{2}\frac{\mu^2}{\nu^2}\,.\tag{103}$$

For *q* = 1, the price is thus given in accordance with Theorem 4 by

$$S\_t = \mathbf{e}^{-r(T-t)} \frac{I\_t(1)}{I\_t(0)}.\tag{104}$$

Then clearly we have

$$S\_0 = \mathbf{e}^{-rT} \exp\left[\mu + \frac{1}{2}\nu^2\right],\tag{105}$$

and a calculation leads to the following:

**Proposition 3.** *The variance-gamma information-based price of a financial asset with a log-normally distributed payoff such that* log (*ST*− ) ∼ Normal(*μ*, *ν*<sup>2</sup>) *is given for t* ∈ (0, *T*) *by*

$$S\_t = \mathbf{e}^{rt}\, S\_0 \exp\left[\frac{\nu^2\sigma^2\gamma\_{tT}\left(1-\gamma\_{tT}\right)^{-1}}{1+\nu^2\sigma^2\gamma\_{tT}\left(1-\gamma\_{tT}\right)^{-1}}\left(\frac{1}{\sigma\,\gamma\_{tT}}\,\xi\_t - \mu - \frac{1}{2}\,\nu^2\right)\right].\tag{106}$$
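The closed form (102)–(104) can be sketched in a few lines (argument names are ours, with `gb` standing for *γtT*); at *t* = 0, where *ξ*0 = 0 and *γ*0*T* = 0, the result collapses to (105), which gives a convenient consistency check:

```python
import math

def I_t(q, xi_t, gb, mu, nu, sigma):
    # Equations (102)-(103)
    A = (1.0 - gb + nu**2 * sigma**2 * gb) / (nu**2 * (1.0 - gb))
    B = q + mu / nu**2 + sigma * xi_t / (1.0 - gb)
    C = 0.5 * mu**2 / nu**2
    return math.exp(0.5 * B**2 / A - C) / (nu * math.sqrt(A))

def lognormal_price(xi_t, gb, t, T, r, mu, nu, sigma):
    # Equation (104): S_t = e^{-r(T-t)} I_t(1) / I_t(0)
    return math.exp(-r * (T - t)) * I_t(1.0, xi_t, gb, mu, nu, sigma) / I_t(0.0, xi_t, gb, mu, nu, sigma)
```

At *t* = 0 with *r* = 0 the function returns exp(*μ* + ½*ν*²), in agreement with (105).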

More generally, one can consider the case of a so-called power-payoff derivative for which

$$H\_T = (\mathcal{S}\_{T^-})^q \, , \tag{107}$$

where *ST*− = lim*t*→*T* *St* is the payoff of the asset priced above in Proposition 3. See Bouzianis and Hughston (2019) for aspects of the theory of power-payoff derivatives. In the present case if we write

$$C\_t = \mathbf{e}^{-r\left(T-t\right)}\, \mathbb{E}\_t\left[\left(S\_{T-}\right)^q\right] \tag{108}$$

for the value of the power-payoff derivative at time *t*, we find that

$$C\_t = \mathbf{e}^{rt}\, C\_0 \exp\left[\frac{\nu^2\sigma^2\gamma\_{tT}\left(1-\gamma\_{tT}\right)^{-1}}{1+\nu^2\sigma^2\gamma\_{tT}\left(1-\gamma\_{tT}\right)^{-1}}\left(\frac{q}{\sigma\,\gamma\_{tT}}\,\xi\_t - q\,\mu - \frac{1}{2}\,q^2\,\nu^2\right)\right],\tag{109}$$

where

$$C\_0 = \mathbf{e}^{-rT} \exp\left[q\,\mu + \frac{1}{2}\,q^2\,\nu^2\right]. \tag{110}$$
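As a sketch, the power-payoff value (109)–(110) can be evaluated directly (names are ours, with `gb` standing for *γtT*); for small *γtT* the value approaches *C*0, which provides a simple consistency check:

```python
import math

def power_payoff_value(q, xi_t, gb, t, T, r, mu, nu, sigma):
    C0 = math.exp(-r * T) * math.exp(q * mu + 0.5 * q**2 * nu**2)   # Equation (110)
    k = nu**2 * sigma**2 * gb / (1.0 - gb)
    bracket = q * xi_t / (sigma * gb) - q * mu - 0.5 * q**2 * nu**2
    return math.exp(r * t) * C0 * math.exp(k / (1.0 + k) * bracket)  # Equation (109)
```

Setting *q* = 1 recovers the lognormal asset price of Proposition 3.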

**Example 4: Exponentially distributed payoff.** Next we consider the case where the payoff is exponentially distributed. We let *XT* ∼ exp(*λ*), so P[*XT* ∈ d*x*] = *λ* e<sup>−*λx*</sup> d*x*, and take *h*(*x*) = *x*. A calculation shows that

$$\int\_0^\infty x \exp\left[-\lambda\, x + \left(\sigma\, \xi\_t\, x - \frac{1}{2}\,\sigma^2\, x^2\, \gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}\right] \mathrm{d}x = \frac{\mu - N\_1(0, \mu, \nu)}{f(0, \mu, \nu)},\tag{111}$$

where we set

$$
\mu = \frac{1}{\sigma}\frac{\xi\_t}{\gamma\_{tT}} - \frac{\lambda}{\sigma^2}\frac{1 - \gamma\_{tT}}{\gamma\_{tT}}\,, \quad \nu = \frac{1}{\sigma}\sqrt{\frac{1 - \gamma\_{tT}}{\gamma\_{tT}}}\,, \tag{112}
$$

and

$$\int\_0^\infty \exp\left[-\lambda\, x + \left(\sigma\, \xi\_t\, x - \frac{1}{2}\sigma^2\, x^2\, \gamma\_{tT}\right) \left(1 - \gamma\_{tT}\right)^{-1}\right] \mathrm{d}x = \frac{1 - N\_0(0, \mu, \nu)}{f(0, \mu, \nu)}.\tag{113}$$

As a consequence we obtain:

**Proposition 4.** *The variance-gamma information-based price of a financial asset with an exponentially distributed payoff is given by*

$$S\_t = \mathbf{e}^{-r\left(T-t\right)}\, \frac{\mu - N\_1(0, \mu, \nu)}{1 - N\_0(0, \mu, \nu)}\,,\tag{114}$$

*where N*0 *and N*1 *are defined as in Example 2.*
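Proposition 4 can likewise be rendered numerically. The sketch below (names are ours) uses *μ*, *ν* from (112) and includes the discount factor, consistent with the other propositions:

```python
import math

def N(z):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def N0(x, mu, nu):
    # Equation (97)
    return N((x - mu) / nu)

def N1(x, mu, nu):
    # Equation (96): incomplete first moment
    z = (x - mu) / nu
    return mu * N(z) - nu * math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def exp_payoff_price(xi_t, gb, t, T, r, sigma, lam):
    # Proposition 4, with mu and nu as in Equation (112); gb = gamma_{tT}.
    mu = xi_t / (sigma * gb) - lam * (1.0 - gb) / (sigma**2 * gb)
    nu = math.sqrt((1.0 - gb) / gb) / sigma
    return math.exp(-r * (T - t)) * (mu - N1(0.0, mu, nu)) / (1.0 - N0(0.0, mu, nu))
```

One can verify the result against direct numerical quadrature of the integrals (111) and (113), since the factor *λ* from the a priori density cancels in the ratio.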

## **7. Conclusions**

In the examples considered in the previous section, we have looked at the situation where there is a single market factor *XT*, which is revealed at time *T*, and where the single cash flow occurring at *T* depends on the outcome for *XT*. The value of a security *St* with that cash flow is determined by the information available at time *t*. Given the Markov property of the extended information process {*ξt*, *γtT*}, it follows that there exists a function of three variables *F* : R × [0, 1] × R<sup>+</sup> → R<sup>+</sup> such that *St* = *F*(*ξt*, *γtT*, *t*), and we have worked out this expression explicitly for a number of different cases, given in Examples 1–4. The general valuation formula is presented in Theorem 4.

It should be evident that once we have specified the functional dependence of the resulting asset prices on the extended information process, then we can back out values of the information process and the gamma bridge from the price data. So in that sense the process {*ξt*, *γtT*} is "visible" in the market, and can be inferred directly, at any time, from a suitable collection of prices. This means, in particular, that given the prices of a certain minimal collection of assets in the market, we can then work out the values of other assets in the market, such as derivatives. In the special case we have just been discussing, there is only a single market factor; but one can see at once that the ideas involved readily extend to the situation where there are multiple market factors and multiple cash flows, as one expects for general securities analysis, following the principles laid out in Brody et al. (2007, 2008a), where the merits and limitations of modelling in an information-based framework are discussed in some detail.

The potential advantages of working with the variance-gamma information process, rather than the highly tractable but more limited Brownian information process, should be evident: these include the additional parametric freedom in the model, with more flexibility in the distributions of returns and, equally important, the scope for jumps. It comes as a pleasant surprise that the resulting formulae are to a large extent analytically explicit, but this is on account of the remarkable properties of the normalized variance-gamma bridge process that we have exploited in our constructions. Keep in mind that in the limit as the parameter *m* goes to infinity, our model reduces to the Brownian bridge information-based model considered in Brody et al. (2007, 2008a), which in turn contains the standard geometric Brownian motion model (and hence the Black-Scholes option pricing model) as a special case. In the case of a single market factor *XT*, the distribution of the random variable *XT* can be inferred by observing the current prices of derivatives for which the payoff is of the form

$$H\_T = \mathbf{e}^{rT}\, \mathbb{1}\_{\{X\_T \le K\}} \tag{115}$$

for *K* ∈ R. The information flow-rate parameter *σ* and the shape parameter *m* can then be inferred from option prices. When multiple factors are involved, similar calibration methodologies are applicable.

**Author Contributions:** The authors have made equal contributions to the work. Both have read and agreed to the published version of the manuscript.

**Funding:** This research has received no external funding.

**Acknowledgments:** The authors wish to thank G. Bouzianis and J. M. Pedraza-Ramírez for useful discussions. LSB acknowledges support from (a) Oriel College, Oxford, (b) the Mathematical Institute, Oxford, (c) Consejo Nacional de Ciencia y Tecnología (CONACyT), Ciudad de México, and (d) LMAX Exchange, London. We are grateful to the anonymous referees for a number of helpful comments and suggestions.

**Conflicts of Interest:** The authors declare no conflict of interest.
