*2.3. Estimator Convergence*

An advantage of the stochastic tree method over some other Monte Carlo valuation methods is that its estimators are consistent for the true option value. This property also holds for the FOST estimators. In this section we state two theorems, one for the consistency of the high estimator and the other for the consistency of the low estimator. Here convergence is in probability to the true option value and, as above, the argument $b$ that appears with the estimators refers to an arbitrary branching factor of size $b$, with convergence shown as $b \to \infty$. Before stating the result, define $\bar{V}_0\left(b, \mathbf{S}_0, \mathcal{N}_0, \mathcal{U}_0\right)$ as the average of $R$ independent replications of $\hat{V}_0\left(b, \mathbf{S}_0, \mathcal{N}_0, \mathcal{U}_0\right)$.
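In code, the averaged estimator is simply the sample mean of $R$ independent forest valuations. A minimal sketch, in which `forest_high_estimate` is a hypothetical stand-in for one replication of $\hat{V}_0(b, \mathbf{S}_0, \mathcal{N}_0, \mathcal{U}_0)$ (a real implementation would build and roll back a stochastic forest with branching factor $b$; here we only mimic an estimator whose noise shrinks in $b$):

```python
import random
import statistics

random.seed(42)

def forest_high_estimate(b):
    # Hypothetical stand-in for one replication of V_hat_0(b, S0, N0, U0).
    # Here: a noisy value around 1.0 whose noise shrinks as b grows.
    return 1.0 + random.gauss(0.0, 1.0 / b ** 0.5)

def v_bar(b, R):
    # V_bar_0(b, ...): the average of R independent replications
    return statistics.mean(forest_high_estimate(b) for _ in range(R))

print(v_bar(b=100, R=50))
```

The convergence results below concern this averaged quantity, for any fixed number of replications $R$.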

**Theorem 4.** *(High estimator convergence) Suppose* $\mathbb{E}\left[\left|h_i\left(\mathbf{S}_i, \mathcal{N}_i, \mathcal{U}_i\right)\right|^p\right] < \infty$ *for all* $t_i$ *and some* $p > 1$*. Then* $\bar{V}_0\left(b, \mathbf{S}_0, \mathcal{N}_0, \mathcal{U}_0\right)$ *converges to* $B_0\left(\mathbf{S}_0, \mathcal{N}_0, \mathcal{U}_0\right)$ *in* $p'$*-norm for any* $0 < p' < p$ *as* $b \to \infty$*. This holds for an arbitrary number of repeated valuations of the forest,* $R$*. In particular,* $\bar{V}_0\left(b, \mathbf{S}_0, \mathcal{N}_0, \mathcal{U}_0\right)$ *converges to* $B_0\left(\mathbf{S}_0, \mathcal{N}_0, \mathcal{U}_0\right)$ *in probability and is thus a consistent estimator of the option value.*

This result implies that

$$\mathbb{E}\left[\hat{V}_0\left(b,\mathbf{S}_0,\mathcal{N}_0,\mathcal{U}_0\right)\right] \to B_0\left(\mathbf{S}_0,\mathcal{N}_0,\mathcal{U}_0\right) \tag{16}$$

as $b \to \infty$. Hence the estimator is asymptotically unbiased.

**Theorem 5.** *(Low estimator convergence) Suppose that,*

$$\begin{split}
\mathbb{P}\Big[ &h_i\left(\mathbf{S}_i,\mathcal{N}_i,\mathcal{U}_i,\boldsymbol{u}^{1}\right) + H_i\left(\mathbf{S}_i,\mathcal{N}_i - I_{\{\boldsymbol{u}^{1}\neq 0\}},\mathcal{U}_i + \boldsymbol{u}^{1}\right) \\
\neq\, &h_i\left(\mathbf{S}_i,\mathcal{N}_i,\mathcal{U}_i,\boldsymbol{u}^{2}\right) + H_i\left(\mathbf{S}_i,\mathcal{N}_i - I_{\{\boldsymbol{u}^{2}\neq 0\}},\mathcal{U}_i + \boldsymbol{u}^{2}\right)\Big] = 1,
\end{split}$$

*for* $\boldsymbol{u}^{1}, \boldsymbol{u}^{2} \in \mathcal{U}_i$*,* $\boldsymbol{u}^{1} \neq \boldsymbol{u}^{2}$*, and all* $i$*. Then Theorem 4 also holds for the low estimator.*

The additional condition imposed in Theorem 5 is analogous to that used in Theorem 3 of Broadie and Glasserman (1997). It says that, with probability one, the optimal exercise policy is never indifferent between two choices of volume to exercise (including $\boldsymbol{u} = 0$). As in Broadie and Glasserman (1997), imposing this condition simplifies the analysis of the estimator. Theorems 4 and 5 are proven in Appendix B.
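To make the roles of the two estimators concrete, the following toy one-period illustration (not the FOST algorithm itself; all payoffs and parameters are invented) shows the behaviour the theorems describe. The true value is $B_0 = \max(h_0, \mathbb{E}[h(S_1)])$; a $b$-branch high estimator applies the max to a sample mean and is biased high for finite $b$ by Jensen's inequality, while a Broadie–Glasserman-style low estimator decides exercise from $b-1$ branches and values that decision with the remaining branch, giving a low bias. Both biases vanish as $b \to \infty$:

```python
import math
import random
import statistics

random.seed(7)

# Toy setup (illustrative only): immediate payoff h0; continuation payoff
# h(S1) = max(S1 - K, 0) with S1 lognormal. True value B0 = max(h0, E[h(S1)]).
K, h0 = 1.0, 0.05
SIGMA = 0.2

def terminal_payoff():
    return max(math.exp(random.gauss(0.0, SIGMA)) - K, 0.0)

def high_estimator(b):
    # Max of immediate exercise and the b-branch continuation sample mean:
    # biased high for finite b (Jensen), consistent as b grows.
    cont = sum(terminal_payoff() for _ in range(b)) / b
    return max(h0, cont)

def low_estimator(b):
    # For each branch j, decide exercise vs. continue from the other b-1
    # branches, then value that decision with branch j alone; average over j.
    x = [terminal_payoff() for _ in range(b)]
    total = sum(x)
    vals = []
    for j in range(b):
        decision_mean = (total - x[j]) / (b - 1)  # independent of x[j]
        vals.append(h0 if h0 >= decision_mean else x[j])
    return sum(vals) / b

def v_bar(estimator, b, R=2000):
    # The averaged estimator V_bar_0(b, ...) of the text.
    return statistics.mean(estimator(b) for _ in range(R))

for b in (2, 8, 32, 128):
    print(b, v_bar(low_estimator, b), v_bar(high_estimator, b))
```

Running the loop shows the low and high averages bracketing the true value and the bracket tightening as $b$ grows, mirroring the consistency statements of Theorems 4 and 5.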
