**4. Base Case Scenario**

We consider an example where a DC plan member wants to generate retirement income (in real terms) equal to a specified fraction *R* of final salary. Studies have shown that earnings for a typical employee increase rapidly until the age of 35, increase slowly thereafter until a few years before retirement, and then decrease as fewer hours are worked in the transition to retirement (Cocco et al., 2005; Ruppert and Zanella, 2015). Given an initial real annual salary $I_0$, we assume that real annual income in year *t* before retirement is given by $I_t = e^{\mu_I t} I_0$. In other words, salary is expected to grow in real terms at a constant annual rate $\mu_I$. Upon retirement at time *T*, real annual salary is $e^{\mu_I T} I_0$. Our assumptions would be relevant to a 35-year-old employee with stable employment who intends to work full-time until the age of 65.
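This constant-growth assumption is straightforward to evaluate. The sketch below is illustrative only; the parameter values ($I_0$ = \$50,000, $\mu_I$ = 1.27%, $T$ = 30) are the base case values used later in this section.

```python
import math

def real_salary(I0: float, mu_I: float, t: float) -> float:
    """Real annual salary in year t: I_t = exp(mu_I * t) * I0."""
    return math.exp(mu_I * t) * I0

# Base case values: $50,000 initial salary, 1.27% real growth, retirement at T = 30
I0, mu_I, T = 50_000.0, 0.0127, 30
final_salary = real_salary(I0, mu_I, T)  # roughly $73,200 in real terms
```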

To determine the amount of real wealth required to fund this replacement income during retirement, we use the well-known 4% rule of Bengen (1994). Bengen examined historical data to determine the maximum real withdrawal rate that a retiree could safely use without exhausting her assets over a 35-year period. Bengen assumed that accumulated pension wealth was invested in a portfolio with half in stocks and half in intermediate-term US Treasury securities, and concluded that a 4% withdrawal rate (escalated by the rate of inflation) was quite safe.

Dang et al. (2017) recently revisited this rule. The problem was posed somewhat differently: the idea was to determine a real withdrawal rate such that half of the real wealth at the start of retirement remained after 20 years, with high probability. Rather than using fixed portfolio weights as in Bengen (1994), the portfolio was invested in stocks and bonds according to the QS optimal strategy described above. Dang et al. concluded that the 4% rule still held up well under the revised assumptions.

An obvious alternative way to generate retirement cash flows is to buy a lifetime annuity. However, in practice most retirees are not willing to do this, for a variety of reasons (MacDonald et al., 2013). In the current environment of low real interest rates, annuities provide rather low income, so the reluctance of retirees to use them is particularly unsurprising.

Consequently, we pose the pension accumulation problem as follows. The desired expected accumulated real wealth at retirement *Wd* is

$$E[W\_T] = W\_d = \frac{R\,e^{\mu\_I T} I\_0}{w\_r},\tag{11}$$

where *R* is the replacement ratio, $I_0$ is the initial salary, $\mu_I$ is the real salary escalation rate, *T* is the end of the accumulation period, and $w_r$ is the safe withdrawal rate. Recall that above we denoted the set of action times prior to *T* as $\mathcal{T}_1$. We assume that cash is contributed to the portfolio at each time $t_i \in \mathcal{T}_1$. The initial cash contribution at $t_0$ is $F_c I_0$, where $F_c$ is the real contribution fraction. This contribution fraction represents the total amount contributed by both the employee and the employer to all retirement savings accounts, but excludes any government-sponsored universal schemes (e.g., CPP in Canada, Social Security in the US). We also assume that these accounts are tax-advantaged, i.e., no tax is paid during the accumulation phase. At subsequent action times $t_i \in \mathcal{T}_1$ (i.e., after the initial contribution), the amount contributed is assumed to be $F_c I_0 e^{\mu_I t_i}$.

Table 2 summarizes the data for this base case scenario. As indicated in the table, we assume a replacement fraction of 50%, an initial salary of \$50,000, and a real escalation rate of 1.27%.<sup>13</sup> Combined contributions by the employee and employer to the retirement savings portfolio are 20% of real salary each year. For simplicity, these contributions are assumed to be made at the start of each year during a 30 year accumulation period. The safe withdrawal rate is assumed to be 4%, in line with Bengen (1994) and Dang et al. (2017). Applying these parameters to Equation (11), we find that the expected (real) desired terminal wealth *Wd* is

$$E[W\_T] = W\_d = \frac{0.50}{0.04} \times \$50\,000\,e^{0.0127 \times 30} \simeq \$915\,000. \tag{12}$$
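As a quick check of this arithmetic, Equation (11) can be evaluated directly with the base case parameters from Table 2:

```python
import math

# Base case parameters from Table 2
R    = 0.50      # replacement ratio
w_r  = 0.04      # safe withdrawal rate
I0   = 50_000.0  # initial real annual salary
mu_I = 0.0127    # real salary escalation rate
T    = 30        # accumulation period in years

# Equation (11): desired expected real terminal wealth, about $915,000
W_d = R * math.exp(mu_I * T) * I0 / w_r
```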

Note that the specification given in Equation (11) implies that decreasing the withdrawal rate *wr* has the same effect as increasing the replacement fraction *R* or the salary escalation rate *μ<sup>I</sup>*.


**Table 2.** Data for base case scenario. Cash is injected into the portfolio at times *t* = 0, 1, ... , 29. Market parameters for the equity and bond indexes are provided in Table 1.


<sup>13</sup> As noted by Bloom et al. (2014), this rate has been used by the US Congressional Budget Office in its long-term projections.

As indicated in Table 2, the retirement savings portfolio is invested in the value-weighted index and the 3-month T-bill index. Relevant parameters for these indexes are given in Table 1. We consider three alternative investment strategies:

- a constant proportion strategy;
- a linear glide path strategy;
- the QS optimal strategy.

Each of these strategies is rebalanced annually. We start with the constant proportion strategy and determine the equity weight such that $E[W_T]$ = \$915,000, assuming the market parameters given in Table 1 for the value-weighted equity index and the 3-month T-bill index. This weight turns out to be 0.5788.<sup>14</sup> We then turn to the linear glide path strategy. In this case, we specify $p_{\max} = 1.0$, and then determine that the value of $p_{\min} = 0.3066$ is needed to have $E[W_T]$ = \$915,000. We proceed similarly for the QS optimal strategy given by the solution to problem (8). Imposing the leverage constraint $L_{\max} = 1.0$, we find by Newton iteration that the value of $W^*$ which results in $E[W_T]$ = \$915,000 is \$1,106,200. We compute and store the optimal control associated with this value of $W^*$ in order to test this strategy. Note that:
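For concreteness, the glide path allocation can be sketched as follows. The endpoint values are those calibrated above; the linear interpolation in time between $p_{\max}$ and $p_{\min}$ is our assumed form of the glide path, and the function name is ours.

```python
def glide_path_weight(t: float, T: float = 30.0,
                      p_max: float = 1.0, p_min: float = 0.3066) -> float:
    """Equity weight at time t for a linear glide path: starts at p_max at
    t = 0 and declines linearly to p_min at the retirement date t = T."""
    return p_max + (p_min - p_max) * (t / T)

# By contrast, the constant proportion strategy uses the fixed equity
# weight 0.5788 at every annual rebalancing date.
```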


We compare these three strategies using two different types of simulation tests. As an initial test, we assume that the stochastic environment described in Table 1 holds exactly. In other words, the level of the equity market index follows a double exponential jump diffusion with the parameters given in Table 1 and the bond market index is non-stochastic, with a constant risk-free interest rate as indicated in that table. We refer to this as a *synthetic market*. In such a market, we draw 160,000 Monte Carlo simulated paths and compute performance statistics. Note that these comparisons are based on a simulated environment that corresponds exactly to the environment used to formulate the strategies. As a second and more stringent test, we draw simulated paths by bootstrap resampling of the historical return data and compute the same performance statistics. We refer to this type of backtest as a *historical market*. This is a stricter test since it does not assume that the equity market follows a jump diffusion process or that the risk-free interest rate is constant over time, although those assumptions are still used to generate the strategies that are followed. A single resampled path is constructed by pasting together enough blocks of monthly historical return data to cover the investment horizon of 30 years. The sampling is done in blocks to account for possible serial dependence. The blocks are selected simultaneously from both the historical stock and bond market indexes, to incorporate possible correlations. The blocks are chosen randomly, with replacement. To avoid end issues, the historical data is wrapped around.<sup>15</sup> To reduce the impact of a fixed blocksize and mitigate edge effects at each block end, we use the stationary block bootstrap (Patton et al., 2009;

<sup>14</sup> This is a bit more aggressive in terms of taking on equity market risk than the strategy considered by Bengen (1994) which involved equal weights between the equity and bond markets. Keep in mind that here we are investing in a 3-month T-bill index, whereas Bengen used intermediate maturity Treasury bonds which offer somewhat higher average returns.

<sup>15</sup> In other words, if the size of a block extends past the end of the sample in 2015:12, the return data resumes at the start of the sample in 1926:1 for the duration of the block.

Politis and White, 2004), where the blocksize is sampled randomly from a geometric distribution with an expected value of $\hat{b}$. In principle, the optimal expected blocksize can be estimated using an algorithm provided by Patton et al. (2009). As discussed in Forsyth and Vetzal (2019), this approach is not easily applied in our context, because the estimated optimal blocksizes for the different market indexes we consider (i.e., the value-weighted and equal-weighted US equity market indexes and the 3-month T-bill and 10-year T-bond indexes) vary considerably, ranging from about two months for the value-weighted stock index to more than four years for the T-bill index. Recall that we sample simultaneously from both a stock index and a bond index, so we must use the same blocksize for both indexes, and our strategies involve weighted combinations of two of these indexes that can change deterministically (glide path) and also randomly (QS optimal) along a simulation path. As a result, we report results for a range of expected blocksizes $\hat{b}$, acknowledging that the choice of $\hat{b}$ for our application is open to debate.
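The resampling scheme just described can be sketched as follows. This is an illustrative implementation, not the authors' code; function and variable names are ours.

```python
import random

def stationary_bootstrap_path(stock, bond, n_months, expected_block, rng):
    """Resample paired (stock, bond) monthly returns with the stationary
    block bootstrap: block lengths are geometrically distributed with mean
    expected_block, blocks are drawn with replacement and simultaneously
    from both series, and indices wrap around the end of the sample."""
    n = len(stock)
    p = 1.0 / expected_block            # geometric block-termination probability
    path, i = [], rng.randrange(n)      # random starting month
    while len(path) < n_months:
        path.append((stock[i], bond[i]))
        if rng.random() < p:            # end current block, jump to a new start
            i = rng.randrange(n)
        else:                           # continue block, wrapping at sample end
            i = (i + 1) % n
    return path
```

With monthly data from 1926:1 to 2015:12 and an expected blocksize of $\hat{b}$ = 2 years, a 30-year path would use `n_months = 360` and `expected_block = 24`.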

Table 3 gives the results for the base case input data from Table 2. Consider first the synthetic market. By construction, all three strategies have the same expected value of real terminal wealth at the retirement date of 915 (units in the table are thousands of dollars). The constant proportion and glide path strategies are effectively indistinguishable, having the same standard deviation of terminal wealth and the same probability of ending up with wealth below 700 or below 800. By contrast, the QS optimal strategy has much lower standard deviation and shortfall probabilities for those two levels of terminal wealth, by a factor of around two in each case. In addition, this strategy offers a small amount of expected surplus cash (this surplus is not applicable for the other two strategies). Turning to the historical market results for expected blocksizes of 1, 2, and 5 years, we reach generally similar conclusions. Of course, the expected values are no longer exactly equal to the target of 915, but the difference from this target is much smaller for the QS optimal strategy. The constant proportion and glide path strategies are again quite comparable to each other, with the glide path having slightly lower expected value and standard deviation, but somewhat higher shortfall probability (for the two values of $W_T$ considered). In all cases, the QS optimal strategy offers the best performance, with higher mean, lower standard deviation, and lower shortfall probability, as well as a modest amount of expected surplus cash.
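The statistics reported in Table 3 are simple functionals of the simulated terminal wealth distribution. A minimal sketch (names and dictionary layout are ours):

```python
def performance_stats(W_T, targets=(700.0, 800.0)):
    """Summary statistics for simulated real terminal wealth (in thousands
    of dollars): mean, standard deviation, and the shortfall probabilities
    Pr[W_T < target] for each target level."""
    n = len(W_T)
    mean = sum(W_T) / n
    var = sum((w - mean) ** 2 for w in W_T) / n
    stats = {"mean": mean, "std": var ** 0.5}
    for c in targets:
        stats[f"Pr[W_T < {c:.0f}]"] = sum(w < c for w in W_T) / n
    return stats
```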


**Table 3.** Base case scenario results. Wealth units: thousands of dollars. Input data provided in Tables 1 and 2. Synthetic market results computed using Monte Carlo simulations with 160,000 sample paths. Historical market results based on 10,000 bootstrap resampled paths using data from 1926:1 to 2015:12.

Figure 3a shows the cumulative distributions for all three strategies in the historical market (expected blocksize of two years). The distributions for the constant proportion and glide path strategies are practically identical. Over the bulk of the distribution, the QS optimal strategy exhibits clearly better performance than the others. However, the QS optimal strategy performs a little worse than the alternatives in the extreme left tail.<sup>16</sup> This happens because there can be paths where the equity market trends downward for a very large portion of the investment horizon. In such cases, all strategies do poorly, but the QS optimal strategy will remain fully invested in the equity market in an attempt to recover and meet the quadratic wealth target. The QS optimal strategy also underperforms in the extreme right tail of the distribution. This is because there are paths where the equity market trends strongly upward over most of the investment period. Once the quadratic wealth target is reached, however, the QS optimal strategy de-risks, shifting all investment into the low return bond market. It does not capitalize on the continued strong equity market performance. The other two strategies, by comparison, retain a large equity market exposure, leading to higher terminal wealth. However, we reiterate that, over most of the distribution, the QS optimal strategy provides better results.

Figure 3b depicts properties of the optimal control for the QS optimal strategy. This strategy invests entirely in the equity market for the first several years. The percentage invested in the risky asset subsequently trends downward on average over time. However, there is considerable variation: the standard deviation of the optimal control rises strongly over time, indicating that the allocation to equities is quite sensitive to realized investment returns.<sup>17</sup>

**Figure 3.** Base case scenario results in the historical market. Input data provided in Tables 1 and 2. Results based on 10,000 bootstrap resampled paths using data from 1926:1 to 2015:12 with expected blocksize $\hat{b} = 2$ years. (**a**) cumulative distributions of real terminal wealth for various strategies. Wealth units: thousands of dollars; surplus cash included for the QS optimal case; (**b**) mean and standard deviation of the fraction allocated to the equity market for the QS optimal strategy.

The historical market results given in Table 3 are somewhat encouraging for the QS optimal strategy. To formulate this strategy, recall that we assumed a double exponential jump diffusion model with known parameters for the equity market index and a constant risk-free interest rate. Our bootstrap resampled historical market tests make no such assumptions, yet deliver results that are fairly close to those observed in the synthetic market tests which maintain these assumptions. This indicates that the QS optimal strategy is quite robust to departures from these assumptions.

<sup>16</sup> In other words, the QS optimal strategy will appear riskier than the constant proportion or glide path strategies according to tail risk measures such as value-at-risk or conditional value-at-risk, provided that the risk measure is calculated using sufficiently low cumulative probabilities.

<sup>17</sup> Of course, the equity allocation for the constant proportion and glide path cases is fixed in advance, being at most time-dependent and not varying at all in response to realized returns.

As an additional test of the robustness of the QS optimal strategy, we explore the effect of computing and storing the optimal control based on the constant parameters from Table 1, but then allowing the synthetic equity market parameters to vary randomly in simulation tests. To be specific, we carry out Monte Carlo simulations where, at each action time $t_i \in \mathcal{T}_1$ and along each stochastic path, we select $(\mu, \sigma)$ from a uniform distribution having mean equal to the corresponding values from Table 1. This pair $(\mu, \sigma)$ is then used for the interval $(t_i, t_{i+1})$.
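A simplified sketch of this perturbation scheme is given below. For illustration we use geometric Brownian motion for the annual equity returns; the paper's synthetic market actually uses a double exponential jump diffusion, which we omit here, and the `spread` parameter defining the uniform distribution's half-width is our notation.

```python
import math
import random

def simulate_equity_growth(mu0, sigma0, spread, T_years, rng):
    """Gross equity index growth factor over T_years, redrawing (mu, sigma)
    uniformly around the base values (mu0, sigma0) at each annual action
    time. GBM is used as a stand-in for the paper's jump diffusion model."""
    log_growth = 0.0
    for _ in range(T_years):
        mu = rng.uniform(mu0 - spread, mu0 + spread)
        sigma = rng.uniform(max(sigma0 - spread, 0.0), sigma0 + spread)
        z = rng.gauss(0.0, 1.0)
        log_growth += (mu - 0.5 * sigma ** 2) + sigma * z  # one-year log return
    return math.exp(log_growth)
```

Repeating this along each of the 160,000 paths, with the stored control applied at each action time, gives the perturbed-parameter results in the next table.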

Table 4 shows the results. The first row reproduces the values reported in Table 3 for this strategy in the synthetic market, while the remaining two rows provide results when *μ* and *σ* are varied randomly within the given range. Table 4 indicates that only an estimate of the mean of the distribution of the market parameters *μ* and *σ* is needed to compute an effective control strategy. This is consistent with the results in Ma and Forsyth (2016), where it is shown that including stochastic volatility effects results in negligible improvements in results for long-term (i.e., greater than ten years) investors.

**Table 4.** Base case scenario results for the QS optimal strategy with random variation of market parameters *μ* and *σ*. Wealth units: thousands of dollars. Input data provided in Tables 1 and 2, except as noted. Monte Carlo simulations with (*μ*, *σ*) drawn from a uniform distribution with the indicated limits along each of 160,000 paths.


Overall, then, for the base case data in Table 2, the QS optimal strategy appears to be fairly robust to parameter and model mis-specification. Moreover, this strategy clearly outperforms the constant proportion and glide path alternatives over most of the terminal wealth distribution. It provides about the same mean terminal wealth, but has considerably lower standard deviation and shortfall probability (for the two wealth levels considered in Table 3). However, even the QS optimal strategy delivers somewhat disappointing results in absolute terms. Recall that we are trying to achieve average real terminal wealth of \$915,000. In the idealized synthetic market which conforms exactly to the modelling assumptions used to generate the controls for the QS optimal strategy, Table 3 shows that there is almost a 20% chance of ending up with real terminal wealth below \$700,000, and about a 25% chance of ending up with less than \$800,000. These shortfall probabilities do not change much under the conditions of the historical market backtests. Of course, these rather pessimistic results have so far considered only the base case data. In the following section, we investigate whether more promising results can be achieved under different assumptions.
