**7. Case-Based Parameters**

In this section, we discuss the estimation of the CBL parameters based on the full sample. The parameters of CBL are *λ*, which measures the sensitivity of choice to CBU (see Section 3.6); *A*<sup>L</sup><sub>0</sub>, the initial attraction to Left for the column player; and *A*<sup>U</sup><sub>0</sub>, the initial attraction to Up for the row player (see Section 3.1). These initial attractions are relative measures, as the initial attractions to Down and Right are held at zero. The *W*<sub>*i*</sub> are the weights in the similarity function on the different characteristics of the information vector (see Equation (4)). In particular, *W*<sub>1</sub> is the weight given to recency (here, round number) and *W*<sub>2</sub> is the weight given to the moving average of the actions of opposing players. These parameters are estimated to best fit the data using the logit rule in Equation (12).
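The roles of these parameters can be sketched in code. This is a minimal illustration, not the paper's estimation routine: the exact functional forms of Equations (4) and (12) are not reproduced here, so we assume a common exponential-distance similarity function and a standard logit choice rule, and the feature values are made up for the example.

```python
import math

def similarity(case_features, current_features, weights):
    # Hypothetical similarity in the spirit of Equation (4): decays
    # exponentially in the weighted distance between a past case's
    # information vector and the current one. weights[0] plays the role
    # of W_1 (recency / round number), weights[1] that of W_2 (moving
    # average of opposing players' actions).
    dist = sum(w * abs(x - y)
               for w, x, y in zip(weights, case_features, current_features))
    return math.exp(-dist)

def choice_probabilities(attractions, lam):
    # Logit rule in the spirit of Equation (12): P(j) is proportional to
    # exp(lam * A_j), where lam measures sensitivity of choice to CBU.
    exps = [math.exp(lam * a) for a in attractions]
    total = sum(exps)
    return [e / total for e in exps]

# Example: two strategies, with the second attraction normalized to zero
# (as for Down and Right in the text); numbers are illustrative only.
probs = choice_probabilities([0.5, 0.0], lam=2.0)
```

An identical information vector yields similarity one, and larger weights make similarity fall off faster along that characteristic, which is how *W*<sub>1</sub> and *W*<sub>2</sub> shape which past cases matter.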

We do not directly estimate the aspiration parameter because it cannot be empirically distinguished from the initial attraction parameters. As Equation (2) shows, the *H* parameter and the mean of the *A*<sub>*j*</sub> parameters confound identification: we cannot distinguish between the average initial attractions to strategies due to priors and the aspiration value of the agent. Fortunately, we find that the CBL generally achieves the same goodness-of-fit without estimating the aspiration level.
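The identification problem can be seen numerically. Under a logit rule, choice probabilities depend only on differences between attractions, so a common shift in all attractions, such as an aspiration-level offset entering each strategy symmetrically, is indistinguishable from a shift in the mean of the initial attractions. A minimal sketch (the attraction values and λ are illustrative, not estimates from the paper):

```python
import math

def logit_probs(attractions, lam):
    # Standard logit: P(j) proportional to exp(lam * A_j).
    exps = [math.exp(lam * a) for a in attractions]
    total = sum(exps)
    return [e / total for e in exps]

# Adding the same constant c to every attraction leaves the logit
# probabilities unchanged, so the level of H and the mean of the A_j
# cannot both be identified from choice data alone.
c = 0.7
base = logit_probs([1.2, 0.4, -0.3], lam=1.5)
shifted = logit_probs([1.2 + c, 0.4 + c, -0.3 + c], lam=1.5)
```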

In Table 1, we report the estimated parameters using the full sample of observations in each treatment of each experiment. In all experimental treatments, we find a statistically significant value for *λ*, indicating that the estimated learning algorithm explains some of the observed choices. We find that the initial attraction parameters *A*<sup>j</sup><sub>0</sub> are consistent with the frequency of choices in the first period. The relative weights *W*<sub>1</sub> and *W*<sub>2</sub> are difficult to compare directly, as they are on different scales. We could normalize the data prior to estimation, but it is unclear what effect that might have on cumulative CBL over time. Instead, we explore ex-post normalization of the parameters in Appendix C and list the results in Table A2. The empirical estimate of *W*<sub>1</sub> is positive and statistically significant, indicating that, consistent with other learning models, recency is important to learning.

By comparing the coefficients *W*<sub>1</sub> in Table 1, we find that recency degrades similarity faster in non-constant sum games than in constant sum games. This difference suggests that in non-constant sum games, subjects 'forget' past experience faster when constructing expectations about the current problem and put relatively more weight on the similarity of the moving average of opposing players' actions.

The weight *W*<sub>2</sub> on the moving average of past play of opposing players is positive and significant. A positive parameter gives greater weight to cases whose average play rates are similar to those of the current problem. This parameter picks up adjustments to group actions over time.
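This moving-average characteristic of the information vector can be sketched as follows. The window length is an assumption for illustration, as the paper's choice is not reproduced here; actions are coded 1 for Up/Left and 0 for Down/Right.

```python
def moving_average(opponent_actions, window=5):
    # Hypothetical moving average of opposing players' past actions,
    # used as one entry of the information vector. The window length
    # is an assumption; a positive W_2 then makes past cases with a
    # similar average play rate more similar to the current problem.
    recent = opponent_actions[-window:]
    return sum(recent) / len(recent)
```

As the group's play shifts over time, this characteristic changes, so cases from periods with similar group behavior receive more weight, which is what lets the parameter pick up adjustments to group actions.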


