**Strategy II:**

On day *t*, an agent tries to go back to the same restaurant as chosen on the earlier day (*t* − 1) with probability

$$p_k^{(II)}(t) = 1,\text{ if } n_k(t-1) = 1,\text{ and}\tag{6}$$

$$p_{k'}^{(II)}(t) = p < 1,\text{ if } n_k(t-1) > 1\tag{7}$$

for choosing any of the $n_r$ neighboring restaurants ($k' \neq k$).
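For concreteness, the decision rule of strategy II can be sketched as below. This is a minimal illustration, not code from the study: the function name is ours, and we read Equation (7) as the probability of shifting to one of the $n_r$ neighbors when the restaurant was crowded (consistent with larger *p* yielding larger $\lambda_c$ in Figure 10).

```python
import random

def strategy_ii_choice(k, n_k_prev, N, p, rng=random):
    """One agent's choice on day t under strategy II (Equations (6) and (7)).

    k        : restaurant visited on day t-1
    n_k_prev : its occupancy n_k(t-1) on that day
    N        : total number of restaurants; the other n_r = N-1 are neighbors
    p        : probability of shifting to a neighbor when k was crowded
    """
    if n_k_prev == 1:
        return k                              # Eq. (6): was alone, return for sure
    if rng.random() < p:                      # Eq. (7): crowded, move with probability p
        k_prime = rng.randrange(N - 1)        # uniform over the N-1 neighbors...
        return k_prime if k_prime < k else k_prime + 1  # ...skipping k itself
    return k                                  # otherwise try the same restaurant again
```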

#### *4.1. Numerical Results*

We have numerically studied the steady-state dynamics of the KPR game in which, every day, *λN* agents decide which of the *N* restaurants to visit, following either strategy I or strategy II. We consider here an infinite-dimensional arrangement of restaurants, where the number of nearest-neighboring restaurants $n_r$ of each one is (*N* − 1), and the cost of visiting any of them is the same at all times. The maximum social utilization *f* obtained from Equation (3) (from the point of view of the agents or players) will be denoted below by $f_a$. Each day (iteration), the parallel choice decisions of all agents are processed (following either strategy I or II) and used to compute $f_a$. The steady state is identified as the state in which $f_a$ does not change (within a predefined error margin) over the next (say, one hundred) iterations.
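A minimal sketch of this simulation loop, for strategy II, might look as follows; the function names, the tolerance, and the reading of Equation (3) as the fraction of agents served (occupied restaurants divided by the *λN* agents, one meal per restaurant per day) are our assumptions. Strategy I would differ only in the return rule (Equation (4), not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def kpr_day(choices, N, p):
    """One parallel day of the KPR game under strategy II (a sketch).

    choices : restaurant index chosen by each of the lambda*N agents yesterday
    Returns the updated choices and the utilization fraction f_a.
    """
    counts = np.bincount(choices, minlength=N)        # occupancies n_k(t-1)
    crowded = counts[choices] > 1                     # agents whose restaurant was shared
    move = crowded & (rng.random(choices.size) < p)   # Eq. (7): shift with probability p
    shift = rng.integers(1, N, size=int(move.sum()))  # uniform over the N-1 neighbors
    choices = choices.copy()
    choices[move] = (choices[move] + shift) % N       # lands anywhere except the old place
    served = np.count_nonzero(np.bincount(choices, minlength=N))
    return choices, served / choices.size             # f_a: fraction of agents served

def run_to_steady_state(N, lam, p, tol=1e-3, patience=100, max_iter=10**6):
    """Iterate daily choices until f_a is unchanged within tol for `patience` days.

    Returns the steady-state f_a and the convergence time tau.
    """
    choices = rng.integers(0, N, size=int(lam * N))   # random initial assignment
    f_prev, stable = -1.0, 0
    for t in range(1, max_iter + 1):
        choices, f_a = kpr_day(choices, N, p)
        stable = stable + 1 if abs(f_a - f_prev) < tol else 0
        f_prev = f_a
        if stable >= patience:
            return f_a, t - patience                  # day the plateau began
    return f_prev, max_iter
```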

On day *t*, the $n_i(t-1)$ agents who visited restaurant *i* on the previous day decide to revisit it with probability $p_k^{(I)}(t)$ (Equation (4)) or $p_k^{(II)}(t)$ (Equation (6)), or else choose any other restaurant ($k' \neq k$) from among the (*N* − 1) neighbors, for strategies I and II, respectively (Equations (5) and (7)). After the system stabilizes, i.e., $f_a(t)$ becomes practically independent of *t*, the average steady-state statistics of $f_a(t)$ are noted as $f_a^{(I)}$ or $f_a^{(II)}$, respectively, for strategies I and II. We find power-law fits for the steady-state wastage fraction, $(1 - f_a^{(I)}) \sim (1 - f_a^{(II)}) \sim (\lambda - \lambda_c(N))^{\beta}$ with $\beta = 1.0 \pm 0.05$ (see Figures 6 and 8), and for the convergence time, $\tau^{(I)} \sim \tau^{(II)} \sim (\lambda_c(N) - \lambda)^{-\gamma}$ with $\gamma = 0.5 \pm 0.07$ (see Figures 7 and 9), for both strategies I and II. Varying *λ*, we consider here the steady-state results for $f_a$ and *τ* for different system sizes (*N* = 500, 1000, 2000), with *α* = 0.05, 0.25, 0.5, 1.0 in strategy I or *p* = 0.2, 0.4, 0.6, 0.8 in strategy II. All simulations are done taking a maximum of *N* = 2000, with the number of iterations per run of order $10^6$. For finite system sizes, the effective critical points $\lambda_c(N)$ (where $f_a$ becomes unity or *τ* reaches its peak value) are obtained numerically for the different system sizes *N* and analyzed using the finite-size scaling method in Figure 10.
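The effective critical point $\lambda_c(N)$ can then be located by sweeping *λ*, e.g., as the largest *λ* for which the steady-state $f_a$ still saturates at unity, with *τ* peaking near the same point. A sketch, reusing `run_to_steady_state` from the previous snippet; the grid, tolerance, and criterion are our choices:

```python
import numpy as np

def effective_critical_point(N, p, lams=np.arange(0.50, 1.001, 0.01), tol=1e-3):
    """Locate lambda_c(N) by scanning lambda (a sketch).

    We take lambda_c(N) as the largest lambda at which the steady-state
    f_a still equals unity within tol; tau should peak near the same point.
    """
    lam_c, taus = lams[0], []
    for lam in lams:
        f_a, tau = run_to_steady_state(N, lam, p)   # from the previous sketch
        taus.append(tau)
        if f_a >= 1.0 - tol:
            lam_c = lam
    return lam_c, np.array(taus)
```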

**Figure 6.** Plots of $(1 - f_a^{(I)})$ against $\lambda - \lambda_c(N)$ following strategy I at (**a**) *α* = 0.05, (**b**) *α* = 0.25, (**c**) *α* = 0.5, (**d**) *α* = 1.0. A power law $(1 - f_a^{(I)}) \sim (\lambda - \lambda_c(N))^{\beta}$ holds, with $\beta = 1.0 \pm 0.05$. The insets show the direct relationship between $(1 - f_a^{(I)})$ and *λ* (for strategy I).

**Figure 7.** Plots of the steady-state convergence time $\tau^{(I)}$ for strategy I against $\lambda_c(N) - \lambda$ at (**a**) *α* = 0.05, (**b**) *α* = 0.25, (**c**) *α* = 0.5, (**d**) *α* = 1.0. A power law $\tau^{(I)} \sim (\lambda_c(N) - \lambda)^{-\gamma}$ holds, with $\gamma = 0.5 \pm 0.05$. The insets plot the direct relationship between $\tau^{(I)}$ and *λ* for different system sizes (for strategy I), also showing the variation of $\lambda_c$ as *α* increases.

**Figure 8.** Plots of $(1 - f_a^{(II)})$ versus $\lambda - \lambda_c(N)$ following strategy II at (**a**) *p* = 0.8, (**b**) *p* = 0.6, (**c**) *p* = 0.4, (**d**) *p* = 0.2. A power law $(1 - f_a^{(II)}) \sim (\lambda - \lambda_c(N))^{\beta}$ holds, with $\beta = 1.0 \pm 0.05$. The insets show the direct relationship between $(1 - f_a^{(II)})$ and *λ* (for strategy II).

**Figure 9.** Plots of the steady-state convergence time $\tau^{(II)}$ against $\lambda_c(N) - \lambda$ following strategy II at (**a**) *p* = 0.8, (**b**) *p* = 0.6, (**c**) *p* = 0.4, (**d**) *p* = 0.2. A power law $\tau^{(II)} \sim (\lambda_c(N) - \lambda)^{-\gamma}$ holds, with $\gamma = 0.5 \pm 0.07$. The insets give the direct relationship between $\tau^{(II)}$ and *λ* for different system sizes (for strategy II), also showing the variation of $\lambda_c$ as *p* decreases.

**Figure 10.** Extrapolation study of the effective finite-size critical density of agents $\lambda_c(N)$. The system-size dependence is numerically fitted to $1/\sqrt{N}$, and we estimate $\lambda_c$ from $\lambda_c \equiv \lambda_c(N \to \infty)$. The extrapolated values of $\lambda_c$ are 0.99, 0.92, 0.85, 0.75 for *α* = 0.05, 0.25, 0.5, 1.0 (strategy I) (**a**), and 0.9, 0.8, 0.7, 0.6 for *p* = 0.8, 0.6, 0.4, 0.2 (strategy II) (**b**).
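This extrapolation amounts to a straight-line fit of $\lambda_c(N)$ against $1/\sqrt{N}$; the intercept at $1/\sqrt{N} \to 0$ is the estimate of $\lambda_c$. A sketch with illustrative placeholder numbers (not the measured values of Figure 10):

```python
import numpy as np

# Effective critical points lambda_c(N) for a few system sizes
# (illustrative placeholder values, not the measured ones).
Ns    = np.array([500, 1000, 2000])
lamcN = np.array([0.945, 0.958, 0.968])

# Fit lambda_c(N) = lambda_c - c / sqrt(N); the intercept at
# 1/sqrt(N) -> 0 gives the extrapolated critical density lambda_c.
slope, lam_c = np.polyfit(1.0 / np.sqrt(Ns), lamcN, 1)
print(f"extrapolated lambda_c = {lam_c:.3f}")   # ~0.99 for these numbers
```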

It may be mentioned that, for the estimation of the errors in the exponents *β* and *γ* in Figures 6–9, we tried linear fits (without any intercept) of log *y* versus log *x*, using best fits mostly over the intermediate-range data points for all *N* values, up to where they start deviating (due to extreme fluctuations near *λ* = $\lambda_c$ and towards their saturation values as *λ* approaches unity [74,81]), anticipating their universal mean-field values in this infinite-dimensional system. From the slopes of these best-fit lines for the different *α* or *p* values, we extract the universal exponent values and their standard deviations. We quote the higher of these errors in the unified (and universal) estimate of *γ*.
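This procedure amounts to a zero-intercept least-squares fit on log-log data, one slope per *α* (or *p*) value, with the spread of the slopes giving the quoted error. A self-contained sketch, under one reading of the "no intercept" prescription (anchoring the fit at the first retained point) and with synthetic data standing in for the intermediate-range points:

```python
import numpy as np

def loglog_slope(x, y):
    """Least-squares slope of log y versus log x without a free intercept,
    anchored at the first retained data point (one reading of the fit)."""
    lx = np.log(x) - np.log(x[0])
    ly = np.log(y) - np.log(y[0])
    return float(np.sum(lx * ly) / np.sum(lx * lx))

# Synthetic stand-ins for the intermediate-range points of (1 - f_a)
# versus (lambda - lambda_c(N)), one dataset per alpha (or p) value.
rng = np.random.default_rng(2)
slopes = []
for _ in range(4):
    x = np.linspace(0.05, 0.30, 20)                       # lambda - lambda_c(N)
    y = x ** 1.0 * (1 + 0.02 * rng.standard_normal(20))   # wastage ~ x**beta
    slopes.append(loglog_slope(x, y))

beta, beta_err = np.mean(slopes), np.std(slopes)
print(f"beta = {beta:.2f} +/- {beta_err:.2f}")
```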
