
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Two independent but related choice prediction competitions are organized, focusing on behavior in simple two-person extensive form games.

Experimental studies of simple social interactions reveal robust behavioral deviations from the predictions of the rational economic model. People appear to be less selfish and less sophisticated than the assumed Homo economicus. The main deviations from rational choice (see a summary in

Recent research demonstrates the potential of simple models that capture these psychological factors, e.g., [

The selected structure has two main advantages: the first is that, depending on the relation between the different payoffs, this structure allows for studying games that are similar to the famous examples considered in

The current study focuses on games that were sampled from

The current paper introduces two independent but related competitions: one for predicting the proportion of

Each experiment includes 120 games. The six parameters that define each game (the payoffs f1–f3, and s1–s3, see

The estimation experiment was run in the CLER lab at Harvard. One hundred and sixteen students participated in the study, which was run in four independent sessions, each of which included between 26 and 30 participants. Each session focused on 60 of the 120 extensive form games presented in

In order to clarify the main deviations from rational choice we focused on the predictions of the seven strategies presented in

The prediction of each strategy for Player 1 takes the value “1” when the strategy implies the

Much of the debate in the previous studies of behavior in simple extensive form games focuses on the relative importance of inequality aversion [

The popular models of inequality aversion imply a high rate of

One explanation of this “class of games” effect is the distinction between two definitions of inequality aversion. One definition is local: it assumes aversion to inequality within each game.

The second definition is global: it assumes aversion to inequality over games.

Another potential explanation of the class of games effect is that players tend to select strategies that were found to be effective in similar situations in the past, and the class of games used in the experiment is one of the factors that affect perceived similarity. For example, it is possible that an experiment that focuses on ultimatum-like games increases the perceived similarity to experiences in which the player might have been treated unfairly, whereas an experiment with a wider set of games increases similarity to situations in which efficiency might be more important. One attractive feature of this “distinct strategies” explanation is its consistency with the results of the regression analysis presented above. The best regression equation can be summarized with the assertion that players use distinct strategies. We return to this idea in the baseline models section below.

In an additional analysis we focused on the games in which Player 2 cannot affect his/her own payoff.

The results show only mild reciprocity: Player 2 selected the option that helps Player 1 77% of the time when Player 1's

The two competitions use the Mean Squared Deviation (MSD) criterion. Specifically, the winner of each competition will be the model that will minimize the average squared distance between its prediction and the observed choice proportion in the relevant condition: the proportion of
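For concreteness, the MSD criterion can be sketched in a few lines of Python (a minimal illustration; the function name is ours):

```python
def msd(predictions, observed):
    """Mean Squared Deviation: the average squared distance between a model's
    predicted choice proportions and the observed choice proportions."""
    assert len(predictions) == len(observed)
    return sum((p - o) ** 2 for p, o in zip(predictions, observed)) / len(predictions)
```

The winner of each competition is the submission with the lowest such score over the relevant set of games.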

The results of the estimation study were posted on the competition website in January 2011. At the same time we posted several baseline models. Each model was implemented as a computer program that satisfies the requirements for submission to the competition. The baseline models were selected to achieve two main goals. The first goal is technical: The programs of the baseline models are part of the “instructions to participants”. They serve as examples of feasible submissions. The second goal is to illustrate the range of MSD scores that can be obtained with different modeling approaches. Participants are encouraged to build on the best baselines while developing their models. The baseline models will not participate in the competitions. The following sections describe five baseline models and their fit scores on the 120 games that are presented in

According to the subgame perfect equilibrium (SPE), Player 2 chooses the alternative that maximizes his/her payoff. Player 1 anticipates this and chooses
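Under the game structure of Figure 1, Player 1 either ends the game (payoffs f1, s1) or passes the move to Player 2, who picks between the outcomes (f2, s2) and (f3, s3). The SPE prediction can then be computed mechanically; in the sketch below, the payoff ordering, the 50/50 splitting of ties, and the coding of Player 2's prediction as the probability of the (f3, s3) outcome are our reading of the results table, not the authors' notation:

```python
def spe(f1, s1, f2, s2, f3, s3):
    """Subgame perfect equilibrium choice probabilities, with ties split 50/50.

    Returns (probability that Player 1 passes the move to Player 2,
             probability that Player 2 picks the (f3, s3) outcome).
    """
    # Player 2 maximizes his/her own payoff between the two continuation outcomes.
    p2 = 1.0 if s3 > s2 else 0.0 if s2 > s3 else 0.5
    # Player 1 compares the safe payoff f1 with the anticipated continuation payoff.
    f_in = (1 - p2) * f2 + p2 * f3
    p1 = 1.0 if f_in > f1 else 0.0 if f1 > f_in else 0.5
    return p1, p2
```

Applied to the payoff rows of the results table, this reproduces the SPE columns.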

The core assumption of the inequity aversion model [ ] is that player i's utility decreases with the difference between the two payoffs:

U_i(x) = x_i − α_i·max(x_j − x_i, 0) − β_i·max(x_i − x_j, 0)

where x_i is player i's own payoff, x_j is the other player's payoff, α_i determines the level of utility loss from disadvantageous inequality, and β_i determines the utility loss from advantageous inequality. The model asserts that the utility loss from disadvantageous inequality is at least as high as the loss from advantageous inequality (α_i ≥ β_i, with β_i ≥ 0).

The probability of action a is assumed to follow a logistic choice rule:

P(a) = e^{λ·U(a)} / (e^{λ·U(a)} + e^{λ·U(b)})

where λ is the player's choice consistency parameter, capturing the importance of the differences between the expected utilities associated with each action.

Applying the model to Player 1's behavior requires an additional assumption regarding Player 1's beliefs about Player 2's action. The current version of the model assumes that Player 1 knows the distributions of α and β in the population and maximizes his/her own utility under the belief that he/she faces an arbitrary player drawn from that distribution.
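A minimal sketch of these two building blocks (the inequity-aversion utility and the logistic choice rule); the symbol λ and the function names are ours:

```python
import math

def fs_utility(own, other, alpha, beta):
    """Inequity-aversion utility: own payoff, minus alpha times the
    disadvantageous gap, minus beta times the advantageous gap."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

def logit_choice_prob(u_a, u_b, lam):
    """Probability of choosing action a over action b under the logistic
    choice rule; lam is the choice consistency parameter (lam = 0 gives 0.5)."""
    return 1.0 / (1.0 + math.exp(-lam * (u_a - u_b)))
```

For given (alpha, beta, lam), Player 2's predicted choice probability is `logit_choice_prob` applied to the utilities of the two continuation outcomes.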

Like the inequality aversion model, ERC [_{i}

The parameter measures the relative importance of the deviation from an equal split to player i.

Both players' decisions are defined by the stochastic choice rule described in _{1} is a parameter that defines the influence of experience and

Charness and Rabin's [ ] model asserts that Player 2's utility is a weighted average of the two players' payoffs:

U_2 = (ρ·r + σ·s + θ·q)·π_1 + (1 − ρ·r − σ·s − θ·q)·π_2

where π_i is Player i's payoff and:

r = 1 if π_2 > π_1 and r = 0 otherwise;

s = 1 if π_2 < π_1 and s = 0 otherwise;

q = −1 if P1 “misbehaved” and q = 0 otherwise.

and “misbehaved” in the current setting is defined as the case where Player 1 chose

Modeling the first mover's behavior requires additional assumptions about his/her beliefs regarding the responder's behavior. The current estimation assumes that Player 1 correctly anticipates Player 2's responses. It further assumes that Player 1 has a similar utility function (excluding the reciprocity parameter q).
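The responder's utility in this model can be sketched as follows; the parameter names ρ (rho), σ (sigma), and θ (theta) follow the published Charness–Rabin specification, since the equation symbols are garbled in the extracted text:

```python
def cr_utility_p2(pi1, pi2, rho, sigma, theta, misbehaved):
    """Charness-Rabin social utility of Player 2: a weighted average of the
    two payoffs, where the weight on Player 1's payoff depends on who is
    ahead (r, s) and on whether Player 1 "misbehaved" (q)."""
    r = 1 if pi2 > pi1 else 0
    s = 1 if pi2 < pi1 else 0
    q = -1 if misbehaved else 0
    w = rho * r + sigma * s + theta * q  # weight on Player 1's payoff
    return w * pi1 + (1 - w) * pi2
```

With all three parameters at zero the model reduces to pure self-interest (Player 2's utility is his/her own payoff).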

The Seven Strategies model is motivated by the regression analysis presented in the results section and the related distinct strategies explanation of the class effect suggested by

The parameters _{R}_{L}_{M}_{W}_{J}_{D}

The probability that Player 2 will choose

The parameters _{R}_{N}_{J}_{W}_{D}
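The logic of the model is a probability mixture over deterministic strategies: the predicted choice proportion is the weighted average of the strategies' individual predictions. A sketch, with illustrative names rather than the paper's exact parameter symbols:

```python
def mixture_prediction(strategy_predictions, weights):
    """Predicted choice proportion under a mixture of simple strategies.

    strategy_predictions maps a strategy name to its prediction for the game
    (0, 1, or 0.5 when the strategy is indifferent); weights maps each name
    to the estimated proportion of players using it (assumed to sum to 1).
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * strategy_predictions[name] for name in weights)
```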

In order to evaluate the risk of overfitting the data, we chose to estimate the ENO of the models by using half of the 120 games (the games played by the first cohort) to estimate the parameters, and the other 60 games to compute the ENO.
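The estimation procedure described above amounts to fitting on one half of the games and scoring on the other. A generic sketch (`model`, `param_grid`, and the dict interface are hypothetical stand-ins, not the authors' code):

```python
def fit_and_validate(model, param_grid, train_games, test_games, observed):
    """Pick the parameter vector that minimizes MSD on the training games,
    then report its MSD on the held-out games (a guard against overfitting)."""
    def msd_on(params, games):
        return sum((model(params, g) - observed[g]) ** 2 for g in games) / len(games)
    best = min(param_grid, key=lambda p: msd_on(p, train_games))
    return best, msd_on(best, test_games)
```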

The two choice prediction competitions, presented above, are designed to improve our understanding of the relative importance of the distinct psychological factors that affect behavior in extensive form games. The results of the estimation study suggest that the rational model (subgame perfect equilibrium) provides relatively useful predictions of the behavior of Player 2 (ENO = 7.5), and less useful predictions of the behavior of Player 1 (ENO < 1). This observation is in line with those of Engelmann and Strobel [

The results show only weak evidence for negative reciprocity (e.g., punishing unfair actions). Comparison of the current results to previous studies of negative reciprocity suggests that the likelihood of this behavior is sensitive to the context. Strong evidence for negative reciprocity was observed in studies in which the identity of the disadvantaged players remained constant during the experiment. Negative reciprocity appears to be a less important factor when the identity of the disadvantaged players changes between games.

We tried to fit the results with two types of behavioral models: models that capture the behavioral tendencies in the agent's social utility function, and models that assume reliance on several simple strategies. Comparison of the different models leads to the surprising observation that the popular social utility models might be outperformed by a seven-strategy model. We hope that the competition will clarify this observation.

The structure of the basic game. Player 1 (P1) selects between

The structure and space of the games. The games are classified according to the relations between their outcomes for each player separately, and their main properties are described below the graph. Cells marked with gray are defined as “trivial games.” The lower panel shows the proportion of games under random sampling from the space, from the space excluding the “trivial games,” and under the quasi random sampling algorithm used in the estimation study.

Sequential two-player games that show deviations from rationality.

Game | Description | Prediction of the rational model | Typical results
---|---|---|---
Ultimatum [ | A proposer offers an allocation of a pie (e.g., $10) between herself and a responder. If the responder accepts the offer, the money is allocated; if she rejects, both get nothing. | The responder maximizes own payoff and thus agrees to any allocation. The proposer, anticipating that, offers the lowest amount possible to the responder. | Most proposers suggest an equal split when such a split is possible. Low offers (below 30% of the pie) are typically rejected. [
Dictator [ | A “dictator” determines an allocation of an endowment (e.g., $10) between herself and a recipient. | The dictator, maximizing her own payoff, gives $0 to the recipient. | Dictators, on average, give 30% of the endowment. [
Trust [ | A sender receives an endowment (e.g., $10) and can send any proportion of it to the responder. The amount sent is multiplied (e.g., by 3). The responder then decides how much to send back. | The responder maximizes her own payoff and thus sends back $0. The sender, anticipating that, sends $0. | Most senders send half or more of their endowment. Many responders (e.g., 44% in [
Gift exchange [ | A “manager” ( | The employee chooses minimum effort. Anticipating that, the manager chooses the minimum wage. | A minority of transactions (less than 9%) involve minimal wages and effort. About 2/3 of the managers' offers are higher than 50. [

The 120 games studied in the estimation experiment ranked by the Mean Squared Deviation from the Subgame Perfect Equilibrium prediction. The left-hand columns present the payoffs of the 120 games, the right-hand columns present the experimental results (proportions of

# | Game | f1 | s1 | f2 | s2 | f3 | s3 | P1 observed | P2 observed | P1 SPE | P2 SPE | MSD
---|---|---|---|---|---|---|---|---|---|---|---|---

1 | 16 | 0 | 8 | 1 | −6 | −6 | 3 | 0 | 1 | 0 | 1 | 0 |

2 | 18 | 3 | −4 | −2 | 8 | 3 | −5 | 0 | 0 | 0 | 0 | 0 |

3 | 31 | −6 | −6 | 2 | −3 | 1 | 2 | 1 | 1 | 1 | 1 | 0 |

4 | 43 | −7 | 1 | 6 | 4 | 0 | −6 | 1 | 0 | 1 | 0 | 0 |

5 | 49 | 5 | −2 | 4 | −5 | −3 | 5 | 0 | 1 | 0 | 1 | 0 |

6 | 56 | 0 | −2 | 8 | 3 | 7 | −7 | 1 | 0 | 1 | 0 | 0 |

7 | 63 | 5 | 3 | −3 | 6 | 0 | 8 | 0 | 1 | 0 | 1 | 0 |

8 | 70 | −6 | −6 | 0 | 8 | −1 | −6 | 1 | 0 | 1 | 0 | 0 |

9 | 84 | −4 | 1 | 1 | −8 | 8 | 0 | 1 | 1 | 1 | 1 | 0 |

10 | 86 | −5 | 1 | 0 | −6 | −4 | −4 | 1 | 1 | 1 | 1 | 0 |

11 | 90 | −4 | −4 | 0 | −7 | 7 | −4 | 1 | 1 | 1 | 1 | 0 |

12 | 93 | 2 | 4 | −7 | 7 | 3 | −5 | 0 | 0 | 0 | 0 | 0 |

13 | 97 | 0 | 6 | 3 | −7 | 6 | −2 | 1 | 1 | 1 | 1 | 0 |

14 | 105 | 8 | −5 | −6 | −5 | −3 | 6 | 0 | 1 | 0 | 1 | 0 |

15 | 111 | −5 | 6 | 4 | −4 | 5 | 4 | 1 | 1 | 1 | 1 | 0 |

16 | 117 | −3 | −6 | 8 | 0 | 7 | 3 | 1 | 1 | 1 | 1 | 0 |

17 | 35 | 0 | 1 | 6 | 1 | 8 | −6 | 1 | 0.03 | 1 | 0 | 0 |

18 | 39 | 5 | 7 | 7 | −7 | 0 | 4 | 0.03 | 1 | 0 | 1 | 0 |

19 | 46 | −4 | −2 | 7 | 6 | −4 | 3 | 1 | 0.03 | 1 | 0 | 0 |

20 | 3 | −7 | −4 | −5 | 8 | −5 | 8 | 1 | 0.47 | 1 | 0.5 | 0 |

21 | 4 | −3 | −4 | −1 | 4 | 3 | 7 | 1 | 0.97 | 1 | 1 | 0 |

22 | 30 | 2 | −8 | 7 | −1 | 6 | −7 | 0.97 | 0 | 1 | 0 | 0 |

23 | 36 | −5 | 3 | 7 | −7 | −4 | 5 | 1 | 0.97 | 1 | 1 | 0 |

24 | 37 | −6 | 1 | −2 | −6 | 1 | −3 | 1 | 0.97 | 1 | 1 | 0 |

25 | 44 | −7 | 6 | −5 | 2 | −4 | 7 | 1 | 0.97 | 1 | 1 | 0 |

26 | 79 | 0 | 0 | −5 | 5 | −1 | −6 | 0.04 | 0 | 0 | 0 | 0 |

27 | 89 | 6 | −7 | −3 | −3 | 6 | −7 | 0.04 | 0 | 0 | 0 | 0 |

28 | 91 | 2 | −6 | −7 | −1 | 0 | −6 | 0.04 | 0 | 0 | 0 | 0 |

29 | 96 | 2 | 6 | −3 | 6 | 3 | −3 | 0 | 0.04 | 0 | 0 | 0 |

30 | 66 | −6 | −2 | 1 | −4 | −4 | −4 | 0.96 | 0.5 | 1 | 0.5 | 0 |

31 | 81 | −6 | −5 | 0 | −1 | −4 | −6 | 0.96 | 0 | 1 | 0 | 0 |

32 | 82 | 0 | 8 | 6 | 5 | 8 | 3 | 0.96 | 0 | 1 | 0 | 0 |

33 | 83 | −6 | −6 | −1 | −1 | 5 | −5 | 0.96 | 0 | 1 | 0 | 0 |

34 | 103 | −7 | −5 | −5 | 5 | −6 | −3 | 0.96 | 0 | 1 | 0 | 0 |

35 | 104 | −5 | −4 | 4 | −7 | 2 | 2 | 1 | 0.96 | 1 | 1 | 0 |

36 | 26 | −1 | 4 | 1 | 0 | 5 | −2 | 0.97 | 0.03 | 1 | 0 | 0 |

37 | 23 | −3 | −5 | 3 | −3 | 6 | −1 | 0.97 | 0.97 | 1 | 1 | 0 |

38 | 53 | −6 | 2 | −7 | −2 | 0 | 2 | 0.97 | 0.97 | 1 | 1 | 0 |

39 | 67 | −3 | −1 | −5 | −3 | 6 | 0 | 0.96 | 0.96 | 1 | 1 | 0 |

40 | 101 | 2 | −6 | 7 | −8 | 6 | −6 | 0.96 | 0.96 | 1 | 1 | 0 |

41 | 25 | −7 | 1 | −6 | 1 | 0 | 3 | 1 | 0.93 | 1 | 1 | 0 |

42 | 32 | −3 | 6 | 2 | −4 | 5 | −3 | 1 | 0.93 | 1 | 1 | 0 |

43 | 87 | 3 | −2 | 2 | 0 | 6 | 3 | 0.93 | 1 | 1 | 1 | 0 |

44 | 106 | 4 | −1 | 6 | 3 | 0 | −2 | 0.93 | 0 | 1 | 0 | 0 |

45 | 10 | −2 | 5 | −4 | 7 | −2 | −2 | 0.07 | 0 | 0 | 0 | 0 |

46 | 24 | 2 | −4 | −3 | 5 | 3 | −2 | 0 | 0.07 | 0 | 0 | 0 |

47 | 55 | 6 | 0 | −5 | 4 | 0 | −2 | 0 | 0.07 | 0 | 0 | 0 |

48 | 12 | 2 | −7 | 3 | 3 | 4 | −6 | 0.93 | 0.03 | 1 | 0 | 0 |

49 | 15 | 1 | −4 | 1 | 2 | 6 | 5 | 0.93 | 0.97 | 1 | 1 | 0 |

50 | 59 | −6 | 5 | 2 | 0 | 7 | −8 | 0.97 | 0.07 | 1 | 0 | 0 |

51 | 120 | −2 | −7 | −3 | −6 | 0 | 3 | 0.96 | 0.93 | 1 | 1 | 0 |

52 | 64 | 6 | −2 | 1 | 1 | 4 | −3 | 0.07 | 0.04 | 0 | 0 | 0 |

53 | 102 | −1 | −6 | −4 | −2 | −4 | −3 | 0.07 | 0.04 | 0 | 0 | 0 |

54 | 62 | 2 | −6 | 3 | −2 | −4 | 5 | 0.07 | 0.96 | 0 | 1 | 0 |

55 | 99 | −5 | 7 | −2 | −1 | 4 | −4 | 0.96 | 0.07 | 1 | 0 | 0 |

56 | 114 | −4 | 1 | −3 | 6 | 6 | 1 | 0.96 | 0.07 | 1 | 0 | 0 |

57 | 118 | −5 | −6 | 7 | 7 | 4 | −6 | 0.96 | 0.07 | 1 | 0 | 0 |

58 | 11 | −2 | 1 | 5 | −7 | 5 | −7 | 0.93 | 0.43 | 1 | 0.5 | 0 |

59 | 58 | −6 | 0 | −1 | −5 | −6 | −7 | 0.93 | 0.07 | 1 | 0 | 0 |

60 | 60 | −8 | 3 | 6 | 4 | −3 | −2 | 0.93 | 0.07 | 1 | 0 | 0 |

61 | 13 | 5 | −6 | 0 | 1 | 1 | −7 | 0.07 | 0.07 | 0 | 0 | 0 |

62 | 29 | 0 | 1 | −2 | 5 | 5 | −5 | 0.1 | 0.03 | 0 | 0 | 0.01 |

63 | 71 | 7 | −1 | 5 | 4 | −5 | 7 | 0.04 | 0.89 | 0 | 1 | 0.01 |

64 | 107 | 4 | −3 | 5 | −7 | 2 | −3 | 0.11 | 0.96 | 0 | 1 | 0.01 |

65 | 2 | 7 | −6 | 5 | 3 | −4 | 0 | 0.1 | 0.07 | 0 | 0 | 0.01 |

66 | 21 | −6 | 2 | 0 | −7 | 8 | −7 | 1 | 0.63 | 1 | 0.5 | 0.01 |

67 | 47 | 7 | −7 | −6 | −6 | −4 | −6 | 0 | 0.63 | 0 | 0.5 | 0.01 |

68 | 74 | −3 | 7 | 0 | −5 | −3 | −6 | 0.89 | 0.07 | 1 | 0 | 0.01 |

69 | 7 | −3 | −1 | 1 | −7 | −4 | 6 | 0.13 | 0.97 | 0 | 1 | 0.01 |

70 | 45 | −1 | 7 | 4 | 6 | −2 | 5 | 0.87 | 0.07 | 1 | 0 | 0.01 |

71 | 14 | −6 | 2 | −2 | −4 | 6 | −5 | 1 | 0.17 | 1 | 0 | 0.01 |

72 | 57 | 3 | 0 | −4 | 3 | −2 | −2 | 0 | 0.17 | 0 | 0 | 0.01 |

73 | 48 | 7 | −7 | 1 | −5 | 5 | 2 | 0.17 | 0.97 | 0 | 1 | 0.01 |

74 | 77 | −3 | 2 | 0 | −5 | −6 | 7 | 0.18 | 1 | 0 | 1 | 0.02 |

75 | 94 | 3 | −6 | 3 | −6 | 6 | −5 | 0.82 | 1 | 1 | 1 | 0.02 |

76 | 6 | −1 | 4 | −7 | 3 | 7 | 0 | 0.13 | 0.13 | 0 | 0 | 0.02 |

77 | 68 | −1 | 4 | 2 | −4 | −3 | −2 | 0.18 | 0.96 | 0 | 1 | 0.02 |

78 | 72 | 0 | 0 | 7 | 0 | −5 | 7 | 0.18 | 0.96 | 0 | 1 | 0.02 |

79 | 19 | 1 | −6 | 4 | 5 | −1 | −6 | 0.8 | 0.03 | 1 | 0 | 0.02 |

80 | 69 | −4 | 8 | −3 | 4 | 1 | −4 | 0.79 | 0 | 1 | 0 | 0.02 |

81 | 88 | −5 | 6 | −4 | 0 | −4 | 0 | 0.79 | 0.46 | 1 | 0.5 | 0.02 |

82 | 65 | −7 | −4 | −4 | −5 | −6 | −5 | 0.96 | 0.29 | 1 | 0.5 | 0.02 |

83 | 119 | 2 | −4 | −7 | −6 | 2 | −3 | 0.29 | 0.93 | 0.5 | 1 | 0.02 |

84 | 20 | 0 | −3 | −4 | 6 | 3 | 3 | 0.1 | 0.2 | 0 | 0 | 0.03 |

85 | 95 | 1 | 3 | 5 | 2 | −4 | 3 | 0.21 | 0.89 | 0 | 1 | 0.03 |

86 | 112 | 5 | −6 | 1 | −1 | 6 | −1 | 0.21 | 0.61 | 0 | 0.5 | 0.03 |

87 | 78 | 6 | −1 | −5 | 6 | 8 | 8 | 0.75 | 1 | 1 | 1 | 0.03 |

88 | 100 | 6 | −1 | −7 | −5 | 5 | 4 | 0.25 | 1 | 0 | 1 | 0.03 |

89 | 116 | 1 | 1 | 2 | −1 | 2 | −1 | 0.75 | 0.57 | 1 | 0.5 | 0.03 |

90 | 51 | 2 | −7 | −1 | 3 | 7 | −6 | 0.27 | 0 | 0 | 0 | 0.04 |

91 | 76 | 7 | −7 | 5 | 6 | −7 | −5 | 0.29 | 0 | 0 | 0 | 0.04 |

92 | 108 | 7 | −8 | 4 | 0 | 2 | 2 | 0.29 | 0.93 | 0 | 1 | 0.04 |

93 | 52 | 8 | 1 | 0 | 3 | 2 | 2 | 0 | 0.3 | 0 | 0 | 0.05 |

94 | 1 | −3 | −7 | −7 | 5 | −3 | 6 | 0.2 | 0.93 | 0.5 | 1 | 0.05 |

95 | 34 | 2 | 2 | 3 | −7 | 3 | −7 | 0.67 | 0.43 | 1 | 0.5 | 0.06 |

96 | 73 | 5 | −3 | 2 | 0 | 4 | 5 | 0.36 | 1 | 0 | 1 | 0.06 |

97 | 92 | 0 | 3 | 5 | −2 | 0 | 5 | 0.86 | 1 | 0.5 | 1 | 0.06 |

98 | 22 | −5 | 7 | −6 | −2 | 8 | −6 | 0.33 | 0.17 | 0 | 0 | 0.07 |

99 | 38 | 2 | 8 | 6 | 1 | −7 | −7 | 0.63 | 0.03 | 1 | 0 | 0.07 |

100 | 41 | 1 | 7 | 3 | 2 | −4 | −6 | 0.63 | 0.03 | 1 | 0 | 0.07 |

101 | 75 | −5 | 3 | −6 | 5 | 0 | 1 | 0.36 | 0.11 | 0 | 0 | 0.07 |

102 | 61 | 0 | 2 | −1 | 6 | 2 | 6 | 0.79 | 0.82 | 1 | 0.5 | 0.07 |

103 | 54 | −1 | 3 | 7 | 1 | −2 | 8 | 0.37 | 0.87 | 0 | 1 | 0.08 |

104 | 98 | 4 | 8 | 6 | 3 | 1 | 2 | 0.61 | 0.07 | 1 | 0 | 0.08 |

105 | 5 | −2 | −7 | 6 | 5 | −2 | 8 | 0.83 | 0.77 | 0.5 | 1 | 0.08 |

106 | 80 | −1 | −2 | 6 | 0 | −3 | 1 | 0.39 | 0.89 | 0 | 1 | 0.08 |

107 | 28 | 5 | −7 | −5 | −7 | 8 | 0 | 0.57 | 1 | 1 | 1 | 0.09 |

108 | 33 | 4 | 0 | −8 | −8 | 5 | 5 | 0.57 | 0.97 | 1 | 1 | 0.09 |

109 | 113 | 4 | −7 | 2 | −4 | 7 | −4 | 0.57 | 0.64 | 1 | 0.5 | 0.1 |

110 | 110 | 1 | 7 | 0 | 2 | 3 | 2 | 0.61 | 0.79 | 1 | 0.5 | 0.12 |

111 | 42 | 4 | −3 | −3 | 7 | 5 | 7 | 0.23 | 0.93 | 0 | 0.5 | 0.12 |

112 | 27 | 4 | 1 | −6 | 1 | 7 | 5 | 0.5 | 1 | 1 | 1 | 0.13 |

113 | 50 | −1 | 2 | 0 | 1 | −5 | −3 | 0.5 | 0 | 1 | 0 | 0.13 |

114 | 9 | 1 | −7 | −4 | 4 | 3 | 4 | 0.4 | 0.83 | 0 | 0.5 | 0.13 |

115 | 109 | 1 | −2 | 6 | 6 | −3 | 7 | 0.5 | 0.75 | 0 | 1 | 0.16 |

116 | 8 | 0 | 2 | −5 | 1 | 2 | 1 | 0.47 | 0.83 | 0 | 0.5 | 0.16 |

117 | 115 | 3 | 2 | −7 | −6 | 5 | −3 | 0.36 | 0.96 | 1 | 1 | 0.21 |

118 | 40 | 4 | 4 | 7 | 2 | −3 | 1 | 0.33 | 0.03 | 1 | 0 | 0.22 |

119 | 17 | 0 | 0 | 5 | −4 | −5 | −5 | 0.33 | 0.07 | 1 | 0 | 0.23 |

120 | 85 | 7 | 7 | −1 | −7 | 8 | −1 | 0.14 | 1 | 1 | 1 | 0.37 |

The seven strategies examined in the regression analyses, and the estimated equations (regression weights). Standard errors in parentheses; statistical significance at the *0.05, **0.01, ***0.001 levels.

Constant | −0.047 |
0.016 | ||

Rational (Ratio) | 1 if |
0.448*** (0.029) | 1 if |
0.497*** (0.036) |

Nice rational (NiceR) | (Cannot be estimated based on the current data) | -- | 1 if |
0.357*** (0.036) |

Maxmin | 1 if |
0.199*** (0.034) | Perfectly correlated with Rational | -- |

Level-1 | 1 if |
0.206*** (0.030) | Perfectly correlated with Rational | -- |

Joint max (Joint Mx) | 1 if |
0.085** (0.031) | 1 if |
0.049** (0.017) |

Helping the weaker player (MxWeak) | 1 if |
0.068* (0.034) | 1 if min( |
0.049* (0.019) |

Minimize differences (MnDiff) | 1 if | |
0.062* (0.026) |
1 if | |
0.026* (0.013) |

Adjusted R^{2} | 0.920 | 0.984 |

Mini ultimatum games: Results from current and previous studies.

Game description | P( |
Game description | P( |
---|---|---|---|

#17: Player 1 chooses between |
7% | Falk |
45% |

#40: Player 1 chooses between |
3% | Charness and Rabin (Berk27): Player 1 chooses between |
9% |

Games in which Player 2's choice has no effect on his/her own payoff. The rightmost column (P(help)) presents the proportion of Player 2's choices of the alternative that maximizes Player 1's payoff.

Player 1 was “nice”: The | ||||||||

112 | 5 | −6 | 1 | −1 | 6 | −1 | 0.61 | 0.61 |

61 | 0 | 2 | −1 | 6 | 2 | 6 | 0.82 | 0.82 |

113 | 4 | −7 | 2 | −4 | 7 | −4 | 0.64 | 0.64 |

42 | 4 | −3 | −3 | 7 | 5 | 7 | 0.93 | 0.93 |

9 | 1 | −7 | −4 | 4 | 3 | 4 | 0.83 | 0.83 |

47 | 7 | −7 | −6 | −6 | −4 | −6 | 0.63 | 0.63 |

Player 1 was “not nice”: The | ||||||||

66 | −6 | −2 | 1 | −4 | −4 | −4 | 0.5 | 0.5 |

21 | −6 | 2 | 0 | −7 | 8 | −7 | 0.63 | 0.63 |

65 | −7 | −4 | −4 | −5 | −6 | −5 | 0.29 | 0.71 |

110 | 1 | 7 | 0 | 2 | 3 | 2 | 0.79 | 0.79 |

8 | 0 | 2 | −5 | 1 | 2 | 1 | 0.83 | 0.83 |

The baseline models, the estimated parameters, and the MSD scores by player.

-- | 0.0529 | 0.0105 | |

_{1}∼U[0, 0.01], _{1}∼U[0, 0.05],_{2}∼U[0, 0.05], _{2}∼U[0, 0.05],_{1}_{2}= 2.1 |
0.0307 | 0.0099 | |

_{1} = 0.36,_{2} = 0, _{1}_{2}= 0.7,_{1} = 0.05 |
0.0367 | 0.0100 | |

_{1} = 0.05, _{1} = 0, _{1}= 0.6,_{2} = 0.05, _{2} = 0.05, _{2} = 2.9 |
0.0292 | 0.0041 | |

_{R}_{L}_{W}_{M}_{J}_{D}_{R}_{N}_{J}_{W}_{D} |
0.0121 | 0.0029 |

The ENO of the baseline models. The parameters of the models were estimated based on the first set of 60 games, and the ENO scores were calculated based on the second set.

Subgame Perfect Eq. (SPE) | -- | 1 |
0.0535 |
0.0138 |
0.87 |
6.16 |

Inequality aversion | _{1}∼U[0, 0.01],_{1} ∼U[0, 0.05],_{2}∼U[0, 0.05],_{2} ∼U[0, 0.05],_{1}= 0.5, _{2} = 2.1 |
1 |
0.0339 |
0.0120 |
4.39 | 10.70 |

ERC | _{1} = 0.40,_{2} = 0, _{1}= 0.13,_{2}= 0.63,_{1} = 0.05 |
1 |
0.0402 |
0.0133 |
3.38 | 11.39 |

CR | _{1} = 0, _{1} = 0,_{1}= 0.6, _{2} = 0.05,_{2}= 0.05,_{2}= 2.9 |
1 |
0.0348 |
0.0044 |
5.02 | 28.56 |

Seven Strategies | _{R}= 0.446, _{L}_{W}_{M}_{J}_{D}_{R}_{N}_{J}_{W}_{D} |
1 |
0.0100 |
0.0030 |
9.08 | 66.84 |

This research was supported by a grant from the U.S.A.–Israel Binational Science Foundation (2008243).

The algorithm generates 60 games in a way that ensures that each of the 10 “game types” from

f1, f2, f3, s1, s2, s3: The parameters of the game as defined in

The basic set: {−8, −7, −6, −5, −4, −3, −2, −1, 0, +1, +2, +3, +4, +5, +6, +7, +8}

fmax = max(f1, f2, f3)

smax = max(s1, s2, s3)

fbest = 1 if f1 = fmax; 2 if f2 = fmax > f1; 3 if f3 = fmax > f1 and f3 > f2


sbest = 1 if s1 = smax; 2 if s2 = smax > s1; 3 if s3 = smax > s1 and s3 > s2

f(x) the payoff for Player 1 in outcome x (x = 1, 2, or 3)

s(x) the payoff for Player 2 in outcome x (x = 1, 2, or 3)

Trivial game: A game in which (f1 = fmax and s1 = smax) or (f1 = f2 = f3) or (s1 = s2 = s3)

Draw the six payoffs repeatedly from the basic set until:

For Game 1 (c.i: “common interest”): there is an x such that f(x) = fmax and s(x) = smax

For Game 2 (s.d: “strategic dummy”): s2 = s3 and f2 = f3

For Game 3(“dictator”): s2 = s3 and f2 = f3 and f2 > s2

For Game 4 (s.s: “safe shot”): f1 ≤ min(f2, f3)

For Game 5 (n.d: “near dictator”): fbest = 1

For Game 6 (c.p: “costly punishment”): sbest = 1 and (f2 < f1 < f3 or f3 < f1 < f2) and

s(fbest) = max(s2, s3), and s2 ≠ s3

For Game 7 (“ultimatum”): f1 = s1 and sbest = 1 and (f2 < f1 < f3 or f3 < f1 < f2), and

s(fbest) = max(s2,s3), and s2 ≠ s3

For Game 8 (f.p: “free punishment”): sbest = 1 and (f2 < f1 < f3 or f3 < f1 < f2) and

s(fbest) = max(s2,s3), and s2 = s3

For Game 9 (r.p: “rational punishment”): sbest = 1 and (f2 < f1 < f3 or f3 < f1 < f2) and

s(fbest) < max(s2,s3)

For Game 10 (f.h: “free help”): sbest > 1 and f1 > min(f2,f3) and f(sbest) = f1

For Game 11 (c.h. “costly help”): s1 < min(s2,s3) and f(sbest) = min(f1,f2,f3) and fbest > 1

For Game 12 (tr: “trust”): s1 < min(s2, s3) and min(f2,f3) < f1 < max(f2,f3) and f(sbest) = min(f1,f2,f3) and s(fbest) < max(s2,s3)

For Games 13 to 60: randomly select (with equal probability, with replacement) six payoffs from the “basic set”; if the game turns out to be “trivial,” erase it and sample again.
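The rejection-sampling step for games 13 to 60 can be sketched as follows (a minimal illustration of the “trivial game” filter defined above; function names are ours):

```python
import random

PAYOFFS = list(range(-8, 9))  # the "basic set": -8, -7, ..., +8

def is_trivial(f1, f2, f3, s1, s2, s3):
    """A game is trivial if outcome 1 is best for both players,
    or if one of the players is indifferent between all outcomes."""
    return ((f1 == max(f1, f2, f3) and s1 == max(s1, s2, s3))
            or f1 == f2 == f3
            or s1 == s2 == s3)

def sample_game(rng=random):
    """Draw six payoffs with replacement from the basic set until the
    resulting game is non-trivial."""
    while True:
        game = tuple(rng.choice(PAYOFFS) for _ in range(6))
        if not is_trivial(*game):
            return game
```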

In this experiment you will make decisions in several different situations (“games”). Each decision (and outcome) is independent of each of your other decisions, so that your decisions and outcomes in one game will not affect your outcomes in any other game.

In every case, you will be anonymously paired with one other participant, so that your decision may affect the payoffs of others, just as the decisions of the other people may affect your payoffs. For every decision task, you will be paired with a new person.

The graph below shows an example of a game. There are two “roles” in each game: “Player 1” and “Player 2”. Player 1 chooses between L and R. If he/she chooses L the game ends; Player 1's payoff will be $x, and Player 2's will be $y.

If Player 1 chooses R, then the payoffs are determined by Player 2′s choice.

Specifically, if Player 2 selects A then Player 1 receives $z and Player 2 receives $k. Otherwise, if Player 2 selects B then Player 1 gets $i and Player 2 gets $j.

When you make your choice you will not know the choice of the other player. After you make your choice you will be presented with the next game, without seeing the actual outcomes of the game you just played. The different games will involve the same structure but different payoffs. Before the start of each new game you will receive information about the payoffs in that game.

Your final payoff will be composed of a starting fee of $20 plus/minus your payoff in one randomly selected game (each game is equally likely to be selected). Recall that this payoff is determined by your choice and the choice of the person you were matched with in the selected game.

Good Luck!

Each of those games has been studied extensively with different variations.

This structure follows the structure of previous competitions we organized on other research questions [

We checked the data for potential “order of game” effect but no such effect was found.

Previous research that compares the strategy method to a sequential-decision method shows little difference between the two [

The SPE prediction for Player 2 is 0 (

In order to clarify this measure, consider the task of predicting the entry rate in a particular game. Assume that you can base your estimate on the observed rate in the first

Falk et al. used the strategy method. In a similar study, Güth

We started by estimating a variant of the model using the original distribution values reported by Fehr and Schmidt [

This assumption is slightly different from the original model, which assumed that Player 1 knows the distribution of Player 2's types and maximizes his/her own utility taking the whole distribution into account.

We also estimated a version of the ERC model that includes individual differences in but this version did not improve the MSD score.

We also estimated a version that includes individual difference in but this version did not improve the MSD score.

An exception is the subgame perfect equilibrium predictions. Since this model is free of parameters, ENOs were computed for each of the two sets.

Note: A dictator-like game is defined by this algorithm as a special case of strategic dummy in which the first mover's payoffs are higher than the second mover's.