1. Introduction
Varied organizations, such as police forces, universities and even Starbucks Coffee, have recently started implementing implicit bias trainings. This trend speaks to recent advances in the social sciences suggesting that racial animus stems from subtle cognitive processes, which we carry without awareness or conscious direction [1,2]. These processes are called implicit biases and associations. The Implicit Association Test (IAT, discussed further below) introduced a tractable measure of implicit biases that does not rely on unreliable self-reporting mechanisms, thus lending support to these suggestions. However, the extent to which IAT scores, and implicit biases more generally, are able to predict behavior is unknown.
In this paper, I use a laboratory experiment to examine implicit bias as a predictor of pro-social behaviors. These behaviors are economically important, and a growing body of work suggests a relationship between bias and giving [3,4]. Pro-social behaviors are both non-strategic and non-spontaneous, and therefore easily controlled by laboratory subjects. Accordingly, we can think of these behaviors as bounding the set of actions into which implicit bias can map. Additionally, I highlight social pressure as the potential reason this relationship may not be so straightforward, as social pressure can weaken the extent to which implicit bias maps into more costly economic behavior.
Specifically, I extend the design of Lazear et al. [5] (LMW) and implement a dictator game with sorting, paired with an IAT, to study the effect implicit bias has on both the decision to give and social pressure as a motive for giving. I find that one’s implicit bias predicts neither the decision of whether to give nor the decision of how much to give.
I frame these questions in the context of the motives for giving argument, which states that concerns other than altruism or fairness may impact giving [6]. A natural implication of implicit bias in this framework is that if social pressures are strong enough, spiteful dictators may still give. Additionally, I explore alternative avenues through which one’s implicit bias can correlate with giving, such as a sorting option. This option allows (potentially biased) dictators to avoid these pressures by opting out of the task entirely. This approach allows me to better identify the biases of different sharers and how those biases manifest in the market. The results remain strikingly consistent: implicit bias does not predict sorting out of the giving environment, meaning that individuals with higher levels of implicit bias are no more likely to sort out than those with lower levels of bias. These results help to rule out the attenuating effects social pressure may have on dictator game decisions, given a certain level of implicit bias.
This paper represents a necessary first step in this line of research. Through the lab environment, I am able to show how the provision of racial information affects the decision to act on a bias. Though not used to examine bias prior to this study, the motives for giving argument and the particulars of a sorting design apply nicely to this line of inquiry. A notable feature of sorting designs is that they allow us to identify the marginal giver [7]. Recent work in other domains (e.g., college admissions) has demonstrated that detecting bias requires comparing marginal rather than average entrants [8]. Further, people are unwilling to make judgments about individuals of different races [9]. It follows that implicit bias should influence the entry decision.
This paper contributes to the laboratory literature in several novel ways. First, it extends the research on sorting in dictator games by examining bias as a potential pathway of reluctance to share and social pressure as a potential moderator of that bias. This paper is unique in allowing different sorting options in a dictator game while also providing racial information about the recipients. In doing so, this paper also adds to the discussion on bias in the lab (notable examples of which are Fershtman and Gneezy [10], Ferraro and Cummings [11], Ben-Ner et al. [12] and Slonim and Guillen [13]). In comparing IAT scores to both the observed rates of giving and differential exits, I am able to provide convincing economic magnitudes on both the nature and direction of implicit bias. To the best of my knowledge, this is the first laboratory paper to examine the psychological pathways of racial bias by using the IAT. This represents an important step forward, as different pathways may lead to different economic behaviors and therefore have different policy prescriptions.
However, the contribution of this paper is not intended to be limited to laboratory experiments. In fact, it dovetails with the nascent field work on discrimination and implicit bias, which mostly occurs outside the U.S. and has yet to reach significant consensus. Price and Wolfers [14] argue that implicit bias explains discriminatory behavior amongst NBA referees. In Africa, Lowes et al. [15] find evidence of ethnic homophily in the DRC capital, though Berge et al. [16] find little evidence of ethnic biases in Nairobi. In Europe, implicit bias has been predictive of negative hiring conditions [17] and job performance [18]. The alleged interaction between implicit bias and labor market decisions suggests a role for further economic analysis in other areas of decision-making, including the open questions of pro-social behavior.
2. Background
Since bias cannot be randomly assigned, the issue of how it can be measured remains open, particularly when subjects may be unaware of the biases they hold. Social psychologists Greenwald, McGhee and Schwartz first described implicit bias as automatic and analogized its mechanics to those of a reflex. As such, they claimed to be able to test for it using their “Implicit Association Test”, explained in their seminal paper as follows:
An implicit association test (IAT) measures differential association of 2 target concepts with an attribute. The 2 concepts appear in a 2-choice task (e.g., flower vs. insect names), and the attribute in a 2nd task (e.g., pleasant vs. unpleasant words for an evaluation attribute). When instructions oblige highly associated categories (e.g., flower + pleasant) to share a response key, performance is faster than when less associated categories (e.g., insect + pleasant) share a key.
In sum, the IAT consists of four timed sorting tasks. In it, subjects match features (such as faces) to more and less associated attributes (such as good or bad words). Allegedly, it is easier for an experimental subject to sort any feature with its more closely associated attributes. For instance, a picture of a chair is more closely associated with the word “furniture” than the word “food”, and hence more likely to be sorted faster as such. It is through this primitive of differential timing that one reveals her/his implicit biases. Table 1 shows the progression of IAT tasks; screen captures of these tasks are presented in Supplementary Materials S1.
The standard scoring metric for the IAT, the D-Score [22], is similar to Cohen’s measure for effect size, d. It is calculated in Equation (1) as the difference in mean latencies between the two test blocks divided by the pooled standard deviation of latencies across those blocks:

$$D = \frac{\bar{t}_{\text{incompatible}} - \bar{t}_{\text{compatible}}}{SD_{\text{pooled}}} \qquad (1)$$

For the purposes of IAT scoring, only paired trials are considered test blocks.
Accordingly, the D-Score can be either positive or negative. In the case of a black-white IAT, a positive score indicates a preference for whites, and vice versa for a negative score. A score at or around zero indicates little or no preference. The authors further classify and interpret the IAT using Cohen’s [23] conventional measures for effect size, binning D-scores into ‘slight’, ‘moderate’ and ‘strong’ associations. For the remainder of the paper, I will continue to use these bins to categorize IAT scores.
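To make the scoring and binning concrete, the following is a minimal Python sketch that computes a simplified D-score from the latencies of the two paired test blocks and assigns conventional bin labels. The simulated latencies and the 0.15/0.35/0.65 cutoffs are illustrative assumptions, not the exact scoring algorithm used in this study.

```python
import numpy as np

def d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D-score: difference in mean latencies between the two paired test
    blocks, divided by the pooled standard deviation of all test-block latencies."""
    compatible_ms = np.asarray(compatible_ms, dtype=float)
    incompatible_ms = np.asarray(incompatible_ms, dtype=float)
    mean_diff = incompatible_ms.mean() - compatible_ms.mean()
    pooled_sd = np.concatenate([compatible_ms, incompatible_ms]).std(ddof=1)
    return mean_diff / pooled_sd

def bin_label(d, cuts=(0.15, 0.35, 0.65)):
    """Map |D| to 'slight'/'moderate'/'strong' bins (cutoffs assumed here)."""
    a = abs(d)
    if a < cuts[0]:
        return "little or no association"
    if a < cuts[1]:
        return "slight"
    if a < cuts[2]:
        return "moderate"
    return "strong"

# Illustrative latencies (milliseconds) for one subject's two paired test blocks.
rng = np.random.default_rng(0)
compatible = rng.normal(700, 150, size=40)      # e.g., white + pleasant share a key
incompatible = rng.normal(800, 150, size=40)    # e.g., black + pleasant share a key
d = d_score(compatible, incompatible)
print(round(d, 3), bin_label(d))
```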
Implicit bias testing has worthwhile aims. If efficacious, the IAT allows us to observe both the magnitude and direction of a personal bias, which people may not know or may be unwilling to divulge. Implicit bias appears to be pervasive, with some studies even showing extant anti-black biases among minority groups [24]. Oft-cited examples of these biases in decision-making include decisions regarding managerial hiring or where to direct budget cuts [25,26].
Meta-analyses seem to illustrate that implicit biases are persistent [1,28], yet these claims are also contested. For instance, Norton et al. [9] hypothesize that not wanting to appear biased (or wanting to appear race neutral) can cause a “race-paralysis” in IAT tasks, while Arkes and Tetlock [30] highlight non-prejudicial reasons for ‘failing’ IAT scores. Further, Forscher et al. [31] find that changing one’s IAT score does not bring about a behavioral change. A potential reason for these conflicting findings is that external forces, such as social pressures, may mitigate this bias. This reason has important implications for both lab and field results.
While it is understandable why the IAT and similar tests use the primitive of timing, we need something stronger and more applicable in order to draw economic conclusions. In other words, the question of interest is not whether implicit bias exists, but whether this bias predicts discrimination in marketplace behaviors, and if so, what are the dosage implications? That is, how much more do severe (as opposed to moderate or slight) levels of implicit bias impact economically relevant decisions, and how are they mitigated by outside forces?
Giving behaviors offer a simple first pass at these questions. Despite their apparent simplicity, the underlying motives for giving behaviors remain elusive [32]. For instance, Camerer [33] notes the relatively high rates (10–30%) of dictator giving in the lab. Reluctance to give in general [34] and social pressures specifically have been posited to explain not only this disparity between field and laboratory giving [6], but also discretionary behavior more generally [35]. Extant lab results support social pressure as a motive to give [36,37]. To this end, sorting environments paired with a dictator game allow for much cleaner identification of bias in pro-social behavior. Here, since the second player is passive, it is much more likely that differences in giving or sorting are due to bias rather than to strategic concerns.
Given the pervasiveness of implicit bias, what are its implications for both laboratory and naturally occurring behavior? I use the toolbox of experimental economics to see if (and to what extent) IAT performance is related to differential treatment of receivers in a dictator game. In doing so, I examine the IAT as a predictor of pro-social behavior in an experimental market. This behavior includes giving, as well as sorting out of potential giving environments.
To properly ask (and answer) these questions, this experiment necessarily progresses in two stages: first, a dictator game (potentially with a sorting option) and, second, a race (black-white) IAT. The race IAT has yet to be used in the economics literature, despite the fact that race relations remain one of American society’s most divisive issues.
3. Experiment
3.1. Procedures
In a standard dictator game, subjects are randomly split into receivers and dictators upon arriving at the lab. A first mover is given $10 and asked how much she/he would like to give to a paired (and passive) receiver. Her/his choice ends the game. Thus, giving in this game is non-strategic. I use this standard (no information) treatment to gauge dictator giving without information on the race of the recipient.
This experiment differs from a standard game in that some treatments employ a sorting environment. Specifically, in those treatments, I offer dictators an exit option as in LMW. In other words, dictators in those treatments are given a chance to leave the game in such a way that the passive player never knows he or she was playing a dictator game. In doing so, I aim to disentangle social pressure as a motive for giving. This option can be either costly or free, where the costly option is necessarily payoff-dominated by at least one dictator game choice.
Finally, I run these treatments in two types of sessions: sessions with no information (anonymous) and sessions with pictures, where dictators can see with whom they are paired and use that picture as a proxy for race. In later estimations, race is controlled for using self-reported survey data. For the most part, we are concerned with outcomes in the “pictures” sessions. However, the anonymous treatments serve as an interesting comparison and are necessary for commenting on the social closeness afforded by a picture. Further, the cross between picture sessions and sorting treatments allows us to see whether implicit bias affects behavior on either the extensive or intensive margin; that is, the decision to engage in giving, as well as how much to give.
As such, this experiment necessitates a 2 × 3 design. The treatment cells are as follows: a standard (baseline) dictator game and two dictator games with sorting, costless and costly. In costless sorting, the dictator receives the same amount upon entry and exit ($10). In costly sorting, the dictator receives $9 upon exit. These dictator games are all played across both anonymous and pictured sessions. The treatment cells and the number of dictators who participated in each treatment are described further in Table 2, as well as in the data section below. There is no significant difference in the dispersion of participant decisions between treatments (Bartlett’s test).
After roles are assigned, the dictators are randomly paired with a receiver and, in the pictures treatments, are first shown a picture of that receiver’s face. The pictures serve as a proxy for race. In the no information treatments, dictators are not informed about their receivers in any way. In all treatments, the rules of the dictator game are then explained to the dictators. In the sorting treatments, dictators are asked whether or not they choose to participate. In the event that a dictator elects not to participate (takes the exit option), that dictator’s receiver is not given any information about the allocation task, and the dictator is given the exit fee ($9 or $10, depending on treatment). Otherwise, the dictator decides how to allocate a sum of $10 between herself/himself and her/his paired receiver.
Meanwhile, the receivers are passive in their role. They have their pictures taken (pictures treatment only), are guaranteed a show-up fee and are asked to participate in a different task. In this case, that task is a real-money risk-preference elicitation [39]. The receiver task is constant across treatments. The next task in the experiment is a race IAT (as described above), assessed on all subjects. I run this task second because the act of taking an IAT can prime someone to behave differently in a dictator task. However, knowing they have just participated in a dictator game should not influence the IAT score, which is difficult to fake or otherwise manipulate [40]. I then close by collecting demographic data in the form of a survey and pay subjects privately. Complete subject instructions and survey questions can be found in Supplementary Materials S2 and S3, respectively.
3.2. Data
These experiments were conducted during the summer and fall of 2015 at the Center for Experimental Economics at Georgia State University (ExCEN). Subjects were recruited via email using the center’s recruiter. Overall, I ran 17 experimental sessions across the 6 treatments, with a roughly equal balance of subjects across treatment rows. In total, 228 dictators (i.e., 456 subjects) participated in the experiment.
Table 3, Panel A, describes the demographic breakdown of the dictators. Dictators in this experiment are on average 22 years old, with a 3.3 GPA. Roughly 72% are black, and 40% are male. Most have previous experience in economics experiments, and the modal year in school is senior. I strove for sessions to be racially balanced; however, this was not possible given the makeup of the subject pool. While at first blush this racial imbalance may appear to be problematic, I believe it is not, for several reasons: past research on similar subject pools [41], the previous evidence on extant biases among minority groups [24], the one-on-one nature of the experimental design and the fact that ExCEN was selected particularly for its population of minority students. Robustness checks examining differences in cross-racial giving support these results.
Similarly, Table 3, Panel B, provides a brief description of dictator choices and performance in the experiment. On average, 27% of the endowment was passed, and a little more than 18% of those offered an exit option opted out, with more people exiting when it was costless. Rank-sum tests showed that sorting significantly decreases sharing, even when sorting is costly. Full distributions of amounts passed are illustrated further in Figure 1. There is meaningful implicit bias in the sample. Over 44% of dictators have an IAT score greater than or equal to 0.15, indicating a pro-white implicit bias. Consonant with past work, this bias is present and perhaps even stronger in subjects who identify as black, with a mean score of 0.162. I depict the distribution of these scores in Figure 2 for further exploration. Both the amounts of endowment passed and the distribution of IAT scores are consistent with extant results across a variety of subject pools.
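For readers wishing to replicate these comparisons, the sketch below shows how the rank-sum (Mann-Whitney) tests on amounts passed and the Bartlett dispersion check from the previous section could be run; the data frame and its treatment and amount_passed columns are hypothetical.

```python
import pandas as pd
from scipy.stats import mannwhitneyu, bartlett

def sharing_tests(df: pd.DataFrame):
    """Rank-sum tests of amounts passed (baseline vs. each sorting treatment) and a
    Bartlett test of equal dispersion across all treatment cells."""
    baseline = df.loc[df["treatment"] == "baseline", "amount_passed"]
    out = {}
    for t in ("costless_sorting", "costly_sorting"):
        alt = df.loc[df["treatment"] == t, "amount_passed"]
        out[t] = mannwhitneyu(baseline, alt, alternative="two-sided")
    groups = [g["amount_passed"].to_numpy() for _, g in df.groupby("treatment")]
    out["bartlett"] = bartlett(*groups)
    return out

# Example: for name, res in sharing_tests(dictator_df).items(): print(name, res)
```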
The current study reports the effects of implicit bias (as scored by the IAT) on behaviors in a modified dictator game. However, the main findings are that implicit bias is not predictive of behavior, and as such, many of the results that follow are null results.
Accordingly, I report the results of power analyses for differences in giving at different effect sizes. In the pictures treatments, 68 out of 175 dictators scored as biased against their receivers. At a 5% significance level, the power to detect a difference in giving of 1, 5 and 10 percent of one’s endowment is 0.087, 0.41 and 0.88, respectively. A power of 0.8 is generally considered acceptable. Thus, this sample size is roughly large enough to detect differences in giving of 8% of one’s endowment. This is consistent with effect sizes found in Lane’s [42] meta-analysis of biased behavior in the economics lab. Further, the findings in this paper are robust to a variety of specifications and controls. Regardless, one should always be cautious in interpreting non-significant results, and implications for future research are discussed further in the Conclusion.
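The power figures above can be approximated with a two-sample t-test calculation, as in the sketch below. It assumes a standard deviation of giving of roughly 20 percentage points of the endowment; that SD is my assumption, so the output only roughly reproduces the reported numbers.

```python
from statsmodels.stats.power import TTestIndPower

n_biased, n_other = 68, 107           # dictators in the pictures treatments
sd_giving = 20.0                      # assumed SD of percent of endowment passed
analysis = TTestIndPower()

for diff in (1, 5, 10):               # hypothesized differences in percent passed
    effect_size = diff / sd_giving    # Cohen's d under the assumed SD
    power = analysis.power(effect_size=effect_size,
                           nobs1=n_biased,
                           ratio=n_other / n_biased,
                           alpha=0.05,
                           alternative="two-sided")
    print(f"difference of {diff} pp -> power {power:.2f}")
```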
4. Discrete Results
We have seen descriptively that sorting environments affect giving behaviors, but what is the role of implicit bias (as ostensibly measured by the IAT) in making these economic decisions? Table 4 shows the mean pass broken down by both the strength of the association and the race of the recipient. First, these differences are not significant. Second, if implicit bias had a one-to-one mapping into giving behaviors, we would expect endowments passed to black subjects to shrink as favorable bias towards white subjects increases, and we would expect the opposite pattern for white recipients. However, these directional patterns did not emerge, particularly for black recipients. Here, those who had dictators biased against them ended up earning more on average.
Given Table 4, I discretize the IAT score into the blunt question of “does one (implicitly) like or dislike the recipient?” and regress an outcome variable on the two variables Like and Dislike. I express this question in Equation (2):

$$\text{Outcome}_i = \beta_0 + \beta_1\,\text{Like}_i + \beta_2\,\text{Dislike}_i + \varepsilon_i \qquad (2)$$

The outcome takes the form of either a continuous variable representing the percent of endowment shared or a binary variable indicating whether a dictator took an exit option. The two variables Like and Dislike are essentially binary interaction terms, defined formally in Equation (3):

$$\text{Like}_i = \mathbf{1}\{\text{pro-white bias and white receiver, or pro-black bias and black receiver}\}, \quad \text{Dislike}_i = \mathbf{1}\{\text{pro-white bias and black receiver, or pro-black bias and white receiver}\} \qquad (3)$$

That is, to “like” one’s receiver means either to hold a pro-white bias and pass to a white receiver or to hold a pro-black bias and pass to a black receiver, with “dislike” similarly defined. Accordingly, the intercept term, $\beta_0$, represents those dictators who hold little to no implicit bias (for whom both Like and Dislike equal zero).
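A minimal Python sketch of how the Like and Dislike indicators and the regression in Equation (2) could be constructed follows; the column names (iat_d, receiver_black, pct_shared) and the 0.15 cutoff for holding a directional bias are illustrative assumptions, not the paper's exact coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dictator-level data: signed D-score, receiver race, percent of endowment shared.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "iat_d": rng.normal(0.2, 0.4, 200),
    "receiver_black": rng.integers(0, 2, 200),
    "pct_shared": rng.uniform(0, 50, 200),
})

# Assumed cutoff for holding a directional bias (the 'slight' threshold).
pro_white = df["iat_d"] >= 0.15
pro_black = df["iat_d"] <= -0.15

# Like: pro-white bias paired with a white receiver, or pro-black bias with a black receiver.
df["like"] = ((pro_white & (df["receiver_black"] == 0)) |
              (pro_black & (df["receiver_black"] == 1))).astype(int)
# Dislike: the mirror-image pairings.
df["dislike"] = ((pro_white & (df["receiver_black"] == 1)) |
                 (pro_black & (df["receiver_black"] == 0))).astype(int)

# Equation (2): outcome on Like and Dislike; the intercept picks up the (near-)unbiased dictators.
print(smf.ols("pct_shared ~ like + dislike", data=df).fit().summary())
```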
Results from these discrete estimations are presented in Table 5. In the first column, we find that unbiased givers shared about 23% of their endowment, and a bias against (or in favor of) one’s receiver did not significantly alter this giving pattern. Furthermore, both directions of bias remained insignificant when decomposing the Like and Dislike variables into their component parts in Column 2.
Table 5, Columns 3 and 4 repeat this exercise to look at how bias influences the probability of opting out. In both the blunt (Column 3) and decomposed (Column 4) measures, neither liking nor disliking one’s receiver had any significant impact on remaining in the game.
While these results indicate that bias did not affect the giving decision on average, a relevant question is whether dictators who are biased against black (white) receivers behaved differently than the average dictator. Here, I restrict the sample based on the race of the receiver and regress only on the Dislike variable. These results are shown in Table 6.
Even when we isolate the sample by race of receiver, biased dictators were not behaving in ways that were statistically different than the average dictator, nor was this difference economically significant in that the differences were less than pennies on the dollar. While the above results are indicative that subjects were not acting on their implicit bias in this context, another way of looking at the question is to ask what is the treatment effect of being paired with someone against whom you hold a bias?
Here, I exploit the random assignment of both roles and partners and implement propensity score matching. Since each person had a particular bias strength and direction, I matched on the strength and direction of this bias, as well as on covariates describing the dictator’s age, race and sex, and specified the treatment as passing to a receiver against whom the dictator holds a bias; that is, passing to a black receiver if the dictator holds a pro-white bias and vice versa. I found no significant treatment effect of passing to a receiver against whom the dictator is biased (ATT = 0.14, p = 0.625).
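A compact sketch of this matching exercise: estimate a propensity score for being paired with the object of one's bias, match each treated dictator to the nearest control, and average the matched outcome differences. The variable names and the simple one-to-one nearest-neighbor rule are assumptions for illustration, not the exact estimator used here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def match_att(df: pd.DataFrame) -> float:
    """Nearest-neighbor propensity score matching: 'treatment' is being paired with a
    receiver against whom the dictator holds a bias; returns the ATT on percent shared."""
    # 1. Propensity score from bias strength/direction and dictator demographics.
    ps = smf.logit("treated ~ iat_strength + iat_pro_white + age + black + male",
                   data=df).fit(disp=0)
    df = df.assign(pscore=ps.predict(df))
    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]
    # 2. One-to-one nearest-neighbor match (with replacement) on the propensity score.
    matched = [control.loc[(control["pscore"] - p).abs().idxmin(), "pct_shared"]
               for p in treated["pscore"]]
    # 3. ATT: treated outcomes minus their matched control outcomes.
    return treated["pct_shared"].mean() - float(np.mean(matched))

# Example: print(match_att(dictator_df))
```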
Result 1. The sign of IAT score did not predict dictator giving.
5. Continuous Results
However, since the IAT is measured as a continuous variable, we can comment not only on the existence of implicit bias, but also on the strength of that bias. As such, one would think that more severe biases would exert more influence on the giving and sorting decisions. To address this dosage question, I standardized the IAT score and outline the following reduced-form empirical specification in Equation (4):

$$\text{Outcome}_i = \beta_0 + \beta_1\,\text{IAT}_i + \beta_2\,(\text{IAT}_i \times \text{BlackReceiver}_i) + X_i'\gamma + \varepsilon_i \qquad (4)$$

This standardization allowed me to interpret coefficients as the effect of a one standard deviation increase in IAT score. In this specification, I again regressed an outcome variable on two variables of interest: that dictator’s IAT score and an interaction term of the dictator’s IAT score with the race of the recipient, as well as a vector of demographic controls for both dictators and receivers. The interaction term allows us to examine this giving conditional on being paired with the object of one’s bias. This interaction is also consistent with the assumption that implicit bias manifests as animus. The controls were necessary because observed differences in the outcome variable may have been driven by factors unrelated to a dictator’s implicit bias. Different specifications below highlight different sets of these parameters in my analysis.
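A minimal sketch of this reduced-form specification under hypothetical column names: standardize the D-score, interact it with an indicator for a black receiver and add demographic controls.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_continuous_spec(df: pd.DataFrame):
    """Equation (4)-style regression: outcome on the standardized IAT score, its interaction
    with an indicator for a black receiver, and demographic controls."""
    d = df.copy()
    # Standardize so coefficients read as the effect of a one-SD increase in the D-score.
    d["iat_z"] = (d["iat_d"] - d["iat_d"].mean()) / d["iat_d"].std()
    formula = ("pct_shared ~ iat_z + iat_z:receiver_black + sorting_option"
               " + dictator_black + dictator_male + receiver_black + receiver_male")
    return smf.ols(formula, data=d).fit(cov_type="HC1")

# Example: print(fit_continuous_spec(pictures_df).summary())
```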
5.1. Dictator Giving
Figure 3 shows the amount passed given a dictator’s IAT score. Despite the IAT’s popularity in academic work, there was no clear linear relationship between IAT score and amount passed. Further, throughout the distribution of IAT scores, there appeared to be a similar bimodal pattern of amounts passed, suggesting that levels of implicit bias do not necessarily correlate with the behaviors of interest.
To confirm these findings econometrically, we turn to Table 7, which presents this paper’s main estimates. In these models, I restricted the sample to only the pictures sessions. Additionally, I have used indicator variables for black dictators and black receivers, rather than a covariate for the race against which a subject is biased. While this may be a coarse measure, this modeling technique makes more sense in terms of coefficient interpretation, since the IAT score is increasing in the level of anti-black bias. Further, these results are consistent with the discrete estimations from Section 4 and robust to the alternate specification of “biased against receiver”.
In Panel A of Table 7, I start in Model (1) with a simple OLS and regress percent shared on one’s IAT score and that score’s interaction with passing to a black receiver, neither of which was a significant predictor of giving. These results held true in additional models, including specifications that control for the race and gender of the dictator, the receiver and both. Further, these controls also had no significant effect on giving.
However, the presence of a sorting option consistently and significantly decreased the amount shared by around 10%. This result suggests that in terms of giving behaviors, people were not acting on their implicit biases and perhaps were able to control any bias they may have held. Instead, social preferences unrelated to the IAT, especially pressure to give, appeared to be strongly influencing these pro-social behaviors (or lack thereof).
Next, to account for the share of dictators who either gave nothing or opted out, I replicated the OLS results with a left-censored Tobit model. These results are shown in Table 7, Panel B, and are not categorically different from the OLS results. That is, the coefficient on IAT score was positive but insignificant; the interaction term was negative and insignificant; and all controls were insignificant. However, the presence of a sorting option was strongly and negatively significant.
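Because standard Python econometrics libraries lack a built-in Tobit, a bare-bones sketch of a left-censored (at zero) Tobit estimated by maximum likelihood is given below; the outcome vector and design matrix are hypothetical placeholders rather than the estimation code used for Table 7.

```python
import numpy as np
from scipy import optimize, stats

def tobit_left0_nll(params, y, X):
    """Negative log-likelihood of a Tobit model left-censored at zero."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                       # keep the scale parameter positive
    xb = X @ beta
    censored = y <= 0
    ll = np.empty(y.shape[0])
    # Censored observations contribute Pr(latent outcome <= 0) = Phi(-x'b / sigma).
    ll[censored] = stats.norm.logcdf(-xb[censored] / sigma)
    # Uncensored observations contribute the normal density of the latent outcome.
    ll[~censored] = stats.norm.logpdf((y[~censored] - xb[~censored]) / sigma) - np.log(sigma)
    return -ll.sum()

def fit_tobit(y, X):
    """Maximize the Tobit likelihood; returns (beta_hat, sigma_hat)."""
    start = np.append(np.zeros(X.shape[1]), np.log(y[y > 0].std()))
    res = optimize.minimize(tobit_left0_nll, start, args=(y, X), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])

# Example (hypothetical): X holds a constant, the standardized IAT score, the interaction
# with a black receiver and demographic controls.
# beta_hat, sigma_hat = fit_tobit(df["pct_shared"].to_numpy(), X)
```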
Following LMW, I assess the determinants of sharing in Table 8. Specifically, I compare the relative importance of implicit bias (in Column 1) to the presence of the sorting option, as well as to self-reported demographics that could potentially affect sharing (in Column 2). Again, one’s amount of implicit bias did not significantly determine sharing. Magnitudes of these results were similar when I ran the full model, including the IAT score with demographic controls (Column 3). Additionally, I calculated coefficients of partial determination. This measure shows that not only did implicit bias lack statistical significance, but one’s IAT score accounted for less than 4% of the unexplained variance and lacked economic significance, as well.
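The coefficient of partial determination can be computed by comparing the residual sum of squares of the full model with that of a model omitting the IAT terms; a short sketch with hypothetical formulas follows.

```python
import statsmodels.formula.api as smf

def partial_r2(df, full_formula, reduced_formula):
    """Coefficient of partial determination: the share of the reduced model's unexplained
    variance that the dropped regressors (here, the IAT terms) account for."""
    sse_full = smf.ols(full_formula, data=df).fit().ssr
    sse_reduced = smf.ols(reduced_formula, data=df).fit().ssr
    return (sse_reduced - sse_full) / sse_reduced

# Example with hypothetical column names:
# partial_r2(df,
#            "pct_shared ~ iat_z + sorting_option + dictator_black + dictator_male",
#            "pct_shared ~ sorting_option + dictator_black + dictator_male")
```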
The above exercises held true when, instead of looking at the coarse measure of race of receiver, I looked at the finer measure of bias against one’s receiver. In Figure 4, I graph box plots for a further analysis of what happens when a dictator is biased against the race of the receiver. For these figures, that means both passing to a black receiver when biased against blacks, as well as passing to a white receiver when biased against whites.
Clearly, there is no difference in giving when I considered the whole sample in Figure 4a. However, this result also held when I considered only those dictators who did not take an exit option in Figure 4b. We will see a similar result regarding dictators choosing to opt out in the following subsection on Dictator Sorting.
Finally, I compared the picture treatments to the anonymous ones. Using rank-sum tests, amounts given by the dictator did not appear to be different across these two treatment rows. This held when we ignored baseline treatments and considered only those with a sorting option, or restricted the sample to dictators paired with the object of their bias. Since a dictator could not see the receiver in the anonymous treatments, it is unlikely that implicit racial bias came into play in this sharing decision. The lack of difference between the two treatment rows here is further indicative of the null results above.
Result 2. The magnitude of IAT score did not predict dictator giving.
5.2. Dictator Sorting
Perhaps the above results indicate that biased dictators are forward thinking with regard to these biases or otherwise self-aware enough to recognize their biases. If so, they may be simply choosing not to enter sharing environments where they can express this distaste or similarly choosing to express this distaste through their opting out. However, we see in Figure 5 the average IAT score for dictators in treatments with an exit option. Under costly sorting, the mean IAT score was descriptively smaller amongst those who stayed in (as compared to those who opted out), whereas in costless sorting, the mean IAT score was essentially the same. In both cases, however, this difference was not significant.
Accordingly, I estimate the probability of opting out in Table 9. This model uses a probit regression and necessarily restricts the sample to only those dictators with an exit option (that is, those in the sorting treatments). The variable structure is intended to mimic the experimental design, using dummy variables for treatment and a measurement variable for the IAT score. In this model, there were no significant coefficients, suggesting that, overall, one’s IAT score did not seem to influence the decision to sort out, with this result holding even when controlling for both the financial and social costs of sorting.
Nonetheless, this exploration again calls for a deeper analysis. Following Equation (4), I looked at the econometric results to confirm. In this case, I ignored the anonymous treatments and ran probit estimations to determine what effect (if any) the IAT score had on the probability of opting out. Table 10 shows the marginal effects of these estimations. Consistent with the results above, the IAT had no significant effect on sorting. This held when I controlled for whether the sorting was costly and for the race and gender of the dictator, the receiver and both. Similar to the analysis under dictator giving, the signs of these coefficients were also unexpected. We see that more biased dictators opted out less frequently: specifically, an increase in IAT score of one standard deviation led to roughly an 11% smaller chance of opting out.
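A minimal sketch of the opt-out probit and its average marginal effects, under hypothetical column names and restricted to dictators who faced an exit option:

```python
import statsmodels.formula.api as smf

def opt_out_margeff(df):
    """Probit of opting out on the standardized IAT score, its interaction with receiver race,
    whether sorting was costly, and demographics; returns average marginal effects."""
    fit = smf.probit(
        "opted_out ~ iat_z + iat_z:receiver_black + costly_sorting"
        " + dictator_black + dictator_male + receiver_black + receiver_male",
        data=df,
    ).fit(disp=0)
    return fit.get_margeff(at="overall").summary()

# Example, restricted to dictators who faced an exit option:
# print(opt_out_margeff(df[df["sorting_option"] == 1]))
```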
As a check, I examined what happens to sorting when a dictator is biased against the race of the receiver. In this case, I draw a bar graph in Figure 6. Confirming the results above, there was no evidence that bias had an effect on sorting, even when the dictator held an implicit bias against the receiver’s race.
Finally, we extended the cross-treatment exercise from above and compared anonymous sorting to sorting with pictures by way of Pearson’s test. Again, there was no statistical difference between opting out in the two treatment rows. This held in costly sorting and when passing to someone against whose race the participant holds a bias. As such, it is also unlikely that implicit bias was influencing giving on the extensive margin, inclusive of sorting decisions.
Result 3. The magnitude of IAT score did not predict sorting in or out of the dictator game.
6. Small Giving: A Robustness Check
Thus far, I have suggested that the IAT does not predict giving or sorting behaviors. However, in the context of social pressure, I have also left the door open for dictators to have awareness of their biases, meta-cognitive abilities with respect to them or both. This may suggest that differences in giving are more subtle than the ones suggested above. For instance, what if biased dictators still give, but their giving is concentrated in small(er) passes?
To test for this concentration, I utilize the Dislike variable from Equation (3) above, noting that this variable highlights cases of both pro-white and anti-white bias. I also generated dummy variables for various small gift amounts. I then ran Pearson’s chi-squared tests to see if giving in those small amounts was different for biased and non-biased dictators in each of the pictures treatments. Full results from these tests are depicted in Table 11.
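A short sketch of one such test: flag gifts at or below a threshold and run a Pearson chi-squared test of independence between the small-gift flag and the Dislike indicator. The column names and the $2 threshold are assumptions for illustration.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def small_gift_test(df: pd.DataFrame, threshold: float = 2.0):
    """Pearson chi-squared test: do 'small' positive gifts (0 < amount <= threshold) occur at
    different rates for Dislike dictators than for everyone else?"""
    d = df.copy()
    d["small_gift"] = ((d["amount_passed"] > 0) & (d["amount_passed"] <= threshold)).astype(int)
    chi2, p, dof, _ = chi2_contingency(pd.crosstab(d["dislike"], d["small_gift"]))
    return chi2, p, dof

# Example: run within each pictures treatment at the hypothetical $2 threshold.
# for name, grp in df[df["pictures"] == 1].groupby("treatment"):
#     print(name, small_gift_test(grp))
```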
Small giving does not differ between biased and unbiased dictators in any specification. This result suggests that biased giving is not concentrated in small gifts and lends further credence to the above discussion of dictator giving as a whole.
Result 4. Biased givers were no more likely to give small gifts.
Taken together, Results 1–4 indicate that social pressures are unlikely to explain biased acts of giving. While the extent to which implicit bias affects behavior remains an open area of debate, these results can similarly be thought of as bounding the sort of behaviors they affect.
7. Conclusions
For millions of Americans, racial bias is a persistent concern. In the past two decades, a proposed method of detecting it, the Implicit Association Test, has caught fire among the academics who study bias. At the time of this writing, the IAT’s original paper has over 10,000 citations, with researchers claiming it has implications for all sorts of economic outcomes, from workplace discrimination and managerial behavior to egalitarian ideals and general social welfare. Yet, economists have only recently started to explore these claims in detail.
In this paper, I have undertaken an in-depth examination of one of those claims in particular: that implicit (racial) bias is a predictor of pro-social behavior. I focused on these behaviors due to a growing literature suggesting the importance of the relationship between bias and pro-sociality. I then conducted a laboratory experiment to test how implicit bias maps into pro-social behaviors.
Specifically, I tested biased giving using a dictator game where acts of giving are both non-strategic and non-spontaneous, and therefore easily controlled by the subject. Additionally, in some treatments, I included a sorting (exit) option to examine social pressure as a motive for biased giving, when biased givers may simply prefer to avoid the environment altogether.
I find that, contrary to much of the previous literature where behavior has not been incentivized, implicit bias fails to predict giving on both the extensive and intensive margins. That is, not only does implicit bias not predict amounts shared in the dictator game, it also does not predict examples of zero sharing, or the choice to exit giving environments. Furthermore, these results hold not only in fine bins of analysis, but also in wider and more powerful ones, such as when I restrict my sample to small gifts or dictators paired with receivers against whom they hold an implicit bias.
To the best of my knowledge, this is the first paper to explore the implications of a race IAT in an economics experiment. As such, the analysis in this paper represents a necessary step forward in this line of research that previously consisted of fascinating, but unsubstantiated claims.
The dictator game is a compelling example in that it consists of a very simple economic decision. That the IAT fails to map into this class of decisions may be informative about settings where decisions are more complex or require more deliberation, such as hiring.
However, more research is needed as the dictator game is also a very clear-cut decision, and perhaps the IAT could be better used to predict so-called fuzzier or multi-level economic decisions, such as decisions made in groups or ones where the use of heuristics has been shown to play a prominent role.
Similarly, the present study is limited in that it used photographs to provide racial information. Of course, pictures convey much more information than a recipient’s race, including presented gender [44], prior friendship [45] and subtle facial cues [46]. Indeed, disentangling the effects of this information is an active area of research [47].
As stated above, however, in order to account for the potential effects of receiver gender, controls were included in the estimations, and receiver gender had no significant impact on dictator giving. Further, of the 175 dictators in sessions with pictures, only four (2.29%) had the same school year and major as their recipient, suggesting a lower likelihood of dictators being paired with those in their social circle. Last, while IRB protocols required pictures to be deleted immediately after the sessions, making coding facial cues impossible, the randomization process should divorce this confounder from the final result. Given that other common ways of making receiver race more salient, such as by replacing photographs with demographic descriptions, could expose the study to experimenter demand effects, the use of photographs in this study is justifiable. However, future research should investigate the isolated impact of receiver race.
As the popularity of the IAT grows in academia, so does its use in the public domain. As such, this paper also speaks to policy more broadly. Consider the example of jurisprudence, where the typical anti-discrimination statute requires proof that harmful actions were “because of” discrimination. More and more, implicit bias is being recognized as a source of this liability. For instance, in a recent Supreme Court case regarding the Fair Housing Act, Associate Justice Kennedy wrote for the majority that:
Recognition of disparate impact liability under the FHA also plays a role in uncovering discriminatory intent: It permits plaintiffs to counteract unconscious prejudices and disguised animus that escape easy classification as disparate treatment. In this way disparate-impact liability may prevent segregated housing patterns that might otherwise result from covert and illicit stereotyping. (Texas DoH v. ICP Inc. [48])
What this means is that bias is classified under the law as resulting in differential treatment even if one is not aware of the held bias, as with an implicit bias. Hence, we need further explorations of implicit bias, its potential to map into this sort of decision-making and its potential mitigating factors; otherwise, we could be establishing ineffective policies or, worse, actively harmful ones.