Article

Stanley Milgram’s Obedience to Authority “Relationship” Condition: Some Methodological and Theoretical Implications

Departments of Criminal Justice, Sociology, History, and Social Development and Family Studies, Nipissing University, 100 College Drive, North Bay, ON P1B 8L7, Canada
Soc. Sci. 2014, 3(2), 194-214; https://doi.org/10.3390/socsci3020194
Submission received: 28 January 2014 / Revised: 31 March 2014 / Accepted: 9 April 2014 / Published: 15 April 2014
(This article belongs to the Special Issue Social and Personal Relationships)

Abstract

In May 1962, social psychologist Stanley Milgram ran what was arguably the most controversial variation of his Obedience to Authority (OTA) experiments: the Relationship Condition (RC). In the RC, participants were required to bring a friend, with one becoming the teacher and the other the learner. The learners were covertly informed that the experiment was actually exploring whether their friend would obey an experimenter’s orders to hurt them. Learners were quickly trained in how to react to the impending “shocks”. Only 15 percent of teachers completed the RC. In an article published in 1965, Milgram discussed most of the variations on his baseline experiment, but only named the RC in passing, promising a more detailed account in his forthcoming book. However, his 1974 book failed to mention the RC, and it remained unpublished until François Rochat and Andre Modigliani discovered it in 1997 in Milgram’s personal archive at Yale University. Their overview of the RC’s procedure and results left a number of questions unanswered. For example, what were the etiological origins of the RC? Why did Milgram decide against publishing this experiment? And does the RC have any significant methodological or theoretical implications for the Obedience studies discourse? Based on documents obtained from Milgram’s personal archive, the aim of this article is to shed new light on these questions.

1. Introduction

In the early 1960s, American social psychologist Stanley Milgram officially conducted two dozen obedience to authority (OTA) experiments. The best-known experiment is probably the New Baseline (Condition 5), in which 65 percent of Milgram’s ordinary participants willingly followed the orders of an experimenter to inflict potentially lethal electric shocks on another innocent person. Milgram believed the “chief finding” of his research was “the extreme willingness of adults to go to almost any lengths on the command of an authority” ([1], p. 5). Many participants later explained that they genuinely did not feel responsible for their actions because they were just following an authority figure’s orders. Milgram noted that the Nazi war criminals typically offered the same explanation for their terrible deeds ([1], p. 8).
However, as one war crimes prosecutor suggested, “obeying orders” was probably a convenient excuse ([2], p. 80) that helped conceal the Nazi perpetrators’ personally more culpable motives, including careerism, greed, ambition, and racism, among many others. In an attempt to test the validity of the “just following orders” defence, one Nuremberg judge shrewdly asked a Nazi war criminal if:
“...after receiving an order...from a superior officer, to shoot your own parents, would you do so?” Averting his eyes from the other defendants...Seibert responded, “Mr. President, I would not do so...” In one sentence Seibert...had destroyed the defense’s case.
([3], pp. 336–37)
One might devilishly imagine that the judge’s question would have made for a very interesting variation on Milgram’s baseline procedure: having members of the same family as teachers and learners. Would Milgram’s results have been radically altered if, as the case of Willy Seibert shows, even Nazis were likely to have resisted orders to hurt those close to them? Although such a variation may strike one as incredibly unethical, running it would certainly have determined the outer limits of Milgram’s chief finding: people’s willingness to go to “almost any length” when commanded by an authority. However, in a classic case of fact proving stranger than fiction, soon after the opening of the Stanley Milgram Papers (SMP) at Yale University, 1 Rochat and Modigliani discovered that Milgram actually carried out such a variation, which he termed the Relationship Condition (RC) [4].
Although Rochat and Modigliani’s article provides a comprehensive description of the RC, several questions remain unanswered: what were the etiological origins of the RC? Why did Milgram himself not publish a detailed account of the RC? Does the RC have any potentially significant methodological or theoretical implications for the contemporary OTA research debates? Based on documents also obtained from Milgram’s personal papers, 2 this article examines these questions and their implications.

2. Origins and Procedural Overview of the RC

While reflecting on the Proximity Series (Conditions 1 to 4, in which the salience of the learner to the teacher was gradually increased, producing completion rates of 65, 62.5, 40, and 30 percent), Milgram stated in March 1962 that as the learner “is brought closer to the subject the relati ship [sic] between them strengthens...He becomes a real person, acquires a face, stands as a concrete individual. Proximity is important for [a] relationship…a husband and wife team...would have a disaterous [sic] effect on the power of” the experimenter over the participant. “Only a genuine relationship...based on identification, or marriage...[could] reverse these results”. Milgram then wondered: “How could this be fostered in the labora[t]ory?” 3
During the debriefing sessions of the earlier experimental variations, some participants informed Milgram that their decision to inflict every shock was influenced not just by the authority of the experimenter but also by his close institutional association with Yale University ([1], pp. 66–68). In May 1962 Milgram tested the veracity of this explanation by replicating the New Baseline experiment in the less imposing locale of a run-down office in the industrial town of Bridgeport, and under the more modest guise of a fictitious firm, “Research Associates of Bridgeport”. This experiment, termed the Institutional Context experiment (Condition 23), obtained a 48 percent completion rate. However, while at Bridgeport Milgram undertook a second experiment—the last of the 24 official experimental variations—the RC.
For the “RELATIONSHIP” Condition “Subjects were asked to bring a friend to the labor[a]to[r]y” 4, someone they had known for at least two years ([4], p. 238). In notes written soon after this experiment, Milgram explained:
When the subjects arrived, they did an actual drawing to determine who would be teacher and who would be learner. The learner was then taken in[to] the next room. The experimenter then conspicuously put him in the electric chair, and his friend looked on as instructions were given in the regular manner. 5
When both the experimenter and teacher left the room, Milgram would appear from a concealed location, informing the learner that the experiment was actually a test to see whether their friend would obey the experimenter’s orders to inflict seemingly severe electrical shocks on them.
Milgram unstrapped the learner, and stayed with him coaching him how to yell, using very closely, the [m]odel of McDonough’s [the usual learner] yelling…The purpose of the experiment was to see if the relationship of the teacher to the learner would be important in obedience and defiance [italics added]. 6
The results indicated that a relationship between the teacher and learner was, as Milgram predicted, a powerful variable capable of “revers[ing]” the usual “results”. In a sample of 20 pairs of friends, only three teachers inflicted every shock. The RC’s 15 percent completion rate was a full 50 percentage points lower than that of the New Baseline experiment. Not only did most RC participants disobey, but 80 percent did so before reaching the relatively low 195-volt switch.
Milgram clearly recognised the significance of this result. For example, during a debriefing session the experimenter, probably pointing at the 165-volt switch, stated: “This is about where everybody stops right here...in this condition with friends”. Then Milgram added: “It makes quite a difference see”. The participant replied: “I would imagine, yeah”. 7 In his notes Milgram even argued that the RC was “as powerful a demonstration of disobedience than [sic] can be found” ([5], p. 202). In 1962, the same year he ran the RC, Milgram wrote his article Some Conditions of Obedience and Disobedience to Authority [6], which according to Elms took a few years to appear in print ([7], p. 28). This article provided a detailed account of the Proximity Series, the Proximity of Authority Series, and the Institutional Context Condition. Milgram also mentioned by name, but did not describe, several “FURTHER EXPERIMENTS”, including the Women as Subjects Condition, three of the Role Permutation Series, the Group Effects Series, and, importantly, the RC ([6], p. 71). Milgram explained that these variations “will have to be described elsewhere, lest the present report be extended to monographic length” ([6], p. 71). In 1974 Milgram published his book Obedience to Authority: An Experimental View, which provided an overview of, and the results from, 18 conditions. In the book Milgram fulfilled his intention to elaborate on the above experiments with one major exception: the RC was excluded. If the RC was “as powerful a demonstration of disobedience” as “can be found”, why was it not included in Milgram’s book or published thereafter?

3. Milgram’s Failure to Publish the RC

There are two probable reasons why Milgram failed to publish a comprehensive overview of the RC. First, the RC was methodologically too different from the New Baseline to warrant publication. For example, there was (obviously) no mention in the RC of the learner having a heart condition. There was also great variability among the learners in terms of their acting skills and the verbal content of their responses to being “shocked”. And, uniquely, learners in the RC often used the participants’ actual names when protesting about being “shocked” ([5], p. 189). That the RC differed procedurally in too many ways from the New Baseline is, however, unlikely to explain Milgram’s failure to publish this experiment. He still published the entire Proximity Series (Conditions 1 to 4), which also differed methodologically in many ways from the New Baseline condition. 8 The second and most likely reason Milgram did not provide an in-depth overview of the RC was that he realised during 1964 that awareness of this particular experiment might strengthen his detractors’ mounting criticisms.
To clarify, half a year after the data-collection phase, on 23 November 1962, Milgram received a letter from the American Psychological Association (APA) that must have given him cause for great consternation: his membership had been delayed because a colleague at Yale had lodged an official complaint about the OTA experiments ([9], pp. 112–13). In 1963, Milgram’s first official OTA journal article, on the Remote Condition (Condition 1), was published (this paper was, for the most part, originally written in 1961) [10]. A year later, in 1964, Baumrind published her largely ethical critique of Milgram’s article [11], which took exception to the following admission:
I observed a mature and initially poised businessman enter the laboratory smiling and confident. Within 20 minutes he was reduced to a twitching, stuttering wreck, who was rapidly approaching a point of nervous collapse.
([10], p. 377)
Baumrind argued: “I do regard the emotional disturbance described by Milgram as potentially harmful because it could easily effect an alteration in the subject’s self-image” ([11], p. 422). Baumrind’s criticism had clearly struck a chord: in June 1964 Milgram wrote alongside a pencil sketch of a four-legged animal with a human head: “After reading Dr. Baumrind’s Article [sic] I feel bad”. 9 It is perhaps no coincidence that after Baumrind’s critique, Milgram began describing his participants’ pained reactions as “momentary excitement” ([12], p. 849), which Patten noted was “a most astonishing about-face” ([13], p. 356). Of course, if Baumrind was appalled at Milgram having undertaken the first and relatively benign Remote Condition, one can only imagine her reaction had she—and the APA investigators—known about the RC, an experiment which was arguably the most unethical of more than a score of ethically questionable experiments.
More precisely, all of the usual ethical criticisms directed at Milgram’s published OTA experiments are applicable to the RC: a failure to obtain the participants’ informed consent, undermining their right to withdraw, potentially inflicting physiological and psychological harm on them, and the provision of a disingenuous and potentially inadequate debriefing session. However, it could be argued that because the participants and learners in the RC were friends, the consequential costs associated with completion increased, thus intensifying the gravity of these criticisms. So, more so than any other variation, the RC could be seen to have negatively affected a participant’s self-image. This is because all participants who completed the RC would later have to live with the fact that, had the shocks been real, they would have consented to severely harming a “friend”.
However, what was probably the most ethically disconcerting feature of the RC was that, despite participants being told beforehand to bring along a friend, on three occasions the teacher–learner pairs were members of the same family: #2422 were brothers-in-law (maximum shock: 165 volts), #2428 were uncle and nephew (maximum shock: 180 volts), and #2435 were father and son (maximum shock: 165 volts). 10 It transpires that, because the RC was the last official experimental condition, Milgram was by then struggling to recruit participants. 11 It was during this period of desperation that Milgram accepted the father and son team. The father’s stress at “shocking” his own son was palpable:
Participant: “Wrong, I’m gonna give you 165 volts.”
Learner: “Ahhh, let me out of here…I’m gonna rip this place up! Let me out...”
The father tried to conjure up a polite and non-confrontational means of ending the experiment:
Now listen…he’ll do that too...you better let him out because he’ll do it!...we’ll give you back your cheques and let it go at that, to hell with it. Because I’m not gonna have him get hurt, and he’ll rip your equipment up.
This strategy failed to deter the experimenter, who insisted the participant inflict the next shock. On realising there was no amiable way of ending the experiment and facing the prospect of inflicting a 180-volt shock on his own son, the father suddenly burst into a tirade:
Participant: “I don’t give a god dang what “the experiment requires”. If someone’s getting hurt and hollering there is no such thing as anyone gonna make continue with [sic] so don’t give me that line of hooley if if [sic] you’re getting an experiment, and it’s hurting ya [mumble] that’s gonna make you continue with it—so don’t give me that line of hooley. I’m not so dumb that I don’t know that. and as I say, you can have your two damn cheques back! IF he’s gonna holler like that I’m not gonna keep going through with it!”
Experimenter: “You have no other choice teacher.”
Participant: “what do ya mean I have no other choice?”
Experimenter: “If you don’t continue we’re going to have to discontinue the entire experiment.”
Participant: (suddenly calming down) “...discontinue the entire thing? I an’t gonna have my boy—would you have your boy hollering in there like that?”
Experimenter: “Well, we’ll have to discontinue, may I ask you a few questions...?”
Participant: “I’m not going to sit there, after all that’s my boy, and I’m not gonna sit [mumbles].”
The father’s early refusal to continue “shocking” his son might seem unsurprising—of course he would stop the experiment. But at the time Milgram could not have known this. So the question remains: what would the long-term implications have been for the father–son relationship had every shock been inflicted? Again, the RC held the potential to seriously harm people’s relationships. And Milgram was aware of this ethical issue because, on the few occasions that participants completed the RC, he later attempted to circumvent what he anticipated might develop into post-experimental discord:
…the main purpose of the experiment was to see how you would react to taking orders. He wasn’t really getting the shock, we just set this up this way to see...whether you would be happy to give him the shocks or whether you weren’t so happy about it...So ahh ahh let’s tell him that ahhhh you knew you weren’t giving him the shocks...alright?
Clearly, this kind of debriefing, which Perry noted involved the substitution of “one untruth for another” ([5], p. 85), would not have prevented those participants “happy” to shock a friend from developing a negative self-image. Also, Milgram’s comments clearly establish that before he was halfway through the RC he was aware that this particular condition could damage, and thus harm, his participants’ relationships with their friends and relatives. This awareness conflicts with the reassurances Milgram gave to Baumrind in response to her accusation that he acted unethically:
In my judgment, at no point were subjects exposed to danger and at no point did they run the risk of injurious effects resulting from participation. If it had been otherwise, the experiment would have been terminated at once.
([12], p. 849)
It therefore seems that the most likely reason Milgram decided not to publish the RC was that during 1964 he realised that, if it were known about, this particular experimental variation had the potential to bolster the strength of the ethical objections being levelled at his research. So, from 1964 Milgram no longer deemed it prudent to mention this experiment again. 16 Although publishing the RC in the early to mid-1960s would predictably have stimulated an ethical firestorm, it is ironic that this concealed variation is capable of refuting what became the most enduring methodological criticism directed at the OTA experiments.

4. Methodological Implications of the RC

According to Milgram, the methodological success of the OTA research centred on whether participants were successfully deceived into believing that the learner was receiving dangerous shocks [14]. Successfully deceiving most of his participants was “critical” because “as the subject believes he is transmitting painful shocks to the learner...the essential manipulatory intent of the experiment is achieved” ([14], p. 139). As Miller put it: “the entire foundation...rests on the believability of the victim’s increasingly mounting suffering” ([15], p. 143). On the issue of believability, Orne and Holland [16] and Mixon [17,18,19] have posed the two most powerful methodological critiques of the OTA experiments.

4.1. Orne and Holland’s Critique of the OTA Experiments

Orne and Holland presented a variety of methodological criticisms [16], all of which used the OTA study as an example of research whose results were determined by what Orne termed demand characteristics [20]. Demand characteristics arise when participants attempt to please a researcher by anticipating, and then engaging in, behaviours that they suspect will confirm the study’s hypothesis. The implication of this critique for Milgram’s experiments was that most participants who completed them would supposedly have sensed that his research was exploring OTA and, knowing the learner was not receiving shocks, pretended to be stressed as they completed the experiment, thus giving Milgram the results he desired. Although Milgram refuted nearly all of Orne and Holland’s criticisms ([14], pp. 151–53), one point lingered: their suggestion that participants trusted that the experimenter or Yale University would not allow dangerous shocks to be inflicted on the learner. So, according to Orne and Holland, ostensibly stressed participants actually knew that “everything” was “going to be all right” ([16], p. 287), and this is apparently why so many completed the experiments.
Although Milgram conceded that a small minority of participants did not believe the shocks were genuine, the post-experimental interviews indicated that most—56.1 percent and 24 percent “fully” and “probably”, respectively—believed the learner was being shocked ([14], p. 141). Bolstering Milgram’s claim is that 73 percent of obedient participants in the New Baseline Condition were unwilling to expose themselves to one of the 450-volt “shocks” they had just inflicted on the learner (see [1], p. 57). 17 In fact, Rosenhan found that “almost all” of the teachers in his replication of the OTA experiments later declined a request to participate in another trial as the learner ([21], p. 142). If participants had not been deceived, what did they have to fear in calling the experimenter’s bluff? 18 Rosenhan’s participants were also asked: “You really mean you didn’t catch on to the experiment?” ([14], p. 141). In response to such subtle accusations of credulousness, 70 percent of participants still admitted to having been deceived. It is noteworthy that even the otherwise highly critical Helm and Morelli concluded that “Milgram is probably on safe grounds in contending that ‘the majority of the subjects accept the experimental situation as genuine, a few do not’” ([22], p. 332). 19
Despite Milgram’s defence [14], Orne and Holland’s suggestion that participants trusted that the experimenter or Yale University would not allow the infliction of dangerous shocks on the learner proved resilient. This is probably because most participants, as Milgram believed, could genuinely have been deceived, yet simultaneously they could also have retained the suspicion that, as Orne and Holland argued, a Yale-sponsored experimenter was unlikely to allow a fellow human being to be subjected to such physical torture. As Perry said: “Many expressed their faith that the experiment must have been safe” ([5], p. 173), and thus fake, because as one participant put it: “I couldn’t conceive of anybody allowing me to continue on with an experiment knowing somebody was going to be hurt” ([5], p. 258). The scholar who most convincingly bolstered the validity of Orne and Holland’s point about trust was Mixon [17,18,19].

4.2. Mixon’s Critique of the OTA Experiments

Unlike Orne and Holland, Don Mixon was convinced that most participants genuinely believed that they were inflicting real shocks. Mixon, however, presented a more persuasive explanation than Orne and Holland did for the participants’ displays of tension. Mixon argued that many of Milgram’s participants were tense because they were confronted with an extremely ambiguous, confusing, and uncertain situation. That is, the information coming from the experimenter and the learner was mutually contradictory: the shocks were apparently harmful but not dangerous; the learner was screaming in agony, but the experimenter looked calm. “No wonder many subjects showed such stress. What to believe?” Mixon argued, “The right thing to do depends on which actor is believed” ([19], p. 33). Mixon concluded:
The extreme emotional reactions of many of the participants are due not to the certain knowledge that they are inflicting serious harm, but to the fact that they cannot be certain. The evidence of their senses tells them they are, but background expectations and the expert responsible for the well-being of participants tells them they are not.
([18], p. 94)
According to Mixon, “increasingly large chunks of the social and physical world that we live in can be understood only by experts” ([19], p. 35), and therefore most participants resolved the stressfully ambiguous situation by placing their trust in the authority figure’s word that the learner would not be hurt. 20

4.3. Support for Mixon’s View

Perhaps the strongest evidence in support of Mixon’s claim that ambiguity, confusion and uncertainty powerfully contributed to Milgram’s high completion rates is provided by the central role of the shock generator. Consider the following post-experimental interview with the pseudonymous “Carl” ([5], p. 195), one of the three participants who completed the RC:
Milgram: “Is there anything...[your friend] could have said that would have gotten you to stop the experiment?”
Carl: “I don’t think so.”
Milgram: “Um, what if we...gave you...a gun and said ‘shoot him in the head’…?”
In support of both Orne and Holland and Mixon, Carl responded to this question by explaining that he trusted that the experimenter would not have allowed the learner to be hurt, thereby signaling a belief that the experiment must have been a ruse:
Carl: “Seriously...if they gave me a gun to shoot him in the head I wouldn’t have done it. I think my reasoning behind it...was this thing is set-up...But the way I figured it, you’re not going to cause yourselves trouble by actually giving serious physical damage to a body.”
Milgram: “Um, do you think that would have been the point where you would have not done it, if there were any kind of physical damage?”
Carl: “Yeah, if it was open to my senses, as you say, if...a gun [mumble] I wouldn’t [mumble]. No matter what anyone told me concerning say phony bullets or anything like that.”
So had Carl been able directly to hear and see the consequences of the “harm” he was inflicting, this information would have eliminated any confusion and ambiguity over the learner’s fate, and he would have stopped the experiment.
What was it then that closed Carl off from appreciating more fully the “consequences” of his actions? It was primarily the physical separation of participant and learner as established by the shock generator itself, when used in conjunction with the wall that separated both parties ([24], pp. 333–34). Just a flick of one of the shock generator switches could (potentially) render a person unconscious, and, due to the presence of the wall, do so in the physical absence of the shock inflictor. These two elements, the shock generator and the dividing wall, separated “the act and its effects”, removing any “compelling unity” between the teacher and learner ([1], p. 39). The shock generator and the wall therefore caused the perceptual ambiguity surrounding the learner’s wellbeing, and injected a level of confusion and uncertainty into the situation that would not have been possible in the absence of these devices.
However, it is important to note that the shock generator was a more important causal factor than the presence of a wall because, had this device been removed from the experiment, it would not have been possible for the participants to harm the learner from a separate room. Also, during Condition 3 of the Proximity Series, where the participant and learner were in the same room and could directly see and hear each other, 40 percent of participants still completed the experiment. Clearly the presence of the wall enhanced the power of the shock generator. To punish the learner in the absence of the shock generator, participants would have had to directly inflict physical pain on the learner, and in so doing they would not have been able to avoid hearing, seeing and feeling the consequences of their “obedience”. Being “open” to their “senses”, participants would have been unavoidably aware that the learner was being harmed and that they (the participants) were directly responsible. Such an irrefutable physical connection between the participant and learner would have stimulated a level of perceptual clarity under which, as predicted by the group of psychiatrists that Milgram had approached with an overview of his basic experimental procedure, only one in a thousand people—members of the “pathological fringe” ([1], p. 31)—would have been likely to complete the experiment. 23 It can therefore be argued that without the shock generator, which spatially and emotionally separated the participant and learner, Milgram would not have obtained his counterintuitive 65 percent baseline completion rate. Obviously, it was the shock machine that was central in producing the heightened ambiguity, confusion and uncertainty that Mixon argued enabled Milgram to achieve his eye-catching results. 24 Nevertheless, although there is evidence in support of Mixon’s claim that ambiguity contributed to Milgram’s generally high completion rates, the results from the RC do not support Mixon’s overall explanation.

4.4. Problems with Mixon’s View

Mixon’s main point, as mentioned, is that ambiguity surrounding the “shocked” learner’s fate encouraged most participants to place their trust in the words of the expert in charge and to do as they were told. However, in terms of confusion and uncertainty, the RC was very similar to the New Baseline, the key distinction being that the learner was a friend instead of a stranger. So the RC was just as ambiguous as the New Baseline, yet its completion rate was 50 percentage points lower. Thus, in conflict with Mixon, 85 percent of participants in the RC evidently refused to place their trust in—and thus rejected—the experimenter’s “expert” status. Most participants in the RC did not react with confusion; their actions were indicative of unambiguous certitude in not wanting to complete this experiment.
In fact, in conflict with Mixon and Orne and Holland, one participant in the RC was certain his friend was not being shocked—and was thus confident that the experiment was a ruse—but he still refused to trust the experimenter:
Teacher: “I don’t believe this! I mean, go ahead.”
Experimenter: “You don’t believe what?”...
Teacher: “I don’t believe you were giving him the shock.”
Experimenter: “Then why, why won’t you continue?”
Teacher: “Well I, I just don’t want to take a chance, I mean I, I”
Experimenter: “Well if you don’t believe that he’s getting the shocks, why don’t you just continue with the test and we’ll finish it?”
Teacher: “Well I, I can’t, because I can’t take that chance.” [25]
Although this participant did not “believe” his friend was being shocked, he could not be certain. This uncertainty dictated that he could not afford to “take that chance” and trust the experimenter, because there was still a possibility his hunch might be wrong. And he was aware of the consequences that such a mistake would have for the learner. This kind of response was not limited to the RC. For example, as another suspicious and disobedient participant from one of the non-RCs said: “When I decided that I wouldn’t go along with any more shocks, my feeling was ‘plant or not...I was not going to take a chance that our learner would get hurt’”. 26 This response implies that, as noted by other disobedient participants, it was quite possible that the experimenter, played by John Williams, could have been an unsupervised rogue “mad scientist” ([5], p. 135), a person not to be trusted. The point is that Orne and Holland’s and Mixon’s explanations buckle when considered alongside how these participants resolved the experiment’s inherent dilemma of whether to stop or continue inflicting shocks. Not only are these refusals to take the potentially devastating risk of being wrong likely to have played on the minds of many other disobedient participants, but, as the following will argue, the issue of “chance” may also inject some clarity into the longstanding methodological debate between Milgram and Mixon.
As Mixon correctly argued, Milgram’s inherently ambiguous procedure ensured that participants could not be certain if they were inflicting real shocks on the learner. As established, this ambiguity was caused by the perceptual disconnection produced by the shock generator (an effect enhanced by the partition separating the participant from the learner). Mixon, as mentioned, also argued that this inherent ambiguity weakened the methodological strength of the study because its removal would have ensured certainty regarding the infliction of harm, and such an awareness would likely have ended in disobedience. However, Mixon’s claim that the basic procedure’s inherent ambiguity was a methodological weakness can be contested. Instead, it can be argued that the deliberate creation of ambiguity was a necessary ingredient of the participants’ dilemma over whether to stop or continue inflicting shocks.
To clarify, if a participant suspected that the learner was not receiving powerful shocks and, for whatever reason, was tempted to continue doing as they were told, such a decision would have necessitated taking a potentially devastating risk. Because participants could not be sure whether shocks were really being inflicted, the inherent ambiguity created the possibility of their being wrong (in which case the learner was actually being harmed). The temptation of taking this risk was therefore an essential methodological ingredient in creating the participant’s dilemma. The key question then becomes: would the participant continue to place the learner’s welfare at risk because of their suspicion that the learner was unlikely to be receiving shocks? Or would they decide to stop inflicting further “shocks”, thereby totally eliminating any risk of being wrong? From this perspective, the ambiguity and uncertainty inherent in Milgram’s basic experimental procedure was not a methodological weakness. On the contrary, it introduced the possibility that participants might be wrong—an outcome with potentially devastating consequences for the learner. Conversely, had all participants been absolutely certain that the learner was not being shocked, there would have been no chance of being wrong, and hence no dilemma over whether or not to continue inflicting the shocks. So, in conflict with Mixon, the ambiguity inherent in Milgram’s basic experimental procedure was not a methodological weakness; it was instead a necessary component in making participants confront a pressing dilemma: whether or not to place the learner’s welfare at risk.
These linkages, between ambiguity generating risk and risk generating a dilemma, raise an important question: did the participants’ dilemma have a moral dimension? Mixon would dispute that it did, for the same reason outlined by the OTA scholar Jerry Burger:
When you’re in that situation, wondering, should I continue or should I not, there are reasons to do both. What you do have is an expert in the room who knows all about this study, and presumably has been through this many times before with many participants, and he’s telling you, [t]here’s nothing wrong. The reasonable, rational thing to do is to listen to the guy who’s the expert when you’re not sure what to do.
([5], p. 359)
However, because a participant’s decision to trust the expert in charge necessitated taking a potentially devastating risk with an innocent person’s wellbeing, doing as one was told was neither a reasonable nor a rational solution to the dilemma of whether to stop or continue. The most reasonable and rational response was instead to mistrust the expert in charge, because the experimenter could have been a “mad scientist” and, as the above disobedient participants did, all should have erred on the side of caution (see Coutts, 1977, p. 520, as cited in [26], p. 133). Erring on the side of caution would have eliminated the risk of being wrong and secured the wellbeing of the learner, a fellow human being. For these reasons, the teachers’ conundrum did constitute a moral dilemma.
Mixon’s critique of the OTA experiments, which suggested that most participants were confused and therefore placed their trust in the expert experimenter, is thus a methodological red herring. The issue of actual importance is whether or not the participants were willing to place the learner’s wellbeing at risk. When Milgram’s New Baseline experiment is viewed from this perspective, it can be concluded that the 35 percent of participants who stopped the experiment—whether fully deceived or totally suspicious—were unwilling to place the learner’s wellbeing at risk. Conversely, the 65 percent of participants who completed the experiment—whether fully deceived or totally suspicious—were willing to place the learner’s wellbeing at risk.
Full consideration of the RC may bring closure to the aforementioned issue of trust, which for many years has generated a significant, protracted and intransigent methodological impasse. After all, if nearly all participants in an ambiguous, confusing and uncertain experiment like the RC were unwilling to place their trust in the expertise of the person in charge, then identifying why this was so might help to explain Milgram’s seemingly perplexing results.

5. Theoretical Implications of the RC

Although the RC was arguably just as ambiguous as the New Baseline, it differed in one obvious and important way from every other condition: it was the only variation in which the participant not only knew but was also likely to have cared about the learner at a personal level. In every other condition the learner was a stranger, albeit a “mild-mannered and likable” one ([1], p. 16). The following post-experimental exchange illuminates why a relationship between the teacher and learner proved so effective in generating disobedience:
Participant: But since he was my brother-in-law I stopped…
Milgram: Why do you think you stopped for a brother-in-law?
Participant: Well ahrrr…Why should I keep on going? It’s not that necessary to keep on going right? That’s the reason why I stopped.
So what would this participant have done had the learner been a stranger?
Participant: Well...they told me I should keep on going, I keep on going.
Milgram: Why? What’s the difference?
Participant: Well...[inaudible]...is not dangerous, nothing will happen to me.
Milgram: But that’s what you were told...with your brother-in-law.
Participant: Yeah, but there’s a difference.
Milgram: What’s the difference?
Participant: If it is a stranger I don’t listen. Right? We are doing an experiment…They told me to do it, I keep on doing it. He told me keep on going, I keep on going.
([27], pp. 501–02)
So if the learner were a stranger, this participant believed he would have willingly completed because “nothing will happen to me”. However, when his brother-in-law was the learner, the reverse seems to have been of concern: completing would result in something happening to him. It appears that completing the RC was more likely to come with potentially negative consequences, or put differently, involved certain costs not encountered by participants in the non-RCs. One such cost was more explicitly expressed by another disobedient participant, who informed the experimenter: “it may be important to you, but it isn’t that important to me, for a friendship anyway” ([27], p. 501). This participant anticipated that continuing to climb the shock board meant he would later be faced with explaining his actions to a “friend” whose cries for help he, by definition, should have responded to in a caring way.
Other participants in the RC seemed solely concerned about alleviating their friend’s “painful” situation. For example, after inflicting the 150-volt shock, one participant hesitated. The experimenter then issued the prod: “It’s absolutely essential that you continue teacher”. The participant ignored the experimenter, stood up from his chair and then tried to make his way to his friend in the adjacent room. On encountering a locked door, the participant questioned the experimenter: “You gonna open up the door?” Sensing the seriousness of the situation, the experimenter failed to issue the next prod (“You have no other choice...”) and, seemingly fearing for his own safety, responded: “We’ll have to discontinue the experiment” ([27], p. 515). This participant’s genuine concern for his friend’s welfare was not unusual in the RC. In the non-RCs, however, the learner was a stranger whom, compared to a friend, participants were less likely to genuinely care about. Despite Milgram’s eventual decision not to publish the RC, comments he made during a debriefing session illustrate that he too came to this conclusion:
Milgram: “Did he [the experimenter] tell you about the strangers...?”
Participant: “Yeah.”
Milgram: “And in that situation a lot of people will go right up till the end. [Be]cause they don’t know the person and they don’t give a damn.”
Other lines of evidence indicate that when OTA-type experiments purposefully put the participants’ personal self-interest at stake—thus encouraging them to “give a damn”—most proved more than capable of resisting the apparent power of OTA. For example, Meeus and Raaijmakers found in their OTA variations that completion rates plummeted when participants were told that they would be legally liable if the learner were harmed [28]. They add that other researchers have found obedience rates ranging from 0 to 17 percent when obedience involves a serious risk to the participants themselves. Examples include “picking up a poisonous snake (Rowland, 1939), taking a coin from a dish of nitric acid (Orne and Evans, 1965; Young, 1952), and carrying out a break-in at their own risk (West, Gunn, and Chernicky, 1975)” ([28], p. 164). The conclusion one might draw is that more selfish or self-interested considerations were probably hidden behind most participants’ decisions to complete the New Baseline experiment. Such an interpretation is compatible with Damico’s observation:
…most revealing...in the Milgram experiment is not the inability of his subjects to understand the difference between right and wrong—anxiety was often the most visible emotion—but their failure to care about the difference in a way that would have made it the controlling factor in their behavior.
([29], pp. 424–25)
Damico’s view that those who completed the experiment were actually aware they were engaging in wrongdoing is consistent with Russell and Gregory’s theory as to why most participants completed the New Baseline experiment [27].

Russell and Gregory’s Theory

According to Russell and Gregory, participants arrived at Milgram’s laboratory eager to be considered helpful [27]. However, once involved, their protests around the 150-volt switch hinted at a personal preference to stop inflicting the shocks. Participants were, however, also aware that stopping the experiment would require them to engage in an impolite and socially awkward confrontation with the experimenter. To avoid such a confrontation, many were drawn into trying to invent a non-confrontational justification for ending the experiment. 28 However, during the pilot studies Milgram had already encountered the most common non-confrontational exit strategies that participants were likely to invent. This enabled Milgram to anticipate what most participants were likely to say during the official experiments. Accordingly, Milgram armed his experimenter with verbal prods designed to counter any attempts by participants to extricate themselves. Many participants politely pointed out the obvious: “I don’t mean to be rude, but I think you should look in on him...Something might have happened to the gentleman in there, sir” ([1], p. 76), often followed by “Can’t we stop?” ([1], p. 80). To the participant’s surprise, the experimenter responded: “Although the shocks may be painful, there is no permanent tissue damage, so please go on”. This was not the participant’s experiment, yet they were burdened with the task of inflicting punishment. So, perhaps in the hope that the experimenter would be unwilling to accept full responsibility for the participants’ directly harmful actions, some politely sought clarification over the lines of responsibility. But this subtle tactic also failed, with the experimenter responding “I’m responsible...Continue please” ([1], p. 74). Typically, the search for a non-confrontational exit strategy not only failed, it actually backfired by drawing participants into inflicting even more shocks which, while searching for a polite means of extrication, they knew they should never have inflicted. Instead of finding a diplomatic avenue to prematurely ending the experiment, participants found themselves being drawn further and further along the shock board. Having come as far as they had, what explanation could they now offer for suddenly deciding to stop?
Simultaneously, a very tempting, albeit unethical, opportunity emerged: many participants gradually sensed that if they did as the experimenter asked, then the experimenter could probably be blamed for their actions ([30], p. 97). This was because Milgram’s prods incrementally lured participants to suspect that, despite a private awareness of wrongdoing, 29 they might not have appeared to others present as the party most responsible for the learner’s pain. The confrontation-fearing participant was encouraged to suspect that they could displace responsibility for their actions on to the experimenter because the latter said, and many of the former gradually came to want to believe, that it was “essential” to “continue”, that they had “no choice”, that the shocks were “harmless”, and that only the experimenter was “responsible”. Giving the appearance that they did not believe they were responsible enabled participants to circumvent a feared confrontation, absolved them from legal and moral culpability for going on, and reassured them they could probably inflict further shocks with total impunity. Aware that they could probably get away with doing wrong, participants found increasingly credible and tempting excuses for continuing to inflict more shocks. In this situation, participants were faced with two options. They could capitalise on the opportunity to avoid an awkward confrontation and complete what was probably a fake experiment and—because others were apparently responsible—probably do so with impunity. 30 Or they could shoulder the burden of engaging in an awkward and impolite confrontation with the experimenter on behalf of a complete stranger by actively stopping a potentially dangerous experiment. Most participants in the New Baseline chose what was, for themselves, the easier and passive option of continuing to inflict further shocks, because they felt confident that in the unlikely case of anybody scrutinising their actions, they were ostensibly “just following orders”. 31
With this account in mind, consider Carl from the RC. As mentioned, in the post-experimental debrief he spoke in a way that was consistent with Orne and Holland’s and Mixon’s explanations for completing the experiment. Carl apparently knew the experiment was a “set-up” and, “the way I figured it, you’re not going to cause yourselves trouble by actually giving serious physical damage to a body.” 32 Then again, if Carl knew the experiment was a ruse and genuinely believed that the experimenter would not hurt his friend, why did he go to the trouble of trying to sabotage an experiment ostensibly attempting to determine the effects of punishment on learning? For example, Carl repeatedly emphasised “the right answer” to his friend and at another point “Williams reprimanded him for barely pressing the switch” ([5], pp. 196–97). According to Perry:
When it was over and Carl had reached 450 volts, he was seething. Williams told him that he wanted to ask some questions, and gave him a piece of paper on which to record his answers. Carl snarled, “It’s what I figured, some fuckin’ idiot tests!’...Carl was truculent and abrupt in answering questions...giving monosyllabic, curt replies, as if he could barely contain his anger. When the questions were over, Carl said, ‘All I can say is, as a researcher, has anybody ever physically attacked you?...’ Williams: Once or twice…he was not really being shocked out there. Carl: He just put an act on? Williams: Yeah, he’s a good actor”.
([5], p. 197)
If Carl really knew the experiment was a set-up and that his friend was not actually being shocked, why did he become so angry towards an apparently trustworthy Williams? And why was Carl surprised to discover his friend had actually “put an act on”? A stronger explanation is that, due to Milgram’s shock machine and the wall, Carl was uncertain about both his friend’s wellbeing and Williams’ trustworthiness. And, as Carl was coerced into doing something that conflicted with his personal preference to stop the experiment, his anger boiled over. As Williams put it in his post-experimental notes, Carl continued to inflict shocks because “It seemed that he didn’t have the guts to refuse to obey the experimenter’s orders—and this made him mad as hell at the experimenter” ([5], p. 199). 33
Russell and Gregory’s explanation conflicts sharply with Milgram’s agentic state theory [1], according to which participants who completed the experiment apparently perceived themselves to have been an instrument used by a higher authority and therefore “genuinely” did not believe themselves to be responsible for their actions. On the contrary, as Damico argued, those who completed knew very well they were doing wrong [29]. But because these participants were led to think that they could probably avoid an awkward confrontation with impunity, they were typically too embarrassed to admit to their wrongdoing when they were later informed of the experiment’s actual purpose—thus the frequent post-experimental feelings of guilt. 34 However, 36.3 percent of the “obedient” participants whose responses were later registered on Milgram’s Responsibility Clock ([1], p. 203) were willing to accept most of the responsibility for their actions. As one such participant stated, “I thought the ‘shocks’ might harm the other ‘subject’, however, I mentally ‘passed the buck’ feeling the one running the experiment would take all responsibility” ([27], p. 508).
However, the weakness of Russell and Gregory’s theory is that, like Milgram’s agentic state, its focus is on the individual participants’ decision-making process, placing all blame for completion on them alone, even though participants who inflicted all the shocks regularly pointed out that they would not have acted as they did of their own volition. That “others” like the experimenter were involved was, for most participants, a critical factor in their decision to continue inflicting shocks. 35 Clearly the underlying message here for theoreticians is that any robust psychological explanation of the participants’ behaviour needs to be complemented by a more inclusive sociological theory. Such a theory must account for Milgram and his research team’s remarkably similar resolution of the moral dilemma that confronted them in carrying out a research programme that had potentially harmful effects on its subjects [32]. 36

6. Conclusions

Although the RC was publicly “discovered” by Rochat and Modigliani 16 years ago [4], their article has since been cited only a handful of times in the Obedience studies literature (for the latest such article, see [33]). This is unfortunate, since—for the reasons argued above—the RC may be considered one of the most, if not the most, important of Milgram’s official experimental variations. The results, had he published them, could have enabled Milgram to refute the strong and subsequently enduring criticisms levelled at his experimental methodology by Mixon. In largely ignoring the RC in his main published work on his experiments, Milgram may have been resolving his own moral dilemma: whether to draw attention to the RC and so risk inviting even sharper moral and ethical criticism, or whether to use the RC results to refute Mixon’s methodological challenges. This, however, is a matter for speculation. What is almost undeniably true is that examination of the RC suggests that the fullest and most valid interpretations of the OTA experiments as a whole are possible only if his well-publicised New Baseline experiment is evaluated in conjunction with the full range of his variations, some of which he himself, for whatever reasons, preferred not to highlight.
Finally, the heightened attention in this article on the relationship between the participant/teacher and the learner raises an interesting question. That is, if a relationship between the participant and learner was important, what kind of results might have been obtained in an Obedience-type experiment where the victim was a stranger but a close relationship existed between the experimenter and participant? It is perhaps interesting to note that this configuration seems to be a much closer fit to the events discussed in Christopher Browning’s Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland [34], which detailed some of the many mass shootings undertaken by a small, close-knit group of reservist German police officers in the Polish interior. Browning clearly established that there was a strong relational bond between the superordinate Major Wilhelm Trapp and his 500 or so subordinate executioners. For example, these men affectionately called their popular leader in the (killing) field “Papa Trapp” ([34], p. 2), and when he more asked than ordered his men to participate in the battalion’s first mass shooting of 1800 civilians, the vast majority willingly agreed to do their share of the dirty work. Nearly all did so despite Trapp’s provision of an extraordinary offer: if any of the men were not up to such a task, they did not have to participate. Nonetheless, most of the men quickly acclimatised to their new role as executioners, with many later going on to volunteer their services in the extermination of an eventual total of 38,000 Jewish strangers ([34], p. 225). Clearly, Milgram’s Obedience experiments never explored the power of such potentially important and insightful relational configurations (see [35,36] for theoretical contemplations on the Obedience studies that move in this direction).

Acknowledgements

The author wishes to thank Hilary Earl, Mark Crane, Susan Cahill, Sal Renshaw (all of Nipissing University), Ian Nicholson (St. Thomas University), Gina Perry (University of Melbourne), Steve Duck (University of Iowa), and the two anonymous reviewers for their valuable comments on drafts of this article. A special thanks, again, to Bob Gregory (Victoria University of Wellington). All responsibility rests with the author.

Conflicts of Interest

The author declares no conflict of interest.

References and Notes

  1. Stanley Milgram. Obedience to Authority: An Experimental View. New York, NY: Harper and Row, 1974. [Google Scholar]
  2. Gitta Sereny. The Healing Wound: Experiences and Reflection 1938–2000. London: W. W. Norton & Company, 2002. [Google Scholar]
  3. Hilary Earl. “Scales of justice: History, Testimony, and the Einsatzgruppen Trial at Nuremberg.” In Lessons and Legacies VI: New Currents in Holocaust Research. Edited by Jeffry Diefendorf. Evanston, IL: Northwestern University Press, 2004, pp. 325–51. [Google Scholar]
  4. François Rochat, and Andre Modigliani. “Authority: Obedience, defiance, and identification in experimental and historical contexts.” In A New Outline of Social Psychology. Edited by Martin Gold. Washington, DC, USA: American Psychological Association, 1997, pp. 235–46. [Google Scholar]
  5. Gina Perry. Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments. Melbourne, VIC: Scribe, 2012. [Google Scholar]
  6. Stanley Milgram. “Some conditions of obedience and disobedience to authority.” Human Relations 18, no. 1 (1965): 57–76. [Google Scholar] [CrossRef]
  7. Alan C. Elms. “Obedience in retrospect.” Journal of Social Issues 51, no. 3 (1995): 21–31. [Google Scholar] [CrossRef]
  8. Nestar Russell. “Stanley Milgram’s Obedience to Authority Experiments: Towards an Understanding of their Relevance in Explaining Aspects of the Nazi Holocaust.” PhD thesis, Victoria University of Wellington, 2009. [Google Scholar]
  9. Thomas Blass. The man who shocked the world: The life and legacy of Stanley Milgram. New York, NY: Basic Books, 2004. [Google Scholar]
  10. Stanley Milgram. “Behavioral study of obedience.” Journal of Abnormal and Social Psychology 67, no. 4 (1963): 371–78. [Google Scholar] [CrossRef]
  11. Diana Baumrind. “Some thoughts on ethics of research: After reading Milgram’s ‘behavioral study of obedience’.” American Psychologist 19, no. 6 (1964): 421–23. [Google Scholar] [CrossRef]
  12. Stanley Milgram. “Issues in the study of obedience: A reply to Baumrind.” American Psychologist 19, no. 11 (1964): 848–52. [Google Scholar] [CrossRef]
  13. Steven C. Patten. “The case that Milgram makes.” The Philosophical Review 86, no. 3 (1977): 350–64. [Google Scholar] [CrossRef]
  14. Stanley Milgram. “Interpreting Obedience: Error and Evidence (A reply to Orne and Holland).” In The Social Psychology of Psychological Research. Edited by Arthur G. Miller. New York, NY: Free Press, 1972, pp. 138–54. [Google Scholar]
  15. Arthur G. Miller. The Obedience Experiments: A Case Study of Controversy in Social Science. New York, NY: Praeger, 1986. [Google Scholar]
  16. Martin T. Orne, and Charles C. Holland. “On the ecological validity of laboratory deceptions.” International Journal of Psychiatry 6, no. 4 (1968): 282–93. [Google Scholar]
  17. Don Mixon. “Instead of deception.” Journal for the Theory of Social Behaviour 2, no. 2 (1972): 145–77. [Google Scholar] [CrossRef]
  18. Don Mixon. “Studying feignable behavior.” Representative Research in Social Psychology 7 (1976): 89–104. [Google Scholar]
  19. Don Mixon. Obedience and Civilization: Authorized Crime and the Normality of Evil. London: Pluto Press, 1989. [Google Scholar]
  20. Martin T. Orne. “On the social psychology of the psychology experiment: With particular reference to demand characteristics and their implications.” American Psychologist 17, no. 11 (1962): 776–83. [Google Scholar] [CrossRef]
  21. David Rosenhan. “Some origins of concern for others.” In Trends and Issues in Developmental Psychology. Edited by Paul Mussen, Jonas Langer and Martin Covington. New York, NY: Holt, Rinehart & Winston, 1969, pp. 134–53. [Google Scholar]
  22. Charles Helm, and Mario Morelli. “Stanley Milgram and the obedience experiment: Authority, legitimacy, and human action.” Political Theory 7, no. 3 (1979): 321–45. [Google Scholar] [CrossRef]
  23. Ian Parker. “Obedience.” Granta: The Magazine of New Writing 71 (2000): 99–125. [Google Scholar]
  24. Nestar Russell, and Robert Gregory. “Making the undoable doable: Milgram, the Holocaust and modern government.” American Review of Public Administration 35, no. 4 (2005): 327–49. [Google Scholar] [CrossRef]
  25. Nestar Russell. “Milgram’s Obedience to Authority Experiments: Origins and Early Evolution.” British Journal of Social Psychology 50, no. 1 (2011): 140–62. [Google Scholar] [CrossRef] [PubMed]
  26. John M. Darley. “Constructive and destructive obedience: A taxonomy of principal-agent relationships.” Journal of Social Issues 51, no. 3 (1995): 125–54. [Google Scholar] [CrossRef]
  27. Nestar Russell, and Robert Gregory. “Spinning an organisational ‘web of obligation’? Moral choice in Stanley Milgram’s ‘obedience’ experiments.” American Review of Public Administration 41, no. 5 (2011): 495–518. [Google Scholar] [CrossRef]
  28. Wim H.J. Meeus, and Quinten A.W. Raaijmakers. “Obedience in modern society: The Utrecht studies.” Journal of Social Issues 51, no. 3 (1995): 155–75. [Google Scholar] [CrossRef]
  29. Alfonso J. Damico. “The sociology of justice: Kohlberg and Milgram.” Political Theory 10, no. 3 (1982): 409–33. [Google Scholar] [CrossRef]
  30. Bruce K. Eckman. “Stanley Milgram’s ‘obedience’ studies.” Et cetera 34, no. 1 (1997): 88–99. [Google Scholar]
  31. Arthur G. Miller, Barry E. Collins, and Diana E. Brief. “Perspectives on obedience to authority: The legacy of the Milgram experiments.” Journal of Social Issues 51, no. 3 (1995): 1–19. [Google Scholar] [CrossRef]
  32. Nestar Russell. “The Emergence of Milgram’s Bureaucratic Machine.” Journal of Social Issues. in press. [CrossRef]
  33. François Rochat, and Thomas Blass. “The ‘Bring a friend’ condition: A report and analysis of Milgram’s unpublished Condition 24.” Journal of Social Issues. in press.
  34. Christopher R. Browning. Ordinary men: Reserve Police Battalion 101 and the final solution in Poland. New York, NY: Harper Collins, 1992. [Google Scholar]
  35. Stephen Reicher, and S. Alexander Haslam. “After shock? Towards a social identity explanation of the Milgram ‘obedience’ studies.” British Journal of Social Psychology 50 (2011): 163–69. [Google Scholar] [CrossRef] [PubMed]
  36. Stephen Reicher, S. Alexander Haslam, and Joanne R. Smith. “Working toward the experimenter: Reconceptualizing obedience within the Milgram paradigm as identification-based followership.” Perspectives on Psychological Science 7, no. 4 (2012): 315–24. [Google Scholar] [CrossRef] [PubMed]
  • 1As outlined in the Guide to the Stanley Milgram Papers: Manuscript Group Number 1406, the SMP covers the period 1927–1986. The archive is arranged in five series: General Files (1954–1985); Studies (1927–1984); Writings (1954–1993); Teaching Files (1960–1984); Data Files (1960–1984). The five series contain information on Milgram’s research into OTA, television violence, urban psychology, and communication patterns within society. The archive consists of both textual and non-textual materials (drawings, pictures, and a few boxes of audio tapes).
  • 2For nearly two months in 2006 and two weeks in 2011, I perused nearly all of the materials relating to the OTA experiments, including Box 1 (folders a–f); Box 1a (folders 1–15); Box 13 (folders 181–194); Box 17 (folders 243–257); Box 21 (folders 326–339); Box 43 (folders 124–129); Box 44; Box 45 (folders 130–162); Box 46 (folders 163–178); Box 47 (folders 179–187); Box 48 (folders 188–203); Box 55 (folders 1–22); Box 56 (folders 23–46); Box 59 (folders 73–87); Box 61 (folders 106–125); Box 152; Box 153 (audio tapes); Box 154; Box 155 (audio tapes); Box 156; Box 15.
  • 3SMP, Box 46, Folder 163, Titled: “Obedience Notebook 1961–1970”.
  • 4SMP, Box 46, Folder 163, Titled: “Obedience Notebook 1961–1970”.
  • 5SMP, Box 46, Folder 163, Titled: “Obedience Notebook 1961–1970”.
  • 6SMP, Box 46, Folder 163, Titled: “Obedience Notebook 1961–1970”.
  • 7SMP, Box 153, Audiotape #2438.
  • 8Across the entire Proximity Series the learner never mentioned having a heart condition. This series of experiments was run in a different laboratory from the one used for the majority of the other variations. In Conditions 2 to 4 the learner reacted verbally to every shock (15 to 450 volts) and so was obviously never rendered silent. When running this set of experiments, Milgram was still honing his basic experimental procedure; the Proximity Series was therefore technically an extension of his pilot studies. It was not until Milgram ran the New Baseline that the basic experimental procedure was standardised ([8], p. 74).
  • 9SMP, Box 17, Folder 246, Titled: “Drawings 1949–1968, no date”.
  • 10See SMP, Box 153, Audiotape #2422, #2428, #2435, respectively.
  • 11See SMP, Box 153, Audiotape #2440.
  • 12SMP, Box 153, Audiotape #2435.
  • 13SMP, Box 153, Audiotape #2435.
  • 14SMP, Box 153, Audiotape #2435.
  • 15SMP, Box 153, Audiotape #2429.
  • 16Another explanation as to why Milgram never published the RC is that its results contradicted the theory presented in his 1974 book. There could be some validity to this explanation, except that the disappearance of the RC occurred soon after the publication of Baumrind’s article [11], which seems like too great a coincidence. Then again, Perry encountered “an unpublished fragment” on the RC “possibly written as an early draft for his book” ([5], p. 194).
  • 17More specifically, the experimenter asked participants during the post-experimental debrief: “What is the maximum sample shock you would be willing to accept?” ([1], p. 57). Twelve of the 19 obedient participants agreed to experience a sample shock of between 30 and 195 volts. Seven agreed to experience a shock of between 240 and 420 volts.
  • 18If most participants had not been deceived, why, during the experiments, did so many try to sabotage a study ostensibly exploring the effects of punishment on learning by covertly emphasising the correct answers to the learner’s multiple-choice questions (see, for example, [1], pp. 159–60; [5], p. 124)?
  • 19See Parker ([23], pp. 118–19) and Perry ([5], pp. 155–69) for a conflicting view.
  • 20To bolster this explanation, Mixon ([18], p. 95) pointed out that in Milgram’s three least ambiguous experiments [1], where it was made explicitly clear that the learner would definitely be hurt if every “shock” was inflicted, all participants disobeyed (the Learner Demands to be Shocked Condition, the Authority as Victim Condition, and the Two Authorities: Contradictory Commands Condition). Conversely, the more ambiguous Milgram’s variations, the higher the completion rates ([18], pp. 92–94). Finally, Mixon undertook a role-play replication that removed any ambiguity surrounding the learner’s fate. Before starting, participants were informed that the learner was an actor, that the shocks were inauthentic, and that they were to pretend the experiment was real; they were then run through slight variations of Milgram’s Remote Condition up until the 300-volt switch was inflicted, at which point they were asked to imagine and describe the succeeding events. By manipulating this description, Mixon obtained wide variations in predicted completion rates; when the experimenter informed participants that “The learner’s health is irrelevant…continue as directed” ([17], p. 164), predicted completions slumped.
  • 21SMP, Box 153, Audiotape #2430.
  • 22SMP, Box 153, Audiotape #2430.
  • 23Although Milgram conceded that the shock generator was “important” and that “If our subjects had to strike the victim with their fists, they would be more reluctant to do so” ([1], p. 157), he also stated near the beginning of his book that “The precise mode of acting against the victim is not of central importance” ([1], p. 14; see also [24], p. 334).
  • 24There is even evidence indicating that when Milgram was inventing the basic experimental procedure, he set out to purposefully confuse participants about the effects of the “shocks” on the learner. That is, before the official experimental programme, Milgram ran a variety of pilot studies. Before the first pilot, Milgram decided to change the designated title of the last switch on his shock machine from “LETHAL” to the more ambiguous “XXX”, presumably because the latter was more likely to generate in the first official trial what he desired and termed the “strongest obedience situation” ([25], p. 149).
  • 25SMP, Box 153, Audiotape #2439.
  • 26SMP, Box 44, Divider (no label), #1106. Of note is that these resolutions to the basic procedure’s inherent dilemma do not lend weight to what Milgram believed was of utmost methodological importance. For example, both participants strongly suspected that the learner was not being harmed, yet they still believed it was important to stop the experiment. Contrary to Milgram’s position, then, was it really “critical” that most participants were convinced they were hurting the learner?
  • 27SMP, Box 153, Audiotape #2428.
  • 28As François Rochat concluded, “Both obedient and disobedient subjects had a hard time inflicting pain on their fellow participants. It was obvious they were all looking for a way to get out of the experiments” ([5], p. 380).
  • 29As one participant later admitted: “I was surprised to learn that I did a thing even though I knew it was wrong to do it [italics added]” (SMP, Box 44, Divider: “Problems”, #2321).
  • 30Miller, Collins and Brief have added that concerns about “being ‘impolite’ to a brutal researcher…would seem absurd. However, in the actual context of the situation, these concerns are influential” ([31], p. 9).
  • 31It could be argued, following Mixon, that the least ambiguous OTA variations tended to obtain the lowest completion rates because many participants were cognisant that they would appear to others present as the person most responsible for harming the learner: completing the experiment and later arguing that they were not most responsible was unlikely to sound convincing to others. Conversely, the most ambiguous OTA variations probably tended to obtain the highest completion rates because many participants were cognisant that they would not appear to others present as the person most responsible for harming the learner: completing the experiment and later arguing that they were not most responsible was likely to be believed.
  • 32SMP, Box 153, Audiotape #2430.
  • 33Disobedient participants could and did feel quite differently, as one later said: “I was glad to find that I had the ‘guts’ to refuse to continue” (SMP, Box 44, Divider “14”, #0837).
  • 34As one participant later stated: “It’s left me with a guilty feeling” (SMP, Box 44, Divider “9”, #2013).
  • 35When the decision to increase the shock-intensity was left up to the participants, as it was in the Subject Chooses Shock Level Condition, only 2.5 percent inflicted the highest shock intensity of 450 volts. The vast majority of participants repeatedly chose to inflict low-intensity shocks ([1], pp. 70–72).
  • 36The dilemma Milgram and his research team faced over whether to proceed with the OTA research also had a moral dimension because, for example, without any medical screening, there was a possibility that the highly stressful situation could have triggered a heart attack in a participant. As one participant put it: “Since I became so upset during the experiment, I’m not sure that you were entirely responsible in picking your subjects. Suppose I’d had a heart condition?” (SMP, Box 44, Divider “12”, #2032; see also [9], pp. 116–17; [8], pp. 103–04).
