Obedience and Self-Responsibility

Abstract

Stanley Milgram's obedience studies, in which subjects were directed to administer shocks to a learner, have fostered widespread interest in the subject of obedience. The consistently high levels of irrationally obedient behavior evidenced by the subjects directed attention to the contributing factors. One critical psychological trait is always involved: the degree of self-responsibility practiced by the subject. The hypothesis advanced in this review is that the degree of self-responsibility a person practices drastically affects his or her level of obedience to unjust commands. The articles reviewed, which focus on chains of command and task responsibility, attributional perspectives in various role-playing and accountability settings, and personal factors such as dissonance, rationalization, deindividuation, disinhibition, and dehumanization of the victim, are shown to support this hypothesis. The idea of self-responsibility must be clearly and unequivocally defined so that confusion about who is fully responsible for obedient acts of wrongdoing can be resolved. To do this, comprehension of a human being's capacity for self-control, self-judgment, and self-regulation is crucial.

Self-Responsibility and Its Effects on Obedience and Aggression

The obedience experiments conducted by Stanley Milgram in the 1960s and 1970s raised much concern about the average person's mental well-being in regard to being aggressively obedient. Many subjects were unable to do what was right, or even to discern what was right, in the face of a conflict with an authority figure, especially in a presumably trusted and safe experimental environment. Yet the military sector is filled with stories of soldiers following orders from individuals of higher rank. It could be argued that if the soldiers had relied on their own judgment, they most likely would not have proceeded to perpetrate some of the acts demanded of them. In these instances, in the Milgram experiments and others of their ilk, the subjects in question set internal criticism aside and rely instead on the judgment of the authority figure.

For clarification, Milgram’s (1974) experiments involved a procedure whereby a person (the subject, or “teacher”) was required by the experimenter to administer shocks of increasing intensity to a learner in another room—a confederate of the experimenter who was not really receiving shocks—whenever the learner made an error in the learning task. The experiment was entirely voluntary, and both the teacher and the learner were told that they could quit at any time without penalty—even though the experimenter kept telling the invariably reluctant teacher that it was essential to continue the experiment, or offered other admonishments to that effect.

It would appear from these conformity experiments and others similar to them that doing “what is right” is a contextual matter for many people. Major issues of morality enter the picture when a subject and an experimenter cross the boundaries of ethical behavior, which entails force against or injury to another person (the learner), disregarding his cries of protest against further participation.

Milgram’s subjects evidenced such high degrees of obedience in his experimental situations (50-60% proceeded to the very end of the voltage spectrum—marked XXX) that many began to question what factors influenced each subject’s degree of obedience. Four factors were noted as decisive: the emotional closeness of the learner to the teacher, the proximity and legitimacy of the experimenter, the absence or presence of a dissenting observer, and the general reputation and prestige of the institution where the experiment was occurring (Milgram, 1974).

Even though these variables were seen as the main determinants of the degree of the subjects’ obedience in following orders, the element of responsibility is key. Others, as we shall see, have in fact examined specific psychological characteristics of the situation that point to more complex issues concerning degree of obedience and conformity. The hypothesis being presented here is that subjects will obediently follow unjust orders when they do not perceive themselves as being fully responsible for the final outcome of their actions.

Having been told by the experimenter in Milgram’s experiments that the procedure was totally safe and that, although the shocks were painful, they “would do no permanent damage,” the teacher (i.e., subject) was given the opportunity to disregard his judgment, disinhibit himself, and dehumanize the learner. As in any situation where human beings act in less than moral ways, the cause and effect relationship was obscured and then evaded, so that it became very difficult to take the proper actions. As soon as the subject distanced himself in such a way that the recipients of the shocks were no longer seen as real people like him but, rather, as means to other ends (namely, the end of the experiment), malevolent actions came readily to the forefront.

The Chain of Command: Who is Responsible?

In a study done to determine how subjects placed in a mock jury simulation interpreted responsibility in obedience, Hamilton (1978) explored the trial of Lt. Calley. This trial concerned the real military case of a massacre of unarmed civilians at My Lai during the Vietnam War by American soldiers, who had been given orders to do so by Lt. Calley. Not only had the Lieutenant given the orders, but he also participated in the attack. So there was no way for him to avoid being held accountable for some degree of responsibility for the atrocity (although he could, of course, claim that a superior had given him orders, thereby tenuously seeking to absolve himself of wrongdoing). Hamilton (1978), who referred to it as a “crime of obedience,” commented on this situation (which was not unlike Milgram’s situations):

Authorities can certainly be said to have causal responsibility for a subordinate’s acts that they may order. They also have a role responsibility for those acts and indeed a role responsibility for overseeing action that goes beyond what they specifically order. They are both the authors of action and the overseers of actors. Reciprocally, the actors who are their subordinates physically cause deeds that they are ordered to do, and they act intentionally. But they do so in response to a role duty and with the expectation that the authority has the responsibility (in the sense of liability) for any bad outcomes. To do what they are told is both something they must do to stay in role and something they ought to do as a role occupant. (p.128)

Obedience seen in this way, which is common in military situations, really provides a rationalization for the actors to behave irresponsibly, all the while confusing the issue of who is responsible for the act: the one who ordered it, the one who did it, or both? Moreover, it seems to treat human beings (the actors) much like attack dogs. In fact, two different types of responsibility can be deduced from these situations—responsibility of the giver of commands and self-responsibility for one’s own actions. If one follows the former type, then one is relying on the commander’s degree of self-responsibility. In this sense the issue of self-responsibility is inescapable.

But the term obedience connotes conforming to another’s commands in indifference to, or disregard of, one’s personal integrity and values. Therefore, an act of obedience demands an explanation. To say that one should be excused because one is simply acting as an agent of another nullifies the very notion of self-responsibility, which entails self-judgment; in a sense one is forfeiting one’s self-responsibility, which can only mean forfeiting one’s mind and becoming a robot.

Since no human being is a robot—one can only try to be—it appears that commanders may be using the actors as scapegoats to avoid being accused of atrocity, because they were not the ones who actually committed the act. In this respect the same dynamic can be observed in a situation where, for example, I tell someone to go kill another person. Assuming that the person is, first, an adult and, second, biologically healthy (both physically and mentally), it should make no difference—according to the principle of self-responsibility—what I tell the person to do; he or she will simply exercise the proper judgment so as to avoid malevolent behavior. We will return to this issue later.

In the jury simulation conducted by Hamilton (1978), the superior who gave the orders was held to be significantly more responsible than the soldiers for the soldiers’ actions. The subjects cited the fact that he was the key causal factor in the incident. Hamilton suggests that the subjects’ strong sympathy for the subordinates may have resulted from the authoritarian military atmosphere of the case. This again poses questions about the various meanings of responsibility.

In a survey designed to assess how typical citizens understand the meaning of responsibility in hierarchical situations, Hamilton (1986) again set out to decipher the problem of responsibility. In the introduction he states that, historically, law and psychology have viewed the superior who gives the directives as the one most responsible for the actions of the subordinate. The particular role obligations of the superior are evidenced in “The legal principle of respondeat superior, ‘let the superior answer’” (p.120).

Given conditions like this, one may wonder why Milgram and others were so surprised at the results of his experiments. Interestingly, Hamilton remarked that often the greater the obligatory role of the authority, the murkier the issue of causal blame. In the survey, subjects attributed on average the most responsibility to individuals like Lt. Calley, who were neither solely a distant authority nor solely an obedient soldier. The ambiguity of the responsibility issue becomes apparent here; subjects figured that they could not lose in picking the man most directly involved in both ends of the chain of command. Of special note is that, among the 391 Boston-area subjects in the survey, the most educated on average attributed more personal responsibility to the soldiers. In Hamilton’s words, these “…results suggest that education promotes independence from authority…” (p.137).

Paradoxical Features in the Chain of Command Principle

In discussing the attributional links in responsibility with varying chain-of-command models, it is evident that both the actor and the commander or authority figure have differing ideas as to who is actually accountable for any particular action. This has led to studies designed to discover how the roles of responsibility are assumed in situations where acts are to be carried out by a person allegedly acting out the edicts of another. A study—similar in method to the Milgram experiment—was designed to determine exactly how much responsibility subjects would attribute to themselves in administering shocks. When subjects assumed different roles in the “learning” experiment, either transmitter (the one who relays the orders) or executant (the one who gives the shock to the learner), Kilham and Mann (1974) found that the transmitter felt less responsible. They stated:

The transmitter is in a relatively “safe” place psychologically; he can disclaim responsibility for the orders and can argue that he had no part in their execution. The transmitter can argue or rationalize that his highly specialized part in the act was only of a trivial, mechanical nature. (p.697)

Bear in mind that all the so-called learning experiments discussed thus far, and the ones that follow, involve the giving of shocks to a learner by a subject who believes that the shocks are real. Additionally, the sample always consists only of those subjects who choose to remain in the experiment after being told that they are free to leave without penalty if they disagree with the procedure or find it uncomfortable. So, invariably, one may end up with a biased sample (much like soldiers in the military) of people who agree to the practice of following orders, even if the orders are somewhat reprehensible.

However, it appears that virtually no experimenters had any trouble with subjects in this screening process, which evidences the fact that the average person does not, or did not, find such a learning experiment reprehensible. It should also be observed that the extremely rigid and formal manner of interaction in the experiment may create an atmosphere that is not psychologically conducive to defiance by the subjects—after all, they have agreed to participate in an experiment apparently designed and controlled by professionals (Kilham and Mann, 1974). But, of course, when it comes down to an issue of personal integrity and personal values, these observations have only surface plausibility in attempting to justify obedience to a cruel act.

Kilham and Mann (1974) also included a control group that was allowed to choose any level of shock they wanted in order to “teach” the learner; this is in contrast to the typical ascending level of shock voltage for each wrong response by the learner. Statistically, it was found that the control group’s level of obedience was significantly lower than that of each of the four experimental groups. Although the highest level of obedience (i.e., following through with shocks of the highest intensity) by a group of executants was not as high as in some of Milgram’s studies (upwards of 65%), obedience was still quite high (40%). And while the experimental groups sometimes proceeded from moderate to strong to very strong to intense and on to extreme intensity and danger (severe shock levels, despite the learner’s cries of protest), the control group never moved beyond the first stages of moderate shock intensity.

But all groups showed significantly higher levels of obedience in the transmitter position than in the executant position, indicating again that a person telling another person to do something typically feels less responsible for the consequences of the action than if he or she were to take the action alone (without orders).

Attributional Perspectives

A common term in social psychology is the fundamental attribution error (Myers, 1993). It refers to the differing perceptions of the causes of behavior held by the person behaving and by a person observing that behavior. The person acting will generally attribute his or her behavior to the environment and the given set of circumstances. In contrast, the person observing will typically attribute the actor’s behavior to his or her personality or mental characteristics.

In a study done to assess this notion in regard to acts of obedience in a controlled experimental setting, researchers found that as the severity of the effects of the obedient person’s actions escalated—in this case the administration of shocks—the subject would increasingly blame the results on the situation. Accordingly, the observer of this behavior would attribute the effects more to the mindset of the subject, that is, perceiving the subject (actor) as more responsible than the subject would (Harvey, Harris, and Barnes, 1975). The results of the study “…show a general tendency for actors to attribute more responsibility to the experimenter than do observers” (p.25) and “…that in general observers attributed more freedom to actors than actors attributed to themselves” (p.26).

As Harvey et al. (1975) noted, there seems to be a direct relationship between the amount of “perceived freedom” and degree of felt responsibility. In other words, as the subjects involved in the shock experiment saw their actions having increasingly discomfiting consequences, they explained their behavior as being less voluntarily free and more restricted by the conditions of the experiment. Once again, we see how rationalization becomes necessary when a person, evading the issue of self-responsibility, becomes a mindless actor for another whose instructions are—at some psychological level—questionable for the subject.

Dissonance Theory and Attribution

Following from the idea of attribution differences, subjects who put themselves in aversive situations will typically see little freedom of choice in their condition. What commonly drives the rationalization of behavior is the level of dissonance one feels—in this context, the degree to which one experiences a conflict between how one is acting and how one thinks one should be acting. Any degree of dissonance concerns a disparity between these two conditions.

As we have seen in the preceding experiments, when persons engage in conduct that seems less than moral, they “…can try to excuse the behavior by denying responsibility for the consequences. This relieves them of accountability, potential punishment, and guilt” (Riess and Schlenker, 1977, p.22). Additionally, “when aversive consequences follow an action that appears to have a reasonable likelihood of producing such consequences, justifications are needed” (Riess and Schlenker, 1977, p.22). In the study by Riess and Schlenker (1977) that examined these mental processes, the observations held true. Moreover, they found that when observers held subjects accountable, the subjects changed their attitudes in a way that painted the behavior in a different and better light. This further confirms the idea that, when involved in questionable behavior, people will try to find a way either not to appear in the wrong or not to be seen as intentionally causing harm.

Role Playing and the Responsibility of the Task

With regard to avoiding the internalization of self-responsibility, a study was done to see whether subjects would behave differently in various roles in another shock experiment. Kipper and Har-Even (1984) labeled one group of subjects the spontaneous group (who were free to choose the level of shock administered to a confederate learner) and another group the “mimetic-pretend” group (who assumed the role of a teacher through instruction and imagination). The fundamental difference between the groups was the way in which the two roles were stressed. Both had to teach a learner, but the mimetic-pretend group was explicitly told to act like a teacher focusing on the business at hand—supposedly inducing a more task-oriented mood. This mood would presumably lead to a denial of the feeling of personal responsibility, whereas the spontaneous group would remain in a self-oriented mindset.

As one might suspect, the mimetic-pretend group escalated the level of shocks as the test proceeded, while the spontaneous group remained at a moderate level. Furthermore, the mimetic-pretend group attributed responsibility for the shocks to factors outside the self, and the spontaneous group focused more on personal responsibility (Kipper and Har-Even, 1984). The authors noted:

It appears that casting a person in a mimetic-pretend role accelerates disinhibition processes, at least as far as the expression of aggressive behavior is concerned, and possibly also with regard to other types of conflicts, principally those that involve guilt feelings. (p.940)

The focus was not particularly on how much conformity the experimenter could obtain from the subjects (as with Milgram) but, rather, on the kind of behavior exhibited in two different roles. The anonymous nature of the mimetic-pretend role led to increases in levels of aggression, thereby confirming once again that distancing oneself psychologically from an action aids in the denial of self-responsibility.

However, the playing of roles can obscure the normal attributions of responsibility. For example, Kipper and Har-Even (1984) found that the mimetic-pretend subjects took responsibility for their behavior, but only in the context of the role. One wonders how much accountability they would have assumed if the learners had faked being seriously injured, as in the Milgram experiments.

General Factors Influencing Humans to Become Inhumane

When people fail to take full responsibility for their actions, typically two things take place, one internal and one external. Internally, a person becomes disinhibited. This occurs when the thoughts and feelings about appropriate behavior that normally inhibit a person from committing an act of wrongdoing are neglected or overridden. Circumstantial factors can all play their part in the disinhibition process: an exalted cause or “noble” goal that treats some individuals as the means to the prescribed ends of others, or that holds the collective good above the individual good, trampling over countless people in the process, or—as we have been discussing—the welfare of an experiment that is deemed more important than the welfare of the participants. Of course, these may merely provide fuel to the fire of resentful, hostile, vengeful, or simply anxious emotions that may propel a person to abdicate reason.

Externally, in order for the person to perform acts of cruelty, he or she must dehumanize the victim—that is, see the victim as no longer possessing any truly redeeming moral qualities. These two processes have been evidenced time and again throughout the centuries by people both ordinary and monstrous. Bandura, Underwood, and Fromson (1975) stated:

Inflicting harm upon individuals who are regarded as subhuman or debased is less apt to arouse self-reproof than if they are seen as human beings with dignifying qualities. The reason for this is that people who are reduced to base creatures are likely to be viewed as insensitive to maltreatment and influenceable only through the more primitive methods. (p.255)

Bandura et al. (1975) conducted an experiment to discover the outcomes when subjects who were recruited as teachers to administer shocks to an individual learner (the intensity of which was their choice) were placed under various psychological conditions. Subjects were put either in a position of high individual responsibility for the shocks they administered or in a position of diffused responsibility (where they would remain practically anonymous). In addition, the learners were portrayed to different subjects “…in either humanized, neutral, or dehumanized terms” (p.256).

As one might presume, subjects whose shocks were mostly anonymous gave higher-intensity shocks on average, especially when the learner had been dehumanized. But when the learner was made to appear high in moral value, both the individual and diffused responsibility groups—although statistically different—felt that high-level shocks were less justified (Bandura et al., 1975). In turn the authors stated, “when circumstances of personal responsibility and humanization made it difficult to avoid self-censure for injurious conduct, subjects disavowed the use of punitive measures and used predominantly weak shocks” (p.268).

Aspects of Deindividuation and Personal Accountability

In our discussion of self-responsibility, I have stressed the importance of internal mechanisms—that is, mechanisms within the individual—that curtail inappropriate or immoral behavior that a person may contemplate on occasion. This signifies the self-control aspect of free will, the distinctive trait of human beings. Paying attention to these internal mechanisms may prevent a person from becoming deindividuated, that is, from no longer seeing oneself as a responsible individual with personal standards of moral behavior through which self-responsibility operates.

However, when some speak of responsibility, they often mean it in terms of social constraints or public influences that inhibit people from doing harm. In this sense of the term, a person is held accountable not only by his or her own belief system and views of personal integrity, but also by the punitive measures of others (accountability cues). These deindividuation cues and accountability cues can be seen, respectively, as the “private and public components” affecting impulses to aggress and the level of obedience (Prentice-Dunn and Rogers, 1982, p.509).

In a study (another shock test) relating these factors to the level of aggression against a learner in an experiment, Prentice-Dunn and Rogers (1982) found that “compared to subjects in the high accountability-cues conditions, subjects receiving low accountability cues displayed more aggression. In addition, the external attentional-cues condition [causing more deindividuation] produced more aggression than did internal attentional-cues condition” (p.508). Furthermore, they stated that “…the available data strongly suggest that subjective deindividuation mediated the expression of aggression” (p.512).

When subjective deindividuation is understood as a lessening of self-responsibility, we can readily see how powerful a factor it is in any action a person takes. Obedience and conformity involve lowering one’s state of inner awareness and acting blindly in accordance with the demands of others or with external cues.

A Question about Self-Responsibility Theory

One may question whether, in every instance, people will attribute responsibility to someone or something else when their acts are deemed wrong by themselves or others. As we have just noted, when the victim is dehumanized, more responsibility may be accepted personally, because the subject believes the act was in some way warranted.

A study conducted almost identically to Milgram’s did question the theory of responsibility attribution. In this study the experimenters also added both a group of subjects who were free to choose the level of shock voltage and a group of subjects who witnessed a preceding test in which the teacher defied the authority figure and refused to go on. Mantell and Panzarella (1976) concluded the following:

A monolithic view of the obedient person as a purely passive agent who invariably relinquishes personal responsibility is a false view. There are people who obey and continue to hold themselves responsible as well as people who obey and relinquish responsibility. Similarly, among people who initially obey but then defy, there are those who accept full responsibility and those who accept none at all for the actions they performed prior to their defiance. (p.242)

The authors did note that the attribution of personal responsibility was related to decision-making capability; when subjects could choose the shocks themselves, they felt more responsible. Yet how might we explain these results? From the description of the study’s method, it appears that the subjects were asked about attributions of responsibility after they had been de-hoaxed and comforted by the fact that the learner had not really been shocked almost to death.

It would be hard to believe that subjects would take personal responsibility for following orders to shock an innocent person beyond the point of screams of protest to the point of silence. If this were in fact true, it would be equivalent either to admitting that one is inherently evil (which a sane consciousness could not maintain for long) or to asking that justice be enacted on oneself for abdicating proper judgment. (If it were the latter—which means evidencing a high degree of self-responsibility—then the person most likely would not have followed the depraved edicts of the experimenter in the first place.)

Concluding Remarks

In attempting to understand all the factors that contribute to obedience on the part of a naive subject, we cannot forget that all the experimental settings had the enticing aura of reputability. Subjects entered into a task assuming or believing that it must have been well thought-out and proven safe and reasonable. After all, do not psychological experimenters at sanctioned institutions abide by strict codes of ethics? These were probably some of the thoughts running through the minds of the subjects as they began performing their tasks.

Those subjects who felt the necessity to maintain a high degree of self-responsibility—no matter what the consequences—had to make what Nissani (1990) has called a “conceptual shift” (p.1385). The shift involves actually being cognizant of the actions that are requested (or demanded); that is, the subject must mentally shift into accepting the fact that the experimenter has apparently become a “malevolent” figure and that the institution should be discredited, at least if the shocks are truly real. No fully self-responsible person would be reckless enough to call the experimenter’s bluff by speculating that the shocks are not real.

The internal mechanism of self-responsibility relies on the conviction that one is both the voluntary creator and the inhibitor of one’s own actions and, concomitantly, that one takes responsibility for them—especially when others are harmed in any way. Unfortunately, people have always lived in an age in which obedience to an authority was demanded in some form or fashion. If not at the political level, then at least at the familial level, people (especially children) are frequently asked to defy their better judgment or to negate the actions appropriate to their being.

Of course, there may be endless “reasons” why obedience is required, but until the notion of self-responsibility is completely embraced by theorists and laymen alike, there will always be those who give irrational orders, others who irrationally follow them—and still others who are astonished by the results.

References

Bandura, A., Underwood, B. and Fromson, M. E. (1975). Disinhibition of Aggression through Diffusion of Responsibility and Dehumanization of Victims. Journal of Research in Personality, 9, 253-269.

Hamilton, V. L. (1986). Chains of Command: Responsibility Attribution in Hierarchies. Journal of Applied Social Psychology, 16, 118-138.

Hamilton, V. L. (1978). Obedience and Responsibility: A Jury Simulation. Journal of Personality and Social Psychology, 36, 126-146.

Harvey, J. H., Harris, B. and Barnes, R. D. (1975). Actor-Observer Differences in the Perceptions of Responsibility and Freedom. Journal of Personality and Social Psychology, 32, 22-28.

Kilham, W. and Mann, L. (1974). Level of Destructive Obedience as a Function of Transmitter and Executant Roles in the Milgram Obedience Paradigm. Journal of Personality and Social Psychology, 29, 696-702.

Kipper, D. A. and Har-Even, D. (1984). Role-Playing Techniques: The Differential Effect of Behavior Simulation Interventions on the Readiness to Inflict Pain. Journal of Clinical Psychology, 40, 936-941.

Mantell, D. M. and Panzarella, R. (1976). Obedience and Responsibility. British Journal of Social and Clinical Psychology, 15, 239-245.

Milgram, S. (1974). Obedience to Authority. New York: Harper and Row.

Myers, D. G. (1993). Social Psychology (4th ed.). New York: McGraw-Hill, Inc.

Nissani, M. (1990). Comment: A Cognitive Reinterpretation of Stanley Milgram’s Observations on Obedience to Authority. American Psychologist, 45, 1384-1385.

Prentice-Dunn, S. and Rogers, R. W. (1982). Effects of Public and Private Self-Awareness on Deindividuation and Aggression. Journal of Personality and Social Psychology, 43, 503-513.

Riess, M. and Schlenker, B. R. (1977). Attitude Change and Responsibility Avoidance as Modes of Dilemma Resolution in Forced-Compliance Situations. Journal of Personality and Social Psychology, 35, 21-30.