Thursday, October 24, 2019

Milgram experiment Essay

The infamous â€Å"Milgram experiment† on obedience, done in 1963, is, perhaps, the most commonly known of all psychological experiments. It gained its infamy for its use of subjects who did not know they were being tested. Since the experiment dealt with a highly stressful situation – the necessity to inflict pain at command – upon the publishing of its result, it raised a wave of debate on whether such an experiment is acceptable ethically. Nonetheless, the experiment produced interesting and controversial results – at the very least in the fact that it utterly disproved the claims of most psychologists. The psychologists had argued that only a tiny, sadistic percent of the population would be able to commonly inflict pain on command, simply obeying orders. The experiment clearly showed that 65 percent would obey authority if required, giving an alternative explanation to the phenomenon of concentration camps. Rather than sadism, the experiment shows, most people are simply obedient when the appropriate stress factors are applied. This experiment, naturally, had a variety of interesting consequences, both for psychology and the study of the later social reaction to the experiment itself. The †legend† told to the participants of the experiment was that the scientists were studying the effect of punishment on learning. The subject had to deliver an electric shock when an actor who was playing the role of the learner answered a question incorrectly. Naturally, no real shocks were given. As time passed, the subject was ordered to give progressively â€Å"stronger† punishment shocks. Most of the subjects eventually delivered what they thought were high-intensity, potentially lethal shocks in spite of serious distress on the part of the person who was playing the role of the learner. The result also gave interesting variations: many more people stopped earlier when the main scientist was not present in the room and gave his orders by voice, without the use of facial expression; when two experimenters who gave conflicting orders were present, the subject halted the instant conflicts in authority began; when another â€Å"teacher† was present, and started protesting, most joined into the protest; and, finally, if the test subject was not ordered to inflict the pain, but merely to read the words, 37 out of 40 people assumed an instrumental role, and passively watched the scientist inflict pain (Milgram, 1963). As the Milgram experiment clearly demonstrates, most people will react positively to having authority taken from them. After giving consent, most will protest weakly, passively. The test subject known as Prozi, for instance, voiced his complaints, but at being told firmly that the experiment was a necessity, continued to go on (Milgram, 1963). Despite the fact that, once explained what the experiment was, many people experienced regret, still, quite a few people trusted authority. One of the reasons not commonly noted is the effect of specialization of labor. In American society, where one goes to a specialist for every single bit of work that requires even a small bit of knowledge above the general level, it becomes almost a reflex to trust specialists. This is because most people are largely ignorant of their surroundings, and this feel assertive only within their sphere of competence. When encountering something beyond it, very few people will initially attempt to experiment. Most will try to find â€Å"an expert†, someone who is knowledgeable about a certain phenomenon or circumstance. 
Moral imperatives only truly come into play when a person must make a choice without outside pressure. When pressured by someone who supposedly knows better, few question authority. This is a case of personal morality set against trust in the experimenter's morality: most people assume the best of the experimenter and deduce that the experiment would not be conducted unless it were necessary. It is also telling that when the experimenters were in conflict, the test subjects stopped immediately; this reaction to divided authority confirms the thesis above. Conversely, the stronger the emphasis on necessity and responsibility, qualities culturally enforced as essential for survival within society, the more submissive the subjects became. This suggests that most people's actual moral code differs from the one they profess. As Milgram duly notes, only the illusion of necessity was created: the subjects were not threatened, nor were they explicitly told they would be punished, so the choice was far easier than it would have been under any explicit threat.

The stress factor is the most common reason the experiment is criticized as "inhumane" and even "torture." The psychologist Diana Baumrind, in particular, brought the ethical questions of the study to public attention. She argued that the experiment was emotionally distressing, that it was destructive to the subjects' self-image once they realized the true cruelty of their actions, and that it created a distrust of authority (Baumrind, 1964). None of these three objections is convincing. The experiment was emotionally distressing, true, and yet 84 percent of the subjects later said they were glad to have participated. Indeed, for a great many of them it amounted to an awakening about what they were capable of doing, a reason to reconsider their own behavior. The second point is likewise true only in a limited sense. The experiment was destructive to these people's self-image, but in a positive way: it removed a number of illusions and taught lessons. Such disillusionment is part of how a person learns to cope with recurring challenges to his own self-assessment, and most well-adjusted people should accept it as one more such case, as Milgram's exit survey demonstrates rather clearly. Her third point is that belief in authority would be undermined. Once more, the debriefing only reinforces that belief: despite a situation that initially seemed to undermine authority, once the subject is informed of what has actually happened, he is reassured that the experiment has done no real and lasting harm. In short, the experiment only reinforces the authority of the scientific community and its concern for the good of mankind, a good not pursued at the expense of particular members.

Thus, we can see rather clearly that ethically the experiment was flawless. Still, if flawless ethically, the question arises whether it is equally sound methodologically. Ian Parker, in his article "Obedience," asks whether the test subjects simply saw through the deception. Interviews with participants show that many had suspicions, and some even said they knew from the beginning that the experiment was a fraud.
Parker thus argues that the results are flawed: the whole point of deceiving the test subjects is lost once they understand that the experiment is only a test (Parker, 2000). This assumption, however, is also rather faulty. What Parker seemingly fails to take into account is that the subjects entered a situation of uncertainty. As the interviews show, even when subjects voiced guesses that they were being tested, the actors kept up the pretense. Had their suspicions been confirmed immediately, Parker's argument would have made sense. Instead, the subjects were placed in a situation where it suddenly became irrelevant whether this was an experiment or not: one simply did not know whether it was real or a game. In any case, those who accepted the situation as possibly real were once more faced with the consequences of a dire moral dilemma. I would also surmise that most reasonably prudent people would assume the reality of such an experiment, if only out of fear of the consequences should it somehow turn out to be real. Even outright disbelief does not necessarily dispel the doubt over whether one is part of an experiment or not. Thus, Parker's criticism is also irrelevant to the bulk of the data in question.

Milgram's experiment, then, effectively demonstrates the mechanisms and reasons for obedience. Milgram shows the extent to which the human mind resembles an animal's, how easily it can be conditioned, and how cultural conditioning adds to the basic instinct of obeying someone of higher social status. The experiment is also instructive in showing the degree to which the average person examines his own behavior and learns how he will behave in a particular situation, and how crucial such examination might be to making life-and-death choices. It is not cruel: indeed, it could have been made much harsher by invoking even greater uncertainty and examining the subjects' long-term reactions to their own behavior, yet most of its after-effects were beneficial and caused no significant damage to the participants. Nor is it ineffective: the data gathered could support further analysis of the effect of uncertainty on the psyche. To conclude, this is one of the more interesting, beneficial, and effective experiments in psychology, and it offers an insight into the human mind that should not be ignored or dismissed for false reasons.
