Welcome to the bloggy home of Noah Brier. I'm the co-founder of Percolate and general internet tinkerer. This site is about media, culture, technology, and randomness. It's been around since 2004 (I'm pretty sure). Feel free to get in touch.

You can subscribe to this site via RSS (the humanity!).

Critiquing the Stanford Prison Experiment (and Research In General)

This critique of Zimbardo’s famous Stanford Prison Experiment is really fascinating. Basically the author, who writes intro-to-psychology textbooks, argues that the experiment was flawed because it urged students to act the way they thought typical guards and prisoners would act. Here’s an excerpt that captures it pretty well:

In a nutshell, here’s the criticism, somewhat simplified.  Twenty-one boys (OK, young men) are asked to play a game of prisoners and guards.  It’s 1971.  There have recently been many news reports about prison riots and the brutality of guards.  So, in this game, what are these young men supposed to do?  Are they supposed to sit around talking pleasantly with one another about sports, girlfriends, movies, and such?  No, of course not.  This is a study of prisoners and guards, so their job clearly is to act like prisoners and guards—or, more accurately, to act out their stereotyped views of what prisoners and guards do.  Surely, Professor Zimbardo, who is right there watching them (as the Prison Superintendent) would be disappointed if, instead, they had just sat around chatting pleasantly and having tea.  Much research has shown that participants in psychological experiments are highly motivated to do what they believe the researchers want them to do.  Any characteristics of an experiment that let research participants guess how the experimenters expect or want them to behave are referred to as demand characteristics. In any valid experiment it is essential to eliminate or at least minimize demand characteristics.  In this experiment, the demands were everywhere.

I find stuff like this really interesting. I think most research is flawed in that it asks people questions they aren’t really prepared to answer and in turn forces them to come up with a conclusion. I thought about this a lot when I made Brand Tags and people were asking me to put up logos that no one had seen before so they could get feedback. I would always argue that the site was measuring brand perception, and if no one knew your brand they would just comment on your logo, which isn’t particularly helpful. Brands, ultimately, are the sum total of all the experiences one has with them, and no one ever experiences a brand by just seeing a logo on a blank page. They hear about it, see it on a shelf next to another product, or pick it up from any number of other contextual cues. Obviously this situation is pretty different, but I think it’s part of a very broad mistake research makes in not controlling for context (or the lack thereof).

November 3, 2013


  • Alan Wolk says:

    This is fascinating. Yet not particularly surprising.

    One of the things that continues to interest me about research is how flawed most of it is, how easy it is to skew results and how infrequently researchers see the flaws in their own methodology.

    This is particularly true when research relies on self-reported responses, e.g. “Are you likely to do X, to buy Y?” Or even worse: “Do you do/buy X frequently/infrequently?” “Do you agree/agree strongly/disagree/disagree strongly?”

    Those responses are all contextual to the survey, to the setting and to the participant: one man’s “agree strongly” is another man’s “agree,” and I suspect the vast majority of people could not define either term to any degree of satisfaction. Frequently/occasionally is even more of a minefield. Exercising 3 times a week may be frequently to some people, occasionally to many others, and that’s just one easy example.

    What’s worse, the constant need for new “content” and the degree to which any new “study” attracts clicks lead to incredibly shoddy research: many times I find myself digging into a study to find that the results are based on the responses of 30 or 40 people, hardly a representative group. The problem arises when people don’t dig into the methodology and the results are passed on as truth, with just a link to the headline and opening paragraph.

    Rant over.
